# Source: Atomahawk/flagging-suspicious-blockchain-transactions | lab_notebooks/spark-play.ipynb | mit
# from pyspark import SparkContext
from pyspark.mllib.clustering import KMeans, KMeansModel
# http://spark.apache.org/docs/2.0.0/api/python/pyspark.mllib.html#pyspark.mllib.classification.NaiveBayesModel
from pyspark.mllib.classification import NaiveBayesModel
# http://spark.apache.org/docs/2.0.0/api/python/pyspark.mllib.html#pyspark.mllib.evaluation.RankingMetrics
from pyspark.mllib.evaluation import BinaryClassificationMetrics, MulticlassMetrics, RankingMetrics
import numpy as np
import pandas as pd
from random import randrange
from math import sqrt
!ls -l
sc
# sc = SparkContext("local", "Simple App")
# sc.stop(sc)
# sc.getOrCreate("local", "Simple App")
"""
Explanation: K-means in PySpark
The next machine learning method I'd like to introduce is K-means, a clustering method. It is an unsupervised learning method in which we would like to group the observations into K groups (or subsets). We call it "unsupervised" since we don't have associated response measurements to help check and evaluate the model we built (of course, we can use other measures to evaluate clustering models).
K-means may be the simplest approach to clustering, while also being an elegant and efficient method. To produce the clusters, the K-means method requires only the number of clusters K as its input.
The idea of K-means clustering is that a good clustering is one with the smallest possible within-cluster variation (a measure of how different the observations within a cluster are from each other). To achieve this, the K-means algorithm is designed in a "greedy" fashion.
K-means Algorithm
1. For each observation, assign it a random cluster number from 1 to *K*.
2. For each of the *K* clusters, compute the cluster center: the *k*th cluster's center is the component-wise mean of the vectors of all observations belonging to the *k*th cluster.
3. Re-assign each observation to the cluster whose center is closest to it.
4. Check whether the new assignments are the same as in the last iteration. If not, go to step 2; if yes, stop.
An example of iteration with K-means algorithm is presented below
Now it's time to implement K-means with PySpark. I generate a dataset myself: it contains 30 observations, purposely constructed to fall into 3 groups.
troubleshooting:
- https://stackoverflow.com/questions/23280629/multiple-sparkcontexts-error-in-tutorial
- check this tutorial he based it off of: http://blog.insightdatalabs.com/jupyter-on-apache-spark-step-by-step/
Dependencies
End of explanation
"""
# Generate the observations -----------------------------------------------------
n_in_each_group = 10   # how many observations in each group
n_of_feature = 5       # how many features we have for each observation

observation_group_1 = []
for i in range(n_in_each_group * n_of_feature):
    observation_group_1.append(randrange(5, 8))

observation_group_2 = []
for i in range(n_in_each_group * n_of_feature):
    observation_group_2.append(randrange(55, 58))

observation_group_3 = []
for i in range(n_in_each_group * n_of_feature):
    observation_group_3.append(randrange(105, 108))

data = np.array([observation_group_1, observation_group_2, observation_group_3]).reshape(n_in_each_group * 3, 5)
data = sc.parallelize(data)
"""
Explanation: KMeans Fake Data
End of explanation
"""
# Build the K-Means model
# initializationMode can also be "k-means||" (a parallel variant of k-means++)
clusters = KMeans.train(data, 3, maxIterations=10, initializationMode="random")
# Collect the clustering result
result=data.map(lambda point: clusters.predict(point)).collect()
print(result)
# Evaluate clustering by computing Within Set Sum of Squared Errors
def error(point):
    center = clusters.centers[clusters.predict(point)]
    return sqrt(sum([x**2 for x in (point - center)]))
WSSSE = data.map(lambda point: error(point)).reduce(lambda x, y: x + y)
print("Within Set Sum of Squared Error = " + str(WSSSE))
"""
Explanation: Run the K-Means algorithm -----------------------------------------------------
End of explanation
"""
# Create a Vertex DataFrame with unique ID column "id"
v = sqlContext.createDataFrame([
("a", "Alice", 34),
("b", "Bob", 36),
("c", "Charlie", 30),
], ["id", "name", "age"])
# Create an Edge DataFrame with "src" and "dst" columns
e = sqlContext.createDataFrame([
("a", "b", "friend"),
("b", "c", "follow"),
("c", "b", "follow"),
], ["src", "dst", "relationship"])
# Create a GraphFrame
from graphframes import *
g = GraphFrame(v, e)
# Query: Get in-degree of each vertex.
g.inDegrees.show()
# Query: Count the number of "follow" connections in the graph.
g.edges.filter("relationship = 'follow'").count()
# Run PageRank algorithm, and show results.
results = g.pageRank(resetProbability=0.01, maxIter=20)
results.vertices.select("id", "pagerank").show()
"""
Explanation: GraphX in PySpark -----------------------------------------------------
https://stackoverflow.com/questions/23302270/how-do-i-run-graphx-with-python-pyspark
GraphFrames is the way to go
Work flow: http://aosc.umd.edu/~ide/data/teaching/amsc663/14fall/amsc663_14proposalpresentation_stefan_poikonen.pdf
http://www.datareply.co.uk/blog/2016/9/20/running-graph-analytics-with-spark-graphframes-a-simple-example
http://graphframes.github.io/quick-start.html
http://graphframes.github.io/user-guide.html
http://go.databricks.com/hubfs/notebooks/3-GraphFrames-User-Guide-python.html
Work Flow
Data & features
First set up Spark: https://github.com/PiercingDan/spark-Jupyter-AWS
Set up pyspark
Set up anaconda
Set up graphframes
End of explanation
"""
from pyspark.mllib.classification import NaiveBayes  # trainer (the model class was imported above)
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.linalg import SparseVector
from numpy import array

data = [LabeledPoint(0.0, [0.0, 0.0]),
        LabeledPoint(0.0, [0.0, 1.0]),
        LabeledPoint(1.0, [1.0, 0.0])]
model = NaiveBayes.train(sc.parallelize(data))
model.predict(array([0.0, 1.0]))
model.predict(array([1.0, 0.0]))
model.predict(sc.parallelize([[1.0, 0.0]])).collect()
sparse_data = [LabeledPoint(0.0, SparseVector(2, {1: 0.0})),
LabeledPoint(0.0, SparseVector(2, {1: 1.0})),
LabeledPoint(1.0, SparseVector(2, {0: 1.0}))]
model = NaiveBayes.train(sc.parallelize(sparse_data))
model.predict(SparseVector(2, {1: 1.0}))
model.predict(SparseVector(2, {0: 1.0}))
import os, tempfile
path = tempfile.mkdtemp()
model.save(sc, path)
sameModel = NaiveBayesModel.load(sc, path)
sameModel.predict(SparseVector(2, {0: 1.0})) == model.predict(SparseVector(2, {0: 1.0}))
# True
from shutil import rmtree
try:
    rmtree(path)
except OSError:
    pass
"""
Explanation: Initial Classification - tip from guo
End of explanation
"""
# Source: karlstroetmann/Artificial-Intelligence | Python/1 Search/A-Star-Search-Slim.ipynb | gpl-2.0
import heapq
"""
Explanation: Improved A$^*$ Search
The module heapq provides
priority queues
that are implemented as heaps.
Technically, these heaps are just lists. In order to use them as priority queues, the entries of these lists will be pairs of the form $(p, o)$, where $p$ is the priority of the object $o$. Usually, the priorities are numbers
and, counter-intuitively, high priorities correspond to <b>small</b> numbers, that is, $(p_1, o_1)$ has a higher priority than $(p_2, o_2)$ iff $p_1 < p_2$.
We need only two functions from the module heapq:
- Given a heap $H$, the function $\texttt{heapq.heappop}(H)$ removes the pair
from H that has the highest priority. This pair is also returned.
- Given a heap $H$, the function $\texttt{heapq.heappush}\bigl(H, (p, o)\bigr)$
pushes the pair $(p, o)$ onto the heap $H$. This method does not return a
value. Instead, the heap $H$ is changed in place.
End of explanation
"""
def search(start, goal, next_states, heuristic):
    Visited = set()
    PrioQueue = [ (heuristic(start, goal), [start]) ]
    while PrioQueue:
        _, Path = heapq.heappop(PrioQueue)
        state = Path[-1]
        if state in Visited:
            continue
        if state == goal:
            return Path
        for ns in next_states(state):
            if ns not in Visited:
                prio = heuristic(ns, goal) + len(Path)
                heapq.heappush(PrioQueue, (prio, Path + [ns]))
        Visited.add(state)
%run Sliding-Puzzle.ipynb
%load_ext memory_profiler
"""
Explanation: The function search takes four arguments to solve a search problem:
- start is the start state of the search problem,
- goal is the goal state, and
- next_states is a function with signature $\texttt{next_states}:Q \rightarrow 2^Q$, where $Q$ is the set of states.
For every state $s \in Q$, $\texttt{next_states}(s)$ is the set of states that can be reached from $s$ in one step.
- heuristic is a function that takes two states as arguments. It returns an estimate of the
length of the shortest path between these states.
If successful, search returns a path from start to goal that is a solution of the search problem
$$ \langle Q, \texttt{next_states}, \texttt{start}, \texttt{goal} \rangle. $$
The variable PrioQueue that is used in the implementation contains pairs of the form
$$ \bigl(\texttt{len}(\texttt{Path}) + \texttt{heuristic}(\texttt{state},\; \texttt{goal}), \texttt{Path}\bigr), $$
where Path is a path from start to state and $\texttt{heuristic}(\texttt{state}, \texttt{goal})$
is an estimate of the distance from state to goal. The idea is to always extend the most promising Path, i.e. to extend the Path whose completed version would be shortest.
End of explanation
"""
%%time
%memit Path = search(start, goal, next_states, manhattan)
print(len(Path)-1)
animation(Path)
"""
Explanation: Solving the 8-puzzle can be done in 240 ms and uses 5 megabytes.
End of explanation
"""
%%time
Path = search(start2, goal2, next_states, manhattan)
print(len(Path)-1)
animation(Path)
"""
Explanation: Solving the 15-puzzle can be done in less than 2 seconds and uses 28 megabytes.
End of explanation
"""
# Source: mohanprasath/Course-Work | coursera/python_for_data_science/2.1_Tuples.ipynb | gpl-3.0
tuple1 = ("disco", 10, 1.2)
tuple1
"""
Explanation: <a href="http://cocl.us/topNotebooksPython101Coursera"><img src = "https://ibm.box.com/shared/static/yfe6h4az47ktg2mm9h05wby2n7e8kei3.png" width = 750, align = "center"></a>
<a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 300, align = "center"></a>
<h1 align=center><font size = 5>TUPLES IN PYTHON</font></h1>
<a id="ref0"></a>
<center><h2>About the Dataset</h2></center>
Table of Contents
<div class="alert alert-block alert-info" style="margin-top: 20px">
<li><a href="#ref0">About the Dataset</a></li>
<li><a href="#ref1">Tuples</a></li>
<li><a href="#ref2">Quiz on Tuples</a></li>
<p></p>
Estimated Time Needed: <strong>15 min</strong>
</div>
<hr>
Imagine you received album recommendations from your friends and compiled all of the recommendations into a table, with specific information about each album.
The table has one row for each album and several columns:
artist - Name of the artist
album - Name of the album
released_year - Year the album was released
length_min_sec - Length of the album (hours,minutes,seconds)
genre - Genre of the album
music_recording_sales_millions - Music recording sales (millions in USD) on SONG://DATABASE
claimed_sales_millions - Album's claimed sales (millions in USD) on SONG://DATABASE
date_released - Date on which the album was released
soundtrack - Indicates if the album is the movie soundtrack (Y) or (N)
rating_of_friends - Indicates the rating from your friends from 1 to 10
<br>
<br>
The dataset can be seen below:
<font size="1">
<table font-size:xx-small style="width:25%">
<tr>
<th>Artist</th>
<th>Album</th>
<th>Released</th>
<th>Length</th>
<th>Genre</th>
<th>Music recording sales (millions)</th>
<th>Claimed sales (millions)</th>
<th>Released</th>
<th>Soundtrack</th>
<th>Rating (friends)</th>
</tr>
<tr>
<td>Michael Jackson</td>
<td>Thriller</td>
<td>1982</td>
<td>00:42:19</td>
<td>Pop, rock, R&B</td>
<td>46</td>
<td>65</td>
<td>30-Nov-82</td>
<td></td>
<td>10.0</td>
</tr>
<tr>
<td>AC/DC</td>
<td>Back in Black</td>
<td>1980</td>
<td>00:42:11</td>
<td>Hard rock</td>
<td>26.1</td>
<td>50</td>
<td>25-Jul-80</td>
<td></td>
<td>8.5</td>
</tr>
<tr>
<td>Pink Floyd</td>
<td>The Dark Side of the Moon</td>
<td>1973</td>
<td>00:42:49</td>
<td>Progressive rock</td>
<td>24.2</td>
<td>45</td>
<td>01-Mar-73</td>
<td></td>
<td>9.5</td>
</tr>
<tr>
<td>Whitney Houston</td>
<td>The Bodyguard</td>
<td>1992</td>
<td>00:57:44</td>
<td>Soundtrack/R&B, soul, pop</td>
<td>26.1</td>
<td>50</td>
<td>25-Jul-80</td>
<td>Y</td>
<td>7.0</td>
</tr>
<tr>
<td>Meat Loaf</td>
<td>Bat Out of Hell</td>
<td>1977</td>
<td>00:46:33</td>
<td>Hard rock, progressive rock</td>
<td>20.6</td>
<td>43</td>
<td>21-Oct-77</td>
<td></td>
<td>7.0</td>
</tr>
<tr>
<td>Eagles</td>
<td>Their Greatest Hits (1971-1975)</td>
<td>1976</td>
<td>00:43:08</td>
<td>Rock, soft rock, folk rock</td>
<td>32.2</td>
<td>42</td>
<td>17-Feb-76</td>
<td></td>
<td>9.5</td>
</tr>
<tr>
<td>Bee Gees</td>
<td>Saturday Night Fever</td>
<td>1977</td>
<td>1:15:54</td>
<td>Disco</td>
<td>20.6</td>
<td>40</td>
<td>15-Nov-77</td>
<td>Y</td>
<td>9.0</td>
</tr>
<tr>
<td>Fleetwood Mac</td>
<td>Rumours</td>
<td>1977</td>
<td>00:40:01</td>
<td>Soft rock</td>
<td>27.9</td>
<td>40</td>
<td>04-Feb-77</td>
<td></td>
<td>9.5</td>
</tr>
</table>
</font>
<hr>
<a id="ref1"></a>
<center><h2>Tuples</h2></center>
In Python, there are different data types: string, integer and float. These data types can all be contained in a tuple as follows:
<img src = "https://ibm.box.com/shared/static/t2jw5ia78ulp8twr71j6q7055hykz10c.png" width = 750, align = "center"></a>
End of explanation
"""
type(tuple1)
"""
Explanation: The type of variable is a tuple.
End of explanation
"""
print( tuple1[0])
print( tuple1[1])
print( tuple1[2])
"""
Explanation: Each element of a tuple can be accessed via an index. The following table represents the relationship between the index and the items in the tuple. Each element can be obtained by the name of the tuple followed by a square bracket with the index number:
<img src = "https://ibm.box.com/shared/static/83kpang0opwen5e5gbwck6ktqw7btwoe.gif" width = 750, align = "center"></a>
We can print out each value in the tuple:
End of explanation
"""
print( type(tuple1[0]))
print( type(tuple1[1]))
print( type(tuple1[2]))
"""
Explanation: We can print out the type of each value in the tuple:
End of explanation
"""
tuple1[-1]
"""
Explanation: We can also use negative indexing. We use the same table above with corresponding negative values:
<img src = "https://ibm.box.com/shared/static/uwlfzo367bekwg0p5s5odxlz7vhpojyj.png" width = 750, align = "center"></a>
We can obtain the last element as follows (this time we will not use the print statement to display the values):
End of explanation
"""
tuple1[-2]
tuple1[-3]
"""
Explanation: We can display the next two elements as follows:
End of explanation
"""
tuple2=tuple1+("hard rock", 10)
tuple2
"""
Explanation: We can concatenate or combine tuples by using the + sign:
End of explanation
"""
tuple2[0:3]
"""
Explanation: We can slice tuples obtaining multiple values as demonstrated by the figure below:
<img src = "https://ibm.box.com/shared/static/s9nofy728bcnsgnx3vh159bu16w7frnc.gif" width = 750, align = "center"></a>
We can slice tuples, obtaining new tuples with the corresponding elements:
End of explanation
"""
tuple2[3:5]
"""
Explanation: We can obtain the last two elements of the tuple:
End of explanation
"""
len(tuple2)
"""
Explanation: We can obtain the length of a tuple using the len function:
End of explanation
"""
Ratings =(0,9,6,5,10,8,9,6,2)
"""
Explanation: This figure shows the number of elements:
<img src = "https://ibm.box.com/shared/static/apxe8l3w42f597yjhizg305merlm4ijf.png" width = 750, align = "center"></a>
Consider the following tuple:
End of explanation
"""
Ratings1=Ratings
Ratings
"""
Explanation: We can assign the tuple to a 2nd variable:
End of explanation
"""
RatingsSorted=sorted(Ratings )
RatingsSorted
"""
Explanation: We can sort the values in a tuple with the sorted function, which returns a new sorted list:
End of explanation
"""
NestedT =(1, 2, ("pop", "rock") ,(3,4),("disco",(1,2)))
"""
Explanation: A tuple can contain another tuple as well as other more complex data types. This process is called 'nesting'. Consider the following tuple with several elements:
End of explanation
"""
print("Element 0 of Tuple: ", NestedT[0])
print("Element 1 of Tuple: ", NestedT[1])
print("Element 2 of Tuple: ", NestedT[2])
print("Element 3 of Tuple: ", NestedT[3])
print("Element 4 of Tuple: ", NestedT[4])
"""
Explanation: Each element in the tuple including other tuples can be obtained via an index as shown in the figure:
<img src = "https://ibm.box.com/shared/static/estqe2bczv5weocc4ag4mx9dtqy952fp.png" width = 750, align = "center"></a>
End of explanation
"""
print("Element 2,0 of Tuple: ", NestedT[2][0])
print("Element 2,1 of Tuple: ", NestedT[2][1])
print("Element 3,0 of Tuple: ", NestedT[3][0])
print("Element 3,1 of Tuple: ", NestedT[3][1])
print("Element 4,0 of Tuple: ", NestedT[4][0])
print("Element 4,1 of Tuple: ", NestedT[4][1])
"""
Explanation: We can use the second index to access other tuples as demonstrated in the figure:
<img src = "https://ibm.box.com/shared/static/j1orgjuasaaj3d0feymedrnoqv8trqyo.png" width = 750, align = "center"></a>
We can access the nested tuples:
End of explanation
"""
NestedT[2][1][0]
NestedT[2][1][1]
"""
Explanation: We can access strings in the second nested tuples using a third index:
End of explanation
"""
NestedT[4][1][0]
NestedT[4][1][1]
"""
Explanation: We can use a tree to visualise the process. Each new index corresponds to a deeper level in the tree:
<img src ='https://ibm.box.com/shared/static/vjvsygpzpwcr6czsucgno1wukyhk5vxq.gif' width = 750, align = "center"></a>
Similarly, we can access elements nested deeper in the tree with a fourth index:
End of explanation
"""
genres_tuple = ("pop", "rock", "soul", "hard rock", "soft rock", \
"R&B", "progressive rock", "disco")
genres_tuple
"""
Explanation: The following figure shows the relationship of the tree and the element NestedT[4][1][1]:
<img src ='https://ibm.box.com/shared/static/9y5s7515zwzc9v6i4f67yj3np2fv9evs.gif'width = 750, align = "center"></a>
<a id="ref2"></a>
<h2 align=center> Quiz on Tuples </h2>
Consider the following tuple:
End of explanation
"""
len(genres_tuple)
"""
Explanation: Find the length of the tuple, "genres_tuple":
End of explanation
"""
genres_tuple[3]
"""
Explanation: <div align="right">
<a href="#String1" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="String1" class="collapse">
"len(genres_tuple)"
<a ><img src = "https://ibm.box.com/shared/static/n4969qbta8hhsycs2dc4n8jqbf062wdw.png" width = 1100, align = "center"></a>
```
```
</div>
Access the element, with respect to index 3:
End of explanation
"""
genres_tuple[3:6]
"""
Explanation: <div align="right">
<a href="#2" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="2" class="collapse">
<a ><img src = "https://ibm.box.com/shared/static/s6r8v2uy6wifmaqv53w6adabqci47zme.png" width = 1100, align = "center"></a>
</div>
Use slicing to obtain indexes 3, 4 and 5:
End of explanation
"""
genres_tuple[:2]
"""
Explanation: <div align="right">
<a href="#3" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="3" class="collapse">
<a ><img src = "https://ibm.box.com/shared/static/nqo84vydw6eixdex0trybuvactcw7ffi.png" width = 1100, align = "center"></a>
</div>
Find the first two elements of the tuple "genres_tuple":
End of explanation
"""
genres_tuple.index("disco")
"""
Explanation: <div align="right">
<a href="#q5" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="q5" class="collapse">
```
genres_tuple[0:2]
```
#### Find the first index of 'disco':
End of explanation
"""
C_tuple = (-5, 1, -3)
sorted(C_tuple)
"""
Explanation: <div align="right">
<a href="#q6" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="q6" class="collapse">
```
genres_tuple.index("disco")
```
<hr>
#### Generate a sorted List from the Tuple C_tuple=(-5,1,-3):
End of explanation
"""
# Source: NYUDataBootcamp/Projects | UG_S17/Booth_Praveen_Final_Project.ipynb | mit
import pandas as pd              # data package
import matplotlib.pyplot as plt # graphics
import datetime as dt # date tools, used to note current date
import requests
from bs4 import BeautifulSoup
import urllib.request
from matplotlib.offsetbox import OffsetImage
%matplotlib inline
#per game statistics for MVP candidates
url = 'http://www.basketball-reference.com/play-index/pcm_finder.fcgi?request=1&sum=0&player_id1_hint=James+Harden&player_id1_select=James+Harden&player_id1=hardeja01&y1=2017&player_id2_hint=LeBron+James&player_id2_select=LeBron+James&y2=2017&player_id2=jamesle01&player_id3_hint=Kawhi+Leonard&player_id3_select=Kawhi+Leonard&y3=2017&player_id3=leonaka01&player_id4_hint=Russell+Westbrook&player_id4_select=Russell+Westbrook&y4=2017&player_id4=westbru01'
cl = requests.get(url)
soup = BeautifulSoup(cl.content, 'html.parser')
column_headers = [th.getText() for th in
soup.findAll('tr')[0].findAll('th')]
data_rows = soup.findAll('tr')[1:]
player_data = [[td.getText() for td in data_rows[i].findAll('td')]
for i in range(len(data_rows))]
df = pd.DataFrame(player_data, columns=column_headers[1:])
df = df.set_index('Player')
df = df.sort_index(ascending = True)
#getting advanced statistics for MVP candidates
url1 = 'http://www.basketball-reference.com/play-index/psl_finder.cgi?request=1&match=single&per_minute_base=36&per_poss_base=100&type=advanced&season_start=1&season_end=-1&lg_id=NBA&age_min=0&age_max=99&is_playoffs=N&height_min=0&height_max=99&year_min=2017&year_max=2017&birth_country_is=Y&as_comp=gt&pos_is_g=Y&pos_is_gf=Y&pos_is_f=Y&pos_is_fg=Y&pos_is_fc=Y&pos_is_c=Y&pos_is_cf=Y&force%3Apos_is=1&c6mult=1.0&order_by=ws'
bl = requests.get(url1)
soup1 = BeautifulSoup(bl.content, 'html.parser')
column_headers_adv = [th.getText() for th in
soup1.findAll('tr')[1].findAll('th')]
data_rows_adv = soup1.findAll('tr')[2:8]
player_data_adv = [[td.getText() for td in data_rows_adv[i].findAll('td')]
for i in range(len(data_rows_adv))]
df_adv = pd.DataFrame(player_data_adv, columns=column_headers_adv[1:])
df_adv = df_adv.set_index('Player')
#drop other players from list
df_adv = df_adv.drop(['Rudy Gobert', 'Jimmy Butler'])
#sort players alphabetically
df_adv = df_adv.sort_index(ascending = True)
#drop duplicate and unnecessary columns
df_adv = df_adv.drop(['Season', 'Age', 'Tm', 'Lg', 'G', 'GS', 'MP'], axis=1)
# combined table of per-game and advanced statistics
MVP = pd.concat([df, df_adv], axis=1)
MVP
#convert to proper dtypes
MVP = MVP.apply(pd.to_numeric, errors='ignore')
#get per game statistics for MVP winners since 1980
url2 = 'http://www.basketball-reference.com/play-index/psl_finder.cgi?request=1&match=single&type=per_game&per_minute_base=36&per_poss_base=100&season_start=1&season_end=-1&lg_id=NBA&age_min=0&age_max=99&is_playoffs=N&height_min=0&height_max=99&year_min=1981&year_max=2017&birth_country_is=Y&as_comp=gt&pos_is_g=Y&pos_is_gf=Y&pos_is_f=Y&pos_is_fg=Y&pos_is_fc=Y&pos_is_c=Y&pos_is_cf=Y&force%3Apos_is=1&award=mvp&c6mult=1.0&order_by=season'
al = requests.get(url2)
soup2 = BeautifulSoup(al.content, 'html.parser')
column_headers_past = [th.getText() for th in
soup2.findAll('tr')[1].findAll('th')]
data_rows_past = soup2.findAll('tr')[2:]
player_data_past = [[td.getText() for td in data_rows_past[i].findAll('td')]
for i in range(len(data_rows_past))]
df_past = pd.DataFrame(player_data_past, columns=column_headers_past[1:])
df_past = df_past.set_index('Player')
df_past = df_past.drop(['Tm', 'Lg'], axis=1)
#drop row of null values, which was used to separate decades on the Basketball Reference website
df_past = df_past.dropna(axis=0)
#get advanced statistics for MVP winners since 1980
url3 = 'http://www.basketball-reference.com/play-index/psl_finder.cgi?request=1&match=single&per_minute_base=36&per_poss_base=100&type=advanced&season_start=1&season_end=-1&lg_id=NBA&age_min=0&age_max=99&is_playoffs=N&height_min=0&height_max=99&year_min=1981&year_max=2017&birth_country_is=Y&as_comp=gt&pos_is_g=Y&pos_is_gf=Y&pos_is_f=Y&pos_is_fg=Y&pos_is_fc=Y&pos_is_c=Y&pos_is_cf=Y&force%3Apos_is=1&award=mvp&c6mult=1.0&order_by=season'
dl = requests.get(url3)
soup3 = BeautifulSoup(dl.content, 'html.parser')
column_headers_past_adv = [th.getText() for th in
soup3.findAll('tr')[1].findAll('th')]
data_rows_past_adv = soup3.findAll('tr')[2:]
player_data_past_adv = [[td.getText() for td in data_rows_past_adv[i].findAll('td')]
for i in range(len(data_rows_past_adv))]
df_past_adv = pd.DataFrame(player_data_past_adv, columns=column_headers_past_adv[1:])
df_past_adv = df_past_adv.set_index('Player')
#drop duplicate and unnecessary columns
df_past_adv = df_past_adv.drop(['Age', 'Tm', 'Lg', 'Season', 'G', 'GS', 'MP'], axis=1)
#drop row of null values
df_past_adv = df_past_adv.dropna(axis=0)
historical = pd.concat([df_past, df_past_adv], axis=1)
historical
#convert to proper data types
historical = historical.apply(pd.to_numeric, errors='ignore')
fig, axes = plt.subplots(nrows = 2, ncols = 2, figsize = (12,12), sharex=True, sharey=False)
MVP['PTS'].plot.bar(ax=axes[0,0], color = ['b', 'b', 'b', 'r']); axes[0,0].set_title('Points per Game')
MVP['eFG%'].plot.bar(ax=axes[1,0], color = ['b', 'b', 'r', 'b']); axes[1,0].set_title('Effective Field Goal Percentage')
MVP['AST'].plot.bar(ax=axes[0,1], color = ['r', 'b', 'b', 'b']); axes[0,1].set_title('Assists per Game')
MVP['TRB'].plot.bar(ax=axes[1,1], color = ['b', 'b', 'b', 'r']); axes[1,1].set_title('Rebounds per Game')
"""
Explanation: Data Bootcamp Final Project: Who deserves to be NBA MVP?
Vincent Booth & Sam Praveen
Background:
The topic of which NBA player is most deserving of the MVP is always a contentious one. Fans will endlessly debate which of their favorite players has had a more tangible impact on their respective teams as well as who has the better stat lines. We recognize that statistics such as points, assists, and rebounds do not tell enough of a story or give enough information to definitively decide who is more deserving of the award.
Process:
For simplicity's sake, we will focus on four players who we believe have earned the right to be in the conversation for MVP: James Harden, LeBron James, Kawhi Leonard, and Russell Westbrook. We will use the advanced statistics that we gather from sources such as espn.com and basketball-reference.com to compare the performance of these four players to see who is in fact most valuable. In addition, we will go back to 1980, take the stats for each MVP from that season onwards, and look for any patterns or trends that can serve as predictors for who will win the award this season.
James Harden
James Harden is a guard for the Houston Rockets. He joined the Rockets in 2012 and has been a leader for the team ever since. He is known as a prolific scorer, able to score in the paint and from three-point range alike. He led the Rockets to a 3rd-place finish in the Western Conference with a record of 55-27, and they are currently in the conference semifinals against the San Antonio Spurs.
Kawhi Leonard
Kawhi Leonard is a forward for the San Antonio Spurs. He was drafted by San Antonio in 2011 and broke out as a star after the 2014 Finals, in which he led his team to victory over the LeBron James-led Miami Heat. Since then, Kawhi has been known for being a very complete, consistent, and humble player, a reflection of the team and coach he plays for. Leonard led the Spurs to a 2nd-place finish in the West with a record of 61-21 and is currently in the conference semifinals against the Houston Rockets.
LeBron James
LeBron James is a forward for the Cleveland Cavaliers. He was drafted by the Cavaliers in 2003 and, after a stint with the Miami Heat, returned to Cleveland in 2014. James excels in nearly every aspect of the game; he has already put together a Hall of Fame career and put his name in the conversation for greatest player of all time, although there is some debate about that. James led the Cavaliers to a 2nd-place finish in the Eastern Conference, and they are currently in the conference semifinals against the Toronto Raptors.
Russell Westbrook
Russell Westbrook is a guard for the Oklahoma City Thunder. He was drafted by OKC in 2008 and became the sole leader of the team this season when Kevin Durant left for the Golden State Warriors. Westbrook is known for his athleticism as well as his passion on the court. Westbrook has taken on an expanded role for the Thunder this year and led them in almost every statistical category. He led OKC to a 6th-place finish in the West with a record of 47-35. They were eliminated in the first round in 5 games by the Houston Rockets.
End of explanation
"""
import seaborn as sns
fig, ax = plt.subplots()
MVP['PER'].plot(ax=ax, kind = 'bar', color = ['b', 'b', 'b', 'r'])
ax.set_ylabel('PER')
ax.set_xlabel('')
ax.axhline(historical['PER'].mean(), color = 'k', linestyle = '--', alpha = .4)
"""
Explanation: Here we have displayed the most basic statistics for each of the MVP candidates, such as points, assists, steals, and rebounds per game. As we can see, Westbrook had some of the highest totals in these categories. Westbrook was only the second player in NBA history to average double digits in points, rebounds, and assists over a season. Many believe that this fact alone should award him the title of MVP. However, it is important to know that players who are renowned for their defense, such as Kawhi Leonard, aren't usually the leaders in these categories, so these statistics can paint an incomplete picture of how good a player is.
End of explanation
"""
fig, ax = plt.subplots()
MVP['VORP'].plot(ax=ax, kind = 'bar', color = ['b', 'b', 'b', 'r'])
ax.set_ylabel('Value Over Replacement Player')
ax.set_xlabel('')
ax.axhline(historical['VORP'].mean(), color = 'k', linestyle = '--', alpha = .4)
"""
Explanation: Player efficiency rating (PER) is a statistic meant to capture all aspects of a player's game to give a measure of overall performance. It is adjusted for pace and minutes, and the league average is always 15.0 for comparison. Russell Westbrook leads all MVP candidates with a PER of 30.6. All candidates just about meet or surpass the historical MVP average of 27.42. However, PER is somewhat flawed, as it is weighted much more heavily toward offensive statistics: it only takes blocks and steals into account on the defensive side. This favors Westbrook and Harden, who put up stronger offensive numbers than Leonard and James. On the other hand, Westbrook and Harden are not known for being great defenders, while James, and especially Kawhi Leonard (a two-time Defensive Player of the Year winner), are two of the top defenders in the NBA.
End of explanation
"""
fig, ax = plt.subplots()
MVP['WS'].plot(ax=ax, kind = 'bar', color = ['r', 'b', 'b', 'b'])
ax.set_ylabel('Win Shares')
ax.set_xlabel('')
ax.axhline(historical['WS'].mean(), color = 'k', linestyle = '--', alpha = .4)
"""
Explanation: According to Basketball Reference, Value over Replacement Player (VORP) provides an "estimate of each player's overall contribution to the team, measured vs. what a theoretical 'replacement player' would provide", where the 'replacement player' is defined as a player with a box plus/minus of -2. By this metric, Russell Westbrook contributes the most to his team, with a VORP of 12.4. Westbrook and James Harden are the only candidates with a VORP above the historical MVP average of 7.62.
End of explanation
"""
fig, ax = plt.subplots()
MVP['DWS'].plot(ax=ax, kind = 'bar', color = ['b', 'r', 'b', 'b'])
ax.set_ylabel('Defensive Win Share')
ax.set_xlabel('')
ax.axhline(historical['DWS'].mean(), color = 'k', linestyle = '--', alpha = .4)
"""
Explanation: Win shares is a measure of wins a player produces for his team. This statistic is calculated by taking into account how many wins a player has contributed to their team based off of their offensive play, as well as their defensive play.
We can see that this past season, none of the MVP candidates generated as many wins for their teams as the average MVP has, which is 16.13 games. James Harden was the closest, with a win share value of just over 15. To understand how meaningful this statistic is, we also have to keep in mind the production of the rest of each candidate's team: if a player is putting up great statistics, but so are other players on the same team, how much impact is that one player really having?
End of explanation
"""
fig, ax = plt.subplots()
MVP['WS/48'].plot(ax=ax, kind = 'bar', color = ['b', 'r', 'b', 'b'])
ax.set_ylabel('Win Share/48 Minutes')
ax.set_xlabel('')
ax.axhline(historical['WS/48'].mean(), color = 'k', linestyle = '--', alpha = .4)
print(historical['WS/48'].mean())
"""
Explanation: Here we try to compare the defensive production of each of the MVP candidates. Defensive Win Share is calculated by looking at how a player's defensive production translates to wins for a team. A player's estimated points allowed per 100 possessions, marginal defense added, and points added in a win are all taken into account to calculate this number. Because points added in a win is used in this calculation, even though it is supposed to be a defensive statistic, there is still some offensive bias: players that score more points and win more games can get higher values for this statistic. Despite these possible flaws, we see that Leonard and Westbrook lead the way with DWS of 4.7 and 4.6 respectively. All players still fall short of the historical MVP average of 5.1.
End of explanation
"""
fig, ax = plt.subplots()
MVP['USG%'].plot(ax=ax, kind = 'bar', color = ['b', 'b', 'b', 'r'])
ax.set_ylabel('Usage Percentage')
ax.set_xlabel('')
ax.axhline(historical['USG%'].mean(), color = 'k', linestyle = '--', alpha = .4)
print(historical['USG%'].mean())
"""
Explanation: Win Shares/48 Minutes is another statistic used to measure the wins attributed to a certain player. This statistic is slightly different because instead of just taking into account how many games the team actually wins over the course of a season, this stat attempts to control for actual minutes played by the player. Here we see that Kawhi Leonard has the highest win share rate, not Harden. We believe that this is because Leonard plays significantly fewer minutes than the other candidates. Leonard is the only player whose WS/48 of .264 surpasses the MVP average of .261.
End of explanation
"""
url4 ='http://www.basketball-reference.com/play-index/tsl_finder.cgi?request=1&match=single&type=team_totals&lg_id=NBA&year_min=2017&year_max=2017&order_by=wins'
e1 = requests.get(url4)
soup4 = BeautifulSoup(e1.content, 'html.parser')
column_headers_past_adv = [th.getText() for th in
soup4.findAll('tr')[1].findAll('th')]
data_rows_past_adv = soup4.findAll('tr')[2:]
column_headers_team = [th.getText() for th in
soup4.findAll('tr')[1].findAll('th')]
data_rows_team = soup4.findAll('tr')[3:12]
team_wins = [[td.getText() for td in data_rows_team[i].findAll('td')]
for i in range(len(data_rows_team))]
df_team = pd.DataFrame(team_wins, columns=column_headers_team[1:])
df_team = df_team.set_index('Tm')
df_team =df_team.drop(['TOR*','UTA*','LAC*','WAS*'])
Team =df_team
Team
Team['W']['SAS*']
Hou_wins = int((Team['W']['HOU*']))
Harden_Wins = int(MVP['WS']['James Harden'])
Harden_winpct = Harden_Wins/Hou_wins
Harden_nonwin = 1 - Harden_winpct
SAS_wins = int((Team['W']['SAS*']))
Leo_Wins = int(MVP['WS']['Kawhi Leonard'])
Leo_winpct = Leo_Wins/SAS_wins
Leo_nonwin = 1 - Leo_winpct
Cle_wins = int((Team['W']['CLE*']))
LeBron_Wins = int(MVP['WS']['LeBron James'])
LeBron_winpct = LeBron_Wins/Cle_wins
LeBron_nonwin = 1 - LeBron_winpct
OKC_wins = int((Team['W']['OKC*']))
Westbrook_Wins = int(MVP['WS']['Russell Westbrook'])
Westbrook_winpct = Westbrook_Wins/OKC_wins
Westbrook_nonwin = 1 - Westbrook_winpct
df1 = ([Harden_winpct, Leo_winpct, LeBron_winpct, Westbrook_winpct])
df2 = ([Harden_nonwin, Leo_nonwin, LeBron_nonwin, Westbrook_nonwin])
df3 = pd.DataFrame(df1)
df4 = pd.DataFrame(df2)
Win_Share_Per = pd.concat([df3, df4], axis =1)
Win_Share_Per.columns = ['% Wins Accounted For', 'Rest of Team']
Win_Share_Per = Win_Share_Per.T
Win_Share_Per.columns = ['James Harden', 'Kawhi Leonard', 'LeBron James', 'Russell Westbrook']
pic1 = urllib.request.urlretrieve("http://stats.nba.com/media/players/230x185/201935.png", "201935.png")
pic2 = urllib.request.urlretrieve("http://stats.nba.com/media/players/230x185/202695.png", "202695.png")
pic3 = urllib.request.urlretrieve("http://stats.nba.com/media/players/230x185/2544.png", "2544.png")
pic4 = urllib.request.urlretrieve("http://stats.nba.com/media/players/230x185/201566.png", "201566.png")
pic5 = urllib.request.urlretrieve("https://upload.wikimedia.org/wikipedia/en/thumb/2/28/Houston_Rockets.svg/410px-Houston_Rockets.svg.png", "410px-Houston_Rockets.svg.png")
pic6 = urllib.request.urlretrieve("https://upload.wikimedia.org/wikipedia/en/thumb/a/a2/San_Antonio_Spurs.svg/512px-San_Antonio_Spurs.svg.png", "512px-San_Antonio_Spurs.svg.png")
pic7 = urllib.request.urlretrieve("https://upload.wikimedia.org/wikipedia/en/thumb/f/f7/Cleveland_Cavaliers_2010.svg/295px-Cleveland_Cavaliers_2010.svg.png", "295px-Cleveland_Cavaliers_2010.svg.png")
pic8 = urllib.request.urlretrieve("https://upload.wikimedia.org/wikipedia/en/thumb/5/5d/Oklahoma_City_Thunder.svg/250px-Oklahoma_City_Thunder.svg.png", "250px-Oklahoma_City_Thunder.svg.png")
harden_pic = plt.imread(pic1[0])
leonard_pic = plt.imread(pic2[0])
james_pic = plt.imread(pic3[0])
westbrook_pic = plt.imread(pic4[0])
rockets_pic = plt.imread(pic5[0])
spurs_pic = plt.imread(pic6[0])
cavaliers_pic = plt.imread(pic7[0])
thunder_pic = plt.imread(pic8[0])
fig, axes = plt.subplots(nrows = 2, ncols = 2, figsize = (12,12))
Win_Share_Per['James Harden'].plot.pie(ax=axes[0,0], colors = ['r', 'yellow'])
Win_Share_Per['Kawhi Leonard'].plot.pie(ax=axes[0,1], colors = ['black', 'silver'])
Win_Share_Per['LeBron James'].plot.pie(ax=axes[1,0], colors = ['maroon', 'navy'])
Win_Share_Per['Russell Westbrook'].plot.pie(ax=axes[1,1], colors = ['blue', 'orangered'])
img1 = OffsetImage(harden_pic, zoom=0.4)
img1.set_offset((290,800))
a = axes[0,0].add_artist(img1)
a.set_zorder(10)
img2 = OffsetImage(leonard_pic, zoom=0.4)
img2.set_offset((800,800))
b= axes[0,1].add_artist(img2)
b.set_zorder(10)
img3 = OffsetImage(james_pic, zoom=0.4)
img3.set_offset((290,290))
c = axes[1,0].add_artist(img3)
c.set_zorder(10)
img4 = OffsetImage(westbrook_pic, zoom=0.4)
img4.set_offset((790,290))
d = axes[1,1].add_artist(img4)
d.set_zorder(10)
img5 = OffsetImage(rockets_pic, zoom=0.4)
img5.set_offset((150,620))
e = axes[1,1].add_artist(img5)
e.set_zorder(10)
img6 = OffsetImage(spurs_pic, zoom=0.3)
img6.set_offset((650,620))
f = axes[1,1].add_artist(img6)
f.set_zorder(10)
img7 = OffsetImage(cavaliers_pic, zoom=0.4)
img7.set_offset((150,130))
g = axes[1,1].add_artist(img7)
g.set_zorder(10)
img8 = OffsetImage(thunder_pic, zoom=0.4)
img8.set_offset((650,130))
h = axes[1,1].add_artist(img8)
h.set_zorder(10)
plt.show()
"""
Explanation: Usage percentage is a measure of the percentage of team possessions a player uses per game. A higher percentage means a player handles the ball more per game. High usage percentages by one player can often lead to decreased overall efficiency for the team, as it means the offense is run more through one player. In this case, Russell Westbrook's usage percentage is considerably higher than the other candidates' and is the highest usage percentage in NBA history by about 3%. The other candidates are much closer to the historical average MVP usage percentage of 29.77%.
End of explanation
"""
|
thehackerwithin/berkeley | code_examples/python_matplotlib/Matplotlib_THW_tutorial.ipynb | bsd-3-clause | import matplotlib as mpl
mpl
# I normally prototype my code in an editor + ipy terminal.
# In those cases I import pyplot and numpy via
import matplotlib.pyplot as plt
import numpy as np
# In Jupy notebooks we've got magic functions and pylab gives you pyplot as plt and numpy as np
# %pylab
# Additionally, inline will let you plot inline of the notebook
# %pylab inline
# And notebook, as I've just found out gives you some resizing etc... tools inline.
# %pylab notebook
y = np.ones(10)
for x in range(2,10):
y[x] = y[x-2] + y[x-1]
plt.plot(y)
plt.title('This story')
"""
Explanation: An Introduction to Matplotlib
Tenzing HY Joshi: thjoshi@lbl.gov
Nick Swanson-Hysell: swanson-hysell@berkeley.edu
What is Matplotlib?
matplotlib is a library for making 2D plots of arrays in Python. ... matplotlib is designed with the philosophy that you should be able to create simple plots with just a few commands, or just one! ...
The matplotlib code is conceptually divided into three parts: the pylab interface is the set of functions provided by matplotlib.pylab which allow the user to create plots with code quite similar to MATLAB figure generating code (Pyplot tutorial). The matplotlib frontend or matplotlib API is the set of classes that do the heavy lifting, creating and managing figures, text, lines, plots and so on (Artist tutorial). This is an abstract interface that knows nothing about output. The backends are device-dependent drawing devices, aka renderers, that transform the frontend representation to hardcopy or a display device (What is a backend?).
Resources
Matplotlib website
Python Programming tutorial
Sci-Py Lectures
What I'll touch on
Importing
Simple plots
Figures and Axes
Useful plot types
Formatting
End of explanation
"""
plt.show()
print('I can not run this command until I close the window because interactive mode is turned off')
"""
Explanation: Where's the plot to this story?
By default with pyplot, interactive mode is turned off. That means that the state of our Figure is updated on every plt command, but it is only drawn when we ask for it to be drawn (plt.draw()) and shown when we ask for it to be shown (plt.show()). So let's have a look at what happened.
End of explanation
"""
%pylab inline
# Set default figure size for your viewing pleasure...
pylab.rcParams['figure.figsize'] = (10.0, 7.0)
"""
Explanation: Interactive mode on or off is a preference. See how it works for your workflow.
plt.ion() can be used to turn interactive mode on
plt.ioff() then turns it off
For now let's switch over to the %pylab inline configuration to make it easier on ourselves.
End of explanation
"""
x = np.linspace(0,5,100)
y = np.random.exponential(1./3., 100)
# Make a simply plot of x vs y, Set the points to have an 'x' marker.
plt.plot(x,y, c='r',marker='x')
# Label our x and y axes and give the plot a title.
plt.xlabel('Sample time (au)')
plt.ylabel('Exponential Sample (au)')
plt.title('See the trend?')
"""
Explanation: Some Simple Plots
End of explanation
"""
x = np.linspace(0, 6., 1000)
# alpha = 0.5, color = red, linestyle = dotted, linewidth = 3, label = x
plt.plot(x, x, alpha = 0.5, c = 'r', ls = ':', lw=3., label='x')
# alpha = 0.5, color = blue, linestyle = solid, linewidth = 3, label = x**(3/2)
# Check out the LaTeX!
plt.plot(x, x**(3./2), alpha = 0.5, c = 'b', ls = '-', lw=3., label=r'x$^{3/2}$')
# And so on...
plt.plot(x, x**2, alpha = 0.5, c = 'g', ls = '--', lw=3., label=r'x$^2$')
plt.plot(x, np.log(1+x)*20., alpha = 0.5, c = 'c', ls = '-.', lw=3., label='log(1+x)')
# Add a legend (loc gives some options about where the legend is placed)
plt.legend(loc=2)
"""
Explanation: Lots of kwargs to modify your plot
A few that I find most useful are:
* alpha
* color or c
* linestyle or ls
* linewidth or lw
* marker
* markersize or ms
* label
End of explanation
"""
N = 50
x = np.random.rand(N)
y = np.random.rand(N)
colors = np.random.rand(N)
area = np.pi * (15 * np.random.rand(N))**2 # 0 to 15 point radiuses
# size = area variable, c = colors variable
x = plt.scatter(x, y, s=area, c=colors, alpha=0.4)
plt.show()
N=10000
values1 = np.random.normal(25., 3., N)
values2 = np.random.normal(33., 8., N // 7)
valuestot = np.concatenate([values1,values2])
binedges = np.arange(0,101,1)
bincenters = (binedges[1:] + binedges[:-1])/2.
# plt.hist gives you the ability to histogram and plot all in one command.
x1 = plt.hist(valuestot, bins=binedges, color='g', alpha=0.5, label='total')
x2 = plt.hist(values2, bins=binedges, color='r', alpha=0.5, histtype='step', linewidth=3, label='values 1')
x3 = plt.hist(values1, bins=binedges, color='b', alpha=0.5, histtype='step', linewidth=3, label='values 2')
plt.legend(loc=7)
"""
Explanation: A nice scatter example from the MPL website. Note that the kwargs are different here. Quick inspection of the docs is handy (Shift + Tab in Jupyter notebooks).
End of explanation
"""
fig = plt.figure(figsize=(10,6))
# Make an axes as if the figure had 1 row, 2 columns and it would be the first of the two sub-divisions.
ax1 = fig.add_subplot(121)
plot1 = ax1.plot([1,2,3,4,1,0])
ax1.set_xlabel('time since start of talk')
ax1.set_ylabel('interest level')
ax1.set_xbound([-1.,6.])
# Make an axes as if the figure had 1 row, 2 columns and it would be the second of the two sub-divisions.
ax2 = fig.add_subplot(122)
plot2 = ax2.scatter([1,1,1,2,2,2,3,3,3,4,4,4], [1,2,3]*4)
ax2.set_title('A commentary on chairs with wheels')
print(plot1)
print(plot2)
"""
Explanation: Loads of examples and plot types in the Matplotlib.org Gallery
It's worth looking through some examples just to get a feel for what types of plots are available and how they are used.
Figures and Axes
Working with MPL Figure and Axes objects gives you more control. You can quickly make multiple plots, shared axes, etc... on the same Figure.
Figure command
Subplot command
Different sized subplots
Axes controls
ticks, labels, else?
End of explanation
"""
fig2 = plt.figure(figsize=(10,10))
ax1 = fig2.add_axes([0.1,0.1,0.8,0.4])
histvals = ax1.hist(np.random.exponential(0.5,5000), bins=np.arange(0,5, 0.1))
ax1.set_xlabel('Sampled Value')
ax1.set_ylabel('Counts per bin')
ax2 = fig2.add_axes([0.3,0.55, 0.7, 0.45])
ax2.plot([13,8,5,3,2,1,1],'r:',lw=3)
"""
Explanation: fig.add_axes is another option for adding axes as you wish.
* Relative lower left corner x and y coordinates
* Relative x and y spans of the axes
End of explanation
"""
import scipy.stats as stats
# With subplots we can make all of the axes at once.
# The axes are returned in a list of lists.
f, [[ax0, ax1], [ax2, ax3]] = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=False)
# Remove the space between the top and bottom rows of plots
# wspace would do the same for left and right columns...
f.subplots_adjust(hspace=0)
ax0.plot(range(50,250), np.exp(np.arange(50,250) / 23.) )
ax2.scatter(np.random.normal(125,27,100), np.random.binomial(200,0.4,100))
ax1.plot(range(0,300), np.random.exponential(0.5,300), 'g')
ax3.plot(range(0,300), stats.norm.pdf(np.arange(0,300),150, 30) , 'g')
"""
Explanation: plt.subplots gives an alternative route, creating all of the axes at once. Less flexability since you'll end up with a grid of subplots, but thats exactly what you want a lot of the time.
sharex and sharey kwargs do exactly that for all of the axes.
End of explanation
"""
plt.colormaps()
cmap0 = plt.cm.cubehelix
cmap1 = plt.cm.Accent
cmap2 = plt.cm.Set1
cmap3 = plt.cm.Spectral
colmaps = [cmap0,cmap1,cmap2,cmap3]
Ncolors = 12
col0 = cmap0(np.linspace(0,1,Ncolors))
f, [[ax0, ax1], [ax2, ax3]] = plt.subplots(nrows=2, ncols=2, figsize=(13,13))
x = np.linspace(0.01,100,1000)
for idx, axis in enumerate([ax0,ax1,ax2,ax3]):
colormap = colmaps[idx]
colors = colormap(np.linspace(0,1,Ncolors))
axis.set_title(colormap.name)
for val in range(Ncolors):
axis.plot(x,x**(1.0 + 0.1 * val), c=colors[val], lw=3, label=val)
axis.loglog()
"""
Explanation: Colors and colormaps
MPL has a variety of Colormaps to choose from.
I also use the python library Palettable to gain access to a few other colors and colormaps in convenient ways. I won't use this library today, but if you're interested in some other options beyond what MPL has, it is worth a look.
End of explanation
"""
# Lets look at a two distributions on an exponential noise background...
Nnoise = 475000
Nnorm1 = 10000
Nnorm2 = 15000
# Uniform noise in x, exponential in y
xnoise = np.random.rand(Nnoise) * 100
ynoise = np.random.exponential(250,475000)
# Uniform in X, normal in Y
xnorm1 = np.random.rand(Nnorm1) * 100
ynorm1 = np.random.normal(800, 50, Nnorm1)
# Normal in X and Y
xnorm2 = np.random.normal(50, 30, 15000)
ynorm2 = np.random.normal(200, 25, 15000)
xtot = np.concatenate([xnoise, xnorm1, xnorm2])
ytot = np.concatenate([ynoise, ynorm1, ynorm2])
xbins = np.arange(0,100,10)
ybins = np.arange(0,1000,10)
H, xe, ye = np.histogram2d(xtot, ytot, bins=[xbins, ybins])
X,Y = np.meshgrid(ybins,xbins)
fig4 = plt.figure(figsize=(13,8))
ax1 = fig4.add_axes([0.1,0.1,0.35,0.4])
ax2 = fig4.add_axes([0.5,0.1,0.35,0.4])
pcolplot = ax1.pcolor(X, Y, H, cmap=cm.GnBu)
ax1.set_title('Linear Color Scale')
plt.colorbar(pcolplot, ax=ax1)
from matplotlib.colors import LogNorm
pcolplot2 = ax2.pcolor(X, Y, H, norm=LogNorm(vmin=H.min(), vmax=H.max()), cmap=cm.GnBu)
ax2.set_title('Log Color Scale')
plt.colorbar(pcolplot2, ax=ax2)
"""
Explanation: Colormap normalization can also be pretty handy!
They are found in matplotlib.colors.
Let's look at logarithmic normalization (LogNorm); symmetric log, power law, discrete bounds, and custom ranges are also available.
* Colormap Normalization
End of explanation
"""
xvals = np.arange(0,120,0.1)
# Define a few functions to use
f1 = lambda x: 50. * np.exp(-x/20.)
f2 = lambda x: 30. * stats.norm.pdf(x, loc=25,scale=5)
f3 = lambda x: 200. * stats.norm.pdf(x,loc=40,scale=10)
f4 = lambda x: 25. * stats.gamma.pdf(x, 8., loc=45, scale=4.)
# Normalize to define PDFs
pdf1 = f1(xvals) / (f1(xvals)).sum()
pdf2 = f2(xvals) / (f2(xvals)).sum()
pdf3 = f3(xvals) / (f3(xvals)).sum()
pdf4 = f4(xvals) / (f4(xvals)).sum()
# Combine them and normalize again
pdftot = pdf1 + pdf2 + pdf3 + pdf4
pdftot = pdftot / pdftot.sum()
fig5 = plt.figure(figsize=(11,8))
ax3 = fig5.add_axes([0.1,0.1,0.9,0.9])
# Plot the pdfs, and the total pdf
lines = ax3.plot(xvals, pdf1,'r', xvals,pdf2,'b', xvals,pdf3,'g', xvals,pdf4,'m')
lines = ax3.plot(xvals, pdftot, 'k', lw=5.)
"""
Explanation: Lines and text
Adding horizontal and vertical lines
hlines and vlines
Adding text to your figures is also often needed.
End of explanation
"""
# Calculate the mean
mean1 = (xvals * pdf1).sum()
mean2 = (xvals * pdf2).sum()
mean3 = (xvals * pdf3).sum()
mean4 = (xvals * pdf4).sum()
fig6 = plt.figure(figsize=(11,8))
ax4 = fig6.add_axes([0.1,0.1,0.9,0.9])
# Plot the total PDF
ax4.plot(xvals, pdftot, 'k', lw=5.)
# Grab the limits of the y-axis for defining the extent of our vertical lines
axmin, axmax = ax4.get_ylim()
# Draw vertical lines. (x location, ymin, ymax, color, linestyle)
ax4.vlines(mean1, axmin, axmax, 'r',':')
ax4.vlines(mean2, axmin, axmax, 'b',':')
ax4.vlines(mean3, axmin, axmax, 'g',':')
ax4.vlines(mean4, axmin, axmax, 'm',':')
# Add some text to figure to describe the curves
# (xloc, yloc, text, color, fontsize, rotation, ...)
ax4.text(mean1-18, 0.0028, r'mean of $f_1(X)$', color='r', fontsize=18)
ax4.text(mean2+1, 0.0005, r'mean of $f_2(X)$', color='b', fontsize=18)
ax4.text(mean3+1, 0.0002, r'mean of $f_3(X)$', color='g', fontsize=18)
ax4.text(mean4+1, 0.0028, r'mean of $f_4(X)$', color='m', fontsize=18, rotation=-25)
temp = ax4.text(50, 0.0009, r'$f_{tot}(X)$', color='k', fontsize=22)
"""
Explanation: Let's use vertical lines to represent the means of our distributions instead of plotting all of them.
We'll also add some text to describe these vertical lines.
End of explanation
"""
# Compute CDFs
cdf1 = pdf1.cumsum()
cdf2 = pdf2.cumsum()
cdf3 = pdf3.cumsum()
cdf4 = pdf4.cumsum()
cdftot = pdftot.cumsum()
fig7 = plt.figure(figsize=(11,8))
ax7 = fig7.add_axes([0.1,0.1,0.9,0.9])
# Plot them
ax7.plot(xvals, cdftot, 'k', lw=3)
ax7.plot(xvals, cdf1, 'r', ls=':', lw=2)
ax7.plot(xvals, cdf2, 'b', ls=':', lw=2)
ax7.plot(xvals, cdf3, 'g', ls=':', lw=2)
ax7.plot(xvals, cdf4, 'm', ls=':', lw=2)
# Force the y limits to be (0,1)
ax7.set_ylim(0,1.)
# Add 50% and 90% lines.
ax7.hlines(0.5, 0, 120., 'k', '--', lw=2)
ax7.hlines(0.95, 0, 120., 'k', '--', lw=3)
# Add some text
ax7.set_title('CDFs of dists 1-4 and total with 50% and 95% bounds')
ax7.text(110, 0.46, r'$50\%$ ', color='k', fontsize=20)
temp = ax7.text(110, 0.91, r'$95\%$ ', color='k', fontsize=20)
"""
Explanation: We can do the same with horizontal lines
End of explanation
"""
import matplotlib.image as mpimg
img=mpimg.imread('Tahoe.png')
imgplot = plt.imshow(img)
"""
Explanation: Displaying images
Loading image data is supported by the Pillow library. Natively, matplotlib only supports PNG images. The commands shown below fall back on Pillow if the native read fails.
Matplotlib plotting can handle float32 and uint8, but image reading/writing for any format other than PNG is limited to uint8 data.
End of explanation
"""
f, [ax0,ax1,ax2] = plt.subplots(nrows=3, ncols=1, figsize=(10,15))
f.subplots_adjust(hspace=0.05)
for ax in [ax0,ax1,ax2]:
# ax.set_xticklabels([])
ax.set_xticks([])
ax.set_yticklabels([])
ax0.imshow(img[:,:,0], cmap=cm.Spectral)
ax1.imshow(img[:,:,1], cmap=cm.Spectral)
ax2.imshow(img[:,:,2], cmap=cm.Spectral)
"""
Explanation: Let's plot the R, G, and B components of this image.
End of explanation
"""
|
amueller/nyu_ml_lectures | First Steps.ipynb | bsd-2-clause | from sklearn.datasets import load_digits
digits = load_digits()
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(digits.data,
digits.target)
X_train.shape
"""
Explanation: Get some data to play with
End of explanation
"""
from sklearn.svm import LinearSVC
"""
Explanation: Really Simple API
0) Import your model class
End of explanation
"""
svm = LinearSVC(C=0.1)
"""
Explanation: 1) Instantiate an object and set the parameters
End of explanation
"""
svm.fit(X_train, y_train)
"""
Explanation: 2) Fit the model
End of explanation
"""
print(svm.predict(X_test))
svm.score(X_train, y_train)
svm.score(X_test, y_test)
"""
Explanation: 3) Apply / evaluate
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=50)
rf.fit(X_train, y_train)
rf.score(X_test, y_test)
%load https://raw.githubusercontent.com/scikit-learn/scikit-learn/master/examples/classification/plot_classifier_comparison.py
"""
Explanation: And again
End of explanation
"""
# %load solutions/train_iris.py
"""
Explanation: Exercises
Load the iris dataset from the sklearn.datasets module using the load_iris function.
Split it into training and test set using train_test_split.
Then train and evaluate a classifier of your choice.
End of explanation
"""
|
sysid/nbs | LP/Introduction-to-linear-programming/LaTeX_formatted_ipynb_files/Introduction to Linear Programming with Python - Part 4.ipynb | mit | import pulp
# Instantiate our problem class
model = pulp.LpProblem("Cost minimising blending problem", pulp.LpMinimize)
"""
Explanation: Introduction to Linear Programming with Python - Part 4
Real world examples - Blending Problem
We're going to make some sausages!
We have the following ingredients available to us:
| Ingredient | Cost (€/kg) | Availability (kg) |
|------------|--------------|-------------------|
| Pork | 4.32 | 30 |
| Wheat | 2.46 | 20 |
| Starch | 1.86 | 17 |
We'll make 2 types of sausage:
* Economy (>40% Pork)
* Premium (>60% Pork)
One sausage is 50 grams (0.05 kg)
According to government regulations, the most starch we can use in our sausages is 25%
We have a contract with a butcher, and have already purchased 23 kg pork, that must go in our sausages.
We have a demand for 350 economy sausages and 500 premium sausages.
We need to figure out how to most cost effectively blend our sausages.
Let's model our problem:
$
p_e = \text{Pork in the economy sausages (kg)} \
w_e = \text{Wheat in the economy sausages (kg)} \
s_e = \text{Starch in the economy sausages (kg)} \
p_p = \text{Pork in the premium sausages (kg)} \
w_p = \text{Wheat in the premium sausages (kg)} \
s_p = \text{Starch in the premium sausages (kg)} \
$
We want to minimise costs such that:
$\text{Cost} = 4.32(p_e + p_p) + 2.46(w_e + w_p) + 1.86(s_e + s_p)$
With the following constraints:
$
p_e + w_e + s_e = 350 \times 0.05 \
p_p + w_p + s_p = 500 \times 0.05 \
p_e \geq 0.4(p_e + w_e + s_e) \
p_p \geq 0.6(p_p + w_p + s_p) \
s_e \leq 0.25(p_e + w_e + s_e) \
s_p \leq 0.25(p_p + w_p + s_p) \
p_e + p_p \leq 30 \
w_e + w_p \leq 20 \
s_e + s_p \leq 17 \
p_e + p_p \geq 23 \
$
End of explanation
"""
# Construct our decision variable lists
sausage_types = ['economy', 'premium']
ingredients = ['pork', 'wheat', 'starch']
"""
Explanation: Here we have 6 decision variables, we could name them individually but this wouldn't scale up if we had hundreds/thousands of variables (you don't want to be entering all of these by hand multiple times).
We'll create a couple of lists from which we can create tuple indices.
End of explanation
"""
ing_weight = pulp.LpVariable.dicts("weight kg",
((i, j) for i in sausage_types for j in ingredients),
lowBound=0,
cat='Continuous')
"""
Explanation: Each of these decision variables will have similar characteristics (lower bound of 0, continuous variables). Therefore we can use the dicts functionality of PuLP's LpVariable, providing our tuple indices.
These tuples will be keys for the ing_weight dict of decision variables.
End of explanation
"""
# Objective Function
model += (
pulp.lpSum([
4.32 * ing_weight[(i, 'pork')]
+ 2.46 * ing_weight[(i, 'wheat')]
+ 1.86 * ing_weight[(i, 'starch')]
for i in sausage_types])
)
"""
Explanation: PuLP provides an lpSum vector calculation for the sum of a list of linear expressions.
Whilst we only have 6 decision variables, I will demonstrate how the problem would be constructed in a way that could be scaled up to many variables using list comprehensions.
End of explanation
"""
# Constraints
# 350 economy and 500 premium sausages at 0.05 kg
model += pulp.lpSum([ing_weight['economy', j] for j in ingredients]) == 350 * 0.05
model += pulp.lpSum([ing_weight['premium', j] for j in ingredients]) == 500 * 0.05
# Economy has >= 40% pork, premium >= 60% pork
model += ing_weight['economy', 'pork'] >= (
0.4 * pulp.lpSum([ing_weight['economy', j] for j in ingredients]))
model += ing_weight['premium', 'pork'] >= (
0.6 * pulp.lpSum([ing_weight['premium', j] for j in ingredients]))
# Sausages must be <= 25% starch
model += ing_weight['economy', 'starch'] <= (
0.25 * pulp.lpSum([ing_weight['economy', j] for j in ingredients]))
model += ing_weight['premium', 'starch'] <= (
0.25 * pulp.lpSum([ing_weight['premium', j] for j in ingredients]))
# We have at most 30 kg of pork, 20 kg of wheat and 17 kg of starch available
model += pulp.lpSum([ing_weight[i, 'pork'] for i in sausage_types]) <= 30
model += pulp.lpSum([ing_weight[i, 'wheat'] for i in sausage_types]) <= 20
model += pulp.lpSum([ing_weight[i, 'starch'] for i in sausage_types]) <= 17
# We have at least 23 kg of pork to use up
model += pulp.lpSum([ing_weight[i, 'pork'] for i in sausage_types]) >= 23
# Solve our problem
model.solve()
pulp.LpStatus[model.status]
for var in ing_weight:
var_value = ing_weight[var].varValue
print "The weight of {0} in {1} sausages is {2} kg".format(var[1], var[0], var_value)
total_cost = pulp.value(model.objective)
print "The total cost is €{} for 350 economy sausages and 500 premium sausages".format(round(total_cost, 2))
"""
Explanation: Now we add our constraints; bear in mind again how the use of list comprehensions allows scaling up to many ingredients or sausage types.
End of explanation
"""
|
quantumlib/Cirq | docs/gatezoo.ipynb | apache-2.0 | try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet --pre cirq
print("installed cirq.")
import IPython.display as ipd
import cirq
import inspect
def display_gates(*gates):
for gate_name in gates:
ipd.display(ipd.Markdown("---"))
gate = getattr(cirq, gate_name)
ipd.display(ipd.Markdown(f"#### cirq.{gate_name}"))
ipd.display(ipd.Markdown(inspect.cleandoc(gate.__doc__ or "")))
else:
ipd.display(ipd.Markdown("---"))
"""
Explanation: Gate Zoo
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/gatezoo.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/gatezoo.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/gatezoo.ipynbb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/gatezoo.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
Setup
Note: this notebook relies on unreleased Cirq features. If you want to try these features, make sure you install cirq via pip install cirq --pre
End of explanation
"""
display_gates("X", "Y", "Z", "H", "S", "T")
"""
Explanation: Cirq comes with many gates that are standard across quantum computing. This notebook serves as a reference sheet for these gates.
Single Qubit Gates
Gate Constants
Cirq defines constants which are gate instances for particular important single qubit gates.
End of explanation
"""
display_gates("Rx", "Ry", "Rz")
"""
Explanation: Traditional Pauli Rotation Gates
Cirq defines traditional single qubit rotations that are rotations in radians about the different Pauli axes.
End of explanation
"""
display_gates("XPowGate", "YPowGate", "ZPowGate")
"""
Explanation: Pauli PowGates
If you think of the cirq.Z gate as phasing the state $|1\rangle$ by $-1$, then you might think that the square root of this gate phases the state $|1\rangle$ by $i=\sqrt{-1}$. The XPowGate, YPowGate, and ZPowGate all act in this manner, phasing the state corresponding to their $-1$ eigenvalue by a prescribed amount. This ends up being the same as Rx, Ry, and Rz up to a global phase.
End of explanation
"""
display_gates("PhasedXPowGate", "PhasedXZGate", "HPowGate")
"""
Explanation: More Single Qubit Gates
Many quantum computing implementations use qubits whose energy eigenstates are the computational basis states. In these cases it is often useful to move cirq.ZPowGates through other single qubit gates, "phasing" the other gates. For these scenarios, the following phased gates are useful.
End of explanation
"""
display_gates("CX", "CZ", "SWAP", "ISWAP", "SQRT_ISWAP", "SQRT_ISWAP_INV")
"""
Explanation: Two Qubit Gates
Gate Constants
Cirq defines convenient constants for common two qubit gates.
End of explanation
"""
display_gates("XXPowGate", "YYPowGate", "ZZPowGate")
"""
Explanation: Parity Gates
If $P$ is a non-identity Pauli matrix, then it has eigenvalues $\pm 1$. $P \otimes P$ similarly has eigenvalues $\pm 1$ which are the product of the eigenvalues of the single $P$ eigenvalues. In this sense, $P \otimes P$ has an eigenvalue which encodes the parity of the eigenvalues of the two qubits. If you think of $P \otimes P$ as phasing its $-1$ eigenvectors by $-1$, then you could consider $(P \otimes P)^{\frac{1}{2}}$ as the gate that phases the $-1$ eigenvectors by $\sqrt{-1} =i$. The Parity gates are exactly these gates for the three different non-identity Paulis.
End of explanation
"""
display_gates("XX", "YY", "ZZ")
"""
Explanation: There are also constants (XX, YY, and ZZ) that can be exponentiated to produce the parity gates.
End of explanation
"""
display_gates("FSimGate", "PhasedFSimGate")
"""
Explanation: Fermionic Gates
If we think of $|1\rangle$ as an excitation, then the gates that preserve the number of excitations are the fermionic gates. There are two implementations, with differing phase conventions.
End of explanation
"""
display_gates("SwapPowGate", "ISwapPowGate", "CZPowGate", "CXPowGate", "PhasedISwapPowGate")
"""
Explanation: Two qubit PowGates
Just as cirq.XPowGate represents a powering of cirq.X, our two qubit gate constants also have corresponding "Pow" versions.
End of explanation
"""
display_gates("CCX", "CCZ", "CSWAP")
"""
Explanation: Three Qubit Gates
Gate Constants
Cirq provides constants for common three qubit gates.
End of explanation
"""
display_gates("CCXPowGate", "CCZPowGate")
"""
Explanation: Three Qubit Pow Gates
Corresponding to some of the above gate constants are the corresponding PowGates.
End of explanation
"""
display_gates("IdentityGate", "WaitGate")
"""
Explanation: N Qubit Gates
Do Nothing Gates
Sometimes you just want a gate to represent doing nothing.
End of explanation
"""
display_gates("MeasurementGate")
"""
Explanation: Measurement Gates
Measurement gates are gates that represent a measurement and can operate on any number of qubits.
End of explanation
"""
display_gates("MatrixGate", "DiagonalGate", "TwoQubitDiagonalGate", "ThreeQubitDiagonalGate")
"""
Explanation: Matrix Gates
If one has a specific unitary matrix in mind, then one can construct it using matrix gates, or, if the unitary is diagonal, the diagonal gates.
End of explanation
"""
display_gates("DensePauliString", "MutableDensePauliString", "PauliStringPhasorGate")
"""
Explanation: Pauli String Gates
Pauli strings are expressions like "XXZ" representing the Pauli operator X acting on the first two qubits and Z on the last qubit, along with a numeric (or symbolic) coefficient. When the coefficient is a unit complex number, this is a valid unitary gate. Similarly, one can construct gates which phase the $\pm 1$ eigenvalues of such a Pauli string.
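As an illustration (a NumPy sketch rather than Cirq's DensePauliString itself), the string "XXZ" with a unit-modulus coefficient is a valid unitary:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

# "XXZ": X on the first two qubits, Z on the last, with coefficient i.
pauli_string = 1j * np.kron(np.kron(X, X), Z)

# A unit complex coefficient times a Pauli string is unitary.
assert np.allclose(pauli_string @ pauli_string.conj().T, np.eye(8))
```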
End of explanation
"""
display_gates("BooleanHamiltonianGate", "QuantumFourierTransformGate", "PhaseGradientGate")
"""
Explanation: Algorithm Based Gates
It is useful to define composite gates which correspond to algorithmic primitives; e.g., one can think of the Fourier transform as a single unitary gate.
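For intuition (plain NumPy, not Cirq's QuantumFourierTransformGate), the quantum Fourier transform on $n$ qubits is just the $2^n$-point DFT matrix, which is a single unitary:

```python
import numpy as np

n = 3            # number of qubits
N = 2 ** n       # dimension of the state space
j, k = np.meshgrid(np.arange(N), np.arange(N))
qft = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

# The whole transform is one unitary matrix, hence a valid composite gate.
assert np.allclose(qft @ qft.conj().T, np.eye(N))
```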
End of explanation
"""
display_gates("QubitPermutationGate")
"""
Explanation: Classical Permutation Gates
Sometimes you want to represent shuffling of qubits.
End of explanation
"""
|
miykael/nipype_tutorial | notebooks/basic_mapnodes.ipynb | bsd-3-clause | from nipype import Function
def square_func(x):
return x ** 2
square = Function(["x"], ["f_x"], square_func)
"""
Explanation: MapNode
If you want to iterate over a list of inputs, but need to feed all iterated outputs afterward as one input (an array) to the next node, you need to use a MapNode. A MapNode is quite similar to a normal Node, but it can take a list of inputs and operate over each input separately, ultimately returning a list of outputs.
Imagine that you have a list of items (let's say files) and you want to execute the same node on them (for example some smoothing or masking). Some nodes accept multiple files and do exactly the same thing on them, but some don't (they expect only one file). MapNode can solve this problem. Imagine you have the following workflow:
<img src="../static/images/mapnode.png" width="325">
Node A outputs a list of files, but node B accepts only one file. Additionally, C expects a list of files. What you would like is to run B for every file in the output of A and collect the results as a list and feed it to C. Something like this:
```python
from nipype import Node, MapNode, Workflow
a = Node(interface=A(), name="a")
b = MapNode(interface=B(), name="b", iterfield=['in_file'])
c = Node(interface=C(), name="c")
my_workflow = Workflow(name="my_workflow")
my_workflow.connect([(a,b,[('out_files','in_file')]),
(b,c,[('out_file','in_files')])
])
```
Let's demonstrate this with a simple function interface:
End of explanation
"""
square.run(x=2).outputs.f_x
"""
Explanation: We see that this function just takes a numeric input and returns its squared value.
End of explanation
"""
from nipype import MapNode
square_node = MapNode(square, name="square", iterfield=["x"])
square_node.inputs.x = [0, 1, 2, 3]
res = square_node.run()
res.outputs.f_x
"""
Explanation: What if we wanted to square a list of numbers? We could set an iterable and just split up the workflow into multiple sub-workflows. But say we were making a simple workflow that squared a list of numbers and then summed them. The sum node would expect a list, but using an iterable would make a bunch of sum nodes, and each would get one number from the list. The solution here is to use a MapNode.
iterfield
The MapNode constructor has a field called iterfield, which tells it what inputs should be expecting a list.
End of explanation
"""
def power_func(x, y):
return x ** y
power = Function(["x", "y"], ["f_xy"], power_func)
power_node = MapNode(power, name="power", iterfield=["x", "y"])
power_node.inputs.x = [0, 1, 2, 3]
power_node.inputs.y = [0, 1, 2, 3]
res = power_node.run()
print(res.outputs.f_xy)
"""
Explanation: Because iterfield can take a list of names, you can operate over multiple sets of data, as long as they're the same length. The values in each list will be paired; it does not compute a combinatoric product of the lists.
End of explanation
"""
power_node = MapNode(power, name="power", iterfield=["x"])
power_node.inputs.x = [0, 1, 2, 3]
power_node.inputs.y = 3
res = power_node.run()
print(res.outputs.f_xy)
"""
Explanation: But not every input needs to be an iterfield.
End of explanation
"""
from nipype.algorithms.misc import Gunzip
from nipype.interfaces.spm import Realign
from nipype import Node, MapNode, Workflow
# Here we specify a list of files (for this tutorial, we just add the same file twice)
files = ['/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz',
'/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz']
realign = Node(Realign(register_to_mean=True),
name='motion_correction')
"""
Explanation: As in the case of iterables, each underlying MapNode execution can happen in parallel. Hopefully, you see how these tools allow you to write flexible, reusable workflows that will help you process large amounts of data efficiently and reproducibly.
In more advanced applications it is useful to be able to iterate over items of nested lists (for example [[1,2],[3,4]]). MapNode allows you to do this with the "nested=True" parameter. Outputs will preserve the same nested structure as the inputs.
Why is this important?
Let's consider that we have multiple functional images (A) and each of them should be motion corrected (B1, B2, B3, ...). But afterward, we want to put them all together into a GLM, i.e. the input for the GLM should be an array of [B1, B2, B3, ...]. Iterables can't do that. They would split up the pipeline. Therefore, we need MapNodes.
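Conceptually (a plain-Python sketch, not actual Nipype code), a MapNode applies B once per element of A's output and hands the collected list to C:

```python
def a():
    # Node A: produces a list of items (these file names are made up).
    return ['run1.nii.gz', 'run2.nii.gz']

def b(item):
    # Node B: accepts exactly one item at a time.
    return item.replace('.gz', '')

def c(items):
    # Node C: expects the whole list at once.
    return sorted(items)

# MapNode semantics: run b for every element of a()'s output,
# then feed the collected results to c as a single list.
out = c([b(item) for item in a()])
print(out)  # ['run1.nii', 'run2.nii']
```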
<img src="../static/images/mapnode.png" width="300">
Let's look at a simple example, where we want to motion correct two functional images. For this we need two nodes:
- Gunzip, to unzip the files (plural)
- Realign, to do the motion correction
End of explanation
"""
gunzip = Node(Gunzip(), name='gunzip',)
try:
    gunzip.inputs.in_file = files
except Exception as err:
    if "TraitError" in str(err.__class__):
        print("TraitError:", err)
    else:
        raise
else:
    # Assigning a list to a plain Node should have raised a TraitError
    raise RuntimeError("expected a TraitError")
"""
Explanation: If we try to specify the input for the Gunzip node with a simple Node, we get the following error:
End of explanation
"""
gunzip = MapNode(Gunzip(), name='gunzip',
iterfield=['in_file'])
gunzip.inputs.in_file = files
"""
Explanation: bash
TraitError: The 'in_file' trait of a GunzipInputSpec instance must be an existing file name, but a value of ['/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz', '/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz'] <class 'list'> was specified.
But if we do it with a MapNode, it works:
End of explanation
"""
mcflow = Workflow(name='realign_with_spm')
mcflow.connect(gunzip, 'out_file', realign, 'in_files')
mcflow.base_dir = '/output'
mcflow.run('MultiProc', plugin_args={'n_procs': 4})
"""
Explanation: Now, we just have to create a workflow, connect the nodes and we can run it:
End of explanation
"""
#write your solution here
from nipype import Workflow, Node, MapNode, Function
import os
def range_fun(n_min, n_max):
return list(range(n_min, n_max+1))
def factorial(n):
# print("FACTORIAL, {}".format(n))
import math
return math.factorial(n)
def summing(terms):
return sum(terms)
wf_ex1 = Workflow('ex1')
wf_ex1.base_dir = os.getcwd()
range_nd = Node(Function(input_names=['n_min', 'n_max'],
output_names=['range_list'],
function=range_fun),
name='range_list')
factorial_nd = MapNode(Function(input_names=['n'],
output_names=['fact_out'],
function=factorial),
iterfield=['n'],
name='factorial')
summing_nd = Node(Function(input_names=['terms'],
output_names=['sum_out'],
function=summing),
name='summing')
range_nd.inputs.n_min = 0
range_nd.inputs.n_max = 3
wf_ex1.add_nodes([range_nd])
wf_ex1.connect(range_nd, 'range_list', factorial_nd, 'n')
wf_ex1.connect(factorial_nd, 'fact_out', summing_nd, "terms")
eg = wf_ex1.run()
"""
Explanation: Exercise 1
Create a workflow to calculate a sum of factorials of numbers from a range between $n_{min}$ and $n_{max}$, i.e.:
$$\sum_{k=n_{min}}^{n_{max}} k! = 0! + 1! + 2! + 3! + \cdots$$
if $n_{min}=0$ and $n_{max}=3$
$$\sum_{k=0}^{3} k! = 0! + 1! + 2! + 3! = 1 + 1 + 2 + 6 = 10$$
Use Node for a function that creates a list of integers and a function that sums everything at the end. Use MapNode to calculate factorials.
End of explanation
"""
eg.nodes()
"""
Explanation: let's print all nodes:
End of explanation
"""
list(eg.nodes())[2].result.outputs
"""
Explanation: the final result should be 10:
End of explanation
"""
print(list(eg.nodes())[0].result.outputs)
print(list(eg.nodes())[1].result.outputs)
"""
Explanation: we can also check the results of two other nodes:
End of explanation
"""
|
giacomov/3ML | docs/notebooks/Quickstart.ipynb | bsd-3-clause | from threeML import *
# Let's generate some data with y = Powerlaw(x)
gen_function = Powerlaw()
# Generate a dataset using the power law, and a
# constant 30% error
x = np.logspace(0, 2, 50)
xyl_generator = XYLike.from_function("sim_data", function = gen_function,
x = x,
yerr = 0.3 * gen_function(x))
y = xyl_generator.y
y_err = xyl_generator.yerr
"""
Explanation: Quickstart
In this simple example we will generate some simulated data, and fit them with 3ML.
Let's start by generating our dataset:
End of explanation
"""
fit_function = Powerlaw()
xyl = XYLike("data", x, y, y_err)
parameters, like_values = xyl.fit(fit_function)
"""
Explanation: We can now fit it easily with 3ML:
End of explanation
"""
fig = xyl.plot(x_scale='log', y_scale='log')
"""
Explanation: Plot data and model:
End of explanation
"""
gof, all_results, all_like_values = xyl.goodness_of_fit()
print("The null-hypothesis probability from simulations is %.2f" % gof['data'])
"""
Explanation: Compute the goodness of fit using Monte Carlo simulations (NOTE: if you repeat this exercise from the beginning many times, you should find that the quantity "gof" is a random number distributed uniformly between 0 and 1. That is the expected result if the model is a good representation of the data)
End of explanation
"""
import scipy.stats
# Compute the number of degrees of freedom
n_dof = len(xyl.x) - len(fit_function.free_parameters)
# Get the observed value for chi2
# (the factor of 2 comes from the fact that the Gaussian log-likelihood is half of a chi2)
obs_chi2 = 2 * like_values['-log(likelihood)']['data']
theoretical_gof = scipy.stats.chi2(n_dof).sf(obs_chi2)
print("The null-hypothesis probability from theory is %.2f" % theoretical_gof)
"""
Explanation: The procedure outlined above works for any distribution for the data (Gaussian or Poisson). In this case we are using Gaussian data, thus the log(likelihood) is just half of a $\chi^2$. We can then also use the $\chi^2$ test, which gives a close result without performing simulations:
End of explanation
"""
|
jakevdp/sklearn_tutorial | notebooks/04.2-Clustering-KMeans.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
"""
Explanation: <small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Clustering: K-Means In-Depth
Here we'll explore K Means Clustering, which is an unsupervised clustering technique.
We'll start with our standard set of initial imports
End of explanation
"""
from sklearn.datasets import make_blobs  # formerly sklearn.datasets.samples_generator
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], s=50);
"""
Explanation: Introducing K-Means
K Means is an algorithm for unsupervised clustering: that is, finding clusters in data based on the data attributes alone (not the labels).
K Means is a relatively easy-to-understand algorithm. It searches for cluster centers which are the mean of the points within them, such that every point is closest to the cluster center it is assigned to.
Let's look at how KMeans operates on the simple clusters we looked at previously. To emphasize that this is unsupervised, we'll not plot the colors of the clusters:
End of explanation
"""
from sklearn.cluster import KMeans
est = KMeans(4) # 4 clusters
est.fit(X)
y_kmeans = est.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='rainbow');
"""
Explanation: By eye, it is relatively easy to pick out the four clusters. If you were to perform an exhaustive search for the different segmentations of the data, however, the search space would be exponential in the number of points. Fortunately, there is a well-known Expectation Maximization (EM) procedure which scikit-learn implements, so that KMeans can be solved relatively quickly.
End of explanation
"""
from fig_code import plot_kmeans_interactive
plot_kmeans_interactive();
"""
Explanation: The algorithm identifies the four clusters of points in a manner very similar to what we would do by eye!
The K-Means Algorithm: Expectation Maximization
K-Means is an example of an algorithm which uses an Expectation-Maximization approach to arrive at the solution.
Expectation-Maximization is a two-step approach which works as follows:
Guess some cluster centers
Repeat until converged
A. Assign points to the nearest cluster center
B. Set the cluster centers to the mean
Let's quickly visualize this process:
End of explanation
"""
from sklearn.datasets import load_digits
digits = load_digits()
est = KMeans(n_clusters=10)
clusters = est.fit_predict(digits.data)
est.cluster_centers_.shape
"""
Explanation: This algorithm will (often) converge to the optimal cluster centers.
KMeans Caveats
The convergence of this algorithm is not guaranteed; for that reason, scikit-learn by default uses a large number of random initializations and finds the best results.
Also, the number of clusters must be set beforehand... there are other clustering algorithms for which this requirement may be lifted.
Application of KMeans to Digits
For a closer-to-real-world example, let's again take a look at the digits data. Here we'll use KMeans to automatically cluster the data in 64 dimensions, and then look at the cluster centers to see what the algorithm has found.
End of explanation
"""
fig = plt.figure(figsize=(8, 3))
for i in range(10):
ax = fig.add_subplot(2, 5, 1 + i, xticks=[], yticks=[])
ax.imshow(est.cluster_centers_[i].reshape((8, 8)), cmap=plt.cm.binary)
"""
Explanation: We see ten clusters in 64 dimensions. Let's visualize each of these cluster centers to see what they represent:
End of explanation
"""
from scipy.stats import mode
labels = np.zeros_like(clusters)
for i in range(10):
mask = (clusters == i)
labels[mask] = mode(digits.target[mask])[0]
"""
Explanation: We see that even without the labels, KMeans is able to find clusters whose means are recognizable digits (with apologies to the number 8)!
The cluster labels are permuted; let's fix this:
End of explanation
"""
from sklearn.decomposition import PCA
X = PCA(2).fit_transform(digits.data)
kwargs = dict(cmap = plt.cm.get_cmap('rainbow', 10),
edgecolor='none', alpha=0.6)
fig, ax = plt.subplots(1, 2, figsize=(8, 4))
ax[0].scatter(X[:, 0], X[:, 1], c=labels, **kwargs)
ax[0].set_title('learned cluster labels')
ax[1].scatter(X[:, 0], X[:, 1], c=digits.target, **kwargs)
ax[1].set_title('true labels');
"""
Explanation: For good measure, let's use our PCA visualization and look at the true cluster labels and K-means cluster labels:
End of explanation
"""
from sklearn.metrics import accuracy_score
accuracy_score(digits.target, labels)
"""
Explanation: Just for kicks, let's see how accurate our K-Means classifier is with no label information:
End of explanation
"""
from sklearn.metrics import confusion_matrix
print(confusion_matrix(digits.target, labels))
plt.imshow(confusion_matrix(digits.target, labels),
cmap='Blues', interpolation='nearest')
plt.colorbar()
plt.grid(False)
plt.ylabel('true')
plt.xlabel('predicted');
"""
Explanation: 80% – not bad! Let's check-out the confusion matrix for this:
End of explanation
"""
from sklearn.datasets import load_sample_image
china = load_sample_image("china.jpg")
plt.imshow(china)
plt.grid(False);
"""
Explanation: Again, this is an 80% classification accuracy for an entirely unsupervised estimator which knew nothing about the labels.
Example: KMeans for Color Compression
One interesting application of clustering is in color image compression. For example, imagine you have an image with millions of colors. In most images, a large number of the colors will be unused, and conversely a large number of pixels will have similar or identical colors.
Scikit-learn has a number of images that you can play with, accessed through the datasets module. For example:
End of explanation
"""
china.shape
"""
Explanation: The image itself is stored in a 3-dimensional array, of size (height, width, RGB):
End of explanation
"""
X = (china / 255.0).reshape(-1, 3)
print(X.shape)
"""
Explanation: We can envision this image as a cloud of points in a 3-dimensional color space. We'll rescale the colors so they lie between 0 and 1, then reshape the array to be a typical scikit-learn input:
End of explanation
"""
from sklearn.cluster import MiniBatchKMeans
# reduce the size of the image for speed
n_colors = 64
X = (china / 255.0).reshape(-1, 3)
model = MiniBatchKMeans(n_colors)
labels = model.fit_predict(X)
colors = model.cluster_centers_
new_image = colors[labels].reshape(china.shape)
new_image = (255 * new_image).astype(np.uint8)
# create and plot the new image
with plt.style.context('seaborn-white'):
plt.figure()
plt.imshow(china)
plt.title('input: 16 million colors')
plt.figure()
plt.imshow(new_image)
plt.title('{0} colors'.format(n_colors))
"""
Explanation: We now have 273,280 points in 3 dimensions.
Our task is to use KMeans to compress the $256^3$ colors into a smaller number (say, 64 colors). Basically, we want to find $N_{color}$ clusters in the data, and create a new image where the true input color is replaced by the color of the closest cluster.
Here we'll use MiniBatchKMeans, a more sophisticated estimator that performs better for larger datasets:
End of explanation
"""
|
GSimas/EEL7045 | .ipynb_checkpoints/Aula 10 - Circuitos RL-checkpoint.ipynb | mit | print("Exemplo 7.3")
import numpy as np
from sympy import *
I0 = 10
L = 0.5
R1 = 2
R2 = 4
t = symbols('t')
# Determine Req = Rth
# Hypothetical test current Io = 1 A
# Mesh analysis
#4i2 + 2(i2 - i0) = -3i0
#6i2 = 5
#i2 = 5/6
#ix' = i2 - i1 = 5/6 - 1 = -1/6
#Vr1 = ix' * R1 = -1/6 * 2 = -1/3
#Rth = Vr1/i0 = (-1/3)/(-1) = 1/3
Rth = 1/3
tau = L/Rth
i = I0*exp(-t/tau)
print("Current i(t):", i, "A")
vl = L*diff(i,t)
ix = vl/R1
print("Current ix(t):", ix, "A")
"""
Explanation: Source-free RL Circuits
Jupyter Notebook developed by Gustavo S.S.
Consider the series connection of a resistor and an inductor, as shown in Figure 7.11. At t = 0, we assume the inductor carries an initial current I0.
\begin{align}
I(0) = I_0
\end{align}
Thus, the corresponding energy stored in the inductor is:
\begin{align}
w(0) = \frac{1}{2} LI_0^2
\end{align}
Solving the loop equation and exponentiating, we obtain:
\begin{align}
i(t) = I_0 e^{-t \frac{R}{L}}
\end{align}
This shows that the natural response of an RL circuit is an exponential decay of the initial current. The current response is shown in Figure 7.12. It is evident from the equation that the time constant for the RL circuit is:
\begin{align}
τ = \frac{L}{R}
\end{align}
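The decay above is just the solution of the first-order loop equation $L \frac{di}{dt} + Ri = 0$ with $i(0) = I_0$; a short SymPy check (symbol names here are illustrative):

```python
from sympy import symbols, Function, Eq, dsolve, exp

t = symbols('t')
R, L, I0 = symbols('R L I_0', positive=True)
i = Function('i')

# KVL around the source-free RL loop: L di/dt + R i = 0
ode = Eq(L * i(t).diff(t) + R * i(t), 0)
sol = dsolve(ode, i(t), ics={i(0): I0})

# The solution is the exponential decay i(t) = I_0 * exp(-R*t/L)
assert sol.rhs.equals(I0 * exp(-R * t / L))
```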
The voltage across the resistor is:
\begin{align}
v_R(t) = I_0 R e^{-t/τ}
\end{align}
The power dissipated in the resistor is:
\begin{align}
p = v_R i = I_0^2 R e^{-2t/τ}
\end{align}
The energy absorbed by the resistor is:
\begin{align}
w_R(t) = \int_{0}^{t} p(t)dt = \frac{1}{2} L I_0^2 (1 - e^{-2t/τ})
\end{align}
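The energy expression can be verified symbolically (a SymPy sketch; here $τ = L/R$):

```python
from sympy import symbols, integrate, exp, simplify, Rational

t, tp = symbols('t t_p', positive=True)
R, L, I0 = symbols('R L I_0', positive=True)
tau = L / R

p = I0**2 * R * exp(-2 * tp / tau)   # instantaneous power in the resistor
w_R = integrate(p, (tp, 0, t))       # energy absorbed up to time t

expected = Rational(1, 2) * L * I0**2 * (1 - exp(-2 * t / tau))
assert simplify(w_R - expected) == 0
```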
As t → ∞, w_R(∞) → ½ L I_0², which is the same as w_L(0), the energy initially stored in the inductor.
Thus, the procedure is:
Determine the initial current i(0) = I0 through the inductor.
Determine the time constant τ = L/R.
Example 7.3
Assuming i(0) = 10 A, calculate i(t) and ix(t) in the circuit of Figure 7.13.
End of explanation
"""
print("Practice Problem 7.3")
L = 2
I0 = 12
R1 = 1
# Determine Req = Rth
# Hypothetical test current i0 = 1 A
#vx = 4 V
#vx + 2(i0 - i1) + 2vx - v0 = 0
#-2i1 - v0 = -14
#-2vx + 2(i1 - i0) + 6i1 = 0
#8i1 = 10
#i1 = 10/8 = 5/4
#v0 = vx + 2(i0 - i1) + 2vx
#v0 = 4 + 2 - 5/2 + 8 = 11.5
#Rth = v0/i0 = 11.5/1 = 11.5
Rth = 11.5
tau = L/Rth
i = I0*exp(-t/tau)
print("Current i(t):", i, "A")
vx = -R1*i
print("Voltage vx(t):", vx, "V")
"""
Explanation: Practice Problem 7.3
Determine i and vx in the circuit of Figure 7.15. Let i(0) = 12 A.
End of explanation
"""
print("Example 7.4")
Vs = 40
L = 2
def Req(x, y): # equivalent resistance of two resistors in parallel
res = (x*y)/(x + y)
return res
Req1 = Req(4,12)
V1 = Vs*Req1/(Req1 + 2)
I0 = V1/4
Req2 = 12 + 4
Rth = Req(Req2, 16)
tau = L/Rth
i = I0*exp(-t/tau)
print("Current i(t):", i, "A")
"""
Explanation: Example 7.4
The switch in the circuit of Figure 7.16 had been closed for a long time. At t = 0, the switch is opened. Calculate i(t) for t > 0.
End of explanation
"""
print("Practice Problem 7.4")
L = 2
Cs = 15
R1 = 24
Req1 = Req(12,8)
i1 = Cs*R1/(R1 + Req1)
I0 = i1*8/(8 + 12)
Rth = Req(12+8,5)
tau = L/Rth
i = I0*exp(-t/tau)
print("Current i(t):", i, "A")
"""
Explanation: Practice Problem 7.4
For the circuit of Figure 7.18, determine i(t) for t > 0.
End of explanation
"""
print("Example 7.5")
Vs = 10
L = 2
print("For t < 0, i0:", 0, "A")
I0 = Vs/(2 + 3)
v0 = 3*I0
print("For t < 0, i:", I0, "A")
print("For t < 0, v0:", v0, "V")
Rth = Req(3,6)
tau = L/Rth
i = I0*exp(-t/tau)
v0 = -L*diff(i,t)
i0 = -i*3/(3 + 6)
print("For t > 0, i0:", i0, "A")
print("For t > 0, v0:", v0, "V")
print("For t > 0, i:", i, "A")
"""
Explanation: Example 7.5
In the circuit shown in Figure 7.19, find io, vo, and i for all time, assuming the switch had been open for a long time.
End of explanation
"""
print("Practice Problem 7.5")
Cs = 24
L = 1
# For t < 0
i = Cs*4/(4 + 2)
i0 = Cs*2/(2 + 4)
v0 = 2*i
print("For t < 0, i =", i, "A")
print("For t < 0, i0 =", i0, "A")
print("For t < 0, v0 =", v0, "V")
# For t > 0
R = Req(4 + 2,3)
tau = L/R
I0 = i
i = I0*exp(-t/tau)
i0 = -i*3/(3 + 4 + 2)
v0 = -i0*2
print("For t > 0, i =", i, "A")
print("For t > 0, i0 =", i0, "A")
print("For t > 0, v0 =", v0, "V")
"""
Explanation: Practice Problem 7.5
Determine i, io, and vo for all t in the circuit shown in Figure 7.22.
End of explanation
"""
|
armandosrz/UdacityNanoMachine | titanic-survival-exploration/titanic_survival_exploration/Titanic_Survival_Exploration.ipynb | apache-2.0 | import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
"""
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
"""
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
"""
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
"""
def accuracy_score(truth, pred):
""" Returns accuracy score for input truth and predictions. """
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
"""
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
End of explanation
"""
def predictions_0(data):
""" Model with no features. Always predicts a passenger did not survive. """
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
#print predictions
"""
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The predictions_0 function below will always predict that a passenger did not survive.
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
"""
survival_stats(data, outcomes, 'Sex')
"""
Explanation: Answer: 61.62%.
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
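The same comparison can be made numerically with a pandas groupby; the sketch below uses a tiny made-up frame, since it relies only on the Sex and Survived columns:

```python
import pandas as pd

# Tiny stand-in for the full manifest.
df = pd.DataFrame({'Sex': ['male', 'female', 'male', 'female', 'male'],
                   'Survived': [0, 1, 0, 1, 1]})

# The mean of a 0/1 outcome per group is that group's survival rate.
rates = df.groupby('Sex')['Survived'].mean()
print(rates['female'], rates['male'])
```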
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
"""
def predictions_1(data):
""" Model with one feature:
- Predict a passenger survived if they are female. """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'male':
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
#print predictions
"""
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
"""
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
"""
Explanation: Answer: 78.68%.
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
"""
def predictions_2(data):
""" Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10. """
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == 'male':
if passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
"""
Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
"""
survival_stats(data, outcomes, 'Age', ["Sex == 'female'"])
survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Age > 10", "Pclass == 3","Parch == 0"])
survival_stats(data, outcomes, 'Pclass', ["Sex == 'female'"])
# females from classes one and two will survive
survival_stats(data, outcomes, 'Parch', ["Sex == 'female'", "Pclass == 3"])
# in third class, passengers with Parch equal to 0 are more likely to survive
survival_stats(data, outcomes, 'Fare', ["Sex == 'female'", "Pclass == 3", "Parch != 0"])
# those with Fare less than 20 are likely to survive
"""
Explanation: Answer: 79.35%.
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to examine various survival statistics.
Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
"""
def predictions_3(data):
""" Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == 'male':
if passenger['Age'] < 10:
predictions.append(1)
elif passenger['Pclass'] == 1 and 20 < passenger['Age'] < 40:
predictions.append(1)
elif passenger['Pclass'] == 3 and passenger['Parch'] == 1 and 20 < passenger['Age'] < 30:
predictions.append(1)
else:
predictions.append(0)
else:
if passenger['Pclass'] == 3:
if passenger['Age'] > 40 and passenger['Age'] < 60:
predictions.append(0)
elif passenger['Parch'] == 0:
predictions.append(1)
else:
if passenger['Fare'] < 20:
predictions.append(1)
else:
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
"""
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
End of explanation
"""
print(accuracy_score(outcomes, predictions))
"""
Explanation: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint: Run the code cell below to see the accuracy of your predictions.
End of explanation
"""
# --- repo: scotthuang1989/Python-3-Module-of-the-Week
# --- path: BeautifulSoup/Improving Performance by Parsing Only Part of the Document.ipynb
# --- license: apache-2.0

from bs4 import BeautifulSoup, SoupStrainer
import re
doc = '''Bob reports <a href="http://www.bob.com/">success</a>
with his plasma breeding <a
href="http://www.bob.com/plasma">experiments</a>. <i>Don't get any on
us, Bob!</i>
<br><br>Ever hear of annular fusion? The folks at <a
href="http://www.boogabooga.net/">BoogaBooga</a> sure seem obsessed
with it. Secret project, or <b>WEB MADNESS?</b> You decide!'''
"""
Explanation: Introduction reference
Beautiful Soup turns every element of a document into a Python object and connects it to a bunch of other Python objects. If you only need a subset of the document, this is really slow. But you can pass in a SoupStrainer as the parse_only argument to the soup constructor. Beautiful Soup checks each element against the SoupStrainer, and only if it matches is the element turned into a Tag or NavigableString and added to the tree.
If an element is added to the tree, then so are its children, even if they wouldn't have matched the SoupStrainer on their own. This lets you parse only the chunks of a document that contain the data you want.
End of explanation
"""
links = SoupStrainer('a')
[tag for tag in BeautifulSoup(doc,"lxml",parse_only=links)]
"""
Explanation: parse only <a> tags
End of explanation
"""
linksToBob = SoupStrainer('a', href=re.compile('bob.com/'))
[tag for tag in BeautifulSoup(doc,"lxml", parse_only=linksToBob)]
mentionsOfBob = SoupStrainer(text=re.compile("Bob"))
[text for text in BeautifulSoup(doc,"lxml", parse_only=mentionsOfBob)]
"""
Explanation: parse only <a> tags with specified content inside them
End of explanation
"""
allCaps = SoupStrainer(text=lambda t:t.upper()==t)
[text for text in BeautifulSoup(doc,"lxml", parse_only=allCaps)]
"""
Explanation: specify a custom criterion for matching
End of explanation
"""
# --- repo: citxx/sis-python
# --- path: crash-course/slices.ipynb
# --- license: mit

lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[::])
"""
Explanation: <h1>Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Получение-среза" data-toc-modified-id="Получение-среза-1">Getting a slice</a></span><ul class="toc-item"><li><span><a href="#Без-параметров" data-toc-modified-id="Без-параметров-1.1">Without parameters</a></span></li><li><span><a href="#Указываем-конец" data-toc-modified-id="Указываем-конец-1.2">Specifying the end</a></span></li><li><span><a href="#Указываем-начало" data-toc-modified-id="Указываем-начало-1.3">Specifying the start</a></span></li><li><span><a href="#Указываем-шаг" data-toc-modified-id="Указываем-шаг-1.4">Specifying the step</a></span></li><li><span><a href="#Отрицательный-шаг" data-toc-modified-id="Отрицательный-шаг-1.5">Negative step</a></span></li></ul></li><li><span><a href="#Особенности-срезов" data-toc-modified-id="Особенности-срезов-2">Peculiarities of slices</a></span></li><li><span><a href="#Примеры-использования" data-toc-modified-id="Примеры-использования-3">Usage examples</a></span></li><li><span><a href="#Срезы-и-строки" data-toc-modified-id="Срезы-и-строки-4">Slices and strings</a></span></li></ul></div>
Slices
Getting a slice
Sometimes we need only a part of a list, for example all elements from the 5th to the 10th, or all elements with even indices. This can be done with slices.
A slice is written as list[start:end:step], taking from the list the elements with indices from start (inclusive) to end (exclusive) with step step. Any of start, end, step may be omitted. By default start is 0, end equals the length of the list (the index of the last element + 1), and step is 1.
Slices and range are very similar in their set of parameters.
Without parameters
The slice a[::] simply contains the whole list a:
End of explanation
"""
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[:])
"""
Explanation: The : before the step can also be omitted if the step is not specified:
End of explanation
"""
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[:5]) # the same as lst[:5:]
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[:0])
"""
Explanation: Specifying the end
Specify up to which element to take:
End of explanation
"""
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[2:])
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[2:5])
"""
Explanation: Specifying the start
Or from which element to start:
End of explanation
"""
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[1:7:2])
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[::2])
"""
Explanation: Specifying the step
End of explanation
"""
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[::-1])
"""
Explanation: Negative step
You can even use a negative step, as in range:
End of explanation
"""
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[2::-1])
"""
Explanation: With the start specified, a slice with a negative step can be read as: "starting from the element with index 2, go backwards with step 1 until the list ends".
End of explanation
"""
lst = [1, 2, 3, 4, 5, 6, 7, 8]
# Suppose we want the elements with indices 1 and 2 in reverse order
print(lst[1:3:-1])
# Starting from the element with index 2, go backwards with step 1
# until we meet the element with index 0 (0 exclusive)
print(lst[2:0:-1])
"""
Explanation: For a negative step it is important to give the start and the end in the right order, and to remember that the left number is always inclusive and the right one is exclusive:
End of explanation
"""
a = [1, 2, 3, 4] # a is a reference to the list; its elements are references to the objects 1, 2, 3, 4
b = a # b is a reference to the same list
a[0] = -1 # Change an element of list a
print("a =", a)
print("b =", b) # The value of b changed too!
print()
a = [1, 2, 3, 4]
b = a[:] # Create a copy of the list
a[0] = -1 # Change an element of list a
print("a =", a)
print("b =", b) # The value of b did not change!
"""
Explanation: Peculiarities of slices
Slices do not modify the current list; they create a copy. Slices can be used to work around the reference semantics when changing one element of a list:
End of explanation
"""
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[:4] + lst[5:])
"""
Explanation: Usage examples
With slices you can, for example, skip the list element at a given index:
End of explanation
"""
lst = [1, 2, 3, 4, 5, 6, 7, 8]
swapped = lst[5:] + lst[:5] # swap the two parts, splitting at the element with index 5
print(swapped)
"""
Explanation: Or swap two parts of a list:
End of explanation
"""
s = "long string"
s = s[:2] + "!" + s[3:]
print(s)
"""
Explanation: Slices and strings
Slices can be used not only with lists but also with strings. For example, to change the third character of a string, you can do this:
End of explanation
"""
# --- repo: phoebe-project/phoebe2-docs
# --- path: 2.3/examples/legacy_spots.ipynb
# --- license: gpl-3.0

#!pip install -I "phoebe>=2.3,<2.4"
"""
Explanation: Comparing Spots in PHOEBE 2 vs PHOEBE Legacy
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
"""
b.add_spot(component='primary', relteff=0.8, radius=20, colat=45, long=90, feature='spot01')
b.add_dataset('lc', times=np.linspace(0,1,101))
b.add_compute('phoebe', irrad_method='none', compute='phoebe2')
b.add_compute('legacy', irrad_method='none', compute='phoebe1')
"""
Explanation: Adding Spots and Compute Options
End of explanation
"""
b.set_value_all('atm', 'extern_planckint')
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.0, 0.0])
b.run_compute('phoebe2', model='phoebe2model')
b.run_compute('phoebe1', model='phoebe1model')
"""
Explanation: Let's use the external atmospheres available for both phoebe1 and phoebe2
End of explanation
"""
afig, mplfig = b.plot(legend=True, ylim=(1.95, 2.05), show=True)
"""
Explanation: Plotting
End of explanation
"""
# --- repo: shareactorIO/pipeline
# --- path: source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/HvassLabsTutorials/05_Ensemble_Learning.ipynb
# --- license: apache-2.0

from IPython.display import Image
Image('images/02_network_flowchart.png')
"""
Explanation: TensorFlow Tutorial #05
Ensemble Learning
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
This tutorial shows how to use a so-called ensemble of convolutional neural networks. Instead of using a single neural network, we use several neural networks and average their outputs.
This is used on the MNIST data-set for recognizing hand-written digits. The ensemble improves the classification accuracy slightly on the test-set, but the difference is so small that it is possibly random. Furthermore, the ensemble mis-classifies some images that are correctly classified by some of the individual networks.
This tutorial builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text here is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials.
Flowchart
The following chart shows roughly how the data flows in a single Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial #02 for a more detailed description of this network and convolution in general.
This tutorial implements an ensemble of 5 such neural networks, where the network structure is the same but the weights and other variables are different for each network.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os
# Use PrettyTensor to simplify Neural Network construction.
import prettytensor as pt
"""
Explanation: Imports
End of explanation
"""
tf.__version__
"""
Explanation: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
End of explanation
"""
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
"""
Explanation: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
"""
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
"""
Explanation: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets, but we will make random training-sets further below.
End of explanation
"""
data.test.cls = np.argmax(data.test.labels, axis=1)
data.validation.cls = np.argmax(data.validation.labels, axis=1)
"""
Explanation: Class numbers
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.
End of explanation
"""
combined_images = np.concatenate([data.train.images, data.validation.images], axis=0)
combined_labels = np.concatenate([data.train.labels, data.validation.labels], axis=0)
"""
Explanation: Helper-function for creating random training-sets
We will train 5 neural networks on different training-sets that are selected at random. First we combine the original training- and validation-sets into one big set. This is done for both the images and the labels.
End of explanation
"""
print(combined_images.shape)
print(combined_labels.shape)
"""
Explanation: Check that the shape of the combined arrays is correct.
End of explanation
"""
combined_size = len(combined_images)
combined_size
"""
Explanation: Size of the combined data-set.
End of explanation
"""
train_size = int(0.8 * combined_size)
train_size
"""
Explanation: Define the size of the training-set used for each neural network. You can try and change this.
End of explanation
"""
validation_size = combined_size - train_size
validation_size
"""
Explanation: We do not use a validation-set during training, but this would be the size.
End of explanation
"""
def random_training_set():
# Create a randomized index into the full / combined training-set.
idx = np.random.permutation(combined_size)
# Split the random index into training- and validation-sets.
idx_train = idx[0:train_size]
idx_validation = idx[train_size:]
# Select the images and labels for the new training-set.
x_train = combined_images[idx_train, :]
y_train = combined_labels[idx_train, :]
# Select the images and labels for the new validation-set.
x_validation = combined_images[idx_validation, :]
y_validation = combined_labels[idx_validation, :]
# Return the new training- and validation-sets.
return x_train, y_train, x_validation, y_validation
"""
Explanation: Helper-function for splitting the combined data-set into a random training- and validation-set.
End of explanation
"""
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
"""
Explanation: Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
End of explanation
"""
def plot_images(images, # Images to plot, 2-d array.
cls_true, # True class-no for images.
ensemble_cls_pred=None, # Ensemble predicted class-no.
best_cls_pred=None): # Best-net predicted class-no.
assert len(images) == len(cls_true)
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
# Adjust vertical spacing if we need to print ensemble and best-net.
if ensemble_cls_pred is None:
hspace = 0.3
else:
hspace = 1.0
fig.subplots_adjust(hspace=hspace, wspace=0.3)
# For each of the sub-plots.
for i, ax in enumerate(axes.flat):
# There may not be enough images for all sub-plots.
if i < len(images):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if ensemble_cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
msg = "True: {0}\nEnsemble: {1}\nBest Net: {2}"
xlabel = msg.format(cls_true[i],
ensemble_cls_pred[i],
best_cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
"""
Explanation: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
End of explanation
"""
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
"""
Explanation: Plot a few images to see if data is correct
End of explanation
"""
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
"""
Explanation: TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below:
Placeholder variables used for inputting data to the graph.
Variables that are going to be optimized so as to make the convolutional network perform better.
The mathematical formulas for the neural network.
A loss measure that can be used to guide the optimization of the variables.
An optimization method which updates the variables.
In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.
Placeholder variables
Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.
End of explanation
"""
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
"""
Explanation: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
End of explanation
"""
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
"""
Explanation: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
End of explanation
"""
y_true_cls = tf.argmax(y_true, dimension=1)
"""
Explanation: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
End of explanation
"""
x_pretty = pt.wrap(x_image)
"""
Explanation: Neural Network
This section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial #03.
The basic idea is to wrap the input tensor x_image in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.
End of explanation
"""
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(class_count=10, labels=y_true)
"""
Explanation: Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.
Note that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers.
End of explanation
"""
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
"""
Explanation: Optimization Method
Pretty Tensor gave us the predicted class-label (y_pred) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.
It is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the AdamOptimizer to minimize the loss.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
End of explanation
"""
y_pred_cls = tf.argmax(y_pred, dimension=1)
"""
Explanation: Performance Measures
We need a few more performance measures to display the progress to the user.
First we calculate the predicted class number from the output of the neural network y_pred, which is a vector with 10 elements. The class number is the index of the largest element.
End of explanation
"""
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
"""
Explanation: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
End of explanation
"""
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
"""
Explanation: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
End of explanation
"""
saver = tf.train.Saver(max_to_keep=100)
"""
Explanation: Saver
In order to save the variables of the neural network, we now create a Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below.
Note that if you have more than 100 neural networks in the ensemble then you must increase max_to_keep accordingly.
End of explanation
"""
save_dir = 'checkpoints/'
"""
Explanation: This is the directory used for saving and retrieving the data.
End of explanation
"""
if not os.path.exists(save_dir):
os.makedirs(save_dir)
"""
Explanation: Create the directory if it does not exist.
End of explanation
"""
def get_save_path(net_number):
return save_dir + 'network' + str(net_number)
"""
Explanation: This function returns the save-path for the data-file with the given network number.
End of explanation
"""
session = tf.Session()
"""
Explanation: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
"""
def init_variables():
session.run(tf.initialize_all_variables())
"""
Explanation: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it several times below.
End of explanation
"""
train_batch_size = 64
"""
Explanation: Helper-function to create a random training batch.
There are thousands of images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
End of explanation
"""
def random_batch(x_train, y_train):
# Total number of images in the training-set.
num_images = len(x_train)
# Create a random index into the training-set.
idx = np.random.choice(num_images,
size=train_batch_size,
replace=False)
# Use the random index to select random images and labels.
x_batch = x_train[idx, :] # Images.
y_batch = y_train[idx, :] # Labels.
# Return the batch.
return x_batch, y_batch
"""
Explanation: Function for selecting a random training-batch of the given size.
End of explanation
"""
def optimize(num_iterations, x_train, y_train):
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = random_batch(x_train, y_train)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations and after last iteration.
if i % 100 == 0:
# Calculate the accuracy on the training-batch.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Status-message for printing.
msg = "Optimization Iteration: {0:>6}, Training Batch Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
"""
Explanation: Helper-function to perform optimization iterations
Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
End of explanation
"""
num_networks = 5
"""
Explanation: Create ensemble of neural networks
Number of neural networks in the ensemble.
End of explanation
"""
num_iterations = 10000
"""
Explanation: Number of optimization iterations for each neural network.
End of explanation
"""
if True:
# For each of the neural networks.
for i in range(num_networks):
print("Neural network: {0}".format(i))
# Create a random training-set. Ignore the validation-set.
x_train, y_train, _, _ = random_training_set()
# Initialize the variables of the TensorFlow graph.
session.run(tf.initialize_all_variables())
# Optimize the variables using this training-set.
optimize(num_iterations=num_iterations,
x_train=x_train,
y_train=y_train)
# Save the optimized variables to disk.
saver.save(sess=session, save_path=get_save_path(i))
# Print newline.
print()
"""
Explanation: Create the ensemble of neural networks. All networks use the same TensorFlow graph that was defined above. For each neural network the TensorFlow weights and variables are initialized to random values and then optimized. The variables are then saved to disk so they can be reloaded later.
You may want to skip this computation if you just want to re-run the Notebook with different analysis of the results.
End of explanation
"""
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_labels(images):
# Number of images.
num_images = len(images)
# Allocate an array for the predicted labels which
# will be calculated in batches and filled into this array.
pred_labels = np.zeros(shape=(num_images, num_classes),
dtype=np.float)
# Now calculate the predicted labels for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images between index i and j.
feed_dict = {x: images[i:j, :]}
# Calculate the predicted labels using TensorFlow.
pred_labels[i:j] = session.run(y_pred, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
return pred_labels
"""
Explanation: Helper-functions for calculating and predicting classifications
This function calculates the predicted labels of images, that is, for each image it calculates a vector of length 10 indicating which of the 10 classes the image is.
The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
End of explanation
"""
def correct_prediction(images, labels, cls_true):
# Calculate the predicted labels.
pred_labels = predict_labels(images=images)
# Calculate the predicted class-number for each image.
cls_pred = np.argmax(pred_labels, axis=1)
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct
"""
Explanation: Calculate a boolean array whether the predicted classes for the images are correct.
End of explanation
"""
def test_correct():
return correct_prediction(images = data.test.images,
labels = data.test.labels,
cls_true = data.test.cls)
"""
Explanation: Calculate a boolean array whether the images in the test-set are classified correctly.
End of explanation
"""
def validation_correct():
return correct_prediction(images = data.validation.images,
labels = data.validation.labels,
cls_true = data.validation.cls)
"""
Explanation: Calculate a boolean array whether the images in the validation-set are classified correctly.
End of explanation
"""
def classification_accuracy(correct):
# When averaging a boolean array, False means 0 and True means 1.
# So we are calculating: number of True / len(correct) which is
# the same as the classification accuracy.
return correct.mean()
"""
Explanation: Helper-functions for calculating the classification accuracy
This function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. classification_accuracy([True, True, False, False, False]) = 2/5 = 0.4
End of explanation
"""
def test_accuracy():
# Get the array of booleans whether the classifications are correct
# for the test-set.
correct = test_correct()
# Calculate the classification accuracy and return it.
return classification_accuracy(correct)
"""
Explanation: Calculate the classification accuracy on the test-set.
End of explanation
"""
def validation_accuracy():
# Get the array of booleans whether the classifications are correct
# for the validation-set.
correct = validation_correct()
# Calculate the classification accuracy and return it.
return classification_accuracy(correct)
"""
Explanation: Calculate the classification accuracy on the original validation-set.
End of explanation
"""
def ensemble_predictions():
# Empty list of predicted labels for each of the neural networks.
pred_labels = []
# Classification accuracy on the test-set for each network.
test_accuracies = []
# Classification accuracy on the validation-set for each network.
val_accuracies = []
# For each neural network in the ensemble.
for i in range(num_networks):
# Reload the variables into the TensorFlow graph.
saver.restore(sess=session, save_path=get_save_path(i))
# Calculate the classification accuracy on the test-set.
test_acc = test_accuracy()
# Append the classification accuracy to the list.
test_accuracies.append(test_acc)
# Calculate the classification accuracy on the validation-set.
val_acc = validation_accuracy()
# Append the classification accuracy to the list.
val_accuracies.append(val_acc)
# Print status message.
msg = "Network: {0}, Accuracy on Validation-Set: {1:.4f}, Test-Set: {2:.4f}"
print(msg.format(i, val_acc, test_acc))
# Calculate the predicted labels for the images in the test-set.
# This is already calculated in test_accuracy() above but
# it is re-calculated here to keep the code a bit simpler.
pred = predict_labels(images=data.test.images)
# Append the predicted labels to the list.
pred_labels.append(pred)
return np.array(pred_labels), \
np.array(test_accuracies), \
np.array(val_accuracies)
pred_labels, test_accuracies, val_accuracies = ensemble_predictions()
"""
Explanation: Results and analysis
Function for calculating the predicted labels for all the neural networks in the ensemble. The labels are combined further below.
End of explanation
"""
print("Mean test-set accuracy: {0:.4f}".format(np.mean(test_accuracies)))
print("Min test-set accuracy: {0:.4f}".format(np.min(test_accuracies)))
print("Max test-set accuracy: {0:.4f}".format(np.max(test_accuracies)))
"""
Explanation: Summarize the classification accuracies on the test-set for the neural networks in the ensemble.
End of explanation
"""
pred_labels.shape
"""
Explanation: The predicted labels of the ensemble form a 3-dim array: the first dim is the network-number, the second dim is the image-number, and the third dim is the classification vector.
End of explanation
"""
ensemble_pred_labels = np.mean(pred_labels, axis=0)
ensemble_pred_labels.shape
"""
Explanation: Ensemble predictions
There are different ways to calculate the predicted labels for the ensemble. One way is to calculate the predicted class-number for each neural network, and then select the class-number with the most votes. But this requires a large number of neural networks relative to the number of classes.
The method used here is instead to take the average of the predicted labels for all the networks in the ensemble. This is simple to calculate and does not require a large number of networks in the ensemble.
End of explanation
"""
ensemble_cls_pred = np.argmax(ensemble_pred_labels, axis=1)
ensemble_cls_pred.shape
"""
Explanation: The ensemble's predicted class number is then the index of the highest number in the label, which is calculated using argmax as usual.
End of explanation
"""
ensemble_correct = (ensemble_cls_pred == data.test.cls)
"""
Explanation: Boolean array whether each of the images in the test-set was correctly classified by the ensemble of neural networks.
End of explanation
"""
ensemble_incorrect = np.logical_not(ensemble_correct)
"""
Explanation: Negate the boolean array so we can use it to lookup incorrectly classified images.
End of explanation
"""
test_accuracies
"""
Explanation: Best neural network
Now we find the single neural network that performed best on the test-set.
First list the classification accuracies on the test-set for all the neural networks in the ensemble.
End of explanation
"""
best_net = np.argmax(test_accuracies)
best_net
"""
Explanation: The index of the neural network with the highest classification accuracy.
End of explanation
"""
test_accuracies[best_net]
"""
Explanation: The best neural network's classification accuracy on the test-set.
End of explanation
"""
best_net_pred_labels = pred_labels[best_net, :, :]
"""
Explanation: Predicted labels of the best neural network.
End of explanation
"""
best_net_cls_pred = np.argmax(best_net_pred_labels, axis=1)
"""
Explanation: The predicted class-number.
End of explanation
"""
best_net_correct = (best_net_cls_pred == data.test.cls)
"""
Explanation: Boolean array whether the best neural network classified each image in the test-set correctly.
End of explanation
"""
best_net_incorrect = np.logical_not(best_net_correct)
"""
Explanation: Boolean array whether each image is incorrectly classified.
End of explanation
"""
np.sum(ensemble_correct)
"""
Explanation: Comparison of ensemble vs. the best single network
The number of images in the test-set that were correctly classified by the ensemble.
End of explanation
"""
np.sum(best_net_correct)
"""
Explanation: The number of images in the test-set that were correctly classified by the best neural network.
End of explanation
"""
ensemble_better = np.logical_and(best_net_incorrect,
ensemble_correct)
"""
Explanation: Boolean array whether each image in the test-set was correctly classified by the ensemble and incorrectly classified by the best neural network.
End of explanation
"""
ensemble_better.sum()
"""
Explanation: Number of images in the test-set where the ensemble was better than the best single network:
End of explanation
"""
best_net_better = np.logical_and(best_net_correct,
ensemble_incorrect)
"""
Explanation: Boolean array whether each image in the test-set was correctly classified by the best single network and incorrectly classified by the ensemble.
End of explanation
"""
best_net_better.sum()
"""
Explanation: Number of images in the test-set where the best single network was better than the ensemble.
End of explanation
"""
def plot_images_comparison(idx):
plot_images(images=data.test.images[idx, :],
cls_true=data.test.cls[idx],
ensemble_cls_pred=ensemble_cls_pred[idx],
best_cls_pred=best_net_cls_pred[idx])
"""
Explanation: Helper-functions for plotting and printing comparisons
Function for plotting images from the test-set and their true and predicted class-numbers.
End of explanation
"""
def print_labels(labels, idx, num=1):
# Select the relevant labels based on idx.
labels = labels[idx, :]
# Select the first num labels.
labels = labels[0:num, :]
# Round numbers to 2 decimal points so they are easier to read.
labels_rounded = np.round(labels, 2)
# Print the rounded labels.
print(labels_rounded)
"""
Explanation: Function for printing the predicted labels.
End of explanation
"""
def print_labels_ensemble(idx, **kwargs):
print_labels(labels=ensemble_pred_labels, idx=idx, **kwargs)
"""
Explanation: Function for printing the predicted labels for the ensemble of neural networks.
End of explanation
"""
def print_labels_best_net(idx, **kwargs):
print_labels(labels=best_net_pred_labels, idx=idx, **kwargs)
"""
Explanation: Function for printing the predicted labels for the best single network.
End of explanation
"""
def print_labels_all_nets(idx):
for i in range(num_networks):
print_labels(labels=pred_labels[i, :, :], idx=idx, num=1)
"""
Explanation: Function for printing the predicted labels of all the neural networks in the ensemble. This only prints the labels for the first image.
End of explanation
"""
plot_images_comparison(idx=ensemble_better)
"""
Explanation: Examples: Ensemble is better than the best network
Plot examples of images that were correctly classified by the ensemble and incorrectly classified by the best single network.
End of explanation
"""
print_labels_ensemble(idx=ensemble_better, num=1)
"""
Explanation: The ensemble's predicted labels for the first of these images (top left image):
End of explanation
"""
print_labels_best_net(idx=ensemble_better, num=1)
"""
Explanation: The best network's predicted labels for the first of these images:
End of explanation
"""
print_labels_all_nets(idx=ensemble_better)
"""
Explanation: The predicted labels of all the networks in the ensemble, for the first of these images:
End of explanation
"""
plot_images_comparison(idx=best_net_better)
"""
Explanation: Examples: Best network is better than ensemble
Now plot examples of images that were incorrectly classified by the ensemble but correctly classified by the best single network.
End of explanation
"""
print_labels_ensemble(idx=best_net_better, num=1)
"""
Explanation: The ensemble's predicted labels for the first of these images (top left image):
End of explanation
"""
print_labels_best_net(idx=best_net_better, num=1)
"""
Explanation: The best single network's predicted labels for the first of these images:
End of explanation
"""
print_labels_all_nets(idx=best_net_better)
"""
Explanation: The predicted labels of all the networks in the ensemble, for the first of these images:
End of explanation
"""
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
"""
Explanation: Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources.
End of explanation
"""
|
agentzh2m/muic_class_freq | class_freq.ipynb | mit | df = pd.read_csv('t2_2016.csv')
df = df[df['Type'] == 'master']
df.head()
#format [Day, start_time, end_time]
def time_extract(s):
s = str(s).strip().split(" "*16)
def helper(s):
try:
temp = s.strip().split(" ")[1:]
comb = temp[:2] + temp[3:]
comb[0] = comb[0][1:]
comb[2] = comb[2][:-1]
return comb
except:
temp = s.strip().split(" ")
comb = temp[:2] + temp[3:]
comb[0] = comb[0][1:]
comb[2] = comb[2][:-1]
return comb
top = helper(s[0])
if len(s) > 1:
bottom = helper(s[1])
return top, bottom
return top
# df.iloc[791]
# time_extract(df['Room/Time'][791])
tdf = df[df['Room/Time'].notnull()]['Room/Time']
tdf.apply(time_extract)[:10]
"""
Explanation: Worapol B. and hamuel.me, some rights reserved, maybe, hahaha
For the MUIC Math Club and any MUIC student who wants to use this as a reference
Import as DF
From the data shown below, we will use "master"-section subjects only, and we will use the number of students in "Registered" rather than the actual registration count, because "Registered" covers both the master and joint sections; this eliminates duplicate sections. We also remove subjects that do not specify a date and time.
End of explanation
"""
def normalize_time(t):
temp = t.split(":")
h = int(temp[0]) * 60
m = 60 if int(temp[1]) == 50 else 0
return int(h + m)
"""
Explanation: Here we want to generate a histogram in the following format:
[t1, t2, ..., tn]
Here t1 could be, for example, the time slot from 8 to 9.
The following is the logic for putting each subject into the correct time bin:
taking a class running 8:00-9:50 as an example, we round the 50 minutes up to 60, so it counts as 8-10.
We aim to plot a histogram from Monday to Friday
End of explanation
"""
def gen_hist(day):
filtered = []
for i,d in zip(tdf.index, tdf.apply(time_extract)):
if len(d) == 2:
for dd in d:
if dd[0] == day:
filtered.append((i, dd))
else:
if d[0] == day:
filtered.append((i, d))
hist = []
for i, d in filtered:
start = normalize_time(d[1])
end = normalize_time(d[2])
cc = start
while cc <= end:
for f in range(df['Registered'][i]):
hist.append(cc/60)
cc += 60
plt.title("Student studying on " + day)
plt.ylabel("Frequency")
plt.xlabel("Time in hours")
plt.hist(hist, bins=11);
# return hist
gen_hist('Mon')
"""
Explanation: Histogram for Monday
Frequency of students in classes on Monday
End of explanation
"""
gen_hist('Tue')
gen_hist('Wed')
gen_hist('Thu')
gen_hist('Fri')
gen_hist('Sat')
"""
Explanation: Histogram for Tuesday
End of explanation
"""
|
faneshion/MatchZoo | tutorials/data_handling.ipynb | apache-2.0 | data_pack = mz.datasets.toy.load_data()
data_pack.left.head()
data_pack.right.head()
data_pack.relation.head()
"""
Explanation: DataPack
Structure
matchzoo.DataPack is a MatchZoo native data structure that most MatchZoo data handling processes build upon. A matchzoo.DataPack consists of three parts: left, right and relation, each of which is a pandas.DataFrame.
End of explanation
"""
data_pack.frame().head()
"""
Explanation: The main reason for using a matchzoo.DataPack instead of a pandas.DataFrame is efficiency: we save space by not storing duplicate texts and save time by not processing the same texts repeatedly.
DataPack.FrameView
However, since a big table is easier to understand and manage, we provide the frame that merges three parts into a single pandas.DataFrame when called.
End of explanation
"""
type(data_pack.frame)
"""
Explanation: Notice that frame is not a method, but a property that returns a matchzoo.DataPack.FrameView object.
End of explanation
"""
frame = data_pack.frame
data_pack.relation['label'] = data_pack.relation['label'] + 1
frame().head()
"""
Explanation: This view reflects changes in the data pack, and can be called to create a pandas.DataFrame at any time.
End of explanation
"""
data_slice = data_pack[5:10]
"""
Explanation: Slicing a DataPack
You may use [] to slice a matchzoo.DataPack similar to slicing a list. This also returns a shallow copy of the sliced data like slicing a list.
End of explanation
"""
data_slice.relation
"""
Explanation: A sliced data pack's relation will directly reflect the slicing.
End of explanation
"""
data_slice.left
data_slice.right
"""
Explanation: In addition, left and right will be processed so only relevant information are kept.
End of explanation
"""
data_pack.frame[5:10]
"""
Explanation: It is also possible to slice a frame view object.
End of explanation
"""
data_slice.frame() == data_pack.frame[5:10]
"""
Explanation: And this is equivalent to slicing the data pack first, then the frame, since both of them are based on the relation column.
End of explanation
"""
num_train = int(len(data_pack) * 0.8)
data_pack.shuffle(inplace=True)
train_slice = data_pack[:num_train]
test_slice = data_pack[num_train:]
"""
Explanation: Slicing is extremely useful for partitioning data for training vs testing.
End of explanation
"""
data_slice.apply_on_text(len).frame()
data_slice.apply_on_text(len, rename=('left_length', 'right_length')).frame()
"""
Explanation: Transforming Texts
Use apply_on_text to transform texts in a matchzoo.DataPack. Check the documentation for more information.
End of explanation
"""
data_slice.append_text_length().frame()
"""
Explanation: Since adding a column indicating text length is a quite common usage, you may simply do:
End of explanation
"""
data_pack.relation['label'] = data_pack.relation['label'].astype(int)
data_pack.one_hot_encode_label(num_classes=3).frame().head()
"""
Explanation: To one-hot encode the labels:
End of explanation
"""
data = pd.DataFrame({
'text_left': list('ARSAARSA'),
'text_right': list('arstenus')
})
my_pack = mz.pack(data)
my_pack.frame()
"""
Explanation: Building Your own DataPack
Use matchzoo.pack to build your own data pack. Check documentation for more information.
End of explanation
"""
x, y = data_pack[:3].unpack()
x
y
"""
Explanation: Unpack
Format data in a way so that MatchZoo models can directly fit it. For more details, consult matchzoo/tutorials/models.ipynb.
End of explanation
"""
mz.datasets.list_available()
"""
Explanation: Data Sets
MatchZoo incorporates various datasets that can be loaded as MatchZoo native data structures.
End of explanation
"""
toy_train_rank = mz.datasets.toy.load_data()
toy_train_rank.frame().head()
toy_dev_classification, classes = mz.datasets.toy.load_data(
stage='train', task='classification', return_classes=True)
toy_dev_classification.frame().head()
classes
"""
Explanation: The toy dataset doesn't need to be downloaded and can be directly used. It's the best choice to get things rolling.
End of explanation
"""
wiki_dev_entailment_rank = mz.datasets.wiki_qa.load_data(stage='dev')
wiki_dev_entailment_rank.frame().head()
snli_test_classification, classes = mz.datasets.snli.load_data(
stage='test', task='classification', return_classes=True)
snli_test_classification.frame().head()
classes
"""
Explanation: Other larger datasets will be automatically downloaded the first time you use it. Run the following lines to trigger downloading.
End of explanation
"""
mz.preprocessors.list_available()
"""
Explanation: Preprocessing
Preprocessors
matchzoo.preprocessors are responsible for transforming data into the correct forms that matchzoo.models expect. BasicPreprocessor is used for models with common forms, and some other models have customized preprocessors made just for them.
End of explanation
"""
preprocessor = mz.models.Naive.get_default_preprocessor()
"""
Explanation: When in doubt, use the default preprocessor a model class provides.
End of explanation
"""
train_raw = mz.datasets.toy.load_data('train', 'ranking')
test_raw = mz.datasets.toy.load_data('test', 'ranking')
preprocessor.fit(train_raw)
preprocessor.context
train_preprocessed = preprocessor.transform(train_raw)
test_preprocessed = preprocessor.transform(test_raw)
model = mz.models.Naive()
model.guess_and_fill_missing_params()
model.build()
model.compile()
x_train, y_train = train_preprocessed.unpack()
model.fit(x_train, y_train)
x_test, y_test = test_preprocessed.unpack()
model.evaluate(x_test, y_test)
"""
Explanation: A preprocessor should be used in two steps. First fit, then transform. fit collects information into context, which includes everything the preprocessor needs for transform, together with other useful information for later use. fit will only change the preprocessor's inner state but not the input data. In contrast, transform returns a modified copy of the input data without changing the preprocessor's inner state.
End of explanation
"""
data_pack = mz.datasets.toy.load_data()
data_pack.frame().head()
tokenizer = mz.preprocessors.units.Tokenize()
data_pack.apply_on_text(tokenizer.transform, inplace=True)
data_pack.frame[:5]
lower_caser = mz.preprocessors.units.Lowercase()
data_pack.apply_on_text(lower_caser.transform, inplace=True)
data_pack.frame[:5]
"""
Explanation: Processor Units
Preprocessors utilize mz.processor_units to transform data. Processor units correspond to specific transformations and you may use them independently to preprocess a data pack.
End of explanation
"""
data_pack = mz.datasets.toy.load_data()
chain = mz.chain_transform([mz.preprocessors.units.Tokenize(),
mz.preprocessors.units.Lowercase()])
data_pack.apply_on_text(chain, inplace=True)
data_pack.frame[:5]
"""
Explanation: Or use chain_transform to apply multiple processor units at one time
End of explanation
"""
mz.preprocessors.units.Vocabulary.__base__
vocab_unit = mz.preprocessors.units.Vocabulary()
texts = data_pack.frame()[['text_left', 'text_right']]
all_tokens = texts.sum().sum()
vocab_unit.fit(all_tokens)
"""
Explanation: Notice that some processor units are stateful so we have to fit them before using their transform.
End of explanation
"""
for vocab in 'how', 'are', 'glacier':
print(vocab, vocab_unit.state['term_index'][vocab])
data_pack.apply_on_text(vocab_unit.transform, inplace=True)
data_pack.frame()[:5]
"""
Explanation: Such a StatefulProcessorUnit will save information in its state when fit, similar to the context of a preprocessor. In our case here, the vocabulary unit will save a term to index mapping and an index to term mapping, called term_index and index_term respectively. Then we can proceed to transform a data pack.
End of explanation
"""
data_pack = mz.datasets.toy.load_data()
vocab_unit = mz.build_vocab_unit(data_pack)
data_pack.apply_on_text(vocab_unit.transform).frame[:5]
"""
Explanation: Since this usage is quite common, we wrapped a function to do the same thing. For other stateful units, consult their documentation and try mz.build_unit_from_data_pack.
End of explanation
"""
x_train, y_train = train_preprocessed.unpack()
model.fit(x_train, y_train)
data_gen = mz.DataGenerator(train_preprocessed)
model.fit_generator(data_gen)
"""
Explanation: DataGenerator
Some MatchZoo models (e.g. DRMM, MatchPyramid) require batch-wise information for training, so using fit_generator instead of fit is necessary. In addition, sometimes your memory just can't hold all the transformed data, so delaying part of the preprocessing is necessary.
MatchZoo provides DataGenerator as an alternative. Instead of fit, you may do a fit_generator that takes a data generator that unpack data on the fly.
End of explanation
"""
preprocessor = mz.preprocessors.DSSMPreprocessor(with_word_hashing=False)
data = preprocessor.fit_transform(train_raw, verbose=0)
dssm = mz.models.DSSM()
dssm.params['task'] = mz.tasks.Ranking()
dssm.params.update(preprocessor.context)
dssm.build()
dssm.compile()
term_index = preprocessor.context['vocab_unit'].state['term_index']
hashing_unit = mz.preprocessors.units.WordHashing(term_index)
data_generator = mz.DataGenerator(
data,
batch_size=4,
callbacks=[
mz.data_generator.callbacks.LambdaCallback(
on_batch_data_pack=lambda dp: dp.apply_on_text(
hashing_unit.transform, inplace=True, verbose=0)
)
]
)
dssm.fit_generator(data_generator)
"""
Explanation: The data preprocessing of DSSM eats a lot of memory, but we can work around that using the callback hook of DataGenerator.
End of explanation
"""
num_neg = 4
task = mz.tasks.Ranking(loss=mz.losses.RankHingeLoss(num_neg=num_neg))
preprocessor = model.get_default_preprocessor()
train_processed = preprocessor.fit_transform(train_raw)
model = mz.models.Naive()
model.params['task'] = task
model.params.update(preprocessor.context)
model.build()
model.compile()
data_gen = mz.DataGenerator(
train_processed,
mode='pair',
num_neg=num_neg,
num_dup=2,
batch_size=32
)
model.fit_generator(data_gen)
"""
Explanation: In addition, losses like RankHingeLoss and RankCrossEntropyLoss have to be used with DataGenerator with mode='pair', since batch-wise information is needed and computed on the fly.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/launching_into_ml/labs/intro_logistic_regression.ipynb | apache-2.0 | import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
"""
Explanation: Introduction to Logistic Regression
Learning Objectives
Create Seaborn plots for Exploratory Data Analysis
Train a Logistic Regression Model using Scikit-Learn
Introduction
This lab is an introduction to logistic regression using Python and Scikit-Learn. This lab serves as a foundation for more complex algorithms and machine learning models that you will encounter in the course. In this lab, we will use a synthetic advertising data set, indicating whether or not a particular internet user clicked on an advertisement on a company website. We will try to create a model that will predict whether or not they will click on an ad based on the features of that user.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Import Libraries
End of explanation
"""
# TODO 1: Read in the advertising.csv file and set it to a data frame called ad_data.
# TODO: Your code goes here
"""
Explanation: Load the Dataset
We will use a synthetic advertising dataset. This data set contains the following features:
'Daily Time Spent on Site': consumer time on site in minutes
'Age': customer age in years
'Area Income': Avg. Income of geographical area of consumer
'Daily Internet Usage': Avg. minutes a day consumer is on the internet
'Ad Topic Line': Headline of the advertisement
'City': City of consumer
'Male': Whether or not consumer was male
'Country': Country of consumer
'Timestamp': Time at which consumer clicked on Ad or closed window
'Clicked on Ad': 0 or 1 indicated clicking on Ad
End of explanation
"""
ad_data.head()
"""
Explanation: Check the head of ad_data
End of explanation
"""
ad_data.info()
ad_data.describe()
"""
Explanation: Use info and describe() on ad_data
End of explanation
"""
ad_data.isnull().sum()
"""
Explanation: Let's check for any null values.
End of explanation
"""
# TODO: Your code goes here
"""
Explanation: Exploratory Data Analysis (EDA)
Let's use seaborn to explore the data! Try recreating the plots shown below!
TODO 1: Create a histogram of the Age
End of explanation
"""
# TODO: Your code goes here
"""
Explanation: TODO 1: Create a jointplot showing Area Income versus Age.
End of explanation
"""
# TODO: Your code goes here
"""
Explanation: TODO 2: Create a jointplot showing the kde distributions of Daily Time spent on site vs. Age.
End of explanation
"""
# TODO: Your code goes here
"""
Explanation: TODO 1: Create a jointplot of 'Daily Time Spent on Site' vs. 'Daily Internet Usage'
End of explanation
"""
from sklearn.model_selection import train_test_split
"""
Explanation: Logistic Regression
Logistic regression is a supervised machine learning process. It is similar to linear regression, but rather than predict a continuous value, we try to estimate probabilities by using a logistic function. Note that even though it has regression in the name, it is for classification.
While linear regression is acceptable for estimating values, logistic regression is best for predicting the class of an observation.
Now it's time to do a train test split, and train our model! You'll have the freedom here to choose columns that you want to train on!
End of explanation
"""
X = ad_data[['Daily Time Spent on Site', 'Age', 'Area Income','Daily Internet Usage', 'Male']]
y = ad_data['Clicked on Ad']
"""
Explanation: Next, let's define the features and label. Briefly, feature is input; label is output. This applies to both classification and regression problems.
End of explanation
"""
# TODO: Your code goes here
"""
Explanation: TODO 2: Split the data into training set and testing set using train_test_split
End of explanation
"""
from sklearn.linear_model import LogisticRegression
logmodel = LogisticRegression()
logmodel.fit(X_train,y_train)
"""
Explanation: Train and fit a logistic regression model on the training set.
End of explanation
"""
predictions = logmodel.predict(X_test)
"""
Explanation: Predictions and Evaluations
Now predict values for the testing data.
End of explanation
"""
from sklearn.metrics import classification_report
print(classification_report(y_test,predictions))
"""
Explanation: Create a classification report for the model.
End of explanation
"""
|
sindrerb/VecDiSCS | notebooks/Generate.ipynb | gpl-3.0 | lattice = np.array([[ 3.2871687359128612, 0.0000000000000000, 0.0000000000000000],
[-1.6435843679564306, 2.8467716318265182, 0.0000000000000000],
[ 0.0000000000000000, 0.0000000000000000, 5.3045771064003047]])
positions = [[0.3333333333333357, 0.6666666666666643, 0.9996814330926364],
[0.6666666666666643, 0.3333333333333357, 0.4996814330926362],
[0.3333333333333357, 0.6666666666666643, 0.3787615522102606],
[0.6666666666666643, 0.3333333333333357, 0.8787615522102604]]
numbers = [30, 30, 8, 8]
cell= (lattice, positions, numbers)
sym = spg.get_symmetry(cell, symprec=1e-5)
print(spg.get_spacegroup(cell, symprec=1e-5))
mesh = np.array([3,3,3]) # use odd numbers
mapping, grid = spg.get_ir_reciprocal_mesh(mesh, cell, is_shift=[0, 0, 0])
grid.shape
occurences = np.bincount(mapping)[np.unique(mapping)]
grid_ir = grid/(mesh-1)
grid_ir = grid_ir[np.unique(mapping)]
mapping_ir = mapping[np.unique(mapping)]
for i in range(len(mapping_ir)):
print("occ = ",occurences[i], "\t irr = ",grid_ir[i])
grid_ir
last = np.zeros(3)
for point in grid_ir[1:]:
last=point
'''
======================
3D surface (color map)
======================
Demonstrates plotting a 3D surface colored with the coolwarm color map.
The surface is made opaque by using antialiased=False.
Also demonstrates using the LinearLocator and custom formatting for the
z axis tick labels.
'''
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import numpy as np
fig = plt.figure()
ax = fig.gca(projection='3d')
# Make data.
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
R = np.sqrt(X**2 + Y**2)
Z = np.sin(R)
# Plot the surface.
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
                       linewidth=0, antialiased=False)
# Customize the z axis.
ax.set_zlim(-1.01, 1.01)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
# Add a color bar which maps values to colors.
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
"""
Explanation: Define cells and symmetry
For parabolic bands, use a cubic symmetry
End of explanation
"""
def gauss(sigma, eRange):
dE = eRange[1]-eRange[0]
gx = np.arange(-3*sigma,3*sigma, dE)
gaussian = np.exp(-0.5*(gx/sigma)**2)
gaussian = gaussian/np.sum(gaussian)
gauss =np.zeros((len(gaussian),1,1,1))
gauss[:,0,0,0] = gaussian
return gauss
def smooth(hist, eRange, sigma):
gaussian = gauss(sigma, eRange)
crop_front = len(gaussian)//2
if len(gaussian)%2 == 1:
crop_end = crop_front
else:
crop_end = crop_front-1
return signal.convolve(hist, gaussian)[crop_front:-crop_end]
_hbar = 1973 #eV AA
_me = .511e6 #eV
def band(k_vec, E0, m, k_center):
band = E0+(_hbar**2/(2*_me))*((k_vec[:,0]-k_center[0])**2/m[0]\
+(k_vec[:,1]-k_center[1])**2/m[1]\
+(k_vec[:,2]-k_center[2])**2/m[2])
return band
sigma = 0.0649/2.3548
sigma
wave = np.array([0, 1])
band_gap = 3.3
def runSeries(root, fermiEnergies):
eBin = np.linspace(3,5,120)
"""
coordinates = [[ 0.0, 0.0, 0.0], [ 0.0, 0.0, 0.0], [ 0.0, 0.0, 0.0], [ 0.0, 0.0, 0.0]]
eff_masses = [[-2.55, -2.55, -0.27], [-2.45, -2.45, -2.45], [-0.34, -0.34, -2.47], [ 0.29, 0.29, 0.25]]
energy_offset = [0.0, 0.0, 0.0, band_gap]
"""
coordinates = [[ 0.0, 0.0, 0.0], [ 0.0, 0.0, 0.0]]
eff_masses = [[-0.34, -0.34, -0.27], [ 0.29, 0.29, 0.25]]
energy_offset = [0.0, band_gap]
k_list = []
bands = []
for i in range(len(coordinates)):
bands.append((coordinates[i],eff_masses[i],energy_offset[i]))
k_arr = np.zeros((len(np.unique(mapping)),len(grid[0])))
e_arr = np.zeros((len(np.unique(mapping)),len(bands),))
w_arr = np.zeros((len(np.unique(mapping)),len(bands),len(wave)))
for i, map_id in enumerate(mapping[np.unique(mapping)]):
k_list.append((grid[mapping==map_id]/(mesh-1)).tolist())
k_arr[i] = grid[map_id]/(mesh-1)
for i, band_info in enumerate(bands):
e_arr[:,i] = band(k_arr, band_info[2], band_info[1], band_info[0])
w_arr[:,i] = np.outer(wave,np.ones(len(k_arr))).T
k = np.linspace(-0.5,0.5,100)
k = np.stack([k,np.zeros(len(k)),np.zeros(len(k))],axis=1)
"""plt.xlabel('k_x')
plt.ylabel('Energy')
old_band = 0
test = []
for i, band_info in enumerate(bands):
band_energies = band(k, band_info[2], band_info[1], band_info[0])
test.append(band_energies)
plt.plot(k[:,0],band_energies)
old_band = band_energies
plt.plot([-0.5,0.5],[fermiEnergy,fermiEnergy],'--')
plt.show()"""
folder = root
for fermiEnergy in fermiEnergies:
model = "Parabolic"
name = "Scleife 4 bands fermi {}".format(fermiEnergy)
title = "Fermi {}".format(fermiEnergy)
notes = "Based on Schleife 2009, Bands: G7c, G7+v, G7v, G7-v"
meta = spectrum.createMeta(name=name, title=title, authors="Sindre R. Bilden", notes=notes, model=model, cell=lattice, fermi=fermiEnergy, coordinates=coordinates, effective_masses=eff_masses, energy_levels=energy_offset)
EELS = spectrum.calculate_spectrum((mesh,k_list,e_arr,w_arr),eBin,fermiEnergy)
signal = spectrum.createSignal(data=EELS,eBin=eBin,mesh=mesh,metadata=meta)
signal_smooth = spectrum.createSignal(data=smooth(EELS, eBin, sigma),eBin=eBin,mesh=mesh,metadata=meta)
spectrum.saveHyperspy(signal, filename='../Results/%s/fermi_%.2f' %(folder,fermiEnergy-band_gap))
spectrum.saveHyperspy(signal_smooth, filename='../Results/%s/smoothed/fermi_%.2f' %(folder,fermiEnergy-band_gap))
fermi = [3.485]
runSeries("Schleife2009/unreal_valence/51/", fermi)
EELS_smooth = smooth(EELS, eBin, 0.02) #smoothing of data
"""
Explanation: Generate energies, waves and k-list
End of explanation
"""
|
fcollonval/coursera_data_visualization | Analysis_Variance.ipynb | mit | # Magic command to insert the graph directly in the notebook
%matplotlib inline
# Load a useful Python libraries for handling data
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import Markdown, display
# Read the data
data_filename = r'gapminder.csv'
data = pd.read_csv(data_filename, low_memory=False)
data = data.set_index('country')
"""
Explanation: Data Analysis Tools
Assignment: Running an analysis of variance
Following is the Python program I wrote to fulfill the first assignment of the Data Analysis Tools online course.
I decided to use Jupyter Notebook as it is a pretty way to write code and present results.
Research question
Using the Gapminder database, I would like to see if an increasing Internet usage results in an increasing suicide rate. A study shows that other factors like unemployment could have a great impact.
So for this assignment, the three following variables will be analyzed:
Internet Usage Rate (per 100 people)
Suicide Rate (per 100 000 people)
Unemployment Rate (% of the population of age 15+)
Data management
For the question I'm interested in, the countries for which data are missing will be discarded. As missing data in the Gapminder database are replaced directly by NaN, no special data treatment is needed.
End of explanation
"""
display(Markdown("Number of countries: {}".format(len(data))))
display(Markdown("Number of variables: {}".format(len(data.columns))))
# Convert interesting variables in numeric format
for variable in ('internetuserate', 'suicideper100th', 'employrate'):
data[variable] = pd.to_numeric(data[variable], errors='coerce')
"""
Explanation: General information on the Gapminder data
End of explanation
"""
data['unemployrate'] = 100. - data['employrate']
"""
Explanation: But the unemployment rate is not provided directly. In the database, the employment rate (% of the population) is available. So the unemployment rate will be computed as 100 - the employment rate:
End of explanation
"""
subdata = data[['internetuserate', 'suicideper100th', 'unemployrate']]
subdata.head(10)
"""
Explanation: The first records of the data restricted to the three analyzed variables are:
End of explanation
"""
subdata2 = subdata.assign(internet_grp4 = pd.qcut(subdata.internetuserate, 4,
labels=["1=25th%tile", "2=50th%tile",
"3=75th%tile", "4=100th%tile"]))
sns.factorplot(x='internet_grp4', y='suicideper100th', data=subdata2,
kind="bar", ci=None)
plt.xlabel('Internet use rate (%)')
plt.ylabel('Suicide per 100 000 people (-)')
_ = plt.title('Average suicide per 100,000 people per internet use rate quartile')
"""
Explanation: Data analysis
The distributions of the three variables have been analyzed previously.
Variance analysis
Now that the univariate distributions have been plotted and described, the bivariate graphics will be plotted in order to test our research hypothesis.
Let's first focus on the primary research question:
The explanatory variable is the internet use rate (quantitative variable)
The response variable is the suicide per 100,000 people (quantitative variable)
From the scatter plot, a slightly positive slope was seen. And as most countries have no or very low internet use rate, an effect may be visible only for the countries with the highest internet use rates.
End of explanation
"""
model1 = smf.ols(formula='suicideper100th ~ C(internet_grp4)', data=subdata2).fit()
model1.summary()
"""
Explanation: This case falls under the Categorical to Quantitative case of interest for this assignment. So an ANOVA analysis can be performed here.
The null hypothesis is: There is no relationship between the Internet use rate and suicide
The alternate hypothesis is: There is a relationship between the Internet use rate and suicide
End of explanation
"""
nesarc = pd.read_csv('nesarc_pds.csv', low_memory=False)
races = {1 : 'White',
2 : 'Black',
3 : 'American India/Alaska',
4 : 'Asian/Native Hawaiian/Pacific',
5 : 'Hispanic or Latino'}
subnesarc = (nesarc[['S3BQ4', 'ETHRACE2A']]
.assign(ethnicity=lambda x: pd.Categorical(x['ETHRACE2A'].map(races)),
nb_joints_day=lambda x: (pd.to_numeric(x['S3BQ4'], errors='coerce')
.replace(99, np.nan)))
.dropna())
g = sns.factorplot(x='ethnicity', y='nb_joints_day', data=subnesarc,
kind="bar", ci=None)
g.set_xticklabels(rotation=90)
plt.ylabel('Number of cannabis joints per day')
_ = plt.title('Average number of cannabis joints smoked per day depending on the ethnicity')
"""
Explanation: The p-value found is 0.143 > 0.05. Therefore the null hypothesis cannot be rejected: there is no evidence of a relationship between the internet use rate and the suicide rate.
Test case to fulfill the assignment
In order to fulfill the assignment, I will switch to the NESARC database.
End of explanation
"""
model2 = smf.ols(formula='nb_joints_day ~ C(ethnicity)', data=subnesarc).fit()
model2.summary()
"""
Explanation: The null hypothesis is There is no relationship between the number of joints smoked per day and the ethnicity.
The alternate hypothesis is There is a relationship between the number of joints smoked per day and the ethnicity.
End of explanation
"""
import statsmodels.stats.multicomp as multi
multi1 = multi.MultiComparison(subnesarc['nb_joints_day'], subnesarc['ethnicity'])
result1 = multi1.tukeyhsd()
result1.summary()
"""
Explanation: The p-value is much smaller than 5%. Therefore the null hypothesis is rejected. We can now look at which groups really differ from the others.
End of explanation
"""
|
ManyBodyPhysics/NuclearForces | doc/exercises/Variable_Phase_Approach.ipynb | cc0-1.0 | # I know you're not supposed to do this to avoid namespace issues, but whatever
from numpy import *
from matplotlib.pyplot import *
# Global variables for this notebook
mu=1.
R=1.
hbar=1.
"""
Explanation: Preliminaries
In this notebook, we compute the phase shifts of the potential with $V = -V0$ for $r < R$ and $V=0$ for $r > R$ with an analytic formula and using the Variable Phase Approach (VPA). The reduced mass is $\mu$.
We work in units where $\hbar=1$ and we measure mass in units of $\mu$ and lengths in terms of $R$. For convenience we set $\mu$ and $R$ to $1$. However, we will continue to make them explicit in the formulas.
End of explanation
"""
@vectorize
def Vsw(r,V0): # function for a square well of width R (set externally) and depth V0 (V0>0)
if r > R:
return 0.
return -V0
def Ek(k): # kinetic energy
return k**2 / (2.*mu)
V0 = 10
x=linspace(0,10,100)
plot(x,Vsw(x,V0))
ylim(-V0,V0)
show()
"""
Explanation: The only parameter to adjust now is the depth $V0$.
End of explanation
"""
def deltaAnalytic(k, V0):
return arctan(sqrt(Ek(k)/(Ek(k)+V0))*tan(R*sqrt(2.*mu*(Ek(k)+V0))))-R*sqrt(2.*mu*Ek(k))
V0 = 1
k=linspace(0,10,100)
plot(k,deltaAnalytic(k,V0))
show()
"""
Explanation: Analytic result for the phase shift
Use the formula for $\delta(E)$ from one of the problems, converting it to $\delta(k)$ using $E_k(k)$:
End of explanation
"""
V0 = .5
k=linspace(0,10,100)
plot(k,arctan(tan(deltaAnalytic(k,V0))))
show()
def deltaAnalyticAdjusted(k, V0):
return arctan(tan(deltaAnalytic(k, V0)))
"""
Explanation: What is going on with the steps? Why are they there? Is the phase shift really discontinuous? How would you fix it?
Here is one "fix" that makes the result continuous for this example, but we will still have issues when $V0$ is large enough to support bound states.
End of explanation
"""
V0 = .5
k=linspace(0,10,100)
seterr(all='ignore') # use 1/tan for cot, so ignore 1/0 errors
plot(k,k/tan(deltaAnalytic(k,V0)))
seterr(all='warn')
show()
"""
Explanation: We can also avoid this issue by looking at $k cot[\delta(k)]$ instead, which doesn't have these ambiguities.
End of explanation
"""
from scipy.integrate import odeint
?odeint
"""
Explanation: Phase shift from the Variable Phase Approach
Use the formula:
$$\frac{d}{dr}\delta_{k}(r)= -\frac{2\mu}{k} V(r)\sin^2\left(\delta_{k}(r)+k r\right)$$
with the initial condition $\delta_{k}(0)= 0$
We'll need a numerical differential equation solver. In python/scipy there are two choices: the simplest is scipy.integrate.odeint, and the slightly more complicated one is scipy.integrate.ode, a class-based wrapper to many different solvers. For the VPA, scipy.integrate.ode is more powerful than needed, so we will use scipy.integrate.odeint.
End of explanation
"""
def delta_VPA_simple(k, V0, r=10.):
    # actual VPA code
    def RHS(delta,x,Vd,k):
        return (-2.* mu/k) * Vsw(x,Vd) * sin(delta+k*x)**2.
    # odeint expects a sequence of r values, so integrate from 0 out to r
    soln = odeint(RHS, 0., [0., r], args=(V0,k), mxstep=10000, rtol=1e-6, atol=1e-6)
    # that is it
    return soln[-1, 0]
def delta_VPA(k,V0, r=10.):
# doing some tricks so that we can process either a single k or a vector of k
if isscalar(k):
delta0 = array([0.])
kv = array([k])
else:
kv = array(k)
if 0. in kv:
kv[kv==0.] = 1e-10 # cannot allow zero for k due to 1/k in RHS
# doing some tricks so that we can process either a single Rmax or a vector of r
if isscalar(r):
rv = array([r],float)
else:
rv = array(r,float)
rv = sort(rv)
if rv[0] != 0.:
rv = insert(rv,0,0.)
# actual VPA code
def RHS(delta,x,Vd,k):
return (-2.* mu/k) * Vsw(x,Vd) * sin(delta+k*x)**2.
soln = zeros((len(rv),len(kv)))
for i, ki in enumerate(kv):
soln[:,i] = odeint(RHS, 0, rv, args=(V0,ki),mxstep=10000,rtol=1e-6,atol=1e-6)[:,0]
# that is it
# tricks to return based on input
if isscalar(k):
if isscalar(r):
return soln[-1,0]
return rv, soln[:,0]
if isscalar(r):
return soln[-1,:]
return rv, soln.T
def delta_VPA_faster(k,V0, r=10.):
    # faster version of VPA; couples the error term for all k, so it will break down if you give it a vector of k with very small or zero momenta
# doing some tricks so that we can process either a single k or a vector of k
if isscalar(k):
delta0 = array([0.])
kv = array([k])
else:
kv = array(k)
delta0 = zeros(len(k),float)
if min(kv) < 1e-3:
        raise Exception('k must be >= 1e-3, otherwise the method is unstable')
# doing some tricks so that we can process either a single Rmax or a vector of r
if isscalar(r):
rv = array([r],float)
else:
rv = array(r,float)
rv = sort(rv)
if rv[0] != 0.:
rv = insert(rv,0,0.)
# actual VPA code
def RHS(delta,x,Vd,kv):
return (-2.* mu/kv) * Vsw(x,Vd) * sin(delta+kv*x)**2.
soln = odeint(RHS, delta0, rv, args=(V0,kv))
# that is it
# tricks to return based on input
if isscalar(k):
if isscalar(r):
return soln[-1,0]
return rv, soln[:,0]
if isscalar(r):
return soln[-1,:]
return rv, soln.T
"""
Explanation: Click on the border to remove the help information
In principle we integrate out to infinity. In practice, we choose a value of k and integrate out to Rmax, chosen to be well beyond the range of the potential (at which point the right side of the equation for $\delta_k(r)$ is zero), and then evaluate at $r=R_{\textrm{max}}$, which is the phase shift for momentum k. Here is an implementation:
End of explanation
"""
V0=2.6
print(deltaAnalyticAdjusted(2.,V0))
print(delta_VPA(2.,V0))
print(delta_VPA_faster(2.,V0))
print(delta_VPA_simple(2.,V0))
"""
Explanation: Do a quick check against the analytic result with sample values for $k$ and $V0$ to see the accuracy we are getting:
End of explanation
"""
V0=1.5
r=linspace(0,5,100)
k = 2.
rv, delta = delta_VPA(k,V0,r)
plot(rv,delta)
xlabel('r')
ylabel(r'$\delta_k(r)$')
show()
"""
Explanation: To get more digits correct, increase atol and rtol in odeint (at the cost of slower evaluation of the function).
Check the cutoff phase shift out to Rmax. Does this plot make sense?
End of explanation
"""
V0 = 5.
k = linspace(1e-3,10,100)
delta = delta_VPA(k,V0,2.0)
seterr(all='ignore')
plot(k,k / tan(delta),label='VPA at Rmax = 2.')
plot(k,k / tan(deltaAnalytic(k,V0)),label='exact')
seterr(all='warn')
xlabel('k')
ylabel(r'$k \cot\delta(k)$')
legend(loc='upper left')
show()
plot(k,delta,label='VPA at Rmax = 2.')
plot(k,deltaAnalyticAdjusted(k,V0),label='exact')
xlabel('k')
ylabel(r'$\delta(k)$')
legend(loc='upper right')
show()
"""
Explanation: Let's check the phase shifts for a V0 with no bound states
End of explanation
"""
V0 = 4.
k = linspace(1e-4,10,100)
delta = delta_VPA(k,V0)
print(delta.shape)
seterr(all='ignore')
plot(k,k / tan(delta),label='VPA at Rmax = 2.')
plot(k,k / tan(deltaAnalytic(k,V0)),label='exact')
seterr(all='warn')
xlabel('k')
ylabel(r'$k \cot\delta(k)$')
legend(loc='upper left')
show()
plot(k,delta,label='VPA at Rmax = 2.')
plot(k,deltaAnalyticAdjusted(k,V0),label='exact')
xlabel('k')
ylabel(r'$\delta(k)$')
legend(loc='upper right')
show()
"""
Explanation: Let's check the phase shifts for a V0 with 1 bound state
End of explanation
"""
i = 0
V0s = [.5,1.,2.,5.,10.,20.]
k=linspace(.001,20,100)
for V0 in V0s:
delta=delta_VPA(k,V0)
plot(k,delta,label='V0 = %4.2f' % V0)
xlabel('k')
ylabel(r'$\delta(k)$')
legend(loc='upper right')
show()
"""
Explanation: Now it's time to play!
Check whether Levinson's theorem holds by calculating phase shifts for increasing depths V0 and noting the jump to $\delta(0) = n\pi$, where n is the number of bound states (found, for example, from the square_well_example.nb notebook).
End of explanation
"""
|
GoogleCloudPlatform/practical-ml-vision-book | 07_training/07c_export.ipynb | apache-2.0 | import tensorflow as tf
print('TensorFlow version' + tf.version.VERSION)
print('Built with GPU support? ' + ('Yes!' if tf.test.is_built_with_cuda() else 'Noooo!'))
print('There are {} GPUs'.format(len(tf.config.experimental.list_physical_devices("GPU"))))
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
"""
Explanation: Saving model state
In this notebook, we checkpoint and export the model.
Enable GPU and set up helper functions
This notebook and pretty much every other notebook in this repository
will run faster if you are using a GPU.
On Colab:
- Navigate to Edit→Notebook Settings
- Select GPU from the Hardware Accelerator drop-down
On Cloud AI Platform Notebooks:
- Navigate to https://console.cloud.google.com/ai-platform/notebooks
- Create an instance with a GPU or select your instance and add a GPU
Next, we'll confirm that we can connect to the GPU with tensorflow:
End of explanation
"""
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import os
# Load compressed models from tensorflow_hub
os.environ['TFHUB_MODEL_LOAD_FORMAT'] = 'COMPRESSED'
from tensorflow.data.experimental import AUTOTUNE
IMG_HEIGHT = 448 # note *twice* what we used to have
IMG_WIDTH = 448
IMG_CHANNELS = 3
CLASS_NAMES = 'daisy dandelion roses sunflowers tulips'.split()
def training_plot(metrics, history):
f, ax = plt.subplots(1, len(metrics), figsize=(5*len(metrics), 5))
for idx, metric in enumerate(metrics):
ax[idx].plot(history.history[metric], ls='dashed')
ax[idx].set_xlabel("Epochs")
ax[idx].set_ylabel(metric)
ax[idx].plot(history.history['val_' + metric]);
ax[idx].legend([metric, 'val_' + metric])
class _Preprocessor:
def __init__(self):
# nothing to initialize
pass
def read_from_tfr(self, proto):
feature_description = {
'image': tf.io.VarLenFeature(tf.float32),
'shape': tf.io.VarLenFeature(tf.int64),
'label': tf.io.FixedLenFeature([], tf.string, default_value=''),
'label_int': tf.io.FixedLenFeature([], tf.int64, default_value=0),
}
rec = tf.io.parse_single_example(
proto, feature_description
)
shape = tf.sparse.to_dense(rec['shape'])
img = tf.reshape(tf.sparse.to_dense(rec['image']), shape)
label_int = rec['label_int']
return img, label_int
def read_from_jpegfile(self, filename):
# same code as in 05_create_dataset/jpeg_to_tfrecord.py
img = tf.io.read_file(filename)
img = tf.image.decode_jpeg(img, channels=IMG_CHANNELS)
img = tf.image.convert_image_dtype(img, tf.float32)
return img
def preprocess(self, img):
return tf.image.resize_with_pad(img, IMG_HEIGHT, IMG_WIDTH)
# most efficient way to read the data
# as determined in 07a_ingest.ipynb
# splits the files into two halves and interleaves datasets
def create_preproc_dataset(pattern):
"""
Does interleaving, parallel calls, prefetch, batching
Caching is not a good idea on large datasets.
"""
preproc = _Preprocessor()
files = [filename for filename in tf.io.gfile.glob(pattern)]
if len(files) > 1:
print("Interleaving the reading of {} files.".format(len(files)))
def _create_half_ds(x):
if x == 0:
half = files[:(len(files)//2)]
else:
half = files[(len(files)//2):]
return tf.data.TFRecordDataset(half,
compression_type='GZIP')
trainds = tf.data.Dataset.range(2).interleave(
_create_half_ds, num_parallel_calls=AUTOTUNE)
else:
trainds = tf.data.TFRecordDataset(files,
compression_type='GZIP')
def _preproc_img_label(img, label):
return (preproc.preprocess(img), label)
trainds = (trainds
.map(preproc.read_from_tfr, num_parallel_calls=AUTOTUNE)
.map(_preproc_img_label, num_parallel_calls=AUTOTUNE)
.shuffle(200)
.prefetch(AUTOTUNE)
)
return trainds
def create_preproc_image(filename):
preproc = _Preprocessor()
img = preproc.read_from_jpegfile(filename)
return preproc.preprocess(img)
class RandomColorDistortion(tf.keras.layers.Layer):
def __init__(self, contrast_range=[0.5, 1.5],
brightness_delta=[-0.2, 0.2], **kwargs):
super(RandomColorDistortion, self).__init__(**kwargs)
self.contrast_range = contrast_range
self.brightness_delta = brightness_delta
def call(self, images, training=None):
if not training:
return images
contrast = np.random.uniform(
self.contrast_range[0], self.contrast_range[1])
brightness = np.random.uniform(
self.brightness_delta[0], self.brightness_delta[1])
images = tf.image.adjust_contrast(images, contrast)
images = tf.image.adjust_brightness(images, brightness)
images = tf.clip_by_value(images, 0, 1)
return images
"""
Explanation: Training code
This is the original code, from ../06_preprocessing/06e_colordistortion.ipynb
modified to use the most efficient ingest as determined in ./07a_ingest.ipynb.
End of explanation
"""
#PATTERN_SUFFIX, NUM_EPOCHS = '-0000[01]-*', 3 # small
PATTERN_SUFFIX, NUM_EPOCHS = '-*', 20 # full
import os, shutil
shutil.rmtree('chkpts', ignore_errors=True)
os.mkdir('chkpts')
def train_and_evaluate(batch_size = 32,
lrate = 0.001,
l1 = 0.,
l2 = 0.,
num_hidden = 16):
regularizer = tf.keras.regularizers.l1_l2(l1, l2)
train_dataset = create_preproc_dataset(
'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX
).batch(batch_size)
eval_dataset = create_preproc_dataset(
'gs://practical-ml-vision-book/flowers_tfr/valid' + PATTERN_SUFFIX
).batch(batch_size)
layers = [
tf.keras.layers.experimental.preprocessing.RandomCrop(
height=IMG_HEIGHT//2, width=IMG_WIDTH//2,
input_shape=(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS),
name='random/center_crop'
),
tf.keras.layers.experimental.preprocessing.RandomFlip(
mode='horizontal',
name='random_lr_flip/none'
),
RandomColorDistortion(name='random_contrast_brightness/none'),
hub.KerasLayer(
"https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4",
trainable=False,
name='mobilenet_embedding'),
tf.keras.layers.Dense(num_hidden,
kernel_regularizer=regularizer,
activation=tf.keras.activations.relu,
name='dense_hidden'),
tf.keras.layers.Dense(len(CLASS_NAMES),
kernel_regularizer=regularizer,
activation='softmax',
name='flower_prob')
]
# checkpoint and early stopping callbacks
model_checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
filepath='./chkpts',
monitor='val_accuracy', mode='max',
save_best_only=True)
early_stopping_cb = tf.keras.callbacks.EarlyStopping(
monitor='val_accuracy', mode='max',
patience=2)
# model training
model = tf.keras.Sequential(layers, name='flower_classification')
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lrate),
loss=tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=False),
metrics=['accuracy']
)
print(model.summary())
history = model.fit(train_dataset,
validation_data=eval_dataset,
epochs=NUM_EPOCHS,
callbacks=[model_checkpoint_cb, early_stopping_cb]
)
training_plot(['loss', 'accuracy'], history)
return model
model = train_and_evaluate()
"""
Explanation: Training
Interleaving, parallel calls, prefetch, batching
Caching is not a good idea on large datasets.
End of explanation
"""
import os, shutil
shutil.rmtree('export', ignore_errors=True)
os.mkdir('export')
model.save('export/flowers_model')
!ls export/flowers_model
!saved_model_cli show --tag_set serve --signature_def serving_default --dir export/flowers_model
## for prediction, we won't have TensorFlow Records.
## this is how we'd predict for individual images
filenames = [
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9818247_e2eac18894.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9158041313_7a6a102f7a_n.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8733586143_3139db6e9e_n.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg'
]
serving_model = tf.keras.models.load_model('export/flowers_model')
input_images = [create_preproc_image(f) for f in filenames]
f, ax = plt.subplots(1, 6, figsize=(15,15))
for idx, img in enumerate(input_images):
ax[idx].imshow((img.numpy()));
batch_image = tf.reshape(img, [1, IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS])
batch_pred = serving_model.predict(batch_image)
pred = batch_pred[0]
pred_label_index = tf.math.argmax(pred).numpy()
pred_label = CLASS_NAMES[pred_label_index]
prob = pred[pred_label_index]
ax[idx].set_title('{} ({:.2f})'.format(pred_label, prob))
"""
Explanation: Save model, then load it to make predictions
This way, we don't have to have the model in memory
End of explanation
"""
## it's better to vectorize the prediction
filenames = tf.convert_to_tensor([
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9818247_e2eac18894.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9158041313_7a6a102f7a_n.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8733586143_3139db6e9e_n.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg'
])
print(filenames)
input_images = tf.map_fn(create_preproc_image,
filenames, fn_output_signature=tf.float32)
batch_pred = serving_model.predict(input_images)
print('full probs:\n', batch_pred)
top_prob = tf.math.reduce_max(batch_pred, axis=[1])
print('top prob:\n', top_prob)
pred_label_index = tf.math.argmax(batch_pred, axis=1)
print('top cls:\n', pred_label_index)
pred_label = tf.gather(tf.convert_to_tensor(CLASS_NAMES), pred_label_index)
print(pred_label)
@tf.function(input_signature=[tf.TensorSpec([None,], dtype=tf.string)])
def predict_flower_type(filenames):
input_images = tf.map_fn(
create_preproc_image,
filenames,
fn_output_signature=tf.float32
)
batch_pred = model(input_images) # same as model.predict()
top_prob = tf.math.reduce_max(batch_pred, axis=[1])
pred_label_index = tf.math.argmax(batch_pred, axis=1)
pred_label = tf.gather(tf.convert_to_tensor(CLASS_NAMES), pred_label_index)
return {
'probability': top_prob,
'flower_type_int': pred_label_index,
'flower_type_str': pred_label
}
shutil.rmtree('export', ignore_errors=True)
os.mkdir('export')
model.save('export/flowers_model',
signatures={
'serving_default': predict_flower_type
})
serving_fn = tf.keras.models.load_model('export/flowers_model').signatures['serving_default']
filenames = [
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9818247_e2eac18894.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9158041313_7a6a102f7a_n.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8733586143_3139db6e9e_n.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg'
]
pred = serving_fn(tf.convert_to_tensor(filenames))
print(pred)
print('******')
print(pred['flower_type_str'].numpy())
f, ax = plt.subplots(1, 6, figsize=(15,15))
for idx, (filename, prob, pred_label) in enumerate(
zip(filenames, pred['probability'].numpy(), pred['flower_type_str'].numpy())):
img = tf.io.read_file(filename)
img = tf.image.decode_jpeg(img, channels=3)
ax[idx].imshow((img.numpy()));
ax[idx].set_title('{} ({:.2f})'.format(pred_label, prob))
"""
Explanation: Serving signature
Expecting clients to do all this reshaping is not realistic.
Let's do two things:
(1) Define a serving signature that simply takes the name of a file
(2) Vectorize the code to accept a batch of filenames, so that we can do it all in one go.
End of explanation
"""
%%bash
PROJECT=$(gcloud config get-value project)
BUCKET=${PROJECT} # create a bucket with this name, or change to a bucket that you own.
gsutil cp -r export/flowers_model gs://${BUCKET}/flowers_5_trained
"""
Explanation: Save the model to Google Cloud Storage
Save the model so that it is available even after this Notebook instance is shutdown.
Change the BUCKET to be a bucket that you own.
End of explanation
"""
|
mbeyeler/opencv-machine-learning | notebooks/06.01-Implementing-Your-First-Support-Vector-Machine.ipynb | mit | from sklearn import datasets
X, y = datasets.make_classification(n_samples=100, n_features=2,
n_redundant=0, n_classes=2,
random_state=7816)
"""
Explanation: <!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.
The code is released under the MIT license,
and is available on GitHub.
Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
buying the book!
<!--NAVIGATION-->
< Detecting Pedestrians with Support Vector Machines | Contents | Detecting Pedestrians in the Wild >
Implementing Our First Support Vector Machine
In order to understand how SVMs work, we have to think about decision boundaries.
When we used linear classifiers or decision trees in earlier chapters, our goal was always to
minimize the classification error. We did this by accuracy or mean squared error. An SVM
tries to achieve low classification errors too, but it does so only implicitly. An SVM's explicit
objective is to maximize the margins between data points of one class versus the other. This
is the reason SVMs are sometimes also called maximum-margin classifiers.
For a detailed treatment of SVMs and how they work, please refer to the book.
For our very first SVM, we should
probably focus on a simple dataset, perhaps a binary classification task.
Generating the dataset
A cool trick about scikit-learn's dataset module that I haven't told you about is that you can
generate random datasets of controlled size and complexity. A few notable ones are:
- datasets.make_classification([n_samples, ...]): This function generates a random $n$-class classification problem, where we can specify the number of samples, the number of features, and the number of target labels
- datasets.make_regression([n_samples, ...]): This function generates a random regression problem
- datasets.make_blobs([n_samples, n_features, ...]): This function generates a number of Gaussian blobs we can use for clustering
This means that we can use make_classification to build a custom dataset for a binary
classification task.
For the sake of simplicity, let's limit ourselves to only two
feature values (n_features=2; for example, an $x$ and a $y$ value). Let's say we want to create
100 data samples:
End of explanation
"""
X.shape, y.shape
"""
Explanation: We expect X to have 100 rows (data samples) and two columns (features), whereas the
vector y should have a single column that contains all the target labels:
End of explanation
"""
import matplotlib.pyplot as plt
plt.style.use('ggplot')
plt.set_cmap('jet')
%matplotlib inline
plt.figure(figsize=(10, 6))
plt.scatter(X[:, 0], X[:, 1], c=y, s=100)
plt.xlabel('x values')
plt.ylabel('y values')
"""
Explanation: Visualizing the dataset
We can plot these data points in a scatter plot using Matplotlib. Here, the idea is to plot the
$x$ values (found in the first column of X, X[:, 0]) against the $y$ values (found in the second
column of X, X[:, 1]). A neat trick is to pass the target labels as color values (c=y):
End of explanation
"""
import numpy as np
X = X.astype(np.float32)
y = y * 2 - 1
"""
Explanation: You can see that, for the most part, data points of the two classes are clearly separated.
However, there are a few regions (particularly near the left and bottom of the plot) where
the data points of both classes intermingle. These will be hard to classify correctly, as we
will see in just a second.
Preprocessing the dataset
The next step is to split the data points into training and test sets, as we have done before.
But, before we do that, we have to prepare the data for OpenCV:
- All feature values in X must be 32-bit floating point numbers
- Target labels must be either -1 or +1
We can achieve this with the following code:
End of explanation
"""
from sklearn import model_selection as ms
X_train, X_test, y_train, y_test = ms.train_test_split(
X, y, test_size=0.2, random_state=42
)
"""
Explanation: Now we can pass the data to scikit-learn's train_test_split function, like we did in the
earlier chapters:
End of explanation
"""
import cv2
svm = cv2.ml.SVM_create()
"""
Explanation: Here I chose to reserve 20 percent of all data points for the test set, but you can adjust this
number according to your liking.
Building the support vector machine
In OpenCV, SVMs are built, trained, and scored the same exact way as every other learning
algorithm we have encountered so far, using the following steps.
Call the create method to construct a new SVM:
End of explanation
"""
svm.setKernel(cv2.ml.SVM_LINEAR)
"""
Explanation: As shown in the following command, there are different modes in which we can
operate an SVM. For now, all we care about is the case we discussed in the
previous example: an SVM that tries to partition the data with a straight line. This
can be specified with the setKernel method:
End of explanation
"""
svm.train(X_train, cv2.ml.ROW_SAMPLE, y_train);
"""
Explanation: Call the classifier's train method to find the optimal decision boundary:
End of explanation
"""
_, y_pred = svm.predict(X_test)
"""
Explanation: Call the classifier's predict method to predict the target labels of all data
samples in the test set:
End of explanation
"""
from sklearn import metrics
metrics.accuracy_score(y_test, y_pred)
"""
Explanation: Use scikit-learn's metrics module to score the classifier:
End of explanation
"""
def plot_decision_boundary(svm, X_test, y_test):
# create a mesh to plot in
h = 0.02 # step size in mesh
x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1
y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
X_hypo = np.c_[xx.ravel().astype(np.float32),
yy.ravel().astype(np.float32)]
_, zz = svm.predict(X_hypo)
zz = zz.reshape(xx.shape)
plt.contourf(xx, yy, zz, cmap=plt.cm.coolwarm, alpha=0.8)
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=200)
plt.figure(figsize=(10, 6))
plot_decision_boundary(svm, X_test, y_test)
"""
Explanation: Congratulations, we got 80 percent correctly classified test samples!
Of course, so far we have no idea what happened under the hood. For all we know, we
might as well have gotten these commands off a web search and typed them into the
terminal, without really knowing what we're doing. But this is not who we want to be.
Getting a system to work is one thing and understanding it is another. Let us get to that!
Visualizing the decision boundary
What was true in trying to understand our data is true for trying to understand our
classifier: visualization is the first step in understanding a system. We know the SVM
somehow came up with a decision boundary that allowed us to correctly classify 80 percent
of the test samples. But how can we find out what that decision boundary actually looks
like?
For this, we will borrow a trick from the guys behind scikit-learn. The idea is to generate a
fine grid of $x$ and $y$ coordinates and run that through the SVM's predict method. This will
allow us to know, for every $(x, y)$ point, what target label the classifier would have
predicted.
We will do this in a dedicated function, which we call plot_decision_boundary. The
function takes as inputs an SVM object, the feature values of the test set, and the target
labels of the test set. The function then creates a contour plot, on top of which we will plot the individual data points colored
by their true target labels:
End of explanation
"""
kernels = [cv2.ml.SVM_LINEAR, cv2.ml.SVM_INTER, cv2.ml.SVM_SIGMOID, cv2.ml.SVM_RBF]
"""
Explanation: Now we get a better sense of what is going on!
The SVM found a straight line (a linear decision boundary) that best separates the blue and
the red data samples. It didn't get all the data points right, as there are three blue dots in the
red zone and one red dot in the blue zone.
However, we can convince ourselves that this is the best straight line we could have chosen
by wiggling the line around in our heads. If you're not sure why, please refer to the book (p. 148).
So what can we do to improve our classification performance?
One solution is to move away from straight lines and onto more complicated decision
boundaries.
Dealing with nonlinear decision boundaries
What if the data cannot be optimally partitioned using a linear decision boundary? In such
a case, we say the data is not linearly separable.
The basic idea to deal with data that is not linearly separable is to create nonlinear
combinations of the original features. This is the same as saying we want to project our
data to a higher-dimensional space (for example, from 2D to 3D) in which the data
suddenly becomes linearly separable.
The book has a nice figure illustrating this idea.
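The idea can also be sketched with a toy example (entirely hypothetical data, not from the book): ring-shaped data that no straight line can separate in 2D becomes separable by a flat plane once we add a third feature $z = x^2 + y^2$:

```python
import numpy as np

# Hypothetical ring-shaped data: class 1 lies outside a circle of radius 1.2
rng = np.random.RandomState(42)
X = rng.randn(200, 2)
y = (np.hypot(X[:, 0], X[:, 1]) > 1.2).astype(int)

# Project from 2D to 3D by adding z = x^2 + y^2
z = (X ** 2).sum(axis=1)
X3 = np.column_stack([X, z])

# In 3D, the flat plane z = 1.44 now separates the two classes perfectly
print(np.all((z > 1.2 ** 2) == (y == 1)))  # True
```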
Implementing nonlinear support vector machines
OpenCV provides a whole range of different SVM kernels with which we can experiment.
Some of the most commonly used ones include:
- cv2.ml.SVM_LINEAR: This is the kernel we used previously. It provides a linear decision boundary in the original feature space (the $x$ and $y$ values).
- cv2.ml.SVM_POLY: This kernel provides a decision boundary that is a polynomial function in the original feature space. In order to use this kernel, we also have to specify a coefficient via svm.setCoef0 (usually set to 0) and the degree of the polynomial via svm.setDegree.
- cv2.ml.SVM_RBF: This kernel implements the kind of Gaussian function we discussed earlier.
- cv2.ml.SVM_SIGMOID: This kernel implements a sigmoid function, similar to the one we encountered when talking about logistic regression in Chapter 3, First Steps in Supervised Learning.
- cv2.ml.SVM_INTER: This kernel is a new addition to OpenCV 3. It separates classes based on the similarity of their histograms.
In order to test some of the SVM kernels we just talked about, we will return to our code
sample mentioned earlier. We want to repeat the process of building and training the SVM
on the dataset generated earlier, but this time we want to use a whole range of different
kernels:
End of explanation
"""
plt.figure(figsize=(14, 8))
for idx, kernel in enumerate(kernels):
svm = cv2.ml.SVM_create()
svm.setKernel(kernel)
svm.train(X_train, cv2.ml.ROW_SAMPLE, y_train)
_, y_pred = svm.predict(X_test)
plt.subplot(2, 2, idx + 1)
plot_decision_boundary(svm, X_test, y_test)
plt.title('accuracy = %.2f' % metrics.accuracy_score(y_test, y_pred))
"""
Explanation: Do you remember what all of these stand for?
Setting a different SVM kernel is relatively simple. We take an entry from the kernels list
and pass it to the setKernel method of the SVM class. That's all.
The laziest way to repeat things is to use a for loop:
End of explanation
"""
vicente-gonzalez-ruiz/YAPT | scientific_computation/about_accuracy.ipynb | cc0-1.0 | x = 7**273
print(x)
print(type(x))
"""
Explanation: About arithmetic accuracy in Python
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Integers" data-toc-modified-id="Integers-1"><span class="toc-item-num">1 </span><a href="https://docs.python.org/3/c-api/long.html" target="_blank">Integers</a></a></span></li><li><span><a href="#Floats" data-toc-modified-id="Floats-2"><span class="toc-item-num">2 </span><a href="https://docs.python.org/3/tutorial/floatingpoint.html" target="_blank">Floats</a></a></span></li></ul></div>
Integers
In Python, integers have arbitrary precision, so we can represent an arbitrarily large range of integers (limited only by the available memory).
End of explanation
"""
format(0.1, '.80f')
"""
Explanation: Floats
Python uses the hardware's IEEE 754 double-precision representation for floats. This means that some floats can only be represented approximately.
We can use string formatting to see the precision limitation of doubles in Python. For example, it is impossible to represent the number 0.1 exactly:
End of explanation
"""
.1 + .1 + .1 == .3
.1 + .1 == .2
"""
Explanation: This can give us surprises:
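A common remedy, sketched below, is to compare floats with a tolerance rather than testing exact equality, for example with math.isclose:

```python
import math

print(.1 + .1 + .1 == .3)              # False: exact comparison fails
print(math.isclose(.1 + .1 + .1, .3))  # True: tolerance-based comparison
```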
End of explanation
"""
from decimal import Decimal, getcontext
"""
Explanation: For "infinite" precision float arithmetic you can use decimal or mpmath:
End of explanation
"""
getcontext().prec=80
format(Decimal(1)/Decimal(7), '.80f')
"""
Explanation: Getting 80 digits of 1/7:
End of explanation
"""
format(1/7, '.80f')
#12345678901234567 (17 digits)
"""
Explanation: We can see how many digits of 1/7 are correct when using doubles:
End of explanation
"""
Decimal(1)/Decimal(7)
"""
Explanation: Decimal arithmetic produces decimal objects:
End of explanation
"""
print('{:.50f}'.format(Decimal(1)/Decimal(7)))
"""
Explanation: Decimal objects can be printed with format:
End of explanation
"""
# https://stackoverflow.com/questions/28284996/python-pi-calculation
from decimal import Decimal, getcontext
getcontext().prec=1000
my_pi= sum(1/Decimal(16)**k *
(Decimal(4)/(8*k+1) -
Decimal(2)/(8*k+4) -
Decimal(1)/(8*k+5) -
Decimal(1)/(8*k+6)) for k in range(1000))
'{:.1000f}'.format(my_pi)
"""
Explanation: A more complex example: let's compute 1000 digits of $\pi$ using the Bailey–Borwein–Plouffe formula:
$$
\pi = \sum_{k = 0}^{\infty}\Bigg[ \frac{1}{16^k} \left( \frac{4}{8k + 1} - \frac{2}{8k + 4} - \frac{1}{8k + 5} - \frac{1}{8k + 6} \right) \Bigg]
$$
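As a quick sanity check (a sketch, not part of the original notebook), even a short BBP sum already agrees with the double-precision math.pi:

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 50
bbp = sum(1 / Decimal(16)**k *
          (Decimal(4)/(8*k + 1) - Decimal(2)/(8*k + 4) -
           Decimal(1)/(8*k + 5) - Decimal(1)/(8*k + 6))
          for k in range(30))
# 30 terms already exceed double precision, so the rounded value matches
print(math.isclose(float(bbp), math.pi, rel_tol=0, abs_tol=1e-15))  # True
```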
End of explanation
"""
dashee87/blogScripts | Jupyter/2017-12-19-charting-the-rise-of-song-collaborations-with-scrapy-and-pandas.ipynb | mit | import scrapy
import re # for text parsing
import logging
class ChartSpider(scrapy.Spider):
name = 'ukChartSpider'
# page to scrape
start_urls = ['http://www.officialcharts.com/charts/']
    # if you want to impose a delay between successive scrapes
# download_delay = 0.5
def parse(self, response):
self.logger.info('Scraping page: %s', response.url)
chart_week = re.sub(' -.*', '',
response.css('.article-heading+ .article-date::text').extract_first().strip())
for (artist, chart_pos, artist_num, track, label, lastweek, peak_pos, weeks_on_chart) in \
zip(response.css('#main .artist a::text').extract(),
response.css('.position::text').extract(),
response.css('#main .artist a::attr(href)').extract(),
response.css('.track .title a::text').extract(),
response.css('.label-cat .label::text').extract(),
response.css('.last-week::text').extract(),
response.css('td:nth-child(4)::text').extract(),
response.css('td:nth-child(5)::text').extract()):
yield {'chart_week': chart_week, 'chart_pos':chart_pos, 'track': track, 'artist': artist,
'artist_num':re.sub('/.*', '', re.sub('/artist/', '', artist_num)),
'label':label, 'last_week':re.findall('\d+|$', lastweek)[0],
'peak_pos':re.findall('\d+|$', peak_pos)[0],
'weeks_on_chart':re.findall('\d+|$', weeks_on_chart)[0]}
# move onto next page (if it exists)
for next_page in response.css('.charts-header-panel:nth-child(1) .chart-date-directions'):
if next_page.css("a::text").extract_first()=='prev':
yield response.follow(next_page, self.parse)
import scrapy
import re # for text parsing
import logging
class ChartSpider(scrapy.Spider):
name = 'usChartSpider'
# page to scrape
start_urls = ['https://www.billboard.com/charts/hot-100/']
    # if you want to impose a delay between successive scrapes
# download_delay = 1.0
def parse(self, response):
self.logger.info('Scraping page: %s', response.url)
chart_week = response.xpath('.//time/@datetime').extract_first()
for num, (artist, track, lastweek, peak_pos, weeks_on_chart) in \
enumerate(zip(response.css('.chart-row__artist::text').extract(),
response.css('.chart-row__song::text').extract(),
response.css('.chart-row__rank .chart-row__last-week::text').extract(),
response.css('.chart-row__top-spot .chart-row__value::text').extract(),
response.css('.chart-row__weeks-on-chart .chart-row__value::text').extract())):
yield {'chart_week': chart_week, 'chart_pos':num+1, 'track': track, 'artist': artist.strip(),
'last_week':re.findall('\d+|$', lastweek)[0],
'peak_pos':re.findall('\d+|$', peak_pos)[0],
'weeks_on_chart':re.findall('\d+|$', weeks_on_chart)[0]}
# move onto next page (if it exists)
for next_page in response.css('.chart-nav__link'):
if next_page.css('a::attr(title)').extract_first() == 'Previous Week':
yield response.follow(next_page, self.parse)
"""
Explanation: Keen readers of this blog (hi Mom!) might have noticed my recent focus on neural networks and deep learning. It's good for popularity, as deep learning posts are automatically cool (I'm really big in China now). Well, I'm going to leave the AI alone this time. In fact, this post won't even really constitute data science. Instead, I'm going to explore a topic that has been on my mind and maybe produce a few graphs.
These days, my main interaction with modern music is through the radio at the gym. It wasn't always like this. I mean I used to be with it, but then they changed what it was. I wouldn't go so far as to say that modern music is weird and scary, but it's certainly getting harder to keep up. It doesn't help that songs now have about 5 people on them. Back in my day, you might include a brief rapper cameo to appear more edgy. So I thought I'd explore how song collaborations have come to dominate the charts.
Note that the accompanying Jupyter notebook can be viewed here. Let's get started!
Scrapy
In my research, I came across a similar post. That one looked at the top 10 of the Billboard charts going back to 1990. Just to be different, I'll primarily focus on the UK singles chart, though I'll also pull data from the Billboard chart. From what I can tell, there's no public API. But it's not too hard to scrape the data off the official site. I'm going to use Scrapy. We'll set up a spider to pull the relevant data and then navigate to the previous week's chart, repeating that process until it finally reaches the first chart in November 1952. This is actually the first time I've ever used Scrapy (hence the motivation for this post), so check out its extensive documentation if you have any issues. Scrapy isn't the only option for web scraping with Python (others are reviewed here), but I like how easy it is to deploy and automate your spiders for larger projects.
End of explanation
"""
from scrapy.crawler import CrawlerProcess
process = CrawlerProcess({
'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
'FEED_FORMAT': 'json',
'FEED_URI': 'uk_charts.json'
})
# minimising the information presented on the scrapy log
logging.getLogger('scrapy').setLevel(logging.WARNING)
process.crawl(ChartSpider)
process.start()
"""
Explanation: Briefly explaining what happened there: We create a class called ChartSpider, essentially our customised spider (called ukChartSpider). We specify the page we want to scrape (start_urls). The spider then selects specific CSS elements (response.css()) within the page that contain the information we want (e.g. #main .artist a represents the artist's name). These selectors may seem complicated, but they're actually quite easy to retrieve with a tool like SelectorGadget. Isolate the elements you want to extract and copy the CSS selectors highlighted by the tool (see image below).
Finally, we'll opt to write the spider output to a json file called uk_charts.json. Scrapy accepts numerous file formats (including CSV), but I went with JSON as it's easier to append to this file type, which may be useful if your spider unexpectedly terminates. We're now ready to launch ukChartSpider. Note that the process for the US Billboard chart is very similar. That code can be found in the accompanying Jupyter notebook.
End of explanation
"""
import pandas as pd
uk_charts = pd.read_json('https://raw.githubusercontent.com/dashee87/blogScripts/master/files/uk_charts.json')
# convert the date column to the correct date format
uk_charts = uk_charts.assign(chart_week=pd.to_datetime(uk_charts['chart_week']))
uk_charts.head(5)
"""
Explanation: Pandas
If that all went to plan, we can now load the json file in as a pandas DataFrame (unless you changed the file path, it should be sitting in your working directory). If you can't wait for the spider to conclude, then you can import the file directly from github (you can also find the corresponding Billboard Hot 100 file there; you might prefer downloading the files and importing them locally).
End of explanation
"""
pd.concat((uk_charts[uk_charts['artist'].str.contains(' FEAT\\.')][0:1],
uk_charts[uk_charts['artist'].str.contains(' FEATURING ')][0:1],
uk_charts[uk_charts['artist'].str.contains(' FEAT ')][0:1],
uk_charts[uk_charts['artist'].str.contains('/')][0:1],
uk_charts[uk_charts['artist'].str.contains(' AND ')][0:1],
uk_charts[uk_charts['artist'].str.contains(' & ')][0:1],
uk_charts[uk_charts['artist'].str.contains(' WITH ')][0:1],
uk_charts[uk_charts['artist'].str.contains(' VS ')][0:1],
uk_charts[uk_charts['artist'].str.contains(' VS. ')][0:1]))
"""
Explanation: That table shows the top 5 singles in the UK for the week starting 8th December 2017. I think I recognise two of those songs. As we're interested in collaborations, you'll notice that we have a few in this top 5 alone, which are marked with an 'FT' in the artist name. Unfortunately, there's no consistent nomenclature to denote collaborations on the UK singles chart (the Billboard chart isn't as bad).
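As a sketch (not from the original post), the separate checks could also be collapsed into a single regular expression passed to str.contains:

```python
import pandas as pd

# One combined pattern covering the collaboration markers identified here
pattern = r' FT | FEAT\.? | FEATURING |/| AND | & | WITH | VS\.? '
artists = pd.Series(['ARTIST A FT ARTIST B', 'SOLO ACT', 'X & Y'])
print(artists.str.contains(pattern).tolist())  # [True, False, True]
```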
End of explanation
"""
pd.concat((uk_charts[uk_charts['artist'].str.contains('AC/DC')].tail(1),
uk_charts[uk_charts['artist'].str.contains('BOB MARLEY AND')].tail(1),
uk_charts[uk_charts['artist'].str.contains('BOB MARLEY &')].tail(1)))
"""
Explanation: Okay, we've identified various terms that denote collaborations of some form. Not too bad. We just need to count the number of instances where the artist name includes one of these terms. Right? Maybe not.
End of explanation
"""
uk_charts[(uk_charts['artist'].str.contains('DEREK AND THE DOMINOES')) &
(uk_charts['weeks_on_chart']==1)]
uk_charts = pd.merge(uk_charts,
uk_charts.groupby('artist').track.nunique().reset_index().rename(
columns={'track': 'one_hit'}).assign(one_hit = lambda x: x.one_hit==1)).sort_values(
['chart_week', 'chart_pos'], ascending=[0, 1]).reset_index(drop=True)
uk_charts.head()
# doing all the same stuff for the scraped Billboard chart data
us_charts =pd.read_json('https://raw.githubusercontent.com/dashee87/blogScripts/master/files/us_charts.json')
us_charts = us_charts.assign(chart_week=pd.to_datetime(us_charts['chart_week']))
us_charts['artist'] = us_charts['artist'].str.upper()
us_charts = pd.merge(us_charts,
us_charts.groupby('artist').track.nunique().reset_index().rename(
columns={'track': 'one_hit'}).assign(one_hit = lambda x: x.one_hit==1)).sort_values(
['chart_week', 'chart_pos'], ascending=[0, 1]).reset_index(drop=True)
us_charts.head()
"""
Explanation: I'm a firm believer that domain expertise is a fundamental component of data science, so good data scientists must always be mindful of AC/DC and Bob Marley. Obviously, these songs shouldn't be considered collaborations, so we need to exclude them from the analysis. Rather than manually evaluating each case, we'll discount artists whose names include '&', 'AND', 'WITH', or 'VS' but who registered more than one song on the chart ('FT' and 'FEATURING' are pretty reliable; please let me know if I'm overlooking some brilliant 1980s post-punk new wave synth-pop group called 'THE FT FEATURING FT'). Obviously, we'll still have some one hit wonders mistaken for collaborations. For example, Derek and the Dominoes had only one hit single (Layla); though we're actually lucky in this instance, as the song was rereleased in 1982 under a slightly different name.
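The one-hit flag computed above boils down to a groupby/nunique plus a merge; here is the same logic sketched on a tiny, made-up frame:

```python
import pandas as pd

# Made-up chart data: 'A & B' charted twice, 'C' only once
toy = pd.DataFrame({'artist': ['A & B', 'A & B', 'C'],
                    'track': ['Song 1', 'Song 2', 'Song 3']})
one_hit = (toy.groupby('artist').track.nunique().reset_index()
              .rename(columns={'track': 'one_hit'})
              .assign(one_hit=lambda x: x.one_hit == 1))
toy = pd.merge(toy, one_hit)
# 'A & B' gets one_hit=False (two distinct tracks); 'C' gets one_hit=True
print(toy)
```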
End of explanation
"""
import seaborn
import matplotlib.pyplot as plt
import datetime
fig, (ax1, ax2) = plt.subplots(2,1)
# we're just going to do the same operation twice
# it's lazy; you could set up a loop or
# combine the two dataframes into one (with a grouping column to tell which country it is)
uk_charts = uk_charts.assign(FT=((uk_charts['artist'].str.contains(' FT '))),
FEAT=((uk_charts['artist'].str.contains(' FEAT | FEAT\\. '))),
FEATURING=((uk_charts['artist'].str.contains(' FEATURING '))),
AND=((uk_charts['artist'].str.contains(' AND ')) &
~(uk_charts['artist'].str.contains(' AND THE ') & ~uk_charts['artist'].str.contains(' THE WEEKND'))
& (uk_charts['one_hit'])),
AMPERSAND=((uk_charts['artist'].str.contains(' & ')) &
~(uk_charts['artist'].str.contains(' & THE ') & ~uk_charts['artist'].str.contains(' THE WEEKND'))
& (uk_charts['one_hit'])),
SLASH=((uk_charts['artist'].str.contains('/')) & (uk_charts['one_hit'])),
WITH=((uk_charts['artist'].str.contains(' WITH ')) & (uk_charts['one_hit'])),
X=((uk_charts['artist'].str.contains(' X ')) &
~(uk_charts['artist'].str.contains('LIBERTY X|TWISTED X|MALCOLM X|RICHARD X|X MEN')) &
(uk_charts['one_hit'])),
VS=((uk_charts['artist'].str.contains(' VS | VS\\. ')) & (uk_charts['one_hit']))).assign(
collab = lambda x: x.FT | x.FEATURING | x.AND | x.AMPERSAND | x.SLASH| x.WITH | x.VS| x.FEAT | x.X)
us_charts = us_charts.assign(FT=((us_charts['artist'].str.contains(' FT '))),
FEATURING=((us_charts['artist'].str.contains(' FEATURING '))),
FEAT=((us_charts['artist'].str.contains(' FEAT | FEAT\\. '))),
AND=((us_charts['artist'].str.contains(' AND ')) &
~(us_charts['artist'].str.contains(' AND THE ') & ~us_charts['artist'].str.contains(' THE WEEKND'))
& (us_charts['one_hit'])),
AMPERSAND=((us_charts['artist'].str.contains(' & ')) &
~(us_charts['artist'].str.contains(' & THE ') & ~us_charts['artist'].str.contains(' THE WEEKND'))
& (us_charts['one_hit'])),
SLASH=((us_charts['artist'].str.contains('/')) & (us_charts['one_hit'])),
WITH=((us_charts['artist'].str.contains(' WITH ')) & (us_charts['one_hit'])),
X=((us_charts['artist'].str.contains(' X ')) &
~(us_charts['artist'].str.contains('LIBERTY X|TWISTED X|MALCOLM X|RICHARD X|X MEN')) &
(us_charts['one_hit'])),
VS=((us_charts['artist'].str.contains(' VS | VS\\. ')) & (us_charts['one_hit']))).assign(
collab = lambda x: x.FT | x.FEATURING | x.FEAT | x.AND | x.AMPERSAND | x.SLASH| x.WITH | x.VS | x.X)
uk_charts.groupby(['chart_week'])['FT','FEATURING', 'FEAT', 'AMPERSAND', 'SLASH', 'AND', 'WITH', 'X', 'VS'].mean().plot(
linewidth=1.5, ax=ax1)
us_charts.groupby(['chart_week'])['FT','FEATURING', 'FEAT', 'AMPERSAND', 'SLASH', 'AND', 'WITH', 'X', 'VS'].mean().plot(
linewidth=1.5, ax=ax2)
ax1.set_xticklabels('')
ax1.set_title('UK Singles Chart 1952-2017')
ax2.set_title('Billboard Hot 100 1958-2017')
for ax in [ax1, ax2]:
ax.set_ylim([0, 0.43])
ax.set_xlabel('')
ax.set_ylabel('')
ax.set_yticklabels(['{:3.0f}'.format(x*100) for x in ax.get_yticks()])
ax.set_xlim([datetime.date(1952,10,1), datetime.date(2018,1,1)])
ax1.legend(bbox_to_anchor=(0.1, 1), loc=2, borderaxespad=0., prop={'size': 10})
fig.text(0.0, 0.5,'Songs on Chart (%)', va='center', rotation='vertical',fontsize=12)
fig.set_size_inches(9, 6)
fig.tight_layout()
ax2.legend_.remove()
plt.show()
"""
Explanation: We've appended a column denoting whether that song represents that artist's only ever entry in the charts. We can use a few more tricks to weed out mislabelled collaborations. We'll ignore entries where the artist name contains 'AND THE' or '& THE'. Again, it's not perfect, but it should get us most of the way (data science in a nutshell). For example, 'Ariana Grande & The Weeknd' would be overlooked, so I'll crudely include a clause to allow The Weeknd related collaborations. With those caveats, let's plot the historical frequency of these various collaboration terms.
End of explanation
"""
fig, ax1 = plt.subplots(1,1)
uk_charts.groupby(['chart_week'])['collab'].mean().plot(ax=ax1, color='#F38181')
us_charts.groupby(['chart_week'])['collab'].mean().plot(ax=ax1, color='#756C83')
ax1.set_xlabel('')
ax1.set_ylabel('Collaborative Songs (%)')
ax1.set_yticklabels(['{:3.0f}'.format(x*100) for x in ax1.get_yticks()])
ax1.set_xlim([datetime.date(1952,10,1), datetime.date(2018,1,1)])
fig.set_size_inches(9, 3.5)
ax1.legend(["UK Singles Chart", "Billboard Hot 100"],
bbox_to_anchor=(0.07, 1), loc=2, borderaxespad=0., prop={'size': 12})
fig.tight_layout()
plt.show()
"""
Explanation: In the 1960s, 70s and 80s, collaborations were relatively rare (~5% of charted singles) and generally took the form of duets. Things changed in the mid 90s, when the number of collaborations increased significantly, with duets dying off and featured artists taking over. I blame rap music. Comparing the two charts, the UK and US prefer 'ft' and 'featuring', respectively (two nations divided by a common language). The Billboard chart doesn't seem to like the '/' notation, while the UK is generally much more eclectic.
Finally, we can plot the proportion of songs that were collaborations (i.e. satisfied any of these conditions).
End of explanation
"""
mne-tools/mne-tools.github.io | dev/_downloads/d418deb5d74ab4363c42409de6a8e6df/label_source_activations.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import matplotlib.patheffects as path_effects
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, apply_inverse
print(__doc__)
data_path = sample.data_path()
label = 'Aud-lh'
meg_path = data_path / 'MEG' / 'sample'
label_fname = meg_path / 'labels' / f'{label}.label'
fname_inv = meg_path / 'sample_audvis-meg-oct-6-meg-inv.fif'
fname_evoked = meg_path / 'sample_audvis-ave.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Load data
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)
src = inverse_operator['src']
"""
Explanation: Extracting the time series of activations in a label
We first apply a dSPM inverse operator to get signed activations in a label
(with positive and negative values) and then compare different strategies
for averaging the time series in the label. We compare a simple average with an
average using the dipole normals (flip mode), and then a PCA,
also using a sign flip.
End of explanation
"""
pick_ori = "normal" # Get signed values to see the effect of sign flip
stc = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori=pick_ori)
label = mne.read_label(label_fname)
stc_label = stc.in_label(label)
modes = ('mean', 'mean_flip', 'pca_flip')
tcs = dict()
for mode in modes:
tcs[mode] = stc.extract_label_time_course(label, src, mode=mode)
print("Number of vertices : %d" % len(stc_label.data))
"""
Explanation: Compute inverse solution
End of explanation
"""
fig, ax = plt.subplots(1)
t = 1e3 * stc_label.times
ax.plot(t, stc_label.data.T, 'k', linewidth=0.5, alpha=0.5)
pe = [path_effects.Stroke(linewidth=5, foreground='w', alpha=0.5),
path_effects.Normal()]
for mode, tc in tcs.items():
ax.plot(t, tc[0], linewidth=3, label=str(mode), path_effects=pe)
xlim = t[[0, -1]]
ylim = [-27, 22]
ax.legend(loc='upper right')
ax.set(xlabel='Time (ms)', ylabel='Source amplitude',
title='Activations in Label %r' % (label.name),
xlim=xlim, ylim=ylim)
mne.viz.tight_layout()
"""
Explanation: View source activations
End of explanation
"""
pick_ori = 'vector'
stc_vec = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori=pick_ori)
data = stc_vec.extract_label_time_course(label, src)
fig, ax = plt.subplots(1)
stc_vec_label = stc_vec.in_label(label)
colors = ['#EE6677', '#228833', '#4477AA']
for ii, name in enumerate('XYZ'):
color = colors[ii]
ax.plot(t, stc_vec_label.data[:, ii].T, color=color, lw=0.5, alpha=0.5,
zorder=5 - ii)
ax.plot(t, data[0, ii], lw=3, color=color, label='+' + name, zorder=8 - ii,
path_effects=pe)
ax.legend(loc='upper right')
ax.set(xlabel='Time (ms)', ylabel='Source amplitude',
title='Mean vector activations in Label %r' % (label.name,),
xlim=xlim, ylim=ylim)
mne.viz.tight_layout()
"""
Explanation: Using vector solutions
It's also possible to compute label time courses for a
:class:mne.VectorSourceEstimate, but only with mode='mean'.
End of explanation
"""
smorton2/think-stats | code/chap03soln.ipynb | gpl-3.0 | from __future__ import print_function, division
%matplotlib inline
import numpy as np
import nsfg
import first
import thinkstats2
import thinkplot
"""
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
preg = nsfg.ReadFemPreg()
live = preg[preg.outcome == 1]
"""
Explanation: Again, I'll load the NSFG pregnancy file and select live births:
End of explanation
"""
hist = thinkstats2.Hist(live.birthwgt_lb, label='birthwgt_lb')
thinkplot.Hist(hist)
thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='Count')
"""
Explanation: Here's the histogram of birth weights:
End of explanation
"""
n = hist.Total()
pmf = hist.Copy()
for x, freq in hist.Items():
pmf[x] = freq / n
"""
Explanation: To normalize the distribution, we could divide through by the total count:
End of explanation
"""
thinkplot.Hist(pmf)
thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='PMF')
"""
Explanation: The result is a Probability Mass Function (PMF).
End of explanation
"""
pmf = thinkstats2.Pmf([1, 2, 2, 3, 5])
pmf
"""
Explanation: More directly, we can create a Pmf object.
End of explanation
"""
pmf.Prob(2)
"""
Explanation: Pmf provides Prob, which looks up a value and returns its probability:
End of explanation
"""
pmf[2]
"""
Explanation: The bracket operator does the same thing.
End of explanation
"""
pmf.Incr(2, 0.2)
pmf[2]
"""
Explanation: The Incr method adds to the probability associated with a given value.
End of explanation
"""
pmf.Mult(2, 0.5)
pmf[2]
"""
Explanation: The Mult method multiplies the probability associated with a value.
End of explanation
"""
pmf.Total()
"""
Explanation: Total returns the total probability (which is no longer 1, because we changed one of the probabilities).
End of explanation
"""
pmf.Normalize()
pmf.Total()
"""
Explanation: Normalize divides through by the total probability, making it 1 again.
End of explanation
"""
pmf = thinkstats2.Pmf(live.prglngth, label='prglngth')
"""
Explanation: Here's the PMF of pregnancy length for live births.
End of explanation
"""
thinkplot.Hist(pmf)
thinkplot.Config(xlabel='Pregnancy length (weeks)', ylabel='Pmf')
"""
Explanation: Here's what it looks like plotted with Hist, which makes a bar graph.
End of explanation
"""
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel='Pregnancy length (weeks)', ylabel='Pmf')
"""
Explanation: Here's what it looks like plotted with Pmf, which makes a step function.
End of explanation
"""
live, firsts, others = first.MakeFrames()
"""
Explanation: We can use MakeFrames to return DataFrames for all live births, first babies, and others.
End of explanation
"""
first_pmf = thinkstats2.Pmf(firsts.prglngth, label='firsts')
other_pmf = thinkstats2.Pmf(others.prglngth, label='others')
"""
Explanation: Here are the distributions of pregnancy length.
End of explanation
"""
width=0.45
axis = [27, 46, 0, 0.6]
thinkplot.PrePlot(2, cols=2)
thinkplot.Hist(first_pmf, align='right', width=width)
thinkplot.Hist(other_pmf, align='left', width=width)
thinkplot.Config(xlabel='Pregnancy length (weeks)', ylabel='PMF', axis=axis)
thinkplot.PrePlot(2)
thinkplot.SubPlot(2)
thinkplot.Pmfs([first_pmf, other_pmf])
thinkplot.Config(xlabel='Pregnancy length (weeks)', axis=axis)
"""
Explanation: And here's the code that replicates one of the figures in the chapter.
End of explanation
"""
weeks = range(35, 46)
diffs = []
for week in weeks:
p1 = first_pmf.Prob(week)
p2 = other_pmf.Prob(week)
diff = 100 * (p1 - p2)
diffs.append(diff)
thinkplot.Bar(weeks, diffs)
thinkplot.Config(xlabel='Pregnancy length (weeks)', ylabel='Difference (percentage points)')
"""
Explanation: Here's the code that generates a plot of the difference in probability (in percentage points) between first babies and others, for each week of pregnancy (showing only pregnancies considered "full term").
End of explanation
"""
d = { 7: 8, 12: 8, 17: 14, 22: 4,
27: 6, 32: 12, 37: 8, 42: 3, 47: 2 }
pmf = thinkstats2.Pmf(d, label='actual')
"""
Explanation: Biasing and unbiasing PMFs
Here's the example in the book showing operations we can perform with Pmf objects.
Suppose we have the following distribution of class sizes.
End of explanation
"""
def BiasPmf(pmf, label):
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf.Mult(x, x)
new_pmf.Normalize()
return new_pmf
"""
Explanation: This function computes the biased PMF we would get if we surveyed students and asked about the size of the classes they are in.
End of explanation
"""
biased_pmf = BiasPmf(pmf, label='observed')
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, biased_pmf])
thinkplot.Config(xlabel='Class size', ylabel='PMF')
"""
Explanation: The following graph shows the difference between the actual and observed distributions.
End of explanation
"""
print('Actual mean', pmf.Mean())
print('Observed mean', biased_pmf.Mean())
"""
Explanation: The observed mean is substantially higher than the actual.
End of explanation
"""
def UnbiasPmf(pmf, label=None):
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf[x] *= 1/x
new_pmf.Normalize()
return new_pmf
"""
Explanation: If we were only able to collect the biased sample, we could "unbias" it by applying the inverse operation.
End of explanation
"""
unbiased = UnbiasPmf(biased_pmf, label='unbiased')
print('Unbiased mean', unbiased.Mean())
"""
Explanation: We can unbias the biased PMF:
End of explanation
"""
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, unbiased])
thinkplot.Config(xlabel='Class size', ylabel='PMF')
"""
Explanation: And plot the two distributions to confirm they are the same.
End of explanation
"""
import numpy as np
import pandas
array = np.random.randn(4, 2)
df = pandas.DataFrame(array)
df
"""
Explanation: Pandas indexing
Here's an example of a small DataFrame.
End of explanation
"""
columns = ['A', 'B']
df = pandas.DataFrame(array, columns=columns)
df
"""
Explanation: We can specify column names when we create the DataFrame:
End of explanation
"""
index = ['a', 'b', 'c', 'd']
df = pandas.DataFrame(array, columns=columns, index=index)
df
"""
Explanation: We can also specify an index that contains labels for the rows.
End of explanation
"""
df['A']
"""
Explanation: Normal indexing selects columns.
End of explanation
"""
df.loc['a']
"""
Explanation: We can use the loc attribute to select rows.
End of explanation
"""
df.iloc[0]
"""
Explanation: If you don't want to use the row labels and prefer to access the rows using integer indices, you can use the iloc attribute:
End of explanation
"""
indices = ['a', 'c']
df.loc[indices]
"""
Explanation: loc can also take a list of labels.
End of explanation
"""
df['a':'c']
"""
Explanation: If you provide a slice of labels, DataFrame uses it to select rows.
End of explanation
"""
df[0:2]
"""
Explanation: If you provide a slice of integers, DataFrame selects rows by integer index.
End of explanation
"""
resp = nsfg.ReadFemResp()
# Solution
pmf = thinkstats2.Pmf(resp.numkdhh, label='numkdhh')
# Solution
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel='Number of children', ylabel='PMF')
# Solution
biased = BiasPmf(pmf, label='biased')
# Solution
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, biased])
thinkplot.Config(xlabel='Number of children', ylabel='PMF')
# Solution
pmf.Mean()
# Solution
biased.Mean()
"""
Explanation: But notice that one method includes the last elements of the slice and one does not.
In general, I recommend giving labels to the rows and names to the columns, and using them consistently.
Exercises
Exercise: Something like the class size paradox appears if you survey children and ask how many children are in their family. Families with many children are more likely to appear in your sample, and families with no children have no chance to be in the sample.
Use the NSFG respondent variable numkdhh to construct the actual distribution for the number of children under 18 in the respondents' households.
Now compute the biased distribution we would see if we surveyed the children and asked them how many children under 18 (including themselves) are in their household.
Plot the actual and biased distributions, and compute their means.
End of explanation
"""
live, firsts, others = first.MakeFrames()
preg_map = nsfg.MakePregMap(live)
# Solution
hist = thinkstats2.Hist()
for caseid, indices in preg_map.items():
if len(indices) >= 2:
        pair = live.loc[indices[0:2]].prglngth
diff = np.diff(pair)[0]
hist[diff] += 1
# Solution
thinkplot.Hist(hist)
# Solution
pmf = thinkstats2.Pmf(hist)
pmf.Mean()
"""
Explanation: Exercise: I started this book with the question, "Are first babies more likely to be late?" To address it, I computed the difference in means between groups of babies, but I ignored the possibility that there might be a difference between first babies and others for the same woman.
To address this version of the question, select respondents who have at least two live births and compute pairwise differences. Does this formulation of the question yield a different result?
Hint: use nsfg.MakePregMap:
End of explanation
"""
import relay
results = relay.ReadResults()
speeds = relay.GetSpeeds(results)
speeds = relay.BinData(speeds, 3, 12, 100)
pmf = thinkstats2.Pmf(speeds, 'actual speeds')
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel='Speed (mph)', ylabel='PMF')
# Solution
def ObservedPmf(pmf, speed, label=None):
"""Returns a new Pmf representing speeds observed at a given speed.
The chance of observing a runner is proportional to the difference
in speed.
Args:
pmf: distribution of actual speeds
speed: speed of the observing runner
label: string label for the new dist
Returns:
Pmf object
"""
new = pmf.Copy(label=label)
for val in new.Values():
diff = abs(val - speed)
new[val] *= diff
new.Normalize()
return new
# Solution
biased = ObservedPmf(pmf, 7, label='observed speeds')
thinkplot.Pmf(biased)
thinkplot.Config(xlabel='Speed (mph)', ylabel='PMF')
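The reweighting in ObservedPmf can be checked by hand on a two-speed field: for an observer at 6 mph with runners at 5 and 9 mph in equal shares, the weights |5-6|=1 and |9-6|=3 give observed probabilities 0.25 and 0.75. A dict-based sketch of the same computation:

```python
actual = {5.0: 0.5, 9.0: 0.5}   # toy field with two speeds
observer = 6.0

# Weight each speed by its distance from the observer, then renormalize.
observed = {v: p * abs(v - observer) for v, p in actual.items()}
norm = sum(observed.values())
observed = {v: p / norm for v, p in observed.items()}
print(observed)  # {5.0: 0.25, 9.0: 0.75}
```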
"""
Explanation: Exercise: In most foot races, everyone starts at the same time. If you are a fast runner, you usually pass a lot of people at the beginning of the race, but after a few miles everyone around you is going at the same speed.
When I ran a long-distance (209 miles) relay race for the first time, I noticed an odd phenomenon: when I overtook another runner, I was usually much faster, and when another runner overtook me, he was usually much faster.
At first I thought that the distribution of speeds might be bimodal; that is, there were many slow runners and many fast runners, but few at my speed.
Then I realized that I was the victim of a bias similar to the effect of class size. The race was unusual in two ways: it used a staggered start, so teams started at different times; also, many teams included runners at different levels of ability.
As a result, runners were spread out along the course with little relationship between speed and location. When I joined the race, the runners near me were (pretty much) a random sample of the runners in the race.
So where does the bias come from? During my time on the course, the chance of overtaking a runner, or being overtaken, is proportional to the difference in our speeds. I am more likely to catch a slow runner, and more likely to be caught by a fast runner. But runners at the same speed are unlikely to see each other.
Write a function called ObservedPmf that takes a Pmf representing the actual distribution of runners’ speeds, and the speed of a running observer, and returns a new Pmf representing the distribution of runners’ speeds as seen by the observer.
To test your function, you can use relay.py, which reads the results from the James Joyce Ramble 10K in Dedham MA and converts the pace of each runner to mph.
Compute the distribution of speeds you would observe if you ran a relay race at 7 mph with this group of runners.
End of explanation
"""
|
spacy-io/thinc | examples/04_parallel_training_ray.ipynb | mit | # To let ray install its own version in Colab
!pip uninstall -y pyarrow
# You might need to restart the Colab runtime
!pip install --upgrade "thinc>=8.0.0a0" "ml_datasets>=0.2.0a0" ray psutil setproctitle
"""
Explanation: Parallel training with Thinc and Ray
This notebook is based on one of Ray's tutorials and shows how to use Thinc and Ray to implement parallel training. It includes implementations for both synchronous and asynchronous parameter server training.
End of explanation
"""
import thinc
from thinc.api import chain, Relu, Softmax
@thinc.registry.layers("relu_relu_softmax.v1")
def make_relu_relu_softmax(hidden_width: int, dropout: float):
return chain(
Relu(hidden_width, dropout=dropout),
Relu(hidden_width, dropout=dropout),
Softmax(),
)
CONFIG = """
[training]
iterations = 200
batch_size = 128
[evaluation]
batch_size = 256
frequency = 10
[model]
@layers = "relu_relu_softmax.v1"
hidden_width = 128
dropout = 0.2
[optimizer]
@optimizers = "Adam.v1"
[ray]
num_workers = 2
object_store_memory = 3000000000
num_cpus = 2
"""
"""
Explanation: Let's start with a simple model and config file. You can edit the CONFIG string within the file, or copy it out to a separate file and use Config.from_disk to load it from a path. The [ray] section contains the settings to use for Ray. (We're using a config for convenience, but you don't have to – you can also just hard-code the values.)
End of explanation
"""
import ml_datasets
MNIST = ml_datasets.mnist()
def get_data_loader(model, batch_size):
(train_X, train_Y), (dev_X, dev_Y) = MNIST
train_batches = model.ops.multibatch(batch_size, train_X, train_Y, shuffle=True)
dev_batches = model.ops.multibatch(batch_size, dev_X, dev_Y, shuffle=True)
return train_batches, dev_batches
def evaluate(model, batch_size):
dev_X, dev_Y = MNIST[1]
correct = 0
total = 0
for X, Y in model.ops.multibatch(batch_size, dev_X, dev_Y):
Yh = model.predict(X)
correct += (Yh.argmax(axis=1) == Y.argmax(axis=1)).sum()
total += Yh.shape[0]
return correct / total
"""
Explanation: Just like in the original Ray tutorial, we're using the MNIST data (via our ml-datasets package) and are setting up two helper functions:
get_data_loader: Return shuffled batches of a given batch size.
evaluate: Evaluate a model on batches of data.
End of explanation
"""
from collections import defaultdict
def get_model_weights(model):
params = defaultdict(dict)
for node in model.walk():
for name in node.param_names:
if node.has_param(name):
params[node.id][name] = node.get_param(name)
return params
def set_model_weights(model, params):
for node in model.walk():
for name, param in params[node.id].items():
node.set_param(name, param)
def get_model_grads(model):
grads = defaultdict(dict)
for node in model.walk():
for name in node.grad_names:
grads[node.id][name] = node.get_grad(name)
return grads
def set_model_grads(model, grads):
for node in model.walk():
for name, grad in grads[node.id].items():
node.set_grad(name, grad)
"""
Explanation: Setting up Ray
Getters and setters for gradients and weights
Using Thinc's Model.walk method, we can implement the following helper functions to get and set weights and parameters for each node in a model's tree. Those functions can later be used by the parameter server and workers.
End of explanation
"""
import ray
@ray.remote
class ParameterServer:
def __init__(self, model, optimizer):
self.model = model
self.optimizer = optimizer
def apply_gradients(self, *worker_grads):
summed_gradients = defaultdict(dict)
for grads in worker_grads:
for node_id, node_grads in grads.items():
for name, grad in node_grads.items():
if name in summed_gradients[node_id]:
summed_gradients[node_id][name] += grad
else:
summed_gradients[node_id][name] = grad.copy()
set_model_grads(self.model, summed_gradients)
self.model.finish_update(self.optimizer)
return get_model_weights(self.model)
def get_weights(self):
return get_model_weights(self.model)
"""
Explanation: Defining the Parameter Server
The parameter server will hold a copy of the model. During training, it will:
Receive gradients and apply them to its model.
Send the updated model back to the workers.
The @ray.remote decorator defines a remote process. It wraps the ParameterServer class and allows users to instantiate it as a remote actor. (Source)
Here, the ParameterServer is initialized with a model and optimizer, and has a method to apply gradients received by the workers and a method to get the weights from the current model, using the helper functions defined above.
End of explanation
"""
from thinc.api import fix_random_seed
@ray.remote
class DataWorker:
def __init__(self, model, batch_size=128, seed=0):
self.model = model
fix_random_seed(seed)
self.data_iterator = iter(get_data_loader(model, batch_size)[0])
self.batch_size = batch_size
def compute_gradients(self, weights):
set_model_weights(self.model, weights)
try:
data, target = next(self.data_iterator)
except StopIteration: # When the epoch ends, start a new epoch.
            self.data_iterator = iter(get_data_loader(self.model, self.batch_size)[0])
data, target = next(self.data_iterator)
guesses, backprop = self.model(data, is_train=True)
backprop((guesses - target) / target.shape[0])
return get_model_grads(self.model)
"""
Explanation: Defining the Worker
The worker will also hold a copy of the model. During training it will continuously evaluate data and send gradients to the parameter server. The worker will synchronize its model with the Parameter Server model weights. (Source)
To compute the gradients during training, we can call the model on a batch of data (and set is_train=True). This returns the predictions and a backprop callback to update the model.
End of explanation
"""
from thinc.api import registry, Config
C = registry.resolve(Config().from_str(CONFIG))
C
"""
Explanation: Setting up the model
Using the CONFIG defined above, we can load the settings and set up the model and optimizer. Thinc's registry.resolve will parse the config, resolve all references to registered functions and return a dict of the resolved objects.
End of explanation
"""
optimizer = C["optimizer"]
model = C["model"]
(train_X, train_Y), (dev_X, dev_Y) = MNIST
model.initialize(X=train_X[:5], Y=train_Y[:5])
"""
Explanation: We didn't specify all the dimensions in the model, so we need to pass in a batch of data to finish initialization. This lets Thinc infer the missing shapes.
End of explanation
"""
ray.init(
ignore_reinit_error=True,
object_store_memory=C["ray"]["object_store_memory"],
num_cpus=C["ray"]["num_cpus"],
)
ps = ParameterServer.remote(model, optimizer)
workers = []
for i in range(C["ray"]["num_workers"]):
worker = DataWorker.remote(model, batch_size=C["training"]["batch_size"], seed=i)
workers.append(worker)
"""
Explanation: Training
Synchronous Parameter Server training
We can now create a synchronous parameter server training scheme:
Call ray.init with the settings defined in the config.
Instantiate a process for the ParameterServer.
Create multiple workers (n_workers, as defined in the config).
Though this is not specifically mentioned in the Ray tutorial, we're setting a different random seed for the workers here.
Otherwise the workers may iterate over the batches in the same order.
End of explanation
"""
current_weights = ps.get_weights.remote()
for i in range(C["training"]["iterations"]):
gradients = [worker.compute_gradients.remote(current_weights) for worker in workers]
current_weights = ps.apply_gradients.remote(*gradients)
if i % C["evaluation"]["frequency"] == 0:
set_model_weights(model, ray.get(current_weights))
accuracy = evaluate(model, C["evaluation"]["batch_size"])
print(f"{i} \taccuracy: {accuracy:.3f}")
print(f"Final \taccuracy: {accuracy:.3f}")
ray.shutdown()
"""
Explanation: On each iteration, we now compute the gradients for each worker. After all gradients are available, ParameterServer.apply_gradients is called to calculate the update. The frequency setting in the evaluation config specifies how often to evaluate – for instance, a frequency of 10 means we're only evaluating every 10th epoch.
End of explanation
"""
ray.init(
ignore_reinit_error=True,
object_store_memory=C["ray"]["object_store_memory"],
num_cpus=C["ray"]["num_cpus"],
)
ps = ParameterServer.remote(model, optimizer)
workers = []
for i in range(C["ray"]["num_workers"]):
worker = DataWorker.remote(model, batch_size=C["training"]["batch_size"], seed=i)
workers.append(worker)
current_weights = ps.get_weights.remote()
gradients = {}
for worker in workers:
gradients[worker.compute_gradients.remote(current_weights)] = worker
for i in range(C["training"]["iterations"] * C["ray"]["num_workers"]):
ready_gradient_list, _ = ray.wait(list(gradients))
ready_gradient_id = ready_gradient_list[0]
worker = gradients.pop(ready_gradient_id)
    current_weights = ps.apply_gradients.remote(ready_gradient_id)
gradients[worker.compute_gradients.remote(current_weights)] = worker
if i % C["evaluation"]["frequency"] == 0:
set_model_weights(model, ray.get(current_weights))
accuracy = evaluate(model, C["evaluation"]["batch_size"])
print(f"{i} \taccuracy: {accuracy:.3f}")
print(f"Final \taccuracy: {accuracy:.3f}")
ray.shutdown()
"""
Explanation: Asynchronous Parameter Server Training
Here, workers will asynchronously compute the gradients given its current weights and send these gradients to the parameter server as soon as they are ready. When the Parameter server finishes applying the new gradient, the server will send back a copy of the current weights to the worker. The worker will then update the weights and repeat. (Source)
The setup looks the same and we can reuse the config. Make sure to call ray.shutdown() to clean up resources and processes before calling ray.init again.
End of explanation
"""
|
bramacchino/numberSense | Plotly-Mesh3d.ipynb.ipynb | mit | from IPython.display import HTML
HTML('<iframe src=https://plot.ly/~empet/13475/ width=850 height=350></iframe>')
"""
Explanation: Generating and Visualizing Alpha Shapes with Python Plotly
Notebook available at https://plot.ly/~notebook_demo/125/generating-and-visualizing-alpha-shapes/
Starting with a finite set of 3D points, Plotly can generate a Mesh3d object that, depending on a key value, can be the convex hull of that set, its Delaunay triangulation, or an alpha shape.
This notebook is devoted to the presentation of the alpha shape as a computational geometric object, its interpretation, and visualization with Plotly.
Alpha shape of a finite point set $S$ is a polytope whose structure depends only on the set $S$ and a parameter $\alpha$.
Although it is less well known than other computational geometric objects, it has been used in many practical applications in pattern recognition, surface reconstruction, molecular structure modeling, porous media, and astrophysics.
In order to understand how the algorithm underlying Mesh3d works, we present shortly a few notions of Computational Geometry.
Simplicial complexes and Delaunay triangulation
Let S be a finite set of 2D or 3D points. A point is called $0$-simplex or vertex. The convex hull of:
- two distinct points is a 1-simplex or edge;
- three non-colinear points is a 2-simplex or triangle;
- four non-coplanar points in $\mathbb{R}^3$ is a 3-simplex or tetrahedron;
End of explanation
"""
HTML('<iframe src=https://plot.ly/~empet/13503/ width=600 height=475></iframe>')
"""
Explanation: If $T$ is the set of points defining a $k$-simplex, then any proper subset of $T$ defines an $\ell$-simplex, with $\ell<k$.
These $\ell$-simplexes (or $\ell$-simplices) are called faces.
A 2-simplex has three $1$-simplexes and three $0$-simplexes as faces, whereas a tetrahedron has as faces four 2-simplexes, six 1-simplexes, and four 0-simplexes.
k-simplexes are building blocks for different structures in Computational Geometry, mainly for creating meshes from point clouds.
Let $S$ be a finite set in $\mathbb{R}^d$, $d=2,3$ (i.e. a set of 2D or 3D points). A collection $\mathcal{K}$ of k-simplexes, $0\leq k\leq d$, having as vertices the points of $S$,
is a simplicial complex if its simplexes have the following properties:
1. If $\sigma$ is a simplex in $\mathcal{K}$, then all its faces are also simplexes in $\mathcal{K}$;
2. If $\sigma, \tau$ are two simplexes in $\mathcal{K}$, then their intersection is either empty or a face in both simplexes.
The next figure illustrates a simplicial complex(left), and a collection of $k$-simplexes (right), $0\leq k\leq 2$
that do not form a simplicial complex because the condition 2 in the definition above is violated.
End of explanation
"""
HTML('<iframe src=https://plot.ly/~empet/13497/ width=550 height=550></iframe>')
"""
Explanation: Triangular meshes used in computer graphics are examples of simplicial complexes.
The underlying space of a simplicial complex, $\mathcal{K}$, denoted $|\mathcal{K}|$,
is the union of its simplexes, i.e. it is a region in the plane or in 3D space, depending on whether d=2 or 3.
A subcomplex of the simplicial complex $\mathcal{K}$ is a collection, $\mathcal{L}$, of simplexes in $\mathcal{K}$ that also form a simplicial complex.
The points of a finite set $S$ in $\mathbb{R}^2$ (respectively $\mathbb{R}^3$) are in general position if no $3$ (resp 4) points are collinear (coplanar), and no 4 (resp 5) points lie on the same circle (sphere).
A particular simplicial complex associated to a finite set of 2D or 3D points, in general position, is the Delaunay triangulation.
A triangulation of a finite point set $S \subset \mathbb{R}^2$ (or $\mathbb{R}^3$)
is a collection $\mathcal{T}$ of triangles (tetrahedra),
such that:
1. The union of all triangles (tetrahedra) in $\mathcal{T}$ is the convex hull of $S$.
2. The union of all vertices of triangles (tetrahedra) in $\mathcal{T}$ is the set $S$.
3. For every distinct pair $\sigma, \tau \in \mathcal{T}$, the intersection $\sigma \cap \tau$ is either empty or a common face of $\sigma$ and $\tau$.
A Delaunay triangulation of the set $S\subset\mathbb{R}^2$ ($\mathbb{R}^3$) is a triangulation with the property
that the open balls bounded by the circumcircles (circumspheres) of the triangulation
triangles (tetrahedra) contain no point in $S$. One says that these balls are empty.
If the points of $S$ are in general position, then the Delaunay triangulation of $S$ is unique.
Here is an example of Delaunay triangulation of a set of ten 2D points. It illustrates the emptiness of two balls bounded by circumcircles.
End of explanation
"""
HTML('<iframe src=https://plot.ly/~empet/13479/ width=825 height=950></iframe>')
"""
Explanation: Alpha shape of a finite set of points
The notion of Alpha Shape was introduced by Edelsbrunner with the aim to give a mathematical description of the shape of a point set.
In this notebook we give a constructive definition of this geometric structure. A more detailed approach of 3D alpha shapes can be found in the original paper.
An intuitive description of the alpha shape was given by Edelsbrunner and his coauthor
in a preprint of the last paper mentioned above:
A huge mass of ice-cream fills a region in the 3D space,
and the point set $S$ consists in hard chocolate pieces spread in the ice-cream mass.
Using a sphere-formed ice-cream spoon we carve out the ice-cream such that to avoid bumping into
chocolate pieces. At the end of this operation the region containing the ciocolate pieces
and the remaining ice cream is bounded by caps, arcs and points of chocolate. Straightening
all round faces to triangles and line segments we get the intuitive image of the
alpha shape of the point set $S$.
Now we give the steps of the computational alpha shape construction.
Let $S$ be a finite set of points from $\mathbb{R}^d$, in general position, $\mathcal{D}$ its Delaunay triangulation
and $\alpha$ a positive number.
Select the d-simplexes of $\mathcal{D}$ (i.e. triangles in the case d=2, respectively tetrahedra
for d=3) whose circumsphere has the radius less than $\alpha$. These simplexes and their faces form
a simplicial subcomplex of the Delaunay triangulation, $\mathcal{D}$.
It is denoted $\mathcal{C}_\alpha$, and called $\alpha$-complex.
The $\alpha$-shape of the set $S$ is defined by its authors either as the underlying space of the $\alpha$-complex,
i.e. the union of all its simplexes or as the boundary of the $\alpha$-complex.
The boundary of the $\alpha$-complex is the subcomplex consisting in all k-simplexes, $0\leq k<d$, that are faces of a single $d$-simplex (these are called external faces).
In the ice-cream example the alpha shape was defined as the boundary of the alpha-complex.
The underlying space of the $\alpha$-complex is the region where the ice-cream spoon has no access, because its radius ($\alpha$) exceeds the radius of circumscribed spheres to tetrahedra formed by pieces of chocolate.
To get insight into the process of construction of an alpha shape we illustrate it first for a set of 2D points.
The following panel displays the Delaunay triangulation of a set of 2D points, and a sequence of $\alpha$-complexes (and alpha shapes):
End of explanation
"""
HTML('<iframe src=https://plot.ly/~empet/13481/ width=900 height=950></iframe>')
"""
Explanation: We notice that the Delaunay triangulation has as boundary a convex set (it is a triangulation of the convex hull
of the given point set).
Each $\alpha$-complex is obtained from the Delaunay triangulation by removing the triangles whose circumcircle has radius greater than or equal to alpha.
In the last subplot the triangles of the $0.115$-complex are filled in with light blue. The filled in region is the underlying space of the $0.115$-complex.
The $0.115$-alpha shape of the given point set can be considered either the filled in region or its boundary.
This example illustrates that the underlying space of an $\alpha$-complex is neither convex nor necessarily connected. It can consist of many connected components (in our illustration above, $|\mathcal{C}_{0.115}|$ has three components).
In a family of alpha shapes, the parameter $\alpha$ controls the level of detail of the associated alpha shape. If $\alpha$ decreases to zero, the corresponding alpha shape degenerates to the point set, $S$, while if it tends to infinity the alpha shape tends to the convex hull of the set $S$.
Plotly Mesh3d
In order to generate the alpha shape of a given set of 3D points corresponding to a parameter $\alpha$,
the Delaunay triagulation or the convex hull we define a Mesh3d object or a dict. The real value
of the key alphahull points out the mesh type to be generated:
alphahull=$1/\alpha$ generates the $\alpha$-shape, -1 corresponds to the Delaunay
triangulation and 0, to the convex hull of the point set.
The other keys in the definition of a Mesh3d are given here.
Mesh3d generates and displays an $\alpha$-shape as the boundary of the $\alpha$-complex.
An intuitive idea on the topological structure modification, as $\alpha=1/$alphahull varies can be gained from the following three different alpha shapes of the same point set:
End of explanation
"""
import numpy as np
"""
Explanation: We notice in the subplots above that as alphahull increases, i.e. $\alpha$ decreases, some parts of the alpha shape shrink and
develop enclosed void regions. The last plotted alpha shape points out a polytope that contains faces of tetrahedra,
and patches of triangles.
In some cases as $\alpha$ varies it is also possible to develop components that are strings of edges and even isolated points.
Such experimental results suggested the use of alpha shapes in modeling molecular structure.
A web search gives many results related to applications of alpha shapes in structural molecular biology.
Here
is an alpha shape illustrating a molecular-like structure associated to a point set of 5000 points.
Generating an alpha shape with Mesh3d
End of explanation
"""
pts=np.loadtxt('data-file.txt')
x,y,z=zip(*pts)
import plotly.plotly as py
from plotly.graph_objs import *
from plotly import tools as tls
"""
Explanation: Load data:
End of explanation
"""
points=Scatter3d(mode = 'markers',
name = '',
x =x,
y= y,
z= z,
marker = Marker( size=2, color='#458B00' )
)
simplexes = Mesh3d(alphahull =10.0,
name = '',
x =x,
y= y,
z= z,
                   color='#90EE90', #set the color of simplexes in alpha shape
opacity=0.15
)
x_style = dict( zeroline=False, range=[-2.85, 4.25], tickvals=np.linspace(-2.85, 4.25, 5)[1:].round(1))
y_style = dict( zeroline=False, range=[-2.65, 1.32], tickvals=np.linspace(-2.65, 1.32, 4)[1:].round(1))
z_style = dict( zeroline=False, range=[-3.67,1.4], tickvals=np.linspace(-3.67, 1.4, 5).round(1))
layout=Layout(title='Alpha shape of a set of 3D points. Alpha=0.1',
width=500,
height=500,
scene = Scene(
xaxis = x_style,
yaxis = y_style,
zaxis = z_style
)
)
fig=Figure(data=Data([points, simplexes]), layout=layout)
py.sign_in('empet', 'smtbajoo93')
py.iplot(fig, filename='3D-AlphaS-ex')
"""
Explanation: Define two traces: one for plotting the point set and another for the alpha shape:
End of explanation
"""
from scipy.spatial import Delaunay
def sq_norm(v): #squared norm
return np.linalg.norm(v)**2
"""
Explanation: Generating alpha shape of a set of 2D points
We construct the alpha shape of a set of 2D points from the Delaunay triangulation,
defined as a scipy.spatial.Delaunay object.
End of explanation
"""
def circumcircle(points,simplex):
A=[points[simplex[k]] for k in range(3)]
M=[[1.0]*4]
M+=[[sq_norm(A[k]), A[k][0], A[k][1], 1.0 ] for k in range(3)]
M=np.asarray(M, dtype=np.float32)
S=np.array([0.5*np.linalg.det(M[1:,[0,2,3]]), -0.5*np.linalg.det(M[1:,[0,1,3]])])
a=np.linalg.det(M[1:, 1:])
b=np.linalg.det(M[1:, [0,1,2]])
return S/a, np.sqrt(b/a+sq_norm(S)/a**2) #center=S/a, radius=np.sqrt(b/a+sq_norm(S)/a**2)
"""
Explanation: Compute the circumcenter and circumradius of a triangle (see their definitions here):
End of explanation
"""
def get_alpha_complex(alpha, points, simplexes):
#alpha is the parameter for the alpha shape
#points are given data points
#simplexes is the list of indices in the array of points
#that define 2-simplexes in the Delaunay triangulation
    return list(filter(lambda simplex: circumcircle(points, simplex)[1] < alpha, simplexes))  # list, not a lazy filter: the result is iterated more than once below
pts=np.loadtxt('data-ex-2d.txt')
tri = Delaunay(pts)
colors=['#C0223B', '#404ca0', 'rgba(173,216,230, 0.5)']# colors for vertices, edges and 2-simplexes
"""
Explanation: Filter the Delaunay triangulation to get the $\alpha$-complex:
End of explanation
"""
def Plotly_data(points, complex_s):
#points are the given data points,
#complex_s is the list of indices in the array of points defining 2-simplexes(triangles)
#in the simplicial complex to be plotted
X=[]
Y=[]
for s in complex_s:
X+=[points[s[k]][0] for k in [0,1,2,0]]+[None]
Y+=[points[s[k]][1] for k in [0,1,2,0]]+[None]
return X,Y
def make_trace(x, y, point_color=colors[0], line_color=colors[1]):# define the trace
#for an alpha complex
return Scatter(mode='markers+lines', #set vertices and
#edges of the alpha-complex
name='',
x=x,
y=y,
marker=Marker(size=6.5, color=point_color),
line=Line(width=1.25, color=line_color),
)
def make_XAxis(axis_style):
return XAxis(axis_style)
def make_YAxis(axis_style):
return YAxis(axis_style)
figure = tls.make_subplots(rows=1, cols=2,
subplot_titles=('Delaunay triangulation', 'Alpha shape, alpha=0.15'),
horizontal_spacing=0.1,
)
pl_width=800
pl_height=460
title = 'Delaunay triangulation and Alpha Complex/Shape for a Set of 2D Points'
figure['layout'].update(title=title,
font= Font(family="Open Sans, sans-serif"),
showlegend=False,
hovermode='closest',
autosize=False,
width=pl_width,
height=pl_height,
margin=Margin(
l=65,
r=65,
b=85,
t=120
),
shapes=[]
)
axis_style = dict(showline=True,
mirror=True,
zeroline=False,
showgrid=False,
showticklabels=True,
range=[-0.1,1.1],
tickvals=[0, 0.2, 0.4, 0.6, 0.8, 1.0],
ticklen=5
)
for s in range(1,3):
figure['layout'].update({'xaxis{}'.format(s): make_XAxis(axis_style)})# set xaxis style
figure['layout'].update({'yaxis{}'.format(s): make_YAxis(axis_style)})# set yaxis style
alpha_complex=get_alpha_complex(0.15, pts, tri.simplices)
X,Y=Plotly_data(pts, tri.simplices)# get data for Delaunay triangulation
figure.append_trace(make_trace(X, Y), 1, 1)
X,Y=Plotly_data(pts, alpha_complex)# data for alpha complex
figure.append_trace(make_trace(X, Y), 1, 2)
for s in alpha_complex: #fill in the triangles of the alpha complex
A=pts[s[0]]
B=pts[s[1]]
C=pts[s[2]]
figure['layout']['shapes'].append(dict(path='M '+str(A[0])+',' +str(A[1])+' '+'L '+\
str(B[0])+', '+str(B[1])+ ' '+'L '+\
str(C[0])+', '+str(C[1])+' Z',
fillcolor='rgba(173,216,230, 0.5)',
line=Line(color=colors[1], width=1.25),
xref='x2',
yref='y2'
)
)
py.iplot(figure, filename='2D-AlphaS-ex', width=850)
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: Get data for Plotly plot of a subcomplex of the Delaunay triangulation:
End of explanation
"""
|
tensorflow/tfx-addons | tfx_addons/schema_curation/example/taxi_example_colab.ipynb | apache-2.0 | !pip install -U tfx
x = !pwd
if 'schemacomponent' not in str(x):
!git clone https://github.com/rcrowe-google/schemacomponent
%cd schemacomponent/example
"""
Explanation: <a href="https://colab.research.google.com/github/rcrowe-google/schemacomponent/blob/Nirzari%2Ffeature%2Fexample/example/taxi_example_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Chicago taxi example using TFX schema curation custom component
This example demonstrate the use of schema curation custom component. User defined function schema_fn defined in module_file.py is used to change schema feature tips from required to optional using schema curation component.
base code taken from: https://github.com/tensorflow/tfx/blob/master/docs/tutorials/tfx/components_keras.ipynb
Setup
Install TFX
Note: In Google Colab, because of package updates, the first time you run this cell you must restart the runtime (Runtime > Restart runtime ...).
End of explanation
"""
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
from tfx import v1 as tfx
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
from schemacomponent.component import component
"""
Explanation: Chicago taxi example pipeline
End of explanation
"""
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
"""
Explanation: Set up pipeline paths
End of explanation
"""
_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv'
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
"""
Explanation: Download example data
We download the example dataset for use in our TFX pipeline.
The dataset we're using is the Taxi Trips dataset released by the City of Chicago. The columns in this dataset are:
<table>
<tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr>
<tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr>
<tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr>
<tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr>
<tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr>
<tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr>
</table>
With this dataset, we will build a model that predicts the tips of a trip.
End of explanation
"""
context = InteractiveContext()
#create and run exampleGen component
example_gen = tfx.components.CsvExampleGen(input_base=_data_root)
context.run(example_gen)
#create and run statisticsGen component
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
#create and run schemaGen component
schema_gen = tfx.components.SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen)
"""
Explanation: Run TFX components
In the cells that follow, we create TFX components one-by-one and generates schema using schemaGen component.
End of explanation
"""
#display infered schema
context.show(schema_gen.outputs['schema'])
"""
Explanation: Schema curation custom component
Using schema curation component tips is changed into optional feature
Code for modifying schema is in user supplied schema_fn in module_file.py
Display infered schema
In the infered schema, tips feature is shown as a required feature:
tips | FLOAT | required | single
End of explanation
"""
#schema curation component
schema_curation = component.SchemaCuration(schema=schema_gen.outputs['schema'],
module_file='module_file.py')
context.run(schema_curation)
"""
Explanation: Modifying schema
End of explanation
"""
context.show(schema_curation.outputs['custom_schema'])
"""
Explanation: Display modified schema
feature tips is now optional in the modified schema
End of explanation
"""
|
sassoftware/sas-viya-programming | communities/Getting a CASTable Object from an Existing CAS Table.ipynb | apache-2.0 | import swat
conn = swat.CAS(host, port, username, password)
"""
Explanation: Getting a Python CASTable Object from an Existing CAS Table
Many of the examples in the Python series of articles here use a CASTable object to invoke actions or apply DataFrame-like syntax to CAS tables. In those examples, the CASTable object is generally the result of an action that loads the CAS table from a data file or other data source. But what if you have a CAS table already loaded in a session and you want to create a new CASTable object that points to it?
The first thing you need is a connection to CAS.
End of explanation
"""
conn.read_csv('https://raw.githubusercontent.com/sassoftware/sas-viya-programming/master/data/class.csv',
casout=dict(name='class', caslib='casuser'))
"""
Explanation: We'll load a table of data here so we have something to work with. We'll also specify a table name and caslib so they are easier to reference in the next step.
End of explanation
"""
conn.tableinfo(caslib='casuser')
"""
Explanation: Using the tableinfo action, we can see that the table exists, however, we didn't store the output of the read_csv method, so we don't have a CASTable object pointing to it.
End of explanation
"""
cls = conn.CASTable('class', caslib='CASUSER')
cls
"""
Explanation: The solution is fairly simple, you use the CASTable method of the CAS connection object. You just pass it the name of the table and the name of the CASLib just as it is printed in In[2] above.
End of explanation
"""
cls.to_frame()
conn.close()
"""
Explanation: We now have a CASTable object that we can use to interact with.
End of explanation
"""
|
csiu/100daysofcode | misc/day85.ipynb | mit | url = "http://finance.yahoo.com/webservice/v1/symbols/allcurrencies/quote?format=json"
"""
Explanation: I want to look into stock data.
Yahoo Finance API
According to stackoverflow ("alternative to google finance api"), financial information can be obtained through the Yahoo Finance API.
For instance, you can generate a CSV by a simple API call
```
generate and save a CSV for AAPL, GOOG and MSFT
http://finance.yahoo.com/d/quotes.csv?s=AAPL+GOOG+MSFT&f=sb2b3jk
```
http://www.jarloo.com/yahoo_finance/
https://developer.yahoo.com/yql/guide/yql-code-examples.html
or by using the webservice to return XML or JSON.
```
All stock quotes in XML
http://finance.yahoo.com/webservice/v1/symbols/allcurrencies/quote
All stock quotes in JSON
http://finance.yahoo.com/webservice/v1/symbols/allcurrencies/quote?format=json
```
End of explanation
"""
import requests
import json
response = requests.get(url)
# Load as JSON object
j = json.loads(response.content)
# Make tidy and print the first 3 entries
stock = [i['resource']['fields'] for i in j['list']['resources']]
stock[:3]
"""
Explanation: Making a webservice request
End of explanation
"""
import yahoo_finance
yahoo = yahoo_finance.Share('YHOO')
print(yahoo.get_open())
print(yahoo.get_price())
print(yahoo.get_trade_datetime())
# Refresh
yahoo.refresh()
print(yahoo.get_price())
print(yahoo.get_trade_datetime())
"""
Explanation: Yahoo-finance Python package
In addition to making requests by a url, there is a python package (yahoo-finance) which provides functions to get stock data from Yahoo! Finance.
End of explanation
"""
yahoo.get_historical('2014-04-25', '2014-04-29')
"""
Explanation: Small issues
From the examples, there is a function to get_historical() data. However, when I actually do run it, I get a YQLResponseMalformedError: Response malformed. error. Googling the error message, The Financial Hacker (2017) comments:
The Yahoo Finance API is dead. Without prior announcement, Yahoo has abandoned their only remaining service that was clearly ahead of the competition.
The link to this issue is found here.
End of explanation
"""
|
AtmaMani/pyChakras | python_crash_course/seaborn_cheat_sheet_2.ipynb | mit | import seaborn as sns
%matplotlib inline
tips = sns.load_dataset('tips')
tips.head()
"""
Explanation: Seaborn - categorical plotting
End of explanation
"""
sns.barplot(x='sex', y='total_bill', data=tips)
"""
Explanation: ToC
- Barplot
- Countplot
- Boxplot
- Violin plot
- Strip plot
- Swarm plot
Barplot
Barplot is used to indicate some measure of central tendancy. Seaborn adds some descriptors to indicate the variance in the data. Call this with a categorical column in X and numerical column for Y
End of explanation
"""
sns.countplot(x='sex', data=tips)
"""
Explanation: Thus the average bill for men was higher than women.
Countplot
If you want a regular bar chart that shows the count of data, then do a countplot
End of explanation
"""
sns.boxplot(x='time', y='total_bill', data=tips)
"""
Explanation: Boxplot
Boxplots are very common. It is used to display distribution of data as well as outliers. A boxplot splits the data into 4 quantiles or quartiles. The median is represented as a horizontal line with the quartile +- medain in solid shade. The end of the whiskers may represent the ends of the remaining quartiles
If outliers are calculated, then whiskers are shorter and values greater than 1.5 times the IQR - Inter Quartile Range are considered outliers.
End of explanation
"""
sns.boxplot(x='time',y='total_bill', data=tips, hue='sex')
"""
Explanation: We can interpret this as people spend more on dinner on average than lunch. The median is higher. Yet there is higher variability as well with the amount spent on dinner. The lowest being lower than lunch.
End of explanation
"""
sns.violinplot(x='time',y='total_bill', data=tips)
"""
Explanation: Violin plot
A violin plot builds on a boxplot by showing KDE of the data distribution.
End of explanation
"""
sns.violinplot(x='time', y='total_bill', data=tips, hue='sex', split=True)
"""
Explanation: You can see, lunch bills are tighter around the median compared to dinner. The Q3 of dinner is long, which can be noticed in the spread of the green violin plot.
End of explanation
"""
sns.stripplot(x='time', y='total_bill', data=tips)
"""
Explanation: From this plot, we assert our experience so far that women's bills are lesser than men - the width of the violin is higher on the lower end.
Stirp plot
Strip plot is like a scatter plot for a categorial data. You specify a categorial column for X and numeric for Y.
End of explanation
"""
sns.stripplot(x='time', y='total_bill', data=tips, jitter=True)
"""
Explanation: To make out the data distribution, you can add some jitter to the plot. Jitter will shift the points laterally in a random manner.
End of explanation
"""
sns.swarmplot(x='time', y='total_bill', data=tips)
"""
Explanation: Swarm plot
Swram plots are a combination of violin and strip plots. It shows the real data distribution using actual point values.
End of explanation
"""
sns.violinplot(x='time', y='total_bill', data=tips)
sns.swarmplot(x='time', y='total_bill', data=tips, color='black')
"""
Explanation: You can combine a violin and swarm plot to see how the KDE is calculated and smooths
End of explanation
"""
|
deepcharles/ruptures | docs/examples/text-segmentation.ipynb | bsd-2-clause | from pathlib import Path
import nltk
import numpy as np
import ruptures as rpt # our package
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import regexp_tokenize
from ruptures.base import BaseCost
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from matplotlib.colors import LogNorm
nltk.download("stopwords")
STOPWORD_SET = set(
stopwords.words("english")
) # set of stopwords of the English language
PUNCTUATION_SET = set("!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~")
"""
Explanation: Linear text segmentation
<!-- {{ add_binder_block(page) }} -->
Introduction
Linear text segmentation consists in dividing a text into several meaningful segments.
Linear text segmentation can be seen as a change point detection task and therefore can be carried out with ruptures.
This example performs exactly that on a well-known data set intoduced in [Choi2000].
Setup
First we import packages and define a few utility functions.
This section can be skipped at first reading.
Library imports.
End of explanation
"""
def preprocess(list_of_sentences: list) -> list:
"""Preprocess each sentence (remove punctuation, stopwords, then stemming.)"""
transformed = list()
for sentence in list_of_sentences:
ps = PorterStemmer()
list_of_words = regexp_tokenize(text=sentence.lower(), pattern="\w+")
list_of_words = [
ps.stem(word) for word in list_of_words if word not in STOPWORD_SET
]
transformed.append(" ".join(list_of_words))
return transformed
def draw_square_on_ax(start, end, ax, linewidth=0.8):
"""Draw a square on the given ax object."""
ax.vlines(
x=[start - 0.5, end - 0.5],
ymin=start - 0.5,
ymax=end - 0.5,
linewidth=linewidth,
)
ax.hlines(
y=[start - 0.5, end - 0.5],
xmin=start - 0.5,
xmax=end - 0.5,
linewidth=linewidth,
)
return ax
"""
Explanation: Utility functions.
End of explanation
"""
# Loading the text
filepath = Path("../data/text-segmentation-data.txt")
original_text = filepath.read_text().split("\n")
TRUE_BKPS = [11, 20, 30, 40, 49, 59, 69, 80, 90, 99] # read from the data description
print(f"There are {len(original_text)} sentences, from {len(TRUE_BKPS)} documents.")
"""
Explanation: Data
Description
The text to segment is a concatenation of excerpts from ten different documents randomly selected from the so-called Brown corpus (described here).
Each excerpt has nine to eleven sentences, amounting to 99 sentences in total.
The complete text is shown in Appendix A.
These data stem from a larger data set which is thoroughly described in [Choi2000] and can be downloaded here.
This is a common benchmark to evaluate text segmentation methods.
End of explanation
"""
# print 5 sentences from the original text
start, end = 9, 14
for (line_number, sentence) in enumerate(original_text[start:end], start=start + 1):
sentence = sentence.strip("\n")
print(f"{line_number:>2}: {sentence}")
"""
Explanation: The objective is to automatically recover the boundaries of the 10 excerpts, using the fact that they come from quite different documents and therefore have distinct topics.
For instance, in the small extract of text printed in the following cell, an accurate text segmentation procedure would be able to detect that the first two sentences (10 and 11) and the last three sentences (12 to 14) belong to two different documents and have very different semantic fields.
End of explanation
"""
# transform text
transformed_text = preprocess(original_text)
# print original and transformed
ind = 97
print("Original sentence:")
print(f"\t{original_text[ind]}")
print()
print("Transformed:")
print(f"\t{transformed_text[ind]}")
# Once the text is preprocessed, each sentence is transformed into a vector of word counts.
vectorizer = CountVectorizer(analyzer="word")
vectorized_text = vectorizer.fit_transform(transformed_text)
msg = f"There are {len(vectorizer.get_feature_names())} different words in the corpus, e.g. {vectorizer.get_feature_names()[20:30]}."
print(msg)
"""
Explanation: Preprocessing
Before performing text segmentation, the original text is preprocessed.
In a nutshell (see [Choi2000] for more details),
the punctuation and stopwords are removed;
words are reduced to their stems (e.g., "waited" and "waiting" become "wait");
a vector of word counts is computed.
End of explanation
"""
class CosineCost(BaseCost):
"""Cost derived from the cosine similarity."""
# The 2 following attributes must be specified for compatibility.
model = "custom_cosine"
min_size = 2
def fit(self, signal):
"""Set the internal parameter."""
self.signal = signal
self.gram = cosine_similarity(signal, dense_output=False)
return self
def error(self, start, end) -> float:
"""Return the approximation cost on the segment [start:end].
Args:
start (int): start of the segment
end (int): end of the segment
Returns:
segment cost
Raises:
NotEnoughPoints: when the segment is too short (less than `min_size` samples).
"""
if end - start < self.min_size:
raise NotEnoughPoints
sub_gram = self.gram[start:end, start:end]
val = sub_gram.diagonal().sum()
val -= sub_gram.sum() / (end - start)
return val
"""
Explanation: Note that the vectorized text representation is a (very) sparse matrix.
Text segmentation
Cost function
To compare (the vectorized representation of) two sentences, [Choi2000] uses the cosine similarity $k_{\text{cosine}}: \mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$:
$$ k_{\text{cosine}}(x, y) := \frac{\langle x \mid y \rangle}{\|x\|\|y\|} $$
where $x$ and $y$ are two $d$-dimensionnal vectors of word counts.
Text segmentation now amounts to a kernel change point detection (see [Truong2020] for more details).
However, this particular kernel is not implemented in ruptures therefore we need to create a custom cost function.
(Actually, it is implemented in ruptures but the current implementation does not exploit the sparse structure of the vectorized text representation and can therefore be slow.)
Let $y={y_0, y_1,\dots,y_{T-1}}$ be a $d$-dimensionnal signal with $T$ samples.
Recall that a cost function $c(\cdot)$ that derives from a kernel $k(\cdot, \cdot)$ is such that
$$
c(y_{a..b}) = \sum_{t=a}^{b-1} G_{t, t} - \frac{1}{b-a} \sum_{a \leq s < b } \sum_{a \leq t < b} G_{s,t}
$$
where $y_{a..b}$ is the subsignal ${y_a, y_{a+1},\dots,y_{b-1}}$ and $G_{st}:=k(y_s, y_t)$ (see [Truong2020] for more details).
In other words, $(G_{st})_{st}$ is the $T\times T$ Gram matrix of $y$.
Thanks to this formula, we can now implement our custom cost function (named CosineCost in the following cell).
End of explanation
"""
n_bkps = 9 # there are 9 change points (10 text segments)
algo = rpt.Dynp(custom_cost=CosineCost(), min_size=2, jump=1).fit(vectorized_text)
predicted_bkps = algo.predict(n_bkps=n_bkps)
print(f"True change points are\t\t{TRUE_BKPS}.")
print(f"Detected change points are\t{predicted_bkps}.")
"""
Explanation: Compute change points
If the number $K$ of change points is assumed to be known, we can use dynamic programming to search for the exact segmentation $\hat{t}_1,\dots,\hat{t}_K$ that minimizes the sum of segment costs:
$$
\hat{t}1,\dots,\hat{t}_K := \text{arg}\min{t_1,\dots,t_K} \left[ c(y_{0..t_1}) + c(y_{t_1..t_2}) + \dots + c(y_{t_K..T}) \right].
$$
End of explanation
"""
true_segment_list = rpt.utils.pairwise([0] + TRUE_BKPS)
predicted_segment_list = rpt.utils.pairwise([0] + predicted_bkps)
for (n_paragraph, (true_segment, predicted_segment)) in enumerate(
zip(true_segment_list, predicted_segment_list), start=1
):
print(f"Paragraph n°{n_paragraph:02d}")
start_true, end_true = true_segment
start_pred, end_pred = predicted_segment
start = min(start_true, start_pred)
end = max(end_true, end_pred)
msg = " ".join(
f"{ind+1:02d}" if (start_true <= ind < end_true) else " "
for ind in range(start, end)
)
print(f"(true)\t{msg}")
msg = " ".join(
f"{ind+1:02d}" if (start_pred <= ind < end_pred) else " "
for ind in range(start, end)
)
print(f"(pred)\t{msg}")
print()
"""
Explanation: (Note that the last change point index is simply the length of the signal. This is by design.)
Predicted breakpoints are quite close to the true change points.
Indeed, most estimated changes are less than one sentence away from a true change.
The last change is less accurately predicted with an error of 4 sentences.
To overcome this issue, one solution would be to consider a richer representation (compared to the sparse word frequency vectors).
Visualize segmentations
Show sentence numbers.
In the following cell, the two segmentations (true and predicted) can be visually compared.
For each paragraph, the sentence numbers are shown.
End of explanation
"""
fig, ax_arr = plt.subplots(nrows=1, ncols=2, figsize=(7, 5), dpi=200)
# plot config
title_fontsize = 10
label_fontsize = 7
title_list = ["True text segmentation", "Predicted text segmentation"]
for (ax, title, bkps) in zip(ax_arr, title_list, [TRUE_BKPS, predicted_bkps]):
# plot gram matrix
ax.imshow(algo.cost.gram.toarray(), cmap=cm.plasma, norm=LogNorm())
# add text segmentation
for (start, end) in rpt.utils.pairwise([0] + bkps):
draw_square_on_ax(start=start, end=end, ax=ax)
# add labels and title
ax.set_title(title, fontsize=title_fontsize)
ax.set_xlabel("Sentence index", fontsize=label_fontsize)
ax.set_ylabel("Sentence index", fontsize=label_fontsize)
ax.tick_params(axis="both", which="major", labelsize=label_fontsize)
"""
Explanation: Show the Gram matrix.
In addition, the text segmentation can be shown on the Gram matrix that was used to detect changes.
This is done in the following cell.
Most segments (represented by the blue squares) are similar between the true segmentation and the predicted segmentation, except for last two.
This is mainly due to the fact that, in the penultimate excerpt, all sentences are dissimilar (with respect to the cosine measure).
End of explanation
"""
for (start, end) in rpt.utils.pairwise([0] + TRUE_BKPS):
excerpt = original_text[start:end]
for (n_line, sentence) in enumerate(excerpt, start=start + 1):
sentence = sentence.strip("\n")
print(f"{n_line:>2}: {sentence}")
print()
"""
Explanation: Conclusion
This example shows how to apply ruptures on a text segmentation task.
In detail, we detected shifts in the vocabulary of a collection of sentences using common NLP preprocessing and transformation.
This task amounts to a kernel change point detection procedure where the kernel is the cosine kernel.
Such results can then be used to characterize the structure of the text for subsequent NLP tasks.
This procedure should certainly be enriched with more relevant and compact representations to better detect changes.
Appendix A
The complete text used in this notebook is as follows.
Note that the line numbers and the blank lines (added to visually mark the boundaries between excerpts) are not part of the text fed to the segmentation method.
End of explanation
"""
|
jorisvandenbossche/DS-python-data-analysis | _solved/00-jupyter_introduction.ipynb | bsd-3-clause | from IPython.display import Image
Image(url='http://python.org/images/python-logo.gif')
"""
Explanation: <p><font size="6"><b>Jupyter notebook INTRODUCTION </b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
End of explanation
"""
# Code cell, then we are using python
print('Hello DS')
DS = 10
print(DS + 5) # Yes, we advise to use Python 3 (!)
"""
Explanation: <big><center>To run a cell: push the start triangle in the menu or type SHIFT + ENTER/RETURN
Notebook cell types
We will work in Jupyter notebooks during this course. A notebook is a collection of cells, that can contain different content:
Code
End of explanation
"""
import os
os.mkdir
my_very_long_variable_name = 3
"""
Explanation: Writing code is what you will do most during this course!
Markdown
Text cells, using Markdown syntax. With the syntax, you can make text bold or italic, amongst many other things...
list
with
items
Link to interesting resources or images:
Blockquotes if you like them
This line is part of the same blockquote.
Mathematical formulas can also be incorporated (LaTeX it is...)
$$\frac{dBZV}{dt}=BZV_{in} - k_1 .BZV$$
$$\frac{dOZ}{dt}=k_2 .(OZ_{sat}-OZ) - k_1 .BZV$$
Or tables:
course | points
--- | ---
Math | 8
Chemistry | 4
or tables with Latex..
Symbool | verklaring
--- | ---
$$BZV_{(t=0)}$$ | initiële biochemische zuurstofvraag (7.33 mg.l-1)
$$OZ_{(t=0)}$$ | initiële opgeloste zuurstof (8.5 mg.l-1)
$$BZV_{in}$$ | input BZV(1 mg.l-1.min-1)
$$OZ_{sat}$$ | saturatieconcentratie opgeloste zuurstof (11 mg.l-1)
$$k_1$$ | bacteriële degradatiesnelheid (0.3 min-1)
$$k_2$$ | reäeratieconstante (0.4 min-1)
Code can also be incorporated, but than just to illustrate:
python
BOT = 12
print(BOT)
See also: https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet
HTML
You can also use HTML commands, just check this cell:
<h3> html-adapted titel with <h3> </h3>
<p></p>
<b> Bold text <b> </b> of <i>or italic <i> </i>
Headings of different sizes: section
subsection
subsubsection
Raw Text
Notebook handling ESSENTIALS
Completion: TAB
The TAB button is essential: It provides you all possible actions you can do after loading in a library AND it is used for automatic autocompletion:
End of explanation
"""
round(3.2)
import os
os.mkdir
# An alternative is to put a question mark behind the command
os.mkdir?
"""
Explanation: Help: SHIFT + TAB
The SHIFT-TAB combination is ultra essential to get information/help about the current operation
End of explanation
"""
import glob
glob.glob??
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: What happens if you put two question marks behind the command?
</div>
End of explanation
"""
%psearch os.*dir
"""
Explanation: edit mode to command mode
edit mode means you're editing a cell, i.e. with your cursor inside a cell to type content
command mode means you're NOT editing(!), i.e. NOT with your cursor inside a cell to type content
To start editing, click inside a cell or
<img src="../img/enterbutton.png" alt="Key enter" style="width:150px">
To stop editing,
<img src="../img/keyescape.png" alt="Key A" style="width:150px">
new cell A-bove
<img src="../img/keya.png" alt="Key A" style="width:150px">
Create a new cell above with the key A... when in command mode
new cell B-elow
<img src="../img/keyb.png" alt="Key B" style="width:150px">
Create a new cell below with the key B... when in command mode
CTRL + SHIFT + C
Just do it!
Trouble...
<div class="alert alert-danger">
<b>NOTE</b>: When you're stuck, or things do crash:
<ul>
<li> first try <code>Kernel</code> > <code>Interrupt</code> -> your cell should stop running
<li> if no succes -> <code>Kernel</code> > <code>Restart</code> -> restart your notebook
</ul>
</div>
Stackoverflow is really, really, really nice!
http://stackoverflow.com/questions/tagged/python
Google search is with you!
<big><center>REMEMBER: To run a cell: <strike>push the start triangle in the menu or</strike> type SHIFT + ENTER
some MAGIC...
%psearch
End of explanation
"""
%%timeit
mylist = range(1000)
for i in mylist:
i = i**2
import numpy as np
%%timeit
np.arange(1000)**2
"""
Explanation: %%timeit
End of explanation
"""
%whos
"""
Explanation: %whos
End of explanation
"""
%lsmagic
"""
Explanation: %lsmagic
End of explanation
"""
from IPython.display import FileLink, FileLinks
FileLinks('.', recursive=False)
"""
Explanation: Let's get started!
End of explanation
"""
|
t-davidson/hate-speech-and-offensive-language | src/Automated Hate Speech Detection and the Problem of Offensive Language Python 3.6.ipynb | mit | import pandas as pd
import numpy as np
import pickle
import sys
from sklearn.feature_extraction.text import TfidfVectorizer
import nltk
from nltk.stem.porter import *
import string
import re
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer as VS
from textstat.textstat import *
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import classification_report
from sklearn.svm import LinearSVC
import matplotlib.pyplot as plt
import seaborn
%matplotlib inline
"""
Explanation: Replication for results in Davidson et al. 2017. "Automated Hate Speech Detection and the Problem of Offensive Language"
End of explanation
"""
df = pd.read_csv("../data/labeled_data.csv")
df
df.describe()
df.columns
"""
Explanation: Loading the data
End of explanation
"""
df['class'].hist()
"""
Explanation: Columns key:
count = number of CrowdFlower users who coded each tweet (min is 3, sometimes more users coded a tweet when judgments were determined to be unreliable by CF).
hate_speech = number of CF users who judged the tweet to be hate speech.
offensive_language = number of CF users who judged the tweet to be offensive.
neither = number of CF users who judged the tweet to be neither offensive nor non-offensive.
class = class label for majority of CF users.
0 - hate speech
1 - offensive language
2 - neither
tweet = raw tweet text
End of explanation
"""
tweets=df.tweet
"""
Explanation: This histogram shows the imbalanced nature of the task - most tweets containing "hate" words as defined by Hatebase were
only considered to be offensive by the CF coders. More tweets were considered to be neither hate speech nor offensive language than were considered hate speech.
End of explanation
"""
stopwords=stopwords = nltk.corpus.stopwords.words("english")
other_exclusions = ["#ff", "ff", "rt"]
stopwords.extend(other_exclusions)
stemmer = PorterStemmer()
def preprocess(text_string):
"""
Accepts a text string and replaces:
1) urls with URLHERE
2) lots of whitespace with one instance
3) mentions with MENTIONHERE
This allows us to get standardized counts of urls and mentions
Without caring about specific people mentioned
"""
space_pattern = '\s+'
giant_url_regex = ('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|'
'[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+')
mention_regex = '@[\w\-]+'
parsed_text = re.sub(space_pattern, ' ', text_string)
parsed_text = re.sub(giant_url_regex, '', parsed_text)
parsed_text = re.sub(mention_regex, '', parsed_text)
return parsed_text
def tokenize(tweet):
"""Removes punctuation & excess whitespace, sets to lowercase,
and stems tweets. Returns a list of stemmed tokens."""
tweet = " ".join(re.split("[^a-zA-Z]*", tweet.lower())).strip()
tokens = [stemmer.stem(t) for t in tweet.split()]
return tokens
def basic_tokenize(tweet):
"""Same as tokenize but without the stemming"""
tweet = " ".join(re.split("[^a-zA-Z.,!?]*", tweet.lower())).strip()
return tweet.split()
vectorizer = TfidfVectorizer(
tokenizer=tokenize,
preprocessor=preprocess,
ngram_range=(1, 3),
stop_words=stopwords,
use_idf=True,
smooth_idf=False,
norm=None,
decode_error='replace',
max_features=10000,
min_df=5,
max_df=0.75
)
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
#Construct tfidf matrix and get relevant scores
tfidf = vectorizer.fit_transform(tweets).toarray()
vocab = {v:i for i, v in enumerate(vectorizer.get_feature_names())}
idf_vals = vectorizer.idf_
idf_dict = {i:idf_vals[i] for i in vocab.values()} #keys are indices; values are IDF scores
#Get POS tags for tweets and save as a string
tweet_tags = []
for t in tweets:
tokens = basic_tokenize(preprocess(t))
tags = nltk.pos_tag(tokens)
tag_list = [x[1] for x in tags]
tag_str = " ".join(tag_list)
tweet_tags.append(tag_str)
#We can use the TFIDF vectorizer to get a token matrix for the POS tags
pos_vectorizer = TfidfVectorizer(
tokenizer=None,
lowercase=False,
preprocessor=None,
ngram_range=(1, 3),
stop_words=None,
use_idf=False,
smooth_idf=False,
norm=None,
decode_error='replace',
max_features=5000,
min_df=5,
max_df=0.75,
)
#Construct POS TF matrix and get vocab dict
pos = pos_vectorizer.fit_transform(pd.Series(tweet_tags)).toarray()
pos_vocab = {v:i for i, v in enumerate(pos_vectorizer.get_feature_names())}
#Now get other features
sentiment_analyzer = VS()
def count_twitter_objs(text_string):
"""
Accepts a text string and replaces:
1) urls with URLHERE
2) lots of whitespace with one instance
3) mentions with MENTIONHERE
4) hashtags with HASHTAGHERE
This allows us to get standardized counts of urls and mentions
Without caring about specific people mentioned.
Returns counts of urls, mentions, and hashtags.
"""
space_pattern = '\s+'
giant_url_regex = ('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|'
'[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+')
mention_regex = '@[\w\-]+'
hashtag_regex = '#[\w\-]+'
parsed_text = re.sub(space_pattern, ' ', text_string)
parsed_text = re.sub(giant_url_regex, 'URLHERE', parsed_text)
parsed_text = re.sub(mention_regex, 'MENTIONHERE', parsed_text)
parsed_text = re.sub(hashtag_regex, 'HASHTAGHERE', parsed_text)
return(parsed_text.count('URLHERE'),parsed_text.count('MENTIONHERE'),parsed_text.count('HASHTAGHERE'))
def other_features(tweet):
"""This function takes a string and returns a list of features.
These include Sentiment scores, Text and Readability scores,
as well as Twitter specific features"""
sentiment = sentiment_analyzer.polarity_scores(tweet)
words = preprocess(tweet) #Get text only
syllables = textstat.syllable_count(words)
num_chars = sum(len(w) for w in words)
num_chars_total = len(tweet)
num_terms = len(tweet.split())
num_words = len(words.split())
avg_syl = round(float((syllables+0.001))/float(num_words+0.001),4)
num_unique_terms = len(set(words.split()))
###Modified FK grade, where avg words per sentence is just num words/1
FKRA = round(float(0.39 * float(num_words)/1.0) + float(11.8 * avg_syl) - 15.59,1)
##Modified FRE score, where sentence fixed to 1
FRE = round(206.835 - 1.015*(float(num_words)/1.0) - (84.6*float(avg_syl)),2)
twitter_objs = count_twitter_objs(tweet)
retweet = 0
if "rt" in words:
retweet = 1
features = [FKRA, FRE,syllables, avg_syl, num_chars, num_chars_total, num_terms, num_words,
num_unique_terms, sentiment['neg'], sentiment['pos'], sentiment['neu'], sentiment['compound'],
twitter_objs[2], twitter_objs[1],
twitter_objs[0], retweet]
#features = pandas.DataFrame(features)
return features
def get_feature_array(tweets):
feats=[]
for t in tweets:
feats.append(other_features(t))
return np.array(feats)
other_features_names = ["FKRA", "FRE","num_syllables", "avg_syl_per_word", "num_chars", "num_chars_total", \
"num_terms", "num_words", "num_unique_words", "vader neg","vader pos","vader neu", \
"vader compound", "num_hashtags", "num_mentions", "num_urls", "is_retweet"]
feats = get_feature_array(tweets)
#Now join them all up
M = np.concatenate([tfidf,pos,feats],axis=1)
M.shape
#Finally get a list of variable names
variables = ['']*len(vocab)
for k,v in vocab.items():
variables[v] = k
pos_variables = ['']*len(pos_vocab)
for k,v in pos_vocab.items():
pos_variables[v] = k
feature_names = variables+pos_variables+other_features_names
"""
Explanation: Feature generation
End of explanation
"""
X = pd.DataFrame(M)
y = df['class'].astype(int)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=0.1)
from sklearn.model_selection import StratifiedKFold, GridSearchCV
from sklearn.pipeline import Pipeline
pipe = Pipeline(
[('select', SelectFromModel(LogisticRegression(class_weight='balanced',
penalty="l1", C=0.01))),
('model', LogisticRegression(class_weight='balanced',penalty='l2'))])
param_grid = [{}] # Optionally add parameters here
grid_search = GridSearchCV(pipe,
param_grid,
cv=StratifiedKFold(n_splits=5,
random_state=42).split(X_train, y_train),
verbose=2)
model = grid_search.fit(X_train, y_train)
y_preds = model.predict(X_test)
"""
Explanation: Running the model
The best model was selected using a GridSearch with 5-fold CV.
End of explanation
"""
report = classification_report( y_test, y_preds )
"""
Explanation: Evaluating the results
End of explanation
"""
print(report)
from sklearn.metrics import confusion_matrix
confusion_matrix = confusion_matrix(y_test,y_preds)
matrix_proportions = np.zeros((3,3))
for i in range(0,3):
matrix_proportions[i,:] = confusion_matrix[i,:]/float(confusion_matrix[i,:].sum())
names=['Hate','Offensive','Neither']
confusion_df = pd.DataFrame(matrix_proportions, index=names,columns=names)
plt.figure(figsize=(5,5))
seaborn.heatmap(confusion_df,annot=True,annot_kws={"size": 12},cmap='gist_gray_r',cbar=False, square=True,fmt='.2f')
plt.ylabel(r'True categories',fontsize=14)
plt.xlabel(r'Predicted categories',fontsize=14)
plt.tick_params(labelsize=12)
#Uncomment line below if you want to save the output
#plt.savefig('confusion.pdf')
#True distribution
y.hist()
pd.Series(y_preds).hist()
"""
Explanation: Note: Results in paper are from best model retrained on the entire dataset (see the other notebook). Here the results are reported after using cross-validation and only for the held-out set.
End of explanation
"""
|
zenoss/pywbem | docs/notebooks/subscriptionmanager.ipynb | lgpl-2.1 | from __future__ import print_function
import pywbem
# The WBEM server that should emit the indications
server = 'http://myserver'
username = 'user'
password = 'password'
# The URL of the WBEM listener
listener_url = 'http://mylistener'
conn = pywbem.WBEMConnection(server, (username, password),
no_verification=True)
server = pywbem.WBEMServer(conn)
"""
Explanation: Subscription Manager
<a href="#" onclick="history.back()"><--- Back</a>
The WBEMSubscriptionManager class is a subscription manager that provides for creating and removing indication subscriptions, indication filters and listener destinations for multiple WBEM servers and multiple WBEM listeners and for getting information about existing indication subscriptions.
A WBEM listener that is used with this subscription manager is identified through its URL, so it may be the WBEM listener provided by pywbem (see class WBEMListener) or any external WBEM listener.
This tutorial presents a full blown example on how to set up and shut down a subscription manager and some filters, listener destinations and subscription, including error handling. The code sections shown below are meant to be concatenated into one script.
The following code fragment creates a WBEMConnection object for connecting to the WBEM server that is the target for the subscriptions, i.e. the server that lateron will emit the indications. It also creates a WBEMServer object based on that connection, that will be added to the subscription manager lateron.
The code also defines the URL of a WBEM listener that will be the receiver of the indications. The WBEM listener is not subject of this tutorial, so we assume it just exists, for the purpose of this tutorial:
End of explanation
"""
sub_mgr = pywbem.WBEMSubscriptionManager(subscription_manager_id='fred')
try:
server_id = sub_mgr.add_server(server)
except pywbem.Error as exc:
raise
"""
Explanation: Next, we create the subscription manager, and add the WBEM server to it.
This causes interaction to happen with the WBEM server: The add_server() method determines whether the WBEM server has any listener destinations, indication filters, or subscriptions that are owned by this subscription manager. This is determined based upon the subscription manager ID ('fred' in this example). Such instances could exist for example if this script was used before and has been aborted or failed. Because of the interaction with the WBEM server, exceptions could be raised.
In this tutorial, we use a try-block to show where exceptions can happen, but we just re-raise them without doing any recovery.
End of explanation
"""
try:
dest_inst = sub_mgr.add_listener_destinations(server_id, listener_url, owned=True)
except pywbem.Error as exc:
raise
"""
Explanation: Now, we want to create a listener destination for our WBEM listener. We use "owned" listener destinations in order to benefit from the automatic recovery, conditional creation, and cleanup of owned instances (see section WBEMSubscriptionManager for details about owned instances).
Because the subscription manager has discovered instances owned by it already, the add_listener_destinations() method creates a listener destination instance only if it does not exist yet. That makes our code easy, because we don't have to care about that. However, because the method possibly needs to create an instance in the WBEM server, we again need to be prepared for exceptions:
End of explanation
"""
filter_source_ns = "root/cimv2"
filter_query = "SELECT * FROM CIM_AlertIndication " \
"WHERE OwningEntity = 'DMTF' " \
"AND MessageID LIKE 'SVPC0123|SVPC0124|SVPC0125'"
filter_language = "DMTF:CQL"
filter_name = "DMTF:System Virtualization:Alerts"
try:
filter_inst = sub_mgr.add_filter(server_id, filter_source_ns, filter_query,
filter_language, owned=False, name=filter_name)
except pywbem.Error as exc:
raise
"""
Explanation: In this tutorial, we create a dynamic filter. We could have used a static (pre-existing) filter as well, of course.
Suppose, the management profile we implement requires us to use a specific filter name, so we need to create a "permanent" filter (an "owned" filter requires that the subscription manager has control over the filter name).
We further assume for the sake of simplicity of this tutorial, that the subscription is active only as long as the script runs, and that we tear everything down at the end of the script. Because the filter is permanent, we don't get the benefit of the automatic cleanup for owned filters, so we need to clean up the filter explicitly. Again, in order not to overload this tutorial, we do this in a straight forward flow without protecting ourselves againt exceptions. In a more real life example, you could for example protect the flow with a try-block and clean up in its finally clause.
The next piece of code creates a permanent filter:
End of explanation
"""
try:
sub_inst = sub_mgr.add_subscriptions(server_id, filter_inst.path, dest_inst.path)
except pywbem.Error as exc:
raise
"""
Explanation: We now have a filter and a listener destination at hand. We can now activate the emission of indications defined by the filter to the WBEM listener defined by the listener destination, by creating an indication subscription.
The subscription is between an owned destination and a permanent filter, so it will automatically become owned. This allows the subscription manager to deal with its lifecycle automatically when attempting to clean up the underlying owned filter.
End of explanation
"""
try:
sub_mgr.remove_subscriptions(server_id, sub_inst.path)
sub_mgr.remove_filter(server_id, filter_inst.path)
sub_mgr.remove_server(server_id) # will automatically remove the owned destination
except pywbem.Error as exc:
raise
"""
Explanation: Once we want to stop the indication sending, we need to remove any permanent instances explicitly, and can leave the cleanup of owned instances to the subscription manager.
However, because subscriptions must be removed before their underlying filters or destinations, we need to remove the owned subscription explicitly as well, then the filter, then we can leave the rest (the destination) to the subscription manager cleanup that happens when we remove the server:
End of explanation
"""
|
sallai/mbuild | docs/tutorials/tutorial_simple_LJ.ipynb | mit | import mbuild as mb
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
lj_particle1 = mb.Particle(name='LJ', pos=[0, 0, 0])
self.add(lj_particle1)
lj_particle2 = mb.Particle(name='LJ', pos=[1, 0, 0])
self.add(lj_particle2)
lj_particle3 = mb.Particle(name='LJ', pos=[0, 1, 0])
self.add(lj_particle3)
lj_particle4 = mb.Particle(name='LJ', pos=[0, 0, 1])
self.add(lj_particle4)
lj_particle5 = mb.Particle(name='LJ', pos=[1, 0, 1])
self.add(lj_particle5)
lj_particle6 = mb.Particle(name='LJ', pos=[1, 1, 0])
self.add(lj_particle6)
lj_particle7 = mb.Particle(name='LJ', pos=[0, 1, 1])
self.add(lj_particle7)
lj_particle8 = mb.Particle(name='LJ', pos=[1, 1, 1])
self.add(lj_particle8)
monoLJ = MonoLJ()
monoLJ.visualize()
"""
Explanation: Point Particles: Basic system initialization
Note: mBuild expects all distance units to be in nanometers.
This tutorial focuses on the usage of basic system initialization operations, as applied to simple point particle systems (i.e., generic Lennard-Jones particles rather than specific atoms).
The code below defines several point particles in a cubic arrangement. Note, the color and radius associated with a Particle name can be set and passed to the visualize command. Colors are passed in hex format (see http://www.color-hex.com/color/bfbfbf).
End of explanation
"""
import mbuild as mb
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
for i in range(0,2):
for j in range(0,2):
for k in range(0,2):
lj_particle = mb.clone(lj_proto)
pos = [i,j,k]
mb.translate(lj_particle, pos)
self.add(lj_particle)
monoLJ = MonoLJ()
monoLJ.visualize()
"""
Explanation: While this would work for defining a single molecule or very small system, this would not be efficient for large systems. Instead, the clone and translate operator can be used to facilitate automation. Below, we simply define a single prototype particle (lj_proto), which we then copy and translate about the system.
Note, mBuild provides two different translate operations, "translate" and "translate_to". "translate" moves a particle by adding the vector the original position, whereas "translate_to" move a particle to the specified location in space. Note, "translate_to" maintains the internal spatial relationships of a collection of particles by first shifting the center of mass of the collection of particles to the origin, then translating to the specified location. Since the lj_proto particle in this example starts at the origin, these two commands produce identical behavior.
End of explanation
"""
import mbuild as mb
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern = mb.Grid3DPattern(2, 2, 2)
pattern.scale(2)
for pos in pattern:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
monoLJ = MonoLJ()
monoLJ.visualize()
"""
Explanation: To simplify this process, mBuild provides several build-in patterning tools, where for example, Grid3DPattern can be used to perform this same operation. Grid3DPattern generates a set of points, from 0 to 1, which get stored in the variable "pattern". We need only loop over the points in pattern, cloning, translating, and adding to the system. Note, because Grid3DPattern defines points between 0 and 1, they must be scaled based on the desired system size, i.e., pattern.scale(2).
End of explanation
"""
import mbuild as mb
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern = mb.Grid2DPattern(5, 5)
pattern.scale(5)
for pos in pattern:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
monoLJ = MonoLJ()
monoLJ.visualize()
"""
Explanation: Larger systems can therefore be easily generated by toggling the values given to Grid3DPattern. Other patterns can also be generated using the same basic code, such as a 2D grid pattern:
End of explanation
"""
import mbuild as mb
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern_sphere = mb.SpherePattern(200)
pattern_sphere.scale(0.5)
for pos in pattern_sphere:
lj_particle = mb.clone(lj_proto)
pos[0]-=1.0
mb.translate(lj_particle, pos)
self.add(lj_particle)
pattern_disk = mb.DiskPattern(200)
pattern_disk.scale(0.5)
for pos in pattern_disk:
lj_particle = mb.clone(lj_proto)
pos[0]+=1.0
mb.translate(lj_particle, pos)
self.add(lj_particle)
monoLJ = MonoLJ()
monoLJ.visualize()
"""
Explanation: Points on a sphere can be generated using SpherePattern. Points on a disk using DisKPattern, etc.
Note to show both simultaneously, we shift the x-coordinate of Particles in the sphere by -1 (i.e., pos[0]-=1.0) and +1 for the disk (i.e, pos[0]+=1.0).
End of explanation
"""
import mbuild as mb
class SphereLJ(mb.Compound):
def __init__(self):
super(SphereLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern_sphere = mb.SpherePattern(200)
pattern_sphere.scale(0.5)
for pos in pattern_sphere:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
class DiskLJ(mb.Compound):
def __init__(self):
super(DiskLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern_disk = mb.DiskPattern(200)
pattern_disk.scale(0.5)
for pos in pattern_disk:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
sphere = SphereLJ();
pos=[-1, 0, 0]
mb.translate(sphere, pos)
self.add(sphere)
disk = DiskLJ();
pos=[1, 0, 0]
mb.translate(disk, pos)
self.add(disk)
monoLJ = MonoLJ()
monoLJ.visualize()
"""
Explanation: We can also take advantage of the hierachical nature of mBuild to accomplish the same task more cleanly. Below we create a component that corresponds to the sphere (class SphereLJ), and one that corresponds to the disk (class DiskLJ), and then instantiate and shift each of these individually in the MonoLJ component.
End of explanation
"""
import mbuild as mb
class SphereLJ(mb.Compound):
def __init__(self):
super(SphereLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern_sphere = mb.SpherePattern(13)
pattern_sphere.scale(0.5)
for pos in pattern_sphere:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
sphere = SphereLJ();
pattern = mb.Grid3DPattern(3, 3, 3)
pattern.scale(10)
for pos in pattern:
lj_sphere = mb.clone(sphere)
mb.translate_to(lj_sphere, pos)
#shift the particle so the center of mass
#of the system is at the origin
mb.translate(lj_sphere, [-5,-5,-5])
self.add(lj_sphere)
monoLJ = MonoLJ()
monoLJ.visualize()
"""
Explanation: Again, since mBuild is hierarchical, the pattern functions can be used to generate large systems of any arbitary component. For example, we can replicate the SphereLJ component on a regular array.
End of explanation
"""
import mbuild as mb
import random
from numpy import pi
class CubeLJ(mb.Compound):
def __init__(self):
super(CubeLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern = mb.Grid3DPattern(2, 2, 2)
pattern.scale(1)
for pos in pattern:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
cube_proto = CubeLJ();
pattern = mb.Grid3DPattern(3, 3, 3)
pattern.scale(10)
rnd = random.Random()
rnd.seed(123)
for pos in pattern:
lj_cube = mb.clone(cube_proto)
mb.translate_to(lj_cube, pos)
#shift the particle so the center of mass
#of the system is at the origin
mb.translate(lj_cube, [-5,-5,-5])
mb.spin_x(lj_cube, rnd.uniform(0, 2 * pi))
mb.spin_y(lj_cube, rnd.uniform(0, 2 * pi))
mb.spin_z(lj_cube, rnd.uniform(0, 2 * pi))
self.add(lj_cube)
monoLJ = MonoLJ()
monoLJ.visualize()
"""
Explanation: Several functions exist for rotating compounds. For example, the spin command allows a compound to be rotated, in place, about a specific axis (i.e., it considers the origin for the rotation to lie at the compound's center of mass).
End of explanation
"""
#save as xyz file
monoLJ.save('output.xyz')
#save as mol2
monoLJ.save('output.mol2')
"""
Explanation: Configurations can be dumped to file using the save command; this takes advantage of MDTraj and supports a range of file formats (see http://MDTraj.org).
End of explanation
"""
|
Alex-Ian-Hamilton/solarbextrapolation | docs/auto_examples/plot_define_and_run_trivial_analytical_model.ipynb | mit | # Module imports
from solarbextrapolation.map3dclasses import Map3D
from solarbextrapolation.analyticalmodels import AnalyticalModel
from solarbextrapolation.visualisation_functions import visualise
"""
Explanation: Defining and Run a Custom Analytical Model
Here you will be creating trivial analytical model following the API.
You can start by importing the necessary module components.
End of explanation
"""
# General imports
import astropy.units as u
import numpy as np
from mayavi import mlab
"""
Explanation: You also need the ability to convert astropyunits, manipulate numpy arrays
and use MayaVi for visualisation.
End of explanation
"""
# Input parameters:
qua_shape = u.Quantity([ 20, 20, 20] * u.pixel)
qua_x_range = u.Quantity([ -80.0, 80 ] * u.Mm)
qua_y_range = u.Quantity([ -80.0, 80 ] * u.Mm)
qua_z_range = u.Quantity([ 0.0, 120 ] * u.Mm)
"""
Explanation: You are going to try and define a 3D cuboid grid of 20x22x20 with ranges in
arcseconds, these parameters can be stored in the following lists and astropy
quantities.
End of explanation
"""
"""
# Derived parameters (make SI where applicable)
x_0 = x_range[0].to(u.m).value
Dx = (( x_range[1] - x_range[0] ) / ( tup_shape[0] * 1.0 )).to(u.m).value
x_size = Dx * tup_shape[0]
y_0 = y_range[0].to(u.m).value
Dy = (( y_range[1] - y_range[0] ) / ( tup_shape[1] * 1.0 )).to(u.m).value
y_size = Dy * tup_shape[1]
z_0 = z_range[0].to(u.m).value
Dz = (( z_range[1] - z_range[0] ) / ( tup_shape[2] * 1.0 )).to(u.m).value
z_size = Dy * tup_shape[2]
"""
"""
Explanation: From the above parameters you can derive the grid step size and total size in
each dimension.
End of explanation
"""
class AnaOnes(AnalyticalModel):
def __init__(self, **kwargs):
super(AnaOnes, self).__init__(**kwargs)
def _generate_field(self, **kwargs):
# Adding in custom parameters to the metadata
self.meta['analytical_model_routine'] = 'Ones Model'
# Generate a trivial field and return (X,Y,Z,Vec)
arr_4d = np.ones(self.shape.value.tolist() + [3])
self.field = arr_4d
# Extract the LoS Magnetogram from this:
self.magnetogram.data = arr_4d[:,:,0,2]
# Now return the vector field.
return Map3D( arr_4d, self.meta )
"""
Explanation: You can define this analytical model as a child of the AnalyticalModel class.
End of explanation
"""
aAnaMod = AnaOnes(shape=qua_shape, xrange=qua_x_range, yrange=qua_y_range, zrange=qua_z_range)
"""
Explanation: You can instansiate a copy of the new analytical model.
End of explanation
"""
aMap3D = aAnaMod.generate()
"""
Explanation: Note: you could use default ranges and grid shape using aAnaMod = AnaOnes().
You can now calculate the vector field.
End of explanation
"""
aMap2D = aAnaMod.to_los_magnetogram()
aMap2D.peek()
"""
Explanation: You can now see the 2D boundary data used for extrapolation.
End of explanation
"""
fig = visualise(aMap3D,
show_boundary_axes=False,
show_volume_axes=False,
debug=False)
mlab.show()
# Note: you can add boundary axes using:
"""
fig = visualise(aMap3D,
show_boundary_axes=False,
boundary_units=[1.0*u.arcsec, 1.0*u.arcsec],
show_volume_axes=True,
debug=False)
"""
"""
Explanation: You also visulise the 3D vector field:
End of explanation
"""
|
giacomov/astromodels | examples/Point_source_tutorial.ipynb | bsd-3-clause | from astromodels import *
# Using J2000 R.A. and Dec (ICRS), which is the default coordinate system:
simple_source_icrs = PointSource('simple_source', ra=123.2, dec=-13.2, spectral_shape=powerlaw())
"""
Explanation: Point sources
In astromodels a point source is described by its position in the sky and its spectral features.
Creating a point source
A simple source with a power law spectrum can be created like this:
End of explanation
"""
simple_source_gal = PointSource('simple_source', l=234.320573, b=11.365142, spectral_shape=powerlaw())
"""
Explanation: We can also use Galactic coordinates:
End of explanation
"""
simple_source_icrs.display()
# or print(simple_source_icrs) for a text-only representation
"""
Explanation: As spectral shape we can use any function or any composite function (see "Creating and modifying functions")
Getting info about a point source
Info about a point source can easily be obtained with the usual .display() method (which will use the richest representation available), or by printing it which will display a text-only representation:
End of explanation
"""
l = simple_source_icrs.position.get_l()
b = simple_source_icrs.position.get_b()
ra = simple_source_gal.position.get_ra()
dec = simple_source_gal.position.get_dec()
type(ra)
"""
Explanation: As you can see we have created a point source with one component (see below) automatically named "main", with a power law spectrum, at the specified position.
Converting between coordinates systems
By default the coordinates of the point source are displayed in the same system used during creation. However, you can always obtain R.A, Dec or L,B like this:
End of explanation
"""
# Decimal R.A.
print("Decimal R.A. is %s" % ra.deg)
print("Sexadecimal R.A. is %.0f:%.0f:%s" % (ra.dms.d, ra.dms.m, ra.dms.s))
"""
Explanation: The get_ra, get_dec, get_l and get_b return either a Latitude or Longitude object of astropy.coordinates, from which you can obtain all formats for the coordinates, like:
End of explanation
"""
# Refer to the documentation of the astropy.coordinates.SkyCoord class:
# http://docs.astropy.org/en/stable/coordinates/
# for all available options.
sky_coord_instance = simple_source_icrs.position.sky_coord
# Now you can transform to another reference use transform_to.
# Here for example we compute the altitude of our source for HAWC at 2 am on 2013-07-01
from astropy.coordinates import SkyCoord, EarthLocation, AltAz
from astropy.time import Time
hawc_site = EarthLocation(lat=19*u.deg, lon=97.3*u.deg, height=4100*u.m)
utcoffset = -5*u.hour # Hour at HAWC is CDT, which is UTC - 5 hours
time = Time('2013-7-01 02:00:00') - utcoffset
src_altaz = sky_coord_instance.transform_to(AltAz(obstime=time,location=hawc_site))
print("Source Altitude at HAWC : {0.alt:.5}".format(src_altaz))
"""
Explanation: For more control on the output and many more options, such as transform to local frames or other equinoxes, compute distances between points in the sky, and so on, you can obtain an instance of astropy.coordinates.SkyCoord by using the sky_coord property of the position object:
End of explanation
"""
# These will return two Parameter instances corresponding to the parameters ra and dec
# NOT the corresponding floating point numbers:
parameter_ra = simple_source_icrs.position.ra
parameter_dec = simple_source_icrs.position.dec
# This would instead throw AttributeError, since simple_source_icrs was instanced using
# R.A. and Dec. and hence does not have the l,b parameters:
# error = simple_source_icrs.position.l
# error = simple_source_icrs.position.b
# Similarly this will throw AttributeError, because simple_source_gal was instanced using
# Galactic coordinates:
# error = simple_source_gal.position.ra
# error = simple_source_gal.position.dec
# In all cases, independently on how the source was instanced, you can obtain the coordinates
# as normal floating point numbers using:
ra1 = simple_source_icrs.position.get_ra().value
dec1 = simple_source_icrs.position.get_dec().value
l1 = simple_source_icrs.position.get_l().value
b1 = simple_source_icrs.position.get_b().value
ra2 = simple_source_gal.position.get_ra().value
dec2 = simple_source_gal.position.get_dec().value
l2 = simple_source_gal.position.get_l().value
b2 = simple_source_gal.position.get_b().value
"""
Explanation: Gotcha while accessing coordinates
Please note that using get_ra() and .ra (or the equivalent methods for the other coordinates) is not the same. While get_ra() will always return a single float value corresponding to the R.A. of the source, the .ra property will exist only if the source has been created using R.A, Dec as input coordinates and will return a Parameter instance:
End of explanation
"""
# Create the two different components
#(of course the shape can be any function, or any composite function)
component1 = SpectralComponent('synchrotron',shape=powerlaw())
component2 = SpectralComponent('IC',shape=powerlaw())
# Create a multi-component source
multicomp_source = PointSource('multicomp_source', ra=123.2, dec=-13.2, components=[component1,component2])
multicomp_source.display()
"""
Explanation: Multi-component sources
A multi-component source is a point source which has different spectral components. For example, in a Gamma-Ray Burst you can have a Synchrotron component and a Inverse Compton component, which come from different zones and are described by different spectra. Depending on the needs of your analysis, you might model this situation using a single component constituted by the sum of the two spectra, or you might want to model them independently. The latter choice allows you to measure for instance the fluxes from the two components independently. Also, each components has its own polarization, which can be useful when studying polarized sources (to be implemented). Representing a source with more than one component is easy in astromodels:
End of explanation
"""
# Change position
multicomp_source.position.ra = 124.5
multicomp_source.position.dec = -11.5
# Change values for the parameters
multicomp_source.spectrum.synchrotron.powerlaw.logK = -1.2
multicomp_source.spectrum.IC.powerlaw.index = -1.0
# To avoid having to write that much, you can create a "shortcut" for a function
po = multicomp_source.spectrum.synchrotron.powerlaw
# Now you can modify its parameters more easily
# (see "Creating and modifying functions" for more info on what you can to with a parameter)
po.K = 1e-5
# Change the minimum using explicit units
po.K.min_value = 1e-6 * 1 / (u.MeV * u.cm**2 * u.s)
# GOTCHA
# Creating a shortcut directly to the parameter will not work:
# p1 = multicomp_source.spectrum.synchrotron.powerlaw.logK
# p1 = -1.3 # this does NOT change the value of logK, but instead assign -1.3 to p1 (i.e., destroy the shortcut)
# However you can change the value of p1 like this:
# p1.value = -1.3 # This will work
multicomp_source.display()
"""
Explanation: Modifying features of the source and modify parameters of its spectrum
Starting from the source instance you can modify any of its components, or its position, in a straightforward way:
End of explanation
"""
|
astroumd/GradMap | notebooks/Lectures2018/Lecture4/Lecture4-2BodyProblem-Student.ipynb | gpl-3.0 | #Physical Constants (SI units)
G=6.67e-11
AU=1.5e11 #meters. Distance between sun and earth.
daysec=24.0*60*60 #seconds in a day
"""
Explanation: Welcome to your first numerical simulation! The 2 Body Problem
Many problems in statistical physics and astrophysics require solving for the motion of many particles at once (sometimes on the order of thousands or more!). This can't be done with the traditional pen-and-paper techniques you are all learning in your physics classes. Instead, we must implement numerical solutions to these problems.
Today, you will create the first of many numerical simulations, for a simple problem that is already solvable by pen and paper: the 2 body problem in 2D. In this problem, we will describe the motion of two particles that share a force between them (such as gravity). We'll design the simulation from an astronomer's mindset, with their astronomical units in mind. This simulation will be used to confirm the general motion of the Earth around the Sun, and later to predict the motion of two stars within relatively close range.
<br>
<br>
<br>
We will guide you through the physics and math required to create this simulation. The problem here is designed to use the knowledge of scientific python you have been developing this week.
As with any code in Python, the first thing we need to do is import the libraries we need. Go ahead and import NumPy and pyplot below as np and plt respectively. Don't forget to put matplotlib inline to get everything within the notebook.
Now we will define the physical constants of our system, which will also establish the unit system we have chosen. We'll use SI units here. Below, I've already created the constants. Make sure you understand what they are before moving on.
End of explanation
"""
#####run specfic constants. Change as needed#####
#Masses in kg
Ma=6.0e24 #always set as smaller mass
Mb=2.0e30 #always set as larger mass
#Time settings
t=0.0 #Starting time
dt=.01*daysec #Time step for the simulation
tend=300*daysec #Time where simulation ends
#Initial conditions (positions [m] and velocities [m/s] in x,y coordinates)
#For Ma
xa=1.0*AU
ya=0.0
vxa=0.0
vya=30000.0
#For Mb
xb=0.0
yb=0.0
vxb=0.0
vyb=0.0
"""
Explanation: Next, we will need parameters for the simulation. These are known as initial conditions. For a 2 body gravitation problem, we'll need the masses of the two objects, their starting positions, and their starting velocities.
Below, I've included the initial conditions for the Earth (a) and the Sun (b), at the average distance from the Sun and the average orbital velocity. We also need a starting time, an ending time for the simulation, and a "time-step" for the system. Feel free to adjust all of these as you see fit once you have built the system!
<br>
<br>
<br>
<br>
a note on dt:
As already stated, numeric simulations are approximations. In our case, we are approximating how time flows. We know it flows continuously, but the computer cannot work with this. So instead, we break up our time into equal chunks called "dt". The smaller the chunks, the more accurate the simulation becomes, but at the cost of computer time.
End of explanation
"""
#Function to compute the force between the two objects
"""
Explanation: It will be nice to create a function for the force between Ma and Mb. Below is the physics for the force of Ma on Mb. How the physics works here is not important for the moment. Right now, I want to make sure you can transfer the math shown into a Python function. I'll show a picture on the board of the physics behind this math for those interested.
$$F_g=\frac{-GM_aM_b}{r^3}\vec{r}$$
and
- $$\vec{r}=((x_b-x_a),(y_b-y_a))$$
- $$r^3=((x_b-x_a)^2+(y_b-y_a)^2)^{3/2}$$
<br><br>So, $Fg$ will only need to be a function of xa, xb, ya, and yb. The velocities of the bodies will not be needed. Create a function that calculates the force between the bodies given their positions. My recommendation here is to take the inputs as separate components and also return the force in terms of components (say, fx and fy). This will make your code easier to write and easier to read.
End of explanation
"""
#Run a loop for the simulation. Keep track of Ma and Mb posistions and velocites
#Intialize vectors
"""
Explanation: Now that we have our function, we need to prepare a loop. Before we do, we need to initialize the loop and choose a loop type, for or while. Below is the general outline for how each type of loop can go.
<br>
<br>
<br>
For loop:
initialize position and velocity arrays with np.zeros or np.linspace for the number of steps needed to go through the simulation (which is numSteps=(tend-t)/dt the way we have set up the problem). The for loop condition is based off time and should read roughly like: for i in range(numSteps)
<br>
<br>
<br>
While loop:
initialize position and velocity arrays with np.array([]) and use np.append() to tack on new values at each step like so, xaArray=np.append(xaArray,NEWVALUE). The while condition should read, while t<tend
My preference here is while since it keeps my calculations and appending separate. But feel free to use whichever feels best for you!
End of explanation
"""
#Your loop here
"""
Explanation: Now for the actual simulation. This is the hardest part to code. The general idea behind our loop is that as we step through time, we calculate the force, then calculate the new velocity, then the new position for each particle. At the end, we must update our arrays to reflect the new changes and update the time of the system. The time update is super important! If we forget it (say, in a while loop), the simulation would never end and we would never get our result.
Outline for the loop (order matters here)
Calculate the force with the last known positions (use your function!)
Calculate the new velocities using the approximation: vb = vb + dt*fg/Mb and va = va - dt*fg/Ma. Note the minus sign here, and the need to do this for the x and y directions!
Calculate the new positions using the approximation: xb = xb + dt*vb (same for a, and for the y's; no minus sign here)
Update the arrays to reflect our new values
Update the time using t=t+dt
<br>
<br>
<br>
<br>
Now when the loop closes back in, the cycle repeats in a logical way. Go one step at a time when creating this loop and use comments to help guide yourself. Ask for help if it gets tricky!
End of explanation
"""
from IPython.display import Image
Image("Earth-Sun-averageResult.jpg")
#Your plot here
"""
Explanation: Now for the fun part (or not so fun part if your simulation had an issue), plot your results! This is something well covered in previous lectures. Show me a plot of (xa,ya) and (xb,yb). Does it look sort of familiar? Hopefully you get something like the image below (in units of AU).
End of explanation
"""
jupyter-widgets/ipywidgets | docs/source/examples/Widget Styling.ipynb | bsd-3-clause
from ipywidgets import Button, Layout
b = Button(description='(50% width, 80px height) button',
layout=Layout(width='50%', height='80px'))
b
"""
Explanation: Layout and Styling of Jupyter widgets
This notebook presents how to layout and style Jupyter interactive widgets to build rich and reactive widget-based applications.
You can jump directly to these sections:
The layout attribute
The Flexbox layout
Predefined styles
The style attribute
The Grid layout
Image layout and sizing
The layout attribute
Jupyter interactive widgets have a layout attribute exposing a number of CSS properties that impact how widgets are laid out.
Exposed CSS properties
<div class="alert alert-info" style="margin: 20px">
The following properties map to the values of the CSS properties of the same name (underscores being replaced with dashes), applied to the top DOM elements of the corresponding widget.
</div>
Sizes
height
width
max_height
max_width
min_height
min_width
Display
visibility
display
overflow
Box model
border
margin
padding
Positioning
top
left
bottom
right
Flexbox
order
flex_flow
align_items
flex
align_self
align_content
justify_content
Grid layout
grid_auto_columns
grid_auto_flow
grid_auto_rows
grid_gap
grid_template
grid_row
grid_column
Shorthand CSS properties
You may have noticed that certain CSS properties such as margin-[top/right/bottom/left] seem to be missing. The same holds for padding-[top/right/bottom/left] etc.
In fact, you can atomically specify [top/right/bottom/left] margins via the margin attribute alone by passing the string '100px 150px 100px 80px' for top, right, bottom and left margins of 100, 150, 100 and 80 pixels, respectively.
Similarly, the flex attribute can hold values for flex-grow, flex-shrink and flex-basis. The border attribute is a shorthand property for border-width, border-style (required), and border-color.
Simple examples
The following example shows how to resize a Button so that its views have a height of 80px and a width of 50% of the available space:
End of explanation
"""
Button(description='Another button with the same layout', layout=b.layout)
"""
Explanation: The layout property can be shared between multiple widgets and assigned directly.
End of explanation
"""
from ipywidgets import IntSlider
IntSlider(description='A too long description')
"""
Explanation: Description
You may have noticed that long descriptions are truncated. This is because the description length is, by default, fixed.
End of explanation
"""
style = {'description_width': 'initial'}
IntSlider(description='A too long description', style=style)
"""
Explanation: You can change the length of the description to fit the description text. However, this will make the widget itself shorter. You can change both by adjusting the description width and the widget width using the widget's style.
End of explanation
"""
from ipywidgets import HBox, Label
HBox([Label('A too long description'), IntSlider()])
"""
Explanation: If you need more flexibility to lay out widgets and descriptions, you can use Label widgets directly.
End of explanation
"""
from ipywidgets import Button, HBox, VBox
words = ['correct', 'horse', 'battery', 'staple']
items = [Button(description=w) for w in words]
left_box = VBox([items[0], items[1]])
right_box = VBox([items[2], items[3]])
HBox([left_box, right_box])
"""
Explanation: Natural sizes, and arrangements using HBox and VBox
Most of the core-widgets have default heights and widths that tile well together. This allows simple layouts based on the HBox and VBox helper functions to align naturally:
End of explanation
"""
from ipywidgets import IntSlider, Label
IntSlider(description=r'\(\int_0^t f\)')
Label(value=r'\(e=mc^2\)')
"""
Explanation: Latex
Widgets such as sliders and text inputs have a description attribute that can render Latex Equations. The Label widget also renders Latex equations.
End of explanation
"""
from ipywidgets import Layout, Button, Box
items_layout = Layout( width='auto') # override the default width of the button to 'auto' to let the button grow
box_layout = Layout(display='flex',
flex_flow='column',
align_items='stretch',
border='solid',
width='50%')
words = ['correct', 'horse', 'battery', 'staple']
items = [Button(description=word, layout=items_layout, button_style='danger') for word in words]
box = Box(children=items, layout=box_layout)
box
"""
Explanation: Number formatting
Sliders have a readout field which can be formatted using Python's Format Specification Mini-Language. If the space available for the readout is too narrow for the string representation of the slider value, a different styling is applied to show that not all digits are visible.
The Flexbox layout
The HBox and VBox classes above are special cases of the Box widget.
The Box widget enables the entire CSS flexbox spec as well as the Grid layout spec, enabling rich reactive layouts in the Jupyter notebook. It aims at providing an efficient way to lay out, align and distribute space among items in a container.
Again, the whole flexbox spec is exposed via the layout attribute of the container widget (Box) and the contained items. One may share the same layout attribute among all the contained items.
Acknowledgement
The following flexbox tutorial on the flexbox layout follows the lines of the article A Complete Guide to Flexbox by Chris Coyier, and uses text and various images from the article with permission.
Basics and terminology
Since flexbox is a whole module and not a single property, it involves a lot of things including its whole set of properties. Some of them are meant to be set on the container (parent element, known as "flex container") whereas the others are meant to be set on the children (known as "flex items").
If regular layout is based on both block and inline flow directions, the flex layout is based on "flex-flow directions". Please have a look at this figure from the specification, explaining the main idea behind the flex layout.
Basically, items will be laid out following either the main axis (from main-start to main-end) or the cross axis (from cross-start to cross-end).
main axis - The main axis of a flex container is the primary axis along which flex items are laid out. Beware, it is not necessarily horizontal; it depends on the flex-direction property (see below).
main-start | main-end - The flex items are placed within the container starting from main-start and going to main-end.
main size - A flex item's width or height, whichever is in the main dimension, is the item's main size. The flex item's main size property is either the ‘width’ or ‘height’ property, whichever is in the main dimension.
cross axis - The axis perpendicular to the main axis is called the cross axis. Its direction depends on the main axis direction.
cross-start | cross-end - Flex lines are filled with items and placed into the container starting on the cross-start side of the flex container and going toward the cross-end side.
cross size - The width or height of a flex item, whichever is in the cross dimension, is the item's cross size. The cross size property is whichever of ‘width’ or ‘height’ that is in the cross dimension.
Properties of the parent
display
display can be flex or inline-flex. This defines a flex container (block or inline).
flex-flow
flex-flow is a shorthand for the flex-direction and flex-wrap properties, which together define the flex container's main and cross axes. Default is row nowrap.
flex-direction (column-reverse | column | row | row-reverse )
This establishes the main-axis, thus defining the direction flex items are placed in the flex container. Flexbox is (aside from optional wrapping) a single-direction layout concept. Think of flex items as primarily laying out either in horizontal rows or vertical columns.
flex-wrap (nowrap | wrap | wrap-reverse)
By default, flex items will all try to fit onto one line. You can change that and allow the items to wrap as needed with this property. Direction also plays a role here, determining the direction new lines are stacked in.
justify-content
justify-content can be one of flex-start, flex-end, center, space-between, space-around. This defines the alignment along the main axis. It helps distribute extra free space left over when either all the flex items on a line are inflexible, or are flexible but have reached their maximum size. It also exerts some control over the alignment of items when they overflow the line.
align-items
align-items can be one of flex-start, flex-end, center, baseline, stretch. This defines the default behavior for how flex items are laid out along the cross axis on the current line. Think of it as the justify-content version for the cross-axis (perpendicular to the main-axis).
align-content
align-content can be one of flex-start, flex-end, center, baseline, stretch. This aligns a flex container's lines within when there is extra space in the cross-axis, similar to how justify-content aligns individual items within the main-axis.
Note: this property has no effect when there is only one line of flex items.
Properties of the items
The flexbox-related CSS properties of the items have no impact if the parent element is not a flexbox container (i.e. has a display attribute equal to flex or inline-flex).
order
By default, flex items are laid out in the source order. However, the order property controls the order in which they appear in the flex container.
<img src="./images/order-2.svg" alt="Order" style="width: 500px;"/>
flex
flex is shorthand for three properties, flex-grow, flex-shrink and flex-basis combined. The second and third parameters (flex-shrink and flex-basis) are optional. Default is 0 1 auto.
flex-grow
This defines the ability for a flex item to grow if necessary. It accepts a unitless value that serves as a proportion. It dictates what amount of the available space inside the flex container the item should take up.
If all items have flex-grow set to 1, the remaining space in the container will be distributed equally to all children. If one of the children has a value of 2, that child would take up twice as much of the space as the others (or it will try to, at least).
flex-shrink
This defines the ability for a flex item to shrink if necessary.
flex-basis
This defines the default size of an element before the remaining space is distributed. It can be a length (e.g. 20%, 5rem, etc.) or a keyword. The auto keyword means "look at my width or height property".
align-self
align-self allows the default alignment (or the one specified by align-items) to be overridden for individual flex items.
The VBox and HBox helpers
The VBox and HBox helper classes provide simple defaults to arrange child widgets in vertical and horizontal boxes. They are roughly equivalent to:
```Python
def VBox(*pargs, **kwargs):
    """Displays multiple widgets vertically using the flexible box model."""
    box = Box(*pargs, **kwargs)
    box.layout.display = 'flex'
    box.layout.flex_flow = 'column'
    box.layout.align_items = 'stretch'
    return box

def HBox(*pargs, **kwargs):
    """Displays multiple widgets horizontally using the flexible box model."""
    box = Box(*pargs, **kwargs)
    box.layout.display = 'flex'
    box.layout.align_items = 'stretch'
    return box
```
Examples
Four buttons in a VBox. Items stretch to the maximum width, in a vertical box taking 50% of the available space.
End of explanation
"""
from ipywidgets import Layout, Button, Box, VBox
# Items flex proportionally to the weight and the left over space around the text
items_auto = [
Button(description='weight=1; auto', layout=Layout(flex='1 1 auto', width='auto'), button_style='danger'),
Button(description='weight=3; auto', layout=Layout(flex='3 1 auto', width='auto'), button_style='danger'),
Button(description='weight=1; auto', layout=Layout(flex='1 1 auto', width='auto'), button_style='danger'),
]
# Items flex proportionally to the weight
items_0 = [
Button(description='weight=1; 0%', layout=Layout(flex='1 1 0%', width='auto'), button_style='danger'),
Button(description='weight=3; 0%', layout=Layout(flex='3 1 0%', width='auto'), button_style='danger'),
Button(description='weight=1; 0%', layout=Layout(flex='1 1 0%', width='auto'), button_style='danger'),
]
box_layout = Layout(display='flex',
flex_flow='row',
align_items='stretch',
width='70%')
box_auto = Box(children=items_auto, layout=box_layout)
box_0 = Box(children=items_0, layout=box_layout)
VBox([box_auto, box_0])
"""
Explanation: Three buttons in an HBox. Items flex proportionally to their weight.
End of explanation
"""
from ipywidgets import Layout, Button, Box, FloatText, Textarea, Dropdown, Label, IntSlider
form_item_layout = Layout(
display='flex',
flex_flow='row',
justify_content='space-between'
)
form_items = [
Box([Label(value='Age of the captain'), IntSlider(min=40, max=60)], layout=form_item_layout),
Box([Label(value='Egg style'),
Dropdown(options=['Scrambled', 'Sunny side up', 'Over easy'])], layout=form_item_layout),
Box([Label(value='Ship size'),
FloatText()], layout=form_item_layout),
Box([Label(value='Information'),
Textarea()], layout=form_item_layout)
]
form = Box(form_items, layout=Layout(
display='flex',
flex_flow='column',
border='solid 2px',
align_items='stretch',
width='50%'
))
form
"""
Explanation: A more advanced example: a reactive form.
The form is a VBox of width '50%'. Each row in the VBox is an HBox that justifies the content with space-between.
End of explanation
"""
from ipywidgets import Layout, Button, VBox, Label
item_layout = Layout(height='100px', min_width='40px')
items = [Button(layout=item_layout, description=str(i), button_style='warning') for i in range(40)]
box_layout = Layout(overflow='scroll hidden',
border='3px solid black',
width='500px',
height='',
flex_flow='row',
display='flex')
carousel = Box(children=items, layout=box_layout)
VBox([Label('Scroll horizontally:'), carousel])
"""
Explanation: A more advanced example: a carousel.
End of explanation
"""
from ipywidgets import Button
Button(description='Danger Button', button_style='danger')
"""
Explanation: Predefined styles
If you wish the styling of widgets to make use of colors and styles defined by the environment (to be consistent with e.g. a notebook theme), many widgets enable choosing in a list of pre-defined styles.
For example, the Button widget has a button_style attribute that may take 5 different values:
'primary'
'success'
'info'
'warning'
'danger'
besides the default empty string ''.
End of explanation
"""
b1 = Button(description='Custom color')
b1.style.button_color = 'lightgreen'
b1
"""
Explanation: The style attribute
While the layout attribute only exposes layout-related CSS properties for the top-level DOM element of widgets, the
style attribute is used to expose non-layout related styling attributes of widgets.
However, the properties of the style attribute are specific to each widget type.
End of explanation
"""
b1.style.keys
"""
Explanation: You can get a list of the style attributes for a widget with the keys property.
End of explanation
"""
b2 = Button()
b2.style = b1.style
b2
"""
Explanation: Just like the layout attribute, widget styles can be assigned to other widgets.
End of explanation
"""
s1 = IntSlider(description='Blue handle')
s1.style.handle_color = 'lightblue'
s1
"""
Explanation: Widget styling attributes are specific to each widget type.
End of explanation
"""
b3 = Button(description='Styled button', style=dict(
font_style='italic',
font_weight='bold',
font_variant="small-caps",
text_color='red',
text_decoration='underline'
))
b3
"""
Explanation: Styles can be given when a widget is constructed, either as a specific Style instance or as a dictionary.
End of explanation
"""
from ipywidgets import Button, GridBox, Layout, ButtonStyle
"""
Explanation: The Grid layout
The GridBox class is a special case of the Box widget.
The Box widget enables the entire CSS grid layout spec (in addition to flexbox), enabling rich reactive layouts in the Jupyter notebook. It aims at providing an efficient way to lay out, align and distribute space among items in a container.
Again, the whole grid layout spec is exposed via the layout attribute of the container widget (Box) and the contained items. One may share the same layout attribute among all the contained items.
The following tutorial on the grid layout follows the lines of the article A Complete Guide to Grid by Chris House, and uses text and various images from the article with permission.
Basics and browser support
To get started you have to define a container element as a grid with display: grid, set the column and row sizes with grid-template-rows, grid-template-columns, and grid_template_areas, and then place its child elements into the grid with grid-column and grid-row. Similarly to flexbox, the source order of the grid items doesn't matter. Your CSS can place them in any order, which makes it super easy to rearrange your grid with media queries. Imagine defining the layout of your entire page, and then completely rearranging it to accommodate a different screen width all with only a couple lines of CSS. Grid is one of the most powerful CSS modules ever introduced.
As of March 2017, most browsers shipped native, unprefixed support for CSS Grid: Chrome (including on Android), Firefox, Safari (including on iOS), and Opera. Internet Explorer 10 and 11 on the other hand support it, but it's an old implementation with an outdated syntax. The time to build with grid is now!
Important terminology
Before diving into the concepts of Grid it's important to understand the terminology. Since the terms involved here are all kinda conceptually similar, it's easy to confuse them with one another if you don't first memorize their meanings defined by the Grid specification. But don't worry, there aren't many of them.
Grid Container
The element on which display: grid is applied. It's the direct parent of all the grid items. In this example container is the grid container.
```html
<div class="container">
<div class="item item-1"></div>
<div class="item item-2"></div>
<div class="item item-3"></div>
</div>
```
Grid Item
The children (e.g. direct descendants) of the grid container. Here the item elements are grid items, but sub-item isn't.
```html
<div class="container">
<div class="item"></div>
<div class="item">
<p class="sub-item"></p>
</div>
<div class="item"></div>
</div>
```
Grid Line
The dividing lines that make up the structure of the grid. They can be either vertical ("column grid lines") or horizontal ("row grid lines") and reside on either side of a row or column. Here the yellow line is an example of a column grid line.
Grid Track
The space between two adjacent grid lines. You can think of them like the columns or rows of the grid. Here's the grid track between the second and third row grid lines.
Grid Cell
The space between two adjacent row and two adjacent column grid lines. It's a single "unit" of the grid. Here's the grid cell between row grid lines 1 and 2, and column grid lines 2 and 3.
Grid Area
The total space surrounded by four grid lines. A grid area may be comprised of any number of grid cells. Here's the grid area between row grid lines 1 and 3, and column grid lines 1 and 3.
Properties of the parent
grid-template-rows, grid-template-colums
Defines the columns and rows of the grid with a space-separated list of values. The values represent the track size, and the space between them represents the grid line.
Values:
<track-size> - can be a length, a percentage, or a fraction of the free space in the grid (using the fr unit)
<line-name> - an arbitrary name of your choosing
grid-template-areas
Defines a grid template by referencing the names of the grid areas which are specified with the grid-area property. Repeating the name of a grid area causes the content to span those cells. A period signifies an empty cell. The syntax itself provides a visualization of the structure of the grid.
Values:
<grid-area-name> - the name of a grid area specified with grid-area
. - a period signifies an empty grid cell
none - no grid areas are defined
grid-gap
A shorthand for grid-row-gap and grid-column-gap
Values:
<grid-row-gap>, <grid-column-gap> - length values
where grid-row-gap and grid-column-gap specify the sizes of the grid lines. You can think of it like setting the width of the gutters between the columns / rows.
<line-size> - a length value
Note: The grid- prefix will be removed and grid-gap renamed to gap. The unprefixed property is already supported in Chrome 68+, Safari 11.2 Release 50+ and Opera 54+.
align-items
Aligns grid items along the block (column) axis (as opposed to justify-items which aligns along the inline (row) axis). This value applies to all grid items inside the container.
Values:
start - aligns items to be flush with the start edge of their cell
end - aligns items to be flush with the end edge of their cell
center - aligns items in the center of their cell
stretch - fills the whole height of the cell (this is the default)
justify-items
Aligns grid items along the inline (row) axis (as opposed to align-items which aligns along the block (column) axis). This value applies to all grid items inside the container.
Values:
start - aligns items to be flush with the start edge of their cell
end - aligns items to be flush with the end edge of their cell
center - aligns items in the center of their cell
stretch - fills the whole width of the cell (this is the default)
align-content
Sometimes the total size of your grid might be less than the size of its grid container. This could happen if all of your grid items are sized with non-flexible units like px. In this case you can set the alignment of the grid within the grid container. This property aligns the grid along the block (column) axis (as opposed to justify-content which aligns the grid along the inline (row) axis).
Values:
start - aligns the grid to be flush with the start edge of the grid container
end - aligns the grid to be flush with the end edge of the grid container
center - aligns the grid in the center of the grid container
stretch - resizes the grid items to allow the grid to fill the full height of the grid container
space-around - places an even amount of space between each grid item, with half-sized spaces on the far ends
space-between - places an even amount of space between each grid item, with no space at the far ends
space-evenly - places an even amount of space between each grid item, including the far ends
justify-content
Sometimes the total size of your grid might be less than the size of its grid container. This could happen if all of your grid items are sized with non-flexible units like px. In this case you can set the alignment of the grid within the grid container. This property aligns the grid along the inline (row) axis (as opposed to align-content which aligns the grid along the block (column) axis).
Values:
start - aligns the grid to be flush with the start edge of the grid container
end - aligns the grid to be flush with the end edge of the grid container
center - aligns the grid in the center of the grid container
stretch - resizes the grid items to allow the grid to fill the full width of the grid container
space-around - places an even amount of space between each grid item, with half-sized spaces on the far ends
space-between - places an even amount of space between each grid item, with no space at the far ends
space-evenly - places an even amount of space between each grid item, including the far ends
grid-auto-columns, grid-auto-rows
Specifies the size of any auto-generated grid tracks (aka implicit grid tracks). Implicit tracks get created when there are more grid items than cells in the grid or when a grid item is placed outside of the explicit grid. (see The Difference Between Explicit and Implicit Grids)
Values:
<track-size> - can be a length, a percentage, or a fraction of the free space in the grid (using the fr unit)
Properties of the items
Note: float, display: inline-block, display: table-cell, vertical-align and column-* properties have no effect on a grid item.
grid-column, grid-row
Determines a grid item's location within the grid by referring to specific grid lines. grid-column-start/grid-row-start is the line where the item begins, and grid-column-end/grid-row-end is the line where the item ends.
Values:
<line> - can be a number to refer to a numbered grid line, or a name to refer to a named grid line
span <number> - the item will span across the provided number of grid tracks
span <name> - the item will span across until it hits the next line with the provided name
auto - indicates auto-placement, an automatic span, or a default span of one
css
.item {
grid-column: <number> | <name> | span <number> | span <name> | auto /
<number> | <name> | span <number> | span <name> | auto
grid-row: <number> | <name> | span <number> | span <name> | auto /
<number> | <name> | span <number> | span <name> | auto
}
Examples:
css
.item-a {
grid-column: 2 / five;
grid-row: row1-start / 3;
}
css
.item-b {
grid-column: 1 / span col4-start;
grid-row: 2 / span 2;
}
If no grid-column / grid-row is declared, the item will span 1 track by default.
Items can overlap each other. You can use z-index to control their stacking order.
grid-area
Gives an item a name so that it can be referenced by a template created with the grid-template-areas property. Alternatively, this property can be used as an even shorter shorthand for grid-row-start + grid-column-start + grid-row-end + grid-column-end.
Values:
<name> - a name of your choosing
<row-start> / <column-start> / <row-end> / <column-end> - can be numbers or named lines
css
.item {
grid-area: <name> | <row-start> / <column-start> / <row-end> / <column-end>;
}
Examples:
As a way to assign a name to the item:
css
.item-d {
grid-area: header
}
As the short-shorthand for grid-row-start + grid-column-start + grid-row-end + grid-column-end:
css
.item-d {
grid-area: 1 / col4-start / last-line / 6
}
justify-self
Aligns a grid item inside a cell along the inline (row) axis (as opposed to align-self which aligns along the block (column) axis). This value applies to a grid item inside a single cell.
Values:
start - aligns the grid item to be flush with the start edge of the cell
end - aligns the grid item to be flush with the end edge of the cell
center - aligns the grid item in the center of the cell
stretch - fills the whole width of the cell (this is the default)
css
.item {
justify-self: start | end | center | stretch;
}
Examples:
css
.item-a {
justify-self: start;
}
css
.item-a {
justify-self: end;
}
css
.item-a {
justify-self: center;
}
css
.item-a {
justify-self: stretch;
}
To set alignment for all the items in a grid, this behavior can also be set on the grid container via the justify-items property.
End of explanation
"""
header = Button(description='Header',
layout=Layout(width='auto', grid_area='header'),
style=ButtonStyle(button_color='lightblue'))
main = Button(description='Main',
layout=Layout(width='auto', grid_area='main'),
style=ButtonStyle(button_color='moccasin'))
sidebar = Button(description='Sidebar',
layout=Layout(width='auto', grid_area='sidebar'),
style=ButtonStyle(button_color='salmon'))
footer = Button(description='Footer',
layout=Layout(width='auto', grid_area='footer'),
style=ButtonStyle(button_color='olive'))
GridBox(children=[header, main, sidebar, footer],
layout=Layout(
width='50%',
grid_template_rows='auto auto auto',
grid_template_columns='25% 25% 25% 25%',
grid_template_areas='''
"header header header header"
"main main . sidebar "
"footer footer footer footer"
''')
)
"""
Explanation: Placing items by name:
End of explanation
"""
GridBox(children=[Button(layout=Layout(width='auto', height='auto'),
style=ButtonStyle(button_color='darkseagreen')) for i in range(9)
],
layout=Layout(
width='50%',
grid_template_columns='100px 50px 100px',
grid_template_rows='80px auto 80px',
grid_gap='5px 10px')
)
"""
Explanation: Setting up row and column template and gap
End of explanation
"""
from ipywidgets import Layout, Box, VBox, HBox, HTML, Image
fit_options = ['contain', 'cover', 'fill', 'scale-down', 'none', None]
hbox_layout = Layout()
hbox_layout.width = '100%'
hbox_layout.justify_content = 'space-around'
green_box_layout = Layout()
green_box_layout.width = '100px'
green_box_layout.height = '100px'
green_box_layout.border = '2px solid green'
def make_box_for_grid(image_widget, fit):
"""
Make a VBox to hold caption/image for demonstrating
object_fit values.
"""
# Make the caption
if fit is not None:
fit_str = "'{}'".format(fit)
else:
fit_str = str(fit)
h = HTML(value=fit_str)
# Make the green box with the image widget inside it
boxb = Box()
boxb.layout = green_box_layout
boxb.children = [image_widget]
# Compose into a vertical box
vb = VBox()
vb.layout.align_items = 'center'
vb.children = [h, boxb]
return vb
# Use this margin to eliminate space between the image and the box
image_margin = '0 0 0 0'
# Set size of captions in figures below
caption_size = 'h4'
"""
Explanation: Image layout and sizing
The layout and sizing of images is a little different than for other elements for a combination of historical reasons (the HTML tag img existed before CSS) and practical reasons (an image has an intrinsic size).
Sizing of images is particularly confusing because there are two plausible ways to specify the size of an image. The Image widget has attributes width and height that correspond to attributes of the same name on the HTML img tag. In addition, the Image widget, like every other widget, has a layout, which also has a width and height.
In addition, some CSS styling is applied to images that is not applied to other widgets: max_width is set to 100% and height is set to auto.
You should not rely on Image.width or Image.height to determine the display width and height of the image widget. Any CSS styling, whether via Image.layout or some other source, will override Image.width and Image.height.
When displaying an Image widget by itself, setting Image.layout.width to the desired width will display the Image widget with the same aspect ratio as the original image.
When placing an Image inside a Box (or HBox or VBox) the result depends on whether a width has been set on the Box. If it has, then the image will be stretched (or compressed) to fit within the box because Image.layout.max_width is 100% so the image fills the container. This will usually not preserve the aspect ratio of the image.
Controlling the display of an Image inside a container
Use Image.layout.object_fit to control how an image is scaled inside a container like a box. The possible values are:
'contain': Fit the image in its content box while preserving the aspect ratio. If any of the container is not covered by the image, the background of the container is displayed. The content box is the size of the container if the container is smaller than the image, or the size of the image if the container is larger.
'cover': Fill the content box completely while preserving the aspect ratio of the image, cropping the image if necessary.
'fill': Completely fill the content box, stretching/compressing the image as necessary.
'none': Do no resizing; image will be clipped by the container.
'scale-down': Do the same thing as either contain or none, using whichever results in the smaller displayed image.
None (the Python value): Remove object_fit from the layout; effect is same as 'fill'.
Use Image.layout.object_position to control where an image is positioned within a container like a box. The default value ensures that the image is centered in the box. The effect of Image.layout.object_position depends, in some cases, on the value of Image.layout.object_fit.
There are several ways to specify the value for object_position, described below.
Examples of object_fit
In the example below an image is displayed inside a green box to demonstrate each of the values for object_fit.
To keep the example uniform, define common code here.
End of explanation
"""
with open('images/gaussian_with_grid.png', 'rb') as f:
im_600_300 = f.read()
boxes = []
for fit in fit_options:
ib = Image(value=im_600_300)
ib.layout.object_fit = fit
ib.layout.margin = image_margin
boxes.append(make_box_for_grid(ib, fit))
vb = VBox()
h = HTML(value='<{size}>Examples of <code>object_fit</code> with large image</{size}>'.format(size=caption_size))
vb.layout.align_items = 'center'
hb = HBox()
hb.layout = hbox_layout
hb.children = boxes
vb.children = [h, hb]
vb
"""
Explanation: object_fit in a Box smaller than the original image
The effect of each can be seen in the image below. In each case, the image is in a box with a green border. The original image is 600x300 and the grid boxes in the image are squares. Since the image is wider than the box width, the content box is the size of the container.
End of explanation
"""
with open('images/gaussian_with_grid_tiny.png', 'rb') as f:
im_50_25 = f.read()
boxes = []
for fit in fit_options:
ib = Image(value=im_50_25)
ib.layout.object_fit = fit
ib.layout.margin = image_margin
boxes.append(make_box_for_grid(ib, fit))
vb = VBox()
h = HTML(value='<{size}>Examples of <code>object_fit</code> with small image</{size}>'.format(size=caption_size))
vb.layout.align_items = 'center'
hb = HBox()
hb.layout = hbox_layout
hb.children = boxes
vb.children = [h, hb]
vb
"""
Explanation: object_fit in a Box larger than the original image
The effect of each can be seen in the image below. In each case, the image is in a box with a green border. The original image is 50x25 and the grid boxes in the image are squares.
End of explanation
"""
boxes = []
for fit in fit_options:
ib = Image(value=im_50_25)
ib.layout.object_fit = fit
ib.layout.margin = image_margin
# NOTE WIDTH IS SET TO 100%
ib.layout.width = '100%'
boxes.append(make_box_for_grid(ib, fit))
vb = VBox()
h = HTML(value='<{size}>Examples of <code>object_fit</code> with image '
'smaller than container</{size}>'.format(size=caption_size))
vb.layout.align_items = 'center'
hb = HBox()
hb.layout = hbox_layout
hb.children = boxes
vb.children = [h, hb]
vb
"""
Explanation: It may be surprising, given the description of the values for object_fit, that in none of the cases does the image actually fill the box. The reason is that the underlying image is only 50 pixels wide, half the width of the box, so fill and cover mean "fill/cover the content box determined by the size of the image".
object_fit in a Box larger than the original image: use image layout width 100% to fill container
If the width of the image's layout is set to 100% it will fill the box in which it is placed. This example also illustrates the difference between 'contain' and 'scale-down'. The effect of 'scale-down' is either the same as 'contain' or 'none', whichever leads to the smaller displayed image. In this case, the smaller image comes from doing no fitting, so that is what is displayed.
End of explanation
"""
object_fit = 'none'
image_value = [im_600_300, im_50_25]
horz_keywords = ['left', 'center', 'right']
vert_keywords = ['top', 'center', 'bottom']
rows = []
for image, caption in zip(image_value, ['600 x 300 image', '50 x 25 image']):
cols = []
for horz in horz_keywords:
for vert in vert_keywords:
ib = Image(value=image)
ib.layout.object_position = '{horz} {vert}'.format(horz=horz, vert=vert)
ib.layout.margin = image_margin
ib.layout.object_fit = object_fit
# ib.layout.height = 'inherit'
ib.layout.width = '100%'
cols.append(make_box_for_grid(ib, ib.layout.object_position))
hb = HBox()
hb.layout = hbox_layout
hb.children = cols
rows.append(hb)
vb = VBox()
h1 = HTML(value='<{size}><code> object_position </code> by '
'keyword with large image</{size}>'.format(size=caption_size))
h2 = HTML(value='<{size}><code> object_position </code> by '
'keyword with small image</{size}>'.format(size=caption_size))
vb.children = [h1, rows[0], h2, rows[1]]
vb.layout.height = '400px'
vb.layout.justify_content = 'space-around'
vb.layout.align_items = 'center'
vb
"""
Explanation: Examples of object_position
There are several ways to set object position:
Use keywords like top and left to describe how the image should be placed in the container.
Use two positions, in pixels, for the offset from the top, left corner of the container to the top, left corner of the image. The offset may be positive or negative, and can be used to position the image outside of the box.
Use a percentage for the offset in each direction. The percentage is the fraction of the vertical or horizontal whitespace around the image if the image is smaller than the container, or the portion of the image outside the container if the image is larger than the container.
A mix of pixel and percent offsets.
A mix of keywords and offsets.
Image scaling as determined by object_fit will take precedence over the positioning in some cases. For example, if object_fit is fill, so that the image is supposed to fill the container without preserving the aspect ratio, then object_position will have no effect because there is effectively no positioning to do.
Another way to think about it is this: object_position specifies how the white space around an image should be distributed in a container if there is white space in a particular direction.
Specifying object_position with keywords
This form of object_position takes two keywords, one for horizontal position of the image in the container and one for the vertical position, in that order.
The horizontal position must be one of:
'left': the left side of the image should be aligned with the left side of the container
'center': the image should be centered horizontally in the container.
'right': the right side of the image should be aligned with the right side of the container.
The vertical position must be one of
'top': the top of the image should be aligned with the top of the container.
'center': the image should be centered vertically in the container.
'bottom': the bottom of the image should be aligned with the bottom of the container.
The effect of each is display below, once for an image smaller than the container and once for an image larger than the container.
In the examples below the object_fit is set to 'none' so that the image is not scaled.
End of explanation
"""
object_fit = ['none', 'contain', 'fill', 'cover']
offset = '20px 10px'
image_value = [im_600_300]
boxes = []
for image, caption in zip(image_value, ['600 x 300 image', ]):
for fit in object_fit:
ib = Image(value=image)
ib.layout.object_position = offset
ib.layout.margin = image_margin
ib.layout.object_fit = fit
# ib.layout.height = 'inherit'
ib.layout.width = '100%'
title = 'object_fit: {}'.format(ib.layout.object_fit)
boxes.append(make_box_for_grid(ib, title))
vb = VBox()
h = HTML(value='<{size}><code>object_position</code> by '
'offset {offset} with several '
'<code>object_fit</code>s with large image</{size}>'.format(size=caption_size,
offset=offset))
vb.layout.align_items = 'center'
hb = HBox()
hb.layout = hbox_layout
hb.children = boxes
vb.children = [h, hb]
vb
"""
Explanation: Specifying object_position with offsets in pixels
One can specify the offset of the top, left corner of the image from the top, left corner of the container in pixels. The first of the two offsets is horizontal, the second is vertical and either may be negative. Using a large enough offset that the image is outside the container will result in the image being hidden.
The image is scaled first using the value of object_fit (which defaults to fill if nothing is specified) then the offset is applied.
Offsets can be specified from the bottom and/or right side by combining keywords and pixel offsets. For example, right 10px bottom 20px offsets the right side of the image 10px from the right edge of the container and the image bottom 20px from the bottom of the container.
End of explanation
"""
import pandas as pd
import numpy as np
import pickle
import statsmodels.api as sm
from sklearn import cluster
import matplotlib.pyplot as plt
%matplotlib inline
from bs4 import BeautifulSoup as bs
import requests
import time
# from ggplot import *
"""
Explanation: Collecting and Using Data in Python
Laila A. Wahedi
Massive Data Institute Postdoctoral Fellow <br>McCourt School of Public Policy<br>
Follow along: Wahedi.us, Current Presentation
Agenda for today:
More on manipulating data
Scrape data
Merge data into a data frame
Run a basic model on the data
Packages to Import For Today
Should all be included with your Anaconda Python Distribution
Raise your hand for help if you have trouble
Our plots will use matplotlib, similar to plotting in matlab
%matplotlib inline tells Jupyter Notebooks to display your plots
from allows you to import part of a package
End of explanation
"""
asthma_data = pd.read_csv('asthma-emergency-department-visit-rates-by-zip-code.csv')
asthma_data.head()
"""
Explanation: Other Useful Packages (not used today)
ggplot: the familiar ggplot2 you know and love from R
seaborn: Makes your plots prettier
plotly: makes interactive visualizations, similar to shiny
gensim: package for doing natural language processing
scipy: used with numpy to do math. Generates random numbers from distributions, does matrix operations, etc.
Data Manipulation
Download the .csv file at: <br>
https://data.chhs.ca.gov/dataset/asthma-emergency-department-visit-rates-by-zip-code
OR: https://tinyurl.com/y79jbxlk
Move it to the same directory as your notebook
End of explanation
"""
asthma_data[['zip','coordinates']] = asthma_data.loc[:,'ZIP code'].str.split(
pat='\n',expand=True)
asthma_data.drop('ZIP code', axis=1,inplace=True)
asthma_data.head(2)
"""
Explanation: Look at those zip codes!
Clean Zip Code
We don't need the latitude and longitude
Create two variables by splitting the zip code variable:
index the data frame to the zip code variable
split it in two: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html
assign the result to two new variables
Remember: can't run this cell twice without starting over
End of explanation
"""
asthma_grouped = asthma_data.groupby(by=['Year','zip']).sum()
asthma_grouped.head(4)
"""
Explanation: Rearrange The Data: Group By
Make child and adult separate columns rather than rows.
Must specify how to aggregate the columns <br>
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html
End of explanation
"""
asthma_grouped.drop('County Fips code',axis=1,inplace=True)
temp_grp = asthma_data.groupby(by=['Year','zip']).first()
asthma_grouped[['fips','county','coordinates']]=temp_grp.loc[:,['County Fips code',
'County',
'coordinates']]
asthma_grouped.loc[:,'Number of Visits']=asthma_grouped.loc[:,'Number of Visits']/2
asthma_grouped.head(2)
"""
Explanation: Lost Columns! Fips summed!
Group by: Cleaning Up
Lost columns you can't sum
took sum of fips
Must add these back in
Works because temp table has same index
End of explanation
"""
A = [5]
B = A
A.append(6)
print(B)
import copy
A = [5]
B = A.copy()
A.append(6)
print(B)
asthma_grouped[['fips','county','coordinates']]=temp_grp.loc[:,['County Fips code',
'County',
'coordinates']].copy()
"""
Explanation: Aside on Copying
Multiple variables can point to the same data in Python. Saves memory
If you set one variable equal to another, then change the first variable, the second changes.
Causes warnings in Pandas all the time.
Solution:
Use proper slicing-- .loc[] --for the right hand side
Use copy
End of explanation
"""
asthma_unstacked = asthma_data.pivot_table(index = ['Year',
'zip',
'County',
'coordinates',
'County Fips code'],
columns = 'Age Group',
values = 'Number of Visits')
asthma_unstacked.reset_index(drop=False,inplace=True)
asthma_unstacked.head(2)
"""
Explanation: Rearrange The Data: Pivot
Use pivot and melt to move from row identifiers to column identifiers and back <br>
https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-by-melt
Tell computer what to do with every cell:
Index: Stays the same
Columns: The column containing the new column labels
Values: The column containing values to insert
<img src='pivot.png'>
Rearrange The Data: Pivot
Tell computer what to do with every cell:
Index: Stays the same
Columns: The column containing the new column labels
Values: The column containing values to insert
End of explanation
"""
asthma_unstacked.rename(columns={
'zip':'Zip',
'coordinates':'Coordinates',
'County Fips code':'Fips',
'Adults (18+)':'Adults',
'All Ages':'Incidents',
'Children (0-17)': 'Children'
},
inplace=True)
asthma_2015 = asthma_unstacked.loc[asthma_unstacked.Year==2015,:]
asthma_2015.head(2)
"""
Explanation: Rename Columns, Subset Data
End of explanation
"""
pickle.dump(asthma_unstacked,open('asthma_unstacked.p','wb'))
asthma_unstacked.to_csv('asthma_unstacked.csv')
asthma_unstacked = pickle.load(open('asthma_unstacked.p','rb'))
"""
Explanation: Save Your Data
No saving your workspace like in R or STATA
Save specific variables, models, or results using Pickle
wb: write binary. Tells computer to save the file
rb: read binary. Tells computer to read the file
If you mix them up, you may write over your data and lose it
Write your data to a text file to read later
End of explanation
"""
base_url = "http://www.airnowapi.org/aq/observation/zipCode/historical/"
attributes = ["format=application/json",
"zipCode=20007",
"date=2017-09-05T00-0000",
"distance=25",
"API_KEY=39DC3727-09BD-48C4-BBD8-XXXXXXXXXXXX"
]
post_url = '&'.join(attributes)
print(post_url)
"""
Explanation: Scraping
How the Internet Works
Code is stored on servers
Web addresses point to the location of that code
Going to an address or clicking a button sends requests to the server for data,
The server returns the requested content
Your web browser interprets the code to render the web page
<img src='Internet.png'>
Scraping:
Collect the website code by emulating the process:
Can haz cheezburger?
<img src='burger.png'>
Extract the useful information from the scraped code:
Where's the beef?
<img src='beef.png'>
API
Application Programming Interface
The set of rules that govern communication between two pieces of code
Code requires clear expected inputs and outputs
APIs define required inputs to get the outputs in a format you can expect.
Easier than scraping a website because gives you exactly what you ask for
<img src = "beef_direct.png">
API Keys
APIs often require identification
Go to https://docs.airnowapi.org
Register and get a key
Log in to the site
Select web services
DO NOT SHARE YOUR KEY
It will get stolen and used for malicious activity
Requests to a Server
<div style="float: left;width:50%">
<h3> GET</h3>
<ul><li>Requests data from the server</li>
<li> Encoded into the URL</li></ul>
<img src = 'get.png'>
</div>
<div style="float: left;width:50%">
<h3>POST</h3>
<ul><li>Submits data to be processed by the server</li>
<li>For example, filter the data</li>
<li>Can attach additional data not directly in the url</li></ul>
<img src = 'post.png'>
</div>
Using an API
<img src = 'api.png'>
Requests encoded in the URL
Parsing a URL
<font color="blue">http://www.airnowapi.org/aq/observation/zipCode/historical/</font><font color="red">?</font><br><font color="green">format</font>=<font color="purple">application/json</font><font color="orange">&<br></font><font color="green">zipCode</font>=<font color="purple">20007</font><font color="orange">&</font><br><font color="green">date</font>=<font color="purple">2017-09-05T00-0000</font><font color="orange">&</font><br><font color="green">distance</font>=<font color="purple">25</font><font color="orange">&</font><br><font color="green">API_KEY</font>=<font color="purple">D9AA91E7-070D-4221-867CC-XXXXXXXXXXX</font>
The base URL or endpoint is:<br>
<font color="blue">http://www.airnowapi.org/aq/observation/zipCode/historical/</font>
<font color="red">?</font> tells us that this is a query.
<font color="orange">&</font> separates name, value pairs within the request.
Five <font color="green"><strong>name</strong></font>, <font color="purple"><strong>value</strong></font> pairs passed in the query string
format, zipCode, date, distance, API_KEY
Request from Python
prepare the url
List of attributes
Join them with "&" to form a string
End of explanation
"""
ingredients = requests.get(base_url, params=post_url)
ingredients = ingredients.json()
print(ingredients[0])
"""
Explanation: Requests from Python
Use requests package
Requested json format
Returns list of dictionaries
Look at the returned keys
End of explanation
"""
for item in ingredients:
AQIType = item['ParameterName']
City=item['ReportingArea']
AQIValue=item['AQI']
print("For Location ", City, " the AQI for ", AQIType, "is ", AQIValue)
"""
Explanation: View Returned Data:
Each list gives a different parameter for zip code and date we searched
End of explanation
"""
time.sleep(1)
"""
Explanation: Ethics
Check the websites terms of use
Don't hit too hard:
Insert pauses in your code to act more like a human
Scraping can look like an attack
Server will block you without pauses
APIs often have rate limits
Use the time package to pause for a second between hits
End of explanation
"""
base_url = "http://www.airnowapi.org/aq/observation/zipCode/historical/"
zips = asthma_2015.Zip.unique()
zips = zips[:450]
date ="date=2015-09-01T00-0000"
api_key = "API_KEY=39DC3727-09BD-48C4-BBD8-XXXXXXXXXXXX"
return_format = "format=application/json"
zip_str = "zipCode="
post_url = "&".join([date,api_key,return_format,zip_str])
data_dict = {}
for zipcode in zips:
time.sleep(1)
zip_post = post_url + str(zipcode)
ingredients = requests.get(base_url, zip_post)
ingredients = ingredients.json()
zip_data = {}
for data_point in ingredients:
AQIType = data_point['ParameterName']
AQIVal = data_point['AQI']
zip_data[AQIType] = AQIVal
data_dict[zipcode]= zip_data
"""
Explanation: Collect Our Data
Python helps us automate repetitive tasks. Don't download each datapoint you want separately
Get a list of zip codes we want
take a subset to demo, so it doesn't take too long and so we don't all hit too hard from the same ip
Request the data for those zipcodes on a day in 2015 (you pick, fire season July-Oct)
Be sure to sleep between requests
Store that data as you go into a dictionary
Key: zip code
Value: Dictionary of the air quality parameters and their value
End of explanation
"""
ingredients = requests.get("https://en.wikipedia.org/wiki/Data_science")
soup = bs(ingredients.text, 'html.parser')
print(soup.body.p)
"""
Explanation: Scraping: Parsing HTML
What about when you don't have an API that returns dictionaries?
HTML is a markup language that displays data (text, images, etc)
Puts content within nested tags to tell your browser how to display it
<Section_tag>
  <tag> Content </tag>
  <tag> Content </tag>
< /Section_tag>
<Section_tag>
  <tag> <font color="red">Beef</font> </tag>
< /Section_tag>
Find the tags that identify the content you want:
First paragraph of wikipedia article:
https://en.wikipedia.org/wiki/Data_science
Inspect the webpage:
Windows: ctrl+shift+i
Mac: cmd+alt+i
<img src = "wikipedia_scrape.png">
Parsing HTML with Beautiful Soup
Beautiful Soup takes the raw html and parses the tags so you can search through them.
text attribute returns raw html text from requests
Specifying a parser such as 'html.parser' avoids a warning; the default parser is fine too
We know it's the first paragraph tag in the body tag, so:
Can find first tag of a type using <strong>.</strong>
But it's not usually that easy...
End of explanation
"""
parser_div = soup.find("div", class_="mw-parser-output")
wiki_content = parser_div.find_all('p')
print(wiki_content[0])
print('*****************************************')
print(wiki_content[0].text)
"""
Explanation: Use Find Feature to Narrow Your Search
Find the unique div we identified
Remember the underscore: "class_"
Find the p tag within the resulting html
Use an index to return just the first paragraph tag
Use the text attribute to ignore all the formatting and link tags
Next: Use a for loop and scrape the first paragraph from a bunch of wikipedia articles
Learn More: http://web.stanford.edu/~zlotnick/TextAsData/Web_Scraping_with_Beautiful_Soup.html
End of explanation
"""
pickle.dump(data_dict,open('AQI_data_raw.p','wb'))
"""
Explanation: Back To Our Data
If it's still running, go ahead and stop it by pushing the square at the top of the notebook:
<img src="interrupt.png">
Save what you collected, don't want to hit them twice!
End of explanation
"""
collected = list(data_dict.keys())
asthma_2015_sub = asthma_2015.loc[asthma_2015.Zip.isin(collected),:]
"""
Explanation: Subset down to the data we have:
use the isin() method to include only those zip codes we've already collected
End of explanation
"""
aqi_data = pd.DataFrame.from_dict(data_dict, orient='index')
aqi_data.reset_index(drop=False,inplace=True)
aqi_data.rename(columns={'index':'Zip'},inplace=True)
aqi_data.head()
"""
Explanation: Create a dataframe from the new AQI data
End of explanation
"""
asthma_aqi = asthma_2015_sub.merge(aqi_data,how='outer',on='Zip')
asthma_aqi.head(2)
"""
Explanation: Combine The Data
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html
* Types of merges:
* Left: Use only rows from the dataframe you are merging into
* Right: use only rows from the dataframe you are inserting, (the one in the parentheses)
* Inner: Use only rows that match between both
* Outer: Use all rows, even if they only appear in one of the dataframes
* On: The variables you want to compare
* Specify right_on and left_on if they have different names
End of explanation
"""
asthma_aqi.Incidents.plot.hist(bins=20)
"""
Explanation: Look At The Data: Histogram
20 bins
End of explanation
"""
asthma_aqi.loc[:,['Incidents','OZONE']].plot.density()
"""
Explanation: Look At The Data: Smoothed Distribution
End of explanation
"""
asthma_aqi.loc[:,['PM2.5','PM10']].plot.hist()
"""
Explanation: Look at particulates
There is a lot of missingness in 2015
Try other variables, such as comparing children and adults
End of explanation
"""
asthma_aqi.plot.scatter('OZONE','PM2.5')
"""
Explanation: Scatter Plot
Try some other combinations
Our data look clustered, but we'll ignore that for now
End of explanation
"""
y = asthma_aqi.loc[:, 'Incidents']
x = asthma_aqi.loc[:, ['OZONE', 'PM2.5']].copy()
x['c'] = 1  # add a constant column for the intercept (.copy() avoids a SettingWithCopyWarning)
ols_model1 = sm.OLS(y,x,missing='drop')
results = ols_model1.fit()
print(results.summary())
pickle.dump([results,ols_model1],open('ols_model_results.p','wb'))
"""
Explanation: Run a regression:
Note: statsmodels supports equation format like R <br>
http://www.statsmodels.org/dev/example_formulas.html
End of explanation
"""
model_df = asthma_aqi.loc[:,['OZONE','PM2.5','Incidents',]]
model_df.dropna(axis=0,inplace=True)
model_df = (model_df - model_df.mean()) / (model_df.max() - model_df.min())
asthma_air_clusters=cluster.KMeans(n_clusters = 3)
asthma_air_clusters.fit(model_df)
model_df['clusters3']=asthma_air_clusters.labels_
"""
Explanation: Clustering Algorithm
Learn more about clustering here: <br>
http://scikit-learn.org/stable/modules/clustering.html
Use sklearn, a package for data mining and machine learing
Drop rows with missing values first
Standardize the data so they're all on the same scale
End of explanation
"""
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(4, 3))
ax = fig.add_axes([0, 0, .95, 1], projection='3d')  # Axes3D(fig, ...) no longer auto-attaches in newer matplotlib
ax.view_init(elev=48, azim=134)
labels = asthma_air_clusters.labels_
ax.scatter(model_df.loc[:, 'PM2.5'], model_df.loc[:, 'OZONE'], model_df.loc[:, 'Incidents'],
           c=labels.astype(float), edgecolor='k')  # np.float was removed in newer NumPy
ax.set_xlabel('Particulates')
ax.set_ylabel('Ozone')
ax.set_zlabel('Incidents')
"""
Explanation: Look At Clusters
Our data are very closely clustered, OLS was probably not appropriate.
End of explanation
"""
import unicodecsv
## Longer version of code (replaced with shorter, equivalent version below)
# enrollments = []
# f = open('enrollments.csv', 'rb')
# reader = unicodecsv.DictReader(f)
# for row in reader:
# enrollments.append(row)
# f.close()
def read_csv(filename):
with open(filename, 'rb') as f:
reader = unicodecsv.DictReader(f)
return list(reader)
#####################################
# 1 #
#####################################
## Read in the data from daily_engagement.csv and project_submissions.csv
## and store the results in the below variables.
## Then look at the first row of each table.
enrollments = read_csv('enrollments.csv')
daily_engagement = read_csv('daily_engagement.csv')
project_submissions = read_csv('project_submissions.csv')
print(enrollments[0])
print(daily_engagement[0])
print(project_submissions[0])
"""
Explanation: Before we get started, a couple of reminders to keep in mind when using iPython notebooks:
Remember that you can see from the left side of a code cell when it was last run if there is a number within the brackets.
When you start a new notebook session, make sure you run all of the cells up to the point where you last left off. Even if the output is still visible from when you ran the cells in your previous session, the kernel starts in a fresh state so you'll need to reload the data, etc. on a new session.
The previous point is useful to keep in mind if your answers do not match what is expected in the lesson's quizzes. Try reloading the data and run all of the processing steps one by one in order to make sure that you are working with the same variables and data that are at each quiz stage.
Load Data from CSVs
End of explanation
"""
from datetime import datetime as dt
# Takes a date as a string, and returns a Python datetime object.
# If there is no date given, returns None
def parse_date(date):
if date == '':
return None
else:
return dt.strptime(date, '%Y-%m-%d')
# Takes a string which is either an empty string or represents an integer,
# and returns an int or None.
def parse_maybe_int(i):
if i == '':
return None
else:
return int(i)
# Clean up the data types in the enrollments table
for enrollment in enrollments:
enrollment['cancel_date'] = parse_date(enrollment['cancel_date'])
enrollment['days_to_cancel'] = parse_maybe_int(enrollment['days_to_cancel'])
enrollment['is_canceled'] = enrollment['is_canceled'] == 'True'
enrollment['is_udacity'] = enrollment['is_udacity'] == 'True'
enrollment['join_date'] = parse_date(enrollment['join_date'])
enrollments[0]
# Clean up the data types in the engagement table
for engagement_record in daily_engagement:
engagement_record['lessons_completed'] = int(float(engagement_record['lessons_completed']))
engagement_record['num_courses_visited'] = int(float(engagement_record['num_courses_visited']))
engagement_record['projects_completed'] = int(float(engagement_record['projects_completed']))
engagement_record['total_minutes_visited'] = float(engagement_record['total_minutes_visited'])
engagement_record['utc_date'] = parse_date(engagement_record['utc_date'])
daily_engagement[0]
# Clean up the data types in the submissions table
for submission in project_submissions:
submission['completion_date'] = parse_date(submission['completion_date'])
submission['creation_date'] = parse_date(submission['creation_date'])
project_submissions[0]
"""
Explanation: Fixing Data Types
End of explanation
"""
for element in daily_engagement:
element['account_key'] = element['acct']
del element['acct']
#####################################
# 2 #
#####################################
## Find the total number of rows and the number of unique students (account keys)
## in each table.
def unique(list_data):
keys = {}
for element in list_data:
if element['account_key'] in keys:
keys[element['account_key']] += 1
else:
keys[element['account_key']] = 1
return keys
enrollment_num_rows = len(enrollments)
enrollment_unique_students = unique(enrollments)
enrollment_num_unique_students = len(enrollment_unique_students)
print(enrollment_num_rows)
print(enrollment_num_unique_students)
engagement_num_rows = len(daily_engagement)
engagement_unique_students = unique(daily_engagement)
engagement_num_unique_students = len(engagement_unique_students)
print(engagement_num_rows)
print(engagement_num_unique_students)
submission_num_rows = len(project_submissions)
submission_unique_students = unique(project_submissions)
submission_num_unique_students = len(submission_unique_students)
print(submission_num_rows)
print(submission_num_unique_students)
"""
Explanation: Note when running the above cells that we are actively changing the contents of our data variables. If you try to run these cells multiple times in the same session, an error will occur.
Investigating the Data
End of explanation
"""
#####################################
# 3 #
#####################################
## Rename the "acct" column in the daily_engagement table to "account_key".
print(daily_engagement[0]['account_key'])
"""
Explanation: Problems in the Data
End of explanation
"""
#####################################
# 4 #
#####################################
## Find any enrollments where the student is missing from the daily
## engagement table, and output those enrollments.
for enrollment in enrollments:
if not enrollment['account_key'] in engagement_unique_students and enrollment['join_date'] != enrollment['cancel_date']:
print(enrollment, "\n")
"""
Explanation: Missing Engagement Records
End of explanation
"""
test_accounts = set()
for enrollment in enrollments:
if enrollment['is_udacity']:
test_accounts.add(enrollment['account_key'])
def remove_test_accounts(data):
non_udacity_accounts = []
for element in data:
if element['account_key'] not in test_accounts:
non_udacity_accounts.append(element)
return non_udacity_accounts
non_udacity_enrollments = remove_test_accounts(enrollments)
non_udacity_engagements = remove_test_accounts(daily_engagement)
non_udacity_submissions = remove_test_accounts(project_submissions)
#####################################
# 5 #
#####################################
## Find the number of surprising data points (enrollments missing from
## the engagement table) that remain, if any.
num_problem_students = 0
for enrollment in non_udacity_enrollments:
    student = enrollment['account_key']
    if (student not in engagement_unique_students and
            enrollment['join_date'] != enrollment['cancel_date']):
        num_problem_students += 1
print(num_problem_students)
"""
Explanation: Checking for More Problem Records
End of explanation
"""
# Create a set of the account keys for all Udacity test accounts
udacity_test_accounts = set()
for enrollment in enrollments:
if enrollment['is_udacity']:
udacity_test_accounts.add(enrollment['account_key'])
len(udacity_test_accounts)
# Given some data with an account_key field, removes any records corresponding to Udacity test accounts
def remove_udacity_accounts(data):
non_udacity_data = []
for data_point in data:
if data_point['account_key'] not in udacity_test_accounts:
non_udacity_data.append(data_point)
return non_udacity_data
# Remove Udacity test accounts from all three tables
non_udacity_enrollments = remove_udacity_accounts(enrollments)
non_udacity_engagement = remove_udacity_accounts(daily_engagement)
non_udacity_submissions = remove_udacity_accounts(project_submissions)
print(len(non_udacity_enrollments))
print(len(non_udacity_engagement))
print(len(non_udacity_submissions))
"""
Explanation: Tracking Down the Remaining Problems
End of explanation
"""
#####################################
# 6 #
#####################################
## Create a dictionary named paid_students containing all students who either
## haven't canceled yet or who remained enrolled for more than 7 days. The keys
## should be account keys, and the values should be the date the student enrolled.
paid_students = {}
for enrollment in non_udacity_enrollments:
if enrollment['days_to_cancel'] is None or enrollment['days_to_cancel'] > 7:
account_key = enrollment['account_key']
enrollment_date = enrollment['join_date']
if account_key not in paid_students or enrollment_date > paid_students[account_key]:
paid_students[account_key] = enrollment['join_date']
print(len(paid_students))
"""
Explanation: Refining the Question
End of explanation
"""
# Takes a student's join date and the date of a specific engagement record,
# and returns True if that engagement record happened within one week
# of the student joining.
def within_one_week(join_date, engagement_date):
time_delta = engagement_date - join_date
return time_delta.days < 7 and time_delta.days >= 0
#####################################
# 7 #
#####################################
## Create a list of rows from the engagement table including only rows where
## the student is one of the paid students you just found, and the date is within
## one week of the student's join date.
paid_engagement_in_first_week = []
for engagement in non_udacity_engagements:
join_date = paid_students.get(engagement['account_key'], None)
engagement_date = engagement['utc_date']
if join_date is not None and within_one_week(join_date, engagement_date):
paid_engagement_in_first_week.append(engagement)
print(len(paid_engagement_in_first_week))
"""
Explanation: Getting Data from First Week
End of explanation
"""
from collections import defaultdict
# Create a dictionary of engagement grouped by student.
# The keys are account keys, and the values are lists of engagement records.
engagement_by_account = defaultdict(list)
for engagement_record in paid_engagement_in_first_week:
account_key = engagement_record['account_key']
engagement_by_account[account_key].append(engagement_record)
# Create a dictionary with the total minutes each student spent in the classroom during the first week.
# The keys are account keys, and the values are numbers (total minutes)
total_minutes_by_account = {}
for account_key, engagement_for_student in engagement_by_account.items():
total_minutes = 0
for engagement_record in engagement_for_student:
total_minutes += engagement_record['total_minutes_visited']
total_minutes_by_account[account_key] = total_minutes
import numpy as np
# Summarize the data about minutes spent in the classroom
total_minutes = list(total_minutes_by_account.values())
print("Sum:", sum(total_minutes))
print('Mean:', np.mean(total_minutes))
print('Standard deviation:', np.std(total_minutes))
print('Minimum:', np.min(total_minutes))
print('Maximum:', np.max(total_minutes))
"""
Explanation: Exploring Student Engagement
End of explanation
"""
#####################################
# 8 #
#####################################
## Go through a similar process as before to see if there is a problem.
## Locate at least one surprising piece of data, output it, and take a look at it.
"""
Explanation: Debugging Data Analysis Code
End of explanation
"""
#####################################
# 9 #
#####################################
## Adapt the code above to find the mean, standard deviation, minimum, and maximum for
## the number of lessons completed by each student during the first week. Try creating
## one or more functions to re-use the code above.
def group_data(data, key):
grouped_data = defaultdict(list)
for engagement_record in data:
account_key = engagement_record[key]
grouped_data[account_key].append(engagement_record)
return grouped_data
def sum_grouped_items(grouped_data, field_name):
group_sum = {}
for key, data_points in grouped_data.items():
total = 0
for data_point in data_points:
if data_point[field_name] > 0:
total += data_point[field_name]
group_sum[key] = total
return group_sum
lessons_by_account = group_data(paid_engagement_in_first_week, 'account_key')
total_lessons_by_account = sum_grouped_items(lessons_by_account, 'lessons_completed')
total_lessons = list(total_lessons_by_account.values())
print("Sum:", sum(total_lessons))
print('Mean:', np.mean(total_lessons))
print('Standard deviation:', np.std(total_lessons))
print('Minimum:', np.min(total_lessons))
print('Maximum:', np.max(total_lessons))
"""
Explanation: Lessons Completed in First Week
End of explanation
"""
######################################
# 10 #
######################################
## Find the mean, standard deviation, minimum, and maximum for the number of
## days each student visits the classroom during the first week.
def sum_days_visited(grouped_data, field_name):
group_sum = {}
for key, data_points in grouped_data.items():
total = 0
for data_point in data_points:
if data_point[field_name] > 0:
total += 1
group_sum[key] = total
return group_sum
total_days_by_account = sum_days_visited(lessons_by_account, 'num_courses_visited')
total_days = list(total_days_by_account.values())
print("Sum:", sum(total_days))
print('Mean:', np.mean(total_days))
print('Standard deviation:', np.std(total_days))
print('Minimum:', np.min(total_days))
print('Maximum:', np.max(total_days))
"""
Explanation: Number of Visits in First Week
End of explanation
"""
######################################
# 11 #
######################################
## Create two lists of engagement data for paid students in the first week.
## The first list should contain data for students who eventually pass the
## subway project, and the second list should contain data for students
## who do not.
subway_project_lesson_keys = ['746169184', '3176718735']
passing_engagement = []
non_passing_engagement = []
# assigned_rating
data_by_key = group_data(paid_engagement_in_first_week, 'account_key')
submissions = set()
for data in non_udacity_submissions:
if data['lesson_key'] in subway_project_lesson_keys and (data['assigned_rating'] == 'PASSED' or data['assigned_rating'] == 'DISTINCTION'):
submissions.add(data['account_key'])
for key, data_points in data_by_key.items():
for data_point in data_points:
if data_point['account_key'] in submissions:
passing_engagement.append(data_point)
else:
non_passing_engagement.append(data_point)
print(len(passing_engagement))
print(len(non_passing_engagement))
"""
Explanation: Splitting out Passing Students
End of explanation
"""
######################################
# 12 #
######################################
## Compute some metrics you're interested in and see how they differ for
## students who pass the subway project vs. students who don't. A good
## starting point would be the metrics we looked at earlier (minutes spent
## in the classroom, lessons completed, and days visited).
# 'total_minutes_visited'
# projects_completed
def mean_metrics(data_points, key):
    minutes = 0
keys = set()
for data_point in data_points:
minutes += data_point[key]
keys.add(data_point['account_key'])
return minutes/len(keys)
passing_lessons_by_key = group_data(passing_engagement, 'account_key')
passing_lessons_by_account = sum_grouped_items(passing_lessons_by_key, 'lessons_completed')
passing_lessons = list(passing_lessons_by_account.values())
non_passing_lessons_by_key = group_data(non_passing_engagement, 'account_key')
non_passing_lessons_by_account = sum_grouped_items(non_passing_lessons_by_key, 'lessons_completed')
non_passing_lessons = list(non_passing_lessons_by_account.values())
# passing_minutes = mean_metrics(passing_engagement, 'total_minutes_visited')
# non_passing_minutes = mean_metrics(non_passing_engagement, 'total_minutes_visited')
print(np.mean(passing_lessons))
print(np.mean(non_passing_lessons))
passing_minutes_by_key = group_data(passing_engagement, 'account_key')
passing_minutes_by_account = sum_grouped_items(passing_minutes_by_key, 'total_minutes_visited')
passing_minutes = list(passing_minutes_by_account.values())
non_passing_minutes_by_key = group_data(non_passing_engagement, 'account_key')
non_passing_minutes_by_account = sum_grouped_items(non_passing_minutes_by_key, 'total_minutes_visited')
non_passing_minutes = list(non_passing_minutes_by_account.values())
print(np.mean(passing_minutes))
print(np.mean(non_passing_minutes))
passing_day_by_key = group_data(passing_engagement, 'account_key')
passing_day_by_account = sum_days_visited(passing_day_by_key, 'total_minutes_visited')
passing_day = list(passing_day_by_account.values())
non_passing_day_by_key = group_data(non_passing_engagement, 'account_key')
non_passing_day_by_account = sum_days_visited(non_passing_day_by_key, 'total_minutes_visited')
non_passing_day = list(non_passing_day_by_account.values())
print(np.mean(passing_day))
print(np.mean(non_passing_day))
"""
Explanation: Comparing the Two Student Groups
End of explanation
"""
######################################
# 13 #
######################################
## Make histograms of the three metrics we looked at earlier for both
## students who passed the subway project and students who didn't. You
## might also want to make histograms of any other metrics you examined.
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
# fig, ((ax1, ax2), (ax3, ax4), (ax5, ax6)) = plt.subplots(3, 2)
plt.hist(passing_lessons, bins = 8)
# ax2.plot(non_passing_lessons, bins = 20)
# ax3.plot(passing_minutes, bins = 20)
# ax4.plot(non_passing_minutes, bins = 20)
# ax5.plot(passing_day, bins = 20)
# ax6.plot(non_passing_day, bins = 20)
plt.hist(non_passing_lessons, bins = 8)
"""
Explanation: Making Histograms
End of explanation
"""
######################################
# 14 #
######################################
## Make a more polished version of at least one of your visualizations
## from earlier. Try importing the seaborn library to make the visualization
## look better, adding axis labels and a title, and changing one or more
## arguments to the hist() function.
"""
Explanation: Improving Plots and Sharing Findings
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | quests/sparktobq/02_gcs.ipynb | apache-2.0 | # Catch up cell. Run if you did not do previous notebooks of this sequence
!wget http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz
"""
Explanation: Migrating from Spark to BigQuery via Dataproc -- Part 2
Part 1: The original Spark code, now running on Dataproc (lift-and-shift).
Part 2: Replace HDFS by Google Cloud Storage. This enables job-specific-clusters. (cloud-native)
Part 3: Automate everything, so that we can run in a job-specific cluster. (cloud-optimized)
Part 4: Load CSV into BigQuery, use BigQuery. (modernize)
Part 5: Using Cloud Functions, launch analysis every time there is a new file in the bucket. (serverless)
Catch up: get data
End of explanation
"""
BUCKET='cloud-training-demos-ml' # CHANGE
!gsutil cp kdd* gs://$BUCKET/
!gsutil ls gs://$BUCKET/kdd*
"""
Explanation: Copy data to GCS
Instead of having the data in HDFS, keep the data in GCS. This will allow us to delete the cluster once we are done ("job-specific clusters")
End of explanation
"""
from pyspark.sql import SparkSession, SQLContext, Row
spark = SparkSession.builder.appName("kdd").getOrCreate()
sc = spark.sparkContext
data_file = "gs://{}/kddcup.data_10_percent.gz".format(BUCKET)
raw_rdd = sc.textFile(data_file).cache()
raw_rdd.take(5)
csv_rdd = raw_rdd.map(lambda row: row.split(","))
parsed_rdd = csv_rdd.map(lambda r: Row(
duration=int(r[0]),
protocol_type=r[1],
service=r[2],
flag=r[3],
src_bytes=int(r[4]),
dst_bytes=int(r[5]),
wrong_fragment=int(r[7]),
urgent=int(r[8]),
hot=int(r[9]),
num_failed_logins=int(r[10]),
num_compromised=int(r[12]),
su_attempted=r[14],
num_root=int(r[15]),
num_file_creations=int(r[16]),
label=r[-1]
)
)
parsed_rdd.take(5)
"""
Explanation: Reading in data
Change any hdfs:// URLs to gs:// URLs. The code remains the same.
End of explanation
"""
sqlContext = SQLContext(sc)
df = sqlContext.createDataFrame(parsed_rdd)
connections_by_protocol = df.groupBy('protocol_type').count().orderBy('count', ascending=False)
connections_by_protocol.show()
df.registerTempTable("connections")
attack_stats = sqlContext.sql("""
SELECT
protocol_type,
CASE label
WHEN 'normal.' THEN 'no attack'
ELSE 'attack'
END AS state,
COUNT(*) as total_freq,
ROUND(AVG(src_bytes), 2) as mean_src_bytes,
ROUND(AVG(dst_bytes), 2) as mean_dst_bytes,
ROUND(AVG(duration), 2) as mean_duration,
SUM(num_failed_logins) as total_failed_logins,
SUM(num_compromised) as total_compromised,
SUM(num_file_creations) as total_file_creations,
SUM(su_attempted) as total_root_attempts,
SUM(num_root) as total_root_accesses
FROM connections
GROUP BY protocol_type, state
ORDER BY 3 DESC
""")
attack_stats.show()
%matplotlib inline
ax = attack_stats.toPandas().plot.bar(x='protocol_type', subplots=True, figsize=(10,25))
"""
Explanation: Spark analysis
No changes are needed here.
End of explanation
"""
ax[0].get_figure().savefig('report.png');
!gsutil rm -rf gs://$BUCKET/sparktobq/
!gsutil cp report.png gs://$BUCKET/sparktobq/
connections_by_protocol.write.format("csv").mode("overwrite").save("gs://{}/sparktobq/connections_by_protocol".format(BUCKET))
!gsutil ls gs://$BUCKET/sparktobq/**
"""
Explanation: Write out report
Make sure to copy the output to GCS so that we can safely delete the cluster.
End of explanation
"""
|
dougkelly/SmartMeterResearch | SmartMeterResearch_Phase2.ipynb | apache-2.0 | s3 = boto3.client('s3')
s3.list_buckets()
def create_s3_bucket(bucketname):
"""Quick method to create bucket with exception handling"""
s3 = boto3.resource('s3')
exists = True
bucket = s3.Bucket(bucketname)
try:
s3.meta.client.head_bucket(Bucket=bucketname)
except botocore.exceptions.ClientError as e:
error_code = int(e.response['Error']['Code'])
if error_code == 404:
exists = False
if exists:
        print('Bucket {} already exists'.format(bucketname))
else:
s3.create_bucket(Bucket=bucketname, GrantFullControl='dkelly628')
create_s3_bucket('pecanstreetresearch-2016')
"""
Explanation: AWS (S3, Redshift, Kinesis) + Databricks Spark = Real-time Smart Meter Analytics
Create S3 Bucket
End of explanation
"""
# Note: Used s3cmd tools because awscli tools not working in conda env
# 14m rows or ~ 1.2 GB local unzipped; 10min write to CSV and another 10min to upload to S3
# !s3cmd put ~/Users/Doug/PecanStreet/electricity-03-06-2016.csv s3://pecanstreetresearch-2016/electricity-03-06-2016.csv
# 200k rows ~ 15 MB local unzipped; 30 sec write to CSV and 15 sec upload to S3
# !s3cmd put ~/Users/Doug/PecanStreet/weather-03-06-2016.csv s3://pecanstreetresearch-2016/weather-03-06-2016.csv
"""
Explanation: Copy Postgres to S3 via Postgres dump to CSV and s3cmd upload
End of explanation
"""
# Quick geohashing before uploading to Redshift
weather_df = pd.read_csv('/Users/Doug/PecanStreet/weather_03-06-2016.csv')
weather_df.groupby(['latitude', 'longitude', 'city']).count()
# Assign city labels by station latitude (Austin and Boulder coordinates
# come from the groupby above; add the remaining stations the same way)
weather_df['city'] = ''
weather_df.loc[weather_df.latitude == 30.292432, 'city'] = 'Austin'
weather_df.loc[weather_df.latitude == 40.027278, 'city'] = 'Boulder'
weather_df.city.unique()
weather_df.to_csv('/Users/Doug/PecanStreet/weather-03-07-2016.csv', index=False)
metadata_df = pd.read_csv('/Users/Doug/PecanStreet/dataport-metadata.csv')
metadata_df = metadata_df[['dataid','city', 'state']]
metadata_df.to_csv('/Users/Doug/PecanStreet/metadata.csv', index=False)
# !s3cmd put metadata.csv s3://pecanstreetresearch-2016/metadata/metadata.csv
redshift = boto3.client('redshift')
# redshift.describe_clusters()
# psql -h pecanstreet.czxmxphrw2wv.us-east-1.redshift.amazonaws.com -U dkelly628 -d electricity -p 5439
"""
Explanation: Amazon Redshift: Columnar Data Warehouse
Quick data cleanup before ETL
End of explanation
"""
# Complete
COPY electricity
FROM 's3://pecanstreetresearch-2016/electricity/electricity-03-06-2016.csv'
CREDENTIALS 'aws_access_key_id=AWS_ACCESS_KEY_ID;aws_secret_access_key=AWS_SECRET_ACCESS_KEY'
CSV
IGNOREHEADER 1
dateformat 'auto';
# Complete
COPY weather
FROM 's3://pecanstreetresearch-2016/weather/weather-03-06-2016.csv'
CREDENTIALS 'aws_access_key_id=AWS_ACCESS_KEY_ID;aws_secret_access_key=AWS_SECRET_ACCESS_KEY'
CSV
IGNOREHEADER 1
dateformat 'auto';
# Complete
COPY metadata
FROM 's3://pecanstreetresearch-2016/metadata/metadata.csv'
CREDENTIALS 'aws_access_key_id=AWS_ACCESS_KEY_ID;aws_secret_access_key=AWS_SECRET_ACCESS_KEY'
CSV
IGNOREHEADER 1;
# Query for checking error log; invaluable
select query, substring(filename,22,25) as filename,line_number as line,
substring(colname,0,12) as column, type, position as pos, substring(raw_line,0,30) as line_text,
substring(raw_field_value,0,15) as field_text,
substring(err_reason,0,45) as reason
from stl_load_errors
order by query desc
limit 10;
# All table definitions are stored in pg_table_def table; different from Postgres
SELECT DISTINCT tablename
FROM pg_table_def
WHERE schemaname = 'public'
ORDER BY tablename;
# Returns household, time, city, usage by hour, and temperature for all residents in Austin, TX
SELECT e.dataid, e.localhour, m.city, SUM(e.use), w.temperature
FROM electricity AS e
JOIN weather AS w ON e.localhour = w.localhour
JOIN metadata AS m ON e.dataid = m.dataid
WHERE m.city = 'Austin'
GROUP BY e.dataid, e.localhour, m.city, w.temperature;
# Returns number of participants by city, state
SELECT m.city, m.state, COUNT(e.dataid) AS participants
FROM electricity AS e
JOIN metadata AS m ON e.dataid = m.dataid
GROUP BY m.city, m.state;
# Setup connection to Pecan Street Dataport
try:
conn = psycopg2.connect("dbname='electricity' user='dkelly628' host='pecanstreet.czxmxphrw2wv.us-east-1.redshift.amazonaws.com' port='5439' password='password'")
except:
# print "Error: Check there aren't any open connections in notebook or pgAdmin"
electricity_df = pd.read_sql("SELECT localhour, SUM(use) AS usage, SUM(air1) AS cooling, SUM(furnace1) AS heating, \
SUM(car1) AS electric_vehicle \
FROM electricity \
WHERE dataid = 7982 AND use > 0 \
AND localhour BETWEEN '2013-10-16 00:00:00'::timestamp AND \
'2016-02-26 08:00:00'::timestamp \
GROUP BY dataid, localhour \
ORDER BY localhour", conn)
electricity_df['localhour'] = electricity_df.localhour.apply(pd.to_datetime)
electricity_df.set_index('localhour', inplace=True)
electricity_df.fillna(value=0.0, inplace=True)
electricity_df[['usage','cooling']].plot(figsize=(18,9), title="Pecan Street Household 7982 Hourly Energy Consumption")
sns.despine();
"""
Explanation: create table electricity (
dataid integer not null,
localhour timestamp not null distkey sortkey,
use decimal(30,26),
air1 decimal(30,26),
furnace1 decimal(30,26),
car1 decimal(30,26)
);
create table weather (
localhour timestamp not null distkey sortkey,
latitude decimal(30,26),
longitude decimal(30,26),
temperature decimal(30,26),
city varchar(20)
);
create table metadata (
dataid integer distkey sortkey,
city varchar(20),
state varchar(20)
);
End of explanation
"""
kinesis = boto3.client('kinesis')
kinesis.create_stream(StreamName='PecanStreet', ShardCount=2)
kinesis.list_streams()
firehose = boto3.client('firehose')
# firehose.create_delivery_stream(DeliveryStreamName='pecanstreetfirehose', S3DestinationConfiguration={'RoleARN': '', 'BucketARN': 'pecanstreetresearch-2016'})
firehose.list_delivery_streams()
def kinesis_write(stream, data, partition_key):
    """Sketch: write a single record to a Kinesis stream"""
    kinesis = boto3.client('kinesis')
    kinesis.put_record(StreamName=stream, Data=data, PartitionKey=partition_key)

def kinesis_read(stream, shard_id='shardId-000000000000'):
    """Sketch: read a batch of records from the start of one shard"""
    kinesis = boto3.client('kinesis')
    shard_iterator = kinesis.get_shard_iterator(
        StreamName=stream, ShardId=shard_id,
        ShardIteratorType='TRIM_HORIZON')['ShardIterator']
    return kinesis.get_records(ShardIterator=shard_iterator)['Records']
"""
Explanation: Databricks Spark Analysis (see Databricks): Batch analytics on S3, Streaming using Amazon Kinesis Stream
Create Amazon Kinesis Stream for writing streaming data to S3
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session07/Day0/TooBriefVisualization.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Introduction to Visualization:
Density Estimation and Data Exploration
Version 0.1
There are many flavors of data analysis that fall under the "visualization" umbrella in astronomy. Today, by way of example, we will focus on 2 basic problems.
By AA Miller
16 September 2017
End of explanation
"""
from sklearn.datasets import load_linnerud
linnerud = load_linnerud()
chinups = linnerud.data[:,0]
"""
Explanation: Problem 1) Density Estimation
Starting with 2MASS and SDSS and extending through LSST, we are firmly in an era where data and large statistical samples are cheap. With this explosion in data volume comes a problem: we do not know the underlying probability density function (PDF) of the random variables measured via our observations. Hence - density estimation: an attempt to recover the unknown PDF from observations. In some cases theory can guide us to a parametric form for the PDF, but more often than not such guidance is not available.
There is a common, simple, and very familiar tool for density estimation: histograms.
But there is also a problem:
HISTOGRAMS LIE!
We will "prove" this to be the case in a series of examples. For this exercise, we will load the famous Linnerud data set, which tested 20 middle aged men by measuring the number of chinups, situps, and jumps they could do in order to compare these numbers to their weight, pulse, and waist size. To load the data (just chinups for now) we will run the following:
from sklearn.datasets import load_linnerud
linnerud = load_linnerud()
chinups = linnerud.data[:,0]
End of explanation
"""
plt.hist( # complete
"""
Explanation: Problem 1a
Plot the histogram for the number of chinups using the default settings in pyplot.
End of explanation
"""
plt.hist( # complete
# complete
"""
Explanation: Already with this simple plot we see a problem - the choice of bin centers and number of bins suggest that there is a 0% probability that middle aged men can do 10 chinups. Intuitively this seems incorrect, so let's examine how the histogram changes if we change the number of bins or the bin centers.
Problem 1b
Using the same data make 2 new histograms: (i) one with 5 bins (bins = 5), and (ii) one with the bars centered on the left bin edges (align = "left").
Hint - if overplotting the results, you may find it helpful to use the histtype = "step" option
End of explanation
"""
# complete
plt.hist(# complete
"""
Explanation: These small changes significantly change the output PDF. With fewer bins we get something closer to a continuous distribution, while shifting the bin centers reduces the probability to zero at 9 chinups.
What if we instead allow the bin width to vary and require the same number of points in each bin? You can determine the bin edges for bins with 5 sources using the following command:
bins = np.append(np.sort(chinups)[::5], np.max(chinups))
Problem 1c
Plot a histogram with variable width bins, each with the same number of points.
Hint - setting normed = True will normalize the bin heights so that the PDF integrates to 1.
End of explanation
"""
plt.hist(chinups, histtype = 'step')
# this is the code for the rug plot
plt.plot(chinups, np.zeros_like(chinups), '|', color='k', ms = 25, mew = 4)
"""
Explanation: Ending the lie
Earlier I stated that histograms lie. One simple way to combat this lie: show all the data. Displaying the original data points allows viewers to somewhat intuit the effects of the particular bin choices that have been made (though this can also be cumbersome for very large data sets, which these days is essentially all data sets). The standard for showing individual observations relative to a histogram is a "rug plot," which shows a vertical tick (or other symbol) at the location of each source used to estimate the PDF.
Problem 1d Execute the cell below to see an example of a rug plot.
End of explanation
"""
# execute this cell
from sklearn.neighbors import KernelDensity
def kde_sklearn(data, grid, bandwidth = 1.0, **kwargs):
kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs)
kde_skl.fit(data[:, np.newaxis])
log_pdf = kde_skl.score_samples(grid[:, np.newaxis]) # sklearn returns log(density)
return np.exp(log_pdf)
"""
Explanation: Of course, even rug plots are not a perfect solution. Many of the chinup measurements are repeated, and those instances cannot be easily isolated above. One (slightly) better solution is to vary the transparency of the rug "whiskers" using alpha = 0.3 in the whiskers plot call. But this too is far from perfect.
To recap, histograms are not ideal for density estimation for the following reasons:
They introduce discontinuities that are not present in the data
They are strongly sensitive to user choices ($N_\mathrm{bins}$, bin centering, bin grouping), without any mathematical guidance to what these choices should be
They are difficult to visualize in higher dimensions
Histograms are useful for generating a quick representation of univariate data, but for the reasons listed above they should never be used for analysis. Most especially, functions should not be fit to histograms given how greatly the number of bins and bin centering affects the output histogram.
Okay - so if we are going to rail on histograms this much, there must be a better option. There is: Kernel Density Estimation (KDE), a nonparametric form of density estimation whereby a normalized kernel function is convolved with the discrete data to obtain a continuous estimate of the underlying PDF. As a rule, the kernel must integrate to 1 over the interval $-\infty$ to $\infty$ and be symmetric. There are many possible kernels (gaussian is highly popular, though Epanechnikov, an inverted parabola, produces the minimal mean square error).
KDE is not completely free of the problems we illustrated for histograms above (in particular, both a kernel and the width of the kernel need to be selected), but it does manage to correct a number of the ills. We will now demonstrate this via a few examples using the scikit-learn implementation of KDE: KernelDensity, which is part of the sklearn.neighbors module.
Note There are many implementations of KDE in Python, and Jake VanderPlas has put together an excellent description of the strengths and weaknesses of each. We will use the scikit-learn version as it is in many cases the fastest implementation.
To demonstrate the basic idea behind KDE, we will begin by representing each point in the dataset as a block (i.e. we will adopt the tophat kernel). Borrowing some code from Jake, we can estimate the KDE using the following code:
from sklearn.neighbors import KernelDensity
def kde_sklearn(data, grid, bandwidth = 1.0, **kwargs):
kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs)
kde_skl.fit(data[:, np.newaxis])
log_pdf = kde_skl.score_samples(grid[:, np.newaxis]) # sklearn returns log(density)
return np.exp(log_pdf)
The two main options to set are the bandwidth and the kernel.
End of explanation
"""
# one possible completion (hedged) - assumes the chinup counts live in an array `data`
grid = np.arange(0 + 1e-4, 20, 0.01)
PDFtophat = kde_sklearn(data, grid, bandwidth=0.1, kernel='tophat')
plt.plot(grid, PDFtophat)
"""
Explanation: Problem 1e
Plot the KDE of the PDF for the number of chinups middle aged men can do using a bandwidth of 0.1 and a tophat kernel.
Hint - as a general rule, the grid spacing should be smaller than the bandwidth when plotting the PDF.
End of explanation
"""
# one possible completion (hedged) - reuses `data` and `grid` from above
PDFtophat1 = kde_sklearn(data, grid, bandwidth=1, kernel='tophat')
plt.plot(grid, PDFtophat1, label='bw = 1')
PDFtophat5 = kde_sklearn(data, grid, bandwidth=5, kernel='tophat')
plt.plot(grid, PDFtophat5, label='bw = 5')
plt.legend()
"""
Explanation: In this representation, each "block" has a height of 0.25. The bandwidth is too narrow to provide any overlap between the blocks. This choice of kernel and bandwidth produces an estimate that is essentially a histogram with a large number of bins. It gives no sense of continuity for the distribution. Now, we examine the difference (relative to histograms) upon changing the width (i.e. the bandwidth) of the blocks.
Problem 1f
Plot the KDE of the PDF for the number of chinups middle aged men can do using bandwidths of 1 and 5 and a tophat kernel. How do the results differ from the histogram plots above?
End of explanation
"""
# hedged completion - the bandwidths here are illustrative guesses
PDFgaussian = kde_sklearn(data, grid, bandwidth=1, kernel='gaussian')
PDFepanechnikov = kde_sklearn(data, grid, bandwidth=2, kernel='epanechnikov')
plt.plot(grid, PDFgaussian); plt.plot(grid, PDFepanechnikov); plt.legend(['gaussian', 'epanechnikov'])
"""
Explanation: It turns out blocks are not an ideal representation for continuous data (see discussion on histograms above). Now we will explore the resulting PDF from other kernels.
Problem 1g Plot the KDE of the PDF for the number of chinups middle aged men can do using a gaussian and Epanechnikov kernel. How do the results differ from the histogram plots above?
Hint - you will need to select the bandwidth. The examples above should provide insight into the useful range for bandwidth selection. You may need to adjust the values to get an answer you "like."
End of explanation
"""
x = np.arange(0, 6*np.pi, 0.1)
y = np.cos(x)
plt.plot(x,y, lw = 2)
plt.xlabel('X')
plt.ylabel('Y')
plt.xlim(0, 6*np.pi)
"""
Explanation: So, what is the optimal choice of bandwidth and kernel? Unfortunately, there is no hard and fast rule, as every problem will likely have a different optimization. Typically, the choice of bandwidth is far more important than the choice of kernel. In the case where the PDF is likely to be gaussian (or close to gaussian), then Silverman's rule of thumb can be used:
$$h = 1.059 \sigma n^{-1/5}$$
where $h$ is the bandwidth, $\sigma$ is the standard deviation of the samples, and $n$ is the total number of samples. Note - in situations with bimodal or more complicated distributions, this rule of thumb can lead to woefully inaccurate PDF estimates. The most general way to estimate the choice of bandwidth is via cross validation (we will cover cross-validation later today).
What about multidimensional PDFs? It is possible using many of the Python implementations of KDE to estimate multidimensional PDFs, though it is very very important to beware the curse of dimensionality in these circumstances.
Problem 2) Data Exploration
Now a more open ended topic: data exploration. In brief, data exploration encompases a large suite of tools (including those discussed above) to examine data that live in large dimensional spaces. There is no single best method or optimal direction for data exploration. Instead, today we will introduce some of the tools available via python.
As an example we will start with a basic line plot - and examine tools beyond matplotlib.
End of explanation
"""
import seaborn as sns
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x,y, lw = 2)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_xlim(0, 6*np.pi)
"""
Explanation: Seaborn
Seaborn is a plotting package that enables many useful features for exploration. In fact, a lot of the functionality that we developed above can readily be handled with seaborn.
To begin, we will make the same plot that we created in matplotlib.
End of explanation
"""
sns.set_style("whitegrid")  # hedged completion - try each of the 5 presets
plt.plot(x, y, lw=2)
"""
Explanation: These plots look identical, but it is possible to change the style with seaborn.
seaborn has 5 style presets: darkgrid, whitegrid, dark, white, and ticks. You can change the preset using the following:
sns.set_style("whitegrid")
which will change the output for all subsequent plots. Note - if you want to change the style for only a single plot, that can be accomplished with the following:
with sns.axes_style("dark"):
with all plotting commands inside the with statement.
Problem 3a
Re-plot the sine curve using each seaborn preset to see which you like best - then adopt this for the remainder of the notebook.
End of explanation
"""
# default color palette
current_palette = sns.color_palette()
sns.palplot(current_palette)
"""
Explanation: The folks behind seaborn have thought a lot about color palettes, which is a good thing. Remember - the choice of color for plots is one of the most essential aspects of visualization. A poor choice of colors can easily mask interesting patterns or suggest structure that is not real. To learn more about what is available, see the seaborn color tutorial.
Here we load the default:
End of explanation
"""
# set palette to colorblind
sns.set_palette("colorblind")
current_palette = sns.color_palette()
sns.palplot(current_palette)
"""
Explanation: which we will now change to colorblind, which is clearer to those that are colorblind.
End of explanation
"""
iris = sns.load_dataset("iris")
iris
"""
Explanation: Now that we have covered the basics of seaborn (and the above examples truly only scratch the surface of what is possible), we will explore the power of seaborn for higher dimension data sets. We will load the famous Iris data set, which measures 4 different features of 3 different types of Iris flowers. There are 150 different flowers in the data set.
Note - for those familiar with pandas seaborn is designed to integrate easily and directly with pandas DataFrame objects. In the example below the Iris data are loaded into a DataFrame. iPython notebooks also display the DataFrame data in a nice readable format.
End of explanation
"""
# note - hist, kde, and rug all set to True, set to False to turn them off
with sns.axes_style("dark"):
sns.distplot(iris['petal_length'], bins=20, hist=True, kde=True, rug=True)
"""
Explanation: Now that we have a sense of the data structure, it is useful to examine the distribution of features. Above, we went to great pains to produce histograms, KDEs, and rug plots. seaborn handles all of that effortlessly with the distplot function.
Problem 3b
Plot the distribution of petal lengths for the Iris data set.
End of explanation
"""
plt.scatter(iris['petal_length'], iris['petal_width'])  # hedged completion
"""
Explanation: Of course, this data set lives in a 4D space, so plotting more than univariate distributions is important (and as we will see tomorrow this is particularly useful for visualizing classification results). Fortunately, seaborn makes it very easy to produce handy summary plots.
At this point, we are familiar with basic scatter plots in matplotlib.
Problem 3c
Make a matplotlib scatter plot showing the Iris petal length against the Iris petal width.
End of explanation
"""
with sns.axes_style("darkgrid"):
xexample = np.random.normal(loc = 0.2, scale = 1.1, size = 10000)
yexample = np.random.normal(loc = -0.1, scale = 0.9, size = 10000)
plt.scatter(xexample, yexample)
"""
Explanation: Of course, when there are many many data points, scatter plots become difficult to interpret. As in the example below:
End of explanation
"""
# hexbin w/ bins = "log" returns the log of counts/bin
# mincnt = 1 displays only hexpix with at least 1 source present
with sns.axes_style("darkgrid"):
plt.hexbin(xexample, yexample, bins = "log", cmap = "viridis", mincnt = 1)
plt.colorbar()
"""
Explanation: Here, we see that there are many points, clustered about the origin, but we have no sense of the underlying density of the distribution. 2D histograms, such as plt.hist2d(), can alleviate this problem. I prefer to use plt.hexbin() which is a little easier on the eyes (though note - these histograms are just as subject to the same issues discussed above).
End of explanation
"""
with sns.axes_style("darkgrid"):
sns.kdeplot(xexample, yexample,shade=False)
"""
Explanation: While the above plot provides a significant improvement over the scatter plot by providing a better sense of the density near the center of the distribution, the bin-edge effects are clearly present. An even better solution, like before, is a density estimate, which is easily built into seaborn via the kdeplot function.
End of explanation
"""
sns.jointplot(x=iris['petal_length'], y=iris['petal_width'])
"""
Explanation: This plot is much more appealing (and informative) than the previous two. For the first time we can clearly see that the distribution is not actually centered on the origin. Now we will move back to the Iris data set.
Suppose we want to see univariate distributions in addition to the scatter plot? This is certainly possible with matplotlib and you can find examples on the web, however, with seaborn this is really easy.
End of explanation
"""
sns.jointplot(x=iris['petal_length'], y=iris['petal_width'], kind='kde')  # hedged completion
"""
Explanation: But! Histograms and scatter plots can be problematic as we have discussed many times before.
Problem 3d
Re-create the plot above but set kind='kde' to produce density estimates of the distributions.
End of explanation
"""
sns.pairplot(iris[["sepal_length", "sepal_width", "petal_length", "petal_width"]])
"""
Explanation: That is much nicer than what was presented above. However - we still have a problem in that our data live in 4D, but we are (mostly) limited to 2D projections of that data. One way around this is via the seaborn version of a pairplot, which plots the distribution of every variable in the data set against each other. (Here is where the integration with pandas DataFrames becomes so powerful.)
End of explanation
"""
sns.pairplot(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
hue = "species", diag_kind = 'kde')
"""
Explanation: For data sets where we have classification labels, we can even color the various points using the hue option, and produce KDEs along the diagonal with diag_type = 'kde'.
End of explanation
"""
g = sns.PairGrid(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
hue = "species", diag_sharey=False)
g.map_lower(sns.kdeplot)
g.map_upper(plt.scatter, edgecolor='white')
g.map_diag(sns.kdeplot, lw=3)
"""
Explanation: Even better - there is an option to create a PairGrid which allows fine tuned control of the data as displayed above, below, and along the diagonal. In this way it becomes possible to avoid having symmetric redundancy, which is not all that informative. In the example below, we will show scatter plots and contour plots simultaneously.
End of explanation
"""
antoniomezzacapo/qiskit-tutorial | community/terra/qis_adv/topological_quantum_walk.ipynb | apache-2.0
#initialization
import sys
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# importing QISKit
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import Aer, IBMQ, execute
from qiskit.wrapper.jupyter import *
from qiskit.backends.ibmq import least_busy
from qiskit.tools.visualization import matplotlib_circuit_drawer as circuit_drawer
from qiskit.tools.visualization import plot_histogram, qx_color_scheme
IBMQ.load_accounts()
sim_backend = Aer.get_backend('qasm_simulator')
device_backend = least_busy(IBMQ.backends(operational=True, simulator=False))
device_coupling = device_backend.configuration()['coupling_map']
print("the best backend is " + device_backend.name() + " with coupling " + str(device_coupling))
"""
Explanation: <img src="../../../images/qiskit-heading.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
Topological Quantum Walks on IBM Q
This notebook is based on the paper of Radhakrishnan Balu, Daniel Castillo, and George Siopsis, "Physical realization of topological quantum walks on IBM-Q and beyond" arXiv:1710.03615 [quant-ph](2017).
Contributors
Keita Takeuchi (Univ. of Tokyo) and Rudy Raymond (IBM Research - Tokyo)
Introduction: challenges in implementing topological walk
In this section, we introduce one model of quantum walk called split-step topological quantum walk.
We define the Hilbert spaces of quantum walker states and coin states as
$\mathcal{H}_{\mathcal{w}}=\{\vert x \rangle, x\in\mathbb{Z}_N\},\ \mathcal{H}_{\mathcal{c}}=\{\vert 0 \rangle, \vert 1 \rangle\}$, respectively. Then, the step operators are defined as
$$
S^+ := \vert 0 \rangle_c \langle 0 \vert \otimes L^+ + \vert 1 \rangle_c \langle 1 \vert \otimes \mathbb{I}\\
S^- := \vert 0 \rangle_c \langle 0 \vert \otimes \mathbb{I} + \vert 1 \rangle_c \langle 1 \vert \otimes L^-,
$$
where
$$
L^{\pm}\vert x \rangle_{\mathcal w} := \vert (x\pm1)\ \rm{mod}\ N \rangle_{\mathcal w}
$$
is a shift operator. The boundary condition is included.
Also, we define the coin operator as
$$
T(\theta):=e^{-i\theta Y} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}.
$$
One step of quantum walk is the unitary operator defined as below that uses two mode of coins, i.e., $\theta_1$ and $\theta_2$:
$$
W := S^- T(\theta_2)S^+ T(\theta_1).
$$
Intuitively speaking, the walk consists of flipping coin states and based on the outcome of the coins, the shifting operator is applied to determine the next position of the walk.
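As a concrete sanity check (our own addition, not part of the original notebook), the operators above are small enough at $N=4$ to build as dense matrices, with the coin as the first tensor factor:

```python
import numpy as np

def walk_step(N, th1, th2):
    """One split-step walk W = S^- T(th2) S^+ T(th1) on N positions (coin first)."""
    L = np.roll(np.eye(N), 1, axis=0)                   # L+ : |x> -> |x+1 mod N>
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])   # coin projectors |0><0|, |1><1|
    T = lambda th: np.array([[np.cos(th), -np.sin(th)],
                             [np.sin(th),  np.cos(th)]])
    Sp = np.kron(P0, L) + np.kron(P1, np.eye(N))        # S+
    Sm = np.kron(P0, np.eye(N)) + np.kron(P1, L.T)      # S-, since L- = (L+)^T
    return Sm @ np.kron(T(th2), np.eye(N)) @ Sp @ np.kron(T(th1), np.eye(N))

W = walk_step(4, 0.0, np.pi / 2)
print(np.allclose(W.T @ W, np.eye(8)))   # -> True
```

Since $T(\theta)$, $S^+$, and $S^-$ are all orthogonal here, their product $W$ must be too, which the final line confirms.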
Next, we consider a walk with two phases that depend on the current position:
$$
(\theta_1,\theta_2) = \begin{cases}
(\theta_{1}^{-},\ \theta_{2}^{-}) & 0 \leq x < \frac{N}{2} \\
(\theta_{1}^{+},\ \theta_{2}^{+}) & \frac{N}{2} \leq x < N.
\end{cases}
$$
Then, two coin operators are rewritten as
$$
\mathcal T_i = \sum^{N-1}_{x=0}e^{-i\theta_i(x) Y_c}\otimes \vert x \rangle_w \langle x \vert,\ i=1,2.
$$
By using this, one step of quantum walk is equal to
$$
W = S^- \mathcal T_2 S^+ \mathcal T_1.
$$
In principle, we can execute the quantum walk by multiplying $W$ many times, but then we need many circuit elements to construct it. This is not possible with the current approximate quantum computers due to large errors produced after each application of circuit elements (gates).
Hamiltonian of topological walk
Alternatively, we can think of the time evolution of the states. The evolution under the Hamiltonian $H$ is obtained as the limit of many walk steps, $e^{-iHt}=\lim_{s \to \infty}W^s$ (see below for further details).
For example, when $(\theta_1,\ \theta_2) = (0,\ \pi/2)$, the Schrödinger equation is
$$
i\frac{d}{dt}\vert \Psi \rangle = H_{\rm I} \vert \Psi \rangle,\ H_{\rm I} = -Y\otimes [2\mathbb I+L^+ + L^-].
$$
If Hamiltonian is time independent, the solution of the Schrödinger equation is
$$
\vert \Psi(t) \rangle = e^{-iHt} \vert \Psi(0) \rangle,
$$
so we can get the final state at arbitrary time $t$ at once without operating W step by step, if we know the corresponding Hamiltonian.
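As an illustration (ours, using only numpy), $H_{\rm I}$ for $N=4$ is an $8\times 8$ Hermitian matrix, so $e^{-iHt}$ can be built from its eigendecomposition and applied to an initial state:

```python
import numpy as np

N = 4
L = np.roll(np.eye(N), 1, axis=0)            # L+; its transpose is L-
Y = np.array([[0, -1j], [1j, 0]])
H = -np.kron(Y, 2 * np.eye(N) + L + L.T)     # H_I = -Y (x) [2I + L+ + L-]

t = 0.5
w, V = np.linalg.eigh(H)                     # H is Hermitian
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T   # U = exp(-i H t)

psi0 = np.zeros(2 * N, dtype=complex)
psi0[0] = 1.0                                # coin |0>, walker at x = 0
psi_t = U @ psi0
print(np.round(np.abs(psi_t) ** 2, 3))       # coin/position probabilities at time t
```

On real hardware we cannot exponentiate matrices directly, which is why the notebook decomposes $e^{-iHt}$ into gates below.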
The Hamiltonian can be computed as below.
Set $(\theta_1,\ \theta_2) = (\epsilon,\ \pi/2+\epsilon)$, take $\epsilon\to 0$ and the number of steps $s\to \infty$
while keeping $s\epsilon=t/2$ finite. Then,
\begin{align}
e^{-iH_{\rm I}t}&=\lim_{s \to \infty}W^s\\
\rm{(LHS)} &= \mathbb{I}-iH_{\rm I}t+O(t^2)\\
\rm{(RHS)} &= \lim_{\substack{s\to \infty\\ \epsilon\to 0}}(W^4)^{s/4}=
\lim_{\substack{s\to \infty\\ \epsilon\to 0}}(\mathbb{I}+O(\epsilon))^{s/4}\\
&\simeq \lim_{\substack{s\to \infty\\ \epsilon\to 0}}\mathbb{I}+\frac{s}{4}O(\epsilon)\\
&= \lim_{\epsilon\to 0}\mathbb{I}+iY\otimes [2\mathbb I+L^+ + L^-]t+O(\epsilon).
\end{align}
Therefore,
$$H_{\rm I} = -Y\otimes [2\mathbb I+L^+ + L^-].$$
Computation model
In order to check the correctness of the implementation of the quantum walk on IBM Q, we investigate two models with different coin-phase features. Let the number of positions on the line be $N=4$.
- $\rm I / \rm II:\ (\theta_1,\theta_2) = \begin{cases}
(0,\ -\pi/2) & 0 \leq x < 2 \\
(0,\ \pi/2) & 2 \leq x < 4
\end{cases}$
- $\rm I:\ (\theta_1,\theta_2)=(0,\ \pi/2),\ 0 \leq x < 4$
That is, the former is a quantum walk on a line with two phases of coins, while the latter is that with only one phase of coins.
<img src="../images/q_walk_lattice_2phase.png" width="30%" height="30%">
<div style="text-align: center;">
Figure 1. Quantum Walk on a line with two phases
</div>
The Hamiltonian operators for each of the walk on the line are, respectively,
$$
H_{\rm I/II} = Y \otimes \mathbb I \otimes \frac{\mathbb I + Z}{2}\\
H_{\rm I} = Y\otimes (2\mathbb I\otimes \mathbb I + \mathbb I\otimes X + X \otimes X).
$$
Then, we want to implement the above Hamiltonian operators with the unitary operators as product of two-qubit gates CNOTs, CZs, and single-qubit gate rotation matrices. Notice that the CNOT and CZ gates are
\begin{align}
\rm{CNOT_{ct}}&=\left |0\right\rangle_c\left\langle0\right | \otimes I_t + \left |1\right\rangle_c\left\langle1\right | \otimes X_t\\
\rm{CZ_{ct}}&=\left |0\right\rangle_c\left\langle0\right | \otimes I_t + \left |1\right\rangle_c\left\langle1\right | \otimes Z_t.
\end{align}
Below is the reference of converting Hamiltonian into unitary operators useful for the topological quantum walk.
<br><br>
<div style="text-align: center;">
Table 1. Relation between the unitary operator and product of elementary gates
</div>
|unitary operator|product of circuit elements|
|:-:|:-:|
|$e^{-i\theta X_c X_j}$|$\rm{CNOT_{cj}}\cdot e^{-i\theta X_c t}\cdot \rm{CNOT_{cj}}$|
|$e^{-i\theta X_c Z_j}$|$\rm{CZ_{cj}}\cdot e^{-i\theta X_c t}\cdot \rm{CZ_{cj}}$|
|$e^{-i\theta Y_c X_j}$|$\rm{CNOT_{cj}}\cdot e^{i\theta Y_c t}\cdot \rm{CNOT_{cj}}$|
|$e^{-i\theta Y_c Z_j}$|$\rm{CNOT_{jc}}\cdot e^{-i\theta Y_c t}\cdot \rm{CNOT_{jc}}$|
|$e^{-i\theta Z_c X_j}$|$\rm{CZ_{cj}}\cdot e^{-i\theta X_j t}\cdot \rm{CZ_{cj}}$|
|$e^{-i\theta Z_c Z_j}$|$\rm{CNOT_{jc}}\cdot e^{-i\theta Z_c t}\cdot \rm{CNOT_{jc}}$|
By using these formulas, the unitary operators are expressed using only CNOT, CZ, and rotation gates, so we can implement them on IBM Q, as below.
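One row of Table 1 can be checked numerically. The sketch below (our own; the coin is taken as the first tensor factor) verifies $e^{-i\theta Y_c Z_j}=\rm{CNOT_{jc}}\cdot e^{-i\theta Y_c}\cdot \rm{CNOT_{jc}}$:

```python
import numpy as np

I2 = np.eye(2)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# CNOT with control j (second factor) and target c (first factor)
CNOT_jc = np.kron(I2, np.diag([1.0, 0.0])) + np.kron(X, np.diag([0.0, 1.0]))

def expi(A, th):
    """exp(-i*th*A) for an involution A (A @ A == I)."""
    return np.cos(th) * np.eye(len(A)) - 1j * np.sin(th) * A

th = 0.37
lhs = expi(np.kron(Y, Z), th)
rhs = CNOT_jc @ expi(np.kron(Y, I2), th) @ CNOT_jc
print(np.allclose(lhs, rhs))   # -> True
```

The same pattern checks any other row of the table by swapping in the corresponding Pauli operators.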
Phase I/II:<br><br>
\begin{align}
e^{-iH_{I/II}t}=~&e^{-itY_c \otimes \mathbb I_0 \otimes \frac{\mathbb I_1 + Z_1}{2}}\\
=~& e^{-iY_c t}e^{-itY_c\otimes Z_1}\\
=~& e^{-iY_c t}\cdot\rm{CNOT_{1c}}\cdot e^{-i Y_c t}\cdot\rm{CNOT_{1c}}
\end{align}
<img src="../images/c12.png" width="50%" height="60%">
<div style="text-align: center;">
Figure 2. Phase I/II on $N=4$ lattice$(t=8)$ - $q[0]:2^0,\ q[1]:coin,\ q[2]:2^1$
</div>
<br><br>
Phase I:<br><br>
\begin{align}
e^{-iH_I t}=~&e^{-itY_c\otimes (2\mathbb I_0\otimes \mathbb I_1 + \mathbb I_0\otimes X_1 + X_0 \otimes X_1)}\\
=~&e^{-2itY_c}e^{-itY_c\otimes X_1}e^{-itY_c\otimes X_0 \otimes X_1}\\
=~&e^{-2iY_c t}\cdot\rm{CNOT_{c1}}\cdot\rm{CNOT_{c0}}\cdot e^{-iY_c t}\cdot\rm{CNOT_{c0}}\cdot e^{-iY_c t}\cdot\rm{CNOT_{c1}}
\end{align}
<img src="../images/c1.png" width="70%" height="70%">
<div style="text-align: center;">
Figure 3. Phase I on $N=4$ lattice$(t=8)$ - $q[0]:2^0,\ q[1]:2^1,\ q[2]:coin$
</div>
Implementation
End of explanation
"""
t=8 #time
q1_2 = QuantumRegister(3)
c1_2 = ClassicalRegister(3)
qw1_2 = QuantumCircuit(q1_2, c1_2)
qw1_2.x(q1_2[2])
qw1_2.u3(t, 0, 0, q1_2[1])
qw1_2.cx(q1_2[2], q1_2[1])
qw1_2.u3(t, 0, 0, q1_2[1])
qw1_2.cx(q1_2[2], q1_2[1])
qw1_2.measure(q1_2[0], c1_2[0])
qw1_2.measure(q1_2[1], c1_2[2])
qw1_2.measure(q1_2[2], c1_2[1])
print(qw1_2.qasm())
circuit_drawer(qw1_2, style=qx_color_scheme())
"""
Explanation: Quantum walk, phase I/II on $N=4$ lattice$(t=8)$
End of explanation
"""
job = execute(qw1_2, sim_backend, shots=1000)
result = job.result()
plot_histogram(result.get_counts())
"""
Explanation: Below is the result when executing the circuit on the simulator.
End of explanation
"""
%%qiskit_job_status
HTMLProgressBar()
job = execute(qw1_2, backend=device_backend, coupling_map=device_coupling, shots=100)
result = job.result()
plot_histogram(result.get_counts())
"""
Explanation: And below is the result when executing the circuit on the real device.
End of explanation
"""
t=8 #time
q1 = QuantumRegister(3)
c1 = ClassicalRegister(3)
qw1 = QuantumCircuit(q1, c1)
qw1.x(q1[1])
qw1.cx(q1[2], q1[1])
qw1.u3(t, 0, 0, q1[2])
qw1.cx(q1[2], q1[0])
qw1.u3(t, 0, 0, q1[2])
qw1.cx(q1[2], q1[0])
qw1.cx(q1[2], q1[1])
qw1.u3(2*t, 0, 0, q1[2])
qw1.measure(q1[0], c1[0])
qw1.measure(q1[1], c1[1])
qw1.measure(q1[2], c1[2])
print(qw1.qasm())
circuit_drawer(qw1, style=qx_color_scheme())
"""
Explanation: Conclusion: The walker stays bound near its initial position, which lies at the boundary between the two phases, when the quantum walk on the line has two phases.
Quantum walk, phase I on $N=4$ lattice$(t=8)$
End of explanation
"""
job = execute(qw1, sim_backend, shots=1000)
result = job.result()
plot_histogram(result.get_counts())
"""
Explanation: Below is the result when executing the circuit on the simulator.
End of explanation
"""
%%qiskit_job_status
HTMLProgressBar()
job = execute(qw1, backend=device_backend, coupling_map=device_coupling, shots=100)
result = job.result()
plot_histogram(result.get_counts())
"""
Explanation: And below is the result when executing the circuit on the real device.
End of explanation
"""
sertansenturk/tomato | demos/score_analysis_demo.ipynb | agpl-3.0
score_data = scoreAnalyzer.analyze(txt_filename, mu2_filename, symbtr_name=symbtr_name)
# pretty print the metadata
pprint(score_data['metadata'])
"""
Explanation: You can use the single line call "analyze," which does all the available analysis simultaneously
End of explanation
"""
from tomato.metadata.symbtr import SymbTr as SymbTrMetadata
from tomato.symbolic.symbtr.reader.txt import TxtReader
from tomato.symbolic.symbtr.reader.mu2 import Mu2Reader
from tomato.symbolic.symbtr.dataextractor import DataExtractor
from tomato.symbolic.symbtr.section import SectionExtractor
from tomato.symbolic.symbtr.segment import SegmentExtractor
from tomato.symbolic.symbtr.rhythmicfeature import RhythmicFeatureExtractor
# relevant recording or work mbid, if you want additional information from musicbrainz
# Note 1: MBID input will make the function return significantly slower because we
# have to wait a couple of seconds before each subsequent query to MusicBrainz.
# Note 2: it is very rare, but more than one mbid can be returned. We are going to use
# the first work to fetch the metadata
mbid = SymbTrMetadata.get_mbids_from_symbtr_name(symbtr_name)[0]
# read the txt score
txt_score, is_score_content_valid = TxtReader.read(
txt_filename, symbtr_name=symbtr_name)
# read metadata from musicbrainz
mb_metadata, is_mb_metadata_valid = SymbTrMetadata.from_musicbrainz(
    symbtr_name, mbid=mbid)  # use the mbid fetched above
# add duration & number of notes
mb_metadata['duration'] = {
'value': sum(score_data['score']['duration']) * 0.001, 'unit': 'second'}
mb_metadata['number_of_notes'] = len(txt_score['duration'])
# read metadata from the mu2 header
mu2_header, header_row, is_mu2_header_valid = Mu2Reader.read_header(
mu2_filename, symbtr_name=symbtr_name)
# merge metadata
score_metadata = DataExtractor.merge(mb_metadata, mu2_header)
# sections
section_extractor = SectionExtractor()
sections, is_section_data_valid = section_extractor.from_txt_score(
txt_score, symbtr_name)
# annotated phrases
segment_extractor = SegmentExtractor()
phrase_annotations = segment_extractor.extract_phrases(
txt_score, sections=sections)
# Automatic phrase segmentation on the SymbTr-txt score using pre-trained model
segment_bounds = scoreAnalyzer.segment_phrase(txt_filename, symbtr_name=symbtr_name)
segments = segment_extractor.extract_segments(
txt_score,
segment_bounds['boundary_note_idx'],
sections=sections)
# rhythmic structure
rhythmic_structure = RhythmicFeatureExtractor.extract_rhythmic_structure(
txt_score)
"""
Explanation: ... or you can call all the methods individually
End of explanation
"""
rastala/mmlspark | notebooks/samples/102 - Regression Example with Flight Delay Dataset.ipynb | mit
import numpy as np
import pandas as pd
import mmlspark
"""
Explanation: 102 - Training Regression Algorithms with the L-BFGS Solver
In this example, we run a linear regression on the Flight Delay dataset to predict the delay times.
We demonstrate how to use the TrainRegressor and the ComputePerInstanceStatistics APIs.
First, import the packages.
End of explanation
"""
# load raw data from small-sized 30 MB CSV file (trimmed to contain just what we use)
dataFile = "On_Time_Performance_2012_9.csv"
import os, urllib.request
if not os.path.isfile(dataFile):
urllib.request.urlretrieve("https://mmlspark.azureedge.net/datasets/"+dataFile, dataFile)
flightDelay = spark.createDataFrame(
pd.read_csv(dataFile, dtype={"Month": np.float64, "Quarter": np.float64,
"DayofMonth": np.float64, "DayOfWeek": np.float64,
"OriginAirportID": np.float64, "DestAirportID": np.float64,
"CRSDepTime": np.float64, "CRSArrTime": np.float64}))
# Print information on the dataset we loaded
print("records read: " + str(flightDelay.count()))
print("Schema:")
flightDelay.printSchema()
flightDelay.limit(10).toPandas()
"""
Explanation: Next, import the CSV dataset.
End of explanation
"""
train,test = flightDelay.randomSplit([0.75, 0.25])
"""
Explanation: Split the dataset into train and test sets.
End of explanation
"""
from mmlspark import TrainRegressor, TrainedRegressorModel
from pyspark.ml.regression import LinearRegression
from pyspark.ml.feature import StringIndexer
# Convert columns to categorical
catCols = ["Carrier", "DepTimeBlk", "ArrTimeBlk"]
trainCat = train
testCat = test
for catCol in catCols:
simodel = StringIndexer(inputCol=catCol, outputCol=catCol + "Tmp").fit(train)
trainCat = simodel.transform(trainCat).drop(catCol).withColumnRenamed(catCol + "Tmp", catCol)
testCat = simodel.transform(testCat).drop(catCol).withColumnRenamed(catCol + "Tmp", catCol)
lr = LinearRegression().setSolver("l-bfgs").setRegParam(0.1).setElasticNetParam(0.3)
model = TrainRegressor(model=lr, labelCol="ArrDelay").fit(trainCat)
model.write().overwrite().save("flightDelayModel.mml")
"""
Explanation: Train a regressor on dataset with l-bfgs.
End of explanation
"""
flightDelayModel = TrainedRegressorModel.load("flightDelayModel.mml")
scoredData = flightDelayModel.transform(testCat)
scoredData.limit(10).toPandas()
"""
Explanation: Score the regressor on the test data.
End of explanation
"""
from mmlspark import ComputeModelStatistics
metrics = ComputeModelStatistics().transform(scoredData)
metrics.toPandas()
"""
Explanation: Compute model metrics against the entire scored dataset
End of explanation
"""
from mmlspark import ComputePerInstanceStatistics
evalPerInstance = ComputePerInstanceStatistics().transform(scoredData)
evalPerInstance.select("ArrDelay", "Scores", "L1_loss", "L2_loss").limit(10).toPandas()
"""
Explanation: Finally, compute and show per-instance statistics, demonstrating the usage
of ComputePerInstanceStatistics.
End of explanation
"""
|
4dsolutions/Python5 | STEM Mathematics.ipynb | mit | from itertools import accumulate, islice
def cubocta():
"""
Classic Generator: Cuboctahedral / Icosahedral #s
https://oeis.org/A005901
"""
yield 1 # nuclear ball
f = 1
while True:
elem = 10 * f * f + 2 # f for frequency
yield elem # <--- pause / resume here
f += 1
def cummulative(n):
"""
https://oeis.org/A005902 (crystal ball sequence)
"""
yield from islice(accumulate(cubocta()),0,n)
print("{:=^30}".format(" Crystal Ball Sequence "))
print("{:^10} {:^10}".format("Layers", "Points"))
for f, out in enumerate(cummulative(30),start=1):
print("{:>10} {:>10}".format(f, out))
"""
Explanation: Oregon Curriculum Network <br />
Discovering Math with Python
Crystal Ball Sequence
The face-centered cubic (FCC) lattice is not always presented in this simplest form, ditto the cubic close packing (CCP), which amounts to the same thing. A nuclear ball is surrounded by a layer of twelve, all touching it, and adjacent neighbors. The shape so formed is not a cube, but a cuboctahedron, with eight triangular faces and six square. This is where I can type stuff.
As the cuboctahedral packing continues to expand outward, layer by layer, the cumulative number of balls or points forms the Crystal Ball Sequence.
cubocta(), a generator, yields the number of balls in each successive layer of the cuboctahedron, according to a simple formula derived by R. Buckminster Fuller, a prolific inventor and philosopher [1]. cummulative( ) delegates to cubocta( ) while accumulating the number in each layer to provide a running total.
End of explanation
"""
from itertools import islice
def pascal():
row = [1]
while True:
yield row
row = [i+j for i,j in zip([0]+row, row+[0])]
print("{0:=^60}".format(" Pascal's Triangle "))
print()
for r in islice(pascal(),0,11):
print("{:^60}".format("".join(map(lambda n: "{:>5}".format(n), r))))
"""
Explanation: Octet Truss
When adjacent CCP ball centers interconnect, what do you get? Why the octet truss of course, a well known space frame, used a lot in architecture. Alexander Graham Bell was fascinated by this construction.[2]
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/23636692173/in/album-72157664250599655/" title="Business Accelerator Building"><img src="https://farm2.staticflickr.com/1584/23636692173_103b411737.jpg" width="500" height="375" alt="Business Accelerator Building"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
[1] Siobahn Roberts. King of Infinite Space. New York: Walker & Company (2006). pp 179-180.
"Coxeter sent back a letter saying that one equation would be 'a remarkable discovery, justifying Bucky's evident pride,' if only it weren't too good to be true. The next day, Coxeter called: 'On further reflection, I see that it is true'. Coxeter told Fuller how impressed he was with his formula -- on the cubic close-packing of balls."
[2] http://worldgame.blogspot.com/2006/02/octet-truss.html (additional info on the octet truss)
Pascal's Triangle
Pascal's Triangle connects to the Binomial Theorem (originally proved by Sir Isaac Newton) and to numerous topics in probability theory. The triangular and tetrahedral number sequences may be discovered lurking in its columns.
pascal(), a generator, yields successive rows of Pascal's Triangle. By prepending and appending a zero element and adding vertically, a next row is obtained. For example, from [1] we get [0, 1] + [1, 0] = [1, 1]. From [1, 1] we get [0, 1, 1] + [1, 1, 0] = [1, 2, 1] and so on.
Notice the triangular numbers (1, 3, 6, 10...) and tetrahedral number sequences (1, 4, 10, 20...) appear in the slanted columns. [3]
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo("9xUBhhM4vbM")
"""
Explanation: Each number in Pascal's Triangle may be understood as the number of unique pathways to that position, were falling balls introduced through the top and allowed to fall left or right to the next row down. This apparatus is sometimes called a Galton Board.
For example, a ball could reach the 6 in the middle of the 5th row going 1,1,2,3,6 in four ways (counting left and right mirrors), or 1,1,1,3,6 in two ways. The likely pattern when many balls fall through this maze will be a bell curve, as shown in the simulation below.
End of explanation
"""
|
castanan/w2v | ml-scripts/Word2Vec with Tweets.ipynb | mit | t0 = time.time()
datapath = '/Users/jorgecastanon/Documents/github/w2v/data/tweets.gz'
tweets = sqlContext.read.json(datapath)
tweets.registerTempTable("tweets")
twr = tweets.count()
print "Number of tweets read: ", twr
# this line add ~7 seconds (from ~24.5 seconds to ~31.5 seconds)
# Number of tweets read: 239082
print "Elapsed time (seconds): ", time.time() - t0
#Elapsed time (seconds): 31.9646401405
"""
Explanation: Read Twitter Data as a Spark DataFrame
End of explanation
"""
filterPath = '/Users/jorgecastanon/Documents/github/w2v/data/filter.txt'
filter = pd.read_csv(filterPath,header=None)
filter.head()
"""
Explanation: Read Keywords: christmas, santa, turkey, ...
End of explanation
"""
# Construct SQL Command
t0 = time.time()
sqlString = "("
for substr in filter[0]: #iteration on the list of words to filter (at most 50-100 words)
sqlString = sqlString+"text LIKE '%"+substr+"%' OR "
sqlString = sqlString+"text LIKE '%"+substr.upper()+"%' OR "
sqlString=sqlString[:-4]+")"
sqlFilterCommand = "SELECT lang, text FROM tweets WHERE (lang = 'en') AND "+sqlString
# Query tweets in english that contain at least one of the keywords
tweetsDF = sqlContext.sql(sqlFilterCommand).cache()
twf = tweetsDF.count()
print "Number of tweets after filtering: ", twf
# the last line adds ~9 seconds (from ~0.72 seconds to ~9.42 seconds)
print "Elapsed time (seconds): ", time.time() - t0
print "Percentage of Tweets Used: ", float(twf)/twr
"""
Explanation: Use Spark SQL to Filter Tweets:
+ In english
+ And containing at least one keyword
End of explanation
"""
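The string construction above can be checked without Spark; this plain-Python sketch builds the same kind of OR clause (slightly simplified — the notebook's loop appends a trailing ' OR ' and trims it afterwards):

```python
def build_filter(words):
    """Build a SQL WHERE fragment matching any keyword, lower- or upper-case."""
    clauses = []
    for w in words:
        clauses.append("text LIKE '%{}%'".format(w))
        clauses.append("text LIKE '%{}%'".format(w.upper()))
    return "(" + " OR ".join(clauses) + ")"

sql_fragment = build_filter(["santa", "turkey"])
# "(text LIKE '%santa%' OR text LIKE '%SANTA%' OR text LIKE '%turkey%' OR text LIKE '%TURKEY%')"
```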
tweetsRDD = tweetsDF.select('text').rdd
def parseAndRemoveStopWords(text):
t = text[0].replace(";"," ").replace(":"," ").replace('"',' ').replace('-',' ')
t = t.replace(',',' ').replace('.',' ')
t = t.lower().split(" ")
stop = stopwords.words('english')
return [i for i in t if i not in stop]
tw = tweetsRDD.map(parseAndRemoveStopWords)
"""
Explanation: Parse Tweets and Remove Stop Words
End of explanation
"""
# map to df
twDF = tw.map(lambda p: Row(text=p)).toDF()
# default minCount = 5 (we may need to try something larger: 20-100 to reduce cost)
# default vectorSize = 100 (we may want to keep default)
t0 = time.time()
word2Vec = Word2Vec(vectorSize=100, minCount=5, inputCol="text", outputCol="result")
modelW2V = word2Vec.fit(twDF)
wordVectorsDF = modelW2V.getVectors()
print "Elapsed time (seconds) to train Word2Vec: ", time.time() - t0
print sc.version
vocabSize = wordVectorsDF.count()
print "Vocabulary Size: ", vocabSize
"""
Explanation: Word2Vec: returns a dataframe with words and vectors
Sometimes you need to run this block twice (for an unclear reason that still needs debugging)
End of explanation
"""
topN = 13
synonymsDF = modelW2V.findSynonyms('christmas', topN).toPandas()
synonymsDF
"""
Explanation: Find top N closest words
End of explanation
"""
synonymsDF = modelW2V.findSynonyms('dog', 5).toPandas()
synonymsDF
"""
Explanation: As Expected, Unrelated terms are Not Accurate
End of explanation
"""
dfW2V = wordVectorsDF.select('vector').withColumnRenamed('vector','features')
numComponents = 3
pca = PCA(k = numComponents, inputCol = 'features', outputCol = 'pcaFeatures')
model = pca.fit(dfW2V)
dfComp = model.transform(dfW2V).select("pcaFeatures")
"""
Explanation: PCA on Top of Word2Vec using DF (spark.ml)
End of explanation
"""
word = 'christmas'
nwords = 200
#############
r = wvu.topNwordsToPlot(dfComp,wordVectorsDF,word,nwords)
############
fs=20 #fontsize
w = r['word']
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
height = 10
width = 10
fig.set_size_inches(width, height)
ax.scatter(r['X'], r['Y'], r['Z'], color='red', s=100, marker='o', edgecolors='black')
for i, txt in enumerate(w):
if(i<7):
ax.text(r['X'].ix[i],r['Y'].ix[i],r['Z'].ix[i], '%s' % (txt), size=20, zorder=1, color='k')
ax.set_xlabel('1st. Component', fontsize=fs)
ax.set_ylabel('2nd. Component', fontsize=fs)
ax.set_zlabel('3rd. Component', fontsize=fs)
ax.set_title('Visualization of Word2Vec via PCA', fontsize=fs)
ax.grid(True)
plt.show()
"""
Explanation: 3D Visualization
End of explanation
"""
t0=time.time()
K = int(math.floor(math.sqrt(float(vocabSize)/2)))
# K ~ sqrt(n/2) this is a rule of thumb for choosing K,
# where n is the number of words in the model
# feel free to choose K with a fancier algorithm
dfW2V = wordVectorsDF.select('vector').withColumnRenamed('vector','features')
kmeans = KMeans(k=K, seed=1)
modelK = kmeans.fit(dfW2V)
labelsDF = modelK.transform(dfW2V).select('prediction').withColumnRenamed('prediction','labels')
print "Number of Clusters (K) Used: ", K
print "Elapsed time (seconds) :", time.time() - t0
"""
Explanation: K-means on top of Word2Vec using DF (spark.ml)
End of explanation
"""
|
kit-cel/wt | sigNT/signals_transforms/rect_sinc.ipynb | gpl-2.0 | import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=True)
matplotlib.rc('figure', figsize=(18, 6) )
"""
Explanation: Content and Objective
Show different aspects when dealing with FFT
Using rectangular function in time and frequency for illustration
Importing and Plotting Options
End of explanation
"""
# min and max time, sampling time
t_min = -5.0
t_max = 5.0
t_s = 0.1 # sample time
# vector of times
t=np.arange(t_min, t_max + t_s, t_s )
# duration of rect and according instants in time
T_rect = 2 # width of the rectangular
t_rect = np.arange( - T_rect/2, T_rect / 2 + t_s, t_s )
# sample number of domain and signal
M = len( t )
M_rect = len( t_rect )
# frequency axis
# NOTE: resolution given by characteristics of DFT
f_Nyq = 1 / ( 2*t_s )
delta_f = 1 / ( t_max-t_min )
f = np.arange( -f_Nyq, f_Nyq + delta_f, delta_f )
# rectangular function,
# one signal with ones in the middle, one signal with ones at the beginning, one signal being periodical
rect_midway = 0 * t
rect_midway[ (M-M_rect)//2 : (M-M_rect)//2+M_rect ] = 1
rect_left = 0*t
rect_left[ : M_rect] = 1
rect_periodic = 0*t
rect_periodic[ : M_rect// 2 +1 ] = 1
rect_periodic[ len(t) - M_rect // 2 : ] = 1
# choose
rect = rect_left
rect = rect_midway
rect = rect_periodic
# frequency rect corresponds to time sinc
RECT = np.fft.fft( rect )
RECT = RECT / np.max( np.abs(RECT) )
# plotting
plt.subplot(121)
plt.plot(t, rect)
plt.grid(True);
plt.xlabel('$t/\mathrm{s}$');
plt.ylabel('$x(t)$')
plt.subplot(122)
plt.plot(f, np.real( RECT ) )
plt.plot(f, np.imag( RECT ) )
plt.grid(True);
plt.xlabel('$f/\mathrm{Hz}$');
plt.ylabel('$X(f)$')
"""
Explanation: Define Rect and Get Spectrum
End of explanation
"""
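As a follow-up sketch (same assumed parameters as above: t_s = 0.1, rect width T = 2), one can verify numerically why the rect_periodic arrangement yields a purely real spectrum: an even, zero-phase sequence has a real DFT.

```python
import numpy as np

t_s = 0.1
M = 101                                          # odd sample count, t in [-5, 5]
t = (np.arange(M) - M // 2) * t_s
rect = (np.abs(t) <= 1.0 + 1e-9).astype(float)   # width T = 2, centered at t = 0

# ifftshift rotates the centered rect into the "periodic" (zero-phase) layout
R = np.fft.fft(np.fft.ifftshift(rect))

max_imag = np.max(np.abs(R.imag))   # ~0: the spectrum is purely real
dc_value = R.real[0]                # equals the number of ones in the rect
```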
|
farr/emcee | docs/_static/notebooks/autocorr.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1234)
# Build the celerite model:
import celerite
from celerite import terms
kernel = terms.RealTerm(log_a=0.0, log_c=-6.0)
kernel += terms.RealTerm(log_a=0.0, log_c=-2.0)
# The true autocorrelation time can be calculated analytically:
true_tau = sum(2*np.exp(t.log_a-t.log_c) for t in kernel.terms)
true_tau /= sum(np.exp(t.log_a) for t in kernel.terms)
true_tau
# Simulate a set of chains:
gp = celerite.GP(kernel)
t = np.arange(2000000)
gp.compute(t)
y = gp.sample(size=32)
# Let's plot a little segment with a few samples:
plt.plot(y[:3, :300].T)
plt.xlim(0, 300)
plt.xlabel("step number")
plt.ylabel("$f$")
plt.title("$\\tau_\mathrm{{true}} = {0:.0f}$".format(true_tau), fontsize=14);
"""
Explanation: Autocorrelation analysis & convergence
In this tutorial, we will discuss a method for convincing yourself that your chains are sufficiently converged.
This can be a difficult subject to discuss because it isn't formally possible to guarantee convergence for any but the simplest models, and therefore any argument that you make will be circular and heuristic.
However, some discussion of autocorrelation analysis is (or should be!) a necessary part of any publication using MCMC.
With emcee, we follow Goodman & Weare (2010) and recommend using the integrated autocorrelation time to quantify the effects of sampling error on your results.
The basic idea is that the samples in your chain are not independent and you must estimate the effective number of independent samples.
There are other convergence diagnostics like the Gelman–Rubin statistic (Note: you should not compute the G–R statistic using multiple chains in the same emcee ensemble because the chains are not independent!) but, since the integrated autocorrelation time directly quantifies the Monte Carlo error (and hence the efficiency of the sampler) on any integrals computed using the MCMC results, it is the natural quantity of interest when judging the robustness of an MCMC analysis.
Monte Carlo error
The goal of every MCMC analysis is to evaluate integrals of the form
$$
\mathrm{E}_{p(\theta)}[f(\theta)] = \int f(\theta)\,p(\theta)\,\mathrm{d}\theta \quad.
$$
If you had some way of generating $N$ samples $\theta^{(n)}$ from the probability density $p(\theta)$, then you could approximate this integral as
$$
\mathrm{E}_{p(\theta)}[f(\theta)] \approx \frac{1}{N} \sum_{n=1}^N f(\theta^{(n)})
$$
where the sum is over the samples from $p(\theta)$.
If these samples are independent, then the sampling variance on this estimator is
$$
\sigma^2 = \frac{1}{N}\,\mathrm{Var}_{p(\theta)}[f(\theta)]
$$
and the error decreses as $1/\sqrt{N}$ as you generate more samples.
In the case of MCMC, the samples are not independent and the error is actually given by
$$
\sigma^2 = \frac{\tau_f}{N}\,\mathrm{Var}_{p(\theta)}[f(\theta)]
$$
where $\tau_f$ is the integrated autocorrelation time for the chain $f(\theta^{(n)})$.
In other words, $N/\tau_f$ is the effective number of samples and $\tau_f$ is the number of steps that are needed before the chain "forgets" where it started.
This means that, if you can estimate $\tau_f$, then you can estimate the number of samples that you need to generate to reduce the relative error on your target integral to (say) a few percent.
Note: It is important to remember that $\tau_f$ depends on the specific function $f(\theta)$.
This means that there isn't just one integrated autocorrelation time for a given Markov chain.
Instead, you must compute a different $\tau_f$ for any integral you estimate using the samples.
Computing autocorrelation times
There is a great discussion of methods for autocorrelation estimation in a set of lecture notes by Alan Sokal and the interested reader should take a look at that for a more formal discussion, but I'll include a summary of some of the relevant points here.
The integrated autocorrelation time is defined as
$$
\tau_f = \sum_{\tau=-\infty}^\infty \rho_f(\tau)
$$
where $\rho_f(\tau)$ is the normalized autocorrelation function of the stochastic process that generated the chain for $f$.
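For intuition, the sum defining $\tau_f$ can be evaluated in closed form for a hypothetical geometric autocorrelation $\rho(\tau) = \phi^{|\tau|}$ (not one of the chains analyzed here):

```python
# For rho(tau) = phi**abs(tau), the integrated autocorrelation time is a
# geometric series:
#   tau_f = 1 + 2 * sum_{tau >= 1} phi**tau = (1 + phi) / (1 - phi)
phi = 0.5
tau_numeric = 1.0 + 2.0 * sum(phi**k for k in range(1, 200))
tau_closed = (1.0 + phi) / (1.0 - phi)
# both give tau_f = 3, i.e. roughly one independent sample per 3 steps
```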
You can estimate $\rho_f(\tau)$ using a finite chain ${f_n}_{n=1}^N$ as
$$
\hat{\rho}_f(\tau) = \hat{c}_f(\tau) / \hat{c}_f(0)
$$
where
$$
\hat{c}_f(\tau) = \frac{1}{N - \tau} \sum_{n=1}^{N-\tau} (f_n - \mu_f)\,(f_{n+\tau}-\mu_f)
$$
and
$$
\mu_f = \frac{1}{N}\sum_{n=1}^N f_n \quad.
$$
(Note: In practice, it is actually more computationally efficient to compute $\hat{c}_f(\tau)$ using a fast Fourier transform than summing it directly.)
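The parenthetical claim is easy to verify numerically — a sketch with synthetic data showing that the zero-padded FFT route reproduces the direct lag sums:

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(64)
xm = x - x.mean()
N = len(xm)

# Direct lag sums: sum_n (x_n - mu)(x_{n+tau} - mu), tau = 0..N-1
direct = np.array([np.sum(xm[:N - tau] * xm[tau:]) for tau in range(N)])

# FFT route: zero-pad to 2N so the circular correlation equals the linear one
F = np.fft.fft(xm, n=2 * N)
via_fft = np.fft.ifft(F * np.conjugate(F)).real[:N]
```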
Now, you might expect that you can estimate $\tau_f$ using this estimator for $\rho_f(\tau)$ as
$$
\hat{\tau}_f \stackrel{?}{=} \sum_{\tau=-N}^{N} \hat{\rho}_f(\tau) = 1 + 2\,\sum_{\tau=1}^N \hat{\rho}_f(\tau)
$$
but this isn't actually a very good idea.
At longer lags, $\hat{\rho}_f(\tau)$ starts to contain more noise than signal and summing all the way out to $N$ will result in a very noisy estimate of $\tau_f$.
Instead, we want to estimate $\tau_f$ as
$$
\hat{\tau}_f (M) = 1 + 2\,\sum_{\tau=1}^M \hat{\rho}_f(\tau)
$$
for some $M \ll N$.
As discussed by Sokal in the notes linked above, the introduction of $M$ decreases the variance of the estimator at the cost of some added bias and he suggests choosing the smallest value of $M$ where $M \ge C\,\hat{\tau}_f (M)$ for a constant $C \sim 5$.
Sokal says that he finds this procedure to work well for chains longer than $1000\,\tau_f$, but the situation is a bit better with emcee because we can use the parallel chains to reduce the variance and we've found that chains longer than about $50\,\tau$ are often sufficient.
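The windowing procedure can be sketched on the hypothetical geometric curve $\rho(\tau) = \phi^{\tau}$, where the exact answer $(1+\phi)/(1-\phi)$ is known; this mirrors the auto_window logic used later in the notebook, but only as an illustration:

```python
import numpy as np

phi = 0.5
rho = phi ** np.arange(200)           # exact normalized ACF, rho(0) = 1
taus = 2.0 * np.cumsum(rho) - 1.0     # tau_hat(M) = 1 + 2 * sum_{1..M} rho

# smallest M with M >= c * tau_hat(M), following Sokal's rule with c = 5
c = 5.0
m = np.arange(len(taus)) < c * taus
window = int(np.argmin(m)) if np.any(~m) else len(taus) - 1
tau_est = taus[window]
# tau_est is within ~1e-4 of the true value (1 + phi) / (1 - phi) = 3
```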
A toy problem
To demonstrate this method, we'll start by generating a set of "chains" from a process with known autocorrelation structure.
To generate a large enough dataset, we'll use celerite:
End of explanation
"""
def next_pow_two(n):
i = 1
while i < n:
i = i << 1
return i
def autocorr_func_1d(x, norm=True):
x = np.atleast_1d(x)
if len(x.shape) != 1:
raise ValueError("invalid dimensions for 1D autocorrelation function")
n = next_pow_two(len(x))
# Compute the FFT and then (from that) the auto-correlation function
f = np.fft.fft(x - np.mean(x), n=2*n)
acf = np.fft.ifft(f * np.conjugate(f))[:len(x)].real
acf /= 4*n
# Optionally normalize
if norm:
acf /= acf[0]
return acf
# Make plots of ACF estimate for a few different chain lengths
window = int(2*true_tau)
tau = np.arange(window+1)
f0 = kernel.get_value(tau) / kernel.get_value(0.0)
# Loop over chain lengths:
fig, axes = plt.subplots(1, 3, figsize=(12, 4), sharex=True, sharey=True)
for n, ax in zip([10, 100, 1000], axes):
nn = int(true_tau * n)
ax.plot(tau / true_tau, f0, "k", label="true")
ax.plot(tau / true_tau, autocorr_func_1d(y[0, :nn])[:window+1], label="estimate")
ax.set_title(r"$N = {0}\,\tau_\mathrm{{true}}$".format(n), fontsize=14)
ax.set_xlabel(r"$\tau / \tau_\mathrm{true}$")
axes[0].set_ylabel(r"$\rho_f(\tau)$")
axes[-1].set_xlim(0, window / true_tau)
axes[-1].set_ylim(-0.05, 1.05)
axes[-1].legend(fontsize=14);
"""
Explanation: Now we'll estimate the empirical autocorrelation function for each of these parallel chains and compare this to the true function.
End of explanation
"""
fig, axes = plt.subplots(1, 3, figsize=(12, 4), sharex=True, sharey=True)
for n, ax in zip([10, 100, 1000], axes):
nn = int(true_tau * n)
ax.plot(tau / true_tau, f0, "k", label="true")
f = np.mean([autocorr_func_1d(y[i, :nn], norm=False)[:window+1]
for i in range(len(y))], axis=0)
f /= f[0]
ax.plot(tau / true_tau, f, label="estimate")
ax.set_title(r"$N = {0}\,\tau_\mathrm{{true}}$".format(n), fontsize=14)
ax.set_xlabel(r"$\tau / \tau_\mathrm{true}$")
axes[0].set_ylabel(r"$\rho_f(\tau)$")
axes[-1].set_xlim(0, window / true_tau)
axes[-1].set_ylim(-0.05, 1.05)
axes[-1].legend(fontsize=14);
"""
Explanation: This figure shows how the empirical estimate of the normalized autocorrelation function changes as more samples are generated.
In each panel, the true autocorrelation function is shown as a black curve and the empirical estimator is shown as a blue line.
Instead of estimating the autocorrelation function using a single chain, we can assume that each chain is sampled from the same stochastic process and average the estimate over ensemble members to reduce the variance.
It turns out that we'll actually do this averaging later in the process below, but it can be useful to show the mean autocorrelation function for visualization purposes.
End of explanation
"""
# Automated windowing procedure following Sokal (1989)
def auto_window(taus, c):
m = np.arange(len(taus)) < c * taus
if np.any(m):
return np.argmin(m)
return len(taus) - 1
# Following the suggestion from Goodman & Weare (2010)
def autocorr_gw2010(y, c=5.0):
f = autocorr_func_1d(np.mean(y, axis=0))
taus = 2.0*np.cumsum(f)-1.0
window = auto_window(taus, c)
return taus[window]
def autocorr_new(y, c=5.0):
f = np.zeros(y.shape[1])
for yy in y:
f += autocorr_func_1d(yy)
f /= len(y)
taus = 2.0*np.cumsum(f)-1.0
window = auto_window(taus, c)
return taus[window]
# Compute the estimators for a few different chain lengths
N = np.exp(np.linspace(np.log(100), np.log(y.shape[1]), 10)).astype(int)
gw2010 = np.empty(len(N))
new = np.empty(len(N))
for i, n in enumerate(N):
gw2010[i] = autocorr_gw2010(y[:, :n])
new[i] = autocorr_new(y[:, :n])
# Plot the comparisons
plt.loglog(N, gw2010, "o-", label="G\&W 2010")
plt.loglog(N, new, "o-", label="new")
ylim = plt.gca().get_ylim()
plt.plot(N, N / 50.0, "--k", label=r"$\tau = N/50$")
plt.axhline(true_tau, color="k", label="truth", zorder=-100)
plt.ylim(ylim)
plt.xlabel("number of samples, $N$")
plt.ylabel(r"$\tau$ estimates")
plt.legend(fontsize=14);
"""
Explanation: Now let's estimate the autocorrelation time using these estimated autocorrelation functions.
Goodman & Weare (2010) suggested averaging the ensemble over walkers and computing the autocorrelation function of the mean chain to lower the variance of the estimator and that was what was originally implemented in emcee.
Since then, @fardal on GitHub suggested that other estimators might have lower variance.
This is absolutely correct and, instead of the Goodman & Weare method, we now recommend computing the autocorrelation time for each walker (it's actually possible to still use the ensemble to choose the appropriate window) and then average these estimates.
Here is an implementation of each of these methods and a plot showing the convergence as a function of the chain length:
End of explanation
"""
import emcee
def log_prob(p):
return np.logaddexp(-0.5*np.sum(p**2), -0.5*np.sum((p-4.0)**2))
sampler = emcee.EnsembleSampler(32, 3, log_prob)
sampler.run_mcmc(np.concatenate((np.random.randn(16, 3),
4.0+np.random.randn(16, 3)), axis=0),
500000, progress=True);
"""
Explanation: In this figure, the true autocorrelation time is shown as a horizontal line and it should be clear that both estimators give outrageous results for the short chains.
It should also be clear that the new algorithm has lower variance than the original method based on Goodman & Weare.
In fact, even for moderately long chains, the old method can give dangerously over-confident estimates.
For comparison, we have also plotted the $\tau = N/50$ line to show that, once the estimate crosses that line, the estimates start to become more reasonable.
This suggests that you probably shouldn't trust any estimate of $\tau$ unless you have more than $F\times\tau$ samples for some $F \ge 50$.
Larger values of $F$ will be more conservative, but they will also (obviously) require longer chains.
A more realistic example
Now, let's run an actual Markov chain and test these methods using those samples.
So that the sampling isn't completely trivial, we'll sample a multimodal density in three dimensions.
End of explanation
"""
chain = sampler.get_chain()[:, :, 0].T
plt.hist(chain.flatten(), 100)
plt.gca().set_yticks([])
plt.xlabel(r"$\theta$")
plt.ylabel(r"$p(\theta)$");
"""
Explanation: Here's the marginalized density in the first dimension.
End of explanation
"""
# Compute the estimators for a few different chain lengths
N = np.exp(np.linspace(np.log(100), np.log(chain.shape[1]), 10)).astype(int)
gw2010 = np.empty(len(N))
new = np.empty(len(N))
for i, n in enumerate(N):
gw2010[i] = autocorr_gw2010(chain[:, :n])
new[i] = autocorr_new(chain[:, :n])
# Plot the comparisons
plt.loglog(N, gw2010, "o-", label="G\&W 2010")
plt.loglog(N, new, "o-", label="new")
ylim = plt.gca().get_ylim()
plt.plot(N, N / 50.0, "--k", label=r"$\tau = N/50$")
plt.ylim(ylim)
plt.xlabel("number of samples, $N$")
plt.ylabel(r"$\tau$ estimates")
plt.legend(fontsize=14);
"""
Explanation: And here's the comparison plot showing how the autocorrelation time estimates converge with longer chains.
End of explanation
"""
from scipy.optimize import minimize
def autocorr_ml(y, thin=1, c=5.0):
# Compute the initial estimate of tau using the standard method
init = autocorr_new(y, c=c)
z = y[:, ::thin]
N = z.shape[1]
# Build the GP model
tau = max(1.0, init/thin)
kernel = terms.RealTerm(np.log(0.9*np.var(z)), -np.log(tau),
bounds=[(-5.0, 5.0), (-np.log(N), 0.0)])
kernel += terms.RealTerm(np.log(0.1*np.var(z)), -np.log(0.5*tau),
bounds=[(-5.0, 5.0), (-np.log(N), 0.0)])
gp = celerite.GP(kernel, mean=np.mean(z))
gp.compute(np.arange(z.shape[1]))
# Define the objective
def nll(p):
# Update the GP model
gp.set_parameter_vector(p)
# Loop over the chains and compute likelihoods
v, g = zip(*(
gp.grad_log_likelihood(z0, quiet=True)
for z0 in z
))
# Combine the datasets
return -np.sum(v), -np.sum(g, axis=0)
# Optimize the model
p0 = gp.get_parameter_vector()
bounds = gp.get_parameter_bounds()
soln = minimize(nll, p0, jac=True, bounds=bounds)
gp.set_parameter_vector(soln.x)
# Compute the maximum likelihood tau
a, c = kernel.coefficients[:2]
tau = thin * 2*np.sum(a / c) / np.sum(a)
return tau
# Calculate the estimate for a set of different chain lengths
ml = np.empty(len(N))
ml[:] = np.nan
for j, n in enumerate(N[1:8]):
i = j+1
thin = max(1, int(0.05*new[i]))
ml[i] = autocorr_ml(chain[:, :n], thin=thin)
# Plot the comparisons
plt.loglog(N, gw2010, "o-", label="G\&W 2010")
plt.loglog(N, new, "o-", label="new")
plt.loglog(N, ml, "o-", label="ML")
ylim = plt.gca().get_ylim()
plt.plot(N, N / 50.0, "--k", label=r"$\tau = N/50$")
plt.ylim(ylim)
plt.xlabel("number of samples, $N$")
plt.ylabel(r"$\tau$ estimates")
plt.legend(fontsize=14);
"""
Explanation: As before, the short chains give absurd estimates of $\tau$, but the new method converges faster and with lower variance than the old method.
The $\tau = N/50$ line is also included as above as an indication of where we might start trusting the estimates.
What about shorter chains?
Sometimes it just might not be possible to run chains that are long enough to get a reliable estimate of $\tau$ using the methods described above.
In these cases, you might be able to get an estimate using parametric models for the autocorrelation.
One example would be to fit an autoregressive model to the chain and using that to estimate the autocorrelation time.
As an example, we'll use celerite to fit for the maximum likelihood autocorrelation function and then compute an estimate of $\tau$ based on that model.
The celerite model that we're using is equivalent to a second-order ARMA model and it appears to be a good choice for this example, but we're not going to promise anything here about its general applicability, and we urge caution whenever estimating autocorrelation times using short chains.
End of explanation
"""
|
pagutierrez/tutorial-sklearn | notebooks-spanish/20-clustering_jerarquico_y_basado_densidades.ipynb | cc0-1.0 | from sklearn import datasets
iris = datasets.load_iris()
X = iris.data[:, [2, 3]]
y = iris.target
n_samples, n_features = X.shape
plt.scatter(X[:, 0], X[:, 1], c=y);
"""
Explanation: Unsupervised learning: hierarchical and density-based clustering algorithms
In notebook number 8, we introduced one of the most basic and widely used clustering algorithms, K-means. One advantage of K-means is that it is extremely easy to implement and computationally very efficient compared to other clustering algorithms. However, we already saw that one of the weaknesses of K-means is that it only works well if the data to be clustered are distributed in spherical shapes. Moreover, we have to decide on a number of clusters, k, a priori, which can be a problem if we have no prior knowledge of how many clusters to expect.
In this notebook, we will look at two alternative ways of clustering: hierarchical clustering and density-based clustering.
Hierarchical clustering
An important feature of hierarchical clustering is that we can visualize the results as a dendrogram, a tree diagram. Using this visualization, we can then decide the depth threshold at which to cut the tree to obtain a clustering. In other words, we do not have to decide the number of clusters without any information.
Agglomerative and divisive clustering
Furthermore, we can distinguish two main forms of hierarchical clustering: divisive and agglomerative. In agglomerative clustering, we start with a single sample per cluster and successively merge clusters (joining the closest ones), following a bottom-up strategy to build the dendrogram. In divisive clustering, we instead start with all the points in a single cluster and then split that cluster into smaller subgroups, following a top-down strategy.
We will focus on agglomerative clustering.
Single and complete linkage
Now, the question is how to measure the distance between samples. A common choice is the Euclidean distance, which is what the K-means algorithm uses.
However, the hierarchical algorithm requires measuring the distance between groups of points, that is, the distance between one cluster (a group of points) and another. Two ways of doing this are single linkage and complete linkage.
In single linkage, we take the most similar pair of points (based on Euclidean distance, for example) among all the points belonging to the two clusters. In complete linkage, we take the most distant pair of points.
To see how agglomerative clustering works, let's load the Iris dataset (pretending that we do not know the true labels and want to discover the species):
End of explanation
"""
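Before running SciPy's linkage on Iris, here is a tiny numeric illustration (with made-up points) of the two linkage criteria just described:

```python
import numpy as np

# Two small 1-D clusters: A = {0, 1}, B = {3, 5}
A = np.array([0.0, 1.0])
B = np.array([3.0, 5.0])

# All pairwise Euclidean distances between the two clusters
d = np.abs(A[:, None] - B[None, :])

single = d.min()    # closest pair:  |1 - 3| = 2
complete = d.max()  # farthest pair: |0 - 5| = 5
```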
from scipy.cluster.hierarchy import linkage
from scipy.cluster.hierarchy import dendrogram
clusters = linkage(X,
metric='euclidean',
method='complete')
dendr = dendrogram(clusters)
plt.ylabel('Euclidean distance');
"""
Explanation: Now let's explore the data with clustering, visualizing the dendrogram using SciPy's linkage (which performs the hierarchical clustering) and dendrogram (which draws the dendrogram) functions:
End of explanation
"""
from sklearn.cluster import AgglomerativeClustering
ac = AgglomerativeClustering(n_clusters=3,
affinity='euclidean',
linkage='complete')
prediction = ac.fit_predict(X)
print('Cluster labels: %s\n' % prediction)
plt.scatter(X[:, 0], X[:, 1], c=prediction);
"""
Explanation: Alternatively, we can use scikit-learn's AgglomerativeClustering to split the dataset into 3 clusters. Can you guess which three clusters we will find?
End of explanation
"""
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=400,
noise=0.1,
random_state=1)
plt.scatter(X[:,0], X[:,1])
plt.show()
from sklearn.cluster import DBSCAN
db = DBSCAN(eps=0.2,
min_samples=10,
metric='euclidean')
prediction = db.fit_predict(X)
print("Predicted labels:\n", prediction)
plt.scatter(X[:, 0], X[:, 1], c=prediction);
"""
Explanation: Density-based clustering - DBSCAN
Another useful form of clustering is known as Density-Based Spatial Clustering of Applications with Noise (DBSCAN). In essence, we can think of DBSCAN as an algorithm that divides the dataset into subgroups by looking for dense regions of points.
In DBSCAN, there are three types of points:
Core points: points that have a minimum number of points (MinPts) contained within a hypersphere of radius epsilon.
Border points: points that are not core points, since they do not have enough points in their neighborhood, but that do belong to the epsilon-radius neighborhood of some core point.
Noise points: all points that do not fall into either of the previous categories.
One advantage of DBSCAN is that we do not have to specify the number of clusters a priori. However, it requires us to set two additional hyperparameters, MinPts and the radius epsilon.
End of explanation
"""
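The three point types can be illustrated with a naive classifier — a pure-NumPy sketch of the definitions only, not how sklearn's DBSCAN is actually implemented:

```python
import numpy as np

def classify_points(X, eps, min_pts):
    """Label each 1-D point 'core', 'border' or 'noise' (naive O(n^2) sketch)."""
    D = np.abs(X[:, None] - X[None, :])   # pairwise distances
    neighbors = D <= eps                  # neighborhoods include the point itself
    core = neighbors.sum(axis=1) >= min_pts
    labels = []
    for i in range(len(X)):
        if core[i]:
            labels.append('core')
        elif np.any(neighbors[i] & core):  # within eps of some core point
            labels.append('border')
        else:
            labels.append('noise')
    return labels

labels = classify_points(np.array([0.0, 0.1, 0.2, 5.0]), eps=0.15, min_pts=3)
# ['border', 'core', 'border', 'noise']
```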
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1500,
factor=.4,
noise=.05)
plt.scatter(X[:, 0], X[:, 1], c=y);
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Using the following synthetic dataset, two concentric circles, experiment with the results obtained by the clustering algorithms we have considered so far: `KMeans`, `AgglomerativeClustering` and `DBSCAN`.
Which algorithm best reproduces or discovers the hidden structure (assuming we do not know `y`)?
Can you explain why this algorithm works while the other two fail?
</li>
</ul>
</div>
End of explanation
"""
|
mavillan/SciProg | 01_intro/01_intro.ipynb | gpl-3.0 | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
"""
Explanation: <h1 align="center">Scientific Programming in Python</h1>
<h2 align="center">Topic 1: Introduction and basic tools </h2>
Notebook created by Martín Villanueva - martin.villanueva@usm.cl - DI UTFSM - April 2017.
End of explanation
"""
xgrid = np.linspace(-3,3,50)
f1 = np.exp(-xgrid**2)
f2 = np.tanh(xgrid)
plt.figure(figsize=(8,6))
plt.plot(xgrid, f1, 'bo-')
plt.plot(xgrid, f2, 'ro-')
plt.title('Just a demo plot')
plt.grid()
plt.show()
"""
Explanation: Table of Contents
1.- Anaconda
2.- GIT
3.- IPython
4.- Jupyter Notebook
5.- Inside Ipython and Kernels
6.- Magics
<div id='anaconda' />
1.- Anaconda
Although Python is an open-source, cross-platform language, installing it with the usual scientific packages used to be overly complicated. Fortunately, there is now an all-in-one scientific Python distribution, Anaconda (by Continuum Analytics), that is free, cross-platform, and easy to install.
Note: There are other distributions and installation options (like Canopy, WinPython, Python(x, y), and others).
Why to use Anaconda:
1. User level install of the version of python you want.
2. Able to install/update packages completely independent of system libraries or admin privileges.
3. No risk of messing up required system libraries.
4. Comes with the conda manager which allows us to handle the packages and manage environments.
5. It "completely" solves the problem of package dependencies.
6. Most important scientific packages (NumPy, SciPy, Scikit-Learn and others) are compiled with MKL support.
7. Many scientific communities are using it!
Note: In this course we will use Python3.
Installation
Download installation script here. Run in a terminal:
bash
bash Anaconda3-4.3.1-Linux-x86_64.sh
Then modify the PATH environment variable in your ~/.bashrc appending the next line:
bash
export PATH=~/anaconda3/bin:$PATH
Run source ~/.bashrc and test your installation by calling the python interpreter!
Conda and useful commands
Install packages
bash
conda install package_name
Update packages
bash
conda update package_name
conda update --all
Search packages
bash
conda search package_pattern
Clean Installation
bash
conda clean {--lock, --tarballs, --index-cache, --packages, --source-cache, --all}
Environments
Isolated distribution of packages.
Create an environment
bash
conda create --name env_name python=version packages_to_install_in_env
conda create --name python2 python=2.7 anaconda
conda create --name python26 python=2.6 python
Switch to an environment
bash
source activate env_name
List all available environments
bash
conda info --envs
Delete an environment
bash
conda remove --name env_name --all
Important Note: If you install packages with pip, they will be installed in the running environment.
For more info about conda see here
<div id='git' />
2.- Git
Git is a version control system (VCS) for tracking changes in computer files and coordinating work on those files among multiple people. It is primarily used for software development, but it can be used to keep track of changes in any files. As a distributed revision control system it is aimed at speed, data integrity, and support for distributed, non-linear workflows.
Online providers supporting Git include GitHub (https://github.com), Bitbucket (https://bitbucket.org), Google code (https://code.google.com), Gitorious (https://gitorious.org), and SourceForge (https://sourceforge.net).
In order to get your git repository ready for use, follow these instructions:
1. Create the project directory.
bash
mkdir project && cd project
2. Initialize the local directory as a Git repository.
bash
git init
3. Add the files in your new local repository. This stages them for the first commit.
bash
touch README
git add .
To unstage a file, use 'git reset HEAD YOUR-FILE'.
4. Commit the files that you've staged in your local repository.
bash
git commit -m "First commit"
To remove this commit and modify the file, use 'git reset --soft HEAD~1', then modify and add the file again.
5. Add the URL for the remote repository where your local repository will be pushed.
bash
git remote add origin remote_repository_URL
# Sets the new remote
git remote -v
# Verifies the new remote URL
6. Push the changes in your local repository to GitHub.
bash
git push -u origin master
<div id='ipython' />
3.- IPython
IPython is an improved version of the standard Python shell that provides tools for interactive computing in Python.
Here are some cool features of IPython:
Better syntax highlighting.
Code completion.
Direct access to bash/linux commands (cd, ls, pwd, rm, mkdir, etc). Additional commands can be executed with: !command.
who command to see defined variables in the current session.
Inspect objects with ?.
And magics, which we will see briefly.
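The `!` shell-access feature maps onto plain Python roughly like this sketch using `subprocess` (the command shown is just an illustration):

```python
import subprocess

# Equivalent of IPython's `!echo hello`: run a command
# and collect its standard output as text.
result = subprocess.run(["echo", "hello"], capture_output=True, text=True)
output = result.stdout.strip()
```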
<div id='jupyter' />
4.- Jupyter Notebook
It is a web-based interactive environment that combines code, rich text, images, videos, animations, mathematics, and plots into a single document. This modern tool is an ideal gateway to high-performance numerical computing and data science in Python.
New paragraph
This is rich text with links, equations:
$$
\hat{f}(\xi) = \int_{-\infty}^{+\infty} f(x) \, \mathrm{e}^{-i \xi x} dx,
$$
code with syntax highlighting
python
def fibonacci(n):
if n <= 1:
return n
else:
return fibonacci(n-1) + fibonacci(n-2)
images:
and plots:
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo('HrxX9TBj2zY')
"""
Explanation: IPython also comes with a sophisticated display system that lets us insert rich web elements in the notebook. Here you can see an example of how to add YouTube videos in a notebook
End of explanation
"""
# this will list all magic commands
%lsmagic
# also work in ls, cd, mkdir, etc
%pwd
%history
# this will execute and show the output of the program
%run ./hola_mundo.py
def naive_loop():
for i in range(100):
for j in range(100):
for k in range(100):
a = 1+1
return None
%timeit -n 10 naive_loop()
%time naive_loop()
%%bash
cd ..
ls
"""
Explanation: <div id='inside' />
5.- Inside Ipython and Kernels
The IPython Kernel is a separate IPython process which is responsible for running user code, and things like computing possible completions. Frontends, like the notebook or the Qt console, communicate with the IPython Kernel using JSON messages sent over ZeroMQ sockets.
The core execution machinery for the kernel is shared with terminal IPython:
A kernel process can be connected to more than one frontend simultaneously. In this case, the different frontends will have access to the same variables.
The Client-Server architecture
The Notebook frontend does something extra. In addition to running your code, it stores code and output, together with markdown notes, in an editable document called a notebook. When you save it, this is sent from your browser to the notebook server, which saves it on disk as a JSON file with a .ipynb extension.
The notebook server, not the kernel, is responsible for saving and loading notebooks, so you can edit notebooks even if you don’t have the kernel for that language —you just won’t be able to run code. The kernel doesn’t know anything about the notebook document: it just gets sent cells of code to execute when the user runs them.
Others Kernels
There are two ways to develop a kernel for another language. Wrapper kernels reuse the communications machinery from IPython, and implement only the core execution part. Native kernels implement execution and communications in the target language:
Note: To see a list of all available kernels (and installation instructions) visit here.
Convert notebooks to other formats
It is also possible to convert the original JSON notebook to the following formats: html, latex, pdf, rst, markdown and script. For that you must run
bash
jupyter-nbconvert --to FORMAT notebook.ipynb
with FORMAT as one of the above options. Let's convert this notebook to html!
<div id='magics' />
6.- Magics
IPython magics are custom commands that let you interact with your OS and filesystem. There are line magics, prefixed with %, which affect only the line they appear on, and cell magics, prefixed with %%, which affect the whole cell.
Here we test some useful magics:
End of explanation
"""
%%capture output
!ls
%%writefile myfile.txt
Holanda que talca!
!cat myfile.txt
!rm myfile.txt
"""
Explanation: The %%capture cell magic lets you capture the standard output and error output of some code into a Python variable.
Here is an example (the outputs are captured in the output Python variable).
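Under the hood, capturing amounts to redirecting sys.stdout. A plain-Python sketch of the same idea (not the actual IPython implementation):

```python
import io
from contextlib import redirect_stdout

# Route the "cell's" standard output into a buffer instead of the screen,
# much like %%capture does.
buffer = io.StringIO()
with redirect_stdout(buffer):
    print("hello from the captured cell")

captured = buffer.getvalue()  # the text that would have been displayed
```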
End of explanation
"""
from IPython.core.magic import register_cell_magic
"""
Explanation: Writing our own magics!
In this section we will create a new cell magic that compiles and executes C++ code in the Notebook.
End of explanation
"""
@register_cell_magic
def cpp(line, cell):
"""Compile, execute C++ code, and return the
standard output."""
# We first retrieve the current IPython interpreter instance.
ip = get_ipython()
# We define the source and executable filenames.
source_filename = '_temp.cpp'
program_filename = '_temp'
# We write the code to the C++ file.
with open(source_filename, 'w') as f:
f.write(cell)
# We compile the C++ code into an executable.
compile = ip.getoutput("g++ {0:s} -o {1:s}".format(
source_filename, program_filename))
# We execute the executable and return the output.
output = ip.getoutput('./{0:s}'.format(program_filename))
print('\n'.join(output))
%%cpp
#include<iostream>
int main()
{
std::cout << "Hello world!";
}
"""
Explanation: To create a new cell magic, we create a function that takes a line (containing possible options) and a cell's contents as its arguments, and we decorate it with @register_cell_magic.
End of explanation
"""
%load_ext cpp_ext
"""
Explanation: This cell magic is currently only available in your interactive session. To distribute it, you need to create an IPython extension. This is a regular Python module or package that extends IPython.
To create an IPython extension, copy the definition of the cpp() function (without the decorator) to a Python module, named cpp_ext.py for example. Then, add the following at the end of the file:
python
def load_ipython_extension(ipython):
"""This function is called when the extension is loaded.
It accepts an IPython InteractiveShell instance.
We can register the magic with the `register_magic_function`
method of the shell instance."""
ipython.register_magic_function(cpp, 'cell')
Then, you can load the extension with %load_ext cpp_ext. The cpp_ext.py file needs to be in the PYTHONPATH, for example in the current directory.
End of explanation
"""
|
cochoa0x1/integer-programming-with-python | 04-packing-and-allocation/knapsack_problem.ipynb | mit | from pulp import *
import numpy as np
"""
Explanation: Knapsack Problem
Bin packing minimizes the number of bins needed for a fixed set of items. If we instead fix the number of bins and assign a value to each object, the knapsack problem tells us which objects to take to maximize our total item value. Rather than object sizes, the traditional formulation considers item weights and imagines packing a backpack for a camping trip or a suitcase for a vacation.
How many items of different weights can we fit in our suitcase before it is too heavy, and which objects should we take?
This problem is also known as the capital budgeting problem. Instead of items we think of investment opportunities, value becomes investment return, weight becomes cost, and the maximum weight we can carry becomes our budget. Which investments should we take on to maximize our return with the current budget?
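Before formulating it as an integer program, the 0-1 knapsack problem can also be solved directly with dynamic programming; a minimal sketch for integer weights:

```python
def knapsack_01(weights, values, capacity):
    """Maximum total value of items fitting within `capacity`,
    taking each item at most once (0-1 knapsack, integer weights)."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Sweep capacities downwards so each item is counted at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

For a single weight constraint the DP above is exact and fast; the integer-programming formulation below scales better once side constraints are added.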
End of explanation
"""
items=['item_%d'%i for i in range(20)]
item_weights = dict( (i,np.random.randint(1,20)) for i in items)
item_values = dict( (i,10*np.random.rand()) for i in items)
W = 100
#variables: how many of each object to take. For simplicity let's make this 0 or 1 (classic 0-1 knapsack problem)
x = LpVariable.dicts('item', items, 0, 1, LpBinary)
#create the problem
prob=LpProblem("knapsack",LpMaximize)
#the objective
cost = lpSum([ item_values[i]*x[i] for i in items])
prob+=cost
#constraint
prob += lpSum([ item_weights[i]*x[i] for i in items]) <= W
"""
Explanation: 1. First let's make some fake data
End of explanation
"""
%time prob.solve()
print(LpStatus[prob.status])
"""
Explanation: Solve it!
End of explanation
"""
for i in items:
print(i, value(x[i]))
print(value(prob.objective))
print(sum([ item_weights[i]*value(x[i]) for i in items]))
"""
Explanation: And the result:
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/unstructured/ML-Tests-Solution.ipynb | apache-2.0 | from googleapiclient.discovery import build
import subprocess
images = subprocess.check_output(["gsutil", "ls", "gs://{}/unstructured/photos".format(BUCKET)]).decode()
images = list(filter(None, images.split('\n')))
print(images)
"""
Explanation: <h2> Finding specific text in a corpus of scanned documents </h2>
End of explanation
"""
# Running Vision API to find images that have a specific search term
import base64
SEARCH_TERM = u"1321"
for IMAGE in images:
print(IMAGE)
vservice = build('vision', 'v1', developerKey=APIKEY)
request = vservice.images().annotate(body={
'requests': [{
'image': {
'source': {
'gcs_image_uri': IMAGE
}
},
'features': [{
'type': 'TEXT_DETECTION',
'maxResults': 100,
}]
}],
})
outputs = request.execute(num_retries=3)
# print outputs
if 'responses' in outputs and len(outputs['responses']) > 0 and 'textAnnotations' in outputs['responses'][0]:
for output in outputs['responses'][0]['textAnnotations']:
if SEARCH_TERM in output['description']:
print("image={} contains the following text: {}".format(IMAGE, output['description']))
"""
Explanation: Here are a few of the images we are going to search.
<img src="https://storage.googleapis.com/cloud-training-demos-ml/unstructured/photos/snapshot1.png" />
<img src="https://storage.googleapis.com/cloud-training-demos-ml/unstructured/photos/snapshot2.png" />
<img src="https://storage.googleapis.com/cloud-training-demos-ml/unstructured/photos/snapshot5.png" />
End of explanation
"""
def executeTranslate(inputs):
from googleapiclient.discovery import build
service = build('translate', 'v2', developerKey=APIKEY)
translator = service.translations()
outputs = translator.list(source='en', target='es', q=inputs).execute()
return outputs['translations'][0]['translatedText']
print("Added executeTranslate() function.")
"""
Explanation: <h2> Translating a large document in parallel. </h2>
As the number of items increases, we need to parallelize the cells. Here, we translate Alice in Wonderland into Spanish.
<p>
This cell creates a worker that calls the Translate API. This has to be done on each Spark worker; this is why the import is within the function.
End of explanation
"""
alice = sc.textFile("gs://cpb103-public-files/alice-short-transformed.txt")
alice = alice.map(lambda x: x.split("."))
for eachSentence in alice.take(10):
print("{0}".format(eachSentence))
"""
Explanation: Now we are ready to execute the above function in parallel on the Spark cluster
End of explanation
"""
aliceTranslated = alice.map(executeTranslate)
for eachSentence in aliceTranslated.take(10):
print("{0}".format(eachSentence))
"""
Explanation: Note: The book has also been transformed so all the new lines have been removed. This allows the book to be imported as a long
string. The text is then split on the periods to create an array of strings. The loop just shows the input.
<p>
This code runs the translation in parallel on the Spark cluster and shows the results.
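Outside Spark, the same split-then-map pattern looks like this plain-Python sketch (`fake_translate` is a stand-in for the real executeTranslate call, which needs the API):

```python
text = "Alice was beginning to get very tired. So she peeped. A White Rabbit ran by."
# Split the long string on periods, dropping empty fragments.
sentences = [s.strip() for s in text.split(".") if s.strip()]

def fake_translate(sentence):
    # Stand-in for executeTranslate(); it just tags the sentence.
    return "ES: " + sentence

translated = list(map(fake_translate, sentences))
```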
End of explanation
"""
def executeSentimentAnalysis(quote):
from googleapiclient.discovery import build
lservice = build('language', 'v1beta1', developerKey=APIKEY)
response = lservice.documents().analyzeSentiment(
body={
'document': {
'type': 'PLAIN_TEXT',
'content': quote
}
}).execute()
return response
print("Added executeSentimentAnalysis() function.")
"""
Explanation: <h2> Sentiment analysis in parallel </h2>
Here, we do sentiment analysis on a bunch of text in parallel. This is similar to the translate.
End of explanation
"""
import pandas as pd
from pandas.io import gbq
print("Imports run.")
print('Running query...')
df = gbq.read_gbq("""
SELECT title, text
FROM [bigquery-public-data:hacker_news.stories]
where text > " " and title contains("JavaScript")
LIMIT 10
""", project_id=PROJECT_ID)
#Convert Pandas DataFrame to RDD
rdd = sqlContext.createDataFrame(df).rdd
print(rdd.take(2))
# extract text field from Dictionary
comments = rdd.map(lambda x: x[1])
sentiments = comments.map(executeSentimentAnalysis)
for sentiment in sentiments.collect():
print("Score:{0} and Magnitude:{1}".format(sentiment['documentSentiment']['score'], sentiment['documentSentiment']['magnitude']))
"""
Explanation: But this time, instead of processing a text file, let's process data from BigQuery. We will pull articles on JavaScript from Hacker News and look at the sentiment associated with them.
End of explanation
"""
|
ocefpaf/secoora | notebooks/timeSeries/ssh/01-skill_score.ipynb | mit | import os
try:
import cPickle as pickle
except ImportError:
import pickle
run_name = '2014-07-07'
fname = os.path.join(run_name, 'config.pkl')
with open(fname, 'rb') as f:
config = pickle.load(f)
import numpy as np
from pandas import DataFrame, read_csv
from utilities import (load_secoora_ncs, to_html,
save_html, apply_skill)
fname = '{}-all_obs.csv'.format(run_name)
all_obs = read_csv(os.path.join(run_name, fname), index_col='name')
def rename_cols(df):
columns = dict()
for station in df.columns:
mask = all_obs['station'] == station
name = all_obs['station'][mask].index[0]
columns.update({station: name})
return df.rename(columns=columns)
"""
Explanation: <img style='float: left' width="150px" src="http://secoora.org/sites/default/files/secoora_logo.png">
<br><br>
SECOORA Notebook 2
Sea Surface Height time-series model skill
This notebook calculates several skill scores for the
SECOORA models weekly time-series saved by 00-fetch_data.ipynb.
Load configuration
End of explanation
"""
from utilities import mean_bias
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, mean_bias, remove_mean=False, filter_tides=False)
df = rename_cols(df)
skill_score = dict(mean_bias=df.copy())
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'mean_bias.html'.format(run_name))
save_html(fname, html)
html
"""
Explanation: Skill 1: Model Bias (or Mean Bias)
The bias skill compares the model mean elevation against the observations.
Because the observations were saved in NAVD datum, the deviation is usually a
datum difference between the model forcings and the observations. (This can
be confirmed via the constant bias observed at different runs.)
$$ \text{MB} = \mathbf{\overline{m}} - \mathbf{\overline{o}}$$
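The formula is a one-liner with NumPy; a sketch of what the mean_bias helper imported from utilities presumably computes (an assumption, not the actual implementation):

```python
import numpy as np

def mean_bias_sketch(model, obs):
    """Mean bias MB = mean(model) - mean(obs)."""
    return np.mean(model) - np.mean(obs)
```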
End of explanation
"""
from utilities import rmse
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, rmse, remove_mean=True, filter_tides=False)
df = rename_cols(df)
skill_score['rmse'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'rmse.html'.format(run_name))
save_html(fname, html)
html
"""
Explanation: Skill 2: Central Root Mean Squared Error
Root Mean Squared Error of the deviations from the mean.
$$ \text{CRMS} = \sqrt{\overline{\left(\mathbf{m'} - \mathbf{o'}\right)^2}}$$
where: $\mathbf{m'} = \mathbf{m} - \mathbf{\overline{m}}$ and $\mathbf{o'} = \mathbf{o} - \mathbf{\overline{o}}$
End of explanation
"""
from utilities import r2
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, r2, remove_mean=True, filter_tides=False)
df = rename_cols(df)
skill_score['r2'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'r2.html'.format(run_name))
save_html(fname, html)
html
"""
Explanation: Skill 3: R$^2$
https://en.wikipedia.org/wiki/Coefficient_of_determination
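The coefficient of determination compares the residual sum of squares against the total variance of the observations; a minimal sketch (the imported r2 helper is assumed to do something equivalent):

```python
import numpy as np

def r2_score(obs, model):
    """R^2 = 1 - SS_res / SS_tot."""
    o = np.asarray(obs, dtype=float)
    m = np.asarray(model, dtype=float)
    ss_res = np.sum((o - m) ** 2)
    ss_tot = np.sum((o - o.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```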
End of explanation
"""
from utilities import r2
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, r2, remove_mean=True, filter_tides=True)
df = rename_cols(df)
skill_score['low_pass_r2'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'low_pass_r2.html'.format(run_name))
save_html(fname, html)
html
"""
Explanation: Skill 4: Low passed R$^2$
http://dx.doi.org/10.1175/1520-0450(1979)018%3C1016:LFIOAT%3E2.0.CO;2
https://github.com/ioos/secoora/issues/188
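The idea of the low-pass comparison is to remove the tidal signal before scoring. As a crude stand-in for the Lanczos-type filter referenced above, a centered moving average illustrates the operation (hypothetical helper; window in samples):

```python
import numpy as np

def moving_average_low_pass(series, window=40):
    """Crude low-pass: centered moving average over `window` samples."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="same")
```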
End of explanation
"""
from utilities import r2
dfs = load_secoora_ncs(run_name)
# SABGOM dt = 3 hours.
dfs = dfs.swapaxes('items', 'major').resample('3H').swapaxes('items', 'major')
df = apply_skill(dfs, r2, remove_mean=True, filter_tides=False)
df = rename_cols(df)
skill_score['low_pass_resampled_3H_r2'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'low_pass_resampled_3H_r2.html'.format(run_name))
save_html(fname, html)
html
"""
Explanation: Skill 4: Low passed and re-sampled (3H) R$^2$
https://github.com/ioos/secoora/issues/183
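Resampling a series to the model's 3-hour step can be sketched with pandas on a made-up hourly series (illustrative data only):

```python
import numpy as np
import pandas as pd

# A fake hourly series: 24 hours of increasing values.
index = pd.date_range("2014-07-07", periods=24, freq="H")
series = pd.Series(np.arange(24.0), index=index)

# Bin into 3-hour means, as done for the SABGOM comparison.
resampled = series.resample("3H").mean()
```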
End of explanation
"""
fname = os.path.join(run_name, 'skill_score.pkl')
with open(fname,'wb') as f:
pickle.dump(skill_score, f)
"""
Explanation: Save scores
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from utilities.taylor_diagram import TaylorDiagram
def make_taylor(samples):
fig = plt.figure(figsize=(9, 9))
dia = TaylorDiagram(samples['std']['OBS_DATA'],
fig=fig,
label="Observation")
colors = plt.matplotlib.cm.jet(np.linspace(0, 1, len(samples)))
# Add samples to Taylor diagram.
samples.drop('OBS_DATA', inplace=True)
for model, row in samples.iterrows():
dia.add_sample(row['std'], row['corr'], marker='s', ls='',
label=model)
# Add RMS contours, and label them.
contours = dia.add_contours(colors='0.5')
plt.clabel(contours, inline=1, fontsize=10)
# Add a figure legend.
kw = dict(prop=dict(size='small'), loc='upper right')
leg = fig.legend(dia.samplePoints,
[p.get_label() for p in dia.samplePoints],
numpoints=1, **kw)
return fig
dfs = load_secoora_ncs(run_name)
# Bin and interpolate all series to 1 hour.
freq = '1H'
for station, df in list(dfs.iteritems()):
df = df.resample(freq).interpolate().dropna(axis=1)
if 'OBS_DATA' in df:
samples = DataFrame.from_dict(dict(std=df.std(),
corr=df.corr()['OBS_DATA']))
else:
continue
samples[samples < 0] = np.NaN
samples.dropna(inplace=True)
if len(samples) <= 2: # 1 obs 1 model.
continue
fig = make_taylor(samples)
fig.savefig(os.path.join(run_name, '{}.png'.format(station)))
plt.close(fig)
"""
Explanation: Normalized Taylor diagrams
The radius is the model standard deviation divided by the observation standard deviation, the azimuth is the arc-cosine of the cross correlation (R), and the distance to the point (1, 0) on the abscissa is the centered RMS difference.
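The three quantities plotted on the diagram can be computed directly; a sketch, normalized by the observation standard deviation:

```python
import numpy as np

def taylor_stats(model, obs):
    """Normalized std ratio, correlation, and centered RMS difference,
    related by the law of cosines used in the Taylor diagram."""
    m = np.asarray(model, dtype=float)
    o = np.asarray(obs, dtype=float)
    std_ratio = m.std() / o.std()
    corr = np.corrcoef(m, o)[0, 1]
    # Guard against tiny negative arguments from round-off.
    crmsd = np.sqrt(max(1.0 + std_ratio**2 - 2.0 * std_ratio * corr, 0.0))
    return std_ratio, corr, crmsd
```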
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/hammoz-consortium/cmip6/models/sandbox-2/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-2', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: SANDBOX-2
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
HrantDavtyan/Data_Scraping | Week 5/Working_with_XML_docs.ipynb | apache-2.0 | data = '''
<xml_data>
<person>
<id>01</id>
<name>
<first>Hrant</first>
<last>Davtyan</last>
</name>
<status organization="AUA">Instructor</status>
</person>
<person>
<id>02</id>
<name>
<first>Jack</first>
<last>Nicolson</last>
</name>
<status organization="Hollywood">Actor</status>
</person>
</xml_data>
'''
"""
Explanation: Working with XML docs (with lxml)
XML documents have a long history behind them, yet they are still very popular and are one of the main response formats provided by most APIs (the other being JSON). Their structure is very similar to that of HTML documents, in the sense that both are tag-based. However, in XML the user is the one who defines the tag names (and, as in the <status> line below, one may provide an attribute such as organization with a value, like AUA). Below, a sample XML document is developed.
End of explanation
"""
from lxml import etree
tree = etree.fromstring(data)
"""
Explanation: There are many options for working with an XML document, including the built-in Python support. Yet we will go for the lxml option and import etree from it to build the XML tree from a string.
End of explanation
"""
tree.find('person').text
"""
Explanation: To find the text content of a tag, one just needs to use the find() function and then the text attribute on the result, as follows:
End of explanation
"""
tree.findall("person/name/last")[1].text
tree.find("person/status").text
"""
Explanation: Similar to BeautifulSoup, one can use a findall() function to return a list of all elements and get the text of one of them.
End of explanation
"""
tree.find("person/status").get("organization")
"""
Explanation: To get the value of an attribute defined by the user, the get() function should be used.
End of explanation
"""
for i in tree:
print("Full name: "+ i.find("name/first").text + i.find("name/last").text)
print("Position: " + i.find("status").text + " at " + i.find("status").get("organization")+"\n")
"""
Explanation: Now, we can print some text by applying a for loop over our tree (XML document).
End of explanation
"""
# etree.tostring() returns bytes in Python 3, so open the file in binary mode
with open('output.xml', 'wb') as f:
    f.write(etree.tostring(tree, pretty_print=True))
"""
Explanation: To write an existing XML document from the Python environment into an XML file, one needs to use the general approach: open a file (which probably does not exist yet) in writing mode and then write. One just needs to remember that what you write is a string (bytes), which means the XML tree should first be converted (in a pretty way) using the tostring() function. The latter takes one required argument, the tree to be converted, while pretty_print is optional.
End of explanation
"""
with open('output.xml') as f:
tree = etree.parse(f)
tree.find('person/id').text
"""
Explanation: Excellent! Now we have an XML document called output.xml saved on our computer. Let's read it.
End of explanation
"""
|
Startupsci/data-science-notebooks | .ipynb_checkpoints/titanic-kaggle-old-checkpoint.ipynb | mit | df['Gender'] = df['Sex'].map( {'female': 0, 'male': 1} ).astype(int)
df.head()
"""
Explanation: Convert Sex feature to numeric
End of explanation
"""
df['Age'].dropna().hist(bins=16, range=(0,80), alpha = .5)
P.show()
median_ages = np.zeros((2,3))
median_ages
for i in range(0, 2):
for j in range(0, 3):
median_ages[i,j] = df[(df['Gender'] == i) & \
(df['Pclass'] == j+1)]['Age'].dropna().median()
median_ages
df['AgeFill'] = df['Age']
df.head()
df[df['Age'].isnull()][['Gender','Pclass','Age','AgeFill']].head(10)
for i in range(0, 2):
for j in range(0, 3):
df.loc[ (df.Age.isnull()) & (df.Gender == i) & (df.Pclass == j+1),\
'AgeFill'] = median_ages[i,j]
df[ df['Age'].isnull() ][['Gender','Pclass','Age','AgeFill']].head(10)
df['AgeIsNull'] = pd.isnull(df.Age).astype(int)
df.describe()
"""
Explanation: Fill missing Age values
The histogram is skewed towards the 20-30s, so we cannot use the average age.
End of explanation
"""
df[df['Embarked'].isnull()][['Fare','Pclass','AgeFill','Embarked']].head(10)
df['Port'] = df['Embarked']
df.loc[df.Embarked.isnull(), 'Port'] = 'S'  # fill all missing ports with the most common value
df['Port'] = df['Port'].map( {'S': 1, 'Q': 2, 'C': 3} ).astype(int)
df.head()
"""
Explanation: Fill missing Embarked
End of explanation
"""
df['FamilySize'] = df['SibSp'] + df['Parch']
df['Age*Class'] = df.AgeFill * df.Pclass
df.describe()
df['Age*Class'].hist(bins=16, alpha=.5)
P.show()
df.dtypes[df.dtypes.map(lambda x: x=='object')]
df = df.drop(['Name', 'Sex', 'Age', 'Ticket', 'Cabin', 'Embarked'], axis=1)
df.info()
df.describe()
train_data = df.values
train_data
df_test = pd.read_csv('data/titanic-kaggle/test.csv', header=0)
df_test.describe()
df_test['Gender'] = df_test['Sex'].map( {'female': 0, 'male': 1} ).astype(int)
# Create the random forest object which will include all the parameters
# for the fit
forest = RandomForestClassifier(n_estimators=100)
# Fit the training data to the Survived labels and create the decision trees.
# NOTE: this assumes column 0 of train_data is the Survived label; drop
# PassengerId (keeping Survived first) before taking df.values above.
forest = forest.fit(train_data[0::, 1::], train_data[0::, 0])
# Take the same decision trees and run them on the test data.
# NOTE: test_data must first be built from df_test with the same engineered
# features as the training frame (Gender, AgeFill, Port, FamilySize, Age*Class, ...).
output = forest.predict(test_data)
"""
Explanation: Feature Engineering
Since we know that Parch is the number of parents or children onboard, and SibSp is the number of siblings or spouses, we could collect those together as a FamilySize.
End of explanation
"""
|
satishkt/ML-Foundations-Coursera | Week3-Classification/.ipynb_checkpoints/Analyzing product sentiment-checkpoint.ipynb | bsd-2-clause | import graphlab
"""
Explanation: Predicting sentiment from product reviews
Fire up GraphLab Create
End of explanation
"""
products = graphlab.SFrame('amazon_baby.gl/')
"""
Explanation: Read some product review data
Loading reviews for a set of baby products.
End of explanation
"""
products.head()
"""
Explanation: Let's explore this data together
Data includes the product name, the review text and the rating of the review.
End of explanation
"""
products['word_count'] = graphlab.text_analytics.count_words(products['review'])
products.head()
graphlab.canvas.set_target('ipynb')
products['name'].show()
"""
Explanation: Build the word count vector for each review
End of explanation
"""
giraffe_reviews = products[products['name'] == 'Vulli Sophie the Giraffe Teether']
len(giraffe_reviews)
giraffe_reviews['rating'].show(view='Categorical')
"""
Explanation: Examining the reviews for the most-sold product: 'Vulli Sophie the Giraffe Teether'
End of explanation
"""
products['rating'].show(view='Categorical')
"""
Explanation: Build a sentiment classifier
End of explanation
"""
#ignore all 3* reviews
products = products[products['rating'] != 3]
#positive sentiment = 4* or 5* reviews
products['sentiment'] = products['rating'] >=4
products.head()
"""
Explanation: Define what's a positive and a negative sentiment
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. Reviews with a rating of 4 or higher will be considered positive, while those with a rating of 2 or lower will be considered negative.
End of explanation
"""
train_data,test_data = products.random_split(.8, seed=0)
sentiment_model = graphlab.logistic_classifier.create(train_data,
target='sentiment',
features=['word_count'],
validation_set=test_data)
"""
Explanation: Let's train the sentiment classifier
End of explanation
"""
sentiment_model.evaluate(test_data, metric='roc_curve')
graphlab.canvas.set_target('browser')
sentiment_model.show(view='Evaluation')
"""
Explanation: Evaluate the sentiment model
End of explanation
"""
giraffe_reviews['predicted_sentiment'] = sentiment_model.predict(giraffe_reviews, output_type='probability')
giraffe_reviews.head()
"""
Explanation: Applying the learned model to understand sentiment for Giraffe
End of explanation
"""
giraffe_reviews = giraffe_reviews.sort('predicted_sentiment', ascending=False)
giraffe_reviews.head()
"""
Explanation: Sort the reviews based on the predicted sentiment and explore
End of explanation
"""
giraffe_reviews[0]['review']
giraffe_reviews[1]['review']
"""
Explanation: Most positive reviews for the giraffe
End of explanation
"""
giraffe_reviews[-1]['review']
giraffe_reviews[-2]['review']
selected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']
"""
Explanation: Show most negative reviews for giraffe
End of explanation
"""
|
grokkaine/biopycourse | day1/.ipynb_checkpoints/tutorial-checkpoint.ipynb | cc0-1.0 | # This is a line comment.
"""
A multi-line
comment.
"""
a = None #Just declared an empty object
print(a)
a = 1
print(a)
a = 'abc'
print(a)
b = 3
c = [1, 2, 3]
a = [a, 2, b, 1., 1.2e-5, True] #This is a list.
print(a)
## Python is a dynamic language
a = 1
print(type(a))
print(a)
a = "spam"
print(type(a))
print(a)
a = 1
a
b = 'abc'
print(b)
#b
"""
Explanation: Python tutorial
Basics: Math, Variables, Functions, Control flow, Modules
Data representation: String, Tuple, List, Set, Dictionary, Objects and Classes
Standard library modules: script arguments, file operations, timing, processes, forks, multiprocessing
Beginners in programming have a hard time knowing where to begin and are overwhelmed.
These first steps are not very methodical. Programming language classes take time, start with basic concepts and gradually improve and expand on them.
Universities offer more relaxed programming classes for students, but that is a different type of learning. Students have time for many things; for the rest of us working people there are two or perhaps three steps to learning a new language or library, and we learn by doing. The tutorial is a simple exposure that takes you through some key aspects; next comes the programming guide (or any decent book), where concepts are explained in greater detail; and last there is the library reference, where every detail is supposed to be documented in concise format. Good programmers learn to read the tutorial, read the key aspects of the guide that make their library stand out, and only check the reference when needed.
No matter how good I would become at teaching basic Python in one or two hours, it cannot compare with reading the official tutorial, which would take much longer than the time we have at our disposal. So please understand that there is a trade-off between time and quality. As we learn by doing, I would like to invite you to check, whenever possible, the documentation for Python and for the libraries we are using.
If you want a certain standard module to be discussed in more depth, mention it in Hackmd/Slack!
Basics
Variables and comments
Variable vs type. 'Native' datatypes. Console output.
End of explanation
"""
print(a, b, c)
t = c
c = b
b = t
print(a, b, c)
"""
Explanation: Now let us switch the values of two variables.
End of explanation
"""
a = 2
b = 1
b = a*(5 + b) + 1/0.5
print(b)
d = 1/a
print(d)
"""
Explanation: Math operations
Arithmetic
End of explanation
"""
a = True
b = 3
print(b == 5)
print(a == False)
print(b < 6 and not a)
print(b < 6 or not a)
print(b < 6 and (not a or not b == 3))
print(False and True)
True == 1
"""
Explanation: Logical operations:
End of explanation
"""
## Indentation and function declaration, parameters of a function
def operation(a, b):
c = 2*(5 + b) + 1/0.5
a = 1
return a, c
a = None
mu = 2
operation(mu, 1)
a, op = operation(a, 1)
print(a, op)
# Function scope, program workflow
def f(a):
print("inside the scope of f():")
a = 4
print("a =", a)
return a
a = 1
print("f is called")
f(a)
print("outside the scope of f, a=", a)
print("also outside the scope of f, f returns", f(a))
## Defining default parameters for a function
def f2(a, b=1):
return a + b
print(f2(5))
print(f2(5, b=2))
## Globals. Never use them!
g = 0
def f1():
    # Comment the line below to spot the difference
global g # Needed to modify global copy of g
g = 1
def f2():
print("f2:",g)
print(g)
f1()
print(g)
f2()
"""
Explanation: Functions
Functions are a great way to separate code into readable chunks. The exact size and number of functions needed to solve a problem will affect readability.
New concepts: indentation, namespaces, global and local scope, default parameters, why passing arguments "by value" or "by reference" is meaningless in Python, and what mutable and immutable types are.
End of explanation
"""
from IPython.display import Image
Image(url= "../img/mutability.png", width=400, height=400)
i = 43
print(id(i))
print(type(i))
print(i)
i = 42
print(id(i))
print(type(i))
print(i)
i = 43
print(id(i))
print(type(i))
print(i)
i = i + 1
print(id(i))
print(type(i))
print(i)
# assignments reference the same object as i
i = 43
print(id(i))
print(type(i))
print(i)
j = i
print(id(j))
print(type(j))
print(j)
# Task: will j also change?
i = 5
# Strings of characters are also immutable; x does not change its value
x = 'foo'
y = x
print(x, y) # foo
y += 'bar'
print(x, y) # foo
# lists are mutable
x = [1, 2, 3]
print(x)
print(id(x))
print(type(x))
x.pop()
#x = [1, 2, 3]
print(x)
print(id(x))
"""
Explanation: Task:
- Define three functions, f, g and h. Call g and h from inside f. Run f on some value v.
- You can also have functions that are defined inside the namespace of another function. Try it!
Data types
Everything is an object in Python, and every object has an ID (or identity), a type, and a value. This means that whenever you assign an expression to a variable, you're not actually copying the value into a memory location denoted by that variable, but you're merely giving a name to the memory location where the value actually exists.
Once created, the ID of an object never changes. It is a unique identifier for it, and it is used behind the scenes by Python to retrieve the object when we want to use it.
The type also never changes. The type tells what operations are supported by the object and the possible values that can be assigned to it.
The value can either change or not. If it can, the object is said to be mutable, while when it cannot, the object is said to be immutable.
End of explanation
"""
a = 5
b = a
a += 5
print(a, b)
## A list however is mutable datatype in Python
x = [1, 2, 3]
y = x
print(x, y) # [1, 2, 3]
y += [3, 2, 1]
print(x, y) # [1, 2, 3, 3, 2, 1]
## String mutable? No
def func(val):
val += 'bar'
return val
x = 'foo'
print(x) # foo
print(func(x))
print(x) # foo
## List mutable? Yes.
def func(val):
val += [3, 2, 1]
return val
x = [1, 2, 3]
print(x) # [1, 2, 3]
print(func(x))
print(x) # [1, 2, 3, 3, 2, 1]
"""
Explanation: Question:
- Why weren't all data types made mutable only, or immutable only?
Below, if ints had been mutable, you would expect both variables to be updated. But you normally want variables pointing to ints to be independent.
End of explanation
"""
# for loops
for b in [1, 2, 3]:
print(b)
# while, break and continue
b = 0
while b < 10:
b += 1
a = 2
if b%a == 0:
#break
continue
print(b)
# Now do the same, but using the for loop
## if else: use different logical operators and see if it makes sense
a = 1
if a == 3:
print('3')
elif a == 4:
print('4')
else:
print('something else..')
## error handling - use sparingly!
## Python culture: easier to ask forgiveness than permission (EAFP)!
def divide(x, y):
"""catches an exception"""
try:
result = x / y
except ZeroDivisionError:
print("division by zero!")
#raise ZeroDivisionError
#pass
else:
print("result is", result)
finally:
print("executing finally code block..")
divide(1,0)
"""
Explanation: Control flow
There are two major types of programming languages, procedural and functional. Python is mostly procedural, with very simple functional elements. Procedural languages typically have very strong control flow specifications: programmers spend time specifying how a program should run. In functional languages the time is spent defining the program, while how to run it is left to the computer. Scala is the most used functional language in Bioinformatics.
End of explanation
"""
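To make the procedural vs. functional contrast above concrete, a minimal sketch: the same computation written once as an explicit loop (specifying *how*) and once with the functional built-ins `map` and `filter` (declaring *what*).

```python
# Procedural style: spell out step by step *how* to build the result
squares_of_evens = []
for i in range(10):
    if i % 2 == 0:
        squares_of_evens.append(i * i)

# Functional style: declare *what* is wanted; iteration is left to Python
functional_result = list(map(lambda x: x * x,
                             filter(lambda x: x % 2 == 0, range(10))))

print(squares_of_evens)    # [0, 4, 16, 36, 64]
print(functional_result)   # [0, 4, 16, 36, 64]
```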
import math
print(dir())
print(dir(math))
print(help(math.log))
a = 3
print(type(a))
import numpy
print(numpy.__version__)
import os
print(os.getcwd())
"""
Explanation: Python modules
import xls
"How can you simply import Excel !?!"
How Python is structured:
Packages are the way code libraries are distributed. Libraries contain one or several modules. Each module can contain object classes, functions and submodules.
Object introspection.
It happens often that some Python code that you require is not well documented. To understand how to use the code, one can interrogate any object during runtime. Additionally, the code is always located somewhere on your computer.
End of explanation
"""
"""
%run full(relative)path/distance.py
or
os.chdir(path)
"""
import distance
print(distance.euclidian(1, 2, 4.5 , 6))
from distance import euclidian
print(euclidian(1, 2, 4.5 , 6))
import distance as d
print(d.euclidian(1, 2, 4.5 , 6))
import math
def euclidian(x1, x2, y1 , y2):
l = math.sqrt((x1-x2)**2+(y1-y2)**2)
return l
import sys
print(sys.path)
sys.path.append('/my/custom/path')
print(sys.path)
"""
Explanation: Task:
Compute the distance between 2D points.
d(p1, p2)=sqrt((x1-x2)**2+(y1-y2)**2), where pi(xi,yi)
Define a module containing a function that computes the euclidian distance. Use the Spyder code editor and save the module on your filesystem.
Import that module into a new code cell below.
Make the module location available to Jupyter.
End of explanation
"""
#String declarations
statement = "Gene IDs are great. My favorite gene ID is"
name = "At5G001024"
statement = statement + " " + name
print(statement)
statement2 = 'Genes names \n \'are great. My favorite gene name is ' + 'Afldtjahd'
statement3 = """
Gene IDs are great.
My favorite genes are {} and {}.""".format(name, 'ksdyfngusy')
print(statement2)
print(statement3)
print('.\n'.join(statement.split(". ")))
print (statement.split(". "))
#String methods
name = "At5G001024"
print(name.lower())
print(name.index('G00'))
print(name.rstrip('402'))
print(name.strip('Add34'))
#Splits, joins
statement = "Gene IDs are great. My favorite gene ID is At5G001024"
words = statement.split()
print("Splitting a string:", words)
print("Joining into a string:", "\t ".join(words))
import random
random.shuffle(words)
print("Fun:", " ".join(words))
#Strings are lists of characters!
print(statement)
print(statement[0:5] + " blabla " + statement[-10:-5])
"""
Explanation: Data representation
Strings
End of explanation
"""
#a tuple is an immutable list
a = (1, "spam", 5)
#a.append("eggs")
print(a[1])
b = (1, "one")
c = (a, b, 3)
print(c)
#unpacking a collection into positional arguments
def sum(a, b):
return a + b
values = (5, 2)
s = sum(*values)
print(s)
"""
Explanation: Tuples
A few pros for tuples:
- Tuples are faster than lists
- Tuples can be keys to dictionaries (they are immutable types)
End of explanation
"""
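A minimal sketch of the second point above: because tuples are immutable (and therefore hashable), they can serve as dictionary keys, e.g. for indexing by coordinates. The coordinate values here are invented for the illustration.

```python
# A tuple of (x, y) coordinates can be used as a dictionary key
distances = {(0, 0): 0.0, (3, 4): 5.0}
print(distances[(3, 4)])  # 5.0

# A list cannot: lists are mutable and therefore unhashable
try:
    distances[[3, 4]] = 5.0
except TypeError as e:
    print("lists cannot be keys:", e)
```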
a = [1,"one",(2,"two")]
print(a[0])
print(a)
a.append(3)
print(a)
b = a + a[:2]
print(b)
## slicing and indexing
print(b[2:5])
del a[-1]
print(a)
print(a.index("one"))
print(len(a))
## not just list size but list elements too are scoping free! (list is mutable)
def f(a, b):
a[1] = "changed"
b = [1,2]
return
a = [(2, 'two'), 3, 1]
b = [2, "two"]
f(a, b)
print(a, b)
## matrix
matrix = [
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]]
print(matrix)
print(matrix[0][1])
print(list(range(2,10,3)))
for x in range(len(matrix)):
for y in range(len(matrix[x])):
print(x,y, matrix[x][y])
## ranges
r = range(0, 5)
for i in r: print("step", i)
## list comprehensions
def f(i):
return 2*i
a = [2*i for i in range(10)]
a = [f(i) for i in range(10)]
print(a)
b = [str(e) for e in a[4:] if e%3==0]
print(b)
## sorting a list of tuples
a = [(str(i), str(j)) for i in a for j in range(3)]
print(a)
a.sort(key=lambda tup: tup[1])
a.sort(key=lambda tup: len(tup[1]), reverse = True)
print(a)
#zipping and enumerating
y = zip('abc', 'def')
print(list(y)) # y is a generator
print(list(y)) # second cast to list, content is empty!
print(list(zip(['one', 'two', 'three'], [1, 2, 3])))
x = [1, 2, 3]
y = [4, 5, 6]
zipped = zip(x, y)
#print(type(zipped))
print(zipped)
x2, y2 = zip(*zipped)
print (x == list(x2) and y == list(y2))
print (x2, y2)
alist = ['a1', 'a2', 'a3']
for i, e in enumerate(alist): print (i, e) #this is called a one liner
for i in range(len(alist)):
print(i, alist[i])
print(list(range(len(alist))))
# mapping
a = [1, 2, 3, 4, 5]
b = [2, 2, 9, 0, 9]
print(list(map(lambda x: max(x), zip(a, b))))
print(list(zip(a, b)))
# deep and shallow copies on mutable objects or collections of mutable objects
lst1 = ['a','b',['ab','ba']]
lst2 = lst1 #this is a shallow copy of the entire list
lst2[0]='e'
print(lst1)
lst1 = ['a','b',['ab','ba']]
lst2 = lst1[:] #this is a shallow copy of each element
lst2[0] = 'e'
lst2[2][1] = 'd'
print(lst1)
from copy import deepcopy
lst1 = ['a','b',['ab','ba']]
lst2 = deepcopy(lst1) #this is a deep copy
lst2[2][1] = "d"
lst2[0] = "c";
print(lst2)
print(lst1)
"""
Explanation: Lists
End of explanation
"""
# set vs. frozenset
s = set()
#s = frozenset()
s.add(1)
s = s | set([2,"three"])
s |= set([2,"three"])
s.add(2)
s.remove(1)
print(s)
print("three" in s)
s1 = set(range(10))
s2 = set(range(5,15))
s3 = s1 & s2
print(s1, s2, s3)
s3 = s1 - s2
print(s1, s2, s3)
print(s3 <= s1)
s3 = s1 ^ s2
print(s1, s2, s3)
"""
Explanation: Sets
Sets have no order and cannot include identical elements. Use them when the position of elements is not relevant. Finding elements is faster than in a list. Also set operations are more straightforward. A frozen set has a hash value.
Task:
Find on the Internet the official reference documentation for the Python sets
End of explanation
"""
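To illustrate the last sentence above: a `frozenset` is hashable, so unlike a plain `set` it can be used as a dictionary key. The grouping values below are made up for the example.

```python
regular = {1, 2, 3}
frozen = frozenset({1, 2, 3})

groups = {frozen: "group A"}   # works: a frozenset has a hash value
print(groups[frozenset({1, 2, 3})])  # group A

try:
    groups[regular] = "group B"  # fails: a mutable set has no hash
except TypeError as e:
    print("sets cannot be keys:", e)
```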
d = {'geneid9': 100, 'geneid8': 90, 'geneid7': 80, 'geneid6': 70, 'geneid5': 60, 'geneid4': 50}
d
d = {}
d['geneid10'] = 110
d
#Creation: dict(list)
genes = ['geneid1', 'geneid2', 'geneid3']
values = [20, 30, 40]
d = dict(zip(genes, values))
print(d)
#Creation: dictionary comprehensions
d2 = { 'geneid'+str(i):10*(i+1) for i in range(4, 10) }
print(d2)
#Keys and values
print(d2.keys())
print(d2.values())
for k in d2.keys(): print(k, d2[k])
"""
Explanation: Dictionary
considered one of the most elegant data structures in Python
A set of key: value pairs.
Keys must be hashable elements, values can be any Python datatype.
The keys of the dictionary are hashable, i.e. they are passed through a hashing function which generates a unique result for each unique value supplied to it. This makes value retrieval by key much faster for a dictionary than for a list!
TODO: !timeit, sha() function
End of explanation
"""
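Following up on the TODO above, a hedged sketch using `timeit` to compare retrieval by key in a dictionary against a linear search in a list; the exact numbers depend on your machine, but the dictionary lookup should be dramatically faster.

```python
import timeit

n = 100_000
keys = ['geneid' + str(i) for i in range(n)]
d = {k: i for i, k in enumerate(keys)}   # dict: hashed key -> value
lst = list(keys)                         # list: must be scanned linearly

# Look up an element near the end of the collection, many times
dict_time = timeit.timeit(lambda: d['geneid99999'], number=100)
list_time = timeit.timeit(lambda: lst.index('geneid99999'), number=100)
print("dict lookup:", dict_time, "s")
print("list search:", list_time, "s")
```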
d = {'geneid9': 100, 'geneid8': 90, 'geneid7': 90, 'geneid6': 70, 'geneid5': 60, 'geneid4': 50}
def getkey(value):
ks = set()
# .. your code here
return ks
print(getkey(90))
"""
Explanation: Task:
Find the dictionary key corresponding to a certain value. Why is Python not offering a native method for this?
End of explanation
"""
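One possible sketch for the task above. Python offers no native reverse lookup because a dictionary maps keys to values in one direction only, and several keys may share the same value, so any answer must scan all items anyway:

```python
d = {'geneid9': 100, 'geneid8': 90, 'geneid7': 90,
     'geneid6': 70, 'geneid5': 60, 'geneid4': 50}

def getkey(value):
    """Return the set of keys that map to a given value."""
    ks = {k for k, v in d.items() if v == value}
    return ks

print(getkey(90))  # {'geneid7', 'geneid8'} -- a set, so order is not guaranteed
```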
class Dog(object):
def __init__(self, name):
self.name = name
return
def bark_if_called(self, call):
if call[:-1]==self.name:
print("Woof Woof!")
else:
print("*sniffs..")
return
def get_ball(self):
print(self.name + " brings back ball")
d = Dog("Buffy")
print(d.name, "was created from Ether!") #name is an attribute
d.bark_if_called("Bambi!") #bark_if_called is a method
#dog.bark_if_called("Buffy!")
class PitBull(Dog):
def get_ball(self):
super(PitBull, self).get_ball()
print("*hates you")
return
def chew_boots(self):
print("*drools")
return
d2 = PitBull("Georgie")
d2.bark_if_called("Loopie!")
d2.bark_if_called("Georgie!")
d2.chew_boots()
#d.chew_boots()
d2.get_ball()
print(d2.name)
"""
Explanation: Objects and Classes
Everything is an object in Python and every variable is a reference to an object. References map the address in memory where an object lies; however, this is kept hidden in Python. C was famous for not automatically cleaning up the address space after allocating memory for its data structures. This caused memory leaks that make some programs consume more and more RAM. Modern languages clean up dynamically after the scope of a variable has ended, something called "garbage collection". However, this affects their speed of computation.
New concepts:
- Instantiation, Fields, Methods, Decomposition into classes, Inheritance
End of explanation
"""
from time import sleep
def sleep_decorator(function):
"""
Limits how fast the function is
called.
"""
def wrapper(*args, **kwargs):
sleep(2)
return function(*args, **kwargs)
return wrapper
@sleep_decorator
def print_number(num):
return num
print(print_number(222))
for num in range(1, 6):
print(print_number(num))
"""
Explanation: Decorators
End of explanation
"""
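A hedged follow-up to the decorator above: a plain wrapper hides the decorated function's name and docstring. The standard-library helper `functools.wraps` copies them onto the wrapper, which is the idiomatic way to write decorators.

```python
import functools

def logged(function):
    @functools.wraps(function)   # preserve name/docstring of the wrapped function
    def wrapper(*args, **kwargs):
        print("calling", function.__name__)
        return function(*args, **kwargs)
    return wrapper

@logged
def add(a, b):
    """Adds two numbers."""
    return a + b

print(add(2, 3))       # prints "calling add", then 5
print(add.__name__)    # 'add' (would be 'wrapper' without functools.wraps)
```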
import sys
print(sys.argv)
sys.exit()
##getopt, sys.exit()
##getopt.getopt(args, options[, long_options])
# import getopt
# try:
# opts, args = getopt.getopt(sys.argv[1:],"hi:o:",["ifile=","ofile="])
# except getopt.GetoptError:
# print 'test.py -i <inputfile> -o <outputfile>'
# sys.exit(2)
# for opt, arg in opts:
# if opt == '-h':
# print 'test.py -i <inputfile> -o <outputfile>'
# sys.exit()
# elif opt in ("-i", "--ifile"):
# inputfile = arg
# elif opt in ("-o", "--ofile"):
# outputfile = arg
# print inputfile, outputfile
"""
Explanation: Standard library modules
https://docs.python.org/3/library/
sys - system-specific parameters and functions
os - operating system interface
shutil - shell utilities
math - mathematical functions and constants
random - pseudorandom number generator
timeit - time it
format - number and text formatting
zlib - data compression
... etc ...
Recommendation: Take time to explore the Python Module of the Week. It is a very good way to learn why Python comes "with batteries included".
The sys module. Command line arguments.
End of explanation
"""
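The commented `getopt` sketch above mirrors C's interface; the modern standard-library alternative is `argparse`, which generates help text and validates arguments for you. A minimal hedged sketch (the option and file names are made up; in a real script you would call `parse_args()` with no arguments so it reads `sys.argv`):

```python
import argparse

parser = argparse.ArgumentParser(description="Example command line tool")
parser.add_argument('-i', '--ifile', required=True, help='input file')
parser.add_argument('-o', '--ofile', default='out.txt', help='output file')

# Pass an explicit list here so the example is runnable anywhere,
# e.g. inside a notebook, instead of reading sys.argv.
args = parser.parse_args(['-i', 'data.csv'])
print(args.ifile, args.ofile)
```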
import os
print(os.getcwd())
#os.chdir(newpath)
os.system('mkdir testdir')
f = open('testfile.txt','wt')
f.write('One line of text\n')
f.write('Another line of text\n')
f.close()
import shutil
#shutil.copy('testfile.txt', 'testdir/')
shutil.copyfile('testfile.txt', 'testdir/testfile1.txt')
shutil.copyfile('testfile.txt', 'testdir/testfile2.txt')
with open('testdir/testfile1.txt','rt') as f:
for l in f: print(l)
for fn in os.listdir("testdir/"):
print(fn)
#fpath = os.path.join(dirpath,filename)
os.rename('testdir/'+fn, 'testdir/file'+fn[-5]+'.txt')
import glob
print (glob.glob('testdir/*'))
os.remove('testdir/file2.txt')
#os.rmdir('testdir')
#shutil.rmtree(path)
"""
Explanation: Task:
Create a second script that contains command line arguments and imports the distance module above. If an -n 8 is provided in the arguments, it must generate 8 random points and compute a matrix of all pair distances.
os module: File operations
The working directory, file IO, copy, rename and delete
End of explanation
"""
from datetime import datetime
startTime = datetime.now()
n = 10**8
for i in range(n):
continue
print(datetime.now() - startTime)
"""
Explanation: Task:
Add a function to save the random vectors and the generated matrix into a file.
Timing
End of explanation
"""
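The `datetime` approach above works, but the standard library also offers `time.perf_counter` (a high-resolution clock intended for benchmarking) and the `timeit` module. A hedged sketch of both:

```python
import time
import timeit

start = time.perf_counter()
for i in range(10**6):
    pass
elapsed = time.perf_counter() - start
print("perf_counter:", elapsed, "seconds")

# timeit runs the statement `number` times and returns the total time
loop_time = timeit.timeit('for i in range(1000): pass', number=100)
print("timeit:", loop_time, "seconds")
```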
import os
import subprocess
# print(os.system('/path/yourshellscript.sh args'))
subprocess.run(["ls", "-l", "/dev/null"], stdout=subprocess.PIPE)
subprocess.run("exit 1", shell=True, check=True)
from subprocess import call
call(["ls", "-l"])
from subprocess import Popen, PIPE
args = ['/path/yourshellscript.sh', '-arg1', 'value1', '-arg2', 'value2']
p = Popen(args, bufsize=-1,  # args is already a list, so no shell needed; -1 = default buffering
          stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True)
p.wait()
(child_stdin, child_stdout, child_stderr) = (p.stdin, p.stdout, p.stderr)
# def child():
# print 'A new child ', os.getpid( )
# os._exit(0)
# def parent():
# while True:
# newpid = os.fork()
# if newpid == 0:
# child()
# else:
# pids = (os.getpid(), newpid)
# print "parent: %d, child: %d" % pids
# if raw_input( ) == 'q': break
# parent()
"""
Explanation: Processes
Launching a process, Parallelization: shared resources, clusters, clouds
End of explanation
"""
from subprocess import Popen, PIPE
p1 = Popen(["cat", "test.txt"], stdout=PIPE)
p2 = Popen(["grep", "something"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()
output = p2.communicate()[0]
"""
Explanation: How to do the equivalent of shell piping in Python? This is the basic step of an automated pipeline.
cat test.txt | grep something
Task:
- Test this!
- Uncomment p1.stdout.close(). Why is it not working?
- What are signals? Read about SIGPIPE.
End of explanation
"""
def run(l=[]):
l.append(len(l))
return l
print(run())
print(run())
print(run())
"""
Explanation: Questions:
- What are the Python's native datatypes? Have a look at the Python online documentation for each datatype.
- How many data types does Python have?
- Python is a "dynamic" language. What does it mean?
- Python is an "interpreted" language. What does it mean?
- Which data structures are mutable and which are immutable? When does this matter?
- What is a "hash" and how does it influence set and dictionary operations?
- What are the most important Python libraries for you? Read through Anaconda's collection of libraries and check out some of them.
Task. Explain why this happens:
End of explanation
"""
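What happens above: the default list is created once, when `run` is defined, and that same (mutable) object is reused on every call. The conventional fix, sketched here, is to default to `None` and create a fresh list inside the function:

```python
def run(l=None):
    if l is None:      # a brand-new list is created on every call
        l = []
    l.append(len(l))
    return l

print(run())  # [0]
print(run())  # [0] again -- no state leaks between calls
print(run())  # [0]
```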
|
jorisvandenbossche/DS-python-data-analysis | notebooks/pandas_03a_selecting_data.ipynb | bsd-3-clause | import pandas as pd
# redefining the example DataFrame
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
"""
Explanation: <p><font size="6"><b>03 - Pandas: Indexing and selecting data - part I</b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
End of explanation
"""
countries['area'] # single []
"""
Explanation: Subsetting data
Subset variables (columns)
For a DataFrame, basic indexing selects the columns (cfr. the dictionaries of pure python)
Selecting a single column:
End of explanation
"""
countries[['area', 'population']] # double [[]]
"""
Explanation: Remember that the same syntax can also be used to add a new columns: df['new'] = ....
We can also select multiple columns by passing a list of column names into []:
End of explanation
"""
countries[0:4]
"""
Explanation: Subset observations (rows)
Using [], slicing or boolean indexing accesses the rows:
Slicing
End of explanation
"""
countries['area'] > 100000
countries[countries['area'] > 100000]
countries[countries['population'] > 50]
"""
Explanation: Boolean indexing (filtering)
Often, you want to select rows based on a certain condition. This can be done with 'boolean indexing' (like a where clause in SQL) and comparable to numpy.
The indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed.
End of explanation
"""
s = countries['capital']
s.isin?
s.isin(['Berlin', 'London'])
"""
Explanation: An overview of the possible comparison operations:
Operator | Description
------ | --------
== | Equal
!= | Not equal
> | Greater than
>= | Greater than or equal
\< | Less than
<= | Less than or equal
and to combine multiple conditions:
Operator | Description
------ | --------
& | And (cond1 & cond2)
\| | Or (cond1 \| cond2)
<div class="alert alert-info" style="font-size:120%">
<b>REMEMBER</b>: <br><br>
So as a summary, `[]` provides the following convenience shortcuts:
* **Series**: selecting a **label**: `s[label]`
* **DataFrame**: selecting a single or multiple **columns**:`df['col']` or `df[['col1', 'col2']]`
* **DataFrame**: slicing or filtering the **rows**: `df['row_label1':'row_label2']` or `df[mask]`
</div>
Some other useful methods: isin and string methods
The isin method of Series is very useful to select rows that may contain certain values:
End of explanation
"""
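A short hedged illustration of combining conditions with `&` and `|` (note the parentheses around each condition, which are required because of operator precedence); the `countries` DataFrame is re-created so the example stands alone:

```python
import pandas as pd

countries = pd.DataFrame({
    'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
    'population': [11.3, 64.3, 81.3, 16.9, 64.9],
    'area': [30510, 671308, 357050, 41526, 244820],
    'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']})

# countries with both a large area AND a large population
both = countries[(countries['area'] > 100000) & (countries['population'] > 50)]
print(both['country'].tolist())    # ['France', 'Germany', 'United Kingdom']

# countries that are small OR very populous
either = countries[(countries['area'] < 50000) | (countries['population'] > 80)]
print(either['country'].tolist())  # ['Belgium', 'Germany', 'Netherlands']
```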
countries[countries['capital'].isin(['Berlin', 'London'])]
"""
Explanation: This can then be used to filter the dataframe with boolean indexing:
End of explanation
"""
string = 'Berlin'
string.startswith('B')
"""
Explanation: Let's say we want to select all data for which the capital starts with a 'B'. In Python, when having a string, we could use the startswith method:
End of explanation
"""
countries['capital'].str.startswith('B')
"""
Explanation: In pandas, these are available on a Series through the str namespace:
End of explanation
"""
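Putting the pieces together (a hedged sketch, with a trimmed-down `countries` DataFrame re-created so the cell stands alone): the boolean Series produced by `.str.startswith` can be passed straight back into `[]` to keep only the matching rows.

```python
import pandas as pd

countries = pd.DataFrame({
    'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
    'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']})

# Boolean mask from a string method, used directly for row selection
b_capitals = countries[countries['capital'].str.startswith('B')]
print(b_capitals)
```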
df = pd.read_csv("data/titanic.csv")
df.head()
"""
Explanation: For an overview of all string methods, see: https://pandas.pydata.org/pandas-docs/stable/reference/series.html#string-handling
Exercises using the Titanic dataset
End of explanation
"""
# %load _solutions/pandas_03a_selecting_data1.py
# %load _solutions/pandas_03a_selecting_data2.py
# %load _solutions/pandas_03a_selecting_data3.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE 1</b>:
<ul>
<li>Select all rows for male passengers and calculate the mean age of those passengers. Do the same for the female passengers.</li>
</ul>
</div>
End of explanation
"""
# %load _solutions/pandas_03a_selecting_data4.py
# %load _solutions/pandas_03a_selecting_data5.py
"""
Explanation: We will later see an easier way to calculate both averages at the same time with groupby.
<div class="alert alert-success">
<b>EXERCISE 2</b>:
<ul>
<li>How many passengers older than 70 were on the Titanic?</li>
</ul>
</div>
End of explanation
"""
# %load _solutions/pandas_03a_selecting_data6.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE 3</b>:
<ul>
<li>Select the passengers that are between 30 and 40 years old?</li>
</ul>
</div>
End of explanation
"""
name = 'Braund, Mr. Owen Harris'
# %load _solutions/pandas_03a_selecting_data7.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE 4</b>:
For a single string `name = 'Braund, Mr. Owen Harris'`, split this string (check the `split()` method of a string) and get the first element of the resulting list.
<details><summary>Hints</summary>
- No Pandas in this exercise, just standard Python.
- The `split()` method of a string returns a python list. Accessing elements of a python list can be done using the square brackets indexing (`a_list[i]`).
</details>
</div>
End of explanation
"""
# %load _solutions/pandas_03a_selecting_data8.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE 5</b>:
Convert the solution of the previous exercise to all strings of the `Name` column at once. Split the 'Name' column on the `,`, extract the first part (the surname), and add this as new column 'Surname'.
<details><summary>Hints</summary>
- Pandas uses the `str` accessor to use the string methods such as `split`, e.g. `.str.split(...)` as the equivalent of the `split()` method of a single string (note: there is a small difference in the naming of the first keyword argument: `sep` vs `pat`).
- The [`.str.get()`](https://pandas.pydata.org/docs/reference/api/pandas.Series.str.get.html#pandas.Series.str.get) can be used to get the n-th element of a list, which is what the `str.split()` returns. This is the equivalent of selecting an element of a single list (`a_list[i]`) but then for all values of the Series.
- One can chain multiple `.str` methods, e.g. `str.SOMEMETHOD(...).str.SOMEOTHERMETHOD(...)`.
</details>
</div>
End of explanation
"""
# %load _solutions/pandas_03a_selecting_data9.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE 6</b>:
<ul>
<li>Select all passenger that have a surname starting with 'Williams'.</li>
</ul>
</div>
End of explanation
"""
# %load _solutions/pandas_03a_selecting_data10.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE 7</b>:
<ul>
<li>Select all rows for the passengers with a surname of more than 15 characters.</li>
</ul>
</div>
End of explanation
"""
cast = pd.read_csv('data/cast.csv')
cast.head()
titles = pd.read_csv('data/titles.csv')
titles.head()
"""
Explanation: [OPTIONAL] more exercises
For the quick ones among you, here are some more exercises with some larger dataframe with film data. These exercises are based on the PyCon tutorial of Brandon Rhodes (so all credit to him!) and the datasets he prepared for that. You can download these data from here: titles.csv and cast.csv and put them in the /notebooks/data folder.
End of explanation
"""
# %load _solutions/pandas_03a_selecting_data11.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE 8</b>:
<ul>
<li>How many movies are listed in the titles dataframe?</li>
</ul>
</div>
End of explanation
"""
# %load _solutions/pandas_03a_selecting_data12.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE 9</b>:
<ul>
<li>What are the earliest two films listed in the titles dataframe?</li>
</ul>
</div>
End of explanation
"""
# %load _solutions/pandas_03a_selecting_data13.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE 10</b>:
<ul>
<li>How many movies have the title "Hamlet"?</li>
</ul>
</div>
End of explanation
"""
# %load _solutions/pandas_03a_selecting_data14.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE 11</b>:
<ul>
<li>List all of the "Treasure Island" movies from earliest to most recent.</li>
</ul>
</div>
End of explanation
"""
# %load _solutions/pandas_03a_selecting_data15.py
# %load _solutions/pandas_03a_selecting_data16.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE 12</b>:
<ul>
<li>How many movies were made from 1950 through 1959?</li>
</ul>
</div>
End of explanation
"""
# %load _solutions/pandas_03a_selecting_data17.py
# %load _solutions/pandas_03a_selecting_data18.py
# %load _solutions/pandas_03a_selecting_data19.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE 13</b>:
<ul>
<li>How many roles in the movie "Inception" are NOT ranked by an "n" value?</li>
</ul>
</div>
End of explanation
"""
# %load _solutions/pandas_03a_selecting_data20.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE 14</b>:
<ul>
<li>But how many roles in the movie "Inception" did receive an "n" value?</li>
</ul>
</div>
End of explanation
"""
# %load _solutions/pandas_03a_selecting_data21.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE 15</b>:
<ul>
<li>Display the cast of the "Titanic" (the most famous 1997 one) in their correct "n"-value order, ignoring roles that did not earn a numeric "n" value.</li>
</ul>
</div>
End of explanation
"""
# %load _solutions/pandas_03a_selecting_data22.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE 16</b>:
<ul>
<li>List the supporting roles (having n=2) played by Brad Pitt in the 1990s, in order by year.</li>
</ul>
</div>
End of explanation
"""
|
slundberg/shap | notebooks/benchmark/text/Abstractive Summarization Benchmark Demo.ipynb | mit | import numpy as np
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
import nlp
import shap
import shap.benchmark as benchmark
"""
Explanation: Text Data Explanation Benchmarking: Abstractive Summarization
This notebook demonstrates how to use the benchmark utility to benchmark the performance of an explainer for text data. In this demo, we showcase explanation performance for partition explainer on an Abstractive Summarization model. The metric used to evaluate is "keep positive". The masker used is Text Masker.
The new benchmark utility uses the new API with MaskedModel as wrapper around user-imported model and evaluates masked values of inputs.
End of explanation
"""
tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-xsum-12-6")
model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distilbart-xsum-12-6")
dataset = nlp.load_dataset('xsum',split='train')
s = dataset['document'][0:1]
"""
Explanation: Load Data and Model
End of explanation
"""
explainer = shap.Explainer(model,tokenizer)
"""
Explanation: Create Explainer Object
End of explanation
"""
shap_values = explainer(s)
"""
Explanation: Run SHAP Explanation
End of explanation
"""
sort_order = 'positive'
perturbation = 'keep'
"""
Explanation: Define Metrics (Sort Order & Perturbation Method)
End of explanation
"""
sp = benchmark.perturbation.SequentialPerturbation(explainer.model, explainer.masker, sort_order, perturbation)
xs, ys, auc = sp.model_score(shap_values, s)
sp.plot(xs, ys, auc)
"""
Explanation: Benchmark Explainer
End of explanation
"""
|
Kaggle/learntools | notebooks/machine_learning/raw/tut_automl.ipynb | apache-2.0 | #$HIDE_INPUT$
# Save CSV file with first 2 million rows only
import pandas as pd
train_df = pd.read_csv("../input/new-york-city-taxi-fare-prediction/train.csv", nrows = 2_000_000)
train_df.to_csv("train_small.csv", index=False)
PROJECT_ID = 'kaggle-playground-170215'
BUCKET_NAME = 'automl-tutorial-alexis'
DATASET_DISPLAY_NAME = 'taxi_fare_dataset'
TRAIN_FILEPATH = "../working/train_small.csv"
TEST_FILEPATH = "../input/new-york-city-taxi-fare-prediction/test.csv"
TARGET_COLUMN = 'fare_amount'
ID_COLUMN = 'key'
MODEL_DISPLAY_NAME = 'tutorial_model'
TRAIN_BUDGET = 4000
# Import the class defining the wrapper
from automl_tables_wrapper import AutoMLTablesWrapper
# Create an instance of the wrapper
amw = AutoMLTablesWrapper(project_id=PROJECT_ID,
bucket_name=BUCKET_NAME,
dataset_display_name=DATASET_DISPLAY_NAME,
train_filepath=TRAIN_FILEPATH,
test_filepath=TEST_FILEPATH,
target_column=TARGET_COLUMN,
id_column=ID_COLUMN,
model_display_name=MODEL_DISPLAY_NAME,
train_budget=TRAIN_BUDGET)
"""
Explanation: Introduction
When applying machine learning to real-world data, there are a lot of steps involved in the process -- starting with collecting the data and ending with generating predictions. (We work with the seven steps of machine learning, as defined by Yufeng Guo here.)
It all begins with Step 1: Gather the data. In industry, there are important considerations you need to take into account when building a dataset, such as target leakage. When participating in a Kaggle competition, this step is already completed for you.
In the Intro to Machine Learning and the Intermediate Machine Learning courses, you can learn how to:
- Step 2: Prepare the data - Deal with missing values and categorical data. (Feature engineering is covered in a separate course.)
- Step 4: Train the model - Fit decision trees and random forests to patterns in training data.
- Step 5: Evaluate the model - Use a validation set to assess how well a trained model performs on unseen data.
- Step 6: Tune parameters - Tune parameters to get better performance from XGBoost models.
- Step 7: Get predictions - Generate predictions with a trained model and submit your results to a Kaggle competition.
That leaves Step 3: Select a model. There are a lot of different types of models. Which one should you select for your problem? When you're just getting started, the best option is just to try everything and build your own intuition - there aren't any universally accepted rules. There are also many useful Kaggle notebooks (like this one) where you can see how and when other Kagglers used different models.
Mastering the machine learning process involves a lot of time and practice. While you're still learning, you can turn to automated machine learning (AutoML) tools to generate intelligent predictions.
Automated machine learning (AutoML)
In this notebook, you'll learn how to use Google Cloud AutoML Tables to automate the machine learning process. While Kaggle has already taken care of the data collection, AutoML Tables will take care of all remaining steps.
AutoML Tables is a paid service. In the exercise that follows this tutorial, we'll show you how to claim $300 of free credits that you can use to train your own models!
<div class="alert alert-block alert-info">
<b>Note</b>: This lesson is <b>optional</b>. It is not required to complete the <b><a href="https://www.kaggle.com/learn/intro-to-machine-learning">Intro to Machine Learning</a></b> course.
</div>
<br>
Code
We'll work with data from the New York City Taxi Fare Prediction competition. In this competition, we want you to predict the fare amount (inclusive of tolls) for a taxi ride in New York City, given the pickup and dropoff locations, number of passengers, and the pickup date and time.
To do this, we'll use a Python class that calls on AutoML Tables. To use this code, you need only define the following variables:
- PROJECT_ID - The name of your Google Cloud project. All of the work that you'll do in Google Cloud is organized in "projects".
- BUCKET_NAME - The name of your Google Cloud storage bucket. In order to work with AutoML, we'll need to create a storage bucket, where we'll upload the Kaggle dataset.
- DATASET_DISPLAY_NAME - The name of your dataset.
- TRAIN_FILEPATH - The filepath for the training data (train.csv file) from the competition.
- TEST_FILEPATH - The filepath for the test data (test.csv file) from the competition.
- TARGET_COLUMN - The name of the column in your training data that contains the values you'd like to predict.
- ID_COLUMN - The name of the column containing IDs.
- MODEL_DISPLAY_NAME - The name of your model.
- TRAIN_BUDGET - How long you want your model to train (use 1000 for 1 hour, 2000 for 2 hours, and so on).
All of these variables will make more sense when you run your own code in the following exercise!
End of explanation
"""
# Create and train the model
amw.train_model()
# Get predictions
amw.get_predictions()
"""
Explanation: Next, we train a model and use it to generate predictions on the test dataset.
End of explanation
"""
submission_df = pd.read_csv("../working/submission.csv")
submission_df.head()
"""
Explanation: After completing these steps, we have a file that we can submit to the competition! In the code cell below, we load this submission file and view the first several rows.
End of explanation
"""
|
QuantScientist/Deep-Learning-Boot-Camp | day03/Advanced_Keras_Tutorial/1.0 Multi-Modal Networks.ipynb | mit | !pip install keras==2.0.8
from keras.datasets import mnist
from keras.layers import *
from keras import backend as K  # needed for K.image_data_format() below
from keras.layers import Dense, Input, Flatten
from keras.models import Model
from keras.layers.merge import concatenate
from keras.utils import np_utils
img_rows, img_cols = 28, 28
if K.image_data_format() == 'channels_first':
shape_ord = (1, img_rows, img_cols)
else: # channel_last
shape_ord = (img_rows, img_cols, 1)
inputs = Input(shape=(28, 28, 1), name='left_input')
random_layer_name = Flatten()(inputs)
random_layer_name = Dense(32)(random_layer_name)
predictions = Dense(2, activation='softmax')(random_layer_name)
model = Model(inputs=[inputs], outputs=predictions)
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape((X_train.shape[0],) + shape_ord)
X_test = X_test.reshape((X_test.shape[0],) + shape_ord)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
np.random.seed(1338) # for reproducibility!!
# Test datafit
X_test = X_test.copy()
Y = y_test.copy()
# Converting the output to binary classification(Six=1,Not Six=0)
Y_test = Y == 6
Y_test = Y_test.astype(int)
# Selecting the 5918 examples where the output is 6
X_six = X_train[y_train == 6].copy()
Y_six = y_train[y_train == 6].copy()
# Selecting the examples where the output is not 6
X_not_six = X_train[y_train != 6].copy()
Y_not_six = y_train[y_train != 6].copy()
# Selecting 6000 random examples from the data that
# only contains the data where the output is not 6
random_rows = np.random.randint(0,X_not_six.shape[0],6000)
X_not_six = X_not_six[random_rows]
Y_not_six = Y_not_six[random_rows]
# Appending the data with output as 6 and data with output as <> 6
X_train = np.append(X_six,X_not_six)
# Reshaping the appended data to the appropriate form
X_train = X_train.reshape((X_six.shape[0] + X_not_six.shape[0],) + shape_ord)
# Appending the labels and converting the labels to
# binary classification(Six=1,Not Six=0)
Y_labels = np.append(Y_six,Y_not_six)
Y_train = Y_labels == 6
Y_train = Y_train.astype(int)
# Converting the classes to its binary categorical form
nb_classes = 2
Y_train = np_utils.to_categorical(Y_train, nb_classes)
Y_test = np_utils.to_categorical(Y_test, nb_classes)
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
hist = model.fit(X_train, Y_train, batch_size=32,
verbose=1,
validation_data=(X_test, Y_test))
# %load ../solutions/sol_821.py
"""
Explanation: Keras Functional API
Recall: All models (layers) are callables
```python
from keras.layers import Input, Dense
from keras.models import Model
this returns a tensor
inputs = Input(shape=(784,))
a layer instance is callable on a tensor, and returns a tensor
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)
this creates a model that includes
the Input layer and three Dense layers
model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels) # starts training
```
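To build intuition for why "everything is a callable" composes so cleanly, here is a framework-free sketch of the same pattern in plain Python. The `ToyLayer` below is a stand-in for illustration, not a Keras class:

```python
class ToyLayer:
    """A minimal stand-in for a Keras layer: configured once, then called on inputs."""
    def __init__(self, scale):
        self.scale = scale

    def __call__(self, x):
        # "Calling" the layer on a tensor-like input returns a transformed output
        return [v * self.scale for v in x]

# Layers compose by calling one on the output of another,
# exactly like x = Dense(64, activation='relu')(inputs) above.
double = ToyLayer(2)
triple = ToyLayer(3)
outputs = triple(double([1, 2, 3]))  # [6, 12, 18]
```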
Multi-Input Networks
Keras Merge Layer
Here's a good use case for the functional API: models with multiple inputs and outputs.
The functional API makes it easy to manipulate a large number of intertwined datastreams.
Let's consider the following model.
```python
from keras.layers import Dense, Input
from keras.models import Model
from keras.layers.merge import concatenate
left_input = Input(shape=(784, ), name='left_input')
left_branch = Dense(32, input_dim=784, name='left_branch')(left_input)
right_input = Input(shape=(784,), name='right_input')
right_branch = Dense(32, input_dim=784, name='right_branch')(right_input)
x = concatenate([left_branch, right_branch])
predictions = Dense(10, activation='softmax', name='main_output')(x)
model = Model(inputs=[left_input, right_input], outputs=predictions)
```
Resulting Model will look like the following network:
<img src="../imgs/multi_input_model.png" />
Such a two-branch model can then be trained via e.g.:
python
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit([input_data_1, input_data_2], targets) # we pass one data array per model input
Try yourself
Step 1: Get Data - MNIST
End of explanation
"""
## try yourself
## `evaluate` the model on test data
"""
Explanation: Step 2: Create the Multi-Input Network
End of explanation
"""
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model
# Headline input: meant to receive sequences of 100 integers, between 1 and 10000.
# Note that we can name any layer by passing it a "name" argument.
main_input = Input(shape=(100,), dtype='int32', name='main_input')
# This embedding layer will encode the input sequence
# into a sequence of dense 512-dimensional vectors.
x = Embedding(output_dim=512, input_dim=10000, input_length=100)(main_input)
# A LSTM will transform the vector sequence into a single vector,
# containing information about the entire sequence
lstm_out = LSTM(32)(x)
"""
Explanation: Keras supports different Merge strategies:
add: element-wise sum
concatenate: tensor concatenation. You can specify the concatenation axis via the argument concat_axis.
multiply: element-wise multiplication
average: tensor average
maximum: element-wise maximum of the inputs.
dot: dot product. You can specify which axes to reduce along via the argument dot_axes. You can also specify applying any normalisation. In that case, the output of the dot product is the cosine proximity between the two samples.
You can also pass a function as the mode argument, allowing for arbitrary transformations:
python
merged = Merge([left_branch, right_branch], mode=lambda x: x[0] - x[1])
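The element-wise merge strategies listed above can be illustrated framework-free; this sketch mimics their semantics on plain Python lists (the real layers operate on tensors and support batching):

```python
left = [1.0, 2.0, 3.0]
right = [4.0, 5.0, 6.0]

added = [l + r for l, r in zip(left, right)]           # add: element-wise sum
multiplied = [l * r for l, r in zip(left, right)]      # multiply: element-wise product
averaged = [(l + r) / 2 for l, r in zip(left, right)]  # average
concatenated = left + right                            # concatenate along the feature axis
subtracted = [l - r for l, r in zip(left, right)]      # the lambda-merge example above
```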
Even more interesting
Here's a good use case for the functional API: models with multiple inputs and outputs.
The functional API makes it easy to manipulate a large number of intertwined datastreams.
Let's consider the following model (from: https://keras.io/getting-started/functional-api-guide/ )
Problem and Data
We seek to predict how many retweets and likes a news headline will receive on Twitter.
The main input to the model will be the headline itself, as a sequence of words, but to spice things up, our model will also have an auxiliary input, receiving extra data such as the time of day when the headline was posted, etc.
The model will also be supervised via two loss functions.
Using the main loss function earlier in a model is a good regularization mechanism for deep models.
<img src="https://s3.amazonaws.com/keras.io/img/multi-input-multi-output-graph.png" width="40%" />
End of explanation
"""
auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out)
"""
Explanation: Here we insert the auxiliary loss, allowing the LSTM and Embedding layer to be trained smoothly even though the main loss will be much higher in the model.
End of explanation
"""
from keras.layers import concatenate
auxiliary_input = Input(shape=(5,), name='aux_input')
x = concatenate([lstm_out, auxiliary_input])
# We stack a deep densely-connected network on top
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
# And finally we add the main logistic regression layer
main_output = Dense(1, activation='sigmoid', name='main_output')(x)
"""
Explanation: At this point, we feed into the model our auxiliary input data by concatenating it with the LSTM output:
End of explanation
"""
model = Model(inputs=[main_input, auxiliary_input], outputs=[main_output, auxiliary_output])
"""
Explanation: Model Definition
End of explanation
"""
model.compile(optimizer='rmsprop',
loss={'main_output': 'binary_crossentropy', 'aux_output': 'binary_crossentropy'},
loss_weights={'main_output': 1., 'aux_output': 0.2})
"""
Explanation: We compile the model and assign a weight of 0.2 to the auxiliary loss.
To specify a different loss or loss weight for each output, you can use a list or a dictionary. Here we pass a dictionary of losses keyed by output name; since both entries use binary_crossentropy, the same loss is effectively applied to both outputs.
Note:
Since our inputs and outputs are named (we passed them a "name" argument),
We can compile&fit the model via:
End of explanation
"""
|
tolaoniyangi/dmc | notebooks/week-3/01-basic ann.ipynb | apache-2.0 | %matplotlib inline
import random
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set(style="ticks", color_codes=True)
from sklearn.preprocessing import OneHotEncoder
from sklearn.utils import shuffle
"""
Explanation: Lab 3 - Basic Artificial Neural Network
In this lab we will build a very rudimentary Artificial Neural Network (ANN) and use it to solve some basic classification problems. This example is implemented with only basic math and linear algebra functions using Python's scientific computing library numpy. This will allow us to study how each aspect of the network works, and to gain an intuitive understanding of its functions. In future labs we will use higher-level libraries such as Keras and Tensorflow which automate and optimize most of these functions, making the network much faster and easier to use.
The code and MNIST test data are taken directly from http://neuralnetworksanddeeplearning.com/ by Michael Nielsen. Please review the first chapter of the book for a thorough explanation of the code.
First we import the Python libraries we will be using, including the random library for generating random numbers, numpy for scientific computing, matplotlib and seaborn for creating data visualizations, and several helpful modules from the sci-kit learn machine learning library:
End of explanation
"""
class Network(object):
def __init__(self, sizes):
"""The list ``sizes`` contains the number of neurons in the
respective layers of the network. For example, if the list
was [2, 3, 1] then it would be a three-layer network, with the
first layer containing 2 neurons, the second layer 3 neurons,
and the third layer 1 neuron. The biases and weights for the
network are initialized randomly, using a Gaussian
distribution with mean 0, and variance 1. Note that the first
layer is assumed to be an input layer, and by convention we
won't set any biases for those neurons, since biases are only
ever used in computing the outputs for later layers."""
self.num_layers = len(sizes)
self.sizes = sizes
self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
self.weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
def feedforward (self, a):
"""Return the output of the network if "a" is input. The np.dot()
function computes the matrix multiplication between the weight and input
matrices for each set of layers. When used with numpy arrays, the '+'
operator performs matrix addition."""
for b, w in zip(self.biases, self.weights):
a = sigmoid(np.dot(w, a)+b)
return a
def SGD(self, training_data, epochs, mini_batch_size, eta, test_data=None):
"""Train the neural network using mini-batch stochastic
gradient descent. The "training_data" is a list of tuples
"(x, y)" representing the training inputs and the desired
outputs. The other non-optional parameters specify the number
of epochs, size of each mini-batch, and the learning rate.
If "test_data" is provided then the network will be evaluated
against the test data after each epoch, and partial progress
printed out. This is useful for tracking progress, but slows
things down substantially."""
# create an empty array to store the accuracy results from each epoch
results = []
n = len(training_data)
if test_data:
n_test = len(test_data)
# this is the code for one training step, done once for each epoch
for j in xrange(epochs):
# before each epoch, the data is randomly shuffled
random.shuffle(training_data)
# training data is broken up into individual mini-batches
mini_batches = [ training_data[k:k+mini_batch_size]
for k in xrange(0, n, mini_batch_size) ]
# then each mini-batch is used to update the parameters of the
# network using backpropagation and the specified learning rate
for mini_batch in mini_batches:
self.update_mini_batch(mini_batch, eta)
# if a test data set is provided, the accuracy results
# are displayed and stored in the 'results' array
if test_data:
num_correct = self.evaluate(test_data)
accuracy = "%.2f" % (100 * (float(num_correct) / n_test))
print "Epoch", j, ":", num_correct, "/", n_test, "-", accuracy, "% acc"
results.append(accuracy)
else:
print "Epoch", j, "complete"
return results
def update_mini_batch(self, mini_batch, eta):
"""Update the network's weights and biases by applying
gradient descent using backpropagation to a single mini batch.
The "mini_batch" is a list of tuples "(x, y)", and "eta"
is the learning rate."""
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
for x, y in mini_batch:
delta_nabla_b, delta_nabla_w = self.backprop(x, y)
nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
self.weights = [w-(eta/len(mini_batch))*nw
for w, nw in zip(self.weights, nabla_w)]
self.biases = [b-(eta/len(mini_batch))*nb
for b, nb in zip(self.biases, nabla_b)]
def backprop(self, x, y):
"""Return a tuple ``(nabla_b, nabla_w)`` representing the
gradient for the cost function C_x. ``nabla_b`` and
``nabla_w`` are layer-by-layer lists of numpy arrays, similar
to ``self.biases`` and ``self.weights``."""
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
# feedforward
activation = x
activations = [x] # list to store all the activations, layer by layer
zs = [] # list to store all the z vectors, layer by layer
for b, w in zip(self.biases, self.weights):
z = np.dot(w, activation)+b
zs.append(z)
activation = sigmoid(z)
activations.append(activation)
# backward pass
delta = self.cost_derivative(activations[-1], y) * \
sigmoid_prime(zs[-1])
nabla_b[-1] = delta
nabla_w[-1] = np.dot(delta, activations[-2].transpose())
"""Note that the variable l in the loop below is used a little
differently to the notation in Chapter 2 of the book. Here,
l = 1 means the last layer of neurons, l = 2 is the
second-last layer, and so on. It's a renumbering of the
scheme in the book, used here to take advantage of the fact
that Python can use negative indices in lists."""
for l in xrange(2, self.num_layers):
z = zs[-l]
sp = sigmoid_prime(z)
delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
nabla_b[-l] = delta
nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
return (nabla_b, nabla_w)
def evaluate(self, test_data):
"""Return the number of test inputs for which the neural
network outputs the correct result. Note that the neural
network's output is assumed to be the index of whichever
neuron in the final layer has the highest activation.
Numpy's argmax() function returns the position of the
largest element in an array. We first create a list of
predicted value and target value pairs, and then count
the number of times those values match to get the total
number correct."""
test_results = [(np.argmax(self.feedforward(x)), y)
for (x, y) in test_data]
return sum(int(x == y) for (x, y) in test_results)
def cost_derivative(self, output_activations, y):
"""Return the vector of partial derivatives \partial C_x /
\partial a for the output activations."""
return (output_activations-y)
"""
Explanation: Next, we will build the artificial neural network by defining a new class called Network. This class will contain all the data for our neural network, as well as all the methods we need to compute activations between each layer, and train the network through backpropagation and stochastic gradient descent (SGD).
End of explanation
"""
def sigmoid(z):
# The sigmoid activation function.
return 1.0/(1.0 + np.exp(-z))
def sigmoid_prime(z):
# Derivative of the sigmoid function.
return sigmoid(z)*(1-sigmoid(z))
"""
Explanation: Finally, we define two helper functions which compute the sigmoid activation function and its derivative, which is used in backpropagation.
End of explanation
"""
iris_data = sns.load_dataset("iris")
# randomly shuffle data
iris_data = shuffle(iris_data)
# print first 5 data points
print iris_data[:5]
# create pairplot of iris data
g = sns.pairplot(iris_data, hue="species")
"""
Explanation: Iris dataset example
Now we will test our basic artificial neural network on a very simple classification problem. First we will use the seaborn data visualization library to load the 'iris' dataset,
which consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor), with four features measuring the length and the width of each flower's sepals and petals. After we load the data, we will visualize it with a pairwise plot, using a built-in seaborn function. A pairwise plot is a kind of exploratory data analysis that helps us to find relationships between pairs of features within a multi-dimensional data set. In this case, we can use it to understand which features might be most useful for determining the species of the flower.
End of explanation
"""
# convert iris data to numpy format
iris_array = iris_data.as_matrix()
# split data into feature and target sets
X = iris_array[:, :4].astype(float)
y = iris_array[:, -1]
# normalize the data per feature by dividing by the maximum value in each column
X = X / X.max(axis=0)
# convert the textual category data to integer using numpy's unique() function
_, y = np.unique(y, return_inverse=True)
# convert the list of targets to a vertical matrix with the dimensions [number of samples x 1]
# this is necessary for later computation
y = y.reshape(-1,1)
# combine feature and target data into a new python array
data = []
for i in range(X.shape[0]):
data.append(tuple([X[i].reshape(-1,1), y[i][0]]))
# split data into training and test sets
trainingSplit = int(.7 * len(data))
training_data = data[:trainingSplit]
test_data = data[trainingSplit:]
# create an instance of the one-hot encoding function from the sci-kit learn library
enc = OneHotEncoder()
# use the function to figure out how many categories exist in the data
enc.fit(y)
# convert only the target data in the training set to one-hot encoding
training_data = [[_x, enc.transform(_y.reshape(-1,1)).toarray().reshape(-1,1)] for _x, _y in training_data]
# define the network
net = Network([4, 32, 3])
# train the network using SGD, and output the results
results = net.SGD(training_data, 30, 10, 0.2, test_data=test_data)
# visualize the results
plt.plot(results)
plt.ylabel('accuracy (%)')
plt.ylim([0,100.0])
plt.show()
"""
Explanation: Next, we will prepare the data set for training in our ANN. Here is a list of operations we need to perform on the data set so that it will work with the Network class we created above:
Convert data to numpy format
Normalize the data so that each features is scaled from 0 to 1
Split data into feature and target data sets by extracting specific rows from the numpy array. In this case the features are in the first four columns, and the target is in the last column, which in Python we can access with a negative index
Recombine the data into a single Python array, so that each entry in the array represents one sample, and each sample is composed of two numpy arrays, one for the feature data, and one for the target
Split this data set into training and testing sets
Finally, we also need to convert the targets of the training set to 'one-hot' encoding (OHE). OHE takes each piece of categorical data and converts it to a list of binary values the length of which is equal to the number of categories, and the position of the current category denoted with a '1' and '0' for all others. For example, in our dataset we have 3 possible categories: versicolor, virginica, and setosa. After applying OHE, versicolor becomes [1,0,0], virginica becomes [0,1,0], and setosa becomes [0,0,1]. OHE is often used to represent target data in neural networks because it allows easy comparison to the output coming from the network's final layer.
End of explanation
"""
import mnist_loader
training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
"""
Explanation: MNIST dataset example
Next, we will test our ANN on another, slightly more difficult classification problem. The data set we'll be using is called MNIST, which contains tens of thousands of scanned images of handwritten digits, classified according to the digit type from 0-9. The name MNIST comes from the fact that it is a Modified (M) version of a dataset originally developed by the United States' National Institute of Standards and Technology (NIST). This is a very popular dataset used to measure the effectiveness of Machine Learning models for image recognition. This time we don't have to do as much data management since the data is already provided in the right format here.
We will get into more details about working with images and proper data formats for image data in later labs, but you can already use this data to test the effectiveness of our network. With the default settings you should be able to get a classification accuracy of 95% in the test set.
note: since this is a much larger data set than the Iris data, the training will take substantially more time.
End of explanation
"""
img = training_data[0][0][:,0].reshape((28,28))
fig = plt.figure()
plt.imshow(img, interpolation='nearest', vmin = 0, vmax = 1, cmap=plt.cm.gray)
plt.axis('off')
plt.show()
net = Network([784, 30, 10])
results = net.SGD(training_data, 30, 10, 3.0, test_data=test_data)
plt.plot(results)
plt.ylabel('accuracy (%)')
plt.ylim([0,100.0])
plt.show()
"""
Explanation: We can use the matplotlib library to visualize one of the training images. In the data set, the pixel values of each 28x28 pixel image are encoded in a flat list of 784 numbers, so before we visualize it we have to use numpy's reshape function to convert it back to 2d matrix form.
End of explanation
"""
wine_data = np.loadtxt(open("./data/wine.csv","rb"),delimiter=",")
wine_data = shuffle(wine_data)
X = wine_data[:,1:]
y = wine_data[:, 0]
# normalize the data per feature by dividing by the maximum value in each column
X = X / X.max(axis=0)
# relabel the category data (1,2,3) as sequential integers (0,1,2) using numpy's unique() function
_, y = np.unique(y, return_inverse=True)
# convert the list of targets to a vertical matrix with the dimensions [number of samples x 1]
# this is necessary for later computation
y = y.reshape(-1,1)
# combine feature and target data into a new python array
data = []
for i in range(X.shape[0]):
data.append(tuple([X[i].reshape(-1,1), y[i][0]]))
# split data into training and test sets
trainingSplit = int(.8 * len(data))
training_data = data[:trainingSplit]
test_data = data[trainingSplit:]
# create an instance of the one-hot encoding function from the sci-kit learn library
enc = OneHotEncoder()
# use the function to figure out how many categories exist in the data
enc.fit(y)
# convert only the target data in the training set to one-hot encoding
training_data = [[_x, enc.transform(_y.reshape(-1,1)).toarray().reshape(-1,1)] for _x, _y in training_data]
# define the network
net = Network([13, 60, 3])
results = net.SGD(training_data, 30, 12, .5, test_data=test_data)
plt.plot(results)
plt.ylabel('accuracy (%)')
plt.ylim([0,100.0])
plt.show()
"""
Explanation: Assignment 3 - classification
Now that you have a basic understanding of how an artificial neural network works and have seen it applied to a classification task using two types of data, see if you can use the network to solve another classification problem using another data set.
In the week-3 folder there is a data set called wine.csv which is another common data set used to test classification capabilities of machine learning algorithms. You can find a description of the data set here:
https://archive.ics.uci.edu/ml/datasets/Wine
The code below uses numpy to import this .csv file as a 2d numpy array. As before, we first shuffle the data set, and then split it into feature and target sets. This time, the target is in the first column of the data, with the rest of the columns representing the 13 features.
From there you should be able to format the data set in a similar way as we did for the Iris data above. Remember to split the data into both training and test sets, and to encode the training targets as one-hot vectors. When you create the network, make sure to specify the proper dimensions for the input and output layers so that they match the number of features and target categories in the data set. You can also experiment with different sizes for the hidden layer. If you are not achieving good results, try changing some of the hyper-parameters: the size and quantity of hidden layers in the network specification, as well as the number of epochs, the mini-batch size, and the learning rate in the SGD function call. With a training/test split of 80/20 you should be able to achieve 100% accuracy within 30 epochs.
Remember to commit your changes and submit a pull request when you are done.
Hint: do not be fooled by the category labels that come with this data set! Even though the labels are already integers (1,2,3) we need to always make sure that our category labels are sequential integers and start with 0. To make sure this is the case you should always use the np.unique() function on the target data as we did with the Iris example above.
End of explanation
"""
|
CosmoJG/neural-heatmap | cable-properties/cable-length-calculator.ipynb | gpl-3.0 | # Imports
import sys # Required for system access (below)
import os # Required for os access (below)
sys.path.append(os.path.join(os.path.dirname(os.getcwd()), 'dependencies'))
from neuron_readExportedGeometry import * # Required to interpret hoc files
"""
Explanation: Cable Length Calculator
This program reads a neuron hoc file and spits out its total cable length (i.e. the combined length of all neurites, excluding axons).
First, here are the required imports:
End of explanation
"""
# Convert the given hoc file into a geo object
geo = demoReadsilent('/home/cosmo/marderlab/test/878_043_GM_scaled.hoc')
"""
Explanation: Next, load up a neuron hoc file as a geo object:
End of explanation
"""
tips, ends = geo.getTips() # Store all the tip segments in a list, "tips"
# Also store the associated ends in "ends"
find = PathDistanceFinder(geo, geo.soma) # Set up a PDF object for the
# given geo object, anchored at
# the soma
paths = [find.pathTo(seg) for seg in tips] # List of all paths
"""
Explanation: Now that we have a geo object ready to go, let's make a list of all the neurite paths from soma to tip:
End of explanation
"""
counted = [] # Initialize a list for keeping track of which segments have
# already been measured
cablelength = 0 # Initialize a running total of cable length
for path in paths: # Sort through each path
pruned = [seg for seg in path if seg not in counted] # Limit the paths
# we work with to
# those which have
# not already been
# measured
forfind = PathDistanceFinder(geo, pruned[0]) # Initialize a PDF
# anchored at the earliest
# unmeasured segment
cablelength += forfind.distanceTo(pruned[-1]) # Add the distance
# between the anchor and
# the tip segment to the
# running total
for seg in pruned: # Add all of the measured segments to "counted"
counted.append(seg)
print(cablelength)
"""
Explanation: Finally, it's time to calculate the cable length! Let's create a for loop that keeps a running list of which paths have already been measured while adding everything together:
End of explanation
"""
|
planetlabs/notebooks | jupyter-notebooks/analytics/change_detection_heatmap.ipynb | apache-2.0 | !pip install cython
!pip install https://github.com/SciTools/cartopy/archive/v0.18.0.zip
"""
Explanation: Creating a Heatmap of Vector Results
In this notebook, you'll learn how to use Planet's Analytics API to display a heatmap of vector analytic results, specifically building change detections. This can be used to identify where the most change is happening.
Setup
Install additional dependencies
Install cartopy v0.18 beta, so that we can render OSM tiles under the heatmap:
End of explanation
"""
import os
import requests
API_KEY = os.environ["PL_API_KEY"]
SUBSCRIPTION_ID = "..."
TIMES = None
planet = requests.session()
planet.auth = (API_KEY, '')
"""
Explanation: API configuration
Before getting items from the API, you must set your API_KEY and the SUBSCRIPTION_ID of the change detection subscription to use.
If you want to limit the heatmap to a specific time range, also set TIMES to a valid time range.
End of explanation
"""
import requests
import statistics
def get_next_url(result):
if '_links' in result:
return result['_links'].get('_next')
elif 'links' in result:
for link in result['links']:
if link['rel'] == 'next':
return link['href']
def get_items_from_sif():
url = 'https://api.planet.com/analytics/collections/{}/items?limit={}'.format(
SUBSCRIPTION_ID, 500)
if TIMES:
url += '&datetime={}'.format(TIMES)
print("Fetching items from " + url)
result = planet.get(url).json()
items = []
while len(result.get('features', [])) > 0:
for f in result['features']:
coords = f['geometry']['coordinates'][0]
items.append({
'lon': statistics.mean([c[0] for c in coords]),
'lat': statistics.mean([c[1] for c in coords]),
'area': f['properties']['object_area_m2']
})
url = get_next_url(result)
if not url:
return items
print("Fetching items from " + url)
result = planet.get(url).json()
items = get_items_from_sif()
print("Fetched " + str(len(items)) + " items")
# Get the bounding box coordinates of this AOI.
url = 'https://api.planet.com/analytics/subscriptions/{}'.format(SUBSCRIPTION_ID)
result = planet.get(url).json()
geometry = result['geometry']
"""
Explanation: Fetch Items
Next, we fetch the items from the API in batches of 500 items, and return only the relevant data - the centroid and the area. This might take a few minutes to run, as some change detection feeds have thousands of items.
End of explanation
"""
import pyproj
SRC_PROJ = 'EPSG:4326'
DEST_PROJ = 'EPSG:3857'
PROJ_UNITS = 'm'
transformer = pyproj.Transformer.from_crs(SRC_PROJ, DEST_PROJ, always_xy=True)
"""
Explanation: Displaying the Heatmap
Once you've fetched all the items, you are nearly ready to display them as a heatmap.
Coordinate Systems
The items fetched from the API are in WGS84 (lat/lon) coordinates. However, it can be useful to display the data in a metric projection like EPSG:3857 (Web Mercator) so that the heatmap bins can be sized in meters. (Strictly speaking, Web Mercator is conformal rather than equal-area, so bin ground areas are increasingly overstated away from the equator.)
To do this, we use pyproj to transform the item coordinates between projections.
End of explanation
"""
import matplotlib.pylab as pl
import numpy as np
from matplotlib.colors import ListedColormap
src_colormap = pl.cm.plasma
alpha_vals = src_colormap(np.arange(src_colormap.N))
alpha_vals[:int(src_colormap.N/2),-1] = np.linspace(0, 1, int(src_colormap.N/2))
alpha_vals[int(src_colormap.N/2):src_colormap.N,-1] = 1
alpha_colormap = ListedColormap(alpha_vals)
"""
Explanation: Colormap
Matplotlib provides a number of colormaps that are useful to render heatmaps. However, all of these are solid color - in order to see an underlying map, we need to add an alpha chanel.
For this example, we will use the "plasma" colormap, and add a transparent gradient to the first half of the map, so that it starts out completely transparent, and gradually becomes opaque, such that all values above the midpoint have no transparency.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import cartopy.io.img_tiles as cimgt
import cartopy.crs as ccrs
import shapely
# Heatmap Configuration
RAW_BOUNDS = shapely.geometry.shape(geometry).bounds
INTERVALS: int = 36
BOUNDS = [0.] * 4
BOUNDS[0],BOUNDS[2] = transformer.transform(RAW_BOUNDS[0],RAW_BOUNDS[1])
BOUNDS[1],BOUNDS[3] = transformer.transform(RAW_BOUNDS[2],RAW_BOUNDS[3])
# Categorization
# 1. Generate bins from bounds + intervals
aspect_ratio = (BOUNDS[1] - BOUNDS[0]) / (BOUNDS[3] - BOUNDS[2])
x_bins = np.linspace(BOUNDS[0], BOUNDS[1], INTERVALS, endpoint=False)
y_bins = np.linspace(BOUNDS[2], BOUNDS[3], int(INTERVALS/aspect_ratio), endpoint=False)
x_delta2 = (x_bins[1] - x_bins[0])/2
y_delta2 = (y_bins[1] - y_bins[0])/2
x_bins = x_bins + x_delta2
y_bins = y_bins + y_delta2
# 2. Categorize items in bins
binned = []
for f in items:
fx,fy = transformer.transform(f['lon'], f['lat'])
if (BOUNDS[0] < fx < BOUNDS[1]) and (BOUNDS[2] < fy < BOUNDS[3]):
binned.append({
'x': min(x_bins, key=(lambda x: abs(x - fx))),
'y': min(y_bins, key=(lambda y: abs(y - fy))),
'area': f['area']
})
# 3. Aggregate binned values
hist = pd.DataFrame(binned).groupby(['x', 'y']).sum().reset_index()
# 4. Pivot into an xy grid and fill in empty cells with 0.
hist = hist.pivot('y', 'x', 'area')
hist = hist.reindex(y_bins, axis=0, fill_value=0).reindex(x_bins, axis=1, fill_value=0).fillna(0)
# OSM Basemap
osm_tiles = cimgt.OSM()
carto_proj = ccrs.GOOGLE_MERCATOR
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection=carto_proj)
ax.axis(BOUNDS)
tile_image = ax.add_image(osm_tiles, 8)
# Display Heatmap
heatmap = ax.imshow(hist.values, zorder=1, aspect='equal', origin='lower', extent=BOUNDS, cmap=alpha_colormap, interpolation='bicubic')
plt.colorbar(heatmap, ax=ax).set_label("Square meters of new buildings per {:.3e} {}²".format(4 * x_delta2 * y_delta2,PROJ_UNITS))
"""
Explanation: Heatmap configuration
Note: These final four sections are presented together in one code block, to make it easier to re-run with different configurations of bounds or intervals.
Set BOUNDS to the area of interest to display (min lon,max lon,min lat,max lat). The default bounds are centered on Sydney, Australia - you should change this to match the AOI of your change detection subscription feed.
Set INTERVALS to the number of bins along the x-axis. Items are categorized into equal-size square bins based on this number of intervals and the aspect ratio of your bounds. For a square AOI, the default value of INTERVALS = 36 would give 36 * 36 = 1296 bins; an AOI with the same width that is half as tall would give 36 * 18 = 648 bins.
The area (in square meters) of each bin is displayed in the legend to the right of the plot.
Categorization
This configuration is used to categorize the items into bins for display as a heatmap.
The bounds and intervals are used to generate an array of midpoints representing the bins.
Categorize the items retrieved from the API into these bins based on which midpoint they are closest to.
Aggregate up the areas of all the items in each bin.
Convert the resulting data into an xy grid of areas and fill in missing cells with zeros.
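The nearest-midpoint binning in step 2 can be sketched in isolation. Note the bounds and interval count here are made-up illustration values, not the AOI defaults used above:

```python
import numpy as np

# Toy sketch of nearest-midpoint binning; bounds/intervals are arbitrary.
bins = np.linspace(0.0, 10.0, 5, endpoint=False)   # bin edges [0, 2, 4, 6, 8]
bins = bins + (bins[1] - bins[0]) / 2              # midpoints [1, 3, 5, 7, 9]

value = 4.2
nearest = min(bins, key=lambda b: abs(b - value))
print(nearest)   # 5.0 (4.2 is closer to 5 than to 3)
```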
OSM Basemap
So that we can see where our heatmap values actually are, we will use cartopy to display OSM tiles underneath the heatmap. Note that this requires an internet connection.
For an offline alternative, you could plot a vector basemap or imshow to display a local raster image.
Display Heatmap
The final step is to display the grid data as a heatmap, using imshow. You can use the parameters here to change how the heatmap is rendered. For example, choose a different cmap to change the colors, or remove the interpolation='bicubic' parameter to display individual pixels instead of smooth output.
As an alternative basemap, you could use Natural Earth 1:110m datasets to render a vector map underneath the heatmap data.
End of explanation
"""
GSimas/EEL7045 | Aula 9.3 - Circuitos RC.ipynb | mit
print("Exemplo 7.1")
import numpy as np
from sympy import *
C = 0.1
v0 = 15
t = symbols('t')
Req1 = 8 + 12
Req2 = Req1*5/(Req1 + 5)
tau = C*Req2
vc = v0*exp(-t/tau)
vx = vc*12/(12 + 8)
ix = vx/12
print("Tensão Vc:",vc,"V")
print("Tensão Vx:",vx,"V")
print("Corrente ix:",ix,"A")
"""
Explanation: First-Order Linear Circuits
Jupyter Notebook developed by Gustavo S.S.
"We live in deeds, not years; in thoughts, not breaths; in feelings, not in figures on a dial.
We should count time by heart-throbs. He most lives who thinks most,
feels the noblest, acts the best." - P.J. Bailey
A first-order circuit is characterized by a first-order differential
equation.
Source-Free RC Circuit
A source-free RC circuit occurs when its DC source is abruptly disconnected.
The energy already stored in the capacitor is released to the resistors.
\begin{align}
{\Large v(t) = V_0 e^{\frac{-t}{RC}}}
\end{align}
The natural response of a circuit refers to the behavior (in terms of
voltages and currents) of the circuit itself, with no external source of
excitation.
The natural response depends on the
nature of the circuit alone, with
no external sources. In fact, the
circuit has a response only
because of the energy initially
stored in the capacitor.
The natural response is illustrated graphically in Figure 7.2. Note that at
t = 0 we have the correct initial condition, as in the previous equation. As t
increases, the voltage decays toward zero. How quickly the voltage decays is expressed in terms of the time constant, denoted by \tau, the lowercase Greek letter tau.
The time constant of a circuit is the time required for the response to
decay to a factor of 1/e, or 36.8%, of its initial value
\begin{align}
{\Large \tau = RC}
\end{align}
Thus, the voltage as a function of time becomes:
\begin{align}
{\Large v(t) = V_0 e^{\frac{-t}{\tau}}}
\end{align}
With the voltage v(t) from the equation above, we can find the current i_R(t):
\begin{align}
{\Large i_R(t) = \frac{V_0}{R} e^{\frac{-t}{\tau}}}
\end{align}
The energy absorbed by the resistor up to time t is:
\begin{align}
{\Large w_R(t) = \int_{0}^{t} p(x)dx = \frac{1}{2} C V_0^2 (1 - e^{-2 \frac{t}{\tau}})}
\end{align}
Note that as t -> ∞, w_R(∞) -> C V_0^2 / 2, which is the same as w_C(0), the energy initially stored in the capacitor, all of which is eventually dissipated in the resistor.
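As a numerical sanity check of the energy expression above, here is a short sketch. The component values are arbitrary illustration values, not taken from any of the figures:

```python
import math

# Arbitrary illustrative component values (not from the textbook circuits)
R, C, V0 = 8.0, 0.1, 15.0
tau = R * C

def w_R(t):
    # Closed-form energy absorbed by the resistor up to time t
    return 0.5 * C * V0**2 * (1.0 - math.exp(-2.0 * t / tau))

def w_R_numeric(t, n=100_000):
    # Midpoint-rule integration of p(x) = v(x)^2 / R as a cross-check
    dt = t / n
    return sum((V0 * math.exp(-(k + 0.5) * dt / tau))**2 / R * dt
               for k in range(n))

# For t much larger than tau, both tend to C*V0**2/2,
# the energy initially stored in the capacitor.
print(w_R(5.0), w_R_numeric(5.0))
```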
The key to working with a source-free RC
circuit is to find:
The initial voltage v(0) = V_0 across the capacitor
The time constant \tau
Example 7.1
In Figure 7.5, let vC(0) = 15 V. Find vC, vx, and ix for t > 0.
End of explanation
"""
print("Problema Prático 7.1")
v0 = 60
C = 1/3
Req1 = 12*6/(12 + 6)
Req2 = Req1 + 8
tau = C*Req2
vc = v0*exp(-t/tau)
vx = vc*Req1/(Req1 + 8)
vr = vc - vx
i0 = - vr/8
print("Tensão Vc:",vc,"V")
print("Tensão Vx:",vx,"V")
print("Corrente i0:",i0,"A")
"""
Explanation: Practice Problem 7.1
Refer to the circuit in Figure 7.7. Let vC(0) = 60 V. Determine vC, vx, and i0 for t >= 0.
End of explanation
"""
print("Exemplo 7.2")
Vf = 20
m = 10**-3
C = 20*m
v0 = Vf*9/(9 + 3)
Req = 9 + 1
tau = Req*C
vc = v0*exp(-t/tau)
wc = (C*v0**2)/2
print("Tensão v(t):",vc,"V")
print("Energia inicial:",wc,"J")
"""
Explanation: Example 7.2
The switch in the circuit of Figure 7.8 has been closed for a long time and is opened at t = 0.
Find v(t) for t >= 0. Calculate the initial energy stored in the capacitor.
End of explanation
"""
print("Problema Prático 7.2")
Vf = 24
C = 1/6
Req1 = 12*4/(12 + 4)
v0 = Vf*Req1/(Req1 + 6)
tau = Req1*C
v = v0*exp(-t/tau)
wc = (C*v0**2)/2
print("Tensão v(t):",v,"V")
print("Energia inicial:",wc,"J")
"""
Explanation: Practice Problem 7.2
If the switch in Figure 7.10 opens at t = 0, find v(t) for t >= 0 and wC(0).
End of explanation
"""
qinwf-nuan/keras-js | notebooks/layers/pooling/GlobalMaxPooling3D.ipynb | mit
data_in_shape = (6, 6, 3, 4)
L = GlobalMaxPooling3D(data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(270)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.GlobalMaxPooling3D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: GlobalMaxPooling3D
[pooling.GlobalMaxPooling3D.0] input 6x6x3x4, data_format='channels_last'
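In plain numpy terms, global max pooling over a 3D volume just takes one maximum per channel. A small sketch with made-up data in channels_last layout:

```python
import numpy as np

# Toy 2x2x2 volume with 3 channels; channels_last means the channel axis
# is last, so we reduce over the three spatial axes.
x = np.arange(2 * 2 * 2 * 3, dtype=float).reshape(2, 2, 2, 3)
pooled = x.max(axis=(0, 1, 2))   # one max per channel
print(pooled)   # [21. 22. 23.]
```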
End of explanation
"""
data_in_shape = (3, 6, 6, 3)
L = GlobalMaxPooling3D(data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(271)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.GlobalMaxPooling3D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.GlobalMaxPooling3D.1] input 3x6x6x3, data_format='channels_first'
End of explanation
"""
data_in_shape = (5, 3, 2, 1)
L = GlobalMaxPooling3D(data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(272)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.GlobalMaxPooling3D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.GlobalMaxPooling3D.2] input 5x3x2x1, data_format='channels_last'
End of explanation
"""
print(json.dumps(DATA))
"""
Explanation: export for Keras.js tests
End of explanation
"""
njtwomey/ADS | 03_data_transformation_and_integration/01_wrangling_casas.ipynb | mit
# from __future__ import print_function  # uncomment if using python 2
from os.path import join
import pandas as pd
import numpy as np
from datetime import datetime
%matplotlib inline
"""
Explanation: Applied Data Science
Data Wrangling
Niall Twomey
To contact, please email <firstname>.<lastname>@bristol.ac.uk
This notebook considers the CASAS dataset. This is a dataset collected in a smart environment. As participants interact with the house, sensors record their interactions. There are a number of different sensor types including motion, door contact, light, temperature, water flow, etc. (see sensorlayout2.png)
This notebook goes through a number of common issues in data science when working with real data, notably issues relating to dates and sensor values. These are dealt with consistently using the functionality provided by the pandas library.
The objective is to fix all errors (if we can), and then to convert the timeseries data to a form that would be recognisable by a machine learning algorithm. I have attempted to comment my code where possible to explain my thought processes. At several points in this script I could have taken shortcuts, but I also attempted to forgo brevity for clarity.
For more detail on this dataset, see
Cook, Diane J., and Maureen Schmitter-Edgecombe. "Assessing the quality of activities in a smart environment." Methods of information in medicine 48.5 (2009): 480.
Twomey, Niall, Tom Diethe, and Peter Flach. "On the need for structure modelling in sequence prediction." Machine Learning 104.2-3 (2016): 291-314.
Twomey, Niall, et al. "Unsupervised learning of sensor topologies for improving activity recognition in smart environments." Neurocomputing (2016).
Diethe, Tom, Niall Twomey, and Peter Flach. "Bayesian modelling of the temporal aspects of smart home activity with circular statistics." Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer International Publishing, 2015.
Twomey, Niall, and Peter Flach. "Context modulation of sensor data applied to activity recognition in smart homes." Workshop on Learning over Multiple Contexts, European Conference on Machine Learning (ECML’14). 2014.
End of explanation
"""
url = 'http://casas.wsu.edu/datasets/twor.2009.zip'
zipfile = url.split('/')[-1]
dirname = '.'.join(zipfile.split('.')[:2])
filename = join(dirname, 'data')
print(' url: {}'.format(url))
print(' zipfile: {}'.format(zipfile))
print(' dirname: {}'.format(dirname))
print('filename: {}'.format(filename))
"""
Explanation: Set up various parameters and variables that will be used in this script
End of explanation
"""
#from subprocess import call
#call(('wget', url));
#call(('unzip', zipfile));
from IPython.display import Image
Image("twor.2009/sensorlayout2.png")
column_headings = ('date', 'time', 'sensor', 'value', 'annotation', 'state')
df = pd.read_csv(
filename,
delim_whitespace=True, # Note, the file is delimited by both space and tab characters
names=column_headings
)
df.head()
df.columns
#df.sensor
df.dtypes
df.time[0]
df['datetime'] = pd.to_datetime(df[['date', 'time']].apply(lambda row: ' '.join(row), axis=1))
"""
Explanation: Download the dataset, and unzip it using the following commands in shell
shell
wget http://casas.wsu.edu/datasets/twor.2009.zip
unzip twor.2009.zip
or directly in python
python
from subprocess import call
call(('wget', url));
call(('unzip', zipfile));
End of explanation
"""
#df.ix[df.date.str.startswith('22009'), 'date'] = '2009-02-03'
df.loc[df.date.str.startswith('22009'), 'date'] = '2009-02-03'
df['datetime'] = pd.to_datetime(df[['date', 'time']].apply(lambda row: ' '.join(row), axis=1))
df.dtypes
"""
Explanation: Create a datetime column (currently date and time are separate, and are also strings). The following
should achieve this:
python
df['datetime'] = pd.to_datetime(df[['date', 'time']].apply(lambda row: ' '.join(row), axis=1))
The code above will fail, however, but this is expected behaviour. If we investigate the traceback for the error, we will discover that the final row in the file is from the year 22009. This is a typo, clearly, and it should be from the year 2009, so we will first replace this value, and then parse the dates.
End of explanation
"""
df = df[['datetime', 'sensor', 'value', 'annotation', 'state']]
df.set_index('datetime', inplace=True)
df.head()
"""
Explanation: By default, the new column is added to the end of the columns. However, since the date and time are now captured by the datetime column, we no longer need the date and time columns. Additionally, we will see how it is useful to have the datetime column as an index variable: this allows us to do time-driven querying which for this dataset will be very useful.
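For instance, a DatetimeIndex supports slicing with timestamp strings. A toy illustration, with timestamps invented for the example rather than drawn from the CASAS file:

```python
import pandas as pd

# Hypothetical timestamps, not from the CASAS data
idx = pd.to_datetime(['2009-02-02 10:15', '2009-02-02 10:45', '2009-02-02 11:30'])
s = pd.Series([1, 2, 3], index=idx)

# Select everything between 10:00 and 11:00
window = s['2009-02-02 10:00':'2009-02-02 11:00']
print(window.tolist())   # [1, 2]
```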
End of explanation
"""
df.sensor.unique()
df.annotation.unique()
df.state.unique()
df.value.unique()
"""
Explanation: We can now inspect the unique sensor, activity and value values:
End of explanation
"""
categorical_inds = df.sensor.str.match(r"^[^A]")
df_categorical = df.loc[categorical_inds][['sensor', 'value']]
df_categorical.head()
df_categorical.value.value_counts()
for val in ('O', 'OF', 'OFFF', 'ONF'):
df_categorical.loc[df_categorical.value == val, 'value'] = 'OFF';
df_categorical.value.value_counts()
"""
Explanation: We can see here that the unique values contains both numbers (eg 2.82231) and strings (ON, OFF). This is because the data recorded by all sensors is contained in one column. The next few steps will be to extract the non-numeric (ie categorical) data from the column.
Extracting the categorical data
We can extract the categorical dataframe (noting that no categorical sensor starts with the letter A):
The regular expression ^[^A] returns true for strings that do not begin with A -- the first ^ is the symbol for the start of the sequence, and the second ^ when contained within square brackets is a 'not match' operation.
Since we only need the sensor name and the value, the data frames that we will deal with in this and the next section will slice only these columns (and do not consider the annotation columns).
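A quick standalone check of that pattern; the sensor names here are just examples:

```python
import re

# ^[^A] matches strings whose first character is not 'A'
pattern = re.compile(r"^[^A]")
samples = ['M35', 'D07', 'AD1-A', 'L005']
matches = [s for s in samples if pattern.match(s)]
print(matches)   # ['M35', 'D07', 'L005']
```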
End of explanation
"""
df_categorical.loc[:, 'sensor_value'] = df_categorical[['sensor', 'value']].apply(
lambda row: '{}_{}'.format(*row).lower(),
axis=1
)
df_categorical.head()
df_categorical_exploded = pd.get_dummies(df_categorical.sensor_value)
df_categorical_exploded.head()
"""
Explanation: Our ambition is to create a matrix where each column corresponds to one combination of sensor and value available in the data. For example, one column would correspond to the state of M35 being ON, and another column would correspond to M35 being OFF. The reason for having two columns to represent the ON and OFF states is that the two events may carry different information. For example, a sensor turning on may correspond to somebody entering a room, while the same sensor turning off may correspond to somebody leaving it.
We will achieve the matrix representation by creating a new column that has the sensor and value columns concatenated, and then we will use the get_dummies function provided by pandas to create the representation that we desire.
End of explanation
"""
df_categorical_exploded.values
df_categorical_exploded['m35_off'].plot(figsize=(10,5));
kitchen_columns = ['m{}_on'.format(ii) for ii in (15,16,17,18,19,51)]
start = datetime(2009, 2, 2, 10)
end = datetime(2009, 2, 2, 11)
df_categorical_exploded[(df_categorical_exploded.index > start) & (df_categorical_exploded.index < end)][kitchen_columns].plot(figsize=(10,15), subplots=True);
start = datetime(2009, 2, 2, 15)
end = datetime(2009, 2, 2, 17)
df_categorical_exploded[(df_categorical_exploded.index > start) & (df_categorical_exploded.index < end)][kitchen_columns].plot(figsize=(10,15), subplots=True);
"""
Explanation: And if desired, we can get a matrix form of the data with the values property
End of explanation
"""
numeric_inds = df.sensor.str.startswith("A")
df_numeric = df.loc[numeric_inds][['sensor', 'value']]
df_numeric.head()
np.asarray(df_numeric.value)
df_numeric.value.astype(float)
"""
Explanation: Numeric columns
We have extracted a matrix representation of the categorical data. Now, we will do the same for the numeric data.
End of explanation
"""
f_inds = df_numeric.value.str.endswith('F')
df_numeric.loc[f_inds, 'value'] = df_numeric.loc[f_inds, 'value'].str[:-1]
df_numeric.loc[f_inds]
"""
Explanation: Note, however, that since the value data was read from a text file, it is still in string format. We can convert these str data types to floating point data types easily as follows:
python
df_numeric.value.map(float)
However, if we do this, we will discover that there is one record holding a suspect value (0.509695F). This is clearly another mistake in the dataset. We can remove the F from the string easily. It would not be difficult to make the next bit of code more robust to other data types (eg by applying regular expressions to the strings), but here we will simply slice up to the last character to remove the F:
End of explanation
"""
df_numeric.value = df_numeric.value.map(float)
"""
Explanation: We can now map all data to floating point numbers
End of explanation
"""
unique_keys = df_numeric.sensor.unique()
unique_keys
"""
Explanation: There are only three numeric sensor types, as we can see with the unique member function:
End of explanation
"""
df_numeric = pd.merge(df_numeric[['value']], pd.get_dummies(df_numeric.sensor), left_index=True, right_index=True)
df_numeric.head()
for key in unique_keys:
df_numeric[key] *= df_numeric.value
df_numeric = df_numeric[unique_keys]
# Print a larger sample of the data frame
df_numeric
#df_numeric.value.groupby(df_numeric.sensor).plot(kind='kde', legend=True, figsize=(10,5))
df_numeric[unique_keys].plot(kind='kde', legend=True, figsize=(10,5), subplots=True);
"""
Explanation: Create some new columns for the three sensors (AD1-A, AD1-B, and AD1-C), and merge with the original data frame
End of explanation
"""
df_categorical_exploded.head()
df_numeric.head()
"""
Explanation: Merging categorical and numeric data together
Since we have two dataframes (one for categorical data and one for numeric data). For any analysis, it would be useful to have these two in a unified data frame.
First we will remind ourselves what the dataframes look like, and then these will be unified into one format.
End of explanation
"""
df_joined = pd.merge(
df_categorical_exploded,
df_numeric,
left_index=True,
right_index=True,
how='outer'
)
df_joined.head()
"""
Explanation: We will use the pandas.merge function to join the two dataframes. In this case, we must use more of its functionality. We will merge on the index of the categorical and numeric dataframes. However, since none of these timestamps are shared (refer to the original data frame) we will do the merge with an "outer" join.
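The effect of the outer join can be seen on two toy frames with invented values:

```python
import pandas as pd

# Two tiny frames with disjoint indices, standing in for the categorical
# and numeric dataframes
a = pd.DataFrame({'m35_on': [1, 0]}, index=[1, 2])
b = pd.DataFrame({'AD1-A': [0.5]}, index=[3])

merged = pd.merge(a, b, left_index=True, right_index=True, how='outer')
print(merged)
# An inner join would keep no rows here; the outer join keeps all three,
# padding the missing cells with NaN.
```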
End of explanation
"""
annotation_inds = pd.notnull(df.annotation)
df_annotation = df.loc[annotation_inds][['annotation', 'state']]
# There are some duplicated indices. Remove with
df_annotation = df_annotation.groupby(level=0).first()
df_annotation.head()
"""
Explanation: Note, that in merging the dataframes, we now have a time-ordered dataframe. This is one of the advantages of using datetimes as the index type in dataframes since pandas will understand precisely how to merge the two datasets.
Annotations
So far we have extracted the categorical and the numeric data from the dataframe. The annotations/labels have not yet been considered. This section will focus on these data.
Since this part of the notebook will focus only on the annotations, the annotation dataframe will slice these columns exclusively.
Finally, because the annotations have begin and end times, we will need to fill in the time between begin and end with 1's. The ideal output therefore is a matrix in which each column corresponds to a particular activity, and the duration of the activity is captured by a run of 1's. This differs from the categorical dataframe, where we only recorded the instants at which a sensor turned on or off, and did not track its continuous state.
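The begin/end filling idea can be sketched in isolation on toy events; the times and events below are invented, not real CASAS annotations:

```python
# Toy begin/end events at integer time steps
events = [(2, 'begin'), (5, 'end'), (8, 'begin'), (9, 'end')]

timeline = [0] * 12
it = iter(events)
for (b, sb), (e, se) in zip(it, it):     # consume events in begin/end pairs
    assert sb == 'begin' and se == 'end'
    for t in range(b, e + 1):
        timeline[t] = 1

print(timeline)   # [0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0]
```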
End of explanation
"""
for annotation, group in df_annotation.groupby('annotation'):
counts = group.state.value_counts()
if counts.begin == counts.end:
print(' {}: equal counts ({}, {})'.format(
annotation,
counts.begin,
counts.end
))
else:
print(' *** WARNING {}: inconsistent annotation counts with {} begin and {} end'.format(
annotation,
counts.begin,
counts.end
))
"""
Explanation: It's important to ensure that the expected format of the data is consistent in this dataset. This means that there should be an equal number of begin and end markers for each activity.
One way to do this is to group by the annotation label and to print some statistics about the data.
End of explanation
"""
df_annotation.loc[df_annotation.annotation == 'R1_Work']
"""
Explanation: We can see here that two activities have inconsistent numbers of begin and end statements for the activities. Interestingly, they both have more end conditions than begin conditions. In some sense, this is a less critical bug than having more begin statements.
In order to deal with this, we will first look at the affected rows:
End of explanation
"""
def filter_annotations(anns):
left = iter(anns.index[:-1])
right = iter(anns.index[1:])
inds = []
for ii, (ll, rr) in enumerate(zip(left, right)):
try:
l = anns.loc[ll]
r = anns.loc[rr]
if l.state == 'begin' and r.state == 'end':
inds.extend([ll, rr])
except ValueError:
print(ii)
print(l)
print()
print(r)
print()
print()
raise  # re-raise so that malformed annotation pairs are not silently ignored
return anns.loc[inds, :]
dfs = []
for annotation, group in df_annotation.groupby('annotation'):
print('{:>30} - {}'.format(annotation, group.size))
dfs.append(filter_annotations(group))
"""
Explanation: Querying consecutive annotations, we can print the pair of annotations that have
End of explanation
"""
df_annotation_exploded = pd.get_dummies(df_annotation.annotation)
df_annotation_exploded.head(50)
paired = pd.concat(dfs)
left = paired.index[:-1:2]
right = paired.index[1::2]
print(df_annotation_exploded.mean())
for ll, rr in zip(left, right):
l = paired.loc[ll]
r = paired.loc[rr]
assert l.annotation == r.annotation
annotation = l.annotation
begin = l.name
end = r.name
# Another advantage of using datetime index: can slice with time ranges
df_annotation_exploded.loc[begin:end, annotation] = 1
df_annotation_exploded.head(50)
"""
Explanation: Create the output dataframe
End of explanation
"""
dataset = pd.merge(
df_joined,
df_annotation_exploded,
left_index=True,
right_index=True,
how='outer'
)
data_cols = df_joined.columns
annotation_cols = df_annotation_exploded.columns
dataset[data_cols] = dataset[data_cols].fillna(0)
dataset[annotation_cols] = dataset[annotation_cols].ffill()
dataset.head()
dataset[data_cols].head()
dataset[annotation_cols].head()
dataset.loc[dataset.Meal_Preparation == 1][kitchen_columns + ['AD1-A']].head()
"""
Explanation: Merging the full dataset
End of explanation
"""
snth/ctdeep | MNIST Tutorial.ipynb | mit
from __future__ import absolute_import
from __future__ import print_function
from ipywidgets import interact, interactive, widgets
import numpy as np
np.random.seed(1337) # for reproducibility
"""
Explanation: Deep Neural Networks
Theano
Python library that provides efficient (low-level) tools for working with Neural Networks
In particular:
Automatic Differentiation (AD)
Compiled computation graphs
GPU accelerated computation
Keras
High level library for specifying and training neural networks
Can use Theano or TensorFlow as backend
The MNIST Dataset
70,000 handwritten digits
60,000 for training
10,000 for testing
As 28x28 pixel images
TODO
implement layer-by-layer training in the Stacked Autoencoder
implement supervised fine tuning on pre-trained "Autoencoder"
implement filter visualisation by gradient ascent on neuron activations
Data Preprocessing
End of explanation
"""
from keras.datasets import mnist
(images_train, labels_train), (images_test, labels_test) = mnist.load_data()
print('images',images_train.shape)
print('labels', labels_train.shape)
"""
Explanation: Inspecting the data
Let's load some data
End of explanation
"""
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
def plot_mnist_digit(image, figsize=None):
""" Plot a single MNIST image."""
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
if figsize:
ax.set_figsize(*figsize)
ax.matshow(image, cmap = matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
plt.show()
def plot_1_by_2_images(image, reconstruction, figsize=None):
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(1, 2, 1)
ax.matshow(image, cmap = matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
ax = fig.add_subplot(1, 2, 2)
ax.matshow(reconstruction, cmap = matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
plt.show()
def plot_10_by_10_images(images, figsize=None):
""" Plot 100 MNIST images in a 10 by 10 table. Note that we crop
the images so that they appear reasonably close together. The
image is post-processed to give the appearance of being continued."""
fig = plt.figure(figsize=figsize)
#images = [image[3:25, 3:25] for image in images]
#image = np.concatenate(images, axis=1)
for x in range(10):
for y in range(10):
ax = fig.add_subplot(10, 10, 10*y+x+1)
ax.matshow(images[10*y+x], cmap = matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
plt.show()
def draw_image(i):
plot_mnist_digit(images_train[i])
print(i, ':', labels_train[i])
interact(draw_image, i=(0, len(images_train)-1))
plot_10_by_10_images(images_train, figsize=(10,10))
"""
Explanation: and then visualise it
End of explanation
"""
def to_features(X):
return X.reshape(-1, 784).astype("float32") / 255.0
def to_images(X):
return (X*255.0).astype('uint8').reshape(-1, 28, 28)
#print((images_train[0]-(to_images(to_features(images_train[0])))).max())
print('data shape:', images_train.shape)
print('features shape', to_features(images_train).shape)
# the data, shuffled and split between train and test sets
X_train = to_features(images_train)
X_test = to_features(images_test)
print(X_train.shape, 'training samples')
print(X_test.shape, 'test samples')
"""
Explanation: Transform to "features"
End of explanation
"""
# The labels need to be transformed into class indicators
from keras.utils import np_utils
y_train = np_utils.to_categorical(labels_train, nb_classes=10)
y_test = np_utils.to_categorical(labels_test, nb_classes=10)
print(y_train.shape, 'train labels')
print(y_test.shape, 'test labels')
"""
Explanation: The labels we transform to a "one-hot" encoding
End of explanation
"""
print('labels', labels_train[:3])
print('y', y_train[:3])
"""
Explanation: For example, let's inspect the first 3 labels:
End of explanation
"""
# Neural Network Architecture Parameters
nb_input = 784
nb_hidden = 512
nb_output = 10
# Training Parameters
nb_epoch = 1
batch_size = 128
"""
Explanation: Simple Multi-Layer Perceptron (MLP)
The simplest kind of Artificial Neural Network is a Multi-Layer Perceptron (MLP) with a single hidden layer.
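Before turning to Keras, the forward pass of such a network can be sketched directly in numpy. The weights here are random and untrained, purely for illustration; the layer sizes match the parameters defined below:

```python
import numpy as np

rng = np.random.RandomState(0)
W1, b1 = 0.01 * rng.randn(784, 512), np.zeros(512)   # input -> hidden
W2, b2 = 0.01 * rng.randn(512, 10), np.zeros(10)     # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

x = rng.rand(1, 784)                 # one fake "image" as a feature vector
probs = softmax(sigmoid(x @ W1 + b1) @ W2 + b2)
print(probs.shape, probs.sum())      # (1, 10); the probabilities sum to 1
```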
End of explanation
"""
from keras.models import Sequential
from keras.layers.core import Dense, Activation
mlp = Sequential()
mlp.add(Dense(output_dim=nb_hidden, input_dim=nb_input, init='uniform'))
mlp.add(Activation('sigmoid'))
mlp.add(Dense(output_dim=nb_output, input_dim=nb_hidden, init='uniform'))
mlp.add(Activation('softmax'))
"""
Explanation: First we define the "architecture" of the network
End of explanation
"""
mlp.compile(loss='categorical_crossentropy', optimizer='SGD')
"""
Explanation: then we compile it. This takes the symbolic computational graph of the model and compiles it into an efficient implementation which can then be used to train and evaluate the model.
Note that we have to specify which loss/objective function we want to use, as well as which optimisation algorithm to use. SGD stands for Stochastic Gradient Descent.
End of explanation
"""
mlp.fit(X_train, y_train,
batch_size=batch_size, nb_epoch=nb_epoch,
verbose=1, show_accuracy=True)
"""
Explanation: Next we train the model on our training data. Watch the loss, which is the objective function which we are minimising, and the estimated accuracy of the model.
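The loss being minimised here is categorical cross-entropy; for a single one-hot target it reduces to the negative log-probability of the true class. A tiny sketch with made-up numbers:

```python
import math

y = [0, 0, 1]          # one-hot target: the true class is index 2
p = [0.1, 0.2, 0.7]    # hypothetical predicted class probabilities

loss = -sum(yk * math.log(pk) for yk, pk in zip(y, p))
print(loss)   # equals -log(0.7), about 0.357
```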
End of explanation
"""
mlp.evaluate(X_test, y_test, show_accuracy=True)
def draw_mlp_prediction(j):
plot_mnist_digit(to_images(X_test)[j])
prediction = mlp.predict_classes(X_test[j:j+1], verbose=False)[0]
print(j, ':', '\tpredict:', prediction, '\tactual:', labels_test[j])
interact(draw_mlp_prediction, j=(0, len(X_test)-1))
plot_10_by_10_images(images_test, figsize=(10,10))
"""
Explanation: Once the model is trained, we can evaluate its performance on the test data.
End of explanation
"""
from keras.models import Sequential
nb_layers = 2
mlp2 = Sequential()
# add hidden layers
for i in range(nb_layers):
mlp2.add(Dense(output_dim=nb_hidden/nb_layers, input_dim=nb_input if i==0 else nb_hidden/nb_layers, init='uniform'))
mlp2.add(Activation('sigmoid'))
# add output layer
mlp2.add(Dense(output_dim=nb_output, input_dim=nb_hidden/nb_layers, init='uniform'))
mlp2.add(Activation('softmax'))
mlp2.compile(loss='categorical_crossentropy', optimizer='SGD')
mlp2.fit(X_train, y_train, batch_size=batch_size, nb_epoch=nb_epoch, show_accuracy=True, verbose=1)
mlp2.evaluate(X_test, y_test, show_accuracy=True)
"""
Explanation: A Deeper MLP
Next we build a two-layer MLP with the same number of hidden nodes, half in each layer.
End of explanation
"""
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
mae = Sequential()
nb_layers = 1
encoder = []
decoder = []
for i in range(nb_layers):
if i>0:
encoder.append(Dropout(0.4))
encoder.append(Dense(output_dim=nb_hidden/nb_layers,
input_dim=nb_input if i==0 else nb_hidden/nb_layers,
init='glorot_uniform'))
encoder.append(Activation('sigmoid'))
# Note that these are in reverse order
decoder.append(Activation('sigmoid'))
decoder.append(Dense(output_dim=nb_input if i==0 else nb_hidden/nb_layers,
input_dim=nb_hidden/nb_layers,
init='glorot_uniform'))
#decoder.append(Dropout(0.2))
for layer in encoder:
mae.add(layer)
for layer in reversed(decoder):
mae.add(layer)
from keras.optimizers import SGD
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
mae.compile(loss='mse', optimizer=sgd) # replace with sgd
mae.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)
def draw_mae_prediction(j):
X_plot = X_test[j:j+1]
prediction = mae.predict(X_plot, verbose=False)
plot_1_by_2_images(to_images(X_plot)[0], to_images(prediction)[0])
interact(draw_mae_prediction, j=(0, len(X_test)-1))
plot_10_by_10_images(images_test, figsize=(10,10))
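To make the "reverse order" bookkeeping in the encoder/decoder lists above concrete, here is a stripped-down sketch using (input_dim, output_dim) pairs instead of real layers; the layer sizes are illustrative, not taken from the notebook's variables:

```python
# Stripped-down sketch: layer sizes only, to show the mirror structure.
sizes = [784, 400, 100]
encoder = [(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)]
decoder = [(sizes[i + 1], sizes[i]) for i in range(len(sizes) - 1)]

# Stacking encoder + reversed(decoder) yields a symmetric autoencoder:
stack = encoder + list(reversed(decoder))
assert stack == [(784, 400), (400, 100), (100, 400), (400, 784)]
```

Because the decoder pairs are appended in the same forward order as the encoder, they must be reversed before stacking, which is exactly what the `reversed(decoder)` loop above does.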
"""
Explanation: Manual Autoencoder
End of explanation
"""
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
class StackedAutoencoder(object):
def __init__(self, layers, mode='autoencoder',
activation='sigmoid', init='uniform', final_activation='softmax',
dropout=0.2, optimizer='SGD'):
self.layers = layers
self.mode = mode
self.activation = activation
self.final_activation = final_activation
self.init = init
self.dropout = dropout
self.optimizer = optimizer
self._model = None
self.build()
self.compile()
def _add_layer(self, model, i, is_encoder):
if is_encoder:
input_dim, output_dim = self.layers[i], self.layers[i+1]
activation = self.final_activation if i==len(self.layers)-2 else self.activation
else:
input_dim, output_dim = self.layers[i+1], self.layers[i]
activation = self.activation
model.add(Dense(output_dim=output_dim,
input_dim=input_dim,
init=self.init))
model.add(Activation(activation))
def build(self):
self.encoder = Sequential()
self.decoder = Sequential()
self.autoencoder = Sequential()
for i in range(len(self.layers)-1):
self._add_layer(self.encoder, i, True)
self._add_layer(self.autoencoder, i, True)
#if i<len(self.layers)-2:
# self.autoencoder.add(Dropout(self.dropout))
# Note that the decoder layers are in reverse order
for i in reversed(range(len(self.layers)-1)):
self._add_layer(self.decoder, i, False)
self._add_layer(self.autoencoder, i, False)
def compile(self):
print("Compiling the encoder ...")
self.encoder.compile(loss='categorical_crossentropy', optimizer=self.optimizer)
print("Compiling the decoder ...")
self.decoder.compile(loss='mse', optimizer=self.optimizer)
print("Compiling the autoencoder ...")
return self.autoencoder.compile(loss='mse', optimizer=self.optimizer)
def fit(self, X_train, Y_train, batch_size, nb_epoch, verbose=1):
result = self.autoencoder.fit(X_train, Y_train,
batch_size=batch_size, nb_epoch=nb_epoch,
verbose=verbose)
# copy the weights to the encoder
for i, l in enumerate(self.encoder.layers):
l.set_weights(self.autoencoder.layers[i].get_weights())
for i in range(len(self.decoder.layers)):
self.decoder.layers[-1-i].set_weights(self.autoencoder.layers[-1-i].get_weights())
return result
def pretrain(self, X_train, batch_size, nb_epoch, verbose=1):
for i in range(len(self.layers)-1):
# Greedily train each layer
print("Now pretraining layer {} [{}-->{}]".format(i+1, self.layers[i], self.layers[i+1]))
ae = Sequential()
self._add_layer(ae, i, True)
#ae.add(Dropout(self.dropout))
self._add_layer(ae, i, False)
ae.compile(loss='mse', optimizer=self.optimizer)
ae.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=verbose)
# Then lift the training data up one layer
print("Transforming data from", X_train.shape, "to", (X_train.shape[0], self.layers[i+1]))
enc = Sequential()
self._add_layer(enc, i, True)
enc.compile(loss='mse', optimizer=self.optimizer)
enc.layers[0].set_weights(ae.layers[0].get_weights())
enc.layers[1].set_weights(ae.layers[1].get_weights())
X_train = enc.predict(X_train, verbose=verbose)
print("Shape check:", X_train.shape)
# Then copy the learned weights
self.encoder.layers[2*i].set_weights(ae.layers[0].get_weights())
self.encoder.layers[2*i+1].set_weights(ae.layers[1].get_weights())
self.autoencoder.layers[2*i].set_weights(ae.layers[0].get_weights())
self.autoencoder.layers[2*i+1].set_weights(ae.layers[1].get_weights())
self.decoder.layers[-1-(2*i)].set_weights(ae.layers[-1].get_weights())
self.decoder.layers[-1-(2*i+1)].set_weights(ae.layers[-2].get_weights())
self.autoencoder.layers[-1-(2*i)].set_weights(ae.layers[-1].get_weights())
self.autoencoder.layers[-1-(2*i+1)].set_weights(ae.layers[-2].get_weights())
def evaluate(self, X_test, Y_test, show_accuracy=False):
return self.autoencoder.evaluate(X_test, Y_test, show_accuracy=show_accuracy)
def predict(self, X, verbose=False):
return self.autoencoder.predict(X, verbose=verbose)
def _get_paths(self, name):
model_path = "models/{}_model.yaml".format(name)
weights_path = "models/{}_weights.hdf5".format(name)
return model_path, weights_path
def save(self, name='autoencoder'):
model_path, weights_path = self._get_paths(name)
open(model_path, 'w').write(self.autoencoder.to_yaml())
self.autoencoder.save_weights(weights_path, overwrite=True)
def load(self, name='autoencoder'):
model_path, weights_path = self._get_paths(name)
self.autoencoder = keras.models.model_from_yaml(open(model_path))
self.autoencoder.load_weights(weights_path)
sae = StackedAutoencoder(layers=[nb_input, 400, 100, 10],
activation='sigmoid',
final_activation='sigmoid',
init='uniform',
dropout=0.2,
optimizer='adam')
nb_epoch = 3
sae.pretrain(X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)
#sae.compile()
sae.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)
def draw_sae_prediction(j):
X_plot = X_test[j:j+1]
prediction = sae.predict(X_plot, verbose=False)
plot_1_by_2_images(to_images(X_plot)[0], to_images(prediction)[0])
print(sae.encoder.predict(X_plot, verbose=False)[0])
interact(draw_sae_prediction, j=(0, len(X_test)-1))
plot_10_by_10_images(images_test, figsize=(10,10))
sae.evaluate(X_test, X_test, show_accuracy=True)
def visualise_filter(model, layer_index, filter_index):
from keras import backend as K
# build a loss function that maximizes the activation
# of the nth filter on the layer considered
layer_output = model.layers[layer_index].get_output()
loss = K.mean(layer_output[:, filter_index])
# compute the gradient of the input picture wrt this loss
input_img = model.layers[0].input
grads = K.gradients(loss, input_img)[0]
# normalization trick: we normalize the gradient
grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)
# this function returns the loss and grads given the input picture
iterate = K.function([input_img], [loss, grads])
# we start from a gray image with some noise
input_img_data = np.random.random((1,nb_input,))
# run gradient ascent for 20 steps
step = 1
for i in range(100):
loss_value, grads_value = iterate([input_img_data])
input_img_data += grads_value * step
#print("Current loss value:", loss_value)
if loss_value <= 0.:
# some filters get stuck to 0, we can skip them
break
print("Current loss value:", loss_value)
# decode the resulting input image
if loss_value>0:
#return input_img_data[0]
return input_img_data
else:
raise ValueError(loss_value)
def draw_filter(i):
    flt = visualise_filter(mlp, 3, i)  # use the slider value i to pick the filter
#print(flt)
plot_mnist_digit(to_images(flt)[0])
interact(draw_filter, i=[0, 9])
"""
Explanation: Stacked Autoencoder
End of explanation
"""
|
hankcs/HanLP | plugins/hanlp_demo/hanlp_demo/zh/ner_stl.ipynb | apache-2.0 | !pip install hanlp -U
"""
Explanation: <h2 align="center">Click the icons below to run HanLP online</h2>
<div align="center">
	<a href="https://colab.research.google.com/github/hankcs/HanLP/blob/doc-zh/plugins/hanlp_demo/hanlp_demo/zh/ner_mtl.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
	<a href="https://mybinder.org/v2/gh/hankcs/HanLP/doc-zh?filepath=plugins%2Fhanlp_demo%2Fhanlp_demo%2Fzh%2Fner_mtl.ipynb" target="_blank"><img src="https://mybinder.org/badge_logo.svg" alt="Open In Binder"/></a>
</div>
Installation
Whether you are on Windows, Linux or macOS, installing HanLP takes a single line:
End of explanation
"""
import hanlp
hanlp.pretrained.ner.ALL  # the language is the last field of the identifier, or see the corresponding corpus
"""
Explanation: Loading a model
HanLP's workflow starts by loading a model. Model identifiers are stored in the hanlp.pretrained package, grouped by NLP task.
End of explanation
"""
ner = hanlp.load(hanlp.pretrained.ner.MSRA_NER_ELECTRA_SMALL_ZH)
"""
Explanation: Call hanlp.load to load a model; it will be downloaded automatically to a local cache.
End of explanation
"""
print(ner([["2021年", "HanLPv2.1", "为", "生产", "环境", "带来", "次", "世代", "最", "先进", "的", "多", "语种", "NLP", "技术", "。"], ["阿婆主", "来到", "北京", "立方庭", "参观", "自然", "语义", "科技", "公司", "。"]], tasks='ner*'))
"""
Explanation: Named entity recognition
The input to the named entity recognition task is a sentence that has already been tokenized:
End of explanation
"""
print(ner.dict_whitelist)
"""
Explanation: Each 4-tuple denotes [named entity, type label, start index, end index], where the indices refer to positions in the token array.
Custom dictionaries
Custom dictionaries are member variables of the NER task:
End of explanation
"""
ner.dict_whitelist = {'午饭后': 'TIME'}
ner(['2021年', '测试', '高血压', '是', '138', ',', '时间', '是', '午饭', '后', '2点45', ',', '低血压', '是', '44'])
"""
Explanation: Whitelist dictionary
Words in the whitelist dictionary are output whenever possible. That said, HanLP is primarily statistical, so dictionaries have a very low priority.
End of explanation
"""
ner.dict_tags = {('名字', '叫', '金华'): ('O', 'O', 'S-PERSON')}
ner(['他', '在', '浙江', '金华', '出生', ',', '他', '的', '名字', '叫', '金华', '。'])
"""
Explanation: Forced dictionary
If you have read Introduction to Natural Language Processing, you will understand the BMESO tag set, which lets you directly override the labels predicted by the statistical model, giving you the highest-priority control.
End of explanation
"""
ner.dict_blacklist = {'金华'}
ner(['他', '在', '浙江', '金华', '出生', ',', '他', '的', '名字', '叫', '金华', '。'])
"""
Explanation: Blacklist dictionary
Words in the blacklist will never be treated as named entities.
End of explanation
"""
|
Chipe1/aima-python | games.ipynb | mit | from games import *
from notebook import psource, pseudocode
"""
Explanation: GAMES OR ADVERSARIAL SEARCH
This notebook serves as supporting material for topics covered in Chapter 5 - Adversarial Search in the book Artificial Intelligence: A Modern Approach. This notebook uses implementations from games.py module. Let's import required classes, methods, global variables etc., from games module.
CONTENTS
Game Representation
Game Examples
Tic-Tac-Toe
Figure 5.2 Game
Min-Max
Alpha-Beta
Players
Let's Play Some Games!
End of explanation
"""
%psource Game
"""
Explanation: GAME REPRESENTATION
To represent games we make use of the Game class, which we can subclass and override its functions to represent our own games. A helper tool is the namedtuple GameState, which in some cases can come in handy, especially when our game needs us to remember a board (like chess).
GameState namedtuple
GameState is a namedtuple which represents the current state of a game. It is used to help represent games whose states can't be easily represented normally, or for games that require memory of a board, like Tic-Tac-Toe.
Gamestate is defined as follows:
GameState = namedtuple('GameState', 'to_move, utility, board, moves')
to_move: It represents whose turn it is to move next.
utility: It stores the utility of the game state. Storing this utility is a good idea because, when you do a Minimax Search or an Alphabeta Search, you generate many recursive calls that travel all the way down to the terminal states. When these recursive calls return back up to the original caller, we have already calculated utilities for many game states. We store these utilities in their respective GameStates to avoid calculating them all over again.
board: A dict that stores the board of the game.
moves: It stores the list of legal moves possible from the current position.
Game class
Let's have a look at the class Game in our module. We see that it has functions, namely actions, result, utility, terminal_test, to_move and display.
We see that these functions have not actually been implemented. This class is just a template class; we are supposed to create the class for our game, by inheriting this Game class and implementing all the methods mentioned in Game.
End of explanation
"""
%psource TicTacToe
"""
Explanation: Now let's get into details of all the methods in our Game class. You have to implement these methods when you create new classes that would represent your game.
actions(self, state): Given a game state, this method generates all the legal actions possible from this state, as a list or a generator. Returning a generator rather than a list has the advantage that it saves space and you can still operate on it as a list.
result(self, state, move): Given a game state and a move, this method returns the game state that you get by making that move on this game state.
utility(self, state, player): Given a terminal game state and a player, this method returns the utility for that player in the given terminal game state. While implementing this method assume that the game state is a terminal game state. The logic in this module is such that this method will be called only on terminal game states.
terminal_test(self, state): Given a game state, this method should return True if this game state is a terminal state, and False otherwise.
to_move(self, state): Given a game state, this method returns the player who is to play next. This information is typically stored in the game state, so all this method does is extract this information and return it.
display(self, state): This method prints/displays the current state of the game.
GAME EXAMPLES
Below we give some examples for games you can create and experiment on.
Tic-Tac-Toe
Take a look at the class TicTacToe. All the methods mentioned in the class Game have been implemented here.
End of explanation
"""
moves = dict(A=dict(a1='B', a2='C', a3='D'),
B=dict(b1='B1', b2='B2', b3='B3'),
C=dict(c1='C1', c2='C2', c3='C3'),
D=dict(d1='D1', d2='D2', d3='D3'))
utils = dict(B1=3, B2=12, B3=8, C1=2, C2=4, C3=6, D1=14, D2=5, D3=2)
initial = 'A'
"""
Explanation: The class TicTacToe has been inherited from the class Game. As mentioned earlier, you really want to do this. Catching bugs and errors becomes a whole lot easier.
Additional methods in TicTacToe:
__init__(self, h=3, v=3, k=3) : When you create a class inherited from the Game class (class TicTacToe in our case), you'll have to create an object of this inherited class to initialize the game. This initialization might require some additional information which would be passed to __init__ as variables. For the case of our TicTacToe game, this additional information would be the number of rows h, number of columns v and how many consecutive X's or O's are needed in a row, column or diagonal for a win k. Also, the initial game state has to be defined here in __init__.
compute_utility(self, board, move, player) : A method to calculate the utility of TicTacToe game. If 'X' wins with this move, this method returns 1; if 'O' wins return -1; else return 0.
k_in_row(self, board, move, player, delta_x_y) : This method returns True if there is a line formed on TicTacToe board with the latest move else False.
TicTacToe GameState
Now, before we start implementing our TicTacToe game, we need to decide how we will be representing our game state. Typically, a game state will give you all the current information about the game at any point in time. When you are given a game state, you should be able to tell whose turn it is next, how the game will look like on a real-life board (if it has one) etc. A game state need not include the history of the game. If you can play the game further given a game state, you game state representation is acceptable. While we might like to include all kinds of information in our game state, we wouldn't want to put too much information into it. Modifying this game state to generate a new one would be a real pain then.
Now, as for our TicTacToe game state, would storing only the positions of all the X's and O's be sufficient to represent all the game information at that point in time? Well, does it tell us whose turn it is next? Looking at the 'X's and O's on the board and counting them should tell us that. But that would mean extra computing. To avoid this, we will also store whose move it is next in the game state.
Think about what we've done here. We have reduced extra computation by storing additional information in a game state. Now, this information might not be absolutely essential to tell us about the state of the game, but it does save us additional computation time. We'll do more of this later on.
To store game states will will use the GameState namedtuple.
to_move: A string of a single character, either 'X' or 'O'.
utility: 1 for win, -1 for loss, 0 otherwise.
board: All the positions of X's and O's on the board.
moves: All the possible moves from the current state. Note here, that storing the moves as a list, as it is done here, increases the space complexity of Minimax Search from O(m) to O(bm). Refer to Sec. 5.2.1 of the book.
Representing a move in TicTacToe game
Now that we have decided how our game state will be represented, it's time to decide how a move will be represented, so that it is easy to use a move to modify the current game state and generate a new one.
For our TicTacToe game, we'll just represent a move by a tuple, where the first and the second elements of the tuple will represent the row and column, respectively, where the next move is to be made. Whether to make an 'X' or an 'O' will be decided by the to_move in the GameState namedtuple.
Fig52 Game
For a more trivial example we will represent the game in Figure 5.2 of the book.
<img src="images/fig_5_2.png" width="75%">
The states are represented with capital letters inside the triangles (e.g. "A") while moves are the labels on the edges between states (e.g. "a1"). Terminal nodes carry utility values. Note that the terminal nodes in this example are named 'B1', 'B2' and 'B3' for the nodes below 'B', and so forth.
We will model the moves, utilities and initial state like this:
End of explanation
"""
print(moves['A']['a1'])
"""
Explanation: In moves, we have a nested dictionary system. The outer dictionary maps each state to the possible moves from that state (as an inner dictionary). The inner dictionary maps each move name to the next state after the move is complete.
Below is an example that showcases moves. We want the next state after move 'a1' from 'A', which is 'B'. A quick glance at the above image confirms that this is indeed the case.
End of explanation
"""
fig52 = Fig52Game()
"""
Explanation: We will now take a look at the functions we need to implement. First we need to create an object of the Fig52Game class.
End of explanation
"""
psource(Fig52Game.actions)
print(fig52.actions('B'))
"""
Explanation: actions: Returns the list of moves one can make from a given state.
End of explanation
"""
psource(Fig52Game.result)
print(fig52.result('A', 'a1'))
"""
Explanation: result: Returns the next state after we make a specific move.
End of explanation
"""
psource(Fig52Game.utility)
print(fig52.utility('B1', 'MAX'))
print(fig52.utility('B1', 'MIN'))
"""
Explanation: utility: Returns the value of the terminal state for a player ('MAX' and 'MIN'). Note that for 'MIN' the value returned is the negative of the utility.
End of explanation
"""
psource(Fig52Game.terminal_test)
print(fig52.terminal_test('C3'))
"""
Explanation: terminal_test: Returns True if the given state is a terminal state, False otherwise.
End of explanation
"""
psource(Fig52Game.to_move)
print(fig52.to_move('A'))
"""
Explanation: to_move: Return the player who will move in this state.
End of explanation
"""
psource(Fig52Game)
"""
Explanation: As a whole the class Fig52 that inherits from the class Game and overrides its functions:
End of explanation
"""
pseudocode("Minimax-Decision")
"""
Explanation: MIN-MAX
Overview
This algorithm (often called Minimax) computes the next move for a player (MIN or MAX) at their current state. It recursively computes the minimax value of successor states, until it reaches terminals (the leaves of the tree). Using the utility value of the terminal states, it computes the values of parent states until it reaches the initial node (the root of the tree).
It is worth noting that the algorithm works in a depth-first manner. The pseudocode can be found below:
End of explanation
"""
psource(minimax_decision)
"""
Explanation: Implementation
In the implementation we are using two functions, max_value and min_value to calculate the best move for MAX and MIN respectively. These functions interact in an alternating recursion; one calls the other until a terminal state is reached. When the recursion halts, we are left with scores for each move. We return the max. Despite returning the max, it will work for MIN too since for MIN the values are their negative (hence the order of values is reversed, so the higher the better for MIN too).
End of explanation
"""
print(minimax_decision('B', fig52))
print(minimax_decision('C', fig52))
print(minimax_decision('D', fig52))
"""
Explanation: Example
We will now play the Fig52 game using this algorithm. Take a look at the Fig52Game from above to follow along.
It is the turn of MAX to move, and he is at state A. He can move to B, C or D, using moves a1, a2 and a3 respectively. MAX's goal is to maximize the end value. So, to make a decision, MAX needs to know the values at the aforementioned nodes and pick the greatest one. After MAX, it is MIN's turn to play. So MAX wants to know what the values of B, C and D will be after MIN plays.
The problem then becomes what move will MIN make at B, C and D. The successor states of all these nodes are terminal states, so MIN will pick the smallest value for each node. So, for B he will pick 3 (from move b1), for C he will pick 2 (from move c1) and for D he will again pick 2 (from move d3).
Let's see this in code:
End of explanation
"""
print(minimax_decision('A', fig52))
"""
Explanation: Now MAX knows that the values for B, C and D are 3, 2 and 2 (produced by the above moves of MIN). The greatest is 3, which he will get with move a1. This is then the move MAX will make. Let's see the algorithm in full action:
End of explanation
"""
from notebook import Canvas_minimax
from random import randint
minimax_viz = Canvas_minimax('minimax_viz', [randint(1, 50) for i in range(27)])
"""
Explanation: Visualization
Below we have a simple game visualization using the algorithm. After you run the command, click on the cell to move the game along. You can input your own values via a list of 27 integers.
End of explanation
"""
pseudocode("Alpha-Beta-Search")
"""
Explanation: ALPHA-BETA
Overview
While Minimax is great for computing a move, it can get tricky when the number of game states gets bigger. The algorithm needs to search all the leaves of the tree, which increase exponentially to its depth.
For Tic-Tac-Toe, where the depth of the tree is 9 (after the 9th move, the game ends), we can have at most 9! terminal states (at most because not all terminal nodes are at the last level of the tree; some are higher up because the game ended before the 9th move). This isn't so bad, but for more complex problems like chess, we have over $10^{40}$ terminal nodes. Unfortunately we have not found a way to cut the exponent away, but we nevertheless have found ways to alleviate the workload.
Here we examine pruning the game tree, which means removing parts of it that we do not need to examine. The particular type of pruning is called alpha-beta, and the search in whole is called alpha-beta search.
To showcase what parts of the tree we don't need to search, we will take a look at the example Fig52Game.
In the example game, we need to find the best move for player MAX at state A, which is the maximum value of MIN's possible moves at successor states.
MAX(A) = MAX( MIN(B), MIN(C), MIN(D) )
MIN(B) is the minimum of 3, 12, 8 which is 3. So the above formula becomes:
MAX(A) = MAX( 3, MIN(C), MIN(D) )
Next move we will check is c1, which leads to a terminal state with utility of 2. Before we continue searching under state C, let's pop back into our formula with the new value:
MAX(A) = MAX( 3, MIN(2, c2, .... cN), MIN(D) )
We do not know how many moves state C allows, but we know that the first one results in a value of 2. Do we need to keep searching under C? The answer is no. The value MIN will pick on C will at most be 2. Since MAX already has the option to pick something greater than that, 3 from B, he does not need to keep searching under C.
In alpha-beta we make use of two additional parameters for each state/node, a and b, that describe bounds on the possible moves. The parameter a denotes the best choice (highest value) for MAX along that path, while b denotes the best choice (lowest value) for MIN. As we go along we update a and b and prune a node branch when the value of the node is worse than the value of a and b for MAX and MIN respectively.
In the above example, after the search under state B, MAX had an a value of 3. So, when searching node C we found a value less than that, 2, we stopped searching under C.
You can read the pseudocode below:
End of explanation
"""
%psource alphabeta_search
"""
Explanation: Implementation
Like minimax, we again make use of functions max_value and min_value, but this time we utilise the a and b values, updating them and stopping the recursive call if we end up on nodes with values worse than a and b (for MAX and MIN). The algorithm finds the maximum value and returns the move that results in it.
The implementation:
End of explanation
"""
print(alphabeta_search('A', fig52))
"""
Explanation: Example
We will play the Fig52 Game with the alpha-beta search algorithm. It is the turn of MAX to play at state A.
End of explanation
"""
print(alphabeta_search('B', fig52))
print(alphabeta_search('C', fig52))
print(alphabeta_search('D', fig52))
"""
Explanation: The optimal move for MAX is a1, for the reasons given above. MIN will pick move b1 for B resulting in a value of 3, updating the a value of MAX to 3. Then, when we find under C a node of value 2, we will stop searching under that sub-tree since it is less than a. From D we have a value of 2. So, the best move for MAX is the one resulting in a value of 3, which is a1.
Below we see the best moves for MIN starting from B, C and D respectively. Note that the algorithm in these cases works the same way as minimax, since all the nodes below the aforementioned states are terminal.
End of explanation
"""
from notebook import Canvas_alphabeta
from random import randint
alphabeta_viz = Canvas_alphabeta('alphabeta_viz', [randint(1, 50) for i in range(27)])
"""
Explanation: Visualization
Below you will find the visualization of the alpha-beta algorithm for a simple game. Click on the cell after you run the command to move the game along. You can input your own values via a list of 27 integers.
End of explanation
"""
game52 = Fig52Game()
"""
Explanation: PLAYERS
So, we have finished the implementation of the TicTacToe and Fig52Game classes. What these classes do is defining the rules of the games. We need more to create an AI that can actually play games. This is where random_player and alphabeta_player come in.
query_player
The query_player function allows you, a human opponent, to play the game. This function requires a display method to be implemented in your game class, so that successive game states can be displayed on the terminal, making it easier for you to visualize the game and play accordingly.
random_player
The random_player is a function that plays random moves in the game. That's it. There isn't much more to this guy.
alphabeta_player
The alphabeta_player, on the other hand, calls the alphabeta_search function, which returns the best move in the current game state. Thus, the alphabeta_player always plays the best move given a game state, assuming that the game tree is small enough to search entirely.
minimax_player
The minimax_player, on the other hand, calls the minimax_search function, which returns the best move in the current game state.
play_game
The play_game function will be the one that will actually be used to play the game. You pass as arguments to it an instance of the game you want to play and the players you want in this game. Use it to play AI vs AI, AI vs human, or even human vs human matches!
LET'S PLAY SOME GAMES!
Game52
Let's start by experimenting with the Fig52Game first. For that we'll create an instance of the subclass Fig52Game inherited from the class Game:
End of explanation
"""
print(random_player(game52, 'A'))
print(random_player(game52, 'A'))
"""
Explanation: First we try out our random_player(game, state). Given a game state it will give us a random move every time:
End of explanation
"""
print( alphabeta_player(game52, 'A') )
print( alphabeta_player(game52, 'B') )
print( alphabeta_player(game52, 'C') )
"""
Explanation: The alphabeta_player(game, state) will always give us the best move possible, for the relevant player (MAX or MIN):
End of explanation
"""
minimax_decision('A', game52)
alphabeta_search('A', game52)
"""
Explanation: What the alphabeta_player does is simply call the method alphabeta_full_search. They are both essentially the same. In the module, both alphabeta_full_search and minimax_decision have been implemented. They both do the same job and return the same thing, which is the best move in the current state. It's just that alphabeta_full_search is more efficient with regard to time, because it prunes the search tree and hence explores a smaller number of states.
End of explanation
"""
game52.play_game(alphabeta_player, alphabeta_player)
game52.play_game(alphabeta_player, random_player)
game52.play_game(query_player, alphabeta_player)
game52.play_game(alphabeta_player, query_player)
"""
Explanation: Demonstrating the play_game function on the game52:
End of explanation
"""
ttt = TicTacToe()
"""
Explanation: Note that if you are the first player then alphabeta_player plays as MIN, and if you are the second player then alphabeta_player plays as MAX. This happens because that's the way the game is defined in the class Fig52Game. Having a look at the code of this class should make it clear.
TicTacToe
Now let's play TicTacToe. First we initialize the game by creating an instance of the subclass TicTacToe inherited from the class Game:
End of explanation
"""
ttt.display(ttt.initial)
"""
Explanation: We can print a state using the display method:
End of explanation
"""
my_state = GameState(
to_move = 'X',
    utility = 0,
board = {(1,1): 'X', (1,2): 'O', (1,3): 'X',
(2,1): 'O', (2,3): 'O',
(3,1): 'X',
},
moves = [(2,2), (3,2), (3,3)]
)
"""
Explanation: Hmm, so that's the initial state of the game; no X's and no O's.
Let us create a new game state by ourselves to experiment:
End of explanation
"""
ttt.display(my_state)
"""
Explanation: So, how does this game state look like?
End of explanation
"""
random_player(ttt, my_state)
random_player(ttt, my_state)
"""
Explanation: The random_player will behave as it is supposed to, i.e. pseudo-randomly:
End of explanation
"""
alphabeta_player(ttt, my_state)
"""
Explanation: But the alphabeta_player will always give the best move, as expected:
End of explanation
"""
ttt.play_game(random_player, alphabeta_player)
"""
Explanation: Now let's make two players play against each other. We use the play_game function for this. The play_game function makes the players play the match against each other and returns the utility, for the first player, of the terminal state reached when the game ends. Hence, for our TicTacToe game, if we get the output +1, the first player wins, -1 if the second player wins, and 0 if the match ends in a draw.
End of explanation
"""
for _ in range(10):
print(ttt.play_game(alphabeta_player, alphabeta_player))
"""
Explanation: The output is (usually) -1, because random_player loses to alphabeta_player. Sometimes, however, random_player manages to draw with alphabeta_player.
Since an alphabeta_player plays perfectly, a match between two alphabeta_players should always end in a draw. Let's see if this happens:
End of explanation
"""
for _ in range(10):
print(ttt.play_game(random_player, alphabeta_player))
"""
Explanation: A random_player should never win against an alphabeta_player. Let's test that.
End of explanation
"""
from notebook import Canvas_TicTacToe
bot_play = Canvas_TicTacToe('bot_play', 'random', 'alphabeta')
"""
Explanation: Canvas_TicTacToe(Canvas)
This subclass is used to play TicTacToe game interactively in Jupyter notebooks. TicTacToe class is called while initializing this subclass.
Let's have a match between random_player and alphabeta_player. Click on the board to call players to make a move.
End of explanation
"""
rand_play = Canvas_TicTacToe('rand_play', 'human', 'random')
"""
Explanation: Now, let's play a game ourselves against a random_player:
End of explanation
"""
ab_play = Canvas_TicTacToe('ab_play', 'human', 'alphabeta')
"""
Explanation: Yay! We (usually) win. But we cannot win against an alphabeta_player, however hard we try.
End of explanation
"""
|
mrmeswani/Robotics | RoboND-Rover-Project/src/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb | gpl-3.0 | %%HTML
<style> code {background-color : orange !important;} </style>
%matplotlib inline
#%matplotlib qt # Choose %matplotlib qt to plot to an interactive window (note it may show up behind your browser)
# Make some of the relevant imports
import cv2 # OpenCV for perspective transform
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import scipy.misc # For saving images as needed
import glob # For reading in a list of images from a folder
import imageio
imageio.plugins.ffmpeg.download()
"""
Explanation: Rover Project Test Notebook
This notebook contains the functions from the lesson and provides the scaffolding you need to test out your mapping methods. The steps you need to complete in this notebook for the project are the following:
First just run each of the cells in the notebook, examine the code and the results of each.
Run the simulator in "Training Mode" and record some data. Note: the simulator may crash if you try to record a large (longer than a few minutes) dataset, but you don't need a ton of data, just some example images to work with.
Change the data directory path (2 cells below) to be the directory where you saved data
Test out the functions provided on your data
Write new functions (or modify existing ones) to report and map out detections of obstacles and rock samples (yellow rocks)
Populate the process_image() function with the appropriate steps/functions to go from a raw image to a worldmap.
Run the cell that calls process_image() using moviepy functions to create video output
Once you have mapping working, move on to modifying perception.py and decision.py to allow your rover to navigate and map in autonomous mode!
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
Run the next cell to get code highlighting in the markdown cells.
End of explanation
"""
path = '../test_dataset/IMG/*'
img_list = glob.glob(path)
# Grab a random image and display it
idx = np.random.randint(0, len(img_list)-1)
image = mpimg.imread(img_list[idx])
plt.imshow(image)
"""
Explanation: Quick Look at the Data
There's some example data provided in the test_dataset folder. This basic dataset is enough to get you up and running but if you want to hone your methods more carefully you should record some data of your own to sample various scenarios in the simulator.
Next, read in and display a random image from the test_dataset folder
End of explanation
"""
# In the simulator you can toggle on a grid on the ground for calibration
# You can also toggle on the rock samples with the 0 (zero) key.
# Here's an example of the grid and one of the rocks
example_grid = '../calibration_images/example_grid1.jpg'
example_rock = '../calibration_images/example_rock1.jpg'
grid_img = mpimg.imread(example_grid)
rock_img = mpimg.imread(example_rock)
fig = plt.figure(figsize=(12,3))
plt.subplot(121)
plt.imshow(grid_img)
plt.subplot(122)
plt.imshow(rock_img)
"""
Explanation: Calibration Data
Read in and display example grid and rock sample calibration images. You'll use the grid for perspective transform and the rock image for creating a new color selection that identifies these samples of interest.
End of explanation
"""
# Define a function to perform a perspective transform
# I've used the example grid image above to choose source points for the
# grid cell in front of the rover (each grid cell is 1 square meter in the sim)
# Define a function to perform a perspective transform
def perspect_transform(img, src, dst):
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))# keep same size as input image
return warped
# Define calibration box in source (actual) and destination (desired) coordinates
# These source and destination points are defined to warp the image
# to a grid where each 10x10 pixel square represents 1 square meter
# The destination box will be 2*dst_size on each side
dst_size = 5
# Set a bottom offset to account for the fact that the bottom of the image
# is not the position of the rover but a bit in front of it
# this is just a rough guess, feel free to change it!
bottom_offset = 6
source = np.float32([[14, 140], [301 ,140],[200, 96], [118, 96]])
destination = np.float32([[image.shape[1]/2 - dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - 2*dst_size - bottom_offset],
[image.shape[1]/2 - dst_size, image.shape[0] - 2*dst_size - bottom_offset],
])
warped = perspect_transform(grid_img, source, destination)
rock_warped = perspect_transform(rock_img, source, destination)
fig = plt.figure(figsize=(12,3))
plt.subplot(121)
plt.imshow(warped)
plt.subplot(122)
plt.imshow(rock_warped)
#scipy.misc.imsave('../output/warped_example.jpg', warped)
"""
Explanation: Perspective Transform
Define the perspective transform function from the lesson and test it on an image.
End of explanation
"""
# Identify pixels above the threshold
# Threshold of RGB > 160 does a nice job of identifying ground pixels only
def ground_thresh(img, rgb_thresh=(160, 160, 160)):
# Create an array of zeros same xy size as img, but single channel
color_select = np.zeros_like(img[:,:,0])
# Require that each pixel be above all three threshold values in RGB
# above_thresh will now contain a boolean array with "True"
# where threshold was met
above_thresh = (img[:,:,0] > rgb_thresh[0]) \
& (img[:,:,1] > rgb_thresh[1]) \
& (img[:,:,2] > rgb_thresh[2])
# Index the array of zeros with the boolean array and set to 1
color_select[above_thresh] = 1
# Return the binary image
return color_select
def obstacle_thresh(img, rgb_thresh=(160, 160, 160)):
# Create an array of zeros same xy size as img, but single channel
color_select = np.zeros_like(img[:,:,0])
# Require that each pixel be above all three threshold values in RGB
# above_thresh will now contain a boolean array with "True"
# where threshold was met
below_thresh = (img[:,:,0] < rgb_thresh[0]) \
& (img[:,:,1] < rgb_thresh[1]) \
& (img[:,:,2] < rgb_thresh[2])
# Index the array of zeros with the boolean array and set to 1
color_select[below_thresh] = 1
# Return the binary image
return color_select
def rock_thresh(img):
    # Note: mpimg loads images as RGB, not BGR. With COLOR_BGR2HSV the yellow
    # rocks land in the 80-120 hue range used below; with COLOR_RGB2HSV the
    # equivalent range would be roughly 20-30.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # define range of (shifted) yellow in HSV
    lower = np.array([80,100,100])
    upper = np.array([120,255,255])
    # Threshold the HSV image to get only yellow colors
    mask = cv2.inRange(hsv, lower, upper)
    # Any pixel that passed the mask belongs to a rock
    # (the earlier version indexed the unmasked image here by mistake)
    color_select = np.zeros_like(img[:,:,0])
    color_select[mask > 0] = 1
    return color_select
def color_thresh3(img, high_thresh=(255,255,0), low_thresh=(140,120,100)):
    # Create an array of zeros same xy size as img, but single channel
    color_select = np.zeros_like(img[:,:,0])
    # Select pixels that fall strictly between the low and high thresholds
    # in all three channels
    thresh = np.logical_and(img[:,:,0] < high_thresh[0], img[:,:,0] > low_thresh[0]) \
           & np.logical_and(img[:,:,1] < high_thresh[1], img[:,:,1] > low_thresh[1]) \
           & np.logical_and(img[:,:,2] < high_thresh[2], img[:,:,2] > low_thresh[2])
    # Index the array of zeros with the boolean array and set to 1
    color_select[thresh] = 1
    # Return the binary image
    return color_select
def show_rock(img):
    # Convert to HSV (note: mpimg loads RGB, so with COLOR_BGR2HSV the yellow
    # rocks land in the 80-120 hue range used below)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of color in HSV
lower = np.array([80,100,100])
upper = np.array([120,255,255])
# Threshold the HSV image to get only yellow colors
mask = cv2.inRange(hsv, lower, upper)
# Bitwise-AND mask and original image
res = cv2.bitwise_and(img,img, mask= mask)
#cv2.imshow('original',img)
#cv2.imshow('mask',mask)
#cv2.imshow('res',res)
return res
ground_threshold = (160,160,160)
obstacle_threshold = (160,160,160)
fig = plt.figure(2, figsize=(12,4))
threshed = ground_thresh(warped, ground_threshold)
plt.subplot(151)
plt.imshow(threshed, cmap='gray')
obstacle = obstacle_thresh(warped, obstacle_threshold)
plt.subplot(152)
plt.imshow(obstacle, cmap='gray')
rocks = show_rock(rock_img)
plt.subplot(153)
plt.imshow(rocks, cmap='gray')
rock = show_rock(rock_warped)
plt.subplot(154)
plt.imshow(rock, cmap='gray')
#rock_threshed = color_thresh3(rock_warped)
rock_threshed = rock_thresh(rock_warped)
plt.subplot(155)
plt.imshow(rock_threshed, cmap='gray')
#scipy.misc.imsave('../output/warped_threshed.jpg', threshed*255)
"""
Explanation: Color Thresholding
Define the color thresholding function from the lesson and apply it to the warped image
TODO: Ultimately, you want your map to not just include navigable terrain but also obstacles and the positions of the rock samples you're searching for. Modify this function or write a new function that returns the pixel locations of obstacles (areas below the threshold) and rock samples (yellow rocks in calibration images), such that you can map these areas into world coordinates as well.
Hints and Suggestions:
* For obstacles you can just invert your color selection that you used to detect ground pixels, i.e., if you've decided that everything above the threshold is navigable terrain, then everything below the threshold must be an obstacle!
For rocks, think about imposing a lower and upper boundary in your color selection to be more specific about choosing colors. You can investigate the colors of the rocks (the RGB pixel values) in an interactive matplotlib window to get a feel for the appropriate threshold range (keep in mind you may want different ranges for each of R, G and B!). Feel free to get creative and even bring in functions from other libraries. Here's an example of color selection using OpenCV.
Beware However: if you start manipulating images with OpenCV, keep in mind that it defaults to BGR instead of RGB color space when reading/writing images, so things can get confusing.
End of explanation
"""
# Define a function to convert from image coords to rover coords
def rover_coords(binary_img):
# Identify nonzero pixels
ypos, xpos = binary_img.nonzero()
# Calculate pixel positions with reference to the rover position being at the
# center bottom of the image.
x_pixel = -(ypos - binary_img.shape[0]).astype(np.float)
y_pixel = -(xpos - binary_img.shape[1]/2 ).astype(np.float)
return x_pixel, y_pixel
# Define a function to convert to radial coords in rover space
def to_polar_coords(x_pixel, y_pixel):
# Convert (x_pixel, y_pixel) to (distance, angle)
# in polar coordinates in rover space
# Calculate distance to each pixel
dist = np.sqrt(x_pixel**2 + y_pixel**2)
# Calculate angle away from vertical for each pixel
angles = np.arctan2(y_pixel, x_pixel)
return dist, angles
# Define a function to map rover space pixels to world space
def rotate_pix(xpix, ypix, yaw):
# Convert yaw to radians
yaw_rad = yaw * np.pi / 180
xpix_rotated = (xpix * np.cos(yaw_rad)) - (ypix * np.sin(yaw_rad))
ypix_rotated = (xpix * np.sin(yaw_rad)) + (ypix * np.cos(yaw_rad))
# Return the result
return xpix_rotated, ypix_rotated
def translate_pix(xpix_rot, ypix_rot, xpos, ypos, scale):
# Apply a scaling and a translation
xpix_translated = (xpix_rot / scale) + xpos
ypix_translated = (ypix_rot / scale) + ypos
# Return the result
return xpix_translated, ypix_translated
# Define a function to apply rotation and translation (and clipping)
# Once you define the two functions above this function should work
def pix_to_world(xpix, ypix, xpos, ypos, yaw, world_size, scale):
# Apply rotation
xpix_rot, ypix_rot = rotate_pix(xpix, ypix, yaw)
# Apply translation
xpix_tran, ypix_tran = translate_pix(xpix_rot, ypix_rot, xpos, ypos, scale)
# Perform rotation, translation and clipping all at once
x_pix_world = np.clip(np.int_(xpix_tran), 0, world_size - 1)
y_pix_world = np.clip(np.int_(ypix_tran), 0, world_size - 1)
# Return the result
return x_pix_world, y_pix_world
# Grab another random image
idx = np.random.randint(0, len(img_list)-1)
image = mpimg.imread(img_list[idx])
warped = perspect_transform(image, source, destination)
threshed = ground_thresh(warped)  # the navigable-terrain threshold defined above
# Calculate pixel values in rover-centric coords and distance/angle to all pixels
xpix, ypix = rover_coords(threshed)
dist, angles = to_polar_coords(xpix, ypix)
mean_dir = np.mean(angles)
# Do some plotting
fig = plt.figure(figsize=(12,9))
plt.subplot(221)
plt.imshow(image)
plt.subplot(222)
plt.imshow(warped)
plt.subplot(223)
plt.imshow(threshed, cmap='gray')
plt.subplot(224)
plt.plot(xpix, ypix, '.')
plt.ylim(-160, 160)
plt.xlim(0, 160)
arrow_length = 100
x_arrow = arrow_length * np.cos(mean_dir)
y_arrow = arrow_length * np.sin(mean_dir)
plt.arrow(0, 0, x_arrow, y_arrow, color='red', zorder=2, head_width=10, width=2)
"""
Explanation: Coordinate Transformations
Define the functions used to do coordinate transforms and apply them to an image.
End of explanation
"""
# Import pandas and read in csv file as a dataframe
import pandas as pd
# Change the path below to your data directory
# If you are in a locale (e.g., Europe) that uses ',' as the decimal separator
# change the '.' to ','
df = pd.read_csv('../test_dataset/robot_log.csv', delimiter=';', decimal='.')
csv_img_list = df["Path"].tolist() # Create list of image pathnames
# Read in ground truth map and create a 3-channel image with it
ground_truth = mpimg.imread('../calibration_images/map_bw.png')
ground_truth_3d = np.dstack((ground_truth*0, ground_truth*255, ground_truth*0)).astype(np.float)
# Creating a class to be the data container
# Will read in saved data from csv file and populate this object
# Worldmap is instantiated as 200 x 200 grids corresponding
# to a 200m x 200m space (same size as the ground truth map: 200 x 200 pixels)
# This encompasses the full range of output position values in x and y from the sim
class Databucket():
def __init__(self):
self.images = csv_img_list
self.xpos = df["X_Position"].values
self.ypos = df["Y_Position"].values
self.yaw = df["Yaw"].values
self.count = 0 # This will be a running index
self.worldmap = np.zeros((200, 200, 3)).astype(np.float)
self.ground_truth = ground_truth_3d # Ground truth worldmap
# Instantiate a Databucket().. this will be a global variable/object
# that you can refer to in the process_image() function below
data = Databucket()
"""
Explanation: Read in saved data and ground truth map of the world
The next cell is all setup to read your saved data into a pandas dataframe. Here you'll also read in a "ground truth" map of the world, where white pixels (pixel value = 1) represent navigable terrain.
After that, we'll define a class to store telemetry data and pathnames to images. When you instantiate this class (data = Databucket()) you'll have a global variable called data that you can refer to for telemetry and map data within the process_image() function in the following cell.
End of explanation
"""
# Define a function to pass stored images to
# reading rover position and yaw angle from csv file
# This function will be used by moviepy to create an output video
def process_image(img):
# Example of how to use the Databucket() object defined above
# to print the current x, y and yaw values
# print(data.xpos[data.count], data.ypos[data.count], data.yaw[data.count])
# TODO:
# 1) Define source and destination points for perspective transform
source = np.float32([[14, 140], [301 ,140],[200, 96], [118, 96]])
    destination = np.float32([[img.shape[1]/2 - dst_size, img.shape[0] - bottom_offset],
                  [img.shape[1]/2 + dst_size, img.shape[0] - bottom_offset],
                  [img.shape[1]/2 + dst_size, img.shape[0] - 2*dst_size - bottom_offset],
                  [img.shape[1]/2 - dst_size, img.shape[0] - 2*dst_size - bottom_offset],
                  ])
# 2) Apply perspective transform
warped = perspect_transform(img, source, destination)
# 3) Apply color threshold to identify navigable terrain/obstacles/rock samples
ground_threshold = (160,160,160)
obstacle_threshold = (160,160,160)
navigate_threshed = ground_thresh(warped, ground_threshold)
obstacle_threshed = obstacle_thresh(warped, obstacle_threshold)
rock_threshed = rock_thresh(warped)
# 4) Convert thresholded image pixel values to rover-centric coords
navigate_x_rover, navigate_y_rover = rover_coords(navigate_threshed)
obstacle_x_rover, obstacle_y_rover = rover_coords(obstacle_threshed)
rock_x_rover, rock_y_rover = rover_coords(rock_threshed)
# 5) Convert rover-centric pixel values to world coords
navigate_x_world,navigate_y_world = pix_to_world(navigate_x_rover, navigate_y_rover, data.xpos[data.count], data.ypos[data.count],\
data.yaw[data.count], 200, 10)
obstacle_x_world,obstacle_y_world = pix_to_world(obstacle_x_rover, obstacle_y_rover, data.xpos[data.count], data.ypos[data.count],\
data.yaw[data.count], 200, 10)
rock_x_world,rock_y_world = pix_to_world(rock_x_rover, rock_y_rover, data.xpos[data.count], data.ypos[data.count],\
data.yaw[data.count], 200, 10)
# 6) Update worldmap (to be displayed on right side of screen)
# Example: data.worldmap[obstacle_y_world, obstacle_x_world, 0] += 1
# data.worldmap[rock_y_world, rock_x_world, 1] += 1
# data.worldmap[navigable_y_world, navigable_x_world, 2] += 1
data.worldmap[navigate_y_world, navigate_x_world,2]+=1
data.worldmap[obstacle_y_world, obstacle_x_world,0]+=1
data.worldmap[rock_y_world, rock_x_world,1]+=1
# 7) Make a mosaic image, below is some example code
# First create a blank image (can be whatever shape you like)
output_image = np.zeros((img.shape[0] + data.worldmap.shape[0], img.shape[1]*2, 3))
# Next you can populate regions of the image with various output
# Here I'm putting the original image in the upper left hand corner
output_image[0:img.shape[0], 0:img.shape[1]] = img
# Let's create more images to add to the mosaic, first a warped image
warped = perspect_transform(img, source, destination)
# Add the warped image in the upper right hand corner
output_image[0:img.shape[0], img.shape[1]:] = warped
# Overlay worldmap with ground truth map
map_add = cv2.addWeighted(data.worldmap, 1, data.ground_truth, 0.5, 0)
# Flip map overlay so y-axis points upward and add to output_image
output_image[img.shape[0]:, 0:data.worldmap.shape[1]] = np.flipud(map_add)
# Then putting some text over the image
cv2.putText(output_image,"Populate this image with your analyses to make a video!", (20, 20),
cv2.FONT_HERSHEY_COMPLEX, 0.4, (255, 255, 255), 1)
if data.count < len(data.images) - 1:
data.count += 1 # Keep track of the index in the Databucket()
return output_image
"""
Explanation: Write a function to process stored images
Modify the process_image() function below by adding in the perception step processes (functions defined above) to perform image analysis and mapping. The following cell is all set up to use this process_image() function in conjunction with the moviepy video processing package to create a video from the images you saved taking data in the simulator.
In short, you will be passing individual images into process_image() and building up an image called output_image that will be stored as one frame of video. You can make a mosaic of the various steps of your analysis process and add text as you like (example provided below).
To start with, you can simply run the next three cells to see what happens, but then go ahead and modify them such that the output video demonstrates your mapping process. Feel free to get creative!
End of explanation
"""
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from moviepy.editor import ImageSequenceClip
# Define pathname to save the output video
output = '../output/test_mapping.mp4'
data = Databucket() # Re-initialize data in case you're running this cell multiple times
clip = ImageSequenceClip(data.images, fps=60) # Note: output video will be sped up because
# recording rate in simulator is fps=25
new_clip = clip.fl_image(process_image) #NOTE: this function expects color images!!
%time new_clip.write_videofile(output, audio=False)
"""
Explanation: Make a video from processed image data
Use the moviepy library to process images and create a video.
End of explanation
"""
from IPython.display import HTML
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(output))
"""
Explanation: This next cell should function as an inline video player
If this fails to render the video, try running the following cell (alternative video rendering method). You can also simply have a look at the saved mp4 in your /output folder
End of explanation
"""
import io
import base64
video = io.open(output, 'r+b').read()
encoded_video = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded_video.decode('ascii')))
"""
Explanation: Below is an alternative way to create a video in case the above cell did not work.
End of explanation
"""
|
Diyago/Machine-Learning-scripts | DEEP LEARNING/fastai kaggle rossman - tabular regression.ipynb | apache-2.0 | %matplotlib inline
%reload_ext autoreload
%autoreload 2
from fastai.structured import *
from fastai.column_data import *
np.set_printoptions(threshold=50, edgeitems=20)
PATH='data/rossmann/'
"""
Explanation: Structured and time series data
This notebook contains an implementation of the third place result in the Rossman Kaggle competition as detailed in Guo/Berkhahn's Entity Embeddings of Categorical Variables.
The motivation behind exploring this architecture is its relevance to real-world applications. Most data used for decision making day-to-day in industry is structured and/or time-series data. Here we explore the end-to-end process of using neural networks with practical structured data problems.
End of explanation
"""
def concat_csvs(dirname):
path = f'{PATH}{dirname}'
    filenames = glob(f"{path}/*.csv")
wrote_header = False
with open(f"{path}.csv","w") as outputfile:
for filename in filenames:
name = filename.split(".")[0]
with open(filename) as f:
line = f.readline()
if not wrote_header:
wrote_header = True
outputfile.write("file,"+line)
for line in f:
outputfile.write(name + "," + line)
outputfile.write("\n")
# concat_csvs('googletrend')
# concat_csvs('weather')
"""
Explanation: Create datasets
In addition to the provided data, we will be using external datasets put together by participants in the Kaggle competition. You can download all of them here.
For completeness, the implementation used to put them together is included below.
End of explanation
"""
table_names = ['train', 'store', 'store_states', 'state_names',
'googletrend', 'weather', 'test']
"""
Explanation: Feature Space:
* train: Training set provided by competition
* store: List of stores
* store_states: mapping of store to the German state they are in
* state_names: List of German state names
* googletrend: trend of certain google keywords over time, found by users to correlate well w/ given data
* weather: weather
* test: testing set
End of explanation
"""
tables = [pd.read_csv(f'{PATH}{fname}.csv', low_memory=False) for fname in table_names]
from IPython.display import HTML, display
"""
Explanation: We'll be using the popular data manipulation framework pandas. Among other things, pandas allows you to manipulate tables/data frames in python as one would in a database.
We're going to go ahead and load all of our csv's as dataframes into the list tables.
End of explanation
"""
for t in tables: display(t.head())
"""
Explanation: We can use head() to get a quick look at the contents of each table:
* train: Contains store information on a daily basis, tracks things like sales, customers, whether that day was a holiday, etc.
* store: general info about the store including competition, etc.
* store_states: maps store to state it is in
* state_names: Maps state abbreviations to names
* googletrend: trend data for particular week/state
* weather: weather conditions for each state
* test: Same as training table, w/o sales and customers
End of explanation
"""
for t in tables: display(DataFrameSummary(t).summary())
"""
Explanation: This is very representative of a typical industry dataset.
The following returns summarized aggregate information for each table across each field.
End of explanation
"""
train, store, store_states, state_names, googletrend, weather, test = tables
len(train),len(test)
"""
Explanation: Data Cleaning / Feature Engineering
As a structured data problem, we necessarily have to go through all the cleaning and feature engineering, even though we're using a neural network.
End of explanation
"""
train.StateHoliday = train.StateHoliday!='0'
test.StateHoliday = test.StateHoliday!='0'
"""
Explanation: We turn state Holidays to booleans, to make them more convenient for modeling. We can do calculations on pandas fields using notation very similar (often identical) to numpy.
End of explanation
"""
def join_df(left, right, left_on, right_on=None, suffix='_y'):
if right_on is None: right_on = left_on
return left.merge(right, how='left', left_on=left_on, right_on=right_on,
suffixes=("", suffix))
"""
Explanation: join_df is a function for joining tables on specific fields. By default, we'll be doing a left outer join of right on the left argument using the given fields for each table.
Pandas does joins using the merge method. The suffixes argument describes the naming convention for duplicate fields. We've elected to leave the duplicate field names on the left untouched, and append a "_y" to those on the right.
End of explanation
"""
weather = join_df(weather, state_names, "file", "StateName")
"""
Explanation: Join weather/state names.
End of explanation
"""
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
"""
Explanation: In pandas you can add new columns to a dataframe by simply defining it. We'll do this for googletrends by extracting dates and state names from the given data and adding those columns.
We're also going to replace all instances of state name 'NI' to match the usage in the rest of the data: 'HB,NI'. This is a good opportunity to highlight pandas indexing. We can use .loc[rows, cols] to select a list of rows and a list of columns from the dataframe. In this case, we're selecting rows w/ statename 'NI' by using a boolean list googletrend.State=='NI' and selecting "State".
End of explanation
"""
add_datepart(weather, "Date", drop=False)
add_datepart(googletrend, "Date", drop=False)
add_datepart(train, "Date", drop=False)
add_datepart(test, "Date", drop=False)
"""
Explanation: The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals.
You should always consider this feature extraction step when working with date-time. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities. We'll add to every table with a date field.
End of explanation
"""
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
"""
Explanation: The Google trends data has a special category for the whole of the Germany - we'll pull that out so we can use it explicitly.
End of explanation
"""
store = join_df(store, store_states, "Store")
len(store[store.State.isnull()])
joined = join_df(train, store, "Store")
joined_test = join_df(test, store, "Store")
len(joined[joined.StoreType.isnull()]),len(joined_test[joined_test.StoreType.isnull()])
joined = join_df(joined, googletrend, ["State","Year", "Week"])
joined_test = join_df(joined_test, googletrend, ["State","Year", "Week"])
len(joined[joined.trend.isnull()]),len(joined_test[joined_test.trend.isnull()])
joined = joined.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
joined_test = joined_test.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()]),len(joined_test[joined_test.trend_DE.isnull()])
joined = join_df(joined, weather, ["State","Date"])
joined_test = join_df(joined_test, weather, ["State","Date"])
len(joined[joined.Mean_TemperatureC.isnull()]),len(joined_test[joined_test.Mean_TemperatureC.isnull()])
for df in (joined, joined_test):
for c in df.columns:
if c.endswith('_y'):
if c in df.columns: df.drop(c, inplace=True, axis=1)
"""
Explanation: Now we can outer join all of our data into a single dataframe. Recall that in outer joins, every time a value in the joining field on the left table does not have a corresponding value on the right table, the corresponding row in the new table has Null values for all right table fields. One way to check that all records are consistent and complete is to check for Null values post-join, as we do here.
Aside: Why not just do an inner join?
If you are assuming that all records are complete and match on the field you desire, an inner join will do the same thing as an outer join. However, in the event you are wrong or a mistake is made, an outer join followed by a null-check will catch it. (Comparing before/after # of rows for inner join is equivalent, but requires keeping track of before/after row #'s. Outer join is easier.)
End of explanation
"""
for df in (joined,joined_test):
df['CompetitionOpenSinceYear'] = df.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
df['CompetitionOpenSinceMonth'] = df.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
df['Promo2SinceYear'] = df.Promo2SinceYear.fillna(1900).astype(np.int32)
df['Promo2SinceWeek'] = df.Promo2SinceWeek.fillna(1).astype(np.int32)
"""
Explanation: Next we'll fill in missing values to avoid complications with NA's. NA (not available) is how Pandas indicates missing values; many models have problems when missing values are present, so it's always important to think about how to deal with them. In these cases, we are picking an arbitrary signal value that doesn't otherwise appear in the data.
End of explanation
"""
for df in (joined,joined_test):
df["CompetitionOpenSince"] = pd.to_datetime(dict(year=df.CompetitionOpenSinceYear,
month=df.CompetitionOpenSinceMonth, day=15))
df["CompetitionDaysOpen"] = df.Date.subtract(df.CompetitionOpenSince).dt.days
"""
Explanation: Next we'll extract features "CompetitionOpenSince" and "CompetitionDaysOpen". Note the use of pd.to_datetime() on a dict of year/month/day columns, which assembles the dates in a single vectorized step rather than applying a function row by row.
End of explanation
"""
for df in (joined,joined_test):
df.loc[df.CompetitionDaysOpen<0, "CompetitionDaysOpen"] = 0
df.loc[df.CompetitionOpenSinceYear<1990, "CompetitionDaysOpen"] = 0
"""
Explanation: We'll replace some erroneous / outlying data.
End of explanation
"""
for df in (joined,joined_test):
df["CompetitionMonthsOpen"] = df["CompetitionDaysOpen"]//30
df.loc[df.CompetitionMonthsOpen>24, "CompetitionMonthsOpen"] = 24
joined.CompetitionMonthsOpen.unique()
"""
Explanation: We add "CompetitionMonthsOpen" field, limiting the maximum to 2 years to limit number of unique categories.
End of explanation
"""
for df in (joined,joined_test):
df["Promo2Since"] = pd.to_datetime(df.apply(lambda x: Week(
x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1).astype(pd.datetime))
df["Promo2Days"] = df.Date.subtract(df["Promo2Since"]).dt.days
for df in (joined,joined_test):
df.loc[df.Promo2Days<0, "Promo2Days"] = 0
df.loc[df.Promo2SinceYear<1990, "Promo2Days"] = 0
df["Promo2Weeks"] = df["Promo2Days"]//7
df.loc[df.Promo2Weeks<0, "Promo2Weeks"] = 0
df.loc[df.Promo2Weeks>25, "Promo2Weeks"] = 25
df.Promo2Weeks.unique()
joined.to_feather(f'{PATH}joined')
joined_test.to_feather(f'{PATH}joined_test')
"""
Explanation: Same process for Promo dates.
End of explanation
"""
def get_elapsed(fld, pre):
day1 = np.timedelta64(1, 'D')
last_date = np.datetime64()
last_store = 0
res = []
for s,v,d in zip(df.Store.values,df[fld].values, df.Date.values):
if s != last_store:
last_date = np.datetime64()
last_store = s
if v: last_date = d
res.append(((d-last_date).astype('timedelta64[D]') / day1))
df[pre+fld] = res
"""
Explanation: Durations
It is common when working with time series data to extract data that explains relationships across rows as opposed to columns, e.g.:
* Running averages
* Time until next event
* Time since last event
This is often difficult to do with most table manipulation frameworks, since they are designed to work with relationships across columns. As such, we've written a function to handle this type of data.
We'll define a function get_elapsed for cumulative counting across a sorted dataframe. Given a particular field fld to monitor, this function will start tracking time since the last occurrence of that field. When the field is seen again, the counter is set to zero.
Upon initialization, this will result in datetime na's until the field is encountered. This is reset every time a new store is seen. We'll see how to use this shortly.
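Here is a minimal illustration of the idea on toy data (hypothetical stores and dates); the loop mirrors get_elapsed above for a single monitored field, 'Promo':

```python
import numpy as np
import pandas as pd

# Toy frame: two stores, sorted by Store then Date, with a boolean field.
toy = pd.DataFrame({
    'Store': [1, 1, 1, 2, 2],
    'Date': pd.to_datetime(['2015-01-01', '2015-01-02', '2015-01-05',
                            '2015-01-01', '2015-01-03']),
    'Promo': [1, 0, 0, 0, 1],
})

day1 = np.timedelta64(1, 'D')
last_date = np.datetime64()   # NaT until the field is first seen
last_store = 0
res = []
for s, v, d in zip(toy.Store.values, toy.Promo.values, toy.Date.values):
    if s != last_store:       # reset the tracker for each new store
        last_date = np.datetime64()
        last_store = s
    if v:
        last_date = d
    res.append((d - last_date).astype('timedelta64[D]') / day1)
toy['AfterPromo'] = res
print(toy.AfterPromo.tolist())  # [0.0, 1.0, 4.0, nan, 0.0]
```

Note how store 2's first row is NaN: no Promo has been seen yet for that store, which is exactly the "datetime na's until the field is encountered" behavior described above.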
End of explanation
"""
columns = ["Date", "Store", "Promo", "StateHoliday", "SchoolHoliday"]
#df = train[columns]
df = train[columns].append(test[columns])
"""
Explanation: We'll be applying this to a subset of columns:
End of explanation
"""
fld = 'SchoolHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
"""
Explanation: Let's walk through an example.
Say we're looking at School Holiday. We'll first sort by Store, then Date, and then call get_elapsed('SchoolHoliday', 'After'):
This will apply to each row with School Holiday:
* It is applied to every row of the dataframe in order of store and date
* It will add to the dataframe the days since last seeing a School Holiday
* If we sort in the other direction, it will instead count the days until the next School Holiday.
End of explanation
"""
fld = 'StateHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
fld = 'Promo'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
"""
Explanation: We'll do this for two more fields.
End of explanation
"""
df = df.set_index("Date")
"""
Explanation: We're going to set the active index to Date.
End of explanation
"""
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
for o in ['Before', 'After']:
for p in columns:
a = o+p
df[a] = df[a].fillna(0).astype(int)
"""
Explanation: Then set null values from elapsed field calculations to 0.
End of explanation
"""
bwd = df[['Store']+columns].sort_index().groupby("Store").rolling(7, min_periods=1).sum()
fwd = df[['Store']+columns].sort_index(ascending=False
).groupby("Store").rolling(7, min_periods=1).sum()
"""
Explanation: Next we'll demonstrate window functions in pandas to calculate rolling quantities.
Here we're sorting by date (sort_index()) and counting the number of events of interest (sum()) defined in columns in the following week (rolling()), grouped by Store (groupby()). We do the same in the opposite direction.
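A toy sketch of the per-store rolling sum (hypothetical data, one store, 4 days):

```python
import pandas as pd

# 7-day rolling sum of Promo events, grouped by store.
toy = pd.DataFrame(
    {'Store': [1, 1, 1, 1], 'Promo': [1, 0, 1, 1]},
    index=pd.date_range('2015-01-01', periods=4, name='Date'))

rolled = toy.groupby('Store').Promo.rolling(7, min_periods=1).sum()
print(rolled.tolist())  # [1.0, 1.0, 2.0, 3.0]
```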
End of explanation
"""
bwd.drop('Store',1,inplace=True)
bwd.reset_index(inplace=True)
fwd.drop('Store',1,inplace=True)
fwd.reset_index(inplace=True)
df.reset_index(inplace=True)
"""
Explanation: Next we want to drop the Store indices grouped together in the window function.
Often in pandas, there is an option to do this in place. This is time and memory efficient when working with large datasets.
End of explanation
"""
df = df.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
df = df.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
df.drop(columns,1,inplace=True)
df.head()
"""
Explanation: Now we'll merge these values onto the df.
End of explanation
"""
df.to_feather(f'{PATH}df')
df = pd.read_feather(f'{PATH}df')
df["Date"] = pd.to_datetime(df.Date)
df.columns
joined = join_df(joined, df, ['Store', 'Date'])
joined_test = join_df(joined_test, df, ['Store', 'Date'])
"""
Explanation: It's usually a good idea to back up large tables of extracted / wrangled features before you join them onto another one, that way you can go back to it easily if you need to make changes to it.
End of explanation
"""
joined = joined[joined.Sales!=0]
"""
Explanation: The authors also removed all instances where the store had zero sales / was closed. We speculate that this may have cost them a higher standing in the competition. One reason this may be the case is that a little exploratory data analysis reveals that there are often periods where stores are closed, typically for refurbishment. Before and after these periods, there are natural spikes in sales, as one might expect. By omitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.
End of explanation
"""
joined.reset_index(inplace=True)
joined_test.reset_index(inplace=True)
joined.to_feather(f'{PATH}joined')
joined_test.to_feather(f'{PATH}joined_test')
"""
Explanation: We'll back this up as well.
End of explanation
"""
joined = pd.read_feather(f'{PATH}joined')
joined_test = pd.read_feather(f'{PATH}joined_test')
joined.head().T.head(40)
"""
Explanation: We now have our final set of engineered features.
While these steps were explicitly outlined in the paper, these are all fairly typical feature engineering steps for dealing with time series data and are practical in any similar setting.
Create features
End of explanation
"""
cat_vars = ['Store', 'DayOfWeek', 'Year', 'Month', 'Day', 'StateHoliday', 'CompetitionMonthsOpen',
'Promo2Weeks', 'StoreType', 'Assortment', 'PromoInterval', 'CompetitionOpenSinceYear', 'Promo2SinceYear',
'State', 'Week', 'Events', 'Promo_fw', 'Promo_bw', 'StateHoliday_fw', 'StateHoliday_bw',
'SchoolHoliday_fw', 'SchoolHoliday_bw']
contin_vars = ['CompetitionDistance', 'Max_TemperatureC', 'Mean_TemperatureC', 'Min_TemperatureC',
'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h',
'Mean_Wind_SpeedKm_h', 'CloudCover', 'trend', 'trend_DE',
'AfterStateHoliday', 'BeforeStateHoliday', 'Promo', 'SchoolHoliday']
n = len(joined); n
dep = 'Sales'
joined = joined[cat_vars+contin_vars+[dep, 'Date']].copy()
joined_test[dep] = 0
joined_test = joined_test[cat_vars+contin_vars+[dep, 'Date', 'Id']].copy()
for v in cat_vars: joined[v] = joined[v].astype('category').cat.as_ordered()
apply_cats(joined_test, joined)
for v in contin_vars:
joined[v] = joined[v].fillna(0).astype('float32')
joined_test[v] = joined_test[v].fillna(0).astype('float32')
"""
Explanation: Now that we've engineered all our features, we need to convert to input compatible with a neural network.
This includes converting categorical variables into contiguous integers or one-hot encodings, normalizing continuous features to standard normal, etc...
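A toy sketch of the categorical-to-contiguous-integer step (the continuous normalization happens later in proc_df):

```python
import pandas as pd

# Categorical levels become contiguous integer codes.
s = pd.Series(['a', 'c', 'a', 'b']).astype('category').cat.as_ordered()
print(list(s.cat.categories))  # ['a', 'b', 'c']
print(s.cat.codes.tolist())    # [0, 2, 0, 1]
```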
End of explanation
"""
idxs = get_cv_idxs(n, val_pct=150000/n)
joined_samp = joined.iloc[idxs].set_index("Date")
samp_size = len(joined_samp); samp_size
"""
Explanation: We're going to run on a sample.
End of explanation
"""
samp_size = n
joined_samp = joined.set_index("Date")
"""
Explanation: To run on the full dataset, use this instead:
End of explanation
"""
joined_samp.head(2)
df, y, nas, mapper = proc_df(joined_samp, 'Sales', do_scale=True)
yl = np.log(y)
joined_test = joined_test.set_index("Date")
df_test, _, nas, mapper = proc_df(joined_test, 'Sales', do_scale=True, skip_flds=['Id'],
mapper=mapper, na_dict=nas)
df.head(2)
"""
Explanation: We can now process our data...
End of explanation
"""
train_ratio = 0.75
# train_ratio = 0.9
train_size = int(samp_size * train_ratio); train_size
val_idx = list(range(train_size, len(df)))
"""
Explanation: In time series data, cross-validation is not random. Instead, our holdout data is generally the most recent data, as it would be in a real application. This issue is discussed in detail in this post on our web site.
One approach is to take the last 25% of rows (sorted by date) as our validation set.
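The index arithmetic for that split looks like this (toy row count assumed, rows sorted by date):

```python
# Last 25% of rows (by date order) become the validation set.
n = 8
train_size = int(n * 0.75)
val_idx = list(range(train_size, n))
print(train_size, val_idx)  # 6 [6, 7]
```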
End of explanation
"""
val_idx = np.flatnonzero(
(df.index<=datetime.datetime(2014,9,17)) & (df.index>=datetime.datetime(2014,8,1)))
val_idx=[0]
"""
Explanation: An even better option for picking a validation set is using the exact same length of time period as the test set uses - this is implemented here:
End of explanation
"""
def inv_y(a): return np.exp(a)
def exp_rmspe(y_pred, targ):
targ = inv_y(targ)
pct_var = (targ - inv_y(y_pred))/targ
return math.sqrt((pct_var**2).mean())
max_log_y = np.max(yl)
y_range = (0, max_log_y*1.2)
"""
Explanation: DL
We're ready to put together our models.
Root-mean-squared percent error is the metric Kaggle used for this competition.
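A quick numeric sanity check of the metric, re-sketched with NumPy (toy predictions assumed; mirrors exp_rmspe above):

```python
import numpy as np

# RMSPE computed on log-space targets, as in exp_rmspe.
def rmspe_from_logs(y_pred_log, targ_log):
    targ = np.exp(targ_log)
    pct_var = (targ - np.exp(y_pred_log)) / targ
    return float(np.sqrt((pct_var ** 2).mean()))

targ_log = np.log(np.array([100.0, 200.0]))
pred_log = np.log(np.array([110.0, 180.0]))
print(rmspe_from_logs(pred_log, targ_log))  # 0.1 (10% error on each row)
```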
End of explanation
"""
md = ColumnarModelData.from_data_frame(PATH, val_idx, df, yl.astype(np.float32), cat_flds=cat_vars, bs=128,
test_df=df_test)
"""
Explanation: We can create a ModelData object directly from our data frame.
End of explanation
"""
cat_sz = [(c, len(joined_samp[c].cat.categories)+1) for c in cat_vars]
cat_sz
"""
Explanation: Some categorical variables have a lot more levels than others. Store, in particular, has over a thousand!
End of explanation
"""
emb_szs = [(c, min(50, (c+1)//2)) for _,c in cat_sz]
emb_szs
m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
m.summary()
lr = 1e-3
m.lr_find()
m.sched.plot(100)
"""
Explanation: We use the cardinality of each variable (that is, its number of unique values) to decide how large to make its embeddings. Each level will be associated with a vector with length defined as below.
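The rule used above, in isolation: embedding width is half the cardinality (rounded up), capped at 50.

```python
# Embedding-size heuristic from the cell above.
def emb_sz(c):
    return min(50, (c + 1) // 2)

print([emb_sz(c) for c in (2, 7, 1000)])  # [1, 4, 50]
```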
End of explanation
"""
m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
lr = 1e-3
m.fit(lr, 3, metrics=[exp_rmspe])
m.fit(lr, 5, metrics=[exp_rmspe], cycle_len=1)
m.fit(lr, 2, metrics=[exp_rmspe], cycle_len=4)
"""
Explanation: Sample
End of explanation
"""
m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
lr = 1e-3
m.fit(lr, 1, metrics=[exp_rmspe])
m.fit(lr, 3, metrics=[exp_rmspe])
m.fit(lr, 3, metrics=[exp_rmspe], cycle_len=1)
"""
Explanation: All
End of explanation
"""
m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
lr = 1e-3
m.fit(lr, 3, metrics=[exp_rmspe])
m.fit(lr, 3, metrics=[exp_rmspe], cycle_len=1)
m.save('val0')
m.load('val0')
x,y=m.predict_with_targs()
exp_rmspe(x,y)
pred_test=m.predict(True)
pred_test = np.exp(pred_test)
joined_test['Sales']=pred_test
csv_fn=f'{PATH}tmp/sub.csv'
joined_test[['Id','Sales']].to_csv(csv_fn, index=False)
FileLink(csv_fn)
"""
Explanation: Test
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
((val,trn), (y_val,y_trn)) = split_by_idx(val_idx, df.values, yl)
m = RandomForestRegressor(n_estimators=40, max_features=0.99, min_samples_leaf=2,
n_jobs=-1, oob_score=True)
m.fit(trn, y_trn);
preds = m.predict(val)
m.score(trn, y_trn), m.score(val, y_val), m.oob_score_, exp_rmspe(preds, y_val)
"""
Explanation: RF
End of explanation
"""
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
# Must use the Python context because it
# supports tff.sequence_* intrinsics.
executor_factory = tff.framework.local_executor_factory(
support_sequence_ops=True)
execution_context = tff.framework.ExecutionContext(
executor_fn=executor_factory)
tff.framework.set_default_context(execution_context)
@tff.federated_computation
def hello_world():
return 'Hello, World!'
hello_world()
"""
Explanation: Custom Federated Algorithms, Part 2: Implementing Federated Averaging
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_2"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/v0.27.0/docs/tutorials/custom_federated_algorithms_2.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/federated/blob/v0.27.0/docs/tutorials/custom_federated_algorithms_2.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/federated/docs/tutorials/custom_federated_algorithms_2.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial is the second part of a two-part series that demonstrates how to
implement custom types of federated algorithms in TFF using the
Federated Core (FC), which serves as a foundation for
the Federated Learning (FL) layer (tff.learning).
We encourage you to first read the
first part of this series, which
introduces some of the key concepts and programming abstractions used here.
This second part of the series uses the mechanisms introduced in the first part
to implement a simple version of federated training and evaluation algorithms.
We encourage you to review the
image classification and
text generation tutorials for a
higher-level and more gentle introduction to TFF's Federated Learning APIs, as
they will help you put the concepts we describe here in context.
Before we start
Before we start, try to run the following "Hello World" example to make sure
your environment is correctly setup. If it doesn't work, please refer to the
Installation guide for instructions.
End of explanation
"""
mnist_train, mnist_test = tf.keras.datasets.mnist.load_data()
[(x.dtype, x.shape) for x in mnist_train]
"""
Explanation: Implementing Federated Averaging
As in
Federated Learning for Image Classification,
we are going to use the MNIST example, but since this is intended as a low-level
tutorial, we are going to bypass the Keras API and tff.simulation, write raw
model code, and construct a federated data set from scratch.
Preparing federated data sets
For the sake of a demonstration, we're going to simulate a scenario in which we
have data from 10 users, and each of the users contributes knowledge of how to
recognize a different digit. This is about as
non-i.i.d.
as it gets.
First, let's load the standard MNIST data:
End of explanation
"""
NUM_EXAMPLES_PER_USER = 1000
BATCH_SIZE = 100
def get_data_for_digit(source, digit):
output_sequence = []
all_samples = [i for i, d in enumerate(source[1]) if d == digit]
for i in range(0, min(len(all_samples), NUM_EXAMPLES_PER_USER), BATCH_SIZE):
batch_samples = all_samples[i:i + BATCH_SIZE]
output_sequence.append({
'x':
np.array([source[0][i].flatten() / 255.0 for i in batch_samples],
dtype=np.float32),
'y':
np.array([source[1][i] for i in batch_samples], dtype=np.int32)
})
return output_sequence
federated_train_data = [get_data_for_digit(mnist_train, d) for d in range(10)]
federated_test_data = [get_data_for_digit(mnist_test, d) for d in range(10)]
"""
Explanation: The data comes as Numpy arrays, one with images and another with digit labels, both
with the first dimension going over the individual examples. Let's write a
helper function that formats it in a way compatible with how we feed federated
sequences into TFF computations, i.e., as a list of lists - the outer list
ranging over the users (digits), the inner ones ranging over batches of data in
each client's sequence. As is customary, we will structure each batch as a pair
of tensors named x and y, each with the leading batch dimension. While at
it, we'll also flatten each image into a 784-element vector and rescale the
pixels in it into the 0..1 range, so that we don't have to clutter the model
logic with data conversions.
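The flatten-and-rescale step in isolation, on a hypothetical 2x2 "image" standing in for the 28x28 MNIST digits:

```python
import numpy as np

# Flatten each image and rescale pixels from 0..255 into 0..1.
images = np.array([[[0, 255], [128, 64]]], dtype=np.uint8)
flat = np.array([img.flatten() / 255.0 for img in images], dtype=np.float32)
print(flat.shape)  # (1, 4); real MNIST digits give rows of length 784
```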
End of explanation
"""
federated_train_data[5][-1]['y']
"""
Explanation: As a quick sanity check, let's look at the Y tensor in the last batch of data
contributed by the fifth client (the one corresponding to the digit 5).
End of explanation
"""
from matplotlib import pyplot as plt
plt.imshow(federated_train_data[5][-1]['x'][-1].reshape(28, 28), cmap='gray')
plt.grid(False)
plt.show()
"""
Explanation: Just to be sure, let's also look at the image corresponding to the last element of that batch.
End of explanation
"""
BATCH_SPEC = collections.OrderedDict(
x=tf.TensorSpec(shape=[None, 784], dtype=tf.float32),
y=tf.TensorSpec(shape=[None], dtype=tf.int32))
BATCH_TYPE = tff.to_type(BATCH_SPEC)
str(BATCH_TYPE)
"""
Explanation: On combining TensorFlow and TFF
In this tutorial, for compactness we immediately decorate functions that
introduce TensorFlow logic with tff.tf_computation. However, for more complex
logic, this is not the pattern we recommend. Debugging TensorFlow can already be
a challenge, and debugging TensorFlow after it has been fully serialized and
then re-imported necessarily loses some metadata and limits interactivity,
making debugging even more of a challenge.
Therefore, we strongly recommend writing complex TF logic as stand-alone
Python functions (that is, without tff.tf_computation decoration). This way
the TensorFlow logic can be developed and tested using TF best practices and
tools (like eager mode), before serializing the computation for TFF (e.g., by invoking tff.tf_computation with a Python function as the argument).
Defining a loss function
Now that we have the data, let's define a loss function that we can use for
training. First, let's define the type of input as a TFF named tuple. Since the
size of data batches may vary, we set the batch dimension to None to indicate
that the size of this dimension is unknown.
End of explanation
"""
MODEL_SPEC = collections.OrderedDict(
weights=tf.TensorSpec(shape=[784, 10], dtype=tf.float32),
bias=tf.TensorSpec(shape=[10], dtype=tf.float32))
MODEL_TYPE = tff.to_type(MODEL_SPEC)
print(MODEL_TYPE)
"""
Explanation: You may be wondering why we can't just define an ordinary Python type. Recall
the discussion in part 1, where we
explained that while we can express the logic of TFF computations using Python,
under the hood TFF computations are not Python. The symbol BATCH_TYPE
defined above represents an abstract TFF type specification. It is important to
distinguish this abstract TFF type from concrete Python representation
types, e.g., containers such as dict or collections.namedtuple that may be
used to represent the TFF type in the body of a Python function. Unlike Python,
TFF has a single abstract type constructor tff.StructType for tuple-like
containers, with elements that can be individually named or left unnamed. This
type is also used to model formal parameters of computations, as TFF
computations can formally only declare one parameter and one result - you will
see examples of this shortly.
Let's now define the TFF type of model parameters, again as a TFF named tuple of
weights and bias.
End of explanation
"""
# NOTE: `forward_pass` is defined separately from `batch_loss` so that it can
# be later called from within another tf.function. Necessary because a
# @tf.function decorated method cannot invoke a @tff.tf_computation.
@tf.function
def forward_pass(model, batch):
predicted_y = tf.nn.softmax(
tf.matmul(batch['x'], model['weights']) + model['bias'])
return -tf.reduce_mean(
tf.reduce_sum(
tf.one_hot(batch['y'], 10) * tf.math.log(predicted_y), axis=[1]))
@tff.tf_computation(MODEL_TYPE, BATCH_TYPE)
def batch_loss(model, batch):
return forward_pass(model, batch)
"""
Explanation: With those definitions in place, now we can define the loss for a given model, over a single batch. Note the usage of the @tf.function decorator inside the @tff.tf_computation decorator. This allows us to write TF using Python-like semantics even though we're inside a tf.Graph context created by the tff.tf_computation decorator.
End of explanation
"""
str(batch_loss.type_signature)
"""
Explanation: As expected, computation batch_loss returns float32 loss given the model and
a single data batch. Note how the MODEL_TYPE and BATCH_TYPE have been lumped
together into a 2-tuple of formal parameters; you can recognize the type of
batch_loss as (<MODEL_TYPE,BATCH_TYPE> -> float32).
End of explanation
"""
initial_model = collections.OrderedDict(
weights=np.zeros([784, 10], dtype=np.float32),
bias=np.zeros([10], dtype=np.float32))
sample_batch = federated_train_data[5][-1]
batch_loss(initial_model, sample_batch)
"""
Explanation: As a sanity check, let's construct an initial model filled with zeros and
compute the loss over the batch of data we visualized above.
End of explanation
"""
@tff.tf_computation(MODEL_TYPE, BATCH_TYPE, tf.float32)
def batch_train(initial_model, batch, learning_rate):
# Define a group of model variables and set them to `initial_model`. Must
# be defined outside the @tf.function.
model_vars = collections.OrderedDict([
(name, tf.Variable(name=name, initial_value=value))
for name, value in initial_model.items()
])
optimizer = tf.keras.optimizers.SGD(learning_rate)
@tf.function
def _train_on_batch(model_vars, batch):
# Perform one step of gradient descent using loss from `batch_loss`.
with tf.GradientTape() as tape:
loss = forward_pass(model_vars, batch)
grads = tape.gradient(loss, model_vars)
optimizer.apply_gradients(
zip(tf.nest.flatten(grads), tf.nest.flatten(model_vars)))
return model_vars
return _train_on_batch(model_vars, batch)
str(batch_train.type_signature)
"""
Explanation: Note that we feed the TFF computation with the initial model defined as a
dict, even though the body of the Python function that defines it consumes
model parameters as model['weights'] and model['bias']. The arguments of the call
to batch_loss aren't simply passed to the body of that function.
What happens when we invoke batch_loss?
The Python body of batch_loss has already been traced and serialized in the above cell where it was defined. TFF acts as the caller to batch_loss
at the computation definition time, and as the target of invocation at the time
batch_loss is invoked. In both roles, TFF serves as the bridge between TFF's
abstract type system and Python representation types. At the invocation time,
TFF will accept most standard Python container types (dict, list, tuple,
collections.namedtuple, etc.) as concrete representations of abstract TFF
tuples. Also, although as noted above, TFF computations formally only accept a
single parameter, you can use the familiar Python call syntax with positional
and/or keyword arguments in case where the type of the parameter is a tuple - it
works as expected.
Gradient descent on a single batch
Now, let's define a computation that uses this loss function to perform a single
step of gradient descent. Note how in defining this function, we use
batch_loss as a subcomponent. You can invoke a computation constructed with
tff.tf_computation inside the body of another computation, though typically
this is not necessary - as noted above, because serialization loses some
debugging information, it is often preferable for more complex computations to
write and test all the TensorFlow without the tff.tf_computation decorator.
End of explanation
"""
model = initial_model
losses = []
for _ in range(5):
model = batch_train(model, sample_batch, 0.1)
losses.append(batch_loss(model, sample_batch))
losses
"""
Explanation: When you invoke a Python function decorated with tff.tf_computation within the
body of another such function, the logic of the inner TFF computation is
embedded (essentially, inlined) in the logic of the outer one. As noted above,
if you are writing both computations, it is likely preferable to make the inner
function (batch_loss in this case) a regular Python function or tf.function rather
than a tff.tf_computation. However, here we illustrate that calling one
tff.tf_computation inside another basically works as expected. This may be
necessary if, for example, you do not have the Python code defining
batch_loss, but only its serialized TFF representation.
Now, let's apply this function a few times to the initial model to see whether
the loss decreases.
End of explanation
"""
LOCAL_DATA_TYPE = tff.SequenceType(BATCH_TYPE)
@tff.federated_computation(MODEL_TYPE, tf.float32, LOCAL_DATA_TYPE)
def local_train(initial_model, learning_rate, all_batches):
@tff.tf_computation(LOCAL_DATA_TYPE, tf.float32)
def _insert_learning_rate_to_sequence(dataset, learning_rate):
return dataset.map(lambda x: (x, learning_rate))
batches_with_learning_rate = _insert_learning_rate_to_sequence(all_batches, learning_rate)
# Mapping function to apply to each batch.
@tff.federated_computation(MODEL_TYPE, batches_with_learning_rate.type_signature.element)
def batch_fn(model, batch_with_lr):
batch, lr = batch_with_lr
return batch_train(model, batch, lr)
return tff.sequence_reduce(batches_with_learning_rate, initial_model, batch_fn)
str(local_train.type_signature)
"""
Explanation: Gradient descent on a sequence of local data
Now, since batch_train appears to work, let's write a similar training
function local_train that consumes the entire sequence of all batches from one
user instead of just a single batch. The new computation will need to now
consume tff.SequenceType(BATCH_TYPE) instead of BATCH_TYPE.
End of explanation
"""
locally_trained_model = local_train(initial_model, 0.1, federated_train_data[5])
"""
Explanation: There are quite a few details buried in this short section of code, let's go
over them one by one.
First, while we could have implemented this logic entirely in TensorFlow,
relying on tf.data.Dataset.reduce to process the sequence similarly to how
we've done it earlier, we've opted this time to express the logic in the glue
language, as a tff.federated_computation. We've used the federated operator
tff.sequence_reduce to perform the reduction.
The operator tff.sequence_reduce is used similarly to
tf.data.Dataset.reduce. You can think of it as essentially the same as
tf.data.Dataset.reduce, but for use inside federated computations, which as
you may remember, cannot contain TensorFlow code. It is a template operator with
a formal parameter 3-tuple that consists of a sequence of T-typed elements,
the initial state of the reduction (we'll refer to it abstractly as zero) of
some type U, and the reduction operator of type (<U,T> -> U) that alters the
state of the reduction by processing a single element. The result is the final
state of the reduction, after processing all elements in a sequential order. In
our example, the state of the reduction is the model trained on a prefix of the
data, and the elements are data batches.
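The reduction semantics can be sketched in pure Python (a stand-in for the TFF operator, not the real intrinsic):

```python
from functools import reduce

# Fold an op of type (<U,T> -> U) over a sequence of T-typed elements,
# starting from a zero of type U; the result is the final state.
def sequence_reduce(elements, zero, op):
    return reduce(op, elements, zero)

# Hypothetical stand-in: "training" just accumulates batch sizes.
batches = [3, 5, 2]
final_state = sequence_reduce(batches, 0, lambda state, batch: state + batch)
print(final_state)  # 10
```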
Second, note that we have again used one computation (batch_train) as a
component within another (local_train), but not directly. We can't use it as a
reduction operator because it takes an additional parameter - the learning rate.
To resolve this, we first map the dataset so that every element is paired with
the learning rate (_insert_learning_rate_to_sequence), and then define an
embedded federated computation batch_fn that unpacks each (batch, learning
rate) pair and forwards it to batch_train. Conceptually, this plays the role
that functools.partial plays in ordinary Python: binding the extra learning-rate
argument so that the remaining signature fits the reduction operator.
The practical implication of threading learning_rate through the sequence this
way is, of course, that the same learning rate value is used across all batches.
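The functools.partial analogy mentioned above, sketched with a toy stand-in for batch_train (the function body here is hypothetical):

```python
from functools import partial

# Bind the extra learning-rate argument so the remaining
# (state, element) signature fits a reduction operator.
def batch_train(model, batch, lr):
    return model + lr * batch   # toy "training step"

batch_fn = partial(batch_train, lr=0.1)
print(batch_fn(0.0, 5.0))  # 0.5
```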
Now, let's try the newly defined local training function on the entire sequence
of data from the same user who contributed the sample batch (digit 5).
End of explanation
"""
@tff.federated_computation(MODEL_TYPE, LOCAL_DATA_TYPE)
def local_eval(model, all_batches):
@tff.tf_computation(MODEL_TYPE, LOCAL_DATA_TYPE)
def _insert_model_to_sequence(model, dataset):
return dataset.map(lambda x: (model, x))
model_plus_data = _insert_model_to_sequence(model, all_batches)
@tff.tf_computation(tf.float32, batch_loss.type_signature.result)
def tff_add(accumulator, arg):
return accumulator + arg
return tff.sequence_reduce(
tff.sequence_map(
batch_loss,
model_plus_data), 0., tff_add)
str(local_eval.type_signature)
"""
Explanation: Did it work? To answer this question, we need to implement evaluation.
Local evaluation
Here's one way to implement local evaluation by adding up the losses across all data
batches (we could have just as well computed the average; we'll leave it as an
exercise for the reader).
End of explanation
"""
print('initial_model loss =', local_eval(initial_model,
federated_train_data[5]))
print('locally_trained_model loss =',
local_eval(locally_trained_model, federated_train_data[5]))
"""
Explanation: Again, there are a few new elements illustrated by this code, let's go over them
one by one.
First, we have used two federated operators for processing sequences:
tff.sequence_map, which takes a mapping function T->U and a sequence of
T and emits a sequence of U obtained by applying the mapping function
pointwise, and tff.sequence_reduce with a simple addition reducer (tff_add).
Here, we map each data batch to a loss value, and then add the resulting loss
values to compute the total loss.
Note that TFF also offers tff.sequence_sum, which adds up the elements of a
sequence directly. A reduction process is, by definition, sequential, whereas a
mapping followed by a sum can be computed in parallel, so when given a choice,
it's best to stick with operators that don't constrain implementation choices.
That way, when our TFF computation is compiled in the future to be deployed to
a specific environment, one can take full advantage of all potential
opportunities for a faster, more scalable, more resource-efficient execution.
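The map-then-add shape of this computation, sketched in pure Python with a toy stand-in for batch_loss:

```python
# Map each batch to a loss value, then add the losses up.
batch_loss = lambda model, batch: (model - batch) ** 2  # toy "loss"
model = 1.0
batches = [0.0, 2.0, 3.0]
total = sum(map(lambda b: batch_loss(model, b), batches))
print(total)  # 6.0
```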
Second, note that just as in local_train, the component function we need
(batch_loss) takes more parameters than what the federated operator
(tff.sequence_map) expects, so we first pair the model with each batch
(_insert_model_to_sequence) and then map batch_loss directly over the
resulting sequence of (model, batch) pairs. In general, writing stand-alone
TensorFlow functions and wrapping them with tff.tf_computation in this way is
the recommended approach for embedding TensorFlow logic in TFF.
Now, let's see whether our training worked.
End of explanation
"""
print('initial_model loss =', local_eval(initial_model,
federated_train_data[0]))
print('locally_trained_model loss =',
local_eval(locally_trained_model, federated_train_data[0]))
"""
Explanation: Indeed, the loss decreased. But what happens if we evaluated it on another
user's data?
End of explanation
"""
SERVER_MODEL_TYPE = tff.type_at_server(MODEL_TYPE)
CLIENT_DATA_TYPE = tff.type_at_clients(LOCAL_DATA_TYPE)
"""
Explanation: As expected, things got worse. The model was trained to recognize 5, and has
never seen a 0. This raises the question: how did the local training impact
the quality of the model from the global perspective?
Federated evaluation
This is the point in our journey where we finally circle back to federated types
and federated computations - the topic that we started with. Here's a pair of
TFF types definitions for the model that originates at the server, and the data
that remains on the clients.
End of explanation
"""
@tff.federated_computation(SERVER_MODEL_TYPE, CLIENT_DATA_TYPE)
def federated_eval(model, data):
return tff.federated_mean(
tff.federated_map(local_eval, [tff.federated_broadcast(model), data]))
"""
Explanation: With all the definitions introduced so far, expressing federated evaluation in
TFF is a one-liner - we distribute the model to clients, let each client invoke
local evaluation on its local portion of data, and then average out the loss.
Here's one way to write this.
End of explanation
"""
print('initial_model loss =', federated_eval(initial_model,
federated_train_data))
print('locally_trained_model loss =',
federated_eval(locally_trained_model, federated_train_data))
"""
Explanation: We've already seen examples of tff.federated_mean and tff.federated_map
in simpler scenarios, and at the intuitive level, they work as expected, but
there's more in this section of code than meets the eye, so let's go over it
carefully.
First, let's break down the "let each client invoke local evaluation on its
local portion of data" part. As you may recall from the preceding sections,
local_eval has a type signature of the form (<MODEL_TYPE, LOCAL_DATA_TYPE> ->
float32).
The federated operator tff.federated_map is a template that accepts as a
parameter a 2-tuple that consists of the mapping function of some type T->U
and a federated value of type {T}@CLIENTS (i.e., with member constituents of
the same type as the parameter of the mapping function), and returns a result of
type {U}@CLIENTS.
Since we're feeding local_eval as a mapping function to apply on a per-client
basis, the second argument should be of a federated type {<MODEL_TYPE,
LOCAL_DATA_TYPE>}@CLIENTS, i.e., in the nomenclature of the preceding sections,
it should be a federated tuple. Each client should hold a full set of arguments
for local_eval as a member constituent. Instead, we're feeding it a 2-element
Python list. What's happening here?
Indeed, this is an example of an implicit type cast in TFF, similar to
implicit type casts you may have encountered elsewhere, e.g., when you feed an
int to a function that accepts a float. Implicit casting is used sparingly at
this point, but we plan to make it more pervasive in TFF as a way to minimize
boilerplate.
The implicit cast that's applied in this case is the equivalence between
federated tuples of the form {<X,Y>}@Z, and tuples of federated values
<{X}@Z,{Y}@Z>. While formally, these two are different type signatures,
looking at it from the programmer's perspective, each device in Z holds two
units of data X and Y. What happens here is not unlike zip in Python, and
indeed, we offer an operator tff.federated_zip that allows you to perform such
conversions explicitly. When tff.federated_map encounters a tuple as a
second argument, it simply invokes tff.federated_zip for you.
Given the above, you should now be able to recognize the expression
tff.federated_broadcast(model) as representing a value of TFF type
{MODEL_TYPE}@CLIENTS, and data as a value of TFF type
{LOCAL_DATA_TYPE}@CLIENTS (or simply CLIENT_DATA_TYPE), the two getting
fused together through an implicit tff.federated_zip to form the second
argument to tff.federated_map.
The operator tff.federated_broadcast, as you'd expect, simply transfers data
from the server to the clients.
Now, let's see how our local training affected the average loss in the system.
End of explanation
"""
SERVER_FLOAT_TYPE = tff.type_at_server(tf.float32)
@tff.federated_computation(SERVER_MODEL_TYPE, SERVER_FLOAT_TYPE,
CLIENT_DATA_TYPE)
def federated_train(model, learning_rate, data):
return tff.federated_mean(
tff.federated_map(local_train, [
tff.federated_broadcast(model),
tff.federated_broadcast(learning_rate), data
]))
"""
Explanation: Indeed, as expected, the loss has increased. In order to improve the model for
all users, we'll need to train it on everyone's data.
Federated training
The simplest way to implement federated training is to locally train, and then
average the models. This uses the same building blocks and patterns we've already
discussed, as you can see below.
End of explanation
"""
model = initial_model
learning_rate = 0.1
for round_num in range(5):
model = federated_train(model, learning_rate, federated_train_data)
learning_rate = learning_rate * 0.9
loss = federated_eval(model, federated_train_data)
print('round {}, loss={}'.format(round_num, loss))
"""
Explanation: Note that in the full-featured implementation of Federated Averaging provided by
tff.learning, rather than averaging the models, we prefer to average model
deltas, for a number of reasons, e.g., the ability to clip the update norms,
for compression, etc.
Let's see whether the training works by running a few rounds of training and
comparing the average loss before and after.
End of explanation
"""
print('initial_model test loss =',
federated_eval(initial_model, federated_test_data))
print('trained_model test loss =', federated_eval(model, federated_test_data))
"""
Explanation: For completeness, let's now also run on the test data to confirm that our model
generalizes well.
End of explanation
"""
|
ThyrixYang/LearningNotes | MOOC/stanford_cnn_cs231n/assignment2/ConvolutionalNetworks.ipynb | gpl-3.0 | # As usual, a bit of setup
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
"""
Explanation: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
End of explanation
"""
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]])
# Compare your output to ours; difference should be around 2e-8
print('Testing conv_forward_naive')
print('difference: ', rel_error(out, correct_out))
"""
Explanation: Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
You can test your implementation by running the following:
End of explanation
"""
from scipy.misc import imread, imresize
kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d//2:-d//2, :]
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
""" Tiny helper to show images as uint8 and remove axis labels """
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
"""
Explanation: Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
End of explanation
"""
np.random.seed(231)
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-8'
print('Testing conv_backward_naive function')
print('dx error: ', rel_error(dx, dx_num))
print('dw error: ', rel_error(dw, dw_num))
print('db error: ', rel_error(db, db_num))
"""
Explanation: Convolution: Naive backward pass
Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency.
When you are done, run the following to check your backward pass with a numeric gradient check.
End of explanation
"""
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print('Testing max_pool_forward_naive function:')
print('difference: ', rel_error(out, correct_out))
"""
Explanation: Max pooling: Naive forward
Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency.
Check your implementation by running the following:
End of explanation
"""
np.random.seed(231)
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print('Testing max_pool_backward_naive function:')
print('dx error: ', rel_error(dx, dx_num))
"""
Explanation: Max pooling: Naive backward
Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.
Check your implementation with numeric gradient checking by running the following:
End of explanation
"""
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
np.random.seed(231)
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print('Testing conv_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('Difference: ', rel_error(out_naive, out_fast))
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print('\nTesting conv_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
print('dw difference: ', rel_error(dw_naive, dw_fast))
print('db difference: ', rel_error(db_naive, db_fast))
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
np.random.seed(231)
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print('Testing pool_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('fast: %fs' % (t2 - t1))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('difference: ', rel_error(out_naive, out_fast))
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print('\nTesting pool_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('fast: %fs' % (t2 - t1))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
"""
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
bash
python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
End of explanation
"""
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
np.random.seed(231)
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print('Testing conv_relu_pool')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print('Testing conv_relu:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
"""
Explanation: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
End of explanation
"""
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print('Initial loss (no regularization): ', loss)
model.reg = 0.5
loss, grads = model.loss(X, y)
print('Initial loss (with regularization): ', loss)
"""
Explanation: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/classifiers/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:
Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
End of explanation
"""
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
np.random.seed(231)
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
    print('%s max relative error: %e' % (param_name, e))
"""
Explanation: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer. Note: correct implementations may still have relative errors up to 1e-2.
End of explanation
"""
np.random.seed(231)
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=15, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=1)
solver.train()
"""
Explanation: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
End of explanation
"""
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
"""
Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
End of explanation
"""
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=1, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
"""
Explanation: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
End of explanation
"""
from cs231n.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
"""
Explanation: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
End of explanation
"""
np.random.seed(231)
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print('Before spatial batch normalization:')
print(' Shape: ', x.shape)
print(' Means: ', x.mean(axis=(0, 2, 3)))
print(' Stds: ', x.std(axis=(0, 2, 3)))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization:')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(axis=(0, 2, 3)))
print(' Stds: ', out.std(axis=(0, 2, 3)))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization (nontrivial gamma, beta):')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(axis=(0, 2, 3)))
print(' Stds: ', out.std(axis=(0, 2, 3)))
np.random.seed(231)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in range(50):
x = 2.3 * np.random.randn(N, C, H, W) + 13
spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After spatial batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=(0, 2, 3)))
print(' stds: ', a_norm.std(axis=(0, 2, 3)))
"""
Explanation: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and between different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization: forward
In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:
End of explanation
"""
np.random.seed(231)
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
# Note: fg and fb ignore their argument on purpose;
# eval_numerical_gradient_array perturbs gamma/beta in place, so the
# closures still see the perturbed values.
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
"""
Explanation: Spatial batch normalization: backward
In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:
End of explanation
"""
|
wisner23/serenata-de-amor | develop/2016-08-08-irio-translate-dataset.ipynb | mit | import pandas as pd
data = pd.read_csv('../data/2016-08-08-AnoAtual.csv')
data.shape
data.head()
data.iloc[0]
"""
Explanation: Translate dataset
The main language of the project is English: it mixes well with programming languages like Python and keeps the barrier low for non-Brazilian contributors. Today, the dataset we make available by default is a set of XMLs from the Chamber of Deputies, in Portuguese. We need attribute names and categorical values to be translated into English.
This file is intended to serve as a base for the script to translate current and future datasets in the same format.
"""
data.rename(columns={
'ideDocumento': 'document_id',
'txNomeParlamentar': 'congressperson_name',
'ideCadastro': 'congressperson_id',
'nuCarteiraParlamentar': 'congressperson_document',
'nuLegislatura': 'term',
'sgUF': 'state',
'sgPartido': 'party',
'codLegislatura': 'term_id',
'numSubCota': 'subquota_number',
'txtDescricao': 'subquota_description',
'numEspecificacaoSubCota': 'subquota_group_id',
'txtDescricaoEspecificacao': 'subquota_group_description',
'txtFornecedor': 'supplier',
'txtCNPJCPF': 'cnpj_cpf',
'txtNumero': 'document_number',
'indTipoDocumento': 'document_type',
'datEmissao': 'issue_date',
'vlrDocumento': 'document_value',
'vlrGlosa': 'remark_value',
'vlrLiquido': 'net_value',
'numMes': 'month',
'numAno': 'year',
'numParcela': 'installment',
'txtPassageiro': 'passenger',
'txtTrecho': 'leg_of_the_trip',
'numLote': 'batch_number',
'numRessarcimento': 'reimbursement_number',
'vlrRestituicao': 'reimbursement_value',
'nuDeputadoId': 'applicant_id',
}, inplace=True)
data['subquota_description'] = data['subquota_description'].astype('category')
data['subquota_description'].cat.categories
"""
Explanation: New names are based on the "Nome do Dado" column of the table available at data/2016-08-08-datasets-format.html, not "Elemento de Dado", their current names.
End of explanation
"""
data['subquota_description'].cat.rename_categories([
'Publication subscriptions',
'Fuels and lubricants',
'Consultancy, research and technical work',
'Publicity of parliamentary activity',
'Flight ticket issue',
'Congressperson meal',
'Lodging, except for congressperson from Distrito Federal',
'Aircraft renting or charter of aircraft',
'Watercraft renting or charter',
'Automotive vehicle renting or charter',
'Maintenance of office supporting parliamentary activity',
'Participation in course, talk or similar event',
'Flight tickets',
'Terrestrial, maritime and fluvial tickets',
'Security service provided by specialized company',
'Taxi, toll and parking',
'Postal services',
'Telecommunication',
], inplace=True)
data.head()
data.iloc[0]
"""
Explanation: When localizing categorical values, I prefer a direct translation over adaptation as much as possible. Not sure what values each attribute will contain, so I give the power of the interpretation to the people analyzing it in the future.
End of explanation
"""
|
adit-chandra/tensorflow | tensorflow/lite/examples/experimental_new_converter/keras_lstm.ipynb | apache-2.0 | !pip install tf-nightly --upgrade
"""
Explanation: Overview
This CodeLab demonstrates how to build a LSTM model for MNIST recognition using Keras, and how to convert it to TensorFlow Lite.
The CodeLab is very similar to the tf.lite.experimental.nn.TFLiteLSTMCell
CodeLab. However, with the control flow support in the experimental new converter, we can define the model with control flow directly without refactoring the code.
Also note: we're not trying to build a model for a real-world application, only to demonstrate how to use TensorFlow Lite. You can build a much better model using CNNs. For a more canonical LSTM codelab, please see here.
Step 0: Prerequisites
It's recommended to try this feature with the newest TensorFlow nightly pip build.
End of explanation
"""
import numpy as np
import tensorflow as tf
model = tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(28, 28), name='input'),
tf.keras.layers.LSTM(20),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation=tf.nn.softmax, name='output')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.summary()
"""
Explanation: Step 1: Build the MNIST LSTM model.
End of explanation
"""
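One way to sanity-check the `model.summary()` output above is to compute the LSTM layer's parameter count by hand. The standard count is `4 * units * (units + input_dim + 1)`: four gates, each with input weights, recurrent weights, and a bias. A quick sketch for the layer above (20 units, 28 input features per timestep):

```python
# Parameter count of an LSTM layer: 4 gates, each with
# input weights (units x input_dim), recurrent weights (units x units),
# and a bias vector (units).
def lstm_param_count(units, input_dim):
    return 4 * units * (units + input_dim + 1)

# The model above: LSTM(20) fed 28 features per timestep.
print(lstm_param_count(20, 28))  # → 3920
```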
# Load MNIST dataset.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train = x_train.astype(np.float32)
x_test = x_test.astype(np.float32)
# Change this to True if you want to test the flow rapidly.
# Train with a small dataset and only 1 epoch. The model will work poorly
# but this provides a fast way to test if the conversion works end to end.
_FAST_TRAINING = False
_EPOCHS = 5
if _FAST_TRAINING:
_EPOCHS = 1
_TRAINING_DATA_COUNT = 1000
x_train = x_train[:_TRAINING_DATA_COUNT]
y_train = y_train[:_TRAINING_DATA_COUNT]
model.fit(x_train, y_train, epochs=_EPOCHS)
model.evaluate(x_test, y_test, verbose=0)
"""
Explanation: Step 2: Train & Evaluate the model.
We will train the model using MNIST data.
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Note: conversion will NOT work without enabling the experimental new
# converter via the `experimental_new_converter` flag.
converter.experimental_new_converter = True
tflite_model = converter.convert()
"""
Explanation: Step 3: Convert the Keras model to TensorFlow Lite model.
Note here: we just convert to TensorFlow Lite model as usual.
End of explanation
"""
# Run the model with TensorFlow to get expected results.
expected = model.predict(x_test[0:1])
# Run the model with TensorFlow Lite
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.set_tensor(input_details[0]["index"], x_test[0:1, :, :])
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
# Assert if the result of TFLite model is consistent with the TF model.
np.testing.assert_almost_equal(expected, result)
print("Done. The result of TensorFlow matches the result of TensorFlow Lite.")
"""
Explanation: Step 4: Check the converted TensorFlow Lite model.
Now load the TensorFlow Lite model and use the TensorFlow Lite python interpreter to verify the results.
End of explanation
"""
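The verification step above hinges on `np.testing.assert_almost_equal`, which by default compares values to 7 decimal places. A tiny illustration of that tolerance, independent of the model:

```python
import numpy as np

a = np.array([0.12345678, 0.5])
b = a + 1e-9  # well within the default tolerance (decimal=7)
np.testing.assert_almost_equal(a, b)  # passes silently

try:
    np.testing.assert_almost_equal(a, a + 1e-3)  # too far apart
except AssertionError:
    print("mismatch detected")  # → mismatch detected
```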
# Notebook: lukasmerten/CRPropa3 / doc/pages/example_notebooks/secondaries/photons.v4.ipynb (gpl-3.0)
from crpropa import *
obs = Observer()
obs.add(ObserverPoint())
obs.add(ObserverInactiveVeto())
t = TextOutput("photon_electron_output.txt", Output.Event1D)
obs.onDetection(t)
sim = ModuleList()
sim.add(SimplePropagation())
sim.add(Redshift())
sim.add(EMPairProduction(CMB(),True))
sim.add(EMPairProduction(IRB_Gilmore12(),True))
sim.add(EMPairProduction(URB_Protheroe96(),True))
sim.add(EMDoublePairProduction(CMB(),True))
sim.add(EMDoublePairProduction(IRB_Gilmore12(),True))
sim.add(EMDoublePairProduction(URB_Protheroe96(),True))
sim.add(EMInverseComptonScattering(IRB_Gilmore12(),True))
sim.add(EMInverseComptonScattering(CMB(),True))
sim.add(EMInverseComptonScattering(URB_Protheroe96(),True))
sim.add(EMTripletPairProduction(CMB(),True))
sim.add(EMTripletPairProduction(IRB_Gilmore12(),True))
sim.add(EMTripletPairProduction(URB_Protheroe96(),True))
sim.add(MinimumEnergy(0.01 * EeV))
source = Source()
source.add(SourcePosition(Vector3d(4,0,0)*Mpc))
source.add(SourceRedshift1D())
source.add(SourceParticleType(22))
source.add(SourceEnergy(1000*EeV))
sim.add(obs)
sim.run(source,1000,True)
t.close()
"""
Explanation: Photon Propagation
End of explanation
"""
%matplotlib inline
from pylab import *
t.close()
figure(figsize=(6,6))
a = loadtxt("photon_electron_output.txt")
E = logspace(16,23,71)
idx = a[:,1] == 22
photons = a[idx,2] * 1e18
idx = fabs(a[:,1]) == 11
ep = a[idx,2] * 1e18
data,bins = histogram(photons,E)
bincenter = (E[1:] -E[:-1])/2 + E[:-1]
plot(bincenter, data,label="photons")
data,bins = histogram(ep,E)
plot(bincenter, data, label="electrons / positrons")
grid()
loglog()
xlim(1e16, 1e21)
ylim(1e1, 1e4)
legend(loc="lower right")
xlabel("Energy [eV]")
ylabel("Number of Particles")
show()
"""
Explanation: (Optional) plotting of the results
End of explanation
"""
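The spectra above are built by histogramming particle energies on log-spaced bins and plotting against arithmetic bin centers. A small self-contained sketch of that pattern (toy energies, not simulation output):

```python
import numpy as np

# Log-spaced bin edges, as in E = logspace(16, 23, 71) above.
edges = np.logspace(0, 3, 7)  # 7 edges -> 6 bins
samples = np.array([1.5, 20.0, 30.0, 500.0])

counts, _ = np.histogram(samples, bins=edges)
# Arithmetic bin centers, matching the notebook's
# (E[1:] - E[:-1]) / 2 + E[:-1] expression:
centers = (edges[1:] - edges[:-1]) / 2 + edges[:-1]

print(counts.sum())  # → 4 (every sample falls inside the edges)
```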
from crpropa import *
# source setup
source = Source()
source.add(SourceParticleType(nucleusId(1, 1)))
source.add(SourcePowerLawSpectrum(10 * EeV, 100 * EeV, -2))
source.add(SourceUniform1D(3 * Mpc, 100.00001 * Mpc))
# setup module list for proton propagation
m = ModuleList()
m.add(SimplePropagation(0, 10 * Mpc))
m.add(MinimumEnergy(1 * EeV))
# observer
obs1 = Observer() # proton output
obs1.add( ObserverPoint() )
obs1.add( ObserverPhotonVeto() ) # we don't want photons here
obs1.onDetection( TextOutput('proton_output.txt', Output.Event1D) )
m.add(obs1)
obs2 = Observer() # photon output
obs2.add( ObserverDetectAll() ) # stores the photons at creation without propagating them
obs2.add( ObserverNucleusVeto() ) # we don't want hadrons here
out2 = TextOutput('photon_output.txt', Output.Event1D)
out2.enable(Output.CreatedIdColumn) # enables the necessary columns to be compatible with the DINT and EleCa propagation
out2.enable(Output.CreatedEnergyColumn)
out2.enable(Output.CreatedPositionColumn)
obs2.onDetection( out2 )
m.add(obs2)
# secondary electrons are disabled here
m.add(ElectronPairProduction(CMB(), False))
# enable secondary photons
m.add(PhotoPionProduction(CMB(), True))
# run simulation
m.run(source, 10000, True)
"""
Explanation: Photon Propagation outside of CRPropa with EleCa and DINT
There are two main ways to propagate electromagnetic particles (EM particles: photons, electrons, positrons) in CRPropa:
1) propagation as part of the CRPropa simulation chain
2) propagation outside of the CRPropa simulation chain
The following describes option 2, for which CRPropa provides three functions.
EM particles can either be propagated individually using the external EleCa code (suitable for high energies), or their spectra can be propagated with the transport code DINT (suitable for low energies).
Alternatively, a combined option is available that processes high energy photons with Eleca and then calculates the resulting spectra with DINT down to low energies.
All three functions take as input a plain-text file with EM particles in the format given in the "Photons from Proton Propagation" example below.
In the following examples the input file "photon_monoenergetic_source.dat" contains 1000 photons with E = 50 EeV from a photon source at 4 Mpc distance.
The last example "Photons from Proton Propagation" shows how to obtain secondary EM particles from a simulation of hadronic cosmic rays.
Note that the differing results in EleCa (and correspondingly the high energy part of the combined option) are due to an incorrect sampling of the background photon energies in EleCa. The EleCa support will be removed in the near future.
Photons from Proton Propagation
The generation of photons has to be enabled for the individual energy-loss processes in the module chain. Also, separate photon output can be added:
End of explanation
"""
import crpropa
# Signature: ElecaPropagation(inputfile, outputfile, showProgress=True, lowerEnergyThreshold=5*EeV, magneticFieldStrength=1*nG, background="ALL")
crpropa.ElecaPropagation("photon_output.txt", "photons_eleca.dat", True, 0.1*crpropa.EeV, 0.1*crpropa.nG)
"""
Explanation: The file 'photon_output.txt' will contain approximately 300 photons and can be processed as the photon example below.
Propagation with EleCa
End of explanation
"""
import crpropa
# Signature: DintPropagation(inputfile, outputfile, IRFlag=4, RadioFlag=4, magneticFieldStrength=1*nG, aCutcascade_Magfield=0)
crpropa.DintPropagation("photon_output.txt", "spectrum_dint.dat", 4, 4, 0.1*crpropa.nG)
"""
Explanation: Propagation with DINT
End of explanation
"""
import crpropa
# Signature: DintElecaPropagation(inputfile, outputfile, showProgress=True, crossOverEnergy=0.5*EeV, magneticFieldStrength=1*nG, aCutcascade_Magfield=0)
crpropa.DintElecaPropagation("photon_output.txt", "spectrum_dint_eleca.dat", True, 0.5*crpropa.EeV, 0.1*crpropa.nG)
"""
Explanation: Combined Propagation
End of explanation
"""
%matplotlib inline
from pylab import *
figure(figsize=(6,6))
loglog(clip_on=False)
yscale("log", nonposy='clip')
xlabel('Energy [eV]')
ylabel ('$E^2 dN/dE$ [a.u.]')
# Plot the EleCa spectrum
elecaPhotons = genfromtxt("photons_eleca.dat")
binEdges = 10**arange(12, 24, .1)
logBinCenters = log10(binEdges[:-1]) + 0.5 * (log10(binEdges[1:]) - log10(binEdges[:-1]))
binWidths = (binEdges[1:] - binEdges[:-1])
data = histogram(elecaPhotons[:,1] * 1E18, bins=binEdges)
J = data[0] / binWidths
E = 10**logBinCenters
step(E, J * E**2, c='m', label='EleCa')
#Plot the DINT spectrum
data = genfromtxt("spectrum_dint.dat", names=True)
lE = data['logE']
E = 10**lE
dE = 10**(lE + 0.05) - 10**(lE - 0.05)
J = data['photons'] / dE
step(E, J * E**2 , c='b', where='mid', label='DINT')
#Plot the combined DINT+EleCa spectrum
data = genfromtxt("spectrum_dint_eleca.dat", names=True)
lE = data['logE']
E = 10**lE
dE = 10**(lE + 0.05) - 10**(lE - 0.05)
J = data['photons'] / dE
step(E, J * E**2 , c='r', where='mid', label='Combined')
# Nice limits
xlim(1e14, 1e20)
ylim(bottom=1e17)
legend(loc='upper left')
show()
"""
Explanation: (Optional) Plotting of Results
End of explanation
"""
# Notebook: anachlas/w210_vendor_recommendor / Collaborative Filtering on Spark.ipynb (gpl-3.0)
import os
import sys
spark_home = os.environ['SPARK_HOME'] = '/Users/ozimmer/GoogleDrive/berkeley/w261/spark-2.0.0-bin-hadoop2.6'
if not spark_home:
    raise ValueError('SPARK_HOME environment variable is not set')
sys.path.insert(0,os.path.join(spark_home,'python'))
sys.path.insert(0,os.path.join(spark_home,'python/lib/py4j-0.9-src.zip'))
execfile(os.path.join(spark_home,'python/pyspark/shell.py'))
from pyspark.mllib.recommendation import ALS, MatrixFactorizationModel, Rating
# Load and parse the data
data = sc.textFile("/Users/ozimmer/GoogleDrive/berkeley/w210/w210_vendor_recommendor/test_spark_1.csv")
header = data.first() #filter out the header
ratings = data.filter(lambda row: row != header)\
.map(lambda l: l.split(','))\
.map(lambda l: Rating(int(l[0]), int(l[1]), float(l[2])))
# Build the recommendation model using Alternating Least Squares
rank = 10
numIterations = 10
model = ALS.train(ratings, rank, numIterations)
# Evaluate the model on training data
testdata = ratings.map(lambda p: (p[0], p[1]))
predictions = model.predictAll(testdata).map(lambda r: ((r[0], r[1]), r[2]))
ratesAndPreds = ratings.map(lambda r: ((r[0], r[1]), r[2])).join(predictions)
MSE = ratesAndPreds.map(lambda r: (r[1][0] - r[1][1])**2).mean()
print("Mean Squared Error = " + str(MSE))
# Save and load model
model.save(sc, "target/tmp/myCollaborativeFilter")
sameModel = MatrixFactorizationModel.load(sc, "target/tmp/myCollaborativeFilter")
#Create a RDD for prediction
data = [(145, 895988), (143, 348288), (143, 795270), (143, 795221), (143, 306804)]
data_rdd = sc.parallelize(data)
#Paste the prediction results in the model
model.predictAll(data_rdd).collect()
"""
Explanation: Collaborative Filtering using Spark
TO-DO
Running collaborative filtering using Mllib on Spark
Using implicit feedback (we do not have any direct input from the users regarding their preferences)
Running on DataProc on multiple clusters
Loading the data set from BigQuery
Writing the output into BigQuery
References
https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html
http://ieeexplore.ieee.org/document/4781121/
https://link.springer.com/chapter/10.1007%2F978-3-540-68880-8_32
https://cloud.google.com/dataproc/docs/guides/setup-project
Run collaborative filtering using MLlib - LOCAL
End of explanation
"""
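Before running ALS through Spark, it can help to see the underlying idea in plain numpy. This is a minimal alternating-least-squares sketch on a tiny dense rating matrix (toy data and an unregularized solve — MLlib's implementation adds regularization, implicit-feedback weighting, sparsity handling, and distribution):

```python
import numpy as np

np.random.seed(0)
R = np.array([[5., 3., 0.],
              [4., 0., 1.],
              [1., 1., 5.]])  # toy user x item rating matrix
k = 2                          # latent rank
U = np.random.rand(R.shape[0], k)  # user factors
V = np.random.rand(R.shape[1], k)  # item factors

for _ in range(20):
    # Fix V and solve the least-squares problem for U, then the reverse.
    U = np.linalg.lstsq(V, R.T, rcond=None)[0].T
    V = np.linalg.lstsq(U, R, rcond=None)[0].T

mse = np.mean((R - U.dot(V.T)) ** 2)
print(round(mse, 4))
```

Each half-step is an exact least-squares solve given the other factor, so the reconstruction error decreases monotonically toward a rank-k local optimum.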
from pyspark.sql import Row
from pyspark.ml.recommendation import ALS
from pyspark.ml.evaluation import RegressionEvaluator

lines = spark.read.text("data/mllib/als/sample_movielens_ratings.txt").rdd
parts = lines.map(lambda row: row.value.split("::"))
ratingsRDD = parts.map(lambda p: Row(userId=int(p[0]), movieId=int(p[1]),
rating=float(p[2]), timestamp=long(p[3])))
ratings = spark.createDataFrame(ratingsRDD)
(training, test) = ratings.randomSplit([0.8, 0.2])
# Build the recommendation model using ALS on the training data
# Note we set cold start strategy to 'drop' to ensure we don't get NaN evaluation metrics
als = ALS(maxIter=5, regParam=0.01, userCol="userId", itemCol="movieId", ratingCol="rating",
coldStartStrategy="drop")
model = als.fit(training)
# Evaluate the model by computing the RMSE on the test data
predictions = model.transform(test)
evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
predictionCol="prediction")
rmse = evaluator.evaluate(predictions)
print("Root-mean-square error = " + str(rmse))
# Generate top 10 movie recommendations for each user
userRecs = model.recommendForAllUsers(10)
# Generate top 10 user recommendations for each movie
movieRecs = model.recommendForAllItems(10)
"""
Explanation: Non-RDD Example
End of explanation
"""
!gsutil cp gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py .
!cat hello-world.py
!gcloud dataproc jobs submit pyspark --cluster test1 hello-world.py
"""
Explanation: DataProc - submit a job
End of explanation
"""
# Notebook: francisc0garcia/autonomous_bicycle / docs/python_notebooks/EKF_Design.ipynb (apache-2.0)
# Import dependencies
from __future__ import division, print_function
%matplotlib inline
import scipy
from BicycleTrajectory2D import *
from BicycleUtils import *
from FormatUtils import *
from PlotUtils import *
"""
Explanation: Extended Kalman Filter design for bicycle's kinematic motion model
End of explanation
"""
[N, dt, wheel_distance] = [300, 0.05, 1.1] # simulation parameters
add_noise = True # Enable/disable gaussian noise
# Define initial state --------------------------------------------------------
delta = math.radians(6) # steering angle
phi = math.radians(0) # Lean angle
X_init = np.array([1.0, 3.0, 0.0, np.tan(delta)/wheel_distance, 0.0, phi]) # [x, y, z, sigma, psi, phi]
# Define constant inputs ------------------------------------------------------
U_init = np.array([1.0, 0.01, 0.01]) # [v, phi_dot, delta_dot]
# Define standard deviation for gaussian noise model --------------------------
# [xf, xr, yf, yr, zf, zr, za, delta, psi, phi]
if add_noise:
noise = [0.5, 0.5, 0.5, 0.5, 0.1, 0.1, 0.1, 0.01, 0.01, 0.01]
else:
noise = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
# Create object simulator ------------------------------------------------------
bike = BicycleTrajectory2D(X_init=X_init, U_init=U_init, noise=noise)
# Simulate path ----------------------------------------------------------------
(gt_sim, zs_sim, time) = bike.simulate_path(N=N, dt=dt)
plot_results(xs=[], zs_sim=zs_sim, gt_sim=gt_sim, time=time, plot_xs=False)
"""
Explanation: Simulation of kinematic motion model
End of explanation
"""
class EKF_sigma_model_fusion(object):
"""Implements an EKF to bicycle model"""
def __init__(self, xs, P, R_std, Q_std, wheel_distance=1.2, dt=0.1, alpha=1.0):
self.w = wheel_distance #Set the distance between the wheels
self.xs = xs *0.0 #Set the initial state
self.P = P #Set the initial Covariance
self.dt = dt
self.R_std = R_std
self.Q_std = Q_std
self.alpha = alpha
self.K = np.zeros((6, 6)) # Kalman gain
#Set the process noise covariance
self.Q = np.diag([self.Q_std[0], # v
self.Q_std[1], # phi_dot
self.Q_std[2] # delta_dot
])
# Set the measurement noise covariance
self.R = np.diag([self.R_std[0], # xf
self.R_std[1], # xr
self.R_std[2], # yf
self.R_std[3], # yr
self.R_std[4], # zf
self.R_std[5], # zr
self.R_std[6], # za
self.R_std[7], # sigma
self.R_std[8], # psi
self.R_std[9]]) # phi
# Linear relationship H - z = Hx
self.H = np.zeros((10, 6)) # 10 measurements x 6 state variables
[self.H[0, 0], self.H[1, 0]] = [1.0, 1.0] # x
[self.H[2, 1], self.H[3, 1]] = [1.0, 1.0] # y
[self.H[4, 2], self.H[5, 2], self.H[6, 2]] = [1.0, 1.0, 1.0] # z
[self.H[7, 3], self.H[8, 4], self.H[9, 5]] = [1.0, 1.0, 1.0] # sigma - psi - phi
def Fx(self, xs, u):
""" Linearize the system with the Jacobian of the x """
F_result = np.eye(len(xs))
v = u[0]
phi_dot = u[1]
delta_dot = u[2]
sigma = xs[3]
psi = xs[4]
phi = xs[5]
t = self.dt
F04 = -t * v * np.sin(psi)
F14 = t * v * np.cos(psi)
F33 = (2 * t * delta_dot * sigma * self.w) + 1
F43 = (t * v)/np.cos(phi)
F45 = t * sigma * v * np.sin(phi) / np.cos(phi)**2
F_result[0, 4] = F04
F_result[1, 4] = F14
F_result[3, 3] = F33
F_result[4, 3] = F43
F_result[4, 5] = F45
return F_result
def Fu(self, xs, u):
""" Linearize the system with the Jacobian of the u """
v = u[0]
phi_dot = u[1]
delta_dot = u[2]
sigma = xs[3]
psi = xs[4]
phi = xs[5]
t = self.dt
V_result = np.zeros((len(xs), len(u)))
V00 = t * np.cos(psi)
V10 = t * np.sin(psi)
V32 = (t/self.w)*((sigma**2)*(self.w**2) + 1)
V40 = t * sigma / np.cos(phi)
V51 = t
V_result[0, 0] = V00
V_result[1, 0] = V10
V_result[3, 2] = V32
V_result[4, 0] = V40
V_result[5, 1] = V51
return V_result
def f(self, xs, u):
""" Estimate the non-linear state of the system """
v = u[0]
phi_dot = u[1]
delta_dot = u[2]
sigma = xs[3]
psi = xs[4]
phi = xs[5]
t = self.dt
fxu_result = np.zeros((len(xs), 1))
fxu_result[0] = xs[0] + t * v * np.cos(psi)
fxu_result[1] = xs[1] + t * v * np.sin(psi)
fxu_result[2] = xs[2]
fxu_result[3] = xs[3] + (t*phi_dot/self.w)*((sigma**2)*(self.w**2) +1)
fxu_result[4] = xs[4] + t * v * sigma / np.cos(phi)
fxu_result[5] = xs[5] + t * phi_dot
return fxu_result
def h(self, x):
""" takes a state variable and returns the measurement
that would correspond to that state. """
sensor_out = np.zeros((10, 1))
sensor_out[0] = x[0]
sensor_out[1] = x[0]
sensor_out[2] = x[1]
sensor_out[3] = x[1]
sensor_out[4] = x[2]
sensor_out[5] = x[2]
sensor_out[6] = x[2]
sensor_out[7] = x[3] # sigma
sensor_out[8] = x[4] # psi
sensor_out[9] = x[5] # phi
return sensor_out
def Prediction(self, u):
x_ = self.xs
P_ = self.P
self.xs = self.f(x_, u)
self.P = self.alpha * self.Fx(x_, u).dot(P_).dot((self.Fx(x_,u)).T) + \
self.Fu(x_,u).dot(self.Q).dot((self.Fu(x_,u)).T)
def Update(self, z):
"""Update the Kalman Prediction using the meazurement z"""
y = z - self.h(self.xs)
self.K = self.P.dot(self.H.T).dot(np.linalg.inv(self.H.dot(self.P).dot(self.H.T) + self.R))
self.xs = self.xs + self.K.dot(y)
self.P = (np.eye(len(self.xs)) - self.K.dot(self.H)).dot(self.P)
"""
Explanation: Implementation of EKF for $\sigma$-model
Define state vector:
$$ X =
\begin{bmatrix}x & y & z & \sigma & \psi & \phi \end{bmatrix}^\mathsf T$$
Define measurement vector:
$$ Z =
\begin{bmatrix}x_f & x_r & y_f & y_r & z_f & z_r & z_a & \sigma & \psi & \phi \end{bmatrix}^\mathsf T$$
End of explanation
"""
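The state uses σ = tan(δ)/w instead of the steering angle δ directly. A small sketch of the round-trip conversion, matching the `np.arctan2(xs[i, 3], 1/wheel_distance)` line used later when reading the filter output:

```python
import math

w = 1.1                    # wheel distance, as in the simulation
delta = math.radians(6.0)  # steering angle

sigma = math.tan(delta) / w              # state-space representation
delta_back = math.atan2(sigma, 1.0 / w)  # equivalent to atan(sigma * w)

print(abs(delta_back - delta) < 1e-12)  # → True
```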
np.random.seed(850)
[N, dt, wheel_distance, number_state_variables] = [300, 0.05, 1.1, 6]
delta = math.radians(6)
phi = math.radians(0)
U_init = np.array([1.0, 0.01, 0.01]) # [v, phi_dot, delta_dot]
X_init = np.array([1.0, 3.0, 0.0, np.tan(delta)/wheel_distance, 0.0, phi]) # [x, y, z, sigma, psi, phi]
# noise = [xf, xr, yf, yr, zf, zr, za, delta, psi, phi]
#noise = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
noise = [0.5, 0.5, 0.5, 0.5, 0.1, 0.1, 0.1, 0.01, 0.01, 0.01]
#noise = [5.5, 5.5, 5.5, 5.5, 5.1, 5.1, 5.1, 1.1, 1.1, 1.1]
bike = BicycleTrajectory2D(X_init=X_init, U_init=U_init, w=wheel_distance, noise=noise)
(gt_sim, zs_sim, time_t) = bike.simulate_path(N=N, dt=dt)
alpha = 1.00
# covariance matrix
P = np.eye(number_state_variables) * 1e0
# process noise covariance Q
Q_std = [(0.10)**2, (0.10)**2, (0.10)**2 ] # v, phi_dot, delta_dot
# Measurement noise covariance R
# [xf, xr, yf, yr, zf, zr, za, delta, psi, phi]
R_std = [0.8**2, 0.8**2, # x
0.8**2, 0.8**2, # y
0.5**2, 0.5**2, 0.5**2, # z
1.5**2, 0.4**2, 1.8**2] # delta - psi - phi
filter_ekf = EKF_sigma_model_fusion(X_init, P, R_std=R_std, Q_std=Q_std, wheel_distance=wheel_distance, dt=dt, alpha=alpha)
xs = np.zeros((N, number_state_variables))
ps = np.zeros((N, number_state_variables, number_state_variables))
PU = np.zeros((N, number_state_variables))
KU = np.zeros((N, number_state_variables))
time_t = np.zeros((N, 1))
t = 0
z_t = np.zeros((10, 1))
for i in range(N):
P = filter_ekf.P
K = filter_ekf.K
PU[i] = [P[0,0], P[1,1], P[2,2], P[3,3], P[4,4], P[5,5]]
KU[i] = [K[0,0], K[1,1], K[2,2], K[3,3], K[4,4], K[5,5]]
xs[i] = filter_ekf.xs.T
xs[i, 3] = np.arctan2(xs[i, 3], 1/wheel_distance) # sigma to delta conversion
# predict
filter_ekf.Prediction(U_init)
# update measurements [xf, xr, yf, yr, zf, zr, za, delta, psi, phi]
z_t[0] = zs_sim[i].xf
z_t[1] = zs_sim[i].xr
z_t[2] = zs_sim[i].yf
z_t[3] = zs_sim[i].yr
z_t[4] = zs_sim[i].zf
z_t[5] = zs_sim[i].zr
z_t[6] = zs_sim[i].za
z_t[7] = np.tan(zs_sim[i].delta)/wheel_distance # sigma
z_t[8] = zs_sim[i].psi # psi
z_t[9] = zs_sim[i].phi # phi
filter_ekf.Update(z_t)
cov = np.array([[P[0, 0], P[2, 0]],
[P[0, 2], P[2, 2]]])
mean = (xs[i, 0], xs[i, 1])
#plot_covariance_ellipse(mean, cov, fc='g', std=3, alpha=0.3, title="covariance")
time_t[i] = t
t += dt
filter_ekf.time_t = t
plot_results(xs=xs, zs_sim=zs_sim, gt_sim=gt_sim, time=time_t, plot_xs=True)
"""
Explanation: Execute EKF
End of explanation
"""
fig = plt.figure(figsize=(12,8))
plt.plot(time_t,KU[:,0], label='$x$')
plt.plot(time_t,KU[:,1], label='$y$')
plt.plot(time_t,KU[:,2], label='$z$')
plt.plot(time_t,KU[:,3], label='$\sigma$')
plt.plot(time_t,KU[:,4], label='$\psi$')
plt.plot(time_t,KU[:,5], label='$\phi$')
plt.title("Kalman gain")
plt.legend(bbox_to_anchor=(0., 0.91, 1., .06), loc='best',
ncol=9, borderaxespad=0.,prop={'size':16})
fig = plt.figure(figsize=(12,8))
plt.semilogy(time_t,PU[:,0], label='$x$')
plt.step(time_t,PU[:,1], label='$y$')
plt.step(time_t,PU[:,2], label='$z$')
plt.step(time_t,PU[:,3], label='$\sigma$')
plt.step(time_t,PU[:,4], label='$\psi$')
plt.step(time_t,PU[:,5], label='$\phi$')
plt.title("Process covariance")
plt.legend(bbox_to_anchor=(0., 0.91, 1., .06), loc='best',
ncol=9, borderaxespad=0.,prop={'size':16})
"""
Explanation: Plot Kalman gain and process covariance
End of explanation
"""
# Notebook: cgnorthcutt/rankpruning / tutorial_and_testing/tutorial.ipynb (mit)
# Choose mislabeling noise rates.
frac_pos2neg = 0.8 # rh1, P(s=0|y=1) in literature
frac_neg2pos = 0.15 # rh0, P(s=1|y=0) in literature
# Combine data into training examples and labels
data = neg.append(pos)
X = data[["x1","x2"]].values
y = data["label"].values
# Noisy P̃Ñ learning: instead of target y, we have s containing mislabeled examples.
# First, we flip positives, then negatives, then combine.
# We assume labels are flipped by some noise process uniformly randomly within each class.
s = y * (np.cumsum(y) <= (1 - frac_pos2neg) * sum(y))
s_only_neg_mislabeled = 1 - (1 - y) * (np.cumsum(1 - y) <= (1 - frac_neg2pos) * sum(1 - y))
s[y==0] = s_only_neg_mislabeled[y==0]
# Create testing dataset
neg_test = multivariate_normal(mean=[2,2], cov=[[10,-1.5],[-1.5,5]], size=2000)
pos_test = multivariate_normal(mean=[5,5], cov=[[1.5,1.3],[1.3,4]], size=1000)
X_test = np.concatenate((neg_test, pos_test))
y_test = np.concatenate((np.zeros(len(neg_test)), np.ones(len(pos_test))))
# Create and fit Rank Pruning object using any clf
# of your choice as long as it has predict_proba() defined
rp = RankPruning(clf = LogisticRegression())
# rp.fit(X, s, positive_lb_threshold=1-frac_pos2neg, negative_ub_threshold=frac_neg2pos)
rp.fit(X, s)
actual_py1 = sum(y) / float(len(y))
actual_ps1 = sum(s) / float(len(s))
actual_pi1 = frac_neg2pos * (1 - actual_py1) / float(actual_ps1)
actual_pi0 = frac_pos2neg * actual_py1 / (1 - actual_ps1)
print("What are rho1, rho0, pi1, and pi0?")
print("----------------------------------")
print("rho1 (frac_pos2neg) is the fraction of positive examples mislabeled as negative examples.")
print("rho0 (frac_neg2pos) is the fraction of negative examples mislabeled as positive examples.")
print("pi1 is the fraction of mislabeled examples in observed noisy P.")
print("pi0 is the fraction of mislabeled examples in observed noisy N.")
print()
print("Given (rho1, pi1), (rho1, rho), (rho0, pi0), or (pi0, pi1) the other two are known.")
print()
print("Using Rank Pruning, we estimate rho1, rho0, pi1, and pi0:")
print("--------------------------------------------------------------")
print("Estimated rho1, P(s = 0 | y = 1):", round(rp.rh1, 2), "\t| Actual:", round(frac_pos2neg, 2))
print("Estimated rho0, P(s = 1 | y = 0):", round(rp.rh0, 2), "\t| Actual:", round(frac_neg2pos, 2))
print("Estimated pi1, P(y = 0 | s = 1):", round(rp.pi1, 2), "\t| Actual:", round(actual_pi1, 2))
print("Estimated pi0, P(y = 1 | s = 0):", round(rp.pi0, 2), "\t| Actual:", round(actual_pi0, 2))
print("Estimated py1, P(y = 1):", round(rp.py1, 2), "\t\t| Actual:", round(actual_py1, 2))
print("Actual k1 (Number of items to remove from P̃):", actual_pi1 * sum(s))
print("Actual k0 (Number of items to remove from Ñ):", actual_pi0 * (len(s) - sum(s)))
"""
Explanation: Let's look at a seemingly impossible example where 80% of hidden, true positive labels have been flipped to negative! Also, let 15% of negative labels be flipped to positive!
Feel free to adjust these noise rates. But remember --> frac_pos2neg + frac_neg2pos < 1, i.e. $\rho_1 + \rho_0 < 1$
End of explanation
"""
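The label-flipping code above uses a cumulative-sum trick: a positive label is kept only while the running count of positives is still within the first (1 - ρ₁) fraction of all positives; the rest are flipped. A minimal sketch of the same trick on toy labels:

```python
import numpy as np

y = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
frac_pos2neg = 0.4  # flip 40% of the positives to negative

# Keep a positive only while the running count of positives
# is within the first 60% of all positives.
s = y * (np.cumsum(y) <= (1 - frac_pos2neg) * y.sum())

print(s.tolist())  # → [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
```

The threshold here is 0.6 × 5 = 3, so exactly the last two positives are flipped.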
# For shorter notation use rh1 and rh0 for noise rates.
clf = LogisticRegression()
rh1 = frac_pos2neg
rh0 = frac_neg2pos
models = {
"Rank Pruning" : RankPruning(clf = clf),
"Baseline" : other_pnlearning_methods.BaselineNoisyPN(clf),
"Rank Pruning (noise rates given)": RankPruning(rh1, rh0, clf),
"Elk08 (noise rates given)": other_pnlearning_methods.Elk08(e1 = 1 - rh1, clf = clf),
"Liu16 (noise rates given)": other_pnlearning_methods.Liu16(rh1, rh0, clf),
"Nat13 (noise rates given)": other_pnlearning_methods.Nat13(rh1, rh0, clf),
}
for key in models.keys():
model = models[key]
model.fit(X, s)
pred = model.predict(X_test)
pred_proba = model.predict_proba(X_test) # Produces P(y=1|x)
print("\n%s Model Performance:\n==============================\n" % key)
print("Accuracy:", acc(y_test, pred))
print("Precision:", prfs(y_test, pred)[0])
print("Recall:", prfs(y_test, pred)[1])
print("F1 score:", prfs(y_test, pred)[2])
"""
Explanation: Comparing models using a logistic regression classifier.
End of explanation
"""
num_features = 100
# Create training dataset - this synthetic dataset is not necessarily
# appropriate for the CNN. This is for demonstrative purposes.
# A fully connected regular neural network is more appropriate.
neg = multivariate_normal(mean=[0]*num_features, cov=np.eye(num_features), size=5000)
pos = multivariate_normal(mean=[0.5]*num_features, cov=np.eye(num_features), size=4000)
X = np.concatenate((neg, pos))
y = np.concatenate((np.zeros(len(neg)), np.ones(len(pos))))
# Again, s is the noisy labels, we flip y randomly using noise rates.
s = y * (np.cumsum(y) <= (1 - frac_pos2neg) * sum(y))
s_only_neg_mislabeled = 1 - (1 - y) * (np.cumsum(1 - y) <= (1 - frac_neg2pos) * sum(1 - y))
s[y==0] = s_only_neg_mislabeled[y==0]
# Create testing dataset
neg_test = multivariate_normal(mean=[0]*num_features, cov=np.eye(num_features), size=1000)
pos_test = multivariate_normal(mean=[0.4]*num_features, cov=np.eye(num_features), size=800)
X_test = np.concatenate((neg_test, pos_test))
y_test = np.concatenate((np.zeros(len(neg_test)), np.ones(len(pos_test))))
from classifier_cnn import CNN
clf = CNN(img_shape = (num_features/10, num_features/10), epochs = 1)
rh1 = frac_pos2neg
rh0 = frac_neg2pos
models = {
"Rank Pruning" : RankPruning(clf = clf),
"Baseline" : other_pnlearning_methods.BaselineNoisyPN(clf),
"Rank Pruning (noise rates given)": RankPruning(rh1, rh0, clf),
"Elk08 (noise rates given)": other_pnlearning_methods.Elk08(e1 = 1 - rh1, clf = clf),
"Liu16 (noise rates given)": other_pnlearning_methods.Liu16(rh1, rh0, clf),
"Nat13 (noise rates given)": other_pnlearning_methods.Nat13(rh1, rh0, clf),
}
print("Train all models first. Results will print at end.")
preds = {}
for key in models.keys():
print("Training model: ", key)
model = models[key]
model.fit(X, s)
pred = model.predict(X_test)
pred_proba = model.predict_proba(X_test) # Produces P(y=1|x)
preds[key] = pred
print("Comparing models using a CNN classifier.")
for key in models.keys():
pred = preds[key]
print("\n%s Model Performance:\n==============================\n" % key)
print("Accuracy:", acc(y_test, pred))
print("Precision:", prfs(y_test, pred)[0])
print("Recall:", prfs(y_test, pred)[1])
print("F1 score:", prfs(y_test, pred)[2])
"""
Explanation: In the above example, you see that for very simple 2D Gaussians, Rank Pruning performs similarly to Nat13.
Now let's look at a slightly more realistic scenario with 100-Dimensional data and a more complex classifier.
Below you see that Rank Pruning greatly outperforms other models.
Comparing models using a CNN classifier.
Note, this particular CNN's architecture is for MNIST / CIFAR image detection and may not be appropriate for this synthetic dataset. A simple, fully connected regular deep neural network is likely suitable. We only use it here for the purpose of showing that Rank Pruning works for any probabilistic classifier, as long as it has clf.predict(), clf.predict_proba(), and clf.fit() defined.
This section requires keras and tensorflow packages installed. See git repo for instructions in README.
End of explanation
"""
# Notebook: xebia-france/luigi-airflow / Luigi_airflow_003.ipynb (apache-2.0)
import pandas as pd

raw_dataset = pd.read_csv(source_path + "Speed_Dating_Data.csv")
"""
Explanation: Import data
End of explanation
"""
raw_dataset.head(3)
raw_dataset_copy = raw_dataset.copy()  # take an actual copy so the original frame is not aliased
check1 = raw_dataset_copy[raw_dataset_copy["iid"] == 1]
check1_sel = check1[["iid", "pid", "match","gender","date","go_out","sports","tvsports","exercise","dining",
"museums","art","hiking","gaming","clubbing","reading","tv","theater","movies",
"concerts","music","shopping","yoga"]]
check1_sel.drop_duplicates().head(20)
#merged_datasets = raw_dataset.merge(raw_dataset_copy, left_on="pid", right_on="iid")
#merged_datasets[["iid_x","gender_x","pid_y","gender_y"]].head(5)
#same_gender = merged_datasets[merged_datasets["gender_x"] == merged_datasets["gender_y"]]
#same_gender.head()
columns_by_types = raw_dataset.columns.to_series().groupby(raw_dataset.dtypes).groups
raw_dataset.dtypes.value_counts()
raw_dataset.isnull().sum().head(3)
summary = raw_dataset.describe() #.transpose()
print summary
#raw_dataset.groupby("gender").agg({"iid": pd.Series.nunique})
raw_dataset.groupby('gender').iid.nunique()
raw_dataset.groupby('career').iid.nunique().sort_values(ascending=False).head(5)
raw_dataset.groupby(["gender","match"]).iid.nunique()
"""
Explanation: Data exploration
Shape, types, distribution, modalities and potential missing values
End of explanation
"""
local_path = "/Users/sandrapietrowska/Documents/Trainings/luigi/data_source/"
local_filename = "Speed_Dating_Data.csv"
my_variables_selection = ["iid", "pid", "match","gender","date","go_out","sports","tvsports","exercise","dining",
"museums","art","hiking","gaming","clubbing","reading","tv","theater","movies",
"concerts","music","shopping","yoga"]
class RawSetProcessing(object):
"""
This class aims to load and clean the dataset.
"""
def __init__(self,source_path,filename,features):
self.source_path = source_path
self.filename = filename
self.features = features
# Load data
def load_data(self):
raw_dataset_df = pd.read_csv(self.source_path + self.filename)
return raw_dataset_df
# Select variables to process and include in the model
def subset_features(self, df):
sel_vars_df = df[self.features]
return sel_vars_df
@staticmethod
# Remove ids with missing values
def remove_ids_with_missing_values(df):
sel_vars_filled_df = df.dropna()
return sel_vars_filled_df
@staticmethod
def drop_duplicated_values(df):
df = df.drop_duplicates()
return df
# Combine processing stages
def combiner_pipeline(self):
raw_dataset = self.load_data()
subset_df = self.subset_features(raw_dataset)
subset_no_dup_df = self.drop_duplicated_values(subset_df)
subset_filled_df = self.remove_ids_with_missing_values(subset_no_dup_df)
return subset_filled_df
raw_set = RawSetProcessing(local_path, local_filename, my_variables_selection)
dataset_df = raw_set.combiner_pipeline()
dataset_df.head(3)
# Number of unique participants
dataset_df.iid.nunique()
dataset_df.shape
"""
Explanation: Data processing
End of explanation
"""
suffix_me = "_me"
suffix_partner = "_partner"
def get_partner_features(df, suffix_1, suffix_2, ignore_vars=True):
#print df[df["iid"] == 1]
df_partner = df.copy()
if ignore_vars is True:
df_partner = df_partner.drop(['pid','match'], 1).drop_duplicates()
else:
df_partner = df_partner.copy()
#print df_partner.shape
merged_datasets = df.merge(df_partner, how = "inner",left_on="pid", right_on="iid",suffixes=(suffix_1,suffix_2))
#print merged_datasets[merged_datasets["iid_me"] == 1]
return merged_datasets
feat_eng_df = get_partner_features(dataset_df,suffix_me,suffix_partner)
feat_eng_df.head(3)
"""
Explanation: Feature engineering
End of explanation
"""
import sklearn
print sklearn.__version__
from sklearn import tree
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
import subprocess
"""
Explanation: Modelling
This model aims to answer the question: what is the interest profile of the persons who got the most matches?
Variables:
* gender
* date: In general, how frequently do you go on dates?
* go_out: How often do you go out (not necessarily on dates)?
* sports: Playing sports/athletics
* tvsports: Watching sports
* exercise: Body building/exercising
* dining: Dining out
* museums: Museums/galleries
* art: Art
* hiking: Hiking/camping
* gaming: Gaming
* clubbing: Dancing/clubbing
* reading: Reading
* tv: Watching TV
* theater: Theater
* movies: Movies
* concerts: Going to concerts
* music: Music
* shopping: Shopping
* yoga: Yoga/meditation
End of explanation
"""
#features = list(["gender","age_o","race_o","goal","samerace","imprace","imprelig","date","go_out","career_c"])
features = list(["gender","date","go_out","sports","tvsports","exercise","dining","museums","art",
"hiking","gaming","clubbing","reading","tv","theater","movies","concerts","music",
"shopping","yoga"])
label = "match"
#add suffix to each element of list
def process_features_names(features, suffix_1, suffix_2):
print features
print suffix_1
print suffix_2
features_me = [feat + suffix_1 for feat in features]
features_partner = [feat + suffix_2 for feat in features]
features_all = features_me + features_partner
print features_all
return features_all
features_model = process_features_names(features, suffix_me, suffix_partner)
feat_eng_df.head(5)
explanatory = feat_eng_df[features_model]
explained = feat_eng_df[label]
"""
Explanation: Variables selection
End of explanation
"""
clf = tree.DecisionTreeClassifier(min_samples_split=20,min_samples_leaf=10,max_depth=4)
clf = clf.fit(explanatory, explained)
# Download http://www.graphviz.org/
with open("data.dot", 'w') as f:
f = tree.export_graphviz(clf, out_file=f, feature_names= features_model, class_names="match")
import subprocess
subprocess.call(['dot', '-Tpdf', 'data.dot', '-o' 'data.pdf'])
"""
Explanation: Decision Tree
End of explanation
"""
# Split the dataset in two equal parts
X_train, X_test, y_train, y_test = train_test_split(explanatory, explained, test_size=0.3, random_state=0)
parameters = [
{'criterion': ['gini','entropy'], 'max_depth': [4,6,10,12,14],
'min_samples_split': [10,20,30], 'min_samples_leaf': [10,15,20]
}
]
scores = ['precision', 'recall']
dtc = tree.DecisionTreeClassifier()
clf = GridSearchCV(dtc, parameters,n_jobs=3, cv=5, refit=True)
for score in scores:
print("# Tuning hyper-parameters for %s" % score)
print("")
clf = GridSearchCV(dtc, parameters, cv=5,
scoring='%s_macro' % score)
clf.fit(X_train, y_train)
print("Best parameters set found on development set:")
print("")
print(clf.best_params_)
print("")
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred))
print("")
best_param_dtc = tree.DecisionTreeClassifier(criterion="entropy",min_samples_split=10,min_samples_leaf=10,max_depth=14)
best_param_dtc = best_param_dtc.fit(explanatory, explained)
best_param_dtc.feature_importances_
raw_dataset.rename(columns={"age_o":"age_of_partner","race_o":"race_of_partner"},inplace=True)
"""
Explanation: Tuning Parameters
End of explanation
"""
import unittest
class FeatureEngineeringTest(unittest.TestCase):
def test_get_partner_features(self):
"""
:return:
"""
# Given
raw_data_a = {
'iid': ['1', '2', '3', '4', '5','6'],
'first_name': ['Sue', 'Maria', 'Sandra', 'Bill', 'Brian','Bruce'],
'sport':['foot','run','volley','basket','swim','tv'],
'pid': ['4', '5', '6', '1', '2','3'],}
df_a = pd.DataFrame(raw_data_a, columns = ['iid', 'first_name', 'sport','pid'])
expected_output_values = pd.DataFrame({
'iid_me': ['1', '2', '3', '4', '5','6'],
'first_name_me': ['Sue', 'Maria', 'Sandra', 'Bill', 'Brian','Bruce'],
'sport_me': ['foot','run','volley','basket','swim','tv'],
'pid_me': ['4', '5', '6', '1', '2','3'],
'iid_partner': ['4', '5', '6', '1', '2','3'],
'first_name_partner': ['Bill', 'Brian','Bruce','Sue', 'Maria', 'Sandra'],
'sport_partner': ['basket','swim','tv','foot','run','volley'],
'pid_partner':['1', '2', '3', '4', '5','6']
}, columns = ['iid_me','first_name_me','sport_me','pid_me',
'iid_partner','first_name_partner','sport_partner','pid_partner'])
# When
output_values = get_partner_features(df_a, "_me","_partner",ignore_vars=False)
print output_values
# Then
self.assertItemsEqual(output_values, expected_output_values)
suite = unittest.TestLoader().loadTestsFromTestCase(FeatureEngineeringTest)
unittest.TextTestRunner(verbosity=2).run(suite)
"""
Explanation: Test
End of explanation
"""
|
jgoppert/pymola | test/notebooks/XML.ipynb | bsd-3-clause | m1_xml = os.path.join(
'..', 'models', 'bouncing-ball.xml')
m1_ca = parse_xml.parse_file(m1_xml)
m1_ca
m1_ode = m1_ca.to_ode()
m1_ode
m1_ode.prop['x']['start'] = 1
data1 = sim_scipy.sim(m1_ode, {'dt': 0.01, 'tf': 3.5, 'integrator': 'dopri5'})
plt.figure(figsize=(15, 10))
analysis.plot(data1, marker='.', linewidth=1, markersize=5)
"""
Explanation: Bouncing Ball
This example:
1. Reads a ModelicaXML file
2. Parses it and creates a Casadi HybridDAE model.
3. Converts the HybridDAE model to a HybridODE model.
4. Simulates it using scipy.integrate
End of explanation
"""
simple_circuit_file = os.path.join(
'..', 'models', 'SimpleCircuit.mo')
m2_mo = parse_mo(open(simple_circuit_file, 'r').read())
m2_xml = generate_xml(m2_mo, 'SimpleCircuit')
m2_ca = parse_xml.parse(m2_xml)
m2_ca
m2_ode = m2_ca.to_ode()
m2_ode
m2_ode.prop.keys()
m2_ode.prop['AC.f']['value'] = 60
m2_ode.prop['AC.VA']['value'] = 110
data2 = sim_scipy.sim(m2_ode, {'dt': 0.001, 'tf': 0.5, 'integrator': 'dopri5'})
plt.figure(figsize=(15, 10))
analysis.plot(data2, fields=['x'])
"""
Explanation: Simple Circuit
This example:
1. Reads a Modelica file
2. Converts the Modelica file to ModelicaXML
3. Parses the ModelicaXML file and creates a Casadi HybridDAE model.
4. Converts the HybridDAE model to a HybridODE model.
5. Simulates it using scipy.integrate
End of explanation
"""
model_txt = """
model Simple
Real x(start=0);
discrete Real y;
discrete Real v;
discrete Real time_last(start=0);
equation
der(x) = v;
when (abs(x) >= 2) then
reinit(x, 0);
end when;
when (time - time_last > 0.1) then
v = 1 + noise_gaussian(0, 0.1);
y = x + noise_gaussian(0, 0.1);
time_last = time;
end when;
end Simple;
"""
m3_mo = parse_mo(model_txt)
m3_xml = generate_xml(m3_mo, 'Simple')
print(m3_xml)
m3_ca = parse_xml.parse(m3_xml)
m3_ca
m3_ode = m3_ca.to_ode()
m3_ode
m3_ode.prop['x']['start'] = 0
data = sim_scipy.sim(m3_ode, {'dt': 0.01, 'tf': 3})
plt.figure(figsize=(15, 10))
analysis.plot(data, fields=['m', 'x', 'y'])
"""
Explanation: Noise Simulation
This example demonstrates using the string-based parser and also noise simulation.
End of explanation
"""
|
ptpro3/ptpro3.github.io | Projects/Challenges/Challenge09/challenge_set_9ii_prashant.ipynb | mit | from sqlalchemy import create_engine
import pandas as pd
cnx = create_engine('postgresql://prashant:ptpro3@52.14.144.23:5432/prashant')
#port ~ 5432
pd.read_sql_query('''SELECT * FROM allstarfull LIMIT 5''',cnx)
pd.read_sql_query('''SELECT * FROM schools LIMIT 5''',cnx)
pd.read_sql_query('''SELECT * FROM salaries LIMIT 5''',cnx)
pd.read_sql_query('''SELECT schoolstate,Count(schoolid) as ct FROM schools Group By schoolstate ORDER BY ct DESC LIMIT 5''',cnx)
pd.read_sql_query('''SELECT playerid,salary
FROM Salaries
WHERE yearid = '1985' and salary > '500000' LIMIT 5;''',cnx)
"""
Explanation: Topic: Challenge Set 9 Part II
Subject: SQL
Date: 02/20/2017
Name: Prashant Tatineni
End of explanation
"""
pd.read_sql_query('''SELECT yearid, teamid, SUM(salary) FROM salaries
GROUP BY 1,2
ORDER BY 1 DESC
LIMIT 10
''',cnx)
"""
Explanation: 1. What was the total spent on salaries by each team, each year?
End of explanation
"""
pd.read_sql_query('''SELECT playerid, min(yearid), max(yearid)
FROM fielding
GROUP BY 1
LIMIT 10
''',cnx)
"""
Explanation: 2. What is the first and last year played for each player? Hint: Create a new table from 'Fielding.csv'.
End of explanation
"""
pd.read_sql_query('''SELECT playerid, COUNT(*)
FROM allstarfull
GROUP BY 1
ORDER BY 2 DESC
LIMIT 1
''',cnx)
"""
Explanation: 3. Who has played the most all star games?
End of explanation
"""
pd.read_sql_query('''SELECT schoolid, count(distinct playerid)
FROM schoolsplayers
GROUP BY 1
ORDER BY 2 DESC
LIMIT 1
''',cnx)
"""
Explanation: 4. Which school has generated the most distinct players? Hint: Create new table from 'CollegePlaying.csv'.
End of explanation
"""
pd.read_sql_query('''SELECT playerid, finalgame, debut, (finalgame-debut) AS days
FROM master
WHERE finalgame IS NOT NULL and debut IS NOT NULL
ORDER BY 4 DESC
LIMIT 5''',cnx)
"""
Explanation: 5. Which players have the longest career? Assume that the debut and finalGame columns comprise the start and end, respectively, of a player's career. Hint: Create a new table from 'Master.csv'. Also note that strings can be converted to dates using the DATE function and can then be subtracted from each other yielding their difference in days.
End of explanation
"""
pd.read_sql_query('''SELECT EXTRACT(MONTH FROM debut) AS debut_month, COUNT(*)
FROM master
GROUP BY 1
ORDER BY 1 ASC''',cnx)
"""
Explanation: 6. What is the distribution of debut months? Hint: Look at the DATE and EXTRACT functions.
End of explanation
"""
pd.read_sql_query('''SELECT S.playerid, AVG(salary)
FROM Salaries S LEFT JOIN Master M
ON S.playerid = M.playerid
GROUP BY 1
LIMIT 5''',cnx)
pd.read_sql_query('''SELECT M.playerid, AVG(salary)
FROM Master M LEFT JOIN Salaries S
ON M.playerid = S.playerid
GROUP BY 1
LIMIT 5''',cnx)
"""
Explanation: 7. What is the effect of table join order on mean salary for the players listed in the main (master) table? Hint: Perform two different queries, one that joins on playerID in the salary table and other that joins on the same column in the master table. You will have to use left joins for each since right joins are not currently supported with SQLalchemy.
End of explanation
"""
|
skkandrach/foundations-homework | Homework_5_Soma_graded.ipynb | mit | # !pip3 install requests
import requests
response = requests.get('https://api.spotify.com/v1/search?query=Lil+&offset=0&limit=50&type=artist&market=US')
data = response.json()
data.keys()
artist_data = data['artists']['items']
for artist in artist_data:
print(artist['name'], artist['popularity'], artist['genres'])
"""
Explanation: Grade: 6 / 8 -- take a look at TA-COMMENTS (you can Command + F to search for "TA-COMMENT")
I don't have the spotify portion of your HW5 -- is it somewhere else in your repo? Let me know because that part of the homework is another 8 points.
1) With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score.
End of explanation
"""
genre_list = []
for artist in artist_data:
for genre in artist['genres']:
genre_list.append(genre)
print(genre_list)
genre_count = 0
# TA-COMMENT: (-0.5) This is actually calling your temporary variable "artist" from your for loop above.
# You want to call 'artist['genres']'
for genre in artist['genres']:
# TA-COMMENT: This is actually always true and therefore, it's not doing what you'd want it to do!
if True:
genre_count = genre_count + 1
print(artist['genres'])
else:
print("No genres listed")
# TA-COMMENT: see example
if True:
print("hello")
import requests # TA-COMMENT: No need to import requests everytime -- you just once to do it once within a notebook
response = requests.get('https://api.spotify.com/v1/search?query=Lil+&offset=0&limit=50&type=artist&market=US')
data = response.json()
type(data)
data.keys()
type(data['artists'])
data['artists'].keys()
artists = data['artists']['items']
artists
# TA-COMMENT: Excellent for loop!!
for artist in artists:
print(artist['name'], artist['popularity'])
if len(artist['genres']) > 0:
genres = ", ".join(artist['genres'])
print("Genre list: ", genres)
else:
print("No genres listed")
response = requests.get('https://api.spotify.com/v1/search?query=Lil+&offset=0&limit=50&type=artist&market=US')
data = response.json()
# genres = data['genres']
# genre_count = 0
# for genre in genres:
# print(genres['name'])
# g = genres + 1
# genre_count.append(g)
data.keys()
# to figure out what data['artists'] is, we have a couple of options. My favorite!! Printing!!
print(data['artists']) #its a dictionary so look inside!
data['artists'].keys() #calling out what's inside of the dictionary
print(data['artists']['items']) #calling out a dictionary inside of a dicitonary...and look below its a list '[]'
data['artists']['items'][0] #set the list at zero.
artists = data['artists']['items'] #declare the dictionary and list as a variable to make it easier
for artist in artists:
# print(artist.keys())
print(artist['genres'])
from collections import Counter
artists = data['artists']['items']
genre_list = []
for artist in artists:
# print(artist.keys())
# print(artist['genres'])
genre_list = genre_list + artist['genres']
print(genre_list)
Counter(genre_list) #counter function - it rocks
# TA-COMMENT: Yassss
unique_genres = set(genre_list)
unique_genres
genre_count_dict = {}
for unique_item in unique_genres:
count_variable = 0
for item in genre_list:
if item == unique_item:
count_variable = count_variable + 1
genre_count_dict[unique_item] = count_variable
# TA-COMMENT: Beautiful!
genre_count_dict
"""
Explanation: 2) What genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
End of explanation
"""
most_popular_name = ""
most_popular_score = 0
for artist in artists:
print("Looking at", artist['name'], "who has a popularity score of", artist['popularity'])
#THE CONDITIONAL - WHAT YOU'RE TESTING
print("Comparing", artist['popularity'], "to", most_popular_score, "of")
if artist['popularity'] > most_popular_score and artist ['name'] != "Lil Wayne":
#THE CHANGE - WHAT YOU'RE KEEPING TRACK OF
most_popular_name = artist['name']
most_popular_score = artist['popularity']
print(most_popular_name, most_popular_score)
# TA-COMMENT: Excellent!
"""
Explanation: 3) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating. Is it the same artist who has the largest number of followers?
End of explanation
"""
target_score = 72
#PART ONE: INITIAL CONDITION
second_best_artists = []
#AGGREGATION PROBLEM - when you're looping through a series of objects and you sometimes want to add some
#of those objects to a DIFFERENT list
for artist in artists:
print("Looking at", artist['name'], "who has a popularity of", artist['popularity'])
#PART TWO: CONDITIONAL - when we want to add someone to our list
if artist['popularity'] == 72:
#PART THREE: THE CHANGE - add artist to our list
second_best_artists.append(artist['name'])
print("OUR SECOND BEST ARTISTS ARE:")
for artist in second_best_artists:
print(artist)
# TA-COMMENT: This code doesn't work as you'd want because your temporary variable name "artists" is the same
# as your list name!
for artists in artists:
#print("Looking at", artist['name'])
if artist['name'] == "Lil' Kim":
print("Found Lil' Kim")
print(artist['popularity'])
else:
pass
#print("Not Lil' Kim")
import requests
response = requests.get('https://api.spotify.com/v1/search?query=Lil+&offset=0&limit=50&type=artist&market=US')
data = response.json()
data.keys()
artist_data = data['artists']['items']
for artist in artist_data:
print(artist['name'], artist['popularity'], artist['genres'])
"""
Explanation: 4) Print a list of Lil's that are more popular than Lil' Kim.
End of explanation
"""
import requests
response = requests.get('https://api.spotify.com/v1/search?query=Lil+&offset=0&limit=50&type=artist&market=US')
data = response.json()
data.keys()
artist_data = data['artists']['items']
for artist in artist_data:
print(artist['name'], "ID is:", artist['id'])
import requests
response = requests.get('https://api.spotify.com/v1/artists/55Aa2cqylxrFIXC767Z865/top-tracks?country=US')
lil_wayne = response.json()
print(lil_wayne)
type(lil_wayne)
print(type(lil_wayne))
print(lil_wayne.keys())
print(lil_wayne['tracks'])
# TA-COMMENT: (-1) Remember what our for loop structures should look like! Below, there is nothing indented below your
# for loop!
# and you cannot call ['tracks'] without specifying which dictionary to find ['tracks']
for lil_wayne in ['tracks']
print("Lil Wayne's top tracks are:")
import requests
response = requests.get('https://api.spotify.com/v1/artists/7sfl4Xt5KmfyDs2T3SVSMK/top-tracks?country=US')
lil_jon = response.json()
print(lil_jon)
type(lil_jon)
print(type(lil_jon))
print(lil_jon.keys())
print(lil_jon['tracks'])
tracks = ['tracks']
for lil_jon in tracks:
print("Lil Jon's top tracks are:")
"""
Explanation: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
End of explanation
"""
response = requests.get("https://api.spotify.com/v1/artists/55Aa2cqylxrFIXC767Z865/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
track_count = len(tracks)
explicit_count = 0
clean_count = 0
for track in tracks:
print(track['name'], track['explicit'])
# TA-COMMENT: (-0.5) if True is always True! This happens to work out because all the tracks were explicit.
if True:
explicit_count = explicit_count + 1
if not track['explicit']:
clean_count = clean_count + 1
print("We have found", explicit_count, "explicit tracks, and", clean_count, "tracks are clean")
if track_count > 0:
    print("Overall, We discovered", explicit_count, "explicit tracks")
    print("And", clean_count, "were non-explicit")
    print("Which means", 100 * clean_count / track_count, "percent of tracks were clean")
else:
    print("No top tracks found")
# TA-COMMENT: It's a good idea to comment out code so if you read it back later, you know what's happening at each stage
import requests
response = requests.get('https://api.spotify.com/v1/artists/55Aa2cqylxrFIXC767Z865/top-tracks?country=US')
data = response.json()
tracks = data['tracks']
#print(tracks)
explicit_count = 0
clean_count = 0
popularity_explicit = 0
popularity_nonexplicit = 0
minutes_explicit = 0
minutes_not_explicit = 0
for track in tracks:
print(track['name'], track['explicit'])
if track['explicit'] == True:
explicit_count = explicit_count + 1
popularity_explicit = popularity_explicit + track['popularity']
minutes_explicit = minutes_explicit + track['duration_ms']
print("The number of explicit songs are", explicit_count, "with a popularity of", popularity_explicit)
print( "Lil Wayne has", minutes_explicit/10000, "minutes of explicit songs" )
elif track['explicit'] == False:
clean_count = clean_count + 1
popularity_nonexplicit = popularity_nonexplicit + track['popularity']
minutes_not_explicit = minutes_not_explicit + track['duration_ms']
print("Lil Wayne has", explicit_count, "explicit tracks, and", clean_count, "clean tracks")
print( "Lil Wayne has", minutes_not_explicit/10000, "minutes of clean songs")
print("The average popularity of Lil Wayne's explicit tracks is", popularity_explicit/explicit_count)
import requests
response = requests.get('https://api.spotify.com/v1/artists/7sfl4Xt5KmfyDs2T3SVSMK/top-tracks?country=US')
data = response.json()
tracks = data['tracks']
#print(tracks)
explicit_count = 0
clean_count = 0
for track in tracks:
print(track['name'], track['explicit'])
if True:
explicit_count = explicit_count + 1
if not track['explicit']:
clean_count = clean_count + 1
print("Lil Jon has", explicit_count, "explicit tracks, and", clean_count, "clean tracks")
response = requests.get("https://api.spotify.com/v1/artists/7sfl4Xt5KmfyDs2T3SVSMK/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
track_count = len(tracks)
explicit_count = 0
clean_count = 0
for track in tracks:
print(track['name'], track['explicit'])
if True:
explicit_count = explicit_count + 1
if not track['explicit']:
clean_count = clean_count + 1
print("We have found", explicit_count, "explicit tracks, and", clean_count, "tracks are clean")
if track_count > 0:
    print("Overall, We discovered", explicit_count, "explicit tracks")
    print("And", clean_count, "were non-explicit")
    print("Which means", 100 * clean_count / track_count, "percent of tracks were clean")
else:
    print("No top tracks found")
import requests
response = requests.get('https://api.spotify.com/v1/artists/7sfl4Xt5KmfyDs2T3SVSMK/top-tracks?country=US')
data = response.json()
tracks = data['tracks']
#print(tracks)
explicit_count = 0
clean_count = 0
popularity_explicit = 0
popularity_nonexplicit = 0
minutes_explicit = 0
minutes_not_explicit = 0
for track in tracks:
print(track['name'], track['explicit'])
if track['explicit'] == True:
explicit_count = explicit_count + 1
popularity_explicit = popularity_explicit + track['popularity']
minutes_explicit = minutes_explicit + track['duration_ms']
print("The number of explicit songs are", explicit_count, "with a popularity of", popularity_explicit)
print( "Lil Jon has", minutes_explicit/1000, "minutes of explicit songs" )
elif track['explicit'] == False:
clean_count = clean_count + 1
popularity_nonexplicit = popularity_nonexplicit + track['popularity']
minutes_not_explicit = minutes_not_explicit + track['duration_ms']
print("Lil Jon has", explicit_count, "explicit tracks, and", clean_count, "clean tracks")
print( "Lil Jon has", minutes_not_explicit/1000, "minutes of clean songs")
print("The average popularity of Lil Jon's explicit tracks is", popularity_explicit/explicit_count)
"""
Explanation: 6) Will the world explode if a musician swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit?
End of explanation
"""
|
Boussau/Notebooks | Notebooks/computeConsensusChronogram.ipynb | gpl-2.0 | import sys
from ete3 import Tree, TreeStyle, NodeStyle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
fileT = "100.trees"
try:
f=open(fileT, 'r')
except IOError:
print ("Unknown file: " + fileT)
sys.exit()
allTrees = list()
for l in f:
allTrees.append( Tree( l.strip() ) )
"""
Explanation: Computing a consensus chronogram from MT trees
Reading all 100 trees computed by Gergely from replicates of 1% of the transfer-based constraints.
End of explanation
"""
#File where I store useful functions
exec (open("/Users/boussau/Programs/PythonCode/functions.py").read ())
id2Heights=list()
for t in allTrees:
node2Height,id2Height = getNodeHeights( t )
id2Heights.append(id2Height)
"""
Explanation: Node ids are consistent across all trees.
Getting node Heights.
End of explanation
"""
print(len(id2Heights))
# Creating a uniform weight vector
weights = [1] * len(id2Heights)
outputsWeightedChronogram (allTrees[0].copy(), id2Heights, "consensus100MT", weights)
"""
Explanation: Now we have the node heights.
Outputting the consensus chronogram, with confidence intervals.
End of explanation
"""
|
jdnz/qml-rg | Meeting 6/aps_with_classifiers.ipynb | gpl-3.0 | import numpy as np
import os
from skimage.transform import resize
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
import image_loader as im
from matplotlib import pyplot as plt
%matplotlib inline
path=os.getcwd()+'/' # finds the path of the folder in which the notebook is
path_train=path+'images/train/'
path_test=path+'images/test/'
path_real=path+'images/real_world/'
"""
Explanation: Solving the Captcha problem with Random Forest and Support Vector classifiers
To be able to run this notebook, you should place Peter's program image_loader.py in this folder, along with the images folder used last week.
End of explanation
"""
def prep_datas(xset,xlabels):
X=list(xset)
for i in range(len(X)):
X[i]=resize(X[i],(32,32,1)) # reduce the size of the image from 100X100 to 32X32. Also flattens the color levels
X[i]=np.reshape(X[i],1024) # reshape from 32x32 to a flat 1024 vector
X=np.array(X) # transforms it into an array
Y=np.asarray(xlabels) # transforms from list to array
return X,Y
"""
Explanation: We define the function prep_datas (props to Alexandre), already used the previous week. However, we now reshape the images from a 32x32 matrix (this particular size is somewhat arbitrary, but the bigger the image, the worse the classifiers will work) to a flat 1024 vector, a constraint given by the Random Forest classifier.
End of explanation
"""
training_set, training_labels = im.load_images(path_train)
X_train, Y_train = prep_datas(training_set,training_labels)
test_set, test_labels = im.load_images(path_test)
X_test,Y_test=prep_datas(test_set,test_labels)
"""
Explanation: Then we load the training and the test set:
End of explanation
"""
classifierForest = RandomForestClassifier(n_estimators=1000)
classifierSVC=svm.SVC(kernel='linear')
classifierForest.fit(X_train, Y_train)
classifierSVC.fit(X_train,Y_train)
"""
Explanation: We define the Random Forest and Support Vector classifiers and train them through the fit function. Taking a linear kernel for the SVC gives the best results for this classifier.
End of explanation
"""
expectedF = Y_test
predictedF = classifierForest.predict(X_test)
predictedS = classifierSVC.predict(X_test)
print(expectedF)
print(predictedF)
print(predictedS)
"""
Explanation: Let's test how well the system is doing
End of explanation
"""
real_world_set=[]
for i in np.arange(1,73):
filename=path+'images/real_world/'+str(i)+'.png'
real_world_set.append(im.deshear(filename))
fake_label=np.ones(len(real_world_set),dtype='int32')
X_real,Y_real=prep_datas(real_world_set,fake_label)
"""
Explanation: Now we load the real set of images and test it. This part of the program has been taken from Alexandre's program from last week. First we load the 'real world' images
End of explanation
"""
y_predF = classifierForest.predict(X_real)
y_predS = classifierSVC.predict(X_real)
"""
Explanation: Then we make the predictions with both classifiers
End of explanation
"""
f=open(path+'images/real_world/labels.txt',"r")
lines=f.readlines()
result=[]
for x in lines:
result.append((x.split(' ')[1]).replace('\n',''))
f.close()
result=np.array([int(x) for x in result])
result[result>1]=1
plt.plot(y_predF,'o')
plt.plot(1.2*y_predS,'o')
plt.plot(2*result,'o')
plt.ylim(-0.5,2.5);
"""
Explanation: Finally we plot the results
End of explanation
"""
|
csadorf/signac | doc/signac_202_Integration_with_pandas.ipynb | bsd-3-clause | import signac
import pandas as pd
project = signac.get_project(root='projects/tutorial')
"""
Explanation: 2.2 Integration with pandas data frames
As was shown earlier, we can use indexes to search for specific data points.
One way to operate on the data is using pandas data frames.
Please note: The following steps require the pandas package.
End of explanation
"""
df_index = pd.DataFrame(project.index())
df_index.head()
"""
Explanation: Let's first create a basic index and use it to construct an index data frame:
End of explanation
"""
df_index = df_index.set_index(['_id'])
df_index.head()
"""
Explanation: It is a good idea to explicitly use the _id value as the index key:
End of explanation
"""
statepoints = {doc['_id']: doc['statepoint'] for doc in project.index()}
df = pd.DataFrame(statepoints).T.join(df_index)
df.head()
"""
Explanation: Furthermore, the index would be more useful if each statepoint parameter had its own column.
End of explanation
"""
df[(df.fluid=='argon') & (df.p > 2.0) & (df.p <= 5.0)].V_gas.mean()
"""
Explanation: Now we can select specific data subsets, for example to calculate the mean gas volume of argon for a pressure p between 2.0 and 5.0:
End of explanation
"""
% matplotlib inline
df_argon = df[df.fluid=='argon'][['p', 'V_liq', 'V_gas']]
df_argon.sort_values('p').set_index('p').plot(logy=True)
"""
Explanation: Or we can plot a p-V phase diagram for argon (requires matplotlib).
End of explanation
"""
from matplotlib import pyplot as plt
for fluid, group in df[df.p < 2].groupby('fluid'):
d = group.sort_values('p')
plt.plot(d['p'], d['V_gas'] / d['N'], label=fluid)
plt.xlabel('p')
plt.ylabel(r'$\rho_{gas}$')
plt.legend(loc=0)
"""
Explanation: Or we group the data by fluid and compare the gas densities for low pressures:
End of explanation
"""
|
wasit7/tutorials | flask/tu/notebook/.ipynb_checkpoints/Somkiats-Basic-Python-checkpoint.ipynb | mit | x=1
print x
type(x)
x.conjugate()
type(1+2j)
z=1+2j
print z
(1,2)
t=(1,2,"text")
t
t
def foo():
return (1,2)
x,y=foo()
print x
print y
def swap(x,y):
return (y,x)
x=1;y=2
print "{0:d} {1:d}".format(x,y)
x,y=swap(x,y)
print "{:f} {:f}".format(x,y)
dir(1)
x=[]
x.append("text")
x
x.append(1)
x.pop()
x.append([1,2,3])
x
x.append(2)
x
print x[0]
print x[-2]
x.pop(-2)
x
%%timeit -n10
x=[]
for i in range(100000):
x.append(2*i+1)
%%timeit -n10
x=[]
for i in xrange(100000):
x.append(2*i+1)
range(10)
y=[2*i+1 for i in xrange(10)]
print y
type({})
x={"key":"value","foo":"bar"}
print x
key="key1"
if key in x:
print x[key]
y={ i:i*i for i in xrange(10)}
y
z=[v for (k,v) in y.iteritems()]
print z
"""
Explanation: Environment setup: Python and Jupyter
Variables: Numbers, String, Tuple, List, Dictionary
Basic operators: Arithmetic and Boolean operators
Control flow: if/else, for, while, pass, break, continue
List: access, update, del, len(), + , in, for, slicing, append(), insert(), pop(), remove()
Dictionary: access, update, del, in
Function: function definition, pass by reference, keyword argument, default argument, lambda
map, reduce, filter
Module: from, import, reload(), package has __init__.py, __init__ and __str__
I/O: raw_input(), input(), open(), close(), write(), read(), rename(), remove(), mkdir(), chdir(), rmdir()
Pass by value, Pass by reference
Date/time: local time and time zone, pytz module
Variables: Numbers, String, Tuple, List, Dictionary
End of explanation
"""
p=[]
for i in xrange(2,100):
isprime=1
for j in p:
if(i%j==0):
isprime=0
break
if isprime:
p.append(i)
print p
for i in xrange(10):
pass
i=10
while i>0:
i=i-1
print i
x=['text',"str",''' Hello World\\n ''']
print x
"""
Explanation: if/else, for, while, pass, break, continue
End of explanation
"""
x=['a','b','c']
#access
print x[0]
#update
x[0]='d'
print x
print "size of x%d is"%len(x)
y=['x','y','z']
z=x+y
gamma=y+x
print z
print gamma
print 'a' in x
print y
y.remove('y')# remove by vavlue
print y
print y
y.pop(0)# remove by index
print y
y.insert(0,'x')
y.insert(1,'y')
print y
x=[i*i for i in xrange(10)]
print x
x[:3]
x[-3:]
x[-1:]
x[3:-3]
x[1:6]
x[::2]
print x
x.reverse()
print x
print x
print x[::-1]
print x
"""
Explanation: List: access, update, del, len(), + , in, for, slicing, append(), insert(), pop(), remove()
End of explanation
"""
x={}
x={'key':'value'}
x['foo']='bar'
x
x['foo']='Hello'
x
x['m']=123
x['foo','key']# raises KeyError: ('foo', 'key') is looked up as a single tuple key
keys=['foo','key']
[x[k] for k in keys]
print x
del x
print x# raises NameError: x no longer exists after the del statement above
"""
Explanation: Dictionary: access, update, del, in
End of explanation
"""
def foo(x):
x=x+1
y=2*x
return y
print foo(3)
x=3
print foo(x)
print x
def bar(x=[]):
x.append(7)
print "in loop: {}".format(x)
x=[1,2,3]
print x
bar(x)
print x
def func(x=0,y=0,z=0):#default input argument
return x*100+y*10+z
func(1,2)
func(y=2,z=3,x=1)#keyword input argument
f=func
f(y=2)
distance=[13,500,1370]#meter
def meter2Kilometer(d):
return d/1000.0;
meter2Kilometer(distance)
[meter2Kilometer(d) for d in distance]
d2 = map(meter2Kilometer,distance)
print d2
d3 = map(lambda x: x/1000.0,distance)
print d3
distance=[13,500,1370]#meter
time=[1,10,100]
d3 = map(lambda s,t: s/float(t)*3.6, distance,time )
print d3
d4=filter(lambda s: s<1000, distance)
print d4
total_distance=reduce(lambda i,j : i+j, distance)
total_distance
import numpy as np
x=np.arange(101)
print x
np.histogram(x,bins=[0,50,60,70,80,100])
print np.sort(x)
"""
Explanation: Function: function definition, pass by reference, keyword argument, default argument, lambda
a built-in immutable type: str, int, long, bool, float, tuple
End of explanation
"""
class Obj:
def __init__(self, _x, _y):
self.x = _x
self.y = _y
def update(self, _x, _y):
self.x += _x
self.y += _y
def __str__(self):
return "x:%d, y:%d"%(self.x,self.y)
a=Obj(5,7)#call __init__
print a#call __str__
a.update(1,2)#call update
print a
import sys
import os
path=os.getcwd()
path=os.path.join(path,'lib')
print path
sys.path.insert(0, path)
from Obj import Obj as ob
b=ob(7,9)
print b
b.update(3,7)
print b
os.getcwd()
from mylib import mymodule as mm
mm=reload(mm)
print mm.Obj2(8,9)
"""
Explanation: Module: class, from, import, reload(), package and __init__
End of explanation
"""
SaTa999/pyPanair | examples/tutorial2/tutorial2.ipynb | mit
%matplotlib notebook
import matplotlib.pyplot as plt
from pyPanair.preprocess import wgs_creator
for eta in ("0000", "0126", "0400", "0700", "1000"):
af = wgs_creator.read_airfoil("eta{}.csv".format(eta))
plt.plot(af[:,0], af[:,2], "k-", lw=1.)
plt.plot((0.5049,), (0,), "ro", label="Center of rotation")
plt.legend(loc="best")
plt.xlabel("$x$ [m]")
plt.xlabel("$z$ [m]")
plt.show()
"""
Explanation: pyPanair Tutorial#2 Tapered Wing
In this tutorial we will perform an analysis of a tapered wing.
The wing is defined by five different wing sections at $\eta=0.000, 0.126, 0.400, 0.700, 1.000$.
Below are the wing planform and airfoil stack, respectively.
(The wing is based on the DLR-F4<sup>1</sup>)
End of explanation
"""
from pyPanair.preprocess import wgs_creator
wgs = wgs_creator.LaWGS("tapered_wing")
"""
Explanation: 1.Defining the geometry
Just as we have done in tutorial 1, we will use the wgs_creator module to define the geometry of the wing.
First off, we initialize a LaWGS object.
End of explanation
"""
import pandas as pd
pd.set_option("display.max_rows", 10)
pd.read_csv("eta0000.csv")
"""
Explanation: Next, we create a Line object that defines the coordinates of the airfoil at the root of the wing.
To do so, we will read a csv file that contains the coordinates of the airfoil, using the read_airfoil function.
Five csv files, eta0000.csv, eta0126.csv, eta0400.csv, eta0700.csv, and eta1000.csv have been prepared for this tutorial.
Before creating the Line object, we will take a quick view at these files.
For example, eta0000.csv looks like ...
End of explanation
"""
wingsection1 = wgs_creator.read_airfoil("eta0000.csv", y_coordinate=0.)
"""
Explanation: The first and second columns xup and zup represent the xz-coordinates of the upper surface of the airfoil.
The third and fourth columns xlow and zlow represent the xz-coordinates of the lower surface of the airfoil.
The csv file must follow four rules:
1. Data in the first row correspond to the xz-coordinates of the leading edge of the airfoil
2. Data in the last row correspond to the xz-coordinates of the trailing edge of the airfoil
3. For the first row, the coordinates (xup, zup) and (xlow, zlow) are the same
4. For the last row, the coordinates (xup, zup) and (xlow, zlow) are the same (i.e. the airfoil has a sharp TE)
Now we shall create a Line object for the root of the wing.
End of explanation
"""
wingsection2 = wgs_creator.read_airfoil("eta0126.csv", y_coordinate=0.074211)
wingsection3 = wgs_creator.read_airfoil("eta0400.csv", y_coordinate=0.235051)
wingsection4 = wgs_creator.read_airfoil("eta0700.csv", y_coordinate=0.410350)
wingsection5 = wgs_creator.read_airfoil("eta1000.csv", y_coordinate=0.585650)
"""
Explanation: The first variable specifies the name of the csv file.
The y_coordinate variable defines the y-coordinate of the points included in the Line.
Line objects for the remaining four wing sections can be created in the same way.
End of explanation
"""
wingnet1 = wingsection1.linspace(wingsection2, num=4)
wingnet2 = wingsection2.linspace(wingsection3, num=8)
wingnet3 = wingsection3.linspace(wingsection4, num=9)
wingnet4 = wingsection4.linspace(wingsection5, num=9)
"""
Explanation: Next, we create four networks by linearly interpolating these wing sections.
End of explanation
"""
wing = wingnet1.concat_row((wingnet2, wingnet3, wingnet4))
"""
Explanation: Then, we concatenate the networks using the concat_row method.
End of explanation
"""
wing.plot_wireframe()
"""
Explanation: The concatenated network is displayed below.
End of explanation
"""
wingtip_up, wingtip_low = wingsection5.split_half()
wingtip_low = wingtip_low.flip()
wingtip = wingtip_up.linspace(wingtip_low, num=5)
wake_length = 50 * 0.1412
wingwake = wing.make_wake(edge_number=3, wake_length=wake_length)
"""
Explanation: After creating the Network for the wing, we create networks for the wingtip and wake.
End of explanation
"""
wgs.append_network("wing", wing, 1)
wgs.append_network("wingtip", wingtip, 1)
wgs.append_network("wingwake", wingwake, 18)
"""
Explanation: Next, the Networks will be registered to the wgs object.
End of explanation
"""
wgs.create_stl()
"""
Explanation: Then, we create a stl file to check that there are no errors in the model.
End of explanation
"""
wgs.create_aux(alpha=(-2, 0, 2), mach=0.6, cbar=0.1412, span=1.1714, sref=0.1454, xref=0.5049, zref=0.)
wgs.create_wgs()
"""
Explanation: Last, we create input files for panin
End of explanation
"""
from pyPanair.postprocess import write_vtk
write_vtk(n_wake=1)
from pyPanair.postprocess import calc_section_force
calc_section_force(aoa=2, mac=0.1412, rot_center=(0.5049,0,0), casenum=3, networknum=1)
section_force = pd.read_csv("section_force.csv")
section_force
plt.plot(section_force.pos / 0.5857, section_force.cl * section_force.chord, "s", mfc="None", mec="b")
plt.xlabel("spanwise position [normalized]")
plt.ylabel("cl * chord")
plt.grid()
plt.show()
"""
Explanation: 2. Analysis
The analysis can be done in the same way as tutorial 1.
Place panair, panin, tapered_wing.aux, and tapered_wing.wgs in the same directory,
and run panin and panair.
bash
$ ./panin
Prepare input for PanAir
Version 1.0 (4Jan2000)
Ralph L. Carmichael, Public Domain Aeronautical Software
Enter the name of the auxiliary file:
tapered_wing.aux
10 records copied from auxiliary file.
9 records in the internal data file.
Geometry data to be read from tapered_wing.wgs
Reading WGS file...
Reading network wing
Reading network wingtip
Reading network wingwake
Reading input file instructions...
Command 1 MACH 0.6
Command 11 ALPHA -2 0 2
Command 6 cbar 0.1412
Command 7 span 1.1714
Command 2 sref 0.1454
Command 3 xref 0.5049
Command 5 zref 0.0
Command 35 BOUN 1 1 18
Writing PanAir input file...
Files a502.in added to your directory.
Also, file panin.dbg
Normal termination of panin, version 1.0 (4Jan2000)
Normal termination of panin
bash
$ ./panair
Panair High Order Panel Code, Version 15.0 (10 December 2009)
Enter name of input file:
a502.in
After the analysis finishes, place panair.out, agps, and ffmf in the tutorial2 directory.
3. Visualization
Visualization of the results can be done in the same manner as tutorial 1.
End of explanation
"""
from pyPanair.postprocess import write_ffmf, read_ffmf
read_ffmf()
write_ffmf()
"""
Explanation: The ffmf file can be parsed using the read_ffmf and write_ffmf methods.
End of explanation
"""
austinjalexander/sandbox | python/py/nanodegree/intro_ds/final_project/IntroDS-ProjectOne-Section2.ipynb | mit
import numpy as np
import pandas as pd
import scipy as sp
import scipy.stats as st
import statsmodels.api as sm
import scipy.optimize as op
from sklearn.cross_validation import train_test_split
import matplotlib.pyplot as plt
%matplotlib inline
filename = '/Users/excalibur/py/nanodegree/intro_ds/final_project/improved-dataset/turnstile_weather_v2.csv'
# import data
data = pd.read_csv(filename)
print data.columns.values
data['ENTRIESn_hourly'].describe()
plt.boxplot(data['ENTRIESn_hourly'], vert=False)
plt.show()
data[data['ENTRIESn_hourly'] == 0].count()[0]
data[data['ENTRIESn_hourly'] > 500].count()[0]
data[data['ENTRIESn_hourly'] > 1000].count()[0]
data[data['ENTRIESn_hourly'] > 5000].count()[0]
data[data['ENTRIESn_hourly'] > 10000].count()[0]
plt.figure(figsize = (10,10))
plt.hist(data['ENTRIESn_hourly'], bins=100)
plt.show()
plt.boxplot(data['ENTRIESn_hourly'], vert=False)
plt.show()
# the overwhelming majority of the action is occurring below 10000
#data = data[(data['ENTRIESn_hourly'] <= 10000)]
plt.figure(figsize = (10,10))
plt.hist(data['ENTRIESn_hourly'].values, bins=100)
plt.show()
plt.boxplot(data['ENTRIESn_hourly'].values, vert=False)
plt.show()
"""
Explanation: Analyzing the NYC Subway Dataset
Intro to Data Science: Final Project 1, Part 2
(Short Questions)
Section 2. Linear Regression
Austin J. Alexander
Import Directives and Initial DataFrame Creation
End of explanation
"""
class SampleCreator:
def __init__(self,data,categorical_features,quantitative_features):
# response vector
y = data['ENTRIESn_hourly'].values
# get quantitative features
X = data[quantitative_features].values
# Feature Scaling
# mean normalization
x_i_bar = []
s_i = []
for i in np.arange(X.shape[1]):
x_i_bar.append(np.mean(X[:,i]))
s_i.append(np.std(X[:,i]))
X[:,i] = np.true_divide((np.subtract(X[:,i],x_i_bar[i])),s_i[i])
# create dummy variables for categorical features
for feature in categorical_features:
dummies = sm.categorical(data[feature].values, drop=True)
X = np.hstack((X,dummies))
# final design matrix
X = sm.add_constant(X)
self.X_train, self.X_test, self.y_train, self.y_test = train_test_split(X, y, test_size=0.2, random_state=42)
"""
Explanation: Class for Creating Training and Testing Samples
End of explanation
"""
categorical_features = ['UNIT', 'hour', 'day_week', 'station']
#categorical_features = ['UNIT']
quantitative_features = ['rain', 'tempi']
#quantitative_features = []
# for tracking during trials
best_rsquared = 0
best_results = []
# perform 5 trials; keep model with best R^2
for x in xrange(0,5):
samples = SampleCreator(data,categorical_features,quantitative_features)
model = sm.OLS(samples.y_train,samples.X_train)
results = model.fit()
if results.rsquared > best_rsquared:
best_rsquared = results.rsquared
best_results = results
print "r = {0:.2f}".format(np.sqrt(best_results.rsquared))
print "R^2 = {0:.2f}".format(best_results.rsquared)
"""
Explanation: Section 2. Linear Regression
<h3 id='2_1'>2.1 What approach did you use to compute the coefficients theta and produce prediction for ENTRIESn_hourly in your regression model?</h3>
After comparing a few different methods (Ordinary Least Squares [OLS] from StatsModels, two different regression techniques from scikit-learn, the Broyden–Fletcher–Goldfarb–Shanno [BFGS] optimization algorithm from Scipy.optimize, and a Normal Equations algebraic attempt), OLS from StatsModels was chosen due to its consistently higher $r$ and $R^{2}$ values (see notes 1 and 2 below) throughout various test sample sizes ($n \in \{30, 100, 500, 1500, 5000, 10000\}$).
Notes
<sup>1</sup> The linear correlation coefficient ($r$) can take on the following values: $-1 \leq r \leq 1$. If $r = +1$, then a perfect positive linear relation exists between the explanatory and response variables. If $r = -1$, then a perfect negative linear relation exists between the explanatory and response variables.
<sup>2</sup> The coefficient of determination ($R^{2}$) can take on the following values: $0 \leq R^{2} \leq 1$. If $R^{2} = 0$, the least-squares regression line has no explanatory value; if $R^{2} = 1$, the least-squares regression line explains $100\%$ of the variation in the response variable.
<h3 id='2_2'>2.2 What features (input variables) did you use in your model? Did you use any dummy variables as part of your features?</h3>
Quantitative features used: 'rain', 'tempi'.
Categorical features used: 'UNIT', 'hour', 'day_week', 'station'. As categorical features, these variables required the use of so-called dummy variables.
<h3 id='2_3'>2.3 Why did you select these features in your model?</h3>
Due to the findings presented in the <a href='IntroDS-ProjectOne-DataExploration-Supplement.ipynb' target='_blank'>DataExploration supplement</a>, it seemed clear that location significantly impacted the number of entries. In addition, the hour and day of the week showed importance. Temperature appeared to have some relationship with entries as well, and so it was included. Based on that exploration and on the statistical and practical evidence offered in <a href='IntroDS-ProjectOne-Section1.ipynb' target='_blank'>Section 1. Statistical Test</a>, rain was not included as a feature (and, as evidenced by a number of test runs, had marginal if any importance).
As far as the selection of location and day/time variables were concerned, station can be captured quantitatively by latitude and longitude, both of which, as numeric values, should offer a better sense of trend toward something. However, as witnessed by numerous test runs, latitude and longitude in fact appear to be redundant when using UNIT as a feature, which is in fact more significant (as test runs indicated and, as one might assume, due to, for example, station layouts, where some UNITs would be used more than others) than latitude and longitude.
Each DATEn is a 'one-off', so it's unclear how any could be helpful for modeling/predicting (as those dates literally never occur again). day_week seemed to be a better selection in this case.
Using StatsModels OLS to Create a Model
End of explanation
"""
X_train = samples.X_train
print X_train.shape
y_train = samples.y_train
print y_train.shape
y_train.shape = (y_train.shape[0],1)
print y_train.shape
X_test = samples.X_test
print X_test.shape
y_test = samples.y_test
print y_test.shape
y_test.shape = (y_test.shape[0],1)
print y_test.shape
ols_y_hat = results.predict(X_test)
ols_y_hat.shape = (ols_y_hat.shape[0],1)
plt.title('Observed Values vs Fitted Predictions')
plt.xlabel('observed values')
plt.ylabel('predictions')
plt.scatter(y_test, ols_y_hat, alpha=0.7, color='green', edgecolors='black')
plt.show()
"""
Explanation: Get Training and Testing Values
End of explanation
"""
print best_results.params
"""
Explanation: <h3 id='2_4'>2.4 What are the coefficients (or weights) of the features in your linear regression model?</h3>
End of explanation
"""
ols_residuals = (ols_y_hat - y_test)
ols_residuals.shape
"""
Explanation: <h3 id='2_5'>2.5 What is your model’s $R^{2}$ (coefficient of determination) value?</h3>
For $n = 500$, the best $R^{2}$ value witnessed was $0.85$ (with the best $r$ value seen at $0.92$).
<h3 id='2_6_a'>2.6.a What does this $R^{2}$ value mean for the goodness of fit for your regression model?</h3>
This $R^{2}$ value means that $85\%$ of the proportion of total variation in the response variable is explained by the least-squares regression line (i.e., model) that was created above.
<h3 id='2_6_b'>2.6.b Do you think this linear model to predict ridership is appropriate for this dataset, given this $R^{2}$ value?</h3>
It's better than guessing in the dark, but too much shouldn't be staked on its predictions:
Predictions and their Residual Differences from Observed Values
End of explanation
"""
plt.boxplot(ols_residuals, vert=False)
plt.title('Boxplot of Residuals')
plt.xlabel('residuals')
plt.show()
plt.scatter(ols_y_hat,ols_residuals, alpha=0.7, color='purple', edgecolors='black')
plt.title('RESIDUAL PLOT')
plt.plot([np.min(ols_y_hat),np.max(ols_y_hat)], [0, 0], color='red')
plt.xlabel('predictions')
plt.ylabel('residuals')
plt.show()
plt.hist(y_test, color='purple', alpha=0.7, label='observations')
plt.hist(ols_y_hat, color='green', alpha=0.5, bins=6, label='ols predictions')
plt.title('OBSERVATIONS vs OLS PREDICTIONS')
plt.ylabel('frequency')
plt.legend()
plt.show()
plt.hist(ols_residuals, color='gray', alpha=0.7)
plt.title('OLS RESIDUALS')
plt.ylabel('frequency')
plt.show()
"""
Explanation: As can be seen from the above, somewhat arbitrarily-selected, values, the number of close predictions is a little over $50\%$ when close is defined as a prediction with a difference that is less than $1$ from the actual observed value. Given that the value of entries can take on such a large range of values $[0, 32814]$, differences less than $100$ and $1000$ are shown as well.
Residual Analysis
End of explanation
"""
best_results.summary()
"""
Explanation: Since the above predictions show a discernible, linear, and increasing pattern (and, thus, are not stochastic), it seems apparent that there is in fact not a linear relationship between the explanatory and response variables. Thus, a linear model is not appropriate for the current data set.
End of explanation
"""
ML4DS/ML4all | R_lab1_ML_Bay_Regresion/old/Pract_regression_student.ipynb | mit
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import scipy.io # To read matlab files
from scipy import spatial
import pylab
pylab.rcParams['figure.figsize'] = 8, 5
"""
Explanation: Parametric ML and Bayesian regression
Notebook version: 1.2 (Sep 28, 2018)
Authors: Miguel Lázaro Gredilla
Jerónimo Arenas García (jarenas@tsc.uc3m.es)
Jesús Cid Sueiro (jesus.cid@uc3m.es)
Changes: v.1.0 - First version. Python version
v.1.1 - Python 3 compatibility. ML section.
v.1.2 - Revised content. 2D visualization removed.
Pending changes:
End of explanation
"""
np.random.seed(3)
"""
Explanation: 1. Introduction
In this exercise the student will review several key concepts of Maximum Likelihood and Bayesian regression. To do so, we will assume the regression model
$$s = f({\bf x}) + \varepsilon$$
where $s$ is the output corresponding to input ${\bf x}$, $f({\bf x})$ is an unobservable latent function, and $\varepsilon$ is white zero-mean Gaussian noise, i.e.,
$$\varepsilon \sim {\cal N}(0,\sigma_\varepsilon^2).$$
In addition, we will assume that the latent function is linear in the parameters
$$f({\bf x}) = {\bf w}^\top {\bf z}$$
where ${\bf z} = T({\bf x})$ is a possibly non-linear transformation of the input. Throughout this notebook, we will explore different types of transformations.
Also, we will assume an <i>a priori</i> distribution for ${\bf w}$ given by
$${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2~{\bf I})$$
Practical considerations
Though sometimes unavoidable, it is recommended not to use explicit matrix inversion whenever possible. For instance, if an operation like ${\mathbf A}^{-1} {\mathbf b}$ must be performed, it is preferable to code it using python $\mbox{numpy.linalg.lstsq}$ function (see http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html), which provides the LS solution to the overdetermined system ${\mathbf A} {\mathbf w} = {\mathbf b}$.
Sometimes, the computation of $\log|{\mathbf A}|$ (where ${\mathbf A}$ is a positive definite matrix) can overflow available precision, producing incorrect results. A numerically more stable alternative, providing the same result is $2\sum_i \log([{\mathbf L}]{ii})$, where $\mathbf L$ is the Cholesky decomposition of $\mathbf A$ (i.e., ${\mathbf A} = {\mathbf L}^\top {\mathbf L}$), and $[{\mathbf L}]{ii}$ is the $i$th element of the diagonal of ${\mathbf L}$.
Non-degenerate covariance matrices, such as the ones in this exercise, are always positive definite. It may happen, as a consequence of chained rounding errors, that a matrix which was mathematically expected to be positive definite, turns out not to be so. This implies its Cholesky decomposition will not be available. A quick way to palliate this problem is by adding a small number (such as $10^{-6}$) to the diagonal of such matrix.
Reproducibility of computations
To guarantee the exact reproducibility of the experiments, it may be useful to start your code by initializing the seed of the random number generator, so that you can compare your results with the ones given in this notebook.
End of explanation
"""
# Parameter settings
# sigma_p = <FILL IN>
# sigma_eps = <FILL IN>
"""
Explanation: 2. Data generation with a linear model
During this section, we will assume affine transformation
$${\bf z} = T({\bf x}) = (1, {\bf x}^\top)^\top$$.
The <i>a priori</i> distribution of ${\bf w}$ is assumed to be
$${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2~{\bf I})$$
2.1. Synthetic data generation
First, we are going to generate synthetic data (so that we have the ground-truth model) and use them to make sure everything works correctly and our estimations are sensible.
[1] Set parameters $\sigma_p^2 = 2$ and $\sigma_{\varepsilon}^2 = 0.2$. To do so, define variables sigma_p and sigma_eps containing the respectiv standard deviations.
End of explanation
"""
# Data dimension:
dim_x = 2
# Generate a parameter vector taking a random sample from the prior distributions
# (the np.random module may be useful for this purpose)
# true_w = <FILL IN>
print('The true parameter vector is:')
print(true_w)
"""
Explanation: [2] Generate a weight vector $\mbox{true_w}$ with two elements from the <i>a priori</i> distribution of the weights. This vector determines the regression line that we want to find (i.e., the optimum unknown solution).
End of explanation
"""
# <SOL>
# </SOL>
"""
Explanation: [3] Generate an input matrix ${\bf X}$ (in this case, a single column) containing 20 samples equally spaced between 0 and 2 (the numpy method linspace can be useful for this)
End of explanation
"""
# Expand input matrix with an all-ones column
col_1 = np.ones((n_points, 1))
# Z = <FILL IN>
# Generate values of the target variable
# s = <FILL IN>
"""
Explanation: [4] Finally, generate the output vector ${\mbox s}$ as the product $\mbox{X} \ast \mbox{true_w}$ plus Gaussian noise of pdf ${\cal N}(0,\sigma_\varepsilon^2)$ at each element.
End of explanation
"""
# <SOL>
# </SOL>
"""
Explanation: 2.2. Data visualization
Plot the generated data. You will notice a linear behavior, but the presence of noise makes it hard to estimate precisely the original straight line that generated them (which is stored in $\mbox{true_w}$).
End of explanation
"""
# <SOL>
# </SOL>
# Print predictions
print(p)
"""
Explanation: 3. Maximum Likelihood (ML) regression
3.1. Likelihood function
[1] Define a function predict(we, Z) that computes the linear predictions for all inputs in data matrix Z (a 2-D numpy array), for a given parameter vector we (a 1-D numpy array). The output should be a 1-D array. Test your function with the given dataset and we = [0.4, 0.7]
End of explanation
"""
# <SOL>
# </SOL>
print(" The SSE is: {0}".format(SSE))
"""
Explanation: [2] Define a function sse(we, Z, s) that computes the sum of squared errors (SSE) for the linear prediction with parameters we (1D numpy array), inputs Z (2D numpy array) and targets s (1D numpy array). Using this function, compute the SSE of the true parameter vector in true_w.
End of explanation
"""
# <SOL>
# </SOL>
print("The likelihood of the true parameter vector is {0}".format(L_w_true))
"""
Explanation: [3] Define a function likelihood(we, Z, s, sigma_eps) that computes the likelihood of parameter vector we for a given dataset in matrix Z and vector s, assuming Gaussian noise with variance $\sigma_\epsilon^2$. Note that this function can use the sse function defined above. Using this function, compute the likelihood of the true parameter vector in true_w.
End of explanation
"""
# <SOL>
# </SOL>
print("The log-likelihood of the true parameter vector is {0}".format(LL_w_true))
"""
Explanation: [4] Define a function LL(we, Z, s, sigma_eps) that computes the log-likelihood of parameter vector we for a given dataset in matrix Z and vector s. Note that this function can use the likelihood function defined above. However, for higher numerical precision, implementing a direct expression for the log-likelihood is recommended.
Using this function, compute the log-likelihood of the true parameter vector in true_w.
End of explanation
"""
# <SOL>
# </SOL>
print(w_ML)
"""
Explanation: 3.2. ML estimate
[1] Compute the ML estimate of $w_e$ given the data.
End of explanation
"""
# <SOL>
# </SOL>
print('Maximum likelihood: {0}'.format(L_w_ML))
print('Maximum log-likelihood: {0}'.format(LL_w_ML))
"""
Explanation: [2] Compute the maximum likelihood, and the maximum log-likelihood.
End of explanation
"""
# First construct a grid of (theta0, theta1) parameter pairs and their
# corresponding cost function values.
N = 200 # Number of points along each dimension.
w0_grid = np.linspace(-2.5*sigma_p, 2.5*sigma_p, N)
w1_grid = np.linspace(-2.5*sigma_p, 2.5*sigma_p, N)
Lw = np.zeros((N,N))
# Fill Lw with the likelihood values
for i, w0i in enumerate(w0_grid):
for j, w1j in enumerate(w1_grid):
we = np.array((w0i, w1j))
Lw[i, j] = LL(we, Z, s, sigma_eps)
WW0, WW1 = np.meshgrid(w0_grid, w1_grid, indexing='ij')
contours = plt.contour(WW0, WW1, Lw, 20)
plt.figure
plt.clabel(contours)
plt.scatter([true_w[0]]*2, [true_w[1]]*2, s=[50,10], color=['k','w'])
plt.scatter([w_ML[0]]*2, [w_ML[1]]*2, s=[50,10], color=['r','w'])
plt.xlabel('$w_0$')
plt.ylabel('$w_1$')
plt.show()
"""
Explanation: Just as an illustration, the code below generates a set of points in a two dimensional grid going from $(-\sigma_p, -\sigma_p)$ to $(\sigma_p, \sigma_p)$, computes the log-likelihood for all these points and visualize them using a 2-dimensional plot. You can see the difference between the true value of the parameter ${\bf w}$ (black) and the ML estimate (red).
End of explanation
"""
# Parameter settings
x_min = 0
x_max = 2
n_points = 2**16
# <SOL>
# </SOL>
"""
Explanation: 3.3. Convergence of the ML estimate for the true model
Note that the likelihood of the true parameter vector is, in general, smaller than that of the ML estimate. However, as the sample size increases, both should converge to the same value.
[1] Generate a longer dataset, with $K_\text{max}=2^{16}$ samples, uniformly spaced between 0 and 2. Store it in the 2D-array X2 and the 1D-array s2
End of explanation
"""
# <SOL>
# </SOL>
"""
Explanation: [2] Compute the ML estimate based on the first $2^k$ samples, for $k=2,3,\ldots, 16$. For each value of $k$ compute the squared euclidean distance between the true parameter vector and the ML estimate. Represent it graphically (using a logarithmic scale in the y-axis).
End of explanation
"""
# <SOL>
# </SOL>
"""
Explanation: 4. ML estimation with real data. The stocks dataset.
Once our code has been tested on synthetic data, we will use it with real data.
4.1. Dataset
[1] Load data corresponding to the evolution of the stocks of 10 airline companies. This data set is an adaptation of the Stock dataset from http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html, which in turn was taken from the StatLib Repository, http://lib.stat.cmu.edu/
End of explanation
"""
# <SOL>
# </SOL>
"""
Explanation: [2] Normalize the data so all training sample components have zero mean and unit standard deviation. Store the normalized training and test samples in 2D numpy arrays Xtrain and Xtest, respectively.
End of explanation
"""
# <SOL>
# </SOL>
"""
Explanation: 4.2. Polynomial ML regression with a single variable
In this first part, we will work with the first component of the input only.
[1] Take the first column of Xtrain and Xtest into arrays X0train and X0test, respectively.
[2] Visualize, in a single scatter plot, the target variable (on the vertical axis) versus the input variable.
[3] Since the data have been taken from a real scenario, we do not have any true mathematical model of the process that generated the data. Thus, we will explore different models, trying to select the one that best fits the training data.
Assume a polynomial model given by
$$
{\bf z} = T({\bf x}) = (1, x_0, x_0^2, \ldots, x_0^{g-1})^\top.
$$
Build a method
` Ztrain, Ztest = T_poly(Xtrain, Xtest, g)`
that, for a given value of $g$, computes normalized data matrices Ztrain and Ztest that result from applying the polynomial transformation to the inputs in X0train and X0test.
Note that, although X0train and X0test were normalized, you will need to re-normalize the transformed variables.
End of explanation
"""
# <SOL>
# </SOL>
mean_w, Cov_w, iCov_w = posterior_stats(Z, s, sigma_eps, sigma_p)
print('true_w = {0}'.format(true_w))
print('mean_w = {0}'.format(mean_w))
print('Cov_w = {0}'.format(Cov_w))
print('iCov_w = {0}'.format(iCov_w))
"""
Explanation: [4] Fit a polynomial model with degree $g$ for $g$ ranging from 0 to 10. Store the weights of all models in a list of weight vectors, named models, such that models[g] returns the parameters estimated for the polynomial model with degree $g$.
We will use these models in the following sections.
[5] Plot the polynomial models with degrees 1, 3 and 10, superimposed over a scatter plot of the training data (in blue) and the test data (in red).
[6] Show, in the same plot:
The log-likelihood function corresponding to each model, as a function of $g$, computed over the training set.
The log-likelihood function corresponding to each model, as a function of $g$, computed over the test set.
[7] [OPTIONAL] You may have seen that the likelihood function grows with the degree of the polynomial. However, large values of $g$ produce strong overfitting of the data. For this reason, $g$ cannot be selected with the same data used to fit the model.
Parameters like $g$ are usually called hyperparameters and need to be selected by cross validation.
Another hyperparameter is $\sigma_\varepsilon^2$. Plot the log-likelihood function corresponding to the polynomial model with degree 3 for different values of $\sigma_\varepsilon^2$, for the training set and the test set. What would be the optimal value of this hyperparameter according to the training set?
In any case, note that the model coefficients do not depend on $\sigma_\varepsilon^2$. Therefore, we do not need to estimate its value for ML regression.
[8] Select the optimal value of $g$ by cross-validation. The cross-validation methods provided by sklearn will simplify this task.
[9] For the selected model:
Plot the regression function over the scatter plot of the data.
Compute the log-likelihood and the SSE over the test set.
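A minimal sketch of how the degree selection in exercise [8] could be approached. Synthetic data stands in for the stock dataset, and all names below are illustrative, not the notebook's own solution:

```python
# Select the polynomial degree g by 5-fold cross-validation,
# using synthetic data in place of the stock dataset.
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 60)
s = np.cos(3 * X) + 0.1 * rng.standard_normal(60)   # smooth non-polynomial target

def fit_poly(X, s, g):
    Z = np.vander(X, g + 1, increasing=True)        # columns 1, x, ..., x^g
    w, *_ = np.linalg.lstsq(Z, s, rcond=None)
    return w

def sse(X, s, w):
    Z = np.vander(X, len(w), increasing=True)
    return np.sum((s - Z @ w) ** 2)

cv_sse = []
for g in range(11):
    fold_sse = 0.0
    for tr, va in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        # Fit on the training folds, evaluate the SSE on the validation fold
        fold_sse += sse(X[va], s[va], fit_poly(X[tr], s[tr], g))
    cv_sse.append(fold_sse)

g_opt = int(np.argmin(cv_sse))
print('selected degree:', g_opt)
```

The same loop structure works with sklearn's `cross_val_score` if the polynomial fit is wrapped in an estimator.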
5. Bayesian regression. The stock dataset.
In this section we will keep using the first component of the data from the stock dataset, assuming the same kind of polynomial model. We will explore the potential advantages of using a Bayesian model. To do so, we will assume that the <i>a priori</i> distribution of ${\bf w}$ is
$${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2~{\bf I})$$
5.1. Posterior pdf of the weight vector
In this section we will visualize the prior and the posterior distribution functions. First, we will restore the dataset defined at the beginning of this notebook:
[1] Define a function posterior_stats(Z, s, sigma_eps, sigma_p) that computes the parameters of the posterior coefficient distribution given the dataset in matrix Z and vector s, for given values of the hyperparameters.
This function should return the posterior mean, the covariance matrix and the precision matrix (the inverse of the covariance matrix). Test the function on the given dataset, for $g=3$.
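One possible implementation, following the standard Gaussian posterior for Bayesian linear regression with prior ${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2 {\bf I})$. The synthetic data below only illustrates the call; the notebook's own Z and s should be used instead:

```python
# Posterior statistics of w given the data (Z, s):
#   iCov_w = Z'Z / sigma_eps^2 + I / sigma_p^2   (posterior precision)
#   mean_w = Cov_w Z's / sigma_eps^2             (posterior mean)
import numpy as np

def posterior_stats(Z, s, sigma_eps, sigma_p):
    dim = Z.shape[1]
    iCov_w = Z.T @ Z / sigma_eps**2 + np.eye(dim) / sigma_p**2
    Cov_w = np.linalg.inv(iCov_w)
    mean_w = Cov_w @ Z.T @ s / sigma_eps**2
    return mean_w, Cov_w, iCov_w

# Quick check on synthetic data, g = 3
rng = np.random.default_rng(0)
Z = np.vander(rng.uniform(-1, 1, 30), 4, increasing=True)
true_w = np.array([0.5, -1.0, 2.0, 0.3])
s = Z @ true_w + 0.1 * rng.standard_normal(30)
mean_w, Cov_w, iCov_w = posterior_stats(Z, s, 0.1, 1.0)
```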
End of explanation
"""
# <SOL>
# </SOL>
print('p(true_w | s) = {0}'.format(gauss_pdf(true_w, mean_w, iCov_w)))
print('p(w_ML | s) = {0}'.format(gauss_pdf(w_ML, mean_w, iCov_w)))
print('p(w_MSE | s) = {0}'.format(gauss_pdf(mean_w, mean_w, iCov_w)))
"""
Explanation: [2] Define a function gauss_pdf(we, mean_w, iCov_w) that computes the Gaussian pdf with mean mean_w and precision matrix iCov_w. Use this function to compute and compare the posterior pdf value of the true coefficients, the ML estimate and the MSE estimate, given the dataset.
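A possible sketch of gauss_pdf, working directly with the precision matrix: since $|{\bf C}|^{-1} = |{\bf C}^{-1}|$, the normalization constant needs no matrix inversion.

```python
# Multivariate Gaussian pdf parameterized by the precision matrix iCov_w
import numpy as np

def gauss_pdf(we, mean_w, iCov_w):
    d = len(mean_w)
    diff = we - mean_w
    # |iCov|^{1/2} / (2 pi)^{d/2} * exp(-0.5 diff' iCov diff)
    norm_const = np.sqrt(np.linalg.det(iCov_w) / (2 * np.pi) ** d)
    return norm_const * np.exp(-0.5 * diff @ iCov_w @ diff)

# 1-D sanity check against the scalar Gaussian density
x, mu, var = 0.3, 0.0, 2.0
p = gauss_pdf(np.array([x]), np.array([mu]), np.array([[1 / var]]))
p_ref = np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
```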
End of explanation
"""
# w_LS, residuals, rank, s = <FILL IN>
# sigma_p = <FILL IN>
# sigma_eps = <FILL IN>
print(sigma_eps)
"""
Explanation: [3] Define a function log_gauss_pdf(we, mean_w, iCov_w) that computes the log of the Gaussian pdf with mean mean_w and precision matrix iCov_w. Use this function to compute and compare the log of the posterior pdf value of the true coefficients, the ML estimate and the MSE estimate, given the dataset.
5.2. Hyperparameter selection
Since the values $\sigma_p$ and $\sigma_\varepsilon$ are no longer known, a first rough estimation is needed (we will soon see how to estimate these values in a principled way). To see their influence, assume $g=3$ and plot the regression function for different values of $\sigma_p$.
To this end, we will adjust them using the LS solution to the regression problem:
$\sigma_p^2$ will be taken as the average of the square values of ${\hat {\bf w}}_{LS}$
$\sigma_\varepsilon^2$ will be taken as two times the average of the square of the residuals when using ${\hat {\bf w}}_{LS}$
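The two rough estimates above can be sketched as follows; synthetic data stands in for the notebook's stock dataset:

```python
# Rough hyperparameter estimates from the LS solution:
#   sigma_p^2  = mean of the squared LS coefficients
#   sigma_eps^2 = 2 * mean of the squared LS residuals
import numpy as np

rng = np.random.default_rng(0)
Z = np.vander(rng.uniform(-1, 1, 40), 4, increasing=True)   # g = 3
s = Z @ np.array([0.5, -1.0, 2.0, 0.3]) + 0.2 * rng.standard_normal(40)

w_LS, residuals, rank, sv = np.linalg.lstsq(Z, s, rcond=None)
sigma_p = np.sqrt(np.mean(w_LS ** 2))
sigma_eps = np.sqrt(2 * np.mean((s - Z @ w_LS) ** 2))
print(sigma_p, sigma_eps)
```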
End of explanation
"""
# Definition of the interval for representation purposes
x2_min = -1
x2_max = 3
n_points = 100 # A dense grid (for a straight line, two points would suffice)
# Build the input data matrix:
# Input values for representation of the regression curves
X2 = np.linspace(x2_min, x2_max, n_points)
col_1 = np.ones((n_points,))
X2e = np.vstack((col_1, X2)).T
"""
Explanation: 5.3. Sampling regression curves from the posterior
In this section we will plot the functions corresponding to different samples drawn from the posterior distribution of the weight vector.
To this end, we will first generate an input dataset of equally spaced samples, and we will compute the functions at these points.
End of explanation
"""
# Drawing weights from the posterior
# First, compute the cholesky decomposition of the covariance matrix
# L = <FILL IN>
for l in range(50):
# Generate a random sample from the posterior distribution
# w_l = <FILL IN>
# Compute predictions for the inputs in the data matrix
# p_l = <FILL IN>
# Plot prediction function
# plt.plot(<FILL IN>, 'c:');
# Compute predictions for the inputs in the data matrix and using the true model
# p_truew = <FILL IN>
# Plot the true model
plt.plot(X2, p_truew, 'b', label='True model', linewidth=2);
# Plot the training points
plt.plot(X,s,'r.',markersize=12);
plt.xlim((x2_min,x2_max));
plt.legend(loc='best')
plt.xlabel('$x$',fontsize=14);
plt.ylabel('$s$',fontsize=14);
"""
Explanation: Generate random vectors ${\bf w}_l$ with $l = 1,\dots, 50$, from the posterior density of the weights, $p({\bf w}\mid{\bf s})$, and use them to generate 50 polynomial regression functions, $f({\bf x}^\ast) = {{\bf z}^\ast}^\top {\bf w}_l$, with ${\bf x}^\ast$ between $-1.2$ and $1.2$, with step $0.1$.
Plot the curve corresponding to the model with the posterior mean parameters, along with the $50$ generated curves and the original samples, all in the same plot. As you can check, the Bayesian model does not provide a single answer, but a density over models, from which we have extracted 50 options.
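The sampling step relies on the Cholesky factorization: if ${\bf C} = {\bf L}{\bf L}^\top$, then ${\bf m} + {\bf L}{\bf e}$ with ${\bf e} \sim {\cal N}({\bf 0}, {\bf I})$ follows ${\cal N}({\bf m}, {\bf C})$. A small self-contained sketch (mean and covariance are illustrative placeholders for the posterior computed in the notebook):

```python
# Draw samples from N(mean_w, Cov_w) via the Cholesky factor of Cov_w
import numpy as np

rng = np.random.default_rng(0)
mean_w = np.array([1.0, -0.5])
Cov_w = np.array([[0.5, 0.2], [0.2, 0.4]])
L = np.linalg.cholesky(Cov_w)

# Each sample: posterior mean plus a correlated Gaussian perturbation
samples = np.array([mean_w + L @ rng.standard_normal(2) for _ in range(5000)])
```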
End of explanation
"""
# Note that you can re-use code from sect. 4.2 to solve this exercise
# Plot sample functions from the posterior, and the training points
# <SOL>
# </SOL>
# Plot the posterior mean.
# mean_ast = <FILL IN>
plt.plot(X2, mean_ast, 'm', label='Predictive mean', linewidth=2);
# Plot the posterior mean \pm 2 std
# std_ast = <FILL IN>
# plt.plot(<FILL IN>, 'm--', label='Predictive mean $\pm$ 2std', linewidth=2);
# plt.plot(<FILL IN>, 'm--', linewidth=3);
plt.legend(loc='best')
plt.xlabel('$x$',fontsize=14);
plt.ylabel('$s$',fontsize=14);
"""
Explanation: 5.4. Plotting the confidence intervals
On top of the previous figure (copy here your code from the previous section), plot functions
$${\mathbb E}\left\{f({\bf x}^\ast)\mid{\bf s}\right\}$$
and
$${\mathbb E}\left\{f({\bf x}^\ast)\mid{\bf s}\right\} \pm 2 \sqrt{{\mathbb V}\left\{f({\bf x}^\ast)\mid{\bf s}\right\}}$$
(i.e., the posterior mean of $f({\bf x}^\ast)$, as well as two standard deviations above and below).
It is possible to show analytically that this region comprises $95.45\%$ of the posterior probability $p(f({\bf x}^\ast)\mid {\bf s})$ at each ${\bf x}^\ast$.
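Since $f({\bf x}^\ast) = {{\bf z}^\ast}^\top {\bf w}$, the posterior mean is ${{\bf z}^\ast}^\top {\bf m}_{\bf w}$ and the posterior variance is ${{\bf z}^\ast}^\top {\bf C}_{\bf w} {\bf z}^\ast$. A sketch of how the plotted quantities can be computed (the posterior mean and covariance below are illustrative placeholders for the output of posterior_stats):

```python
# Posterior mean and +/- 2 std band of f(x*) over a plotting grid
import numpy as np

mean_w = np.array([0.2, 1.0])                     # illustrative posterior mean
Cov_w = np.array([[0.1, 0.02], [0.02, 0.05]])     # illustrative posterior covariance
X2 = np.linspace(-1, 3, 100)
Z2 = np.vstack((np.ones_like(X2), X2)).T          # design matrix of the grid

mean_ast = Z2 @ mean_w                            # E{f(x*) | s}
var_f = np.sum((Z2 @ Cov_w) * Z2, axis=1)         # diag(Z2 Cov_w Z2'), row by row
std_ast = np.sqrt(var_f)
upper, lower = mean_ast + 2 * std_ast, mean_ast - 2 * std_ast
```

Computing the diagonal with a row-wise sum avoids building the full $100 \times 100$ matrix ${\bf Z}_2 {\bf C}_{\bf w} {\bf Z}_2^\top$.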
End of explanation
"""
# Plot sample functions confidence intervals and sampling points
# Note that you can simply copy and paste most of the code used in the cell above.
# <SOL>
# </SOL>
# Compute the standard deviations for s and plot the confidence intervals
# <SOL>
# </SOL>
plt.legend(loc='best')
plt.xlabel('$x$',fontsize=14);
plt.ylabel('$s$',fontsize=14);
"""
Explanation: Plot now ${\mathbb E}\left\{s({\bf x}^\ast)\mid{\bf s}\right\} \pm 2 \sqrt{{\mathbb V}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}}$ (note that the posterior means of $f({\bf x}^\ast)$ and $s({\bf x}^\ast)$ are the same, so there is no need to plot it again). Notice that $95.45\%$ of observed data lie now within the newly designated region. These new limits establish a confidence range for our predictions. See how the uncertainty grows as we move away from the interpolation region to the extrapolation areas.
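The only change with respect to the previous band is that the observation $s({\bf x}^\ast) = f({\bf x}^\ast) + \varepsilon$ adds the noise variance, so ${\mathbb V}\left\{s({\bf x}^\ast)\mid{\bf s}\right\} = {\mathbb V}\left\{f({\bf x}^\ast)\mid{\bf s}\right\} + \sigma_\varepsilon^2$. A one-line sketch (values are illustrative):

```python
# Predictive variance of s(x*): posterior variance of f(x*) plus noise variance
import numpy as np

sigma_eps = 0.3
var_f = np.array([0.01, 0.05, 0.40])   # illustrative posterior variances of f(x*)
var_s = var_f + sigma_eps**2
std_s = np.sqrt(var_s)
```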
End of explanation
"""
# <SOL>
# </SOL>
"""
Explanation: 5.5. Model assessment
[OPTIONAL. You can skip this section]
In order to verify the performance of the resulting model, compute the posterior mean and variance of each of the test outputs from the posterior over ${\bf w}$. I.e., compute ${\mathbb E}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}$ and $\sqrt{{\mathbb V}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}}$ for each test sample ${\bf x}^\ast$ contained in each row of Xtest. Be sure not to use the outputs Ytest at any point during this process.
Store the predictive mean and variance of all test samples in two column vectors called m_y and v_y, respectively.
End of explanation
"""
# <SOL>
# </SOL>
"""
Explanation: Compute now the mean square error (MSE) and the negative log-predictive density (NLPD) with the following code:
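The standard definitions of both metrics can be sketched as follows, assuming they match the ones used in the course. Ytest, m_y and v_y below are illustrative placeholders for the test outputs and the predictive means and variances computed above:

```python
# MSE: mean squared prediction error.
# NLPD: mean negative log of the Gaussian predictive density at each test point.
import numpy as np

Ytest = np.array([1.0, -0.5, 2.0])
m_y = np.array([0.9, -0.4, 2.3])
v_y = np.array([0.04, 0.09, 0.25])

MSE = np.mean((Ytest - m_y) ** 2)
NLPD = np.mean(0.5 * np.log(2 * np.pi * v_y) + (Ytest - m_y) ** 2 / (2 * v_y))
```

Unlike the MSE, the NLPD also penalizes over- and under-confident predictive variances.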
End of explanation
"""
print('MSE = {0}'.format(MSE))
print('NLPD = {0}'.format(NLPD))
"""
Explanation: Results should be:
End of explanation
"""