googlegenomics/datalab-examples | datalab/genomics/Getting started with the Genomics API.ipynb | apache-2.0
!pip install --upgrade google-api-python-client
"""
Explanation: <!-- Copyright 2015 Google Inc. All rights reserved. -->
<!-- Licensed under the Apache License, Version 2.0 (the "License"); -->
<!-- you may not use this file except in compliance with the License. -->
<!-- You may obtain a copy of the License at -->
<!-- http://www.apache.org/licenses/LICENSE-2.0 -->
<!-- Unless required by applicable law or agreed to in writing, software -->
<!-- distributed under the License is distributed on an "AS IS" BASIS, -->
<!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -->
<!-- See the License for the specific language governing permissions and -->
<!-- limitations under the License. -->
Getting started with the Google Genomics API
In this notebook we'll cover how to make authenticated requests to the Google Genomics API.
NOTE:
If you're new to notebooks, or want to check out additional samples, check out the full list of general notebooks.
For additional Genomics samples, check out the full list of Genomics notebooks.
Setup
Install Python libraries
We'll be using the Google Python API client for interacting with Genomics API. We can install this library, or any other 3rd-party Python libraries from the Python Package Index (PyPI) using the pip package manager.
There are 50+ Google APIs that you can work against with the Google Python API Client, but we'll focus on the Genomics API in this notebook.
End of explanation
"""
from httplib2 import Http
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
http = Http()
credentials.authorize(http)
"""
Explanation: Create an Authenticated Client
Next we construct a Python object that we can use to make requests.
The following snippet shows how we can authenticate using the service account on the Datalab host. For more detail about authentication from Python, see Using OAuth 2.0 for Server to Server Applications.
End of explanation
"""
from apiclient.discovery import build
genomics = build('genomics', 'v1', http=http)
"""
Explanation: And then we create a client for the Genomics API.
End of explanation
"""
request = genomics.datasets().get(datasetId='10473108253681171589')
"""
Explanation: Send a request to the Genomics API
Now that we have a Python client for the Genomics API, we can access a variety of different resources. For details about each available resource, see the python client API docs here.
Using our genomics client, we'll demonstrate fetching a Dataset resource by ID (the 1000 Genomes dataset in this case).
First, we need to construct a request object.
End of explanation
"""
response = request.execute()
"""
Explanation: Next, we'll send this request to the Genomics API by calling the request.execute() method.
End of explanation
"""
for entry in response.items():
    print('%s => %s' % entry)
"""
Explanation: You will need to enable the Genomics API for your project if you have not done so previously. Click on this link to enable the API in your project.
The response object returned is simply a Python dictionary. Let's take a look at the properties returned in the response.
End of explanation
"""
dataset_id = '10473108253681171589' # This is the 1000 Genomes dataset ID
sample = 'NA12872'
reference_name = '22'
reference_position = 51003835
"""
Explanation: Success! We can see the name of the specified Dataset and a few other pieces of metadata.
Accessing other Genomics API resources will follow this same set of steps. The full list of available resources within the API is here. Each resource has details about the different verbs that can be applied (e.g., Dataset methods).
Access Data
In this portion of the notebook, we implement the same example that is available as a python script. First let's define a few constants to use within the examples that follow.
End of explanation
"""
request = genomics.readgroupsets().search(
body={'datasetIds': [dataset_id], 'name': sample},
fields='readGroupSets(id)')
read_group_sets = request.execute().get('readGroupSets', [])
if len(read_group_sets) != 1:
raise Exception('Searching for %s didn\'t return '
'the right number of read group sets' % sample)
read_group_set_id = read_group_sets[0]['id']
"""
Explanation: Get read bases for a sample at a specific position
First find the read group set ID for the sample.
End of explanation
"""
request = genomics.reads().search(
body={'readGroupSetIds': [read_group_set_id],
'referenceName': reference_name,
'start': reference_position,
'end': reference_position + 1,
'pageSize': 1024},
fields='alignments(alignment,alignedSequence)')
reads = request.execute().get('alignments', [])
"""
Explanation: Once we have the read group set ID, lookup the reads at the position in which we are interested.
End of explanation
"""
# Note: This is simplistic - the cigar should be considered for real code
bases = [read['alignedSequence'][
reference_position - int(read['alignment']['position']['position'])]
for read in reads]
print('%s bases on %s at %d are' % (sample, reference_name, reference_position))
from collections import Counter
for base, count in Counter(bases).items():
    print('%s: %s' % (base, count))
"""
Explanation: And we print out the results.
End of explanation
"""
request = genomics.callsets().search(
body={'variantSetIds': [dataset_id], 'name': sample},
fields='callSets(id)')
resp = request.execute()
call_sets = resp.get('callSets', [])
if len(call_sets) != 1:
raise Exception('Searching for %s didn\'t return '
'the right number of call sets' % sample)
call_set_id = call_sets[0]['id']
"""
Explanation: Get variants for a sample at a specific position
First find the call set ID for the sample.
End of explanation
"""
request = genomics.variants().search(
body={'callSetIds': [call_set_id],
'referenceName': reference_name,
'start': reference_position,
'end': reference_position + 1},
fields='variants(names,referenceBases,alternateBases,calls(genotype))')
variant = request.execute().get('variants', [])[0]
"""
Explanation: Once we have the call set ID, lookup the variants that overlap the position in which we are interested.
End of explanation
"""
variant_name = variant['names'][0]
genotype = [variant['referenceBases'] if g == 0
else variant['alternateBases'][g - 1]
for g in variant['calls'][0]['genotype']]
print('the called genotype is %s for %s' % (','.join(genotype), variant_name))
"""
Explanation: And we print out the results.
End of explanation
"""
robertoalotufo/ia898 | deliver/Aula_10_Wavelets.ipynb | mit
import numpy as np
import sys,os
import matplotlib.image as mpimg
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
"""
Explanation: Lesson 10 Discrete Wavelet Transform
Exercises
isccsym
It is not easy to design a set of tests that guarantees your program is correct.
In the case where the result is False, i.e., not symmetric, a single non-symmetric pixel
is enough for the result to be False. Because of that, a test set consisting of a huge
image with many non-symmetric pixels is not a good test. For example, in this case a test
was missing where everything is symmetric except the origin (F[0,0]).
The solution presented by Marcelo, which compares the image with its reflection under a periodic translation of one sample, is nicely conceptual. Deângelo's solution seems to be the fastest: no copy is made and it compares only half of the pixels.
There is still a small problem to be found in the approach of comparing only half
of the pixels.
minify
The image reduction should be done with an initial low-pass filtering of cutoff period 2.r, where r is the
reduction factor of the image. Next, the resampling (decimation) is performed.
To do the reduction in the frequency domain, it would suffice to crop the spectrum of the original image
and take the inverse Fourier transform.
resize
It was verified that the best enlarging/reducing function is scipy.misc.imresize, both in quality
and in speed.
End of explanation
"""
def imresize(f, size):
'''
Resize an image
Parameters
----------
f: input image
size: integer, float or tuple
- integer: percentage of current size
- float: fraction of current size
- tuple: new dimensions
Returns
-------
output image resized
'''
return f
"""
Explanation: Exercises for the next class
Write a function that enlarges/reduces an image using interpolation in the frequency domain,
as discussed in class. Compare the results with scipy.misc.imresize, both in the quality of the
spectrum and in execution time.
Students whose RA (student ID) is odd should implement the enlargements and those whose RA is even
should implement the reductions.
Function name: imresize
End of explanation
"""
def pconvfft(f,h):
'''
Periodical convolution.
This is an efficient implementation of the periodical convolution.
This implementation should be commutative, i.e., pconvfft(f,h)==pconvfft(h,f).
This implementation should be fast. If the number of pixels used in the
convolution is larger than 15, it uses the convolution theorem to implement
the convolution.
Parameters:
-----------
f: input image (can be complex, up to 2 dimensions)
h: input kernel (can be complex, up to 2 dimensions)
Outputs:
image of the result of periodical convolution
'''
return f
"""
Explanation: Modify the pconv function to execute in the frequency domain whenever the number of
nonzero elements of the smaller image is greater than a certain value, say 15.
Function name: pconvfft
End of explanation
"""
/home/lotufo/ia898/dev/wavelets.ipynb
"""
Explanation: Discrete Wavelet Transform
We will use a notebook that was produced as a project deliverable in previous
years.
DWT
End of explanation
"""
Chipe1/aima-python | obsolete_search4e.ipynb | mit
romania = {
'A': ['Z', 'T', 'S'],
'B': ['F', 'P', 'G', 'U'],
'C': ['D', 'R', 'P'],
'D': ['M', 'C'],
'E': ['H'],
'F': ['S', 'B'],
'G': ['B'],
'H': ['U', 'E'],
'I': ['N', 'V'],
'L': ['T', 'M'],
'M': ['L', 'D'],
'N': ['I'],
'O': ['Z', 'S'],
'P': ['R', 'C', 'B'],
'R': ['S', 'C', 'P'],
'S': ['A', 'O', 'F', 'R'],
'T': ['A', 'L'],
'U': ['B', 'V', 'H'],
'V': ['U', 'I'],
'Z': ['O', 'A']}
"""
Explanation: Note: This is not yet ready, but shows the direction I'm leaning in for Fourth Edition Search.
State-Space Search
This notebook describes several state-space search algorithms, and how they can be used to solve a variety of problems. We start with a simple algorithm and a simple domain: finding a route from city to city. Later we will explore other algorithms and domains.
The Route-Finding Domain
Like all state-space search problems, in a route-finding problem you will be given:
- A start state (for example, 'A' for the city Arad).
- A goal state (for example, 'B' for the city Bucharest).
- Actions that can change state (for example, driving from 'A' to 'S').
You will be asked to find:
- A path from the start state, through intermediate states, to the goal state.
We'll use this map:
<img src="http://robotics.cs.tamu.edu/dshell/cs625/images/map.jpg" height="366" width="603">
A state-space search problem can be represented by a graph, where the vertices of the graph are the states of the problem (in this case, cities) and the edges of the graph are the actions (in this case, driving along a road).
We'll represent a city by its single initial letter.
We'll represent the graph of connections as a dict that maps each city to a list of the neighboring cities (connected by a road). For now we don't explicitly represent the actions, nor the distances
between cities.
End of explanation
"""
romania['A']
"""
Explanation: Suppose we want to get from A to B. Where can we go from the start state, A?
End of explanation
"""
from collections import deque # Doubly-ended queue: pop from left, append to right.
def breadth_first(start, goal, neighbors):
"Find a shortest sequence of states from start to the goal."
frontier = deque([start]) # A queue of states
previous = {start: None} # start has no previous state; other states will
while frontier:
s = frontier.popleft()
if s == goal:
return path(previous, s)
for s2 in neighbors[s]:
if s2 not in previous:
frontier.append(s2)
previous[s2] = s
def path(previous, s):
"Return a list of states that lead to state s, according to the previous dict."
return [] if (s is None) else path(previous, previous[s]) + [s]
"""
Explanation: We see that from A we can get to any of the three cities ['Z', 'T', 'S']. Which should we choose? We don't know. That's the whole point of search: we don't know which immediate action is best, so we'll have to explore, until we find a path that leads to the goal.
How do we explore? We'll start with a simple algorithm that will get us from A to B. We'll keep a frontier—a collection of not-yet-explored states—and expand the frontier outward until it reaches the goal. To be more precise:
Initially, the only state in the frontier is the start state, 'A'.
Until we reach the goal, or run out of states in the frontier to explore, do the following:
Remove the first state from the frontier. Call it s.
If s is the goal, we're done. Return the path to s.
Otherwise, consider all the neighboring states of s. For each one:
If we have not previously explored the state, add it to the end of the frontier.
Also keep track of the previous state that led to this new neighboring state; we'll need this to reconstruct the path to the goal, and to keep us from re-visiting previously explored states.
A Simple Search Algorithm: breadth_first
The function breadth_first implements this strategy:
End of explanation
"""
breadth_first('A', 'B', romania)
breadth_first('L', 'N', romania)
breadth_first('N', 'L', romania)
breadth_first('E', 'E', romania)
"""
Explanation: A couple of things to note:
We always add new states to the end of the frontier queue. That means that all the states that are adjacent to the start state will come first in the queue, then all the states that are two steps away, then three steps, etc.
That's what we mean by breadth-first search.
We recover the path to an end state by following the trail of previous[end] pointers, all the way back to start.
The dict previous is a map of {state: previous_state}.
When we finally get an s that is the goal state, we know we have found a shortest path, because any other state in the queue must correspond to a path that is as long or longer.
Note that previous contains all the states that are currently in frontier as well as all the states that were in frontier in the past.
If no path to the goal is found, then breadth_first returns None. If a path is found, it returns the sequence of states on the path.
Some examples:
End of explanation
"""
from search import *
sgb_words = open_data("EN-text/sgb-words.txt")
"""
Explanation: Now let's try a different kind of problem that can be solved with the same search function.
Word Ladders Problem
A word ladder problem is this: given a start word and a goal word, find the shortest way to transform the start word into the goal word by changing one letter at a time, such that each change results in a word. For example starting with green we can reach grass in 7 steps:
green → greed → treed → trees → tress → cress → crass → grass
We will need a dictionary of words. We'll use 5-letter words from the Stanford GraphBase project for this purpose. Let's get that file from aimadata.
End of explanation
"""
WORDS = set(sgb_words.read().split())
len(WORDS)
"""
Explanation: We can assign WORDS to be the set of all the words in this file:
End of explanation
"""
def neighboring_words(word):
"All words that are one letter away from this word."
neighbors = {word[:i] + c + word[i+1:]
for i in range(len(word))
for c in 'abcdefghijklmnopqrstuvwxyz'
if c != word[i]}
return neighbors & WORDS
"""
Explanation: And define neighboring_words to return the set of all words that are a one-letter change away from a given word:
End of explanation
"""
neighboring_words('hello')
neighboring_words('world')
"""
Explanation: For example:
End of explanation
"""
word_neighbors = {word: neighboring_words(word)
for word in WORDS}
"""
Explanation: Now we can create word_neighbors as a dict of {word: {neighboring_word, ...}}:
End of explanation
"""
breadth_first('green', 'grass', word_neighbors)
breadth_first('smart', 'brain', word_neighbors)
breadth_first('frown', 'smile', word_neighbors)
"""
Explanation: Now the breadth_first function can be used to solve a word ladder problem:
End of explanation
"""
def breadth_first_search(problem):
"Search for goal; paths with least number of steps first."
if problem.is_goal(problem.initial):
return Node(problem.initial)
frontier = FrontierQ(Node(problem.initial), LIFO=False)
explored = set()
while frontier:
node = frontier.pop()
explored.add(node.state)
for action in problem.actions(node.state):
child = node.child(problem, action)
if child.state not in explored and child.state not in frontier:
if problem.is_goal(child.state):
return child
frontier.add(child)
"""
Explanation: More General Search Algorithms
Now we'll embellish the breadth_first algorithm to make a family of search algorithms with more capabilities:
We distinguish between an action and the result of an action.
We allow different measures of the cost of a solution (not just the number of steps in the sequence).
We search through the state space in an order that is more likely to lead to an optimal solution quickly.
Here's how we do these things:
Instead of having a graph of neighboring states, we instead have an object of type Problem. A Problem
has one method, Problem.actions(state) to return a collection of the actions that are allowed in a state,
and another method, Problem.result(state, action) that says what happens when you take an action.
We keep a set, explored of states that have already been explored. We also have a class, Frontier, that makes it efficient to ask if a state is on the frontier.
Each action has a cost associated with it (in fact, the cost can vary with both the state and the action).
The Frontier class acts as a priority queue, allowing the "best" state to be explored next.
We represent a sequence of actions and resulting states as a linked list of Node objects.
The algorithm breadth_first_search is basically the same as breadth_first, but using our new conventions:
End of explanation
"""
def uniform_cost_search(problem, costfn=lambda node: node.path_cost):
frontier = FrontierPQ(Node(problem.initial), costfn)
explored = set()
while frontier:
node = frontier.pop()
if problem.is_goal(node.state):
return node
explored.add(node.state)
for action in problem.actions(node.state):
child = node.child(problem, action)
if child.state not in explored and child not in frontier:
frontier.add(child)
elif child in frontier and frontier.cost[child] < child.path_cost:
frontier.replace(child)
"""
Explanation: Next is uniform_cost_search, in which each step can have a different cost, and we still consider first one of the states with minimum cost so far.
End of explanation
"""
def astar_search(problem, heuristic):
costfn = lambda node: node.path_cost + heuristic(node.state)
return uniform_cost_search(problem, costfn)
"""
Explanation: Finally, astar_search in which the cost includes an estimate of the distance to the goal as well as the distance travelled so far.
End of explanation
"""
class Node(object):
"""A node in a search tree. A search tree is spanning tree over states.
A Node contains a state, the previous node in the tree, the action that
takes us from the previous state to this state, and the path cost to get to
this state. If a state is arrived at by two paths, then there are two nodes
with the same state."""
def __init__(self, state, previous=None, action=None, step_cost=1):
"Create a search tree Node, derived from a previous Node by an action."
self.state = state
self.previous = previous
self.action = action
self.path_cost = 0 if previous is None else (previous.path_cost + step_cost)
def __repr__(self): return "<Node {}: {}>".format(self.state, self.path_cost)
def __lt__(self, other): return self.path_cost < other.path_cost
def child(self, problem, action):
"The Node you get by taking an action from this Node."
result = problem.result(self.state, action)
return Node(result, self, action,
problem.step_cost(self.state, action, result))
"""
Explanation: Search Tree Nodes
The solution to a search problem is now a linked list of Nodes, where each Node
includes a state and the path_cost of getting to the state. In addition, for every Node except for the first (root) Node, there is a previous Node (indicating the state that lead to this Node) and an action (indicating the action taken to get here).
End of explanation
"""
from collections import OrderedDict
import heapq
class FrontierQ(OrderedDict):
"A Frontier that supports FIFO or LIFO Queue ordering."
def __init__(self, initial, LIFO=False):
"""Initialize Frontier with an initial Node.
If LIFO is True, pop from the end first; otherwise from front first."""
super(FrontierQ, self).__init__()
self.LIFO = LIFO
self.add(initial)
def add(self, node):
"Add a node to the frontier."
self[node.state] = node
def pop(self):
"Remove and return the next Node in the frontier."
(state, node) = self.popitem(self.LIFO)
return node
def replace(self, node):
"Make this node replace the nold node with the same state."
del self[node.state]
self.add(node)
class FrontierPQ:
"A Frontier ordered by a cost function; a Priority Queue."
def __init__(self, initial, costfn=lambda node: node.path_cost):
"Initialize Frontier with an initial Node, and specify a cost function."
self.heap = []
self.states = {}
self.costfn = costfn
self.add(initial)
def add(self, node):
"Add node to the frontier."
cost = self.costfn(node)
heapq.heappush(self.heap, (cost, node))
self.states[node.state] = node
def pop(self):
"Remove and return the Node with minimum cost."
(cost, node) = heapq.heappop(self.heap)
self.states.pop(node.state, None) # remove state
return node
def replace(self, node):
"Make this node replace a previous node with the same state."
if node.state not in self:
raise ValueError('{} not there to replace'.format(node.state))
for (i, (cost, old_node)) in enumerate(self.heap):
if old_node.state == node.state:
self.heap[i] = (self.costfn(node), node)
heapq._siftdown(self.heap, 0, i)
return
def __contains__(self, state): return state in self.states
def __len__(self): return len(self.heap)
"""
Explanation: Frontiers
A frontier is a collection of Nodes that acts like both a Queue and a Set. A frontier, f, supports these operations:
f.add(node): Add a node to the Frontier.
f.pop(): Remove and return the "best" node from the frontier.
f.replace(node): add this node and remove a previous node with the same state.
state in f: Test if some node in the frontier has arrived at state.
f[state]: returns the node corresponding to this state in frontier.
len(f): The number of Nodes in the frontier. When the frontier is empty, f is false.
We provide two kinds of frontiers: One for "regular" queues, either first-in-first-out (for breadth-first search) or last-in-first-out (for depth-first search), and one for priority queues, where you can specify what cost function on nodes you are trying to minimize.
End of explanation
"""
class Problem(object):
"""The abstract class for a search problem."""
def __init__(self, initial=None, goals=(), **additional_keywords):
"""Provide an initial state and optional goal states.
A subclass can have additional keyword arguments."""
self.initial = initial # The initial state of the problem.
self.goals = goals # A collection of possible goal states.
self.__dict__.update(**additional_keywords)
def actions(self, state):
"Return a list of actions executable in this state."
raise NotImplementedError # Override this!
def result(self, state, action):
"The state that results from executing this action in this state."
raise NotImplementedError # Override this!
def is_goal(self, state):
"True if the state is a goal."
return state in self.goals # Optionally override this!
def step_cost(self, state, action, result=None):
"The cost of taking this action from this state."
return 1 # Override this if actions have different costs
def action_sequence(node):
"The sequence of actions to get to this node."
actions = []
while node.previous:
actions.append(node.action)
node = node.previous
return actions[::-1]
def state_sequence(node):
"The sequence of states to get to this node."
states = [node.state]
while node.previous:
node = node.previous
states.append(node.state)
return states[::-1]
"""
Explanation: Search Problems
Problem is the abstract class for all search problems. You can define your own class of problems as a subclass of Problem. You will need to override the actions and result method to describe how your problem works. You will also have to either override is_goal or pass a collection of goal states to the initialization method. If actions have different costs, you should override the step_cost method.
End of explanation
"""
dirt = '*'
clean = ' '
class TwoLocationVacuumProblem(Problem):
"""A Vacuum in a world with two locations, and dirt.
Each state is a tuple of (location, dirt_in_W, dirt_in_E)."""
def actions(self, state): return ('W', 'E', 'Suck')
def is_goal(self, state): return dirt not in state
def result(self, state, action):
"The state that results from executing this action in this state."
(loc, dirtW, dirtE) = state
if action == 'W': return ('W', dirtW, dirtE)
elif action == 'E': return ('E', dirtW, dirtE)
elif action == 'Suck' and loc == 'W': return (loc, clean, dirtE)
elif action == 'Suck' and loc == 'E': return (loc, dirtW, clean)
else: raise ValueError('unknown action: ' + action)
problem = TwoLocationVacuumProblem(initial=('W', dirt, dirt))
result = uniform_cost_search(problem)
result
action_sequence(result)
state_sequence(result)
problem = TwoLocationVacuumProblem(initial=('E', clean, dirt))
result = uniform_cost_search(problem)
action_sequence(result)
"""
Explanation: Two Location Vacuum World
End of explanation
"""
class PourProblem(Problem):
"""Problem about pouring water between jugs to achieve some water level.
Each state is a tuples of levels. In the initialization, provide a tuple of
capacities, e.g. PourProblem(capacities=(8, 16, 32), initial=(2, 4, 3), goals={7}),
which means three jugs of capacity 8, 16, 32, currently filled with 2, 4, 3 units of
water, respectively, and the goal is to get a level of 7 in any one of the jugs."""
def actions(self, state):
"""The actions executable in this state."""
jugs = range(len(state))
return ([('Fill', i) for i in jugs if state[i] != self.capacities[i]] +
[('Dump', i) for i in jugs if state[i] != 0] +
[('Pour', i, j) for i in jugs for j in jugs if i != j])
def result(self, state, action):
"""The state that results from executing this action in this state."""
result = list(state)
act, i, j = action[0], action[1], action[-1]
if act == 'Fill': # Fill i to capacity
result[i] = self.capacities[i]
elif act == 'Dump': # Empty i
result[i] = 0
elif act == 'Pour':
a, b = state[i], state[j]
result[i], result[j] = ((0, a + b)
if (a + b <= self.capacities[j]) else
(a + b - self.capacities[j], self.capacities[j]))
else:
raise ValueError('unknown action', action)
return tuple(result)
def is_goal(self, state):
"""True if any of the jugs has a level equal to one of the goal levels."""
return any(level in self.goals for level in state)
p7 = PourProblem(initial=(2, 0), capacities=(5, 13), goals={7})
p7.result((2, 0), ('Fill', 1))
result = uniform_cost_search(p7)
action_sequence(result)
"""
Explanation: Water Pouring Problem
Here is another problem domain, to show you how to define one. The idea is that we have a number of water jugs and a water tap and the goal is to measure out a specific amount of water (in, say, ounces or liters). You can completely fill or empty a jug, but because the jugs don't have markings on them, you can't partially fill them with a specific amount. You can, however, pour one jug into another, stopping when the seconfd is full or the first is empty.
End of explanation
"""
def showpath(searcher, problem):
"Show what happens when searcvher solves problem."
problem = Instrumented(problem)
print('\n{}:'.format(searcher.__name__))
result = searcher(problem)
if result:
actions = action_sequence(result)
state = problem.initial
path_cost = 0
for steps, action in enumerate(actions, 1):
path_cost += problem.step_cost(state, action, 0)
result = problem.result(state, action)
print(' {} =={}==> {}; cost {} after {} steps{}'
      .format(state, action, result, path_cost, steps,
              '; GOAL!' if problem.is_goal(result) else ''))
state = result
msg = 'GOAL FOUND' if result else 'no solution'
print('{} after {} results and {} goal checks'
.format(msg, problem._counter['result'], problem._counter['is_goal']))
from collections import Counter
class Instrumented:
"Instrument an object to count all the attribute accesses in _counter."
def __init__(self, obj):
self._object = obj
self._counter = Counter()
def __getattr__(self, attr):
self._counter[attr] += 1
return getattr(self._object, attr)
showpath(uniform_cost_search, p7)
p = PourProblem(initial=(0, 0), capacities=(7, 13), goals={2})
showpath(uniform_cost_search, p)
class GreenPourProblem(PourProblem):
def step_cost(self, state, action, result=None):
"The cost is the amount of water used in a fill."
if action[0] == 'Fill':
i = action[1]
return self.capacities[i] - state[i]
return 0
p = GreenPourProblem(initial=(0, 0), capacities=(7, 13), goals={2})
showpath(uniform_cost_search, p)
def compare_searchers(problem, searchers=None):
"Apply each of the search algorithms to the problem, and show results"
if searchers is None:
searchers = (breadth_first_search, uniform_cost_search)
for searcher in searchers:
showpath(searcher, problem)
compare_searchers(p)
"""
Explanation: Visualization Output
End of explanation
"""
import random
N, S, E, W = DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
def Grid(width, height, obstacles=0.1):
"""A 2-D grid, width x height, with obstacles that are either a collection of points,
or a fraction between 0 and 1 indicating the density of obstacles, chosen at random."""
grid = {(x, y) for x in range(width) for y in range(height)}
if isinstance(obstacles, (float, int)):
obstacles = random.sample(grid, int(width * height * obstacles))
def neighbors(x, y):
for (dx, dy) in DIRECTIONS:
(nx, ny) = (x + dx, y + dy)
if (nx, ny) not in obstacles and 0 <= nx < width and 0 <= ny < height:
yield (nx, ny)
return {(x, y): list(neighbors(x, y))
for x in range(width) for y in range(height)}
Grid(5, 5)
class GridProblem(Problem):
"Create with a call like GridProblem(grid=Grid(10, 10), initial=(0, 0), goal=(9, 9))"
def actions(self, state): return DIRECTIONS
def result(self, state, action):
#print('ask for result of', state, action)
(x, y) = state
(dx, dy) = action
r = (x + dx, y + dy)
return r if r in self.grid[state] else state
gp = GridProblem(grid=Grid(5, 5, 0.3), initial=(0, 0), goals={(4, 4)})
showpath(uniform_cost_search, gp)
"""
Explanation: Random Grid
An environment where you can move in any of 4 directions, unless there is an obstacle there.
End of explanation
"""
def hardness(problem):
L = breadth_first_search(problem)
#print('hardness', problem.initial, problem.capacities, problem.goals, L)
return len(action_sequence(L)) if (L is not None) else 0
hardness(p7)
action_sequence(breadth_first_search(p7))
C = 9 # Maximum capacity to consider
phard = max((PourProblem(initial=(a, b), capacities=(A, B), goals={goal})
for A in range(C+1) for B in range(C+1)
for a in range(A) for b in range(B)
for goal in range(max(A, B))),
key=hardness)
phard.initial, phard.capacities, phard.goals
showpath(breadth_first_search, PourProblem(initial=(0, 0), capacities=(7, 9), goals={8}))
showpath(uniform_cost_search, phard)
class GridProblem(Problem):
"""A Grid."""
def actions(self, state): return ['N', 'S', 'E', 'W']
def result(self, state, action):
"""The state that results from executing this action in this state."""
(W, H) = self.size
        if action == 'N' and state >= W: return state - W
        if action == 'S' and state + W < W * H: return state + W
if action == 'E' and (state + 1) % W !=0: return state + 1
if action == 'W' and state % W != 0: return state - 1
return state
compare_searchers(GridProblem(initial=0, goals={44}, size=(10, 10)))
def test_frontier():
#### Breadth-first search with FIFO Q
f = FrontierQ(Node(1), LIFO=False)
assert 1 in f and len(f) == 1
f.add(Node(2))
f.add(Node(3))
assert 1 in f and 2 in f and 3 in f and len(f) == 3
assert f.pop().state == 1
assert 1 not in f and 2 in f and 3 in f and len(f) == 2
assert f
assert f.pop().state == 2
assert f.pop().state == 3
assert not f
#### Depth-first search with LIFO Q
f = FrontierQ(Node('a'), LIFO=True)
for s in 'bcdef': f.add(Node(s))
assert len(f) == 6 and 'a' in f and 'c' in f and 'f' in f
for s in 'fedcba': assert f.pop().state == s
assert not f
#### Best-first search with Priority Q
f = FrontierPQ(Node(''), lambda node: len(node.state))
assert '' in f and len(f) == 1 and f
for s in ['book', 'boo', 'bookie', 'bookies', 'cook', 'look', 'b']:
assert s not in f
f.add(Node(s))
assert s in f
assert f.pop().state == ''
assert f.pop().state == 'b'
assert f.pop().state == 'boo'
assert {f.pop().state for _ in '123'} == {'book', 'cook', 'look'}
assert f.pop().state == 'bookie'
#### Romania: Two paths to Bucharest; cheapest one found first
S = Node('S')
SF = Node('F', S, 'S->F', 99)
SFB = Node('B', SF, 'F->B', 211)
SR = Node('R', S, 'S->R', 80)
SRP = Node('P', SR, 'R->P', 97)
SRPB = Node('B', SRP, 'P->B', 101)
f = FrontierPQ(S)
    f.add(SF); f.add(SR); f.add(SRP); f.add(SRPB); f.add(SFB)
def cs(n): return (n.path_cost, n.state) # cs: cost and state
assert cs(f.pop()) == (0, 'S')
assert cs(f.pop()) == (80, 'R')
assert cs(f.pop()) == (99, 'F')
assert cs(f.pop()) == (177, 'P')
assert cs(f.pop()) == (278, 'B')
return 'test_frontier ok'
test_frontier()
# %matplotlib inline
import matplotlib.pyplot as plt
p = plt.plot([i**2 for i in range(10)])
plt.savefig('destination_path.eps', format='eps', dpi=1200)
import itertools
import random
# http://stackoverflow.com/questions/10194482/custom-matplotlib-plot-chess-board-like-table-with-colored-cells
from matplotlib.table import Table
def main():
grid_table(8, 8)
plt.axis('scaled')
plt.show()
def grid_table(nrows, ncols):
fig, ax = plt.subplots()
ax.set_axis_off()
colors = ['white', 'lightgrey', 'dimgrey']
tb = Table(ax, bbox=[0,0,2,2])
for i,j in itertools.product(range(ncols), range(nrows)):
tb.add_cell(i, j, 2./ncols, 2./nrows, text='{:0.2f}'.format(0.1234),
loc='center', facecolor=random.choice(colors), edgecolor='grey') # facecolors=
ax.add_table(tb)
#ax.plot([0, .3], [.2, .2])
#ax.add_line(plt.Line2D([0.3, 0.5], [0.7, 0.7], linewidth=2, color='blue'))
return fig
main()
import collections
class defaultkeydict(collections.defaultdict):
"""Like defaultdict, but the default_factory is a function of the key.
>>> d = defaultkeydict(abs); d[-42]
42
"""
def __missing__(self, key):
self[key] = self.default_factory(key)
return self[key]
"""
Explanation: Finding a hard PourProblem
What solvable two-jug PourProblem requires the most steps? We can define the hardness as the number of steps, and then iterate over all PourProblems with capacities up to size C, keeping the hardest one.
End of explanation
"""
class TSP_problem(Problem):
'''
subclass of Problem to define various functions
'''
def two_opt(self, state):
'''
Neighbour generating function for Traveling Salesman Problem
'''
state2 = state[:]
l = random.randint(0, len(state2) - 1)
r = random.randint(0, len(state2) - 1)
if l > r:
l, r = r,l
state2[l : r + 1] = reversed(state2[l : r + 1])
return state2
def actions(self, state):
'''
        actions that can be executed in the given state
'''
return [self.two_opt]
def result(self, state, action):
'''
result after applying the given action on the given state
'''
return action(state)
def path_cost(self, c, state1, action, state2):
'''
total distance for the Traveling Salesman to be covered if in state2
'''
cost = 0
for i in range(len(state2) - 1):
cost += distances[state2[i]][state2[i + 1]]
cost += distances[state2[0]][state2[-1]]
return cost
def value(self, state):
'''
        negative of the path cost for the given state (so maximizing value minimizes tour length)
'''
return -1 * self.path_cost(None, None, None, state)
def init():
'''
Initialisation function for matplotlib animation
'''
line.set_data([], [])
for name, coordinates in romania_map.locations.items():
ax.annotate(
name,
xy=coordinates, xytext=(-10, 5), textcoords='offset points', size = 10)
text.set_text("Cost = 0 i = 0" )
return line,
def animate(i):
'''
Animation function to set next path and print its cost.
'''
x, y = [], []
for name in states[i]:
x.append(romania_map.locations[name][0])
y.append(romania_map.locations[name][1])
x.append(romania_map.locations[states[i][0]][0])
y.append(romania_map.locations[states[i][0]][1])
line.set_data(x,y)
text.set_text("Cost = " + str('{:.2f}'.format(TSP_problem.path_cost(None, None, None, None, states[i]))))
return line,
%matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib import animation
import numpy as np
font = {'family': 'roboto',
'color': 'darkred',
'weight': 'normal',
'size': 12,
}
cities = []
distances ={}
states = []
# creating plotting area
fig = plt.figure(figsize = (8,6))
ax = plt.axes(xlim=(60, 600), ylim=(245, 600))
line, = ax.plot([], [], c="b",linewidth = 1.5, marker = 'o', markerfacecolor = 'r', markeredgecolor = 'r',markersize = 10)
text = ax.text(450, 565, "", fontdict = font)
# creating initial path
for name in romania_map.locations.keys():
distances[name] = {}
cities.append(name)
# distances['city1']['city2'] contains euclidean distance between their coordinates
for name_1,coordinates_1 in romania_map.locations.items():
for name_2,coordinates_2 in romania_map.locations.items():
distances[name_1][name_2] = np.linalg.norm([coordinates_1[0] - coordinates_2[0], coordinates_1[1] - coordinates_2[1]])
distances[name_2][name_1] = np.linalg.norm([coordinates_1[0] - coordinates_2[0], coordinates_1[1] - coordinates_2[1]])
# creating the problem
tsp_problem = TSP_problem(cities)
# all the states as a 2-D list of paths
states = simulated_annealing_full(tsp_problem)
# calling the matplotlib animation function
anim = animation.FuncAnimation(fig, animate, init_func = init,
frames = len(states), interval = len(states), blit = True, repeat = False)
plt.show()
"""
Explanation: Simulated Annealing visualisation using TSP
Applying simulated annealing to the traveling salesman problem to find the shortest tour through all cities in Romania. The distance between two cities is taken as the Euclidean distance.
End of explanation
"""
next_state = cities
states = []
# creating plotting area
fig = plt.figure(figsize = (8,6))
ax = plt.axes(xlim=(60, 600), ylim=(245, 600))
line, = ax.plot([], [], c="b",linewidth = 1.5, marker = 'o', markerfacecolor = 'r', markeredgecolor = 'r',markersize = 10)
text = ax.text(450, 565, "", fontdict = font)
# to plot only the final states of every simulated annealing iteration
for iterations in range(100):
tsp_problem = TSP_problem(next_state)
states.append(simulated_annealing(tsp_problem))
next_state = states[-1]
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=len(states),interval=len(states), blit=True, repeat = False)
plt.show()
"""
Explanation: Iterative Simulated Annealing
Providing the output of the previous run as input to the next run to give better performance.
End of explanation
"""
|
tuanavu/coursera-university-of-washington | machine_learning/1_machine_learning_foundations/assignment/week6/.ipynb_checkpoints/Deep Features for Image Retrieval-checkpoint.ipynb | mit | import graphlab
"""
Explanation: Building an image retrieval system with deep features
Fire up GraphLab Create
End of explanation
"""
image_train = graphlab.SFrame('image_train_data/')
"""
Explanation: Load the CIFAR-10 dataset
We will use a popular benchmark dataset in computer vision called CIFAR-10.
(We've reduced the data to just 4 categories = {'cat','bird','automobile','dog'}.)
This dataset is already split into a training set and test set. In this simple retrieval example, there is no notion of "testing", so we will only use the training data.
End of explanation
"""
#deep_learning_model = graphlab.load_model('http://s3.amazonaws.com/GraphLab-Datasets/deeplearning/imagenet_model_iter45')
#image_train['deep_features'] = deep_learning_model.extract_features(image_train)
image_train.head()
"""
Explanation: Computing deep features for our images
The two lines below allow us to compute deep features. This computation takes a little while, so we have already computed them and saved the results as a column in the data you loaded.
(Note that if you would like to compute such deep features and have a GPU on your machine, you should use the GPU enabled GraphLab Create, which will be significantly faster for this task.)
End of explanation
"""
knn_model = graphlab.nearest_neighbors.create(image_train,features=['deep_features'],
label='id')
"""
Explanation: Train a nearest-neighbors model for retrieving images using deep features
We will now build a simple image retrieval system that finds the nearest neighbors for any image.
End of explanation
"""
graphlab.canvas.set_target('ipynb')
cat = image_train[18:19]
cat['image'].show()
knn_model.query(cat)
"""
Explanation: Use image retrieval model with deep features to find similar images
Let's find similar images to this cat picture.
End of explanation
"""
def get_images_from_ids(query_result):
return image_train.filter_by(query_result['reference_label'],'id')
cat_neighbors = get_images_from_ids(knn_model.query(cat))
cat_neighbors['image'].show()
"""
Explanation: We are going to create a simple function to view the nearest neighbors to save typing:
End of explanation
"""
car = image_train[8:9]
car['image'].show()
get_images_from_ids(knn_model.query(car))['image'].show()
"""
Explanation: Very cool results showing similar cats.
Finding similar images to a car
End of explanation
"""
show_neighbors = lambda i: get_images_from_ids(knn_model.query(image_train[i:i+1]))['image'].show()
show_neighbors(8)
show_neighbors(26)
"""
Explanation: Just for fun, let's create a lambda to find and show nearest neighbor images
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.12/_downloads/plot_stats_cluster_time_frequency_repeated_measures_anova.ipynb | bsd-3-clause | # Authors: Denis Engemann <denis.engemann@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.time_frequency import single_trial_power
from mne.stats import f_threshold_mway_rm, f_mway_rm, fdr_correction
from mne.datasets import sample
print(__doc__)
"""
Explanation: .. _tut_stats_cluster_sensor_rANOVA_tfr:
Mass-univariate twoway repeated measures ANOVA on single trial power
This script shows how to conduct a mass-univariate repeated measures
ANOVA. As the model to be fitted assumes two fully crossed factors,
we will study the interplay between perceptual modality
(auditory VS visual) and the location of stimulus presentation
(left VS right). Here we use single trials as replications
(subjects) while iterating over time slices plus frequency bands
to fit our mass-univariate model. For the sake of simplicity we
will confine this analysis to a single channel that we know
exposes a strong induced response. We will then visualize
each effect by creating a corresponding mass-univariate effect
image. We conclude with accounting for multiple comparisons by
performing a permutation clustering test using the ANOVA as
clustering function. The final results will be compared to
multiple comparisons using False Discovery Rate correction.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
event_id = 1
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
include = []
raw.info['bads'] += ['MEG 2443'] # bads
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
ch_name = raw.info['ch_names'][picks[0]]
# Load conditions
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0),
reject=reject)
"""
Explanation: Set parameters
End of explanation
"""
epochs.equalize_event_counts(event_id, copy=False)
# Time vector
times = 1e3 * epochs.times # change unit to ms
# Factor to down-sample the temporal dimension of the PSD computed by
# single_trial_power.
decim = 2
frequencies = np.arange(7, 30, 3) # define frequencies of interest
sfreq = raw.info['sfreq'] # sampling in Hz
n_cycles = frequencies / frequencies[0]
baseline_mask = times[::decim] < 0
"""
Explanation: We have to make sure all conditions have the same counts, as the ANOVA
expects a fully balanced data matrix and does not forgive imbalances that
generously (risk of type-I error).
End of explanation
"""
epochs_power = list()
for condition in [epochs[k].get_data()[:, 97:98, :] for k in event_id]:
this_power = single_trial_power(condition, sfreq=sfreq,
frequencies=frequencies, n_cycles=n_cycles,
decim=decim)
this_power = this_power[:, 0, :, :] # we only have one channel.
# Compute ratio with baseline power (be sure to correct time vector with
# decimation factor)
epochs_baseline = np.mean(this_power[:, :, baseline_mask], axis=2)
this_power /= epochs_baseline[..., np.newaxis]
epochs_power.append(this_power)
"""
Explanation: Create TFR representations for all conditions
End of explanation
"""
n_conditions = len(epochs.event_id)
n_replications = epochs.events.shape[0] // n_conditions
factor_levels = [2, 2] # number of levels in each factor
effects = 'A*B' # this is the default signature for computing all effects
# Other possible options are 'A' or 'B' for the corresponding main effects
# or 'A:B' for the interaction effect only (this notation is borrowed from the
# R formula language)
n_frequencies = len(frequencies)
n_times = len(times[::decim])
"""
Explanation: Setup repeated measures ANOVA
We will tell the ANOVA how to interpret the data matrix in terms of factors.
This is done via the factor levels argument which is a list of the number
factor levels for each factor.
End of explanation
"""
data = np.swapaxes(np.asarray(epochs_power), 1, 0)
# reshape last two dimensions in one mass-univariate observation-vector
data = data.reshape(n_replications, n_conditions, n_frequencies * n_times)
# so we have replications * conditions * observations:
print(data.shape)
"""
Explanation: Now we'll assemble the data matrix and swap axes so the trial replications
are the first dimension and the conditions are the second dimension.
End of explanation
"""
fvals, pvals = f_mway_rm(data, factor_levels, effects=effects)
effect_labels = ['modality', 'location', 'modality by location']
# let's visualize our effects by computing f-images
for effect, sig, effect_label in zip(fvals, pvals, effect_labels):
plt.figure()
# show naive F-values in gray
plt.imshow(effect.reshape(8, 211), cmap=plt.cm.gray, extent=[times[0],
times[-1], frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
# create mask for significant Time-frequency locations
effect = np.ma.masked_array(effect, [sig > .05])
plt.imshow(effect.reshape(8, 211), cmap='RdBu_r', extent=[times[0],
times[-1], frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.colorbar()
plt.xlabel('time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title(r"Time-locked response for '%s' (%s)" % (effect_label, ch_name))
plt.show()
"""
Explanation: While the iteration scheme used above for assembling the data matrix
makes sure the first two dimensions are organized as expected (with A =
modality and B = location):
.. table::
===== ==== ==== ==== ====
trial A1B1 A1B2 A2B1 A2B2
===== ==== ==== ==== ====
1 1.34 2.53 0.97 1.74
... .... .... .... ....
56 2.45 7.90 3.09 4.76
===== ==== ==== ==== ====
Now we're ready to run our repeated measures ANOVA.
Note. As we treat trials as subjects, the test only accounts for
time locked responses despite the 'induced' approach.
For analysis of induced power at the group level, averaged TFRs
are required.
End of explanation
"""
effects = 'A:B'
"""
Explanation: Account for multiple comparisons using FDR versus permutation clustering test
First we need to slightly modify the ANOVA function to be suitable for
the clustering procedure. Also want to set some defaults.
Let's first override effects to confine the analysis to the interaction
End of explanation
"""
def stat_fun(*args):
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=False)[0]
# The ANOVA returns a tuple f-values and p-values, we will pick the former.
pthresh = 0.00001 # set threshold rather high to save some time
f_thresh = f_threshold_mway_rm(n_replications, factor_levels, effects,
pthresh)
tail = 1 # f-test, so tail > 0
n_permutations = 256 # Save some time (the test won't be too sensitive ...)
T_obs, clusters, cluster_p_values, h0 = mne.stats.permutation_cluster_test(
epochs_power, stat_fun=stat_fun, threshold=f_thresh, tail=tail, n_jobs=1,
n_permutations=n_permutations, buffer_size=None)
"""
Explanation: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions: subjects X conditions X observations (optional).
The following function catches the list input and swaps the first and
the second dimension and finally calls the ANOVA function.
End of explanation
"""
good_clusters = np.where(cluster_p_values < .05)[0]
T_obs_plot = np.ma.masked_array(T_obs,
                                np.invert(clusters[np.squeeze(good_clusters)]))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.xlabel('time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title('Time-locked response for \'modality by location\' (%s)\n'
' cluster-level corrected (p <= 0.05)' % ch_name)
plt.show()
"""
Explanation: Create new stats image with only significant clusters
End of explanation
"""
mask, _ = fdr_correction(pvals[2])
T_obs_plot2 = np.ma.masked_array(T_obs, np.invert(mask))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot2], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.xlabel('time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title('Time-locked response for \'modality by location\' (%s)\n'
' FDR corrected (p <= 0.05)' % ch_name)
plt.show()
# Both cluster-level and FDR correction help remove the putatively
# significant spots we saw in the naive f-images.
"""
Explanation: Now using FDR
End of explanation
"""
|
ddandur/Twords | jupyter_example_notebooks/Bitcoin Data.ipynb | mit | import sys
sys.path.append('..')
from twords.twords import Twords
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
# this pandas line makes the dataframe display all text in a line; useful for seeing entire tweets
pd.set_option('display.max_colwidth', -1)
twit = Twords()
twit.data_path = "../data/java_collector/bitcoin/"
twit.background_path = '../jar_files_and_background/freq_table_72319443_total_words_twitter_corpus.csv'
twit.create_Background_dict()
twit.set_Search_terms(["bitcoin"])
twit.create_Stop_words()
twit.get_java_tweets_from_csv_list()
# find how many tweets we have in original dataset
print "Total number of tweets:", len(twit.tweets_df)
"""
Explanation: Exploring tweets containing word "bitcoin" from mid-summer 2016
July 28 - August 6, 2016
This is another small dataset (~3000 tweets), so semantic insight into bitcoin of the kind obtained with the charisma data is probably not possible - there may still be other interesting patterns, though.
For this example no attempt is made to remove spammy or repetitive posts.
In general Twords was designed to look at frequencies and semantics for terms likely to be used in casual conversation (like "charisma") more so than terms that likely have a lot of marketing behind them (like "brexit" or "bitcoin"), but people specifically interested in a term like "bitcoin" may still find interesting patterns here.
End of explanation
"""
twit.keep_column_of_original_tweets()
twit.lower_tweets()
twit.keep_only_unicode_tweet_text()
twit.remove_urls_from_tweets()
twit.remove_punctuation_from_tweets()
twit.drop_non_ascii_characters_from_tweets()
twit.drop_duplicate_tweets()
twit.drop_by_search_in_name()
twit.convert_tweet_dates_to_standard()
twit.sort_tweets_by_date()
len(twit.tweets_df)
twit.keep_tweets_with_terms("bitcoin")
len(twit.tweets_df)
"""
Explanation: Standard cleaning
End of explanation
"""
twit.create_word_bag()
twit.make_nltk_object_from_word_bag()
twit.create_word_freq_df(1000)
twit.word_freq_df.sort_values("log relative frequency", ascending = False, inplace = True)
twit.word_freq_df.head(20)
"""
Explanation: Create word_freq_df
End of explanation
"""
num_words_to_plot = 32
background_cutoff = 100
twit.word_freq_df[twit.word_freq_df['background occurrences']>background_cutoff].sort_values("log relative frequency", ascending=True).set_index("word")["log relative frequency"][-num_words_to_plot:].plot.barh(figsize=(20,
num_words_to_plot/2.), fontsize=30, color="c");
plt.title("log relative frequency", fontsize=30);
ax = plt.axes();
ax.xaxis.grid(linewidth=4);
"""
Explanation: Plot results with varying background cutoffs
At least 100 background occurrences:
End of explanation
"""
num_words_to_plot = 32
background_cutoff = 500
twit.word_freq_df[twit.word_freq_df['background occurrences']>background_cutoff].sort_values("log relative frequency", ascending=True).set_index("word")["log relative frequency"][-num_words_to_plot:].plot.barh(figsize=(20,
num_words_to_plot/2.), fontsize=30, color="c");
plt.title("log relative frequency", fontsize=30);
ax = plt.axes();
ax.xaxis.grid(linewidth=4);
twit.tweets_containing("decline")[:10]
"""
Explanation: At least 500 background occurrences:
End of explanation
"""
num_words_to_plot = 32
background_cutoff = 2000
twit.word_freq_df[twit.word_freq_df['background occurrences']>background_cutoff].sort_values("log relative frequency", ascending=True).set_index("word")["log relative frequency"][-num_words_to_plot:].plot.barh(figsize=(20,
num_words_to_plot/2.), fontsize=30, color="c");
plt.title("log relative frequency", fontsize=30);
ax = plt.axes();
ax.xaxis.grid(linewidth=4);
twit.tweets_containing("confirmed")[:10]
"""
Explanation: At least 2000 background occurrences:
End of explanation
"""
|
whitead/numerical_stats | project/type3_examples/basketball.ipynb | gpl-3.0 | import pandas as pd
import numexpr
import bottleneck
import numpy as np
import numpy.linalg as linalg
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.stats as ss
reg_14_15 = pd.read_csv('2014_2015 Regular Season Stats.csv')
#Testing out our system
reg_14_15
"""
Explanation: Markov Madness
Ok let's get down to business! So the overall goal is to build a mathematical model that predicts with good accuracy who is likely to make it to the sweet 16 in the NCAA tournament. This project is going to have two parts:
Part 1:
-Performing an optimization/regression in order to write equations that predict a team's performance in the NCAA tournament based on their regular season statistics.
Part 2:
-Putting together a Markov Chain that will use these probabilities to predict the teams most likely to advance in the tournament. I'll explain more about a Markov Chain when we get there.
Let's start with Part 1!
Part 1: The Regression
In order to perform this regression we need to set up a system to pull data from CSV files.
End of explanation
"""
reg_14_15 = reg_14_15.rename(columns={'Unnamed: 0': 'Number'})
#renaming the columns with integers so they can be more easily manipulated
d=[]
for i in range(0,34,1):
d.append(i)
d
reg_14_15.columns=[d]
#creating a new dataframe with only the teams in the tournament
bracket_14_15=reg_14_15.iloc[[7,8,12,14,22,23,35,36,51,55,66,67,75,82,99,100,102,104,108,110,126,129,130,135,139,141,149,153,162,173,177,198,203,206,211,214,218,222,225,226,227,230,242,243,250,263,283,288,290,299,303,316,319,321,325,328,329,330,337,342,345,346,348,349],:]
newCol = [27,56,6,24,33,58,48,19,25,61,44,22,1,54,25,42,23,8,62,51,43,27,33,20,3,64,38,7,27,4,46,59,10,13,57,55,27,5,26,11,40,21,37,39,63,27,35,41,49,45,60,53,15,9,52,17,18,35,16,14,2,47,50,12]
newName = '34'
values = np.insert(bracket_14_15.values,bracket_14_15.shape[1],newCol,axis=1)
header = bracket_14_15.columns.values.tolist()
header.append(newName)
df = pd.DataFrame(values,columns=header)
df
"""
Explanation: Excellent, now we have datasets. The first thing to do is to rank teams based on their performance in each year's NCAA march-madness tournament. This part of the calculation is rather subjective- I'm going to individually rank teams by how well they did in the tournament. I need to do this because, if you think about it, there were two teams that lost in the final four, four that lost in the elite 8, etc. How do we rank these teams? We could put them at relatively the same ranking, which I will. But I'm also going to differentiate between a bad loss and a close game. So this isn't an exact science but that's ok because the results will show how good my ranking system was.
In the following cells I select only the 64 teams in the NCAA tournament from the above list of every single team in Division 1 College basketball, and I assign each team a ranking (my assigned ranking is in column 34).
End of explanation
"""
mat = np.zeros((64,32))
for j in range (0,64,1):
for i in range(3,34,1):
val = float(df.iat[j,i])/float(df.iat[j,2])
mat[j,i-3]=val
"""
Explanation: Now, for easier manipulation, we're going to convert the dataframe into a numpy array. Then we'll divide each value in the array by the total number of games that team played, ensuring we have 'per game' statistics.
End of explanation
"""
#creating our y matrix
ratings = np.zeros((64,1))
for j in range(0,64,1):
val = 64 - float(df.iat[j,34])
ratings[j] = val
"""
Explanation: Next we're going to begin the regression. First, we define a matrix y for our regression such that
$$ \textbf{Y} =\textbf{X} * \textbf{b}$$
where Y is our ratings, X is a matrix of our data points (each row represents the statistics for a single team), and b is our coefficients. I'm going to assume a linear relationship for now- I can play around with non-linear regressions later, but we really want to just get values for now and later we can figure out whether our regression is good.
End of explanation
"""
coeffs = []
for i in range(0,32,1):
results = ss.spearmanr(mat[:,i],ratings)
if results[1] < .05:
coeffs.append(i)
xmat = []
for i in coeffs:
xmat.append(mat[:,i])
result = linalg.lstsq(np.transpose(xmat),ratings)
x_mat = np.asarray(xmat)
x_matT = np.transpose(np.asarray(xmat))
rating = np.transpose(np.asarray(ratings))
npresult = np.asarray(result[0])
dot = np.dot(np.transpose(npresult),x_mat)
dot
dotadjusted = np.zeros((1,64))
for i in range(0,64,1):
if dot[0,i] < 0:
dotadjusted[0,i] = 1
else:
dotadjusted[0,i] = dot[0,i]
"""
Explanation: Since we only want to use the statistics that are correlated with the ratings, we run a spearman correlation test on every statistic and select only the ones below our alpha level of $0.05$. These statistics then form our $\textbf{X}$ matrix. Next we use the "linalg.lstsq" regression function to perform a least squares regression of our data. Finally, I'll compute our predicted rankings by multiplying the $\textbf{X}$ and $b$ matrices.
End of explanation
"""
brac2015 = np.zeros((64,64))
def brac(i):
a=0
for j in range(0,64,1):
a = a + dotadjusted[0,i]/(dotadjusted[0,i]+dotadjusted[0,j])
return 1/(64*.9921875)*a
for i in range(0,64,1):
for j in range(0,64,1):
if i != j:
brac2015[i,j] = 1/(64*.9921875) * dotadjusted[0,j]/(dotadjusted[0,i] + dotadjusted[0,j])
if i == j:
brac2015[i,i] = brac(i)
brac2015transpose = np.transpose(brac2015)
"""
Explanation: Notice above that I had to make a cheeky and dubious adjustment- some of the predicted rankings came out negative, so to ensure that all rankings are positive (we'll need them positive to create our Markov chain), I change all negative rankings to a rank of 1. A higher ranking means a better team.
Alright, we now have an equation with 15 coefficients that predicts the ranking of a team based on its regular season stats. Now we are going to create a Markov Chain using these data!
Part 2: The Markov Chain
Let's play a game called the jumping particle.
Consider a particle that can jump between multiple different states. On each turn of the game, the particle has a probability of jumping to another state or remaining in the current state. This group of states represents a Markov chain. The probability that a particle jumps to any particular state is written in the form of a "transition probability matrix." For example, consider a 2-state Markov Chain with states 0 and 1:
$$P =
\left[
\begin{array}{cc}
0.4 & 0.6\\
0.7 & 0.3\end{array}\right]
$$
In this case, the probability that a particle in state 0 on turn 1 jumps to state 1 on turn 2 is 0.6, and the probability it stays in state 0 is 0.4. Likewise, the probability that a particle in state 1 on turn 1 jumps to state 0 on turn 2 is 0.7 while the probability that it stays in state 1 is 0.3. Notice that each row sums to 1. This makes intuitive sense; the probability that the particle either jumps or stays must add to 1. It turns out that Markov Chains have lots of nice properties that we can exploit. First, however, we have to construct our transition probability matrix for our bracket.
Let's use our ranking system. Adopting a method suggested in Kvam et al., we can define
$$p_{i,j}= \frac{r_j}{r_i+r_j}$$
and
$$p_{i,i} = \sum_{j = 1, j \neq i}^{64}\frac{r_i}{r_i+r_j}$$ where $r_i$ represents the ranking of team i, $r_j$ the ranking of team j.
Notice, however, that there is an issue; this does not necessarily sum to 1 for all the values in a row. In fact,
$$ p_{i,1} + p_{i,2} + ... + p_{i,i-1} + p_{i,i+1} + ... + p_{i,64} + p_{i,i} = $$
$$ \frac{r_1}{r_i+r_1} + \frac{r_2}{r_i+r_2} + ... + \frac{r_{i-1}}{r_i+r_{i-1}} + \frac{r_{i+1}}{r_i+r_{i+1}} + ... + \frac{r_{64}}{r_i+r_{64}} + (\frac{r_i}{r_i+r_1} + ... + \frac{r_i}{r_i+r_{i-1}} + \frac{r_i}{r_i+r_{i+1}} + ... + \frac{r_i}{r_i+r_{64}}) = $$
$$ \frac{r_i + r_1}{r_i+r_1} + ...\frac{r_i + r_{i-1}}{r_i+r_{i-1}} + \frac{r_i+r_{i+1}}{r_i+r_{i+1}} + ... + \frac{r_i+r_{64}}{r_i+r_{64}} = 63(1) = 63 $$
So if we normalize by $\frac{1}{63}$ we should get rows that sum to 1. Now let's write the matrix.
End of explanation
"""
#replace last equation of P with the second boundary condition.
brac2015eq = np.zeros((64,64))
for i in range(0,63,1):
for j in range(0,63,1):
if i == j:
brac2015eq[i,j] = brac2015transpose[i,i] - 1
if i != j:
brac2015eq[i,j] = brac2015transpose[i,j]
for i in range(0,64,1):
brac2015eq[63,i] = 1
b = np.zeros((64,1))
b[63,0] = 1
a = np.zeros((64,1))
c = []
d = []
for i in range(0,64,1):
cat = np.linalg.solve(brac2015eq,b)[i,0]
c.append(cat)
d.append(df.iat[i,1])
e = pd.Series(d)
f = pd.Series(c)
predictions = pd.DataFrame({ 'Team Name' : e,
'Steady State Probability' : f})
finalpredictions = predictions.sort_values(by = 'Steady State Probability')
print(finalpredictions.tail())
"""
Explanation: The rows don't quite add to one with the $\frac{1}{63}$ factor alone: our diagonal loop runs over all 64 teams, including $j = i$, which contributes an extra $\frac{r_i}{r_i + r_i} = \frac{1}{2}$. The unnormalized row sum is therefore $63.5 = 64 \times 0.9921875$, which is why the normalization factor $\frac{1}{64 \times 0.9921875}$ makes the rows sum to one. No biggie.
This is a special type of Markov chain: because none of the values in the transition matrix are 1 or 0, it's possible to go from any state in the matrix to any other state. We call this a regular Markov chain. In fact, this Markov chain is regular, aperiodic, and irreducible. The special property of such a Markov chain is that it has a limiting probability distribution. This means that if we evolve the Markov process over infinitely many iterations (i.e. you randomly go from state 0 to state 1 to state 7 to state 32 etc. etc. infinitely many times) there is a set probability that the particle will be in any given state at time infinity. The limiting distribution follows this equation:
$$ \pi \textbf{P} = \pi $$
where $\pi$ is the limiting distribution and $\textbf{P}$ is the transition probability matrix we constructed. Notice that $\pi$ is a 64-dimensional vector in our case.
We can use these limiting distributions! If we rank teams by their limiting distribution probabilities, we should be able to see which teams will be the most likely to win the tournament.
The other equation of importance is
$$ \pi_1 + ... + \pi_{64} = 1$$
where $\pi = \langle \pi_1, \pi_2, ..., \pi_{64} \rangle$
which makes sense, since the particle must be in $\textit{some}$ state at time infinity (Note: $\pi_i$ is the probability that the particle will be in state i at time infinity).
So now we have 64 equations to solve for 64 unknowns (the $\pi_i$).
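A minimal sketch of that solve, reusing the two-state chain from earlier: drop one redundant equation from $(\textbf{P}^T - I)\pi^T = 0$ and replace it with the normalization constraint.

```python
import numpy as np

P = np.array([[0.4, 0.6],
              [0.7, 0.3]])

# pi P = pi  <=>  (P^T - I) pi^T = 0. The system is rank-deficient, so
# we overwrite the last equation with pi_1 + pi_2 = 1.
A = P.T - np.eye(2)
A[-1, :] = 1.0
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)

assert np.allclose(pi @ P, pi)  # pi really is the limiting distribution
print(pi)  # -> [7/13, 6/13], roughly [0.538 0.462]
```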
End of explanation
"""
reg_15_16 = pd.read_csv('2015_2016 Regular Season Stats.csv')
reg_15_16.head()
"""
Explanation: Now the interesting part!!! We get to apply this to new data sets. Because I'm still salty about how poorly my bracket did this year (my beloved MSU Spartans fell in the first round...) let's take a look and see whether this rating scheme is good for the 2016 March Madness bracket. First, we need to load a new data set.
End of explanation
"""
#teams = the 64 teams in the bracket that year. bracket = the associated data.
def predictor(regseasonstats,teams,vars,coefficients):
'''This function takes in multiple different constraints and outputs the teams most likely to win the NCAA tournament and
their probabilities of winning. Inputs:
regseasonstats = uploaded CSV file containing statistics for all teams as a Pandas Dataframe
teams = a list of the numerical indices associated with the 64 teams in the NCAA bracket that year
vars = the integer column indices of the variables to use in the regression
coefficients = the associated coefficients for each variable.'''
# Rename the columns to simple integer labels 0..33
regseasonstats.columns = list(range(34))
bracket = regseasonstats.iloc[teams,:]
mat = np.zeros((64,32))
for j in range (0,64,1):
for i in range(3,34,1):
val = float(bracket.iat[j,i])/float(bracket.iat[j,2])
mat[j,i-3]=val
xmat = []
for i in vars:
xmat.append(mat[:,i])
x_mat = np.asarray(xmat)
npresult = np.asarray(coefficients)
dot = np.dot(np.transpose(npresult), x_mat)
dotadjusted = np.zeros((1,64))
for i in range(0,64,1):
if dot[0,i] < 0:
dotadjusted[0,i] = 1
else:
dotadjusted[0,i] = dot[0,i]
#Making the Markov transition matrix
brac2015 = np.zeros((64,64))
def brac(i):
a=0
for j in range(0,64,1):
a = a + dotadjusted[0,i]/(dotadjusted[0,i]+dotadjusted[0,j])
return 1/(64*.9921875)*a  # row sum before normalizing is 63.5: the j == i term above adds 1/2
for i in range(0,64,1):
for j in range(0,64,1):
if i != j:
brac2015[i,j] = 1/(64*.9921875) * dotadjusted[0,j]/(dotadjusted[0,i] + dotadjusted[0,j])
if i == j:
brac2015[i,i] = brac(i)
brac2015transpose = np.transpose(brac2015)
brac2015eq = np.zeros((64,64))
for i in range(0,63,1):
for j in range(0,63,1):
if i == j:
brac2015eq[i,j] = brac2015transpose[i,i] - 1
if i != j:
brac2015eq[i,j] = brac2015transpose[i,j]
for i in range(0,64,1):
brac2015eq[63,i] = 1
b = np.zeros((64,1))
b[63,0] = 1
a = np.zeros((64,1))
mat1 = []
mat2 = []
for i in range(0,64,1):
cat = np.linalg.solve(brac2015eq,b)[i,0]
mat1.append(cat)
mat2.append(bracket.iat[i,1])
teamname = pd.Series(mat2)
probability = pd.Series(mat1)
predictions = pd.DataFrame({ 'Team Name' : teamname,
'Steady State Probability' : probability})
finalpredictions = predictions.sort_values(by = 'Steady State Probability')
return(finalpredictions[48:64])
#Row indices in reg_15_16 of the 64 teams in the 2016 NCAA bracket
teams2016 = [12,16,20,22,35,36,38,49,51,58,61,67,75,90,94,104,107,108,111,114,126,128,129,130,135,139,162,170,172
,173,174,203,207,209,218,222,226,230,231,236,242,243,256,269,276,281,290,292,293,294,299,300,305,320,
321,328,329,330,336,337,342,345,349,350]
#Predict the 2016 bracket using the regression coefficients fit earlier in the notebook
predictor(reg_15_16,teams2016,coeffs,result[0])
"""
Explanation: Luckily for me I don't have to create rankings for this set; I can just plug in the regular season stats of the 64 teams in the bracket and see what the program predicts. So let's do that!!!
End of explanation
"""
|
kdestasio/online_brain_intensive | nipype_tutorial/notebooks/basic_mapnodes.ipynb | gpl-2.0 | from nipype import Function
def square_func(x):
return x ** 2
square = Function(["x"], ["f_x"], square_func)
"""
Explanation: <img src="../static/images/mapnode.png" width="300">
MapNode
If you want to iterate over a list of inputs, but need to feed all iterated outputs afterwards as one input (an array) to the next node, you need to use a MapNode. A MapNode is quite similar to a normal Node, but it can take a list of inputs and operate over each input separately, ultimately returning a list of outputs. (The main homepage has a nice section about MapNode and iterables if you want to learn more).
Let's demonstrate this with a simple function interface:
End of explanation
"""
square.run(x=2).outputs.f_x
"""
Explanation: We see that this function just takes a numeric input and returns its squared value.
End of explanation
"""
from nipype import MapNode
square_node = MapNode(square, name="square", iterfield=["x"])
square_node.inputs.x = [0, 1, 2, 3]
square_node.run().outputs.f_x
"""
Explanation: What if we wanted to square a list of numbers? We could set an iterable and just split up the workflow in multiple sub-workflows. But say we were making a simple workflow that squared a list of numbers and then summed them. The sum node would expect a list, but using an iterable would make a bunch of sum nodes, and each would get one number from the list. The solution here is to use a MapNode.
The MapNode constructor has a field called iterfield, which tells it which inputs should expect a list.
End of explanation
"""
def power_func(x, y):
return x ** y
power = Function(["x", "y"], ["f_xy"], power_func)
power_node = MapNode(power, name="power", iterfield=["x", "y"])
power_node.inputs.x = [0, 1, 2, 3]
power_node.inputs.y = [0, 1, 2, 3]
print(power_node.run().outputs.f_xy)
"""
Explanation: Because iterfield can take a list of names, you can operate over multiple sets of data, as long as they're the same length. The values in each list will be paired; it does not compute a combinatorial product of the lists.
End of explanation
"""
power_node = MapNode(power, name="power", iterfield=["x"])
power_node.inputs.x = [0, 1, 2, 3]
power_node.inputs.y = 3
print(power_node.run().outputs.f_xy)
"""
Explanation: But not every input needs to be an iterfield.
End of explanation
"""
from nipype.algorithms.misc import Gunzip
from nipype.interfaces.spm import Realign
from nipype.pipeline.engine import Node, MapNode, Workflow
files = ['/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz',
'/data/ds000114/sub-02/ses-test/func/sub-02_ses-test_task-fingerfootlips_bold.nii.gz']
realign = Node(Realign(register_to_mean=True),
name='motion_correction')
"""
Explanation: As in the case of iterables, each underlying MapNode execution can happen in parallel. Hopefully, you see how these tools allow you to write flexible, reusable workflows that will help you process large amounts of data efficiently and reproducibly.
Why is this important?
Let's consider we have multiple functional images (A) and each of them should be motion corrected (B1, B2, B3, ...). But afterwards, we want to put them all together into a GLM, i.e. the input for the GLM should be an array of [B1, B2, B3, ...]. Iterables can't do that. They would split up the pipeline. Therefore, we need MapNodes.
<img src="../static/images/mapnode.png" width="300">
Let's look at a simple example, where we want to motion correct two functional images. For this we need two nodes:
- Gunzip, to unzip the files (plural)
- Realign, to do the motion correction
End of explanation
"""
gunzip = Node(Gunzip(), name='gunzip',)
gunzip.inputs.in_file = files
"""
Explanation: If we try to specify the input for the Gunzip node with a simple Node, we get the following error:
End of explanation
"""
gunzip = MapNode(Gunzip(), name='gunzip',
iterfield=['in_file'])
gunzip.inputs.in_file = files
"""
Explanation: bash
TraitError: The 'in_file' trait of a GunzipInputSpec instance must be an existing file name, but a value of ['/data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz', '/data/ds102/sub-01/func/sub-01_task-flanker_run-2_bold.nii.gz'] <type 'list'> was specified.
But if we do it with a MapNode, it works:
End of explanation
"""
mcflow = Workflow(name='realign_with_spm')
mcflow.connect(gunzip, 'out_file', realign, 'in_files')
mcflow.base_dir = '/output'
mcflow.run('MultiProc', plugin_args={'n_procs': 4})
"""
Explanation: Now, we just have to create a workflow, connect the nodes and we can run it:
End of explanation
"""
|
waynegm/OpendTect-Plugins | python_bindings/Examples/wmodpy_demo.ipynb | gpl-3.0 | import sys
sys.path.insert(0,'/opt/seismic/OpendTect_6/6.6.0/bin/lux64/Release')
"""
Explanation: OpendTect Python Bindings
Release 6.6.7 of the wmPlugins suite includes experimental Python bindings to OpendTect. There are a number of limitations to be aware of:
- Currently the bindings only provide access to information about Surveys, Wells and Horizons
- The bindings have been built against Python 3.7 and may/may not work with other versions of Python
- The bindings can't be used with a Python IDE that depends on a Qt version different from the one OpendTect uses (5.15.1). This rules out the Spyder IDE in the OpendTect Python environments, which use Qt 5.9.6
- Visual Studio Code, Jupyter Lab or Notebooks work well
Installation
The recommended procedure for installation is to use the OpendTect installation manager to install the wmPlugins. This will install the library containing the bindings in the OpendTect bin/"Platform"/Release folder where "Platform" will either be win64 for Windows or lux64 for Linux.
You also need a Python environment with at least Numpy. Additional output options exist if your environment has Pandas or Shapely and Geopandas installed. The OpendTect Python environments don't include Shapely or Geopandas, so your options are to use conda to install those dependencies or set up your own custom Python environment if you want the extra functionality.
Setting the PYTHONPATH and Python Environment
The folder containing the bindings library must be on the PYTHONPATH to be found by an import statement in a Python script. You can use the OpendTect Python Settings dialog (accessible from the Utilities|Installation|Python Settings application menu) and add the location of the OpendTect executable files to the Custom Module Path. This dialog is also where you can select a custom Python environment to use and also add an icon to the OpendTect toolbar to start your chosen IDE from the specified Custom Python environment with the custom module path added to the PYTHONPATH.
Alternatively, the PYTHONPATH can be modified in the script or a notebook cell before the actual import.
End of explanation
"""
import wmodpy
help(wmodpy.get_surveys)
"""
Explanation: wmodpy Module
The module contains a function (get_surveys) that lists the OpendTect surveys in a user supplied base data folder and classes to access:
- Survey information (Survey class)
- Well data (Wells class)
- 2D and 3D horizons (Horizons2D and Horizons3D classes)
If you are working in a Jupyter notebook or Python command shell you can use the help() function to get more information about a class or function:
End of explanation
"""
wmodpy.get_surveys("/mnt/Data/seismic/CooperBasin/ODData")
"""
Explanation: wmodpy.get_surveys()
Give it the location of a base data folder and you get back a python list of the contained projects/surveys.
End of explanation
"""
help(wmodpy.Survey)
f3demo = wmodpy.Survey('/mnt/Data/seismic/ODData', 'F3_Demo_2020')
f3demo.name()
f3demo.has3d()
f3demo.has2d()
f3demo.epsg()
"""
Explanation: wmodpy.Survey Class
This class provides some basic information about an OpendTect project/survey. Creating a Survey object requires both the base data folder location and the project/survey name. The other data specific classes require a Survey object for context.
End of explanation
"""
penobscot = wmodpy.Survey('/mnt/Data/seismic/ODData', 'Penobscot')
penobscot.epsg()
f3demo.epsg()
"""
Explanation: It's easy to work with multiple projects/surveys at the same time:
End of explanation
"""
f3wells = wmodpy.Wells(f3demo)
f3wells.names()
"""
Explanation: wmodpy.Wells Class
This class provides access to well data within an OpendTect project/survey. The class constructor requires a Survey object.
The names() method provides a list of well names in the project/survey.
End of explanation
"""
f3wells.info()
wmodpy.Wells(penobscot).names()
wmodpy.Wells(penobscot).info()
"""
Explanation: General well information is available as a python dictionary.
End of explanation
"""
f3wells.info_df()
"""
Explanation: If your Python environment includes Pandas there is a function with the same name suffixed by "_df" which will return the same information directly in a Pandas DataFrame
End of explanation
"""
f3wells.log_info_df('F02-1')
wmodpy.Wells(penobscot).log_info_df('L-30')
"""
Explanation: If your environment includes GeoPandas there is another function of the same name suffixed by "_gdf" which returns a GeoDataFrame with the well surface coordinates as Point geometries. The crs of the GeoDataFrame is set to the EPSG code of the survey object.
Likewise well log information for each well is also available as a python dictionary or as a Pandas dataframe
End of explanation
"""
f3wells.markers_df('F03-4')
"""
Explanation: And marker information for each well is also available as a python dictionary or Pandas dataframe depending on the function used.
End of explanation
"""
f3wells.track_df('F03-4')
hor3d = wmodpy.Horizons3D(f3demo)
hor3d.names()
wmodpy.Horizons2D(f3demo).names()
"""
Explanation: Also the well track is available as either a python dictionary or Pandas dataframe depending on the function used.
End of explanation
"""
|
modin-project/modin | examples/tutorial/jupyter/execution/pandas_on_ray/local/exercise_2.ipynb | apache-2.0 | import modin.pandas as pd
import pandas
import time
from IPython.display import Markdown, display
def printmd(string):
display(Markdown(string))
"""
Explanation: <center><h2>Scale your pandas workflows by changing one line of code</h2>
Exercise 2: Speed improvements
GOAL: Learn about common functionality that Modin speeds up by using all of your machine's cores.
Concept for Exercise: read_csv speedups
The most commonly used data ingestion method in pandas is CSV files (link to pandas survey). This concept is designed to give an idea of the kinds of speedups possible, even on a non-distributed filesystem. Modin also supports other file formats for parallel and distributed reads, which can be found in the documentation.
We will import both Modin and pandas so that the speedups are evident.
Note: Rerunning the read_csv cells many times may result in degraded performance, depending on the memory of the machine
End of explanation
"""
path = "s3://dask-data/nyc-taxi/2015/yellow_tripdata_2015-01.csv"
"""
Explanation: Dataset: 2015 NYC taxi trip data
We will be using a version of this data already in S3, originally posted in this blog post: https://matthewrocklin.com/blog/work/2017/01/12/dask-dataframes
Size: ~1.8GB
End of explanation
"""
# [Optional] Download data locally. This may take a few minutes to download.
# import urllib.request
# url_path = "https://dask-data.s3.amazonaws.com/nyc-taxi/2015/yellow_tripdata_2015-01.csv"
# urllib.request.urlretrieve(url_path, "taxi.csv")
# path = "taxi.csv"
"""
Explanation: Optional: Note that the dataset takes a while to download. To speed things up a bit, if you prefer to download this file once locally, you can run the following code in the notebook:
End of explanation
"""
start = time.time()
pandas_df = pandas.read_csv(path, parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"], quoting=3)
end = time.time()
pandas_duration = end - start
print("Time to read with pandas: {} seconds".format(round(pandas_duration, 3)))
"""
Explanation: pandas.read_csv
End of explanation
"""
start = time.time()
modin_df = pd.read_csv(path, parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"], quoting=3)
end = time.time()
modin_duration = end - start
print("Time to read with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("### Modin is {}x faster than pandas at `read_csv`!".format(round(pandas_duration / modin_duration, 2)))
"""
Explanation: Expect pandas to take >3 minutes on EC2, longer locally
This is a good time to chat with your neighbor
Discussion topics
- Do you work with a large amount of data daily?
- How big is your data?
- What’s the common use case of your data?
- Do you use any big data analytics tools?
- Do you use any interactive analytics tool?
- What are some drawbacks of your current interactive analytics tools today?
modin.pandas.read_csv
End of explanation
"""
pandas_df
modin_df
"""
Explanation: Are they equal?
End of explanation
"""
start = time.time()
pandas_count = pandas_df.count()
end = time.time()
pandas_duration = end - start
print("Time to count with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
modin_count = modin_df.count()
end = time.time()
modin_duration = end - start
print("Time to count with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("### Modin is {}x faster than pandas at `count`!".format(round(pandas_duration / modin_duration, 2)))
"""
Explanation: Concept for exercise: Reduces
In pandas, a reduce would be something along the lines of a sum or count. It computes some summary statistics about the rows or columns. We will be using count.
End of explanation
"""
pandas_count
modin_count
"""
Explanation: Are they equal?
End of explanation
"""
start = time.time()
pandas_isnull = pandas_df.isnull()
end = time.time()
pandas_duration = end - start
print("Time to isnull with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
modin_isnull = modin_df.isnull()
end = time.time()
modin_duration = end - start
print("Time to isnull with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("### Modin is {}x faster than pandas at `isnull`!".format(round(pandas_duration / modin_duration, 2)))
"""
Explanation: Concept for exercise: Map operations
In pandas, map operations are operations that do a single pass over the data and do not change its shape. Operations like isnull and applymap are included in this. We will be using isnull.
End of explanation
"""
pandas_isnull
modin_isnull
"""
Explanation: Are they equal?
End of explanation
"""
start = time.time()
rounded_trip_distance_pandas = pandas_df["trip_distance"].apply(round)
end = time.time()
pandas_duration = end - start
print("Time to apply with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
rounded_trip_distance_modin = modin_df["trip_distance"].apply(round)
end = time.time()
modin_duration = end - start
print("Time to apply with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("### Modin is {}x faster than pandas at `apply` on one column!".format(round(pandas_duration / modin_duration, 2)))
"""
Explanation: Concept for exercise: Apply over a single column
Sometimes we want to compute some summary statistics on a single column from our dataset.
End of explanation
"""
rounded_trip_distance_pandas
rounded_trip_distance_modin
"""
Explanation: Are they equal?
End of explanation
"""
start = time.time()
pandas_df["rounded_trip_distance"] = rounded_trip_distance_pandas
end = time.time()
pandas_duration = end - start
print("Time to add a column with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
modin_df["rounded_trip_distance"] = rounded_trip_distance_modin
end = time.time()
modin_duration = end - start
print("Time to add a column with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("### Modin is {}x faster than pandas at adding a column!".format(round(pandas_duration / modin_duration, 2)))
"""
Explanation: Concept for exercise: Add a column
It is common to need to add a new column to an existing dataframe, here we show that this is significantly faster in Modin due to metadata management and an efficient zero copy implementation.
End of explanation
"""
pandas_df
modin_df
"""
Explanation: Are they equal?
End of explanation
"""
|
freedomtan/tensorflow | tensorflow/lite/micro/examples/hello_world/train/train_hello_world_model.ipynb | apache-2.0 | # Define paths to model files
import os
MODELS_DIR = 'models/'
if not os.path.exists(MODELS_DIR):
os.mkdir(MODELS_DIR)
MODEL_TF = MODELS_DIR + 'model'
MODEL_NO_QUANT_TFLITE = MODELS_DIR + 'model_no_quant.tflite'
MODEL_TFLITE = MODELS_DIR + 'model.tflite'
MODEL_TFLITE_MICRO = MODELS_DIR + 'model.cc'
"""
Explanation: Train a Simple TensorFlow Lite for Microcontrollers model
This notebook demonstrates the process of training a 2.5 kB model using TensorFlow and converting it for use with TensorFlow Lite for Microcontrollers.
Deep learning networks learn to model patterns in underlying data. Here, we're going to train a network to model data generated by a sine function. This will result in a model that can take a value, x, and predict its sine, y.
The model created in this notebook is used in the hello_world example for TensorFlow Lite for MicroControllers.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/train/train_hello_world_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/train/train_hello_world_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Configure Defaults
End of explanation
"""
! pip install tensorflow==2.4.0rc0
"""
Explanation: Setup Environment
Install Dependencies
End of explanation
"""
# TensorFlow is an open source machine learning library
import tensorflow as tf
# Keras is TensorFlow's high-level API for deep learning
from tensorflow import keras
# Numpy is a math library
import numpy as np
# Pandas is a data manipulation library
import pandas as pd
# Matplotlib is a graphing library
import matplotlib.pyplot as plt
# Math is Python's math library
import math
# Set seed for experiment reproducibility
seed = 1
np.random.seed(seed)
tf.random.set_seed(seed)
"""
Explanation: Import Dependencies
End of explanation
"""
# Number of sample datapoints
SAMPLES = 1000
# Generate a uniformly distributed set of random numbers in the range from
# 0 to 2π, which covers a complete sine wave oscillation
x_values = np.random.uniform(
low=0, high=2*math.pi, size=SAMPLES).astype(np.float32)
# Shuffle the values to guarantee they're not in order
np.random.shuffle(x_values)
# Calculate the corresponding sine values
y_values = np.sin(x_values).astype(np.float32)
# Plot our data. The 'b.' argument tells the library to print blue dots.
plt.plot(x_values, y_values, 'b.')
plt.show()
"""
Explanation: Dataset
1. Generate Data
The code in the following cell will generate a set of random x values, calculate their sine values, and display them on a graph.
End of explanation
"""
# Add a small random number to each y value
y_values += 0.1 * np.random.randn(*y_values.shape)
# Plot our data
plt.plot(x_values, y_values, 'b.')
plt.show()
"""
Explanation: 2. Add Noise
Since it was generated directly by the sine function, our data fits a nice, smooth curve.
However, machine learning models are good at extracting underlying meaning from messy, real world data. To demonstrate this, we can add some noise to our data to approximate something more life-like.
In the following cell, we'll add some random noise to each value, then draw a new graph:
End of explanation
"""
# We'll use 60% of our data for training and 20% for testing. The remaining 20%
# will be used for validation. Calculate the indices of each section.
TRAIN_SPLIT = int(0.6 * SAMPLES)
TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)
# Use np.split to chop our data into three parts.
# The second argument to np.split is an array of indices where the data will be
# split. We provide two indices, so the data will be divided into three chunks.
x_train, x_test, x_validate = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])
y_train, y_test, y_validate = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])
# Double check that our splits add up correctly
assert (x_train.size + x_validate.size + x_test.size) == SAMPLES
# Plot the data in each partition in different colors:
plt.plot(x_train, y_train, 'b.', label="Train")
plt.plot(x_test, y_test, 'r.', label="Test")
plt.plot(x_validate, y_validate, 'y.', label="Validate")
plt.legend()
plt.show()
"""
Explanation: 3. Split the Data
We now have a noisy dataset that approximates real world data. We'll be using this to train our model.
To evaluate the accuracy of the model we train, we'll need to compare its predictions to real data and check how well they match up. This evaluation happens during training (where it is referred to as validation) and after training (referred to as testing). It's important in both cases that we use fresh data that was not already used to train the model.
The data is split as follows:
1. Training: 60%
2. Validation: 20%
3. Testing: 20%
The following code will split our data and then plots each set as a different color:
End of explanation
"""
# We'll use Keras to create a simple model architecture
model_1 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 8 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_1.add(keras.layers.Dense(8, activation='relu', input_shape=(1,)))
# Final layer is a single neuron, since we want to output a single value
model_1.add(keras.layers.Dense(1))
# Compile the model using the standard 'adam' optimizer and the mean squared error or 'mse' loss function for regression.
model_1.compile(optimizer='adam', loss='mse', metrics=['mae'])
"""
Explanation: Training
1. Design the Model
We're going to build a simple neural network model that will take an input value (in this case, x) and use it to predict a numeric output value (the sine of x). This type of problem is called a regression. It will use layers of neurons to attempt to learn any patterns underlying the training data, so it can make predictions.
To begin with, we'll define two layers. The first layer takes a single input (our x value) and runs it through 8 neurons. Based on this input, each neuron will become activated to a certain degree based on its internal state (its weight and bias values). A neuron's degree of activation is expressed as a number.
The activation numbers from our first layer will be fed as inputs to our second layer, which is a single neuron. It will apply its own weights and bias to these inputs and calculate its own activation, which will be output as our y value.
Note: To learn more about how neural networks function, you can explore the Learn TensorFlow codelabs.
The code in the following cell defines our model using Keras, TensorFlow's high-level API for creating deep learning networks. Once the network is defined, we compile it, specifying parameters that determine how it will be trained:
End of explanation
"""
# Train the model on our training data while validating on our validation set
history_1 = model_1.fit(x_train, y_train, epochs=500, batch_size=64,
validation_data=(x_validate, y_validate))
"""
Explanation: 2. Train the Model
Once we've defined the model, we can use our data to train it. Training involves passing an x value into the neural network, checking how far the network's output deviates from the expected y value, and adjusting the neurons' weights and biases so that the output is more likely to be correct the next time.
Training runs this process on the full dataset multiple times, and each full run-through is known as an epoch. The number of epochs to run during training is a parameter we can set.
During each epoch, data is run through the network in multiple batches. In each batch, several pieces of data are passed into the network, producing output values. These outputs' correctness is measured in aggregate and the network's weights and biases are adjusted accordingly, once per batch. The batch size is also a parameter we can set.
The code in the following cell uses the x and y values from our training data to train the model. It runs for 500 epochs, with 64 pieces of data in each batch. We also pass in some data for validation. As you will see when you run the cell, training can take a while to complete:
End of explanation
"""
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
train_loss = history_1.history['loss']
val_loss = history_1.history['val_loss']
epochs = range(1, len(train_loss) + 1)
plt.plot(epochs, train_loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
"""
Explanation: 3. Plot Metrics
1. Loss (or Mean Squared Error)
During training, the model's performance is constantly being measured against both our training data and the validation data that we set aside earlier. Training produces a log of data that tells us how the model's performance changed over the course of the training process.
The following cells will display some of that data in a graphical form:
End of explanation
"""
# Exclude the first few epochs so the graph is easier to read
SKIP = 50
plt.plot(epochs[SKIP:], train_loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
"""
Explanation: The graph shows the loss (or the difference between the model's predictions and the actual data) for each epoch. There are several ways to calculate loss, and the method we have used is mean squared error. There is a distinct loss value given for the training and the validation data.
As we can see, the amount of loss rapidly decreases over the first 25 epochs, before flattening out. This means that the model is improving and producing more accurate predictions!
Our goal is to stop training when either the model is no longer improving, or when the training loss is less than the validation loss, which would mean that the model has learned to predict the training data so well that it can no longer generalize to new data.
To make the flatter part of the graph more readable, let's skip the first 50 epochs:
End of explanation
"""
plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
train_mae = history_1.history['mae']
val_mae = history_1.history['val_mae']
plt.plot(epochs[SKIP:], train_mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
"""
Explanation: From the plot, we can see that loss continues to reduce until around 200 epochs, at which point it is mostly stable. This means that there's no need to train our network beyond 200 epochs.
However, we can also see that the lowest loss value is still around 0.155. This means that our network's predictions are off by an average of ~15%. In addition, the validation loss values jump around a lot and are sometimes even higher than the training loss.
2. Mean Absolute Error
To gain more insight into our model's performance we can plot some more data. This time, we'll plot the mean absolute error, which is another way of measuring how far the network's predictions are from the actual numbers:
End of explanation
"""
# Calculate and print the loss on our test dataset
test_loss, test_mae = model_1.evaluate(x_test, y_test)
# Make predictions based on our test dataset
y_test_pred = model_1.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual values')
plt.plot(x_test, y_test_pred, 'r.', label='TF predictions')
plt.legend()
plt.show()
"""
Explanation: This graph of mean absolute error tells another story. We can see that training data shows consistently lower error than validation data, which means that the network may have overfit, or learned the training data so rigidly that it can't make effective predictions about new data.
In addition, the mean absolute error values are quite high, ~0.305 at best, which means the model's predictions are off by roughly 30% on average. A 30% error means we are very far from accurately modelling the sine wave function.
3. Actual vs Predicted Outputs
To get more insight into what is happening, let's check its predictions against the test dataset we set aside earlier:
End of explanation
"""
model = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model.add(keras.layers.Dense(16, activation='relu', input_shape=(1,)))
# The new second and third layer will help the network learn more complex representations
model.add(keras.layers.Dense(16, activation='relu'))
# Final layer is a single neuron, since we want to output a single value
model.add(keras.layers.Dense(1))
# Compile the model using the standard 'adam' optimizer and the mean squared error or 'mse' loss function for regression.
model.compile(optimizer='adam', loss="mse", metrics=["mae"])
"""
Explanation: Oh dear! The graph makes it clear that our network has learned to approximate the sine function in a very limited way.
The rigidity of this fit suggests that the model does not have enough capacity to learn the full complexity of the sine wave function, so it's only able to approximate it in an overly simplistic way. By making our model bigger, we should be able to improve its performance.
Training a Larger Model
1. Design the Model
To make our model bigger, let's add an additional layer of neurons. The following cell redefines our model in the same way as earlier, but with 16 neurons in the first layer and an additional layer of 16 neurons in the middle:
End of explanation
"""
# Train the model
history = model.fit(x_train, y_train, epochs=500, batch_size=64,
validation_data=(x_validate, y_validate))
# Save the model to disk
model.save(MODEL_TF)
"""
Explanation: 2. Train the Model
We'll now train and save the new model.
End of explanation
"""
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
train_loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(train_loss) + 1)
# Exclude the first few epochs so the graph is easier to read
SKIP = 100
plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs[SKIP:], train_loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(1, 2, 2)
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
train_mae = history.history['mae']
val_mae = history.history['val_mae']
plt.plot(epochs[SKIP:], train_mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.tight_layout()
"""
Explanation: 3. Plot Metrics
At the end of each training epoch, the model prints out its loss and mean absolute error for both training and validation. You can read this in the output above (note that your exact numbers may differ):
Epoch 500/500
10/10 [==============================] - 0s 10ms/step - loss: 0.0121 - mae: 0.0882 - val_loss: 0.0115 - val_mae: 0.0865
You can see that we've already got a huge improvement - validation loss has dropped from 0.15 to 0.01, and validation MAE has dropped from 0.33 to 0.08.
The following cell will print the same graphs we used to evaluate our original model, but showing our new training history:
End of explanation
"""
# Calculate and print the loss on our test dataset
test_loss, test_mae = model.evaluate(x_test, y_test)
# Make predictions based on our test dataset
y_test_pred = model.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual values')
plt.plot(x_test, y_test_pred, 'r.', label='TF predicted')
plt.legend()
plt.show()
"""
Explanation: Great results! From these graphs, we can see several exciting things:
The overall loss and MAE are much better than our previous network
Metrics are better for validation than training, which means the network is not overfitting
The reason the metrics for validation are better than those for training is that validation metrics are calculated at the end of each epoch, while training metrics are calculated throughout the epoch, so validation happens on a model that has been trained slightly longer.
This all means our network seems to be performing well! To confirm, let's check its predictions against the test dataset we set aside earlier:
End of explanation
"""
# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_TF)
model_no_quant_tflite = converter.convert()
# Save the model to disk
with open(MODEL_NO_QUANT_TFLITE, "wb") as f:
    f.write(model_no_quant_tflite)
# Convert the model to the TensorFlow Lite format with quantization
def representative_dataset():
for i in range(500):
yield([x_train[i].reshape(1, 1)])
# Set the optimization flag.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Enforce integer only quantization
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
# Provide a representative dataset to ensure we quantize correctly.
converter.representative_dataset = representative_dataset
model_tflite = converter.convert()
# Save the model to disk
with open(MODEL_TFLITE, "wb") as f:
    f.write(model_tflite)
"""
Explanation: Much better! The evaluation metrics we printed show that the model has a low loss and MAE on the test data, and the predictions line up visually with our data fairly well.
The model isn't perfect; its predictions don't form a smooth sine curve. For instance, the line is almost straight when x is between 4.2 and 5.2. If we wanted to go further, we could try increasing the capacity of the model, perhaps using some techniques to defend against overfitting.
However, an important part of machine learning is knowing when to stop. This model is good enough for our use case - which is to make some LEDs blink in a pleasing pattern.
Generate a TensorFlow Lite Model
1. Generate Models with or without Quantization
We now have an acceptably accurate model. We'll use the TensorFlow Lite Converter to convert the model into a special, space-efficient format for use on memory-constrained devices.
Since this model is going to be deployed on a microcontroller, we want it to be as tiny as possible! One technique for reducing the size of a model is called quantization. It reduces the precision of the model's weights, and possibly the activations (output of each layer) as well, which saves memory, often without much impact on accuracy. Quantized models also run faster, since the calculations required are simpler.
In the following cell, we'll convert the model twice: once with quantization, once without.
End of explanation
"""
def predict_tflite(tflite_model, x_test):
# Prepare the test data
x_test_ = x_test.copy()
x_test_ = x_test_.reshape((x_test.size, 1))
x_test_ = x_test_.astype(np.float32)
# Initialize the TFLite interpreter
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
# If required, quantize the input layer (from float to integer)
input_scale, input_zero_point = input_details["quantization"]
if (input_scale, input_zero_point) != (0.0, 0):
x_test_ = x_test_ / input_scale + input_zero_point
x_test_ = x_test_.astype(input_details["dtype"])
# Invoke the interpreter
y_pred = np.empty(x_test_.size, dtype=output_details["dtype"])
for i in range(len(x_test_)):
interpreter.set_tensor(input_details["index"], [x_test_[i]])
interpreter.invoke()
y_pred[i] = interpreter.get_tensor(output_details["index"])[0]
    # If required, dequantize the output layer (from integer to float)
output_scale, output_zero_point = output_details["quantization"]
if (output_scale, output_zero_point) != (0.0, 0):
y_pred = y_pred.astype(np.float32)
y_pred = (y_pred - output_zero_point) * output_scale
return y_pred
def evaluate_tflite(tflite_model, x_test, y_true):
global model
y_pred = predict_tflite(tflite_model, x_test)
loss_function = tf.keras.losses.get(model.loss)
loss = loss_function(y_true, y_pred).numpy()
return loss
"""
Explanation: 2. Compare Model Performance
To prove these models are accurate even after conversion and quantization, we'll compare their predictions and loss on our test dataset.
Helper functions
We define the predict (for predictions) and evaluate (for loss) functions for TFLite models. Note: These are already included in a TF model, but not in a TFLite model.
End of explanation
"""
# Calculate predictions
y_test_pred_tf = model.predict(x_test)
y_test_pred_no_quant_tflite = predict_tflite(model_no_quant_tflite, x_test)
y_test_pred_tflite = predict_tflite(model_tflite, x_test)
# Compare predictions
plt.clf()
plt.title('Comparison of various models against actual values')
plt.plot(x_test, y_test, 'bo', label='Actual values')
plt.plot(x_test, y_test_pred_tf, 'ro', label='TF predictions')
plt.plot(x_test, y_test_pred_no_quant_tflite, 'bx', label='TFLite predictions')
plt.plot(x_test, y_test_pred_tflite, 'gx', label='TFLite quantized predictions')
plt.legend()
plt.show()
"""
Explanation: 1. Predictions
End of explanation
"""
# Calculate loss
loss_tf, _ = model.evaluate(x_test, y_test, verbose=0)
loss_no_quant_tflite = evaluate_tflite(model_no_quant_tflite, x_test, y_test)
loss_tflite = evaluate_tflite(model_tflite, x_test, y_test)
# Compare loss
df = pd.DataFrame.from_records(
[["TensorFlow", loss_tf],
["TensorFlow Lite", loss_no_quant_tflite],
["TensorFlow Lite Quantized", loss_tflite]],
columns = ["Model", "Loss/MSE"], index="Model").round(4)
df
"""
Explanation: 2. Loss (MSE/Mean Squared Error)
End of explanation
"""
# Calculate size
size_tf = os.path.getsize(MODEL_TF)
size_no_quant_tflite = os.path.getsize(MODEL_NO_QUANT_TFLITE)
size_tflite = os.path.getsize(MODEL_TFLITE)
# Compare size
pd.DataFrame.from_records(
[["TensorFlow", f"{size_tf} bytes", ""],
["TensorFlow Lite", f"{size_no_quant_tflite} bytes ", f"(reduced by {size_tf - size_no_quant_tflite} bytes)"],
["TensorFlow Lite Quantized", f"{size_tflite} bytes", f"(reduced by {size_no_quant_tflite - size_tflite} bytes)"]],
columns = ["Model", "Size", ""], index="Model")
"""
Explanation: 3. Size
End of explanation
"""
# Install xxd if it is not available
!apt-get update && apt-get -qq install xxd
# Convert to a C source file, i.e., a TensorFlow Lite for Microcontrollers model
!xxd -i {MODEL_TFLITE} > {MODEL_TFLITE_MICRO}
# Update variable names
REPLACE_TEXT = MODEL_TFLITE.replace('/', '_').replace('.', '_')
!sed -i 's/'{REPLACE_TEXT}'/g_model/g' {MODEL_TFLITE_MICRO}
"""
Explanation: Summary
We can see from the predictions (graph) and loss (table) that the original TF model, the TFLite model, and the quantized TFLite model are all close enough to be indistinguishable - even though they differ in size (table). This implies that the quantized (smallest) model is ready to use!
Note: The quantized (integer) TFLite model is just 300 bytes smaller than the original (float) TFLite model - a tiny reduction in size! This is because the model is already so small that quantization has little effect. Complex models with more weights can have up to a 4x reduction in size!
Generate a TensorFlow Lite for Microcontrollers Model
Convert the TensorFlow Lite quantized model into a C source file that can be loaded by TensorFlow Lite for Microcontrollers.
End of explanation
"""
# Print the C source file
!cat {MODEL_TFLITE_MICRO}
"""
Explanation: Deploy to a Microcontroller
Follow the instructions in the hello_world README.md for TensorFlow Lite for Microcontrollers to deploy this model on a specific microcontroller.
Reference Model: If you have not modified this notebook, you can follow the instructions as is, to deploy the model. Refer to the hello_world/train/models directory to access the models generated in this notebook.
New Model: If you have generated a new model, then update the values assigned to the variables defined in hello_world/model.cc with values displayed after running the following cell.
End of explanation
"""
|
IS-ENES-Data/scripts | Scripts/test1.ipynb | apache-2.0 | result = web.jsonfile_to_dict("/home/stephan/Repos/ENES-EUDAT/cordex/CORDEX_adjust_register.json")
html_out = web.generate_bias_table(result)
HTML(html_out)
"""
Explanation: HTML Bias CV view
showing ["institution", "institute_id", "bc_method", "bc_method_id",
"institute_id"-"bc_method_id", "terms_of_use", "CORDEX_domain",
"reference", "package" ]
End of explanation
"""
html_out = web.generate_bias_table(result)
HTML(html_out)
html_out = web.generate_bias_table_add(result)
HTML(html_out)
"""
Explanation: HTML bias CV view separated in 2 tables
End of explanation
"""
|
dedx/STAR2015 | notebooks/CountingStars.ipynb | mit | %pylab inline
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Counting Stars
Based on the Multimedia Programming lesson at Software Carpentry.
End of explanation
"""
from PIL import Image
import requests
from StringIO import StringIO
#Pick an image from the list above and fetch it with requests.get.
#It's okay to do a copy/paste here.
#The default picture here is of M45 - the Pleiades Star Cluster.
response = requests.get("http://imgsrc.hubblesite.org/hu/db/images/hs-2004-20-a-large_web.jpg")
pic = Image.open(StringIO(response.content))
"""
Explanation: Image I/O
Reading in and manipulating image data can be accomplished with NumPy or with a dedicated image-processing library such as the Python Imaging Library (PIL).
For example, we might want to read in a beautiful image of a star field taken by the Hubble Space Telescope. Have a look at some of these images:
http://imgsrc.hubblesite.org/hu/db/images/hs-2004-20-a-large_web.jpg
http://imgsrc.hubblesite.org/hu/db/images/hs-1993-13-a-large_web.jpg
http://imgsrc.hubblesite.org/hu/db/images/hs-1995-32-c-full_jpg.jpg
http://imgsrc.hubblesite.org/hu/db/images/hs-1993-13-a-large_web.jpg
http://imgsrc.hubblesite.org/hu/db/images/hs-2002-10-c-large_web.jpg
http://imgsrc.hubblesite.org/hu/db/images/hs-1999-30-b-full_jpg.jpg
Recall that color images are just arrays of numbers representing pixel locations and the color depth of each of Red, Green, and Blue. The different file formats (jpg, png, tiff, etc.) also contain additional information (headers, for example) that an image reading function needs to know about. We can use PIL to read image data into NumPy arrays. If we want to fetch a file off the web, we also need some help from the requests and StringIO libraries:
End of explanation
"""
plt.imshow(pic);
"""
Explanation: Plot the image using matplotlib:
End of explanation
"""
pic.format
"""
Explanation: Examine image properties
End of explanation
"""
pic.size
"""
Explanation: Pixel coordinates are (x,y) tuples with (0,0) the upper left corner because that's how old CRT monitors drew things.
End of explanation
"""
#Color tuple (R,G,B) of very first pixel
pic.getpixel((0,0))
"""
Explanation: Colors are represented by RGB triples. Black is (0,0,0), White is (255, 255, 255) or (0xFF, 0xFF, 0xFF) in hexadecimal. Think of it as a color cube with the three axes representing the different possible colors. The furthest away from the origin (black) is white.
End of explanation
"""
xsize, ysize = pic.size
max_val = 0
for x in range(xsize):
for y in range(ysize):
r,g,b = pic.getpixel((x,y))
if r+g+b > max_val:
bx, by, max_val = x, y, r+g+b
print (bx,by), max_val
"""
Explanation: Where is the brightest pixel? "Bright" here means the one with the largest color value, represented as the sum of the RGB values.
End of explanation
"""
def brightest(picture):
"""add up each pixel's color values
to approximate its overall luminance"""
xsize, ysize = picture.size
bx, by, max_val = 0, 0, 0
for x in range(xsize):
for y in range(ysize):
            r,g,b = picture.getpixel((x,y))
if r+g+b > max_val:
bx, by, max_val = x, y, r+g+b
return (bx, by), max_val
"""
Explanation: By comparison, the greatest possible value is 3 * 255, or 765.
Encapsulate this code into a function that could be used on any picture:
End of explanation
"""
from time import time
def elapsed(func, picture):
"""takes a function and a picture as arguments,
applies the function to the picture, and returns
the elapsed time along with whatever the function
itself returned."""
start = time()
result = func(picture)
return time() - start, result
"""
Explanation: How long does it take to find the result for our image? Let's import the time library and use it to find out.
End of explanation
"""
print elapsed(brightest,pic)
"""
Explanation: Now run it with your picture data:
End of explanation
"""
def faster(picture):
"""This function, 'faster', uses
'picture.getdata' to unpack the
row-and-column representation of the
image to create a vector of pixels,
and then loops over that."""
max_val = 0
for (r,g,b) in picture.getdata():
if r+g+b > max_val:
max_val = r + g + b
return max_val
print elapsed(faster,pic)
"""
Explanation: We could process the information faster if we use the PIL getdata function to unpack the row-and-column representation of the image to create a vector of pixels, and then loops over that.
End of explanation
"""
def in_between(picture):
xsize, ysize = picture.size
temp = picture.load()
bx, by, max_val = 0, 0, 0
for x in range(xsize):
for y in range(ysize):
r,g,b = temp[x,y]
if r+g+b > max_val:
bx, by, max_val = x,y,r+g+b
return (bx, by), max_val
print elapsed(in_between,pic)
"""
Explanation: This is faster because the pixels are unpacked into a 1-D array row-by-row. This function is more than nine times faster than its predecessor, partly because we are not translating between (x,y) coordinates and pixel locations in memory over and over again, and partly because the 'getdata' method unpacks the pixels to make them more accessible.
But now we don't have the coordinates of the brightest pixel. We could calculate it from the location in the 1-D array but that is a bit of a pain. A useful alternative is to call 'picture.load', which unpacks the picture's pixels in memory, so that you can index the picture as if it was an array.
End of explanation
"""
#create numpy array of image data
myimg = np.array(pic.getdata(), np.uint8).reshape(pic.size[1], pic.size[0], 3)
#find max pixel with aggregates
def with_numpy(picture):
return picture.sum(axis=2).max()
print elapsed(with_numpy,myimg)
"""
Explanation: Note: If the picture can be read in as a NumPy array, you can use masks and aggregates to make this even faster. There is overhead in creating the NumPy array from the image but once it is an array, the operations are lightning fast because they don't require loops.
End of explanation
"""
def monochrome(picture, threshold):
"""loops over the pixels in the loaded image,
replacing the RGB values of each with either
black or white depending on whether its total
luminance is above or below some threshold
passed in by the user"""
black = (0, 0, 0)
white = (255, 255, 255)
xsize, ysize = picture.size
temp = picture.load()
for x in range(xsize):
for y in range(ysize):
r,g,b = temp[x,y]
if r+g+b >= threshold:
temp[x,y] = black
else:
temp[x,y] = white
"""
Explanation: Which of the forms you should use in a particular situation depends on what information you need from the image, what format they are, and how big the images you're working with are.
Finding Stars
Let's use what we've learned to count stars in our image. Start by converting the image to B/W, so that which pixels belong to stars and which don't is unambiguous. We'll use black for stars and white for background, since it's easier to see black-on-white than the reverse.
End of explanation
"""
#Get another copy to convert to B/W
bwpic = Image.open(StringIO(response.content))
#Remember, this threshold is a scalar, not an RGB triple
#we're looking for pixels whose total color value is 600 or greater
monochrome(bwpic,200+200+200)
plt.imshow(bwpic);
"""
Explanation: Could you do this faster with masks?
End of explanation
"""
BLACK = (0,0,0)
RED = (255,0,0)
def count(picture):
"""scan the image top to bottom and left to right using a nested loop.
when black pixel is found, increment the count, then call the fill
function to fill in all the pixels connected to that one."""
xsize, ysize = picture.size
temp = picture.load()
result = 0
for x in range(xsize):
for y in range(ysize):
if temp[x,y] == BLACK:
result += 1
fill(temp,xsize,ysize,x,y)
return result
"""
Explanation: Now we can start counting stars by counting "blobs" of connected or adjacent black pixels.
Decide what we mean by "adjacent": sharing sides of square pixels but not corners. i.e. directly above, below, left, or right from each other, not if they are touching diagonally.
Count each one just once.
Scan the image left-right, top-bottom
Each time we find a new blob, increment the count
How do we tell whether the pixel we're looking at is part of a new blob or not?
We could mark counted pixels by turning black ones red. (Hmm...sounds a bit like ipythonblocks! In fact, ipythonblocks can also read in images and manipulate them as ImageGrid objects!)
If the pixel we're looking at touches one that has already been turned red, then it's part of a blob we have already counted. We'll turn it red to show that we have looked at it, but we won't count it as a star, since it belongs to a star we've already counted.
<table align="center">
<tr>
<td width="25%"><img src="../img/Slide01.jpg" width=300> Sweep across the field left-to-right, top-to-bottom, until we find the first non-zero pixel.</td>
<td width="25%"><img src="../img/Slide02.jpg" width=300> Once it is found mark it as red.</td>
<td width="25%"><img src="../img/Slide03.jpg" width=300> Look to the left and above to see if it is part of an already counted blob. If not, increment the counter.</td>
<td width="25%"><img src="../img/Slide04.jpg" width=300> Proceed to the right.</td>
</tr>
<tr>
<td width="25%"><img src="../img/Slide05.jpg" width=300> Follow the same procedure as before to decide whether the count should be incremented.</td>
<td width="25%"><img src="../img/Slide06.jpg" width=300> When we reach the next blob, repeat.</td>
<td width="25%"><img src="../img/Slide07.jpg" width=300> Hmm...Maybe we should modify the "already counted" procedure?</td>
<td width="25%"><img src="../img/Slide08.jpg" width=300> We could add the diagonals...</td>
</tr>
<tr>
<td width="25%"><img src="../img/Slide09.jpg" width=300> But that could fail on the this pixel.</td>
<td width="25%"><img src="../img/Slide10.jpg" width=250> And what about a case like this?</td>
<td width="25%"><img src="../img/Slide11.jpg" width=250> Maybe we could try a "flood-fill" algorithm. Sweep across the field until a black pixel is encountered.</td>
<td width="25%"><img src="../img/Slide12.jpg" width=250> Mark it red.</td>
</tr>
<tr>
<td width="25%"><img src="../img/Slide13.jpg" width=250> Check for neighbors on all sides.</td>
<td width="25%"><img src="../img/Slide14.jpg" width=250> Move to the first neighbor and mark it red.</td>
<td width="25%"><img src="../img/Slide15.jpg" width=250> Now check its neighbors and repeat until there are no more unchecked neighbors.
<td width="25%"><img src="../img/Slide16.jpg" width=250> Done.</td>
</tr>
</table>
Let's see how to implement such an algorithm.
First, implement a function to do our counting:
End of explanation
"""
def fill(picture, xsize, ysize, xstart, ystart):
"""keep a list of pixels that need to be looked at,
but haven't yet been filled in - a list of the (x,y)
coordinates of pixels that are neighbors of ones we have
already examined. Keep looping until there's nothing
left in this list"""
queue = [(xstart,ystart)]
#print "queue start:",queue
qcount = 0
while queue:
#print qcount,": ",queue
x,y,queue = queue[0][0], queue[0][1], queue[1:]
if picture[x,y] == BLACK:
picture[x,y] = RED
if x > 0:
queue.append((x-1,y))
if x < (xsize-1):
queue.append((x+1,y))
if y > 0:
queue.append((x, y-1))
if y < (ysize-1):
queue.append((x, y+1))
qcount+=1
count(bwpic)
"""
Explanation: Uh... we don't have a "fill" function yet. What would it look like? Maybe we could set it up as follows:
Keep list of (x,y) coordinates to be examined (the "queue").
Loop until queue is empty:
Take (x,y) coordinates from queue
If black, fill it in and add neighbors to queue
Here's what it might look like:
End of explanation
"""
|
idekerlab/cyrest-examples | notebooks/Realistic workflow/The workflow of Anne/The Python workflow of Anne.ipynb | mit | from py2cytoscape.data.cynetwork import CyNetwork
from py2cytoscape.data.cyrest_client import CyRestClient
from py2cytoscape.data.style import StyleUtil
import py2cytoscape.util.cytoscapejs as cyjs
import py2cytoscape.cytoscapejs as renderer
import networkx as nx
import pandas as pd
import json
# !!!!!!!!!!!!!!!!! Step 0: Start Cytoscape 3 with cyREST App !!!!!!!!!!!!!!!!!!!!!!!!!!
# Step 1: Create py2cytoscape client
cy = CyRestClient()
# Reset
cy.session.delete()
"""
Explanation: The Python workflow of Anne
In this Jupyter notebook, I'll walk through the Python workflow of Anne as a sample.
Reading this will give you the basic workflow, but the py2cytoscape documentation is really useful and covers much more.
The documentation: coming soon
Setup
To execute this notebook, please satisfy the following requirements:
Java SE 8
Cytoscape version 3.3+
CyREST
You can use the Docker file to do this; it provides an environment with the items below.
py2cytoscape
igraph
...
Start a new session
Let's start a new session with py2cytoscape.
End of explanation
"""
# Load network from somewhere
net_from_local1 = cy.network.create_from('sample_yeast_network.xgmml', collection='My Collection')
#
cy.layout.apply(name='degree-circle', network=net_from_local1)
"""
Explanation: Load a network data from file/URL
End of explanation
"""
#TODO
"""
Explanation: Upload a network table, an edge attribute table and a node attribute table.
So far we have used only the network data. Often, however, you will also want to use an edge attribute table and a node attribute table. So first we will import the attribute data, then merge it into the network data.
The following sections show some examples.
Import edge attribute table and merge it.
Import node attribute table and merge it.
Select edges based on some edge attributes
End of explanation
"""
|
stonebig/winpython_afterdoc | docs/maths/kalman_filters.ipynb | mit | # mlab.bivariate_normal is going to be remove from matplotlib
# from matplotlib.mlab import bivariate_normal
import numpy as np
def _bivariate_normal(X, Y, sigmax=1.0, sigmay=1.0,
mux=0.0, muy=0.0, sigmaxy=0.0):
"""
This is the implementation from matplotlib:
https://github.com/matplotlib/matplotlib/blob/81e8154dbba54ac1607b21b22984cabf7a6598fa/lib/matplotlib/mlab.py#L1866
it was deprecated in v2.2 of matplotlib, so we are including it here.
Bivariate Gaussian distribution for equal shape *X*, *Y*.
See `bivariate normal
<http://mathworld.wolfram.com/BivariateNormalDistribution.html>`_
at mathworld.
"""
Xmu = X-mux
Ymu = Y-muy
rho = sigmaxy/(sigmax*sigmay)
z = Xmu**2/sigmax**2 + Ymu**2/sigmay**2 - 2*rho*Xmu*Ymu/(sigmax*sigmay)
denom = 2*np.pi*sigmax*sigmay*np.sqrt(1-rho**2)
return np.exp(-z/(2*(1-rho**2))) / denom
from scipy import linalg
import numpy as np
import matplotlib.cm as cm
import matplotlib.pyplot as plt
%matplotlib inline
# == Set up the Gaussian prior density p == #
Σ = [[0.4, 0.3], [0.3, 0.45]]
Σ = np.matrix(Σ)
x_hat = np.matrix([0.2, -0.2]).T
# == Define the matrices G and R from the equation y = G x + N(0, R) == #
G = [[1, 0], [0, 1]]
G = np.matrix(G)
R = 0.5 * Σ
# == The matrices A and Q == #
A = [[1.2, 0], [0, -0.2]]
A = np.matrix(A)
Q = 0.3 * Σ
# == The observed value of y == #
y = np.matrix([2.3, -1.9]).T
# == Set up grid for plotting == #
x_grid = np.linspace(-1.5, 2.9, 100)
y_grid = np.linspace(-3.1, 1.7, 100)
X, Y = np.meshgrid(x_grid, y_grid)
def gen_gaussian_plot_vals(μ, C):
"Z values for plotting the bivariate Gaussian N(μ, C)"
m_x, m_y = float(μ[0]), float(μ[1])
s_x, s_y = np.sqrt(C[0, 0]), np.sqrt(C[1, 1])
s_xy = C[0, 1]
return _bivariate_normal(X, Y, s_x, s_y, m_x, m_y, s_xy)
# Plot the figure
fig, ax = plt.subplots(figsize=(10, 8))
ax.grid()
Z = gen_gaussian_plot_vals(x_hat, Σ)
ax.contourf(X, Y, Z, 6, alpha=0.6, cmap=cm.jet)
cs = ax.contour(X, Y, Z, 6, colors="black")
ax.clabel(cs, inline=1, fontsize=10)
plt.show()
"""
Explanation: A First Look at the Kalman Filter
(2019-03-30, by Haoxuan Zou, https://github.com/zouhx11)
<a id='index-0'></a>
Contents
A First Look at the Kalman Filter
Overview
The Basic Idea
Convergence
Implementation
Exercises
Solutions
Overview
This lecture provides a simple and intuitive introduction to the Kalman filter, for those who either
have heard of the Kalman filter but don’t know how it works, or
know the Kalman filter equations, but don’t know where they come from
For additional (more advanced) reading on the Kalman filter, see
[LS18], section 2.7
[AM05]
The second reference presents a comprehensive treatment of the Kalman filter
Required knowledge: Familiarity with matrix manipulations, multivariate normal distributions, covariance matrices, etc.
The Basic Idea
The Kalman filter has many applications in economics, but for now
let’s pretend that we are rocket scientists
A missile has been launched from country Y and our mission is to track it
Let $ x \in \mathbb{R}^2 $ denote the current location of the missile—a
pair indicating latitude-longitute coordinates on a map
At the present moment in time, the precise location $ x $ is unknown, but
we do have some beliefs about $ x $
One way to summarize our knowledge is a point prediction $ \hat x $
But what if the President wants to know the probability that the missile is currently over the Sea of Japan?
Then it is better to summarize our initial beliefs with a bivariate probability density $ p $
$ \int_E p(x)dx $ indicates the probability that we attach to the missile being in region $ E $
The density $ p $ is called our prior for the random variable $ x $
To keep things tractable in our example, we assume that our prior is Gaussian
In particular, we take
<a id='equation-prior'></a>
$$
p = N(\hat x, \Sigma) \tag{1}
$$
where $ \hat x $ is the mean of the distribution and $ \Sigma $ is a
$ 2 \times 2 $ covariance matrix. In our simulations, we will suppose that
<a id='equation-kalman-dhxs'></a>
$$
\hat x
= \left(
\begin{array}{c}
0.2 \\
-0.2
\end{array}
\right),
\qquad
\Sigma
= \left(
\begin{array}{cc}
0.4 & 0.3 \\
0.3 & 0.45
\end{array}
\right) \tag{2}
$$
This density $ p(x) $ is shown below as a contour map, with the center of the red ellipse being equal to $ \hat x $
End of explanation
"""
fig, ax = plt.subplots(figsize=(10, 8))
ax.grid()
Z = gen_gaussian_plot_vals(x_hat, Σ)
ax.contourf(X, Y, Z, 6, alpha=0.6, cmap=cm.jet)
cs = ax.contour(X, Y, Z, 6, colors="black")
ax.clabel(cs, inline=1, fontsize=10)
ax.text(float(y[0]), float(y[1]), "$y$", fontsize=20, color="black")
plt.show()
"""
Explanation: The Filtering Step
We are now presented with some good news and some bad news
The good news is that the missile has been located by our sensors, which report that the current location is $ y = (2.3, -1.9) $
The next figure shows the original prior $ p(x) $ and the new reported
location $ y $
End of explanation
"""
fig, ax = plt.subplots(figsize=(10, 8))
ax.grid()
Z = gen_gaussian_plot_vals(x_hat, Σ)
cs1 = ax.contour(X, Y, Z, 6, colors="black")
ax.clabel(cs1, inline=1, fontsize=10)
M = Σ * G.T * linalg.inv(G * Σ * G.T + R)
x_hat_F = x_hat + M * (y - G * x_hat)
Σ_F = Σ - M * G * Σ
new_Z = gen_gaussian_plot_vals(x_hat_F, Σ_F)
cs2 = ax.contour(X, Y, new_Z, 6, colors="black")
ax.clabel(cs2, inline=1, fontsize=10)
ax.contourf(X, Y, new_Z, 6, alpha=0.6, cmap=cm.jet)
ax.text(float(y[0]), float(y[1]), "$y$", fontsize=20, color="black")
plt.show()
"""
Explanation: The bad news is that our sensors are imprecise
In particular, we should interpret the output of our sensor not as
$ y=x $, but rather as
<a id='equation-kl-measurement-model'></a>
$$
y = G x + v, \quad \text{where} \quad v \sim N(0, R) \tag{3}
$$
Here $ G $ and $ R $ are $ 2 \times 2 $ matrices with $ R $
positive definite. Both are assumed known, and the noise term $ v $ is assumed
to be independent of $ x $
How then should we combine our prior $ p(x) = N(\hat x, \Sigma) $ and this
new information $ y $ to improve our understanding of the location of the
missile?
As you may have guessed, the answer is to use Bayes’ theorem, which tells
us to update our prior $ p(x) $ to $ p(x \,|\, y) $ via
$$
p(x \,|\, y) = \frac{p(y \,|\, x) \, p(x)} {p(y)}
$$
where $ p(y) = \int p(y \,|\, x) \, p(x) dx $
In solving for $ p(x \,|\, y) $, we observe that
$ p(x) = N(\hat x, \Sigma) $
In view of (3), the conditional density $ p(y \,|\, x) $ is $ N(Gx, R) $
$ p(y) $ does not depend on $ x $, and enters into the calculations only as a normalizing constant
Because we are in a linear and Gaussian framework, the updated density can be computed by calculating population linear regressions
In particular, the solution is known <sup><a href=#f1 id=f1-link>[1]</a></sup> to be
$$
p(x \,|\, y) = N(\hat x^F, \Sigma^F)
$$
where
<a id='equation-kl-filter-exp'></a>
$$
\hat x^F := \hat x + \Sigma G' (G \Sigma G' + R)^{-1}(y - G \hat x)
\quad \text{and} \quad
\Sigma^F := \Sigma - \Sigma G' (G \Sigma G' + R)^{-1} G \Sigma \tag{4}
$$
Here $ \Sigma G' (G \Sigma G' + R)^{-1} $ is the matrix of population regression coefficients of the hidden object $ x - \hat x $ on the surprise $ y - G \hat x $
This new density $ p(x \,|\, y) = N(\hat x^F, \Sigma^F) $ is shown in the next figure via contour lines and the color map
The original density is left in as contour lines for comparison
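As a numerical sanity check (a sketch added here, not part of the original lecture), (4) can be evaluated directly with NumPy using the prior (2) together with $ G = I $ and $ R = 0.5 \Sigma $, the values used to generate the figure:

```python
import numpy as np

# Prior moments from (2)
x_hat = np.array([0.2, -0.2])
Sigma = np.array([[0.4, 0.3],
                  [0.3, 0.45]])

# Measurement setup used for the figure: G = I, R = 0.5 * Sigma
G = np.eye(2)
R = 0.5 * Sigma
y = np.array([2.3, -1.9])

# Matrix of population regression coefficients of x - x_hat on y - G x_hat
M = Sigma @ G.T @ np.linalg.inv(G @ Sigma @ G.T + R)

# Filtering update (4)
x_hat_F = x_hat + M @ (y - G @ x_hat)   # close to [1.6, -1.333]
Sigma_F = Sigma - M @ G @ Sigma         # equals Sigma / 3 in this special case
```

With $ G = I $ and $ R = 0.5 \Sigma $ the regression matrix collapses to $ \tfrac{2}{3} I $, so the filtered mean is just a convex combination of $ \hat x $ and $ y $.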
End of explanation
"""
fig, ax = plt.subplots(figsize=(10, 8))
ax.grid()
# Density 1
Z = gen_gaussian_plot_vals(x_hat, Σ)
cs1 = ax.contour(X, Y, Z, 6, colors="black")
ax.clabel(cs1, inline=1, fontsize=10)
# Density 2
M = Σ * G.T * linalg.inv(G * Σ * G.T + R)
x_hat_F = x_hat + M * (y - G * x_hat)
Σ_F = Σ - M * G * Σ
Z_F = gen_gaussian_plot_vals(x_hat_F, Σ_F)
cs2 = ax.contour(X, Y, Z_F, 6, colors="black")
ax.clabel(cs2, inline=1, fontsize=10)
# Density 3
new_x_hat = A * x_hat_F
new_Σ = A * Σ_F * A.T + Q
new_Z = gen_gaussian_plot_vals(new_x_hat, new_Σ)
cs3 = ax.contour(X, Y, new_Z, 6, colors="black")
ax.clabel(cs3, inline=1, fontsize=10)
ax.contourf(X, Y, new_Z, 6, alpha=0.6, cmap=cm.jet)
ax.text(float(y[0]), float(y[1]), "$y$", fontsize=20, color="black")
plt.show()
"""
Explanation: Our new density twists the prior $ p(x) $ in a direction determined by the new
information $ y - G \hat x $
In generating the figure, we set $ G $ to the identity matrix and $ R = 0.5 \Sigma $ for $ \Sigma $ defined in (2)
<a id='kl-forecase-step'></a>
The Forecast Step
What have we achieved so far?
We have obtained probabilities for the current location of the state (missile) given prior and current information
This is called “filtering” rather than forecasting, because we are filtering
out noise rather than looking into the future
$ p(x \,|\, y) = N(\hat x^F, \Sigma^F) $ is called the filtering distribution
But now let’s suppose that we are given another task: to predict the location of the missile after one unit of time (whatever that may be) has elapsed
To do this we need a model of how the state evolves
Let’s suppose that we have one, and that it’s linear and Gaussian. In particular,
<a id='equation-kl-xdynam'></a>
$$
x_{t+1} = A x_t + w_{t+1}, \quad \text{where} \quad w_t \sim N(0, Q) \tag{5}
$$
Our aim is to combine this law of motion and our current distribution $ p(x \,|\, y) = N(\hat x^F, \Sigma^F) $ to come up with a new predictive distribution for the location in one unit of time
In view of (5), all we have to do is introduce a random vector $ x^F \sim N(\hat x^F, \Sigma^F) $ and work out the distribution of $ A x^F + w $ where $ w $ is independent of $ x^F $ and has distribution $ N(0, Q) $
Since linear combinations of Gaussians are Gaussian, $ A x^F + w $ is Gaussian
Elementary calculations and the expressions in (4) tell us that
$$
\mathbb{E} [A x^F + w]
= A \mathbb{E} x^F + \mathbb{E} w
= A \hat x^F
= A \hat x + A \Sigma G' (G \Sigma G' + R)^{-1}(y - G \hat x)
$$
and
$$
\operatorname{Var} [A x^F + w]
= A \operatorname{Var}[x^F] A' + Q
= A \Sigma^F A' + Q
= A \Sigma A' - A \Sigma G' (G \Sigma G' + R)^{-1} G \Sigma A' + Q
$$
The matrix $ A \Sigma G' (G \Sigma G' + R)^{-1} $ is often written as
$ K_{\Sigma} $ and called the Kalman gain
The subscript $ \Sigma $ has been added to remind us that $ K_{\Sigma} $ depends on $ \Sigma $, but not $ y $ or $ \hat x $
Using this notation, we can summarize our results as follows
Our updated prediction is the density $ N(\hat x_{new}, \Sigma_{new}) $ where
<a id='equation-kl-mlom0'></a>
$$
\begin{aligned}
\hat x_{new} &:= A \hat x + K_{\Sigma} (y - G \hat x) \\
\Sigma_{new} &:= A \Sigma A' - K_{\Sigma} G \Sigma A' + Q \nonumber
\end{aligned} \tag{6}
$$
The density $ p_{new}(x) = N(\hat x_{new}, \Sigma_{new}) $ is called the predictive distribution
The predictive distribution is the new density shown in the following figure, where
the update has used parameters
$$
A
= \left(
\begin{array}{cc}
1.2 & 0.0 \\
0.0 & -0.2
\end{array}
\right),
\qquad
Q = 0.3 * \Sigma
$$
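The two moment calculations above can also be verified by simulation; the sketch below (an addition for illustration, not from the original lecture) draws $ x^F $ and $ w $ independently and checks the mean and variance of $ A x^F + w $, using the $ A $ and $ Q = 0.3 \Sigma $ just given together with the filtered moments from the worked example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Moments from the worked example: Sigma from (2), filtered moments from (4)
Sigma = np.array([[0.4, 0.3],
                  [0.3, 0.45]])
x_hat_F = np.array([1.6, -4.0 / 3.0])   # filtered mean for G = I, R = 0.5 Sigma, y = (2.3, -1.9)
Sigma_F = Sigma / 3.0                    # filtered covariance for the same setup
A = np.array([[1.2, 0.0],
              [0.0, -0.2]])
Q = 0.3 * Sigma

# Simulate A x^F + w with x^F ~ N(x_hat_F, Sigma_F) and w ~ N(0, Q) independent
n = 200_000
xF = rng.multivariate_normal(x_hat_F, Sigma_F, size=n)
w = rng.multivariate_normal(np.zeros(2), Q, size=n)
samples = xF @ A.T + w

mean_mc = samples.mean(axis=0)           # should be close to A @ x_hat_F
cov_mc = np.cov(samples, rowvar=False)   # should be close to A @ Sigma_F @ A.T + Q
```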
End of explanation
"""
from quantecon import Kalman
from quantecon import LinearStateSpace
from scipy.stats import norm
"""
Explanation: The Recursive Procedure
<a id='index-1'></a>
Let’s look back at what we’ve done
We started the current period with a prior $ p(x) $ for the location $ x $ of the missile
We then used the current measurement $ y $ to update to $ p(x \,|\, y) $
Finally, we used the law of motion (5) for $ {x_t} $ to update to $ p_{new}(x) $
If we now step into the next period, we are ready to go round again, taking $ p_{new}(x) $
as the current prior
Swapping notation $ p_t(x) $ for $ p(x) $ and $ p_{t+1}(x) $ for $ p_{new}(x) $, the full recursive procedure is:
Start the current period with prior $ p_t(x) = N(\hat x_t, \Sigma_t) $
Observe current measurement $ y_t $
Compute the filtering distribution $ p_t(x \,|\, y) = N(\hat x_t^F, \Sigma_t^F) $ from $ p_t(x) $ and $ y_t $, applying Bayes rule and the conditional distribution (3)
Compute the predictive distribution $ p_{t+1}(x) = N(\hat x_{t+1}, \Sigma_{t+1}) $ from the filtering distribution and (5)
Increment $ t $ by one and go to step 1
Repeating (6), the dynamics for $ \hat x_t $ and $ \Sigma_t $ are as follows
<a id='equation-kalman-lom'></a>
$$
\begin{aligned}
\hat x_{t+1} &= A \hat x_t + K_{\Sigma_t} (y_t - G \hat x_t) \\
\Sigma_{t+1} &= A \Sigma_t A' - K_{\Sigma_t} G \Sigma_t A' + Q \nonumber
\end{aligned} \tag{7}
$$
These are the standard dynamic equations for the Kalman filter (see, for example, [LS18], page 58)
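A minimal NumPy sketch of one step of (7) (an illustration only, not the QuantEcon implementation used below):

```python
import numpy as np

def kalman_step(x_hat, Sigma, y, A, G, Q, R):
    """One step of (7): map (x_hat_t, Sigma_t) and observation y_t to (x_hat_{t+1}, Sigma_{t+1})."""
    # Kalman gain K_Sigma = A Sigma G' (G Sigma G' + R)^{-1}
    K = A @ Sigma @ G.T @ np.linalg.inv(G @ Sigma @ G.T + R)
    x_hat_new = A @ x_hat + K @ (y - G @ x_hat)
    Sigma_new = A @ Sigma @ A.T - K @ G @ Sigma @ A.T + Q
    return x_hat_new, Sigma_new

# Scalar illustration with the Exercise 1 setup: A = G = 1, Q = 0, R = 1
A = G = np.eye(1)
Q = np.zeros((1, 1))
R = np.eye(1)
x_hat_1, Sigma_1 = kalman_step(np.array([8.0]), np.eye(1), np.array([10.0]), A, G, Q, R)
```

In this scalar example the gain is $ 1/2 $, so after observing $ y_0 = 10 $ the prediction moves from $ 8 $ to $ 9 $ and the variance falls from $ 1 $ to $ 0.5 $.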
<a id='kalman-convergence'></a>
Convergence
The matrix $ \Sigma_t $ is a measure of the uncertainty of our prediction $ \hat x_t $ of $ x_t $
Apart from special cases, this uncertainty will never be fully resolved, regardless of how much time elapses
One reason is that our prediction $ \hat x_t $ is made based on information available at $ t-1 $, not $ t $
Even if we know the precise value of $ x_{t-1} $ (which we don’t), the transition equation (5) implies that $ x_t = A x_{t-1} + w_t $
Since the shock $ w_t $ is not observable at $ t-1 $, any time $ t-1 $ prediction of $ x_t $ will incur some error (unless $ w_t $ is degenerate)
However, it is certainly possible that $ \Sigma_t $ converges to a constant matrix as $ t \to \infty $
To study this topic, let’s expand the second equation in (7):
<a id='equation-kalman-sdy'></a>
$$
\Sigma_{t+1} = A \Sigma_t A' - A \Sigma_t G' (G \Sigma_t G' + R)^{-1} G \Sigma_t A' + Q \tag{8}
$$
This is a nonlinear difference equation in $ \Sigma_t $
A fixed point of (8) is a constant matrix $ \Sigma $ such that
<a id='equation-kalman-dare'></a>
$$
\Sigma = A \Sigma A' - A \Sigma G' (G \Sigma G' + R)^{-1} G \Sigma A' + Q \tag{9}
$$
Equation (8) is known as a discrete time Riccati difference equation
Equation (9) is known as a discrete time algebraic Riccati equation
Conditions under which a fixed point exists and the sequence $ {\Sigma_t} $ converges to it are discussed in [AHMS96] and [AM05], chapter 4
A sufficient (but not necessary) condition is that all the eigenvalues $ \lambda_i $ of $ A $ satisfy $ |\lambda_i| < 1 $ (cf. e.g., [AM05], p. 77)
(This strong condition assures that the unconditional distribution of $ x_t $ converges as $ t \rightarrow + \infty $)
In this case, for any initial choice of $ \Sigma_0 $ that is both nonnegative and symmetric, the sequence $ {\Sigma_t} $ in (8) converges to a nonnegative symmetric matrix $ \Sigma $ that solves (9)
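One simple way to approximate this fixed point is to iterate (8) directly; here is a sketch (using the Exercise 3 parameter values, for which the eigenvalues of $ A $ are $ 0.9 $ and $ -0.1 $, both inside the unit circle):

```python
import numpy as np

def iterate_riccati(A, G, Q, R, Sigma0, tol=1e-12, max_iter=100_000):
    """Iterate the Riccati difference equation (8) until the update stalls."""
    Sigma = np.array(Sigma0, dtype=float)
    for _ in range(max_iter):
        Sigma_new = (A @ Sigma @ A.T
                     - A @ Sigma @ G.T @ np.linalg.inv(G @ Sigma @ G.T + R) @ G @ Sigma @ A.T
                     + Q)
        if np.max(np.abs(Sigma_new - Sigma)) < tol:
            break
        Sigma = Sigma_new
    return Sigma_new

# Exercise 3 parameters
A = np.array([[0.5, 0.4],
              [0.6, 0.3]])
G = np.eye(2)
R = 0.5 * np.eye(2)
Q = 0.3 * np.eye(2)
Sigma_star = iterate_riccati(A, G, Q, R, np.eye(2))

# Residual of the algebraic Riccati equation (9) at the approximate fixed point
residual = (A @ Sigma_star @ A.T
            - A @ Sigma_star @ G.T @ np.linalg.inv(G @ Sigma_star @ G.T + R) @ G @ Sigma_star @ A.T
            + Q - Sigma_star)
```

In practice one would use the dedicated solver in QuantEcon (see `stationary_values` below) rather than naive iteration, but the iteration makes the convergence claim concrete.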
Implementation
<a id='index-2'></a>
The class Kalman from the QuantEcon.py package implements the Kalman filter
Instance data consists of:
the moments $ (\hat x_t, \Sigma_t) $ of the current prior
An instance of the LinearStateSpace class from QuantEcon.py
The latter represents a linear state space model of the form
$$
\begin{aligned}
x_{t+1} & = A x_t + C w_{t+1} \\
y_t & = G x_t + H v_t
\end{aligned}
$$
where the shocks $ w_t $ and $ v_t $ are iid standard normals
To connect this with the notation of this lecture we set
$$
Q := CC' \quad \text{and} \quad R := HH'
$$
The class Kalman from the QuantEcon.py package has a number of methods, some of which we will not use until we study more advanced applications in subsequent lectures
Methods pertinent for this lecture are:
prior_to_filtered, which updates $ (\hat x_t, \Sigma_t) $ to $ (\hat x_t^F, \Sigma_t^F) $
filtered_to_forecast, which updates the filtering distribution to the predictive distribution – which becomes the new prior $ (\hat x_{t+1}, \Sigma_{t+1}) $
update, which combines the last two methods
stationary_values, which computes the solution to (9) and the corresponding (stationary) Kalman gain
You can view the program on GitHub
Exercises
<a id='kalman-ex1'></a>
Exercise 1
Consider the following simple application of the Kalman filter, loosely based
on [LS18], section 2.9.2
Suppose that
all variables are scalars
the hidden state $ {x_t} $ is in fact constant, equal to some $ \theta \in \mathbb{R} $ unknown to the modeler
State dynamics are therefore given by (5) with $ A=1 $, $ Q=0 $ and $ x_0 = \theta $
The measurement equation is $ y_t = \theta + v_t $ where $ v_t $ is $ N(0,1) $ and iid
The task of this exercise is to simulate the model and, using the code from kalman.py, plot the first five predictive densities $ p_t(x) = N(\hat x_t, \Sigma_t) $
As shown in [LS18], sections 2.9.1–2.9.2, these distributions asymptotically put all mass on the unknown value $ \theta $
In the simulation, take $ \theta = 10 $, $ \hat x_0 = 8 $ and $ \Sigma_0 = 1 $
Your figure should – modulo randomness – look something like this
<img src="https://s3-ap-southeast-2.amazonaws.com/lectures.quantecon.org/py/_static/figures/kl_ex1_fig.png" style="width:100%;height:100%">
<a id='kalman-ex2'></a>
Exercise 2
The preceding figure gives some support to the idea that probability mass
converges to $ \theta $
To get a better idea, choose a small $ \epsilon > 0 $ and calculate
$$
z_t := 1 - \int_{\theta - \epsilon}^{\theta + \epsilon} p_t(x) dx
$$
for $ t = 0, 1, 2, \ldots, T $
Plot $ z_t $ against $ t $, setting $ \epsilon = 0.1 $ and $ T = 600 $
Your figure should show the error declining erratically, something like this
<img src="https://s3-ap-southeast-2.amazonaws.com/lectures.quantecon.org/py/_static/figures/kl_ex2_fig.png" style="width:100%;height:100%">
<a id='kalman-ex3'></a>
Exercise 3
As discussed above, if the shock sequence $ {w_t} $ is not degenerate, then it is not in general possible to predict $ x_t $ without error at time $ t-1 $ (and this would be the case even if we could observe $ x_{t-1} $)
Let’s now compare the prediction $ \hat x_t $ made by the Kalman filter
against a competitor who is allowed to observe $ x_{t-1} $
This competitor will use the conditional expectation $ \mathbb E[ x_t
\,|\, x_{t-1}] $, which in this case is $ A x_{t-1} $
The conditional expectation is known to be the optimal prediction method in terms of minimizing mean squared error
(More precisely, the minimizer of $ \mathbb E \, \| x_t - g(x_{t-1}) \|^2 $ with respect to $ g $ is $ g^*(x_{t-1}) := \mathbb E[ x_t \,|\, x_{t-1}] $)
Thus we are comparing the Kalman filter against a competitor who has more
information (in the sense of being able to observe the latent state) and
behaves optimally in terms of minimizing squared error
Our horse race will be assessed in terms of squared error
In particular, your task is to generate a graph plotting observations of both $ \| x_t - A x_{t-1} \|^2 $ and $ \| x_t - \hat x_t \|^2 $ against $ t $ for $ t = 1, \ldots, 50 $
For the parameters, set $ G = I, R = 0.5 I $ and $ Q = 0.3 I $, where $ I $ is
the $ 2 \times 2 $ identity
Set
$$
A
= \left(
\begin{array}{cc}
0.5 & 0.4 \\
0.6 & 0.3
\end{array}
\right)
$$
To initialize the prior density, set
$$
\Sigma_0
= \left(
\begin{array}{cc}
0.9 & 0.3 \\
0.3 & 0.9
\end{array}
\right)
$$
and $ \hat x_0 = (8, 8) $
Finally, set $ x_0 = (0, 0) $
You should end up with a figure similar to the following (modulo randomness)
<img src="https://s3-ap-southeast-2.amazonaws.com/lectures.quantecon.org/py/_static/figures/kalman_ex3.png" style="width:100%;height:100%">
Observe how, after an initial learning period, the Kalman filter performs quite well, even relative to the competitor who predicts optimally with knowledge of the latent state
<a id='kalman-ex4'></a>
Exercise 4
Try varying the coefficient $ 0.3 $ in $ Q = 0.3 I $ up and down
Observe how the diagonal values in the stationary solution $ \Sigma $ (see (9)) increase and decrease in line with this coefficient
The interpretation is that more randomness in the law of motion for $ x_t $ causes more (permanent) uncertainty in prediction
Solutions
End of explanation
"""
# == parameters == #
θ = 10 # Constant value of state x_t
A, C, G, H = 1, 0, 1, 1
ss = LinearStateSpace(A, C, G, H, mu_0=θ)
# == set prior, initialize kalman filter == #
x_hat_0, Σ_0 = 8, 1
kalman = Kalman(ss, x_hat_0, Σ_0)
# == draw observations of y from state space model == #
N = 5
x, y = ss.simulate(N)
y = y.flatten()
# == set up plot == #
fig, ax = plt.subplots(figsize=(10,8))
xgrid = np.linspace(θ - 5, θ + 2, 200)
for i in range(N):
# == record the current predicted mean and variance == #
m, v = [float(z) for z in (kalman.x_hat, kalman.Sigma)]
# == plot, update filter == #
ax.plot(xgrid, norm.pdf(xgrid, loc=m, scale=np.sqrt(v)), label=f'$t={i}$')
kalman.update(y[i])
ax.set_title(f'First {N} densities when $\\theta = {θ:.1f}$')
ax.legend(loc='upper left')
plt.show()
"""
Explanation: Exercise 1
End of explanation
"""
from scipy.integrate import quad
ϵ = 0.1
θ = 10 # Constant value of state x_t
A, C, G, H = 1, 0, 1, 1
ss = LinearStateSpace(A, C, G, H, mu_0=θ)
x_hat_0, Σ_0 = 8, 1
kalman = Kalman(ss, x_hat_0, Σ_0)
T = 600
z = np.empty(T)
x, y = ss.simulate(T)
y = y.flatten()
for t in range(T):
# Record the current predicted mean and variance, and plot their densities
m, v = [float(temp) for temp in (kalman.x_hat, kalman.Sigma)]
f = lambda x: norm.pdf(x, loc=m, scale=np.sqrt(v))
integral, error = quad(f, θ - ϵ, θ + ϵ)
z[t] = 1 - integral
kalman.update(y[t])
fig, ax = plt.subplots(figsize=(9, 7))
ax.set_ylim(0, 1)
ax.set_xlim(0, T)
ax.plot(range(T), z)
ax.fill_between(range(T), np.zeros(T), z, color="blue", alpha=0.2)
plt.show()
"""
Explanation: Exercise 2
End of explanation
"""
from numpy.random import multivariate_normal
from scipy.linalg import eigvals
# === Define A, C, G, H === #
G = np.identity(2)
H = np.sqrt(0.5) * np.identity(2)
A = [[0.5, 0.4],
[0.6, 0.3]]
C = np.sqrt(0.3) * np.identity(2)
# === Set up state space mode, initial value x_0 set to zero === #
ss = LinearStateSpace(A, C, G, H, mu_0 = np.zeros(2))
# === Define the prior density === #
Σ = [[0.9, 0.3],
[0.3, 0.9]]
Σ = np.array(Σ)
x_hat = np.array([8, 8])
# === Initialize the Kalman filter === #
kn = Kalman(ss, x_hat, Σ)
# == Print eigenvalues of A == #
print("Eigenvalues of A:")
print(eigvals(A))
# == Print stationary Σ == #
S, K = kn.stationary_values()
print("Stationary prediction error variance:")
print(S)
# === Generate the plot === #
T = 50
x, y = ss.simulate(T)
e1 = np.empty(T-1)
e2 = np.empty(T-1)
for t in range(1, T):
kn.update(y[:,t])
e1[t-1] = np.sum((x[:, t] - kn.x_hat.flatten())**2)
e2[t-1] = np.sum((x[:, t] - A @ x[:, t-1])**2)
fig, ax = plt.subplots(figsize=(9,6))
ax.plot(range(1, T), e1, 'k-', lw=2, alpha=0.6, label='Kalman filter error')
ax.plot(range(1, T), e2, 'g-', lw=2, alpha=0.6, label='Conditional expectation error')
ax.legend()
plt.show()
"""
Explanation: Exercise 3
End of explanation
"""
|
jobovy/stream-stream | py/Orbits-for-Nbody.ipynb | bsd-3-clause | lp= LogarithmicHaloPotential(normalize=1.,q=0.9)
R0, V0= 8., 220.
"""
Explanation: Initial conditions for $N$-body simulations to create the impact we want
Setup the potential and coordinate system
End of explanation
"""
def rectangular_to_cylindrical(xv):
R,phi,Z= bovy_coords.rect_to_cyl(xv[:,0],xv[:,1],xv[:,2])
vR,vT,vZ= bovy_coords.rect_to_cyl_vec(xv[:,3],xv[:,4],xv[:,5],R,phi,Z,cyl=True)
out= numpy.empty_like(xv)
# Preferred galpy arrangement of cylindrical coordinates
out[:,0]= R
out[:,1]= vR
out[:,2]= vT
out[:,3]= Z
out[:,4]= vZ
out[:,5]= phi
return out
def cylindrical_to_rectangular(xv):
# Using preferred galpy arrangement of cylindrical coordinates
X,Y,Z= bovy_coords.cyl_to_rect(xv[:,0],xv[:,5],xv[:,3])
vX,vY,vZ= bovy_coords.cyl_to_rectvec(xv[:,1],xv[:,2],xv[:,4],xv[:,5])
out= numpy.empty_like(xv)
out[:,0]= X
out[:,1]= Y
out[:,2]= Z
out[:,3]= vX
out[:,4]= vY
out[:,5]= vZ
return out
"""
Explanation: Functions for converting coordinates between rectangular and cylindrical systems:
End of explanation
"""
xv_prog_init= numpy.array([30.,0.,0.,0.,105.74895,105.74895])
RvR_prog_init= rectangular_to_cylindrical(xv_prog_init[:,numpy.newaxis].T)[0,:]
prog_init= Orbit([RvR_prog_init[0]/R0,RvR_prog_init[1]/V0,RvR_prog_init[2]/V0,
RvR_prog_init[3]/R0,RvR_prog_init[4]/V0,RvR_prog_init[5]],ro=R0,vo=V0)
times= numpy.linspace(0.,10./bovy_conversion.time_in_Gyr(V0,R0),10001)
prog_init.integrate(times,lp)
xv_prog_impact= [prog_init.x(times[-1]),prog_init.y(times[-1]),prog_init.z(times[-1]),
prog_init.vx(times[-1]),prog_init.vy(times[-1]),prog_init.vz(times[-1])]
"""
Explanation: At the time of impact, the phase-space coordinates of the GC can be computed using orbit integration:
End of explanation
"""
xv_dm_impact= numpy.array([-13.500000,2.840000,-1.840000,6.82200571,132.7700529,149.4174464])
RvR_dm_impact= rectangular_to_cylindrical(xv_dm_impact[:,numpy.newaxis].T)[0,:]
dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0,
RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0)
dm_impact= dm_impact.flip()
times= numpy.linspace(0.,10./bovy_conversion.time_in_Gyr(V0,R0),1001)
dm_impact.integrate(times,lp)
"""
Explanation: The DM halo at the time of impact is at the following location:
End of explanation
"""
prog_init.plot()
dm_impact.plot(overplot=True)
plot(RvR_dm_impact[0],RvR_dm_impact[3],'ro')
xlim(0.,35.)
ylim(-20.,20.)
"""
Explanation: The orbits over the past 10 Gyr for both objects are:
End of explanation
"""
prog_backward= prog_init.flip()
ts= numpy.linspace(0.,(10.25*0.9777922212082034-10.)/bovy_conversion.time_in_Gyr(V0,R0),1001)
prog_backward.integrate(ts,lp)
print([prog_backward.x(ts[-1]), prog_backward.y(ts[-1]), prog_backward.z(ts[-1]),
       -prog_backward.vx(ts[-1]), -prog_backward.vy(ts[-1]), -prog_backward.vz(ts[-1])])
"""
Explanation: Initial condition for the King cluster
We start the King cluster at 10.25 WD time units, which corresponds to 10.25x0.9777922212082034 Gyr. The phase-space coordinates of the cluster are then:
End of explanation
"""
dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0,
RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0)
dm_impact= dm_impact.flip()
ts= numpy.linspace(0.,0.125*0.9777922212082034/bovy_conversion.time_in_Gyr(V0,R0),10001)
dm_impact.integrate(ts,lp)
print([dm_impact.x(ts[-1]), dm_impact.y(ts[-1]), dm_impact.z(ts[-1]),
       -dm_impact.vx(ts[-1]), -dm_impact.vy(ts[-1]), -dm_impact.vz(ts[-1])])
"""
Explanation: Initial conditions for the Plummer DM subhalo
Starting 0.125 time units ago
End of explanation
"""
dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0,
RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0)
dm_impact= dm_impact.flip()
ts= numpy.linspace(0.,0.25*0.9777922212082034/bovy_conversion.time_in_Gyr(V0,R0),10001)
dm_impact.integrate(ts,lp)
print([dm_impact.x(ts[-1]), dm_impact.y(ts[-1]), dm_impact.z(ts[-1]),
       -dm_impact.vx(ts[-1]), -dm_impact.vy(ts[-1]), -dm_impact.vz(ts[-1])])
"""
Explanation: Starting 0.25 time units ago
End of explanation
"""
dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0,
RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0)
dm_impact= dm_impact.flip()
ts= numpy.linspace(0.,0.375*0.9777922212082034/bovy_conversion.time_in_Gyr(V0,R0),10001)
dm_impact.integrate(ts,lp)
print([dm_impact.x(ts[-1]), dm_impact.y(ts[-1]), dm_impact.z(ts[-1]),
       -dm_impact.vx(ts[-1]), -dm_impact.vy(ts[-1]), -dm_impact.vz(ts[-1])])
"""
Explanation: Starting 0.375 time units ago
End of explanation
"""
dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0,
RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0)
dm_impact= dm_impact.flip()
ts= numpy.linspace(0.,0.50*0.9777922212082034/bovy_conversion.time_in_Gyr(V0,R0),10001)
dm_impact.integrate(ts,lp)
print([dm_impact.x(ts[-1]), dm_impact.y(ts[-1]), dm_impact.z(ts[-1]),
       -dm_impact.vx(ts[-1]), -dm_impact.vy(ts[-1]), -dm_impact.vz(ts[-1])])
"""
Explanation: Starting 0.50 time units ago
End of explanation
"""
v_gc= numpy.array([xv_prog_impact[3],xv_prog_impact[4],xv_prog_impact[5]])
v_dm= numpy.array([6.82200571,132.7700529,149.4174464])
w_base= v_dm-v_gc
def v_dm_scaled(lam):
return w_base*lam+v_gc
"""
Explanation: Initial conditions for the Plummer DM subhalo with $\lambda$ scaled interaction velocities
To test the impulse approximation, we want to simulate interactions where the relative velocity ${\bf w}$ is changed by a factor of $\lambda$: ${\bf w} \rightarrow \lambda {\bf w}$. We start by computing the relative velocity for the impacts above and define a function that returns a dark-matter velocity after scaling the relative velocity by $\lambda$:
End of explanation
"""
lam= 0.5
xv_dm_impact= numpy.array([-13.500000,2.840000,-1.840000,v_dm_scaled(lam)[0],v_dm_scaled(lam)[1],v_dm_scaled(lam)[2]])
RvR_dm_impact= rectangular_to_cylindrical(xv_dm_impact[:,numpy.newaxis].T)[0,:]
dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0,
RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0)
dm_impact= dm_impact.flip()
ts= numpy.linspace(0.,0.25*0.9777922212082034/bovy_conversion.time_in_Gyr(V0,R0),10001)
dm_impact.integrate(ts,lp)
print([dm_impact.x(ts[-1]), dm_impact.y(ts[-1]), dm_impact.z(ts[-1]),
       -dm_impact.vx(ts[-1]), -dm_impact.vy(ts[-1]), -dm_impact.vz(ts[-1])])
"""
Explanation: Starting 0.25 time units ago, scaled down by 0.5
End of explanation
"""
|
eric-haibin-lin/mxnet | example/adversary/adversary_generation.ipynb | apache-2.0 | %matplotlib inline
import mxnet as mx
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from mxnet import gluon
"""
Explanation: Fast Sign Adversary Generation Example
This notebook demonstrates how to find adversarial examples using MXNet Gluon, taking advantage of gradient information
[1] Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572 (2014).
https://arxiv.org/abs/1412.6572
End of explanation
"""
ctx = mx.gpu() if mx.context.num_gpus() else mx.cpu()
batch_size = 128
"""
Explanation: Build simple CNN network for solving the MNIST dataset digit recognition task
End of explanation
"""
transform = lambda x,y: (x.transpose((2,0,1)).astype('float32')/255., y)
train_dataset = gluon.data.vision.MNIST(train=True).transform(transform)
test_dataset = gluon.data.vision.MNIST(train=False).transform(transform)
train_data = gluon.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=5)
test_data = gluon.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
"""
Explanation: Data Loading
End of explanation
"""
net = gluon.nn.HybridSequential()
with net.name_scope():
net.add(
gluon.nn.Conv2D(kernel_size=5, channels=20, activation='tanh'),
gluon.nn.MaxPool2D(pool_size=2, strides=2),
gluon.nn.Conv2D(kernel_size=5, channels=50, activation='tanh'),
gluon.nn.MaxPool2D(pool_size=2, strides=2),
gluon.nn.Flatten(),
gluon.nn.Dense(500, activation='tanh'),
gluon.nn.Dense(10)
)
"""
Explanation: Create the network
End of explanation
"""
net.initialize(mx.initializer.Uniform(), ctx=ctx)
net.hybridize()
loss = gluon.loss.SoftmaxCELoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1, 'momentum':0.95})
"""
Explanation: Initialize training
End of explanation
"""
epoch = 3
for e in range(epoch):
train_loss = 0.
acc = mx.metric.Accuracy()
for i, (data, label) in enumerate(train_data):
data = data.as_in_context(ctx)
label = label.as_in_context(ctx)
with mx.autograd.record():
output = net(data)
l = loss(output, label)
l.backward()
trainer.update(data.shape[0])
train_loss += l.mean().asscalar()
acc.update(label, output)
print("Train Accuracy: %.2f\t Train Loss: %.5f" % (acc.get()[1], train_loss/(i+1)))
"""
Explanation: Training loop
End of explanation
"""
# Get a batch from the testing set
for data, label in test_data:
data = data.as_in_context(ctx)
label = label.as_in_context(ctx)
break
# Attach gradient to it to get the gradient of the loss with respect to the input
data.attach_grad()
with mx.autograd.record():
output = net(data)
l = loss(output, label)
l.backward()
acc = mx.metric.Accuracy()
acc.update(label, output)
print("Validation batch accuracy {}".format(acc.get()[1]))
"""
Explanation: Perturbation
We first run a validation batch and measure the resulting accuracy.
We then perturb this batch by modifying the input along the sign of the gradient of the loss with respect to the input, i.e. in the direction that increases the loss.
End of explanation
"""
data_perturbated = data + 0.15 * mx.nd.sign(data.grad)
output = net(data_perturbated)
acc = mx.metric.Accuracy()
acc.update(label, output)
print("Validation batch accuracy after perturbation {}".format(acc.get()[1]))
"""
Explanation: Now we perturb the input
End of explanation
"""
from random import randint
idx = randint(0, batch_size-1)
plt.imshow(data_perturbated[idx, :].asnumpy().reshape(28,28), cmap=cm.Greys_r)
print("true label: %d" % label.asnumpy()[idx])
print("predicted: %d" % np.argmax(output.asnumpy(), axis=1)[idx])
"""
Explanation: Visualization
Let's visualize an example after perturbation.
We can see that the prediction is often incorrect.
End of explanation
"""
|
solomonvimal/UCLA-Hydro | LakeArea_Altimetry/Altimetry_MODIS_SurfaceArea_lake_345.ipynb | gpl-3.0 | % matplotlib inline
import pandas as pd
import glob
import matplotlib.pyplot as plt
GRLM = "345_GRLM10.txt"; print(GRLM)
df_grlm = pd.read_csv(GRLM, skiprows=43, delim_whitespace=True, names="mission,cycle,date,hour,minute,lake_height,error,mean(decibels),IonoCorrection,TropCorrection".split(","), engine='python', index_col=False)
df_grlm.head(5)
"""
Explanation: Notebook to work with Altimetry and Lake Surface Area
End of explanation
"""
df_grlm = pd.read_csv(GRLM, skiprows=43, delim_whitespace=True, names="mission,cycle,date,hour,minute,lake_height,error,mean(decibels),IonoCorrection,TropCorrection".split(","), engine='python', index_col=False)
def get_year(date): return int(str(date)[0:4])
def get_month(date): return int(str(date)[4:6])
def get_day(date): return int(str(date)[6:])
df_grlm['year'] = df_grlm['date'].apply(get_year)
df_grlm['month'] = df_grlm['date'].apply(get_month)
df_grlm['day'] = df_grlm['date'].apply(get_day)
df_grlm = df_grlm.where(df_grlm.minute < 61 ) # remove lines that do not have time
df_grlm = df_grlm.where(df_grlm.lake_height < 900 ) # remove entries that do not have lake-height
df_grlm.lake_height.plot(); plt.title("Actual data without resampling"); plt.ylabel("Variation (m)")
"""
Explanation: GRLM Altimetry data from July 22 2008 to September 3, 2016
Create new columns of year, month, day in a convenient format
End of explanation
"""
df_grlm.lake_height.interpolate().plot(); plt.title("Interpolated Actual data without resampling"); plt.ylabel("Variation (m)")
"""
Explanation: Interpolate the missing data points
End of explanation
"""
df = df_grlm
df[["year", "month", "day", "hour", "minute"]] = df[["year", "month", "day", "hour", "minute"]].fillna(0).astype(int)
df['Time'] = df.year.astype(str).str.cat(df.month.astype(str).astype(str), sep='-').str.cat(df.day.astype(str), sep='-')\
.str.cat(df.hour.astype(str).astype(str), sep='-').str.cat(df.minute.astype(str).astype(str), sep='-')
df = df.where(df.year>10) # to get rid of all the NaN values
df.index = pd.to_datetime(pd.Series(df["Time"]), format="%Y-%m-%d-%H-%M");
print(df.index[0:3], df.index[-3:])
"""
Explanation: Add time information to the dataframe
End of explanation
"""
df["lake_height"].resample("M").mean().plot(); plt.title("Mean Monthly Altimetry"); plt.ylabel("Variation (m)")
df["lake_height"].resample("A").mean().plot(); plt.title("Mean Annual Altimetry"); plt.ylabel("Variation (m)")
"""
Explanation: Resample the data to get monthly and annual variation in lake height
End of explanation
"""
df_modis = pd.read_csv('MODIS_t.txt', names=["Area"], engine='python', index_col=False)
df_time = pd.read_csv('DV.txt', sep = "\t", names=["Year", "Month", "Day", "", "", ""], engine='python', index_col=False)
df_time['Time'] = df_time.Year.astype(str).str.cat(df_time.Month.astype(str).astype(str), sep='-').str.cat(df_time.Day.astype(str), sep='-')
df_time = df_time.where(df_time.Year>10) # to get rid of all the NaN values
df_modis.index = pd.to_datetime(pd.Series(df_time["Time"]), format="%Y-%m-%d")#df.index[0:3]
df_modis.plot(); plt.title("MODIS data - Surface Area"); plt.ylabel("Surface Area (sq.m.?)")
"""
Explanation: MODIS data: Lake Surface Area (Feb 18, 2000 to Aug 13, 2015)
End of explanation
"""
df_glrm_subset = df["lake_height"].resample("D").mean().interpolate()
df_glrm_subset = df_glrm_subset[(df_glrm_subset.index > '2008-07-22') & (df_glrm_subset.index <= '2015-08-13')]
df_glrm_subset.plot(); plt.legend(); plt.title("Subset of Altimetry"); plt.ylabel("Variation (m)")
df_glrm_subset.index
df_modis_daily = df_modis["Area"].resample("D").mean().interpolate()
df_modis_subset = df_modis_daily[(df_modis_daily.index > '2008-07-22') & (df_modis_daily.index <= '2015-08-13')]
df_modis_subset.plot()
df_modis_subset.index
# QA: Create a time series of time alone, to check the number of data points that we should have for days.
# Note the variable called length
print pd.date_range('22/07/2008', periods=len(df_modis_subset), freq='D')
# Check if the two vectors are of the same length
print len(df_glrm_subset.tolist()), len(df_modis_subset.tolist())
"""
Explanation: Create subsets of both vectors (altimetry and surface area) for the overlapping period
End of explanation
"""
import numpy
cor = numpy.corrcoef(df_glrm_subset.resample("W").mean().interpolate().tolist(),
df_modis_subset.resample("W").mean().interpolate().tolist())
print "correlation coefficient is: " , cor[0][1]
"""
Explanation: Compute correlation coefficient
End of explanation
"""
|
sevo/pewe-presentations | PCA nie je vyber atributov.ipynb | gpl-3.0 | %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn
plt.rcParams['figure.figsize'] = 9, 6
"""
Explanation: An example of feature selection using a filter, and a demonstration of why PCA is not feature selection
End of explanation
"""
from sklearn import datasets, svm
from sklearn.feature_selection import SelectPercentile, f_classif
iris = datasets.load_iris()
iris.data.shape
"""
Explanation: Let's first try an example of how we would select the most important features from a dataset using a filter
End of explanation
"""
# generate 20 completely random features and append them to the original data
E = np.random.uniform(0, 0.1, size=(len(iris.data), 20))
X = np.hstack((iris.data, E))
y = iris.target
X_indices = np.arange(X.shape[-1])
X_indices
X.shape
"""
Explanation: We will use the popular iris flower dataset, which has 150 observations and 4 features
We will generate 20 additional random features for it, which should have only a minimal influence on predicting the dependent variable
End of explanation
"""
# the original data
iris.data[:2]
# the data extended with 20 additional random features
# only the first ones should carry any signal
X[:2]
"""
Explanation: For comparison, let's look at two rows of the original and the new data
End of explanation
"""
from sklearn.feature_selection import SelectPercentile, f_classif
selector = SelectPercentile(f_classif, percentile=10)
selector.fit(X, y)
scores = -np.log10(selector.pvalues_)
scores /= scores.max()
plt.bar(X_indices, scores)
"""
Explanation: We can try to find the most important features. They should be the first 4
End of explanation
"""
from sklearn.decomposition import PCA
"""
Explanation: We really did manage to find the features that were related to the predicted variable.
Can PCA be used for something similar?
A case against PCA, a.k.a. don't use PCA for feature selection
PCA is usually used to reduce features into a smaller number of components. Here, however, entirely new components (features) are constructed as linear combinations of the original ones
End of explanation
"""
import sklearn.datasets as ds
data = ds.load_breast_cancer()['data']
pca_trafo = PCA().fit(data)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
line, = ax.plot(pca_trafo.explained_variance_ratio_, '--o')
ax.set_yscale('log') # try removing the logarithmic scale; you'll see there is probably a problem
ax.set_title('Contribution of components to explaining the dataset variance')
"""
Explanation: Let's try using PCA to show how many components it needs to explain the dataset
Note that we are talking about components created as linear combinations of the features, not directly about the features
End of explanation
"""
import sklearn.datasets as ds
from sklearn.decomposition import PCA
pca_trafo = PCA()
data = ds.load_breast_cancer()['data']
pca_data = pca_trafo.fit_transform(data)
ax = seaborn.heatmap(np.log(pca_trafo.inverse_transform(np.eye(data.shape[1]))), cmap="hot", cbar=False)
ax.set_xlabel('features')
ax.set_ylabel('components')
"""
Explanation: We can try using PCA to flag the features that contribute the most to the variance in the data
We will plot a heatmap of how strongly the individual features contribute to the components, and thus how strongly they are reflected in them. That should tell us which feature is most strongly reflected in the resulting data.
End of explanation
"""
means = np.mean(pca_trafo.inverse_transform(np.eye(data.shape[1])), axis=0)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(means)
ax.set_ylabel('mean contrib. in components')
ax.set_xlabel('feature #')
"""
Explanation: The matrix is not completely random; there are 3 stripes showing 3 groups of features that are reflected in the components more strongly than the others. It seems these are the most important features.
We can compute their average contribution to explaining the dataset
End of explanation
"""
# PCA tries to explain the variance in the data. If each feature has a different variance, PCA will not explain the amount of information in the feature, only its variance
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(np.std(data, axis=0))
ax.set_ylabel('standard deviation')
ax.set_xlabel('feature #')
# ax.set_yscale('log')
"""
Explanation: It seems there are some features that are reflected very strongly in those components.
But what is PCA actually trying to do? Explain the variance in the data. So let's plot the variances of all the features.
End of explanation
"""
import sklearn.datasets as ds
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler # performs z-normalization on each feature
z_scaler = StandardScaler()
data = ds.load_breast_cancer()['data']
z_data = z_scaler.fit_transform(data)
pca_trafo = PCA().fit(z_data)
plt.plot(pca_trafo.explained_variance_ratio_, '--o') # explained variance per component
plt.plot(pca_trafo.explained_variance_ratio_.cumsum(), '--o') # cumulative sum of explained variance, useful when deciding how many components to keep
plt.ylim((0,1.0))
"""
Explanation: Something seems off here
PCA returned practically the same thing as simply computing the variance per feature.
When using PCA, you always need to normalize the data first
PCA tries to explain as much variance in the data as possible. If different features have different variances, it will try harder on some of them.
Let's do the same thing again, but normalize the data first
End of explanation
"""
import sklearn.datasets as ds
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
z_scaler = StandardScaler()
data = ds.load_breast_cancer()['data']
pca_trafo = PCA()
z_data = z_scaler.fit_transform(data)
pca_data = pca_trafo.fit_transform(z_data)
ax = seaborn.heatmap(np.log(pca_trafo.inverse_transform(np.eye(data.shape[1]))), cmap="hot", cbar=False)
ax.set_xlabel('features')
ax.set_ylabel('components')
"""
Explanation: Now I need a few more components to explain the same amount of variance
The features seem to have a more balanced contribution
What will the heatmap look like?
End of explanation
"""
means = np.mean(pca_trafo.inverse_transform(np.eye(data.shape[1])), axis=0)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(means)
ax.set_ylabel('mean contrib. in components')
ax.set_xlabel('feature #')
"""
Explanation: Now the heatmap looks random and cannot clearly be used to determine the most important features
We can also compute the average contribution per feature
End of explanation
"""
iris = datasets.load_iris()
iris.data.shape
# generate 20 completely random features and append them to the original data
E = np.random.uniform(0, 0.1, size=(len(iris.data), 20))
X = np.hstack((iris.data, E))
y = iris.target
print('Shape of the original data', iris.data.shape)
print('Shape of the modified data', X.shape)
X_indices = np.arange(X.shape[-1])
z_scaler = StandardScaler()
pca_trafo = PCA()
z_data = z_scaler.fit_transform(X)
pca_data = pca_trafo.fit_transform(z_data)
ax = seaborn.heatmap(np.log(pca_trafo.inverse_transform(np.eye(X.shape[1]))), cmap="hot", cbar=False)
ax.set_xlabel('features')
ax.set_ylabel('components')
"""
Explanation: After normalizing the data, the average contribution for all features hovers close to 0. We therefore cannot say which one is the most important; PCA only told us which one has the most variance.
This can also be computed much more simply
It does not reflect the structure of the data, only its variance
Using PCA for feature selection therefore does not make much sense. For dimensionality reduction, however, it does.
If you have a categorical predicted value, consider LDA instead
Let's look at what PCA returns on the dataset from the example at the beginning
End of explanation
"""
means = np.mean(pca_trafo.inverse_transform(np.eye(X.shape[1])), axis=0)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.bar(X_indices, means)
ax.set_ylabel('mean contrib. in components')
ax.set_xlabel('feature #')
"""
Explanation: No clear trend can be read from this heatmap. Let's also look at the average contributions to the components per feature.
End of explanation
"""
|
ceteri/pytextrank | explain_summ.ipynb | apache-2.0 | import warnings
warnings.filterwarnings("ignore")
import spacy
nlp = spacy.load("en_core_web_sm")
"""
Explanation: Explain PyTextRank: extractive summarization
How does PyTextRank perform extractive summarization on a text document?
First we perform some basic housekeeping for Jupyter, then load spaCy with a language model for English ...
End of explanation
"""
text = "Compatibility of systems of linear constraints over the set of natural numbers. Criteria of compatibility of a system of linear Diophantine equations, strict inequations, and nonstrict inequations are considered. Upper bounds for components of a minimal set of solutions and algorithms of construction of minimal generating sets of solutions for all types of systems are given. These criteria and the corresponding algorithms for constructing a minimal supporting set of solutions can be used in solving all the considered types systems and systems of mixed types."
"""
Explanation: Create some text to use....
End of explanation
"""
import pytextrank
tr = pytextrank.TextRank()
nlp.add_pipe(tr.PipelineComponent, name="textrank", last=True)
doc = nlp(text)
"""
Explanation: Then add PyTextRank into the spaCy pipeline...
End of explanation
"""
for p in doc._.phrases:
print("{:.4f} {:5d} {}".format(p.rank, p.count, p.text))
print(p.chunks)
"""
Explanation: Examine the results: a list of top-ranked phrases in the document
End of explanation
"""
sent_bounds = [ [s.start, s.end, set([])] for s in doc.sents ]
sent_bounds
"""
Explanation: Construct a list of the sentence boundaries with a phrase vector (initialized to empty set) for each...
End of explanation
"""
limit_phrases = 4
phrase_id = 0
unit_vector = []
for p in doc._.phrases:
print(phrase_id, p.text, p.rank)
unit_vector.append(p.rank)
for chunk in p.chunks:
print(" ", chunk.start, chunk.end)
for sent_start, sent_end, sent_vector in sent_bounds:
if chunk.start >= sent_start and chunk.start <= sent_end:
print(" ", sent_start, chunk.start, chunk.end, sent_end)
sent_vector.add(phrase_id)
break
phrase_id += 1
if phrase_id == limit_phrases:
break
"""
Explanation: Iterate through the top-ranked phrases, adding them to the phrase vector for each sentence...
End of explanation
"""
sent_bounds
for sent in doc.sents:
print(sent)
"""
Explanation: Let's take a look at the results...
End of explanation
"""
unit_vector
sum_ranks = sum(unit_vector)
unit_vector = [ rank/sum_ranks for rank in unit_vector ]
unit_vector
"""
Explanation: We also construct a unit_vector for all of the phrases, up to the limit requested...
End of explanation
"""
from math import sqrt
sent_rank = {}
sent_id = 0
for sent_start, sent_end, sent_vector in sent_bounds:
print(sent_vector)
sum_sq = 0.0
for phrase_id in range(len(unit_vector)):
print(phrase_id, unit_vector[phrase_id])
if phrase_id not in sent_vector:
sum_sq += unit_vector[phrase_id]**2.0
sent_rank[sent_id] = sqrt(sum_sq)
sent_id += 1
print(sent_rank)
"""
Explanation: Iterate through each sentence, calculating its Euclidean distance from the unit vector...
End of explanation
"""
from operator import itemgetter
sorted(sent_rank.items(), key=itemgetter(1))
"""
Explanation: Sort the sentence indexes by increasing distance (best summary sentences first)
End of explanation
"""
limit_sentences = 2
sent_text = {}
sent_id = 0
for sent in doc.sents:
sent_text[sent_id] = sent.text
sent_id += 1
num_sent = 0
for sent_id, rank in sorted(sent_rank.items(), key=itemgetter(1)):
print(sent_id, sent_text[sent_id])
num_sent += 1
if num_sent == limit_sentences:
break
"""
Explanation: Extract the sentences with the lowest distance, up to the limit requested...
End of explanation
"""
|
arcyfelix/Courses | 18-11-22-Deep-Learning-with-PyTorch/02-Introduction to PyTorch/Part 6 - Saving and Loading Models.ipynb | apache-2.0 | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms
import helper
import fc_model
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('F_MNIST_data/',
download=True,
train=True,
transform=transform)
trainloader = torch.utils.data.DataLoader(dataset=trainset,
batch_size=64,
shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('F_MNIST_data/',
download=True,
train=False,
transform=transform)
testloader = torch.utils.data.DataLoader(dataset=testset,
batch_size=64,
shuffle=True)
"""
Explanation: Saving and Loading Models
In this notebook, I'll show you how to save and load models with PyTorch. This is important because you'll often want to load previously trained models to use in making predictions or to continue training on new data.
End of explanation
"""
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
"""
Explanation: Here we can see one of the images.
End of explanation
"""
# Create the network, define the criterion and optimizer
model = fc_model.Network(784, 10, [512, 256, 128])
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
fc_model.train(model,
trainloader,
testloader,
criterion,
optimizer,
epochs=2)
"""
Explanation: Train a network
To make things more concise here, I moved the model architecture and training code from the last part to a file called fc_model. Importing this, we can easily create a fully-connected network with fc_model.Network, and train the network using fc_model.train. I'll use this model (once it's trained) to demonstrate how we can save and load models.
End of explanation
"""
print("Our model: \n\n", model, '\n')
print("The state dict keys: \n\n", model.state_dict().keys())
"""
Explanation: Saving and loading networks
As you can imagine, it's impractical to train a network every time you need to use it. Instead, we can save trained networks then load them later to train more or use them for predictions.
The parameters for PyTorch networks are stored in a model's state_dict. We can see the state dict contains the weight and bias matrices for each of our layers.
End of explanation
"""
torch.save(model.state_dict(), './models/checkpoint.pth')
"""
Explanation: The simplest thing to do is simply save the state dict with torch.save. For example, we can save it to a file 'checkpoint.pth'.
End of explanation
"""
state_dict = torch.load('./models/checkpoint.pth')
print(state_dict.keys())
"""
Explanation: Then we can load the state dict with torch.load.
End of explanation
"""
model.load_state_dict(state_dict)
"""
Explanation: And to load the state dict in to the network, you do model.load_state_dict(state_dict).
End of explanation
"""
checkpoint = {'input_size': 784,
'output_size': 10,
'hidden_layers': [each.out_features for each in model.hidden_layers],
'state_dict': model.state_dict()}
torch.save(checkpoint, './models/checkpoint.pth')
"""
Explanation: Seems pretty straightforward, but as usual it's a bit more complicated. Loading the state dict works only if the model architecture is exactly the same as the checkpoint architecture. If I create a model with a different architecture, this fails.
This means we need to rebuild the model exactly as it was when trained. Information about the model architecture needs to be saved in the checkpoint, along with the state dict.
End of explanation
"""
def load_checkpoint(filepath):
checkpoint = torch.load(filepath)
model = fc_model.Network(checkpoint['input_size'],
checkpoint['output_size'],
checkpoint['hidden_layers'])
model.load_state_dict(checkpoint['state_dict'])
return model
model = load_checkpoint('./models/checkpoint.pth')
print(model)
for name, param in model.named_parameters():
if param.requires_grad:
print(name)
print(':')
print(param.data)
name, params = next(model.named_parameters())
name
params
"""
Explanation: Now the checkpoint has all the necessary information to rebuild the trained model. You can easily make that a function if you want. Similarly, we can write a function to load checkpoints.
End of explanation
"""
|
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies | ex30-Identify_North_Atlantic_winter_weather_regimes by KMeans.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import cartopy.crs as ccrs
from sklearn.cluster import KMeans
"""
Explanation: Identify North Atlantic Winter Weather Regimes by K-means Clustering
The four weather regimes typically found over the North Atlantic in winter are identified as
* NAO+ (positive NAO)
* NAO- (negative NAO)
* Blocking
* Atlantic Ridge.
Each weather regime is associated with different climatic conditions over Europe and North America (Cassou, 2008). In particular, the negative NAO and the blocking regimes are generally associated with cold extreme temperatures over Europe and the eastern United States (US) (Yiou and Nogaj 2004).
The North Atlantic winter weather(DJF) regimes are computed using a k-mean clustering algorithm applied to the monthly anomalies of the 500 hPa geopotential height (Z500) on the NCEP/NCAR reanalysis. The monthly anomalies are with respect to the 1979–2010 climatology and are computed over the [90W/60E; 20/80N] domain.
k-means clustering is a method of vector quantization, originally from signal processing, that is popular for cluster analysis in data mining. k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells (https://en.wikipedia.org/wiki/K-means_clustering).
sklearn.cluster.KMeans is used in this notebook.
1. Load all needed libraries
End of explanation
"""
z500 = xr.open_dataset('data\z500.DJF.anom.1979.2010.nc', decode_times=False)
print(z500)
da = z500.sel(P=500).phi.load()
print(da.name, da.dims)
print(da.coords)
"""
Explanation: 2. Load data
End of explanation
"""
data = da.values
nt,ny,nx = data.shape
data = np.reshape(data, [nt, ny*nx], order='F')
mk = KMeans(n_clusters=4, random_state=0, n_jobs=-1).fit(data)
"""
Explanation: 3. Perform KMeans clustering to idenfity weather regimes
It is worth noting that sklearn.cluster.KMeans only support dimensions<=2. Have to convert 3D (time|lat|lon) data into 2D (time|lat*lon) using numpy.reshape. When visualizing the final identified cluster_centers(i.e., weaterh regions), have to convert them back from 1D to 2d spatial format (lat|lon).
End of explanation
"""
def get_cluster_fraction(m, label):
return (m.labels_==label).sum()/(m.labels_.size*1.0)
"""
Explanation: Get the fraction of samples in a given cluster, denoted by label.
By default the labels run from 0 to n_clusters-1. In this case, they should be 0, 1, 2, 3.
End of explanation
"""
x,y = np.meshgrid(da.X, da.Y)
proj = ccrs.Orthographic(0,45)
fig, axes = plt.subplots(2,2, figsize=(8,8), subplot_kw=dict(projection=proj))
regimes = ['NAO$^-$', 'NAO$^+$', 'Blocking', 'Atlantic Ridge']
tags = list('abcd')
for i in range(mk.n_clusters):
onecen = mk.cluster_centers_[i,:].reshape(ny,nx, order='F')
cs = axes.flat[i].contourf(x, y, onecen,
levels=np.arange(-150, 151, 30),
transform=ccrs.PlateCarree(),
cmap='RdBu_r')
cb=fig.colorbar(cs, ax=axes.flat[i], shrink=0.8, aspect=20)
cb.set_label('[unit: m]',labelpad=-7)
axes.flat[i].coastlines()
axes.flat[i].set_global()
title = '{}, {:4.1f}%'.format(regimes[i], get_cluster_fraction(mk, i)*100)
axes.flat[i].set_title(title)
plt.text(0, 1, tags[i],
transform=axes.flat[i].transAxes,
va='bottom',
fontsize=plt.rcParams['font.size']*2,
fontweight='bold')
"""
Explanation: 4. Visualize weather regimes
End of explanation
"""
|
d00d/quantNotebooks | Notebooks/quantopian_research_public/tutorials/pipeline/pipeline_tutorial_lesson_8.ipynb | unlicense | from quantopian.pipeline.data import morningstar
# Since the underlying data of morningstar.share_class_reference.exchange_id
# is of type string, .latest returns a Classifier
exchange = morningstar.share_class_reference.exchange_id.latest
"""
Explanation: Classifiers
A classifier is a function from an asset and a moment in time to a categorical output such as a string or integer label:
F(asset, timestamp) -> category
An example of a classifier producing a string output is the exchange ID of a security. To create this classifier, we'll have to import morningstar.share_class_reference.exchange_id and use the latest attribute to instantiate our classifier:
End of explanation
"""
from quantopian.pipeline.classifiers.morningstar import Sector
morningstar_sector = Sector()
"""
Explanation: Previously, we saw that the latest attribute produced an instance of a Factor. In this case, since the underlying data is of type string, latest produces a Classifier.
Similarly, a computation producing the latest Morningstar sector code of a security is a Classifier. In this case, the underlying type is an int, but the integer doesn't represent a numerical value (it's a category) so it produces a classifier. To get the latest sector code, we can use the built-in Sector classifier.
End of explanation
"""
nyse_filter = exchange.eq('NYS')
"""
Explanation: Using Sector is equivalent to morningstar.asset_classification.morningstar_sector_code.latest.
Building Filters from Classifiers
Classifiers can also be used to produce filters with methods like isnull, eq, and startswith. The full list of Classifier methods producing Filters can be found here.
As an example, if we wanted a filter to select for securities trading on the New York Stock Exchange, we can use the eq method of our exchange classifier.
End of explanation
"""
dollar_volume_decile = AverageDollarVolume(window_length=10).deciles()
top_decile = (dollar_volume_decile.eq(9))
"""
Explanation: This filter will return True for securities having 'NYS' as their most recent exchange_id.
Quantiles
Classifiers can also be produced from various Factor methods. The most general of these is the quantiles method which accepts a bin count as an argument. The quantiles method assigns a label from 0 to (bins - 1) to every non-NaN data point in the factor output and returns a Classifier with these labels. NaNs are labeled with -1. Aliases are available for quartiles (quantiles(4)), quintiles (quantiles(5)), and deciles (quantiles(10)). As an example, this is what a filter for the top decile of a factor might look like:
End of explanation
"""
def make_pipeline():
exchange = morningstar.share_class_reference.exchange_id.latest
nyse_filter = exchange.eq('NYS')
morningstar_sector = Sector()
dollar_volume_decile = AverageDollarVolume(window_length=10).deciles()
top_decile = (dollar_volume_decile.eq(9))
return Pipeline(
columns={
'exchange': exchange,
'sector_code': morningstar_sector,
'dollar_volume_decile': dollar_volume_decile
},
screen=(nyse_filter & top_decile)
)
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
print 'Number of securities that passed the filter: %d' % len(result)
result.head(5)
"""
Explanation: Let's put each of our classifiers into a pipeline and run it to see what they look like.
End of explanation
"""
|
turi-code/tutorials | strata-sj-2016/intro-ml/sentiment_analysis.ipynb | apache-2.0 | !head -n 2 ../data/yelp/yelp_training_set_review.json
reviews = gl.SFrame.read_csv('../data/yelp/yelp_training_set_review.json', header = False)
reviews
reviews[0]
"""
Explanation: 1. Task: Predicting sentiment from product reviews
The goal of this task is to determine whether a particular review has a positive or negative sentiment associated with it.
Input : Raw text blob of review data
My wife took me here on my birthday for breakfast and it was excellent. The weather was perfect which made sitting outside overlooking their grounds an absolute pleasure.
Output : Positive!
2. Getting access to data
End of explanation
"""
reviews=reviews.unpack('X1','')
reviews
"""
Explanation: Unpack to extract structure
End of explanation
"""
reviews = reviews.unpack('votes', '')
reviews
"""
Explanation: Votes are still crammed in a dictionary. Let's unpack it.
End of explanation
"""
reviews.show()
gl.canvas.set_target('ipynb')
"""
Explanation: Quick data visualization
End of explanation
"""
reviews['stars'].show(view = 'Categorical')
#ignore all 3* reviews
reviews = reviews[reviews['stars'] != 3]
#positive sentiment = 4* or 5* reviews
reviews['sentiment'] = reviews['stars'] >=4
reviews['sentiment'].show(view = 'Categorical')
"""
Explanation: 3. Problem formulation
Define what's a positive and a negative sentiment
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. Reviews with a rating of 4 or higher will be considered positive, while those with a rating of 2 or lower will have a negative sentiment.
End of explanation
"""
reviews['word_count'] = gl.text_analytics.count_words(reviews['text'])
reviews['word_count']
"""
Explanation: 4. Feature engineering
The goal is to convert data of the following form into something that is useful for machine learning.
'My wife took me here on my birthday for breakfast and it was excellent. The weather was perfect which made sitting outside overlooking their grounds an absolute pleasure. Our waitress was excellent and our food arrived quickly on the semi-busy Saturday morning. It looked like the place fills up pretty quickly so the earlier you get here the better.\n\nDo yourself a favor and get their Bloody Mary. It was phenomenal and simply the best I\'ve ever had. I\'m pretty sure they only use ingredients from their garden and blend them fresh when you order it. It was amazing.\n\nWhile EVERYTHING on the menu looks excellent, I had the white truffle scrambled eggs vegetable skillet and it was tasty and delicious. It came with 2 pieces of their griddled bread with was amazing and it absolutely made the meal complete. It was the best "toast" I\'ve ever had.\n\nAnyway, I can\'t wait to go back!',
End of explanation
"""
train_data, test_data = reviews.random_split(.8, seed=0)
sentiment_model = gl.logistic_classifier.create(train_data,
target='sentiment',
features=['word_count'],
validation_set=test_data)
"""
Explanation: 5. Model/Algorithm selection & training
Finally, we are ready to train a model.
End of explanation
"""
sentiment_model.evaluate(test_data, metric='roc_curve')
sentiment_model.show(view='Evaluation')
"""
Explanation: 6a. Evaluate the model (Quantitatively)
End of explanation
"""
most_popular_business = 'VVeogjZya58oiTxK7qUjAQ'
most_popular_business_data = test_data[test_data['business_id'] == most_popular_business]
most_popular_business_data
"""
Explanation: 6b. Evaluate the model (Qualitatively)
Let us start by picking the most popular restaurant
End of explanation
"""
most_popular_business_data['predictions'] = sentiment_model.predict(most_popular_business_data,
output_type = 'probability')
most_popular_business_data = most_popular_business_data.sort('predictions')
"""
Explanation: Sort the reviews based on the predicted sentiment and explore
End of explanation
"""
print most_popular_business_data['text'][1]
"""
Explanation: Explore some very bad sentiment reviews
End of explanation
"""
print most_popular_business_data['text'][-2]
# 7. Deployment
"""
Explanation: Explore some very good sentiment reviews
End of explanation
"""
|
tayden/titanic-death-decider | titanic-death-decider.ipynb | mit | import numpy as np
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Read the input datasets
train_data = pd.read_csv('../input/train.csv')
test_data = pd.read_csv('../input/test.csv')
# Fill missing numeric values with the mean for that column
train_data['Age'].fillna(train_data['Age'].mean(), inplace=True)
test_data['Age'].fillna(test_data['Age'].mean(), inplace=True)
test_data['Fare'].fillna(test_data['Fare'].mean(), inplace=True)
print(train_data.info())
print(test_data.info())
"""
Explanation: First the data is loaded into Pandas data frames
End of explanation
"""
# Encode sex as int 0=female, 1=male
train_data['Sex'] = train_data['Sex'].apply(lambda x: int(x == 'male'))
# Extract the features we want to use
X = train_data[['Pclass', 'Sex', 'Age', 'Fare', 'SibSp', 'Parch']].as_matrix()
print(np.shape(X))
# Extract survival target
y = train_data[['Survived']].values.ravel()
print(np.shape(y))
"""
Explanation: Next select a subset of our train_data to use for training the model
End of explanation
"""
from sklearn.svm import SVC
from sklearn.model_selection import KFold, cross_val_score
from sklearn.preprocessing import MinMaxScaler
# Build the classifier
kf = KFold(n_splits=3)
model = SVC(kernel='rbf', C=300)
scores = []
for train, test in kf.split(X):
# Normalize training and test data using train data norm parameters
normalizer = MinMaxScaler().fit(X[train])
X_train = normalizer.transform(X[train])
X_test = normalizer.transform(X[test])
scores.append(model.fit(X_train, y[train]).score(X_test, y[test]))
print("Mean 3-fold cross validation accuracy: %s" % np.mean(scores))
"""
Explanation: Now train the SVM classifier and get validation accuracy using K-Folds cross validation
End of explanation
"""
# Create model with all training data
normalizer = MinMaxScaler().fit(X)
X = normalizer.transform(X)
classifier = model.fit(X, y)
# Encode sex as int 0=female, 1=male
test_data['Sex'] = test_data['Sex'].apply(lambda x: int(x == 'male'))
# Extract desired features
X_ = test_data[['Pclass', 'Sex', 'Age', 'Fare', 'SibSp', 'Parch']].as_matrix()
X_ = normalizer.transform(X_)
# Predict if passengers survived using model
y_ = classifier.predict(X_)
# Append the survived attribute to the test data
test_data['Survived'] = y_
predictions = test_data[['PassengerId', 'Survived']]
print(predictions)
# Save the output for submission
predictions.to_csv('submission.csv', index=False)
"""
Explanation: Make predictions on the test data and output the results
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session08/Day1/IntroToSQLiteSolutions.ipynb | mit | import matplotlib.pyplot as plt
%matplotlib notebook
"""
Explanation: Introduction to SQLite &
Selecting Sources from the Sloan Digital Sky Survey
Version 0.1
By AA Miller 2019 Mar 25
As noted earlier, there will be full lectures on databases over the remainder of this week.
This notebook provides a quick introduction to SQLite a lightweight implementation of a Structured Query Language (SQL) database. One of the incredibly nice things about SQLite is the low overhead needed to set up a database (as you will see in a minute). We will take advantage of this low overhead to build a database later in the week.
End of explanation
"""
import sqlite3
"""
Explanation: At the most basic level - databases store your bytes, and later return those bytes (or a subset of them) when queried.
They provide a highly efficient means for filtering your bytes (there are many different strategies that the user can employ).
The backend for most databases is the Structured Query Language or SQL, which is a standard declarative language.
There are many different libraries that implement SQL: MySQL, PostgreSQL, Greenplum, Microsoft SQL server, IBM DB2, Oracle Database, etc.
Problem 1) Basic SQL Operations with SQLite
The most basic implementation is SQLite a self-contained, SQL database engine. We will discuss SQLite further later in the week, but in brief - it is a nice stand alone package that works really well for small problems (such as the example that we are about to encounter).
End of explanation
"""
conn = sqlite3.connect("intro.db")
cur = conn.cursor()
"""
Explanation: Without diving too much into the weeds (we'll investigate this further later this week), we need to establish a connection to the database. From the connection we create a cursor, which allows us to actually interact with the database.
End of explanation
"""
cur.execute( # complete
cur.execute("""create table DSFPstudents(
Name text,
Institution text,
Year tinyint
)""")
"""
Explanation: And just like that - we have now created a new database intro.db, with which we can "store bytes" or later "retrieve bytes" once we have added some data to the database.
Aside - note that unlike many SQL libraries, SQLite does not require a server and creates an actual database file on your hard drive. This improves portability, but also comes with some downsides.
Now we need to create a table and insert some data. We will interact with the database via the execute() method for the cursor object.
Recall that creating a table requires a specification of the table name, the columns in the table, and the data type for each column. Here's an example where I create a table to store info on my pets:
cur.execute("""create table PetInfo(
Name text,
Species text,
Age tinyint,
FavoriteFood text
)""")
Problem 1a
Create a new table in the database called DSFPstudents with columns Name, Institution, and Year, where Year is the year in graduate school.
End of explanation
"""
cur.execute( # complete
cur.execute("""insert into DSFPstudents(Name, Institution, Year)
values ("Adam Miller", "Northwestern", 13)""")
cur.execute("""insert into DSFPstudents(Name, Institution, Year)
values ("Lucianne Walkowicz", "Adler", 14)""")
"""
Explanation: Once a table is created, we can use the database to store bytes. If I were to populate my PetInfo table I would do the following:
cur.execute("""insert into PetInfo(Name, Species, Age, FavoriteFood)
values ("Rocky", "Dog", 12, "Bo-Nana")""")
cur.execute("""insert into PetInfo(Name, Species, Age, FavoriteFood)
values ("100 Emoji-Flames Emoji", "Red Panda", 2, "bamboo leaves")""")
Note - column names do not need to be explicitly specified, but for clarity this is always preferred.
Problem 1b
Insert data for yourself, and the two people sitting next to you into the database.
End of explanation
"""
cur.execute( # complete
cur.fetchall()
cur.execute("""select Institution from DSFPstudents where year > 2""")
cur.fetchall()
"""
Explanation: Now that we have bytes in the database, we can retrieve those bytes with one (or several) queries. There are 3 basic building blocks to a query:
SELECT...
FROM...
WHERE...
Where SELECT specifies the information we want to retrieve from the database, FROM specifies the tables being queried in the database, and WHERE specifies the conditions for the query.
Problem 1c
Select the institutions for all students in the DSFPstudents table who have been in grad school for more than 2 years.
Hint - to display the results of your query run cur.fetchall().
End of explanation
"""
# you may need to run conda install -c astropy astroquery
from astroquery.sdss import SDSS
"""
Explanation: In closing this brief introduction to databases, note that good databases follow the 4 ACID properties:
Atomicity
Consistency
Isolation
Durability
In closing this brief introduction to databases, note that good databases follow the 4 ACID properties:
Atomicity - all parts of transaction succeed, or rollback state of database
Consistency
Isolation
Durability
In closing this brief introduction to databases, note that good databases follow the 4 ACID properties:
Atomicity - all parts of transaction succeed, or rollback state of database
Consistency - data always meets validation rules
Isolation
Durability
In closing this brief introduction to databases, note that good databases follow the 4 ACID properties:
Atomicity - all parts of transaction succeed, or rollback state of database
Consistency - data always meets validation rules
Isolation - no interference across transactions (even if concurrent)
Durability
In closing this brief introduction to databases, note that good databases follow the 4 ACID properties:
Atomicity - all parts of transaction succeed, or rollback state of database
Consistency - data always meets validation rules
Isolation - no interference across transactions (even if concurrent)
Durability - a committed transaction remains committed (even if there's a power outage, etc)
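Atomicity in particular is easy to demonstrate with the sqlite3 module itself — a sketch with an in-memory database (the Accounts table and the simulated failure are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("create table Accounts(Name text, Balance int)")
cur.executemany("insert into Accounts values (?, ?)", [("A", 100), ("B", 0)])
conn.commit()

try:
    # Move 50 from A to B, but simulate a crash between the two updates.
    cur.execute("update Accounts set Balance = Balance - 50 where Name = 'A'")
    raise RuntimeError("simulated crash before the matching credit")
except RuntimeError:
    conn.rollback()  # atomicity: the half-finished transfer is undone

balances = dict(cur.execute("select Name, Balance from Accounts"))
```

After the rollback, neither account reflects the partial update — the transaction either fully succeeds or leaves no trace.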
Problem 2) Complex Queries with SDSS
Above we looked at the most basic operations possible with a database (recall - databases are unnecessary, and possibly cumbersome, with small data sets). A typical database consists of many tables, and these tables may be joined together to unlock complex questions for the data.
As a reminder on (some of) this functionality, we are now going to go through some problems using the SDSS database. The full SDSS schema explains all of the tables, columns, views and functions for querying the database. We will keep things relatively simple in that regard.
End of explanation
"""
SDSS.query_sql( # complete
SDSS.query_sql("""select top 20 * from PhotoObjAll""")
"""
Explanation: astroquery enables seamless connections to the SDSS database via the Python shell.
Problem 2a
Select 20 random sources from the PhotoObjAll table and return all columns in the table.
Hint - while this would normally be accomplished by starting the query select limit 20 ..., SDSS CasJobs uses Microsoft's SQL Server, which adopts select top 20 ... to accomplish an identical result.
End of explanation
"""
SDSS.query_sql( # complete
SDSS.query_sql("""select top 20 objid, cModelMag_u, cModelMag_g, cModelMag_r, cModelMag_i, cModelMag_z,
class
from photoobjall p
inner join specobjall s on p.objid = s.bestobjid""")
"""
Explanation: That's more columns than we will likely ever need. Instead, let's focus on objID, a unique identifier, cModelMag_u, cModelMag_g, cModelMag_r, cModelMag_i, and cModelMag_z, the source magnitude in $u', g', r', i', z'$, respectively.
We will now introduce the concept of joining two tables.
The most common operation is known as an inner join (which is often referred to as just join). An inner join returns records that have matching sources in both tables in the join.
Less common, but still powerful, is the outer join. An outer join returns all records in either table, with NULL values for the columns of a table in which the record does not exist.
Specialized versions of the outer join include the left join and right join, whereby all records in either the left or right table, respectively, are returned along with their counterparts.
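The difference between an inner join and a left join can be sketched with sqlite3 (the Photo/Spec tables below are toy stand-ins for photoObjAll/specObjAll):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("create table Photo(objid int, mag real)")
cur.execute("create table Spec(bestobjid int, class text)")
cur.executemany("insert into Photo values (?, ?)", [(1, 17.2), (2, 19.5)])
cur.execute("insert into Spec values (1, 'GALAXY')")  # objid 2 has no spectrum

inner = cur.execute("select p.objid, s.class from Photo p "
                    "inner join Spec s on p.objid = s.bestobjid "
                    "order by p.objid").fetchall()
left = cur.execute("select p.objid, s.class from Photo p "
                   "left join Spec s on p.objid = s.bestobjid "
                   "order by p.objid").fetchall()
```

The inner join silently drops the unmatched source, while the left join keeps it with a NULL class — exactly the behavior Problems 2b and 2c explore.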
Problem 2b
Select objid and $u'g'r'i'z'$ from PhotoObjAll and the corresponding class from specObjAll for 20 random sources.
There are multiple columns you could use to join the tables, in this case match objid to bestobjid from specObjAll and use an inner join.
End of explanation
"""
SDSS.query_sql("""select top 20 objid, cModelMag_u, cModelMag_g, cModelMag_r, cModelMag_i, cModelMag_z,
class
from photoobjall p
left outer join specobjall s on p.objid = s.bestobjid""")
"""
Explanation: Problem 2c
Perform an identical query to the one above, but this time use a left outer join (or left join).
How do your results compare to the previous query?
End of explanation
"""
SDSS.query_sql("""select top 20 objid, cModelMag_u, cModelMag_g, cModelMag_r, cModelMag_i, cModelMag_z,
class
from photoobjall p
right outer join specobjall s on s.bestobjid = p.objid
""")
"""
Explanation: Problem 2d
This time use a right outer join (or right join).
How do your results compare to the previous query?
End of explanation
"""
SDSS.query_sql("""select rm.*
from
(select r.objid, r.sourcename, r.ra, r.dec, r.cps, r.hr1, r.hr2, cModelMag_u, cModelMag_g, cModelMag_r, cModelMag_i, cModelMag_z
from photoobjall p join rosat r on p.objid = r.objid
where (cModelFlux_u + cModelFlux_g + cModelFlux_r + cModelFlux_i + cModelFlux_z > 10000)
and p.type = 3) as rm
left join specobjall p on rm.objid = p.bestobjid
where p.bestobjid is null
""")
"""
Explanation: Challenge Problem
To close the notebook we will perform a nested query. In brief, the idea is to join the results of one query with a separate query.
Here, we are going to attempt to identify bright AGN that don't have SDSS spectra. To do so we will need the photoObjAll table, the specObjAll table, and the rosat table, which includes all cross matches between SDSS sources and X-ray sources detected by the Rosat satellite.
Create a nested query that selects all Rosat sources that don't have SDSS spectra with cModelFlux_u + cModelFlux_g + cModelFlux_r + cModelFlux_i + cModelFlux_z > 10000 (this flux constraint ensures the source is bright without making any cuts on color) and type = 3; this last constraint means the source is extended in SDSS images.
Hint - you may run into timeout issues in which case you should run the query on CasJobs.
End of explanation
"""
|
lseongjoo/learn-python | function.ipynb | mit | def greetings(hour, lang='kr', extra_msg=None):
# Validate the hour value
if hour < 0 or hour > 24:
return
# Choose the messages according to the language
msgs = {'kr': [u'좋은', u'아침', u'오후', u'저녁', u'밤'],
'en': [u'Good', u'morning', u'afternoon', u'evening',
u'night']}
# If extra messages were supplied, merge them in
if not extra_msg is None:
for key, value in extra_msg.items():
msgs[key] = value
# Exit the function if the language is not supported
if not lang in msgs:
return
msg_prefix = msgs[lang][0]
# Build the message based on the hour
msg = msg_prefix + ' '
if 6 < hour < 12:
msg += msgs[lang][1]
elif 12<= hour < 18:
msg += msgs[lang][2]
elif 18<= hour < 21:
msg += msgs[lang][3]
else:
msg += msgs[lang][4]
return msg
print(greetings(9, lang='fr',
extra_msg={'fr':
['bon',
'jour', 'soir', 'nuit']}))
print(greetings(13))
print(greetings(19))
print(greetings(22))
print(greetings(-2))
# The effect of return
def many_exits(exit_no):
if exit_no==1:
return 'Exit 1'
if exit_no==2:
return 'Exit 2'
if exit_no==3:
return 'Exit 3'
return 'No such exit.'
print(many_exits(1))
print(many_exits(2))
print(many_exits(9))
"""
Explanation: Default argument values
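A minimal sketch of default argument values (the greet function below is made up for illustration, simpler than the greetings example above):

```python
def greet(name, greeting='Hello', punctuation='!'):
    # Arguments that are omitted fall back to their declared defaults.
    return greeting + ', ' + name + punctuation

a = greet('World')                 # both defaults used
b = greet('World', greeting='Hi')  # override one default by keyword
```

Keyword arguments let callers override any subset of the defaults without repeating the others.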
End of explanation
"""
def juicer(ingredient, customer_name):
result = customer_name + u' 님, '
# TODO: check whether the ingredient is on the menu
menu = [u'딸기', u'사과', u'망고']
if not ingredient in menu:
return None
result += ingredient + u' 주스'
if ingredient == u'딸기':
price = 10
elif ingredient == u'사과':
price = 15
elif ingredient == u'망고':
price = 20
return result, price
result = juicer(u'딸기', u'성주')
if result is None:
print('No such menu item.')
else:
msg, price = result # unpack the result tuple
print(msg + u' 나왔습니다.')
print('가격: ' + str(price))
# Example output: 성주 님, 딸기 주스 나왔습니다. ("Seongju, your strawberry juice is ready.")
# Strawberry juice is 10 won.
# Apple juice is 15 won.
# Mango juice is 20 won.
"""
Explanation: Function arguments
An example for studying function arguments
End of explanation
"""
def swap(x,y):
return (y,x)
print(swap(1,2))
# 함수 정의
def foo(a,b):
return a*2, b*2
# 함수 호출
aa, bb = foo(1,2)
print(aa,bb)
result = foo(1,2)
print(type(result))
print(result[0], result[1])
"""
Explanation: Returning two or more results (return)
End of explanation
"""
def double(x):
x = x*2
x = 1
double(x)
print(x)
def square_not_safe(seq):
for i, n in enumerate(seq):
seq[i] = seq[i]**2
def square_safe(seq):
# copy the values
seq = list(seq[:])
for i, n in enumerate(seq):
seq[i] = seq[i]**2
return seq
nums = [1,2,3,4]
result = square_safe(nums)
print(nums)
print(result)
# list --> tuple
nums = tuple(nums)
square_not_safe(nums)
"""
Explanation: Effects on arguments
End of explanation
"""
def func(x):
y = x
func(1)
print(y)
"""
Explanation: Variables declared in a function's scope are local variables
End of explanation
"""
def countdown(n):
print('Countdown start!')
while n>0:
yield n
n -=1
c = countdown(10)
next(c)
next(c)
for c in countdown(10):
print(c, end=' ')
"""
Explanation: Generators
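A minimal generator sketch — note that in Python 3 a generator is advanced with the built-in next(gen) rather than the Python 2 gen.next() method used in the cell above:

```python
def countdown(n):
    # Yields n, n-1, ..., 1 lazily: one value per next() call.
    while n > 0:
        yield n
        n -= 1

c = countdown(3)
first = next(c)  # Python 3 spelling of the Python 2 c.next()
rest = list(c)   # the generator resumes where it left off
```

Because generators produce values on demand, the remaining values only exist once something (here, list) asks for them.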
End of explanation
"""
def its_odd(seq):
result = seq[::2]
return result
result = its_odd(u'파이썬')
print(result)
print(its_odd([1,2,3,4,5]))
type(its_odd)
its_odd(range(10))
xx = its_odd
xx(range(10))
def save_to_file(func, input_seq, filename, str_format=u'{} --> {}'):
output_seq = func(input_seq)
f = open(filename, 'w')
# TODO: encode the string
text_encoded = str_format.format(input_seq, output_seq).encode('utf-8')
f.write(text_encoded)
f.close()
save_to_file(its_odd, [1,2,3,4,5], 'its_odd_result.txt')
def its_even(seq):
return seq[1::2]
save_to_file(its_even, [1,2,3,4,5], 'its_even_result.txt')
"""
Explanation: Challenge
Define a function its_odd that takes a sequence, such as a list or a string, and returns only the odd-numbered (first, third, ...) elements.
Example:
result = its_odd(u'파이썬')
print(result) # '파썬'
result = its_odd([1,2,3,4,5])
print(result) # [1,3,5]
a. Save the input and output to the file its_odd_result.txt in the following format:
파이썬 --> 파썬
[1,2,3,4,5] --> [1,3,5]
End of explanation
"""
save_to_file(its_odd, u'파이썬', 'its_odd_result.txt')
"""
Explanation: But what happens when we try to save Unicode ... ?
End of explanation
"""
def generate_fibo(a=0,b=1,n=10):
fibos = [a,b]
# Append elements until the requested count is reached
while len(fibos)<n:
# Generate the next value and shift a, b forward
a,b = (b, a+b)
# Append the newly generated value to the result
fibos.append(b)
return fibos
generate_fibo()
"""
Explanation: Challenge
The Fibonacci sequence looks like this:
0 1 1 2 3 5 8 ...
a. Write a function generate_fibo that returns an arbitrary number n of Fibonacci values as a list.
b. Allow the two starting numbers to be specified when calling generate_fibo. If no arguments are given, the sequence starts with 0, 1. If the count is not specified, generate 10 numbers by default.
End of explanation
"""
from __future__ import print_function
import random
# 1. Create the card deck
def generate_card():
# build a 52-card deck
suits = ['Heart', 'Diamond', 'Clover', 'Spade']
ranks = list(range(2, 11)) + ['J', 'Q', 'K', 'A']  # list() keeps this working in Python 3
deck = []
for s in suits:
for r in ranks:
card = s + str(r)
deck.append(card)
return deck
# 2. Deal the cards
def play_card_game(deck, players):
# shuffle before dealing
random.shuffle(deck)
# now deal five cards to each player ...
for person in players:
person['hand'] = deck[:5]
deck = deck[5:]
return
deck = generate_card()
players = [{'name':'이성주'}, {'name':'김성주'}]
play_card_game(deck, players)
print(len(deck))
for person in players:
print(person['name'], end=': ')
print(person['hand'])
"""
Explanation: Challenge
Write a program that deals 5 poker cards to each participant from a deck of 52 cards. The deck must be shuffled randomly before dealing. There can be 2-4 participants.
a. After dealing, print all the cards each participant received.
Example:
이성주 : H2, D2, SJ, C10, S3
김성주 : C3, D4, CK, SK, H9
b. Print each participant's cards sorted in ascending order by rank.
End of explanation
"""
from __future__ import print_function
import random
# Create the 52-card deck
def gen_deck():
ranks = list(range(2,11))+['J', 'Q', 'K', 'A']
suits = ['Spade', 'Heart', 'Diamond', 'Clover']
deck = [] # initialize the deck
for s in suits:
for r in ranks:
deck.append((s, r))
# shuffle well
random.shuffle(deck)
return deck
def get_card_value(hand):
"""카드패의 숫자값 합계"""
# 현재 hand의 카드의 숫자를 모두 더한다.
value=0
for card in hand:
# rank value of the current card
rank = card[1]
if rank=='A':
value = value + 14
elif rank=='K':
value = value + 13
elif rank == 'Q':
value = value + 12
elif rank == 'J':
value = value + 11
else:
# numeric ranks are added directly
value = value + rank
return value
def play_blackjack(player, output_file=None):
# Start the blackjack game
deck=gen_deck()
# draw two cards
player['hand'] = [deck.pop(), deck.pop()]
while True:
play_log = u'Hand: {}'.format(player['hand'])
play_log += u'\n'
# total value of the hand
hand_value = get_card_value(player['hand'])
play_log += u'Hand value = {}'.format(hand_value)
play_log += u'\n'
if hand_value == 21:
play_log += u'Blackjack!!!!!!!'
play_log += u'\n'
print(play_log)
break
elif hand_value > 21:
play_log += u'Lost the money...'
play_log += '\n'
print(play_log)
break
elif hand_value < 21:
# draw one more card
player['hand'].append(deck.pop())
play_log += u'You never know how life turns out ... one more card'
play_log += u'\n'
print(play_log)
# write the game log to a file
if output_file is not None:
f = open(output_file, 'a')
text_encoded = play_log.encode('utf-8')
f.write(text_encoded)
f.close()
play_blackjack({'name':u'이성주'}, output_file='blackjack_log.txt')
deck = gen_deck()
hand = deck[:3]
print(hand)
get_card_value(hand)
"""
Explanation: Challenge
There is a deck of 52 poker cards. Use this deck to play a game of blackjack.
In this blackjack game, each participant first receives two cards.
Add up the values of the cards and check whether the total is 21.
If it is 21, blackjack! The game ends and the player wins.
If the total is less than 21, the player receives one more card.
If the total is greater than 21, the player loses the game.
a. Write the hands the player received to a file, e.g.:
참가자: 이성주 (participant)
2015-07-08
HJ, HK lost!
HJ, S10 blackjack!
End of explanation
"""
# Concatenate the elements of a list into a string
text = ''
for c in ['a','b','c']:
text += c
print(text)
# Join the elements of a sequence into a string
# using the Python standard library
import string
text = ' '.join(('Spade', str(6)))  # str.join works in both Python 2 and 3; string.join is Python 2 only
print(text)
# Convert a string to a list
'a,b,c'.split(',')
import string
def card_to_string(card):
return ' '.join((str(card[0]), str(card[1])))
def gen_hands(filename = 'hand_100.txt', n=100):
# Create a file of hands
f = open(filename, 'w')
for i in range(n):
hand = gen_deck()[:5]
# convert each card in the hand to a string
for card in hand:
card_str = card_to_string(card)
f.write(card_str)
f.write(',')
f.write('\n')
f.close()
from __future__ import print_function
def is_pocker(hand):
"""카드패가 포커(4장의 같은 숫자 포함)인지 확인"""
# TODO: detect four of a kind
"""
X O O O O
O X O O O
O O X O O
O O O X O
O O O O X
"""
# extract only the ranks from the hand
ranks = []
for card in hand:
ranks.append(str(card[1]))
# print(ranks)  # debug output
# This approach works by changing the data structure;
# there is probably a better way ...
if len(set(ranks)) == 2:
return True
return False
def to_list(hand_str_list):
hand_list = []
for l in hand_str_list:
# convert the hand string to a list
hand_str = l.split(',')[:-1]
# convert each card in the hand to a tuple
hand = []
for card in hand_str:
hand.append(tuple(card.split()))
hand_list.append(hand)
return hand_list
def check_pockers(filename):
"""파일을 읽어들여, 카드패에서 포커 패턴 탐지"""
f = open(filename)
hand_list = to_list(f.readlines())
f.close() # done reading the file
# check each hand for four of a kind
for hand in hand_list:
if is_pocker(hand):
print(hand, end=': ')
print('Four of a kind!')
is_pocker([('H',7), ('D',7), ('S',7), ('D',2), ('C',7)])
filename = 'hand_10000.txt'
gen_hands(filename, n=10000)
check_pockers(filename)
"""
Explanation: Challenge
Generate a file containing 100 random 5-card hands. Write a program that checks each hand in the file for four of a kind (4 cards with the same rank).
End of explanation
"""
|
LorenzoBi/courses | UQ/assignment_3/.ipynb_checkpoints/Assignment 3-checkpoint.ipynb | mit | import numpy as np
from scipy.special import binom
import matplotlib.pylab as plt
from scipy.misc import factorial as fact
%matplotlib inline
def binomial(p, n, k):
return binom(n, k) * p ** k * (1 - p) ** (n-k)
"""
Explanation: Assignment 3
Lorenzo Biasi and Michael Aichmüller
End of explanation
"""
p = 4. / 100
np.sum(binomial(p, 150, np.arange(5)))
"""
Explanation: Exercise 1.
a.
$\Omega$ will be all the possible combinations we have for 150 object two have two diffent values. For example (0, 0, ..., 0), (1, 0, ..., 0), (0, 1, ..., 0), ... (1, 1, ..., 0), ... (1, 1, ..., 1). This sample space has size of $2^{150}$. The random variable $X(\omega)$ will be the number of defective objects there are in the sample $\omega$. We can also define $Y(\omega) = 150 - X(\omega)$, that will be counting the number of checked items.
b.
The binomial distribution is the distribution that gives the probability of the number of "succeses" in a sequence of random and indipendent boolean values. This is the case for counting the number of broken object in a group of 150 and the probability of being broken of 4%.
c.
For computing the probability that at most 4 objects are broken we need to sum the probabilities that $k$ objects are broken with $k \in [0, 4]$.
$P(<5) = \sum_{k=0}^{4} P(X=k) = \sum_{k=0}^{4} {4\choose k}p^k(1-p)^{4-k}$
The probability is 28 %
End of explanation
"""
np.sum(binomial(p, 150, np.arange(5, 10)))
plt.bar(np.arange(20), binomial(p, 150, np.arange(20)))
plt.bar(np.arange(5), binomial(p, 150, np.arange(5)))
plt.bar(np.arange(5, 10), binomial(p, 150, np.arange(5,10)))
plt.xlabel('# defectives')
plt.ylabel('P(X=k)')
"""
Explanation: b.
The same of before just that this time $k \in [5, 9]$. The probability is 64%
End of explanation
"""
def not_same_birthday(q):
return np.prod((365 - np.arange(q))/ 365)
q = 45
p = np.empty(q - 1)
for i in range(1, q):
p[i - 1] = 1 - not_same_birthday(i)
plt.plot(np.arange(1, q), p)
plt.plot(23, 1 - not_same_birthday(23), 'r+', label='23 people')
plt.grid()
plt.ylabel('Probability')
plt.xlabel('q')
plt.legend()
1 - not_same_birthday(23)
"""
Explanation: Exercise 2.
For computing how big $q$ needs to be, we can compute the probability $p^*$ that nobody shares a birthday in a group of $q$ people and then compute $1 - p^*$. The first two people will not have the same birthday with probability $364/365$; the probability that the third also has a different birthday is $364/365 \cdot 363/365$, and so on until the last person. Carrying out the computation, one finds that the minimum group size for which the probability that at least two people share a birthday exceeds 50% is 23, with p = 50.73%.
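The product described above can be sketched directly (365 equally likely birthdays assumed):

```python
def p_shared_birthday(q):
    # 1 minus the probability that all q birthdays are distinct.
    p_distinct = 1.0
    for i in range(q):
        p_distinct *= (365 - i) / 365
    return 1 - p_distinct
```

Evaluating at q = 22 and q = 23 confirms that 23 is the smallest group crossing the 50% threshold.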
End of explanation
"""
import itertools
x = [1, 2, 3, 4, 5, 6]
omega = set([p for p in itertools.product(x, repeat=3)])
print(r'Omega has', len(omega), 'elements and they are:')
print(omega)
"""
Explanation: Exercise 3.
a.
Let's define $\Omega$ as all the possible combination we can have with 3 throws of a 6-faced dice. $\Omega$ will be then:
End of explanation
"""
g = binomial(1 / 6, 3, np.arange(4)) * np.array([-30, 50, 75, 100])
np.sum(g)
plt.bar(np.arange(4), g)
plt.plot([-.5, 3.5], np.ones(2) * np.sum(g), 'r')
"""
Explanation: X would be -30 when the sample $\omega$ has no 6s, 50 when it has one, 75 when it has two, and 100 when it has three. The probability distribution of such a variable is the binomial with $p = 1 / 6$, $n=3$, and $k$ the number of 6s.
So:
$P_X(X = -30) = {3\choose 0}(1 / 6)^0(1-1/6)^{3-0}$
$P_X(X = 50) = {3\choose 1}(1 / 6)^1(1-1/6)^{3-1}$
$P_X(X = 75) = {3\choose 2}(1 / 6)^2(1-1/6)^{3-2}$
$P_X(X = 100) = {3\choose 3}(1 / 6)^3(1-1/6)^{3-3}$
b.
I would take part in this competition: in fact, calculating the mean of $X$ as suggested, we obtain $\approx$ 5.67 €.
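The expected value can be verified with a small pure-Python sketch of the same computation (math.comb requires Python 3.8+):

```python
from math import comb

def pmf_sixes(k):
    # P(exactly k sixes in 3 throws of a fair die).
    return comb(3, k) * (1 / 6) ** k * (5 / 6) ** (3 - k)

payoffs = [-30, 50, 75, 100]
expected = sum(payoffs[k] * pmf_sixes(k) for k in range(4))
```

The exact value is 1225/216 ≈ 5.67, so the game has positive expected payoff.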
End of explanation
"""
|
ponderousmad/pyndent | depth_classy.ipynb | mit | %matplotlib inline
from __future__ import print_function
import gc
import ipywidgets
import math
import os
import random
import sys
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from IPython.display import Image
from scipy import ndimage
from scipy.misc import imsave
from six.moves import cPickle as pickle
import outputer
import improc
import convnet
import mutate
import convevo
import darwin
# For use during development
from imp import reload
reload (improc)
reload (convnet)
reload (mutate)
reload (convevo)
reload (darwin)
"""
Explanation: Evolving Convnets to Classify Labeled Depths
End of explanation
"""
training, test = improc.enumerate_images("captures")
print("Training:", len(training), "Test:", len(test))
print(training[:2])
print(test[:2])
"""
Explanation: Enumerate Images
Image names are sequential, so add every tenth image to the validation set based on filename.
End of explanation
"""
example_image, example_depth, example_attitude = improc.load_image("testing/IMG_2114.PNG")
plt.imshow(example_image)
print(example_image.shape, example_image.dtype)
plt.imshow(example_depth)
print(example_depth.shape, example_depth.dtype)
print(example_attitude)
example_lab = improc.rgb2lab_normalized(example_image)
plt.imshow(example_lab[:,:,0], cmap='Greys_r')
plt.imshow(example_lab[:,:,1], cmap='Greys_r')
"""
Explanation: Image Processing
Each image file contains a color image (top half), and an encoded depth image (bottom half)
<img src="testing/IMG_2114.PNG">
* Note: The image may also contain the orientation data. If so it is encoded in the first two pixels of the depth image. If the first pixel of the depth image is red, the second has the x, y, z, w quaternion components encoded in the r,g,b,a values.
The improc module contains functions for splitting the image, decoding the depth back into floating point millimeters, and for filling in gaps.
Image processing examples:
End of explanation
"""
def size_for_factor(factor, buckets):
return improc.MAX_DEPTH * (1 - factor) / (1 - factor ** buckets)
def depth_label_boundaries(factor, buckets):
boundaries = []
size_sum = 0
bucket_size = size_for_factor(factor, buckets)
for i in range(buckets):
size_sum += bucket_size
boundaries.append(size_sum)
bucket_size *= factor
return boundaries
def boundary_midpoints(boundaries):
midpoints = np.zeros(shape=[len(boundaries)], dtype=np.float32)
prev_boundary = 0
# Iterate over the argument, not the DEPTH_BOUNDARIES global.
for i, boundary in enumerate(boundaries):
midpoints[i] = (boundary + prev_boundary) / 2
prev_boundary = boundary
return midpoints
DEPTH_LABEL_COUNT = 40
DEPTH_BUCKET_SCALE_FACTOR = 1.2
DEPTH_BOUNDARIES = depth_label_boundaries(DEPTH_BUCKET_SCALE_FACTOR, DEPTH_LABEL_COUNT)
DEPTH_BOUNDARY_MIDPOINTS = boundary_midpoints(DEPTH_BOUNDARIES)
def depth_label_index(depth):
for i, boundary in enumerate(DEPTH_BOUNDARIES):
if depth < boundary:
return i
return DEPTH_LABEL_COUNT - 1
def depth_label(depth, labels=None):
if labels is None:
labels = np.zeros(shape=(DEPTH_LABEL_COUNT + 1), dtype=np.float32)
labels[depth_label_index(depth)] = 1
labels[DEPTH_LABEL_COUNT] = depth / improc.MAX_DEPTH
return labels
def depth_for_label(labels):
depth = 0
prev_boundary = 0
for label, boundary in zip(labels, DEPTH_BOUNDARIES):
boundary_midpoint = (boundary + prev_boundary) / 2
depth += boundary_midpoint * label
prev_boundary = boundary
return depth
def depth_for_label_normalized(labels):
return depth_for_label(labels) / improc.MAX_DEPTH
def depths_for_labels(labels):
return labels * DEPTH_BOUNDARY_MIDPOINTS
def depths_for_labels_normalized(labels):
return np.sum(depths_for_labels(labels) / improc.MAX_DEPTH, axis=1)
def depth_label_image(depths):
labeled = depths.copy()
for y in range(depths.shape[0]):
for x in range(depths.shape[1]):
labeled[y,x] = depth_label_index(depths[y,x])
return labeled
# Precomputed via improc.compute_mean_depth(training)
# Actually it should be 1680.24; the value below is actually the mean of the image means.
# Keeping this value as it was what was used in the experiments to date,
# and it is close to the correct value.
MEAN_DEPTH = np.float32(1688.97)
print(DEPTH_BOUNDARIES[:5])
print("Mean depth label:", depth_label(MEAN_DEPTH), np.argmax(depth_label(MEAN_DEPTH)))
print("Zero depth label:", depth_label(0)[0], depth_label(0)[-1])
print("Max depth label:", depth_label(improc.MAX_DEPTH)[-2:])
roundtrip_mean = depth_for_label(depth_label(MEAN_DEPTH))
print("Roundtrip mean depth:", roundtrip_mean, np.argmax(depth_label(roundtrip_mean)))
# Set up cache directory.
depth_image_cache_path = outputer.setup_directory("temp", "cache")
def linear_order(height_span, width_span):
pixel_indices = []
for y in range(height_span):
for x in range(width_span):
pixel_indices.append((y, x))
return pixel_indices
class ImageSampler(object):
"""Wrap an image for sampling."""
def __init__(self, image_file,
sample_height, sample_width,
half_valid_check=2, tolerance=0):
# Process the image or grab it from the cache.
# image is normalized CIELAB, depth is not normalized.
self.image, self.depth = improc.process_cached(depth_image_cache_path, image_file)
self.index = 0
self.pixel_index = (0, 0)
self.sample_height = sample_height
self.sample_width = sample_width
self.depth_offset_y = (sample_height + 1) // 2
self.depth_offset_x = (sample_width + 1) // 2
self.height = self.image.shape[0]
self.width = self.image.shape[1]
self.half_valid_check = half_valid_check
self.tolerance = tolerance
def depth_value(self, y, x):
return self.depth[y + self.depth_offset_y, x + self.depth_offset_x]
def sample(self, inputs, labels, index):
self.sample_at(self.pixel_index, inputs, labels, index)
self.advance()
def sample_at(self, pixel, inputs, labels, index):
y, x = pixel
patch = self.image[y : y + self.sample_height, x : x + self.sample_width]
inputs[index] = patch
depth = self.depth_value(y, x)
if np.isnan(depth):
return False
depth_label(depth, labels[index])
return True
def setup_sample_order(self, sample_orders, entropy):
height_span = self.height - self.sample_height
width_span = self.width - self.sample_width
cached = sample_orders.get((height_span, width_span))
if cached:
return cached
pixel_indices = linear_order(height_span, width_span)
mutate.fisher_yates_shuffle(pixel_indices, entropy)
sample_orders[(height_span, width_span)] = pixel_indices
return pixel_indices
def advance(self):
self.index += 1
def next_sample(self, sample_orders, entropy):
c = self.half_valid_check
order = self.setup_sample_order(sample_orders, entropy)
while self.index < len(order):
self.pixel_index = order[self.index]
depth_y = self.pixel_index[0] + self.depth_offset_y
depth_x = self.pixel_index[1] + self.depth_offset_x
# Check that the sample is from a clean part of the image.
sum = np.sum(np.isnan(self.depth[depth_y - c : depth_y + c,
depth_x - c: depth_x + c]))
if sum <= self.tolerance:
return True
self.advance()
return False
class BatchSampler(object):
"""Created sample batches for a set of image files"""
def __init__(self, image_files, sample_height, sample_width, samplers_count=100):
self.files = image_files
self.samplers_count = samplers_count
self.sample_height = sample_height
self.sample_width = sample_width
self.sample_orders = {}
self.reset()
# Access or initialize the specified sampler.
def sampler(self, index, entropy):
sampler = self.samplers[index]
if sampler and not sampler.next_sample(self.sample_orders, entropy):
sampler = None
while sampler is None:
path = self.files[self.file_index]
sampler = ImageSampler(path, self.sample_height, self.sample_width)
self.file_index = (self.file_index + 1) % len(self.files)
if not sampler.next_sample(self.sample_orders, entropy):
sampler = None
print ("No samples in", path)
else:
self.samplers[index] = sampler
return sampler
# Get the next single sample.
def sample(self, inputs, labels, index, entropy):
sampler = self.sampler(self.sample_index, entropy)
self.sample_index = (self.sample_index + 1) % len(self.samplers)
sampler.sample(inputs, labels, index)
# Get the next batch of samples.
def sample_batch(self, inputs, labels, batch_size, entropy):
labels.fill(0)
for b in range(batch_size):
self.sample(inputs, labels, b, entropy)
def reset(self):
self.sample_index = 0
self.file_index = 0
self.samplers = [None] * self.samplers_count
# Force load all the samplers.
def fill_and_pickle(self, path, entropy):
for i in range(self.samplers_count):
sampler = self.sampler(i, entropy)
try:
with open(path, 'wb') as f:
pickle.dump(self, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', path, ':', e)
raise
"""
Explanation: Depth Labels and Batching
Convert depth to classification labels.
We want more precision for nearby things, so we use progressively expanding buckets for the labels: if the smallest bucket has size s and each successive bucket is larger by a factor F, then:
improc.MAX_DEPTH == sF<sup>0</sup> + sF<sup>1</sup> + sF<sup>2</sup> + ... + sF<sup>label count - 1</sup>
So, plug into sum of geometric series formula:
improc.MAX_DEPTH == s * (1 - F<sup>label count</sup>) / (1 - F)
Since there are two unknowns we can choose either the factor or the bucket size. A factor of 1.2 (the DEPTH_BUCKET_SCALE_FACTOR used below) resulted in buckets that seemed about right.
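A quick pure-Python check of the geometric-bucket formula (MAX_DEPTH below is a stand-in value, not necessarily improc's):

```python
MAX_DEPTH = 10000.0  # stand-in for improc.MAX_DEPTH
FACTOR = 1.2
BUCKETS = 40

# Solve MAX_DEPTH == s * (1 - F**n) / (1 - F) for the smallest bucket size s.
s = MAX_DEPTH * (1 - FACTOR) / (1 - FACTOR ** BUCKETS)

sizes = [s * FACTOR ** i for i in range(BUCKETS)]
```

The bucket sizes grow geometrically and, by construction, sum back to MAX_DEPTH — the same algebra size_for_factor implements.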
End of explanation
"""
plt.imshow(depth_label_image(example_depth))
del example_image
del example_depth
del example_lab
gc.collect()
SAMPLE_SIZE = 101
batcher = BatchSampler(["testing/IMG_2114.PNG", "testing/IMG_3410.PNG"],
SAMPLE_SIZE, SAMPLE_SIZE, 2)
BATCH_SIZE = 100
inputs = np.ones(shape=(BATCH_SIZE, SAMPLE_SIZE, SAMPLE_SIZE, improc.COLOR_CHANNELS),
dtype=np.float32)
labels = np.zeros(shape=(BATCH_SIZE, DEPTH_LABEL_COUNT + 1), dtype=np.float32)
for _ in range(100):
batcher.sample_batch(inputs, labels, BATCH_SIZE, random.Random(42))
plt.imshow(inputs[1,:,:,0], cmap='Greys_r')
print(inputs[1].shape)
print(labels[1])
"""
Explanation: Depth label and batching examples
End of explanation
"""
data_files = {
"image_size": (101, 101, improc.COLOR_CHANNELS),
"depth_labels": DEPTH_LABEL_COUNT,
"train_files": np.array(training),
"test_files": np.array(sorted(test))
}
del training
del test
def setup_cross_validation(
data,
train_count, valid_count, test_count=None,
label_count=None, entropy=random
):
"""Shuffle the data and split off training, validation and test sets."""
cross_data = data.copy()
if label_count:
cross_data["depth_labels"] = label_count
paths = cross_data["train_files"][:]
mutate.fisher_yates_shuffle(paths, entropy)
cross_data["train_files"] = paths[:train_count]
cross_data["valid_files"] = paths[train_count:train_count + valid_count]
if test_count is not None:
cross_data["test_files"] = data["test_files"][:test_count]
return cross_data
"""
Explanation: Data Management
End of explanation
"""
def pickle_batch(data, set_name, samplers, entropy):
path = os.path.join("temp", set_name + ".pickle")
files = data[set_name + "_files"]
image_size = data["image_size"]
batcher = BatchSampler(files, image_size[0], image_size[1], samplers)
batcher.fill_and_pickle(path, entropy)
del batcher
gc.collect()
return path
def load_batcher(pickle_batches, set_name):
if pickle_batches:
path = pickle_batches.get(set_name)
if path:
with open(path, 'rb') as f:
return pickle.load(f)
return None
"""
Explanation: Batcher Caching
The evolutionary process will involve running many graphs with the same data. To make this as efficient as possible, these functions are used to cache and restore the processed batch data.
End of explanation
"""
pickle_data = setup_cross_validation(
data_files, 0, 100, None,
label_count=DEPTH_LABEL_COUNT, entropy=random.Random(24601)
)
pickle_size = pickle_data["image_size"]
pickle_files = pickle_data["valid_files"]
pickle_sampler = BatchSampler(pickle_files, pickle_size[0], pickle_size[1], len(pickle_files))
pickle_sampler.fill_and_pickle("temp/depth_valid.pickle", random)
with open("temp/depth_valid.pickle", 'rb') as f:
loaded_sampler = pickle.load(f)
BATCH_SIZE = 100
inputs = np.ones(shape=(BATCH_SIZE, pickle_size[0], pickle_size[1], improc.COLOR_CHANNELS),
dtype=np.float32)
labels = np.zeros(shape=(BATCH_SIZE, DEPTH_LABEL_COUNT + 1), dtype=np.float32)
for _ in range(500):
loaded_sampler.sample_batch(inputs, labels, BATCH_SIZE, random.Random(42))
del pickle_data
del pickle_files
del pickle_sampler
del loaded_sampler
gc.collect()
"""
Explanation: Data Management examples
End of explanation
"""
def batch_input_shape(batch_size, image_shape):
return (batch_size,) + image_shape
def batch_output_shape(batch_size, label_count):
return (batch_size, label_count + 1)
def setup_graph(
batch_size,
image_shape,
label_count,
regress_factor,
stack
):
graph = tf.Graph()
with graph.as_default():
input_shape = batch_input_shape(batch_size, image_shape)
output_shape = batch_output_shape(batch_size, label_count)
train = tf.placeholder(tf.float32, shape=input_shape)
targets = tf.placeholder(tf.float32, shape=output_shape)
verify = tf.placeholder(tf.float32, shape=input_shape)
operations = stack.construct(input_shape)
l2_loss = convnet.setup(operations)
result = convnet.connect_model(train, operations, True)[-1]
depth_label = tf.slice(targets, [0, label_count], [batch_size, 1])
depths = tf.slice(result, [0, label_count], [batch_size, 1])
labels = tf.slice(targets, [0, 0], [batch_size, label_count])
logits = tf.slice(result, [0, 0], [batch_size, label_count])
loss = l2_loss
if regress_factor >= 0:
loss += tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, labels))
else:
regress_factor = -regress_factor
if regress_factor > 0:
loss += regress_factor * tf.reduce_mean(
tf.squared_difference(depths, depth_label)
)
verify_result = convnet.connect_model(verify, operations, False)[-1]
verify_logits = tf.slice(verify_result, [0, 0], [batch_size, label_count])
verify_depths = tf.slice(verify_result, [0, label_count], [batch_size, 1])
verify_depths = tf.maximum(verify_depths, 0)
verify_depths = tf.minimum(verify_depths, 1)
info = {
"graph": graph,
"batch_size": batch_size,
"train": train,
"targets": targets,
"depths": depths,
"loss": loss,
"optimizer": stack.construct_optimizer(loss),
"predictions": tf.nn.softmax(logits),
"verify": verify,
"verify_predictions": tf.nn.softmax(verify_logits),
"verify_depths": verify_depths,
"saver": tf.train.Saver()
}
return info
"""
Explanation: Graph Setup
End of explanation
"""
def accuracy(predictions, labels):
correct_predictions = np.argmax(predictions, 1) == np.argmax(labels, 1)
return (100.0 * np.sum(correct_predictions) / predictions.shape[0])
def mean_depth_error(depths, labels):
return np.mean(np.absolute(depths[:,0] - labels[:,-1]))
def score_result(loss, predictions, depths, labels):
return (loss, accuracy(predictions, labels[:,0:-1]), mean_depth_error(depths, labels))
def print_batch_info(
context, score, predictions, depths, labels, verbose, print_count=20, depth_print=10
):
print(context, "accuracy: %.1f%%" % score[1])
if verbose:
print(np.argmax(predictions[0:print_count],1))
print(np.argmax(labels[0:print_count,0:-1],1))
print(context, "average depth error:", score[2])
print(depths[0:depth_print,0])
print(labels[0:depth_print,-1])
def batch_accuracy(
context, session, graph, batcher, entropy, inputs, labels, batch_size, count, verbose
):
total_accuracy = 0
total_depth = 0
for b in range(count):
batcher.sample_batch(inputs, labels, batch_size, entropy)
targets = [graph["verify_predictions"], graph["verify_depths"]]
predictions, depths = session.run(targets, feed_dict={graph["verify"] : inputs})
total_accuracy += accuracy(predictions, labels) / float(count)
total_depth += mean_depth_error(depths, labels) / float(count)
score = (0, total_accuracy, total_depth)
print_batch_info(context, score, predictions, depths, labels, verbose)
return score
def run_graph(
graph_info,
data,
step_count,
valid_count,
test_count=0,
batch_sampler_count=1000,
report_every=50,
verbose=True,
    accuracy_minimum=None, # Minimum validation percent accuracy for early abort
pickle_batches=None, # pickle files for training and validation batchers
tracker=None,
entropy=random
):
with tf.Session(graph=graph_info["graph"]) as session:
tf.initialize_all_variables().run()
print("Initialized")
# Optionally restore graph parameters from disk.
convnet.restore_model(graph_info, session)
# Set up space for graph inputs / feed values
batch_size = graph_info["batch_size"]
depth_labels = data["depth_labels"]
height, width, _ = data["image_size"]
inputs = np.zeros(shape=batch_input_shape(batch_size, data["image_size"]),
dtype=np.float32)
labels = np.zeros(shape=batch_output_shape(batch_size, depth_labels),
dtype=np.float32)
# Construct or unpickle training batcher.
train_batcher = load_batcher(pickle_batches, "train")
if not train_batcher:
train_batcher = BatchSampler(
data["train_files"], height, width, batch_sampler_count
)
score = (0,1)
try:
for step in range(step_count + 1):
if tracker:
tracker.update_progress(step)
# Generate a batch
train_batcher.sample_batch(inputs, labels, batch_size, entropy)
# Graph targets
run_targets = [
graph_info["optimizer"],
graph_info["loss"],
graph_info["predictions"],
graph_info["depths"]
]
# Graph inputs:
feed_dict = {graph_info["train"] : inputs, graph_info["targets"] : labels}
_, loss, predictions, depths = session.run(run_targets,feed_dict=feed_dict)
# Keep track of and possibly display score.
batch_score = score_result(loss, predictions, depths, labels)
if tracker:
tracker.record_score(batch_score)
if np.isnan(loss):
print("Error computing loss at step", step)
print_batch_info("Minibatch", batch_score, predictions,
depths, labels, True)
return 0
if (step % report_every == 0):
if verbose:
print("Minibatch loss at step", step, ":", loss)
print_batch_info("Minibatch", batch_score, predictions,
depths, labels, True)
# Evaluate the validation data.
valid_batcher = load_batcher(pickle_batches, "valid")
if not valid_batcher:
valid_files = data["valid_files"]
valid_batcher = BatchSampler(
valid_files, height, width, len(valid_files)
)
valid_score = batch_accuracy(
"Validation", session, graph_info, valid_batcher, entropy,
inputs, labels, batch_size, valid_count, verbose
)
del valid_batcher
score = valid_score[1:]
if accuracy_minimum and step > 0 and valid_score[1] < accuracy_minimum:
print("Early out.")
break
# Evaluate the test data, if any.
if test_count > 0:
test_batcher = BatchSampler(data["test_files"], height, width)
                batch_accuracy(
                    "Test", session, graph_info, test_batcher, entropy,
                    inputs, labels, batch_size, test_count, verbose
                )
return score
finally:
# Optionally save out graph parameters to disk.
convnet.save_model(graph_info, session)
def valid_accuracy_metric(valid_accuracy, valid_depth_error, train_results):
return valid_accuracy
def valid_error_metric(valid_accuracy, valid_depth_error, train_results):
return valid_depth_error
def train_accuracy_metric(valid_accuracy, valid_depth_error, train_results):
result_count = min(len(train_results), 1000)
return sum(accuracy for _, accuracy, _ in train_results[-result_count:]) / result_count
def train_depth_error_metric(valid_accuracy, valid_depth_error, train_results):
result_count = min(len(train_results), 1000)
error = sum(error for _, _, error in train_results[-result_count:]) / result_count
return max(0, 1 - error)
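The fitness metrics above average the most recent training results; here is a quick sanity sketch using fabricated (loss, accuracy, depth_error) tuples — the values are invented purely for illustration:

```python
# Fake training history: one (loss, accuracy, depth_error) tuple per step.
fake_results = [(0.5, 80.0, 0.10), (0.4, 90.0, 0.30)]

count = min(len(fake_results), 1000)
avg_accuracy = sum(a for _, a, _ in fake_results[-count:]) / count
avg_error = sum(e for _, _, e in fake_results[-count:]) / count
fitness = max(0, 1 - avg_error)  # mirrors train_depth_error_metric

assert avg_accuracy == 85.0
assert abs(fitness - 0.8) < 1e-9
```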
results_path = outputer.setup_directory("temp", "classy_results")
def make_eval(
batch_size=20,
eval_steps=10000,
valid_steps=500,
regress_factor=1.0,
report_every=None,
reuse_cross=False,
metric=valid_accuracy_metric,
entropy=random
):
pickle_batches = {}
train_count = 9700
valid_count = 400
batch_sampler_count = min(801, eval_steps * batch_size)
test_count = None
    # If reusing data, set up training and test data, and pickle batchers for efficiency.
if reuse_cross:
redata = setup_cross_validation(
data_files, train_count, valid_count, test_count,
label_count=DEPTH_LABEL_COUNT, entropy=entropy
)
pickle_batches["valid"] = pickle_batch(
redata, "valid", len(redata["valid_files"]), entropy
)
print("Pickled Validation")
pickle_batches["train"] = pickle_batch(
redata, "train", batch_sampler_count, entropy
)
print("Pickled Training")
progress_tracker = outputer.ProgressTracker(
["Loss", "Accuracy", "Error"], eval_steps, results_path, convevo.serialize
)
def evaluate(stack, eval_entropy):
# If not reusing data, generate training and validation sets
if not reuse_cross:
data = setup_cross_validation(
data_files, train_count, valid_count, test_count,
label_count=DEPTH_LABEL_COUNT, entropy=eval_entropy
)
pickle_batches["valid"] = pickle_batch(
data, "valid", len(data["valid_files"]), eval_entropy
)
print("Pickled Validation")
else:
data = redata
progress_tracker.setup_eval(stack)
# Set up the Tensorflow graph
try:
graph_info = setup_graph(
batch_size,
data["image_size"],
data["depth_labels"],
regress_factor,
stack
)
except KeyboardInterrupt:
raise
except:
progress_tracker.error(sys.exc_info())
return -10
progress_tracker.start_eval(graph_info)
# Run the graph
try:
valid_accuracy, valid_depth_error = run_graph(
graph_info,
data,
eval_steps,
valid_count=valid_steps,
batch_sampler_count=batch_sampler_count,
report_every=report_every if report_every else eval_steps//4,
verbose=True,
accuracy_minimum=None,
pickle_batches=pickle_batches,
tracker=progress_tracker,
entropy=eval_entropy
)
if metric:
return metric(valid_accuracy, valid_depth_error, progress_tracker.results)
return valid_accuracy
except KeyboardInterrupt:
raise
except:
progress_tracker.error(sys.exc_info())
return -1
finally:
progress_tracker.output()
return evaluate
"""
Explanation: Graph Execution
End of explanation
"""
cross_data = setup_cross_validation(data_files, 9700, 400, 1000, label_count=DEPTH_LABEL_COUNT)
batch_size = 20
conv_layers = [
("conv_bias", 20, 2, 10, "SAME", True),
("conv_bias", 10, 5, 20, "SAME", True),
("conv_bias", 5, 2, 40, "SAME", True)
]
hidden_sizes = [400, 100, cross_data["depth_labels"] + 1]
optimizer = convevo.Optimizer("GradientDescent", 0.01)
optimizer.default_parameters()
prototype = convevo.create_stack(conv_layers, [], True, hidden_sizes, 0.0, 0.05, 0.0, optimizer)
prototype.reseed(random.Random(42))
prototype_graph = setup_graph(
batch_size,
cross_data["image_size"],
cross_data["depth_labels"],
1.0,
prototype
)
run_graph(
prototype_graph, cross_data, 1000,
valid_count=200, report_every=500, verbose=True, entropy=random.Random(42)
)
print(convevo.serialize(prototype))
prototype_entropy = random.Random(42)
prototype_eval = make_eval(
batch_size=100,
eval_steps=100,
valid_steps=20,
regress_factor=1.0,
reuse_cross=True,
entropy=prototype_entropy
)
prototype_eval(prototype, prototype_entropy)
del cross_data
del conv_layers
del hidden_sizes
del prototype_graph
del prototype_eval
gc.collect()
"""
Explanation: Test of components in isolation
End of explanation
"""
prototypes = [prototype]
population,_,_ = convevo.load_population("testing/color_quad_run.xml", False)
prototypes = population[:5]
print(len(prototypes))
prototypes = [
convevo.load_stack("testing/candidate1.xml"),
convevo.load_stack("testing/candidate2.xml"),
convevo.load_stack("testing/candidate3.xml"),
convevo.load_stack("testing/candidate4.xml"),
convevo.load_stack("testing/candidate5.xml")
]
with outputer.TeeOutput(os.path.join("temp", outputer.timestamp("Depth_Evolve_", "txt"))):
mutate_seed = random.randint(1, 100000)
print("Mutate Seed:", mutate_seed)
mutate_entropy = random.Random(mutate_seed)
eval_seed = random.randint(1, 100000)
print("Eval Seed:", eval_seed)
eval_entropy = random.Random(eval_seed)
population_size = 10
generations = 5
batch_size = 100
breed_options = {
"input_shape": batch_input_shape(batch_size, data_files["image_size"]),
"output_shape": batch_output_shape(batch_size, data_files["depth_labels"])
}
for stack in prototypes:
stack.make_safe(breed_options["input_shape"], breed_options["output_shape"])
evaluator = make_eval(
batch_size=batch_size, eval_steps=40000, valid_steps=1000, regress_factor=1.0,
reuse_cross=True, metric=None, entropy=eval_entropy
)
charles = darwin.Darwin(convevo.serialize, evaluator, convevo.breed)
charles.init_population(prototypes, population_size, False,
breed_options, mutate_entropy)
for g in range(generations):
print("Generation", g)
results = charles.evaluate(eval_entropy)
convevo.output_results(results, "temp", outputer.timestamp() + ".xml",
mutate_seed, eval_seed)
charles.repopulate(population_size, 0.3, 3, results, breed_options, mutate_entropy)
results = darwin.descending_score(charles.history.values())
convevo.output_results(results, "testing", "candidates_evolve_run.xml",
mutate_seed, eval_seed)
len(results)
"""
Explanation: Evolving Convnets
End of explanation
"""
BATCH_SIZE = 100
candidate = convevo.load_stack("testing/candidate6.xml")
candidate.make_safe(
batch_input_shape(BATCH_SIZE, data_files["image_size"]),
batch_output_shape(BATCH_SIZE, data_files["depth_labels"])
)
print(convevo.serialize(candidate))
candidate_evaluator = make_eval(
batch_size=BATCH_SIZE,
eval_steps=10000000,
valid_steps=100000,
regress_factor=1.0,
report_every=500000,
reuse_cross=False,
metric=None,
entropy=random.Random(42)
)
with outputer.TeeOutput(os.path.join("temp", "candidate6_results.txt")):
candidate_evaluator(candidate, random.Random(57))
"""
Explanation: Candidate Evaluation
Do a long training run for the best graph to date. Note: on my GPU-accelerated machine, this takes 5 days to run.
End of explanation
"""
with outputer.TeeOutput(os.path.join("temp", "candidate6_retest.txt")):
candidate = convevo.load_stack("testing/candidate6.xml")
candidate_reevaluator = make_eval(
batch_size=100, eval_steps=10000, valid_steps=10000, regress_factor=1.0,
reuse_cross=False, metric=None, entropy=random.Random(42)
)
candidate.checkpoint_path("testing/candidate6/full/2016-06-11~15_23_44_712.ckpt")
candidate_reevaluator(candidate, random.Random(42))
"""
Explanation: Test reloading the resulting graph for additional training/validation.
End of explanation
"""
def test_score(labels, predictions, depths, count):
is_finite = np.isfinite(labels[:count,-1])
where_valid = np.where(is_finite)
count = np.count_nonzero(is_finite)
if count:
score = accuracy(predictions[where_valid], labels[where_valid])
error = mean_depth_error(depths[where_valid], labels[where_valid])
valid_predictions = predictions[where_valid]
label_depths = depths_for_labels_normalized(valid_predictions)
label_error = mean_depth_error(label_depths[:,np.newaxis], labels[where_valid])
argmax_predictions = np.argmax(valid_predictions, axis=1)
argmax_depths = DEPTH_BOUNDARY_MIDPOINTS[argmax_predictions] / improc.MAX_DEPTH
argmax_error = mean_depth_error(argmax_depths[:,np.newaxis], labels[where_valid])
return score*count, error*count, label_error*count, argmax_error*count, count
return 0, 0, 0, 0, 0
# Validate the test_score function.
def check_test_score():
test_batch_size = 10
test_labels = np.zeros(shape=batch_output_shape(test_batch_size, DEPTH_LABEL_COUNT),
dtype=np.float32)
test_depths = np.zeros(shape=(test_batch_size,1), dtype=np.float32)
for l in range(test_batch_size):
test_depth = improc.MAX_DEPTH * l / float(test_batch_size)
depth_label(test_depth, test_labels[l])
test_depths[l, 0] = test_labels[l,-1]
test_predictions = np.copy(test_labels)[:,:-1]
test_predictions[0, 10] = 0.5
test_labels[2] = np.nan
score = test_score(test_labels, test_predictions, test_depths, 7)
print(score)
print([s / score[-1] for s in score[:-1]])
check_test_score()
"""
Explanation: Candidate Testing
Calculates for non-NaN pixels:
* accuracy score,
* mean depth error for the predicted depth,
* mean depth error for the softmax predicted label converted to a depth via:
* sum(midpoint of bucket * softmax value for bucket).
* mean depth error for the bucket midpoint corresponding to the argmax of the predicted label
End of explanation
"""
def compute_test_images(graph_info, data, output_path):
with tf.Session(graph=graph_info["graph"]) as session:
tf.initialize_all_variables().run()
print("Initialized")
# restore graph parameters from disk.
convnet.restore_model(graph_info, session)
# Set up space for graph inputs / feed values
batch_size = graph_info["batch_size"]
depth_labels = data["depth_labels"]
image_size = data["image_size"]
inputs = np.zeros(shape=batch_input_shape(batch_size, image_size),
dtype=np.float32)
labels = np.zeros(shape=batch_output_shape(batch_size, depth_labels),
dtype=np.float32)
nan_label = np.array([np.nan]*labels.shape[-1], dtype=np.float32)
source_image_size = (480, 640)
height_span = source_image_size[0] - image_size[0]
width_span = source_image_size[1] - image_size[1]
pixel_order = np.array(linear_order(height_span, width_span))
files = data["test_files"]
eval_count = len(files) * len(pixel_order) // batch_size
progress = outputer.show_progress("Evaluation Steps:", eval_count)
        eval_count = 0
all_scores = {}
for image_path in files:
sampler = ImageSampler(image_path, image_size[0], image_size[1])
if output_path:
raw_depths = ndimage.imread(image_path)
label_depths = np.copy(raw_depths)
argmax_depths = np.copy(raw_depths)
image_scores = np.zeros(shape=(5,), dtype=np.float32)
gc.collect()
for row in range(height_span):
# Update progress
eval_count += 1
progress.value = eval_count
# Generate a batch and run the graph
batch_pixels = pixel_order[row * width_span : (row + 1) * width_span, :]
labels.fill(0)
for i, pixel in enumerate(batch_pixels):
if not sampler.sample_at(pixel, inputs, labels, i):
labels[i] = nan_label
targets = [graph_info["verify_predictions"],
graph_info["verify_depths"]]
predictions, depths = session.run(
targets, feed_dict={graph_info["verify"] : inputs}
)
if output_path:
iy = ((raw_depths.shape[0] + image_size[0]) // 2) + row
sx = (image_size[0] // 2)
ex = sx + width_span
raw_depths[iy, sx : ex] = improc.encode_normalized_depths(depths)
label_depths[iy, sx : ex] = improc.encode_normalized_depths(
depths_for_labels_normalized(predictions)[:, np.newaxis]
)
argmax_predictions = np.argmax(predictions, axis=1)
argmax_depth_values = DEPTH_BOUNDARY_MIDPOINTS[argmax_predictions]
argmax_depths[iy, sx : ex] = improc.encode_normalized_depths(
(argmax_depth_values / improc.MAX_DEPTH)[:, np.newaxis]
)
image_scores += test_score(labels, predictions, depths, len(batch_pixels))
image_name, ext = os.path.splitext(os.path.basename(image_path))
all_scores[image_name] = image_scores
print("Image scores for", image_name, image_scores[:-1] / image_scores[-1])
if output_path:
outputs = [
(raw_depths, "_depth"),
(label_depths, "_softmax"),
(argmax_depths, "_argmax")
]
for image, postfix in outputs:
imsave(os.path.join(output_path, image_name + postfix + ".png"), image)
return all_scores
"""
Explanation: For all the test images in the provided data set, compute metrics over the full images, and generate corresponding output images for the predicted depth, for the linear combination of the softmax label output, and for the argmax label output.
End of explanation
"""
def output_test_scores(test_scores, test_data, path):
with outputer.TeeOutput(path):
titles = ["Name", "Accuracy", "Error", "Label Error", "Argmax Error", "Count"]
total = np.zeros(shape=(5,), dtype=np.float32)
lines = [titles]
for image_path in test_data["test_files"]:
image_name, ext = os.path.splitext(os.path.basename(image_path))
scores = test_scores[image_name]
total += scores
line = [image_name]
line.extend(scores[:-1] / scores[-1])
line.append(scores[-1])
lines.append(line)
line = ["Total"]
line.extend(total[:-1] / total[-1])
line.append(total[-1])
lines.append(line)
text = "\n".join(",".join(str(v) for v in line) for line in lines)
print(text)
for i in range(1, 5):
sorted_lines = sorted(lines[1:-1], key=lambda l: l[i])
print(titles[i] + " high")
print(",".join([str(v) for v in sorted_lines[-1]]))
print(titles[i] + " low")
print(",".join([str(v) for v in sorted_lines[0]]))
"""
Explanation: Format the test results as a CSV file, and find the min/max for each metric.
End of explanation
"""
def predict_constant_depth(batch_size, image_shape, label_count, value):
graph = tf.Graph()
with graph.as_default():
verify = tf.placeholder(tf.float32,
shape=batch_input_shape(batch_size, image_shape))
mean_label = tf.one_hot(depth_label_index(value), label_count, np.float32(1), 0)
mean_label = tf.reshape(mean_label, (1, DEPTH_LABEL_COUNT))
return {
"graph": graph,
"batch_size": batch_size,
"verify": verify,
"verify_predictions": tf.tile(mean_label, [batch_size, 1]),
"verify_depths": tf.fill([batch_size, 1], value / improc.MAX_DEPTH)
}
"""
Explanation: Constructs a simple graph that just computes the output label and depth corresponding to a constant depth value.
End of explanation
"""
with outputer.TeeOutput(os.path.join("temp", "guess_mean_test.txt")):
mean_graph = predict_constant_depth(
100, data_files["image_size"], DEPTH_LABEL_COUNT, MEAN_DEPTH
)
test_data = setup_cross_validation(
data_files, 0, 0, 1123, label_count=DEPTH_LABEL_COUNT
)
mean_test_scores = compute_test_images(
    mean_graph, test_data, None
)
output_test_scores(mean_test_scores, test_data, "temp/mean_test_scores.csv")
"""
Explanation: Compute the test scores resulting from just predicting the mean for every pixel.
End of explanation
"""
def test_candidate_stack(stack_path, output_path, output_images):
batch_size = 640 - data_files["image_size"][1]
with outputer.TeeOutput(os.path.join(output_path, "full_test.txt")):
candidate = convevo.load_stack(stack_path)
test_data = setup_cross_validation(
data_files, 0, 0, 1123, label_count=DEPTH_LABEL_COUNT
)
candidate_graph = setup_graph(
batch_size, test_data["image_size"], test_data["depth_labels"], 1.0, candidate
)
convnet.setup_restore_model(
candidate_graph, candidate.checkpoint_path()
)
test_scores = compute_test_images(
candidate_graph, test_data, output_path if output_images else None
)
output_test_scores(
test_scores, test_data, os.path.join(output_path, "full_test_scores.csv")
)
candidate6_results_path = outputer.setup_directory("temp/candidate6")
test_candidate_stack("testing/candidate6.xml", candidate6_results_path, True)
"""
Explanation: Run and score the candidate graph for the full test set.
End of explanation
"""
|
google/eng-edu | ml/cc/prework/es-419/tensorflow_programming_concepts.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2017 Google LLC.
End of explanation
"""
import tensorflow as tf
"""
Explanation: # TensorFlow Programming Concepts
Learning objectives:
* Learn the basics of the TensorFlow programming model, focusing on the following concepts:
* tensors
* operations
* graphs
* sessions
* Build a simple TensorFlow program that creates a default graph, and a session that runs the graph.
Note: Please read through this tutorial carefully. The TensorFlow programming model is probably different from others you have encountered, and therefore may not be as intuitive as you'd expect.
## Overview of Concepts
TensorFlow gets its name from tensors, which are arrays of arbitrary dimensionality. Using TensorFlow, you can manipulate tensors with a very high number of dimensions. That said, most of the time you will work with one or more of the following low-dimensional tensors:
* A scalar is a 0-d array (a 0th-order tensor). For example, "Howdy" or 5
* A vector is a 1-d array (a 1st-order tensor). For example, [2, 3, 5, 7, 11] or [5]
* A matrix is a 2-d array (a 2nd-order tensor). For example, [[3.1, 8.2, 5.9][4.3, -2.7, 6.5]]
TensorFlow operations create, destroy, and manipulate tensors. Most of the lines of code in a typical TensorFlow program are operations.
A TensorFlow graph (also known as a computational graph or a dataflow graph) is, indeed, a graph data structure. Many TensorFlow programs consist of a single graph, but TensorFlow programs may optionally create multiple graphs. A graph's nodes are operations; a graph's edges are tensors. Tensors flow through the graph, manipulated at each node by an operation. The output tensor of one operation often becomes the input tensor to a subsequent operation. TensorFlow implements a lazy execution model, meaning that nodes are only computed when needed, based on the needs of associated nodes.
Tensors can be stored in the graph as constants or variables. As you might guess, constants hold tensors whose values can't change, while variables hold tensors whose values can change. However, what you may not have guessed is that constants and variables are just more operations in the graph. A constant is an operation that always returns the same tensor value. A variable is an operation that will return whichever tensor has been assigned to it.
To define a constant, use the tf.constant operator and pass in its value. For example:
x = tf.constant([5.2])
Similarly, you can create a variable like this:
y = tf.Variable([5])
Or you can create the variable first and then subsequently assign a value like this (note that you always have to specify a default value):
y = tf.Variable([0])
y = y.assign([5])
Once you've defined some constants or variables, you can combine them with other operations like tf.add. When you evaluate the tf.add operation, it will call your tf.constant or tf.Variable operations to get their values and then return a new tensor with the sum of those values.
Graphs must run within a TensorFlow session, which holds the state for the graph(s) it runs:
with tf.Session() as sess:
initialization = tf.global_variables_initializer()
print(y.eval())
When working with tf.Variable variables, you must explicitly initialize them by calling tf.global_variables_initializer at the start of your session, as shown above.
Note: A session can distribute graph execution across multiple machines (assuming the program is run on a distributed computation framework). For more information, see Distributed TensorFlow.
Summary
TensorFlow programming is essentially a two-step process:
1. Assemble constants, variables, and operations into a graph.
2. Evaluate those constants, variables, and operations within a session.
## Creating a Simple TensorFlow Program
Let's look at how to code a simple TensorFlow program that adds two constants.
### Provide import statements
As with nearly all Python programs, you'll begin by specifying some import statements.
The set of import statements required to run a TensorFlow program depends, of course, on what your program will access. At a minimum, you must provide the import tensorflow statement in all TensorFlow programs:
End of explanation
"""
from __future__ import print_function
import tensorflow as tf
# Create a graph.
g = tf.Graph()
# Establish the graph as the "default" graph.
with g.as_default():
# Assemble a graph consisting of the following three operations:
# * Two tf.constant operations to create the operands.
# * One tf.add operation to add the two operands.
x = tf.constant(8, name="x_const")
y = tf.constant(5, name="y_const")
sum = tf.add(x, y, name="x_y_sum")
# Now create a session.
# The session will run the default graph.
with tf.Session() as sess:
print(sum.eval())
"""
Explanation: Don't forget to execute the preceding code block (the import statements).
Other common import statements include the following:
import matplotlib.pyplot as plt # Dataset visualization.
import numpy as np # Low-level numerical Python library.
import pandas as pd # Higher-level numerical Python library.
TensorFlow provides a default graph. However, we recommend explicitly creating your own Graph instead, to facilitate tracking state (e.g., you may wish to work with a different Graph in each cell).
End of explanation
"""
# Create a graph.
g = tf.Graph()
# Establish our graph as the "default" graph.
with g.as_default():
# Assemble a graph consisting of three operations.
# (Creating a tensor is an operation.)
x = tf.constant(8, name="x_const")
y = tf.constant(5, name="y_const")
sum = tf.add(x, y, name="x_y_sum")
# Task 1: Define a third scalar integer constant z.
z = tf.constant(4, name="z_const")
# Task 2: Add z to `sum` to yield a new sum.
new_sum = tf.add(sum, z, name="x_y_z_sum")
# Now create a session.
# The session will run the default graph.
with tf.Session() as sess:
# Task 3: Ensure the program yields the correct grand total.
print(new_sum.eval())
"""
Explanation: ## Exercise: Introduce a Third Operand
Modify the code listing above to add three integers, instead of two:
Define a third scalar integer constant, z, and assign it a value of 4.
Add z to sum to yield a new sum.
Hint: See the API docs for tf.add() for more details on its function signature.
Re-run the modified code block. Did the program generate the correct grand total?
### Solution
Click below for the solution.
End of explanation
"""
|
seg/2016-ml-contest | Pet_Stromatolite/Facies_Classification_Draft2.ipynb | apache-2.0 | ### loading
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
### setting up options in pandas
from pandas import set_option
set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
### taking a look at the training dataset
filename = 'training_data.csv'
training_data = pd.read_csv(filename)
training_data
### Checking out Well Names
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Well Name'].unique()
training_data['Well Name']
well_name_list = training_data['Well Name'].unique()
well_name_list
### Checking out Formation Names
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Formation'].unique()
training_data.describe()
facies_1 = training_data.loc[training_data['Facies'] == 1]
facies_2 = training_data.loc[training_data['Facies'] == 2]
facies_3 = training_data.loc[training_data['Facies'] == 3]
facies_4 = training_data.loc[training_data['Facies'] == 4]
facies_5 = training_data.loc[training_data['Facies'] == 5]
facies_6 = training_data.loc[training_data['Facies'] == 6]
facies_7 = training_data.loc[training_data['Facies'] == 7]
facies_8 = training_data.loc[training_data['Facies'] == 8]
facies_9 = training_data.loc[training_data['Facies'] == 9]
#showing description for just facies 1, Sandstone
facies_1.describe()
#showing description for just facies 9, Phylloid-algal bafflestone (limestone)
facies_9.describe()
#showing description for just facies 8, Packstone-grainstone (limestone)
facies_8.describe()
"""
Explanation: SEG Machine Learning (Well Log Facies Prediction) Contest
Entry by Justin Gosses of team Pet_Stromatolite
This is an "open science" contest designed to introduce people to machine learning with well logs and to brainstorm different methods through collaboration with others, so this notebook is based heavily on the introductory notebook with my own modifications.
more information at https://github.com/seg/2016-ml-contest
and even more information at http://library.seg.org/doi/abs/10.1190/tle35100906.1
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),
photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies |Label| Adjacent Facies
:---: | :---: |:--:
1 |SS| 2
2 |CSiS| 1,3
3 |FSiS| 2
4 |SiSh| 5
5 |MS| 4,6
6 |WS| 5,7
7 |D| 6,8
8 |PS| 6,7,9
9 |BS| 7,8
Let's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type.
=================================================================================================================
Notes:
Early Ideas for feature engineering
take out any points in individual wells where not all the logs are present
test whether error increases around the depths where PE is absent?
test whether using formation, depth, or depth&formation as variables impacts prediction
examine well logs & facies logs (including prediction wells) to see if there aren't trends that might be dealt with by increasing the population of certain wells over others in the training set?
explore effect size of using/not using marine or non-marine flags
explore making 'likely to predict wrong' flags based on first-pass results with thin facies surrounded by thicker facies such that you might expand a 'blended' response due to the measured response of the tool being thicker than predicted facies
explore doing the same above but before prediction using range of thickness in predicted facies flags vs. range of thickness in known facies flags
explore using multiple prediction loops; in other words, predict errors, not just facies.
Explore error distribution: adjacent vs. non-adjacent facies, by thickness, marine vs. non-marine, by formation, and possible human judgement patterns that influence interpreted facies.
End of explanation
"""
blind = training_data[training_data['Well Name'] == 'SHANKLE']
training_data = training_data[training_data['Well Name'] != 'SHANKLE']
"""
Explanation: This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set.
Remove a single well to use as a blind test later.
End of explanation
"""
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
"""
Explanation: Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the facies_vectors dataframe.
End of explanation
"""
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
make_facies_log_plot(
training_data[training_data['Well Name'] == 'SHRIMPLIN'],
facies_colors)
"""
Explanation: Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. The plots are based on the those described in Alessandro Amato del Monte's excellent tutorial.
End of explanation
"""
def make_faciesOnly_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=2, figsize=(3, 9))
# f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
# ax[0].plot(logs.GR, logs.Depth, '-g')
ax[0].plot(logs.ILD_log10, logs.Depth, '-')
# ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
# ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
# ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[1].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[1])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
# ax[0].set_xlabel("GR")
# ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[0].set_xlabel("ILD_log10")
ax[0].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
# ax[2].set_xlabel("DeltaPHI")
# ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
# ax[3].set_xlabel("PHIND")
# ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
# ax[4].set_xlabel("PE")
# ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[1].set_xlabel('Facies')
# ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
# ax[4].set_yticklabels([]);
ax[1].set_yticklabels([])
ax[1].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
"""
Explanation: editing the well viewer code in an attempt to understand it and potentially not show everything
End of explanation
"""
# make_faciesOnly_log_plot(
# training_data[training_data['Well Name'] == 'SHRIMPLIN'],
# facies_colors)
for i in range(len(well_name_list)-1):
# well_name_list[i]
make_faciesOnly_log_plot(
training_data[training_data['Well Name'] == well_name_list[i]],
facies_colors)
"""
Explanation: looking at several wells at once
End of explanation
"""
#save plot display settings to change back to when done plotting with seaborn
inline_rc = dict(mpl.rcParams)
import seaborn as sns
sns.set()
sns.pairplot(training_data.drop(['Well Name','Facies','Formation','Depth','NM_M','RELPOS'],axis=1),
hue='FaciesLabels', palette=facies_color_map,
hue_order=list(reversed(facies_labels)))
#switch back to default matplotlib plot style
mpl.rcParams.update(inline_rc)
from pandas.tools.plotting import radviz
radviz(training_data.drop(['Well Name','Formation','Depth','NM_M','RELPOS','FaciesLabels'], axis=1), "Facies")
"""
Explanation: In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class.
This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies.
Crossplots are a familiar tool in the geosciences to visualize how two properties vary with rock type. This dataset contains 5 log variables, and a scatter matrix can help to quickly visualize the variation between all the variables in the dataset. We can employ the very useful Seaborn library to quickly create a nice looking scatter matrix. Each pane in the plot shows the relationship between two of the variables on the x and y axis, with each point colored according to its facies. The same colormap is used to represent the 9 facies.
End of explanation
"""
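The histogram cell itself is not reproduced above, so here is a minimal, self-contained sketch of how it could be built. The small `facies` Series is a hypothetical stand-in for `training_data['Facies']`, and the commented plot call assumes the `facies_colors` list defined earlier.

```python
import pandas as pd

# Stand-in for the notebook's `training_data['Facies']` integer codes (1-9);
# in the notebook you would use the real column instead.
facies = pd.Series([1, 1, 2, 2, 2, 3, 7, 8, 9, 9])
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
                 'WS', 'D', 'PS', 'BS']

# Count examples per class, keeping all 9 classes even when one is absent.
counts = facies.value_counts().reindex(range(1, 10), fill_value=0)
counts.index = facies_labels
print(counts)
# In the notebook: counts.plot(kind='bar', color=facies_colors,
#                              title='Distribution of Training Data by Facies')
```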
correct_facies_labels = training_data['Facies'].values
# dropping certain labels and only keeping the geophysical log values to train on
feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1)
feature_vectors.describe()
"""
Explanation: Conditioning the data set
Now we extract just the feature variables we need to perform the classification. The predictor variables are the five wireline values and two geologic constraining variables. We also get a vector of the facies labels that correspond to each feature vector.
End of explanation
"""
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(feature_vectors)
scaled_features = scaler.transform(feature_vectors)
feature_vectors
feature_vectors.describe()
"""
Explanation: Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed data (i.e. Gaussian with zero mean and unit variance). The factors used to standardize the training set must be applied to any subsequent feature set that will be input to the classifier. The StandardScaler class can be fit to the training set, and later used to standardize any subsequent feature data.
End of explanation
"""
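A quick way to sanity-check the scaler is to confirm that every standardized column ends up with zero mean and unit variance. The sketch below is self-contained; the synthetic `X` array stands in for the notebook's `feature_vectors`.

```python
import numpy as np
from sklearn import preprocessing

# Synthetic stand-in for `feature_vectors`: two log-like columns
# (these distributions are illustrative, not the real well data).
rng = np.random.RandomState(0)
X = np.column_stack([rng.normal(70, 30, 500),    # GR-like values
                     rng.normal(0.6, 0.2, 500)]) # ILD_log10-like values

scaler = preprocessing.StandardScaler().fit(X)
X_scaled = scaler.transform(X)

# After standardization each column has (near-)zero mean and unit variance.
print(X_scaled.mean(axis=0).round(6))
print(X_scaled.std(axis=0).round(6))
```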
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
scaled_features, correct_facies_labels, test_size=0.1, random_state=42)
"""
Explanation: Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the network. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 10% of the data for the test set.
End of explanation
"""
from sklearn import svm
clf = svm.SVC()
"""
Explanation: Training the SVM classifier
Now we use the cleaned and conditioned training set to create a facies classifier. As mentioned above, we will use a type of machine learning model known as a support vector machine. The SVM is a map of the feature vectors as points in a multi-dimensional space, mapped so that examples from different facies are divided by a clear gap that is as wide as possible.
The SVM implementation in scikit-learn takes a number of important parameters. First we create a classifier using the default settings.
End of explanation
"""
clf.fit(X_train,y_train)
"""
Explanation: Now we can train the classifier using the training set we created above.
End of explanation
"""
predicted_labels = clf.predict(X_test)
"""
Explanation: Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set. Because we know the true facies labels of the vectors in the test set, we can use the results to evaluate the accuracy of the classifier.
End of explanation
"""
from sklearn.metrics import confusion_matrix
from classification_utilities import display_cm, display_adj_cm
conf = confusion_matrix(y_test, predicted_labels)
display_cm(conf, facies_labels, hide_zeros=True)
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
"""
Explanation: We need some metrics to evaluate how good our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels.
The confusion matrix is simply a 2D array. The entries of confusion matrix C[i][j] are equal to the number of observations predicted to have facies j, but are known to have facies i.
To simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file classification_utilities.py in this repo for the display_cm() function.
End of explanation
"""
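This row/column convention is easy to verify on a toy example that is independent of the well data:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Three classes; two samples of true class 0 are mislabeled as class 1.
y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 1, 1, 1, 1, 2]

conf = confusion_matrix(y_true, y_pred)
print(conf)
# conf[i][j] counts observations known to be class i but predicted as class j,
# so conf[0][1] == 2 and the diagonal holds the correct classifications.
print('accuracy:', np.trace(conf) / conf.sum())  # 4 correct of 6
```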
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
print('Facies classification accuracy = %f' % accuracy(conf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies))
"""
Explanation: As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels.
End of explanation
"""
#model selection takes a few minutes, change this variable
#to true to run the parameter loop
do_model_selection = True
if do_model_selection:
C_range = np.array([.01, 1, 5, 10, 20, 50, 100, 1000, 5000, 10000])
gamma_range = np.array([0.0001, 0.001, 0.01, 0.1, 1, 10])
fig, axes = plt.subplots(3, 2,
sharex='col', sharey='row',figsize=(10,10))
plot_number = 0
for outer_ind, gamma_value in enumerate(gamma_range):
row = int(plot_number / 2)
column = int(plot_number % 2)
cv_errors = np.zeros(C_range.shape)
train_errors = np.zeros(C_range.shape)
for index, c_value in enumerate(C_range):
clf = svm.SVC(C=c_value, gamma=gamma_value)
clf.fit(X_train,y_train)
train_conf = confusion_matrix(y_train, clf.predict(X_train))
cv_conf = confusion_matrix(y_test, clf.predict(X_test))
cv_errors[index] = accuracy(cv_conf)
train_errors[index] = accuracy(train_conf)
ax = axes[row, column]
ax.set_title('Gamma = %g'%gamma_value)
ax.semilogx(C_range, cv_errors, label='CV error')
ax.semilogx(C_range, train_errors, label='Train error')
plot_number += 1
ax.set_ylim([0.2,1])
ax.legend(bbox_to_anchor=(1.05, 0), loc='lower left', borderaxespad=0.)
fig.text(0.5, 0.03, 'C value', ha='center',
fontsize=14)
fig.text(0.04, 0.5, 'Classification Accuracy', va='center',
rotation='vertical', fontsize=14)
"""
Explanation: Model parameter selection
The classifier so far has been built with the default parameters. However, we may be able to get improved classification results with optimal parameter choices.
We will consider two parameters. The parameter C is a regularization factor, and tells the classifier how much we want to avoid misclassifying training examples. A large value of C will try to correctly classify more examples from the training set, but if C is too large it may 'overfit' the data and fail to generalize when classifying new data. If C is too small then the model will not be good at fitting outliers and will have a large error on the training set.
The SVM learning algorithm uses a kernel function to compute the distance between feature vectors. Many kernel functions exist, but in this case we are using the radial basis function rbf kernel (the default). The gamma parameter describes the size of the radial basis functions, which is how far away two vectors in the feature space need to be to be considered close.
We will train a series of classifiers with different values for C and gamma. Two nested loops are used to train a classifier for every possible combination of values in the ranges specified. The classification accuracy is recorded for each combination of parameter values. The results are shown in a series of plots, so the parameter values that give the best classification accuracy on the test set can be selected.
This process is also known as 'cross validation'. Often a separate 'cross validation' dataset will be created in addition to the training and test sets to do model selection. For this tutorial we will just use the test set to choose model parameters.
End of explanation
"""
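The nested loops above amount to a manual grid search. As an alternative, scikit-learn's `GridSearchCV` automates the same sweep with built-in cross validation. This is a sketch on synthetic data, since the notebook's arrays are not reproduced here; note that on the older scikit-learn used in this notebook the class lives in `sklearn.grid_search` rather than `sklearn.model_selection`.

```python
import numpy as np
from sklearn import svm
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search on older versions

# Synthetic stand-in for the scaled training data (7 features, like the well logs).
X, y = make_classification(n_samples=300, n_features=7, n_informative=5,
                           n_classes=3, random_state=42)

param_grid = {'C': [1, 10, 100], 'gamma': [0.01, 0.1, 1]}
search = GridSearchCV(svm.SVC(), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```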
clf = svm.SVC(C=10, gamma=1)
clf.fit(X_train, y_train)
cv_conf = confusion_matrix(y_test, clf.predict(X_test))
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
"""
Explanation: The best accuracy on the cross validation error curve was achieved for gamma = 1, and C = 10. We can now create and train an optimized classifier based on these parameters:
End of explanation
"""
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
"""
Explanation: Precision and recall are metrics that give more insight into how the classifier performs for individual facies. Precision is the probability that given a classification result for a sample, the sample actually belongs to that class. Recall is the probability that a sample will be correctly classified for a given class.
Precision and recall can be computed easily using the confusion matrix. The code to do so has been added to the display_confusion_matrix() function:
End of explanation
"""
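Both metrics fall straight out of the matrix: for class i, precision is C[i][i] divided by the sum of column i (everything predicted as i), while recall is C[i][i] divided by the sum of row i (everything truly i). A self-contained toy check:

```python
import numpy as np

# Toy 2-class confusion matrix: rows are true classes, columns are predictions.
conf = np.array([[8, 2],
                 [1, 9]])

for i in range(conf.shape[0]):
    precision = conf[i, i] / conf[:, i].sum()  # of samples labeled i, fraction truly i
    recall = conf[i, i] / conf[i, :].sum()     # of true-i samples, fraction labeled i
    f1 = 2 * precision * recall / (precision + recall)
    print('class %d: precision=%.2f recall=%.2f F1=%.2f' % (i, precision, recall, f1))
```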
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
"""
Explanation: To interpret these results, consider facies SS. In our test set, if a sample was labeled SS the probability the sample was correct is 0.8 (precision). If we know a sample has facies SS, then the probability it will be correctly labeled by the classifier is 0.78 (recall). It is desirable to have high values for both precision and recall, but often when an algorithm is tuned to increase one, the other decreases. The F1 score combines both to give a single measure of relevancy of the classifier results.
These results can help guide intuition for how to improve the classifier results. For example, for a sample with facies MS or mudstone, it is only classified correctly 57% of the time (recall). Perhaps this could be improved by introducing more training samples. Sample quality could also play a role. Facies BS or bafflestone has the best F1 score and relatively few training examples. But this data was handpicked from other wells to provide training examples to identify this facies.
We can also consider the classification metrics when we consider misclassifying an adjacent facies as correct:
End of explanation
"""
blind
"""
Explanation: Considering adjacent facies, the F1 scores for all facies types are above 0.9, except when classifying SiSh or marine siltstone and shale. The classifier often misclassifies this facies (recall of 0.66), most often as wackestone.
These results are comparable to those reported in Dubois et al. (2007).
Applying the classification model to the blind data
We held a well back from the training, and stored it in a dataframe called blind:
End of explanation
"""
y_blind = blind['Facies'].values
"""
Explanation: The label vector is just the Facies column:
End of explanation
"""
well_features = blind.drop(['Facies', 'Formation', 'Well Name', 'Depth'], axis=1)
"""
Explanation: We can form the feature matrix by dropping some of the columns and making a new dataframe:
End of explanation
"""
X_blind = scaler.transform(well_features)
"""
Explanation: Now we can transform this with the scaler we made before:
End of explanation
"""
y_pred = clf.predict(X_blind)
blind['Prediction'] = y_pred
"""
Explanation: Now it's a simple matter of making a prediction and storing it back in the dataframe:
End of explanation
"""
cv_conf = confusion_matrix(y_blind, y_pred)
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
"""
Explanation: Let's see how we did with the confusion matrix:
End of explanation
"""
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
"""
Explanation: We managed 0.71 using the test data, but it was from the same wells as the training data. This more reasonable test does not perform as well...
End of explanation
"""
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
def compare_facies_plot(logs, compadre, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[6])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im2, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-2):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[6].set_xlabel(compadre)
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
ax[6].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
compare_facies_plot(blind, 'Prediction', facies_colors)
"""
Explanation: ...but does remarkably well on the adjacent facies predictions.
End of explanation
"""
well_data = pd.read_csv('validation_data_nofacies.csv')
well_data['Well Name'] = well_data['Well Name'].astype('category')
well_features = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
"""
Explanation: Applying the classification model to new data
Now that we have a trained facies classification model we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input.
This dataset is similar to the training data except it does not have facies labels. It is loaded into a dataframe called well_data.
End of explanation
"""
X_unknown = scaler.transform(well_features)
"""
Explanation: The data needs to be scaled using the same constants we used for the training data.
End of explanation
"""
#predict facies of unclassified data
y_unknown = clf.predict(X_unknown)
well_data['Facies'] = y_unknown
well_data
well_data['Well Name'].unique()
"""
Explanation: Finally we predict facies labels for the unknown data, and store the results in a Facies column of the well_data dataframe.
End of explanation
"""
make_facies_log_plot(
well_data[well_data['Well Name'] == 'STUART'],
facies_colors=facies_colors)
make_facies_log_plot(
well_data[well_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies_colors)
well_data.to_csv('well_data_with_facies.csv')
"""
Explanation: We can use the well log plot to view the classification results along with the well logs.
End of explanation
"""
|
CartoDB/cartoframes | docs/examples/data_observatory/do_data_enrichment.ipynb | bsd-3-clause | import geopandas as gpd
import matplotlib.pyplot as plt
import seaborn as sns
from cartoframes.auth import set_default_credentials
from cartoframes.data.observatory import *
from cartoframes.data.services import Isolines
from cartoframes.viz import *
sns.set_style('whitegrid')
%matplotlib inline
"""
Explanation: Advanced Data Enrichment using CARTO's Data Observatory
This notebook shows how to use CARTOframes to enrich the area of influence of different POIs with data from CARTO's Data Observatory. Please visit the CARTOframes Guides to learn more about the enrichment functionality.
We will demonstrate the CARTOframes enrichment functionality with an example in which we quantify the number of eating places within a 5-minute isochrone of every sports POI in Madrid downtown.
The notebook is organized as follows:
1. Download sports POIs
2. Calculate isochrones
3. Enrich isochrones
- Simple enrichment: Counting the number of POIs within isochrones
- Enrichment applying filters: Counting the number of eating places
- Brief analysis
Note: for this notebook we are using the premium dataset of Pitney Bowes POIs in Spain.
Setup
Import packages
End of explanation
"""
from cartoframes.auth import set_default_credentials
set_default_credentials('creds.json')
"""
Explanation: Set CARTO default credentials
In order to be able to use the Data Observatory via CARTOframes, you need to set your CARTO account credentials first.
Please visit the Authentication guide for further detail.
End of explanation
"""
Catalog().subscriptions().datasets.to_dataframe()
pois_ds = Dataset.get('pb_points_of_i_94bda91b')
pois_ds.head()
sql_query = """
SELECT * except(do_label) FROM $dataset$
WHERE TRADE_DIVISION = 'DIVISION M. - SPORTS'
AND ST_IntersectsBox(geom, -3.716398,40.407437,-3.690477,40.425277)
"""
pois_df = pois_ds.to_dataframe(sql_query=sql_query)
# To keep only the latest version of POI's
pois_df = pois_df.sort_values(['NAME', 'do_date']).groupby('NAME').first().reset_index()
pois_df.head()
"""
Explanation: Note about credentials
For security reasons, we recommend storing your credentials in an external file to prevent publishing them by accident when sharing your notebooks. You can get more information in the section Setting your credentials of the Authentication guide.
<a id='section1'></a>
1. Download sports POIs
We need to start with the initial DataFrame that we would like to enrich. Normally, this initial DataFrame contains your own data, which you then enrich with data from the Data Observatory. In this case, we will download all sports POIs and use them as our initial DataFrame.
We first check that we are subscribed to the PB POIs dataset in Spain and download the sports POIs within a bounding box covering Madrid downtown. You can calculate your bounding box of interest using bboxfinder.
For a step by step description on how to discover and download premium datasets, take a look at templates: <a href='https://carto.com/developers/cartoframes/examples/#example-data-discovery-in-the-data-observatory' target='_blank'>Data Discovery</a> and <a href='https://carto.com/developers/cartoframes/examples/#example-access-premium-data-from-the-data-observatory' target='_blank'>Access Premium Data</a>.
End of explanation
"""
iso_service = Isolines()
isochrones_gdf, isochrones_metadata = iso_service.isochrones(pois_df, [300], mode='walk', geom_col='geom')
isochrones_gdf.head()
pois_df['isochrone'] = isochrones_gdf.sort_values('source_id')['the_geom'].values
pois_df.head()
"""
Explanation: <a id='section2'></a>
2. Calculate isochrones
For this analysis, we are interested in the number of eating places reachable within a 5-minute walk of every sports POI. We'll now calculate a 5-minute isochrone for every POI, representing the area reachable within 5 minutes.
You can read more regarding isochrones on CARTOframes Guides.
End of explanation
"""
Map([Layer(pois_df, geom_col='geom'),
Layer(pois_df, geom_col='isochrone', style=basic_style(opacity=0.1))])
"""
Explanation: Visualize isochrones
End of explanation
"""
enrichment = Enrichment()
"""
Explanation: <a id='section3'></a>
3. Enrich isochrones
We will now proceed to enrich our DataFrame.
For enriching datasets, we use the Enrichment class. Please, visit CARTOframes Guides to learn more.
End of explanation
"""
# Here we can use any variable because we're only interested in counts
pois_df = enrichment.enrich_polygons(
pois_df,
variables=['CLASS_517d6003'],
aggregation='COUNT',
geom_col='isochrone'
)
# We rename the column name to give it a more descriptive name
pois_df.rename(columns={'CLASS_y':'n_pois'}, inplace=True)
pois_df.head()
"""
Explanation: <a id='section31'></a>
3.1 Simple enrichment: Counting the number of POIs within isochrones
We will start by simply counting the number of POIs within each isochrone. This will allow us to measure how busy the area around each sports POI is.
To do this, we use the Enrichment function enrich_polygons(). Since we only want a count, any variable will do; we pick CLASS_517d6003 because we will reuse it later. Remember you can list the dataset variables with pois_ds.variables.to_dataframe().
Note that we need to specify the name of the geometry column (geom_col) because we are working with a DataFrame instead of a GeoDataFrame.
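As a toy illustration of what a COUNT aggregation does conceptually (not the actual Data Observatory implementation): once each POI is matched to a polygon, the enrichment reduces to a group-by count.

```python
import pandas as pd

# Toy stand-in: assume a spatial join already tagged each POI with the id of
# the isochrone polygon it falls in; counting is then a simple group-by.
matched = pd.DataFrame({
    "polygon_id": [1, 1, 2],
    "CLASS": ["EATING PLACES", "GYMS", "EATING PLACES"],
})
counts = matched.groupby("polygon_id").size().rename("n_pois").reset_index()
print(counts["n_pois"].tolist())  # [2, 1]
```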
End of explanation
"""
Map(Layer(pois_df, geom_col='geom',
style=size_continuous_style('n_pois'),
legends=size_continuous_legend('# POIs'),
popup_hover=[popup_element('NAME', 'Name'),
popup_element('n_pois', 'Number of POIs')]))
"""
Explanation: Visualize enrichment
End of explanation
"""
pois_df = enrichment.enrich_polygons(
pois_df,
variables=['CLASS_517d6003'],
aggregation='COUNT',
    geom_col='isochrone',
filters={Variable.get('CLASS_517d6003').id:"= 'EATING PLACES/RESTAURANTS'"}
)
# We rename the column name to give it a more descriptive name
pois_df.rename(columns={'CLASS':'n_pois_eating'}, inplace=True)
pois_df.head()
"""
Explanation: <a id='section32'></a>
3.2 Enrichment applying filters: Counting the number of eating places
Now, we are interested in the number of eating places within a 5-minute isochrone of every sports POI. This requires a filter indicating that only eating places should be counted. Filters are passed as a dictionary, where the key is the id of the variable to filter on and the value is the filter predicate.
If you are interested in knowing how to identify the variable to use as filter, check out this <a href='https://carto.com/developers/cartoframes/examples/#example-access-premium-data-from-the-data-observatory' target='_blank'>notebook</a> on how to access and download premium data.
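A small sketch of how such a filter dictionary is shaped. The variable id is shortened here to a placeholder, and the way the predicate is ultimately composed into SQL is our assumption about what happens under the hood:

```python
# Key: the id of the variable to filter on; value: a SQL predicate string.
class_var_id = "CLASS_517d6003"  # placeholder standing in for Variable.get(...).id
filters = {class_var_id: "= 'EATING PLACES/RESTAURANTS'"}

# Conceptually, the enrichment appends something like: WHERE <variable> <predicate>
clause = " AND ".join(f"{var} {pred}" for var, pred in filters.items())
print(clause)  # CLASS_517d6003 = 'EATING PLACES/RESTAURANTS'
```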
End of explanation
"""
Map(Layer(pois_df, geom_col='geom',
style=size_continuous_style('n_pois_eating'),
legends=size_continuous_legend('# Eating POIs'),
popup_hover=[popup_element('NAME', 'Name'),
popup_element('n_pois_eating', 'Number of eating places')]))
"""
Explanation: Visualize enrichment
End of explanation
"""
Layout([Map(Layer(pois_df, geom_col='geom',
style=size_continuous_style('n_pois'),
legends=size_continuous_legend('# POIs'),
popup_hover=[popup_element('NAME', 'Name'),
popup_element('n_pois', 'Number of POIs')])),
Map(Layer(pois_df, geom_col='geom',
style=size_continuous_style('n_pois_eating'),
legends=size_continuous_legend('# Eating POIs'),
popup_hover=[popup_element('NAME', 'Name'),
popup_element('n_pois_eating', 'Number of eating places')]))],
map_height=550)
import matplotlib.pyplot as plt
import seaborn as sns

plt.figure(figsize=(12,5))
sns.regplot(x=pois_df['n_pois'], y=pois_df['n_pois_eating'],
            scatter_kws={'color':'blue', 'alpha':0.5}, line_kws={'color':'red'})
"""
Explanation: <a id='section33'></a>
3.3 Brief analysis
Let's now take a look at how the total number of POIs and eating places around sports POIs correlate.
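The regression plot visualizes the relationship; numerically, the same idea is a Pearson correlation, sketched here on toy counts (not the real data):

```python
import numpy as np

# Pearson correlation on toy counts standing in for n_pois / n_pois_eating.
n_pois = np.array([10, 20, 30, 40])
n_eating = np.array([3, 7, 12, 15])
r = np.corrcoef(n_pois, n_eating)[0, 1]
print(round(r, 2))
```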
End of explanation
"""
|
google/alligator2 | alligator2.ipynb | apache-2.0 | # Printing to screen
print("I'm a code block")
# Defining variables
a = 2
b = 5
c = a + b
print(f"a equals {a}")
print(f"b equals {b}")
print(f"a plus b equals {c}")
# Proper indentation is essential in Python
for x in range(1,6):
print(x)
"""
Explanation: Make a copy of this notebook!
Intro to Colab
60 second crash course in Colab notebooks
A notebook is a list of cells. Cells contain either explanatory text or executable code and its output. This is a text cell. You can double-click to edit this cell.
Once the toolbar button indicates CONNECTED, click in the cell to select it and execute the contents in the following ways:
Click the Play icon in the left gutter of the cell; or
Type Cmd/Ctrl + Enter to run the cell in place.
Good to know
* Hashtags (#) are Python comments (they're ignored during code execution)
* Use Cmd/Ctrl + / to comment out a line of code (helpful during debugging)
* When you execute a code block, anything within that code block can be referenced elsewhere in the notebook
End of explanation
"""
#@markdown Authenticate your user for this colab
from google.colab import auth
auth.authenticate_user()
"""
Explanation: Alligator 2.0 Notebook
This notebook contains the steps necessary to set up the Google My Business (GMB) Insights extraction and aggregation tool known as Alligator 2.0, as well as the visualization of the extracted insights in a Google Data Studio dashboard.
Solution Details
Overview
Alligator 2.0 (Alligator for short) is a Python-based solution that aggregates Insights data from the GMB API and stores it in Google Cloud Platform, specifically in BigQuery. Insights data provides details about how users interact with GMB listings via Google Maps, such as the number of queries for a location, where people searched for directions from, and the number of website clicks, calls, and reviews.
The solution provides a cross-account look at the GMB data, instead of a per-location view. In addition, the use of BigQuery to aggregate and store this data allows trends longer than the range accessible through the GMB API to be captured.
Along with gathering stats, the Natural Language API from Cloud has been used to provide sentiment analysis, as well as entity detection for supported languages.
Architecture
Alligator is a process built in Python that pulls data from the GMB API and gathers it into a BigQuery instance.
The process uses the publicly available GMB API, along with the Big Query API to access, download and aggregate the data. Access is gained using standard OAuth tokens, which are stored in the runtime environment executing the Python process.
Once aggregated, tables are created in BigQuery, and the reviews are processed with the Cloud Natural Language API to extract sentiment information.
GMB Account Prerequisites
All locations must roll up to a Location Group (formerly known as a Business Account). If this is not the case, please create a single Location Group and add all locations to it. Click here for more information.<br/>Multiple location groups are supported and can be queried accordingly (refer to the BigQuery Views section below).
The user accessing the GMB UI must have Admin privileges set for the Location Group(s).
GCP Project Setup
To get started, you first need to create or select an existing Google Cloud Platform Project that is allowed to access the GMB API. For more information on getting access, refer to this guide.
Once your project is created and approved, the GMB API will automatically be enabled for use within the project (if not, enable it using this link). Additionally, please enable the following APIs:
Cloud Natural Language API
BigQuery API
Make sure that the user with access to the GMB UI/API also has permission to access the other APIs as well.
Authentication
The APIs are authenticated via OAuth, which means that you authenticate as a user as opposed to as an application. Your user permissions control what you have access to. If you don't have access in the UI you cannot access via the API.
The client secrets give your application permission to access the API using your user credentials.
Go to the API & Services > Credentials page on Google Cloud and select the project for which you enabled the APIs earlier.
Create an OAuth 2.0 Client ID of the type Desktop App.
Download the client secrets json file and save it as client_secrets.json.
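For reference, a Desktop App client secrets file has roughly the shape below. The values are placeholders; consult the file you actually downloaded for the real fields.

```python
import json

# Placeholder values mimicking the shape of a Desktop App client_secrets.json.
secrets = {
    "installed": {
        "client_id": "1234567890-abc.apps.googleusercontent.com",
        "client_secret": "REPLACE_ME",
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://oauth2.googleapis.com/token",
    }
}
with open("client_secrets.json", "w") as f:
    json.dump(secrets, f, indent=2)

cfg = json.load(open("client_secrets.json"))
print(cfg["installed"]["client_id"].endswith(".apps.googleusercontent.com"))  # True
```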
End of explanation
"""
#@markdown Set a global flag representing your GCP Project ID
PROJECT_ID = '<your-project-id-here>' # @param
#@markdown Clone the alligator2 GitHub repository and cd into it
!echo "Restoring working directory to root..."
%cd /content
!rm -rf alligator2 && git clone https://github.com/google/alligator2.git
!echo "Changing working directory to alligator2..."
%cd alligator2
#@markdown Copy the client_secrets.json file into the alligator2 directory
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('Uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
#@markdown Install the application's required packages
!pip3 install --upgrade --requirement requirements.txt
#@markdown Download the GMB API discovery document
#@markdown <br/>*Refer to this*
#@markdown [*link*](https://developers.google.com/my-business/samples/#discovery_document)
#@markdown *for the latest available discovery document as the one used here may have been outdated*
!wget -O gmb_discovery.json https://developers.google.com/my-business/samples/mybusiness_google_rest_v4p7.json
"""
Explanation: Installation Guide
End of explanation
"""
!python3 main.py --project_id=$PROJECT_ID
"""
Explanation: Now you are all set; execute the cell below to run the application and start extracting and aggregating GMB data!
Refer to the CLI Usage section of the GitHub documentation for more information on the different flags you can use.
End of explanation
"""
%%bigquery --project $PROJECT_ID
SELECT
*
FROM
alligator.INFORMATION_SCHEMA.TABLES;
"""
Explanation: Depending on the amount of GMB source data you have across your locations and location groups, the frequency of analysis may differ.
<br/>Usually, a weekly workflow suffices for capturing enough insights. Refer to the Maintenance Guide section below for more information.
You can also fine-tune the constants referenced in api.py to fit your data volume and individual use cases. The predefined values are optimized for extracting significantly large volumes of data without running into operational / transactional limits.
```python
DATASET_ID = "alligator"

# Max lookback window for insights metrics, in days.
INSIGHTS_DAYS_BACK = 540

# Calls metrics are aggregated in GMB; this value supports a weekly workflow.
CALLS_DAYS_BACK = 7

# Directions metrics are limited to predefined values in GMB; this value supports a weekly workflow.
DIRECTIONS_NUM_DAYS = "SEVEN"

# Max number of rows of data to retrieve from BQ. Used when processing sentiments for existing reviews.
BQ_JOBS_QUERY_MAXRESULTS_PER_PAGE = 1000

# Max batch size for BQ streaming inserts. Do not change this value as you might run into API limits.
BQ_TABLEDATA_INSERTALL_BATCHSIZE = 50
```
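For instance, the effective insights window implied by these constants can be sketched as follows. The 5-day delay is taken from the table descriptions in this notebook; how main.py actually combines the values is our assumption.

```python
from datetime import date, timedelta

INSIGHTS_DAYS_BACK = 540
INSIGHTS_DELAY_DAYS = 5  # GMB insights lag behind by ~5 days (per this notebook)

today = date(2021, 6, 15)  # fixed date for a reproducible example
window_end = today - timedelta(days=INSIGHTS_DELAY_DAYS)
window_start = window_end - timedelta(days=INSIGHTS_DAYS_BACK)
print(window_start, "to", window_end)
print((window_end - window_start).days)  # 540
```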
Once the script is run, your BigQuery dataset will be populated with several tables depending on the execution flags used.<br/>
Execute the next cell to view your dataset's contents (make sure you change the dataset name if you are not using the default 'alligator').
End of explanation
"""
%%bigquery reviews_agg_rating --project $PROJECT_ID
SELECT
r.starRating AS rating,
count(1) AS count_rating
FROM
alligator.reviews AS r
GROUP BY r.starRating
ORDER BY (
CASE r.starRating
WHEN "ONE" then 1
WHEN "TWO" then 2
WHEN "THREE" then 3
WHEN "FOUR" then 4
WHEN "FIVE" then 5
END
);
"""
Explanation: Here is a description of each table and the extraction methods used:
accounts: Provides information on the accounts (e.g., location groups) associated to the user. It is omitted if running for a specific account using the --account_id switch.
locations: Provides information about locations associated to the user through the GMB accounts. It is omitted if running for a specific location using the --location_id switch.
insights: Executes the reportInsights method and gets all the information under Location Metrics, aggregated by day, for the last 540 days (with 5 days delay) for all associated locations. This range can be modified in the code.
directions: Executes the reportInsights method and gets all the information under Driving Direction Metrics, for the last 7 days for all associated locations. This range can be modified to one of the allowed ones (7, 30 or 90 days).
hourly_calls: Executes the reportInsights method and gets all the phone actions aggregated by hour of the day, for the last 7 days (with 5 days delay) for all associated locations. This range can be modified in the code, but consistency of the data will depend on the extraction schedule and the range complementariness.
reviews: Provides all the reviews available in the API for all associated locations.
sentiments: Provides the sentiment extracted for all the reviews present in the reviews table at execution time.
Maintenance Guide
Data extracted by Alligator needs to be maintained in order to be consistent, up to date and GDPR compliant.
On the compliance side, any user can delete a review in GMB, and the data owner is responsible for deleting all copies of it in their systems. Unfortunately the GMB API does not provide a means to get information on deleted reviews. The workaround for this is to extract all reviews periodically, and delete old copies from the data lake.
To ensure consistency, and as the extraction process could duplicate information, all the tables are partitioned by day of extraction to facilitate deletion of old data.
On the same note, directions and hourly_calls differ from the other tables in that their information is aggregated (hourly_calls) or limited to predefined ranges (directions) in the API. For that reason, avoid deleting these two tables periodically, in case you want to extend the lookback window farther back than the API provides.
Weekly Workflow
Alligator is prepared to use this workflow, and we propose the following approach to handle extractions and deletions.
Day 01: the extraction is run completely.
Day 02 to 07: nothing.
Day 08: prior to running the extraction completely, manually delete all data with _PARTITIONTIME < TODAY from all tables, except for directions and hourly_calls.
Day 08 to 14: nothing.
Day 15: prior to running the extraction completely, manually delete all data with _PARTITIONTIME < TODAY from all tables, except for directions and hourly_calls.
<br/>...
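The manual deletion step can be scripted. A sketch of the statements involved follows; the dataset and table names mirror the defaults above, and filtering on `_PARTITIONTIME` inside a DELETE is an assumption you should verify against your BigQuery setup.

```python
# Build DELETE statements for the weekly cleanup, skipping the two tables whose
# history must be preserved (directions and hourly_calls).
DATASET = "alligator"
ALL_TABLES = ["accounts", "locations", "insights", "directions",
              "hourly_calls", "reviews", "sentiments"]
KEEP = {"directions", "hourly_calls"}

stmts = [
    f"DELETE FROM {DATASET}.{t} WHERE _PARTITIONTIME < TIMESTAMP(CURRENT_DATE())"
    for t in ALL_TABLES if t not in KEEP
]
print(len(stmts))  # 5
print(stmts[0])
```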
Daily Workflow
In case there is a need to get information as soon as possible, there is an alternative approach, but it might require a bit more maintenance.
Day 01: the extraction is run completely.
Day 02: the extraction is run with the --no-directions and --no-hourly-calls flag. Before doing that, all data with _PARTITIONTIME < TODAY should be manually deleted from all tables, except for directions and hourly_calls.
Day 03: (same as Day 02)
Day 04: (same as Day 02)
Day 05: (same as Day 02)
Day 06: (same as Day 02)
Day 07: (same as Day 02)
Day 08: prior to running the extraction completely, manually delete all data with _PARTITIONTIME < TODAY from all tables, except for directions and hourly_calls.
Day 09: (same as Day 02)
Day 10: (same as Day 02)
Day 11: (same as Day 02)
Day 12: (same as Day 02)
Day 13: (same as Day 02)
Day 14: (same as Day 02)
Day 15: prior to running the extraction completely, manually delete all data with _PARTITIONTIME < TODAY from all tables, except for directions and hourly_calls.
<br/>...
Reporting Guide
Now that the data is available in BigQuery we can build the Data Studio Dashboard. First, let's try to visualize some of the data you have extracted so far. As an example, the code below will aggregate the star rating of all your reviews (across all your locations):
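The CASE expression in the query orders the textual star ratings numerically; the same mapping in plain Python looks like this (toy review list, not real data):

```python
from collections import Counter

# GMB stores star ratings as words; map them to numbers to sort sensibly.
RATING = {"ONE": 1, "TWO": 2, "THREE": 3, "FOUR": 4, "FIVE": 5}
reviews = ["FIVE", "FOUR", "FIVE", "ONE"]  # toy data

counts = Counter(reviews)
ordered = sorted(counts.items(), key=lambda kv: RATING[kv[0]])
print(ordered)  # [('ONE', 1), ('FOUR', 1), ('FIVE', 2)]
```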
End of explanation
"""
%matplotlib inline
reviews_agg_rating.plot(kind="bar", x="rating", y="count_rating");
"""
Explanation: Now let's plot the data using matplotlib.
End of explanation
"""
%%bigquery --project $PROJECT_ID
WITH location_reviews AS (
SELECT
l.locationName,
(CASE
WHEN r.starRating = "ONE" THEN 1
WHEN r.starRating = "TWO" THEN 2
WHEN r.starRating = "THREE" THEN 3
WHEN r.starRating = "FOUR" THEN 4
WHEN r.starRating = "FIVE" THEN 5
ELSE NULL
END) AS numRating
FROM
alligator.reviews AS r
JOIN
alligator.locations AS l
ON l.name = REGEXP_EXTRACT(r.name, '(.*)/reviews/')
)
SELECT
locationName,
AVG(numRating) AS avgRating
FROM
location_reviews
WHERE numRating IS NOT NULL
GROUP BY locationName
ORDER BY avgRating DESC
LIMIT 10;
"""
Explanation: Let's keep going and identify the top 10 locations based on review rating:
End of explanation
"""
#@markdown Enter the desired plotting params
#@markdown <br/>*param_zipcode* and *param_address* are optional,
#@markdown though it is highly recommended that you enter a postal code to view the effect of having multiple locations within the same area:
param_city = '<city>' # @param
param_zipcode = '<zipcode>' # @param
param_address = '' # @param
params = {
'param_city': param_city,
'param_zipcode': param_zipcode,
'param_address': param_address
}
#@markdown Fetch data from BigQuery and store it in a Pandas DataFrame named 'geo_data'.
%%bigquery geo_data --project $PROJECT_ID --params $params
WITH temp_data AS (
SELECT
regionCode AS countryCode,
locality AS city,
postalCode AS locationZipCode,
locationName AS locationName,
addressLines AS locationAddress,
label AS sourceZipCode,
latitude,
longitude
FROM
alligator.formatted_directions
WHERE SAFE_CAST(label AS NUMERIC) IS NOT NULL
ORDER BY city, locationName, locationAddress, locationZipCode, sourceZipCode
), temp_aggregated_data AS (
SELECT distinct
regionCode AS countryCode,
locality AS city,
postalCode AS destinationZipCode,
label AS sourceZipCode,
latitude,
longitude
FROM
alligator.formatted_directions
WHERE SAFE_CAST(label AS NUMERIC) IS NOT NULL
ORDER BY city, destinationZipCode, sourceZipCode
), temp_geo_data AS (
SELECT
d.countryCode,
d.city,
d.locationZipCode,
d.locationName,
d.locationAddress,
ARRAY_AGG(d.sourceZipCode ORDER BY d.sourceZipCode) AS sourceZipCodes,
ARRAY_AGG(ST_GeogPoint(d.longitude, d.latitude) ORDER BY d.sourceZipCode) AS sourceGeoPoints
FROM
temp_data AS d
GROUP BY countryCode, city, locationZipCode, locationName, locationAddress
ORDER BY countryCode, city, locationZipCode, locationName, locationAddress
), temp_aggregated_geo_data AS (
SELECT
a.countryCode,
a.city,
a.destinationZipCode,
ARRAY_AGG(a.sourceZipCode ORDER BY a.sourceZipCode) AS sourceZipCodes,
ARRAY_AGG(ST_GeogPoint(a.longitude, a.latitude) ORDER BY a.sourceZipCode) AS sourceGeoPoints
FROM
temp_aggregated_data AS a
GROUP BY countryCode, city, destinationZipCode
ORDER BY countryCode, city, destinationZipCode
), directions_geo AS (
SELECT
ag.countryCode,
ag.city,
ag.destinationZipCode,
ag.sourceZipCodes,
CASE
WHEN
ST_NUMPOINTS(ST_MAKELINE(ag.sourceGeoPoints)) >= 3
THEN
ST_GEOGFROMGEOJSON(REPLACE(REPLACE(REPLACE(ST_ASGEOJSON(ST_MAKELINE(ag.sourceGeoPoints)), 'LineString', 'Polygon'), ': [ [', ':[[['), '] ] } ', ']]]}'), make_valid => true)
ELSE
ST_MAKELINE(ag.sourceGeoPoints)
END AS sourceZipCodesGeoPolygon,
g.locationName,
g.locationAddress,
g.sourceZipCodes AS locationSourceZipCodes,
CASE
WHEN
ST_NUMPOINTS(ST_MAKELINE(g.sourceGeoPoints)) >= 3
THEN
ST_GEOGFROMGEOJSON(REPLACE(REPLACE(REPLACE(ST_ASGEOJSON(ST_MAKELINE(g.sourceGeoPoints)), 'LineString', 'Polygon'), ': [ [', ':[[['), '] ] } ', ']]]}'), make_valid => true)
ELSE
ST_MAKELINE(g.sourceGeoPoints)
END AS locationSourceZipCodesGeoPolygon
FROM
temp_aggregated_geo_data AS ag
JOIN temp_geo_data AS g
ON g.locationZipCode = ag.destinationZipCode
ORDER BY ag.countryCode, ag.city, ag.destinationZipCode, g.locationAddress
)
# The aformentioned temp tables can be stored as a BigQuery view named
# directions_geo and queried below directly as alligator.directions_geo
SELECT
city,
destinationZipCode,
locationName,
locationAddress,
ST_ASGEOJSON(locationSourceZipCodesGeoPolygon) AS geojson
FROM
directions_geo
WHERE
city = @param_city
AND destinationZipCode = (
CASE
WHEN
@param_zipcode = ''
THEN
destinationZipCode
ELSE
@param_zipcode
END
)
AND locationAddress = (
CASE
WHEN
@param_address = ''
THEN
locationAddress
ELSE
@param_address
END
)
;
#@markdown Output the DataFrame:
import pandas as pd
pd.set_option("display.max_rows", None, "display.max_columns", None, "display.max_colwidth", None)
geo_data
#@markdown We need to fetch a latitude, longitude pair for every location within the provided postal code.
#@markdown <br/>Multiple locations will be visualized using different colors.
#@markdown <br/><br/>Execute this cell to install all required packages.
!pip3 install -U geocoder geojson
#@markdown Execute this cell to create a [GeoJSON specification](https://geojson.org/) string.
import geocoder
import json
import random
from geojson import FeatureCollection, Feature, Point
def geocode(city, zipcode, address):
g = geocoder.osm(f"{address} {zipcode} {city}", components="country:DE")
return g.lng, g.lat
def geodata_feature_point(row, color):
city = row.city
zipcode = row.destinationZipCode
name = row.locationName
address = row.locationAddress
lng, lat = geocode(city, zipcode, address)
point = Point((lng, lat))
feature = Feature(geometry=point, properties={
'name': f'Location Pointer',
'popupContent': f'{name}<br>{address}<br>{zipcode} {city}',
'color': color
})
return feature
def geodata_feature_polygon(row, color):
city = row.city
zipcode = row.destinationZipCode
name = row.locationName
address = row.locationAddress
j = json.loads(row.geojson)
feature = Feature(geometry=j, properties={
'name': f'Driving Directions Polygon',
'popupContent': f'Driving directions for:<br/>{name}<br>{address}<br>{zipcode} {city}',
'color': color
})
return feature
def gen_colors(number_of_colors):
colors = ["#"+''.join([random.choice('0123456789ABCDEF') for j in range(6)])
for i in range(number_of_colors)]
return colors
def geodata_features():
features = []
points = []
polygons = []
colors = gen_colors(len(geo_data))
for index, row in geo_data.iterrows():
color = colors[index]
feature_point = geodata_feature_point(row, color)
points.append(feature_point)
feature_polygon = geodata_feature_polygon(row, color)
polygons.append(feature_polygon)
# Add polygons first to ensure that points are plotted above them
features.extend(polygons)
features.extend(points)
return features
address = None
if param_address != '':
address = param_address
lng, lat = geocode(param_city, param_zipcode, address)
feature_collection = FeatureCollection(geodata_features())
#@markdown Now execute this cell to visualize the data.
from IPython.display import HTML
# Single curly braces are reserved for "replacement fields", escape curly braces in literal text by doubling them.
display(HTML(f'''
<link rel="stylesheet" href="https://unpkg.com/leaflet@1.7.1/dist/leaflet.css" integrity="sha512-xodZBNTC5n17Xt2atTPuE1HxjVMSvLVW9ocqUKLsCC5CXdbqCmblAshOMAS6/keqq/sMZMZ19scR4PsZChSR7A==" crossorigin=""/>
<script src="https://unpkg.com/leaflet@1.7.1/dist/leaflet.js" integrity="sha512-XQoYMqMTK8LvdxXYG3nZ448hOEQiglfqkJs1NOQV44cWnUrBc8PkAOcXy20w0vlaXaVUearIOBhiXZ5V3ynxwA==" crossorigin=""></script>
<div id="mapid" style="width: 600px; height: 400px;"></div>
<script>
function onEachFeature(feature, layer) {{
if (feature.properties && feature.properties.popupContent) {{
layer.bindPopup(feature.properties.popupContent);
}}
}}
function styles(feature) {{
if (feature.properties && feature.properties.color) {{
return {{
color: feature.properties.color
}};
}}
}}
function pointToLayer(feature, latlng) {{
var options = geojsonMarkerOptions;
if (feature.properties && feature.properties.color) {{
options.fillColor = feature.properties.color;
}}
return L.circleMarker(latlng, options);
}}
var geojsonMarkerOptions = {{
radius: 8,
fillColor: "#ff7800",
color: "#000",
weight: 1,
opacity: 1,
fillOpacity: 0.8
}};
var map = L.map('mapid').setView([{lat}, {lng}], 12);
L.tileLayer('https://api.mapbox.com/styles/v1/{{id}}/tiles/{{z}}/{{x}}/{{y}}?access_token=pk.eyJ1IjoibWFwYm94IiwiYSI6ImNpejY4NXVycTA2emYycXBndHRqcmZ3N3gifQ.rJcFIG214AriISLbB6B5aw', {{
maxZoom: 18,
attribution: 'Map data © <a href="https://www.openstreetmap.org/">OpenStreetMap</a> contributors, ' +
'<a href="https://creativecommons.org/licenses/by-sa/2.0/">CC-BY-SA</a>, ' +
'Imagery © <a href="https://www.mapbox.com/">Mapbox</a>',
id: 'mapbox/streets-v11',
tileSize: 512,
zoomOffset: -1
}}).addTo(map);
L.geoJSON({feature_collection}, {{
onEachFeature: onEachFeature,
style: styles,
pointToLayer: pointToLayer
}}).addTo(map);
</script>
'''))
"""
Explanation: Now that you have an idea of the different visualizations possible with this data, let's build the Data Studio Dashboard. You can use this dashboard template and copy it as-is, but first you will need to prepare the data sources beforehand.
For this, you have two options:
Create views within your GCP project's BigQuery dataset to supply the data to the dashboard; or
Use Data Studio's Custom Query interface to directly input the views' SQL queries.
It is recommended that you follow the first approach (views in BigQuery) for maintainability and performance reasons.
To create a view in BigQuery, you can follow the steps described here. You will need to create six views, and you can find sample SQL files in the sql folder of the project (make sure to modify the Project ID and BigQuery dataset if you are not using the default).
Once the views are available, you need to link them to Data Studio by creating Data Sources as described here, and selecting the My Projects or Shared Projects options.
If you prefer to go the Custom Query route, you can use the same instructions to create the Data Sources but selecting the Custom Query option. You can also use the sql samples as the custom queries on each Data Source.
After the Data Sources are created, you can create a copy of the template and replace the sample data sources with the ones associated with your project.
Geographical Visualization for Driving Directions
At the time of writing, Data Studio's built-in Geo and Google Maps chart types did not support custom map overlays.
For example, say we want to plot both an existing location as a pointer on the map, along with a polygon representing all source postal codes from which driving directions originated.
Since this is not currently possible with Data Studio, BigQuery Geo Viz and/or geojson.io (among others) can be used instead.
Follow the instructions below to plot this information for a given city and postal code pair using geojson.io. You could also materialize the SQL below as a view in BigQuery and use BigQuery Geo Viz for a more aggregated view (e.g. city-wide or even regional) of the data.
End of explanation
"""
|
gwsb-istm-6212-fall-2016/syllabus-and-schedule | lectures/week-03/20160913-lecture-notes.ipynb | cc0-1.0 | !mkdir mydirectory
!ls > mydirectory/myfiles.txt
!rm myfiles.txt
!rm mydirectory/myfiles.txt
!ls mydirectory
"""
Explanation: Week 3 lecture notes
Exercise 2 review - common mistakes
Including directories in paths
If you create a file in a lower directory, then want to modify, move, or delete it, you have to use the directory to refer to it.
End of explanation
"""
!date > datefile.txt
!cat datefile.txt
!date > datefile.txt
!cat datefile.txt
!date >> datefile.txt
!date >> datefile.txt
!cat datefile.txt
"""
Explanation: ">" vs ">>"
Both ">" and ">>" redirect output from the screen to a file. Both will create new files if none yet exists. Only ">" will overwrite an existing file; ">>" will append to an existing file.
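The same truncate-vs-append distinction exists in Python's file modes: "w" behaves like ">", "a" like ">>".

```python
# "w" truncates like ">", "a" appends like ">>".
with open("modes_demo.txt", "w") as f:
    f.write("first\n")
with open("modes_demo.txt", "w") as f:   # overwrites, like ">"
    f.write("second\n")
with open("modes_demo.txt", "a") as f:   # appends, like ">>"
    f.write("third\n")
print(open("modes_demo.txt").read().splitlines())  # ['second', 'third']
```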
End of explanation
"""
!wget https://github.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/raw/master/exercises/pg2500.txt
!grep -oE '\w{{2,}}' pg2500.txt | grep -v '^[0-9]' | uniq -c | head
"""
Explanation: lower|sort|uniq or sort|lower|uniq
Order matters! Consider the text from exercise-02.
End of explanation
"""
!grep -oE '\w{{2,}}' pg2500.txt | grep -v '^[0-9]' | uniq -c | tr '[:upper:]' '[:lower:]' | sort | head
!grep -oE '\w{{2,}}' pg2500.txt | grep -v '^[0-9]' | uniq -c | sort | tr '[:upper:]' '[:lower:]' | head
!grep -oE '\w{{2,}}' pg2500.txt | grep -v '^[0-9]' | sort | tr '[:upper:]' '[:lower:]' | uniq -c | head
!grep -oE '\w{{2,}}' pg2500.txt | grep -v '^[0-9]' | sort | uniq -c | tr '[:upper:]' '[:lower:]' | head
!grep -oE '\w{{2,}}' pg2500.txt | grep -v '^[0-9]' | tr '[:upper:]' '[:lower:]' | sort | uniq -c | head
!grep -oE '\w{{2,}}' pg2500.txt | grep -v '^[0-9]' | tr '[:upper:]' '[:lower:]' | uniq -c | sort | head
"""
Explanation: Among the set of three functions: {uniq, lower, sort} there are six orderings. Which produce correct results, and why?
uniq, lower, sort
uniq, sort, lower
sort, lower, uniq
sort, uniq, lower
lower, sort, uniq
lower, uniq, sort
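One way to reason about it: uniq only collapses adjacent duplicates, just like itertools.groupby in Python. A quick simulation:

```python
from itertools import groupby

words = ["The", "the", "cat", "the"]
lowered = [w.lower() for w in words]

# "uniq -c" without sorting first only merges adjacent runs:
runs = [(k, sum(1 for _ in g)) for k, g in groupby(lowered)]
print(runs)    # [('the', 2), ('cat', 1), ('the', 1)]

# sort first, then "uniq -c": true counts
counts = [(k, sum(1 for _ in g)) for k, g in groupby(sorted(lowered))]
print(counts)  # [('cat', 1), ('the', 3)]
```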
End of explanation
"""
!grep Romeo romeo.txt | head
"""
Explanation: More about grep
grep is a lot more powerful than what you've seen so far. More than anything else, it's commonly used to find text within files. For example, to find lines with "Romeo" in Romeo and Juliet:
End of explanation
"""
!grep -i what romeo.txt | head
"""
Explanation: There are many, many options, such as case-insensitivity:
End of explanation
"""
!grep -n Juliet romeo.txt | head
"""
Explanation: Another useful one is to print line numbers for matching lines:
End of explanation
"""
!grep -n Juliet romeo.txt | grep -v Romeo | head
"""
Explanation: We can also negate certain terms - show non-matches.
End of explanation
"""
!grep "Romeo\|Juliet" romeo.txt | head
"""
Explanation: And one more useful tip is to match more than one thing:
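The alternation syntax maps directly to Python's re module, in case you want the same match outside the shell:

```python
import re

lines = ["Romeo speaks", "Juliet replies", "Mercutio jokes"]
pat = re.compile(r"Romeo|Juliet")   # same alternation as grep "Romeo\|Juliet"
matches = [line for line in lines if pat.search(line)]
print(matches)  # ['Romeo speaks', 'Juliet replies']
```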
End of explanation
"""
!ls *.txt
"""
Explanation: Wildcards with "*"
Sometimes you need to perform a task with a set of files that share a characteristic like a file extension. The shell lessons had examples with .pdb files. This is common.
The * (asterisk, or just "star") is a wildcard, which matches zero-to-many characters.
End of explanation
"""
!cp romeo.txt womeo.txt
!ls ?omeo.txt
!ls wome?.txt
"""
Explanation: The ? (question mark) is a wildcard that matches exactly one character.
End of explanation
"""
!ls wo*.txt
!ls wo?.txt
"""
Explanation: The difference is subtle - these two would have worked interchangeably on the above. But note:
End of explanation
"""
!chmod +x simplefilter.py
!head pg2500.txt | ./simplefilter.py
!cp simplefilter.py lower.py
!head pg2500.txt | ./lower.py
"""
Explanation: See the difference? The * can match more than one character; ? only matches one.
Writing Python filters
Starting with the simplefilter.py filter, let's write some of our own.
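The essential pattern of a shell filter in Python is: read lines from stdin, transform them, write to stdout. A minimal sketch of a lowercasing filter (our own example, not the actual contents of simplefilter.py):

```python
import sys

def lowercase_lines(stream):
    """Lowercase each line of an input stream; the core of a lower.py filter."""
    return [line.lower() for line in stream]

# In a real filter you'd iterate over sys.stdin and print each result line.
print(lowercase_lines(["Hello World\n", "ROMEO\n"]))  # ['hello world\n', 'romeo\n']
```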
End of explanation
"""
!wc *.txt
"""
Explanation: Working with GNU Parallel
GNU Parallel is an easy to use but very powerful tool with a lot of options. You can use it to process a lot of data easily and it can also make a big mess in a hurry. For more examples, see the tutorial page.
Let's start with something we've seen before: splitting a text file up and counting its unique words.
End of explanation
"""
!grep -oE '\w{{2,}}' romeo.txt \
| tr '[:upper:]' '[:lower:]' \
| sort \
| uniq -c \
| sort -rn \
| head -10
"""
Explanation: That's 25,875 lines and 218,062 words in the texts of Romeo and Juliet and Little Women.
We can split them up into word counts one at a time like we did in exercise-02:
End of explanation
"""
!time grep -oE '\w{{2,}}' women.txt \
| tr '[:upper:]' '[:lower:]' \
| sort \
| uniq -c \
| sort -rn \
| head -10
"""
Explanation: Note that I've wrapped lines around by using the \ character. To me, this looks easier to read - you can see each step of the pipeline one at a time. The \ only means "this shell line continues on the next line". The | still acts as the pipe.
Let's look at a second book, Little Women. We'll add time to get a sense of how long it takes.
End of explanation
"""
!wc *.txt
"""
Explanation: It looks like Little Women is much longer, which makes sense - it's a novel, not a play. More text!
To compare the two directly:
End of explanation
"""
!time grep -oE '\w{{2,}}' romeo.txt women.txt \
| tr '[:upper:]' '[:lower:]' \
| sort \
| uniq -c \
| sort -rn \
| head -10
"""
Explanation: We can run through both files at once by giving both file names to grep:
End of explanation
"""
!time grep -oE '\w{{2,}}' romeo.txt women.txt \
| tr '[:upper:]' '[:lower:]' \
| sort \
| uniq -c \
| grep "and" \
| tail -10
"""
Explanation: Do those numbers look right?
Let's take a closer look at what's going on.
End of explanation
"""
!time ls *.txt \
| parallel -j+0 "grep -oE '\w{2,}' {} | tr '[:upper:]' '[:lower:]' >> all-words.txt"
!time sort all-words.txt \
| uniq -c \
| sort -rn \
| head -10
"""
Explanation: Aha! When grep is given more than one file, it not-so-helpfully prefixes each match with the name of the file it came from, so the same word from different files no longer counts as identical. That's why the counts are off.
There is an option to tell grep not to do that (-h suppresses the filename prefix). But let's try something completely different.
First, let's break out the data-parallel piece. Which part of this pipeline is completely data parallel?
End of explanation
"""
!unzip -d many-texts texts.zip
!ls -l many-texts | wc -l
!wc many-texts/*.txt
!time ls many-texts/*.txt \
| parallel --eta -j+0 "grep -oE '\w{2,}' {} | tr '[:upper:]' '[:lower:]' >> many-texts/all-words.txt"
!time sort many-texts/all-words.txt \
| uniq -c \
| sort -rn \
| head -10
"""
Explanation: See what we did there? We parallelized the data, then brought it back together for the rest of the pipeline.
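The split-then-merge pattern generalizes beyond the shell. Here is a pure-Python sketch of the same word count, with a process pool standing in for GNU Parallel (the paths are whatever .txt files you point it at):

```python
# map: count words of 2+ characters per file (the data-parallel stage),
# then reduce: merge the partial counts and rank them
import re
from collections import Counter
from multiprocessing import Pool

def count_words(path):
    with open(path) as f:
        # mirrors grep -oE '\w{2,}' piped through tr '[:upper:]' '[:lower:]'
        return Counter(w.lower() for w in re.findall(r"\w{2,}", f.read()))

def top_words(paths, n=10):
    with Pool() as pool:              # one worker per core, like -j+0
        per_file = pool.map(count_words, paths)
    total = sum(per_file, Counter())  # bring the pieces back together
    return total.most_common(n)
```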
Let's try it on a much bigger dataset. (Note that we're unzipping into a new directory with unzip -d.)
End of explanation
"""
|
arongdari/sparse-graph-prior | notebooks/CompareSparseMixtureGraph.ipynb | mit | mdest = '../result/random_network/mixture/'
sdest = '../result/random_network/sparse/'
m_f = '%d_%.2f_%.2f_%.2f_%.2f_%.2f_%.2f.pkl'
s_f = '%d_%.2f_%.2f_%.2f.pkl'
colors = cm.rainbow(np.linspace(0, 1, 7))
np.random.shuffle(colors)
colors = itertools.cycle(colors)
def degree_dist_list(graph, ddist):
_ddict = nx.degree(graph)
_ddist = defaultdict(int)
for k, v in _ddict.items():
_ddist[v] += 1
for k, v in _ddist.items():
ddist[k].append(v)
del _ddict, _ddist
return ddist
def avg_degree_dist(path_list):
""" Compute average degree distribution over repeated simulations
"""
ddist = defaultdict(list)
for path in path_list:
sample = pickle.load(open(path, 'rb'))
G = sparse_to_networkx(sample[0])
degree_dist_list(G, ddist)
del G, sample
avg_dist = dict()
for k, v in ddist.items():
avg_dist[k] = sum(ddist[k])/len(ddist[k])
return avg_dist
def scatter(_ddist, path, color=None):
""" print scatter plot of given degree distribution dictionary
"""
plt.scatter(list(_ddist.keys()), list(_ddist.values()), label=os.path.basename(path), color=color)
def degree_dist(graph):
""" Compute digree distribution of given graph
"""
_ddict = nx.degree(graph)
_ddist = defaultdict(int)
for k, v in _ddict.items():
_ddist[v] += 1
return _ddist
"""
Explanation: Properties:
* The number of nodes increases as tau decreases (minimum > 0).
* The number of nodes increases as alpha increases.
* The expected number of dense nodes is: -alpha / sigma * tau ^ sigma
Basic parameter configurations (sparse alpha, sigma, tau + dense alpha, sigma, tau):
* 100, 0.5, 1, 100, -1, 0.1 (generate the largest graph among basic configurations)
* 100, 0.5, 1, 100, -1, 1
Additional parameter configurations
* 100, 0, 1 + 100, -1, 1
* 100, 0.5, 0.1 + 100, -1, 0.1
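The expected-dense-node formula above can be sanity-checked directly; a quick sketch for the two basic dense configurations (alpha = 100, sigma = -1, tau in {0.1, 1}):

```python
# expected number of nodes in the dense part, per the formula stated above:
# E[#dense nodes] = -alpha / sigma * tau ** sigma
def expected_dense_nodes(alpha, sigma, tau):
    return -alpha / sigma * tau ** sigma

print(expected_dense_nodes(100, -1, 0.1))  # ~1000, hence the largest graphs
print(expected_dense_nodes(100, -1, 1))    # 100
```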
End of explanation
"""
alpha = 100
sigma = 0.5
tau = 1
d_alpha = 100
d_sigma = -1
d_taus = [0.1, 1]
n_samples = 5
plt.figure(figsize=(12, 8))
for d_tau in d_taus:
path_list = [os.path.join(mdest, m_f % (i, alpha, sigma, tau, d_alpha, d_sigma, d_tau)) for i in range(n_samples)]
ddist = avg_degree_dist(path_list)
scatter(ddist, path_list[0], next(colors))
alphas = [100, 150]
for alpha in alphas:
path_list = list()
for i in range(n_samples):
path_list.append(os.path.join(sdest, s_f % (i, alpha, sigma, tau)))
ddist = avg_degree_dist(path_list)
scatter(ddist, path_list[0], next(colors))
ax = plt.subplot()
ax.set_xscale("log")
ax.set_yscale("log")
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylabel('# of nodes')
plt.xlabel('Node degree')
plt.ylim(0.5); plt.xlim(0.5); plt.show()
"""
Explanation: Comparison between sparse and mixed graphs
End of explanation
"""
sigmas = [0, 0.5, 0.9]
alpha = 100
tau = 1
d_alpha = 100
d_sigma = -1
d_tau = 1
plt.figure(figsize=(12, 8))
for sigma in sigmas:
path_list = [os.path.join(mdest, m_f % (i, alpha, sigma, tau, d_alpha, d_sigma, d_tau)) for i in range(n_samples)]
ddist = avg_degree_dist(path_list)
scatter(ddist, path_list[0], next(colors))
ax = plt.subplot()
ax.set_xscale("log")
ax.set_yscale("log")
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylabel('# of nodes')
plt.xlabel('Node degree')
plt.ylim(0.5); plt.xlim(0.5); plt.show()
"""
Explanation: Varying sigma in the sparse part of the mixed graph
End of explanation
"""
alpha = 100
sigma = 0.5
taus = [0.1, 0.5, 1]
d_alpha = 100
d_sigma = -1
d_tau = 1
plt.figure(figsize=(12, 8))
for tau in taus:
path_list = [os.path.join(mdest, m_f % (i, alpha, sigma, tau, d_alpha, d_sigma, d_tau)) for i in range(n_samples)]
ddist = avg_degree_dist(path_list)
scatter(ddist, path_list[0], next(colors))
ax = plt.subplot()
ax.set_xscale("log")
ax.set_yscale("log")
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylabel('# of nodes')
plt.xlabel('Node degree')
plt.ylim(0.5); plt.xlim(0.5); plt.show()
"""
Explanation: Varying tau in the sparse part of the mixed graph
End of explanation
"""
alpha = 100
sigma = 0.5
tau = 1
d_alpha = 100
d_tau = 1
sigmas = [-0.5, -1, -2]
plt.figure(figsize=(12, 8))
for d_sigma in sigmas:
path_list = [os.path.join(mdest, m_f % (i, alpha, sigma, tau, d_alpha, d_sigma, d_tau)) for i in range(n_samples)]
ddist = avg_degree_dist(path_list)
scatter(ddist, path_list[0], next(colors))
ax = plt.subplot()
ax.set_xscale("log")
ax.set_yscale("log")
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylabel('# of nodes')
plt.xlabel('Node degree')
plt.ylim(0.5); plt.xlim(0.5); plt.show()
"""
Explanation: Varying sigma in the dense part of the mixed graph
End of explanation
"""
alpha = 100
sigma = 0.5
tau = 1
d_alpha = 100
d_sigma = -1
taus = [0.1, 0.5, 1]
plt.figure(figsize=(12, 8))
for d_tau in taus:
path_list = [os.path.join(mdest, m_f % (i, alpha, sigma, tau, d_alpha, d_sigma, d_tau)) for i in range(n_samples)]
ddist = avg_degree_dist(path_list)
scatter(ddist, path_list[0], next(colors))
ax = plt.subplot()
ax.set_xscale("log")
ax.set_yscale("log")
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylabel('# of nodes')
plt.xlabel('Node degree')
plt.ylim(0.5); plt.xlim(0.5); plt.show()
"""
Explanation: Varying tau in the dense part of the mixed graph
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/feature_engineering/solutions/5_tftransform_taxifare.ipynb | apache-2.0 | # Run the chown command to change the ownership
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Install the necessary dependencies
!pip install tensorflow==2.3.0 tensorflow-transform==0.24.0 apache-beam[gcp]==2.24.0
"""
Explanation: Exploring tf.transform
Learning Objectives
1. Preprocess data and engineer new features using TfTransform.
1. Create and deploy Apache Beam pipeline.
1. Use processed data to train taxifare model locally then serve a prediction.
Introduction
While Pandas is fine for experimenting, for operationalizing your workflow it is better to do preprocessing in Apache Beam, which also helps if you need to preprocess data in flight, since Apache Beam supports streaming. In this lab we will pull data from BigQuery and then use tf.transform within an Apache Beam pipeline to process the data.
Only specific combinations of TensorFlow/Beam are supported by tf.transform so make sure to get a combo that works. In this lab we will be using:
* TFT 0.24.0
* TF 2.3.0
* Apache Beam [GCP] 2.24.0
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
End of explanation
"""
!pip install --user google-cloud-bigquery==1.25.0
!pip download tensorflow-transform==0.24.0 --no-deps
"""
Explanation: NOTE: You may ignore specific incompatibility errors and warnings. These components and issues do not impact your ability to complete the lab.
Download .whl file for tensorflow-transform. We will pass this file to Beam Pipeline Options so it is installed on the DataFlow workers
End of explanation
"""
%%bash
# Output installed packages in requirements format.
pip freeze | grep -e 'flow\|beam'
# Import data processing libraries
import tensorflow as tf
import tensorflow_transform as tft
# Python shutil module enables us to operate with file objects easily and without diving into file objects a lot.
import shutil
# Show the currently installed version of TensorFlow
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-example-labs'
PROJECT = 'project-id'
REGION = 'us-central1'
# The OS module in python provides functions for interacting with the operating system.
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
# gcloud config set - set a Cloud SDK property
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
# Create bucket
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
"""
Explanation: <b>Restart the kernel</b> (click on the reload button above).
End of explanation
"""
# Import Google BigQuery API client library
from google.cloud import bigquery
def create_query(phase, EVERY_N):
"""Creates a query with the proper splits.
Args:
phase: int, 1=train, 2=valid.
EVERY_N: int, take an example EVERY_N rows.
Returns:
Query string with the proper splits.
"""
base_query = """
WITH daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek)
SELECT
(tolls_amount + fare_amount) AS fare_amount,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count AS passengers,
'notneeded' AS key
FROM
`nyc-tlc.yellow.trips`, daynames
WHERE
trip_distance > 0 AND fare_amount > 0
"""
if EVERY_N is None:
if phase < 2:
# training
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST
(pickup_datetime AS STRING), 4)) < 2""".format(base_query)
else:
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING), 4)) = {1}""".format(base_query, phase)
else:
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING)), {1})) = {2}""".format(
base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
"""
Explanation: Input source: BigQuery
Get data from BigQuery but defer the majority of filtering etc. to Beam.
Note that the dayofweek column now contains strings.
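The ABS(MOD(FARM_FINGERPRINT(...))) pattern in the query is worth a note: hashing a stable key makes the train/eval split deterministic and repeatable. The same idea in plain Python (a sketch; md5 stands in for BigQuery's FARM_FINGERPRINT):

```python
# deterministic split membership by hashing a stable key, mirroring
# ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N)) = phase
import hashlib

def in_split(key, every_n, phase):
    """True if the row with this key belongs to the given split."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % every_n == phase
```

Each key lands in exactly one of the every_n buckets, so repeated runs always produce the same partitions.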
End of explanation
"""
df_valid = bigquery.Client().query(query).to_dataframe()
# `head()` function is used to get the first n rows of dataframe
display(df_valid.head())
# `describe()` is use to get the statistical summary of the DataFrame
df_valid.describe()
"""
Explanation: Let's pull this query down into a Pandas DataFrame and take a look at some of the statistics.
End of explanation
"""
# Import a module named `datetime` to work with dates as date objects.
import datetime
# Import data processing libraries and modules
import tensorflow as tf
import apache_beam as beam
import tensorflow_transform as tft
import tensorflow_metadata as tfmd
from tensorflow_transform.beam import impl as beam_impl
def is_valid(inputs):
"""Check to make sure the inputs are valid.
Args:
inputs: dict, dictionary of TableRow data from BigQuery.
Returns:
True if the inputs are valid and False if they are not.
"""
try:
pickup_longitude = inputs['pickuplon']
dropoff_longitude = inputs['dropofflon']
pickup_latitude = inputs['pickuplat']
dropoff_latitude = inputs['dropofflat']
hourofday = inputs['hourofday']
dayofweek = inputs['dayofweek']
passenger_count = inputs['passengers']
fare_amount = inputs['fare_amount']
return fare_amount >= 2.5 and pickup_longitude > -78 \
and pickup_longitude < -70 and dropoff_longitude > -78 \
and dropoff_longitude < -70 and pickup_latitude > 37 \
and pickup_latitude < 45 and dropoff_latitude > 37 \
and dropoff_latitude < 45 and passenger_count > 0
except:
return False
def preprocess_tft(inputs):
"""Preprocess the features and add engineered features with tf transform.
Args:
        inputs: dict, dictionary of TableRow data from BigQuery.
Returns:
Dictionary of preprocessed data after scaling and feature engineering.
"""
import datetime
print(inputs)
result = {}
result['fare_amount'] = tf.identity(inputs['fare_amount'])
# build a vocabulary
# TODO 1
result['dayofweek'] = tft.string_to_int(inputs['dayofweek'])
result['hourofday'] = tf.identity(inputs['hourofday']) # pass through
# scaling numeric values
# TODO 2
result['pickuplon'] = (tft.scale_to_0_1(inputs['pickuplon']))
result['pickuplat'] = (tft.scale_to_0_1(inputs['pickuplat']))
result['dropofflon'] = (tft.scale_to_0_1(inputs['dropofflon']))
result['dropofflat'] = (tft.scale_to_0_1(inputs['dropofflat']))
result['passengers'] = tf.cast(inputs['passengers'], tf.float32) # a cast
# arbitrary TF func
result['key'] = tf.as_string(tf.ones_like(inputs['passengers']))
# engineered features
latdiff = inputs['pickuplat'] - inputs['dropofflat']
londiff = inputs['pickuplon'] - inputs['dropofflon']
# Scale our engineered features latdiff and londiff between 0 and 1
# TODO 3
result['latdiff'] = tft.scale_to_0_1(latdiff)
result['londiff'] = tft.scale_to_0_1(londiff)
dist = tf.sqrt(latdiff * latdiff + londiff * londiff)
result['euclidean'] = tft.scale_to_0_1(dist)
return result
def preprocess(in_test_mode):
"""Sets up preprocess pipeline.
Args:
in_test_mode: bool, False to launch DataFlow job, True to run locally.
"""
import os
import os.path
import tempfile
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam import tft_beam_io
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-taxi-features' + '-'
job_name += datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EVERY_N = 100000
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/taxifare/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
EVERY_N = 10000
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'num_workers': 1,
'max_num_workers': 1,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'direct_num_workers': 1,
'extra_packages': ['tensorflow_transform-0.24.0-py3-none-any.whl']
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# Set up raw data metadata
raw_data_schema = {
colname: dataset_schema.ColumnSchema(
tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'dayofweek,key'.split(',')
}
raw_data_schema.update({
colname: dataset_schema.ColumnSchema(
tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in
'fare_amount,pickuplon,pickuplat,dropofflon,dropofflat'.split(',')
})
raw_data_schema.update({
colname: dataset_schema.ColumnSchema(
tf.int64, [], dataset_schema.FixedColumnRepresentation())
for colname in 'hourofday,passengers'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(
dataset_schema.Schema(raw_data_schema))
# Run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# Save the raw data metadata
(raw_data_metadata |
'WriteInputMetadata' >> tft_beam_io.WriteMetadata(
os.path.join(
OUTPUT_DIR, 'metadata/rawdata_metadata'), pipeline=p))
# Read training data from bigquery and filter rows
raw_data = (p | 'train_read' >> beam.io.Read(
beam.io.BigQuerySource(
query=create_query(1, EVERY_N),
use_standard_sql=True)) |
'train_filter' >> beam.Filter(is_valid))
raw_dataset = (raw_data, raw_data_metadata)
# Analyze and transform training data
# TODO 4
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(
preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
# Save transformed train data to disk in efficient tfrecord format
transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'), file_name_suffix='.gz',
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# Read eval data from bigquery and filter rows
# TODO 5
raw_test_data = (p | 'eval_read' >> beam.io.Read(
beam.io.BigQuerySource(
query=create_query(2, EVERY_N),
use_standard_sql=True)) | 'eval_filter' >> beam.Filter(
is_valid))
raw_test_dataset = (raw_test_data, raw_data_metadata)
# Transform eval data
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset()
)
transformed_test_data, _ = transformed_test_dataset
# Save transformed train data to disk in efficient tfrecord format
(transformed_test_data |
'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'), file_name_suffix='.gz',
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema)))
# Save transformation function to disk for use at serving time
(transform_fn |
'WriteTransformFn' >> transform_fn_io.WriteTransformFn(
os.path.join(OUTPUT_DIR, 'metadata')))
# Change to True to run locally
preprocess(in_test_mode=False)
"""
Explanation: Create ML dataset using tf.transform and Dataflow
Let's use Cloud Dataflow to read in the BigQuery data and write it out as TFRecord files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.
transformed_data is of type PCollection.
End of explanation
"""
%%bash
# ls preproc_tft
# `ls` command show the full list or content of your directory
gsutil ls gs://${BUCKET}/taxifare/preproc_tft/
"""
Explanation: This will take 10-15 minutes. You cannot go on in this lab until your DataFlow job has successfully completed.
Note: The above command may fail with the error Workflow failed. Causes: There was a problem refreshing your credentials. In that case, simply re-run the command.
End of explanation
"""
%%bash
# Train our taxifare model locally
rm -r ./taxi_trained
export PYTHONPATH=${PYTHONPATH}:$PWD
python3 -m tft_trainer.task \
--train_data_path="gs://${BUCKET}/taxifare/preproc_tft/train*" \
--eval_data_path="gs://${BUCKET}/taxifare/preproc_tft/eval*" \
--output_dir=./taxi_trained \
# `ls` command show the full list or content of your directory
!ls $PWD/taxi_trained/export/exporter
"""
Explanation: Train off preprocessed data
Now that we have our data ready and verified it is in the correct location we can train our taxifare model locally.
End of explanation
"""
%%writefile /tmp/test.json
{"dayofweek":0, "hourofday":17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403, "passengers": 2.0}
%%bash
sudo find "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine" -name '*.pyc' -delete
%%bash
# Serve a prediction with gcloud ai-platform local predict
model_dir=$(ls $PWD/taxi_trained/export/exporter/)
gcloud ai-platform local predict \
--model-dir=./taxi_trained/export/exporter/${model_dir} \
--json-instances=/tmp/test.json
"""
Explanation: Now let's create fake data in JSON format and use it to serve a prediction with gcloud ai-platform local predict
End of explanation
"""
|
marcelomiky/PythonCodes | scikit-learn/scikit-learn-book/Chapter 4 - Advanced Features - Model Selection.ipynb | mit | %pylab inline
import IPython
import sklearn as sk
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
print 'IPython version:', IPython.__version__
print 'numpy version:', np.__version__
print 'scikit-learn version:', sk.__version__
print 'matplotlib version:', matplotlib.__version__
"""
Explanation: Learning Scikit-learn: Machine Learning in Python
IPython Notebook for Chapter 4: Advanced Features - Model Selection
In the previous section we worked on ways to preprocess the data and select the most promising features. As we stated, selecting a good set of features is a crucial step to obtain good results. Now we will focus on another important step: selecting the algorithm parameters, known as hyperparameters to distinguish them from the parameters that are adjusted within the machine learning algorithm. Many machine learning algorithms include hyperparameters (from now on we will simply call them parameters) that guide certain aspects of the underlying method and have great impact on the results. In this section we will review some methods to help us obtain the best parameter configuration, a process known as model selection.
We will look back at the text-classification problem we addressed in Chapter 2, Supervised Learning. In that example, we compounded a TF-IDF vectorizer alongside a multinomial Naïve Bayes (NB) algorithm to classify a set of newsgroup messages into a discrete number of categories. The MultinomialNB algorithm has one important parameter, named alpha, that adjusts the smoothing. We initially used the class with its default parameter values (alpha = 1.0) and obtained an accuracy of 0.89. But when we set alpha to 0.01, we obtained a noticeable accuracy improvement to 0.92. Clearly, the configuration of the alpha parameter has great impact on the performance of the algorithm. How can we be sure 0.01 is the best value? Perhaps if we try other possible values, we could still obtain better results.
Start by importing numpy, scikit-learn, and pyplot, the Python libraries we will be using in this chapter. Show the versions we will be using (in case you have problems running the notebooks).
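To see why alpha matters, recall that MultinomialNB estimates per-class word probabilities with additive (Laplace/Lidstone) smoothing. A quick sketch of that estimate (the counts and vocabulary size below are made-up numbers, not from our dataset):

```python
# additive smoothing of a per-class word probability:
# P(word | class) = (count + alpha) / (total + alpha * vocab_size)
def smoothed_prob(word_count, total_count, vocab_size, alpha):
    return (word_count + alpha) / (total_count + alpha * vocab_size)

# without smoothing, an unseen word gets zero probability, which would
# zero out the whole document's likelihood for that class...
print(smoothed_prob(0, 1000, 5000, 0.0))
# ...while smoothing assigns it a small nonzero probability
print(smoothed_prob(0, 1000, 5000, 1.0))   # ~0.000167
```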
End of explanation
"""
from sklearn.datasets import fetch_20newsgroups
news = fetch_20newsgroups(subset='all')
n_samples = 3000
X = news.data[:n_samples]
y = news.target[:n_samples]
"""
Explanation: Let's start again with our text-classification problem, but for now we will only use a reduced number of instances. We will work only with 3,000 instances.
End of explanation
"""
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
def get_stop_words():
result = set()
for line in open('data/stopwords_en.txt', 'r').readlines():
result.add(line.strip())
return result
stop_words = get_stop_words()
clf = Pipeline([
('vect', TfidfVectorizer(
stop_words=stop_words,
token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
)),
('nb', MultinomialNB(alpha=0.01)),
])
"""
Explanation: Then import the set of stop words and create a pipeline that compounds the TF-IDF vectorizer and the Naïve Bayes algorithms (recall that we had a stopwords_en.txt file with a list of stop words).
End of explanation
"""
from sklearn.cross_validation import cross_val_score, KFold
from scipy.stats import sem
def evaluate_cross_validation(clf, X, y, K):
    # create a K-fold cross-validation iterator
cv = KFold(len(y), K, shuffle=True, random_state=0)
# by default the score used is the one returned by score method of the estimator (accuracy)
scores = cross_val_score(clf, X, y, cv=cv)
print scores
print ("Mean score: {0:.3f} (+/-{1:.3f})").format(
np.mean(scores), sem(scores))
evaluate_cross_validation(clf, X, y, 3)
"""
Explanation: If we evaluate our algorithm with a three-fold cross-validation, we obtain a mean score of around 0.81.
End of explanation
"""
def calc_params(X, y, clf, param_values, param_name, K):
# initialize training and testing scores with zeros
train_scores = np.zeros(len(param_values))
test_scores = np.zeros(len(param_values))
# iterate over the different parameter values
for i, param_value in enumerate(param_values):
print param_name, ' = ', param_value
# set classifier parameters
clf.set_params(**{param_name:param_value})
# initialize the K scores obtained for each fold
k_train_scores = np.zeros(K)
k_test_scores = np.zeros(K)
# create KFold cross validation
cv = KFold(n_samples, K, shuffle=True, random_state=0)
# iterate over the K folds
for j, (train, test) in enumerate(cv):
# fit the classifier in the corresponding fold
# and obtain the corresponding accuracy scores on train and test sets
clf.fit([X[k] for k in train], y[train])
k_train_scores[j] = clf.score([X[k] for k in train], y[train])
k_test_scores[j] = clf.score([X[k] for k in test], y[test])
# store the mean of the K fold scores
train_scores[i] = np.mean(k_train_scores)
test_scores[i] = np.mean(k_test_scores)
# plot the training and testing scores in a log scale
plt.semilogx(param_values, train_scores, alpha=0.4, lw=2, c='b')
plt.semilogx(param_values, test_scores, alpha=0.4, lw=2, c='g')
plt.xlabel(param_name + " values")
plt.ylabel("Mean cross validation accuracy")
# return the training and testing scores on each parameter value
return train_scores, test_scores
"""
Explanation: It looks like we should train the algorithm with a list of different parameter values and keep the parameter value that achieves the best results. Let's implement a helper function to do that. This function will train the algorithm with a list of values, each time obtaining an accuracy score calculated by performing k-fold cross-validation
on the training instances. After that, it will plot the training and testing scores as a function of the parameter values.
End of explanation
"""
alphas = np.logspace(-7, 0, 8)
print alphas
train_scores, test_scores = calc_params(X, y, clf, alphas, 'nb__alpha', 3)
"""
Explanation: Let's call this function; we will use numpy's logspace function to generate a list of alpha values spaced evenly on a log scale.
End of explanation
"""
print 'training scores: ', train_scores
print 'testing scores: ', test_scores
"""
Explanation: As expected, the training accuracy is always greater than the testing accuracy. The best results are obtained with an alpha value of 0.1 (accuracy of 0.81):
End of explanation
"""
from sklearn.svm import SVC
clf = Pipeline([
('vect', TfidfVectorizer(
stop_words=stop_words,
token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
)),
('svc', SVC()),
])
gammas = np.logspace(-2, 1, 4)
train_scores, test_scores = calc_params(X, y, clf, gammas, 'svc__gamma', 3)
print 'training scores: ', train_scores
print 'testing scores: ', test_scores
"""
Explanation: We created a very useful function to graph and obtain the best parameter value for a classifier. Let's use it to adjust another classifier that uses a Support Vector Machine (SVM) instead of MultinomialNB:
End of explanation
"""
from sklearn.grid_search import GridSearchCV
parameters = {
'svc__gamma': np.logspace(-2, 1, 4),
'svc__C': np.logspace(-1, 1, 3),
}
clf = Pipeline([
('vect', TfidfVectorizer(
stop_words=stop_words,
token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
)),
('svc', SVC()),
])
gs = GridSearchCV(clf, parameters, verbose=2, refit=False, cv=3)
"""
Explanation: For gamma < 1 we have underfitting. For gamma > 1 we have overfitting. So here, the best result is for gamma = 1, where we obtain a training accuracy of 0.999 and a testing accuracy of about 0.75.
Grid Search
If you take a closer look at the SVC class constructor parameters, we have other parameters, apart from gamma, that may also affect classifier performance. If we only adjust the gamma value, we implicitly state that the optimal C value is 1.0 (the default value, which we did not explicitly set). Perhaps we could obtain better results with a new combination of C and gamma values. This opens a new degree of complexity; we should try all the parameter combinations and keep the best one.
With GridSearchCV, we can specify a grid of any number of parameters and parameter values to traverse. It will train the classifier for each combination and obtain a cross-validation accuracy to evaluate each one.
End of explanation
"""
%time _ = gs.fit(X, y)
gs.best_params_, gs.best_score_
"""
Explanation: Let's execute our grid search and print the best parameter values and scores.
End of explanation
"""
from sklearn.externals import joblib
from sklearn.cross_validation import ShuffleSplit
import os
def persist_cv_splits(X, y, K=3, name='data', suffix="_cv_%03d.pkl"):
"""Dump K folds to filesystem."""
cv_split_filenames = []
# create KFold cross validation
cv = KFold(n_samples, K, shuffle=True, random_state=0)
# iterate over the K folds
for i, (train, test) in enumerate(cv):
cv_fold = ([X[k] for k in train], y[train], [X[k] for k in test], y[test])
cv_split_filename = name + suffix % i
cv_split_filename = os.path.abspath(cv_split_filename)
joblib.dump(cv_fold, cv_split_filename)
cv_split_filenames.append(cv_split_filename)
return cv_split_filenames
cv_filenames = persist_cv_splits(X, y, name='news')
"""
Explanation: With the grid search we obtained a better combination of C and gamma (values 10.0 and 0.10 respectively), reaching a three-fold cross-validation accuracy of 0.828, much better than the best value (0.76) we obtained in the previous experiment by adjusting only gamma and keeping C at its default of 1.0.
We could continue trying to improve the results by also adjusting the vectorizer parameters in the grid search.
Parallelizing
Grid search calculation grows exponentially with each parameter and its possible values we want to tune. We could reduce our response time if we calculate each of the combinations in parallel instead of sequentially, as we have done. In our previous example, we had four different values for gamma and three different values for C, summing up 12 parameter combinations. Additionally, we also needed to train each combination three times (in a three-fold cross-validation), so we summed up
36 trainings and evaluations. We could try to run these 36 tasks in parallel, since the tasks are independent.
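Before wiring up any parallel machinery, the shape of the computation can be sketched with Python 3's standard-library concurrent.futures (a thread pool here for brevity, where a process pool would use separate cores; evaluate_fold is a made-up stand-in for fitting and scoring one fold):

```python
# fan out every independent (parameter setting, fold) pair as its own task,
# then average the fold scores per setting and keep the best
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def evaluate_fold(params, fold):
    # placeholder score that peaks at C=10, gamma=0.1, for illustration only
    return 1.0 / (1.0 + abs(params["C"] - 10.0) + abs(params["gamma"] - 0.1))

def parallel_grid(param_list, n_folds=3):
    tasks = list(product(param_list, range(n_folds)))
    with ThreadPoolExecutor() as ex:
        scores = list(ex.map(lambda t: evaluate_fold(*t), tasks))
    # average the fold scores for each parameter setting
    means = {}
    for (params, _), score in zip(tasks, scores):
        means.setdefault(tuple(sorted(params.items())), []).append(score)
    return max(means, key=lambda k: sum(means[k]) / len(means[k]))
```

With four gamma values, three C values, and three folds this fans out the same 36 independent tasks counted above.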
Most modern computers have multiple cores that can be used to run tasks in parallel. We also have a very useful tool within IPython, called IPython parallel, that allows us to run independent tasks in parallel, each task on a different core of our machine. Let's do that with our text classifier example.
First we will declare a function that will persist all the K folds for the cross validation in different files. These files will be loaded by a process that will execute the corresponding fold:
End of explanation
"""
def compute_evaluation(cv_split_filename, clf, params):
# All module imports should be executed in the worker namespace
from sklearn.externals import joblib
# load the fold training and testing partitions from the filesystem
X_train, y_train, X_test, y_test = joblib.load(
cv_split_filename, mmap_mode='c')
clf.set_params(**params)
clf.fit(X_train, y_train)
test_score = clf.score(X_test, y_test)
return test_score
"""
Explanation: The following function loads a particular fold, fits the classifier with the specified parameter set, and returns the test score. It will be called by each of the parallel processes:
End of explanation
"""
from sklearn.grid_search import ParameterGrid
def parallel_grid_search(lb_view, clf, cv_split_filenames, param_grid):
all_tasks = []
all_parameters = list(ParameterGrid(param_grid))
# iterate over parameter combinations
for i, params in enumerate(all_parameters):
task_for_params = []
# iterate over the K folds
for j, cv_split_filename in enumerate(cv_split_filenames):
t = lb_view.apply(
compute_evaluation, cv_split_filename, clf, params)
task_for_params.append(t)
all_tasks.append(task_for_params)
return all_parameters, all_tasks
from sklearn.svm import SVC
from IPython.parallel import Client
client = Client()
lb_view = client.load_balanced_view()
all_parameters, all_tasks = parallel_grid_search(
lb_view, clf, cv_filenames, parameters)
def print_progress(tasks):
progress = np.mean([task.ready() for task_group in tasks
for task in task_group])
print "Tasks completed: {0}%".format(100 * progress)
print_progress(all_tasks)
def find_bests(all_parameters, all_tasks, n_top=5):
"""Compute the mean score of the completed tasks"""
mean_scores = []
for param, task_group in zip(all_parameters, all_tasks):
scores = [t.get() for t in task_group if t.ready()]
if len(scores) == 0:
continue
mean_scores.append((np.mean(scores), param))
return sorted(mean_scores, reverse=True)[:n_top]
print find_bests(all_parameters, all_tasks)
"""
Explanation: This function executes the grid search in parallel. For each parameter combination (returned by the ParameterGrid iterator), it iterates over the K folds and submits a task to compute the evaluation. It returns the parameter combinations along with the task list:
End of explanation
"""
|
rishuatgithub/MLPy | nlp/4. Naive Machine Translation + LSH.ipynb | apache-2.0 | en_set = set(en_vec.vocab)
fr_set = set(fr_vec.vocab)
en_embeddings_subset = {}
fr_embeddings_subset = {}
french_words = set(en_fr_train.values())
for en_word in en_fr_train.keys():
fr_word = en_fr_train[en_word]
if fr_word in fr_set and en_word in en_set:
en_embeddings_subset[en_word] = en_vec[en_word]
fr_embeddings_subset[fr_word] = fr_vec[fr_word]
for en_word in en_fr_test.keys():
fr_word = en_fr_test[en_word]
if fr_word in fr_set and en_word in en_set:
en_embeddings_subset[en_word] = en_vec[en_word]
fr_embeddings_subset[fr_word] = fr_vec[fr_word]
pickle.dump(en_embeddings_subset, open('en_embeddings.p','wb'))
pickle.dump(fr_embeddings_subset, open('fr_embeddings.p','wb'))
"""
Explanation: Creating a subset of the embeddings
End of explanation
"""
en_embeddings_subset = pickle.load(open("en_embeddings.p", "rb"))
fr_embeddings_subset = pickle.load(open("fr_embeddings.p", "rb"))
en_embeddings_subset['the'].size
"""
Explanation: Load the embedding subset
End of explanation
"""
def get_matrices(en_fr, french_vecs, english_vecs):
'''
Generate X and Y matrices of aligned English-French embeddings
'''
X_l = []
y_l = []
english_set = english_vecs.keys()
french_set = french_vecs.keys()
french_words = set(en_fr.values())
for eng, fr in en_fr.items():
if fr in french_set and eng in english_set:
eng_vec = english_vecs[eng]
fr_vec = french_vecs[fr]
X_l.append(eng_vec)
y_l.append(fr_vec)
X = np.vstack(X_l)
y = np.vstack(y_l)
return X,y
X_train, y_train = get_matrices(en_fr_train, fr_embeddings_subset, en_embeddings_subset)
X_train.shape
"""
Explanation: Generate embedding matrices for the English-to-French transformation
End of explanation
"""
def compute_loss(X, Y, R):
'''
Compute the loss using the Frobenius norm.
Inputs:
X: a matrix of dimension (m,n) where the rows are the English embeddings.
Y: a matrix of dimension (m,n) where the rows correspond to the French embeddings.
R: a matrix of dimension (n,n) - transformation matrix from English to French vector space embeddings.
Outputs:
L: a scalar - the value of the loss function for given X, Y and R.
'''
m = X.shape[0]
diff = np.dot(X,R) - Y
diff_sqrd = diff ** 2
sum_diff_sqrd = np.sum(diff_sqrd)
loss = sum_diff_sqrd / m
return loss
def compute_gradient(X, Y, R):
'''
Inputs:
X: a matrix of dimension (m,n) where the rows are the English embeddings.
Y: a matrix of dimension (m,n) where the rows correspond to the French embeddings.
R: a matrix of dimension (n,n) - transformation matrix from English to French vector space embeddings.
Outputs:
g: a matrix of dimension (n,n) - gradient of the loss function L for given X, Y and R.
'''
# m is the number of rows in X
m = X.shape[0]
# gradient is X^T(XR - Y) * 2/m
gradient = np.dot(X.transpose(),np.dot(X,R)-Y)*(2/m)
return gradient
def align_embeddings(X, Y, train_steps=100, learning_rate=0.0003):
'''
Inputs:
X: a matrix of dimension (m,n) where the rows are the English embeddings.
Y: a matrix of dimension (m,n) where the rows correspond to the French embeddings.
train_steps: positive int - the number of gradient descent steps to perform.
learning_rate: positive float - the step size for each gradient descent update.
Outputs:
R: a matrix of dimension (n,n) - the projection matrix that minimizes the F norm ||X R -Y||^2
'''
np.random.seed(129)
# the number of columns in X is the number of dimensions for a word vector (e.g. 300)
# R is a square matrix with length equal to the number of dimensions in the word embedding
R = np.random.rand(X.shape[1], X.shape[1])
for i in range(train_steps):
if i % 25 == 0:
print(f"loss at iteration {i} is: {compute_loss(X, Y, R):.4f}")
# use the function that you defined to compute the gradient
gradient = compute_gradient(X,Y,R)
# update R by subtracting the learning rate times gradient
R -= learning_rate * gradient
return R
np.random.seed(129)
m = 10
n = 5
X = np.random.rand(m, n)
Y = np.random.rand(m, n) * .1
R = align_embeddings(X, Y)
R_train = align_embeddings(X_train, y_train, train_steps=400, learning_rate=0.8)
"""
Explanation: Translation as linear transformation of embeddings
Step 1: Computing the loss
The loss function will be the squared Frobenius norm of the difference between the matrix $\mathbf{Y}$ and its approximation $\mathbf{XR}$, divided by the number of training examples $m$.
Its formula is: $$ L(X, Y, R)=\frac{1}{m}\sum_{i=1}^{m} \sum_{j=1}^{n}\left( a_{i j} \right)^{2}$$
where $a_{i j}$ is value in $i$th row and $j$th column of the matrix $\mathbf{XR}-\mathbf{Y}$.
Instructions:
- complete the compute_loss() function
- Compute the approximation of Y by matrix multiplying X and R
- Compute difference XR - Y
- Compute the squared Frobenius norm of the difference and divide it by $m$.
End of explanation
"""
def cosine_similarity(A,B):
'''
Returns the cosine similarity between vectors A and B
'''
d = np.dot(A,B)
norm_a = np.sqrt(np.dot(A,A))
norm_b = np.sqrt(np.dot(B,B))
cos = d / (norm_a * norm_b)
return cos
def nearest_neighbor(v, candidates, k=1):
"""
Input:
- v, the vector you are going find the nearest neighbor for
- candidates: a set of vectors where we will find the neighbors
- k: top k nearest neighbors to find
Output:
- k_idx: the indices of the top k closest vectors in sorted form
"""
similarity_l = []
# for each candidate vector...
for row in candidates:
# get the cosine similarity
cos_similarity = cosine_similarity(v,row)
# append the similarity to the list
similarity_l.append(cos_similarity)
# sort the similarity list and get the indices of the sorted list
sorted_ids = np.argsort(similarity_l)
# get the indices of the k most similar candidate vectors
k_idx = sorted_ids[-k:]
return k_idx
v = np.array([1, 0, 1])
candidates = np.array([[1, 0, 5], [-2, 5, 3], [2, 0, 1], [6, -9, 5], [9, 9, 9]])
print(candidates[nearest_neighbor(v, candidates, 3)])
def test_vocabulary(X, Y, R):
'''
Input:
X: a matrix where the rows are the English embeddings.
Y: a matrix where the rows correspond to the French embeddings.
R: the transform matrix which translates word embeddings from
English to French word vector space.
Output:
accuracy: fraction of English words whose nearest French neighbor is the correct translation
'''
# The prediction is X times R
pred = np.dot(X,R)
# initialize the number correct to zero
num_correct = 0
# loop through each row in pred (each transformed embedding)
for i in range(len(pred)):
# get the index of the nearest neighbor of pred at row 'i'; also pass in the candidates in Y
pred_idx = nearest_neighbor(pred[i],Y)
# if the index of the nearest neighbor equals the row index i
if pred_idx == i:
# increment the number correct by 1.
num_correct += 1
# accuracy is the number correct divided by the number of rows in 'pred' (also number of rows in X)
accuracy = num_correct / len(pred)
return accuracy
X_val, Y_val = get_matrices(en_fr_test, fr_embeddings_subset, en_embeddings_subset)
acc = test_vocabulary(X_val, Y_val, R_train)
print(f"accuracy on test set is {acc:.3f}")
"""
Explanation: Nearest Neighbour and Test
End of explanation
"""
|
rh01/ml-course-4-cluster-and-retrieval | 0_nearest-neighbors-features-and-metrics_blank.ipynb | agpl-3.0 | import graphlab
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
"""
Explanation: Nearest Neighbors
author: 申恒恒
When exploring a large set of documents -- such as Wikipedia, news articles, StackOverflow, etc. -- it can be useful to get a list of related material. To find relevant documents you typically
* Decide on a notion of similarity
* Find the documents that are most similar
In the assignment you will
* Gain intuition for different notions of similarity and practice finding similar documents.
* Explore the tradeoffs with representing documents using raw word counts and TF-IDF
* Explore the behavior of different distance metrics by looking at the Wikipedia pages most similar to President Obama’s page.
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Import necessary packages
As usual we need to first import the Python packages that we will need.
End of explanation
"""
wiki = graphlab.SFrame('people_wiki.gl')
wiki
wiki['URI'][1]
"""
Explanation: Load Wikipedia dataset
We will be using the same dataset of Wikipedia pages that we used in the Machine Learning Foundations course (Course 1). Each element of the dataset consists of a link to the wikipedia article, the name of the person, and the text of the article (in lowercase).
End of explanation
"""
wiki['word_count'] = graphlab.text_analytics.count_words(wiki['text'])
wiki
"""
Explanation: Extract word count vectors
As we have seen in Course 1, we can extract word count vectors using a GraphLab utility function. We add this as a column in wiki.
End of explanation
"""
model = graphlab.nearest_neighbors.create(wiki, label='name', features=['word_count'],
method='brute_force', distance='euclidean')
"""
Explanation: Find nearest neighbors
Let's start by finding the nearest neighbors of the Barack Obama page using the word count vectors to represent the articles and Euclidean distance to measure distance. For this, again will we use a GraphLab Create implementation of nearest neighbor search.
End of explanation
"""
model.query(wiki[wiki['name']=='Barack Obama'], label='name', k=10)
"""
Explanation: Let's look at the top 10 nearest neighbors by performing the following query:
End of explanation
"""
wiki[wiki['name'] == 'Barack Obama'][['word_count']].stack('word_count', new_column_name=['word','count']).sort('count',ascending=False)
def top_words(name):
"""
Get a table of the most frequent words in the given person's wikipedia page.
"""
row = wiki[wiki['name'] == name]
word_count_table = row[['word_count']].stack('word_count', new_column_name=['word','count'])
return word_count_table.sort('count', ascending=False)
obama_words = top_words('Barack Obama')
obama_words
barrio_words = top_words('Francisco Barrio')
barrio_words
"""
Explanation: All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians.
Francisco Barrio is a Mexican politician, and a former governor of Chihuahua.
Walter Mondale and Don Bonker are Democrats who made their career in late 1970s.
Wynn Normington Hugh-Jones is a former British diplomat and Liberal Party official.
Andy Anstett is a former politician in Manitoba, Canada.
Nearest neighbors with raw word counts got some things right, showing all politicians in the query result, but missed finer, important details.
For instance, let's find out why Francisco Barrio was considered a close neighbor of Obama. To do this, let's look at the most frequently used words in each of Barack Obama and Francisco Barrio's pages:
End of explanation
"""
combined_words = obama_words.join(barrio_words, on='word')
combined_words
"""
Explanation: Let's extract the list of most frequent words that appear in both Obama's and Barrio's documents. We've so far sorted all words from Obama and Barrio's articles by their word frequencies. We will now use a dataframe operation known as join. The join operation is very useful when it comes to playing around with data: it lets you combine the content of two tables using a shared column (in this case, the word column). See the documentation for more details.
For instance, running
obama_words.join(barrio_words, on='word')
will extract the rows from both tables that correspond to the common words.
End of explanation
"""
combined_words = combined_words.rename({'count':'Obama', 'count.1':'Barrio'})
combined_words
"""
Explanation: Since both tables contained the column named count, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to tell which one is for which. By inspection, we see that the first column (count) is for Obama and the second (count.1) for Barrio.
End of explanation
"""
combined_words.sort('Obama', ascending=False)
"""
Explanation: Note. The join operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget ascending=False to display largest counts first.
End of explanation
"""
obama_words = top_words('Barack Obama')
common_words = list(obama_words[:5]['word'])
type(common_words)
# common_words
set(common_words)
common_words = list(top_words('Barack Obama')[:5]['word']) # Barack Obama 5 largest words
print common_words
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = set(word_count_vector.keys()) #using keys() method and using set() method convert list to set
# return True if common_words is a subset of unique_words
# return False otherwise
return set(common_words).issubset(unique_words) # YOUR CODE HERE
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
print wiki['has_top_words']
sum(wiki['has_top_words'])
"""
Explanation: Quiz Question. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
Answer. 56066
Hint:
* Refer to the previous paragraph for finding the words that appear in both articles. Sort the common words by their frequencies in Obama's article and take the largest five.
* Each word count vector is a Python dictionary. For each word count vector in SFrame, you'd have to check if the set of the 5 common words is a subset of the keys of the word count vector. Complete the function has_top_words to accomplish the task.
- Convert the list of top 5 words into set using the syntax
set(common_words)
where common_words is a Python list. See this link if you're curious about Python sets.
- Extract the list of keys of the word count dictionary by calling the keys() method.
- Convert the list of keys into a set as well.
- Use issubset() method to check if all 5 words are among the keys.
* Now apply the has_top_words function on every row of the SFrame.
* Compute the sum of the result column to obtain the number of articles containing all the 5 top words.
End of explanation
"""
print 'Output from your function:', has_top_words(wiki[32]['word_count'])
print 'Correct output: True'
print 'Also check the length of unique_words. It should be 167'
print 'Output from your function:', has_top_words(wiki[33]['word_count'])
print 'Correct output: False'
print 'Also check the length of unique_words. It should be 188'
type(wiki[33])
"""
Explanation: Checkpoint. Check your has_top_words function on two random articles:
End of explanation
"""
a = graphlab.SFrame(wiki[wiki['name']=='Barack Obama']['word_count'])[0]['X1']
b = graphlab.SFrame(wiki[wiki['name']=='George W. Bush']['word_count'])[0]['X1']
c = graphlab.SFrame(wiki[wiki['name']=='Joe Biden']['word_count'])[0]['X1']
graphlab.toolkits.distances.euclidean(a,b) # Obama and Bush
graphlab.toolkits.distances.euclidean(a,c) # Obama and Joe
graphlab.toolkits.distances.euclidean(b,c) # Bush and Joe
"""
Explanation: Quiz Question. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance?
Answer. Biden and Bush
Hint: To compute the Euclidean distance between two dictionaries, use graphlab.toolkits.distances.euclidean. Refer to this link for usage.
End of explanation
"""
bush_words = top_words('George W. Bush')
obama_words.join(bush_words, on='word') \
.rename({'count' : 'Obama', 'count.1' : 'Bush'}) \
.sort('Obama', ascending = False)
obama_words.join(bush_words, on='word') \
.rename({'count' : 'Obama', 'count.1' : 'Bush'}) \
.sort('Obama', ascending = False)['word'][:10]
"""
Explanation: Quiz Question. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words,
find the 10 words that show up most often in Obama's page.
Answer. 'the', 'in', 'and', 'of', 'to', 'his', 'act', 'he', 'a', 'as'
End of explanation
"""
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['word_count'])
model_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='euclidean')
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
"""
Explanation: Note. Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words.
TF-IDF to the rescue
Much of the perceived commonalities between Obama and Barrio were due to occurrences of extremely frequent words, such as "the", "and", and "his". So nearest neighbors is recommending plausible results sometimes for the wrong reasons.
To retrieve articles that are more relevant, we should focus more on rare words that don't happen in every article. TF-IDF (term frequency–inverse document frequency) is a feature representation that penalizes words that are too common. Let's use GraphLab Create's implementation of TF-IDF and repeat the search for the 10 nearest neighbors of Barack Obama:
End of explanation
"""
def top_words_tf_idf(name):
row = wiki[wiki['name'] == name]
word_count_table = row[['tf_idf']].stack('tf_idf', new_column_name=['word','weight'])
return word_count_table.sort('weight', ascending=False)
obama_tf_idf = top_words_tf_idf('Barack Obama')
obama_tf_idf
schiliro_tf_idf = top_words_tf_idf('Phil Schiliro')
schiliro_tf_idf
"""
Explanation: Let's determine whether this list makes sense.
* With the notable exception of Roland Grossenbacher, the other eight are all American politicians who are contemporaries of Barack Obama.
* Phil Schiliro, Jesse Lee, Samantha Power, and Eric Stern worked for Obama.
Clearly, the results are more plausible with the use of TF-IDF. Let's take a look at the word vectors for Obama's and Schiliro's pages. Notice that the TF-IDF representation assigns a weight to each word. This weight captures the relative importance of that word in the document. Let us sort the words in Obama's article by their TF-IDF weights; we do the same for Schiliro's article as well.
End of explanation
"""
combination2_words = obama_tf_idf.join(schiliro_tf_idf,on='word').sort('weight',ascending=False)
combination2_words
combination2_words = combination2_words.rename({'weight':'Obama', 'weight.1':'Schiliro'})
combination2_words
combination2_words = combination2_words.sort('Obama', ascending=False)
combination2_words
"""
Explanation: Using the join operation we learned earlier, try your hand at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document.
End of explanation
"""
common_words = set(list(combination2_words[:5]['word']))
common_words
# common_words = common_words
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = set(word_count_vector.keys())
# return True if common_words is a subset of unique_words
# return False otherwise
return common_words.issubset(unique_words) # YOUR CODE HERE
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
print wiki['has_top_words'] # YOUR CODE HERE
sum(wiki['has_top_words'])
"""
Explanation: The first 10 words should say: Obama, law, democratic, Senate, presidential, president, policy, states, office, 2011.
Quiz Question. Among the words that appear in both Barack Obama and Phil Schiliro, take the 5 that have largest weights in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
Answer. 14
End of explanation
"""
obama = wiki[wiki['name'] == 'Barack Obama']['tf_idf'][0]
biden = wiki[wiki['name'] == 'Joe Biden']['tf_idf'][0]
graphlab.toolkits.distances.euclidean(obama, biden)
"""
Explanation: Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words.
Choosing metrics
You may wonder why Joe Biden, Obama's running mate in two presidential elections, is missing from the query results of model_tf_idf. Let's find out why. First, compute the distance between TF-IDF features of Obama and Biden.
Quiz Question. Compute the Euclidean distance between TF-IDF features of Obama and Biden. Hint: When using a Boolean filter in SFrame/SArray, take the index 0 to access the first match.
Answer. 123.297
End of explanation
"""
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
"""
Explanation: The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability:
End of explanation
"""
def compute_length(row):
return len(row['text'])
wiki['length'] = wiki.apply(compute_length)
nearest_neighbors_euclidean = model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_euclidean = nearest_neighbors_euclidean.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_euclidean.sort('rank')
"""
Explanation: But one may wonder, is Biden's article that different from Obama's, more so than, say, Schiliro's? It turns out that, when we compute nearest neighbors using the Euclidean distances, we unwittingly favor short articles over long ones. Let us compute the length of each Wikipedia document, and examine the document lengths for the 100 nearest neighbors to Obama's page.
End of explanation
"""
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([1000, 5500, 0, 0.004])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
"""
Explanation: To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents.
End of explanation
"""
model2_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
nearest_neighbors_cosine = model2_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_cosine = nearest_neighbors_cosine.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_cosine.sort('rank')
"""
Explanation: Relative to the rest of Wikipedia, the nearest neighbors of Obama are overwhelmingly short, most of them shorter than 2000 words. This bias towards short articles is not appropriate in this application, as there is no real reason to favor short articles over long ones (they are all Wikipedia articles, after all). Many Wikipedia articles are 2500 words or more, and both Obama and Biden are over 2500 words long.
Note: Both word-count features and TF-IDF are proportional to word frequencies. While TF-IDF penalizes very common words, longer articles tend to have longer TF-IDF vectors simply because they have more words in them.
To remove this bias, we turn to cosine distances:
$$
d(\mathbf{x},\mathbf{y}) = 1 - \frac{\mathbf{x}^T\mathbf{y}}{\|\mathbf{x}\| \|\mathbf{y}\|}
$$
Cosine distances let us compare word distributions of two articles of varying lengths.
Let us train a new nearest neighbor model, this time with cosine distances. We then repeat the search for Obama's 100 nearest neighbors.
End of explanation
"""
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.hist(nearest_neighbors_cosine['length'], 50, color='b', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (cosine)', zorder=11, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([1000, 5500, 0, 0.004])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
"""
Explanation: From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor! We also see Hillary Clinton on the list. This list looks even more plausible as nearest neighbors of Barack Obama.
Let's make a plot to better visualize the effect of having used cosine distance in place of Euclidean on our TF-IDF vectors.
End of explanation
"""
sf = graphlab.SFrame({'text': ['democratic governments control law in response to popular act']})
sf['word_count'] = graphlab.text_analytics.count_words(sf['text'])
encoder = graphlab.feature_engineering.TFIDF(features=['word_count'], output_column_prefix='tf_idf')
encoder.fit(wiki)
sf = encoder.transform(sf)
sf
"""
Explanation: Indeed, the 100 nearest neighbors using cosine distance provide a sampling across the range of document lengths, rather than just short articles like Euclidean distance provided.
Moral of the story: In deciding the features and distance measures, check if they produce results that make sense for your particular application.
Problem with cosine distances: tweets vs. long articles
Happily ever after? Not so fast. Cosine distances ignore all document lengths, which may be great in certain situations but not in others. For instance, consider the following (admittedly contrived) example.
+--------------------------------------------------------+
| +--------+ |
| One that shall not be named | Follow | |
| @username +--------+ |
| |
| Democratic governments control law in response to |
| popular act. |
| |
| 8:05 AM - 16 May 2016 |
| |
| Reply Retweet (1,332) Like (300) |
| |
+--------------------------------------------------------+
How similar is this tweet to Barack Obama's Wikipedia article? Let's transform the tweet into TF-IDF features, using an encoder fit to the Wikipedia dataset. (That is, let's treat this tweet as an article in our Wikipedia dataset and see what happens.)
End of explanation
"""
tweet_tf_idf = sf[0]['tf_idf.word_count']
tweet_tf_idf
obama = wiki[wiki['name'] == 'Barack Obama']
obama
"""
Explanation: Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences.
End of explanation
"""
obama_tf_idf = obama[0]['tf_idf']
graphlab.toolkits.distances.cosine(obama_tf_idf, tweet_tf_idf)
"""
Explanation: Now, compute the cosine distance between the Barack Obama article and this tweet:
End of explanation
"""
model2_tf_idf.query(obama, label='name', k=10)
"""
Explanation: Let's compare this distance to the distance between the Barack Obama article and all of its Wikipedia 10 nearest neighbors:
End of explanation
"""
|
jrbourbeau/cr-composition | notebooks/fraction-distribution.ipynb | mit | %load_ext watermark
%watermark -u -d -v -p numpy,matplotlib,scipy,pandas,sklearn,mlxtend
"""
Explanation: <a id='top'> </a>
Author: James Bourbeau
End of explanation
"""
from __future__ import division, print_function
from collections import defaultdict
import itertools
import numpy as np
from scipy import optimize
import pandas as pd
import matplotlib.pyplot as plt
import seaborn.apionly as sns
import pyprind
import multiprocessing as mp
from sklearn.model_selection import ShuffleSplit
import composition as comp
import composition.analysis.plotting as plotting
# color_dict allows for a consistent color-coding for each composition
color_dict = comp.analysis.get_color_dict()
%matplotlib inline
"""
Explanation: Cosmic-ray light component analysis
Table of contents
Define analysis free parameters
Data preprocessing
Fitting random forest
Fraction correctly identified
Spectrum
Unfolding
Feature importance
End of explanation
"""
comp_class = True
comp_list = ['light', 'heavy'] if comp_class else ['P', 'He', 'O', 'Fe']
"""
Explanation: Define analysis free parameters
[ back to top ]
Whether or not to train on 'light' and 'heavy' composition classes, or the individual compositions
End of explanation
"""
pipeline_str = 'GBDT'
pipeline = comp.get_pipeline(pipeline_str)
"""
Explanation: Get composition classifier pipeline
End of explanation
"""
energybins = comp.analysis.get_energybins()
"""
Explanation: Define energy binning for this analysis
End of explanation
"""
sim_train, sim_test = comp.preprocess_sim(comp_class=comp_class, return_energy=True)
splitter = ShuffleSplit(n_splits=1, test_size=.5, random_state=2)
for train_index, verification_index in splitter.split(sim_train.X):
sim_verification = sim_train[verification_index]
sim_train = sim_train[train_index]
print('Number of training events = {}'.format(len(sim_train)))
print('Number of verification events = {}'.format(len(sim_verification)))
data = comp.preprocess_data(comp_class=comp_class, return_energy=True)
"""
Explanation: Data preprocessing
[ back to top ]
1. Load simulation/data dataframe and apply specified quality cuts
2. Extract desired features from dataframe
3. Get separate testing and training datasets
4. Feature transformation
End of explanation
"""
pipeline.fit(sim_train.X, sim_train.y)
fracs = defaultdict(list)
frac_array = np.arange(0.0, 1.1, 0.1)
for light_frac in frac_array:
print('On light_frac = {}'.format(light_frac))
for i in range(1000):
heavy_frac = 1 - light_frac
light_dataset = comp.analysis.get_random_subsample(sim_verification, frac=light_frac, composition='light')
heavy_dataset = comp.analysis.get_random_subsample(sim_verification, frac=heavy_frac, composition='heavy')
combined_dataset = light_dataset + heavy_dataset
pred = pipeline.predict(combined_dataset.X)
num_pred_light = np.sum(combined_dataset.le.inverse_transform(pred) == 'light')
frac_light = num_pred_light/len(combined_dataset)
fracs[light_frac].append(frac_light)
with sns.color_palette('viridis', len(frac_array)):
fig, ax = plt.subplots()
for light_frac in frac_array:
sns.distplot(fracs[light_frac], bins=np.linspace(0.0, 1.0, 100),
kde=False, label=str(light_frac), hist_kws={'alpha': 0.75})
ax.set_xlabel('Reconstructed fraction of light events')
ax.set_ylabel('Counts')
ax.set_xlim([0.1, 0.9])
ax.grid()
leg = plt.legend(title='Injected fraction\nof light events',
loc='center left', frameon=False,
bbox_to_anchor=(1.0, # horizontal
0.5),# vertical
ncol=1, fancybox=False)
plt.savefig('/home/jbourbeau/public_html/figures/light-frac-reconstructed-hists.png')
plt.show()
fig, ax = plt.subplots()
medians = []
errs = []
for light_frac in frac_array:
medians.append(np.median(fracs[light_frac]))
errs.append(np.std(fracs[light_frac]))
print(medians)
print(errs)
ax.errorbar(frac_array, medians, yerr=errs, marker='.', ls='None')
ax.set_xlabel('Injected fraction of light events')
ax.set_ylabel('Reconstructed fraction\nof light events')
ax.set_xlim([0, 1])
ax.set_ylim([0, 1])
ax.set_aspect(1.0)
ax.grid()
plt.savefig('/home/jbourbeau/public_html/figures/light-frac-reconstructed-medians.png')
plt.show()
n_samples = 10000
injected_frac = np.random.ranf(n_samples)
reco_frac = []
bar = pyprind.ProgBar(n_samples)
for light_frac in injected_frac:
heavy_frac = 1 - light_frac
light_dataset = comp.analysis.get_random_subsample(sim_verification, frac=light_frac, composition='light')
heavy_dataset = comp.analysis.get_random_subsample(sim_verification, frac=heavy_frac, composition='heavy')
combined_dataset = light_dataset + heavy_dataset
pred = pipeline.predict(combined_dataset.X)
num_pred_light = np.sum(combined_dataset.le.inverse_transform(pred) == 'light')
frac_light = num_pred_light/len(combined_dataset)
reco_frac.append(frac_light)
bar.update()
def get_reco_frac(dataset, injected_light_fraction, pipeline):
heavy_frac = 1 - injected_light_fraction
light_dataset = comp.analysis.get_random_subsample(dataset, frac=injected_light_fraction, composition='light')
heavy_dataset = comp.analysis.get_random_subsample(dataset, frac=heavy_frac, composition='heavy')
combined_dataset = light_dataset + heavy_dataset
pred = pipeline.predict(combined_dataset.X)
num_pred_light = np.sum(combined_dataset.le.inverse_transform(pred) == 'light')
frac_light = num_pred_light/len(combined_dataset)
return frac_light
get_reco_frac(sim_train, 0.1, pipeline)
pool = mp.Pool(processes=1)
n_samples = 1
injected_frac = np.random.ranf(n_samples)
print('injected_frac = {}'.format(injected_frac))
results = [pool.apply(get_reco_frac, args=(sim_train, x, pipeline)) for x in injected_frac]
print(results)
reco_frac_median = stats.binned_statistic(injected_frac, reco_frac, bins=frac_bins, statistic='median')[0]
frac_midpoints = (frac_bins[1:] + frac_bins[:-1]) / 2
slope, intercept = np.polyfit(frac_midpoints, reco_frac_median, 1)
def linefit_response(x):
return intercept + slope*x
def inverse_response(y):
return (y - intercept) / slope
pred = pipeline.predict(data.X)
light_mask = sim_train.le.inverse_transform(pred) == 'light'
frac_light = np.sum(light_mask)/pred.shape[0]
print('light fraction = {}'.format(frac_light))
fig, ax = plt.subplots()
frac_bins = np.linspace(0.0, 1.0, 75)
plotting.histogram_2D(injected_frac, reco_frac, bins=frac_bins, ax=ax)
ax.plot(frac_midpoints, linefit_response(frac_midpoints), marker='None',
ls='-', lw=2, color='C1')
ax.axhline(frac_light, marker='None', ls='-.')
ax.axvline(inverse_response(frac_light), marker='None', ls='-.')
print(inverse_response(frac_light))
ax.set_xlabel('Injected fraction of light events')
ax.set_ylabel('Reconstructed fraction of light events')
ax.grid()
plt.savefig('/home/jbourbeau/public_html/figures/light-frac-reconstructed-2d.png')
plt.show()
reco_frac_std = stats.binned_statistic(reco_frac, injected_frac, bins=frac_bins, statistic=np.std)[0]
print(reco_frac_std)
frac_midpoints = (frac_bins[1:] + frac_bins[:-1]) / 2
linefit = lambda x, b: b
x = frac_midpoints[(frac_midpoints > 0.3) & (frac_midpoints < 0.7)]
y = reco_frac_std[(frac_midpoints > 0.3) & (frac_midpoints < 0.7)]
popt, pcov = optimize.curve_fit(linefit, x, y)
intercept = popt[0]
print(intercept)
yfit = linefit(x, intercept)
yfit = np.array([yfit for i in range(len(x))])
fig, ax = plt.subplots()
ax.plot(frac_midpoints, reco_frac_std, marker='.', ls='None', ms=10)
ax.axhline(intercept, marker='None', lw=1, ls=':', color='k')
ax.annotate('{:.4f}'.format(intercept), xy=(0.3, intercept), xytext=(0.4, 0.018),
arrowprops=dict(arrowstyle='-|>', color='black', connectionstyle='arc3,rad=-0.3'), fontsize=8,
bbox=dict(boxstyle='round', fc="white", ec="gray", lw=0.8))
ax.grid()
ax.set_xlabel('Reconstructed fraction of light events')
ax.set_ylabel(r'1$\sigma$ spread in injected fraction')
plt.savefig('/home/jbourbeau/public_html/figures/light-frac-reconstructed-spread.png')
plt.show()
"""
Explanation: Run classifier over training and testing sets to get an idea of the degree of overfitting
End of explanation
"""
|
pramitchoudhary/Experiments | notebook_gallery/other_experiments/build-models/model-selection-and-tuning/current-solutions/TPOT/TPOT-demo.ipynb | unlicense | !sudo pip install deap update_checker tqdm xgboost tpot
import pandas as pd
import numpy as np
import psycopg2
import os
import json
from tpot import TPOTClassifier
from sklearn.metrics import classification_report
conn = psycopg2.connect(
user = os.environ['REDSHIFT_USER']
,password = os.environ['REDSHIFT_PASS']
,port = os.environ['REDSHIFT_PORT']
,host = os.environ['REDSHIFT_HOST']
,database = 'tradesy'
)
query = """
select
purchase_dummy
,shipping_price_ratio
,asking_price
,price_level
,brand_score
,brand_size
,a_over_b
,favorite_count
,has_blurb
,has_image
,seasonal_component
,description_length
,product_category_accessories
,product_category_shoes
,product_category_bags
,product_category_tops
,product_category_dresses
,product_category_weddings
,product_category_bottoms
,product_category_outerwear
,product_category_jeans
,product_category_activewear
,product_category_suiting
,product_category_swim
from saleability_model_v2
limit 50000
"""
df = pd.read_sql(query, conn)
target = 'purchase_dummy'
domain = [col for col in df.columns.values if col != target]
df = df.astype(float)
y_all = df[target].values
X_all = df[domain].values
idx_all = np.random.RandomState(1).permutation(len(y_all))
idx_train = idx_all[:int(.8 * len(y_all))]
idx_test = idx_all[int(.8 * len(y_all)):]
# TRAIN AND TEST DATA
X_train = X_all[idx_train]
y_train = y_all[idx_train]
X_test = X_all[idx_test]
y_test = y_all[idx_test]
"""
Explanation: TPOT uses a genetic algorithm (implemented with the DEAP library) to pick an optimal pipeline for a classification or regression task.
What is a pipeline?
A pipeline is composed of preprocessors (for example, polynomial transformations of features) followed by an estimator.
TPOTBase is the key class. Its parameters:
* population_size: int (default: 100)
The number of pipelines in the genetic algorithm population. Must
be > 0.The more pipelines in the population, the slower TPOT will
run, but it's also more likely to find better pipelines.
* generations: int (default: 100)
The number of generations to run pipeline optimization for. Must
be > 0. The more generations you give TPOT to run, the longer it
takes, but it's also more likely to find better pipelines.
* mutation_rate: float (default: 0.9)
The mutation rate for the genetic programming algorithm in the range
[0.0, 1.0]. This tells the genetic programming algorithm how many
pipelines to apply random changes to every generation. We don't
recommend that you tweak this parameter unless you know what you're
doing.
* crossover_rate: float (default: 0.05)
The crossover rate for the genetic programming algorithm in the
range [0.0, 1.0]. This tells the genetic programming algorithm how
many pipelines to "breed" every generation. We don't recommend that
you tweak this parameter unless you know what you're doing.
* scoring: function or str
Function used to evaluate the quality of a given pipeline for the
problem. By default, balanced class accuracy is used for
classification problems, mean squared error for regression problems.
TPOT assumes that this scoring function should be maximized, i.e.,
higher is better.
Offers the same options as sklearn.cross_validation.cross_val_score:
['accuracy', 'adjusted_rand_score', 'average_precision', 'f1',
'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted',
'precision', 'precision_macro', 'precision_micro', 'precision_samples',
'precision_weighted', 'r2', 'recall', 'recall_macro', 'recall_micro',
'recall_samples', 'recall_weighted', 'roc_auc']
* num_cv_folds: int (default: 3)
The number of folds to evaluate each pipeline over in k-fold
cross-validation during the TPOT pipeline optimization process
* max_time_mins: int (default: None)
How many minutes TPOT has to optimize the pipeline. If not None,
this setting will override the generations parameter.
TPOTClassifier and TPOTRegressor inherit parent class TPOTBase, with modifications of the scoring function.
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
sklearn_model = RandomForestClassifier()
sklearn_model.fit(X_train, y_train)
sklearn_predictions = sklearn_model.predict(X_test)
print(classification_report(y_test, sklearn_predictions))
"""
Explanation: Sklearn model:
End of explanation
"""
tpot_model = TPOTClassifier(generations=3, population_size=10, verbosity=2, max_time_mins=10)
tpot_model.fit(X_train, y_train)
tpot_predictions = tpot_model.predict(X_test)
print(classification_report(y_test, tpot_predictions))
"""
Explanation: TPOT Classifier
End of explanation
"""
tpot_model.export('optimal-saleability-model.py')
!cat optimal-saleability-model.py
"""
Explanation: Export Pseudo Pipeline Code
End of explanation
"""
|
nproctor/phys202-2015-work | assignments/assignment03/NumpyEx01.ipynb | mit | import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
"""
Explanation: Numpy Exercise 1
Imports
End of explanation
"""
def checkerboard(size):
#Create a 2x2 diagonal array
x = np.diag((1.0, 1.0))
#If the size is even, the array is tiled
if size % 2 == 0:
checkers = np.tile(x, (size//2, size//2))
#If the size is odd, the array is tiled larger
else:
board = np.tile(x, ((size + 1)//2, (size + 1)//2))
#The extra row is removed, the matrix is transposed, and the new extra row is removed
remove = np.delete(board, size, 0)
trans = np.transpose(remove)
checkers = np.delete(trans, size, 0)
return checkers
print(checkerboard(4))
a = checkerboard(4)
assert a[0,0]==1.0
assert a.sum()==8.0
assert a.dtype==np.dtype(float)
assert np.all(a[0,0:5:2]==1.0)
assert np.all(a[1,0:5:2]==0.0)
b = checkerboard(5)
assert b[0,0]==1.0
assert b.sum()==13.0
assert np.all(b.ravel()[0:26:2]==1.0)
assert np.all(b.ravel()[1:25:2]==0.0)
"""
Explanation: Checkerboard
Write a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0:
Your function should work for both odd and even size.
The 0,0 element should be 1.0.
The dtype should be float.
End of explanation
"""
va.set_block_size(10)
va.enable()
checkerboard(20)
assert True
"""
Explanation: Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
End of explanation
"""
va.set_block_size(5)
checkerboard(27)
assert True
"""
Explanation: Use vizarray to visualize a checkerboard of size=27 with a block size of 5px.
End of explanation
"""
|
dynaryu/rmtk | rmtk/vulnerability/derivation_fragility/R_mu_T_dispersion/ruiz_garcia_miranda/ruiz-garcia_miranda.ipynb | agpl-3.0 | from rmtk.vulnerability.derivation_fragility.R_mu_T_dispersion.ruiz_garcia_miranda import RGM2007
from rmtk.vulnerability.common import utils
import scipy.stats as stat
%matplotlib inline
"""
Explanation: Ruiz-García and Miranda (2007)
The aim of this procedure is the estimation of the median spectral acceleration value that brings the structure to the attainment of a set of damage states, and the corresponding dispersion. This aim is achieved through the methodology described in Ruiz-García and Miranda (2007), where the inelastic displacement demand of bilinear SDOF systems is related to the elastic displacement by means of the inelastic displacement ratio $C_R$ (the ratio of inelastic displacement to elastic spectral displacement). Estimates of the parameter $C_R$ are provided by Ruiz-García and Miranda in terms of median and dispersion, based on a nonlinear regression analysis of 240 ground motions. The figure below illustrates the relationship of the parameter $C_R$ with the period $T$ of the structure and $R$.
<img src="../../../../../figures/RGM_Cr.jpg" width="400" align="middle">
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
End of explanation
"""
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Vb-droof.csv"
input_spectrum = "../../../../../../rmtk_data/FEMAP965spectrum.txt"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
utils.plot_capacity_curves(capacity_curves)
Sa_ratios = utils.get_spectral_ratios(capacity_curves, input_spectrum)
"""
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual. In case multiple capacity curves are input, a spectral shape also needs to be defined.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
Please also provide a spectral shape using the parameter input_spectrum if multiple capacity curves are used.
End of explanation
"""
idealised_type = "bilinear"
idealised_capacity = utils.idealisation(idealised_type, capacity_curves)
utils.plot_idealised_capacity(idealised_capacity, capacity_curves, idealised_type)
"""
Explanation: Idealise pushover curves
In order to use this methodology, the pushover curves need to be idealised. Please choose an idealised shape using the parameter idealised_type. The only valid option for this methodology is "bilinear". Idealised curves can also be directly provided as input by setting the field Idealised to TRUE in the input file defining the capacity curves.
End of explanation
"""
damage_model_file = "../../../../../../rmtk_data/damage_model_ISD.csv"
damage_model = utils.read_damage_model(damage_model_file)
"""
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below. Currently only interstorey drift damage model type is supported.
End of explanation
"""
montecarlo_samples = 50
fragility_model = RGM2007.calculate_fragility(capacity_curves, idealised_capacity, damage_model, montecarlo_samples, Sa_ratios)
"""
Explanation: Calculate fragility functions
The damage threshold dispersion is calculated and integrated with the record-to-record dispersion through Monte Carlo simulations. Please enter the number of Monte Carlo samples to be performed using the parameter montecarlo_samples in the cell below.
End of explanation
"""
minIML, maxIML = 0.01, 2.00
utils.plot_fragility_model(fragility_model, minIML, maxIML)
"""
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
"""
taxonomy = "RC"
minIML, maxIML = 0.01, 2.00
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
"""
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
"""
cons_model_file = "../../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
"""
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
End of explanation
"""
utils.plot_vulnerability_model(vulnerability_model)
"""
Explanation: Plot vulnerability function
End of explanation
"""
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
"""
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the vulnerability function.
2. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/feature_engineering/labs/3_keras_basic_feat_eng-lab.ipynb | apache-2.0 | !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Install Sklearn
!python3 -m pip install --user sklearn
import os
import tensorflow.keras
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
#from keras.utils import plot_model
print("TensorFlow version: ",tf.version.VERSION)
"""
Explanation: LAB 03: Basic Feature Engineering in Keras
Learning Objectives
Create an input pipeline using tf.data
Engineer features to create categorical, crossed, and numerical feature columns
Introduction
In this lab, we utilize feature engineering to improve the prediction of housing prices using a Keras Sequential Model.
Each learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution for reference.
Start by importing the necessary libraries for this lab.
End of explanation
"""
if not os.path.isdir("../data"):
os.makedirs("../data")
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/housing_pre-proc_toy.csv ../data
!ls -l ../data/
"""
Explanation: Many of the Google Machine Learning Courses Programming Exercises use the California Housing Dataset, which contains data drawn from the 1990 U.S. Census. Our lab dataset has been pre-processed so that there are no missing values.
First, let's download the raw .csv data by copying the data from a cloud storage bucket.
End of explanation
"""
housing_df = pd.read_csv('../data/housing_pre-proc_toy.csv', error_bad_lines=False)
housing_df.head()
"""
Explanation: Now, let's read in the dataset just copied from the cloud storage bucket and create a Pandas dataframe.
End of explanation
"""
housing_df.describe()
"""
Explanation: We can use .describe() to see some summary statistics for the numeric fields in our dataframe. Note, for example, the count row and corresponding columns. The count shows 2500.000000 for all feature columns. Thus, there are no missing values.
End of explanation
"""
train, test = train_test_split(housing_df, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
"""
Explanation: Split the dataset for ML
The dataset we loaded was a single CSV file. We will split this into train, validation, and test sets.
End of explanation
"""
train.to_csv('../data/housing-train.csv', encoding='utf-8', index=False)
val.to_csv('../data/housing-val.csv', encoding='utf-8', index=False)
test.to_csv('../data/housing-test.csv', encoding='utf-8', index=False)
!head ../data/housing*.csv
"""
Explanation: Now, we need to output the split files. We will specifically need the test.csv later for testing. You should see the files appear in the home directory.
End of explanation
"""
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
# TODO 1a -- Your code here
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
"""
Explanation: Lab Task 1: Create an input pipeline using tf.data
Next, we will wrap the dataframes with tf.data. This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train the model.
Here, we create an input pipeline using tf.data. This function is missing two lines. Correct and run the cell.
End of explanation
"""
batch_size = 32
train_ds = df_to_dataset(train)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
"""
Explanation: Next we initialize the training and validation datasets.
End of explanation
"""
# TODO 1b -- Your code here
"""
Explanation: Now that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
End of explanation
"""
# TODO 1c -- Your code here
"""
Explanation: We can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.
Numeric columns
The output of a feature column becomes the input to the model. A numeric is the simplest type of column. It is used to represent real valued features. When using this column, your model will receive the column value from the dataframe unchanged.
In the California housing prices dataset, most columns from the dataframe are numeric. Let' create a variable called numeric_cols to hold only the numerical feature columns.
End of explanation
"""
# Scalar def get_scal(feature):
# TODO 1d -- Your code here
# TODO 1e -- Your code here
"""
Explanation: Scaler function
It is very important for numerical variables to be scaled before they are fed into the neural network. Here we use min-max scaling: we create a function named 'get_scal' which takes the name of a numerical feature and returns a 'minmax' function, to be passed to tf.feature_column.numeric_column() as the normalizer_fn parameter. The 'minmax' function itself takes a value from that feature and returns its scaled value.
Next, we scale the numerical feature columns that we assigned to the variable "numeric cols".
End of explanation
"""
print('Total number of feature columns: ', len(feature_columns))
"""
Explanation: Next, we should validate the total number of feature columns. Compare this number to the number of numeric features you input earlier.
End of explanation
"""
# Model create
feature_layer = tf.keras.layers.DenseFeatures(feature_columns, dtype='float64')
model = tf.keras.Sequential([
feature_layer,
layers.Dense(12, input_dim=8, activation='relu'),
layers.Dense(8, activation='relu'),
layers.Dense(1, activation='linear', name='median_house_value')
])
# Model compile
model.compile(optimizer='adam',
loss='mse',
metrics=['mse'])
# Model Fit
history = model.fit(train_ds,
validation_data=val_ds,
epochs=32)
"""
Explanation: Using the Keras Sequential Model
Next, we will run this cell to compile and fit the Keras Sequential model.
End of explanation
"""
loss, mse = model.evaluate(train_ds)
print("Mean Squared Error", mse)
"""
Explanation: Next we show loss as Mean Squared Error (MSE). Remember that MSE is the most commonly used regression loss function. MSE is the mean of the squared differences between our target variable (here, the median house value) and the predicted values.
End of explanation
"""
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'mse'])
"""
Explanation: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and test (orange) sets.
End of explanation
"""
test_data = pd.read_csv('../data/housing-test.csv')
test_data.describe()
"""
Explanation: Load test data
Next, we read in the test.csv file and validate that there are no null values.
Again, we can use .describe() to see some summary statistics for the numeric fields in our dataframe. The count shows 500.000000 for all feature columns. Thus, there are no missing values.
End of explanation
"""
# TODO 1f -- Your code here
test_predict = test_input_fn(dict(test_data))
"""
Explanation: Now that we have created an input pipeline using tf.data and compiled a Keras Sequential model, we create the input function for the test data and initialize the test_predict variable.
End of explanation
"""
predicted_median_house_value = model.predict(test_predict)
"""
Explanation: Prediction: Linear Regression
Before we begin to feature engineer our feature columns, we should predict the median house value. By predicting the median house value now, we can then compare it with the median house value after feature engineering.
To predict with Keras, you simply call model.predict() and pass in the housing features you want to predict the median_house_value for. Note: We are running the prediction locally.
End of explanation
"""
# Ocean_proximity is INLAND
model.predict({
'longitude': tf.convert_to_tensor([-121.86]),
'latitude': tf.convert_to_tensor([39.78]),
'housing_median_age': tf.convert_to_tensor([12.0]),
'total_rooms': tf.convert_to_tensor([7653.0]),
'total_bedrooms': tf.convert_to_tensor([1578.0]),
'population': tf.convert_to_tensor([3628.0]),
'households': tf.convert_to_tensor([1494.0]),
'median_income': tf.convert_to_tensor([3.0905]),
'ocean_proximity': tf.convert_to_tensor(['INLAND'])
}, steps=1)
# Ocean_proximity is NEAR OCEAN
model.predict({
'longitude': tf.convert_to_tensor([-122.43]),
'latitude': tf.convert_to_tensor([37.63]),
'housing_median_age': tf.convert_to_tensor([34.0]),
'total_rooms': tf.convert_to_tensor([4135.0]),
'total_bedrooms': tf.convert_to_tensor([687.0]),
'population': tf.convert_to_tensor([2154.0]),
'households': tf.convert_to_tensor([742.0]),
'median_income': tf.convert_to_tensor([4.9732]),
'ocean_proximity': tf.convert_to_tensor(['NEAR OCEAN'])
}, steps=1)
"""
Explanation: Next, we run two predictions in separate cells - one where ocean_proximity=INLAND and one where ocean_proximity= NEAR OCEAN.
End of explanation
"""
# TODO 2a -- Your code here
"""
Explanation: The arrays returns a predicted value. What do these numbers mean? Let's compare this value to the test set.
Go to the test.csv you read in a few cells up. Locate the first line and find the median_house_value - which should be 249,000 dollars near the ocean. What value did your model predict for the median_house_value? Was it solid model performance? Let's see if we can improve this a bit with feature engineering!
Lab Task 2: Engineer features to create categorical and numerical features
Now we create a cell that indicates which features will be used in the model.
Note: Be sure to bucketize 'housing_median_age' and ensure that 'ocean_proximity' is one-hot encoded. And, don't forget your numeric values!
End of explanation
"""
# Scalar def get_scal(feature):
def get_scal(feature):
def minmax(x):
mini = train[feature].min()
maxi = train[feature].max()
return (x - mini)/(maxi-mini)
return(minmax)
# All numerical features - scaling
feature_columns = []
for header in numeric_cols:
scal_input_fn = get_scal(header)
feature_columns.append(fc.numeric_column(header,
normalizer_fn=scal_input_fn))
"""
Explanation: Next, we scale the numerical, bucktized, and categorical feature columns that we assigned to the variables in the preceding cell.
End of explanation
"""
# TODO 2b -- Your code here
"""
Explanation: Categorical Feature
In this dataset, 'ocean_proximity' is represented as a string. We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector.
Next, we create a categorical feature using 'ocean_proximity'.
End of explanation
"""
# TODO 2c -- Your code here
"""
Explanation: Bucketized Feature
Often, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider our raw data that represents a home's age. Instead of representing the house age as a numeric column, we could split the home age into several buckets using a bucketized column. Notice the one-hot values below describe which age range each row matches.
Next we create a bucketized column using 'housing_median_age'
End of explanation
"""
# TODO 2d -- Your code here
"""
Explanation: Feature Cross
Combining features into a single feature, better known as a feature cross, enables a model to learn a separate weight for each combination of feature values.
Next, we create a feature cross of 'housing_median_age' and 'ocean_proximity'.
End of explanation
"""
print('Total number of feature columns: ', len(feature_columns))
"""
Explanation: Next, we should validate the total number of feature columns. Compare this number to the number of numeric features you input earlier.
End of explanation
"""
# Model create
feature_layer = tf.keras.layers.DenseFeatures(feature_columns,
dtype='float64')
model = tf.keras.Sequential([
feature_layer,
layers.Dense(12, input_dim=8, activation='relu'),
layers.Dense(8, activation='relu'),
layers.Dense(1, activation='linear', name='median_house_value')
])
# Model compile
model.compile(optimizer='adam',
loss='mse',
metrics=['mse'])
# Model Fit
history = model.fit(train_ds,
validation_data=val_ds,
epochs=32)
"""
Explanation: Next, we will run this cell to compile and fit the Keras Sequential model. This is the same model we ran earlier.
End of explanation
"""
loss, mse = model.evaluate(train_ds)
print("Mean Squared Error", mse)
plot_curves(history, ['loss', 'mse'])
"""
Explanation: Next, we show the loss and mean squared error, then plot the training curves.
End of explanation
"""
# TODO 2e -- Your code here
"""
Explanation: Next we create a prediction model. Note: You may use the same values from the previous prediction.
End of explanation
"""
eroicaleo/LearningPython | HandsOnML/ch02/ex01.ipynb | mit

strat_train_set_copy = strat_train_set.copy()
housing.plot(kind="scatter", x='longitude', y='latitude')
housing.plot(kind="scatter", x='longitude', y='latitude', alpha=0.1)
strat_train_set_copy.plot(kind='scatter', x='longitude', y='latitude', alpha=0.4,
s=strat_train_set_copy.population/100,
c=strat_train_set_copy.median_house_value,
cmap=plt.get_cmap("jet"),
label="population", figsize=(15, 15),
colorbar=True)
plt.legend()
corr_matrix = strat_train_set_copy.corr()
corr_matrix.median_house_value.sort_values(ascending=False)
from pandas.plotting import scatter_matrix
attributes = ["median_house_value", "median_income", "total_rooms",
"housing_median_age"]
scatter_matrix(housing[attributes], figsize=(12, 8))
strat_train_set_copy.plot.scatter(x="median_income", y="median_house_value", alpha=0.1)
"""
Explanation: Visualizing Data
End of explanation
"""
housing["rooms_per_household"] = housing["total_rooms"] / housing["households"]
housing["bedrooms_per_room"] = housing["total_bedrooms"]/housing["total_rooms"]
housing["population_per_household"]=housing["population"]/housing["households"]
housing.info()
corr_matrix = housing.corr()
corr_matrix['median_house_value'].sort_values(ascending=False)
"""
Explanation: Experimenting with Attribute Combinations
End of explanation
"""
housing = strat_train_set.drop('median_house_value', axis=1)
housing_labels = strat_train_set['median_house_value'].copy()
housing.info()
housing.dropna(subset=['total_bedrooms']).info()
housing.drop('total_bedrooms', axis=1).info()
housing['total_bedrooms'].fillna(housing['total_bedrooms'].median()).describe()
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy='median')
housing_num = housing.drop("ocean_proximity", axis=1)
imputer.fit(housing_num)
imputer.statistics_
imputer.strategy
housing.drop("ocean_proximity", axis=1).median().values
X = imputer.transform(housing_num)
X
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
housing_tr.head()
"""
Explanation: 2.5 Prepare the Data for Machine Learning Algorithms
End of explanation
"""
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
housing_cat = housing.ocean_proximity
housing_cat.describe()
housing_cat.value_counts()
housing_cat_encoded = encoder.fit_transform(housing_cat)
housing_cat_encoded
type(housing_cat_encoded)
print(encoder.classes_)
"""
Explanation: Handling Text and Categorical Attributes
End of explanation
"""
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()
print(housing_cat_encoded.shape)
print(type(housing_cat_encoded))
(housing_cat_encoded.reshape(-1, 1)).shape
housing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1, 1))
housing_cat_1hot
type(housing_cat_1hot)
housing_cat_1hot.toarray()
"""
Explanation: One hot encoding
End of explanation
"""
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer(sparse_output=False)
housing_cat_1hot = encoder.fit_transform(housing_cat)
housing_cat_1hot
type(housing_cat_1hot)
"""
Explanation: Combine
End of explanation
"""
rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6
housing.head()
housing.iloc[:, 3]
X = housing.values
# The same columns can also be selected with iloc (add .values for the raw array)
housing.iloc[:, [rooms_ix, bedrooms_ix, households_ix, population_ix]].head()
rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
population_per_household = X[:, population_ix] / X[:, households_ix]
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
np.c_[X, rooms_per_household, population_per_household]
np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
from sklearn.base import BaseEstimator, TransformerMixin
rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6
class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
def __init__(self, add_bedrooms_per_room=False):
self.add_bedrooms_per_room = add_bedrooms_per_room
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
population_per_household = X[:, population_ix] / X[:, households_ix]
if self.add_bedrooms_per_room:
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
else:
return np.c_[X, rooms_per_household, population_per_household]
attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(X)
print(housing_extra_attribs.shape)
print(housing.shape)
# Convert back to data frame -- My way
new_columns = housing.columns.append(
pd.Index(['rooms_per_household', 'population_per_household'])
)
new_columns
housing_extra_attribs_df = pd.DataFrame(housing_extra_attribs, columns=new_columns)
housing_extra_attribs_df.head()
"""
Explanation: Custom Transformers
End of explanation
"""
housing.describe()
housing.total_rooms.describe()
from sklearn.preprocessing import MinMaxScaler
scalar = MinMaxScaler()
scalar.fit(housing["total_rooms"].values.reshape(-1, 1))
pd.DataFrame(scalar.transform(housing["total_rooms"].values.reshape(-1, 1)), columns=["total_rooms"])["total_rooms"].describe()
from sklearn.preprocessing import StandardScaler
scalar = StandardScaler()
scalar.fit(housing["total_rooms"].values.reshape(-1, 1))
pd.DataFrame(scalar.transform(housing["total_rooms"].values.reshape(-1, 1)), columns=["total_rooms"])["total_rooms"].describe()
"""
Explanation: 2.5.4 Feature Scaling
End of explanation
"""
from sklearn.pipeline import Pipeline
num_pipeline = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler())
])
# Verify that the pipelined version
# does the same thing as the separate steps
num_pipeline_stage1 = Pipeline([
('imputer', SimpleImputer(strategy="median")),
])
X_pipeline = num_pipeline_stage1.fit_transform(housing_num)
X = imputer.transform(housing_num)
X_pipeline
np.array_equal(X, X_pipeline)
num_pipeline_stage2 = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
])
Y = attr_adder.fit_transform(X)
Y_pipeline = num_pipeline_stage2.fit_transform(housing_num)
np.array_equal(Y, Y_pipeline)
num_pipeline_stage3 = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler())
])
Z = scalar.fit_transform(Y)
Z.std(), Z.mean()
Z_pipeline = num_pipeline_stage3.fit_transform(housing_num)
np.array_equal(Z, Z_pipeline)
from sklearn.base import BaseEstimator, TransformerMixin
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attribute_names].values
class CustomizedLabelBinarizer(BaseEstimator, TransformerMixin):
def __init__(self, sparse_output=False):
self.encode = LabelBinarizer(sparse_output = sparse_output)
def fit(self, X, y=None):
return self.encode.fit(X)
def transform(self, X):
return self.encode.transform(X)
num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]
num_pipeline = Pipeline([
('selector', DataFrameSelector(num_attribs)),
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler()),
]
)
cat_pipeline = Pipeline([
('selector', DataFrameSelector(cat_attribs)),
('label_binarizer', CustomizedLabelBinarizer()),
]
)
# LabelBinarizer().fit_transform(DataFrameSelector(cat_attribs).fit_transform(housing))
# num_pipeline.fit_transform(housing)
# cat_pipeline.fit_transform(housing)
from sklearn.pipeline import FeatureUnion
full_pipeline = FeatureUnion(transformer_list=[
('num_pipeline', num_pipeline),
('cat_pipeline', cat_pipeline),
])
housing_prepared = full_pipeline.fit_transform(housing)
print(housing_prepared.shape)
housing_prepared
"""
Explanation: 2.5.5 Transformation Pipeline
End of explanation
"""
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
some_data = housing[:5]
some_data
some_labels = housing_labels[:5]
some_labels
some_data_prepared = full_pipeline.transform(some_data)
some_data_prepared
print(f'Prediction:\t{lin_reg.predict(some_data_prepared)}')
print(f'Labels:\t\t{list(some_labels)}')
from sklearn.metrics import mean_squared_error
housing_prediction = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_prediction, housing_labels)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
"""
Explanation: 2.6.1 Training and Evaluating on the Training Set
End of explanation
"""
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_reg.fit(housing_prepared, housing_labels)
tree_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(tree_predictions, housing_labels)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
"""
Explanation: Tree model
End of explanation
"""
from sklearn.model_selection import cross_val_score
scores = cross_val_score(tree_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
rmse_scores = np.sqrt(-scores)
rmse_scores
def display_scores(scores):
print(f'Scores: {scores}')
print(f'Mean: {scores.mean()}')
print(f'STD: {scores.std()}')
display_scores(rmse_scores)
"""
Explanation: 2.6.2 Better Evaluation Using Cross-Validation
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(housing_prepared, housing_labels)
forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
forest_rmse_scores = np.sqrt(-forest_scores)
display_scores(forest_rmse_scores)
forest_prediction = forest_reg.predict(housing_prepared)
forest_rmse = np.sqrt(mean_squared_error(forest_prediction, housing_labels))
forest_rmse
"""
Explanation: Random Forest
End of explanation
"""
# Follow the example here: https://scikit-learn.org/stable/auto_examples/plot_kernel_ridge_regression.html
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
param_grid = [
{'kernel': ['linear'], 'C': [0.1, 1.0, 10.0]},
{'kernel': ['rbf'], 'C': [0.1, 1.0, 10.0], 'gamma': np.logspace(-2, 2, 5)},
]
param_grid = [
{'kernel': ['rbf'], 'C': [0.1, 1.0, 10.0], 'gamma': np.logspace(-2, 2, 5)},
]
svm_reg = SVR()
grid_search = GridSearchCV(svm_reg, param_grid, cv=5, scoring="neg_mean_squared_error")
grid_search.fit(housing_prepared, housing_labels)
cvres = grid_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
print(np.sqrt(-mean_score), params)
"""
Explanation: Ex01
Try a Support Vector Machine regressor (sklearn.svm.SVR), with various hyper‐
parameters such as kernel="linear" (with various values for the C hyperpara‐
meter) or kernel="rbf" (with various values for the C and gamma
hyperparameters). Don’t worry about what these hyperparameters mean for now.
How does the best SVR predictor perform?
End of explanation
"""
# from sklearn.externals import joblib
# joblib.dump(forest_reg, 'forest_reg.pkl')
# forest_reg_loaded = joblib.load('forest_reg.pkl')
# np.sqrt(mean_squared_error(forest_reg_loaded.predict(housing_prepared), housing_labels))
"""
Explanation: From the above results, we can see it doesn't do a very good job.
python
118905.9879650293 {'C': 0.1, 'gamma': 0.01, 'kernel': 'rbf'}
118883.02813861452 {'C': 0.1, 'gamma': 0.1, 'kernel': 'rbf'}
118918.98149265403 {'C': 0.1, 'gamma': 1.0, 'kernel': 'rbf'}
118922.89671485328 {'C': 0.1, 'gamma': 10.0, 'kernel': 'rbf'}
118922.98194535275 {'C': 0.1, 'gamma': 100.0, 'kernel': 'rbf'}
118751.48457167056 {'C': 1.0, 'gamma': 0.01, 'kernel': 'rbf'}
118540.96844123407 {'C': 1.0, 'gamma': 0.1, 'kernel': 'rbf'}
118888.52068712503 {'C': 1.0, 'gamma': 1.0, 'kernel': 'rbf'}
118922.09634831538 {'C': 1.0, 'gamma': 10.0, 'kernel': 'rbf'}
118922.99812017354 {'C': 1.0, 'gamma': 100.0, 'kernel': 'rbf'}
117247.1573626364 {'C': 10.0, 'gamma': 0.01, 'kernel': 'rbf'}
115278.18575098237 {'C': 10.0, 'gamma': 0.1, 'kernel': 'rbf'}
118589.60263664678 {'C': 10.0, 'gamma': 1.0, 'kernel': 'rbf'}
118914.65419428976 {'C': 10.0, 'gamma': 10.0, 'kernel': 'rbf'}
118923.15974801985 {'C': 10.0, 'gamma': 100.0, 'kernel': 'rbf'}
Save models
End of explanation
"""
# from sklearn.model_selection import GridSearchCV
# param_grid = [
# {'n_estimators': [3, 10, 30], 'max_features': [2,4,6,8]},
# {'bootstrap': [False], 'n_estimators': [3, 10, 30], 'max_features': [2,4,6,8]}
# ]
# forest_reg = RandomForestRegressor()
# grid_search = GridSearchCV(forest_reg, param_grid, cv=5, scoring="neg_mean_squared_error")
# grid_search.fit(housing_prepared, housing_labels)
# grid_search.best_params_
# grid_search.best_estimator_
# cvres = grid_search.cv_results_
# for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
# print(np.sqrt(-mean_score), params)
"""
Explanation: 2.7.1 Grid Search
End of explanation
"""
# feature_importances = grid_search.best_estimator_.feature_importances_
# feature_importances
# extra_attribs = ['rooms_per_hhold', 'pop_per_hhold']
# cat_one_hot_attribs = list(encoder.classes_)
# cat_one_hot_attribs
# attributes = num_attribs + extra_attribs + cat_one_hot_attribs
# attributes, len(attributes)
# sorted(zip(feature_importances, attributes), reverse=True)
"""
Explanation: 2.7.4 Analyze the best models and their errors
End of explanation
"""
# final_model = grid_search.best_estimator_
# X_test = strat_test_set.drop("median_house_value", axis=1)
# y_test = strat_test_set.median_house_value.copy()
# X_test_prepared = full_pipeline.transform(X_test)
# final_predictions = final_model.predict(X_test_prepared)
# final_mse = mean_squared_error(final_predictions, y_test)
# final_rmse = np.sqrt(final_mse)
# final_rmse
"""
Explanation: 2.7.5 Evaluate Your System on the Test Set
End of explanation
"""
griffinfoster/fundamentals_of_interferometry | 2_Mathematical_Groundwork/2_3_fourier_series.ipynb | gpl-2.0

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
"""
Explanation: Outline
Glossary
2. Mathematical Groundwork
Previous: 2.2 Important functions
Next: 2.4 The Fourier Transform
Import standard modules:
End of explanation
"""
from IPython.display import HTML
from ipywidgets import interact
HTML('../style/code_toggle.html')
"""
Explanation: Import section specific modules:
End of explanation
"""
def FS_coeffs(x, m, func, T=2.0*np.pi):
"""
Computes Fourier series (FS) coeffs of func
Input:
x = input vector at which to evaluate func
m = the order of the coefficient
func = the function to find the FS of
T = the period of func (defaults to 2 pi)
"""
# Evaluate the integrand
am_int = func(x)*np.exp(-1j*2.0*m*np.pi*x/T)
# Use trapezoidal integration to get the coefficient
am = np.trapz(am_int,x)
return am/T
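A quick sanity check of this integral, assuming only numpy: for $f(x) = \cos(x)$ with period $T = 2\pi$, the complex coefficients should satisfy $a_{1} = a_{-1} = \frac{1}{2}$, with all other coefficients vanishing.

```python
import numpy as np

# Self-contained check of the trapezoidal coefficient integral for
# f(x) = cos(x) with T = 2*pi: a_{+1} = a_{-1} = 1/2, everything else ~0.
x = np.linspace(-np.pi, np.pi, 10001)
T = 2.0 * np.pi

def coeff(m):
    return np.trapz(np.cos(x) * np.exp(-1j * 2.0 * m * np.pi * x / T), x) / T

print(coeff(1).real, coeff(-1).real, abs(coeff(2)))  # ~0.5, ~0.5, ~0
```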
"""
Explanation: 2.3. Fourier Series<a id='math:sec:fourier_series'></a>
While Fourier series are not immediately required for the calculus used in this book, they are closely connected to the Fourier transform, which is an essential tool. Moreover, we have noticed a few times that the principle of harmonic analysis, or harmonic decomposition, is essential and, despite its simplicity, often not well understood. We hence give a very brief summary, without worrying about questions of existence.
2.3.1 Definition <a id='math:sec:fourier_series_definition'></a>
The Fourier series of a function $f: \mathbb{R} \rightarrow \mathbb{R}$ with real coefficients is defined as
<a id='math:eq:3_001'></a><!--\label{math:eq:3_001}-->$$
f_{\rm F}(x) \,=\, \frac{1}{2}c_0+\sum_{m = 1}^{\infty}c_m \,\cos(mx)+\sum_{m = 1}^{\infty}s_m \,\sin(mx),
$$
with the Fourier coefficients $c_m$ and $s_m$
<a id='math:eq:3_002'></a><!--\label{math:eq:3_002}-->$$
\left( c_0 \,=\,\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,dx \right)\
c_m \,=\,\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,\cos(mx)\,dx \qquad m \in \mathbb{N_0}\
s_m \,=\,\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,\sin(mx)\,dx \qquad m \in \mathbb{N_0}.
$$
If $f_{\rm F}$ exists, it is identical to $f$ at all points of continuity. For functions which are periodic with a period of $2\pi$ the Fourier series converges. Hence, for continuous periodic functions with a period of $2\pi$ the Fourier series converges and $f_{\rm F}=f$.
The Fourier series of a function $f: \mathbb{R} \rightarrow \mathbb{R}$ with complex coefficients is defined as
<a id='math:eq:3_003'></a><!--\label{math:eq:3_003}-->$$
f_{\rm IF}(x) \,=\, \sum_{m = -\infty}^{\infty}a_m \,e^{\imath mx},$$
with the Fourier coefficients $a_m$
<a id='math:eq:3_004'></a><!--\label{math:eq:3_004}-->$$
a_m \,=\, \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)e^{-\imath mx}\,dx\qquad \forall m\in\mathbb{Z}.$$
The same convergence criteria apply and one realisation can be transformed to the other. Making use of Euler's formula ➞ <!--\ref{math:sec:eulers_formula}-->, one gets
<a id='math:eq:3_005'></a><!--\label{math:eq:3_005}-->$$
\begin{split}
a_m \,&=\, \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)\,[\cos(mx)-\imath \,\sin(mx)]\,dx\
&=\,\left\{
\begin{array}{lll}
\frac{1}{2} (c_m+\imath s_m) & {\rm for} & m < 0\
\frac{1}{2} c_m & {\rm for} & m = 0\
\frac{1}{2} (c_m-\imath\,s_m) & {\rm for} & m > 0\
\end{array} \right.
\end{split},
$$
and accordingly, $\forall m \in \mathbb{N_0}$,
<a id='math:eq:2_006'></a><!--\label{math:eq:2_006}-->$$
\
\begin{split}
c_m \,&=\, a_m+a_{-m}\
s_m \,&=\, \imath\,(a_m-a_{-m})\
\end{split}.
$$
The concept of the Fourier series can be extended to a base interval of period $T$ instead of $2\pi$ by substituting $x$ with $x = \frac{2\pi}{T}t$.
<a id='math:eq:3_007'></a><!--\label{math:eq:3_007}-->$$
g_{\rm F}(t) = f_{\rm F}(\frac{2\pi}{T}t) \,=\, \frac{1}{2}c_0+\sum_{m = 1}^{\infty}c_m \,\cos(m\frac{2\pi}{T}t)+\sum_{m = 1}^{\infty}s_m \,\sin(m\frac{2\pi}{T}t)
$$
where
<a id='math:eq:3_008'></a><!--\label{math:eq:3_008}-->$$
c_0 \,=\,\frac{1}{\pi}\int_{-\pi}^{\pi}f(\frac{2\pi}{T}t)\,dx \,=\, \frac{2}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}}g(t)\,dt. \
c_m \,=\,\frac{1}{\pi}\int_{-\pi}^{\pi}f(\frac{2\pi}{T}t)\,\cos(m\frac{2\pi}{T}t)\,dx \,=\, \frac{2}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}}g(t)\,\cos(m\frac{2\pi}{T}t)\,dt \qquad m \in \mathbb{N_0}\
s_m \,=\,\frac{1}{\pi}\int_{-\pi}^{\pi}f(\frac{2\pi}{T}t)\,\sin(m\frac{2\pi}{T}t)\,dx \,=\, \frac{2}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}}g(t)\,\sin(m\frac{2\pi}{T}t)\,dt \qquad m \in \mathbb{N_0}\
$$
or
<a id='math:eq:3_009'></a><!--\label{math:eq:3_010}-->$$
g_{\rm IF}(t) = f_{\rm IF}(\frac{2\pi}{T}t) \,=\, \sum_{m = -\infty}^{\infty}a_m \,e^{\imath m\frac{2\pi}{T}t}
$$
<a id='math:eq:3_011'></a><!--\label{math:eq:3_011}-->$$
a_m \,=\, \frac{1}{2\pi}\int_{-\pi}^{\pi}f(\frac{2\pi}{T}t)e^{-\imath m\frac{2\pi}{T}t}\,dx\,=\,\frac{1}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}}g(t)e^{-\imath m\frac{2\pi}{T}t}\,dt \qquad\forall m\in\mathbb{Z}.$$
The series again converges under the same criteria as before, and the relations between the complex and real Fourier coefficients from equation <!--\ref{math:eq:3_005}--> stay valid.
One nice example is the complex, scaled Fourier series of the scaled shah function ➞ <!--\ref{math:sec:shah_function}--> $III_{T^{-1}}(x)\,=III\left(\frac{x}{T}\right)\,=\sum_{l=-\infty}^{+\infty} T \delta\left(x-l T\right)$. Obviously, the period of this function is $T$. The Fourier coefficients (matched to a period of $T$) are calculated as
<a id='math:eq:3_012'></a><!--\label{math:eq:3_012}-->$$
\begin{split}
a_m \,&= \,\frac{1}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}}\left(\sum_{l=-\infty}^{+\infty}T\delta\left(x-l T\right)\right)e^{-\imath m \frac{2\pi}{T} x}\,dx\
&=\,\frac{1}{T} \int_{-\frac{T}{2}}^{\frac{T}{2}} T \delta\left(x\right)e^{-\imath m \frac{2\pi}{T} x}\,dx\
&=\,1
\end{split}
\forall m\in\mathbb{Z}.$$
It follows that
<a id='math:eq:3_013'></a><!--\label{math:eq:3_013}-->$$
\begin{split}
III_{T^{-1}}(x)\,=III\left(\frac{x}{T}\right)\,=\,\sum_{m=-\infty}^{+\infty} e^{\imath m\frac{2\pi}{T} x}
\end{split}
.$$
2.3.2 Example <a id='math:sec:fourier_series_example'></a>
We demonstrate how to decompose a signal into its Fourier series. An easy way to implement this numerically is to use the trapezoidal rule to approximate the integral. Thus we start by defining a function that computes the coefficients of the Fourier series using the complex definition <!--\ref{math:eq:3_011}-->
End of explanation
"""
def FS_sum(x, m, func, period=None):
# If no period is specified use entire domain
if period is None:
period = np.abs(x.max() - x.min())
# Evaluate the coefficients and sum the series
f_F = np.zeros(x.size, dtype=np.complex128)
for i in range(-m,m+1):
am = FS_coeffs(x, i, func, T=period)
f_F += am*np.exp(2.0j*np.pi*i*x/period)
return f_F
"""
Explanation: That should be good enough for our purposes here. Next we create a function to sum the Fourier series.
End of explanation
"""
# define square wave function
def square_wave(x):
I = np.argwhere(np.abs(x) <= 0.5)
tmp = np.zeros(x.size)
tmp[I] = 1.0
return tmp
# Set domain and compute square wave
N = 250
x = np.linspace(-1.0,1.0,N)
# Compute the FS up to order m
m = 10
sw_F = FS_sum(x, m, square_wave, period=2.0)
# Plot result
plt.figure(figsize=(15,5))
plt.plot(x, sw_F.real, 'g', label=r'$ Fourier \ series $')
plt.plot(x, square_wave(x), 'b', label=r'$ Square \ wave $')
plt.title(r"$FS \ decomp \ of \ square \ wave$",fontsize=20)
plt.xlabel(r'$x$',fontsize=18)
plt.ylim(-0.05,1.5)
plt.legend()
# <a id='math:fig:fou_decomp'></a><!--\label{math:fig:fou_decomp}-->
"""
Explanation: Let's see what happens if we decompose a square wave.
End of explanation
"""
def inter_FS(x,m,func,T):
f_F = FS_sum(x, m, func, period=T)
plt.plot(x,f_F.real,'b')
plt.plot(x,func(x),'g')
interact(lambda m,T:inter_FS(x=np.linspace(-1.0,1.0,N),m=m,func=square_wave,T=T),
         m=(5,100,1),T=(0.5,2*np.pi,0.5)) and None
# <a id='math:fig:fou_decomp_inter'></a><!--\label{math:fig:fou_decomp_inter}-->
"""
Explanation: Figure 2.8.1: Approximating a function with a finite number of Fourier series coefficients.
As can be seen from the figure, the Fourier series approximates the square wave. However, at such a low order (i.e. $m = 10$) it doesn't do a very good job. In fact, an infinite number of Fourier series coefficients is required to fully capture a square wave. Below is an interactive demonstration that allows you to vary the parameters of the Fourier series decomposition. Note, in particular, what happens if we make the period too small. Also feel free to apply it to functions other than the square wave (but make sure to adjust the domain accordingly).
End of explanation
"""
ayush29feb/cs231n | assignment2/BatchNormalization.ipynb | mit

# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
"""
Explanation: Batch Normalization
One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].
The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.
The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.
It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.
[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift", ICML 2015.
End of explanation
"""
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print 'Before batch normalization:'
print ' means: ', a.mean(axis=0)
print ' stds: ', a.std(axis=0)
# Means should be close to zero and stds close to one
print 'After batch normalization (gamma=1, beta=0)'
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print ' mean: ', a_norm.mean(axis=0)
print ' std: ', a_norm.std(axis=0)
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print 'After batch normalization (nontrivial gamma, beta)'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in xrange(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
"""
Explanation: Batch normalization: Forward
In the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.
End of explanation
"""
# Gradient check batchnorm backward pass
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
"""
Explanation: Batch Normalization: backward
Now implement the backward pass for batch normalization in the function batchnorm_backward.
To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.
Once you have finished, run the following to numerically check your backward pass.
End of explanation
"""
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print 'dx difference: ', rel_error(dx1, dx2)
print 'dgamma difference: ', rel_error(dgamma1, dgamma2)
print 'dbeta difference: ', rel_error(dbeta1, dbeta2)
print 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2))
"""
Explanation: Batch Normalization: alternative backward
In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.
Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.
NOTE: You can still complete the rest of the assignment if you don't figure this part out, so don't worry too much if you can't get it.
End of explanation
"""
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print 'Running check with reg = ', reg
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
if reg == 0: print
"""
Explanation: Fully Connected Nets with Batch Normalization
Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.
Concretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.
HINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.
End of explanation
"""
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
"""
Explanation: Batchnorm for deep networks
Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
End of explanation
"""
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Run the following to visualize the results from the two networks trained above. You should find that using batch normalization helps the network to converge much faster.
End of explanation
"""
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print 'Running weight scale %d / %d' % (i + 1, len(weight_scales))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gcf().set_size_inches(10, 15)
plt.show()
"""
Explanation: Batch normalization and initialization
We will now run a small experiment to study the interaction of batch normalization and weight initialization.
The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
End of explanation
"""
|
balaaagi/pylearn | presentations/ChennaiPy-ProbabilisticProgrammingWithLea.ipynb | mit | from lea import *
# mandatory die example - initilize a die object
die = Lea.fromVals(1, 2, 3, 4, 5, 6)
# throw the die a few times
die.random(20)
# mandatory coin toss example - states can be strings!
coin = Lea.fromVals('Head', 'Tail')
# toss the coin a few times
coin.random(10)
# how about a Boolean variable - only True or False ?
rain = Lea.boolProb(5, 100)
# how often does it rain in Chennai ?
rain.random(10)
# How about standard statistics ?
die.mean, die.mode, die.var, die.entropy
"""
Explanation: Introduction to Lea
End of explanation
"""
# Lets create two dies
die1 = die.clone()
die2 = die.clone()
# Two throws of the die
dice = die1 + die2
dice
dice.random(10)
dice.mean
dice.mode
print dice.histo()
"""
Explanation: Summary
Random variables are abstract objects. Random samples are drawn transparently with variable.random(times). Standard statistical metrics of the probability distribution are also part of the object.
Coolness - Part 1
End of explanation
"""
## We can create a new distribution, conditioned on our state of knowledge : P(sum | sum <= 6)
conditionalDice = dice.given(dice<=6)
## What is our best guess for the result of the throw ?
conditionalDice.mode
## Conditioning can be done in many ways : suppose we know that the first die came up 3.
dice.given(die1 == 3)
## Conditioning can be done in still more ways : suppose we know that **either** of the two dies came up 3
dice.given((die1 == 3) | (die2 == 3))
"""
Explanation: Summary
Random variables are abstract objects. Methods are available for operating on them algebraically. The probability distributions, methods for drawing random samples, and statistical metrics are transparently propagated.
Coolness - Part 2
"You just threw two dice. Can you guess the result ?"
"Here's a tip : the sum is less than 6"
End of explanation
"""
# Species is a random variable with states "common" and "rare", with probabilities determined by the population. Since
# there are only two states, the species states are, equivalently, "rare" and "not rare". Species can be a Boolean!
rare = Lea.boolProb(1,1000)
# Similarly, pattern is either "present" or "not present". It too is a Boolean, but, its probability distribution
# is conditioned on "rare" or "not rare"
patternIfrare = Lea.boolProb(98, 100)
patternIfNotrare = Lea.boolProb(5, 100)
# Now, lets build the conditional probability table for P(pattern | species)
pattern = Lea.buildCPT((rare, patternIfrare), (~rare, patternIfNotrare))
# Sanity check : do we get what we put in ?
pattern.given(rare)
# Finally, our moment of truth : Bayesian inference - what is P(rare | pattern )?
rare.given(pattern)
# And, now some show off : what is the probability of being rare and having a pattern ?
rare & pattern
# All possible outcomes
Lea.cprod(rare,pattern)
"""
Explanation: Summary
Conditioning, which is the first step towards inference, is done automatically. A wide variety of conditions can be used. P(A | B) translates to a.given(b).
Inference
An entomologist spots what might be a rare subspecies of beetle, due to the pattern on its back. In the rare subspecies, 98% have the pattern, or P(pattern|species = rare) = 0.98. In the common subspecies, 5% have the pattern, or P(pattern | species = common) = 0.05. The rare subspecies accounts for only 0.1% of the population. How likely is the beetle having the pattern to be rare, or what is P(species=rare|pattern) ?
End of explanation
"""
|
bioinformatica-corso/lezioni | laboratorio/lezione16-03dic21/esercizio1-biopython.ipynb | cc0-1.0 | import Bio
"""
Explanation: Biopython - Exercise 1
Take as input a FASTA file of EST (Expressed Sequence Tag) sequences and
split the sequences into two different groups:
A: EST sequences with a coding sequence
B: EST sequences without a coding sequence
For each sequence in group A, extract the coding sequence and translate it into a protein. Write all the resulting proteins to a FASTA-format file.
From group B, discard the sequences shorter than 500 bases and write the remaining ones to a FASTA-format file, sorted by increasing length, after reverse-complementing those whose clone_end is 5'.
Hints
Group A sequences have a FASTA header containing the /cds tag, which gives the 1-based start and end positions of the coding sequence. Group B sequences lack this tag: any sequence without it belongs to group B.
The /clone_end tag of group B sequences indicates whether the sequence runs 5'-3' or 3'-5'. /clone_end=5' indicates a 3'-5' direction, so a reverse&complement must be applied to make the sequence agree with the transcription strand of its gene of origin.
The GenBank ID is given by the /gb tag.
Example FASTA header for a group A sequence:
>gnl|UG|Hs#S3219558 Homo sapiens Rho GTPase activating protein 4 (ARHGAP4), transcript variant 2, mRNA /cds=p(59,2899) /gb=NM_001666 /gi=258613905 /ug=Hs.701324 /len=3285
Example FASTA header for a group B sequence:
>gnl|UG|Hs#S1027289 os53f09.s1 NCI_CGAP_Br2 Homo sapiens cDNA clone IMAGE:1609097 3', mRNA sequence /clone=IMAGE:1609097 /clone_end=3' /gb=AI000530 /gi=3191084 /ug=Hs.701324 /len=458
Output requirements
The FASTA header of the output file with the group A translations must contain only the GenBank ID, while the header of the group B output file must contain, in addition to the GenBank ID, the sequence length.
Install Biopython's Bio package.
Import the Bio package.
End of explanation
"""
from Bio import SeqIO
"""
Explanation: Import the SeqIO package.
End of explanation
"""
fasta_record_iter = SeqIO.parse('./ests.fa', 'fasta')
est_list = list(fasta_record_iter)
est_list
"""
Explanation: Get the list of EST sequence records
End of explanation
"""
import re
cds_start_end_list = [re.findall('/cds=(\S+)', est.description) for est in est_list]
cds_start_end_list
"""
Explanation: Split the sequences into two lists (group A and group B)
Extract the /cds tag from each FASTA header in the input.
End of explanation
"""
est_a_list = [(est_list[i], cds_start_end_list[i][0]) for i in range(len(est_list)) if cds_start_end_list[i] != []]
est_a_list
"""
Explanation: Build the list of group A records as a list of 2-tuples (the record as the first element and the value of the /cds tag as the second).
End of explanation
"""
est_b_list = [est_list[i] for i in range(len(est_list)) if cds_start_end_list[i] == []]
est_b_list
"""
Explanation: Build the list of group B records (SeqRecord objects).
End of explanation
"""
cds_start_end_list = [re.findall('(\d+)', est_tuple[1]) for est_tuple in est_a_list]
cds_start_end_list = [[int(start_end[0]), int(start_end[1])] for start_end in cds_start_end_list]
cds_start_end_list
"""
Explanation: Output the translations of the group A coding sequences
a) Extract the list of start and end positions of the CDSs of the group A sequences.
End of explanation
"""
is_multiple_3 = [(el[1] - el[0] + 1) % 3 == 0 for el in cds_start_end_list]
is_multiple_3
"""
Explanation: b) Discard the coding sequences whose length is not a multiple of 3 and update the group A sequence list accordingly.
Build a list of boolean values whose i-th element is True if the i-th sequence has a coding sequence whose length is a multiple of 3.
End of explanation
"""
cds_start_end_list = [cds_start_end_list[i] for i, v in enumerate(is_multiple_3) if v is True]
cds_start_end_list
"""
Explanation: Use this list to update the list of starts and ends.
End of explanation
"""
est_a_list = [est_a_list[i] for i, v in enumerate(is_multiple_3) if v is True]
est_a_list
"""
Explanation: Update the sequence list as well.
End of explanation
"""
coding_sequence_list = [el[0][s-1:e] for el, (s, e) in zip(est_a_list, cds_start_end_list)]
coding_sequence_list
"""
Explanation: Extract the corresponding coding sequences (as SeqRecord objects).
End of explanation
"""
translation_list = [el.translate() for el in coding_sequence_list]
translation_list
"""
Explanation: Get the list of translations of the coding sequences (as SeqRecord objects).
End of explanation
"""
for t, c in zip(translation_list, coding_sequence_list):
t.id = re.findall('/gb=(\S+)', c.description)[0]
t.description = ''
translation_list
"""
Explanation: Assign each translation its GenBank ID as the record's id attribute and the empty string as its description attribute.
End of explanation
"""
SeqIO.write(translation_list, './translation.fa', 'fasta')
"""
Explanation: Write the translations to the output file translation.fa.
End of explanation
"""
est_b_list = [est for est in est_b_list if int(re.findall('/len=(\S+)', est.description)[0]) >= 500]
est_b_list = [est for est in est_b_list if len(est) >= 500]
est_b_list
"""
Explanation: Output the group B sequences.
Discard the sequences shorter than 500 bases.
End of explanation
"""
for i, el in enumerate(est_b_list):
    if int(re.findall('/clone_end=(\S+)', el.description)[0].strip("'")) == 5:
        # reverse_complement() returns a new record, so reassign it into the list
        rc = el.reverse_complement()
        rc.id, rc.description = el.id, el.description
        est_b_list[i] = rc
est_b_list
"""
Explanation: Reverse-complement the sequences whose clone_end is 5'.
End of explanation
"""
for est in est_b_list:
est.id = re.findall('/gb=(\S+)', est.description)[0]
est.description = str(len(est))
est_b_list
"""
Explanation: Assign each sequence its GenBank ID as the id attribute and its length as the description attribute.
End of explanation
"""
est_b_list.sort(key=lambda x: len(x))
est_b_list = sorted(est_b_list, key=lambda x: len(x))
est_b_list
"""
Explanation: Sort the sequences by increasing length.
End of explanation
"""
SeqIO.write(est_b_list, './ests-500.fa', 'fasta')
"""
Explanation: Write the sequences to the output file ests-500.fa.
End of explanation
"""
|
Tsiems/machine-learning-projects | Lab1/Lab1-Travis-Copy1.ipynb | mit | import pandas as pd
import numpy as np
df = pd.read_csv('data/data.csv') # read in the csv file
"""
Explanation: Lab 1: Exploring NFL Play-By-Play Data
Data Loading and Preprocessing
To begin, we load the data into a Pandas data frame from a csv file.
End of explanation
"""
df.head()
"""
Explanation: Let's take a cursory glance at the data to see what we're working with.
End of explanation
"""
columns_to_delete = ['Unnamed: 0', 'Date', 'time',
'PosTeamScore', 'PassAttempt', 'RushAttempt',
'DefTeamScore', 'Season', 'PlayAttempted']
#Iterate through and delete the columns we don't want
for col in columns_to_delete:
if col in df:
del df[col]
"""
Explanation: There's a lot of data that we don't care about. For example, 'PassAttempt' is a binary attribute, but there's also an attribute called 'PlayType' which is set to 'Pass' for a passing play.
We define a list of the columns which we're not interested in, and then we delete them
End of explanation
"""
df.columns
"""
Explanation: We can then grab a list of the remaining column names
End of explanation
"""
df.info()
df = df.replace(to_replace=np.nan,value=-1)
"""
Explanation: Temporary simple data replacement so that we can cast to integers (instead of objects)
End of explanation
"""
df.info()
"""
Explanation: At this point, lots of things are encoded as objects, or with excesively large data types
End of explanation
"""
continuous_features = ['TimeSecs', 'PlayTimeDiff', 'yrdln', 'yrdline100',
'ydstogo', 'ydsnet', 'Yards.Gained', 'Penalty.Yards',
'ScoreDiff', 'AbsScoreDiff']
ordinal_features = ['Drive', 'qtr', 'down']
binary_features = ['GoalToGo', 'FirstDown','sp', 'Touchdown', 'Safety', 'Fumble']
categorical_features = df.columns.difference(continuous_features).difference(ordinal_features)
"""
Explanation: We define four lists based on the types of features we're using.
Binary features are separated from the other categorical features so that they can be stored in less space
End of explanation
"""
df[continuous_features] = df[continuous_features].astype(np.float64)
df[ordinal_features] = df[ordinal_features].astype(np.int64)
df[binary_features] = df[binary_features].astype(np.int8)
"""
Explanation: We then cast all of the columns to the appropriate underlying data types
End of explanation
"""
df['PassOutcome'].replace(['Complete', 'Incomplete Pass'], [1, 0], inplace=True)
df = df[df["PlayType"] != 'Quarter End']
df = df[df["PlayType"] != 'Two Minute Warning']
df = df[df["PlayType"] != 'End of Game']
"""
Explanation: Some additional preprocessing: encode pass outcomes as 1/0 and drop administrative play types (quarter ends, two-minute warnings, and end-of-game rows).
End of explanation
"""
df.info()
"""
Explanation: Now all of the objects are encoded the way we'd like them to be
End of explanation
"""
df.describe()
import matplotlib.pyplot as plt
import warnings
warnings.simplefilter('ignore', DeprecationWarning)
#Embed figures in the Jupyter Notebook
%matplotlib inline
#Use GGPlot style for matplotlib
plt.style.use('ggplot')
pass_plays = df[df['PlayType'] == "Pass"]
pass_plays_grouped = pass_plays.groupby(by=['Passer'])
"""
Explanation: Now we can start to take a look at what's in each of our columns
End of explanation
"""
first_downs_grouped = df.groupby(by=['FirstDown'])
print(first_downs_grouped['Yards.Gained'].count())
print("-----------------------------")
print(first_downs_grouped['Yards.Gained'].sum())
print("-----------------------------")
print(first_downs_grouped['Yards.Gained'].sum()/first_downs_grouped['Yards.Gained'].count())
"""
Explanation: Look at the number of yards gained by a FirstDown
End of explanation
"""
plays_grouped = df.groupby(by=['PlayType'])
print(plays_grouped['Yards.Gained'].count())
print("-----------------------------")
print(plays_grouped['Yards.Gained'].sum())
print("-----------------------------")
print(plays_grouped['Yards.Gained'].sum()/plays_grouped['Yards.Gained'].count())
"""
Explanation: Group by play type
End of explanation
"""
size = 10
corr = df.corr()
fig, ax = plt.subplots(figsize=(size, size))
ax.matshow(corr)
plt.xticks(range(len(corr.columns)), corr.columns)
for tick in ax.get_xticklabels():
tick.set_rotation(90)
plt.yticks(range(len(corr.columns)), corr.columns)
"""
Explanation: We can eliminate combos who didn't have at least 10 receptions together, and then re-sample the data. This will remove noise from QB-receiver combos who have very high or low completion rates because they've played very little together.
End of explanation
"""
import seaborn as sns
%matplotlib inline
# df_dropped = df.dropna()
# df_dropped.info()
selected_types = df.select_dtypes(exclude=["object"])
useful_attributes = df[['FieldGoalDistance','ydstogo']]
print(useful_attributes)
sns.heatmap(corr)
cluster_corr = sns.clustermap(corr)
plt.setp(cluster_corr.ax_heatmap.yaxis.get_majorticklabels(), rotation=0)
# plt.xticks(rotation=90)
fg_analysis = df[['FieldGoalDistance','FieldGoalResult', 'PlayType']]
fg_analysis = fg_analysis[fg_analysis['FieldGoalResult'] != -1]
fg_grouped = fg_analysis.groupby(by=["FieldGoalResult"])
print(fg_grouped.sum()/fg_grouped.count())
sns.violinplot(x="FieldGoalResult", y="FieldGoalDistance", data=fg_analysis, inner="quart")
fg_analysis = fg_analysis[fg_analysis['FieldGoalResult'] != "Blocked"]
fg_analysis = fg_analysis[fg_analysis['PlayType'] == "Field Goal"]
sns.violinplot(x = "PlayType", y="FieldGoalDistance", hue="FieldGoalResult", data=fg_analysis, inner="quart", split = True)
pass_analysis = df[df.PlayType == 'Pass']
pass_analysis = pass_analysis[['PassOutcome','PassLength','PassLocation']]
# print(pass_analysis)
pass_analysis = pass_analysis[pass_analysis.PassLength != -1]
pa_grouped = pass_analysis.groupby(by=['PassLength'])
print(pa_grouped.count())
# pass_analysis['SuccessfulPass'] = pd.cut(df.PassOutcome,[0,1,2],2,labels=['Complete','Incomplete'])
pass_analysis.info()
# Draw a nested violinplot and split the violins for easier comparison
# sns.violinplot(x="PassLocation", y="SuccessfulPass", hue="PassLength", data=pass_analysis, split=True,
# inner="quart")
# sns.despine(left=True)
pass_info = pd.crosstab([pass_analysis['PassLength'],pass_analysis['PassLocation'] ],
pass_analysis.PassOutcome.astype(bool))
print(pass_info)
pass_info.plot(kind='bar', stacked=True)
df.RunGap.value_counts()
pass_rate = pass_info.div(pass_info.sum(1).astype(float),
axis=0) # normalize the value
# print pass_rate
pass_rate.plot(kind='barh',
stacked=True)
# Run data
run_analysis = df[df.PlayType == 'Run']
run_analysis = run_analysis[['Yards.Gained','RunGap','RunLocation']]
runlocation_violinplot = sns.violinplot(x="RunLocation", y="Yards.Gained", data=run_analysis, inner="quart")
run_analysis = run_analysis[run_analysis.RunLocation != -1]
run_analysis['RunGap'].replace(-1, 'up the middle',inplace=True)
# run_analysis['RunLocation'].replace(-1, 'no location',inplace=True)
ra_grouped = run_analysis.groupby(by=['RunGap'])
print(ra_grouped.count())
print(run_analysis.info())
sns.set(style="whitegrid", palette="muted")
# Draw a categorical scatterplot to show each observation
sns.factorplot(x="RunLocation", y="Yards.Gained", hue="RunGap", data=run_analysis)
sns.factorplot(x="RunLocation", y="Yards.Gained", hue="RunGap", data=run_analysis,kind="bar")
sns.factorplot(x="RunLocation", y="Yards.Gained", hue="RunGap", data=run_analysis,kind="violin")
#just compare left and right options
run_lr = run_analysis[(run_analysis['RunLocation'] == 'right') | (run_analysis['RunLocation'] == 'left')]
sns.factorplot(x="RunLocation", y="Yards.Gained", hue="RunGap", data=run_lr,kind="bar")
rungap_violinplot = sns.violinplot(x="RunGap", y="Yards.Gained", data=run_analysis, inner="quart")
rush_plays = df[(df.Rusher != -1)]
rush_plays_grouped = rush_plays.groupby(by=['posteam']).filter(lambda g: len(g) > 10).groupby(by=["posteam"])
yards_per_carry = rush_plays_grouped["Yards.Gained"].sum() / rush_plays_grouped["Yards.Gained"].count()
yards_per_carry.sort_values(inplace=True, ascending=False)
yards_per_carry[0:40].plot(kind='barh')
# run_analysis = df[df.PlayType == 'Run']
# run_analysis = run_analysis[['Yards.Gained','RunGap','RunLocation','Rusher','posteam']]
# run_analysis = run_analysis[run_analysis['posteam'] != -1]
# # runlocation_violinplot = sns.violinplot(x="RunLocation", y="Yards.Gained", data=run_analysis, inner="quart")
# run_analysis['RunGap'].replace(-1, 'up the middle',inplace=True)
# run_an_cleaned = run_analysis[run_analysis.RunLocation != -1]
# # run_analysis['RunLocation'].replace(-1, 'no location',inplace=True)
# run_analysis['Avg_Running_Yards'] = run_analysis[run_analysis['Yards.Gained'].mean()]
# # sns.violinplot(x="posteam", y="Yards.Gained", data=run_analysis, inner="quart")
# team_barplot = sns.barplot(x="posteam", y="Yards.Gained", data=run_analysis)
# # ra_grouped = run_analysis.groupby(by=['RunGap'])
# for item in team_barplot.get_xticklabels():
# item.set_rotation(90)
#shows average yards gained on running plays
# ************ repeat chart **************
rush_plays = df[(df.Rusher != -1)]
rush_plays_grouped = rush_plays.groupby(by=['Rusher']).filter(lambda g: len(g) > 10).groupby(by=["Rusher"])
yards_per_carry = rush_plays_grouped["Yards.Gained"].sum() / rush_plays_grouped["Yards.Gained"].count()
yards_per_carry.sort_values(inplace=True, ascending=False)
# yards_per_carry = yards_per_carry.groupby(by=['posteam'])
# yards_per_carry[0:20].plot(kind='barh')
#sns.barplot(x="Rusher",y="Yards.Gained",hue="posteam",data=rush_plays)
# yards_per_carry.info()
# sns.barplot(hue="posteam", data=yards_per_carry[0:20])
yards_per_carry.index[0]
#str(df[df.Rusher == yards_per_carry.index[0]].posteam.mode())[5:index()]
teams = [ str(df[df.Rusher == yards_per_carry.index[i]].posteam.mode()) for i in range(len(yards_per_carry)) ]
teams = [ x[5:x.index('\n')] for x in teams]
temp_df = pd.DataFrame({'yards_per_carry': yards_per_carry, 'rusher': yards_per_carry.index, 'team': teams})
ax = sns.barplot(x = "yards_per_carry", y = "rusher", hue = "team", data = temp_df[0:10], palette=sns.color_palette("Set2", 20))
## Analyzing scoring during certain times of the game
quarter_data = df[df['sp'] == 1]
sns.countplot(x='qtr',data=quarter_data)
# shows distribution of number of scores per quarter
quarter_data = df[df['sp'] == 1]
quarter_data = quarter_data[['qtr','Touchdown','sp']]
qd_grouped = quarter_data.groupby(by=['qtr'])
print(qd_grouped.count())
qd_info = pd.crosstab([quarter_data['qtr'] ],
quarter_data.Touchdown.astype(bool))
print(qd_info)
qd_info.plot(kind='bar', stacked=True)
time_score_data = df[df['sp'] == 1]
time_score_data = time_score_data[['sp','ScoreDiff','TimeSecs']]
g = sns.jointplot("TimeSecs", "ScoreDiff", data=time_score_data, kind="reg",
xlim=(3600, -900), ylim=(-40, 50), color="r", size=12)
time_score_data = time_score_data[time_score_data['ScoreDiff']>0]
sns.jointplot("TimeSecs", "ScoreDiff", data=time_score_data, kind="reg",
xlim=(3600, -900), ylim=(-40, 50), color="r", size=12)
# regression for winning team
quarter_data = df[df['sp'] == 1]
quarter_data = quarter_data[['qtr','TimeUnder']]
sns.countplot(x="qtr",hue="TimeUnder",data=quarter_data, hue_order = range(15,-1,-1))
"""
Explanation: We can also extract the highest-completion percentage combos.
Here we take the top-10 most reliable QB-receiver pairs.
End of explanation
"""
|
scikit-optimize/scikit-optimize.github.io | dev/notebooks/auto_examples/plots/partial-dependence-plot.ipynb | bsd-3-clause | print(__doc__)
import sys
from skopt.plots import plot_objective
from skopt import forest_minimize
import numpy as np
np.random.seed(123)
import matplotlib.pyplot as plt
"""
Explanation: Partial Dependence Plots
Sigurd Carlsen Feb 2019
Holger Nahrstaedt 2020
.. currentmodule:: skopt
Plot objective now supports optional use of partial dependence as well as
different methods of defining parameter values for dependency plots.
End of explanation
"""
# Here we define a function that we evaluate.
def funny_func(x):
s = 0
for i in range(len(x)):
s += (x[i] * i) ** 2
return s
"""
Explanation: Objective function
Plot objective now supports optional use of partial dependence as well as
different methods of defining parameter values for dependency plots
End of explanation
"""
bounds = [(-1, 1.), ] * 3
n_calls = 150
result = forest_minimize(funny_func, bounds, n_calls=n_calls,
base_estimator="ET",
random_state=4)
"""
Explanation: Optimisation using decision trees
We run forest_minimize on the function
End of explanation
"""
_ = plot_objective(result, n_points=10)
"""
Explanation: Partial dependence plot
Here we see an example of using partial dependence. Even when setting
n_points all the way down to 10 from the default of 40, this method is
still very slow. This is because partial dependence calculates 250 extra
predictions for each point on the plots.
End of explanation
"""
_ = plot_objective(result, n_points=10, minimum='expected_minimum')
"""
Explanation: It is possible to change the location of the red dot, which normally shows
the position of the found minimum. We can set it 'expected_minimum',
which is the minimum value of the surrogate function, obtained by a
minimum search method.
End of explanation
"""
_ = plot_objective(result, sample_source='result', n_points=10)
"""
Explanation: Plot without partial dependence
Here we plot without partial dependence. We see that it is a lot faster.
Also the values for the other parameters are set to the default "result"
which is the parameter set of the best observed value so far. In the case
of funny_func this is close to 0 for all parameters.
End of explanation
"""
_ = plot_objective(result, n_points=10, sample_source='expected_minimum',
minimum='expected_minimum')
"""
Explanation: Modify the shown minimum
Here we try with setting the minimum parameters to something other than
"result". First we try with "expected_minimum" which is the set of
parameters that gives the miniumum value of the surrogate function,
using scipys minimum search method.
End of explanation
"""
_ = plot_objective(result, n_points=10, sample_source='expected_minimum_random',
minimum='expected_minimum_random')
"""
Explanation: "expected_minimum_random" is a naive way of finding the minimum of the
surrogate by only using random sampling:
End of explanation
"""
_ = plot_objective(result, n_points=10, sample_source='expected_minimum_random',
minimum='expected_minimum_random',
n_minimum_search=10)
_ = plot_objective(result, n_points=10, sample_source="expected_minimum",
minimum='expected_minimum', n_minimum_search=2)
"""
Explanation: We can also specify how many initial samples are used for the two different
"expected_minimum" methods. We set it to a low value in the next examples
to showcase how it affects the minimum for the two methods.
End of explanation
"""
_ = plot_objective(result, n_points=10, sample_source=[1, -0.5, 0.5],
minimum=[1, -0.5, 0.5])
"""
Explanation: Set a minimum location
Lastly, we can also define these parameters ourselves by passing a list
as the minimum argument:
End of explanation
"""
|
n-witt/MachineLearningWithText_SS2017 | tutorials/8 k-Means Clustering.ipynb | gpl-3.0 | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set() # for plot styling
import numpy as np
"""
Explanation: k-Means Clustering
In the previous few sections, we explored one category of unsupervised machine learning models: dimensionality reduction.
Here we will move on to another class of unsupervised machine learning models: clustering algorithms.
Clustering algorithms seek to learn, from the properties of the data, an optimal division or discrete labeling of groups of points.
Many clustering algorithms are available in Scikit-Learn, but the simplest to understand is k-means clustering.
We begin with the standard imports:
End of explanation
"""
from sklearn.datasets.samples_generator import make_blobs
X, y_true = make_blobs(n_samples=300, centers=4,
cluster_std=0.60, random_state=0)
plt.scatter(X[:, 0], X[:, 1], s=50);
"""
Explanation: Introducing k-Means
The k-means algorithm searches for a pre-determined number of clusters within an unlabeled multidimensional dataset.
It accomplishes this using a simple conception of what the optimal clustering looks like:
The "cluster center" is the arithmetic mean of all the points belonging to the cluster.
Each point is closer to its own cluster center than to other cluster centers.
First, let's generate a two-dimensional dataset containing four distinct blobs:
End of explanation
"""
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=4)
kmeans.fit(X)
y_kmeans = kmeans.predict(X)
"""
Explanation: Here, it's relatively easy to pick out the four clusters.
The k-means algorithm does this automatically:
End of explanation
"""
plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='viridis')
centers = kmeans.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200, alpha=0.5);
"""
Explanation: Let's visualize the results by plotting the data colored by these labels. We will also plot the cluster centers as determined by the k-means estimator:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set() # for plot styling
import numpy as np
from ipywidgets import interact
from sklearn.metrics import pairwise_distances_argmin
from sklearn.datasets.samples_generator import make_blobs
def plot_kmeans_interactive(min_clusters=1, max_clusters=6):
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=0.60)
def plot_points(X, labels, n_clusters):
plt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis',
vmin=0, vmax=n_clusters - 1);
def plot_centers(centers):
plt.scatter(centers[:, 0], centers[:, 1], marker='o',
c=np.arange(centers.shape[0]),
s=200, cmap='viridis')
plt.scatter(centers[:, 0], centers[:, 1], marker='o',
c='black', s=50)
def _kmeans_step(frame=0, n_clusters=4):
rng = np.random.RandomState(2)
labels = np.zeros(X.shape[0])
centers = rng.randn(n_clusters, 2)
nsteps = frame // 3
for i in range(nsteps + 1):
old_centers = centers
if i < nsteps or frame % 3 > 0:
labels = pairwise_distances_argmin(X, centers)
if i < nsteps or frame % 3 > 1:
centers = np.array([X[labels == j].mean(0)
for j in range(n_clusters)])
nans = np.isnan(centers)
centers[nans] = old_centers[nans]
# plot the data and cluster centers
plot_points(X, labels, n_clusters)
plot_centers(old_centers)
# plot new centers if third frame
if frame % 3 == 2:
for i in range(n_clusters):
plt.annotate('', centers[i], old_centers[i],
arrowprops=dict(arrowstyle='->', linewidth=1))
plot_centers(centers)
plt.xlim(-4, 4)
plt.ylim(-2, 10)
if frame % 3 == 1:
plt.text(3.8, 9.5, "1. Reassign points to nearest centroid",
ha='right', va='top', size=14)
elif frame % 3 == 2:
plt.text(3.8, 9.5, "2. Update centroids to cluster means",
ha='right', va='top', size=14)
return interact(_kmeans_step, frame=[0, 50],
n_clusters=[min_clusters, max_clusters])
plot_kmeans_interactive();
"""
Explanation: The k-means algorithm (at least in this simple case) assigns the points to clusters very similarly to how we might assign them by eye.
You might wonder how this algorithm finds these clusters so quickly!
After all, the number of possible combinations of cluster assignments is exponential in the number of data points.
An exhaustive search would be very, very costly.
The approach to k-means clustering is called expectation–maximization.
Expectation–Maximization
Guess some cluster centers
Repeat until converged
Assign points to the nearest cluster center
Set the cluster centers to the mean
Interactive Example
End of explanation
"""
from sklearn.metrics import pairwise_distances_argmin
def find_clusters(X, n_clusters, rseed=2):
# 1. Randomly choose clusters
rng = np.random.RandomState(rseed)
i = rng.permutation(X.shape[0])[:n_clusters]
centers = X[i]
while True:
# 2a. Assign labels based on closest center
labels = pairwise_distances_argmin(X, centers)
# 2b. Find new centers from means of points
new_centers = np.array([X[labels == i].mean(0)
for i in range(n_clusters)])
# 2c. Check for convergence
if np.all(centers == new_centers):
break
centers = new_centers
return centers, labels
centers, labels = find_clusters(X, 4)
plt.scatter(X[:, 0], X[:, 1], c=labels,
s=50, cmap='viridis');
"""
Explanation: The algorithm is simple enough that we can implement it in just a few lines of code:
End of explanation
"""
centers, labels = find_clusters(X, 4, rseed=0)
plt.scatter(X[:, 0], X[:, 1], c=labels,
s=50, cmap='viridis');
"""
Explanation: Issues of the EM algorithm
The globally optimal result may not be achieved
There is no assurance that it will find the globally best solution.
For example, if we use a different random seed in our simple procedure, the particular starting guesses lead to poor results:
End of explanation
"""
labels = KMeans(6, random_state=0).fit_predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels,
s=50, cmap='viridis');
"""
Explanation: It is usual to run the algorithm several times with different starting guesses.
Scikit-Learn does 10 runs by default (the n_init parameter).
The number of clusters must be selected beforehand
A challenge with k-means is that you must tell it how many clusters you expect.
It cannot learn the number of clusters from the data.
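A common heuristic for picking the number of clusters is to score each candidate k, for example with the silhouette score from sklearn.metrics. A hedged sketch (the blob data mirrors the example earlier in this notebook; variable names are illustrative):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Same kind of well-separated blob data as above.
X_demo, _ = make_blobs(n_samples=300, centers=4,
                       cluster_std=0.60, random_state=0)

# Score each candidate k; a higher silhouette means
# tighter, better-separated clusters.
scores = {k: silhouette_score(X_demo,
                              KMeans(n_clusters=k, random_state=0).fit_predict(X_demo))
          for k in range(2, 7)}
best_k = max(scores, key=scores.get)
```

For data like this, the silhouette typically peaks at the true number of blobs, but it remains a heuristic rather than a guarantee.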
For example, if we ask the algorithm to identify six clusters, it will happily proceed and find the best six clusters:
End of explanation
"""
from sklearn.datasets import make_moons
X, y = make_moons(200, noise=.05, random_state=0)
labels = KMeans(2, random_state=0).fit_predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels,
s=50, cmap='viridis');
"""
Explanation: k-means is limited to linear cluster boundaries
The model assumptions of k-means mean that the algorithm will often be ineffective if the clusters have complicated geometries.
The boundaries between k-means clusters will always be linear, which means that it will fail for more complicated boundaries.
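One workaround is a graph-based algorithm such as SpectralClustering, which embeds the points using a nearest-neighbor graph before clustering and can therefore follow non-linear boundaries. A hedged sketch on two-moons data (variable names are illustrative):

```python
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons

X_moons, y_moons = make_moons(200, noise=.05, random_state=0)

# Build a nearest-neighbor graph and cluster its spectral embedding.
model = SpectralClustering(n_clusters=2, affinity='nearest_neighbors',
                           assign_labels='kmeans', random_state=0)
labels_sc = model.fit_predict(X_moons)

# Cluster ids may be swapped relative to the true labels,
# so agreement near 1.0 or near 0.0 both mean a clean split.
agreement = (labels_sc == y_moons).mean()
```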
Consider the following data, along with the cluster labels found by the typical k-means approach:
End of explanation
"""
from sklearn.datasets import load_digits
digits = load_digits()
digits.data.shape
kmeans = KMeans(n_clusters=10, random_state=0)
clusters = kmeans.fit_predict(digits.data)
kmeans.cluster_centers_.shape
"""
Explanation: k-means can be slow for large numbers of samples
Because each iteration of k-means must access every point in the dataset, the algorithm can be relatively slow as the number of samples grows.
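For such cases Scikit-Learn ships MiniBatchKMeans, which updates the centroids using only a small random batch of points per iteration. A hedged sketch (the data and names are illustrative):

```python
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_blobs

# A larger data set where full-batch k-means starts to feel slow.
X_big, _ = make_blobs(n_samples=10000, centers=4,
                      cluster_std=0.60, random_state=0)

# Each iteration touches only `batch_size` points instead of all 10,000.
mbk = MiniBatchKMeans(n_clusters=4, batch_size=100, random_state=0)
labels_mbk = mbk.fit_predict(X_big)
```

The result is usually very close to full k-means at a fraction of the cost.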
Example 1: k-means on digits
Let's take a look at applying k-means on the same simple digits data that we already saw.
Here we will use k-means to try to identify similar digits without using the original label information.
Recall that the digits consist of 1,797 samples with 64 features, where each of the 64 features is the brightness of one pixel in an 8×8 image:
End of explanation
"""
fig, ax = plt.subplots(2, 5, figsize=(8, 3))
centers = kmeans.cluster_centers_.reshape(10, 8, 8)
for axi, center in zip(ax.flat, centers):
axi.set(xticks=[], yticks=[])
axi.imshow(center, interpolation='nearest', cmap=plt.cm.binary)
"""
Explanation: What do these clusters look like?
End of explanation
"""
cluster_no = 6
digits.target[(clusters == cluster_no)]
"""
Explanation: We can have a look at the true labels of the datapoints assigned to a cluster:
End of explanation
"""
from scipy.stats import mode
labels = np.zeros_like(clusters)
for i in range(10):
mask = (clusters == i)
labels[mask] = mode(digits.target[mask])[0]
"""
Explanation: Obviously, there is a mismatch between the cluster numbers and the labels.
We can fix that using the true labels:
End of explanation
"""
from sklearn.metrics import accuracy_score
accuracy_score(digits.target, labels)
"""
Explanation: Now we can check how accurate our unsupervised clustering was in finding similar digits within the data:
End of explanation
"""
from sklearn.metrics import confusion_matrix
mat = confusion_matrix(digits.target, labels)
sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False,
xticklabels=digits.target_names,
yticklabels=digits.target_names)
plt.xlabel('true label')
plt.ylabel('predicted label');
"""
Explanation: With just a simple k-means algorithm, we discovered the correct grouping for 80% of the input digits! Let's check the confusion matrix for this:
End of explanation
"""
from sklearn.manifold import TSNE
# Project the data: this step will take several seconds
tsne = TSNE(n_components=2, init='random', random_state=0)
digits_proj = tsne.fit_transform(digits.data)
# Compute the clusters
kmeans = KMeans(n_clusters=10, random_state=0)
clusters = kmeans.fit_predict(digits_proj)
# Permute the labels
labels = np.zeros_like(clusters)
for i in range(10):
mask = (clusters == i)
labels[mask] = mode(digits.target[mask])[0]
# Compute the accuracy
accuracy_score(digits.target, labels)
mat = confusion_matrix(digits.target, labels)
sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False,
xticklabels=digits.target_names,
yticklabels=digits.target_names)
plt.xlabel('true label')
plt.ylabel('predicted label');
"""
Explanation: Just for fun
We can use the t-distributed stochastic neighbor embedding (t-SNE) algorithm to pre-process the data before performing k-means.
t-SNE is a nonlinear embedding algorithm that is particularly adept at preserving points within clusters.
Let's see how it does:
End of explanation
"""
|
martijnvermaat/monoseq | doc/monoseq.ipynb | mit | from monoseq.ipynb import Seq
s = ('cgcactcaaaacaaaggaagaccgtcctcgactgcagaggaagcaggaagctgtc'
'ggcccagctctgagcccagctgctggagccccgagcagcggcatggagtccgtgg'
'ccctgtacagctttcaggctacagagagcgacgagctggccttcaacaagggaga'
'cacactcaagatcctgaacatggaggatgaccagaactggtacaaggccgagctc'
'cggggtgtcgagggatttattcccaagaactacatccgcgtcaag')
Seq(s)
"""
Explanation: Pretty-printing DNA and protein sequences with monoseq
monoseq is a Python library for pretty-printing DNA and protein sequences using a monospace font. It also provides a simple command line interface.
Sequences are pretty-printed in the traditional way using blocks of letters where each line is prefixed with the sequence position. User-specified regions are highlighted and the output format can be HTML or plaintext with optional styling using ANSI escape codes for use in a terminal.
Here we show how monoseq can be used in the IPython Notebook environment. See the monoseq documentation for more.
Note: Some applications (e.g., GitHub) will not show the annotation styling in this notebook. View this notebook on nbviewer to see all styling.
Use in the IPython Notebook
If you haven't already done so, install monoseq using pip.
pip install monoseq
The monoseq.ipynb module provides Seq, a convenience wrapper around monoseq.pprint_sequence providing easy printing of sequence strings in an IPython Notebook.
End of explanation
"""
Seq(s, block_length=8, blocks_per_line=8)
"""
Explanation: Block and line lengths
We can change the number of characters per block and the number of blocks per line.
End of explanation
"""
conserved = [(11, 37), (222, 247)]
Seq(s, annotations=[conserved])
"""
Explanation: Annotations
Let's say we want to highlight two subsequences because they are conserved between species. We define each region as a (start, stop) tuple (zero-based, stop not included) and include it in the annotations argument.
End of explanation
"""
twelves = [(p, p + 1) for p in range(11, len(s), 12)]
middle = [(len(s) // 3, len(s) // 3 * 2)]
Seq(s, annotations=[conserved, twelves, middle])
"""
Explanation: As a contrived example to show several levels of annotation, let's also annotate every 12th character and the middle third of the sequence.
End of explanation
"""
style = """
{selector} {{ background: beige; color: gray }}
{selector} .monoseq-margin {{ font-style: italic; color: green }}
{selector} .monoseq-annotation-0 {{ color: blue; font-weight: bold }}
"""
Seq(s, style=style, annotations=[conserved])
"""
Explanation: Custom styling
The default CSS that is applied can be overridden with the style argument.
End of explanation
"""
|
opengeostat/pygslib | pygslib/Ipython_templates/broken/vtk_tools.ipynb | mit | import pygslib
import numpy as np
"""
Explanation: VTK tools
Pygslib uses VTK:
as a data format and data-conversion tool
to plot in 3D
as a library with some basic computational-geometry functions, for example to determine whether a point is inside a surface
Some of the functions in VTK were obtained or modified from Adamos Kyriakou at https://pyscience.wordpress.com/
End of explanation
"""
help(pygslib.vtktools)
"""
Explanation: Functions in vtktools
End of explanation
"""
#load the cube
mycube=pygslib.vtktools.loadSTL('../datasets/stl/cube.stl')
# see the information about this data... Note that it is a vtkPolyData
print mycube
# Create a VTK render containing a surface (mycube)
renderer = pygslib.vtktools.polydata2renderer(mycube, color=(1,0,0), opacity=0.50, background=(1,1,1))
# Now we plot the render
pygslib.vtktools.vtk_show(renderer, camera_position=(-20,20,20), camera_focalpoint=(0,0,0))
"""
Explanation: Load a cube defined in an stl file and plot it
STL is a popular mesh format included an many non-commercial and commercial software, example: Paraview, Datamine Studio, etc.
End of explanation
"""
# we have a line, for example a block model row
# defined by two points, or an infinite line passing through a drillhole sample
pSource = [-50.0, 0.0, 0.0]
pTarget = [50.0, 0.0, 0.0]
# now we want to see how this looks like
pygslib.vtktools.addLine(renderer,pSource, pTarget, color=(0, 1, 0))
pygslib.vtktools.vtk_show(renderer) # the camera position was already defined
# now we find the point coordinates of the intersections
intersect, points, pointsVTK= pygslib.vtktools.vtk_raycasting(mycube, pSource, pTarget)
print "the line intersects? ", intersect==1
print "the line is over the surface?", intersect==-1
# list of coordinates of the points intersecting
print points
#Now we plot the intersecting points
# To do this we add the points to the renderer
for p in points:
pygslib.vtktools.addPoint(renderer, p, radius=0.5, color=(0.0, 0.0, 1.0))
pygslib.vtktools.vtk_show(renderer)
"""
Explanation: Ray casting to find intersections of a line with the cube
This is basically how we plan to find points inside a solid and to define blocks inside a solid.
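The idea behind the ray-casting test can be illustrated in 2D with the classic even-odd rule: cast a ray from the point and count edge crossings. This is a self-contained sketch of the principle, not VTK's actual implementation:

```python
def point_in_polygon(point, polygon):
    """Even-odd ray-casting test in 2D.

    Cast a horizontal ray to the right of `point` and count how many
    polygon edges it crosses; an odd count means the point is inside.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through the point?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
```

The 3D version used by VTK follows the same principle, counting intersections of the ray with the surface triangles.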
End of explanation
"""
# we have a line, for example a block model row
# defined by two points or an infinite line passing trough a dillhole sample
pSource = [-50.0, 5.01, 0]
pTarget = [50.0, 5.01, 0]
# now we find the point coordinates of the intersections
intersect, points, pointsVTK= pygslib.vtktools.vtk_raycasting(mycube, pSource, pTarget)
print "the line intersects? ", intersect==1
print "the line is over the surface?", intersect==-1
# list of coordinates of the points intersecting
print points
# now we want to see how this looks like
pygslib.vtktools.addLine(renderer,pSource, pTarget, color=(0, 1, 0))
for p in points:
pygslib.vtktools.addPoint(renderer, p, radius=0.5, color=(0.0, 0.0, 1.0))
pygslib.vtktools.vtk_show(renderer) # the camera position was already defined
# note that there is a tolerance of about 0.01
"""
Explanation: Test line on surface
End of explanation
"""
# using the same cube, but generating arbitrary random points
x = np.random.uniform(-10,10,150)
y = np.random.uniform(-10,10,150)
z = np.random.uniform(-10,10,150)
"""
Explanation: Finding points
End of explanation
"""
# selecting all inside the solid
# These two methods are equivalent, but test=4 also works with open surfaces
inside,p=pygslib.vtktools.pointquering(mycube, azm=0, dip=0, x=x, y=y, z=z, test=1)
inside1,p=pygslib.vtktools.pointquering(mycube, azm=0, dip=0, x=x, y=y, z=z, test=4)
err=inside==inside1
#print inside, tuple(p)
print x[~err]
print y[~err]
print z[~err]
# here we prepare to plot the solid, the x,y,z indicator and we also
# plot the line (direction) used to ray trace
# convert the data in the STL file into a renderer and then we plot it
renderer = pygslib.vtktools.polydata2renderer(mycube, color=(1,0,0), opacity=0.70, background=(1,1,1))
# add indicator (r->x, g->y, b->z)
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-7,-10,-10], color=(1, 0, 0))
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-7,-10], color=(0, 1, 0))
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-10,-7], color=(0, 0, 1))
# add ray to see where we are pointing
pygslib.vtktools.addLine(renderer, (0.,0.,0.), tuple(p), color=(0, 0, 0))
# here we plot the points selected and non-selected in different color and size
# add the points selected
for i in range(len(inside)):
p=[x[i],y[i],z[i]]
if inside[i]!=0:
#inside
pygslib.vtktools.addPoint(renderer, p, radius=0.5, color=(0.0, 0.0, 1.0))
else:
pygslib.vtktools.addPoint(renderer, p, radius=0.2, color=(0.0, 1.0, 0.0))
# let's rotate this a bit
pygslib.vtktools.vtk_show(renderer, camera_position=(0,0,50), camera_focalpoint=(0,0,0))
"""
Explanation: Find points inside a solid
End of explanation
"""
# selecting all points over the surface (test=2)
inside,p=pygslib.vtktools.pointquering(mycube, azm=0, dip=0, x=x, y=y, z=z, test=2)
# here we prepare to plot the solid, the x,y,z indicator and we also
# plot the line (direction) used to ray trace
# convert the data in the STL file into a renderer and then we plot it
renderer = pygslib.vtktools.polydata2renderer(mycube, color=(1,0,0), opacity=0.70, background=(1,1,1))
# add indicator (r->x, g->y, b->z)
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-7,-10,-10], color=(1, 0, 0))
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-7,-10], color=(0, 1, 0))
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-10,-7], color=(0, 0, 1))
# add ray to see where we are pointing
pygslib.vtktools.addLine(renderer, (0.,0.,0.), tuple(-p), color=(0, 0, 0))
# here we plot the points selected and non-selected in different color and size
# add the points selected
for i in range(len(inside)):
p=[x[i],y[i],z[i]]
if inside[i]!=0:
#inside
pygslib.vtktools.addPoint(renderer, p, radius=0.5, color=(0.0, 0.0, 1.0))
else:
pygslib.vtktools.addPoint(renderer, p, radius=0.2, color=(0.0, 1.0, 0.0))
# let's rotate this a bit
pygslib.vtktools.vtk_show(renderer, camera_position=(0,0,50), camera_focalpoint=(0,0,0))
"""
Explanation: Find points over a surface
End of explanation
"""
# selecting all points below the surface (test=3)
inside,p=pygslib.vtktools.pointquering(mycube, azm=0, dip=0, x=x, y=y, z=z, test=3)
# here we prepare to plot the solid, the x,y,z indicator and we also
# plot the line (direction) used to ray trace
# convert the data in the STL file into a renderer and then we plot it
renderer = pygslib.vtktools.polydata2renderer(mycube, color=(1,0,0), opacity=0.70, background=(1,1,1))
# add indicator (r->x, g->y, b->z)
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-7,-10,-10], color=(1, 0, 0))
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-7,-10], color=(0, 1, 0))
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-10,-7], color=(0, 0, 1))
# add ray to see where we are pointing
pygslib.vtktools.addLine(renderer, (0.,0.,0.), tuple(p), color=(0, 0, 0))
# here we plot the points selected and non-selected in different color and size
# add the points selected
for i in range(len(inside)):
p=[x[i],y[i],z[i]]
if inside[i]!=0:
#inside
pygslib.vtktools.addPoint(renderer, p, radius=0.5, color=(0.0, 0.0, 1.0))
else:
pygslib.vtktools.addPoint(renderer, p, radius=0.2, color=(0.0, 1.0, 0.0))
# let's rotate this a bit
pygslib.vtktools.vtk_show(renderer, camera_position=(0,0,50), camera_focalpoint=(0,0,0))
"""
Explanation: Find points below a surface
End of explanation
"""
data = {'inside': inside}
pygslib.vtktools.points2vtkfile('points', x,y,z, data)
"""
Explanation: Export points to a VTK file
End of explanation
"""
|
thomasantony/CarND-Projects | Exercises/Term1/TensorFlow-Tutorials/01_Simple_Linear_Model.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
"""
Explanation: TensorFlow Tutorial #01
Simple Linear Model
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
This tutorial demonstrates the basic workflow of using TensorFlow with a simple linear model. After loading the so-called MNIST data-set with images of hand-written digits, we define and optimize a simple mathematical model in TensorFlow. The results are then plotted and discussed.
You should be familiar with basic linear algebra, Python and the Jupyter Notebook editor. It also helps if you have a basic understanding of Machine Learning and classification.
Imports
End of explanation
"""
tf.__version__
"""
Explanation: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
End of explanation
"""
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets("data/MNIST/", one_hot=True)
"""
Explanation: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
"""
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
"""
Explanation: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
End of explanation
"""
data.test.labels[0:5, :]
"""
Explanation: One-Hot Encoding
The data-set has been loaded as so-called One-Hot encoding. This means the labels have been converted from a single number to a vector whose length equals the number of possible classes. All elements of the vector are zero except for the $i$'th element which is one and means the class is $i$. For example, the One-Hot encoded labels for the first 5 images in the test-set are:
End of explanation
"""
data.test.cls = np.array([label.argmax() for label in data.test.labels])
"""
Explanation: We also need the classes as single numbers for various comparisons and performance measures, so we convert the One-Hot encoded vectors to a single number by taking the index of the highest element. Note that the word 'class' is a keyword used in Python so we need to use the name 'cls' instead.
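The round trip between class numbers and One-Hot vectors can be sketched in plain NumPy (the five class numbers here are only illustrative):

```python
import numpy as np

cls = np.array([7, 2, 1, 0, 4])      # class numbers
one_hot = np.eye(10)[cls]            # shape (5, 10); one 1.0 per row
recovered = one_hot.argmax(axis=1)   # index of the highest element per row
```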
End of explanation
"""
data.test.cls[0:5]
"""
Explanation: We can now see the class for the first five images in the test-set. Compare these to the One-Hot encoded vectors above. For example, the class for the first image is 7, which corresponds to a One-Hot encoded vector where all elements are zero except for the element with index 7.
End of explanation
"""
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of classes, one class for each of 10 digits.
num_classes = 10
"""
Explanation: Data dimensions
The data dimensions are used in several places in the source-code below. In computer programming it is generally best to use variables and constants rather than having to hard-code specific numbers every time that number is used. This means the numbers only have to be changed in one single place. Ideally these would be inferred from the data that has been read, but here we just write the numbers.
End of explanation
"""
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
"""
Explanation: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
End of explanation
"""
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
"""
Explanation: Plot a few images to see if data is correct
End of explanation
"""
x = tf.placeholder(tf.float32, [None, img_size_flat])
"""
Explanation: TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below:
Placeholder variables used to change the input to the graph.
Model variables that are going to be optimized so as to make the model perform better.
The model which is essentially just a mathematical function that calculates some output given the input in the placeholder variables and the model variables.
A cost measure that can be used to guide the optimization of the variables.
An optimization method which updates the variables of the model.
In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.
Placeholder variables
Placeholder variables serve as the input to the graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional vector or matrix. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.
End of explanation
"""
y_true = tf.placeholder(tf.float32, [None, num_classes])
"""
Explanation: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
End of explanation
"""
y_true_cls = tf.placeholder(tf.int64, [None])
"""
Explanation: Finally we have the placeholder variable for the true class of each image in the placeholder variable x. These are integers and the dimensionality of this placeholder variable is set to [None] which means the placeholder variable is a one-dimensional vector of arbitrary length.
End of explanation
"""
weights = tf.Variable(tf.zeros([img_size_flat, num_classes]))
"""
Explanation: Variables to be optimized
Apart from the placeholder variables that were defined above and which serve as feeding input data into the model, there are also some model variables that must be changed by TensorFlow so as to make the model perform better on the training data.
The first variable that must be optimized is called weights and is defined here as a TensorFlow variable that must be initialized with zeros and whose shape is [img_size_flat, num_classes], so it is a 2-dimensional tensor (or matrix) with img_size_flat rows and num_classes columns.
End of explanation
"""
biases = tf.Variable(tf.zeros([num_classes]))
"""
Explanation: The second variable that must be optimized is called biases and is defined as a 1-dimensional tensor (or vector) of length num_classes.
End of explanation
"""
logits = tf.matmul(x, weights) + biases
"""
Explanation: Model
This simple mathematical model multiplies the images in the placeholder variable x with the weights and then adds the biases.
The result is a matrix of shape [num_images, num_classes] because x has shape [num_images, img_size_flat] and weights has shape [img_size_flat, num_classes], so the multiplication of those two matrices is a matrix with shape [num_images, num_classes] and then the biases vector is added to each row of that matrix.
Note that the name logits is typical TensorFlow terminology, but other people may call the variable something else.
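The shape bookkeeping can be checked with a quick NumPy sketch (the batch size of 3 is arbitrary; the variable names are illustrative):

```python
import numpy as np

x_np = np.random.rand(3, 784)        # 3 flattened 28x28 images
w_np = np.zeros((784, 10))           # weights: img_size_flat x num_classes
b_np = np.zeros(10)                  # biases: one per class
logits_np = x_np.dot(w_np) + b_np    # the bias vector broadcasts across rows
```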
End of explanation
"""
y_pred = tf.nn.softmax(logits)
"""
Explanation: Now logits is a matrix with num_images rows and num_classes columns, where the element of the $i$'th row and $j$'th column is an estimate of how likely the $i$'th input image is to be of the $j$'th class.
However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each row of the logits matrix sums to one, and each element is limited between zero and one. This is calculated using the so-called softmax function and the result is stored in y_pred.
End of explanation
"""
y_pred_cls = tf.argmax(y_pred, dimension=1)
"""
Explanation: The predicted class can be calculated from the y_pred matrix by taking the index of the largest element in each row.
End of explanation
"""
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits,
labels=y_true)
"""
Explanation: Cost-function to be optimized
To make the model better at classifying the input images, we must somehow change the variables for weights and biases. To do this we first need to know how well the model currently performs by comparing the predicted output of the model y_pred to the desired output y_true.
The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the weights and biases of the model.
TensorFlow has a built-in function for calculating the cross-entropy. Note that it uses the values of the logits because it also calculates the softmax internally.
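To see why cross-entropy rewards confident correct predictions, here is a hedged NumPy sketch; unlike TensorFlow's function it works on probabilities directly rather than on the logits:

```python
import numpy as np

def cross_entropy(probs, one_hot_labels):
    # Per example: minus the log-probability assigned to the true class.
    return -(one_hot_labels * np.log(probs)).sum(axis=1)

labels = np.array([[0.0, 1.0, 0.0]])
good = cross_entropy(np.array([[0.05, 0.90, 0.05]]), labels)  # confident, correct
bad = cross_entropy(np.array([[0.80, 0.10, 0.10]]), labels)   # confident, wrong
```

A nearly correct prediction gives a loss near zero, while a confidently wrong one is penalized heavily.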
End of explanation
"""
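What a softmax-cross-entropy-from-logits computation does can be sketched in NumPy: a numerically stable log-softmax taken directly from the raw logits, followed by the negative log-probability of the true class. The demo arrays below are assumed toy values:

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Stable log-softmax: shift by the row max, then subtract log-sum-exp.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Cross-entropy per example: -sum over classes of labels * log_probs.
    return -(labels * log_probs).sum(axis=1)

logits_demo = np.array([[10.0, 0.0, 0.0],   # confident and correct
                        [0.0, 0.0, 0.0]])   # completely uncertain
labels_demo = np.array([[1.0, 0.0, 0.0],
                        [1.0, 0.0, 0.0]])
ce = softmax_cross_entropy(logits_demo, labels_demo)
```

The confident, correct prediction yields a cross-entropy near zero, while the uniform prediction yields log(3).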
cost = tf.reduce_mean(cross_entropy)
"""
Explanation: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
End of explanation
"""
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cost)
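As a standalone illustration of the update rule that gradient descent applies, here is a toy sketch that minimizes a simple quadratic; it is separate from the TensorFlow graph, and the numbers are assumed for demonstration:

```python
# Minimize f(x) = (x - 3)^2 using the plain gradient-descent update
# x <- x - learning_rate * f'(x), where f'(x) = 2 * (x - 3).
x = 0.0
learning_rate = 0.1
for _ in range(100):
    grad = 2.0 * (x - 3.0)
    x -= learning_rate * grad
# After 100 steps, x has converged very close to the minimum at 3.
```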
"""
Explanation: Optimization method
Now that we have a cost measure that must be minimized, we can create an optimizer. In this case it is the basic form of Gradient Descent where the step-size is set to 0.5.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
End of explanation
"""
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
"""
Explanation: Performance measures
We need a few more performance measures to display the progress to the user.
This is a vector of booleans indicating whether the predicted class equals the true class of each image.
End of explanation
"""
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
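The cast-then-average trick can be demonstrated with NumPy on a hand-made vector of booleans; `correct_demo` is an assumed toy example, not the tensor computed above:

```python
import numpy as np

# A stand-in for the vector of booleans produced by the equality check.
correct_demo = np.array([True, True, False, True])
# Casting maps False -> 0.0 and True -> 1.0, so the mean is exactly
# the fraction of correct predictions.
accuracy_demo = correct_demo.astype(np.float32).mean()
```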
"""
Explanation: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
End of explanation
"""
session = tf.Session()
"""
Explanation: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
"""
session.run(tf.initialize_all_variables())
"""
Explanation: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them.
End of explanation
"""
batch_size = 100
"""
Explanation: Helper-function to perform optimization iterations
There are 50,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore use Stochastic Gradient Descent which only uses a small batch of images in each iteration of the optimizer.
End of explanation
"""
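A minimal sketch of how a random mini-batch could be drawn with NumPy; `num_images` is an assumed toy value, and this merely stands in for what `data.train.next_batch()` does internally:

```python
import numpy as np

num_images = 1000   # assumed toy training-set size
batch_size = 100

# Sample a batch of distinct indices; indexing the training arrays with
# batch_idx would yield one mini-batch of images and labels.
rng = np.random.default_rng(seed=0)
batch_idx = rng.choice(num_images, size=batch_size, replace=False)
```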
def optimize(num_iterations):
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
# Note that the placeholder for y_true_cls is not set
# because it is not used during training.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
"""
Explanation: Function for performing a number of optimization iterations so as to gradually improve the weights and biases of the model. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples.
End of explanation
"""
feed_dict_test = {x: data.test.images,
y_true: data.test.labels,
y_true_cls: data.test.cls}
"""
Explanation: Helper-functions to show performance
Dict with the test-set data to be used as input to the TensorFlow graph. Note that we must use the correct names for the placeholder variables in the TensorFlow graph.
End of explanation
"""
def print_accuracy():
# Use TensorFlow to compute the accuracy.
acc = session.run(accuracy, feed_dict=feed_dict_test)
# Print the accuracy.
print("Accuracy on test-set: {0:.1%}".format(acc))
"""
Explanation: Function for printing the classification accuracy on the test-set.
End of explanation
"""
def print_confusion_matrix():
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the predicted classifications for the test-set.
cls_pred = session.run(y_pred_cls, feed_dict=feed_dict_test)
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
# Make various adjustments to the plot.
plt.tight_layout()
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
"""
Explanation: Function for printing and plotting the confusion matrix using scikit-learn.
End of explanation
"""
def plot_example_errors():
# Use TensorFlow to get a list of boolean values
# whether each test-image has been correctly classified,
# and a list for the predicted class of each image.
correct, cls_pred = session.run([correct_prediction, y_pred_cls],
feed_dict=feed_dict_test)
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
"""
Explanation: Function for plotting examples of images from the test-set that have been mis-classified.
End of explanation
"""
def plot_weights():
# Get the values for the weights from the TensorFlow variable.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Create figure with 3x4 sub-plots,
# where the last 2 sub-plots are unused.
fig, axes = plt.subplots(3, 4)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Only use the weights for the first 10 sub-plots.
if i<10:
# Get the weights for the i'th digit and reshape it.
# Note that w.shape == (img_size_flat, 10)
image = w[:, i].reshape(img_shape)
# Set the label for the sub-plot.
ax.set_xlabel("Weights: {0}".format(i))
# Plot the image.
ax.imshow(image, vmin=w_min, vmax=w_max, cmap='seismic')
# Remove ticks from each sub-plot.
ax.set_xticks([])
ax.set_yticks([])
"""
Explanation: Helper-function to plot the model weights
Function for plotting the weights of the model. 10 images are plotted, one for each digit that the model is trained to recognize.
End of explanation
"""
print_accuracy()
plot_example_errors()
"""
Explanation: Performance before any optimization
The accuracy on the test-set is 9.8%. This is because the model has only been initialized and not optimized at all, so it always predicts that the image shows a zero digit, as demonstrated in the plot below, and it turns out that 9.8% of the images in the test-set happen to be zero digits.
End of explanation
"""
optimize(num_iterations=1)
print_accuracy()
plot_example_errors()
"""
Explanation: Performance after 1 optimization iteration
Already after a single optimization iteration, the model has increased its accuracy on the test-set to 40.7% up from 9.8%. This means that it mis-classifies the images about 6 out of 10 times, as demonstrated on a few examples below.
End of explanation
"""
plot_weights()
"""
Explanation: The weights can also be plotted as shown below. Positive weights are red and negative weights are blue. These weights can be intuitively understood as image-filters.
For example, the weights used to determine if an image shows a zero-digit have a positive reaction (red) to an image of a circle, and have a negative reaction (blue) to images with content in the centre of the circle.
Similarly, the weights used to determine if an image shows a one-digit react positively (red) to a vertical line in the centre of the image, and react negatively (blue) to images with content surrounding that line.
Note that the weights mostly look like the digits they're supposed to recognize. This is because only one optimization iteration has been performed so the weights are only trained on 100 images. After training on several thousand images, the weights become more difficult to interpret because they have to recognize many variations of how digits can be written.
End of explanation
"""
# We have already performed 1 iteration.
optimize(num_iterations=9)
print_accuracy()
plot_example_errors()
plot_weights()
"""
Explanation: Performance after 10 optimization iterations
End of explanation
"""
# We have already performed 10 iterations.
optimize(num_iterations=990)
print_accuracy()
plot_example_errors()
"""
Explanation: Performance after 1000 optimization iterations
After 1000 optimization iterations, the model only mis-classifies about one in ten images. As demonstrated below, some of the mis-classifications are justified because the images are very hard to classify with certainty even for humans, while others are quite obvious and should have been classified correctly by a good model. But this simple model cannot reach much better performance and more complex models are therefore needed.
End of explanation
"""
plot_weights()
"""
Explanation: The model has now been trained for 1000 optimization iterations, with each iteration using 100 images from the training-set. Because of the great variety of the images, the weights have now become difficult to interpret and we may doubt whether the model truly understands how digits are composed from lines, or whether the model has just memorized many different variations of pixels.
End of explanation
"""
print_confusion_matrix()
"""
Explanation: We can also print and plot the so-called confusion matrix which lets us see more details about the mis-classifications. For example, it shows that images actually depicting a 5 have sometimes been mis-classified as all other possible digits, but mostly either 3, 6 or 8.
End of explanation
"""
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
"""
Explanation: We are now done using TensorFlow, so we close the session to release its resources.
End of explanation
"""
|
bumblebeefr/poppy_rate | [startup] Dynamixel - List and configure motors.ipynb | gpl-2.0 | import time

import pypot.dynamixel

ports = pypot.dynamixel.get_available_ports()
if not ports:
raise IOError('no port found!')
print "Ports found: %s" % ports
for port in ports:
print('Connecting on port:', port)
dxl_io = pypot.dynamixel.DxlIO(port)
motors = dxl_io.scan()
    print(" %s motors found: %s\n" % (len(motors), motors))
dxl_io.close()
"""
Explanation: Scan for and display the IDs of all connected motors
⇒ pypot.Dynamixel documentation on connecting to motors
End of explanation
"""
#57142 => 1000000
#return_delay_time => 0
def motor_config():
ports = pypot.dynamixel.get_available_ports()
if len(ports) == 1:
        print("Connecting to port %s" % ports[0])
dxl_io = pypot.dynamixel.DxlIO(ports[0])
        print('Scanning for motors (this can take a few seconds)')
motors = dxl_io.scan()
if len(motors) == 1:
            print("OK, exactly one motor found: %s" % motors[0])
for k,v in dxl_io.__dict__.items():
print(" - %s : %s" % (k,v))
dxl_io.enable_torque(motors)
dxl_io.set_moving_speed({motors[0]:200})
            print("Moving motor %s to 90°" % motors[0])
dxl_io.set_goal_position({motors[0]:90})
while dxl_io.is_moving((motors[0],))[0]:
time.sleep(0.02)
            print("Moving motor %s to -90°" % motors[0])
dxl_io.set_goal_position({motors[0]:-90})
while dxl_io.is_moving((motors[0],))[0]:
time.sleep(0.02)
            print("Moving motor %s to 0°" % motors[0])
dxl_io.set_goal_position({motors[0]:0})
while dxl_io.is_moving((motors[0],))[0]:
time.sleep(0.02)
dxl_io.disable_torque(motors)
            target_id = raw_input("New ID for motor %s: " % motors[0])
try:
target_id = int(target_id)
dxl_io.change_id({motors[0]:target_id})
                print("ID changed")
except ValueError:
                print("ID not changed")
dxl_io.close()
else:
            print("Error: %s motors connected: %s" % (len(motors), motors))
else :
        print("Error: %s ports found: %s" % (len(ports), ports))
motor_config()
"""
Explanation: Reset the motor to the zero angle, then display and change a motor's ID.
⇒ Another script for a "hard" reset of the motor ID and bitrate
End of explanation
"""
|
tleonhardt/CodingPlayground | dataquest/SQL_and_Databases/Preparing_Data_for_SQLite.ipynb | mit | # Import pandas and read the CSV file academy_awards.csv into a DataFrame
import pandas as pd
df = pd.read_csv('../data/academy_awards.csv', encoding="ISO-8859-1")
# Start exploring the data in Pandas and look for data quality issues
df.head()
# There are 6 unnamed columns at the end. Do any of them contain valid values?
cols = df.columns
for column in cols:
if column.startswith('Unnamed:'):
print('\ncolumn {}\n:{}'.format(column, df[column].value_counts()))
# Additional Info column contains a few different formatting styles.
# Start brainstorming ways to clean this column up.
col_str = "Biutiful {'Uxbal'}"
col_split = col_str.split(' ')
movie_name = col_split[0]
character_name = col_split[1].split("'")[1]
print(movie_name)
print(character_name)
"""
Explanation: Introduction To The Data
So far, we've learned how to write SQL queries to interact with existing databases. In this guided project, you'll learn how to clean a CSV dataset and add it to a SQLite database.
We'll work with data on Academy Award nominations, which can be downloaded here. The Academy Awards, also known as the Oscars, is an annual awards ceremony hosted to recognize the achievements in the film industry. There are many different awards categories and the members of the academy vote every year to decide which artist or film should get the award. The awards categories have changed over the years, and you can learn more about when categories were added on Wikipedia.
Here are the columns in the dataset, academy_awards.csv:
* Year - the year of the awards ceremony.
* Category - the category of award the nominee was nominated for.
* Nominee - the person nominated for the award.
* Additional Info - this column contains additional info like:
* the movie the nominee participated in.
* the character the nominee played (for acting awards).
* Won? - this column contains either YES or NO depending on if the nominee won the award.
Read in the dataset into a Dataframe and explore it to become more familiar witht he data. Once you've cleaned the dataset, you'll use a Pandas helper method to export the data into a SQLite database.
End of explanation
"""
# Before we filter the data, let's clean up the Year column
import numpy as np
df["Year"] = df["Year"].str[0:4]
# Convert the Year column to the int64 data type
df["Year"] = df["Year"].astype(np.int64)
df.dtypes
# Use conditional filtering to select only the rows where Year > 2000
later_than_2000 = df[df["Year"] > 2000]
# Use conditional filtering to select rows where Category matches
award_categories = ['Actor -- Leading Role',
'Actor -- Supporting Role',
'Actress -- Leading Role',
'Actress -- Supporting Role']
nominations = later_than_2000[later_than_2000["Category"].isin(award_categories)]
nominations.head()
"""
Explanation: Filtering the Data
The dataset is incredibly messy and you may have noticed many inconsistencies that make it hard to work with. Most columns don't have consistent formatting, which is incredibly important when we use SQL to query the data later on. Other columns vary in the information they convey based on the type of awards category that row corresponds to.
In the SQL and Databases: Intermediate course, we worked with a subset of the same dataset. This subset contained only the nominations from years 2001 to 2010 and only the following awards categories:
* Actor -- Leading Role
* Actor -- Supporting Role
* Actress -- Leading Role
* Actress -- Supporting Role
Let's filter our Dataframe to the same subset so it's more manageable.
End of explanation
"""
# Use Series method map to replace all NO values with 0 and YES with 1
replacements = {"NO": 0, "YES": 1}
nominations["Won?"] = nominations["Won?"].map(replacements)
nominations.head()
# Create a new column Won that contains the values from the Won? column
nominations["Won"] = nominations["Won?"]
# Use the drop method to remove the extraneous columns
drop_cols = ['Won?', 'Unnamed: 5', 'Unnamed: 6', 'Unnamed: 7',
'Unnamed: 8', 'Unnamed: 9', 'Unnamed: 10']
final_nominations = nominations.drop(drop_cols, axis=1)
final_nominations.head()
"""
Explanation: Cleaning Up the Won? and Unnamed Columns
Since SQLite uses the integers 0 and 1 to represent Boolean values, convert the Won? column to reflect this. Also rename the Won? column to Won so that it's consistent with the other column names. Finally, get rid of the 6 extra, unnamed columns, since they contain only null values in our filtered Dataframe nominations.
End of explanation
"""
# Use vectorized string methods to clean up the Additional Info column
additional_info_one = final_nominations["Additional Info"].str.rstrip("'}")
additional_info_two = additional_info_one.str.split(" {'")
movie_names = additional_info_two.str[0]
characters = additional_info_two.str[1]
final_nominations["Movie"] = movie_names
final_nominations["Character"] = characters
final_nominations = final_nominations.drop("Additional Info", axis=1)
final_nominations.head()
"""
Explanation: Cleaning Up the Additional Info Column
Now clean up the Additional Info column, whose values are formatted like so:
MOVIE {'CHARACTER'}
Here are some examples:
* Biutiful {'Uxbal'} - Biutiful is the movie and Uxbal is the character this nominee played.
* True Grit {'Rooster Cogburn'} - True Grit is the movie and Rooster Cogburn is the character this nominee played.
* The Social Network {'Mark Zuckerberg'} - The Social Network is the movie and Mark Zuckerberg is the character this nominee played.
The values in this column contain the movie name and the character the nominee played. Instead of keeping these values in 1 column, split them up into 2 different columns for easier querying.
End of explanation
"""
# Create the SQLite database "nominations.db" and connect to it
import sqlite3
conn = sqlite3.connect('../data/nominations.db')
# Use the DataFrame method "to_sql" to export final_nominations
final_nominations.to_sql("nominations", conn, index=False)
"""
Explanation: Exporting to SQLite
Now that our Dataframe is cleaned up, let's write these records to a SQL database. We can use the Pandas Dataframe method to_sql to create a new table in a database we specify. This method has 2 required parameters:
* name - string corresponding to the name of the table we want created. The rows from our Dataframe will be added to this table after it's created.
* conn - the Connection instance representing the database we want to add to.
Behind the scenes, Pandas creates a table and uses the first parameter to name it. Pandas uses the data types of each of the columns in the Dataframe to create a SQLite schema for this table. Since SQLite uses integer values to represent Booleans, it was important to convert the Won column to the integer values 0 and 1. We also converted the Year column to the integer data type, so that this column will have the appropriate type in our table.
After creating the table, Pandas creates a large INSERT query and runs it to insert the values into the table. We can customize the behavior of the to_sql method using its parameters. For example, if we wanted to append rows to an existing SQLite table instead of creating a new one, we can set the if_exists parameter to "append". By default, if_exists is set to "fail" and no rows will be inserted if we specify a table name that already exists. If we're inserting a large number of records into SQLite and we want to break up the inserting of records into chunks, we can use the chunksize parameter to set the number of rows we want inserted each time.
Since we're creating a database from scratch, we need to create a database file first so we can connect to it and export our data. To create a new database file, we use the sqlite3 library to connect to a file path that doesn't exist yet. If Python can't find the file we specified, it will create it for us and treat it as a SQLite database file.
SQLite doesn't have a special file format and you can use any file extension you'd like when creating a SQLite database. We generally use the .db extension, which isn't a file extension that's generally used for other applications.
End of explanation
"""
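The if_exists and chunksize parameters described above can be demonstrated against a throwaway in-memory SQLite database; the table and column names below are illustrative only, not the ones used elsewhere in this notebook:

```python
import sqlite3

import pandas as pd

demo = pd.DataFrame({'Year': [2001, 2002], 'Won': [1, 0]})
conn_demo = sqlite3.connect(':memory:')

# First call creates the table (if_exists defaults to 'fail').
demo.to_sql('demo_nominations', conn_demo, index=False)
# Second call appends to the existing table, inserting one row per batch.
demo.to_sql('demo_nominations', conn_demo, index=False,
            if_exists='append', chunksize=1)

n_rows = conn_demo.execute('select count(*) from demo_nominations').fetchone()[0]
conn_demo.close()
```

After the append, the table holds the original two rows plus the two appended ones.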
import sqlite3
# Create a Connection using sqlite3.connect
conn = sqlite3.connect('../data/nominations.db')
# Explore the database to make sure the nominations table looks OK
# Return and print the schema using "pragma table_info()"
query_one = "pragma table_info(nominations);"
print(conn.execute(query_one).fetchall())
# Return and print the first 10 rows using the SELECT and LIMIT statements
query_two = "select * from nominations limit 10;"
print(conn.execute(query_two).fetchall())
# Once you're done, close the Connection
conn.close()
"""
Explanation: Verifying In SQL
Let's now query the database to make sure everything worked as expected.
End of explanation
"""
|
quantopian/research_public | notebooks/lectures/Case_Study_Comparing_ETFs/answers/notebook.ipynb | apache-2.0 | # Useful functions
def normal_test(X):
z, pval = stats.normaltest(X)
if pval < 0.05:
print 'Values are not normally distributed.'
else:
print 'Values are normally distributed.'
return
# Useful Libraries
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import seaborn as sns
"""
Explanation: Exercises: Comparing ETFs - Answer Key
By Christopher van Hoecke, Maxwell Margenot, and Delaney Mackenzie
Lecture Link :
https://www.quantopian.com/lectures/statistical-moments
https://www.quantopian.com/lectures/hypothesis-testing
IMPORTANT NOTE:
This lecture corresponds to the statistical moments and hypothesis testing lecture, which is part of the Quantopian lecture series. This homework expects you to rely heavily on the code presented in the corresponding lecture. Please copy and paste regularly from that lecture when starting to work on the problems, as trying to do them from scratch will likely be too difficult.
When you feel comfortable with the topics presented here, see if you can create an algorithm that qualifies for the Quantopian Contest. Participants are evaluated on their ability to produce risk-constrained alpha and the top 10 contest participants are awarded cash prizes on a daily basis.
https://www.quantopian.com/contest
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Key Concepts
t-statistic formula for unequal variances : $ t = \frac{\bar{X}_1 - \bar{X}_2}{(\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2})^{1/2}}$
Where $s_1$ and $s_2$ are the standard deviation of set 1 and set 2; and $n_1$ and $n_2$ are the number of observations we have.
End of explanation
"""
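The Welch t-statistic formula from the Key Concepts section can be sketched with the standard library alone; the two samples below are assumed toy data, not the ETF returns used later:

```python
import math

def welch_t(sample1, sample2):
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / float(n1)
    m2 = sum(sample2) / float(n2)
    # Unbiased sample variances (divide by n - 1).
    s1_sq = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    s2_sq = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    return (m1 - m2) / math.sqrt(s1_sq / n1 + s2_sq / n2)

t_demo = welch_t([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```

For these toy samples the statistic works out to exactly -sqrt(3).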
# Get pricing data for an energy (XLE) and industrial (XLI) ETF
xle = get_pricing('XLE', fields = 'price', start_date = '2016-01-01', end_date = '2017-01-01')
xli = get_pricing('XLI', fields = 'price', start_date = '2016-01-01', end_date = '2017-01-01')
# Compute returns
xle_returns = xle.pct_change()[1:]
xli_returns = xli.pct_change()[1:]
"""
Explanation: Data:
End of explanation
"""
xle = plt.hist(xle_returns, bins=30)
xli = plt.hist(xli_returns, bins=30, color='r')
plt.xlabel('returns')
plt.ylabel('Frequency')
plt.title('Histogram of the returns of XLE and XLI')
plt.legend(['XLE returns', 'XLI returns']);
# Checking for normality using function above.
print 'XLE'
normal_test(xle_returns)
print 'XLI'
normal_test(xli_returns)
# Because the data is not normally distributed, we must use the levene and not the F-test of variance.
stats.levene(xle_returns, xli_returns)
"""
Explanation: Exercise 1 : Hypothesis Testing on Variances.
Plot the histogram of the returns of XLE and XLI
Check to see if each return stream is normally distributed
If the assets are normally distributed, use the F-test to perform a hypothesis test and decide whether they have the two assets have the same variance.
If the assets are not normally distributed, use the Levene test (in the scipy library) to perform a hypothesis test on variance.
End of explanation
"""
# Manually calculating the t-statistic
N1 = len(xle_returns)
N2 = len(xli_returns)
m1 = xle_returns.mean()
m2 = xli_returns.mean()
s1 = xle_returns.std()
s2 = xli_returns.std()
test_statistic = (m1 - m2) / (s1**2 / N1 + s2**2 / N2)**0.5
print 't-test statistic:', test_statistic
# Alternative form, using the scipy library on python.
stats.ttest_ind(xle_returns, xli_returns, equal_var=False)
"""
Explanation: Since the p-value for the Levene test is less than our $\alpha$ level (0.05), we can reject the null hypothesis that the variability of the two groups is equal, implying that the variances are unequal.
Exercise 2 : Hypothesis Testing on Means.
Since we know that the variances are not equal, we must use Welch's t-test.
- Calculate the mean returns of XLE and XLI.
- Find the difference between the two means.
- Calculate the standard deviation of the returns of XLE and XLI
- Using the formula given above, calculate the t-test statistic (Using $\alpha = 0.05$) for Welch's t-test to test whether the mean returns of XLE and XLI are different.
- Consult the Hypothesis Testing Lecture to calculate the p-value for this test. Are the mean returns of XLE and XLI the same?
Now use the t-test function for two independent samples from the scipy library. Compare the results.
End of explanation
"""
# Calculate the mean and median of xle and xli using the numpy library
xle_mean = np.mean(xle_returns)
xle_median = np.median(xle_returns)
print 'Mean of XLE returns = ', xle_mean, '; median = ', xle_median
xli_mean = np.mean(xli_returns)
xli_median = np.median(xli_returns)
print 'Mean of XLI returns = ', xli_mean, '; median = ', xli_median
# Print values of Skewness for xle and xli returns
print 'Skew of XLE returns:', stats.skew(xle_returns)
print 'Skew of XLI returns:', stats.skew(xli_returns)
"""
Explanation: Exercise 3 : Skewness
Calculate the mean and median of the two assets
Calculate the skewness using the scipy library
End of explanation
"""
# Print value of Kurtosis for xle and xli returns
print 'kurtosis:', stats.kurtosis(xle_returns)
print 'kurtosis:', stats.kurtosis(xli_returns)
# Distribution plot of XLE returns in red (for Kurtosis of 1.6).
# Distribution plot of XLI returns in blue (for Kurtosis of 2.0).
xle = sns.distplot(xle_returns, color = 'r', axlabel = 'xle')
xli = sns.distplot(xli_returns, axlabel = 'xli');
"""
Explanation: A skewness greater than 0 means there is more weight in the right tail of the distribution, while a skewness less than 0 means there is more weight in the left tail.
Exercise 4 : Kurtosis
Check the kurtosis of the two assets, using the scipy library.
Using the seaborn library, plot the distribution of XLE and XLI returns.
Recall:
- Kurtosis > 3 is leptokurtic, a highly peaked, narrow deviation from the mean
- Kurtosis = 3 is mesokurtic. The most significant mesokurtic distribution is the normal distribution family.
- Kurtosis < 3 is platykurtic, a lower-peaked, broad deviation from the mean
Note that scipy's stats.kurtosis returns excess kurtosis (Fisher's definition, which subtracts 3) by default, so compare its output against 0, or pass fisher=False to compare against 3.
End of explanation
"""
|
katychuang/ipython-notebooks | fragrance analysis - scrapy example.ipynb | gpl-2.0 | import requests
from scrapy.http import TextResponse
url = "https://www.fragrantica.com/designers/Dolce%26Gabbana.html"
user_agent = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/58.0.3029.110 Chrome/58.0.3029.110 Safari/537.36'}
r = requests.get(url, headers=user_agent)
response = TextResponse(r.url, body=r.text, encoding='utf-8')
"""
Explanation: Fragrance analysis
Author: @katychuang
Description: I want to learn more about what's out there in the fragrance world so am starting this project to collect data. There's no existing API to a database of fragrance information so I'm scraping websites as a way to collect some data for analysis.
Scrapy example
Code in this notebook is an example of using scrapy to scrape data off one webpage of Fragrantica. The code below is for Python3, Scrapy (1.4.0).
Making requests
Using the requests library, which returns binary data and the scrapy TextResponse module to read the binary data.
Thanks to @jasonwirth's tip about using user agent strings, I was able to get around the 403 forbidden access error codes while scraping. There are many user agents available to use, the top ones are listed here, and it's conventionally good to rotate/randomize the use of the strings.
End of explanation
"""
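One way to rotate user agents, as suggested above, is to pick one at random for each request; the strings below are illustrative examples of common desktop browsers, not an authoritative list:

```python
import random

USER_AGENTS = [
    'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) '
    'Chrome/58.0.3029.110 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
    '(KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/603.3.8 '
    '(KHTML, like Gecko) Version/10.1.2 Safari/603.3.8',
]

def random_headers():
    # A fresh random user agent for every request varies the fingerprint.
    return {'User-Agent': random.choice(USER_AGENTS)}

headers = random_headers()
```

These headers can be passed to requests.get just like the fixed user_agent dict above.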
# Navigate the perfume list
c = response.xpath('//div[@id="col1"]/div[@class="perfumeslist"]/div/div/p/a//text()').extract()
print("There are {} perfumes from Dolce & Gabbana".format(len(c)))
print(c)
"""
Explanation: Once we have the response, which is a huge chunk of minimized html tags, we need to navigate through the DOM structure to get exactly the information needed. The perfumes are thankfully listed in the tree with the ID #col1, so I can start there as the root and get all the perfume names by picking specific child DOM nodes.
End of explanation
"""
def parse_perfume_data(response):
my_list = []
for row in response.xpath('//div[@id="col1"]/div[@class="perfumeslist"]'):
perfume = {}
perfume['name'] = row.xpath('div/div/p/a//text()').extract()[0]
perfume['year'] = year(row.xpath('div/div/p/span[@class="mtext"]/span/strong/text()').extract())
perfume['gender'] = row.xpath('div/@class').extract()[0].split(' ')[1][6:]
perfume['img'] = row.xpath('div/div/p/a/img//@src').extract()[0]
perfume['url'] = row.xpath('div/div/p/a/@href').extract()[0]
my_list.append(perfume)
return my_list
def year(y):
if len(y) >= 1:
return y[0]
else:
return ''
"""
Explanation: The extract() method returns a list, so it was easy to get the number of items by finding the length of the list.
Parsing the response
Once you have the response, you can parse the output to get all the bits of information needed. I'm interested in the name of the perfume, the gender it's made for, the image, and also the url to the product detail page. The function parse_perfume_data() takes the response and outputs the fields to a list of dictionaries.
End of explanation
"""
data = parse_perfume_data(response)
"""
Explanation: Now we call this function, passing in the response, to get a structured extraction of the data. I chose some easy-to-remember field names to use as dictionary keys.
End of explanation
"""
print(data[0])
print(data[1])
"""
Explanation: Here's how the data looks after parsing.
End of explanation
"""
from collections import Counter
Counter(token['gender'] for token in data)
Counter(token['year'] for token in data)
years = list(sorted(set([p['year'] for p in data[1:]])))
yTotal = Counter(token['year'] for token in data)
print('Year', 'M', 'F', 'U', 'T')
for y in years:
filtered = list(filter(lambda d: d['year'] == y, data))
count = Counter(token['gender'] for token in filtered)
print(y, count['male'], count['female'], count['unisex'], yTotal[y])
"""
Explanation: Data Analysis
Now we can do some quick stats, for example to see how many products per gender
End of explanation
"""
data
# display D&G Perfume Bottles
from IPython.display import Image, HTML, display
from glob import glob
def make_html(image):
return '<img src="{}" style="display:inline;margin:1px"/>'.format(image)
item = ''.join( [make_html(x['img']) for x in data] )
display(HTML(item))
"""
Explanation: Values, where:
M = Male
F = Female
U = Unisex
T = Total
Appendix
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/d2352ab4b72ce7d1dc05c76bda6ef71d/55_setting_eeg_reference.ipynb | bsd-3-clause | import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
raw.pick(['EEG 0{:02}'.format(n) for n in range(41, 60)])
"""
Explanation: Setting the EEG reference
This tutorial describes how to set or change the EEG reference in MNE-Python.
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping it to save memory. Since
this tutorial deals specifically with EEG, we'll also restrict the dataset to
just a few EEG channels so the plots are easier to see:
End of explanation
"""
# code lines below are commented out because the sample data doesn't have
# earlobe or mastoid channels, so this is just for demonstration purposes:
# use a single channel reference (left earlobe)
# raw.set_eeg_reference(ref_channels=['A1'])
# use average of mastoid channels as reference
# raw.set_eeg_reference(ref_channels=['M1', 'M2'])
# use a bipolar reference (contralateral)
# mne.set_bipolar_reference(raw, anode=['F3'], cathode=['F4'])
"""
Explanation: Background
EEG measures a voltage (difference in electric potential) between each
electrode and a reference electrode. This means that whatever signal is
present at the reference electrode is effectively subtracted from all the
measurement electrodes. Therefore, an ideal reference signal is one that
captures none of the brain-specific fluctuations in electric potential,
while capturing all of the environmental noise/interference that is being
picked up by the measurement electrodes.
In practice, this means that the reference electrode is often placed in a
location on the subject's body and close to their head (so that any
environmental interference affects the reference and measurement electrodes
similarly) but as far away from the neural sources as possible (so that the
reference signal doesn't pick up brain-based fluctuations). Typical reference
locations are the subject's earlobe, nose, mastoid process, or collarbone.
Each of these has advantages and disadvantages regarding how much brain
signal it picks up (e.g., the mastoids pick up a fair amount compared to the
others), and regarding the environmental noise it picks up (e.g., earlobe
electrodes may shift easily, and have signals more similar to electrodes on
the same side of the head).
Even in cases where no electrode is specifically designated as the reference,
EEG recording hardware will still treat one of the scalp electrodes as the
reference, and the recording software may or may not display it to you (it
might appear as a completely flat channel, or the software might subtract out
the average of all signals before displaying, making it look like there is
no reference).
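As a toy NumPy sketch (not MNE code) of the idea: if the reference electrode picks up only the shared environmental noise, subtracting it from each measurement channel recovers the brain-specific signal exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
brain = rng.standard_normal((3, 1000))   # brain signal at 3 measurement electrodes
noise = rng.standard_normal(1000)        # environmental noise shared by all electrodes
recorded = brain + noise                 # every electrode also picks up the noise
rereferenced = recorded - noise          # subtract an ideal (noise-only) reference
assert np.allclose(rereferenced, brain)  # the brain signal is recovered
```

In real recordings the reference is never purely noise, which is why the choice of reference location matters so much.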
Setting or changing the reference channel
If you want to recompute your data with a different reference than was used
when the raw data were recorded and/or saved, MNE-Python provides the
:meth:~mne.io.Raw.set_eeg_reference method on :class:~mne.io.Raw objects
as well as the :func:mne.add_reference_channels function. To use an
existing channel as the new reference, use the
:meth:~mne.io.Raw.set_eeg_reference method; you can also designate multiple
existing electrodes as reference channels, as is sometimes done with mastoid
references:
End of explanation
"""
raw.plot()
"""
Explanation: If a scalp electrode was used as reference but was not saved alongside the
raw data (reference channels often aren't), you may wish to add it back to
the dataset before re-referencing. For example, if your EEG system recorded
with channel Fp1 as the reference but did not include Fp1 in the data
file, using :meth:~mne.io.Raw.set_eeg_reference to set (say) Cz as the
new reference will then subtract out the signal at Cz without restoring
the signal at Fp1. In this situation, you can add back Fp1 as a flat
channel prior to re-referencing using :func:~mne.add_reference_channels.
(Since our example data doesn't use the 10-20 electrode naming system_, the
example below adds EEG 999 as the missing reference, then sets the
reference to EEG 050.) Here's how the data looks in its original state:
End of explanation
"""
# add new reference channel (all zero)
raw_new_ref = mne.add_reference_channels(raw, ref_channels=['EEG 999'])
raw_new_ref.plot()
"""
Explanation: By default, :func:~mne.add_reference_channels returns a copy, so we can go
back to our original raw object later. If you wanted to alter the
existing :class:~mne.io.Raw object in-place you could specify
copy=False.
End of explanation
"""
# set reference to `EEG 050`
raw_new_ref.set_eeg_reference(ref_channels=['EEG 050'])
raw_new_ref.plot()
"""
Explanation: .. KEEP THESE BLOCKS SEPARATE SO FIGURES ARE BIG ENOUGH TO READ
End of explanation
"""
# use the average of all channels as reference
raw_avg_ref = raw.copy().set_eeg_reference(ref_channels='average')
raw_avg_ref.plot()
"""
Explanation: Notice that the new reference (EEG 050) is now flat, while the original
reference channel that we added back to the data (EEG 999) has a non-zero
signal. Notice also that EEG 053 (which is marked as "bad" in
raw.info['bads']) is not affected by the re-referencing.
Setting average reference
To set a "virtual reference" that is the average of all channels, you can use
:meth:~mne.io.Raw.set_eeg_reference with ref_channels='average'. Just
as above, this will not affect any channels marked as "bad", nor will it
include bad channels when computing the average. However, it does modify the
:class:~mne.io.Raw object in-place, so we'll make a copy first so we can
still go back to the unmodified :class:~mne.io.Raw object later:
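Conceptually (a toy NumPy sketch that ignores bad-channel handling), the average reference just subtracts the instantaneous mean across channels, so the re-referenced data sums to zero across channels at every time point:

```python
import numpy as np

data = np.array([[1.0, 2.0],
                 [3.0, 4.0],
                 [5.0, 9.0]])                        # channels x time points
avg_referenced = data - data.mean(axis=0, keepdims=True)
assert np.allclose(avg_referenced.sum(axis=0), 0.0)  # zero-mean across channels
```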
End of explanation
"""
raw.set_eeg_reference('average', projection=True)
print(raw.info['projs'])
"""
Explanation: Creating the average reference as a projector
If using an average reference, it is possible to create the reference as a
:term:projector rather than subtracting the reference from the data
immediately by specifying projection=True:
End of explanation
"""
for title, proj in zip(['Original', 'Average'], [False, True]):
fig = raw.plot(proj=proj, n_channels=len(raw))
# make room for title
fig.subplots_adjust(top=0.9)
fig.suptitle('{} reference'.format(title), size='xx-large', weight='bold')
"""
Explanation: Creating the average reference as a projector has a few advantages:
It is possible to turn projectors on or off when plotting, so it is easy
to visualize the effect that the average reference has on the data.
If additional channels are marked as "bad" or if a subset of channels are
later selected, the projector will be re-computed to take these changes
into account (thus guaranteeing that the signal is zero-mean).
If there are other unapplied projectors affecting the EEG channels (such
as SSP projectors for removing heartbeat or blink artifacts), EEG
re-referencing cannot be performed until those projectors are either
applied or removed; adding the EEG reference as a projector is not subject
to that constraint. (The reason this wasn't a problem when we applied the
non-projector average reference to raw_avg_ref above is that the
empty-room projectors included in the sample data :file:.fif file were
only computed for the magnetometers.)
End of explanation
"""
raw.del_proj() # remove our average reference projector first
sphere = mne.make_sphere_model('auto', 'auto', raw.info)
src = mne.setup_volume_source_space(sphere=sphere, exclude=30., pos=15.)
forward = mne.make_forward_solution(raw.info, trans=None, src=src, bem=sphere)
raw_rest = raw.copy().set_eeg_reference('REST', forward=forward)
for title, _raw in zip(['Original', 'REST (∞)'], [raw, raw_rest]):
fig = _raw.plot(n_channels=len(raw), scalings=dict(eeg=5e-5))
# make room for title
fig.subplots_adjust(top=0.9)
fig.suptitle('{} reference'.format(title), size='xx-large', weight='bold')
"""
Explanation: Using an infinite reference (REST)
To use the "point at infinity" reference technique described in
:footcite:Yao2001 requires a forward model, which we can create in a few
steps. Here we use a fairly large spacing of vertices (pos = 15 mm) to
reduce computation time; a 5 mm spacing is more typical for real data
analysis:
End of explanation
"""
raw_bip_ref = mne.set_bipolar_reference(raw, anode=['EEG 054'],
cathode=['EEG 055'])
raw_bip_ref.plot()
"""
Explanation: Using a bipolar reference
To create a bipolar reference, you can use :func:~mne.set_bipolar_reference
along with the respective channel names for anode and cathode. This
creates a new virtual channel that takes the difference between the two
specified channels (anode and cathode) and drops the original channels by
default. The new virtual channel will be annotated with the channel info of
the anode with location set to (0, 0, 0) and coil type set to
EEG_BIPOLAR by default. Here we use a contralateral/transverse bipolar
reference between channels EEG 054 and EEG 055 as described in
:footcite:YaoEtAl2019 which creates a new virtual channel
named EEG 054-EEG 055.
End of explanation
"""
|
kevinjliang/Duke-Tsinghua-MLSS-2017 | 03A_Variational_Autoencoder.ipynb | apache-2.0 | import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
slim = tf.contrib.slim
# Import data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
"""
Explanation: Variational Autoencoder in TensorFlow
Variational Autoencoders (VAE) are a popular model that allows for unsupervised (and semi-supervised) learning. In this notebook, we'll implement a simple VAE on the MNIST dataset.
One of the primary goals of the VAE (and auto-encoders in general) is to reconstruct the original input. Why would we want to do that? At first glance, such a model seems silly: a simple identity function achieves the same thing with perfect results. However, with an autoencoder, we can learn a compressed representation in a smaller latent space, allowing us to learn features and structure of the data. Autoencoders are composed of two arms, the encoder and decoder, which convert values from the data space to the latent space and vice versa, respectively.
Importantly, since we're simply reconstructing the original input, we do not necessarily need labels to do our learning, as we have in previous examples. This is significant, as labels are often far more expensive to acquire than raw data, often prohibitively so. VAEs therefore allow us to leverage abundant unlabeled data. That said, VAEs are also able to take advantage of labels when available as well, either in a completely supervised or semi-supervised setting. Altogether, autoencoders can achieve impressive results on tasks like denoising, segmentation, and even predicting future images.
Imports and Data
First, some package imports and loading of the data. This is similar to what we've done before, with the main difference being that we're going to use TensorFlow Slim, as a follow-up to notebook 02A.
End of explanation
"""
def encoder(x):
"""Network q(z|x)"""
with slim.arg_scope([slim.fully_connected],
activation_fn=tf.nn.relu,
weights_initializer=tf.truncated_normal_initializer(0.0, 0.1)):
mu_logvar = slim.fully_connected(x, 128, scope='fc1')
mu_logvar = slim.fully_connected(mu_logvar, 128, activation_fn=None, scope='fc2')
return mu_logvar
"""
Explanation: Encoder
The encoder deterministically transforms the data $x$ from the data space to the latent space of $z$. Since we're dealing with a variational autoencoder, we attempt to model the distribution of the latent space given the input, represented by $q(z|x)$. This isn't immediately obvious in the code implementation, but we model $q(z|x)$ as a Gaussian (with a standard Gaussian prior on $z$), and our encoder returns the mean and variance (actually log-variance) of this distribution.
MNIST is a very simple dataset, so let's also keep the model simple: an MLP with 2 fully connected layers. We name the output mu_logvar as we will be interpreting the first half of the final 128-dimensional vector as the mean $\mu$ and the second half as the log-variance log($\sigma^2$).
End of explanation
"""
def decoder(mu_logvar):
"""Network p(x|z)"""
# Interpret z as concatenation of mean and log variance
mu, logvar = tf.split(mu_logvar, num_or_size_splits=2, axis=1)
# Standard deviation must be positive
stddev = tf.sqrt(tf.exp(logvar))
# Draw a z from the distribution
epsilon = tf.random_normal(tf.shape(stddev))
z = mu + tf.multiply(stddev, epsilon)
# Decoding arm
with slim.arg_scope([slim.fully_connected],
activation_fn=tf.nn.relu,
weights_initializer=tf.truncated_normal_initializer(0.0, 0.1)):
x_logits = slim.fully_connected(z, 128, scope='fc1')
x_logits = slim.fully_connected(x_logits, 784, activation_fn=None, scope='fc2')
# x_hat to be generated from a Bernoulli distribution
x_dist = tf.contrib.distributions.Bernoulli(logits=x_logits, dtype=tf.float32)
return x_logits, x_dist
"""
Explanation: Note that we use a couple features of TF-Slim here:
We use slim.fully_connected() to specify which layers we want to use, without having to worry about defining weight or bias variables beforehand.
We use slim.arg_scope() to specify default arguments so we can leave them out of the definitions of each of the fully connected layers. We can still override the activation_fn for the last layer though.
For this simple model, TF-Slim doesn't actually benefit us all that much, but for the sake of demonstration, we'll stick with it.
Decoder
The decoder is the generative arm of the autoencoder. Just like our encoder learned parameters of a distribution $q(z|x)$, our decoder will learn parameters of a distribution $p(x|z)$. Because $x$ is binary data (black and white pixels), we will use a Bernoulli distribution. Our generative neural network will learn the mean of this Bernoulli distribution for each pixel we want to generate. Another viewpoint: if our neural network outputs $\hat{x}_j$ for pixel $j$, it means we believe that the pixel will be white with that probability.
Again, since MNIST is simple, we'll use a 2 layer MLP for the decoder. Importantly, since we are focusing on reconstruction, we make sure that the final output of the decoder $\hat{x}$ is the same dimensions as our input $x$.
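The sampling step inside the decoder above (z = mu + stddev * epsilon) is the reparameterization trick: the randomness comes from epsilon, so z stays differentiable with respect to mu and stddev. A plain-NumPy sketch of the same draw:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.zeros(4)                       # mean of q(z|x)
logvar = np.zeros(4)                   # log-variance of q(z|x)
eps = rng.standard_normal(4)           # noise drawn independently of the parameters
z = mu + np.sqrt(np.exp(logvar)) * eps
assert np.allclose(z, eps)             # with mu=0 and sigma=1, z is just the noise
```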
End of explanation
"""
def optimizer(x_logits, x, mu_logvar):
"""Define loss functions (reconstruction, KL divergence) and optimizer"""
with tf.variable_scope('optimizer') as scope:
# Reconstruction loss
reconstruction = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=x_logits), reduction_indices=[1])
# KL divergence
mu, logvar = tf.split(mu_logvar, num_or_size_splits=2, axis=1)
kl_d = -0.5 * tf.reduce_sum(1.0 + logvar - tf.square(mu) - tf.exp(logvar), reduction_indices=[1])
# Total loss
loss = tf.reduce_mean(reconstruction + kl_d)
# ADAM optimizer
train_step = tf.train.AdamOptimizer().minimize(loss)
return train_step
"""
Explanation: Loss
Prof. Jun Zhu talked in class about the theoretical motivation for the loss of the VAE model. Like all variational inference techniques, it tries to match the variational posterior distribution (here a neural network) with the true posterior. However, at the end of the derivation, we can think of our model as trading off two goals:
Reconstruction loss: Our generator produces parameters to a Bernoulli distribution that is supposed to represent $p(x | z)$; because we assume that $z$ is the latent representation of an actual data point $x$, we can measure how well we achieve this goal by measuring the likelihood of $x$ according to that Bernoulli distribution. Another way of thinking of this is that we can measure how similar our reconstructed image is to our original image. The measure of similarity we use is cross-entropy: we think of our model as classifying each pixel as black or white, and we measure how good the classifier is using the classic sigmoid cross-entropy loss.
KL Divergence: Because this model is variational, we also include a KL penalty to impose a Gaussian prior on the latent space. The exact derivation of this term can be found in the original Auto-Encoding Variational Bayes paper. Is a standard Gaussian prior a good assumption? What are the potential weaknesses of this approach?
We use the ADAM algorithm that we've used before for optimization.
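As a quick numerical check, here is the per-dimension closed-form KL term that kl_d sums over latent dimensions above, written as a plain-NumPy sketch:

```python
import numpy as np

# KL( N(mu, sigma^2) || N(0, 1) ) for a single latent dimension,
# matching the kl_d expression in the optimizer above.
def kl_term(mu, logvar):
    return -0.5 * (1.0 + logvar - mu**2 - np.exp(logvar))

assert np.isclose(kl_term(0.0, 0.0), 0.0)  # q equals the prior -> zero divergence
assert kl_term(1.0, 0.0) > 0               # any mismatch gives a positive penalty
```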
End of explanation
"""
def visualize_row(image, reconstruction, img_width=28, cmap='gray'):
"""
Takes in a tensor of images of given width, and displays them in a column
in a plot, using `cmap` to map from numbers to colors.
"""
fig, ax = plt.subplots(1, 2)
image = np.reshape(image, [-1, img_width])
reconstruction = np.reshape(reconstruction, [-1, img_width])
plt.figure()
ax[0].imshow(np.clip(image, 0, 1), cmap=cmap)
ax[1].imshow(np.clip(reconstruction, 0, 1), cmap=cmap)
plt.show()
"""
Explanation: Visualization
It'll be nice to visualize the reconstructions that our model generates to see what it learns. This helper function plots the original inputs in one column and the reconstructions next to them in another column. I also may or may not have stolen it from Alex Lew, who included it in his GAN notebook (03B)...
End of explanation
"""
# Reset the graph
tf.reset_default_graph()
# Define input placeholder
x = tf.placeholder(tf.float32,[None, 784], name='x')
# Define VAE graph
with tf.variable_scope('encoder'):
mu_logvar = encoder(x)
with tf.variable_scope('decoder'):
x_logits, x_dist = decoder(mu_logvar)
x_hat = x_dist.sample()
# Optimization
with tf.variable_scope('unlabeled') as scope:
train_step_unlabeled = optimizer(x_logits, x, mu_logvar)
"""
Explanation: Define the graph and train
All of the functions we've written thus far are just that: functions. We still need to call them to assemble our TensorFlow computation graph. At this point, this should be becoming familiar.
One of the small differences is the inclusion of tf.reset_default_graph(), added to remedy a small, unfortunate side effect of using Jupyter and TensorFlow in conjunction, but you don't have to worry about it too much to understand the model. A more detailed explanation if you're interested below [1].
End of explanation
"""
with tf.Session() as sess:
# Initialize all variables
sess.run(tf.global_variables_initializer())
# Train VAE model
for i in range(20000):
# Get a training minibatch
batch = mnist.train.next_batch(100)
# Binarize the data
x_binarized = (batch[0] > 0.5).astype(np.float32)
# Train on minibatch
sess.run(train_step_unlabeled, feed_dict={x: x_binarized}) # No labels
# Visualize reconstructions every 1000 iterations
if i % 1000 == 0:
batch = mnist.validation.next_batch(5)
x_binarized = (batch[0] > 0.5).astype(np.float32)
reconstructions = sess.run(x_hat, feed_dict={x: x_binarized})
print("Iteration {0}:".format(i))
visualize_row(batch[0], reconstructions)
"""
Explanation: <sub>[1] The primary purpose of TensorFlow is to construct a computation graph connecting Tensors and operations. Each of these nodes must be assigned a unique name; if the user does not specify one, a unique name is automatically generated, like 'Placeholder_2', with the number at the end incrementing each time you create a new node of that type. Attempting to create a node with a name already found in the graph raises an error.</sub>
<sub>So how can this be problematic? In the Coding Environments notebook (00B), it was mentioned that code from previously run cells persists. As such, if we're programming interactively and want to rebuild our graph after some updates, the new updated nodes we want to add collide with the names from our previous run, throwing an error. Why didn't we have to worry about this before? In the past, we haven't been naming our variables, so TensorFlow has been giving the nodes new unique names every time we update the graph and adding them to the collection of nodes from previous runs; the old nodes are never called, so they just sit there. However, TF-Slim does name the variables it generates, thus causing the problem. We can solve this by creating a new graph object before we define our computation graph, so every time we want to make modifications to the graph, we start anew.</sub>
<sub>If you're confused by that explanation, I wouldn't worry about it. It's not necessary for the program to run. It's there so we can re-run the cell defining the computation graph without restarting the entire kernel to clear memory of previous variables. In a traditionally written Python program (i.e. not IPython), you wouldn't need to do this.</sub>
For training, we'll stay simple and train for 20000 iterations, visualizing our results with 5 digits from the validation set after every 1000 minibatches. Notice that this model is completely unsupervised: we never include the digit labels at any point in the process. Within a few thousand iterations, the model should start producing reasonable looking results:
End of explanation
"""
|
osemer01/insights-from-baby-names-since-1910 | baby_names.ipynb | cc0-1.0 | import os
from mpl_toolkits.basemap import Basemap
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
data_folder = os.path.join('data')
file_names = []
for f in os.listdir(data_folder):
file_names.append(os.path.join(data_folder,f))
del file_names[file_names.index(os.path.join(data_folder,'StateReadMe.pdf'))]
"""
Explanation: Insights from Baby Names Data
Author Information:
Oguz Semerci<br>
oguz.semerci@gmail.com<br>
Summary of the investigation
We report on descriptive statistics as well as a few insights mined from a data set of state-by-state baby name counts from 1910 to 2014. We present the following descriptive statistics:
The most popular male and female names of all time
Favorite gender-neutral names in 1945 and 2013
Names with the biggest decrease and increase in popularity since 1980
We extract the following insights from the dataset:
Increase in popularity of gender-ambiguous names
Correlation of the increased tendency to use gender-neutral names with landmark events leading to the legalization of same-sex marriage
Dimensionality reduction (randomized PCA) of the data, comments on the first two principal components, and K-means clustering of the states.
I- Data Preparation
Here we quote the official description of the data set:
For each of the 50 states and the District of Columbia we created a file called SC.txt, where SC is the state's postal code.
Each record in a file has the format: 2-digit state code, sex (M = male or F = female), 4-digit year of birth (starting with 1910), the 2-15 character name, and the number of occurrences of the name. Fields are delimited with a comma. Each file is sorted first on sex, then year of birth, and then on number of occurrences in descending order. When there is a tie on the number of occurrences names are listed in alphabetical order. This sorting makes it easy to determine a name's rank. The first record for each sex & year of birth has rank 1, the second record has rank 2, and so forth.
To safeguard privacy, we restrict our list of names to those with at least 5 occurrences. If a name has less than 5 occurrences for a year of birth in any state, the sum of the state counts for that year will be less than the national count.
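Given that record format, each line can be unpacked directly; a minimal sketch using one of the RI records discussed below:

```python
# Unpacking one record of the SC.txt format described above.
line = 'RI,F,1992,Kaitlyn,37'
state, gender, year, name, count = line.strip().split(',')
record = (state, gender, int(year), name, int(count))
assert record == ('RI', 'F', 1992, 'Kaitlyn', 37)
```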
One can say the data sets look clean except for some ambiguities in baby names. For example in RI data we have the following for the year 1992:
RI,F,1992,Kaitlyn,37
RI,F,1992,Katelyn,36
One might argue that both versions of the name Katelyn are phonetically the same and should be counted together. And if they were counted together, that would change the rank of the name Katelyn by about 10 places. Normalizing the data for such instances is out of the scope of this analysis. However, we'll keep it in mind when analyzing the results.
Below, we sequentially process each file and extract relevant data without loading all the data into memory at once. Let's first get a list of all the file names:
End of explanation
"""
# We store yearly count data for each name using the following dictionary format:
# dict = {'name': {'count': []}} where count[0] is the count for 1910 and count[-1] is the count for 2014
N_years = 2014-1910+1
names_dict_M = {}
names_dict_F = {}
for fname in file_names:
with open(fname,'r') as f:
for line in f:
state, gender, year, name, count = line.split(',')
year = int(year)
count = int(count)
if gender == 'M':
if name in names_dict_M:
# name already in the dict, update the count for appropriate year
names_dict_M[name]['count'][year-1910] += count
else:
# create an entry for the name
names_dict_M[name] = {'count': [0]*N_years}
names_dict_M[name]['count'][year-1910] += count
elif gender == 'F':
if name in names_dict_F:
# name already in the dict, update the count for appropriate year
names_dict_F[name]['count'][year-1910] += count
else:
# create an entry for the name
names_dict_F[name] = {'count': [0]*N_years}
names_dict_F[name]['count'][year-1910] += count
"""
Explanation: II- Descriptive Analysis
II-1 Most popular name of all time
End of explanation
"""
#lets extract tuples as (name, total_count) and sort them
male_overall = [(n, sum(names_dict_M[n]['count'])) for n in names_dict_M.keys()]
male_overall.sort(key = lambda x: x[1], reverse = True)
female_overall = [(n, sum(names_dict_F[n]['count'])) for n in names_dict_F.keys()]
female_overall.sort(key = lambda x: x[1], reverse = True)
"""
Explanation: Now, let's find the most popular male and female names of all time:
End of explanation
"""
print('Male:')
print('{}: {}'.format(male_overall[0][0], male_overall[0][1]))
print('\nFemale:')
print('{}: {}'.format(female_overall[0][0], female_overall[0][1]))
width = 0.6
fig = plt.figure(figsize = (12,3))
ax = plt.subplot(121)
ax.bar(np.arange(10), [c for n,c in male_overall[:10]], width = width)
ax.set_xticks(np.arange(10) + width/2)
ax.set_xticklabels([n for n,c in male_overall[:10]], rotation = 90)
ax.set_title('10 Most Popular Male Names since 1910')
ax.set_ylabel('name count')
ax = plt.subplot(122)
ax.bar(np.arange(10), [c for n,c in female_overall[:10]], width = width)
ax.set_xticks(np.arange(10) + width/2)
ax.set_xticklabels([n for n,c in female_overall[:10]], rotation = 90)
ax.set_title('10 Most Popular Female Names since 1910')
ax.set_ylabel('name count')
plt.tight_layout()
plt.show()
"""
Explanation: And the winners for the most popular male and female baby names since 1910 are:
End of explanation
"""
#lets extract tuples as (name, count[2013]) and sort them with count
male_2013 = [(n, names_dict_M[n]['count'][2013-1910])
for n in names_dict_M.keys()
if names_dict_M[n]['count'][2013-1910] > 0]
female_2013 = [(n, names_dict_F[n]['count'][2013-1910])
for n in names_dict_F.keys()
if names_dict_F[n]['count'][2013-1910] > 0]
male_1945 = [(n, names_dict_M[n]['count'][1945-1910])
for n in names_dict_M.keys()
if names_dict_M[n]['count'][1945-1910] > 0]
female_1945 = [(n, names_dict_F[n]['count'][1945-1910])
for n in names_dict_F.keys()
if names_dict_F[n]['count'][1945-1910] > 0]
# first find gender-ambiguous names in 2013:
gender_ambigious_names = set([n for n, _ in male_2013]) & set([n for n, _ in female_2013])
gender_ambigious_names = [(
n,min(names_dict_M[n]['count'][2013-1910],
names_dict_F[n]['count'][2013-1910])
)
for n in gender_ambigious_names]
#sort the tuples such that most popular names are at top
gender_ambigious_names.sort(key = lambda x: x[1], reverse = True)
print('In 2013 there were {} gender-ambiguous names, and the most popular one was {}'
.format(len(gender_ambigious_names), gender_ambigious_names[0][0]))
width = 0.6
fig = plt.figure(figsize = (12,3))
ax = plt.subplot(121)
ax.bar(np.arange(10), [c for n,c in gender_ambigious_names[:10]], width = width)
ax.set_xticks(np.arange(10) + width/2)
ax.set_xticklabels([n for n,c in gender_ambigious_names[:10]], rotation = 90)
ax.set_title('10 Most Popular Gender-Ambiguous Names in 2013')
ax.set_ylabel('name count')
gender_ambigious_names = set([n for n, _ in male_1945]) & set([n for n, _ in female_1945])
gender_ambigious_names = [(
n,min(names_dict_M[n]['count'][1945-1910],
names_dict_F[n]['count'][1945-1910])
)
for n in gender_ambigious_names]
#sort the tuples such that most popular names are at top
gender_ambigious_names.sort(key = lambda x: x[1], reverse = True)
print('In 1945 there were {} gender-ambiguous names, and the most popular one was {}'
.format(len(gender_ambigious_names), gender_ambigious_names[0][0]))
ax2 = plt.subplot(122)
ax2.bar(np.arange(10), [c for n,c in gender_ambigious_names[:10]], width = width)
ax2.set_xticks(np.arange(10) + width/2)
ax2.set_xticklabels([n for n,c in gender_ambigious_names[:10]], rotation = 90)
ax2.set_title('10 Most Popular Gender-Ambiguous Names in 1945')
ax2.set_ylabel('name count')
plt.tight_layout()
plt.show()
"""
Explanation: The winner in the male category is James. 493865 baby boys were named 'James' from 1910 to 2014.
On the female side 'Mary' is the winner. 3730856 baby girls were named 'Mary' from 1910 to 2014.
II-2 Most Gender-Ambiguous Name in 2013 and 1945
We quantify the popularity of a gender-ambiguous name 'name' in year x as the minimum of {number of male babies born in year x with name 'name', number of female babies born in year x with name 'name'}
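In code, that metric is simply the minimum of the two yearly counts (a sketch with made-up counts):

```python
# Popularity metric for a gender-ambiguous name: the minimum of the male
# and female counts for the same name in the same year.
def ambiguity_score(male_count, female_count):
    return min(male_count, female_count)

assert ambiguity_score(120, 45) == 45
assert ambiguity_score(0, 45) == 0  # a name used by only one gender scores 0
```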
End of explanation
"""
male_diff = [ (n, names_dict_M[n]['count'][-1] - names_dict_M[n]['count'][1980-1910]) for n in names_dict_M.keys() ]
female_diff = [ (n, names_dict_F[n]['count'][-1] - names_dict_F[n]['count'][1980-1910]) for n in names_dict_F.keys() ]
male_diff.sort(key = lambda x: x[1], reverse = True)
female_diff.sort(key = lambda x: x[1], reverse = True)
print('Male name with the largest increase in popularity is {}'.format(male_diff[0][0]))
print('Count for {} increased from {} to {} from 1980 to 2014'.format(male_diff[0][0],
names_dict_M[male_diff[0][0]]['count'][1980-1910],
names_dict_M[male_diff[0][0]]['count'][-1]))
print('\nFemale name with the largest increase in popularity is {}'.format(female_diff[0][0]))
print('Count for {} increased from {} to {} from 1980 to 2014'.format(female_diff[0][0],
names_dict_F[female_diff[0][0]]['count'][1980-1910],
names_dict_F[female_diff[0][0]]['count'][-1]))
print('\nMale name with the largest decrease in popularity is {}'.format(male_diff[-1][0]))
print('Count for {} decreased from {} to {} from 1980 to 2014'.format(male_diff[-1][0],
names_dict_M[male_diff[-1][0]]['count'][1980-1910],
names_dict_M[male_diff[-1][0]]['count'][-1]))
print('\nFemale name with the largest decrease in popularity is {}'.format(female_diff[-1][0]))
print('Count for {} decreased from {} to {} from 1980 to 2014'.format(female_diff[-1][0],
names_dict_F[female_diff[-1][0]]['count'][1980-1910],
names_dict_F[female_diff[-1][0]]['count'][-1]))
"""
Explanation: It is interesting to notice that the number of gender-ambiguous names has more than doubled since 1945. I believe this is a general trend which can be observed more predominantly in liberal and urban cities in the US.
II-3,4 Names with largest decrease and increase in number since 1980
End of explanation
"""
print('Male names with the largest increase in popularity, along with the change in count:')
for n, c in male_diff[:5]:
print('{}: {}'.format(n,c))
print('\nFemale names with the largest increase in popularity, along with the change in count:')
for n, c in female_diff[:5]:
print('{}: {}'.format(n,c))
print('\nMale names with the largest decrease in popularity, along with the change in count:')
for n, c in male_diff[-1:-5:-1]:
print('{}: {}'.format(n,c))
print('\nFemale names with the largest decrease in popularity, along with the change in count:')
for n, c in female_diff[-1:-5:-1]:
print('{}: {}'.format(n,c))
"""
Explanation: II-5 Other Names with largest decrease and increase in number since 1980
Let's see which other names show large differentials between 1980 and 2014.
End of explanation
"""
count = [0]*(2014-1910+1)
for year in range(0,2014-1910+1):
male_names = [n for n in names_dict_M.keys() if names_dict_M[n]['count'][year] > 0]
female_names = [n for n in names_dict_F.keys() if names_dict_F[n]['count'][year] > 0]
count[year] = len(set(male_names) & set(female_names))
fit = np.polyfit(range(0,2014-1910+1),count,1)
fit_fn = np.poly1d(fit)
fig = plt.figure(figsize = (15,3))
plt.plot(range(0,2014-1910+1), count, label = 'data')
plt.plot(range(0,2014-1910+1), fit_fn(range(0,2014-1910+1)), '--k', label = 'linear fit')
plt.legend(loc = 'lower right')
plt.title('Trend in the number of gender-ambiguous names from 1910 to 2014')
plt.xticks([0,1960-1910,2014-1910], ['1910', '1960', '2014'])
plt.xlabel('years')
plt.xlim([0,2014-1910])
plt.grid()
plt.show()
print('There is a peak in year {}.'.format(1910 + count.index(max(count))))
#what are the most popular gender-ambiguous names in 2004:
male_2004 = [(n, names_dict_M[n]['count'][2004-1910])
             for n in names_dict_M.keys()
             if names_dict_M[n]['count'][2004-1910] > 0]
female_2004 = [(n, names_dict_F[n]['count'][2004-1910])
               for n in names_dict_F.keys()
               if names_dict_F[n]['count'][2004-1910] > 0]
gender_ambiguous_names = set([n for n, _ in male_2004]) & set([n for n, _ in female_2004])
gender_ambiguous_names = [(n, min(names_dict_M[n]['count'][2004-1910],
                                  names_dict_F[n]['count'][2004-1910]))
                          for n in gender_ambiguous_names]
#sort the tuples such that the most popular names are at the top
gender_ambiguous_names.sort(key = lambda x: x[1], reverse = True)
print('In 2004 there were {} gender-ambiguous names and here are the most popular ones:'
      .format(len(gender_ambiguous_names)))
for n, c in gender_ambiguous_names[:3]:
    print('{}: {}'.format(n, c))
"""
Explanation: III- Insights
III-1 Trend in the number of gender-ambiguous names
As mentioned in Section II-2, we expect the number of gender-ambiguous names to increase over the years. That trend is most probably related to changing societal perspectives on gender-equality issues. But let's not pretend to be sociologists here :). Below, we plot the trend as well as a linear fit to it.
End of explanation
"""
count[2004-1910] = 0
1910 + count.index(max(count))
"""
Explanation: A quick Google search reveals that 2003 and 2004 were landmark years in the process of legalization of same-sex marriage:
Goodridge v. Dept. of Public Health, 798 N.E.2d 941 (Mass. 2003), is a landmark state appellate court case dealing with same-sex marriage in Massachusetts. The November 18, 2003, decision was the first by a U.S. state's highest court to find that same-sex couples had the right to marry. Despite numerous attempts to delay the ruling, and to reverse it, the first marriage licenses were issued to same-sex couples on May 17, 2004, and the ruling has been in full effect since that date. (https://en.wikipedia.org/wiki/Goodridge_v._Department_of_Public_Health)
Maybe there is some correlation here! People were preferring gender-neutral names to celebrate such events. It'd be interesting to look into the other peak that happened before 2004.
End of explanation
"""
#find all the male and female names for 2014
male_names = [n for n in names_dict_M.keys() if names_dict_M[n]['count'][-1] > 0]
female_names = [n for n in names_dict_F.keys() if names_dict_F[n]['count'][-1] > 0]
#create a map from names to indexes
#we'll make sure to have two features associated with each gender-neutral name
name2index_male = {}
for i,n in enumerate(male_names):
name2index_male[n] = i
male_name_count = len(male_names)
name2index_female = {}
for i,n in enumerate(female_names):
name2index_female[n] = i + male_name_count
states = []
#data with counts for all the names in 2014 for each state in its rows:
X = []
for fname in file_names:
states.append(fname[-6:-4])
#temporary sample vector for current state
temp = [0]*(len(name2index_male)+len(name2index_female))
#read the file for the current state
with open(fname,'r') as f:
for line in f:
state, gender, year, name, count = line.split(',')
year = int(year)
if year == 2014:
count = float(count)
if gender == 'M':
feature_index = name2index_male[name]
else:
feature_index = name2index_female[name]
temp[feature_index] = count
X.append(temp)
X = np.array(X)
print('Data matrix X has shape: {}'.format(X.shape))
#issparse only reports the matrix type (X is a dense ndarray); given the many
#zero counts, converting X to a sparse csr_matrix could save memory
from scipy.sparse import csr_matrix, issparse
issparse(X)
"""
Explanation: Now, the other peak happened in 1989. It turns out the Berlin Wall came down in 1989. But 1989 was also the year Denmark became the first country to legally recognize same-sex partnerships.
III-2 Clustering of the US States using baby names
Now we try to see if the states cluster in terms of how their people name their babies. We'll first extract all the baby names (male and female) used in 2014 and generate feature vectors for each state using the counts for each name.
End of explanation
"""
#normalize the counts for each state by the total number of babies born there in 2014
for i in range(X.shape[0]):
X[i,:] = X[i,:] / np.sum(X[i,:])
from sklearn.decomposition import RandomizedPCA
from sklearn.preprocessing import StandardScaler
X = StandardScaler().fit_transform(X)
pca = RandomizedPCA(n_components = 2)
pca.fit(X)
X_pca = pca.transform(X)
fig = plt.figure(figsize = (6,6))
plt.scatter(X_pca[:,0],X_pca[:,1])
# plt.xlim([-1,2])
# plt.ylim([-2,3])
for i in range(len(states)):
plt.annotate(states[i], (X_pca[i,0], X_pca[i,1]))
plt.xlabel("first principal component")
plt.ylabel("second principal component")
plt.title("States projected to first two principle components")
plt.show()
"""
Explanation: Next, we'll perform dimensionality reduction using principal component analysis (PCA), retaining only two of the components. Scikit-learn's RandomizedPCA implementation is chosen for its efficiency.
We note that it is important to normalize the data, since baby name counts are correlated with the populations of the states. Our goal is to cluster the states by the distribution of different names.
End of explanation
"""
ind2keep = [i for i in range(len(states)) if states[i] not in ['NY', 'FL', 'CA', 'TX']]
X_pca = X_pca[ind2keep,:]
states = [states[i] for i in ind2keep]
X_pca = StandardScaler().fit_transform(X_pca)
fig = plt.figure(figsize = (13,6))
ax1 = plt.subplot(121)
ax1.scatter(X_pca[:,0],X_pca[:,1])
# plt.xlim([-1,2])
# plt.ylim([-2,3])
for i in range(len(states)):
ax1.annotate(states[i], (X_pca[i,0], X_pca[i,1]))
ax1.set_xlabel("first principal component")
ax1.set_ylabel("second principal component")
ax1.set_title('States')
ax2 = plt.subplot(122)
ax2.scatter(X_pca[:,0],X_pca[:,1])
ax2.set_xlim([-1.5,1.1])
ax2.set_ylim([-1.5,0.5])
for i in range(len(states)):
ax2.annotate(states[i], (X_pca[i,0], X_pca[i,1]))
ax2.set_xlabel("first principal component")
ax2.set_ylabel("second principal component")
ax2.set_title('States - Zoomed in to the lower left corner')
plt.show()
"""
Explanation: It is interesting to observe CA and TX being obvious outliers. We have squeezed many dimensions into only two, so it is not easy to comment on the meaning of the principal components. However, it is tempting to conclude that the first principal component is directly proportional to the Hispanic population, since both CA and TX have large values in that direction. And, at the risk of getting ahead of ourselves, we can say that the other direction could well be related to the Asian population percentage. It is not surprising to see CA having the largest coefficient in that direction (https://en.wikipedia.org/wiki/Demographics_of_Asian_Americans).
Now let's remove NY, FL, CA and TX from the data set, standardize the features and zoom into that big cluster:
End of explanation
"""
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters = 3, init='k-means++')
kmeans.fit(X_pca)
y_pred = kmeans.predict(X_pca)
fig = plt.figure(figsize = (15,15))
ax1 = plt.subplot(111)
ax1.scatter(X_pca[:,0],X_pca[:,1], c = y_pred, s= 100)
for i in range(len(states)):
ax1.annotate(states[i], (X_pca[i,0], X_pca[i,1]))
ax1.set_xlabel("first principal component")
ax1.set_ylabel("second principal component")
ax1.set_title('States Clustered by K-means')
plt.show()
"""
Explanation: Finally, we apply the K-means clustering algorithm to the data reduced to 2 dimensions.
End of explanation
"""
state_dict = {}
import re
with open('states.csv', 'r') as f:
for line in f:
name, abbrv = re.sub('["\n]', '', line).split(',')
state_dict[abbrv] = name
"""
Explanation: We'll conclude by listing the states under each cluster. For that aim we downloaded a csv file from http://www.fonz.net/blog/archives/2008/04/06/csv-of-states-and-state-abbreviations/ that contains state names and their abbreviations. Let's load that file and get a map of abbreviations to full state names.
End of explanation
"""
print('Blue cluster:')
print('--------------')
print(', '.join([state_dict[states[i]] for i in range(len(states)) if y_pred[i] == 0 ]))
print('\nGreen cluster:')
print('--------------')
print(', '.join([state_dict[states[i]] for i in range(len(states)) if y_pred[i] == 1 ]))
print('\nRed cluster:')
print('--------------')
print(', '.join([state_dict[states[i]] for i in range(len(states)) if y_pred[i] == 2 ]))
"""
Explanation: Finally, let's list the states under each cluster:
End of explanation
"""
!ipython nbconvert baby_names.ipynb
"""
Explanation: We'll avoid trying to read too much insight into these clusters; as we mentioned before, a lot of dimensions are pressed into two, and it is questionable whether these clusters are meaningful in an obvious sense.
Some ideas for further investigation:
With more time, it would have been possible to extract other interesting information from this data set. Here are a few examples that come to mind:
State by state population change.
Analysis of diversity and demographics of immigration.
More informed cluster analysis by classification of names into demographics.
End of explanation
"""
|
elizabetht/deep-learning | gan_mnist/Intro_to_GANs_Solution.ipynb | mit | %matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
"""
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
"""
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
End of explanation
"""
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
"""
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this, you can take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform best with a $tanh$ output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
End of explanation
"""
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
End of explanation
"""
# Size of input image to discriminator
input_size = 784
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Smoothing
smooth = 0.1
"""
Explanation: Hyperparameters
End of explanation
"""
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
"""
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
End of explanation
"""
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
                                            labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
"""
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropys, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
End of explanation
"""
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
"""
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build separate optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
End of explanation
"""
!mkdir checkpoints
batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
"""
Explanation: Training
End of explanation
"""
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
"""
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
"""
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
"""
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
"""
_ = view_samples(-1, samples)
"""
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
"""
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
"""
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
"""
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
"""
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise, like 1s and 9s.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/feateng/feateng.ipynb | apache-2.0 | !pip install --user apache-beam[gcp]==2.16.0
!pip install --user httplib2==0.12.0
"""
Explanation: <h1> Feature Engineering </h1>
In this notebook, you will learn how to incorporate feature engineering into your pipeline.
<ul>
<li> Working with feature columns </li>
<li> Adding feature crosses in TensorFlow </li>
<li> Reading data from BigQuery </li>
<li> Creating datasets using Dataflow </li>
<li> Using a wide-and-deep model </li>
</ul>
Note: You may ignore specific errors related to "papermill", "google-cloud-storage", and "datalab". You may also ignore warnings related to '/home/jupyter/.local/bin'. These components and issues do not impact your ability to complete the lab.
End of explanation
"""
import tensorflow as tf
import apache_beam as beam
import shutil
print(tf.__version__)
"""
Explanation: After doing a pip install, restart your kernel by selecting kernel from the menu and clicking Restart Kernel before proceeding further
End of explanation
"""
import os
PROJECT = 'cloud-training-demos' # CHANGE THIS
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME. Use a regional bucket in the region you selected.
REGION = 'us-central1' # Choose an available region for Cloud AI Platform
# for bash
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.15'
## ensure we're using python3 env
os.environ['CLOUDSDK_PYTHON'] = 'python3'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
## ensure we predict locally with our current Python environment
gcloud config set ml_engine/local_python `which python`
"""
Explanation: <h2> 1. Environment variables for project and bucket </h2>
<ol>
<li> Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: <b>Project ID:</b> cloud-training-demos </li>
<li> Cloud training often involves saving and restoring model files. Therefore, we should <b>create a single-region bucket</b>. If you don't have a bucket already, I suggest that you create one from the GCP console (because it will dynamically check whether the bucket name you want is available) </li>
</ol>
<b>Change the cell below</b> to reflect your Project ID and bucket name.
End of explanation
"""
def create_query(phase, EVERY_N):
if EVERY_N == None:
EVERY_N = 4 #use full dataset
#select and pre-process fields
base_query = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
DAYOFWEEK(pickup_datetime) AS dayofweek,
HOUR(pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
CONCAT(STRING(pickup_datetime), STRING(pickup_longitude), STRING(pickup_latitude), STRING(dropoff_latitude), STRING(dropoff_longitude)) AS key
FROM
[nyc-tlc:yellow.trips]
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
#add subsampling criteria by modding with hashkey
if phase == 'train':
query = "{} AND ABS(HASH(pickup_datetime)) % {} < 2".format(base_query,EVERY_N)
elif phase == 'valid':
query = "{} AND ABS(HASH(pickup_datetime)) % {} == 2".format(base_query,EVERY_N)
elif phase == 'test':
query = "{} AND ABS(HASH(pickup_datetime)) % {} == 3".format(base_query,EVERY_N)
return query
print(create_query('valid', 100)) #example query using 1% of data
"""
Explanation: <h2> 2. Specifying query to pull the data </h2>
Let's pull out a few extra columns from the timestamp.
End of explanation
"""
%%bash
if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/
fi
"""
Explanation: Try the query above in https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips if you want to see what it does (ADD LIMIT 10 to the query!)
<h2> 3. Preprocessing Dataflow job from BigQuery </h2>
This code reads from BigQuery and saves the data as-is on Google Cloud Storage. We can do additional preprocessing and cleanup inside Dataflow, but then we'll have to remember to repeat that preprocessing during inference. It is better to use tf.transform, which will do this book-keeping for you, or to do preprocessing within your TensorFlow model. We will look at this in future notebooks. For now, we are simply moving data from BigQuery to CSV using Dataflow.
While we could read from BQ directly from TensorFlow (See: https://www.tensorflow.org/api_docs/python/tf/contrib/cloud/BigQueryReader), it is quite convenient to export to CSV and do the training off CSV. Let's use Dataflow to do this at scale.
Because we are running this on the Cloud, you should go to the GCP Console (https://console.cloud.google.com/dataflow) to look at the status of the job. It will take several minutes for the preprocessing job to launch.
End of explanation
"""
import datetime
####
# Arguments:
# -rowdict: Dictionary. The beam bigquery reader returns a PCollection in
# which each row is represented as a python dictionary
# Returns:
# -rowstring: a comma separated string representation of the record with dayofweek
# converted from int to string (e.g. 3 --> Tue)
####
def to_csv(rowdict):
days = ['null', 'Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
CSV_COLUMNS = 'fare_amount,dayofweek,hourofday,pickuplon,pickuplat,dropofflon,dropofflat,passengers,key'.split(',')
rowdict['dayofweek'] = days[rowdict['dayofweek']]
rowstring = ','.join([str(rowdict[k]) for k in CSV_COLUMNS])
return rowstring
####
# Arguments:
# -EVERY_N: Integer. Sample one out of every N rows from the full dataset.
# Larger values will yield a smaller sample
# -RUNNER: 'DirectRunner' or 'DataflowRunner'. Specify whether to run the pipeline
# locally or on Google Cloud respectively.
# Side-effects:
# -Creates and executes dataflow pipeline.
# See https://beam.apache.org/documentation/programming-guide/#creating-a-pipeline
####
def preprocess(EVERY_N, RUNNER):
job_name = 'preprocess-taxifeatures' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/taxifare/ch4/taxi_preproc/'.format(BUCKET)
#dictionary of pipeline options
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
    'job_name': job_name,
'project': PROJECT,
'runner': RUNNER,
'num_workers' : 4,
'max_num_workers' : 5
}
#instantiate PipelineOptions object using options dictionary
opts = beam.pipeline.PipelineOptions(flags=[], **options)
#instantantiate Pipeline object using PipelineOptions
with beam.Pipeline(options=opts) as p:
for phase in ['train', 'valid']:
query = create_query(phase, EVERY_N)
outfile = os.path.join(OUTPUT_DIR, '{}.csv'.format(phase))
(
p | 'read_{}'.format(phase) >> beam.io.Read(beam.io.BigQuerySource(query=query))
| 'tocsv_{}'.format(phase) >> beam.Map(to_csv)
| 'write_{}'.format(phase) >> beam.io.Write(beam.io.WriteToText(outfile))
)
print("Done")
"""
Explanation: First, let's define a function for preprocessing the data
End of explanation
"""
preprocess(50*10000, 'DirectRunner')
%%bash
gsutil ls gs://$BUCKET/taxifare/ch4/taxi_preproc/
"""
Explanation: Now, let's run the pipeline locally. This takes up to <b>5 minutes</b>. You will see a message "Done" when it is done.
End of explanation
"""
%%bash
if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/
fi
"""
Explanation: 4. Run Beam pipeline on Cloud Dataflow
Run the pipeline on the cloud with a larger sample size. First, remove any output files from previous runs.
End of explanation
"""
preprocess(50*100, 'DataflowRunner')
"""
Explanation: The following step will take <b>15-20 minutes</b>. Monitor job progress on the Cloud Console, in the Dataflow section.
End of explanation
"""
%%bash
gsutil ls -l gs://$BUCKET/taxifare/ch4/taxi_preproc/
%%bash
#print first 10 lines of first shard of train.csv
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*" | head
"""
Explanation: Once the job completes, observe the files created in Google Cloud Storage
End of explanation
"""
%%bash
if [ -d sample ]; then
rm -rf sample
fi
mkdir sample
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*" > sample/train.csv
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/valid.csv-00000-of-*" > sample/valid.csv
"""
Explanation: 5. Develop model with new inputs
Download the first shard of the preprocessed data to enable local development.
End of explanation
"""
%%bash
grep -A 20 "INPUT_COLUMNS =" taxifare/trainer/model.py
%%bash
grep -A 50 "build_estimator" taxifare/trainer/model.py
%%bash
grep -A 15 "add_engineered(" taxifare/trainer/model.py
"""
Explanation: We have two new inputs in the INPUT_COLUMNS, three engineered features, and the estimator involves bucketization and feature crosses.
End of explanation
"""
%%bash
rm -rf taxifare.tar.gz taxi_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python -m trainer.task \
--train_data_paths=${PWD}/sample/train.csv \
--eval_data_paths=${PWD}/sample/valid.csv \
--output_dir=${PWD}/taxi_trained \
--train_steps=10 \
--job-dir=/tmp
%%bash
ls taxi_trained/export/exporter/
"""
Explanation: Try out the new model on the local sample (this takes <b>5 minutes</b>) to make sure it works fine.
End of explanation
"""
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter | tail -1)
saved_model_cli show --dir ${PWD}/taxi_trained/export/exporter/${model_dir} --all
%%writefile /tmp/test.json
{"dayofweek": "Sun", "hourofday": 17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403, "passengers": 2}
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter)
gcloud ai-platform local predict \
--model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \
--json-instances=/tmp/test.json
"""
Explanation: You can use saved_model_cli to look at the exported signature. Note that the model doesn't need any of the engineered features as inputs. It will compute latdiff, londiff, euclidean from the provided inputs, thanks to the add_engineered call in the serving_input_fn.
End of explanation
"""
%%bash
OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version=$TFVERSION \
-- \
--train_data_paths="gs://$BUCKET/taxifare/ch4/taxi_preproc/train*" \
--eval_data_paths="gs://${BUCKET}/taxifare/ch4/taxi_preproc/valid*" \
--train_steps=5000 \
--output_dir=$OUTDIR
"""
Explanation: 6. Train on cloud
This will take <b> 10-15 minutes </b> even though the prompt immediately returns after the job is submitted. Monitor job progress on the Cloud Console, in the AI Platform section and wait for the training job to complete.
End of explanation
"""
%%bash
gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1
%%bash
model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
saved_model_cli show --dir ${model_dir} --all
%%bash
model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
gcloud ai-platform local predict \
--model-dir=${model_dir} \
--json-instances=/tmp/test.json
"""
Explanation: The RMSE is now 8.33249, an improvement over the 9.3 that we were getting ... of course, we won't know until we train/validate on a larger dataset. Still, this is promising. But before we do that, let's do hyper-parameter tuning.
<b>Use the Cloud Console link to monitor the job and do NOT proceed until the job is done.</b>
End of explanation
"""
%%bash
MODEL_NAME="feateng"
MODEL_VERSION="v1"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
echo "Run these commands one-by-one (the very first time, you'll create a model and then create a version)"
#gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version $TFVERSION
%%bash
gcloud ai-platform predict --model=feateng --version=v1 --json-instances=/tmp/test.json
"""
Explanation: Optional: deploy model to cloud
End of explanation
"""
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.structured import *
from pandas_summary import DataFrameSummary
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from IPython.display import display
from sklearn import metrics
PATH = "data/bulldozers/"
!ls {PATH}
"""
Explanation: FastAI Machine Learning 1 – Random Forests
CodeAlong / Reimplementation of: https://github.com/fastai/fastai/blob/master/courses/ml1/lesson1-rf.ipynb
Lessons 1 & 2 | this notebook has been rerun on another machine – numbers may not exactly match notes (though trends will be the same).
1.2 Imports
End of explanation
"""
df_raw = pd.read_csv(f'{PATH}Train.csv', low_memory=False,
parse_dates=["saledate"])
"""
Explanation: 2. Data
End of explanation
"""
def display_all(df):
with pd.option_context("display.max_rows", 1000):
with pd.option_context("display.max_columns", 1000):
display(df)
display_all(df_raw.tail().transpose())
display_all(df_raw.describe(include='all').transpose())
"""
Explanation: In any sort of analytics work, it's important to look at your data, to make sure you understand the format, how it's stored, what type of values it holds, etc. Even if you've read descriptions about your data, the actual data may not be what you expect.
End of explanation
"""
df_raw.SalePrice = np.log(df_raw.SalePrice)
"""
Explanation: Lecture 2 00:06:08 It's important to note what metric is being used for a project. Generally, selecting the metric(s) is an important part of project setup. However, in this case Kaggle tells us what metric to use: RMSLE (Root Mean Squared Log Error) between the actual and predicted auction prices. Therefore we take the log of the prices, so that RMSE will give us what we need.
$$\sqrt{\frac{1}{n}\sum\big(\ln{(acts)} - \ln{(preds)}\big)^2}$$
End of explanation
"""
m = RandomForestRegressor(n_jobs=-1)
m.fit(df_raw.drop('SalePrice', axis=1), df_raw.SalePrice)
"""
Explanation: 2.2.2 Initial Processing
End of explanation
"""
add_datepart(df_raw, 'saledate')
df_raw.saleYear.head()
"""
Explanation: From the error above, we see we need all our columns to be numbers.
We'll start off by replacing the date ('saledate') column with a whole bunch of date-related columns.
End of explanation
"""
df_raw.columns
"""
Explanation: We can see those new date columns below where 'saledate' used to be:
End of explanation
"""
train_cats(df_raw)
df_raw.UsageBand
"""
Explanation: The new date columns are numbers, but:
The categorical variables are currently stored as strings, which is inefficient, and doesn't provide the numeric coding required for a random forest. Therefore we call train_cats to convert strings to Pandas categories.
End of explanation
"""
df_raw.UsageBand.cat.categories
# we can do .cat.codes to get the actual numbers
df_raw.UsageBand.cat.codes
"""
Explanation: At first glance it doesn't look like anything's changed, but if you take a deeper look, you'll see the data type has changed to 'category'. 'category' is a Pandas class, with attributes accessible via .cat.xxxx
The index below shows that 'High' --> 0, 'Low' --> 1, 'Medium' --> 2
End of explanation
"""
df_raw.UsageBand.cat.set_categories(['High', 'Medium', 'Low'], ordered=True, inplace=True)
df_raw.UsageBand = df_raw.UsageBand.cat.codes
display_all(df_raw.isnull().sum().sort_index()/len(df_raw))
os.makedirs('tmp', exist_ok=True)
df_raw.to_feather('tmp/bulldozers-raw.feather')
"""
Explanation: To actually use this dataset and turn it into numbers, what we need to do is to take every categorical column and replace it with .cat.codes
This is done further below in 2.2.3 Pre-processing via proc_df()
End of explanation
"""
# df_raw = pd.read_feather('tmp/bulldozers-raw.feather')
df, y, nas = proc_df(df_raw, 'SalePrice')
??numericalize
df.columns
"""
Explanation: 2.2.3 Pre-processing
The nas coming out of proc_df() is a dictionary, where the keys are the names of columns with missing values, and the values are the medians.
Optionally you can pass nas as an additional arg to proc_df(), and it'll make sure it adds those specific columns and uses those specific medians. IE: it gives you the ability to say "process this test set exactly the same way we processed the training set." FAML1-L3: 00:07:00
End of explanation
"""
m = RandomForestRegressor(n_jobs=-1)
m.fit(df, y)
m.score(df, y)
"""
Explanation: The R^2 score shown below measures how much of the variance in the target the model explains: 1.0 means perfect predictions, and 0 means no better than always predicting the mean.
End of explanation
"""
def split_vals(a, n): return a[:n].copy(), a[n:].copy()
n_valid = 12000 # same as Kaggle's test set size
n_trn = len(df) - n_valid
raw_train, raw_valid = split_vals(df_raw, n_trn)
X_train, X_valid = split_vals(df, n_trn)
y_train, y_valid = split_vals(y, n_trn)
X_train.shape, y_train.shape, X_valid.shape
"""
Explanation: A validation set helps handle the issue of overfitting. Make it so that it shares the test set's properties, ie: give it 12k rows just like the test set, and split it as the first n - 12k rows for training and the last 12k rows as validation set.
End of explanation
"""
def rmse(x,y): return math.sqrt(((x-y)**2).mean())
def print_score(m):
res = [rmse(m.predict(X_train), y_train), rmse(m.predict(X_valid), y_valid),
m.score(X_train, y_train), m.score(X_valid, y_valid)]
if hasattr(m, 'oob_score_'): res.append(m.oob_score_)
print(res)
m = RandomForestRegressor(n_jobs=-1)
%time m.fit(X_train, y_train)
print_score(m)
"""
Explanation: Lecture 2 00:17:58 Creating your validation set is the most important thing [I think] you need to do when you're doing a Machine Learning project – at least in terms of the actual modeling part.
A Note on the validation set: in general any time you're building a model that has a time element, you want your test set to be a separate time period -- and consequently your validation set too. In this case the dataset is already sorted by date, so you can just take the later portion.
3. Random Forests
3.1 Base Model
Let's try our model again, this time with separate training and validation sets.
End of explanation
"""
df_trn, y_trn, nas = proc_df(df_raw, 'SalePrice', subset=30000, na_dict=nas)
X_train, _ = split_vals(df_trn, 20000)
y_train, _ = split_vals(y_trn, 20000)
m = RandomForestRegressor(n_jobs=-1) # n_jobs=-1: set to num. cores on CPU
%time m.fit(X_train, y_train)
print_score(m)
"""
Explanation: Here we see our model, which had 0.982 R2 on the training set, got only 0.887 on the validation set, which makes us think it's overfitting quite badly. However it's not too badly because the RMSE on the logs of the prices (0.25) would've put us in the top 25% of the competition anyway (100/407).
*reran this on another machine
3.2 Speeding things up
Fast feedback is important for iteration and good interactive analysis. To this end we can pass in the subset par to proc_df() which'll randomly sample the data. We want no more than a 10sec wait when experimenting.
When you do this you still have to be careful your validation set doesn't change, and your training set doesn't overlap with it. So after sampling 30k items, we'll then taken the first 20k (since they're sorted by date) for our training data, and ignore the other 10k -- keeping our validation set the same as before.
End of explanation
"""
m = RandomForestRegressor(n_estimators=1, max_depth=3, bootstrap=False, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
"""
Explanation: Instead of 83 seconds of total compute time (15.2s thanks to multi-cores), we now run in only 2.94 total seconds of compute.
3.3 Single tree
Let's use that subset to build a model that's so simple we can actually take a look at it. We'll build a forest made of trees - and before we look at the forest, we'll look at the trees.
In scikit-learn the trees are called 'estimators'. Below we'll make a forest with a single tree n_estimators=1, and a small tree at that max_depth=3, and we'll turn off the random-component of the RandomForest bootstrap=False. Now it'll create a small deteriministic tree.
End of explanation
"""
draw_tree(m.estimators_[0], df_trn, precision=3)
df_raw.fiProductClassDesc.cat.categories
# df_raw.fiProductClassDesc.cat.codes
"""
Explanation: After fitting the model and printing the score, the R2 score has dropped from 0.77 to 0.39. This is not a good model. It's better than the Mean-model (being > 0) but still not good.
But we can draw this model to take a look at it:
End of explanation
"""
m = RandomForestRegressor(n_estimators=1, bootstrap=False, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
"""
Explanation: A tree is a series of binary decisions, or splits. Our tree first of all decided to split on Coupler_System ≤ 0.5. That's actually a boolean variable, True/False. Within the group where it was True, it further split those into YearMade ≤ 1988 (1987.5), and on, etc.
Looking at our tree, in the first box: there are 20,000 rows in our data set (samples), the average of the log of price is 10.1, and if we built a model where we just used that average all the time: then the mean-squared-error would be 0.456.
So this first box is like the Denominator of an R^2. The most basic model is a tree with zero splits, just predict the average.
It turns out above that the best single binary split we can make turns out to be splitting by where the Coupler System is ≤ 0.5. (True or False).
If we do that, the MSE of Coupler System < 0.5 (ie: False) goes down from 0.456 to 0.111, improving the error a lot. In the other group, it's only improved slightly, from 0.456 to 0.398.
We can also see the Coupler System False group is only a small percentage: 1,721 samples of the total 20,000.
If you wanted to know what the single best binary decision to make for your data, how could you do it?
We want to build a Random Forest from scratch.
The first step is to create a tree. The first step to creating a tree is to create the first binary decision. How do you do this?
FAML1-0:39:02
Enumerate the different splits for each variable and choose the one with the lowest MSE. so how do we do the enumeration?
For each variable, for each possible value of that variable: see if it's better. What does better mean?:
We could take the weighted average of the new MSE times number of samples.
That would be the same as saying:
I've got a model. The model is a single binary decision. For everybody with YearMade ≤ 1987.5, I'll fill-in 10.21, for everyone > 1987.5 I'll fill-in 9.184, and calculate the RMSE of this model.
That'll give the same answer as the weighted-average idea.
So now we have a single number that represents how good a split is: the weighted average of the MSE's of the two groups it creates.
We also have a way to find the best split, which is to try every variable, and every possible value of that variable, and see which variable and which value gives us a split with the best score.
The granuality is defined by the variables. So, Coupler_System only has two possible values, True or False. YearMade ranges from 1960 to 2010, so we just try all those unique values. All those possible split points.
Now rinse and repeat: with the conditions set by the split: continue.
Claim: it's Never necessary to do more than 1 split at a level
Why?: because you can split it again.
THAT is the entirety of creating a Decision Tree. You stop either when you hit some requested limit (here when depth reaches 3), or when the leaf-nodes each only contain 1 thing.
That is how we grow decision trees.
Now this tree isn't very good; it has a validation R^2 of 0.39. We can try to make it better by letting it grow deeper (removing max_depth=3).
Bigger tree:
End of explanation
"""
m = RandomForestRegressor(n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
"""
Explanation: If we don't limit depth, the training R^2 is, of course, a perfect 1.0, because we can exactly predict every training element: each one ends up in a leaf node all its own.
But the validation R^2 is not 1.0. It's a lot better than our super-shallow tree, but not as good as we'd like.
We want to find another way of making these trees better. And we'll do that by making a forest.
What's a forest?
To create a forest we're going to use a statistical technique called bagging.
3.4 Bagging
3.4.1 Intro to Bagging
You can bag any kind of model. The Random Forest is a way of bagging Decision Trees.
Bagging: what if we created 5 different models, each of which was only somewhat predictive, but the models weren't at all correlated with each other -- their predictions weren't correlated. That would mean the 5 models would've had to've found different insights into relationships in the data.
If you took the average of those models, you're effectively taking in the insights from each of them.
Averaging models: Ensembling.
Let's come up with a more specific idea of how to do this. What if we created a whole lot of these trees: big, deep, massively-overfit D-Trees. But each tree gets a random 1/10th of the data. And do that a hundred times with different random samples.
All the trees will have errors, but random errors. What's the average of a bunch of random errors? Zero. So if we take the average the error will average to zero and what's left is the true relationship.
That's a Random Forest.
After making those trees, we'll take our test data, run it through the tree, get to the leaf node, take the average in that leaf node for all the trees, and average them all together.
To do that we call RandomForestRegressor(.). An 'estimator' is what scikit-learn calls a tree. By default n_estimators = 10
End of explanation
"""
preds = np.stack([t.predict(X_valid) for t in m.estimators_])
preds[:,0], np.mean(preds[:,0]), y_valid[0]
"""
Explanation: We'll grab the predictions for each individual tree, and look at one example.
Each tree is stored in the attribute: .estimators_. Below gives a list of arrays of predictions. Each array will be all the predictions for that tree.
np.stack(.) concatenates them on a new axis.
End of explanation
"""
preds.shape
"""
Explanation: We see a shape of 10 different sets of predictions and for each one our validation set of size 12,000 -- so 12,000 predictions for each of the 10 trees:
End of explanation
"""
plt.plot([metrics.r2_score(y_valid, np.mean(preds[:i+1], axis=0)) for i in range(10)]);
"""
Explanation: Above, preds[:,0] returns an array of the first prediction for each of our 10 trees. np.mean(preds[:,0]) returns the mean of those predictions, and y_valid[0] shows the actual answer. Most of our trees had inaccurate predictions, but the mean of them was actually pretty close.
Note: I probably made a mistake somewhere, or the data sample was too small, for multiple trees to be getting the exact answer
The models are based on different random subsets, and so their errors aren't correlated with eachother. The key insight here is to construct multiple models which are better than nothing, and the errors are - as much as possible - not correlated with eachother.
One of our first tunable hyperparameters is our number of trees.
What scikit-learn does by default is for N rows it picks out N rows with replacement: bootstrapping. ~ 63.2% of the rows will be represented, and a bunch of them multiple times.
The whole point of Machine Learning is to identify which variables matter the most and how they relate to each other and to your dependent variable.
Random Forests were discovered/invented with the aim of creating trees as predictive and as uncorrelated as possible -- 1990s. Recent research has focused more on minimizing correlation: creating forests with trees that are individually less predictive, but with very little correlation.
There's another scikit-learn class called:
sklearn.ensemble.ExtraTreesClassifier
or
sklearn.ensemble.ExtraTreesRegressor
With the exact same API (just replace RandomForestRegressor). It's called an "Extremely Randomized Trees" model. It does exactly what's discussed above, but instead of trying every split of every variable, it randomly tries a few splits of a few variables.
So it's much faster to train, and has more randomness. With the time saved, you can build more trees - and therefore get better generalization.
In practice: if you have crappy individual trees, you just need more models to get a good overall model.
Now the obvious question: isn't this computationally expensive? Going through every possible value of a 32-bit Float or .. God forbid.. a 64-bit Float? Yes.
Firstly, that's why it's good your CPU runs in GHz, billions of clock-cycles per second, and moreso why Multi-Core processors are fantastic. Each core has SIMD capability -- Single Instruction Multiple Data -- allowing it to perform up to 8 computations at once - and that's per core.
On the GPU performance is measured in TFLOPS - Terra FLOPS - Trillions of FLoating Point Operations per Second.
This is why, when designing algorithms, it's very difficult for us humans to realize how *stupid* algorithms should be, given how fast modern computers are.
It's quite a few operations... but at trillions per second, you hardly notice it.
Let's do a little data analysis. Let's go through each of the 10 trees, take the mean of all the predictions up to the ith tree and plot the R^2:
End of explanation
"""
m = RandomForestRegressor(n_estimators=20, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
m = RandomForestRegressor(n_estimators=40, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
m = RandomForestRegressor(n_estimators=80, n_jobs=-1)
%time m.fit(X_train, y_train)
print_score(m)
m = RandomForestRegressor(n_estimators=160, n_jobs=-1)
%time m.fit(X_train, y_train)
print_score(m)
"""
Explanation: Note that the final value on the plot is the same as the final R^2 score returned by the RandomForest -- about 0.7748 here.
The shape of this curve suggests that adding more trees isn't going to help much. Let's check (Compare this to our original model on a sample).
End of explanation
"""
m = RandomForestRegressor(n_estimators=160, n_jobs=-1)
%time m.fit(X_train, y_train)
print_score(m)
"""
Explanation: At this point, it looks like we're inside the sampling noise. More trees will never make the model worse, but a lower score is easily explained by whatever diminishing accuracy gain remains being overwhelmed by noise in the random sampling of the data.
If that's the case, I'd expect being able to see an R2 score greater than 0.79786, with the same hyperparameters:
End of explanation
"""
m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
"""
Explanation: Well there you have it. And the highest score so far to boot.
The n_estimators hyperparameter is a tradeoff between improvement vs. computation time.
Interesting note above, number of trees increases computation time Linearly.
You can still get many of the same insights as a large forest, with a few dozen trees, like 20 or 30.
3.4.2 Out-of-Bag (OOB) Score
Is our validation set worse than our training set because we're overfitting, or because the validation set is for a different time period, or a bit of both? With the existing information we've shown, we can't tell. However, Random Forests have a very clever trick called out-of-bag (OOB) error which can handle this (and more!)
The idea is to calculate error on the training set, but only include the trees in the calculation of a row's error where that row was not included in training that tree. This allows us to see whether the model is over-fitting, without needing a separate validation set.
This also has the benefit of allowing us to see whether our model generalizes, even if we only have a small amount of data so want to avoid separating some out to create a validation set.
This is as simple as adding one more parameter to our model constructor. We print the OOB error last in our print_score function below.
So what if your dataset is so small you don't want to pull anything out for a validation set - because doing so means you no longer have enough data to build a good model? What do you do?
There's a cool trick unique to Random Forests. We could recognize that for each tree there are some portion of rows not used... So we could pass in the rows not used by the 1st tree to the 1st, the rows not used by the 2nd to the 2nd, and so on. So technically we'd have a different validation set for each tree. To calculate our prediction, we would average all of the trees where that row was not used for training.
As long as you have enough trees, every row is going to appear in the OOB sample for one of them at least. So you'll be averaging a few trees - more if you have more trees.
You can create an OOB prediction by averaging all the trees you didn't use to train each individual row, and then calculate RMSE, R2, etc, on that.
If you pass oob_score=True to scikit-learn, it'll do that for you. It'll then create an attribute oob_score_. Our print_score(.) function at top prints out the oob score if it exists.
End of explanation
"""
n_trn
df_trn, y_trn, nas = proc_df(df_raw, 'SalePrice')
X_train, X_valid = split_vals(df_trn, n_trn)
y_train, y_valid = split_vals(y_trn, n_trn)
len(df_trn), len(X_train)
"""
Explanation: The extra value at the end is the R2 for the oob score. We want it to be very close to the R2 for the validation set (2nd to last value) although that doesn't seem to be the case here.
In general the OOB R2 score will slightly underestimate how generalizable the model is; the more trees you have, the smaller that underestimate will be.
Although in this case my OOB R2 score is actually better than my validation R2... NOTE (L2 1:21:54) the OOB score is better because it's taken from a random sample of the data, whereas our validation set is not: it's a different time period, which is harder to predict.
The OOB R2 score is handy for finding an automated way to set hyperparameters. A grid search is one way to do this: you pass scikit-learn a list of the hyperparameters you want to tune and the values you want to try for each, it runs your model on every possible combination, and it tells you which one is best. The OOB score is a great metric for measuring that.
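For example, a minimal OOB-driven search over min_samples_leaf might look like this (the synthetic dataset and parameter values here are arbitrary illustrations):

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=2000, n_features=10, noise=10.0,
                       random_state=0)

best = (None, -1.0)
for min_leaf in [1, 3, 5, 10, 25]:
    m = RandomForestRegressor(n_estimators=40, min_samples_leaf=min_leaf,
                              oob_score=True, n_jobs=-1, random_state=0)
    m.fit(X, y)
    # Pick the setting with the highest out-of-bag R^2.
    if m.oob_score_ > best[1]:
        best = (min_leaf, m.oob_score_)
print(best)
```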
3.5 Reducing Over-Fitting
3.5.1 Subsampling Lecture 2 1:14:48
It turns out that one of the easiest ways to avoid over-fitting is also one of the best ways to speed up analysis: subsampling. Let's return to using our full dataset, so that we can demonstrate the impact of this technique. [1:15:00]
What we did before wasn't ideal. We took a subset of 30k rows of the data, and built our models on that - meaning every tree in our RF is a different subset of that subset of 30k. Why? Why not pick a totally different subset of 30k for each tree? So leave the total 300k dataset as is, and if we want to make things faster, pick a different subset of 30k each time. Instead of bootstrapping the entire set of rows, let's just randomly sample a subset of the data.
Let's do that by calling proc_df() without the subset par to get all our data.
End of explanation
"""
set_rf_samples(20000)
m = RandomForestRegressor(n_jobs=-1, oob_score=True)
%time m.fit(X_train, y_train)
print_score(m)
"""
Explanation: The basic idea is this: rather than limit the total amount of data that our model can access, let's instead limit it to a different random subset per tree. That way, given enough trees, the model can still see all the data, but for each individual tree, it'll be just as fast as if we'd cut down our dataset as before.
When we run set_rf_samples(20000) and then fit a RF, it's not going to bootstrap the entire set of 390k rows; it's going to grab a subset of 20k rows for each tree.
So when running, it'll be just as fast as when we did a random sample of 20k, but now every tree can have access to the entire dataset. So if we have enough estimators/trees, the model will eventually see everything.
End of explanation
"""
m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
%time m.fit(X_train, y_train)
print_score(m)
"""
Explanation: We don't see that much of an improvement over the R2 with the 20k data-subset, because we haven't used many estimators yet.
Since each additional tree allows the model to see more data, this approach can make additional trees more useful.
End of explanation
"""
reset_rf_samples()
"""
Explanation: With more estimators the model can see a larger portion of the data, and the R2 (2nd last value) has gone up from 0.8591 to 0.8755.
The Favorita groceries competition has over a hundred-million rows of data. There's no way you'll create an RF using 128M rows in every tree. It'll take forever. Instead you can use set_rf_samples(.) set to 100k or 1M.
The trick here is with a Random Forest using this technique, no dataset is too big. Even if it has 100B rows. You can just create a bunch of trees, each with a different subset.
NOTE: right now OOB Scores and set_rf_samples(.) are not compatible with each other, so you need to set oob_score=False if you use set_rf_samples(.) because the OOB score will be meaningless.
To turn off set_rf_samples(.) just call: reset_rf_samples()
A great big tip - that very few people in Industry or Academia use:
Most people run all their models on all their data all the time using their best possible pars - which is just pointless.
If you're trying to find which features are important and how they relate to one another, having that 4th decimal place of accuracy isn't going to change anything.
Do most of your modeling on a sample size large enough that your accuracy is reasonable (within a reasonable distance of the best accuracy you can get) and training takes only a few seconds, so you can do your analysis interactively.
3.5.2 Tree Building Parameters
We revert to using a full bootstrap sample in order to show the impact of the other over-fitting avoidance methods.
End of explanation
"""
m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
"""
Explanation: Let's get a baseline for this full set to compare to. This'll train 40 estimators all the way down until the leaf nodes have just one sample in them.
End of explanation
"""
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, n_jobs=-1, oob_score=True)
%time m.fit(X_train, y_train)
print_score(m)
"""
Explanation: This gets us a 0.899 R2 on the validation set, or a 0.908 on the OOB.
L2 1:21:54 Our OOB is better than our ValSet because our ValSet is actually a different time period (the future), whereas the OOB is a random sample. It's harder to predict the future.
Another way to reduce over-fitting is to grow our trees less deeply. We do this by specifying (with min_samples_leaf) that we require some minimum number of rows in every leaf node. This has two benefits:
There are fewer decision rules for each leaf node; simpler models should generalize better.
The predictions are made by averaging more rows in the leaf node, resulting in less volatility.
End of explanation
"""
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
"""
Explanation: Setting min_samples_leaf = 3 stops the RF from splitting further once a leaf node would be left with fewer than 3 samples in it. In practice this means 1 or 2 fewer levels of decisions being made - which means around half the number of decision criteria, so it'll train much quicker. It also means when we look at an individual tree, instead of taking one point, we're taking the average of at least 3 points - so we'd expect the trees to generalize better, though each tree is likely to be less powerful than before.
Values in the range of 1, 3, 5, 10, 25 tend to work well for min_samples_leaf.
If you have a massive dataset and aren't using small samples, you may need a min_samples_leaf in the hundreds or thousands.
In this case, going from the default leaf-size of 1 to 3 has increased our valset R2 from 0.899 to 0.903.
We can also increase the amount of variation amongst the trees by not only using a sample of rows for each tree, but also using a sample of columns for each split. We do this by specifying max_features, which is the proportion of features to randomly select from at each split.
Idea: the less correlated your trees are with each other, the better. Imagine you had 1 column that was so much better than all the other columns at being predictive that every tree you built - regardless of which subset of rows - always started with that column. So the trees are all going to be pretty similar. But you can imagine there being some interaction of variables where that interaction is more important than that individual column.
So if each tree always splits on the same thing the first time, you're not going to get much variation on those trees.
So what we do in addition to just taking a subset of those rows, is at every single split point take a different subset of columns.
This is slightly different to the row sampling. In row-sampling, each new tree is based on a random set of rows. In column sampling every individual binary split chooses from a different subset of columns.
In other words: rather than looking at every possible level of every possible column, we look at every possible level of a random subset of columns. And at each decision point we use a different random subset.
How many? You get to pick. max_features=0.5 means you choose half of them. The default is to use all.
J. Howard has found good values to be: 1, 0.5, log2(n), and sqrt(n) (in sklearn, max_features=0.5, 'log2', or 'sqrt').
NOTE the RF never removes variables as it's building a Decision Tree. It just goes through the different split points based on possible values.
You may have noticed our RMSE of Log(price) has been dropping on our validation set as well (2nd value), now down to 0.23306.
Checking the public leaderboard, 0.23305579 gets us to 25th place. Unfortunately the competition is old enough that direct comparisons are difficult, but we get the general idea.
Roughly speaking, we'd've gotten in the top 25 of this competition with a brainless Random Forest with some brainless hyperparameter tuning.
This is why the Random Forest is such an important first step, and often the *only* step, in Machine Learning.
End of explanation
"""
udacity/deep-learning | sentiment-network/Sentiment_Classification_Projects.ipynb | mit
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
"""
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem" (this lesson)
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network (video only - nothing in notebook)
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset<a id='lesson_1'></a>
The cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything.
End of explanation
"""
len(reviews)
reviews[0]
labels[0]
"""
Explanation: Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
"""
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
"""
Explanation: Lesson: Develop a Predictive Theory<a id='lesson_2'></a>
End of explanation
"""
from collections import Counter
import numpy as np
"""
Explanation: Project 1: Quick Theory Validation<a id='project_1'></a>
There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.
You'll find the Counter class to be useful in this exercise, as well as the numpy library.
End of explanation
"""
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
"""
Explanation: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
End of explanation
"""
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
"""
Explanation: TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show.
End of explanation
"""
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
"""
Explanation: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
End of explanation
"""
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
"""
Explanation: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO: Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in pos_neg_ratios.
Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
End of explanation
"""
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
"""
Explanation: Examine the ratios you've calculated for a few words:
End of explanation
"""
# TODO: Convert ratios to logs
"""
Explanation: Looking closely at the values you just calculated, we see the following:
Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.
Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.
Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.
Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:
Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral, so the absolute distance from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.
When comparing absolute values it's easier to do that around zero than one.
To fix these issues, we'll convert all of our ratios to new values using logarithms.
TODO: Go through all the ratios you calculated and convert them to logarithms. (i.e. use np.log(ratio))
In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
End of explanation
"""
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
"""
Explanation: Examine the new ratios you've calculated for the same words from before:
End of explanation
"""
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
"""
Explanation: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
End of explanation
"""
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
"""
Explanation: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
End of explanation
"""
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = None
"""
Explanation: Project 2: Creating the Input/Output Data<a id='project_2'></a>
TODO: Create a set named vocab that contains every word in the vocabulary.
End of explanation
"""
vocab_size = len(vocab)
print(vocab_size)
"""
Explanation: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
End of explanation
"""
from IPython.display import Image
Image(filename='sentiment_network_2.png')
"""
Explanation: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
End of explanation
"""
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = None
"""
Explanation: TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns.
End of explanation
"""
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
"""
Explanation: Run the following cell. It should display (1, 74074)
End of explanation
"""
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
"""
Explanation: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
End of explanation
"""
def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
"""
Explanation: TODO: Complete the implementation of update_input_layer. It should count
how many times each word is used in the given review, and then store
those counts at the appropriate indices inside layer_0.
End of explanation
"""
update_input_layer(reviews[0])
layer_0
"""
Explanation: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
End of explanation
"""
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
# TODO: Your code here
"""
Explanation: TODO: Complete the implementation of get_target_for_labels. It should return 0 or 1,
depending on whether the given label is NEGATIVE or POSITIVE, respectively.
End of explanation
"""
labels[0]
get_target_for_label(labels[0])
"""
Explanation: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
End of explanation
"""
labels[1]
get_target_for_label(labels[1])
"""
Explanation: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
End of explanation
"""
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = None
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = None
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
pass
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
pass
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
pass
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid function
pass
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
pass
"""
Explanation: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3: Building a Neural Network<a id='project_3'></a>
TODO: We've included the framework of a class called SentimentNetwork. Implement all of the items marked TODO in the code. These include doing the following:
- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer.
- Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.
- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code)
- Implement the pre_process_data function to create the vocabulary for our training data generating functions
- Ensure train trains over the entire corpus
Where to Get Help if You Need it
Re-watch earlier Udacity lectures
Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code)
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
"""
Explanation: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
End of explanation
"""
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
End of explanation
"""
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
"""
Explanation: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
"""
# TODO: -Copy the SentimentNetwork class from the Project 3 lesson
# -Modify it to reduce noise, like in the video
"""
Explanation: Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>
TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:
* Copy the SentimentNetwork class you created earlier into the following cell.
* Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
End of explanation
"""
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
"""
Explanation: End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
"""
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
"""
Explanation: Project 5: Making our Network More Efficient<a id='project_5'></a>
TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Remove the update_input_layer function - you will not need it in this version.
* Modify init_network:
You no longer need a separate input layer, so remove any mention of self.layer_0
You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero
Modify train:
Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step.
At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review.
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review.
When updating weights_0_1, only update the individual weights that were used in the forward pass.
Modify run:
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review.
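As a rough sketch of the sparse update described above (the shapes and variable names here are illustrative assumptions, not the project's exact code), adding only the weight rows for the word indices found in a review produces the same hidden layer as multiplying by a mostly-zero input layer:

```python
import numpy as np

# Illustrative sketch only: a tiny vocabulary and hidden layer.
# `weights_0_1`, `review_indices`, and the shapes are assumptions.
vocab_size, hidden_nodes = 1000, 10
rng = np.random.RandomState(0)
weights_0_1 = rng.randn(vocab_size, hidden_nodes)

review_indices = [4, 17, 512]           # word indices found in one review
layer_1 = np.zeros((1, hidden_nodes))   # plays the role of self.layer_1

layer_1 *= 0                            # reset before each review
for index in review_indices:
    layer_1 += weights_0_1[index]       # add only the rows actually used
```

This skips the full matrix product over thousands of zero entries, which is where the speedup comes from.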
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to recreate the network and train it once again.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
End of explanation
"""
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
"""
Explanation: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a>
End of explanation
"""
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
"""
Explanation: Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>
TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Modify pre_process_data:
Add two additional parameters: min_count and polarity_cutoff
Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)
Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like.
Change so words are only added to the vocabulary if they occur in the vocabulary more than min_count times.
Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least polarity_cutoff
Modify __init__:
Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data
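A hedged sketch of that filtering logic (the word counts and cutoff values below are made-up illustrations, not the project's real data):

```python
from math import log
from collections import Counter

# Toy counts standing in for the review corpus (assumptions for illustration).
positive_counts = Counter({"great": 120, "boring": 3, "the": 500})
negative_counts = Counter({"great": 10, "boring": 90, "the": 480})
total_counts = positive_counts + negative_counts

min_count, polarity_cutoff = 50, 0.5
vocab = []
for word, cnt in total_counts.items():
    if cnt <= min_count:
        continue                       # too rare to trust its ratio
    ratio = log(positive_counts[word] / float(negative_counts[word] + 1))
    if abs(ratio) >= polarity_cutoff:  # strongly polarized either way
        vocab.append(word)
print(sorted(vocab))
```

Neutral, high-frequency words like "the" fall below the polarity cutoff and are excluded, which is exactly the noise reduction the project is after.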
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to train your network with a small polarity cutoff.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: And run the following cell to test its performance.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to train your network with a much larger polarity cutoff.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: And run the following cell to test its performance.
End of explanation
"""
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
"""
Explanation: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis: What's Going on in the Weights?<a id='lesson_7'></a>
End of explanation
"""
|
slundberg/shap | notebooks/tabular_examples/tree_based_models/Scatter Density vs. Violin Plot Comparison.ipynb | mit | import xgboost
import shap
# train xgboost model on diabetes data:
X, y = shap.datasets.diabetes()
bst = xgboost.train({"learning_rate": 0.01}, xgboost.DMatrix(X, label=y), 100)
# explain the model's predictions using SHAP values (the plots below use the first 1000 samples)
shap_values = shap.TreeExplainer(bst).shap_values(X)
"""
Explanation: Scatter Density vs. Violin Plot
This gives several examples to compare the dot density vs. violin plot options for summary_plot.
End of explanation
"""
shap.summary_plot(shap_values[:1000,:], X.iloc[:1000,:], plot_type="layered_violin", color='#cccccc')
"""
Explanation: Layered violin plot
Without color, this plot can simply display the distribution of importance for each variable as a standard violin plot.
End of explanation
"""
shap.summary_plot(shap_values[:1000,:], X.iloc[:1000,:], plot_type="layered_violin", color='coolwarm')
"""
Explanation: For example, in the above, we can see that s5 is the most important variable, and generally it causes either a large positive or negative change in the prediction. However, is it large values of s5 that cause a positive change and small ones that cause a negative change - or vice versa, or something more complicated? If we use color to represent the largeness/smallness of the feature, then this becomes apparent:
End of explanation
"""
shap.summary_plot(shap_values[:1000,:], X.iloc[:1000,:])
"""
Explanation: Here, red represents large values of a variable, and blue represents small ones. So, it becomes clear that large values of s5 do indeed increase the prediction, and vice versa. You can also see that others (like s6) are pretty evenly split, which indicates that while overall they're still important, their interaction is dependent on other variables. (After all, the whole point of a tree model like xgboost is to capture these interactions, so we can't expect to see everything in a single dimension!)
Note that the order of the color isn't important: each violin is actually a number (layered_violin_max_num_bins) of individual smoothed shapes stacked on top of each other, where each shape corresponds to a certain percentile of the feature (e.g. the 5-10% percentile of s5 values). These are always drawn with small values first (and hence closest to the x-axis) and large values last (hence on the 'edge'), and that's why in this case you always see the red on the edge and the blue in the middle. (You could, of course switch this round by using a different color map, but the point is that the order of red inside/outside blue has no inherent meaning.)
There are other options you can play with, if you wish. Most notable is the layered_violin_max_num_bins mentioned above. This has the additional effect that if the feature has fewer than layered_violin_max_num_bins unique values, then instead of partitioning each section as a percentile (the 5-10% above), we make each section represent a specific value. For example, since sex has only two values, here blue will mean male (or female?) and red means female (or male?). It is not clear from the diabetes data whether a higher value of sex means male or female.
<!-- commenting this out for the public repo since there is a fair amount of opinion here.
#### Pros
- look great
- easily interpretable (with color): people can generally get the idea without having to explain in detail
- both of these meant they're good to show laymen/clients in presentations etc.
#### Cons
- take longer to draw (only relevant if you're doing heaps)
- can be hard to get the smoothing just right
- the code isn't as well supported, so if you want to tweak it, you might have to hack the code yourself-->
Dot plot
This combines a scatter plot with density estimation by letting dots pile up when they don't fit. The advantage of this approach is that it does not hide anything behind kernel smoothing, so what-you-see-is-what-is-there.
End of explanation
"""
shap.summary_plot(shap_values[:1000,:], X.iloc[:1000,:], plot_type="violin")
"""
Explanation: <!--#### Pros
- if you're looking for really fine features, and your data doesn't have the problems below, then this might be better. However, you probably shouldn't be using a graph to discover such fine features.
#### Cons
- generally doesn't look as nice for most data sets
- can be quite noisy - no smoothing etc. This generally makes it harder to interpret the 'obvious' results.
- the plot will depend on the order the dots are drawn (since they'll overlap etc.). In other words it's possible that you could get very different looking plots with the same data. You can get round this somewhat by using a very low opacity - but this then makes the non-overlapping parts of the graph hard to read.
- [Note: this issue could be fixed somewhat if the y-value of the dots are given specific meaning (as with the layered violin plot) to avoid plots of different color overlapping. Though then it'd just be the layered violin plot.]
- doesn't support categorical data (see the comment for the layered violin plot).-->
Violin plot
This is a standard violin plot, but with outliers drawn as points. This gives a more accurate representation of the density of the outliers than a kernel density estimated from so few points. The color represents the average feature value at that position, so red regions have mostly high feature values while blue regions have mostly low feature values.
End of explanation
"""
|
ocefpaf/git_intro_demo | git_intro.ipynb | mit | %%bash
git status
%%bash
git log
%%bash
git show
%%writefile foo.md
Fetchez la vache
%%bash
git add foo.md
%%bash
git st
%%bash
git diff foo.md
%%bash
git diff git_intro.ipynb
%%bash
git rm -f foo.md
%%bash
git st
"""
Explanation: Very simple git intro
git config
%%bash
git config --global --get user.name
git config --global --get user.email
Basic commands
bash
git clone <repo>
git status
git log
git commit
git add <file>
git rm <file>
git diff <file>
git push
End of explanation
"""
%%bash
git branch new_post
%%bash
git checkout new_post
%%writefile my_new_post.md
# Q: What is the meaning of life?
# A: 42
%%bash
git st
%%bash
git add my_new_post.md
%%bash
git st
%%bash
git ci -m "Adding my new post." my_new_post.md
%%bash
git push
%%bash
git push --set-upstream origin new_post
"""
Explanation: GitHub workflow
Fork → Branch → Write → PR
Fork
bash
git clone https://github.com/<username>/git_intro_demo.git
Branch
End of explanation
"""
%%bash
git remote --verbose
%%bash
git remote add upstream https://github.com/ocefpaf/git_intro_demo.git
git remote --verbose
"""
Explanation: PR ready!? What now?
End of explanation
"""
|
tommyod/abelian | docs/notebooks/homomorphisms.ipynb | gpl-3.0 | from IPython.display import display, Math
def show(arg):
return display(Math(arg.to_latex()))
"""
Explanation: Tutorial: Homomorphisms
This is an interactive tutorial written with real code.
We start by setting up $\LaTeX$ printing.
End of explanation
"""
from abelian import LCA, HomLCA
# Initialize the target group for the homomorphism
target = LCA([0, 5], discrete = [False, True])
# Initialize a homomorphism between LCAs
phi = HomLCA([[1, 2], [3, 4]], target = target)
show(phi)
# Initialize a homomorphism with no source/target.
# Source and targets are assumed to be
# of infinite order and discrete (free-to-free)
phi = HomLCA([[1, 2], [3, 4]])
show(phi)
"""
Explanation: Initializing a homomorphism
Homomorphisms between general LCAs are represented by the HomLCA class.
To define a homomorphism, a matrix representation is needed.
In addition to the matrix, the user can also define a target and source explicitly.
Some verification of the inputs is performed by the initializer, for instance a matrix $A \in \mathbb{Z}^{2 \times 2}$ cannot represent $\phi: \mathbb{Z}^m \to \mathbb{Z}^n$ unless both $m$ and $n$ are $2$.
If no target/source is given, the initializer
will assume a free, discrete group, i.e. $\mathbb{Z}^m$.
End of explanation
"""
from abelian import HomLCA
phi = HomLCA([[4, 5], [9, -3]])
show(phi)
"""
Explanation: Homomorphisms between finitely generated abelian groups (FGAs) are also represented by the HomLCA class.
End of explanation
"""
# Create two HomLCAs
phi = HomLCA([[4, 5], [9, -3]])
psi = HomLCA([[1, 0, 1], [0, 1, 1]])
# The composition of phi, then psi
show(phi * psi)
"""
Explanation: Roughly speaking, for a HomLCA instance to represent a homomorphism between FGAs, it must have:
FGAs as source and target.
The matrix must contain only integer entries.
Compositions
A fundamental way to combine two functions is to compose them.
We create two homomorphisms and compose them: first $\psi$, then $\phi$.
The result is the function $\phi \circ \psi$.
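For plain Python functions, the same idea can be sketched as follows (an illustrative aside, not part of the abelian API):

```python
# Composition of plain Python functions, mirroring what `phi * psi` does:
# compose(f, g)(x) applies g first, then f.
def compose(f, g):
    return lambda x: f(g(x))

double = lambda x: 2 * x
inc = lambda x: x + 1

double_then_inc = compose(inc, double)  # first double, then inc
print(double_then_inc(5))
```

Note that order matters: `compose(inc, double)` and `compose(double, inc)` are different functions, just as `phi * psi` and `psi * phi` generally differ.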
End of explanation
"""
show(phi**3)
"""
Explanation: If the homomorphism is an endomorphism (same source and target),
repeated composition can be done using exponents.
$\phi^{n} = \phi \circ \phi \circ \dots \circ \phi, \quad n \geq 1$
End of explanation
"""
show(psi)
# Each element in the matrix is multiplied by 2
show(psi + psi)
# Element-wise addition
show(psi + 10)
"""
Explanation: Numbers and homomorphisms can be added to homomorphisms,
in the same way that numbers and matrices are added to matrices in other software packages.
End of explanation
"""
A = [[10, 10], [10, 15]]
# Notice how the HomLCA converts a list
# into an LCA, this makes it easier to create HomLCAs
phi = HomLCA(A, target = [20, 20])
phi = phi.project_to_source()
# Slice in different ways
show(phi)
show(phi[0, :]) # First row, all columns
show(phi[:, 0]) # All rows, first column
show(phi[1, 1]) # Second row, second column
"""
Explanation: Slice notation
Slice notation is available. The first slice works on rows (target group)
and the second slice works on columns (source group).
Notice that in Python, indices start with 0.
End of explanation
"""
# Create two homomorphisms
phi = HomLCA([2], target = LCA([0], [False]))
psi = HomLCA([2])
# Stack diagonally
show(phi.stack_diag(psi))
"""
Explanation: Stacking homomorphisms
There are three ways to stack morphisms:
Diagonal stacking
Horizontal stacking
Vertical stacking
They are all shown below.
Diagonal stacking
End of explanation
"""
# Create two homomorphisms with the same target
target = LCA([0], [False])
phi = HomLCA([[1, 3]], target = target)
source = LCA([0], [False])
psi = HomLCA([7], target=target, source=source)
# Stack horizontally
show(phi.stack_horiz(psi))
"""
Explanation: Horizontal stacking
End of explanation
"""
# Create two homomorphisms, they have the same source
phi = HomLCA([[1, 2]])
psi = HomLCA([[3, 4]])
# Stack vertically
show(phi.stack_vert(psi))
"""
Explanation: Vertical stacking
End of explanation
"""
# Create a homomorphism, specify the target
phi = HomLCA([[2, 0], [0, 4]], [10, 12])
# Find the source group (orders)
phi = phi.project_to_source()
show(phi)
"""
Explanation: Calling homomorphisms
In Python, a callable is an object which implements a method for function calls.
A homomorphism is a callable object, so we can use phi(x) to evaluate x, i.e. send x from the source to the target.
We create a homomorphism.
End of explanation
"""
# An element in the source, represented as a list
group_element = [1, 1]
# Calling the homomorphism
print(phi(group_element))
# Since [6, 4] = [1, 1] mod [5, 3] (source group)
# the following is equal
print(phi([6, 4]) == phi([1, 1]))
"""
Explanation: We can now call it. The argument must be in the source group.
End of explanation
"""
# Create two HomLCAs
phi = HomLCA([[4, 5], [9, -3]])
psi = HomLCA([[1, 0, 1], [0, 1, 1]])
x = [1, 1, 1]
# Compose, then call
answer1 = (phi * psi)(x)
# Call, then call again
answer2 = phi(psi(x))
# The result is the same
print(answer1 == answer2)
"""
Explanation: Calling and composing
We finish this tutorial by showing two ways to calculate the same thing:
$y = (\phi \circ \psi)(x)$
$y = \phi(\psi(x))$
End of explanation
"""
|
f-guitart/data_mining | notes/02c - Apache Spark MLlib.ipynb | gpl-3.0 | from pyspark.sql import SparkSession
import pyspark
spark = SparkSession \
.builder \
.appName("Python Spark SQL basic example") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
sc = spark.sparkContext
"""
Explanation: Apache Spark MLlib
MLlib is Spark’s machine learning (ML) library. Its goal is to make practical machine learning scalable and easy. At a high level, it provides tools such as:
ML Algorithms: common learning algorithms such as classification, regression, clustering, and collaborative filtering
Featurization: feature extraction, transformation, dimensionality reduction, and selection
Pipelines: tools for constructing, evaluating, and tuning ML Pipelines
Persistence: saving and load algorithms, models, and Pipelines
Utilities: linear algebra, statistics, data handling, etc.
You can find two machine learning APIs in Spark:
* spark.mllib: it is the RDD-based API
* spark.ml: it is the DataFrame-based API
spark.ml is the primary ML library in Spark. spark.mllib is in maintenance mode. This means that it can be used and it will have bug fixes but will not have any new features.
As of Spark 2.0, the RDD-based APIs in the spark.mllib package have entered maintenance mode. The primary Machine Learning API for Spark is now the DataFrame-based API in the spark.ml package.
DataFrames provide a more user-friendly API than RDDs. The many benefits of DataFrames include Spark Datasources, SQL/DataFrame queries, Tungsten and Catalyst optimizations, and uniform APIs across languages.
End of explanation
"""
df = spark.read.csv(path = '../data/papers.csv', header = False,inferSchema = True)
df.printSchema()
df.show(5)
"""
Explanation: Data Used
We will use different data sets:
Papers Dataset:
This is a dataset containing conferences, authors and paper titles
End of explanation
"""
bf_train = spark.read.csv(path = '../data/blackfriday_train.csv', header = True,inferSchema = True)
bf_test = spark.read.csv(path = '../data/blackfriday_test.csv', header = True,inferSchema = True)
bf_train.printSchema()
"""
Explanation: Black Friday Dataset
source: https://datahack.analyticsvidhya.com/contest/black-friday
The data set also contains customer demographics (age, gender, marital status, city_type, stay_in_current_city), product details (product_id and product category) and Total purchase_amount from last month.
| Variable | Definition |
|----------|------------|
| User_ID | User ID|
| Product_ID| Product ID|
| Gender| Sex of User|
| Age| Age in bins|
| Occupation| Occupation (Masked)|
| City_Category| Category of the City (A,B,C)|
| Stay_In_Current_City_Years| Number of years stay in current city|
| Marital_Status| Marital Status|
| Product_Category_1| Product Category (Masked)|
| Product_Category_2| Product may belongs to other category also (Masked)|
| Product_Category_3| Product may belongs to other category also (Masked)|
| Purchase| Purchase Amount (Target Variable)|
End of explanation
"""
df.printSchema()
"""
Explanation: DataFrames Manipulations
source: https://www.analyticsvidhya.com/blog/2016/10/spark-dataframe-and-operations/
How to see datatype of columns?
To see the types of columns in DataFrame, we can use the printSchema. printSchema() on a DataFrame will show the schema in a tree format.
End of explanation
"""
df.head(5)
"""
Explanation: How to show the first n observations?
We can use the head operation to see the first n observations (say, the first 5). The head operation in PySpark returns a list of Rows.
End of explanation
"""
print(type(df.show(2,truncate= True)))
#df.show(2,truncate= True)
"""
Explanation: To see the result formatted as DataFrame output, we can use the show operation.
We can pass the argument truncate = True to truncate the result (the row won't be shown completely).
Note that show does not return any data, just shows the DataFrame contents.
End of explanation
"""
df.count()
"""
Explanation: How to Count the number of rows in DataFrame?
We can use the count operation to count the number of rows in a DataFrame.
End of explanation
"""
len(df.columns), df.columns
"""
Explanation: How many columns do we have in train and test files along with their names?
To get the column names we can use columns on a DataFrame, similar to what we do with a pandas DataFrame.
End of explanation
"""
df.describe().show()
"""
Explanation: How to get the summary statistics (mean, standard deviation, min, max, count) of numerical columns in a DataFrame?
The describe operation is used to calculate the summary statistics of numerical column(s) in a DataFrame. If we don't specify column names, it will calculate summary statistics for all numerical columns present in the DataFrame.
End of explanation
"""
df.describe()
type(df.describe())
"""
Explanation: Note that describe() is a transformation, so the result is a DataFrame that we can collect or transform again.
End of explanation
"""
df.describe("_c0").show()
"""
Explanation: As we can see, the describe operation also works for String-type columns, but the output for mean and stddev is null, and the min & max values are calculated based on the ASCII values of the categories.
End of explanation
"""
unique_elements = df.select('_c0').distinct().collect()
len(unique_elements)
df.select('_c0').distinct().count()
# if we want to get the different number of conferences
df.select("_c3").distinct().show(5)
"""
Explanation: How to find the number of distinct products in the train and test files?
The distinct operation can be used here to calculate the number of distinct rows in a DataFrame. Let's apply the distinct operation to calculate the number of distinct products in df.
End of explanation
"""
df2 = spark.read.csv(path = '../data/people.csv', header = True,inferSchema = True)
df2.crosstab('Age[years]', 'Sex').show()
"""
Explanation: What if I want to calculate the pairwise frequency of categorical columns?
We can use the crosstab operation on a DataFrame to calculate the pairwise frequency of two columns. Let's apply crosstab to the Age[years] and Sex columns of the df2 DataFrame.
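Conceptually, crosstab just counts pairwise occurrences of two columns; a toy plain-Python sketch (with made-up data, not the contents of people.csv):

```python
from collections import Counter

# Two parallel categorical columns (illustrative values).
ages = ["20", "20", "30", "30", "30"]
sexes = ["M", "F", "M", "M", "F"]

# Count how often each (age, sex) pair occurs -- the cells of the crosstab.
table = Counter(zip(ages, sexes))
print(table[("30", "M")])
```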
End of explanation
"""
df2.select('Sex', 'Eye Color').show()
df2.select('Sex', 'Eye Color').dropDuplicates().show()
df2.select('Sex', 'Eye Color').count()
df2.select('Sex', 'Eye Color').dropDuplicates().count()
"""
Explanation: In the above output, the first column of each row contains the distinct values of Age and the column names are the distinct values of Sex. The name of the first column will be Age[years]_Sex. Pairs with no occurrences will have a zero count in the contingency table.
What if I want to get a DataFrame without the duplicate rows of a given DataFrame?
We can use the dropDuplicates operation to drop the duplicate rows of a DataFrame and get a DataFrame without duplicate rows.
If we apply this to the two columns Sex and Eye Color of df2, we get all unique rows for these columns.
End of explanation
"""
bf_train.count()
bf_train.dropna().count()
"""
Explanation: What if I want to drop all rows with null values?
The dropna operation can be used here. To drop rows from the DataFrame, it considers three options.
how– any or all. If any, drop a row if it contains any nulls. If all, drop a row only if all its values are null.
thresh – int, default None If specified, drop rows that have less than thresh non-null values. This overwrites the how parameter.
subset – optional list of column names to consider.
Let’t drop null rows in df2 with default parameters and count the rows in output DataFrame. Default options are any, None, None for how, thresh, subset respectively.
End of explanation
"""
bf_train.show(2)
bf_train.fillna(-1).show(2)
"""
Explanation: What if I want to fill the null values in a DataFrame with a constant number?
Use the fillna operation here. fillna takes two parameters to fill the null values.
value:
It will take a dictionary to specify which column will replace with which value.
A value (int , float, string) for all columns.
subset: Specify some selected columns.
Let’s fill -1 inplace of null values in train DataFrame.
End of explanation
"""
bf_train.filter(bf_train.Purchase > 15000).count()
"""
Explanation: What if I want to filter the rows in train that have a Purchase of more than 15000?
We can apply the filter operation on the Purchase column of the bf_train DataFrame to keep only the rows with values above 15000.
We need to pass a condition.
Let's apply filter on the Purchase column of the train DataFrame and print the number of rows with a purchase greater than 15000.
End of explanation
"""
bf_train.groupby('Age').agg({'Purchase': 'mean'}).show()
"""
Explanation: How to find the mean of each age group in train?
The groupby operation can be used here to find the mean of Purchase for each age group in bf_train. Let's see how we can get the mean purchase for each Age group in train.
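Conceptually, this is a grouped average; a plain-Python sketch with made-up (age, purchase) pairs (not the real Black Friday data):

```python
from collections import defaultdict

# Conceptual sketch of groupby('Age').agg({'Purchase': 'mean'}):
# accumulate a sum and a count per key, then divide.
def group_mean(pairs):
    sums = defaultdict(float)
    counts = defaultdict(int)
    for key, value in pairs:
        sums[key] += value
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

print(group_mean([("0-17", 100.0), ("0-17", 300.0), ("18-25", 50.0)]))
```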
End of explanation
"""
bf_train.groupby('Age').count().show()
"""
Explanation: We can also apply sum, min, max, count with groupby when we want to get different summary insight each group.
Let’s take one more example of groupby to count the number of rows in each Age group.
End of explanation
"""
t1 = bf_train.sample(False, 0.2, 42)
t2 = bf_train.sample(False, 0.2, 43)
t1.count(),t2.count()
"""
Explanation: How to create a sample DataFrame from the base DataFrame?
We can use the sample operation to take a sample of a DataFrame.
The sample method on a DataFrame returns a DataFrame containing a sample of the base DataFrame. It takes 3 parameters.
withReplacement = True or False to select a observation with or without replacement.
fraction = x, where x = .5 shows that we want to have 50% data in sample DataFrame.
seed to reproduce the result
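Conceptually, sampling without replacement at a given fraction keeps each row independently with that probability; a plain-Python sketch (not Spark's actual sampler):

```python
import random

# Conceptual sketch of sample(withReplacement=False, fraction, seed):
# each row is kept independently with probability `fraction`.
def sample_rows(rows, fraction, seed):
    rng = random.Random(seed)
    return [row for row in rows if rng.random() < fraction]

rows = list(range(100))
s1 = sample_rows(rows, 0.2, seed=42)
s2 = sample_rows(rows, 0.2, seed=42)
print(len(s1), s1 == s2)   # same seed reproduces the same sample
```

This also shows why the sampled counts vary slightly from run to run with different seeds: the fraction is a probability, not an exact row count.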
Let’s create the two DataFrame t1 and t2 from bf_train, both will have 20% sample of train and count the number of rows in each.
End of explanation
"""
bf_train.select('User_ID').rdd.map(lambda x:(x,1)).take(5)
"""
Explanation: How to apply map operation on DataFrame columns?
We can apply a function to each row of a DataFrame using the map operation. After applying this function, we get the result back as an RDD. Let's apply a map operation on the User_ID column of bf_train and print the first 5 elements of the mapped RDD of (x, 1) pairs.
End of explanation
"""
bf_train.orderBy(bf_train.Purchase.desc()).show(5)
"""
Explanation: How to sort the DataFrame based on column(s)?
We can use the orderBy operation on a DataFrame to get sorted output based on some column. The orderBy operation takes two arguments.
List of columns.
ascending = True or False for getting the results in ascending or descending order (a list in case of more than one column).
Let’s sort the train DataFrame based on Purchase.
End of explanation
"""
bf_train.withColumn('Purchase_new', bf_train.Purchase /2.0).select('Purchase','Purchase_new').show(5)
"""
Explanation: How to add a new column to a DataFrame?
We can use the withColumn operation to add a new column (or replace an existing one) in the base DataFrame, returning a new DataFrame. The withColumn operation takes 2 parameters.
The column name we want to add/replace.
An expression on the column.
Let's see how withColumn works by computing a new column Purchase_new in bf_train, obtained by dividing the Purchase column by 2.
End of explanation
"""
from pyspark.sql.types import StringType
from pyspark.sql.functions import udf
to_cat = udf(lambda x: "cheap" if x > 15000 else "expensive", StringType())
bf_train.withColumn('Purchase_cat', to_cat(bf_train["Purchase"])).select('Purchase_cat').show(5)
"""
Explanation: We can also use functions with withColumn
End of explanation
"""
bf_test.drop('Comb').columns
"""
Explanation: How to drop a column in DataFrame?
To drop a column from a DataFrame we can use the drop operation. Let's drop the column called Comb from bf_test and get the remaining columns.
End of explanation
"""
train, test = df.randomSplit([0.9, 0.1], seed=12345)
train.count()
test.count()
"""
Explanation: How to split a DataFrame into two new DataFrames
Sometimes we want to split a DataFrame randomly into separate parts.
End of explanation
"""
from pyspark.ml.linalg import SparseVector, DenseVector, Matrices
sv1 = SparseVector(3, [0, 2], [1.0, 3.0])
sv1
dv1 = DenseVector([1.0, 3.0])
dv1
"""
Explanation: Algebraic Structures
Dense and Sparse Vectors
A vector is a one-dimensional array of elements.
The natural Python implementation of a vector is as a one-dimensional list. However, in many applications, the elements of a vector have mostly zero values. Such a vector is said to be sparse.
It is inefficient to use a one-dimensional list to store a sparse vector. It is also inefficient to add elements whose values are zero in forming sums of sparse vectors. Consequently, we should choose a different representation.
A dense vector is the most natural implementation, using a one-dimensional list.
A sparse vector is represented by two parallel arrays: indices and values. Zero entries are not stored. A dense vector is backed by a double array representing its entries. For example, a vector [1., 0., 0., 0., 0., 0., 0., 3.] can be represented in the sparse format as (7, [0, 6], [1., 3.]), where 7 is the size of the vector, as illustrated below:
(source: https://databricks.com/blog/2014/07/16/new-features-in-mllib-in-spark-1-0.html)
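A pure-Python sketch of that representation (illustrative only; MLlib's actual SparseVector and DenseVector classes are used below):

```python
# (size, indices, values) stores only the nonzero entries;
# expanding it back to a dense list recovers the full vector.
def sparse_to_dense(size, indices, values):
    dense = [0.0] * size
    for i, v in zip(indices, values):
        dense[i] = v
    return dense

# The example from the text: [1., 0., 0., 0., 0., 0., 3.] as (7, [0, 6], [1., 3.])
print(sparse_to_dense(7, [0, 6], [1.0, 3.0]))
```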
End of explanation
"""
Matrices.dense(2,2,[1,2,3,4])
sparse_mat = Matrices.sparse(2,2,[0,1,2],[0,1],[1,1])  # colPtrs must start at 0 in CSC format
dense_mat = Matrices.dense(2,2,[1,2,3,4])
"""
Explanation: We can also have Sparse and Dense Matrices
End of explanation
"""
sparse_df = sc.parallelize([
(1, SparseVector(10, {1: 1.0, 2: 1.0, 3: 2.0, 4: 1.0, 5: 3.0})),
(2, SparseVector(10, {9: 100.0})),
(3, SparseVector(10, {1: 1.0})),
]).toDF(["row_num", "features"])
sparse_df.show()
dense_df = sc.parallelize([
(1, DenseVector([1,2,3,4])),
(2, DenseVector([1,2,3,4])),
(3, DenseVector([1,3,4,5])),
]).toDF(["row_num", "features"])
dense_df.show()
"""
Explanation: It is common to, instead of represent features as variables, to represent all instance variables as a vector, which indeed can be sparse.
End of explanation
"""
auto_df = spark.read.csv(path = '../data/auto-mpg.csv',
header = True,
inferSchema = True)
auto_df.printSchema()
"""
Explanation: When to Exploit Sparsity
from: https://databricks.com/blog/2014/07/16/new-features-in-mllib-in-spark-1-0.html
For many large-scale datasets, it is not feasible to store the data in a dense format. Nevertheless, for medium-sized data, it is natural to ask when we should switch from a dense format to sparse. In MLlib, a sparse vector requires 12 * nnz + 4 bytes of storage, where nnz is the number of nonzeros, while a dense vector needs 8 * n bytes, where n is the vector size. So storage-wise, the sparse format is better than the dense format when more than 1/3 of the elements are zero. However, assuming that the data can be fit into memory in both formats, we usually need sparser data to observe a speedup, because the sparse format is not as efficient as the dense format in computation. Our experience suggests a sparsity of around 10%, while the exact switching point for the running time is indeed problem-dependent.
Pipelines
from: https://spark.apache.org/docs/2.3.2/ml-pipeline.html
In this section, we introduce and practice with the concept of ML Pipelines.
ML Pipelines provide a uniform set of high-level APIs built on top of DataFrames that help users create and tune practical machine learning pipelines.
MLlib standardizes APIs for machine learning algorithms to make it easier to combine multiple algorithms into a single pipeline, or workflow. This section covers the key concepts introduced by the Pipelines API, where the pipeline concept is mostly inspired by the scikit-learn project.
DataFrame: This ML API uses DataFrame from Spark SQL as an ML dataset, which can hold a variety of data types. E.g., a DataFrame could have different columns storing text, feature vectors, true labels, and predictions.
Transformer: A Transformer is an algorithm which can transform one DataFrame into another DataFrame. E.g., an ML model is a Transformer which transforms a DataFrame with features into a DataFrame with predictions.
Estimator: An Estimator is an algorithm which can be fit on a DataFrame to produce a Transformer. E.g., a learning algorithm is an Estimator which trains on a DataFrame and produces a model.
Pipeline: A Pipeline chains multiple Transformers and Estimators together to specify an ML workflow.
Transformers
A Transformer is an abstraction that includes feature transformers and learned models.
Technically, a Transformer implements a method transform(), which converts one DataFrame into another, generally by appending one or more columns.
For example:
A feature transformer might take a DataFrame, read a column (e.g., text), map it into a new column (e.g., feature vectors), and output a new DataFrame with the mapped column appended.
A learning model might take a DataFrame, read the column containing feature vectors, predict the label for each feature vector, and output a new DataFrame with predicted labels appended as a column.
Estimators
An Estimator abstracts the concept of a learning algorithm or any algorithm that fits or trains on data. Technically, an Estimator implements a method fit(), which accepts a DataFrame and produces a Model, which is a Transformer.
For example, a learning algorithm such as LogisticRegression is an Estimator, and calling fit() trains a LogisticRegressionModel, which is a Model and hence a Transformer.
Code examples
We will see two examples of how to train supervised Machine Learning models. Supervised models try to learn from labeled datasets. This means that we have a dataset with some variables for each occurrence and a label for that occurrence.
The main objective is to predict a label for a new occurrence for which we don't have the label.
Regression: Predicting a Continuous Variable
In the following example we will load the auto-mpg.csv dataset. The description says (http://archive.ics.uci.edu/ml/datasets/Auto+MPG)
"The data concerns city-cycle fuel consumption in miles per gallon, to be predicted in terms of 3 multivalued discrete and 5 continuous attributes." (Quinlan, 1993)
So, first of all, let's load the dataset:
End of explanation
"""
pred_vars = ['cylinders', 'displacement', 'weight', 'acceleration', 'year', 'origin']
"""
Explanation: Our main goal is to build a predictive model that takes some input variables representing the vehicle features, and outputs the consumption of the vehicle in miles per gallon.
End of explanation
"""
from pyspark.ml.feature import VectorAssembler
vectorAssembler = VectorAssembler(
inputCols = pred_vars,
outputCol = 'features')
train_df = vectorAssembler.transform(auto_df)
train_df = train_df.withColumn("label", auto_df["mpg"])
train_df = train_df.select(['features', 'label'])
train_df.show(3)
"""
Explanation: To predict the consumption we will use a Linear Regression model. We won't go into details about the model itself, but we have to take into account the following considerations:
The model will take as input a Vector representing the vehicle characteristics
All input variables must be numeric variables
The Linear Regressor model is an Estimator
A Linear Regressor is a Supervised Machine Learning algorithm
The model obtained is a Transformer
To generate the training dataset we will use a VectorAssembler, which is a transformer.
VectorAssembler takes a DataFrame as input and outputs the same DataFrame with the specified columns assembled into a single vector column.
End of explanation
"""
from pyspark.ml.regression import LinearRegression
# LinearRegression is an Estimator
lr = LinearRegression(maxIter=10,
regParam=0.3,
elasticNetParam=0.8)
# Fit the model
lrModel = lr.fit(train_df)
# lrModel will contain a Transformer
type(lrModel)
"""
Explanation: Then we create a LinearRegression Estimator (we skip the parameter details here). Remember that lrModel is a Transformer.
End of explanation
"""
predictions = lrModel.transform(train_df.select(['features']))
predictions.show(5)
"""
Explanation: To make the predictions over the dataset, we just have to apply the transformer over the features of a certain dataset.
End of explanation
"""
df = spark.createDataFrame([
(0,"the cat in the mat is flat"),
(1,"the mouse with the hat is nice")
], ["id","text"])
from pyspark.ml.feature import Tokenizer
tokenizer = Tokenizer(inputCol="text", outputCol="words")
tok_df = tokenizer.transform(df)
tok_df.select("words").collect()
"""
Explanation: Classification: Learning to predict text classes
In this second example we will see how to classify text using a Logistic Regression model. The aim of this example is to learn how to concatenate some Transformers in a Pipeline.
We will use two transformers and an estimator:
Tokenizer: it is a Transformer that takes a textual column as input and generates a vector of tokenized words
End of explanation
"""
from pyspark.ml.feature import CountVectorizer
count_vec = CountVectorizer(inputCol="words", outputCol="features")
counter = count_vec.fit(tok_df)
count_df = counter.transform(tok_df)
count_df.show()
counter.vocabulary
"""
Explanation: CountVectorizer: Convert a list of words into a vector of variables. It does so by converting each word into an index and then at each position (which represents the word in the vector) counts the occurrences of the word in the original list.
End of explanation
"""
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
# Prepare training documents from a list of (id, text, label) tuples.
training = spark.createDataFrame([
(0, "a b c d e spark", 1.0),
(1, "b d", 0.0),
(2, "spark f g h", 1.0),
(3, "hadoop mapreduce", 0.0)
], ["id", "text", "label"])
# Configure an ML pipeline, which consists of three stages: tokenizer, hashingTF, and lr.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = CountVectorizer(inputCol=tokenizer.getOutputCol(), outputCol="features")
lr = LogisticRegression(maxIter=10, regParam=0.001)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
# Fit the pipeline to training documents.
model = pipeline.fit(training)
# Prepare test documents, which are unlabeled (id, text) tuples.
test = spark.createDataFrame([
(4, "spark i j k"),
(5, "l m n"),
(6, "spark hadoop spark"),
(7, "apache hadoop")
], ["id", "text"])
# Make predictions on test documents and print columns of interest.
prediction = model.transform(test)
selected = prediction.select("id", "text", "probability", "prediction")
for row in selected.collect():
rid, text, prob, prediction = row
print("(%d, %s) --> prob=%s, prediction=%f" % (rid, text, str(prob), prediction))
"""
Explanation: LogisticRegression: takes some input variables with a label and constructs a classification model
Pipeline: puts together transformers and estimators
End of explanation
"""
auto_df = spark.read.csv(path = '../data/auto-mpg.csv',
header = True,
inferSchema = True)
pred_vars = ['cylinders', 'displacement', 'weight', 'acceleration', 'year', 'origin']
vectorAssembler = VectorAssembler(
inputCols = pred_vars,
outputCol = 'features')
vec_auto_df = vectorAssembler.transform(auto_df)
vec_auto_df = vec_auto_df.withColumn("label", auto_df["mpg"])
vec_auto_df = vec_auto_df.select(['features', 'label'])
train_auto_df, test_auto_df = vec_auto_df.randomSplit([0.9, 0.1], seed=12345)
lr = LinearRegression(maxIter=10,
regParam=0.3,
elasticNetParam=0.8)
# Fit the model
lrModel = lr.fit(train_auto_df)
predicted_auto_df = lrModel.transform(test_auto_df)
from pyspark.ml.evaluation import RegressionEvaluator
lr_evaluator = RegressionEvaluator(predictionCol="prediction", \
labelCol="label", metricName="mae")
results = lr_evaluator.evaluate(predicted_auto_df)
print("R Squared (R2) on test data = %g" % results)
"""
Explanation: Model Evaluation
Train/Test separation
The most straightforward approach to evaluate a supervised model is to split the original dataset into two subsets.
Training subset: this set is used to train a model. Normally, supervised Machine Learning algorithms try to minimize an error value, so algorithms use features and labels from the training dataset to learn a model that minimizes this error.
However, if we expose the algorithm to too much training, it may begin to memorize occurrences in the training dataset. This is the so-called overfitting problem, which leads to poor generalization: at the end of the day we have a model that performs very well on the training data but makes bad predictions on new occurrences.
Test subset: if we separate a group of occurrences that are hidden from the algorithm during training, we can use these occurrences to check how our model behaves on new occurrences. We can do so because the test subset is a properly labelled set of data, so we can check the difference between the actual label and the predicted label. With this difference we can assess the quality of the model.
Evaluation of a Regression Model
To evaluate a Regression model we can use the following metrics:
Mean Absolute Error (MAE) is the mean of the absolute value of the errors:
$$ \frac{1}{n} \sum_{i=1}^{n} |y_{i}-\hat{y}_{i}| $$
Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors:
$$ \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_{i}-\hat{y}_{i})^2} $$
End of explanation
"""
from pyspark.ml.feature import StringIndexer
from pyspark.sql.types import StringType
from pyspark.sql.functions import udf
iris_df = spark.read.csv(path = '../data/iris.data',
header = False,
inferSchema = True)
iris_df.printSchema()
setosa_udf = udf(lambda x: "a" if x == "Iris-setosa" else "b", StringType())
iris_df = iris_df.withColumn("_c5", setosa_udf(iris_df["_c4"]))
pred_vars = ['_c0', '_c1', '_c2', '_c3']
vectorAssembler = VectorAssembler(
inputCols = pred_vars,
outputCol = 'features')
vec_iris_df = vectorAssembler.transform(iris_df)
indexer = StringIndexer(inputCol="_c5", outputCol="categoryIndex")
vec_iris_df = indexer.fit(vec_iris_df).transform(vec_iris_df)
vec_iris_df.sample(False, .05).show()
vec_iris_df = vec_iris_df.withColumn("label", vec_iris_df["categoryIndex"])
train_iris_df, test_iris_df = vec_iris_df.randomSplit([0.9, 0.1], seed=12345)
lr = LogisticRegression(maxIter=10, regParam=0.001)
# Fit the model
lrModel = lr.fit(train_iris_df)
#predict
predictions = lrModel.transform(test_iris_df)
from pyspark.mllib.evaluation import MulticlassMetrics
predictionAndLabels = predictions.rdd.map(lambda x: (x.prediction, x.label))
metrics = MulticlassMetrics(predictionAndLabels)
print("accuracy: {}".format(metrics.accuracy))
print("recall: {}".format(metrics.recall()))
print("precision: {}".format(metrics.precision()))
print("f1 measure: {}".format(metrics.fMeasure()))
"""
Explanation: Evaluation of a Classification Model
We will focus on the evaluation of a binary classification model. This means that the classification model classifies between two exclusive classes: "a" and "b", for example.
A binary classifier can be seen as a classifier telling whether an occurrence belongs to one of the classes, say "a". So, when the classifier outputs a true value, it means that the occurrence belongs to class "a". If the classifier outputs a false value, it means that the occurrence does not belong to class "a", and hence it belongs to class "b".
source: https://en.wikipedia.org/wiki/Precision_and_recall
When we classify a new occurrence, we can have the following 4 cases:
source: https://en.wikipedia.org/wiki/Precision_and_recall
True Positive: The prediction and the actual value are the same, a positive value
True Negative: The prediction and the actual value are the same, a negative value
False Positive: The prediction and the actual value differ, the actual value is negative but the predicted value is positive
False Negative: The prediction and the actual value differ, the actual value is positive but the predicted value is negative
Taking these definitions into account we can define the following metrics:
Accuracy: among all the samples, how many are classified correctly
$$ acc = \frac{TP+TN}{TP+TN+FP+FN}$$
Precision: of those the model labeled as positive, how many are actually positive
$$ prec = \frac{TP}{TP+FP} $$
Recall: of those that are actually positive, how many the model labels correctly
$$ rec = \frac{TP}{TP+FN} $$
F1 measure: the harmonic mean of precision and recall
$$ F_1 = 2 \cdot \frac{prec \cdot rec}{prec + rec} $$
End of explanation
"""
aboSamoor/compsocial | Word_Tracker/3rd_Yr_Paper/PsychoInfo.ipynb | gpl-3.0
from tools import get_psycinfo_database
words_df = get_psycinfo_database()
words_df.head()
#words_df.to_csv("data/PsycInfo/processed/psychinfo_combined.csv.bz2", encoding='utf-8',compression='bz2')
"""
Explanation: Merge CSV databases
End of explanation
"""
#psychinfo = pd.read_csv("data/PsycInfo/processed/psychinfo_combined.csv.bz2", encoding='utf-8', compression='bz2')
psychinfo = words_df
"""
Explanation: Load PsychINFO unified database
End of explanation
"""
abstract_occurrence = []
for x,y in psychinfo[["Term", "Abstract"]].fillna("").values:
if x.lower() in y.lower():
abstract_occurrence.append(1)
else:
abstract_occurrence.append(0)
psychinfo["term_in_abstract"] = abstract_occurrence
title_occurrence = []
for x,y in psychinfo[["Term", "Title"]].fillna("").values:
if x.lower() in y.lower():
title_occurrence.append(1)
else:
title_occurrence.append(0)
psychinfo["term_in_title"] = title_occurrence
psychinfo_search = psychinfo.drop('Abstract', 1)
psychinfo_search = psychinfo_search.drop('Title', 1)
term_ID = {"multiculturalism": 1, "polyculturalism": 2, "cultural pluralism": 3,
"monocultural": 4, "monoracial": 5, "bicultural": 6,
"biracial": 7, "biethnic": 8, "interracial": 9,
"multicultural": 10, "multiracial": 11, "polycultural": 12,
"polyracial": 13, "polyethnic": 14, "mixed race": 15,
"mixed ethnicity": 16, "other race": 17, "other ethnicity": 18}
psychinfo_search["term_ID"] = psychinfo_search.Term.map(term_ID)
psychinfo_search["Type of Book"].value_counts()
type_of_book = { 'Handbook/Manual': 1, 'Textbook/Study Guide': 2, 'Conference Proceedings': 3,
'Reference Book': 2, 'Classic Book': 4,'Handbook/Manual\n\nTextbook/Study Guide': 5,
'Reference Book\n\nTextbook/Study Guide': 5,'Classic Book\n\nTextbook/Study Guide': 5,
'Handbook/Manual\n\nReference Book': 5,'Conference Proceedings\n\nTextbook/Study Guide': 5,
'Reference Book\r\rTextbook/Study Guide': 5,'Conference Proceedings\r\rTextbook/Study Guide': 5}
psychinfo_search["type_of_book"] = psychinfo_search["Type of Book"].map(type_of_book)
psychinfo_search["cited_references"] = psychinfo_search['Cited References'].map(lambda text:len(text.strip().split("\n")),"ignore")
psychinfo_search['Document Type'].value_counts()
document_type = {'Journal Article': 1, 'Dissertation': 2, 'Chapter': 3, 'Review-Book': 4,
'Comment/Reply': 6, 'Editorial': 6, 'Chapter\n\nReprint': 3,
'Erratum/Correction': 6, 'Review-Media': 6, 'Abstract Collection': 6,
'Letter': 6, 'Obituary': 6, 'Chapter\n\nComment/Reply': 3, 'Column/Opinion': 6,
'Reprint': 5, 'Bibliography': 5, 'Journal Article\n\nReprint': 1,
'Chapter\r\rReprint': 3, 'Chapter\n\nJournal Article\n\nReprint': 3,
'Bibliography\n\nChapter': 3, 'Encyclopedia Entry': 5,
'Chapter\r\rJournal Article\r\rReprint': 3, 'Review-Software & Other': 6,
'Publication Information': 6, 'Journal Article\r\rReprint': 1,
'Reprint\n\nReview-Book': 4}
psychinfo_search['document_type'] = psychinfo_search['Document Type'].map(document_type)
psychinfo_search["conference_dich"] = psychinfo_search["Conference"].fillna("").map(lambda x: int((len(x) > 0)))
psychinfo_search['Publication Type'].value_counts()
publication_type = {'Journal\n\nPeer Reviewed Journal': 1, 'Book\n\nEdited Book': 3,
'Dissertation Abstract': 2, 'Book\n\nAuthored Book': 3,
'Journal\r\rPeer Reviewed Journal': 1, 'Electronic Collection': 1,
'Journal\n\nPeer-Reviewed Status-Unknown': 1, 'Book\r\rEdited Book': 3,
'Book': 3, 'Journal\r\rPeer-Reviewed Status-Unknown': 1,
'Book\r\rAuthored Book': 3, 'Encyclopedia': 4}
psychinfo_search['publication_type'] = psychinfo_search['Publication Type'].map(publication_type)
(psychinfo_search["publication_type"] * psychinfo_search["conference_dich"]).value_counts()
selection = (psychinfo_search["publication_type"] == 3) * (psychinfo_search["conference_dich"] == 1)
psychinfo_search[selection][["Publication Type", "Conference"]]
psychinfo_search['Language'].value_counts()
language = {'English': 1, 'French': 2, 'Spanish': 3, 'Italian': 4, 'German': 5, 'Portuguese': 6,
'Dutch': 7, 'Chinese': 8, 'Greek': 9, 'Hebrew': 10, 'Turkish': 10, 'Russian': 10,
'Serbo-Croatian': 10, 'Slovak': 10, 'Japanese': 10, 'Hungarian': 10, 'Czech': 10,
'Danish': 10, 'Romanian': 10, 'Polish': 10, 'Norwegian': 10, 'Swedish': 10, 'Finnish': 10,
'NonEnglish': 10, 'Arabic': 10, 'Afrikaans': 10}
psychinfo_search['language'] = psychinfo_search['Language'].map(language)
#psychinfo_search["PsycINFO Classification Code"].value_counts().to_csv("data/PsycInfo/processed/PsycINFO_Classification_Code.csv")
#psychinfo_search["Tests & Measures"].value_counts().to_csv("data/PsycInfo/processed/Tests_&_Measures.csv")
#psychinfo_search["Key Concepts"].value_counts().to_csv("data/PsycInfo/processed/Key_Concepts.csv")
#psychinfo_search["Location"].value_counts().to_csv("data/PsycInfo/processed/Location.csv")
#psychinfo_search["MeSH Subject Headings"].value_counts().to_csv("data/PsycInfo/processed/MeSH_Subject_Headings.csv")
#psychinfo_search["Journal Name"].value_counts().to_csv("data/PsycInfo/processed/Journal_Name.csv")
#psychinfo_search["Institution"].value_counts().to_csv("data/PsycInfo/processed/Institution.csv")
len(psychinfo_search["Population Group"].value_counts())
#psychinfo_search["Methodology"].value_counts()
import re

def GetCats(text):
    # extract every numeric classification code, round each down to its
    # top-level (hundreds) category, and flag entries that span more than
    # one top-level category with the code 4300
    pattern = re.compile("([0-9]+)")
    results = [100*(int(x)//100) for x in pattern.findall(text)]
    if len(set(results)) > 1:
        return 4300
    else:
        return results[0]
psychinfo_search["PsycINFO_Classification_Code"] = psychinfo_search["PsycINFO Classification Code"].map(GetCats, "ignore")
lists = psychinfo["PsycINFO Classification Code"].map(GetCats, "ignore")
len(set([x for x in lists.dropna()]))
#Number of unique categories
psychinfo_search["grants_sponsorship"] = psychinfo_search["Grant/Sponsorship"].fillna("").map(lambda x: int(len(x) > 0))
#psychinfo_search.to_csv("data/PsycInfo/processed/psychinfo_term_search.csv.bz2", encoding='utf-8', compression='bz2')
#psychinfo_search = psychinfo_search.drop('Title', 1)
#psychinfo_search["Methodology"].value_counts().to_csv("data/PsycInfo/Manual_Mapping/Methodology.csv")
#psychinfo_search["Population Group"].value_counts().to_csv("data/PsycInfo/Manual_Mapping/Population_Group.csv")
"""
Explanation: Term appearance in abstract and title
End of explanation
"""
len(psychinfo_search["Population Group"].value_counts())
"""
Explanation: PsycINFO Tasks
Keep the current spreadsheet and add the following:
1. ~~Add Term in Abstract to spreadsheet~~ (word co-occurrence and control for the length of the abstract--lambda(len(abstract)) )do this for NSF/NIH data as well
1. ~~Add Term in Title to spreadsheet~~
1. ~~Copy the word data into a new column (title it 'terms')--> code them as the following: 1 = multiculturalism, 2 = polyculturalism, 3 = cultural pluralism, 4 = monocultural, 5 = monoracial, 6 = bicultural, 7 = biracial, 8 = biethnic, 9 = interracial, 10 = multicultural, 11 = multiracial, 12 = polycultural, 13 = polyracial, 14 = polyethnic, 15 = mixed race, 16 = mixed ethnicity, 17 = other race, 18 = other ethnicity~~
1. Search all options in set for the following categories: -- I will manually categorize them once you give all options in each set
1. ~~"Type of Book"~~
1. ~~"PsycINFO Classification Code"~~
~~1. (used the classification codes[recoded to most basic category levels] -- subcategories
created by PsycInfo (22)-- multiple categories = 4300)~~
1. ~~"Document Type"~~
1. ~~"Grant/Scholarship"~~
1. ~~(create a dichotomized variable 0/1)~~
1. ~~"Tests & Measures"--> csv (no longer necessary)~~
1. ~~(Too many categories---needs to be reviewed manually/carefully in excel)~~
1. ~~"Publication Type"~~
1. ~~"Publication Status"~~
1. "Population Group"
1. (Need to be mapped manually and then recategorized)
1. We need: gender, age (abstract, years)
1. "Methodology"
1. (can make specific methods dichotomous--may remove if unnecessary)
1. "Conference"
1. ~~Right now, this is text (~699 entries)--> dichotomize variable.~~
~~If it is a conference ie there is a text = 1, if there is NaN = 0.~~
1. Then, I will incorporate this as a new category in "Publication Type" and remove this column).??? [not currently included as a category--overlaps with category 3 in Publication Type = Books]
1. "Key Concepts"--> csv
1. (word co-occurrence)
1. "Location"-->csv--> sent to Barbara
1. (categorized by region--multiple regions)
1. ~~"Language"~~
~~1. I am not sure about my "other" language (10) category -- I put everything with less
than 10 entries into one category.~~
1. "MeSH Subject Headings"--> csv (may no longer be necessary?)
1. (word co-occurrence)
1. "Journal Name"-->csv--> sent to Jian Xin
1. (categorized by H-index in 2014)
1. "Institution"-->csv --> sent to Barbara
1. (categorized by state, region & country)
1. ~~Count the number of cited references for each entry~~
***Once we extract the csv files for these columns, I will categorize them.
Once all of these corrections have been made, make a new spreadsheet and delete the following information:
1. Volume
1. Publisher
1. Accession Number
1. Author(s)
1. Issue
1. Cited References
1. Publication Status (had no variance)--only first posting
1. Document Type???
End of explanation
"""
manoharan-lab/structural-color | detector_tutorial.ipynb | gpl-3.0
import numpy as np
import matplotlib.pyplot as plt
import structcol as sc
from structcol import refractive_index as ri
from structcol import montecarlo as mc
from structcol import detector as det
import pymie as pm
from pymie import size_parameter, index_ratio
import time
# For Jupyter notebooks only:
%matplotlib inline
"""
Explanation: Tutorial for using detectors in the structural-color package
Copyright 2016, Vinothan N. Manoharan, Victoria Hwang, Annie Stephenson
This file is part of the structural-color python package.
This package is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This package is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this package. If not, see http://www.gnu.org/licenses/.
Introduction to how detectors are implemented in the package
Single scattering model
TODO: add description and examples for detection in single scattering model
Monte Carlo model
In the Monte Carlo model, the detector first comes into play in calculating reflectance or transmittance using the calc_refl_trans() function. If no detector parameters are specified in the arguments of the function, the reflectance and transmittance returned are the reflectance integrated over the total reflection hemisphere and the transmittance integrated over the total transmittance hemisphere.
There are two ways to change the type of detection used to calculate reflectance. At this time, the detectors aren't implemented for transmittance, but it is certainly possible to implement them in the future. The different detectors and how to use them are explained below.
Loading and using the package
End of explanation
"""
# incident light wavelength
wavelength = sc.Quantity('600 nm')
# sample parameters
radius = sc.Quantity('0.140 um')
volume_fraction = sc.Quantity(0.55, '')
n_imag = 2.1e-4*1j
n_particle = ri.n('polystyrene', wavelength) + n_imag # refractive indices can be specified as pint quantities or
n_matrix = ri.n('vacuum', wavelength) # called from the refractive_index module. n_matrix is the
n_medium = ri.n('vacuum', wavelength) # space within sample. n_medium is outside the sample
n_sample = ri.n_eff(n_particle, # refractive index of sample, calculated using Bruggeman approximation
n_matrix,
volume_fraction)
thickness = sc.Quantity('80 um')
boundary = 'film'
# Monte Carlo parameters
ntrajectories = 300 # number of trajectories
nevents = 200 # number of scattering events in each trajectory
"""
Explanation: Run Monte Carlo trajectories
for a single wavelength
set system parameters
End of explanation
"""
# Calculate the phase function and scattering and absorption coefficients from the single scattering model
p, mu_scat, mu_abs = mc.calc_scat(radius, n_particle, n_sample, volume_fraction, wavelength)
# Initialize the trajectories
r0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary)
r0 = sc.Quantity(r0, 'um')
k0 = sc.Quantity(k0, '')
W0 = sc.Quantity(W0, '')
# Generate a matrix of all the randomly sampled angles first
sintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p)
# Create step size distribution
step = mc.sample_step(nevents, ntrajectories, mu_scat)
# Create trajectories object
trajectories = mc.Trajectory(r0, k0, W0)
# Run photons
trajectories.absorb(mu_abs, step)
trajectories.scatter(sintheta, costheta, sinphi, cosphi)
trajectories.move(step)
"""
Explanation: initialize and run trajectories
End of explanation
"""
# Calculate reflectance
reflectance, _ = det.calc_refl_trans(trajectories, thickness, n_medium, n_sample, boundary)
print('Reflectance = '+ str(reflectance))
"""
Explanation: Calculate reflectance with default detector
This calculates the reflectance for the full reflection hemisphere
End of explanation
"""
# Set detector parameter
detection_angle = sc.Quantity('80 degrees')
# Calculate reflectance
reflectance, _ = det.calc_refl_trans(trajectories, thickness, n_medium, n_sample, boundary,
detection_angle = detection_angle)
print('Reflectance = '+ str(reflectance))
"""
Explanation: Calculate reflectance with large aperture detector
This detects only the trajectories that exit at an angle less than or equal to a specified detection angle.
To use this detector, you must specify an extra parameter in the calc_refl_trans() function:
detection_angle [structural color Quantity, angle]
This is the maximum angle that is detected. It is defined relative to the vector normal to surface of the sample. For example, a detection angle of 0 degrees corresponds to no angles detected,
and a detection angle of 90 degrees corresponds to the full reflection hemisphere.
End of explanation
"""
# Set detector parameters
detector = True
det_theta = sc.Quantity('45 degrees')
det_len = sc.Quantity('5 cm')
det_dist = sc.Quantity('10 cm')
plot_detector = True
# Calculate reflectance
reflectance, _ = det.calc_refl_trans(trajectories, thickness, n_medium, n_sample, boundary,
detector = detector,
det_theta = det_theta,
det_len = det_len,
det_dist = det_dist,
plot_detector = plot_detector)
print('Reflectance = '+ str(reflectance))
"""
Explanation: Calculate reflectance with goniometer detector
This detects only the trajectories that exit into a detector aperture with a specified size and position. The detector is modeled after a goniometer detector, which is on an arm of fixed length that can rotate around the sample in the center.
To use this detector, you must specify 4 (optionally 5) parameters to be passed into calc_refl_trans().
1. detector [boolean]
<br>
This must be set to True.
<br>
<br>
2. det_theta [structural color Quantity, angle]
<br>
This is the angle that the detector is centered at. The angle is defined relative to the vector normal to the surface of the sample. For example a det_theta of 0 degrees would correspond to a detector normal to the sample, and a det_theta of 90 degrees would correspond to a detector at the side of the sample.
<br>
<br>
3. det_len [structural color Quantity, length]
<br>
This is the side length of the detector, assuming a square detector
<br>
<br>
4. det_dist [structural color Quantity, length]
<br>
This is the distance between the center of the detector detector and the center of the sample surface.
<br>
<br>
5. plot_detector [boolean, optional]
<br>
This can be set to True to plot the exit positions of the trajectories in the detection hemisphere, with detected trajectories encirled in an orange ring.
<br>
<br>
End of explanation
"""
refl_renorm = det.normalize_refl_goniometer(reflectance, det_dist, det_len)
print('Renormalized Reflectance: ' + str(refl_renorm))
"""
Explanation: Renormalizing goniometer detector results
Because detection in a small angle range can lead to very small reflectances, it can be useful to normalize to the reflectance of a lambertian reflector for the given angle detector range, rather than to the total beam intensity. This will lead to results that can more easily be converted to a color for visualization.
This normalization scheme makes several key assumptions:
1. The area of the detection hemisphere spanned by the detector aperture is
a square. As the detector size approaches the diameter of the detection
hemisphere, this assumption becomes worse. In reality, the detection hemisphere
area spanned by the detector is the projection of a square on the sphere
surface, which looks like a curved square patch.
2. The reference reflector (maximum reflectance) is that of a Lambertian
reflector, meaning the reflectance is uniform over the detection hemisphere
and that the integrated reflected intensity is equal to the intensity
of the incident beam. This means that if the sample has a specular
component, the reflectance could be greater than one for the specular angle.
The normalization formula is:
refl_renormalized = $\textsf{reflectance} \times \frac{\textsf{area of detection hemisphere}}{\textsf{area detected}}$
We are just scaling up the reflectance based on the area detected relative to the total possible area that can be detected.
End of explanation
"""
BL-Labs/poetryhunt | Cluster experiment 2.ipynb | mit
%matplotlib inline
# Load this library to make the graphs interactive for smaller samples
#import mpld3
#mpld3.enable_notebook()
# Turns out, multiple interactive scattergraphs with 170,000+ points each is a bit too much for a browser
# Who knew?!
from clustering_capitals import create_cluster_dataset, NewspaperArchive
DBFILE = "1745-55.db"
n = NewspaperArchive(textareas="/datastore/burneytextareas")
a = n.get_areas(newspaper="B0574REMEMBRA", year = "1748", month = "03", day = "05")
pg1a1 = a['0001']['001'][0]
print(len(pg1a1['lines']), len(pg1a1['line_widths'][:len(pg1a1['lines'])-1]))
print(pg1a1['line_widths'][:len(pg1a1['lines'])-1][-1])
# Get/create the dataset:
ds = create_cluster_dataset(n, daterange = [1745, 1755], dbfile = DBFILE) # , refresh = True)
"""
Explanation: Clustering experiment #2
========================
Plan:
Derive a vector for left and righthand alignment variance for each contiguous block of text in each article (newspaper has pages, which hold articles, which consist of blocks).
Create a k-means clustering (k=12 in the code below) using data from all newspapers from 1745 to 1755.
Visualise the clustering
Given the set of found poems, see into which clusters the poems get assigned.
Report on the spread of these and if a cluster is found which just has poems, report on all of the references within that cluster.
End of explanation
"""
data, transform, id_list = ds
print(data)
print(transform.get_feature_names())
"""
Explanation: What do these 'vectors' look like? What do the columns refer to?
End of explanation
"""
from clustering_capitals import ClusterDB
db = ClusterDB(DBFILE)
item = dict(db.vecidtoitem(id_list[1]))
print(item)
print(transform.inverse_transform(data[1]))
from burney_data import BurneyDB
bdb = BurneyDB("burney.db")
titlemd = bdb.get_title_row(titleAbbreviation=item['newspaper'])
entry = bdb.get_entry_row(year=item['year'], month=item['month'], day=item['day'], title_id= titlemd['id'])
issue = bdb.get_issue_row(id=entry['issue_id'])
print(titlemd)
print(issue)
print(entry)
vector = db.vector(id_list[1])
print(dict(vector))
mask = {'ave_lsp': 1.0, 'density':1.0, 'ltcount':0.0, 'redge_x2ave':0.0, 'st_caps':1.0,
'st_nums':1.0, 'x1_var1':1.0, 'x1_var2':0.0, 'x1ave_ledge':0.0, 'x2_var1':1.0, 'x2_var2':0.0}
m_vec = transform.transform(mask)
print(m_vec)
import numpy as np
from matplotlib import pyplot as plt
# Mask off leaving just the left and right variance columns
npdata = data.toarray()
mask = np.ones((11), dtype=bool)
# remember: ['ave_lsp', 'density', 'ltcount', 'redge_x2ave', 'st_caps',
# 'st_nums', 'x1_var1', 'x1_var2', 'x1ave_ledge', 'x2_var1', 'x2_var2']
mask[[0,1,2,3,4,5,7,8,10]] = False
marray = npdata[:,mask]
"""
Explanation: Going from a vector back to the metadata reference:
By keeping an 'id_list', we can look up the identifier for any vector in the list from the database we've made for this clustering attempt. This lets us look up what the reference for that is, and where we can find it:
End of explanation
"""
plt.scatter(marray[:,0], marray[:,1], marker = ".", s = [2] * len(marray), linewidths=[0.0] * len(marray))
plt.show()
"""
Explanation: x1 vs x2 varience?
What is the rough shape of this data? The varience of x1 and x2 are equivalent to the left and right alignment of the text varies in a given block of text.
End of explanation
"""
# Build the clustering and show the individual clusters as best we can:
from sklearn.cluster import KMeans
cl_mask = np.ones((11), dtype=bool)
# remember: ['ave_lsp', 'density', 'ltcount', 'redge_x2ave', 'st_caps',
# 'st_nums', 'x1_var1', 'x1_var2', 'x1ave_ledge', 'x2_var1', 'x2_var2']
# so, we should cluster on ave_lsp, density, st_caps, st_nums, x1_var1, x2_var1:
cl_mask[[2,3,7,8,10]] = False
cl_marray = npdata[:,cl_mask]
estimator = KMeans(n_clusters=12)
clusters = estimator.fit(cl_marray)
labels = estimator.labels_
def isol(label, labels):
for l in labels.astype(np.float):
if l != label:
yield "#444444"
else:
yield "#FF3355"
def highlight(label, labels):
for l in labels.astype(np.float):
if l != label:
yield 2
else:
yield 4
# plot graphs of ave_lsp vs x2_var1?
for label in set(labels):
print("Cluster: {0} - x1_var1 vs x2_var2".format(label))
plt.scatter(cl_marray[:,4], cl_marray[:,5], c=list(isol(label, labels)), marker = ".",
s = list(highlight(label, labels)), linewidths=[0.0] * len(marray))
plt.show()
"""
Explanation: Attempting K-Means
What sort of clustering algorithm to employ is actually a good question. K-means can give fairly meaningless responses if the data is of a given sort. Generally, it can be useful but cannot be used blindly.
Given the data above, it might be a good start however.
End of explanation
"""
# plot graphs of ave_lsp vs x2_var1?
for label in [4,11]:
print("Cluster: {0} - ave_lp vs density".format(label))
plt.scatter(cl_marray[labels == label,0], cl_marray[labels == label,1], marker = ".", linewidths=1)
plt.show()
print("Cluster: {0} - st_caps vs st_num".format(label))
plt.scatter(cl_marray[labels == label,2], cl_marray[labels == label,3], marker = ".", linewidths=1)
plt.show()
print("Cluster: {0} - x1_var1 vs x2_var2".format(label))
plt.scatter(cl_marray[labels == label,4], cl_marray[labels == label,5], marker = ".", linewidths=1)
plt.show()
"""
Explanation: It looks like cluster 4 and perhaps cluster 11 are ones that should contain more complete poems than the rest if our assumptions are correct. Clump with very low x1 (lefthand edge) variance, but high x2 (right hand side).
What do the other aspects of 4 and 11 look like?
End of explanation
"""
import csv
def get_info(item_id):
record = dict(db.vecidtoitem(item_id))
vect = dict(db.vector(item_id))
titlemd = bdb.get_title_row(titleAbbreviation=record['newspaper'])
entry = bdb.get_entry_row(year=record['year'], month=record['month'], day=record['day'], title_id= titlemd['id'])
issue = bdb.get_issue_row(id=entry['issue_id'])
record.update(titlemd)
record.update(entry)
record.update(issue)
record.update(vect)
return record
for label in set(labels):
print("Saving label {0}".format(label))
with open("exp2_cluster{0}.csv".format(label), "w") as cfn:
fields = ["title", "titleAbbreviation", "year", "month", "day",
"issueNumber", "printedDate", "page", "article", "block_number", "filepath", "st_caps", "st_nums", "x1_var1", "x2_var1", "ltcount"]
csvdoc = csv.DictWriter(cfn, fieldnames = fields)
csvdoc.writerow(dict([(x,x) for x in fields]))
count = 0
for idx, vlabel in enumerate(list(labels)):
if idx % 1000 == 0:
print("Tackling line {0} - saved {1} lines for this label".format(idx, count))
if vlabel == label:
record = get_info(id_list[idx])
csvdoc.writerow(dict([(x,record[x]) for x in fields]))
count += 1
"""
Explanation: Let's export this as a list of references to explore further - "exp2_clusterX.csv"
End of explanation
"""
kgrodzicki/machine-learning-specialization | course-3-classification/module-2-linear-classifier-assignment-blank.ipynb | mit
from __future__ import division
import graphlab
import math
import string
"""
Explanation: Predicting sentiment from product reviews
The goal of this first notebook is to explore logistic regression and feature engineering with existing GraphLab functions.
In this notebook you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative.
Use SFrames to do some feature engineering
Train a logistic regression model to predict the sentiment of product reviews.
Inspect the weights (coefficients) of a trained logistic regression model.
Make a prediction (both class and probability) of sentiment for a new product review.
Given the logistic regression weights, predictors and ground truth labels, write a function to compute the accuracy of the model.
Inspect the coefficients of the logistic regression model and interpret their meanings.
Compare multiple logistic regression models.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create.
End of explanation
"""
products = graphlab.SFrame('amazon_baby.gl/')
"""
Explanation: Data preparation
We will use a dataset consisting of baby product reviews on Amazon.com.
End of explanation
"""
products
"""
Explanation: Now, let us see a preview of what the dataset looks like.
End of explanation
"""
products[269]
"""
Explanation: Build the word count vector for each review
Let us explore a specific example of a baby product.
End of explanation
"""
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
review_without_punctuation = products['review'].apply(remove_punctuation)
products['word_count'] = graphlab.text_analytics.count_words(review_without_punctuation)
"""
Explanation: Now, we will perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Transform the reviews into word-counts.
Aside. In this notebook, we remove all punctuation for the sake of simplicity. A smarter approach to punctuation would preserve phrases such as "I'd", "would've", "hadn't" and so forth. See this page for an example of smart handling of punctuation.
End of explanation
"""
products[269]['word_count']
"""
Explanation: Now, let us explore what the sample example above looks like after these 2 transformations. Here, each entry in the word_count column is a dictionary where the key is the word and the value is a count of the number of times the word occurs.
End of explanation
"""
products = products[products['rating'] != 3]
len(products)
"""
Explanation: Extract sentiments
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment.
End of explanation
"""
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
products
"""
Explanation: Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.
End of explanation
"""
train_data, test_data = products.random_split(.8, seed=1)
print len(train_data)
print len(test_data)
"""
Explanation: Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).
Split data into training and test sets
Let's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
End of explanation
"""
sentiment_model = graphlab.logistic_classifier.create(train_data,
target = 'sentiment',
features=['word_count'],
validation_set=None)
sentiment_model
"""
Explanation: Train a sentiment classifier with logistic regression
We will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target. We will use validation_set=None to obtain same results as everyone else.
Note: This line may take 1-2 minutes.
End of explanation
"""
weights = sentiment_model.coefficients
weights.column_names()
"""
Explanation: Aside. You may get a warning to the effect of "Terminated due to numerical difficulties --- this model may not be ideal". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises because the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, to be covered in Module 4. Regularization lessens the effect of extremely rare words. For the purpose of this assignment, however, please proceed with the model above.
Now that we have fitted the model, we can extract the weights (coefficients) as an SFrame as follows:
End of explanation
"""
num_positive_weights = len(weights[weights["value"] >= 0])
num_negative_weights = len(weights[weights["value"] < 0])
print "Number of positive weights: %s " % num_positive_weights
print "Number of negative weights: %s " % num_negative_weights
"""
Explanation: There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment.
Fill in the following block of code to calculate how many weights are non-negative (>= 0). (Hint: count the entries of the 'value' column in the SFrame weights that are >= 0.)
End of explanation
"""
sample_test_data = test_data[10:13]
print sample_test_data['rating']
sample_test_data
"""
Explanation: Quiz question: How many weights are >= 0?
Making predictions with logistic regression
Now that a model is trained, we can make predictions on the test data. In this section, we will explore this in the context of 3 examples in the test dataset. We refer to this set of 3 examples as the sample_test_data.
End of explanation
"""
sample_test_data[0]['review']
"""
Explanation: Let's dig deeper into the first row of the sample_test_data. Here's the full review:
End of explanation
"""
sample_test_data[1]['review']
"""
Explanation: That review seems pretty positive.
Now, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.
End of explanation
"""
scores = sentiment_model.predict(sample_test_data, output_type='margin')
print scores
"""
Explanation: We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as:
$$
\mbox{score}_i = \mathbf{w}^T h(\mathbf{x}_i)
$$
where $h(\mathbf{x}_i)$ represents the features for example $i$. We will write some code to obtain the scores using GraphLab Create. For each row, the score (or margin) is a number in the range [-inf, inf].
End of explanation
"""
y = scores.apply(lambda x: 1 if x > 0 else -1)
print y
"""
Explanation: Predicting sentiment
These scores can be used to make class predictions as follows:
$$
\hat{y} =
\left{
\begin{array}{ll}
+1 & \mathbf{w}^T h(\mathbf{x}_i) > 0 \
-1 & \mathbf{w}^T h(\mathbf{x}_i) \leq 0 \
\end{array}
\right.
$$
Using scores, write code to calculate $\hat{y}$, the class predictions:
End of explanation
"""
print "Class predictions according to GraphLab Create:"
print sentiment_model.predict(sample_test_data)
"""
Explanation: Run the following code to verify that the class predictions obtained by your calculations are the same as that obtained from GraphLab Create.
End of explanation
"""
prob = scores.apply(lambda x: 1/(1 + math.exp(-1 * x)))
print prob
"""
Explanation: Checkpoint: Make sure your class predictions match with the one obtained from GraphLab Create.
Probability predictions
Recall from the lectures that we can also calculate the probability predictions from the scores using:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}.
$$
Using the variable scores calculated previously, write code to calculate the probability that a sentiment is positive using the above formula. For each row, the probabilities should be a number in the range [0, 1].
End of explanation
"""
print "Class predictions according to GraphLab Create:"
print sentiment_model.predict(sample_test_data, output_type='probability')
"""
Explanation: Checkpoint: Make sure your probability predictions match the ones obtained from GraphLab Create.
End of explanation
"""
min(prob)
"""
Explanation: Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review?
End of explanation
"""
test_data["probability"] = sentiment_model.predict(test_data, output_type='probability')
best_20 = test_data.topk("probability", k=20)
best_20.print_rows(20)
"""
Explanation: Find the most positive (and negative) review
We now turn to examining the full test dataset, test_data, and use GraphLab Create to form predictions on all of the test data points for faster performance.
Using the sentiment_model, find the 20 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the "most positive reviews."
To calculate these top-20 reviews, use the following steps:
1. Make probability predictions on test_data using the sentiment_model. (Hint: When you call .predict to make predictions on the test data, use option output_type='probability' to output the probability rather than just the most likely class.)
2. Sort the data according to those predictions and pick the top 20. (Hint: You can use the .topk method on an SFrame to find the top k rows sorted according to the value of a specified column.)
End of explanation
"""
worst_20 = test_data.topk("probability", k=20, reverse = True)
worst_20.print_rows(20)
"""
Explanation: Quiz Question: Which of the following products are represented in the 20 most positive reviews? [multiple choice]
Now, let us repeat this exercise to find the "most negative reviews." Use the prediction probabilities to find the 20 reviews in the test_data with the lowest probability of being classified as a positive review. Repeat the same steps above but make sure you sort in the opposite order.
End of explanation
"""
def get_classification_accuracy(model, data, true_labels):
# First get the predictions
## YOUR CODE HERE
scores = model.predict(data, output_type='margin')
y = scores.apply(lambda x: 1 if x > 0 else -1)
# Compute the number of correctly classified examples
## YOUR CODE HERE
correctly_classified = 0
for i in range(len(data)):
if y[i] == true_labels[i]:
correctly_classified += 1
# Then compute accuracy by dividing num_correct by total number of examples
## YOUR CODE HERE
accuracy = correctly_classified / len(true_labels)
return accuracy
"""
Explanation: Quiz Question: Which of the following products are represented in the 20 most negative reviews? [multiple choice]
Compute accuracy of the classifier
We will now evaluate the accuracy of the trained classifer. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}}
$$
This can be computed as follows:
Step 1: Use the trained model to compute class predictions (Hint: Use the predict method)
Step 2: Count the number of data points when the predicted class labels match the ground truth labels (called true_labels below).
Step 3: Divide the total number of correct predictions by the total number of data points in the dataset.
Complete the function below to compute the classification accuracy:
End of explanation
"""
get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])
"""
Explanation: Now, let's compute the classification accuracy of the sentiment_model on the test_data.
End of explanation
"""
significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves',
'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed',
'work', 'product', 'money', 'would', 'return']
len(significant_words)
"""
Explanation: Quiz Question: What is the accuracy of the sentiment_model on the test_data? Round your answer to 2 decimal places (e.g. 0.76).
Quiz Question: Does a higher accuracy value on the training_data always imply that the classifier is better?
Learn another classifier with fewer words
There were a lot of words in the model we trained above. We will now train a simpler logistic regression model using only a subset of words that occur in the reviews. For this assignment, we selected 20 words to work with. These are:
End of explanation
"""
train_data['word_count_subset'] = train_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)
test_data['word_count_subset'] = test_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)
"""
Explanation: For each review, we will use the word_count column and trim out all words that are not in the significant_words list above. We will use the SArray dictionary trim by keys functionality. Note that we are performing this on both the training and test set.
End of explanation
"""
train_data[0]['review']
"""
Explanation: Let's see what the first example of the dataset looks like:
End of explanation
"""
print train_data[0]['word_count']
"""
Explanation: The word_count column we had been working with before looks like the following:
End of explanation
"""
print train_data[0]['word_count_subset']
"""
Explanation: Since we are only working with a subset of these words, the column word_count_subset is a subset of the above dictionary. In this example, only 2 significant words are present in this review.
End of explanation
"""
simple_model = graphlab.logistic_classifier.create(train_data,
target = 'sentiment',
features=['word_count_subset'],
validation_set=None)
simple_model
"""
Explanation: Train a logistic regression model on a subset of data
We will now build a classifier with word_count_subset as the feature and sentiment as the target.
End of explanation
"""
get_classification_accuracy(simple_model, test_data, test_data['sentiment'])
"""
Explanation: We can compute the classification accuracy using the get_classification_accuracy function you implemented earlier.
End of explanation
"""
simple_model.coefficients
"""
Explanation: Now, we will inspect the weights (coefficients) of the simple_model:
End of explanation
"""
c = simple_model.coefficients.sort('value', ascending=False)["index", "value"].apply(lambda x: x["value"] > 0 and x["index"] in significant_words)
len([x for x in c if x != 0])
"""
Explanation: Let's sort the coefficients (in descending order) by the value to obtain the coefficients with the most positive effect on the sentiment.
simple_model.coefficients.sort('value', ascending=False).print_rows(num_rows=21)
Quiz Question: Consider the coefficients of simple_model. There should be 21 of them, an intercept term + one for each word in significant_words. How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model?
End of explanation
"""
print c
sentiment_model_coefficients = sentiment_model.coefficients.sort('value', ascending=False)
sentiment_model_coefficients_positive = sentiment_model_coefficients[sentiment_model_coefficients["value"] > 0]
"""
Explanation: Quiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model?
End of explanation
"""
get_classification_accuracy(sentiment_model, train_data, train_data['sentiment'])
"""
Explanation: Comparing models
We will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy method you implemented above.
First, compute the classification accuracy of the sentiment_model on the train_data:
End of explanation
"""
get_classification_accuracy(simple_model, train_data, train_data['sentiment'])
"""
Explanation: Now, compute the classification accuracy of the simple_model on the train_data:
End of explanation
"""
get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])
"""
Explanation: Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TRAINING set?
Now, we will repeat this exercise on the test_data. Start by computing the classification accuracy of the sentiment_model on the test_data:
End of explanation
"""
get_classification_accuracy(simple_model, test_data, test_data['sentiment'])
"""
Explanation: Next, we will compute the classification accuracy of the simple_model on the test_data:
End of explanation
"""
num_positive = (train_data['sentiment'] == +1).sum()
num_negative = (train_data['sentiment'] == -1).sum()
print num_positive
print num_negative
"""
Explanation: Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TEST set?
Baseline: Majority class prediction
It is quite common to use the majority class classifier as a baseline (or reference) model for comparison with your classifier model. The majority class classifier predicts the majority class for all data points. At the very least, you should handily beat the majority class classifier; otherwise, the model is (usually) pointless.
What is the majority class in the train_data?
End of explanation
"""
num_positive_test = (test_data['sentiment'] == +1).sum()
num_positive_test/len(test_data)
"""
Explanation: Now compute the accuracy of the majority class classifier on test_data.
Quiz Question: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 0.76).
End of explanation
"""
diego0020/va_course_2015 | AstroML/notebooks/07_classification_example.ipynb | mit
import os
DATA_HOME = os.path.abspath('C:/temp/AstroML/data/sdss_colors/')
"""
Explanation: Classification Example
You'll need to modify the DATA_HOME variable to the location of the datasets.
In this tutorial we'll use the colors of over 700,000 stars and quasars from the
Sloan Digital Sky Survey. 500,000 of them are training data, spectroscopically
identified as stars or quasars. The remaining 200,000
have been classified based on their photometric colors.
End of explanation
"""
import numpy as np
train_data = np.load(os.path.join(DATA_HOME, 'sdssdr6_colors_class_train.npy'))
test_data = np.load(os.path.join(DATA_HOME, 'sdssdr6_colors_class.200000.npy'))
"""
Explanation: Here we will use a Naive Bayes estimator to classify the objects.
First, we will construct our training data and test data arrays:
End of explanation
"""
print(train_data.dtype.names)
print(train_data['u-g'].shape)
"""
Explanation: The data is stored as a record array, which is a convenient format for
collections of labeled data:
End of explanation
"""
X_train = np.vstack([train_data['u-g'],
train_data['g-r'],
train_data['r-i'],
train_data['i-z']]).T
y_train = (train_data['redshift'] > 0).astype(int)
X_test = np.vstack([test_data['u-g'],
test_data['g-r'],
test_data['r-i'],
test_data['i-z']]).T
y_test = (test_data['label'] == 0).astype(int)
print("training data: ")
print(X_train.shape)
print("test data: ")
print(X_test.shape)
"""
Explanation: Now we must put these into arrays of shape (n_samples, n_features)
in order to pass them to routines in scikit-learn. Training samples
with zero-redshift are stars, while samples with positive redshift are quasars:
End of explanation
"""
from sklearn import naive_bayes
gnb = naive_bayes.GaussianNB()
gnb.fit(X_train, y_train)
y_pred = gnb.predict(X_test)
"""
Explanation: Notice that we’ve set this up so that quasars have y = 1,
and stars have y = 0. Now we’ll set up a Naive Bayes classifier.
This will fit a four-dimensional uncorrelated gaussian to each
distribution, and from these gaussians quickly predict the label
for a test point:
End of explanation
"""
accuracy = float(np.sum(y_test == y_pred)) / len(y_test)
print(accuracy)
"""
Explanation: Let’s check our accuracy. This is the fraction of labels that are correct:
End of explanation
"""
print(np.sum(y_test == 0))
print(np.sum(y_test == 1))
"""
Explanation: We have 61% accuracy. Not very good. But we must be careful here:
the accuracy does not always tell the whole story. In our data,
there are many more stars than quasars:
End of explanation
"""
TP = np.sum((y_pred == 1) & (y_test == 1)) # true positives
FP = np.sum((y_pred == 1) & (y_test == 0)) # false positives
FN = np.sum((y_pred == 0) & (y_test == 1)) # false negatives
print("precision:")
print(TP / float(TP + FP))
print("recall: ")
print(TP / float(TP + FN))
"""
Explanation: Stars outnumber Quasars by a factor of 14 to 1. In cases like this,
it is much more useful to evaluate the fit based on precision and
recall. Because there are many fewer quasars than stars, we’ll call
a quasar a positive label and a star a negative label. The precision
asks what fraction of positively labeled points are correctly labeled:
$\mathrm{precision = \frac{True\ Positives}{True\ Positives + False\ Positives}}$
The recall asks what fraction of positive samples are correctly identified:
$\mathrm{recall = \frac{True\ Positives}{True\ Positives + False\ Negatives}}$
We can calculate this for our results as follows:
End of explanation
"""
from sklearn import metrics
print("precision:")
print(metrics.precision_score(y_test, y_pred))
print("recall: ")
print(metrics.recall_score(y_test, y_pred))
"""
Explanation: For convenience, these can be computed using the tools in the metrics sub-package of scikit-learn:
End of explanation
"""
print("F1 score:")
print(metrics.f1_score(y_test, y_pred))
"""
Explanation: Precision and Recall tell different stories about the performance of the classifier. Ideally one would try to create a classifier with both high precision and high recall, but this is not always possible: sometimes raising the precision will decrease the recall, or vice versa (why?).
Think about situations when you'll want a high-precision classifier even if the recall is poor, and vice versa.
Another useful metric is the F1 score, which gives a single score based on the precision and recall for the class:
$\mathrm{F1 = 2\frac{precision * recall}{precision + recall}}$
In a perfect classification, the precision, recall, and F1 score are all equal to 1.
End of explanation
"""
print(metrics.classification_report(y_test, y_pred, target_names=['Stars', 'QSOs']))
"""
Explanation: For convenience, sklearn.metrics provides a function that computes all
of these scores, and returns a nicely formatted string. For example:
End of explanation
"""
gmodena/notebooks | Ensemble learning - stacked generalization.ipynb | bsd-3-clause
from sklearn.cross_validation import train_test_split, StratifiedKFold
from sklearn.metrics import accuracy_score
from sklearn.datasets import make_classification
import numpy as np
n_features = 20
n_samples = 10000
X, y = make_classification(n_features=n_features, n_samples=n_samples)
"""
Explanation: Introduction
In this document I'll show a python implementation of stacked generalization (or stacking), an ensemble technique introduced in [Wolpert, David H., 1992. Stacked generalization, Neural Networks, Volume 5, Issue 2, Pages 241-259].
Stacking uses cross validation to combine the results of several predictive models to improve their accuracy.
A particular case of stacked generalization (blending) was used by the winners of the Netflix Prize (http://www.netflixprize.com/assets/GrandPrize2009_BPC_BigChaos.pdf). Ensemble techniques are also extremely popular in several other competitions like the ones hosted on Kaggle. More important, these methods usually perform very well also on "real world" predictive modeling tasks. Stacked generalization is particularly effective when we have datasets describing different aspects of the "thing" we are trying to predict (eg. a dataset of patients' signals). Olivetti et. at. 2014. MEG Decoding Across Subjects - and a related Kaggle competition - is an example of using stacking to build a robust predictor across subjects.
Stacked Generalization
In its original formulation, the method works as follows:
Split a dataset into two disjoint sets (train/test).
Train and test $k$ models, with cross-validation, on the first part. These are called level-0 models.
Build train and test level-1 datasets, using the predictions from step 2 as inputs.
Train a higher-level model (the level-1 model) on the level-1 data from step 3 and use it to predict unseen instances from the test set of step 1.
A complement to Wolpert's work is [Ting, Witten 1998. Issues in Stacked Generalization, http://arxiv.org/pdf/1105.5466.pdf]; this paper presents empirical results that shed light on what Wolpert described as a "black art". These can be considered a sort of "best practices" for stacking. In particular:
1. Logistic Regression performs well as the level-1 model
2. For classification tasks, build level-1 data using class probabilities rather than the predicted class labels.
3. Like any ensemble method, stacking is ideal for parallel computation
4. Stacking can work well with just two or three level-0 models
In terms of potential pitfalls, the common issue of loss of interpretability in model ensembles comes to mind. Perlich, Swirszcz 2010 suggest that cross-validation and stacking should be handled with care (eg. use stratified k-fold to improve robustness) when the dataset is skewed (eg. a very small number of positive examples).
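As an aside, recent versions of scikit-learn (0.22+) ship a ready-made implementation of this scheme. A minimal sketch on synthetic data, not used in the rest of this notebook, following best practices 1 and 2 above:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Level-0 trees feed class probabilities to a level-1 logistic regression;
# 3-fold CV is used internally to build the level-1 training data.
stack = StackingClassifier(
    estimators=[('dt%d' % i, DecisionTreeClassifier(random_state=i))
                for i in range(3)],
    final_estimator=LogisticRegression(),
    stack_method='predict_proba',
    cv=3)
stack.fit(X_tr, y_tr)
print(stack.score(X_te, y_te))
```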
Data
I'm not really interested in the method performance, so I'll create an artificial dataset to experiment with a classification task.
End of explanation
"""
X_train, X_test, y_train, y_test = train_test_split(X, y)
"""
Explanation: Divide the dataset into a 75% - 25% training/test split to satisfy Step 1.
End of explanation
"""
skf = StratifiedKFold(y_train, n_folds=3)
"""
Explanation: Cross-validation within the training set will be carried out by means of stratified k-fold.
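Stratification matters because it preserves the class proportions in every fold; a quick sketch on toy labels (using the newer sklearn.model_selection API rather than the deprecated sklearn.cross_validation one):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

y = np.array([0] * 9 + [1] * 3)  # toy labels with a 3:1 class imbalance
skf = StratifiedKFold(n_splits=3)
for _, test_index in skf.split(np.zeros((len(y), 1)), y):
    # every held-out fold keeps the 3:1 ratio: 3 negatives, 1 positive
    assert np.bincount(y[test_index]).tolist() == [3, 1]
```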
End of explanation
"""
from sklearn.tree import DecisionTreeClassifier
n_models = 3
clfs = [DecisionTreeClassifier() for _ in range(n_models)]  # distinct instances, not n references to one object
"""
Explanation: Models
I'm using decision trees (CART) as the level-0 classifiers
End of explanation
"""
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
"""
Explanation: and logistic regression as the level 1 model
End of explanation
"""
level_1_train = np.zeros((X_train.shape[0], len(clfs)))
level_1_test = np.zeros((X_test.shape[0], len(clfs), len(skf)))
for k, clf in enumerate(clfs):
for j, (train_index, test_index) in enumerate(skf):
# L^(-j), L_j
        X_train_cv, X_test_cv = X_train[train_index], X_train[test_index]
        y_train_cv, y_test_cv = y_train[train_index], y_train[test_index]
# M_k^(-j) - level 0 model (M_k) on the training set L^{-j}
clf.fit(X_train_cv, y_train_cv)
# L_cv = z_kj
# we use this dataset to train the level-1 model
        # this is a 2-class problem, so we consider only the probability
# p of class 0.
level_1_train[test_index, k] = clf.predict_proba(X_test_cv)[:, 0]
# We build a level-1 test set to be used with the level 1 classifier.
# This is the output of model M_k^(-j) on the held out test set
level_1_test[:, k, j] = clf.predict_proba(X_test)[:, 0]
"""
Explanation: Iterative version
We start the process by training and testing a classification tree $M_k^{-j}$ on a training set $L^{-j}$, for each fold $j$. We use the predictions of these models to build the level-1 dataset $L_{cv}$, which is the training set for the level-1 classifier $\tilde{M}$. In this loop we also take care of building a level-1 test set for $\tilde{M}$, by collecting the predictions of each model $M_k^{-j}$ on unseen instances (X_test).
The code comments follow [Ting, Witten 1998] naming conventions.
End of explanation
"""
lr.fit(level_1_train, y_train)
"""
Explanation: We conclude the training process by fitting a logistic regression on level-1 data.
End of explanation
"""
pred = lr.predict(level_1_test.mean(2))
"""
Explanation: Finally we predict labels on the level-1 test set. The per-fold predictions of each model $M_k^{-j}$ are blended using their mean as a combiner. This leads to what [Ting, Witten 1998] refer to as the final level-0 models $M_k$.
End of explanation
"""
from joblib import Parallel, delayed
from joblib import load, dump, cpu_count
import tempfile
import shutil
import os
import numpy as np
mmap_dir = tempfile.mkdtemp()
"""
Explanation: Parallel version
Cross-validation does not require any form of communication between the models being trained. This makes stacked generalization a good candidate for parallelization.
In this section I'll be using joblib, a frontend to the multiprocessing framework, to parallelize the training/testing of level-0 models as well as the generation of level-1 data. The results of parallel computations are written to shared, mem-mapped ndarrays. In general this is not a good idea; numpy does not provide atomic operations, and writes to shared segments can lead to data corruption. However, in this specific case we can rely on the fact that each classifier $k$ and fold $j$ is allocated an exclusive segment of the shared ndarrays.
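For reference, the basic joblib pattern used below, with independent tasks dispatched to worker processes via delayed(), looks like this minimal sketch:

```python
from joblib import Parallel, delayed

def square(x):
    return x * x

# delayed() captures the call lazily; Parallel dispatches the calls to
# workers and returns the results in input order.
results = Parallel(n_jobs=2)(delayed(square)(i) for i in range(5))
print(results)  # [0, 1, 4, 9, 16]
```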
End of explanation
"""
level_1_train = np.memmap(os.path.join(mmap_dir, "level_1_train"),
shape=(X_train.shape[0], len(clfs)),
mode='w+')
level_1_test = np.memmap(os.path.join(mmap_dir, "level_1_test"),
shape=(X_test.shape[0], len(clfs), len(skf)),
mode='w+')
"""
Explanation: X_train, X_test, y_train, y_test have been defined above.
For each input dataset, I'm releasing the reference on the original in-memory array (dump) and replacing it with a reference to the mem-mapped ndarray. gc.collect() is called in Parallel just before forking. joblib.dump crashes IPython Notebook, so for the sake of this example I will not mmap the input datasets. I leave the code here as a template for future reuse.
```{python}
X_train = np.memmap(os.path.join(mmap_dir, "X_train"),
shape=X_train.shape,
mode='w+')
dump(X_train, os.path.join(mmap_dir, "X_train"))
X_train = load(os.path.join(mmap_dir, "X_train"), mmap_mode='r')
X_test = np.memmap(os.path.join(mmap_dir, "X_test"),
shape=X_test.shape,
mode='w+')
dump(X_test, os.path.join(mmap_dir, "X_test"))
X_test = load(os.path.join(mmap_dir, "X_test"), mmap_mode='r')
y_train = np.memmap(os.path.join(mmap_dir, "y_train"),
shape=y_train.shape,
mode='w+')
dump(y_train, os.path.join(mmap_dir, "y_train"))
y_train = load(os.path.join(mmap_dir, "y_train"), mmap_mode='r')
y_test = np.memmap(os.path.join(mmap_dir, "y_test"),
shape=y_train.shape,
mode='w+')
dump(y_test, os.path.join(mmap_dir, "y_test"))
y_test = load(os.path.join(mmap_dir, "y_test"), mmap_mode='r')
```
Output data.
End of explanation
"""
def cross_validate(params):
(level_1_train,
level_1_test,
X_train,
X_test,
y_train,
y_test,
train_index,
test_index,
k,
j,
clf
) = params
    X_train_cv, X_test_cv = X_train[train_index], X_train[test_index]
    y_train_cv, y_test_cv = y_train[train_index], y_train[test_index]
clf.fit(X_train_cv, y_train_cv)
level_1_train[test_index, k] = clf.predict_proba(X_test_cv)[:, 0]
level_1_test[:,k,j] = clf.predict_proba(X_test)[:, 0]
"""
Explanation: cross_validate implements the training of level-0 models and generation of mem-mapped level-1 data.
End of explanation
"""
params = [[level_1_train,
level_1_test,
X_train,
X_test,
y_train,
y_test,
train_index,
test_index,
k,
j,
clf]
for k, clf in enumerate(clfs)
for j, (train_index, test_index) in enumerate(skf)]
#n_jobs = max(1, min(cpu_count()-1, len(clfs)*len(skf)))
n_jobs = 4
results = Parallel(n_jobs=n_jobs)(delayed(cross_validate)(param) for param in params)
"""
Explanation: We can use a list comprehension to generate the list of parameters to pass to cross_validate() via delayed(). Each element of the list is itself a list containing the data for the $k$-th model and $j$-th fold.
Note that we could be passing the $j$-th fold as eg. X_train[train_index] rather than the whole X_train. However, the function is supposed to use the mem-mapped version of the input data, hence we pass a reference to the whole array rather than a copy.
End of explanation
"""
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(level_1_train, y_train)
pred = lr.predict(level_1_test.mean(2))
"""
Explanation: Like in the iterative case, we use logistic regression as the level-1 model and predict unseen instances on the blended level-1 test set.
End of explanation
"""
shutil.rmtree(mmap_dir)
"""
Explanation: Finally, we clean up the mem-mapped data.
End of explanation
"""
|
JakeColtman/BayesianSurvivalAnalysis | Full done.ipynb | mit | running_id = 0
output = [[0]]
with open("E:/output.txt") as file_open:
for row in file_open.read().split("\n"):
cols = row.split(",")
if cols[0] == output[-1][0]:
output[-1].append(cols[1])
output[-1].append(True)
else:
output.append(cols)
output = output[1:]
for row in output:
if len(row) == 6:
row += [datetime(2016, 5, 3, 20, 36, 8, 92165), False]
output = output[1:-1]
def convert_to_days(dt):
day_diff = dt / np.timedelta64(1, 'D')
if day_diff == 0:
return 23.0
else:
return day_diff
df = pd.DataFrame(output, columns=["id", "advert_time", "male","age","search","brand","conversion_time","event"])
df["lifetime"] = pd.to_datetime(df["conversion_time"]) - pd.to_datetime(df["advert_time"])
df["lifetime"] = df["lifetime"].apply(convert_to_days)
df["male"] = df["male"].astype(int)
df["search"] = df["search"].astype(int)
df["brand"] = df["brand"].astype(int)
df["age"] = df["age"].astype(int)
df["event"] = df["event"].astype(int)
df = df.drop('advert_time', 1)
df = df.drop('conversion_time', 1)
df = df.set_index("id")
df = df.dropna(thresh=2)
df.median()
###Parametric Bayes
#Shout out to Cam Davidson-Pilon
## Example fully worked model using toy data
## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html
## Note that we've made some corrections
N = 2500
##Generate some random data
lifetime = pm.rweibull( 2, 5, size = N )
birth = pm.runiform(0, 10, N)
censor = ((birth + lifetime) >= 10)
lifetime_ = lifetime.copy()
lifetime_[censor] = 10 - birth[censor]
alpha = pm.Uniform('alpha', 0, 20)
beta = pm.Uniform('beta', 0, 20)
@pm.observed
def survival(value=lifetime_, alpha = alpha, beta = beta ):
return sum( (1-censor)*(log( alpha/beta) + (alpha-1)*log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(50000, 30000)
pm.Matplot.plot(mcmc)
mcmc.trace("alpha")[:]
"""
Explanation: The first step in any data analysis is acquiring and munging the data.
Our starting data set can be found at http://jakecoltman.com (see the PyData post).
It is designed to be roughly similar to the output from DCM's path to conversion.
Download the file and transform it into something with the columns:
id,lifetime,age,male,event,search,brand
where lifetime is the total time for which we observed someone without converting, and event should be 1 if we see a conversion and 0 if we don't. Note that all values should be converted into ints.
It is useful to note that end_date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165)
End of explanation
"""
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000)
def weibull_median(alpha, beta):
return beta * ((log(2)) ** ( 1 / alpha))
plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
"""
Explanation: Problems:
1 - Try to fit your data from section 1
2 - Use the results to plot the distribution of the median
Note that the median of a Weibull distribution is:
$$\beta(\log 2)^{1/\alpha}$$
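As a sanity check, this closed-form median can be compared against the empirical median of simulated Weibull draws (a sketch; the shape and scale values below are made up):

```python
import numpy as np

alpha, beta = 2.0, 5.0  # illustrative shape and scale values
closed_form = beta * np.log(2) ** (1 / alpha)
print(round(closed_form, 3))  # 4.163

# numpy draws unit-scale Weibull samples; multiply by beta to rescale
rng = np.random.RandomState(0)
samples = beta * rng.weibull(alpha, size=200000)
print(float(np.median(samples)))  # close to the closed-form value
```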
End of explanation
"""
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 3000, thin = 20)
pm.Matplot.plot(mcmc)
#Solution to Q5
## Adjusting the priors impacts the overall result
## If we give a looser, less informative prior then we end up with a broader, shorter distribution
## If we give much more informative priors, then we get a tighter, taller distribution
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
## Note the narrowing of the prior
alpha = pm.Normal("alpha", 1.7, 10000)
beta = pm.Normal("beta", 18.5, 10000)
####Uncomment this to see the result of looser priors
## Note this ends up pretty much the same as we're already very loose
#alpha = pm.Uniform("alpha", 0, 30)
#beta = pm.Uniform("beta", 0, 30)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 5000, thin = 20)
pm.Matplot.plot(mcmc)
#plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
"""
Explanation: Problems:
4 - Try adjusting the number of samples used for burn-in and thinning
5 - Try adjusting the prior and see how it affects the estimate
End of explanation
"""
medians = [weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))]
testing_value = 14.9
number_of_greater_samples = sum([x >= testing_value for x in medians])
100 * (number_of_greater_samples / len(medians))
"""
Explanation: Problems:
7 - Try testing whether the median is greater than a different value
End of explanation
"""
#Fitting solution
cf = lifelines.CoxPHFitter()
cf.fit(df, 'lifetime', event_col = 'event')
cf.summary
"""
Explanation: If we want to look at covariates, we need a new approach.
We'll use Cox proportional hazards, a very popular regression model.
To fit it in Python we use the lifelines module:
http://lifelines.readthedocs.io/en/latest/
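Recall that in a Cox model the hazard is $h(t|x) = h_0(t)\exp(\beta x)$, so each fitted coefficient maps directly to a hazard ratio; a tiny numeric sketch with a hypothetical coefficient (not a value fitted here):

```python
import numpy as np

beta = 0.693  # hypothetical fitted coefficient for a binary covariate
hazard_ratio = np.exp(beta)
# a subject with the covariate set experiences the event at roughly
# twice the baseline rate at any point in time
print(round(float(hazard_ratio), 2))  # 2.0
```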
End of explanation
"""
#Solution to 1
fig, axis = plt.subplots(nrows=1, ncols=1)
cf.baseline_survival_.plot(ax = axis, title = "Baseline Survival")
regressors = np.array([[1,45,0,0]])
survival = cf.predict_survival_function(regressors)
survival.head()
#Solution to plotting multiple regressors
fig, axis = plt.subplots(nrows=1, ncols=1, sharex=True)
regressor1 = np.array([[1,45,0,1]])
regressor2 = np.array([[1,23,1,1]])
survival_1 = cf.predict_survival_function(regressor1)
survival_2 = cf.predict_survival_function(regressor2)
plt.plot(survival_1,label = "45 year old male - display")
plt.plot(survival_2,label = "23 year old male - search")
plt.legend(loc = "upper right")
odds = survival_1 / survival_2
plt.plot(odds, c = "red")
"""
Explanation: Once we've fit the data, we need to do something useful with it. Try to do the following things:
1 - Plot the baseline survival function
2 - Predict the functions for a particular set of features
3 - Plot the survival function for two different sets of features
4 - For your results in part 3, calculate how much more likely a death event is for one set than for the other over a given period of time
End of explanation
"""
from pyBMA import CoxPHFitter
bmaCox = CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.5]*4)
bmaCox.summary
#Low probability for everything favours parsimonious models
bmaCox = CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.1]*4)
bmaCox.summary
#Boost probability of brand
bmaCox = CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.3, 0.9, 0.001, 0.3])
print(bmaCox.summary)
"""
Explanation: Model selection
Difficult to do with classic tools (here)
Problem:
1 - Calculate the BMA coefficient values
2 - Try running with different priors
End of explanation
"""
|
esa-as/2016-ml-contest | esaTeam/esa_Submission02.ipynb | apache-2.0 | # Import
from __future__ import division
get_ipython().magic(u'matplotlib inline')
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['figure.figsize'] = (20.0, 10.0)
inline_rc = dict(mpl.rcParams)
from classification_utilities import make_facies_log_plot
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn import preprocessing
from sklearn.model_selection import LeavePGroupsOut
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV
from sklearn.multiclass import OneVsOneClassifier
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor, GradientBoostingClassifier
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
from sklearn.cluster import KMeans
from scipy.signal import medfilt
import sys, scipy, sklearn
print('Python: ' + sys.version.split('\n')[0])
print('        ' + sys.version.split('\n')[1])
print('Pandas: ' + pd.__version__)
print('Numpy: ' + np.__version__)
print('Scipy: ' + scipy.__version__)
print('Sklearn: ' + sklearn.__version__)
print('Xgboost: ' + xgb.__version__)
"""
Explanation: Facies classification using machine learning techniques
The ideas of
<a href="https://home.deib.polimi.it/bestagini/">Paolo Bestagini's</a> "Try 2", <a href="https://github.com/ar4">Alan Richardson's</a> "Try 2", and
<a href="https://github.com/dalide">Dalide's</a> "Try 6", augmented by Dimitrios Oikonomou and Eirik Larsen (ESA AS) by:
- adding the gradient of the gradient of features as augmented features;
- using an ML estimator for PE based on both training and blind well data;
- removing NM_M from the augmented features;
- using the clustering output as a well feature.
In the following, we provide a possible solution to the facies classification problem described at https://github.com/seg/2016-ml-contest.
The proposed algorithm is based on the use of random forests, xgboost or gradient boost combined in one-vs-one multiclass strategy. In particular, we would like to study the effect of:
- Robust feature normalization.
- Feature imputation for missing feature values.
- Well-based cross-validation routines.
- Feature augmentation strategies.
- Test multiple classifiers
Script initialization
Let's import the used packages and define some parameters (e.g., colors, labels, etc.).
End of explanation
"""
#Seed
seed = 24
np.random.seed(seed)
#Select classifier type
clfType='XB' #XB Clasifier
#clfType='XBA' #XBA Clasifier
#clfType='RF' #Random Forest clasifier
#clfType='GB' #Gradient Boosting Classifier
feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
"""
Explanation: Parameters
End of explanation
"""
# Load data from file
data = pd.read_csv('../facies_vectors.csv')
# Load Test data from file
test_data = pd.read_csv('../validation_data_nofacies.csv')
test_data.insert(0,'Facies',np.ones(test_data.shape[0])*(-1))
#Create Dataset for PE prediction from both dasets
all_data=pd.concat([data,test_data])
"""
Explanation: Load data
Let's load the data
End of explanation
"""
# Define number of clusters
nClusters=20
clus=KMeans(n_clusters=nClusters, random_state=seed)
# Define features to be used in clustering process
cl_feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'NM_M', 'RELPOS']
# Scale input data
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(all_data[cl_feature_names])
X_cluster = scaler.transform(all_data[cl_feature_names])
# Fit cluster algorithm
clus.fit(X_cluster)
# Append cluster data to training data
data.insert (1,'Cluster',clus.predict(scaler.transform(data [cl_feature_names])))
# Append cluster data to test data
test_data.insert(1,'Cluster',clus.predict(scaler.transform(test_data[cl_feature_names])))
# Assign feature names to be used for classification. New feature 'Cluster'
feature_names = ['Cluster', 'GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
"""
Explanation: Feature imputation
The new feature is derived through unsupervised learning. Unsupervised learning can be used to reduce possible human biases introduced during core interpretation. The classes derived this way are used as an extra feature in the supervised learning process.
Finally, let us fill in the missing PE values. We use knowledge from both the training and blind wells.
Clustering
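The scale, cluster, and append-cluster-id pattern used here can be sketched in isolation (toy data; shapes and parameter values below are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import RobustScaler

rng = np.random.RandomState(0)
X = rng.randn(100, 4)  # stand-in for the well-log features

scaler = RobustScaler(quantile_range=(25.0, 75.0)).fit(X)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scaler.transform(X))

# the unsupervised cluster id becomes one extra feature column
X_aug = np.column_stack([X, km.predict(scaler.transform(X))])
print(X_aug.shape)  # (100, 5)
```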
End of explanation
"""
X = data[feature_names].values # features
y = data['Facies'].values # labels
# Store well labels and depths
well = data['Well Name'].values
depth = data['Depth'].values
"""
Explanation: Let's store features, labels and other data into numpy arrays.
End of explanation
"""
# Let us fill missing PE values. This is the only cell that differs from the approach of Paolo Bestagini. Currently no feature engineering is used, but this should be explored in the future.
imp_feature_names = [ 'GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
reg = RandomForestRegressor(max_features='sqrt', n_estimators=50, random_state=42)
DataImpAll = all_data[imp_feature_names ].copy()
DataImp = DataImpAll.dropna(axis = 0, inplace=False)
Ximp=DataImp.loc[:, DataImp.columns != 'PE']
Yimp=DataImp.loc[:, 'PE']
reg.fit(Ximp, Yimp)
X[np.array(data.PE.isnull()),feature_names .index('PE')] = reg.predict(data.loc[data.PE.isnull(),:][['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'NM_M', 'RELPOS']])
"""
Explanation: PE prediction
End of explanation
"""
# Define function for plotting feature statistics
def plot_feature_stats(X, y, feature_names, facies_colors, facies_names):
# Remove NaN
nan_idx = np.any(np.isnan(X), axis=1)
X = X[np.logical_not(nan_idx), :]
y = y[np.logical_not(nan_idx)]
# Merge features and labels into a single DataFrame
features = pd.DataFrame(X, columns=feature_names)
labels = pd.DataFrame(y, columns=['Facies'])
for f_idx, facies in enumerate(facies_names):
labels[labels[:] == f_idx] = facies
data = pd.concat((labels, features), axis=1)
# Plot features statistics
facies_color_map = {}
for ind, label in enumerate(facies_names):
facies_color_map[label] = facies_colors[ind]
sns.pairplot(data, hue='Facies', palette=facies_color_map, hue_order=list(reversed(facies_names)))
"""
Explanation: Data inspection
Let us inspect the features we are working with. This step is useful to understand how to normalize them and how to devise a correct cross-validation strategy. Specifically, it is possible to observe that:
- Some features seem to be affected by a few outlier measurements.
- Only a few wells contain samples from all classes.
- PE measurements are available only for some wells.
End of explanation
"""
# Facies per well
for w_idx, w in enumerate(np.unique(well)):
ax = plt.subplot(3, 4, w_idx+1)
hist = np.histogram(y[well == w], bins=np.arange(len(facies_names)+1)+.5)
plt.bar(np.arange(len(hist[0])), hist[0], color=facies_colors, align='center')
ax.set_xticks(np.arange(len(hist[0])))
ax.set_xticklabels(facies_names)
ax.set_title(w)
# Features per well
for w_idx, w in enumerate(np.unique(well)):
ax = plt.subplot(3, 4, w_idx+1)
hist = np.logical_not(np.any(np.isnan(X[well == w, :]), axis=0))
plt.bar(np.arange(len(hist)), hist, color=facies_colors, align='center')
ax.set_xticks(np.arange(len(hist)))
ax.set_xticklabels(feature_names)
ax.set_yticks([0, 1])
ax.set_yticklabels(['miss', 'hit'])
ax.set_title(w)
"""
Explanation: Feature distribution
plot_feature_stats(X, y, feature_names, facies_colors, facies_names)
mpl.rcParams.update(inline_rc)
End of explanation
"""
# ## Feature augmentation
# Our guess is that facies do not abruptly change from a given depth layer to the next one. Therefore, we consider features at neighboring layers to be somehow correlated. To possibly exploit this fact, let us perform feature augmentation by:
# - Select features to augment.
# - Aggregating aug_features at neighboring depths.
# - Computing aug_features spatial gradient.
# - Computing aug_features spatial gradient of gradient.
# Feature windows concatenation function
def augment_features_window(X, N_neig, features=-1):
# Parameters
N_row = X.shape[0]
if features==-1:
N_feat = X.shape[1]
features=np.arange(0,X.shape[1])
else:
N_feat = len(features)
# Zero padding
X = np.vstack((np.zeros((N_neig, X.shape[1])), X, (np.zeros((N_neig, X.shape[1])))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig)+X.shape[1]))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
if (c==0):
this_row = np.hstack((this_row, X[r+c,:]))
else:
this_row = np.hstack((this_row, X[r+c,features]))
X_aug[r-N_neig] = this_row
return X_aug
# Feature gradient computation function
def augment_features_gradient(X, depth, features=-1):
if features==-1:
features=np.arange(0,X.shape[1])
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X[:,features], axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
# Feature augmentation function
def augment_features(X, well, depth, N_neig=1, features=-1):
if (features==-1):
N_Feat=X.shape[1]
else:
N_Feat=len(features)
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1] + N_Feat*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig,features)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx],features)
X_aug_grad_grad = augment_features_gradient(X_aug_grad, depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad,X_aug_grad_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
# Define window length
N_neig=1
# Define which features to augment by introducing window and gradients.
augm_Features=['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'RELPOS']
# Get the columns of features to be augmented
feature_indices=[feature_names.index(log) for log in augm_Features]
# Augment features
X_aug, padded_rows = augment_features(X, well, depth, N_neig=N_neig, features=feature_indices)
# Remove padded rows
data_no_pad = np.setdiff1d(np.arange(0,X_aug.shape[0]), padded_rows)
X=X[data_no_pad ,:]
depth=depth[data_no_pad]
X_aug=X_aug[data_no_pad ,:]
y=y[data_no_pad]
data=data.iloc[data_no_pad ,:]
well=well[data_no_pad]
"""
Explanation: Augment features
End of explanation
"""
lpgo = LeavePGroupsOut(2)
# Generate splits
split_list = []
for train, val in lpgo.split(X, y, groups=data['Well Name']):
hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)
hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)
if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):
split_list.append({'train':train, 'val':val})
# Print splits
for s, split in enumerate(split_list):
print('Split %d' % s)
print(' training: %s' % (data.iloc[split['train']]['Well Name'].unique()))
print(' validation: %s' % (data.iloc[split['val']]['Well Name'].unique()))
"""
Explanation: Generate training, validation and test data splits
The choice of training and validation data is paramount in order to avoid overfitting and find a solution that generalizes well on new data. For this reason, we generate a set of training-validation splits so that:
- Features from each well belong to either the training or the validation set.
- Training and validation sets contain at least one sample for each class.
Initialize model selection methods
End of explanation
"""
# Train and test a classifier
def train_and_test(X_tr, y_tr, X_v, well_v, clf):
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
# Train classifier
clf.fit(X_tr, y_tr)
# Test classifier
y_v_hat = clf.predict(X_v)
# Clean isolated facies for each well
for w in np.unique(well_v):
y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=5)
return y_v_hat
# Parameters search grid (uncomment parameters for full grid search... may take a lot of time)
if clfType=='XB':
md_grid = [2] #[2,3]
# mcw_grid = [1]
gamma_grid = [0.3] #[0.2, 0.3, 0.4]
ss_grid = [0.9] #[0.9,1.3] #[0.7, 0.9, 0.5]
csb_grid = [0.8] #[0.6,0.8,1]
alpha_grid =[0.3] #[0.3, 0.4] #[0.2, 0.15, 0.3]
lr_grid = [0.04] #[0.04, 0.06, 0.05] #[0.05, 0.08, 0.1, 0.12]
ne_grid = [300] #[100,200,300]
param_grid = []
for N in md_grid:
# for M in mcw_grid:
for S in gamma_grid:
for L in ss_grid:
for K in csb_grid:
for P in alpha_grid:
for R in lr_grid:
for E in ne_grid:
param_grid.append({'maxdepth':N,
# 'minchildweight':M,
'gamma':S,
'subsample':L,
'colsamplebytree':K,
'alpha':P,
'learningrate':R,
'n_estimators':E})
if clfType=='XBA':
learning_rate_grid=[0.1] #[0.12, 0.10, 0.14]
max_depth_grid=[3] # [2,3,5]
min_child_weight_grid=[12] #[8, 10, 12]
colsample_bytree_grid = [0.7] #[0.7,0.9]
n_estimators_grid=[150] #[150]
param_grid = []
for max_depth in max_depth_grid:
for min_child_weight in min_child_weight_grid:
for colsample_bytree in colsample_bytree_grid:
for learning_rate in learning_rate_grid:
for n_estimators in n_estimators_grid:
param_grid.append({'maxdepth':max_depth,
'minchildweight':min_child_weight,
'colsamplebytree':colsample_bytree,
'learningrate':learning_rate,
'n_estimators':n_estimators})
if clfType=='GB':
N_grid = [100] #[50, 100, 150]
MD_grid = [3] #[3, 5, 10]
M_grid = [10] #[6, 10, 15]
LR_grid = [0.1]
L_grid = [10]
S_grid = [10] #[10, 15, 25]
param_grid = []
for N in N_grid:
for M in MD_grid:
for M1 in M_grid:
for S in LR_grid:
for L in L_grid:
for S1 in S_grid:
param_grid.append({'N':N,
'MD':M,
'MF':M1,
'LR':S,
'L':L,'S1':S1})
if clfType=='RF':
N_grid = [100] #[50, 100, 150]
M_grid = [5] #[5, 10, 15]
S_grid = [25] #[5, 10, 25]
# L_grid = [2, 3, 4, 5, 10, 25]
cw_grid=['balanced'] #['balanced_subsample','balanced']
param_grid = []
for N in N_grid:
for M in M_grid:
for S in S_grid:
# for L in L_grid:
for cw in cw_grid:
param_grid.append({'N':N
,'M':M
,'S':S
# ,'L':L
,'c_w':cw})
def getClf(clfType, param):
if clfType=='RF':
clf = OneVsOneClassifier(RandomForestClassifier(n_estimators=param['N'],
# criterion='entropy',
max_features=param['M'],
min_samples_split=param['S'],
# min_samples_leaf=param['L'],
class_weight=param['c_w'],
random_state=seed), n_jobs=-1)
if clfType=='XB':
clf = OneVsOneClassifier(XGBClassifier(
learning_rate = param['learningrate'],
n_estimators=param['n_estimators'],
max_depth=param['maxdepth'],
# min_child_weight=param['minchildweight'],
gamma = param['gamma'],
subsample=param['subsample'],
colsample_bytree=param['colsamplebytree'],
reg_alpha = param['alpha'],
nthread =1,
seed = seed,
) , n_jobs=-1)
if clfType=='XBA':
clf = XGBClassifier(
learning_rate = param['learningrate'],
n_estimators=param['n_estimators'],
max_depth=param['maxdepth'],
min_child_weight=param['minchildweight'],
colsample_bytree=param['colsamplebytree'],
nthread =4,
seed = seed
)
if clfType=='GB':
clf=OneVsOneClassifier(GradientBoostingClassifier(
loss='exponential',
n_estimators=param['N'],
learning_rate=param['LR'],
max_depth=param['MD'],
max_features= param['MF'],
min_samples_leaf=param['L'],
min_samples_split=param['S1'],
random_state=seed,
max_leaf_nodes=None,)
, n_jobs=-1)
return clf
# For each set of parameters
score_param = []
print('features: %d' % X_aug.shape[1])
exportScores=[]
for param in param_grid:
# For each data split
score_split = []
for split in split_list:
split_train_no_pad = split['train']
# Select training and validation data from current split
X_tr = X_aug[split_train_no_pad, :]
X_v = X_aug[split['val'], :]
y_tr = y[split_train_no_pad]
y_v = y[split['val']]
# Select well labels for validation data
well_v = well[split['val']]
# Train and test
y_v_hat = train_and_test(X_tr, y_tr, X_v, well_v, getClf(clfType,param))
# Score
score = f1_score(y_v, y_v_hat, average='micro')
score_split.append(score)
#print('Split: {0}, Score = {1:0.3f}'.format(split_list.index(split),score))
# Average score for this param
score_param.append(np.mean(score_split))
print('Average F1 score = %.3f %s' % (score_param[-1], param))
exportScores.append('Average F1 score = %.3f %s' % (score_param[-1], param))
# Best set of parameters
best_idx = np.argmax(score_param)
param_best = param_grid[best_idx]
score_best = score_param[best_idx]
print('\nBest F1 score = %.3f %s' % (score_best, param_best))
# Store F1 scores for multiple param grids
if len(exportScores)>1:
exportScoresFile=open('results_{0}_sub02.txt'.format(clfType),'wb')
exportScoresFile.write('features: %d' % X_aug.shape[1])
for item in exportScores:
exportScoresFile.write("%s\n" % item)
exportScoresFile.write('\nBest F1 score = %.3f %s' % (score_best, param_best))
exportScoresFile.close()
# ## Predict labels on test data
# Let us now apply the selected classification technique to test data.
# Training data
X_tr = X_aug
y_tr = y
# Prepare test data
well_ts = test_data['Well Name'].values
depth_ts = test_data['Depth'].values
X_ts = test_data[feature_names].values
# Augment Test data features
X_ts, padded_rows = augment_features(X_ts, well_ts,depth_ts,N_neig=N_neig, features=feature_indices)
# Predict test labels
y_ts_hat = train_and_test(X_tr, y_tr, X_ts, well_ts, getClf(clfType,param_best))
# Save predicted labels
test_data['Facies'] = y_ts_hat
test_data.to_csv('esa_predicted_facies_{0}_CL_sub02.csv'.format(clfType))
# Plot predicted labels
make_facies_log_plot(
test_data[test_data['Well Name'] == 'STUART'],
facies_colors=facies_colors)
make_facies_log_plot(
test_data[test_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies_colors)
mpl.rcParams.update(inline_rc)
"""
Explanation: Classification parameters optimization
Let us perform the following steps for each set of parameters:
- Select a data split.
- Normalize features using a robust scaler.
- Train the classifier on training data.
- Test the trained classifier on validation data.
- Repeat for all splits and average the F1 scores.
At the end of the loop, we select the classifier that maximizes the average F1 score on the validation set. Hopefully, this classifier should be able to generalize well on new data.
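The scoring above uses `f1_score(..., average='micro')`. As a dependency-free sketch (toy labels, not contest data): micro-averaging pools true positives, false positives and false negatives over all classes before computing precision and recall, and for single-label multi-class predictions such as these facies it reduces to plain accuracy.

```python
# Toy facies labels (values are illustrative only)
y_true = [1, 2, 2, 3, 3, 3, 1, 2]
y_pred = [1, 2, 3, 3, 3, 1, 1, 2]

# Micro-averaging pools counts over all classes
tp = sum(t == p for t, p in zip(y_true, y_pred))
fp = sum(t != p for t, p in zip(y_true, y_pred))  # each error is a FP for the predicted class
fn = fp                                           # ...and a FN for the true class
precision = tp / (tp + fp)
recall = tp / (tp + fn)
micro_f1 = 2 * precision * recall / (precision + recall)

# For single-label multi-class data, micro-averaged F1 equals accuracy
accuracy = tp / len(y_true)
print(micro_f1, accuracy)
```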
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/f398f296c84e53a14339d2c3c36e91a4/movement_detection.ipynb | bsd-3-clause | # Authors: Adonay Nunes <adonay.s.nunes@gmail.com>
# Luke Bloy <luke.bloy@gmail.com>
# License: BSD (3-clause)
import os.path as op
import mne
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
from mne.preprocessing import annotate_movement, compute_average_dev_head_t
# Load data
data_path = bst_auditory.data_path()
data_path_MEG = op.join(data_path, 'MEG')
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
raw_fname1 = op.join(data_path_MEG, 'bst_auditory', 'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path_MEG, 'bst_auditory', 'S01_AEF_20131218_02.ds')
# read and concatenate two files
raw = read_raw_ctf(raw_fname1, preload=False)
mne.io.concatenate_raws([raw, read_raw_ctf(raw_fname2, preload=False)])
raw.crop(350, 410).load_data()
raw.resample(100, npad="auto")
"""
Explanation: Annotate movement artifacts and reestimate dev_head_t
Periods where the participant moved considerably are contaminated by low-amplitude
artifacts. When averaging the magnetic fields, the more spread out the
head positions, the greater the cancellation due to the differing locations.
Similarly, the covariance will also be affected by severe head movement,
and source estimation will suffer from low or smeared coregistration accuracy.
This example uses the continuous head position indicator (cHPI) time series
to annotate periods of head movement; the device-to-head transformation
matrix is then estimated from the artifact-free segments. The new head position
will be more representative of the actual head position during the recording.
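As a rough illustration of the idea (this is not MNE's implementation — `annotate_movement` below does the real work on the cHPI fits), annotating movement amounts to flagging time points whose head position drifts beyond a distance limit from a reference position. The positions and threshold here are made up:

```python
import math

# Hypothetical head origins (meters) at successive time points
positions = [
    (0.000, 0.000, 0.040),
    (0.000, 0.001, 0.040),
    (0.001, 0.000, 0.041),
    (0.030, 0.030, 0.080),  # a large movement
]
reference = positions[0]   # simplistic reference; MNE uses its own logic
limit = 0.005              # hypothetical 5 mm threshold (cf. mean_distance_limit below)
bad = [math.dist(p, reference) > limit for p in positions]
print(bad)
```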
End of explanation
"""
# Get cHPI time series and compute average
chpi_locs = mne.chpi.extract_chpi_locs_ctf(raw)
head_pos = mne.chpi.compute_head_pos(raw.info, chpi_locs)
original_head_dev_t = mne.transforms.invert_transform(
raw.info['dev_head_t'])
average_head_dev_t = mne.transforms.invert_transform(
compute_average_dev_head_t(raw, head_pos))
fig = mne.viz.plot_head_positions(head_pos)
for ax, val, val_ori in zip(fig.axes[::2], average_head_dev_t['trans'][:3, 3],
original_head_dev_t['trans'][:3, 3]):
ax.axhline(1000 * val, color='r')
ax.axhline(1000 * val_ori, color='g')
# The green horizontal lines represent the original head position, whereas the
# red lines are the new head position averaged over all the time points.
"""
Explanation: Plot continuous head position with respect to the mean recording position
End of explanation
"""
mean_distance_limit = .0015 # in meters
annotation_movement, hpi_disp = annotate_movement(
raw, head_pos, mean_distance_limit=mean_distance_limit)
raw.set_annotations(annotation_movement)
raw.plot(n_channels=100, duration=20)
"""
Explanation: Plot raw data with annotated movement
End of explanation
"""
new_dev_head_t = compute_average_dev_head_t(raw, head_pos)
raw.info['dev_head_t'] = new_dev_head_t
mne.viz.plot_alignment(raw.info, show_axes=True, subject=subject,
trans=trans_fname, subjects_dir=subjects_dir)
"""
Explanation: After checking the annotated movement artifacts, calculate the new transform
and plot it:
End of explanation
"""
|
Open-Power-System-Data/time_series | processing.ipynb | mit | version = '2020-10-06'
changes = '''Yearly update'''
"""
Explanation: <div style="width:100%; background-color: #D9EDF7; border: 1px solid #CFCFCF; text-align: left; padding: 10px;">
<b>Time series: Processing Notebook</b>
<ul>
<li><a href="main.ipynb">Main Notebook</a></li>
<li>Processing Notebook</li>
</ul>
<br>This Notebook is part of the <a href="http://data.open-power-system-data.org/time_series">Time series Data Package</a> of <a href="http://open-power-system-data.org">Open Power System Data</a>.
</div>
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Introductory-Notes" data-toc-modified-id="Introductory-Notes-1"><span class="toc-item-num">1 </span>Introductory Notes</a></span></li><li><span><a href="#Settings" data-toc-modified-id="Settings-2"><span class="toc-item-num">2 </span>Settings</a></span><ul class="toc-item"><li><span><a href="#Set-version-number-and-recent-changes" data-toc-modified-id="Set-version-number-and-recent-changes-2.1"><span class="toc-item-num">2.1 </span>Set version number and recent changes</a></span></li><li><span><a href="#Import-Python-libraries" data-toc-modified-id="Import-Python-libraries-2.2"><span class="toc-item-num">2.2 </span>Import Python libraries</a></span></li><li><span><a href="#Display-options" data-toc-modified-id="Display-options-2.3"><span class="toc-item-num">2.3 </span>Display options</a></span></li><li><span><a href="#Set-directories" data-toc-modified-id="Set-directories-2.4"><span class="toc-item-num">2.4 </span>Set directories</a></span></li><li><span><a href="#Chromedriver" data-toc-modified-id="Chromedriver-2.5"><span class="toc-item-num">2.5 </span>Chromedriver</a></span></li><li><span><a href="#Set-up-a-log" data-toc-modified-id="Set-up-a-log-2.6"><span class="toc-item-num">2.6 </span>Set up a log</a></span></li><li><span><a href="#Select-timerange" data-toc-modified-id="Select-timerange-2.7"><span class="toc-item-num">2.7 </span>Select timerange</a></span></li><li><span><a href="#Select-download-source" data-toc-modified-id="Select-download-source-2.8"><span class="toc-item-num">2.8 </span>Select download source</a></span></li><li><span><a href="#Select-subset" data-toc-modified-id="Select-subset-2.9"><span class="toc-item-num">2.9 </span>Select subset</a></span></li></ul></li><li><span><a href="#Download" data-toc-modified-id="Download-3"><span class="toc-item-num">3 </span>Download</a></span><ul class="toc-item"><li><span><a href="#Automatic-download-(for-most-sources)" 
data-toc-modified-id="Automatic-download-(for-most-sources)-3.1"><span class="toc-item-num">3.1 </span>Automatic download (for most sources)</a></span></li><li><span><a href="#Manual-download" data-toc-modified-id="Manual-download-3.2"><span class="toc-item-num">3.2 </span>Manual download</a></span><ul class="toc-item"><li><span><a href="#Energinet.dk" data-toc-modified-id="Energinet.dk-3.2.1"><span class="toc-item-num">3.2.1 </span>Energinet.dk</a></span></li><li><span><a href="#CEPS" data-toc-modified-id="CEPS-3.2.2"><span class="toc-item-num">3.2.2 </span>CEPS</a></span></li><li><span><a href="#ENTSO-E-Power-Statistics" data-toc-modified-id="ENTSO-E-Power-Statistics-3.2.3"><span class="toc-item-num">3.2.3 </span>ENTSO-E Power Statistics</a></span></li></ul></li></ul></li><li><span><a href="#Read" data-toc-modified-id="Read-4"><span class="toc-item-num">4 </span>Read</a></span><ul class="toc-item"><li><span><a href="#Preparations" data-toc-modified-id="Preparations-4.1"><span class="toc-item-num">4.1 </span>Preparations</a></span></li><li><span><a href="#Reading-loop" data-toc-modified-id="Reading-loop-4.2"><span class="toc-item-num">4.2 </span>Reading loop</a></span></li><li><span><a href="#Save-raw-data" data-toc-modified-id="Save-raw-data-4.3"><span class="toc-item-num">4.3 </span>Save raw data</a></span></li></ul></li><li><span><a href="#Processing" data-toc-modified-id="Processing-5"><span class="toc-item-num">5 </span>Processing</a></span><ul class="toc-item"><li><span><a href="#Missing-data-handling" data-toc-modified-id="Missing-data-handling-5.1"><span class="toc-item-num">5.1 </span>Missing data handling</a></span><ul class="toc-item"><li><span><a href="#Interpolation" data-toc-modified-id="Interpolation-5.1.1"><span class="toc-item-num">5.1.1 </span>Interpolation</a></span></li></ul></li><li><span><a href="#Aggregate-wind-offshore-+-onshore" data-toc-modified-id="Aggregate-wind-offshore-+-onshore-5.2"><span class="toc-item-num">5.2 </span>Aggregate 
wind offshore + onshore</a></span></li><li><span><a href="#Country-specific-calculations---not-used-in-this-release" data-toc-modified-id="Country-specific-calculations---not-used-in-this-release-5.3"><span class="toc-item-num">5.3 </span>Country specific calculations - not used in this release</a></span><ul class="toc-item"><li><span><a href="#Germany" data-toc-modified-id="Germany-5.3.1"><span class="toc-item-num">5.3.1 </span>Germany</a></span><ul class="toc-item"><li><span><a href="#Aggregate-German-data-from-individual-TSOs" data-toc-modified-id="Aggregate-German-data-from-individual-TSOs-5.3.1.1"><span class="toc-item-num">5.3.1.1 </span>Aggregate German data from individual TSOs</a></span></li></ul></li><li><span><a href="#Italy" data-toc-modified-id="Italy-5.3.2"><span class="toc-item-num">5.3.2 </span>Italy</a></span></li><li><span><a href="#Great-Britain-/-United-Kingdom" data-toc-modified-id="Great-Britain-/-United-Kingdom-5.3.3"><span class="toc-item-num">5.3.3 </span>Great Britain / United Kingdom</a></span></li></ul></li><li><span><a href="#Calculate-availabilities/profiles" data-toc-modified-id="Calculate-availabilities/profiles-5.4"><span class="toc-item-num">5.4 </span>Calculate availabilities/profiles</a></span></li><li><span><a href="#Resample-higher-frequencies-to-60'" data-toc-modified-id="Resample-higher-frequencies-to-60'-5.5"><span class="toc-item-num">5.5 </span>Resample higher frequencies to 60'</a></span></li><li><span><a href="#Fill-columns-not-retrieved-directly-from-TSO-webites-with--ENTSO-E-Transparency-data" data-toc-modified-id="Fill-columns-not-retrieved-directly-from-TSO-webites-with--ENTSO-E-Transparency-data-5.6"><span class="toc-item-num">5.6 </span>Fill columns not retrieved directly from TSO webites with ENTSO-E Transparency data</a></span></li><li><span><a href="#Insert-a-column-with-Central-European-(Summer-)time" data-toc-modified-id="Insert-a-column-with-Central-European-(Summer-)time-5.7"><span class="toc-item-num">5.7 
</span>Insert a column with Central European (Summer-)time</a></span></li></ul></li><li><span><a href="#Create-a-final-savepoint" data-toc-modified-id="Create-a-final-savepoint-6"><span class="toc-item-num">6 </span>Create a final savepoint</a></span></li><li><span><a href="#Write-data-to-disk" data-toc-modified-id="Write-data-to-disk-7"><span class="toc-item-num">7 </span>Write data to disk</a></span><ul class="toc-item"><li><span><a href="#Limit-time-range" data-toc-modified-id="Limit-time-range-7.1"><span class="toc-item-num">7.1 </span>Limit time range</a></span></li><li><span><a href="#Different-shapes" data-toc-modified-id="Different-shapes-7.2"><span class="toc-item-num">7.2 </span>Different shapes</a></span></li><li><span><a href="#Write-to-SQLite-database" data-toc-modified-id="Write-to-SQLite-database-7.3"><span class="toc-item-num">7.3 </span>Write to SQLite-database</a></span></li><li><span><a href="#Write-to-Excel" data-toc-modified-id="Write-to-Excel-7.4"><span class="toc-item-num">7.4 </span>Write to Excel</a></span></li><li><span><a href="#Write-to-CSV" data-toc-modified-id="Write-to-CSV-7.5"><span class="toc-item-num">7.5 </span>Write to CSV</a></span></li><li><span><a href="#Create-metadata" data-toc-modified-id="Create-metadata-7.6"><span class="toc-item-num">7.6 </span>Create metadata</a></span></li><li><span><a href="#Write-checksums.txt" data-toc-modified-id="Write-checksums.txt-7.7"><span class="toc-item-num">7.7 </span>Write checksums.txt</a></span></li></ul></li></ul></div>
Introductory Notes
This Notebook handles missing data, performs calculations and aggregations, and creates the output files.
Settings
This section performs some preparatory steps.
Set version number and recent changes
Executing this script till the end will create a new version of the data package.
The version number specifies the local directory for the data.<br>
We include a note on what has been changed.
End of explanation
"""
# Python modules
from datetime import datetime, date, timedelta, time
import pandas as pd
import numpy as np
import logging
import json
import sqlite3
import yaml
import itertools
import os
import pytz
from shutil import copyfile
import pickle
# Scripts from the time-series repository
from timeseries_scripts.read import read
from timeseries_scripts.download import download
from timeseries_scripts.imputation import find_nan, mark_own_calc
from timeseries_scripts.make_json import make_json, get_sha_hash
# Reload modules with execution of any code, to avoid having to restart
# the kernel after editing timeseries_scripts
%load_ext autoreload
%autoreload 2
# speed up tab completion in Jupyter Notebook
%config Completer.use_jedi = False
"""
Explanation: Import Python libraries
End of explanation
"""
# Allow pretty-display of multiple variables
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# Adjust the way pandas DataFrames are displayed to fit more columns
pd.reset_option('display.max_colwidth')
pd.options.display.max_columns = 60
# pd.options.display.max_colwidth=5
"""
Explanation: Display options
End of explanation
"""
# make sure the working directory is this file's directory
try:
os.chdir(home_path)
except NameError:
home_path = os.path.realpath('.')
# optionally, set a different directory to store outputs and raw data,
# which will take up around 15 GB of disk space
#Milos: save_path is None <=> use_external_dir == False
use_external_dir = True
if use_external_dir:
save_path = os.path.join('C:', os.sep, 'OPSD_time_series_data')
else:
save_path = home_path
input_path = os.path.join(home_path, 'input')
sources_yaml_path = os.path.join(home_path, 'input', 'sources.yml')
areas_csv_path = os.path.join(home_path, 'input', 'areas.csv')
data_path = os.path.join(save_path, version, 'original_data')
out_path = os.path.join(save_path, version)
temp_path = os.path.join(save_path, 'temp')
parsed_path = os.path.join(save_path, 'parsed')
chromedriver_path = os.path.join(home_path, 'chromedriver', 'chromedriver')
for path in [data_path, out_path, temp_path, parsed_path]:
os.makedirs(path, exist_ok=True)
# change to temp directory
os.chdir(temp_path)
os.getcwd()
"""
Explanation: Set directories
End of explanation
"""
# Deciding whether to use the provided database of Terna links
extract_new_terna_urls = False
# Saving the choice
f = open("extract_new_terna_urls.pickle", "wb")
pickle.dump(extract_new_terna_urls, f)
f.close()
"""
Explanation: Chromedriver
If you want to download from sources which require scraping, download the appropriate version of Chromedriver for your platform, name it chromedriver, create folder chromedriver in the working directory, and move the driver to it. It is used by Selenium to scrape the links from web pages.
The current list of sources which require scraping (as of December 2018):
- Terna
- Note that the package contains a database of Terna links up to 20 December 2018. By default, the links are first looked up in this database, so if the end date of your query is not after 20 December 2018, you won't need Selenium. If you need later dates, you have two options. If you set the variable extract_new_terna_urls to True, then Selenium will be used to download the files for those later dates. If you set extract_new_terna_urls to False (which is the default value), only the recorded links will be consulted and Selenium will not be used.
- Note: Make sure that the database file, recorded_terna_urls.csv, is located in the working directory.
End of explanation
"""
# Configure the display of logs in the notebook and attach it to the root logger
logstream = logging.StreamHandler()
logstream.setLevel(logging.INFO) #threshold for log messages displayed in here
logging.basicConfig(level=logging.INFO, handlers=[logstream])
# Set up an additional logger for debug messages from the scripts
script_logger = logging.getLogger('timeseries_scripts')
script_logger.setLevel(logging.DEBUG)
formatter = logging.Formatter(fmt='%(asctime)s %(name)s %(levelname)s %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',)
# Set up a logger for logs from the notebook
logger = logging.getLogger('notebook')
# Set up a logfile and attach it to both loggers
logfile = logging.handlers.TimedRotatingFileHandler(os.path.join(temp_path, 'logfile.log'), when='midnight')
logfile.setFormatter(formatter)
logfile.setLevel(logging.DEBUG) #threshold for log messages in logfile
script_logger.addHandler(logfile)
logger.addHandler(logfile)
"""
Explanation: Set up a log
End of explanation
"""
logstream.setLevel(logging.DEBUG)
"""
Explanation: Execute for more detailed logging message (May slow down computation).
End of explanation
"""
start_from_user = date(2015, 1, 1)
end_from_user = date(2020, 9, 30)
"""
Explanation: Select timerange
This section: select the time range and the data sources for download and read. Default: all data sources implemented, full time range available.
Source parameters are specified in input/sources.yml, which describes, for each source, the datasets (such as wind and solar generation) alongside all the parameters necessary to execute the downloads.
The option to perform downloading and reading of subsets is for testing only. To be able to run the script successfully until the end, all sources have to be included; otherwise the script will run into errors (e.g. the step where aggregate German timeseries are calculated requires data from all four German TSOs to be loaded).
In order to do this, specify the beginning and end of the interval for which to attempt the download.
Type None to download all available data.
End of explanation
"""
archive_version = None # i.e. '2016-07-14'
"""
Explanation: Select download source
Instead of downloading from the sources, the complete raw data can be downloaded as a zip file from the OPSD Server. Advantages are:
- much faster download
- back up of raw data in case it is deleted from the server at the original source
In order to do this, specify an archive version to use the raw data from that version that has been cached on the OPSD server as input. All data from that version will be downloaded - timerange and subset will be ignored.
Type None to download directly from the original sources.
End of explanation
"""
with open(sources_yaml_path, 'r', encoding='UTF-8') as f:
    sources = yaml.full_load(f.read())
"""
Explanation: Select subset
Read in the configuration file which contains all the required infos for the download.
End of explanation
"""
for k, v in sources.items():
print(yaml.dump({k: list(v.keys())}, default_flow_style=False))
"""
Explanation: The next cell prints the available sources and datasets.<br>
Copy from its output and paste into the following cell to get the right format.<br>
End of explanation
"""
subset = yaml.full_load('''
ENTSO-E Transparency FTP:
- Actual Generation per Production Type
- Actual Total Load
- Day-ahead Total Load Forecast
- Day-ahead Prices
OPSD:
- capacity
''')
exclude=None
"""
Explanation: Optionally, specify a subset to download/read.<br>
Type subset = None to include all data.
End of explanation
"""
with open(sources_yaml_path, 'r', encoding='UTF-8') as f:
sources = yaml.full_load(f.read())
if subset: # eliminate sources and datasets not in subset
sources = {source_name:
{k: v for k, v in sources[source_name].items()
if k in dataset_list}
for source_name, dataset_list in subset.items()}
if exclude: # eliminate sources and variables in exclude
sources = {source_name: dataset_dict
for source_name, dataset_dict in sources.items()
if not source_name in exclude}
# Printing the selected sources (all of them or just a subset)
print("Selected sources: ")
for k, v in sources.items():
print(yaml.dump({k: list(v.keys())}, default_flow_style=False))
"""
Explanation: Now eliminate sources and datasets not in subset.
End of explanation
"""
auth = yaml.full_load('''
ENTSO-E Transparency FTP:
username: your email
password: your password
Elexon:
username: your email
password: your password
''')
"""
Explanation: Download
This section: download data. Takes about 1 hour to run for the complete data set (subset=None).
First, a data directory is created on your local computer. Then, download parameters for each data source are defined, including the URL. These parameters are then turned into a YAML-string. Finally, the download is executed file by file.
Each file is saved under its original filename. Note that the original file names are often not self-explanatory (called "data" or "January"). The file's content is revealed by its place in the directory structure.
Some sources (currently only ENTSO-E Transparency) require an account to allow downloading. For ENTSO-E Transparency, set up an account here.
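The on-disk layout described above can be sketched as follows (the directory names are illustrative, following the `{data_path}/{source}/{dataset}/{start}_{end}` pattern used in the manual-download sections below):

```python
import os

# Illustrative only: the pattern {data_path}/{source}/{dataset}/{start}_{end}
data_path = 'original_data'
target_dir = os.path.join(data_path, 'CEPS', 'wind_pv', '2012-01-01_2019-01-01')
os.makedirs(target_dir, exist_ok=True)  # downloaded files land in this folder
print(os.path.isdir(target_dir))
```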
End of explanation
"""
download(sources, data_path, input_path, chromedriver_path, auth,
archive_version=None,
start_from_user=start_from_user,
end_from_user=end_from_user,
testmode=False)
"""
Explanation: Automatic download (for most sources)
End of explanation
"""
headers = ['region', 'variable', 'attribute', 'source', 'web', 'unit']
"""
Explanation: Manual download
Energinet.dk
Go to http://osp.energinet.dk/_layouts/Markedsdata/framework/integrations/markedsdatatemplate.aspx.
Check The Boxes as specified below:
- Periode
- Hent udtræk fra perioden: 01-01-2005 Til: 01-01-2019
- Select all months
- Datakolonner
- Elspot Pris, Valutakode/MWh: Select all
- Produktion og forbrug, MWh/h: Select all
- Udtræksformat
- Valutakode: EUR
- Decimalformat: Engelsk talformat (punktum som decimaltegn
- Datoformat: Andet datoformat (ÅÅÅÅ-MM-DD)
- Hent Udtræk: Til Excel
Click Hent Udtræk
You will receive a file Markedsata.xls of about 50 MB. Open the file in Excel. There will be a warning from Excel saying that file extension and content are in conflict. Select "open anyway" and save the file as .xlsx.
In order to be found by the read-function, place the downloaded file in the following subdirectory:
{{data_path}}{{os.sep}}Energinet.dk{{os.sep}}prices_wind_solar{{os.sep}}2005-01-01_2019-01-01
CEPS
Go to http://www.ceps.cz/en/all-data#GenerationRES
check boxes as specified below:
DISPLAY DATA FOR: Generation RES
TURN ON FILTER checked
FILTER SETTINGS:
- Set the date range
- interval
- from: 2012 to: 2019
- Agregation and data version
- Aggregation: Hour
- Agregation function: average (AVG)
- Data version: real data
- Filter
- Type of power plant: ALL
- Click USE FILTER
- DOWNLOAD DATA: DATA V TXT
You will receive a file data.txt of about 1.5 MB.
In order to be found by the read-function, place the downloaded file in the following subdirectory:
{{data_path}}{{os.sep}}CEPS{{os.sep}}wind_pv{{os.sep}}2012-01-01_2019-01-01
ENTSO-E Power Statistics
Go to https://www.entsoe.eu/data/statistics/Pages/monthly_hourly_load.aspx
check boxes as specified below:
Date From: 01-01-2016 Date To: 28-02-2019
Country: (Select All)
Scale values to 100% using coverage ratio: YES
View Report
Click the Save symbol and select Excel
You will receive a file MHLV.xlsx of about 8 MB.
In order to be found by the read-function, place the downloaded file in the following subdirectory:
{{os.sep}}original_data{{os.sep}}ENTSO-E Power Statistics{{os.sep}}load{{os.sep}}2016-01-01_2016-04-30
The data covers the period from 01-01-2016 up to the present, but 4 months of data seems to be the maximum that the interface supports for a single download request, so you have to repeat the download procedure in 4-month periods to cover the whole period until the present.
Read
This section: Read each downloaded file into a pandas-DataFrame and merge data from different sources if it has the same time resolution. Takes ~15 minutes to run.
Preparations
Set the title of the rows at the top of the data used to store metadata internally. The order of this list determines the order of the levels in the resulting output.
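Concretely, each output column carries one value per entry of `headers`, stacked into a pandas `MultiIndex`. A minimal sketch with one made-up column:

```python
import pandas as pd

headers = ['region', 'variable', 'attribute', 'source', 'web', 'unit']
# One hypothetical column header, ordered to match `headers`
new_col = ('DE', 'solar', 'generation_actual', 'own calculation', '', 'MW')

columns = pd.MultiIndex.from_tuples([new_col], names=headers)
df = pd.DataFrame([[100.0], [200.0]], columns=columns)

# The level order chosen in `headers` determines how columns can be sliced later
solar = df.xs('solar', axis='columns', level='variable')
print(df.columns.names)
```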
End of explanation
"""
areas = pd.read_csv(areas_csv_path)
"""
Explanation: Read a prepared table containing meta data on the geographical areas
End of explanation
"""
areas.loc[areas['area ID'].notnull(), :'EIC'].fillna('')
"""
Explanation: View the areas table
End of explanation
"""
areas = pd.read_csv(areas_csv_path)
read(sources, data_path, parsed_path, areas, headers,
start_from_user=start_from_user, end_from_user=end_from_user,
testmode=False)
"""
Explanation: Reading loop
Loop through sources and datasets to do the reading.
First read the original CSV, Excel etc. files into pandas DataFrames.
End of explanation
"""
# Create a dictionary of empty DataFrames to be populated with data
data_sets = {'15min': pd.DataFrame(),
'30min': pd.DataFrame(),
'60min': pd.DataFrame()}
entso_e = {'15min': pd.DataFrame(),
'30min': pd.DataFrame(),
'60min': pd.DataFrame()}
for filename in os.listdir(parsed_path):
res_key, source_name, dataset_name, = filename.split('_')[:3]
if subset and not source_name in subset.keys():
continue
logger.info('include %s', filename)
df_portion = pd.read_pickle(os.path.join(parsed_path, filename))
#if source_name == 'ENTSO-E Transparency FTP':
# dfs = entso_e
#else:
dfs = data_sets
if dfs[res_key].empty:
dfs[res_key] = df_portion
elif not df_portion.empty:
dfs[res_key] = dfs[res_key].combine_first(df_portion)
else:
logger.warning(filename + ' WAS EMPTY')
for res_key, df in data_sets.items():
logger.info(res_key + ': %s', df.shape)
#for res_key, df in entso_e.items():
# logger.info('ENTSO-E ' + res_key + ': %s', df.shape)
"""
Explanation: Then combine the DataFrames that have the same temporal resolution
End of explanation
"""
data_sets['60min']
"""
Explanation: Display some rows of the dataframes to get a first impression of the data.
End of explanation
"""
os.chdir(temp_path)
data_sets['15min'].to_pickle('raw_data_15.pickle')
data_sets['30min'].to_pickle('raw_data_30.pickle')
data_sets['60min'].to_pickle('raw_data_60.pickle')
entso_e['15min'].to_pickle('raw_entso_e_15.pickle')
entso_e['30min'].to_pickle('raw_entso_e_30.pickle')
entso_e['60min'].to_pickle('raw_entso_e_60.pickle')
"""
Explanation: Save raw data
Save the DataFrames created by the read function to disk. This way you have the raw data to fall back on if something goes wrong in the remainder of this notebook, without having to repeat the previous steps.
End of explanation
"""
os.chdir(temp_path)
data_sets = {}
data_sets['15min'] = pd.read_pickle('raw_data_15.pickle')
data_sets['30min'] = pd.read_pickle('raw_data_30.pickle')
data_sets['60min'] = pd.read_pickle('raw_data_60.pickle')
entso_e = {}
entso_e['15min'] = pd.read_pickle('raw_entso_e_15.pickle')
entso_e['30min'] = pd.read_pickle('raw_entso_e_30.pickle')
entso_e['60min'] = pd.read_pickle('raw_entso_e_60.pickle')
"""
Explanation: Load the DataFrames saved above
End of explanation
"""
nan_tables = {}
overviews = {}
for res_key, df in data_sets.items():
data_sets[res_key], nan_tables[res_key], overviews[res_key] = find_nan(
df, res_key, headers, patch=True)
for res_key, df in entso_e.items():
entso_e[res_key], nan_tables[res_key + ' ENTSO-E'], overviews[res_key + ' ENTSO-E'] = find_nan(
df, res_key, headers, patch=True)
"""
Explanation: Processing
This section: missing data handling, aggregation of sub-national to national data, and aggregation of 15'-data to 60'-resolution. Takes ~30 minutes to run.
Missing data handling
Interpolation
Patch missing data. At this stage, only small gaps (up to 2 hours) are filled by linear interpolation. This catches most of the missing data due to daylight saving time transitions, while leaving bigger gaps untouched.
The exact locations of missing data are stored in the nan_table DataFrames.
Patch the datasets and display the location of missing data in the original data. Takes ~5 minutes to run.
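The rule — interpolate only short runs of consecutive NaNs and leave long gaps alone — can be sketched as below. The actual implementation lives in `timeseries_scripts.imputation` and may differ in detail; `patch_small_gaps` is a hypothetical helper for illustration.

```python
import numpy as np
import pandas as pd

def patch_small_gaps(s, max_gap=2):
    """Linearly interpolate only runs of up to `max_gap` consecutive NaNs."""
    is_na = s.isna()
    run_id = (is_na != is_na.shift()).cumsum()        # label runs of NaN/non-NaN
    run_len = is_na.groupby(run_id).transform('sum')  # length of each NaN run
    fillable = is_na & (run_len <= max_gap)
    out = s.interpolate(method='linear', limit_area='inside')
    return s.where(~fillable, out)

# The 1-step gap gets filled, the 3-step gap stays untouched
s = pd.Series([1.0, np.nan, 3.0, 4.0, np.nan, np.nan, np.nan, 8.0])
patched = patch_small_gaps(s)
print(patched.tolist())
```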
End of explanation
"""
nan_tables['60min']
"""
Explanation: Execute this to see an example of where the data has been patched.
Display the table of regions of missing values
End of explanation
"""
os.chdir(temp_path)
writer = pd.ExcelWriter('NaN_table.xlsx')
for res_key, df in nan_tables.items():
df.to_excel(writer, res_key)
writer.save()
writer = pd.ExcelWriter('Overview.xlsx')
for res_key, df in overviews.items():
df.to_excel(writer, res_key)
writer.save()
"""
Explanation: You can export the NaN-tables to Excel in order to inspect where there are NaNs
End of explanation
"""
os.chdir(temp_path)
data_sets['15min'].to_pickle('patched_15.pickle')
data_sets['30min'].to_pickle('patched_30.pickle')
data_sets['60min'].to_pickle('patched_60.pickle')
entso_e['15min'].to_pickle('patched_entso_e_15.pickle')
entso_e['30min'].to_pickle('patched_entso_e_30.pickle')
entso_e['60min'].to_pickle('patched_entso_e_60.pickle')
os.chdir(temp_path)
data_sets = {}
data_sets['15min'] = pd.read_pickle('patched_15.pickle')
data_sets['30min'] = pd.read_pickle('patched_30.pickle')
data_sets['60min'] = pd.read_pickle('patched_60.pickle')
entso_e = {}
entso_e['15min'] = pd.read_pickle('patched_entso_e_15.pickle')
entso_e['30min'] = pd.read_pickle('patched_entso_e_30.pickle')
entso_e['60min'] = pd.read_pickle('patched_entso_e_60.pickle')
"""
Explanation: Save/Load the patched data sets
End of explanation
"""
for res_key, df in data_sets.items():
df.sort_index(axis='columns', inplace=True)
"""
Explanation: Some of the following operations require the Dataframes to be lexsorted in the columns
End of explanation
"""
for res_key, df in data_sets.items():
for geo in df.columns.get_level_values(0).unique():
# we could also include 'generation_forecast'
for attribute in ['generation_actual']:
df_wind = df.loc[:, (geo, ['wind_onshore', 'wind_offshore'], attribute)]
if ('wind_onshore' in df_wind.columns.get_level_values('variable') and
'wind_offshore' in df_wind.columns.get_level_values('variable')):
                logger.info(f'aggregate onshore + offshore for {res_key} {geo}')
# skipna=False, otherwise NAs will become zeros after summation
sum_col = df_wind.sum(axis='columns', skipna=False).to_frame()
# Create a new MultiIndex
new_col_header = {
'region': geo,
'variable': 'wind',
'attribute': 'generation_actual',
'source': 'own calculation based on ENTSO-E Transparency',
'web': '',
'unit': 'MW'
}
new_col_header = tuple(new_col_header[level] for level in headers)
df[new_col_header] = sum_col
#df[new_col_header].describe()
dfi = data_sets['15min'].copy()
dfi.columns = [' '.join(col[:3]).strip() for col in dfi.columns.values]
dfi.info(verbose=True, null_counts=True)
"""
Explanation: Aggregate wind offshore + onshore
End of explanation
"""
df = data_sets['15min']
control_areas_DE = ['DE_50hertz', 'DE_amprion', 'DE_tennet', 'DE_transnetbw']
for variable in ['solar', 'wind', 'wind_onshore', 'wind_offshore']:
# we could also include 'generation_forecast'
for attribute in ['generation_actual']:
# Calculate aggregate German generation
sum_frame = df.loc[:, (control_areas_DE, variable, attribute)]
sum_frame.head()
sum_col = sum_frame.sum(axis='columns', skipna=False).to_frame().round(0)
# Create a new MultiIndex
new_col_header = {
'region': 'DE',
'variable': variable,
'attribute': attribute,
'source': 'own calculation based on German TSOs',
'web': '',
'unit': 'MW'
}
new_col_header = tuple(new_col_header[level] for level in headers)
data_sets['15min'][new_col_header] = sum_col
data_sets['15min'][new_col_header].describe()
"""
Explanation: Country specific calculations - not used in this release
Germany
Aggregate German data from individual TSOs
The wind and solar in-feed data for the 4 German control areas are summed up and stored in a new column. The column headers are created in the fashion introduced in the read script. Takes 5 seconds to run.
End of explanation
"""
bidding_zones_IT = ['IT_CNOR', 'IT_CSUD', 'IT_NORD', 'IT_SARD', 'IT_SICI', 'IT_SUD']
attributes = ['generation_actual', 'generation_actual_dso', 'generation_actual_tso']
for variable in ['solar', 'wind_onshore']:
sum_col = (
data_sets['60min']
.loc[:, (bidding_zones_IT, variable, attributes)]
.sum(axis='columns', skipna=False))
# Create a new MultiIndex
new_col_header = {
'region': 'IT',
'variable': variable,
'attribute': 'generation_actual',
'source': 'own calculation based on Terna',
'web': 'https://www.terna.it/SistemaElettrico/TransparencyReport/Generation/Forecastandactualgeneration.aspx',
'unit': 'MW'
}
new_col_header = tuple(new_col_header[level] for level in headers)
data_sets['60min'][new_col_header] = sum_col
data_sets['60min'][new_col_header].describe()
"""
Explanation: Italy
Generation data for Italy come by region (North, Central North, Sicily, etc.) and separately for DSO and TSO, so they need to be aggregated in order to get values for the whole country. In the next cell, we sum up the data by region and for each variable-attribute pair present in the Terna dataset header.
End of explanation
"""
for variable in ['solar', 'wind']:
sum_col = (data_sets['30min']
.loc[:, ('GB_GBN', variable, ['generation_actual_dso', 'generation_actual_tso'])]
.sum(axis='columns', skipna=False))
# Create a new MultiIndex
new_col_header = {
'region' : 'GB_GBN',
'variable' : variable,
'attribute' : 'generation_actual',
'source': 'own calculation based on Elexon and National Grid',
'web': '',
'unit': 'MW'
}
new_col_header = tuple(new_col_header[level] for level in headers)
data_sets['30min'][new_col_header] = sum_col
data_sets['30min'][new_col_header].describe()
"""
Explanation: Great Britain / United Kingdom
Data for Great Britain (without Northern Ireland) are disaggregated for DSO and TSO connected generators. We calculate aggregate values.
End of explanation
"""
for res_key, df in data_sets.items():
#if res_key == '60min':
# continue
for col_name, col in df.loc[:,(slice(None), slice(None), 'capacity')].iteritems():
# Get the generation data for the selected capacity column
kwargs = {
'key': (col_name[0], col_name[1], 'generation_actual'),
'level': ['region', 'variable', 'attribute'],
'axis': 'columns', 'drop_level': False}
generation_col = df.xs(**kwargs)
# take ENTSO-E transparency data if there is none from TSO
if generation_col.size == 0:
try:
generation_col = entso_e[res_key].xs(**kwargs)
except KeyError:
continue
if generation_col.size == 0:
continue
# Calculate the profile column
profile_col = generation_col.divide(col, axis='index').round(4)
# Create a new MultiIndex
new_col_header = {
'region': '{region}',
'variable': '{variable}',
'attribute': 'profile',
'source': 'own calculation based on {source}',
'web': '',
'unit': 'fraction'
}
source_capacity = col_name[3]
source_generation = generation_col.columns.get_level_values('source')[0]
if source_capacity == source_generation:
source = source_capacity
else:
source = (source_generation + ' and ' + source_capacity).replace('own calculation based on ', '')
new_col_header = tuple(new_col_header[level].format(region=col_name[0], variable=col_name[1], source=source)
for level in headers)
data_sets[res_key][new_col_header] = profile_col
data_sets[res_key][new_col_header].describe()
# Append profile to the dataset
df = df.combine_first(profile_col)
new_col_header
"""
Explanation: Calculate availabilities/profiles
Calculate profiles, that is, the share of wind/solar capacity producing at a given time.
End of explanation
"""
for res_key, df in data_sets.items():
df.sort_index(axis='columns', inplace=True)
"""
Explanation: Some of the following operations require the DataFrames to be lexsorted along the columns
End of explanation
"""
os.chdir(temp_path)
data_sets['15min'].to_pickle('calc_15.pickle')
data_sets['30min'].to_pickle('calc_30.pickle')
data_sets['60min'].to_pickle('calc_60.pickle')
os.chdir(temp_path)
data_sets = {}
data_sets['15min'] = pd.read_pickle('calc_15.pickle')
data_sets['30min'] = pd.read_pickle('calc_30.pickle')
data_sets['60min'] = pd.read_pickle('calc_60.pickle')
entso_e = {}
entso_e['15min'] = pd.read_pickle('patched_entso_e_15.pickle')
entso_e['30min'] = pd.read_pickle('patched_entso_e_30.pickle')
entso_e['60min'] = pd.read_pickle('patched_entso_e_60.pickle')
"""
Explanation: Another savepoint
End of explanation
"""
for ds in [data_sets]:#, entso_e]:
for res_key, df in ds.items():
if res_key == '60min':
continue
# # Resample first the marker column
# marker_resampled = df['interpolated_values'].groupby(
# pd.Grouper(freq='60Min', closed='left', label='left')
# ).agg(resample_markers, drop_region='DE_AT_LU')
# marker_resampled = marker_resampled.reindex(ds['60min'].index)
# # Glue condensed 15/30 min marker onto 60 min marker
# ds['60min'].loc[:, 'interpolated_values'] = glue_markers(
# ds['60min']['interpolated_values'],
# marker_resampled.reindex(ds['60min'].index))
# # Drop DE_AT_LU bidding zone data from the 15 minute resolution data to
# # be resampled since it is already provided in 60 min resolution by
# # ENTSO-E Transparency
# df = df.drop('DE_AT_LU', axis=1, errors='ignore')
# Do the resampling
resampled = df.resample('H').mean()
resampled.columns = resampled.columns.map(mark_own_calc)
resampled.columns.names = headers
# filter out columns already represented in hourly data
data_cols = ds['60min'].columns.droplevel(['source', 'web', 'unit'])
tuples = [col for col in resampled.columns if not col[:3] in data_cols]
add_cols = pd.MultiIndex.from_tuples(tuples, names=headers)
resampled = resampled[add_cols]
# Round the resampled columns
for col in resampled.columns:
if col[2] == 'profile':
resampled.loc[:, col] = resampled.loc[:, col].round(4)
else:
resampled.loc[:, col] = resampled.loc[:, col].round(0)
ds['60min'] = ds['60min'].combine_first(resampled)
"""
Explanation: Resample higher frequencies to 60'
Some data come in 15 or 30-minute intervals (e.g. German or British renewable generation), others in 60-minute intervals (e.g. load data from ENTSO-E and prices). We resample the 15 and 30-minute data to hourly resolution and append them to the 60-minute dataset.
The .resample('H').mean() method calculates the mean of the four quarter-hour values [:00, :15, :30, :45] of each hour, inserts it at :00 and drops the other three entries. Takes 15 seconds to run.
End of explanation
"""
data_cols = data_sets['60min'].columns.droplevel(['source', 'web', 'unit'])
for res_key, df in entso_e.items():
# Combine with TSO data
# # Copy entire 30min data from ENTSO-E if there is no data from TSO
if data_sets[res_key].empty:
data_sets[res_key] = df
else:
# Keep only region, variable, attribute in MultiIndex for comparison
# Compare columns from ENTSO-E against TSO's, keep which we don't have yet
cols = [col for col in df.columns if not col[:3] in data_cols]
add_cols = pd.MultiIndex.from_tuples(cols, names=headers)
data_sets[res_key] = data_sets[res_key].combine_first(df[add_cols])
# # Add the ENTSO-E markers (but only for the columns actually copied)
# add_cols = ['_'.join(col[:3]) for col in tuples]
# # Spread marker column out over a DataFrame for easier comparison
# # Filter out every second column, which contains the delimiter " | "
# # from the marker
# marker_table = (df['interpolated_values'].str.split(' | ', expand=True)
# .filter(regex='^\d*[02468]$', axis='columns'))
# # Replace cells with markers marking columns not copied with NaNs
# marker_table[~marker_table.isin(add_cols)] = np.nan
# for col_name, col in marker_table.iteritems():
# if col_name == 0:
# marker_entso_e = col
# else:
# marker_entso_e = glue_markers(marker_entso_e, col)
# # Glue ENTSO-E marker onto our old marker
# marker = data_sets[res_key]['interpolated_values']
# data_sets[res_key].loc[:, 'interpolated_values'] = glue_markers(
# marker, df['interpolated_values'].reindex(marker.index))
"""
Explanation: Fill columns not retrieved directly from TSO websites with ENTSO-E Transparency data
End of explanation
"""
info_cols = {'utc': 'utc_timestamp',
'cet': 'cet_cest_timestamp'}
for ds in [data_sets]: #, entso_e]:
for res_key, df in ds.items():
if df.empty:
continue
df.index.rename(info_cols['utc'], inplace=True)
df.insert(0, info_cols['cet'],
df.index.tz_localize('UTC').tz_convert('CET'))
"""
Explanation: Insert a column with Central European (Summer-)time
The index column of the data sets defines the start of the time period represented by each row of that data set in UTC time. We include an additional column for CE(S)T Central European (Summer-) Time, as this might help align the output data with other data sources.
End of explanation
"""
data_sets['15min'].to_pickle('final_15.pickle')
data_sets['30min'].to_pickle('final_30.pickle')
data_sets['60min'].to_pickle('final_60.pickle')
#entso_e['15min'].to_pickle('final_entso_e_15.pickle')
#entso_e['30min'].to_pickle('final_entso_e_30.pickle')
#entso_e['60min'].to_pickle('final_entso_e_60.pickle')
os.chdir(temp_path)
data_sets = {}
data_sets['15min'] = pd.read_pickle('final_15.pickle')
data_sets['30min'] = pd.read_pickle('final_30.pickle')
data_sets['60min'] = pd.read_pickle('final_60.pickle')
#entso_e = {}
#entso_e['15min'] = pd.read_pickle('final_entso_e_15.pickle')
#entso_e['30min'] = pd.read_pickle('final_entso_e_30.pickle')
#entso_e['60min'] = pd.read_pickle('final_entso_e_60.pickle')
combined = data_sets
"""
Explanation: Create a final savepoint
End of explanation
"""
col_info = pd.DataFrame()
df = combined['60min']
for level in df.columns.names:
col_info[level] = df.columns.get_level_values(level)
col_info
"""
Explanation: Show the column names contained in the final DataFrame in a table
End of explanation
"""
for res_key, df in combined.items():
# In order to make sure that the respective time period is covered in both
# UTC and CE(S)T, we set the start in CE(S)T, but the end in UTC
if start_from_user:
start_from_user = (pytz.timezone('Europe/Brussels')
.localize(datetime.combine(start_from_user, time()))
.astimezone(pytz.timezone('UTC'))
.replace(tzinfo=None))
if end_from_user:
end_from_user = (pytz.timezone('UTC')
.localize(datetime.combine(end_from_user, time()))
.replace(tzinfo=None)
# Appropriate offset to include the end of the period
+ timedelta(days=1, minutes=-int(res_key[:2])))
# Then cut off the data_set
data_sets[res_key] = df.loc[start_from_user:end_from_user, :]
"""
Explanation: Write data to disk
This section: Save as Data Package (data in CSV, metadata in JSON file). All files are saved in the directory of this notebook. Alternative file formats (SQL, XLSX) are also exported. Takes about 1 hour to run.
Limit time range
Cut off the data outside of [start_from_user:end_from_user]
End of explanation
"""
combined_singleindex = {}
combined_multiindex = {}
combined_stacked = {}
for res_key, df in combined.items():
if df.empty:
continue
# # Round floating point numbers to 2 digits
# for col_name, col in df.iteritems():
# if col_name[0] in info_cols.values():
# pass
# elif col_name[2] == 'profile':
# df[col_name] = col.round(4)
# else:
# df[col_name] = col.round(3)
# MultIndex
combined_multiindex[res_key + '_multiindex'] = df
# SingleIndex
df_singleindex = df.copy()
# use first 3 levels of multiindex to create singleindex
df_singleindex.columns = [
col_name[0] if col_name[0] in info_cols.values()
else '_'.join([level for level in col_name[0:3] if not level == ''])
for col_name in df.columns.values]
combined_singleindex[res_key + '_singleindex'] = df_singleindex
# Stacked
stacked = df.copy().drop(columns=info_cols['cet'], level=0)
stacked.columns = stacked.columns.droplevel(['source', 'web', 'unit'])
# Concatenate all columns below each other (="stack").
# df.transpose().stack() is faster than stacking all column levels
# separately
stacked = stacked.transpose().stack(dropna=True).to_frame(name='data')
combined_stacked[res_key + '_stacked'] = stacked
"""
Explanation: Different shapes
Data are provided in three different "shapes":
- SingleIndex (easy to read for humans, compatible with datapackage standard, small file size)
- File formats: CSV, SQLite
- MultiIndex (easy to read into GAMS, not compatible with datapackage standard, small file size)
- File formats: CSV, Excel
- Stacked (compatible with datapackage standard, large file size, many rows, too many for Excel)
- File format: CSV
The different shapes need to be created internally before they can be saved to files. Takes about 1 minute to run.
End of explanation
"""
os.chdir(out_path)
for res_key, df in combined_singleindex.items():
table = 'time_series_' + res_key
df = df.copy()
df.index = df.index.strftime('%Y-%m-%dT%H:%M:%SZ')
cet_col_name = info_cols['cet']
df[cet_col_name] = (df[cet_col_name].dt.strftime('%Y-%m-%dT%H:%M:%S%z'))
df.to_sql(table, sqlite3.connect('time_series.sqlite'),
if_exists='replace', index_label=info_cols['utc'])
"""
Explanation: Write to SQLite-database
This file format is required for the filtering function on the OPSD website. This takes ~3 minutes to complete.
End of explanation
"""
os.chdir(out_path)
writer = pd.ExcelWriter('time_series.xlsx')
writer.save()
for res_key, df in data_sets.items():
# Need to convert CE(S)T-timestamps to tz-naive, otherwise Excel converts
# them back to UTC
df.loc[:,(info_cols['cet'], '', '', '', '', '')].dt.tz_localize(None).to_excel(writer, res_key)
filename = 'tsos_' + res_key + '.csv'
df.to_csv(filename, float_format='%.4f', date_format='%Y-%m-%dT%H:%M:%SZ')
#for res_key, df in entso_e.items():
# df.loc[:,(info_cols['cet'], '', '', '', '', '')].dt.tz_localize(None).to_excel(writer, res_key+ ' ENTSO-E')
# filename = 'entso_e_' + res_key + '.csv'
# df.to_csv(filename, float_format='%.4f', date_format='%Y-%m-%dT%H:%M:%SZ')
"""
Explanation: Write to Excel
Writing the full tables to Excel takes an extremely long time. As a workaround, only the timestamp columns are exported. The rest of the data can then be inserted manually from the _multiindex.csv files.
End of explanation
"""
os.chdir(out_path)
# itertoools.chain() allows iterating over multiple dicts at once
for res_stacking_key, df in itertools.chain(
combined_singleindex.items(),
combined_multiindex.items(),
combined_stacked.items()):
df = df.copy()
# convert the format of the cet_cest-timestamp to ISO-8601
if not res_stacking_key.split('_')[1] == 'stacked':
df.iloc[:, 0] = df.iloc[:, 0].dt.strftime('%Y-%m-%dT%H:%M:%S%z') # https://frictionlessdata.io/specs/table-schema/#date
filename = 'time_series_' + res_stacking_key + '.csv'
df.to_csv(filename, float_format='%.4f',
date_format='%Y-%m-%dT%H:%M:%SZ')
"""
Explanation: Write to CSV
This takes about 10 minutes to complete.
End of explanation
"""
os.chdir(out_path)
make_json(combined, info_cols, version, changes, headers, areas,
start_from_user, end_from_user)
"""
Explanation: Create metadata
This section: create the metadata, both general and column-specific. All metadata will be stored as a JSON file. Takes 10s to run.
End of explanation
"""
os.chdir(out_path)
files = os.listdir(out_path)
# Create checksums.txt in the output directory
with open('checksums.txt', 'w') as f:
for file_name in files:
if file_name.split('.')[-1] in ['csv', 'sqlite', 'xlsx']:
file_hash = get_sha_hash(file_name)
f.write('{},{}\n'.format(file_name, file_hash))
# Copy the file to root directory from where it will be pushed to GitHub,
# leaving a copy in the version directory for reference
copyfile('checksums.txt', os.path.join(home_path, 'checksums.txt'))
"""
Explanation: Write checksums.txt
We publish SHA checksums for the output files on GitHub to allow verifying the integrity of the output files on the OPSD server.
End of explanation
"""
|
darkomen/TFG | ipython_notebooks/06_regulador_experto/.ipynb_checkpoints/ensayo3-checkpoint.ipynb | cc0-1.0 | #Importamos las librerías utilizadas
import numpy as np
import pandas as pd
import seaborn as sns
# Show the versions used for each library
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
# Open the CSV file with the sample data
datos = pd.read_csv('ensayo3.CSV')
%pylab inline
# Store the file columns we will work with in a list
columns = ['Diametro X','Diametro Y', 'RPM TRAC']
# Show a summary of the data obtained
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
"""
Explanation: Analysis of the data obtained
Use of IPython for the analysis and display of the data obtained during production. An expert regulator is implemented. The data analysed are from 12 August 2015.
The experiment data:
* Start time: 12:00
* End time: 12:30
* Filament extruded: 425 cm
* $T: 150ºC$
* $V_{min}$ of the puller: $1.5 mm/s$
* $V_{max}$ of the puller: $3.4 mm/s$
* The velocity increments in the expert-system rules differ:
* In cases 3 to 6 the velocity increment changes from +1 to +2.
End of explanation
"""
graf = datos.ix[:, "Diametro X"].plot(figsize=(16,10),ylim=(0.5,3))
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
"""
Explanation: We plot both diameters and the puller speed on the same graph
End of explanation
"""
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
"""
Explanation: With this third approach the data have been stabilised and the standard deviation reduced; however, the mean filament diameter and the puller speed have decreased as well.
As a next approach, we will modify the increments for the cases where the diameter is between $1.70 mm$ and $1.80 mm$ in the upward direction (cases 3 and 5); the downward direction will keep increments of +1.
It has also been detected that the puller's rotation axis is somewhat loose. It will be tightened for the next test.
Comparison of Diametro X against Diametro Y to see the filament ratio
End of explanation
"""
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
"""
Explanation: Data filtering
We assume samples with $d_x < 0.9$ or $d_y < 0.9$ to be sensor errors, so we filter them out of the samples taken.
End of explanation
"""
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
"""
Explanation: X/Y plot
End of explanation
"""
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
"""
Explanation: We analyse the ratio data
End of explanation
"""
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
"""
Explanation: Quality limits
We count the number of times the quality limits are exceeded.
$Th^+ = 1.85$ and $Th^- = 1.65$
End of explanation
"""
|
ecervera/Baxter-Vision | 03 Compute Items Dominant Colors.ipynb | mit | import json
from utils import load_items
with open('parameters.json', 'r') as infile:
params = json.load(infile)
RESIZE_X = params['resize']['x']
RESIZE_Y = params['resize']['y']
ITEM_FOLDER = params['item_folder']
items = load_items(ITEM_FOLDER)
"""
Explanation: <a id="top"></a>
Compute Items Features
First:
* Load Parameters and Items
Then choose one operation:
* Compute and Save
* Statistics
* Plot Item Files
* Computing Test
Load Parameters and Items<a id="load"></a>
End of explanation
"""
import cv2, glob
from utils import imread_rgb, imread_gray, compute_colors
def worker(item):
folder = ITEM_FOLDER + '/' + item + '/'
files = glob.glob(folder + '*.png')
for filename in files:
image_RGB = imread_rgb(filename)
if not image_RGB is None:
image_RGB = cv2.resize(image_RGB,(RESIZE_X,RESIZE_Y))
file_mask = filename[:-4] + '_mask.pgm'
mask = imread_gray(file_mask)
hist, cc = compute_colors(image_RGB, mask)
dominant_colors = {'hist': hist,
'cluster_centers': map(lambda t: list(t), cc)}
with open(filename[:-4] + '_dc.json', 'w') as outfile:
json.dump(dominant_colors, outfile)
%%time
from multiprocessing import Pool
print('Computing Dominant colors of images')
print('* resized to %d x %d' % (RESIZE_X,RESIZE_Y))
print('* LAB space')
print('* KMeans')
#print('* MiniBatchKMeans')
pool_size = 6
pool = Pool(pool_size)
result = []
for item in items:
result.append( pool.apply_async(worker, (item,)) )
pool.close()
pool.join()
for r in result:
r.get()
"""
Explanation: Compute and Save<a id="compute"></a>
Issues:
* remove black color from transparent items, e.g.:
* poland_spring
* ...
End of explanation
"""
import glob
from utils import ...
item_view = []
VVV = []
for item in items:
folder = ITEM_FOLDER + '/' + item + '/'
files = glob.glob(folder + '*_XXX.YYY')
for filename in files:
...
item_view.append(filename)
VVV.append(len(des))
from matplotlib import pyplot as plt
%matplotlib inline
plt.hist(VVV,bins=60);
[(ns, str(iv.split('/')[-1][:-9])) for ns, iv in sorted(zip(VVV,item_view), reverse=True) if ns>2000]
[(ns, str(iv.split('/')[-1][:-9])) for ns, iv in sorted(zip(VVV,item_view), reverse=True) if ns<50]
"""
Explanation: Statistics???<a id="statistics"></a>
End of explanation
"""
from matplotlib import pyplot as plt
%matplotlib inline
import cv2, numpy as np
from ipywidgets import interact
from utils import imread_rgb, plot_colors
def load_and_plot(item,view):
try:
prefix = ITEM_FOLDER + '/' + item + '/' + item + '_' + view
filename = prefix + '.png'
image_RGB = imread_rgb(filename)
if not image_RGB is None:
with open(filename[:-4] + '_dc.json', 'r') as infile:
dc = json.load(infile)
hist = dc['hist']
cc = dc['cluster_centers']
bar = plot_colors(hist, cc)
plt.subplot(121); plt.imshow(image_RGB); plt.axis('off');
plt.subplot(122); plt.imshow(bar); plt.axis('off');
except (IOError, OSError):
print('File not found')
views = ['top_01','top-side_01','top-side_02','bottom_01','bottom-side_01','bottom-side_02']
interact(load_and_plot,item=items,view=views);
"""
Explanation: Plot File<a id="plot"></a>
End of explanation
"""
for item in items:
for view in views:
print(item + '_' + view)
load_and_plot(item,view)
plt.show()
"""
Explanation: Plot All Items
End of explanation
"""
from matplotlib import pyplot as plt
%matplotlib inline
import cv2, numpy as np
from ipywidgets import interact
from utils import imread_rgb, imread_gray, compute_colors, plot_colors
def compute_and_plot(item,view):
prefix = ITEM_FOLDER + '/' + item + '/' + item + '_' + view
filename = prefix + '.png'
image_RGB = imread_rgb(filename)
if not image_RGB is None:
image_RGB = cv2.resize(image_RGB,(RESIZE_X,RESIZE_Y))
filename = prefix + '_mask.pgm'
mask = imread_gray(filename)
plt.subplot(121); plt.imshow(image_RGB); plt.axis('off');
hist, cc = compute_colors(image_RGB, mask)
bar = plot_colors(hist, cc)
plt.subplot(122); plt.imshow(bar); plt.axis('off'); plt.title(item + '_' + view);
views = ['top_01','top-side_01','top-side_02','bottom_01','bottom-side_01','bottom-side_02']
interact(compute_and_plot,item=items,view=views);
"""
Explanation: Computing Test<a id="test"></a>
End of explanation
"""
for item in items:
for view in views:
print(item + '_' + view)
compute_and_plot(item,view)
plt.show()
"""
Explanation: Compute and Plot All Items
End of explanation
"""
|
monsta-hd/ml-mnist | experiments/cross_validations.ipynb | mit | import numpy as np
import pandas as pd
import seaborn as sns
sns.set()
import matplotlib.pyplot as plt
import env
from ml_mnist.knn import KNNClassifier
from ml_mnist.gp import GPClassifier
from ml_mnist.logreg import LogisticRegression
from ml_mnist.nn import NNClassifier, RBM
from ml_mnist.nn.layers import FullyConnected, Activation
from ml_mnist.nn.activations import leaky_relu
from ml_mnist.decomposition import PCA
from ml_mnist.preprocessing import StandardScaler
from ml_mnist.feature_selection import VarianceThreshold
from ml_mnist.model_selection import TrainTestSplitter, GridSearchCV
from ml_mnist.augmentation import RandomAugmentator
from ml_mnist.metrics import (accuracy_score,
zero_one_loss,
confusion_matrix,
plot_confusion_matrix)
from ml_mnist.utils import (one_hot, unhot,
Stopwatch, RNG,
plot_greyscale_image, plot_rbm_filters)
from ml_mnist.utils.dataset import load_mnist
from ml_mnist.utils.read_write import load_model
%load_ext autoreload
%autoreload 2
%matplotlib inline
"""
Explanation: imports
End of explanation
"""
X, y = load_mnist(mode='train', path='data/')
X.shape
plot_greyscale_image(X[0], title="Label is {0}".format(y[0]));
plot_greyscale_image(X[42], title="Label is {0}".format(y[42]));
"""
Explanation: load dataset
End of explanation
"""
def load_small(n_samples=5000):
X, y = load_mnist(mode='train', path='data/')
X_scaled = X / 255.
X_scaled = VarianceThreshold(0.1).fit_transform(X_scaled)
X_scaled = StandardScaler(copy=False).fit_transform(X_scaled)
tts = TrainTestSplitter(shuffle=True, random_seed=1337)
indices, _ = tts.split(y, train_ratio=n_samples/60000., stratify=True)
return X_scaled[indices], y[indices] # 5000 -> 4994 training samples
"""
Explanation: k-NN
load small subset of dataset
End of explanation
"""
X_scaled = X / 255.
print X_scaled.min(), X_scaled.max()
print X_scaled.shape
sns.heatmap(X_scaled[100:124, 100:124]); # lots of zeros ofc
"""
Explanation: Approach #1: remove (almost) constant features + standardize + (non-kernelized) k-NN
Scale data to [0, 1] range
End of explanation
"""
X_scaled = VarianceThreshold(0.1).fit_transform(X_scaled)
print X_scaled.min(), X_scaled.max()
print X_scaled.shape
"""
Explanation: Remove features with low variance (784 -> 444)
End of explanation
"""
X_scaled = StandardScaler(copy=False).fit_transform(X_scaled)
print X_scaled.min(), X_scaled.max()
print X_scaled.shape
sns.heatmap(X_scaled[100:124, 100:124], cmap='RdYlGn'); # more interesting
"""
Explanation: Now perform mean-std standardization
End of explanation
"""
knn = KNNClassifier(algorithm='brute')
knn
with Stopwatch(verbose=True) as s:
knn.fit(X_scaled[:1000], y[:1000])
with Stopwatch(True) as s:
y_pred = knn.predict(X_scaled[1000:1100])
print zero_one_loss(y_pred, y[1000:1100])
knn2 = KNNClassifier(algorithm='kd_tree', leaf_size=10)
knn2
with Stopwatch(True) as s:
knn2.fit(X_scaled[:1000], y[:1000])
with Stopwatch(True) as s:
y_pred = knn2.predict(X_scaled[1000:1100])
print zero_one_loss(y_pred, y[1000:1100])
"""
Explanation: Some benchmarks
As you can see, the brute-force algorithm needs no training time at all, but prediction takes longer compared to the k-d tree. The difference becomes bigger as the number of training samples grows, and it should be even bigger once we use far fewer features after PCA (currently 444).
End of explanation
"""
param_grid = ({'weights': ['uniform', 'distance'], 'k': [2, 3]}, {'p': [1., np.inf], 'k': [2]})
grid_cv1 = GridSearchCV(model=KNNClassifier(algorithm='kd_tree', leaf_size=1), param_grid=param_grid,
train_test_splitter_params=dict(shuffle=True, random_seed=1337), n_splits=4,
refit=True, save_models=False, verbose=True)
grid_cv1.fit(X_scaled[:1000], y[:1000]); # rebuilding tree on each iteration
# Training KNNClassifier on 1000 samples x 444 features.
# 4-fold CV for each of 6 params combinations == 24 fits ...
# iter: 1/24 +--- elapsed: 1.026 sec ...
# iter: 2/24 ++-- elapsed: 2.022 sec ...
# iter: 3/24 +++- elapsed: 3.010 sec ...
# iter: 4/24 ++++ elapsed: 4.012 sec - mean acc.: 0.7940 +/- 2 * 0.038
# iter: 5/24 +--- elapsed: 5.017 sec - best acc.: 0.7940 at {'k': 2, 'weights': 'uniform'}
# iter: 6/24 ++-- elapsed: 6.017 sec - best acc.: 0.7940 at {'k': 2, 'weights': 'uniform'}
# iter: 7/24 +++- elapsed: 7.042 sec - best acc.: 0.7940 at {'k': 2, 'weights': 'uniform'}
# iter: 8/24 ++++ elapsed: 8.054 sec - mean acc.: 0.8070 +/- 2 * 0.029
# iter: 9/24 +--- elapsed: 9.093 sec - best acc.: 0.8070 at {'k': 2, 'weights': 'distance'}
# iter: 10/24 ++-- elapsed: 10.105 sec - best acc.: 0.8070 at {'k': 2, 'weights': 'distance'}
# iter: 11/24 +++- elapsed: 11.138 sec - best acc.: 0.8070 at {'k': 2, 'weights': 'distance'}
# iter: 12/24 ++++ elapsed: 12.157 sec - mean acc.: 0.8209 +/- 2 * 0.024
# iter: 13/24 +--- elapsed: 13.198 sec - best acc.: 0.8209 at {'k': 3, 'weights': 'uniform'}
# iter: 14/24 ++-- elapsed: 14.308 sec - best acc.: 0.8209 at {'k': 3, 'weights': 'uniform'}
# iter: 15/24 +++- elapsed: 15.596 sec - best acc.: 0.8209 at {'k': 3, 'weights': 'uniform'}
# iter: 16/24 ++++ elapsed: 16.607 sec - mean acc.: 0.7811 +/- 2 * 0.029
# iter: 17/24 +--- elapsed: 17.706 sec - best acc.: 0.8209 at {'k': 3, 'weights': 'uniform'}
# iter: 18/24 ++-- elapsed: 18.770 sec - best acc.: 0.8209 at {'k': 3, 'weights': 'uniform'}
# iter: 19/24 +++- elapsed: 19.840 sec - best acc.: 0.8209 at {'k': 3, 'weights': 'uniform'}
# iter: 20/24 ++++ elapsed: 20.889 sec - mean acc.: 0.8140 +/- 2 * 0.031
# iter: 21/24 +--- elapsed: 21.866 sec - best acc.: 0.8209 at {'k': 3, 'weights': 'uniform'}
# iter: 22/24 ++-- elapsed: 22.843 sec - best acc.: 0.8209 at {'k': 3, 'weights': 'uniform'}
# iter: 23/24 +++- elapsed: 23.811 sec - best acc.: 0.8209 at {'k': 3, 'weights': 'uniform'}
# iter: 24/24 ++++ elapsed: 24.766 sec - mean acc.: 0.4880 +/- 2 * 0.018
"""
Explanation: GridSearchCV (uses stratified K-Fold CV)
This class will be used for convenient hyper-parameter grid search for simple models. Its design, like many others here, is inspired by that of sklearn, yet it has some extensions (such as model saving, which is supported for all models here, and the possibility to specify the order of parameter exploration).
One more feature is the refit parameter, which controls the order in which parameters are explored.
If set to True, then for each combination of parameters we refit our model on each new train/test split, so that we get the mean accuracy score for a given set of parameters as soon as possible. This makes sense for ML algorithms (typically parametric) with an explicit training procedure.
If set to False, then for each possible split we fit our model once, and after that we evaluate this model on all possible combinations of parameters. This makes sense, and yields results significantly faster, for models such as k-NN (typically non-parametric).
Below is a small demo of the output for refit=True:
End of explanation
"""
grid_cv2 = GridSearchCV(model=KNNClassifier(algorithm='kd_tree', leaf_size=1), param_grid=param_grid,
train_test_splitter_params=dict(shuffle=True, random_seed=1337), n_splits=4,
refit=False, save_models=False, verbose=True)
grid_cv2.fit(X_scaled[:1000], y[:1000]); # building tree only on each 6-th iteration
# Training KNNClassifier on 1000 samples x 444 features.
# 4-fold CV for each of 6 params combinations == 24 fits ...
# iter: 1/24 +--- elapsed: 1.019 sec - best acc.: 0.8110 [1/4 splits] at {'k': 2, 'weights': 'uniform'}
# iter: 2/24 +--- elapsed: 1.834 sec - best acc.: 0.8228 [1/4 splits] at {'k': 2, 'weights': 'distance'}
# iter: 3/24 +--- elapsed: 2.645 sec - best acc.: 0.8386 [1/4 splits] at {'k': 3, 'weights': 'uniform'}
# iter: 4/24 +--- elapsed: 3.448 sec - best acc.: 0.8386 [1/4 splits] at {'k': 3, 'weights': 'uniform'}
# iter: 5/24 +--- elapsed: 4.277 sec - best acc.: 0.8386 [1/4 splits] at {'k': 3, 'weights': 'uniform'}
# iter: 6/24 +--- elapsed: 5.058 sec - best acc.: 0.8386 [1/4 splits] at {'k': 3, 'weights': 'uniform'}
# iter: 7/24 ++-- elapsed: 6.073 sec - best acc.: 0.8055 [2/4 splits] at {'k': 2, 'weights': 'uniform'}
# iter: 8/24 ++-- elapsed: 6.878 sec - best acc.: 0.8174 [2/4 splits] at {'k': 2, 'weights': 'distance'}
# iter: 9/24 ++-- elapsed: 7.672 sec - best acc.: 0.8353 [2/4 splits] at {'k': 3, 'weights': 'uniform'}
# iter: 10/24 ++-- elapsed: 8.475 sec - best acc.: 0.8353 [2/4 splits] at {'k': 3, 'weights': 'uniform'}
# iter: 11/24 ++-- elapsed: 9.336 sec - best acc.: 0.8353 [2/4 splits] at {'k': 3, 'weights': 'uniform'}
# iter: 12/24 ++-- elapsed: 10.125 sec - best acc.: 0.8353 [2/4 splits] at {'k': 3, 'weights': 'uniform'}
# iter: 13/24 +++- elapsed: 11.311 sec - best acc.: 0.7806 [3/4 splits] at {'k': 2, 'weights': 'uniform'}
# iter: 14/24 +++- elapsed: 12.127 sec - best acc.: 0.7980 [3/4 splits] at {'k': 2, 'weights': 'distance'}
# iter: 15/24 +++- elapsed: 12.918 sec - best acc.: 0.8166 [3/4 splits] at {'k': 3, 'weights': 'uniform'}
# iter: 16/24 +++- elapsed: 13.722 sec - best acc.: 0.8166 [3/4 splits] at {'k': 3, 'weights': 'uniform'}
# iter: 17/24 +++- elapsed: 14.576 sec - best acc.: 0.8166 [3/4 splits] at {'k': 3, 'weights': 'uniform'}
# iter: 18/24 +++- elapsed: 15.538 sec - best acc.: 0.8166 [3/4 splits] at {'k': 3, 'weights': 'uniform'}
# iter: 19/24 ++++ elapsed: 16.519 sec - best acc.: 0.7940 +/- 2 * 0.038 at {'k': 2, 'weights': 'uniform'}
# iter: 20/24 ++++ elapsed: 17.322 sec - best acc.: 0.8070 +/- 2 * 0.029 at {'k': 2, 'weights': 'distance'}
# iter: 21/24 ++++ elapsed: 18.106 sec - best acc.: 0.8209 +/- 2 * 0.024 at {'k': 3, 'weights': 'uniform'}
# iter: 22/24 ++++ elapsed: 19.095 sec - best acc.: 0.8209 +/- 2 * 0.024 at {'k': 3, 'weights': 'uniform'}
# iter: 23/24 ++++ elapsed: 19.933 sec - best acc.: 0.8209 +/- 2 * 0.024 at {'k': 3, 'weights': 'uniform'}
# iter: 24/24 ++++ elapsed: 20.688 sec - best acc.: 0.8209 +/- 2 * 0.024 at {'k': 3, 'weights': 'uniform'}
"""
Explanation: and for refit=False (the difference is not big here because many features and only a few parameter combinations are used):
End of explanation
"""
grid_cv2.best_model_
"""
Explanation: The best model, as well as the other "best" attributes, is available:
End of explanation
"""
df = grid_cv2.to_df()
df.to_excel('test.xlsx')
df
"""
Explanation: Finally, all results can be converted to a pandas.DataFrame and stored to Excel or any other format. For more details, see the docstrings in the code.
End of explanation
"""
param_grid = {'weights': ['uniform', 'distance'],
'k': range(2, 31),
'p': [1., 2., 3., np.inf]}
param_order = ['k', 'weights', 'p']
grid_cv_knn_1 = GridSearchCV(model=KNNClassifier(algorithm='kd_tree', leaf_size=10),
param_grid=param_grid,
param_order=param_order,
train_test_splitter_params=dict(shuffle=True, random_seed=1337),
n_splits=5,
refit=False,
save_models=True,
dirpath='tmp/',
save_params=dict(
params_mask=dict(kd_tree_=False), # do not save tree
json_params=dict(indent=4)),
verbose=True)
[params for params in grid_cv_knn_1.gen_params()][:10]
grid_cv_knn_1.number_of_combinations()
grid_cv_knn_1.fit(X_knn_1, y_knn_1);
# Training KNNClassifier on 4994 samples x 444 features.
# 5-fold CV for each of 232 params combinations == 1160 fits ...
# iter: 1/1160 +---- elapsed: 34.320 sec - best acc.: 0.9084 [1/5 splits] at {'p': 1.0, 'k': 2, 'weights': 'uniform'}
# iter: 2/1160 +---- elapsed: 49.252 sec - best acc.: 0.9203 [1/5 splits] at {'p': 1.0, 'k': 2, 'weights': 'distance'}
# iter: 3/1160 +---- elapsed: 63.681 sec - best acc.: 0.9203 [1/5 splits] at {'p': 1.0, 'k': 2, 'weights': 'distance'}
# ...
# iter: 925/1160 ++++- elapsed: 20728.7 sec - best acc.: 0.9217 [4/5 splits] at {'p': 1.0, 'k': 3, 'weights': 'uniform'}
# iter: 926/1160 ++++- elapsed: 20780.0 sec - best acc.: 0.9217 [4/5 splits] at {'p': 1.0, 'k': 3, 'weights': 'uniform'}
# iter: 927/1160 ++++- elapsed: 20794.5 sec - best acc.: 0.9217 [4/5 splits] at {'p': 1.0, 'k': 3, 'weights': 'uniform'}
# iter: 928/1160 ++++- elapsed: 20809.0 sec - best acc.: 0.9217 [4/5 splits] at {'p': 1.0, 'k': 3, 'weights': 'uniform'}
# iter: 929/1160 +++++ elapsed: 20843.3 sec - best acc.: 0.9091 +/- 2 * 0.007 at {'p': 1.0, 'k': 2, 'weights': 'uniform'}
# iter: 930/1160 +++++ elapsed: 20858.1 sec - best acc.: 0.9195 +/- 2 * 0.003 at {'p': 1.0, 'k': 2, 'weights': 'distance'}
# iter: 931/1160 +++++ elapsed: 20872.5 sec - best acc.: 0.9195 +/- 2 * 0.003 at {'p': 1.0, 'k': 2, 'weights': 'distance'}
# ...
# iter: 1158/1160 +++++ elapsed: 25924.2 sec - best acc.: 0.9209 +/- 2 * 0.004 at {'p': 1.0, 'k': 3, 'weights': 'uniform'}
# iter: 1159/1160 +++++ elapsed: 25939.9 sec - best acc.: 0.9209 +/- 2 * 0.004 at {'p': 1.0, 'k': 3, 'weights': 'uniform'}
# iter: 1160/1160 +++++ elapsed: 25955.6 sec - best acc.: 0.9209 +/- 2 * 0.004 at {'p': 1.0, 'k': 3, 'weights': 'uniform'}
df = grid_cv_knn_1.to_df()
df.to_excel('knn_1_full.xlsx')
df.sort_values(by='mean_score', ascending=False).head(10).to_excel('knn_1_best.xlsx')
"""
Explanation: 5-Fold CV on 5k images
load data
End of explanation
"""
param_grid_0 = [{'weights': ['uniform'], 'k': range(2, 12 + 1)},
{'weights': ['distance'], 'k': (2, 3, 4)}]
param_grid = []
for d in param_grid_0:
d1 = d.copy()
d1.update({'kernel': ['rbf'],
'kernel_params': [dict(gamma=gamma) for gamma in np.logspace(-7, 2, 10)]})
param_grid.append(d1)
d2 = d.copy()
d2.update({'kernel': ['sigmoid'],
'kernel_params': [dict(gamma=gamma) for gamma in (1e-4, 1e-2, 1.)]})
param_grid.append(d2)
d3 = d.copy()
d3.update({'kernel': ['poly'],
'kernel_params': [dict(degree=degree) for degree in (2, 3, 4)]})
param_grid.append(d3)
param_order = [['kernel_params', 'k']] * len(param_grid)
grid_cv_knn_2 = GridSearchCV(model=KNNClassifier(algorithm='brute'),
param_grid=param_grid,
param_order=param_order,
train_test_splitter_params=dict(shuffle=True, random_seed=1337),
n_splits=3,
refit=True,
save_models=True,
dirpath='tmp/',
save_params=dict(json_params=dict(indent=4)),
verbose=True)
[params for params in grid_cv_knn_2.gen_params()][:3]
grid_cv_knn_2.number_of_combinations()
X_knn_2, y_knn_2 = load_small(2500)
grid_cv_knn_2.fit(X_knn_2, y_knn_2)
# Training KNNClassifier on 2494 samples x 444 features.
# 3-fold CV for each of 224 params combinations == 672 fits ...
# iter: 1/672 +-- elapsed: 99.099 sec ...
# iter: 2/672 ++- elapsed: 197.839 sec ...
# iter: 3/672 +++ elapsed: 294.787 sec - mean acc.: 0.8693 +/- 2 * 0.009
# iter: 4/672 +-- elapsed: 390.949 sec - best acc.: 0.8693 at {'kernel_params': {'gamma': 9.9999999999999995e-08}, 'k': 2, 'weights': 'uniform', 'kernel': 'rbf'}
# iter: 5/672 ++- elapsed: 487.090 sec - best acc.: 0.8693 at {'kernel_params': {'gamma': 9.9999999999999995e-08}, 'k': 2, 'weights': 'uniform', 'kernel': 'rbf'}
# ...
# iter: 668/672 ++- elapsed: 56102.7 sec - best acc.: 0.8889 at {'kernel_params': {'gamma': 9.9999999999999995e-08}, 'k': 2, 'weights': 'distance', 'kernel': 'rbf'}
# iter: 669/672 +++ elapsed: 56140.9 sec - mean acc.: 0.3946 +/- 2 * 0.015
# iter: 670/672 +-- elapsed: 56179.3 sec - best acc.: 0.8889 at {'kernel_params': {'gamma': 9.9999999999999995e-08}, 'k': 2, 'weights': 'distance', 'kernel': 'rbf'}
# iter: 671/672 ++- elapsed: 56217.2 sec - best acc.: 0.8889 at {'kernel_params': {'gamma': 9.9999999999999995e-08}, 'k': 2, 'weights': 'distance', 'kernel': 'rbf'}
# iter: 672/672 +++ elapsed: 56253.5 sec - mean acc.: 0.3797 +/- 2 * 0.020
df = grid_cv_knn_2.to_df()
df.to_excel('knn_2_full.xlsx')
df.sort_values(by='mean_score', ascending=False).head(25).to_excel('knn_2_best.xlsx')
"""
Explanation: Approach #2: remove (almost) constant features + standardize + kernelized k-NN
3-Fold CV on 2.5k images
Unfortunately, kd-trees in SciPy are only supported for the l_p metric, not for a custom distance function, so k-NN predictions must be computed in brute-force mode
End of explanation
"""
X, y = load_mnist(mode='train', path='data/')
X /= 255.
with Stopwatch(verbose=True) as s:
pca = PCA().fit(X)
pca.save('models/pca_full.json') # ~13 Mb
"""
Explanation: Approach #3, #4: Same as above but with PCA (unwhitened/whitened)
interesting observation
$$
\mathbf{x}_{PCA}=W^T(\mathbf{x}-\pmb{\mu})=\left(\sqrt{n}W^TS^{-1}\right)\frac{1}{\sqrt{n}}S(\mathbf{x}-\pmb{\mu})=
\left[\frac{1}{\sqrt{n}}S\mathbf{x}\right]_{PCA\;whitened},
$$
where $S$ is the matrix of singular values of $X$, and even more interesting:
$$
\mathbf{x}_{PCA}=W^T(\mathbf{x}-\pmb{\mu})=
\frac{1}{\sqrt{n}}S\left(\sqrt{n}S^{-1}W^T\right)(\mathbf{x}-\pmb{\mu})=
\frac{1}{\sqrt{n}}S\cdot\mathbf{x}_{PCA\;whitened},
$$
therefore, computing distances between vectors after applying PCA without whitening is the same as applying PCA with whitening and then computing distances between vectors weighted by the respective singular values!
(I wanted to try it as a separate approach, but it is == approach #3)
compute & apply PCA for all training set
End of explanation
"""
pca_full = load_model('models/pca_full.json'); pca_full
sum(pca_full.explained_variance_ratio_[:154]) # <- to explain 95% of the variance we need 154 components
"""
Explanation: load PCA model
End of explanation
"""
def load_small2(n_samples):
X, y = load_mnist(mode='train', path='data/')
X_scaled = X / 255. # only divide by 255
tts = TrainTestSplitter(shuffle=True, random_seed=1337)
indices, _ = tts.split(y, train_ratio=n_samples/60000., stratify=True)
return X_scaled[indices], y[indices]
X_sm, y_sm = load_small2(1000) # approx
"""
Explanation: load small stratified subset of data
End of explanation
"""
param_grid = ({'weights': ['distance'],
'k': [2, 3, 4],
'p': [1, 2]
},
{'weights': ['uniform'],
'k': [2, 3, 4, 6, 9, 12, 15],
'p': [1, 2]
})
grid_search_params = dict(model=KNNClassifier(algorithm='kd_tree'),
param_grid=param_grid,
# param_order=param_order,
train_test_splitter_params=dict(shuffle=True, random_seed=1337),
n_splits=3,
refit=False,
# save_models=True,
# dirpath='tmp/',
# save_params=dict(json_params=dict(indent=4)),
verbose=True)
for n_components in xrange(5, 151, 5):
print "[PCA n_components = {0}]\n\n".format(n_components)
X_current = pca_full.set_params(n_components=n_components, whiten=False).transform(X_sm)
grid_cv_knn_pca_1 = GridSearchCV(**grid_search_params).fit(X_current, y_sm)
    grid_cv_knn_pca_1\
.to_df()\
.sort_values(by='mean_score', ascending=False)\
.to_excel('cv_results/knn_3_pca_{0}_{1:.4f}.xlsx'.format(n_components, grid_cv_knn_pca_1.best_score_))
print "\n\n"
# [PCA n_components = 5]
# Training KNNClassifier on 4994 samples x 5 features.
# 3-fold CV for each of 20 params combinations == 60 fits ...
# iter: 1/60 +-- elapsed: 0.673 sec - best acc.: 0.6936 [1/3 splits] at {'p': 1, 'k': 2, 'weights': 'distance'}
# iter: 2/60 +-- elapsed: 1.340 sec - best acc.: 0.6990 [1/3 splits] at {'p': 2, 'k': 2, 'weights': 'distance'}
# iter: 3/60 +-- elapsed: 1.998 sec - best acc.: 0.6990 [1/3 splits] at {'p': 2, 'k': 2, 'weights': 'distance'}
# ...
# iter: 58/60 +++ elapsed: 41.769 sec - best acc.: 0.7369 +/- 2 * 0.003 at {'p': 2, 'k': 12, 'weights': 'uniform'}
# iter: 59/60 +++ elapsed: 42.429 sec - best acc.: 0.7369 +/- 2 * 0.003 at {'p': 1, 'k': 15, 'weights': 'uniform'}
# iter: 60/60 +++ elapsed: 43.073 sec - best acc.: 0.7369 +/- 2 * 0.003 at {'p': 1, 'k': 15, 'weights': 'uniform'}
# ...
# ...
# ...
# iter: 58/60 +++ elapsed: 133.416 sec - best acc.: 0.9381 +/- 2 * 0.004 at {'p': 2, 'k': 2, 'weights': 'distance'}
# iter: 59/60 +++ elapsed: 136.472 sec - best acc.: 0.9381 +/- 2 * 0.004 at {'p': 2, 'k': 2, 'weights': 'distance'}
# iter: 60/60 +++ elapsed: 138.300 sec - best acc.: 0.9381 +/- 2 * 0.004 at {'p': 2, 'k': 2, 'weights': 'distance'}
# [PCA n_components = 115]
# Training KNNClassifier on 4994 samples x 115 features.
# 3-fold CV for each of 20 params combinations == 60 fits ...
# iter: 1/60 +-- elapsed: 3.008 sec - best acc.: 0.9263 [1/3 splits] at {'p': 1, 'k': 2, 'weights': 'distance'}
# iter: 2/60 +-- elapsed: 4.943 sec - best acc.: 0.9394 [1/3 splits] at {'p': 2, 'k': 2, 'weights': 'distance'}
"""
Explanation: 5k images 3-Fold CV for non-kernelized k-NN + number of PCA components
End of explanation
"""
param_grid = ({'weights': ['distance'],
'k': [2, 3, 4],
'p': [1, 2]
},
{'weights': ['uniform'],
'k': [2, 3, 4, 6, 9, 12, 15],
'p': [1, 2]
})
grid_search_params = dict(model=KNNClassifier(algorithm='kd_tree'),
param_grid=param_grid,
# param_order=param_order,
train_test_splitter_params=dict(shuffle=True, random_seed=1337),
n_splits=3,
refit=False,
# save_models=True,
# dirpath='tmp/',
# save_params=dict(json_params=dict(indent=4)),
verbose=False)
for n_components in xrange(10, 151, 5):
print "[PCA n_components = {0}]".format(n_components)
X_current = pca_full.set_params(n_components=n_components, whiten=True).transform(X_sm)
grid_cv_knn_pca_1 = GridSearchCV(**grid_search_params).fit(X_current, y_sm)
    grid_cv_knn_pca_1\
.to_df()\
.sort_values(by='mean_score', ascending=False)\
.to_excel('cv_results/knn_3_pca_whiten_{0}_{1:.4f}.xlsx'.format(n_components, grid_cv_knn_pca_1.best_score_))
# [PCA n_components = 10]
# [PCA n_components = 15]
# [PCA n_components = 20]
# [PCA n_components = 25]
# [PCA n_components = 30]
# [PCA n_components = 35]
# [PCA n_components = 40]
# [PCA n_components = 45]
# [PCA n_components = 50]
# [PCA n_components = 55]
# [PCA n_components = 60]
# [PCA n_components = 65]
# [PCA n_components = 70]
# [PCA n_components = 75]
"""
Explanation: ... same with whitening
End of explanation
"""
param_grid = ({'weights': ['distance'],
'k': [2, 3, 4],
'kernel': ['rbf'],
'kernel_params': [dict(gamma=x) for x in [1e-1, 1e-2, 1e-4, 1e-6]]
},
{'weights': ['uniform'],
'k': [2, 3, 4, 6, 9, 12],
'kernel': ['rbf'],
'kernel_params': [dict(gamma=x) for x in [1e-1, 1e-2, 1e-4, 1e-6]]
},
{'weights': ['distance'],
'k': [2, 3, 4],
'kernel': ['poly'],
'kernel_params': [dict(degree=x) for x in [2, 3, 4]]
},
{'weights': ['uniform'],
'k': [2, 3, 4, 6],
'kernel': ['poly'],
'kernel_params': [dict(degree=x) for x in [2, 3, 4]]
})
grid_search_params = dict(model=KNNClassifier(algorithm='brute'),
param_grid=param_grid,
# param_order=param_order,
train_test_splitter_params=dict(shuffle=True, random_seed=1337),
n_splits=3,
refit=True,
# save_models=True,
# dirpath='tmp/',
# save_params=dict(json_params=dict(indent=4)),
verbose=True)
for n_components in xrange(5, 151, 5):
print "[PCA n_components = {0}]\n\n".format(n_components)
X_current = pca_full.set_params(n_components=n_components, whiten=False).transform(X_sm)
grid_cv_knn_pca_2 = GridSearchCV(**grid_search_params).fit(X_current, y_sm)
    grid_cv_knn_pca_2\
.to_df()\
.sort_values(by='mean_score', ascending=False)\
.to_excel('cv_results/knn_4_pca_krnl_{0}_{1:.4f}.xlsx'.format(n_components, grid_cv_knn_pca_2.best_score_))
print "\n"
# [PCA n_components = 5]
# Training KNNClassifier on 996 samples x 5 features.
# 3-fold CV for each of 57 params combinations == 171 fits ...
# iter: 1/171 +-- elapsed: 18.874 sec ...
# iter: 2/171 ++- elapsed: 39.243 sec ...
# iter: 3/171 +++ elapsed: 58.217 sec - mean acc.: 0.6879 +/- 2 * 0.029
# ...
# iter: 169/171 +-- elapsed: 2299.67 sec - best acc.: 0.7149 at {'kernel': 'rbf', 'k': 6, 'weights': 'uniform', 'kernel_params': {'gamma': 0.1}}
# iter: 170/171 ++- elapsed: 2306.23 sec - best acc.: 0.7149 at {'kernel': 'rbf', 'k': 6, 'weights': 'uniform', 'kernel_params': {'gamma': 0.1}}
# iter: 171/171 +++ elapsed: 2313.28 sec - mean acc.: 0.5814 +/- 2 * 0.011
# ...
# ...
# ...
# iter: 169/171 +-- elapsed: 1869.40 sec - best acc.: 0.8704 at {'kernel': 'rbf', 'k': 2, 'weights': 'distance', 'kernel_params': {'gamma': 0.1}}
# iter: 170/171 ++- elapsed: 1876.34 sec - best acc.: 0.8704 at {'kernel': 'rbf', 'k': 2, 'weights': 'distance', 'kernel_params': {'gamma': 0.1}}
# iter: 171/171 +++ elapsed: 1882.34 sec - mean acc.: 0.3715 +/- 2 * 0.043
# [PCA n_components = 95]
# Training KNNClassifier on 996 samples x 95 features.
# 3-fold CV for each of 57 params combinations == 171 fits ...
# iter: 1/171 +-- elapsed: 15.785 sec ...
# iter: 2/171 ++- elapsed: 31.366 sec ...
# iter: 3/171 +++ elapsed: 46.182 sec - mean acc.: 0.8674 +/- 2 * 0.024
# iter: 4/171 +-- elapsed: 60.642 sec - best acc.: 0.8674 at {'kernel': 'rbf', 'k': 2, 'weights': 'distance', 'kernel_params': {'gamma': 0.1}}
"""
Explanation: 1k images 3-Fold CV for kernelized k-NN + number of PCA components
End of explanation
"""
param_grid = ({'weights': ['distance'],
'k': [2, 3, 4],
'kernel': ['rbf'],
'kernel_params': [dict(gamma=x) for x in [1e-1, 1e-2, 1e-4, 1e-6]]
},
{'weights': ['uniform'],
'k': [2, 3, 4, 6, 9, 12],
'kernel': ['rbf'],
'kernel_params': [dict(gamma=x) for x in [1e-1, 1e-2, 1e-4, 1e-6]]
},
{'weights': ['distance'],
'k': [2, 3, 4],
'kernel': ['poly'],
'kernel_params': [dict(degree=x) for x in [2, 3, 4]]
},
{'weights': ['uniform'],
'k': [2, 3, 4, 6],
'kernel': ['poly'],
'kernel_params': [dict(degree=x) for x in [2, 3, 4]]
})
grid_search_params = dict(model=KNNClassifier(algorithm='brute'),
param_grid=param_grid,
# param_order=param_order,
train_test_splitter_params=dict(shuffle=True, random_seed=1337),
n_splits=3,
refit=True,
# save_models=True,
# dirpath='tmp/',
# save_params=dict(json_params=dict(indent=4)),
verbose=True)
for n_components in xrange(5, 151, 5):
print "[PCA n_components = {0}]\n\n".format(n_components)
X_current = pca_full.set_params(n_components=n_components, whiten=True).transform(X_sm)
grid_cv_knn_pca_2 = GridSearchCV(**grid_search_params).fit(X_current, y_sm)
    grid_cv_knn_pca_2\
.to_df()\
.sort_values(by='mean_score', ascending=False)\
.to_excel('cv_results/knn_4_pca_krnl_whiten_{0}_{1:.4f}.xlsx'.format(n_components, grid_cv_knn_pca_2.best_score_))
print "\n"
# [PCA n_components = 5]
# Training KNNClassifier on 996 samples x 5 features.
# 3-fold CV for each of 57 params combinations == 171 fits ...
# iter: 1/171 +-- elapsed: 16.284 sec ...
# iter: 2/171 ++- elapsed: 32.904 sec ...
# iter: 3/171 +++ elapsed: 54.273 sec - mean acc.: 0.6939 +/- 2 * 0.018
# ...
# iter: 169/171 +-- elapsed: 2319.17 sec - best acc.: 0.7199 at {'kernel': 'rbf', 'k': 9, 'weights': 'uniform', 'kernel_params': {'gamma': 0.1}}
# iter: 170/171 ++- elapsed: 2325.68 sec - best acc.: 0.7199 at {'kernel': 'rbf', 'k': 9, 'weights': 'uniform', 'kernel_params': {'gamma': 0.1}}
# iter: 171/171 +++ elapsed: 2331.78 sec - mean acc.: 0.5984 +/- 2 * 0.013
# ...
# ...
# ...
# iter: 169/171 +-- elapsed: 2504.95 sec - best acc.: 0.7972 at {'kernel': 'rbf', 'k': 2, 'weights': 'distance', 'kernel_params': {'gamma': 0.1}}
# iter: 170/171 ++- elapsed: 2511.18 sec - best acc.: 0.7972 at {'kernel': 'rbf', 'k': 2, 'weights': 'distance', 'kernel_params': {'gamma': 0.1}}
# iter: 171/171 +++ elapsed: 2517.55 sec - mean acc.: 0.1124 +/- 2 * 0.001
# [PCA n_components = 85]
# Training KNNClassifier on 996 samples x 85 features.
# 3-fold CV for each of 57 params combinations == 171 fits ...
# iter: 1/171 +-- elapsed: 15.737 sec ...
# iter: 2/171 ++- elapsed: 32.295 sec ...
# iter: 3/171 +++ elapsed: 48.847 sec - mean acc.: 0.7892 +/- 2 * 0.035
"""
Explanation: ... same with whitening
End of explanation
"""
X, y = load_mnist(mode='train', path='data/')
aug = RandomAugmentator(transform_shape=(28, 28), random_seed=1337)\
.add('RandomRotate', angle=(-10., 15.))\
.add('Dropout', p=(0., 0.1))\
.add('RandomGaussian', sigma=(0., 0.5))\
.add('RandomShift', x_shift=(-2, 2), y_shift=(-2, 2))
for z in aug.transform(X[:2]/255., 3):
plot_greyscale_image(z)
pca_full = load_model('models/pca_full.json')
def load_big2():
X, y = load_mnist(mode='train', path='data/')
X_scaled = X / 255. # only divide by 255
tts = TrainTestSplitter(shuffle=True, random_seed=1337)
train, test = tts.split(y, train_ratio=50005./60000., stratify=True)
return X_scaled[train], y[train], X_scaled[test], y[test] # 49999 train, 10001 val
X_train, y_train, X_test, y_test = load_big2()
X_train = X_train[:5000]
y_train = y_train[:5000]
X_test = X_test[:1000]
y_test = y_test[:1000]
N = 3
aug = RandomAugmentator(transform_shape=(28, 28), random_seed=1337)
aug.add('RandomRotate', angle=(-7., 10.))
aug.add('RandomGaussian', sigma=(0., 0.5))
aug.add('RandomShift', x_shift=(-1, 1), y_shift=(-1, 1))
aug.add('Dropout', p=(0.8, 1.0))
X_train_aug = aug.transform(X_train, N)
y_train_aug = np.repeat(y_train, N + 1)
print X_train_aug.shape
pca_full.set_params(n_components=35, whiten=False)
X_train_aug = pca_full.transform(X_train_aug)
X_test = pca_full.transform(X_test)
knn = KNNClassifier(algorithm='kd_tree', k=2, p=2, weights='distance')
with Stopwatch(verbose=True) as s: knn.fit(X_train_aug, y_train_aug)
with Stopwatch(verbose=True) as t: y_pred = knn.predict(X_test)
print accuracy_score(y_test, y_pred)
"""
Explanation: Approach #5: Artificially augment the dataset
End of explanation
"""
def load_big():
X, y = load_mnist(mode='train', path='data/')
X_scaled = X / 255.
X_scaled = VarianceThreshold(0.1).fit_transform(X_scaled)
X_scaled = StandardScaler(copy=False).fit_transform(X_scaled)
tts = TrainTestSplitter(shuffle=True, random_seed=1337)
train, test = tts.split(y, train_ratio=50005./60000., stratify=True)
return X_scaled[train], y[train], X_scaled[test], y[test] # 49999 train, 10001 val
X_train, y_train, X_test, y_test = load_big()
knns_best = []
# from approach 1
knns_best.append(KNNClassifier(algorithm='brute', k=3, p=1., weights='uniform'))
knns_best.append(KNNClassifier(algorithm='brute', k=2, p=1., weights='distance'))
# from approach 2
knns_best.append(KNNClassifier(algorithm='brute', k=2, weights='distance', kernel='rbf', kernel_params=dict(gamma=1e-5)))
knns_best.append(KNNClassifier(algorithm='brute', k=3, weights='uniform', kernel='rbf', kernel_params=dict(gamma=1e-5)))
# -------------------------------------------
# def f(x):
# return knn._predict_x(x)
# from joblib import Parallel, delayed
# p = Parallel(n_jobs=1, max_nbytes=None)
# print p(delayed(f)(x) for x in X_test[:2]) # <-- NOT WORKING, CANNOT PICKLE INSTANCE METHODS
# ----------------------------------------------
import pathos.multiprocessing as mp
pool = mp.ProcessingPool(4)
for knn in knns_best:
knn.fit(X_train, y_train)
y_pred = pool.map(knn._predict_x, X_test) # knn.predict(X_test) in parallel
print accuracy_score(y_test, y_pred)
# 0.96650...
# 0.96400...
# 0.96110...
# 0.96150...
"""
Explanation: k-NN best models from all approaches
X_train (60000) = 50k train : 10k validation
Approaches 1, 2
End of explanation
"""
pca_full = load_model('models/pca_full.json')
def load_big2():
X, y = load_mnist(mode='train', path='data/')
X_scaled = X / 255. # only divide by 255
tts = TrainTestSplitter(shuffle=True, random_seed=1337)
train, test = tts.split(y, train_ratio=50005./60000., stratify=True)
return X_scaled[train], y[train], X_scaled[test], y[test] # 49999 train, 10001 val
X_train, y_train, X_test, y_test = load_big2()
pca_full.set_params(n_components=35)
X_train = pca_full.transform(X_train)
X_test = pca_full.transform(X_test)
knn = KNNClassifier(algorithm='kd_tree', k=3, p=2, weights='uniform')
with Stopwatch(verbose=True) as s: knn.fit(X_train, y_train) # Elapsed time: 0.064 sec
with Stopwatch(verbose=True) as t: y_pred = knn.predict(X_test) # Elapsed time: 18.823 sec <- FAST!
print accuracy_score(y_test, y_pred)
# 0.9754...
C = confusion_matrix(y_test, y_pred)
plot_confusion_matrix(C);
C = confusion_matrix(y_test, y_pred, normalize='cols')
plot_confusion_matrix(C, fmt=".2f");
pca_full.set_params(n_components=35)
X_train = pca_full.transform(X_train)
X_test = pca_full.transform(X_test)
knn = KNNClassifier(algorithm='kd_tree', k=2, p=2, weights='distance')
with Stopwatch(verbose=True) as s: knn.fit(X_train, y_train) # Elapsed time: 0.067 sec
with Stopwatch(verbose=True) as t: y_pred = knn.predict(X_test) # Elapsed time: 17.848 sec
print accuracy_score(y_test, y_pred)
# 0.9751...
pca_full.set_params(n_components=35)
X_train = pca_full.transform(X_train)
X_test = pca_full.transform(X_test)
knn = KNNClassifier(algorithm='kd_tree', k=2, p=1, weights='distance')
with Stopwatch(verbose=True) as s: knn.fit(X_train, y_train)
with Stopwatch(verbose=True) as t: y_pred = knn.predict(X_test)
print accuracy_score(y_test, y_pred)
# 0.9747...
pca_full.set_params(n_components=30)
X_train = pca_full.transform(X_train)
X_test = pca_full.transform(X_test)
knn = KNNClassifier(algorithm='kd_tree', k=3, p=2, weights='uniform')
with Stopwatch(verbose=True) as s: knn.fit(X_train, y_train)
with Stopwatch(verbose=True) as t: y_pred = knn.predict(X_test)
print accuracy_score(y_test, y_pred)
# 0.9746...
"""
Explanation: Approach 3 (w/o whitening)
End of explanation
"""
pca_full.set_params(n_components=35, whiten=True)
X_train = pca_full.transform(X_train)
X_test = pca_full.transform(X_test)
knn = KNNClassifier(algorithm='kd_tree', k=3, p=2, weights='uniform')
with Stopwatch(verbose=True) as s: knn.fit(X_train, y_train)
with Stopwatch(verbose=True) as t: y_pred = knn.predict(X_test)
print accuracy_score(y_test, y_pred)
# 0.9723...
"""
Explanation: ... with whitening
End of explanation
"""
pca_full.set_params(n_components=35, whiten=False)
X_train = pca_full.transform(X_train)
X_test = pca_full.transform(X_test)
knn = KNNClassifier(algorithm='brute', k=3, weights='uniform', kernel='rbf', kernel_params=dict(gamma=1e-4))
knn.fit(X_train, y_train)
y_pred = []
for (i, x) in enumerate(X_test):
y_pred.append(knn._predict_x(x))
if (i + 1) % 10 == 0:
print "computed {0}/{1} ... accuracy {2:.4f}".format(i + 1, len(X_test), accuracy_score(y_test[:len(y_pred)], y_pred))
print accuracy_score(y_test, y_pred)
# ...
# computed 2960/10001 ... accuracy 0.9743
# computed 2970/10001 ... accuracy 0.9744
# ...
# computed 3030/10001 ... accuracy 0.9743
# computed 3040/10001 ... accuracy 0.9743
pca_full.set_params(n_components=20, whiten=True)
X_train = pca_full.transform(X_train)
X_test = pca_full.transform(X_test)
knn = KNNClassifier(algorithm='brute', k=3, weights='uniform', kernel='rbf', kernel_params=dict(gamma=1e-4))
knn.fit(X_train, y_train)
y_pred = []
for (i, x) in enumerate(X_test):
y_pred.append(knn._predict_x(x))
if (i + 1) % 10 == 0:
print "computed {0}/{1} ... accuracy {2:.4f}".format(i + 1, len(X_test), accuracy_score(y_test[:len(y_pred)], y_pred))
print accuracy_score(y_test, y_pred)
# 0.9655...
"""
Explanation: Approach 4 (w/ and w/o whitening)
End of explanation
"""
pca_full = load_model('models/pca_full.json')
def load_big2(train_ratio=50005./60000.):
X, y = load_mnist(mode='train', path='data/')
X_scaled = X / 255. # only divide by 255
tts = TrainTestSplitter(shuffle=True, random_seed=1337)
train, test = tts.split(y, train_ratio=train_ratio, stratify=True)
return X_scaled[train], y[train], X_scaled[test], y[test]
X_train_orig, y_train_orig, X_test_orig, y_test_orig = load_big2(57000./60000.)
# train_ratio=50005./60000.
pca_full.set_params(n_components=35, whiten=True)
z = pca_full.explained_variance_ratio_[:35]
z /= sum(z)
# for alpha in (1e-6, 1e-4, 1e-2, 0.1, 1., 10.):
# for alpha in np.logspace(0.0, 5.0, num=11):
# for alpha in (5., 7., 8., 9., 11., 12., 14., 16.):
for alpha in np.arange(11.0, 13.0, 0.2):
print "alpha =", alpha
X_train = pca_full.transform(X_train_orig)
X_test = pca_full.transform(X_test_orig)
X_train *= np.exp(alpha * z)
X_test *= np.exp(alpha * z)
# knn = KNNClassifier(algorithm='kd_tree', k=2, p=2, weights='distance')
# knn.fit(X_train, y_train)
# print knn.evaluate(X_test, y_test)
knn = KNNClassifier(algorithm='kd_tree', k=3, p=2, weights='uniform')
knn.fit(X_train, y_train)
    print knn.evaluate(X_test, y_test)
# alpha = 1e-06
# 0.971102889711
# 0.972302769723
# 0.971102889711
# alpha = 0.0001
# 0.971102889711
# 0.972302769723
# alpha = 0.01
# 0.971102889711
# 0.972302769723
# alpha = 0.1
# 0.971202879712
# 0.972302769723
# alpha = 1.0
# 0.97200279972
# 0.972802719728
# alpha = 10.0
# 0.973802619738
# 0.97700229977
# ...
# alpha = 5.0
# 0.973402659734
# 0.974802519748
# alpha = 7.0
# 0.97400259974
# 0.975602439756
# alpha = 8.0
# 0.974102589741
# 0.976202379762
# alpha = 9.0
# 0.973802619738
# 0.976302369763
# alpha = 11.0
# 0.97400259974
# 0.977302269773
# alpha = 12.0
# 0.974202579742
# 0.977502249775
# alpha = 14.0
# 0.973402659734
# 0.976602339766
# alpha = 16.0
# 0.972902709729
# 0.976202379762
# alpha = 11.2
# 0.977402259774
# alpha = 11.4
# 0.977602239776
# alpha = 11.6
# [*] 0.977802219778
# alpha = 11.8
# [*] 0.977802219778
# alpha = 12.0
# 0.977502249775
# alpha = 12.2
# 0.977402259774
# alpha = 12.4
# train_ratio=57000./60000.
pca_full.set_params(n_components=35, whiten=True)
z = pca_full.explained_variance_ratio_[:35]
z /= sum(z)
alpha = 11.6
aug = RandomAugmentator(transform_shape=(28, 28), random_seed=1337)
aug.add('RandomRotate', angle=(-7., 10.))
aug.add('RandomGaussian', sigma=(0., 0.5))
aug.add('RandomShift', x_shift=(-1, 1), y_shift=(-1, 1))
aug.add('Dropout', p=(0., 0.2))
for N in xrange(10): # augment by a factor of (1 + N)
X_train = aug.transform(X_train_orig, N)
y_train = np.repeat(y_train_orig, N + 1)
X_train = pca_full.transform(X_train)
X_test = pca_full.transform(X_test_orig)
X_train *= np.exp(alpha * z)
X_test *= np.exp(alpha * z)
knn = KNNClassifier(algorithm='kd_tree', k=3, p=2, weights='uniform')
knn.fit(X_train, y_train)
print "N = {0}, acc. = {1:.5f}".format(N, knn.evaluate(X_test, y_test_orig))
# N = 0, acc. = 0.97904
# N = 1, acc. = 0.98137
# N = 2, acc. = 0.98137
# N = 3, acc. = 0.98303
# N = 4, acc. = 0.98337
# N = 5, acc. = 0.98370
# N = 6, acc. = 0.98370
# N = 7, acc. = 0.98237
# [*] N = 8, acc. = 0.98536
# N = 9, acc. = 0.98436
"""
Explanation: Approach 6: exponential decay on normalized explained variance
End of explanation
"""
nn = load_model('tmp/16nn.json')
X_train, _ = load_mnist('train', 'data/')
X_train /= 255.
nn.forward_pass(X_train)
np.save('data/train_feats.npy', leaky_relu(nn.layers[13]._last_input))
X = np.load('data/train_feats.npy')
_, y = load_mnist('train', 'data/')
tts = TrainTestSplitter(shuffle=True, random_seed=1337)
train, test = tts.split(y, train_ratio=50005./60000., stratify=True) # 49999 : 10001
param_grid = dict(
k=[2, 3, 4, 5],
p=[1., 2., 3.],
weights=['uniform', 'distance']
)
grid_cv = GridSearchCV(None, param_grid=param_grid)
knn = KNNClassifier(algorithm='kd_tree')
knn.fit(X[train], y[train])
for params in grid_cv.gen_params():
knn.reset_params().set_params(**params)
acc = knn.evaluate(X[test], y[test])
print "{0:.4f} at {1}".format(acc, params)
# (Sorted)
# 0.9906 at {'p': 1.0, 'k': 5, 'weights': 'distance'}
# 0.9912 at {'p': 2.0, 'k': 5, 'weights': 'distance'}
# 0.9919 at {'p': 3.0, 'k': 5, 'weights': 'distance'}
# 0.9926 at {'p': 1.0, 'k': 4, 'weights': 'distance'}
# 0.9929 at {'p': 2.0, 'k': 4, 'weights': 'distance'}
# 0.9934 at {'p': 3.0, 'k': 4, 'weights': 'distance'}
# 0.9943 at {'p': 2.0, 'k': 3, 'weights': 'distance'}
# 0.9945 at {'p': 1.0, 'k': 3, 'weights': 'distance'}
# 0.9950 at {'p': 3.0, 'k': 3, 'weights': 'distance'}
# 0.9957 at {'p': 2.0, 'k': 2, 'weights': 'uniform'}
# 0.9958 at {'p': 3.0, 'k': 2, 'weights': 'uniform'}
# 0.9959 at {'p': 1.0, 'k': 2, 'weights': 'uniform'}
# 0.9960 at {'p': 2.0, 'k': 2, 'weights': 'distance'}
# 0.9962 at {'p': 3.0, 'k': 2, 'weights': 'distance'}
# 0.9963 at {'p': 1.0, 'k': 2, 'weights': 'distance'}
# 0.9964 at {'p': 2.0, 'k': 4, 'weights': 'uniform'}
# 0.9965 at {'p': 3.0, 'k': 4, 'weights': 'uniform'}
# 0.9967 at {'p': 1.0, 'k': 4, 'weights': 'uniform'}
# 0.9968 at {'p': 3.0, 'k': 3, 'weights': 'uniform'}
# 0.9969 at {'p': 1.0, 'k': 3, 'weights': 'uniform'}
# 0.9969 at {'p': 1.0, 'k': 5, 'weights': 'uniform'}
# 0.9970 at {'p': 2.0, 'k': 5, 'weights': 'uniform'}
# 0.9970 at {'p': 3.0, 'k': 5, 'weights': 'uniform'}
# [*] 0.9971 at {'p': 2.0, 'k': 3, 'weights': 'uniform'}
"""
Explanation: Approach #NN
End of explanation
"""
X, y = load_mnist(mode='train', path='data/')
X /= 255.
train, test = TrainTestSplitter(shuffle=True, random_seed=1337).split(y, train_ratio=0.85)
y = one_hot(y)
logreg = LogisticRegression(n_batches=10,
random_seed=1337,
optimizer_params=dict(
max_epochs=100,
learning_rate=1e-3)
)
logreg.fit(X[train], y[train], X_val=X[test], y_val=y[test])
y_pred = logreg.predict(X[test])
print accuracy_score(y_pred, y[test])
# 0.92755...
X, y = load_mnist(mode='train', path='data/')
X /= 255.
train, test = TrainTestSplitter(shuffle=True, random_seed=1337).split(y, train_ratio=0.85)
y = one_hot(y)
logreg = LogisticRegression(n_batches=10,
random_seed=1337,
optimizer_params=dict(
max_epochs=100,
learning_rate=1e-3)
)
logreg.fit(X[train], y[train], X_val=X[test], y_val=y[test])
y_pred = logreg.predict(X[test])
print accuracy_score(y_pred, y[test])
# 0.92766...
"""
Explanation: Logistic Regression
Approach #1: no preprocessing
End of explanation
"""
X, y = load_mnist(mode='train', path='data/')
X /= 255.
train, test = TrainTestSplitter(shuffle=True, random_seed=1337).split(y, train_ratio=0.85)
y = one_hot(y)
for lr in (1e-5, 2e-5, 5e-5, 1e-4, 2e-4, 5e-4, 1e-3, 2e-3, 5e-3, 1e-2):
for L2 in (0., 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 0.1, 1.):
logreg = LogisticRegression(L2=L2,
n_batches=10,
random_seed=1337,
optimizer_params=dict(
max_epochs=600,
early_stopping=50,
# verbose=True,
learning_rate=lr,
plot=False)
)
logreg.fit(X[train], y[train], X_val=X[test], y_val=y[test])
acc = logreg.evaluate(X[test], y[test])
print "{0:.4f}, lr = {1}, L2 = {2}".format(acc, lr, L2)
# 0.9051, lr = 1e-05, L2 = 1e-06
# 0.9051, lr = 1e-05, L2 = 1e-05
# 0.9051, lr = 1e-05, L2 = 0.0001
# 0.9051, lr = 1e-05, L2 = 0.001
# 0.9049, lr = 1e-05, L2 = 0.01
# 0.9046, lr = 1e-05, L2 = 0.1
# 0.9009, lr = 1e-05, L2 = 1.0
# 0.9250, lr = 2e-05, L2 = 1e-06
# 0.9250, lr = 2e-05, L2 = 1e-05
# 0.9250, lr = 2e-05, L2 = 0.0001
# 0.9251, lr = 2e-05, L2 = 0.001
# 0.9248, lr = 2e-05, L2 = 0.01
# 0.9248, lr = 2e-05, L2 = 0.1
# 0.9268, lr = 5e-05, L2 = 0.0
# 0.9268, lr = 5e-05, L2 = 1e-06
# 0.9268, lr = 5e-05, L2 = 1e-05
# 0.9267, lr = 5e-05, L2 = 0.0001
# 0.9268, lr = 5e-05, L2 = 0.001
# 0.9270, lr = 5e-05, L2 = 0.01
# 0.9262, lr = 5e-05, L2 = 0.1
# 0.9216, lr = 5e-05, L2 = 1.0
# 0.9264, lr = 0.0001, L2 = 0.0
# 0.9264, lr = 0.0001, L2 = 1e-06
# 0.9266, lr = 0.0001, L2 = 1e-05
# 0.9266, lr = 0.0001, L2 = 0.0001
# 0.9266, lr = 0.0001, L2 = 0.001
# 0.9268, lr = 0.0001, L2 = 0.01
# 0.9262, lr = 0.0001, L2 = 0.1
# 0.9220, lr = 0.0001, L2 = 1.0
# 0.9267, lr = 0.0002, L2 = 0.0
# 0.9267, lr = 0.0002, L2 = 1e-06
# 0.9266, lr = 0.0002, L2 = 1e-05
# 0.9266, lr = 0.0002, L2 = 0.0001
# 0.9276, lr = 0.0002, L2 = 0.001
# 0.9264, lr = 0.0002, L2 = 0.01
# 0.9262, lr = 0.0002, L2 = 0.1
# 0.9218, lr = 0.0002, L2 = 1.0
# 0.9281, lr = 0.0005, L2 = 0.0
# 0.9281, lr = 0.0005, L2 = 1e-06
# 0.9282, lr = 0.0005, L2 = 1e-05
# 0.9280, lr = 0.0005, L2 = 0.0001
# 0.9278, lr = 0.0005, L2 = 0.001
# 0.9274, lr = 0.0005, L2 = 0.01
# 0.9264, lr = 0.0005, L2 = 0.1
# 0.9212, lr = 0.0005, L2 = 1.0
# 0.9276, lr = 0.001, L2 = 0.0
# 0.9277, lr = 0.001, L2 = 1e-06
# 0.9277, lr = 0.001, L2 = 1e-05
# 0.9277, lr = 0.001, L2 = 0.0001
# 0.9281, lr = 0.001, L2 = 0.001
# 0.9271, lr = 0.001, L2 = 0.01
# 0.9260, lr = 0.001, L2 = 0.1
# 0.9224, lr = 0.001, L2 = 1.0
# 0.9299, lr = 0.002, L2 = 0.0
# 0.9293, lr = 0.002, L2 = 1e-06
# 0.9292, lr = 0.002, L2 = 1e-05
# 0.9297, lr = 0.002, L2 = 0.0001
# 0.9292, lr = 0.002, L2 = 0.001
# 0.9291, lr = 0.002, L2 = 0.01
# 0.9281, lr = 0.002, L2 = 0.1
# 0.9232, lr = 0.002, L2 = 1.0
# 0.9294, lr = 0.005, L2 = 0.0
# [*] 0.9301, lr = 0.005, L2 = 1e-06
# 0.9294, lr = 0.005, L2 = 1e-05
# 0.9293, lr = 0.005, L2 = 0.0001
# 0.9294, lr = 0.005, L2 = 0.001
# 0.9299, lr = 0.005, L2 = 0.01
# 0.9277, lr = 0.005, L2 = 0.1
# 0.9227, lr = 0.005, L2 = 1.0
# 0.9274, lr = 0.01, L2 = 0.0
# 0.9266, lr = 0.01, L2 = 1e-06
# 0.9276, lr = 0.01, L2 = 1e-05
# 0.9286, lr = 0.01, L2 = 0.0001
# 0.9274, lr = 0.01, L2 = 0.001
# 0.9291, lr = 0.01, L2 = 0.01
# 0.9261, lr = 0.01, L2 = 0.1
# 0.9201, lr = 0.01, L2 = 1.0
"""
Explanation: Validation for L2, learning rate
End of explanation
"""
logregs = []
for i, n_components in enumerate(xrange(301, 401, 20)):
pca_full = load_model('models/pca_full.json')
pca_full.set_params(n_components=n_components, whiten=False)
X, y = load_mnist(mode='train', path='data/')
X /= 255.
X = pca_full.transform(X)
train, test = TrainTestSplitter(shuffle=True, random_seed=1337).split(y, train_ratio=0.85)
y = one_hot(y)
logreg = LogisticRegression(n_batches=10,
random_seed=1337,
optimizer_params=dict(
max_epochs=500,
learning_rate=1e-3,
plot=False)
)
logregs.append(logreg)
# logregs[i].set_params(optimizer_params=dict(max_epochs=100, learning_rate=1e-3, plot=False))
logreg.fit(X[train], y[train], X_val=X[test], y_val=y[test])
y_pred = logreg.predict(X[test])
print "PCA {0} --- {1:.4f}".format(n_components, accuracy_score(y_pred, y[test]))
# W/O whitening | with
# ---------------------------
# PCA 15 --- 0.8441 |
# PCA 20 --- 0.8783 |
# PCA 25 --- 0.8874 |
# PCA 30 --- 0.8936 | 0.8931
# PCA 35 --- 0.9027 | 0.9029
# PCA 40 --- 0.9056 | 0.9051
# PCA 45 --- 0.9076 | 0.9077
# PCA 50 --- 0.9087 | 0.9083
# PCA 55 --- 0.9132 | 0.9134
# PCA 60 --- 0.9129 | 0.9129
# PCA 65 --- 0.9133 | 0.9129
# PCA 70 --- 0.9176 | 0.9173
# PCA 75 --- 0.9189 | 0.9186
# PCA 80 --- 0.9206 | 0.9200
# PCA 85 --- 0.9207 | 0.9207
# PCA 90 --- 0.9213 | 0.9212
# PCA 95 --- 0.9203 | 0.9198
# PCA 100 --- 0.9184 | 0.9188
# PCA 105 --- 0.9203 | 0.9198
# PCA 110 --- 0.9209 | 0.9202
# PCA 115 --- 0.9210 | 0.9209
# PCA 120 --- 0.9217 | 0.9212
# PCA 125 | 0.9228 [*]
# PCA 130 | 0.9210
# PCA 135 | 0.9220
# PCA 140 | 0.9211
# PCA 145 | 0.9202
# PCA 150 | 0.9208
# PCA 155 | 0.9223
# PCA 165 --- 0.9210
# PCA 170 --- 0.9207
# PCA 175 --- 0.9214
# PCA 180 --- 0.9211
# PCA 185 --- 0.9208
# PCA 190 --- 0.9208
# PCA 195 --- 0.9204
# PCA 200 --- 0.9208
# ...
# PCA 220 --- 0.9209
# PCA 230 --- 0.9214
# PCA 240 --- 0.9207
# ...
# PCA 301 --- 0.9207
# PCA 321 --- 0.9204
# ...
"""
Explanation: Approach #2: PCA
End of explanation
"""
X, y = load_mnist(mode='train', path='data/')
X /= 255.
X = X.astype(np.float32)
aug = RandomAugmentator(transform_shape=(28, 28), random_seed=1337)
aug.add('RandomRotate', angle=(-5., 7.))
aug.add('RandomGaussian', sigma=(0., 0.5))
aug.add('RandomShift', x_shift=(-1, 1), y_shift=(-1, 1))
aug.add('Dropout', p=(0., 0.2))
X_aug = aug.transform(X, 4)
y_aug = np.repeat(y, 5)
y_aug = one_hot(y_aug)
np.save('data/X_aug_logreg.npy', X_aug)
np.save('data/y_aug_logreg.npy', y_aug)
"""
Explanation: Approach #3: augment data (x5) for logreg and save to file
save data
End of explanation
"""
X = np.load('data/X_aug_logreg.npy')
y = np.load('data/y_aug_logreg.npy')
train, test = TrainTestSplitter(shuffle=True, random_seed=1337).split(y, train_ratio=29./30.)
"""
Explanation: load data
End of explanation
"""
X = np.load('data/X_aug_logreg.npy')#[:25000]
y = np.load('data/y_aug_logreg.npy')#[:25000]
train, test = TrainTestSplitter(shuffle=True, random_seed=1337).split(y, train_ratio=29./30.)
for lr in reversed([1e-2, 1e-3, 1e-4, 1e-5, 1e-6]):
for L2 in (1e-8, 1e-6, 1e-4, 1e-2, 1.):
plot = (L2 == 1e-8)
logreg = LogisticRegression(L2=L2,
n_batches=64,
# n_batches=10,
random_seed=1337,
optimizer_params=dict(
max_epochs=800,
# max_epochs=20,
early_stopping=50,
learning_rate=lr,
plot=plot,
plot_dirpath='learning_curves_logreg_{0}/'.format(lr)
))
logreg.fit(X[train], y[train], X_val=X[test], y_val=y[test])
acc = logreg.evaluate(X[test], y[test])
print "{0:.4f}, lr = {1}, L2 = {2}".format(acc, lr, L2)
s = '{0:.4f}'.format(acc).replace('.', '_')
t = 'models/logreg/logreg_{0}_{1}_{2}.json'.format(s, lr, L2)
logreg.save(t)
logreg_loaded = load_model(t)
print "{0:.4f}".format(logreg_loaded.evaluate(X[test], y[test]))
# 0.7843, lr = 1e-06, L2 = 1e-08
# 0.7843, lr = 1e-06, L2 = 1e-06
# 0.7843, lr = 1e-06, L2 = 0.0001
# 0.7843, lr = 1e-06, L2 = 0.01
# 0.7855, lr = 1e-06, L2 = 1.0
# 0.8754, lr = 1e-05, L2 = 1e-08
# 0.8754, lr = 1e-05, L2 = 1e-06
# ...
# 0.8805, lr = 1e-4, L2 whatever
# ...
# 0.86.., lr = 1e-3, L2 whatever
"""
Explanation: grid search
End of explanation
"""
pca_full = load_model('models/pca_full.json')
def load_big2():
X, y = load_mnist(mode='train', path='data/')
X_scaled = X / 255. # only divide by 255
tts = TrainTestSplitter(shuffle=True, random_seed=1337)
train, test = tts.split(y, train_ratio=50005./60000., stratify=True)
return X_scaled[train], y[train], X_scaled[test], y[test] # 49999 train, 10001 val
X_train_orig, y_train, X_test_orig, y_test = load_big2()
y_train = one_hot(y_train)
y_test = one_hot(y_test)
pca_full.set_params(n_components=35, whiten=True)
z = pca_full.explained_variance_ratio_[:35]
z /= sum(z)
for alpha in (1e-6, 1e-4, 1e-2, 0.1, 1., 2., 5., 10., 16., 25., 100.):
print "alpha =", alpha
X_train = pca_full.transform(X_train_orig)
X_test = pca_full.transform(X_test_orig)
X_train *= np.exp(alpha * z)
X_test *= np.exp(alpha * z)
logreg = LogisticRegression(L2=1e-6,
n_batches=10,
random_seed=1337,
optimizer_params=dict(
max_epochs=600,
early_stopping=50,
# verbose=True,
learning_rate=0.005,
plot=False)
)
logreg.fit(X_train, y_train, X_val=X_test, y_val=y_test)
print logreg.evaluate(X_test, y_test)
# alpha = 1e-06
# 0.90800919908
# alpha = 0.0001
# 0.90800919908
# alpha = 0.01
# 0.90800919908
# alpha = 0.1
# 0.90800919908
# alpha = 1.0
# 0.90800919908
# alpha = 2.0
# 0.908109189081
# alpha = 5.0
# 0.907709229077
# alpha = 10.0
# 0.907809219078
# alpha = 16.0
# 0.907209279072
# alpha = 25.0
# 0.906409359064
# alpha = 100.0
# 0.505749425057
"""
Explanation: Approach #4: Exponential decay on singular values
End of explanation
"""
X_train = np.load('data/train_feats.npy')
_, y_train = load_mnist('train', 'data/')
tts = TrainTestSplitter(shuffle=True, random_seed=1337)
train, val = tts.split(y_train, train_ratio=50005./60000., stratify=True) # 49999 : 10001
param_grid = dict(
L2=[0] + np.logspace(-4., 1., 11).tolist(),
)
logreg_params = dict(n_batches=32,
random_seed=1337,
optimizer_params=dict(
max_epochs=750,
learning_rate=0.001,
early_stopping=50,
plot=False,
verbose=False
))
for params in GridSearchCV(param_grid=param_grid).gen_params():
logreg = LogisticRegression(**logreg_params).set_params(**params)
logreg.fit(X_train[train], one_hot(y_train[train]), X_val=X_train[val], y_val=one_hot(y_train[val]))
acc = logreg.evaluate(X_train[val], one_hot(y_train[val]))
print "{0:.5f} at {1}".format(acc, val_acc, params)
# (Sorted)
# 0.99590 val at {'learning_rate': 0.005, 'L2': 3.1622776601683795}
# 0.99610 val at {'learning_rate': 0.005, 'L2': 0.31622776601683794}
# 0.99710 val at {'learning_rate': 0.005, 'L2': 0.0031622776601683794}
# 0.99710 val at {'learning_rate': 0.005, 'L2': 10.0}
# 0.99730 val at {'learning_rate': 0.005, 'L2': 0.031622776601683791}
# 0.99760 val at {'learning_rate': 0.005, 'L2': 0.0}
# 0.99770 val at {'learning_rate': 0.005, 'L2': 0.0001}
# 0.99780 val at {'learning_rate': 0.005, 'L2': 0.001}
# 0.99780 val at {'learning_rate': 0.005, 'L2': 0.01}
# 0.99790 val at {'learning_rate': 0.005, 'L2': 0.1}
# 0.99790 val at {'learning_rate': 0.005, 'L2': 1.0}
# [*] 0.99810 val at {'learning_rate': 0.005, 'L2': 0.00031622776601683794}
"""
Explanation: Approach #NN
End of explanation
"""
X, y = load_mnist('train', 'data/')
indices, _ = TrainTestSplitter(shuffle=True, random_seed=1337).split(y, train_ratio=4.005/60., stratify=True)
X = X[indices]
X = X[:4000]
X /= 255.
param_grid = dict(
n_hidden=[128, 256, 384],
learning_rate=[0.05, 0.01, 0.005, '0.05->0.005', '0.01->0.001'],
k=[1, 4],
random_seed=[1337, 42],
)
rbm = RBM(persistent=True,
n_epochs=40,
early_stopping=12,
momentum='0.5->0.99',
batch_size=10,
verbose=False)
done = 0
for thr in (False, True):
if thr:
X = (X > 0.5).astype(np.float32)
for params in GridSearchCV(param_grid=param_grid).gen_params(): # 60
done += 1
rbm.reset_params().set_params(**params)
rbm.fit(X)
mse = rbm.best_recon
dirpath = 'tmp/rbm_ge0.5/' if thr else 'tmp/rbm/'
rbm.save(dirpath + '{0:.5f}.json'.format(mse))
print "mse {0:.5f} [{1}/120] at {2}!".format(mse, done, params)
# (Sorted)
# [*] mse 0.06684 [25/120] at {'k': 1, 'random_seed': 1337, 'learning_rate': '0.01->0.001', 'n_hidden': 256}!
# ...
rbm = load_model('models/rbm.json')
plot_rbm_filters(rbm.best_W)
plt.savefig('rbm_filters.png')
"""
Explanation: RBM
params
End of explanation
"""
# non-random nudging in all directions
X, y = load_mnist('train', 'data/')
X /= 255.
indices, _ = TrainTestSplitter(shuffle=True, random_seed=1337).split(y, train_ratio=4.005/60., stratify=True)
X = X[indices]
X = X[:4000]
X_aug = []
for x in X:
X_aug.append(x)
# nudge one pixel in each of the four directions
for x_shift, y_shift in [((-1, -1), (0, 0)),
(( 1, 1), (0, 0)),
(( 0, 0), (1, 1)),
(( 0, 0), (-1, -1))]:
for t in RandomAugmentator(transform_shape=(28, 28), out_shape=(784,))\
.add('RandomShift', x_shift=x_shift, y_shift=y_shift)\
.transform_x(x, 1):
X_aug.append(t)
X_aug = np.asarray(X_aug)
np.save('data/X_rbm_small.npy', X_aug)
X_aug = np.load('data/X_rbm_small.npy')
param_grid = dict(
learning_rate=['0.01->0.005', '0.05->0.001', '0.05->0.005', '0.01->0.001'],
batch_size=[5, 10, 20, 40],
random_seed=[1337, 42],
)
rbm = RBM(n_hidden=256,
k=1,
persistent=True,
n_epochs=60,
early_stopping=12,
momentum='0.5->0.99',
verbose=True)
done = 0
GS = GridSearchCV
for params in GS(param_grid=param_grid).gen_params():
done += 1
if done <= 16:
continue
rbm.reset_params().set_params(**params)
rbm.fit(X_aug)
mse = rbm.best_recon
rbm.save('tmp/rbm_{0:.5f}.json'.format(mse))
print "mse {0:.5f} [{1}/40] at {2}!".format(mse, done, params)
# (Sorted:)
# [*] mse 0.06809 [19/40] at {'learning_rate': '0.05->0.001', 'random_seed': 1337, 'batch_size': 20}!
# ...
"""
Explanation: nudge and try again
End of explanation
"""
rbm = load_model('models/rbm.json')
X, _ = load_mnist('train', 'data/')
X /= 255.
F = np.dot(X, rbm.best_W) + rbm.hb # rbm.propup(X)
# F.min(), F.max(), F.mean() --> -3773.89447221 2.30920675476 -140.968359014
F = StandardScaler().fit_transform(F)
np.save('data/rbm_train.npy', F)
"""
Explanation: extract and save features
End of explanation
"""
X_train = np.load('data/rbm_train.npy')
_, y_train = load_mnist('train', 'data/')
tts = TrainTestSplitter(shuffle=True, random_seed=1337)
train, val = tts.split(y_train, train_ratio=50005./60000., stratify=True) # 49999 : 10001
param_grid = dict(
L2=np.logspace(-6., 1., 15),
)
logreg_params = dict(n_batches=32,
random_seed=1337,
optimizer_params=dict(
max_epochs=750,
learning_rate=0.001,
early_stopping=50,
plot=False,
verbose=False
))
for params in GridSearchCV(param_grid=param_grid).gen_params():
logreg = LogisticRegression(**logreg_params).set_params(**params)
logreg.fit(X_train[train], one_hot(y_train[train]), X_val=X_train[val], y_val=one_hot(y_train[val]))
acc = logreg.evaluate(X_train[val], one_hot(y_train[val]))
print "{0:.5f} at {1}".format(acc, val_acc, params)
# 0.91800 test 0.92251 val at {'L2': 9.9999999999999995e-07}
# 0.91760 test 0.92241 val at {'L2': 3.1622776601683792e-06}
# ... D:
"""
Explanation: Approach #RBM
End of explanation
"""
X, y = load_mnist(mode='train', path='data/')
X /= 255.
X = X.astype(np.float32)
aug = RandomAugmentator(transform_shape=(28, 28), random_seed=1337)
aug.add('RandomRotate', angle=(-5., 7.))
aug.add('RandomGaussian', sigma=(0., 0.5))
aug.add('RandomShift', x_shift=(-1, 1), y_shift=(-1, 1))
X_aug = aug.transform(X, 4)
y_aug = np.repeat(y, 5)
y_aug = one_hot(y_aug)
np.save('data/X_aug_nn.npy', X_aug)
np.save('data/y_aug_nn.npy', y_aug)
"""
Explanation: Neural Network
#1 augment data (x5) for NN and save to file
End of explanation
"""
X = np.load('data/X_aug_nn.npy')
y = np.load('data/y_aug_nn.npy')
train, test = TrainTestSplitter(shuffle=True, random_seed=1337).split(y, train_ratio=29./30.)
"""
Explanation: load data
End of explanation
"""
print "Loading data ..."
X = np.load('data/X_aug_nn.npy')#[:30000]
y = np.load('data/y_aug_nn.npy')#[:30000]
train, test = TrainTestSplitter(shuffle=True, random_seed=1337).split(y, train_ratio=29./30.)
nn = NNClassifier(layers=[
FullyConnected(512),
Activation('leaky_relu'),
FullyConnected(256),
Activation('leaky_relu'),
FullyConnected(128),
Activation('leaky_relu'),
FullyConnected(32),
Activation('leaky_relu'),
FullyConnected(10),
Activation('softmax')
],
n_batches=1024,
shuffle=True,
random_seed=1337,
optimizer_params=dict(
max_epochs=100,
early_stopping=20,
verbose=True,
plot=True,
plot_dirpath='learning_curves_NN/',
learning_rate=1e-4
))
print "Initializing NN ..."
nn.fit(X[train], y[train], X_val=X[test], y_val=y[test])
print nn.evaluate(X[test], y[test], 'accuracy_score')
# 1) validation accuracy --> 0.9929
# 2) 512-256-128-32-10 Dropout(0.1) --> 0.9906
# 3) 512-256-128-32-10 Dropout(0.2) --> 0.9897
# 4) 600-300-128-32-10 --> 0.9879
# 5) 600-300-128-32-10 Dropout(0.1) --> 0.9914
# 6) 600-300-128-32-10 Dropout(0.12) --> 0.9895
# 7) 800-400-200-100-10 Dropout(0.12) --> 0.9929
# 8) 1024-512-256-128-10 Dropout(0.12) --> 0.9944
# 9) 1024-D.05-768-D.1-256-128-10 --> 0.9905
# 10.a) 1024-768-256-128-10 Dropout(0.1) --> 0.9923
# 10.b) 1024-768-256-128-10 Dropout(0.2) --> 0.9892
# 10.c) 1024-768-256-128-10 Dropout(1/4) --> 0.9857
# 10.d) 1024-768-256-128-10 Dropout(0.5) --> 0.9686
# (...)
"""
Explanation: NN models
End of explanation
"""
X, y = load_mnist(mode='train', path='data/')
X = X / 255.
X = X.astype(np.float32)
tts = TrainTestSplitter(shuffle=False, random_seed=1337)
train, val = tts.split(y, train_ratio=55005.98/60000., stratify=True) # 55k : 5k
X_train, y_train, X_val, y_val = X[train], y[train], X[val], y[val]
y_val = one_hot(y_val)
np.save('data/nn_X_val.npy', X_val)
np.save('data/nn_y_val.npy', y_val)
aug = RandomAugmentator(transform_shape=(28, 28), random_seed=1337)
aug.add('RandomRotate', angle=(-5., 7.))
aug.add('RandomGaussian', sigma=(0., 0.5))
aug.add('RandomShift', x_shift=(-1, 1), y_shift=(-1, 1))
X_train = aug.transform(X_train, 4)
y_train = np.repeat(y_train, 5)
y_train = one_hot(y_train)
np.save('data/nn_X_train.npy', X_train)
np.save('data/nn_y_train.npy', y_train)
# 1.a) 1024-D.05-768-D.1-256-128-10 --> 0.9880
# 1.b) 1024-D.05-768-D.05-256-128-10 --> 0.9868
# 1.c) 1024-768-256-128-10 --> 0.9896
# 2) 1000-800-800-500-250-10 --> 0.9824
# ... --> 0.9838
# (...)
# WORSE!
"""
Explanation: #2 more thorough augmentation
End of explanation
"""
# 11) 800-1024-512-256-128 --> 0.9933
# 12) 1337-911-666-128 --> 0.9923
# 13) 800-D.05-1024-D.1-512-256-128 --> 0.9936
# 14) 800-D.05-1024-D.1-512-D.1-256-128 --> 0.9928
# 15) 1337-D.05-911-D.1-666-128 --> 0.9939
# [*] 16) 1337-D.05-911-D.1-666-333-128 --> 0.9948
# 17) 1337-D.1-911-D.2-666-333-128 --> 0.9887
# 18) ... --> 0.9930
# 19) ... --> 0.9935
# 20) 2048-D.1-1337-D.2-666-333 --> 0.9896
# 21) 2048-D.15-1337-D.25-666-333 --> 0.9723
# 22) 2048-D.05-1337-D.1-666-333 --> 0.9936
# 23) 2048-D.1-1337-D.2-666-333-128 --> 0.9892
"""
Explanation: back to #1
End of explanation
"""
X, y = load_mnist('train', 'data/')
X /= 255.
y = one_hot(y)
gp = GPClassifier(algorithm='exact')
gp
gp.reset_K()
with Stopwatch(verbose=True):
gp.fit(X[:10], y[:10])
gp.reset_K()
with Stopwatch(verbose=True):
gp.fit(X[:100], y[:100])
gp.reset_K()
with Stopwatch(verbose=True):
gp.fit(X[:1000], y[:1000])
gp.reset_K()
with Stopwatch(verbose=True):
gp.fit(X[:2000], y[:2000])
# Elapsed time: 0.046 sec
# Elapsed time: 0.518 sec
# Elapsed time: 59.686 sec
# Elapsed time: 298.424 sec
"""
Explanation: Gaussian Processes
some benchmarks
exact linear systems solving
End of explanation
"""
gp = GPClassifier(algorithm='cg')
gp.reset_K()
with Stopwatch(verbose=True):
gp.fit(X[:100], y[:100])
gp.reset_K()
with Stopwatch(verbose=True):
gp.fit(X[:1000], y[:1000])
gp.reset_K()
with Stopwatch(verbose=True):
gp.fit(X[:2000], y[:2000])
# Elapsed time: 0.044 sec
# Elapsed time: 0.262 sec
# Elapsed time: 50.412 sec
# Elapsed time: 259.823 sec
"""
Explanation: via CG
End of explanation
"""
sigma_n = np.concatenate(([0], np.logspace(-8., -4., 2)))
length_scale = np.logspace(-1., 2., 19)
gamma = 0.5/length_scale**2
# sigma_f = np.logspace(-2., 2., 7)
param_grid = ({'sigma_n': sigma_n,
'kernel_params': [dict(sigma=1., gamma=gamma_) for gamma_ in gamma]},
{'sigma_n': sigma_n,
'kernel_params': [dict(sigma=0.1, gamma=gamma_) for gamma_ in gamma]},
{'sigma_n': sigma_n,
'kernel_params': [dict(sigma=10., gamma=gamma_) for gamma_ in gamma]})
grid_cv = GridSearchCV(model=GPClassifier(algorithm='cg', random_seed=1337, tol=1e-8, cg_tol=1e-7, n_samples=1500),
param_grid=param_grid,
train_test_splitter_params=dict(shuffle=True, random_seed=1337),
n_splits=2,
refit=True,
verbose=True)
print grid_cv.number_of_combinations()
[params for params in grid_cv.gen_params()][:3]
X, y = load_mnist(mode='train', path='data/')
X /= 255.
st = StandardScaler(copy=False, with_mean=True, with_std=False)
X = st.fit_transform(X)
tts = TrainTestSplitter(random_seed=1337, shuffle=True)
indices, _ = tts.split(y, train_ratio=0.02, stratify=True) # 1195 samples
X = X[indices]
y = y[indices]
grid_cv.fit(X, y);
# Training GPClassifier on 1195 samples x 784 features.
# 2-fold CV for each of 171 params combinations == 342 fits ...
# iter: 1/342 +- elapsed: 21.159 sec ...
# iter: 2/342 ++ elapsed: 35.444 sec - mean acc.: 0.1113 +/- 2 * 0.014
# iter: 3/342 +- elapsed: 49.669 sec - best acc.: 0.1113 at {'kernel_params': {'sigma': 1.0, 'gamma': 49.999999999999993}, 'sigma_n': 0.0}
# ...
# ...
# ...
# iter: 340/342 ++convergence is not reached
# elapsed: 16914.8 sec - mean acc.: 0.1046 +/- 2 * 0.001
# iter: 341/342 +-convergence is not reached
# elapsed: 17005.8 sec - best acc.: 0.6686 at {'kernel_params': {'sigma': 0.1, 'gamma': 0.049999999999999989}, 'sigma_n': 0.0}
# iter: 342/342 ++convergence is not reached
# elapsed: 17083.8 sec - mean acc.: 0.1046 +/- 2 * 0.001
df = grid_cv.to_df()
df.to_excel('cv_results/gp_raw_full.xlsx')
df.sort_values(by='mean_score', ascending=False).head(25).to_excel('cv_results/gp_raw_best.xlsx')
"""
Explanation: Approach #1. Raw data
End of explanation
"""
pca_full = load_model('models/pca_full.json')
X, y = load_mnist(mode='train', path='data/')
X /= 255.
# st = StandardScaler(copy=False, with_mean=True, with_std=False)
# X = st.fit_transform(X)
tts = TrainTestSplitter(random_seed=1337, shuffle=True)
indices, _ = tts.split(y, train_ratio=0.02, stratify=True) # 1195 samples
X = X[indices]
y = y[indices]
"""
Explanation: Approach #2. PCA
load data
End of explanation
"""
# for n_components in xrange(5, 151, 5):
for n_components in xrange(12, 25):
gamma = np.array([0.3, 0.6, 1.5, 3.0, 5.1]) / n_components
param_grid = {'sigma_n': [1e-8],
'kernel_params': [dict(sigma=0.1, gamma=gamma_) for gamma_ in gamma]}
grid_cv_params = dict(model=GPClassifier(algorithm='cg', random_seed=1337, tol=1e-8, cg_tol=1e-7, n_samples=1500),
param_grid=param_grid,
train_test_splitter_params=dict(shuffle=True, random_seed=1337),
n_splits=2,
refit=True,
verbose=True)
print "[PCA n_components = {0}]\n\n".format(n_components)
X_current = pca_full.set_params(n_components=n_components, whiten=False).transform(X)
grid_cv = GridSearchCV(**grid_cv_params).fit(X_current, y)
df = grid_cv.to_df().sort_values(by='mean_score', ascending=False)
df.to_excel('cv_results/gp_pca_{0}_{1:.4f}.xlsx'.format(n_components, grid_cv.best_score_))
print "\n\n"
# [PCA n_components = 5]
# Training GPClassifier on 1195 samples x 5 features.
# 2-fold CV for each of 5 params combinations == 10 fits ...
# iter: 1/10 +- elapsed: 11.139 sec ...
# iter: 2/10 ++ elapsed: 22.108 sec - mean acc.: 0.5940 +/- 2 * 0.040
# iter: 3/10 +- elapsed: 32.947 sec - best acc.: 0.5940 at {'kernel_params': {'sigma': 0.1, 'gamma': 0.059999999999999998}, 'sigma_n': 1e-08}
# iter: 4/10 ++ elapsed: 41.796 sec - mean acc.: 0.6384 +/- 2 * 0.034
# iter: 5/10 +- elapsed: 49.648 sec - best acc.: 0.6384 at {'kernel_params': {'sigma': 0.1, 'gamma': 0.12}, 'sigma_n': 1e-08}
# iter: 6/10 ++ elapsed: 56.744 sec - mean acc.: 0.6728 +/- 2 * 0.018
# iter: 7/10 +- elapsed: 63.334 sec - best acc.: 0.6728 at {'kernel_params': {'sigma': 0.1, 'gamma': 0.29999999999999999}, 'sigma_n': 1e-08}
# iter: 8/10 ++ elapsed: 70.164 sec - mean acc.: 0.6410 +/- 2 * 0.012
# iter: 9/10 +- elapsed: 75.789 sec - best acc.: 0.6728 at {'kernel_params': {'sigma': 0.1, 'gamma': 0.29999999999999999}, 'sigma_n': 1e-08}
# iter: 10/10 ++ elapsed: 81.808 sec - mean acc.: 0.5172 +/- 2 * 0.003
# ...
# ...
# ...
# iter: 8/10 ++ elapsed: 80.497 sec - mean acc.: 0.7422 +/- 2 * 0.032
# iter: 9/10 +- elapsed: 85.950 sec - best acc.: 0.7481 at {'kernel_params': {'sigma': 0.1, 'gamma': 0.042857142857142858}, 'sigma_n': 1e-08}
# iter: 10/10 ++ elapsed: 91.393 sec - mean acc.: 0.5288 +/- 2 * 0.012
# [PCA n_components = 40]
# Training GPClassifier on 1195 samples x 40 features.
# 2-fold CV for each of 5 params combinations == 10 fits ...
# iter: 1/10 +- elapsed: 14.542 sec ...
# iter: 2/10 ++ elapsed: 28.153 sec - mean acc.: 0.5832 +/- 2 * 0.016
# iter: 3/10 +- elapsed: 39.689 sec - best acc.: 0.5832 at {'kernel_params': {'sigma': 0.1, 'gamma': 0.0074999999999999997}, 'sigma_n': 1e-08}
"""
Explanation: PCA w/o whitening
End of explanation
"""
n_components = 20
whiten = False
X = pca_full.set_params(n_components=n_components, whiten=whiten).transform(X)
X = StandardScaler(copy=False, with_mean=True, with_std=False).fit_transform(X)
sigma_n = [0., 1e-8, 1e-6, 1e-4, 1e-2]
sigma_f = np.logspace(-2., 1., 6)
gamma = np.linspace(0.04, 0.12, 16, True)
param_grid = [{'sigma_n': sigma_n, 'kernel_params': [dict(sigma=sigma, gamma=gamma_) for gamma_ in gamma]} for sigma in sigma_f]
grid_cv = GridSearchCV(model=GPClassifier(algorithm='cg',
random_seed=1337,
max_iter=200,
tol=1e-8,
cg_tol=1e-7,
n_samples=1500),
param_grid=param_grid,
train_test_splitter_params=dict(shuffle=True, random_seed=1337),
n_splits=2,
refit=True,
verbose=True)
grid_cv.number_of_combinations() # 480
grid_cv.fit(X, y);
# Training GPClassifier on 1195 samples x 20 features.
# 2-fold CV for each of 480 params combinations == 960 fits ...
# iter: 1/960 +- elapsed: 1.409 sec ...
# iter: 2/960 ++ elapsed: 2.614 sec - mean acc.: 0.6368 +/- 2 * 0.023
# iter: 3/960 +- elapsed: 3.875 sec - best acc.: 0.6368 at {'kernel_params': {'sigma': 0.01, 'gamma': 0.040000000000000001}, 'sigma_n': 0.0}
# ...
# ...
# ...
# iter: 958/960 ++convergence is not reached
# elapsed: 9239.23 sec - mean acc.: 0.2006 +/- 2 * 0.075
# iter: 959/960 +-convergence is not reached
# elapsed: 9253.45 sec - best acc.: 0.8677 at {'kernel_params': {'sigma': 0.63095734448019303, 'gamma': 0.082666666666666666}, 'sigma_n': 0.0}
# iter: 960/960 ++convergence is not reached
# elapsed: 9267.46 sec - mean acc.: 0.7169 +/- 2 * 0.094
df = grid_cv.to_df()
# df.to_excel('cv_results/gp_2_full.xlsx')
df.sort_values(by='mean_score', ascending=False).head(64).to_excel('cv_results/gp_2_best.xlsx')
"""
Explanation: more thoroughly
End of explanation
"""
for n_components in xrange(5, 151, 5):
gamma = np.array([0.3, 0.6, 1.5, 3.0, 5.1]) / n_components
param_grid = {'sigma_n': [1e-8],
'kernel_params': [dict(sigma=0.1, gamma=gamma_) for gamma_ in gamma]}
grid_cv_params = dict(model=GPClassifier(algorithm='cg', random_seed=1337, tol=1e-8, cg_tol=1e-7, n_samples=1500),
param_grid=param_grid,
train_test_splitter_params=dict(shuffle=True, random_seed=1337),
n_splits=2,
refit=True,
verbose=True)
print "[PCA n_components = {0}]\n\n".format(n_components)
X_current = pca_full.set_params(n_components=n_components, whiten=True).transform(X)
grid_cv = GridSearchCV(**grid_cv_params).fit(X_current, y)
df = grid_cv.to_df().sort_values(by='mean_score', ascending=False)
df.to_excel('cv_results/gp_pca_whiten_{0}_{1:.4f}.xlsx'.format(n_components, grid_cv.best_score_))
print "\n\n"
# the best is 0.79.. <-- worse
"""
Explanation: PCA whitening
End of explanation
"""
pca_full = load_model('models/pca_full.json')
X, y = load_mnist(mode='train', path='data/')
X /= 255.
tts = TrainTestSplitter(random_seed=1337, shuffle=True)
indices, _ = tts.split(y, train_ratio=0.03, stratify=True) # 1794 samples
X = X[indices]
y = y[indices]
X = pca_full.set_params(n_components=20, whiten=True).transform(X)
X = StandardScaler(copy=False, with_mean=True, with_std=False).fit_transform(X)
z = pca_full.explained_variance_ratio_[:20]
z /= sum(z)
train, test = tts.split(y, train_ratio=0.5, stratify=True)
# for alpha in np.logspace(-6., 2., 9):
# for alpha in np.logspace(-3., 1.2, 9):
# for alpha in np.arange(1.4, 9.8, 0.4):
# for alpha in np.arange(5.4, 6.6, 0.1):
for alpha in np.arange(6.05, 6.15, 0.01):
X_train = X[train] * np.exp(alpha * z)
X_test = X[test] * np.exp(alpha * z)
gp = GPClassifier(algorithm='cg',
sigma_n=1e-8,
kernel_params=dict(sigma=0.1, gamma=0.075),
n_samples=1500,
tol=1e-7,
max_iter=200,
random_seed=1337,
cg_tol=1e-7)
gp.fit(X_train, y[train])
acc = gp.evaluate(X_test, y[test])
print "{0:.4f}, alpha = {1}".format(acc, alpha)
# 0.8122, alpha = 0.01
# 0.8111, alpha = 0.1
# 0.8211, alpha = 1.0
# -----
# 0.8111, alpha = 0.125892541179
# 0.8122, alpha = 0.421696503429
# 0.8244, alpha = 1.41253754462
# 0.8511, alpha = 4.73151258961
# 0.4056, alpha = 15.8489319246
# -----
# 0.8478, alpha = 4.6
# 0.8500, alpha = 5.0
# 0.8433, alpha = 5.4
# 0.8578, alpha = 5.8
# 0.8544, alpha = 6.2
# 0.8500, alpha = 6.6
# ----
# 0.8578, alpha = 5.9
# 0.8578, alpha = 6.0
# 0.8589, alpha = 6.1
# 0.8544, alpha = 6.2
# ----
# 0.8556, alpha = 6.08
# 0.8589, alpha = 6.09
# 0.8589, alpha = 6.1
# 0.8556, alpha = 6.11
"""
Explanation: Approach #3. Exponential decay on the normalized explained variance
End of explanation
"""
pca_full = load_model('models/pca_full.json')
X, y = load_mnist(mode='train', path='data/')
X /= 255.
# st = StandardScaler(copy=False, with_mean=True, with_std=False)
# X = st.fit_transform(X)
tts = TrainTestSplitter(random_seed=1337, shuffle=True)
indices, _ = tts.split(y, train_ratio=0.02, stratify=True) # 1195 samples
X = X[indices]
y = y[indices]
"""
Explanation: more thoroughly
End of explanation
"""
n_components = 20
whiten = True
X = pca_full.set_params(n_components=n_components, whiten=whiten).transform(X)
alpha = 6.1
z = pca_full.explained_variance_ratio_[:20]
z /= sum(z)
X *= np.exp(alpha * z)
X = StandardScaler(copy=False, with_mean=True, with_std=False).fit_transform(X)
sigma_n = [0., 1e-4, 1e-2]
sigma_f = [0.1, 0.3, 0.5, 0.7, 0.9]
gamma = np.linspace(0.08, 0.11, 7, True)
param_grid = [{'sigma_n': sigma_n, 'kernel_params': [dict(sigma=sigma, gamma=gamma_) for gamma_ in gamma]} for sigma in sigma_f]
grid_cv = GridSearchCV(model=GPClassifier(algorithm='cg',
random_seed=1337,
max_iter=200,
tol=1e-8,
cg_tol=1e-7,
n_samples=1500),
param_grid=param_grid,
train_test_splitter_params=dict(shuffle=True, random_seed=1337),
n_splits=2,
refit=True,
verbose=True)
print grid_cv.number_of_combinations() # 105
grid_cv.fit(X, y);
# Training GPClassifier on 1195 samples x 20 features.
# 2-fold CV for each of 105 params combinations == 210 fits ...
# iter: 1/210 +-convergence is not reached
# elapsed: 9.902 sec ...
# iter: 2/210 ++convergence is not reached
# elapsed: 19.138 sec - mean acc.: 0.7798 +/- 2 * 0.030
# iter: 3/210 +-convergence is not reached
# elapsed: 28.945 sec - best acc.: 0.7798 at {'kernel_params': {'sigma': 0.1, 'gamma': 0.080000000000000002}, 'sigma_n': 0.0}
# ...
# ...
# ...
# iter: 208/210 ++convergence is not reached
# elapsed: 2135.44 sec - mean acc.: 0.7606 +/- 2 * 0.022
# iter: 209/210 +-convergence is not reached
# elapsed: 2145.34 sec - best acc.: 0.8702 at {'kernel_params': {'sigma': 0.7, 'gamma': 0.080000000000000002}, 'sigma_n': 0.01}
# iter: 210/210 ++convergence is not reached
# elapsed: 2155.47 sec - mean acc.: 0.7615 +/- 2 * 0.022
df = grid_cv.to_df()
df.to_excel('cv_results/gp_3_full.xlsx')
df.sort_values(by='mean_score', ascending=False).head(64).to_excel('cv_results/gp_3_best.xlsx')
"""
Explanation: apply exp(alpha * z) first, and only after that subtract the mean
End of explanation
"""
n_components = 20
whiten = False
X = pca_full.set_params(n_components=n_components, whiten=whiten).transform(X)
X = StandardScaler(copy=False, with_mean=True, with_std=False).fit_transform(X)
sigma_n = [0., 1e-8, 1e-6]
l = np.logspace(-1., 2., 12)
alpha = np.logspace(0., 2., 5)
param_grid = [{'sigma_n': sigma_n,
'kernel_params': [dict(sigma=0.1, alpha=alpha_, l=l_) for alpha_ in alpha]} for l_ in l]
grid_cv = GridSearchCV(model=GPClassifier(algorithm='cg',
kernel='RationalQuadratic',
random_seed=1337,
max_iter=200,
tol=1e-8,
cg_tol=1e-7,
n_samples=1500),
param_grid=param_grid,
train_test_splitter_params=dict(shuffle=True, random_seed=1337),
n_splits=2,
refit=True,
verbose=True)
grid_cv.number_of_combinations() # 180
grid_cv.fit(X, y);
# Training GPClassifier on 1195 samples x 20 features.
# 2-fold CV for each of 180 params combinations == 360 fits ...
# iter: 1/360 +-convergence is not reached
# elapsed: 8.978 sec ...
# iter: 2/360 ++convergence is not reached
# elapsed: 17.204 sec - mean acc.: 0.1138 +/- 2 * 0.002
# iter: 3/360 +-convergence is not reached
# elapsed: 26.859 sec - best acc.: 0.1138 at {'kernel_params': {'alpha': 1.0, 'sigma': 0.1, 'l': 0.10000000000000001}, 'sigma_n': 0.0}
# ...
# ...
# ...
# iter: 358/360 ++convergence is not reached
# elapsed: 2948.18 sec - mean acc.: 0.1121 +/- 2 * 0.000
# iter: 359/360 +-convergence is not reached
# elapsed: 2959.30 sec - best acc.: 0.8025 at {'kernel_params': {'alpha': 100.0, 'sigma': 0.1, 'l': 2.3101297000831593}, 'sigma_n': 0.0}
# iter: 360/360 ++convergence is not reached
# elapsed: 2971.48 sec - mean acc.: 0.1121 +/- 2 * 0.000
df = grid_cv.to_df()
df.to_excel('cv_results/gp_rq_full.xlsx')
df.sort_values(by='mean_score', ascending=False).head(64).to_excel('cv_results/gp_rq_best.xlsx')
"""
Explanation: Approach #4. RQ Kernel
4.1 find reasonable ranges for params for PCA-20
End of explanation
"""
X = np.load('data/train_feats.npy')
_, y = load_mnist('train', 'data/')
tts = TrainTestSplitter(shuffle=True, random_seed=1337)
indices, _ = tts.split(y, train_ratio=1300./60000., stratify=True)
y = y[indices]
y = one_hot(y)
X = X[indices]
# sigma_n = [0., 1e-4, 1e-2]
sigma_n = [0, 1e-8, 1e-6]
# sigma_f = [0.1, 1., 10.]
# sigma_f = np.logspace(-1., 1., 5)
sigma_f = np.logspace(-0.9, -0.2, 5)
# length_scale = np.logspace(-1., 2., 19)
# gamm = 0.5/length_scale**2
# gamma = np.logspace(-4., -2.1, 19)
gamma = np.logspace(-3.7, -3., 11)
param_grid = [{'sigma_n': sigma_n,
'kernel_params': [dict(sigma=sigma, gamma=gamma_) for gamma_ in gamma]} for sigma in sigma_f]
grid_cv = GridSearchCV(model=GPClassifier(algorithm='cg',
random_seed=1337,
max_iter=200,
tol=1e-8,
cg_tol=1e-7,
n_samples=1500),
param_grid=param_grid,
train_test_splitter_params=dict(shuffle=True, random_seed=1337),
n_splits=2,
refit=True,
verbose=True)
grid_cv.number_of_combinations() # 165
grid_cv.fit(X, y);
# Training GPClassifier on 1295 samples x 128 features.
# 2-fold CV for each of 171 params combinations == 342 fits ...
# iter: 1/342 +- elapsed: 3.584 sec ...
# ...
# ...
# ...
# iter: 226/342 ++ elapsed: 4405.15 sec - mean acc.: 0.1042 +/- 2 * 0.001
# iter: 227/342 +- elapsed: 4432.84 sec - best acc.: 0.9846 at {'kernel_params': {'sigma': 0.1, 'gamma': 0.00050000000000000001}, 'sigma_n': 0.0}
# iter: 228/342 ++ elapsed: 4460.72 sec - mean acc.: 0.1042 +/- 2 * 0.001
# --------------------------------------------------------------
# Training GPClassifier on 1295 samples x 128 features.
# 2-fold CV for each of 285 params combinations == 570 fits ...
# iter: 1/570 +- elapsed: 28.242 sec ...
# iter: 2/570 ++ elapsed: 51.745 sec - mean acc.: 0.9799 +/- 2 * 0.005
# ...
# ...
# ...
# iter: 418/570 ++ elapsed: 9589.25 sec - mean acc.: 0.5370 +/- 2 * 0.119
# iter: 419/570 +- elapsed: 9605.40 sec - best acc.: 0.9861 at {'kernel_params': {'sigma': 0.31622776601683794, 'gamma': 0.00033711476775509616}, 'sigma_n': 0.0}
# iter: 420/570 ++ elapsed: 9620.88 sec - mean acc.: 0.6041 +/- 2 * 0.119
# ---------------------------------------------------------------
# Training GPClassifier on 1096 samples x 128 features.
# 2-fold CV for each of 165 params combinations == 330 fits ...
# iter: 1/330 +-convergence is not reached
# elapsed: 21.402 sec ...
# iter: 2/330 ++convergence is not reached
# elapsed: 41.644 sec - mean acc.: 0.9845 +/- 2 * 0.006
# iter: 3/330 +-convergence is not reached
# elapsed: 59.386 sec - best acc.: 0.9845 at {'kernel_params': {'sigma': 0.12589254117941673, 'gamma': 0.00019952623149688788}, 'sigma_n': 0}
# ...
# ...
# ...
# iter: 328/330 ++convergence is not reached
# elapsed: 7163.06 sec - mean acc.: 0.8219 +/- 2 * 0.129
# iter: 329/330 +-convergence is not reached
# elapsed: 7184.70 sec - best acc.: 0.9899 at {'kernel_params': {'sigma': 0.42169650342858211, 'gamma': 0.00085113803820237679}, 'sigma_n': 0}
# iter: 330/330 ++convergence is not reached
# elapsed: 7208.78 sec - mean acc.: 0.8219 +/- 2 * 0.129
df = grid_cv.to_df()
df.to_excel('cv_results/gp_nn_full.xlsx')
df.sort_values(by='mean_score', ascending=False).to_excel('cv_results/gp_nn_best.xlsx')
"""
Explanation: [discarded] 4.2 PCA components (whiten|x)
[discarded] 4.3 Exponential decay ...
Approach #NN
End of explanation
"""
X = np.load('data/rbm_train.npy')
_, y = load_mnist('train', 'data/')
tts = TrainTestSplitter(shuffle=True, random_seed=1337)
indices, _ = tts.split(y, train_ratio=1100./60000., stratify=True)
X = X[indices]
y = y[indices]
y = one_hot(y)
sigma_n = [0.]
sigma_f = [0.1]
length_scale = np.logspace(-1., 2., 13)
# gamma = np.logspace(-3.7, -3., 11)
gamma = np.logspace(-5., -0., 19)
param_grid = [{'sigma_n': sigma_n,
'kernel_params': [dict(sigma=sigma, gamma=gamma_) for gamma_ in gamma]} for sigma in sigma_f]
grid_cv = GridSearchCV(model=GPClassifier(algorithm='cg',
random_seed=1337,
max_iter=200,
tol=1e-8,
cg_tol=1e-7,
n_samples=1500),
param_grid=param_grid,
train_test_splitter_params=dict(shuffle=True, random_seed=1337),
n_splits=2,
refit=True,
verbose=True)
print grid_cv.number_of_combinations()
grid_cv.fit(X, y)
#
# [*] 0.683... D:
df = grid_cv.to_df()
df.to_excel('cv_results/gp_rbm_full.xlsx')
df.sort_values(by='mean_score', ascending=False).to_excel('cv_results/gp_rbm_best.xlsx')
"""
Explanation: Approach #RBM
End of explanation
"""
_, y_test = load_mnist('test', 'data/')
# knn_pred = np.load('data/knn_pred.npy')
# nn_pred = unhot(np.load('data/nn_pred.npy'))
# logreg_pred = unhot(np.load('data/logreg_pred.npy'))
gp_pred = unhot(np.load('data/gp_pred.npy'))
# C = confusion_matrix(y_test, knn_pred)
# ax = plot_confusion_matrix(C)
# plt.title("Confusion matrix for k-NN model", fontsize=18)
# plt.savefig('confusion_matrix_knn.png', dpi=144)
# C = confusion_matrix(y_test, nn_pred)
# ax = plot_confusion_matrix(C)
# plt.title("Confusion matrix for NN model", fontsize=18)
# plt.savefig('confusion_matrix_nn.png', dpi=144)
# C = confusion_matrix(y_test, logreg_pred)
# ax = plot_confusion_matrix(C)
# plt.title("Confusion matrix for LogReg model", fontsize=18)
# plt.savefig('confusion_matrix_logreg.png', dpi=144)
C = confusion_matrix(y_test, gp_pred)
ax = plot_confusion_matrix(C)
plt.title("Confusion matrix for GP model", fontsize=18)
plt.savefig('confusion_matrix_gp.png', dpi=144)
"""
Explanation: Misc
Plot confusion matrices for final models
End of explanation
"""
|
setiQuest/ML4SETI | results/effsubsee_seti_code_challenge_1stPlace.ipynb | apache-2.0 | # Uncomment and run this one time only
# !pip install http://download.pytorch.org/whl/cu75/torch-0.1.12.post2-cp27-none-linux_x86_64.whl
# !pip install torchvision==0.1.8
# !pip install tabulate
# !pip install --upgrade scikit-learn
# !pip install --upgrade numpy
# !pip install h5py
# !pip install ibmseti
# !pip install tqdm
# !pip install --upgrade pandas
"""
Explanation: ML4SETI Code Challenge Winning Model
This notebook shows you how to run the winning model from the ML4SETI code challenge, a public code challenge issued by IBM and the SETI Institute in the summer of 2017. The challenge was to build the best signal classification model from a set of simulated (and labeled) radio-telescope data files. These time-series simulated measurements, much like the real data acquired by the SETI Institute during observations at the Allen Telescope Array, were converted to spectrograms, represented as 2D images, and used to train various machine-learning models.
The 1st place team, Effsubsee, achieved a classification accuracy of 94.9% using "an averaged ensemble of 5 Wide Residual Networks, trained on different sets of 4(/5) folds, each with a depth of 34 (convolutional layers) and a widening factor of 2." (NB: Effsubsee is $F_{c}$ from the Drake Equation, which represents "The fraction of civilizations that develop a technology that releases detectable signs of their existence into space.")
The code below will install the necessary Python packages, Effsubsee's model, and demonstrate how to use that model to classify a simulated data file from one of the test sets.
<br>
Install Packages
End of explanation
"""
# Uncomment and run this one time only!
# from __future__ import print_function
# import requests
# import shutil
# base_url = 'https://dal.objectstorage.open.softlayer.com/v1/AUTH_cdbef52bdf7a449c96936e1071f0a46b/code_challenge_models/effsubsee'
# for i in range(1,6):
# r = requests.get('{0}/fold{1}/FOLD{1}_BEST_wresnet34x2_batchsize96_checkpoint.pth.tar'.format(base_url, i), stream=True)
# filename = 'effsubsee_FOLD{}_BEST_wresnet34x2_batchsize96_checkpoint.pth.tar'.format(i)
# with open(filename, 'wb') as fout:
# shutil.copyfileobj(r.raw, fout)
# print('saved {}'.format(filename))
# Uncomment and run this once
# !wget -O mean_stddev_primary_full_v3__384t__512f__logmod2-ph.hdf5 https://github.com/sgrvinod/ml4seti-Effsubsee/blob/master/folds/mean_stddev_primary_full_v3__384t__512f__logmod2-ph.hdf5?raw=true
"""
Explanation: <br>
Download Effsubsee's model
Model stored in IBM Object Storage
The parameters for our models have been placed in an IBM Cloud Object Storage service instance. The Access Control Lists for the containers in Object Storage have been set such that the objects in those containers are publicly available.
End of explanation
"""
# Uncomment and run this one time only
# !wget https://dal.objectstorage.open.softlayer.com/v1/AUTH_cdbef52bdf7a449c96936e1071f0a46b/simsignals_v3_zipped/primary_testset_preview_v3.zip
# !unzip -q primary_testset_preview_v3.zip
# !ls
"""
Explanation: <br>
Download the Preview Test Set
End of explanation
"""
import math
from torch import nn
class BasicBlock(nn.Module):
"""
Graph of the Basic Block, as defined in the paper.
This block contains two 3x3 convolutional layers, each with prior Batch Norm and ReLU.
There is an additive residual connection across the block.
If the number of dimensions changes across the block, this residual is a convolutional projection of the input.
Args:
inplanes (int): number of dimensions in the input tensor.
outplanes (int): number of dimensions in the output tensor.
stride (int): stride length for the filter.
dropout (float, fraction): the fraction of neurons to randomly drop/set to zero in-between conv. layers.
"""
def __init__(self, inplanes, outplanes, stride, dropout=0.0):
super(BasicBlock, self).__init__()
self.inplanes = inplanes
self.outplanes = outplanes
self.bn1 = nn.BatchNorm2d(inplanes)
self.relu1 = nn.ReLU(inplace=True)
self.conv1 = nn.Conv2d(inplanes, outplanes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(outplanes)
self.relu2 = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(outplanes, outplanes, kernel_size=3, stride=1, padding=1, bias=False)
self.dropout = dropout
if self.inplanes != self.outplanes:
self.projection = nn.Conv2d(inplanes, outplanes, kernel_size=1, stride=stride, padding=0, bias=False)
else:
self.projection = None
def forward(self, x):
out = self.bn1(x)
out = self.relu1(out)
if self.inplanes != self.outplanes:
residual = self.projection(out)
else:
residual = x
out = self.conv1(out)
out = self.bn2(out)
out = self.relu2(out)
if self.dropout > 0.:
out = nn.functional.dropout(out, p=self.dropout, training=self.training)
out = self.conv2(out)
out += residual
return out
class WideResNet(nn.Module):
"""
Graph of the Wide Residual Network, as defined in the paper.
This network contains 4 convolutional blocks:
The first is a single 3x3 Convolution, increasing dimensions from 2 (log(amplitude^2), phase) to 16.
The second is a sequence of Basic Blocks, 16 dimensions -> 16*k
The third is a sequence of Basic Blocks, 16*k dimensions -> 32*k
The fourth is a sequence of Basic Blocks, 32*k dimensions -> 64*k
These convolutional layers are followed by Batch Norm, ReLU, Average Pool, and finally a Fully Connected Layer
to perform the classification.
Args:
n (int): number of single convolutional layers in the entire network, 'n' in the paper.
k (int): widening factor for each succeeding convolutional layer, 'k' in the paper.
block (nn.module): BasicBlock.
dropout (float, fraction): the fraction of neurons to randomly drop/set to zero inside the blocks.
"""
def __init__(self, n, k, block=BasicBlock, dropout=0.0):
super(WideResNet, self).__init__()
if (n - 4) % 6 != 0:
raise ValueError("Invalid depth! Depth must be (6 * n_blocks + 4).")
n_blocks = (n - 4) / 6
self.conv_block1 = nn.Conv2d(2, 16, kernel_size=3, stride=1, padding=1, bias=False)
self.conv_block2 = self._make_layer(block, n_blocks, 16, 16 * k, 2, dropout)
self.conv_block3 = self._make_layer(block, n_blocks, 16 * k, 32 * k, 2, dropout)
self.conv_block4 = self._make_layer(block, n_blocks, 32 * k, 64 * k, 2, dropout)
self.bn1 = nn.BatchNorm2d(64 * k)
self.relu = nn.ReLU(inplace=True)
self.fc = nn.Linear(64 * k * 6 * 8, 7)
for m in self.modules():
if isinstance(m, nn.Conv2d):
n_weights = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n_weights))
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.Linear):
m.bias.data.zero_()
def _make_layer(self, block, n_blocks, inplanes, outplanes, stride, dropout):
"""
Graph of a Convolutional block layer (conv_block2/conv_block3/conv_block4), as defined in the paper.
This graph assembles a number of blocks (BasicBlock) in sequence.
Args:
block (nn.module): BasicBlock or ResidualBlock.
inplanes (int): number of dimensions in the input tensor.
outplanes (int): number of dimensions in the output tensor.
stride (int): stride length for the filter.
dropout (float, fraction): the fraction of neurons to randomly drop/set to zero in-between conv. layers.
"""
layers = []
for i in range(n_blocks):
if i == 0:
layers.append(block(inplanes, outplanes, stride, dropout))
else:
layers.append(block(outplanes, outplanes, 1, dropout))
return nn.Sequential(*layers)
def forward(self, x):
out = self.conv_block1(x)
out = self.conv_block2(out)
out = self.conv_block3(out)
out = self.conv_block4(out)
out = self.bn1(out)
out = self.relu(out)
out = nn.functional.avg_pool2d(out, 8)
out = out.view(out.size(0), -1)
return self.fc(out)
def wresnet34x2():
model = WideResNet(n=34, k=2, block=BasicBlock, dropout=0.3)
return model
from __future__ import print_function
import argparse
import os
import time
import torch
import torchvision.transforms as transforms
import pandas as pd
import ibmseti
import numpy as np
import h5py
def normalizeSimFile(normalizeData, simfile):
# Load the Normalizer function
h = h5py.File(normalizeData, 'r')
mean = torch.FloatTensor(h['mean'][:])
mean = mean.permute(2, 0, 1)
std_dev = torch.FloatTensor(h['std_dev'][:])
std_dev = std_dev.permute(2, 0, 1)
h.close()
normalize = transforms.Normalize(mean=mean,
std=std_dev)
# Load simulation data
time_freq_resolution=(384, 512)
aca = ibmseti.compamp.SimCompamp(open(simfile, 'rb').read())
complex_data = aca.complex_data()
complex_data = complex_data.reshape(time_freq_resolution[0], time_freq_resolution[1])
complex_data = complex_data * np.hanning(complex_data.shape[1])
cpfft = np.fft.fftshift(np.fft.fft(complex_data), 1)
spectrogram = np.abs(cpfft)
features = np.stack((np.log(spectrogram ** 2),
np.arctan(cpfft.imag / cpfft.real)), -1)
# create FloatTensor, permute to proper dimensional order, and normalize
data = torch.FloatTensor(features)
data = data.permute(2, 0, 1)
data = normalize(data)
# The model expects a 4D tensor
s = data.size()
data = data.contiguous().view(1, s[0], s[1], s[2])
input_var = torch.autograd.Variable(data, volatile=True)
return input_var
def singleProbs(model, input_var):
"""
"""
model.eval()
softmax = torch.nn.Softmax()
softmax.zero_grad()
output = model(input_var)
probs = softmax(output).data.view(7).tolist()
return probs
"""
Explanation: <br>
Restart Your Kernel
After you've pip installed the packages above, you'll need to restart your kernel.
Comment out the code in the cells above (within a cell, drag to select the lines of code, then press Command+'/' or Ctrl+'/' to toggle comments on entire blocks of code)
In the menu above select Kernel -> Restart.
Run the cells below
Adapted from https://github.com/sgrvinod/ml4seti-Effsubsee
This code, for now, is found in https://github.com/gadamc/ml4seti-Effsubsee/
End of explanation
"""
#!ls primary_testset_preview_v3/*
simfile = 'primary_testset_preview_v3/00b3b8fdb14ce41f341dbe251f476093.dat'
"""
Explanation: Select a simulation file to test
You can change the simfile to any of the ~2500 files you choose in the primary_testset_preview_v3 folder
End of explanation
"""
allFolds = []
def loadFoldParams(modelcheckpoint):
model = wresnet34x2().cpu()
if os.path.isfile(modelcheckpoint):
print("=> Loading checkpoint '{}'".format(modelcheckpoint))
checkpoint = torch.load(modelcheckpoint, map_location=lambda storage, loc: storage)
best_acc = checkpoint['best_acc']
print("This model had an accuracy of %.2f on the validation set." % (best_acc,))
keys = checkpoint['state_dict'].keys()
for old_key in keys:
new_key = old_key.replace('module.', '')
checkpoint['state_dict'][new_key] = checkpoint['state_dict'].pop(old_key)
model.load_state_dict(checkpoint['state_dict'])
print("=> Loaded checkpoint '{}' (epoch {})"
.format(modelcheckpoint, checkpoint['epoch']))
else:
print("=> No model checkpoint found. Exiting")
return
allFolds.append(model)
def lf():
for i in range(1,6):
loadFoldParams('effsubsee_FOLD{}_BEST_wresnet34x2_batchsize96_checkpoint.pth.tar'.format(i))
%time lf()
assert len(allFolds) == 5
# normalize the simulation data file
normalizer = 'mean_stddev_primary_full_v3__384t__512f__logmod2-ph.hdf5'
%time input_var = normalizeSimFile(normalizer, simfile)
"""
Explanation: Load the parameters for the models
End of explanation
"""
# calculate probabilities
def runAllModels(aSimFile):
probs = np.zeros(7)
for mf in allFolds:
probs += singleProbs(mf, input_var)
probs = probs/float(len(allFolds))
return probs
%time probs = runAllModels(simfile)
"""
Explanation: <br>
Calculate the class probabilities as an average of the probabilities returned by the 5 different networks
End of explanation
"""
print('final class probabilities')
print(probs)
class_list = ['brightpixel', 'narrowband', 'narrowbanddrd', 'noise', 'squarepulsednarrowband', 'squiggle', 'squigglesquarepulsednarrowband']
print('signal classification')
predicted_signal_class = class_list[probs.argmax()]
print(predicted_signal_class)
"""
Explanation: <br>
Display class probabilities and most-likely signal class
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
aca = ibmseti.compamp.SimCompamp(open(simfile,'rb').read())
spectrogram = aca.get_spectrogram()
fig, ax = plt.subplots(figsize=(20, 10))
ax.imshow(np.log(spectrogram), aspect = 0.5*float(spectrogram.shape[1]) / spectrogram.shape[0], cmap='gray')
"""
Explanation: <br>
Confirm prediction
We will display the signal as a spectrogram to confirm the predicted class. Additionally, the signal classes for the preview test set from the code challenge are available in the Github repository, allowing you to explicitly check the prediction against the actual signal class. (The classes for the final test set are not published so that teams may submit a scorecard to the final test set scoreboard even though the code challenge has officially ended.)
End of explanation
"""
import pandas as pd
preview_test_set_pd = pd.read_csv('https://github.com/setiQuest/ML4SETI/raw/master/results/private_list_primary_v3_testset_preview_uuid_class_29june_2017.csv', index_col=None)
expected_signal_class = preview_test_set_pd[preview_test_set_pd.UUID == simfile.split('/')[-1].rstrip('.dat')].SIGNAL_CLASSIFICATION.values[0]
assert predicted_signal_class == expected_signal_class
print(expected_signal_class)
"""
Explanation: <br>
Check the test set class from the published list of signal classes.
End of explanation
"""
|
metpy/MetPy | dev/_downloads/bb9caa5586d62e19ca46e30c02d29b43/Station_Plot.ipynb | bsd-3-clause | import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
from metpy.calc import reduce_point_density
from metpy.cbook import get_test_data
from metpy.io import metar
from metpy.plots import add_metpy_logo, current_weather, sky_cover, StationPlot
"""
Explanation: Station Plot
Make a station plot, complete with sky cover and weather symbols.
The station plot itself is pretty straightforward, but there is a bit of code to perform the
data-wrangling (hopefully that situation will improve in the future). Certainly, if you have
existing point data in a format you can work with trivially, the station plot will be simple.
End of explanation
"""
data = metar.parse_metar_file(get_test_data('metar_20190701_1200.txt', as_file_obj=False))
# Drop rows with missing winds
data = data.dropna(how='any', subset=['wind_direction', 'wind_speed'])
"""
Explanation: The setup
First read in the data. We use the metar reader because it simplifies a lot of tasks,
like parsing the raw report text and assembling a pandas dataframe
https://thredds-test.unidata.ucar.edu/thredds/catalog/noaaport/text/metar/catalog.html
End of explanation
"""
# Set up the map projection
proj = ccrs.LambertConformal(central_longitude=-95, central_latitude=35,
standard_parallels=[35])
# Use the Cartopy map projection to transform station locations to the map and
# then refine the number of stations plotted by setting a 300km radius
point_locs = proj.transform_points(ccrs.PlateCarree(), data['longitude'].values,
data['latitude'].values)
data = data[reduce_point_density(point_locs, 300000.)]
"""
Explanation: This sample data has way too many stations to plot all of them. The number
of stations plotted will be reduced using reduce_point_density.
End of explanation
"""
# Change the DPI of the resulting figure. Higher DPI drastically improves the
# look of the text rendering.
plt.rcParams['savefig.dpi'] = 255
# Create the figure and an axes set to the projection.
fig = plt.figure(figsize=(20, 10))
add_metpy_logo(fig, 1100, 300, size='large')
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable.
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.LAKES)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.STATES)
ax.add_feature(cfeature.BORDERS)
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'].values, data['latitude'].values,
clip_on=True, transform=ccrs.PlateCarree(), fontsize=12)
# Plot the temperature and dew point to the upper and lower left, respectively, of
# the center point. Each one uses a different color.
stationplot.plot_parameter('NW', data['air_temperature'].values, color='red')
stationplot.plot_parameter('SW', data['dew_point_temperature'].values,
color='darkgreen')
# A more complex example uses a custom formatter to control how the sea-level pressure
# values are plotted. This uses the standard trailing 3-digits of the pressure value
# in tenths of millibars.
stationplot.plot_parameter('NE', data['air_pressure_at_sea_level'].values,
formatter=lambda v: format(10 * v, '.0f')[-3:])
# Plot the cloud cover symbols in the center location. This uses the codes made above and
# uses the `sky_cover` mapper to convert these values to font codes for the
# weather symbol font.
stationplot.plot_symbol('C', data['cloud_coverage'].values, sky_cover)
# Same as above, but this time plot current weather to the left of center, using the
# `current_weather` mapper to convert symbols to the right glyphs.
stationplot.plot_symbol('W', data['current_wx1_symbol'].values, current_weather)
# Add wind barbs
stationplot.plot_barb(data['eastward_wind'].values, data['northward_wind'].values)
# Also plot the actual text of the station id. Instead of cardinal directions,
# plot further out by specifying a location of 2 increments in x and 0 in y.
stationplot.plot_text((2, 0), data['station_id'].values)
plt.show()
"""
Explanation: The payoff
End of explanation
"""
|
adit-chandra/tensorflow | tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
! pip uninstall -y tensorflow
! pip install -U tf-nightly
import tensorflow as tf
tf.enable_eager_execution()
import numpy as np
tf.logging.set_verbosity(tf.logging.DEBUG)
! git clone --depth 1 https://github.com/tensorflow/models
tf.lite.constants.FLOAT16
import sys
import os
if sys.version_info.major >= 3:
import pathlib
else:
import pathlib2 as pathlib
# Add `models` to the python path.
models_path = os.path.join(os.getcwd(), "models")
sys.path.append(models_path)
"""
Explanation: Post-training float16 quantization
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_float16_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Overview
TensorFlow Lite now supports
converting weights to 16-bit floating point values during model conversion from TensorFlow to TensorFlow Lite's flat buffer format. This results in a 2x reduction in model size. Some hardware, like GPUs, can compute natively in this reduced precision arithmetic, realizing a speedup over traditional floating point execution. The Tensorflow Lite GPU delegate can be configured to run in this way. However, a model converted to float16 weights can still run on the CPU without additional modification: the float16 weights are upsampled to float32 prior to the first inference. This permits a significant reduction in model size in exchange for a minimal impact on latency and accuracy.
In this tutorial, you train an MNIST model from scratch, check its accuracy in TensorFlow, and then convert the saved model into a Tensorflow Lite flatbuffer
with float16 quantization. Finally, check the
accuracy of the converted model and compare it to the original saved model. The training script, mnist.py, is available from the
TensorFlow official MNIST tutorial.
Build an MNIST model
Setup
End of explanation
"""
saved_models_root = "/tmp/mnist_saved_model"
# The above path addition is not visible to subprocesses, add the path for the subprocess as well.
!PYTHONPATH={models_path} python models/official/mnist/mnist.py --train_epochs=1 --export_dir {saved_models_root} --data_format=channels_last
"""
Explanation: Train and export the model
End of explanation
"""
saved_model_dir = str(sorted(pathlib.Path(saved_models_root).glob("*"))[-1])
saved_model_dir
"""
Explanation: For the example, you trained the model for just a single epoch, so it only trains to ~96% accuracy.
Convert to a TensorFlow Lite model
The SavedModel directory is named with a timestamp. Select the most recent one:
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
"""
Explanation: Using the Python TFLiteConverter, the saved model can be converted into a TensorFlow Lite model.
First load the model using the TFLiteConverter:
End of explanation
"""
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
"""
Explanation: Write it out to a .tflite file:
End of explanation
"""
tf.logging.set_verbosity(tf.logging.INFO)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.lite.constants.FLOAT16]
"""
Explanation: To instead quantize the model to float16 on export, first set the optimizations flag to use default optimizations. Then specify that float16 is the supported type on the target platform:
End of explanation
"""
tflite_fp16_model = converter.convert()
tflite_model_fp16_file = tflite_models_dir/"mnist_model_quant_f16.tflite"
tflite_model_fp16_file.write_bytes(tflite_fp16_model)
"""
Explanation: Finally, convert the model like usual. Note that by default the converted model will still use float inputs and outputs, for invocation convenience.
End of explanation
"""
!ls -lh {tflite_models_dir}
"""
Explanation: Note how the resulting file is approximately 1/2 the size.
End of explanation
"""
_, mnist_test = tf.keras.datasets.mnist.load_data()
images, labels = tf.cast(mnist_test[0], tf.float32)/255.0, mnist_test[1]
mnist_ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(1)
"""
Explanation: Run the TensorFlow Lite models
Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter.
Load the test data
First, let's load the MNIST test data to feed to the model:
End of explanation
"""
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
interpreter_fp16 = tf.lite.Interpreter(model_path=str(tflite_model_fp16_file))
interpreter_fp16.allocate_tensors()
"""
Explanation: Load the model into the interpreters
End of explanation
"""
for img, label in mnist_ds:
break
interpreter.set_tensor(interpreter.get_input_details()[0]["index"], img)
interpreter.invoke()
predictions = interpreter.get_tensor(
interpreter.get_output_details()[0]["index"])
import matplotlib.pylab as plt
plt.imshow(img[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(label[0].numpy()),
predict=str(predictions[0])))
plt.grid(False)
interpreter_fp16.set_tensor(
interpreter_fp16.get_input_details()[0]["index"], img)
interpreter_fp16.invoke()
predictions = interpreter_fp16.get_tensor(
interpreter_fp16.get_output_details()[0]["index"])
plt.imshow(img[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(label[0].numpy()),
predict=str(predictions[0])))
plt.grid(False)
"""
Explanation: Test the models on one image
End of explanation
"""
def eval_model(interpreter, mnist_ds):
total_seen = 0
num_correct = 0
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
for img, label in mnist_ds:
total_seen += 1
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
if predictions == label.numpy():
num_correct += 1
if total_seen % 500 == 0:
print("Accuracy after %i images: %f" %
(total_seen, float(num_correct) / float(total_seen)))
return float(num_correct) / float(total_seen)
# Create smaller dataset for demonstration purposes
mnist_ds_demo = mnist_ds.take(2000)
print(eval_model(interpreter, mnist_ds_demo))
"""
Explanation: Evaluate the models
End of explanation
"""
# NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite
# doesn't have super optimized server CPU kernels. For this reason this may be
# slower than the above float interpreter. But for mobile CPUs, considerable
# speedup can be observed.
print(eval_model(interpreter_fp16, mnist_ds_demo))
"""
Explanation: Repeat the evaluation on the float16 quantized model to obtain:
End of explanation
"""
!pip install --upgrade --user systemml
!pip show systemml
"""
Explanation: Linear Regression Algorithms using Apache SystemML
Table of Content:
- Install SystemML using pip
- Example 1: Implement a simple 'Hello World' program in SystemML
- Example 2: Matrix Multiplication
- Load diabetes dataset from scikit-learn for the example 3
- Example 3: Implement three different algorithms to train linear regression model
- Algorithm 1: Linear Regression - Direct Solve (no regularization)
- Algorithm 2: Linear Regression - Batch Gradient Descent (no regularization)
- Algorithm 3: Linear Regression - Conjugate Gradient (no regularization)
- Example 4: Invoke existing SystemML algorithm script LinearRegDS.dml using MLContext API
- Example 5: Invoke existing SystemML algorithm using scikit-learn/SparkML pipeline like API
- Uninstall/Clean up SystemML Python package and jar file
Install SystemML using pip <a class="anchor" id="bullet1"></a>
For more details, please see the install guide.
End of explanation
"""
from systemml import MLContext, dml, dmlFromResource
ml = MLContext(sc)
print("Spark Version:", sc.version)
print("SystemML Version:", ml.version())
print("SystemML Built-Time:", ml.buildTime())
# Step 1: Write the DML script
script = """
print("Hello World!");
"""
# Step 2: Create a Python DML object
script = dml(script)
# Step 3: Execute it using MLContext API
ml.execute(script)
"""
Explanation: Example 1: Implement a simple 'Hello World' program in SystemML <a class="anchor" id="bullet2"></a>
First import the classes necessary to implement the 'Hello World' program.
The MLContext API offers a programmatic interface for interacting with SystemML from Spark using languages such as Scala, Java, and Python. As a result, it offers a convenient way to interact with SystemML from the Spark Shell and from Notebooks such as Jupyter and Zeppelin. Please refer to the documentation for more detail on the MLContext API.
As a sidenote, here are alternative ways by which you can invoke SystemML (not covered in this notebook):
- Command-line invocation using either spark-submit or hadoop.
- Using the JMLC API.
End of explanation
"""
# Step 1: Write the DML script
script = """
s = "Hello World!";
"""
# Step 2: Create a Python DML object
script = dml(script).output('s')
# Step 3: Execute it using MLContext API
s = ml.execute(script).get('s')
print(s)
"""
Explanation: Now let's implement a slightly more complicated 'Hello World' program where we initialize a string variable to 'Hello World!' and print it using Python. Note: we first register the output variable in the dml object (in step 2) and then fetch it after execution (in step 3).
End of explanation
"""
# Step 1: Write the DML script
script = """
# The number of rows is passed externally by the user via 'nr'
X = rand(rows=nr, cols=1000, sparsity=0.5)
A = t(X) %*% X
s = sum(A)
"""
# Step 2: Create a Python DML object
script = dml(script).input(nr=1e5).output('s')
# Step 3: Execute it using MLContext API
s = ml.execute(script).get('s')
print(s)
"""
Explanation: Example 2: Matrix Multiplication <a class="anchor" id="bullet3"></a>
Let's write a script to generate a random matrix, perform matrix multiplication, and compute the sum of the output.
End of explanation
"""
import numpy as np
npMatrix = np.random.rand(1000, 1000)
# Step 1: Write the DML script
script = """
A = t(X) %*% X
s = sum(A)
"""
# Step 2: Create a Python DML object
script = dml(script).input(X=npMatrix).output('s')
# Step 3: Execute it using MLContext API
s = ml.execute(script).get('s')
print(s)
"""
Explanation: Now, let's generate a random matrix in NumPy and pass it to SystemML.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets
plt.switch_backend('agg')
%matplotlib inline
diabetes = datasets.load_diabetes()
diabetes_X = diabetes.data[:, np.newaxis, 2]
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]
diabetes_y_train = diabetes.target[:-20].reshape(-1,1)
diabetes_y_test = diabetes.target[-20:].reshape(-1,1)
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
"""
Explanation: Load diabetes dataset from scikit-learn for the example 3 <a class="anchor" id="bullet4"></a>
End of explanation
"""
# Step 1: Write the DML script
script = """
# add constant feature to X to model intercept
X = cbind(X, matrix(1, rows=nrow(X), cols=1))
A = t(X) %*% X
b = t(X) %*% y
w = solve(A, b)
bias = as.scalar(w[nrow(w),1])
w = w[1:nrow(w)-1,]
"""
# Step 2: Create a Python DML object
script = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')
# Step 3: Execute it using MLContext API
w, bias = ml.execute(script).get('w','bias')
w = w.toNumPy()
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='blue', linestyle ='dotted')
"""
Explanation: Example 3: Implement three different algorithms to train linear regression model
Linear regression models the relationship between one numerical response variable and one or more explanatory (feature) variables by fitting a linear equation to observed data. The feature vectors are provided as a matrix $X$ and the observed response values are provided as a 1-column matrix $y$.
A linear regression line has an equation of the form $y = Xw$.
Algorithm 1: Linear Regression - Direct Solve (no regularization) <a class="anchor" id="example3algo1"></a>
Least squares formulation
The least squares method calculates the best-fitting line for the observed data by minimizing the sum of the squares of the difference between the predicted response $Xw$ and the actual response $y$.
$w^* = argmin_w ||Xw-y||^2 \\
\;\;\; = argmin_w (y - Xw)'(y - Xw) \\
\;\;\; = argmin_w \dfrac{w'(X'X)w}{2} - w'(X'y)$
(dropping the constant $y'y$ term, which does not affect the minimizer).
To find the optimal parameter $w$, we set the gradient $dw = (X'X)w - (X'y)$ to 0.
$(X'X)w - (X'y) = 0 \\
w = (X'X)^{-1}(X'y) \\
\;\; = solve(X'X, X'y)$
End of explanation
"""
# Step 1: Write the DML script
script = """
# add constant feature to X to model intercepts
X = cbind(X, matrix(1, rows=nrow(X), cols=1))
max_iter = 100
w = matrix(0, rows=ncol(X), cols=1)
for(i in 1:max_iter){
XtX = t(X) %*% X
dw = XtX %*%w - t(X) %*% y
alpha = -(t(dw) %*% dw) / (t(dw) %*% XtX %*% dw)
w = w + dw*alpha
}
bias = as.scalar(w[nrow(w),1])
w = w[1:nrow(w)-1,]
"""
# Step 2: Create a Python DML object
script = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')
# Step 3: Execute it using MLContext API
w, bias = ml.execute(script).get('w','bias')
w = w.toNumPy()
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed')
"""
Explanation: Algorithm 2: Linear Regression - Batch Gradient Descent (no regularization) <a class="anchor" id="example3algo2"></a>
Algorithm
Step 1: Start with an initial point
while(not converged) {
Step 2: Compute gradient dw.
Step 3: Compute stepsize alpha.
Step 4: Update: w_new = w_old + alpha*dw
}
Gradient formula
dw = r = (X'X)w - (X'y)
Step size formula
Find the step size alpha that minimizes f(w + alpha*r)
alpha = -(r'r)/(r'X'Xr)
End of explanation
"""
# Step 1: Write the DML script
script = """
# add constant feature to X to model intercepts
X = cbind(X, matrix(1, rows=nrow(X), cols=1))
m = ncol(X); i = 1;
max_iter = 20;
w = matrix (0, rows = m, cols = 1); # initialize weights to 0
dw = - t(X) %*% y; p = - dw; # dw = (X'X)w - (X'y)
norm_r2 = sum (dw ^ 2);
for(i in 1:max_iter) {
q = t(X) %*% (X %*% p)
alpha = norm_r2 / sum (p * q); # Minimizes f(w - alpha*r)
w = w + alpha * p; # update weights
dw = dw + alpha * q;
old_norm_r2 = norm_r2; norm_r2 = sum (dw ^ 2);
p = -dw + (norm_r2 / old_norm_r2) * p; # next direction - conjugacy to previous direction
i = i + 1;
}
bias = as.scalar(w[nrow(w),1])
w = w[1:nrow(w)-1,]
"""
# Step 2: Create a Python DML object
script = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')
# Step 3: Execute it using MLContext API
w, bias = ml.execute(script).get('w','bias')
w = w.toNumPy()
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed')
"""
Explanation: Algorithm 3: Linear Regression - Conjugate Gradient (no regularization) <a class="anchor" id="example3algo3"></a>
Problem with gradient descent: Takes very similar directions many times
Solution: Enforce conjugacy
Step 1: Start with an initial point
while(not converged) {
Step 2: Compute gradient dw.
Step 3: Compute stepsize alpha.
Step 4: Compute next direction p by enforcing conjugacy with previous direction.
Step 5: Update: w_new = w_old + alpha*p
}
End of explanation
"""
# Step 1: No need to write a DML script here. But, keeping it as a placeholder for consistency :)
# Step 2: Create a Python DML object
script = dmlFromResource('scripts/algorithms/LinearRegDS.dml')
script = script.input(X=diabetes_X_train, y=diabetes_y_train).input('$icpt',1.0).output('beta_out')
# Step 3: Execute it using MLContext API
w = ml.execute(script).get('beta_out')
w = w.toNumPy()
bias = w[1]
w = w[0]
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed')
"""
Explanation: Example 4: Invoke existing SystemML algorithm script LinearRegDS.dml using MLContext API <a class="anchor" id="example4"></a>
SystemML ships with several pre-implemented algorithms that can be invoked directly. Please refer to the algorithm reference manual for usage.
End of explanation
"""
# Step 1: No need to write a DML script here. But, keeping it as a placeholder for consistency :)
# Step 2: No need to create a Python DML object. But, keeping it as a placeholder for consistency :)
# Step 3: Execute Linear Regression using the mllearn API
from systemml.mllearn import LinearRegression
regr = LinearRegression(spark)
# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)
# Use the trained model to perform prediction
predictions = regr.predict(diabetes_X_test)
%matplotlib inline
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, predictions, color='black')
"""
Explanation: Example 5: Invoke existing SystemML algorithm using scikit-learn/SparkML pipeline like API <a class="anchor" id="example5"></a>
mllearn API allows a Python programmer to invoke SystemML's algorithms using scikit-learn like API as well as Spark's MLPipeline API.
End of explanation
"""
import quail
%matplotlib inline
egg = quail.load_example_data()
"""
Explanation: Basic analyzing and plotting
This tutorial will go over the basics of analyzing eggs, the primary data structure used in quail. To learn about how an egg is set up, see the egg tutorial.
An egg is made up of (at minimum) the stimuli presented to a subject and the stimuli recalled by the subject. With these two components, we can perform a number of analyses:
Recall Accuracy - the proportion of stimuli presented that were later recalled
Serial Position Curve - recall accuracy as a function of the encoding position of the stimulus
Probability of First Recall - the probability that a stimulus will be recalled first as a function of its encoding position
Lag-CRP - given the recall of word n, the probability of recalling stimuli at neighboring positions (n+/-1, 2, 3 etc).
Temporal Clustering - a measure of recall clustering by temporal proximity during encoding
If we have a set of features for the stimuli, we can also compute a Memory Fingerprint, which is an estimate of how a subject clusters their recall responses with respect to features of a stimulus (see the fingerprint tutorial for more on this).
Let's get to analyzing some eggs. First, we'll load in some example data:
End of explanation
"""
egg.get_pres_items().head()
"""
Explanation: This dataset comprises 30 subjects, who each performed 8 study/test blocks of 16 words each. Here are some of the presented words:
End of explanation
"""
egg.get_rec_items().head()
"""
Explanation: and some of the recalled words:
End of explanation
"""
acc = egg.analyze('accuracy')
acc.get_data().head()
"""
Explanation: We can start with the simplest analysis - recall accuracy - which is just the proportion of presented stimuli that were later recalled. To compute accuracy, simply call the analyze method with the analysis keyword argument set to accuracy:
Recall Accuracy
End of explanation
"""
accuracy_avg = egg.analyze('accuracy', listgroup=['average']*8)
accuracy_avg.get_data().head()
"""
Explanation: The result is a FriedEgg data object. The accuracy data can be retrieved using the get_data method, which returns a multi-index Pandas DataFrame where the first-level index is the subject identifier and the second-level index is the list number. By default, each list is analyzed separately. However, you can easily return the average over lists using the listgroup keyword argument:
End of explanation
"""
accuracy_split = egg.analyze('accuracy', listgroup=['First Half']*4+['Second Half']*4)
accuracy_split.get_data().head()
"""
Explanation: Now, the result is a single value for each subject representing the average accuracy across the 16 lists. The listgroup kwarg can also be used to do some fancier groupings, like splitting the data into the first and second half of the experiment:
End of explanation
"""
accuracy_split.plot()
"""
Explanation: These analysis results can be passed directly into the plot function like so:
End of explanation
"""
spc = egg.analyze('spc', listgroup=['average']*8)
spc.get_data().head()
"""
Explanation: For more details on plotting, see the advanced plotting tutorial. Next, let's take a look at the serial position curve analysis. As stated above, the serial position curve (or spc) computes recall accuracy as a function of the encoding position of the stimulus. To use it, use the same analyze method illustrated above, but set the analysis kwarg to spc. Let's also average across lists within subject:
Serial Position Curve
End of explanation
"""
spc.plot(ylim=[0, 1])
"""
Explanation: The result is a df where each row is a subject and each column is the encoding position of the word. To plot, simply pass the result of the analysis function to the plot function:
End of explanation
"""
pfr = egg.analyze('pfr', listgroup=['average']*8)
pfr.get_data().head()
"""
Explanation: Probability of First Recall
The next analysis we'll take a look at is the probability of first recall, which is the probability that a word will be recalled first as a function of its encoding position. To compute this, call the analyze method with the analysis kwarg set to pfr. Again, we'll average over lists:
End of explanation
"""
pfr.plot()
"""
Explanation: This df is set up just like the serial position curve. To plot:
End of explanation
"""
lagcrp = egg.analyze('lagcrp', listgroup=['average']*8)
lagcrp.get_data().head()
"""
Explanation: Lag-CRP
The next analysis to consider is the lag-CRP which, given the recall of the word at encoding position n, returns the probability of recalling words at neighboring positions (n+/-1, 2, 3, etc.).
End of explanation
"""
lagcrp.plot()
"""
Explanation: Unlike the previous two analyses, this analysis returns a df where the number of columns is double the length of the lists. To view the results:
End of explanation
"""
temporal = egg.analyze('temporal', listgroup=['First Half']*4+['Second Half']*4)
temporal.plot(plot_style='violin', ylim=[0,1])
"""
Explanation: Temporal clustering
Another way to evaluate temporal clustering is to measure the temporal distance of each transition made with respect to where on a list the subject could have transitioned. This 'temporal clustering score' is a good summary of how strongly participants are clustering their responses according to temporal proximity during encoding.
End of explanation
"""
egg.feature_names
"""
Explanation: Memory Fingerprint
Last but not least is the memory fingerprint analysis. For a detailed treatment of this analysis, see the fingerprint tutorial.
As described in the fingerprint tutorial, the features data structure is used to estimate how subjects cluster their recall responses with respect to the features of the encoded stimuli. Briefly, these estimates are derived by computing the similarity of neighboring recall words along each feature dimension. For example, if you recall "dog", and then the next word you recall is "cat", your clustering by category score would increase because the two recalled words are in the same category. Similarly, if after you recall "cat" you recall the word "can", your clustering by starting letter score would increase, since both words share the first letter "c". This logic can be extended to any number of feature dimensions.
Here is a glimpse of the features df:
End of explanation
"""
fingerprint = egg.analyze('fingerprint', listgroup=['average']*8)
fingerprint.get_data().head()
"""
Explanation: Like the other analyses, computing the memory fingerprint can be done using the analyze method with the analysis kwarg set to fingerprint:
End of explanation
"""
order=sorted(egg.feature_names)
fingerprint.plot(order=order, ylim=[0, 1])
"""
Explanation: The result of this analysis is a df, where each row is a subject's fingerprint and each column is a feature dimensions. The values represent a subjects tendency to cluster their recall responses along a particular feature dimensions. They are probability values, and thus, greater values indicate more clustering along that feature dimension. To plot, simply pass the result to the plot function:
End of explanation
"""
# warning: this can take a little while. Setting parallel=True will help speed up the permutation computation
# fingerprint = quail.analyze(egg, analysis='fingerprint', listgroup=['average']*8, permute=True, n_perms=100)
# ax = quail.plot(fingerprint, ylim=[0,1.2])
"""
Explanation: This result suggests that subjects in this example dataset tended to cluster their recall responses by category as well as the size (bigger or smaller than a shoebox) of the word. List length and other properties of your experiment can bias these clustering scores. To help with this, we implemented a permutation clustering procedure which shuffles the order of each recall list and recomputes the clustering score with respect to that distribution. Note: this also works with the temporal clustering analysis.
End of explanation
"""
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_probability as tfp
print(f'tensorflow version: {tf.__version__}')
print(f'tensorflow_probability version: {tfp.__version__}')
"""
Explanation: Guided ES Demo
This is a fully self-contained notebook that reproduces the toy example in Fig.1 of the guided evolutionary strategies paper.
The main code is in the 'Algorithms' section below.
Contact: nirum@google.com
Date: 6/22/18
End of explanation
"""
class AntitheticSampler(object):
def __init__(self, distributions):
"""Antithetic perturbations.
Generates samples eta, and two custom getters that return
(x + eta) and (x - eta)
for a variable x.
This is used to evaluate a loss at perturbed parameter values, e.g.:
[f(x+eta), f(x-eta)]
"""
# stores the sampled noise
self.perturbations = {}
# store the distributions
self.distributions = distributions
def pos_getter(self, getter, name, *args, **kwargs):
"""Custom getter for positive perturbation"""
# get the variable
variable = getter(name, *args, **kwargs)
# check if we have pulled this variable before
if name not in self.perturbations:
# generate a noise sample and store it
self.perturbations[name] = self.distributions[name].sample()
# return the perturbed variable
return variable + tf.reshape(self.perturbations[name], variable.shape)
def neg_getter(self, getter, name, *args, **kwargs):
"""Custom getter for negative perturbation"""
# get the variable
variable = getter(name, *args, **kwargs)
# check if we have pulled this variable before
if name not in self.perturbations:
# generate a noise sample and store it
            self.perturbations[name] = self.distributions[name].sample()
# return the perturbed variable
return variable - tf.reshape(self.perturbations[name], variable.shape)
"""
Explanation: Helper functions
Antithetic sampler
Creates custom getters for perturbing variables.
These are used to evaluate f(x + epsilon), where epsilon is some perturbation applied to the parameters, x.
This also stores the sampled noise (epsilon) in a dictionary, since we need to reuse the noise for the negative sample, when we want to compute f(x - epsilon). (note: this is where the name antithetic comes from)
End of explanation
"""
mvn_diag = tfp.distributions.MultivariateNormalDiag
mvn_lowrank = tfp.distributions.MultivariateNormalDiagPlusLowRank
"""
Explanation: Noise distributions
We draw perturbations of parameters from either a diagonal covariance (the standard evolutionary strategies algorithm), or from a diagonal plus low rank covariance (guided ES).
End of explanation
"""
def gradient_descent(loss_fn, grads_and_vars):
return grads_and_vars
"""
Explanation: Algorithms
Gradient descent
As a baseline, we will compare against running gradient descent directly on the biased gradients.
End of explanation
"""
def evostrat_update(loss_fn, dists, grads_and_vars, beta, sigma):
"""Function to compute the evolutionary strategies.
See the guided ES paper for details on the method.
Args:
loss_fn: function that builds the graph that computes the loss. loss_fn,
when called, returns a scalar loss tensor.
dists: dict mapping from variable names to distributions for perturbing
those variables.
grads_and_vars: list of (gradient, variable) tuples. The gradient and
variable are tensors of the same shape. The gradient may be biased (it
is not necessarily the gradient of the loss_fn).
beta: float, scale hyperparameter of the guided ES algorithm.
sigma: float, controls the overall std. dev. of the perturbation
distribution.
Returns:
    updates_and_vars: a list of (update, variable) tuples containing the
estimated descent direction (update) and variable for each variable to
optimize. (This list will be passed to a tf.train.Optimizer instance).
"""
# build the antithetic sampler
anti = AntitheticSampler(dists)
# evaluate the loss at different parameters
with tf.variable_scope('', custom_getter=anti.pos_getter):
y_pos = loss_fn()
with tf.variable_scope('', custom_getter=anti.neg_getter):
y_neg = loss_fn()
# use these losses to compute the evolutionary strategies update
c = beta / (2 * sigma ** 2)
updates_and_vars = [
(c * tf.reshape(anti.perturbations[v.op.name], v.shape) * (y_pos - y_neg), v)
for _, v in grads_and_vars]
return updates_and_vars
"""
Explanation: Evolutionary strategies
To compute descent directions using evolutionary strategies, we will use the antithetic sampler defined above.
This will let us perturb model parameters centered on the current iterate.
End of explanation
"""
def vanilla_es(loss_fn, grads_and_vars, sigma=0.1, beta=1.0):
def vardist(v):
n = v.shape[0]
scale_diag = (sigma / tf.sqrt(tf.cast(n, tf.float32))) * tf.ones(n)
return mvn_diag(scale_diag=scale_diag)
# build distributions
dists = {v.op.name: vardist(v) for _, v in grads_and_vars}
updates_and_vars = evostrat_update(loss_fn, dists, grads_and_vars, beta, sigma)
return updates_and_vars
"""
Explanation: Vanilla ES
Vanilla ES is the standard evolutionary strategies algorithm. It uses a diagonal covariance matrix for perturbing parameters.
End of explanation
"""
def guided_es(loss_fn, grads_and_vars, sigma=0.1, alpha=0.5, beta=1.0):
def vardist(grad, variable):
"""Builds the sampling distribution for the given variable."""
n = tf.cast(variable.shape[0], tf.float32)
k = 1
a = sigma * tf.sqrt(alpha / n)
c = sigma * tf.sqrt((1-alpha) / k)
b = tf.sqrt(a ** 2 + c ** 2) - a
scale_diag = a * tf.ones(tf.cast(n, tf.int32))
perturb_diag = b * tf.ones(1,)
perturb_factor, _ = tf.qr(grad)
return mvn_lowrank(scale_diag=scale_diag,
scale_perturb_factor=perturb_factor,
scale_perturb_diag=perturb_diag)
dists = {v.op.name: vardist(g, v) for g, v in grads_and_vars}
# antithetic getter
updates_and_vars = evostrat_update(loss_fn, dists, grads_and_vars, beta, sigma)
return updates_and_vars
"""
Explanation: Guided ES
Guided ES is our proposed method. It uses a diagonal plus low-rank covariance matrix for drawing perturbations, where the low-rank subspace is spanned by the available gradient information.
End of explanation
"""
def generate_problem(n, m, seed=None):
rs = np.random.RandomState(seed=seed)
# sample a random problem
A = rs.randn(m, n)
b = rs.randn(m, 1)
grad_bias = rs.randn(n, 1)
return A, b, grad_bias
def perturbed_quadratic(n, m, problem_seed):
tf.reset_default_graph()
# generate problem
A_np, b_np, bias_np = generate_problem(n, m, seed=problem_seed)
A = tf.convert_to_tensor(A_np, dtype=tf.float32)
b = tf.convert_to_tensor(b_np, dtype=tf.float32)
# sample gradient bias and noise
grad_bias = 1.0 * tf.nn.l2_normalize(tf.convert_to_tensor(bias_np, dtype=tf.float32))
grad_noise = 1.5 * tf.nn.l2_normalize(tf.random_normal(shape=(n, 1)))
# compute loss
def loss_fn():
with tf.variable_scope('perturbed_quadratic', reuse=tf.AUTO_REUSE):
x = tf.get_variable('x', shape=(n, 1), initializer=tf.zeros_initializer)
resid = tf.matmul(A, x) - b
return 0.5*tf.norm(resid)**2 / float(m)
# compute perturbed gradient
with tf.variable_scope('perturbed_quadratic', reuse=tf.AUTO_REUSE):
x = tf.get_variable('x', shape=(n, 1), initializer=tf.zeros_initializer)
err = tf.matmul(tf.transpose(A), tf.matmul(A, x) - b) / float(m)
grad = err + (grad_bias + grad_noise) * tf.norm(err)
grads_and_vars = [(grad, x)]
return loss_fn, grads_and_vars
"""
Explanation: Tasks
Perturbed quadratic
This is a toy problem where we explicitly add bias and variance to the gradient
End of explanation
"""
tf.reset_default_graph()
loss_fn, gav = perturbed_quadratic(1000, 2000, 2)
updates = vanilla_es(loss_fn, gav, sigma=0.1, beta=1.0)
opt = tf.train.GradientDescentOptimizer(0.2)
train_op = opt.apply_gradients(updates)
loss = loss_fn()
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# train
fobj = []
for k in range(10000):
f, _ = sess.run([loss, train_op])
fobj.append(f)
# store results for plotting
ves = np.array(fobj).copy()
sess.close()
"""
Explanation: Demo
Vanilla ES
First, we minimize the problem using vanilla evolutionary strategies.
End of explanation
"""
tf.reset_default_graph()
loss_fn, gav = perturbed_quadratic(1000, 2000, 2)
updates = gradient_descent(loss_fn, gav)
opt = tf.train.GradientDescentOptimizer(5e-3)
train_op = opt.apply_gradients(updates)
loss = loss_fn()
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# train
fobj = []
for k in range(10000):
f, _ = sess.run([loss, train_op])
fobj.append(f)
# store results for plotting
gd = np.array(fobj).copy()
sess.close()
"""
Explanation: Gradient descent
Our next baseline is gradient descent, applied directly to the biased gradients.
End of explanation
"""
tf.reset_default_graph()
loss_fn, gav = perturbed_quadratic(1000, 2000, 2)
updates = guided_es(loss_fn, gav, sigma=0.1, alpha=0.5, beta=2.0)
opt = tf.train.GradientDescentOptimizer(0.2)
train_op = opt.apply_gradients(updates)
loss = loss_fn()
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# train
fobj = []
for k in range(10000):
f, _ = sess.run([loss, train_op])
fobj.append(f)
# store results for plotting
ges = np.array(fobj).copy()
sess.close()
"""
Explanation: Guided ES
Finally, we will run the same problem using the guided evolutionary strategies method.
End of explanation
"""
A, b, _ = generate_problem(1000, 2000, seed=2)
xstar = np.linalg.lstsq(A, b, rcond=None)[0]
f_opt = (0.5/2000) * np.linalg.norm(np.dot(A, xstar) - b) ** 2
"""
Explanation: Plots
End of explanation
"""
COLORS = {'ges': '#7570b3', 'ves': '#1b9e77', 'sgdm': '#d95f02'}
plt.figure(figsize=(8, 6))
plt.plot(ves - f_opt, color=COLORS['ves'], label='Vanilla ES')
plt.plot(gd - f_opt, color=COLORS['sgdm'], label='Grad. Descent')
plt.plot(ges - f_opt, color=COLORS['ges'], label='Guided ES')
plt.legend(fontsize=16, loc=0)
plt.xlabel('Iteration', fontsize=16)
plt.ylabel('Loss', fontsize=16)
plt.title('Demo of Guided Evolutionary Strategies', fontsize=16);
"""
Explanation: As we see in the plot below, Guided ES combines the benefits of gradient descent (quick initial descent) and vanilla evolutionary strategies (converges on the true solution).
End of explanation
"""
|
rueedlinger/machine-learning-snippets | notebooks/basics/statistical_analysis.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
import pandas as pd
plt.style.use("ggplot")
"""
Explanation: Statistical analysis
In this notebook we use pandas and the stats module from scipy for some basic statistical analysis.
End of explanation
"""
df = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", names=["Age", "Workclass", "fnlwgt", "Education", "Education-Num", "Martial Status",
"Occupation", "Relationship", "Race", "Sex", "Capital Gain", "Capital Loss",
"Hours per week", "Country", "Target"])
# some data cleaning remove leading and trailing spaces
df['Sex'] = df['Sex'].str.strip()
df.head()
"""
Explanation: First we need some data. Let's use pandas to load the 'adult' data set from the UC Irvine Machine Learning Repository into our dataframe.
End of explanation
"""
df.shape
"""
Explanation: Descriptive statistics
Let's have a first look at the shape of our dataframe.
End of explanation
"""
df.columns
"""
Explanation: What are the column names.
End of explanation
"""
df.mean()
df.median()
df.sem()
df.var()
df.std()
df.quantile(q=0.5)
df.quantile(q=[0.05, 0.95])
"""
Explanation: We can calculate the mean, median, standard error of the mean (sem), variance, standard deviation (std) and the quantiles for every column in the dataframe
End of explanation
"""
_ = sns.pairplot(df, hue="Target")
_ = sns.displot(df, x="Age", hue="Sex", kind="kde", log_scale=False)
"""
Explanation: In the next sample we replace a value with None so that we can show how to handle missing values in a dataframe.
Basic visualization
First let's create a pair plot
End of explanation
"""
female = df[df.Sex == 'Female']
male = df[df.Sex == 'Male']
"""
Explanation: Inferential statistics
End of explanation
"""
t, p = stats.ttest_ind(female['Age'], male['Age'])
print("test statistic: {}".format(t))
print("p-value: {}".format(p))
"""
Explanation: T-Test
End of explanation
"""
z, p = stats.ranksums(female['Age'], male['Age'])
print("test statistic: {}".format(z))
print("p-value: {}".format(p))
"""
Explanation: Wilcoxon rank-sum test
End of explanation
"""
|
lmcinnes/pynndescent | doc/pynndescent_in_pipelines.ipynb | bsd-2-clause | from sklearn.manifold import Isomap, TSNE
from sklearn.neighbors import KNeighborsTransformer
from pynndescent import PyNNDescentTransformer
from sklearn.pipeline import make_pipeline
from sklearn.datasets import fetch_openml
from sklearn.utils import shuffle
import seaborn as sns
"""
Explanation: Working with Scikit-learn pipelines
Nearest neighbor search is a fundamental building block of many machine learning algorithms, including kNN-classifiers and kNN-regressors in supervised learning, and manifold learning and clustering in unsupervised learning. It would be useful to be able to bring the speed of PyNNDescent's approximate nearest neighbor search to bear on these problems without having to re-implement everything from scratch. Fortunately Scikit-learn has done most of the work for us with their KNeighborsTransformer, which provides a means to insert nearest neighbor computations into sklearn pipelines and feed the results to many of their models that make use of nearest neighbor computations. It is worth reading through the documentation they have, because we are going to use PyNNDescent as a drop-in replacement.
To make this as simple as possible PyNNDescent implements a class PyNNDescentTransformer that acts as a KNeighborsTransformer and can be dropped into all the same pipelines. Let's see an example of this working ...
End of explanation
"""
def load_mnist(n_samples):
"""Load MNIST, shuffle the data, and return only n_samples."""
mnist = fetch_openml("mnist_784")
X, y = shuffle(mnist.data, mnist.target, random_state=2)
return X[:n_samples] / 255, y[:n_samples]
data, target = load_mnist(10000)
"""
Explanation: As usual we will need some data to play with. In this case let's use a random subsample of MNIST digits.
End of explanation
"""
sklearn_isomap = make_pipeline(
KNeighborsTransformer(n_neighbors=15),
Isomap(metric='precomputed')
)
pynnd_isomap = make_pipeline(
PyNNDescentTransformer(n_neighbors=15),
Isomap(metric='precomputed')
)
sklearn_tsne = make_pipeline(
KNeighborsTransformer(n_neighbors=92),
TSNE(metric='precomputed', random_state=42)
)
pynnd_tsne = make_pipeline(
PyNNDescentTransformer(n_neighbors=92, early_termination_value=0.05),
TSNE(metric='precomputed', random_state=42)
)
"""
Explanation: Now we need to make a pipeline that feeds the nearest neighbor results into a downstream task. To demonstrate how this can work we'll try manifold learning. First we will try out Isomap and then t-SNE. In both cases we can provide a "precomputed" distance matrix, and if it is a sparse matrix (as output by KNeighborsTransformer) then any entry not explicitly provided as a non-zero element of the matrix will be ignored (or treated as an effectively infinite distance). To make the whole thing work we simply make an sklearn pipeline (and could easily include pre-processing steps such as categorical encoding, or data scaling and standardisation as earlier steps if we wished) that first uses the KNeighborsTransformer to process the raw data into a nearest neighbor graph, and then passes that on to either Isomap or TSNE. For comparison we'll drop in a PyNNDescentTransformer instead and see how that affects the results.
End of explanation
"""
%%time
sklearn_iso_map = sklearn_isomap.fit_transform(data)
%%time
pynnd_iso_map = pynnd_isomap.fit_transform(data)
"""
Explanation: First let's try Isomap. The algorithm first constructs a k-nearest-neighbor graph (which our transformers will handle in the pipeline), then measures distances between points as path lengths in that graph. Finally it performs an eigendecomposition of the resulting distance matrix. We can't do much to speed up the latter two steps, which are still non-trivial, but hopefully we can get some speedup by substituting in the approximate nearest neighbor computation.
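As a tiny illustration of that middle step (not part of the benchmark above, and using a hand-made toy graph rather than the MNIST data), geodesic distances can be computed as shortest paths over a sparse k-neighbor graph:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

# Toy 3-node neighbor graph: edge 0-1 with distance 1, edge 1-2 with distance 2.
graph = csr_matrix(np.array([
    [0., 1., 0.],
    [1., 0., 2.],
    [0., 2., 0.],
]))
# Geodesic (path-length) distances, the quantity Isomap computes internally.
geodesic = shortest_path(graph, directed=False)
print(geodesic[0, 2])  # 0 -> 1 -> 2 has total length 3.0
```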
End of explanation
"""
sns.scatterplot(x=sklearn_iso_map.T[0], y=sklearn_iso_map.T[1], hue=target, palette="Spectral", size=1);
sns.scatterplot(x=pynnd_iso_map.T[0], y=pynnd_iso_map.T[1], hue=target, palette="Spectral", size=1);
"""
Explanation: A two-times speedup is not bad, especially since we only accelerated one component of the full algorithm. It is quite good considering it was simply a matter of dropping a different class into a pipeline. More importantly, as we scale to larger amounts of data the nearest neighbor search comes to dominate the overall algorithm run-time, so we can expect to only get better speedups for more data. We can plot the results to ensure we are getting qualitatively the same thing.
End of explanation
"""
%%time
sklearn_tsne_map = sklearn_tsne.fit_transform(data)
%%time
pynnd_tsne_map = pynnd_tsne.fit_transform(data)
"""
Explanation: Now let's try t-SNE. This algorithm requires nearest neighbors as a first step, and then the second major part, in terms of computation time, is the optimization of a layout of a modified k-neighbor graph. We can hope for some improvement in the first part, which usually accounts for around half the overall run-time for small data (and comes to consume a majority of the run-time for large datasets).
End of explanation
"""
sns.scatterplot(x=sklearn_tsne_map.T[0], y=sklearn_tsne_map.T[1], hue=target, palette="Spectral", size=1);
sns.scatterplot(x=pynnd_tsne_map.T[0], y=pynnd_tsne_map.T[1], hue=target, palette="Spectral", size=1);
"""
Explanation: Again we have an approximate two-times speedup. Again this was achieved by simply substituting a different class into the pipeline (although in this case we tweaked the early_termination_value so it would stop sooner). Again we can look at the qualitative results and see that we are getting something very similar.
End of explanation
"""
import numba
import numpy as np
@numba.njit()
def arr_intersect(ar1, ar2):
aux = np.sort(np.concatenate((ar1, ar2)))
return aux[:-1][aux[:-1] == aux[1:]]
@numba.njit()
def neighbor_accuracy_numba(n1_indptr, n1_indices, n2_indptr, n2_indices):
result = 0.0
for i in range(n1_indptr.shape[0] - 1):
indices1 = n1_indices[n1_indptr[i]:n1_indptr[i+1]]
indices2 = n2_indices[n2_indptr[i]:n2_indptr[i+1]]
n_correct = np.float64(arr_intersect(indices1, indices2).shape[0])
result += n_correct / indices1.shape[0]
return result / (n1_indptr.shape[0] - 1)
def neighbor_accuracy(neighbors1, neighbors2):
return neighbor_accuracy_numba(
neighbors1.indptr, neighbors1.indices, neighbors2.indptr, neighbors2.indices
)
%time true_neighbors = KNeighborsTransformer(n_neighbors=15).fit_transform(data)
%time pynnd_neighbors = PyNNDescentTransformer(n_neighbors=15).fit_transform(data)
print(f"Neighbor accuracy is {neighbor_accuracy(true_neighbors, pynnd_neighbors) * 100.0}%")
"""
Explanation: So the results, in both cases, look pretty good, and we did get a good speed-up. A question remains -- how fast was the nearest neighbor component, and how accurate was it? We can write a simple function to measure the neighbor accuracy: compute the average percentage intersection in the neighbor sets of each sample point. Then let's just run the transformers and compare the times as well as computing the actual percentage accuracy.
End of explanation
"""
%time true_neighbors = KNeighborsTransformer(n_neighbors=92).fit_transform(data)
%time pynnd_neighbors = PyNNDescentTransformer(n_neighbors=92, early_termination_value=0.05).fit_transform(data)
print(f"Neighbor accuracy is {neighbor_accuracy(true_neighbors, pynnd_neighbors) * 100.0}%")
"""
Explanation: So for the Isomap case we went from taking over one and a half minutes down to less than a second. While doing so we still achieved over 99% accuracy in the nearest neighbors. This seems like a good tradeoff.
By contrast, t-SNE requires a much larger number of neighbors (approximately three times the desired perplexity value, which defaults to 30 in sklearn's implementation). This is a little more of a challenge so we might expect it to take longer.
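The neighbor count used above can be derived from the perplexity. The sketch below assumes the common rule of thumb of roughly three times the perplexity plus one (an assumption about sklearn's internal choice); the demo requests 92, one more than this rule gives:

```python
# Rule-of-thumb neighbor count for t-SNE (assumption: mirrors sklearn's
# internal choice of int(3 * perplexity + 1) nearest neighbors).
perplexity = 30
n_neighbors = int(3 * perplexity + 1)
print(n_neighbors)  # 91
```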
End of explanation
"""
|
pysg/pyther | thermodynamic_correlations.ipynb | mit | import numpy as np
import pandas as pd
import pyther as pt
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Thermodynamic correlations for pure components
This section presents the class Thermodynamic_correlations(), which computes thermodynamic properties of pure substances as a function of temperature. There are 6 scenarios for each of the 13 supported thermophysical properties:
Specify a pure substance without specifying a temperature. In this case, by default, the thermodynamic property is calculated over the interval between the minimum and maximum of experimental validity of each correlation.
Specify a pure substance and a single temperature.
Specify a pure substance and several temperatures.
Specify several pure substances without specifying a temperature.
Specify several pure substances and a single temperature.
Specify several pure substances and several temperatures.
The Thermodynamic_correlations class is used to calculate 13 thermodynamic properties of pure substances as a function of temperature; the following convention is used to identify each property:
property thermodynamics = name property, units, correlation, equation
The thermodynamic correlations are:
-Solid_Density = "Solid Density", "[kmol/m^3]", "A+B*T+C*T^2+D*T^3+E*T^4", 0
-Liquid_Density = "Liquid Density", "[kmol/m^3]", "A/B^(1+(1-T/C)^D)", 1
-Vapour_Pressure = "Vapour Pressure", "[Bar]", "exp(A+B/T+C*ln(T)+D*T^E) * 1e-5", 2
-Heat_of_Vaporization = "Heat of Vaporization", "[J/kmol]", "A*(1-Tr)^(B+C*Tr+D*Tr^2)", 3
-Solid_Heat_Capacity = "Solid Heat Capacity", "[J/(kmol*K)]", "A+B*T+C*T^2+D*T^3+E*T^4", 4
-Liquid_Heat_Capacity = "Liquid Heat Capacity", "[J/(kmol*K)]", "A^2/(1-Tr)+B-2*A*C*(1-Tr)-A*D*(1-Tr)^2-C^2*(1-Tr)^3/3-C*D*(1-Tr)^4/2-D^2*(1-Tr)^5/5", 5
-Ideal_Gas_Heat_Capacity = "Ideal Gas Heat Capacity", "[J/(kmol*K)]", "A+B*(C/T/sinh(C/T))^2+D*(E/T/cosh(E/T))^2", 6
-Second_Virial_Coefficient = "Second Virial Coefficient", "[m^3/kmol]", "A+B/T+C/T^3+D/T^8+E/T^9", 7
-Liquid_Viscosity = "Liquid Viscosity", "[kg/(m*s)]", "exp(A+B/T+C*ln(T)+D*T^E)", 8
-Vapour_Viscosity = "Vapour Viscosity", "[kg/(m*s)]", "A*T^B/(1+C/T+D/T^2)", 9
-Liquid_Thermal_Conductivity = "Liquid Thermal Conductivity", "[J/(m*s*K)]", "A+B*T+C*T^2+D*T^3+E*T^4", 10
-Vapour_Thermal_Conductivity = "Vapour Thermal Conductivity", "[J/(m*s*K)]", "A*T^B/(1+C/T+D/T^2)", 11
-Surface_Tension = "Surface Tension", "[kg/s^2]", "A*(1-Tr)^(B+C*Tr+D*Tr^2)", 12
To begin, we import the libraries to be used, in this case numpy, pandas and pyther, and specify that the generated figures are shown inside the Jupyter notebook.
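As a small illustration, the Vapour_Pressure correlation listed above can be evaluated directly. This is only a sketch of the functional form; the coefficients A..E below are illustrative placeholders, not values from the pyther database:

```python
import numpy as np

# Sketch of the Vapour_Pressure correlation form:
# exp(A + B/T + C*ln(T) + D*T^E) * 1e-5, returning bar.
# A..E are made-up placeholders, not real database coefficients.
def vapour_pressure_bar(T, A, B, C, D, E):
    return np.exp(A + B / T + C * np.log(T) + D * T**E) * 1e-5

# With all coefficients zeroed, the result is exp(0) * 1e-5 = 1e-5 bar.
print(vapour_pressure_bar(300.0, 0.0, 0.0, 0.0, 0.0, 1.0))
```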
End of explanation
"""
dppr_file = "PureFull_mod_properties.xls"
thermodynamic_correlations = pt.Thermodynamic_correlations(dppr_file)
component = ['METHANE']
property_thermodynamics = "Vapour_Pressure"
Vapour_Pressure = thermodynamic_correlations.property_cal(component, property_thermodynamics)
print("Vapour Pressure = {0}".format(Vapour_Pressure))
component_2 = ['ETHANE']
Vapour_Pressure_2 = thermodynamic_correlations.property_cal(component_2, property_thermodynamics)
print("Vapour Pressure = {0}".format(Vapour_Pressure_2))
Vapour_Pressure[:5]*100
"""
Explanation: 1. Specify a pure substance without specifying a temperature.
First we load the file containing the constants of the thermodynamic property correlations, in this case called "PureFull_mod_properties.xls", and assign it to the variable dppr_file.
We create an object called thermodynamic_correlations and pass it the variables component and property_thermodynamics; in the example, Vapour_Pressure is specified for the component METHANE.
End of explanation
"""
temperature_vapour = thermodynamic_correlations.temperature
units = thermodynamic_correlations.units
print(units)
thermodynamic_correlations.graphical(temperature_vapour, Vapour_Pressure, property_thermodynamics, units)
"""
Explanation: To produce a simple plot of the thermodynamic property, the method graphical(temperature, property_thermodynamics, label_property_thermodynamics, units) is used.
It takes as arguments the temperatures at which the thermodynamic property was calculated, the calculated values of the property, the label of the thermodynamic property, and the corresponding units of temperature and of the property in each case.
End of explanation
"""
component = ['METHANE']
property_thermodynamics = "Vapour_Pressure"
temperature = [180.4]
Vapour_Pressure = thermodynamic_correlations.property_cal(component, property_thermodynamics, temperature)
print("Vapour Pressure = {0} {1}". format(Vapour_Pressure, units[1]))
"""
Explanation: 2. Specify a pure substance and a single temperature.
Continuing with the pure substance METHANE, in this second case we specify not only the component but also a single temperature value, as shown in the variable temperature.
Since each thermodynamic property correlation has a minimum and maximum temperature over which it is valid, when a temperature value is entered it is checked against the accepted interval for each component and each thermodynamic property. Otherwise the temperature is classified as invalid and no value is obtained for the selected thermodynamic property.
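A minimal sketch of that validity check (this is not the pyther implementation; the bounds below are illustrative values close to methane's vapour pressure validity range, read in practice from the database):

```python
# Hypothetical range check: keep only temperatures inside the correlation's
# experimental validity interval [tmin, tmax].
def split_valid_temperatures(temperatures, tmin, tmax):
    valid = [t for t in temperatures if tmin <= t <= tmax]
    invalid = ["{0} K is a temperature not valid".format(t)
               for t in temperatures if not (tmin <= t <= tmax)]
    return valid, invalid

valid, invalid = split_valid_temperatures([180.4, 210, 85], 90.69, 190.56)
print(valid)    # [180.4]
print(invalid)  # two messages, for 210 K and 85 K
```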
End of explanation
"""
component = ['METHANE']
property_thermodynamics = "Vapour_Pressure"
temperature = [180.4, 181.4, 185.3, 210, 85]
Vapour_Pressure = thermodynamic_correlations.property_cal(component, property_thermodynamics, temperature)
print("Vapour Pressure = {0} {1}". format(Vapour_Pressure, units[1]))
"""
Explanation: 3. Specify a pure substance and several temperatures.
Now we again have a single component, "METHANE", but this time several temperature values are specified at which the corresponding value of a thermodynamic property is to be determined, which, as in the previous cases, is Vapour_Pressure.
End of explanation
"""
components = ["METHANE", "n-TETRACOSANE", "ISOBUTANE"]
property_thermodynamics = "Vapour_Pressure"
Vapour_Pressure = thermodynamic_correlations.property_cal(components, property_thermodynamics)
temperature_vapour = thermodynamic_correlations.temperature
"""
Explanation: It should be noted that although a series of temperature values was entered, 5 in this case, only 3 values of the thermodynamic property are obtained. This is because 2 of the temperature values fall outside the minimum and maximum range over which the thermodynamic correlation is valid. This is reported via the message: Temperature_invalid = ['210 K is a temperature not valid', '85 K is a temperature not valid']
4. Specify several pure substances without specifying a temperature.
Another available option is to specify several components for the same thermodynamic property without specifying one or more temperature values. Any number of components can be entered, as long as they are in the working database, or new pure-substance correlations are added to the database (see the database section). For this example a list components with 3 pure substances is used, for the sake of readability of the Vapour_Pressure plots.
End of explanation
"""
thermodynamic_correlations.multi_graphical(components, temperature_vapour, Vapour_Pressure)
"""
Explanation: Using the method multi_graphical(components, temperature, property_thermodynamics), to which we pass the pure substances, the temperatures at which the thermodynamic property was calculated, and the property values of each pure substance, we obtain the following figure.
End of explanation
"""
components = ["METHANE", "n-TETRACOSANE", "n-PENTACOSANE", "ETHANE", "ISOBUTANE", "PROPANE", "3-METHYLHEPTANE"]
property_thermodynamics = "Vapour_Pressure"
Vapour_Pressure = thermodynamic_correlations.property_cal(components, property_thermodynamics)
temperature_vapour = thermodynamic_correlations.temperature
thermodynamic_correlations.multi_graphical(components[2:5], temperature_vapour[2:5], Vapour_Pressure[2:5])
"""
Explanation: However, as mentioned earlier, it is possible to calculate a thermodynamic property for a large number of pure substances and then plot them selectively depending on visualization needs, among other criteria. To illustrate this, we now take 7 pure substances and plot the thermodynamic property of only: n-PENTACOSANE, ETHANE and ISOBUTANE.
End of explanation
"""
dppr_file = "PureFull_mod_properties.xls"
thermodynamic_correlations = pt.Thermodynamic_correlations(dppr_file)
components = ["METHANE", "n-TETRACOSANE", "ISOBUTANE"]
property_thermodynamics = "Vapour_Pressure"
temperature = [180.4]
Vapour_Pressure = thermodynamic_correlations.property_cal(components, property_thermodynamics, temperature)
print("Vapour Pressure = {0} {1}". format(Vapour_Pressure, units[1]))
"""
Explanation: 5. Specify several pure substances and a single temperature.
As in the previous case, this example specifies 3 pure substances, but with a single temperature value. This temperature is shared by all the pure substances under consideration, so it may turn out to be valid for some pure substances and invalid for others, depending on the validity interval of each thermodynamic correlation.
End of explanation
"""
thermodynamic_correlations.component_constans
"""
Explanation: In this case the result contains 2 vapour pressure values, one for METHANE and another for ISOBUTANE, while an empty array is obtained for n-TETRACOSANE, since the specified temperature of 180 K is not within its valid range.
To check both the values of the constants and the minimum and maximum bounds of each thermodynamic correlation for each of the specified pure substances, the attribute component_constans is used, as shown below.
End of explanation
"""
import numpy as np
import pandas as pd
import pyther as pt
import matplotlib.pyplot as plt
%matplotlib inline
dppr_file = "PureFull_mod_properties.xls"
thermodynamic_correlations = pt.Thermodynamic_correlations(dppr_file)
#components = ["METHANE", "n-TETRACOSANE", "ISOBUTANE"]
components = ["METHANE", "n-TETRACOSANE", "n-PENTACOSANE", "ETHANE", "ISOBUTANE", "PROPANE", "3-METHYLHEPTANE"]
property_thermodynamics = "Vapour_Pressure"
temperature = [180.4, 181.4, 185.3, 210, 800]
Vapour_Pressure = thermodynamic_correlations.property_cal(components, property_thermodynamics, temperature)
print("Vapour Pressure = {0}". format(Vapour_Pressure))
"""
Explanation: 6. Specify several pure substances and several temperatures.
This option handles several pure substances simultaneously with several specified temperature values, where each temperature is shared by all the pure substances being considered, so that proper values are obtained only for those temperatures that are valid in each case.
End of explanation
"""
temp_enter = thermodynamic_correlations.temperature_enter
thermodynamic_correlations.data_temperature(components, temperature, Vapour_Pressure, temp_enter)
"""
Explanation: As the results above show, data handling becomes increasingly cumbersome as the number of pure substances and temperatures involved in the analysis grows, so it is convenient to use the strengths of specialized data-processing libraries such as Pandas to obtain results more efficiently.
The method data_temperature(components, temperature, Vapour_Pressure, temp_enter) presents a DataFrame with the results obtained after calculating the indicated thermodynamic property, showing NaN for temperatures outside the applicable interval of the thermodynamic correlation, as the example below demonstrates.
End of explanation
"""
|
valter-lisboa/ufo-notebooks | Python3/.ipynb_checkpoints/ufo-sample-python3-checkpoint.ipynb | gpl-3.0 | import pandas as pd
import numpy as np
"""
Explanation: USA UFO sightings (Python 3 version)
This notebook is based on the first chapter sample from Machine Learning for Hackers with some added features. I did this to present Jupyter Notebook with Python 3 for Tech Days in my Job.
The original link is offline, so you need to download the file from the author's repository into ../data inside the R notebook directory.
I will assume the following questions need to be answered:
- What is the best place to have UFO sightings in the USA?
- What is the best month to have UFO sightings in the USA?
Loading the data
This first section will handle loading the main data file using Pandas.
End of explanation
"""
ufo = pd.read_csv(
'../data/ufo_awesome.tsv',
sep = "\t",
header = None,
dtype = object,
na_values = ['', 'NaN'],
error_bad_lines = False,
warn_bad_lines = False
)
"""
Explanation: Here we are loading the dataset with pandas with a minimal set of options.
- sep: since the file is in TSV format, the separator is a <TAB> special character;
- na_values: the file has empty strings for NaN values;
- header: don't treat any row as a header, since the file lacks one;
- dtype: load the dataframe as objects, avoiding interpretation of the data types¹;
- error_bad_lines: ignore lines with more fields than the number of columns;
- warn_bad_lines: set to False to avoid ugly warnings on the screen; activate this if you want to analyse the bad rows.
¹ Before starting to make assumptions about the data I prefer to load it as objects and convert it later, after making sense of it. Also, the data can be corrupted and make casting impossible.
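A small sketch of that "load as object, cast later" approach: inspect a column loaded as strings, then convert explicitly once the format is known (the sample values below are invented, including a corrupted date like the ones found later in this dataset):

```python
import pandas as pd

# Toy column loaded as object dtype; '0000' plays the role of a corrupted date.
raw = pd.Series(['19951009', '0000', '20040731'], dtype=object)
dates = pd.to_datetime(raw, format='%Y%m%d', errors='coerce')
print(dates.isna().sum())  # 1 -- the corrupted value became NaT
```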
End of explanation
"""
ufo.describe()
ufo.head()
"""
Explanation: With the data loaded into the ufo dataframe, let's check its composition and first set of rows.
End of explanation
"""
ufo.columns = [
'DateOccurred',
'DateReported',
'Location',
'Shape',
'Duration',
'LongDescription'
]
"""
Explanation: The dataframe describe() show us how many itens (without NaN) each column have, how many are uniques, which is more frequent value, and how much this value appear. head() simply show us the first 5 rows (first is 0 on Python).
Dealing with metadata and column names
We need to handle the columns names, to do so is necessary to see the data document. The table bellow shows the fields details get from the metadata:
| Short name | Type | Description |
| ---------- | ---- | ----------- |
| sighted_at | Long | Date the event occurred (yyyymmdd) |
| reported_at | Long | Date the event was reported |
| location | String | City and State where event occurred |
| shape | String | One word string description of the UFO shape |
| duration | String | Event duration (raw text field) |
| description | String | A long, ~20-30 line, raw text description |
To keep in sync with the R example, we will set the column names to the following values:
- DateOccurred
- DateReported
- Location
- Shape
- Duration
- LongDescription
End of explanation
"""
ufo.head()
"""
Explanation: Now we have a good looking dataframe with columns.
End of explanation
"""
ufo.drop(
labels = ['DateReported', 'Duration', 'Shape', 'LongDescription'],
axis = 1,
inplace = True
)
ufo.head()
"""
Explanation: Data Wrangling
Now we start to transform our data into something to analyse.
Keeping only necessary data
To decide about this let's get back to the questions to be answered.
The first one is about the best place in the USA to have UFO sightings; for this we will need the Location column, and at some point we will filter on it. The second question is about the best month to have UFO sightings, which will lead to the DateOccurred column.
Based on this, the Shape and LongDescription columns can be stripped right now (their lack of relevance for our questions is fairly obvious). But there are 2 other columns which may or may not be removed, DateReported and Duration.
I always keep in mind to maintain, at least until a second pass, columns with some useful information for further data wrangling or for getting some statistical sense of the data. These two columns hold a date (in YYYYMMDD format) and a raw string, respectively, which could store some useful information if treated and converted to a numeric format. For the purpose of this demo, I am removing them because DateReported will not be used further (what matters is when the sighting occurred, not when it was registered) and Duration is really a mess; for an example to show on a Tech Day, the effort to decompose it is not worth it.
The drop() command below has the following parameters:
- labels: columns to remove;
- axis: set to 1 to remove columns;
- inplace: set to True to modify the dataframe itself and return none.
End of explanation
"""
ufo['DateOccurred'] = pd.Series([
pd.to_datetime(
date,
format = '%Y%m%d',
errors='coerce'
) for date in ufo['DateOccurred']
])
ufo.describe()
"""
Explanation: Converting data
Now we are good to start the data transformation; the date columns must be converted to Python datetime objects to allow manipulation of their time series.
The first problem happens when trying to run this code, using pandas.to_datetime() to convert the strings:
python
ufo['DateOccurred'] = pd.Series([
pd.to_datetime(
date,
format = '%Y%m%d'
) for date in ufo['DateOccurred']
])
This will raise a series of errors (a stack trace) caused by this:
ValueError: time data '0000' does not match format '%Y%m%d' (match)
What happens here is bad data (welcome to the data science world: most data will come corrupted, missing, wrong or with some other problem). Before proceeding we need to deal with the dates in the wrong format.
So what to do? Well, we can make the to_datetime() method ignore the errors, putting NaT values in the field. Let's convert this and then see how the DateOccurred column looks.
End of explanation
"""
ufo['DateOccurred'].isnull().sum()
"""
Explanation: The column is now a datetime object and has 60814 elements against the original 61069, which shows some bad dates are gone. The following code shows us how many elements were removed.
End of explanation
"""
ufo.isnull().sum()
ufo.dropna(
axis = 0,
inplace = True
)
ufo.isnull().sum()
ufo.describe()
"""
Explanation: It is no surprise that 60814 + 255 = 61069; we need to deal with these values too.
So we have a field DateOccurred with some NaN values. At this point we need to make an important decision: get rid of the rows with NaN dates, or fill them with something.
There is no universal guide to this; we could fill them with the mean of the column or copy the content of the DateReported column. But in this case the missing dates are less than 0.5% of the total, so for simplicity's sake we will simply drop all NaN values.
End of explanation
"""
ufo['Year'] = pd.DatetimeIndex(ufo['DateOccurred']).year
ufo['Month'] = pd.DatetimeIndex(ufo['DateOccurred']).month
ufo.head()
ufo['Month'].describe()
ufo['Year'].describe()
"""
Explanation: With the dataframe dates clean, let's create another 2 columns to handle years and months separately. This will make some analyses easier (like discovering which is the best month of the year to look for UFO sightings).
End of explanation
"""
sightings_by_year = ufo.groupby('Year').size().reset_index()
sightings_by_year.columns = ['Year', 'Sightings']
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import seaborn as sns
plt.style.use('seaborn-white')
%matplotlib inline
plt.xticks(rotation = 90)
sns.barplot(
data = sightings_by_year,
x = 'Year',
y = 'Sightings',
color= 'blue'
)
ax = plt.gca()
ax.xaxis.set_major_locator(ticker.MultipleLocator(base=5))
"""
Explanation: A funny thing about the years: the oldest sighting is from 1762! This dataset includes sightings from history.
How significant can this be? Well, to figure it out it's time to plot some charts. Humans are visual beings, and a picture is really worth much more than a bunch of numbers and words.
To do so we will use the default matplotlib library from Python to build our graphs.
Analysing the years
Before starting, let's count the sightings by year.
The commands below are equivalent to the following SQL code:
SQL
SELECT Year, count(*) AS Sightings
FROM ufo
GROUP BY Year
End of explanation
"""
ufo = ufo[ufo['Year'] > 1900]
"""
Explanation: We can see the number of sightings is more representative after around 1900, so we will filter the dataframe to all years above this threshold.
End of explanation
"""
locations = ufo['Location'].str.split(', ').apply(pd.Series)
ufo['City'] = locations[0]
ufo['State'] = locations[1]
"""
Explanation: Handling location
Here we will take two steps: first, split each location into city and state (USA only); second, load a dataset containing the latitude and longitude of each US city for a future merge.
End of explanation
"""
|
KitwareMedical/ITKUltrasound | examples/RFPowerSpectraAttenuation.ipynb | apache-2.0 | # Install notebook dependencies
import sys
#!{sys.executable} -m pip install itk itk-ultrasound matplotlib
import os
import requests
import shutil
import itk
import matplotlib.pyplot as plt
assert 'AttenuationImageFilter' in dir(itk) # Verify we have an up-to-date version of itk-ultrasound
"""
Explanation: RF Power Spectra Attenuation
Attenuation is a measure of how the strength of a signal changes as it propagates through space. In the context of ultrasound imaging, an ultrasound acoustic waveform will reduce in intensity as it moves through tissue, often in different ways depending on the type of tissue. We can gain insights into the properties of objects in the viewing plane of an ultrasound probe by examining how reflected signals from the probe attenuate over time and space.
Ultrasound imaging works by repeatedly emitting ultrasound acoustic waves and then translating reflected waveforms into RF signals over a very short time. By transforming RF waveforms into the power frequency (Fourier) domain, we can selectively estimate how different signal frequencies lose strength as they propagate through tissue.
This notebook demonstrates how to use tools available in ITK and ITKUltrasound to generate attenuation estimations and get statistical insights. Sample data used in this notebook represents a 2D sweep over a liver volume, with the first image dimension the direction of RF waveforms, the second dimension representing the lateral direction along probe elements, and the third dimension representing steps in time. Each pixel itself represents estimated power spectra frequency content at the given point in space, with 31 bins representing the frequency range (0, nyquist_frequency). A mask is provided as an expert segmentation of the liver from the corresponding BMode image.
Please see PlotPowerSpectra.ipynb for an example visualizing 31-channel power spectra averaged across an image. SpectralDistributions.ipynb may also be referenced to demonstrate how a 31-channel power spectra image may be generated from an RF ultrasound image.
End of explanation
"""
SAMPLING_FREQUENCY_MHZ = 60.0
SCAN_DIRECTION = 0
# Retrieve hosted sample data from data.kitware.com
spectra_image_path = (('data/UnfusedRF-a0-spectra-cropped.mhd','https://data.kitware.com/api/v1/file/621902af4acac99f425ebe8b/download'),
('data/UnfusedRF-a0-spectra-cropped.raw','https://data.kitware.com/api/v1/file/621902ae4acac99f425ebe73/download'))
mask_image_path = (('data/UnfusedRF-a0-spectra-label_1-cropped.mhd','https://data.kitware.com/api/v1/file/621902ae4acac99f425ebe7b/download'),
('data/UnfusedRF-a0-spectra-label_1-cropped.raw','https://data.kitware.com/api/v1/file/621902af4acac99f425ebe83/download'))
os.makedirs('data',exist_ok=True)
for idx in range(2):
if not os.path.exists(spectra_image_path[idx][0]):
response = requests.get(spectra_image_path[idx][1], stream=True)
with open(spectra_image_path[idx][0], 'wb') as fp:
response.raw.decode_content = True
shutil.copyfileobj(response.raw, fp)
for idx in range(2):
if not os.path.exists(mask_image_path[idx][0]):
response = requests.get(mask_image_path[idx][1], stream=True)
with open(mask_image_path[idx][0], 'wb') as fp:
response.raw.decode_content = True
shutil.copyfileobj(response.raw, fp)
spectra_image = itk.imread(spectra_image_path[0][0],pixel_type=itk.VariableLengthVector[itk.F])
mask_image = itk.imread(mask_image_path[0][0])
print(f'Sample has size {itk.size(spectra_image)} where dim0 = scanline, dim1 = lateral, dim2 = time')
print(f'Samples has {spectra_image.GetNumberOfComponentsPerPixel()} frequency components')
time_idx = 9 # Arbitrarily pick timestep 9 out of 10 for viewing
frequency_bin_idx = 15 # Arbitrarily pick channel 15 out of 31 for viewing
plt.imshow(spectra_image[time_idx,:,:,frequency_bin_idx],aspect='auto')
plt.title(f'Spectra Image, Channel {frequency_bin_idx}')
plt.show()
plt.imshow(mask_image[time_idx,:,:],aspect='auto')
plt.title('Mask Image')
plt.show()
"""
Explanation: Retrieve and Load Spectra Data
End of explanation
"""
metric_image_type = itk.Image[itk.F,spectra_image.GetImageDimension()]
attenuation_filter = itk.AttenuationImageFilter[type(spectra_image),metric_image_type].New()
attenuation_filter.SetInput(spectra_image)
attenuation_filter.SetInputMaskImage(mask_image)
attenuation_filter.SetSamplingFrequencyMHz(SAMPLING_FREQUENCY_MHZ)
attenuation_filter.SetScanDirection(SCAN_DIRECTION)
# Get attenuation within a confined frequency band
attenuation_filter.SetFrequencyBandStartMHz(5.0)
attenuation_filter.SetFrequencyBandEndMHz(20.0)
# Account for segmentation error by estimating attenuation starting from
# a fixed distance into the liver, essentially eroding the mask image
attenuation_filter.SetPadLowerBoundsMM(3.0)
attenuation_filter.SetPadUpperBoundsMM(3.0)
# Normalize by estimating at a fixed distance between pixels where possible
attenuation_filter.SetFixedEstimationDepthMM(3.0)
# Round up any apparent negative attenuations to zero
attenuation_filter.SetConsiderNegativeAttenuations(False)
attenuation_filter.Update()
metric_image = attenuation_filter.GetOutput()
output_mask_image = attenuation_filter.GetOutputMaskImage()
plt.imshow(metric_image[time_idx,:,:], aspect='auto')
plt.title('Metric Image of Attenuations in Liver')
plt.show()
plt.imshow(output_mask_image[time_idx,:,:],aspect='auto')
plt.title('Eroded mask image')
plt.show()
"""
Explanation: Estimate Attenuation Within Liver Tissue
itk.AttenuationImageFilter defines a procedure for estimating attenuation between points in the RF spectra image. Points are selected according to several criteria:
- Attenuation is estimated between two pixels in the spectra input image, a "source" and a "target"
- A "source" and "target" pair must occupy the same scan line
- Attenuation output is a scalar image with the same size, spacing, orientation, origin as the input spectra image
- Each output pixel corresponds to an input "source" pixel
- Attenuation estimation only takes place within an updated mask. Output pixels outside of the updated mask are fixed at zero.
- The updated mask is generated by taking the input mask image and eroding it via "upper" and "lower" pad bounds
- If fixed estimation is zero (not set), the "target" pixel is selected by tracing along the scanline from the "source" pixel and taking the last pixel that lies within the updated mask.
- If fixed estimation is set, the "target" pixel is selected at a fixed depth along the scanline from the "source" pixel. If that "target" pixel lies outside the updated mask, the fixed estimation is ignored and the previously described method is used.
The output is an image of attenuation estimations.
End of explanation
"""
statistics_filter = itk.LabelStatisticsImageFilter[type(metric_image),type(output_mask_image)].New()
statistics_filter.SetInput(metric_image)
statistics_filter.SetLabelInput(output_mask_image)
statistics_filter.Update()
print(f'Count:\t{statistics_filter.GetCount(1)}\n'
f'Min:\t{statistics_filter.GetMinimum(1):0.4f}\tdB/(MHz*cm)\n'
f'Max:\t{statistics_filter.GetMaximum(1):0.4f}\tdB/(MHz*cm)\n'
f'Mean:\t{statistics_filter.GetMean(1):0.4f}\tdB/(MHz*cm)\n'
f'Sigma:\t{statistics_filter.GetSigma(1):0.4f}\tdB/(MHz*cm)\n'
f'Variance: {statistics_filter.GetVariance(1):0.4f} dB/(MHz*cm)\n'
f'Sum:\t{statistics_filter.GetSum(1):0.2f} dB/(MHz*cm)\n'
)
histogram_filter = itk.MaskedImageToHistogramFilter[type(metric_image),type(output_mask_image)].New()
histogram_filter.SetInput(metric_image)
histogram_filter.SetMaskImage(output_mask_image)
histogram_filter.SetMaskValue(1)
histogram_filter.SetHistogramSize([1e5])
histogram_filter.SetMarginalScale(10)
histogram_filter.Update()
histogram = histogram_filter.GetOutput()
size = histogram.GetSize(0)
print(f'Histogram size: {size} bins'
f' from {histogram.GetBinMin(0,0)} dB/(MHz*cm)'
f' to {histogram.GetBinMax(0,size-1):0.4f} dB/(MHz*cm)'
f' with {histogram.GetTotalFrequency()} entries')
print(f'Histogram mean is {histogram.Mean(0):0.4f}')
print(f'{histogram.GetFrequency(0, 0)} entries in bin '
f' from {histogram.GetBinMin(0,0)} dB/(MHz*cm)'
f' to {histogram.GetBinMax(0,0):0.4f} dB/(MHz*cm)')
print(f'5th percentile: {histogram.Quantile(0,0.05):0.4f} dB/(MHz*cm)\n'
f'25th percentile: {histogram.Quantile(0,0.25):0.4f} dB/(MHz*cm)\n'
f'50th percentile: {histogram.Quantile(0,0.50):0.4f} dB/(MHz*cm)\n'
f'75th percentile: {histogram.Quantile(0,0.75):0.4f} dB/(MHz*cm)\n'
f'95th percentile: {histogram.Quantile(0,0.95):0.4f} dB/(MHz*cm)')
"""
Explanation: Get Attenuation Statistics
Attenuation estimates are subject to noise and can be highly variable. ITK statistics filters can be deployed to acquire useful information regarding the masked region of estimated attenuation samples.
End of explanation
"""
|
ToqueWillot/M2DAC | FDMS/TME4/TME4_FiltrageCollaboratif-Copy1.ipynb | gpl-2.0 | def loadMovieLens(path='./data/movielens'):
#Get movie titles
movies={}
for line in open(path+'/u.item'):
id,title=line.split('|')[0:2]
movies[id]=title
# Load data
prefs={}
for line in open(path+'/u.data'):
(user,movieid,rating,ts)=line.split('\t')
prefs.setdefault(user,{})
prefs[user][movies[movieid]]=float(rating)
return prefs
data = loadMovieLens("data/ml-100k")
"""
Explanation: Collect data
End of explanation
"""
data['3']
"""
Explanation: Explore data
End of explanation
"""
def split_train_test(data, percent_test):
    test = {}
    train = {}
    for u in data.keys():
        test.setdefault(u, {})
        train.setdefault(u, {})
        for movie in data[u]:
            # Assign each rating to the test set with probability percent_test
            if random() < percent_test:
                test[u][movie] = data[u][movie]
            else:
                train[u][movie] = data[u][movie]
    return train, test
percent_test=0.2
train,test=split_train_test(data,percent_test)
"""
Explanation: Creation of train set and test set
We want to split the data into two sets (train and test).
Here:
train = 80% of the total dataset
test = 20% of the total dataset
End of explanation
"""
def get_moove(data):
moove = {}
for u in data:
for m in data[u]:
moove[m]=0
return moove
def get_youser(data):
youser = {}
for u in data:
youser[u]=0
return youser
def clean(d1,d2):
to_erase = {}
for i in d1:
try:
d2[i]
except KeyError:
to_erase[i]=0
for i in d2:
try:
d1[i]
except KeyError:
to_erase[i]=0
return to_erase
def _remove_users(test,rem):
for i in rem:
try:
del test[i]
except KeyError:
pass
def _remove_movies(test,rem):
for i in test:
for j in rem:
try:
del test[i][j]
except KeyError:
pass
mooveToRemoove = clean(get_moove(train),get_moove(test))
youserToRemoove = clean(get_youser(train),get_youser(test))
_remove_users(test,youserToRemoove)
_remove_movies(test,mooveToRemoove)
"""
Explanation: Part that cleans the train and test sets
We don't want users in the test set that are not in the train set, and the same goes for the movies, so we delete them.
End of explanation
"""
class BaselineMeanUser:
def __init__(self):
self.users={}
self.movies={}
def fit(self,train):
for user in train:
note=0
for movie in train[user]:
note+=train[user][movie]
note=note/len(train[user])
self.users[user]=note
def predict(self,user,movie):
return self.users[user]
def score(self,X):
nb_movies = len(get_moove(X))
score = 0.0
for user in X:
for movie in X[user]:
score += (self.predict(user,movie) - X[user][movie])**2
return score/nb_movies
class BaselineMeanMovie:
def __init__(self):
self.users={}
self.movies={}
def fit(self,train):
movies = get_moove(train)
for movie in movies:
note=0
cpt=0
for user in train:
try:
note+=train[user][movie]
cpt+=1
except KeyError:
pass
note=note/cpt
self.movies[movie]=note
def predict(self,user,movie):
return self.movies[movie]
def score(self,X):
nb_movies = len(get_moove(X))
score = 0.0
for user in X:
for movie in X[user]:
score += (self.predict(user,movie) - X[user][movie])**2
return score/nb_movies
baseline_mu= BaselineMeanUser()
baseline_mm= BaselineMeanMovie()
baseline_mu.fit(train)
baseline_mm.fit(train)
print("score baseline mean user ",baseline_mu.score(test))
print("score baseline mean movie ",baseline_mm.score(test))
"""
Explanation: Collaborative Filtering classes
End of explanation
"""
# NMF by alternative non-negative least squares using projected gradients
# Author: Chih-Jen Lin, National Taiwan University
# Python/numpy translation: Anthony Di Franco
from numpy import *
from numpy.linalg import norm
from time import time
from sys import stdout
def nmf(V, Winit, Hinit, tol, timelimit, maxiter):
    """
    (W,H) = nmf(V,Winit,Hinit,tol,timelimit,maxiter)
    W,H: output solution
    Winit,Hinit: initial solution
    tol: tolerance for a relative stopping condition
    timelimit, maxiter: limit of time and iterations
    """
    W = Winit; H = Hinit; initt = time()
    gradW = dot(W, dot(H, H.T)) - dot(V, H.T)
    gradH = dot(dot(W.T, W), H) - dot(W.T, V)
    initgrad = norm(r_[gradW, gradH.T])
    print('Init gradient norm %f' % initgrad)
    tolW = max(0.001, tol) * initgrad
    tolH = tolW
    for iter in range(1, maxiter):
        # stopping condition
        projnorm = norm(r_[gradW[logical_or(gradW < 0, W > 0)],
                           gradH[logical_or(gradH < 0, H > 0)]])
        if projnorm < tol * initgrad or time() - initt > timelimit: break
        (W, gradW, iterW) = nlssubprob(V.T, H.T, W.T, tolW, 1000)
        W = W.T
        gradW = gradW.T
        if iterW == 1: tolW = 0.1 * tolW
        (H, gradH, iterH) = nlssubprob(V, W, H, tolH, 1000)
        if iterH == 1: tolH = 0.1 * tolH
        if iter % 10 == 0: stdout.write('.')
    print('\nIter = %d Final proj-grad norm %f' % (iter, projnorm))
    return (W, H)

def nlssubprob(V, W, Hinit, tol, maxiter):
    """
    H, grad: output solution and gradient
    iter: #iterations used
    V, W: constant matrices
    Hinit: initial solution
    tol: stopping tolerance
    maxiter: limit of iterations
    """
    H = Hinit
    WtV = dot(W.T, V)
    WtW = dot(W.T, W)
    alpha = 1; beta = 0.1
    for iter in range(1, maxiter):
        grad = dot(WtW, H) - WtV
        projgrad = norm(grad[logical_or(grad < 0, H > 0)])
        if projgrad < tol: break
        # search step size
        for inner_iter in range(1, 20):
            Hn = H - alpha * grad
            Hn = where(Hn > 0, Hn, 0)
            d = Hn - H
            gradd = sum(grad * d)
            dQd = sum(dot(WtW, d) * d)
            suff_decr = 0.99 * gradd + 0.5 * dQd < 0
            if inner_iter == 1:
                decr_alpha = not suff_decr; Hp = H
            if decr_alpha:
                if suff_decr:
                    H = Hn; break
                else:
                    alpha = alpha * beta
            else:
                if not suff_decr or (Hp == Hn).all():
                    H = Hp; break
                else:
                    alpha = alpha / beta; Hp = Hn
    if iter == maxiter:
        print('Max iter in nlssubprob')
    return (H, grad, iter)
from numpy import *
from nmf import *
w1 = array([[1,2,3],[4,5,6]])
h1 = array([[1,2],[3,4],[5,6]])
w2 = array([[1,1,3],[4,5,6]])
h2 = array([[1,1],[3,4],[5,6]])
w3 = array([[2,2,2],[2,2,2]])
h3 = array([[2,2],[2,2],[2,2]])
# v the ratings matrix
# v = dot(w1,h1)
v = array([array([4,0]),array([4,4])])
(wo,ho) = nmf(v, w3, h3, 0.001, 10, 10)
print(wo)
print(ho)
print(v)
print(dot(wo, ho))
"""
Explanation:
End of explanation
"""
|
corochann/chainer-hands-on-tutorial | src/04_cifar_cnn/cifar10_cifar100_dataset_introduction.ipynb | mit | from __future__ import print_function
import os
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import chainer
basedir = './src/cnn/images'
"""
Explanation: CIFAR-10, CIFAR-100 dataset introduction
CIFAR-10 and CIFAR-100 are the small image datasets with its classification labeled. It is widely used for easy image classification task/benchmark in research community.
Official page: CIFAR-10 and CIFAR-100 datasets
In Chainer, CIFAR-10 and CIFAR-100 dataset can be obtained with build-in function.
End of explanation
"""
CIFAR10_LABELS_LIST = [
'airplane',
'automobile',
'bird',
'cat',
'deer',
'dog',
'frog',
'horse',
'ship',
'truck'
]
train, test = chainer.datasets.get_cifar10()
"""
Explanation: CIFAR-10
The chainer.datasets.get_cifar10 method is provided in Chainer to obtain the CIFAR-10 dataset.
The dataset is automatically downloaded from https://www.cs.toronto.edu the first time only; a cached copy is used from the second time on.
End of explanation
"""
print('len(train), type ', len(train), type(train))
print('len(test), type ', len(test), type(test))
"""
Explanation: The dataset structure is quite similar to the MNIST dataset: it is a TupleDataset.
train[i] represents the i-th data point; there are 50000 training data.
The test data structure is the same, with 10000 test data.
End of explanation
"""
print('train[0]', type(train[0]), len(train[0]))
x0, y0 = train[0]
print('train[0][0]', x0.shape, x0)
print('train[0][1]', y0.shape, y0, '->', CIFAR10_LABELS_LIST[y0])
def plot_cifar(filepath, data, row, col, scale=3., label_list=None):
fig_width = data[0][0].shape[1] / 80 * row * scale
fig_height = data[0][0].shape[2] / 80 * col * scale
fig, axes = plt.subplots(row,
col,
figsize=(fig_height, fig_width))
for i in range(row * col):
# train[i][0] is i-th image data with size 32x32
image, label_index = data[i]
image = image.transpose(1, 2, 0)
r, c = divmod(i, col)
axes[r][c].imshow(image) # cmap='gray' is for black and white picture.
if label_list is None:
axes[r][c].set_title('label {}'.format(label_index))
else:
axes[r][c].set_title('{}: {}'.format(label_index, label_list[label_index]))
axes[r][c].axis('off') # do not show axis value
plt.tight_layout() # automatic padding between subplots
plt.savefig(filepath)
plot_cifar(os.path.join(basedir, 'cifar10_plot.png'), train, 4, 5,
scale=4., label_list=CIFAR10_LABELS_LIST)
plot_cifar(os.path.join(basedir, 'cifar10_plot_more.png'), train, 10, 10,
scale=4., label_list=CIFAR10_LABELS_LIST)
"""
Explanation: train[i] represents the i-th data point, a tuple $(x_i, y_i)$, where $x_i$ is the image data and $y_i$ is the label data.
train[i][0] represents $x_i$, the CIFAR-10 image data;
this is a 3-dimensional array of shape (3, 32, 32), which represents the RGB channels, width 32 px, and height 32 px respectively.
train[i][1] represents $y_i$, the label of the CIFAR-10 image data (a scalar);
its actual label name can be looked up via CIFAR10_LABELS_LIST.
Let's see the 0-th data point, train[0], in detail.
End of explanation
"""
CIFAR100_LABELS_LIST = [
'apple', 'aquarium_fish', 'baby', 'bear', 'beaver', 'bed', 'bee', 'beetle',
'bicycle', 'bottle', 'bowl', 'boy', 'bridge', 'bus', 'butterfly', 'camel',
'can', 'castle', 'caterpillar', 'cattle', 'chair', 'chimpanzee', 'clock',
'cloud', 'cockroach', 'couch', 'crab', 'crocodile', 'cup', 'dinosaur',
'dolphin', 'elephant', 'flatfish', 'forest', 'fox', 'girl', 'hamster',
'house', 'kangaroo', 'keyboard', 'lamp', 'lawn_mower', 'leopard', 'lion',
'lizard', 'lobster', 'man', 'maple_tree', 'motorcycle', 'mountain', 'mouse',
'mushroom', 'oak_tree', 'orange', 'orchid', 'otter', 'palm_tree', 'pear',
'pickup_truck', 'pine_tree', 'plain', 'plate', 'poppy', 'porcupine',
'possum', 'rabbit', 'raccoon', 'ray', 'road', 'rocket', 'rose',
'sea', 'seal', 'shark', 'shrew', 'skunk', 'skyscraper', 'snail', 'snake',
'spider', 'squirrel', 'streetcar', 'sunflower', 'sweet_pepper', 'table',
'tank', 'telephone', 'television', 'tiger', 'tractor', 'train', 'trout',
'tulip', 'turtle', 'wardrobe', 'whale', 'willow_tree', 'wolf', 'woman',
'worm'
]
train_cifar100, test_cifar100 = chainer.datasets.get_cifar100()
"""
Explanation: CIFAR-100
CIFAR-100 is very similar to CIFAR-10. The difference is that the number of class labels is 100.
The chainer.datasets.get_cifar100 method is provided in Chainer to obtain the CIFAR-100 dataset.
End of explanation
"""
print('len(train_cifar100), type ', len(train_cifar100), type(train_cifar100))
print('len(test_cifar100), type ', len(test_cifar100), type(test_cifar100))
print('train_cifar100[0]', type(train_cifar100[0]), len(train_cifar100[0]))
x0, y0 = train_cifar100[0]
print('train_cifar100[0][0]', x0.shape) # , x0
print('train_cifar100[0][1]', y0.shape, y0)
plot_cifar(os.path.join(basedir, 'cifar100_plot_more.png'), train_cifar100,
10, 10, scale=4., label_list=CIFAR100_LABELS_LIST)
"""
Explanation: The dataset structure is quite similar to the MNIST dataset: it is a TupleDataset.
train[i] represents the i-th data point; there are 50000 training data.
The total amount of training data is the same while the number of class labels has increased,
so there is less training data per class label than in the CIFAR-10 dataset.
The test data structure is the same, with 10000 test data.
End of explanation
"""
def unpickle(file):
    import pickle
    with open(file, 'rb') as fo:
        # encoding='latin1' lets Python 3 read this Python 2 pickle
        d = pickle.load(fo, encoding='latin1')
    return d
metadata = unpickle(os.path.join('./src/cnn/assets', 'meta'))
print(metadata)
"""
Explanation: Backup code
Extracting metadata information from CIFAR-100 dataset.
Please download CIFAR-100 dataset for python from
https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz
Extract it, and put "meta" file into proper place to execute below code.
End of explanation
"""
|
JENkt4k/pynotes-general | Reformer_Text_Generation.ipynb | gpl-3.0 | # Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: <a href="https://colab.research.google.com/github/JENkt4k/pynotes-general/blob/master/Reformer_Text_Generation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
End of explanation
"""
# Grab newest JAX version.
!pip install --upgrade -q jax==0.1.57 jaxlib==0.1.37
# Make sure the Colab Runtime is set to Accelerator: TPU.
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20191206'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
!pip install --upgrade -q sentencepiece
!pip install --upgrade -q gin git+https://github.com/google/trax.git@v1.2.0
from tensorflow.compat.v1.io.gfile import GFile
import gin
import os
import jax
import trax
from trax.supervised import inputs
import numpy as onp
import jax.numpy as np
from scipy.special import softmax
from sentencepiece import SentencePieceProcessor
"""
Explanation: Reformer: Text Generation
This notebook was designed to run on TPU.
To use TPUs in Colab, click "Runtime" on the main menu bar and select Change runtime type. Set "TPU" as the hardware accelerator.
End of explanation
"""
# Import a copy of "Crime and Punishment", by Fyodor Dostoevsky
with GFile('gs://trax-ml/reformer/crime-and-punishment-2554.txt') as f:
text = f.read()
# The file read above includes metadata and licensing information.
# For training our language model, we will only use the actual novel text.
start = text.find('CRIME AND PUNISHMENT') # skip header
start = text.find('CRIME AND PUNISHMENT', start + 1) # skip header
start = text.find('CRIME AND PUNISHMENT', start + 1) # skip translator preface
end = text.rfind('End of Project') # skip extra text at the end
text = text[start:end].strip()
# Load a BPE vocabulary with 320 types. This mostly consists of single letters
# and pairs of letters, but it has some common words and word pieces, too.
!gsutil cp gs://trax-ml/reformer/cp.320.* .
TOKENIZER = SentencePieceProcessor()
TOKENIZER.load('cp.320.model')
# Tokenize
IDS = TOKENIZER.EncodeAsIds(text)
IDS = onp.asarray(IDS, dtype=onp.int32)
PAD_AMOUNT = 512 * 1024 - len(IDS)
print("Number of tokens:", IDS.shape[0])
"""
Explanation: Setting up data and model
In this notebook, we'll be pushing the limits of just how many tokens we can fit on a single TPU device. The TPUs available in Colab have 8GB of memory per core, and 8 cores. We will set up a Reformer model that can fit a copy of "Crime and Punishment" on each of the 8 TPU cores (over 500,000 tokens per 8GB of memory).
End of explanation
"""
# Set up the data pipeline.
def my_inputs(n_devices):
while True:
inputs = []
mask = []
pad_amounts = onp.random.choice(PAD_AMOUNT, n_devices)
for i in range(n_devices):
inputs.append(onp.pad(IDS, (pad_amounts[i], PAD_AMOUNT - pad_amounts[i]),
mode='constant'))
mask.append(onp.pad(onp.ones_like(IDS, dtype=onp.float32),
(pad_amounts[i], PAD_AMOUNT - pad_amounts[i]),
mode='constant'))
inputs = onp.stack(inputs)
mask = onp.stack(mask)
yield (inputs, inputs, mask)
print("(device count, tokens per device) = ",
next(my_inputs(trax.math.device_count()))[0].shape)
# Configure hyperparameters.
gin.parse_config("""
import trax.layers
import trax.models
import trax.optimizers
import trax.supervised.inputs
import trax.supervised.trainer_lib
# Parameters that will vary between experiments:
# ==============================================================================
train.model = @trax.models.ReformerLM
# Our model will have 6 layers, alternating between the LSH attention proposed
# in the Reformer paper and local attention within a certain context window.
n_layers = 6
attn_type = [
@TimeBinCausalAttention,
@LSHCausalAttention,
@TimeBinCausalAttention,
@LSHCausalAttention,
@TimeBinCausalAttention,
@LSHCausalAttention,
]
share_qk = False # LSHCausalAttention ignores this flag and always shares q & k
n_heads = 2
attn_kv = 64
dropout = 0.05
n_tokens = 524288
# Parameters for MultifactorSchedule:
# ==============================================================================
MultifactorSchedule.constant = 0.01
MultifactorSchedule.factors = 'constant * linear_warmup * cosine_decay'
MultifactorSchedule.warmup_steps = 100
MultifactorSchedule.steps_per_cycle = 900
# Parameters for Adam:
# ==============================================================================
Adam.weight_decay_rate=0.0
Adam.b1 = 0.86
Adam.b2 = 0.92
Adam.eps = 1e-9
# Parameters for TimeBinCausalAttention:
# ==============================================================================
TimeBinCausalAttention.bin_length = 64
TimeBinCausalAttention.dropout = 0.05
TimeBinCausalAttention.n_bins = None
TimeBinCausalAttention.share_qk = %share_qk
# Parameters for LSHCausalAttention:
# ==============================================================================
LSHCausalAttention.allow_duplicate_attention = False
LSHCausalAttention.attend_across_buckets = True
LSHCausalAttention.rehash_each_round = True
LSHCausalAttention.data_rotation = False
LSHCausalAttention.n_bins = 4096
LSHCausalAttention.n_buckets = 8192
LSHCausalAttention.factorize_hash = [64, 128]
LSHCausalAttention.n_hashes = 1
LSHCausalAttention.one_rng = False
LSHCausalAttention.hard_k = 0
LSHCausalAttention.dropout = 0.0
LSHCausalAttention.drop_for_hash_rate = 0.0
LSHCausalAttention.max_len_for_inference = 2048
LSHCausalAttention.bucket_capacity_for_inference = 64
# Parameters for ReformerLM:
# ==============================================================================
ReformerLM.attention_type = %attn_type
ReformerLM.d_attention_key = %attn_kv
ReformerLM.d_attention_value = %attn_kv
ReformerLM.d_model = 256
ReformerLM.d_ff = 512
ReformerLM.dropout = %dropout
ReformerLM.ff_activation = @trax.layers.Relu
ReformerLM.max_len = %n_tokens
ReformerLM.mode = 'train'
ReformerLM.n_heads = %n_heads
ReformerLM.n_layers = %n_layers
ReformerLM.vocab_size = 320
ReformerLM.share_qk = %share_qk
ReformerLM.axial_pos_shape = (512, 1024)
ReformerLM.d_axial_pos_embs= (64, 192)
""")
# Set up a Trainer.
output_dir = os.path.expanduser('~/train_dir/')
!rm -f ~/train_dir/model.pkl # Remove old model
trainer = trax.supervised.Trainer(
model=trax.models.ReformerLM,
loss_fn=trax.layers.CrossEntropyLoss,
optimizer=trax.optimizers.Adam,
lr_schedule=trax.lr.MultifactorSchedule,
inputs=trax.supervised.inputs.Inputs(my_inputs),
output_dir=output_dir,
has_weights=True)
# Run one training step, to make sure the model fits in memory.
# The first time trainer.train_epoch is called, it will JIT the entire network
# architecture, which takes around 2 minutes. The JIT-compiled model is saved
# so subsequent runs will be much faster than the first.
trainer.train_epoch(n_steps=1, n_eval_steps=1)
# Train for 600 steps total
# The first ~20 steps are slow to run, but after that it reaches steady-state
# speed. This will take at least 30 minutes to run to completion, but can safely
# be interrupted by selecting "Runtime > Interrupt Execution" from the menu.
# The language model won't be exceptionally good when trained for just a few
# steps and with minimal regularization. However, we can still sample from it to
# see what it learns.
trainer.train_epoch(n_steps=9, n_eval_steps=1)
for _ in range(59):
trainer.train_epoch(n_steps=10, n_eval_steps=1)
"""
Explanation: As we see above, "Crime and Punishment" has just over half a million tokens with the BPE vocabulary we have selected.
Normally we would have a dataset with many examples, but for this demonstration we fit a language model on the single novel only. We don't want the model to just memorize the dataset by encoding the words in its position embeddings, so at each training iteration we will randomly select how much padding to put before the text vs. after it.
We have 8 TPU cores, so we will separately randomize the amount of padding for each core.
End of explanation
"""
# As we report in the Reformer paper, increasing the number of hashing rounds
# helps with quality. We can even increase the number of hashing rounds at
# evaluation time only.
gin.parse_config("""LSHCausalAttention.n_hashes = 4""")
model_infer = trax.models.ReformerLM(mode='predict')
# Prepare a jitted copy of the model.
jit_model_infer = trax.layers.base._accelerate(
model_infer._forward_internal, trax.math.device_count())
# Set up the initial state for sampling.
infer_state = model_infer.new_weights_and_state(
trax.supervised.trainer_lib.ShapeDtype((1,1), dtype=np.int32))[1]
infer_state = trainer._for_n_devices(infer_state)
def sample(length=2048, prompt=None):
"""Sample from the ReformerLM model"""
model_weights = trainer._opt_state[0][0]
# Token id 0 is the equivalent of a "start" token
cur_inputs = np.zeros((trax.math.device_count(), 1, 1), dtype=np.int32)
cur_state = infer_state
rngs = trax.math.random.split(trax.math.random.get_prng(0), trax.math.device_count())
all_samples = []
if prompt is not None:
prompt = np.asarray(
[TOKENIZER.EncodeAsIds(prompt)] * trax.math.device_count())
for iteration in range(length):
logits, cur_state = jit_model_infer(
cur_inputs,
model_weights,
cur_state,
rngs)
if prompt is not None and iteration < prompt.shape[1]:
cur_samples = onp.array(prompt[:, iteration], dtype=int)
else:
logits = onp.array(logits)[:,0,0,:]
probs = onp.exp(logits)
cur_samples = [onp.random.choice(probs.shape[-1], p=probs[i,:])
for i in range(probs.shape[0])]
cur_samples = onp.array(cur_samples, dtype=int)
all_samples.append(cur_samples)
cur_inputs = np.array(cur_samples[:,None,None])
all_samples = onp.stack(all_samples, -1)
return all_samples
# Sample from the Reformer language model, given a prefix.
samples = sample(length=128, prompt="There was a time when")
for ids in samples:
print(TOKENIZER.DecodeIds(ids.tolist()))
"""
Explanation: Sample from the model
End of explanation
"""
|
kvr777/deep-learning | first-neural-network/dlnd-your-first-neural-network.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
"""
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
"""
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
"""
rides[:24*10].plot(x='dteday', y='cnt')
"""
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
"""
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
"""
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
"""
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
"""
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
"""
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
"""
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
"""
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
"""
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
"""
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = lambda x: 1/(1 + np.exp(-x))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer
hidden_grad = hidden_outputs *(1.0 - hidden_outputs) # hidden layer gradients
# TODO: Update the weights
self.weights_hidden_to_output += self.lr* output_errors *hidden_outputs.T # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr*np.dot((hidden_errors*hidden_grad), inputs.T) #update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)# signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression: the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
"""
import sys
### Set the hyperparameters here ###
epochs = 1000
learning_rate = 0.03
hidden_nodes = 15
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
"""
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
"""
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
"""
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
"""
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
To set up our prediction model, we need to find the right balance between accuracy on the training and test data: we want to avoid overfitting (when the model predicts well only on the training data) as well as underfitting (when the model performs poorly on both).
In our case we used 15 hidden nodes to give the network enough capacity, and reduced the learning rate to 0.03 so that gradient descent can approach the optimum more closely.
As a result, we got good accuracy on the training data (an MSE of only 0.068) but noticeably worse results on the test data (an MSE of 0.129).
If we analyse the prediction-vs-data figure, we can see that the error grows sharply when the demand pattern changes: from Dec 22 onwards the Christmas holidays begin, and our model was unable to predict this change.
The most likely reason is the limited training data: with barely two years of history (minus the 60 days held out for validation and the 21-day test period), only one full Christmas season appears in the training set, which is insufficient for non-stationary time-series data with seasonality and trends.
To address this drawback, we should extend the training dataset so that seasonal patterns are better represented.
It would also be useful to do some feature engineering and build technical indicators from the historical data (e.g. year-over-year rider counts, a one-week moving average of the number of riders, a one-month moving average, etc.). These features would feed additional signal to the network for better predictions.
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation
"""
|
zhmz90/DeepLearningCourseFromGoogle | udacity/1_notmnist.ipynb | mit | # These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
import matplotlib.pyplot as plt
import numpy as np
import os
import tarfile
import urllib
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
import cPickle as pickle
"""
Explanation: Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
End of explanation
"""
url = 'http://yaroslavvb.com/upload/notMNIST/'
def maybe_download(filename, expected_bytes):
"""Download a file if not present, and make sure it's the right size."""
if not os.path.exists(filename):
filename, _ = urllib.urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print 'Found and verified', filename
else:
raise Exception(
'Failed to verify' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
"""
Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labelled examples, and the test set about 19,000. Given these sizes, it should be possible to train models quickly on any machine.
End of explanation
"""
num_classes = 10
def extract(filename):
tar = tarfile.open(filename)
tar.extractall()
tar.close()
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
data_folders = [os.path.join(root, d) for d in sorted(os.listdir(root))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print data_folders
return data_folders
train_folders = extract(train_filename)
test_folders = extract(test_filename)
"""
Explanation: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
End of explanation
"""
from IPython.display import Image
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load(data_folders, min_num_images, max_num_images):
dataset = np.ndarray(
shape=(max_num_images, image_size, image_size), dtype=np.float32)
labels = np.ndarray(shape=(max_num_images), dtype=np.int32)
label_index = 0
image_index = 0
for folder in data_folders:
print folder
for image in os.listdir(folder):
if image_index >= max_num_images:
raise Exception('More images than expected: %d >= %d' % (
image_index, max_num_images))
image_file = os.path.join(folder, image)
#if image_index % 20000 == 0:
# display(Image(filename=image_file))
#plt.show()
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index, :, :] = image_data
labels[image_index] = label_index
image_index += 1
except IOError as e:
print 'Could not read:', image_file, ':', e, '- it\'s ok, skipping.'
label_index += 1
num_images = image_index
dataset = dataset[0:num_images, :, :]
labels = labels[0:num_images]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' % (
num_images, min_num_images))
print 'Full dataset tensor:', dataset.shape
print 'Mean:', np.mean(dataset)
print 'Standard deviation:', np.std(dataset)
print 'Labels:', labels.shape
return dataset, labels
train_dataset, train_labels = load(train_folders, 450000, 550000)
test_dataset, test_labels = load(test_folders, 18000, 20000)
from string import join
from matplotlib.pyplot import imshow
from time import sleep
num_png = 0
directory = "notMNIST_large/A"
for png in os.listdir(directory):
#print png
num_png += 1
#display(Image(os.path.join(directory,png)))
#sleep(2)
if num_png > 10:
break
#Image(filename="notMNIST_large/A/ISBKYW1pcm9xdWFpICEudHRm.png")
#np.flatnonzero()  # scratch: flatnonzero needs an array argument, e.g. np.flatnonzero(train_labels == 0)
"""
Explanation: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
Now let's load the data in a more manageable format.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. The labels will be stored into a separate array of integers 0 through 9.
A few images might not be readable, we'll just skip them.
End of explanation
"""
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
for label in range(0,10):
idxs = np.random.choice(np.flatnonzero(train_labels == label),10,replace=False)
for i,idx in enumerate(idxs):
pos = i*10+1+label
plt.subplot(10,10,pos)
plt.imshow(train_dataset[idx,])
plt.axis("off")
plt.show()
#plt.imshow(train_dataset[0,])
"""
Explanation: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
End of explanation
"""
np.random.seed(133)
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
"""
Explanation: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
End of explanation
"""
for label in range(0,10):
idxs = np.random.choice(np.flatnonzero(train_labels == label),10,replace=False)
for i,idx in enumerate(idxs):
pos = i*10+1+label
plt.subplot(10,10,pos)
plt.imshow(train_dataset[idx,])
plt.axis("off")
plt.show()
#plt.imshow(train_dataset[0,])
"""
Explanation: Problem 3
Convince yourself that the data is still good after shuffling!
End of explanation
"""
def numexample(labels,label):
return np.sum(labels == label)
for i in range(0,10):
print i,"\t",numexample(train_labels,i)
"""
Explanation: Problem 4
Another check: we expect the data to be balanced across classes. Verify that.
End of explanation
"""
train_size = 200000
valid_size = 10000
valid_dataset = train_dataset[:valid_size,:,:]
valid_labels = train_labels[:valid_size]
train_dataset = train_dataset[valid_size:valid_size+train_size,:,:]
train_labels = train_labels[valid_size:valid_size+train_size]
print 'Training', train_dataset.shape, train_labels.shape
print 'Validation', valid_dataset.shape, valid_labels.shape
"""
Explanation: Prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed.
Also create a validation dataset for hyperparameter tuning.
End of explanation
"""
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print 'Unable to save data to', pickle_file, ':', e
raise
statinfo = os.stat(pickle_file)
print 'Compressed pickle size:', statinfo.st_size
"""
Explanation: Finally, let's save the data for later reuse:
End of explanation
"""
train_dataset.shape,test_dataset.shape,valid_dataset.shape
def similar(dataset1,dataset2):
print dataset2.shape
nexm,nrow,ncol = dataset1.shape
dataset1 = np.reshape(dataset1[0:5000,],(5000,nrow*ncol))
nexm,nrow,ncol = dataset2.shape
dataset2 = np.reshape(dataset2[0:1000,],(1000,1,nrow*ncol))
dataset = dataset1 - dataset2
return dataset.T
def overlap(S):
#m = np.mean(S)
return np.sum(S == 0)
STrainVal = similar(train_dataset,valid_dataset)
print "train Val overlap: ",overlap(STrainVal)
SValTest = similar(valid_dataset,test_dataset)
print "Val Test overlap: ", overlap(SValTest)
"""
Explanation: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions:
- What about near duplicates between datasets? (images that are almost identical)
- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.
End of explanation
"""
train = np.reshape(train_dataset,(train_dataset.shape[0],28*28))
#val = valid_dataset
#test = test_dataset
train = train[0:30000,]
train_labels = train_labels[0:30000,]
clf = LogisticRegression()
clf.fit(train,train_labels)
train.shape,train_labels.shape
train_labels.shape
"""
Explanation: Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.
Optional question: train an off-the-shelf model on all the data!
End of explanation
"""
|
PWhiddy/kbmod | notebooks/Kbmod_Documentation.ipynb | bsd-2-clause | from kbmodpy import kbmod as kb
import numpy
path = "../data/demo/"
"""
Explanation: KBMOD Documentation
This notebook demonstrates the basics of the kbmod image processing and searching python API
Before importing, make sure to run
source setup.bash
in the root directory, and that you are using the python3 kernel.
Import everything with:
End of explanation
"""
p = kb.psf(1.0)
p.get_array()
"""
Explanation: Object Types
There are currently 5 classes exposed to the python interface
psf
layered_image
image_stack
stack_search
trajectory
psf
A psf kernel, for convolution and adding artificial sources to images
Initializes to a gaussian given the width in pixels
End of explanation
"""
p.get_dim() # dimension of kernel width and height
p.get_radius() # distance from center of kernel to edge
p.get_size() # total number of pixels in the kernel
p.get_sum() # total sum of all pixels in the kernel, should be close to 1.0
"""
Explanation: There are several methods that get information about its properties
End of explanation
"""
p.print_psf()
"""
Explanation: The entire kernel can be printed out (note: prints to the console, not the notebook)
End of explanation
"""
im = kb.layered_image("image2", 100, 100, 5.0, 25.0, 0.0)
# name, width, height, background_noise_sigma, variance, capture_time
"""
Explanation: layered_image
Stores the science, mask, and variance image for a single image. The "layered" means it contains all of them together.
It can be initialized 2 ways:
A. Generate a new image from scratch
End of explanation
"""
im = kb.layered_image(path+"CORR40535777.fits")
"""
Explanation: B. Load a file
End of explanation
"""
im.add_object(20.0, 35.0, 2500.0, p)
# x, y, flux, psf
"""
Explanation: Artificial objects can easily be added into a layered_image
End of explanation
"""
pixels = im.science()
pixels
"""
Explanation: The image pixels can be retrieved as a 2D numpy array
End of explanation
"""
flags = ~0
flag_exceptions = [32,39]
# mask all pixels with flags set, except those with the specified combinations
im.apply_mask_flags( flags, flag_exceptions )
"""
Explanation: The image can mask itself by providing a bitmask of flags (note: masked pixels are set to -9999.9 so they can be distinguished later from 0.0 pixels)
End of explanation
"""
im.convolve(p)
# note: This function is called internally by stack_search and doesn't need to be
# used directly. It is only exposed because it happens to be a fast
# implementation of a generally useful function
"""
Explanation: The image can be convolved with a psf kernel
End of explanation
"""
im.save_layers(path+"/out/") # file will use original name
"""
Explanation: The image at any point can be saved to a file
End of explanation
"""
im.get_width()
im.get_height()
im.get_time()
im.get_ppi() # pixels per image, width*height
"""
Explanation: Get properties
End of explanation
"""
count = 10
imlist = [ kb.layered_image("img"+str(n), 50, 50, 10.0, 5.0, n/count) for n in range(count) ]
stack = kb.image_stack( imlist )
# this creates a stack with 10 50x50 images, and times ranging from 0 to 1
"""
Explanation: image_stack
A collection of layered_images. Used to apply operations to a group of images.
End of explanation
"""
import os
files = os.listdir(path)
files = [path+f for f in files if '.fits' in f]
files.sort()
files
stack = kb.image_stack(files)
"""
Explanation: A shortcut is provided to initialize a stack automatically from a list of files
End of explanation
"""
stack.apply_master_mask( int('100111', 2), 2 ) # flags, threshold
stack.save_master_mask("mask.fits")
"""
Explanation: A master mask can be generated and applied to the stack
End of explanation
"""
stack.set_times( [0,2,3,4.5,5,6,7,10,11,14] )
"""
Explanation: Manually set the times the images in the stack were taken
End of explanation
"""
stack.apply_mask_flags(flags, flag_exceptions)
stack.convolve(p)
stack.get_width()
stack.get_height()
stack.get_ppi()
stack.get_images() # retrieves list of layered_images back from the stack
stack.get_times()
"""
Explanation: Most features of the layered_image can be used on the whole stack
End of explanation
"""
search = kb.stack_search( stack, p )
"""
Explanation: stack_search
Searches a stack of images for a given psf
End of explanation
"""
search.save_psi_phi("/home/kbmod-usr/cuda-workspace/kbmod/search/output")
"""
Explanation: To save the psi and phi images, a directory containing "psi" and "phi" folders must be specified.
End of explanation
"""
search.gpu(10, 10, 0.0, 1.0, 20.0, 50.0)
# angle_steps, velocity_steps, min_angle, max_angle, min_velocity, max_velocity
"""
Explanation: Launch a search
End of explanation
"""
search.save_results(path+"results.txt", 0.05)
# path, fraction of total results to save in file
"""
Explanation: Save the results to a file
note: format is {x, y, xv, yv, likelihood, flux}
End of explanation
"""
top_results = search.get_results(0, 100)
# start, count
"""
Explanation: Trajectories can be retrieved directly from search without writing and reading to file.
However, this is not recommended for a large number of trajectories, as it is not returned as a numpy array, but as a list of the trajectory objects described below
End of explanation
"""
best = top_results[0]
# these numbers are wild because mask flags and search parameters above were chosen randomly
best.flux
best.lh
best.x
best.y
best.x_v
best.y_v
"""
Explanation: trajectory
A simple container with properties representing an object and its path
End of explanation
"""
|