# Python 101
```
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
```
### First code in Python
#### Running (executing) a cell
Jupyter Notebooks allow code to be separated into sections that can be executed independent of one another. These sections are called "cells".
Running a cell means that you will execute the cell’s contents. To execute a cell, you can just select the cell and click the Run button that is in the row of buttons along the top. It’s towards the middle. If you prefer using your keyboard, you can just press SHIFT + ENTER
To automatically run all cells in a notebook, navigate to the "Run" tab of the menu bar at the top of JupyterLab and select "Run All Cells" (or the option that best suits your needs). When a cell is run, the cell's content is executed. Any output produced from running the cell will appear directly below it.
```
print('Hello World')
```
#### Cell status
The [ ]: symbol to the left of each Code cell describes the state of the cell:
[ ]: means that the cell has not been run yet.
[*]: means that the cell is currently running.
[1]: means that the cell has finished running and was the first cell run.
For more information on Jupyter Notebooks, have a look at the jupyter_introduction.ipynb notebook in the additional content section.
### Mathematical Operations
Now we can try some basic mathematical operations
```
22 / 9
243 + 3
4454 - 32
222 / 2
```
### Variable Assignment
In Python the '=' sign is used to assign a value to a variable. Besides the single equals sign, you can also combine it with other operators to form augmented assignments.
```
x = 5
```
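For example, the augmented assignment operators combine an arithmetic operation with assignment:

```python
x = 5
x += 3   # same as x = x + 3  -> 8
x -= 2   # same as x = x - 2  -> 6
x *= 4   # same as x = x * 4  -> 24
x //= 3  # same as x = x // 3 -> 8
print(x)
```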
### Functions
Functions are named pieces of code that perform a particular job. Functions in Python are executed by specifying their name, followed by parentheses.
```
abs(-7)
```
### Python libraries
One of the main advantages of Python is its extensive standard library (already included with Python) and the huge number of third-party libraries.
In order to use these libraries you have to import them. To do so, you just need the 'import' statement.
```
import math
math.ceil(3.445)
```
When using the import statement, Python only loads the name of the module (e.g. math) and not the names of its individual functions. <br>
If you want to use individual classes or functions within the module, you have to enter the name of the module and the name of the function separated by a dot:
```
import math
math.ceil(1.34) # math = module name , ceil = function name
```
You can also assign a function to a variable name
```
import math
ceil = math.ceil
ceil(1.34)
```
If you want to load only one or more specific functions, you can use the form from ... import ...
```
from math import ceil
from math import ceil, fabs, trunc
from math import * # import all functions of the module
```
You can also assign a new name to a module or function while importing
```
import math as m
print(m.ceil(1.34))
from math import ceil as c
print(c(1.34))
```
### Installing libraries
If you want to install an external library, you can do this via pip or conda (the leading '!' runs a shell command from within a notebook)
```
!pip install geopandas
!conda install geopandas
```
### Help
If you want to know more about a function or library and what exactly it does, you can use the 'help()' function.
```
import geopandas as gpd
help(gpd)
import geopandas as gpd
help(gpd.read_file)
```
### Data types
These classes are the basic building blocks of Python
|Type | Meaning | Mutability | Examples |
|-----|---------|------------|----------|
| int | Integer | immutable | 1, -10, 0 |
| float | Float | immutable | 3.2, 100.0, -9.9 |
| bool | Boolean | immutable | True, False |
| str | String | immutable | "Hello!", "a", "I like Python" |
| list | List | mutable | [1,2,3] |
| tuple | Tuple | immutable | (1, 2) |
| dict | Dictionary | mutable | {"a": 2} |
| set | Set | mutable | {"a", "b"} |
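You can check which of these types a value has with the built-in type() function:

```python
print(type(1))          # <class 'int'>
print(type(3.2))        # <class 'float'>
print(type(True))       # <class 'bool'>
print(type("Hello!"))   # <class 'str'>
print(type([1, 2, 3]))  # <class 'list'>
print(type((1, 2)))     # <class 'tuple'>
print(type({"a": 2}))   # <class 'dict'>
print(type({"a", "b"})) # <class 'set'>
```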
### Numbers
Python can handle several types of numbers, but the two most common are:
- int, which represents integer values like 100, and
- float, which represents numbers that have a fraction part, like 0.5
```
population = 127880
latitude = 49.79391
longitude = 9.95121
print(type(population))
print(type(latitude))
area = 87.63
density = population / area
print(density)
87**2
```
Below is a list of operations for these built-in numeric types:
| Operation | Result |
|---------------|--------------|
|x + y |sum of x and y|
|x - y |difference of x and y|
|x * y |product of x and y|
|x / y |quotient of x and y|
|x // y |(floored) quotient of x and y |
|x % y |remainder of x / y |
|-x |x negated |
|+x |x unchanged |
|abs(x) |absolute value or magnitude of x |
|int(x) |x converted to integer |
|float(x) |x converted to floating point |
|complex(re, im) |a complex number with real part re, imaginary part im (im defaults to zero) |
|c.conjugate() |conjugate of the complex number c |
|divmod(x, y) |the pair (x // y, x % y) |
|pow(x, y) |x to the power y |
|x ** y |x to the power y |
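A few of these operations in action; note that in Python 3, / always returns a float, while // floors the quotient:

```python
print(7 / 2)         # 3.5
print(7 // 2)        # 3
print(7 % 2)         # 1
print(divmod(7, 2))  # (3, 1)
print(2 ** 10)       # 1024
print(abs(-3.5))     # 3.5
```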
### Booleans and comparison
Another type in Python is the so-called boolean type. This type has two values: True and False. Booleans can be assigned to a variable name or created, for example, when comparing values using the equality operator '=='. Other comparison operators are the not-equal operator '!=' and the operators for greater '>' or smaller '<'.
```
x = True
print(x)
y = False
print(y)
city_1 = 'Wuerzburg'
pop_1 = 127880
region_1 = 'Bavaria'
city_2 = 'Munich'
pop_2 = 1484226
region_2 = 'Bavaria'
print(pop_1 == pop_2)
print(region_1 == region_2)
print(pop_1 >= pop_2)
print(city_1 != city_2)
```
### Strings
If you want to use text in Python, you have to use 'strings'. A string is created by writing your desired text between single ('...') or double ("...") quotation marks. For printing text (or numbers) use the 'print()' function.
```
type("Spatial Python")
```
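Single and double quotation marks are interchangeable, and print() writes the text to the output:

```python
a = 'Spatial Python'
b = "Spatial Python"
print(a)
print(a == b)  # True: the quote style does not matter
```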
Other data types can be converted to a string using the str function:
```
"Sentinel2_" + "B" + str(1) + ".tif"
pop = 127880
'The population of Würzburg is ' + str(pop)
```
Of course strings can also be converted to numbers
```
int("11")
float("42.2")
```
Strings can be concatenated with the + operator
```
"Sentinel2_" + "B" + str(1) + ".tif"
```
Besides the + operator Python also has some more advanced formatting methods
```
x=1
f"Sentinel2_B{x}.tif"
"Sentinel2_B{x}.tif".format(x=1)
```
Python also provides many built-in functions and methods for strings. Below are just a few examples
| Function/Methods Name | Description |
|---------------|------------|
| capitalize() | Converts the first character of the string to a capital (uppercase) letter |
| count()| Returns the number of occurrences of a substring in the string. |
| encode()| Encodes the string with the specified encoding scheme |
| endswith()| Returns “True” if a string ends with the given suffix |
| find()| Returns the lowest index of the substring if it is found |
| format()| Formats the string for printing it to console |
| index()| Returns the position of the first occurrence of a substring in a string |
| isalnum()| Returns "True" if all characters in the string are alphanumeric |
| isalpha()| Returns "True" if all characters in the string are alphabetic |
| isdecimal()| Returns "True" if all characters in the string are decimal digits |
| isnumeric()| Returns "True" if all characters in the string are numeric characters |
| isprintable()| Returns "True" if all characters in the string are printable or the string is empty |
| isupper()| Checks if all characters in the string are uppercase |
| join()| Returns a concatenated String |
| lower()| Converts all uppercase characters in a string into lowercase |
| replace()| Replaces all occurrences of a substring with another substring |
| startswith()| Returns “True” if a string starts with the given prefix |
| strip()| Returns the string with both leading and trailing whitespace (or given characters) removed |
| swapcase()| Converts all uppercase characters to lowercase and vice versa |
| title()| Convert string to title case |
| translate()| Modify string according to given translation mappings |
| upper()| Converts all lowercase characters in a string into uppercase |
| zfill()| Returns a copy of the string with ‘0’ characters padded to the left side of the string |
```
string = "Hello World"
string.upper()
string.replace('Hello', 'My')
string.find('l')
string.count('l')
```
Strings in Python can be accessed by index or sliced
```
string[2] # get the third character
string[1:5] # slice from position 1 (included) to 5 (excluded)
string[-5] # negative indices count from the end (fifth character from the end)
string[2:] # from 2 (included) to the end
string[:2] # from 0 to 1
string[-1] # last character
```
<img src="images/indexing.png" width=600 />
### Lists
Another data type is the so-called list. Lists can be created by putting several comma-separated values between square brackets. You can use lists to store sequences of values, which can be of the same or different data types.
```
letter_list = ['a','b','c','d','e','f'] #list of strings
letter_list
list_of_numbers = [1,2,3,4,5,6,7] #list of numbers
list_of_numbers
mixed_list = ['hello', 2.45, 3, 'a', -.6545+0J] #mixing different data types
mixed_list
```
Similar to strings, accessing values in a list can be done using indexing or slicing.
```
random = [1, 2, 3, 4, 'a', 'b','c','d']
random[2]
print(random[1:5]) # slice from 1 (included) to 5 (excluded)
print(random[-5]) # count from behind
print(random[2:]) # from 2 (included) to end
print(random[:2]) # from begin to 2 (!not included!)
```
You can also update a list with one or more elements by giving the slice on the left-hand side. It´s also possible to append new elements with the append() method or delete list elements with the del statement.
```
cities = ['Berlin', 'Paris','London','Madrid','Lisboa']
cities[3]
# Update list
cities[3] = 'Rome'
cities
# deleting elements
del(cities[3])
cities
# append elements
cities.append('Vienna')
cities
```
There are many different ways to interact with lists. Exploring them is part of the fun of Python.
| Function/Method Name | Description |
|---------------|-------------|
| list.append(x) | Add an item to the end of the list. Equivalent to a[len(a):] = [x]. |
| list.extend(L) | Extend the list by appending all the items in the given list. Equivalent to a[len(a):] = L. |
| list.insert(i, x) | Insert an item at a given position. The first argument is the index of the element before which to insert, so a.insert(0, x) inserts at the front of the list, and a.insert(len(a), x) is equivalent to a.append(x). |
| list.remove(x) | Remove the first item from the list whose value is x. It is an error if there is no such item. |
| list.pop([i]) | Remove the item at the given position in the list, and return it. If no index is specified, a.pop() removes and returns the last item in the list. |
| list.clear() | Remove all items from the list. Equivalent to del a[:]. |
| list.index(x) | Return the index in the list of the first item whose value is x. It is an error if there is no such item. |
| list.count(x) | Return the number of times x appears in the list. |
| list.sort() | Sort the items of the list in place. |
| list.reverse() | Reverse the elements of the list in place. |
| list.copy() | Return a shallow copy of the list. Equivalent to a[:]. |
| len(list) | Returns the number of items in the list. |
| max(list) | Returns the largest item or the largest of two or more arguments. |
| min(list) | Returns the smallest item or the smallest of two or more arguments. |
```
temp = [3.4,4.3,5.6,0.21,3.0]
len(temp)
min(temp)
temp.sort()
temp
```
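A few more methods from the table, shown on a small list:

```python
temp = [3.4, 4.3, 5.6]
temp.insert(0, 1.1)      # insert at position 0 (the front)
temp.extend([0.2, 3.0])  # append all items of another list
last = temp.pop()        # remove and return the last item
print(temp)  # [1.1, 3.4, 4.3, 5.6, 0.2]
print(last)  # 3.0
```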
### Tuples
Tuples are sequences, just like lists. The difference is that tuples are immutable, which means that they can´t be changed like lists. Tuples can be created without brackets (optionally you can use parentheses).
```
tup1 = "a", "b","c"
tup2 = (1, 2, 3, 4, 5 )
tup3 = ("a", "b", "c", "d",2)
tup1
tup2
tup3
```
You can access elements in the same way as lists. But due to the fact that tuples are **immutable**, you cannot update or change the values of tuple elements.
```
# Access elements
tup1 = 1, 2, 3, 4, 'a', 'b','c','d'
tup1[2]
tup1[1:5] # slice from 1 (included) to 5 (excluded)
tup1[-5] # count from behind
tup1[2:] # from 2 (included) to end
tup1[:2] # from begin to 2 (!not included!)
tup1 = (123,4554,5,454, 34.56)
tup2 = ('abc','def', 'ghi' ,'xyz')
tup1[0] = 10 # not allowed for tuples: this raises a TypeError
```
Tuples also have some built-in functions
```
tup1 = (1,2,1,4,3,4,5,6,7,8,8,9)
len(tup1)
min(tup1)
max(tup1)
tup1.count(8)
```
#### Why use tuples at all?
- Tuples are faster than lists and need less memory
- It makes your code safer if you “write-protect” data that does not need to be changed.
- Tuples can be used as dictionary keys
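For example, because tuples are immutable, they can serve as dictionary keys, e.g. for coordinate pairs:

```python
# (latitude, longitude) tuples as dictionary keys
temps = {(49.79, 9.95): 15.5, (48.14, 11.58): 17.0}
print(temps[(49.79, 9.95)])  # 15.5
```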
### Dictionaries
Strings, lists and tuples are so-called sequential data types. Dictionaries belong to Python's built-in mapping type. Sequential data types use integers as indices to access the values they contain. Dictionaries allow you to use keys instead. The values of a dictionary can be of any type, but the keys must be of an immutable data type (strings, numbers, tuples).
Dictionaries are constructed with curly brackets {}. Keys and values are separated by colons ':' and square brackets are used to index them. Only one entry per key is allowed, which also means that duplicate keys are not allowed.
<img src="images/dict.png" width=600 />
```
city_temp = {'City': 'Dublin', 'MaxTemp': 15.5, 'MinTemp': 5.0}
```
Dictionary values are accessible through the keys
```
city_temp['City']
city_temp['MaxTemp']
city_temp['MinTemp']
```
Dictionaries can be updated and elements can be removed.
```
# Update dictionaries
city_temp = {'City': 'Dublin', 'MaxTemp': 15.5, 'MinTemp': 5.0}
city_temp['MaxTemp'] = 15.65 # update existing entry
city_temp['Population'] = 544107 # add new entry
city_temp['MaxTemp']
city_temp['Population']
```
Of course we can also use more than one value per key
```
city_temp = {'City': ['Dublin','London'], 'MaxTemp': [15.5,12.5], 'MinTemp': [15.5,12.5]}
```
If we access a key with multiple values we get back a list
```
city_temp['MaxTemp']
city_temp['City'][1][0]
```
A few examples of built-in functions and methods:
| Function/Method | Description|
| ----------------| -----------|
| clear()| Removes all the elements from the dictionary|
| copy()| Returns a copy of the dictionary|
| fromkeys()| Returns a dictionary with the specified keys and value|
| get() | Returns the value of the specified key|
| items()| Returns a view object containing a tuple for each key-value pair|
| keys()| Returns a view object containing the dictionary's keys|
| pop() | Removes the element with the specified key|
| popitem()| Removes the last inserted key-value pair|
| setdefault()| Returns the value of the specified key. If the key does not exist: insert the key, with the specified value|
| update()| Updates the dictionary with the specified key-value pairs|
| values()| Returns a view object of all the values in the dictionary|
```
city_temp.keys()
city_temp.values()
len(city_temp)
len(city_temp['City'])
max(city_temp['MaxTemp'])
min(city_temp['MinTemp'])
```
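A few more methods from the table; get() is useful because it returns a default value instead of raising a KeyError for missing keys:

```python
city_temp = {'City': 'Dublin', 'MaxTemp': 15.5}
print(city_temp.get('MinTemp', 'unknown'))  # 'unknown' instead of a KeyError
city_temp.update({'MinTemp': 5.0})          # add/overwrite entries
for key, value in city_temp.items():        # iterate over key-value pairs
    print(key, value)
```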
### Indentation
A Python program is structured through indentation. Indentation is used to separate different code blocks. This makes it easier to read and understand your own code and the code of others. While in other programming languages indentation is a matter of style, in Python it´s a language requirement.
```
def letterGrade(score):
    if score >= 90:
        letter = 'A'
    else:  # grade must be B, C, D or F
        if score >= 80:
            letter = 'B'
        else:  # grade must be C, D or F
            if score >= 70:
                letter = 'C'
            else:  # grade must be D or F
                if score >= 60:
                    letter = 'D'
                else:
                    letter = 'F'
    return letter

letterGrade(9)
```
## Control flow statements
### While, if, else
Decision making is required when we want to execute code only if a certain condition holds, i.e. some statements are only carried out if an expression is True. The 'while' statement, covered further below, repeatedly tests a given expression and executes its code block as long as the expression is True.
```
password = "datacube"
attempt = input("Enter password: ")
if attempt == password:
    print("Welcome")
```
In this case, the if statement is used to evaluate the input of the user, and the following code block will only be executed if the expression is True. If the expression is False, the statement(s) are not executed.
But if we want the program to do something else when the if-statement evaluates to False, we can add the 'else' statement.
```
password = "python2017"
attempt = input("Enter password: ")
if attempt == password:
    print("Welcome")
else:
    print("Incorrect password!")
```
You can also use multiple if...else statements nested into each other
```
passlist = ['1223','hamster','mydog','python','snow']
name = input("What is your username? ")
if name == 'Steve':
    password = input("What’s the password? ")
    # did they enter the correct password?
    if password in passlist:
        print("Welcome {0}".format(name))
    else:
        print("Incorrect password")
else:
    print("No valid username")
```
In the next example we want a program that evaluates more than two possible outcomes. For this, we will use an else-if statement, which in Python is written as 'elif'.
```
name = input("What is your username? ")
password = input("What’s the password? ")
if name == 'Steve':
    if password == 'kingofthehill':
        print("Welcome {0}".format(name))
    else:
        print("Incorrect password")
elif name == 'Insa':
    if password == 'IOtte123':
        print("Welcome {0}".format(name))
    else:
        print("Incorrect password")
elif name == 'Johannes':
    if password == 'RadarLove':
        print("Welcome {0}".format(name))
    else:
        print("Incorrect password")
else:
    print("No valid username")
```
Sometimes you want a specific code block to be carried out repeatedly. This can be accomplished by creating so-called loops. A loop allows you to execute a statement or even a group of statements multiple times. For example, the 'while' statement allows you to run the code within the loop as long as the expression is True.
```
count = 0
while count < 9:
    print('The count is:', count)
    count += 1
print("Count maximum is reached!")
```
You can also create infinite loops
```
var = 1
while var == 1:  # this constructs an infinite loop
    num = int(input("Enter a number: "))
    print("You entered: ", num)
print('Thanks')
```
Just like the 'if-statement', you can also combine 'while' with 'else'
```
count = 0
while count < 12:
    print(count, " is less than 12")
    count = count + 1
else:
    print(count, " is not less than 12")
```
Another statement you can use is break. It terminates the enclosing loop. A premature termination of the current loop can be useful when some external condition is triggered requiring an exit from the loop.
```
import random

number = random.randint(1, 15)
number_of_guesses = 0
while number_of_guesses < 5:
    print('Guess a number between 1 and 15:')
    guess = input()
    guess = int(guess)
    number_of_guesses = number_of_guesses + 1
    if guess < number:
        print('Your guess is too low')
    if guess > number:
        print('Your guess is too high')
    if guess == number:
        break
if guess == number:
    print('You guessed the number in ', number_of_guesses, ' tries!')
else:
    print('You did not guess the number. The number was ', number)
```
Sometimes you want to perform code on each item in a list. This can be accomplished with a while loop and a counter variable.
```
words = ['one', 'two', 'three', 'four', 'five']
count = 0
max_count = len(words) - 1
while count <= max_count:
    word = words[count]
    print(word + '!')
    count = count + 1
```
Using a while loop for iterating through a list requires quite a lot of code. Python also provides the for-loop as a shortcut that accomplishes the same task. Let´s do the same as above with a for-loop
```
words = ['one', 'two', 'three', 'four', 'five']
for index, value in enumerate(words):
    print(index)
    print(value + '!')
```
If you want to repeat some code a certain number of times, you can combine the for-loop with a range object.
```
range(9)
for i in range(9):
    print(str(i) + " !")
```
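range() also accepts a start value and a step size:

```python
for i in range(2, 10, 2):  # start at 2, stop before 10, step 2
    print(i)               # 2, 4, 6, 8
print(list(range(5, 0, -1)))  # a negative step counts down: [5, 4, 3, 2, 1]
```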
Now we can use our gained knowledge to program e.g. a simple calculator.
```
print("1.Add")
print("2.Subtract")
print("3.Multiply")
print("4.Divide")
# Take input from the user
choice = input("Enter choice(1/2/3/4):")
num1 = int(input("Enter first number: "))
num2 = int(input("Enter second number: "))
if choice == '1':
    result = num1 + num2
    print("{0} + {1} = {2}".format(num1, num2, result))
elif choice == '2':
    result = num1 - num2
    print("{0} - {1} = {2}".format(num1, num2, result))
elif choice == '3':
    result = num1 * num2
    print("{0} * {1} = {2}".format(num1, num2, result))
elif choice == '4':
    result = num1 / num2
    print("{0} / {1} = {2}".format(num1, num2, result))
else:
    print("Invalid input")
```
#### Comprehensions
Comprehensions are constructs that allow sequences to be built from other sequences. Let's assume we have a list with temperature values in Celsius and we want to convert them to Fahrenheit
```
T_in_celsius = [3, 12, 18, 9, 10, 20]
```
We could write a for loop for this problem
```
fahrenheit = []
for temp in T_in_celsius:
    temp_fahr = (temp * 9 / 5) + 32
    fahrenheit.append(temp_fahr)
fahrenheit
```
Or, we could use a list comprehension:
```
fahrenheit = [(temp * 9 / 5) + 32 for temp in T_in_celsius]
fahrenheit
```
We can also go one step further and include an if statement
```
# Pythagorean triple
# consists of three positive integers a, b, and c, such that a**2 + b**2 = c**2
[(a,b,c) for a in range(1,30) for b in range(1,30) for c in range(1,30) if a**2 + b**2 == c**2]
```
You can even create nested comprehensions
```
matrix = [[j * j+i for j in range(4)] for i in range(3)]
matrix
```
Of course you can also use comprehensions for dictionaries
```
fruits = ['apple', 'mango', 'banana','cherry']
{f:len(f) for f in fruits}
```
### Functions
In this chapter, we will learn how to write your own functions. A function can be used as a kind of structuring element in programming languages to group a set of statements so you can reuse them. Decreasing code size by using functions makes it more readable and easier to maintain. And of course it saves a lot of typing. In Python a function call is a statement consisting of a function name followed by information in parentheses. You have already used functions in the previous chapters
A function consists of several parts:
- **Name**: What you call the function by
- **Parameters**: You can provide functions with variables.
- **Docstring**: A docstring allows you to write a little documentation where you explain how the function works
- **Body**: This is where the magic happens, as here is the place for the code itself
- **Return values**: You usually create functions to do something that creates a result.
<img src="images/function.png" width=600 />
Let´s start with a very simple function
```
def my_function():
    print("I love python!")

my_function()
```
You can also create functions which receive arguments
```
def function1(value, value2=5):
    return value**2 + value2

function1(value=4, value2=10000)
```
Or pass functions as arguments to other functions
```
def area(width, height, func):
    print("Area: {0}".format(func(width * height)))

x = area(width=4, height=6, func=function1)
```
If the function should return a result (not only print it), you can use the 'return' statement. The return statement exits the function and can contain an expression which gets evaluated, or a value that is returned. If there is no expression or value, the function returns the 'None' object
```
def fahrenheit(T_in_celsius):
    """ returns the temperature in degrees Fahrenheit """
    return (T_in_celsius * 9 / 5) + 32

x = fahrenheit(35)
x
```
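A function without a return statement (or with a bare return) gives back the None object:

```python
def greet(name):
    print("Hello " + name)  # prints, but returns nothing

result = greet("Python")
print(result)  # None
```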
Now we can rewrite our simple calculator. But this time we define our functions up front.
```
def add(x, y):
    return x + y

def diff(x, y):
    return x - y

def multiply(x, y):
    return x * y

def divide(x, y):
    return x / y

print("1.Add")
print("2.Subtract")
print("3.Multiply")
print("4.Divide")
# Take input from the user
# (int()/float() instead of eval(): eval runs arbitrary code and is unsafe on user input)
choice = int(input("Enter choice(1/2/3/4):"))
num1 = float(input("Enter first number: "))
num2 = float(input("Enter second number: "))
if choice == 1:
    result = add(num1, num2)
    print("{0} + {1} = {2}".format(num1, num2, result))
elif choice == 2:
    result = diff(num1, num2)
    print("{0} - {1} = {2}".format(num1, num2, result))
elif choice == 3:
    result = multiply(num1, num2)
    print("{0} * {1} = {2}".format(num1, num2, result))
elif choice == 4:
    result = divide(num1, num2)
    print("{0} / {1} = {2}".format(num1, num2, result))
else:
    print("Invalid input")
```
# Literature
For this script I mainly used the following sources:
<br>[1] https://docs.python.org/3/
<br>[2] https://www.tutorialspoint.com/python/python_lists.htm
<br>[3] https://www.datacamp.com
<br>[4] https://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/index.html
<br>[5] Python - kurz und gut (2014) Mark Lutz
# Working with Data in OpenCV
Now that we have whetted our appetite for machine learning, it is time to delve a little
deeper into the different parts that make up a typical machine learning system.
Machine learning is all about building mathematical models in order
to understand data. The learning aspect enters this process when we give a machine
learning model the capability to adjust its **internal parameters**; we can tweak these
parameters so that the model explains the data better. In a sense, this can be understood as
the model learning from the data. Once the model has learned enough—whatever that
means—we can ask it to explain newly observed data.
Hence machine learning problems are always split into (at least) two
distinct phases:
- A **training phase**, during which we aim to train a machine learning model on a set of data that we call the **training dataset**.
- A **test phase**, during which we evaluate the learned (or finalized) machine learning model on a new set of never-before-seen data that we call the **test dataset**.
The importance of splitting our data into a training set and a test set cannot be overstated.
We always evaluate our models on an independent test set because we are interested in
knowing how well our models generalize to new data. In the end, isn't this what learning is
all about—be it machine learning or human learning?
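As a minimal sketch of this split (using plain NumPy here; the arrays and the 80/20 ratio are just illustrative choices, not a prescribed API):

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.arange(100).reshape(50, 2)  # 50 samples with 2 features (dummy data)
y = np.arange(50)                  # dummy labels

# Shuffle the sample indices, then hold out the last 20% as the test set
indices = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, test_idx = indices[:split], indices[split:]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
print(X_train.shape, X_test.shape)  # (40, 2) (10, 2)
```

In practice, scikit-learn's train_test_split does the same in a single call.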
Machine learning is also all about the **data**.
Data can be anything from images and movies to text
documents and audio files. Therefore, in its raw form, data might be made of pixels, letters,
words, or even worse: pure bits. It is easy to see that data in such a raw form might not be
very convenient to work with. Instead, we have to find ways to **preprocess** the data in order
to bring it into a form that is easy to parse.
In this chapter, we want to learn how data fits in with machine learning, and how to work with data using the tools of our choice: OpenCV and Python.
Specifically, we want to address the following questions:
- What does a typical machine learning workflow look like?
- What are training data, validation data, and test data - and what are they good for?
- How do I load, store, and work with such data in OpenCV using Python?
## Outline
- [Dealing with Data Using Python's NumPy Package](02.01-Dealing-with-Data-Using-Python-NumPy.ipynb)
- [Loading External Datasets in Python](02.02-Loading-External-Datasets-in-Python.ipynb)
- [Visualizing Data Using Matplotlib](02.03-Visualizing-Data-Using-Matplotlib.ipynb)
- [Visualizing Data from an External Dataset](02.04-Visualizing-Data-from-an-External-Dataset.ipynb)
- [Dealing with Data Using OpenCV's TrainData container in C++](02.05-Dealing-with-Data-Using-the-OpenCV-TrainData-Container-in-C%2B%2B.ipynb)
## Starting a new IPython or Jupyter session
Before we can get started, we need to open an IPython shell or start a Jupyter Notebook:
1. Open a terminal like we did in the previous chapter, and navigate to the `Machine-Learning-for-OpenCV-Second-Edition` directory:
```
$ cd Desktop/Machine-Learning-for-OpenCV-Second-Edition
```
2. Activate the conda environment we created in the previous chapter:
```
$ source activate OpenCV-ML # Mac OS X / Linux
$ activate OpenCV-ML # Windows
```
3. Start a new IPython or Jupyter session:
```
$ ipython # for an IPython session
$ jupyter notebook # for a Jupyter session
```
If you chose to start an IPython session, the program should have greeted you with a
welcome message such as the following:
```
$ ipython
Python 3.6.0 | packaged by conda-forge | (default, Feb 9 2017, 14:36:55)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.2.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]:
```
The line starting with `In [1]` is where you type in your regular Python commands. In
addition, you can also use the Tab key while typing the names of variables and functions in
order to have IPython automatically complete them.
If you chose to start a Jupyter session, a new window should have opened in your web
browser that is pointing to http://localhost:8888. You want to create a new notebook by
clicking on New in the top-right corner and selecting Notebooks (Python3).
This will open a new window that contains an empty page with the same command line as in an IPython session:
In [ ]:
```
# Show the uploaded dataset
!ls /home/aistudio/data/data113944/
!unzip /home/aistudio/data/data113944/steel_bug_detect.zip
# Prepare the PaddlePaddle environment
!git clone https://gitee.com/paddlepaddle/PaddleDetection.git
%cd PaddleDetection
# Install other dependencies
! pip install paddledet==2.0.1 -i https://mirror.baidu.com/pypi/simple
! pip install -r requirements.txt
# Copy the configured model and parameters
!cp -r ../model/* ../steel_bug_detect/train/
# Copy the trained model weights
!cp -r ../pdparams/* ./
#%cd PaddleDetection/
!pwd
#weights: output/ppyolov2_r50vd_dcn_voc/model_final
!pwd
import os
for temp in os.listdir('../steel_bug_detect/test/ppyolov2_result'):
    print(temp)
    break
# Train using train.py
#!python ./tools/train.py -c ../steel_bug_detect/train/yolov3_mobilenet_v1_270e_voc.yml --eval
!python ./tools/train.py -c ../steel_bug_detect/train/ppyolov2_r50vd_dcn_voc.yml --eval --use_vdl=True
#!pip install visualdl==2.0.4
# !visualdl service upload --model ./vdl_log_dir/scalar/vdlrecords.1635757418.log
# Run inference
#!python ./tools/infer.py -c ../steel_bug_detect/train/yolov3_mobilenet_v1_270e_voc.yml -o weights=./output/yolov3_mobilenet_v1_270e_voc/best_model.pdparams --infer_dir=../steel_bug_detect/test/IMAGES
#!python ./tools/infer.py -c ../steel_bug_detect/train/ppyolov2_r50vd_dcn_voc.yml -o weights=./output/ppyolov2_r50vd_dcn_voc/best_model.pdparams --infer_dir=../steel_bug_detect/test/IMAGES
!python ./tools/infer.py -c ../steel_bug_detect/train/ppyolov2_r50vd_dcn_voc.yml -o weights=./output/ppyolov2_r50vd_dcn_voc/1409.pdparams --infer_dir=../steel_bug_detect/test/IMAGES --output_dir=../steel_bug_detect/test/ppyolov2_result --save_txt=True --draw_threshold=0.1
#!pwd
%cd PaddleDetection
import os
import pandas as pd
result = pd.DataFrame(columns=['image_id', 'bbox', 'category_id', 'confidence'])
result_map = {'crazing':0,'inclusion':1, 'pitted_surface':2, 'scratches':3, 'patches':4, 'rolled-in_scale':5}
file_path = '../steel_bug_detect/test/ppyolov2_result/'
def work(temp):
global result
with open(file_path + temp, 'r') as file:
lines = file.readlines()
#result.split(' ')
image_id = temp.replace('.jpg.txt', '')
image_id = temp.replace('.txt', '')
image_id
#print('lines=', lines)
for line in lines:
#xmin, ymin, xmax, ymax, category_id, confidence, _ = line.split(' ')
line = line.replace('\n', '')
# print(line)
category, confidence, xmin, ymin, w, h = line.split(' ')
xmax = float(xmin) + float(w)
ymax = float(ymin) + float(h)
# print(category)
category_id = result_map[category]
xmin, ymin, xmax, ymax = int(float(xmin)), int(float(ymin)), int(float(xmax)), int(float(ymax))
if xmin > xmax:
temp = xmin
xmin = xmax
xmax = temp
if ymin > ymax:
temp = ymin
ymin = ymax
ymax = temp
if ymin <= 0:
ymin = 1
bbox = [xmin, ymin, xmax, ymax]
temp_result = {}
temp_result['image_id'] = image_id
temp_result['bbox'] = bbox
temp_result['category_id'] = category_id
temp_result['confidence'] = confidence
        result = pd.concat([result, pd.DataFrame([temp_result])], ignore_index=True)
#print(xmin, ymin, xmax, ymax, category_id, confidence)
for temp in os.listdir(file_path):
if temp[-4:] == '.txt':
work(temp)
result = result.sort_values(by="image_id")
result.to_csv('./baseline_ppyolov2.csv', index=False)
result
# Prediction
# !python ./tools/infer.py -c ../steel_bug_detect/train/yolov3_mobilenet_v1_270e_voc.yml -o weights=./output/yolov3_mobilenet_v1_270e_voc/best_model.pdparams --infer_dir=../steel_bug_detect/test/IMAGES --save_txt True
pwd
# Also add the following code, so that every time the environment (kernel)
# starts, you only need to run the code below:
import sys
sys.path.append('/home/aistudio/external-libraries')
```
Please click [here](https://ai.baidu.com/docs#/AIStudio_Project_Notebook/a38e5576) for more detailed instructions.
# Exploring the UTx000 Extension Beiwe Data
(Known as BPEACE2 in the [GH repo](https://github.com/intelligent-environments-lab/utx000))
# Determining if participants were home when completing EMAs
We want to use the GPS data and timestamps of completed EMAs to see if the participant was home when they submitted their EMA.
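The core of that check is a distance threshold against the home coordinates. A minimal sketch of the idea, using a haversine approximation and hypothetical coordinates (the notebook itself computes distances with `geopy.distance` against real addresses later):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def was_home(fix, home, radius_m=100):
    """True if a GPS fix falls within radius_m of the home coordinates."""
    return haversine_m(fix[0], fix[1], home[0], home[1]) < radius_m

# hypothetical home coordinates, for illustration only
home = (30.2849, -97.7341)
print(was_home((30.2850, -97.7342), home))  # a few meters away -> True
print(was_home((30.2672, -97.7431), home))  # roughly 2 km away -> False
```

The 100 m radius mirrors the default used in the filtering function further down.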
```
import warnings
warnings.filterwarnings('ignore')
```
# Package Import
```
import sys
sys.path.append('../')
from src.features import build_features
from src.visualization import visualize
import pandas as pd
pd.set_option('display.max_columns', 200)
import numpy as np
from datetime import datetime, timedelta
import math
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import geopy.distance
```
# Data Import
## GPS Data
The GPS data are available in the ```processed``` directory and already downsampled to 1-minute increments.
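For reference, that kind of 1-minute downsampling can be sketched with `DataFrame.resample`; the timestamps and coordinates below are made up:

```python
import pandas as pd

# hypothetical raw GPS fixes at irregular, sub-minute intervals
raw = pd.DataFrame(
    {"lat": [30.2849, 30.2850, 30.2851, 30.2860],
     "long": [-97.7341, -97.7342, -97.7340, -97.7350]},
    index=pd.to_datetime(["2020-05-01 00:00:05", "2020-05-01 00:00:35",
                          "2020-05-01 00:01:10", "2020-05-01 00:02:40"]),
)
raw.index.name = "timestamp"

# downsample to 1-minute increments by averaging the fixes within each minute
gps_1min = raw.resample("1min").mean().dropna()
print(gps_1min)
```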
```
gps = pd.read_csv('../data/processed/beiwe-gps-ux_s20.csv', index_col="timestamp", parse_dates=True, infer_datetime_format=True)
gps.drop(["utc","altitude","accuracy"],axis="columns",inplace=True)
gps.dropna(inplace=True)
gps.tail()
print(f"Number of participants: {len(gps['beiwe'].unique())}")
```
## Address Information
We will need the address information from participants in order to determine if the participant is home or not.
```
info = pd.read_excel('../data/raw/utx000/admin/id_crossover.xlsx',sheet_name='beacon')
info.drop(["return_date","volume","housemates","roommates","n_rooms","no2_sensor","original_start","original_end","original_move","original_address","second address","lat3","long3","third address"],axis="columns",inplace=True)
info.dropna(subset=["beacon","lat","long"],inplace=True)
info.head()
info = info[info["lat2"].isnull()].drop(["lat2","long2"],axis="columns")
info.sort_values("redcap",inplace=True)
print(f"Number of participants: {len(info['beiwe'].unique())}")
```
## EMA Data
We need the morning and evening EMAs because we are interested in mood, not sleep quality this time ;)
```
morning_ema = pd.read_csv("../data/processed/beiwe-morning_ema-ux_s20.csv",parse_dates=["timestamp"],infer_datetime_format=True)
morning_ema.drop(["tst","sol","naw","restful"],axis="columns",inplace=True) # don't need and dropping to combine
evening_ema = pd.read_csv("../data/processed/beiwe-evening_ema-ux_s20.csv",parse_dates=["timestamp"],infer_datetime_format=True)
ema = pd.concat([morning_ema, evening_ema])
ema.reset_index(inplace=True,drop=True)
ema.dropna(inplace=True)
ema.head()
```
## Homestay
Data from Peter
```
homestay = pd.read_csv("../data/processed/beiwe-homestay-ux_s20.csv",index_col=0,parse_dates=["start","end"],infer_datetime_format=True)
homestay.head()
```
# Getting Location When EMA is Completed
We can create a function that gets the coordinates of the participants' locations at a time `t`.
## My Algorithm
```
def get_coordinates(t, gps, pt, id_var="beiwe", window=10):
"""
Gets GPS coordinates for a given participant
Inputs:
- t: datetime corresponding to the time point of interest
- gps: dataframe of GPS coordinates for all participants
- pt: string of participant
- id_var: string of identifying variable
- window: integer of plus/minus time to look for GPS coordinates
Returns mean coordinates during plus/minus window as [lat,long] and gps timepoint
"""
gps_by_pt = gps[gps[id_var] == pt] # gps data for given participant
if len(gps_by_pt) > 0:
timeframe = [t - timedelta(minutes=window),t + timedelta(minutes=window)] # getting timeframe to average gps coordinates over
gps_by_pt_in_window = gps_by_pt[timeframe[0]:timeframe[-1]] # restricting to timeframe
if len(gps_by_pt_in_window) > 0:
return [np.nanmean(gps_by_pt_in_window["lat"]), np.nanmean(gps_by_pt_in_window["long"])], gps_by_pt_in_window.index[-1] # returning mean lat/long coordinates
return [np.nan, np.nan], np.nan
```
# Getting Time Only When Participants are Home
We can use the GPS coordinates from Beiwe and the addresses to filter out the data so that we can determine the time that participants are home.
## My Algorithm
```
def get_time_when_home(gps_df,info_df,radius=100,verbose=False):
"""returns gps data only from times when participants are home"""
gps_with_distance = pd.DataFrame()
    for pt in info_df["beiwe"].unique():
# getting data by pt
gps_pt = gps_df[gps_df['beiwe'] == pt]
info_pt = info_df[info_df['beiwe'] == pt]
if verbose:
print(f'Working for Participant {pt} - Beacon', int(info_pt['beacon'].values[0]))
# getting pt address points
lat_pt1 = info_pt['lat'].values[0]
long_pt1 = info_pt['long'].values[0]
coords_add_1 = (lat_pt1, long_pt1)
# Getting distances to address from coordinates
d1 = []
for lat, long in zip(gps_pt["lat"].values,gps_pt["long"].values):
d1.append(geopy.distance.distance(coords_add_1, (lat,long)).m)
gps_pt["d1"] = d1
        gps_with_distance = pd.concat([gps_with_distance, gps_pt])
return gps_with_distance[(gps_with_distance["d1"] < radius)], gps_with_distance
gps_home, gps_dist = get_time_when_home(gps,info,verbose=True)
print("Number of datapoints:",len(gps_home))
```
# Checking if Participant was Home When Survey was Submitted
## My Algorithm
```
def get_ema_location(gps_df,ema_df):
"""appends the location to the ema dataframe"""
ema_with_loc = pd.DataFrame()
# looping through each participant
for pt in gps_df["beiwe"].unique():
# gps
gps_home_by_pt = gps_df[gps_df["beiwe"] == pt]
gps_home_by_pt["time"] = gps_home_by_pt.index
gps_home_by_pt["dt"] = (gps_home_by_pt["time"] - gps_home_by_pt["time"].shift(1)).dt.total_seconds() / 60
# ema
ema_by_pt = ema_df[ema_df["beiwe"] == pt]
lats = []
longs = []
# looping through participant EMA submissions
for submission in ema_by_pt["timestamp"]:
loc, timestamp = get_coordinates(submission,gps_home_by_pt,pt)
# if one of the loc coordinates is NaN, then the pt had no home gps data - this works because we are passing home gps ONLY
if math.isnan(loc[0]):
lats.append(np.nan)
longs.append(np.nan)
else:
lats.append(loc[0])
longs.append(loc[1])
# appending location and adding to overall df
ema_by_pt["lat"] = lats
ema_by_pt["long"] = longs
        ema_with_loc = pd.concat([ema_with_loc, ema_by_pt.dropna()])
return ema_with_loc
```
### Getting the Numbers
```
ema_loc = get_ema_location(gps_home,ema)
ema_loc.head()
print("Number of Participants:", len(ema_loc["beiwe"].unique()))
print("Number of Surveys Completed at Home:",len(ema_loc))
print("Breakdown:")
n = ema_loc["beiwe"].value_counts()
for index, val in zip(n.index,n.values):
print(f"\t{index} - {val}")
```
### Saving Results
```
ema_loc.to_csv("../data/processed/beiwe-ema_at_home-ux_s20.csv",index=False)
```
## With Peter's Data
```
def get_emas_when_home(ema_df,homestay_df):
"""
Returns only the emas that were completed at home
"""
df_ema = ema_df.copy()
df_home = homestay_df.copy()
home = []
time_at_home = []
for pt in df_ema["beiwe"].unique():
ema_pt = df_ema[df_ema["beiwe"] == pt]
homestay_pt = homestay_df[homestay_df["beiwe"] == pt]
for submission in ema_pt["timestamp"]:
found = False
for s, e in zip(homestay_pt["start"],homestay_pt["end"]):
if submission > s and submission < e:
home.append(1)
found = True
time_at_home.append((submission - s).total_seconds())
break
            if not found:
home.append(0)
time_at_home.append(0)
df_ema["home"] = home
df_ema["time_at_home"] = time_at_home
ema_loc_homestay = df_ema[df_ema["home"] == 1]
ema_loc_homestay.drop(["home"],axis="columns",inplace=True)
return ema_loc_homestay
```
### Saving Results
```
ema_loc_homestay = get_emas_when_home(ema,homestay)
ema_loc_homestay.to_csv("../data/processed/beiwe-ema_at_home_v2-ux_s20.csv",index=False)
ema_evening_homestay = get_emas_when_home(evening_ema,homestay)
ema_evening_homestay.to_csv("../data/processed/beiwe-ema_evening_at_home-ux_s20.csv",index=False)
ema_morning_homestay = get_emas_when_home(morning_ema,homestay)
ema_morning_homestay.to_csv("../data/processed/beiwe-ema_morning_at_home-ux_s20.csv",index=False)
```
## Comparison
```
print("Number of surveys from my algorithm:", len(ema_loc))
print("Number of surveys from Peter's algorithm:", len(ema_loc_homestay))
emas_at_home = ema_loc.merge(right=ema_loc_homestay,on="timestamp")
print("Number of merged surveys:", len(emas_at_home))
```
<div class="alert alert-block alert-warning">
The two algorithms seem to return a different set of EMAs, which is interesting, although I trust Peter's algorithm more than mine :)
</div>
# Related Analysis
## Inspecting Distances and Coordinates of Addresses
Checking to see if the address coordinates really make sense...
```
def get_common_coordinates(gps_df,pt):
    """gets the commonly occurring GPS coordinates"""
gps_by_pt = gps_df[gps_df["beiwe"] == pt]
try:
lats = list(gps_by_pt["lat"].values)
lat_mode = max(set(lats), key=lats.count)
longs = list(gps_by_pt["long"].values)
long_mode = max(set(longs), key=longs.count)
except ValueError as e:
print(e)
return [np.nan,np.nan]
return [lat_mode,long_mode]
def inspect_distances(gps_with_d, pt, byvar="beiwe", ylim=1000):
"""Plots distances for participant"""
df_to_plot = gps_with_d[gps_with_d[byvar] == pt]
fig, ax = plt.subplots(figsize=(24,6))
ax.scatter(df_to_plot.index,df_to_plot["d1"],s=5,color="black")
# x-axis
ax.set_xlim([datetime(2020,5,1),datetime(2020,9,1)])
# y-axis
ax.set_ylim([0,ylim])
for loc in ["top","right"]:
ax.spines[loc].set_visible(False)
plt.show()
plt.close()
for pt in info["beiwe"].unique():
coords = get_common_coordinates(gps,pt)
info_by_pt = info[info["beiwe"] == pt]
print(f"Participant {pt}:\n\tFrom Address:\t({round(info_by_pt['lat'].values[0],6)},{round(info_by_pt['long'].values[0],6)})\n\tEstimate:\t({round(coords[0],6)},{round(coords[1],6)})")
    #inspect_distances(gps_dist, pt, ylim=400)
```
# **[MC906] Final Project**: Disaster Detection
The goal of this project is to build and evaluate machine learning models that classify which Tweets are about real disasters and which are not.
## **Accessing the Project Directory**
This notebook assumes you are running the code inside the `Projeto Final/Código` folder, which contains all of the source code for this project. To access the directory in Colab, create a shortcut to it in your Drive (right-click the directory -> "Add shortcut to Google Drive") and run the cells below:
```
# Connect to Drive
from google.colab import drive
drive.mount('/content/drive/', force_remount=True)
# Project directory (/content/drive/My Drive/{path to Projeto Final/Código}),
# depending on where the shortcut is located in your Drive
% cd '/content/drive/My Drive/[MC906] Introdução à Inteligência Artificial/Projeto Final/Código'
! ls
```
## **Dependencies:**
```
# Imports from installed packages
import matplotlib.pyplot as plt
from os.path import join, exists
import pandas as pd
import numpy as np
from tensorflow.keras.layers import Input, Dense, Embedding, GlobalMaxPooling1D, Conv1D, Activation
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.callbacks import ModelCheckpoint
# Local imports
from utils import *
```
## **Dataset:**
We use a *dataset* available on [Kaggle](https://www.kaggle.com/c/nlp-getting-started/data) (in English). Each tweet has three attributes: its content (`text`), a keyword (`keyword`, optional), and the location it was sent from (`location`, optional). Since we will only use the text, we drop the last two.
```
# Read and clean the data (dropping the id, keyword and location columns)
train = pd.read_csv("../Dataset/train.csv")
train = train.drop(['id','keyword','location'], axis=1)
# Print some of the data
print(train.head())
vals = train.groupby('target').count()
print("\nSome General insights:")
print(f"Figure of Speech: {vals.iloc[0]['text']*100/len(train):.2f}%")
print(f"Actual Accidents: {vals.iloc[1]['text']*100/len(train):.2f}%")
```
## **Pre-Processing:**
*Global Vectors for Word Representation*, or GloVe, is an unsupervised learning algorithm, proposed in 2014, for obtaining vector representations of words. The algorithm maps words into a space such that the distance between them reflects their semantic similarity.
We use its word representations for our dataset and its pre-trained weights in the network's Embedding layer, specifically the *glove.840B.300d.txt* file.
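Semantic similarity between GloVe vectors is usually measured with cosine similarity. A toy sketch with made-up 3-d vectors (the glove.840B.300d vectors are 300-dimensional):

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# hypothetical 3-d embeddings, for illustration only
vectors = {
    "fire": np.array([0.9, 0.1, 0.0]),
    "blaze": np.array([0.8, 0.2, 0.1]),
    "banana": np.array([0.0, 0.1, 0.9]),
}
print(cosine_sim(vectors["fire"], vectors["blaze"]))   # high: related words
print(cosine_sim(vectors["fire"], vectors["banana"]))  # low: unrelated words
```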
```
# Apply tokenization
max_words = 20000
max_length = 50
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(train.text)
X_train = tokenizer.texts_to_sequences(train.text)
X_train = pad_sequences(X_train, maxlen=max_length)
Y_train = np.array([[x] for x in train.target.tolist()])
# Prepare the word embeddings
embeddings_index = dict()
f = open("../Glove/glove.840B.300d.txt")
for line in f:
values = line.split()
word = values[0]
try:
float(values[1])
except ValueError:
continue
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
# Build the embedding matrix
embedding_matrix = np.zeros((max_words, 300))
for word, index in tokenizer.word_index.items():
if index > max_words - 1:
break
else:
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embedding_matrix[index] = embedding_vector
```
## **Model**: Convolutional Networks
Although originally developed for computer vision, recent studies show that Convolutional Neural Networks (CNNs) are also very effective when applied to NLP. We therefore decided to try this model on our disaster detection problem.
Below, we implement a simple convolutional network using the *Conv1D* layer, with 128 filters, and the *GlobalMaxPooling1D* layer from the *TensorFlow (Keras)* library.
```
def CNN_GloVe(max_words=20000, max_len=X_train.shape[1]):
    ''' Builds the CNN model. '''
inputs = Input(name='inputs',shape=[max_len])
layer = Embedding(max_words, 300, input_length=max_len, weights=[embedding_matrix],trainable=False)(inputs)
layer = Conv1D(128, 5)(layer)
layer = Activation('relu')(layer)
layer = GlobalMaxPooling1D()(layer)
layer = Dense(1)(layer)
layer = Activation('sigmoid')(layer)
model = Model(inputs=inputs,outputs=layer)
return model
# Build and compile the model
cnn_glove = CNN_GloVe()
cnn_glove.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
cnn_glove.summary()
# Train on the pre-processed dataset
callbacks = [ModelCheckpoint(monitor='val_loss', filepath='./Modelos/best_model_CNN.h5', save_best_only=True)]
cnn_glove_history = cnn_glove.fit(X_train,Y_train,batch_size=256,epochs=10, validation_split=0.1, callbacks=callbacks)
# Load the model
if exists('./Modelos/best_model_CNN.h5'):
cnn_glove = load_model('./Modelos/best_model_CNN.h5')
# Plot metrics
plot_graphs(cnn_glove_history, "CNN GloVe", key='acc')
```
```
%matplotlib notebook
from pylab import *
from scipy.stats import *
# Population
total_population = 208e6
percentage_0_14 = 0.23
percentage_15_64 = 0.69
percentage_65_ = 0.08
num_adults = total_population*(percentage_15_64 + percentage_65_)
# Labor force
percentage_labor_force = 0.71
labor_force = num_adults*percentage_labor_force
disabled_adults = 19e6
# Monetary
basic_income = 880*12 # annual nominal minimum wage
current_wealth_transfers = 240e9 # approximately 10% of GDP
def jk_rowling(num_non_workers):
num_of_jk_rowlings = binom(num_non_workers, 1e-9).rvs()
return num_of_jk_rowlings * 1e9
def basic_income_cost_benefit():
direct_costs = num_adults * basic_income
administrative_cost_per_person = norm(250,75)
non_worker_multiplier = uniform(-0.10, 0.15).rvs()
non_workers = (num_adults-labor_force-disabled_adults) * (1+non_worker_multiplier)
marginal_worker_productivity = norm(1.2*basic_income,0.1*basic_income)
administrative_costs = num_adults * administrative_cost_per_person.rvs()
labor_effect_costs_benefit = -1 * ((num_adults-labor_force-disabled_adults) *
non_worker_multiplier *
                                      (marginal_worker_productivity.rvs())
)
return direct_costs + administrative_costs + labor_effect_costs_benefit - jk_rowling(non_workers)
def basic_job_cost_benefit():
administrative_cost_per_disabled_person = norm(500,150).rvs()
administrative_cost_per_worker = norm(5000, 1500).rvs()
non_worker_multiplier = uniform(-0.20, 0.25).rvs()
basic_job_productivity = uniform(0.0, basic_income).rvs()
disabled_cost = disabled_adults * (basic_income + administrative_cost_per_disabled_person)
num_basic_workers = ((num_adults - disabled_adults - labor_force) *
(1+non_worker_multiplier)
)
basic_worker_cost_benefit = num_basic_workers * (
basic_income +
administrative_cost_per_worker -
basic_job_productivity
)
return disabled_cost + basic_worker_cost_benefit
N = 1024*4
bi = zeros(shape=(N,), dtype=float)
bj = zeros(shape=(N,), dtype=float)
for k in range(N):
bi[k] = basic_income_cost_benefit()
bj[k] = basic_job_cost_benefit()
subplot(211)
start = 0
width = 8e12
height= 400*N/1024
title("Income Guarantee")
hist(bi, bins=5, color='red')
axis([start,width,0,height])
subplot(212)
title("Job Guarantee")
hist(bj, bins=20, color='blue')
axis([start,width,0,height])
tight_layout()
show()
```
<a href="https://colab.research.google.com/github/alijablack/data-science/blob/main/Wikipedia_NLP_Sentiment_Analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Natural Language Processing
## Problem Statement
Use natural language processing on Wikipedia articles to identify the overall sentiment analysis for a page and number of authors.
## Data Collection
```
from google.colab import drive
drive.mount('/content/drive')
!python -m textblob.download_corpora
from textblob import TextBlob
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from sklearn.feature_extraction.text import CountVectorizer
people_path = '/content/drive/My Drive/Copy of people_db.csv'
people_df = pd.read_csv(people_path)
```
## Exploratory Data Analysis
## Part 1 of Project
This dataset from DBpedia includes over 42,000 entries.
```
people_df.info()
```
Explore the first 100 to decide who to choose.
```
people_df.head(100).T
```
Select a person, Armen Ra, from the list to use as the input for sentiment analysis. Output Armen Ra's overview from the database.
```
my_person = [people_df.iloc[96]['text']]
my_person
```
### Data Processing
#### Vector Analysis
```
vect_people = CountVectorizer(stop_words='english')
word_weight = vect_people.fit_transform(people_df['text'])
word_weight
```
#### Nearest Neighbors
Fit the nearest neighbors model with content from people dataframe.
```
nn = NearestNeighbors(metric='euclidean')
nn.fit(word_weight)
ra_index = people_df[people_df['name'] == 'Armen Ra'].index[0]
ra_index
```
Use the nearest neighbor model to output people with overviews similar to Armen Ra's page.
```
distances, indices = nn.kneighbors(word_weight[ra_index], n_neighbors=11)
distances
```
Show the index of 10 similar overviews.
```
indices
```
Output the 10 people with overviews closest to Armen Ra.
```
people_df.iloc[indices[0],:]
top_ten = people_df.iloc[indices[0],1:11]
top_ten.head(11)
df2 = people_df[['text','name']]
# For each row, combine all the columns into one column
df3 = df2.apply(lambda x: ','.join(x.astype(str)), axis=1)
# Store them in a pandas dataframe
df_clean = pd.DataFrame({'clean': df3})
# Create the list of list format of the custom corpus for gensim modeling
sent = [row.split(',') for row in df_clean['clean']]
# Show an example of the list-of-lists format of the custom corpus for gensim modeling
sent[:2]
```
Another way to output the 10 people with overviews closest to Armen Ra's page.
```
import gensim
from gensim.models import Word2Vec
model = Word2Vec(sent, min_count=1, size=50, workers=3, window=3, sg=1)
model['Armen Ra']
model.most_similar('Armen Ra')[:10]
```
This method outputs a different set of people than the nearest neighbors method. The nearest neighbors output appears more closely aligned with the substance of Armen Ra's overview, similarly returning people in creative industries, whereas the most-similar method returns people whose overviews share the same brief, informational, neutral tone and format as Armen Ra's.
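One way to quantify how differently the two methods behave is the Jaccard overlap between their top-k lists; the names below are placeholders, not the actual outputs:

```python
def jaccard(a, b):
    """Jaccard similarity between two recommendation lists."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

# placeholder names standing in for the two methods' top-5 outputs
nn_top = ["A", "B", "C", "D", "E"]
w2v_top = ["C", "D", "F", "G", "H"]
print(jaccard(nn_top, w2v_top))  # 2 shared of 8 distinct -> 0.25
```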
#### Sentiment Analysis
Make Armen Ra's overview a string.
```
df2 = pd.DataFrame(my_person)
# For each row, combine all the columns into one column
df3 = df2.apply(lambda x: ','.join(x.astype(str)), axis=1)
# Store them in a pandas dataframe
df_clean = pd.DataFrame({'clean': df3})
# Create the list of list format of the custom corpus for gensim modeling
sent1 = [row.split(',') for row in df_clean['clean']]
# Show an example of the list-of-lists format of the custom corpus for gensim modeling
sent1[:2]
```
Assign tags to each word in the overview.
```
!python -m textblob.download_corpora
from textblob import TextBlob
wiki = TextBlob(str(sent1))
wiki.tags
```
Identify the nouns in the overview.
```
wiki.noun_phrases
zen = TextBlob(str(sent1))
```
Identify the words in the overview.
```
zen.words
```
Identify the sentences in the overview.
```
zen.sentences
sentence = TextBlob(str(sent1))
sentence.words
sentence.words[-1].pluralize()
sentence.words[-1].singularize()
b = TextBlob(str(sentence))
print(b.correct())
```
Output the sentiment for Armen Ra's overview.
```
for sentence in zen.sentences:
print(sentence.sentiment[0])
```
## Part 2 of Project
### Data Collection
Install Wikipedia API. Wikipedia will be the main datasource for this step to access the full content of Armen Ra's page.
```
!pip install wikipedia
import wikipedia
```
### Data Processing
Produce the entire page of Armen Ra
```
#search wikipedia for Armen Ra
print(wikipedia.search('Armen Ra'))
#output the summary for Armen Ra
print(wikipedia.summary("Armen Ra"))
#output the page for Armen Ra
print(wikipedia.page("Armen Ra"))
#output the page content for Armen Ra
print(wikipedia.page('Armen Ra').content)
#output the url for Armen Ra's Wikipedia page
print(wikipedia.page('Armen Ra').url)
ra_df = pd.read_html('https://en.wikipedia.org/wiki/Armen_Ra')
type(ra_df)
page = wikipedia.page('Armen Ra')
page.summary
page.content
type(page.content)
wiki1 = TextBlob(page.content)
wiki1.tags
wiki1.noun_phrases
```
##### Sentiment Analysis
Produce the sentiment for Armen Ra's page.
```
testimonial = TextBlob(page.content)
testimonial.sentiment
```
Sentiment analysis shows a primarily neutral and objective tone throughout the page.
```
zen = TextBlob(page.content)
```
Process Armen Ra's page into words and sentences to determine how the sentiment changes throughout the page.
```
zen.words
zen.sentences
```
Determine any changes in sentiment throughout the page.
```
for sentence in zen.sentences:
print(sentence.sentiment[0])
```
Estimate 6 or 7 authors contributed to the Wikipedia article based on changes in the sentiment analysis.
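That estimate can be made explicit with a crude change-point count over the per-sentence polarities; the numbers below are illustrative, not the actual TextBlob output:

```python
def count_shifts(polarities, threshold=0.3):
    """Count jumps in sentence polarity larger than threshold,
    as a crude proxy for changes in authorial voice."""
    return sum(
        1 for a, b in zip(polarities, polarities[1:])
        if abs(b - a) > threshold
    )

# illustrative polarity sequence; real values come from sentence.sentiment[0]
polarities = [0.0, 0.05, 0.5, 0.45, -0.1, 0.0, 0.6]
print(count_shifts(polarities) + 1)  # shifts + 1 ~ rough number of "voices"
```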
Output a summary of the Armen Ra page
```
page.summary
sentence = TextBlob(page.content)
sentence.words
sentence.words[2].singularize()
sentence.words[2].pluralize()
b = TextBlob(page.content)
print(b.correct())
```
Consider algorithmic bias and errors in the natural language processing tools, as Armen Ra's name is at times shortened to 'Men A' or 'A'.
```
blob = TextBlob(page.content)
blob.ngrams(n=3)
#The sentiment of Armen Ra's page is in an informational, neutral tone
testimonial = TextBlob(page.content)
testimonial.sentiment
```
### Communication of Results
Ultimately, the sentiment analysis for Armen Ra's page shows the tone is primarily informational, objective, and neutral. When using Nearest Neighbors or the model's Most Similar method to identify Wikipedia pages similar to Armen Ra's, the results differed depending on which method was used: Nearest Neighbors surfaced pages of individuals with similarly neutral tones, while Most Similar surfaced individuals in industries similar to Armen Ra's. The natural language processing tools at times output errors in Armen Ra's name and typos throughout the content. Consider further analysis into algorithmic bias present within the natural language processing tools, as well as alternative data analysis and visualization methods.
## Live Coding
In addition to presenting our slides to each other, at the end of the presentation each analyst will demonstrate their code using a famous person randomly selected from the database.
```
Roddy = people_df[people_df['name'].str.contains('Roddy Piper')]
Roddy
wikipedia.search('Roddy Piper')
wikipedia.summary('Roddy Piper')
wikipedia.page('Roddy Piper')
wikipedia.page('Roddy Piper').url
famous_page = wikipedia.page('Roddy Piper')
famous_page.summary
testimonial = TextBlob(famous_page.content)
testimonial.sentiment
```
Nearest Neighbors
```
people_df1 = [people_df.iloc[32819]['text']]
people_df1
nn = NearestNeighbors(metric='euclidean')
nn.fit(word_weight)
roddy_index = people_df[people_df['name'] == 'Roddy Piper'].index[0]
roddy_index
distances, indices = nn.kneighbors(word_weight[roddy_index], n_neighbors=11)
distances
indices
people_df.iloc[indices[0],:]
people_df.iloc[2037]['text']
people_df.iloc[18432]['text']
people_df.iloc[21038]['text']
people_df.iloc[35633]['text']
```
# Application
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
property_type = pd.read_csv("final_df.csv")
property_type1 = property_type.iloc[:,1:33]
for i in range(len(property_type1)):
for j in range(2, len(property_type1.columns)):
        if type(property_type1.iloc[i, j]) != str:
            continue
        # strip the thousands separator, e.g. "1,234" -> "1234"
        property_type1.iloc[i, j] = property_type1.iloc[i, j].replace(",", "")
property_type2 = property_type1.loc[:, ["Property Type", "Mean Price"]]
property_type2 = property_type2.groupby(["Property Type"]).mean()
plt.figure(figsize = (7,4))
plt.bar(property_type2.index, property_type2["Mean Price"], color=('red','yellow','orange','blue','green','purple','black','grey'))
plt.title("Mean Price of Different Property Types")
plt.xlabel("Property Type")
plt.xticks(rotation=90)
plt.ylabel("Mean Price")
plt.show()
import numpy as np
import pandas as pd
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import webbrowser
from threading import Timer
import dash_table
import dash_table.FormatTemplate as FormatTemplate
import plotly.express as px
#Import datasets
df_details = pd.read_csv('dfclean_1adult.csv')
df_details = df_details.rename(columns = {'Unnamed: 0':'Name',
'reviews': 'no. of reviews'})
df_dates = pd.read_csv('final_df.csv').drop('Unnamed: 0', axis=1)
# Merge datasets
df = df_details.merge(df_dates, on='Name')
df = df.replace(to_replace = ['Y','N'],value = [1,0])
df.iloc[:,7:37] = df.iloc[:,7:37].apply(lambda x: x.astype(str))
df.iloc[:,7:37] = df.iloc[:,7:37].apply(lambda x: x.str.replace(',', '').astype(float), axis=1)
user_df = df.copy()
date_cols = user_df.columns[7:37]
hotel_types = user_df['Property Type'].unique()
features = ['Price'] + list(user_df.columns[2:5]) + list(user_df.columns[37:])
continuous_features = features[:9]
continuous_features_A = ['Price', 'Distance to Mall', 'Distance to MRT']
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
app.title = 'Hotel Booking'
def generate_table(dataframe, max_rows=5):
df_drop_link = dataframe.drop(columns='link')
return html.Table([
html.Thead(
html.Tr([html.Th(col) for col in df_drop_link.columns])
),
html.Tbody([
html.Tr([
html.Td(dataframe.iloc[i][col]) if col != 'Name' else html.Td(html.A(href=dataframe.iloc[i]['link'], children=dataframe.iloc[i][col], target='_blank')) for col in df_drop_link.columns
]) for i in range(min(len(dataframe), max_rows))
])
])
colors = {'background': '#111111', 'text': '#7FDBFF'}
app.layout = html.Div([
#introduction
html.Div([
html.H2(children='Hello!',
style={'color': colors['text']}),
#inputs for date and hotel type
html.Div([html.H4("Step 1: Input Date (eg. 4Nov): "),
dcc.Input(id='date-input', value='4Nov', type='text')],
style={'width':'30%', 'float':'left'}),
html.Div(id='date-output-hotel'),
html.Div([
html.H4('Step 2: Select Your Preferred Hotel Types:'),
dcc.Dropdown(id='hotel-input',
options=[{'label': i, 'value': i} for i in hotel_types],
value= hotel_types,
multi=True)],
style={'width':'70%', 'float':'right'}),
html.Br(), html.Br()
]),
#return available hotels for given date
html.Div([
html.Br(), html.Br(), html.Hr(),
dcc.Graph(id='output-submit'),
html.Hr(),
]),
#input top 3 features
html.Div([
html.H4(children='Step 3: Select Your Top 3 Features:'),
]),
html.Div([
dcc.Dropdown(
id='feature1',
options=[{'label': i, 'value': i} for i in features],
value= features[0]
), html.Br(),
dcc.Slider(id='weight1',
min= 10, max= 90, step= 10,
marks={i: '{}%'.format(i) for i in np.arange(10, 90, 10).tolist()},
value=50)
], style={"display": "grid", "grid-template-columns": "20% 10% 70%", "grid-template-rows": "50px"}
),
html.Div([
dcc.Dropdown(
id='feature2',
options=[{'label': i, 'value': i} for i in features],
value= features[1]
), html.Br(),
dcc.Slider(id='weight2',
min= 10, max= 90, step= 10,
marks={i: '{}%'.format(i) for i in np.arange(10, 90, 10).tolist()},
value=30)
], style={"display": "grid", "grid-template-columns": "20% 10% 70%", "grid-template-rows": "50px"}
),
html.Div([
dcc.Dropdown(
id='feature3',
options=[{'label': i, 'value': i} for i in features],
value= features[2]
), html.Br(),
dcc.Slider(id='weight3',
min= 10, max= 90, step= 10,
marks={i: '{}%'.format(i) for i in np.arange(10, 90, 10).tolist()},
value=20)
], style={"display": "grid", "grid-template-columns": "20% 10% 70%", "grid-template-rows": "50px"}
),
#return top 5 hotels recommended
html.Div([
html.Hr(),
html.H2(children='Top 5 Hotels Recommended For You',
style={'color': colors['text']}),
html.Div(id='output-feature'),
html.Hr()
])
])
#update available hotels for given date
@app.callback(Output('output-submit', 'figure'),
[Input('hotel-input', 'value'), Input('date-input', 'value')])
def update_hotels(hotel_input, date_input):
user_df = df.copy()
user_df = user_df[user_df[date_input].notnull()]
user_df = user_df[user_df['Property Type'].isin(hotel_input)]
plot_df = pd.DataFrame(user_df.groupby('Property Type')['Name'].count()).reset_index()
fig = px.bar(plot_df, x='Property Type', y='Name', color="Property Type", title="Hotel Types available on {}:".format(date_input))
fig.update_layout(transition_duration=500)
return fig
#update top 5 hotels recommended
@app.callback(Output('output-feature', 'children'),
[Input('hotel-input', 'value'), Input('date-input', 'value'),
Input('feature1', 'value'), Input('feature2', 'value'), Input('feature3', 'value'),
Input('weight1', 'value'), Input('weight2', 'value'), Input('weight3', 'value')])
def update_features(hotel_input, date_input, feature1, feature2, feature3, weight1, weight2, weight3):
user_df = df.copy()
user_df = user_df[user_df[date_input].notnull()]
user_df['Price'] = user_df[date_input]
user_df = user_df[user_df['Property Type'].isin(hotel_input)]
features= [feature1, feature2, feature3]
selected_features = features.copy()
selected_continuous = set(selected_features) & set(continuous_features)
for i in selected_continuous:
col = i + str(' rank')
if i in continuous_features_A:
user_df[col] = user_df[i].rank(ascending=False) #higher value, lower score
else:
user_df[col] = user_df[i].rank(ascending=True) #higher value, higher score
selected_features[selected_features.index(i)] = col #replace element in list name with new col name
#Scoring: weight * feature's score
user_df['Score'] = (((weight1/100) * user_df[selected_features[0]])
+ ((weight2/100) * user_df[selected_features[1]])
+ ((weight3/100) * user_df[selected_features[2]])).round(1)
#Score-to-Price ratio
user_df['Value_to_Price ratio'] = (user_df['Score'] / user_df['Price']).round(1)
user_df = user_df.sort_values(by=['Value_to_Price ratio'], ascending = False).reset_index()
features_result = [i for i in features if i != 'Price']
selected_features_result = [i for i in selected_features if i not in features_result]
user_df_results = user_df[['Name', 'Property Type', 'Price', 'Score', 'Value_to_Price ratio'] + ['link'] + features_result + selected_features_result]
return generate_table(user_df_results.head(5))
port = 8050
url = "http://127.0.0.1:{}".format(port)
def open_browser():
webbrowser.open_new(url)
if __name__ == '__main__':
Timer(0.5, open_browser).start();
app.run_server( debug= False, port=port)
```
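The weighted scoring inside the `update_features` callback above — rank each selected feature, then blend the ranks with the slider weights — can be sketched as a standalone function. This is a simplified sketch with made-up hotel data, not the app's exact code:

```python
import pandas as pd

def score_hotels(df, weights):
    """Combine per-feature ranks into a single score.

    weights maps feature column -> weight in percent; a higher rank means
    a better value for that feature (mirroring rank(ascending=True) above).
    """
    score = sum((w / 100) * df[col].rank(ascending=True) for col, w in weights.items())
    return score.round(1)

# toy data: three hotels, two features (hypothetical values)
hotels = pd.DataFrame({
    "Review Score": [8.0, 9.5, 7.0],
    "Cleanliness": [9.0, 8.0, 7.5],
})
hotels["Score"] = score_hotels(hotels, {"Review Score": 50, "Cleanliness": 50})
print(hotels["Score"].tolist())  # → [2.5, 2.5, 1.0]
```

As in the callback, a value-for-money ranking then just divides `Score` by `Price` and sorts in descending order.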
# Price Prediction
```
import glob
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
%matplotlib inline
from sklearn.metrics import mean_squared_error, r2_score
from sklearn import datasets, linear_model
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LogisticRegression
import random
import xgboost as xgb
dfs = glob.glob("*Novhotels.csv")
# for df in dfs:
train_features = pd.read_csv("10Novhotels.csv")
#Preliminary data cleaning
col_names = train_features.columns
list1 = []
for i in col_names:
prop_na = sum(train_features.loc[:,i].isnull())/train_features.loc[:,"Laundry Service"].count()
if prop_na >= .9:
list1.append(i)
title = ['Price', 'Property Type', 'Number of Stars', 'Review Score',
'Cleanliness', 'Distance to Mall', 'Distance to MRT',
'Early Check-in (Before 3pm)', 'Late Check-out (After 12pm)',
'Pay Later', 'Free Cancellation', 'Gym', 'Swimming Pool', 'Car Park',
'Airport Transfer', 'Breakfast', 'Hygiene+ (Covid-19)',
'24h Front Desk', 'Laundry Service', 'Bathtub', 'Balcony', 'Kitchen',
'TV', 'Internet', 'Air Conditioning', 'Ironing', 'Non-Smoking']
train_features = train_features.drop(columns = list1)
train_features = train_features.drop(['Unnamed: 0', 'Name'], axis = 1)
#train_features.rename(columns={'*Nov': 'Price'}, inplace=True)
train_features.columns = title
pd.options.display.max_columns = None
pd.options.display.max_rows = None
# display(train_features.head())
train_features = train_features.replace(['Y', 'N'], [1, 0])
train_features = train_features[train_features["Price"].notna()]
train_features["Price"] = train_features["Price"].astype(str).str.replace(',','')
# train_features["Price"] = train_features["Price"].str.replace(',','')
train_features["Price"] = pd.to_numeric(train_features["Price"])
#Change stars to categorical
train_features["Number of Stars"] = train_features["Number of Stars"].astype(str)
#One hot encoding
train_features = pd.get_dummies(train_features)
#Check for missing data
# check = train_features.isnull().sum()
mean_val_distmall = round(train_features['Distance to Mall'].mean(),0)
train_features['Distance to Mall']=train_features['Distance to Mall'].fillna(mean_val_distmall)
mean_val_distmrt = round(train_features['Distance to MRT'].mean(),0)
train_features['Distance to MRT']=train_features['Distance to MRT'].fillna(mean_val_distmrt)
mean_val_price = round(train_features['Price'].mean(),0)
train_features['Price']=train_features['Price'].fillna(mean_val_price)
# print(train_features.isnull().sum())
# Create correlation matrix
corr_matrix = train_features.corr().abs()
# Select upper triangle of correlation matrix
upper = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(bool))
# Find features with correlation greater than 0.95
to_drop = [column for column in upper.columns if any(upper[column] > 0.95)]
# Drop features
train_features.drop(to_drop, axis=1, inplace=True)
labels = []
for i in train_features.columns:
labels.append(i)
labels.remove('Price')
training_features = labels
target = 'Price'
random.seed(5)
#Perform train-test split
#creating 90% training data and 10% test data
X_train, X_test, Y_train, Y_test = train_test_split(train_features[training_features], train_features[target], train_size = 0.9)
colsample = np.arange(0.0, 1.1, 0.1)
learningrate = np.arange(0.0, 1.1, 0.1)
maxdepth = list(range(1, 1000))
alpha_val = list(range(1, 1000))
n_estimators_val = list(range(1, 1000))
# for a in range(len(maxdepth)):
xg_reg = xgb.XGBRegressor(objective ='reg:squarederror', colsample_bytree = 0.3, learning_rate = 0.1,
max_depth = 5, alpha = 1, n_estimators = 20)
xg_reg.fit(X_train,Y_train)
predicted = xg_reg.predict(X_test)
# print(n_estimators_val[a])
#the mean squared error
print('Mean squared error: %.2f' % mean_squared_error(Y_test, predicted))
#explained variance score: 1 is perfect prediction
print('R square score: %.2f' % r2_score(Y_test,predicted))
df = pd.read_csv("prices_1adult.csv")
df = df.replace(to_replace ="[]", value =np.nan)
df = pd.melt(df, id_vars='Unnamed: 0')
df.columns = ["Name","Date","Price"]
df.head()
df_second = pd.read_csv("Predicted_Price.csv")
df_second.head()
df_second = df_second.drop_duplicates()
df_merge_col = pd.merge(df, df_second, on=['Name','Date'])
# df_merge_col.to_csv("Predicted_Price.csv")
```
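The `colsample`, `learningrate`, `maxdepth`, `alpha_val` and `n_estimators_val` arrays above set up a hyperparameter grid, but the search loop is commented out. One way to actually run such a search is scikit-learn's `GridSearchCV`; this is a hedged sketch on synthetic data with a deliberately tiny grid (the notebook's 1..1000 ranges would be far too slow to search exhaustively):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

# synthetic regression data standing in for the hotel features
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=200)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.9, random_state=5)

# illustrative 2x2 grid; cross-validation picks the best combination on the training data
param_grid = {"n_estimators": [20, 50], "max_depth": [3, 5]}
search = GridSearchCV(RandomForestRegressor(random_state=5), param_grid,
                      cv=3, scoring="neg_mean_squared_error")
search.fit(X_train, y_train)
print(search.best_params_)
```

The same pattern works with `xgb.XGBRegressor` as the estimator, since it follows the scikit-learn interface.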
## DATA SCIENCE NANO DEGREE
### PROJECT 1: Boston AirBnB
##### MAHBUBUL WASEK
#### Introduction
This is part of the Udacity Data Science Nanodegree (Project 1). In this project we analyze data using the CRISP-DM process, which consists of:
1) Business Understanding
2) Data Understanding
3) Prepare Data
4) Data Modeling
5) Evaluate the Results
6) Deploy
#### Business Understanding
AirBnB is an online rental marketplace that created a community for landlords and their tenants. Landlords can attract temporary tenants through this online platform.
AirBnB's main source of income is the service fees charged to both guests and hosts. The peer-to-peer business model used by AirBnB has the potential for continued revenue growth.
In this project, I wanted to explore the data to answer the following questions:
##### 1) What has been the overall price trend in the given period of time? Is there any price trend in a week? If so, which days are more profitable?
##### 2) What is the relationship between price and various attributes?
##### 3) What are some of the common features that customers take into consideration for a good experience?
#### Data Understanding
We have 3 datasets for this project:
i) Listings : Provides us with a number of columns containing detailed information about the rooms rented.
ii) Reviews : Contains the reviews for the rooms along with unique id for customers.
iii) Calendar : This provides us with the dates the rooms are available along with the price.
I will explore and try to visualize the data by presenting the results in the form of dashboards to answer the above questions.
```
#Loading the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import os
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
#Reading the datasets:
bost_listing = pd.read_csv('listings.csv')
bost_review = pd.read_csv('reviews.csv')
bost_calendar = pd.read_csv('calendar.csv')
num_rows_l = bost_listing.shape[0]
num_cols_l = bost_listing.shape[1]
num_rows_r = bost_review.shape[0]
num_cols_r = bost_review.shape[1]
num_rows_c = bost_calendar.shape[0]
num_cols_c = bost_calendar.shape[1]
print(num_rows_l, num_cols_l, num_rows_r, num_cols_r, num_rows_c, num_cols_c)
bost_review.head()
bost_review.info()
bost_calendar.head()
bost_calendar.tail()
bost_calendar.info()
# Check if price is NaN when available value is f:
calendar_q1 = bost_calendar.groupby('available')['price'].count().reset_index()
calendar_q1.columns = ['available', 'price_count']
calendar_q1
# How many rows per each listing:
calendar_q2 = bost_calendar.groupby('listing_id')['date'].count().reset_index()
calendar_q2['date'].value_counts()
pd.options.display.max_columns = 95
bost_listing.head()
#Getting the columns in Listing dataset:
bost_listing.columns
#Converting amount columns from string to numbers:
bost_listing['price'] = bost_listing['price'].apply(str).str.replace("[$, ]", "", regex=True).astype("float")
bost_listing['weekly_price'] = bost_listing['weekly_price'].apply(str).str.replace("[$, ]", "", regex=True).astype("float")
bost_listing['monthly_price'] = bost_listing['monthly_price'].apply(str).str.replace("[$, ]", "", regex=True).astype("float")
bost_listing['security_deposit'] = bost_listing['security_deposit'].apply(str).str.replace("[$, ]", "", regex=True).astype("float")
bost_listing['cleaning_fee'] = bost_listing['cleaning_fee'].apply(str).str.replace("[$, ]", "", regex=True).astype("float")
bost_listing['extra_people'] = bost_listing['extra_people'].apply(str).str.replace("[$, ]", "", regex=True).astype("float")
print(bost_listing['price'])
#Dropping columns with all Null values:
bost_listing = bost_listing.dropna(how='all', axis=1)
#Dropping columns with 'url' in the name:
for col in bost_listing.columns:
if 'url' in col:
del bost_listing[col]
#Dropping columns that I will not need:
useless_columns = ['scrape_id', 'last_scraped', 'experiences_offered', 'neighborhood_overview', 'notes', 'transit', 'access', 'interaction', 'house_rules', 'host_name', 'host_since', 'host_location', 'host_about', 'host_response_time', 'host_response_rate',
'host_acceptance_rate', 'host_neighbourhood', 'host_listings_count', 'host_total_listings_count', 'host_verifications', 'host_has_profile_pic', 'host_identity_verified', 'street', 'neighbourhood', 'neighbourhood_cleansed',
'city', 'state', 'zipcode', 'market', 'smart_location', 'country_code', 'country', 'latitude', 'longitude', 'is_location_exact', 'calendar_updated', 'calendar_last_scraped', 'review_scores_rating',
'review_scores_accuracy', 'review_scores_cleanliness', 'review_scores_checkin', 'review_scores_communication', 'review_scores_location', 'review_scores_value', 'requires_license', 'require_guest_profile_picture',
'require_guest_phone_verification', 'calculated_host_listings_count']
bost_listing.drop(useless_columns, axis=1, inplace=True)
bost_listing.head()
bost_listing.info()
len(bost_listing.columns)
```
-----
#### ANSWERING THE QUESTIONS:
```
# Plot 1: Overall Price Trend:
sns.set_style("dark",{"axes.facecolor":"black"})
calendar_q1 = bost_calendar.copy(deep=True)
calendar_q1.dropna(inplace=True)
calendar_q1['date'] = pd.to_datetime(calendar_q1['date'])
calendar_q1['price'] = calendar_q1['price'].map(lambda x: float(x[1:].replace(",", "")))
#Range
start_date = '2016-09-05 00:00:00'
end_date = '2017-09-06 00:00:00'
calendar_q1 = calendar_q1[(calendar_q1['date'] > start_date) & (calendar_q1['date'] < end_date)]
calendar_q1 = calendar_q1.groupby('date')['price'].mean().reset_index()
plt.figure(figsize=(10,6))
plt.plot(calendar_q1.date, calendar_q1.price, color = 'r', marker='D', ls='--', linewidth=1.5)
plt.title("Overall Price Trend", fontsize= 30, color= "DarkBlue")
plt.xlabel('Date', fontsize=25)
plt.ylabel('Price', fontsize=25)
plt.show()
# Plot 2: Weekly Price Trend
sns.set_style("dark",{"axes.facecolor":"black"})
calendar_q1["weekday"] = calendar_q1["date"].dt.day_name()
plt.rcParams['figure.figsize'] = 10,6
sns.boxplot(x= 'weekday', y= 'price', data = calendar_q1, palette= "vlag", width=0.4)
plt.title("Weekly Price Trend", fontsize= 30, color= "DarkBlue")
plt.xlabel('Day', fontsize= 25)
plt.ylabel('Price', fontsize= 25)
plt.show()
```
The above two graphs show the price of AirBnB homes in Boston over time. Plot 1 is a line graph of prices from September 2016 to September 2017. Overall, prices decreased over that year, starting at around 240 in September 2016 and ending below 200 a year later; the peak was in 2016 at slightly above 280. Although there was a substantial decrease from October 2016 to February 2017, prices then rose slowly until September 2017, with a probable seasonal peak lasting a few weeks at the end of April 2017.
Plot 2 is a follow-up to the overall trend, showing the weekly price trend: how the price of AirBnB homes in Boston varies with the day of the week. Prices are highest on Fridays and Saturdays, at slightly above 200, and lowest on Mondays, with a mean price of around 190.
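The day-of-week comparison behind Plot 2 reduces to grouping the mean price by `dt.day_name()`; here is a minimal self-contained sketch with synthetic prices (not the Boston data):

```python
import pandas as pd

prices = pd.DataFrame({
    "date": pd.date_range("2016-09-05", periods=14, freq="D"),  # two full weeks, starting on a Monday
    "price": [190, 192, 193, 195, 205, 206, 198] * 2,           # made-up weekly pattern
})
prices["weekday"] = prices["date"].dt.day_name()
weekday_mean = prices.groupby("weekday")["price"].mean()
print(weekday_mean.idxmax())  # → Saturday (the most profitable day in this toy data)
```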
```
# Plot 3: Price Of Different Room Types In The Different Types Of Property:
sns.set_style("dark",{"axes.facecolor":"black"})
plt.rcParams['figure.figsize'] = 10,6
ax = sns.swarmplot(data=bost_listing, x="property_type", y="price", hue="room_type")
plt.xticks (rotation='vertical')
plt.ylim(0,1100)
plt.title("Price of Different Room Types in Different Properties", fontsize= 30, color= "DarkBlue")
plt.legend(framealpha=1, frameon=True, facecolor='white', edgecolor='white')
plt.xlabel('Property Type', fontsize= 25)
plt.ylabel('Price', fontsize= 25)
```
Plot 3 shows the price of different room types across the different AirBnB property types in Boston. There are mainly three room types. The blue dots, representing Entire Home or Apartment, appear to be the most available option in Boston, followed by the orange dots representing Private Rooms; the least available room type is Shared Room.
Boston offers these three room types across 13 different property types. The top three property types where rooms are rented are House, Apartment and Condominium; the others include Loft, Bed & Breakfast, Townhouse, Boat, Villa, Entire Floor, Dorm, Guesthouse, Camper/RV and Others. Another valuable piece of information that can be deduced from Plot 3 is the difference in price among the three room types within these different properties.
```
# Plot 5: Price of Different Rooms:
sns.set_style("dark",{"axes.facecolor":"black"})
w = sns.boxplot(data=bost_listing, x='room_type', y='price', palette='summer', width= 0.5)
plt.rcParams['figure.figsize'] = 10,6
plt.ylim(0,500)
plt.title("Price of Different Rooms", fontsize= 30, color= "DarkBlue")
plt.xlabel('Room Types', fontsize= 25)
plt.ylabel('Price', fontsize= 25)
```
Plot 5 is a boxplot that investigates the information gained from Plot 4 and confirms the price trend observed there for the different room types. Among the three room types, Entire Home/Apt has the highest mean price at around 200, followed by Private Room at a little under 100; the price for Shared Room in Boston AirBnB homes sits around 50.
```
# Plot 4: Price of Different Properties with the Number of Beds
sns.set_style("darkgrid")
vis2 = sns.lmplot(data= bost_listing, x='price', y='beds',
fit_reg=False, hue="property_type", height=7, aspect=1)
plt.ylim(0,10)
plt.xlim(0,1500)
plt.title("Price of Different Properties With Varying Bed Numbers", fontsize= 30, color= "DarkBlue")
plt.xlabel('Price', fontsize= 25)
plt.ylabel('Number of Beds', fontsize= 25)
```
Plot 4 displays the prices of the different property types against the number of beds. Most of the AirBnB homes rented in Boston had only one bed, with prices varying mostly below 600. Most of the Townhouse, Entire Floor, Loft and Guesthouse listings offered two beds at prices similar to one-bed properties. However, most of the homes offering three beds were Apartments and Condominiums, with prices varying mostly below 400. A similar price range can be seen for homes offering four beds, which include a few Boats along with Condominiums and Villas.
```
# Plot 6: Price Distribution with Extra People:
k3= sns.kdeplot(bost_listing.extra_people, bost_listing.price, shade=True, shade_lowest=True, cmap='inferno')
k3b= sns.kdeplot(bost_listing.extra_people, bost_listing.price, cmap='cool')
plt.ylim(-10,400)
plt.xlim(-10,35)
plt.title("Price Distribution with Extra People", fontsize= 30, color= "DarkBlue")
plt.xlabel('Extra People', fontsize= 25)
plt.ylabel('Price', fontsize= 25)
```
Plot 6 shows a Kernel Density Estimation (KDE) of price against the extra-people charge, i.e. how charging for extra people affects the price of AirBnB homes in Boston. At an extra-people charge of 0, the price density stretches from a minimum near 0 up to the highest density at around 400. Moving right to a charge of 5, the lower end of the price density sits at around 25 and the upper end at around 275. At a charge of 10, the lower end stays similar but the upper end decreases to around 150. It is an interesting observation that the price density keeps narrowing as the extra-people charge increases toward 30.
```
# Plot 7: Extracting Common Words from the 'Comment' Column:
## Loading the necessary libraries
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS
comment_words = ' '
stopwords = set(STOPWORDS)
for val in bost_review.comments:
# typecaste each val to string
val = str(val)
# split the value
tokens = val.split()
# Converts each token into lowercase
for i in range(len(tokens)):
tokens[i] = tokens[i].lower()
for words in tokens:
comment_words = comment_words + words + ' '
wordcloud = WordCloud(width = 800, height = 800,
background_color ='white',
stopwords = stopwords,
min_font_size = 10).generate(comment_words)
# plot the WordCloud image
plt.figure(figsize = (5, 8), facecolor ='B')
plt.imshow(wordcloud)
plt.axis("off")
plt.tight_layout(pad = 0)
plt.show()
```
Plot 7 is a word cloud of the most common words in the 'comments' column of the customer reviews, giving a view of the criteria that matter to customers when leaving reviews. A clear observation is that customers report a good experience when the AirBnB home is clean and amenities are provided. To please customers, hosts can include amenities such as shampoo and towels.
----
#### Linear Regression Model
```
bost_listing.info()
df = bost_listing.copy()
useless_columns1 = ['name', 'summary', 'space', 'description', 'host_id', 'host_is_superhost', 'property_type', 'room_type', 'bed_type', 'amenities', 'first_review', 'last_review', 'instant_bookable', 'cancellation_policy']
df.drop(useless_columns1, axis=1, inplace=True)
# Fill the mean of the columns for any missing values
fill_mean = lambda col: col.fillna(col.mean())
df = df.apply(fill_mean, axis=0)
# Setting X variables
X = df[['accommodates', 'bathrooms', 'bedrooms', 'beds', 'security_deposit', 'cleaning_fee', 'guests_included', 'extra_people']]
# Setting y variable
y = df['price']
# Creating train and test sets of data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=42)
# Instantiate a Linear Regression model with normalized data
lm_model = LinearRegression(normalize=True)
# Fit model to the training data
lm_model.fit(X_train, y_train)
# Predict the response for the training data and the test data
y_test_preds = lm_model.predict(X_test)
y_train_preds = lm_model.predict(X_train)
# Comparing the values of the y_test_preds (Y Prediction) with the y_test values:
plt.scatter(y_test, y_test_preds)
# Obtain an r-squared value for both the training and test data
test_score = r2_score(y_test, y_test_preds)
train_score = r2_score(y_train, y_train_preds)
print("Training R2 {}. Test R2 {}.".format(train_score, test_score))
```
The predicted values from the linear regression model are quite far from the actual values; the model has a lot of scope for improvement.
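One concrete direction for improvement is to add regularization and pick its strength by cross-validation rather than fitting a single unpenalized model. A hedged sketch using `RidgeCV` on synthetic data (the three features stand in for listing columns such as accommodates, bedrooms and cleaning_fee, but the numbers here are made up):

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# synthetic stand-ins for the listing features and price
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 3))
y = 50 + X @ np.array([40.0, 25.0, 10.0]) + rng.normal(scale=5.0, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = RidgeCV(alphas=np.logspace(-3, 3, 13))  # alpha chosen by internal cross-validation
model.fit(X_train, y_train)
test_r2 = r2_score(y_test, model.predict(X_test))
print(round(test_r2, 2))
```

On the real listings, a tree-based model would also be worth comparing, since several of the predictors plausibly interact.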
```
import os
from dotenv import load_dotenv, find_dotenv
from os.path import join, dirname, basename, exists, isdir
### Load environmental variables from the project root directory ###
# find .env automagically by walking up directories until it's found
dotenv_path = find_dotenv()
# load up the entries as environment variables
load_dotenv(dotenv_path)
# now you can get the variables using their names
# Check whether a network drive has been specified
DATABASE = os.environ.get("NETWORK_URL")
if DATABASE == 'None':
pass
else:
pass
#mount network drive here
# set up directory paths
CURRENT_DIR = os.getcwd()
PROJ = dirname(dotenv_path) # project root directory
DATA = join(PROJ, 'data') #data directory
RAW_EXTERNAL = join(DATA, 'raw_external') # external data raw directory
RAW_INTERNAL = join(DATA, 'raw_internal') # internal data raw directory
INTERMEDIATE = join(DATA, 'intermediate') # intermediate data directory
FINAL = join(DATA, 'final') # final data directory
RESULTS = join(PROJ, 'results') # output directory
FIGURES = join(RESULTS, 'figures') # figure output directory
PICTURES = join(RESULTS, 'pictures') # picture output directory
# make folders specific for certain data
folder_name = ''
if folder_name != '':
#make folders if they don't exist
if not exists(join(RAW_EXTERNAL, folder_name)):
os.makedirs(join(RAW_EXTERNAL, folder_name))
if not exists(join(INTERMEDIATE, folder_name)):
os.makedirs(join(INTERMEDIATE, folder_name))
if not exists(join(FINAL, folder_name)):
os.makedirs(join(FINAL, folder_name))
print('Standard variables loaded, you are good to go!')
import cobra
import os
import pandas as pd
import cameo
import wget
import ssl
from scipy.stats import pearsonr
#E. coli model:
ssl._create_default_https_context = ssl._create_unverified_context
wget.download("https://raw.githubusercontent.com/BenjaSanchez/notebooks/master/e_coli_simulations/eciML1515.xml")
eColi_Model = cobra.io.read_sbml_model("eciML1515.xml")
os.remove("eciML1515.xml")
# proteomics data:
proteomics_dataset = f"{INTERMEDIATE}/proteomics/proteomics_concentrations.csv"
weights_location = f"{INTERMEDIATE}/proteomics/proteomics_masses.csv"
from collections import namedtuple
from cobra.medium.boundary_types import find_external_compartment
from cobra.io.dict import reaction_to_dict
import pandas as pd
import numpy as np
from simulations.modeling.driven import (
adjust_fluxes2model,
flexibilize_proteomics,
minimize_distance,
)
exchange_reaction = "42°C glucose"
exchange_reaction_lowercase = "42c"
def reset_real_proteomics(proteomics_dataset):
'''loads set of proteomics data from the provided dataset file into dict of lists'''
data = pd.read_csv(proteomics_dataset, index_col="UP") # yeast
data_dict = {}
for i in range(0,data.shape[1], 3):
uncertainty = data.iloc[:,i:i+3].std(axis=1)
uncertainty_name = data.columns[i]+ "_uncertainty"
data[uncertainty_name] = uncertainty
data_dict[data.columns[i]] = [{'identifier':data.index[j], 'measurement':data.iloc[j,i], 'uncertainty':data[uncertainty_name][j] }\
for j in range(0, len(data.iloc[:,i]))]
data_dict[data.columns[i+1]] = [{'identifier':data.index[j], 'measurement':data.iloc[j,i+1], 'uncertainty':data[uncertainty_name][j] }\
for j in range(0, len(data.iloc[:,i+1]))]
data_dict[data.columns[i+2]] = [{'identifier':data.index[j], 'measurement':data.iloc[j,i+2], 'uncertainty':data[uncertainty_name][j] }\
for j in range(0, len(data.iloc[:,i+2]))]
return data_dict
proteomics_data = reset_real_proteomics(proteomics_dataset)
growth_rates = pd.read_csv(f"{RAW_INTERNAL}/proteomics/growth_conditions.csv")
growth_rates = growth_rates.drop(growth_rates.columns.difference(['Growth condition','Growth rate (h-1)', 'Stdev']), axis=1)
growth_rates = growth_rates.drop([0,1], axis=0)
def find_exchange_rxn(compound, model):
exchange_reactions = [i for i in model.reactions if "EX" in i.id]
compound_ex_rxn = [i for i in exchange_reactions if compound in i.name]
compound_ex_rxn = [i for i in compound_ex_rxn if len(list(i._metabolites.keys())) == 1 \
& (list(i._metabolites.values())[0] == 1.0) \
& (list(i._metabolites.keys())[0].name == compound + " [extracellular space]")]
return compound_ex_rxn
# find Pyruvate
ac_ex = find_exchange_rxn(exchange_reaction, eColi_Model)
print(ac_ex)
model = eColi_Model
# minimal medium with pyruvate
# pyruvate_growth_rate = list(growth_rates['Growth rate (h-1)'].loc[growth_rates['Growth condition'] == "Acetate"])[0]
# model = eColi_Model.copy()
# medium = model.medium
# medium.pop("EX_glc__D_e_REV", None)
# medium[f'{ac_ex[0].id}'] = 10
# model.medium = medium
# pyr_model.medium = minimal_medium(pyr_model).to_dict()
print(model.optimize())
# Flexibilize proteomics
ec_model_1 = model.copy()  # copies, so each flexibilization works on its own model rather than the same object
ec_model_2 = model.copy()
ec_model_3 = model.copy()
# first
print("Number of proteins originally: ", len(proteomics_data[exchange_reaction_lowercase]))
growth_rate = {"measurement":float(list(growth_rates['Growth rate (h-1)'].loc[growth_rates['Growth condition'] == exchange_reaction])[0]),\
"uncertainty":float(list(growth_rates['Stdev'].loc[growth_rates['Growth condition'] == exchange_reaction])[0])}
new_growth_rate, new_proteomics, warnings = flexibilize_proteomics(ec_model_1, "BIOMASS_Ec_iML1515_core_75p37M", growth_rate, proteomics_data[exchange_reaction_lowercase], [])
print("Number of proteins incorporated: ", len(new_proteomics))
# second
print("Number of proteins originally: ", len(proteomics_data[exchange_reaction_lowercase + "1"]))
growth_rate = {"measurement":float(list(growth_rates['Growth rate (h-1)'].loc[growth_rates['Growth condition'] == exchange_reaction])[0]),\
"uncertainty":float(list(growth_rates['Stdev'].loc[growth_rates['Growth condition'] == exchange_reaction])[0])}
new_growth_rate, new_proteomics, warnings = flexibilize_proteomics(ec_model_2, "BIOMASS_Ec_iML1515_core_75p37M", growth_rate, proteomics_data[exchange_reaction_lowercase + "1"], [])
print("Number of proteins incorporated: ", len(new_proteomics))
# third
print("Number of proteins originally: ", len(proteomics_data[exchange_reaction_lowercase + "2"]))
growth_rate = {"measurement":float(list(growth_rates['Growth rate (h-1)'].loc[growth_rates['Growth condition'] == exchange_reaction])[0]),\
"uncertainty":float(list(growth_rates['Stdev'].loc[growth_rates['Growth condition'] == exchange_reaction])[0])}
new_growth_rate, new_proteomics, warnings = flexibilize_proteomics(ec_model_3, "BIOMASS_Ec_iML1515_core_75p37M", growth_rate, proteomics_data[exchange_reaction_lowercase + "2"], [])
print("Number of proteins incorporated: ", len(new_proteomics))
```
# Extraction of the usages
```
weights = pd.read_csv(weights_location, index_col = "UP")
# usages of ac proteins
#solution = pyr_model.optimize()
# pyr model usages
def get_usages(prot_int_model, weights):
# get the usages of a model integrated with proteomics
try:
solution = cobra.flux_analysis.pfba(prot_int_model)
except:
print("used normal fba")
solution = prot_int_model.optimize()
abs_usages = pd.Series()
perc_usages = pd.Series()
mass_usages = 0
non_mass_proteins = []
for reaction in prot_int_model.reactions:
if reaction.id.startswith("prot_"):
prot_id = reaction.id.replace("prot_","")
prot_id = prot_id.replace("_exchange","")
abs_usage = solution.fluxes[reaction.id]
abs_usages = abs_usages.append(pd.Series({prot_id:abs_usage}))
perc_usage = solution.fluxes[reaction.id]/reaction.upper_bound
perc_usages = perc_usages.append(pd.Series({prot_id:perc_usage}))
try:
if perc_usage <= 100:
mass_usages += perc_usage/100 * weights[prot_id]
except:
non_mass_proteins.append(prot_id)
return abs_usages, perc_usages, mass_usages, non_mass_proteins
#
abs_usages_1, perc_usages_1, mass_usage_1, non_mass_proteins_1 = get_usages(ec_model_1, weights[f"42°C"])
abs_usages_2, perc_usages_2, mass_usage_2, non_mass_proteins_2 = get_usages(ec_model_2, weights[f"42°C"])
abs_usages_3, perc_usages_3, mass_usage_3, non_mass_proteins_3 = get_usages(ec_model_3, weights[f"42°C"])
len(non_mass_proteins_1)
print("Mass of Proteins total: ", sum(weights["acetate"]))
print("Mass actually used: ", mass_usage_1)
abs_usages_df = pd.DataFrame({f"{exchange_reaction_lowercase}": perc_usages_1, f"{exchange_reaction_lowercase}.1": perc_usages_2, f"{exchange_reaction_lowercase}.2": perc_usages_3})
abs_usages_df.to_csv(f"{FINAL}/abs_usages_gecko/{exchange_reaction_lowercase}")
```
# Masses
Masses that are actually used seem very low, at 0.9%
What should I actually do here?
Total protein mass: 117633655349 Dalton
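As a quick sanity check on those numbers (plain arithmetic, assuming the 0.9% figure above):

```python
total_mass_da = 117_633_655_349   # total protein mass quoted above, in Dalton
used_fraction = 0.009             # the ~0.9% usage observed

used_mass_da = total_mass_da * used_fraction
print(f"{used_mass_da:.3e} Da")   # → 1.059e+09 Da
```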
```
import numpy as np; np.random.seed(42)
import matplotlib.pyplot as plt
import seaborn as sns
df = perc_usages_1.to_frame()
df["perc_usages_2"] = perc_usages_2
df["perc_usages_3"] = perc_usages_3
df.columns = ["Measurement 1", "Measurement 2", "Measurement 3"]
sns.boxplot(x="variable", y="value", data=pd.melt(df[(df > 0) & (df < 100)]))
plt.xlabel('Measurements')
plt.ylabel('Usage of measurement in %')
plt.title('% usage of proteins per ec simulation ')
plt.savefig(f'{FIGURES}/ec_incorporation_perc_usage_box_ac')
plt.show()
#df['pct'] = df['Location'].div(df.groupby('Hour')['Location'].transform('sum'))
#g = sns.FacetGrid(df, row="pct", hue="pct", aspect=15, height=.5, palette=pal)
perc_incorporation_pyr = pd.melt(df[(df > 0) & (df < 100)])
# Method 1: on the same Axis
sns.distplot( df[(df > 0) & (df < 100)].iloc[:,0] , color="skyblue", label="1", kde=False)
sns.distplot( df[(df > 0) & (df < 100)].iloc[:,1], color="red", label="2", kde=False)
sns.distplot( df[(df > 0) & (df < 100)].iloc[:,2], color="green", label="3", kde=False)
from sklearn.preprocessing import StandardScaler
# standardize data for pca
# #features = ['sepal length', 'sepal width', 'petal length', 'petal width']# Separating out the features
pca_df_all_proteomics_and_pyr = pd.read_csv(proteomics_dataset, index_col="UP").loc[df.index,:]
pca_df_all_proteomics_and_pyr['pyr_1'] = abs_usages_1
pca_df_all_proteomics_and_pyr = pca_df_all_proteomics_and_pyr.T.dropna(axis='columns')
x = pca_df_all_proteomics_and_pyr.values
x = StandardScaler().fit_transform(x)
# run pca
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
principalComponents = pca.fit_transform(x)
principalDf = pd.DataFrame(data = principalComponents, columns = ['principal component 1', 'principal component 2'])
principalDf.index = pca_df_all_proteomics_and_pyr.index
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('Principal Component 1', fontsize = 15)
ax.set_ylabel('Principal Component 2', fontsize = 15)
ax.set_title('2 component PCA with zero values', fontsize = 20)
amount = len(principalDf.index)
for i in range(amount):
c = [float(i)/float(amount), 0.0, float(amount-i)/float(amount)] #R,G,B
ax.scatter(principalDf.loc[principalDf.index[i], 'principal component 1']
, principalDf.loc[principalDf.index[i], 'principal component 2']
, color = c
, s = 50)
ax.scatter(principalDf.loc["pyr_1", 'principal component 1']
, principalDf.loc["pyr_1", 'principal component 2']
, color = "green"
, s = 50)
#ax.legend(pca_df_all_proteomics_and_pyr.index)
ax.grid()
plt.savefig(f'{FIGURES}/')
# standardize data for pca
pca_df_all_proteomics_and_pyr = pd.read_csv(proteomics_dataset, index_col="UP").loc[df.index,:]
pca_df_all_proteomics_and_pyr['pyr_1'] = abs_usages_1
pca_df_all_proteomics_and_pyr = pca_df_all_proteomics_and_pyr[pca_df_all_proteomics_and_pyr['pyr_1'] > 0]
pca_df_all_proteomics_and_pyr = pca_df_all_proteomics_and_pyr.T.dropna(axis='columns')
x = pca_df_all_proteomics_and_pyr.values
x = StandardScaler().fit_transform(x)
# run pca
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
principalComponents = pca.fit_transform(x)
principalDf = pd.DataFrame(data = principalComponents, columns = ['principal component 1', 'principal component 2'])
principalDf.index = pca_df_all_proteomics_and_pyr.index
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('Principal Component 1', fontsize = 15)
ax.set_ylabel('Principal Component 2', fontsize = 15)
ax.set_title('2 component PCA without zero values', fontsize = 20)
amount = len(principalDf.index)
for i in range(amount):
c = [float(i)/float(amount), 0.0, float(amount-i)/float(amount)] #R,G,B
ax.scatter(principalDf.loc[principalDf.index[i], 'principal component 1']
, principalDf.loc[principalDf.index[i], 'principal component 2']
, color = c
, s = 50)
ax.scatter(principalDf.loc["pyr_1", 'principal component 1']
           , principalDf.loc["pyr_1", 'principal component 2']
, color = "green"
, s = 50)
ax.grid()
pd.DataFrame({'ac_1':abs_usages_1, 'ac_2':abs_usages_2, 'ac_3':abs_usages_3}).to_csv(f'{INTERMEDIATE}/proteomics/acetate_usages.csv')
```
# Sanity check
- use fluxomics data to compare the measured fluxes with the predictions of the non-ec model
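Before touching the model, the comparison itself is just an index-alignment problem. The sketch below shows one way to line up predicted and measured fluxes side by side; the reaction ids and flux values are invented for illustration, and `predicted`/`measured` stand in for `ec_model_sol.fluxes` and the chemostat series loaded in the next cell.

```python
import pandas as pd

# Illustrative stand-ins: `predicted` plays the role of ec_model_sol.fluxes,
# `measured` the chemostat fluxomics series; ids and values are made up.
predicted = pd.Series({'HEX1': 8.2, 'PGI': 5.1, 'PYK': 10.4})
measured = pd.Series({'HEX1': 8.0, 'PGI': 4.8, 'ACKr': 2.1})

# The DataFrame constructor aligns on the union of indices, so reactions
# present in only one source show up with NaN in the other column.
comparison = pd.DataFrame({'model': predicted, 'fluxomics': measured})
comparison['abs_error'] = (comparison['model'] - comparison['fluxomics']).abs()
print(comparison)
```

Reactions that only one source reports then stand out as NaN rows instead of silently disappearing.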
```
# load fluxomics data
fluxes = pd.read_csv(f"{RAW_EXTERNAL}/chemostat_flux.csv", index_col = "D [h-1]")["0.19"]
fluxes
ec_model_sol = ec_model_1.optimize()
ec_model_pfba = cobra.flux_analysis.pfba(ec_model_1)
# inspect the glucose exchange flux
type(ec_model_1.optimize().fluxes)
f = ec_model_1.optimize()
g = [i for i in range(0, len(f.fluxes)) if "EX_glc" in f.fluxes.index[i]]
f.fluxes[g]
# find reaction GLC + ATP -> G6P
glc_metabolite = ec_model_1.metabolites.get_by_id("glc__D_c")
atp_metabolite = ec_model_1.metabolites.get_by_id("atp_c")
reactions_glc_atp = [i for i in ec_model_1.reactions\
                     if glc_metabolite in i.metabolites and atp_metabolite in i.metabolites]
hexokinase = reactions_glc_atp[0]
print("GLC + ATP -> G6P :",ec_model_sol[hexokinase.id])
# find flux glc atp
# find reaction G6P -> 6PG + NADPH
g6p_metabolite = ec_model_1.metabolites.get_by_id("g6p_c")
nadph_metabolite = ec_model_1.metabolites.get_by_id("nadph_c")
nadp_reactions = [i for i in ec_model_1.reactions\
if nadph_metabolite in i.metabolites and g6p_metabolite in i.metabolites]
g6p_nadph = nadp_reactions[0]
print("G6P -> 6PG + NADPH :",ec_model_sol[g6p_nadph.id])
# find reaction G6P -> F6P
f6p_metabolite = ec_model_1.metabolites.get_by_id("f6p_c")
g6p_metabolite = ec_model_1.metabolites.get_by_id("g6p_c")
f6p_reactions = [i for i in ec_model_1.reactions\
if f6p_metabolite in i.metabolites and g6p_metabolite in i.metabolites]
f6p_reaction = f6p_reactions[0]
print("G6P -> F6P :",ec_model_sol[f6p_reaction.id])
# find reaction 6PG -> T3P + PYR
pyr_metabolite = ec_model_1.metabolites.get_by_id("pyr_c")
sixpgl_metabolite = ec_model_1.metabolites.get_by_id("6pgl_c")
pyr_reactions = [i for i in ec_model_1.reactions\
if pyr_metabolite in i.metabolites]# and sixpgl_metabolite in i.metabolites]
pyr_reaction = pyr_reactions[0]
print("6PG -> T3P + PYR :",ec_model_sol[pyr_reaction.id])
# find reaction F6P + ATP -> 2T3P
pyr_reactions = [i for i in ec_model_1.reactions\
if f6p_metabolite in i.metabolites and atp_metabolite in i.metabolites]
f6p_reaction = pyr_reactions[0]
print("F6P + ATP -> 2T3P :",ec_model_sol[f6p_reaction.id])
# find reaction PGA -> PEP
pep_metabolite = ec_model_1.metabolites.get_by_id("pep_c")
pga_reactions = [i for i in ec_model_1.reactions\
if pep_metabolite in i.metabolites]# and atp_metabolite in i.metabolites]
pga_reaction = pga_reactions[0]
#ec_model_1.optimize()[pga_reactions.id]
# find reaction PEP -> PYR + ATP
pep_metabolite = ec_model_1.metabolites.get_by_id("pep_c")
peppyr_reactions = [i for i in ec_model_1.reactions\
if pep_metabolite in i.metabolites and atp_metabolite in i.metabolites]
peppyr_reaction = peppyr_reactions[1]
print("PEP -> PYR + ATP :",ec_model_sol[peppyr_reaction.id])
# find reaction PYR -> AcCoA + CO2 + NADH
pep_metabolite = ec_model_1.metabolites.get_by_id("pep_c")
co2_metabolite = ec_model_1.metabolites.get_by_id("co2_c")
accoa_metabolite = ec_model_1.metabolites.get_by_id("accoa_c")
pyraccoa_reactions = [i for i in ec_model_1.reactions\
if pyr_metabolite in i.metabolites and co2_metabolite in i.metabolites and accoa_metabolite in i.metabolites]
#pyraccoa_reaction = pyraccoa_reactions[0]
#ec_model_1.optimize()[pyraccoa_reaction.id]
# find reaction FUM -> MAL
fum_metabolite = ec_model_1.metabolites.get_by_id("fum_c")
mal_metabolite = ec_model_1.metabolites.get_by_id("mal__L_c")
fummal_reactions = [i for i in ec_model_1.reactions\
                    if fum_metabolite in i.metabolites and mal_metabolite in i.metabolites]
#fummal_reaction = fummal_reactions[0]
#ec_model_sol[fummal_reaction.id]
# find reaction AcCoA -> Acetate + ATP
ac_metabolite = ec_model_1.metabolites.get_by_id("ac_c")
accoaac_reactions = [i for i in ec_model_1.reactions\
if ac_metabolite in i.metabolites and accoa_metabolite in i.metabolites and atp_metabolite in i.metabolites]
accoaac_reaction = accoaac_reactions[0]
print("AcCoA -> Acetate + ATP: " ,ec_model_sol[accoaac_reaction.id])
# find reaction NADPH -> NADH
nadh_metabolite = ec_model_1.metabolites.get_by_id("nadh_c")
nadpnadh_reactions = [i for i in ec_model_1.reactions\
if nadh_metabolite in i.metabolites and nadph_metabolite in i.metabolites]
nadpnadh_reaction = nadpnadh_reactions[0]
#ec_model_1.optimize()[nadpnadh_reaction.id]
#E. coli model:
ssl._create_default_https_context = ssl._create_unverified_context
wget.download("https://raw.githubusercontent.com/BenjaSanchez/notebooks/master/e_coli_simulations/eciML1515.xml")
unc_model = cobra.io.read_sbml_model("eciML1515.xml")
os.remove("eciML1515.xml")
unc_model_sol = unc_model.optimize()
#unc_model_pfba_sol = cobra.flux_analysis.pfba(unc_model_sol)
# load fluxomics data
fluxes = pd.read_csv(f"{RAW_EXTERNAL}/chemostat_flux.csv", index_col = "D [h-1]")["0.19"]
# inspect the glucose exchange flux
type(ec_model_1.optimize().fluxes)
f = ec_model_1.optimize()
g = [i for i in range(0, len(f.fluxes)) if "EX_glc" in f.fluxes.index[i]]
f.fluxes[g]
unc_model.medium
# find reaction GLC + ATP -> G6P
# note: fluxes are looked up in unc_model_sol, so metabolites and reactions
# must come from unc_model (cobra metabolite objects are model-specific)
glc_metabolite = unc_model.metabolites.get_by_id("glc__D_c")
atp_metabolite = unc_model.metabolites.get_by_id("atp_c")
reactions_glc_atp = [i for i in unc_model.reactions\
                     if glc_metabolite in i.metabolites and atp_metabolite in i.metabolites]
hexokinase = reactions_glc_atp[0]
print("GLC + ATP -> G6P :", unc_model_sol[hexokinase.id])
# find reaction G6P -> 6PG + NADPH
g6p_metabolite = unc_model.metabolites.get_by_id("g6p_c")
nadph_metabolite = unc_model.metabolites.get_by_id("nadph_c")
nadp_reactions = [i for i in unc_model.reactions\
                  if nadph_metabolite in i.metabolites and g6p_metabolite in i.metabolites]
g6p_nadph = nadp_reactions[0]
print("G6P -> 6PG + NADPH :", unc_model_sol[g6p_nadph.id])
# find reaction G6P -> F6P
f6p_metabolite = unc_model.metabolites.get_by_id("f6p_c")
f6p_reactions = [i for i in unc_model.reactions\
                 if f6p_metabolite in i.metabolites and g6p_metabolite in i.metabolites]
f6p_reaction = f6p_reactions[0]
print("G6P -> F6P :", unc_model_sol[f6p_reaction.id])
# find reaction 6PG -> T3P + PYR
pyr_metabolite = unc_model.metabolites.get_by_id("pyr_c")
sixpgl_metabolite = unc_model.metabolites.get_by_id("6pgl_c")
pyr_reactions = [i for i in unc_model.reactions\
                 if pyr_metabolite in i.metabolites]# and sixpgl_metabolite in i.metabolites]
pyr_reaction = pyr_reactions[0]
print("6PG -> T3P + PYR :", unc_model_sol[pyr_reaction.id])
# find reaction F6P + ATP -> 2T3P
pfk_reactions = [i for i in unc_model.reactions\
                 if f6p_metabolite in i.metabolites and atp_metabolite in i.metabolites]
pfk_reaction = pfk_reactions[0]
print("F6P + ATP -> 2T3P :", unc_model_sol[pfk_reaction.id])
# find reaction PGA -> PEP
pep_metabolite = unc_model.metabolites.get_by_id("pep_c")
pga_reactions = [i for i in unc_model.reactions\
                 if pep_metabolite in i.metabolites]# and atp_metabolite in i.metabolites]
pga_reaction = pga_reactions[0]
#unc_model_sol[pga_reaction.id]
# find reaction PEP -> PYR + ATP
peppyr_reactions = [i for i in unc_model.reactions\
                    if pep_metabolite in i.metabolites and atp_metabolite in i.metabolites]
peppyr_reaction = peppyr_reactions[1]
print("PEP -> PYR + ATP :", unc_model_sol[peppyr_reaction.id])
# find reaction PYR -> AcCoA + CO2 + NADH
co2_metabolite = unc_model.metabolites.get_by_id("co2_c")
accoa_metabolite = unc_model.metabolites.get_by_id("accoa_c")
pyraccoa_reactions = [i for i in unc_model.reactions\
                      if pyr_metabolite in i.metabolites and co2_metabolite in i.metabolites and accoa_metabolite in i.metabolites]
#pyraccoa_reaction = pyraccoa_reactions[0]
#unc_model_sol[pyraccoa_reaction.id]
# find reaction FUM -> MAL
fum_metabolite = unc_model.metabolites.get_by_id("fum_c")
mal_metabolite = unc_model.metabolites.get_by_id("mal__L_c")
fummal_reactions = [i for i in unc_model.reactions\
                    if fum_metabolite in i.metabolites and mal_metabolite in i.metabolites]
#fummal_reaction = fummal_reactions[0]
#unc_model_sol[fummal_reaction.id]
# find reaction AcCoA -> Acetate + ATP
ac_metabolite = unc_model.metabolites.get_by_id("ac_c")
accoaac_reactions = [i for i in unc_model.reactions\
                     if ac_metabolite in i.metabolites and accoa_metabolite in i.metabolites and atp_metabolite in i.metabolites]
accoaac_reaction = accoaac_reactions[0]
print("AcCoA -> Acetate + ATP: ", unc_model_sol[accoaac_reaction.id])
# find reaction NADPH -> NADH
nadh_metabolite = unc_model.metabolites.get_by_id("nadh_c")
nadpnadh_reactions = [i for i in unc_model.reactions\
                      if nadh_metabolite in i.metabolites and nadph_metabolite in i.metabolites]
nadpnadh_reaction = nadpnadh_reactions[0]
#unc_model_sol[nadpnadh_reaction.id]
```
| github_jupyter |
### load library
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from tensorflow import keras
# from tensorflow.keras.utils import to_categorical
# from tensorflow.keras.models import Sequential
# from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
# from tensorflow.keras.losses import categorical_crossentropy
# from tensorflow.keras.optimizers import Adam
# from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
```
### Load dataset
```
train_data_path = '../mnist_input/mnist-digits-train.csv'
test_data_path = '../mnist_input/mnist-digits-test.csv'
train_data = pd.read_csv(train_data_path, header=None)
train_data.head(10)
# The classes of this balanced dataset are as follows. Index into it based on class label
class_mapping = '0123456789'
class_mapping[5]
train_data.shape
```
### Data is flipped
```
num_classes = len(train_data[0].unique())
row_num = 8
plt.imshow(train_data.values[row_num, 1:].reshape([28, 28]), cmap='Greys_r')
plt.show()
img_flip = np.transpose(train_data.values[row_num,1:].reshape(28, 28), axes=[1,0]) # img_size * img_size arrays
plt.imshow(img_flip, cmap='Greys_r')
plt.show()
def show_img(data, row_num):
img_flip = np.transpose(data.values[row_num,1:].reshape(28, 28), axes=[1,0]) # img_size * img_size arrays
plt.title('Class: ' + str(data.values[row_num,0]) + ', Label: ' + str(class_mapping[data.values[row_num,0]]))
plt.imshow(img_flip, cmap='Greys_r')
show_img(train_data, 149)
# 10 digits
num_classes = 10
img_size = 28
def img_label_load(data_path, num_classes=None):
data = pd.read_csv(data_path, header=None)
data_rows = len(data)
if not num_classes:
num_classes = len(data[0].unique())
# this assumes square imgs. Should be 28x28
img_size = int(np.sqrt(len(data.iloc[0][1:])))
# Images need to be transposed. This line also does the reshaping needed.
imgs = np.transpose(data.values[:,1:].reshape(data_rows, img_size, img_size, 1), axes=[0,2,1,3]) # img_size * img_size arrays
labels = keras.utils.to_categorical(data.values[:,0], num_classes) # one-hot encoding vectors
return imgs/255., labels
```
### model, compile
```
model = keras.models.Sequential()
# model.add(keras.layers.Reshape((img_size,img_size,1), input_shape=(784,)))
model.add(keras.layers.Conv2D(filters=12, kernel_size=(5,5), strides=2, activation='relu',
input_shape=(img_size,img_size,1)))
# model.add(keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(keras.layers.Dropout(.5))
model.add(keras.layers.Conv2D(filters=18, kernel_size=(3,3) , strides=2, activation='relu'))
# model.add(keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(keras.layers.Dropout(.5))
model.add(keras.layers.Conv2D(filters=24, kernel_size=(2,2), activation='relu'))
# model.add(keras.layers.MaxPooling2D(pool_size=(2,2)))
# model.add(keras.layers.Conv2D(filters=30, kernel_size=(3,3), activation='relu'))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(units=150, activation='relu'))
model.add(keras.layers.Dense(units=num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
model.summary()
for layer in model.layers:
print(layer.get_output_at(0).get_shape().as_list())
```
### Train
```
X, y = img_label_load(train_data_path)
print(X.shape)
data_generator = keras.preprocessing.image.ImageDataGenerator(validation_split=.2)
## consider using this for more variety
data_generator_with_aug = keras.preprocessing.image.ImageDataGenerator(validation_split=.2,
width_shift_range=.2, height_shift_range=.2,
rotation_range=60, zoom_range=.2, shear_range=.3)
# if already ran this above, no need to do it again
# X, y = img_label_load(train_data_path)
# print("X.shape: ", X.shape)
training_data_generator = data_generator.flow(X, y, subset='training')
validation_data_generator = data_generator.flow(X, y, subset='validation')
history = model.fit(training_data_generator,  # fit_generator is deprecated; fit() accepts generators
                    steps_per_epoch=500, epochs=10, # can change epochs to 5
                    validation_data=validation_data_generator)
test_X, test_y = img_label_load(test_data_path)
test_data_generator = data_generator.flow(test_X, test_y)  # evaluate on the test split, not the training X, y
model.evaluate(test_data_generator)
```
### Look at some predictions
```
test_data = pd.read_csv(test_data_path, header=None)
show_img(test_data, 123)
X_test, y_test = img_label_load(test_data_path) # loads images and orients for model
def run_prediction(idx):
result = np.argmax(model.predict(X_test[idx:idx+1]))
print('Prediction: ', result, ', Char: ', class_mapping[result])
print('Label: ', test_data.values[idx,0])
show_img(test_data, idx)
import random
for _ in range(1,10):
idx = random.randint(0, 47-1)
run_prediction(idx)
show_img(test_data, 123)
np.argmax(y_test[123])
```
### Keras exports
```
with open('model.json', 'w') as f:
f.write(model.to_json())
model.save_weights('./model.h5')
model.save('./full_model.h5')
#!dir
# ... or ...
#!ls -al
```
### Keras to ONNX
```
import keras2onnx
# convert to onnx model
onnx_model = keras2onnx.convert_keras(model, model.name)
# save onnx model
model_file = 'model.onnx'
keras2onnx.save_model(onnx_model, model_file)
```
### Upload the ONNX file at Sclbl.net ...
### ... then test with Protobuf input
```
import requests
import base64
from onnx import numpy_helper
# serialize and base64 encode protobuf input
xc = X_test[127]
print(X_test[127])
xc = xc.astype('float32')
tensor = numpy_helper.from_array(xc)
serialized = tensor.SerializeToString()
encoded = base64.b64encode(serialized)
# then test the model on a sclbl.net cloud server
url = "https://taskmanager.sclbl.net:8080/task/34e77475-51e7-11eb-962f-9600004e79cc"
payload = "{\"input\":{\"content-type\":\"json\",\"location\":\"embedded\",\"data\":\"{\\\"input\\\": \\\"" + encoded.decode('ascii') + "\\\"}\"},\"output\":{\"content-type\":\"json\",\"location\":\"echo\"},\"control\":1,\"properties\":{\"language\":\"WASM\"}}"
response = requests.request("POST", url, data = payload)
print(response.text.encode('utf8'))
print("Expected result: " + str(np.argmax(y_test[127])))
```
### ... or with raw input
```
# serialize and base64 encode raw input
xc = X_test[127]
xc = xc.astype('float32')
raw = xc.tobytes()
encoded = base64.b64encode(raw)
# then test the model on a sclbl.net cloud server
url = "https://taskmanager.sclbl.net:8080/task/34e77475-51e7-11eb-962f-9600004e79cc"
payload = "{\"input\":{\"content-type\":\"json\",\"location\":\"embedded\",\"data\":\"{\\\"type\\\":\\\"raw\\\",\\\"input\\\": \\\"" + encoded.decode('ascii') + "\\\"}\"},\"output\":{\"content-type\":\"json\",\"location\":\"echo\"},\"control\":1,\"properties\":{\"language\":\"WASM\"}}"
response = requests.request("POST", url, data = payload)
print(response.text.encode('utf8'))
print("Expected result: " + str(np.argmax(y_test[127])))
print(encoded.decode('ascii'))
```
| github_jupyter |
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Libraries-&-settings" data-toc-modified-id="Libraries-&-settings-1"><span class="toc-item-num">1 </span>Libraries & settings</a></span></li><li><span><a href="#Metrics" data-toc-modified-id="Metrics-2"><span class="toc-item-num">2 </span>Metrics</a></span><ul class="toc-item"><li><span><a href="#Crowd-related" data-toc-modified-id="Crowd-related-2.1"><span class="toc-item-num">2.1 </span>Crowd-related</a></span></li><li><span><a href="#Path-efficiency-related" data-toc-modified-id="Path-efficiency-related-2.2"><span class="toc-item-num">2.2 </span>Path efficiency-related</a></span></li><li><span><a href="#Control-related" data-toc-modified-id="Control-related-2.3"><span class="toc-item-num">2.3 </span>Control-related</a></span></li></ul></li><li><span><a href="#Pipeline" data-toc-modified-id="Pipeline-3"><span class="toc-item-num">3 </span>Pipeline</a></span><ul class="toc-item"><li><span><a href="#Result-loading" data-toc-modified-id="Result-loading-3.1"><span class="toc-item-num">3.1 </span>Result loading</a></span></li><li><span><a href="#Mean-Std-statistics" data-toc-modified-id="Mean-Std-statistics-3.2"><span class="toc-item-num">3.2 </span>Mean-Std statistics</a></span></li><li><span><a href="#ANOVA-test-for-controller-comparison" data-toc-modified-id="ANOVA-test-for-controller-comparison-3.3"><span class="toc-item-num">3.3 </span>ANOVA test for controller comparison</a></span></li><li><span><a href="#Visualize-with-grouping-by-date" data-toc-modified-id="Visualize-with-grouping-by-date-3.4"><span class="toc-item-num">3.4 </span>Visualize with grouping by date</a></span><ul class="toc-item"><li><span><a href="#Palette-settings" data-toc-modified-id="Palette-settings-3.4.1"><span class="toc-item-num">3.4.1 </span>Palette settings</a></span></li><li><span><a href="#Crowd-related-metrics" data-toc-modified-id="Crowd-related-metrics-3.4.2"><span class="toc-item-num">3.4.2 </span>Crowd-related 
metrics</a></span><ul class="toc-item"><li><span><a href="#4-in-1-plotting" data-toc-modified-id="4-in-1-plotting-3.4.2.1"><span class="toc-item-num">3.4.2.1 </span>4-in-1 plotting</a></span></li><li><span><a href="#Individual-figures" data-toc-modified-id="Individual-figures-3.4.2.2"><span class="toc-item-num">3.4.2.2 </span>Individual figures</a></span></li></ul></li><li><span><a href="#Path-efficiency-related-metrics" data-toc-modified-id="Path-efficiency-related-metrics-3.4.3"><span class="toc-item-num">3.4.3 </span>Path efficiency-related metrics</a></span><ul class="toc-item"><li><span><a href="#2-in-1-plotting" data-toc-modified-id="2-in-1-plotting-3.4.3.1"><span class="toc-item-num">3.4.3.1 </span>2-in-1 plotting</a></span></li><li><span><a href="#Individual-figures" data-toc-modified-id="Individual-figures-3.4.3.2"><span class="toc-item-num">3.4.3.2 </span>Individual figures</a></span></li></ul></li><li><span><a href="#Control-related-metrics" data-toc-modified-id="Control-related-metrics-3.4.4"><span class="toc-item-num">3.4.4 </span>Control-related metrics</a></span><ul class="toc-item"><li><span><a href="#4-in-1-plotting" data-toc-modified-id="4-in-1-plotting-3.4.4.1"><span class="toc-item-num">3.4.4.1 </span>4-in-1 plotting</a></span></li><li><span><a href="#Individual-figures" data-toc-modified-id="Individual-figures-3.4.4.2"><span class="toc-item-num">3.4.4.2 </span>Individual figures</a></span></li></ul></li></ul></li><li><span><a href="#Visualize-without-grouping-by-date" data-toc-modified-id="Visualize-without-grouping-by-date-3.5"><span class="toc-item-num">3.5 </span>Visualize without grouping by date</a></span><ul class="toc-item"><li><span><a href="#Palette-settings" data-toc-modified-id="Palette-settings-3.5.1"><span class="toc-item-num">3.5.1 </span>Palette settings</a></span></li><li><span><a href="#Crowd-related-metrics" data-toc-modified-id="Crowd-related-metrics-3.5.2"><span class="toc-item-num">3.5.2 </span>Crowd-related 
metrics</a></span><ul class="toc-item"><li><span><a href="#4-in-1-plotting" data-toc-modified-id="4-in-1-plotting-3.5.2.1"><span class="toc-item-num">3.5.2.1 </span>4-in-1 plotting</a></span></li><li><span><a href="#Individual-figures" data-toc-modified-id="Individual-figures-3.5.2.2"><span class="toc-item-num">3.5.2.2 </span>Individual figures</a></span></li></ul></li><li><span><a href="#Path-efficiency-related-metrics" data-toc-modified-id="Path-efficiency-related-metrics-3.5.3"><span class="toc-item-num">3.5.3 </span>Path efficiency-related metrics</a></span><ul class="toc-item"><li><span><a href="#2-in-1-plotting" data-toc-modified-id="2-in-1-plotting-3.5.3.1"><span class="toc-item-num">3.5.3.1 </span>2-in-1 plotting</a></span></li><li><span><a href="#Individual-figures" data-toc-modified-id="Individual-figures-3.5.3.2"><span class="toc-item-num">3.5.3.2 </span>Individual figures</a></span></li></ul></li><li><span><a href="#Control-related-metrics" data-toc-modified-id="Control-related-metrics-3.5.4"><span class="toc-item-num">3.5.4 </span>Control-related metrics</a></span><ul class="toc-item"><li><span><a href="#4-in-1-plotting" data-toc-modified-id="4-in-1-plotting-3.5.4.1"><span class="toc-item-num">3.5.4.1 </span>4-in-1 plotting</a></span></li><li><span><a href="#Individual-figures" data-toc-modified-id="Individual-figures-3.5.4.2"><span class="toc-item-num">3.5.4.2 </span>Individual figures</a></span></li></ul></li></ul></li></ul></li></ul></div>
# Controller comparison analysis
> Analysis of different control methods on the 2021-04-10 and 2021-04-24 data
## Libraries & settings
```
import math
import datetime
import collections
import sys, os, fnmatch
from pathlib import Path
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib as mpl
import matplotlib.pyplot as plt
use_serif_font = True
if use_serif_font:
plt.style.use('./styles/serif.mplstyle')
else:
plt.style.use('./styles/sans_serif.mplstyle')
plt.ioff()
import seaborn as sns
sns.set_context("paper", font_scale=1.2, rc={"lines.linewidth": 1.3})
from qolo.utils.notebook_util import (
walk,
values2colors,
values2color_list,
violinplot,
categorical_plot,
barplot_annotate_brackets,
import_eval_res,
)
from qolo.core.crowdbot_data import CrowdBotDatabase, CrowdBotData
from qolo.metrics.metric_qolo_perf import compute_rel_jerk
```
## Metrics
### Crowd-related
1. Crowd density (within a radius of 2.5 m, 5 m, and 10 m around the robot)
2. Minimum distance to pedestrians
3. Number of violations of the virtual boundary set in the robot controller
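The `avg_crowd_density2_5`-style columns below are precomputed by the qolo pipeline; as a definitional sketch (the function name and the uniform-disc convention are assumptions here, not the pipeline's implementation), crowd density within radius r is simply the pedestrian count inside r divided by the disc area:

```python
import numpy as np

def crowd_density(robot_xy, ped_xy, radius):
    """Pedestrians per square metre within `radius` of the robot."""
    dists = np.linalg.norm(ped_xy - robot_xy, axis=1)
    return np.count_nonzero(dists <= radius) / (np.pi * radius ** 2)

robot = np.array([0.0, 0.0])
peds = np.array([[1.0, 0.0], [0.0, 2.0], [4.0, 4.0]])  # two inside, one outside
density = crowd_density(robot, peds, radius=2.5)
```

Averaging this quantity over all frames of a run would yield something like `avg_crowd_density2_5`.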
```
crowd_metrics = (
'avg_crowd_density2_5',
'std_crowd_density2_5',
'max_crowd_density2_5',
'avg_crowd_density5',
'std_crowd_density5',
'max_crowd_density5',
'avg_min_dist',
'virtual_collision',
)
```
### Path efficiency-related
1. Relative time to goal (normalized by the distance to the goal)
2. Relative path length (normalized by the straight-line distance to the goal)
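Both metrics divide a raw quantity by the straight-line start-to-goal distance, so a value of 1.0 means a perfectly direct run. A minimal sketch of the path-length version (function name and trajectory are illustrative):

```python
import numpy as np

def rel_path_length(traj_xy, goal_xy):
    # travelled arc length divided by the straight-line start-to-goal distance
    travelled = np.linalg.norm(np.diff(traj_xy, axis=0), axis=1).sum()
    straight = np.linalg.norm(goal_xy - traj_xy[0])
    return travelled / straight

# L-shaped detour: 3 m east then 4 m north towards a goal 5 m away
traj = np.array([[0.0, 0.0], [3.0, 0.0], [3.0, 4.0]])
ratio = rel_path_length(traj, goal_xy=np.array([3.0, 4.0]))  # 7 / 5 = 1.4
```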
```
path_metrics = (
'rel_duration2goal',
'rel_path_length2goal',
'path_length2goal',
'duration2goal',
'min_dist2goal',
)
```
### Control-related
1. Agreement
2. Fluency
3. Contribution
4. Relative jerk (smoothness of the motion, computed as the summed linear and angular jerk)
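The actual metric comes from `compute_rel_jerk` (imported above); the sketch below only illustrates the underlying idea under assumed conventions (jerk as the second time-derivative of velocity, linear and angular magnitudes summed and normalised by run duration) and is not the project's exact formula:

```python
import numpy as np

def rel_jerk_sketch(lin_vel, ang_vel, dt, duration):
    # jerk = second time-derivative of velocity (third derivative of position)
    lin_jerk = np.gradient(np.gradient(lin_vel, dt), dt)
    ang_jerk = np.gradient(np.gradient(ang_vel, dt), dt)
    # integrate |jerk| over time, normalise by run duration (assumed convention)
    return np.sum(np.abs(lin_jerk) + np.abs(ang_jerk)) * dt / duration

t = np.arange(0.0, 5.0, 0.1)
v = 0.8 * np.ones_like(t)  # constant forward speed -> zero jerk
w = np.zeros_like(t)       # no rotation
smooth = rel_jerk_sketch(v, w, dt=0.1, duration=5.0)
```

A perfectly constant velocity profile scores zero; jerky accelerations inflate the value.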
```
control_metrics = (
'rel_jerk',
'avg_fluency',
'contribution',
'avg_agreement',
)
```
## Pipeline
```
qolo_dataset = CrowdBotData()
bagbase = qolo_dataset.bagbase_dir
outbase = qolo_dataset.outbase_dir
```
### Result loading
```
chosen_dates = ['0410', '0424']
chosen_type = ['mds', 'rds', 'shared_control']
eval_dirs = []
for root, dirs, files in walk(outbase, topdown=False, maxdepth=1):
for dir_ in dirs:
if any(s in dir_ for s in chosen_dates) and any(s in dir_ for s in chosen_type):
dir_ = dir_.replace("_processed", "")
eval_dirs.append(dir_)
print("{}/ is available!".format(dir_))
eval_res_df = import_eval_res(eval_dirs)
eval_res_df.head()
```
### Mean-Std statistics
```
for ctrl in chosen_type:
print(ctrl, ":", len(eval_res_df[eval_res_df.control_type == ctrl]))
frames_stat = []
for ctrl in chosen_type:
eval_res_df_ = eval_res_df[eval_res_df.control_type == ctrl]
stat_df = eval_res_df_.drop(['date'], axis=1).agg(['mean', 'std'])
if ctrl == 'shared_control':
stat_df.index = 'sc_'+stat_df.index.values
else:
stat_df.index = ctrl+'_'+stat_df.index.values
frames_stat.append(stat_df)
stat_df_all = pd.concat(frames_stat) # , ignore_index=True
stat_df_all.index.name = 'Metrics'
stat_df_all
export_metrics = (
'avg_crowd_density2_5',
'max_crowd_density2_5',
# 'avg_crowd_density5',
'avg_min_dist',
'rel_duration2goal',
'rel_path_length2goal',
'rel_jerk',
'contribution',
'avg_fluency',
'avg_agreement',
'virtual_collision',
)
export_control_df = stat_df_all[list(export_metrics)]
metrics_len = len(export_control_df.loc['mds_mean'])
methods = ['MDS', 'RDS', 'shared_control']
for idxx, method in enumerate(methods):
str_out = []
for idx in range(metrics_len):
avg = "${:0.2f}".format(round(export_control_df.iloc[2*idxx,idx],2))
std = "{:0.2f}$".format(round(export_control_df.iloc[2*idxx+1,idx],2))
str_out.append(avg+" \pm "+std)
export_control_df.loc[method] = str_out
export_contro_str_df = export_control_df.iloc[6:9]
export_contro_str_df
# print(export_contro_str_df.to_latex())
# print(export_contro_str_df.T.to_latex())
```
### ANOVA test for controller comparison
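`scipy.stats.f_oneway` accepts 2-D arrays and runs one test per column, which is why the cell below can pass whole metric matrices and get back a vector of F and p values (one per metric). A toy demonstration with synthetic groups:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# two controller groups x 3 metrics; f_oneway tests each column independently
group_a = rng.normal(0.0, 1.0, size=(20, 3))
group_b = rng.normal(0.5, 1.0, size=(20, 3))
fvals, pvals = stats.f_oneway(group_a, group_b)
print(fvals.shape, pvals.shape)  # one statistic per metric column
```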
```
anova_metrics = (
'avg_crowd_density2_5',
'max_crowd_density2_5',
'avg_crowd_density5',
'avg_min_dist',
'virtual_collision',
'rel_duration2goal',
'rel_path_length2goal',
'rel_jerk',
'contribution',
'avg_fluency',
'avg_agreement',
)
mds_anova_ = eval_res_df[eval_res_df.control_type=='mds']
mds_metrics = mds_anova_[list(anova_metrics)].values
rds_anova_ = eval_res_df[eval_res_df.control_type=='rds']
rds_metrics = rds_anova_[list(anova_metrics)].values
shared_control_anova_ = eval_res_df[eval_res_df.control_type=='shared_control']
shared_control_metrics = shared_control_anova_[list(anova_metrics)].values
fvalue12, pvalue12 = stats.f_oneway(mds_metrics, rds_metrics)
fvalue23, pvalue23 = stats.f_oneway(mds_metrics, shared_control_metrics)
fvalue13, pvalue13 = stats.f_oneway(rds_metrics, shared_control_metrics)
# total
fvalue, pvalue = stats.f_oneway(mds_metrics, rds_metrics, shared_control_metrics)
statP_df = pd.DataFrame(
data=np.vstack((pvalue12, pvalue23, pvalue13, pvalue)),
index=['mds-rds', 'mds-shared', 'rds-shared', 'total'],
)
statP_df.columns = list(anova_metrics)
statP_df.index.name = 'Metrics'
statF_df = pd.DataFrame(
data=np.vstack((fvalue12, fvalue23, fvalue13, fvalue)),
index=['mds-rds', 'mds-shared', 'rds-shared', 'total'],
)
statF_df.columns = list(anova_metrics)
statF_df.index.name = 'Metrics'
statP_df
statF_df
# print(statF_df.T.to_latex())
# print(statP_df.T.to_latex())
# print(stat_df_all.T.to_latex())
```
### Visualize with grouping by date
#### Palette settings
```
dates=['0410', '0424']
value_unique, color_unique = values2color_list(
dates, cmap_name='hot', range=(0.55, 0.75)
)
value_unique, point_color_unique = values2color_list(
dates, cmap_name='hot', range=(0.3, 0.6)
)
# creating a dictionary with one specific color per group:
box_pal = {value_unique[i]: color_unique[i] for i in range(len(value_unique))}
# original: (0.3, 0.6)
scatter_pal = {value_unique[i]: point_color_unique[i] for i in range(len(value_unique))}
# black
# scatter_pal = {value_unique[i]: (0.0, 0.0, 0.0, 1.0) for i in range(len(value_unique))}
# gray
# scatter_pal = {value_unique[i]: (0.3, 0.3, 0.3, 0.8) for i in range(len(value_unique))}
box_pal, scatter_pal
```
#### Crowd-related metrics
```
crowd_metrics_df = eval_res_df[['seq', 'control_type'] + list(crowd_metrics) + ['date']]
for ctrl in chosen_type:
print("###", ctrl)
print("# mean")
print(crowd_metrics_df[crowd_metrics_df.control_type == ctrl].mean(numeric_only=True))
# print("# std")
# print(crowd_metrics_df[crowd_metrics_df.control_type == ctrl].std(numeric_only=True))
print()
print("# max value in each metrics")
print(crowd_metrics_df.max(numeric_only=True))
print("# min value in each metrics")
print(crowd_metrics_df.min(numeric_only=True))
```
##### 4-in-1 plotting
```
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
fig, axes = plt.subplots(2, 2, figsize=(16, 10))
categorical_plot(
axes=axes[0,0],
df=crowd_metrics_df,
metric='avg_crowd_density2_5',
category='control_type',
title='Mean crowd density within 2.5 m',
xlabel='',
ylabel='Density [1/$m^2$]',
ylim=[0.0, 0.25],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
axes[0,0].set_ylabel("Density [1/$m^2$]", fontsize=16)
axes[0,0].tick_params(axis='x', labelsize=16)
axes[0,0].tick_params(axis='y', labelsize=14)
categorical_plot(
axes=axes[0,1],
df=crowd_metrics_df,
metric='max_crowd_density2_5',
category='control_type',
title='Max crowd density within 2.5 m',
xlabel='',
ylabel='Density [1/$m^2$]',
ylim=[0.3, 0.90],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
axes[0,1].set_ylabel("Density [1/$m^2$]", fontsize=16)
axes[0,1].tick_params(axis='x', labelsize=16)
axes[0,1].tick_params(axis='y', labelsize=14)
categorical_plot(
axes=axes[1,0],
df=crowd_metrics_df,
metric='virtual_collision',
category='control_type',
title='Virtual collision with Qolo',
xlabel='',
ylabel='',
ylim=[-0.1, 20],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
axes[1,0].set_ylabel("Virtual collision", fontsize=16)
axes[1,0].tick_params(axis='x', labelsize=16)
axes[1,0].tick_params(axis='y', labelsize=14)
categorical_plot(
axes=axes[1,1],
df=crowd_metrics_df,
metric='avg_min_dist',
category='control_type',
    title='Min. distance of pedestrians from Qolo',
xlabel='',
ylabel='Distance [m]',
ylim=[0.6, 2.0],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
axes[1,1].set_ylabel("Distance [m]", fontsize=16)
axes[1,1].tick_params(axis='x', labelsize=16)
axes[1,1].tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/comp_crowd_group_by_date.pdf", dpi=300)
plt.show()
plt.close()
```
##### Individual figures
```
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig1, control_axes1 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes1,
df=crowd_metrics_df,
metric='avg_crowd_density2_5',
category='control_type',
title='',
xlabel='',
ylabel='Density [1/$m^2$]',
ylim=[0.0, 0.25],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes1.set_ylabel("Density [1/$m^2$]", fontsize=16)
control_axes1.tick_params(axis='x', labelsize=16)
control_axes1.tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/pub/control_boxplot_mean_density_2_5_group_by_date.pdf", dpi=300)
plt.show()
plt.close()
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig2, control_axes2 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes2,
df=crowd_metrics_df,
metric='max_crowd_density2_5',
category='control_type',
title='',
xlabel='',
ylabel='Density [1/$m^2$]',
ylim=[0.3, 0.90],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes2.set_ylabel("Density [1/$m^2$]", fontsize=16)
control_axes2.tick_params(axis='x', labelsize=16)
control_axes2.tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/pub/control_boxplot_max_density_2_5_group_by_date.pdf", dpi=300)
plt.show()
plt.close()
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig3, control_axes3 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes3,
df=crowd_metrics_df,
metric='virtual_collision',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[-0.1, 20],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes3.set_ylabel("Virtual collision", fontsize=16)
control_axes3.tick_params(axis='x', labelsize=16)
control_axes3.tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/pub/control_boxplot_virtual_collision_group_by_date.pdf", dpi=300)
plt.show()
plt.close()
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig4, control_axes4 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes4,
df=crowd_metrics_df,
metric='avg_min_dist',
category='control_type',
title='',
xlabel='',
ylabel='Distance [m]',
ylim=[0.6, 2.0],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes4.set_ylabel("Distance [m]", fontsize=16)
control_axes4.tick_params(axis='x', labelsize=16)
control_axes4.tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/pub/control_boxplot_mean_min_dist_group_by_date.pdf", dpi=300)
plt.show()
plt.close()
```
#### Path efficiency-related metrics
```
path_metrics_df = eval_res_df[['seq', 'control_type'] + list(path_metrics) + ['date']]
print("# max value of each metric")
print(path_metrics_df.max(numeric_only=True))
print("# min value of each metric")
print(path_metrics_df.min(numeric_only=True))
```
##### 2-in-1 plotting
```
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
path_fig, path_axes = plt.subplots(1, 2, figsize=(16, 5))
categorical_plot(
axes=path_axes[0],
df=path_metrics_df,
metric='rel_duration2goal',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.0, 1.0],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
path_axes[0].set_ylabel("Relative time to the goal", fontsize=16)
path_axes[0].tick_params(axis='x', labelsize=16)
path_axes[0].tick_params(axis='y', labelsize=14)
categorical_plot(
axes=path_axes[1],
df=path_metrics_df,
metric='rel_path_length2goal',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.0, 3.0],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
path_axes[1].set_ylabel("Relative path length to the goal", fontsize=16)
path_axes[1].tick_params(axis='x', labelsize=16)
path_axes[1].tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/comp_path_efficiency_group_by_date.pdf", dpi=300)
plt.show()
plt.close()
```
##### Individual figures
```
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig5, control_axes5 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes5,
df=path_metrics_df,
metric='rel_duration2goal',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.0, 1.0],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes5.set_ylabel("Relative time to the goal", fontsize=16)
control_axes5.tick_params(axis='x', labelsize=16)
control_axes5.tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/pub/control_boxplot_rel_time2goal_group_by_date.pdf", dpi=300)
plt.show()
plt.close()
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig6, control_axes6 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes6,
df=path_metrics_df,
metric='rel_path_length2goal',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[1.0, 2.0],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes6.set_ylabel("Relative path length to the goal", fontsize=16)
control_axes6.tick_params(axis='x', labelsize=16)
control_axes6.tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/pub/control_boxplot_rel_path_length2goal_group_by_date.pdf", dpi=300)
plt.show()
plt.close()
```
#### Control-related metrics
```
control_metrics_df = eval_res_df[['seq', 'control_type'] + list(control_metrics) + ['date']]
print("# max value of each metric")
print(control_metrics_df.max(numeric_only=True))
print("# min value of each metric")
print(control_metrics_df.min(numeric_only=True))
```
##### 4-in-1 plotting
```
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig, control_axes = plt.subplots(2, 2, figsize=(16, 12))
categorical_plot(
axes=control_axes[0,0],
df=control_metrics_df,
metric='avg_fluency',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.90, 1.02],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes[0,0].set_ylabel("Average control fluency", fontsize=16)
control_axes[0,0].tick_params(axis='x', labelsize=16)
control_axes[0,0].tick_params(axis='y', labelsize=14)
categorical_plot(
axes=control_axes[0,1],
df=control_metrics_df,
metric='rel_jerk',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0, 0.35],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes[0,1].set_ylabel("Relative jerk", fontsize=16)
control_axes[0,1].tick_params(axis='x', labelsize=16)
control_axes[0,1].tick_params(axis='y', labelsize=14)
categorical_plot(
axes=control_axes[1,0],
df=control_metrics_df,
metric='contribution',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.0, 1.2],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes[1,0].set_ylabel("Contribution", fontsize=16)
control_axes[1,0].tick_params(axis='x', labelsize=16)
control_axes[1,0].tick_params(axis='y', labelsize=14)
categorical_plot(
axes=control_axes[1,1],
df=control_metrics_df,
metric='avg_agreement',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.5, 1.0],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes[1,1].set_ylabel("Average agreement", fontsize=16)
control_axes[1,1].tick_params(axis='x', labelsize=16)
control_axes[1,1].tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/comp_control_group_by_date.pdf", dpi=300)
plt.show()
plt.close()
```
##### Individual figures
```
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig7, control_axes7 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes7,
df=control_metrics_df,
metric='avg_fluency',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.90, 1.02],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes7.set_ylabel("Average control fluency", fontsize=16)
control_axes7.tick_params(axis='x', labelsize=16)
control_axes7.tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/pub/control_boxplot_avg_fluency_group_by_date.pdf", dpi=300)
plt.show()
plt.close()
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig8, control_axes8 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes8,
df=control_metrics_df,
metric='rel_jerk',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0, 0.35],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes8.set_ylabel("Relative jerk", fontsize=16)
control_axes8.tick_params(axis='x', labelsize=16)
control_axes8.tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/pub/control_boxplot_rel_jerk_group_by_date.pdf", dpi=300)
plt.show()
plt.close()
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig9, control_axes9 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes9,
df=control_metrics_df,
metric='contribution',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.0, 1.2],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes9.set_ylabel("Contribution", fontsize=16)
control_axes9.tick_params(axis='x', labelsize=16)
control_axes9.tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/pub/control_boxplot_contribution_group_by_date.pdf", dpi=300)
plt.show()
plt.close()
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig10, control_axes10 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes10,
df=control_metrics_df,
metric='avg_agreement',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.5, 1.0],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes10.set_ylabel("Average agreement", fontsize=16)
control_axes10.tick_params(axis='x', labelsize=16)
control_axes10.tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/pub/control_boxplot_avg_agreement_group_by_date.pdf", dpi=300)
plt.show()
plt.close()
crowd_metrics_df0424 = crowd_metrics_df[crowd_metrics_df.date=='0424'].sort_values('control_type', ascending=False)
print("Sequence on 0424")
print(crowd_metrics_df0424['control_type'].value_counts())
# ignore_index=True already resets the index, so no extra reindex() is needed
crowd_metrics_df0410 = crowd_metrics_df[crowd_metrics_df.date=='0410'].sort_values(by=['control_type'], ascending=False, ignore_index=True)
print("Sequence on 0410")
print(crowd_metrics_df0410['control_type'].value_counts())
```
### Visualize without grouping by date
#### Palette settings
```
control_methods=['mds', 'rds', 'shared_control']
value_unique, color_unique = values2color_list(
eval_res_df['control_type'].values, cmap_name='hot', range=(0.55, 0.75)
)
value_unique, point_color_unique = values2color_list(
eval_res_df['control_type'].values, cmap_name='hot', range=(0.35, 0.5)
)
# creating a dictionary with one specific color per group:
box_pal = {value_unique[i]: color_unique[i] for i in range(len(value_unique))}
# original: (0.3, 0.6)
# scatter_pal = {value_unique[i]: point_color_unique[i] for i in range(len(value_unique))}
# black
# scatter_pal = {value_unique[i]: (0.0, 0.0, 0.0, 1.0) for i in range(len(value_unique))}
# gray
scatter_pal = {value_unique[i]: (0.3, 0.3, 0.3, 0.8) for i in range(len(value_unique))}
box_pal, scatter_pal
```
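`values2color_list` is imported from elsewhere in this project. A minimal reimplementation consistent with how it is called here — unique values mapped to RGBA colors sampled from a sub-interval of a named colormap — could look like this (the real helper's behavior may differ in detail):

```python
import matplotlib
import numpy as np

def values2color_list(values, cmap_name='hot', range=(0.0, 1.0)):
    """Map each unique value to an RGBA color from `cmap_name`,
    sampling evenly within the (lo, hi) sub-interval given by `range`."""
    # Unique values in order of first appearance
    value_unique = list(dict.fromkeys(values))
    cmap = matplotlib.colormaps[cmap_name]
    lo, hi = range
    positions = np.linspace(lo, hi, num=len(value_unique))
    color_unique = [cmap(p) for p in positions]
    return value_unique, color_unique
```

With `range=(0.55, 0.75)` versus `(0.35, 0.5)` this yields lighter box fills and darker scatter points from the same `'hot'` colormap, which is why the cell above calls it twice.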
#### Crowd-related metrics
##### 4-in-1 plotting
```
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
fig, axes = plt.subplots(2, 2, figsize=(16, 10))
categorical_plot(
axes=axes[0,0],
df=crowd_metrics_df,
metric='avg_crowd_density2_5',
category='control_type',
title='Mean crowd density within 2.5 m',
xlabel='',
ylabel='Density [1/$m^2$]',
ylim=[0.05, 0.20],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
#group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
axes[0,0].set_ylabel("Density [1/$m^2$]", fontsize=16)
axes[0,0].tick_params(axis='x', labelsize=16)
axes[0,0].tick_params(axis='y', labelsize=14)
categorical_plot(
axes=axes[0,1],
df=crowd_metrics_df,
metric='max_crowd_density2_5',
category='control_type',
title='Max crowd density within 2.5 m',
xlabel='',
ylabel='Density [1/$m^2$]',
ylim=[0.3, 0.90],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
#group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
axes[0,1].set_ylabel("Density [1/$m^2$]", fontsize=16)
axes[0,1].tick_params(axis='x', labelsize=16)
axes[0,1].tick_params(axis='y', labelsize=14)
categorical_plot(
axes=axes[1,0],
df=crowd_metrics_df,
metric='virtual_collision',
category='control_type',
title='Virtual collision with Qolo',
xlabel='',
ylabel='',
ylim=[-0.1, 20],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
#group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
axes[1,0].set_ylabel("Virtual collision", fontsize=16)
axes[1,0].tick_params(axis='x', labelsize=16)
axes[1,0].tick_params(axis='y', labelsize=14)
categorical_plot(
axes=axes[1,1],
df=crowd_metrics_df,
metric='avg_min_dist',
category='control_type',
    title='Min. distance of pedestrians from Qolo',
xlabel='',
ylabel='Distance [m]',
ylim=[0.6, 1.6],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
#group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
axes[1,1].set_ylabel("Distance [m]", fontsize=16)
axes[1,1].tick_params(axis='x', labelsize=16)
axes[1,1].tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/comp_crowd.pdf", dpi=300)
plt.show()
plt.close()
```
##### Individual figures
```
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig1, control_axes1 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes1,
df=crowd_metrics_df,
metric='avg_crowd_density2_5',
category='control_type',
title='',
xlabel='',
ylabel='Density [1/$m^2$]',
ylim=[0.05, 0.20],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes1.set_ylabel("Density [1/$m^2$]", fontsize=16)
control_axes1.tick_params(axis='x', labelsize=16)
control_axes1.tick_params(axis='y', labelsize=14)
control_axes1.set_xticks([0,1,2])
control_axes1.set_xticklabels(['MDS','RDS','SC'], fontsize=16)
plt.savefig("./pdf/pub/control_boxplot_mean_density_2_5.pdf", dpi=300)
plt.show()
plt.close()
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig2, control_axes2 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes2,
df=crowd_metrics_df,
metric='max_crowd_density2_5',
category='control_type',
title='',
xlabel='',
ylabel='Density [1/$m^2$]',
ylim=[0.2, 0.90],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes2.set_ylabel("Density [1/$m^2$]", fontsize=16)
control_axes2.tick_params(axis='x', labelsize=16)
control_axes2.tick_params(axis='y', labelsize=14)
control_axes2.set_xticks([0,1,2])
control_axes2.set_xticklabels(['MDS','RDS','SC'], fontsize=16)
plt.savefig("./pdf/pub/control_boxplot_max_density_2_5.pdf", dpi=300)
plt.show()
plt.close()
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig3, control_axes3 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes3,
df=crowd_metrics_df,
metric='virtual_collision',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[-0.1, 15],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes3.set_ylabel("Virtual collision", fontsize=16)
control_axes3.tick_params(axis='x', labelsize=16)
control_axes3.tick_params(axis='y', labelsize=14)
control_axes3.set_xticks([0,1,2])
control_axes3.set_xticklabels(['MDS','RDS','SC'], fontsize=16)
plt.savefig("./pdf/pub/control_boxplot_virtual_collision.pdf", dpi=300)
plt.show()
plt.close()
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig4, control_axes4 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes4,
df=crowd_metrics_df,
metric='avg_min_dist',
category='control_type',
title='',
xlabel='',
ylabel='Distance [m]',
ylim=[0.6, 1.6],
kind='box',
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes4.set_ylabel("Distance [m]", fontsize=16)
control_axes4.tick_params(axis='x', labelsize=16)
control_axes4.tick_params(axis='y', labelsize=14)
control_axes4.set_xticks([0,1,2])
control_axes4.set_xticklabels(['MDS','RDS','SC'], fontsize=16)
plt.savefig("./pdf/pub/control_boxplot_mean_min_dist.pdf", dpi=300)
plt.show()
plt.close()
```
#### Path efficiency-related metrics
##### 2-in-1 plotting
```
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
path_fig, path_axes = plt.subplots(1, 2, figsize=(16, 5))
categorical_plot(
axes=path_axes[0],
df=path_metrics_df,
metric='rel_duration2goal',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.0, 1.0],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
path_axes[0].set_ylabel("Relative time to the goal", fontsize=16)
path_axes[0].tick_params(axis='x', labelsize=16)
path_axes[0].tick_params(axis='y', labelsize=14)
categorical_plot(
axes=path_axes[1],
df=path_metrics_df,
metric='rel_path_length2goal',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[1.0, 2.0],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
path_axes[1].set_ylabel("Relative path length to the goal", fontsize=16)
path_axes[1].tick_params(axis='x', labelsize=16)
path_axes[1].tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/comp_path_efficiency.pdf", dpi=300)
plt.show()
plt.close()
```
##### Individual figures
```
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig5, control_axes5 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes5,
df=path_metrics_df,
metric='rel_duration2goal',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.0, 1.0],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes5.set_ylabel("Relative time to the goal", fontsize=16)
control_axes5.tick_params(axis='x', labelsize=16)
control_axes5.tick_params(axis='y', labelsize=14)
control_axes5.set_xticks([0,1,2])
control_axes5.set_xticklabels(['MDS','RDS','SC'], fontsize=16)
plt.savefig("./pdf/pub/control_boxplot_rel_time2goal.pdf", dpi=300)
plt.show()
plt.close()
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig6, control_axes6 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes6,
df=path_metrics_df,
metric='rel_path_length2goal',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[1.0, 2.0],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes6.set_ylabel("Relative path length to the goal", fontsize=16)
control_axes6.tick_params(axis='x', labelsize=16)
control_axes6.tick_params(axis='y', labelsize=14)
control_axes6.set_xticks([0,1,2])
control_axes6.set_xticklabels(['MDS','RDS','SC'], fontsize=16)
plt.savefig("./pdf/pub/control_boxplot_rel_path_length2goal.pdf", dpi=300)
plt.show()
plt.close()
```
#### Control-related metrics
##### 4-in-1 plotting
```
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig, control_axes = plt.subplots(2, 2, figsize=(16, 12))
categorical_plot(
axes=control_axes[0,0],
df=control_metrics_df,
metric='avg_fluency',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.90, 1.0],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes[0,0].set_ylabel("Average control fluency", fontsize=16)
control_axes[0,0].tick_params(axis='x', labelsize=16)
control_axes[0,0].tick_params(axis='y', labelsize=14)
categorical_plot(
axes=control_axes[0,1],
df=control_metrics_df,
metric='rel_jerk',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0, 0.3],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes[0,1].set_ylabel("Relative jerk", fontsize=16)
control_axes[0,1].tick_params(axis='x', labelsize=16)
control_axes[0,1].tick_params(axis='y', labelsize=14)
categorical_plot(
axes=control_axes[1,0],
df=control_metrics_df,
metric='contribution',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.0, 1.2],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes[1,0].set_ylabel("Contribution", fontsize=16)
control_axes[1,0].tick_params(axis='x', labelsize=16)
control_axes[1,0].tick_params(axis='y', labelsize=14)
categorical_plot(
axes=control_axes[1,1],
df=control_metrics_df,
metric='avg_agreement',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.5, 1.0],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes[1,1].set_ylabel("Average agreement", fontsize=16)
control_axes[1,1].tick_params(axis='x', labelsize=16)
control_axes[1,1].tick_params(axis='y', labelsize=14)
plt.savefig("./pdf/comp_control.pdf", dpi=300)
plt.show()
plt.close()
```
##### Individual figures
```
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig7, control_axes7 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes7,
df=control_metrics_df,
metric='avg_fluency',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.90, 1.06],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes7.set_ylabel("Average control fluency", fontsize=16)
control_axes7.tick_params(axis='x', labelsize=16)
control_axes7.tick_params(axis='y', labelsize=14)
control_axes7.set_xticks([0,1,2])
control_axes7.set_xticklabels(['MDS','RDS','SC'], fontsize=16)
# significance
bars = [0, 1, 2]
heights = [0.99, 1.0, 1.03]
barplot_annotate_brackets(0, 1, 3.539208e-04, bars, heights, line_y=1.00)
barplot_annotate_brackets(0, 2, 4.194127e-03, bars, heights, line_y=1.03)
barplot_annotate_brackets(1, 2, 7.744226e-10, bars, heights, line_y=1.015)
plt.savefig("./pdf/pub/control_boxplot_avg_fluency.pdf", dpi=300)
plt.show()
plt.close()
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig8, control_axes8 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes8,
df=control_metrics_df,
metric='rel_jerk',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0, 0.30],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes8.set_ylabel("Relative jerk", fontsize=16)
control_axes8.tick_params(axis='x', labelsize=16)
control_axes8.tick_params(axis='y', labelsize=14)
control_axes8.set_xticks([0,1,2])
control_axes8.set_xticklabels(['MDS','RDS','SC'], fontsize=16)
# significance
bars = [0, 1, 2]
heights = [0.99, 1.0, 1.03]
barplot_annotate_brackets(0, 1, 1.022116e-02, bars, heights, line_y=0.265)
barplot_annotate_brackets(0, 2, 2.421626e-01, bars, heights, line_y=0.30)
barplot_annotate_brackets(1, 2, 2.126847e-07, bars, heights, line_y=0.19)
plt.savefig("./pdf/pub/control_boxplot_rel_jerk.pdf", dpi=300)
plt.show()
plt.close()
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig9, control_axes9 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes9,
df=control_metrics_df,
metric='contribution',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.0, 1.4],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes9.set_ylabel("Contribution", fontsize=16)
control_axes9.tick_params(axis='x', labelsize=16)
control_axes9.tick_params(axis='y', labelsize=14)
control_axes9.set_xticks([0,1,2])
control_axes9.set_xticklabels(['MDS','RDS','SC'], fontsize=16)
# significance
bars = [0, 1, 2]
heights = [0.99, 1.0, 1.03]
barplot_annotate_brackets(0, 1, 1.701803e-10, bars, heights, line_y=1.15)
barplot_annotate_brackets(0, 2, 1.271729e-01, bars, heights, line_y=1.2)
barplot_annotate_brackets(1, 2, 3.495410e-09, bars, heights, line_y=1.25)
plt.savefig("./pdf/pub/control_boxplot_contribution.pdf", dpi=300)
plt.show()
plt.close()
mpl.rcParams['font.family'] = ['serif']
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rcParams['mathtext.fontset'] = 'cm'
control_fig10, control_axes10 = plt.subplots(figsize=(6, 5))
categorical_plot(
axes=control_axes10,
df=control_metrics_df,
metric='avg_agreement',
category='control_type',
title='',
xlabel='',
ylabel='',
ylim=[0.5, 1.1],
lgd_labels=['April 10, 2021', 'April 24, 2021'],
lgd_font="Times New Roman",
kind='box',
# group='date',
loc='upper left',
box_palette=box_pal,
scatter_palette=scatter_pal,
)
control_axes10.set_ylabel("Average agreement", fontsize=16)
control_axes10.tick_params(axis='x', labelsize=16)
control_axes10.tick_params(axis='y', labelsize=14)
control_axes10.set_xticks([0,1,2])
control_axes10.set_xticklabels(['MDS','RDS','SC'], fontsize=16)
# significance
bars = [0, 1, 2]
heights = [0.99, 1.0, 1.03]
barplot_annotate_brackets(0, 1, 5.248126e-02, bars, heights, line_y=0.82)
barplot_annotate_brackets(0, 2, 4.394447e-12, bars, heights, line_y=1.0)
barplot_annotate_brackets(1, 2, 3.542947e-15, bars, heights, line_y=0.94)
plt.savefig("./pdf/pub/control_boxplot_avg_agreement.pdf", dpi=300)
plt.show()
plt.close()
```
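`barplot_annotate_brackets` is likewise defined outside this notebook. A minimal sketch consistent with the calls above — the two group indices, the p-value, the x positions and bar heights, with `line_y` optionally fixing the bracket height — might be (argument names are assumptions):

```python
import matplotlib.pyplot as plt

def barplot_annotate_brackets(num1, num2, p, centers, heights,
                              line_y=None, barh=0.01, fs=14):
    """Draw a significance bracket between two box/bar positions,
    labeling it with star notation derived from the p-value."""
    if p < 0.001:
        text = '***'
    elif p < 0.01:
        text = '**'
    elif p < 0.05:
        text = '*'
    else:
        text = 'n.s.'
    lx, rx = centers[num1], centers[num2]
    # Explicit line_y if given, else just above the taller of the two bars
    y = line_y if line_y is not None else max(heights[num1], heights[num2]) + barh
    ax = plt.gca()
    ax.plot([lx, lx, rx, rx], [y, y + barh, y + barh, y], c='black', lw=1.2)
    ax.text((lx + rx) / 2, y + barh, text, ha='center', va='bottom', fontsize=fs)
```

Under this convention the annotated pairs above (e.g. p = 7.7e-10 between RDS and SC for fluency) render as `***`, while p = 2.4e-01 renders as `n.s.`.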
# VacationPy
----
#### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
# Import API key
from api_keys import g_key
```
### Store Part I results into DataFrame
* Load the csv exported in Part I to a DataFrame
```
cities = pd.read_csv("weather.csv", encoding="utf-8")
cities = cities.drop(['Unnamed: 0'], axis=1)
cities.head()
```
### Humidity Heatmap
* Configure gmaps.
* Use the Lat and Lng as locations and Humidity as the weight.
* Add Heatmap layer to map.
```
humidity = cities["Humidity"].astype(float)
locations = cities[["Lat", "Lng"]]
gmaps.configure(api_key=g_key)
fig = gmaps.figure()
heat_layer = gmaps.heatmap_layer(
    locations, weights=humidity, dissipating=False,
    max_intensity=humidity.max(), point_radius=5,
)
fig.add_layer(heat_layer)
fig
```
### Create new DataFrame fitting weather criteria
* Narrow down the cities to fit weather conditions.
* Drop any rows with null values.
```
narrowed_city_df = cities.loc[
    (cities["Max Tempture"] > 70)
    & (cities["Max Tempture"] < 80)
    & (cities["Cloud density"] == 0), :]
narrowed_city_df = narrowed_city_df.dropna(how='any')
narrowed_city_df.reset_index(inplace=True)
del narrowed_city_df['index']
narrowed_city_df.head()
```
### Hotel Map
* Store the filtered results into a variable named `hotel_df`.
* Add a "Hotel Name" column to the DataFrame.
* Set parameters to search for hotels within 5000 meters.
* Hit the Google Places API for each city's coordinates.
* Store the first hotel result into the DataFrame.
* Plot markers on top of the heatmap.
```
hotels = []
for index, row in narrowed_city_df.iterrows():
    lat = row['Lat']
    lng = row['Lng']
    params = {
        "location": f"{lat},{lng}",
        "radius": 5000,
        "type": "lodging",  # "hotel" is not a supported Places type; "lodging" is
        "key": g_key,
    }
    base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
    requested = requests.get(base_url, params=params)
    jsn = requested.json()
    try:
        hotels.append(jsn['results'][0]['name'])
    except (KeyError, IndexError):
        # no hotel found near this city
        hotels.append("")
narrowed_city_df["Hotel Name"] = hotels
narrowed_city_df = narrowed_city_df.dropna(how='any')
narrowed_city_df.head()
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City Name}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in narrowed_city_df.iterrows()]
locations = narrowed_city_df[["Lat", "Lng"]]
# Add marker layer on top of the heat map, attaching the hotel info boxes
markers = gmaps.marker_layer(locations, info_box_content=hotel_info)
fig.add_layer(markers)
fig
# Display Map
```
# Introduction to Modeling Libraries
```
import numpy as np
import pandas as pd
np.random.seed(12345)
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
PREVIOUS_MAX_ROWS = pd.options.display.max_rows
pd.options.display.max_rows = 20
np.set_printoptions(precision=4, suppress=True)
```
## Interfacing Between pandas and Model Code
```
import pandas as pd
import numpy as np
data = pd.DataFrame({
'x0': [1, 2, 3, 4, 5],
'x1': [0.01, -0.01, 0.25, -4.1, 0.],
'y': [-1.5, 0., 3.6, 1.3, -2.]})
data
data.columns
data.values
df2 = pd.DataFrame(data.values, columns=['one', 'two', 'three'])
df2
model_cols = ['x0', 'x1']
data.loc[:, model_cols].values
data['category'] = pd.Categorical(['a', 'b', 'a', 'a', 'b'],
categories=['a', 'b'])
data
dummies = pd.get_dummies(data.category, prefix='category')
data_with_dummies = data.drop('category', axis=1).join(dummies)
data_with_dummies
```
## Creating Model Descriptions with Patsy
y ~ x0 + x1
```
data = pd.DataFrame({
'x0': [1, 2, 3, 4, 5],
'x1': [0.01, -0.01, 0.25, -4.1, 0.],
'y': [-1.5, 0., 3.6, 1.3, -2.]})
data
import patsy
y, X = patsy.dmatrices('y ~ x0 + x1', data)
y
X
np.asarray(y)
np.asarray(X)
patsy.dmatrices('y ~ x0 + x1 + 0', data)[1]
coef, resid, _, _ = np.linalg.lstsq(X, y, rcond=None)
coef
coef = pd.Series(coef.squeeze(), index=X.design_info.column_names)
coef
```
### Data Transformations in Patsy Formulas
```
y, X = patsy.dmatrices('y ~ x0 + np.log(np.abs(x1) + 1)', data)
X
y, X = patsy.dmatrices('y ~ standardize(x0) + center(x1)', data)
X
new_data = pd.DataFrame({
'x0': [6, 7, 8, 9],
'x1': [3.1, -0.5, 0, 2.3],
'y': [1, 2, 3, 4]})
new_X = patsy.build_design_matrices([X.design_info], new_data)
new_X
y, X = patsy.dmatrices('y ~ I(x0 + x1)', data)
X
```
### Categorical Data and Patsy
```
data = pd.DataFrame({
'key1': ['a', 'a', 'b', 'b', 'a', 'b', 'a', 'b'],
'key2': [0, 1, 0, 1, 0, 1, 0, 0],
'v1': [1, 2, 3, 4, 5, 6, 7, 8],
'v2': [-1, 0, 2.5, -0.5, 4.0, -1.2, 0.2, -1.7]
})
y, X = patsy.dmatrices('v2 ~ key1', data)
X
y, X = patsy.dmatrices('v2 ~ key1 + 0', data)
X
y, X = patsy.dmatrices('v2 ~ C(key2)', data)
X
data['key2'] = data['key2'].map({0: 'zero', 1: 'one'})
data
y, X = patsy.dmatrices('v2 ~ key1 + key2', data)
X
y, X = patsy.dmatrices('v2 ~ key1 + key2 + key1:key2', data)
X
```
## Introduction to statsmodels
### Estimating Linear Models
```
import statsmodels.api as sm
import statsmodels.formula.api as smf
def dnorm(mean, variance, size=1):
    if isinstance(size, int):
        size = size,
    return mean + np.sqrt(variance) * np.random.randn(*size)
# For reproducibility
np.random.seed(12345)
N = 100
X = np.c_[dnorm(0, 0.4, size=N),
dnorm(0, 0.6, size=N),
dnorm(0, 0.2, size=N)]
eps = dnorm(0, 0.1, size=N)
beta = [0.1, 0.3, 0.5]
y = np.dot(X, beta) + eps
X[:5]
y[:5]
X_model = sm.add_constant(X)
X_model[:5]
model = sm.OLS(y, X)
results = model.fit()
results.params
print(results.summary())
data = pd.DataFrame(X, columns=['col0', 'col1', 'col2'])
data['y'] = y
data[:5]
results = smf.ols('y ~ col0 + col1 + col2', data=data).fit()
results.params
results.tvalues
results.predict(data[:5])
```
### Estimating Time Series Processes
```
init_x = 4
import random
values = [init_x, init_x]
N = 1000
b0 = 0.8
b1 = -0.4
noise = dnorm(0, 0.1, N)
for i in range(N):
    new_x = values[-1] * b0 + values[-2] * b1 + noise[i]
    values.append(new_x)
MAXLAGS = 5
model = sm.tsa.AR(values)
results = model.fit(MAXLAGS)
results.params
```
## Introduction to scikit-learn
```
train = pd.read_csv('datasets/titanic/train.csv')
test = pd.read_csv('datasets/titanic/test.csv')
train[:4]
train.isnull().sum()
test.isnull().sum()
impute_value = train['Age'].median()
train['Age'] = train['Age'].fillna(impute_value)
test['Age'] = test['Age'].fillna(impute_value)
train['IsFemale'] = (train['Sex'] == 'female').astype(int)
test['IsFemale'] = (test['Sex'] == 'female').astype(int)
predictors = ['Pclass', 'IsFemale', 'Age']
X_train = train[predictors].values
X_test = test[predictors].values
y_train = train['Survived'].values
X_train[:5]
y_train[:5]
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train, y_train)
y_predict = model.predict(X_test)
y_predict[:10]
```
With an array `y_true` of ground-truth labels in hand, accuracy could be computed as `(y_true == y_predict).mean()`.
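Kaggle's Titanic `test.csv` ships without ground-truth labels, so `y_true` is hypothetical here; a self-contained sketch of the accuracy computation with made-up labels:

```python
import numpy as np

# Hypothetical ground-truth labels for the test set (the real notebook
# would need the competition answer key, which test.csv does not include).
y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0])
y_predict = np.array([0, 1, 0, 0, 1, 0, 1, 1, 1, 0])

# Accuracy is simply the fraction of matching predictions.
accuracy = (y_true == y_predict).mean()
```

Here 8 of the 10 made-up predictions match, so `accuracy` is 0.8.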
```
from sklearn.linear_model import LogisticRegressionCV
model_cv = LogisticRegressionCV(10)
model_cv.fit(X_train, y_train)
from sklearn.model_selection import cross_val_score
model = LogisticRegression(C=10)
scores = cross_val_score(model, X_train, y_train, cv=4)
scores
```
## Continuing Your Education
```
pd.options.display.max_rows = PREVIOUS_MAX_ROWS
```
# FloPy
### Demo of netCDF and shapefile export capabilities within the flopy export module.
```
import os
import sys
import datetime
# run installed version of flopy or add local path
try:
    import flopy
except ImportError:
    fpth = os.path.abspath(os.path.join('..', '..'))
    sys.path.append(fpth)
    import flopy
print(sys.version)
print('flopy version: {}'.format(flopy.__version__))
```
Load our old friend...the Freyberg model
```
nam_file = "freyberg.nam"
model_ws = os.path.join("..", "data", "freyberg_multilayer_transient")
ml = flopy.modflow.Modflow.load(nam_file, model_ws=model_ws, check=False)
```
We can see the ``Modelgrid`` instance has generic entries, as does ``start_datetime``
```
ml.modelgrid
ml.modeltime.start_datetime
```
Setting the attributes of the ``ml.modelgrid`` is easy:
```
proj4_str = "+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs"
ml.modelgrid.set_coord_info(xoff=123456.7, yoff=765432.1, angrot=15.0, proj4=proj4_str)
ml.dis.start_datetime = '7/4/1776'
ml.modeltime.start_datetime
```
### Some netCDF export capabilities:
#### Export the whole model (inputs and outputs)
```
# make directory
pth = os.path.join('data', 'netCDF_export')
if not os.path.exists(pth):
    os.makedirs(pth)
fnc = ml.export(os.path.join(pth, ml.name+'.in.nc'))
hds = flopy.utils.HeadFile(os.path.join(model_ws,"freyberg.hds"))
flopy.export.utils.output_helper(os.path.join(pth, ml.name+'.out.nc'), ml, {"hds":hds})
```
#### export a single array to netcdf or shapefile
```
# export a 2d array
ml.dis.top.export(os.path.join(pth, 'top.nc'))
ml.dis.top.export(os.path.join(pth, 'top.shp'))
```
#### sparse export of stress period data for a boundary condition package
* excludes cells that aren't in the package (aren't in `package.stress_period_data`)
* by default, stress periods with duplicate parameter values (e.g., stage, conductance, etc.) are omitted
(`squeeze=True`); only stress periods with different values are exported
* pass `squeeze=False` to export all stress periods
```
ml.drn.stress_period_data.export(os.path.join(pth, 'drn.shp'), sparse=True)
```
#### Export a 3d array
```
#export a 3d array
ml.upw.hk.export(os.path.join(pth, 'hk.nc'))
ml.upw.hk.export(os.path.join(pth, 'hk.shp'))
```
#### Export a number of things to the same netCDF file
```
# export lots of things to the same nc file
fnc = ml.dis.botm.export(os.path.join(pth, 'test.nc'))
ml.upw.hk.export(fnc)
ml.dis.top.export(fnc)
# export transient 2d
ml.rch.rech.export(fnc)
```
### Export whole packages to a netCDF file
```
# export mflist
fnc = ml.wel.export(os.path.join(pth, 'packages.nc'))
ml.upw.export(fnc)
fnc.nc
```
### Export the whole model to a netCDF
```
fnc = ml.export(os.path.join(pth, 'model.nc'))
fnc.nc
```
## Export output to netcdf
FloPy has utilities to export model outputs to a netcdf file. Valid output types for export are MODFLOW binary head files, formatted head files, cell budget files, seawat concentration files, and zonebudget output.
Let's use output from the Freyberg model as an example of these functions
```
# load binary head and cell budget files
fhead = os.path.join(model_ws, 'freyberg.hds')
fcbc = os.path.join(model_ws, 'freyberg.cbc')
hds = flopy.utils.HeadFile(fhead)
cbc = flopy.utils.CellBudgetFile(fcbc)
export_dict = {"hds": hds,
"cbc": cbc}
# export head and cell budget outputs to netcdf
fnc = flopy.export.utils.output_helper(os.path.join(pth, "output.nc"), ml, export_dict)
fnc.nc
```
### Exporting zonebudget output
Zonebudget output can be exported with other MODFLOW outputs, and is placed in a separate group, which allows the user to post-process the zonebudget output before exporting.
Here are two examples on how to export zonebudget output with a binary head and cell budget file
__Example 1__: No postprocessing of the zonebudget output
```
# load the zonebudget output file
zonbud_ws = os.path.join("..", "data", "zonbud_examples")
fzonbud = os.path.join(zonbud_ws, "freyberg_mlt.2.csv")
zon_arrays = flopy.utils.zonbud.read_zbarray(os.path.join(zonbud_ws, "zonef_mlt.zbr"))
zbout = flopy.utils.ZoneBudgetOutput(fzonbud, ml.dis, zon_arrays)
zbout
export_dict = {'hds': hds,
'cbc': cbc}
fnc = flopy.export.utils.output_helper(os.path.join(pth, "output_with_zonebudget.nc"),
ml, export_dict)
fnc = zbout.export(fnc, ml)
fnc.nc
```
A `budget_zones` variable has been added to the root group, and a new zonebudget group, which hosts all of the budget data, has been added to the netCDF file.
__Example 2__: postprocessing zonebudget output then exporting
```
# load the zonebudget output and get the budget information
zbout = flopy.utils.ZoneBudgetOutput(fzonbud, ml.dis, zon_arrays)
df = zbout.dataframe
df
```
Let's calculate a yearly volumetric budget from the zonebudget data
```
# get a dataframe of volumetric budget information
vol_df = zbout.volumetric_flux()
# add a year field to the dataframe using datetime
start_date = ml.modeltime.start_datetime
start_date = datetime.datetime.strptime(start_date, "%m/%d/%Y")
nzones = len(zbout.zones) - 1
year = [start_date.year] * nzones
for totim in vol_df.totim.values[:-nzones]:
    t = start_date + datetime.timedelta(days=totim)
    year.append(t.year)
vol_df['year'] = year
print(vol_df)
# calculate yearly volumetric change using pandas
totim_df = vol_df.groupby(['year', 'zone'], as_index=False)['totim'].max()
yearly = vol_df.groupby(['year', 'zone'], as_index=False)[['STORAGE', 'CONSTANT_HEAD', 'OTHER_ZONES',
'ZONE_1', 'ZONE_2', 'ZONE_3']].sum()
yearly['totim'] = totim_df['totim']
yearly
```
And finally, export the pandas dataframe to netcdf
```
# process the new dataframe into a format that is compatible with netcdf exporting
zbncf = zbout.dataframe_to_netcdf_fmt(yearly, flux=False)
# export to netcdf
export_dict = {"hds": hds,
"cbc": cbc,
"zbud": zbncf}
fnc = flopy.export.utils.output_helper(os.path.join(pth, "output_with_zonebudget.2.nc"),
ml, export_dict)
fnc.nc
```
```
import torch
# Check if pytorch is using GPU:
print('Used device name: {}'.format(torch.cuda.get_device_name(0)))
```
Mount your Google Drive if necessary.
```
from google.colab import drive
drive.mount('/content/drive')
import sys
import os
ROOT_DIR = 'your_dir'
sys.path.insert(0, ROOT_DIR)
import pickle
import numpy as np
import pandas as pd
import torch
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from mpl_toolkits.mplot3d import Axes3D
from sklearn.manifold import TSNE
from tqdm import tqdm
%matplotlib inline
```
After preprocessing the data and training the model, load all the needed files.
```
resources_dir = os.path.join(ROOT_DIR, 'resources', '')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
vocabulary = pickle.load(open(os.path.join(os.path.join(resources_dir, 'vocabulary'), 'vocabulary.pickle'), 'rb'))
word2vec_path = 'your_path/idx2vec.pickle'
word2idx = pickle.load(open(os.path.join(os.path.join(resources_dir, 'word2idx'), 'word2idx.pickle'), 'rb'))
idx2word = pickle.load(open(os.path.join(os.path.join(resources_dir, 'idx2word'), 'idx2word.pickle'), 'rb'))
word_count = pickle.load(open(os.path.join(os.path.join(resources_dir, 'word_counts'), 'word_counts.pickle'), 'rb'))
embeddings_weigths = pickle.load(open(word2vec_path, 'rb'))
embeddings_weigths = torch.tensor(embeddings_weigths).to(device)
embeddings_weigths[1]
```
Define the cosine similarity between two vectors.
```
def cosine_sim(x_vector, y_vector):
    dot_prod = torch.dot(x_vector.T, y_vector)
    vector_norms = torch.sqrt(torch.sum(x_vector**2)) * torch.sqrt(torch.sum(y_vector**2))
    similarity = dot_prod / vector_norms
    return similarity
```
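As a quick sanity check of that formula, here is an equivalent NumPy version (a sketch separate from the torch implementation above):

```python
import numpy as np

def cosine_sim_np(x, y):
    # Same formula as the torch version: dot product over the product of norms.
    return np.dot(x, y) / (np.sqrt(np.sum(x**2)) * np.sqrt(np.sum(y**2)))

# Orthogonal vectors have similarity 0; vectors pointing the same way give 1.
orthogonal = cosine_sim_np(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
identical = cosine_sim_np(np.array([2.0, 2.0]), np.array([1.0, 1.0]))
```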
Plot results from t-SNE for a group of selected words.
```
test_words = ['frodo', 'gandalf', 'gimli', 'saruman', 'sauron', 'aragorn', 'ring', 'bilbo',
'shire', 'gondor', 'sam', 'pippin', 'baggins', 'legolas',
'gollum', 'elrond', 'isengard', 'king', 'merry', 'elf']
test_idx = [word2idx[word] for word in test_words]
test_embds = embeddings_weigths[test_idx]
tsne = TSNE(perplexity=5, n_components=2, init='pca', n_iter=10000, random_state=12,
verbose=1)
test_embds_2d = tsne.fit_transform(test_embds.cpu().numpy())
plt.figure(figsize = (9, 9), dpi=120)
for idx, word in enumerate(test_words):
    plt.scatter(test_embds_2d[idx][0], test_embds_2d[idx][1])
    plt.annotate(word, xy=(test_embds_2d[idx][0], test_embds_2d[idx][1]),
                 ha='right', va='bottom')
plt.show()
```
Compute cosine similarities for a group of selected words.
```
words = ['frodo', 'gandalf', 'gimli', 'saruman', 'sauron', 'aragorn', 'ring', 'bilbo',
'shire', 'gondor', 'sam', 'pippin', 'baggins', 'legolas',
'gollum', 'elrond', 'isengard', 'king', 'merry', 'elf']
words_idx = [word2idx[word] for word in words]
embeddings_words = [embeddings_weigths[idx] for idx in words_idx]
top_num = 5
t = tqdm(embeddings_words)
t.set_description('Checking words for similarities')
similarities = {}
for idx_1, word_1 in enumerate(t):
    key_word = words[idx_1]
    similarities[key_word] = []
    for idx_2, word_2 in enumerate(embeddings_weigths):
        # the first two elements in the vocab are the padding and unknown words
        if idx_2 > 1:
            similarity = float(cosine_sim(word_1, word_2))
            if word2idx[key_word] != idx_2:
                similarities[key_word].append([idx2word[idx_2], similarity])
    similarities[key_word].sort(key=lambda x: x[1])
    similarities[key_word] = similarities[key_word][:-top_num-1:-1]

for key in similarities:
    for item in similarities[key]:
        item[1] = round(item[1], 4)
```
Format the results and convert them into a pandas dataframe.
```
formated_sim = {}
for key in similarities:
    temp_list = []
    for items in similarities[key]:
        string = '"{}": {}'.format(items[0], items[1])
        temp_list.append(string)
    formated_sim[key] = temp_list
df = pd.DataFrame(data=formated_sim)
df
```
# NBA Statistics
---
Timothy Helton
This notebook generates figures describing National Basketball Association (NBA) players' likelihood of being a member of the Hall of Fame.
To see the full project please click
[**here**](https://timothyhelton.github.io/nba_stats.html).
---
NOTE: This notebook uses code found in the
[**nba_stats**](https://github.com/TimothyHelton/nba_stats)
package.
To execute all the cells, do one of the following:
- Install the nba_stats package to the active Python interpreter.
- Add nba_stats/nba_stats to the PYTHONPATH system variable.
---
## Imports
```
import logging
import sys
import os
import warnings
import bokeh.io as bkio
import pandas as pd
from nba_stats import players
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
bkio.output_notebook()
%matplotlib inline
warnings.filterwarnings('ignore')
```
---
## Python Version
```
print(f'Python Version: {sys.version}')
```
---
## Set Logging Level
To display the logging statements, use logging.INFO.
```
players.logger.setLevel(logging.CRITICAL)
```
---
## Load Data
```
nba = players.Statistics()
```
---
## Generate Hall of Fame Figures
##### NBA Player Hall of Fame Percentage
```
nba.hof_percent_plot()
```
##### Basketball Hall of Fame Categories
```
nba.hof_category_plot()
```
##### Basketball Player Hall of Fame Subcategories
```
nba.hof_player_breakdown_plot()
```
##### NBA Player Birth Locations Histogram
```
nba.hof_birth_loc_plot()
```
##### NBA Player Birth Locations Map
```
try:
    os.remove('hof_birth_map.html')
except FileNotFoundError:
    pass
nba.hof_birth_map_plot()
```
##### NBA Hall of Fame Players College Attendance
```
nba.hof_college_plot()
```
---
## Features
```
nba.hof_correlation_plot()
```
---
## Data Subsets
Subsets of the season statistics dataset are determined by isolating records with complete features (no missing data).
A total of 21 subsets were identified and stored in the *feature_subset* attribute dictionary, with the record counts as keys.
Each subset is a named tuple with the following fields.
- data: original data
- feature_names: names of included features
- x_test: x test dataset
- x_train: x training dataset
- y_test: y test dataset
- y_train: y training dataset
The test and train data has the following qualities.
- test set size is 20% of the subset
- training set is balanced with 500 random entries for both the Hall of Fame and Regular players
- alter this parameter using the *training_size* attribute
- the random seed is set to 0 by default
- alter this parameter using the *seed* attribute
- none of the test entries are included in the training dataset
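The balanced-sampling step can be sketched in NumPy (array names and class counts here are illustrative, not taken from the nba_stats package):

```python
import numpy as np

rng = np.random.RandomState(0)  # the seed defaults to 0, as described above

# Illustrative labels: 1 = Hall of Fame, 0 = regular player.
labels = np.array([1] * 700 + [0] * 9000)

# Draw 500 random entries from each class, without replacement,
# so the training set is balanced.
per_class = 500
hof_idx = rng.choice(np.where(labels == 1)[0], per_class, replace=False)
reg_idx = rng.choice(np.where(labels == 0)[0], per_class, replace=False)
train_idx = np.concatenate([hof_idx, reg_idx])
```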
```
subsets = nba.feature_subset.keys()
print(f'Number of Subsets: {len(subsets)}')
```
---
## Principal Component Analysis (PCA)
The PCA for each of the subsets is calculated in the player.Statistics constructor.
The *pca* attribute is a dictionary with keys of features count for each subset.
Each PCA subset is a namedtuple with the following fields:
- cut_off: number of components that have a positive $2^{nd}$ derivative for the scree plot
- feature_names: names of included features
- fit: PCA model fit of the training datasets
- model: PCA model
- n_components: number of components
- subset: original subset data
- var_pct: variance percentage
- var_pct_cum: variance percentage cumulative sum
- variance: DataFrame combining var_pct and var_pct_cum
- x_test: x test dataset
- x_train: x training dataset
- y_test: y test dataset
- y_train: y training dataset
```
import pandas as pd
pca_subsets = pd.DataFrame(list(zip(nba.pca.keys(), nba.feature_subset.keys())),
columns=['Features', 'Records'])
pca_subsets.index = pca_subsets.index.rename('PCA Model')
pca_subsets
```
---
## Model Evaluations
##### Comparison Plot
```
nba.evaluation_plot()
```
##### Choose Optimal Feature Subset
The random seed value is not set and calculations for all subsets are run 100 times with the top test score being tallied.
**Note**:
Models with 47 or 48 features result in the highest predictive classification scores.
The variation is due to the stochastic nature of the calculation.
With the seed values set to the default of 0 the optimal features to use will be 47.
```
# nba.optimal_features_plot(evaluations=100)
```
##### PCA Plot for Optimal Feature Subset
```
nba.classify_players(nba.pca[47], model='LR')
nba.pca_plot(nba.pca[47])
```
##### Confusion Matrix for Optimal Feature Subset
```
players.confusion_plot(nba.classify[47].confusion)
```
##### Evaluate All Players in a Subset
```
nba.evaluate_all_players(feature_qty=47, model='LR')
```
###### Note:
At first glance the Classification Report and Confusion Matrix appear to be in disagreement. scikit-learn performs the following steps to generate the Classification Report.
1. Calculate the precision and recall for each class.
1. Use values from step 1 to calculate the F1 score.
1. Calculate a weighted average for precision, recall, and the F1 score.
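A sketch of those three steps with made-up per-class numbers (two classes with supports 60 and 40; none of these values come from the notebook):

```python
import numpy as np

precision = np.array([0.8, 0.5])   # step 1: per-class precision
recall = np.array([0.6, 0.7])      # step 1: per-class recall
support = np.array([60, 40])       # number of true examples per class

# Step 2: per-class F1 from precision and recall.
f1 = 2 * precision * recall / (precision + recall)

# Step 3: support-weighted averages, as reported in the bottom row.
weights = support / support.sum()
weighted_precision = (precision * weights).sum()
weighted_recall = (recall * weights).sum()
weighted_f1 = (f1 * weights).sum()
```

Note that the weighted F1 is an average of per-class F1 scores, not the F1 of the weighted precision and recall, which is one reason the report can look inconsistent with the confusion matrix at first glance.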
```
nba.evaluate_all_players(feature_qty=47, model='LR')
```
# Feature Crosses
Continuing on the previous exercise, we will improve our linear regression model with the addition of more synthetic features.
First, let's define the input and create the data loading code.
```
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
    """Prepares input features from California housing data set.

    Args:
      california_housing_dataframe: A Pandas DataFrame expected to contain data
        from the California housing data set.
    Returns:
      A DataFrame that contains the features to be used for the model, including
      synthetic features.
    """
    selected_features = california_housing_dataframe[
        ["latitude",
         "longitude",
         "housing_median_age",
         "total_rooms",
         "total_bedrooms",
         "population",
         "households",
         "median_income"]]
    processed_features = selected_features.copy()
    # Create a synthetic feature.
    processed_features["rooms_per_person"] = (
        california_housing_dataframe["total_rooms"] /
        california_housing_dataframe["population"])
    return processed_features

def preprocess_targets(california_housing_dataframe):
    """Prepares target features (i.e., labels) from California housing data set.

    Args:
      california_housing_dataframe: A Pandas DataFrame expected to contain data
        from the California housing data set.
    Returns:
      A DataFrame that contains the target feature.
    """
    output_targets = pd.DataFrame()
    # Scale the target to be in units of thousands of dollars.
    output_targets["median_house_value"] = (
        california_housing_dataframe["median_house_value"] / 1000.0)
    return output_targets
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_examples.describe()
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
training_targets.describe()
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_examples.describe()
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
validation_targets.describe()
```
### Feature engineering
Creating relevant features greatly improves ML models, especially for simple models like regression. We learned in a previous exercise that two (or more) independent features often do not provide as much information as a feature derived from them.
We have already used a synthetic feature in our example: `rooms_per_person`.
We can create simple synthetic features by performing operations on certain columns. However, this may become tedious for complex operations like bucketizing or crossing bucketized features. Feature columns are powerful abstractions that make it easy to add synthetic features.
```
longitude = tf.contrib.layers.real_valued_column("longitude")
latitude = tf.contrib.layers.real_valued_column("latitude")
housing_median_age = tf.contrib.layers.real_valued_column("housing_median_age")
households = tf.contrib.layers.real_valued_column("households")
median_income = tf.contrib.layers.real_valued_column("median_income")
rooms_per_person = tf.contrib.layers.real_valued_column("rooms_per_person")
feature_columns = set([
longitude,
latitude,
housing_median_age,
households,
median_income,
rooms_per_person])
```
#### The input function
Previously, we passed data to the estimator using Pandas `DataFrame` objects. A more flexible, but more complex, way to pass data is through the input function.
One particularity of the estimators API is that input functions are responsible for splitting the data into batches, so the `batch_size` arg is ignored when using `input_fn`. The batch size will be determined by the number of rows that the input function returns (see below).
Input functions return [Tensor](https://www.tensorflow.org/versions/master/api_docs/python/framework.html#Tensor) objects, which are the core data types used in TensorFlow. More specifically, input functions must return the following `(features, label)` tuple:
* `features`: A `dict` mapping `string` values (the feature name) to `Tensor` values of shape `(n, 1)` where `n` is the number of data rows (and therefore batch size) returned by the input function.
* `label`: A `Tensor` of shape `(n, 1)`, representing the corresponding labels.
As a side note, input functions usually create a queue that reads the data sequentially rather than preloading it, which makes them a necessity when the data is too large to fit in memory; queues are an advanced topic not covered here.
For simplicity, our function will convert the entire `DataFrame` to a `Tensor`. This means we'll use a batch size of `12000` (and respectively `5000` for validation) - somewhat on the large side, but that will work fine with our small model. This will make training somewhat slower, but thanks to vector optimizations the performance penalty won't be that bad.
Here's the necessary input function:
```
def input_function(examples_df, targets_df, single_read=False):
    """Converts a pair of examples/targets `DataFrame`s to `Tensor`s.

    The `Tensor`s are reshaped to `(N,1)` where `N` is number of examples in the `DataFrame`s.

    Args:
      examples_df: A `DataFrame` that contains the input features. All its columns will be
        transformed into corresponding input feature `Tensor` objects.
      targets_df: A `DataFrame` that contains a single column, the targets corresponding to
        each example in `examples_df`.
      single_read: A `bool` that indicates whether this function should stop after reading
        through the dataset once. If `False`, the function will loop through the data set.
        This stop mechanism is used by the estimator's `predict()` to limit the number of
        values it reads.

    Returns:
      A tuple `(input_features, target_tensor)`:
        input_features: A `dict` mapping string values (the column name of the feature) to
          `Tensor`s (the actual values of the feature).
        target_tensor: A `Tensor` representing the target values.
    """
    features = {}
    for column_name in examples_df.keys():
        batch_tensor = tf.to_float(
            tf.reshape(tf.constant(examples_df[column_name].values), [-1, 1]))
        if single_read:
            features[column_name] = tf.train.limit_epochs(batch_tensor, num_epochs=1)
        else:
            features[column_name] = batch_tensor
    target_tensor = tf.to_float(
        tf.reshape(tf.constant(targets_df[targets_df.keys()[0]].values), [-1, 1]))
    return features, target_tensor
```
For an example, the code below shows the output of the input function when passed a few sample records from the California housing data set.
This snippet is for illustrative purposes only. It is not required for training the model, but you may find it useful to visualize the effect of various feature crosses.
```
def sample_from_input_function(input_fn):
    """Returns a few samples from the given input function.

    Args:
      input_fn: An input function that meets the `Estimator`'s contract for
        input functions.

    Returns:
      A `DataFrame` that contains a small number of records that are returned
      by this function.
    """
    examples, target = input_fn()
    example_samples = {
        name: tf.strided_slice(values, [0, 0], [5, 1]) for name, values in examples.items()
    }
    target_samples = tf.strided_slice(target, [0, 0], [5, 1])
    with tf.Session() as sess:
        example_sample_values, target_sample_values = sess.run(
            [example_samples, target_samples])
    results = pd.DataFrame()
    for name, values in example_sample_values.items():
        results[name] = pd.Series(values.reshape(-1))
    results['target'] = target_sample_values.reshape(-1)
    return results
samples = sample_from_input_function(
lambda: input_function(training_examples, training_targets))
samples
```
### FTRL optimization algorithm
High dimensional linear models benefit from using a variant of gradient-based optimization called FTRL. This algorithm has the benefit of scaling the learning rate differently for different coefficients, which can be useful if some features rarely take non-zero values (it also is well suited to support L1 regularization). We can apply FTRL using the [FtrlOptimizer](https://www.tensorflow.org/versions/master/api_docs/python/train.html#FtrlOptimizer).
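The per-coefficient scaling works roughly like AdaGrad, which FTRL builds on: each coordinate accumulates its own squared gradients, so rarely-updated coordinates keep a larger effective step size. The toy sketch below illustrates only that per-coordinate idea; it is not the actual `FtrlOptimizer` update, which also handles L1 shrinkage:

```python
import numpy as np

w = np.zeros(2)
n = np.zeros(2)          # per-coordinate sum of squared gradients
base_lr = 0.5

# Coordinate 0 receives a gradient in every step; coordinate 1 only once.
for g in [np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([1.0, 1.0])]:
    n += g ** 2
    lr = base_lr / np.sqrt(np.maximum(n, 1e-12))  # per-coordinate step size
    w -= lr * g

# Coordinate 0's later steps shrank as its gradient history grew,
# while coordinate 1 still took a full-size step on its single update.
```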
```
def train_model(
        learning_rate,
        steps,
        feature_columns,
        training_examples,
        training_targets,
        validation_examples,
        validation_targets):
    """Trains a linear regression model.

    In addition to training, this function also prints training progress information,
    as well as a plot of the training and validation loss over time.

    Args:
      learning_rate: A `float`, the learning rate.
      steps: A non-zero `int`, the total number of training steps. A training step
        consists of a forward and backward pass using a single batch.
      feature_columns: A `set` specifying the input feature columns to use.
      training_examples: A `DataFrame` containing one or more columns from
        `california_housing_dataframe` to use as input features for training.
      training_targets: A `DataFrame` containing exactly one column from
        `california_housing_dataframe` to use as target for training.
      validation_examples: A `DataFrame` containing one or more columns from
        `california_housing_dataframe` to use as input features for validation.
      validation_targets: A `DataFrame` containing exactly one column from
        `california_housing_dataframe` to use as target for validation.

    Returns:
      A `LinearRegressor` object trained on the training data.
    """
    periods = 10
    steps_per_period = steps / periods

    # Create a linear regressor object.
    linear_regressor = tf.contrib.learn.LinearRegressor(
        feature_columns=feature_columns,
        optimizer=tf.train.FtrlOptimizer(learning_rate=learning_rate),
        gradient_clip_norm=5.0
    )

    training_input_function = lambda: input_function(
        training_examples, training_targets)
    training_input_function_for_predict = lambda: input_function(
        training_examples, training_targets, single_read=True)
    validation_input_function_for_predict = lambda: input_function(
        validation_examples, validation_targets, single_read=True)

    # Train the model, but do so inside a loop so that we can periodically assess
    # loss metrics.
    print("Training model...")
    print("RMSE (on training data):")
    training_rmse = []
    validation_rmse = []
    for period in range(0, periods):
        # Train the model, starting from the prior state.
        linear_regressor.fit(
            input_fn=training_input_function,
            steps=steps_per_period
        )
        # Take a break and compute predictions.
        training_predictions = list(linear_regressor.predict(
            input_fn=training_input_function_for_predict))
        validation_predictions = list(linear_regressor.predict(
            input_fn=validation_input_function_for_predict))
        # Compute training and validation loss.
        training_root_mean_squared_error = math.sqrt(
            metrics.mean_squared_error(training_predictions, training_targets))
        validation_root_mean_squared_error = math.sqrt(
            metrics.mean_squared_error(validation_predictions, validation_targets))
        # Occasionally print the current loss.
        print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
        # Add the loss metrics from this period to our list.
        training_rmse.append(training_root_mean_squared_error)
        validation_rmse.append(validation_root_mean_squared_error)
    print("Model training finished.")

    # Output a graph of loss metrics over periods.
    plt.ylabel("RMSE")
    plt.xlabel("Periods")
    plt.title("Root Mean Squared Error vs. Periods")
    plt.tight_layout()
    plt.plot(training_rmse, label="training")
    plt.plot(validation_rmse, label="validation")
    plt.legend()

    return linear_regressor
_ = train_model(
learning_rate=1.0,
steps=500,
feature_columns=feature_columns,
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
```
### One-hot encoding for discrete features
Discrete (i.e. strings, enumerations, integers) features are usually converted into families of binary features before training a logistic regression model.
For example, suppose we created a synthetic feature that can take any of the values `0`, `1` or `2`, and that we have a few training points:
| # | feature_value |
|---|---------------|
| 0 | 2 |
| 1 | 0 |
| 2 | 1 |
For each possible categorical value, we make a new **binary** feature of **real values** that can take one of just two possible values: 1.0 if the example has that value, and 0.0 if not. In the example above, the categorical feature would be converted into three features, and the training points now look like:
| # | feature_value_0 | feature_value_1 | feature_value_2 |
|---|-----------------|-----------------|-----------------|
| 0 | 0.0 | 0.0 | 1.0 |
| 1 | 1.0 | 0.0 | 0.0 |
| 2 | 0.0 | 1.0 | 0.0 |
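The conversion in the table above can be sketched in a few lines of NumPy. The `one_hot` helper is purely illustrative; TensorFlow's feature columns perform this encoding internally:

```python
import numpy as np

def one_hot(values, num_classes):
    """Convert integer category values to one-hot rows of 0.0/1.0."""
    encoded = np.zeros((len(values), num_classes))
    encoded[np.arange(len(values)), values] = 1.0
    return encoded

# The three training points from the table above.
feature_value = np.array([2, 0, 1])
print(one_hot(feature_value, 3))
# [[0. 0. 1.]
#  [1. 0. 0.]
#  [0. 1. 0.]]
```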
### Bucketized (binned) features
Bucketization is also known as binning.
We can bucketize `population` into the following 3 buckets (for instance):
- `bucket_0` (`< 5000`): corresponding to less populated blocks
- `bucket_1` (`5000 - 25000`): corresponding to mid populated blocks
- `bucket_2` (`> 25000`): corresponding to highly populated blocks
Given the preceding bucket definitions, the following `population` vector:
[[10001], [42004], [2500], [18000]]
becomes the following bucketized feature vector:
[[1], [2], [0], [1]]
The feature values are now the bucket indices. Note that these indices are considered discrete features. Typically, they will be further converted into one-hot representations as above, but this is done transparently.
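The `population` example above can be reproduced with `np.digitize` (a sketch only; the actual bucketization is handled by `bucketized_column`):

```python
import numpy as np

population = np.array([10001, 42004, 2500, 18000])
boundaries = [5000, 25000]  # bucket_0 < 5000 <= bucket_1 < 25000 <= bucket_2

# Map each population value to the index of the bucket it falls into.
bucket_indices = np.digitize(population, boundaries)
print(bucket_indices)  # → [1 2 0 1]
```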
To define bucketized features, use `bucketized_column`, which requires the boundaries separating each bucket. The function in the cell below will calculate these boundaries based on quantiles, so that each bucket contains an equal number of elements.
```
def get_quantile_based_boundaries(feature_values, num_buckets):
boundaries = np.arange(1.0, num_buckets) / num_buckets
quantiles = feature_values.quantile(boundaries)
return [quantiles[q] for q in quantiles.keys()]
# Divide households into 7 buckets.
bucketized_households = tf.contrib.layers.bucketized_column(
households, boundaries=get_quantile_based_boundaries(
california_housing_dataframe["households"], 7))
# Divide longitude into 10 buckets.
bucketized_longitude = tf.contrib.layers.bucketized_column(
longitude, boundaries=get_quantile_based_boundaries(
california_housing_dataframe["longitude"], 10))
```
### Task 1: Train the model on bucketized feature columns.
**Bucketize all the real-valued features in our example, train the model, and see if the results improve.**
In the preceding code block, two real-valued columns (namely `households` and `longitude`) were transformed into bucketized feature columns. Your task is to bucketize the remaining columns, then run the code to train the model. There are various heuristics for choosing the bucket ranges. This exercise uses a quantile-based technique, which chooses the bucket boundaries so that each bucket contains the same number of examples.
```
#
# Your code here: bucketize the following columns below, following the example above.
#
bucketized_latitude =
bucketized_housing_median_age =
bucketized_median_income =
bucketized_rooms_per_person =
bucketized_feature_columns=set([
bucketized_longitude,
bucketized_latitude,
bucketized_housing_median_age,
bucketized_households,
bucketized_median_income,
bucketized_rooms_per_person])
_ = train_model(
learning_rate=1.0,
steps=500,
feature_columns=bucketized_feature_columns,
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
```
### Feature crosses
Crossing two (or more) features is a clever way to learn non-linear relations using a linear model. In our problem, if we just use the feature `latitude` for learning, the model might learn that city blocks at a particular latitude (or within a particular range of latitudes since we have bucketized it) are more likely to be expensive than others. Similarly for the feature `longitude`. However, if we cross `longitude` by `latitude`, the crossed feature represents a well defined city block. If the model learns that certain city blocks (within range of latitudes and longitudes) are more likely to be more expensive than others, it is a stronger signal than two features considered individually.
Currently, the feature columns API only supports discrete features for crosses. To cross two continuous values, like `latitude` and `longitude`, we can bucketize them.
If we cross the `latitude` and `longitude` features (supposing, for example, that `longitude` was bucketized into `2` buckets, while `latitude` has `3` buckets), we actually get six crossed binary features. Each of these features will get its own separate weight when we train the model.
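As a sketch of the cross (the bucket assignments below are hypothetical), each pair of bucket indices can be mapped to a single crossed category index; in practice `crossed_column` additionally hashes these categories into `hash_bucket_size` buckets:

```python
import numpy as np

n_lon_buckets, n_lat_buckets = 2, 3

# Hypothetical bucket indices for four examples.
lon_buckets = np.array([0, 1, 1, 0])
lat_buckets = np.array([2, 0, 1, 1])

# Each (lat, lon) pair maps to one of 3 * 2 = 6 crossed categories,
# and each category gets its own one-hot feature (and its own weight).
crossed = lat_buckets * n_lon_buckets + lon_buckets
print(crossed)  # → [4 1 3 2]
```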
### Task 2: Train the model using feature crosses.
**Add a feature cross of `longitude` and `latitude` to your model, train it, and determine whether the results improve.**
```
long_x_lat = tf.contrib.layers.crossed_column(
set([bucketized_longitude, bucketized_latitude]), hash_bucket_size=1000)
#
# Your code here: Create a feature column set that includes the cross.
#
feature_columns_with_cross =
_ = train_model(
learning_rate=1.0,
steps=500,
feature_columns=feature_columns_with_cross,
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
```
### Optional Challenge: Try out more synthetic features.
So far, we've tried simple bucketized columns and feature crosses, but there are many more combinations that could potentially improve the results. For example, you could cross multiple columns. What happens if you vary the number of buckets? What other synthetic features can you think of? Do they improve the model?
# MODIS Cloud Top Pressure Retrieval
This notebook demonstrates the application of QRNNs to retrieving cloud-top pressure (CTP) from MODIS infrared observations. A similar retrieval will be used in the next version of the EUMETSAT PPS package for the production
of near-real-time (NRT) meteorological data to support nowcasting activities.
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from quantnn.models.keras.xception import XceptionNet
model = XceptionNet(15, 101)
```
## Downloading the data
```
from quantnn.examples.modis_ctp import download_data
download_data()
```
## Loading and preparing the training data
```
import pathlib
from quantnn.normalizer import Normalizer
training_data = np.load("data/ctp_training_data.npz")
x_train, y_train = training_data["x"], training_data["y"]
normalizer = Normalizer(x_train)
x_train = normalizer(x_train)
```
## Defining a neural network model
```
quantiles = [0.01, 0.05, 0.15, 0.25, 0.35, 0.45, 0.5, 0.55, 0.65, 0.75, 0.85, 0.95, 0.99]
import torch
import torch.nn as nn
n_layers = 4
n_neurons = 256
# First block
layers = [nn.Linear(16, n_neurons), nn.BatchNorm1d(n_neurons), nn.ReLU(), ]
# Center blocks
for _ in range(n_layers):
layers.extend([nn.Linear(n_neurons, n_neurons), nn.BatchNorm1d(n_neurons), nn.ReLU()])
# Final block
layers.append(nn.Linear(n_neurons, len(quantiles)))
model = nn.Sequential(*layers)
```
## Training the neural network
```
from quantnn import QRNN
qrnn = QRNN(quantiles=quantiles,
model=model)
from torch.utils.data import TensorDataset, DataLoader
x_tensor = torch.tensor(x_train).float()
y_tensor = torch.tensor(y_train).float()
training_data = TensorDataset(x_tensor, y_tensor)
training_loader = DataLoader(training_data,
batch_size=256,
shuffle=True,
num_workers=4)
n_epochs = 10
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, n_epochs)
qrnn.train(training_loader,
optimizer=optimizer,
scheduler=scheduler,
n_epochs=n_epochs)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, n_epochs)
qrnn.train(training_loader,
optimizer=optimizer,
scheduler=scheduler,
n_epochs=n_epochs)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, n_epochs)
results = qrnn.train(training_loader,
optimizer=optimizer,
scheduler=scheduler,
n_epochs=n_epochs)
```
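Under the hood, a QRNN is trained by minimizing the quantile (pinball) loss for each predicted quantile. The following is a minimal NumPy sketch of that loss; the actual quantnn implementation may differ in detail:

```python
import numpy as np

def quantile_loss(y_pred, y_true, tau):
    """Pinball loss for quantile tau: an asymmetric absolute error."""
    err = y_true - y_pred
    return np.mean(np.maximum(tau * err, (tau - 1.0) * err))

# The loss is minimized when y_pred equals the tau-quantile of y_true.
y = np.random.default_rng(0).normal(size=100_000)
candidates = [-1.0, np.quantile(y, 0.9), 3.0]
losses = [quantile_loss(q, y, 0.9) for q in candidates]
print(losses)  # the middle candidate (the true 0.9-quantile) gives the smallest loss
```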
## Applying the CTP retrieval
To validate the CTP retrieval, we will apply the retrieval to observations of Hurricane Nicole of the 2016 Hurricane season and compare the results to the cloud-top pressure determined by the CALIOP lidar on the CALIPSO satellite, which is also used as reference to generate the training data.
```
validation_data = np.load("data/ctp_validation_data.npz")
# Overview over full MODIS observations.
lons_rgb = validation_data["longitude_rgb"]
lats_rgb = validation_data["latitude_rgb"]
modis_rgb = validation_data["modis_rgb"]
modis_bt_11 = validation_data["bt_11_rgb"]
modis_bt_12 = validation_data["bt_12_rgb"]
# Caliop obervations used as reference
lons_c = validation_data["longitude"]
lats_c = validation_data["latitude"]
ctp_c = validation_data["ctp"]
input_data = validation_data["input_data"]
```
### Hurricane Nicole
The plot below shows an overview of the scene that we will be using to validate the retrieval. The scene depicts an overpass of the CALIOP lidar over Hurricane Nicole from the 2016 hurricane season. The line plotted on top of the true-color image in panel (a) displays the swath of the CALIOP lidar. As you can see, it passed directly through the eye of the hurricane.
Panels (b) and (c) show the MODIS observations that are used as input for the retrieval. The two channels are located in the thermal infrared region and thus measure thermal emission from the atmosphere. Nicole's high clouds are visible as cold regions in the image, since their radiation is emitted higher up in the atmosphere, where it is colder.
```
import cartopy.crs as ccrs
from matplotlib.gridspec import GridSpec
f = plt.figure(figsize=(14, 4))
gs = GridSpec(2, 3, height_ratios=[1.0, 0.05])
ax = plt.subplot(gs[0, 0], projection=ccrs.PlateCarree())
colors = modis_rgb[:-1, :-1].reshape(-1, 4) / 256.0
l = plt.plot(lons_c, lats_c, c="r")
ax.pcolormesh(lons_rgb, lats_rgb, lons_rgb, color=colors)
ax.set_xticks(np.linspace(-85, -55, 7))
ax.set_xlabel("Longitude [$^\circ\ E$]")
ax.set_yticks(np.linspace(20, 40, 6))
ax.set_ylabel("Latitude [$^\circ\ N$]")
ax.set_title("(a) MODIS true color")
ax = plt.subplot(gs[1, 0])
ax.set_axis_off()
ax.legend(handles=l, labels=["CALIOP swath"], loc="center")
ax = plt.subplot(gs[0, 1], projection=ccrs.PlateCarree())
m = ax.pcolormesh(lons_rgb, lats_rgb, modis_bt_11)
ax.set_xticks(np.linspace(-85, -55, 7))
ax.set_xlabel("Longitude [$^\circ\ E$]")
ax.set_yticks(np.linspace(20, 40, 6))
ax.set_title("(b) MODIS $11\mu$")
ax = plt.subplot(gs[1, 1])
plt.colorbar(m, cax=ax, orientation="horizontal", label="Brightness temperature")
ax = plt.subplot(gs[0, 2], projection=ccrs.PlateCarree())
img = ax.pcolormesh(lons_rgb, lats_rgb, modis_bt_12)
ax.set_xticks(np.linspace(-85, -55, 7))
ax.set_xlabel("Longitude [$^\circ\ E$]")
ax.set_title("(c) MODIS $12\mu$")
ax = plt.subplot(gs[1, 2])
plt.colorbar(m, cax=ax, orientation="horizontal", label="Brightness temperature")
f.canvas.draw()
plt.tight_layout()
```
### Running the retrieval
The validation data comes with pre-processed observations along the CALIOP swath. Evaluating the retrieval therefore only requires normalizing the data (using the same normalizer that was used during training) and evaluating the network prediction.
```
y_pred = qrnn.predict(normalizer(input_data))
# CALIOP reference data
y_ref = ctp_c[:, 0]
y_ref[y_ref < 0.0] = np.nan
```
The plot below shows the QRNN-predicted cloud-top pressure as confidence intervals, together with the reference data from the CALIOP lidar (black markers). Although there is considerable uncertainty in the retrieval, all reference values lie within the predicted intervals.
However, the retrieved cloud-top pressure over the hurricane seems to rather consistently underestimate the reference pressure, which indicates that the uncertainty estimates are not very well calibrated in this region. This is expected, to some extent, because the QRNN learned to predict uncertainty based on the a-priori distribution of cloud-top pressures in the training data, which is quite different from that of the hurricane.
```
from quantnn.plotting import plot_confidence_intervals
f, ax = plt.subplots(1, 1)
plot_confidence_intervals(ax, lats_c, y_pred, qrnn.quantiles)
ax.scatter(lats_c, y_ref, c="k", marker=".", s=2)
ax.set_xlim([lats_c.min(), lats_c.max()])
ax.set_ylim([0, 1000])
ax.invert_yaxis()
ax.set_ylabel("Cloud-top pressure [hPa]")
ax.set_xlabel("Latitude [$^\circ\ N$]")
```
## Comparison to XGBoost
We conclude this example by comparing the QRNN performance to that of another machine learning method: gradient-boosted regression trees.
```
import xgboost as xgb
xgb_retrieval = xgb.XGBRegressor(n_estimators=100,
reg_lambda=1,
gamma=0,
max_depth=3)
xgb_retrieval.fit(x_train, y_train)
from quantnn import posterior_mean
y_pred_xgb = xgb_retrieval.predict(normalizer(input_data))
y_pred_qrnn = posterior_mean(y_pred.numpy(), qrnn.quantiles)
f, ax = plt.subplots(1, 1)
plot_confidence_intervals(ax, lats_c, y_pred, qrnn.quantiles)
ax.scatter(lats_c, y_pred_xgb, c="grey", marker=".", s=2)
ax.scatter(lats_c, y_pred_qrnn, c="navy", marker=".", s=2)
ax.scatter(lats_c, y_ref, c="k", marker=".", s=2)
ax.set_xlim([lats_c.min(), lats_c.max()])
ax.set_ylim([0, 1000])
ax.invert_yaxis()
ax.set_ylabel("Cloud-top pressure [hPa]")
ax.set_xlabel("Latitude [$^\circ\ N$]")
```
`2017-09-11 Monday`
# Programme over the two years
Introduction ($\sum\text{technical terms}$)
1. Microeconomics
2. Macroeconomics
3. International openness
4. Development economics
# Chapter 1: The Foundations of Economics
## Introduction
Economics seeks to solve the problem of satisfying the fundamental needs of individuals.
## 1. Scarcity
Scarcity is a basic concept used to measure our capacity to satisfy our fundamental needs.
For economists, all goods and services (G&S) that have a price are relatively (more or less) scarce. Indeed, G&S are scarce relative to people's demand for them.
E.g. buses, oil ...
The G&S that satisfy our needs do not exist in sufficient quantity; that is, they exist only in scarce or limited quantity. Note that what is at stake is a scarcity of resources.
Economists use the term "scarcity" differently from everyday usage. This notion is the basic postulate of a great number of economic theories.
Scarcity is not a hypothesis but a universal and timeless reality: almost everything is scarce.
Buses are not scarce in Paris, but for economists they are, because few people can acquire (buy) a bus.
### Definition
Scarcity is a tension between needs and the resources available to satisfy them.
- Economics is the science of scarcity.
- Economics studies the way society manages its scarce resources.
- This science studies how scarce resources are used to satisfy the needs of people living in society.
Scarcity measures the limited character of society's resources.
### Example
Venezuela, today (**spatial and temporal frame**): scarcity in terms of food
## 2. Choice
Economists try to solve the problem of scarcity by making choices.
People do not have infinite incomes. They must make choices in order to buy G&S.
They must take many decisions to make good use of their limited resources.
In general, to obtain one G&S, one must give up another that one likes. Taking a decision therefore amounts to comparing two objectives.
E.g. 2 T-shirts
Since most individuals live in society, they face other kinds of choices as well.
The traditional example in economics opposes butter to guns. The more we spend on national defence (guns) to protect our territory, the less remains to be spent on improving our standard of living at home (butter). *(Samuelson)*
In contemporary societies, a choice that has become vital is the one between a clean environment and the level of income. E.g. laws that force firms to reduce their pollution raise the production costs of G&S. As a result, the firms in question earn less income, pay lower wages to their employees and raise the prices of their products. In the end, if anti-pollution laws give us a healthier (more salubrious) environment, they do so at the price of lower incomes for the owners, employees and customers of the polluting firms.
Society must often choose between efficiency and equity.
Efficiency refers to the size of the pie.
Equity consists in distributing the products of these resources fairly among the members of society. It refers to how the pie is shared.
Knowing that we must make choices tells us nothing about which decisions will or should be made.
## 3. Opportunity cost
Because we must make choices, taking a decision implies being able to compare the costs and benefits of the various possible options.
E.g. a student's choice of an extra year at university.
Every decision entails a cost, called the opportunity cost.
The _opportunity cost_ is the maximum gain that could have been obtained from the best alternative use of a resource. For example, the income spent on a trip cannot be used for a financial investment.
### Definition
_The opportunity cost of a G&S is the quantity of other G&S that must be given up in order to produce one additional unit of that G&S._
The opportunity cost is the best alternative foregone when an economic decision is taken.
Classification of G&S according to opportunity cost: when a G&S has an opportunity cost because it is relatively scarce, it has a price and is classified as an economic good. By contrast, when a G&S is abundant and free, and its production requires no human labour, it is classified as a free (natural) good. E.g. air is abundant; everyone can have as much of it as they want.
## 4. The fundamental questions
Every human society, whether an advanced industrialized nation, a centrally planned economy or an isolated tribal nation, inevitably faces three fundamental problems or questions.
Every society must find a way to determine *what* goods are produced, *how* they are produced and for *whom* they are produced.
### What to produce?
Which goods are produced, and in what quantities? At what moment will production take place?
Production is the activity of creating G&S capable of satisfying individual or collective needs.
To produce = to use jointly (combine) resources not directly able to satisfy our needs, in order to obtain G&S.
Production is the sum of market production and non-market production.
Market production is the production of G&S intended to be sold on a market.
Non-market production represents the free or quasi-free services produced with factors of production obtained on the market.
### How are G&S produced?
A society (a country) determines who will carry out production, with which resources and with which production techniques.
Is electricity produced from oil, coal or the sun? Do factories run on people or on robots?
### For whom are goods produced?
Who will enjoy the fruits of economic activity? Is the distribution of income and wealth impartial and equitable? How is GDP shared among the different households? Are there many poor and a few rich? Who receives the high incomes? Should university be for everyone or only for those who can pay for their education?
All the economic activity that answers these questions is organized through three operations: production, exchange and consumption.
## 5. The factors of production
There are four resources, or means, that allow an economy to produce its products and thus to answer these three fundamental questions.
Every society must make choices about the means of production and the G&S produced for the economy.
### a) Land (T)
Land, or natural resources, includes many elements. It comprises everything under the ground, such as gold, oil and natural gas, and everything above the ground that is cultivated, such as rice and wheat.
### b) Labour (L or W)
Labour is a human factor. In the industrialized countries (IC) it designates a remunerated human activity, giving rise to a counterpart in money or in kind.
People, or the active population, mobilize their physical or intellectual capacities to obtain G&S that answer specific needs.
Note the importance of the organization of labour, which represents the way activity is divided among a firm's different employees.
- Total population
  - Inactive population
  - Active population = employed active population + unemployed active population
    - Employed active population
    - Unemployed active population = the unemployed
### c) Capital
Capital comes from investment (*= purchase*) in technical capital and in human capital.
- Technical capital is made up of material means such as machines and roads.
More precisely, it includes the stock of manufactured goods, such as factories and machines, on the one hand,
and the country's stock of infrastructure, such as roads, railways, ports, airports and communications, on the other.
- Human capital represents the value of the labour force, e.g. education.
Human capital is the set of intellectual and professional capacities of an individual that secure future monetary income for him or her.
Cf. Gary Becker, Nobel laureate in Economics, 1992, who coined this expression.
Two further forms of capital are also distinguished:
- Fixed capital, which is used several times, over several production cycles.
- Circulating capital, which disappears upon first use in the production process.
E.g. the production of transport needs fixed capital, such as the truck, and circulating capital, such as the fuel.
NB: complementary and substitutable factors
- Complementary factors: using one factor makes the use of the other necessary.
- Substitutable factors: the use of one can be replaced by the use of another factor. We distinguish capital-intensive production, with little labour and much capital, from labour-intensive production, with little capital and much labour.
Do not confuse _technical capital_, which incorporates a certain technical progress (recent machines), with _physical capital_, which represents goods produced in the past that serve as means of present and future production (buildings, equipment, machines, semi-finished products, raw materials), and with _financial capital_, which groups together the assets that earn interest.
### d) Management
This is the body of knowledge concerning the organization and management of a firm.
Note the importance of the organization of labour, which represents the way activity is divided among a firm's different employees.
(more quantity && more time) OR (less quantity && less time)?
Productivity = production efficiency = $\frac{\text{Quantity}}{\text{time}}$
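As a small, hypothetical illustration of the productivity ratio:

```python
def productivity(quantity, hours):
    """Productivity = output quantity per unit of time."""
    return quantity / hours

# Producing 120 units in 8 hours vs 90 units in 5 hours:
print(productivity(120, 8))  # → 15.0 units/hour
print(productivity(90, 5))   # → 18.0 units/hour: the more efficient process
```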
## 6. Production possibility curves
All our daily acts, our whole daily life, depend on the actions of thousands of people we will never meet but who have contributed to producing everything we enjoy each day.
The economy coordinates the activities of millions of people with different tastes and talents. There is therefore economic interdependence.
To illustrate the concepts of scarcity, choice and opportunity cost, economists use the production possibility curve(s), more precisely called the production possibility frontier, or `fpp`.
The `fpp` shows the maximum quantities of production that the economy can obtain, given its technological knowledge and the quantity of available means of production. We speak of potential production.
The `fpp` expresses the set of combinations of goods and services attainable for a given society.
Ex. p.6
Another example is Samuelson's, with guns and butter.
Countries do not have unlimited means of producing the various products. They are constrained by the available resources and technology.
### Production possibility table
| Case | Butter | Guns |
|------|--------|------|
| A    | 0      | 15   |
| B    | 1      | 14   |
| C    | 2      | 12   |
| D    | 3      | 9    |
| E    | 4      | 5    |
| F    | 5      | 0    |

- C, D: sub-maximal production
- E, F: situation impossible at a given moment
### Representations with the fpp
[illustrations](../fpp.ipynb)
## 7. Utility
> Maximization of well-being, rational economic behaviour
Consumers seek to maximize their utility (i.e. the degree of satisfaction their purchases give them) given their resources and the prices set on the market.
Utility measures the degree of satisfaction derived from a product.
This notion of utility comes from the Marginalist school, which bases the economic value of a product on its marginal utility.
Utility measures consumers' satisfaction and their preferences among several goods. Households, or consumers, seek to maximize their utility given their resources (income) and the prices set by the market. They seek to allocate their budget among all the available goods and services. Consumer theory deals with all these decisions taken by the consumer. Consumption choices depend on our needs, tastes, prices and incomes (preferences).
### Definition
Utility designates the satisfaction or pleasure an individual derives from the consumption of a G&S.
### Example
Units consumed: 1, 2, 3, 4, 5 — utility is maximal for the first unit and minimal for the fifth.
Marginal utility measures the change in satisfaction from consuming one additional unit of a G&S. It measures the change in total utility for a very small change in the quantity consumed ($U_m$).
Suppose:
- consuming 1 coffee yields a utility $U_1 = 10$
- consuming 2 coffees yields a utility $U_2 = 15$
- ($U_3 = 17$)
Total utility increases (otherwise we would not take another coffee), but less strongly than with the 1st coffee. The marginal utility is $U_m = U_2 - U_1 = 15 - 10 = 5$.
The point of this notion of marginal utility, and of reasoning "at the margin", is to bring out the law of diminishing marginal utility: utility keeps growing, since consumption yields utility, but it grows more and more slowly.
NB: relative prices = the ratio between the prices of two or more goods
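The coffee example can be checked with a few lines of Python (utility values taken from the text):

```python
# Total utility after 1, 2 and 3 coffees (values from the example above).
total_utility = [10, 15, 17]

# Marginal utility: the change in total utility from one extra unit.
marginal_utility = [b - a for a, b in zip(total_utility, total_utility[1:])]
print(marginal_utility)  # → [5, 2]: positive but decreasing — diminishing marginal utility
```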
## 8. Microeconomics and macroeconomics
To make the study of economics easier, it is divided into two branches.
Microeconomics studies one part of the economy; macroeconomics is concerned with the functioning of the economy taken as a whole.
Microeconomics studies the behaviour of individuals, consumers and producers. It is concerned with the way their choices adjust to one another through the equilibrium of demand and supply on each G&S market.
Macroeconomics studies and seeks solutions to the major economic problems, such as inflation, unemployment, growth and development.
Remark: even though the two branches are distinguished, in reality microeconomic behaviour can be found within macroeconomics.
## 9. Positive and normative economics
Economics combines normative considerations and positive observations.
**Positive economics** is concerned with the objective or scientific explanation of how the economy works.
**Normative economics** provides recommendations to improve the economic situation: this advice rests on opinions. These normative opinions can also be the basis of the simplifying hypotheses needed to build models and methods.
- Jean: The legal minimum wage is one of the causes of unemployment.
  - positive / scientific economics => descriptive
- Paula: The government should raise the legal minimum wage.
  - normative economics => prescriptive
## 10. The economic circuit
Economic activity is the result of countless operations carried out by a multitude of elementary units (such as firms, households, etc.).
Since it is impossible to describe all these individual movements, these elementary units are grouped into broad categories (cf. the economic agents) in order to represent the economic operations schematically.
The economic circuit is a simplified way of representing economic activity.
It thus represents the functioning of an economy in the form of directed flows linking economic agents (firms, households, the State, etc.), markets (the labour market) or operations (consumption, production, etc.)
François Quesnay was one of the first to use this approach, with his "Tableau économique" in 1758.
Cf. the representations on p.9 and in class
![](../flux.jpg)
## 11. Rationing system: planned economy as opposed to the free-market economy
Rationing designates a market situation in which prices cannot be set freely by the interplay of supply and demand, which leads to a limitation of either the quantity of G&S supplied or the quantity demanded.
(The State decides on the three fundamental questions: what to produce, how to produce, for whom to produce.)
### a) Planned economy
This is an economy where economic agents (= the State) set up a process of fixing, over a medium-term horizon (between 3 and 5 years), prices, economic aggregates, and the qualitative mutations or changes associated with the evolution of those aggregates (changes in the structure of consumption, of production, ...).
Imperative planning, such as the Soviet variety, is contrasted with indicative planning, as in France (where it was born in a context of shortage).
Under imperative planning, the objectives are imposed on economic agents, especially on firms, which are required to meet the targets set by the Plan.
### b) Free-market economy
#### Definition
_An economic system that gives a central role to market mechanisms in regulating economic activities._
#### Example
The Western economies
The economy is then seen as a set of markets that automatically ensure equilibrium between the supply of and demand for economic G&S.
This representation of the economy is called liberal, because regulation must not be disturbed by State intervention.
## 12. Economic growth (Gr)
Growth denotes the _lasting_ (long-run, i.e. 10+ years) increase in an economy's production.
It is a quantitative phenomenon, measured by the growth rate of GDP (gross domestic product), expressed in constant currency, i.e. in volume or in real terms.
Real growth is the rise in GDP, in %, after removing the rise due to inflation.
> An economic variable (e.g. **GDP**) can be expressed:
>
> - with inflation: at current prices, in value, in nominal terms
> - without inflation: at constant prices, in volume, in real terms
E.g. In 2017 (est.), the growth rate of advanced economies was 1.6%, that of the euro zone 1.7%, and that of emerging and developing countries 4.2%.
Gap between the euro-zone growth rate and that of emerging countries:
4.2% - 1.7% = 2.5 percentage POINTS
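The two computations above can be sketched in a few lines. This is a minimal illustration: the 2017 rates come from the text, while the nominal-growth and inflation figures are assumed purely for the example.

```python
# Assumed figures for illustration (not from the text).
nominal_growth = 0.037   # nominal GDP growth: 3.7%
inflation = 0.020        # inflation: 2.0%

# Real growth removes the effect of inflation from the nominal figure.
real_growth = (1 + nominal_growth) / (1 + inflation) - 1
print(f"Real growth: {real_growth:.2%}")

# A gap between two growth rates is expressed in percentage points, not percent.
emerging, euro_zone = 4.2, 1.7              # 2017 estimates from the text (%)
gap = emerging - euro_zone
print(f"Gap: {gap:.1f} percentage points")  # 2.5 percentage points
```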
Not to be confused with an expansion, which is also a rise in a country's production, but over a short period (about one year).
Also not to be confused with development: the two terms are close but distinct.
See section 2.
## 13. Economic development (Dev)
According to François Perroux's definition, development is:
- a combination of mental and social changes
- capable of increasing
- cumulatively and durably (one generation = 20 years)
- the real aggregate product.
It is a qualitative phenomenon.
It is measured by the Human Development Index (HDI).
See section 4.
## 14. Sustainable development
Sustainable development is a new mode of development officially proposed as an objective to their member states by the United Nations Conference on Environment and Development and the World Bank, through the Brundtland Commission report.
The aim is to reconcile the well-being of present generations with safeguarding the environment for future generations.
### Definition
Sustainable development is a form of development that meets the needs of the present (current generations) without compromising the ability of future generations to meet their own needs.
Authors:
- Adam SMITH (1723-1790)
- Karl MARX (1818-1883)
- Dr. Gro BRUNDTLAND (1939-)
## Supplement: STOCK and FLOW
### Definition
- A `stock` is a quantity (or quantities) available at a given point in time
- A `flow` is a movement of a quantity over a period of time
### Example
For a single firm:
- technical capital: **S** (stock)
- investment: **F** (flow)
### Diagram
*(Original hand-drawn diagram: a flow of investment pours into a reservoir representing the stock of capital, which is then drawn down through use.)*
# Notebook version of NSGA-II constrained, without scoop
```
%matplotlib inline
#!/usr/bin/env python
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
import array
import random
import json
import time
import numpy
from math import sqrt, cos, atan
#from scoop import futures
from deap import algorithms
#from deap import base
from deap import benchmarks
from deap import creator
from deap import base, tools
from xopt import fitness_with_constraints # Chris' custom routines
from deap.benchmarks.tools import diversity, convergence, hypervolume
creator.create("FitnessMin", fitness_with_constraints.FitnessWithConstraints, weights=(-1.0, -1.0, 1.0, 1.0))
creator.create("Individual", array.array, typecode='d', fitness=creator.FitnessMin)
toolbox = base.Toolbox()
def uniform(low, up, size=None):
try:
return [random.uniform(a, b) for a, b in zip(low, up)]
except TypeError:
return [random.uniform(a, b) for a, b in zip([low] * size, [up] * size)]
NDIM = 2
N_CONSTRAINTS = 2
#BOUND_LOW, BOUND_UP = [0.1, 0.0] , [1.0, 1.0]
def CONSTR(individual):
#time.sleep(.01)
x1=individual[0]
x2=individual[1]
objectives = (x1, (1.0+x2)/x1)
constraints = (x2+9*x1-6.0, -x2+9*x1-1.0)
return (objectives, constraints)
BOUND_LOW, BOUND_UP = [0.0, 0.0], [3.14159, 3.14159]
def TNK(individual):
x1=individual[0]
x2=individual[1]
objectives = (x1, x2)
constraints = (x1**2+x2**2-1.0 - 0.1*cos(16*atan(x1/x2)), 0.5-(x1-0.5)**2-(x2-0.5)**2 )
return (objectives, constraints, (x1, x2))
#BOUND_LOW, BOUND_UP = [-20.0, -20.0], [20.0, 20.0]
def SRN(individual):
x1=individual[0]
x2=individual[1]
objectives = ( (x1-2.0)**2 + (x2-1.0)**2+2.0, 9*x1-(x2-1.0)**2 )
constraints = (225.0-x1**2-x2**2, -10.0 -x1 - 3*x2 )
return (objectives, constraints)
toolbox.register("attr_float", uniform, BOUND_LOW, BOUND_UP, NDIM)
toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.attr_float)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
# scoop map function
#toolbox.register('map', futures.map)
toolbox.register('map', map)
#toolbox.register("evaluate", CONSTR)
toolbox.register("evaluate", TNK)
#toolbox.register("evaluate", SRN)
toolbox.register("mate", tools.cxSimulatedBinaryBounded, low=BOUND_LOW, up=BOUND_UP, eta=20.0)
toolbox.register("mutate", tools.mutPolynomialBounded, low=BOUND_LOW, up=BOUND_UP, eta=20.0, indpb=1.0/NDIM)
toolbox.register("select", tools.selNSGA2)
def main(seed=None):
random.seed(seed)
NGEN = 50
MU = 100
CXPB = 0.9
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("avg", numpy.mean, axis=0)
stats.register("std", numpy.std, axis=0)
stats.register("min", numpy.min, axis=0)
stats.register("max", numpy.max, axis=0)
logbook = tools.Logbook()
logbook.header = "gen", "evals", "std", "min", "avg", "max"
pop = toolbox.population(n=MU)
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in pop if not ind.fitness.valid]
evaluate_result = toolbox.map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, evaluate_result):
ind.fitness.values = fit[0]
ind.fitness.cvalues = fit[1]
ind.fitness.n_constraints = len(fit[1])
# This is just to assign the crowding distance to the individuals
# no actual selection is done
pop = toolbox.select(pop, len(pop))
record = stats.compile(pop)
logbook.record(gen=0, evals=len(invalid_ind), **record)
print(logbook.stream)
# Begin the generational process
for gen in range(1, NGEN):
# Vary the population
offspring = tools.selTournamentDCD(pop, len(pop))
offspring = [toolbox.clone(ind) for ind in offspring]
for ind1, ind2 in zip(offspring[::2], offspring[1::2]):
if random.random() <= CXPB:
toolbox.mate(ind1, ind2)
toolbox.mutate(ind1)
toolbox.mutate(ind2)
del ind1.fitness.values, ind2.fitness.values
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in offspring if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit[0]
ind.fitness.cvalues = fit[1]
ind.fitness.n_constraints = len(fit[1])
# Allow for additional info to be saved (for example, a dictionary of properties)
if len(fit) > 2:
ind.fitness.info = fit[2]
# Select the next generation population
pop = toolbox.select(pop + offspring, MU)
record = stats.compile(pop)
logbook.record(gen=gen, evals=len(invalid_ind), **record)
print(logbook.stream, hypervolume(pop, [1.0,1.0]))
return pop, logbook
#if __name__ == "__main__":
# #optimal_front = json.load(open("pareto_front/zdt4_front.json"))
# # Use 500 of the 1000 points in the json file
# #optimal_front = sorted(optimal_front[i] for i in range(0, len(optimal_front), 2))
pop, stats = main()
pop.sort(key=lambda x: x.fitness.values)
print(stats)
#print("Convergence: ", convergence(pop, optimal_front))
#print("Diversity: ", diversity(pop, optimal_front[0], optimal_front[-1]))
import matplotlib.pyplot as plt
import numpy
front = numpy.array([ind.fitness.values for ind in pop])
#optimal_front = numpy.array(optimal_front)
#plt.scatter(optimal_front[:,0], optimal_front[:,1], c="r")
plt.scatter(front[:,0], front[:,1], c="b")
plt.axis("tight")
plt.show()
pop[0]
[float(x) for x in pop[0]]
pop[0].fitness.info
```
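The selection machinery above relies on DEAP's `selNSGA2` with the constraint-aware fitness from `xopt`. As a minimal sketch, independent of DEAP, here is the constraint-dominance rule commonly used in constrained NSGA-II (Deb's feasibility rule): a feasible point beats an infeasible one, smaller total violation beats larger, and between two feasible points ordinary Pareto dominance decides. The convention assumed here (a constraint is satisfied when `g(x) >= 0`, objectives minimized) matches the `CONSTR`/`TNK`/`SRN` definitions above, but this is not `xopt`'s actual implementation.

```python
# Constraint-dominance ("feasibility rule") sketch for constrained NSGA-II.
# Assumed convention: constraints satisfied when g >= 0; objectives minimized.

def violation(constraints):
    # Total magnitude of violated constraints (g < 0 means violated).
    return sum(-g for g in constraints if g < 0)

def pareto_dominates(f_a, f_b):
    # a dominates b: no worse in every objective, strictly better in at least one.
    return (all(x <= y for x, y in zip(f_a, f_b))
            and any(x < y for x, y in zip(f_a, f_b)))

def constrained_dominates(f_a, g_a, f_b, g_b):
    va, vb = violation(g_a), violation(g_b)
    if va == 0 and vb == 0:      # both feasible: ordinary Pareto dominance
        return pareto_dominates(f_a, f_b)
    if va == 0 or vb == 0:       # exactly one feasible: it wins
        return va == 0
    return va < vb               # both infeasible: smaller violation wins

# A feasible point dominates an infeasible one regardless of objectives:
print(constrained_dominates((2.0, 2.0), (0.1, 0.1), (0.5, 0.5), (-0.2, 0.3)))  # True
```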
# Hypervolume
```
from deap.benchmarks.tools import diversity, convergence, hypervolume
print("Final population hypervolume is %f" % hypervolume(pop, [1.0,1.0]))
```
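For intuition, the 2-D hypervolume with respect to a reference point can be computed by a simple sweep: sort the non-dominated front by the first objective and sum the rectangles each point carves out below the reference point. This is a minimal sketch for a minimization problem, not DEAP's implementation.

```python
# 2-D hypervolume sketch for a minimization front and reference point `ref`.
# Assumes `front` contains only mutually non-dominated points.

def hypervolume_2d(front, ref):
    pts = sorted(front)                        # ascending in f1 (so descending in f2)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # rectangle added by this point
        prev_f2 = f2
    return hv

# Example with reference point (1, 1), as used above:
print(hypervolume_2d([(0.2, 0.8), (0.5, 0.4), (0.8, 0.1)], (1.0, 1.0)))  # ≈ 0.42
```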
# Finding the Ingredients of Other Worlds
Didier Queloz and Michel Mayor found the first exoplanet orbiting a Sun-like star, which earned them the [2019 Nobel Prize in Physics](https://www.nobelprize.org/prizes/physics/2019/summary/). Since then, the number of known planets has grown exponentially. Astronomers now go beyond merely discovering planets outside the Solar System: the challenge is to learn about their atmospheres. In this "simulator" we will obtain spectra of exoplanet systems to understand what their atmospheres are made of.
___
# Table of Contents
* [How to Use this Guide](#How-to-Use-this-Guide)
* [Pre-Activity Setup](#Pre-Activity-Setup)
* [Activity 1: Introduction - Planet Light Curves](#Activity-1:-Introduction---Planet-Light-Curves)
* [Activity 2: Planet Radius](#Activity-2:-Planet-Radius)
* [Activity 3: A Planetary Spectrum](#Activity-3:-A-Planetary-Spectrum)
* [Activity 4: Example Planet Atmospheres](#Activity-4:-Example-Planet-Atmospheres)
* [Activity 5: Mystery Planet Atmospheres](#Activity-5:-Mystery-Planet-Atmospheres)
* [Activity 6: Conclusions](#Activity-6:-Conclusions)
___
# How to Use this Guide
The web page you are looking at is actually an application called a Jupyter Notebook, much like the apps on your phone. This application is made up of cells.
An *input* cell looks like a light gray box with `[ ]` to its left. Each input cell contains code: instructions that make the computer do something.
To activate or select a cell, click anywhere inside it.
<div class='alert alert-info'>
<font size='3'><b>Select the cell below and read its contents.</b></font>
</div>
```
# Text following a "#" is a comment.
# Comments do not affect your code in any way.
# Always read the comments at the top of each cell you interact with.
# Comments are used to describe what the cell's code actually does.
```
To run a selected cell, click the small play button or press `[Shift + Enter]` on your keyboard.
<div class='alert alert-info'>
<font size='3'><b>Select the cell below and read its contents. Then run the cell.</b></font>
<br> If a warning appears, just click <em>"Run Anyway"</em>; this code is safe 😉
<br> Also, if you want to save your progress, click the <em>"Copy to Drive"</em> button at the top.
</div>
```
# Text NOT preceded by a "#" is treated as code.
# Lines of code are instructions given to your computer.
# The line of code below is a "print" instruction, which literally prints the text between the quotes.
print("Congratulations! You have successfully run your first cell!")
```
Running a cell creates output directly below it. Output can be text, a plot, an interactive slider, or even nothing at all! Once a cell has run, a number in brackets, e.g. [1], appears to its left.
<div class='alert alert-info'>
<font size='3'><b>Open every section of this notebook by selecting the "View" menu and "Expand sections"</b></font>
<br>
</div>
You can learn more about how Jupyter Notebooks work at https://try.jupyter.org/
___
# Pre-Activity Setup
For any of the activities to work properly, you must import the libraries needed by the code in this guide. These should already have been loaded when you ran all cells.
```
# The following steps load the libraries needed to run the code in this guide.
from httpimport import remote_repo
repoURL = 'https://raw.githubusercontent.com/astro-datalab/notebooks-latest/master/06_EPO/e-TeenAstronomyCafe/'
with remote_repo(['lightcurve_sliderES'], repoURL+'09_Exoplanet_Spectra') :
    import lightcurve_sliderES
print("Libraries imported successfully.")
lightcurve_sliderES.initial_imports()
print("Files imported successfully.")
```
<div class='alert alert-info'>
<font size='3'><b>Set the slider below to 5.0.</b></font>
</div>
```
lightcurve_sliderES.practice_slider()
```
<div class='alert alert-info'>
<font size='3'><b>Hover over the green text below.</b></font>
</div>
**Finally**, there is some <span title="Terminology is specialized language used by people in a specific field of study, usually as a shortcut"><font color='green'>terminology</font></span> used in this guide. You can hover over such text for more information.
<div class='alert alert-info'>
<font size='3'><b>At this point, make sure you have run all cells and expanded all sections following the instructions above.</b></font>
</div>
___
# Activity 1: Introduction - Planet Light Curves
Let's begin with an <span title="This is a plot showing how the brightness of a star + planet system changes with time as the planet passes in front of the star."><font color='green'>exoplanet transit light curve</font></span>. This is a plot showing how the brightness of a star + planet system changes over time as the planet passes in front of the star. The **x** axis is time in hours; the **y** axis is brightness in percent. Time is shown relative to <span title="This is the moment when the planet and star line up"><font color='green'>mid-transit</font></span>, which is when the planet and star line up.
<div class='alert alert-info'>
<font size='3'><b>Drag the slider to change the time. <br>Watch what happens to the brightness (light curve) and to the planet crossing the star (star animation)</b></font>
</div>
```
lightcurve_sliderES.lightcurve_slider(free_radius=False)
```
<font size='4' color='#0076b6'>
<b>Question 1: When does the brightness change? Why do you think it is 100% at the beginning and at the end?</b>
</font>
___
# Activity 2: Planet Radius
The next plot is another <span title="This is a plot showing how the brightness of a star + planet system changes with time as the planet passes in front of the star."><font color='green'>exoplanet transit light curve</font></span>. It should look familiar from above, with the same axes and shape. Now we have added a new variable: the planet's radius. Here we give the planet's radius in <span title="The Earth's radius is just over 6,000 kilometers. You could fit about 11 Earths across Jupiter and about 109 Earths across the Sun."><font color='green'>Earth radii</font></span>, taking the Earth's radius to be 6,371 km.
<div class='alert alert-info'>
<font size='3'><b>
* Drag the Radius slider to see how it affects the light curve and the view of the star and planet.
* Drag the Time slider to a different position to see how it affects the geometry there. In reality we cannot see the black circle, only the light curve.
</b></font>
</div>
```
lightcurve_sliderES.lightcurve_slider()
```
<font size='4' color='#0076b6'>
<b>
Question 2: Does increasing the planet's radius make the dip in the light curve deeper or shallower?<br>
</b>
</font>
<br>
<font size='4' color='#0076b6'>
<b>
Question 3: How does the planet's radius affect the span of time during which the light curve falls below 100%?
</b>
</font>
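The link between radius and dip depth can be made quantitative: during transit the planet blocks a fraction of the stellar disk roughly equal to the ratio of the two areas, (Rp/Rs)². A minimal sketch, with illustrative radii (a Sun-like star of ~109 Earth radii is assumed):

```python
# Transit depth ~ fraction of the stellar disk blocked by the planet.
R_star = 109.0                        # assumed Sun-like star, in Earth radii

for R_planet in (1.0, 4.0, 11.0):     # roughly Earth-, Neptune-, Jupiter-sized
    depth = (R_planet / R_star) ** 2  # area ratio = dip in the light curve
    print(f"Rp = {R_planet:5.1f} Earth radii -> dip of {depth:.4%}")
```

Even a Jupiter-sized planet dims a Sun-like star by only about 1%, which is why the y axis above barely drops below 100%.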
___
# Activity 3: A Planetary Spectrum
#### Planet size at different colors
Now let's explore what happens if a planet has an atmosphere. Some colors of light (<span title="In a periodic wave, the wavelength is the physical distance over which the wave repeats."><font color='green'>wavelengths</font></span>) pass through the atmosphere, while others are absorbed or scattered. You can notice this on our own planet, Earth, during sunsets, when the atmosphere scatters blue light while red light passes through. From the perspective of space, the Earth looks slightly larger at blue <span title="In a periodic wave, the wavelength is the physical distance over which the wave repeats."><font color='green'>wavelengths</font></span> than at red ones.
Let's see what happens to the effective size of a planet at each color when you add an atmosphere to it.
The slider below controls the thickness of the atmosphere in <span title="The Earth's radius is just over 6,000 kilometers. You could fit about 11 Earths across Jupiter and about 109 Earths across the Sun."><font color='green'>Earth radii</font></span>. The **x** and **y** axes are effectively rulers for measuring the planet's size in <span title="The Earth's radius is just over 6,000 kilometers. You could fit about 11 Earths across Jupiter and about 109 Earths across the Sun."><font color='green'>Earth radii</font></span>.
<div class='alert alert-info'>
<font size='3'><b>Drag the slider to change the atmospheric thickness.</b></font>
</div>
```
lightcurve_sliderES.scattering_slider(plots=['planet'])
```
<font size='4' color='#0076b6'>
<b>
Question 4: At which color does the planet appear largest?
</b>
</font>
<br>
<br>
<font size='4' color='#0076b6'>
<b>
Question 5: How could you tell whether a planet has an atmosphere?
</b>
</font>
#### A spectrum plot
The way astronomers visualize the color image of a planet above is through a <span title="A spectrum is a plot of a planet's size versus wavelength."><font color='green'>transmission spectrum</font></span>. This is a plot of the planet's size in <span title="The Earth's radius is just over 6,000 kilometers. You could fit about 11 Earths across Jupiter and about 109 Earths across the Sun."><font color='green'>Earth radii</font></span> versus <span title="In a periodic wave, the wavelength is the physical distance over which the wave repeats."><font color='green'>wavelength</font></span>. Wavelength is measured in <span title="A micron is a unit of length equal to one millionth of a meter. A human hair is about 75 microns in diameter."><font color='green'>microns</font></span>. A micron is one millionth of a meter; the typical width of a human hair is 75 microns (Smith 2002, *Metrología Industrial*).
<div class='alert alert-info'>
<font size='3'><b>Drag the slider to change the atmospheric thickness.</b></font>
</div>
```
lightcurve_sliderES.scattering_slider(plots=['planet','spectrum'])
```
<font size='4' color='#0076b6'>
<b>
Question 6: How would you describe the spectrum when the slope of this line is zero?
</b>
</font>
<br>
<br>
<font size='4' color='#0076b6'>
<b>
Question 7: How would you describe the atmosphere when the slope of this line is zero?
</b>
</font>
#### A multi-color light curve
Now that we have built some understanding of <span title="This is a plot showing how the brightness of a star + planet system changes with time as the planet passes in front of the star."><font color='green'>exoplanet transit light curves</font></span> in [Activity 1](#Activity-1:-Introduction---Planet-Light-Curves) and [Activity 2](#Activity-2:-Planet-Radius), we will examine them at different <span title="In a periodic wave, the wavelength is the physical distance over which the wave repeats"><font color='green'>wavelengths</font></span>. The light curve and the planet's radius can differ from one wavelength to the next, because some light passes through the atmosphere while other light is absorbed. You will now examine the light curve at different colors, with a slider for the thickness of the atmosphere in <span title="The Earth's radius is just over 6,000 kilometers. You could fit about 11 Earths across Jupiter and about 109 Earths across the Sun."><font color='green'>Earth radii</font></span>.
<div class='alert alert-info'>
<font size='3'><b>Drag the slider to change the atmospheric thickness.</b></font>
</div>
```
lightcurve_sliderES.scattering_slider(plots=['planet','spectrum','lightcurve'])
```
<font size='4' color='#0076b6'>
<b>
Question 8: What kind of observations could you make to find out whether a planet has an atmosphere?
</b>
</font>
___
# Activity 4: Example Planet Atmospheres
Now that we have a feel for how <span title="A spectrum is a plot of a planet's size versus wavelength."><font color='green'>transmission spectra</font></span> work, let's consider different types of model. The atmospheric sizes have been exaggerated relative to reality to make them easier to see.
#### A water-vapor atmosphere
The model atmosphere below contains water vapor. Water molecules vibrate and rotate at some <span title="In a periodic wave, the wavelength is the physical distance over which the wave repeats"><font color='green'>wavelengths</font></span> better than at others, so the planet looks larger at those wavelengths, near 2.6 <span title="A micron is a unit of length equal to one millionth of a meter. A human hair is about 75 microns in diameter."><font color='green'>microns</font></span>.
<div class='alert alert-info'>
<font size='3'><b>Inspect the spectrum below.</b></font>
</div>
```
lightcurve_sliderES.example_spectra(atmospheres=['H2O'])
```
#### A methane atmosphere
The model atmosphere below contains methane. Like water, methane molecules vibrate and rotate better at some <span title="In a periodic wave, the wavelength is the physical distance over which the wave repeats"><font color='green'>wavelengths</font></span> than at others. However, methane has a different arrangement of atoms, so the planet looks larger near 3.4 <span title="A micron is a unit of length equal to one millionth of a meter. A human hair is about 75 microns in diameter."><font color='green'>microns</font></span>.
<div class='alert alert-info'>
<font size='3'><b>Inspect the spectrum below.</b></font>
</div>
```
lightcurve_sliderES.example_spectra(atmospheres=['CH4'])
```
#### A carbon-dioxide atmosphere
Carbon dioxide is yet another arrangement of atoms, with two oxygen atoms on opposite sides of the carbon. The molecule's symmetry means there are only a few ways to make carbon dioxide vibrate. This planet looks larger at 2.8 <span title="A micron is a unit of length equal to one millionth of a meter. A human hair is about 75 microns in diameter."><font color='green'>microns</font></span> and 4.4 <span title="A micron is a unit of length equal to one millionth of a meter. A human hair is about 75 microns in diameter."><font color='green'>microns</font></span>, but smaller at most other <span title="In a periodic wave, the wavelength is the physical distance over which the wave repeats"><font color='green'>wavelengths</font></span>.
<div class='alert alert-info'>
<font size='3'><b>Inspect the spectrum below.</b></font>
</div>
```
lightcurve_sliderES.example_spectra(atmospheres=['CO2'])
```
#### No atmosphere
If a planet has no atmosphere, all <span title="In a periodic wave, the wavelength is the physical distance over which the wave repeats"><font color='green'>wavelengths</font></span> reach the planet's rocky limb. An airless planet therefore looks the same size at all wavelengths.
<div class='alert alert-info'>
<font size='3'><b>Inspect the spectrum below.</b></font>
</div>
```
lightcurve_sliderES.example_spectra(atmospheres=['No Atmosphere'])
```
<font size='4' color='#0076b6'>
<b>
Question 9: There is a visible solid surface here. At what level (in Earth radii) is the surface? Where do you think it was in the previous atmospheres?
</b>
</font>
___
# Activity 5: Mystery Planet Atmospheres
Now you are playing the role of an astronomer. You measure a planet's light curve at different <span title="In a periodic wave, the wavelength is the physical distance over which the wave repeats"><font color='green'>wavelengths</font></span>, shown below as a scatter of points in each color. You will need to work out the planet's radius (in <span title="The Earth's radius is just over 6,000 kilometers. You could fit about 11 Earths across Jupiter and about 109 Earths across the Sun."><font color='green'>Earth radii</font></span>) at each <span title="In a periodic wave, the wavelength is the physical distance over which the wave repeats"><font color='green'>wavelength</font></span>.
#### Mystery Planet 1
<div class='alert alert-info'>
<font size='3'><b>Drag the sliders to make the lines match the points of each color, forming best-fit lines. Make sure to scroll far enough to see both plots.
</b></font>
</div>
```
lightcurve_sliderES.transmission_spec_slider(mysteryNum=1)
```
You have now found the planet's best-fit <span title="A spectrum is a plot of a planet's size versus wavelength."><font color='green'>transmission spectrum</font></span>.
```
lightcurve_sliderES.example_spectra()
```
<font size='4' color='#0076b6'>
<b>
Question 10: Compare your transmission spectrum with the models. What kind of atmosphere did you find?
</b>
</font>
#### Mystery Planet 2
<div class='alert alert-info'>
<font size='3'><b>Drag the sliders to make the lines match the points of each color, forming best-fit lines. Make sure to scroll far enough to see both plots.
</b></font>
</div>
```
lightcurve_sliderES.transmission_spec_slider(mysteryNum=2)
```
You have now found the planet's best-fit <span title="A spectrum is a plot of a planet's size versus wavelength."><font color='green'>transmission spectrum</font></span>.
```
lightcurve_sliderES.example_spectra()
```
<font size='4' color='#0076b6'>
<b>
Question 11: Compare your transmission spectrum with the models. What kind of atmosphere did you find?
</b>
</font>
#### Mystery Planet 3
<div class='alert alert-info'>
<font size='3'><b>Drag the sliders to make the lines match the points of each color, forming best-fit lines. Make sure to scroll far enough to see both plots.
</b></font>
</div>
```
lightcurve_sliderES.transmission_spec_slider(mysteryNum=3)
```
You have now found the planet's best-fit <span title="A spectrum is a plot of a planet's size versus wavelength."><font color='green'>transmission spectrum</font></span>.
```
lightcurve_sliderES.example_spectra()
```
<font size='4' color='#0076b6'>
<b>
Question 12: Compare your transmission spectrum with the models. What kind of atmosphere did you find?
</b>
</font>
#### Mystery Planet 4
<div class='alert alert-info'>
<font size='3'><b>Drag the sliders to make the lines match the points of each color, forming best-fit lines. Make sure to scroll far enough to see both plots.
</b></font>
</div>
```
lightcurve_sliderES.transmission_spec_slider(mysteryNum=4)
```
You have now found the planet's best-fit <span title="A spectrum is a plot of a planet's size versus wavelength."><font color='green'>transmission spectrum</font></span>.
```
lightcurve_sliderES.example_spectra()
```
<font size='4' color='#0076b6'>
<b>
Question 13: Compare your transmission spectrum with the models. What kind of atmosphere did you find?
</b>
</font>
___
# Activity 6: Conclusions
Congratulations! You are now working out what planet atmospheres are made of, or whether a planet lacks an atmosphere altogether. In real atmospheres we find a mixture of molecules that can tell us about the planets' chemistry and, one day, may even help us find life elsewhere in the Universe.
Astronomers are exploring the atmospheres of real planets with telescopes operating today, and are awaiting new telescopes such as the James Webb Space Telescope. You can read about the Webb telescope and see pictures of it at [jwst.nasa.gov/](https://jwst.nasa.gov/content/features/index.html#educationalFeatures).
___
##### Jupyter Notebook by [Everett Schlawin](http://mips.as.arizona.edu/~schlawin/) and the [NOIR Lab's Teen Astronomy Cafe Team](http://www.teenastronomycafe.org)
#### Version 1.0
The source code for this notebook is available at <a href="https://github.com/eas342/interactive_lc">https://github.com/eas342/interactive_lc</a>.
## Let's start!
Manipulating values in any language is done through the use of variables and operations.
### Variables
A variable is a holder for data and allows the programmer to pass around references to the data.
Variables are generally said to be:
* **mutable** - the object a variable refers to can be changed after creation
* **immutable** - the object cannot be changed after creation (though nothing stops you rebinding the name to a new object)
Variables have data types; Python is a little unusual with respect to its type system: it is dynamically typed.
This means:
* You don't have to declare what type a variable is - the interpreter works it out as you *assign* the variable a value
When you want to find out what type a variable is, you can use the `type` function, which returns the Python type of the variable's current value.
What follows is a set of assignments covering Python's fundamental types.
```
# first off - an integer (int)
a = 1
type(a)
# next a float (float)
a = 1.1
type(a)
# next a string (str)
a = "1"
type(a)
# next a boolean (bool)
a = True
type(a)
# next a complex number
a = 1.0 - 1.0j
print(type(a))
print(a.real, a.imag)
```
In each of these cases we have not declared up front what `type` of variable `a` is - we assign the value to the variable and python has worked it out. The type inference is very powerful, but you must be careful in how you use it. As the types are not defined in advance, there's nothing to stop or warn you about an incompatible assignment.
You can use the `isinstance` function to check:
```
a = "1"
b = 1
print("a is a string: ", isinstance(a, (str,)))
print("a is an integer: ", isinstance(a, (int,)))
print("b is a string: ", isinstance(b, (str,)))
print("b is an integer: ", isinstance(b, (int,)))
```
The type of a variable matters when you use it. In these examples we apply the augmented-assignment operator `+=` to different types of variable:
```
# note that += 1 is a shorthand for a = a + 1 (+= is an operator)
a = 1
a += 1
a
# a bit strange
a = 1.1
a += 1
a
# very strange
a = "1"
a += 1
a
# are you out of your mind???
a = True
a += 1
a
```
Note that Python has just worked it out where it makes sense, and if it doesn't make sense then it throws an error (which you can anticipate and handle, but more on that later)
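As a small preview of error handling (covered properly later), the failing string case above can be anticipated with a `try`/`except` block:

```python
a = "1"
try:
    a += 1  # adding an int to a str raises a TypeError
except TypeError as err:
    print("Caught:", err)

print(a)  # still "1" -- the failed += left the variable unchanged
```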
## Type casting
Python will try to do the right thing when you attempt to use a variable as a different type. Changing the type of a variable is called *casting*.
```
x = 1.5 # a float
print(x, type(x))
x = int(x) # cast the float to an int - note int() truncates toward zero (for positive values this is the floor)
print(x, type(x))
# Where it makes sense, you can cast from a string
x = "1"
c = int(x)
print(c, type(c))
x = "1.1"
c = float(x)
print(c, type(c))
x = "1"
c = bool(x)
print(c, type(c))
```
What do you think the cast to `bool` value will be for the following cases:
* "0"
* ""
* "false"
* "true"
* "banana"
Make your prediction in each of the following cells:
```
prediction = None # replace None with True or False
x = "0"
c = bool(x)
assert prediction == c, "You guessed wrong, try again"
prediction = None # replace None with True or False
x = ""
c = bool(x)
assert prediction == c, "You guessed wrong, try again"
prediction = None # replace None with True or False
x = "false"
c = bool(x)
assert prediction == c, "You guessed wrong, try again"
prediction = None # replace None with True or False
x = "true"
c = bool(x)
assert prediction == c, "You guessed wrong, try again"
prediction = None # replace None with True or False
x = "banana"
c = bool(x)
assert prediction == c, "You guessed wrong, try again"
```
### Strings
Just a couple of comments on strings as these will form an important part of your usage of Python. By default in Python 3 strings are immutable sequences of Unicode code points.
Strings can be created in a few ways:
* Single quotes: 'allows embedded "double" quotes'
* Double quotes: "allows embedded 'single' quotes".
* Triple quoted: '''Three single quotes''', """Three double quotes"""
Triple quoted strings can include newlines.
```
# a plain quoted string cannot span lines -- the next two lines
# would raise a SyntaxError if uncommented:
# some_string = "Some strings
# are longer than one line"
some_string = """Some strings
are longer than one line"""
```
To get the length of a string use the `len` function.
```
sample_str = "Far far away, behind the word mountains, far from the countries Vokalia and Consonantia, there live the blind texts. Separated they live in Bookmarksgrove right at the coast of the Semantics"
len(sample_str)
```
You can access specific characters (or ranges of characters) using indexes `[]`
```
# get the first character
print(sample_str[0])
# get the first 5 characters
print(sample_str[0:5])
# get the last 5 characters
print(sample_str[-5:])
# get the 6th through 10th characters
print(sample_str[5:10])
```
There are many really useful methods on strings; I wanted to highlight a couple of particularly helpful ones here.
```
# upper, lower and capitalize modify strings
some_string = "Here I am"
print(some_string.lower())
print(some_string.upper())
print(some_string.capitalize())
# strip removes whitespace characters
some_string = " Here I am "
print(some_string.strip())
# split splits a string on a delimiter (defaults to any whitespace) and returns a list (more on this later)
some_string = " Here I am "
print(some_string.split())
# pass the delimiter as an argument
delim = "10,120,30"
print(delim.split(","))
# replace replaces a substring with another
some_string = " Here I am "
print(some_string.replace("am", "was"))
# NOTE the replace operation returns a new string, it doesn't update the original string
was_string = some_string.replace("am", "was")
print(some_string)
print(was_string)
# format is used to substitute text, the {} is replaced by the corresponding argument from the format
print("The string '{}' after replace was '{}'".format(some_string, was_string))
```
The output format of the values in the `format` method can be controlled using [format strings](https://docs.python.org/3/library/string.html#formatstrings).
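As an aside (not shown above), Python 3.6+ also offers *f-strings*, which embed expressions directly inside the string literal and accept the same format specifiers as `format`:

```python
some_string = " Here I am "
was_string = some_string.replace("am", "was")

# the expressions inside {} are evaluated and substituted in place
print(f"The string '{some_string}' after replace was '{was_string}'")

# format specifiers follow a colon, exactly as with str.format
print(f"pi to three decimals: {3.14159:.3f}")  # pi to three decimals: 3.142
```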
This is just a highlight of some of the methods on strings; check out [String Methods](https://docs.python.org/3/library/stdtypes.html#string-methods) for more.
Note, there are also similar methods for the other types; the [Standard Types](https://docs.python.org/3/library/stdtypes.html) page will serve you well.
## Operators
### Arithmetic Operators
* `+` (addition)
* `-` (subtraction)
* `*` (multiplication)
* `/` (division)
* `//` (integer division)
* `**` (power)
* `%` (modulus)
```
print(1 + 2, 1.0 + 2.0)
print(2 - 1, 2.0 - 1.0)
print(3 * 4, 3.0 * 4.0)
print(3 / 4, 3.0 / 4.0)
print(3.0 // 4.0)
print(3**2, 3.0**2.0)
print( 5 % 2, 5.0 % 2)
```
### Logical Operators
* `not` (`!`)
* `and` (`&`)
* `or` (`|`)
* xor (`^`)
```
not True
(True and False, True & False)
(True or False, True | False)
# Exclusive or!
(True ^ False, True ^ True, False ^ False)
```
### Comparison Operators
* `==` equals
* `<` less than
* `>` greater than
* `<=` less than or equal to
* `>=` greater than or equal to
```
a = 1
b = 2
print("Equal:", a == b)
print("Less than:", a < b)
print("Greater than:", a > b)
```
Try using these operators with non-numeric variables
```
a = "apple"
b = "banana"
print("Equal:", a == b)
print("Less than:", a < b)
print("Greater than:", a > b)
```
Based on the output, what attributes of the strings do you think are being used for evaluation?
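For reference: strings are compared lexicographically, character by character, using each character's Unicode code point (which you can inspect with `ord`). A short demonstration:

```python
a = "apple"
b = "banana"

# the first differing pair of characters decides the comparison
print(ord(a[0]), ord(b[0]))  # 97 98
print(a < b)                 # True, because ord('a') < ord('b')

# a consequence: all uppercase letters sort before all lowercase letters
print("Zebra" < "apple")     # True, because ord('Z') is 90 and ord('a') is 97
```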
## Next
Now, we're going to move on to compound types. Click [here](./02_introduction_to_python_variables_compound_types.ipynb) to continue.
| github_jupyter |
```
from tensorflow import keras
from tensorflow.keras import *
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
from tensorflow.keras.regularizers import l2  # L2 regularization
import tensorflow as tf
import numpy as np
import pandas as pd
# 12-0.2
# 13-2.4
# 18-12.14
import pandas as pd
import numpy as np
normal = np.loadtxt(r'F:\张老师课题学习内容\code\数据集\试验数据(包括压力脉动和振动)\2013.9.12-未发生缠绕前\2013-9.12振动\2013-9-12振动-1250rmin-mat\1250rnormalviby.txt', delimiter=',')
chanrao = np.loadtxt(r'F:\张老师课题学习内容\code\数据集\试验数据(包括压力脉动和振动)\2013.9.17-发生缠绕后\振动\9-18上午振动1250rmin-mat\1250r_chanraoviby.txt', delimiter=',')
print(normal.shape,chanrao.shape,"***************************************************")
data_normal=normal[8:10]   # take two rows (indices 8 and 9)
data_chanrao=chanrao[8:10] # take two rows (indices 8 and 9)
print(data_normal.shape,data_chanrao.shape)
print(data_normal,"\r\n",data_chanrao,"***************************************************")
data_normal=data_normal.reshape(1,-1)
data_chanrao=data_chanrao.reshape(1,-1)
print(data_normal.shape,data_chanrao.shape)
print(data_normal,"\r\n",data_chanrao,"***************************************************")
# two pump signal classes: normal (healthy) and chanrao (entanglement fault)
data_normal=data_normal.reshape(-1, 512)  # (65536,) -> (128, 512)
data_chanrao=data_chanrao.reshape(-1,512)
print(data_normal.shape,data_chanrao.shape)
import numpy as np
def yuchuli(data, label):  # preprocess: split train/test roughly 4:1 (102:26)
    # shuffle the data order
    np.random.shuffle(data)
    train = data[0:102, :]
    test = data[102:128, :]
    label_train = np.array([label for i in range(0, 102)])
    label_test = np.array([label for i in range(0, 26)])
    return train, test, label_train, label_test
def stackkk(a, b, c, d, e, f, g, h):
    # stack the two classes: samples vertically, labels horizontally
    aa = np.vstack((a, e))
    bb = np.vstack((b, f))
    cc = np.hstack((c, g))
    dd = np.hstack((d, h))
    return aa, bb, cc, dd
x_tra0,x_tes0,y_tra0,y_tes0 = yuchuli(data_normal,0)
x_tra1,x_tes1,y_tra1,y_tes1 = yuchuli(data_chanrao,1)
tr1,te1,yr1,ye1=stackkk(x_tra0,x_tes0,y_tra0,y_tes0 ,x_tra1,x_tes1,y_tra1,y_tes1)
x_train=tr1
x_test=te1
y_train = yr1
y_test = ye1
# shuffle the data, reusing the RNG state so x and y stay aligned
state = np.random.get_state()
np.random.shuffle(x_train)
np.random.set_state(state)
np.random.shuffle(y_train)
state = np.random.get_state()
np.random.shuffle(x_test)
np.random.set_state(state)
np.random.shuffle(y_test)
# standardize the training and test sets
def ZscoreNormalization(x):
    """Z-score normalization"""
    x = (x - np.mean(x)) / np.std(x)
    return x
x_train=ZscoreNormalization(x_train)
x_test=ZscoreNormalization(x_test)
# print(x_test[0])
# reshape, adding an explicit channel axis to match the Conv2D Input layer below
x_train = x_train.reshape(-1, 512, 1, 1)
x_test = x_test.reshape(-1, 512, 1, 1)
print(x_train.shape,x_test.shape)
def to_one_hot(labels, dimension=2):
    # one-hot encode integer labels
    results = np.zeros((len(labels), dimension))
    for i, label in enumerate(labels):
        results[i, label] = 1
    return results
one_hot_train_labels = to_one_hot(y_train)
one_hot_test_labels = to_one_hot(y_test)
x = layers.Input(shape=[512,1,1])
# ordinary convolutional layer
conv1 = layers.Conv2D(filters=16, kernel_size=(2, 1), activation='relu', padding='valid', name='conv1')(x)
# pooling layer
POOL1 = MaxPooling2D((2, 1))(conv1)
# ordinary convolutional layer
conv2 = layers.Conv2D(filters=32, kernel_size=(2, 1), activation='relu', padding='valid', name='conv2')(POOL1)
# pooling layer
POOL2 = MaxPooling2D((2, 1))(conv2)
# dropout layer
Dropout = layers.Dropout(0.1)(POOL2)
Flatten = layers.Flatten()(Dropout)
# fully connected layers
Dense1 = layers.Dense(50, activation='relu')(Flatten)
Dense2 = layers.Dense(2, activation='softmax')(Dense1)
model = keras.Model(x, Dense2)
model.summary()
# define the loss and optimizer
model.compile(loss='categorical_crossentropy',
optimizer='adam',metrics=['accuracy'])
import time
time_begin = time.time()
history = model.fit(x_train,one_hot_train_labels,
validation_split=0.1,
epochs=50,batch_size=10,
shuffle=True)
time_end = time.time()
time = time_end - time_begin
print('time:', time)
import time
time_begin = time.time()
score = model.evaluate(x_test,one_hot_test_labels, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
time_end = time.time()
time = time_end - time_begin
print('time:', time)
# plot the accuracy and loss curves
import matplotlib.pyplot as plt
plt.plot(history.history['loss'],color='r')
plt.plot(history.history['val_loss'],color='g')
plt.plot(history.history['accuracy'],color='b')
plt.plot(history.history['val_accuracy'],color='k')
plt.title('model loss and acc')
plt.ylabel('Accuracy')
plt.xlabel('epoch')
plt.legend(['train_loss', 'test_loss','train_acc', 'test_acc'], loc='center right')
# plt.legend(['train_loss','train_acc'], loc='upper left')
#plt.savefig('1.png')
plt.show()
import matplotlib.pyplot as plt
plt.plot(history.history['loss'],color='r')
plt.plot(history.history['accuracy'],color='b')
plt.title('model loss and accuracy')
plt.ylabel('loss/accuracy')
plt.xlabel('epoch')
plt.legend(['train_loss', 'train_accuracy'], loc='center right')
plt.show()
```
| github_jupyter |
https://machinelearningmastery.com/multivariate-time-series-forecasting-lstms-keras/
# Multivariate Time Series Forecasting with LSTMs in Keras
## Dataset
This is a dataset that reports on the weather and the level of pollution each hour for five years at the US embassy in Beijing, China.
Beijing PM2.5 Data Set (rename to raw.csv)
https://raw.githubusercontent.com/jbrownlee/Datasets/master/pollution.csv
```
from datetime import datetime
from math import sqrt

from numpy import concatenate

from pandas import read_csv, DataFrame, concat

from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from sklearn.metrics import mean_squared_error

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM

from matplotlib import pyplot
# Load data
def parse(x):
    return datetime.strptime(x, '%Y %m %d %H')
dataset = read_csv('raw.csv', parse_dates=[['year', 'month', 'day', 'hour']], index_col=0, date_parser=parse)
dataset
dataset.drop('No', axis=1, inplace=True)
# Manually specify column names
dataset.columns = ['pollution', 'dew', 'temp', 'press', 'wnd_dir', 'wnd_spd', 'snow', 'rain']
dataset.index.name = 'date'
# Mark all N.A. values with 0
dataset['pollution'].fillna(0, inplace=True)
# Drop the first 24 hours
dataset = dataset[24:]
# Summarize first 5 rows
print(dataset.head(5))
# Save to file
dataset.to_csv('pollution.csv')
```
## Process the new dataset
```
# Load dataset
dataset = read_csv('pollution.csv', header=0, index_col=0)
values = dataset.values
# Specify columns to plot
groups = [0, 1, 2, 3, 5, 6, 7]
i = 1
# Plot each column
pyplot.figure()
for group in groups:
    pyplot.subplot(len(groups), 1, i)
    pyplot.plot(values[:, group])
    pyplot.title(dataset.columns[group], y=0.5, loc='right')
    i += 1
pyplot.show()
```
## Prepare data for the LSTM
```
# Convert series to supervised learning
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    n_vars = 1 if type(data) is list else data.shape[1]
    df = DataFrame(data)
    cols, names = list(), list()

    # Input sequence (t-n, ..., t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]

    # Forecast sequence
    for i in range(0, n_out):
        cols.append(df.shift(-i))
        if i == 0:
            names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
        else:
            names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]

    # Put it all together
    agg = concat(cols, axis=1)
    agg.columns = names

    # Drop rows with NaN values
    if dropnan:
        agg.dropna(inplace=True)

    return agg
# Load dataset
dataset = read_csv('pollution.csv', header=0, index_col=0)
values = dataset.values
# Integer encode direction
encoder = LabelEncoder()
values[:, 4] = encoder.fit_transform(values[:, 4])
# Ensure all data is float
values = values.astype('float32')
# Normalize features
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
# Specify number of lag hours
n_hours = 3
n_features = 8
# Frame as supervised learning
reframed = series_to_supervised(scaled, n_hours, 1)
reframed.shape
```
## Define and fit the model
```
# Split into train and test sets
values = reframed.values
n_train_hours = 365 * 24
train = values[:n_train_hours, :]
test = values[n_train_hours:, :]
# Split into input and outputs
n_obs = n_hours * n_features
train_X, train_y = train[:, :n_obs], train[:, -n_features]
test_X, test_y = test[:, :n_obs], test[:, -n_features]
# Reshape input to be 3D [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], n_hours, n_features))
test_X = test_X.reshape((test_X.shape[0], n_hours, n_features))
train_X.shape, train_y.shape, test_X.shape, test_y.shape
# Design network
model = Sequential()
model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(1))
model.compile(loss='mae', optimizer='adam')
# Fit network
history = model.fit(train_X, train_y,
                    epochs=50, batch_size=72,
                    validation_data=(test_X, test_y),
                    verbose=2, shuffle=False)
# Plot history
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()
```
## Evaluate model
```
# Make a prediction
yhat = model.predict(test_X)
test_X = test_X.reshape((test_X.shape[0], n_hours*n_features))
# Invert scaling for forecast
inv_yhat = concatenate((yhat, test_X[:, -7:]), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)
inv_yhat = inv_yhat[:, 0]
# Invert scaling for actual
test_y = test_y.reshape(len(test_y), 1)
inv_y = concatenate((test_y, test_X[:, -7:]), axis=1)
inv_y = scaler.inverse_transform(inv_y)
inv_y = inv_y[:, 0]
# Calculate RMSE
rmse = sqrt(mean_squared_error(inv_y, inv_yhat))
rmse
```
```
%matplotlib inline
```
# Constructing multiple views to classify singleview data
As demonstrated in "Asymmetric bagging and random subspace for support vector
machines-based relevance feedback in image retrieval" (Dacheng 2006), in high
dimensional data it can be useful to subsample the features and construct
multiple classifiers on each subsample whose individual predictions are
combined using majority vote. This is akin to bagging but concerns the
features rather than samples and is how random forests are ensembled
from individual decision trees. Here, we apply Linear Discriminant Analysis
(LDA) to a high dimensional image classification problem and demonstrate
how subsampling features can help when the sample size is relatively low.
A variety of possible subsample dimensions are considered, and for each the
number of classifiers (views) is chosen such that their product is equal to
the number of features in the singleview data.
Two subsampling methods are applied. The random subspace method simply selects
a random subset of the features. The random Gaussian projection method creates
new features by sampling random multivariate Gaussian vectors used to project
the original features. The latter method can potentially help in complicated
settings where combinations of features better capture informative relations.
It is clear that subsampling features in this setting leads to improved
out-of-sample accuracy, most likely as it reduces overfitting to the large
number of raw features. This is confirmed as the accuracy seems to peak
around when the number of features is equal to the number of samples, at which
point overfitting becomes possible.
```
# Author: Ronan Perry
# License: MIT
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, ShuffleSplit
from sklearn.datasets import fetch_olivetti_faces
from mvlearn.compose import RandomSubspaceMethod, RandomGaussianProjection, \
ViewClassifier
# Load the singleview Olivetti faces dataset from sklearn
X, y = fetch_olivetti_faces(return_X_y=True)
# The data has 4096 features. The following subspace dimensions are used
dims = [16, 64, 256, 1024]
# We are interested in the low sample size, high dimensionality setting
train_size = 0.2
rsm_scores = []
rgp_scores = []
# Initialize cross validation
splitter = ShuffleSplit(n_splits=5, train_size=train_size, random_state=0)
# Compute singleview score, using all dimensions
singleview_clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
singleview_scores = cross_val_score(singleview_clf, X, y, cv=splitter)
# For each dimension, we compute scores for a multiview classifier
for dim in dims:
n_views = int(X.shape[1] / dim)
rsm_clf = make_pipeline(
StandardScaler(),
RandomSubspaceMethod(n_views=n_views, subspace_dim=dim),
ViewClassifier(LinearDiscriminantAnalysis())
)
rsm_scores.append(cross_val_score(rsm_clf, X, y, cv=splitter))
rgp_clf = make_pipeline(
StandardScaler(),
RandomGaussianProjection(n_views=n_views, n_components=dim),
ViewClassifier(LinearDiscriminantAnalysis())
)
rgp_scores.append(cross_val_score(rgp_clf, X, y, cv=splitter))
# The results are plotted
fig, ax = plt.subplots()
ax.axvline(X.shape[0] * train_size, ls=':', c='grey',
label='Number of training samples')
ax.axhline(np.mean(singleview_scores), ls='--', c='grey',
label='LDA singleview score')
ax.errorbar(
dims, np.mean(rsm_scores, axis=1),
yerr=np.std(rsm_scores, axis=1), label='LDA o Random Subspace')
ax.errorbar(
dims, np.mean(rgp_scores, axis=1),
yerr=np.std(rgp_scores, axis=1), label='LDA o Random Gaussian Projection')
ax.set_xlabel('Number of subsampled dimensions')
ax.set_ylabel('Score')
plt.title('Classification accuracy using constructed multiview data')
plt.legend()
plt.show()
```
| github_jupyter |
```
# Dependencies
import numpy as np
import pandas as pd
import datetime as dt
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
```
## D1: Determine the Summary Statistics for June
```
# 1. Import the sqlalchemy extract function.
from sqlalchemy import extract
# 2. Write a query that filters the Measurement table to retrieve the temperatures for the month of June.
results = session.query(Measurement.date, Measurement.tobs).filter(extract('month', Measurement.date) == 6)
# 3. Convert the June temperatures to a list.
june = session.query(Measurement.date, Measurement.tobs).filter(extract('month', Measurement.date) == 6).all()
print(june)
# 4. Create a DataFrame from the list of temperatures for the month of June.
june_df = pd.DataFrame(june, columns = ['date','June Temperature'])
june_df.set_index(june_df['date'], inplace=True)
june_df = june_df.sort_index()
print(june_df.to_string(index=False))
# 5. Calculate and print out the summary statistics for the June temperature DataFrame.
june_df.describe()
```
## D2: Determine the Summary Statistics for December
```
# 6. Write a query that filters the Measurement table to retrieve the temperatures for the month of December.
dec_results = session.query(Measurement.date, Measurement.tobs).filter(extract('month', Measurement.date) == 12)
# 7. Convert the December temperatures to a list.
december = session.query(Measurement.date, Measurement.tobs).filter(extract('month', Measurement.date) == 12).all()
# 8. Create a DataFrame from the list of temperatures for the month of December.
dec_df = pd.DataFrame(december, columns = ['date','December Temperature'])
dec_df.set_index(dec_df['date'], inplace=True)
dec_df = dec_df.sort_index()
print(dec_df.to_string(index=False))
# 9. Calculate and print out the summary statistics for the December temperature DataFrame.
dec_df.describe()
# D3:Statistical analysis
# Plot the Data
plt.boxplot([june_df['June Temperature'], dec_df['December Temperature']])
plt.xticks([1, 2], ['June', 'December'])
# Additional queries
# Precipitation
prcp_jun = session.query(Measurement.date, Measurement.prcp).filter(extract('month', Measurement.date) == 6).all()
prcp_dec = session.query(Measurement.date, Measurement.prcp).filter(extract('month', Measurement.date) == 12).all()
prcp_jun_df = pd.DataFrame(prcp_jun, columns=['date', 'June Precipitation'])
prcp_jun_df.set_index(prcp_jun_df['date'], inplace=True)
prcp_jun_df = prcp_jun_df.sort_index()
prcp_dec_df = pd.DataFrame(prcp_dec, columns=['date', 'December Precipitation'])
prcp_dec_df.set_index(prcp_dec_df['date'], inplace=True)
prcp_dec_df = prcp_dec_df.sort_index()
prcp_jun_df.describe()
prcp_dec_df.describe()
prcp_jun_df.boxplot()
prcp_dec_df.boxplot()
ax1 = prcp_jun_df.plot()
ax2 = prcp_dec_df.plot()
```
| github_jupyter |
# RPA with Python
- What is RPA?
- How is this different from Selenium/web scraping and from what we have seen so far?
- Pros
- Cons
- Library used:
    - pip install pyautogui
    - https://pyautogui.readthedocs.io/en/latest/
    - For the image commands you may need to install pip install pillow
    - To handle special characters we will use a trick with pyperclip
- Link with a summary of the main commands: https://pyautogui.readthedocs.io/en/latest/quickstart.html
## Challenge
- We are going to automate extracting information from a system and sending a report by e-mail
- In our case, so everyone can build the same program, our "system" will be Gmail, but the same process can be done with any program on your computer and any system
- Step 1: Log into the system (open Gmail)
- Step 2: Open the specific tab of the system that holds our report (the Contacts tab)
- Step 3: Export the report (export the contacts)
- Step 4: Take the exported report, clean it up, and extract the information we want
- Step 5: Fill in/update information in the system (in our case, compose and send an e-mail)
```
import pyautogui
import time

# pyautogui.write() -> types text
# pyautogui.click -> clicks
# pyautogui.locateOnScreen -> finds an image on your screen
# pyautogui.hotkey -> uses keyboard shortcuts (key combinations)
# pyautogui.press -> presses a single key
# print(pyautogui.KEYBOARD_KEYS)

pyautogui.alert('The script is about to start. Do NOT touch anything while it is running. I will let you know when it finishes')
pyautogui.PAUSE = 1
# press the Windows key
pyautogui.press('win')
# type chrome
pyautogui.write("chrome")
# press enter
pyautogui.press('enter')
# open Gmail
pyautogui.write('gmail')
pyautogui.press('enter')
# wait for Google to load
while not pyautogui.locateOnScreen('busca_google.png'):
    time.sleep(1)
# locate the image -> returns 4 values: x position, y position, width and height
x, y, largura, altura = pyautogui.locateOnScreen('busca_google.png')
# click the center of the image
pyautogui.click(x + largura/2, y + altura/2)
# wait for Gmail
while not pyautogui.locateOnScreen('logo_gmail.png'):
    time.sleep(1)
# open Contacts
x, y, largura, altura = pyautogui.locateOnScreen('pontinhos_menu.png')
pyautogui.click(x + largura/2, y + altura/2)
time.sleep(1)
x, y, largura, altura = pyautogui.locateOnScreen('contatos.png')
pyautogui.click(x + largura/2, y + altura/2)
# wait for the Contacts page
while not pyautogui.locateOnScreen('tela_contatos.png'):
    time.sleep(1)
# export the contacts
x, y, largura, altura = pyautogui.locateOnScreen('exportar.png')
pyautogui.click(x + largura/2, y + altura/2)
x, y, largura, altura = pyautogui.locateOnScreen('confirmar_exportar.png')
pyautogui.click(x + largura/2, y + altura/2)
```
### Now let's write the e-mail
```
import pandas as pd
import pyperclip

time.sleep(2)
df = pd.read_csv(r'C://Users/joaop/Downloads/contacts.csv')
df = df.dropna(axis=1)
display(df)
pyautogui.hotkey('ctrl', 'pgup')
for email in df['E-mail 1 - Value']:
    # click the Compose button
    time.sleep(1)
    x, y, largura, altura = pyautogui.locateOnScreen('escrever.png')
    pyautogui.click(x + largura/2, y + altura/2)
    time.sleep(1)
    # type the recipient address
    pyautogui.write(email)
    # enter
    pyautogui.press('enter')
    # tab to the subject field
    pyautogui.press('tab')
    pyautogui.write('Lira the Deadbeat')
    # tab to the e-mail body
    pyautogui.press('tab')
    texto = """
    Hey João Lira,

    Stop stiffing Hashtag and pay your installments already, seriously.

    Cheers"""
    # paste via the clipboard so special characters come out correctly
    pyperclip.copy(texto)
    pyautogui.hotkey('ctrl', 'v')
    pyautogui.hotkey('ctrl', 'enter')
pyautogui.alert('The script has finished, you can take your computer back')
```
### What if the tab is already open? How do I bring it to the front?
```
# cycle backwards through open windows until Paint is visible
while not pyautogui.locateOnScreen('paint.png'):
    pyautogui.hotkey('alt', 'shift', 'tab')
print("Found Paint")
```
### How to find the mouse position of the spot I want
```
#pyautogui.click(2470, 38)
print(pyautogui.position())
```
| github_jupyter |
# 08 Errors
(See also *Computational Physics* (Landau, Páez, Bordeianu), Chapter 3)
These slides include material from *Computational Physics. eTextBook Python 3rd Edition.* Copyright © 2012 Landau, Rubin, Páez. Used under the Creative-Commons Attribution-NonCommerical-ShareAlike 3.0 Unported License.
## Stupidity or Incompetence
(e.g., [PEBCAK](https://en.wiktionary.org/wiki/PEBCAK))
## Random errors
- cosmic rays
- random bit flips
## Approximation errors
"**algorithmic errors**"
- simplifying and adapting mathematics to the computer
- should decrease as $N$ increases
#### Example:
Approximate $\sin(x)$ with its truncated series expansion:
\begin{align}
\sin x &= \sum_{n=1}^{+\infty} \frac{(-1)^{n-1} x^{2n-1}}{(2n - 1)!}\\
&\approx \sum_{n=1}^{N} \frac{(-1)^{n-1} x^{2n-1}}{(2n - 1)!} + \mathcal{E}(x, N)
\end{align}
## Round-off errors
- finite precision for storing floating-point numbers (32 bit, 64 bit)
- not known exactly (treat as uncertainty)
- can *accumulate* and lead to *garbage*
#### Example:
Assume you can only store four decimals:
\begin{align}
\text{storage}:&\quad \frac{1}{3} = 0.3333_c \quad\text{and}\quad \frac{2}{3} = 0.6667_c\\
\text{exact}:&\quad 2\times\frac{1}{3} - \frac{2}{3} = 0\\
\text{computer}:&\quad 2 \times 0.3333 - 0.6667 = -0.0001 \neq 0
\end{align}
... now imagine adding "$2\times\frac{1}{3} - \frac{2}{3}$" in a loop 100,000 times.
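That thought experiment used four decimal digits; in Python's 64-bit doubles an analogous non-zero residual is $(0.1 + 0.2) - 0.3$, and a loop makes the accumulation visible:

```python
residual = (0.1 + 0.2) - 0.3
print(residual)  # 5.551115123125783e-17, not zero

# accumulate the tiny residual 100,000 times
total = 0.0
for _ in range(100_000):
    total += (0.1 + 0.2) - 0.3
print(total)  # on the order of 1e-12: the round-off errors added up
```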
## The problems with *subtractive cancellation*
Model the computer representation $x_c$ of a number $x$ as
$$
x_c \simeq x(1+\epsilon_x)
$$
with the *relative* error $|\epsilon_x| \approx \epsilon_m$ (similar to machine precision).
Note: The *absolute* error is $\Delta x = x_c - x$ and is related to the relative error by $\epsilon_x = \Delta x/x$.
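This model is easy to probe in Python (assuming 64-bit doubles): `sys.float_info.epsilon` gives $\epsilon_m$, and the `fractions` module can recover the exact value $x_c$ actually stored for a decimal such as $0.1$:

```python
import sys
from fractions import Fraction

eps_m = sys.float_info.epsilon
print(eps_m)  # 2.220446049250313e-16 for 64-bit doubles

x_c = Fraction(0.1)   # the exact rational value stored for 0.1
x = Fraction(1, 10)   # the intended value
rel_err = abs(x_c - x) / x
print(float(rel_err))  # about 5.5e-17, safely below eps_m as the model predicts
```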
What happens when we subtract two numbers $b$ and $c$:
$$a = b - c$$
\begin{gather}
a_c = b_c - c_c = b(1+\epsilon_b) - c(1+\epsilon_c)\\
\frac{a_c}{a} = 1 + \frac{b}{a}\epsilon_b - \frac{c}{a} \epsilon_c
\end{gather}
No guarantee that the errors cancel, and the relative error on $a$
$$
\epsilon_a = \frac{a_c}{a} - 1 = \frac{b}{a}\epsilon_b - \frac{c}{a} \epsilon_c
$$
can be huge for small $a$!
### Subtracting two nearly equal numbers
$$b \approx c$$ is bad!
\begin{align}
\frac{a_c}{a} &= 1 + \frac{b}{a}(\epsilon_b - \epsilon_c) \\
\left| \frac{a_c}{a} \right| &\leq 1 + \left| \frac{b}{a} \right| (|\epsilon_b| + |\epsilon_c|)
\end{align}
i.e. the large number $b/a$ magnifies the error.
# Beware of subtractions!
**If you subtract two large numbers and end up with a small one, then the small one is less significant than any of the large ones.**
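A classic illustration (a sketch, not from the text): the quadratic formula applied to $x^2 - 10^8 x + 1 = 0$, whose roots lie near $10^8$ and $10^{-8}$. The naive formula subtracts two nearly equal numbers for the small root; rationalizing avoids the subtraction:

```python
import math

a, b, c = 1.0, -1.0e8, 1.0
disc = math.sqrt(b*b - 4*a*c)

# naive: (-b - disc) subtracts two nearly equal large numbers
x_naive = (-b - disc) / (2*a)

# stable: rationalize so the large terms are added, not subtracted
x_stable = (2*c) / (-b + disc)

print(x_naive)   # wrong in the leading digits due to cancellation
print(x_stable)  # accurate, close to the true root 1e-8
```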
## Round-off errors
Repeated calculations of quantities with errors beget new errors: In general, analyze with the rules of *error propagation*: function $f(x_1, x_2, \dots, x_N)$ with absolute errors on the $x_i$ of $\Delta x_i$ (i.e., $x_i \pm \Delta x_i$):
$$
\Delta f(x_1, x_2, \dots; \Delta x_1, \Delta x_2, \dots) =
\sqrt{\sum_{i=1}^N \left(\Delta x_i \frac{\partial f}{\partial x_i}\right)^2}
$$
Note: relative error $$\epsilon_i = \frac{\Delta x_i}{x_i}$$
Example: division $a = b/c$ (... with short cut)
\begin{align}
a_c &= \frac{b_c}{c_c} = \frac{b(1+\epsilon_b)}{c(1+\epsilon_b)} \\
\frac{a_c}{a} &= \frac{1+\epsilon_b}{1+\epsilon_c}
= \frac{(1+\epsilon_b)(1-\epsilon_c)}{1-\epsilon_c^2} \approx (1+\epsilon_b)(1-\epsilon_c)\\
&\approx 1 + \epsilon_b - \epsilon_c, \qquad |1 + \epsilon_b - \epsilon_c| \leq 1 + |\epsilon_b| + |\epsilon_c| \\
|\epsilon_a| = \left|\frac{a_c}{a} - 1\right| &\leq |\epsilon_b| + |\epsilon_c|
\end{align}
(neglected terms of order $\mathcal{O}(\epsilon^2)$); and same for multiplication.
**Errors accumulate with every operation.**
### Model for round-off error accumulation
View error in each calculation as a step in a *random walk*. The total "distance" (i.e. total error) $R$ over $N$ steps of length $r$ (the individual, "random" errors), is on average
$$ R \approx \sqrt{N} r $$
Total relative error $\epsilon_{\text{ro}}$ after $N$ calculations with error of the order of the machine precision $\epsilon_m$ is
$$ \epsilon_{\text{ro}} \approx \sqrt{N} \epsilon_m $$
(Only a model, depending on algorithm may be less or even $N!$...)
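A quick simulation of this random-walk model (a sketch, with an assumed step size $r$): draw $N$ errors of magnitude $r$ with random signs and compare the accumulated total with $\sqrt{N}\,r$:

```python
import math
import random

random.seed(0)

r = 1e-7      # size of one "round-off" step, roughly machine precision
N = 100_000   # number of operations

# each operation contributes +r or -r at random
total = sum(random.choice((-1, 1)) * r for _ in range(N))

print(abs(total))        # accumulated error of a single run
print(math.sqrt(N) * r)  # random-walk estimate: sqrt(N) * r ~ 3.2e-5
```

A single run fluctuates around the estimate, but over many runs the average magnitude scales like $\sqrt{N}$.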
## Total error of an algorithm
What you need to know to evaluate an algorithm:
1. Does it converge? (What $N$ do I need?)
2. How precise are the converged results (What is the error $\epsilon_\text{tot}$?)
3. What is its run time? (How fast is it for a given problem size?)
The total error contains *approximation* and *round off* errors:
\begin{gather}
\epsilon_\text{tot} = \epsilon_\text{app} + \epsilon_\text{ro}
\end{gather}
Model for the approximation error for an algorithm that takes $N$ steps (operations) to find a "good" answer:
$$
\epsilon_\text{app} \simeq \frac{\alpha}{N^\beta}
$$
and round off error as
$$
\epsilon_{\text{ro}} \approx \sqrt{N} \epsilon_m
$$
Model for total error:
$$
\epsilon_\text{tot} = \frac{\alpha}{N^\beta} + \sqrt{N} \epsilon_m
$$
Analyze $\log_{10} $ of the relative error (direct readout of number of significant decimals).
<img style="align: center" width="80%" src="./images/CompPhys_total_error.png" />
<span style="font-size: small; text-align: right">Image from Computational Physics. eTextBook Python 3rd Edition. Copyright © 2012 Landau, Rubin, Páez. Used under the Creative-Commons Attribution-NonCommerical-ShareAlike 3.0 Unported License.</span>
### Example analysis
\begin{gather}
\epsilon_\text{app} = \frac{1}{N^2}, \quad \epsilon_\text{ro} = \sqrt{N}\epsilon_m\\
\epsilon_\text{tot} = \frac{1}{N^2} + \sqrt{N}\epsilon_m
\end{gather}
Total error is a *minimum* for
\begin{gather}
\frac{d\epsilon_\text{tot}}{dN} = -\frac{2}{N^{3}} + \frac{1}{2}\frac{\epsilon_m}{\sqrt{N}} = 0, \quad\text{thus} \quad
N^{5/2} = 4 \epsilon_m^{-1}\\
N = \left(\frac{4}{\epsilon_m}\right)^{2/5}
\end{gather}
What is the best $N$ for single precision $\epsilon_m \approx 10^{-7}$?
```
import math

def N_opt(eps_m):
    return round(math.pow(4. / eps_m, 2. / 5.))

def eps_app(N):
    return 1. / (N * N)

def eps_ro(N, eps_m):
    return math.sqrt(N) * eps_m

epsilon_m = 1e-7  # single precision
N = N_opt(epsilon_m)
err_app = eps_app(N)
err_ro = eps_ro(N, epsilon_m)
print("best N = {0} (for eps_m={1})".format(N, epsilon_m))
print("eps_tot = {0:.3g}".format(err_app + err_ro))
print("eps_app = {0:.3g}, eps_ro = {1:.3g}".format(err_app, err_ro))
```
Single precision $\epsilon_m \approx 10^{-7}$:
$$
N \approx 1099\\
\epsilon_\text{tot} \approx 4 \times 10^{-6} \\
\epsilon_\text{app} = 8.28 \times 10^{-7} \\
\epsilon_\text{ro} = 3.32 \times 10^{-6}
$$
Here, most of the error is round-off error! What can you do?
* use double precision (delay round-off error)
* use a better algorithm, e.g. $\epsilon_\text{app}\simeq \frac{2}{N^4}$ (uses fewer steps)
**Better algorithms are always a good idea :-)**
Remember: trade-off between **approximation error** and **rounding error**.
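To see the pay-off concretely, the same minimisation can be repeated for the better algorithm, $\epsilon_\text{app} \simeq \frac{2}{N^4}$. Setting $\frac{d\epsilon_\text{tot}}{dN} = -\frac{8}{N^5} + \frac{1}{2}\frac{\epsilon_m}{\sqrt{N}} = 0$ gives $N = \left(\frac{16}{\epsilon_m}\right)^{2/9}$. A quick check, mirroring the single-precision example above:

```python
import math

eps_m = 1e-7  # single precision machine epsilon

def N_opt_better(eps_m):
    # minimise eps_tot = 2/N**4 + sqrt(N)*eps_m  =>  N = (16/eps_m)**(2/9)
    return round(math.pow(16. / eps_m, 2. / 9.))

N = N_opt_better(eps_m)
err_app = 2. / N**4
err_ro = math.sqrt(N) * eps_m
print("best N =", N)
print("eps_tot = {0:.3g}".format(err_app + err_ro))
```

The optimum drops from $N \approx 1099$ to $N \approx 67$, and the total error improves by roughly a factor of four.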
# Homework assignment #3
These problem sets focus on using the Beautiful Soup library to scrape web pages.
## Problem Set #1: Basic scraping
I've made a web page for you to scrape. It's available [here](http://static.decontextualize.com/widgets2016.html). The page concerns the catalog of a famous [widget](http://en.wikipedia.org/wiki/Widget) company. You'll be answering several questions about this web page. In the cell below, I've written some code so that you end up with a variable called `html_str` that contains the HTML source code of the page, and a variable `document` that stores a Beautiful Soup object.
```
from bs4 import BeautifulSoup
from urllib.request import urlopen
html_str = urlopen("http://static.decontextualize.com/widgets2016.html").read()
document = BeautifulSoup(html_str, "html.parser")
```
Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of `<h3>` tags contained in `widgets2016.html`.
```
h3_tags = document.find_all('h3')
print("There are", len(h3_tags), "h3 tags in widgets2016.html")
```
Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header.
```
# The phone number is the text of the second <a> tag on the page
phone_number = document.find_all('a')[1].string
print(phone_number)
```
In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, `widget_names` should evaluate to a list that looks like this (though not necessarily in this order):
```
Skinner Widget
Widget For Furtiveness
Widget For Strawman
Jittery Widget
Silver Widget
Divided Widget
Manicurist Widget
Infinite Widget
Yellow-Tipped Widget
Unshakable Widget
Self-Knowledge Widget
Widget For Cinema
```
```
widget_names = []
for table_tag in document.find_all('table', {'class': 'widgetlist'}):
    for tr in table_tag.find_all('tr'):
        widget_names.append(tr.find('td', {'class': 'wname'}).string)
for name in widget_names:
    print(name)
```
## Problem set #2: Widget dictionaries
For this problem set, we'll continue to use the HTML page from the previous problem set. In the cell below, I've made an empty list and assigned it to a variable called `widgets`. Write code that populates this list with dictionaries, one dictionary per widget in the source file. The keys of each dictionary should be `partno`, `wname`, `price`, and `quantity`, and the value for each of the keys should be the value for the corresponding column for each row. After executing the cell, your list should look something like this:
```
[{'partno': 'C1-9476',
'price': '$2.70',
'quantity': '512',
'wname': 'Skinner Widget'},
{'partno': 'JDJ-32/V',
'price': '$9.36',
'quantity': '967',
'wname': 'Widget For Furtiveness'},
...several items omitted...
{'partno': '5B-941/F',
'price': '$13.26',
'quantity': '919',
'wname': 'Widget For Cinema'}]
```
And this expression: `widgets[5]['partno']` ... should evaluate to: `LH-74/O`
```
widgets = []
for table_tag in document.find_all('table', {'class': 'widgetlist'}):
    for tr in table_tag.find_all('tr'):
        widgets.append({
            'partno': tr.find('td', {'class': 'partno'}).string,
            'wname': tr.find('td', {'class': 'wname'}).string,
            'price': tr.find('td', {'class': 'price'}).string,
            'quantity': tr.find('td', {'class': 'quantity'}).string,
        })
widgets
# widgets[5]['partno'] evaluates to 'LH-74/O'
```
In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for `price` and `quantity` in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this:
```
[{'partno': 'C1-9476',
  'price': 2.7,
  'quantity': 512,
  'widgetname': 'Skinner Widget'},
 {'partno': 'JDJ-32/V',
  'price': 9.36,
  'quantity': 967,
  'widgetname': 'Widget For Furtiveness'},
 ... some items omitted ...
 {'partno': '5B-941/F',
  'price': 13.26,
  'quantity': 919,
  'widgetname': 'Widget For Cinema'}]
```
(Hint: Use the `float()` and `int()` functions. You may need to use string slices to convert the `price` field to a floating-point number.)
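For instance, a string slice drops the leading `$` so the remainder can be converted:

```python
price = '$2.70'
print(float(price[1:]))  # slice off the '$', then convert -> 2.7
print(int('512'))        # quantities convert directly -> 512
```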
```
widgets = []
for table_tag in document.find_all('table', {'class': 'widgetlist'}):
    for tr in table_tag.find_all('tr'):
        price_str = tr.find('td', {'class': 'price'}).string
        widgets.append({
            'partno': tr.find('td', {'class': 'partno'}).string,
            'wname': tr.find('td', {'class': 'wname'}).string,
            'price': float(price_str[1:]),  # strip the leading '$' before converting
            'quantity': int(tr.find('td', {'class': 'quantity'}).string),
        })
widgets
```
Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the `widgets` list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.
Expected output: `7928`
```
total_widgets = 0
for widget in widgets:
    total_widgets = total_widgets + widget['quantity']
total_widgets
```
In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.
Expected output:
```
Widget For Furtiveness
Jittery Widget
Silver Widget
Infinite Widget
Widget For Cinema
```
```
for widget in widgets:
    if widget['price'] > 9.30:
        print(widget['wname'])
```
## Problem set #3: Sibling rivalries
In the following problem set, you will yet again be working with the data in `widgets2016.html`. In order to accomplish the tasks in this problem set, you'll need to learn about Beautiful Soup's `.find_next_sibling()` method. Here's some information about that method, cribbed from the notes:
Often, the tags we're looking for don't have a distinguishing characteristic, like a class attribute, that allows us to find them using `.find()` and `.find_all()`, and the tags also aren't in a parent-child relationship. This can be tricky! For example, take the following HTML snippet, (which I've assigned to a string called `example_html`):
```
example_html = """
<h2>Camembert</h2>
<p>A soft cheese made in the Camembert region of France.</p>
<h2>Cheddar</h2>
<p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p>
"""
```
If our task was to create a dictionary that maps the name of the cheese to the description that follows in the `<p>` tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a `.find_next_sibling()` method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above:
```
example_doc = BeautifulSoup(example_html, "html.parser")
cheese_dict = {}
for h2_tag in example_doc.find_all('h2'):
    cheese_name = h2_tag.string
    cheese_desc_tag = h2_tag.find_next_sibling('p')
    cheese_dict[cheese_name] = cheese_desc_tag.string
cheese_dict
# Part numbers in the table just beneath the "Hallowed Widgets" header
hallowed_h3 = [h3 for h3 in document.find_all('h3')
               if 'hallowed' in h3.string.lower()][0]
hallowed_table = hallowed_h3.find_next_sibling('table')
for td in hallowed_table.find_all('td', {'class': 'partno'}):
    print(td.string)
```
With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the `.find_next_sibling()` method, to print the part numbers of the widgets that are in the table *just beneath* the header "Hallowed Widgets."
Expected output:
```
MZ-556/B
QV-730
T1-9731
5B-941/F
```
Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!
In the cell below, I've created a variable `category_counts` and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are "categories" of widgets (e.g., the contents of the `<h3>` tags on the page: "Forensic Widgets", "Mood widgets", "Hallowed Widgets") and the value for each key is the number of widgets that occur in that category. I.e., after your code has been executed, the dictionary `category_counts` should look like this:
```
{'Forensic Widgets': 3,
'Hallowed widgets': 4,
'Mood widgets': 2,
'Wondrous widgets': 3}
```
```
category_counts = {}
# your code here
for h3_tag in document.find_all('h3'):
    category_table = h3_tag.find_next_sibling('table')
    partnos = category_table.find_all('td', {'class': 'partno'})
    category_counts[h3_tag.string] = len(partnos)
# end your code
category_counts
```
Congratulations! You're done.

# 1. Introduction
This notebook demonstrates how to create two parallel video pipelines using the GStreamer multimedia framework:
* The first pipeline captures video from a V4L2 device and displays the output on a monitor using a DRM/KMS display device.
* The second pipeline decodes a VP9 encoded video file and displays the output on the same monitor using the same DRM/KMS display device.
The display device contains a video mixer which allows targeting different video planes for the individual pipelines with programmable x/y-offsets as well as width and height.
Refer to:
* nb1 for more details on the video file decode pipeline
* nb2 for more details on the V4L2 capture pipeline
* nb3 for more details on the video mixer configuration and display pipeline
In this notebook, you will:
1. Create two parallel GStreamer video pipelines using the ``parse_launch()`` API
2. Create a GStreamer pipeline graph and view it inside this notebook.
# 2. Imports and Initialization
Import all python modules required for this notebook.
```
from IPython.display import Image, display, clear_output
import pydot
import sys
import time
import gi
gi.require_version('Gst', '1.0')
gi.require_version("GstApp", "1.0")
from gi.repository import GObject, GLib, Gst, GstApp
```
This is the VMK180 TRD notebook 4 (nb4).
```
nb = "nb4"
```
Create a directory for saving the pipeline graph as a dot file. Set the GStreamer debug dot directory environment variable to point to that directory.
```
dotdir = "/home/root/gst-dot/" + nb
!mkdir -p $dotdir
%env GST_DEBUG_DUMP_DOT_DIR = $dotdir
```
Initialize the GStreamer library. Optionally enable debug (default off) and set the debug level.
```
Gst.init(None)
Gst.debug_set_active(False)
Gst.debug_set_default_threshold(1)
```
# 3. Create String Representation of the First GStreamer Pipeline
The first pipeline consists of the following elements:
* ``xlnxvideosrc``
* ``caps``
* ``kmssink``
Describe the ``xlnxvideosrc`` element and its properties as string representation.
```
src_types = ["vivid", "usbcam", "mipi"]
src_type = src_types[1] # Change the source type to vivid, usbcam, or mipi via list index
io_mode = "mmap"
if src_type == "mipi":
    io_mode = "dmabuf"
src_1 = "xlnxvideosrc src-type=" + src_type + " io-mode=" + io_mode
```
Describe the ``caps`` filter element as string representation.
```
width = 1280
height = 720
fmt = "YUY2"
caps = "video/x-raw, width=" + str(width) + ", height=" + str(height) + ", format=" + fmt
```
Describe the ``kmssink`` element and its properties as string representation.
```
driver_name = "xlnx"
plane_id_1 = 39
xoff_1 = 0
yoff_1 = 0
render_rectangle_1 = "<" + str(xoff_1) + "," + str(yoff_1) + "," + str(width) + "," + str(height) + ">"
sink_1 = "kmssink" + " driver-name=" + driver_name + " plane-id=" + str(plane_id_1) + " render-rectangle=" + render_rectangle_1
```
Create a string representation of the first pipeline by concatenating the individual element strings.
```
pipe_1 = src_1 + " ! " + caps + " ! " + sink_1
print(pipe_1)
```
# 4. Create String Representation of the Second GStreamer Pipeline
The second pipeline consists of the following elements:
* ``multifilesrc``
* ``decodebin``
* ``videoconvert``
* ``kmssink``
Describe the ``multifilesrc`` element and its properties as string representation.
```
file_name = "/usr/share/movies/Big_Buck_Bunny_4K.webm.360p.vp9.webm"
loop = True
src_2 = "multifilesrc location=" + file_name + " loop=" + str(loop)
```
Describe the ``decodebin`` and ``videoconvert`` elements as string representations.
```
dec = "decodebin"
cvt = "videoconvert"
```
Describe the ``kmssink`` element and its properties as string representation.
**Note:** The same ``kmssink`` element and ``driver-name`` property are used as in pipeline 1, only the ``plane-id`` and the ``render-rectangle`` properties are set differently. The output of this pipeline is shown on a different plane and the x/y-offsets are set such that the planes of pipeline 1 and 2 don't overlap.
```
driver_name = "xlnx"
plane_id_2 = 38
xoff_2 = 0
yoff_2 = 720
width_2 = 640
height_2 = 360
render_rectangle_2 = "<" + str(xoff_2) + "," + str(yoff_2) + "," + str(width_2) + "," + str(height_2) + ">"
sink_2 = "kmssink" + " driver-name=" + driver_name + " plane-id=" + str(plane_id_2) + " render-rectangle=" + render_rectangle_2
```
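As a sanity check on the note above, a small helper (a sketch; `rects_overlap` is not part of GStreamer) confirms that the two render rectangles occupy disjoint regions of the display:

```python
def rects_overlap(r1, r2):
    """Each rectangle is (x, y, width, height); True if the two intersect."""
    x1, y1, w1, h1 = r1
    x2, y2, w2, h2 = r2
    return x1 < x2 + w2 and x2 < x1 + w1 and y1 < y2 + h2 and y2 < y1 + h1

plane_1 = (0, 0, 1280, 720)   # pipeline 1: render-rectangle of the capture sink
plane_2 = (0, 720, 640, 360)  # pipeline 2: placed just below pipeline 1's plane
print(rects_overlap(plane_1, plane_2))  # -> False, the planes do not overlap
```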
Create a string representation of the second pipeline by concatenating the individual element strings.
```
pipe_2 = src_2 + " ! " + dec + " ! " + cvt + " ! "+ sink_2
print(pipe_2)
```
# 5. Create and Run the GStreamer Pipelines
Parse the string representations of the first and second pipeline as a single pipeline graph.
```
pipeline = Gst.parse_launch(pipe_1 + " " + pipe_2)
```
The ``bus_call`` function listens on the bus for ``EOS`` and ``ERROR`` events. If any of these events occur, stop the pipeline (set to ``NULL`` state) and quit the main loop.
In case of an ``ERROR`` event, parse and print the error message.
```
def bus_call(bus, message, loop):
    t = message.type
    if t == Gst.MessageType.EOS:
        sys.stdout.write("End-of-stream\n")
        pipeline.set_state(Gst.State.NULL)
        loop.quit()
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        sys.stderr.write("Error: %s: %s\n" % (err, debug))
        pipeline.set_state(Gst.State.NULL)
        loop.quit()
    return True
```
Start the pipeline (set to ``PLAYING`` state), create the main loop and listen to messages on the bus. Register the ``bus_call`` callback function with the ``message`` signal of the bus. Start the main loop.
The video will be displayed on the monitor.
To stop the pipeline, click the square-shaped icon labelled 'Interrupt the kernel' in the top menu bar. The exception handler then creates a dot graph of the pipeline topology, stops the pipeline, and quits the main loop.
```
pipeline.set_state(Gst.State.PLAYING);
loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)
try:
    loop.run()
except:
    sys.stdout.write("Interrupt caught\n")
    Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, nb)
    pipeline.set_state(Gst.State.NULL)
    loop.quit()
```
# 6. View the Pipeline dot Graph
Register dot plugins for png export to work.
```
!dot -c
```
Convert the dot file to png and display the pipeline graph. The image will be displayed below the following code cell. Double-click on the generated image file to zoom in.
**Note:** This step may take a few seconds. Also, compared to previous notebooks, two disjoint graphs are displayed in the same image as we have created two parallel pipelines in this example.
```
dotfile = dotdir + "/" + nb + ".dot"
graph = pydot.graph_from_dot_file(dotfile, 'utf-8')
display(Image(graph[0].create(None, 'png', 'utf-8')))
```
# 7. Summary
In this notebook you learned how to:
1. Create two parallel GStreamer pipelines from a string representation using the ``parse_launch()`` API
2. Export the pipeline topology as a dot file image and display it in the notebook
<center>Copyright© 2019 Xilinx</center>
# Running a Federated Cycle with Synergos
In a federated learning system, there are many contributing participants, known as Worker nodes, each of which receives a global model and trains it on its own local dataset. The dataset does not leave the individual Worker node at any point, and remains private to the node.
The job of synchronizing, orchestrating and initiating a federated learning cycle falls on a Trusted Third Party (TTP). The TTP pushes out the global model architecture and parameters for the individual nodes to train on, calling upon the required data based on tags, e.g. "training", which point to relevant data on the individual nodes. At no point does the TTP receive, copy or access the Worker nodes' local datasets.

This tutorial aims to give you an understanding of how to use the synergos package to run a full federated learning cycle on a `Synergos Cluster` grid.
In a `Synergos Cluster` grid, the addition of a director and a queue component lets you parallelize your jobs: the number of concurrent jobs equals the number of sub-grids. This comes alongside all the quality-of-life components supported in a `Synergos Plus` grid.
In this tutorial, you will go through the steps required by each participant (TTP and Worker), by simulating each of them locally with docker containers. Specifically, we will simulate a Director and 2 sub-grids, each of which has a TTP and 2 Workers, allowing us to perform 2 concurrent federated operations at any time.
At the end of this, we will have:
- Connected the participants
- Trained the model
- Evaluated the model
## About the Dataset and Task
The dataset used in this notebook is a small subset of Imagenette images comprising 3 classes; all images are 28 x 28 pixels. The dataset is available in the same directory as this notebook. Within the dataset directory, `data1` is for Worker 1 and `data2` is for Worker 2. The task to be carried out is multi-class classification.
The dataset we have provided is a processed subset of the [original Imagenette dataset](https://github.com/fastai/imagenette).
## Initiating the docker containers
Before we begin, we have to start the docker containers.
### A. Initialization via `Synergos Simulator`
In `Synergos Simulator`, a sandboxed environment has been created for you!
By running:
`docker-compose -f docker-compose-syncluster.yml up --build`
the following components will be started:
- Director
- Sub-Grid 1
- TTP_1 (Cluster)
- Worker_1_n1
- Worker_2_n1
- Sub-Grid 2
- TTP_2 (Cluster)
- Worker_1_n2
- Worker_2_n2
- Synergos UI
- Synergos Logger
- Synergos MLOps
- Synergos MQ
Refer to [this](https://github.com/aimakerspace/synergos_simulator) for all the pre-allocated host & port mappings.
### B. Manual Initialization
Firstly, pull the required docker images with the following commands:
1. Synergos Director:
`docker pull gcr.io/synergos-aisg/synergos_director:v0.1.0`
2. Synergos TTP (Cluster):
`docker pull gcr.io/synergos-aisg/synergos_ttp_cluster:v0.1.0`
3. Synergos Worker:
`docker pull gcr.io/synergos-aisg/synergos_worker:v0.1.0`
4. Synergos MLOps:
`docker pull gcr.io/synergos-aisg/synergos_mlops:v0.1.0`
5. Synergos MQ:
`docker pull gcr.io/synergos-aisg/synergos_mq:v0.1.0`
Next, in <u>separate</u> CLI terminals, run the following command(s):
**Note: For Windows users, it is advisable to use powershell or command prompt based interfaces**
#### Director
```
docker run --rm \
  -p 5000:5000 \
  -v <directory imagenette/orchestrator_data>:/orchestrator/data \
  -v <directory imagenette/orchestrator_outputs>:/orchestrator/outputs \
  -v <directory imagenette/mlflow>:/mlflow \
  --name director \
  gcr.io/synergos-aisg/synergos_director:v0.1.0 \
  --id ttp \
  --logging_variant graylog <IP Synergos Logger> <TTP port> \
  --queue rabbitmq <IP Synergos Logger> <AMQP port>
```
#### Sub-Grid 1
- **TTP_1**
```
docker run --rm \
  -p 6000:5000 \
  -p 9020:8020 \
  -v <directory imagenette/orchestrator_data>:/orchestrator/data \
  -v <directory imagenette/orchestrator_outputs>:/orchestrator/outputs \
  --name ttp_1 \
  gcr.io/synergos-aisg/synergos_ttp_cluster:v0.1.0 \
  --id ttp \
  --logging_variant graylog <IP Synergos Logger> <TTP port> \
  --queue rabbitmq <IP Synergos Logger> <AMQP port>
```
- **WORKER_1 Node 1**
```
docker run --rm \
  -p 5001:5000 \
  -p 8021:8020 \
  -v <directory imagenette/data1>:/worker/data \
  -v <directory imagenette/outputs_1>:/worker/outputs \
  --name worker_1_n1 \
  gcr.io/synergos-aisg/synergos_worker:v0.1.0 \
  --id worker_1_n1 \
  --logging_variant graylog <IP Synergos Logger> <Worker port> \
  --queue rabbitmq <IP Synergos Logger> <AMQP port>
```
- **WORKER_2 Node 1**
```
docker run --rm \
  -p 5002:5000 \
  -p 8022:8020 \
  -v <directory imagenette/data2>:/worker/data \
  -v <directory imagenette/outputs_2>:/worker/outputs \
  --name worker_2_n1 \
  gcr.io/synergos-aisg/synergos_worker:v0.1.0 \
  --id worker_2_n1 \
  --logging_variant graylog <IP Synergos Logger> <Worker port> \
  --queue rabbitmq <IP Synergos Logger> <AMQP port>
```
#### Sub-Grid 2
- **TTP_2**
```
docker run --rm \
  -p 7000:5000 \
  -p 10020:8020 \
  -v <directory imagenette/orchestrator_data>:/orchestrator/data \
  -v <directory imagenette/orchestrator_outputs>:/orchestrator/outputs \
  --name ttp_2 \
  gcr.io/synergos-aisg/synergos_ttp_cluster:v0.1.0 \
  --id ttp \
  --logging_variant graylog <IP Synergos Logger> <TTP port> \
  --queue rabbitmq <IP Synergos Logger> <AMQP port>
```
- **WORKER_1 Node 2**
```
docker run --rm \
  -p 5003:5000 \
  -p 8023:8020 \
  -v <directory imagenette/data1>:/worker/data \
  -v <directory imagenette/outputs_1>:/worker/outputs \
  --name worker_1_n2 \
  gcr.io/synergos-aisg/synergos_worker:v0.1.0 \
  --id worker_1_n2 \
  --logging_variant graylog <IP Synergos Logger> <Worker port> \
  --queue rabbitmq <IP Synergos Logger> <AMQP port>
```
- **WORKER_2 Node 2**
```
docker run --rm \
  -p 5004:5000 \
  -p 8024:8020 \
  -v <directory imagenette/data2>:/worker/data \
  -v <directory imagenette/outputs_2>:/worker/outputs \
  --name worker_2_n2 \
  gcr.io/synergos-aisg/synergos_worker:v0.1.0 \
  --id worker_2_n2 \
  --logging_variant graylog <IP Synergos Logger> <Worker port> \
  --queue rabbitmq <IP Synergos Logger> <AMQP port>
```
#### Synergos MLOps
```
# IMPT: mount the SAME mlflow directory as the orchestrator's
docker run --rm \
  -p 5500:5500 \
  -v /path/to/mlflow_test/:/mlflow \
  --name synmlops \
  gcr.io/synergos-aisg/synergos_mlops:v0.1.0
```
#### Synergos MQ
```
# 15672: management UI port, 5672: AMQP port
docker run --rm \
  -p 15672:15672 \
  -p 5672:5672 \
  --name synergos_mq \
  gcr.io/synergos-aisg/synergos_mq:v0.1.0
```
#### Synergos UI
- Refer to these [instructions](https://github.com/aimakerspace/synergos_ui) to deploy `Synergos UI`.
#### Synergos Logger
- Refer to these [instructions](https://github.com/aimakerspace/synergos_logger) to deploy `Synergos Logger`.
Once ready, for each terminal, you should see a REST server running on http://0.0.0.0:5000 of the container.
You are now ready for the next step.
## Configurations
### A. Configuring `Synergos Simulator`
All hosts & ports have already been pre-allocated!
Refer to [this](https://github.com/aimakerspace/synergos_simulator) for all the pre-allocated host & port mappings.
### B. Configuring your manual setup
In a new terminal, run `docker inspect bridge` and find the IPv4Address for each container. Ideally, the containers should have the following addresses:
- director address: `172.17.0.2`
- Sub-Grid 1
- ttp_1 address: `172.17.0.3`
- worker_1_n1 address: `172.17.0.4`
- worker_2_n1 address: `172.17.0.5`
- Sub-Grid 2
- ttp_2 address: `172.17.0.6`
- worker_1_n2 address: `172.17.0.7`
- worker_2_n2 address: `172.17.0.8`
- UI address: `172.17.0.9`
- Logger address: `172.17.0.14`
- MLOps address: `172.17.0.15`
- MQ address: `172.17.0.16`
If not, just note the relevant IP addresses for each docker container.
Run the following cells below.
**Note: For Windows users, `host` should be Docker Desktop VM's IP. Follow [this](https://stackoverflow.com/questions/58073936/how-to-get-ip-address-of-docker-desktop-vm) on instructions to find IP**
```
import time
from synergos import Driver
host = "172.20.0.2"
port = 5000
# Initiate Driver
driver = Driver(host=host, port=port)
```
## Phase 1: Registration
Submitting Orchestrator & Participant metadata
#### 1A. Orchestrator creates a collaboration
```
collab_task = driver.collaborations
collab_task.configure_logger(
host="172.20.0.14",
port=9000,
sysmetrics_port=9100,
director_port=9200,
ttp_port=9300,
worker_port=9400,
ui_port=9000,
secure=False
)
collab_task.configure_mlops(
host="172.20.0.15",
port=5500,
ui_port=5500,
secure=False
)
collab_task.configure_mq(
host="172.20.0.16",
port=5672,
ui_port=15672,
secure=False
)
collab_task.create('imagenette_syncluster_collaboration')
```
#### 1B. Orchestrator creates a project
```
driver.projects.create(
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
action="classify",
incentives={
'tier_1': [],
'tier_2': [],
}
)
```
#### 1C. Orchestrator creates an experiment
```
driver.experiments.create(
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
expt_id="imagenette_syncluster_experiment",
model=[
{
"activation": "relu",
"is_input": True,
"l_type": "Conv2d",
"structure": {
"in_channels": 1,
"out_channels": 4,
"kernel_size": 3,
"stride": 1,
"padding": 1
}
},
{
"activation": None,
"is_input": False,
"l_type": "Flatten",
"structure": {}
},
{
"activation": "softmax",
"is_input": False,
"l_type": "Linear",
"structure": {
"bias": True,
"in_features": 4 * 28 * 28,
"out_features": 3
}
}
]
)
```
#### 1D. Orchestrator creates a run
```
driver.runs.create(
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
expt_id="imagenette_syncluster_experiment",
run_id="imagenette_syncluster_run",
rounds=2,
epochs=1,
base_lr=0.0005,
max_lr=0.005,
criterion="NLLLoss"
)
```
#### 1E. Participants registers their servers' configurations and roles
```
participant_resp_1 = driver.participants.create(
participant_id="worker_1",
)
display(participant_resp_1)
participant_resp_2 = driver.participants.create(
participant_id="worker_2",
)
display(participant_resp_2)
registration_task = driver.registrations
# Add and register worker_1 node
registration_task.add_node(
host='172.20.0.4',
port=8020,
f_port=5000,
log_msgs=True,
verbose=True
)
registration_task.add_node(
host='172.20.0.7',
port=8020,
f_port=5000,
log_msgs=True,
verbose=True
)
registration_task.create(
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
participant_id="worker_1",
role="host"
)
registration_task = driver.registrations
# Add and register worker_2 node
registration_task.add_node(
host='172.20.0.5',
port=8020,
f_port=5000,
log_msgs=True,
verbose=True
)
registration_task.add_node(
host='172.20.0.8',
port=8020,
f_port=5000,
log_msgs=True,
verbose=True
)
registration_task.create(
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
participant_id="worker_2",
role="guest"
)
```
#### 1F. Participants registers their tags for a specific project
```
# Worker 1 declares their data tags
driver.tags.create(
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
participant_id="worker_1",
train=[["imagenette", "dataset", "data1", "train"]],
evaluate=[["imagenette", "dataset", "data1", "evaluate"]]
)
# Worker 2 declares their data tags
driver.tags.create(
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
participant_id="worker_2",
train=[["imagenette", "dataset", "data2", "train"]],
evaluate=[["imagenette", "dataset", "data2", "evaluate"]]
)
```
## Phase 2: TRAIN
Alignment, Training & Optimisation
#### 2A. Perform multiple feature alignment to dynamically configure datasets and models for cross-grid compatibility
```
driver.alignments.create(
collab_id='imagenette_syncluster_collaboration',
project_id="imagenette_syncluster_project",
verbose=False,
log_msg=False
)
# Important! MUST wait for alignment process to first complete before proceeding on
while True:
    align_resp = driver.alignments.read(
        collab_id='imagenette_syncluster_collaboration',
        project_id="imagenette_syncluster_project"
    )
    align_data = align_resp.get('data')
    if align_data:
        display(align_resp)
        break
    time.sleep(5)
```
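The wait-and-poll pattern above recurs for every long-running request in this notebook. It can be factored into a small helper (a sketch; `wait_for` is not part of the synergos package, and it assumes every `read(...)` response is a dict carrying a `'data'` key once the operation completes):

```python
import time

def wait_for(read_fn, interval=5, timeout=600, **keys):
    """Poll read_fn(**keys) until its response carries 'data', then return it."""
    waited = 0
    while waited < timeout:
        resp = read_fn(**keys)
        data = resp.get('data')
        if data:
            return data
        time.sleep(interval)
        waited += interval
    raise TimeoutError("operation did not complete within {}s".format(timeout))
```

For example, `wait_for(driver.alignments.read, collab_id='imagenette_syncluster_collaboration', project_id='imagenette_syncluster_project')` replaces the loop above.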
#### 2B. Trigger training across the federated grid
```
model_resp = driver.models.create(
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
expt_id="imagenette_syncluster_experiment",
run_id="imagenette_syncluster_run",
log_msg=False,
verbose=False
)
display(model_resp)
# Important! MUST wait for training process to first complete before proceeding on
while True:
    train_resp = driver.models.read(
        collab_id="imagenette_syncluster_collaboration",
        project_id="imagenette_syncluster_project",
        expt_id="imagenette_syncluster_experiment",
        run_id="imagenette_syncluster_run"
    )
    train_data = train_resp.get('data')
    if train_data:
        display(train_data)
        break
    time.sleep(5)
```
#### 2C. Perform hyperparameter tuning once ideal model is found (experimental)
```
optim_parameters = {
'search_space': {
"rounds": {"_type": "choice", "_value": [1, 2]},
"epochs": {"_type": "choice", "_value": [1, 2]},
"batch_size": {"_type": "choice", "_value": [32, 64]},
"lr": {"_type": "choice", "_value": [0.0001, 0.1]},
"criterion": {"_type": "choice", "_value": ["NLLLoss"]},
"mu": {"_type": "uniform", "_value": [0.0, 1.0]},
"base_lr": {"_type": "choice", "_value": [0.00005]},
"max_lr": {"_type": "choice", "_value": [0.2]}
},
'backend': "tune",
'optimize_mode': "max",
'metric': "accuracy",
'trial_concurrency': 1,
'max_exec_duration': "1h",
'max_trial_num': 2,
'max_concurrent': 1,
'is_remote': True,
'use_annotation': True,
'auto_align': True,
'dockerised': True,
'verbose': True,
'log_msgs': True
}
driver.optimizations.create(
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
expt_id="imagenette_syncluster_experiment",
**optim_parameters
)
driver.optimizations.read(
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
expt_id="imagenette_syncluster_experiment"
)
```
## Phase 3: EVALUATE
Validation & Predictions
#### 3A. Perform validation(s) of combination(s)
```
# Orchestrator performs post-mortem validation
driver.validations.create(
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
expt_id="imagenette_syncluster_experiment",
run_id="imagenette_syncluster_run",
log_msg=False,
verbose=False
)
# Run this cell again after validation has completed to retrieve your validation statistics
# NOTE: You do not need to wait for validation/prediction requests to complete to proceed
driver.validations.read(
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
expt_id="imagenette_syncluster_experiment",
run_id="imagenette_syncluster_run",
)
```
#### 3B. Perform prediction(s) of combination(s)
```
# Worker 1 requests for inferences
driver.predictions.create(
tags={
"imagenette_syncluster_project": [
["imagenette", "dataset", "data1", "predict"]
]
},
participant_id="worker_1",
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
expt_id="imagenette_syncluster_experiment",
run_id="imagenette_syncluster_run",
log_msg=False,
verbose=False
)
# Run this cell again after prediction has completed to retrieve your predictions for worker 1
# NOTE: You do not need to wait for validation/prediction requests to complete to proceed
driver.predictions.read(
participant_id="worker_1",
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
expt_id="imagenette_syncluster_experiment",
run_id="imagenette_syncluster_run",
)
# Worker 2 requests for inferences
driver.predictions.create(
tags={
"imagenette_syncluster_project": [
["imagenette", "dataset", "data2", "predict"]
]
},
participant_id="worker_2",
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
expt_id="imagenette_syncluster_experiment",
run_id="imagenette_syncluster_run",
log_msg=False,
verbose=False
)
# Run this cell again after prediction has completed to retrieve your predictions for worker 2
# NOTE: You do not need to wait for validation/prediction requests to complete to proceed
driver.predictions.read(
participant_id="worker_2",
collab_id="imagenette_syncluster_collaboration",
project_id="imagenette_syncluster_project",
expt_id="imagenette_syncluster_experiment",
run_id="imagenette_syncluster_run",
)
```
| github_jupyter |
## Face Filters
Now that you have a trained facial keypoint detector, you can use it to automate things such as adding filters to a face. This notebook is optional; in it, you can use the keypoints detected around a person's eyes to add sunglasses to any face detected in an image. Check out the `images/` directory to see what other .png's we've provided for you to try!
<img src="images/face_filter_ex.png" width=60% height=60%/>
Below, take a look at the sunglasses .png we'll be using, then let's get started!
```
# import necessary resources
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
import cv2
# load in sunglasses image with cv2 and IMREAD_UNCHANGED
sunglasses = cv2.imread('images/sunglasses.png', cv2.IMREAD_UNCHANGED)
# plot our image
plt.imshow(sunglasses)
# print out its dimensions
print('Image shape: ', sunglasses.shape)
```
## The fourth dimension
You'll notice that this image actually has *4 color channels*, unlike a typical RGB image, which has only 3 color channels. This is because we set the flag `cv2.IMREAD_UNCHANGED`, which tells cv2 to read in the extra channel.
#### The alpha channel
In addition to the usual red, green, and blue channels of a color image, the 4th channel represents the **transparency level of each pixel** in the image; this channel is commonly called the **alpha** channel. The transparency channel works as follows: the lower the value, the more transparent the pixel. The lower bound (completely transparent) is zero, so any pixel set to 0 will not be seen; in the image above these look like white background pixels, but they are actually fully transparent.
With this transparency channel, we can place the rectangular sunglasses image on top of a face image, and the face regions that are technically covered by the sunglasses image's transparent background will still be visible!
Next, let's look at the alpha channel of the sunglasses image in the next Python cell. Because many pixels in the image background have an alpha value of 0, we need to explicitly print out the non-zero values if we want to see them.
```
# print out the sunglasses transparency (alpha) channel
alpha_channel = sunglasses[:,:,3]
print ('The alpha channel looks like this (black pixels = transparent): ')
plt.imshow(alpha_channel, cmap='gray')
# just to double check that there are indeed non-zero values
# let's find and print out every value greater than zero
values = np.where(alpha_channel != 0)
print ('The non-zero values of the alpha channel are: ')
print (values)
```
#### Overlaying images
Overlaying means that when we place the sunglasses image on top of another image, we can treat the transparency channel as a filter:
* If the pixels are not transparent (i.e. alpha_channel > 0), overlay them on the new image
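The overlay rule above can be sketched as a small helper that copies only the opaque pixels of an RGBA overlay into a region of a target image (a minimal sketch; the function name and arguments are illustrative, not from the notebook):

```python
import numpy as np

def overlay_rgba(background, overlay, x, y):
    """Copy only the non-transparent pixels of an RGBA `overlay`
    onto an RGB `background`, with the overlay's top-left at (x, y)."""
    h, w = overlay.shape[:2]
    roi = background[y:y+h, x:x+w]
    mask = overlay[:, :, 3] > 0          # alpha > 0 -> visible pixel
    roi[mask] = overlay[:, :, :3][mask]  # replace RGB only where visible
    return background
```

The overlay code later in this notebook applies the same idea, using `np.argwhere` to find the non-transparent points.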
#### Keypoint locations
During this process, it helps to know which keypoints belong to the eyes, the mouth, and so on. In the image below we have printed the index of each facial keypoint directly on the image, so you can tell which points correspond to the eyes, eyebrows, etc.
<img src="images/landmarks_numbered.jpg" width=50% height=50%/>
It may also be useful to use the keypoints corresponding to the edges of the face to define the width of the sunglasses, and the eye positions to define their placement.
Next, let's load an example image. You'll get an image and a set of keypoints from the training dataset provided below, but you could also use your own CNN model to generate keypoints for *any* face image (as in Notebook 3) and perform the same overlay!
```
# load in the data if you have not already!
# otherwise, you may comment out this cell
# -- DO NOT CHANGE THIS CELL -- #
!mkdir /data
!wget -P /data/ https://s3.amazonaws.com/video.udacity-data.com/topher/2018/May/5aea1b91_train-test-data/train-test-data.zip
!unzip -n /data/train-test-data.zip -d /data
# load in training data
key_pts_frame = pd.read_csv('/data/training_frames_keypoints.csv')
# print out some stats about the data
print('Number of images: ', key_pts_frame.shape[0])
# helper function to display keypoints
def show_keypoints(image, key_pts):
"""Show image with keypoints"""
plt.imshow(image)
plt.scatter(key_pts[:, 0], key_pts[:, 1], s=20, marker='.', c='m')
# a selected image
n = 120
image_name = key_pts_frame.iloc[n, 0]
image = mpimg.imread(os.path.join('/data/training/', image_name))
key_pts = key_pts_frame.iloc[n, 1:].to_numpy()  # as_matrix() was removed in recent pandas
key_pts = key_pts.astype('float').reshape(-1, 2)
print('Image name: ', image_name)
plt.figure(figsize=(5, 5))
show_keypoints(image, key_pts)
plt.show()
```
Next, you'll see an example of placing sunglasses on the person's face in the loaded image.
Note that the keypoints are numbered to match the labeled image above, so `key_pts[0,:]` corresponds to the first point (1) in the labeled image.
```
# Display sunglasses on top of the image in the appropriate place
# copy of the face image for overlay
image_copy = np.copy(image)
# top-left location for sunglasses to go
# 17 = edge of left eyebrow
x = int(key_pts[17, 0])
y = int(key_pts[17, 1])
# height and width of sunglasses
# h = length of nose
h = int(abs(key_pts[27,1] - key_pts[34,1]))
# w = left to right eyebrow edges
w = int(abs(key_pts[17,0] - key_pts[26,0]))
# read in sunglasses
sunglasses = cv2.imread('images/sunglasses.png', cv2.IMREAD_UNCHANGED)
# resize sunglasses
new_sunglasses = cv2.resize(sunglasses, (w, h), interpolation = cv2.INTER_CUBIC)
# get region of interest on the face to change
roi_color = image_copy[y:y+h,x:x+w]
# find all non-transparent pts
ind = np.argwhere(new_sunglasses[:,:,3] > 0)
# for each non-transparent point, replace the original image pixel with that of the new_sunglasses
for i in range(3):
roi_color[ind[:,0],ind[:,1],i] = new_sunglasses[ind[:,0],ind[:,1],i]
# set the area of the image to the changed region with sunglasses
image_copy[y:y+h,x:x+w] = roi_color
# display the result!
plt.imshow(image_copy)
```
#### Next steps
Look in the `images/` directory to see the other overlay .png's available! You may also notice that the sunglasses overlay isn't perfect, so we suggest experimenting with the scaling of the glasses' width and height, and looking into how to perform [image rotation](https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html) in OpenCV so that the overlay can match any facial pose.
| github_jupyter |
```
import torch
import torch.nn as nn
import numpy as np
from copy import deepcopy
device = "cuda" if torch.cuda.is_available() else "cpu"
class RBF(nn.Module):
def __init__(self):
super(RBF, self).__init__()
torch.cuda.manual_seed(0)
self.rbf_clt = self.init_clt()
self.rbf_std = self.init_std()
def init_clt(self):
return nn.Parameter(torch.rand(1))
def init_std(self):
return nn.Parameter(torch.rand(1))
def rbf(self, x, cluster, std):
return torch.exp(-(x - cluster) * (x - cluster) / 2 * (std * std))
def forward(self, x):
x = self.rbf(x, self.rbf_clt, self.rbf_std)
return x
class RBFnetwork(nn.Module):
def __init__(self, timelag):
super(RBFnetwork, self).__init__()
torch.cuda.manual_seed(0)
device = "cuda" if torch.cuda.is_available() else "cpu"
self.timelag = timelag
self.init_weight = nn.Parameter(torch.rand(self.timelag))
        self.rbf_list = nn.ModuleList([RBF() for i in range(self.timelag)])  # ModuleList registers the RBF parameters so the optimizer sees them
def forward(self, x):
for j in range(self.timelag):
if j ==0:
y = sum([self.init_weight[i] * self.rbf_list[i](x[j]) for i in range(self.timelag)])
else:
y = torch.cat([y, sum([self.init_weight[i] * self.rbf_list[i](x[j]) for i in range(self.timelag)])])
return y
def restore_parameters(model, best_model):
'''Move parameter values from best_model to model.'''
for params, best_params in zip(model.parameters(), best_model.parameters()):
        params.data = best_params.data
def train_RBFlayer(model, input_, target, lr, epochs, lookback = 5, verbose = False, device = device):
model.to(device)
loss_fn = nn.MSELoss(reduction='mean')
optimizer = torch.optim.Adam(model.parameters(), lr = lr)
train_loss_list = []
best_it = None
best_model = None
best_loss = np.inf
target_list = []
for j in range(len(target) - 2):
target_list.append((target[j+2] - target[j])/2)
loss_list = []
cause_list = []
for epoch in range(epochs):
cause = model(input_)
cause_list.append(cause)
grad = []
for i in range(len(cause) - 2):
grad.append((cause[i+2] - cause[i])/2)
loss1 = sum([loss_fn(grad[i], target_list[i]) for i in range(len(grad))])
loss2 = sum([loss_fn(cause[i], target[i]) for i in range(len(input_))])
loss = loss1 + loss2
loss.backward()
optimizer.step()
model.zero_grad()
loss_list.append(loss)
mean_loss = loss / len(grad)
train_loss_list.append(mean_loss)
if mean_loss < best_loss:
best_loss = mean_loss
best_it = epoch
best_model = deepcopy(model)
elif (epoch - best_it) == lookback:
if verbose:
print('Stopping early')
break
print("epoch {} cause loss {} :".format(epoch, loss / len(input_)))
print('gradient loss :', loss1/len(grad))
print('value loss :', loss2/len(input_))
best_cause = cause_list[best_it]
restore_parameters(model, best_model)
return best_model, loss_list, best_cause
```
# data generation
```
import random as rand
import numpy as np
def data_gen(timelag):
data = []
clt_list = []
std_list = []
for i in range(timelag):
clt = rand.random()
std = rand.random()
data_i = np.exp(-(i - clt) * (i - clt) / 2 * (std * std))
data.append(data_i)
clt_list.append(clt)
std_list.append(std)
return torch.tensor(data, device = device).float(), torch.tensor(clt_list, device = device).float(), torch.tensor(std_list, device = device).float()
data, clt_list, std_list = data_gen(10)
data
clt_list
std_list
```
# test1
```
import time
cause_list = []
start = time.time()
model = RBFnetwork(10)
best_model, loss_list, best_cause = train_RBFlayer(model, data, data, 0.001, 1000, device=device)
cause_list.append(best_cause.cpu().detach().numpy())
print("time :", time.time() - start)
print('-------------------------------------------------------------------------------------------')
import matplotlib.pyplot as plt
plt.plot(cause_list[0])
plt.plot(data.cpu().detach().numpy())
plt.show()
```
| github_jupyter |
# Exercise Sheet 1
# Exercise 1
```
year = 1998:2017
snowcover = c(25.0, 23.9, 25.1, 24.4, 21.2, 26.1, 23.2, 25.5, 24.9, 24.0, 21.3, 23.8, 26.1, 26.0, 26.1, 25.1, 22.2, 23.4, 22.6, 24.6)
snow = data.frame(years=year, covers=snowcover)
plot(snowcover~year, snow)
plot(snowcover~year, snow, type="l")
abline(lm(snowcover~year), col="red")
hist(snow$covers, xlab="Snow Height", main="Snow heights frequencies")
plot(log(snow$covers)~snow$years)
plot(log(snow$covers)~snow$years, type="l")
abline(lm(log(snow$covers)~snow$years), col="red")
hist(log(snow$covers), xlab="Snow Height", main="Snow heights frequencies")
snow2 = read.table("./snow.csv", header=TRUE, dec=".", sep=",")
snow2
```
## Exercise 2
```
data(ChickWeight)
ChickWeight
day10s = ChickWeight[ChickWeight[, "Time"] == 10,]
diets = split(day10s, day10s$Diet)
means = lapply(diets, function(d) mean(d$weight))
means
```
## Exercise 3
```
p_smaller_1.4 = punif(1.4, 1., 2.)
d_at_1.4 = dunif(1.4, min=1., max=2.)
quantiles = qunif(c(0.25, 0.75), 1., 2.)
p_smaller_1.4
d_at_1.4
quantiles
ex_b<-function(n) {
sample = runif(n, 1., 2.)
quantile(sample, probs=c(0.25, 0.75))
}
ex_b(20)
ex_b(100)
ex_b(1000)
qa<-function(samples) {
# Builtin: IQR(samples)
quantiles = quantile(samples, probs=c(0.25, 0.75), names=F)
quantiles[2] - quantiles[1]
}
qa(runif(100, 1.0, 2.))
```
### Exercise d): Derivation of the formula:
$$\Phi^{-1}_{\mu, \sigma^2}(0.75) - \Phi^{-1}_{\mu, \sigma^2}(0.25) = \mu + \sigma\Phi^{-1}_{0,1}(0.75) - \mu - \sigma\Phi^{-1}_{0,1}(0.25) = \sigma(\Phi^{-1}_{0,1}(0.75) - \Phi^{-1}_{0,1}(0.25)) \Rightarrow \sigma = \frac{\Phi^{-1}_{\mu, \sigma^2}(0.75) - \Phi^{-1}_{\mu, \sigma^2}(0.25)}{\Phi^{-1}_{0,1}(0.75) - \Phi^{-1}_{0,1}(0.25)}$$
We then estimate the numerator with the empirical interquartile range.
```
approx_sd<-function(n_samples, mu, sd) {
samples = rnorm(n_samples, mu, sd)
emp = qa(samples)
norm_q = qnorm(0.75) - qnorm(0.25)
c(sd(samples), emp/norm_q)
}
approx_sd(10000, 1, 0.5)
compute_sample_variance<-function(n, m, mu, sd) {
twoxn_samples = replicate(n, approx_sd(m, mu, sd))
apply(twoxn_samples, MARGIN=1, FUN=var)
}
compute_sample_variance(10000, 100, 1., 0.2)
```
## Bonus exercise
```
fib.iterative<-function(ns) {
n = max(ns)
a = 1
b = 0
rets = ns
  for (i in 1:n) {  # start at 1 so results match fib.recursive (F(1) = F(2) = 1)
c = a
a = b
b = b + c
idx = which(ns == i)
if (length(idx) > 0) {
rets[idx] = b
}
}
rets
}
print(fib.iterative(c(30,40,50)))
fib.recursive<-function(n) {
if (n == 1 || n == 2) {
return(1)
}
return(fib.recursive(n-1) + fib.recursive(n-2))
}
fib.recursive(30)
fib.recursive(35)
```
| github_jupyter |
```
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
```
**confusion matrix**
```
sns.set(font_scale=2)
# rows are the actual classes, columns are the predicted classes
array = [[5,0,0,0], # 5 cases of class A predicted as A
[0,10,0,0], # 10 cases of class B predicted as B
[0,0,15,0],
[0,0,0,5]]
df_cm = pd.DataFrame(array, index = [i for i in "ABCD"], columns = [i for i in "ABCD"])
df_cm
plt.figure(figsize = (7,5))
plt.title('confusion matrix')
sns.heatmap(df_cm, annot = True)
plt.show()
array = [[9,1,0,0],
[1,15,3,1],
[5,0,24,1],
[0,4,1,15]]
df_cm = pd.DataFrame(array, index = [i for i in "ABCD"], columns = [i for i in "ABCD"])
df_cm
plt.figure(figsize = (7,5))
plt.title('confusion matrix')
sns.heatmap(df_cm, annot = True)
plt.show()
```
* * *
**mnist CLassifier - randomforest**
```
from sklearn import datasets
from sklearn import tree
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
mnist = datasets.load_digits()
features, labels = mnist.data, mnist.target
print(np.shape((features)))
print(np.shape((labels)))
def cross_validation(classifier, features, labels):
cv_scores = []
for i in range(10):
scores = cross_val_score(classifier, features, labels, cv = 10, scoring='accuracy')
cv_scores.append(scores.mean())
return cv_scores
dt_cv_scores = cross_validation(tree.DecisionTreeClassifier(), features, labels)
rf_cv_scores = cross_validation(RandomForestClassifier(), features, labels)
cv_list = [['random forest', rf_cv_scores],
['decision tree', dt_cv_scores]]
df = pd.DataFrame(dict(cv_list))  # pd.DataFrame.from_items was removed in pandas 1.0
df.plot()
plt.show()
print(np.mean(dt_cv_scores))
print(np.mean(rf_cv_scores))
```
* * *
**KNN CLassifier**
```
import pandas
with open('DataSet/nba_2013.csv', 'r') as csvfile:
nba = pandas.read_csv(csvfile)
nba.head(15)
nba.columns
distance_columns = ['age', 'g', 'gs', 'mp', 'fg', 'fga',
'fg.', 'x3p', 'x3pa', 'x3p.', 'x2p', 'x2pa', 'x2p.', 'efg.', 'ft',
'fta', 'ft.', 'orb', 'drb', 'trb', 'ast', 'stl', 'blk', 'tov', 'pf',
'pts']
len(distance_columns)
import math
selected_player = nba[nba["player"]=="LeBron James"].iloc[0]
def euclidean_distance(row) :
inner_value = 0
for k in distance_columns :
inner_value += (selected_player[k]-row[k])**2
return math.sqrt(inner_value)
LeBron_distance = nba.apply(euclidean_distance, axis = 1)
LeBron_distance.head(15)
nba_numeric = nba[distance_columns]
nba_numeric.head()
nba_normalized = (nba_numeric - nba_numeric.mean())/nba_numeric.std()
nba_normalized.head()
from scipy.spatial import distance
nba_normalized.fillna(0, inplace=True) # inplace=True: modify the existing object (nba_normalized) in place
nba_normalized[nba["player"]=="LeBron James"]
LeBron_normalized = nba_normalized[nba["player"]=="LeBron James"]
euclidean_distances = nba_normalized.apply(lambda row : distance.euclidean(row, LeBron_normalized), axis =1)
euclidean_distances.head(15)
distance_frame = pandas.DataFrame(data = {"dist":euclidean_distances, "idx":euclidean_distances.index})
distance_frame.head(15)
distance_frame.sort_values("dist", inplace=True)
distance_frame.head(15)
distance_frame.iloc[1]["idx"]
distance_frame.iloc[1]
second_smallest = distance_frame.iloc[1]["idx"]
most_similar_to_Lebron = nba.loc[int(second_smallest)]["player"]
print("Player with the most similar stats: ", most_similar_to_Lebron)
```
* * *
**K-means clustering**
```
from sklearn import datasets
import pandas as pd
iris = datasets.load_iris()
labels = pd.DataFrame(iris.target)
labels.head()
labels.columns = ['labels']
data = pd.DataFrame(iris.data)
data.columns = ['Sepal_Length', 'Sepal_width', 'Petal_Lenght', 'Petal_width']
data.head(15)
data = pd.concat([data,labels], axis = 1)
data.head(15)
feature = data[['Sepal_Length', 'Sepal_width']]
feature.head(15)
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
model = KMeans(n_clusters = 3, algorithm='auto')
model.fit(feature)
predict = pd.DataFrame(model.predict(feature))
predict.columns = ['predict']
predict.head()
r = pd.concat([feature, predict], axis =1)
r.head()
plt.scatter(r['Sepal_Length'], r['Sepal_width'],c=r['predict'], alpha=0.5)
plt.show()
centers = pd.DataFrame(model.cluster_centers_,
columns = ['Sepal_Length', 'Sepal_width'])
centers
center_x = centers['Sepal_Length']
center_y = centers['Sepal_width']
plt.scatter(center_x, center_y, s=50, marker = 'D', c ='r')
plt.scatter(r['Sepal_Length'], r['Sepal_width'],c=r['predict'], alpha=0.5)
plt.show()
```
* * *
**pipeline**
Runs the scaler and the k-means model sequentially.
```
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
model = KMeans(n_clusters = 3)
scaler = StandardScaler()
pipeline = make_pipeline(scaler, model)
pipeline.fit(feature)
predict = pd.DataFrame(pipeline.predict(feature))
ks = range(1,10)
inertias = []
for k in ks:
model = KMeans(n_clusters = k)
model.fit(feature)
    inertias.append(model.inertia_)
# inertia_: the within-cluster sum of squares; use it to pick a reasonable number of clusters
plt.plot(ks, inertias, '-o')
plt.xlabel('number of clusters, k')
plt.ylabel('inertia')
plt.xticks(ks)
plt.show()
ct = pd.crosstab(data['labels'], r['predict'])
print(ct)
make_pipeline()
```
* * *
**PCA**
```
import pandas as pd
df = pd.DataFrame(columns=['calory', 'breakfast', 'lunch', 'dinner', 'exercise', 'body_shape'])
df.loc[0] = [1200, 1, 0, 0, 2, 'Skinny']
df.loc[1] = [2800, 1, 1, 1, 1, 'Normal']
df.loc[2] = [3500, 2, 2, 1, 0, 'Fat']
df.loc[3] = [1400, 0, 1, 0, 3, 'Skinny']
df.loc[4] = [5000, 2, 2, 2, 0, 'Fat']
df.loc[5] = [1300, 0, 0, 1, 2, 'Skinny']
df.loc[6] = [3000, 1, 0, 1, 1, 'Normal']
df.loc[7] = [4000, 2, 2, 2, 0, 'Fat']
df.loc[8] = [2600, 0, 2, 0, 0, 'Normal']
df.loc[9] = [3000, 1, 2, 1, 1, 'Fat']
df
X = df[['calory', 'breakfast', 'lunch', 'dinner', 'exercise']]
print(X)
Y = df[['body_shape']]
print(Y)
from sklearn.preprocessing import StandardScaler
x_std = StandardScaler().fit_transform(X)
x_std
x_std.shape
features = x_std.T
features.shape
covariance_matrix = np.cov(features) # covariance: X = (10,5) transposed to (5,10)
covariance_matrix
eig_vals, eig_vecs = np.linalg.eig(covariance_matrix)
print("The eigenvectors are:\n%s" % eig_vecs)
print("The eigenvalues are: %s" % eig_vals)
print(eig_vals[0]/sum(eig_vals))
x_std.shape
eig_vecs.T[0].shape
projected_X = x_std.dot(eig_vecs.T[0]) # project from 5-D down to 1-D
projected_X
res = pd.DataFrame(projected_X, columns = ['PC1'])
res['y-axis'] = 0.0
res['label'] = Y
res
import matplotlib.pyplot as plt
import seaborn as sns
sns.lmplot('PC1', 'y-axis', data = res, fit_reg = False, scatter_kws={"s":50}, hue = 'label')
plt.title('PCA result')
plt.show()
```
| github_jupyter |
```
import os
import numpy as np
import sys
import matplotlib.pyplot as plt
from matplotlib import rc
from matplotlib.pyplot import cm
from library.trajectory import Trajectory
# uzh trajectory toolbox
sys.path.append(os.path.abspath('library/rpg_trajectory_evaluation/src/rpg_trajectory_evaluation'))
import plot_utils as pu
%matplotlib inline
rc('font', **{'family': 'serif', 'serif': ['Cardo']})
rc('text', usetex=True)
```
### Parameters (to specify/set)
```
# directory where the data is saved
DATA_DIR = '/home/mayankm/my_projects/multiview_deeptam_3DV/multi-camera-deeptam/resources/data/cvg_cams'
# directory to save the output
RESULTS_DIR = os.path.abspath('eval')
# format in which to save the plots
FORMAT = '.png'
# set the camera indices to plot
CAM_IDXS = [0, 2, 4, 6, 8]
# set the reference camera (in case groundtruth is not available)
REF_CAM_ID = 0
# evaluation parameters
align_type = 'none' # choose from ['posyaw', 'sim3', 'se3', 'none']
align_num_frames = -1
```
### Variables to allow the plots to look nice
```
N = len(CAM_IDXS)
ALGORITHM_CONFIGS = []
for i in range(N):
ALGORITHM_CONFIGS.append('cam_%d' % CAM_IDXS[i])
# These are the labels that will be displayed for items in ALGORITHM_CONFIGS
PLOT_LABELS = { 'cam_0': 'Camera 0',
'cam_2': 'Camera 2',
'cam_4': 'Camera 4',
'cam_6': 'Camera 6',
'cam_8': 'Camera 8'}
PLOT_LABELS['cam_%d' % REF_CAM_ID] = PLOT_LABELS['cam_%d' % REF_CAM_ID] + ' (ref)'
# assign colors to different configurations
COLORS = {}
color = iter(cm.plasma(np.linspace(0, 0.75, N)))
for i in range(N):
COLORS['cam_%d' % CAM_IDXS[i]] = next(color)
```
### Defining the txt files with the pose information
```
# file name for reference trajectory
ref_traj_file = os.path.join(DATA_DIR, 'cam_%d' % REF_CAM_ID, 'groundtruth.txt')
# file names for camera trajectories
estimated_traj_files = []
for i in range(N):
# path to camera trajectory
estimated_traj_file = os.path.join(DATA_DIR, 'cam_%d' % CAM_IDXS[i], 'groundtruth.txt')
assert os.path.exists(estimated_traj_file), "No corresponding file exists: %s!" % estimated_traj_file
estimated_traj_files.append(estimated_traj_file)
```
# Main
```
print("Going to analyze the results in {0}.".format(DATA_DIR))
print("The plots will be saved in {0}.".format(RESULTS_DIR))
plots_dir = os.path.join(RESULTS_DIR, 'plots')
if not os.path.exists(plots_dir):
    os.makedirs(plots_dir)
print("#####################################")
print(">>> Start loading and preprocessing all trajectories...")
print("#####################################")
config_trajectories_list = []
for i in range(N):
# create instance of trajectory object
cur_traj = Trajectory(RESULTS_DIR, run_name='cam_%d' % CAM_IDXS[i], gt_traj_file=ref_traj_file, estimated_traj_file=estimated_traj_files[i], \
align_type=align_type, align_num_frames=align_num_frames)
config_trajectories_list.append(cur_traj)
print("#####################################")
print(">>> Start plotting results....")
print("#####################################")
p_gt_0 = config_trajectories_list[0].p_gt
fig1 = plt.figure(figsize=(10, 10))
ax1 = fig1.add_subplot(111, aspect='equal',
xlabel='x [m]', ylabel='y [m]')
fig2 = plt.figure(figsize=(8, 8))
ax2 = fig2.add_subplot(111, aspect='equal',
xlabel='x [m]', ylabel='z [m]')
# pu.plot_trajectory_top(ax1, p_gt_0, 'k', 'Groundtruth')
# pu.plot_trajectory_side(ax2, p_gt_0,'k', 'Groundtruth')
for i in range(N):
traj = config_trajectories_list[i]
p_es_0 = traj.p_es_aligned
alg = ALGORITHM_CONFIGS[i]
print('Plotting for %s' % alg)
# plot trajectory
pu.plot_trajectory_top(ax1, p_es_0, COLORS[alg], PLOT_LABELS[alg])
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
fig1.tight_layout()
# plot trajectory side
pu.plot_trajectory_side(ax2, p_es_0, COLORS[alg], PLOT_LABELS[alg])
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
fig2.tight_layout()
fig1.savefig(RESULTS_DIR + '/plots/trajectory_top_' + align_type + FORMAT,bbox_inches="tight")
plt.close(fig1)
fig2.savefig(RESULTS_DIR + '/plots/trajectory_side_' + align_type + FORMAT, bbox_inches="tight")
plt.close(fig2)
```
| github_jupyter |
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Goal" data-toc-modified-id="Goal-1"><span class="toc-item-num">1 </span>Goal</a></span></li><li><span><a href="#Var" data-toc-modified-id="Var-2"><span class="toc-item-num">2 </span>Var</a></span><ul class="toc-item"><li><span><a href="#Init" data-toc-modified-id="Init-2.1"><span class="toc-item-num">2.1 </span>Init</a></span></li></ul></li><li><span><a href="#DeepMAsED-SM" data-toc-modified-id="DeepMAsED-SM-3"><span class="toc-item-num">3 </span>DeepMAsED-SM</a></span><ul class="toc-item"><li><span><a href="#Config" data-toc-modified-id="Config-3.1"><span class="toc-item-num">3.1 </span>Config</a></span></li><li><span><a href="#Run" data-toc-modified-id="Run-3.2"><span class="toc-item-num">3.2 </span>Run</a></span></li></ul></li><li><span><a href="#Summary" data-toc-modified-id="Summary-4"><span class="toc-item-num">4 </span>Summary</a></span><ul class="toc-item"><li><span><a href="#Communities" data-toc-modified-id="Communities-4.1"><span class="toc-item-num">4.1 </span>Communities</a></span></li><li><span><a href="#Feature-tables" data-toc-modified-id="Feature-tables-4.2"><span class="toc-item-num">4.2 </span>Feature tables</a></span><ul class="toc-item"><li><span><a href="#No.-of-contigs" data-toc-modified-id="No.-of-contigs-4.2.1"><span class="toc-item-num">4.2.1 </span>No. of contigs</a></span></li><li><span><a href="#Misassembly-types" data-toc-modified-id="Misassembly-types-4.2.2"><span class="toc-item-num">4.2.2 </span>Misassembly types</a></span></li></ul></li></ul></li><li><span><a href="#sessionInfo" data-toc-modified-id="sessionInfo-5"><span class="toc-item-num">5 </span>sessionInfo</a></span></li></ul></div>
# Goal
* Replicate metagenome assemblies using the intra-species training genome dataset
* Richness = 0.1 (10% of all ref genomes used)
# Var
```
ref_dir = '/ebio/abt3_projects/databases_no-backup/DeepMAsED/GTDB_ref_genomes/intraSpec/'
ref_file = file.path(ref_dir, 'GTDBr86_genome-refs_train_clean.tsv')
work_dir = '/ebio/abt3_projects/databases_no-backup/DeepMAsED/train_runs/intra-species/diff_richness/n1000_r6_rich0p1/'
# params
pipeline_dir = '/ebio/abt3_projects/databases_no-backup/bin/deepmased/DeepMAsED-SM/'
```
## Init
```
library(dplyr)
library(tidyr)
library(ggplot2)
library(data.table)
source('/ebio/abt3_projects/software/dev/DeepMAsED/bin/misc_r_functions/init.R')
#' "cat {file}" in R
cat_file = function(file_name){
cmd = paste('cat', file_name, collapse=' ')
system(cmd, intern=TRUE) %>% paste(collapse='\n') %>% cat
}
```
# DeepMAsED-SM
## Config
```
config_file = file.path(work_dir, 'config.yaml')
cat_file(config_file)
```
## Run
```
(snakemake_dev) @ rick:/ebio/abt3_projects/databases_no-backup/bin/deepmased/DeepMAsED-SM
$ screen -L -S DM-intraS-rich0.1 ./snakemake_sge.sh /ebio/abt3_projects/databases_no-backup/DeepMAsED/train_runs/intra-species/diff_richness/n1000_r6_rich0p1/config.yaml cluster.json /ebio/abt3_projects/databases_no-backup/DeepMAsED/train_runs/intra-species/diff_richness/n1000_r6_rich0p1/SGE_log 20
```
# Summary
## Communities
```
comm_files = list.files(file.path(work_dir, 'MGSIM'), 'comm_wAbund.txt', full.names=TRUE, recursive=TRUE)
comm_files %>% length %>% print
comm_files %>% head
comms = list()
for(F in comm_files){
df = read.delim(F, sep='\t')
df$Rep = basename(dirname(F))
comms[[F]] = df
}
comms = do.call(rbind, comms)
rownames(comms) = 1:nrow(comms)
comms %>% dfhead
p = comms %>%
mutate(Perc_rel_abund = ifelse(Perc_rel_abund == 0, 1e-5, Perc_rel_abund)) %>%
group_by(Taxon) %>%
summarize(mean_perc_abund = mean(Perc_rel_abund),
sd_perc_abund = sd(Perc_rel_abund)) %>%
ungroup() %>%
mutate(neg_sd_perc_abund = mean_perc_abund - sd_perc_abund,
pos_sd_perc_abund = mean_perc_abund + sd_perc_abund,
neg_sd_perc_abund = ifelse(neg_sd_perc_abund <= 0, 1e-5, neg_sd_perc_abund)) %>%
mutate(Taxon = Taxon %>% reorder(-mean_perc_abund)) %>%
ggplot(aes(Taxon, mean_perc_abund)) +
geom_linerange(aes(ymin=neg_sd_perc_abund, ymax=pos_sd_perc_abund),
size=0.3, alpha=0.3) +
geom_point(size=0.5, alpha=0.4, color='red') +
labs(y='% abundance') +
theme_bw() +
theme(
axis.text.x = element_blank(),
panel.grid.major.x = element_blank(),
panel.grid.major.y = element_blank(),
panel.grid.minor.x = element_blank(),
panel.grid.minor.y = element_blank()
)
dims(10,2.5)
plot(p)
dims(10,2.5)
plot(p + scale_y_log10())
```
## Feature tables
```
feat_files = list.files(file.path(work_dir, 'map'), 'features.tsv.gz', full.names=TRUE, recursive=TRUE)
feat_files %>% length %>% print
feat_files %>% head
feats = list()
for(F in feat_files){
cmd = glue::glue('gunzip -c {F}', F=F)
df = fread(cmd, sep='\t') %>%
distinct(contig, assembler, Extensive_misassembly)
df$Rep = basename(dirname(dirname(F)))
feats[[F]] = df
}
feats = do.call(rbind, feats)
rownames(feats) = 1:nrow(feats)
feats %>% dfhead
```
### No. of contigs
```
feats_s = feats %>%
group_by(assembler, Rep) %>%
summarize(n_contigs = n_distinct(contig)) %>%
ungroup
feats_s$n_contigs %>% summary
```
### Misassembly types
```
p = feats %>%
mutate(Extensive_misassembly = ifelse(Extensive_misassembly == '', 'None',
Extensive_misassembly)) %>%
group_by(Extensive_misassembly, assembler, Rep) %>%
summarize(n = n()) %>%
ungroup() %>%
ggplot(aes(Extensive_misassembly, n, color=assembler)) +
geom_boxplot() +
scale_y_log10() +
labs(x='metaQUAST extensive mis-assembly', y='Count') +
coord_flip() +
theme_bw() +
theme(
axis.text.x = element_text(angle=45, hjust=1)
)
dims(8,4)
plot(p)
```
# sessionInfo
```
sessionInfo()
pipelineInfo(pipeline_dir)
```
| github_jupyter |
# Distributed Object Tracker RL training with Amazon SageMaker RL and RoboMaker
---
## Introduction
In this notebook, we show you how you can apply reinforcement learning to train a robot (named Waffle) to track and follow another robot (named Burger) by using the [Clipped PPO](https://coach.nervanasys.com/algorithms/policy_optimization/cppo/index.html) algorithm implementation in the [coach](https://ai.intel.com/r-l-coach/) toolkit, [Tensorflow](https://www.tensorflow.org/) as the deep learning framework, and [AWS RoboMaker](https://console.aws.amazon.com/robomaker/home#welcome) as the simulation environment.

---
## How it works?
The reinforcement learning agent (i.e. Waffle) learns to track and follow Burger by interacting with its environment, e.g., the visual world around it, taking an action in a given state to maximize the expected reward. The agent learns the optimal plan of actions through trial-and-error over multiple training episodes.
This notebook shows an example of distributed RL training across SageMaker and two RoboMaker simulation environments that perform the **rollouts** - execute a fixed number of episodes using the current model or policy. The rollouts collect agent experiences (state-transition tuples) and share this data with SageMaker for training. SageMaker updates the model policy, which is then used to execute the next sequence of rollouts. This training loop continues until the model converges, i.e. Waffle reliably learns to drive toward and reach Burger. More formally, we can define the problem in terms of the following:
1. **Objective**: Learn to drive toward and reach the Burger.
2. **Environment**: A simulator with Burger hosted on AWS RoboMaker.
3. **State**: The driving POV image captured by the Waffle's head camera.
4. **Action**: Six discrete steering wheel positions at different angles (configurable)
5. **Reward**: Reward is inversely proportional to distance from Burger. Waffle gets more reward as it get closer to the Burger. It gets a reward of 0 if the action takes it away from Burger.
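The rollout/update cycle described above can be sketched in plain Python; `policy`, `env_step`, and `update_policy` below are hypothetical stand-ins for the Coach policy and the RoboMaker simulation, not part of either SDK:

```python
def collect_rollout(policy, env_step, episode_len):
    """Run one episode with the current policy, recording
    (state, action, reward) experience tuples."""
    state, experiences = 0.0, []
    for _ in range(episode_len):
        action = policy(state)
        next_state, reward = env_step(state, action)
        experiences.append((state, action, reward))
        state = next_state
    return experiences

def training_loop(policy, env_step, update_policy, iterations, episode_len):
    """Alternate rollouts (the RoboMaker side) with policy updates (the SageMaker side)."""
    for _ in range(iterations):
        batch = collect_rollout(policy, env_step, episode_len)
        policy = update_policy(policy, batch)  # e.g. a Clipped PPO update
    return policy
```

In the real system the two halves run on different services and exchange experiences and model checkpoints through S3 rather than a direct function call.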
---
## Prerequisites
### Imports
To get started, we'll import the Python libraries we need, set up the environment with a few prerequisites for permissions and configurations.
You can run this notebook from your local host or from a SageMaker notebook instance. In both of these scenarios, you can run the following to launch a training job on `SageMaker` and a simulation job on `RoboMaker`.
```
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import Markdown
import time
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
from markdown_helper import *
```
### Setup S3 bucket
```
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = 's3://{}/'.format(s3_bucket) # SDK appends the job name and output folder
print("S3 bucket path: {}".format(s3_output_path))
```
### Define Variables
We define variables such as the job prefix for the training jobs, and an s3_prefix for storing the metadata required for synchronization between the training and simulation jobs.
```
# job name prefix
job_name_prefix = 'rl-object-tracker'
# create unique job name
job_name = s3_prefix = job_name_prefix + "-sagemaker-" + strftime("%y%m%d-%H%M%S", gmtime())
# Duration of job in seconds (5 hours)
job_duration_in_seconds = 3600 * 5
aws_region = sage_session.boto_region_name
print("S3 bucket path: {}{}".format(s3_output_path, job_name))
if aws_region not in ["us-west-2", "us-east-1", "eu-west-1"]:
raise Exception("This notebook uses RoboMaker which is available only in US East (N. Virginia), US West (Oregon) and EU (Ireland). Please switch to one of these regions.")
print("Model checkpoints and other metadata will be stored at: {}{}".format(s3_output_path, job_name))
```
### Create an IAM role
Either get the execution role when running from a SageMaker notebook `role = sagemaker.get_execution_role()` or, when running from local machine, use utils method `role = get_execution_role('role_name')` to create an execution role.
```
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role('sagemaker')
print("Using IAM role arn: {}".format(role))
```
### Permission setup for invoking AWS RoboMaker from this notebook
To enable this notebook to execute AWS RoboMaker jobs, we need to add a trust relationship to the notebook's default execution role.
```
display(Markdown(generate_help_for_robomaker_trust_relationship(role)))
```
## Configure VPC
Since SageMaker and RoboMaker have to communicate with each other over the network, both of these services need to run in VPC mode. This can be done by supplying subnets and security groups to the job launching scripts.
We will use the default VPC configuration for this example.
```
ec2 = boto3.client('ec2')
default_vpc = [vpc['VpcId'] for vpc in ec2.describe_vpcs()['Vpcs'] if vpc["IsDefault"] == True][0]
default_security_groups = [group["GroupId"] for group in ec2.describe_security_groups()['SecurityGroups'] \
if group["GroupName"] == "default" and group["VpcId"] == default_vpc]
default_subnets = [subnet["SubnetId"] for subnet in ec2.describe_subnets()["Subnets"] \
if subnet["VpcId"] == default_vpc and subnet['DefaultForAz']==True]
print("Using default VPC:", default_vpc)
print("Using default security group:", default_security_groups)
print("Using default subnets:", default_subnets)
```
A SageMaker job running in VPC mode cannot access S3 resources. So, we need to create a VPC S3 endpoint to allow S3 access from the SageMaker container. To learn more about VPC mode, please visit [this link.](https://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html)
> The cell below should be executed to create the VPC S3 endpoint only if you are running this example for the first time. If the execution fails due to insufficient permissions or other reasons, please create a VPC S3 endpoint manually by following [create-s3-endpoint.md](create-s3-endpoint.md) (found in the same folder as this notebook).
```
try:
route_tables = [route_table["RouteTableId"] for route_table in ec2.describe_route_tables()['RouteTables']\
if route_table['VpcId'] == default_vpc]
except Exception as e:
if "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
print("Trying to attach S3 endpoints to the following route tables:", route_tables)
assert len(route_tables) >= 1, "No route tables were found. Please follow the VPC S3 endpoint creation "\
"guide by clicking the above link."
try:
ec2.create_vpc_endpoint(DryRun=False,
VpcEndpointType="Gateway",
VpcId=default_vpc,
ServiceName="com.amazonaws.{}.s3".format(aws_region),
RouteTableIds=route_tables)
print("S3 endpoint created successfully!")
except Exception as e:
if "RouteAlreadyExists" in str(e):
print("S3 endpoint already exists.")
elif "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
raise e
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
```
## Setup the environment
The environment is defined in a Python file called "object_tracker_env.py", which can be found at `src/robomaker/environments/`. This file implements the gym interface for our Gazebo-based RoboMaker simulator. It is a common environment file used by both SageMaker and RoboMaker. The environment variable `NODE_TYPE` defines which node the code is running on, so expressions that have `rospy` dependencies are executed on RoboMaker only.
We can experiment with different reward functions by modifying `reward_function` in this file. Action space and steering angles can be changed by modifying the step method in `TurtleBot3ObjectTrackerAndFollowerDiscreteEnv` class.
### Configure the preset for RL algorithm
The parameters that configure the RL training job are defined in `src/robomaker/presets/object_tracker.py`. Using the preset file, you can define agent parameters to select the specific agent algorithm. We suggest using Clipped PPO for this example.
You can edit this file to modify algorithm parameters like learning_rate, neural network structure, batch_size, discount factor etc.
```
!pygmentize src/robomaker/presets/object_tracker.py
```
### Training Entrypoint
The training code is written in the file "training_worker.py", located in the `/src` directory. At a high level, it does the following:
- Uploads the SageMaker node's IP address.
- Starts a Redis server that receives agent experiences sent by the rollout worker(s) (the RoboMaker simulator).
- Trains the model every time a certain number of episodes has been received.
- Uploads the new model weights to S3. The rollout workers then update their model to execute the next set of episodes.
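Schematically, the trainer loop described above looks like the following sketch. All names here (`TrainingLoop`, `receive_episode`, etc.) are hypothetical illustrations, not the actual `training_worker.py` code, and a simple in-memory queue stands in for Redis:

```python
from collections import deque

# Hypothetical sketch of the rollout/trainer loop; not the actual
# training_worker.py code. A deque stands in for the Redis queue.
class TrainingLoop:
    def __init__(self, episodes_per_update=20):
        self.buffer = deque()
        self.episodes_per_update = episodes_per_update
        self.policy_version = 0

    def receive_episode(self, episode):
        """Called for each episode pushed by a rollout worker."""
        self.buffer.append(episode)
        if len(self.buffer) >= self.episodes_per_update:
            self.update_policy()

    def update_policy(self):
        batch = [self.buffer.popleft() for _ in range(len(self.buffer))]
        # ... a gradient update on `batch` would happen here ...
        self.policy_version += 1  # new weights would then be uploaded to S3

loop = TrainingLoop(episodes_per_update=2)
for ep in range(4):
    loop.receive_episode({"episode": ep})
```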
```
# Uncomment the line below to see the training code
#!pygmentize src/training_worker.py
```
## Train the model using Python SDK/ script mode
```
s3_location = "s3://%s/%s" % (s3_bucket, s3_prefix)
!aws s3 rm --recursive {s3_location}
# Modify the environment and preset files below and upload them if you want to use a custom environment and preset
!aws s3 cp src/robomaker/environments/ {s3_location}/environments/ --recursive --exclude ".ipynb_checkpoints*"
!aws s3 cp src/robomaker/presets/ {s3_location}/presets/ --recursive --exclude ".ipynb_checkpoints*"
```
First, we define the algorithm metrics that we want to capture from CloudWatch logs to monitor the training progress. These are algorithm-specific parameters and might change for different algorithms. We use [Clipped PPO](https://coach.nervanasys.com/algorithms/policy_optimization/cppo/index.html) for this example.
```
metric_definitions = [
# Training> Name=main_level/agent, Worker=0, Episode=19, Total reward=-102.88, Steps=19019, Training iteration=1
{'Name': 'reward-training',
'Regex': '^Training>.*Total reward=(.*?),'},
# Policy training> Surrogate loss=-0.32664725184440613, KL divergence=7.255815035023261e-06, Entropy=2.83156156539917, training epoch=0, learning_rate=0.00025
{'Name': 'ppo-surrogate-loss',
'Regex': '^Policy training>.*Surrogate loss=(.*?),'},
{'Name': 'ppo-entropy',
'Regex': '^Policy training>.*Entropy=(.*?),'},
# Testing> Name=main_level/agent, Worker=0, Episode=19, Total reward=1359.12, Steps=20015, Training iteration=2
{'Name': 'reward-testing',
'Regex': '^Testing>.*Total reward=(.*?),'},
]
```
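To sanity-check one of these patterns, we can run it against the sample log line quoted in the comments above:

```python
import re

# sample line copied from the comment in the metric definitions above
log_line = ("Training> Name=main_level/agent, Worker=0, Episode=19, "
            "Total reward=-102.88, Steps=19019, Training iteration=1")
match = re.search(r'^Training>.*Total reward=(.*?),', log_line)
reward_value = float(match.group(1))  # -> -102.88
```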
We use the RLEstimator for training RL jobs.
1. Specify the source directory where the environment, presets and training code are uploaded.
2. Specify the entry point as the training code.
3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL container.
4. Define the training parameters such as the instance count, instance type, job name, s3_bucket and s3_prefix for storing model checkpoints and metadata. **Only 1 training instance is supported for now.**
5. Set RLCOACH_PRESET to "object_tracker" for this example.
6. Define the metric definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker notebooks.
```
RLCOACH_PRESET = "object_tracker"
instance_type = "ml.c5.4xlarge"
estimator = RLEstimator(entry_point="training_worker.py",
source_dir='src',
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='0.11.0',
framework=RLFramework.TENSORFLOW,
role=role,
train_instance_type=instance_type,
train_instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
train_max_run=job_duration_in_seconds,
hyperparameters={"s3_bucket": s3_bucket,
"s3_prefix": s3_prefix,
"aws_region": aws_region,
"RLCOACH_PRESET": RLCOACH_PRESET,
},
metric_definitions = metric_definitions,
subnets=default_subnets,
security_group_ids=default_security_groups,
)
estimator.fit(job_name=job_name, wait=False)
```
### Start the Robomaker job
```
from botocore.exceptions import UnknownServiceError
robomaker = boto3.client("robomaker")
```
### Create Simulation Application
We first create a RoboMaker simulation application using the `object-tracker public bundle`. Please refer to [RoboMaker Sample Application Github Repository](https://github.com/aws-robotics/aws-robomaker-sample-application-objecttracker) if you want to learn more about this bundle or modify it.
```
bundle_s3_key = 'object-tracker/simulation_ws.tar.gz'
bundle_source = {'s3Bucket': s3_bucket,
's3Key': bundle_s3_key,
'architecture': "X86_64"}
simulation_software_suite={'name': 'Gazebo',
'version': '7'}
robot_software_suite={'name': 'ROS',
'version': 'Kinetic'}
rendering_engine={'name': 'OGRE',
'version': '1.x'}
simulation_application_bundle_location = "https://s3-us-west-2.amazonaws.com/robomaker-applications-us-west-2-11d8d0439f6a/object-tracker/object-tracker-1.0.74.0.1.0.105.0/simulation_ws.tar.gz"
!wget {simulation_application_bundle_location}
!aws s3 cp simulation_ws.tar.gz s3://{s3_bucket}/{bundle_s3_key}
!rm simulation_ws.tar.gz
app_name = "object-tracker-sample-application" + strftime("%y%m%d-%H%M%S", gmtime())
try:
response = robomaker.create_simulation_application(name=app_name,
sources=[bundle_source],
simulationSoftwareSuite=simulation_software_suite,
robotSoftwareSuite=robot_software_suite,
renderingEngine=rendering_engine
)
simulation_app_arn = response["arn"]
print("Created a new simulation app with ARN:", simulation_app_arn)
except Exception as e:
if "AccessDeniedException" in str(e):
display(Markdown(generate_help_for_robomaker_all_permissions(role)))
raise e
else:
raise e
```
### Launch the Simulation job on RoboMaker
We create [AWS RoboMaker](https://console.aws.amazon.com/robomaker/home#welcome) simulation jobs that simulate the environment and share this data with SageMaker for training.
```
num_simulation_workers = 1
environ_vars = {
    "MODEL_S3_BUCKET": s3_bucket,
    "MODEL_S3_PREFIX": s3_prefix,
    "ROS_AWS_REGION": aws_region,
    "MARKOV_PRESET_FILE": "object_tracker.py",
    "NUMBER_OF_ROLLOUT_WORKERS": str(num_simulation_workers)}
simulation_application = {"application": simulation_app_arn,
                          "launchConfig": {"packageName": "object_tracker_simulation",
                                           "launchFile": "distributed_training.launch",
                                           "environmentVariables": environ_vars}
                          }
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig
)
responses.append(response)
print("Created the following jobs:")
job_arns = [response["arn"] for response in responses]
for job_arn in job_arns:
print("Job ARN", job_arn)
```
### Visualizing the simulations in RoboMaker
You can visit the RoboMaker console to visualize the simulations or run the following cell to generate the hyperlinks.
```
display(Markdown(generate_robomaker_links(job_arns, aws_region)))
```
### Clean Up
Execute the cells below if you want to stop the RoboMaker and SageMaker jobs. This also removes the RoboMaker resources created during the run.
```
for job_arn in job_arns:
robomaker.cancel_simulation_job(job=job_arn)
sage_session.sagemaker_client.stop_training_job(TrainingJobName=estimator._current_job_name)
```
### Evaluation
```
environ_vars = {"MODEL_S3_BUCKET": s3_bucket,
                "MODEL_S3_PREFIX": s3_prefix,
                "ROS_AWS_REGION": aws_region,
                "NUMBER_OF_TRIALS": str(20),
                "MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET
                }
simulation_application = {"application": simulation_app_arn,
                          "launchConfig": {"packageName": "object_tracker_simulation",
                                           "launchFile": "evaluation.launch",
                                           "environmentVariables": environ_vars}
                          }
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig
)
print("Created the following job:")
print("Job ARN", response["arn"])
```
### Clean Up Simulation Application Resource
```
robomaker.delete_simulation_application(application=simulation_app_arn)
```
# Benchmark and Repositories
```
%matplotlib inline
import os
import matplotlib.pyplot as plt
import networkx as nx
import pandas as pd
default_edge_color = 'gray'
default_node_color = '#407cc9'
enhanced_node_color = '#f5b042'
enhanced_edge_color = '#cc2f04'
output_dir = "./"
def draw_graph(G, node_names={}, filename=None, node_size=50, layout = None):
pos_nodes = nx.spring_layout(G) if layout is None else layout(G)
nx.draw(G, pos_nodes, with_labels=False, node_size=node_size, edge_color='gray')
pos_attrs = {}
for node, coords in pos_nodes.items():
pos_attrs[node] = (coords[0], coords[1] + 0.08)
nx.draw_networkx_labels(G, pos_attrs, labels=node_names, font_family='serif')
plt.axis('off')
axis = plt.gca()
axis.set_xlim([1.2*x for x in axis.get_xlim()])
axis.set_ylim([1.2*y for y in axis.get_ylim()])
if filename:
plt.savefig(os.path.join(output_dir, filename), format="png")
# draw enhanced path on the graph
def draw_enhanced_path(G, path_to_enhance, node_names={}, filename=None, layout=None):
    path_edges = list(zip(path_to_enhance, path_to_enhance[1:]))
    plt.figure(figsize=(5,5), dpi=300)
    pos_nodes = nx.spring_layout(G) if layout is None else layout(G)
nx.draw(G, pos_nodes, with_labels=False, node_size=50, edge_color='gray')
pos_attrs = {}
for node, coords in pos_nodes.items():
pos_attrs[node] = (coords[0], coords[1] + 0.08)
nx.draw_networkx_labels(G, pos_attrs, labels=node_names, font_family='serif')
nx.draw_networkx_edges(G,pos_nodes,edgelist=path_edges, edge_color='#cc2f04', style='dashed', width=2.0)
plt.axis('off')
axis = plt.gca()
axis.set_xlim([1.2*x for x in axis.get_xlim()])
axis.set_ylim([1.2*y for y in axis.get_ylim()])
if filename:
plt.savefig(os.path.join(output_dir, filename), format="png")
```
### Simple Example of Graphs
We start with some simple graphs
```
complete = nx.complete_graph(n=7)
lollipop = nx.lollipop_graph(m=7, n=3)
barbell = nx.barbell_graph(m1=7, m2=4)
plt.figure(figsize=(15,6))
plt.subplot(1,3,1)
draw_graph(complete)
plt.title("Complete")
plt.subplot(1,3,2)
plt.title("Lollipop")
draw_graph(lollipop)
plt.subplot(1,3,3)
plt.title("Barbell")
draw_graph(barbell)
plt.savefig(os.path.join(output_dir, "SimpleGraphs.png"))
complete = nx.relabel_nodes(nx.complete_graph(n=7), lambda x: x + 0)
lollipop = nx.relabel_nodes(nx.lollipop_graph(m=7, n=3), lambda x: x+100)
barbell = nx.relabel_nodes(nx.barbell_graph(m1=7, m2=4), lambda x: x+200)
import numpy as np

def get_random_node(graph):
    return np.random.choice(graph.nodes)
```
## We compose simple graphs into one
```
allGraphs = nx.compose_all([complete, barbell, lollipop])
allGraphs.add_edge(get_random_node(lollipop), get_random_node(lollipop))
allGraphs.add_edge(get_random_node(complete), get_random_node(barbell))
draw_graph(allGraphs, layout=nx.kamada_kawai_layout)
```
#### Barabási–Albert Model
In the following we create and analyse some simple graphs generated by the Barabási–Albert model
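The core idea of the model is preferential attachment: each new node connects to existing nodes with probability proportional to their degree. A minimal plain-Python sketch of that mechanism (for illustration only; below we use the networkx generator):

```python
import random

# Toy preferential-attachment generator (illustrative sketch only).
def barabasi_albert_edges(n, m, seed=42):
    rng = random.Random(seed)
    # start from a small fully connected core of m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    targets = [v for e in edges for v in e]   # degree-weighted sampling pool
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:                # pick m distinct neighbours,
            chosen.add(rng.choice(targets))   # proportional to their degree
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])          # both endpoints gain degree
    return edges

edges = barabasi_albert_edges(n=100, m=1)
```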
```
BA_graph_small = nx.extended_barabasi_albert_graph(n=20,m=1,p=0,q=0)
draw_graph(BA_graph_small, layout=nx.circular_layout)
```
We analyse large Barabási–Albert graphs to investigate their ability to generate a power-law degree distribution
```
n = int(1e5)
bag = nx.extended_barabasi_albert_graph(n, m=1, p=0, q=0)
degree = dict(nx.degree(bag)).values()
bins = np.round(np.logspace(np.log10(min(degree)), np.log10(max(degree)), 10))
from collections import Counter
cnt = Counter(np.digitize(np.array(list(degree)), bins))
plt.figure(figsize=(15,6))
plt.subplot(1,2,1)
draw_graph(BA_graph_small, layout=nx.circular_layout)
plt.subplot(1,2,2)
x, y = list(zip(*[(bins[k-1], v/n) for k, v in cnt.items()]))
plt.plot(x, y, 'o'); plt.xscale("log"); plt.yscale("log")
plt.xlabel("Degree k")
plt.ylabel("P(k)")
plt.savefig(os.path.join(output_dir, "Barabasi_Albert.png"))
plt.figure(figsize=(15, 6))
plt.hist(degree, bins=bins)
plt.xscale("log")
plt.yscale("log")
```
Other simple graph benchmarks
```
import pandas as pd
graph = nx.florentine_families_graph()
nx.draw_kamada_kawai(graph, with_labels=True, node_size=20, font_size=14)
plt.savefig("Florentine.png")
```
### Benchmarks from the Network Data Repository
This dataset (and others) can be downloaded from http://networkrepository.com/. The datasets are generally in the MTX file format described in the book.
The dataset presented here is the collaboration network of Arxiv Astro Physics, which can be downloaded from http://networkrepository.com/ca-AstroPh.php.
```
from scipy.io import mmread
file = "ca-AstroPh.mtx"
adj_matrix = mmread(file)
graph = nx.from_scipy_sparse_matrix(adj_matrix)
degrees = dict(nx.degree(graph))
ci = nx.clustering(graph)
centrality = nx.centrality.betweenness_centrality(graph)
stats = pd.DataFrame({
"centrality": centrality,
"C_i": ci,
"degree": degrees
})
stats.head()
```
Here we run some simple analyses of the DataFrame we generated, to see correlations between centrality, clustering coefficient and degree.
```
plt.plot(stats["centrality"], stats["degree"], 'o')
plt.xscale("log")
plt.yscale("log")
plt.plot(stats["centrality"], stats["C_i"], 'o')
plt.xscale("log")
plt.yscale("log")
```
### Ego-network
Here we plot the ego-network of the most-connected node, which has id 6933. However, even this network looks a bit messy, since it has hundreds of nodes. We therefore sample randomly, or based on centrality/clustering coefficient, in order to plot a relevant subgraph.
```
neighbors = [n for n in nx.neighbors(graph, 6933)]
sampling = 0.1
nTop = round(len(neighbors)*sampling)
idx = {
"random": stats.loc[neighbors].sort_index().index[:nTop],
"centrality": stats.loc[neighbors].sort_values("centrality", ascending=False).index[:nTop],
"C_i": stats.loc[neighbors].sort_values("C_i", ascending=False).index[:nTop]
}
def plotSubgraph(graph, indices, center = 6933):
draw_graph(
nx.subgraph(graph, list(indices) + [center]),
layout = nx.kamada_kawai_layout
)
plt.figure(figsize=(15,6))
for ith, title in enumerate(["random", "centrality", "C_i"]):
plt.subplot(1,3,ith+1)
plotSubgraph(graph, idx[title])
plt.title(title)
plt.savefig(os.path.join(output_dir, "PhAstro"))
```
### Data to Gephi
Otherwise, we could also export the data from networkx in order to plot and analyse it using the Gephi software.
```
nx.write_gexf(graph, 'ca-AstroPh.gexf')
```
# This jupyter notebook contains examples of
- some basic functions related to Global Distance Test (GDT) analyses
- local accuracy plot
```
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import MDAnalysis as mda
import pyrexMD.misc as misc
import pyrexMD.core as core
import pyrexMD.topology as top
import pyrexMD.analysis.analyze as ana
import pyrexMD.analysis.gdt as gdt
```
We define MDAnalysis universes to handle data. In this case we define:
- ref: universe with reference structure
- mobile: universe with trajectory
```
pdb = "files/traj_rna/4tzx_ref.pdb"
tpr = "files/traj_rna/traj_rna.tpr"
traj = "files/traj_rna/traj_rna_cat.xtc"
ref = mda.Universe(pdb)
mobile = mda.Universe(tpr, traj)
tv = core.iPlayer(mobile)
tv()
```
# Global Distance Test (GDT) Analysis
First we norm and align the universes (shift residue ids and atom ids), then run the Global Distance Test
```
# first norm and align universes
top.norm_and_align_universe(mobile, ref)
# run GDT using a selection index string for correct mapping
GDT = gdt.GDT_rna(mobile, ref)
GDT_percent, GDT_resids, GDT_cutoff, RMSD, FRAME = GDT
```
Now we can calculate individual GDT scores
- TS: Total Score
- HA: High Accuracy
```
GDT_TS = gdt.get_GDT_TS(GDT_percent)
GDT_HA = gdt.get_GDT_HA(GDT_percent)
```
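As a reminder, GDT scores are conventionally averages of the percentages of residues fitting under fixed distance cutoffs: 1, 2, 4 and 8 Å for GDT_TS, and 0.5, 1, 2 and 4 Å for GDT_HA. A standalone sketch of that averaging (with hypothetical percentages; this is not the pyrexMD implementation):

```python
def gdt_score(percent_under_cutoff, cutoffs):
    """Average the percentages of residues under each distance cutoff (in Angstrom)."""
    return sum(percent_under_cutoff[c] for c in cutoffs) / len(cutoffs)

# hypothetical per-cutoff percentages for a single frame
percent = {0.5: 20.0, 1: 40.0, 2: 60.0, 4: 80.0, 8: 95.0}
gdt_ts = gdt_score(percent, cutoffs=[1, 2, 4, 8])    # Total Score
gdt_ha = gdt_score(percent, cutoffs=[0.5, 1, 2, 4])  # High Accuracy
```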
We can print the scores in a table to take a quick look at the content
```
frames = [i for i in range(len(GDT_TS))]
misc.cprint("GDT TS GDT HA frame", "blue")
_ = misc.print_table([GDT_TS, GDT_HA, frames], verbose_stop=10, spacing=10)
```
Alternatively, we can first rank the scores and print the table sorted by rank
```
SCORES = gdt.GDT_rank_scores(GDT_percent, ranking_order="GDT_TS", verbose=False)
GDT_TS_ranked, GDT_HA_ranked, GDT_ndx_ranked = SCORES
misc.cprint("GDT TS GDT HA frame", "blue")
_ = misc.print_table([GDT_TS_ranked, GDT_HA_ranked, GDT_ndx_ranked], spacing=10, verbose_stop=10)
```
To plot the GDT_TS curve we can use a generalized PLOT function:
```
fig, ax = ana.PLOT(xdata=frames, ydata=GDT_TS, xlabel="Frame", ylabel="GDT TS")
```
Histograms are also important, as they can be used to extract probabilities of protein conformations
```
hist = ana.plot_hist(GDT_TS, n_bins=20, xlabel="GDT TS", ylabel="Counts")
```
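Normalizing histogram counts by their total is what turns counts into the empirical probabilities mentioned above; a small standalone sketch (with made-up scores):

```python
import numpy as np

# made-up GDT-like scores for illustration
scores = np.array([0.2, 0.4, 0.4, 0.6, 0.6, 0.6, 0.8, 0.8])
counts, edges = np.histogram(scores, bins=4, range=(0.0, 1.0))
probs = counts / counts.sum()   # normalized counts = empirical probability per bin
```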
# Local Accuracy Plot
This figure shows the local accuracy of models at specified frames, identifying which parts of a structure are well or poorly refined.
```
# edit text box positions of labels "Frame", "TS", "HA"
text_pos_kws = {"text_pos_Frame": [-33.6, -0.3],
"text_pos_TS": [-16.0, -0.3],
"text_pos_HA": [-7.4, -0.3],
"font_scale": 1.0,
"show_frames": True,
"vmax": 14}
# plot
A = gdt.plot_LA_rna(mobile, ref, GDT_TS_ranked, GDT_HA_ranked, GDT_ndx_ranked, **text_pos_kws)
```
# Text Using Markdown
**If you double click on this cell**, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using [Markdown](http://daringfireball.net/projects/markdown/syntax), which is a way to format text using headers, links, italics, and many other options. Hit _shift_ + _enter_ or _shift_ + _return_ on your keyboard to show the formatted text again. This is called "running" the cell, and you can also do it using the run button in the toolbar.
# Code cells
One great advantage of IPython notebooks is that you can show your Python code alongside the results, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. The following cell is a code cell.
```
# Hit shift + enter or use the run button to run this cell and see the results
print('hello world')
# The last line of every code cell will be displayed by default,
# even if you don't print it. Run this cell to see how this works.
2 + 2 # The result of this line will not be displayed
3 + 3 # The result of this line will be displayed, because it is the last line of the cell
```
# Nicely formatted results
IPython notebooks allow you to display nicely formatted results, such as plots and tables, directly in
the notebook. You'll learn how to use the following libraries later on in this course, but for now here's a
preview of what IPython notebook can do.
```
# If you run this cell, you should see the values displayed as a table.
# Pandas is a software library for data manipulation and analysis. You'll learn to use it later in this course.
import pandas as pd
df = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 3, 5, 7]})
df
# If you run this cell, you should see a scatter plot of the function y = x^2
%pylab inline
import matplotlib.pyplot as plt
xs = range(-30, 31)
ys = [x ** 2 for x in xs]
plt.scatter(xs, ys)
```
# Creating cells
To create a new **code cell**, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created.
To create a new **markdown cell**, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart buttons.
# Re-running cells
If you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, "In [5]". The third cell should output the message "Intro to Data Analysis is awesome!"
```
class_name = "Intro to Data Analysis"
message = class_name + " is awesome!"
message
```
Once you've run all three cells, try modifying the first one to set `class_name` to your name, rather than "Intro to Data Analysis", so you can print that you are awesome. Then rerun the first and third cells without rerunning the second.
You should have seen that the third cell still printed "Intro to Data Analysis is awesome!" That's because you didn't rerun the second cell, so even though the `class_name` variable was updated, the `message` variable was not. Now try rerunning the second cell, and then the third.
You should have seen the output change to "*your name* is awesome!" Often, after changing a cell, you'll want to rerun all the cells below it. You can do that quickly by clicking "Cell > Run All Below".
One final thing to remember: if you shut down the kernel after saving your notebook, the cells' output will still show up as you left it at the end of your session when you start the notebook back up. However, the state of the kernel will be reset. If you are actively working on a notebook, remember to re-run your cells to set up your working environment to really pick up where you last left off.
# Amazon product data mapping
```
# importing libraries
from fuzzywuzzy import fuzz
import pandas as pd
import re
import csv
import warnings
warnings.filterwarnings('ignore')
# Amazon product dataset
amazon = pd.read_csv('AmazonProductdata.csv')
# change columns name
amazon.rename(columns={'Name': 'Amazon_Name',
'Synonyms': 'Amazon_Synonyms',
'SalesPrice': 'Amazon_SalesPrice',
'OriginalPrice': 'Amazon_OriginalPrice',
'Rating': 'Amazon_Rating',
'ProductLink': 'Amazon_ProductLink',
'ImageLink': 'Amazon_ImageLink'}, inplace=True)
# Flipkart product dataset
flipkart = pd.read_csv('FlipkartDataset.csv')
# change columns name
flipkart.rename(columns={'Name': 'Flipkart_Name',
'Synonyms': 'Flipkart_Synonyms',
'SalesPrice': 'Flipkart_SalesPrice',
'OriginalPrice': 'Flipkart_OriginalPrice',
'Rating': 'Flipkart_Rating',
'ProductLink': 'Flipkart_ProductLink',
'ImageLink': 'Flipkart_ImageLink'}, inplace=True)
# Snapdeal product dataset
snapdeal= pd.read_csv('SnapdealDataset.csv')
# change columns name
snapdeal.rename(columns={'Name': 'Snapdeal_Name',
'Synonyms': 'Snapdeal_Synonyms',
'SalesPrice': 'Snapdeal_SalesPrice',
'OriginalPrice': 'Snapdeal_OriginalPrice',
'Rating': 'Snapdeal_Rating',
'ProductLink': 'Snapdeal_ProductLink',
'ImageLink': 'Snapdeal_ImageLink'}, inplace=True)
# Remove all special characters (keep only letters, digits and spaces)
amazon_name = amazon['Amazon_Synonyms'].str.replace(r"[^0-9a-zA-Z ]", '', regex=True)
amazon_name_lst = amazon_name.to_list()
# remove duplicate string
new_amazon_name_lst = []
for lst in amazon_name_lst:
new_amazon_lst = (' '.join(dict.fromkeys(lst.split())))
new_amazon_name_lst.append(new_amazon_lst)
amazon.insert(1, "New_Amazon_Name",new_amazon_name_lst)
amazon.head()
# Remove parentheses and commas
flipkart_name = flipkart['Flipkart_Name'].str.replace(r"[(),]", '', regex=True)
flipkart.insert(1, "New_Flipkart_Name",flipkart_name, True)
flipkart.head()
# Remove parentheses and commas
snapdeal_name = snapdeal['Snapdeal_Name'].str.replace(r"[(),]", '', regex=True)
snapdeal.insert(1, "New_Snapdeal_Name",snapdeal_name, True)
snapdeal.head()
def get_match(amazon):
"""Extract and Return matched value using fuzzywuzzy"""
dictionary_name = {}
try:
# iterate flipkart product name
for product_name in flipkart['New_Flipkart_Name']:
# string matching using token_set_ratio function
match = fuzz.token_set_ratio(amazon, product_name)
if match >= 85:
dictionary_name[product_name] = match
Keymax = max(dictionary_name, key=dictionary_name.get)
# return max matched value
return Keymax
    except:
        # return an empty string if nothing matched
        return ''
# store matched values
flipkart_result = []
# Iterate over Amazon product names
for item in amazon['New_Amazon_Name']:
record = get_match(item)
flipkart_result.append(record)
amazon.insert(2, "New_Flipkart_Name", flipkart_result)
def get_match(amazon):
"""Extract and Return matched value using fuzzywuzzy"""
dictionary_name = {}
    try:
# iterate snapdeal product name
for product_name in snapdeal['New_Snapdeal_Name']:
# string matching using token_set_ratio function
match = fuzz.token_set_ratio(amazon,product_name)
            if match >= 85:
dictionary_name[product_name] = match
Keymax = max(dictionary_name, key=dictionary_name.get)
# return max matched value
return Keymax
    except:
        # return an empty string if nothing matched
        return ''
# store matched values
snapdeal_result = []
# Iterate over Amazon product names
for item in amazon['New_Amazon_Name']:
record = get_match(item)
snapdeal_result.append(record)
amazon.insert(3, "New_Snapdeal_Name",snapdeal_result)
# Merge snapdeal and flipkart data in amazon dataset
data = amazon.merge(flipkart,on='New_Flipkart_Name',how='left')
new_data = data.merge(snapdeal,on='New_Snapdeal_Name',how='left')
# drop unwanted columns
new_data = new_data.drop(['Id','ID_x','ID_y','New_Amazon_Name','New_Flipkart_Name','New_Snapdeal_Name'], axis = 1)
# rename columns
new_data.rename(columns={'Ratings': 'Amazon_Rating',
'Availability': 'Amazon_Availability',
'Availibility': 'Snapdeal_Availability',
}, inplace=True)
# There are the columns
new_data.columns
# Save dataframe in csv file
new_data.to_csv('AmazonMappedData.csv', index=False)
```
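The `token_set_ratio` scorer used above is robust to word order and to extra tokens. A simplified stdlib approximation of the idea (the real fuzzywuzzy implementation differs in its details) compares the shared tokens against each full sorted token set and keeps the best score:

```python
from difflib import SequenceMatcher

def ratio(a, b):
    # 0-100 similarity score, analogous to fuzz.ratio
    return round(100 * SequenceMatcher(None, a, b).ratio())

def token_set_ratio(s1, s2):
    """Simplified token-set matcher: order-insensitive, tolerates extra tokens."""
    t1, t2 = set(s1.lower().split()), set(s2.lower().split())
    inter = " ".join(sorted(t1 & t2))
    full1 = " ".join(sorted(t1))
    full2 = " ".join(sorted(t2))
    return max(ratio(inter, full1), ratio(inter, full2), ratio(full1, full2))

score = token_set_ratio("Apple iPhone 12 64GB Black", "iPhone 12 Black Apple")
```

Because the second string's tokens are a subset of the first's, the shared tokens match one side exactly and the score is maximal, which is why such pairs clear the 85 threshold used above.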
# Data Management
This section applies once your data has been read in. It covers basic operations (including statistical functions), data management tasks (sorting, merging, subsetting, random sampling, aggregation, reshaping, etc.), string handling, and dealing with exceptional values (NA/Inf/NaN). It also covers functional-programming helpers such as apply().
## Mathematical Functions
Mathematical operators and the functions needed for statistics.
### Mathematical Operators
| Arithmetic | Exponentiation | Modulus | Integer division |
| --- | --- | --- | --- |
| +, -, \*, / | ^ or \*\* | %% | %/% |
Example:
```
a <- 2 ^ 3
b <- 5 %% 2
c <- 5 %/% 2
print(c(a, b, c))
```
### Basic Math Functions
- Absolute value: abs()
- Square root: sqrt()
- Trigonometric: sin(), cos(), tan(), acos(), asin(), atan()
- Logarithms:
    - log(x, base=n): logarithm of x to base n
    - log10(x): base-10 logarithm
- Exponential: exp()
- Rounding:
    - Round up: ceiling()
    - Round down: floor()
    - Truncate toward zero: trunc()
    - Round to N decimal places: round(x, digits=N)
    - Round to N significant digits: signif(x, digits=N)
### Statistics, Probability and Random Numbers
For descriptive statistics and more, see the [Descriptive Statistics article](DescriptiveStatistics.ipynb).
#### Statistical Functions
Commonly used statistical functions:
- Mean: mean()
- Median: median()
- Standard deviation: sd()
- Variance: var()
- Median absolute deviation: mad(x, center=median(x), constant=1.4826, ...), computed as:
$$ \mathrm{mad}(x) = \mathrm{constant} \cdot \mathrm{Median}(|x - \mathrm{center}|)$$
- Quantiles: quantile(x, probs), e.g. quantile(x, c(.3, .84)) returns the 30% and 84% quantiles of x.
- Extremes: min() & max()
- Range: range(x), e.g. range(c(1, 2, 3)) gives c(1, 3). For the spread, use diff(range(x)).
- Differences: diff(x, lag=1). The lag argument sets the number of lags; default 1.
- Standardization: scale(x, center=TRUE, scale=TRUE). Use scale(x) * SD + C to obtain a standardized result with standard deviation SD and mean C.
#### Probability Functions
Commonly used probability distributions:
- Normal: norm
- Poisson: pois
- Uniform: unif
- Beta: beta
- Binomial: binom
- Cauchy: cauchy
- Chi-squared: chisq
- Exponential: exp
- F: f
- t: t
- Gamma: gamma
- Geometric: geom
- Hypergeometric: hyper
- Log-normal: lnorm
- Logistic: logis
- Multinomial: multinom
- Negative binomial: nbinom
Writing a distribution's abbreviation above as *abbr*, the corresponding probability functions are:
1. **Density function**: d{abbr}(), e.g. dnorm() for the normal distribution
2. **Distribution function**: p{abbr}()
3. **Quantile function**: q{abbr}()
4. **Random number generation**: r{abbr}(), e.g. the commonly used runif() for uniform variates
#### Examples
runif() generates pseudo-random numbers uniformly distributed on $[0, 1]$. set.seed() fixes the random seed so the code is reproducible. Note that **its scope is only the next random-number call that follows it.**
```
set.seed(123)
print(runif(3))
# area under the standard normal curve to the left of 1.96
pnorm(1.96)
# 0.9 quantile of a normal distribution with mean 500 and sd 100
qnorm(.9, mean=500, sd=100)
# generate 3 normal random numbers with mean 50 and sd 10
set.seed(123)
print(rnorm(3, mean=50, sd=10))
```
## Data Frame Operations
Data frames are the most commonly used data type. Below are some practical scenarios that come up when working with data frames, along with solutions.
### Row and Column Operations
#### Creating Columns
Creating a new column (variable) is a very common operation. Say we have a data frame df and want to add a new column on the right equal to the sum of the two existing columns.
```
df = data.frame(x1=c(1, 3, 5), x2=c(2, 4, 6))
# declare a new column directly with the dollar sign
df$sumx <- df$x1 + df$x2
df
# or use the transform function
df <- transform(df, sumx2=x1+x2)
df
```
#### Renaming
```
colnames(df)[4] <- "SUM"
print(colnames(df))
```
#### Selecting/Dropping: subset()
```
# select the first two columns
df[,1:2] # or df[c("x1", "x2")]
# drop the column sumx
df <- df[!names(df) == "sumx"]
df
# drop the third column
df <- df[-c(3)] # or df[c(-3)]
df
```
Selecting rows works analogously to selecting columns:
```
# select observations (rows) with x1 > 2 and x2 even
df[df$x1 > 2 & df$x2 %% 2 ==0,]
```
Finally, the subset() command offers a simple, direct approach. First, a slightly more complex dataset:
```
DF <- data.frame(age = c(22, 37, 28, 33, 43),
gender = c(1, 2, 1, 2, 1),
q1 = c(1, 5, 3, 3, 2),
q2 = c(4, 4, 5, 3, 1),
q3 = c(3, 2, 4, 3, 1))
DF$gender <- factor(DF$gender, labels=c("Male", "Female"))
DF
# select observations with age between 25 and 40
# and keep only the variables age through q2
subset(DF, age > 25 & age < 40, select=age:q2)
```
#### Merging Columns (Horizontal)
If you have two data frames that share one or more common variables, merge() performs an inner join on those variables.
```
df1 <- data.frame(ID=c(1, 2, 3), Sym=c("A", "B", "C"), Oprtr=c("x", "y", "z"))
df2 <- data.frame(ID=c(1, 3, 2), Oprtr=c("x", "y", "z"))
# merge on the ID column
merge(df1, df2, by="ID")
# only one row agrees on both ID and Oprtr, so the rest are dropped
merge(df1, df2, by=c("ID", "Oprtr"))
```
Or simply combine them with cbind().
```
# combine directly. Note: with duplicated column names, the right-hand one is ignored when accessing by name
cbind(df1, df2)
```
#### Appending Rows (Vertical)
This amounts to appending observations. Both data frames must have **the same variables**, although their order may differ. If the variables differ:
- drop the extra variables; or
- in the data frame missing a variable, add a variable of that name filled with NA.
```
df1 <- data.frame(ID=c(1, 2, 3), Sym=c("A", "B", "C"), Oprtr=c("x", "y", "z"))
df2 <- data.frame(ID=c(1, 3, 2), Oprtr=c("x", "y", "z"))
df2$Sym <- NA
rbind(df1, df2)
```
### Logical Filtering
Logical conditions can be used to filter data, select a subset, or modify a subset uniformly. Some of the earlier examples already used this.
```
df$x3 <- c(7, 8, 9)
# replace the odd values in column x3 with NA
df$x3[df$x3 %% 2 == 1] <- NA
df
df$y <- c(7, 12, 27)
# mark all values below 3 as NaN
# mark all values above 10 as +/-Inf according to parity
df[df < 3] <- NaN
df[df > 10 & df %% 2 == 1] <- Inf
df[df > 10 & df %% 2 == 0] <- -Inf
df
```
### Sorting
Sorting uses the order() command.
```
df <- data.frame(age =c(22, 37, 28, 33, 43),
gender=c(1, 2, 1, 2, 1))
df$gender <- factor(df$gender, labels=c("Male", "Female"))
# sort by gender ascending, and by age descending within each gender
df[order(df$gender, -df$age),]
```
### Random Sampling
Randomly drawing samples from an existing dataset is common practice: for example, one part is used to build a predictive model and another to validate it.
```r
# without replacement, draw a sample of size 3 from the observations of df
df[sample(1:nrow(df), 3, replace=FALSE), ]
```
R packages for random sampling include sampling and survey; I may cover them in a separate article in this series.
### SQL Statements
In R, the sqldf package lets you manipulate data frames directly with SQL statements. An example from the book:
```r
newdf <- sqldf("select * from mtcars where carb=1 order by mpg", row.names=TRUE)
```
We won't go further into this here.
## String Handling
String-handling functions in R include the following:
### General Functions
| Function | Meaning |
| --- | --- |
| nchar(x) | length of the string |
| substr(x, start, stop) | extract a substring |
| grep(pattern, x, ignore.case=FALSE, fixed=FALSE) | regex search, returning the indices of matches. With fixed=TRUE, searches for a literal string instead of a regex. |
| grepl() | like grep(), but returns a logical vector. |
| sub(pattern, replacement, x, ignore.case=FALSE, fixed=FALSE) | search x for the regex and replace it with replacement. With fixed=TRUE, searches for a literal string instead of a regex |
| strsplit(x, split, fixed=FALSE) | split the elements of character vector x at split, returning a list. |
| paste(x1, x2, ..., sep="") | concatenate strings with separator sep. Can also build repeated strings: `paste("x", 1:3, sep="")` |
| toupper(x) | convert to upper case |
| tolower(x) | convert to lower case |
Some examples. First, regular expressions:
```
streg <- c("abc", "abcc", "abccc", "abc5")
re1 <- grep("abc*", streg)
re2 <- grep("abc\\d", streg) # note: backslashes must be doubled to escape them in R
re3 <- sub("[a-z]*", "Hey", streg)
re4 <- sub("[a-z]*\\d", "NEW", streg)
print(list(re1, re2, re3, re4))
```
Next, splitting and concatenating strings. Note the very handy use of paste() here:
```
splt <- strsplit(streg, "c") # the separator "c" is not kept in the result
cat1 <- paste("a", "b", "c", sep="-")
cat2 <- paste("x", 1:3, sep="") # very useful for generating column names
print(list(splt, cat1, cat2))
```
### Date Strings
Like other types, date strings can be handled via the as.Date() function. The format codes are:
| Code | Meaning | Generic example | Chinese example |
| --- | --- | --- | --- |
| %d | day (1~31) | 22 | 22 |
| %a | abbreviated weekday | Mon | 周一 |
| %A | full weekday | Monday | 星期一 |
| %m | month (1~12) | 10 | 10 |
| %b | abbreviated month | Jan | 1月 |
| %B | full month | January | 一月 |
| %y | two-digit year | 17 | 17 |
| %Y | four-digit year | 2017 | 2017 |
```
# for string data x, usage: as.Date(x, format=, ...)
dates <- as.Date("01-28-2017", format="%m-%d-%Y")
print(dates)
```
To get the current date or time there are two formats to choose from, and format() can help with output.
```
# Sys.Date() returns a standard date, precise to the day
dates1 <- Sys.Date()
format(dates1, format="%A") # an output format can be specified
# date() returns a detailed string, precise to the second
dates2 <- date()
dates2
```
The difftime() function computes time differences. Its units can be one of: "auto", "secs", "mins", "hours", "days", "weeks".
As of the last update of this article, I am 1100+ weeks old. Hmm... that doesn't sound like much.
```
dates1 <- as.Date("1994-11-23")
dates2 <- Sys.Date()
difftime(dates2, dates1, units="weeks")
```
## Handling Abnormal Values
Abnormal values come in three kinds:
- NA: missing value.
- Inf: positive infinity; -Inf denotes negative infinity. **Infinities can be compared with numbers**, e.g. -Inf < 3 is true.
- NaN: not a number, e.g. 0/0.
Use the is.na() function to test whether a dataset contains NA or NaN; it returns a matrix. Note that NaN also counts as missing.
```
is.na(df)
```
There are similar functions for testing Inf and NaN, but they only work on one-dimensional data:
```
print(c(is.infinite(c(Inf, -Inf)), is.nan(NA)))
```
Before any data processing, handling NA values is a necessary step. If certain values are extreme outliers, you may also want to mark them as NA. Row removal is the bluntest approach.
```
# remove rows containing NA
df <- na.omit(df)
df
```
## Aggregation and Reshaping
### Transposing
The usual way to transpose is the t() function:
```
df = matrix(1:6, nrow=2, ncol=3)
t(df)
```
### Aggregation: aggregate()
This function is very powerful. Syntax:
aggregate(x, by=list(), FUN)
where x is the data object to aggregate, by gives the grouping columns, and FUN is the scalar function to apply.
```
# this example is adapted from R's official help for aggregate()
df <- data.frame(v1 = c(1,3,5,7,8,3,5,NA,4,6,7,9),
v2 = c(11,33,55,77,88,33,55,NA,44,55,77,99) )
by1 <- c("red", "blue", 1, 2, NA, "big", 1, 2, "red", 1, NA, 12)
by2 <- c("wet", "dry", 99, 95, NA, "damp", 95, 99, "red", 99, NA, NA)
# aggregate the original data df by by1 & by2
# note that (by1, by2)=(1, 99) matches both (v1, v2)=(5, 55) and (6, 55)
# so the third row has v1 = mean(c(5, 6)) = 5.5
aggregate(x = df, by = list(b1=by1, b2=by2), FUN = "mean")
# use a formula to select columns of the original data; only these are aggregated
# note: one observation with NA in v1 is removed
aggregate(cbind(df$v1) ~ by1+by2, FUN = "mean")
```
There is also the powerful reshaping package reshape2, which we won't cover here.
## Functional Programming
Functional programming is an important part of every scientific computing language. The order of preference for implementing an operation is: **vectorized operations (e.g. df+1), then functional constructs, and only then loops**. In R, functional programming is mainly handled by the apply family of functions, which includes:
- apply(): works along a given axis. Takes a data.frame, returns a vector.
- tapply():
- vapply():
- lapply():
- sapply():
- mapply():
- rapply():
- eapply():
These are introduced in turn below.
### apply(): Specifying the Axis of a Multidimensional Object
In R, apply() applies a function over a multidimensional object. Basic syntax:
apply(d, N, FUN, ...)
where N specifies which dimension of the data d the function FUN is applied along (1 for rows, 2 for columns). Extra arguments to FUN can be passed via the ellipsis.
```
df <- data.frame(x=c(1, 2, 3), y=c(5, 4, 2), z=c(8, 6, 9), s=c(3, 7, 4))
df
# compute the median of each column of df
colmedian <- apply(df, 2, median)
# compute the 25th percentile of each row of df
rowquan <- apply(df, 1, quantile, probs=.25)
print(list(colmedian, rowquan))
```
### lapply(): List Apply
lapply is meant to operate on list objects. Its return value is a list.
```
lst <- list(a=c(0,1), b=c(1,2), c=c(3,4))
lapply(lst, function(x) {sum(x^2)})
```
But it can equally be applied to the columns of a data frame (since a data frame is essentially a list of its columns):
```
lapply(df, sum)
```
### sapply()/vapply(): Variants of lapply()
sapply() is essentially a variant of lapply() whose return value can be simplified to a vector instead of a list.
```
class(sapply(lst, function(x) {sum(x^2)}))
class(lapply(lst, function(x) {sum(x^2)}))
print(sapply(df, sum))
```
The parameter simplify=TRUE is the default, meaning a vector is returned instead of a list. Setting it to FALSE degenerates to the lapply() function.
```
sapply(df, sum, simplify=FALSE)
```
The vapply() function can supply result names via its FUN.VALUE template parameter, but this can often be achieved with lapply()/sapply() plus an external row.names() call.
### mapply(): Apply with Multiple Inputs
mapply() supports multiple input vectors:
mapply(FUN, [input1, input2, ...], MoreArgs=NULL)
where the inputs' **lengths should be equal or integer multiples of one another**. Its value lies in avoiding having to merge the data beforehand.
```
print(mapply(min, seq(0, 2, by=0.5), -2:7))
```
### tapply(): Grouped Apply
tapply() groups data by the levels of a factor and then computes within each group, similar to a group-by operation:
tapply(X, idx, FUN)
where X is the data and idx is the grouping index.
```
df <- data.frame(x=1:6, groups=rep(c("a", "b"), 3))
print(tapply(df$x, df$groups, cumsum))
```
The other apply() functions are rarely used and are not covered here.
## Other Useful Functions
Some useful functions are also covered in the [Reading and Writing Data article](ReadData.ipynb) of this series.
In addition:
| Function | Meaning |
| --- | --- |
| seq(from=N, to=N, by=N, [length.out=N, along.with=obj]) | Generate a sequence. The parameters are start, end, step, sequence length, and an object whose length the sequence should match. |
| rep(x, N) | Repeat. E.g. rep(1:2, 2) gives the vector c(1, 2, 1, 2) |
| cut(x, N, [ordered_result=F]) | Cut into a factor. Divides the continuous variable x into a factor with N levels; can be made ordered. |
| pretty(x, N) | Pretty breakpoints. Divides a continuous variable x into N intervals (N+1 endpoints) at rounded values. Used in plotting. |
| cat(obj1, obj2, ..., [file=, append=]) | Concatenate objects and print to the screen or a file. |
---
<h3> ABSTRACT </h3>
All CMEMS in situ data products can be found and downloaded after [registration](http://marine.copernicus.eu/services-portfolio/register-now/) via the [CMEMS catalogue](http://marine.copernicus.eu/services-portfolio/access-to-products/).
That channel is advisable only for sporadic netCDF downloading; in an operational context, interacting with the web user interface is not practical. There, scripted FTP file transfer is a much more advisable approach.
Since every line of these index files contains information about the netCDFs held in the different directories [see the tips for why](https://github.com/CopernicusMarineInsitu/INSTACTraining/blob/master/tips/README.md), users can loop over the lines and download only those files that match a set of specifications, such as spatial coverage, time coverage, provider, data_mode, parameters, or file-name-related criteria (region, data type, TS or PF, platform code and/or platform category, timestamp).
<h3>PREREQUISITES</h3>
- [credentials](http://marine.copernicus.eu/services-portfolio/register-now/)
- aimed [in situ product name](http://cmems-resources.cls.fr/documents/PUM/CMEMS-INS-PUM-013.pdf)
- aimed [hosting distribution unit](https://github.com/CopernicusMarineInsitu/INSTACTraining/blob/master/tips/README.md)
- aimed [index file](https://github.com/CopernicusMarineInsitu/INSTACTraining/blob/master/tips/README.md)
e.g.:
```
user = '' #type CMEMS user name within the quotes
password = '' #type CMEMS password within the quotes
product_name = 'INSITU_BAL_NRT_OBSERVATIONS_013_032' #type aimed CMEMS in situ product
distribution_unit = 'cmems.smhi.se' #type aimed hosting institution
index_file = 'index_history.txt' #type aimed index file name
#remember! platform category only for history and monthly directories
```
<h3>DOWNLOAD</h3>
1. Index file download
```
import ftplib
ftp=ftplib.FTP(distribution_unit,user,password)
ftp.cwd("Core")
ftp.cwd(product_name)
local_file = open(index_file, 'wb')
ftp.retrbinary('RETR ' + index_file, local_file.write)
local_file.close()
ftp.quit()
#done when the server replies "221 Goodbye."
```
<h3>QUICK VIEW</h3>
```
import numpy as np
import pandas as pd
from random import randint
index = np.genfromtxt(index_file, skip_header=6, unpack=False, delimiter=',', dtype=None,
names=['catalog_id', 'file_name', 'geospatial_lat_min', 'geospatial_lat_max',
'geospatial_lon_min', 'geospatial_lon_max',
'time_coverage_start', 'time_coverage_end',
'provider', 'date_update', 'data_mode', 'parameters'])
dataset = randint(0, len(index) - 1) #random line of the index file (randint is inclusive on both ends)
values = [index[dataset]['catalog_id'], '<a href='+index[dataset]['file_name']+'>'+index[dataset]['file_name']+'</a>', index[dataset]['geospatial_lat_min'], index[dataset]['geospatial_lat_max'],
index[dataset]['geospatial_lon_min'], index[dataset]['geospatial_lon_max'], index[dataset]['time_coverage_start'],
index[dataset]['time_coverage_end'], index[dataset]['provider'], index[dataset]['date_update'], index[dataset]['data_mode'],
index[dataset]['parameters']]
headers = ['catalog_id', 'file_name', 'geospatial_lat_min', 'geospatial_lat_max',
'geospatial_lon_min', 'geospatial_lon_max',
'time_coverage_start', 'time_coverage_end',
'provider', 'date_update', 'data_mode', 'parameters']
df = pd.DataFrame(values, index=headers, columns=[dataset])
df.style
```
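As a side note, the same comma-separated index table can also be loaded with pandas. A minimal sketch on an in-memory sample follows; the two data rows below are fabricated for illustration, and in the real case you would pass `index_file` with `skiprows=6` instead of the `StringIO` buffer:

```python
import io
import pandas as pd

headers = ['catalog_id', 'file_name', 'geospatial_lat_min', 'geospatial_lat_max',
           'geospatial_lon_min', 'geospatial_lon_max',
           'time_coverage_start', 'time_coverage_end',
           'provider', 'date_update', 'data_mode', 'parameters']

# two made-up rows mimicking the index layout
sample = io.StringIO(
    "COP-1,ftp://host/Core/p/history/drifter/a.nc,55.0,56.0,12.0,13.0,"
    "2017-01-01,2017-02-01,SMHI,2017-03-01,D,TEMP\n"
    "COP-2,ftp://host/Core/p/history/mooring/b.nc,57.0,58.0,14.0,15.0,"
    "2017-01-01,2017-02-01,SMHI,2017-03-01,D,PSAL\n")

index_df = pd.read_csv(sample, names=headers)
print(index_df[['catalog_id', 'provider']])
```

A DataFrame makes the filtering below one-liners, e.g. `index_df[index_df['provider'] == 'SMHI']`.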
<h3>FILTERING CRITERIA</h3>
Regarding the glimpse above, it is possible to filter on 12 criteria. As an example, we will next set up a filter to download only those files that belong to a targeted platform category.
1. Aimed category
```
targeted_category = 'drifter'
```
2. netCDF filtering/selection
```
selected_netCDFs = []
for netCDF in index:
    file_name = netCDF['file_name']
    # the platform category is the folder that contains the file,
    # i.e. the second-to-last path component
    category = file_name.split('/')[-2]
    if category == targeted_category:
        selected_netCDFs.append(file_name)
print("total: " + str(len(selected_netCDFs)))
```
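The category of each file is simply the folder containing it, i.e. the second-to-last `/`-separated component of its path. A small standalone sketch of that extraction (the file names here are made up):

```python
def category_of(file_name):
    # the folder holding the netCDF is the second-to-last path component
    return file_name.split('/')[-2]

files = [
    "ftp://cmems.smhi.se/Core/product/history/drifter/file1.nc",
    "ftp://cmems.smhi.se/Core/product/history/mooring/file2.nc",
]
selected = [f for f in files if category_of(f) == 'drifter']
print(selected)
```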
<h3> SELECTION DOWNLOAD </h3>
```
for nc in selected_netCDFs:
    last_idx_slash = nc.rfind('/')
    ncdf_file_name = nc[last_idx_slash+1:]
    folders = nc.split('/')[3:len(nc.split('/'))-1]
    host = nc.split('/')[2]  # or distribution unit
    ftp = ftplib.FTP(host, user, password)
    for folder in folders:
        ftp.cwd(folder)
    local_file = open(ncdf_file_name, 'wb')
    ftp.retrbinary('RETR ' + ncdf_file_name, local_file.write)
    local_file.close()
    ftp.quit()
```
---
# Tutorial-IllinoisGRMHD: ID_converter_ILGRMHD ETK Thorn
## Authors: Leo Werneck & Zach Etienne
<font color='red'>**This module is currently under development**</font>
## In this tutorial module we generate the ID_converter_ILGRMHD ETK thorn files, compatible with our latest implementation of IllinoisGRMHD
### Required and recommended citations:
* **(Required)** Etienne, Z. B., Paschalidis, V., Haas R., Mösta P., and Shapiro, S. L. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes. Class. Quantum Grav. 32 (2015) 175009. ([arxiv:1501.07276](http://arxiv.org/abs/1501.07276)).
* **(Required)** Noble, S. C., Gammie, C. F., McKinney, J. C., Del Zanna, L. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophysical Journal, 641, 626 (2006) ([astro-ph/0512420](https://arxiv.org/abs/astro-ph/0512420)).
* **(Recommended)** Del Zanna, L., Bucciantini N., Londrillo, P. An efficient shock-capturing central-type scheme for multidimensional relativistic flows - II. Magnetohydrodynamics. A&A 400 (2) 397-413 (2003). DOI: 10.1051/0004-6361:20021641 ([astro-ph/0210618](https://arxiv.org/abs/astro-ph/0210618)).
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This module is organized as follows
0. [Step 0](#src_dir): **Source directory creation**
1. [Step 1](#introduction): **Introduction**
1. [Step 2](#convert_to_hydrobase__src): **`set_IllinoisGRMHD_metric_GRMHD_variables_based_on_HydroBase_and_ADMBase_variables.C`**
1. [Step 3](#convert_to_hydrobase__param): **`param.ccl`**
1. [Step 4](#convert_to_hydrobase__interface): **`interface.ccl`**
1. [Step 5](#convert_to_hydrobase__schedule): **`schedule.ccl`**
1. [Step 6](#convert_to_hydrobase__make): **`make.code.defn`**
1. [Step n-1](#code_validation): **Code validation**
1. [Step n](#latex_pdf_output): **Output this notebook to $\LaTeX$-formatted PDF file**
<a id='src_dir'></a>
# Step 0: Source directory creation \[Back to [top](#toc)\]
$$\label{src_dir}$$
We will now use the [cmdline_helper.py NRPy+ module](Tutorial-cmdline_helper.ipynb) to create the source directory within the `IllinoisGRMHD` NRPy+ directory, if it does not exist yet.
```
# Step 0: Creation of the IllinoisGRMHD source directory
# Step 0a: Load up cmdline_helper and create the directory
import os,sys
nrpy_dir_path = os.path.join("..","..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
import cmdline_helper as cmd
IDcIGM_dir_path = os.path.join("..","ID_converter_ILGRMHD")
cmd.mkdir(IDcIGM_dir_path)
IDcIGM_src_dir_path = os.path.join(IDcIGM_dir_path,"src")
cmd.mkdir(IDcIGM_src_dir_path)
# Step 0b: Create the output file path
outfile_path__ID_converter_ILGRMHD__source = os.path.join(IDcIGM_src_dir_path,"set_IllinoisGRMHD_metric_GRMHD_variables_based_on_HydroBase_and_ADMBase_variables.C")
outfile_path__ID_converter_ILGRMHD__make = os.path.join(IDcIGM_src_dir_path,"make.code.defn")
outfile_path__ID_converter_ILGRMHD__param = os.path.join(IDcIGM_dir_path,"param.ccl")
outfile_path__ID_converter_ILGRMHD__interface = os.path.join(IDcIGM_dir_path,"interface.ccl")
outfile_path__ID_converter_ILGRMHD__schedule = os.path.join(IDcIGM_dir_path,"schedule.ccl")
```
<a id='introduction'></a>
# Step 1: Introduction \[Back to [top](#toc)\]
$$\label{introduction}$$
<a id='convert_to_hydrobase__src'></a>
# Step 2: `set_IllinoisGRMHD_metric_GRMHD_variables _based_on_HydroBase_and_ADMBase_variables.C` \[Back to [top](#toc)\]
$$\label{convert_to_hydrobase__src}$$
```
%%writefile $outfile_path__ID_converter_ILGRMHD__source
/********************************
* CONVERT ET ID TO IllinoisGRMHD
*
* Written in 2014 by Zachariah B. Etienne
*
* Sets metric & MHD variables needed
* by IllinoisGRMHD, converting from
* HydroBase and ADMBase.
********************************/
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <sys/time.h>
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
#include "IllinoisGRMHD_headers.h"
extern "C" void set_IllinoisGRMHD_metric_GRMHD_variables_based_on_HydroBase_and_ADMBase_variables(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
if(rho_b_atm > 1e199) {
CCTK_VError(VERR_DEF_PARAMS, "You MUST set rho_b_atm to some reasonable value in your param.ccl file.\n");
}
// Convert ADM variables (from ADMBase) to the BSSN-based variables expected by this routine.
IllinoisGRMHD_convert_ADM_to_BSSN__enforce_detgtij_eq_1__and_compute_gtupij(cctkGH,cctk_lsh, gxx,gxy,gxz,gyy,gyz,gzz,alp,
gtxx,gtxy,gtxz,gtyy,gtyz,gtzz,
gtupxx,gtupxy,gtupxz,gtupyy,gtupyz,gtupzz,
phi_bssn,psi_bssn,lapm1);
/***************
* PPEOS Patch *
***************/
eos_struct eos;
initialize_EOS_struct_from_input(eos);
if(pure_hydro_run) {
#pragma omp parallel for
for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) {
int index=CCTK_GFINDEX3D(cctkGH,i,j,k);
Avec[CCTK_GFINDEX4D(cctkGH,i,j,k,0)]=0;
Avec[CCTK_GFINDEX4D(cctkGH,i,j,k,1)]=0;
Avec[CCTK_GFINDEX4D(cctkGH,i,j,k,2)]=0;
Aphi[index]=0;
}
}
#pragma omp parallel for
for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) {
int index=CCTK_GFINDEX3D(cctkGH,i,j,k);
rho_b[index] = rho[index];
P[index] = press[index];
/***************
* PPEOS Patch *
***************
* We now verify that the initial data
* provided by the user is indeed "cold",
* i.e. it contains no Thermal part and
* P = P_cold.
*/
/* Compute P_cold */
const int polytropic_index = find_polytropic_K_and_Gamma_index(eos, rho_b[index]);
const double K_poly = eos.K_ppoly_tab[polytropic_index];
const double Gamma_poly = eos.Gamma_ppoly_tab[polytropic_index];
const double P_cold = K_poly*pow(rho_b[index],Gamma_poly);
/* Compare P and P_cold */
double P_rel_error = fabs(P[index] - P_cold)/P[index];
if( rho_b[index] > rho_b_atm && P_rel_error > 1e-2 ) {
const double Gamma_poly_local = log(P[index]/K_poly) / log(rho_b[index]);
/* Determine the value of Gamma_poly_local associated with P[index] */
CCTK_VWarn(CCTK_WARN_ALERT, __LINE__, __FILE__, CCTK_THORNSTRING,
"Expected a PP EOS with local Gamma_poly = %.15e, but found a point such that Gamma_poly_local = %.15e.\n",
Gamma_poly, Gamma_poly_local);
CCTK_VWarn(CCTK_WARN_ALERT, __LINE__, __FILE__, CCTK_THORNSTRING,
"{rho_b; rho_b_atm; P; P_cold; P_rel_Error} = %.10e %e %.10e %.10e %e\n",
rho_b[index], rho_b_atm, P[index],P_cold,P_rel_error);
}
Ax[index] = Avec[CCTK_GFINDEX4D(cctkGH,i,j,k,0)];
Ay[index] = Avec[CCTK_GFINDEX4D(cctkGH,i,j,k,1)];
Az[index] = Avec[CCTK_GFINDEX4D(cctkGH,i,j,k,2)];
psi6phi[index] = Aphi[index];
double ETvx = vel[CCTK_GFINDEX4D(cctkGH,i,j,k,0)];
double ETvy = vel[CCTK_GFINDEX4D(cctkGH,i,j,k,1)];
double ETvz = vel[CCTK_GFINDEX4D(cctkGH,i,j,k,2)];
// IllinoisGRMHD defines v^i = u^i/u^0.
// Meanwhile, the ET/HydroBase formalism, called the Valencia
// formalism, splits the 4 velocity into a purely spatial part
// and a part that is normal to the spatial hypersurface:
// u^a = G (n^a + U^a), (Eq. 14 of arXiv:1304.5544; G=W, U^a=v^a)
// where n^a is the unit normal vector to the spatial hypersurface,
// n_a = {-\alpha,0,0,0}, and U^a is the purely spatial part, which
// is defined in HydroBase as the vel[] vector gridfunction.
// Then u^a n_a = - \alpha u^0 = G n^a n_a = -G, and
    //   of course \alpha u^0 = 1/sqrt(1 - \gamma_{ij} U^i U^j) = \Gamma,
// the standard Lorentz factor.
// Note that n^i = - \beta^i / \alpha, so
// u^a = \Gamma (n^a + U^a)
// -> u^i = \Gamma ( U^i - \beta^i / \alpha )
// which implies
// v^i = u^i/u^0
// = \Gamma/u^0 ( U^i - \beta^i / \alpha ) <- \Gamma = \alpha u^0
// = \alpha ( U^i - \beta^i / \alpha )
// = \alpha U^i - \beta^i
vx[index] = alp[index]*ETvx - betax[index];
vy[index] = alp[index]*ETvy - betay[index];
vz[index] = alp[index]*ETvz - betaz[index];
}
// Neat feature for debugging: Add a roundoff-error perturbation
// to the initial data.
// Set random_pert variable to ~1e-14 for a random 15th digit
// perturbation.
  srand(random_seed); // Note: this loop is kept serial because rand() is not thread-safe.
for(int k=0;k<cctk_lsh[2];k++)
for(int j=0;j<cctk_lsh[1];j++)
for(int i=0;i<cctk_lsh[0];i++) {
int index=CCTK_GFINDEX3D(cctkGH,i,j,k);
double pert = (random_pert*(double)rand() / RAND_MAX);
double one_plus_pert=(1.0+pert);
rho[index]*=one_plus_pert;
vx[index]*=one_plus_pert;
vy[index]*=one_plus_pert;
vz[index]*=one_plus_pert;
psi6phi[index]*=one_plus_pert;
Ax[index]*=one_plus_pert;
Ay[index]*=one_plus_pert;
Az[index]*=one_plus_pert;
}
// Next compute B & B_stagger from A_i. Note that this routine also depends on
// the psi_bssn[] gridfunction being set to exp(phi).
double dxi = 1.0/CCTK_DELTA_SPACE(0);
double dyi = 1.0/CCTK_DELTA_SPACE(1);
double dzi = 1.0/CCTK_DELTA_SPACE(2);
#pragma omp parallel for
for(int k=0;k<cctk_lsh[2];k++)
for(int j=0;j<cctk_lsh[1];j++)
for(int i=0;i<cctk_lsh[0];i++) {
// Look Mom, no if() statements!
int shiftedim1 = (i-1)*(i!=0); // This way, i=0 yields shiftedim1=0 and shiftedi=1, used below for our COPY boundary condition.
int shiftedi = shiftedim1+1;
int shiftedjm1 = (j-1)*(j!=0);
int shiftedj = shiftedjm1+1;
int shiftedkm1 = (k-1)*(k!=0);
int shiftedk = shiftedkm1+1;
int index,indexim1,indexjm1,indexkm1;
int actual_index = CCTK_GFINDEX3D(cctkGH,i,j,k);
double Psi = psi_bssn[actual_index];
double Psim3 = 1.0/(Psi*Psi*Psi);
// For the lower boundaries, the following applies a "copy"
// boundary condition on Bi_stagger where needed.
// E.g., Bx_stagger(i,jmin,k) = Bx_stagger(i,jmin+1,k)
// We find the copy BC works better than extrapolation.
// For the upper boundaries, we do the following copy:
// E.g., Psi(imax+1,j,k)=Psi(imax,j,k)
/**************/
/* Bx_stagger */
/**************/
index = CCTK_GFINDEX3D(cctkGH,i,shiftedj,shiftedk);
indexjm1 = CCTK_GFINDEX3D(cctkGH,i,shiftedjm1,shiftedk);
indexkm1 = CCTK_GFINDEX3D(cctkGH,i,shiftedj,shiftedkm1);
      // Set Bx_stagger = \partial_y A_z - \partial_z A_y
// "Grid" Ax(i,j,k) is actually Ax(i,j+1/2,k+1/2)
// "Grid" Ay(i,j,k) is actually Ay(i+1/2,j,k+1/2)
// "Grid" Az(i,j,k) is actually Ay(i+1/2,j+1/2,k)
// Therefore, the 2nd order derivative \partial_z A_y at (i+1/2,j,k) is:
// ["Grid" Ay(i,j,k) - "Grid" Ay(i,j,k-1)]/dZ
Bx_stagger[actual_index] = (Az[index]-Az[indexjm1])*dyi - (Ay[index]-Ay[indexkm1])*dzi;
// Now multiply Bx and Bx_stagger by 1/sqrt(gamma(i+1/2,j,k)]) = 1/sqrt(1/2 [gamma + gamma_ip1]) = exp(-6 x 1/2 [phi + phi_ip1] )
int imax_minus_i = (cctk_lsh[0]-1)-i;
int indexip1jk = CCTK_GFINDEX3D(cctkGH,i + ( (imax_minus_i > 0) - (0 > imax_minus_i) ),j,k);
double Psi_ip1 = psi_bssn[indexip1jk];
Bx_stagger[actual_index] *= Psim3/(Psi_ip1*Psi_ip1*Psi_ip1);
/**************/
/* By_stagger */
/**************/
index = CCTK_GFINDEX3D(cctkGH,shiftedi,j,shiftedk);
indexim1 = CCTK_GFINDEX3D(cctkGH,shiftedim1,j,shiftedk);
indexkm1 = CCTK_GFINDEX3D(cctkGH,shiftedi,j,shiftedkm1);
// Set By_stagger = \partial_z A_x - \partial_x A_z
By_stagger[actual_index] = (Ax[index]-Ax[indexkm1])*dzi - (Az[index]-Az[indexim1])*dxi;
// Now multiply By and By_stagger by 1/sqrt(gamma(i,j+1/2,k)]) = 1/sqrt(1/2 [gamma + gamma_jp1]) = exp(-6 x 1/2 [phi + phi_jp1] )
int jmax_minus_j = (cctk_lsh[1]-1)-j;
int indexijp1k = CCTK_GFINDEX3D(cctkGH,i,j + ( (jmax_minus_j > 0) - (0 > jmax_minus_j) ),k);
double Psi_jp1 = psi_bssn[indexijp1k];
By_stagger[actual_index] *= Psim3/(Psi_jp1*Psi_jp1*Psi_jp1);
/**************/
/* Bz_stagger */
/**************/
index = CCTK_GFINDEX3D(cctkGH,shiftedi,shiftedj,k);
indexim1 = CCTK_GFINDEX3D(cctkGH,shiftedim1,shiftedj,k);
indexjm1 = CCTK_GFINDEX3D(cctkGH,shiftedi,shiftedjm1,k);
// Set Bz_stagger = \partial_x A_y - \partial_y A_x
Bz_stagger[actual_index] = (Ay[index]-Ay[indexim1])*dxi - (Ax[index]-Ax[indexjm1])*dyi;
// Now multiply Bz_stagger by 1/sqrt(gamma(i,j,k+1/2)]) = 1/sqrt(1/2 [gamma + gamma_kp1]) = exp(-6 x 1/2 [phi + phi_kp1] )
int kmax_minus_k = (cctk_lsh[2]-1)-k;
int indexijkp1 = CCTK_GFINDEX3D(cctkGH,i,j,k + ( (kmax_minus_k > 0) - (0 > kmax_minus_k) ));
double Psi_kp1 = psi_bssn[indexijkp1];
Bz_stagger[actual_index] *= Psim3/(Psi_kp1*Psi_kp1*Psi_kp1);
}
#pragma omp parallel for
for(int k=0;k<cctk_lsh[2];k++)
for(int j=0;j<cctk_lsh[1];j++)
for(int i=0;i<cctk_lsh[0];i++) {
// Look Mom, no if() statements!
int shiftedim1 = (i-1)*(i!=0); // This way, i=0 yields shiftedim1=0 and shiftedi=1, used below for our COPY boundary condition.
int shiftedi = shiftedim1+1;
int shiftedjm1 = (j-1)*(j!=0);
int shiftedj = shiftedjm1+1;
int shiftedkm1 = (k-1)*(k!=0);
int shiftedk = shiftedkm1+1;
int index,indexim1,indexjm1,indexkm1;
int actual_index = CCTK_GFINDEX3D(cctkGH,i,j,k);
// For the lower boundaries, the following applies a "copy"
// boundary condition on Bi and Bi_stagger where needed.
// E.g., Bx(imin,j,k) = Bx(imin+1,j,k)
// We find the copy BC works better than extrapolation.
/******/
/* Bx */
/******/
index = CCTK_GFINDEX3D(cctkGH,shiftedi,j,k);
indexim1 = CCTK_GFINDEX3D(cctkGH,shiftedim1,j,k);
// Set Bx = 0.5 ( Bx_stagger + Bx_stagger_im1 )
// "Grid" Bx_stagger(i,j,k) is actually Bx_stagger(i+1/2,j,k)
Bx[actual_index] = 0.5 * ( Bx_stagger[index] + Bx_stagger[indexim1] );
/******/
/* By */
/******/
index = CCTK_GFINDEX3D(cctkGH,i,shiftedj,k);
indexjm1 = CCTK_GFINDEX3D(cctkGH,i,shiftedjm1,k);
      // Set By = 0.5 ( By_stagger + By_stagger_jm1 )
// "Grid" By_stagger(i,j,k) is actually By_stagger(i,j+1/2,k)
By[actual_index] = 0.5 * ( By_stagger[index] + By_stagger[indexjm1] );
/******/
/* Bz */
/******/
index = CCTK_GFINDEX3D(cctkGH,i,j,shiftedk);
indexkm1 = CCTK_GFINDEX3D(cctkGH,i,j,shiftedkm1);
      // Set Bz = 0.5 ( Bz_stagger + Bz_stagger_km1 )
      // "Grid" Bz_stagger(i,j,k) is actually Bz_stagger(i,j,k+1/2)
Bz[actual_index] = 0.5 * ( Bz_stagger[index] + Bz_stagger[indexkm1] );
}
// Finally, enforce limits on primitives & compute conservative variables.
#pragma omp parallel for
for(int k=0;k<cctk_lsh[2];k++)
for(int j=0;j<cctk_lsh[1];j++)
for(int i=0;i<cctk_lsh[0];i++) {
static const int zero_int=0;
int index = CCTK_GFINDEX3D(cctkGH,i,j,k);
int ww;
double PRIMS[MAXNUMVARS];
ww=0;
PRIMS[ww] = rho_b[index]; ww++;
PRIMS[ww] = P[index]; ww++;
PRIMS[ww] = vx[index]; ww++;
PRIMS[ww] = vy[index]; ww++;
PRIMS[ww] = vz[index]; ww++;
PRIMS[ww] = Bx[index]; ww++;
PRIMS[ww] = By[index]; ww++;
PRIMS[ww] = Bz[index]; ww++;
double METRIC[NUMVARS_FOR_METRIC],dummy=0;
ww=0;
// FIXME: NECESSARY?
//psi_bssn[index] = exp(phi[index]);
METRIC[ww] = phi_bssn[index];ww++;
METRIC[ww] = dummy; ww++; // Don't need to set psi.
METRIC[ww] = gtxx[index]; ww++;
METRIC[ww] = gtxy[index]; ww++;
METRIC[ww] = gtxz[index]; ww++;
METRIC[ww] = gtyy[index]; ww++;
METRIC[ww] = gtyz[index]; ww++;
METRIC[ww] = gtzz[index]; ww++;
METRIC[ww] = lapm1[index]; ww++;
METRIC[ww] = betax[index]; ww++;
METRIC[ww] = betay[index]; ww++;
METRIC[ww] = betaz[index]; ww++;
METRIC[ww] = gtupxx[index]; ww++;
METRIC[ww] = gtupyy[index]; ww++;
METRIC[ww] = gtupzz[index]; ww++;
METRIC[ww] = gtupxy[index]; ww++;
METRIC[ww] = gtupxz[index]; ww++;
METRIC[ww] = gtupyz[index]; ww++;
double CONSERVS[NUM_CONSERVS] = {0,0,0,0,0};
double g4dn[4][4];
double g4up[4][4];
double TUPMUNU[10],TDNMUNU[10];
struct output_stats stats; stats.failure_checker=0;
IllinoisGRMHD_enforce_limits_on_primitives_and_recompute_conservs(zero_int,PRIMS,stats,eos,
METRIC,g4dn,g4up,TUPMUNU,TDNMUNU,CONSERVS);
rho_b[index] = PRIMS[RHOB];
P[index] = PRIMS[PRESSURE];
vx[index] = PRIMS[VX];
vy[index] = PRIMS[VY];
vz[index] = PRIMS[VZ];
rho_star[index] = CONSERVS[RHOSTAR];
mhd_st_x[index] = CONSERVS[STILDEX];
mhd_st_y[index] = CONSERVS[STILDEY];
mhd_st_z[index] = CONSERVS[STILDEZ];
tau[index] = CONSERVS[TAUENERGY];
if(update_Tmunu) {
ww=0;
eTtt[index] = TDNMUNU[ww]; ww++;
eTtx[index] = TDNMUNU[ww]; ww++;
eTty[index] = TDNMUNU[ww]; ww++;
eTtz[index] = TDNMUNU[ww]; ww++;
eTxx[index] = TDNMUNU[ww]; ww++;
eTxy[index] = TDNMUNU[ww]; ww++;
eTxz[index] = TDNMUNU[ww]; ww++;
eTyy[index] = TDNMUNU[ww]; ww++;
eTyz[index] = TDNMUNU[ww]; ww++;
eTzz[index] = TDNMUNU[ww];
}
}
}
```
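The Valencia-to-IllinoisGRMHD velocity conversion derived in the comments of the cell above reduces to $v^i = \alpha U^i - \beta^i$. It can be spot-checked numerically; the lapse, shift, and Valencia velocity values below are arbitrary sample numbers, not data from any run:

```python
# Spot-check v^i = alpha * U^i - beta^i on arbitrary sample values
alpha = 1.2                 # lapse
beta = [0.1, -0.2, 0.05]    # shift vector beta^i
U = [0.3, 0.0, -0.4]        # Valencia (HydroBase) velocity U^i

v = [alpha * Ui - betai for Ui, betai in zip(U, beta)]
print(v)
```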
<a id='convert_to_hydrobase__param'></a>
# Step 3: `param.ccl` \[Back to [top](#toc)\]
$$\label{convert_to_hydrobase__param}$$
```
%%writefile $outfile_path__ID_converter_ILGRMHD__param
# Parameter definitions for thorn ID_converter_ILGRMHD
shares: IllinoisGRMHD
USES KEYWORD rho_b_max
USES KEYWORD rho_b_atm
USES KEYWORD tau_atm
USES KEYWORD neos
USES KEYWORD K_ppoly_tab0
USES KEYWORD rho_ppoly_tab_in[10]
USES KEYWORD Gamma_ppoly_tab_in[10]
USES KEYWORD Sym_Bz
USES KEYWORD GAMMA_SPEED_LIMIT
USES KEYWORD Psi6threshold
USES KEYWORD update_Tmunu
private:
INT random_seed "Random seed for the random (generally roundoff-level) perturbation on the initial data. Seeds srand(); rand() is used as the RNG."
{
0:99999999 :: "Anything unsigned goes."
} 0
REAL random_pert "Random perturbation atop data"
{
*:* :: "Anything goes."
} 0
BOOLEAN pure_hydro_run "Set the vector potential and corresponding EM gauge quantity to zero"
{
} "no"
```
<a id='convert_to_hydrobase__interface'></a>
# Step 4: `interface.ccl` \[Back to [top](#toc)\]
$$\label{convert_to_hydrobase__interface}$$
```
%%writefile $outfile_path__ID_converter_ILGRMHD__interface
# Interface definition for thorn ID_converter_ILGRMHD
implements: ID_converter_ILGRMHD
inherits: ADMBase, Boundary, SpaceMask, Tmunubase, HydroBase, grid, IllinoisGRMHD
uses include header: IllinoisGRMHD_headers.h
USES INCLUDE: Symmetry.h
```
<a id='convert_to_hydrobase__schedule'></a>
# Step 5: `schedule.ccl` \[Back to [top](#toc)\]
$$\label{convert_to_hydrobase__schedule}$$
```
%%writefile $outfile_path__ID_converter_ILGRMHD__schedule
# Schedule definitions for thorn ID_converter_ILGRMHD
schedule group IllinoisGRMHD_ID_Converter at CCTK_INITIAL after HydroBase_Initial before Convert_to_HydroBase
{
} "Translate ET-generated, HydroBase-compatible initial data and convert into variables used by IllinoisGRMHD"
schedule set_IllinoisGRMHD_metric_GRMHD_variables_based_on_HydroBase_and_ADMBase_variables IN IllinoisGRMHD_ID_Converter as first_initialdata before TOV_Initial_Data
{
LANG: C
OPTIONS: LOCAL
# What the heck, let's synchronize everything!
SYNC: IllinoisGRMHD::grmhd_primitives_Bi, IllinoisGRMHD::grmhd_primitives_Bi_stagger, IllinoisGRMHD::grmhd_primitives_allbutBi, IllinoisGRMHD::em_Ax,IllinoisGRMHD::em_Ay,IllinoisGRMHD::em_Az,IllinoisGRMHD::em_psi6phi,IllinoisGRMHD::grmhd_conservatives,IllinoisGRMHD::BSSN_quantities,ADMBase::metric,ADMBase::lapse,ADMBase::shift,ADMBase::curv
} "Convert HydroBase initial data (ID) to ID that IllinoisGRMHD can read."
schedule IllinoisGRMHD_InitSymBound IN IllinoisGRMHD_ID_Converter as third_initialdata after second_initialdata
{
SYNC: IllinoisGRMHD::grmhd_conservatives,IllinoisGRMHD::em_Ax,IllinoisGRMHD::em_Ay,IllinoisGRMHD::em_Az,IllinoisGRMHD::em_psi6phi
LANG: C
} "Schedule symmetries -- Actually just a placeholder function to ensure prolongation / processor syncs are done BEFORE the primitives solver."
schedule IllinoisGRMHD_compute_B_and_Bstagger_from_A IN IllinoisGRMHD_ID_Converter as fourth_initialdata after third_initialdata
{
SYNC: IllinoisGRMHD::grmhd_primitives_Bi, IllinoisGRMHD::grmhd_primitives_Bi_stagger
LANG: C
} "Compute B and B_stagger from A"
schedule IllinoisGRMHD_conserv_to_prims IN IllinoisGRMHD_ID_Converter as fifth_initialdata after fourth_initialdata
{
LANG: C
} "Compute primitive variables from conservatives. This is non-trivial, requiring a Newton-Raphson root-finder."
```
<a id='convert_to_hydrobase__make'></a>
# Step 6: `make.code.defn` \[Back to [top](#toc)\]
$$\label{convert_to_hydrobase__make}$$
```
%%writefile $outfile_path__ID_converter_ILGRMHD__make
# Main make.code.defn file for thorn ID_converter_ILGRMHD
# Source files in this directory
SRCS = set_IllinoisGRMHD_metric_GRMHD_variables_based_on_HydroBase_and_ADMBase_variables.C
```
<a id='code_validation'></a>
# Step n-1: Code validation \[Back to [top](#toc)\]
$$\label{code_validation}$$
First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.
```
# # Verify if the code generated by this tutorial module
# # matches the original IllinoisGRMHD source code
# # First download the original IllinoisGRMHD source code
# import urllib
# import os
# original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/A_i_rhs_no_gauge_terms.C"
# original_IGM_file_name = "A_i_rhs_no_gauge_terms-original.C"
# original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)
# # Then download the original IllinoisGRMHD source code
# # We try it here in a couple of ways in an attempt to keep
# # the code more portable
# try:
# original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8")
# # Write the original IllinoisGRMHD source code to file
# with open(original_IGM_file_path,"w") as file:
# file.write(original_IGM_file_code)
# except:
# try:
# original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8")
# # Write the original IllinoisGRMHD source code to file
# with open(original_IGM_file_path,"w") as file:
# file.write(original_IGM_file_code)
# except:
# # If all else fails, hope wget does the job
# !wget -O $original_IGM_file_path $original_IGM_file_url
# # Perform validation
# Validation__A_i_rhs_no_gauge_terms__C = !diff $original_IGM_file_path $outfile_path__A_i_rhs_no_gauge_terms__C
# if Validation__A_i_rhs_no_gauge_terms__C == []:
# # If the validation passes, we do not need to store the original IGM source code file
# !rm $original_IGM_file_path
# print("Validation test for A_i_rhs_no_gauge_terms.C: PASSED!")
# else:
# # If the validation fails, we keep the original IGM source code file
# print("Validation test for A_i_rhs_no_gauge_terms.C: FAILED!")
# # We also print out the difference between the code generated
# # in this tutorial module and the original IGM source code
# print("Diff:")
# for diff_line in Validation__A_i_rhs_no_gauge_terms__C:
# print(diff_line)
```
<a id='latex_pdf_output'></a>
# Step n: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-IllinoisGRMHD__A_i_rhs_no_gauge_terms.pdf](Tutorial-IllinoisGRMHD__A_i_rhs_no_gauge_terms.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means).
```
latex_nrpy_style_path = os.path.join(nrpy_dir_path,"latex_nrpy_style.tplx")
#!jupyter nbconvert --to latex --template $latex_nrpy_style_path Tutorial-IllinoisGRMHD__A_i_rhs_no_gauge_terms.ipynb
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__A_i_rhs_no_gauge_terms.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__A_i_rhs_no_gauge_terms.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__A_i_rhs_no_gauge_terms.tex
!rm -f Tut*.out Tut*.aux Tut*.log
```
| github_jupyter |
```
import numpy
from collections import OrderedDict  # the data_set dicts below rely on insertion order
import seaborn as sns
import matplotlib.pyplot as plt
# read_data and get_boltzmann_distribution are assumed to be defined elsewhere (e.g. a companion module)
sns.set_style("whitegrid", {"font.family": "DejaVu Sans"})
sns.set(palette="pastel", color_codes=True)
sns.set_context("poster")
%matplotlib inline
from matplotlib import rc
rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
rc('text', usetex=True)
path = 'data/'
filename_DB = 'DeBruijn_alpha.json'
filename_pUC19 = 'pUC19_alpha.json'
filename_M13 = 'M13_square.json'
filename_DB7k = 'DB_7k_square.json'
#ids, sequences, energies
#_, _, energies_DB = read_data(path + filename_DB)
#_, _, energies_pUC19 = read_data(path + filename_pUC19)
#_, _, energies_M13 = read_data(path + filename_M13)
_, _, energies_DB_short = read_data(path + filename_DB, short=True)
_, _, energies_pUC19_short = read_data(path + filename_pUC19, short=True)
_, _, energies_M13_short = read_data(path + filename_M13, short=True)
_, _, energies_DB7k_short = read_data(path + filename_DB7k, short=True)
#DB_dist_2 = get_boltzmann_distribution(d[:2] for d in energies_DB_short)
#pUC19_dist_2 = get_boltzmann_distribution(d[:2] for d in energies_pUC19_short)
#M13_dist_2 = get_boltzmann_distribution(d[:2] for d in energies_M13_short)
#DB_dist_10 = get_boltzmann_distribution(d[:10] for d in energies_DB_short)
#pUC19_dist_10 = get_boltzmann_distribution(d[:10] for d in energies_pUC19_short)
#M13_dist_10 = get_boltzmann_distribution(d[:10] for d in energies_M13_short)
#DB_dist_100 = get_boltzmann_distribution(d[:100] for d in energies_DB_short)
#pUC19_dist_100 = get_boltzmann_distribution(d[:100] for d in energies_pUC19_short)
#M13_dist_100 = get_boltzmann_distribution(d[:100] for d in energies_M13_short)
DB_dist_all = get_boltzmann_distribution(d for d in energies_DB_short)
pUC19_dist_all = get_boltzmann_distribution(d for d in energies_pUC19_short)
M13_dist_all = get_boltzmann_distribution(d for d in energies_M13_short)
DB7k_dist_all = get_boltzmann_distribution(d for d in energies_DB7k_short)
#DB_dist = get_boltzmann_distribution(d[:100] for d in energies_DB_short)
#pUC19_dist = get_boltzmann_distribution(d[:100] for d in energies_pUC19_short)
#M13_dist = get_boltzmann_distribution(d[:100] for d in energies_M13_short)
#DB_dist = get_boltzmann_distribution(energies_DB_short)
#pUC19_dist = get_boltzmann_distribution(energies_pUC19_short)
#dist = [d[0] for d in DB_dist]
def example_plot(ax, fontsize=12):
ax.plot([1, 2])
ax.locator_params(nbins=3)
ax.set_xlabel('x-label', fontsize=fontsize)
ax.set_ylabel('y-label', fontsize=fontsize)
ax.set_title('Title', fontsize=fontsize)
def distribution_plot(ax, data_label, data, xlabel, ylabel, fontsize=15):
bins = 20
bar_width = 1.0  # width of each bar, in x-index units
x = numpy.zeros(bins)
for dist in data:
i = int(dist[0]*bins)
i = 0 if i < 0 else i
i = bins-1 if i > bins-1 else i
x[i] += 1
for i in range(len(x)):
x[i] = 1.0 * x[i] / len(data)
index = numpy.arange(0, bins)
ax.bar(index, x, bar_width, linewidth=0)
ax.set_xticks(numpy.arange(0, bins+1))
ax.set_xticklabels([('$' + str(i*0.05) + '$') if i % 2 == 0 else "" for i in range(0, bins+1)])
#ax.tick_params(axis='both', which='major')
ylimit = 0.2 if ('6.9' in data_label or 'M13' in data_label) else 0.3
ax.set_xlim(0, bins)
ax.set_ylim(0, ylimit)
ax.set_xlabel(xlabel, fontsize=20)
ax.set_ylabel(ylabel, fontsize=20)
ax.set_title(data_label, fontsize=20)
ax.legend()
plt.close('all')
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2)
data_set = OrderedDict()
data_set['pUC19 (all)'] = pUC19_dist_all
data_set['DB (all)'] = DB_dist_all
data_set['M13 (all)'] = M13_dist_all
data_set['DB7k (all)'] = DB7k_dist_all
xlabel = r'Specific binding probability'
ylabel = r'Fraction of staples'
distribution_plot(ax1, r'pUC19', pUC19_dist_all, '', ylabel)
distribution_plot(ax2, r'DB (2.4 knt)', DB_dist_all, '', '')
distribution_plot(ax3, r'M13', M13_dist_all, xlabel, ylabel)
distribution_plot(ax4, r'DBS (6.9 knt)', DB7k_dist_all, xlabel, '')
#%matplotlib inline
fig.set_size_inches(10, 10)
plt.tight_layout()
plt.savefig("/home/j3ny/repos/analysis/Analysis/thermodynamic_addressability/output/addressability_comparison.pdf",format='pdf',dpi=600)
#plt.savefig("/home/j3ny/repos/analysis/Analysis/thermodynamic_addressability/output/addressability_comparison_long.pdf",format='pdf',dpi=600)
fig, axes = plt.subplots(nrows=2, ncols=2)
bar_width = 1.0
data_set = OrderedDict()
data_set['pUC19'] = pUC19_dist_all
data_set['DBS (2.4 knt)'] = DB_dist_all
data_set['M13'] = M13_dist_all
data_set['DBS (6.9 knt)'] = DB7k_dist_all
plt.close('all')
fig = plt.figure()
from mpl_toolkits.axes_grid1 import Grid
grid = Grid(fig, rect=111, nrows_ncols=(2,2),
axes_pad=0.4, label_mode='O',
add_all = True,
)
for ax, (data_label, data) in zip(grid, data_set.items()):
xlabel = 'Specific binding probability' if ('6.9' in data_label or 'M13' in data_label) else ''
ylabel = 'Fraction of staples'if ('pUC' in data_label or 'M13' in data_label) else ''
distribution_plot(ax, data_label, data, xlabel, ylabel)
#axes[0,0].set_title('pUC19')
#grid[0].set_title('pUC19')
#grid[0].set_ylabel('Fraction of staples', fontsize=15)
#grid[1].set_title('DBS (2.4 knt)')
#grid[2].set_title('M13')
#grid[2].set_xlabel('Specific binding probability', fontsize=15)
#grid[2].set_ylabel('Fraction of staples', fontsize=15)
#grid[3].set_title('DBS (6.9 knt)')
#axes[1].set_title('M13')
#axes[2].set_title(r'$\lambda$-phage')
#fig.text(0.16, 0.92, 'pUC19', fontsize=15)
#fig.text(0.6, 0.92, 'DBS (2.4 knt)', fontsize=15)
#fig.text(0.16, 0.46, 'M13mp18', fontsize=15)
#fig.text(0.6, 0.46, 'DBS (6.9 knt)', fontsize=15)
fig.set_size_inches(6, 6)
plt.tight_layout()
plt.savefig("/home/j3ny/repos/analysis/Analysis/thermodynamic_addressability/output/addressability_comparison.pdf",format='pdf',dpi=600)
#######################
## OBSOLETE ##
#######################
#%matplotlib inline
fig, axes = plt.subplots(nrows=2, ncols=2)
bar_width = 1.0
data_set = OrderedDict()
data_set['pUC19 (all)'] = pUC19_dist_all
data_set['DB (all)'] = DB_dist_all
data_set['M13 (all)'] = M13_dist_all
data_set['DB7k (all)'] = DB7k_dist_all
for ax0, (data_label, data) in zip(axes.flat, data_set.items()):
distribution_plot(ax0, data_label, data, '', '')
#fig.text(0.19, 0.96, 'De Bruijn', ha='center')
fig.text(0.3, 1, 'pUC19 (2.6 knt)', ha='center')
fig.text(0.7, 1, 'DBS (2.4 knt)', ha='center')
fig.text(0.5, 0.008, 'Specific binding probability', ha='center')
fig.text(0.001, 0.5, 'Fraction of staples', va='center', rotation='vertical')
fig.set_size_inches(7, 7)
plt.tight_layout()
plt.savefig("/home/j3ny/repos/analysis/Analysis/thermodynamic_addressability/output/addressability_comparison.pdf",format='pdf',dpi=600)
## CONVERT DATA
path = 'data/'
filename_DB = 'DeBruijn_alpha.json'
filename_pUC19 = 'pUC19_alpha.json'
filename_M13 = 'M13_square.json'
filename_DB7k = 'DB_7k_square.json'
ids, sequences, energies = read_data(path + filename_DB7k, short=True)
dist_all = get_boltzmann_distribution(d for d in energies)
with open('data/DB_medium.csv', 'w') as out:
for i in range(len(ids)):
out.write(ids[i] + ',' + sequences[i] + ',')
out.write('%.3f' % dist_all[i][0])
out.write('\n')
#print(ids[i], sequences[i], energies[i], dist_all[i])
```
| github_jupyter |
### Libraries Used:
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
```
- Rank the total number of PAX by day of the week. - Power BI
- What is the correlation of Saturday and Sunday combined with the total RPK? - ????
- What is the monthly average of 'Monetário' per Channel? And the median? - Power BI
- Create a PAX forecast per 'Local de Venda' for the next 15 days counting from the last sale date. (Any technique is allowed here.) - Time Series
- Assuming you need to produce a study for the responsible team, based on any model or premise, which 'Local de Venda' do you consider most critical? Why? - Power BI
- Build a model relating sales behavior to variables not present in the data (e.g., GDP, dollar exchange rate, etc.) - Multiple Regression
Notes:
- PAX is the total number of passengers. RPK is an indicator directly related to the number of PAX.
- Don't worry about the magnitudes. The data are fictitious. 😉
- Send all the material you produce (code, tables, and other files) with details on each item. If possible, comment your code.
- For the presentation, use PowerPoint or any other DataViz tool you find appropriate.
Using Excel itself, the xlsx was modified only to keep the 'dados' tab and then saved as CSV, UTF-8 delimited by ;
```
df = pd.read_csv('data.csv',sep=';')
df.info()
# Checking how completely the dataset is filled in
df.isnull().sum()
```
Organizing the variables and assigning their correct types.
```
df["Data Venda"] = pd.to_datetime(df["Data Venda"])
df[['Canal de Venda','Local de Venda']] = df[['Canal de Venda','Local de Venda']].astype('category')
df[['PAX','RPK']] = df[['PAX','RPK']].astype('float')
df["Monetário Vendido"] = df["Monetário Vendido"].str.replace(",", ".")
df["Monetário Vendido"] = df["Monetário Vendido"].astype('float')
df.info()
df.isnull().sum()
```
#### Creating a column containing the day of the week
- New column: conversion from date to day of the week
```
df['Dia Semana'] = df['Data Venda'].dt.dayofweek
df['Dia Semana'].sample(5)
```
According to the dayofweek documentation:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.dayofweek.html
"... Monday, which is denoted by 0 and ends on Sunday which is denoted by 6..."
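A quick check of that convention (the dates below are arbitrary examples, not taken from this dataset):

```python
import pandas as pd

# 2021-01-04 is a Monday and 2021-01-10 is a Sunday
dates = pd.to_datetime(["2021-01-04", "2021-01-10"])
print(dates.dayofweek.tolist())  # → [0, 6]
```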
```
df['Dia Semana Nome'] = df['Data Venda'].dt.day_name()
df['Dia Semana Nome'].sample(5)
df.groupby(['Dia Semana Nome'])['PAX'].agg('sum').sort_values(ascending=False)
```
## Answers
- Rank the total number of PAX by day of the week.
- What is the correlation of Saturday and Sunday combined with the total RPK?
- What is the monthly average of 'Monetário' per Channel? And the median?
- Create a PAX forecast per 'Local de Venda' for the next 15 days counting from the last sale date. (Any technique is allowed here.)
- Assuming you need to produce a study for the responsible team, based on any model or premise, which 'Local de Venda' do you consider most critical? Why?
- Build a model relating sales behavior to variables not present in the data (e.g., GDP, dollar exchange rate, etc.)
```
df.describe()
df.sample(15)
```
#### Exploratory Data Analysis
```
sns.pairplot(df)
```
#### Model relating sales behavior to variables not present in the data (e.g., GDP, dollar exchange rate, etc.)
Brainstorm of possible variables to evaluate:
GDP,
Population growth,
Dollar,
Euro,
Stocks,
Weather conditions (seasons of the year),
Unemployment,
IPCA,
Selic,
CDI
Collecting some of these variables requires assumptions, since the data are fictitious!
Seasons of the year, jet fuel price, dollar price.
I took the liberty of assuming we are talking about Brazil, since 'Monetário Vendido' was originally in R$.
```
df = pd.read_csv('data_modified.csv', sep=';')  # Two columns were added: jet fuel price and Brazil's unemployment rate
```
Source - Unemployment: https://www.ibge.gov.br/estatisticas/sociais/trabalho/9173-pesquisa-nacional-por-amostra-de-domicilios-continua-trimestral.html?=&t=series-historicas&utm_source=landing&utm_medium=explica&utm_campaign=desemprego
Source - Jet Fuel Price: https://www.indexmundi.com/pt/pre%C3%A7os-de-mercado/?mercadoria=combust%c3%advel-de-jato&meses=60
I thought about pulling dollar exchange rates, but there was so much data that syncing the dates would have taken a long time, so I decided to proceed without it.
```
df.columns
df.info()
df.isnull().sum()
# Fixing the variables' dtypes:
df["Data Venda"] = pd.to_datetime(df["Data Venda"])
df[['Canal de Venda','Local de Venda']] = df[['Canal de Venda','Local de Venda']].astype('category')
df[['PAX','RPK']] = df[['PAX','RPK']].astype('float')
df["Monetário Vendido"] = df["Monetário Vendido"].str.replace(",", ".")
df["Monetário Vendido"] = df["Monetário Vendido"].astype('float')
df['Preço Jet Fuel'] = df['Preço Jet Fuel'].str.replace(",", ".")
df['Preço Jet Fuel'] = df['Preço Jet Fuel'].astype('float')
df.info()
sns.pairplot(df)
sns.heatmap(df[['PAX',
'Monetário Vendido', 'RPK', 'Preço Jet Fuel',
'Taxa de Desemprego Brasil']].corr(),vmax=1,vmin=-1,annot=True)
```
### Linear Regression: PAX x Monetário Vendido
- RPK is highly correlated with PAX, so it is dropped to avoid multicollinearity problems;
- Brazil's unemployment rate and the jet fuel price did not have the influence I expected:
I thought a higher jet fuel price would raise ticket prices and hurt revenue, but the correlation is very low.
- Proceeding with a simple linear regression of Monetário Vendido on PAX.
```
plt.figure(figsize=(12.5,12.5))
sns.scatterplot(data=df,x='PAX',y='Monetário Vendido')
df.describe()
# Building the linear regression model with statsmodels
import statsmodels.api as sm
X = sm.add_constant(df['PAX'])  # add an intercept term
y = df['Monetário Vendido']
mod = sm.OLS(y, X)  # OLS takes the dependent variable (endog) first
res = mod.fit()
print(res.summary())
```
I would have liked to explore the model much further and look for new variables, but decided to stop for lack of time!
| github_jupyter |
# TensorFlow script mode training and serving
Script mode is a training script format for TensorFlow that lets you execute any TensorFlow training script in SageMaker with minimal modification. The [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk) handles transferring your script to a SageMaker training instance. On the training instance, SageMaker's native TensorFlow support sets up training-related environment variables and executes your training script. In this tutorial, we use the SageMaker Python SDK to launch a training job and deploy the trained model.
Script mode supports training with a Python script, a Python module, or a shell script. In this example, we use a Python script to train a classification model on the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). We will show how easily you can train a model on SageMaker using TensorFlow 1.x and TensorFlow 2.x scripts with the SageMaker Python SDK. In addition, this notebook demonstrates how to perform real time inference with the [SageMaker TensorFlow Serving container](https://github.com/aws/sagemaker-tensorflow-serving-container). The TensorFlow Serving container is the default inference method for script mode. For full documentation on the TensorFlow Serving container, please visit [here](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst).
# Set up the environment
Let's start by setting up the environment:
```
import os
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
region = sagemaker_session.boto_session.region_name
```
## Training Data
The MNIST dataset has been loaded to the public S3 buckets ``sagemaker-sample-data-<REGION>`` under the prefix ``tensorflow/mnist``. There are four ``.npy`` files under this prefix:
* ``train_data.npy``
* ``eval_data.npy``
* ``train_labels.npy``
* ``eval_labels.npy``
```
training_data_uri = 's3://sagemaker-sample-data-{}/tensorflow/mnist'.format(region)
```
# Construct a script for distributed training
This tutorial's training script was adapted from TensorFlow's official [CNN MNIST example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/layers/cnn_mnist.py). We have modified it to handle the ``model_dir`` parameter passed in by SageMaker. This is an S3 path which can be used for data sharing during distributed training and checkpointing and/or model persistence. We have also added an argument-parsing function to handle processing training-related variables.
At the end of the training job we have added a step to export the trained model to the path stored in the environment variable ``SM_MODEL_DIR``, which always points to ``/opt/ml/model``. This is critical because SageMaker uploads all the model artifacts in this folder to S3 at end of training.
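As a sketch of what that argument handling typically looks like (the exact flags in `mnist.py` may differ; `SM_CHANNEL_TRAINING`, `SM_MODEL_DIR`, `SM_HOSTS`, and `SM_CURRENT_HOST` are the environment variables SageMaker sets inside the training container):

```python
import argparse
import json
import os

def parse_args():
    """Parse SageMaker script-mode arguments from the CLI and environment."""
    parser = argparse.ArgumentParser()
    # model_dir (an S3 path) is passed on the command line by the SageMaker SDK
    parser.add_argument('--model_dir', type=str, default=None)
    # data channels and the local model directory arrive as environment variables
    parser.add_argument('--train', type=str,
                        default=os.environ.get('SM_CHANNEL_TRAINING'))
    parser.add_argument('--sm-model-dir', type=str,
                        default=os.environ.get('SM_MODEL_DIR'))
    parser.add_argument('--hosts', type=list,
                        default=json.loads(os.environ.get('SM_HOSTS', '[]')))
    parser.add_argument('--current-host', type=str,
                        default=os.environ.get('SM_CURRENT_HOST'))
    return parser.parse_known_args()
```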
Here is the entire script:
```
!pygmentize 'mnist.py'
# TensorFlow 2.1 script
!pygmentize 'mnist-2.py'
```
# Create a training job using the `TensorFlow` estimator
The `sagemaker.tensorflow.TensorFlow` estimator handles locating the script mode container, uploading your script to a S3 location and creating a SageMaker training job. Let's call out a couple important parameters here:
* `py_version` is set to `'py3'` to indicate that we are using script mode since legacy mode supports only Python 2. Though Python 2 will be deprecated soon, you can use script mode with Python 2 by setting `py_version` to `'py2'` and `script_mode` to `True`.
* `distributions` is used to configure the distributed training setup. It's required only if you are doing distributed training either across a cluster of instances or across multiple GPUs. Here we are using parameter servers as the distributed training schema. SageMaker training jobs run on homogeneous clusters. To make parameter server more performant in the SageMaker setup, we run a parameter server on every instance in the cluster, so there is no need to specify the number of parameter servers to launch. Script mode also supports distributed training with [Horovod](https://github.com/horovod/horovod). You can find the full documentation on how to configure `distributions` [here](https://github.com/aws/sagemaker-python-sdk/tree/master/src/sagemaker/tensorflow#distributed-training).
```
from sagemaker.tensorflow import TensorFlow
mnist_estimator = TensorFlow(entry_point='mnist.py',
role=role,
instance_count=2,
instance_type='ml.p3.2xlarge',
framework_version='1.15.2',
py_version='py3',
distribution={'parameter_server': {'enabled': True}})
```
You can also instantiate an estimator to train with a TensorFlow 2.1 script. The only things you need to change are the script name and ``framework_version``.
```
mnist_estimator2 = TensorFlow(entry_point='mnist-2.py',
role=role,
instance_count=2,
instance_type='ml.p3.2xlarge',
framework_version='2.1.0',
py_version='py3',
distribution={'parameter_server': {'enabled': True}})
```
## Calling ``fit``
To start a training job, we call `estimator.fit(training_data_uri)`.
An S3 location is used here as the input. `fit` creates a default channel named `'training'`, which points to this S3 location. In the training script we can then access the training data from the location stored in `SM_CHANNEL_TRAINING`. `fit` accepts a couple other types of input as well. See the API doc [here](https://sagemaker.readthedocs.io/en/stable/estimators.html#sagemaker.estimator.EstimatorBase.fit) for details.
When training starts, the TensorFlow container executes mnist.py, passing `hyperparameters` and `model_dir` from the estimator as script arguments. Because we didn't define either in this example, no hyperparameters are passed, and `model_dir` defaults to `s3://<DEFAULT_BUCKET>/<TRAINING_JOB_NAME>`, so the script execution is as follows:
```bash
python mnist.py --model_dir s3://<DEFAULT_BUCKET>/<TRAINING_JOB_NAME>
```
When training is complete, the training job will upload the saved model for TensorFlow serving.
```
mnist_estimator.fit(training_data_uri)
```
Calling ``fit`` to train a model with the TensorFlow 2.1 script.
```
mnist_estimator2.fit(training_data_uri)
```
# Deploy the trained model to an endpoint
The `deploy()` method creates a SageMaker model, which is then deployed to an endpoint to serve prediction requests in real time. We will use the TensorFlow Serving container for the endpoint, because we trained with script mode. This serving container runs an implementation of a web server that is compatible with SageMaker hosting protocol. The [Using your own inference code]() document explains how SageMaker runs inference containers.
```
predictor = mnist_estimator.deploy(initial_instance_count=1, instance_type='ml.p2.xlarge')
```
Deploy the trained TensorFlow 2.1 model to an endpoint as well.
```
predictor2 = mnist_estimator2.deploy(initial_instance_count=1, instance_type='ml.p2.xlarge')
```
# Invoke the endpoint
Let's download the training data and use that as input for inference.
```
import numpy as np
!aws --region {region} s3 cp s3://sagemaker-sample-data-{region}/tensorflow/mnist/train_data.npy train_data.npy
!aws --region {region} s3 cp s3://sagemaker-sample-data-{region}/tensorflow/mnist/train_labels.npy train_labels.npy
train_data = np.load('train_data.npy')
train_labels = np.load('train_labels.npy')
```
The formats of the input and the output data correspond directly to the request and response formats of the `Predict` method in the [TensorFlow Serving REST API](https://www.tensorflow.org/serving/api_rest). SageMaker's TensorFlow Serving endpoints can also accept additional input formats that are not part of the TensorFlow REST API, including the simplified JSON format, line-delimited JSON objects ("jsons" or "jsonlines"), and CSV data.
In this example we are using a `numpy` array as input, which will be serialized into the simplified JSON format. In addition, TensorFlow Serving can also process multiple items at once, as you can see in the following code. You can find the complete documentation on how to make predictions against a TensorFlow Serving SageMaker endpoint [here](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst#making-predictions-against-a-sagemaker-endpoint).
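As a rough illustration of the request format (the zero-filled batch is a placeholder, not this notebook's data), the TensorFlow Serving REST `Predict` body wraps a batch of instances like this; the SageMaker predictor builds an equivalent JSON payload from the `numpy` array for you, possibly in the simplified form without the `instances` wrapper:

```python
import json
import numpy as np

batch = np.zeros((2, 784), dtype=np.float32)  # two flattened 28x28 "images"
payload = json.dumps({"instances": batch.tolist()})

decoded = json.loads(payload)
print(len(decoded["instances"]), len(decoded["instances"][0]))  # → 2 784
```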
```
predictions = predictor.predict(train_data[:50])
for i in range(0, 50):
prediction = predictions['predictions'][i]['classes']
label = train_labels[i]
print('prediction is {}, label is {}, matched: {}'.format(prediction, label, prediction == label))
```
Examine the prediction result from the TensorFlow 2.1 model.
```
predictions2 = predictor2.predict(train_data[:50])
for i in range(0, 50):
prediction = np.argmax(predictions2['predictions'][i])
label = train_labels[i]
print('prediction is {}, label is {}, matched: {}'.format(prediction, label, prediction == label))
```
# Delete the endpoint
Let's delete the endpoint we just created to prevent incurring any extra costs.
```
predictor.delete_endpoint()
```
Delete the TensorFlow 2.1 endpoint as well.
```
predictor2.delete_endpoint()
```
| github_jupyter |
# Recurrent Networks (RNNs)
## Sequential Data
<img src="img/ag/Figure-22-001.png" style="width: 10%; margin-left: auto; margin-right: auto;"/>
## Floating Window
<img src="img/ag/Figure-22-002.png" style="width: 20%; margin-left: auto; margin-right: auto;"/>
## Processing with an MLP
<img src="img/ag/Figure-22-002.png" style="width: 20%; margin-left: 10%; margin-right: auto; float: left;"/>
<img src="img/ag/Figure-22-003.png" style="width: 35%; margin-left: 10%; margin-right: auto; float: right;"/>
## An MLP Does Not Take Order into Account!
<img src="img/ag/Figure-22-004.png" style="width: 25%; margin-left: auto; margin-right: auto;"/>
## RNNs: Networks with Memory
<img src="img/ag/Figure-22-005.png" style="width: 15%; margin-left: auto; margin-right: auto;"/>
## State: Repair Robot
<img src="img/ag/Figure-22-006.png" style="width: 35%; margin-left: auto; margin-right: auto;"/>
## State: Repair Robot
<img src="img/ag/Figure-22-007.png" style="width: 35%; margin-left: auto; margin-right: auto;"/>
## State: Repair Robot
<img src="img/ag/Figure-22-008.png" style="width: 35%; margin-left: auto; margin-right: auto;"/>
## State: Repair Robot
<img src="img/ag/Figure-22-009.png" style="width: 85%; margin-left: auto; margin-right: auto;"/>
# How an RNN Works
<img src="img/ag/Figure-22-010.png" style="width: 85%; margin-left: auto; margin-right: auto;"/>
## State Is Written After Processing
<img src="img/ag/Figure-22-011.png" style="width: 35%; margin-left: 10%; margin-right: auto; float: left;"/>
<img src="img/ag/Figure-22-012.png" style="width: 15%; margin-left: auto; margin-right: 10%; float: right;"/>
## Network Structure (Single Value)
Which operation makes sense here?
<img src="img/ag/Figure-22-013.png" style="width: 35%; margin-left: auto; margin-right: auto;"/>
## Network Structure (Single Value)
<img src="img/ag/Figure-22-014.png" style="width: 35%; margin-left: auto; margin-right: auto;"/>
## Representation in Diagrams
<img src="img/ag/Figure-22-015.png" style="width: 10%; margin-left: auto; margin-right: auto;"/>
## Unrolled Representation
<img src="img/ag/Figure-22-016.png" style="width: 45%; margin-left: auto; margin-right: auto;"/>
## Network Structure for Multiple Values
<img src="img/ag/Figure-22-018.png" style="width: 35%; margin-left: auto; margin-right: auto;"/>
## Data Representation
<img src="img/ag/Figure-22-019.png" style="width: 65%; margin-left: auto; margin-right: auto;"/>
# Data Representation
<img src="img/ag/Figure-22-020.png" style="width: 45%; margin-left: auto; margin-right: auto;"/>
# Data Representation
<img src="img/ag/Figure-22-021.png" style="width: 45%; margin-left: auto; margin-right: auto;"/>
## How It Works
<img src="img/ag/Figure-22-022.png" style="width: 55%; margin-left: auto; margin-right: auto;"/>
## Problems
<div style="margin-top: 20pt; float:left;">
<ul>
<li>Vanishing gradients</li>
<li>Exploding gradients</li>
<li>Forgetting</li>
</ul>
</div>
<img src="img/ag/Figure-22-023.png" style="width: 55%; margin-left: auto; margin-right: 5%; float: right;"/>
## LSTM
<img src="img/ag/Figure-22-029.png" style="width: 65%; margin-left: auto; margin-right: auto;"/>
## Gates
<img src="img/ag/Figure-22-024.png" style="width: 55%; margin-left: auto; margin-right: auto;"/>
## Gates
<img src="img/ag/Figure-22-025.png" style="width: 65%; margin-left: auto; margin-right: auto;"/>
## Forget Gate
<img src="img/ag/Figure-22-026.png" style="width: 30%; margin-left: auto; margin-right: auto;"/>
## Remember Gate
<img src="img/ag/Figure-22-027.png" style="width: 65%; margin-left: auto; margin-right: auto;"/>
## Output Gate
<img src="img/ag/Figure-22-028.png" style="width: 55%; margin-left: auto; margin-right: auto;"/>
## LSTM
<img src="img/ag/Figure-22-029.png" style="width: 65%; margin-left: auto; margin-right: auto;"/>
## How the LSTM Works
<img src="img/ag/Figure-22-030.png" style="width: 65%; margin-left: auto; margin-right: auto;"/>
## How the LSTM Works
<img src="img/ag/Figure-22-031.png" style="width: 45%; margin-left: auto; margin-right: auto;"/>
## How the LSTM Works
<img src="img/ag/Figure-22-032.png" style="width: 65%; margin-left: auto; margin-right: auto;"/>
## How the LSTM Works
<img src="img/ag/Figure-22-033.png" style="width: 65%; margin-left: auto; margin-right: auto;"/>
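The gate pipeline in the figures above can be condensed into a few lines of NumPy. This is a bare sketch of a single LSTM time step (the stacked weight layout is an illustrative assumption, not fastai's implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step; W stacks the weights of all four gates."""
    n = h_prev.size
    z = np.concatenate([x, h_prev]) @ W + b  # pre-activations, shape (4n,)
    f = sigmoid(z[0*n:1*n])      # forget gate: what to erase from the cell state
    i = sigmoid(z[1*n:2*n])      # input ("remember") gate: what to write
    o = sigmoid(z[2*n:3*n])      # output gate: what to expose as the new h
    g = np.tanh(z[3*n:4*n])      # candidate values for the cell state
    c = f * c_prev + i * g       # new cell state
    h = o * np.tanh(c)           # new hidden state
    return h, c
```

With input size `d` and hidden size `n`, `W` has shape `(d + n, 4n)` and `b` has shape `(4n,)`.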
## Using LSTMs
<img src="img/ag/Figure-22-034.png" style="width: 75%; margin-left: auto; margin-right: auto;"/>
## Representing LSTM Layers
<img src="img/ag/Figure-22-035.png" style="width: 25%; margin-left: auto; margin-right: auto;"/>
## Conv/LSTM (Conv/RNN) Architecture
<img src="img/ag/Figure-22-036.png" style="width: 15%; margin-left: auto; margin-right: auto;"/>
## Deep RNN Networks
<img src="img/ag/Figure-22-037.png" style="width: 55%; margin-left: auto; margin-right: auto;"/>
## Bidirectional RNNs
<img src="img/ag/Figure-22-038.png" style="width: 65%; margin-left: auto; margin-right: auto;"/>
## Deep Bidirectional Networks
<img src="img/ag/Figure-22-039.png" style="width: 45%; margin-left: auto; margin-right: auto;"/>
# Application: Text Generation
<img src="img/ag/Figure-22-040.png" style="width: 15%; margin-left: auto; margin-right: auto;"/>
## Training with a Sliding Window
<img src="img/ag/Figure-22-042.png" style="width: 25%; margin-left: auto; margin-right: auto;"/>
# Pretrained LSTM Models
```
from fastai.text.all import *
path = untar_data(URLs.IMDB)
path.ls()
(path/'train').ls()
dls = TextDataLoaders.from_folder(path, valid='test')  # reuse the path downloaded above
dls.show_batch()
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.fine_tune(4, 1e-2)
learn.show_results()
learn.predict("I really liked that movie!")
```
# ULMFiT
Problem: We train the upper layers of the classifier on our task, but the language model stays specialized on Wikipedia!
Solution: Fine-tune the language model before we train the classifier.
<img src="img/ulmfit.png" style="width: 75%; margin-left: auto; margin-right: auto;"/>
```
dls_lm = TextDataLoaders.from_folder(path, is_lm=True, valid_pct=0.1)
dls_lm.show_batch(max_n=5)
learn = language_model_learner(dls_lm, AWD_LSTM, metrics=[accuracy, Perplexity()], path=path, wd=0.1).to_fp16()
learn.fit_one_cycle(1, 1e-2)
learn.save('epoch-1')
learn = learn.load('epoch-1')
learn.unfreeze()
learn.fit_one_cycle(10, 1e-3)
learn.save_encoder('finetuned')
TEXT = "I liked this movie because"
N_WORDS = 40
N_SENTENCES = 2
preds = [learn.predict(TEXT, N_WORDS, temperature=0.75)
for _ in range(N_SENTENCES)]
print("\n".join(preds))
dls_clas = TextDataLoaders.from_folder(path, valid='test', text_vocab=dls_lm.vocab)
learn = text_classifier_learner(dls_clas, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn = learn.load_encoder('finetuned')
learn.fit_one_cycle(1, 2e-2)
learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2/(2.6**4),1e-2))
learn.freeze_to(-3)
learn.fit_one_cycle(1, slice(5e-3/(2.6**4),5e-3))
learn.unfreeze()
learn.fit_one_cycle(2, slice(1e-3/(2.6**4),1e-3))
```
# This notebook is copied from [here](https://github.com/warmspringwinds/tensorflow_notes/blob/master/tfrecords_guide.ipynb) with some small changes
---
### Introduction
In this post we will cover how to convert a dataset into a _.tfrecord_ file.
Binary files are sometimes easier to use, because you don't have to specify
separate directories for images and ground-truth annotations. When you store your data
in a binary file, your data sits in one contiguous block, instead of each image
and annotation being stored separately. Opening a file is a considerably
time-consuming operation, especially if you use an _hdd_ and not an _ssd_, because it
involves moving the disk read head, and that takes quite some time. Overall,
by using binary files you make the data easier to distribute and
better aligned for efficient reading.
The post consists of three parts:
* The first part demonstrates how you can get the raw data bytes of any image using _numpy_, which is in some sense similar to what you do when converting your dataset to a binary format.
* The second part shows how to convert a dataset to a _tfrecord_ file without defining a computational graph, only by employing some built-in _tensorflow_ functions.
* The third part explains how to define a model for reading your data from the created binary file and batching it in a random manner, which is necessary during training.
### Getting raw data bytes in numpy
Here we demonstrate how you can get raw data bytes of an image (any ndarray)
and how to restore the image back.
One important note is that **during this operation
the information about the dimensions of the image is lost, and we need it to
recover the original image. This is one of the reasons why
we will have to store the raw image representation along with the dimensions
of the original image.**
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
cat_img = plt.imread('data/imgs/cat.jpg')
plt.imshow(cat_img)
# io.imshow(cat_img)
# Let's convert the picture into a raw bytes representation
# using the ndarray.tobytes() method
# (ndarray.tostring() is a deprecated alias for tobytes())
cat_string = cat_img.tobytes()
# Now let's convert the bytes back to the image.
# Important: the dtype should be specified,
# otherwise the reconstruction will be erroneous.
# The reconstruction is 1d, so we need the sizes of the image
# to fully reconstruct it.
reconstructed_cat_1d = np.frombuffer(cat_string, dtype=np.uint8)
# Here we reshape the 1d representation.
# This is why we need to store the sizes of the image
# along with its serialized representation.
reconstructed_cat_img = reconstructed_cat_1d.reshape(cat_img.shape)
# Let's check if we got everything right and compare
# reconstructed array to the original one.
np.allclose(cat_img, reconstructed_cat_img)
```
### Creating a _.tfrecord_ file and reading it without defining a graph
Here we show how to write a small dataset (three image/annotation pairs from _PASCAL VOC_) to a
_.tfrecord_ file and read it back without defining a computational graph.
We also verify that the images we read back from the _.tfrecord_ file are equal to
the original images. Note that we also write the sizes of the images along with
the images in raw format; the previous section showed why we need to store the
sizes as well.
```
# Get some image/annotation pairs for example
filename_pairs = [
('data/VOC2012/JPEGImages/2007_000032.jpg',
'data/VOC2012/SegmentationClass/2007_000032.png'),
('data/VOC2012/JPEGImages/2007_000039.jpg',
'data/VOC2012/SegmentationClass/2007_000039.png'),
('data/VOC2012/JPEGImages/2007_000033.jpg',
'data/VOC2012/SegmentationClass/2007_000033.png')
]
%matplotlib inline
# Important: We are using PIL to read .png files later.
# This is done on purpose to read indexed png files
# in a special way -- reading only the indexes and not
# mapping them to actual rgb values. This is specific to the
# PASCAL VOC dataset. If you don't want this type of behaviour,
# consider using skimage.io.imread()
from PIL import Image
import numpy as np
import skimage.io as io
import tensorflow as tf
def _bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _int64_feature(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
tfrecords_filename = 'pascal_voc_segmentation.tfrecords'
writer = tf.python_io.TFRecordWriter(tfrecords_filename)
# Let's collect the real images to later on compare
# to the reconstructed ones
original_images = []
for img_path, annotation_path in filename_pairs:
img = np.array(Image.open(img_path))
annotation = np.array(Image.open(annotation_path))
# The reason to store image sizes was demonstrated
# in the previous example -- we have to know sizes
# of images to later read raw serialized string,
# convert to 1d array and convert to respective
# shape that image used to have.
height = img.shape[0]
width = img.shape[1]
# Put in the original images into array
# Just for future check for correctness
original_images.append((img, annotation))
img_raw = img.tostring()
annotation_raw = annotation.tostring()
example = tf.train.Example(features=tf.train.Features(feature={
'height': _int64_feature(height),
'width': _int64_feature(width),
'image_raw': _bytes_feature(img_raw),
'mask_raw': _bytes_feature(annotation_raw)}))
writer.write(example.SerializeToString())
writer.close()
reconstructed_images = []
record_iterator = tf.python_io.tf_record_iterator(path=tfrecords_filename)
for string_record in record_iterator:
example = tf.train.Example()
example.ParseFromString(string_record)
height = int(example.features.feature['height']
.int64_list
.value[0])
width = int(example.features.feature['width']
.int64_list
.value[0])
img_string = (example.features.feature['image_raw']
.bytes_list
.value[0])
annotation_string = (example.features.feature['mask_raw']
.bytes_list
.value[0])
img_1d = np.fromstring(img_string, dtype=np.uint8)
reconstructed_img = img_1d.reshape((height, width, -1))
annotation_1d = np.fromstring(annotation_string, dtype=np.uint8)
# Annotations don't have depth (3rd dimension)
reconstructed_annotation = annotation_1d.reshape((height, width))
reconstructed_images.append((reconstructed_img, reconstructed_annotation))
# Let's check if the reconstructed images match
# the original images
for original_pair, reconstructed_pair in zip(original_images, reconstructed_images):
img_pair_to_compare, annotation_pair_to_compare = zip(original_pair,
reconstructed_pair)
print(np.allclose(*img_pair_to_compare))
print(np.allclose(*annotation_pair_to_compare))
```
### Defining the graph to read and batch images from _.tfrecords_
Here we define a graph to read and batch images from the file that we created
previously. It is very important to randomly shuffle images during training, and depending
on the application we have to use a different batch size.
Note that if we use batching, we have to define
the sizes of the images beforehand. This may sound like a limitation, but in
image classification and image segmentation, training is typically performed on images
of the same size.
The code provided here is partially based on [this official example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/how_tos/reading_data/fully_connected_reader.py) and code from [this stackoverflow question](http://stackoverflow.com/questions/35028173/how-to-read-images-with-different-size-in-a-tfrecord-file).
Also, if you want to know how you can control the batching according to your needs, read [these docs](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.shuffle_batch.md).
```
%matplotlib inline
import tensorflow as tf
import skimage.io as io
IMAGE_HEIGHT = 384
IMAGE_WIDTH = 384
tfrecords_filename = 'pascal_voc_segmentation.tfrecords'
def read_and_decode(filename_queue):
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
features = tf.parse_single_example(
serialized_example,
# Defaults are not specified since both keys are required.
features={
'height': tf.FixedLenFeature([], tf.int64),
'width': tf.FixedLenFeature([], tf.int64),
'image_raw': tf.FixedLenFeature([], tf.string),
'mask_raw': tf.FixedLenFeature([], tf.string)
})
# Convert from a scalar string tensor (whose single string has
# length mnist.IMAGE_PIXELS) to a uint8 tensor with shape
# [mnist.IMAGE_PIXELS].
image = tf.decode_raw(features['image_raw'], tf.uint8)
annotation = tf.decode_raw(features['mask_raw'], tf.uint8)
height = tf.cast(features['height'], tf.int32)
width = tf.cast(features['width'], tf.int32)
image_shape = tf.stack([height, width, 3])
annotation_shape = tf.stack([height, width, 1])
image = tf.reshape(image, image_shape)
annotation = tf.reshape(annotation, annotation_shape)
image_size_const = tf.constant((IMAGE_HEIGHT, IMAGE_WIDTH, 3), dtype=tf.int32)
annotation_size_const = tf.constant((IMAGE_HEIGHT, IMAGE_WIDTH, 1), dtype=tf.int32)
# Random transformations can be put here: right before you crop images
# to predefined size. To get more information look at the stackoverflow
# question linked above.
resized_image = tf.image.resize_image_with_crop_or_pad(image=image,
target_height=IMAGE_HEIGHT,
target_width=IMAGE_WIDTH)
resized_annotation = tf.image.resize_image_with_crop_or_pad(image=annotation,
target_height=IMAGE_HEIGHT,
target_width=IMAGE_WIDTH)
images, annotations = tf.train.shuffle_batch( [resized_image, resized_annotation],
batch_size=2,
capacity=30,
num_threads=2,
min_after_dequeue=10)
return images, annotations
filename_queue = tf.train.string_input_producer(
[tfrecords_filename], num_epochs=10)
# Even when reading in multiple threads, share the filename
# queue.
image, annotation = read_and_decode(filename_queue)
# The op for initializing the variables.
init_op = tf.group(tf.global_variables_initializer(),
tf.local_variables_initializer())
with tf.Session() as sess:
sess.run(init_op)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
# Let's read off 3 batches just for example
for i in range(3):
img, anno = sess.run([image, annotation])
print(img[0, :, :, :].shape)
print('current batch')
# We selected the batch size of two
# So we should get two image pairs in each batch
# Let's make sure it is random
io.imshow(img[0, :, :, :])
io.show()
io.imshow(anno[0, :, :, 0])
io.show()
io.imshow(img[1, :, :, :])
io.show()
io.imshow(anno[1, :, :, 0])
io.show()
coord.request_stop()
coord.join(threads)
```
### Conclusion and Discussion
In this post we covered how to convert a dataset into the _.tfrecord_ format,
verified that we got the same data back, and saw how to define a graph to
read and batch examples from the created file.
# Transfer Learning Template
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
```
# Allowed Parameters
These are the allowed parameters, not defaults.
Each of these values needs to be present in the injected parameters (the notebook will raise an exception if any are missing).
Papermill uses the cell tag "parameters" to inject the real parameters below this cell.
Enable cell tags in the notebook UI to see what I mean.
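For illustration only (the notebook filenames and parameter values here are hypothetical), a papermill run that injects parameters into this template might look like:

```shell
# Papermill writes a new cell containing the -p values directly below
# the cell tagged "parameters", overriding the defaults in this template.
papermill ptn_template.ipynb runs/output.ipynb \
    -p experiment_name "demo" \
    -p lr 0.0001 \
    -p seed 1337 \
    -p dataset_seed 1337
```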
```
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_3Av2:oracle.run1.framed -> cores+wisig",
"device": "cuda",
"lr": 0.0001,
"x_shape": [2, 200],
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 200]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 16000, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10.",
"1-11.",
"1-15.",
"1-16.",
"1-17.",
"1-18.",
"1-19.",
"10-4.",
"10-7.",
"11-1.",
"11-14.",
"11-17.",
"11-20.",
"11-7.",
"13-20.",
"13-8.",
"14-10.",
"14-11.",
"14-14.",
"14-7.",
"15-1.",
"15-20.",
"16-1.",
"16-16.",
"17-10.",
"17-11.",
"17-2.",
"19-1.",
"19-16.",
"19-19.",
"19-20.",
"19-3.",
"2-10.",
"2-11.",
"2-17.",
"2-18.",
"2-20.",
"2-3.",
"2-4.",
"2-5.",
"2-6.",
"2-7.",
"2-8.",
"3-13.",
"3-18.",
"3-3.",
"4-1.",
"4-10.",
"4-11.",
"4-19.",
"5-5.",
"6-15.",
"7-10.",
"7-14.",
"8-18.",
"8-20.",
"8-3.",
"8-8.",
],
"domains": [1, 2, 3, 4, 5],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/cores.stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "take_200"],
"episode_transforms": [],
"domain_prefix": "C_",
},
{
"labels": [
"1-10",
"1-12",
"1-14",
"1-16",
"1-18",
"1-19",
"1-8",
"10-11",
"10-17",
"10-4",
"10-7",
"11-1",
"11-10",
"11-19",
"11-20",
"11-4",
"11-7",
"12-19",
"12-20",
"12-7",
"13-14",
"13-18",
"13-19",
"13-20",
"13-3",
"13-7",
"14-10",
"14-11",
"14-12",
"14-13",
"14-14",
"14-19",
"14-20",
"14-7",
"14-8",
"14-9",
"15-1",
"15-19",
"15-6",
"16-1",
"16-16",
"16-19",
"16-20",
"17-10",
"17-11",
"18-1",
"18-10",
"18-11",
"18-12",
"18-13",
"18-14",
"18-15",
"18-16",
"18-17",
"18-19",
"18-2",
"18-20",
"18-4",
"18-5",
"18-7",
"18-8",
"18-9",
"19-1",
"19-10",
"19-11",
"19-12",
"19-13",
"19-14",
"19-15",
"19-19",
"19-2",
"19-20",
"19-3",
"19-4",
"19-6",
"19-7",
"19-8",
"19-9",
"2-1",
"2-13",
"2-15",
"2-3",
"2-4",
"2-5",
"2-6",
"2-7",
"2-8",
"20-1",
"20-12",
"20-14",
"20-15",
"20-16",
"20-18",
"20-19",
"20-20",
"20-3",
"20-4",
"20-5",
"20-7",
"20-8",
"3-1",
"3-13",
"3-18",
"3-2",
"3-8",
"4-1",
"4-10",
"4-11",
"5-1",
"5-5",
"6-1",
"6-15",
"6-6",
"7-10",
"7-11",
"7-12",
"7-13",
"7-14",
"7-7",
"7-8",
"7-9",
"8-1",
"8-13",
"8-14",
"8-18",
"8-20",
"8-3",
"8-8",
"9-1",
"9-7",
],
"domains": [1, 2, 3, 4],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/wisig.node3-19.stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "take_200"],
"episode_transforms": [],
"domain_prefix": "W_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": ["unit_power", "take_200", "resample_20Msps_to_25Msps"],
"episode_transforms": [],
"domain_prefix": "O_",
},
],
"seed": 500,
"dataset_seed": 500,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if 'parameters' not in locals() and 'parameters' not in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2,256] # Default to this if we don't supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
```
| github_jupyter |
# Custom Models in pycalphad: Viscosity
## Viscosity Model Background
We are going to take a CALPHAD-based property model from the literature and use it to predict the viscosity of Al-Cu-Zr liquids.
For a binary alloy liquid under small undercooling, Gąsior suggested an entropy model of the form
$$\eta = (\sum_i x_i \eta_i ) (1 - 2\frac{S_{ex}}{R})$$
where $\eta_i$ is the viscosity of the element $i$, $x_i$ is the mole fraction, $S_{ex}$ is the excess entropy, and $R$ is the gas constant.
For more details on this model, see
1. M.E. Trybula, T. Gancarz, W. Gąsior, *Density, surface tension and viscosity of liquid binary Al-Zn and ternary Al-Li-Zn alloys*, Fluid Phase Equilibria 421 (2016) 39-48, [doi:10.1016/j.fluid.2016.03.013](http://dx.doi.org/10.1016/j.fluid.2016.03.013).
2. Władysław Gąsior, *Viscosity modeling of binary alloys: Comparative studies*, Calphad 44 (2014) 119-128, [doi:10.1016/j.calphad.2013.10.007](http://dx.doi.org/10.1016/j.calphad.2013.10.007).
3. Chenyang Zhou, Cuiping Guo, Changrong Li, Zhenmin Du, *Thermodynamic assessment of the phase equilibria and prediction of glass-forming ability of the Al–Cu–Zr system*, Journal of Non-Crystalline Solids 461 (2017) 47-60, [doi:10.1016/j.jnoncrysol.2016.09.031](https://doi.org/10.1016/j.jnoncrysol.2016.09.031).
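Before setting up the CALPHAD machinery, the Gąsior equation above can be illustrated with a small numerical sketch (the values below are made up for demonstration, not fitted data, and the helper name `gasior_viscosity` is our own):

```python
import numpy as np

R = 8.3145  # gas constant, J/(mol K)

def gasior_viscosity(x, eta, s_ex):
    # eta = (sum_i x_i * eta_i) * (1 - 2 * S_ex / R)
    # x: mole fractions, eta: pure-element viscosities in Pa-s,
    # s_ex: excess entropy in J/(mol K)
    return float(np.dot(x, eta)) * (1.0 - 2.0 * s_ex / R)

# Illustrative values for a 50:50 binary liquid (not real data):
print(gasior_viscosity([0.5, 0.5], [1.0e-3, 4.0e-3], -2.0))
```

Note that a negative excess entropy raises the predicted viscosity above the linear mixture value, which is the qualitative behavior the model is designed to capture.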
```
from pycalphad import Database
```
## TDB Parameters
We can calculate the excess entropy of the liquid using the Al-Cu-Zr thermodynamic database from Zhou et al.
We add three new parameters to describe the viscosity (in Pa-s) of the pure elements Al, Cu, and Zr:
```
$ Viscosity test parameters
PARAMETER ETA(LIQUID,AL;0) 2.98150E+02 +0.000281*EXP(12300/(8.3145*T)); 6.00000E+03
N REF:0 !
PARAMETER ETA(LIQUID,CU;0) 2.98150E+02 +0.000657*EXP(21500/(8.3145*T)); 6.00000E+03
N REF:0 !
PARAMETER ETA(LIQUID,ZR;0) 2.98150E+02 +4.74E-3 - 4.97E-6*(T-2128) ; 6.00000E+03
N REF:0 !
```
Great! However, if we try to load the database now, we will get an error. This is because `ETA` parameters are not supported by default in pycalphad, so we need to tell pycalphad's TDB parser that "ETA" should be on the list of supported parameter types.
```
dbf = Database('alcuzr-viscosity.tdb')
```
### Adding the `ETA` parameter to the TDB parser
```
import pycalphad.io.tdb_keywords
pycalphad.io.tdb_keywords.TDB_PARAM_TYPES.append('ETA')
```
Now the database will load:
```
dbf = Database('alcuzr-viscosity.tdb')
```
## Writing the Custom Viscosity Model
Now that we have our `ETA` parameters in the database, we need to write a `Model` class to tell pycalphad how to compute viscosity. All custom models are subclasses of the pycalphad `Model` class.
When a `ViscosityModel` is constructed, its `build_phase` method is run; after all the other initialization, we construct the viscosity expression in a new method, `build_viscosity`. The implementation of `build_viscosity` needs to do four things:
1. Query the Database for all the `ETA` parameters
2. Compute their weighted sum
3. Compute the excess entropy of the liquid
4. Plug all the values into the Gąsior equation and return the result
Since the `build_phase` method sets the `viscosity` attribute on the `ViscosityModel`, we can access the property by using `viscosity` as the output in pycalphad calculations.
```
from tinydb import where
import sympy
from pycalphad import Model, variables as v
class ViscosityModel(Model):
def build_phase(self, dbe):
super(ViscosityModel, self).build_phase(dbe)
self.viscosity = self.build_viscosity(dbe)
def build_viscosity(self, dbe):
if self.phase_name != 'LIQUID':
raise ValueError('Viscosity is only defined for LIQUID phase')
phase = dbe.phases[self.phase_name]
param_search = dbe.search
# STEP 1
eta_param_query = (
(where('phase_name') == phase.name) & \
(where('parameter_type') == 'ETA') & \
(where('constituent_array').test(self._array_validity))
)
# STEP 2
eta = self.redlich_kister_sum(phase, param_search, eta_param_query)
# STEP 3
excess_energy = self.GM - self.models['ref'] - self.models['idmix']
#liquid_mod = Model(dbe, self.components, self.phase_name)
## we only want the excess contributions to the entropy
#del liquid_mod.models['ref']
#del liquid_mod.models['idmix']
excess_entropy = -excess_energy.diff(v.T)
ks = 2
# STEP 4
result = eta * (1 - ks * excess_entropy / v.R)
self.eta = eta
return result
```
## Performing Calculations
Now we can create an instance of `ViscosityModel` for the liquid phase using the `Database` object we created earlier. We can verify this model has a `viscosity` attribute containing a symbolic expression for the viscosity.
```
mod = ViscosityModel(dbf, ['CU', 'ZR'], 'LIQUID')
print(mod.viscosity)
```
Finally we calculate and plot the viscosity.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from pycalphad import calculate
mod = ViscosityModel(dbf, ['CU', 'ZR'], 'LIQUID')
temp = 2100
# NOTICE: we need to tell pycalphad about our model for this phase
models = {'LIQUID': mod}
res = calculate(dbf, ['CU', 'ZR'], 'LIQUID', P=101325, T=temp, model=models, output='viscosity')
fig = plt.figure(figsize=(6,6))
ax = fig.gca()
ax.scatter(res.X.sel(component='ZR'), 1000 * res.viscosity.values)
ax.set_xlabel('X(ZR)')
ax.set_ylabel('Viscosity (mPa-s)')
ax.set_xlim((0,1))
ax.set_title('Viscosity at {}K'.format(temp));
```
We repeat the calculation for Al-Cu.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from pycalphad import calculate
temp = 1300
models = {'LIQUID': ViscosityModel} # passing the class (rather than an instance) also works
res = calculate(dbf, ['CU', 'AL'], 'LIQUID', P=101325, T=temp, model=models, output='viscosity')
fig = plt.figure(figsize=(6,6))
ax = fig.gca()
ax.scatter(res.X.sel(component='CU'), 1000 * res.viscosity.values)
ax.set_xlabel('X(CU)')
ax.set_ylabel('Viscosity (mPa-s)')
ax.set_xlim((0,1))
ax.set_title('Viscosity at {}K'.format(temp));
```
| github_jupyter |
# Section 3.3
```
%run preamble.py
danish = pd.read_csv("../Data/danish.csv").x.values
```
# MLE of composite models
```
parms, BIC, AIC = mle_composite(danish, (1,1,1), "gam-par")
fit_gam_par = pd.DataFrame(np.append(parms, [AIC, BIC])).T
fit_gam_par.columns = ["shape", "tail", "thres", "AIC","BIC"]
print(fit_gam_par)
parms, BIC, AIC = mle_composite(danish, (1,1,1), "wei-par")
fit_wei_par = pd.DataFrame(np.append(parms, [AIC, BIC])).T
fit_wei_par.columns = ["shape", "tail", "thres", "AIC","BIC"]
print(fit_wei_par)
parms, BIC, AIC = mle_composite(danish, (0.5,1,1), "lnorm-par")
fit_lnorm_par = pd.DataFrame(np.append(parms, [AIC, BIC])).T
fit_lnorm_par.columns = ["shape", "tail", "thres", "AIC","BIC"]
print(fit_lnorm_par)
```
# Bayesian inference and model comparison using SMC
```
np.random.seed(333)
model_prior, a, b = "gamma", 0.1*np.array([1,1,1]), 0.1*np.array([1, 1, 1])
popSize, verbose, smc_method, parallel, nproc = 1000, True, "likelihood_anealing", True, 20
loss_models = ['lnorm-par', "wei-par", "gam-par"]
%time traces_like, res_df_like = fit_composite_models_smc(danish, loss_models, model_prior, a, b, popSize, verbose, smc_method, parallel, nproc)
np.random.seed(333)
model_prior, a, b = "gamma", np.array([0.1,0.1,0.1]), np.array([0.1, 0.1, 0.1])
popSize, verbose, smc_method, parallel, nproc = 1000, True, "data_by_batch", True, 20
loss_models = ['lnorm-par', "wei-par", "gam-par"]
%time traces_data, res_df_data = fit_composite_models_smc(danish, loss_models, model_prior, a, b, popSize, verbose, smc_method, parallel, nproc)
```
## Fitting the gamma-Pareto model
```
np.random.seed(333)
fig, axs = plt.subplots(1, 3, figsize=(5, 3.5))
loss_model = "gam-par"
parms_names = ['shape', 'tail', 'thres' ]
x_labs = ['Shape', 'Tail', 'Threshold']
for k in range(3):
# positions = np.linspace(min(trace_gibbs_gam_par[parms_names[k]]), max(trace_gibbs_gam_par[parms_names[k]]), 1000)
# kernel = st.gaussian_kde(trace_gibbs_gam_par[parms_names[k]])
# axs[k].plot(positions, kernel(positions), lw=3, label = "Gibbs", color = "blue")
positions = np.linspace(min(traces_like[loss_model][parms_names[k]].values),
max(traces_like[loss_model][parms_names[k]].values), 1000)
kernel = st.gaussian_kde(traces_like[loss_model][parms_names[k]].values)
axs[k].plot(positions, kernel(positions), lw=3, label = "SMC simulated annealing",
color = "blue", linestyle = "dotted")
positions = np.linspace(min(traces_data[loss_model][parms_names[k]].values),
max(traces_data[loss_model][parms_names[k]].values), 1000)
kernel = st.gaussian_kde(traces_data[loss_model][parms_names[k]].values)
axs[k].plot(positions, kernel(positions), lw=3, label = "SMC data by batches",
color = "blue", linestyle = "dashed")
axs[k].axvline(fit_gam_par[parms_names[k]].values, color = "black", linestyle = "dotted", label = "mle")
axs[k].set_yticks([])
axs[k].set_xlabel(x_labs[k])
axs[k].set_xticks(np.round(
traces_like[loss_model][parms_names[k]].quantile([0.05, 0.95]).values, 2))
handles, labels = axs[0].get_legend_handles_labels()
fig.legend(handles, labels, ncol = 2, borderaxespad=-0.2, loc='upper center',
frameon=False)
# fig.tight_layout()
sns.despine()
plt.savefig("../Figures/smc_posterior_danish_gamma_par_en.pdf")
```
## Fitting the Weibull-Pareto model
```
np.random.seed(333)
fig, axs = plt.subplots(1, 3, figsize=(5, 3.5))
loss_model = "wei-par"
for k in range(3):
# positions = np.linspace(min(trace_gibbs_wei_par[parms_names[k]]), max(trace_gibbs_wei_par[parms_names[k]]), 1000)
# kernel = st.gaussian_kde(trace_gibbs_wei_par[parms_names[k]])
# axs[k].plot(positions, kernel(positions), lw=3, label = "Gibbs", color = "green")
positions = np.linspace(min(traces_like[loss_model][parms_names[k]].values),
max(traces_like[loss_model][parms_names[k]].values), 1000)
kernel = st.gaussian_kde(traces_like[loss_model][parms_names[k]].values)
axs[k].plot(positions, kernel(positions), lw=3, label = "SMC simulated annealing",
color = "green", linestyle = "dotted")
positions = np.linspace(min(traces_data[loss_model][parms_names[k]].values),
max(traces_data[loss_model][parms_names[k]].values), 1000)
kernel = st.gaussian_kde(traces_data[loss_model][parms_names[k]].values)
axs[k].plot(positions, kernel(positions), lw=3, label = "SMC data by batches",
color = "green", linestyle = "dashed")
axs[k].axvline(fit_wei_par[parms_names[k]].values, color = "black", linestyle = "dotted", label = "mle")
axs[k].set_yticks([])
axs[k].set_xlabel(x_labs[k])
axs[k].set_xticks(np.round(
traces_like[loss_model][parms_names[k]].quantile([0.05, 0.95]).values, 2))
handles, labels = axs[0].get_legend_handles_labels()
fig.legend(handles, labels, ncol = 2, borderaxespad=-0.2, loc='upper center',
frameon=False)
sns.despine()
print(fit_wei_par[parms_names[0]].values)
plt.savefig("../Figures/smc_posterior_danish_weibull_par_en.pdf")
```
## Fitting the lognormal-Pareto model
```
np.random.seed(333)
fig, axs = plt.subplots(1, 3, figsize=(5, 3.5))
loss_model = "lnorm-par"
for k in range(3):
# positions = np.linspace(min(trace_gibbs_lnorm_par[parms_names[k]]), max(trace_gibbs_lnorm_par[parms_names[k]]), 1000)
# kernel = st.gaussian_kde(trace_gibbs_lnorm_par[parms_names[k]])
# axs[k].plot(positions, kernel(positions), lw=3, label = "Gibbs", color = "red")
positions = np.linspace(min(traces_like[loss_model][parms_names[k]].values),
max(traces_like[loss_model][parms_names[k]].values), 1000)
kernel = st.gaussian_kde(traces_like[loss_model][parms_names[k]].values)
axs[k].plot(positions, kernel(positions), lw=3, label = "SMC simulated annealing",
color = "red", linestyle = "dotted")
positions = np.linspace(min(traces_data[loss_model][parms_names[k]].values),
max(traces_data[loss_model][parms_names[k]].values), 1000)
kernel = st.gaussian_kde(traces_data[loss_model][parms_names[k]].values)
axs[k].plot(positions, kernel(positions), lw=3, label = "SMC data by batches",
color = "red", linestyle = "dashed")
axs[k].axvline(fit_lnorm_par[parms_names[k]].values, color = "black", linestyle = "dotted", label = "mle")
axs[k].set_yticks([])
axs[k].set_xlabel(x_labs[k])
axs[k].set_xticks(np.round(
traces_like[loss_model][parms_names[k]].quantile([0.05, 0.95]).values, 2))
handles, labels = axs[0].get_legend_handles_labels()
fig.legend(handles, labels, ncol = 2, borderaxespad=-0.2, loc='upper center',
frameon=False)
sns.despine()
print(fit_lnorm_par[parms_names[0]].values)
plt.savefig("../Figures/smc_posterior_danish_lnorm_par_en.pdf")
print(res_df_data.to_latex(index = False,float_format="%.2f", columns = ["loss_model","log_marg","model_evidence", "DIC", "WAIC"]))
res_df_data
print(res_df_like.to_latex(index = False, float_format="%.2f", columns = ["loss_model","log_marg","model_evidence", "DIC", "WAIC"]))
res_df_like
```
| github_jupyter |
# "[Prob] Basics of the Poisson Distribution"
> "Some useful facts about the Poisson distribution"
- toc:false
- branch: master
- badges: false
- comments: true
- author: Peiyi Hung
- categories: [category, learning, probability]
# Introduction
The Poisson distribution is an important discrete probability distribution prevalent in a variety of fields. In this post, I will present some useful facts about the Poisson distribution. Here are the concepts I will discuss:
* PMF, expectation and variance of Poisson
* In what situation can we use it?
* The sum of independent Poissons is also Poisson
* Relationship with the Binomial distribution
# PMF, Expectation and Variance
First, we define the Poisson distribution.
Let $X$ be a Poisson random variable with parameter $\lambda$, where $\lambda > 0$. The pmf of $X$ is:
$$P(X=x) = \frac{e^{-\lambda}\lambda^{x}}{x!}, \quad \text{for } x = 0, 1, 2, 3, \dots$$
where $x$ can only be a non-negative integer.
This is a valid pmf since
$$\sum_{k=0}^{\infty} \frac{e^{-\lambda}\lambda^{k}}{k!} = e^{-\lambda}\sum_{k=0}^{\infty} \frac{\lambda^{k}}{k!}= e^{-\lambda}e^{\lambda}=1$$
where $\displaystyle\sum_{k=0}^{\infty} \frac{\lambda^{k}}{k!}$ is the Taylor expansion of $e^{\lambda}$.
The expectation and the variance of the Poisson distribution are both $\lambda$. The derivation of this result is just some pattern recognition of $\sum_{k=0}^{\infty} \frac{\lambda^{k}}{k!}=e^{\lambda}$, so I omit it here.
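Since the derivation is omitted, here is a quick numerical sanity check that the sample mean and variance both approach $\lambda$ (using NumPy's Poisson sampler):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 4.2
samples = rng.poisson(lam, size=200_000)

# Both should be close to lambda = 4.2
print(samples.mean(), samples.var())
```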
# In what situation can we use it?
The Poisson distribution is often applied when we count the number of successes, or occurrences of an event, in a time interval or a particular region, and there is a large number of trials each with a small probability of success. The parameter $\lambda$ is the rate parameter, indicating the average number of successes in a time interval or region.
Here are some examples:
* The number of emails you receive in an hour.
* The number of chips in a chocolate chip cookie.
* The number of earthquakes in a year in some region of the world.
Also, let's consider an example probability problem.
**Example problem 1**
> Raindrops are falling at an average rate of 20 drops per square inch per minute. Find the probability that the region has no rain drops in a given 1-minute time interval.
The success in this problem is one raindrop. The average rate is 20, so $\lambda=20$. Let $X$ be the number of raindrops the region receives in one minute. We model $X$ with Pois$(20)$, so the probability we are concerned with is
$$P(X=0) = \frac{e^{-20}20^0}{0!}=e^{-20} \approx 2.0611\times 10 ^{-9}$$
If instead we are concerned with raindrops over a 3-second interval in 5 square inches, then $$\lambda = 20\times\frac{1}{20} \text{ minute} \times 5 \text{ square inches} = 5$$
Let $Y$ be the number of raindrops in that 3-second interval. Then $Y \sim$ Pois$(5)$, so $P(Y=0) = e^{-5} \approx 0.0067$.
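Both probabilities can be double-checked with `scipy.stats.poisson`:

```python
from math import exp
from scipy.stats import poisson

# P(X = 0) with lambda = 20: one minute, one square inch
print(poisson.pmf(0, 20))  # equals exp(-20)

# P(Y = 0) with lambda = 5: 3 seconds, 5 square inches
print(poisson.pmf(0, 5))   # equals exp(-5), about 0.0067
```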
# Sum of Independent Poisson
The sum of independent Poisson random variables is also Poisson. Let $X \sim$ Pois$(\lambda_1)$ and $Y \sim$ Pois$(\lambda_2)$ be independent. If $T=X+Y$, then $T \sim \text{Pois}(\lambda_1 + \lambda_2)$.
To get the pmf of $T$, we first apply the law of total probability:
$$
P(X+Y=t) = \sum_{k=0}^{t}P(X+Y=t|X=k)P(X=k)
$$
Since they are independent, we get
$$
\sum_{k=0}^{t}P(X+Y=t|X=k)P(X=k) = \sum_{k=0}^{t}P(Y=t-k)P(X=k)
$$
Next, we plug in the Poisson pmfs:
$$
\sum_{k=0}^{t}P(Y=t-k)P(X=k) = \sum_{k=0}^{t}\frac{e^{-\lambda_2}\lambda_2^{t-k}}{(t-k)!}\frac{e^{-\lambda_1}\lambda_1^k}{k!} = \frac{e^{-(\lambda_1+\lambda_2)}}{t!}\sum_{k=0}^{t} {t \choose k}\lambda_1^{k}\lambda_2^{t-k}
$$
Finally, by the Binomial theorem, we get
$$
P(X+Y=t) = \frac{e^{-(\lambda_1+\lambda_2)}(\lambda_1+\lambda_2)^t}{t!}
$$
which is the pmf of Pois$(\lambda_1 + \lambda_2)$.
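The convolution above is easy to confirm numerically (SciPy assumed available):

```python
from scipy.stats import poisson

lam1, lam2 = 2.0, 3.5
for t in range(10):
    conv = sum(poisson.pmf(k, lam1) * poisson.pmf(t - k, lam2)
               for k in range(t + 1))
    # The convolution of the two pmfs matches the Pois(lam1 + lam2) pmf
    assert abs(conv - poisson.pmf(t, lam1 + lam2)) < 1e-12
print("sum of independent Poissons is Pois(lam1 + lam2)")
```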
# Relationship with the Binomial distribution
We can obtain the Binomial distribution from the Poisson distribution, and vice versa. Let's first see how we get Binomial from Poisson.
**From Poisson to Binomial**
If $X \sim$ Pois$(\lambda_1)$ and $Y \sim$ Pois$(\lambda_2)$, and they are independent, then the conditional distribution of $X$ given $X+Y=n$ is Bin$(n, \lambda_1/(\lambda_1 + \lambda_2))$. Let's derive the pmf of $X$ given $X+Y=n$.
By Bayes' rule and the independence of $X$ and $Y$:
$$
P(X=k|X+Y=n) = \frac{P(X+Y=n|X=k)P(X=k)}{P(X+Y=n)} = \frac{P(Y=n-k)P(X=k)}{P(X+Y=n)}
$$
From the previous section, we know $X+Y \sim$ Pois$(\lambda_1 + \lambda_2)$. Using this fact, we get
$$
P(X=k|X+Y=n) = \frac{ \big(\frac{e^{-\lambda_2}\lambda_2^{n-k}}{(n-k)!}\big) \big( \frac{e^{-\lambda_1}\lambda_1^k}{k!} \big)}{ \frac{e^{-(\lambda_1 + \lambda_2)}(\lambda_1 + \lambda_2)^n}{n!}} = {n\choose k}\bigg(\frac{\lambda_1}{\lambda_1+\lambda_2}\bigg)^k \bigg(\frac{\lambda_2}{\lambda_1+\lambda_2}\bigg)^{n-k}
$$
which is the Bin$(n, \lambda_1/(\lambda_1 + \lambda_2))$ pmf.
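Again, this identity can be confirmed numerically:

```python
from scipy.stats import poisson, binom

lam1, lam2, n = 2.0, 3.0, 7
p = lam1 / (lam1 + lam2)
for k in range(n + 1):
    cond = (poisson.pmf(k, lam1) * poisson.pmf(n - k, lam2)
            / poisson.pmf(n, lam1 + lam2))
    # Conditional pmf of X given X + Y = n matches Bin(n, p)
    assert abs(cond - binom.pmf(k, n, p)) < 1e-12
print("conditional distribution is Bin(n, lam1 / (lam1 + lam2))")
```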
**From Binomial to Poisson**
We can approximate the Binomial distribution by Poisson when $n \rightarrow \infty$ and $p \rightarrow 0$ with $\lambda = np$ held fixed.
The pmf of Binomial is
$$
P(X=k) = {n \choose k}p^{k}(1-p)^{n-k} = {n \choose k}\big(\frac{\lambda}{n}\big)^{k}\big(1-\frac{\lambda}{n}\big)^n\big(1-\frac{\lambda}{n}\big)^{-k}
$$
By some algebraic manipulation, we get
$$
P(X=k) = \frac{\lambda^{k}}{k!}\frac{n(n-1)\dots(n-k+1)}{n^k}\big(1-\frac{\lambda}{n}\big)^n\big(1-\frac{\lambda}{n}\big)^{-k}
$$
When $n \rightarrow \infty$, we get:
$$
\frac{n(n-1)\dots(n-k+1)}{n^k} \rightarrow 1,\\
\big(1-\frac{\lambda}{n}\big)^n \rightarrow e^{-\lambda}, \text{and}\\
\big(1-\frac{\lambda}{n}\big)^{-k} \rightarrow 1
$$
Therefore, $P(X=k) \rightarrow \frac{e^{-\lambda}\lambda^k}{k!}$ as $n \rightarrow \infty$.
Let's look at an example of using the Poisson distribution to approximate the Binomial.
**Example problem 2**
>Ten million people enter a certain lottery. For each person, the chance of winning is one in ten million, independently. Find a simple, good approximation for the PMF of the number of people who win the lottery.
Let $X$ be the number of people winning the lottery. $X$ would be Bin$(10000000, 1/10000000)$ and $E(X) = 1$. We can approximate the pmf of $X$ by Pois$(1)$:
$$
P(X=k) \approx \frac{1}{e\cdot k!}
$$
Let's check whether this approximation is accurate with some Python code.
```
#collapse-hide
from scipy.stats import binom
from math import factorial, exp
import numpy as np
import matplotlib.pyplot as plt
def pois(k):
return 1 / (exp(1) * factorial(k))
n = 10000000
p = 1/10000000
k = np.arange(10)
binomial = binom.pmf(k, n, p)
poisson = [pois(i) for i in k]
fig, ax = plt.subplots(ncols=2, nrows=1, figsize=(15, 4), dpi=120)
ax[0].plot(k, binomial)
ax[0].set_title("PMF of Binomial")
ax[0].set_xlabel(r"$X=k$")
ax[0].set_xticks(k)
ax[1].plot(k, poisson)
ax[1].set_title("Approximation by Poisson")
ax[1].set_xlabel(r"X=k")
ax[1].set_xticks(k)
plt.tight_layout();
```
The approximation is quite accurate since these two graphs are almost identical.
**Reference**
1. *Introduction to Probability* by Joe Blitzstein and Jessica Hwang.
| github_jupyter |
<table> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="..\images\qworld.jpg" width="25%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by <a href="http://abu.lu.lv" target="_blank">Abuzer Yakaryilmaz</a> (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>)
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
<h2>Entanglement and Superdense Coding</h2>
[Watch Lecture](https://youtu.be/ZzRcItzUF2U)
Asja has a qubit, initially set to $ \ket{0} $.
Balvis has a qubit, initially set to $ \ket{0} $.
<h3> Entanglement </h3>
Asja applies Hadamard operator to her qubit.
The quantum state of Asja's qubit is $ \stateplus $.
Then, Asja and Balvis combine their qubits. Their quantum state is
$ \stateplus \otimes \vzero = \myvector{ \frac{1}{\sqrt{2}} \\ 0 \\ \frac{1}{\sqrt{2}} \\ 0 } $.
Asja and Balvis apply CNOT operator on two qubits.
The new quantum state is
$ \CNOT \myvector{ \frac{1}{\sqrt{2}} \\ 0 \\ \frac{1}{\sqrt{2}} \\ 0 } = \myvector{ \frac{1}{\sqrt{2}} \\ 0 \\0 \\ \frac{1}{\sqrt{2}} } = \frac{1}{\sqrt{2}}\ket{00} + \frac{1}{\sqrt{2}}\ket{11} $.
At this moment, Asja's and Balvis' qubits are correlated to each other.
If we measure both qubits, we can observe either state $ \ket{00} $ or state $ \ket{11} $.
Suppose that Asja observes her qubit secretly.
<ul>
<li> When Asja sees the result $ \ket{0} $, then Balvis' qubit also collapses to state $ \ket{0} $. Balvis cannot observe state $ \ket{1} $. </li>
<li> When Asja sees the result $ \ket{1} $, then Balvis' qubit also collapses to state $ \ket{1} $. Balvis cannot observe state $ \ket{0} $. </li>
</ul>
Experimental results have confirmed that this happens even if there is a physical distance between Asja's and Balvis' qubits.
It seems that correlated quantum particles can "affect each other" instantly, even if they are in different parts of the universe.
If two qubits are correlated in this way, then we say that they are <b>entangled</b>.
<i> <u>Technical note</u>:
If the quantum state of two qubits can be written as $ \ket{u} \otimes \ket{v} $, then the two qubits are not correlated, where $ \ket{u} $ and $ \ket{v} $ are the quantum states of the first and second qubits.
On the other hand, if the quantum state of two qubits cannot be written as $ \ket{u} \otimes \ket{v} $, then there is an entanglement between the qubits.
</i>
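The technical note can be turned into a quick numerical test: a two-qubit state $ (a,b,c,d) $ equals some $ \ket{u} \otimes \ket{v} $ exactly when $ ad - bc = 0 $, i.e., the state reshaped to a $2 \times 2$ matrix has rank 1. A NumPy sketch (the helper name `is_product_state` is our own):

```python
import numpy as np

def is_product_state(state, tol=1e-12):
    # (a, b, c, d) is a tensor product |u> (x) |v> iff a*d - b*c == 0,
    # i.e. the state reshaped to a 2x2 matrix has rank 1.
    a, b, c, d = state
    return abs(a * d - b * c) < tol

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
plus = np.kron([1, 0], [1, 1]) / np.sqrt(2)  # |0> (x) |+>

print(is_product_state(bell), is_product_state(plus))  # False True
```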
<b> Entangled qubits can be useful </b>
<h3> The quantum communication </h3>
After having the entanglement, Balvis takes his qubit and goes away.
Asja will send two classical bits of information by only sending her qubit.
<img src="../images/superdense_coding.png">
<font size="-2">source: https://fi.m.wikipedia.org/wiki/Tiedosto:Superdense_coding.png </font>
Now, we describe this protocol.
Asja has two bits of classical information: $ a,b \in \{0,1\} $.
There are four possible values for the pair $ (a,b) $: $ (0,0), (0,1), (1,0),\mbox{ or } (1,1) $.
If $a$ is 1, then Asja applies z-gate, i.e., $ Z = \Z $, to her qubit.
If $b$ is 1, then Asja applies x-gate (NOT operator) to her qubit.
Then, Asja sends her qubit to Balvis.
<h3> After the communication </h3>
Balvis has both qubits.
Balvis applies cx-gate (CNOT operator), where Asja's qubit is the controller.
Then, Balvis applies h-gate (Hadamard operator) to Asja's qubit.
Balvis measures both qubits.
The measurement result will be exactly $ (a,b) $.
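Before building the circuits in Qiskit, the whole protocol can be traced with a plain NumPy state-vector calculation (a sketch, independent of the tasks below; the qubit ordering $ \ket{\text{Asja},\text{Balvis}} $ is an assumption of this snippet):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
CNOT = np.array([[1., 0., 0., 0.],   # control: Asja (first qubit)
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.],
                 [0., 0., 1., 0.]])

def superdense(a, b):
    state = np.array([1., 0., 0., 0.])        # |00>
    state = CNOT @ (np.kron(H, I2) @ state)   # create the entangled pair
    if a == 1:
        state = np.kron(Z, I2) @ state        # Asja encodes bit a with z-gate
    if b == 1:
        state = np.kron(X, I2) @ state        # Asja encodes bit b with x-gate
    state = np.kron(H, I2) @ (CNOT @ state)   # Balvis decodes
    return format(int(np.argmax(np.abs(state))), "02b")

for a in (0, 1):
    for b in (0, 1):
        print((a, b), "->", superdense(a, b))
```

Each of the four final states is a single basis state, so the measurement deterministically returns $ (a,b) $.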
<h3> Task 1</h3>
Verify the correctness of the above protocol.
For each pair of $ (a,b) \in \left\{ (0,0), (0,1), (1,0),(1,1) \right\} $:
- Create a quantum circuit with two qubits: Asja's and Balvis' qubits
- Both are initially set to $ \ket{0} $
- Apply h-gate (Hadamard) to Asja's qubit
- Apply cx-gate as CNOT(Asja's-qubit,Balvis'-qubit)
Assume that both qubits are separated from each other.
<ul>
<li> If $ a $ is 1, then apply z-gate to Asja's qubit. </li>
<li> If $ b $ is 1, then apply x-gate (NOT) to Asja's qubit. </li>
</ul>
Assume that Asja sends her qubit to Balvis.
- Apply cx-gate as CNOT(Asja's-qubit,Balvis'-qubit)
- Apply h-gate (Hadamard) to Asja's qubit
- Measure both qubits and compare the results with the pair $ (a,b) $
```
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
all_pairs = ['00','01','10','11']
#
# your code is here
#
```
<a href="B54_Superdense_Coding_Solutions.ipynb#task1">click for our solution</a>
<h3> Task 2 </h3>
Verify each case by tracing the state vector (on paper).
_Hint: Representing quantum states as the linear combinations of basis states makes calculation easier._
<h3> Task 3</h3>
Can the above set-up be used by Balvis?
Verify that the following modified protocol allows Balvis to send two classical bits by sending only his qubit.
For each pair of $ (a,b) \in \left\{ (0,0), (0,1), (1,0),(1,1) \right\} $:
- Create a quantum circuit with two qubits: Asja's and Balvis' qubits
- Both are initially set to $ \ket{0} $
- Apply h-gate (Hadamard) to Asja's qubit
- Apply cx-gate as CNOT(Asja's-qubit,Balvis'-qubit)
Assume that both qubits are separated from each other.
<ul>
<li> If $ a $ is 1, then apply z-gate to Balvis' qubit. </li>
<li> If $ b $ is 1, then apply x-gate (NOT) to Balvis' qubit. </li>
</ul>
Assume that Balvis sends his qubit to Asja.
- Apply cx-gate as CNOT(Asja's-qubit,Balvis'-qubit)
- Apply h-gate (Hadamard) to Asja's qubit
- Measure both qubits and compare the results with the pair $ (a,b) $
```
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
all_pairs = ['00','01','10','11']
#
# your code is here
#
```
<a href="B54_Superdense_Coding_Solutions.ipynb#task3">click for our solution</a>
<h3> Task 4 </h3>
Verify each case by tracing the state vector (on paper).
_Hint: Representing quantum states as the linear combinations of basis states makes calculation easier._
| github_jupyter |
# Transcript to BUILD wavs
```
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
%load_ext autoreload
%autoreload 2
%matplotlib inline
from glob import glob
import os
from matplotlib.pylab import *
import librosa
import torch
from epoch_time import epoch_time
from tqdm.notebook import tqdm
from txt_to_stm import txt_to_stm
import pandas as pd
import numpy as np
from padarray import padarray
from to_samples import to_samples
from torch.utils.data import TensorDataset, DataLoader
import audioread
import random
import soundfile as sf
from pathlib import Path
os.getcwd()
stage='NIST'
sample_rate=8000
window = sample_rate
H=window
transcripts = list(sorted(glob(f'{stage}/*/build/transcription/*.txt')))
len(transcripts)
audio_files=[x.replace('/transcription/', '/audio/').replace('.txt','.wav') for x in transcripts]
for transcript_file in tqdm(transcripts):
audio_file = transcript_file.replace('/transcription/', '/audio/').replace('.txt','.wav')
if not os.path.exists(audio_file):
print('missing', audio_file)
continue
# Create split dirs
audio_dir=os.path.dirname(audio_file)
audio_split_dir=audio_dir.replace('/audio', '/audio_split')
Path(audio_split_dir).mkdir(parents=True, exist_ok=True)
transcript_dir=os.path.dirname(transcript_file)
transcript_split_dir=transcript_dir.replace('/transcription', '/transcription_split')
Path(transcript_split_dir).mkdir(parents=True, exist_ok=True)
# Load audio
file = "_".join(os.path.basename(transcript_file).split("_")[:-1])
channel = os.path.basename(transcript_file).split("_")[-1].split(".")[-2]
transcript_df = pd.read_csv(transcript_file, sep = "\n", header = None, names = ["content"])
result = txt_to_stm(transcript_df, file, channel)
speech=[(float(x[-3]), float(x[-2]), x[-1]) for x in result if len(x)==6]
x_np,sr=librosa.load(audio_file, sr=sample_rate)
with audioread.audio_open(audio_file) as f:
sr = f.samplerate
if sr != sample_rate:
print('RESIZING', sr, audio_file)
sf.write(audio_file, x_np, sample_rate)
# Split audio
speech_segments=[(int(a*sample_rate), int(b*sample_rate), words) for (a,b,words) in speech]
for i, (lower, upper, words) in enumerate(speech_segments):
audio_split_file=f"{audio_file[0:-4].replace('/audio/','/audio_split/')}_{i:03d}.wav"
sf.write(audio_split_file, x_np[lower:upper], sample_rate)
transcript_split_file=f"{transcript_file[0:-4].replace('/transcription/','/transcription_split/')}_{i:03d}.txt"
with open(transcript_split_file,'w') as f:
f.write(words)
```
| github_jupyter |
# Introduction
```
#r "BoSSSpad.dll"
using System;
using System.Collections.Generic;
using System.Linq;
using ilPSP;
using ilPSP.Utils;
using BoSSS.Platform;
using BoSSS.Platform.LinAlg;
using BoSSS.Foundation;
using BoSSS.Foundation.XDG;
using BoSSS.Foundation.Grid;
using BoSSS.Foundation.Grid.Classic;
using BoSSS.Foundation.Grid.RefElements;
using BoSSS.Foundation.IO;
using BoSSS.Solution;
using BoSSS.Solution.Control;
using BoSSS.Solution.GridImport;
using BoSSS.Solution.Statistic;
using BoSSS.Solution.Utils;
using BoSSS.Solution.AdvancedSolvers;
using BoSSS.Solution.Gnuplot;
using BoSSS.Application.BoSSSpad;
using BoSSS.Application.XNSE_Solver;
using static BoSSS.Application.BoSSSpad.BoSSSshell;
Init();
```
# Note:
- Setting boundary values and initial values is similar;
- For most solvers, initial and boundary values are set in the same way;
- We will use the incompressible solver as an example:
```
using BoSSS.Application.XNSE_Solver;
```
Create a control object:
```
var C = new XNSE_Control();
```
# 1 From Formulas
If the formula is simple enough to be represented by C\# code,
it can be embedded in the control file.
However, the code must be put into a string, since it is not
possible to serialize classes/objects from the notebook
into a control object:
```
string code =
"static class MyInitialValue {" // class must be static!
// Warning: static constants are allowed,
// but any changes outside of the current text box in BoSSSpad
// will not be recorded for the code that is passed to the solver.
+ " public static double alpha = 0.7;"
// a method, which should be used for an initial value,
// must be static!
+ " public static double VelocityX(double[] X, double t) {"
+ " double x = X[0];"
+ " double y = X[1];"
+ " return Math.Sin(x*y*alpha);"
+ " }"
+ "}";
var fo = new BoSSS.Solution.Control.Formula("MyInitialValue.VelocityX",
true, code);
```
Use the BoSSSpad-intrinsic **GetFormulaObject** to set the initial value:
```
C.AddInitialValue("VelocityX", fo);
// Note: such a declaration is very restrictive;
// 'GetFormulaObject' works only for
//  - a static class
//  - no dependence on any external parameters
// E.g. the following code would only change the behavior in BoSSSpad,
// but not the code that is passed to the solver:
//Deprecated:
//MyInitialValue.alpha = 0.5;
//MyInitialValue.VelocityX(new double[]{ 0.5, 0.5 }, 0.0);
C.InitialValues["VelocityX"].Evaluate(new double[]{ 0.5, 0.5 }, 0.0)
```
# 2 Advanced functions
Some more advanced mathematical functions, e.g.
Jacobian elliptic functions $\text{sn}(u|m)$, $\text{cn}(u|m)$ and $\text{dn}(u|m)$
are available through the GNU Scientific Library, for which BoSSS provides
bindings, see e.g.
**BoSSS.Platform.GSL.gsl\_sf\_elljac\_e**
## 2.1 From MATLAB code
Assume, e.g., the following MATLAB code; obviously, this could
also be implemented in C\#; we just use something simple for demonstration:
```
string[] MatlabCode = new string[] {
@"[n,d2] = size(X_values);",
@"u=zeros(2,n);",
@"for k=1:n",
@"X=[X_values(k,1),X_values(k,2)];",
@"",
@"u_x_main = -(-sqrt(X(1).^ 2 + X(2).^ 2) / 0.3e1 + 0.4e1 / 0.3e1 * (X(1).^ 2 + X(2).^ 2) ^ (-0.1e1 / 0.2e1)) * sin(atan2(X(2), X(1)));",
@"u_y_main = (-sqrt(X(1).^ 2 + X(2).^ 2) / 0.3e1 + 0.4e1 / 0.3e1 * (X(1).^ 2 + X(2).^ 2) ^ (-0.1e1 / 0.2e1)) * cos(atan2(X(2), X(1)));",
@"",
@"u(1,k)=u_x_main;",
@"u(2,k)=u_y_main;",
@"end" };
```
We can evaluate this code in **BoSSS** using the MATLAB connector.
We encapsulate it in a **ScalarFunction**, which allows
**vectorized** evaluation
(multiple evaluations in one function call)
of some function.
This is much more efficient, since there is significant overhead
for calling MATLAB (starting MATLAB, checking the license,
transferring data, etc.).
```
using ilPSP.Connectors.Matlab;
ScalarFunction VelocityXInitial =
delegate(MultidimensionalArray input, MultidimensionalArray output) {
int N = input.GetLength(0); // number of points which we evaluate
// at once.
var output_vec = MultidimensionalArray.Create(2, N); // the MATLAB code
// returns an entire vector.
using(var bmc = new BatchmodeConnector()) {
bmc.PutMatrix(input,"X_values");
foreach(var line in MatlabCode) {
bmc.Cmd(line);
}
bmc.GetMatrix(output_vec, "u");
bmc.Execute(); // Note: 'Execute' has to be *after* 'GetMatrix'
}
output.Set(output_vec.ExtractSubArrayShallow(0,-1)); // extract row 0 from
// 'output_vec' and store it in 'output'
};
```
We test our implementation:
```
var inputTest = MultidimensionalArray.Create(3,2); // set some test values for input
inputTest.SetColumn(0, GenericBlas.Linspace(1,2,3));
inputTest.SetColumn(1, GenericBlas.Linspace(2,3,3));
var outputTest = MultidimensionalArray.Create(3); // allocate memory for output
VelocityXInitial(inputTest, outputTest);
```
We receive the following velocity values for our input coordinates:
```
outputTest.To1DArray()
```
# Projecting the MATLAB function to a DG field
As for a standard calculation, we create a mesh, save it to some database
and set the mesh in the control object.
```
var nodes = GenericBlas.Linspace(1,2,11);
GridCommons grid = Grid2D.Cartesian2DGrid(nodes,nodes);
var db = CreateTempDatabase();
db.SaveGrid(ref grid);
C.SetGrid(grid);
```
We create a DG field for the $x$-velocity on our grid:
```
var gdata = new GridData(grid);
var b = new Basis(gdata, 3); // use DG degree 3
var VelX = new SinglePhaseField(b,"VelocityX"); // important: name the DG field
// equal to initial value name
```
Finally, we are able to project the MATLAB function onto the DG field:
```
//VelX.ProjectField(VelocityXInitial);
```
One might want to check the data visually, so it can be exported
in the usual fashion:
```
//Tecplot("initial",0.0,2,VelX);
```
# Storing the initial value in the database and linking it in the control object
The DG field with the initial value can be stored in the database.
This will create a dummy session.
```
BoSSSshell.WorkflowMgm.Init("TestProject");
var InitalValueTS = db.SaveTimestep(VelX); // further fields can be
// appended
BoSSSshell.WorkflowMgm.Sessions
// Now, we can use this timestep as a restart value for the simulation:
C.SetRestart(InitalValueTS);
```
| github_jupyter |
# ***Introduction to Radar Using Python and MATLAB***
## Andy Harrison - Copyright (C) 2019 Artech House
<br/>
# Alpha Beta Filter
***
Referring to Section 9.1.1, the alpha-beta filter is a simplified filter for parameter estimation and smoothing. The alpha-beta filter is related to Kalman filters but does not require a detailed system model. It presumes that the system is approximated by two internal states, where the first state is obtained by integrating the second state over time. The radar measurements are observations of the first model state. This is a low-order approximation and may be adequate for simple tracking problems, such as tracking a target's position, where the position is found from the time integral of the velocity. Assuming the velocity remains fixed over the time interval between measurements, the position is projected forward in time to predict its value at the next sampling time.
The Python sample code for the alpha-beta filter is given in Listing 9.1.
***
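Before walking through the notebook cells, the predict/correct cycle described above can be condensed into a single helper function (a minimal sketch with generic names, not code from the book):

```python
def alpha_beta_step(x_prev, v_prev, z, dt, alpha, beta):
    """One predict/correct cycle of an alpha-beta filter.

    x_prev, v_prev -- previous position and velocity estimates
    z              -- new position measurement
    alpha, beta    -- fixed filter gains
    """
    # Predict: project the position forward assuming constant velocity
    x_pred = x_prev + v_prev * dt
    v_pred = v_prev
    # Residual between the measurement and the prediction
    r = z - x_pred
    # Correct both states with the fixed gains
    x_new = x_pred + alpha * r
    v_new = v_pred + (beta / dt) * r
    return x_new, v_new
```

With `alpha = 1` and `beta = 0` the filter simply tracks the raw measurements; smaller gains trade responsiveness for smoothing.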
Set the start, step and end times (s)
```
start = 0.0
end = 20.0
step = 0.1
```
Calculate the number of updates and create the time array with the `linspace` routine from `numpy`
```
from numpy import linspace
number_of_updates = round( (end - start) / step) + 1
t, dt = linspace(start, end, number_of_updates, retstep=True)
```
Set the initial position (m) and initial velocity (m/s)
```
initial_position = 5.0
initial_velocity = 0.5
```
Set the noise variance and the factors (alpha, beta) for the filter
```
noise_variance = 2.0
alpha = 0.1
beta = 0.001
```
Calculate the true position
```
x_true = initial_position + initial_velocity * t
```
Create the measurements using the random number routines from `numpy`
```
from numpy import random, sqrt
z = x_true + sqrt(noise_variance) * (random.rand(number_of_updates) - 0.5)
```
Initialize the state and create the empty filter estimates
```
xk_1 = 0.0
vk_1 = 0.0
x_filt = []
v_filt = []
r_filt = []
```
Perform the alpha-beta filtering
```
# Loop over all measurements
for zk in z:
# Predict the next state
xk = xk_1 + vk_1 * dt
vk = vk_1
# Calculate the residual
rk = zk - xk
# Correct the predicted state
xk += alpha * rk
vk += beta / dt * rk
# Set the current state as previous
xk_1 = xk
vk_1 = vk
x_filt.append(xk)
v_filt.append(vk)
r_filt.append(rk)
```
Display the results of the alpha-beta filter using the `matplotlib` routines
```
from matplotlib import pyplot as plt
from numpy import ones_like
# Set the figure size
plt.rcParams["figure.figsize"] = (15, 10)
# Position
plt.figure()
plt.plot(t, x_true, '', label='True')
plt.plot(t, z, ':', label='Measurement')
plt.plot(t, x_filt, '--', label='Filtered')
plt.ylabel('Position (m)', size=12)
plt.legend(loc='best', prop={'size': 10})
# Set the plot title and labels
plt.title('Alpha-Beta Filter', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Velocity
plt.figure()
plt.plot(t, initial_velocity * ones_like(t), '', label='True')
plt.plot(t, v_filt, '--', label='Filtered')
plt.ylabel('Velocity (m/s)', size=12)
plt.legend(loc='best', prop={'size': 10})
# Set the plot title and labels
plt.title('Alpha-Beta Filter', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Residual
plt.figure()
plt.plot(t, r_filt, '')
plt.ylabel('Residual (m)', size=12)
# Set the plot title and labels
plt.title('Alpha-Beta Filter', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
```
| github_jupyter |
# Pre-computing various second-moment related quantities
This saves computation for M&M by precomputing and re-using quantities shared between iterations. It mostly saves $O(R^3)$ computations. This vignette shows that the results agree with the original version. We cannot use a unit test due to a numerical discrepancy between Armadillo's `chol` and R's -- this has been shown to be problematic for some computations. I'll have to improve `mashr` for it.
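Concretely, the cached quantity is the inverse Cholesky factor of each mixture covariance (a sketch of the algebra, with generic symbols): writing the per-observation covariance as $S V S^{\top}$ and the $k$-th prior covariance as $U_k$,

```latex
\Sigma_k = S V S^{\top} + U_k = L_k L_k^{\top},
\qquad
\log N(\hat{b} \mid 0, \Sigma_k)
= -\frac{R}{2}\log(2\pi)
  - \sum_{i=1}^{R} \log (L_k)_{ii}
  - \frac{1}{2} \left\lVert L_k^{-1} \hat{b} \right\rVert^{2}.
```

With $L_k^{-1}$ cached (the `sigma_rooti` list in the code), each likelihood evaluation reduces to an $O(R^2)$ triangular matrix-vector product rather than an $O(R^3)$ re-factorization per iteration.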
```
muffled_chol = function(x, ...)
withCallingHandlers(chol(x, ...),
warning = function(w) {
if (grepl("the matrix is either rank-deficient or indefinite", w$message))
invokeRestart("muffleWarning")
} )
set.seed(1)
library(mashr)
simdata = simple_sims(500,5,1)
data = mash_set_data(simdata$Bhat, simdata$Shat, alpha = 0)
U.c = cov_canonical(data)
grid = mashr:::autoselect_grid(data,sqrt(2))
Ulist = mashr:::normalize_Ulist(U.c)
xUlist = expand_cov(Ulist,grid,TRUE)
llik_mat0 = mashr:::calc_lik_rcpp(t(data$Bhat),t(data$Shat),data$V,
matrix(0,0,0), simplify2array(xUlist),T,T)$data
svs = data$Shat[1,] * t(data$V * data$Shat[1,])
sigma_rooti = list()
for (i in 1:length(xUlist)) sigma_rooti[[i]] = t(backsolve(muffled_chol(svs + xUlist[[i]], pivot=T), diag(nrow(svs))))
llik_mat = mashr:::calc_lik_common_rcpp(t(data$Bhat),
simplify2array(sigma_rooti),
T)$data
head(llik_mat0)
head(llik_mat)
rows <- which(apply(llik_mat,2,function (x) any(is.infinite(x))))
if (length(rows) > 0)
warning(paste("Some mixture components result in non-finite likelihoods,",
"either\n","due to numerical underflow/overflow,",
"or due to invalid covariance matrices",
paste(rows,collapse=", "), "\n"))
loglik_null = llik_mat[,1]
lfactors = apply(llik_mat,1,max)
llik_mat = llik_mat - lfactors
mixture_posterior_weights = mashr:::compute_posterior_weights(1/ncol(llik_mat), exp(llik_mat))
post0 = mashr:::calc_post_rcpp(t(data$Bhat), t(data$Shat), matrix(0,0,0), matrix(0,0,0),
data$V,
matrix(0,0,0), matrix(0,0,0),
simplify2array(xUlist),
t(mixture_posterior_weights),
T, 4)
Vinv = solve(svs)
U0 = list()
for (i in 1:length(xUlist)) U0[[i]] = xUlist[[i]] %*% solve(Vinv %*% xUlist[[i]] + diag(nrow(xUlist[[i]])))
post = mashr:::calc_post_precision_rcpp(t(data$Bhat), t(data$Shat), matrix(0,0,0), matrix(0,0,0),
data$V,
matrix(0,0,0), matrix(0,0,0),
Vinv,
simplify2array(U0),
t(mixture_posterior_weights),
4)
head(post$post_mean)
head(post0$post_mean)
head(post$post_cov)
head(post0$post_cov)
```
Now test the relevant `mmbr` interface:
```
simulate_multivariate = function(n=100,p=100,r=2) {
set.seed(1)
res = mmbr::mmbr_sim1(n,p,r,4,center_scale=TRUE)
res$L = 10
return(res)
}
attach(simulate_multivariate(r=2))
prior_var = V[1,1]
residual_var = as.numeric(var(y))
data = mmbr:::DenseData$new(X,y)
A = mmbr:::BayesianSimpleRegression$new(ncol(X), residual_var, prior_var)
A$fit(data, save_summary_stats = T)
null_weight = 0
mash_init = mmbr:::MashInitializer$new(list(V), 1, 1 - null_weight, null_weight)
residual_covar = cov(y)
mash_init$precompute_cov_matrices(data, residual_covar)
B = mmbr:::MashRegression$new(ncol(X), residual_covar, mash_init)
B$fit(data, save_summary_stats = T)
```
| github_jupyter |
```
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
from data_loader import get_loader
from torchvision import transforms
# TODO #1: Define a transform to pre-process the testing images.
transform_test = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
#-#-#-# Do NOT modify the code below this line. #-#-#-#
# Create the data loader.
data_loader = get_loader(transform=transform_test,
mode='test')
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Obtain sample image before and after pre-processing.
orig_image, image = next(iter(data_loader))
# Visualize sample image, before pre-processing.
plt.imshow(np.squeeze(orig_image))
plt.title('example image')
plt.show()
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Watch for any changes in model.py, and re-load it automatically.
%load_ext autoreload
%autoreload 2
import os
import torch
from model import EncoderCNN, DecoderRNN
# TODO #2: Specify the saved models to load.
encoder_file = 'encoder-3.pkl'
decoder_file = 'decoder-3.pkl'
# TODO #3: Select appropriate values for the Python variables below.
embed_size = 512
hidden_size = 512
# The size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)
# Initialize the encoder and decoder, and set each to inference mode.
encoder = EncoderCNN(embed_size)
encoder.eval()
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)
decoder.eval()
# Load the trained weights.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))
# Move models to GPU if CUDA is available.
encoder.to(device)
decoder.to(device)
# Move image Pytorch Tensor to GPU if CUDA is available.
image = image.to(device)
# Obtain the embedded image features.
features = encoder(image).unsqueeze(1)
# Pass the embedded image features through the model to get a predicted caption.
output = decoder.sample(features)
print('example output:', output)
assert (type(output)==list), "Output needs to be a Python list"
assert all([type(x)==int for x in output]), "Output should be a list of integers."
assert all([x in data_loader.dataset.vocab.idx2word for x in output]), "Each entry in the output needs to correspond to an integer that indicates a token in the vocabulary."
# TODO #4: Complete the function.
def clean_sentence(output):
    sentence = ""
    for i in output:
        word = data_loader.dataset.vocab.idx2word[i]
        if i == 0:        # skip this index (assumed to be the <start> token)
            continue
        elif i == 18:     # stop here (assumed to mark the end of the sentence)
            break
        else:
            sentence = sentence + " " + word
    return sentence
sentence = clean_sentence(output)
print('example sentence:', sentence)
assert type(sentence)==str, 'Sentence needs to be a Python string!'
def get_prediction():
orig_image, image = next(iter(data_loader))
plt.imshow(np.squeeze(orig_image))
plt.title('Sample Image')
plt.show()
image = image.to(device)
features = encoder(image).unsqueeze(1)
output = decoder.sample(features)
sentence = clean_sentence(output)
print(sentence)
get_prediction()
get_prediction()
```
| github_jupyter |
From https://pythonprogramming.net/testing-visualization-and-conclusion/?completed=/basic-image-recognition-testing/
```
!apt-get install -y unzip
!wget https://pythonprogramming.net/static/downloads/image-recognition/tutorialimages.zip
!unzip tutorialimages.zip
!cp images/numbers/3.8.png images/test.png
%matplotlib inline
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import time
from functools import reduce  # in Python 3, reduce lives in functools

# First, partial version of threshold() from the tutorial -- it only
# computes the per-pixel averages; the complete version is defined below.
def threshold(imageArray):
    balanceAr = []
    newAr = imageArray
    for eachRow in imageArray:
        for eachPix in eachRow:
            avgNum = reduce(lambda x, y: x + y, eachPix[:3]) / len(eachPix[:3])
            balanceAr.append(avgNum)
i = Image.open('images/numbers/0.1.png')
iar = np.array(i)
i2 = Image.open('images/numbers/y0.4.png')
iar2 = np.array(i2)
i3 = Image.open('images/numbers/y0.5.png')
iar3 = np.array(i3)
i4 = Image.open('images/sentdex.png')
iar4 = np.array(i4)
fig = plt.figure()
ax1 = plt.subplot2grid((8,6),(0,0), rowspan=4, colspan=3)
ax2 = plt.subplot2grid((8,6),(4,0), rowspan=4, colspan=3)
ax3 = plt.subplot2grid((8,6),(0,3), rowspan=4, colspan=3)
ax4 = plt.subplot2grid((8,6),(4,3), rowspan=4, colspan=3)
ax1.imshow(iar)
ax2.imshow(iar2)
ax3.imshow(iar3)
ax4.imshow(iar4)
plt.show()
def threshold(imageArray):
balanceAr = []
newAr = imageArray
for eachRow in imageArray:
for eachPix in eachRow:
avgNum = reduce(lambda x, y: x + y, eachPix[:3]) / len(eachPix[:3])
balanceAr.append(avgNum)
balance = reduce(lambda x, y: x + y, balanceAr) / len(balanceAr)
for eachRow in newAr:
for eachPix in eachRow:
if reduce(lambda x, y: x + y, eachPix[:3]) / len(eachPix[:3]) > balance:
eachPix[0] = 255
eachPix[1] = 255
eachPix[2] = 255
eachPix[3] = 255
else:
eachPix[0] = 0
eachPix[1] = 0
eachPix[2] = 0
eachPix[3] = 255
return newAr
i = Image.open('images/numbers/0.1.png')
iar = np.array(i)
i2 = Image.open('images/numbers/y0.4.png')
iar2 = np.array(i2)
i3 = Image.open('images/numbers/y0.5.png')
iar3 = np.array(i3)
i4 = Image.open('images/sentdex.png')
iar4 = np.array(i4)
iar = threshold(iar)
iar2 = threshold(iar2)
iar3 = threshold(iar3)
iar4 = threshold(iar4)
fig = plt.figure()
ax1 = plt.subplot2grid((8,6),(0,0), rowspan=4, colspan=3)
ax2 = plt.subplot2grid((8,6),(4,0), rowspan=4, colspan=3)
ax3 = plt.subplot2grid((8,6),(0,3), rowspan=4, colspan=3)
ax4 = plt.subplot2grid((8,6),(4,3), rowspan=4, colspan=3)
ax1.imshow(iar)
ax2.imshow(iar2)
ax3.imshow(iar3)
ax4.imshow(iar4)
plt.show()
def createExamples():
numberArrayExamples = open('numArEx.txt','a')
numbersWeHave = range(1,10)
for eachNum in numbersWeHave:
#print eachNum
for furtherNum in numbersWeHave:
# you could also literally add it *.1 and have it create
# an actual float, but, since in the end we are going
# to use it as a string, this way will work.
print(str(eachNum)+'.'+str(furtherNum))
imgFilePath = 'images/numbers/'+str(eachNum)+'.'+str(furtherNum)+'.png'
ei = Image.open(imgFilePath)
eiar = np.array(ei)
eiarl = str(eiar.tolist())
print(eiarl)
lineToWrite = str(eachNum)+'::'+eiarl+'\n'
numberArrayExamples.write(lineToWrite)
createExamples()
from PIL import Image
import numpy as np
import time
from collections import Counter
def whatNumIsThis(filePath):
matchedAr = []
loadExamps = open('numArEx.txt','r').read()
loadExamps = loadExamps.split('\n')
i = Image.open(filePath)
iar = np.array(i)
iarl = iar.tolist()
inQuestion = str(iarl)
for eachExample in loadExamps:
try:
splitEx = eachExample.split('::')
currentNum = splitEx[0]
currentAr = splitEx[1]
eachPixEx = currentAr.split('],')
eachPixInQ = inQuestion.split('],')
x = 0
while x < len(eachPixEx):
if eachPixEx[x] == eachPixInQ[x]:
matchedAr.append(int(currentNum))
x+=1
except Exception as e:
print(str(e))
print(matchedAr)
x = Counter(matchedAr)
print(x)
print(x[0])
whatNumIsThis('images/test.png')
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import time
from collections import Counter
from matplotlib import style
style.use("ggplot")
def createExamples():
numberArrayExamples = open('numArEx.txt','a')
numbersWeHave = range(1,10)
for eachNum in numbersWeHave:
for furtherNum in numbersWeHave:
imgFilePath = 'images/numbers/'+str(eachNum)+'.'+str(furtherNum)+'.png'
ei = Image.open(imgFilePath)
eiar = np.array(ei)
eiarl = str(eiar.tolist())
lineToWrite = str(eachNum)+'::'+eiarl+'\n'
numberArrayExamples.write(lineToWrite)
def threshold(imageArray):
balanceAr = []
newAr = imageArray
for eachPart in imageArray:
for theParts in eachPart:
# for the reduce(lambda x, y: x + y, theParts[:3]) / len(theParts[:3])
# in Python 3, just use: from statistics import mean
# then do avgNum = mean(theParts[:3])
avgNum = reduce(lambda x, y: x + y, theParts[:3]) / len(theParts[:3])
balanceAr.append(avgNum)
balance = reduce(lambda x, y: x + y, balanceAr) / len(balanceAr)
for eachRow in newAr:
for eachPix in eachRow:
if reduce(lambda x, y: x + y, eachPix[:3]) / len(eachPix[:3]) > balance:
eachPix[0] = 255
eachPix[1] = 255
eachPix[2] = 255
eachPix[3] = 255
else:
eachPix[0] = 0
eachPix[1] = 0
eachPix[2] = 0
eachPix[3] = 255
return newAr
def whatNumIsThis(filePath):
matchedAr = []
loadExamps = open('numArEx.txt','r').read()
loadExamps = loadExamps.split('\n')
i = Image.open(filePath)
iar = np.array(i)
iarl = iar.tolist()
inQuestion = str(iarl)
for eachExample in loadExamps:
try:
splitEx = eachExample.split('::')
currentNum = splitEx[0]
currentAr = splitEx[1]
eachPixEx = currentAr.split('],')
eachPixInQ = inQuestion.split('],')
x = 0
while x < len(eachPixEx):
if eachPixEx[x] == eachPixInQ[x]:
matchedAr.append(int(currentNum))
x+=1
except Exception as e:
print(str(e))
x = Counter(matchedAr)
print(x)
graphX = []
graphY = []
ylimi = 0
for eachThing in x:
graphX.append(eachThing)
graphY.append(x[eachThing])
ylimi = x[eachThing]
fig = plt.figure()
ax1 = plt.subplot2grid((4,4),(0,0), rowspan=1, colspan=4)
ax2 = plt.subplot2grid((4,4),(1,0), rowspan=3,colspan=4)
ax1.imshow(iar)
ax2.bar(graphX,graphY,align='center')
plt.ylim(400)
xloc = plt.MaxNLocator(12)
ax2.xaxis.set_major_locator(xloc)
plt.show()
whatNumIsThis('images/test.png')
```
| github_jupyter |
```
# Convolutional Variational Autoencoder taken from TensorFlow Tutorials
# https://www.tensorflow.org/tutorials/generative/cvae
# preferably run with a GPU or on Google Colab etc
# to generate gifs
!pip install -q imageio
import tensorflow as tf
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1).astype('float32')
# Normalizing the images to the range of [0., 1.]
train_images /= 255.
test_images /= 255.
# Binarization
#train_images[train_images >= .5] = 1.
#train_images[train_images < .5] = 0.
#test_images[test_images >= .5] = 1.
#test_images[test_images < .5] = 0.
TRAIN_BUF = 60000
BATCH_SIZE = 100
TEST_BUF = 10000
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(TRAIN_BUF).batch(BATCH_SIZE)
test_dataset = tf.data.Dataset.from_tensor_slices(test_images).shuffle(TEST_BUF).batch(BATCH_SIZE)
class CVAE(tf.keras.Model):
def __init__(self, latent_dim):
super(CVAE, self).__init__()
self.latent_dim = latent_dim
self.inference_net = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(
filters=32, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Conv2D(
filters=64, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Flatten(),
# No activation
tf.keras.layers.Dense(latent_dim + latent_dim),
]
)
self.generative_net = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(latent_dim,)),
tf.keras.layers.Dense(units=7*7*32, activation=tf.nn.relu),
tf.keras.layers.Reshape(target_shape=(7, 7, 32)),
tf.keras.layers.Conv2DTranspose(
filters=64,
kernel_size=3,
strides=(2, 2),
padding="SAME",
activation='relu'),
tf.keras.layers.Conv2DTranspose(
filters=32,
kernel_size=3,
strides=(2, 2),
padding="SAME",
activation='relu'),
# No activation
tf.keras.layers.Conv2DTranspose(
filters=1, kernel_size=3, strides=(1, 1), padding="SAME"),
]
)
@tf.function
def sample(self, eps=None):
if eps is None:
eps = tf.random.normal(shape=(100, self.latent_dim))
return self.decode(eps, apply_sigmoid=True)
def encode(self, x):
mean, logvar = tf.split(self.inference_net(x), num_or_size_splits=2, axis=1)
return mean, logvar
def reparameterize(self, mean, logvar):
eps = tf.random.normal(shape=mean.shape)
return eps * tf.exp(logvar * .5) + mean
def decode(self, z, apply_sigmoid=False):
logits = self.generative_net(z)
if apply_sigmoid:
probs = tf.sigmoid(logits)
return probs
return logits
optimizer = tf.keras.optimizers.Adam(1e-4)
def log_normal_pdf(sample, mean, logvar, raxis=1):
log2pi = tf.math.log(2. * np.pi)
return tf.reduce_sum(
-.5 * ((sample - mean) ** 2. * tf.exp(-logvar) + logvar + log2pi),
axis=raxis)
@tf.function
def compute_loss(model, x):
mean, logvar = model.encode(x)
z = model.reparameterize(mean, logvar)
x_logit = model.decode(z)
cross_ent = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=x)
logpx_z = -tf.reduce_sum(cross_ent, axis=[1, 2, 3])
logpz = log_normal_pdf(z, 0., 0.)
logqz_x = log_normal_pdf(z, mean, logvar)
return -tf.reduce_mean(logpx_z + logpz - logqz_x)
@tf.function
def compute_apply_gradients(model, x, optimizer):
with tf.GradientTape() as tape:
loss = compute_loss(model, x)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
epochs = 100
latent_dim = 50
num_examples_to_generate = 9
# keeping the random vector constant for generation (prediction) so
# it will be easier to see the improvement.
random_vector_for_generation = tf.random.normal(
shape=[num_examples_to_generate, latent_dim])
model = CVAE(latent_dim)
def generate_and_save_images(model, epoch, test_input):
predictions = model.sample(test_input)
fig = plt.figure(figsize=(3,3))
for i in range(predictions.shape[0]):
plt.subplot(3, 3, i+1)
plt.imshow(predictions[i, :, :, 0], cmap='gray')
plt.axis('off')
# tight_layout minimizes the overlap between 2 sub-plots
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
generate_and_save_images(model, 0, random_vector_for_generation)
for epoch in range(1, epochs + 1):
start_time = time.time()
for train_x in train_dataset:
compute_apply_gradients(model, train_x, optimizer)
end_time = time.time()
if epoch % 1 == 0:
loss = tf.keras.metrics.Mean()
for test_x in test_dataset:
loss(compute_loss(model, test_x))
elbo = -loss.result()
display.clear_output(wait=False)
print('Epoch: {}, Test set ELBO: {}, '
'time elapse for current epoch {}'.format(epoch,
elbo,
end_time - start_time))
generate_and_save_images(
model, epoch, random_vector_for_generation)
```
| github_jupyter |
# <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS109B Data Science 2: Advanced Topics in Data Science
## Lab 2 - Smoothers and Generalized Additive Models
**Harvard University**<br>
**Spring 2019**<br>
**Instructors:** Mark Glickman and Pavlos Protopapas<br>
**Lab Instructors:** Will Claybaugh<br>
**Contributors:** Paul Tyklin and Will Claybaugh
---
```
## RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text
HTML(styles)
```
## Learning Goals
The main goal of this lab is to get familiar with calling R functions within Python. Along the way, we'll learn about the "formula" interface to statsmodels, which gives an intuitive way of specifying regression models, and we'll review the different approaches to fitting curves.
Key Skills:
- Importing (base) R functions
- Importing R library functions
- Populating vectors R understands
- Populating dataframes R understands
- Populating formulas R understands
- Running models in R
- Getting results back to Python
- Getting model predictions in R
- Plotting in R
- Reading R's documentation
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
## Linear/Polynomial Regression (Python, Review)
Hopefully, you remember working with Statsmodels during 109a.
Reading the data and doing some exploring in Pandas:
```
diab = pd.read_csv("data/diabetes.csv")
print("""
# Variables are:
# subject: subject ID number
# age: age diagnosed with diabetes
# acidity: a measure of acidity called base deficit
# y: natural log of serum C-peptide concentration
#
# Original source is Sockett et al. (1987)
# mentioned in Hastie and Tibshirani's book
# "Generalized Additive Models".
""")
display(diab.head())
display(diab.dtypes)
display(diab.describe())
```
Plotting with matplotlib:
```
ax0 = diab.plot.scatter(x='age',y='y',c='Red',title="Diabetes data") #plotting directly from pandas!
ax0.set_xlabel("Age at Diagnosis")
ax0.set_ylabel("Log C-Peptide Concentration");
```
Linear regression with statsmodels.
- Previously, we worked from a vector of target values and a design matrix we built ourselves (e.g. from PolynomialFeatures).
- Now, Statsmodels' *formula interface* can help build the target value and design matrix for you.
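To recall what the formula interface automates, here is a minimal hand-built version of the same kind of fit in plain numpy (the synthetic data and names are illustrative, not the diabetes set):

```python
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(0, 16, size=200)
y = 4.0 + 0.1 * age + rng.normal(0.0, 0.2, size=200)

# The formula 'y ~ age' expands to a design matrix with an
# intercept column and a column for the 'age' predictor.
X = np.column_stack([np.ones_like(age), age])

# Ordinary least squares: solve min ||X @ beta - y||^2
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = beta_hat
```

Statsmodels builds `X` for you from the formula string and adds inference on top (standard errors, confidence intervals, etc.).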
```
#Using statsmodels
import statsmodels.formula.api as sm
model1 = sm.ols('y ~ age',data=diab)
fit1_lm = model1.fit()
```
Build a data frame to predict values on (sometimes this is just the test or validation set)
- Very useful for making pretty plots of the model predictions -- predict for TONS of values, not just whatever's in the training set
```
x_pred = np.linspace(0,16,100)
predict_df = pd.DataFrame(data={"age":x_pred})
predict_df.head()
```
Use `get_prediction(<data>).summary_frame()` to get the model's prediction (and error bars!)
```
prediction_output = fit1_lm.get_prediction(predict_df).summary_frame()
prediction_output.head()
```
Plot the model and error bars
```
ax1 = diab.plot.scatter(x='age',y='y',c='Red',title="Diabetes data with least-squares linear fit")
ax1.set_xlabel("Age at Diagnosis")
ax1.set_ylabel("Log C-Peptide Concentration")
ax1.plot(predict_df.age, prediction_output['mean'],color="green")
ax1.plot(predict_df.age, prediction_output['mean_ci_lower'], color="blue",linestyle="dashed")
ax1.plot(predict_df.age, prediction_output['mean_ci_upper'], color="blue",linestyle="dashed");
ax1.plot(predict_df.age, prediction_output['obs_ci_lower'], color="skyblue",linestyle="dashed")
ax1.plot(predict_df.age, prediction_output['obs_ci_upper'], color="skyblue",linestyle="dashed");
```
<div class="discussion"><b>Discussion</b></div>
- What are the dark error bars?
- What are the light error bars?
<div class="exercise"><b>Exercise 1</b></div>
1. Fit a 3rd degree polynomial model and plot the model+error bars
- Route1: Build a design df with a column for each of `age`, `age**2`, `age**3`
- Route2: Just edit the formula
**Answers**:
1.
```
# your code here
```
2.
```
# your code here
```
## Linear/Polynomial Regression, but make it R
This is the meat of the lab. After this section we'll know everything we need to in order to work with R models. The rest of the lab is just applying these concepts to run particular models. This section therefore is your 'cheat sheet' for working in R.
What we need to know:
- Importing (base) R functions
- Importing R Library functions
- Populating vectors R understands
- Populating DataFrames R understands
- Populating Formulas R understands
- Running models in R
- Getting results back to Python
- Getting model predictions in R
- Plotting in R
- Reading R's documentation
**Importing R functions**
```
# if you're on JupyterHub you may need to specify the path to R
#import os
#os.environ['R_HOME'] = "/usr/share/anaconda3/lib/R"
import rpy2.robjects as robjects
r_lm = robjects.r["lm"]
r_predict = robjects.r["predict"]
#r_plot = robjects.r["plot"] # more on plotting later
#lm() and predict() are two of the most common functions we'll use
```
**Importing R libraries**
```
from rpy2.robjects.packages import importr
#r_cluster = importr('cluster')
#r_cluster.pam;
```
**Populating vectors R understands**
```
r_y = robjects.FloatVector(diab['y'])
r_age = robjects.FloatVector(diab['age'])
# What happens if we pass the wrong type?
# How does r_age display?
# How does r_age print?
```
**Populating Data Frames R understands**
```
diab_r = robjects.DataFrame({"y":r_y, "age":r_age})
# How does diab_r display?
# How does diab_r print?
```
**Populating formulas R understands**
```
simple_formula = robjects.Formula("y~age")
simple_formula.environment["y"] = r_y #populate the formula's .environment, so it knows what 'y' and 'age' refer to
simple_formula.environment["age"] = r_age
```
**Running Models in R**
```
diab_lm = r_lm(formula=simple_formula) # the formula object is storing all the needed variables
simple_formula = robjects.Formula("y~age") # reset the formula
diab_lm = r_lm(formula=simple_formula, data=diab_r) #can also use a 'dumb' formula and pass a dataframe
```
**Getting results back to Python**
```
diab_lm #the result is already 'in' python, but it's a special object
print(diab_lm.names) # view all names
diab_lm[0] #grab the first element
diab_lm.rx2("coefficients") #use rx2 to get elements by name!
np.array(diab_lm.rx2("coefficients")) #r vectors can be converted to numpy (but rarely needed)
```
**Getting Predictions**
```
# make a df to predict on (might just be the validation or test dataframe)
predict_df = robjects.DataFrame({"age": robjects.FloatVector(np.linspace(0,16,100))})
# call R's predict() function, passing the model and the data
predictions = r_predict(diab_lm, predict_df)
x_vals = predict_df.rx2("age")
ax = diab.plot.scatter(x='age',y='y',c='Red',title="Diabetes data")
ax.set_xlabel("Age at Diagnosis")
ax.set_ylabel("Log C-Peptide Concentration");
ax.plot(x_vals,predictions); #plt still works with r vectors as input!
```
**Plotting in R**
```
%load_ext rpy2.ipython
```
- The above turns on the %R "magic"
- R's plot() command responds differently based on what you hand to it; Different models get different plots!
- For any specific model search for plot.modelname. E.g. for a GAM model, search plot.gam for any details of plotting a GAM model
- The %R "magic" runs R code in 'notebook' mode, so figures display nicely
- Ahead of the `plot(<model>)` code we pass in the variables R needs to know about (`-i` is for "input")
```
%R -i diab_lm plot(diab_lm);
```
**Reading R's documentation**
The documentation for the `lm()` function is [here](https://stat.ethz.ch/R-manual/R-devel/library/stats/html/lm.html), and a prettier version (same content) is [here](https://www.rdocumentation.org/packages/stats/versions/3.5.2/topics/lm). When googling, prefer rdocumentation.org when possible.
Sections:
- **Usage**: gives the function signature, including all optional arguments
- **Arguments**: What each function input controls
- **Details**: additional info on what the function *does* and how arguments interact. **Often the right place to start reading**
- **Value**: the structure of the object returned by the function
- **References**: The relevant academic papers
- **See Also**: other functions of interest
<div class="exercise"><b>Exercise 2</b></div>
1. Add confidence intervals calculated in R to the linear regression plot above. Use the `interval=` argument to `r_predict()` (documentation [here](https://stat.ethz.ch/R-manual/R-devel/library/stats/html/predict.lm.html)). You will have to work with a matrix returned by R.
2. Fit a 5th degree polynomial to the diabetes data in R. Search the web for an easier method than writing out a formula with all 5 polynomial terms.
**Answers**
1.
```
# your code here
```
2.
```
# your code here
```
## Lowess Smoothing
Lowess Smoothing is implemented in both Python and R. We'll use it as another example as we transition languages.
<div class="discussion"><b>Discussion</b></div>
- What is lowess smoothing? Which 109a models is it related to?
- How explainable is lowess?
- What are the tunable parameters?
**In Python**
```
from statsmodels.nonparametric.smoothers_lowess import lowess as lowess
ss1 = lowess(diab['y'],diab['age'],frac=0.15)
ss2 = lowess(diab['y'],diab['age'],frac=0.25)
ss3 = lowess(diab['y'],diab['age'],frac=0.7)
ss4 = lowess(diab['y'],diab['age'],frac=1)
ss1[:10,:] # we simply get back a smoothed y value for each x value in the data
```
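Note that lowess only returns fitted values at the observed `x` locations; a common trick for "predicting" at new points is linear interpolation. A minimal sketch (added illustration, using a toy two-column array standing in for the real `ss1` output):

```python
import numpy as np

# Toy stand-in for a lowess result: column 0 is x, column 1 is the smoothed y.
ss_toy = np.array([[1.0, 3.0], [2.0, 5.0], [4.0, 9.0]])

# Interpolate the smoothed curve at new x values.
new_x = np.array([1.5, 3.0])
new_y = np.interp(new_x, ss_toy[:, 0], ss_toy[:, 1])
print(new_y)  # [4. 7.]
```

The same call works directly on the `(n, 2)` array that `lowess` returns.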
Notice the clean code to plot different models. We'll see even cleaner code in a minute
```
for cur_model, cur_frac in zip([ss1,ss2,ss3,ss4],[0.15,0.25,0.7,1]):
ax = diab.plot.scatter(x='age',y='y',c='Red',title="Lowess Fit, Fraction = {}".format(cur_frac))
ax.set_xlabel("Age at Diagnosis")
ax.set_ylabel("Log C-Peptide Concentration")
ax.plot(cur_model[:,0],cur_model[:,1],color="blue")
plt.show()
```
<div class="discussion"><b>Discussion</b></div>
1. Which model has high variance, which has high bias?
2. What makes a model high variance or high bias?
**In R**
We need to:
- Import the loess function
- Send data over to R
- Call the function and get results
```
r_loess = robjects.r['loess.smooth'] #extract R function
r_y = robjects.FloatVector(diab['y'])
r_age = robjects.FloatVector(diab['age'])
ss1_r = r_loess(r_age,r_y, span=0.15, degree=1)
ss1_r #again, a smoothed y value for each x value in the data
```
<div class="exercise"><b>Exercise 3</b></div>
Predict the output of
1. `ss1_r[0]`
2. `ss1_r.rx2("y")`
1.
*your answer here*
2.
*your answer here*
**Varying span**
Next, some extremely clean code to fit and plot models with various parameter settings. (Though the `zip()` method seen earlier is great when e.g. the label and the parameter differ)
```
for cur_frac in [0.15,0.25,0.7,1]:
cur_smooth = r_loess(r_age,r_y, span=cur_frac)
ax = diab.plot.scatter(x='age',y='y',c='Red',title="Lowess Fit, Fraction = {}".format(cur_frac))
ax.set_xlabel("Age at Diagnosis")
ax.set_ylabel("Log C-Peptide Concentration")
ax.plot(cur_smooth[0], cur_smooth[1], color="blue")
plt.show()
```
<div class="discussion"><b>Discussion</b></div>
- Mark wasn't kidding; the Python and R results differ for frac=.15. Thoughts?
- Why isn't the bottom plot a straight line? We're using 100% of the data in each window...
## Smoothing Splines
From this point forward, we're working with R functions; these models aren't (well) supported in Python.
For clarity: this is the fancy spline model that minimizes $MSE + \lambda\cdot\text{wiggle penalty}$ $=$ $\sum_{i=1}^N \left(y_i - f(x_i)\right)^2 + \lambda \int \left(f''(x)\right)^2 dx$, across all possible functions $f$. The winner will always be a continuous, piecewise-cubic polynomial with a knot at each data point.
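For intuition on the penalty term, here is a small numerical sketch (an added illustration, not part of the lab) that approximates the wiggle penalty $\int \left(f''(x)\right)^2 dx$ with finite differences:

```python
import numpy as np

# Approximate the wiggle penalty, the integral of f''(x)^2 over [0, 1].
x = np.linspace(0, 1, 1001)
dx = x[1] - x[0]

def wiggle_penalty(y, dx):
    # central-difference second derivative, then a Riemann sum of its square
    second = np.diff(y, n=2) / dx**2
    return np.sum(second**2) * dx

line = 2 + 3 * x               # a straight line: f'' = 0 everywhere
wave = np.sin(8 * np.pi * x)   # a wiggly curve: large second derivative

print(wiggle_penalty(line, dx))  # ~0: a line pays no penalty
print(wiggle_penalty(wave, dx))  # large: heavy wiggle is heavily penalized
```

A straight line incurs essentially zero penalty while a rapidly oscillating curve incurs a huge one, which is why large $\lambda$ drives the fit toward a line.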
<div class="discussion"><b>Discussion</b></div>
- Any idea why the winner is cubic?
- How interpretable is this model?
- What are the tunable parameters?
```
r_smooth_spline = robjects.r['smooth.spline'] #extract R function
# run smoothing function
spline1 = r_smooth_spline(r_age, r_y, spar=0)
```
<div class="exercise"><b>Exercise 4</b></div>
1. We actually set the spar parameter, a scale-free value that translates to a $\lambda$ through a complex expression. Inspect the 'spline1' result and extract the implied value of $\lambda$
2. Working from the fitting/plotting loop examples above, produce a plot like the one below for spar = [0,.5,.9,2], including axes labels and title.
1.
```
# your answer here
```
2.
```
# your answer here
```
**CV**
R's `smooth.spline` function has built-in CV to find a good $\lambda$. See package [docs](https://www.rdocumentation.org/packages/stats/versions/3.5.2/topics/smooth.spline).
```
spline_cv = r_smooth_spline(r_age, r_y, cv=True)
lambda_cv = spline_cv.rx2("lambda")[0]
ax19 = diab.plot.scatter(x='age',y='y',c='Red',title="smoothing spline with $\lambda=$"+str(np.round(lambda_cv,4))+", chosen by cross-validation")
ax19.set_xlabel("Age at Diagnosis")
ax19.set_ylabel("Log C-Peptide Concentration")
ax19.plot(spline_cv.rx2("x"),spline_cv.rx2("y"),color="darkgreen");
```
<div class="discussion"><b>Discussion</b></div>
- Does the selected model look reasonable?
- How would you describe the effect of age at diagnosis on C_peptide concentration?
- What are the costs/benefits of the (fancy) spline model, relative to the linear regression we fit above?
## Natural & Basis Splines
Here, we take a step backward on model complexity, but a step forward in coding complexity. We'll be working with R's formula interface again, so we will need to populate Formulas and DataFrames.
<div class="discussion"><b>Discussion</b></div>
- In what way are Natural and Basis splines less complex than the splines we were just working with?
- What makes a spline 'natural'?
- What makes a spline 'basis'?
- What are the tuning parameters?
```
#We will now work with a new dataset, called GAGurine.
#The dataset description (from the R package MASS) is below:
#Data were collected on the concentration of a chemical GAG
# in the urine of 314 children aged from zero to seventeen years.
# The aim of the study was to produce a chart to help a paediatrican
# to assess if a child's GAG concentration is ‘normal’.
#The variables are:
# Age: age of child in years.
# GAG: concentration of GAG (the units have been lost).
GAGurine = pd.read_csv("data/GAGurine.csv")
display(GAGurine.head())
ax31 = GAGurine.plot.scatter(x='Age',y='GAG',c='black',title="GAG in urine of children")
ax31.set_xlabel("Age");
ax31.set_ylabel("GAG");
```
Standard stuff: import function, convert variables to R format, call function
```
from rpy2.robjects.packages import importr
r_splines = importr('splines')
# populate R variables
r_gag = robjects.FloatVector(GAGurine['GAG'].values)
r_age = robjects.FloatVector(GAGurine['Age'].values)
r_quarts = robjects.FloatVector(np.quantile(r_age,[.25,.5,.75])) #woah, numpy functions run on R objects!
```
What happens when we call the ns or bs functions from r_splines?
```
ns_design = r_splines.ns(r_age, knots=r_quarts)
bs_design = r_splines.bs(r_age, knots=r_quarts)
print(ns_design)
```
`ns` and `bs` return design matrices, not model objects! That's because they're meant to work with `lm`'s formula interface. To get a model object we populate a formula including `ns(<var>,<knots>)` and fit to data
```
r_lm = robjects.r['lm']
r_predict = robjects.r['predict']
# populate the formula
ns_formula = robjects.Formula("Gag ~ ns(Age, knots=r_quarts)")
ns_formula.environment['Gag'] = r_gag
ns_formula.environment['Age'] = r_age
ns_formula.environment['r_quarts'] = r_quarts
# fit the model
ns_model = r_lm(ns_formula)
```
Predict like usual: build a dataframe to predict on and call `predict()`
```
# predict
predict_frame = robjects.DataFrame({"Age": robjects.FloatVector(np.linspace(0,20,100))})
ns_out = r_predict(ns_model, predict_frame)
ax32 = GAGurine.plot.scatter(x='Age',y='GAG',c='grey',title="GAG in urine of children")
ax32.set_xlabel("Age")
ax32.set_ylabel("GAG")
ax32.plot(predict_frame.rx2("Age"),ns_out, color='red')
ax32.legend(["Natural spline, knots at quartiles"]);
```
<div class="exercise"><b>Exercise 5</b></div>
1. Fit a basis spline model with the same knots, and add it to the plot above
2. Fit a basis spline with 8 knots placed at [2,4,6...14,16] and add it to the plot above
**Answers:**
1.
```
# your answer here
```
2.
```
# your answer here
#%R -i overfit_model plot(overfit_model)
# we'd get the same diagnostic plot we get from an lm model
```
## GAMs
We come, at last, to our most advanced model. The coding here isn't any more complex than we've done before, though the behind-the-scenes is awesome.
First, let's get our (multivariate!) data
```
kyphosis = pd.read_csv("data/kyphosis.csv")
print("""
# kyphosis - whether a particular deformation was present post-operation
# age - patient's age in months
# number - the number of vertebrae involved in the operation
# start - the number of the topmost vertebrae operated on
""")
display(kyphosis.head())
display(kyphosis.describe(include='all'))
display(kyphosis.dtypes)
#If there are errors about missing R packages, run the code below:
#r_utils = importr('utils')
#r_utils.install_packages('codetools')
#r_utils.install_packages('gam')
```
To fit a GAM, we
- Import the `gam` library
- Populate a formula including `s(<var>)` on variables we want to fit smooths for
- Call `gam(formula, family=<string>)` where `family` is a string naming a probability distribution, chosen based on how the response variable is thought to occur.
- Rough `family` guidelines:
- Response is binary or "N occurrences out of M tries", e.g. number of lab rats (out of 10) developing disease: choose `"binomial"`
- Response is a count with no logical upper bound, e.g. number of ice creams sold: choose `"poisson"`
- Response is real, with normally-distributed noise, e.g. person's height: choose `"gaussian"` (the default)
```
#There is a Python library in development for using GAMs (https://github.com/dswah/pyGAM)
# but it is not yet as comprehensive as the R GAM library, which we will use here instead.
# R also has the mgcv library, which implements some more advanced/flexible fitting methods
r_gam_lib = importr('gam')
r_gam = r_gam_lib.gam
r_kyph = robjects.FactorVector(kyphosis[["Kyphosis"]].values)
r_Age = robjects.FloatVector(kyphosis[["Age"]].values)
r_Number = robjects.FloatVector(kyphosis[["Number"]].values)
r_Start = robjects.FloatVector(kyphosis[["Start"]].values)
kyph1_fmla = robjects.Formula("Kyphosis ~ s(Age) + s(Number) + s(Start)")
kyph1_fmla.environment['Kyphosis']=r_kyph
kyph1_fmla.environment['Age']=r_Age
kyph1_fmla.environment['Number']=r_Number
kyph1_fmla.environment['Start']=r_Start
kyph1_gam = r_gam(kyph1_fmla, family="binomial")
```
The fitted gam model has a lot of interesting data within it
```
print(kyph1_gam.names)
```
Remember plotting? Calling R's `plot()` on a gam model is the easiest way to view the fitted splines
```
%R -i kyph1_gam plot(kyph1_gam, residuals=TRUE,se=TRUE, scale=20);
```
Prediction works like normal (build a data frame to predict on, if you don't already have one, and call `predict()`). However, predict always reports the sum of the individual variable effects. If `family` is non-default this can be different from the actual prediction for that point.
For instance, we're doing a 'logistic regression', so the raw prediction is log odds, but we can get probabilities by using `predict(..., type="response")`
```
kyph_new = robjects.DataFrame({'Age': robjects.IntVector((84,85,86)),
'Start': robjects.IntVector((5,3,1)),
'Number': robjects.IntVector((1,6,10))})
print("Raw response (so, Log odds):")
display(r_predict(kyph1_gam, kyph_new))
print("Scaled response (so, probability of kyphosis):")
display(r_predict(kyph1_gam, kyph_new, type="response"))
```
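As a sanity check on the scaled response, the conversion from log odds to probability is just the inverse logit; a hand-rolled sketch (added illustration):

```python
import math

# The inverse logit maps a raw log-odds prediction to a probability,
# which is what predict(..., type="response") returns for binomial models.
def inv_logit(log_odds):
    return 1.0 / (1.0 + math.exp(-log_odds))

print(inv_logit(0.0))  # 0.5: log odds of zero means a coin flip
print(inv_logit(2.0))  # ~0.88
```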
<div class="discussion"><b>Discussion</b></div>
<div class="exercise"><b>Exercise 6</b></div>
1. What lambda did we use?
2. What is the model telling us about the effects of age, starting vertebra, and number of vertebrae operated on?
3. If we fit a logistic regression instead, which variables might want quadratic terms? What are the costs and benefits of a logistic regression model versus a GAM?
4. Critique the model:
- What is it assuming? Are the assumptions reasonable?
- Are we using the right data?
- Does the model's story about the world make sense?
## Appendix
GAMs and smoothing splines support hypothesis tests to compare models. (We can always compare models via out-of-sample prediction quality (i.e. performance on a validation set), but statistical ideas like hypothesis tests and information criteria allow us to use all data for training *and* still compare the quality of model A to model B.)
```
r_anova = robjects.r["anova"]
kyph0_fmla = robjects.Formula("Kyphosis~1")
kyph0_fmla.environment['Kyphosis']=r_kyph
kyph0_gam = r_gam(kyph0_fmla, family="binomial")
print(r_anova(kyph0_gam, kyph1_gam, test="Chi"))
```
**Explicitly joining spline functions**
```
def h(x, xi, pow_arg): #pow is a reserved keyword in Python
if (x > xi):
return pow((x-xi),pow_arg)
else:
return 0
h = np.vectorize(h,otypes=[float]) #default behavior is to return ints, which gives incorrect answers
#also, vectorize does not play nicely with default arguments, so better to set directly (e.g., pow_arg=1)
xvals = np.arange(0,10.1,0.1)
ax20 = plt.plot(xvals,h(xvals,4,1),color="red")
_ = plt.title("Truncated linear basis function with knot at x=4")
_ = plt.xlabel("$x$")
_ = plt.ylabel("$(x-4)_+$") #note the use of TeX in the label
ax21 = plt.plot(xvals,h(xvals,4,3),color="red")
_ = plt.title("Truncated cubic basis function with knot at x=4")
_ = plt.xlabel("$x$")
_ = plt.ylabel("$(x-4)_+^3$")
ax22 = plt.plot(xvals,2+xvals+3*h(xvals,2,1)-4*h(xvals,5,1)+0.5*h(xvals,8,1),color="red")
_ = plt.title("Piecewise linear spline with knots at x=2, 5, and 8")
_ = plt.xlabel("$x$")
_ = plt.ylabel("$y$")
```
Comparing splines to the (noisy) model that generated them.
```
x = np.arange(0.1,10,9.9/100)
from scipy.stats import norm
#ppf (percent point function) is the rather unusual name for
#the quantile or inverse CDF function in SciPy
y = norm.ppf(x/10) + np.random.normal(0,0.4,100)
ax23 = plt.scatter(x,y,facecolors='none', edgecolors='black')
_ = plt.title("3 knots")
_ = plt.xlabel("$x$")
_ = plt.ylabel("$y$")
_ = plt.plot(x,sm.ols('y~x+h(x,2,1)+h(x,5,1)+h(x,8,1)',data={'x':x,'y':y}).fit().predict(),color="darkblue",linewidth=2)
_ = plt.plot(x,norm.ppf(x/10),color="red")
ax24 = plt.scatter(x,y,facecolors='none', edgecolors='black')
_ = plt.title("6 knots")
_ = plt.xlabel("$x$")
_ = plt.ylabel("$y$")
_ = plt.plot(x,sm.ols('y~x+h(x,1,1)+h(x,2,1)+h(x,3.5,1)+h(x,5,1)+h(x,6.5,1)+h(x,8,1)',data={'x':x,'y':y}).fit().predict(),color="darkblue",linewidth=2)
_ = plt.plot(x,norm.ppf(x/10),color="red")
ax25 = plt.scatter(x,y,facecolors='none', edgecolors='black')
_ = plt.title("9 knots")
_ = plt.xlabel("$x$")
_ = plt.ylabel("$y$")
_ = plt.plot(x,sm.ols('y~x+h(x,1,1)+h(x,2,1)+h(x,3,1)+h(x,4,1)+h(x,5,1)+h(x,6,1)+h(x,7,1)+h(x,8,1)+h(x,9,1)',data={'x':x,'y':y}).fit().predict(),color="darkblue",linewidth=2)
_ = plt.plot(x,norm.ppf(x/10),color="red")
regstr = 'y~x+'
for i in range(1,26):
regstr += 'h(x,'+str(i/26*10)+',1)+'
regstr = regstr[:-1] #drop last +
ax26 = plt.scatter(x,y,facecolors='none', edgecolors='black')
_ = plt.title("25 knots")
_ = plt.xlabel("$x$")
_ = plt.ylabel("$y$")
_ = plt.plot(x,sm.ols(regstr,data={'x':x,'y':y}).fit().predict(),color="darkblue",linewidth=2)
_ = plt.plot(x,norm.ppf(x/10),color="red")
```
### Exercise:
Try generating random data from different distributions and fitting polynomials of different degrees to it. What do you observe?
```
# try it here
#So, we see that increasing the number of knots results in a more polynomial-like fit
#Next, we look at cubic splines with increasing numbers of knots
ax27 = plt.scatter(x,y,facecolors='none', edgecolors='black')
_ = plt.title("3 knots")
_ = plt.xlabel("$x$")
_ = plt.ylabel("$y$")
_ = plt.plot(x,sm.ols('y~x+np.power(x,2)+np.power(x,3)+h(x,2,3)+h(x,5,3)+h(x,8,3)',data={'x':x,'y':y}).fit().predict(),color="darkblue",linewidth=2)
_ = plt.plot(x,norm.ppf(x/10),color="red")
ax28 = plt.scatter(x,y,facecolors='none', edgecolors='black')
_ = plt.title("6 knots")
_ = plt.xlabel("$x$")
_ = plt.ylabel("$y$")
_ = plt.plot(x,sm.ols('y~x+np.power(x,2)+np.power(x,3)+h(x,1,3)+h(x,2,3)+h(x,3.5,3)+h(x,5,3)+h(x,6.5,3)+h(x,8,3)',data={'x':x,'y':y}).fit().predict(),color="darkblue",linewidth=2)
_ = plt.plot(x,norm.ppf(x/10),color="red")
ax29 = plt.scatter(x,y,facecolors='none', edgecolors='black')
_ = plt.title("9 knots")
_ = plt.xlabel("$x$")
_ = plt.ylabel("$y$")
_ = plt.plot(x,sm.ols('y~x+np.power(x,2)+np.power(x,3)+h(x,1,3)+h(x,2,3)+h(x,3,3)+h(x,4,3)+h(x,5,3)+h(x,6,3)+h(x,7,3)+h(x,8,3)+h(x,9,3)',data={'x':x,'y':y}).fit().predict(),color="darkblue",linewidth=2)
_ = plt.plot(x,norm.ppf(x/10),color="red")
regstr2 = 'y~x+np.power(x,2)+np.power(x,3)+'
for i in range(1,26):
regstr2 += 'h(x,'+str(i/26*10)+',3)+'
regstr2 = regstr2[:-1] #drop last +
ax30 = plt.scatter(x,y,facecolors='none', edgecolors='black')
_ = plt.title("25 knots")
_ = plt.xlabel("$x$")
_ = plt.ylabel("$y$")
_ = plt.plot(x,sm.ols(regstr2,data={'x':x,'y':y}).fit().predict(),color="darkblue",linewidth=2)
_ = plt.plot(x,norm.ppf(x/10),color="red")
```
# Lists
- A list can store a collection of data of any size; you can think of it as a container
```
def b():
    pass  # lists store data, including functions
a=[1,2,1,5,'ab',True,b]
a
c='zxc'
list(c)
"".join(['a','b'])
```
## Let's start with an example
## Creating a list
- a = [1,2,3,4,5]
## Common list operations
```
s='aaa'
s*8
a=100
b=[1,2,34,5]
a in b
a=[100]
b=[1,2,3,4,a]
a in b
a=[1,2]
b=[3]
a+b
a=[1,2]
b=[3]
b+a
a=[1,2,34,[1000,2333]]
a
a[3]
a[3][1]
a=[1,2,34,[1000,[2888],2333]]
a
a[3][1][0]
b=[1,2,3,4,5]
b[1]=100
b
b=[1,2,3,4,5,6,7,8,9,10]
b[1:11:2]
for i in range(0,10,2):
    b[i]=100
b
# goal: print groups like [1,2] [4,5] [7,8] [10,11]
for i in range(0,10,3):
    print(b[i:i+2])
陈贺大傻子=[1,2,3,[1,2,3,[3,4,5,6]]]
len(陈贺大傻子)
hhh=[1,2,3,[5,6,7]]
count=0
for i in hhh:
    if type(i)==list:
        for j in i:
            count=count+1
    else:
        count=count+1
len(hhh)
a=[1,2,3]
for i in a:
    print(i)
a=[1,2,3,True]
max(a)
hhh=[1,2,3,[5,6,7]]
a=[1,2,3]
b=[2,3,4]
a>b
b=[4,3,2,1]
length=len(b)
# bubble sort, written inline first...
for i in range(length-1):
    for j in range(length-1-i):
        if b[j]>b[j+1]:
            b[j],b[j+1]=b[j+1],b[j]
# ...then wrapped in a function
def Dx(b):
    n=len(b)
    for i in range(0,n-1):
        for j in range(0,n-1-i):
            if b[j]>b[j+1]:
                b[j],b[j+1]=b[j+1],b[j]
Dx(b)
print(b)
```
# List indexing
- Mylist[index]
- Forward (positive) and reverse (negative) indexing
- Be careful not to index out of range
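A quick example (added for illustration):

```python
a = [10, 20, 30]
print(a[0])    # 10: the first element
print(a[-1])   # 30: the last element, via reverse (negative) indexing
# a[3] would raise IndexError: list index out of range
```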
## List slicing
- Mylist[start:end]
- Forward and reverse slicing
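A few slices to illustrate (added example):

```python
a = [0, 1, 2, 3, 4, 5]
print(a[1:4])    # [1, 2, 3]: from index 1 up to, but not including, index 4
print(a[::2])    # [0, 2, 4]: every second element
print(a[::-1])   # [5, 4, 3, 2, 1, 0]: a reverse slice
```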
## List +, *, in, not in
## Iterating over elements with a for loop
- A for loop can iterate over anything iterable
## EP:
- Iterate over a list using a while loop
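One possible sketch for this exercise:

```python
# Traverse a list with a while loop, advancing an index by hand.
a = [1, 2, 3]
i = 0
while i < len(a):   # stop before going out of range
    print(a[i])
    i = i + 1
```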
## Comparing lists
- \>,<,>=,<=,==,!=
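Lists are compared element by element, left to right; the first unequal pair decides (added example):

```python
print([1, 2, 3] < [1, 2, 4])   # True: the first unequal pair (3 vs 4) decides
print([1, 2] < [1, 2, 0])      # True: a list is smaller than a longer list it prefixes
print([2] > [1, 99, 99])       # True: 2 > 1 decides immediately
```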
## List comprehensions
[x for x in range(10)]
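Comprehensions can also transform and filter (added example):

```python
squares = [x * x for x in range(5)]            # transform each element
evens = [x for x in range(10) if x % 2 == 0]   # keep only matching elements
print(squares)  # [0, 1, 4, 9, 16]
print(evens)    # [0, 2, 4, 6, 8]
```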
## List methods
```
d=[1,2,3,4,5]
d.remove(3)
d
a=[1,2,3]
b=100
a.append(b)  # append takes exactly one argument (which may itself contain several items)
a
a=[1,2,5,3,[5,6],8]
a.count(5)
a=[1,2,3]
b=[100,22]
b.extend(a)
b
a.extend(b)
a
c=[1,2,3,4,5]
c.insert(0,100)
c.insert(3,100)
c=[1,2,3,4,5,6,7,8,9]
for i in range(0,len(c)+3,3):
    c.insert(i,100)
c
# homework:
d=[1,2,4,5,3,7]
for i in d:
    if i%2==0:
        print('')
    else:
        print('100',i)
```
## Splitting a string into a list
- split: splits a string on a separator you specify
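For example, splitting on a comma and joining back (added example):

```python
s = 'a,b,c'
parts = s.split(',')
print(parts)            # ['a', 'b', 'c']
print('-'.join(parts))  # 'a-b-c'
```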
## EP:
## Copying lists
- copy: shallow copy
- deepcopy (from the copy module): deep copy
- http://www.pythontutor.com/visualize.html#mode=edit
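The difference shows up with nested lists (added example):

```python
import copy

a = [1, [2, 3]]
shallow = a.copy()         # shallow copy: the inner list is shared
deep = copy.deepcopy(a)    # deep copy: the inner list is duplicated too
a[1].append(4)
print(shallow[1])  # [2, 3, 4]: the shared inner list saw the change
print(deep[1])     # [2, 3]: the deep copy did not
```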
## Sorting lists
- sort
- sorted
- Multi-key sorting
- Anonymous (lambda) functions
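A sketch of multi-key sorting with a lambda, here score descending and name ascending (added example):

```python
students = [('Bob', 85), ('Ann', 92), ('Cat', 85)]
# negating the score gives descending order; the name breaks ties ascending
students.sort(key=lambda t: (-t[1], t[0]))
print(students)  # [('Ann', 92), ('Bob', 85), ('Cat', 85)]
```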
```
(lambda x:print(x))(100)
# a bare * in a parameter list forces keyword-only arguments
c=[1,2,3,4]
c.sort(reverse=True)
c
```
## EP:
- Manually sort the list [5,3,8,0,17], in ascending or descending order
- 1
```
a=eval(input('Enter a list of scores: '))
b=max(a)
for i in a:
    if i>=b-10:
        print(i,'A')
    elif i>=b-20:
        print(i,'B')
    elif i>=b-30:
        print(i,'C')
    elif i>=b-40:
        print(i,'D')
    else:
        print(i,'F')
```
- 2
```
a = eval(input('Enter an integer list: '))
print(a[::-1])
```
- 3
```
a=eval(input('Enter integers between 1 and 100: '))
for i in a:
    b=a.count(i)
    print(i,'occurs',b,'times')
```
<br>
<h1 style = "font-size:30px; font-weight : bold; color : black; text-align: center; border-radius: 10px 15px;"> Telco Customer Churn: EDA, Predictions and Feature Importance with SHAP </h1>
<br>
# Goals
Perform an Exploratory Data Analysis (EDA) to visualize and understand:
* The distribution of values of the target and features;
* The relationship between each feature and the likelihood of customer churn.
Predict churn using 20% of data as test set using the following models:
* Logistic Regression;
* Random Forest;
* XGBoost;
* Catboost.
Understand how each feature impacts the predicted value using:
* Feature Importance;
* SHAP.
# <a id='0'>Content</a>
- <a href='#1'>Dataset Information</a>
- <a href='#2'>Importing Packages and Dataset + Data Cleaning</a>
- <a href='#3'>Exploratory Data Analysis</a>
- <a href='#31'>Demographic Features</a>
- <a href='#32'>Services Related Features</a>
- <a href='#33'>Account Information Features (categorical)</a>
- <a href='#34'>Account Information Features (numerical)</a>
- <a href='#4'>Creating and Evaluating Models</a>
- <a href='#41'>Logistic Regression</a>
- <a href='#42'>Random Forest</a>
- <a href='#43'>Random Forest w/preprocessing</a>
- <a href='#44'>XGBoost</a>
- <a href='#45'>CatBoost</a>
- <a href='#46'>Feature Importance and SHAP Plot</a>
- <a href='#5'>References</a>
## <center> If you find this notebook useful, support with an upvote! </center>
# <a id="1">Dataset Information</a>
### Content
"Predict behavior to retain customers. You can analyze all relevant customer data and develop focused customer retention programs." [IBM Sample Data Sets]
Each row represents a customer, each column contains customer’s attributes described on the column Metadata.
The data set includes information about:
- Customers who left within the last month – the column is called Churn
- Services that each customer has signed up for – phone, multiple lines, internet, online security, online backup, device protection, tech support, and streaming TV and movies
- Customer account information – how long they’ve been a customer, contract, payment method, paperless billing, monthly charges, and total charges
- Demographic info about customers – gender, age range, and if they have partners and dependents
# <a id="2">Importing Packages and Dataset + Data Cleaning</a>
```
import pandas as pd
import matplotlib as mat
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn import metrics
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from catboost import CatBoostClassifier
from catboost import Pool
import shap
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv('../data/raw/data.csv')
df
df.info()
```
Apparently, there are no missing values. But there is clearly an error. ‘Total Charges’ should be numeric. We can use pd.to_numeric to convert it.
```
df['TotalCharges'] = pd.to_numeric(df['TotalCharges'], errors='coerce')
df['TotalCharges'].dtype
```
After changing a column from string to numeric, some values may not be recognized, resulting in missing values. Let’s check if this happened.
```
df['TotalCharges'].isnull().sum()
```
Now there are 11 'missing' values, but they might simply indicate that there were no charges for that customer up to the point when the data was obtained. The feature 'tenure' indicates for how long someone has been a customer. Let's check the number of samples with the value '0' in that feature and, in case we also find 11 customers, compare whether their indices match those of the 'missing' values.
```
df['tenure'].isin([0]).sum()
print(df[df['tenure'].isin([0])].index)
print(df[df['TotalCharges'].isna()].index)
```
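The two printed index lists can also be compared programmatically rather than by eye; a sketch with toy data standing in for the real dataframe:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the Telco dataframe (assumption: the same pattern,
# where TotalCharges is missing exactly when tenure == 0).
toy = pd.DataFrame({'tenure': [5, 0, 3, 0],
                    'TotalCharges': [10.0, np.nan, 7.5, np.nan]})
zero_tenure_idx = toy[toy['tenure'] == 0].index
missing_idx = toy[toy['TotalCharges'].isna()].index
print(zero_tenure_idx.equals(missing_idx))  # True
```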
We got a match here. After confirming our suspicion, we can replace those missing values with '0'.
```
df.loc[:,'TotalCharges'] = df.loc[:,'TotalCharges'].replace(np.nan,0)
df['TotalCharges'].isnull().sum()
```
The feature 'Senior Citizen', which is categorical ('Yes' or 'No'), is set as numeric. Although all features will be changed to numeric to be used in our prediction models, I'll convert it from numeric to string for now.
```
df['SeniorCitizen'] = df['SeniorCitizen'].apply(str)
senior_map = {'0': 'No', '1': 'Yes'}
df['SeniorCitizen'] = df['SeniorCitizen'].map(senior_map)
df.info()
```
Let's finish this section by checking the possible values of categorical features and viewing descriptive statistics (df.describe) for numerical features.
```
for col in df.select_dtypes('object').columns:
print(col, '- # unique values:', df[col].nunique())
for col in df.select_dtypes('object').columns:
print(col, '\n')
print(df[col].value_counts(), '\n')
df.describe().T
```
# <a id="3">Exploratory Data Analysis</a>
We will start our EDA by looking at the distribution of the target variable (Churn). It’s expected that the dataset is imbalanced, with fewer than 50% of the customers leaving the company.
## Churn
```
plt.figure(figsize=(6,4))
ax = sns.countplot(x="Churn", data=df, palette="rocket")
plt.xlabel("Churn?", fontsize= 12)
plt.ylabel("# of Clients", fontsize= 12)
plt.ylim(0,7500)
plt.xticks([0,1], ['No', 'Yes'], fontsize = 11)
for p in ax.patches:
ax.annotate((p.get_height()), (p.get_x()+0.30, p.get_height()+300), fontsize = 14)
plt.show()
plt.figure(figsize=(7,5))
df['Churn'].value_counts().plot(kind='pie',labels = ['',''], autopct='%1.1f%%', colors = ['indigo','salmon'], explode = [0,0.05], textprops = {"fontsize":15})
plt.legend(labels=['No Churn', 'Churn'])
plt.show()
```
Over the period represented in this dataset, the customer churn rate is 26.5%. As we move on to analyze the features, we can compare this number with the percentage of churn found for each category, giving us a better idea of the impact of a given feature on the company’s ability to retain its customers.
```
#Label encoding Churn to use sns.barplot
le = LabelEncoder()
df['Churn'] = le.fit_transform(df['Churn'])
df['Churn'].value_counts()
```
We can divide the features into the following groups:
- Demographic features;
- Services related features
- Account information related features (categorical and numerical).
For each group, we’ll start by looking at the features’ distributions. Then, we’ll check the percentage of churn for each category to understand their relationship with the target.
```
demo_features = ['gender', 'SeniorCitizen', 'Partner', 'Dependents']
serv_features = ['PhoneService', 'MultipleLines', 'InternetService', 'OnlineSecurity', 'OnlineBackup'
, 'DeviceProtection', 'TechSupport', 'StreamingTV', 'StreamingMovies']
cat_accinfo_features = ['Contract', 'PaperlessBilling', 'PaymentMethod']
num_accinfo_features = ['tenure', 'MonthlyCharges', 'TotalCharges']
```
## <a id="31">Demographic Features</a>
```
plt.figure(figsize=(18,12))
for i,col in enumerate(demo_features):
plt.subplot(2,2,i + 1)
ax = sns.countplot(data = df, x = col, palette = 'rocket')
plt.xlabel(col, fontsize= 14)
plt.ylabel("# of Clients", fontsize= 13)
plt.ylim(0,7000)
plt.xticks(fontsize= 15)
plt.yticks(fontsize= 14)
for p in ax.patches:
ax.annotate((p.get_height()), (p.get_x()+0.32, p.get_height()+300), fontsize= 16)
plt.tight_layout()
plt.show()
plt.figure(figsize=(18,12))
for i,col in enumerate(demo_features):
plt.subplot(2,2,i + 1)
ax = sns.countplot(data = df, x = col, hue="Churn", palette = 'rocket')
plt.xlabel(col, fontsize= 14)
plt.ylabel("# of Clients", fontsize= 13)
plt.ylim(0,7000)
plt.xticks(fontsize= 14)
for p in ax.patches:
ax.annotate((p.get_height()), (p.get_x()+0.14, p.get_height()+300), fontsize= 14)
plt.tight_layout()
plt.show()
plt.figure(figsize=(16,10))
for i,col in enumerate(demo_features):
plt.subplot(2,2,i + 1)
ax = sns.barplot(x = col, y = "Churn", data = df, palette = 'rocket', ci = None)
plt.xlabel(col, fontsize= 14)
plt.ylabel("% of Churn", fontsize= 13)
plt.ylim(0,0.5)
plt.xticks(fontsize= 14)
for p in ax.patches:
ax.annotate("%.2f" %(p.get_height()), (p.get_x()+0.35, p.get_height()+0.03),fontsize=15)
plt.tight_layout()
plt.show()
```
What we can observe for each feature:
- Gender: There is barely any difference in churn percentage between men and women;
- Senior Citizen: The churn percentage for senior customers is above 40%, indicating a high likelihood of churn from that group;
- Partner: Single customers are more likely to churn than customers with partners;
- Dependents: Customers with dependents are less likely to churn than customers without any dependents.
We could go a little further and combine the two ‘family-related’ features, ‘Partner’ and ‘Dependents’, to see whether both of them in fact contribute to the chance of customer churn or retention.
It is expected that the majority of customers with dependents are married, so it could be that the partnership has more influence on the target than whether a customer has a child. Although this might be unlikely, by analyzing both features together we can discard such a hypothesis with more confidence.
```
df.groupby(['Partner'])['Dependents'].value_counts()
```
As expected, most customers with dependents also have a partner. Yet, the number of single customers with dependents seems significant enough for us to draw some conclusions about this particular group.
```
df.groupby(by=['Partner', 'Dependents'])['Churn'].value_counts(normalize = True)
plt.figure(figsize=(12,4))
ax = sns.barplot(x = "Dependents", y = "Churn", hue = "Partner", data = df, palette = 'rocket', ci = None)
plt.ylabel("% of Churn", fontsize= 12)
plt.ylim(0,0.5)
for p in ax.patches:
    ax.annotate("%.2f" %(p.get_height()), (p.get_x()+0.15, p.get_height()+0.03),fontsize=14)
plt.show()
```
We can see that both features contribute to the likelihood of churn. The group of people with partners and dependents and the group with neither of those sit at the extremes in terms of likelihood of churn (14% and 34%, respectively). The churn of customers with partners and without dependents falls close to the overall percentage of churn in our dataset, while the ‘opposite’ group still has a lower chance of it.
## <a id="32">Services Related Features</a>
```
plt.figure(figsize=(18,30))
for i,col in enumerate(serv_features):
    plt.subplot(5,2,i + 1)
    ax = sns.countplot(data = df, x = col, palette = 'rocket')
    plt.xlabel(col, fontsize= 14)
    plt.ylabel("# of Clients", fontsize= 13)
    plt.ylim(0,7500)
    plt.xticks(fontsize= 15)
    plt.yticks(fontsize= 14)
    for p in ax.patches:
        ax.annotate((p.get_height()), (p.get_x()+0.31, p.get_height()+300), fontsize= 16)
plt.tight_layout()
plt.show()
```
A relatively small group of customers doesn’t have internet service, and an even smaller one doesn’t have phone service. One thing to keep in mind is that most services are only provided to customers who subscribe to Telco’s internet service.
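This dependency can be checked directly with a cross-tabulation. A minimal sketch on toy data (the column names mirror the dataset, but the rows here are made up for illustration):

```python
import pandas as pd

# Toy rows mimicking the Telco columns (illustrative values only)
toy = pd.DataFrame({
    'InternetService': ['DSL', 'Fiber optic', 'No', 'No', 'DSL'],
    'OnlineSecurity':  ['Yes', 'No', 'No internet service',
                        'No internet service', 'Yes'],
})

# Customers without internet can only fall into 'No internet service'
ct = pd.crosstab(toy['InternetService'], toy['OnlineSecurity'])
print(ct)
```

On the actual dataset, a crosstab like this would show every add-on service collapsing to ‘No internet service’ for customers without internet.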
```
plt.figure(figsize=(18,30))
for i,col in enumerate(serv_features):
    plt.subplot(5,2,i + 1)
    ax = sns.countplot(data = df, x = col, hue="Churn", palette = 'rocket')
    plt.xlabel(col, fontsize= 14)
    plt.ylabel("# of Clients", fontsize= 13)
    plt.ylim(0,7000)
    plt.xticks(fontsize= 14)
    for p in ax.patches:
        ax.annotate((p.get_height()), (p.get_x()+0.12, p.get_height()+300), fontsize= 13)
plt.tight_layout()
plt.show()

plt.figure(figsize=(16,25))
for i,col in enumerate(serv_features):
    plt.subplot(5,2,i + 1)
    ax = sns.barplot(x = col, y = "Churn", data = df, palette = 'rocket', ci = None)
    plt.xlabel(col, fontsize= 14)
    plt.ylabel("% of Churn", fontsize= 13)
    plt.ylim(0,0.5)
    plt.xticks(fontsize= 14)
    for p in ax.patches:
        ax.annotate("%.2f" %(p.get_height()), (p.get_x()+0.32, p.get_height()+0.03),fontsize=15)
plt.tight_layout()
plt.show()
```
Curiously enough, the difference in churn between clients with and without phone service is quite small, becoming negligible if we take those with multiple lines out of the equation. In this group of features, the real game-changers in terms of customer retention are those related to internet services.
In the feature ‘InternetService’, the percentage of churn differs markedly between categories. Those who don’t subscribe to the company’s internet (presumably, they only use the phone service) are the most likely to remain customers. The likelihood of churn among customers with DSL service is also smaller than the overall probability.
The highest percentage of churn, at over 40%, is among customers with fiber optic internet. Fiber optic tends to be faster than DSL, but its subscription is usually more expensive as well. We don’t have information about the fee for each service, but we can at least find the mean monthly charges per type of internet to check whether this is the case.
```
df.groupby(by=['InternetService'])['MonthlyCharges'].mean().sort_values()
```
As expected, the average charges for each service are significantly different, with fiber optic being the most expensive. Without any additional information, it’s hard to draw definitive conclusions, but it seems that the cost-benefit relationship of their fiber optic service is far from being attractive enough to retain customers.
Such a high churn rate might indicate that the service’s quality is subpar in terms of speed and/or reliability. Analyzing complaints received by their call center to extract useful and specific information about the internet service is a must. A survey with a significant group of customers, aiming to understand how they perceive the quality of the service, is another step to find the problem and help define the course of action.
As for the other services, the likelihood of churn among customers who have each of them is actually lower than among those who don’t. The largest differences are found in ‘TechSupport’ and ‘OnlineSecurity’, while the smallest are in the streaming services.
Let’s calculate the average monthly charges from each category in the Tech Support and Online Security features.
```
print(df.groupby(by=['TechSupport'])['MonthlyCharges'].mean().sort_values(), '\n')
print(df.groupby(by=['OnlineSecurity'])['MonthlyCharges'].mean().sort_values(), '\n')
print(df.groupby(by=['OnlineSecurity', 'TechSupport'])['MonthlyCharges'].mean().sort_values())
```
Neither service seems to affect the subscription charges by much. If the company can quantify the cost of providing each service per customer and finds that it is relatively small, it could either reduce the extra subscription fee for those additional services or simply cut that fee and offer those services as standard to internet customers for a trial period. Given that most customers don’t subscribe to those services, and that the services have a significant impact on customer retention, it’s possible that such a strategy could result in higher profit in the long term.
Let’s see if the churn rate gets significantly lower for customers who have access to both services.
```
print(df.groupby(by=['TechSupport'])['OnlineSecurity'].value_counts(), '\n')
plt.figure(figsize=(12,4))
ax = sns.barplot(x = "TechSupport", y = "Churn", hue = "OnlineSecurity", data = df, palette = 'rocket', ci = None)
plt.ylabel("% of Churn", fontsize= 12)
plt.ylim(0,1.0)
for p in ax.patches:
    ax.annotate("%.2f" %(p.get_height()), (p.get_x()+0.070, p.get_height()+0.03),fontsize=14)
plt.show()
```
The differences in terms of churn rate are quite significant. While customers who use neither of those services have a close to 50% chance of churn, the churn rate for those who have both is lower than 10%, supporting the previous point.
## <a id="33">Account Information Features (categorical)</a>
```
plt.figure(figsize=(12,15))
for i,col in enumerate(cat_accinfo_features):
    plt.subplot(3,1,i + 1)
    ax = sns.countplot(data = df, x = col, palette = 'rocket')
    plt.xlabel(col, fontsize= 14)
    plt.ylabel("# of Clients", fontsize= 13)
    plt.ylim(0,5000)
    plt.xticks(fontsize= 14)
    plt.yticks(fontsize= 14)
    for p in ax.patches:
        ax.annotate((p.get_height()), (p.get_x()+0.32, p.get_height()+300), fontsize= 15)
plt.tight_layout()
plt.show()

plt.figure(figsize=(12,15))
for i,col in enumerate(cat_accinfo_features):
    plt.subplot(3,1,i + 1)
    ax = sns.countplot(data = df, x = col, hue="Churn", palette = 'rocket')
    plt.xlabel(col, fontsize= 14)
    plt.ylabel("# of Clients", fontsize= 13)
    plt.ylim(0,5000)
    plt.xticks(fontsize= 13)
    for p in ax.patches:
        ax.annotate((p.get_height()), (p.get_x()+0.135, p.get_height()+300), fontsize= 14)
plt.tight_layout()
plt.show()

plt.figure(figsize=(12,15))
for i,col in enumerate(cat_accinfo_features):
    plt.subplot(3,1,i + 1)
    ax = sns.barplot(x = col, y = "Churn", data = df, palette = 'rocket', ci = None)
    plt.xlabel(col, fontsize= 14)
    plt.ylabel("% of Churn", fontsize= 13)
    plt.ylim(0,0.55)
    plt.xticks(fontsize= 14)
    for p in ax.patches:
        ax.annotate("%.2f" %(p.get_height()), (p.get_x()+0.32, p.get_height()+0.02),fontsize=15)
plt.tight_layout()
plt.show()
```
Naturally, in terms of contract, the highest churn rate is for the ‘month-to-month’ type, which is also the most common contract. What seems odd is the high chance of churn among customers who choose electronic check as the payment method and opt for paperless billing. It could be, for instance, that most customers on the month-to-month contract also fall into those categories. We can check that.
```
print(df.groupby(by=['Contract'])['PaperlessBilling'].value_counts(normalize = True),' \n')
print(df.groupby(by=['Contract'])['PaymentMethod'].value_counts(normalize = True))
```
When we group the dataset by contract, we can see that the percentage of customers who don’t receive their bills through the mail and who pay them via electronic check is higher for the ‘month-to-month’ type. Yet, this doesn’t seem to be enough to justify such a high churn rate for those categories. There is a good chance that we will find higher percentages of churn in them, regardless of the type of contract. Let’s see.
```
plt.figure(figsize=(12,4))
ax = sns.barplot(x = "PaperlessBilling", y = "Churn", hue = "Contract", data = df, palette = 'rocket', ci = None)
plt.ylabel("% of Churn", fontsize= 12)
plt.ylim(0,0.6)
for p in ax.patches:
    ax.annotate("%.2f" %(p.get_height()), (p.get_x()+0.08, p.get_height()+0.03),fontsize=14)
plt.show()
plt.figure(figsize=(12,4))
ax = sns.barplot(x = "PaymentMethod", y = "Churn", hue = "Contract", data = df, palette = 'rocket', ci = None)
plt.ylabel("% of Churn", fontsize= 12)
plt.ylim(0,0.6)
for p in ax.patches:
    ax.annotate("%.2f" %(p.get_height()), (p.get_x()+0.05, p.get_height()+0.020),fontsize=14)
plt.show()
```
The likelihood of churn is, in fact, higher for those categories, regardless of the type of contract. Personally, without additional information or domain knowledge, it is hard for me to see causality between the churn rate and the way someone receives and pays their bills. It is more likely that those two features are associated with several others. The internet service, a feature with notable differences in churn rate between its categories, could be correlated with them.
```
print(df.groupby(by=['InternetService'])['PaperlessBilling'].value_counts(normalize = True), '\n')
print(df.groupby(by=['InternetService'])['PaymentMethod'].value_counts(normalize = True))
```
What stands out here in our grouping operations:
- Customers with Internet Service = ‘No’: Less than 30% receive paperless bills and only 8% pay them with electronic check;
- Customers with Internet Service = ‘Fiber Optic’: 77% receive paperless bills and more than 51% pay them with electronic check.
We can recall that the lowest churn rate in the internet service feature is among customers who don’t use Telco’s internet, while the highest is among those who use their fiber optic internet. So those results don’t come as a surprise.
Although we shouldn’t conclude that the payment method or the way the bills are sent has a direct influence on customer retention, it is worth pointing out that those features will probably be useful for our prediction models.
## <a id="34">Account Information Features (numerical)</a>
```
plt.figure(figsize=(12,15))
for i,col in enumerate(num_accinfo_features):
    plt.subplot(3,1,i + 1)
    sns.distplot(df.loc[:,col])
    #plt.ticklabel_format(style='plain', axis='x') #suppressing scientific notation
    plt.ylabel('')
plt.tight_layout()
plt.show()

plt.figure(figsize=(12,15))
for i,col in enumerate(num_accinfo_features):
    plt.subplot(3,1,i + 1)
    sns.kdeplot(df.loc[(df['Churn'] == 0), col], label = 'No Churn', shade = True)
    sns.kdeplot(df.loc[(df['Churn'] == 1), col], label = 'Churn', shade = True)
    plt.legend()
    plt.ylabel('')
plt.tight_layout()
plt.show()
```
What we can observe for each feature:
- Tenure: High concentration of churned customers in the first months.
- Monthly Charges: High concentration of churned customers at higher values (around 60 and beyond).
- Total Charges: Somewhat similar distributions, but the ‘No churn’ distribution has lower values.
Let’s get the mean values to complement our analysis.
```
print(df.groupby(by=['Churn'])['tenure'].mean().sort_values(), '\n')
print(df.groupby(by=['Churn'])['MonthlyCharges'].mean().sort_values(), '\n')
print(df.groupby(by=['Churn'])['TotalCharges'].mean().sort_values())
```
As expected, the average tenure period for churned customers is lower and the average monthly charges are higher than the same metrics for retained customers. The average total charges are lower for churned customers, which is probably due to their lower tenure.
The density plot for churned customers in the ‘tenure’ feature showed a high concentration in the first months. Let’s divide this feature into bins to get the churn rate per year of service.
```
df['tenure_bin'] = pd.cut(df['tenure'],[-1,12,24,36,48,60,100])
df['tenure_bin'].value_counts(sort = False)
plt.figure(figsize=(12,4))
ax = sns.barplot(x = "tenure_bin", y = "Churn", data = df, palette = 'rocket', ci = None)
plt.ylabel("% of Churn", fontsize= 12)
plt.ylim(0,0.6)
plt.xticks([0,1,2,3,4,5], ['12 or less', '13 to 24', '25 to 36', '37 to 48', '49 to 60', 'more than 60'], fontsize = 12)
plt.xlabel("Tenure Group (in months)", fontsize= 12)
for p in ax.patches:
    ax.annotate("%.2f" %(p.get_height()), (p.get_x()+0.25, p.get_height()+0.03),fontsize=14)
plt.show()
```
Almost 50 percent of those who were customers for a year or less ended up leaving the company. It’s not unusual for some types of business to have a higher churn rate in the first year or two. Nevertheless, a churn rate this high in the first year indicates that the quality of the service provided fails to live up to new customers’ expectations.
# <a id="4">Creating and Evaluating Models</a>
Now, let's move on to the predictive models. In this notebook, we will use the Area Under the Receiver Operating Characteristic Curve (ROC-AUC or AUC-ROC) as the main metric to assess the performance of our models. The ROC-AUC measures a model's ability to distinguish between classes. [(Link for more information about ROC-AUC)](https://www.analyticsvidhya.com/blog/2020/06/auc-roc-curve-machine-learning/). Nevertheless, we will also check the accuracy, the classification report and the confusion matrix for each model.
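As a quick illustration of the metric on toy labels and scores (not from our dataset): the AUC is the probability that a randomly chosen positive is ranked above a randomly chosen negative.

```python
from sklearn.metrics import roc_auc_score

# Toy ground truth and predicted churn probabilities
y_true  = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]

# Of the 4 positive/negative pairs, 3 are ranked correctly -> AUC = 0.75
print(roc_auc_score(y_true, y_score))  # 0.75
```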
First, we will make a copy of the dataset and separate the features from the target.
```
X = df.copy().drop('Churn', axis = 1)
Y = df['Churn'].copy()
```
We’re also going to remove ‘customerID’ and the feature ‘tenure_bin’, which we created for EDA purposes, since we’re not planning to use them.
```
X = X.drop(['customerID', 'tenure_bin'], axis = 1)
X
X.info()
```
We need to encode the features to use them in our models. We could use something like sklearn’s OrdinalEncoder for this, but I’ll do it manually. This effort will pay off later when we analyze the predictions using SHAP.
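For comparison, a rough sketch of what the OrdinalEncoder route mentioned above could look like on a toy frame; the explicit category order is my assumption, chosen so the codes match the manual maps below:

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

toy = pd.DataFrame({'Contract': ['Month-to-month', 'Two year', 'One year']})

# Explicit category order reproduces the manual contract_map (0, 1, 2)
enc = OrdinalEncoder(categories=[['Month-to-month', 'One year', 'Two year']])
codes = enc.fit_transform(toy[['Contract']]).astype(int).ravel()
print(codes)  # [0 2 1]
```

The drawback is that the mapping lives inside the encoder object rather than in readable dictionaries, which is why the manual approach pays off when reading SHAP plots.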
```
gender_map = {'Female': 0, 'Male': 1}
yes_or_no_map = {'No': 0, 'Yes': 1} #seniorcitizen, partner, dependents, phoneservice, paperlessbilling
multiplelines_map = {'No phone service': -1, 'No': 0, 'Yes': 1}
internetservice_map = {'No': -1, 'DSL': 0, 'Fiber optic': 1}
add_netservices_map = {'No internet service': -1, 'No': 0, 'Yes': 1} #onlinesecurity, onlinebackup, deviceprotection,techsupport,streaming services
contract_map = {'Month-to-month': 0, 'One year': 1, 'Two year': 2}
paymentmethod_map = {'Electronic check': 0, 'Mailed check': 1, 'Bank transfer (automatic)': 2, 'Credit card (automatic)': 3}
X['gender'] = X['gender'].map(gender_map).astype('int')
X['Partner'] = X['Partner'].map(yes_or_no_map).astype('int')
X['SeniorCitizen'] = X['SeniorCitizen'].map(yes_or_no_map).astype('int')
X['Dependents'] = X['Dependents'].map(yes_or_no_map).astype('int')
X['PhoneService'] = X['PhoneService'].map(yes_or_no_map).astype('int')
X['MultipleLines'] = X['MultipleLines'].map(multiplelines_map).astype('int')
X['InternetService'] = X['InternetService'].map(internetservice_map).astype('int')
X['OnlineSecurity'] = X['OnlineSecurity'].map(add_netservices_map).astype('int')
X['OnlineBackup'] = X['OnlineBackup'].map(add_netservices_map).astype('int')
X['DeviceProtection'] = X['DeviceProtection'].map(add_netservices_map).astype('int')
X['TechSupport'] = X['TechSupport'].map(add_netservices_map).astype('int')
X['StreamingTV'] = X['StreamingTV'].map(add_netservices_map).astype('int')
X['StreamingMovies'] = X['StreamingMovies'].map(add_netservices_map).astype('int')
X['Contract'] = X['Contract'].map(contract_map).astype('int')
X['PaperlessBilling'] = X['PaperlessBilling'].map(yes_or_no_map).astype('int')
X['PaymentMethod'] = X['PaymentMethod'].map(paymentmethod_map).astype('int')
X.info()
```
Now we will split the data into train and test sets.
```
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2, random_state = 42, stratify = Y)
```
## <a id="41">Logistic Regression</a>
The first model we're going to use is Logistic Regression, which requires two things for better performance:
- Scaling the numerical features;
- (One hot) encoding the categorical (nominal) features.
We can use the Column Transformer to assign each transformation to its correct features and fit it in a pipeline as a preprocessing step.
```
num_features = num_accinfo_features
cat_3p_features = []
for col in X.columns:
    if (X[col].nunique() > 2) & (X[col].nunique() < 5): #less than 5 to exclude the numerical features
        cat_3p_features.append(col)
print('Numerical features: ', num_features, '\n')
print('Nominal with 3 or more categories: ', cat_3p_features)
cat_transformer = OneHotEncoder(handle_unknown='ignore')
num_transformer = StandardScaler()
preprocessor = ColumnTransformer(
transformers=[
('num', num_transformer, num_features),
('cat', cat_transformer, cat_3p_features)
], remainder='passthrough')
lr_pipe = Pipeline([('Transformers', preprocessor)
,('LR', LogisticRegression(random_state = 42, max_iter = 1000))])
```
Even without intending to do extensive hyperparameter tuning, we can give each model a better chance of good performance by testing some values for a key parameter and choosing one of them based on the cross-validation score.
```
def cv_function(model, param, param_values):
    # 'param_values' avoids shadowing the built-in 'list'
    rp_st_kfold = RepeatedStratifiedKFold(n_splits=10, n_repeats = 3, random_state = 42)
    search_model = model
    print ('Hyperparameter: ', param)
    for i in param_values:
        param_dict = {param : i}
        search_model.set_params(**param_dict)
        cv_score = cross_val_score(search_model, X_train, Y_train, cv=rp_st_kfold, scoring='roc_auc')
        print("Parameter: {0:0.2f} - AUC(SD): {1:0.4f} ({2:0.4f})". format(i, cv_score.mean(), cv_score.std()))
params_lr_list = [0.01,0.1,0.2,0.3,0.5,0.7,1,2,3,5]
param_lr = 'LR__C'
cv_function(lr_pipe, param_lr, params_lr_list)
```
After a certain point, there is barely any improvement. Choice: C = 3.0
```
lr_param = {'LR__C': 3.0}
lr_pipe.set_params(**lr_param)
lr_pipe
```
Now, let’s fit this model and predict.
```
lr_pipe.fit(X_train, Y_train)
pred_lr = lr_pipe.predict(X_test)
print("Test Accuracy: ",metrics.accuracy_score(Y_test, pred_lr))
lr_confusion_matrix = metrics.confusion_matrix(Y_test, pred_lr)
sns.heatmap(lr_confusion_matrix, annot=True, fmt="d")
plt.xlabel("Predicted Label", fontsize= 12)
plt.ylabel("True Label", fontsize= 12)
plt.show()
print(metrics.classification_report(Y_test, pred_lr, labels = [0, 1]))
lr_pred_proba = lr_pipe.predict_proba(X_test)[:,1]
lr_roc_auc = metrics.roc_auc_score(Y_test, lr_pred_proba)
print('ROC_AUC: ', lr_roc_auc)
lr_fpr, lr_tpr, thresholds = metrics.roc_curve(Y_test, lr_pred_proba)
plt.plot(lr_fpr,lr_tpr, label = 'ROC_AUC = %0.3f' % lr_roc_auc)
plt.xlabel("False Positive Rate", fontsize= 12)
plt.ylabel("True Positive Rate", fontsize= 12)
plt.legend(loc="lower right")
plt.show()
```
## <a id="42">Random Forest</a>
For every model, we’re going to follow the same steps we took with Logistic Regression, except for the preprocessing pipeline.
```
rf_model = RandomForestClassifier(random_state = 42)
params_rf_list = [100,150,200,250,300,400,500]
param_rf = 'n_estimators'
cv_function(rf_model, param_rf, params_rf_list)
rf_param = {'n_estimators': 500}
rf_model.set_params(**rf_param)
rf_model
rf_model.fit(X_train, Y_train)
pred_rf = rf_model.predict(X_test)
print("Test Accuracy: ",metrics.accuracy_score(Y_test, pred_rf))
rf_confusion_matrix = metrics.confusion_matrix(Y_test, pred_rf)
sns.heatmap(rf_confusion_matrix, annot=True, fmt="d")
plt.xlabel("Predicted Label", fontsize= 12)
plt.ylabel("True Label", fontsize= 12)
plt.show()
print(metrics.classification_report(Y_test, pred_rf, labels = [0, 1]))
rf_pred_proba = rf_model.predict_proba(X_test)[:,1]
rf_roc_auc = metrics.roc_auc_score(Y_test, rf_pred_proba)
print('ROC_AUC: ', rf_roc_auc)
rf_fpr, rf_tpr, thresholds = metrics.roc_curve(Y_test, rf_pred_proba)
plt.plot(rf_fpr,rf_tpr, label = 'ROC_AUC = %0.3f' % rf_roc_auc)
plt.xlabel("False Positive Rate", fontsize= 12)
plt.ylabel("True Positive Rate", fontsize= 12)
plt.legend(loc="lower right")
plt.show()
```
The results we found with Random Forest were quite disappointing. Although feature scaling and one-hot encoding aren’t necessary for this model, we can apply them just for testing purposes.
## <a id="43">Random Forest with Preprocessing</a>
```
rf_pipe = Pipeline([('Transformers', preprocessor)
,('RF', RandomForestClassifier(n_estimators = 500, random_state = 42))])
rf_pipe.fit(X_train, Y_train)
pred_rf_pipe = rf_pipe.predict(X_test)
print("Test Accuracy: ",metrics.accuracy_score(Y_test, pred_rf_pipe))
rf_pipe_confusion_matrix = metrics.confusion_matrix(Y_test, pred_rf_pipe)
sns.heatmap(rf_pipe_confusion_matrix, annot=True, fmt="d")
plt.xlabel("Predicted Label", fontsize= 12)
plt.ylabel("True Label", fontsize= 12)
plt.show()
print(metrics.classification_report(Y_test, pred_rf_pipe, labels = [0, 1]))
rf_pipe_pred_proba = rf_pipe.predict_proba(X_test)[:,1]
rf_pipe_roc_auc = metrics.roc_auc_score(Y_test, rf_pipe_pred_proba)
print('ROC_AUC: ', rf_pipe_roc_auc)
rf_pipe_fpr, rf_pipe_tpr, thresholds = metrics.roc_curve(Y_test, rf_pipe_pred_proba)
plt.plot(rf_pipe_fpr,rf_pipe_tpr, label = 'ROC_AUC = %0.3f' % rf_pipe_roc_auc)
plt.xlabel("False Positive Rate", fontsize= 12)
plt.ylabel("True Positive Rate", fontsize= 12)
plt.legend(loc="lower right")
plt.show()
```
It did not go too well either. Let’s move on to the boosting models.
## <a id="44">XGBoost</a>
```
xgb_model = XGBClassifier(learning_rate = 0.05 ,random_state = 42, eval_metric = 'logloss')
params_xgb_list = [50,75,100,150,200,250,300]
param_xgb = 'n_estimators'
cv_function(xgb_model, param_xgb, params_xgb_list)
xgb_param = {'n_estimators': 75}
xgb_model.set_params(**xgb_param)
xgb_model
xgb_model.fit(X_train, Y_train, eval_set = [(X_test,Y_test)])
pred_xgb = xgb_model.predict(X_test)
print("Test Accuracy: ",metrics.accuracy_score(Y_test, pred_xgb))
xgb_confusion_matrix = metrics.confusion_matrix(Y_test, pred_xgb)
sns.heatmap(xgb_confusion_matrix, annot=True, fmt="d")
plt.xlabel("Predicted Label", fontsize= 12)
plt.ylabel("True Label", fontsize= 12)
plt.show()
print(metrics.classification_report(Y_test, pred_xgb, labels = [0, 1]))
xgb_pred_proba = xgb_model.predict_proba(X_test)[:,1]
xgb_roc_auc = metrics.roc_auc_score(Y_test, xgb_pred_proba)
print('ROC_AUC: ', xgb_roc_auc)
xgb_fpr, xgb_tpr, thresholds = metrics.roc_curve(Y_test, xgb_pred_proba)
plt.plot(xgb_fpr,xgb_tpr, label = 'ROC_AUC = %0.3f' % xgb_roc_auc)
plt.xlabel("False Positive Rate", fontsize= 12)
plt.ylabel("True Positive Rate", fontsize= 12)
plt.legend(loc="lower right")
plt.show()
```
## <a id="45">Catboost</a>
```
categorical_ft = [x for x in X.columns if x not in num_features]
print(categorical_ft)
cat_model = CatBoostClassifier (random_state = 42, eval_metric = 'AUC', cat_features = categorical_ft, verbose = 0)
#cat_model.get_params()
params_cat_list = [50,75,100,150,200,250,300]
param_cat = 'n_estimators'
cv_function(cat_model, param_cat, params_cat_list)
cat_param = {'n_estimators':100}
cat_model.set_params(**cat_param)
#cat_model
cat_model.fit(X_train, Y_train, eval_set = [(X_test,Y_test)], cat_features = categorical_ft)
#xgb_model.fit(X_train, Y_train, early_stopping_rounds = 100, eval_set = [(X_test,Y_test)])
#cat_model.fit(X_train, Y_train)
pred_cat = cat_model.predict(X_test)
print("Test Accuracy: ",metrics.accuracy_score(Y_test, pred_cat))
cat_confusion_matrix = metrics.confusion_matrix(Y_test, pred_cat)
sns.heatmap(cat_confusion_matrix, annot=True, fmt="d")
plt.xlabel("Predicted Label", fontsize= 12)
plt.ylabel("True Label", fontsize= 12)
plt.show()
print(metrics.classification_report(Y_test, pred_cat, labels = [0, 1]))
cat_pred_proba = cat_model.predict_proba(X_test)[:,1]
cat_roc_auc = metrics.roc_auc_score(Y_test, cat_pred_proba)
print('ROC_AUC: ', cat_roc_auc)
cat_fpr, cat_tpr, thresholds = metrics.roc_curve(Y_test, cat_pred_proba)
plt.plot(cat_fpr,cat_tpr, label = 'ROC_AUC = %0.3f' % cat_roc_auc)
plt.xlabel("False Positive Rate", fontsize= 12)
plt.ylabel("True Positive Rate", fontsize= 12)
plt.legend(loc="lower right")
plt.show()
```
Results (AUC/accuracy):
- Logistic Regression: 0.842/0.807
- Random Forest: 0.825/0.788
- Random Forest w/preprocessing: 0.823/0.780
- XGBoost: 0.846/0.806
- Catboost: 0.849/0.813
CatBoost yielded the best results, although they were quite close to those obtained with XGBoost and Logistic Regression.
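For a quick side-by-side view, the figures listed above can be tabulated (values copied from the list, not recomputed):

```python
import pandas as pd

# Scores quoted in the results list above
results = pd.DataFrame({
    'model': ['Logistic Regression', 'Random Forest',
              'Random Forest w/prep', 'XGBoost', 'CatBoost'],
    'auc':      [0.842, 0.825, 0.823, 0.846, 0.849],
    'accuracy': [0.807, 0.788, 0.780, 0.806, 0.813],
}).sort_values('auc', ascending=False)
print(results)
```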
## <a id="46">Feature Importance and SHAP Plot</a>
Let’s see which features matter most for CatBoost’s predictions.
```
pool = Pool(X_train, Y_train, cat_features=categorical_ft)
Feature_importance = pd.DataFrame({'feature_importance': cat_model.get_feature_importance(pool),
'feature_names': X_train.columns}).sort_values(by=['feature_importance'],
ascending=False)
Feature_importance
plt.figure(figsize=(10,10))
sns.barplot(x=Feature_importance['feature_importance'], y=Feature_importance['feature_names'], palette = 'rocket')
plt.show()
```
To better interpret the model’s results, and maybe gain some insights, we can use the SHAP package [(link)](https://shap.readthedocs.io/en/latest/example_notebooks/tabular_examples/tree_based_models/Catboost%20tutorial.html).
```
explainer = shap.TreeExplainer(cat_model)
shap_values = explainer.shap_values(pool)
shap.summary_plot(shap_values, X_train)
```
Since we manually encoded the categorical features, it becomes easier to understand what is represented by each category. For instance, the feature ‘Contract’ has 3 categories. ‘Month-to-month’ was encoded with the lowest value and is represented by the blue color. ‘One year’ is the mid value and is represented in purple. ‘Two year’ is the highest value and is represented in red. We can clearly see that the ‘month-to-month’ category pushes the prediction towards the positive value (churn), while the other contract types push the prediction in the opposite direction (no churn).
# <a id="5">References</a>
- https://www.analyticsvidhya.com/blog/2020/06/auc-roc-curve-machine-learning/
- https://shap.readthedocs.io/en/latest/example_notebooks/tabular_examples/tree_based_models/Catboost%20tutorial.html
### **Install ChEMBL client for getting the dataset**
#### **https://www.ebi.ac.uk/chembl/**
```
!pip install chembl_webresource_client
```
### **Import Libraries**
```
import pandas as pd
from chembl_webresource_client.new_client import new_client
```
### **Find Acetylcholinesterase Dataset**
#### **Search Target**
```
target = new_client.target
target_query = target.search ('acetylcholinesterase')
targets = pd.DataFrame.from_dict (target_query)
targets
```
#### **Fetch Bio-Activity data for the target**
```
selected_target = targets.target_chembl_id [0]
selected_target
activity = new_client.activity
res = activity.filter (target_chembl_id = selected_target).filter (standard_type = "IC50")
```
#### **A higher standard value means more of the drug is required for the same inhibition**
```
df = pd.DataFrame.from_dict (res)
df.head (3)
df.standard_type.unique ()
```
##### **Save the resulting Bio-Activity data to a CSV file**
```
import os
df.to_csv (os.path.join ('Datasets', 'Part-1_Bioactivity_Data.csv'), index = False)
```
### **Pre-Processing Data**
#### **Drop rows with missing Standard Value or Canonical Smiles data**
```
df2 = df [df.standard_value.notna ()]
df2 = df2 [df2.canonical_smiles.notna ()]
df2
```
#### **Label Compounds as active or inactive**
##### Compounds with an IC50 below 1000 nM are considered active, those above 10000 nM inactive, and those between 1000 nM and 10000 nM intermediate
##### 1. IC50 value of the drug indicates the toxicity of the drug to other disease causing organisms.
##### 2. IC50 is a quantitative measure that shows how much a particular inhibitory drug/substance/extract/fraction is needed to inhibit a biological component by 50%.
###### Above Definition taken from https://www.researchgate.net/post/What-is-the-significance-of-IC50-value-when-the-drug-is-exogenously-administered-to-an-animal-tissue
```
bioactivity_class = []
for i in df2.standard_value :
    if float (i) >= 10000 :
        bioactivity_class.append ("inactive")
    elif float (i) <= 1000 :
        bioactivity_class.append ("active")
    else :
        bioactivity_class.append ("intermediate")
print (len (bioactivity_class))
```
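The same thresholds can be packaged as a small helper and sanity-checked on a few toy values (the function name is mine, not from the notebook):

```python
def label_bioactivity(standard_value):
    # Same rule as the loop above: IC50 thresholds in nM
    v = float(standard_value)
    if v >= 10000:
        return "inactive"
    if v <= 1000:
        return "active"
    return "intermediate"

print([label_bioactivity(v) for v in (500, 5000, 20000)])
# ['active', 'intermediate', 'inactive']
```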
#### **Append Chembl ID, Canonical Smiles and Standard Value to a list**
##### Canonical Smiles :-
##### 1. Simplified Molecular-Input Line-Entry System
##### 2. It can represent a molecular compound in a single line
```
selection = ['molecule_chembl_id', 'canonical_smiles', 'standard_value']
df3 = df2 [selection]
print (len (df3))
df3
import numpy as np
#print (df3.values.shape)
#print (np.array (bioactivity_class).shape)
df4 = df3.values
df4
bioactivity_class = np.matrix (bioactivity_class).T
#bioactivity_class
columns = list (df3.columns)
columns.append ('bioactivity_class')
print (columns)
print (bioactivity_class.shape)
print (df4.shape)
#df3 = pd.concat ([df3, pd.Series (np.array (bioactivity_class))], axis = 1)
#print (len (df3))
#df3
df4
#df3 = df3.rename (columns = {0 : 'bioactivity_class'})
df_final = np.concatenate ((df4, bioactivity_class), axis = 1)
#df_final = pd.DataFrame (df_final, columns)
df_final
#df3.head (3)
#print (len (df3))
df_final = pd.DataFrame (df_final, columns = columns)
df_final
```
#### **Save Pre-Processed data to a CSV file**
```
df_final.to_csv (os.path.join ('Datasets', 'Part-1_Bioactivity_Preprocessed_Data.csv'), index = False)
!dir
```
```
%matplotlib inline
import sys, os
sys.path.append("../")
import numpy as np
import scipy as sp
import numpy.linalg as nla
import matplotlib as mpl
import matplotlib.pyplot as plt
from timeit import timeit
import ot
import ot.plot
from ot.datasets import make_1D_gauss as gauss
from drot.solver import drot, sinkhorn
from drot.proximal import *
import csv
%load_ext autoreload
%autoreload 2
```
# Optimal transport
```
def save(C, nrows, ncols, filename):
    assert C.flags['F_CONTIGUOUS']
    output_file = open(filename, 'wb')
    C.tofile(output_file)
    output_file.close()
def two_dimensional_gaussian_ot(m, n):
    d = 2
    mu_s = np.random.normal(0.0, 1.0, (d,)) # Gaussian mean
    A_s = np.random.rand(d, d)
    cov_s = np.dot(A_s, A_s.transpose()) # Gaussian covariance matrix
    mu_t = np.random.normal(5.0, 5.0, (d,))
    A_t = np.random.rand(d, d)
    cov_t = np.dot(A_t, A_t.transpose())
    xs = ot.datasets.make_2D_samples_gauss(m, mu_s, cov_s)
    xt = ot.datasets.make_2D_samples_gauss(n, mu_t, cov_t)
    p, q = np.ones((m,)) / m, np.ones((n,)) / n
    C = np.array(ot.dist(xs, xt), order='F')
    C /= C.max()
    return m, n, C, p, q
def multi_experiment(m, n, max_iters, accuracies, skregs, alpha=2.0, ntests=10):
    num_accuracies = accuracies.shape[0]
    num_algs = skregs.shape[0] + 1
    outs = np.zeros([num_algs, 1, num_accuracies, ntests])
    for test_idx in range(ntests):
        print("\n *** Experiment", test_idx+1, "of", ntests, "***")
        m, n, C, p, q = two_dimensional_gaussian_ot(m, n)
        x0 = np.array(np.outer(p, q), order = 'F')
        step = alpha / (m+n)
        C_ = C.copy()
        optval = ot.emd2(p, q, C_, numItermax=1_000_000)
        drout = drot(x0, C, p, q, max_iters=max_iters, step=step, compute_r_primal=True,
                     compute_r_dual=False, eps_abs=1e-4, eps_rel=0.0)
        skout = []
        for reg in skregs:
            skout.append(ot.sinkhorn(p, q, C_, reg, numItermax=max_iters, stopThr=7e-5))
        outs[0, 0, :, test_idx] = abs(np.sum(drout['sol']*C) - optval) / optval
        for sk_idx in range(skregs.shape[0]):
            outs[sk_idx+1, 0, :, test_idx] = abs(np.sum(skout[sk_idx]*C_) - optval) / optval
    file_name = 'Dims_' + str(m) + '_test_' + str(ntests)
    np.save('output/'+file_name + '.npy', outs)
    return file_name
def profile(dir, accuracies, labels, colors):
outs = np.load(dir)
(num_algs, num_objs_computed, num_accuracies, ntests) = outs.shape
performance_ratio = np.zeros((num_algs, num_accuracies))
for alg_idx in range(num_algs):
for acc_idx in range(num_accuracies):
performance_ratio[alg_idx, acc_idx] = np.sum((outs[alg_idx, 0, acc_idx, :] <= accuracies[acc_idx])) / ntests
fig = plt.figure()
for alg_idx in range(num_algs):
plt.plot(accuracies, performance_ratio[alg_idx, :], color=colors[alg_idx], label=labels[alg_idx], linewidth=2.5)
ylabel = r'Performance ratio'
plt.xlabel(r'Accuracy')
plt.ylabel(ylabel)
plt.xscale('log')
# plt.xlim(1e-4, 1e-1)
plt.legend()
return fig
m, n = 512, 512
max_iters = 1000
accuracies = np.logspace(-4.5, -1, num=15)
skregs = np.array([1e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1])
file_name = multi_experiment(m, n, max_iters, accuracies, skregs, ntests=100)
labels = ['DROT', 'SK1', 'SK2', 'SK3', 'SK4', 'SK5', 'SK6']
colors = ['C0', 'C1', 'C2', 'C3', 'C4', 'C5', 'C6']
dir = "output/" + file_name + '.npy'
fig = profile(dir, accuracies, labels, colors)
# fig.savefig('figures/'+ file_name + '_mean_1_f64.eps', format='eps')
```
## Single problem
```
m, n, C, p, q = two_dimensional_gaussian_ot(512, 512)
C_ = C.copy()
G0 = ot.emd(p, q, C_, numItermax=1_000_000)
Gsk = ot.sinkhorn(p, q, C_, 1e-3, numItermax=1000, stopThr=1e-5)
Gsb = ot.bregman.sinkhorn_stabilized(p, q, C_, 1e-3, numItermax=1000, stopThr=1e-5)
femd, fsk, fsb = np.sum(G0*C_), np.sum(Gsk*C_), np.sum(Gsb*C_)
x0 = np.array(np.outer(p, q), order = 'F')
max_iters = 500
step = .051 / (m+n)
drout = drot(x0, C, p, q, max_iters=max_iters, step=step, compute_r_primal=True,
compute_r_dual=True, eps_abs=1e-5, verbose=False, print_every=100)
xopt = drout["sol"]
skout, log = sinkhorn(p, q, C_, 1e-3, numItermax=200, stopThr=1e-15)
optval = femd
plt.figure(1, figsize=(10,8))
plt.plot(range(drout["num_iters"]), [ abs(f-optval) for f in drout['dual']], color='C0', label='DROT: Function gap', linewidth=2)
plt.plot(range(drout["num_iters"]), [r for r in drout['primal']], color='C0', marker='o', label='DROT: Residual', linewidth=2)
plt.plot([k for k in log['iter']], [ abs(f - optval) for f in log['fval']], color='C1', label='SK: Function gap', linewidth=2)
plt.plot([k for k in log['iter']], [ r for r in log['res']], color='C1', marker='o', label='SK: Residual', linewidth=2)
plt.xlabel("Iteration")
plt.ylabel("Suboptimality")
plt.yscale('log')
plt.legend()
```
### Sparsity of the approximate solutions
```
np.sum(xopt > 0) / (m*n), np.sum(G0 > 0) / (m*n), np.sum(Gsk > 0) / (m*n), np.sum(Gsb > 0) / (m*n)
fig, axs = plt.subplots(2, 2, figsize=(15, 10))
axs[0, 0].imshow(xopt, interpolation='nearest')
axs[0, 0].set_title('OT matrix DR')
axs[0, 1].imshow(G0, interpolation='nearest')
axs[0, 1].set_title('OT matrix G0')
axs[1, 0].imshow(Gsk, interpolation='nearest')
axs[1, 0].set_title('OT matrix Sinkhorn')
axs[1, 1].imshow(Gsb, interpolation='nearest')
axs[1, 1].set_title('OT matrix Sinkhorn stabilized')
```
| github_jupyter |
```
import itertools
import numpy as np
import pandas as pd
from scipy import stats
from ebnmpy.estimators import estimators
def sample_point_normal(n, pi0=.9, mu=0, sigma=2):
not_delta = stats.bernoulli.rvs(pi0, size=n) == 0
z = np.full(n, mu, dtype=float)
z[not_delta] = stats.norm.rvs(mu, sigma, size=not_delta.sum())
return z
def sample_point_t(n, pi0=.8, df=5, scale=1.5):
not_delta = stats.bernoulli.rvs(pi0, size=n) == 0
z = np.zeros(n)
z[not_delta] = stats.t.rvs(df=df, scale=scale, size=not_delta.sum())
return z
def sample_asymmetric_tophat(n, pi0=.5, a=-5, b=10):
not_delta = stats.bernoulli.rvs(pi0, size=n) == 0
z = np.zeros(n)
z[not_delta] = stats.uniform.rvs(a, b - a, size=not_delta.sum())
return z
def get_rmse(theta, theta_hat):
return np.sqrt(np.mean((theta_hat - theta) ** 2))
def get_clcov(theta, samples, intervals=(.05, .95)):
lower = np.quantile(samples, intervals[0], axis=0)
upper = np.quantile(samples, intervals[1], axis=0)
return np.mean((theta >= lower) & (theta <= upper))
```
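As a quick sanity check on the two metrics defined above (restated here so the cell is self-contained; the toy numbers are illustrative, not from the simulation): `get_rmse` should be zero for a perfect estimate, and `get_clcov` should report the fraction of true values falling inside the posterior interval.

```python
import numpy as np

# Self-contained restatement of the two metrics, checked on a toy example.
def get_rmse(theta, theta_hat):
    return np.sqrt(np.mean((theta_hat - theta) ** 2))

def get_clcov(theta, samples, intervals=(.05, .95)):
    lower = np.quantile(samples, intervals[0], axis=0)
    upper = np.quantile(samples, intervals[1], axis=0)
    return np.mean((theta >= lower) & (theta <= upper))

theta = np.array([0.0, 1.0, 2.0])
rmse_perfect = get_rmse(theta, theta)        # exact estimate -> 0.0
rmse_off = get_rmse(theta, theta + 1.0)      # constant offset of 1 -> 1.0

# Three "posterior samples" per coordinate; the middle coordinate
# deliberately lies outside its interval, so coverage should be 2/3.
samples = np.array([[-1.0, 5.0, 1.5],
                    [ 0.5, 6.0, 2.5],
                    [ 1.0, 7.0, 3.0]])
clcov = get_clcov(theta, samples)
print(rmse_perfect, rmse_off, clcov)
```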
Run simulations
```
np.random.seed(0)
s = 1
n = 1000
n_posterior_samples = 1000
n_simulations = 10
samplers = {
"Point-normal": sample_point_normal,
"Point-t": sample_point_t,
"Asymmetric tophat": sample_asymmetric_tophat,
}
results = []
for _ in range(n_simulations):
for sampler_name, sampler in samplers.items():
theta = sampler(n)
x = theta + stats.norm.rvs(size=n)
for cls_name, cls in estimators.items():
# run ebnm
est = cls(include_posterior_sampler=True).fit(x=x, s=s)
# sample from posterior
samples = est.sample(n_posterior_samples)
# compute metrics
loglik = est.log_likelihood_
rmse = get_rmse(theta, theta_hat=est.posterior_["mean"])
clcov = get_clcov(theta, samples)
results.append((sampler_name, cls.__name__, loglik, rmse, clcov))
```
Format table
```
df = pd.DataFrame(results, columns=("Distribution", "Class", "LogLik", "RMSE", "ClCov"))
columns = list(itertools.product(list(samplers), ("LogLik", "RMSE", "ClCov")))
df_mean = df.groupby(["Distribution", "Class"]).mean().unstack(0).swaplevel(0, 1, axis=1)[columns].loc[[i.__name__ for i in estimators.values()]]
df_mean.index.name = None
df_mean.columns.names = [None, None]
formatter = {i: "{:.1f}" if "LogLik" in i else "{:.3f}" for i in columns}
s = df_mean.style.format(formatter=formatter)
s = s.background_gradient(cmap="Reds_r", subset=columns[::3]).background_gradient(cmap="Reds", subset=columns[1::3]).background_gradient(cmap="Reds_r", subset=columns[2::3])
s = s.set_properties(**{'text-align': 'center'})
s = s.set_table_styles([dict(selector='th', props=[('text-align', 'center')])])
for i in (3, 6):
s = s.set_table_styles({
columns[i]: [{'selector': 'th', 'props': 'border-left: 1px solid black'},
{'selector': 'td', 'props': 'border-left: 1px solid #000000'}]
}, overwrite=False, axis=0)
```
Display table
```
s
```
| github_jupyter |
# Facial Keypoint Detection
This project will be all about defining and training a convolutional neural network to perform facial keypoint detection, and using computer vision techniques to transform images of faces. The first step in any challenge like this will be to load and visualize the data you'll be working with.
Let's take a look at some examples of images and corresponding facial keypoints.
<img src='images/key_pts_example.png' width=50% height=50%/>
Facial keypoints (also called facial landmarks) are the small magenta dots shown on each of the faces in the image above. In each training and test image, there is a single face and **68 keypoints, with coordinates (x, y), for that face**. These keypoints mark important areas of the face: the eyes, corners of the mouth, the nose, etc. These keypoints are relevant for a variety of tasks, such as face filters, emotion recognition, pose recognition, and so on. Here they are, numbered, and you can see that specific ranges of points match different portions of the face.
<img src='images/landmarks_numbered.jpg' width=30% height=30%/>
---
## Load and Visualize Data
The first step in working with any dataset is to become familiar with your data; you'll need to load in the images of faces and their keypoints and visualize them! This set of image data has been extracted from the [YouTube Faces Dataset](https://www.cs.tau.ac.il/~wolf/ytfaces/), which includes videos of people in YouTube videos. These videos have been fed through some processing steps and turned into sets of image frames containing one face and the associated keypoints.
#### Training and Testing Data
This facial keypoints dataset consists of 5770 color images. All of these images are separated into either a training or a test set of data.
* 3462 of these images are training images, for you to use as you create a model to predict keypoints.
* 2308 are test images, which will be used to test the accuracy of your model.
The information about the images and keypoints in this dataset are summarized in CSV files, which we can read in using `pandas`. Let's read the training CSV and get the annotations in an (N, 2) array where N is the number of keypoints and 2 is the dimension of the keypoint coordinates (x, y).
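The reshape step can be previewed on a toy row (4 hypothetical keypoints stand in for the real 68):

```python
import numpy as np

# A CSV row stores keypoints flattened as x0, y0, x1, y1, ...;
# reshape(-1, 2) recovers the (N, 2) layout used throughout this notebook.
# (Toy values; the real rows hold 68 keypoints = 136 numbers.)
flat_row = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
key_pts = flat_row.reshape(-1, 2)
print(key_pts.shape)   # (4, 2)
print(key_pts[0])      # first keypoint: [10. 20.]
```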
---
```
# import the required libraries
import glob
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import cv2
key_pts_frame = pd.read_csv('data/training_frames_keypoints.csv')
n = 0
image_name = key_pts_frame.iloc[n, 0]
key_pts = key_pts_frame.iloc[n, 1:].to_numpy()  # .as_matrix() was removed in modern pandas
key_pts = key_pts.astype('float').reshape(-1, 2)
print('Image name: ', image_name)
print('Landmarks shape: ', key_pts.shape)
print('First 4 key pts: {}'.format(key_pts[:4]))
# print out some stats about the data
print('Number of images: ', key_pts_frame.shape[0])
```
## Look at some images
Below is a function `show_keypoints` that takes in an image and keypoints and displays them. As you look at this data, **note that these images are not all of the same size**, and neither are the faces! To eventually train a neural network on these images, we'll need to standardize their shape.
```
def show_keypoints(image, key_pts):
"""Show image with keypoints"""
plt.imshow(image)
plt.scatter(key_pts[:, 0], key_pts[:, 1], s=20, marker='.', c='m')
# Display a few different types of images by changing the index n
# select an image by index in our data frame
n = 0
image_name = key_pts_frame.iloc[n, 0]
key_pts = key_pts_frame.iloc[n, 1:].to_numpy()
key_pts = key_pts.astype('float').reshape(-1, 2)
plt.figure(figsize=(5, 5))
show_keypoints(mpimg.imread(os.path.join('data/training/', image_name)), key_pts)
plt.show()
```
## Dataset class and Transformations
To prepare our data for training, we'll be using PyTorch's Dataset class. Much of this code is a modified version of what can be found in the [PyTorch data loading tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html).
#### Dataset class
``torch.utils.data.Dataset`` is an abstract class representing a
dataset. This class will allow us to load batches of image/keypoint data, and uniformly apply transformations to our data, such as rescaling and normalizing images for training a neural network.
Your custom dataset should inherit ``Dataset`` and override the following
methods:
- ``__len__`` so that ``len(dataset)`` returns the size of the dataset.
- ``__getitem__`` to support the indexing such that ``dataset[i]`` can
be used to get the i-th sample of image/keypoint data.
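In plain Python, the protocol these two methods implement looks like this (a hypothetical toy dataset, no PyTorch required):

```python
# Minimal illustration of the Dataset protocol: any object with __len__ and
# __getitem__ supports len(...) and indexing. (Toy data, not the face dataset.)
class ToyDataset:
    def __init__(self, items):
        self.items = items

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        return {'image': self.items[idx], 'keypoints': None}

ds = ToyDataset(['img_a', 'img_b', 'img_c'])
print(len(ds))         # 3
print(ds[1]['image'])  # img_b
```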
Let's create a dataset class for our face keypoints dataset. We will
read the CSV file in ``__init__`` but leave the reading of images to
``__getitem__``. This is memory efficient because all the images are not
stored in the memory at once but read as required.
A sample of our dataset will be a dictionary
``{'image': image, 'keypoints': key_pts}``. Our dataset will take an
optional argument ``transform`` so that any required processing can be
applied on the sample. We will see the usefulness of ``transform`` in the
next section.
```
from torch.utils.data import Dataset, DataLoader
class FacialKeypointsDataset(Dataset):
"""Face Landmarks dataset."""
def __init__(self, csv_file, root_dir, transform=None):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.key_pts_frame = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.key_pts_frame)
def __getitem__(self, idx):
image_name = os.path.join(self.root_dir,
self.key_pts_frame.iloc[idx, 0])
image = mpimg.imread(image_name)
# if image has an alpha color channel, get rid of it
if(image.shape[2] == 4):
image = image[:,:,0:3]
key_pts = self.key_pts_frame.iloc[idx, 1:].to_numpy()
key_pts = key_pts.astype('float').reshape(-1, 2)
sample = {'image': image, 'keypoints': key_pts}
if self.transform:
sample = self.transform(sample)
return sample
```
Now that we've defined this class, let's instantiate the dataset and display some images.
```
# Construct the dataset
face_dataset = FacialKeypointsDataset(csv_file='data/training_frames_keypoints.csv',
root_dir='data/training/')
# print some stats about the dataset
print('Length of dataset: ', len(face_dataset))
# Display a few of the images from the dataset
num_to_display = 3
for i in range(num_to_display):
# define the size of images
fig = plt.figure(figsize=(20,10))
# randomly select a sample
rand_i = np.random.randint(0, len(face_dataset))
sample = face_dataset[rand_i]
# print the shape of the image and keypoints
print(i, sample['image'].shape, sample['keypoints'].shape)
ax = plt.subplot(1, num_to_display, i + 1)
ax.set_title('Sample #{}'.format(i))
# Using the same display function, defined earlier
show_keypoints(sample['image'], sample['keypoints'])
```
## Transforms
Now, the images above are not all the same size, and neural networks often expect standardized images: a fixed size, a normalized range for color values and keypoint coordinates, and (for PyTorch) conversion from numpy arrays to Tensors.
Therefore, we will need to write some pre-processing code.
Let's create four transforms:
- ``Normalize``: to convert a color image to grayscale values with a range of [0,1] and normalize the keypoints to be in a range of about [-1, 1]
- ``Rescale``: to rescale an image to a desired size.
- ``RandomCrop``: to crop an image randomly.
- ``ToTensor``: to convert numpy images to torch images.
We will write them as callable classes instead of simple functions so
that parameters of the transform need not be passed every time it's
called. For this, we just need to implement the ``__call__`` method and
(if we require parameters to be passed in) the ``__init__`` method.
We can then use a transform like this:
tx = Transform(params)
transformed_sample = tx(sample)
Observe below how these transforms are generally applied to both the image and its keypoints.
```
import torch
from torchvision import transforms, utils
# transforms
class Normalize(object):
"""Convert a color image to grayscale and normalize the color range to [0,1]."""
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
image_copy = np.copy(image)
key_pts_copy = np.copy(key_pts)
# convert image to grayscale
image_copy = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# scale color range from [0, 255] to [0, 1]
image_copy= image_copy/255.0
# scale keypoints to be centered around 0 with a range of [-1, 1]
# mean = 100, std = 50, so pts should be (pts - 100)/50
key_pts_copy = (key_pts_copy - 100)/50.0
return {'image': image_copy, 'keypoints': key_pts_copy}
class Rescale(object):
"""Rescale the image in a sample to a given size.
Args:
output_size (tuple or int): Desired output size. If tuple, output is
matched to output_size. If int, smaller of image edges is matched
to output_size keeping aspect ratio the same.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
self.output_size = output_size
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
h, w = image.shape[:2]
if isinstance(self.output_size, int):
if h > w:
new_h, new_w = self.output_size * h / w, self.output_size
else:
new_h, new_w = self.output_size, self.output_size * w / h
else:
new_h, new_w = self.output_size
new_h, new_w = int(new_h), int(new_w)
img = cv2.resize(image, (new_w, new_h))
# scale the pts, too
key_pts = key_pts * [new_w / w, new_h / h]
return {'image': img, 'keypoints': key_pts}
class RandomCrop(object):
"""Crop randomly the image in a sample.
Args:
output_size (tuple or int): Desired output size. If int, square crop
is made.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
if isinstance(output_size, int):
self.output_size = (output_size, output_size)
else:
assert len(output_size) == 2
self.output_size = output_size
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
h, w = image.shape[:2]
new_h, new_w = self.output_size
top = np.random.randint(0, h - new_h)
left = np.random.randint(0, w - new_w)
image = image[top: top + new_h,
left: left + new_w]
key_pts = key_pts - [left, top]
return {'image': image, 'keypoints': key_pts}
class ToTensor(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
# if image has no grayscale color channel, add one
if(len(image.shape) == 2):
# add that third color dim
image = image.reshape(image.shape[0], image.shape[1], 1)
# swap color axis because
# numpy image: H x W x C
# torch image: C X H X W
image = image.transpose((2, 0, 1))
return {'image': torch.from_numpy(image),
'keypoints': torch.from_numpy(key_pts)}
```
## Test out the transforms
Let's test these transforms out to make sure they behave as expected. As you look at each transform, note that, in this case, **order does matter**. For example, you cannot crop an image to a size larger than the original image (and the original images vary in size!), but if you first rescale the original image, you can then crop it to any size smaller than the rescaled size.
```
# test out some of these transforms
rescale = Rescale(100)
crop = RandomCrop(50)
composed = transforms.Compose([Rescale(250),
RandomCrop(224)])
# apply the transforms to a sample image
test_num = 500
sample = face_dataset[test_num]
fig = plt.figure()
for i, tx in enumerate([rescale, crop, composed]):
transformed_sample = tx(sample)
ax = plt.subplot(1, 3, i + 1)
plt.tight_layout()
ax.set_title(type(tx).__name__)
show_keypoints(transformed_sample['image'], transformed_sample['keypoints'])
plt.show()
```
## Create the transformed dataset
Apply the transforms in order to get grayscale images of the same shape. Verify that your transform works by printing out the shape of the resulting data (printing out a few examples should show you a consistent tensor size).
```
# define the data transform
# order matters! i.e. rescaling should come before a smaller crop
data_transform = transforms.Compose([Rescale(250),
RandomCrop(224),
Normalize(),
ToTensor()])
# create the transformed dataset
transformed_dataset = FacialKeypointsDataset(csv_file='data/training_frames_keypoints.csv',
root_dir='data/training/',
transform=data_transform)
# print some stats about the transformed data
print('Number of images: ', len(transformed_dataset))
# make sure the sample tensors are the expected size
for i in range(5):
sample = transformed_dataset[i]
print(i, sample['image'].size(), sample['keypoints'].size())
```
## Data Iteration and Batching
Right now, we are iterating over this data using a ``for`` loop, but we are missing out on a lot of PyTorch's dataset capabilities, specifically the abilities to:
- Batch the data
- Shuffle the data
- Load the data in parallel using ``multiprocessing`` workers.
``torch.utils.data.DataLoader`` is an iterator which provides all these
features, and we'll see this in use in the *next* notebook, Notebook 2, when we load data in batches to train a neural network!
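Conceptually, what `DataLoader` adds can be sketched in a few lines of plain Python (a simplified, single-process stand-in for intuition only; the real `DataLoader` also collates samples into tensors and can use worker processes):

```python
import random

# A simplified stand-in for DataLoader: shuffle indices, then yield
# fixed-size batches of samples. (Illustrative only.)
def simple_loader(dataset, batch_size, shuffle=True, seed=0):
    indices = list(range(len(dataset)))
    if shuffle:
        random.Random(seed).shuffle(indices)
    for start in range(0, len(indices), batch_size):
        yield [dataset[i] for i in indices[start:start + batch_size]]

data = list(range(10))
batches = list(simple_loader(data, batch_size=4))
print([len(b) for b in batches])  # [4, 4, 2]
```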
---
| github_jupyter |
```
import os
import sys
module_path = os.path.abspath(os.path.join('../src'))
if module_path not in sys.path:
sys.path.append(module_path)
from prefix_span import PrefixSpan
from js_distance import JS
from sequence_generator import SequenceGenerator
```
# Descriptive Database
The table `data` contains the following content:
| column | content explanation |
|:----------------: | :----------------------------------------------------------: |
| item_id | edited item page ID |
| item_name | respective item page name |
| label | English label of the item page |
| category | classified content category based on label and description |
| user_id | editor ID |
| user_name | editor name |
| user_group | editor's user group and their corresponding user rights |
| user_editcount | rough number of edits and edit-like actions the user has performed |
| user_registration | editor registration timestamp |
| rev_id | revision(edit) ID |
| rev_timestamp | revision timestamp |
| comment | original comment information for this edit |
| edit_summary | comment information simplified with regular expression |
| edit_type | schematized and classified edit summary for ease of use |
| paraphrase | paraphrase of edit summary according to Wikibase API |
| prediction | quality prediction for this revision, chosen as the class with the highest probability |
|itemquality_A, itemquality_B, itemquality_C, itemquality_D, itemquality_E | concrete quality level probability distribution of this revision |
| js_distance | Jensen-Shannon divergence value based on given quality distribution |
# Sequence Analysis
## Generate Sequence Database
An event is a list of consecutive activities contributed by the same editor (a list of strings).
A sequence is a list of events that occurred on the same article (a list of events).
A sequence database is a list of sequences.
Thus, a sequence database is a list of lists of lists of strings.
A sequence database ready to be mined is determined by setting up the js-distance constraint.
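Concretely, the nesting described above looks like this (the activity names below are made up for illustration):

```python
# A sequence database: a list of sequences; each sequence is a list of
# events (one article); each event is a list of activity strings by one
# editor. All names here are invented for illustration.
event_1 = ["create_claim", "add_reference"]   # editor A on article 1
event_2 = ["edit_label"]                      # editor B on article 1
sequence_article_1 = [event_1, event_2]

sequence_article_2 = [["add_sitelink", "edit_description"]]

seq_db_example = [sequence_article_1, sequence_article_2]

print(len(seq_db_example), "sequences")
print(seq_db_example[0][0][0])  # innermost level is an activity string
```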
```
seq = SequenceGenerator(csvfile='../db/data.csv', jsThreshold=0.8)
seq_db = seq.generate_sequence()
for sequence in seq_db:
print(sequence)
```
## Mine Sequential Patterns
The sequential patterns within the sequence database are discovered with the PrefixSpan algorithm by setting a minimum support threshold.
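The minimum-support notion can be sketched without the full PrefixSpan machinery: a pattern's support is the fraction of sequences that contain it. A minimal single-item version on toy data (PrefixSpan itself grows multi-item patterns recursively; this only illustrates the support count):

```python
# Support of single-item patterns: the fraction of sequences in which the
# item appears in at least one event. (Toy data, invented for illustration.)
def single_item_support(seq_db):
    n = len(seq_db)
    counts = {}
    for sequence in seq_db:
        items = {item for event in sequence for item in event}
        for item in items:
            counts[item] = counts.get(item, 0) + 1
    return {item: c / n for item, c in counts.items()}

toy_db = [[["a", "b"], ["c"]],
          [["a"], ["a", "c"]],
          [["b"]]]
support = single_item_support(toy_db)
print(support)  # each of 'a', 'b', 'c' appears in 2 of 3 sequences
```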
```
prex = PrefixSpan()
result = prex.prefix_span(dataset=seq_db, minSupport=0.1)
df = prex.display(result)
print(df)
```
## Representative Patterns
The following metrics are used to mine patterns from different perspectives; this can be achieved by adjusting the jsThreshold and minSupport constraints:
* high quality + high frequency
* high quality + middle frequency
* high quality + low frequency
____________________________________
* middle quality + high frequency
* middle quality + middle frequency
* middle quality + low frequency
____________________________________
* low quality + high frequency
* low quality + middle frequency
* low quality + low frequency
____________________________________
* no quality constraint + high frequency
* no quality constraint + middle frequency
* no quality constraint + low frequency
____________________________________
| Grading | Range |
| :-------------| :-------------|
| Q_high | \[0.7, 1) |
| Q_middle | \[0.3, 0.7) |
| Q_low | (0, 0.3) |
| F_high | \[0.2, 1) |
| F_middle | \[0.05, 0.2) |
| F_low | (0, 0.05) |
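The grading table above maps directly to a small helper (a sketch; the band edges are copied verbatim from the table):

```python
# Helpers mirroring the grading table above (thresholds taken verbatim).
def quality_grade(js):
    if 0.7 <= js < 1:
        return "Q_high"
    if 0.3 <= js < 0.7:
        return "Q_middle"
    if 0 < js < 0.3:
        return "Q_low"
    return None

def frequency_grade(support):
    if 0.2 <= support < 1:
        return "F_high"
    if 0.05 <= support < 0.2:
        return "F_middle"
    if 0 < support < 0.05:
        return "F_low"
    return None

print(quality_grade(0.75), frequency_grade(0.25))  # Q_high F_high
print(quality_grade(0.35), frequency_grade(0.1))   # Q_middle F_middle
```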
### High Quality Constraint
```
seq_high = SequenceGenerator(csvfile='../db/data.csv', jsThreshold=0.75)
db = seq_high.generate_sequence()
highF = prex.prefix_span(dataset=db, minSupport=0.25)
midF = prex.prefix_span(dataset=db, minSupport=0.1)
lowF = prex.prefix_span(dataset=db, minSupport=0.01)
prex.display(highF)
prex.display(midF)
prex.display(lowF)
```
### Middle Quality Constraint
```
seq_mid = SequenceGenerator(csvfile='../db/data.csv', jsThreshold=0.35)
db = seq_mid.generate_sequence()
highF = prex.prefix_span(dataset=db, minSupport=0.25)
midF = prex.prefix_span(dataset=db, minSupport=0.1)
lowF = prex.prefix_span(dataset=db, minSupport=0.01)
prex.display(highF)
prex.display(midF)
prex.display(lowF)
```
### Low Quality Constraint
```
seq_low = SequenceGenerator(csvfile='../db/data.csv', jsThreshold=0.05)
db = seq_low.generate_sequence()
highF = prex.prefix_span(dataset=db, minSupport=0.25)
midF = prex.prefix_span(dataset=db, minSupport=0.1)
lowF = prex.prefix_span(dataset=db, minSupport=0.01)
prex.display(highF)
prex.display(midF)
prex.display(lowF)
```
| github_jupyter |
# <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS109B Data Science 2: Advanced Topics in Data Science
## Lab 4 - Bayesian Analysis
**Harvard University**<br>
**Spring 2020**<br>
**Instructors:** Mark Glickman, Pavlos Protopapas, and Chris Tanner<br>
**Lab Instructors:** Chris Tanner and Eleni Angelaki Kaxiras<br>
**Content:** Eleni Angelaki Kaxiras
---
```
## RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text
HTML(styles)
import pymc3 as pm
from pymc3 import summary
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import pandas as pd
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
print('Running on PyMC3 v{}'.format(pm.__version__))
%%javascript
IPython.OutputArea.auto_scroll_threshold = 20000;
```
<a id=top></a>
## Learning Objectives
By the end of this lab, you should be able to:
* Understand how probability distributions work.
* Apply Bayes Rule in calculating probabilities.
* Understand how to apply Bayesian analysis using PyMC3
* Avoid getting fired when talking to your Bayesian employer.
**This lab corresponds to Lectures 6, 7, and 8, and maps to Homework 3.**
## Table of Contents
1. The Bayesian Way of Thinking or Is this a Fair Coin?
2. [Intro to `pyMC3`](#pymc3).
3. [Bayesian Linear Regression](#blr).
4. [Try this at Home: Example on Mining Disasters](#no4).
## 1. The Bayesian way of Thinking
```
Here is my state of knowledge about the situation. Here is some data, I am now going to revise my state of knowledge.
```
<div class="exercise" style="background-color:#b3e6ff"><b>Table Exercise</b>: Discuss the statement above with your table mates and make sure everyone understands what it means and what constitutes Bayesian way of thinking. Finally, count the Bayesians among you. </div>
### A. Bayes Rule
\begin{equation}
\label{eq:bayes}
P(A|\textbf{B}) = \frac{P(\textbf{B} |A) P(A) }{P(\textbf{B})}
\end{equation}
$P(A|\textbf{B})$ is the **posterior** distribution: prob(hypothesis | data)
$P(\textbf{B} |A)$ is the **likelihood** function: how probable is my data **B** for different values of the parameters
$P(A)$ is the **prior**; this captures our belief about the hypothesis before observing the data
$P(\textbf{B})$ is the marginal probability of observing the data (sometimes called the marginal likelihood, or evidence)
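A concrete numeric example (my own toy numbers, not part of the lab): suppose a coin is either fair or biased toward heads with probability 0.9, each hypothesis equally likely a priori, and we observe 3 heads in 3 flips.

```python
# Bayes rule with concrete numbers (illustrative values):
# hypothesis A = "coin is fair", data B = "3 heads in 3 flips".
p_fair = 0.5                 # prior P(A)
p_biased = 0.5               # prior for the alternative (heads prob 0.9)
lik_fair = 0.5 ** 3          # P(B | fair)
lik_biased = 0.9 ** 3        # P(B | biased)

evidence = lik_fair * p_fair + lik_biased * p_biased  # P(B)
posterior_fair = lik_fair * p_fair / evidence         # P(A | B)
print(round(posterior_fair, 3))  # 0.146 -- three heads shift belief toward "biased"
```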
<BR>
<div class="exercise" style="background-color:#b3e6ff"><b>Table Exercise</b>: Solve the Monty Hall Paradox using Bayes Rule.</div>

You are invited to play a game. There are 3 doors behind **one** of which are the keys to a brand new red Tesla. There is a goat behind each of the other two.
You are asked to pick one door, and let's say you pick **Door1**. The host knows where the keys are. Of the two remaining closed doors, he will always open the door that has a goat behind it. He'll say "I will do you a favor and open **Door2**". So he opens Door2 inside which there is, of course, a goat. He now asks you, do you want to open the initial Door you chose or change to **Door3**? Generally, in this game, when you are presented with this choice should you swap the doors?
**Initial Steps:**
- Start by defining the `events` of this probabilities game. One definition is:
- $A_i$: car is behind door $i$
- $B_i$ host opens door $i$
$i\in[1,2,3]$
- In more math terms, the question is: is the probability that the prize is behind **Door 1** higher than the probability that the prize is behind **Door2**, given that an event **has occurred**?
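Before working through the algebra, a Monte Carlo simulation (a sanity check only; the exercise still asks for the Bayes-rule derivation) suggests the answer:

```python
import random

# Simulate the Monty Hall game and compare win rates for "stay" vs "swap".
rng = random.Random(109)
trials = 100_000
stay_wins = swap_wins = 0
for _ in range(trials):
    car = rng.randrange(3)
    pick = rng.randrange(3)
    # Host opens a goat door that is neither the pick nor the car.
    host = next(d for d in range(3) if d != pick and d != car)
    swap = next(d for d in range(3) if d != pick and d != host)
    stay_wins += (pick == car)
    swap_wins += (swap == car)

print(stay_wins / trials)  # close to 1/3
print(swap_wins / trials)  # close to 2/3 -- swapping is better
```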
### B. Bayes Rule written with Probability Distributions
We have data that we believe come from an underlying distribution of unknown parameters. If we find those parameters, we know everything about the process that generated this data and we can make inferences (create new data).
\begin{equation}
\label{eq:bayes}
P(\theta|\textbf{D}) = \frac{P(\textbf{D} |\theta) P(\theta) }{P(\textbf{D})}
\end{equation}
#### But what is $\theta \;$?
$\theta$ is an unknown yet fixed set of parameters. In Bayesian inference we express our belief about what $\theta$ might be and instead of trying to guess $\theta$ exactly, we look for its **probability distribution**. What that means is that we are looking for the **parameters** of that distribution. For example, for a Poisson distribution our $\theta$ is only $\lambda$. In a normal distribution, our $\theta$ is often just $\mu$ and $\sigma$.
### C. A review of Common Probability Distributions
#### Discrete Distributions
The random variable has a **probability mass function (pmf)** which measures the probability that our random variable will take a specific value $y$, denoted $P(Y=y)$.
- **Bernoulli** (binary outcome, success has probability $\theta$, $one$ trial):
$
P(Y=k) = \theta^k(1-\theta)^{1-k}
$
<HR>
- **Binomial** (binary outcome, success has probability $\theta$, $n$ trials):
\begin{equation}
P(Y=k) = {{n}\choose{k}} \cdot \theta^k(1-\theta)^{n-k}
\end{equation}
*Note*: Binomial(1,$p$) = Bernoulli($p$)
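This identity is easy to verify with `scipy.stats` (the value of $p$ is arbitrary):

```python
import numpy as np
from scipy import stats

# Binomial with n = 1 trial reduces to Bernoulli: same pmf at k = 0 and k = 1.
p = 0.3
for k in (0, 1):
    b = stats.binom.pmf(k, n=1, p=p)
    be = stats.bernoulli.pmf(k, p)
    print(k, b, be)
    assert np.isclose(b, be)
```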
<HR>
- **Negative Binomial**
<HR>
- **Poisson** (counts independent events occurring at a rate)
\begin{equation}
P\left( Y=y|\lambda \right) = \frac{{e^{ - \lambda } \lambda ^y }}{{y!}}
\end{equation}
y = 0,1,2,...
<HR>
- **Discrete Uniform**
<HR>
- **Categorical, or Multinoulli** (random variables can take any of K possible categories, each having its own probability; this is a generalization of the Bernoulli distribution for a discrete variable with more than two possible outcomes, such as the roll of a die)
<HR>
- **Dirichlet-multinomial** (a multivariate generalization of the beta-binomial distribution)
#### Continuous Distributions
The random variable has a **probability density function (pdf)**.
- **Uniform** (variable equally likely to be near each value in interval $(a,b)$)
\begin{equation}
P(X = x) = \frac{1}{b - a}
\end{equation}
anywhere within the interval $(a, b)$, and zero elsewhere.
<HR>
- **Normal** (a.k.a. Gaussian)
\begin{equation}
X \sim \mathcal{N}(\mu,\,\sigma^{2})
\end{equation}
A Normal distribution can be parameterized either in terms of the precision $\tau$ or the variance $\sigma^{2}$. The link between the two is given by
\begin{equation}
\tau = \frac{1}{\sigma^{2}}
\end{equation}
- Mean $\mu$
- Variance $\frac{1}{\tau}$ or $\sigma^{2}$
- Parameters: `mu: float`, `sigma: float` or `tau: float`
<HR>
- **Beta** (variable $\theta$ taking on values in the interval $[0,1]$, parametrized by two positive parameters, $\alpha$ and $\beta$, that control the shape of the distribution)
*Note:* Beta is a good distribution to use for priors (beliefs) because its range is $[0,1]$, which is the natural range for a probability, and because we can model a wide range of functions by changing the $\alpha$ and $\beta$ parameters.
\begin{equation}
\label{eq:beta}
P(\theta) = \frac{1}{B(\alpha, \beta)} {\theta}^{\alpha - 1} (1 - \theta)^{\beta - 1} \propto {\theta}^{\alpha - 1} (1 - \theta)^{\beta - 1}
\end{equation}
where the normalisation constant, $B$, is a beta function of $\alpha$ and $\beta$,
\begin{equation}
B(\alpha, \beta) = \int_{t=0}^1 t^{\alpha - 1} (1 - t)^{\beta - 1} dt.
\end{equation}
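A numerical sanity check that the closed-form constant matches the integral (the values of $\alpha$ and $\beta$ are arbitrary):

```python
import numpy as np
from scipy import special, integrate

# Check the normalizing constant: B(alpha, beta) equals the integral of
# t^(alpha-1) (1-t)^(beta-1) over [0, 1]. Here B(2, 5) = 1/30.
alpha, beta = 2.0, 5.0
closed_form = special.beta(alpha, beta)
numeric, _ = integrate.quad(lambda t: t**(alpha - 1) * (1 - t)**(beta - 1), 0, 1)
print(closed_form, numeric)
assert np.isclose(closed_form, numeric)
```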
<HR>
- **Exponential**
<HR>
- **Gamma**
#### Code Resources:
- Statistical Distributions in numpy/scipy: [scipy.stats](https://docs.scipy.org/doc/scipy/reference/stats.html)
- Statistical Distributions in pyMC3: [distributions in PyMC3](https://docs.pymc.io/api/distributions.html) (we will see those below).
<div class="discussion"><b>Exercise: Plot a Discrete variable</b></div>
Change the value of $\mu$ in the Poisson PMF and see how the plot changes. Remember that the y-axis in a discrete probability distribution shows the probability of the random variable having a specific value in the x-axis.
\begin{equation}
P\left( X=k \right) = \frac{{e^{ - \mu } \mu ^k }}{{k!}}
\end{equation}
**stats.poisson.pmf(x, mu)** $\mu$(mu) is our $\theta$ in this case.
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

plt.style.use('seaborn-darkgrid')
x = np.arange(0, 30)
for m in [0.5, 3, 8]:
    pmf = stats.poisson.pmf(x, m)
    plt.plot(x, pmf, 'o', alpha=0.5, label=r'$\mu$ = {}'.format(m))
plt.xlabel('random variable', fontsize=12)
plt.ylabel('probability', fontsize=12)
plt.legend(loc=1)
plt.ylim(bottom=-0.1)
plt.show()

# same for binomial
plt.style.use('seaborn-darkgrid')
x = np.arange(0, 22)
ns = [10, 17]
ps = [0.5, 0.7]
for n, p in zip(ns, ps):
    pmf = stats.binom.pmf(x, n, p)
    plt.plot(x, pmf, 'o', alpha=0.5, label='n = {}, p = {}'.format(n, p))
plt.xlabel('x', fontsize=14)
plt.ylabel('f(x)', fontsize=14)
plt.legend(loc=1)
plt.show()

# discrete uniform
plt.style.use('seaborn-darkgrid')
ls = [0]
us = [3]  # watch out, this number can only be an integer!
for l, u in zip(ls, us):
    x = np.arange(l, u + 1)
    pmf = [1.0 / (u - l + 1)] * len(x)
    plt.plot(x, pmf, '-o', label='lower = {}, upper = {}'.format(l, u))
plt.xlabel('x', fontsize=12)
plt.ylabel('probability P(x)', fontsize=12)
plt.legend(loc=1)
plt.show()
```
<div class="discussion"><b>Exercise: Plot a continuous variable</b></div>
Change the value of $\mu$ in the Uniform PDF and see how the plot changes.
Remember that the y-axis in a continuous probability distribution does not show the actual probability of the random variable having a specific value on the x-axis, because that probability is zero! Instead, to see the probability that the variable falls within a small interval, we look at the integral under the curve of the PDF.
The uniform is often used as a noninformative prior.
```
Uniform - numpy.random.uniform(low=0.0, high=1.0, size=None)
```
`low` ($a$) and `high` ($b$) are our parameters; `size` is how many draws to perform.
Our $\theta$ is basically the combination of the parameters $a$ and $b$. We can also call it
\begin{equation}
\mu = (a+b)/2
\end{equation}
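A small sketch (not from the original notebook, using arbitrary bounds) confirming that the sample mean of uniform draws approaches $(a+b)/2$:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 6.0
samples = rng.uniform(a, b, size=100_000)
# with many draws the sample mean should be close to (a + b) / 2 = 4
sample_mean = samples.mean()
```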
```
from scipy.stats import uniform

r = uniform.rvs(size=1000)
plt.plot(r, uniform.pdf(r), 'r-', lw=5, alpha=0.6, label='uniform pdf')
plt.hist(r, density=True, histtype='stepfilled', alpha=0.2)
plt.ylabel(r'probability density')
plt.xlabel('random variable')
plt.legend(loc='best', frameon=False)
plt.show()

from scipy.stats import beta

alphas = [0.5, 1.5, 3.0]
betas = [0.5, 1.5, 3.0]
x = np.linspace(0, 1, 1000)
colors = ['red', 'green', 'blue']
fig, ax = plt.subplots(figsize=(8, 5))
for a, b, color in zip(alphas, betas, colors):
    dist = beta(a, b)
    plt.plot(x, dist.pdf(x), c=color, label=f'a={a}, b={b}')
ax.set_ylim(0, 3)
ax.set_xlabel(r'$\theta$')
ax.set_ylabel(r'$p(\theta|\alpha,\beta)$')
ax.set_title('Beta Distribution')
ax.legend(loc='best')
plt.show()

plt.style.use('seaborn-darkgrid')
x = np.linspace(-5, 5, 1000)
mus = [0., 0., 0., -2.]
sigmas = [0.4, 1., 2., 0.4]
for mu, sigma in zip(mus, sigmas):
    pdf = stats.norm.pdf(x, mu, sigma)
    plt.plot(x, pdf, label=r'$\mu$ = ' + f'{mu}, ' + r'$\sigma$ = ' + f'{sigma}')
plt.xlabel('random variable', fontsize=12)
plt.ylabel('probability density', fontsize=12)
plt.legend(loc=1)
plt.show()

plt.style.use('seaborn-darkgrid')
x = np.linspace(-5, 5, 1000)
mus = [0., 0., 0., -2.]        # loc (lower bound of the uniform)
sigmas = [0.4, 1., 2., 0.4]    # scale (width of the uniform)
for mu, sigma in zip(mus, sigmas):
    plt.plot(x, uniform.pdf(x, mu, sigma), lw=5, alpha=0.4,
             label=r'$\mu$ = ' + f'{mu}, ' + r'$\sigma$ = ' + f'{sigma}')
plt.xlabel('random variable', fontsize=12)
plt.ylabel('probability density', fontsize=12)
plt.legend(loc=1)
plt.show()
```
### D. Is this a Fair Coin?
We do not want to promote gambling but let's say you visit the casino in **Monte Carlo**. You want to test your theory that casinos are dubious places where coins have been manipulated to have a larger probability for tails. So you will try to estimate how fair a coin is based on 100 flips. <BR>
You begin by flipping the coin. You get either Heads ($H$) or Tails ($T$) as our observed data and want to see if your posterior probabilities change as you obtain more data, that is, more coin flips. A nice way to visualize this is to plot the posterior probabilities as we observe more flips (data).
We will be using Bayes rule. $\textbf{D}$ is our data.
\begin{equation}
\label{eq:bayes}
P(\theta|\textbf{D}) = \frac{P(\textbf{D} |\theta) P(\theta) }{P(\textbf{D})}
\end{equation}
In the case of a coin toss when we observe $k$ heads in $n$ tosses:
\begin{equation}
\label{eq:beta_posterior}
P(\theta|\textbf{k}) = Beta(\alpha + \textbf{k}, \beta + n - \textbf{k})
\end{equation}
we can say that $\alpha$ and $\beta$ play the roles of a "prior number of heads" and "prior number of tails".
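The conjugate update above can be sketched directly with `scipy.stats` (illustrative values; $\alpha = \beta = 1$ is a uniform prior):

```python
from scipy import stats

alpha_prior, beta_prior = 1, 1          # uniform Beta(1, 1) prior
k, n = 7, 10                            # observed 7 heads in 10 tosses

# posterior is Beta(alpha + k, beta + n - k) = Beta(8, 4)
posterior = stats.beta(alpha_prior + k, beta_prior + n - k)
post_mean = posterior.mean()            # (alpha + k) / (alpha + beta + n)
```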
```
# play with the priors - here we manually set them but we could be sampling from a separate Beta
trials = np.array([0, 1, 3, 5, 10, 15, 20, 100, 200, 300])
heads = np.array([0, 1, 2, 4, 8, 10, 10, 50, 180, 150])
x = np.linspace(0, 1, 100)

# for simplicity we set a,b=1
plt.figure(figsize=(10, 8))
for k, N in enumerate(trials):
    sx = plt.subplot(len(trials) // 2, 2, k + 1)
    posterior = stats.beta.pdf(x, 1 + heads[k], 1 + trials[k] - heads[k])
    plt.plot(x, posterior, alpha=0.5, label=f'{trials[k]} tosses\n {heads[k]} heads')
    plt.fill_between(x, 0, posterior, color="#348ABD", alpha=0.4)
    plt.legend(loc='upper left', fontsize=10)
plt.autoscale(tight=True)
plt.suptitle("Posterior probabilities for coin flips", fontsize=15);
plt.tight_layout()
plt.subplots_adjust(top=0.88)
```
<a id=pymc3></a> [Top](#top)
## 2. Introduction to `pyMC3`
PyMC3 is a Python library for Bayesian analysis: more specifically, for data creation, model definition, model fitting, and posterior analysis. It uses the concept of a `model`, which contains assigned parametric statistical distributions for the unknown quantities in the model. Within models we define random variables and their distributions. A distribution requires at least a `name` argument, and other `parameters` that define it. You may also use the `logp()` method in the model to build the model log-likelihood function. We define and fit the model.
PyMC3 includes a comprehensive set of pre-defined statistical distributions that can be used as model building blocks. Although they are not meant to be used outside of a `model`, you can invoke them by using the prefix `pm`, as in `pm.Normal`.
#### Markov Chain Monte Carlo (MCMC) Simulations
PyMC3 uses the **No-U-Turn Sampler (NUTS)** and the **Random Walk Metropolis**, two Markov chain Monte Carlo (MCMC) algorithms for sampling in posterior space. Monte Carlo gets into the name because when we sample in posterior space, we choose our next move via a pseudo-random process. NUTS is a sophisticated algorithm that can handle a large number of unknown (albeit continuous) variables.
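To give a feel for what an MCMC sampler does under the hood, here is a toy random-walk Metropolis sketch in plain NumPy. This is not PyMC3's implementation, just an illustration of the accept/reject idea, with all names and step sizes chosen for the example:

```python
import numpy as np

def metropolis(logp, n_samples=5000, step=1.0, seed=0):
    """Toy random-walk Metropolis sampler (illustration only)."""
    rng = np.random.default_rng(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.normal(scale=step)
        # accept with probability min(1, p(proposal) / p(x)),
        # done in log space for numerical stability
        if np.log(rng.uniform()) < logp(proposal) - logp(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

# sample from a standard normal target (log-density up to a constant)
draws = metropolis(lambda x: -0.5 * x**2)
```

The draws should then have roughly zero mean and unit standard deviation, since that is the target density.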
```
with pm.Model() as model:
    z = pm.Normal('z', mu=0., sigma=5.)
    x = pm.Normal('x', mu=z, sigma=1., observed=5.)

print(x.logp({'z': 2.5}))
print(z.random(10, 100)[:10])
```
**References**:
- *Salvatier J, Wiecki TV, Fonnesbeck C. 2016. Probabilistic programming in Python using PyMC3. PeerJ Computer Science 2:e55* [(https://doi.org/10.7717/peerj-cs.55)](https://doi.org/10.7717/peerj-cs.55)
- [Distributions in PyMC3](https://docs.pymc.io/api/distributions.html)
- [More Details on Distributions](https://docs.pymc.io/developer_guide.html)
Information about PyMC3 functions including descriptions of distributions, sampling methods, and other functions, is available via the `help` command.
```
#help(pm.Poisson)
```
<a id=blr></a> [Top](#top)
## 3. Bayesian Linear Regression
Let's say we want to predict outcomes $Y$ as normally distributed observations with an expected value $\mu$ that is a linear function of two predictor variables, $\bf{x}_1$ and $\bf{x}_2$.
\begin{equation}
\mu = \alpha + \beta_1 \bf{x}_1 + \beta_2 \bf{x}_2
\end{equation}
\begin{equation}
Y \sim \mathcal{N}(\mu,\,\sigma^{2})
\end{equation}
where $\sigma^2$ represents the measurement-error variance.
In the simulation below we generate the data with $\sigma = 1$.
We also choose the parameters as normal distributions:
\begin{eqnarray}
\alpha \sim \mathcal{N}(0,\,10) \\
\beta_i \sim \mathcal{N}(0,\,10) \\
\sigma \sim \left|\mathcal{N}(0,\,1)\right|
\end{eqnarray}
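The half-normal prior on $\sigma$ can be sketched by taking the absolute value of normal draws (illustrative only; PyMC3 provides `HalfNormal` directly):

```python
import numpy as np

rng = np.random.default_rng(1)
# fold normal draws onto the positive axis: |N(0, sigma_prior)|
sigma_draws = np.abs(rng.normal(0.0, 10.0, size=1000))
```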
We will artificially create the data to predict on. We will then see if our model predicts them correctly.
```
# Initialize random number generator
np.random.seed(123)

# True parameter values
alpha, sigma = 1, 1
beta = [1, 2.5]

# Size of dataset
size = 100

# Predictor variables
X1 = np.linspace(0, 1, size)
X2 = np.linspace(0, .2, size)

# Simulate outcome variable
Y = alpha + beta[0]*X1 + beta[1]*X2 + np.random.randn(size)*sigma

fig, ax = plt.subplots(1, 2, figsize=(10, 6), sharex=True)
ax[0].scatter(X1, Y)
ax[1].scatter(X2, Y)
ax[0].set_xlabel(r'$x_1$', fontsize=14)
ax[0].set_ylabel(r'$Y$', fontsize=14)
ax[1].set_xlabel(r'$x_2$', fontsize=14)
ax[1].set_ylabel(r'$Y$', fontsize=14)

from pymc3 import Model, Normal, HalfNormal

basic_model = Model()
with basic_model:
    # Priors for unknown model parameters, specifically create stochastic random variables
    # with Normal prior distributions for the regression coefficients,
    # and a half-normal distribution for the standard deviation of the observations, σ.
    alpha = Normal('alpha', mu=0, sd=10)
    beta = Normal('beta', mu=0, sd=10, shape=2)
    sigma = HalfNormal('sigma', sd=1)

    # Expected value of outcome - posterior
    mu = alpha + beta[0]*X1 + beta[1]*X2

    # Likelihood (sampling distribution) of observations
    Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y)

# model fitting with sampling
from pymc3 import NUTS, sample, find_MAP
from scipy import optimize

with basic_model:
    # obtain starting values via MAP
    start = find_MAP(fmin=optimize.fmin_powell)

    # instantiate sampler
    step = NUTS(scaling=start)

    # draw 2000 posterior samples
    trace = sample(2000, step, start=start)

from pymc3 import traceplot
traceplot(trace);

results = pm.summary(trace, var_names=['alpha', 'beta', 'sigma'])
results
```
This linear regression example is from the original paper on PyMC3: *Salvatier J, Wiecki TV, Fonnesbeck C. 2016. Probabilistic programming in Python using PyMC3. PeerJ Computer Science 2:e55 https://doi.org/10.7717/peerj-cs.55*
<a id=no4></a> [Top](#top)
## 4. Try this at Home: Example on Mining Disasters
We will go over the classical `mining disasters from 1851 to 1962` dataset.
This example is from the [pyMC3 Docs](https://docs.pymc.io/notebooks/getting_started.html).
```
import pandas as pd
import numpy as np
disaster_data = pd.Series([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
2, 2, 3, 4, 2, 1, 3, np.nan, 2, 1, 1, 1, 1, 3, 0, 0,
1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
3, 3, 1, np.nan, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])
fontsize = 12
years = np.arange(1851, 1962)
plt.figure(figsize=(10,5))
#plt.scatter(years, disaster_data);
plt.bar(years, disaster_data)
plt.ylabel('Disaster count', size=fontsize)
plt.xlabel('Year', size=fontsize);
plt.title('Was there a Turning Point in Mining disasters from 1851 to 1962?', size=15);
```
#### Building the model
**Step 1:** We choose the probability model for our experiment. Occurrences of disasters in the time series are thought to follow a **Poisson** process, with a large **rate** parameter in the early part of the time series and a smaller **rate** in the later part. We are interested in locating the change point in the series, which is perhaps related to changes in mining safety regulations.
```
disasters = pm.Poisson('disasters', rate, observed=disaster_data)
```
We have two rates: `early_rate` if $t \le s$, and `late_rate` if $t > s$, where $s$ is the year the switch was made (a.k.a. the `switchpoint`).
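`pm.math.switch` behaves like a vectorized if/else; as a plain-NumPy analogy (with hypothetical switch year and rates chosen for illustration), `np.where` does the same thing:

```python
import numpy as np

years = np.arange(1851, 1962)
switchpoint = 1890                      # hypothetical switch year
early_rate, late_rate = 3.0, 1.0        # hypothetical rates

# early_rate where switchpoint >= year (i.e. t <= s), late_rate otherwise
rate = np.where(switchpoint >= years, early_rate, late_rate)
```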
**Step 2:** Choose prior distributions for the two rates (what we believe the rates were before we observed the data) and for the switchpoint. We choose Exponential priors for the rates.
```
early_rate = pm.Exponential('early_rate', 1)
```
The parameters of this model are the `switchpoint` and the two rates, `early_rate` and `late_rate`.
**Note:** Watch for missing values. Missing values are handled transparently by passing a `MaskedArray` or a `pandas.DataFrame`. Behind the scenes, another random variable, `disasters.missing_values`, is created to model the missing values. If you pass a `np.array` with missing values you will get an error.
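A minimal sketch (illustrative data) of converting an array with NaNs into the kind of `MaskedArray` that PyMC3 handles transparently:

```python
import numpy as np

raw = np.array([4.0, 5.0, np.nan, 2.0, np.nan, 1.0])
masked = np.ma.masked_invalid(raw)   # NaN entries become masked
n_missing = int(masked.mask.sum())   # number of masked (missing) values
total = masked.sum()                 # sums only the unmasked entries
```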
```
with pm.Model() as disaster_model:
    # discrete
    switchpoint = pm.DiscreteUniform('switchpoint', lower=years.min(), upper=years.max(), testval=1900)

    # Priors for pre- and post-switch rates (number of disasters)
    early_rate = pm.Exponential('early_rate', 1)
    late_rate = pm.Exponential('late_rate', 1)

    # our theta - allocate appropriate Poisson rates to years before and after current
    # switch is an `if` statement in pyMC3
    rate = pm.math.switch(switchpoint >= years, early_rate, late_rate)

    # our observed data as a likelihood function of the `rate` parameters
    # shows how we think our data is distributed
    disasters = pm.Poisson('disasters', rate, observed=disaster_data)
```
#### Model Fitting
```
# there are defaults but we can also more explicitly set the sampling algorithms
with disaster_model:
    # for continuous variables
    step1 = pm.NUTS([early_rate, late_rate])

    # for discrete variables
    step2 = pm.Metropolis([switchpoint, disasters.missing_values[0]])

    trace = pm.sample(10000, step=[step1, step2])

    # try different number of samples
    # trace = pm.sample(5000, step=[step1, step2])
```
#### Posterior Analysis
In the left-side plots we notice that our early rate is between 2.5 and 3.5 disasters per year. In the late period it seems to be between 0.6 and 1.2, so definitely lower.
The right side plots show the samples we drew to come to our conclusion.
```
pm.traceplot(trace, ['early_rate', 'late_rate', 'switchpoint'], figsize=(20,10));
results = pm.summary(trace,
var_names=['early_rate', 'late_rate', 'switchpoint'])
results
```
# Introduction to Machine Learning
## What is Machine Learning?
Machine Learning is the field of study that gives computers the capability to learn without being explicitly programmed.
I like to think of it as a comparison rather than a definition.
- If you **can** give clear instructions on how to do the task - traditional computing
- If you **cannot** give clear instructions but can give lots of examples - machine learning
Let's look at a few illustrations -
1. Complex mathematical calculations - can give clear instructions: traditional computing
2. Processing a financial transaction - can give clear instructions: traditional computing
3. Differentiate between pictures of cats and dogs - can't give instructions but can give examples: machine learning
4. Playing chess - can give clear instructions for how to play but cannot give instructions on how to win! We can give a lot of past games as examples though: machine learning
5. Customer segmentation - don't know what the segments/groupings are, so giving clear instructions is out of the question. Can give a large amount of examples with customer demographic data and purchase history: machine learning
So we can say that traditional programming takes data and program to give us output, while machine learning takes data and output (examples) to give us a program!
<img src='https://drive.google.com/uc?id=1SAu0GNpDqDNRNxEtXRqBX-t20BuB0HcR' align = 'left'/>
## Data is the New Oil!
Data is absolutely **critical** to creating a viable Machine Learning model. Here's simple representation of how data helps us create a model and a model helps us make predictions.
<img src="https://drive.google.com/uc?id=1rM6SBXOMeAcFXu1OLtvk4HOWsXdY_xGU" width=500 height=300 align="left"/>
Here's a short explainer video if the pictures didn't really do it for you...
```
## Run this cell (shift+enter) to see the video
from IPython.display import IFrame
IFrame("https://www.youtube.com/embed/f_uwKZIAeM0", width="600", height="400")
```
## What are the Different Types of Machine Learning?
<img src="https://drive.google.com/uc?id=1ESgroj56fbOoE0_xiMhsaibVa8D-_80H" align="left" width="1000" height="800"/>
---
## Course Overview
This course is designed for the 'do-ers'. Our entire focus during this course will be to apply and experiment. Conceptual understanding is very important and we will build a strong conceptual foundation but it will always be in context of a project rather than just a theoretical understanding.
We will be exploring a variety of Machine Learning algorithms. For each we will use an appropriate real world dataset, work on a real problem statement, and execute a project that can become the foundation of your ML skills portfolio and your resume.
You now have access to a full scale ML lab-on-cloud. This is a very powerful tool, IF you use it. Make the most of what you have - explore, experiment, break a few things. You learn the most out of failure!
### What Will We Do?
- We will understand the life cycle of a typical ML project and exercise it through real projects
- We will be exploring a slew of ML algorithms (supervised and unsupervised learning)
- For each of these algorithms we will understand how it works and apply it in a project
- We will extensively work on real world datasets and strive to be hands-on
### What Will We NOT Do?
- We will not cover every ML algorithm under the sun
- We will not cover reinforcement learning and deep learning in this course
- We will not go deep into the mathematical, probabilistic, and statistical foundations of ML
## Course Curriculum
**Key Concepts Covered**
1. Lifecycle of a typical ML project
2. Data Pre-processing
- Data acquisition and loading
- Data integration
- Exploratory data analysis
- Data cleaning
- Feature selection
- Encoding
- Normalization
3. Picking the Right Algorithm
4. Evaluating Your Model
- Train - Test Split
- Evaluation Metrics
- Under and Over Fitting
5. Other key concepts
- Imputation
- Kernel Functions
- Bagging
- Hyperparameters
- Boosting
**Algorithms Covered**
1. Linear Regression
2. Logistic Regression
3. K Nearest Neighbors
4. Decision Trees
5. Random Forest
6. Naive Bayes
7. Support Vector Machine
8. K Means Clustering
9. Hierarchical Clustering
**Datasets Used**
1. Healthcare - patient data on drug efficacy
2. Telecom - customer profiles
3. Retail - customer profiles
4. Automobile - automobile catalogue make, model, engine specs, etc.
5. Environment - CO2 emissions data
6. Health Informatics - cancer cell biopsy observations
---
## Life Cycle of a Typical ML Project
A typical ML project goes through 5 major steps -
1. Define Project Objectives
2. Acquire, Explore and Prepare Data
3. Model Data
4. Interpret and Communicate the Insights
5. Implement, Document, and Maintain
We will work through steps 1 thru 4 during this course. We will **not** be deploying, documenting or maintaining our models.
<img src="https://drive.google.com/uc?id=1hQrE2Q7D_j4T8y5aM8pW-ejuS4VUP7Co" align="left"/>
Let's look at each of steps in further detail -
1. **Define Project Objectives** - this is a very important step that most of us tend to forget. Without a clear understanding of why you are doing a project, the project will fail. What the business or client expects as the outcome of the project has to be discussed and understood before you start off.
2. **Acquire, Explore, and Prepare Data** - you will spend a lot of your time on this step when you do an ML project. This is a critical step: exploring the data will help you decide which models you might want to employ, and based on this preliminary hypothesis you will prepare the data for the next step (Model Data). Here are a few things you will end up doing within this step -
- Data acquisition and loading
- Data integration
- Exploratory data analysis
- Data cleaning
- Feature selection
- Encoding
- Normalization
3. **Model Data** - this is the heart of our project. But most students of ML get stuck on fancy algorithm names. There's a lot more to it than just claiming that you have done a project using SVM or Logistic Regression. You have to be able to articulate how you picked a model, how you trained it, and why you concluded that the output looks good.
- Select the algorithm(s) to use
- Train the model(s)
- Evaluate performance
- Tweak parameters and re-evaluate
4. **Interpret and Communicate the Insights** - just modeling the data, showing a few visualizations, and reducing the error is not enough. As an ML engineer you have to be able to talk to your client and help them interpret the outcome of all your hard work. Be ready to answer a few questions -
- What interesting patterns did you notice in the data?
- Did you notice any intrinsic dependencies, correlation, or causation in the features?
- Why did you pick the algorithm that you did?
- How did you split the train-test data? why?
- Is this error rate acceptable? why?
- How will the outcome of this project help the client?
5. **Implement, Document, and Maintain** - at a real client, you will have to deploy your model in production, document it extensively, and also maintain it going forward. We will not go into this step given we are not going to be deploying our models in production.
## Kick Start!
Here's a 12 minute crash course on ML to kick-start our journey!
```
## Run this cell (shift+enter) to see the video
from IPython.display import IFrame
IFrame("https://www.youtube.com/embed/z-EtmaFJieY", width="814", height="509")
```
Here's a great article that summarizes Machine Learning really well.
https://machinelearningmastery.com/basic-concepts-in-machine-learning/
```
from cubelib.stac_eco import Stac_eco
from cubelib.fm_map import Fmap
import pandas as pd
# pd.set_option('display.max_colwidth', None)
# pd.set_option('display.max_rows', None)
# pd.set_option('display.max_columns', None)
# pd.set_option('display.width', 2000)
#! cp ../2_Gridding_For_Scale/*.geojson .
! ls *.geojson
geojson_file = 'one_tile.geojson'
se = Stac_eco(geojson_file)
se
se.set_collection('landsat-c2ard-sr')
se
fm = Fmap()
fm.sat_geojson(geojson_file)
dates="2020-04-01/2020-10-31"
search_object_eco = se.search(dates, cloud_cover=100)
number_of_matched_scenes = search_object_eco.matched()
print(f"I found {number_of_matched_scenes} Scenes yay!")
so = search_object_eco
gdf1 = se.items_gdf(so)
#gdf1
gdf1.T
import pandas as pd
pd.set_option('display.max_colwidth', None)
gdf1['stac_extensions']
se.plot_polygons(so)
gdf1['properties.landsat:grid_vertical']
gdf1['properties.landsat:grid_horizontal']
gdf2 = gdf1[gdf1['properties.landsat:grid_horizontal']=='29']
gdf2.T
gdf3 = gdf2[gdf2['properties.landsat:grid_vertical']=='03']
gdf3[['properties.landsat:grid_horizontal', 'properties.landsat:grid_vertical']]
len(gdf3[['properties.landsat:grid_horizontal', 'properties.landsat:grid_vertical']])
dir(se)
se.df_assets(so)
import boto3
from rasterio.session import AWSSession
aws_session = AWSSession(boto3.Session(), requester_pays=True)
import rasterio as rio
import xarray as xr
def create_dataset(row, bands=['Swirs', 'Green'], chunks={'band': 1, 'x': 2048, 'y': 2048}):
    datasets = []
    with rio.Env(aws_session):
        for band in bands:
            print(row[band]['href'])
            url = row[band]['href']
            #url = url.replace('usgs-landsat', 'usgs-landsat-ard')
            #da = xr.open_rasterio(url, chunks = chunks)
            da = xr.open_rasterio(url)
            daSub = da
            # daSub = da.sel(x=slice(ll_corner[0], ur_corner[0]), y=slice(ur_corner[1], ll_corner[1]))
            daSub = daSub.squeeze().drop(labels='band')
            DS = daSub.to_dataset(name=band)
            datasets.append(DS)
    DS = xr.merge(datasets)
    return DS
def asset_gdf(my_gdf, bands):
    #print(my_gdf.keys)
    i_dict_array = []
    for i, item in my_gdf.iterrows():
        i_dict = {}
        print(item.id)
        i_dict['id'] = item.id
        for band in bands:
            href = f'assets.{band}.href'
            #print(item[href])
            i_dict[band] = {'band': band,
                            'href': item[href]}
        i_dict_array.append(i_dict)
    print(i_dict_array)
    new_gdf = pd.DataFrame(i_dict_array)
    return new_gdf
gdf3
bands=['blue','green','red','nir08','swir16','swir22','qa_pixel']
gdf4=asset_gdf(gdf3,bands)
gdf4.id
datasets = []
for i, row in gdf4.iterrows():
    try:
        print('loading....', row.id)
        ds = create_dataset(row, bands)
        datasets.append(ds)
    except Exception as e:
        print('Error loading, skipping')
        print(e)
! aws s3 ls --request-payer requester s3://usgs-landsat/collection02/oli-tirs/2020/CU/029/003/LC08_CU_029003_20200419_20210504_02/LC08_CU_029003_20200419_20210504_02_SR_B2.TIF
! aws s3 ls --request-payer requester s3://usgs-landsat/collection02/oli-tirs/2020/CU/029/003/
! aws s3 ls --request-payer requester s3://usgs-landsat-ard/collection02/oli-tirs/2020/CU/029/003/
datasets
! date
gdf3
gdf3.keys()
gdf3['properties.start_datetime'].tolist()
len(gdf3)
gdf3.index.tolist()
from datetime import datetime
my_date_list = gdf3.index.tolist()
my_str_date_list = []
for dt in my_date_list:
    print(dt)
    str_dt = dt.strftime('%Y-%m-%d')
    print(str_dt)
    my_str_date_list.append(str_dt)
DS = xr.concat(datasets, dim= pd.DatetimeIndex(my_str_date_list, name='time'))
print('Dataset Size (Gb): ', DS.nbytes/1e9)
DS
DS['red'].isel(time=0).plot()
DS['red'][1].plot()
DS['red'][15].plot()
ds_mini = DS.isel(x=slice(0,5000,10), y=slice(0,5000,10))
ds_mini
%matplotlib inline
display_color = 'blue'
#
ds_mini[display_color].plot.imshow('x','y', col='time', col_wrap=6, cmap='viridis')
#ds_mini[display_color].plot.imshow('x','y', col='time', col_wrap=6, cmap='viridis', vmin=7000, vmax=19000)
ds_mini.hvplot()
ds_mini['red'][0].hvplot.image(rasterize=True)
ds_mini['red'][0].plot()
d2 = ds_mini.transpose('time', 'y', 'x')
d2['red'].hvplot.image(rasterize=True)
d2
dir(d2)
#d2.swap_dims({'time')
#help(d2.swap_dims)
#help(d2.rename)
#d2.swap_dims({'time':'x'})
#d2.dims
#d2.drop_dims()
help(d2.drop_dims)
d2['red'].hvplot.image(rasterize=True, x='x', y='y', width=600, height=400, cmap='viridis', clim=(4000,20000))
help(d2['red'].hvplot.image)
d2['qa_pixel'].hvplot.image(rasterize=True, x='x', y='y', width=600, height=400, cmap='viridis')
DS.time.attrs = {} #this allowed the nc to be written
#ds.SCL.attrs = {}
DS.to_netcdf('~/maine_one_tile_swir_also.nc')
! ls -lh ~/*.nc
```
```
import warnings
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from astropy.io import fits
from astropy.table import Table
import pandas as pd
import numpy as np
np.seterr(divide='ignore')
warnings.filterwarnings("ignore", category=RuntimeWarning)
class HRCevt1:
    '''
    A more robust HRC EVT1 file. Includes explicit
    columns for every status bit, as well as calculated
    columns for the f_p, f_b plane for your boomerangs.
    Check out that cool new filtering algorithm!
    '''

    def __init__(self, evt1file):
        # Do a standard read in of the EVT1 fits table
        self.filename = evt1file
        self.hdulist = fits.open(evt1file)
        self.data = Table(self.hdulist[1].data)
        self.header = self.hdulist[1].header
        self.gti = self.hdulist[2].data
        self.hdulist.close()  # Don't forget to close your fits file!

        fp_u, fb_u, fp_v, fb_v = self.calculate_fp_fb()

        self.gti.starts = self.gti['START']
        self.gti.stops = self.gti['STOP']

        self.gtimask = []
        # for start, stop in zip(self.gti.starts, self.gti.stops):
        #     self.gtimask = (self.data["time"] > start) & (self.data["time"] < stop)
        self.gtimask = (self.data["time"] > self.gti.starts[0]) & (
            self.data["time"] < self.gti.stops[-1])

        self.data["fp_u"] = fp_u
        self.data["fb_u"] = fb_u
        self.data["fp_v"] = fp_v
        self.data["fb_v"] = fb_v

        # Make individual status bit columns with legible names
        self.data["AV3 corrected for ringing"] = self.data["status"][:, 0]
        self.data["AU3 corrected for ringing"] = self.data["status"][:, 1]
        self.data["Event impacted by prior event (piled up)"] = self.data["status"][:, 2]
        # Bit 4 (Python 3) is spare
        self.data["Shifted event time"] = self.data["status"][:, 4]
        self.data["Event telemetered in NIL mode"] = self.data["status"][:, 5]
        self.data["V axis not triggered"] = self.data["status"][:, 6]
        self.data["U axis not triggered"] = self.data["status"][:, 7]
        self.data["V axis center blank event"] = self.data["status"][:, 8]
        self.data["U axis center blank event"] = self.data["status"][:, 9]
        self.data["V axis width exceeded"] = self.data["status"][:, 10]
        self.data["U axis width exceeded"] = self.data["status"][:, 11]
        self.data["Shield PMT active"] = self.data["status"][:, 12]
        # Bit 14 (Python 13) is hardware spare
        self.data["Upper level discriminator not exceeded"] = self.data["status"][:, 14]
        self.data["Lower level discriminator not exceeded"] = self.data["status"][:, 15]
        self.data["Event in bad region"] = self.data["status"][:, 16]
        self.data["Amp total on V or U = 0"] = self.data["status"][:, 17]
        self.data["Incorrect V center"] = self.data["status"][:, 18]
        self.data["Incorrect U center"] = self.data["status"][:, 19]
        self.data["PHA ratio test failed"] = self.data["status"][:, 20]
        self.data["Sum of 6 taps = 0"] = self.data["status"][:, 21]
        self.data["Grid ratio test failed"] = self.data["status"][:, 22]
        self.data["ADC sum on V or U = 0"] = self.data["status"][:, 23]
        self.data["PI exceeding 255"] = self.data["status"][:, 24]
        self.data["Event time tag is out of sequence"] = self.data["status"][:, 25]
        self.data["V amp flatness test failed"] = self.data["status"][:, 26]
        self.data["U amp flatness test failed"] = self.data["status"][:, 27]
        self.data["V amp saturation test failed"] = self.data["status"][:, 28]
        self.data["U amp saturation test failed"] = self.data["status"][:, 29]
        self.data["V hyperbolic test failed"] = self.data["status"][:, 30]
        self.data["U hyperbolic test failed"] = self.data["status"][:, 31]

        self.data["Hyperbola test passed"] = np.logical_not(np.logical_or(
            self.data['U hyperbolic test failed'], self.data['V hyperbolic test failed']))
        self.data["Hyperbola test failed"] = np.logical_or(
            self.data['U hyperbolic test failed'], self.data['V hyperbolic test failed'])

        self.obsid = self.header["OBS_ID"]
        self.obs_date = self.header["DATE"]
        self.target = self.header["OBJECT"]
        self.detector = self.header["DETNAM"]
        self.grating = self.header["GRATING"]
        self.exptime = self.header["EXPOSURE"]

        self.numevents = len(self.data["time"])
        self.goodtimeevents = len(self.data["time"][self.gtimask])
        self.badtimeevents = self.numevents - self.goodtimeevents

        # An event passes only if neither axis failed the hyperbolic test
        self.hyperbola_passes = np.sum(np.logical_not(np.logical_or(
            self.data['U hyperbolic test failed'], self.data['V hyperbolic test failed'])))
        self.hyperbola_failures = np.sum(np.logical_or(
            self.data['U hyperbolic test failed'], self.data['V hyperbolic test failed']))

        if self.hyperbola_passes + self.hyperbola_failures != self.numevents:
            print("Warning: Number of Hyperbola Test Failures and Passes ({}) does not equal total number of events ({}).".format(
                self.hyperbola_passes + self.hyperbola_failures, self.numevents))

        # Multidimensional columns don't grok with Pandas
        self.data.remove_column('status')
        self.data = self.data.to_pandas()

    def __str__(self):
        return "HRC EVT1 object with {} events. Data is packaged as a Pandas Dataframe".format(self.numevents)

    def calculate_fp_fb(self):
        '''
        Calculate the Fine Position (fp) and normalized central tap
        amplitude (fb) for the HRC U- and V- axes.

        Parameters
        ----------
        data : Astropy Table
            Table object made from an HRC evt1 event list. Must include the
            au1, au2, au3 and av1, av2, av3 columns.

        Returns
        -------
        fp_u, fb_u, fp_v, fb_v : float
            Calculated fine positions and normalized central tap amplitudes
            for the HRC U- and V- axes
        '''
        a_u = self.data["au1"]  # otherwise known as "a1"
        b_u = self.data["au2"]  # "a2"
        c_u = self.data["au3"]  # "a3"

        a_v = self.data["av1"]
        b_v = self.data["av2"]
        c_v = self.data["av3"]

        with np.errstate(invalid='ignore'):
            # Do the U axis
            fp_u = ((c_u - a_u) / (a_u + b_u + c_u))
            fb_u = b_u / (a_u + b_u + c_u)

            # Do the V axis
            fp_v = ((c_v - a_v) / (a_v + b_v + c_v))
            fb_v = b_v / (a_v + b_v + c_v)

        return fp_u, fb_u, fp_v, fb_v

    def threshold(self, img, bins):
        nozero_img = img.copy()
        nozero_img[img == 0] = np.nan

        # This is a really stupid way to threshold
        median = np.nanmedian(nozero_img)
        thresh = median * 5

        thresh_img = nozero_img
        thresh_img[thresh_img < thresh] = np.nan
        thresh_img[:int(bins[1] / 2), :] = np.nan
        # thresh_img[:, int(bins[1] - 5):] = np.nan

        return thresh_img

    def hyperscreen(self):
        '''
        Grant Tremblay's new algorithm. Screens events on a tap-by-tap basis.
        '''
        data = self.data

        #taprange = range(data['crsu'].min(), data['crsu'].max() + 1)
        taprange_u = range(data['crsu'].min() - 1, data['crsu'].max() + 1)
        taprange_v = range(data['crsv'].min() - 1, data['crsv'].max() + 1)

        bins = [200, 200]  # number of bins

        # Instantiate these empty dictionaries to hold our results
        u_axis_survivals = {}
        v_axis_survivals = {}

        for tap in taprange_u:
            # Do the U axis
            tapmask_u = data[data['crsu'] == tap].index.values
            if len(tapmask_u) < 2:
                continue
            keep_u = np.isfinite(data['fb_u'][tapmask_u])
            hist_u, xbounds_u, ybounds_u = np.histogram2d(
                data['fb_u'][tapmask_u][keep_u], data['fp_u'][tapmask_u][keep_u], bins=bins)
            thresh_hist_u = self.threshold(hist_u, bins=bins)

            posx_u = np.digitize(data['fb_u'][tapmask_u], xbounds_u)
            posy_u = np.digitize(data['fp_u'][tapmask_u], ybounds_u)
            hist_mask_u = (posx_u > 0) & (posx_u <= bins[0]) & (
                posy_u > -1) & (posy_u <= bins[1])

            # Values of the histogram where the points are
            hhsub_u = thresh_hist_u[posx_u[hist_mask_u] -
                                    1, posy_u[hist_mask_u] - 1]
            pass_fb_u = data['fb_u'][tapmask_u][hist_mask_u][np.isfinite(
                hhsub_u)]

            u_axis_survivals["U Axis Tap {:02d}".format(
                tap)] = pass_fb_u.index.values

        for tap in taprange_v:
            # Now do the V axis:
            tapmask_v = data[data['crsv'] == tap].index.values
            if len(tapmask_v) < 2:
                continue
            keep_v = np.isfinite(data['fb_v'][tapmask_v])
            hist_v, xbounds_v, ybounds_v = np.histogram2d(
                data['fb_v'][tapmask_v][keep_v], data['fp_v'][tapmask_v][keep_v], bins=bins)
            thresh_hist_v = self.threshold(hist_v, bins=bins)

            posx_v = np.digitize(data['fb_v'][tapmask_v], xbounds_v)
            posy_v = np.digitize(data['fp_v'][tapmask_v], ybounds_v)
            hist_mask_v = (posx_v > 0) & (posx_v <= bins[0]) & (
                posy_v > -1) & (posy_v <= bins[1])

            # Values of the histogram where the points are
            hhsub_v = thresh_hist_v[posx_v[hist_mask_v] -
                                    1, posy_v[hist_mask_v] - 1]
            pass_fb_v = data['fb_v'][tapmask_v][hist_mask_v][np.isfinite(
                hhsub_v)]

            v_axis_survivals["V Axis Tap {:02d}".format(
                tap)] = pass_fb_v.index.values

        # Done looping over taps
        u_all_survivals = np.concatenate(
            [x for x in u_axis_survivals.values()])
        v_all_survivals = np.concatenate(
            [x for x in v_axis_survivals.values()])

        # If the event passes both U- and V-axis tests, it survives
        all_survivals = np.intersect1d(u_all_survivals, v_all_survivals)
        survival_mask = np.isin(self.data.index.values, all_survivals)
failure_mask = np.logical_not(survival_mask)
num_survivals = sum(survival_mask)
num_failures = sum(failure_mask)
percent_tapscreen_rejected = round(
((num_failures / self.numevents) * 100), 2)
# Do a sanity check to look for lost events. Shouldn't be any.
if num_survivals + num_failures != self.numevents:
print("WARNING: Total Number of survivals and failures does \
not equal total events in the EVT1 file. Something is wrong!")
legacy_hyperbola_test_survivals = sum(
self.data['Hyperbola test passed'])
legacy_hyperbola_test_failures = sum(
self.data['Hyperbola test failed'])
percent_legacy_hyperbola_test_rejected = round(
((legacy_hyperbola_test_failures / self.goodtimeevents) * 100), 2)
percent_improvement_over_legacy_test = round(
(percent_tapscreen_rejected - percent_legacy_hyperbola_test_rejected), 2)
hyperscreen_results_dict = {"ObsID": self.obsid,
"Target": self.target,
"Exposure Time": self.exptime,
"Detector": self.detector,
"Number of Events": self.numevents,
"Number of Good Time Events": self.goodtimeevents,
"U Axis Survivals by Tap": u_axis_survivals,
"V Axis Survivals by Tap": v_axis_survivals,
"U Axis All Survivals": u_all_survivals,
"V Axis All Survivals": v_all_survivals,
"All Survivals (event indices)": all_survivals,
"All Survivals (boolean mask)": survival_mask,
"All Failures (boolean mask)": failure_mask,
"Percent rejected by Tapscreen": percent_tapscreen_rejected,
"Percent rejected by Hyperbola": percent_legacy_hyperbola_test_rejected,
"Percent improvement": percent_improvement_over_legacy_test
}
return hyperscreen_results_dict
def hyperbola(self, fb, a, b, h):
'''Given the normalized central tap amplitude, a, b, and h,
return an array of length len(fb) that gives a hyperbola.'''
hyperbola = b * np.sqrt(((fb - h)**2 / a**2) - 1)
return hyperbola
def legacy_hyperbola_test(self, tolerance=0.035):
'''
Apply the hyperbolic test.
'''
# Remind the user what tolerance they're using
# print("{0: <25}| Using tolerance = {1}".format(" ", tolerance))
# Set hyperbolic coefficients, depending on whether this is HRC-I or -S
if self.detector == "HRC-I":
a_u = 0.3110
b_u = 0.3030
h_u = 1.0580
a_v = 0.3050
b_v = 0.2730
h_v = 1.1
# print("{0: <25}| Using HRC-I hyperbolic coefficients: ".format(" "))
# print("{0: <25}| Au={1}, Bu={2}, Hu={3}".format(" ", a_u, b_u, h_u))
# print("{0: <25}| Av={1}, Bv={2}, Hv={3}".format(" ", a_v, b_v, h_v))
if self.detector == "HRC-S":
a_u = 0.2706
b_u = 0.2620
h_u = 1.0180
a_v = 0.2706
b_v = 0.2480
h_v = 1.0710
# print("{0: <25}| Using HRC-S hyperbolic coefficients: ".format(" "))
# print("{0: <25}| Au={1}, Bu={2}, Hu={3}".format(" ", a_u, b_u, h_u))
# print("{0: <25}| Av={1}, Bv={2}, Hv={3}".format(" ", a_v, b_v, h_v))
# Set the tolerance boundary ("width" of the hyperbolic region)
h_u_lowerbound = h_u * (1 + tolerance)
h_u_upperbound = h_u * (1 - tolerance)
h_v_lowerbound = h_v * (1 + tolerance)
h_v_upperbound = h_v * (1 - tolerance)
# Compute the Hyperbolae
with np.errstate(invalid='ignore'):
zone_u_fit = self.hyperbola(self.data["fb_u"], a_u, b_u, h_u)
zone_u_lowerbound = self.hyperbola(
self.data["fb_u"], a_u, b_u, h_u_lowerbound)
zone_u_upperbound = self.hyperbola(
self.data["fb_u"], a_u, b_u, h_u_upperbound)
zone_v_fit = self.hyperbola(self.data["fb_v"], a_v, b_v, h_v)
zone_v_lowerbound = self.hyperbola(
self.data["fb_v"], a_v, b_v, h_v_lowerbound)
zone_v_upperbound = self.hyperbola(
self.data["fb_v"], a_v, b_v, h_v_upperbound)
zone_u = [zone_u_lowerbound, zone_u_upperbound]
zone_v = [zone_v_lowerbound, zone_v_upperbound]
# Apply the masks
# print("{0: <25}| Hyperbolic masks for U and V axes computed".format(""))
with np.errstate(invalid='ignore'):
# print("{0: <25}| Creating U-axis mask".format(""), end=" |")
between_u = np.logical_not(np.logical_and(
self.data["fp_u"] < zone_u[1], self.data["fp_u"] > -1 * zone_u[1]))
not_beyond_u = np.logical_and(
self.data["fp_u"] < zone_u[0], self.data["fp_u"] > -1 * zone_u[0])
condition_u_final = np.logical_and(between_u, not_beyond_u)
# print(" Creating V-axis mask")
between_v = np.logical_not(np.logical_and(
self.data["fp_v"] < zone_v[1], self.data["fp_v"] > -1 * zone_v[1]))
not_beyond_v = np.logical_and(
self.data["fp_v"] < zone_v[0], self.data["fp_v"] > -1 * zone_v[0])
condition_v_final = np.logical_and(between_v, not_beyond_v)
mask_u = condition_u_final
mask_v = condition_v_final
hyperzones = {"zone_u_fit": zone_u_fit,
"zone_u_lowerbound": zone_u_lowerbound,
"zone_u_upperbound": zone_u_upperbound,
"zone_v_fit": zone_v_fit,
"zone_v_lowerbound": zone_v_lowerbound,
"zone_v_upperbound": zone_v_upperbound}
hypermasks = {"mask_u": mask_u, "mask_v": mask_v}
# print("{0: <25}| Hyperbolic masks created".format(""))
# print("{0: <25}| ".format(""))
return hyperzones, hypermasks
def boomerang(self, mask=None, show=True, plot_legacy_zone=True, title=None, cmap=None, savepath=None, create_subplot=False, ax=None, rasterized=True):
# You can plot the image on axes of a subplot by passing
# that axis to this function. Here are some switches to enable that.
if create_subplot is False:
self.fig, self.ax = plt.subplots(figsize=(12, 8))
elif create_subplot is True:
if ax is None:
self.ax = plt.gca()
else:
self.ax = ax
if cmap is None:
cmap = 'plasma'
if mask is not None:
self.ax.scatter(self.data['fb_u'], self.data['fp_u'],
c=self.data['sumamps'], cmap='bone', s=0.3, alpha=0.8, rasterized=rasterized)
frame = self.ax.scatter(self.data['fb_u'][mask], self.data['fp_u'][mask],
c=self.data['sumamps'][mask], cmap=cmap, s=0.5, rasterized=rasterized)
else:
frame = self.ax.scatter(self.data['fb_u'], self.data['fp_u'],
c=self.data['sumamps'], cmap=cmap, s=0.5, rasterized=rasterized)
if plot_legacy_zone is True:
hyperzones, hypermasks = self.legacy_hyperbola_test(
tolerance=0.035)
self.ax.plot(self.data["fb_u"], hyperzones["zone_u_lowerbound"],
'o', markersize=0.3, color='black', alpha=0.8, rasterized=rasterized)
self.ax.plot(self.data["fb_u"], -1 * hyperzones["zone_u_lowerbound"],
'o', markersize=0.3, color='black', alpha=0.8, rasterized=rasterized)
self.ax.plot(self.data["fb_u"], hyperzones["zone_u_upperbound"],
'o', markersize=0.3, color='black', alpha=0.8, rasterized=rasterized)
self.ax.plot(self.data["fb_u"], -1 * hyperzones["zone_u_upperbound"],
'o', markersize=0.3, color='black', alpha=0.8, rasterized=rasterized)
self.ax.grid(False)
if title is None:
self.ax.set_title('{} | {} | ObsID {} | {} ksec | {} counts'.format(
self.target, self.detector, self.obsid, round(self.exptime / 1000, 1), self.numevents))
else:
self.ax.set_title(title)
self.ax.set_ylim(-1.1, 1.1)
self.ax.set_xlim(-0.1, 1.1)
self.ax.set_ylabel(r'Fine Position $f_p$ $(C-A)/(A + B + C)$')
self.ax.set_xlabel(
r'Normalized Central Tap Amplitude $f_b$ $B / (A+B+C)$')
if create_subplot is False:
self.cbar = plt.colorbar(frame, pad=-0.005)
self.cbar.set_label("SUMAMPS")
if show is True:
plt.show()
if savepath is not None:
plt.savefig(savepath, dpi=150, bbox_inches='tight')
print('Saved boomerang figure to: {}'.format(savepath))
def image(self, masked_x=None, masked_y=None, xlim=None, ylim=None, detcoords=False, title=None, cmap=None, show=True, savepath=None, create_subplot=False, ax=None):
'''
Create a quicklook image, in detector or sky coordinates, of the
observation. The image will be binned to 400x400.
'''
# Create the 2D histogram
nbins = (400, 400)
if masked_x is not None and masked_y is not None:
x = masked_x
y = masked_y
img_data, yedges, xedges = np.histogram2d(y, x, nbins)
else:
if detcoords is False:
x = self.data['x'][self.gtimask]
y = self.data['y'][self.gtimask]
elif detcoords is True:
x = self.data['detx'][self.gtimask]
y = self.data['dety'][self.gtimask]
img_data, yedges, xedges = np.histogram2d(y, x, nbins)
extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]]
# Create the Figure
styleplots()
# You can plot the image on axes of a subplot by passing
# that axis to this function. Here are some switches to enable that.
if create_subplot is False:
self.fig, self.ax = plt.subplots()
elif create_subplot is True:
if ax is None:
self.ax = plt.gca()
else:
self.ax = ax
self.ax.grid(False)
if cmap is None:
cmap = 'viridis'
self.ax.imshow(img_data, extent=extent, norm=LogNorm(),
interpolation='none', cmap=cmap, origin='lower')
if title is None:
self.ax.set_title("ObsID {} | {} | {} | {:,} events".format(
self.obsid, self.target, self.detector, self.goodtimeevents))
else:
self.ax.set_title("{}".format(title))
if detcoords is False:
self.ax.set_xlabel("Sky X")
self.ax.set_ylabel("Sky Y")
elif detcoords is True:
self.ax.set_xlabel("Detector X")
self.ax.set_ylabel("Detector Y")
if xlim is not None:
self.ax.set_xlim(xlim)
if ylim is not None:
self.ax.set_ylim(ylim)
if show is True:
plt.show(block=True)
if savepath is not None:
plt.savefig('{}'.format(savepath))
print("Saved image to {}".format(savepath))
def styleplots():
    mpl.rcParams['agg.path.chunksize'] = 10000
    # Make things pretty
    plt.style.use('ggplot')
    labelsizes = 10
    plt.rcParams['font.size'] = labelsizes
    plt.rcParams['axes.titlesize'] = 12
    plt.rcParams['axes.labelsize'] = labelsizes
    plt.rcParams['xtick.labelsize'] = labelsizes
    plt.rcParams['ytick.labelsize'] = labelsizes
from astropy.io import fits
import os
os.listdir('../tests/data/')
fitsfile = '../tests/data/hrcS_evt1_testfile.fits.gz'
obs = HRCevt1(fitsfile)
obs.image(obs.data['detx'][obs.gtimask], obs.data['dety'][obs.gtimask], xlim=(26000, 41000), ylim=(31500, 34000))
results = obs.hyperscreen()
obs.image(obs.data['detx'][results['All Failures (boolean mask)']], obs.data['dety'][results['All Failures (boolean mask)']], xlim=(26000, 41000), ylim=(31500, 34000))
obs.data['crsv'].min()
obs.data['crsv'].max()
obs.data['crsv']
obs.numevents
from astropy.io import fits
header = fits.getheader(fitsfile, 1)
header
from hyperscreen import hypercore
```
# Parameterizing with Continuous Variables
```
from IPython.display import Image
```
## Continuous Factors
1. Base Class for Continuous Factors
2. Joint Gaussian Distributions
3. Canonical Factors
4. Linear Gaussian CPD
In many situations, some variables are best modeled as taking values in some continuous space. Examples include variables such as position, velocity, temperature, and pressure. Clearly, we cannot use a table representation in this case.
Nothing in the formulation of a Bayesian network requires that we restrict attention to discrete variables. The only requirement is that the CPD P(X | Y1, Y2, ..., Yn) represent, for every assignment of values y1 ∈ Val(Y1), y2 ∈ Val(Y2), ..., yn ∈ Val(Yn), a distribution over X. In this case, X might be continuous, in which case the CPD would need to represent distributions over a continuum of values; X's parents might also be continuous, so that the CPD would need to represent a continuum of different probability distributions. There exist implicit representations for CPDs of this type, allowing us to apply all of the network machinery to the continuous case as well.
### Base Class for Continuous Factors
This class will behave as a base class for the continuous factor representations. All the present and future factor classes will be derived from this base class. We need to specify the variable names and a pdf function to initialize this class.
```
import numpy as np
from scipy.special import beta
# Two-variable Dirichlet distribution with alpha = (1, 2)
def dirichlet_pdf(x, y):
    return (np.power(x, 1) * np.power(y, 2)) / beta(x, y)

from pgmpy.factors.continuous import ContinuousFactor
dirichlet_factor = ContinuousFactor(['x', 'y'], dirichlet_pdf)
dirichlet_factor.scope(), dirichlet_factor.assignment(5, 6)
```
This class supports methods like **marginalize, reduce, product and divide**, just as with the discrete factor classes. One caveat is that when many variables are involved these methods become inefficient, so we resort to Gaussian or other approximations, which are discussed later.
```
def custom_pdf(x, y, z):
    return z * (np.power(x, 1) * np.power(y, 2)) / beta(x, y)
custom_factor = ContinuousFactor(['x', 'y', 'z'], custom_pdf)
custom_factor.scope(), custom_factor.assignment(1, 2, 3)
custom_factor.reduce([('y', 2)])
custom_factor.scope(), custom_factor.assignment(1, 3)
from scipy.stats import multivariate_normal
std_normal_pdf = lambda *x: multivariate_normal.pdf(x, [0, 0], [[1, 0], [0, 1]])
std_normal = ContinuousFactor(['x1', 'x2'], std_normal_pdf)
std_normal.scope(), std_normal.assignment([1, 1])
std_normal.marginalize(['x2'])
std_normal.scope(), std_normal.assignment(1)
sn_pdf1 = lambda x: multivariate_normal.pdf([x], [0], [[1]])
sn_pdf2 = lambda x1,x2: multivariate_normal.pdf([x1, x2], [0, 0], [[1, 0], [0, 1]])
sn1 = ContinuousFactor(['x2'], sn_pdf1)
sn2 = ContinuousFactor(['x1', 'x2'], sn_pdf2)
sn3 = sn1 * sn2
sn4 = sn2 / sn1
sn3.assignment(0, 0), sn4.assignment(0, 0)
```
The ContinuousFactor class also has a method **discretize** that takes a pgmpy Discretizer class as input. It will output a list of discrete probability masses, or a Factor or TabularCPD object, depending upon the discretization method used. Although we do not have built-in discretization algorithms for multivariate distributions for now, users can always define their own Discretizer class by subclassing the pgmpy.BaseDiscretizer class.
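Setting pgmpy's Discretizer API aside, the idea itself is easy to sketch in plain NumPy: approximate the pdf's integral over each of a fixed number of equal-width bins (midpoint rule here) and renormalize. The function and bin choices below are illustrative only, not pgmpy's implementation:

```python
import numpy as np

def discretize_pdf(pdf, low, high, cardinality):
    """Approximate a 1-D pdf by probability masses on equal-width bins."""
    edges = np.linspace(low, high, cardinality + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    widths = np.diff(edges)
    masses = pdf(mids) * widths   # midpoint rule for each bin's integral
    return masses / masses.sum()  # renormalize away the truncation error

std_normal = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
masses = discretize_pdf(std_normal, -3.0, 3.0, 12)
```

The resulting `masses` array is a valid discrete distribution: it sums to one and is symmetric about zero, peaking in the two central bins.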
### Joint Gaussian Distributions
In its most common representation, a multivariate Gaussian distribution over X1, ..., Xn is characterized by an n-dimensional mean vector μ and a symmetric n x n covariance matrix Σ. The density function is most often defined as:
$$
p(x) = \dfrac{1}{(2\pi)^{n/2}|Σ|^{1/2}} \exp[-0.5(x-μ)^TΣ^{-1}(x-μ)]
$$
The class pgmpy.JointGaussianDistribution provides its representation. This is derived from the class pgmpy.ContinuousFactor. We need to specify the variable names, a mean vector, and a covariance matrix for its initialization. It will automatically compute the pdf function given these parameters.
```
from pgmpy.factors.distributions import GaussianDistribution as JGD
dis = JGD(['x1', 'x2', 'x3'], np.array([[1], [-3], [4]]),
np.array([[4, 2, -2], [2, 5, -5], [-2, -5, 8]]))
dis.variables
dis.mean
dis.covariance
dis.pdf([0,0,0])
```
This class overrides the basic operation methods **(marginalize, reduce, normalize, product and divide)**, as these operations are more efficient here than in its parent class. Most of these operations involve a matrix inversion, which is O(n^3) with respect to the number of variables.
```
dis1 = JGD(['x1', 'x2', 'x3'], np.array([[1], [-3], [4]]),
np.array([[4, 2, -2], [2, 5, -5], [-2, -5, 8]]))
dis2 = JGD(['x3', 'x4'], [1, 2], [[2, 3], [5, 6]])
dis3 = dis1 * dis2
dis3.variables
dis3.mean
dis3.covariance
```
The other methods can also be used in a similar fashion.
### Canonical Factors
While the joint Gaussian representation is useful for certain sampling algorithms, a closer look reveals that it cannot be used directly in the sum-product algorithms. Why? Because operations like product and reduce, as mentioned above, involve matrix inversions at each step.
So, in order to compactly describe the intermediate factors in a Gaussian network without the costly matrix inversions at each step, a simple parametric representation is used known as the Canonical Factor. This representation is closed under the basic operations used in inference: factor product, factor division, factor reduction, and marginalization. Thus, we can define a set of simple data structures that allow the inference process to be performed. Moreover, the integration operation required by marginalization is always well defined, and it is guaranteed to produce a finite integral under certain conditions; when it is well defined, it has a simple analytical solution.
A canonical form C(X; K, h, g) is defined as:
$$C(X; K, h, g) = \exp(-0.5X^TKX + h^TX + g)$$
We can represent every Gaussian as a canonical form. Rewriting the joint Gaussian pdf, we obtain
N(μ; Σ) = C(K, h, g), where:
$$
K = Σ^{-1}
$$
$$
h = Σ^{-1}μ
$$
$$
g = -0.5μ^TΣ^{-1}μ - log((2π)^{n/2}|Σ|^{1/2})
$$
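These conversion formulas can be verified numerically. Here is a small NumPy sketch, independent of pgmpy and using an arbitrary made-up μ and Σ, checking that the canonical form reproduces the Gaussian density:

```python
import numpy as np

mu = np.array([1.0, -2.0])
sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
n = len(mu)

# Canonical parameters, following the formulas above
K = np.linalg.inv(sigma)
h = K @ mu
g = -0.5 * mu @ K @ mu - np.log(
    (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(sigma)))

def canonical_form(x):
    # C(x; K, h, g) = exp(-0.5 x^T K x + h^T x + g)
    return np.exp(-0.5 * x @ K @ x + h @ x + g)

def gaussian_pdf(x):
    # Ordinary multivariate normal density N(mu, sigma)
    d = x - mu
    norm = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(sigma))
    return np.exp(-0.5 * d @ K @ d) / norm

x = np.array([0.3, 0.7])
assert np.isclose(canonical_form(x), gaussian_pdf(x))
```

Expanding the quadratic in the exponent of the normal density shows why this works: the cross term gives h = Σ⁻¹μ and the constant terms collect into g.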
Similar to the JointGaussianDistribution class, the CanonicalFactor class is also derived from the ContinuousFactor class, but with its own implementations of the methods required for the sum-product algorithms, which are much more efficient than its parent class's methods. Let us have a look at the API of a few methods in this class.
```
from pgmpy.factors.continuous import CanonicalDistribution
phi1 = CanonicalDistribution(['x1', 'x2', 'x3'],
np.array([[1, -1, 0], [-1, 4, -2], [0, -2, 4]]),
np.array([[1], [4], [-1]]), -2)
phi2 = CanonicalDistribution(['x1', 'x2'], np.array([[3, -2], [-2, 4]]),
np.array([[5], [-1]]), 1)
phi3 = phi1 * phi2
phi3.variables
phi3.h
phi3.K
phi3.g
```
This class also has a method, to_joint_gaussian, to convert the canonical representation back into the joint Gaussian distribution.
```
phi = CanonicalDistribution(['x1', 'x2'], np.array([[3, -2], [-2, 4]]),
np.array([[5], [-1]]), 1)
jgd = phi.to_joint_gaussian()
jgd.variables
jgd.covariance
jgd.mean
```
### Linear Gaussian CPD
A linear Gaussian conditional probability distribution is defined over a continuous variable all of whose parents are also continuous. The mean of the variable depends linearly on the values of its parents, while its variance is independent of them.
For example,
$$
P(Y ; x1, x2, x3) = N(β_1x_1 + β_2x_2 + β_3x_3 + β_0 ; σ^2)
$$
Let Y be a linear Gaussian of its parents X1,...,Xk:
$$
p(Y | x) = N(β_0 + β^T x ; σ^2)
$$
Assume that X1, ..., Xk are jointly Gaussian with distribution N(μ; Σ). Then the distribution of Y is a normal distribution p(Y) where:
$$
μ_Y = β_0 + β^Tμ
$$
$$
σ_Y^2 = σ^2 + β^TΣβ
$$
The joint distribution over {X, Y} is a normal distribution where:
$$Cov[X_i; Y] = {\sum_{j=1}^{k} β_jΣ_{i,j}}$$
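The mean and variance formulas above can be sanity-checked by simulation; the β, μ, and Σ values in this sketch are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
beta0 = 1.0
beta = np.array([2.0, -1.0])
mu = np.array([0.5, 1.5])
sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
noise_var = 0.25

# Sample X ~ N(mu, sigma) and Y = beta0 + beta^T X + Gaussian noise
X = rng.multivariate_normal(mu, sigma, size=200_000)
Y = beta0 + X @ beta + rng.normal(0.0, np.sqrt(noise_var), size=len(X))

mu_Y = beta0 + beta @ mu                 # analytic mean of Y
var_Y = noise_var + beta @ sigma @ beta  # analytic variance of Y

print(mu_Y, var_Y)        # analytic values: 0.5 and 5.05
print(Y.mean(), Y.var())  # empirical estimates, close to the above
```

With 200,000 samples the empirical mean and variance of Y agree with the analytic values to a few percent.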
For its representation, pgmpy has a class named LinearGaussianCPD in the module pgmpy.factors.continuous. To instantiate an object of this class, one needs to provide the variable name, the value of the β_0 term, the variance, a list of the parent variable names, and a list of the coefficient values of the linear equation (beta_vector). The list of parent variable names and the beta_vector are optional and default to None.
```
# For P(Y| X1, X2, X3) = N(-2x1 + 3x2 + 7x3 + 0.2; 9.6)
from pgmpy.factors.continuous import LinearGaussianCPD
cpd = LinearGaussianCPD('Y', [0.2, -2, 3, 7], 9.6, ['X1', 'X2', 'X3'])
print(cpd)
```
A Gaussian Bayesian network is defined as a network all of whose variables are continuous and all of whose CPDs are linear Gaussians. These networks are of particular interest as they are an alternate form of representation of the joint Gaussian distribution.
These networks are implemented as the LinearGaussianBayesianNetwork class in the module pgmpy.models.continuous. This class is a subclass of the BayesianModel class in pgmpy.models and inherits most of its methods. It has a special method, to_joint_gaussian, that returns an equivalent JointGaussianDistribution object for the model.
```
from pgmpy.models import LinearGaussianBayesianNetwork
model = LinearGaussianBayesianNetwork([('x1', 'x2'), ('x2', 'x3')])
cpd1 = LinearGaussianCPD('x1', [1], 4)
cpd2 = LinearGaussianCPD('x2', [-5, 0.5], 4, ['x1'])
cpd3 = LinearGaussianCPD('x3', [4, -1], 3, ['x2'])
# This is a hack due to a bug in pgmpy (LinearGaussianCPD
# doesn't have `variables` attribute but `add_cpds` function
# wants to check that...)
cpd1.variables = [*cpd1.evidence, cpd1.variable]
cpd2.variables = [*cpd2.evidence, cpd2.variable]
cpd3.variables = [*cpd3.evidence, cpd3.variable]
model.add_cpds(cpd1, cpd2, cpd3)
jgd = model.to_joint_gaussian()
jgd.variables
jgd.mean
jgd.covariance
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Load text with tf.data
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/text"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/load_data/text.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/load_data/text.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/load_data/text.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
This tutorial demonstrates two ways to load and preprocess text.
- First, you will use Keras utilities and layers. If you are new to TensorFlow, you should start with these.
- Then, you will use `tf.data.TextLineDataset` to load examples from text files. `TextLineDataset` is designed to create a dataset from a text file, in which each example is a line of text from the original file. This is useful for text data that is primarily line-based (for example, poetry or error logs).
```
# Be sure you're using the stable versions of both tf and tf-text, for binary compatibility.
!pip uninstall -y tensorflow tf-nightly keras
!pip install -q -U tf-nightly
!pip install -q -U tensorflow-text-nightly
import collections
import pathlib
import re
import string
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import losses
from tensorflow.keras import preprocessing
from tensorflow.keras import utils
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
import tensorflow_datasets as tfds
import tensorflow_text as tf_text
```
## Example 1: Predict the tag for a Stack Overflow question
As a first example, you will download a dataset of programming questions from Stack Overflow. Each question ("How do I sort a dictionary by value?") is labeled with exactly one tag (`Python`, `CSharp`, `JavaScript`, or `Java`). Your task is to develop a model that predicts the tag for a question. This is an example of multi-class classification, an important and widely applicable kind of machine learning problem.
### Download and explore the dataset
Next, download the dataset and explore the directory structure.
```
data_url = 'https://storage.googleapis.com/download.tensorflow.org/data/stack_overflow_16k.tar.gz'
dataset_dir = utils.get_file(
origin=data_url,
untar=True,
cache_dir='stack_overflow',
cache_subdir='')
dataset_dir = pathlib.Path(dataset_dir).parent
list(dataset_dir.iterdir())
train_dir = dataset_dir/'train'
list(train_dir.iterdir())
```
The `train/csharp`, `train/java`, `train/python` and `train/javascript` directories contain many text files, each of which is a Stack Overflow question. Print one of the files to examine the data.
```
sample_file = train_dir/'python/1755.txt'
with open(sample_file) as f:
    print(f.read())
```
### Load the dataset
Next, you will load the data off disk and prepare it into a format suitable for training. To do so, you will use the [text_dataset_from_directory](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text_dataset_from_directory) utility to create a labeled `tf.data.Dataset`. `tf.data` is a powerful collection of tools for building input pipelines.
`preprocessing.text_dataset_from_directory` expects a directory structure as follows.
```
train/
...csharp/
......1.txt
......2.txt
...java/
......1.txt
......2.txt
...javascript/
......1.txt
......2.txt
...python/
......1.txt
......2.txt
```
When running a machine learning experiment, it is a best practice to divide your dataset into three splits: [train](https://developers.google.com/machine-learning/glossary#training_set), [validation](https://developers.google.com/machine-learning/glossary#validation_set), and [test](https://developers.google.com/machine-learning/glossary#test-set). The Stack Overflow dataset has already been divided into train and test, but it lacks a validation set. Create a validation set using an 80:20 split of the training data with the `validation_split` argument below.
```
batch_size = 32
seed = 42
raw_train_ds = preprocessing.text_dataset_from_directory(
train_dir,
batch_size=batch_size,
validation_split=0.2,
subset='training',
seed=seed)
```
As you can see above, there are 8,000 examples in the training folder, of which you will use 80% (6,400) for training. As you will see shortly, you can train a model by passing a `tf.data.Dataset` directly to `model.fit`. First, iterate over the dataset and print out a few examples.
Note: To increase the difficulty of the classification problem, the dataset author replaced occurrences of the words *Python*, *CSharp*, *JavaScript*, and *Java* in the programming questions with the word *blank*.
```
for text_batch, label_batch in raw_train_ds.take(1):
    for i in range(10):
        print("Question: ", text_batch.numpy()[i])
        print("Label:", label_batch.numpy()[i])
```
The labels are `0`, `1`, `2`, or `3`. To see which of these correspond to which string label, check the `class_names` property on the dataset.
```
for i, label in enumerate(raw_train_ds.class_names):
    print("Label", i, "corresponds to", label)
```
Next, you will create validation and test datasets. You will use the remaining 1,600 reviews from the training set for validation.
Note: When using the `validation_split` and `subset` arguments, make sure to either specify a random seed or pass `shuffle=False`, so that the validation and training splits do not overlap.
```
raw_val_ds = preprocessing.text_dataset_from_directory(
train_dir,
batch_size=batch_size,
validation_split=0.2,
subset='validation',
seed=seed)
test_dir = dataset_dir/'test'
raw_test_ds = preprocessing.text_dataset_from_directory(
test_dir, batch_size=batch_size)
```
### Prepare the dataset for training
Note: The preprocessing APIs used in this section are experimental in TensorFlow 2.3 and subject to change.
Next, you will standardize, tokenize, and vectorize the data using the `preprocessing.TextVectorization` layer.
- Standardization refers to preprocessing the text, typically to remove punctuation or HTML elements in order to simplify the dataset.
- Tokenization refers to splitting strings into tokens (for example, splitting a sentence into individual words by splitting on whitespace).
- Vectorization refers to converting tokens into numbers so they can be fed into a neural network.
All of these tasks can be accomplished with this layer. You can learn more about each of them in the [API doc](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization).
- The default standardization converts text to lowercase and removes punctuation.
- The default tokenizer splits on whitespace.
- The default vectorization mode is `int`, which outputs integer indices (one per token). This mode can be used to build models that take word order into account. You can also use other modes, like `binary`, to build bag-of-words models.
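Those three default behaviors are easy to illustrate in plain Python. This sketch is not the TextVectorization internals; the tiny vocabulary is made up, with index 1 standing in for out-of-vocabulary tokens:

```python
import re

def standardize(text):
    # lowercase and strip punctuation
    return re.sub(r"[^\w\s]", "", text.lower())

def tokenize(text):
    # split on whitespace
    return text.split()

def vectorize(tokens, vocab):
    # map each token to its integer index; unknown tokens map to 1
    return [vocab.get(tok, 1) for tok in tokens]

vocab = {"how": 2, "do": 3, "i": 4, "sort": 5, "a": 6, "dict": 7}
tokens = tokenize(standardize("How do I sort a dict?"))
print(vectorize(tokens, vocab))  # [2, 3, 4, 5, 6, 7]
```

The real layer performs the same three steps, but learns its vocabulary from the data via `adapt`.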
You will build two models to learn more about these modes. First, you will use `binary` mode to build a bag-of-words model. Then, you will use `int` mode with a 1D ConvNet.
```
VOCAB_SIZE = 10000
binary_vectorize_layer = TextVectorization(
max_tokens=VOCAB_SIZE,
output_mode='binary')
```
For `int` mode, in addition to the maximum vocabulary size, you need to set an explicit maximum sequence length, which will cause the layer to pad or truncate sequences to exactly `sequence_length` values.
```
MAX_SEQUENCE_LENGTH = 250
int_vectorize_layer = TextVectorization(
max_tokens=VOCAB_SIZE,
output_mode='int',
output_sequence_length=MAX_SEQUENCE_LENGTH)
```
Next, call `adapt` to fit the state of the preprocessing layers to the dataset. This will cause the model to build an index of strings to integers.
Note: It's important to only use your training data when calling adapt (using the test set would leak information).
```
# Make a text-only dataset (without labels), then call adapt
train_text = raw_train_ds.map(lambda text, labels: text)
binary_vectorize_layer.adapt(train_text)
int_vectorize_layer.adapt(train_text)
```
See the result of using these layers to preprocess data:
```
def binary_vectorize_text(text, label):
    text = tf.expand_dims(text, -1)
    return binary_vectorize_layer(text), label

def int_vectorize_text(text, label):
    text = tf.expand_dims(text, -1)
    return int_vectorize_layer(text), label
# Retrieve a batch (of 32 reviews and labels) from the dataset
text_batch, label_batch = next(iter(raw_train_ds))
first_question, first_label = text_batch[0], label_batch[0]
print("Question", first_question)
print("Label", first_label)
print("'binary' vectorized question:",
binary_vectorize_text(first_question, first_label)[0])
print("'int' vectorized question:",
int_vectorize_text(first_question, first_label)[0])
```
As shown above, `binary` mode returns an array denoting which tokens exist at least once in the input, while `int` mode replaces each token with an integer, thereby preserving their order. You can look up the token (string) that each integer corresponds to by calling `.get_vocabulary()` on the layer.
```
print("1289 ---> ", int_vectorize_layer.get_vocabulary()[1289])
print("313 ---> ", int_vectorize_layer.get_vocabulary()[313])
print("Vocabulary size: {}".format(len(int_vectorize_layer.get_vocabulary())))
```
You are nearly ready to train your model. As a final preprocessing step, apply the `TextVectorization` layers you created earlier to the train, validation, and test datasets.
```
binary_train_ds = raw_train_ds.map(binary_vectorize_text)
binary_val_ds = raw_val_ds.map(binary_vectorize_text)
binary_test_ds = raw_test_ds.map(binary_vectorize_text)
int_train_ds = raw_train_ds.map(int_vectorize_text)
int_val_ds = raw_val_ds.map(int_vectorize_text)
int_test_ds = raw_test_ds.map(int_vectorize_text)
```
### Configure the dataset for performance
These are two important methods you should use when loading data to make sure that I/O does not become blocking.
`.cache()` keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.
`.prefetch()` overlaps data preprocessing and model execution while training.
You can learn more about both methods, as well as how to cache data to disk, in the [data performance guide](https://www.tensorflow.org/guide/data_performance).
```
AUTOTUNE = tf.data.AUTOTUNE
def configure_dataset(dataset):
    return dataset.cache().prefetch(buffer_size=AUTOTUNE)
binary_train_ds = configure_dataset(binary_train_ds)
binary_val_ds = configure_dataset(binary_val_ds)
binary_test_ds = configure_dataset(binary_test_ds)
int_train_ds = configure_dataset(int_train_ds)
int_val_ds = configure_dataset(int_val_ds)
int_test_ds = configure_dataset(int_test_ds)
```
### Train the model
It's time to create your neural network. For the `binary` vectorized data, you will train a simple bag-of-words linear model.
```
binary_model = tf.keras.Sequential([layers.Dense(4)])
binary_model.compile(
loss=losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='adam',
metrics=['accuracy'])
history = binary_model.fit(
binary_train_ds, validation_data=binary_val_ds, epochs=10)
```
Next, you will use the `int` vectorized layer to build a 1D ConvNet.
```
def create_model(vocab_size, num_labels):
    model = tf.keras.Sequential([
        layers.Embedding(vocab_size, 64, mask_zero=True),
        layers.Conv1D(64, 5, padding="valid", activation="relu", strides=2),
        layers.GlobalMaxPooling1D(),
        layers.Dense(num_labels)
    ])
    return model
# vocab_size is VOCAB_SIZE + 1 since 0 is used additionally for padding.
int_model = create_model(vocab_size=VOCAB_SIZE + 1, num_labels=4)
int_model.compile(
loss=losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='adam',
metrics=['accuracy'])
history = int_model.fit(int_train_ds, validation_data=int_val_ds, epochs=5)
```
Compare the two models:
```
print("Linear model on binary vectorized data:")
print(binary_model.summary())
print("ConvNet model on int vectorized data:")
print(int_model.summary())
```
Evaluate both models on the test data:
```
binary_loss, binary_accuracy = binary_model.evaluate(binary_test_ds)
int_loss, int_accuracy = int_model.evaluate(int_test_ds)
print("Binary model accuracy: {:2.2%}".format(binary_accuracy))
print("Int model accuracy: {:2.2%}".format(int_accuracy))
```
Note: This example dataset represents a rather simple classification problem. More complex datasets and problems bring out subtle but significant differences in preprocessing strategies and model architectures. Be sure to try out different hyperparameters and epochs to compare various approaches.
### Export the model
In the code above, you applied the `TextVectorization` layer to the dataset before feeding text to the model. If you want to make your model capable of processing raw strings (for example, to simplify deployment), you can include the `TextVectorization` layer inside your model. To do so, create a new model using the weights you have just trained:
```
export_model = tf.keras.Sequential(
[binary_vectorize_layer, binary_model,
layers.Activation('sigmoid')])
export_model.compile(
loss=losses.SparseCategoricalCrossentropy(from_logits=False),
optimizer='adam',
metrics=['accuracy'])
# Test it with `raw_test_ds`, which yields raw strings
loss, accuracy = export_model.evaluate(raw_test_ds)
print("Accuracy: {:2.2%}".format(accuracy))
```
Now your model can take raw strings as input and predict a score for each label using `model.predict`. Define a function to find the label with the maximum score:
```
def get_string_labels(predicted_scores_batch):
predicted_int_labels = tf.argmax(predicted_scores_batch, axis=1)
predicted_labels = tf.gather(raw_train_ds.class_names, predicted_int_labels)
return predicted_labels
```
### Run inference on new data
```
inputs = [
"how do I extract keys from a dict into a list?", # python
"debug public static void main(string[] args) {...}", # java
]
predicted_scores = export_model.predict(inputs)
predicted_labels = get_string_labels(predicted_scores)
for input, label in zip(inputs, predicted_labels):
print("Question: ", input)
print("Predicted label: ", label.numpy())
```
Including the text preprocessing logic inside your model enables you to export the model for production, which simplifies deployment and reduces the potential for [training/serving skew](https://developers.google.com/machine-learning/guides/rules-of-ml#training-serving_skew).
There is a performance difference to keep in mind when choosing where to apply the `TextVectorization` layer. Using it outside of your model enables asynchronous CPU processing and buffering of your data when training on a GPU. So, if you're training your model on a GPU, you probably want to go with this option to get the best performance while developing your model, then switch to including the `TextVectorization` layer inside your model when you're ready to prepare for deployment.
Visit this [tutorial](https://www.tensorflow.org/tutorials/keras/save_and_load) to learn more about saving models.
## Loading text into a dataset
Below is an example of using `tf.data.TextLineDataset` to load examples from text files, and `tf.text` to preprocess the data. This example uses three different English translations of Homer's Iliad, and trains a model to identify the translator given a single line of text.
### Download and explore the dataset
The translators of the three texts are:
- [William Cowper](https://en.wikipedia.org/wiki/William_Cowper) — [text](https://storage.googleapis.com/download.tensorflow.org/data/illiad/cowper.txt)
- [Edward, Earl of Derby](https://en.wikipedia.org/wiki/Edward_Smith-Stanley,_14th_Earl_of_Derby) — [text](https://storage.googleapis.com/download.tensorflow.org/data/illiad/derby.txt)
- [Samuel Butler](https://en.wikipedia.org/wiki/Samuel_Butler_%28novelist%29) — [text](https://storage.googleapis.com/download.tensorflow.org/data/illiad/butler.txt)
The text files used in this tutorial have undergone some typical preprocessing tasks such as removing headers, footers, line numbers, and chapter titles. Download these preprocessed files locally:
```
DIRECTORY_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
FILE_NAMES = ['cowper.txt', 'derby.txt', 'butler.txt']
for name in FILE_NAMES:
text_dir = utils.get_file(name, origin=DIRECTORY_URL + name)
parent_dir = pathlib.Path(text_dir).parent
list(parent_dir.iterdir())
```
### Load the dataset
You will use `TextLineDataset`, which is designed to create a `tf.data.Dataset` from a text file in which each example is a line of text from the original file; `text_dataset_from_directory`, by contrast, treats all of a file's contents as a single example. `TextLineDataset` is useful for text data that is primarily line-based (for example, poetry or error logs).
Iterate through these files, loading each one into its own dataset. Each example needs to be individually labeled, so use `tf.data.Dataset.map` to apply a labeler function to each one. This will iterate over every example in the dataset, returning (`example, label`) pairs.
```
def labeler(example, index):
return example, tf.cast(index, tf.int64)
labeled_data_sets = []
for i, file_name in enumerate(FILE_NAMES):
lines_dataset = tf.data.TextLineDataset(str(parent_dir/file_name))
labeled_dataset = lines_dataset.map(lambda ex: labeler(ex, i))
labeled_data_sets.append(labeled_dataset)
```
Next, combine these labeled datasets into a single dataset, and shuffle it:
```
BUFFER_SIZE = 50000
BATCH_SIZE = 64
TAKE_SIZE = 5000
all_labeled_data = labeled_data_sets[0]
for labeled_dataset in labeled_data_sets[1:]:
all_labeled_data = all_labeled_data.concatenate(labeled_dataset)
all_labeled_data = all_labeled_data.shuffle(
BUFFER_SIZE, reshuffle_each_iteration=False)
```
Print out a few examples as before. The dataset hasn't been batched yet, so each entry in `all_labeled_data` corresponds to one data point:
```
for text, label in all_labeled_data.take(10):
print("Sentence: ", text.numpy())
print("Label:", label.numpy())
```
### Prepare the dataset for training
Instead of using the Keras `TextVectorization` layer to preprocess the text dataset, you will now use the [`tf.text` API](https://www.tensorflow.org/tutorials/tensorflow_text/intro) to standardize and tokenize the data, build a vocabulary, and use `StaticVocabularyTable` to map tokens to integers to feed to the model.
While tf.text provides various tokenizers, you will use `UnicodeScriptTokenizer` to tokenize the dataset. Define a function that converts the text to lower-case and tokenizes it, and use `tf.data.Dataset.map` to apply the tokenization to the dataset:
```
tokenizer = tf_text.UnicodeScriptTokenizer()
def tokenize(text, unused_label):
lower_case = tf_text.case_fold_utf8(text)
return tokenizer.tokenize(lower_case)
tokenized_ds = all_labeled_data.map(tokenize)
```
You can iterate over the dataset and print out a few tokenized examples:
```
for text_batch in tokenized_ds.take(5):
print("Tokens: ", text_batch.numpy())
```
Next, build a vocabulary by sorting tokens by frequency and keeping the top `VOCAB_SIZE` tokens:
```
tokenized_ds = configure_dataset(tokenized_ds)
vocab_dict = collections.defaultdict(lambda: 0)
for toks in tokenized_ds.as_numpy_iterator():
for tok in toks:
vocab_dict[tok] += 1
vocab = sorted(vocab_dict.items(), key=lambda x: x[1], reverse=True)
vocab = [token for token, count in vocab]
vocab = vocab[:VOCAB_SIZE]
vocab_size = len(vocab)
print("Vocab size: ", vocab_size)
print("First five vocab entries:", vocab[:5])
```
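The frequency-count-and-truncate step above can also be sketched with the standard library's `collections.Counter`; the token list here is a made-up toy stream, not the tutorial's data:

```python
from collections import Counter

# Toy token stream standing in for the tokenized dataset.
tokens = ["the", "of", "the", "and", "the", "of", "sword"]

TOY_VOCAB_SIZE = 2  # stands in for VOCAB_SIZE
# most_common sorts by descending frequency, like the sorted() call above.
vocab = [tok for tok, _ in Counter(tokens).most_common(TOY_VOCAB_SIZE)]
print(vocab)  # ['the', 'of']
```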
To convert the tokens into integers, use the `vocab` set to create a `StaticVocabularyTable`. You will map tokens to integers in the range [`2`, `vocab_size + 2`]. As with the `TextVectorization` layer, `0` is reserved to denote padding and `1` is reserved to denote an out-of-vocabulary (OOV) token.
```
keys = vocab
values = range(2, len(vocab) + 2) # reserve 0 for padding, 1 for OOV
init = tf.lookup.KeyValueTensorInitializer(
keys, values, key_dtype=tf.string, value_dtype=tf.int64)
num_oov_buckets = 1
vocab_table = tf.lookup.StaticVocabularyTable(init, num_oov_buckets)
```
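The reserved-index convention (0 for padding, 1 for OOV, real tokens starting at 2) can be illustrated with a plain Python dict; `toy_vocab` here is a hypothetical three-word vocabulary, not the one built above:

```python
# A minimal sketch of the token-to-integer mapping performed by the lookup table.
toy_vocab = ["the", "of", "and"]  # tokens sorted by frequency
PAD, OOV = 0, 1                   # index 0 reserved for padding, 1 for out-of-vocabulary

# Map each known token to an integer in [2, len(toy_vocab) + 2).
table = {tok: i + 2 for i, tok in enumerate(toy_vocab)}

def lookup(token):
    # Unknown tokens fall into the single OOV bucket.
    return table.get(token, OOV)

print([lookup(t) for t in ["the", "and", "zeus"]])  # [2, 4, 1]
```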
Finally, define a function to standardize, tokenize, and vectorize the dataset using the tokenizer and lookup table:
```
def preprocess_text(text, label):
standardized = tf_text.case_fold_utf8(text)
tokenized = tokenizer.tokenize(standardized)
vectorized = vocab_table.lookup(tokenized)
return vectorized, label
```
You can try this on a single example and inspect the output:
```
example_text, example_label = next(iter(all_labeled_data))
print("Sentence: ", example_text.numpy())
vectorized_text, example_label = preprocess_text(example_text, example_label)
print("Vectorized sentence: ", vectorized_text.numpy())
```
Now run the preprocess function on the dataset using `tf.data.Dataset.map`:
```
all_encoded_data = all_labeled_data.map(preprocess_text)
```
### Split the dataset into training and test sets
The Keras `TextVectorization` layer also batches and pads the vectorized data. Padding is required because the examples inside a batch need to be the same size and shape, but the examples in these datasets are not all the same size: each line of text has a different number of words. `tf.data.Dataset` supports splitting and padded-batching datasets:
```
train_data = all_encoded_data.skip(VALIDATION_SIZE).shuffle(BUFFER_SIZE)
validation_data = all_encoded_data.take(VALIDATION_SIZE)
train_data = train_data.padded_batch(BATCH_SIZE)
validation_data = validation_data.padded_batch(BATCH_SIZE)
```
`validation_data` and `train_data` are not collections of (`example, label`) pairs, but collections of batches. Each batch is a pair of (*many examples*, *many labels*) represented as arrays. To illustrate:
```
sample_text, sample_labels = next(iter(validation_data))
print("Text batch shape: ", sample_text.shape)
print("Label batch shape: ", sample_labels.shape)
print("First text example: ", sample_text[0])
print("First label example: ", sample_labels[0])
```
Since you use `0` for padding and `1` for out-of-vocabulary (OOV) tokens, the vocabulary size has increased by two:
```
vocab_size += 2
```
As before, configure the datasets for better performance:
```
train_data = configure_dataset(train_data)
validation_data = configure_dataset(validation_data)
```
### Train the model
You can train a model on this dataset as before:
```
model = create_model(vocab_size=vocab_size, num_labels=3)
model.compile(
optimizer='adam',
loss=losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_data, validation_data=validation_data, epochs=3)
loss, accuracy = model.evaluate(validation_data)
print("Loss: ", loss)
print("Accuracy: {:2.2%}".format(accuracy))
```
### Export the model
To make the model capable of taking raw strings as input, create a `TextVectorization` layer that performs the same steps as your custom preprocessing function. Since you have already trained a vocabulary, you can use `set_vocabulary` instead of `adapt`, which trains a new vocabulary:
```
preprocess_layer = TextVectorization(
max_tokens=vocab_size,
standardize=tf_text.case_fold_utf8,
split=tokenizer.tokenize,
output_mode='int',
output_sequence_length=MAX_SEQUENCE_LENGTH)
preprocess_layer.set_vocabulary(vocab)
export_model = tf.keras.Sequential(
[preprocess_layer, model,
layers.Activation('sigmoid')])
export_model.compile(
loss=losses.SparseCategoricalCrossentropy(from_logits=False),
optimizer='adam',
metrics=['accuracy'])
# Create a test dataset of raw strings
test_ds = all_labeled_data.take(VALIDATION_SIZE).batch(BATCH_SIZE)
test_ds = configure_dataset(test_ds)
loss, accuracy = export_model.evaluate(test_ds)
print("Loss: ", loss)
print("Accuracy: {:2.2%}".format(accuracy))
```
The loss and accuracy of the model on the encoded validation set and of the exported model on the raw validation set are the same, as expected.
### Run inference on new data
```
inputs = [
"Join'd to th' Ionians with their flowing robes,", # Label: 1
"the allies, and his armour flashed about him so that he seemed to all", # Label: 2
"And with loud clangor of his arms he fell.", # Label: 0
]
predicted_scores = export_model.predict(inputs)
predicted_labels = tf.argmax(predicted_scores, axis=1)
for input, label in zip(inputs, predicted_labels):
print("Question: ", input)
print("Predicted label: ", label.numpy())
```
## Download more datasets using TensorFlow Datasets (TFDS)
You can download many more datasets from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). As an example, download the [IMDB Large Movie Review dataset](https://www.tensorflow.org/datasets/catalog/imdb_reviews) and use it to train a model for sentiment classification:
```
train_ds = tfds.load(
'imdb_reviews',
split='train[:80%]',
batch_size=BATCH_SIZE,
shuffle_files=True,
as_supervised=True)
val_ds = tfds.load(
'imdb_reviews',
split='train[80%:]',
batch_size=BATCH_SIZE,
shuffle_files=True,
as_supervised=True)
```
Print a few examples:
```
for review_batch, label_batch in val_ds.take(1):
for i in range(5):
print("Review: ", review_batch[i].numpy())
print("Label: ", label_batch[i].numpy())
```
You can now preprocess the data and train a model as before.
Note: You will use `losses.BinaryCrossentropy` instead of `losses.SparseCategoricalCrossentropy` for this model, since this is a binary classification problem.
### Prepare the dataset for training
```
vectorize_layer = TextVectorization(
max_tokens=VOCAB_SIZE,
output_mode='int',
output_sequence_length=MAX_SEQUENCE_LENGTH)
# Make a text-only dataset (without labels), then call adapt
train_text = train_ds.map(lambda text, labels: text)
vectorize_layer.adapt(train_text)
def vectorize_text(text, label):
text = tf.expand_dims(text, -1)
return vectorize_layer(text), label
train_ds = train_ds.map(vectorize_text)
val_ds = val_ds.map(vectorize_text)
# Configure datasets for performance as before
train_ds = configure_dataset(train_ds)
val_ds = configure_dataset(val_ds)
```
### Train the model
```
model = create_model(vocab_size=VOCAB_SIZE + 1, num_labels=1)
model.summary()
model.compile(
loss=losses.BinaryCrossentropy(from_logits=True),
optimizer='adam',
metrics=['accuracy'])
history = model.fit(train_ds, validation_data=val_ds, epochs=3)
loss, accuracy = model.evaluate(val_ds)
print("Loss: ", loss)
print("Accuracy: {:2.2%}".format(accuracy))
```
### Export the model
```
export_model = tf.keras.Sequential(
[vectorize_layer, model,
layers.Activation('sigmoid')])
export_model.compile(
loss=losses.BinaryCrossentropy(from_logits=False),
optimizer='adam',
metrics=['accuracy'])
# 0 --> negative review
# 1 --> positive review
inputs = [
"This is a fantastic movie.",
"This is a bad movie.",
"This movie was so bad that it was good.",
"I will never say yes to watching this movie.",
]
predicted_scores = export_model.predict(inputs)
predicted_labels = [int(round(x[0])) for x in predicted_scores]
for input, label in zip(inputs, predicted_labels):
print("Question: ", input)
print("Predicted label: ", label)
```
## Conclusion
This tutorial demonstrated several ways to load and preprocess text. As a next step, you can explore the other tutorials on the website, or download a new dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview).
<center>
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/Logos/organization_logo/organization_logo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# K-Nearest Neighbors
Estimated time needed: **25** minutes
## Objectives
After completing this lab you will be able to:
- Use K Nearest neighbors to classify data
In this Lab you will load a customer dataset, fit the data, and use K-Nearest Neighbors to predict a data point. But what is **K-Nearest Neighbors**?
**K-Nearest Neighbors** is a supervised learning algorithm in which the training data consists of data points labeled with their classification. To classify a new point, the algorithm considers the 'K' points nearest to it and assigns the majority class among them.
### Here's a visualization of the K-Nearest Neighbors algorithm.
<img src="https://ibm.box.com/shared/static/mgkn92xck0z05v7yjq8pqziukxvc2461.png">
In this case, we have data points of Class A and B. We want to predict what the star (test data point) is. If we consider a k value of 3 (3 nearest data points) we will obtain a prediction of Class B. Yet if we consider a k value of 6, we will obtain a prediction of Class A.
In this sense, it is important to consider the value of k. But hopefully from this diagram, you should get a sense of what the K-Nearest Neighbors algorithm is. It considers the 'K' Nearest Neighbors (points) when it predicts the classification of the test point.
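The idea above can be sketched in a few lines of plain Python — a toy nearest-neighbors implementation with Euclidean distance and a majority vote, not the scikit-learn classifier used later in this lab:

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    # Distance from the query to every training point.
    dists = [(math.dist(x, query), label) for x, label in zip(train_X, train_y)]
    # Take the labels of the k nearest points and majority-vote.
    k_labels = [label for _, label in sorted(dists)[:k]]
    return Counter(k_labels).most_common(1)[0][0]

# Toy data: class 'A' clustered near the origin, class 'B' near (5, 5).
X = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
y = ['A', 'A', 'A', 'B', 'B', 'B']
print(knn_predict(X, y, (0.5, 0.5), k=3))  # A
```

As in the diagram, moving the query point toward (5, 5) flips the majority vote to 'B'.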
<h1>Table of contents</h1>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ol>
<li><a href="#about_dataset">About the dataset</a></li>
<li><a href="#visualization_analysis">Data Visualization and Analysis</a></li>
<li><a href="#classification">Classification</a></li>
</ol>
</div>
<br>
<hr>
```
!pip install scikit-learn==0.23.1
```
Let's load the required libraries.
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import preprocessing
%matplotlib inline
```
<div id="about_dataset">
<h2>About the dataset</h2>
</div>
Imagine a telecommunications provider has segmented its customer base by service usage patterns, categorizing the customers into four groups. If demographic data can be used to predict group membership, the company can customize offers for individual prospective customers. This is a classification problem: given the dataset, with predefined labels, we need to build a model to predict the class of a new or unknown case.
The example focuses on using demographic data, such as region, age, and marital status, to predict usage patterns.
The target field, called **custcat**, has four possible values that correspond to the four customer groups, as follows:
1- Basic Service
2- E-Service
3- Plus Service
4- Total Service
Our objective is to build a classifier to predict the class of unknown cases. We will use a specific type of classification called K-nearest neighbors.
Let's download the dataset. We will use `!wget` to download it from IBM Object Storage.
```
!wget -O teleCust1000t.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%203/data/teleCust1000t.csv
```
**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
### Load Data From CSV File
```
df = pd.read_csv('teleCust1000t.csv')
df.head()
```
<div id="visualization_analysis">
<h2>Data Visualization and Analysis</h2>
</div>
#### Let’s see how many of each class is in our data set
```
df['custcat'].value_counts()
```
#### 281 Plus Service, 266 Basic-service, 236 Total Service, and 217 E-Service customers
You can easily explore your data using visualization techniques:
```
df.hist(column='income', bins=50)
```
### Feature set
Let's define a feature set, X:
```
df.columns
```
To use the scikit-learn library, we have to convert the Pandas data frame to a NumPy array:
```
X = df[['region', 'tenure', 'age', 'marital', 'address', 'income', 'ed', 'employ', 'retire', 'gender', 'reside']].values  #.astype(float)
X[0:5]
```
What are our labels?
```
y = df['custcat'].values
y[0:5]
```
## Normalize Data
Data standardization gives the data zero mean and unit variance. It is good practice, especially for algorithms such as KNN, which are based on the distance between cases:
```
X = preprocessing.StandardScaler().fit(X).transform(X.astype(float))
X[0:5]
```
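Under the hood, standardization just subtracts each column's mean and divides by its standard deviation. Here is a standard-library sketch of the same z-score idea on one toy column (illustrative only; the lab uses scikit-learn's `StandardScaler`):

```python
import statistics

col = [10.0, 20.0, 30.0, 40.0]     # a toy feature column
mu = statistics.fmean(col)         # column mean
sigma = statistics.pstdev(col)     # population standard deviation, as StandardScaler uses
z = [(v - mu) / sigma for v in col]  # z-scores: zero mean, unit variance

print([round(v, 3) for v in z])
print(round(statistics.fmean(z), 3), round(statistics.pstdev(z), 3))  # 0.0 1.0
```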
### Train Test Split
Out-of-sample accuracy is the percentage of correct predictions that the model makes on data that the model has NOT been trained on. Training and testing on the same dataset will most likely yield low out-of-sample accuracy, due to the likelihood of overfitting.
It is important that our models have a high out-of-sample accuracy, because the purpose of any model, of course, is to make correct predictions on unknown data. So how can we improve out-of-sample accuracy? One way is to use an evaluation approach called train/test split.
Train/test split involves splitting the dataset into training and testing sets, which are mutually exclusive. You then train with the training set and test with the testing set.
This provides a more accurate evaluation of out-of-sample accuracy, because the testing set is not part of the data that was used to train the model. It is more realistic for real-world problems.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)
```
<div id="classification">
<h2>Classification</h2>
</div>
<h3>K nearest neighbor (KNN)</h3>
#### Import library
Classifier implementing the k-nearest neighbors vote.
```
from sklearn.neighbors import KNeighborsClassifier
```
### Training
Let's start the algorithm with k=4 for now:
```
k = 4
#Train Model and Predict
neigh = KNeighborsClassifier(n_neighbors = k).fit(X_train,y_train)
neigh
```
### Predicting
We can use the model to predict the test set:
```
yhat = neigh.predict(X_test)
yhat[0:5]
```
### Accuracy evaluation
In multilabel classification, the **accuracy classification score** computes subset accuracy: the labels predicted for a sample must exactly match the corresponding true labels. Essentially, it measures how closely the actual and predicted labels match in the test set.
```
from sklearn import metrics
print("Train set Accuracy: ", metrics.accuracy_score(y_train, neigh.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, yhat))
```
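Subset accuracy reduces to the fraction of predictions that exactly match the true label. Here is a standard-library sketch of what `metrics.accuracy_score` computes, on made-up toy labels:

```python
def accuracy_score(y_true, y_pred):
    # Fraction of samples whose predicted label exactly matches the true label.
    matches = sum(t == p for t, p in zip(y_true, y_pred))
    return matches / len(y_true)

y_true = [1, 2, 3, 3, 2]
y_pred = [1, 2, 3, 1, 2]
print(accuracy_score(y_true, y_pred))  # 0.8
```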
## Practice
Can you build the model again, but this time with k=6?
```
# write your code here
```
<details><summary>Click here for the solution</summary>
```python
k = 6
neigh6 = KNeighborsClassifier(n_neighbors = k).fit(X_train,y_train)
yhat6 = neigh6.predict(X_test)
print("Train set Accuracy: ", metrics.accuracy_score(y_train, neigh6.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, yhat6))
```
</details>
#### What about other K?
K in KNN is the number of nearest neighbors to examine. It must be specified by the user. So, how can we choose the right value for K?
The general solution is to reserve a part of your data for testing the accuracy of the model. Then choose k=1, use the training part for modeling, and calculate the accuracy of prediction using all samples in your test set. Repeat this process, increasing k, and see which k is best for your model.
We can calculate the accuracy of KNN for different Ks.
```
Ks = 10
mean_acc = np.zeros((Ks-1))
std_acc = np.zeros((Ks-1))
for n in range(1,Ks):
#Train Model and Predict
neigh = KNeighborsClassifier(n_neighbors = n).fit(X_train,y_train)
yhat=neigh.predict(X_test)
mean_acc[n-1] = metrics.accuracy_score(y_test, yhat)
std_acc[n-1]=np.std(yhat==y_test)/np.sqrt(yhat.shape[0])
mean_acc
```
#### Plot model accuracy for Different number of Neighbors
```
plt.plot(range(1,Ks),mean_acc,'g')
plt.fill_between(range(1,Ks),mean_acc - 1 * std_acc,mean_acc + 1 * std_acc, alpha=0.10)
plt.fill_between(range(1,Ks),mean_acc - 3 * std_acc,mean_acc + 3 * std_acc, alpha=0.10,color="green")
plt.legend(('Accuracy ', '+/- 1xstd','+/- 3xstd'))
plt.ylabel('Accuracy ')
plt.xlabel('Number of Neighbors (K)')
plt.tight_layout()
plt.show()
print( "The best accuracy was with", mean_acc.max(), "with k=", mean_acc.argmax()+1)
```
<h2>Want to learn more?</h2>
IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="https://www.ibm.com/analytics/spss-statistics-software">SPSS Modeler</a>
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://www.ibm.com/cloud/watson-studio">Watson Studio</a>
### Thank you for completing this lab!
## Author
Saeed Aghabozorgi
### Other Contributors
<a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a>
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ---------- | ---------------------------------- |
| 2021-01-21 | 2.4 | Lakshmi | Updated sklearn library |
| 2020-11-20 | 2.3 | Lakshmi | Removed unused imports |
| 2020-11-17 | 2.2 | Lakshmi | Changed plot function of KNN |
| 2020-11-03 | 2.1 | Lakshmi | Changed URL of csv |
| 2020-08-27 | 2.0 | Lavanya | Moved lab to course repo in GitLab |
<h3 align="center">© IBM Corporation 2020. All rights reserved.</h3>
```
# Make the code below work under both Python 2 and Python 3
from __future__ import division, print_function, unicode_literals
# Check that the Python version is 3.5 or above
import sys
assert sys.version_info >= (3, 5)
# Check that the sklearn version is 0.20 or above
import sklearn
assert sklearn.__version__ >= "0.20"
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import os
# Make every run reproduce the same results as this notebook
np.random.seed(42)
# Make matplotlib figures look nicer
%matplotlib inline
import matplotlib as mpl
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Set the path for saving figures
PROJECT_ROOT_DIR = "."
IMAGE_PATH = os.path.join(PROJECT_ROOT_DIR, "images")
os.makedirs(IMAGE_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True):
'''
Run to save the current figure automatically.
:param fig_id: figure name
'''
path = os.path.join(PROJECT_ROOT_DIR, "images", fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
# Ignore irrelevant warnings (Scipy issue #5998)
import warnings
warnings.filterwarnings(action="ignore", category=FutureWarning, module='sklearn', lineno=196)
# Load the dataset
df = pd.read_excel('Test_2.xlsx')
df.head()
# Check the dataset for missing values, to see whether imputation is needed
df.info()
'''
# Imputation
df.fillna(0, inplace=True)
# Or refer to the imputation approach used earlier for polynomial regression
'''
# Separate the true class labels from the features
data = df.drop('TRUE VALUE', axis=1)
labels = df['TRUE VALUE'].copy()
np.unique(labels)
labels
# Get the number of samples and features
n_samples, n_features = data.shape
# Get the number of class labels
n_labels = len(np.unique(labels))
np.unique(labels)
labels.value_counts()
```
# Clustering with the KMeans Algorithm
```
from sklearn import metrics
def get_marks(estimator, data, name=None, kmeans=None, af=None):
"""
Get evaluation scores. Five of them require the true class labels of the dataset and three do not; see readme.txt.
For KMeans, the silhouette score and inertia are usually sufficient.
:param estimator: the model
:param name: the initialization method
:param data: the feature dataset
"""
estimator.fit(data)
print(20 * '*', name, 20 * '*')
if kmeans:
print("Mean Inertia Score: ", estimator.inertia_)
elif af:
cluster_centers_indices = estimator.cluster_centers_indices_
print("The estimated number of clusters: ", len(cluster_centers_indices))
print("Homogeneity Score: ", metrics.homogeneity_score(labels, estimator.labels_))
print("Completeness Score: ", metrics.completeness_score(labels, estimator.labels_))
print("V Measure Score: ", metrics.v_measure_score(labels, estimator.labels_))
print("Adjusted Rand Score: ", metrics.adjusted_rand_score(labels, estimator.labels_))
print("Adjusted Mutual Info Score: ", metrics.adjusted_mutual_info_score(labels, estimator.labels_))
print("Calinski Harabasz Score: ", metrics.calinski_harabasz_score(data, estimator.labels_))
print("Silhouette Score: ", metrics.silhouette_score(data, estimator.labels_))
from sklearn.cluster import KMeans
# Cluster with k-means, using n_clusters=2 and two different initialization methods ('k-means++' and 'random')
km1 = KMeans(init='k-means++', n_clusters=n_labels-1, n_init=10, random_state=42)
km2 = KMeans(init='random', n_clusters=n_labels-1, n_init=10, random_state=42)
print("n_labels: %d \t n_samples: %d \t n_features: %d" % (n_labels, n_samples, n_features))
get_marks(km1, data, name="k-means++", kmeans=True)
get_marks(km2, data, name="random", kmeans=True)
# Cluster assignment of each sample after clustering
km1.labels_
# The distinct cluster labels
np.unique(km1.labels_)
# Write the clustering results into the original table
df['km_clustering_label'] = km1.labels_
# Export the original table as a CSV file
#df.to_csv('result.csv')
# Unlike data, df is the original dataset
df.head()
from sklearn.model_selection import GridSearchCV
# Use GridSearchCV to search for the best parameters automatically; KMeans is used here as a classification model
params = {'init':('k-means++', 'random'), 'n_clusters':[2, 3, 4, 5, 6], 'n_init':[5, 10, 15]}
cluster = KMeans(random_state=42)
# Use the adjusted Rand index (adjusted_rand_score) for scoring; see readme.txt for details
km_best_model = GridSearchCV(cluster, params, cv=3, scoring='adjusted_rand_score',
verbose=1, n_jobs=-1)
# Since an external evaluation metric is used, the true class labels of the original dataset are required
km_best_model.fit(data, labels)
# Parameters of the best model
km_best_model.best_params_
# Score of the best model
km_best_model.best_score_
# The best model obtained
km3 = km_best_model.best_estimator_
km3
# Get the eight evaluation scores of the best model; see readme.txt for their meanings
get_marks(km3, data, name="k-means++", kmeans=True)
from sklearn.metrics import silhouette_score
from sklearn.metrics import calinski_harabasz_score
from matplotlib import pyplot as plt
def plot_scores(init, max_k, data, labels):
'''Plot the three score curves for different KMeans initialization methods.
:param init: initialization method, either 'k-means++' or 'random'
:param max_k: maximum number of cluster centers
:param data: the feature dataset
:param labels: the true-label dataset
'''
i = []
inertia_scores = []
y_silhouette_scores = []
y_calinski_harabaz_scores = []
for k in range(2, max_k):
kmeans_model = KMeans(n_clusters=k, random_state=1, init=init, n_init=10)
pred = kmeans_model.fit_predict(data)
i.append(k)
inertia_scores.append(kmeans_model.inertia_)
y_silhouette_scores.append(silhouette_score(data, pred))
y_calinski_harabaz_scores.append(calinski_harabasz_score(data, pred))
new = [inertia_scores, y_silhouette_scores, y_calinski_harabaz_scores]
for j in range(len(new)):
plt.figure(j+1)
plt.plot(i, new[j], 'bo-')
plt.xlabel('n_clusters')
if j == 0:
name = 'inertia'
elif j == 1:
name = 'silhouette'
else:
name = 'calinski_harabasz'
plt.ylabel('{}_scores'.format(name))
plt.title('{}_scores with {} init'.format(name, init))
save_fig('{} with {}'.format(name, init))
plot_scores('k-means++', 18, data, labels)
plot_scores('random', 10, data, labels)
from sklearn.metrics import silhouette_samples, silhouette_score
from matplotlib.ticker import FixedLocator, FixedFormatter
def plot_silhouette_diagram(clusterer, X, show_xlabels=True,
show_ylabels=True, show_title=True):
"""
Plot the silhouette diagram.
:param clusterer: a trained clustering model (one whose number of clusters is set in advance; the code can be adapted for models where it is not)
:param X: the feature-only dataset
:param show_xlabels: if True, add x-axis labels
:param show_ylabels: if True, add y-axis labels
:param show_title: if True, add the plot title
"""
y_pred = clusterer.labels_
silhouette_coefficients = silhouette_samples(X, y_pred)
silhouette_average = silhouette_score(X, y_pred)
padding = len(X) // 30
pos = padding
ticks = []
for i in range(clusterer.n_clusters):
coeffs = silhouette_coefficients[y_pred == i]
coeffs.sort()
color = mpl.cm.Spectral(i / clusterer.n_clusters)
plt.fill_betweenx(np.arange(pos, pos + len(coeffs)), 0, coeffs,
facecolor=color, edgecolor=color, alpha=0.7)
ticks.append(pos + len(coeffs) // 2)
pos += len(coeffs) + padding
plt.axvline(x=silhouette_average, color="red", linestyle="--")
plt.gca().yaxis.set_major_locator(FixedLocator(ticks))
plt.gca().yaxis.set_major_formatter(FixedFormatter(range(clusterer.n_clusters)))
if show_xlabels:
plt.gca().set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
plt.xlabel("Silhouette Coefficient")
else:
plt.tick_params(labelbottom=False)
if show_ylabels:
plt.ylabel("Cluster")
if show_title:
plt.title("init:{} n_cluster:{}".format(clusterer.init, clusterer.n_clusters))
plt.figure(figsize=(15, 4))
plt.subplot(121)
plot_silhouette_diagram(km1, data)
plt.subplot(122)
plot_silhouette_diagram(km3, data, show_ylabels=False)
save_fig("silhouette_diagram")
```
# MiniBatch KMeans
```
from sklearn.cluster import MiniBatchKMeans
# Time the KMeans algorithm
%timeit KMeans(n_clusters=3).fit(data)
# Time the MiniBatchKMeans algorithm (same k, for a fair comparison)
%timeit MiniBatchKMeans(n_clusters=3).fit(data)
from timeit import timeit
times = np.empty((100, 2))
inertias = np.empty((100, 2))
for k in range(1, 101):
kmeans = KMeans(n_clusters=k, random_state=42)
minibatch_kmeans = MiniBatchKMeans(n_clusters=k, random_state=42)
print("\r Training: {}/{}".format(k, 100), end="")
times[k-1, 0] = timeit("kmeans.fit(data)", number=10, globals=globals())
times[k-1, 1] = timeit("minibatch_kmeans.fit(data)", number=10, globals=globals())
inertias[k-1, 0] = kmeans.inertia_
inertias[k-1, 1] = minibatch_kmeans.inertia_
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.plot(range(1, 101), inertias[:, 0], "r--", label="K-Means")
plt.plot(range(1, 101), inertias[:, 1], "b.-", label="Mini-batch K-Means")
plt.xlabel("$k$", fontsize=16)
plt.ylabel("Inertia", fontsize=14)
plt.legend(fontsize=14)
plt.subplot(122)
plt.plot(range(1, 101), times[:, 0], "r--", label="K-Means")
plt.plot(range(1, 101), times[:, 1], "b.-", label="Mini-batch K-Means")
plt.xlabel("$k$", fontsize=16)
plt.ylabel("Training time (seconds)", fontsize=14)
plt.axis([1, 100, 0, 6])
plt.legend(fontsize=14)
save_fig("minibatch_kmeans_vs_kmeans")
plt.show()
```
# Clustering After Dimensionality Reduction
```
from sklearn.decomposition import PCA
# Reduce the features from 11 dimensions to 3 with plain PCA
pca1 = PCA(n_components=n_labels)
pca1.fit(data)
km4 = KMeans(init=pca1.components_, n_clusters=n_labels, n_init=1)  # n_init must be 1 when init is an array
get_marks(km4, data, name="PCA-based KMeans", kmeans=True)
# Check the dimensionality: the training set has been reduced to 3 dimensions
len(pca1.components_)
# Reduce the features to 2 dimensions with plain PCA, for visualization in the 2D plane
pca2 = PCA(n_components=2)
reduced_data = pca2.fit_transform(data)
# Cluster with k-means, using n_clusters=3 and the 'k-means++' initialization
kmeans1 = KMeans(init="k-means++", n_clusters=3, n_init=3)
kmeans2 = KMeans(init="random", n_clusters=3, n_init=3)
kmeans1.fit(reduced_data)
kmeans2.fit(reduced_data)
# The feature dimensionality of the training set is now 2
len(pca2.components_)
# The 2-dimensional feature values (after reduction)
reduced_data
# Coordinates of the 3 cluster centers
kmeans1.cluster_centers_
from matplotlib.colors import ListedColormap
def plot_data(X, real_tag=None):
"""
Scatter plot.
:param X: the feature-only dataset
:param real_tag: if given, color the points by their class
"""
try:
if not real_tag:
plt.plot(X[:, 0], X[:, 1], 'k.', markersize=2)
except ValueError:
types = list(np.unique(real_tag))
for i in range(len(types)):
plt.plot(X[:, 0][real_tag==types[i]], X[:, 1][real_tag==types[i]],
'.', label="{}".format(types[i]), markersize=3)
plt.legend()
def plot_centroids(centroids, circle_color='w', cross_color='k'):
"""
画出簇中心
:param centroids: 簇中心坐标
:param circle_color: 圆圈的颜色
:param cross_color: 叉的颜色
"""
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='o', s=30, zorder=10, linewidths=8,
color=circle_color, alpha=0.9)
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x', s=50, zorder=11, linewidths=50,
color=cross_color, alpha=1)
def plot_centroids_labels(clusterer):
labels = np.unique(clusterer.labels_)
centroids = clusterer.cluster_centers_
for i in range(centroids.shape[0]):
t = str(labels[i])
plt.text(centroids[i, 0]-1, centroids[i, 1]-1, t, fontsize=25,
zorder=10, bbox=dict(boxstyle='round', fc='yellow', alpha=0.5))
def plot_decision_boundaried(clusterer, X, tag=None, resolution=1000,
show_centroids=True, show_xlabels=True,
show_ylabels=True, show_title=True,
show_centroids_labels=False):
"""
画出决策边界,并填色
:param clusterer: 训练好的聚类模型(能提前设置簇中心数量或不能提前设置都可以)
:param X: 只含特征值的数据集
:param tag: 只含真实分类信息的数据集,有值,则给散点上色
:param resolution: 类似图片分辨率,给最小的单位上色
:param show_centroids: 为真,画出簇中心
:param show_centroids_labels: 为真,标注出该簇中心的标签
"""
mins = X.min(axis=0) - 0.1
maxs = X.max(axis=0) + 0.1
xx, yy = np.meshgrid(np.linspace(mins[0], maxs[0], resolution),
np.linspace(mins[1], maxs[1], resolution))
Z = clusterer.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Fill colors can be customized with a color code or a named color
# custom_cmap = ListedColormap(["#fafab0", "#9898ff", "#a0faa0"])
plt.contourf(xx, yy, Z, extent=(mins[0], maxs[0], mins[1], maxs[1]),
cmap="Pastel2")
plt.contour(xx, yy, Z, extent=(mins[0], maxs[0], mins[1], maxs[1]),
colors='k')
try:
if not tag:
plot_data(X)
except ValueError:
plot_data(X, real_tag=tag)
if show_centroids:
plot_centroids(clusterer.cluster_centers_)
if show_centroids_labels:
plot_centroids_labels(clusterer)
if show_xlabels:
plt.xlabel(r"$x_1$", fontsize=14)
else:
plt.tick_params(labelbottom=False)
if show_ylabels:
plt.ylabel(r"$x_2$", fontsize=14, rotation=0)
else:
plt.tick_params(labelleft=False)
if show_title:
plt.title("init:{} n_cluster:{}".format(clusterer.init, clusterer.n_clusters))
plt.figure(figsize=(15, 4))
plt.subplot(121)
plot_decision_boundaried(kmeans1, reduced_data, tag=labels)
plt.subplot(122)
plot_decision_boundaried(kmeans2, reduced_data, show_centroids_labels=True)
save_fig("real_tag_vs_non")
plt.show()
kmeans3 = KMeans(init="k-means++", n_clusters=3, n_init=3)
kmeans4 = KMeans(init="k-means++", n_clusters=4, n_init=3)
kmeans5 = KMeans(init="k-means++", n_clusters=5, n_init=3)
kmeans6 = KMeans(init="k-means++", n_clusters=6, n_init=3)
kmeans3.fit(reduced_data)
kmeans4.fit(reduced_data)
kmeans5.fit(reduced_data)
kmeans6.fit(reduced_data)
plt.figure(figsize=(15, 8))
plt.subplot(221)
plot_decision_boundaried(kmeans3, reduced_data, show_xlabels=False, show_centroids_labels=True)
plt.subplot(222)
plot_decision_boundaried(kmeans4, reduced_data, show_ylabels=False, show_xlabels=False)
plt.subplot(223)
plot_decision_boundaried(kmeans5, reduced_data, show_centroids_labels=True)
plt.subplot(224)
plot_decision_boundaried(kmeans6, reduced_data, show_ylabels=False)
save_fig("reduced_and_cluster")
plt.show()
```
# Clustering with Affinity Propagation (AP)
```
from sklearn.cluster import AffinityPropagation
# Apply the Affinity Propagation (AP) clustering algorithm
af = AffinityPropagation(preference=-500, damping=0.8)
af.fit(data)
# Get the indices of the cluster centers
cluster_centers_indices = af.cluster_centers_indices_
cluster_centers_indices
# Get the number of distinct cluster labels
af_labels = af.labels_
np.unique(af_labels)
get_marks(af, data=data, af=True)
# Write the AP clustering results into the original table
df['ap_clustering_label'] = af.labels_
# Export the original table as a CSV file
df.to_csv('test2_result.csv')
# The last two columns hold the labels from the two clustering algorithms
df.head()
from sklearn.model_selection import GridSearchCV
# from sklearn.model_selection import RandomizedSearchCV
# Use GridSearchCV to find the best parameters automatically; if it takes too long (about 4.7 min), use RandomizedSearchCV instead. Here AP is used for a classification task.
params = {'preference':[-50, -100, -150, -200], 'damping':[0.5, 0.6, 0.7, 0.8, 0.9]}
cluster = AffinityPropagation()
af_best_model = GridSearchCV(cluster, params, cv=5, scoring='adjusted_rand_score', verbose=1, n_jobs=-1)
af_best_model.fit(data, labels)
# Parameter settings of the best model
af_best_model.best_params_
# Score of the best model, using the adjusted Rand index (adjusted_rand_score)
af_best_model.best_score_
# Retrieve the best model
af1 = af_best_model.best_estimator_
af1
# Score of the best model
get_marks(af1, data=data, af=True)
"""
import joblib  # sklearn.externals.joblib is deprecated; use the standalone joblib package
# Save the best model in pkl format
joblib.dump(af1, "af1.pkl")
"""
"""
# Load the best model back from the pkl file
my_model_loaded = joblib.load("af1.pkl")
"""
"""
my_model_loaded
"""
from sklearn.decomposition import PCA
# Reduce the features from 11 dimensions to 3 with ordinary PCA
pca3 = PCA(n_components=n_labels)
reduced_data = pca3.fit_transform(data)
af2 = AffinityPropagation(preference=-200, damping=0.8)
get_marks(af2, reduced_data, name="PCA-based AF", af=True)
```
# Stratified sampling based on the clustering results
```
# data2 is the dataset with the true labels removed (but containing the clustering results)
data2 = df.drop("TRUE VALUE", axis=1)
data2.head()
# Inspect the label values produced by the k-means clustering: two classes
data2['km_clustering_label'].hist()
from sklearn.model_selection import StratifiedShuffleSplit
# Stratified sampling based on the k-means clustering results
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(data2, data2["km_clustering_label"]):
strat_train_set = data2.loc[train_index]
strat_test_set = data2.loc[test_index]
def clustering_result_proportions(data):
"""
After stratified sampling, the proportion of each class label in the training or test set.
:param data: training or test set, from pure random sampling or stratified sampling
"""
return data["km_clustering_label"].value_counts() / len(data)
# Label proportions in the stratified test set
clustering_result_proportions(strat_test_set)
# Label proportions in the stratified training set
clustering_result_proportions(strat_train_set)
# Label proportions in the full dataset
clustering_result_proportions(data2)
from sklearn.model_selection import train_test_split
# Pure random sampling
random_train_set, random_test_set = train_test_split(data2, test_size=0.2, random_state=42)
# Label proportions in the full dataset, the stratified test set, and the random test set
compare_props = pd.DataFrame({
"Overall": clustering_result_proportions(data2),
"Stratified": clustering_result_proportions(strat_test_set),
"Random": clustering_result_proportions(random_test_set),
}).sort_index()
# Percentage error of each test set's label proportions relative to the full dataset
compare_props["Rand. %error"] = 100 * compare_props["Random"] / compare_props["Overall"] - 100
compare_props["Strat. %error"] = 100 * compare_props["Stratified"] / compare_props["Overall"] - 100
compare_props
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
def get_classification_marks(model, data, labels, train_index, test_index):
"""
获取分类模型(二元或多元分类器)的评分:F1值
:param data: 只含有特征值的数据集
:param labels: 只含有标签值的数据集
:param train_index: 分层抽样获取的训练集中数据的索引
:param test_index: 分层抽样获取的测试集中数据的索引
:return: F1评分值
"""
m = model(random_state=42)
m.fit(data.loc[train_index], labels.loc[train_index])
test_labels_predict = m.predict(data.loc[test_index])
score = f1_score(labels.loc[test_index], test_labels_predict, average="weighted")
return score
# 用分层抽样后的训练集训练分类模型后的评分值
start_marks = get_classification_marks(LogisticRegression, data, labels, strat_train_set.index, strat_test_set.index)
start_marks
# 用纯随机抽样后的训练集训练分类模型后的评分值
random_marks = get_classification_marks(LogisticRegression, data, labels, random_train_set.index, random_test_set.index)
random_marks
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone, BaseEstimator, TransformerMixin
class stratified_cross_val_score(BaseEstimator, TransformerMixin):
"""实现基于分层抽样的k折交叉验证"""
def __init__(self, model, data, labels, random_state=0, cv=5):
"""
:model: 训练的模型(回归或分类)
:data: 只含特征值的完整数据集
:labels: 只含标签值的完整数据集
:random_state: 模型的随机种子值
:cv: 交叉验证的次数
"""
self.model = model
self.data = data
self.labels = labels
self.random_state = random_state
self.cv = cv
self.score = []  # stores the model score on each fold's test set
self.i = 0
def fit(self, X, y):
"""
:param X: 含有特征值和聚类结果的完整数据集
:param y: 含有聚类结果的完整数据集
:return: 每一折交叉验证的评分
"""
skfolds = StratifiedKFold(n_splits=self.cv, shuffle=True, random_state=self.random_state)  # shuffle=True is required when passing random_state in recent scikit-learn
for train_index, test_index in skfolds.split(X, y):
# Clone the model to be trained (classification or regression)
clone_model = clone(self.model)
strat_X_train_folds = self.data.loc[train_index]
strat_y_train_folds = self.labels.loc[train_index]
strat_X_test_fold = self.data.loc[test_index]
strat_y_test_fold = self.labels.loc[test_index]
# Train the model
clone_model.fit(strat_X_train_folds, strat_y_train_folds)
# Predictions (here, the class labels predicted by the classifier)
test_labels_pred = clone_model.predict(strat_X_test_fold)
# The F1 score is used here for classification; substitute a suitable metric for regression
score_fold = f1_score(self.labels.loc[test_index], test_labels_pred, average="weighted")
# Avoid appending duplicate values if fit is called more than cv times
if self.i < self.cv:
self.score.append(score_fold)
else:
pass
self.i += 1
return self.score
def transform(self, X, y=None):
return self
def mean(self):
"""返回交叉验证评分的平均值"""
return np.array(self.score).mean()
def std(self):
"""返回交叉验证评分的标准差"""
return np.array(self.score).std()
from sklearn.linear_model import SGDClassifier
# Classification model
clf_model = SGDClassifier(max_iter=5, tol=-np.inf, random_state=42)
# Cross-validation based on stratified sampling; data contains only feature values, labels only label values
clf_cross_val = stratified_cross_val_score(clf_model, data, labels, cv=5, random_state=42)
# data2 is the full dataset containing both the feature values and the clustering results
clf_cross_val_score = clf_cross_val.fit(data2, data2["km_clustering_label"])
# Score for each cross-validation fold
clf_cross_val.score
# Mean of the cross-validation scores
clf_cross_val.mean()
# Standard deviation of the cross-validation scores
clf_cross_val.std()
clf_cross_val.std()
```
```
# notebook that produces all plots for UBE3A_HUN_paper
import matplotlib.pyplot as plt
import config
%matplotlib inline
modes = ['UBE3A','CAMK2D']
# UBE3A and CAMK2D network
# plot the distribution of the fraction of seeds retained in the randomly built networks
# and compare to the fraction of seeds in the real network
for mode in modes:
file_graphs = config.output_path + 'summary_test_signif_' + mode + '_QBCHL_rand_graph_stats.txt'
file1 = open(file_graphs,'r')
entries_graphs = file1.readlines()
file1.close()
fracs_seeds = []
for line in entries_graphs[1:]:
tab_list = str.split(line[:-1],'\t')
seeds_in_n = int(tab_list[2])
seeds_start = int(tab_list[1])
frac_seeds = seeds_in_n/float(seeds_start)
if tab_list[0] == 'real':
real_frac = frac_seeds
else:
fracs_seeds.append(frac_seeds)
plt.hist(fracs_seeds,bins=20,color="grey")
plt.xlabel('Fraction of seed genes in ' + mode + ' network',fontsize=12)
plt.ylabel('Frequency',fontsize=12)
plt.title('Significance of fraction of\nseed genes in ' + mode + ' network',fontsize=14)
plt.xlim([0,1])
ax = plt.gca()
ylim = ax.get_ylim()
xlim = ax.get_xlim()
x = xlim[1] - xlim[0]
y = ylim[1] - ylim[0]
plt.arrow(real_frac,y/4.0,0,y/7.0*(-1),color='red',head_width=x/60.0,head_length=y/20.0)
plt.tight_layout()
plt.savefig(config.plot_path + 'QBCHL_signif_frac_seeds_in_' + mode + '_network.pdf')
plt.show()
num_fracs_larger = [f for f in fracs_seeds if f >= real_frac]
print('Significance of fraction of seed genes that are retained in', mode, 'network:', float(len(num_fracs_larger)) / len(fracs_seeds))
# UBE3A and CAMK2D network
# plot the distribution of the LCC size of the randomly built networks
# and compare to the LCC size of the real network
for m,mode in enumerate(modes):
file_graphs = config.output_path + 'summary_test_signif_' + mode + '_QBCHL_rand_graph_stats.txt'
file1 = open(file_graphs,'r')
entries_graphs = file1.readlines()
file1.close()
lcc_sizes = []
for line in entries_graphs[1:]:
tab_list = str.split(line[:-1],'\t')
lcc_size = int(tab_list[5])
if tab_list[0] == 'real':
real_lcc = lcc_size
else:
lcc_sizes.append(lcc_size)
plt.hist(lcc_sizes,bins=20,color="grey")
plt.xlabel('LCC size of ' + mode + ' network',fontsize=12)
plt.ylabel('Frequency',fontsize=12)
plt.title('Significance of LCC size of ' + mode + ' network',fontsize=14)
plt.xlim([0,220])
ax = plt.gca()
ylim = ax.get_ylim()
xlim = ax.get_xlim()
x = xlim[1] - xlim[0]
y = ylim[1] - ylim[0]
plt.arrow(real_lcc,y/4.0,0,y/7.0*(-1),color='red',head_width=x/60.0,head_length=y/20.0)
plt.tight_layout()
plt.savefig(config.plot_path + 'QBCHL_signif_LCCsize_in_' + mode + '_network.pdf')
plt.show()
num_lccs_larger = [s for s in lcc_sizes if s >= real_lcc]
print('Significance of LCC size of', mode, 'network:', float(len(num_lccs_larger)) / len(lcc_sizes))
# HUN network
# plot the distribution of the LCC size of the randomly built networks
# and compare to the LCC size of the real network
infile_rand = config.output_path + 'HUN_network_QBCHL.node_attributes_rand_lccs.txt'
file1 = open(infile_rand,'r')
entries = file1.readlines()
file1.close()
lcc_sizes = []
for line in entries[1:]:
tab_list = str.split(line[:-1],'\t')
lcc_size = int(tab_list[1])
if tab_list[0] == 'real':
real_lcc = lcc_size
else:
lcc_sizes.append(lcc_size)
plt.hist(lcc_sizes,bins=20,color="grey")
plt.xlabel('LCC size of HUN network',fontsize=12)
plt.ylabel('Frequency',fontsize=12)
plt.title('Significance of LCC size of HUN network',fontsize=14)
plt.xlim([0,200])
ax = plt.gca()
ylim = ax.get_ylim()
xlim = ax.get_xlim()
x = xlim[1] - xlim[0]
y = ylim[1] - ylim[0]
plt.arrow(real_lcc,y/4.0,0,y/7.0*(-1),color='red',head_width=x/60.0,head_length=y/20.0)
plt.tight_layout()
plt.savefig(config.plot_path + 'QBCHL_signif_LCCsize_in_HUN_network.pdf')
plt.show()
num_lccs_larger = [s for s in lcc_sizes if s >= real_lcc]
print('Significance of LCC size of HUN network:', float(len(num_lccs_larger)) / len(lcc_sizes))
# CAMK2D - HUN complex closeness
# draw the distribution of the number of CAMK2D preys that interact with HUN core complex proteins
# as obtained from randomized networks and show where the real observation lies
prefixes = ['CAMK2D_HUN','CAMK2D_HN','CAMK2D_UBE3A']
titles = ['HUN complex','HN complex','UBE3A']
for i,prefix in enumerate(prefixes):
infile_rand = config.output_path + prefix + '_counts_preys_connected_to_core_complex_members_rand_distr.txt'
file1 = open(infile_rand,'r')
entries = file1.readlines()
file1.close()
rand_values = []
for line in entries[1:]:
tab_list = str.split(line[:-1],'\t')
value = int(tab_list[1])
if tab_list[0] == 'real':
real_count = value
else:
rand_values.append(value)
plt.figure(figsize=(5,4))
plt.hist(rand_values,bins=range(7),color="grey",edgecolor='black')
plt.xlabel('Number of CAMK2D preys linked\nto ' + titles[i] + ' core members',size=12)
plt.ylabel('Frequency',size=12)
plt.title('Significance of closeness\nof CAMK2D preys to ' + titles[i] + ' core members',size=14)
plt.xlim([0,7])
ax = plt.gca()
ylim = ax.get_ylim()
xlim = ax.get_xlim()
x = xlim[1] - xlim[0]
y = ylim[1] - ylim[0]
plt.arrow(real_count,y/4.0,0,y/7.0*(-1),color='red',head_width=x/60.0,head_length=y/20.0)
plt.tight_layout()
plt.savefig(config.plot_path + prefix + '_counts_preys_connected_to_core_complex_members.pdf')
plt.show()
num_more = [v for v in rand_values if v >= real_count]
print('Significance of number of CAMK2D preys linked to ' + titles[i] + ' core members:', float(len(num_more)) / len(rand_values))
# CAMK2D - HUN complex closeness
# draw the distribution of the number of CAMK2D preys that interact with HUN complex preys
# as obtained from randomized networks and show where the real observation lies
for i,prefix in enumerate(prefixes):
infile_rand = config.output_path + prefix + '_counts_preys_connected_to_complex_preys_rand_distr.txt'
file1 = open(infile_rand,'r')
entries = file1.readlines()
file1.close()
rand_values = []
for line in entries[1:]:
tab_list = str.split(line[:-1],'\t')
value = int(tab_list[1])
if tab_list[0] == 'real':
real_count = value
else:
rand_values.append(value)
plt.figure(figsize=(5,4))
plt.hist(rand_values,bins=18,color="grey")
plt.xlabel('Number of CAMK2D preys\nlinked to ' + titles[i] + ' preys',size=12)
plt.ylabel('Frequency',size=12)
plt.title('Significance of closeness of\nCAMK2D preys to ' + titles[i] + ' preys',size=14)
plt.xlim([0,60])
ax = plt.gca()
ylim = ax.get_ylim()
xlim = ax.get_xlim()
x = xlim[1] - xlim[0]
y = ylim[1] - ylim[0]
plt.arrow(real_count,y/4.0,0,y/7.0*(-1),color='red',head_width=x/60.0,head_length=y/20.0)
plt.tight_layout()
plt.savefig(config.plot_path + prefix + '_counts_preys_connected_to_complex_preys.pdf')
plt.show()
num_more = [v for v in rand_values if v >= real_count]
print('Significance of number of CAMK2D preys linked to ' + titles[i] + ' preys:', float(len(num_more)) / len(rand_values))
# CAMK2D - HUN complex closeness
# draw the distribution of the number of HUN complex members that interact with
# CAMK2D preys as obtained from randomized networks and show where the real observation lies
for i,prefix in enumerate(prefixes):
infile_rand = config.output_path + prefix + '_counts_complex_preys_connected_to_preys_rand_distr.txt'
file1 = open(infile_rand,'r')
entries = file1.readlines()
file1.close()
rand_values = []
for line in entries[1:]:
tab_list = str.split(line[:-1],'\t')
value = int(tab_list[1])
if tab_list[0] == 'real':
real_count = value
else:
rand_values.append(value)
plt.figure(figsize=(5,4))
plt.hist(rand_values,bins=18,color="grey")
plt.xlabel('Number of ' + titles[i] + ' preys\nthat interact with CAMK2D preys',size=12)
plt.ylabel('Frequency',size=12)
plt.title('Significance of closeness of\n' + titles[i] + ' preys to CAMK2D preys',size=14)
plt.xlim([0,130])
ax = plt.gca()
ylim = ax.get_ylim()
xlim = ax.get_xlim()
x = xlim[1] - xlim[0]
y = ylim[1] - ylim[0]
plt.arrow(real_count,y/4.0,0,y/7.0*(-1),color='red',head_width=x/60.0,head_length=y/20.0)
plt.tight_layout()
plt.savefig(config.plot_path + prefix + '_counts_complex_preys_connected_to_preys.pdf')
plt.show()
num_more = [v for v in rand_values if v >= real_count]
print('Significance of number of ' + titles[i] + ' preys linked to CAMK2D preys:', float(len(num_more)) / len(rand_values))
# significance of closeness of preys per bait
xupper = [14,15,8,150,30,10,60,35,90,20,400]
num_bins = [12,12,6,20,10,4,20,10,20,10,20]
seed_files = ['CAMK2D_seed_file.txt','ECH1_seed_file.txt','ECI2_seed_file.txt','HERC2_seed_file.txt',\
'HIF1AN_seed_file.txt','MAPK6_seed_file.txt','NEURL4_seed_file.txt','UBE3A_seed_file.txt',\
'UBE3A_seed_file_with_proteasome.txt','UBE3A_seed_file_no_Y2H.txt','HUN_complex_seed_file.txt']
for i,seed_file in enumerate(seed_files):
infile_rand = config.output_path + seed_file[:-4] + '_rand_lccs.txt'
file1 = open(infile_rand,'r')
entries = file1.readlines()
file1.close()
lcc_sizes = []
for line in entries[1:]:
tab_list = str.split(line[:-1],'\t')
lcc_size = (float(tab_list[1]))
if tab_list[0] == 'real':
real_lcc = lcc_size
else:
lcc_sizes.append(lcc_size)
plt.hist(lcc_sizes,bins=num_bins[i],color="grey")
plt.xlabel('LCC size of ' + seed_file[:-4],fontsize=12)
plt.ylabel('Frequency',fontsize=12)
plt.title('Significance of LCC size of ' + seed_file[:-4],fontsize=14)
plt.xlim([0,xupper[i]])
ax = plt.gca()
ylim = ax.get_ylim()
xlim = ax.get_xlim()
x = xlim[1] - xlim[0]
y = ylim[1] - ylim[0]
plt.arrow(real_lcc,y/4.0,0,y/7.0*(-1),color='red',head_width=x/60.0,head_length=y/20.0)
plt.tight_layout()
plt.savefig(config.plot_path + 'QBCHL_signif_LCCsize_in_' + seed_file[:-4] + '.pdf')
plt.show()
num_lccs_larger = [s for s in lcc_sizes if s >= real_lcc]
print('Significance of LCC size of ' + seed_file[:-4] + ':', float(len(num_lccs_larger)) / len(lcc_sizes))
```
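The significance values printed throughout this notebook are empirical p-values: the fraction of randomized networks whose statistic is at least as extreme as the real observation. A minimal sketch of that computation, using made-up numbers rather than the values from the files above:

```python
# Hypothetical statistics from randomized networks and one real observation,
# for illustration only (not the values read from the files above)
rand_values = [0.20, 0.30, 0.35, 0.40, 0.55, 0.60]
real_value = 0.50

# Empirical p-value: fraction of randomized draws at least as extreme as the real one
num_at_least = [v for v in rand_values if v >= real_value]
p_value = float(len(num_at_least)) / len(rand_values)
print(p_value)  # → 0.3333333333333333
```

A small p-value means the real network's statistic is rarely matched by chance, which is the reading applied to each histogram above.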
```
import pandas as pd
from datetime import datetime, timedelta
import time
import requests
import numpy as np
import json
import urllib
from pandas.io.json import json_normalize
import re
import os.path
import zipfile
from glob import glob
url ="https://api.usaspending.gov/api/v1/awards/?limit=100"
r = requests.get(url, verify=False)
r.raise_for_status()
type(r)
data = r.json()
meta = data['page_metadata']
data = data['results']
df_API_data = json_normalize(data)
df_API_data.columns
base_url = "https://api.usaspending.gov"
endpt_trans = "/api/v2/search/spending_by_award/?limit=10"
params = {
"filters": {
"time_period": [
{
"start_date": "2016-10-01",
"end_date": "2017-09-30"
}
]
}
}
url = base_url + endpt_trans
r = requests.post(url, json=params)
print(r.status_code, r.reason)
r.raise_for_status()
r.headers
r.request.headers
data = r.json()
meta = data['page_metadata']
data = data['results']
df_trans = json_normalize(data)
currentFY = 2019
n_years_desired = 10
def download_latest_data(currentFY,n_years_desired):
#find latest datestamp on usaspending files
usaspending_base = 'https://files.usaspending.gov/award_data_archive/'
save_path = '../new_data/'
r = requests.get(usaspending_base, allow_redirects=True)
r.raise_for_status()
datestr = re.findall(r'_(\d{8})\.zip', r.text)[0]
for FY in np.arange(currentFY-n_years_desired+1,currentFY+1):
doe_contracts_url = usaspending_base+str(FY)+'_089_Contracts_Full_' + datestr + '.zip'
doe_grants_url = usaspending_base+str(FY)+'_089_Assistance_Full_' + datestr + '.zip'
nsf_grants_url = usaspending_base+str(FY)+'_049_Assistance_Full_' + datestr + '.zip'
doe_sc_url = 'https://science.energy.gov/~/media/_/excel/universities/DOE-SC_Grants_FY'+str(FY)+'.xlsx'
for url in [doe_contracts_url,doe_grants_url,nsf_grants_url,doe_sc_url]:
filename = url.split('/')[-1]
if os.path.exists(save_path+filename): continue
if url == doe_sc_url:
verify='doe_cert.pem'
else:
verify=True
try:
r = requests.get(url, allow_redirects=True,verify=verify)
r.raise_for_status()
except:
print('could not find', url)
continue
# DOE website stupidly returns a 200 HTTP code when displaying 404 page :/
page_not_found_text = 'The page that you have requested was not found.'
if page_not_found_text in r.text:
print('could not find', url)
continue
open(save_path+filename, 'wb+').write(r.content)
zipper = zipfile.ZipFile(save_path+filename,'r')
zipper.extractall(path='../new_data')
print('Data download complete')
def unzip_all():
for unzip_this in glob('../new_data/*.zip'):
zipper = zipfile.ZipFile(unzip_this,'r')
zipper.extractall(path='../new_data')
print('Generating DOE Contract data...')
contract_file_list = glob('../new_data/*089_Contracts*.csv')
contract_df_list = []
for contract_file in contract_file_list:
contract_df_list.append(pd.read_csv(contract_file))
fulldata = pd.concat(contract_df_list,ignore_index=True)
print(len(fulldata))
sc_awarding_offices = ['CHICAGO SERVICE CENTER (OFFICE OF SCIENCE)',
'OAK RIDGE OFFICE (OFFICE OF SCIENCE)',
'SC CHICAGO SERVICE CENTER',
'SC OAK RIDGE OFFICE']
sc_funding_offices = ['CHICAGO SERVICE CENTER (OFFICE OF SCIENCE)',
'OAK RIDGE OFFICE (OFFICE OF SCIENCE)',
'SCIENCE',
'SC OAK RIDGE OFFICE',
'SC CHICAGO SERVICE CENTER'
]
sc_contracts = fulldata[(fulldata['awarding_office_name'].isin(
sc_awarding_offices)) | (fulldata['funding_office_name'].isin(sc_funding_offices))]
print(len(sc_contracts))
#sc_contracts.to_pickle('../cleaned_data/sc_contracts.pkl')
print('Generating NSF Grant data...')
grant_file_list = glob('../new_data/*049_Assistance*.csv')
grant_df_list = []
for grant_file in grant_file_list:
grant_df_list.append(pd.read_csv(grant_file))
fulldata = pd.concat(grant_df_list,ignore_index=True)
len(fulldata)
mps_grants = fulldata[fulldata['cfda_title'] == 'MATHEMATICAL AND PHYSICAL SCIENCES']
len(mps_grants)
mps_grants['recipient_congressional_district'].unique()
mps_grants = mps_grants.dropna(subset=['principal_place_cd'])
strlist = []
for code in mps_grants['principal_place_cd'].values:
if code == 'ZZ':
code = '00'
if len(str(int(code))) < 2:
strlist.append('0' + str(int(code)))
else:
strlist.append(str(int(code)))
mps_grants['cong_dist'] = mps_grants['principal_place_state_code'] + strlist
pd.to_pickle(mps_grants, '../cleaned_data/nsf_mps_grants.pkl')
```
# Understanding Classification and Logistic Regression with Python
## Introduction
This notebook contains a short introduction to the basic principles of classification and logistic regression. A simple Python simulation is used to illustrate these principles. Specifically, the following steps are performed:
- A data set is created. The label takes binary `TRUE` and `FALSE` values. Values for the two features are generated from two bivariate Normal distributions, one for each label class.
- A plot is made of the data set, using color and shape to show the two label classes.
- A plot of a logistic function is computed.
- For each of three data sets, a logistic regression model is computed and scored, and a plot is created using color to show class and shape to show correct and incorrect scoring.
## Create the data set
The code in the cell below computes the two class data set. The feature values for each label level are computed from a bivariate Normal distribution. Run this code and examine the first few rows of the data frame.
```
def sim_log_data(x1, y1, n1, sd1, x2, y2, n2, sd2):
import pandas as pd
import numpy.random as nr
wx1 = nr.normal(loc = x1, scale = sd1, size = n1)
wy1 = nr.normal(loc = y1, scale = sd1, size = n1)
z1 = [1]*n1
wx2 = nr.normal(loc = x2, scale = sd2, size = n2)
wy2 = nr.normal(loc = y2, scale = sd2, size = n2)
z2 = [0]*n2
df1 = pd.DataFrame({'x': wx1, 'y': wy1, 'z': z1})
df2 = pd.DataFrame({'x': wx2, 'y': wy2, 'z': z2})
return pd.concat([df1, df2], axis = 0, ignore_index = True)
sim_data = sim_log_data(1, 1, 50, 1, -1, -1, 50, 1)
sim_data.head()
```
## Plot the data set
The code in the cell below plots the data set using color to show the two classes of the labels. Execute this code and examine the results. Notice that the positions of the points from the two classes overlap.
```
%matplotlib inline
def plot_class(df):
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(5, 5))
fig.clf()
ax = fig.gca()
df[df.z == 1].plot(kind = 'scatter', x = 'x', y = 'y', ax = ax,
alpha = 1.0, color = 'Red', marker = 'x', s = 40)
df[df.z == 0].plot(kind = 'scatter', x = 'x', y = 'y', ax = ax,
alpha = 1.0, color = 'DarkBlue', marker = 'o', s = 40)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_title('Classes vs X and Y')
return 'Done'
plot_class(sim_data)
```
## Plot the logistic function
Logistic regression computes a binary {0,1} score using a logistic function. Values of the logistic function above the cutoff (typically 0.5) are scored as 1 (true), and values below the cutoff are scored as 0 (false). Execute the code and examine the resulting logistic function.
```
def plot_logistic(upper = 6, lower = -6, steps = 100):
import matplotlib.pyplot as plt
import pandas as pd
import math as m
step = float(upper - lower) / float(steps)
x = [lower + x * step for x in range(101)]
y = [m.exp(z)/(1 + m.exp(z)) for z in x]
fig = plt.figure(figsize=(5, 4))
fig.clf()
ax = fig.gca()
ax.plot(x, y, color = 'r')
ax.axvline(0, 0.0, 1.0)
ax.axhline(0.5, lower, upper)
ax.set_xlabel('X')
ax.set_ylabel('Probability of positive response')
ax.set_title('Logistic function for two-class classification')
return 'done'
plot_logistic()
```
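As a quick numerical check (a sketch, separate from the plotting code above), the 0.5 cutoff on the logistic function corresponds to a cutoff at zero on its argument:

```python
import math

def logistic(z):
    # Standard logistic (sigmoid) function
    return math.exp(z) / (1 + math.exp(z))

# Scores above the 0.5 cutoff correspond to positive arguments of the logistic
scores = [logistic(z) for z in (-2.0, 0.0, 2.0)]
scored = [1 if s > 0.5 else 0 for s in scores]
print(scored)  # → [0, 0, 1]; the z = 0 midpoint sits exactly on the 0.5 cutoff
```

This is why moving the cutoff away from 0.5 (as done later in this notebook) is equivalent to shifting the decision boundary in feature space.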
## Compute and score a logistic regression model
There is a considerable amount of code in the cell below.
The first function uses scikit-learn to compute and score a logistic regression model. Notice that the features and the label must be converted to NumPy arrays, as required by scikit-learn.
The second function computes the evaluation of the logistic regression model in the following steps:
- Compute the elements of the confusion matrix.
- Plot the correctly and incorrectly scored cases, using shape and color to identify class and classification correctness.
- Compute commonly used performance statistics.
Execute this code and examine the results. Notice that most of the cases have been correctly classified. Classification errors appear along the boundary between the two classes.
```
def logistic_mod(df, logProb = 1.0):
from sklearn import linear_model
## Prepare data for model
nrow = df.shape[0]
X = df[['x', 'y']].to_numpy().reshape(nrow, 2)
Y = df.z.to_numpy().ravel()
## Compute the logistic regression model
lg = linear_model.LogisticRegression()
logr = lg.fit(X, Y)
## Compute the y values
temp = logr.predict_log_proba(X)
df['predicted'] = [1 if (logProb > p[1]/p[0]) else 0 for p in temp]
return df
def eval_logistic(df):
import matplotlib.pyplot as plt
import pandas as pd
truePos = df[((df['predicted'] == 1) & (df['z'] == df['predicted']))]
falsePos = df[((df['predicted'] == 1) & (df['z'] != df['predicted']))]
trueNeg = df[((df['predicted'] == 0) & (df['z'] == df['predicted']))]
falseNeg = df[((df['predicted'] == 0) & (df['z'] != df['predicted']))]
fig = plt.figure(figsize=(5, 5))
fig.clf()
ax = fig.gca()
truePos.plot(kind = 'scatter', x = 'x', y = 'y', ax = ax,
alpha = 1.0, color = 'DarkBlue', marker = '+', s = 80)
falsePos.plot(kind = 'scatter', x = 'x', y = 'y', ax = ax,
alpha = 1.0, color = 'Red', marker = 'o', s = 40)
trueNeg.plot(kind = 'scatter', x = 'x', y = 'y', ax = ax,
alpha = 1.0, color = 'DarkBlue', marker = 'o', s = 40)
falseNeg.plot(kind = 'scatter', x = 'x', y = 'y', ax = ax,
alpha = 1.0, color = 'Red', marker = '+', s = 80)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_title('Classes vs X and Y')
TP = truePos.shape[0]
FP = falsePos.shape[0]
TN = trueNeg.shape[0]
FN = falseNeg.shape[0]
confusion = pd.DataFrame({'Positive': [FP, TP],
'Negative': [TN, FN]},
index = ['TrueNeg', 'TruePos'])
accuracy = float(TP + TN)/float(TP + TN + FP + FN)
precision = float(TP)/float(TP + FP)
recall = float(TP)/float(TP + FN)
print(confusion)
print('accuracy = ' + str(accuracy))
print('precision = ' + str(precision))
print('recall = ' + str(recall))
return 'Done'
mod = logistic_mod(sim_data)
eval_logistic(mod)
```
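To make the three statistics concrete, here is a small hand-worked sketch; the confusion-matrix counts are made up for illustration, not taken from the simulation above:

```python
# Hypothetical confusion-matrix counts, for illustration only
TP, FP, TN, FN = 45, 5, 40, 10

accuracy = float(TP + TN) / float(TP + TN + FP + FN)  # fraction of all cases scored correctly
precision = float(TP) / float(TP + FP)                # fraction of positive scores that are truly positive
recall = float(TP) / float(TP + FN)                   # fraction of actual positives that are recovered
print(accuracy, precision, recall)  # → 0.85 0.9 0.8181818181818182
```

Note that precision and recall respond differently to the two error types: more false positives lower precision, while more false negatives lower recall.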
## Moving the decision boundary
The example above uses a cutoff at the midpoint of the logistic function. However, you can change the trade-off between correctly classifying the positive cases and correctly classifying the negative cases. The code in the cell below computes and scores a logistic regression model for three different cutoff points.
Run the code in the cell and carefully compare the results for the three cases. Notice that as the logistic cutoff changes, the decision boundary moves on the plot and progressively more positive cases are correctly classified. In addition, accuracy and precision decrease while recall increases.
```
def logistic_demo_prob():
logt = sim_log_data(0.5, 0.5, 50, 1, -0.5, -0.5, 50, 1)
probs = [1, 2, 4]
for p in probs:
logMod = logistic_mod(logt, p)
eval_logistic(logMod)
return 'Done'
logistic_demo_prob()
```
# Implementation of Softmax Regression from Scratch
:label:`chapter_softmax_scratch`
Just as we implemented linear regression from scratch,
we believe that multiclass logistic (softmax) regression
is similarly fundamental and you ought to know
the gory details of how to implement it from scratch.
As with linear regression, after doing things by hand
we will breeze through an implementation in Gluon for comparison.
To begin, let's import our packages.
```
import sys
sys.path.insert(0, '..')
%matplotlib inline
import d2l
import torch
from torch.distributions import normal
```
We will work with the Fashion-MNIST dataset just introduced,
setting up an iterator with batch size 256.
```
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
```
## Initialize Model Parameters
Just as in linear regression, we represent each example as a vector.
Since each example is a $28 \times 28$ image,
we can flatten each example, treating them as $784$ dimensional vectors.
In the future, we'll talk about more sophisticated strategies
for exploiting the spatial structure in images,
but for now we treat each pixel location as just another feature.
Recall that in softmax regression,
we have as many outputs as there are categories.
Because our dataset has $10$ categories,
our network will have an output dimension of $10$.
Consequently, our weights will constitute a $784 \times 10$ matrix
and the biases will constitute a $1 \times 10$ vector.
As with linear regression, we will initialize our weights $W$
with Gaussian noise and our biases to take the initial value $0$.
```
num_inputs = 784
num_outputs = 10
W = normal.Normal(loc = 0, scale = 0.01).sample((num_inputs, num_outputs))
b = torch.zeros(num_outputs)
```
Recall that we need to *attach gradients* to the model parameters.
More literally, we are allocating memory for future gradients to be stored
and notifying PyTorch that we want gradients to be calculated with respect to these parameters in the first place.
```
W.requires_grad_(True)
b.requires_grad_(True)
```
## The Softmax
Before implementing the softmax regression model,
let's briefly review how `torch.sum` works
along specific dimensions of a PyTorch tensor.
Given a matrix `X` we can sum over all elements (default) or only
over elements in the same column (`dim=0`) or the same row (`dim=1`).
Note that if `X` is an array with shape `(2, 3)`
and we sum over the columns (`torch.sum(X, dim=0)`),
the result will be a (1D) vector with shape `(3,)`.
If we want to keep the number of axes in the original array
(resulting in a 2D array with shape `(1,3)`),
rather than collapsing out the dimension that we summed over
we can specify `keepdim=True` when invoking `torch.sum`.
```
X = torch.tensor([[1, 2, 3], [4, 5, 6]])
torch.sum(X, dim=0, keepdim=True), torch.sum(X, dim=1, keepdim=True)
```
We are now ready to implement the softmax function.
Recall that softmax consists of three steps:
First, we exponentiate each term (using `torch.exp`).
Then, we sum over each row (we have one row per example in the batch)
to get the normalization constants for each example.
Finally, we divide each row by its normalization constant,
ensuring that the result sums to $1$.
Before looking at the code, let's recall
what this looks like expressed as an equation:
$$
\mathrm{softmax}(\mathbf{X})_{ij} = \frac{\exp(X_{ij})}{\sum_k \exp(X_{ik})}
$$
The denominator, or normalization constant,
is also sometimes called the partition function
(and its logarithm the log-partition function).
The origins of that name are in [statistical physics](https://en.wikipedia.org/wiki/Partition_function_(statistical_mechanics))
where a related equation models the distribution
over an ensemble of particles.
```
def softmax(X):
X_exp = torch.exp(X)
partition = torch.sum(X_exp, dim=1, keepdim=True)
return X_exp / partition # The broadcast mechanism is applied here
```
As you can see, for any random input, we turn each element into a non-negative number. Moreover, each row sums up to 1, as is required for a probability distribution.
Note that while this looks correct mathematically,
we were a bit sloppy in our implementation
because we failed to take precautions against numerical overflow or underflow
due to large (or very small) elements of the matrix,
as we did in
:numref:`chapter_naive_bayes`.
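For reference, one common remedy (not used in the implementation below, which keeps the naive version for clarity) is to subtract each row's maximum before exponentiating. This leaves the softmax value unchanged but keeps the exponentials in a safe numeric range. A minimal sketch:

```python
import torch

def stable_softmax(X):
    # Subtracting the row-wise max leaves softmax unchanged, since
    # exp(x - c) / sum(exp(x - c)) == exp(x) / sum(exp(x)),
    # but it keeps the exponentials from overflowing.
    X_shifted = X - X.max(dim=1, keepdim=True).values
    X_exp = torch.exp(X_shifted)
    return X_exp / X_exp.sum(dim=1, keepdim=True)

X = torch.tensor([[1000.0, 1000.0, 1000.0]])
print(stable_softmax(X))  # tensor([[0.3333, 0.3333, 0.3333]]); the naive version returns nan here
```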
```
X = normal.Normal(loc = 0, scale = 1).sample((2, 5))
X_prob = softmax(X)
X_prob, torch.sum(X_prob, dim=1)
```
## The Model
Now that we have defined the softmax operation,
we can implement the softmax regression model.
The code below defines the forward pass through the network.
Note that we flatten each original image in the batch
into a vector of length `num_inputs` with the `reshape` function
before passing the data through our model.
```
def net(X):
return softmax(torch.matmul(X.reshape((-1, num_inputs)), W) + b)
```
## The Loss Function
Next, we need to implement the cross entropy loss function,
introduced in :numref:`chapter_softmax`.
This may be the most common loss function
in all of deep learning because, at the moment,
classification problems far outnumber regression problems.
Recall that cross entropy takes the negative log likelihood
of the predicted probability assigned to the true label $-\log p(y|x)$.
Rather than iterating over the predictions with a Python `for` loop
(which tends to be inefficient), we can use the `gather` function
which allows us to select the appropriate terms
from the matrix of softmax entries easily.
Below, we illustrate the `gather` function on a toy example,
with 3 categories and 2 examples.
```
y_hat = torch.tensor([[0.1, 0.3, 0.6], [0.3, 0.2, 0.5]])
y = torch.tensor([0, 2])
torch.gather(y_hat, 1, y.unsqueeze(dim=1)) # y is unsqueezed so it has the same number of dimensions as y_hat
```
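To see concretely what `gather` selects here, the same lookup can be written with explicit integer indexing (a sketch for intuition only; `gather` generalizes this pattern):

```python
import torch

y_hat = torch.tensor([[0.1, 0.3, 0.6], [0.3, 0.2, 0.5]])
y = torch.tensor([0, 2])

# gather along dim=1 picks y_hat[i, y[i]] for each row i
picked = torch.gather(y_hat, 1, y.unsqueeze(dim=1))

# Equivalent explicit integer indexing
manual = y_hat[torch.arange(len(y)), y].unsqueeze(dim=1)
print(torch.equal(picked, manual))  # True
```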
Now we can implement the cross-entropy loss function efficiently
with just one line of code.
```
def cross_entropy(y_hat, y):
return -torch.gather(y_hat, 1, y.unsqueeze(dim=1)).log()
```
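As the exercises at the end of this section hint, taking `log` of a predicted probability that has underflowed to zero yields `-inf`. A numerically safer variant (a sketch, not the implementation used in this section) computes the loss directly from unnormalized scores (logits) with `torch.logsumexp`:

```python
import torch

def cross_entropy_from_logits(logits, y):
    # Stable log-softmax: log p_ij = logits_ij - logsumexp_i(logits)
    log_probs = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    return -torch.gather(log_probs, 1, y.unsqueeze(dim=1))

# Scores this large would overflow a naive exp-then-log computation
logits = torch.tensor([[1000.0, 0.0], [0.0, 1000.0]])
y = torch.tensor([0, 1])
loss = cross_entropy_from_logits(logits, y)
print(torch.isfinite(loss).all())  # tensor(True)
```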
## Classification Accuracy
Given the predicted probability distribution `y_hat`,
we typically choose the class with highest predicted probability
whenever we must output a *hard* prediction. Indeed, many applications require that we make a choice. Gmail must categorize an email as Primary, Social, Updates, or Forums. It may estimate probabilities internally, but in the end it has to choose one of the categories.
When predictions are consistent with the actual category `y`, they are correct. The classification accuracy is the fraction of all predictions that are correct. Although we cannot optimize accuracy directly (it is not differentiable), it's often the performance metric that we care most about, and we will nearly always report it when training classifiers.
To compute accuracy we do the following:
First, we execute `y_hat.argmax(dim=1)`
to gather the predicted classes
(given by the indices of the largest entries in each row).
The result has the same shape as the variable `y`.
Now we just need to check how frequently the two match. The result is a PyTorch tensor containing entries of 0 (false) and 1 (true). Since the `mean` method can only compute the mean of floating-point types,
we also need to convert the result to `float`. Taking the mean then yields the desired result.
```
def accuracy(y_hat, y):
return (y_hat.argmax(dim=1) == y).float().mean().item()
```
We will continue to use the variables `y_hat` and `y`
defined in the `gather` example above,
as the predicted probability distributions and labels, respectively.
We can see that the first example's prediction category is 2
(the largest element of the row is 0.6 with an index of 2),
which is inconsistent with the actual label, 0.
The second example's prediction category is 2
(the largest element of the row is 0.5 with an index of 2),
which is consistent with the actual label, 2.
Therefore, the classification accuracy rate for these two examples is 0.5.
```
accuracy(y_hat, y)
```
Similarly, we can evaluate the accuracy for model `net` on the data set
(accessed via `data_iter`).
```
# The function will be gradually improved: the complete implementation will be
# discussed in the "Image Augmentation" section
def evaluate_accuracy(data_iter, net):
acc_sum, n = 0.0, 0
for X, y in data_iter:
acc_sum += (net(X).argmax(dim=1) == y).sum().item()
n += y.size()[0] # y.size()[0] = batch_size
return acc_sum / n
```
Because we initialized the `net` model with random weights,
the accuracy of this model should be close to random guessing,
i.e. 0.1 for 10 classes.
```
evaluate_accuracy(test_iter, net)
```
## Model Training
The training loop for softmax regression should look strikingly familiar
if you read through our implementation
of linear regression earlier in this chapter.
Again, we use the mini-batch stochastic gradient descent
to optimize the loss function of the model.
Note that the number of epochs (`num_epochs`),
and learning rate (`lr`) are both adjustable hyper-parameters.
By changing their values, we may be able to increase the classification accuracy of the model. In practice we'll want to split our data three ways
into training, validation, and test data, using the validation data to choose the best values of our hyperparameters.
```
num_epochs, lr = 5, 0.1
# This function has been saved in the d2l package for future use
def train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, params=None, lr=None, trainer=None):
for epoch in range(num_epochs):
train_l_sum, train_acc_sum, n = 0.0, 0.0, 0
for X, y in train_iter:
y_hat = net(X)
l = loss(y_hat, y).sum()
l.backward()
if trainer is None:
d2l.sgd(params, lr, batch_size)
else:
# This will be illustrated in the next section
trainer.step(batch_size)
train_l_sum += l.item()
train_acc_sum += (y_hat.argmax(dim=1) == y).sum().item()
n += y.size()[0]
test_acc = evaluate_accuracy(test_iter, net)
print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f'
% (epoch + 1, train_l_sum / n, train_acc_sum / n, test_acc))
train_ch3(net, train_iter, test_iter, cross_entropy, num_epochs, batch_size, [W, b], lr)
```
## Prediction
Now that training is complete, our model is ready to classify some images.
Given a series of images, we will compare their actual labels
(first line of text output) and the model predictions
(second line of text output).
```
for X, y in test_iter:
break
true_labels = d2l.get_fashion_mnist_labels(y.numpy())
pred_labels = d2l.get_fashion_mnist_labels(net(X).argmax(dim=1).numpy())
titles = [truelabel + '\n' + predlabel for truelabel, predlabel in zip(true_labels, pred_labels)]
d2l.show_fashion_mnist(X[10:20], titles[10:20])
```
## Summary
With softmax regression, we can train models for multi-category classification. The training loop is very similar to that in linear regression: retrieve and read data, define models and loss functions,
then train models using optimization algorithms. As you'll soon find out, most common deep learning models have similar training procedures.
## Exercises
1. In this section, we directly implemented the softmax function based on the mathematical definition of the softmax operation. What problems might this cause (hint - try to calculate the size of $\exp(50)$)?
1. The function `cross_entropy` in this section is implemented according to the definition of the cross-entropy loss function. What could be the problem with this implementation (hint - consider the domain of the logarithm)?
1. What solutions can you think of to fix the two problems above?
1. Is it always a good idea to return the most likely label? For example, would you do this for medical diagnosis?
1. Assume that we want to use softmax regression to predict the next word based on some features. What are some problems that might arise from a large vocabulary?
# Optimization Methods
Until now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result.
Gradient descent goes "downhill" on a cost function $J$. Think of it as trying to do this:
<img src="images/cost.jpg" style="width:650px;height:300px;">
<caption><center> <u> **Figure 1** </u>: **Minimizing the cost is like finding the lowest point in a hilly landscape**<br> At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point. </center></caption>
**Notations**: As usual, $\frac{\partial J}{\partial a } = $ `da` for any variable `a`.
To get started, run the following code to import the libraries you will need.
### <font color='darkblue'> Updates to Assignment <font>
#### If you were working on a previous version
* The current notebook filename is version "Optimization_methods_v1b".
* You can find your work in the file directory as version "Optimization methods".
* To see the file directory, click on the Coursera logo at the top left of the notebook.
#### List of Updates
* op_utils is now opt_utils_v1a. Assertion statement in `initialize_parameters` is fixed.
* opt_utils_v1a: `compute_cost` function now accumulates total cost of the batch without taking the average (average is taken for entire epoch instead).
* In `model` function, the total cost per mini-batch is accumulated, and the average of the entire epoch is taken as the average cost. So the plot of the cost function over time is now a smooth downward curve instead of an oscillating curve.
* Print statements used to check each function are reformatted, and 'expected output' is reformatted to match the format of the print statements (for easier visual comparisons).
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils_v1a import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils_v1a import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
```
## 1 - Gradient Descent
A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent.
**Warm-up exercise**: Implement the gradient descent update rule. The gradient descent rule is, for $l = 1, ..., L$:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{1}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{2}$$
where L is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.
```
# GRADED FUNCTION: update_parameters_with_gd
def update_parameters_with_gd(parameters, grads, learning_rate):
"""
Update parameters using one step of gradient descent
Arguments:
parameters -- python dictionary containing your parameters to be updated:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients to update each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
learning_rate -- the learning rate, scalar.
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Update rule for each parameter
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l + 1)] - learning_rate * grads["dW" + str(l + 1)]
parameters["b" + str(l+1)] = parameters["b" + str(l + 1)] - learning_rate * grads["db" + str(l + 1)]
### END CODE HERE ###
return parameters
parameters, grads, learning_rate = update_parameters_with_gd_test_case()
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
print("W1 =\n" + str(parameters["W1"]))
print("b1 =\n" + str(parameters["b1"]))
print("W2 =\n" + str(parameters["W2"]))
print("b2 =\n" + str(parameters["b2"]))
```
**Expected Output**:
```
W1 =
[[ 1.63535156 -0.62320365 -0.53718766]
[-1.07799357 0.85639907 -2.29470142]]
b1 =
[[ 1.74604067]
[-0.75184921]]
W2 =
[[ 0.32171798 -0.25467393 1.46902454]
[-2.05617317 -0.31554548 -0.3756023 ]
[ 1.1404819 -1.09976462 -0.1612551 ]]
b2 =
[[-0.88020257]
[ 0.02561572]
[ 0.57539477]]
```
A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent.
- **(Batch) Gradient Descent**:
``` python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
# Forward propagation
a, caches = forward_propagation(X, parameters)
# Compute cost.
cost += compute_cost(a, Y)
# Backward propagation.
grads = backward_propagation(a, caches, parameters)
# Update parameters.
parameters = update_parameters(parameters, grads)
```
- **Stochastic Gradient Descent**:
```python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
for j in range(0, m):
# Forward propagation
a, caches = forward_propagation(X[:,j], parameters)
# Compute cost
cost += compute_cost(a, Y[:,j])
# Backward propagation
grads = backward_propagation(a, caches, parameters)
# Update parameters.
parameters = update_parameters(parameters, grads)
```
In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration of this:
<img src="images/kiank_sgd.png" style="width:750px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **SGD vs GD**<br> "+" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). </center></caption>
**Note** also that implementing SGD requires 3 for-loops in total:
1. Over the number of iterations
2. Over the $m$ training examples
3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)
In practice, you'll often get faster results if you use neither the whole training set nor only one training example to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples.
<img src="images/kiank_minibatch.png" style="width:750px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 2** </u>: <font color='purple'> **SGD vs Mini-Batch GD**<br> "+" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization. </center></caption>
<font color='blue'>
**What you should remember**:
- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.
- You have to tune a learning rate hyperparameter $\alpha$.
- With a well-tuned mini-batch size, mini-batch gradient descent usually outperforms either batch gradient descent or stochastic gradient descent (particularly when the training set is large).
## 2 - Mini-Batch Gradient descent
Let's learn how to build mini-batches from the training set (X, Y).
There are two steps:
- **Shuffle**: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y, so that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches.
<img src="images/kiank_shuffle.png" style="width:550px;height:300px;">
- **Partition**: Partition the shuffled (X, Y) into mini-batches of size `mini_batch_size` (here 64). Note that the number of training examples is not always divisible by `mini_batch_size`. The last mini batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full `mini_batch_size`, it will look like this:
<img src="images/kiank_partition.png" style="width:550px;height:300px;">
**Exercise**: Implement `random_mini_batches`. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches:
```python
first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]
second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]
...
```
Note that the last mini-batch might end up smaller than `mini_batch_size=64`. Let $\lfloor s \rfloor$ represent $s$ rounded down to the nearest integer (this is `math.floor(s)` in Python). If the total number of examples is not a multiple of `mini_batch_size=64`, then there will be $\lfloor \frac{m}{mini\_batch\_size}\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be $m - mini\_batch\_size \times \lfloor \frac{m}{mini\_batch\_size}\rfloor$.
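As a quick check of this arithmetic with a hypothetical $m = 148$:

```python
import math

m = 148                 # hypothetical number of training examples
mini_batch_size = 64

num_full = math.floor(m / mini_batch_size)     # 2 full mini-batches of 64
last_size = m - mini_batch_size * num_full     # 148 - 128 = 20 examples left over
print(num_full, last_size)  # 2 20
```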
```
# GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
"""
Creates a list of random minibatches from (X, Y)
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
mini_batch_size -- size of the mini-batches, integer
Returns:
mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
"""
np.random.seed(seed) # To make your "random" minibatches the same as ours
m = X.shape[1] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1,m))
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning
for k in range(0, num_complete_minibatches):
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:,k*mini_batch_size:(k+1)*mini_batch_size]
mini_batch_Y = shuffled_Y[:,k*mini_batch_size:(k+1)*mini_batch_size]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:,num_complete_minibatches * mini_batch_size:]
mini_batch_Y = shuffled_Y[:,num_complete_minibatches * mini_batch_size:]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3]))
```
**Expected Output**:
<table style="width:50%">
<tr>
<td > **shape of the 1st mini_batch_X** </td>
<td > (12288, 64) </td>
</tr>
<tr>
<td > **shape of the 2nd mini_batch_X** </td>
<td > (12288, 64) </td>
</tr>
<tr>
<td > **shape of the 3rd mini_batch_X** </td>
<td > (12288, 20) </td>
</tr>
<tr>
<td > **shape of the 1st mini_batch_Y** </td>
<td > (1, 64) </td>
</tr>
<tr>
<td > **shape of the 2nd mini_batch_Y** </td>
<td > (1, 64) </td>
</tr>
<tr>
<td > **shape of the 3rd mini_batch_Y** </td>
<td > (1, 20) </td>
</tr>
<tr>
<td > **mini batch sanity check** </td>
<td > [ 0.90085595 -0.7612069 0.2344157 ] </td>
</tr>
</table>
<font color='blue'>
**What you should remember**:
- Shuffling and Partitioning are the two steps required to build mini-batches
- Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128.
## 3 - Momentum
Because mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations.
Momentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill.
<img src="images/opt_momentum.png" style="width:400px;height:250px;">
<caption><center> <u><font color='purple'>**Figure 3**</u><font color='purple'>: The red arrows show the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$.<br> <font color='black'> </center>
**Exercise**: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the `grads` dictionary, that is:
for $l =1,...,L$:
```python
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
```
**Note** that the iterator l starts at 0 in the for loop while the first parameters are v["dW1"] and v["db1"] (that's a "one" on the superscript). This is why we are shifting l to l+1 in the `for` loop.
```
# GRADED FUNCTION: initialize_velocity
def initialize_velocity(parameters):
"""
Initializes the velocity as a python dictionary with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
Returns:
v -- python dictionary containing the current velocity.
v['dW' + str(l)] = velocity of dWl
v['db' + str(l)] = velocity of dbl
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
# Initialize velocity
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = np.zeros((parameters["W"+str(l+1)].shape[0],parameters["W"+str(l+1)].shape[1]))
v["db" + str(l+1)] = np.zeros((parameters["b"+str(l+1)].shape[0],1))
### END CODE HERE ###
return v
parameters = initialize_velocity_test_case()
v = initialize_velocity(parameters)
print("v[\"dW1\"] =\n" + str(v["dW1"]))
print("v[\"db1\"] =\n" + str(v["db1"]))
print("v[\"dW2\"] =\n" + str(v["dW2"]))
print("v[\"db2\"] =\n" + str(v["db2"]))
```
**Expected Output**:
```
v["dW1"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db1"] =
[[ 0.]
[ 0.]]
v["dW2"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db2"] =
[[ 0.]
[ 0.]
[ 0.]]
```
**Exercise**: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$:
$$ \begin{cases}
v_{dW^{[l]}} = \beta v_{dW^{[l]}} + (1 - \beta) dW^{[l]} \\
W^{[l]} = W^{[l]} - \alpha v_{dW^{[l]}}
\end{cases}\tag{3}$$
$$\begin{cases}
v_{db^{[l]}} = \beta v_{db^{[l]}} + (1 - \beta) db^{[l]} \\
b^{[l]} = b^{[l]} - \alpha v_{db^{[l]}}
\end{cases}\tag{4}$$
where L is the number of layers, $\beta$ is the momentum and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a "one" on the superscript). So you will need to shift `l` to `l+1` when coding.
```
# GRADED FUNCTION: update_parameters_with_momentum
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
"""
Update parameters using Momentum
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- python dictionary containing the current velocity:
v['dW' + str(l)] = ...
v['db' + str(l)] = ...
beta -- the momentum hyperparameter, scalar
learning_rate -- the learning rate, scalar
Returns:
parameters -- python dictionary containing your updated parameters
v -- python dictionary containing your updated velocities
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Momentum update for each parameter
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
# compute velocities
v["dW" + str(l+1)] = (beta * v["dW" + str(l+1)]) + (1-beta)*grads['dW'+str(l+1)]
v["db" + str(l+1)] = (beta * v["db" + str(l+1)]) + (1-beta)*grads['db'+str(l+1)]
# update parameters
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v["dW" + str(l+1)]
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v["db" + str(l+1)]
### END CODE HERE ###
return parameters, v
parameters, grads, v = update_parameters_with_momentum_test_case()
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)
print("W1 = \n" + str(parameters["W1"]))
print("b1 = \n" + str(parameters["b1"]))
print("W2 = \n" + str(parameters["W2"]))
print("b2 = \n" + str(parameters["b2"]))
print("v[\"dW1\"] = \n" + str(v["dW1"]))
print("v[\"db1\"] = \n" + str(v["db1"]))
print("v[\"dW2\"] = \n" + str(v["dW2"]))
print("v[\"db2\"] = \n" + str(v["db2"]))
```
**Expected Output**:
```
W1 =
[[ 1.62544598 -0.61290114 -0.52907334]
[-1.07347112 0.86450677 -2.30085497]]
b1 =
[[ 1.74493465]
[-0.76027113]]
W2 =
[[ 0.31930698 -0.24990073 1.4627996 ]
[-2.05974396 -0.32173003 -0.38320915]
[ 1.13444069 -1.0998786 -0.1713109 ]]
b2 =
[[-0.87809283]
[ 0.04055394]
[ 0.58207317]]
v["dW1"] =
[[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]]
v["db1"] =
[[-0.01228902]
[-0.09357694]]
v["dW2"] =
[[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]]
v["db2"] =
[[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]
```
**Note** that:
- The velocity is initialized with zeros. So the algorithm will take a few iterations to "build up" velocity and start to take bigger steps.
- If $\beta = 0$, then this just becomes standard gradient descent without momentum.
**How do you choose $\beta$?**
- The larger the momentum $\beta$ is, the smoother the update because the more we take the past gradients into account. But if $\beta$ is too big, it could also smooth out the updates too much.
- Common values for $\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\beta = 0.9$ is often a reasonable default.
- Tuning the optimal $\beta$ for your model might require trying several values to see what works best in terms of reducing the value of the cost function $J$.
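To build intuition for this trade-off, here is a small NumPy sketch (on synthetic data, not part of the assignment) comparing exponentially weighted averages for two values of $\beta$. A larger $\beta$ averages over more past values, so the smoothed signal varies less from step to step:

```python
import numpy as np

def ewa(signal, beta):
    # v_t = beta * v_{t-1} + (1 - beta) * x_t, starting from v_0 = 0
    v, out = 0.0, []
    for x in signal:
        v = beta * v + (1 - beta) * x
        out.append(v)
    return np.array(out)

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 3, 200)) + 0.5 * rng.standard_normal(200)

smooth_light = ewa(signal, beta=0.5)
smooth_heavy = ewa(signal, beta=0.95)

# The heavily smoothed curve has much smaller step-to-step changes
print(np.std(np.diff(smooth_light)) > np.std(np.diff(smooth_heavy)))  # True
```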
<font color='blue'>
**What you should remember**:
- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.
- You have to tune a momentum hyperparameter $\beta$ and a learning rate $\alpha$.
## 4 - Adam
Adam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum.
**How does Adam work?**
1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction).
2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction).
3. It updates parameters in a direction based on combining information from "1" and "2".
The update rule is, for $l = 1, ..., L$:
$$\begin{cases}
v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J} }{ \partial W^{[l]} } \\
v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\
s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) (\frac{\partial \mathcal{J} }{\partial W^{[l]} })^2 \\
s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}
\end{cases}$$
where:
- $t$ counts the number of steps taken by Adam
- L is the number of layers
- $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages.
- $\alpha$ is the learning rate
- $\varepsilon$ is a very small number to avoid dividing by zero
As usual, we will store all parameters in the `parameters` dictionary
**Exercise**: Initialize the Adam variables $v, s$ which keep track of the past information.
**Instruction**: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for `grads`, that is:
for $l = 1, ..., L$:
```python
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
s["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
s["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
```
```
# GRADED FUNCTION: initialize_adam
def initialize_adam(parameters) :
"""
Initializes v and s as two python dictionaries with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl
parameters["b" + str(l)] = bl
Returns:
v -- python dictionary that will contain the exponentially weighted average of the gradient.
v["dW" + str(l)] = ...
v["db" + str(l)] = ...
s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
s["dW" + str(l)] = ...
s["db" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
s = {}
# Initialize v, s. Input: "parameters". Outputs: "v, s".
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
v["dW" + str(l+1)] = np.zeros_like(parameters["W"+str(l+1)])
v["db" + str(l+1)] = np.zeros_like(parameters["b"+str(l+1)])
s["dW" + str(l+1)] = np.zeros_like(parameters["W"+str(l+1)])
s["db" + str(l+1)] = np.zeros_like(parameters["b"+str(l+1)])
### END CODE HERE ###
return v, s
parameters = initialize_adam_test_case()
v, s = initialize_adam(parameters)
print("v[\"dW1\"] = \n" + str(v["dW1"]))
print("v[\"db1\"] = \n" + str(v["db1"]))
print("v[\"dW2\"] = \n" + str(v["dW2"]))
print("v[\"db2\"] = \n" + str(v["db2"]))
print("s[\"dW1\"] = \n" + str(s["dW1"]))
print("s[\"db1\"] = \n" + str(s["db1"]))
print("s[\"dW2\"] = \n" + str(s["dW2"]))
print("s[\"db2\"] = \n" + str(s["db2"]))
```
**Expected Output**:
```
v["dW1"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db1"] =
[[ 0.]
[ 0.]]
v["dW2"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db2"] =
[[ 0.]
[ 0.]
[ 0.]]
s["dW1"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]]
s["db1"] =
[[ 0.]
[ 0.]]
s["dW2"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
s["db2"] =
[[ 0.]
[ 0.]
[ 0.]]
```
**Exercise**: Now, implement the parameters update with Adam. Recall the general update rule is, for $l = 1, ..., L$:
$$\begin{cases}
v_{W^{[l]}} = \beta_1 v_{W^{[l]}} + (1 - \beta_1) \frac{\partial J }{ \partial W^{[l]} } \\
v^{corrected}_{W^{[l]}} = \frac{v_{W^{[l]}}}{1 - (\beta_1)^t} \\
s_{W^{[l]}} = \beta_2 s_{W^{[l]}} + (1 - \beta_2) (\frac{\partial J }{\partial W^{[l]} })^2 \\
s^{corrected}_{W^{[l]}} = \frac{s_{W^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{W^{[l]}}}{\sqrt{s^{corrected}_{W^{[l]}}}+\varepsilon}
\end{cases}$$
**Note** that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.
```
# GRADED FUNCTION: update_parameters_with_adam
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
"""
Update parameters using Adam
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
learning_rate -- the learning rate, scalar.
beta1 -- Exponential decay hyperparameter for the first moment estimates
beta2 -- Exponential decay hyperparameter for the second moment estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
Returns:
parameters -- python dictionary containing your updated parameters
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
"""
L = len(parameters) // 2 # number of layers in the neural networks
v_corrected = {} # Initializing first moment estimate, python dictionary
s_corrected = {} # Initializing second moment estimate, python dictionary
# Perform Adam update on all parameters
for l in range(L):
# Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = (beta1 * v["dW" + str(l+1)]) + (1-beta1)*grads['dW'+str(l+1)]
v["db" + str(l+1)] = (beta1 * v["db" + str(l+1)]) + (1-beta1)*grads['db'+str(l+1)]
### END CODE HERE ###
# Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
### START CODE HERE ### (approx. 2 lines)
v_corrected["dW" + str(l+1)] = v["dW" + str(l+1)]/(1-np.power(beta1,t))
v_corrected["db" + str(l+1)] = v["db" + str(l+1)]/(1-np.power(beta1,t))
### END CODE HERE ###
# Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
### START CODE HERE ### (approx. 2 lines)
s["dW" + str(l+1)] = (beta2 * s["dW" + str(l+1)]) + ((1-beta2)*np.power(grads['dW'+str(l+1)],2))
s["db" + str(l+1)] = (beta2 * s["db" + str(l+1)]) + ((1-beta2)*np.power(grads['db'+str(l+1)],2))
### END CODE HERE ###
# Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
### START CODE HERE ### (approx. 2 lines)
s_corrected["dW" + str(l+1)] = s["dW" + str(l+1)]/(1-np.power(beta2,t))
s_corrected["db" + str(l+1)] = s["db" + str(l+1)]/(1-np.power(beta2,t))
### END CODE HERE ###
# Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - (learning_rate*(v_corrected["dW"+str(l+1)]/(np.sqrt(s_corrected["dW"+str(l+1)])+epsilon)))
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - (learning_rate*(v_corrected["db"+str(l+1)]/(np.sqrt(s_corrected["db"+str(l+1)])+epsilon)))
### END CODE HERE ###
return parameters, v, s
parameters, grads, v, s = update_parameters_with_adam_test_case()
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
print("W1 = \n" + str(parameters["W1"]))
print("b1 = \n" + str(parameters["b1"]))
print("W2 = \n" + str(parameters["W2"]))
print("b2 = \n" + str(parameters["b2"]))
print("v[\"dW1\"] = \n" + str(v["dW1"]))
print("v[\"db1\"] = \n" + str(v["db1"]))
print("v[\"dW2\"] = \n" + str(v["dW2"]))
print("v[\"db2\"] = \n" + str(v["db2"]))
print("s[\"dW1\"] = \n" + str(s["dW1"]))
print("s[\"db1\"] = \n" + str(s["db1"]))
print("s[\"dW2\"] = \n" + str(s["dW2"]))
print("s[\"db2\"] = \n" + str(s["db2"]))
```
**Expected Output**:
```
W1 =
[[ 1.63178673 -0.61919778 -0.53561312]
[-1.08040999 0.85796626 -2.29409733]]
b1 =
[[ 1.75225313]
[-0.75376553]]
W2 =
[[ 0.32648046 -0.25681174 1.46954931]
[-2.05269934 -0.31497584 -0.37661299]
[ 1.14121081 -1.09245036 -0.16498684]]
b2 =
[[-0.88529978]
[ 0.03477238]
[ 0.57537385]]
v["dW1"] =
[[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]]
v["db1"] =
[[-0.01228902]
[-0.09357694]]
v["dW2"] =
[[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]]
v["db2"] =
[[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]
s["dW1"] =
[[ 0.00121136 0.00131039 0.00081287]
[ 0.0002525 0.00081154 0.00046748]]
s["db1"] =
[[ 1.51020075e-05]
[ 8.75664434e-04]]
s["dW2"] =
[[ 7.17640232e-05 2.81276921e-04 4.78394595e-04]
[ 1.57413361e-04 4.72206320e-04 7.14372576e-04]
[ 4.50571368e-04 1.60392066e-07 1.24838242e-03]]
s["db2"] =
[[ 5.49507194e-05]
[ 2.75494327e-03]
[ 5.50629536e-04]]
```
You now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference.
## 5 - Model with different optimization algorithms
Let's use the following "moons" dataset to test the different optimization methods. (The dataset is named "moons" because the data from each of the two classes looks a bit like a crescent-shaped moon.)
```
train_X, train_Y = load_dataset()
```
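`load_dataset()` is supplied by the course's helper utilities, so its exact output isn't shown here. Purely for illustration (the shapes are assumed to match the course convention of `X` with shape `(2, m)` and `Y` with shape `(1, m)`; this is not the course's exact data), a similar two-moons dataset can be generated with NumPy alone:

```python
import numpy as np

def make_moons_like(n=300, noise=0.2, seed=3):
    """Generate a two-class 'moons' dataset:
    X of shape (2, n), Y of shape (1, n) with labels in {0, 1}."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(0, np.pi, n // 2)
    upper = np.stack([np.cos(t), np.sin(t)])             # upper crescent
    lower = np.stack([1 - np.cos(t), -np.sin(t) + 0.5])  # lower crescent, shifted
    X = np.concatenate([upper, lower], axis=1) + rng.normal(0, noise, (2, 2 * (n // 2)))
    Y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)]).reshape(1, -1)
    return X, Y

train_X, train_Y = make_moons_like()
print(train_X.shape, train_Y.shape)
```

Each class traces a half-circle with Gaussian noise added, which is exactly the kind of decision boundary a small network has to bend around.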
We have already implemented a 3-layer neural network. You will train it with:
- Mini-batch **Gradient Descent**: it will call your function:
- `update_parameters_with_gd()`
- Mini-batch **Momentum**: it will call your functions:
- `initialize_velocity()` and `update_parameters_with_momentum()`
- Mini-batch **Adam**: it will call your functions:
- `initialize_adam()` and `update_parameters_with_adam()`
```
def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
"""
3-layer neural network model which can be run in different optimizer modes.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
layers_dims -- python list, containing the size of each layer
learning_rate -- the learning rate, scalar.
mini_batch_size -- the size of a mini batch
beta -- Momentum hyperparameter
beta1 -- Exponential decay hyperparameter for the past gradients estimates
beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
num_epochs -- number of epochs
print_cost -- True to print the cost every 1000 epochs
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(layers_dims) # number of layers in the neural networks
costs = [] # to keep track of the cost
t = 0 # initializing the counter required for Adam update
seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours
m = X.shape[1] # number of training examples
# Initialize parameters
parameters = initialize_parameters(layers_dims)
# Initialize the optimizer
if optimizer == "gd":
pass # no initialization required for gradient descent
elif optimizer == "momentum":
v = initialize_velocity(parameters)
elif optimizer == "adam":
v, s = initialize_adam(parameters)
# Optimization loop
for i in range(num_epochs):
# Define the random minibatches. We increment the seed to reshuffle differently the dataset after each epoch
seed = seed + 1
minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
cost_total = 0
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# Forward propagation
a3, caches = forward_propagation(minibatch_X, parameters)
# Compute cost and add to the cost total
cost_total += compute_cost(a3, minibatch_Y)
# Backward propagation
grads = backward_propagation(minibatch_X, minibatch_Y, caches)
# Update parameters
if optimizer == "gd":
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
elif optimizer == "momentum":
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
elif optimizer == "adam":
t = t + 1 # Adam counter
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
t, learning_rate, beta1, beta2, epsilon)
cost_avg = cost_total / m
# Print the cost every 1000 epoch
if print_cost and i % 1000 == 0:
print ("Cost after epoch %i: %f" %(i, cost_avg))
if print_cost and i % 100 == 0:
costs.append(cost_avg)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('epochs (per 100)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()
return parameters
```
You will now run this 3-layer neural network with each of the 3 optimization methods.
### 5.1 - Mini-batch Gradient descent
Run the following code to see how the model does with mini-batch gradient descent.
```
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
### 5.2 - Mini-batch gradient descent with momentum
Run the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.
```
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
### 5.3 - Mini-batch with Adam mode
Run the following code to see how the model does with Adam.
```
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
### 5.4 - Summary
<table>
<tr>
<td>
**optimization method**
</td>
<td>
**accuracy**
</td>
<td>
**cost shape**
</td>
</tr>
<tr>
<td>
Gradient descent
</td>
<td>
79.7%
</td>
<td>
oscillations
</td>
</tr>
<tr>
<td>
Momentum
</td>
<td>
79.7%
</td>
<td>
oscillations
</td>
</tr>
<tr>
<td>
Adam
</td>
<td>
94%
</td>
<td>
smoother
</td>
</tr>
</table>
Momentum usually helps, but given the small learning rate and the simplistic dataset, its impact is almost negligible. Also, the huge oscillations you see in the cost come from the fact that some minibatches are more difficult than others for the optimization algorithm.
Adam on the other hand, clearly outperforms mini-batch gradient descent and Momentum. If you run the model for more epochs on this simple dataset, all three methods will lead to very good results. However, you've seen that Adam converges a lot faster.
Some advantages of Adam include:
- Relatively low memory requirements (though higher than gradient descent and gradient descent with momentum)
- Usually works well even with little tuning of hyperparameters (except $\alpha$)
**References**:
- Adam paper: https://arxiv.org/pdf/1412.6980.pdf
# PTSD Model Inference with IRT Features
## [Center for Health Statistics](http://www.healthstats.org)
## [The Zero Knowledge Discovery Lab](http://zed.uchicago.edu)
---
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
import pandas as pd
import seaborn as sns
from sklearn import svm
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn import neighbors, datasets
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from scipy.spatial import ConvexHull
from tqdm import tqdm
import random
plt.style.use('ggplot')
import pickle
from sklearn import tree
from sklearn.tree import export_graphviz
from joblib import dump, load
%matplotlib inline
plt.rcParams["font.size"]=12
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = ['Times New Roman'] + plt.rcParams['font.serif']
datafile='../../data/CAD-PTSDData.csv'
def processDATA(datafile):
'''
process data file
into training data X, target labels y
'''
Df=pd.read_csv(datafile)
X=Df.drop(['record_id','PTSDDx'],axis=1).values
y=Df.drop(['record_id'],axis=1).PTSDDx.values
[nsamples,nfeatures]=X.shape
return X,y,nfeatures,nsamples
def pickleModel(models,threshold=0.87,filename='model.pkl',verbose=True):
'''
save trained model set
'''
MODELS=[]
for key,mds in models.items():
if key >= threshold:
mds_=[i[0] for i in mds]
MODELS.extend(mds_)
if verbose:
print("number of models (tests):", len(MODELS))
FS=getCoverage(MODELS,verbose=True)
print("Item Use Fraction:", FS.size/(len(MODELS)+0.0))
dump(MODELS, filename)
return
def loadModel(filename):
'''
load models
'''
return load(filename)
def drawTrees(model,index=0):
'''
draw the estimators (trees)
in a single model
'''
N=len(model[index].estimators_)
for count in range(N):
estimator = model[index].estimators_[count]
export_graphviz(estimator, out_file='tree.dot',
#feature_names = iris.feature_names,
#class_names = iris.target_names,
rounded = True, proportion = False,
precision = 2, filled = True)
from subprocess import call
call(['dot', '-Tpng', 'tree.dot', '-o', 'tree'+str(count)+'.png', '-Gdpi=600'])
from IPython.display import Image
Image(filename = 'tree'+str(count)+'.png')
def getCoverage(model,verbose=True):
'''
return how many distinct items (questions)
are used in the model set.
This includes the set of questions being
covered by all forms that may be
generated by the model set
'''
FS=[]
for m in model:
for count in range(len(m.estimators_)):
clf=m.estimators_[count]
fs=clf.tree_.feature[clf.tree_.feature>=0]  # leaf nodes are marked -2; >=0 also keeps feature index 0
FS=np.array(list(set(np.append(FS,fs))))
if verbose:
print("Number of items used: ", FS.size)
return FS
def getAuc(X,y,test_size=0.25,max_depth=None,n_estimators=100,
minsplit=4,FPR=[],TPR=[],VERBOSE=False, USE_ONLY=None):
'''
get AUC given training data X, with target labels y
'''
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size)
CLASSIFIERS=[DecisionTreeClassifier(max_depth=max_depth, min_samples_split=minsplit),
RandomForestClassifier(n_estimators=n_estimators,
max_depth=max_depth,min_samples_split=minsplit),
ExtraTreesClassifier(n_estimators=n_estimators,
max_depth=max_depth,min_samples_split=minsplit),
AdaBoostClassifier(n_estimators=n_estimators),
GradientBoostingClassifier(n_estimators=n_estimators,max_depth=max_depth),
svm.SVC(kernel='rbf',gamma='scale',class_weight='balanced',probability=True)]
if USE_ONLY is not None:
if isinstance(USE_ONLY, (list,)):
CLASSIFIERS=[CLASSIFIERS[i] for i in USE_ONLY]
if isinstance(USE_ONLY, (int,)):
CLASSIFIERS=CLASSIFIERS[USE_ONLY]
for clf in CLASSIFIERS:
clf.fit(X_train,y_train)
y_pred=clf.predict_proba(X_test)
fpr, tpr, thresholds = metrics.roc_curve(y_test,y_pred[:,1], pos_label=1)
auc=metrics.auc(fpr, tpr)
if auc > 0.9:
fpr_c=fpr
tpr_c=tpr
dfa=pd.DataFrame(fpr_c,tpr_c).reset_index()
dfa.columns=['tpr','fpr']
dfa[['fpr','tpr']].to_csv('roc_.csv')
if VERBOSE:
print(auc)
FPR=np.append(FPR,fpr)
TPR=np.append(TPR,tpr)
points=np.array([[a[0],a[1]] for a in zip(FPR,TPR)])
hull = ConvexHull(points)
x=np.argsort(points[hull.vertices,:][:,0])
auc=metrics.auc(points[hull.vertices,:][x,0],points[hull.vertices,:][x,1])
if auc > 0.91:
fpr_c=points[hull.vertices,:][x,0]
tpr_c=points[hull.vertices,:][x,1]
dfa=pd.DataFrame(fpr_c,tpr_c).reset_index()
dfa.columns=['tpr','fpr']
dfa[['fpr','tpr']].to_csv('roc.csv')
return auc,CLASSIFIERS
#test model
def getModel(P,THRESHOLD=0.9):
'''
Select only models with minimum AUC
'''
Pgood=[model for (auc,model) in zip(P[::2],P[1::2]) if auc > THRESHOLD]
AUC=[]
if len(Pgood)==0:
return Pgood,len(Pgood),0,0,0,[]
for i in tqdm(range(1000)):
random_choice=random.randint(0,len(Pgood)-1)
clf=Pgood[random_choice][0]
# pretend as if we have not seen any of this data before
# but we have!
# need to only use test data here
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.8)
y_pred=clf.predict_proba(X_test)
fpr, tpr, thresholds = metrics.roc_curve(y_test,y_pred[:,1], pos_label=1)
auc=metrics.auc(fpr, tpr)
AUC=np.append(AUC,auc)
DEPTH=Pgood[0][0].max_depth
N_ESTIMATORS=Pgood[0][0].n_estimators
NITEMS=DEPTH*N_ESTIMATORS
VARIATIONS=len(Pgood)#2*DEPTH*len(Pgood)
return Pgood,len(Pgood),np.median(AUC),NITEMS,VARIATIONS,AUC
def getSystem(X,y,max_depth=2,n_estimators=3):
'''
get model set with training data X and target labels y
-> calls getAUC, and getModel
'''
P1=[]
for i in tqdm(range(100)):
#USE_ONLY=2 implies ExtraTreesClassifier is used only
P1=np.append(P1,getAuc(X,y,minsplit=2,max_depth=max_depth,
n_estimators=n_estimators,USE_ONLY=[2]))
PERF=[]
DPERF={}
MODELS={}
for threshold in np.arange(0.8,0.95,0.01):
Pgood,nmodels,auc_,NITEMS,VARIATIONS,AUC=getModel(P1,threshold)
if len(Pgood) > 0:
PERF=np.append(PERF,[auc_,NITEMS,VARIATIONS])
DPERF[VARIATIONS]=AUC
MODELS[auc_]=Pgood
PERF=PERF.reshape(int(len(PERF)/3),3)
return PERF,DPERF,MODELS,NITEMS
def PLOT(Dperf,Nitems,N=1000,dn=''):
'''
Plots the achieved AUC along with
confidence bounds against the
number of different forms
generated.
'''
NUMQ='No. of Items Per Subject: '+str(Nitems)
Df=pd.DataFrame(Dperf)
dfs=Df.std()
dfm=Df.mean()
plt.figure(figsize=[8,6])
dfm.plot(marker='o',color='r',ms=10,markeredgecolor='w',markerfacecolor='k',lw=2)
(dfm+2.62*(dfs/np.sqrt(N))).plot(ls='--',color='.5')
(dfm-2.62*(dfs/np.sqrt(N))).plot(ls='--',color='.5')
plt.xlabel('No. of different question sets')
plt.ylabel('mean AUC')
plt.title('AUC vs Test Variation (99% CB)',fontsize=12,fontweight='bold')
plt.text(0.55,0.9,NUMQ,transform=plt.gca().transAxes,fontweight='bold',
fontsize=12,bbox=dict(facecolor='k', alpha=0.4),color='w')
pdfname='Result'+dn+'.pdf'
plt.savefig(pdfname,dpi=300,bbox_inches='tight',pad_inches=0,transparent=False)
return
X,y,nfeatures,nsamples=processDATA(datafile)
Perf23,Dperf23,Models23,Nitems23=getSystem(X,y,max_depth=2,n_estimators=3)
print(Nitems23)
PLOT(Dperf23,Nitems23,dn='23')
Perf32,Dperf32,Models32,Nitems32=getSystem(X,y,max_depth=3,n_estimators=2)
PLOT(Dperf32,Nitems32,dn='32')
pickleModel(Models23,threshold=.89,filename='model_2_3.pkl')
print("--")
pickleModel(Models32,threshold=.9,filename='model_3_2.pkl')
drawTrees(loadModel('model_2_3.pkl'),1)
FS23=getCoverage(load('model_2_3.pkl'))
FS32=getCoverage(load('model_3_2.pkl'))
drawTrees(loadModel('model_3_2.pkl'),1)
```
```
# To begin, I created a folder called "Project3_dataTest". I placed the fer2013.csv
# file within the Project3_data folder, and then I ran the following code.
import numpy as np
import pandas as pd
import os
from PIL import Image
df = pd.read_csv('/Users/blakemyers/Desktop/Jupyter/Project3_dataTest/fer2013.csv')
df.head()
df0 = df.query('emotion == 0 and Usage != "Training"')
df1 = df.query('emotion == 1 and Usage != "Training"')
df2 = df.query('emotion == 2 and Usage != "Training"')
df3 = df.query('emotion == 3 and Usage != "Training"')
df4 = df.query('emotion == 4 and Usage != "Training"')
df5 = df.query('emotion == 5 and Usage != "Training"')
df6 = df.query('emotion == 6 and Usage != "Training"')
os.mkdir("/Users/blakemyers/Desktop/Jupyter/Project3_dataTest/0/")
os.mkdir("/Users/blakemyers/Desktop/Jupyter/Project3_dataTest/1/")
os.mkdir("/Users/blakemyers/Desktop/Jupyter/Project3_dataTest/2/")
os.mkdir("/Users/blakemyers/Desktop/Jupyter/Project3_dataTest/3/")
os.mkdir("/Users/blakemyers/Desktop/Jupyter/Project3_dataTest/4/")
os.mkdir("/Users/blakemyers/Desktop/Jupyter/Project3_dataTest/5/")
os.mkdir("/Users/blakemyers/Desktop/Jupyter/Project3_dataTest/6/")
d=0
for image_pixels in df0.iloc[1:,1]:
image_string = image_pixels.split(' ')
image_data = np.asarray(image_string, dtype=np.uint8).reshape(48,48)
img = Image.fromarray(image_data)
img.save("/Users/blakemyers/Desktop/Jupyter/Project3_dataTest/0/img_%d.jpg"%d, "JPEG")
d+=1
d=0
for image_pixels in df1.iloc[1:,1]:
image_string = image_pixels.split(' ')
image_data = np.asarray(image_string, dtype=np.uint8).reshape(48,48)
img = Image.fromarray(image_data)
img.save("/Users/blakemyers/Desktop/Jupyter/Project3_dataTest/1/img_%d.jpg"%d, "JPEG")
d+=1
d=0
for image_pixels in df2.iloc[1:,1]:
image_string = image_pixels.split(' ')
image_data = np.asarray(image_string, dtype=np.uint8).reshape(48,48)
img = Image.fromarray(image_data)
img.save("/Users/blakemyers/Desktop/Jupyter/Project3_dataTest/2/img_%d.jpg"%d, "JPEG")
d+=1
d=0
for image_pixels in df3.iloc[1:,1]:
image_string = image_pixels.split(' ')
image_data = np.asarray(image_string, dtype=np.uint8).reshape(48,48)
img = Image.fromarray(image_data)
img.save("/Users/blakemyers/Desktop/Jupyter/Project3_dataTest/3/img_%d.jpg"%d, "JPEG")
d+=1
d=0
for image_pixels in df4.iloc[1:,1]:
image_string = image_pixels.split(' ')
image_data = np.asarray(image_string, dtype=np.uint8).reshape(48,48)
img = Image.fromarray(image_data)
img.save("/Users/blakemyers/Desktop/Jupyter/Project3_dataTest/4/img_%d.jpg"%d, "JPEG")
d+=1
d=0
for image_pixels in df5.iloc[1:,1]:
image_string = image_pixels.split(' ')
image_data = np.asarray(image_string, dtype=np.uint8).reshape(48,48)
img = Image.fromarray(image_data)
img.save("/Users/blakemyers/Desktop/Jupyter/Project3_dataTest/5/img_%d.jpg"%d, "JPEG")
d+=1
d=0
for image_pixels in df6.iloc[1:,1]:
image_string = image_pixels.split(' ')
image_data = np.asarray(image_string, dtype=np.uint8).reshape(48,48)
img = Image.fromarray(image_data)
img.save("/Users/blakemyers/Desktop/Jupyter/Project3_dataTest/6/img_%d.jpg"%d, "JPEG")
d+=1
df99 = df.query('Usage != "Training"')
df99.shape
```
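The seven near-identical query-and-save blocks above can be collapsed into a single loop. The sketch below (paths and the `out_root` argument are illustrative) factors out the pixel-string parsing so it can be tested independently of the filesystem. Note that the original loops iterate over `df0.iloc[1:, 1]`, which silently skips the first image of each class; that looks unintentional, and the version below keeps it:

```python
import numpy as np

def parse_pixels(pixel_string, size=48):
    """Convert a space-separated pixel string into a (size, size) uint8 array."""
    return np.asarray(pixel_string.split(' '), dtype=np.uint8).reshape(size, size)

def export_by_emotion(df, out_root):
    """Write one folder of JPEGs per emotion label for the non-training rows."""
    import os
    from PIL import Image
    for emotion in range(7):
        subset = df.query('emotion == @emotion and Usage != "Training"')
        target = os.path.join(out_root, str(emotion))
        os.makedirs(target, exist_ok=True)
        for d, pixel_string in enumerate(subset['pixels']):
            Image.fromarray(parse_pixels(pixel_string)).save(
                os.path.join(target, f'img_{d}.jpg'), 'JPEG')
```

Besides being shorter, the loop form means a change to the parsing or naming scheme only has to be made once instead of seven times.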
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ShopRunner/collie/blob/main/tutorials/05_hybrid_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/ShopRunner/collie/blob/main/tutorials/05_hybrid_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a>
</td>
<td>
<a target="_blank" href="https://raw.githubusercontent.com/ShopRunner/collie/main/tutorials/05_hybrid_model.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" /> Download notebook</a>
</td>
</table>
```
# for Colab notebooks, we will start by installing the ``collie`` library
!pip install collie --quiet
%reload_ext autoreload
%autoreload 2
%matplotlib inline
%env DATA_PATH data/
import os
import numpy as np
import pandas as pd
from pytorch_lightning.utilities.seed import seed_everything
from IPython.display import HTML
import joblib
import torch
from collie.metrics import mapk, mrr, auc, evaluate_in_batches
from collie.model import CollieTrainer, HybridPretrainedModel, MatrixFactorizationModel
from collie.movielens import get_movielens_metadata, get_recommendation_visualizations
```
## Load Data From ``01_prepare_data`` Notebook
If you're running this locally on Jupyter, you should be able to run the next cell quickly without a problem! If you are running this on Colab, you'll need to regenerate the data by running the cell below that, which should only take a few extra seconds to complete.
```
try:
# let's grab the ``Interactions`` objects we saved in the last notebook
train_interactions = joblib.load(os.path.join(os.environ.get('DATA_PATH', 'data/'),
'train_interactions.pkl'))
val_interactions = joblib.load(os.path.join(os.environ.get('DATA_PATH', 'data/'),
'val_interactions.pkl'))
except FileNotFoundError:
# we're running this notebook on Colab where results from the first notebook are not saved
# regenerate this data below
from collie.cross_validation import stratified_split
from collie.interactions import Interactions
from collie.movielens import read_movielens_df
from collie.utils import convert_to_implicit, remove_users_with_fewer_than_n_interactions
df = read_movielens_df(decrement_ids=True)
implicit_df = convert_to_implicit(df, min_rating_to_keep=4)
implicit_df = remove_users_with_fewer_than_n_interactions(implicit_df, min_num_of_interactions=3)
interactions = Interactions(
users=implicit_df['user_id'],
items=implicit_df['item_id'],
ratings=implicit_df['rating'],
allow_missing_ids=True,
)
train_interactions, val_interactions = stratified_split(interactions, test_p=0.1, seed=42)
print('Train:', train_interactions)
print('Val: ', val_interactions)
```
# Hybrid Collie Model Using a Pre-Trained ``MatrixFactorizationModel``
In this notebook, we will use this same metadata and incorporate it directly into the model architecture with a hybrid Collie model.
## Read in Data
```
# read in the same metadata used in notebooks ``03`` and ``04``
metadata_df = get_movielens_metadata()
metadata_df.head()
# and, as always, set our random seed
seed_everything(22)
```
## Train a ``MatrixFactorizationModel``
The first step towards training a Collie Hybrid model is to train a regular ``MatrixFactorizationModel`` to generate rich user and item embeddings. We'll use these embeddings in a ``HybridPretrainedModel`` a bit later.
```
model = MatrixFactorizationModel(
train=train_interactions,
val=val_interactions,
embedding_dim=30,
lr=1e-2,
)
trainer = CollieTrainer(model=model, max_epochs=10, deterministic=True)
trainer.fit(model)
mapk_score, mrr_score, auc_score = evaluate_in_batches([mapk, mrr, auc], val_interactions, model)
print(f'Standard MAP@10 Score: {mapk_score}')
print(f'Standard MRR Score: {mrr_score}')
print(f'Standard AUC Score: {auc_score}')
```
## Train a ``HybridPretrainedModel``
With our trained ``model`` above, we can now use these embeddings and additional side data directly in a hybrid model. The architecture essentially takes our user embedding, item embedding, and item metadata for each user-item interaction, concatenates them, and sends it through a simple feedforward network to output a recommendation score.
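The essence of that architecture can be sketched in a few lines of NumPy. This is only an illustration of the concatenate-then-feedforward idea for a single user-item pair (the dimensions and weights are made up, and Collie's actual `HybridPretrainedModel` is more involved):

```python
import numpy as np

rng = np.random.default_rng(22)
embedding_dim, metadata_dim, hidden_dim = 30, 8, 16

# toy pre-trained embeddings and item metadata for one user-item interaction
user_emb = rng.normal(size=embedding_dim)
item_emb = rng.normal(size=embedding_dim)
item_metadata = rng.normal(size=metadata_dim)

# concatenate user embedding, item embedding, and item metadata ...
features = np.concatenate([user_emb, item_emb, item_metadata])

# ... then send them through a simple feedforward network for a score
W1 = rng.normal(size=(hidden_dim, features.size)) * 0.1
W2 = rng.normal(size=(1, hidden_dim)) * 0.1
hidden = np.maximum(0, W1 @ features)  # ReLU
score = float(W2 @ hidden)
print(f'recommendation score: {score:.4f}')
```

In the real model these weights are trained, and the metadata can first pass through its own layers (the `metadata_layers_dims` argument below) before being combined.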
We can initially freeze the user and item embeddings from our previously-trained ``model``, train for a few epochs only optimizing our newly-added linear layers, and then train a model with everything unfrozen at a lower learning rate. We will show this process below.
```
# we will apply a linear layer to the metadata with ``metadata_layers_dims`` and
# a linear layer to the combined embeddings and metadata data with ``combined_layers_dims``
hybrid_model = HybridPretrainedModel(
train=train_interactions,
val=val_interactions,
item_metadata=metadata_df,
trained_model=model,
metadata_layers_dims=[8],
combined_layers_dims=[16],
lr=1e-2,
freeze_embeddings=True,
)
hybrid_trainer = CollieTrainer(model=hybrid_model, max_epochs=10, deterministic=True)
hybrid_trainer.fit(hybrid_model)
mapk_score, mrr_score, auc_score = evaluate_in_batches([mapk, mrr, auc], val_interactions, hybrid_model)
print(f'Hybrid MAP@10 Score: {mapk_score}')
print(f'Hybrid MRR Score: {mrr_score}')
print(f'Hybrid AUC Score: {auc_score}')
hybrid_model_unfrozen = HybridPretrainedModel(
train=train_interactions,
val=val_interactions,
item_metadata=metadata_df,
trained_model=model,
metadata_layers_dims=[8],
combined_layers_dims=[16],
lr=1e-4,
freeze_embeddings=False,
)
hybrid_model.unfreeze_embeddings()
hybrid_model_unfrozen.load_from_hybrid_model(hybrid_model)
hybrid_trainer_unfrozen = CollieTrainer(model=hybrid_model_unfrozen, max_epochs=10, deterministic=True)
hybrid_trainer_unfrozen.fit(hybrid_model_unfrozen)
mapk_score, mrr_score, auc_score = evaluate_in_batches([mapk, mrr, auc],
val_interactions,
hybrid_model_unfrozen)
print(f'Hybrid Unfrozen MAP@10 Score: {mapk_score}')
print(f'Hybrid Unfrozen MRR Score: {mrr_score}')
print(f'Hybrid Unfrozen AUC Score: {auc_score}')
```
Note here that while our ``MAP@10`` and ``MRR`` scores went down slightly from the frozen version of the model above, our ``AUC`` score increased. For implicit recommendation models, each evaluation metric is nuanced in what it represents for real world recommendations.
You can read more about each evaluation metric by checking out the [Mean Average Precision at K (MAP@K)](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Mean_average_precision), [Mean Reciprocal Rank](https://en.wikipedia.org/wiki/Mean_reciprocal_rank), and [Area Under the Curve (AUC)](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve) Wikipedia pages.
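For intuition about what these ranking metrics measure, here is a minimal sketch with toy functions; these are illustrations only, not the implementations Collie uses internally.

```python
# Toy implementations of per-user ranking metrics -- a sketch for intuition,
# not Collie's internal code.

def average_precision_at_k(recommended, relevant, k=10):
    """Average of precision@i over the ranks i where a relevant item appears."""
    hits, score = 0, 0.0
    for i, item in enumerate(recommended[:k]):
        if item in relevant:
            hits += 1
            score += hits / (i + 1)
    return score / min(len(relevant), k) if relevant else 0.0

def reciprocal_rank(recommended, relevant):
    """1 / rank of the first relevant recommended item (0 if none appears)."""
    for i, item in enumerate(recommended):
        if item in relevant:
            return 1.0 / (i + 1)
    return 0.0
```

MAP@K and MRR average these per-user values over all users; AUC instead measures how often a relevant item is ranked above an irrelevant one, which is why the three metrics can move in different directions.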
```
user_id = np.random.randint(0, train_interactions.num_users)
display(
HTML(
get_recommendation_visualizations(
model=hybrid_model_unfrozen,
user_id=user_id,
filter_films=True,
shuffle=True,
detailed=True,
)
)
)
```
The metrics and results look great, and we should only see a larger difference compared to a standard model as our data becomes more nuanced and complex (such as with MovieLens 10M data).
If we're happy with this model, we can go ahead and save it for later!
## Save and Load a Hybrid Model
```
# we can save the model with...
os.makedirs('models', exist_ok=True)
hybrid_model_unfrozen.save_model('models/hybrid_model_unfrozen')
# ... and if we wanted to load that model back in, we can do that easily...
hybrid_model_loaded_in = HybridPretrainedModel(load_model_path='models/hybrid_model_unfrozen')
hybrid_model_loaded_in
```
While our model works and the results look great, it's not always possible to fully train two separate models like we've done in this tutorial. Sometimes, it's easier (and even better) to train a single hybrid model from scratch, with no pretrained ``MatrixFactorizationModel`` needed.
In the next tutorial, we'll cover multi-stage models in Collie, tackling this exact problem and more! See you there!
-----
| github_jupyter |
# Ensemble Clustering for Graphs (ECG)
# Does not run on Pascal
In this notebook, we will use cuGraph to identify the clusters in a test graph using the Ensemble Clustering for Graphs approach.
Notebook Credits
* Original Authors: Bradley Rees and James Wyles
* Created: 04/24/2020
* Last Edit: 08/16/2020
RAPIDS Versions: 0.15
Test Hardware
* GV100 32G, CUDA 10.2
## Introduction
The Ensemble Clustering for Graphs (ECG) method of community detection is based on the Louvain algorithm.
For a detailed description of the algorithm see: https://arxiv.org/abs/1809.05578
It takes as input a cugraph.Graph object and returns a cudf.DataFrame object with the ID and assigned partition for each vertex, as well as the final modularity score.
To compute the ECG clusters in cuGraph use: <br>
__df = cugraph.ecg(G, min_weight=0.05, ensemble_size=16)__
Parameters
----------
G cugraph.Graph
cuGraph graph descriptor, should contain the connectivity information and weights.
The adjacency list will be computed if not already present.
min_weight: floating point
The minimum value to assign as an edge weight in the ECG algorithm.
It should be a value in the range [0, 1], usually left at the default value of 0.05
ensemble_size: integer
The number of graph permutations to use for the ensemble.
The default value is 16, larger values may produce higher quality partitions for some graphs.
Returns
-------
parts : cudf.DataFrame
A GPU data frame of size V containing two columns: the vertex ID and the
partition ID it is assigned to.
df['vertex'] cudf.Series
Contains the vertex identifiers
df['partition'] cudf.Series
Contains the partition assigned to the vertices
All vertices with the same partition ID are in the same cluster
### References
* Poulin, V., & Théberge, F. (2018, December). Ensemble clustering for graphs. In International Conference on Complex Networks and their Applications (pp. 231-243). Springer, Cham.
#### Some notes about vertex IDs...
* The current version of cuGraph requires that vertex IDs be representable as 32-bit integers, meaning graphs currently can contain at most 2^32 unique vertex IDs. However, this limitation is being actively addressed and a version of cuGraph that accommodates more than 2^32 vertices will be available in the near future.
* cuGraph will automatically renumber graphs to an internal format consisting of a contiguous series of integers starting from 0, and convert back to the original IDs when returning data to the caller. If the vertex IDs of the data are already a contiguous series of integers starting from 0, the auto-renumbering step can be skipped for faster graph creation times.
* To skip auto-renumbering, set the `renumber` boolean arg to `False` when calling the appropriate graph creation API (eg. `G.from_cudf_edgelist(gdf_r, source='src', destination='dst', renumber=False)`).
* For more advanced renumbering support, see the examples in `structure/renumber.ipynb` and `structure/renumber-2.ipynb`
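The renumbering idea itself can be sketched with plain pandas (this illustrates the concept only, not cuGraph's internals): map arbitrary vertex IDs to a contiguous 0-based range, run the algorithm, then map results back to the original IDs.

```python
import pandas as pd

# Conceptual sketch of auto-renumbering (plain pandas, not cuGraph internals).
edges = pd.DataFrame({"src": [10, 20, 10], "dst": [20, 30, 30]})

# Build a lookup from original IDs to a contiguous 0-based internal range
ids = sorted(pd.unique(pd.concat([edges["src"], edges["dst"]])))
to_internal = {v: i for i, v in enumerate(ids)}        # e.g. 10 -> 0, 20 -> 1, 30 -> 2
to_original = {i: v for v, i in to_internal.items()}   # inverse mapping for results

renumbered = edges.replace(to_internal)
```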
### Test Data
We will be using the Zachary Karate club dataset
*W. W. Zachary, An information flow model for conflict and fission in small groups, Journal of
Anthropological Research 33, 452-473 (1977).*

Because the test data has vertex IDs starting at 1, the auto-renumber feature of cuGraph (mentioned above) will be used so the starting vertex ID is zero for maximum efficiency. The resulting data will then be auto-unrenumbered, making the entire renumbering process transparent to users.
### Prep
```
# Import needed libraries
import cugraph
import cudf
```
## Read data using cuDF
```
# Test file
datafile = '../data/karate-data.csv'
# read the data using cuDF
gdf = cudf.read_csv(datafile, delimiter='\t', names=['src', 'dst'], dtype=['int32', 'int32'] )
# The algorithm also requires edge weights. Just use 1.0 for all of them
gdf["data"] = 1.0
# just for fun, let's look at the data types in the dataframe
gdf.dtypes
# create a Graph - since the data does not start at '0', use the auto-renumbering feature
G = cugraph.Graph()
G.from_cudf_edgelist(gdf, source='src', destination='dst', edge_attr='data', renumber=True)
# Call ECG on the graph
df = cugraph.ecg(G)
df.dtypes
# How many partitions were found
part_ids = df["partition"].unique()
print(str(len(part_ids)) + " partitions detected")
# print the clusters.
for p in range(len(part_ids)):
part = []
for i in range(len(df)):
if (df['partition'].iloc[i] == p):
part.append(df['vertex'].iloc[i] )
print("Partition " + str(p) + ":")
print(part)
```
___
Copyright (c) 2019-2020, NVIDIA CORPORATION.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
___
This script goes along my blog post:
Keras Cats Dogs Tutorial (https://jkjung-avt.github.io/keras-tutorial/)
"""
from tensorflow.python.keras import backend as K
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers import Flatten, Dense, Dropout
from tensorflow.python.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.python.keras.optimizers import Adam
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
DATASET_PATH = './catsdogs/sample'
IMAGE_SIZE = (224, 224)
NUM_CLASSES = 2
BATCH_SIZE = 8 # try reducing batch size or freeze more layers if your GPU runs out of memory
FREEZE_LAYERS = 2 # freeze the first this many layers for training
NUM_EPOCHS = 20
WEIGHTS_FINAL = 'model-resnet50-final.h5'
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
channel_shift_range=10,
horizontal_flip=True,
fill_mode='nearest')
train_batches = train_datagen.flow_from_directory(DATASET_PATH + '/train',
target_size=IMAGE_SIZE,
interpolation='bicubic',
class_mode='categorical',
shuffle=True,
batch_size=BATCH_SIZE)
valid_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
valid_batches = valid_datagen.flow_from_directory(DATASET_PATH + '/valid',
target_size=IMAGE_SIZE,
interpolation='bicubic',
class_mode='categorical',
shuffle=False,
batch_size=BATCH_SIZE)
# show class indices
print('****************')
for cls, idx in train_batches.class_indices.items():
print('Class #{} = {}'.format(idx, cls))
print('****************')
# build our classifier model based on pre-trained ResNet50:
# 1. we don't include the top (fully connected) layers of ResNet50
# 2. we add a DropOut layer followed by a Dense (fully connected)
# layer which generates softmax class score for each class
# 3. we compile the final model using an Adam optimizer, with a
# low learning rate (since we are 'fine-tuning')
net = ResNet50(include_top=False, weights='imagenet', input_tensor=None,
input_shape=(IMAGE_SIZE[0],IMAGE_SIZE[1],3))
x = net.output
x = Flatten()(x)
x = Dropout(0.5)(x)
output_layer = Dense(NUM_CLASSES, activation='softmax', name='softmax')(x)
net_final = Model(inputs=net.input, outputs=output_layer)
for layer in net_final.layers[:FREEZE_LAYERS]:
layer.trainable = False
for layer in net_final.layers[FREEZE_LAYERS:]:
layer.trainable = True
net_final.compile(optimizer=Adam(lr=1e-5),
loss='categorical_crossentropy', metrics=['accuracy'])
print(net_final.summary())
# train the model
net_final.fit_generator(train_batches,
steps_per_epoch = train_batches.samples // BATCH_SIZE,
validation_data = valid_batches,
validation_steps = valid_batches.samples // BATCH_SIZE,
epochs = NUM_EPOCHS)
# save trained weights
net_final.save(WEIGHTS_FINAL)
# Implicit Georeferencing
This workbook sets explicit georeferences on datasets whose georeferencing is only implicit, given through the names of spatial extents in dataset titles or keywords.
A file `secret.py` needs to contain the CKAN and SOURCE config as follows:
```
CKAN = {
"dpaw-internal":{
"url": "http://internal-data.dpaw.wa.gov.au/",
"key": "API-KEY"
}
}
```
## Configure CKAN and source
```
import ckanapi
from harvest_helpers import *
from secret import CKAN
ckan = ckanapi.RemoteCKAN(CKAN["dpaw-internal"]["url"], apikey=CKAN["dpaw-internal"]["key"])
print("Using CKAN {0}".format(ckan.address))
```
## Spatial extent name-geometry lookup
The fully qualified names and GeoJSON geometries of the relevant spatial areas are contained in our custom data schema.
```
# Getting the extent dictionary e
url = "https://raw.githubusercontent.com/datawagovau/ckanext-datawagovautheme/dpaw-internal/ckanext/datawagovautheme/datawagovau_dataset.json"
ds = json.loads(requests.get(url).content)
choice_dict = [x for x in ds["dataset_fields"] if x["field_name"] == "spatial"][0]["choices"]
e = dict([(x["label"], json.dumps(x["value"])) for x in choice_dict])
print("Extents: {0}".format(e.keys()))
```
## Name lookups
Relevant areas are listed under different synonyms. We'll create a list of dicts mapping synonymous search terms (key "s") to extent names (key "i").
```
# Creating a search term - extent index lookup
# m is a list of keys "s" (search term) and "i" (extent index)
m = [
{"s":"Eighty", "i":"MPA Eighty Mile Beach"},
{"s":"EMBMP", "i":"MPA Eighty Mile Beach"},
{"s":"Camden", "i":"MPA Lalang-garram / Camden Sound"},
{"s":"LCSMP", "i":"MPA Lalang-garram / Camden Sound"},
{"s":"Rowley", "i":"MPA Rowley Shoals"},
{"s":"RSMP", "i":"MPA Rowley Shoals"},
{"s":"Montebello", "i":"MPA Montebello Barrow"},
{"s":"MBIMPA", "i":"MPA Montebello Barrow"},
{"s":"Ningaloo", "i":"MPA Ningaloo"},
{"s":"NMP", "i":"MPA Ningaloo"},
{"s":"Shark bay", "i":"MPA Shark Bay Hamelin Pool"},
{"s":"SBMP", "i":"MPA Shark Bay Hamelin Pool"},
{"s":"Jurien", "i":"MPA Jurien Bay"},
{"s":"JBMP", "i":"MPA Jurien Bay"},
{"s":"Marmion", "i":"MPA Marmion"},
{"s":"Swan Estuary", "i":"MPA Swan Estuary"},
{"s":"SEMP", "i":"MPA Swan Estuary"},
{"s":"Shoalwater", "i":"MPA Shoalwater Islands"},
{"s":"SIMP", "i":"MPA Shoalwater Islands"},
{"s":"Ngari", "i":"MPA Ngari Capes"},
{"s":"NCMP", "i":"MPA Ngari Capes"},
{"s":"Walpole", "i":"MPA Walpole Nornalup"},
{"s":"WNIMP", "i":"MPA Walpole Nornalup"}
]
def add_spatial(dsdict, extent_string, force=False, debug=False):
"""Adds a given spatial extent to a CKAN dataset dict if
"spatial" is None, "" or force==True.
Arguments:
dsdict (ckanapi.action.package_show()) CKAN dataset dict
extent_string (String) GeoJSON geometry as json.dumps String
force (Boolean) Whether to force overwriting "spatial"
debug (Boolean) Debug noise
Returns:
(dict) The dataset with spatial extent replaced per above rules.
"""
if "spatial" not in dsdict:
overwrite = True
msg = "Spatial extent not given"
elif dsdict["spatial"] == "":
overwrite = True
msg = "Spatial extent is empty"
elif force:
overwrite = True
msg = "Spatial extent was overwritten"
else:
overwrite = False
msg = "Spatial extent unchanged"
if overwrite:
dsdict["spatial"] = extent_string
if debug:
print(msg)
return dsdict
def restore_extents(search_mapping, extents, ckan, debug=False):
"""Restore spatial extents for datasets
Arguments:
search_mapping (list) A list of dicts with keys "s" for ckanapi
package_search query parameter "q", and key "i" for the name
of the extent
e.g.:
m = [
{"s":"tags:marinepark_80_mile_beach", "i":"MPA Eighty Mile Beach"},
...
]
extents (dict) A dict with key "i" (extent name) and
GeoJSON Multipolygon geometry strings as value, e.g.:
{u'MPA Eighty Mile Beach': '{"type": "MultiPolygon", "coordinates": [ .... ]', ...}
ckan (ckanapi) A ckanapi instance
debug (boolean) Debug noise
Returns:
A list of dictionaries returned by ckanapi's package_update
"""
for x in search_mapping:
if debug:
print("\nSearching CKAN with '{0}'".format(x["s"]))
found = ckan.action.package_search(q=x["s"])["results"]
if debug:
print("Found datasets: {0}\n".format([d["title"] for d in found]))
fixed = [add_spatial(d, extents[x["i"]], force=True, debug=True) for d in found]
if debug:
print(fixed, "\n")
datasets_updated = upsert_datasets(fixed, ckan, debug=False)
restore_extents(m, e, ckan)
d = [ckan.action.package_show(id = x) for x in ckan.action.package_list()]
fix = [x["title"] for x in d if "spatial" not in x]
len(fix)
d[0]
fix
```
```
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [12.0, 6.0]
import okama as ok
```
The **EfficientFrontier** class can be used for "classic" frontiers where all portfolios are **rebalanced monthly**. It's the easiest and fastest way to draw an Efficient Frontier.
### Simple efficient frontier for 2 ETF
```
ls2 = ['SPY.US', 'BND.US']
curr='USD'
two_assets = ok.EfficientFrontier(symbols=ls2, curr=curr, n_points=100) # n_points - specifies a number of points in the Efficient Frontier chart (default is 20)
two_assets
```
The **ef_points** property returns a dataframe (table).
Each row holds the properties of one portfolio, i.e. one point on the frontier:
_Risk_ - the volatility or standard deviation
_Mean return_ - the expectation or arithmetic mean
_CAGR_ - compound annual growth rate
All the properties have annualized values.
The last columns are the weights for each asset.
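As a quick illustration of common annualization conventions (an assumption on our part; okama's exact formulas may differ), monthly values can be annualized like this:

```python
# Hedged sketch of typical annualization conventions (assumed, not taken
# from okama's source): compound the monthly mean, scale risk by sqrt(12).
monthly_mean = 0.01   # 1% mean monthly return (toy value)
monthly_std = 0.04    # 4% monthly standard deviation (toy value)

annual_mean = (1 + monthly_mean) ** 12 - 1   # geometric compounding
annual_std = monthly_std * 12 ** 0.5         # square-root-of-time rule
```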
```
df = two_assets.ef_points
df
fig = plt.figure()
# Plotting the assets
ok.Plots(ls2, curr=curr).plot_assets(kind='cagr')
ax = plt.gca()
# Plotting the Efficient Frontier
ax.plot(df['Risk'], df['CAGR']);
```
It's possible to draw both efficient frontiers: for mean return and for CAGR with the same dataframe.
```
fig = plt.figure()
# Plotting the assets
ok.Plots(ls2, curr=curr).plot_assets(kind='cagr')
ax = plt.gca()
# Plotting the Efficient Frontiers
# EF with mean return
ax.plot(df['Risk'], df['Mean return'])
# EF with CAGR
ax.plot(df['Risk'], df['CAGR']);
```
### Several assets
Let's add popular physical gold and real estate ETFs...
```
ls4 = ['SPY.US', 'BND.US', 'GLD.US', 'VNQ.US']
curr = 'USD'
four_assets = ok.EfficientFrontier(symbols=ls4, curr=curr, n_points=100)
four_assets
df4 = four_assets.ef_points
fig = plt.figure()
# Plotting the assets
ok.Plots(ls4, curr=curr).plot_assets(kind='cagr')
ax = plt.gca()
# Plotting the Efficient Frontier
ax.plot(df4['Risk'], df4['CAGR']);
```
### Efficient Frontier for each pair of assets
Sometimes it can be helpful to see how each pair of assets "contributes" to the common efficient frontier by drawing all the pair frontiers.
```
ok.Plots(ls4, curr=curr).plot_pair_ef();
```
We can see all efficient frontiers (pairs and 4 assets) in a common chart ...
```
fig = plt.figure()
# Plotting the assets
ok.Plots(ls4, curr=curr).plot_pair_ef()
ax = plt.gca()
# Plotting the Efficient Frontier
ax.plot(df4['Risk'], df4['Mean return'], color = 'black', linestyle='--');
```
### Global Minimum Variance (GMV) portfolio
GMV weights and values can be found with **gmv_weights**, **gmv_monthly** and **gmv_annualized**.
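For intuition, the unconstrained GMV portfolio has a closed form, w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal numpy sketch follows; the covariance matrix is a toy assumption, and okama's implementation may additionally enforce constraints such as no short positions.

```python
import numpy as np

# Closed-form global-minimum-variance weights: w = inv(cov) @ 1, normalized.
# The covariance matrix below is a toy assumption, not data from okama.
cov = np.array([[0.04, 0.006],
                [0.006, 0.01]])
ones = np.ones(len(cov))
w = np.linalg.solve(cov, ones)   # solve cov @ w = 1 instead of inverting
w /= w.sum()                     # normalize so the weights sum to 1
gmv_variance = w @ cov @ w       # lowest variance of any fully-invested mix
```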
Weights of GMV portfolio:
```
four_assets.gmv_weights
```
Risk and mean return on monthly basis:
```
four_assets.gmv_monthly
```
Risk and mean return annualized:
```
four_assets.gmv_annualized
```
With annualized values it's easy to draw the GMV point on the chart.
```
fig = plt.figure()
ax = plt.gca()
# Plotting the Efficient Frontier
ax.plot(df4['Risk'], df4['CAGR']);
# plotting GMV point
ax.scatter(four_assets.gmv_annualized[0], four_assets.gmv_annualized[1])
# annotations for GMV point
ax.annotate("GMV", # this is the text
(four_assets.gmv_annualized[0], four_assets.gmv_annualized[1]), # this is the point to label
textcoords="offset points", # how to position the text
xytext=(0, 10), # distance from text to points (x,y)
ha='center'); # horizontal alignment can be left, right or center
```
### Monte Carlo simulation for efficient frontier
Monte Carlo simulation is useful for visualizing portfolio allocations inside the Efficient Frontier. It generates N random weight vectors and calculates their properties (risk and return metrics).
Let's create a list of popular German stocks and add a US bonds ETF (AGG) and spot gold prices (GC.COMM). The portfolio currency is EUR.
```
ls5 = ['DBK.XETR', 'SIE.XETR', 'TKA.XETR', 'AGG.US', 'GC.COMM']
curr = 'EUR'
gr = ok.EfficientFrontier(symbols=ls5, curr=curr, n_points=100)
gr
gr.names
```
To create a "cloud" of random portfolios, the **get_monte_carlo** method is used.
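The idea behind such a cloud can be sketched with plain numpy; the annual means and covariance below are assumed toy inputs, not okama's data or implementation.

```python
import numpy as np

# Hedged sketch of a Monte Carlo portfolio cloud: sample random fully-invested
# weight vectors and compute each portfolio's return and risk (toy inputs).
rng = np.random.default_rng(0)
mean_returns = np.array([0.08, 0.03, 0.05])      # assumed annual means
cov = np.array([[0.0400, 0.0020, 0.0010],
                [0.0020, 0.0100, 0.0015],
                [0.0010, 0.0015, 0.0200]])       # assumed covariance matrix
weights = rng.dirichlet(np.ones(3), size=5000)   # each row sums to 1, all >= 0
port_return = weights @ mean_returns
port_risk = np.sqrt(np.einsum("ij,jk,ik->i", weights, cov, weights))
```

Plotting `port_risk` against `port_return` with `scatter` produces the same kind of cloud as `get_monte_carlo`.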
```
mc = gr.get_monte_carlo(n=5000, kind='cagr') # it is possible to choose whether mean return or CAGR is used with "kind" attribute
mc
```
We can plot the random portfolios with matplotlib's **scatter** method. To add the asset points to the chart, **plot_assets** is used (from the Plots class).
```
fig = plt.figure(figsize=(12,6))
fig.subplots_adjust(bottom=0.2, top=1.5)
ok.Plots(ls5, curr='EUR').plot_assets(kind='cagr') # plot the assets points
ax = plt.gca()
ax.scatter(mc.Risk, mc.CAGR, linewidth=0, color='green');
```
As the random portfolios "cloud" usually does not have an obvious shape, it is sometimes worth drawing the Monte Carlo simulation together with the Efficient Frontier.
```
ef = gr.ef_points # calculate Efficient Frontier points
fig = plt.figure(figsize=(12,6))
fig.subplots_adjust(bottom=0.2, top=1.5)
ok.Plots(ls5, curr='EUR').plot_assets(kind='cagr') # plot the assets points
ax = plt.gca()
ax.plot(ef.Risk, ef['CAGR'], color='black', linestyle='dashed', linewidth=3) # plot the Efficient Frontier
ax.scatter(mc.Risk, mc.CAGR, linewidth=0, color='green'); # plot the Monte Carlo simulation results
```
<a href="https://colab.research.google.com/github/xavoliva6/dpfl_pytorch/blob/main/experiments/exp_FedMNIST.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Experiments on FedMNIST
**Colab Support**<br/>
Only run the following lines if you want to run the code on Google Colab
```
# Enable access to files stored in Google Drive
from google.colab import drive
drive.mount('/content/gdrive/')
% cd /content/gdrive/My Drive/OPT4ML/src
```
# Main
```
# Install necessary requirements
!pip install -r ../requirements.txt
# Make sure cuda support is available
import torch
if torch.cuda.is_available():
device_name = "cuda:0"
else:
device_name = "cpu"
print("device_name: {}".format(device_name))
device = torch.device(device_name)
%load_ext autoreload
%autoreload 2
import sys
import warnings
warnings.filterwarnings("ignore")
from server import Server
from utils import plot_exp
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [6, 6]
plt.rcParams['figure.dpi'] = 100
```
### First experiment : impact of federated learning
```
LR = 0.01
EPOCHS = 1
NR_TRAINING_ROUNDS = 30
BATCH_SIZE = 128
RANGE_NR_CLIENTS = [1,5,10]
experiment_losses, experiment_accs = [], []
for nr_clients in RANGE_NR_CLIENTS:
print(f"### Number of clients : {nr_clients} ###\n\n")
server = Server(
nr_clients=nr_clients,
nr_training_rounds=NR_TRAINING_ROUNDS,
data='MNIST',
epochs=EPOCHS,
lr=LR,
batch_size=BATCH_SIZE,
is_private=False,
epsilon=None,
max_grad_norm=None,
noise_multiplier=None,
is_parallel=True,
device=device,
verbose='server')
test_losses, test_accs = server.train()
experiment_losses.append(test_losses)
experiment_accs.append(test_accs)
names = [f'{i} clients' for i in RANGE_NR_CLIENTS]
title = 'First experiment : MNIST database'
fig = plot_exp(experiment_losses, experiment_accs, names, title)
fig.savefig("MNIST_exp1.pdf")
```
### Second experiment : impact of differential privacy
```
NR_CLIENTS = 10
NR_TRAINING_ROUNDS = 30
EPOCHS = 1
LR = 0.01
BATCH_SIZE = 128
MAX_GRAD_NORM = 1.2
NOISE_MULTIPLIER = None
RANGE_EPSILON = [10,50,100]
experiment_losses, experiment_accs = [], []
for epsilon in RANGE_EPSILON:
print(f"### ε : {epsilon} ###\n\n")
server = Server(
nr_clients=NR_CLIENTS,
nr_training_rounds=NR_TRAINING_ROUNDS,
data='MNIST',
epochs=EPOCHS,
lr=LR,
batch_size=BATCH_SIZE,
is_private=True,
epsilon=epsilon,
max_grad_norm=MAX_GRAD_NORM,
noise_multiplier=NOISE_MULTIPLIER,
is_parallel=True,
device=device,
verbose='server')
test_losses, test_accs = server.train()
experiment_losses.append(test_losses)
experiment_accs.append(test_accs)
names = [f'ε = {i}' for i in RANGE_EPSILON]
title = 'Second experiment : MNIST database'
fig = plot_exp(experiment_losses, experiment_accs, names, title)
plt.savefig('MNIST_exp2.pdf')
```
```
# for reading and validating data
import emeval.input.spec_details as eisd
import emeval.input.phone_view as eipv
import emeval.input.eval_view as eiev
# Visualization helpers
import emeval.viz.phone_view as ezpv
import emeval.viz.eval_view as ezev
import emeval.viz.geojson as ezgj
import pandas as pd
# Metrics helpers
import emeval.metrics.dist_calculations as emd
# For computation
import numpy as np
import math
import scipy.stats as stats
import matplotlib.pyplot as plt
import geopandas as gpd
import shapely as shp
import folium
DATASTORE_URL = "http://cardshark.cs.berkeley.edu"
AUTHOR_EMAIL = "shankari@eecs.berkeley.edu"
sd_la = eisd.SpecDetails(DATASTORE_URL, AUTHOR_EMAIL, "unimodal_trip_car_bike_mtv_la")
sd_sj = eisd.SpecDetails(DATASTORE_URL, AUTHOR_EMAIL, "car_scooter_brex_san_jose")
sd_ucb = eisd.SpecDetails(DATASTORE_URL, AUTHOR_EMAIL, "train_bus_ebike_mtv_ucb")
import importlib
importlib.reload(eisd)
pv_la = eipv.PhoneView(sd_la)
pv_sj = eipv.PhoneView(sd_sj)
pv_ucb = eipv.PhoneView(sd_ucb)
```
### Validate distance calculations
Our x,y coordinates are in degrees (lon, lat). So when we calculate the distance between two points, it is also in degrees. In order for this to be meaningful, we need to convert it to a regular distance metric such as meters.
This is a complicated problem in general because our distance calculation applies 2-D spatial operations to a 3-D curved space. However, as documented in the shapely documentation, since our areas of interest are small, we can use a 2-D approximation and get reasonable results.
In order to get distances from degree-based calculations, we can use the following options:
- perform the calculations in degrees and then convert them to meters. As an approximation, we can use the fact that 360 degrees represents the circumference of the earth. Therefore `dist = degree_dist * (C/360)`
- convert degrees to x,y coordinates using utm (https://en.wikipedia.org/wiki/Universal_Transverse_Mercator_coordinate_system) and then calculate the distance
- since we calculate the distance from the ground truth linestring, calculate the closest ground truth point in (lon,lat) and then use the haversine formula (https://en.wikipedia.org/wiki/Haversine_formula) to calculate the distance between the two points
Let us quickly run all three calculations on three selected test cases and:
- check whether they are largely consistent
- compare with other distance calculators to see which are closer
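A minimal sketch of the first and third options (the constant and function names are ours, not emeval's helpers):

```python
import math

EARTH_CIRCUMFERENCE_M = 40_075_017.0  # equatorial circumference, metres

def degree_dist_to_meters(degree_dist):
    # Option 1: dist = degree_dist * (C / 360)
    return degree_dist * EARTH_CIRCUMFERENCE_M / 360.0

def haversine_m(lon1, lat1, lon2, lat2):
    # Option 3: great-circle distance between two (lon, lat) points in metres
    r = 6_371_000.0  # mean earth radius
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))
```

For one degree of latitude the two agree to within a fraction of a percent (~111.3 km vs ~111.2 km); note that a degree of longitude shrinks with cos(latitude), which the circumference shortcut ignores.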
```
test_cases = {
"commuter_rail_aboveground": {
"section": pv_ucb.map()["android"]["ucb-sdb-android-3"]["evaluation_ranges"][0]["evaluation_trip_ranges"][0]["evaluation_section_ranges"][2],
"ground_truth": sd_ucb.get_ground_truth_for_leg("mtv_to_berkeley_sf_bart", "commuter_rail_aboveground")
},
"light_rail_below_above_ground": {
"section": pv_ucb.map()["android"]["ucb-sdb-android-3"]["evaluation_ranges"][0]["evaluation_trip_ranges"][2]["evaluation_section_ranges"][7],
"ground_truth": sd_ucb.get_ground_truth_for_leg("berkeley_to_mtv_SF_express_bus", "light_rail_below_above_ground")
},
"express_bus": {
"section": pv_ucb.map()["ios"]["ucb-sdb-ios-3"]["evaluation_ranges"][1]["evaluation_trip_ranges"][2]["evaluation_section_ranges"][4],
"ground_truth": sd_ucb.get_ground_truth_for_leg("berkeley_to_mtv_SF_express_bus", "express_bus")
},
}
for t in test_cases.values():
t["gt_shapes"] = gpd.GeoSeries(eisd.SpecDetails.get_shapes_for_leg(t["ground_truth"]))
importlib.reload(emd)
dist_checks = []
pct_checks = []
for (k, t) in test_cases.items():
location_gpdf = emd.filter_geo_df(emd.to_geo_df(t["section"]["location_df"]), t["gt_shapes"].filter(["start_loc","end_loc"]))
gt_linestring = emd.filter_ground_truth_linestring(t["gt_shapes"])
dc = emd.dist_using_circumference(location_gpdf, gt_linestring)
dcrs = emd.dist_using_crs_change(location_gpdf, gt_linestring)
dmuc = emd.dist_using_manual_utm_change(location_gpdf, gt_linestring)
dmmc = emd.dist_using_manual_mercator_change(location_gpdf, gt_linestring)
dup = emd.dist_using_projection(location_gpdf, gt_linestring)
dist_compare = pd.DataFrame({"dist_circumference": dc, "dist_crs_change": dcrs,
"dist_manual_utm": dmuc, "dist_manual_mercator": dmmc,
"dist_project": dup})
dist_compare["diff_c_mu"] = (dist_compare.dist_circumference - dist_compare.dist_manual_utm).abs()
dist_compare["diff_mu_pr"] = (dist_compare.dist_manual_utm - dist_compare.dist_project).abs()
dist_compare["diff_mm_pr"] = (dist_compare.dist_manual_mercator - dist_compare.dist_project).abs()
dist_compare["diff_c_pr"] = (dist_compare.dist_circumference - dist_compare.dist_project).abs()
dist_compare["diff_c_mu_pct"] = dist_compare.diff_c_mu / dist_compare.dist_circumference
dist_compare["diff_mu_pr_pct"] = dist_compare.diff_mu_pr / dist_compare.dist_circumference
dist_compare["diff_mm_pr_pct"] = dist_compare.diff_mm_pr / dist_compare.dist_circumference
dist_compare["diff_c_pr_pct"] = dist_compare.diff_c_pr / dist_compare.dist_circumference
match_dist = lambda t: {"key": k,
"threshold": t,
"diff_c_mu": len(dist_compare.query('diff_c_mu > @t')),
"diff_mu_pr": len(dist_compare.query('diff_mu_pr > @t')),
"diff_mm_pr": len(dist_compare.query('diff_mm_pr > @t')),
"diff_c_pr": len(dist_compare.query('diff_c_pr > @t')),
"total_entries": len(dist_compare)}
dist_checks.append(match_dist(1))
dist_checks.append(match_dist(5))
dist_checks.append(match_dist(10))
dist_checks.append(match_dist(50))
match_pct = lambda t: {"key": k,
"threshold": t,
"diff_c_mu_pct": len(dist_compare.query('diff_c_mu_pct > @t')),
"diff_mu_pr_pct": len(dist_compare.query('diff_mu_pr_pct > @t')),
"diff_mm_pr_pct": len(dist_compare.query('diff_mm_pr_pct > @t')),
"diff_c_pr_pct": len(dist_compare.query('diff_c_pr_pct > @t')),
"total_entries": len(dist_compare)}
pct_checks.append(match_pct(0.01))
pct_checks.append(match_pct(0.05))
pct_checks.append(match_pct(0.10))
pct_checks.append(match_pct(0.15))
pct_checks.append(match_pct(0.20))
pct_checks.append(match_pct(0.25))
# t = "commuter_rail_aboveground"
# gt_gj = eisd.SpecDetails.get_geojson_for_leg(test_cases[t]["ground_truth"])
# print(gt_gj.features[2])
# gt_gj.features[2] = ezgj.get_geojson_for_linestring(emd.filter_ground_truth_linestring(test_cases[t]["gt_shapes"]))
# curr_map = ezgj.get_map_for_geojson(gt_gj)
# curr_map.add_child(ezgj.get_fg_for_loc_df(emd.linestring_to_geo_df(test_cases[t]["gt_shapes"].loc["route"]),
# name="gt_points", color="green"))
# curr_map
pd.DataFrame(dist_checks)
pd.DataFrame(pct_checks)
manual_check_points = pd.concat([location_gpdf, dist_compare], axis=1)[["latitude", "fmt_time", "longitude", "dist_circumference", "dist_manual_utm", "dist_manual_mercator", "dist_project"]].sample(n=3, random_state=10); manual_check_points
# curr_map = ezpv.display_map_detail_from_df(manual_check_points)
# curr_map.add_child(folium.GeoJson(eisd.SpecDetails.get_geojson_for_leg(t["ground_truth"])))
```
### Externally calculated distance for these points is:
Distance calculated manually using
1. https://www.freemaptools.com/measure-distance.htm
1. Google Maps
Note that the error of my eyes + hand is ~ 2-3 m
- 1213: within margin of error
- 1053: 3987 (freemaptools), 4km (google)
- 1107: 15799.35 (freemaptools), 15.80km (google)
```
manual_check_points
```
### Results and method choice
We find that the `manual_utm` and `project` methods are pretty consistent, and are significantly different from the `circumference` method. The `circumference` method appears to be consistently greater than the other two and the difference appears to be around 25%. The manual checks also appear to be closer to the `manual_utm` and `project` values. The `manual_utm` and `project` values are consistently within ~ 5% of each other, so we could really use either one.
**We will use the utm approach** since it is correct, is consistent with the shapely documentation (https://shapely.readthedocs.io/en/stable/manual.html#coordinate-systems) and applicable to operations beyond distance calculation
> Even though the Earth is not flat – and for that matter not exactly spherical – there are many analytic problems that can be approached by transforming Earth features to a Cartesian plane, applying tried and true algorithms, and then transforming the results back to geographic coordinates. This practice is as old as the tradition of accurate paper maps.
## Spatial error calculation
```
def get_spatial_errors(pv):
spatial_error_df = pd.DataFrame()
for phone_os, phone_map in pv.map().items():
for phone_label, phone_detail_map in phone_map.items():
for (r_idx, r) in enumerate(phone_detail_map["evaluation_ranges"]):
run_errors = []
for (tr_idx, tr) in enumerate(r["evaluation_trip_ranges"]):
trip_errors = []
for (sr_idx, sr) in enumerate(tr["evaluation_section_ranges"]):
# This is a Shapely LineString
section_gt_leg = pv.spec_details.get_ground_truth_for_leg(tr["trip_id_base"], sr["trip_id_base"])
section_gt_shapes = gpd.GeoSeries(eisd.SpecDetails.get_shapes_for_leg(section_gt_leg))
if len(section_gt_shapes) == 1:
print("No ground truth route for %s %s, must be polygon, skipping..." % (tr["trip_id_base"], sr["trip_id_base"]))
assert section_gt_leg["type"] != "TRAVEL", "For %s, %s, %s, %s, %s found type %s" % (phone_os, phone_label, r_idx, tr_idx, sr_idx, section_gt_leg["type"])
continue
if len(sr['location_df']) == 0:
print("No sensed locations found, role = %s skipping..." % (r["eval_role_base"]))
# assert r["eval_role_base"] == "power_control", "Found no locations for %s, %s, %s, %s, %s" % (phone_os, phone_label, r_idx, tr_idx, sr_idx)
continue
print("Processing travel leg %s, %s, %s, %s, %s" %
(phone_os, phone_label, r["eval_role_base"], tr["trip_id_base"], sr["trip_id_base"]))
# This is a GeoDataFrame
section_geo_df = emd.to_geo_df(sr["location_df"])
# After this point, everything is in UTM so that 2-D inside/filtering operations work
utm_section_geo_df = emd.to_utm_df(section_geo_df)
utm_section_gt_shapes = section_gt_shapes.apply(lambda s: shp.ops.transform(emd.to_utm_coords, s))
filtered_us_gpdf = emd.filter_geo_df(utm_section_geo_df, utm_section_gt_shapes.loc["start_loc":"end_loc"])
filtered_gt_linestring = emd.filter_ground_truth_linestring(utm_section_gt_shapes)
meter_dist = filtered_us_gpdf.geometry.distance(filtered_gt_linestring)
ne = len(meter_dist)
curr_spatial_error_df = gpd.GeoDataFrame({"error": meter_dist,
"ts": section_geo_df.ts,
"geometry": section_geo_df.geometry,
"phone_os": np.repeat(phone_os, ne),
"phone_label": np.repeat(phone_label, ne),
"role": np.repeat(r["eval_role_base"], ne),
"timeline": np.repeat(pv.spec_details.CURR_SPEC_ID, ne),
"run": np.repeat(r_idx, ne),
"trip_id": np.repeat(tr["trip_id_base"], ne),
"section_id": np.repeat(sr["trip_id_base"], ne)})
spatial_error_df = pd.concat([spatial_error_df, curr_spatial_error_df], axis="index")
return spatial_error_df
spatial_errors_df = pd.DataFrame()
spatial_errors_df = pd.concat([spatial_errors_df, get_spatial_errors(pv_la)], axis="index")
spatial_errors_df = pd.concat([spatial_errors_df, get_spatial_errors(pv_sj)], axis="index")
spatial_errors_df = pd.concat([spatial_errors_df, get_spatial_errors(pv_ucb)], axis="index")
spatial_errors_df.head()
r2q_map = {"power_control": 0, "HAMFDC": 1, "MAHFDC": 2, "HAHFDC": 3, "accuracy_control": 4}
q2r_map = {0: "power", 1: "HAMFDC", 2: "MAHFDC", 3: "HAHFDC", 4: "accuracy"}
spatial_errors_df["quality"] = spatial_errors_df.role.apply(lambda r: r2q_map[r])
spatial_errors_df["label"] = spatial_errors_df.role.apply(lambda r: r.replace('_control', ''))
timeline_list = ["train_bus_ebike_mtv_ucb", "car_scooter_brex_san_jose", "unimodal_trip_car_bike_mtv_la"]
spatial_errors_df.head()
```
## Overall stats
```
ifig, ax_array = plt.subplots(nrows=1,ncols=2,figsize=(8,2), sharey=True)
spatial_errors_df.query("phone_os == 'android' & quality > 0").boxplot(ax = ax_array[0], column=["error"], by=["quality"], showfliers=False)
ax_array[0].set_title('android')
spatial_errors_df.query("phone_os == 'ios' & quality > 0").boxplot(ax = ax_array[1], column=["error"], by=["quality"], showfliers=False)
ax_array[1].set_title("ios")
for i, ax in enumerate(ax_array):
# print([t.get_text() for t in ax.get_xticklabels()])
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
ax_array[0].set_ylabel("Spatial error (meters)")
# ax_array[1][0].set_ylabel("Spatial error (meters)")
ifig.suptitle("Spatial trajectory error v/s quality (excluding outliers)", y = 1.1)
# ifig.tight_layout()
ifig, ax_array = plt.subplots(nrows=1,ncols=2,figsize=(8,2), sharey=True)
spatial_errors_df.query("phone_os == 'android' & quality > 0").boxplot(ax = ax_array[0], column=["error"], by=["quality"])
ax_array[0].set_title('android')
spatial_errors_df.query("phone_os == 'ios' & quality > 0").boxplot(ax = ax_array[1], column=["error"], by=["quality"])
ax_array[1].set_title("ios")
for i, ax in enumerate(ax_array):
# print([t.get_text() for t in ax.get_xticklabels()])
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
ax_array[0].set_ylabel("Spatial error (meters)")
# ax_array[1][0].set_ylabel("Spatial error (meters)")
ifig.suptitle("Spatial trajectory error v/s quality", y = 1.1)
# ifig.tight_layout()
```
### Split out results by timeline
```
ifig, ax_array = plt.subplots(nrows=2,ncols=3,figsize=(12,6), sharex=False, sharey=False)
timeline_list = ["train_bus_ebike_mtv_ucb", "car_scooter_brex_san_jose", "unimodal_trip_car_bike_mtv_la"]
for i, tl in enumerate(timeline_list):
spatial_errors_df.query("timeline == @tl & phone_os == 'android' & quality > 0").boxplot(ax = ax_array[0][i], column=["error"], by=["quality"])
ax_array[0][i].set_title(tl)
spatial_errors_df.query("timeline == @tl & phone_os == 'ios' & quality > 0").boxplot(ax = ax_array[1][i], column=["error"], by=["quality"])
ax_array[1][i].set_title("")
for i, ax in enumerate(ax_array[0]):
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
for i, ax in enumerate(ax_array[1]):
ax.set_xticklabels([q2r_map[int(t.get_text())] for t in ax.get_xticklabels()])
ax.set_xlabel("")
ax_array[0][0].set_ylabel("Spatial error (android)")
ax_array[1][0].set_ylabel("Spatial error (iOS)")
ifig.suptitle("Spatial trajectory error v/s quality over multiple timelines")
# ifig.tight_layout()
```
### Split out results by section for the most complex timeline (train_bus_ebike_mtv_ucb)
```
ifig, ax_array = plt.subplots(nrows=2,ncols=4,figsize=(25,10), sharex=True, sharey=True)
timeline_list = ["train_bus_ebike_mtv_ucb"]
for i, tl in enumerate(timeline_list):
for q in range(1,5):
sel_df = spatial_errors_df.query("timeline == @tl & phone_os == 'android' & quality == @q")
if len(sel_df) > 0:
sel_df.boxplot(ax = ax_array[2*i][q-1], column=["error"], by=["section_id"])
ax_array[2*i][q-1].tick_params(axis="x", labelrotation=45)
sel_df = spatial_errors_df.query("timeline == @tl & phone_os == 'ios' & quality == @q")
if len(sel_df) > 0:
sel_df.boxplot(ax = ax_array[2*i+1][q-1], column=["error"], by=["section_id"])
# ax_array[i][].set_title("")
def make_acronym(s):
ssl = s.split("_")
# print("After splitting %s, we get %s" % (s, ssl))
if len(ssl) == 0 or len(ssl[0]) == 0:
return ""
else:
return "".join([ss[0] for ss in ssl])
for q in range(1,5):
ax_array[0][q-1].set_title(q2r_map[q])
curr_ticks = [t.get_text() for t in ax_array[1][q-1].get_xticklabels()]
new_ticks = [make_acronym(t) for t in curr_ticks]
ax_array[1][q-1].set_xticklabels(new_ticks)
print(list(zip(curr_ticks, new_ticks)))
# fig.text(0,0,"%s"% list(zip(curr_ticks, new_ticks)))
timeline_list = ["train_bus_ebike_mtv_ucb"]
for i, tl in enumerate(timeline_list):
unique_sections = spatial_errors_df.query("timeline == @tl").section_id.unique()
ifig, ax_array = plt.subplots(nrows=2,ncols=len(unique_sections),figsize=(40,10), sharex=True, sharey=False)
for sid, s_name in enumerate(unique_sections):
sel_df = spatial_errors_df.query("timeline == @tl & phone_os == 'android' & section_id == @s_name & quality > 0")
if len(sel_df) > 0:
sel_df.boxplot(ax = ax_array[2*i][sid], column=["error"], by=["quality"])
ax_array[2*i][sid].set_title(s_name)
sel_df = spatial_errors_df.query("timeline == @tl & phone_os == 'ios' & section_id == @s_name & quality > 0")
if len(sel_df) > 0:
sel_df.boxplot(ax = ax_array[2*i+1][sid], column=["error"], by=["quality"])
ax_array[2*i+1][sid].set_title("")
# ax_array[i][].set_title("")
```
### Focus only on sections where the max error is > 1000 meters
```
timeline_list = ["train_bus_ebike_mtv_ucb"]
for i, tl in enumerate(timeline_list):
unique_sections = pd.Series(spatial_errors_df.query("timeline == @tl").section_id.unique())
sections_with_outliers_mask = unique_sections.apply(lambda s_name: spatial_errors_df.query("timeline == 'train_bus_ebike_mtv_ucb' & section_id == @s_name").error.max() > 1000)
sections_with_outliers = unique_sections[sections_with_outliers_mask]
ifig, ax_array = plt.subplots(nrows=2,ncols=len(sections_with_outliers),figsize=(17,4), sharex=True, sharey=False)
for sid, s_name in enumerate(sections_with_outliers):
sel_df = spatial_errors_df.query("timeline == @tl & phone_os == 'android' & section_id == @s_name & quality > 0")
if len(sel_df) > 0:
sel_df.boxplot(ax = ax_array[2*i][sid], column=["error"], by=["quality"])
ax_array[2*i][sid].set_title(s_name)
ax_array[2*i][sid].set_xlabel("")
sel_df = spatial_errors_df.query("timeline == @tl & phone_os == 'ios' & section_id == @s_name & quality > 0")
if len(sel_df) > 0:
sel_df.boxplot(ax = ax_array[2*i+1][sid], column=["error"], by=["quality"])
ax_array[2*i+1][sid].set_title("")
print([t.get_text() for t in ax_array[2*i+1][sid].get_xticklabels()])
ax_array[2*i+1][sid].set_xticklabels([q2r_map[int(t.get_text())] for t in ax_array[2*i+1][sid].get_xticklabels() if len(t.get_text()) > 0])
ax_array[2*i+1][sid].set_xlabel("")
ifig.suptitle("")
```
### Validation of outliers
#### (express bus iOS, MAHFDC)
ok, so it looks like the error is non-trivial across all runs, but run #1 is the worst and is responsible for the majority of the outliers. And this is borne out by the map, where on run #1, we end up with points in San Leandro!!
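One way to quantify "run #1 is responsible for the majority of the outliers" is to count points above an outlier threshold per run; a hedged sketch on a toy frame (the column names mirror `spatial_errors_df`, but the numbers are made up):

```python
import pandas as pd

# Toy stand-in for spatial_errors_df: run 1 carries most of the large errors.
toy = pd.DataFrame({
    "run":   [0, 0, 1, 1, 1, 2, 2],
    "error": [120, 80, 2500, 16000, 900, 60, 700],
})

# Count points above the outlier threshold, grouped by run.
outlier_counts = toy[toy.error > 500].groupby("run").size()
print(outlier_counts.to_dict())  # run 1 dominates
```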
```
spatial_errors_df.query("phone_os == 'ios' & quality == 2 & section_id == 'express_bus' & error > 500").run.unique()
spatial_errors_df.query("phone_os == 'ios' & quality == 2 & section_id == 'express_bus'").boxplot(column="error", by="run")
gt_leg = sd_ucb.get_ground_truth_for_leg("berkeley_to_mtv_SF_express_bus", "express_bus"); print(gt_leg["id"])
curr_map = ezgj.get_map_for_geojson(sd_ucb.get_geojson_for_leg(gt_leg), name="ground_truth")
ezgj.get_fg_for_loc_df(emd.linestring_to_geo_df(eisd.SpecDetails.get_shapes_for_leg(gt_leg)["route"]),
name="gt_points", color="green").add_to(curr_map)
name_err_time = lambda lr: "%d: %d, %s, %s" % (lr["index"], lr["df_idx"], lr["error"], sd_ucb.fmt(lr["ts"], "MM-DD HH:mm:ss"))
error_df = emd.to_loc_df(spatial_errors_df.query("phone_os == 'ios' & quality == 2 & section_id == 'express_bus' & run == 1"))
gt_16k = lambda lr: lr["error"] == error_df.error.max()
folium.GeoJson(ezgj.get_geojson_for_loc_df(error_df, color="red"), name="sensed_values").add_to(curr_map)
ezgj.get_fg_for_loc_df(error_df, name="sensed_points", color="red", popupfn=name_err_time, stickyfn=gt_16k).add_to(curr_map)
folium.LayerControl().add_to(curr_map)
curr_map
importlib.reload(ezgj)
gt_leg = sd_ucb.get_ground_truth_for_leg("berkeley_to_mtv_SF_express_bus", "express_bus"); print(gt_leg["id"])
curr_map = ezgj.get_map_for_geojson(sd_ucb.get_geojson_for_leg(gt_leg), name="ground_truth")
ezgj.get_fg_for_loc_df(emd.linestring_to_geo_df(eisd.SpecDetails.get_shapes_for_leg(gt_leg)["route"]),
name="gt_points", color="green").add_to(curr_map)
name_err_time = lambda lr: "%d: %d, %s, %s" % (lr["index"], lr["df_idx"], lr["error"], sd_ucb.fmt(lr["ts"], "MM-DD HH:mm:ss"))
colors = ["red", "yellow", "blue"]
for run in range(3):
error_df = emd.to_loc_df(spatial_errors_df.query("phone_os == 'ios' & quality == 2 & section_id == 'express_bus' & run == @run"))
gt_16k = lambda lr: lr["error"] == error_df.error.max()
print("max error for run %d is %s" % (run, error_df.error.max()))
folium.GeoJson(ezgj.get_geojson_for_loc_df(error_df, color=colors[run]), name="sensed_values").add_to(curr_map)
ezgj.get_fg_for_loc_df(error_df, name="sensed_points", color=colors[run], popupfn=name_err_time, stickyfn=gt_16k).add_to(curr_map)
folium.LayerControl().add_to(curr_map)
curr_map
```
#### (commuter rail aboveground android, HAMFDC)
Run 0: Multiple outliers at the start in San Jose. After that, everything is fine.
```
spatial_errors_df.query("phone_os == 'android' & quality == 1 & section_id == 'commuter_rail_aboveground' & error > 500").run.unique()
spatial_errors_df.query("phone_os == 'android' & quality == 1 & section_id == 'commuter_rail_aboveground' & error > 500").boxplot(column="error", by="run")
gt_leg = sd_ucb.get_ground_truth_for_leg("mtv_to_berkeley_sf_bart", "commuter_rail_aboveground"); print(gt_leg["id"])
curr_map = ezgj.get_map_for_geojson(sd_ucb.get_geojson_for_leg(gt_leg), name="ground_truth")
ezgj.get_fg_for_loc_df(emd.linestring_to_geo_df(eisd.SpecDetails.get_shapes_for_leg(gt_leg)["route"]),
name="gt_points", color="green").add_to(curr_map)
name_err_time = lambda lr: "%d: %d, %s, %s" % (lr["index"], lr["df_idx"], lr["error"], sd_ucb.fmt(lr["ts"], "MM-DD HH:mm:ss"))
error_df = emd.to_loc_df(spatial_errors_df.query("phone_os == 'android' & quality == 1 & section_id == 'commuter_rail_aboveground' & run == 0"))
maxes = [error_df.error.max(), error_df[error_df.error < 10000].error.max(), error_df[error_df.error < 1000].error.max()]
gt_16k = lambda lr: lr["error"] in maxes
folium.GeoJson(ezgj.get_geojson_for_loc_df(error_df, color="red"), name="sensed_values").add_to(curr_map)
ezgj.get_fg_for_loc_df(error_df, name="sensed_points", color="red", popupfn=name_err_time, stickyfn=gt_16k).add_to(curr_map)
folium.LayerControl().add_to(curr_map)
curr_map
spatial_errors_df.query("phone_os == 'android' & quality == 1 & section_id == 'commuter_rail_aboveground' & error > 10000")
```
#### (walk_to_bus android, HAMFDC, HAHFDC)
Huge zig zag when we get out of the BART station
```
spatial_errors_df.query("phone_os == 'android' & (quality == 1 | quality == 3) & section_id == 'walk_to_bus' & error > 500").run.unique()
spatial_errors_df.query("phone_os == 'android' & (quality == 1 | quality == 3) & section_id == 'walk_to_bus' & error > 500")
spatial_errors_df.query("phone_os == 'android' & (quality == 1 | quality == 3) & section_id == 'walk_to_bus'").boxplot(column="error", by="run")
spatial_errors_df.query("phone_os == 'android' & (quality == 1 | quality == 3) & section_id == 'walk_to_bus'").error.max()
error_df
ucb_and_back = pv_ucb.map()["android"]["ucb-sdb-android-2"]["evaluation_ranges"][0]; ucb_and_back["trip_id"]
to_trip = ucb_and_back["evaluation_trip_ranges"][0]; print(to_trip["trip_id"])
wb_leg = to_trip["evaluation_section_ranges"][6]; print(wb_leg["trip_id"])
gt_leg = sd_ucb.get_ground_truth_for_leg(to_trip["trip_id_base"], wb_leg["trip_id_base"]); gt_leg["id"]
importlib.reload(ezgj)
gt_leg = sd_ucb.get_ground_truth_for_leg("mtv_to_berkeley_sf_bart", "walk_to_bus"); print(gt_leg["id"])
curr_map = ezgj.get_map_for_geojson(sd_ucb.get_geojson_for_leg(gt_leg), name="ground_truth")
ezgj.get_fg_for_loc_df(emd.linestring_to_geo_df(eisd.SpecDetails.get_shapes_for_leg(gt_leg)["route"]),
name="gt_points", color="green").add_to(curr_map)
name_err_time = lambda lr: "%d: %d, %s, %s" % (lr["index"], lr["df_idx"], lr["error"], sd_ucb.fmt(lr["ts"], "MM-DD HH:mm:ss"))
error_df = emd.to_loc_df(spatial_errors_df.query("phone_os == 'android' & quality == 3 & section_id == 'walk_to_bus'").sort_index(axis="index"))
maxes = [error_df.error.max(), error_df[error_df.error < 16000].error.max(), error_df[error_df.error < 5000].error.max()]
gt_16k = lambda lr: lr["error"] in maxes
print("Checking errors %s" % maxes)
folium.GeoJson(ezgj.get_geojson_for_loc_df(error_df, color="red"), name="sensed_values").add_to(curr_map)
ezgj.get_fg_for_loc_df(error_df, name="sensed_points", color="red", popupfn=name_err_time, stickyfn=gt_16k).add_to(curr_map)
folium.LayerControl().add_to(curr_map)
curr_map
```
#### (light_rail_below_above_ground, android, accuracy_control)
```
spatial_errors_df.query("phone_os == 'android' & quality == 4 & section_id == 'light_rail_below_above_ground' & error > 100").run.unique()
spatial_errors_df.query("phone_os == 'android' & (quality == 4) & section_id == 'light_rail_below_above_ground'").boxplot(column="error", by="run")
ucb_and_back = pv_ucb.map()["android"]["ucb-sdb-android-2"]["evaluation_ranges"][0]; ucb_and_back["trip_id"]
back_trip = ucb_and_back["evaluation_trip_ranges"][2]; print(back_trip["trip_id"])
lt_leg = back_trip["evaluation_section_ranges"][7]; print(lt_leg["trip_id"])
gt_leg = sd_ucb.get_ground_truth_for_leg(back_trip["trip_id_base"], lt_leg["trip_id_base"]); gt_leg["id"]
import folium
gt_leg = sd_ucb.get_ground_truth_for_leg("berkeley_to_mtv_SF_express_bus", "light_rail_below_above_ground"); print(gt_leg["id"])
curr_map = ezgj.get_map_for_geojson(sd_ucb.get_geojson_for_leg(gt_leg), name="ground_truth")
ezgj.get_fg_for_loc_df(emd.linestring_to_geo_df(eisd.SpecDetails.get_shapes_for_leg(gt_leg)["route"]),
name="gt_points", color="green").add_to(curr_map)
name_err_time = lambda lr: "%d: %d, %s, %s" % (lr["index"], lr["df_idx"], lr["error"], sd_ucb.fmt(lr["ts"], "MM-DD HH:mm:ss"))
colors = ["red", "yellow", "blue"]
for run in range(3):
error_df = emd.to_loc_df(spatial_errors_df.query("phone_os == 'android' & quality == 4 & section_id == 'light_rail_below_above_ground' & run == @run"))
gt_16k = lambda lr: lr["error"] == error_df.error.max()
print("max error for run %d is %s" % (run, error_df.error.max()))
folium.GeoJson(ezgj.get_geojson_for_loc_df(error_df, color=colors[run]), name="sensed_values").add_to(curr_map)
ezgj.get_fg_for_loc_df(error_df, name="sensed_points", color=colors[run], popupfn=name_err_time, stickyfn=gt_16k).add_to(curr_map)
folium.LayerControl().add_to(curr_map)
curr_map
```
#### (subway, android, HAMFDC)
This is the poster child for temporal accuracy tracking
```
bart_leg = pv_ucb.map()["android"]["ucb-sdb-android-3"]["evaluation_ranges"][0]["evaluation_trip_ranges"][0]["evaluation_section_ranges"][5]
gt_leg = sd_ucb.get_ground_truth_for_leg("mtv_to_berkeley_sf_bart", "subway_underground"); gt_leg["id"]
gt_leg = sd_ucb.get_ground_truth_for_leg("mtv_to_berkeley_sf_bart", "subway_underground"); print(gt_leg["id"])
curr_map = ezgj.get_map_for_geojson(sd_ucb.get_geojson_for_leg(gt_leg), name="ground_truth")
ezgj.get_fg_for_loc_df(emd.linestring_to_geo_df(eisd.SpecDetails.get_shapes_for_leg(gt_leg)["route"]),
name="gt_points", color="green").add_to(curr_map)
name_err_time = lambda lr: "%d: %d, %s, %s" % (lr["index"], lr["df_idx"], lr["error"], sd_ucb.fmt(lr["ts"], "MM-DD HH:mm:ss"))
error_df = emd.to_loc_df(spatial_errors_df.query("phone_os == 'android' & quality == 1 & section_id == 'subway_underground' & run == 0").sort_index(axis="index"))
maxes = [error_df.error.max(), error_df[error_df.error < 16000].error.max(), error_df[error_df.error < 5000].error.max()]
gt_16k = lambda lr: lr["error"] in maxes
print("Checking errors %s" % maxes)
folium.GeoJson(ezgj.get_geojson_for_loc_df(error_df, color="red"), name="sensed_values").add_to(curr_map)
ezgj.get_fg_for_loc_df(error_df, name="sensed_points", color="red", popupfn=name_err_time, stickyfn=gt_16k).add_to(curr_map)
folium.LayerControl().add_to(curr_map)
curr_map
gt_leg = sd_ucb.get_ground_truth_for_leg("mtv_to_berkeley_sf_bart", "subway_underground"); gt_leg["id"]
eisd.SpecDetails.get_shapes_for_leg(gt_leg)["route"].is_simple
pd.concat([
error_df.iloc[40:50],
error_df.iloc[55:60],
error_df.iloc[65:75],
error_df.iloc[70:75]])
import pyproj
latlonProj = pyproj.Proj(init="epsg:4326")  # WGS84 lat/lon (init= is deprecated in pyproj >= 2; prefer pyproj.Transformer)
xyProj = pyproj.Proj(init="epsg:3395")  # World Mercator (x/y in meters)
xy = pyproj.transform(latlonProj, xyProj, -122.08355963230133, 37.39091642895306); xy
pyproj.transform(xyProj, latlonProj, xy[0], xy[1])
import pandas as pd
df = pd.DataFrame({"a": [1,2,3], "b": [4,5,6]}); df
pd.concat([pd.DataFrame([{"a": 10, "b": 14}]), df, pd.DataFrame([{"a": 20, "b": 24}])], axis='index').reset_index(drop=True)
```
```
import pandas as pd
from urllib.request import urlopen
import requests
from bs4 import BeautifulSoup
from graphviz import Digraph
import re
import time
import numpy as np
an_urllib = urlopen("https://animalcrossing.fandom.com/wiki/Villager_list_(New_Horizons)")
an_request = requests.get("https://animalcrossing.fandom.com/wiki/Villager_list_(New_Horizons)")
# check the status code
print('The status code is', an_request.status_code)
an = BeautifulSoup(an_request.text, "html.parser")
tables = an.find_all('table')
village_table = []
for row in tables[1].find_all("tr"):
row_data = []
for cell in row.find_all("td"):
row_data.append(cell.text)
if row_data:
village_table.append(row_data)
raw_data = pd.DataFrame(village_table)
raw_data
# clean the dataframe a bit
# remove unnecessary columns
cleaned_data = raw_data.iloc[1:,0:7]
cleaned_data.columns = ["name", "image", "personality", "species",
"birthday", "catchphrase", "hobbies"]
cleaned_data = cleaned_data.drop("image", axis = 1)
# make the data clean a bit
cleaned_data = cleaned_data.replace("\n", "", regex = True)
cleaned_data = cleaned_data.replace("♂", "", regex = True)
cleaned_data = cleaned_data.replace("♀", "", regex = True)
cleaned_data = cleaned_data.replace(" ", "", regex = True)
# add gender column
cleaned_data['gender'] = np.where((cleaned_data['personality'] == 'Cranky')|\
(cleaned_data['personality'] == 'Jock')|\
(cleaned_data['personality'] == 'Lazy')|\
(cleaned_data['personality'] == 'Smug'), 'male', 'female' )
cleaned_data
import altair as alt
alt.renderers.enable('mimetype')
# find out the number of villages from each species
alt.Chart(cleaned_data).mark_bar().encode(
alt.X('species'),
y = 'count()',
color = 'gender'
)
```
#### As we can see, there are more villagers that are Cats, Rabbits, and Squirrels, while it's rare to get a villager like an Octopus or a Cow.
- Anteater, Cat, Duck, Kangaroo, Koala, Mouse, Ostrich, Rabbit, Sheep, and Squirrel have more female than male villagers.
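The gender split behind that observation can be checked directly with a crosstab; a sketch on made-up rows rather than the scraped `cleaned_data`:

```python
import pandas as pd

# Hypothetical mini-frame with the same columns as cleaned_data.
toy = pd.DataFrame({
    "species": ["Cat", "Cat", "Cat", "Octopus", "Octopus"],
    "gender":  ["female", "female", "male", "male", "male"],
})

counts = pd.crosstab(toy.species, toy.gender)
# Species where female villagers outnumber male ones:
female_majority = counts.index[counts.get("female", 0) > counts.get("male", 0)]
print(list(female_majority))
```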
```
# find out the number of villages in different personailty
alt.Chart(cleaned_data).mark_bar().encode(
alt.X('personality'),
y = 'count()',
color = 'gender'
)
```
#### There are more villagers with the Normal and Lazy personalities, while it's relatively rare to see villagers with the Sisterly and Smug personalities.
- Lazy is the most common personality among male villagers, while Normal is the most common among female villagers.
```
# number of male and female villagers
alt.Chart(cleaned_data).mark_bar().encode(
alt.X('gender'),
y = 'count()'
)
```
### As we can see, there are more male villagers than female villagers.
## Let's look at their hobbies!
```
alt.Chart(cleaned_data).mark_bar().encode(
alt.X('hobbies'),
y = 'count()',
color = 'gender'
)
```
### Hmm... It seems like no male villager is interested in **Fashion**. The gender ratio seems well balanced for the Education and Music hobbies, while more male villagers are interested in Nature and Play.
```
alt.Chart(cleaned_data).mark_bar().encode(
alt.X('species'),
y = 'count()',
color = 'hobbies'
)
```
### It seems like hobbies are pretty balanced among all species... Wait! Why is no one interested in Education among the Alligator, Cow, Gorilla, Octopus, and Rhino species? With only 4 Cows and 3 Octopuses, it's hard to fill in all the categories.
### Apparently the only thing Gorillas care about is Fitness!
```
import wandb
import nltk
from nltk.stem.porter import *
from torch.nn import *
from torch.optim import *
import numpy as np
import pandas as pd
import torch,torchvision
import random
from tqdm import *
from torch.utils.data import Dataset,DataLoader
stemmer = PorterStemmer()
PROJECT_NAME = 'Amazon-Alexa-Reviews'
device = 'cuda'
def tokenize(sentence):
return nltk.word_tokenize(sentence)
tokenize('%100')
def stem(word):
return stemmer.stem(word.lower())
stem('organic')
def bag_of_words(tokenized_words,all_words):
tokenized_words = [stem(w) for w in tokenized_words]
bag = np.zeros(len(all_words))
for idx,w in enumerate(all_words):
if w in tokenized_words:
bag[idx] = 1.0
return bag
bag_of_words(['hi'],['hi','how','hi'])
data = pd.read_csv('./data.tsv',sep='\t')
X = data['verified_reviews']
y = data['rating']
words = []
data = []
idx = 0
labels = {}
labels_r = {}
for X_batch,y_batch in tqdm(zip(X,y)):
if y_batch not in list(labels.keys()):
idx += 1
labels[y_batch] = idx
labels_r[idx] = y_batch
labels
for X_batch,y_batch in tqdm(zip(X,y)):
X_batch = tokenize(X_batch)
new_X = []
for Xb in X_batch:
new_X.append(stem(Xb))
words.extend(new_X)
data.append([new_X,np.eye(len(labels))[labels[y_batch]-1]])  # one-hot encode the rating (labels are 1-indexed)
words = sorted(set(words))
np.random.shuffle(data)
np.random.shuffle(data)
X = []
y = []
for sentence,tag in tqdm(data):
X.append(bag_of_words(sentence,words))
y.append(tag)
from sklearn.model_selection import *
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.125,shuffle=False,random_state=2021)
X_train = torch.from_numpy(np.array(X_train)).to(device).float()
y_train = torch.from_numpy(np.array(y_train)).to(device).float()
X_test = torch.from_numpy(np.array(X_test)).to(device).float()
y_test = torch.from_numpy(np.array(y_test)).to(device).float()
def get_loss(model,X,y,criterion):
preds = model(X)
loss = criterion(preds,y)
return loss.item()
def get_accuracy(model,X,y):
correct = 0
total = 0
preds = model(X)
for pred,yb in zip(preds,y):
pred = int(torch.argmax(pred))
yb = int(torch.argmax(yb))
if pred == yb:
correct += 1
total += 1
acc = round(correct/total,3)*100
return acc
class Model(Module):
def __init__(self):
super().__init__()
self.iters = 10
self.activation = ReLU()
self.linear1 = Linear(len(words),512)
self.linear2 = Linear(512,512)
self.output = Linear(512,len(labels))
def forward(self,X):
preds = self.linear1(X)
for _ in range(self.iters):
preds = self.activation(self.linear2(preds))
preds = self.output(preds)
return preds
model = Model().to(device)
criterion = MSELoss()
optimizer = Adam(model.parameters(),lr=0.001)
epochs = 10000
batch_size = 8
wandb.init(project=PROJECT_NAME,name='baseline')
for _ in tqdm(range(epochs)):
torch.cuda.empty_cache()
for i in range(0,len(X_train),batch_size):
torch.cuda.empty_cache()
X_batch = X_train[i:i+batch_size].to(device).float()
y_batch = y_train[i:i+batch_size].to(device).float()
preds = model(X_batch)
loss = criterion(preds,y_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
torch.cuda.empty_cache()
torch.cuda.empty_cache()
model.eval()
torch.cuda.empty_cache()
wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2})
torch.cuda.empty_cache()
wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)})
torch.cuda.empty_cache()
wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2})
torch.cuda.empty_cache()
wandb.log({'Val Acc':get_accuracy(model,X_test,y_test)})
torch.cuda.empty_cache()
model.train()
wandb.finish()
torch.cuda.empty_cache()
torch.save(X_train,'X_train.pt')
torch.save(X_test,'X_test.pth')
torch.save(y_train,'y_train.pt')
torch.save(y_test,'y_test.pth')
torch.save(model,'model.pt')
torch.save(model,'model.pth')
torch.save(model.state_dict(),'model-sd.pt')
torch.save(model.state_dict(),'model-sd.pth')
torch.save(X,'X.pt')
torch.save(X,'X.pth')
torch.save(y,'y.pt')
torch.save(y,'y.pth')
torch.save(words,'words.pt')
torch.save(words,'words.pth')
torch.save(data,'data.pt')
torch.save(data,'data.pth')
torch.save(labels,'labels.pt')
torch.save(labels,'labels.pth')
torch.save(idx,'idx.pt')
torch.save(idx,'idx.pth')
```
<img width="100" src="https://carbonplan-assets.s3.amazonaws.com/monogram/dark-small.png" style="margin-left:0px;margin-top:20px"/>
# Forest Emissions Tracking - Validation
_CarbonPlan ClimateTrace Team_
This notebook compares our estimates of country-level forest emissions to prior estimates from other
groups. The notebook currently compares against:
- Global Forest Watch (Zarin et al. 2016)
- Global Carbon Project (Friedlingstein et al. 2020)
```
import geopandas
import pandas as pd
from io import StringIO
import matplotlib.pyplot as plt
from carbonplan_styles.mpl import set_theme
set_theme()
# Input data
# ----------
# country shapes from GADM36
countries = geopandas.read_file("s3://carbonplan-climatetrace/inputs/shapes/countries.shp")
# CarbonPlan's emissions
emissions = pd.read_csv("s3://carbonplan-climatetrace/v0.1/country_rollups.csv")
# GFW emissions
gfw_emissions = pd.read_excel(
"s3://carbonplan-climatetrace/validation/gfw_global_emissions.xlsx",
sheet_name="Country co2 emissions",
).dropna(axis=0)
gfw_emissions = gfw_emissions[gfw_emissions["threshold"] == 10] # select threshold
# Global Carbon Project
gcp_emissions = (
pd.read_excel(
"s3://carbonplan-climatetrace/validation/Global_Carbon_Budget_2020v1.0.xlsx",
sheet_name="Land-Use Change Emissions",
skiprows=28,
)
.dropna(axis=1)
.set_index("Year")
)
gcp_emissions *= 3.664 # C->CO2
gcp_emissions.index = [pd.to_datetime(f"{y}-01-01") for y in gcp_emissions.index]
gcp_emissions = gcp_emissions[["GCB", "H&N", "BLUE", "OSCAR"]]
# Merge emissions dataframes with countries GeoDataFrame
gfw_counties = countries.merge(gfw_emissions.rename(columns={"country": "name"}), on="name")
trace_counties = countries.merge(emissions.rename(columns={"iso3_country": "alpha3"}), on="alpha3")
# reformat to "wide" format (time x country)
trace_wide = (
emissions.drop(columns=["end_date"])
.pivot(index="begin_date", columns="iso3_country")
.droplevel(0, axis=1)
)
trace_wide.index = pd.to_datetime(trace_wide.index)
gfw_wide = gfw_emissions.set_index("country").filter(regex="whrc_aboveground_co2_emissions_Mg_.*").T
gfw_wide.index = [pd.to_datetime(f"{l[-4:]}-01-01") for l in gfw_wide.index]
gfw_wide.head()
```
## Part 1 - Compare time-averaged country emissions (tropics only)
```
# Create a new dataframe with average emissions
avg_emissions = countries.set_index("alpha3")
avg_emissions["trace"] = trace_wide.mean().transpose()
avg_emissions = avg_emissions.set_index("name")
avg_emissions["gfw"] = gfw_wide.mean().transpose() / 1e9
# Scatter Plot
avg_emissions.plot.scatter("gfw", "trace")
plt.ylabel("Trace [Tg CO2e]")
plt.xlabel("GFW [Tg CO2e]")
```
## Part 2 - Maps of Tropical Emissions
```
avg_emissions_nonan = avg_emissions.dropna()
kwargs = dict(
legend=True,
legend_kwds={"orientation": "horizontal", "label": "Emissions [Tg CO2e]"},
lw=0.25,
cmap="Reds",
vmin=0,
vmax=1,
)
avg_emissions_nonan.plot("trace", **kwargs)
plt.title("Trace v0")
avg_emissions_nonan.plot("gfw", **kwargs)
plt.title("GFW Tropics")
kwargs = dict(
legend=True,
legend_kwds={
"orientation": "horizontal",
"label": "Emissions Difference [%]",
},
lw=0.25,
cmap="RdBu_r",
vmin=-40,
vmax=40,
)
avg_emissions_nonan["pdiff"] = (
(avg_emissions_nonan["trace"] - avg_emissions_nonan["gfw"]) / avg_emissions_nonan["gfw"]
) * 100
avg_emissions_nonan.plot("pdiff", **kwargs)
plt.title("% difference")
```
## Part 3 - Compare global emissions timeseries to Global Carbon Project
```
ax = gcp_emissions[["H&N", "BLUE", "OSCAR"]].loc["2000":].plot(ls="--")
gcp_emissions["GCB"].loc["2000":].plot(ax=ax, label="GCB", lw=3)
trace_wide.sum(axis=1).plot(ax=ax, label="Trace v0", c="k", lw=3)
plt.ylabel("Emissions [Tg CO2e]")
plt.legend()
```
# Part 4 - Compare global emissions with those of other inventories
#### Load in the inventory file from Climate TRACE, which aggregates multiple inventories (e.g., GCP, EDGAR, CAIT) into one place
```
inventories_df = pd.read_csv(
"s3://carbonplan-climatetrace/validation/210623_all_inventory_data.csv"
)
```
The following inventories are included:
{'CAIT', 'ClimateTRACE', 'EDGAR', 'GCP', 'PIK-CR', 'PIK-TP', 'carbon monitor', 'unfccc',
'unfccc_nai'}
```
set(inventories_df["Data source"].values)
def select_inventory_timeseries(df, inventory=None, country=None, sector=None):
if inventory is not None:
df = df[df["Data source"] == inventory]
if country is not None:
df = df[df["Country"] == country]
if sector is not None:
df = df[df["Sector"] == sector]
return df
```
### Access the different inventories and compare them with our estimates. Country-level comparisons are to-do.
```
select_inventory_timeseries(inventories_df, country="Brazil", inventory="CAIT")
select_inventory_timeseries(
inventories_df,
country="United States of America",
inventory="unfccc",
sector="4.A Forest Land",
)
```
### TODO: compare our estimates with these and with the same from xu2021
# Interpolation
**Learning Objective:** Learn to interpolate 1d and 2d datasets of structured and unstructured points using SciPy.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
```
## Overview
We have already seen how to evaluate a Python function at a set of numerical points:
$$ f(x) \rightarrow f_i = f(x_i) $$
Here is an array of points:
```
x = np.linspace(0,4*np.pi,10)
x
```
This creates a new array of points that are the values of $\sin(x_i)$ at each point $x_i$:
```
f = np.sin(x)
f
plt.plot(x, f, marker='o')
plt.xlabel('x')
plt.ylabel('f(x)');
```
This plot shows that the points in this numerical array are an approximation to the actual function as they don't have the function's value at all possible points. In this case we know the actual function ($\sin(x)$). What if we only know the value of the function at a limited set of points, and don't know the analytical form of the function itself? This is common when the data points come from a set of measurements.
[Interpolation](http://en.wikipedia.org/wiki/Interpolation) is a numerical technique that enables you to construct an approximation of the actual function from a set of points:
$$ \{x_i,f_i\} \rightarrow f(x) $$
It is important to note that, unlike curve fitting or regression, interpolation does not allow you to incorporate a *statistical model* into the approximation. Because of this, interpolation has limitations:
* It cannot accurately construct the function's approximation outside the limits of the original points.
* It cannot tell you the analytical form of the underlying function.
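For example (a sketch, not part of the original notebook), `interp1d` refuses by default to evaluate outside the range of the original points; you can opt into extrapolation with `fill_value='extrapolate'`, but the extrapolated values are unreliable:

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.linspace(0, 4*np.pi, 10)
sin_approx = interp1d(x, np.sin(x), kind='cubic')

try:
    sin_approx(5*np.pi)  # outside [0, 4*pi]
except ValueError:
    print("cannot interpolate outside the original range")
```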
Once you have performed interpolation you can:
* Evaluate the function at other points not in the original dataset.
* Use the function in other calculations that require an actual function.
* Compute numerical derivatives or integrals.
* Plot the approximate function on a finer grid than the original dataset.
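For instance, a spline interpolant can be differentiated and integrated directly. Here is a sketch (not from the original notebook) using `InterpolatedUnivariateSpline`, which exposes `derivative()` and `integral()` methods:

```python
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

x = np.linspace(0, 4*np.pi, 50)
spline = InterpolatedUnivariateSpline(x, np.sin(x), k=3)  # cubic spline

dspline = spline.derivative()      # callable approximating f'(x)
area = spline.integral(0, np.pi)   # approximates the integral of sin from 0 to pi (= 2)

print(dspline(1.0), np.cos(1.0))   # the derivative should be close to cos(1.0)
print(area)
```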
**Warning:**
The different functions in SciPy work with a range of different 1d and 2d arrays. To help you keep all of that straight, I will use lowercase variables for 1d arrays (`x`, `y`) and uppercase variables (`X`,`Y`) for 2d arrays.
## 1d data
We begin with a 1d interpolation example with regularly spaced data. The function we will use is `interp1d`:
```
from scipy.interpolate import interp1d
```
Let's create the numerical data we will use to build our interpolation.
```
x = np.linspace(0,4*np.pi,10) # only use 10 points to emphasize this is an approx
f = np.sin(x)
```
To create our approximate function, we call `interp1d` as follows, with the numerical data. Options for the `kind` argument include:
* `linear`: draw a straight line between initial points.
* `nearest`: return the value of the function at the nearest point.
* `slinear`, `quadratic`, `cubic`: use a spline (a particular kind of piecewise polynomial) of a given order.
The most common case you will want to use is `cubic` spline (try other options):
```
sin_approx = interp1d(x, f, kind='cubic')
```
The `sin_approx` variable that `interp1d` returns is a callable object that can be used to compute the approximate function at other points. Compute the approximate function on a fine grid:
```
newx = np.linspace(0,4*np.pi,100)
newf = sin_approx(newx)
```
Plot the original data points, along with the approximate interpolated values. It is quite amazing to see how the interpolation has done a good job of reconstructing the actual function with relatively few points.
```
plt.plot(x, f, marker='o', linestyle='', label='original data')
plt.plot(newx, newf, marker='.', label='interpolated');
plt.legend();
plt.xlabel('x')
plt.ylabel('f(x)');
```
Let's look at the absolute error between the actual function and the approximate interpolated function:
```
plt.plot(newx, np.abs(np.sin(newx)-sin_approx(newx)))
plt.xlabel('x')
plt.ylabel('Absolute error');
```
## 1d non-regular data
It is also possible to use `interp1d` when the x data is not regularly spaced. To show this, let's repeat the above analysis with randomly distributed data in the range $[0,4\pi]$. Everything else is the same.
```
x = 4*np.pi*np.random.rand(15)
f = np.sin(x)
sin_approx = interp1d(x, f, kind='cubic')
# We have to be careful about not interpolating outside the range
newx = np.linspace(np.min(x), np.max(x),100)
newf = sin_approx(newx)
plt.plot(x, f, marker='o', linestyle='', label='original data')
plt.plot(newx, newf, marker='.', label='interpolated');
plt.legend();
plt.xlabel('x')
plt.ylabel('f(x)');
plt.plot(newx, np.abs(np.sin(newx)-sin_approx(newx)))
plt.xlabel('x')
plt.ylabel('Absolute error');
```
Notice how the absolute error is larger in the intervals where there are no points.
## 2d structured
For the 2d case we want to construct a scalar function of two variables, given
$$ {x_i, y_i, f_i} \rightarrow f(x,y) $$
For now, we will assume that the points $\{x_i,y_i\}$ are on a structured grid of points. This case is covered by the `interp2d` function:
```
from scipy.interpolate import interp2d
```
Here is the actual function we will use to generate our original dataset:
```
def wave2d(x, y):
return np.sin(2*np.pi*x)*np.sin(3*np.pi*y)
```
Build 1d arrays to use as the structured grid:
```
x = np.linspace(0.0, 1.0, 10)
y = np.linspace(0.0, 1.0, 10)
```
Build 2d arrays to use in computing the function on the grid points:
```
X, Y = np.meshgrid(x, y)
Z = wave2d(X, Y)
```
Here is a scatter plot of the points overlayed with the value of the function at those points:
```
plt.pcolor(X, Y, Z)
plt.colorbar();
plt.scatter(X, Y);
plt.xlim(0,1)
plt.ylim(0,1)
plt.xlabel('x')
plt.ylabel('y');
```
You can see in this plot that the function is not smooth as we don't have its value on a fine grid.
Now let's compute the interpolated function using `interp2d`. Notice how we are passing 2d arrays to this function:
```
wave2d_approx = interp2d(X, Y, Z, kind='cubic')
```
Compute the interpolated function on a fine grid:
```
xnew = np.linspace(0.0, 1.0, 40)
ynew = np.linspace(0.0, 1.0, 40)
Xnew, Ynew = np.meshgrid(xnew, ynew) # We will use these in the scatter plot below
Fnew = wave2d_approx(xnew, ynew) # The interpolating function automatically creates the meshgrid!
Fnew.shape
```
Plot the original coarse grid of points, along with the interpolated function values on a fine grid:
```
plt.pcolor(xnew, ynew, Fnew);
plt.colorbar();
plt.scatter(X, Y, label='original points')
plt.scatter(Xnew, Ynew, marker='.', color='green', label='interpolated points')
plt.xlim(0,1)
plt.ylim(0,1)
plt.xlabel('x')
plt.ylabel('y');
plt.legend(bbox_to_anchor=(1.2, 1), loc=2, borderaxespad=0.);
```
Notice how the interpolated values (green points) are now smooth and continuous. The amazing thing is that the interpolation algorithm doesn't know anything about the actual function. It creates this nice approximation using only the original coarse grid (blue points).
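A note on current SciPy: recent releases have deprecated (and later removed) `interp2d` in favor of `RegularGridInterpolator`. A minimal sketch of the equivalent call, assuming the same `wave2d` setup as above (note the `indexing='ij'` meshgrid convention it expects, and that it is evaluated at (x, y) pairs rather than 1d grid axes):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def wave2d(x, y):
    return np.sin(2*np.pi*x)*np.sin(3*np.pi*y)

x = np.linspace(0.0, 1.0, 10)
y = np.linspace(0.0, 1.0, 10)
X, Y = np.meshgrid(x, y, indexing='ij')  # 'ij' indexing matches the (x, y) axis order
Z = wave2d(X, Y)

interp = RegularGridInterpolator((x, y), Z, method='cubic')

# Evaluate at arbitrary (x, y) pairs
points = np.array([[0.25, 0.5], [0.4, 0.1]])
print(interp(points))
print(wave2d(points[:, 0], points[:, 1]))  # compare against the true values
```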
## 2d unstructured
It is also possible to perform interpolation when the original data is not on a regular grid. For this, we will use the `griddata` function:
```
from scipy.interpolate import griddata
```
There is an important difference between `griddata` and `interp1d`/`interp2d`:
* `interp1d` and `interp2d` return callable Python objects (functions).
* `griddata` returns the interpolated function evaluated on a finer grid.
This means that you have to pass `griddata` an array that has the finer grid points to be used. Here is the coarse unstructured grid we will use:
```
x = np.random.rand(100)
y = np.random.rand(100)
```
Notice how we pass these 1d arrays to our function and don't use `meshgrid`:
```
f = wave2d(x, y)
```
It is clear that our grid is very unstructured:
```
plt.scatter(x, y);
plt.xlim(0,1)
plt.ylim(0,1)
plt.xlabel('x')
plt.ylabel('y');
```
To use `griddata` we need to build the final (structured) grid on which we want to evaluate the interpolated function:
```
xnew = np.linspace(x.min(), x.max(), 40)
ynew = np.linspace(y.min(), y.max(), 40)
Xnew, Ynew = np.meshgrid(xnew, ynew)
Xnew.shape, Ynew.shape
Fnew = griddata((x,y), f, (Xnew, Ynew), method='cubic', fill_value=0.0)
Fnew.shape
plt.pcolor(Xnew, Ynew, Fnew, label="points")
plt.colorbar()
plt.scatter(x, y, label='original points')
plt.scatter(Xnew, Ynew, marker='.', color='green', label='interpolated points')
plt.xlim(0,1)
plt.ylim(0,1)
plt.xlabel('x')
plt.ylabel('y');
plt.legend(bbox_to_anchor=(1.2, 1), loc=2, borderaxespad=0.);
```
Notice how the interpolated function is smooth in the interior regions where the original data is defined. Outside the convex hull of those points, however, `griddata` cannot interpolate; because we passed `fill_value=0.0`, those regions are filled with zeros (without it, they would be `nan`).
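To see the boundary behavior directly, here is a sketch: without a `fill_value`, `griddata` returns `nan` for any target point outside the convex hull of the input points:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
x, y = rng.random(100), rng.random(100)
f = np.sin(2*np.pi*x)*np.sin(3*np.pi*y)

# A target point well outside the convex hull of the samples
outside = griddata((x, y), f, (np.array([2.0]), np.array([2.0])), method='cubic')
print(outside)  # [nan]
```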
*Call expressions* invoke [functions](functions), which are named operations.
The name of the function appears first, followed by expressions in
parentheses.
For example, `abs` is a function that returns the absolute value of the input
argument:
```
abs(-12)
```
`round` is a function that returns the input argument rounded to the nearest integer (counting number).
```
round(5 - 1.3)
max(2, 5, 4)
```
In this last example, the `max` function is *called* on three *arguments*: 2,
5, and 4. The value of each expression within parentheses is passed to the
function, and the function *returns* the final value of the full call
expression. You separate the expressions with commas: `,`. The `max` function
can take any number of arguments and returns the maximum.
Many functions, like `max`, can accept a variable number of arguments. Others, like `round`, accept optional arguments. If you call `round` with one argument, it returns the number rounded to the nearest integer, as you have already seen:
```
round(3.3333)
```
You can also call round with two arguments, where the first argument is the number you want to round, and the second argument is the number of decimal places you want to round to. If you don't pass this second argument, `round` assumes you mean 0, corresponding to no decimal places, and rounding to the nearest integer:
```
# The same as above, rounding to 0 decimal places.
round(3.3333, 0)
```
You can also round to - say - 2 decimal places, like this:
```
# Rounding to 2 decimal places.
round(3.3333, 2)
```
A few functions are available by default, such as `abs` and `round`, but most
functions that are built into the Python language are stored in a collection
of functions called a *module*. An *import statement* is used to provide
access to a module, such as `math`.
```
import math
math.sqrt(5)
```
Operators and call expressions can be used together in an expression. The
*percent difference* between two values is used to compare values for which
neither one is obviously `initial` or `changed`. For example, in 2014 Florida
farms produced 2.72 billion eggs while Iowa farms produced 16.25 billion eggs
[^eggs]. The percent difference is 100 times the absolute value of the
difference between the values, divided by their average. In this case, the
difference is larger than the average, and so the percent difference is
greater than 100.
[^eggs]: <http://quickstats.nass.usda.gov>
```
florida = 2.72
iowa = 16.25
100*abs(florida-iowa)/((florida+iowa)/2)
```
Learning how different functions behave is an important part of learning a
programming language. A Jupyter notebook can assist in remembering the names
and effects of different functions. When editing a code cell, press the *tab*
key after typing the beginning of a name to bring up a list of ways to
complete that name. For example, press *tab* after `math.` to see all of the
functions available in the `math` module. Typing will narrow down the list of
options. To learn more about a function, place a `?` after its name. For
example, typing `math.sin?` will bring up a description of the `sin`
function in the `math` module. Try it now. You should get something like
this:
```
sqrt(x)
Return the square root of x.
```
The list of [Python's built-in
functions](https://docs.python.org/3/library/functions.html) is quite long and
includes many functions that are never needed in data science applications.
The list of [mathematical functions in the `math`
module](https://docs.python.org/3/library/math.html) is similarly long. This
text will introduce the most important functions in context, rather than
expecting the reader to memorize or understand these lists.
### Example ###
In 1869, a French civil engineer named Charles Joseph Minard created what is
still considered one of the greatest graphs of all time. It shows the
decimation of Napoleon's army during its retreat from Moscow. In 1812,
Napoleon had set out to conquer Russia, with over 350,000 men in his army.
They did reach Moscow but were plagued by losses along the way. The Russian
army kept retreating farther and farther into Russia, deliberately burning
fields and destroying villages as it retreated. This left the French army
without food or shelter as the brutal Russian winter began to set in. The
French army turned back without a decisive victory in Moscow. The weather got
colder and more men died. Fewer than 10,000 returned.

The graph is drawn over a map of eastern Europe. It starts at the
Polish-Russian border at the left end. The light brown band represents
Napoleon's army marching towards Moscow, and the black band represents the
army returning. At each point of the graph, the width of the band is
proportional to the number of soldiers in the army. At the bottom of the
graph, Minard includes the temperatures on the return journey.
Notice how narrow the black band becomes as the army heads back. The crossing
of the Berezina river was particularly devastating; can you spot it on the
graph?
The graph is remarkable for its simplicity and power. In a single graph,
Minard shows six variables:
- the number of soldiers
- the direction of the march
- the latitude and longitude of location
- the temperature on the return journey
- the location on specific dates in November and December
Tufte says that Minard's graph is "probably the best statistical graphic ever
drawn."
Here is a subset of Minard's data, adapted from *The Grammar of Graphics* by
Leland Wilkinson.

Each row of the table represents the state of the army in a particular
location. The columns show the longitude and latitude in degrees, the name of
the location, whether the army was advancing or in retreat, and an estimate of
the number of men.
In this table the biggest change in the number of men between two consecutive
locations is when the retreat begins at Moscow, as is the biggest percentage
change.
```
moscou = 100000
wixma = 55000
wixma - moscou
(wixma - moscou)/moscou
```
That's a 45% drop in the number of men in the fighting at Moscow. In other
words, almost half of Napoleon's men who made it into Moscow didn't get very
much farther.
As you can see in the graph, Moiodexno is pretty close to Kowno where the army
started out. Fewer than 10% of the men who marched into Smolensk during the
advance made it as far as Moiodexno on the way back.
```
smolensk_A = 145000
moiodexno = 12000
(moiodexno - smolensk_A)/smolensk_A
```
Yes, you could do these calculations by just using the numbers without names.
But the names make it much easier to read the code and interpret the results.
It is worth noting that bigger absolute changes don't always correspond to
bigger percentage changes.
The absolute loss from Smolensk to Dorogobouge during the advance was 5,000
men, whereas the corresponding loss from Smolensk to Orscha during the retreat
was smaller, at 4,000 men.
However, the percent change was much larger between Smolensk and Orscha
because the total number of men in Smolensk was much smaller during the
retreat.
```
dorogobouge = 140000
smolensk_R = 24000
orscha = 20000
abs(dorogobouge - smolensk_A)
abs(dorogobouge - smolensk_A)/smolensk_A
abs(orscha - smolensk_R)
abs(orscha - smolensk_R)/smolensk_R
```
{% data8page Calls %}
_Lambda School Data Science — Model Validation_
# Validate classification problems
Objectives
- Imbalanced Classes
- Confusion Matrix
- ROC AUC
Reading
- [Simple guide to confusion matrix terminology](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/)
- [Precision and Recall](https://en.wikipedia.org/wiki/Precision_and_recall)
## Preliminary
We'll use [mlxtend](http://rasbt.github.io/mlxtend/) and [yellowbrick](http://www.scikit-yb.org/en/latest/) for visualizations. These libraries are already installed on Google Colab. But if you are running locally with Anaconda Python, you'll probably need to install them:
```
conda install -c conda-forge mlxtend
conda install -c districtdatalabs yellowbrick
```
We'll reuse the `train_validation_test_split` function from yesterday's lesson.
```
from sklearn.model_selection import train_test_split
def train_validation_test_split(
X, y, train_size=0.8, val_size=0.1, test_size=0.1,
random_state=None, shuffle=True):
assert train_size + val_size + test_size == 1
X_train_val, X_test, y_train_val, y_test = train_test_split(
X, y, test_size=test_size, random_state=random_state, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=val_size/(train_size+val_size),
random_state=random_state, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test
```
## Fun demo!
The next code cell does five things:
#### 1. Generate data
We use scikit-learn's [make_classification](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.html) function to generate fake data for a binary classification problem, based on several parameters, including:
- Number of samples
- Weights, meaning "the proportions of samples assigned to each class."
- Class separation: "Larger values spread out the clusters/classes and make the classification task easier."
(We are generating fake data so it is easy to visualize.)
#### 2. Split data
We split the data three ways, into train, validation, and test sets. (For this toy example, it's not really necessary to do a three-way split. A two-way split, or even no split, would be ok. But I'm trying to demonstrate good habits, even in toy examples, to avoid confusion.)
#### 3. Fit model
We use scikit-learn to fit a [Logistic Regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) on the training data.
We use this model parameter:
> **class_weight : _dict or ‘balanced’, default: None_**
> Weights associated with classes in the form `{class_label: weight}`. If not given, all classes are supposed to have weight one.
> The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as `n_samples / (n_classes * np.bincount(y))`.
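To make that formula concrete, here is a quick sketch (not part of the lesson) computing the "balanced" weights by hand and checking them against scikit-learn's helper:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0]*95 + [1]*5)  # 95% / 5% class distribution

# n_samples / (n_classes * np.bincount(y))
manual = len(y) / (2 * np.bincount(y))
print(manual)  # the rare class gets a much larger weight

# scikit-learn's helper applies the same formula
print(compute_class_weight('balanced', classes=np.array([0, 1]), y=y))
```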
#### 4. Evaluate model
We use our Logistic Regression model, which was fit on the training data, to generate predictions for the validation data.
Then we print [scikit-learn's Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report), with many metrics, and also the accuracy score. We are comparing the correct labels to the Logistic Regression's predicted labels, for the validation set.
#### 5. Visualize decision function
Based on these examples
- https://imbalanced-learn.readthedocs.io/en/stable/auto_examples/combine/plot_comparison_combine.html
- http://rasbt.github.io/mlxtend/user_guide/plotting/plot_decision_regions/#example-1-decision-regions-in-2d
```
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, classification_report
from sklearn.linear_model import LogisticRegression
from mlxtend.plotting import plot_decision_regions
#1. Generate data
# Try re-running the cell with different values for these parameters
n_samples = 1000
weights = (0.50, 0.50)
class_sep = 0.8
X, y = make_classification(n_samples=n_samples, n_features=2, n_informative=2,
n_redundant=0, n_repeated=0, n_classes=2,
n_clusters_per_class=1, weights=weights,
class_sep=class_sep, random_state=0)
# 2. Split data
# Uses our custom train_validation_test_split function
X_train, X_val, X_test, y_train, y_val, y_test = train_validation_test_split(
X, y, train_size=0.8, val_size=0.1, test_size=0.1, random_state=1)
# 3. Fit model
# Try re-running the cell with different values for this parameter
class_weight = None
model = LogisticRegression(solver='lbfgs', class_weight=class_weight)
model.fit(X_train, y_train)
# 4. Evaluate model
y_pred = model.predict(X_val)
print(classification_report(y_val, y_pred))
print('accuracy', accuracy_score(y_val, y_pred))
# 5. Visualize decision regions
plt.figure(figsize=(10, 6))
plot_decision_regions(X_val, y_val, model, legend=0);
```
Try re-running the cell above with different values for these four parameters:
- `n_samples`
- `weights`
- `class_sep`
- `class_weight`
For example, with a 50% / 50% class distribution:
```
n_samples = 1000
weights = (0.50, 0.50)
class_sep = 0.8
class_weight = None
```
With a 95% / 5% class distribution:
```
n_samples = 1000
weights = (0.95, 0.05)
class_sep = 0.8
class_weight = None
```
With the same 95% / 5% class distribution, but changing the Logistic Regression's `class_weight` parameter to `'balanced'` (instead of its default `None`)
```
n_samples = 1000
weights = (0.95, 0.05)
class_sep = 0.8
class_weight = 'balanced'
```
With the same 95% / 5% class distribution, but with different values for `class_weight`:
- `{0: 1, 1: 1}` _(equivalent to `None`)_
- `{0: 1, 1: 2}`
- `{0: 1, 1: 10}` _(roughly equivalent to `'balanced'` for this dataset)_
- `{0: 1, 1: 100}`
- `{0: 1, 1: 10000}`
How do the evaluation metrics and decision region plots change?
## What you can do about imbalanced classes
[Learning from Imbalanced Classes](https://www.svds.com/tbt-learning-imbalanced-classes/) gives "a rough outline of useful approaches" :
- Do nothing. Sometimes you get lucky and nothing needs to be done. You can train on the so-called natural (or stratified) distribution and sometimes it works without need for modification.
- Balance the training set in some way:
- Oversample the minority class.
- Undersample the majority class.
- Synthesize new minority classes.
- Throw away minority examples and switch to an anomaly detection framework.
- At the algorithm level, or after it:
- Adjust the class weight (misclassification costs).
- Adjust the decision threshold.
- Modify an existing algorithm to be more sensitive to rare classes.
- Construct an entirely new algorithm to perform well on imbalanced data.
We demonstrated just one of these options: many scikit-learn classifiers have a `class_weight` parameter, which we can use to "adjust the class weight (misclassification costs)."
The [imbalanced-learn](https://github.com/scikit-learn-contrib/imbalanced-learn) library can be used to "oversample the minority class, undersample the majority class, or synthesize new minority classes."
You can see how to "adjust the decision threshold" in a great blog post, [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415).
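As a quick illustration of the "adjust the decision threshold" option (a sketch, not from the lesson): instead of the default 0.5 cutoff implied by `predict`, you can threshold `predict_proba` yourself, trading precision for recall:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

X, y = make_classification(n_samples=1000, weights=(0.95, 0.05), random_state=0)
model = LogisticRegression(solver='lbfgs').fit(X, y)

proba = model.predict_proba(X)[:, 1]        # probability of the minority class

default_pred = (proba >= 0.5).astype(int)
lowered_pred = (proba >= 0.2).astype(int)   # a lower threshold flags more positives

print('recall at 0.5:', recall_score(y, default_pred))
print('recall at 0.2:', recall_score(y, lowered_pred))
```

Lowering the threshold can only increase recall (it never removes a positive prediction), at the likely cost of precision.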
## Bank Marketing — getting started
https://archive.ics.uci.edu/ml/datasets/Bank+Marketing
The data is related with direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact to the same client was required, in order to assess if the product (bank term deposit) would be ('yes') or not ('no') subscribed.
bank-additional-full.csv with all examples (41188) and 20 inputs, **ordered by date (from May 2008 to November 2010)**
### Download data
```
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/00222/bank-additional.zip
!unzip bank-additional.zip
%cd bank-additional
```
### Load data, assign to X and y
```
import pandas as pd
bank = pd.read_csv('bank-additional-full.csv', sep=';')
X = bank.drop(columns='y')
y = bank['y'] == 'yes'
```
### Split data
We want to do "model selection (hyperparameter optimization) and performance estimation" so we'll choose a validation method from the diagram's green box.
There is no one "right" choice here, but I'll choose "3-way holdout method (train/validation/test split)."
<img src="https://sebastianraschka.com/images/blog/2018/model-evaluation-selection-part4/model-eval-conclusions.jpg" width="600">
Source: https://sebastianraschka.com/blog/2018/model-evaluation-selection-part4.html
There's no one "right" choice here, but I'll choose to split by time, not with a random shuffle, based on this advice by [Rachel Thomas](
https://www.fast.ai/2017/11/13/validation-sets/):
> If your data is a time series, choosing a random subset of the data will be both too easy (you can look at the data both before and after the dates your are trying to predict) and not representative of most business use cases (where you are using historical data to build a model for use in the future).
[According to UCI](https://archive.ics.uci.edu/ml/datasets/Bank+Marketing), this data is "ordered by date (from May 2008 to November 2010)" so if I don't shuffle it when splitting, then it will be split by time.
```
X_train, X_val, X_test, y_train, y_val, y_test = train_validation_test_split(
X, y, shuffle=False)
```
## Bank Marketing — live coding!
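A sensible first step for the live coding: check the majority-class baseline, the accuracy floor that any model must beat on imbalanced data. (A sketch with a stand-in target, since the real `y_train` comes from the download and split above.)

```python
import pandas as pd

# Stand-in for the bank data's imbalanced boolean target
y_train = pd.Series([False]*900 + [True]*100)
y_val = pd.Series([False]*90 + [True]*10)

# Predicting the majority class for every row sets the accuracy floor
majority_class = y_train.value_counts().idxmax()
baseline_accuracy = (y_val == majority_class).mean()
print(majority_class, baseline_accuracy)  # False 0.9
```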
# ASSIGNMENT options
Replicate code from the lesson or other examples. [Do it "the hard way" or with the "Benjamin Franklin method."](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit)
Work with one of these datasets
- [Bank Marketing](https://archive.ics.uci.edu/ml/datasets/Bank+Marketing)
- [Synthetic Financial Dataset For Fraud Detection](https://www.kaggle.com/ntnu-testimon/paysim1)
- Any imbalanced binary classification dataset
Continue improving your model. Measure validation performance with a variety of classification metrics, which could include:
- Accuracy
- Precision
- Recall
- F1
- ROC AUC
Try one of the other options mentioned for imbalanced classes
- The [imbalanced-learn](https://github.com/scikit-learn-contrib/imbalanced-learn) library can be used to "oversample the minority class, undersample the majority class, or synthesize new minority classes."
- You can see how to "adjust the decision threshold" in a great blog post, [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415).
## Don't worry if you don't understand everything at first! You're not supposed to. We will start using some "black boxes" and then we'll dig into the lower level details later.
## To start, focus on what things DO, not what they ARE.
# What is NLP?
Natural Language Processing is a set of techniques by which computers try to understand human language and make meaning out of it.
NLP is a broad field, encompassing a variety of tasks, including:
1. Part-of-speech tagging: identify whether each word is a noun, verb, adjective, etc.
2. Named entity recognition (NER): identify person names, organizations, locations, medical codes, time expressions, quantities, monetary values, etc.
3. Question answering
4. Speech recognition
5. Text-to-speech and Speech-to-text
6. Topic modeling
7. Sentiment classification
8. Language modeling
9. Translation
# What is NLU?
Natural Language Understanding is about going beyond processing text to comprehending it — extracting the meaning behind natural language.
Goals of NLU
1. Gain insights into cognition
2. Develop Artificial Intelligence agents that can act as assistants.
# What is NLG?
Natural language generation is the natural language processing task of generating natural language from a machine representation system such as a knowledge base or a logical form.
Example applications of NLG
1. Recommendation and Comparison
2. Report Generation –Summarization
3. Paraphrase
4. Prompt and response generation in dialogue systems
# Packages
1. [Flair](https://github.com/zalandoresearch/flair)
2. [Allen NLP](https://github.com/allenai/allennlp)
3. [Deep Pavlov](https://github.com/deepmipt/deeppavlov)
4. [Pytext](https://github.com/facebookresearch/PyText)
5. [NLTK](https://www.nltk.org/)
6. [Hugging Face Pytorch Transformer](https://github.com/huggingface/pytorch-transformers)
7. [Spacy](https://spacy.io/)
8. [torchtext](https://torchtext.readthedocs.io/en/latest/)
9. [Ekphrasis](https://github.com/cbaziotis/ekphrasis)
10. [Genism](https://radimrehurek.com/gensim/)
# NLP Pipeline
## Data Collection
### Sources
For Generative Training :- Where the model has to learn about the data and its distribution
1. News Article:- Archives
2. Wikipedia Article
3. Book Corpus
4. Crawling the Internet for webpages.
5. Reddit
Generative training on an abundant set of unsupervised data helps in performing transfer learning for a downstream task, where only a few parameters need to be learnt from scratch and less data is required.
For Deterministic Training :- Where the model learns about the decision boundary within the data.
Generic
1. Kaggle Dataset
Sentiment
1. Product Reviews :- Amazon, Flipkart
Emotion:-
1. ISEAR
2. Twitter dataset
Question Answering:-
1. SQUAD
etc.
### For Vernacular text
In the vernacular context we have a scarcity of data, especially when it comes to state-specific languages in India (e.g. Bengali, Gujarati, etc.).
Few Sources are:-
1. News (Jagran.com, Dainik Bhaskar)
2. Movie reviews (Web Duniya)
3. Hindi Wikipedia
4. Book Corpus
6. IIT Bombay (English-Hindi Parallel Corpus)
### Tools
1. Scrapy :- A simple, extensible framework for scraping and crawling websites, with numerous features built in.
2. Beautiful Soup :- For parsing HTML and XML documents.
3. Excel
4. wikiextractor:- A tool for extracting plain text from Wikipedia dumps
### Data Annotation Tool
1. TagTog
2. Prodigy (Explosion AI)
3. Mechanical Turk
4. PyBossa
5. Chakki-works Doccano
6. WebAnno
7. Brat
## Data Preprocessing
1. Cleaning
2. Regex
1. Url Cleanup
2. HTML Tag
3. Date
4. Numbers
5. Lingos
6. Emoticons
3. Lemmatization
4. Stemming
5. Chunking
6. POS Tags
7. NER Tags
8. Stopwords
9. Tokenizers
10. Spell Correction
11. Word Segmentation
12. Word Processing
1. Elongated
2. Repeated
3. All Caps
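A minimal sketch of the regex-based cleanup steps above (illustrative only; real pipelines need more care with each pattern):

```python
import re

def clean(text):
    text = re.sub(r"https?://\S+", " ", text)   # URL cleanup
    text = re.sub(r"<[^>]+>", " ", text)        # HTML tags
    text = re.sub(r"\d+", " ", text)            # numbers / dates
    text = re.sub(r"\s+", " ", text).strip()    # collapse whitespace
    return text.lower()

print(clean("Visit <b>https://example.com</b> before 2024!"))  # → 'visit before !'
```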
### Feature Selection
1. Bag of Words

2. TF-IDF

3. Word Embeddings
1. Word2Vec
Word2Vec is a predictive model.

2. Glove
GloVe is a count-based model: it learns its vectors by essentially doing dimensionality reduction on the word co-occurrence counts matrix.
3. FastText
FastText is trained in a similar fashion to how the word2vec model is trained; the only difference is that FastText enriches the word vectors with subword units.
[FastText works](https://www.quora.com/What-is-the-main-difference-between-word2vec-and-fastText)
4. ELMO
ELMo is a deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). These word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. They can be easily added to existing models and significantly improve the state of the art across a broad range of challenging NLP problems, including question answering, textual entailment and sentiment analysis.
ELMo representations are:
* Contextual: The representation for each word depends on the entire context in which it is used.
* Deep: The word representations combine all layers of a deep pre-trained neural network.
* Character based: ELMo representations are purely character based, allowing the network to use morphological clues to form robust representations for out-of-vocabulary tokens unseen in training.
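A minimal sketch of the first two feature representations with scikit-learn (illustrative, not from the original):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["the cat sat on the mat", "the dog sat on the log"]

bow = CountVectorizer().fit_transform(docs)    # Bag of Words: raw token counts
tfidf = TfidfVectorizer().fit_transform(docs)  # TF-IDF: counts reweighted by rarity

print(bow.toarray())             # one row per document, one column per vocabulary word
print(tfidf.toarray().round(2))  # words unique to a document get relatively higher weight
```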
### Modelling
1. RNN

RNNs suffer from the vanishing gradient problem, and they do not preserve long-term dependencies.
2. LSTM
Long Short Term Memory networks – usually just called “LSTMs” – are a special kind of RNN, capable of learning long-term dependencies.
LSTMs are explicitly designed to avoid the long-term dependency problem. Remembering information for long periods of time is practically their default behavior, not something they struggle to learn!

3. BI-LSTM

4. GRU
5. CNNs
6. Seq-Seq

7. Seq-Seq Attention

8. Pointer Generator Network

9. Transformer


10. GPT

11. Transformer-XL

12. BERT
BERT’s key technical innovation is applying the bidirectional training of Transformer, a popular attention model, to language modelling.
BERT is given billions of sentences at training time. It’s then asked to predict a random selection of missing words from these sentences. After practicing with this corpus of text several times over, BERT adopts a pretty good understanding of how a sentence fits together grammatically. It’s also better at predicting ideas that are likely to show up together.


13. GPT-2

## Business Problem
1. Text Classification
1. Sentiment Classification
2. Emotion Classification
3. Reviews Rating
2. Topic Modeling
3. Named Entity Recognition
4. Part Of Speech Tagging
5. Language Model
6. Machine Translation
7. Question Answering
8. Text Summarization
9. Text Generation
10. Image Captioning
11. Optical Character Recognition
12. Chatbots
13. [Dependency Parsing](https://nlpprogress.com/english/dependency_parsing.html)
14. [Coreference Resolution](https://en.wikipedia.org/wiki/Coreference)
15. [Semantic Textual Similarity](https://nlpprogress.com/english/semantic_textual_similarity.html)