markdown stringlengths 0 1.02M | code stringlengths 0 832k | output stringlengths 0 1.02M | license stringlengths 3 36 | path stringlengths 6 265 | repo_name stringlengths 6 127 |
|---|---|---|---|---|---|
*Note*: the parentheses are optional | t = 1, 2, 3, 4
t | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
"Unpacking": as with lists (or any iterable), it is possible to extract values from a tuple and assign them to new variables | t[1:3]
second_item, third_item = t[1], t[2]
print(second_item)
print(third_item) | 2
3
| CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
**Tip**: unpack an undefined number of items | second_item, *greater_items = t[1:]
second_item
greater_items | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
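Extended unpacking also works in other positions. A small sketch (the variable names here are illustrative, not from the lecture):

```python
t = (1, 2, 3, 4, 5)
first, *middle, last = t  # the starred name collects the items in between, as a list
print(first)   # 1
print(middle)  # [2, 3, 4]
print(last)    # 5
```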
Dictionaries: map keys to values | d = {'key1': 0, 'key2': 1}
d | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Keys must be unique. But be careful: no error is raised if you provide duplicate keys! | d = {'key1': 0, 'key2': 1, 'key1': 3}
d | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Indexing dictionaries by key | d['key1'] | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Keys are not limited to strings; they can be many things (but not anything, as we'll see later) | d = {'key1': 0, 2: 1, 3.: 3}
d[2] | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Get keys or values | d.keys()
d.values()
a[d['key1']]
d = {
'benoit': {
'age': 33,
'section':'5.5'
}
}
d['benoit']['age'] | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
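As an aside: indexing with a missing key raises a `KeyError`, while `dict.get` returns a default instead. A small sketch (the dictionary mirrors the cell above):

```python
d = {'benoit': {'age': 33, 'section': '5.5'}}

# d['unknown'] would raise a KeyError; .get returns None or a chosen default
print(d.get('unknown'))                # None
print(d.get('unknown', {}))            # {}
print(d.get('benoit', {}).get('age'))  # 33
```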
Mutable vs. immutable We can change the value of a variable in place (after we create the variable) or we can't. For example, lists are mutable. | a = [1, 2, 3, 4]
a | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Change the value of one item in place | a[0] = 'one'
a | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Append one item at the end of the list | a.append(5)
a | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Insert one item at a given position | a.insert(0, 'zero')
a | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Extract and remove the last item | a.pop()
a | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Dictionaries are mutable (note the order of the keys in the printed dict) | d = {'key1': 0, 'key2': 1, 'key3': 2}
d['key4'] = 4
d | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Pop the item with a given key | d.pop('key1')
d | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Tuples are immutable! | t = (1, 2, 3, 4)
t.append(5) | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Strings are immutable! | food = "bradwurst"
food[0:4] = "cury" | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
But it is easy and efficient to create new strings | food = "curry" + food[-5:]
food | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
A reason why strings are immutable? The keys of a dictionary cannot be mutable; e.g., we cannot use a list | d = {[1, 3]: 0} | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
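The underlying rule is hashability: immutable built-ins such as tuples work as keys, while lists raise a `TypeError`. A quick sketch:

```python
d = {(1, 3): 'a tuple key works'}  # tuples are hashable, so they can be keys
print(d[(1, 3)])

try:
    bad = {[1, 3]: 0}              # lists are unhashable
except TypeError as err:
    caught = err
    print('TypeError:', err)
```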
The keys of a dictionary cannot be mutable, for a fairly obvious reason: they are used as indexes, like in a database. If we allowed the indexes to change, it could be a real mess! If strings were mutable, then we couldn't use them as keys in dictionaries. *Note*: more precisely, the keys of a dictionary must be "hashable". Variables or identifiers? What's happening here? | a = [1, 2, 3]
b = a
b[0] = 'one'
a | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Explanation: the concept of a variable is different in Python than in, e.g., C or Fortran. `a = [1, 2, 3]` means we create a list object and bind this object to a name (label or identifier) "a". `b = a` means we bind the same object to a new name "b". You can find more details and good illustrations here: https://nedbatchelder.com/text/names1.html `id()` returns the (unique) identifier of the value (object) bound to a given name | id(a)
id(b) | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
`is`: checks whether two identifiers are bound to the same value (object) | a is b | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
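A related sketch: a copy compares equal but is a distinct object (`list(a)` is one of several ways to make a shallow copy):

```python
a = [1, 2, 3]
b = a        # b is bound to the same object
c = list(a)  # c is a new object with equal contents

print(a is b)  # True: same object
print(a is c)  # False: distinct objects
print(a == c)  # True: == compares values, `is` compares identity
```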
OK, but how do you explain this? | a = 1
b = a
b = 2
a
a is b
id(a)
id(b) | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Can you explain what's going on here? | a = 1
b = 2
b = a + b
b | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Where does the value "2" that was initially bound to "b" go? OK, now what about this? Very confusing! | a = 1
b = 1
a is b
a = 1.
b = 1.
a is b | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
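What happens here is an implementation detail: CPython caches small integers (roughly -5 to 256), so equal small ints are usually the same object, while floats are created fresh. Never rely on this; it is mentioned only to explain the surprise above:

```python
x = 1
y = 1
print(x is y)  # True in CPython: small integers are cached

f = 1.0
g = 1.0
# may be True or False depending on the implementation and context
print(f is g)

# the reliable comparison is always ==
print(x == y, f == g)
```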
Dynamic, strong, duck typing. Dynamic typing: no need to explicitly declare the type of an object/variable before using it; the type is determined automatically from the given object/value. | a = 1
type(a) | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Strong typing: Converting from one type to another must be explicit, i.e., a value of a given type cannot be magically converted into another type | a + '1'
a + int('1')
eval('1 + 2 * 3') | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
An exception: integer to float casting | a + 1. | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Duck typing: The type of an object doesn't really matter. What an object can or cannot do is more important. > "If it walks like a duck and it quacks like a duck, then it must be a duck" For example, we can show that iterating through a list, a string, or a dict can be done using the exact same loop | var = [1, 2, 3, 4]
for i in var:
print(i)
var = 'abcd'
for i in var:
print(i)
var = {'key1': 1, 'key2': 2}
for i in var:
print(i) | key1
key2
| CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
In the last case, iterating over a dictionary uses the keys. It is possible to iterate over the values: | for v in var.values():
print(v) | 1
2
| CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Or, more usefully, iterate through both keys and values | for k, v in var.items():
print(k, v)
t = ('key1', 1)
k, v = t
var.items() | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Arithmetic operators can obviously be applied to integers, floats... | 1 + 1
1 + 2. | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
...but also on strings and lists (in this case it does concatenation) | [1, 2, 3] + ['a', 'b', 'c']
'other' + 'one' | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
... and also when mixing types, e.g., to repeat a sequence x times | [1, 2, 3] * 3
'one' * 3 | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
...although not everything is possible | [1, 2, 3] * 3.5 | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Boolean: what is True and what is False | print(True)
print(False)
print(bool(0))
print(bool(-1))
a = 1.7
if a:
print('non zero')
print(bool(''))
print(bool('no empty'))
print(bool([]))
print(bool([1, 2]))
print(bool({}))
print(bool({'key1': 1}))
d = {}
if not d:
print('there is no item')
| there is no item
| CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
List comprehensions. Example: we create a list from another one using a `for` loop | ints = [1, 3, 5, 0, 2, 0]
true_or_false = []
for i in ints:
true_or_false.append(bool(i))
true_or_false | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
But there is a much more succinct way to do it. It is still (and maybe even more) readable | true_or_false = [bool(i) for i in ints]
true_or_false | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
More complex example, with conditions | float_no3 = [float(i) for i in ints if i != 3]
float_no3 | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Other kinds of conditions (it starts to be less readable -> don't abuse list comprehensions) | float_str3 = [float(i) if i != 3 else str(i) for i in ints]
float_str3 | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Dict comprehensions | int2float_map = {i: float(i) for i in ints}
int2float_map | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Functions: A function takes value(s) as input and (optionally) returns value(s) as output. Inputs = arguments | def add(a, b):
"""Add two things."""
return a + b
def print_the_argument(arg):
print(arg)
print_the_argument('a string') | a string
| CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
We can call it several times with different values | add(1, 3)
help(add) | Help on function add in module __main__:
add(a, b)
Add two things.
| CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Nested calls | add(add(1, 2), 3) | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Duck typing is really useful! A single function for doing many things (write less code) | add(1., 2.)
add('one', 'two')
add([1, 2, 3], [1, 2, 3]) | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Functions have a scope that is local | a = 1
def func():
a = 2
a
func()
a | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Call by value? | def func(j):
j = j + 1
print('inside: ', j)
return j
i = 1
print('before:', i)
i = func(i)
print('after:', i) | before: 1
inside: 2
after: 2
| CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Not really... | def func(li):
li[0] = 1000
print('inside: ', li[0])
li = [1]
print('before:', li[0])
func(li)
print('after:', li[0]) | before: 1
inside: 1000
after: 1000
| CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
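The usual name for this behavior is "call by object reference" (call by sharing): rebinding the parameter name inside a function does not affect the caller, while mutating the passed object does. A sketch (the function names are illustrative):

```python
def rebind(li):
    li = [0, 0, 0]  # rebinds the *local* name only; the caller's list is untouched

def mutate(li):
    li[0] = 1000    # mutates the shared object; the caller sees the change

a = [1, 2, 3]
rebind(a)
print(a)  # [1, 2, 3]
mutate(a)
print(a)  # [1000, 2, 3]
```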
Composing functions (starts to look like functional programming) | C2K_OFFSET = 273.15
def fahr_to_kelvin(temp):
"""convert temp from fahrenheit to kelvin"""
return ((temp - 32) * (5/9)) + C2K_OFFSET
def kelvin_to_celsius(temp_k):
# convert temperature from kelvin to celsius
return temp_k - C2K_OFFSET
def fahr_to_celsius(temp_f):
temp_k = fahr_to_kelvin(temp_f)
temp_c = kelvin_to_celsius(temp_k)
return temp_c
fahr_to_kelvin(50)
fahr_to_celsius(50) | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
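A few quick sanity checks of these conversions against known reference points (the function bodies are repeated here so the cell is self-contained; `math.isclose` guards against floating-point rounding):

```python
import math

C2K_OFFSET = 273.15

def fahr_to_kelvin(temp):
    """convert temp from fahrenheit to kelvin"""
    return ((temp - 32) * (5 / 9)) + C2K_OFFSET

def kelvin_to_celsius(temp_k):
    # convert temperature from kelvin to celsius
    return temp_k - C2K_OFFSET

def fahr_to_celsius(temp_f):
    return kelvin_to_celsius(fahr_to_kelvin(temp_f))

# water freezes at 32 F = 0 C and boils at 212 F = 100 C
print(math.isclose(fahr_to_celsius(32), 0.0, abs_tol=1e-9))      # True
print(math.isclose(fahr_to_celsius(212), 100.0))                 # True
# absolute zero: -459.67 F = 0 K
print(math.isclose(fahr_to_kelvin(-459.67), 0.0, abs_tol=1e-9))  # True
```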
Function docstring (help) Default argument values (keyword arguments) | def display(a=1, b=2, c=3):
print(a, b, c)
display(b=4) | 1 4 3
| CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
When calling a function, the order of the keyword arguments doesn't matter. But the order matters for positional arguments!! | display(c=5, a=1)
display(3) | 3 2 3
| CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Mix positional and keyword arguments: positional arguments must be added before keyword arguments | def display(c, a=1, b=2):
print(a, b, c)
display(1000) | 1 2 1000
| CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
What's going on here? | def add_to_list(li=[], value=1):
li.append(value)
return li
add_to_list()
add_to_list()
add_to_list() | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Try running the cell that defines the function again, and then the cells that call the function. This is sooo confusing! So you shouldn't use mutable objects as default values. Workaround: | def add_to_list(li=None, value=1):
if li is None:
li = []
li.append(value)
return li
add_to_list()
add_to_list() | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
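With the `None` default, each call without an argument now gets a fresh list; a quick check (the function body is repeated so the cell stands alone):

```python
def add_to_list(li=None, value=1):
    if li is None:
        li = []          # a new list on every call, never shared between calls
    li.append(value)
    return li

print(add_to_list())        # [1]
print(add_to_list())        # [1] again; no state leaks between calls
print(add_to_list([0], 2))  # [0, 2]
```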
Arbitrary number of arguments | def display_args(*args):
print(args)
nb_args = len(args)
print(nb_args)
print(*args)
display_args('one')
display_args(1, '2', 'bradwurst') | (1, '2', 'bradwurst')
3
1 2 bradwurst
| CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
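The `*` syntax also works in the other direction: at the call site it unpacks a sequence into positional arguments (and `**` unpacks a dict into keyword arguments). A sketch with an illustrative helper:

```python
def add3(a, b, c):
    return a + b + c

args = [1, 2, 3]
print(add3(*args))         # 6, equivalent to add3(1, 2, 3)

kwargs = {'b': 20, 'c': 30}
print(add3(10, **kwargs))  # 60, equivalent to add3(10, b=20, c=30)
```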
Arbitrary number of keyword arguments | def display_args_kwargs(*args, **kwargs):
print(*args)
print(kwargs)
display_args_kwargs('one', 2, three=3.) | one 2
{'three': 3.0}
| CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Return more than one value (tuple) | import math
def spherical_coords(x, y, z):
    # convert Cartesian (x, y, z) to spherical coordinates (r, theta, phi)
    r = math.sqrt(x ** 2 + y ** 2 + z ** 2)
    theta = math.acos(z / r)
    phi = math.atan2(y, x)
    return r, theta, phi | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Modules: Modules are Python code in `.py` files that can be imported from within Python. Like functions, they allow reusing code in different contexts. Write a module with the temperature conversion functions above (note: `%%writefile` is a magic cell command in the notebook that writes the content of the cell to a file) | %%writefile temp_converter.py
C2K_OFFSET = 273.15
def fahr_to_kelvin(temp):
"""convert temp from fahrenheit to kelvin"""
return ((temp - 32) * (5/9)) + C2K_OFFSET
def kelvin_to_celsius(temp_k):
# convert temperature from kelvin to celsius
return temp_k - C2K_OFFSET
def fahr_to_celsius(temp_f):
temp_k = fahr_to_kelvin(temp_f)
temp_c = kelvin_to_celsius(temp_k)
return temp_c | Overwriting temp_converter.py
| CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Import a module | import temp_converter | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Access the functions imported with the module using the module name as a "namespace". **Tip**: imported module + dot + `<TAB>` for autocompletion | temp_converter.fahr_to_celsius(100.) | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Import the module with a (short) alias for the namespace | import temp_converter as tc
tc.fahr_to_celsius(100.) | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Import just a function from the module | from temp_converter import fahr_to_celsius
fahr_to_celsius(100.) | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Import everything in the module (without using a namespace). Strongly discouraged!! Name conflicts! | from temp_converter import *
kelvin_to_celsius(270) | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
(Text) file IO. Let's create a small file with some data | %%writefile data.csv
"depth", "some_variable"
200, 2.4e2
400, 5.6e2
600, 2.6e8 | Writing data.csv
| CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Open the file using Python: | f = open("data.csv", "r")
f | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Read the content | raw_data = f.readlines()
raw_data | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
What happens here? | f.readlines()
f.seek(0)
f.readlines() | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
Close the file | f.close() | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
It is safer to use the `with` statement (contexts) | with open("data.csv") as f:
raw_data = f.readlines()
raw_data
f.closed | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
We don't need to close the file; it is done automatically after executing the block of instructions under the `with` statement. It is safer because if an error happens within the block of instructions, the file is closed anyway. Note here how we can explicitly raise an error. There are many kinds of exceptions, see: https://docs.python.org/3/library/exceptions.html#bltin-exceptions | with open("data.csv") as f:
raw_data = f.readlines()
raise ValueError("something wrong happened")
raw_data
f.closed | _____no_output_____ | CC-BY-4.0 | notebooks/lectures_potsdam_201802/python_intro.ipynb | benbovy/python_short_course |
14 - Introduction to Deep Learning, by [Alejandro Correa Bahnsen](albahnsen.com/), version 0.1, May 2016. Part of the class [Machine Learning Applied to Risk Management](https://github.com/albahnsen/ML_RiskManagement). This notebook is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](http://creativecommons.org/licenses/by-sa/3.0/deed.en_US). Based on the slides and presentation by [Alec Radford](https://www.youtube.com/watch?v=S75EdAcXHKk) [github](https://github.com/Newmu/Theano-Tutorials/). For this class you must install theano: ```pip install theano``` Motivation: How do we program a computer to recognize a picture of a handwritten digit as a 0-9? What if we have 60,000 of these images and their labels? | import numpy as np
from load import mnist
X_train, X_test, y_train2, y_test2 = mnist(onehot=True)
y_train = np.argmax(y_train2, axis=1)
y_test = np.argmax(y_test2, axis=1)
X_train[1].reshape((28, 28)).round(0).astype(int)[:, 4:26].tolist()
from pylab import imshow, show, cm
import matplotlib.pylab as plt
%matplotlib inline
def view_image(image, label="", predicted='', size=4):
"""View a single image."""
plt.figure(figsize = (size, size))
plt.imshow(image.reshape((28, 28)), cmap=cm.gray, )
plt.tick_params(axis='x',which='both', bottom='off',top='off', labelbottom='off')
plt.tick_params(axis='y',which='both', left='off',top='off', labelleft='off')
show()
if predicted == '':
print("Label: %s" % label)
else:
print('Label: ', str(label), 'Predicted: ', str(predicted))
view_image(X_train[1], y_train[1])
view_image(X_train[40000], y_train[40000]) | _____no_output_____ | MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
Naive model: For each image, find the “most similar” image and guess that as the label | def similarity(image, images):
similarities = []
image = image.reshape((28, 28))
images = images.reshape((-1, 28, 28))
for i in range(images.shape[0]):
distance = np.sqrt(np.sum(image - images[i]) ** 2)
sim = 1 / distance
similarities.append(sim)
return similarities
np.random.seed(52)
small_train = np.random.choice(X_train.shape[0], 100)
view_image(X_test[0])
similarities = similarity(X_test[0], X_train[small_train])
view_image(X_train[small_train[np.argmax(similarities)]]) | _____no_output_____ | MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
Let's try another example | view_image(X_test[200])
similarities = similarity(X_test[200], X_train[small_train])
view_image(X_train[small_train[np.argmax(similarities)]]) | _____no_output_____ | MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
Logistic Regression: Logistic regression is a probabilistic, linear classifier. It is parametrized by a weight matrix $W$ and a bias vector $b$. Classification is done by projecting data points onto a set of hyperplanes, the distance to which is used to determine a class membership probability. Mathematically, this can be written as: $$ P(Y=i\vert x, W,b) = softmax_i(W x + b) $$ $$ P(Y=i|x, W,b) = \frac {e^{W_i x + b_i}} {\sum_j e^{W_j x + b_j}}$$ The output of the model or prediction is then done by taking the argmax of the vector whose i-th element is $P(Y=i|x)$. $$ y_{pred} = argmax_i P(Y=i|x,W,b)$$ | import theano
from theano import tensor as T
import numpy as np
import datetime as dt
theano.config.floatX = 'float32' | _____no_output_____ | MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
```Theano is a Python library that lets you to define, optimize, and evaluate mathematical expressions, especially ones with multi-dimensional arrays (numpy.ndarray). Using Theano it is possible to attain speeds rivaling hand-crafted C implementations for problems involving large amounts of data. It can also surpass C on a CPU by many orders of magnitude by taking advantage of recent GPUs.Theano combines aspects of a computer algebra system (CAS) with aspects of an optimizing compiler. It can also generate customized C code for many mathematical operations. This combination of CAS with optimizing compilation is particularly useful for tasks in which complicated mathematical expressions are evaluated repeatedly and evaluation speed is critical. For situations where many different expressions are each evaluated once Theano can minimize the amount of compilation/analysis overhead, but still provide symbolic features such as automatic differentiation.``` | def floatX(X):
# return np.asarray(X, dtype='float32')
return np.asarray(X, dtype=theano.config.floatX)
def init_weights(shape):
return theano.shared(floatX(np.random.randn(*shape) * 0.01))
def model(X, w):
return T.nnet.softmax(T.dot(X, w))
X = T.fmatrix()
Y = T.fmatrix()
w = init_weights((784, 10))
w.get_value() | _____no_output_____ | MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
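As an aside, the softmax used in the model can be sketched in plain NumPy, independently of Theano; subtracting the row-wise maximum is a standard numerical-stability trick that does not change the result:

```python
import numpy as np

def np_softmax(z):
    """Row-wise softmax of a 2-D score matrix."""
    z = z - z.max(axis=1, keepdims=True)  # stability: shifting each row is harmless
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

scores = np.array([[1.0, 2.0, 3.0],
                   [1.0, 1.0, 1.0]])
p = np_softmax(scores)
print(p.sum(axis=1))     # each row sums to 1
print(p.argmax(axis=1))  # predicted class per row
```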
initialize model | py_x = model(X, w)
y_pred = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(py_x, Y))
gradient = T.grad(cost=cost, wrt=w)
update = [[w, w - gradient * 0.05]]
train = theano.function(inputs=[X, Y], outputs=cost, updates=update, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_pred, allow_input_downcast=True) | _____no_output_____ | MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
One iteration | for start, end in zip(range(0, X_train.shape[0], 128), range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors = [(np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test)))]
errors | _____no_output_____ | MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
Now for 100 epochs | t0 = dt.datetime.now()
for i in range(100):
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test))))
print(i, errors[-1])
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend() | _____no_output_____ | MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
Checking the results | y_pred = predict(X_test)
np.random.seed(2)
small_test = np.random.choice(X_test.shape[0], 10)
for i in small_test:
view_image(X_test[i], label=y_test[i], predicted=y_pred[i], size=1) | _____no_output_____ | MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
Simple Neural Net: Add a hidden layer with a sigmoid activation function | def sgd(cost, params, lr=0.05):
grads = T.grad(cost=cost, wrt=params)
updates = []
for p, g in zip(params, grads):
updates.append([p, p - g * lr])
return updates
def model(X, w_h, w_o):
h = T.nnet.sigmoid(T.dot(X, w_h))
pyx = T.nnet.softmax(T.dot(h, w_o))
return pyx
w_h = init_weights((784, 625))
w_o = init_weights((625, 10))
py_x = model(X, w_h, w_o)
y_x = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(py_x, Y))
params = [w_h, w_o]
updates = sgd(cost, params)
train = theano.function(inputs=[X, Y], outputs=cost, updates=updates, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_x, allow_input_downcast=True)
t0 = dt.datetime.now()
errors = []
for i in range(100):
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test))))
print(i, errors[-1])
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend() | _____no_output_____ | MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
Complex Neural Net: Two hidden layers with dropout | from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
srng = RandomStreams()
def rectify(X):
return T.maximum(X, 0.) | _____no_output_____ | MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
Understanding rectifier units | def RMSprop(cost, params, lr=0.001, rho=0.9, epsilon=1e-6):
grads = T.grad(cost=cost, wrt=params)
updates = []
for p, g in zip(params, grads):
acc = theano.shared(p.get_value() * 0.)
acc_new = rho * acc + (1 - rho) * g ** 2
gradient_scaling = T.sqrt(acc_new + epsilon)
g = g / gradient_scaling
updates.append((acc, acc_new))
updates.append((p, p - lr * g))
return updates | _____no_output_____ | MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
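The same update rule can be sketched for a single NumPy array, outside Theano (a toy sketch mirroring the function above, not the training code used below):

```python
import numpy as np

def rmsprop_step(param, grad, acc, lr=0.001, rho=0.9, epsilon=1e-6):
    """One RMSprop update; returns (new_param, new_acc)."""
    acc = rho * acc + (1 - rho) * grad ** 2             # running average of squared gradients
    param = param - lr * grad / np.sqrt(acc + epsilon)  # per-parameter scaled step
    return param, acc

w = np.zeros(3)
acc = np.zeros(3)
g = np.array([1.0, -2.0, 0.5])
w, acc = rmsprop_step(w, g, acc)
print(w)  # every coordinate moves opposite its gradient, with roughly equal magnitude
```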
RMSprop: RMSprop is an unpublished, adaptive learning rate method proposed by Geoff Hinton in [Lecture 6e of his Coursera Class](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf). RMSprop and Adadelta have both been developed independently around the same time, stemming from the need to resolve Adagrad's radically diminishing learning rates. RMSprop is in fact identical to the first update vector of Adadelta that we derived above: $$ E[g^2]_t = 0.9 E[g^2]_{t-1} + 0.1 g^2_t. $$ $$\theta_{t+1} = \theta_{t} - \frac{\eta}{\sqrt{E[g^2]_t + \epsilon}} g_{t}.$$ RMSprop likewise divides the learning rate by an exponentially decaying average of squared gradients. Hinton suggests setting $\gamma$ to 0.9, while a good default value for the learning rate $\eta$ is 0.001. | def dropout(X, p=0.):
if p > 0:
retain_prob = 1 - p
X *= srng.binomial(X.shape, p=retain_prob, dtype=theano.config.floatX)
X /= retain_prob
return X
def model(X, w_h, w_h2, w_o, p_drop_input, p_drop_hidden):
X = dropout(X, p_drop_input)
h = rectify(T.dot(X, w_h))
h = dropout(h, p_drop_hidden)
h2 = rectify(T.dot(h, w_h2))
h2 = dropout(h2, p_drop_hidden)
py_x = softmax(T.dot(h2, w_o))
return h, h2, py_x
def softmax(X):
e_x = T.exp(X - X.max(axis=1).dimshuffle(0, 'x'))
return e_x / e_x.sum(axis=1).dimshuffle(0, 'x')
w_h = init_weights((784, 625))
w_h2 = init_weights((625, 625))
w_o = init_weights((625, 10))
noise_h, noise_h2, noise_py_x = model(X, w_h, w_h2, w_o, 0.2, 0.5)
h, h2, py_x = model(X, w_h, w_h2, w_o, 0., 0.)
y_x = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(noise_py_x, Y))
params = [w_h, w_h2, w_o]
updates = RMSprop(cost, params, lr=0.001)
train = theano.function(inputs=[X, Y], outputs=cost, updates=updates, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_x, allow_input_downcast=True)
t0 = dt.datetime.now()
errors = []
for i in range(100):
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test))))
print(i, errors[-1])
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend() | _____no_output_____ | MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
Convolutional Neural Network: In machine learning, a convolutional neural network (CNN, or ConvNet) is a type of feed-forward artificial neural network in which the connectivity pattern between its neurons is inspired by the organization of the animal visual cortex, whose individual neurons are arranged in such a way that they respond to overlapping regions tiling the visual field. Convolutional networks were inspired by biological processes and are variations of multilayer perceptrons designed to use minimal amounts of preprocessing. (Wikipedia)

Motivation: Convolutional Neural Networks (CNN) are biologically-inspired variants of MLPs. From Hubel and Wiesel's early work on the cat's visual cortex, we know the visual cortex contains a complex arrangement of cells. These cells are sensitive to small sub-regions of the visual field, called a *receptive field*. The sub-regions are tiled to cover the entire visual field. These cells act as local filters over the input space and are well-suited to exploit the strong spatially local correlation present in natural images. Additionally, two basic cell types have been identified: simple cells respond maximally to specific edge-like patterns within their receptive field; complex cells have larger receptive fields and are locally invariant to the exact position of the pattern. The animal visual cortex being the most powerful visual processing system in existence, it seems natural to emulate its behavior. Hence, many neurally-inspired models can be found in the literature.

Sparse Connectivity: CNNs exploit spatially-local correlation by enforcing a local connectivity pattern between neurons of adjacent layers. In other words, the inputs of hidden units in layer **m** are from a subset of units in layer **m-1**, units that have spatially contiguous receptive fields. We can illustrate this graphically as follows: imagine that layer **m-1** is the input retina.
In the above figure, units inlayer **m** have receptive fields of width 3 in the input retina and are thusonly connected to 3 adjacent neurons in the retina layer. Units in layer**m+1** have a similar connectivity with the layer below. We say that theirreceptive field with respect to the layer below is also 3, but their receptivefield with respect to the input is larger (5). Each unit is unresponsive tovariations outside of its receptive field with respect to the retina. Thearchitecture thus ensures that the learnt "filters" produce the strongestresponse to a spatially local input pattern.However, as shown above, stacking many such layers leads to (non-linear)"filters" that become increasingly "global" (i.e. responsive to a larger regionof pixel space). For example, the unit in hidden layer **m+1** can encode anon-linear feature of width 5 (in terms of pixel space). Shared WeightsIn addition, in CNNs, each filter $h_i$ is replicated across the entirevisual field. These replicated units share the same parameterization (weightvector and bias) and form a *feature map*.In the above figure, we show 3 hidden units belonging to the same feature map.Weights of the same color are shared---constrained to be identical. Gradientdescent can still be used to learn such shared parameters, with only a smallchange to the original algorithm. The gradient of a shared weight is simply thesum of the gradients of the parameters being shared.Replicating units in this way allows for features to be detected *regardlessof their position in the visual field.* Additionally, weight sharing increaseslearning efficiency by greatly reducing the number of free parameters beinglearnt. The constraints on the model enable CNNs to achieve bettergeneralization on vision problems. 
Details and NotationA feature map is obtained by repeated application of a function acrosssub-regions of the entire image, in other words, by *convolution* of theinput image with a linear filter, adding a bias term and then applying anon-linear function. If we denote the k-th feature map at a given layer as$h^k$, whose filters are determined by the weights $W^k$ and bias$b_k$, then the feature map $h^k$ is obtained as follows (for$tanh$ non-linearities): $$ h^k_{ij} = \tanh ( (W^k * x)_{ij} + b_k ).$$Note* Recall the following definition of convolution for a 1D signal.$$ o[n] = f[n]*g[n] = \sum_{u=-\infty}^{\infty} f[u] g[n-u] = \sum_{u=-\infty}^{\infty} f[n-u] g[u]`.$$* This can be extended to 2D as follows:$$o[m,n] = f[m,n]*g[m,n] = \sum_{u=-\infty}^{\infty} \sum_{v=-\infty}^{\infty} f[u,v] g[m-u,n-v]`.$$ To form a richer representation of the data, each hidden layer is composed of*multiple* feature maps, $\{h^{(k)}, k=0..K\}$. The weights $W$ ofa hidden layer can be represented in a 4D tensor containing elements for everycombination of destination feature map, source feature map, source verticalposition, and source horizontal position. The biases $b$ can berepresented as a vector containing one element for every destination featuremap. We illustrate this graphically as follows:**Figure 1**: example of a convolutional layerThe figure shows two layers of a CNN. **Layer m-1** contains four feature maps.**Hidden layer m** contains two feature maps ($h^0$ and $h^1$).Pixels (neuron outputs) in $h^0$ and $h^1$ (outlined as blue andred squares) are computed from pixels of layer (m-1) which fall within their2x2 receptive field in the layer below (shown as colored rectangles). Noticehow the receptive field spans all four input feature maps. The weights$W^0$ and $W^1$ of $h^0$ and $h^1$ are thus 3D weighttensors. 
The leading dimension indexes the input feature maps, while the othertwo refer to the pixel coordinates.Putting it all together, $W^{kl}_{ij}$ denotes the weight connectingeach pixel of the k-th feature map at layer m, with the pixel at coordinates(i,j) of the l-th feature map of layer (m-1). The Convolution OperatorConvOp is the main workhorse for implementing a convolutional layer in Theano.ConvOp is used by ``theano.tensor.signal.conv2d``, which takes two symbolic inputs:* a 4D tensor corresponding to a mini-batch of input images. The shape of the tensor is as follows: [mini-batch size, number of input feature maps, image height, image width].* a 4D tensor corresponding to the weight matrix $W$. The shape of the tensor is: [number of feature maps at layer m, number of feature maps at layer m-1, filter height, filter width] MaxPoolingAnother important concept of CNNs is *max-pooling,* which is a form ofnon-linear down-sampling. Max-pooling partitions the input image intoa set of non-overlapping rectangles and, for each such sub-region, outputs themaximum value.Max-pooling is useful in vision for two reasons: * By eliminating non-maximal values, it reduces computation for upper layers.* It provides a form of translation invariance. Imagine cascading a max-pooling layer with a convolutional layer. There are 8 directions in which one can translate the input image by a single pixel. If max-pooling is done over a 2x2 region, 3 out of these 8 possible configurations will produce exactly the same output at the convolutional layer. For max-pooling over a 3x3 window, this jumps to 5/8. Since it provides additional robustness to position, max-pooling is a "smart" way of reducing the dimensionality of intermediate representations.Max-pooling is done in Theano by way of``theano.tensor.signal.downsample.max_pool_2d``. 
This function takes as inputan N dimensional tensor (where N >= 2) and a downscaling factor and performsmax-pooling over the 2 trailing dimensions of the tensor. The Full Model: CovNetSparse, convolutional layers and max-pooling are at the heart of the LeNetfamily of models. While the exact details of the model will vary greatly,the figure below shows a graphical depiction of a LeNet model.The lower-layers are composed to alternating convolution and max-poolinglayers. The upper-layers however are fully-connected and correspond to atraditional MLP (hidden layer + logistic regression). The input to thefirst fully-connected layer is the set of all features maps at the layerbelow.From an implementation point of view, this means lower-layers operate on 4Dtensors. These are then flattened to a 2D matrix of rasterized feature maps,to be compatible with our previous MLP implementation. | # from theano.tensor.nnet.conv import conv2d
from theano.tensor.nnet import conv2d
from theano.tensor.signal.downsample import max_pool_2d  # deprecated location; newer Theano moved this to theano.tensor.signal.pool | /home/al/anaconda3/lib/python3.5/site-packages/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
"downsample module has been moved to the theano.tensor.signal.pool module.")
| MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
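As a concrete check of the convolution and max-pooling math described above, here is a minimal NumPy sketch (plain Python, not Theano) of a valid-mode 2D convolution and a non-overlapping 2x2 max-pool. The function names are illustrative, not part of any library.

```python
import numpy as np

def conv2d_valid(x, w):
    """True 2D convolution (kernel flipped), 'valid' mode."""
    kh, kw = w.shape
    wf = w[::-1, ::-1]                      # flipping turns correlation into convolution
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * wf)
    return out

def max_pool_2x2(x):
    """Non-overlapping 2x2 max-pooling over the two trailing dimensions."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
w = np.array([[0., 0.], [0., 1.]])          # a shifted delta kernel
print(conv2d_valid(x, w))                    # picks out x[:3, :3]
print(max_pool_2x2(x))                       # the four 2x2 block maxima
```

A feature map in the sense of the equation above would then be `np.tanh(conv2d_valid(x, w) + b)` for some scalar bias `b`.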
Modify dropout function | def model(X, w, w2, w3, w4, w_o, p_drop_conv, p_drop_hidden):
l1a = rectify(conv2d(X, w, border_mode='full'))
l1 = max_pool_2d(l1a, (2, 2))
l1 = dropout(l1, p_drop_conv)
l2a = rectify(conv2d(l1, w2))
l2 = max_pool_2d(l2a, (2, 2))
l2 = dropout(l2, p_drop_conv)
l3a = rectify(conv2d(l2, w3))
l3b = max_pool_2d(l3a, (2, 2))
# convert from 4tensor to normal matrix
l3 = T.flatten(l3b, outdim=2)
l3 = dropout(l3, p_drop_conv)
l4 = rectify(T.dot(l3, w4))
l4 = dropout(l4, p_drop_hidden)
pyx = softmax(T.dot(l4, w_o))
return l1, l2, l3, l4, pyx | _____no_output_____ | MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
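The `model` function above calls `rectify` and `dropout` helpers defined elsewhere in the notebook. As a hedged NumPy sketch of what an inverted-dropout helper typically looks like (the exact behavior of the notebook's version is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(X, p_drop):
    # Inverted dropout: zero each unit with probability p_drop, then rescale
    # the survivors by 1/(1 - p_drop) so the expected activation is unchanged.
    if p_drop <= 0.:
        return X
    retain = 1. - p_drop
    mask = rng.binomial(1, retain, size=X.shape)
    return X * mask / retain

X = np.ones((4, 4))
print(dropout(X, 0.5))   # entries are either 0.0 or 2.0
```

Because of the rescaling, no change is needed at test time — which is why the second call to `model` above passes dropout probabilities of 0.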
reshape into conv 4tensor (b, c, 0, 1) format | X_train2 = X_train.reshape(-1, 1, 28, 28)
X_test2 = X_test.reshape(-1, 1, 28, 28)
# now 4tensor for conv instead of matrix
X = T.ftensor4()
Y = T.fmatrix()
w = init_weights((32, 1, 3, 3))
w2 = init_weights((64, 32, 3, 3))
w3 = init_weights((128, 64, 3, 3))
w4 = init_weights((128 * 3 * 3, 625))
w_o = init_weights((625, 10))
noise_l1, noise_l2, noise_l3, noise_l4, noise_py_x = model(X, w, w2, w3, w4, w_o, 0.2, 0.5)
l1, l2, l3, l4, py_x = model(X, w, w2, w3, w4, w_o, 0., 0.)
y_x = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(noise_py_x, Y))
params = [w, w2, w3, w4, w_o]
updates = RMSprop(cost, params, lr=0.001)
train = theano.function(inputs=[X, Y], outputs=cost, updates=updates, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_x, allow_input_downcast=True)
t0 = dt.datetime.now()
errors = []
for i in range(100):
t1 = dt.datetime.now()
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train2[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train2)),
np.mean(y_test != predict(X_test2))))
print(i, errors[-1])
print('Current iter time: ', (dt.datetime.now()-t1).seconds / 60.)
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend() | _____no_output_____ | MIT | notebooks/14_Intro_DeepLearning.ipynb | Torroledo/ML_RiskManagement |
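The training cell above relies on an `RMSprop(cost, params, lr)` helper defined earlier in the notebook. Per parameter, the update it is assumed to perform looks like this NumPy sketch (the decay rate and epsilon are typical defaults, not taken from the source):

```python
import numpy as np

def rmsprop_step(param, grad, cache, lr=0.001, rho=0.9, eps=1e-6):
    # Keep a running average of squared gradients, then scale the step by
    # the root of that average so each parameter gets its own learning rate.
    cache = rho * cache + (1 - rho) * grad ** 2
    param = param - lr * grad / (np.sqrt(cache) + eps)
    return param, cache

p, c = rmsprop_step(param=0.0, grad=1.0, cache=0.0)
print(p, c)
```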
Predict batches of images | tf.compat.v1.enable_v2_behavior()
label = ['3_24+10', '3_24+30', '3_24+5', '3_24+60', '3_24+70', '3_24+90', '3_24+110', '3_24+20', '3_24+40', '3_24+50', '3_24+80', '1_12_1', '1_12_2', '1_13', '1_14', '1_19', '1_24', '1_26', '1_27', '3_21', '3_31', '3_33', '4_4_1', '4_4_2', '4_5_2', '4_5_4', '4_5_5', '4_8_5', '4_8_6', '5_17', '6_2+50', '6_2+70', '6_2+30', '6_2+40', '6_2+60', '6_2+80', '6_7', '7_1', '7_11', '7_13', '7_14', '7_2', '7_4', '7_7', '7_9', 'smoke', 'unknown', '1_11_1', '1_11_2', '1_15', '1_16', '1_18', '1_20_1', '1_22', '1_25', '1_28', '1_29', '1_30', '1_8', '2_3_1', '2_3_L', '2_3_R', '2_6', '2_7', '3_15', '3_17', '3_20', '3_25+70', '3_25+20', '3_25+30', '3_25+40', '3_25+50', '3_25+5', '3_25+60', '3_6', '4_1_6', '4_2_1', '4_2_2', '5_15_5', '6_3_1', '7_3', '7_6', '1_17', '3_16', '5_15_3', '5_20', '7_12', '1_31', '3_10', '3_19', '3_2', '3_5', '3_7', '3_9', '4_1_2_1', '4_1_3_1', '4_5_1', '4_5_6', '4_8_1', '4_8_2', '4_8_3', '5_1', '5_11_1', '5_12_1', '5_13_1', '5_13_2', '5_14_1', '5_14_2', '5_14_3', '5_2', '5_23_2', '5_24_2', '5_3', '5_4', '5_8', '7_5', '3_32', '7_18', '1_2', '1_33', '1_7', '2_4', '3_18_1', '3_18_2', '3_8', '4_1_2', '4_1_3', '5_14', '6_15_2', '6_15_3', '6_6', '6_8_1', '1_1', '1_20_2', '1_20_3', '1_21', '1_23', '1_5', '2_1', '2_2', '2_5', '3_1', '3_26', '3_27', '3_28', '3_29', '3_30', '4_1_1', '4_1_4', '4_1_5', '4_2_3', '4_3', '4_8_4', '5_16', '5_18', '5_19', '5_21', '5_22', '5_5', '5_6', '5_7_1', '5_7_2', '5_9', '6_15_1', '6_16', '6_4', '6_8_2', '6_8_3', '5_29', '5_31+10', '5_31+20', '5_31+30', '5_31+40', '5_31+5', '5_31+50', '5_32', '5_33', '1_6', '5_15_2+2', '5_15_2+1', '5_15_2+3', '5_15_2+5']
autoencoder = keras.models.load_model("../input/aaaaaaaaaa/autoencoder.h5") # load the pre-trained autoencoder model
model_1= keras.models.load_model("../input/aaaaaaaaaa/VGG19_2.h5")
model_2= keras.models.load_model("../input/aaaaaaaaaa/InceptionResNetV2_2.h5")
model_3 = keras.models.load_model('../input/aaaaaaaaaa/denset201_2.h5')
root_dir = '../input/aiijc-final-dcm/AIJ_2gis_data/'
def load_and_change_img(img):
img = image.img_to_array(img)
img = img/255.
result= autoencoder.predict(img[None])
new_arr = ((result - result.min()) * (1/(result.max() - result.min()) * 255)).astype('uint8')
img_new = np.zeros(shape=(80,80,3), dtype= np.int16)
img_new[..., 0] = new_arr[...,2]
img_new[...,1]=new_arr[...,1]
img_new[..., 2] = new_arr[...,0]
return img_new/255.
df = pd.read_csv("../input/aiijc-final-dcm/AIJ_2gis_data/sample_submission.csv")
df_a=df[0:100000]
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(preprocessing_function=load_and_change_img)
test_set =train_datagen.flow_from_dataframe(directory = root_dir,
dataframe=df_a,
x_col = 'filename',
y_col='label',
classes=None,
                                         class_mode=None,
shuffle=False,
batch_size=256,
target_size=(80,80))
outputs=[]
y_pred_1=model_1.predict(test_set, batch_size=256,verbose=1)
y_pred_2=model_2.predict(test_set, batch_size=256,verbose=1)
y_pred_3=model_3.predict(test_set, batch_size=256, verbose=1)
y_pred=y_pred_1*0.2 + y_pred_2*0.4 + y_pred_3*0.4
del y_pred_1
del y_pred_2
del y_pred_3
for i in range(len(np.argmax(y_pred, axis=1))):
outputs.append(label[np.argmax(y_pred[i], axis=0)])
df_new=pd.DataFrame({'filename': df_a['filename'], 'label': outputs})
df_new.to_csv('predict.csv', index=False) | _____no_output_____ | MIT | predict.ipynb | trancongthinh6304/trafficsignclassification |
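The line `y_pred = y_pred_1*0.2 + y_pred_2*0.4 + y_pred_3*0.4` above is weighted soft voting: blend each model's per-class probabilities (with weights summing to 1), then take the arg-max of the blend. A tiny NumPy illustration with made-up probabilities for one image and three classes:

```python
import numpy as np

# Three models' class probabilities for one image (toy numbers).
p1 = np.array([[0.7, 0.2, 0.1]])
p2 = np.array([[0.1, 0.8, 0.1]])
p3 = np.array([[0.2, 0.7, 0.1]])

blend = 0.2 * p1 + 0.4 * p2 + 0.4 * p3
print(blend)
print(np.argmax(blend, axis=1))   # [1]
```

Even though the first model votes for class 0, the higher-weighted second and third models pull the blended decision to class 1.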
Predict single image | model1= keras.models.load_model("../input/aaaaaaaaaa/VGG19_2.h5")
model2= keras.models.load_model("../input/aaaaaaaaaa/InceptionResNetV2_2.h5")
model3 = keras.models.load_model('../input/aaaaaaaaaa/denset201_2.h5')
def auto_encoder(img_path):
img = image.load_img(img_path, target_size=(80,80,3))
img = image.img_to_array(img)
img = img/255.
result= autoencoder.predict(img[None])
new_arr = ((result - result.min()) * (1/(result.max() - result.min()) * 255)).astype('uint8')
img_new = np.zeros(shape=(80,80,3), dtype=np.int16)
img_new[..., 0] = new_arr[...,2]
img_new[...,1]=new_arr[...,1]
img_new[..., 2] = new_arr[...,0]
return img_new/255.
labels=[]
img_path=""
def predict(img_path):
img = auto_encoder(img_path)
    y_pred1=model1.predict(np.expand_dims(img, axis=0))  # img is already scaled to [0, 1] by auto_encoder, matching the batch pipeline
    y_pred2=model2.predict(np.expand_dims(img, axis=0))
    y_pred3=model3.predict(np.expand_dims(img, axis=0))
y_pred=y_pred1*0.2 + y_pred2*0.4 + y_pred3*0.4
print(label[np.argmax(y_pred)]) | _____no_output_____ | MIT | predict.ipynb | trancongthinh6304/trafficsignclassification |
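Both predict paths above rescale the autoencoder output back to 8-bit pixel values with a min-max transform (before swapping the BGR channels to RGB). The rescale step in isolation, on toy numbers:

```python
import numpy as np

def to_uint8(arr):
    # Min-max rescale to [0, 255], as done after autoencoder.predict above.
    return ((arr - arr.min()) * (1 / (arr.max() - arr.min()) * 255)).astype('uint8')

x = np.array([[-1.0, 0.0, 1.0]])
print(to_uint8(x))   # [[  0 127 255]]
```

The minimum maps to 0, the maximum to 255, and the `uint8` cast truncates everything in between.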
ML Project 6033657523 - Feedforward neural network

Importing the libraries | from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error
import matplotlib.pyplot as plt | _____no_output_____ | MIT | ML Project Feedforward Neural Network 6033657523.ipynb | bellmcp/machine-learning-price-prediction |
Importing the cleaned dataset | dataset = pd.read_csv('cleanData_Final.csv')
X = dataset[['PrevAVGCost', 'PrevAssignedCost', 'AVGCost', 'LatestDateCost', 'A', 'B', 'C', 'D', 'E', 'F', 'G']]
y = dataset['GenPrice']
X | _____no_output_____ | MIT | ML Project Feedforward Neural Network 6033657523.ipynb | bellmcp/machine-learning-price-prediction |
Splitting the dataset into the Training set and Test set | X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0) | _____no_output_____ | MIT | ML Project Feedforward Neural Network 6033657523.ipynb | bellmcp/machine-learning-price-prediction |
Feedforward neural network

Fitting Feedforward neural network to the Training Set | from sklearn.neural_network import MLPRegressor
regressor = MLPRegressor(hidden_layer_sizes = (200, 200, 200, 200, 200), activation = 'relu', solver = 'adam', max_iter = 500, learning_rate = 'adaptive')
regressor.fit(X_train, y_train)
trainSet = pd.concat([X_train, y_train], axis = 1)
trainSet.head() | _____no_output_____ | MIT | ML Project Feedforward Neural Network 6033657523.ipynb | bellmcp/machine-learning-price-prediction |
Evaluate model accuracy | y_pred = regressor.predict(X_test)
y_pred
testSet = pd.concat([X_test, y_test], axis = 1)
testSet.head() | _____no_output_____ | MIT | ML Project Feedforward Neural Network 6033657523.ipynb | bellmcp/machine-learning-price-prediction |
Compare GenPrice with PredictedGenPrice | datasetPredict = pd.concat([testSet.reset_index(), pd.Series(y_pred, name = 'PredictedGenPrice')], axis = 1).round(2)
datasetPredict.head(10)
datasetPredict.corr()
print("Training set accuracy = " + str(regressor.score(X_train, y_train)))
print("Test set accuracy = " + str(regressor.score(X_test, y_test))) | Training set accuracy = 0.9898465392908009
Test set accuracy = 0.9841771850834575
| MIT | ML Project Feedforward Neural Network 6033657523.ipynb | bellmcp/machine-learning-price-prediction |
Training set accuracy = 0.9885445650077587

Test set accuracy = 0.9829187423043221

MSE | from sklearn import metrics
print('MSE:', metrics.mean_squared_error(y_test, y_pred)) | MSE: 160.2404730229541
| MIT | ML Project Feedforward Neural Network 6033657523.ipynb | bellmcp/machine-learning-price-prediction |
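As a sanity check on what `metrics.mean_squared_error` computes — the mean of the squared residuals — here it is by hand on toy numbers:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.5])
y_hat = np.array([2.5, 5.0, 4.0])

mse = np.mean((y_true - y_hat) ** 2)   # (0.25 + 0.0 + 2.25) / 3
print(mse)   # 0.8333333333333334
```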
MSE v1: 177.15763887557458

MSE v2: 165.73161615532584

MSE v3: 172.98494783761967

MAPE | def mean_absolute_percentage_error(y_test, y_pred):
y_test, y_pred = np.array(y_test), np.array(y_pred)
return np.mean(np.abs((y_test - y_pred)/y_test)) * 100
print('MAPE:', mean_absolute_percentage_error(y_test, y_pred)) | MAPE: 6.159884199380194
| MIT | ML Project Feedforward Neural Network 6033657523.ipynb | bellmcp/machine-learning-price-prediction |
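Note that the MAPE defined above divides by `y_test`, so it is undefined whenever a true value is exactly zero — fine here, since the drug prices are positive. A quick toy check of the formula (10% error on both points should give a MAPE of 10):

```python
import numpy as np

def mape(y_true, y_pred):
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

print(round(mape([100.0, 200.0], [110.0, 180.0]), 6))   # 10.0
```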
MAPE v1: 6.706572320387714

MAPE v2: 6.926678067146115

MAPE v3: 7.34081953098462

Visualize | import matplotlib.pyplot as plt
plt.plot([i for i in range(len(y_pred))], y_pred, color = 'r')
plt.scatter([i for i in range(len(y_pred))], y_test, color = 'b')
plt.ylabel('Price')
plt.xlabel('Index')
plt.legend(['Predict', 'True'], loc = 'best')
plt.show() | _____no_output_____ | MIT | ML Project Feedforward Neural Network 6033657523.ipynb | bellmcp/machine-learning-price-prediction |
Transfer Learning

Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets trained on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using [VGGNet](https://arxiv.org/pdf/1409.1556.pdf) trained on the [ImageNet dataset](http://www.image-net.org/) as a feature extractor. Below is a diagram of the VGGNet architecture.

VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images, then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes. You can read more about transfer learning from [the CS231n course notes](http://cs231n.github.io/transfer-learning/#tf).

Pretrained VGGNet

We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. This code is already included in the 'tensorflow_vgg' directory, so you don't have to clone it. This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained, so you only need to download the parameter file using the next cell. | from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!") | Parameter file already exists!
| MIT | transfer-learning/Transfer_Learning.ipynb | skagrawal/Deep-Learning-Udacity-ND |
Flower power

Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the [TensorFlow inception tutorial](https://www.tensorflow.org/tutorials/image_retraining). | import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close() | _____no_output_____ | MIT | transfer-learning/Transfer_Learning.ipynb | skagrawal/Deep-Learning-Udacity-ND |
ConvNet Codes

Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.

Here we're using the `vgg16` module from `tensorflow_vgg`. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from [the source code](https://github.com/machrisaa/tensorflow-vgg/blob/master/vgg16.py)):

```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')

self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')

self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')

self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')

self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')

self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```

So what we want are the values of the first fully connected layer, after being ReLUd (`self.relu6`).

To build the network, we use

```
with tf.Session() as sess:
    vgg = vgg16.Vgg16()
    input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
    with tf.name_scope("content_vgg"):
        vgg.build(input_)
```

This creates the `vgg` object, then builds the graph with `vgg.build(input_)`. Then to get the values from the layer,

```
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
```
 | import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)] | _____no_output_____ | MIT | transfer-learning/Transfer_Learning.ipynb | skagrawal/Deep-Learning-Udacity-ND |