code
stringlengths 2.5k
150k
| kind
stringclasses 1
value |
|---|---|
Центр непрерывного образования
# Программа «Python для автоматизации и анализа данных»
Неделя 3 - 1
*Ян Пиле, НИУ ВШЭ*
# Цикл for. Применение циклов к строкам, спискам, кортежам и словарям.
Циклы мы используем в тех случаях, когда нужно повторить что-нибудь n-ное количество раз. Например, у нас уже был цикл **While**
```
ss = {1,2,3}
ss.pop()
ss.pop()
i = 1
while i<=10:
print(i)
i+=1
```
Здесь мы проверяем условие *i <= 10* (оно выполнено, *i = 1*), заходим в цикл с *i = 1*, печатаем значение *i*, добавляем к нему 1 иииии... \
Снова проверяем условие *i <= 10* (оно выполнено, *i = 2*), заходим в цикл с *i = 2*, печатаем значение *i*, добавляем к нему 1 иииии... \
Делаем такие же действия, пока *i* не становится равным 11, тогда условие входа в цикл не выполняется, и цикл завершается
Как мы уже обсуждали, цикл *While* гипотетически может "уходить в бесконечность", если условие, которое проверяется на входе в цикл будет выполнено всегда, например **While True**. Такие зацикливания можно прерывать оператором **break**, НО, это надо использовать очень аккуратно
```
i = 1
while True:
print(i)
i+=1
if i==11:
break
```
### FOR
В Python цикл начинается с ключевого слова **for**, за которым следует произвольное имя переменной, которое будет хранить значения следующего объекта последовательности. Общий синтаксис **for...in** в python выглядит следующим образом:
**for** <переменная> **in** <последовательность>:
<действие>
**else:**
<действие>
Элементы “последовательности” перебираются один за другим “переменной” цикла; если быть точным, переменная указывает на элементы. Для каждого элемента выполняется “действие”.
<img src ="https://d33wubrfki0l68.cloudfront.net/09c51b2f33c74a58ae5ae12689b2c5441e6f6bb4/83a52/wp-content/uploads/2017/06/forloop.png" alt ="Test picture" style="width: 300px;"/>
Вот пример простейшего цикла **for**
```
languages = ["C", "C++", "Perl", "Python"]
for x in languages:
print(x)
```
Элементы “последовательности” перебираются один за другим “переменной” цикла; \
если быть точным, переменная указывает на элементы. Для каждого элемента выполняется “действие”.\
Здесь в роли "последовательности" у нас список
### Итерируемый объект
**Итерация** - это общий термин, который описывает процедуру взятия элементов чего-то по очереди.
В более общем смысле, это последовательность инструкций, которая повторяется определенное количество раз или до выполнения указанного условия.
**Итерируемый объект** (iterable) - это объект, который способен возвращать элементы по одному(не обязательно по порядку). Кроме того, это объект, из которого можно получить итератор.
Примеры итерируемых объектов:
* все последовательности: список, строка, кортеж
* словари и множества
* файлы
**Итератор** (iterator) - это объект, который возвращает свои элементы по одному за раз.
С точки зрения Python - это любой объект, у которого есть метод __next__. Этот метод возвращает следующий элемент, если он есть, или возвращает исключение **StopIteration**, когда элементы закончились.
Кроме того, итератор запоминает, на каком объекте он остановился в последнюю итерацию.
Сейчас сложновато: Наш цикл for проходит именно по итератору! Когда мы говорим:
for object in iterable:
do something
Мы, на самом деле, вызываем метод итерируемого объекта , который возвращает итератор.
Таким образом, создаем объект-итератор , по которому и бежит цикл for.
Для того чтобы все это увидеть, есть функция iter() В качестве аргумента ей передается итерируемый объект (словарь, список, лист и т.д.) , а она возвращает соответствующий итератор.
```
s = {1,2,3,4,5}
print(type(s))
print(type(iter(s)))
for i in iter(s):
print(i)
```
Посмотрим на встроенную функцию next(). Она должна отдавать следующий элемент итератора.
```
s = {1,2,3,4,5}
s_iter = iter(s)
print(next(s_iter))
print(next(s_iter))
print(next(s_iter))
```
Отлично! мы по одному научились перебирать элементы из итерируемого объекта. Стоит отдельно остановиться на том, что цикл **for**, в Python, устроен несколько иначе, чем в большинстве других языков. Он больше похож на **for...each**, или же **for...of**.
Например, для Javascript проход по списку с выводом на печать всех его элементов выглядит так:
```
%%js
let numbers = [10, 12, 15, 18, 20];
for (let i = 0; i < numbers.length; i += 1) {
console.log(numbers[i])
}
l = [1,2,3,4,5]
list(map(str,l))
```
Если же, мы перепишем цикл **for** с помощью цикла **while**, используя индексы, то работать такой подход будет только с последовательностями:
```
list_of_numbers = [1,2,3]
index = 0
while index < len(list_of_numbers):
print(list_of_numbers[index])
index += 1
```
А с итерируемыми объектами, последовательностями не являющимися, не будет (потому что в множестве к элементу по индексу не обращаются!):
```
set_of_numbers = {1,2,3}
index = 0
while index < len(set_of_numbers):
print(set_of_numbers[index])
index += 1
```
Ну если уж прям совсем никак без индексации, то к любому итерируемому объекту можно применить функцию enumerate(), \
которая, как следует из названия, коллекцию занумерует. Здесь мы наделали кортежей вида (индекс, элемент)
```
set_of_numbers = {1,2,3,4,5,6}
for i in enumerate(set_of_numbers):
print(i)
```
Чтобы выдавать это в человеческом виде, можно прямо после for сказать, что мы "итерируемся" по индексам и объектам. \
Выглядит это следующим образом:
```
set_of_numbers = [1,2,3]
for index, element in enumerate(set_of_numbers):
print(index, element)
```
### Немного умных слов об итераторах
**Протокол итератора**
Теперь формализуем протокол итератора целиком:
* Чтобы получить итератор мы должны передать функции iter итерируемый объект.
* Далее мы передаём итератор функции next.
* Когда элементы в итераторе закончились, порождается исключение StopIteration. (Пока представим себе исключения, как объект специального типа, который генерируется в момент ошибки или какого-то терминального события. Например, они появляются, когда мы пытаемся делить на ноль или когда что-то напутали с типами
**Особенности**:
* Любой объект, передаваемый функции iter без исключения TypeError — итерируемый объект.
* Любой объект, передаваемый функции next без исключения TypeError — итератор.
* Любой объект, передаваемый функции iter и возвращающий сам себя — итератор.
**Плюсы итераторов:**
Итераторы работают "лениво" (en. lazy). А это значит, что они не выполняют какой-либо работы, до тех пор, пока мы их об этом не попросим. А это классный функционал, потому что очень многие виды данных в память компьютера не помещаются, а "ленивый" итератор позволяет эти данные читать по кускам! Так, например, можно посчитать количество строк в текстовом файле на несколько гигабайт.
Таким образом, мы можем оптимизировать потребление ресурсов ОЗУ и CPU, а так же создавать бесконечные последовательности.
<img src ="https://files.realpython.com/media/t.ba63222d63f5.png" alt ="Test picture" style="width: 300px;"/>
На самом деле мы уже попробовали использовать цикл **for** на множествах и на списках. \
Теперь давайте систематически разберемся, как for используется с разными коллекциями
### Списки, строки, множества и кортежи
В общем, по индексированным последовательностям мы уже ходить умеем. В списках и кортежах это проход по элементам(подряд)\
а в строках это проход по буквам(в порядке следования).
```
x = 'Take a look around'
for i in x:
print(i)
# Здесь мы прошлись по элементам списка и посчитали их сумму
# В цикл вошли с total = 0 и на каждом элементе добавляли к total значение элемента
x = [1,2,3,4]
total = 0
for i in x:
total+=i
print(total)
# Здесь мы прошлись по элементам кортежа и посчитали сумму тех, которые делятся на 7 нацело
# В цикл вошли с total = 0 и на каждом элементе добавляли к total значение элементов, удовлетворяющих условию
x = (1,2,3,4,7,49,4,23,63,28,28)
total = 0
for i in x:
if i % 7 == 0:
total+=i
print(total)
# Здесь мы преобразовали кортеж из предыдущей ячейки в множество и посчитали сумму четных элементов
# В цикл вошли с total = 0 и на каждом элементе добавляли к total значение элементов, удовлетворяющих условию
x_set = set(x)
total = 0
for i in x_set:
if i % 2 == 0:
total+=i
print(total)
print(x_set)
```
### Словари
В случае словарей итерация (по умолчанию) происходит по ключам
```
d = {'foo': 1, 'bar': 2, 'baz': 3}
for k in d:
print(k)
```
Но по ключам можно вынимать и соответствующие значения
```
for k in d:
print(d[k])
```
Также можно напрямую указать, по чему мы итерируемся: по ключам, по значениям или по кортежам ключ-значение\
Помните методы **.values()** , **.keys()** и **.items()** ?
```
print(d)
for v in d.values():
print(v)
print(d)
for v in d.keys():
print(v)
print(d)
for v in d.items():
print(v)
```
А еще можно "распаковать" эти кортежи-ключ значения (примерно так же, как мы сделали для **enumerate**)
```
d = {'foo': 1, 'bar': 2, 'baz': 3}
for k, v in d.items():
print('k =', k, ', v =', v)
```
Перед тем, как начать решать какие-то задачи, остается упомянуть крайне полезную функцию **range()**. \
Простыми словами, **range()** позволяет вам генерировать ряд чисел в рамках заданного диапазона. В зависимости от того, как много аргументов вы передаете функции, вы можете решить, где этот ряд чисел начнется и закончится, а также насколько велика разница будет между двумя числами.
Есть три способа вызова **range()**:
* **range(стоп)** берет один аргумент
* **range(старт, стоп)** берет два аргумента
* **range(старт, стоп, шаг)** берет три аргумента
На деле **range()** возвращает "ленивый" итерируемый объект (Да, сейчас что-то сложно). Понимать надо следующее:\
* По range() можно итерироваться (значит это итерируемый объект)
* range() не держит все свои объекты в памяти, а достает их "по требованию" (прям как итератор!)
* Но есть и ряд отличий, который делает range() похожим на последовательности (списки, кортежи и строки)
```
# Это как раз первый случай (мы вывели все целые числа ДО трех)
for i in range(3):
print(i)
# Это второй случай (мы вывели все целые числа от 0 до 10 не включая правый конец)
for i in range(0, 10):
print(i)
# Ну а это третий случай (мы вывели все целые числа от 0 до 10 не включая правый конец с шагом 2)
# То есть нулевое, второе, четвертое и т.д.
for i in range(0, 10, 2):
print(i)
```
Шаг здесь может быть положительным или отрицательным числом, но не может быть нулем! Отрицательное число будет означать уменьшение аргумента, то есть:
```
for i in range(10, 0, -1):
print(i)
```
А теперь отличие от итераторов! У range можно обратиться к элементу или даже срезу (как в списках)
```
print(range(3)[1])
print(range(10)[2:5])
```
Немного истории: в Python 2 были функции **range** и **xrange**. Первая создавала список (прям настоящий список), а вторая - \
именно то, что теперь в Python 3 называется **range**
### Задача 1
Считайте с клавиатуры несколько чисел через пробел и выведите сумму их кубов
**Вход:** 1 2 3 \
**Выход:** 36
```
# Решение
numbers = map(int,input().split())
x = 0
for i in numbers:
x += i**2
print(x)
```
### Задача 2
Считайте с клавиатуры две последовательности чисел через пробел и выведите список уникальных общих элементов этих двух последовательностей. Сделать это можно с помощью вложенного цикла for и, например, множеств.
**Вход:**
Последовательность 1: 1 2 3
Последовательность 2: 2,2,4,7,4,3
**Выход:**
Общие элементы: [2,3]
Взяли и посчитали вложенными циклами
```
common = set()
list1 = list(map(int,input().split()))
list2 = list(map(int,input().split()))
for elem1 in list1:
for elem2 in list2:
if elem1 == elem2:
common.add(elem1)
break
print(common)
```
Но можно было и без этого. Решать можно было в несколько строк с использованием функционала множеств
```
set1 = set(map(int,input().split()))
set2 = set(map(int,input().split()))
set1.intersection(set2)
```
### Задача 3
Дан список, содержащий строки, целые числа и числа с плавающей точкой. Разбить его на три списка так, чтобы в одном остались только строки, в другом - только целые числа, а в третьем - только числа с плавающей точкой. Заметьте, что при проверке типов название типа пишется без кавычек, например **int**.
**Вход:**
Список 1: [1, 2, 5.6, 7.5, 'Boo', 1, 'RocknRoll']
**Выход:**
Список 1: [1, 2, 1]
Список 2: [5.6, 7.5]
Список 3: ['Boo', 'RocknRoll']
```
#Решение
list1 = [1, 2, 5.6, 7.5, 'Boo', 1, 'RocknRoll']
ints, floats, strings = [], [], []
for i in list1:
if type(i)==int:
ints.append(i)
elif type(i)==float:
floats.append(i)
else:
strings.append(i)
print(ints)
print(floats)
print(strings)
```
### Генераторы списков и списковые включения. aka List Comprehensions
Этот элемент языка считается его "визитной карточкой". Это своего рода метод быстро создать новый список, не применяя цикл for. Пусть мы, к примеру, хотим создать список с числами от 0 до 20
```
a = []
for i in range(20):
a.append(i)
a
```
Это же выражение можно записать с помощью спискового включения
```
a = [i for i in range(20)]
print(type(a))
print(a)
```
Что мы видим? Во-первых, на выходе такой конструкцими мы получили лист (конечно, это же СПИСКОВОЕ включение). А во-вторых, все написано в одну строчку и , кажется следует вот такой конструкции:
**new_list** = [**expression** for **member** in **iterable**]
1. **expression** какое либо вычисление, вызов метода или любое другое допустимое выражение, которое возвращает значение. В приведенном выше примере выражение i * i является квадратом значения члена.
2. **member** является объектом или значением в списке или итерируемым объекте (iterable). В приведенном выше примере значением элемента является i.
3. **iterable** список, множество, последовательность, генератор или любой другой объект, который может возвращать свои элементы по одному. В приведенном выше примере iterable является range(20).
Одним из основных преимуществ использования является то, что это единственный инструмент, который вы можете использовать в самых разных ситуациях. В дополнение к созданию стандартного списка, списки могут также использоваться для отображения и фильтрации. Вам не нужно использовать разные подходы для каждого сценария. Например, можно в раздел **expression** поставить функцию str(), которая превратит каждый элемент исходного списка в строку.
```
lst = [1,2,3,4,5,45,67,8,765,854,76]
x = [str(i) for i in lst]
x
```
Но и это еще не все. В списковое включение можно добавить какое нибудь условие (как мы это делали с **if**). Выглядеть это будет так:
new_list = [expression for member in iterable (if conditional)]
Разберем на примере:
```
lst = [1,2,3,4,5,45,67,8,765,854,76]
x = [i for i in lst if i%2 == 0] #Здесь я взял и включил в новый список только четные элементы
x
```
Более того - не зря в условии написано iterable, а не list. Значит можно попробовать проделать что-то подобное с любыми другими итерируемыми объектами. с кортежами все точно должно получиться:
```
# Предложение
sentence = '''The rocket, who was named Ted, came back
from Mars because he missed his friends.'''
# Гласные английского языка и пробел
vowels = 'aeiou '
# достанем в список все символы строки, которые не являются гласными и пробелом.
consonants = [i for i in sentence if i not in vowels]
consonants
```
А еще вот так можно было... Не зря же регулярные выражения проходили.
```
import re
re.findall(r'[^aeiou ]',sentence)
```
Мы уже поняли, что можно поместить условие в конец оператора для простой фильтрации, но что, если хочется изменить значение элемента вместо его фильтрации? В этом случае полезно поместить условное выражение в начале выражения. Выглядит это вот так:
new_list = [expression (if conditional) for member in iterable]
С помощью этого шаблона можно, например, использовать условную логику для выбора из нескольких возможных вариантов вывода. Допустим, у вас есть список цен, можно заменить отрицательные цены (это могут быть какие-то ошибки логирования) на 0 и оставить положительные значения без изменений:
```
original_prices = [1.25, -9.45, 10.22, 3.78, -5.92, 1.16]
prices = [i if i > 0 else 0 for i in original_prices]
prices
```
Здесь, наше выражение **i** содержит условный оператор, **if i> 0** else **0**. Это говорит Python выводить значение **i**, если число положительное, но менять **i** на **0**, если число отрицательное.
### Включения для множеств и словарей
Хотя **list comprehension** в Python является распространенным инструментом, вы также можете создавать множественные и словарные представления (**set and dictionary comprehensions**). **set comprehension** почти точно такое же, как представление списка. Разница лишь в том, что заданные значения обеспечивают, чтобы выходные данные не содержали дубликатов. Вы можете создать **set comprehension**, используя фигурные скобки вместо скобок:
```
quote = "life, uh, finds a way"
unique_vowels = {i for i in quote if i in 'aeiou'}
unique_vowels
```
Здесь мы вывели все уникальные гласные, которые встретились в строке
**Dictionary comprehensions** , по сути, работает так же, но с дополнительным требованием определения ключа. Ключ отделяется двоеточием.
```
squares = {i: i * i for i in range(10)}
squares
```
### Генераторы списков
По сути, это то же самое, что списковое включение, но только возвращает оно не сам список, а генератор.
```
type((i * i for i in range(10)))
```
Проверим:
```
x = (i * i for i in range(10))
next(x)
next(x)
```
Так-так, функция next работает.
```
x[4]
```
К элементам обращаться нельзя
```
x = (i * i for i in range(10))
while True:
print(next(x))
```
**StopIteration!** Опять что-то знакомое) Получается, что генератор, это , на самом деле, какой-то вид итератора. Так оно и есть. Генератор это итератор, который можно получить с помощью генераторного выражения, например, (i * i for i in range(10)) или с помощью функции-генератора (но об этом в следующей серии.
Так а зачем все это нужно-то? А вот возьмите, например, и посчитайте сумму квадратов первого миллиона чисел
```
%time
sum([i * i for i in range(1000000)])
%time
sum(i * i for i in range(1000000))
```
При использовании генератора время существенно меньше
Ура, теоретическая часть закончилась. Теперь можно порешать задачи!
|
github_jupyter
|
```
import numpy as np
import pandas as pd
import linearsolve as ls
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
```
# Class 14: Prescott's Real Business Cycle Model I
In this notebook, we'll consider a centralized version of the model from pages 11-17 in Edward Prescott's article "Theory Ahead of Business Cycle Measurement in the Fall 1986 of the Federal Reserve Bank of Minneapolis' *Quarterly Review* (link to article: https://www.minneapolisfed.org/research/qr/qr1042.pdf). The model is just like the RBC model that we studying in the previous lecture, except that now we include an endogenous labor supply.
## Prescott's RBC Model with Labor
The equilibrium conditions for Prescott's RBC model with labor are:
\begin{align}
\frac{1}{C_t} & = \beta E_t \left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha-1}L_{t+1}^{1-\alpha} +1-\delta }{C_{t+1}}\right]\\
\frac{\varphi}{1-L_t} & = \frac{(1-\alpha)A_tK_t^{\alpha}L_t^{-\alpha}}{C_t} \\
Y_t & = A_t K_t^{\alpha}L_t^{1-\alpha}\\
K_{t+1} & = I_t + (1-\delta) K_t\\
Y_t & = C_t + I_t\\
\log A_{t+1} & = \rho \log A_t + \epsilon_{t+1}
\end{align}
where $\epsilon_{t+1} \sim \mathcal{N}(0,\sigma^2)$.
The objective is use `linearsolve` to simulate impulse responses to a TFP shock using the following parameter values for the simulation:
| $$\rho$$ | $$\sigma$$ | $$\beta$$ | $$\varphi$$ | $$\alpha$$ | $$\delta $$ |
|----------|------------|-------------|-----------|------------|-------------|
| 0.75 | 0.006 | 0.99 | 1.7317 | 0.35 | 0.025 |
The value for $\beta$ implies a steady state (annualized) real interest rate of about 4 percent:
\begin{align}
4 \cdot \left(\beta^{-1} - 1\right) & \approx 0.04040
\end{align}
$\rho = 0.75$ and $\sigma = 0.006$ are consistent with the statistical properties of the cyclical component of TFP in the US. $\alpha$ is set so that, consistent with the long-run average of the US, the labor share of income is about 65 percent of GDP. The deprecation rate of capital is calibrated to be about 10 percent annually. Finally, $\varphi$ was chosen last to ensure that in the steady state households allocate about 33 percent of their available time to labor.
## Model Preparation
Before proceding, let's recast the model in the form required for `linearsolve`. Write the model with all variables moved to the left-hand side of the equations and dropping the expecations operator $E_t$ and the exogenous shock $\epsilon_{t+1}$:
\begin{align}
0 & = \beta\left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha-1}L_{t+1}^{1-\alpha} +1-\delta }{C_{t+1}}\right] - \frac{1}{C_t}\\
0 & = \frac{(1-\alpha)A_tK_t^{\alpha}L_t^{-\alpha}}{C_t} - \frac{\varphi}{1-L_t}\\
0 & = A_t K_t^{\alpha}L_t^{1-\alpha} - Y_t\\
0 & = I_t + (1-\delta) K_t - K_{t+1}\\
0 & = C_t + I_t - Y_t\\
0 & = \rho \log A_t - \log A_{t+1}
\end{align}
Remember, capital and TFP are called *state variables* because they're $t+1$ values are predetermined. Output, consumption, and investment are called a *costate* or *control* variables. Note that the model as 5 equations in 5 endogenous variables.
## Initialization, Approximation, and Solution
The next several cells initialize the model in `linearsolve` and then approximate and solve it.
```
# Create a variable called 'parameters' that stores the model parameter values in a Pandas Series
parameters = pd.Series(dtype=float)
parameters['rho'] = .75
parameters['beta'] = 0.99
parameters['phi'] = 1.7317
parameters['alpha'] = 0.35
parameters['delta'] = 0.025
# Print the model's parameters
print(parameters)
# Create a variable called 'sigma' that stores the value of sigma
sigma = 0.006
# Create variable called 'var_names' that stores the variable names in a list with state variables ordered first
var_names = ['a','k','y','c','i','l']
# Create variable called 'shock_names' that stores an exogenous shock name for each state variable.
shock_names = ['e_a','e_k']
# Define a function that evaluates the equilibrium conditions of the model solved for zero. PROVIDED
def equilibrium_equations(variables_forward,variables_current,parameters):
# Parameters. PROVIDED
p = parameters
# Current variables. PROVIDED
cur = variables_current
# Forward variables. PROVIDED
fwd = variables_forward
# Define variable to store MPK. Will make things easier later.
mpk = p.alpha*fwd.a*fwd.k**(p.alpha-1)*fwd.l**(1-p.alpha)
# Define variable to store MPL. Will make things easier later.
mpl = (1-p.alpha)*fwd.a*fwd.k**p.alpha*fwd.l**-p.alpha
# Euler equation
euler_equation = p.beta*(mpk+1-p.delta)/fwd.c - 1/cur.c
# Labor-labor choice
labor_leisure = mpl/cur.c - p.phi/(1-cur.l)
# Production function
production_function = cur.a*cur.k**p.alpha*cur.l**(1-p.alpha) - cur.y
# Capital evolution. PROVIDED
capital_evolution = cur.i + (1 - p.delta)*cur.k - fwd.k
# Market clearing. PROVIDED
market_clearing = cur.c+cur.i - cur.y
# Exogenous tfp. PROVIDED
tfp_process = p.rho*np.log(cur.a) - np.log(fwd.a)
# Stack equilibrium conditions into a numpy array
return np.array([
euler_equation,
labor_leisure,
production_function,
capital_evolution,
market_clearing,
tfp_process
])
```
Next, initialize the model using `ls.model` which takes the following required arguments:
* `equations`
* `n_states`
* `var_names`
* `shock_names`
* `parameters`
```
# Initialize the model into a variable named 'rbc_model'
rbc_model = ls.model(equations = equilibrium_equations,
n_states=2,
var_names=var_names,
shock_names=shock_names,
parameters=parameters)
# Compute the steady state numerically using .compute_ss() method of rbc_model
guess = [1,4,1,1,1,0.5]
rbc_model.compute_ss(guess)
# Print the computed steady state
print(rbc_model.ss)
# Find the log-linear approximation around the non-stochastic steady state and solve using .approximate_and_solve() method of rbc_model
rbc_model.approximate_and_solve()
```
## Impulse Responses
Compute a 26 period impulse responses of the model's variables to a 0.01 unit shock to TFP in period 5.
```
# Compute impulse responses
rbc_model.impulse(T=26,t0=5,shocks=[0.01,0])
# Print the first 10 rows of the computed impulse responses to the TFP shock
print(rbc_model.irs['e_a'].head(10))
```
Construct a $2\times3$ grid of plots of simulated TFP, output, labor, consumption, investment, and capital. Be sure to multiply simulated values by 100 so that vertical axis units are in "percent deviation from steady state."
```
# Create figure. PROVIDED
fig = plt.figure(figsize=(18,8))
# Create upper-left axis. PROVIDED
ax = fig.add_subplot(2,3,1)
ax.plot(rbc_model.irs['e_a']['a']*100,'b',lw=5,alpha=0.75)
ax.set_title('TFP')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-0.5,2])
ax.grid()
# Create upper-center axis. PROVIDED
ax = fig.add_subplot(2,3,2)
ax.plot(rbc_model.irs['e_a']['y']*100,'b',lw=5,alpha=0.75)
ax.set_title('Output')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-0.5,2])
ax.grid()
# Create upper-right axis. PROVIDED
ax = fig.add_subplot(2,3,3)
ax.plot(rbc_model.irs['e_a']['l']*100,'b',lw=5,alpha=0.75)
ax.set_title('Labor')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-0.5,2])
ax.grid()
# Create lower-left axis. PROVIDED
ax = fig.add_subplot(2,3,4)
ax.plot(rbc_model.irs['e_a']['c']*100,'b',lw=5,alpha=0.75)
ax.set_title('Consumption')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-0.1,0.4])
ax.grid()
# Create lower-center axis. PROVIDED
ax = fig.add_subplot(2,3,5)
ax.plot(rbc_model.irs['e_a']['i']*100,'b',lw=5,alpha=0.75)
ax.set_title('Investment')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-2,8])
ax.grid()
# Create lower-right axis. PROVIDED
ax = fig.add_subplot(2,3,6)
ax.plot(rbc_model.irs['e_a']['k']*100,'b',lw=5,alpha=0.75)
ax.set_title('Capital')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-0.2,0.8])
ax.grid()
fig.tight_layout()
```
|
github_jupyter
|
```
import pandas as pd
datafile = "Resources/purchase_data.csv"
purchase_data = pd.read_csv(datafile)
purchase_data.head()
# Player Count
player_count = purchase_data["SN"].count()
player = pd.DataFrame({"Total Players": [player_count]})
player
# Purchasing Analysis (Total)
unique_item = purchase_data["Item Name"].nunique()
avg_price = purchase_data["Price"].mean()
num_purchase = purchase_data["SN"].count()
total_rev = purchase_data["Price"].sum()
summary_df = pd.DataFrame({"Number of Unique Items": [unique_item],
"Average Price": [avg_price],
"Number of Purchases": [num_purchase],
"Total Revenue": [total_rev]})
summary_df["Average Price"] = summary_df["Average Price"].map("${:.2f}".format)
summary_df["Total Revenue"] = summary_df["Total Revenue"].map("${:,.2f}".format)
summary_df
# Gender Demographics
unique_players = purchase_data[["SN", "Gender"]].drop_duplicates()
gender_counts = unique_players["Gender"].value_counts()
gender_percent = gender_counts / unique_players["Gender"].count()
# Make gender counts a dataframe
gender_demo = pd.DataFrame(gender_counts)
# Format percentage of players
gender_demo["Percentage of Players"] = gender_percent * 100
gender_demo["Percentage of Players"] = gender_demo["Percentage of Players"].map("{0:.2f}%".format)
gender_demo
# Purchasing Analysis (Gender)
# List of all genders
genders = purchase_data["Gender"].unique()
# Make list of dataframes with each gender's data and lists for each calculation
gender_df = []
gender_purc_count = []
gender_avg_price = []
gender_purc_total = []
gender_avg_total = []
for n in range(len(genders)):
value = purchase_data.loc[(purchase_data["Gender"] == genders[n])]
gender_df.append(value)
# Purchase count
value = gender_df[n]["SN"].count()
gender_purc_count.append(value)
# Average purchase price
value = gender_df[n]["Price"].mean()
gender_avg_price.append(value)
# Total purchase value
value = gender_df[n]["Price"].sum()
gender_purc_total.append(value)
# Count total unique persons
unique = gender_df[n]["SN"].nunique()
# Calculate average purchase total per person by gender
avg_total = value / unique
gender_avg_total.append(avg_total)
# Summary dataframe
gender_analy_df = pd.DataFrame({"Gender":[genders[0], genders[1], genders[2]],
"Purchase Count":[gender_purc_count[0], gender_purc_count[1], gender_purc_count[2]],
"Average Purchase Price":[gender_avg_price[0], gender_avg_price[1], gender_avg_price[2]],
"Total Purchase Value":[gender_purc_total[0], gender_purc_total[1], gender_purc_total[2]],
"Avg Total Purchase per Person":[gender_avg_total[0], gender_avg_total[1], gender_avg_total[2]]
})
gender_analy_df["Average Purchase Price"] = gender_analy_df["Average Purchase Price"].map("${:.2f}".format)
gender_analy_df["Total Purchase Value"] = gender_analy_df["Total Purchase Value"].map("${:,.2f}".format)
gender_analy_df["Avg Total Purchase per Person"] = gender_analy_df["Avg Total Purchase per Person"].map("${:.2f}".format)
gender_analy_df = gender_analy_df.set_index(["Gender"])
gender_analy_df
# Age Demographics
# Lowest age: 7, highest age: 45
bins = [5, 9, 14, 19, 24, 29, 34, 39, 44, 49]
group_labels = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40-44", "45+"]
age_bin = pd.cut(purchase_data["Age"], bins, labels=group_labels, include_lowest=False)
age_count = pd.DataFrame(age_bin.value_counts(sort=False))
# Rename column
age_count = age_count.rename(columns ={"Age": "Total Count"})
age_count["Percentage of Players"] = (age_count["Total Count"] / player_count) * 100
age_count["Percentage of Players"] = age_count["Percentage of Players"].map("{0:.2f}%".format)
age_count
# Purchasing Analysis (Age)
age_analy = purchase_data.copy()
# Add column of age ranges to copy of original dataframe
age_analy["Age Ranges"] = pd.cut(age_analy["Age"], bins, labels=group_labels, include_lowest=False)
# Get total of unique values per age group
unique_group_age = age_analy[["SN", "Age Ranges"]].drop_duplicates()
unique_group_age = unique_group_age["Age Ranges"].value_counts(sort=False)
# Make a dataframe for the summary table, add column of unique value counts for each age group
age_group_sum = pd.DataFrame(unique_group_age)
age_group_sum = age_group_sum.rename(columns= {"Age Ranges" : "Unique Value Count"})
age_group_sum.index.name='Age Ranges'
# Group by age ranges for calculations
group_age = age_analy.groupby("Age Ranges")
# Add Purchase Count to summary table
age_group_total_count = group_age["Age Ranges"].count()
age_group_sum["Purchase Count"] = age_group_total_count
# Add Average Purchase Price to summary table
age_group_avg_purch = group_age["Price"].mean()
age_group_sum["Average Purchase Price"] = age_group_avg_purch
# Add Total Purchase Value to summary table
age_group_total = group_age["Price"].sum()
age_group_sum["Total Purchase Value"] = age_group_total
# Divide the "Total Purchase Value" and "Unique Value Count" columns to get Avg Total Purchase per Person
age_group_avg_total = age_group_sum["Total Purchase Value"] / age_group_sum["Unique Value Count"]
age_group_sum["Avg Total Purchase per Person"] = age_group_avg_total
# Remove unique value count column from dataframe
del age_group_sum["Unique Value Count"]
# Formatting
age_group_sum["Average Purchase Price"] = age_group_sum["Average Purchase Price"].map("${:.2f}".format)
age_group_sum["Total Purchase Value"] = age_group_sum["Total Purchase Value"].map("${:,.2f}".format)
age_group_sum["Avg Total Purchase per Person"] = age_group_sum["Avg Total Purchase per Person"].map("${:.2f}".format)
age_group_sum
# Top 5 Spenders
# Total Purchase Value
top_total_purc = purchase_data.groupby(["SN"]).sum().sort_values(by=["Price"], ascending = False)
top_total_purc = top_total_purc["Price"].head(5)
top_total_purc_df = pd.DataFrame(top_total_purc)
# Make lists for calculations
top_spender_count = []
avg_purc_price = []
for n in range(len(top_total_purc_df)):
# Find Purchase Count for top 5 spenders by counting number of appearances in original data
count = purchase_data.loc[purchase_data["SN"] == top_total_purc_df.index[n]]["SN"].count()
top_spender_count.append(count)
# Average Purchase Price
value = top_total_purc_df.iloc[n, 0] / count
avg_purc_price.append(value)
# Summary Data Frame
top_total_purc_df["Purchase Count"] = top_spender_count
top_total_purc_df["Average Purchase Price"] = avg_purc_price
top_total_purc_df = top_total_purc_df.rename(columns = {"Price" : "Total Purchase Value"})
# Reformat price columns
top_total_purc_df["Average Purchase Price"] = top_total_purc_df["Average Purchase Price"].map("${:.2f}".format)
top_total_purc_df["Total Purchase Value"] = top_total_purc_df["Total Purchase Value"].map("${:.2f}".format)
# Rearrange columns
top_total_purc_df = top_total_purc_df[["Purchase Count", "Average Purchase Price", "Total Purchase Value"]]
top_total_purc_df
# Top 5 Most Popular Items
item_df = purchase_data[["Item ID", "Item Name", "Price"]]
# Purchase Count
item_group_count = item_df.groupby(["Item ID", "Item Name"]).count().sort_values(by=["Item ID"])
item_group_count = item_group_count.rename(columns = {"Price" : "Purchase Count"})
#print(item_group_count.sort_values(by=["Purchase Count"], ascending = False).head(10))
# Total Purchase Value
item_group_sum = item_df.groupby(["Item ID", "Item Name"]).sum().sort_values(by=["Item ID"])
item_group_sum = item_group_sum.rename(columns = {"Price" : "Total Purchase Value"})
# Obtain Item Price by dropping duplicates
item_group_price = item_df.groupby(["Item ID", "Item Name"])["Price"].mean()
item_group_price = item_df[["Item ID", "Item Name", "Price"]].drop_duplicates()
# Merge three dataframes into one
merge_item_1 = pd.merge(item_group_count, item_group_sum, on = ["Item ID", "Item Name"])
merge_item = pd.merge(merge_item_1, item_group_price, on = ["Item ID", "Item Name"])
# Reset index
merge_item = merge_item.set_index(["Item ID", "Item Name"])
merge_item = merge_item.sort_values(by=["Purchase Count", "Total Purchase Value"], ascending = False)
# Column formatting
merge_item = merge_item.rename(columns = {"Price" : "Item Price"})
merge_item = merge_item[["Purchase Count", "Item Price", "Total Purchase Value"]]
# Save a copy for the next section before formatting data values
merge_item_df = merge_item.copy()
# Formatting
merge_item["Item Price"] = merge_item["Item Price"].map("${:.2f}".format)
merge_item["Total Purchase Value"] = merge_item["Total Purchase Value"].map("${:.2f}".format)
merge_item.head(5)
#item_group_price1
# Top 5 Most Profitable Items
merge_item_df = merge_item_df.sort_values(by=["Total Purchase Value"], ascending = False)
merge_item_df["Item Price"] = merge_item_df["Item Price"].map("${:.2f}".format)
merge_item_df["Total Purchase Value"] = merge_item_df["Total Purchase Value"].map("${:.2f}".format)
merge_item_df.head(5)
# Three Observable Trends
# (1) According to the age range data, players aged 20-24 purchase the most items overall, while those aged 35-39 spend the most per person.
# (2) Players who identify themselves as "Other/Non-Disclosed" or "Female" spend more per total purchase per person than males, $0.49 and $0.40 more respectively.
# (3) "Oathbreaker, Last Hope of the Breaking Storm" is the most popular and the most profitable item with a total purchase count of 12 and total purchase value of $50.76.
```
|
github_jupyter
|
# The thermodynamics of ideal solutions
*Authors: Enze Chen (University of California, Berkeley)*
This animation will show how the Gibbs free energy curves correspond to a lens phase diagram.
## Python imports
```
# General libraries
import io
import os
# Scientific computing libraries
import numpy as np
from scipy.misc import derivative
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.animation as animation
from PIL import Image
import cv2
from moviepy.editor import *
```
### Helper functions
```
# analytical function for the solid free energy curve
def curve_s(x, T, beta=0):
"""This function plots the Gibbs free energy curve for the solid solution.
Args:
x (numpy.ndarray): An array of atomic fractions of B.
T (float): The temperature in Kelvin.
beta (float): The interaction parameter in J/mol.
Returns:
G_s (numpy.ndarray): An array of Gibbs free energy values in kJ/mol.
"""
S_mix = -8.314 * (np.multiply(x, np.log(x)) + np.multiply(1 - x, np.log(1 - x)))
H_mix = beta * np.multiply(x, 1 - x)
G_s = -T * S_mix + H_mix
return G_s / 1000
# analytical function for the liquid free energy curve
def curve_l(x, T, beta=0):
"""This function plots the Gibbs free energy curve for the liquid solution.
Args:
x (numpy.ndarray): An array of atomic fractions of B.
T (float): The temperature in Kelvin.
beta (float): The interaction parameter in J/mol.
Returns:
G_l (numpy.ndarray): An array of Gibbs free energy values in kJ/mol.
"""
S_A, S_B = (52.7, 59.9)
T_A, T_B = (1890 + 273, 1205 + 273)
G_A = S_A * (T_A - T)
G_B = S_B * (T_B - T)
S_mix = -8.314 * (np.multiply(x, np.log(x)) + np.multiply(1 - x, np.log(1 - x)))
H_mix = beta * np.multiply(x, 1 - x)
G_l = x * G_B + (1 - x) * G_A - T * S_mix + H_mix
return G_l / 1000
# find the common tangent using intersections and line search
def common_tangent(x, y1, y2, T, beta=0):
"""This function calculates the common tangent of two convex curves.
Args:
x (numpy.ndarray): An array of atomic fractions of B.
y1 (numpy.ndarray): y values for curve 1.
y2 (numpy.ndarray): y values for curve 2.
T (float): The temperature in Kelvin.
beta (float): The interaction parameter for the solid solution.
Returns:
line (numpy.ndarray): y values for the common tangent.
idmin (int): Index of the x-coordinate of the first tangent point.
idmax (int): Index of the x-coordinate of the second tangent point.
"""
# Compute a derivative
dx = 1e-3
dy1 = derivative(func=curve_s, x0=x, dx=dx, args=(T, beta,))
# Make an initial guess at the minimum of curve 1
n = len(x)
idmin, idmax = (0, n)
idx = np.argmin(y1)
yp = y1[idx]
xp = x[idx]
dyp = dy1[idx]
# Construct the tangent line and count intersections with curve 2
line = dyp * x + yp - dyp * xp
diff = np.diff(np.sign(y2 - line))
nnz = np.count_nonzero(diff)
# They're the same curve. Used for finding miscibility gap.
# I'm assuming that the curve is symmetric
if np.linalg.norm(y1 - y2) < 1e-4:
idmin = np.argmin(y1[:int(n/2)])
idmax = np.argmin(y1[int(n/2):]) + int(n/2)
# If the tangent line intersects curve 2, shift tangent point to the left
elif nnz >= 1:
while nnz >= 1:
idx -= 1
# try-except to avoid an out-of-bounds error
try:
yp = y1[idx]
xp = x[idx]
dyp = dy1[idx]
line = dyp * x + yp - dyp * xp
diff = np.diff(np.sign(y2 - line))
nnz = np.count_nonzero(diff)
except:
break
if diff.any():
# Assign left and right indices of the tangent points
# Here we do it each time because once we miss, we can't go back
idmax = np.nonzero(diff)[0][0]
idmin = idx
# If the tangent line misses curve 2, shift tangent point to the right
elif nnz < 1:
while nnz < 1:
idx += 1
# try-except to avoid an out-of-bounds error
try:
yp = y1[idx]
xp = x[idx]
dyp = dy1[idx]
line = dyp * x + yp - dyp * xp
diff = np.diff(np.sign(y2 - line))
nnz = np.count_nonzero(diff)
except:
break
# Assign left and right indices of the tangent points
idmin = idx
idmax = np.nonzero(diff)[0][0]
# Return a tuple
return (line, idmin, idmax)
# plot the Gibbs free energy curves
def plot_Gx(T=1800, beta_s=0, beta_l=0):
"""This function is called by the widget to perform the plotting based on inputs.
Args:
T (float): The temperature in Kelvin.
beta_s (float): The interaction parameter for solids in J/mol.
beta_l (float): The interaction parameter for liquids in J/mol.
Returns:
None, but a pyplot is displayed.
"""
# For the given temperature, calculate the curves and common tangent
n = int(1e4)
xmin, xmax = (0.001, 0.999)
x = np.linspace(xmin, xmax, n)
y_s = curve_s(x, T, beta_s)
y_l = curve_l(x, T, beta_l)
line, idmin, idmax = common_tangent(x, y_s, y_l, T, beta_s)
# Mostly plot settings for visual appeal
plt.rcParams.update({'figure.figsize':(8,6), 'font.size':20, \
'lines.linewidth':4, 'axes.linewidth':2})
fig, ax = plt.subplots()
ymin, ymax = (-39, 19)
ax.plot(x, y_s, c='C0', label='solid')
ax.plot(x, y_l, c='C1', label='liquid')
if abs(idmin) < n and abs(idmax) < n:
ax.plot(x[idmin:idmax], line[idmin:idmax], c='k', lw=5, ls='-.')
ax.vlines(x=[x[idmin], x[idmax]], ymin=ymin, \
ymax=[line[idmin], line[idmax]], linestyles='dotted', linewidth=3)
ax.tick_params(top=True, right=True, direction='in', length=10, width=2)
ax.set_xlim(0, 1)
ax.set_ylim(ymin, ymax)
ax.set_xlabel(r'$x_{B}$')
ax.set_ylabel(r'$\Delta G$ (kJ/mol)')
ax.set_title('Gibbs free energy at T = {} K'.format(T), fontsize=18)
plt.legend()
plt.show()
```
## Animations using `FuncAnimation`
Finally!! VLC/Windows has buggy glitches, but the embedded HTML version looks fine.
Also, **extremely high quality and low memory footprint**!! 🎉
```
# Initialize quantities
n = int(1e4)
xmin, xmax = (0.001, 0.999)
x = np.linspace(xmin, xmax, n)
liquidus = []
solidus = []
Ts = np.arange(1300, 2301, 5)
# Plot settings
plt.rcParams.update({'figure.figsize':(7,9.5), 'font.size':16})
fig, ax = plt.subplots(nrows=2, ncols=1, sharex=True)
# Initialize plot settings
ymin, ymax = -39, 19
ax[0].set_xlim(0, 1)
ax[0].set_ylim(ymin, ymax)
ax[0].set_ylabel(r'$\Delta G$ (kJ/mol)', fontsize=22)
ax[0].set_title('Binary ideal solution\nFree energy vs. composition', fontsize=20)
ax[0].tick_params(axis='both', labelsize=20)
Tmin, Tmax = 1100, 2500
ax[1].set_xlabel(r'$x_{B}$', fontsize=22)
ax[1].set_ylabel(r'$T$ (K)', fontsize=22)
ax[1].set_ylim(Tmin, Tmax)
ax[1].set_title('Phase diagram', fontsize=20)
ax[1].tick_params(axis='both', labelsize=20)
# Initialize the lines
l1, = ax[0].plot([], [], c='C1', label='liquid')
l2, = ax[0].plot([], [], c='C0', label='solid')
l3, = ax[1].plot([], [], c='C1', label='liquidus')
l4, = ax[1].plot([], [], c='C0', label='solidus')
l5, = ax[1].plot([], [], c='gray', ls='dashed', lw=4, alpha=0.5, zorder=-5)
v3, = ax[0].plot([], [], c='k', ls='-.')
v1 = ax[0].vlines(x=[0], ymin=[0], ymax=[0], linestyles='dotted', linewidth=4, color='k')
v2 = ax[1].vlines(x=[0], ymin=[0], ymax=[0], linestyles='dotted', linewidth=4, color='k')
ax[0].legend(loc='upper right')
ax[1].legend(loc='upper right')
plt.tight_layout()
# This is needed to avoid an extra loop
def init():
l1.set_data([], [])
return l1,
# This does the enumeration
def animate(i):
global ymin, ymax, Tmax, liquidus, solidus, x, n, Ts, v1, v2
T = Ts[i]
if T % 100 == 0:
print(T)
y_s = curve_s(x, T)
y_l = curve_l(x, T)
line, idmin, idmax = common_tangent(x, y_s, y_l, T) # compute common tangent
if idmin == 0 or idmin == n-1 or idmax == 0 or idmax == n-1:
liquidus.append(None)
solidus.append(None)
else:
liquidus.append(x[idmax])
solidus.append(x[idmin])
# set the data to be updated each iteration
l1.set_data(x, y_l)
l2.set_data(x, y_s)
l3.set_data(liquidus, Ts[:np.where(Ts==T)[0][0]+1])
l4.set_data(solidus, Ts[:np.where(Ts==T)[0][0]+1])
l5.set_data([0, 1], [T, T])
ax[0].annotate(text=f'$T={T}$ K', xy=(0.70, -33), fontsize=20,
bbox=dict(fc='1.0', boxstyle='round'))
# handle the tangent points
if T == 2170:
v1.remove()
v2.remove()
if abs(idmin) < n and abs(idmax) < n and idmax != 0:
v1.remove()
v2.remove()
v3.set_data(x[idmin:idmax], line[idmin:idmax])
v1 = ax[0].vlines(x=[x[idmin], x[idmax]], ymin=ymin, \
ymax=[line[idmin], line[idmax]], linestyles='dotted', linewidth=4, colors=['C0', 'C1'])
v2 = ax[1].vlines(x=[x[idmin], x[idmax]], ymin=T, ymax=Tmax, linestyles='dotted', linewidth=4, colors=['C0', 'C1'])
# return the artists that get updated (for blitting)
return l1, l2, l3, l4, l5, v3, v2, v1
# Create animation object
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=len(Ts), interval=1000, blit=True, repeat=False)
# Save animation as MP4 (preferred)
# anim.save('C:/Users/Enze/Desktop/test_funcanim.mp4', fps=9, dpi=300, writer='ffmpeg')
# Save animation as GIF (file size MUCH larger!)
# anim.save('C:/Users/Enze/Desktop/test_funcanim.gif', fps=9, dpi=300, writer='pillow')
plt.show()
```
## Other (sub-par) methods that I've tried...
```
# Accumulate images in a list for post-processing
n = int(1e4)
xmin, xmax = (0.001, 0.999)
x = np.linspace(xmin, xmax, n)
liquidus = []
solidus = []
Ts = np.arange(1300, 1450, 10)
plt.rcParams.update({'figure.figsize':(7,9)})
fig, ax = plt.subplots(nrows=2, ncols=1, sharex=True)
fig.tight_layout()
ymin, ymax = -39, 19
ax[0].set_xlim(0, 1)
ax[0].set_ylim(ymin, ymax)
ax[0].set_ylabel(r'$\Delta G$ (kJ/mol)')
Tmin, Tmax = 1100, 2500
ax[1].set_xlabel(r'$x_{B}$')
ax[1].set_ylabel(r'$T$ (K)')
ax[1].set_ylim(Tmin, Tmax)
images = []
for i,T in enumerate(Ts):
if T % 100 == 0:
print(T)
y_s = curve_s(x, T)
y_l = curve_l(x, T)
line, idmin, idmax = common_tangent(x, y_s, y_l, T)
if idmin == 0 or idmin == n-1 or idmax == 0 or idmax == n-1:
liquidus.append(None)
solidus.append(None)
else:
liquidus.append(x[idmax])
solidus.append(x[idmin])
ax[0].plot(x, y_s, c='C0', label='solid')
ax[0].plot(x, y_l, c='C1', label='liquid')
if abs(idmin) < n and abs(idmax) < n and idmax != 0:
ax[0].plot(x[idmin:idmax], line[idmin:idmax], c='k', ls='-.')
v1 = ax[0].vlines(x=[x[idmin], x[idmax]], ymin=ymin, \
ymax=[line[idmin], line[idmax]], linestyles='dotted', linewidth=4, color='k')
v2 = ax[1].vlines(x=[x[idmin], x[idmax]], ymin=T, ymax=Tmax, linestyles='dotted', linewidth=4, color='k')
ax[0].legend(loc='upper right')
ax[1].plot(liquidus, Ts[:i+1], c='C1', label='liquidus')
ax[1].plot(solidus, Ts[:i+1], c='C0', label='solidus')
ax[1].plot([0, 1], [T, T], c='gray', ls='dashed', lw=4, alpha=0.5, zorder=-5)
ax[1].annotate(text=f'$T={T}$ K', xy=(0.7, 2320), fontsize=24,
bbox=dict(fc='1.0', boxstyle='round'))
# fig.savefig(f'C:/Users/Enze/Desktop/plots/fig_{T:4d}')
# Convert to PIL image for GIF
buf = io.BytesIO()
fig.savefig(buf)
buf.seek(0)
images.append(Image.open(buf))
while len(ax[0].lines) > 0:
ax[0].lines.remove(ax[0].lines[0])
while len(ax[1].lines) > 0:
ax[1].lines.remove(ax[1].lines[0])
if abs(idmin) < n and abs(idmax) < n and idmax != 0:
v1.remove()
v2.remove()
# Make a GIF by converting from PIL Image
make_gif = True
if make_gif: # Quality is pretty good!!
images[0].save('C:/Users/Enze/Desktop/test_PIL3.gif', save_all=True, append_images=images[1:], optimize=False, duration=200, loop=0)
print('Finished making GIF')
```
### Convert PIL images to mp4 using [OpenCV](https://docs.opencv.org/master/d6/d00/tutorial_py_root.html)
OK, this works!
Quality could be improved... this is where FuncAnimation native support would probably be better.
```
# This movie is very large in size!!
opencv_images = [cv2.cvtColor(np.array(i), cv2.COLOR_RGB2BGR) for i in images]
height, width, channels = opencv_images[0].shape
fourcc = cv2.VideoWriter_fourcc(*'MP4V') # can also be 'MJPG' or 'MP4V'
video = cv2.VideoWriter(filename='C:/Users/Enze/Desktop/test_opencv.mp4',
fourcc=fourcc, fps=6, frameSize=(width, height))
for i in opencv_images:
video.write(i)
cv2.destroyAllWindows()
video.release()
```
### Convert figure files using [`moviepy`](https://moviepy.readthedocs.io/en/latest/index.html)
Quality seems a little worse than OpenCV.
Also takes a longggg time lol, but the file size is very small!
```
datadir = 'C:/Users/Enze/Desktop/plots/'
clips = [ImageClip(os.path.join(datadir, m)).set_duration(0.2) for m in os.listdir(datadir)]
concat = concatenate_videoclips(clips, method='compose')
concat.write_videofile('C:/Users/Enze/Desktop/test_moviepy.mp4', fps=10)
```
|
github_jupyter
|
# Character level language model - Dinosaurus Island
Welcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely!
<table>
<td>
<img src="images/dino.jpg" style="width:250;height:300px;">
</td>
</table>
Luckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this [dataset](dinos.txt). (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath!
By completing this assignment you will learn:
- How to store text data for processing using an RNN
- How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit
- How to build a character-level text generation recurrent neural network
- Why clipping the gradients is important
We will begin by loading in some functions that we have provided for you in `rnn_utils`. Specifically, you have access to functions such as `rnn_forward` and `rnn_backward` which are equivalent to those you've implemented in the previous assignment.
## <font color='darkblue'>Updates</font>
#### If you were working on the notebook before this update...
* The current notebook is version "3b".
* You can find your original work saved in the notebook with the previous version name ("v3a")
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
#### List of updates 3b
- removed redundant numpy import
* `clip`
- change test code to use variable name 'mvalue' rather than 'maxvalue' and deleted it from namespace to avoid confusion.
* `optimize`
- removed redundant description of clip function to discourage use of using 'maxvalue' which is not an argument to optimize
* `model`
- added 'verbose mode to print X,Y to aid in creating that code.
- wordsmith instructions to prevent confusion
- 2000 examples vs 100, 7 displayed vs 10
- no randomization of order
* `sample`
- removed comments regarding potential different sample outputs to reduce confusion.
```
import numpy as np
from utils import *
import random
import pprint
```
## 1 - Problem Statement
### 1.1 - Dataset and Preprocessing
Run the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
```
data = open('dinos.txt', 'r').read()
data= data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
```
* The characters are a-z (26 characters) plus the "\n" (or newline character).
* In this assignment, the newline character "\n" plays a role similar to the `<EOS>` (or "End of sentence") token we had discussed in lecture.
- Here, "\n" indicates the end of the dinosaur name rather than the end of a sentence.
* `char_to_ix`: In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26.
* `ix_to_char`: We also create a second python dictionary that maps each index back to the corresponding character.
- This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer.
```
chars = sorted(chars)
print(chars)
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(ix_to_char)
```
### 1.2 - Overview of the model
Your model will have the following structure:
- Initialize parameters
- Run the optimization loop
- Forward propagation to compute the loss function
- Backward propagation to compute the gradients with respect to the loss function
- Clip the gradients to avoid exploding gradients
- Using the gradients, update your parameters with the gradient descent update rule.
- Return the learned parameters
<img src="images/rnn.png" style="width:450;height:300px;">
<caption><center> **Figure 1**: Recurrent Neural Network, similar to what you had built in the previous notebook "Building a Recurrent Neural Network - Step by Step". </center></caption>
* At each time-step, the RNN tries to predict what is the next character given the previous characters.
* The dataset $\mathbf{X} = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is a list of characters in the training set.
* $\mathbf{Y} = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$ is the same list of characters but shifted one character forward.
* At every time-step $t$, $y^{\langle t \rangle} = x^{\langle t+1 \rangle}$. The prediction at time $t$ is the same as the input at time $t + 1$.
## 2 - Building blocks of the model
In this part, you will build two important blocks of the overall model:
- Gradient clipping: to avoid exploding gradients
- Sampling: a technique used to generate characters
You will then apply these two functions to build the model.
### 2.1 - Clipping the gradients in the optimization loop
In this section you will implement the `clip` function that you will call inside of your optimization loop.
#### Exploding gradients
* When gradients are very large, they're called "exploding gradients."
* Exploding gradients make the training process more difficult, because the updates may be so large that they "overshoot" the optimal values during back propagation.
Recall that your overall loop structure usually consists of:
* forward pass,
* cost computation,
* backward pass,
* parameter update.
Before updating the parameters, you will perform gradient clipping to make sure that your gradients are not "exploding."
#### gradient clipping
In the exercise below, you will implement a function `clip` that takes in a dictionary of gradients and returns a clipped version of gradients if needed.
* There are different ways to clip gradients.
* We will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N].
* For example, if the N=10
- The range is [-10, 10]
- If any component of the gradient vector is greater than 10, it is set to 10.
- If any component of the gradient vector is less than -10, it is set to -10.
- If any components are between -10 and 10, they keep their original values.
<img src="images/clip.png" style="width:400;height:150px;">
<caption><center> **Figure 2**: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into "exploding gradient" problems. </center></caption>
**Exercise**:
Implement the function below to return the clipped gradients of your dictionary `gradients`.
* Your function takes in a maximum threshold and returns the clipped versions of the gradients.
* You can check out [numpy.clip](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html).
- You will need to use the argument "`out = ...`".
- Using the "`out`" parameter allows you to update a variable "in-place".
- If you don't use "`out`" argument, the clipped variable is stored in the variable "gradient" but does not update the gradient variables `dWax`, `dWaa`, `dWya`, `db`, `dby`.
```
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
'''
Clips the gradients' values between minimum and maximum.
Arguments:
gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
Returns:
gradients -- a dictionary with the clipped gradients.
'''
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
### START CODE HERE ###
# clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)
for gradient in [dWaa, dWax, dWya, db, dby]:
gradient = np.clip(gradient,-maxValue,maxValue,out = gradient)
### END CODE HERE ###
gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
return gradients
# Test with a maxvalue of 10
mValue = 10
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, mValue)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
```
** Expected output:**
```Python
gradients["dWaa"][1][2] = 10.0
gradients["dWax"][3][1] = -10.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 10.]
gradients["dby"][1] = [ 8.45833407]
```
```
# Test with a maxValue of 5
mValue = 5
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, mValue)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
del mValue # avoid common issue
```
** Expected Output: **
```Python
gradients["dWaa"][1][2] = 5.0
gradients["dWax"][3][1] = -5.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 5.]
gradients["dby"][1] = [ 5.]
```
### 2.2 - Sampling
Now assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below:
<img src="images/dinos3.png" style="width:500;height:300px;">
<caption><center> **Figure 3**: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network sample one character at a time. </center></caption>
**Exercise**: Implement the `sample` function below to sample characters. You need to carry out 4 steps:
- **Step 1**: Input the "dummy" vector of zeros $x^{\langle 1 \rangle} = \vec{0}$.
- This is the default input before we've generated any characters.
We also set $a^{\langle 0 \rangle} = \vec{0}$
- **Step 2**: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. Here are the equations:
hidden state:
$$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t+1 \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$
activation:
$$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$
prediction:
$$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$
- Details about $\hat{y}^{\langle t+1 \rangle }$:
- Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1).
- $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character.
- We have provided a `softmax()` function that you can use.
#### Additional Hints
- $x^{\langle 1 \rangle}$ is `x` in the code. When creating the one-hot vector, make a numpy array of zeros, with the number of rows equal to the number of unique characters, and the number of columns equal to one. It's a 2D and not a 1D array.
- $a^{\langle 0 \rangle}$ is `a_prev` in the code. It is a numpy array of zeros, where the number of rows is $n_{a}$, and number of columns is 1. It is a 2D array as well. $n_{a}$ is retrieved by getting the number of columns in $W_{aa}$ (the numbers need to match in order for the matrix multiplication $W_{aa}a^{\langle t \rangle}$ to work.
- [numpy.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html)
- [numpy.tanh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tanh.html)
#### Using 2D arrays instead of 1D arrays
* You may be wondering why we emphasize that $x^{\langle 1 \rangle}$ and $a^{\langle 0 \rangle}$ are 2D arrays and not 1D vectors.
* For matrix multiplication in numpy, if we multiply a 2D matrix with a 1D vector, we end up with with a 1D array.
* This becomes a problem when we add two arrays where we expected them to have the same shape.
* When two arrays with a different number of dimensions are added together, Python "broadcasts" one across the other.
* Here is some sample code that shows the difference between using a 1D and 2D array.
```
matrix1 = np.array([[1,1],[2,2],[3,3]]) # (3,2)
matrix2 = np.array([[0],[0],[0]]) # (3,1)
vector1D = np.array([1,1]) # (2,)
vector2D = np.array([[1],[1]]) # (2,1)
print("matrix1 \n", matrix1,"\n")
print("matrix2 \n", matrix2,"\n")
print("vector1D \n", vector1D,"\n")
print("vector2D \n", vector2D)
print("Multiply 2D and 1D arrays: result is a 1D array\n",
np.dot(matrix1,vector1D))
print("Multiply 2D and 2D arrays: result is a 2D array\n",
np.dot(matrix1,vector2D))
print("Adding (3 x 1) vector to a (3 x 1) vector is a (3 x 1) vector\n",
"This is what we want here!\n",
np.dot(matrix1,vector2D) + matrix2)
print("Adding a (3,) vector to a (3 x 1) vector\n",
"broadcasts the 1D array across the second dimension\n",
"Not what we want here!\n",
np.dot(matrix1,vector1D) + matrix2
)
```
- **Step 3**: Sampling:
- Now that we have $y^{\langle t+1 \rangle}$, we want to select the next letter in the dinosaur name. If we select the most probable, the model will always generate the same result given a starting letter. To make the results more interesting, we will use np.random.choice to select a next letter that is *likely*, but not always the same.
- Pick the next character's **index** according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$.
- This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability.
- Use [np.random.choice](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html).
Example of how to use `np.random.choice()`:
```python
np.random.seed(0)
probs = np.array([0.1, 0.0, 0.7, 0.2])
idx = np.random.choice(range(len((probs)), p = probs)
```
- This means that you will pick the index (`idx`) according to the distribution:
$P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.
- Note that the value that's set to `p` should be set to a 1D vector.
- Also notice that $\hat{y}^{\langle t+1 \rangle}$, which is `y` in the code, is a 2D array.
- Also notice, while in your implementation, the first argument to np.random.choice is just an ordered list [0,1,.., vocab_len-1], it is *Not* appropriate to use char_to_ix.values(). The *order* of values returned by a python dictionary .values() call will be the same order as they are added to the dictionary. The grader may have a different order when it runs your routine than when you run it in your notebook.
##### Additional Hints
- [range](https://docs.python.org/3/library/functions.html#func-range)
- [numpy.ravel](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html) takes a multi-dimensional array and returns its contents inside of a 1D vector.
```Python
arr = np.array([[1,2],[3,4]])
print("arr")
print(arr)
print("arr.ravel()")
print(arr.ravel())
```
Output:
```Python
arr
[[1 2]
[3 4]]
arr.ravel()
[1 2 3 4]
```
- Note that `append` is an "in-place" operation. In other words, don't do this:
```Python
fun_hobbies = fun_hobbies.append('learning') ## Doesn't give you what you want
```
- **Step 4**: Update to $x^{\langle t \rangle }$
- The last step to implement in `sample()` is to update the variable `x`, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$.
- You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character that you have chosen as your prediction.
- You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating that you have reached the end of the dinosaur name.
##### Additional Hints
- In order to reset `x` before setting it to the new one-hot vector, you'll want to set all the values to zero.
- You can either create a new numpy array: [numpy.zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html)
- Or fill all values with a single number: [numpy.ndarray.fill](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.fill.html)
```
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
"""
Sample a sequence of characters according to a sequence of probability distributions output of the RNN
Arguments:
parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
char_to_ix -- python dictionary mapping each character to an index.
seed -- used for grading purposes. Do not worry about it.
Returns:
indices -- a list of length n containing the indices of the sampled characters.
"""
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
# Step 1: Create the a zero vector x that can be used as the one-hot vector
# representing the first character (initializing the sequence generation). (≈1 line)
x = np.zeros(( vocab_size, 1))
# Step 1': Initialize a_prev as zeros (≈1 line)
a_prev = np.zeros(( n_a, 1))
# Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)
indices = []
# idx is the index of the one-hot vector x that is set to 1
# All other positions in x are zero.
# We will initialize idx to -1
idx = -1
# Loop over time-steps t. At each time-step:
# sample a character from a probability distribution
# and append its index (`idx`) to the list "indices".
# We'll stop if we reach 50 characters
# (which should be very unlikely with a well trained model).
# Setting the maximum number of characters helps with debugging and prevents infinite loops.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
a = np.tanh( np.dot(Waa, a_prev) + np.dot(Wax, x) + b)
z = np.dot(Wya, a) + by
y = softmax(z)
# for grading purposes
np.random.seed(counter+seed)
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
# (see additional hints above)
idx = np.random.choice(range(vocab_size), p = y.ravel())
# Append the index to "indices"
indices.append(idx)
# Step 4: Overwrite the input x with one that corresponds to the sampled index `idx`.
# (see additional hints above)
x = np.zeros(( vocab_size, 1))
x[idx,:] = 1
# Update "a_prev" to be "a"
a_prev = a
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices
np.random.seed(2)
_, n_a = 20, 100
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:\n", indices)
print("list of sampled characters:\n", [ix_to_char[i] for i in indices])
```
** Expected output:**
```Python
Sampling:
list of sampled indices:
[12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 17, 24, 12, 13, 24, 0]
list of sampled characters:
['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'q', 'x', 'l', 'm', 'x', '\n']
```
## 3 - Building the language model
It is time to build the character-level language model for text generation.
### 3.1 - Gradient descent
* In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients).
* You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent.
As a reminder, here are the steps of a common optimization loop for an RNN:
- Forward propagate through the RNN to compute the loss
- Backward propagate through time to compute the gradients of the loss with respect to the parameters
- Clip the gradients
- Update the parameters using gradient descent
**Exercise**: Implement the optimization process (one step of stochastic gradient descent).
The following functions are provided:
```python
def rnn_forward(X, Y, a_prev, parameters):
""" Performs the forward propagation through the RNN and computes the cross-entropy loss.
It returns the loss' value as well as a "cache" storing values to be used in backpropagation."""
....
return loss, cache
def rnn_backward(X, Y, parameters, cache):
""" Performs the backward propagation through time to compute the gradients of the loss with respect
to the parameters. It returns also all the hidden states."""
...
return gradients, a
def update_parameters(parameters, gradients, learning_rate):
""" Updates parameters using the Gradient Descent Update Rule."""
...
return parameters
```
Recall that you previously implemented the `clip` function:
#### parameters
* Note that the weights and biases inside the `parameters` dictionary are being updated by the optimization, even though `parameters` is not one of the returned values of the `optimize` function. The `parameters` dictionary is passed by reference into the function, so changes to this dictionary are making changes to the `parameters` dictionary even when accessed outside of the function.
* Python dictionaries and lists are "pass by reference", which means that if you pass a dictionary into a function and modify the dictionary within the function, this changes that same dictionary (it's not a copy of the dictionary).
```
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
"""
Execute one step of the optimization to train the model.
Arguments:
X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
Y -- list of integers, exactly the same as X but shifted one index to the left.
a_prev -- previous hidden state.
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
b -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
learning_rate -- learning rate for the model.
Returns:
loss -- value of the loss function (cross-entropy)
gradients -- python dictionary containing:
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
db -- Gradients of bias vector, of shape (n_a, 1)
dby -- Gradients of output bias vector, of shape (n_y, 1)
a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
"""
### START CODE HERE ###
# Forward propagate through time (≈1 line)
loss, cache = rnn_forward(X, Y, a_prev, parameters)
# Backpropagate through time (≈1 line)
gradients, a = rnn_backward(X, Y, parameters, cache)
# Clip your gradients between -5 (min) and 5 (max) (≈1 line)
gradients = clip(gradients, 5)
# Update parameters (≈1 line)
parameters = update_parameters(parameters, gradients, learning_rate)
### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
```
** Expected output:**
```Python
Loss = 126.503975722
gradients["dWaa"][1][2] = 0.194709315347
np.argmax(gradients["dWax"]) = 93
gradients["dWya"][1][2] = -0.007773876032
gradients["db"][4] = [-0.06809825]
gradients["dby"][1] = [ 0.01538192]
a_last[4] = [-1.]
```
### 3.2 - Training the model
* Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example.
* Every 2000 steps of stochastic gradient descent, you will sample several randomly chosen names to see how the algorithm is doing.
**Exercise**: Follow the instructions and implement `model()`. When `examples[index]` contains one dinosaur name (string), to create an example (X, Y), you can use this:
##### Set the index `idx` into the list of examples
* Using the for-loop, walk through the shuffled list of dinosaur names in the list "examples".
* For example, if there are n_e examples, and the for-loop increments the index to n_e onwards, think of how you would make the index cycle back to 0, so that we can continue feeding the examples into the model when j is n_e, n_e + 1, etc.
* Hint: n_e + 1 divided by n_e is zero with a remainder of 1.
* `%` is the modulus operator in python.
##### Extract a single example from the list of examples
* `single_example`: use the `idx` index that you set previously to get one word from the list of examples.
##### Convert a string into a list of characters: `single_example_chars`
* `single_example_chars`: A string is a list of characters.
* You can use a list comprehension (recommended over for-loops) to generate a list of characters.
```Python
str = 'I love learning'
list_of_chars = [c for c in str]
print(list_of_chars)
```
```
['I', ' ', 'l', 'o', 'v', 'e', ' ', 'l', 'e', 'a', 'r', 'n', 'i', 'n', 'g']
```
##### Convert list of characters to a list of integers: `single_example_ix`
* Create a list that contains the index numbers associated with each character.
* Use the dictionary `char_to_ix`
* You can combine this with the list comprehension that is used to get a list of characters from a string.
##### Create the list of input characters: `X`
* `rnn_forward` uses the **`None`** value as a flag to set the input vector as a zero-vector.
* Prepend the list [**`None`**] in front of the list of input characters.
* There is more than one way to prepend a value to a list. One way is to add two lists together: `['a'] + ['b']`
##### Get the integer representation of the newline character `ix_newline`
* `ix_newline`: The newline character signals the end of the dinosaur name.
- get the integer representation of the newline character `'\n'`.
- Use `char_to_ix`
##### Set the list of labels (integer representation of the characters): `Y`
* The goal is to train the RNN to predict the next letter in the name, so the labels are the list of characters that are one time step ahead of the characters in the input `X`.
- For example, `Y[0]` contains the same value as `X[1]`
* The RNN should predict a newline at the last letter so add ix_newline to the end of the labels.
- Append the integer representation of the newline character to the end of `Y`.
- Note that `append` is an in-place operation.
- It might be easier for you to add two lists together.
```
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27, verbose = False):
"""
Trains the model and generates dinosaur names.
Arguments:
data -- text corpus
ix_to_char -- dictionary that maps the index to a character
char_to_ix -- dictionary that maps a character to an index
num_iterations -- number of iterations to train the model for
n_a -- number of units of the RNN cell
dino_names -- number of dinosaur names you want to sample at each iteration.
vocab_size -- number of unique characters found in the text (size of the vocabulary)
Returns:
parameters -- learned parameters
"""
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
np.random.seed(0)
np.random.shuffle(examples)
# Initialize the hidden state of your LSTM
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
# Set the index `idx` (see instructions above)
idx = j % len(examples)
# Set the input X (see instructions above)
single_example = examples[idx]
single_example_chars = [c for c in single_example]
single_example_ix = [char_to_ix[c] for c in single_example_chars]
X = [None]+[single_example_ix]
X = [None]+[char_to_ix[ch] for ch in examples[idx]];
# Set the labels Y (see instructions above)
ix_newline = char_to_ix["\n"]
Y = X[1:]+[ix_newline]
# Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
# Choose a learning rate of 0.01
curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters, learning_rate=0.01)
### END CODE HERE ###
# debug statements to aid in correctly forming X, Y
if verbose and j in [0, len(examples) -1, len(examples)]:
print("j = " , j, "idx = ", idx,)
if verbose and j in [0]:
print("single_example =", single_example)
print("single_example_chars", single_example_chars)
print("single_example_ix", single_example_ix)
print(" X = ", X, "\n", "Y = ", Y, "\n")
# Use a latency trick to keep the loss smooth. It happens here to accelerate the training.
loss = smooth(loss, curr_loss)
# Every 2000 Iteration, generate "n" characters thanks to sample() to check if the model is learning properly
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result (for grading purposes), increment the seed by one.
print('\n')
return parameters
```
Run the following cell, you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.
```
parameters = model(data, ix_to_char, char_to_ix, verbose = True)
```
** Expected Output**
```Python
j = 0 idx = 0
single_example = turiasaurus
single_example_chars ['t', 'u', 'r', 'i', 'a', 's', 'a', 'u', 'r', 'u', 's']
single_example_ix [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19]
X = [None, 20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19]
Y = [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19, 0]
Iteration: 0, Loss: 23.087336
Nkzxwtdmfqoeyhsqwasjkjvu
Kneb
Kzxwtdmfqoeyhsqwasjkjvu
Neb
Zxwtdmfqoeyhsqwasjkjvu
Eb
Xwtdmfqoeyhsqwasjkjvu
j = 1535 idx = 1535
j = 1536 idx = 0
Iteration: 2000, Loss: 27.884160
...
Iteration: 34000, Loss: 22.447230
Onyxipaledisons
Kiabaeropa
Lussiamang
Pacaeptabalsaurus
Xosalong
Eiacoteg
Troia
```
## Conclusion
You can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like `maconucon`, `marloralus` and `macingsersaurus`. Your model hopefully also learned that dinosaur names tend to end in `saurus`, `don`, `aura`, `tor`, etc.
If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, `dromaeosauroides` is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest!
This assignment had used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the english language requires a much bigger dataset, and usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus!
<img src="images/mangosaurus.jpeg" style="width:250;height:300px;">
## 4 - Writing like Shakespeare
The rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative.
A similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of Dinosaur names you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer term dependencies that span many characters in the text--e.g., where a character appearing somewhere a sequence can influence what should be a different character much much later in the sequence. These long term dependencies were less important with dinosaur names, since the names were quite short.
<img src="images/shakespeare.jpg" style="width:500;height:400px;">
<caption><center> Let's become poets! </center></caption>
We have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes.
```
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
```
To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called [*"The Sonnets"*](shakespeare.txt).
Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run `generate_output`, which will prompt asking you for an input (`<`40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.
```
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])
# Run this cell to try with different inputs without having to re-train the model
generate_output()
```
The RNN-Shakespeare model is very similar to the one you have built for dinosaur names. The only major differences are:
- LSTMs instead of the basic RNN to capture longer-range dependencies
- The model is a deeper, stacked LSTM model (2 layer)
- Using Keras instead of python to simplify the code
If you want to learn more, you can also check out the Keras Team's text generation implementation on GitHub: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py.
Congratulations on finishing this notebook!
**References**:
- This exercise took inspiration from Andrej Karpathy's implementation: https://gist.github.com/karpathy/d4dee566867f8291f086. To learn more about text generation, also check out Karpathy's [blog post](http://karpathy.github.io/2015/05/21/rnn-effectiveness/).
- For the Shakespearian poem generator, our implementation was based on the implementation of an LSTM text generator by the Keras team: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py
|
github_jupyter
|
```
# Adapated from https://scipython.com/book/chapter-8-scipy/additional-examples/the-sir-epidemic-model/ - Courtesy of SciPy
# Slider from -> https://matplotlib.org/3.1.1/gallery/widgets/slider_demo.html - Courtesty of Matplotlib
# UK COVID Data -> https://ourworldindata.org/coronavirus/country/united-kingdom?country=~GBR (OWID)
import numpy as np
import pandas as pd
from scipy.integrate import odeint
import matplotlib.pyplot as plt, mpld3
from ipywidgets import interactive
cases = pd.read_csv('owid-covid-data.csv')
cases = cases[cases['iso_code']=='USA']
cases.drop(['life_expectancy','hospital_beds_per_thousand','handwashing_facilities','male_smokers','female_smokers','diabetes_prevalence','cardiovasc_death_rate','extreme_poverty','gdp_per_capita','aged_70_older', 'aged_65_older','median_age','population_density','population','stringency_index','tests_units','positive_rate','tests_per_case','new_tests_smoothed_per_thousand','new_tests_smoothed','new_tests_per_thousand','total_tests_per_thousand','total_tests','new_tests','new_deaths_per_million','total_deaths_per_million','new_cases_per_million','total_cases_per_million','new_deaths','total_deaths','new_cases','continent','location','new_deaths_smoothed_per_million','new_cases_smoothed_per_million','new_deaths_smoothed','new_cases_smoothed'],axis=1, inplace=True)
cases = cases[cases['date']=='2020-08-20']
pop = pd.read_csv('owid-covid-data.csv')
pop = pop[pop['iso_code']=='USA']
pop.drop(['life_expectancy','hospital_beds_per_thousand','handwashing_facilities','male_smokers','female_smokers','diabetes_prevalence','cardiovasc_death_rate','extreme_poverty','gdp_per_capita','aged_70_older', 'aged_65_older','median_age','population_density','stringency_index','tests_units','positive_rate','tests_per_case','new_tests_smoothed_per_thousand','new_tests_smoothed','new_tests_per_thousand','total_tests_per_thousand','total_tests','new_tests','new_deaths_per_million','total_deaths_per_million','new_cases_per_million','total_cases_per_million','new_deaths','total_deaths','new_cases','continent','location','total_cases', 'new_cases_smoothed','new_deaths_smoothed','new_cases_smoothed_per_million','new_deaths_smoothed_per_million'],axis=1, inplace=True)
pop = pop[pop['date']=='2020-08-20']
N = pop['population']
I0, R0 = cases['total_cases'], 0
S0 = N - I0 - R0
beta, gamma = 0, 0
t = np.linspace(0, 60, 60)
# The SIR model differential equations.
def sir(y, t, N, beta, gamma):
S, I, R = y
dSdt = -beta * S * I / N
dIdt = beta * S * I / N - gamma * I
dRdt = gamma * I
return dSdt, dIdt, dRdt
# Initial conditions vector
y0 = S0, I0, R0
# Plot the data on three separate curves for S(t), I(t) and R(t)
def sir_interactive_func(beta, gamma):
ret = odeint(sir, y0, t, args=(N, beta, gamma))
S, I, R = ret.T
fig = plt.figure()
ax = fig.add_subplot(111, axisbelow=True)
ax.plot(t, S/1000, 'yellow', lw=1.5, label='Susceptible')
ax.plot(t, I/1000, 'red', lw=1.5, label='Infected')
ax.plot(t, R/1000, 'blue', lw=1.5, label='Recovered')
ax.set_xlabel('Time (days)')
ax.set_ylabel('Population (1000s)')
ax.grid(b=True, which='major', c='#bbbbbb', lw=1, ls='-')
legend = ax.legend()
legend.get_frame().set_alpha(0.5)
#mpld3.save_html(fig, 'usa.html')
interactive_plot = interactive(sir_interactive_func, beta=(0.10,2,0.01), gamma=(0.10,1,0.01))
interactive_plot
```
|
github_jupyter
|
```
import numpy as np
import pandas as pd
import os
print(os.listdir("../input"))
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import numpy.random as nr
import math
%matplotlib inline
data = pd.read_csv('../input/train.csv')
print(data.head(3))
data.info()
# Check for negative item_cnt_day
data[data['item_cnt_day']<0]['item_cnt_day'].value_counts()
plt.plot(data[data['item_cnt_day']<0]['item_id'].value_counts().sort_index())
data_filtered=data.loc[data['item_cnt_day']>0]
data_filtered.info()
data=data_filtered
item_categories = pd.read_csv('../input/items.csv')
item_categories.head(3)
dt=pd.merge(data, item_categories, how='inner')
dt.sort_values(by=['date'], inplace=True)
dt.head(3)
## Drop column which is unused
columns=['date','item_price','item_name']
for c in columns:
if c in dt:
dt.drop(c, axis = 1, inplace = True)
dt[(dt['item_cnt_day']>0)].head(3)
#Group by 'date_block_num', 'shop_id','item_id' and
#sum the item count per day to get the sum for each month (or date_block_num)
dtf=dt.groupby(['date_block_num', 'shop_id','item_id'])[["item_cnt_day"]].sum().reset_index()
print(data.size)
print(dtf.size)
dtf.hist(figsize=(15,20))
plt.figure()
pd.plotting.scatter_matrix(dtf[['item_cnt_day','item_id','shop_id','date_block_num']],figsize=(10,10))
plt.figure()
dtf[(dtf['item_id']==2929) & (dtf['shop_id']==0)]
dt[(dt['item_id']==2929) & (dt['shop_id']==0)]
test_shop_id=dt.groupby(['shop_id'])[["item_cnt_day"]].sum().reset_index()
test_shop_id.head()
plt.bar(test_shop_id['shop_id'],test_shop_id ["item_cnt_day"])
#Analyze item_id outliers
test_item_id=dt.groupby(['item_id'])[["item_cnt_day"]].sum().reset_index()
plt.plot(test_item_id[(test_item_id['item_id']!=20949)]['item_id'],test_item_id[(test_item_id['item_id']!=20949)] ["item_cnt_day"])
plt.plot(test_item_id[(test_item_id['item_cnt_day']<=10000)]['item_id'],test_item_id[(test_item_id['item_cnt_day']<=10000)]["item_cnt_day"])
print(test_item_id[(test_item_id['item_id']!=20949)]['item_id'].describe())
print(test_item_id[(test_item_id['item_cnt_day']>12000)]['item_id'].value_counts())
test_item_id=dt.groupby(['item_category_id'])[["item_cnt_day"]].sum().reset_index()
plt.plot(test_item_id['item_category_id'],test_item_id["item_cnt_day"])
#Try to remove outliers (december months)
plt.plot(dt.groupby(['date_block_num'])[["item_cnt_day"]].sum())
dt_filtered=dt.loc[(dt['date_block_num'] ==9) | (dt['date_block_num'] ==10) | (dt['date_block_num'] ==21)| (dt['date_block_num'] ==22) | (dt['date_block_num'] ==33)]
print(dt_filtered.size)
dt_filtered['date_block_num'].value_counts()
pd.options.mode.chained_assignment = None # default='warn'
idx=dt_filtered.loc[(dt_filtered['date_block_num'] ==9)].index.values
dt_filtered.at[idx,'date_block_num']=0
dt_filtered.at[idx,'year']=1
idx=dt_filtered.loc[(dt_filtered['date_block_num'] ==10)].index.values
dt_filtered.at[idx,'date_block_num']=1
dt_filtered.at[idx,'year']=1
idx=dt_filtered.loc[(dt_filtered['date_block_num'] ==21)].index.values
dt_filtered.at[idx,'date_block_num']=0
dt_filtered.at[idx,'year']=2
idx=dt_filtered.loc[(dt_filtered['date_block_num'] ==22)].index.values
dt_filtered.at[idx,'date_block_num']=1
dt_filtered.at[idx,'year']=2
idx=dt_filtered.loc[(dt_filtered['date_block_num'] ==33)].index.values
dt_filtered.at[idx,'date_block_num']=0
dt_filtered.at[idx,'year']=3
print(dt_filtered['date_block_num'].value_counts())
print(dt_filtered['year'].value_counts())
plt.plot(dt_filtered.groupby(['date_block_num'])[["item_cnt_day"]].sum())
print(dt_filtered.head())
dt_filtered.to_csv('sales_train_trans_filtered.csv', sep=',',index=False)
dt.to_csv('sales_train_trans.csv', sep=',',index=False)
#Prepare test data : adding category column
sales_test = pd.read_csv('../input/test.csv')
sales_test.head(3)
sales_test1=pd.merge(sales_test, item_categories, how='inner')
sales_test1.sort_values(by=['ID'], inplace=True)
sales_test1.head(3)
sales_test1['shop_id'].value_counts()
sales_test1.info()
sales_test1.isnull().sum()
sales_test1['item_id'].value_counts().count()
sales_test1['item_category_id'].value_counts().count()
dt['item_category_id'].value_counts().count()
##Item_category_id that can be removed
#pd.concat([pd.unique(sales_test1['item_category_id']),pd.unique(sales_test1['item_category_id'])]).drop_duplicates(keep=False)
#print("sales_test1['item_category_id']-->",pd.unique(sales_test1['item_category_id']))
#print("dt['item_category_id']-->",pd.unique(dt['item_category_id']))
#print("concatenate-->", np.concatenate((pd.unique(sales_test1['item_category_id']),pd.unique(dt['item_category_id'])),axis=0))
np.unique(np.concatenate((pd.unique(sales_test1['item_category_id']),pd.unique(dt['item_category_id'])),axis=0))
a=set(pd.unique(dt['item_category_id']));
b=set(pd.unique(sales_test1['item_category_id']));
list(a-b)
sales_test1.drop('item_name', axis = 1, inplace = True)
sales_test1.to_csv('sales_test1.csv', sep=',',index=False)
```
|
github_jupyter
|
#### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Classification on imbalanced data
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/structured_data/imbalanced_data"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial demonstrates how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use [Keras](https://www.tensorflow.org/guide/keras/overview) to define the model and [class weights](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model) to help the model learn from the imbalanced data. .
This tutorial contains complete code to:
* Load a CSV file using Pandas.
* Create train, validation, and test sets.
* Define and train a model using Keras (including setting class weights).
* Evaluate the model using various metrics (including precision and recall).
* Try common techniques for dealing with imbalanced data like:
* Class weighting
* Oversampling
## Setup
```
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
```
## Data processing and exploration
### Download the Kaggle Credit Card Fraud data set
Pandas is a Python library with many helpful utilities for loading and working with structured data. It can be used to download CSVs into a Pandas [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html#pandas.DataFrame).
Note: This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project
```
file = tf.keras.utils
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
raw_df.head()
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
```
### Examine the class label imbalance
Let's look at the dataset imbalance:
```
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
```
This shows the small fraction of positive samples.
### Clean, split and normalize the data
The raw data has a few issues. First the `Time` and `Amount` columns are too variable to use directly. Drop the `Time` column (since it's not clear what it means) and take the log of the `Amount` column to reduce its range.
```
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps = 0.001 # 0 => 0.1¢
cleaned_df['Log Ammount'] = np.log(cleaned_df.pop('Amount')+eps)
```
Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where [overfitting](https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting) is a significant concern from the lack of training data.
```
# Use a utility from sklearn to split and shuffle your dataset.
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
```
Normalize the input features using the sklearn StandardScaler.
This will set the mean to 0 and standard deviation to 1.
Note: The `StandardScaler` is only fit using the `train_features` to be sure the model is not peeking at the validation or test sets.
```
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
```
Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way to implement them as layers, and attach them to your model before export.
### Look at the data distribution
Next compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:
* Do these distributions make sense?
* Yes. You've normalized the input and these are mostly concentrated in the `+/- 2` range.
* Can you see the difference between the distributions?
* Yes the positive examples contain a much higher rate of extreme values.
```
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns=train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns=train_df.columns)
sns.jointplot(pos_df['V5'], pos_df['V6'],
kind='hex', xlim=(-5,5), ylim=(-5,5))
plt.suptitle("Positive distribution")
sns.jointplot(neg_df['V5'], neg_df['V6'],
kind='hex', xlim=(-5,5), ylim=(-5,5))
_ = plt.suptitle("Negative distribution")
```
## Define the model and metrics
Define a function that creates a simple neural network with a densly connected hidden layer, a [dropout](https://developers.google.com/machine-learning/glossary/#dropout_regularization) layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
```
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
keras.metrics.AUC(name='prc', curve='PR'), # precision-recall curve
]
def make_model(metrics=METRICS, output_bias=None):
if output_bias is not None:
output_bias = tf.keras.initializers.Constant(output_bias)
model = keras.Sequential([
keras.layers.Dense(
16, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid',
bias_initializer=output_bias),
])
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
```
### Understanding useful metrics
Notice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.
* **False** negatives and **false** positives are samples that were **incorrectly** classified
* **True** negatives and **true** positives are samples that were **correctly** classified
* **Accuracy** is the percentage of examples correctly classified
> $\frac{\text{true samples}}{\text{total samples}}$
* **Precision** is the percentage of **predicted** positives that were correctly classified
> $\frac{\text{true positives}}{\text{true positives + false positives}}$
* **Recall** is the percentage of **actual** positives that were correctly classified
> $\frac{\text{true positives}}{\text{true positives + false negatives}}$
* **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.
* **AUPRC** refers to Area Under the Curve of the Precision-Recall Curve. This metric computes precision-recall pairs for different probability thresholds.
Note: Accuracy is not a helpful metric for this task. You can 99.8%+ accuracy on this task by predicting False all the time.
Read more:
* [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)
* [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)
* [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)
* [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc)
* [Relationship between Precision-Recall and ROC Curves](https://www.biostat.wisc.edu/~page/rocpr.pdf)
## Baseline model
### Build the model
Now create and train your model using the function that was defined earlier. Notice that the model is fit using a larger than default batch size of 2048, this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size was too small, they would likely have no fraudulent transactions to learn from.
Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
```
EPOCHS = 100
BATCH_SIZE = 2048
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_prc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
model = make_model()
model.summary()
```
Test run the model:
```
model.predict(train_features[:10])
```
### Optional: Set the correct initial bias.
These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: [A Recipe for Training Neural Networks: "init well"](http://karpathy.github.io/2019/04/25/recipe/#2-set-up-the-end-to-end-trainingevaluation-skeleton--get-dumb-baselines)). This can help with initial convergence.
With the default bias initialization the loss should be about `math.log(2) = 0.69314`
```
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
```
The correct bias to set can be derived from:
$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$
$$ b_0 = -log_e(1/p_0 - 1) $$
$$ b_0 = log_e(pos/neg)$$
```
initial_bias = np.log([pos/neg])
initial_bias
```
Set that as the initial bias, and the model will give much more reasonable initial guesses.
It should be near: `pos/total = 0.0018`
```
model = make_model(output_bias=initial_bias)
model.predict(train_features[:10])
```
With this initialization the initial loss should be approximately:
$$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$
```
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
```
This initial loss is about 50 times less than if would have been with naive initialization.
This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training.
### Checkpoint the initial weights
To make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training:
```
initial_weights = os.path.join(tempfile.mkdtemp(), 'initial_weights')
model.save_weights(initial_weights)
```
### Confirm that the bias fix helps
Before moving on, confirm quick that the careful bias initialization actually helped.
Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
```
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale on y-axis to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train ' + label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val ' + label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
```
The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage.
### Train the model
```
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels))
```
### Check training history
In this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in the [Overfit and underfit](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit) tutorial.
Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
```
def plot_metrics(history):
metrics = ['loss', 'prc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend()
plot_metrics(baseline_history)
```
Note: That the validation curve generally performs better than the training curve. This is mainly caused by the fact that the dropout layer is not active when evaluating the model.
### Evaluate metrics
You can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/#confusion_matrix) to summarize the actual vs. predicted labels, where the X axis is the predicted label and the Y axis is the actual label:
```
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
```
Evaluate your model on the test dataset and display the results for the metrics you created above:
```
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
```
If the model had predicted everything perfectly, this would be a [diagonal matrix](https://en.wikipedia.org/wiki/Diagonal_matrix) where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity.
### Plot the ROC
Now plot the [ROC](https://developers.google.com/machine-learning/glossary#ROC). This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
```
def plot_roc(name, labels, predictions, **kwargs):
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
```
### Plot the AUPRC
Now plot the [AUPRC](https://developers.google.com/machine-learning/glossary?hl=en#PR_AUC). Area under the interpolated precision-recall curve, obtained by plotting (recall, precision) points for different values of the classification threshold. Depending on how it's calculated, PR AUC may be equivalent to the average precision of the model.
```
def plot_prc(name, labels, predictions, **kwargs):
precision, recall, _ = sklearn.metrics.precision_recall_curve(labels, predictions)
plt.plot(precision, recall, label=name, linewidth=2, **kwargs)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
```
It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness.
## Class weights
### Calculate class weights
The goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
```
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
weight_for_0 = (1 / neg) * (total / 2.0)
weight_for_1 = (1 / pos) * (total / 2.0)
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
```
### Train a model with class weights
Now try re-training and evaluating the model with class weights to see how that affects the predictions.
Note: Using `class_weights` changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like `tf.keras.optimizers.SGD`, may fail. The optimizer used here, `tf.keras.optimizers.Adam`, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
```
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
```
### Check training history
```
plot_metrics(weighted_history)
```
### Evaluate metrics
```
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
```
Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade-offs between these different types of errors for your application.
### Plot the ROC
```
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right')
```
### Plot the AUPRC
```
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right')
```
## Oversampling
### Oversample the minority class
A related approach would be to resample the dataset by oversampling the minority class.
```
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
```
#### Using NumPy
You can balance the dataset manually by choosing the right number of random
indices from the positive examples:
```
ids = np.arange(len(pos_features))
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
```
#### Using `tf.data`
If you're using `tf.data` the easiest way to produce balanced examples is to start with a `positive` and a `negative` dataset, and merge them. See [the tf.data guide](../../guide/data.ipynb) for more examples.
```
BUFFER_SIZE = 100000
def make_ds(features, labels):
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
```
Each dataset provides `(feature, label)` pairs:
```
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
```
Merge the two together using `experimental.sample_from_datasets`:
```
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
```
To use this dataset, you'll need the number of steps per epoch.
The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:
```
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
```
### Train on the oversampled data
Now try training the model with the resampled data set instead of using class weights to see how these methods compare.
Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
```
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks=[early_stopping],
validation_data=val_ds)
```
If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.
But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: Instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight.
This smoother gradient signal makes it easier to train the model.
### Check training history
Note that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
```
plot_metrics(resampled_history)
```
### Re-train
Because training is easier on the balanced data, the above training procedure may overfit quickly.
So break up the epochs to give the `tf.keras.callbacks.EarlyStopping` finer control over when to stop training.
```
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch=20,
epochs=10*EPOCHS,
callbacks=[early_stopping],
validation_data=(val_ds))
```
### Re-check training history
```
plot_metrics(resampled_history)
```
### Evaluate metrics
```
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
```
### Plot the ROC
```
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
```
### Plot the AUPRC
```
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_prc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_prc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
```
## Applying this tutorial to your problem
Imbalanced data classification is an inherently difficult task since there are so few samples to learn from. You should always start with the data first and do your best to collect as many samples as possible and give substantial thought to what features may be relevant so the model can get the most out of your minority class. At some point your model may struggle to improve and yield the results you want, so it is important to keep in mind the context of your problem and the trade offs between different types of errors.
|
github_jupyter
|
```
import json
import tensorflow as tf
import csv
import random
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import regularizers
embedding_dim = 100
max_length = 16
trunc_type='post'
padding_type='post'
oov_tok = "<OOV>"
training_size=160000
test_portion=.1
# Note that I cleaned the Stanford dataset to remove LATIN1 encoding to make it easier for Python CSV reader
# You can do that yourself with:
# iconv -f LATIN1 -t UTF8 training.1600000.processed.noemoticon.csv -o training_cleaned.csv
# I then hosted it on my site to make it easier to use in this notebook
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/training_cleaned.csv \
-O /tmp/training_cleaned.csv
corpus = []
num_sentences = 0
with open("/tmp/training_cleaned.csv") as csvfile:
reader = csv.reader(csvfile, delimiter=',')
for row in reader:
# Your Code here. Create list items where the first item is the text, found in row[5], and the second is the label. Note that the label is a '0' or a '4' in the text. When it's the former, make
# your label to be 0, otherwise 1. Keep a count of the number of sentences in num_sentences
list_item=[]
list_item.append(row[5])
# YOUR CODE HERE
if row[0] == '0':
list_item.append(0)
else:
list_item.append(1)
num_sentences = num_sentences + 1
corpus.append(list_item)
print(num_sentences)
print(len(corpus))
print(corpus[1])
# Expected Output:
# 1600000
# 1600000
# ["is upset that he can't update his Facebook by texting it... and might cry as a result School today also. Blah!", 0]
sentences=[]
labels=[]
random.shuffle(corpus)
for x in range(training_size):
sentences.append(corpus[x][0])
labels.append(corpus[x][1])
tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)
word_index = tokenizer.word_index
vocab_size=len(word_index)
sequences = tokenizer.texts_to_sequences(sentences)
padded = pad_sequences(sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
split = int(test_portion * training_size)
test_sequences = padded[:split]
training_sequences = padded[split:training_size]
test_labels = labels[:split]
training_labels = labels[split:training_size]
print(vocab_size)
print(word_index['i'])
# Expected Output
# 138858
# 1
# Note this is the 100 dimension version of GloVe from Stanford
# I unzipped and hosted it on my site to make this notebook easier
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/glove.6B.100d.txt \
-O /tmp/glove.6B.100d.txt
embeddings_index = {};
with open('/tmp/glove.6B.100d.txt') as f:
for line in f:
values = line.split();
word = values[0];
coefs = np.asarray(values[1:], dtype='float32');
embeddings_index[word] = coefs;
embeddings_matrix = np.zeros((vocab_size+1, embedding_dim));
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word);
if embedding_vector is not None:
embeddings_matrix[i] = embedding_vector;
print(len(embeddings_matrix))
# Expected Output
# 138859
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size+1, embedding_dim, input_length=max_length, weights=[embeddings_matrix], trainable=False),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Conv1D(64, 5, activation='relu'),
tf.keras.layers.MaxPooling1D(pool_size=4),
tf.keras.layers.LSTM(64),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(
loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'],
)
model.summary()
num_epochs = 50
history = model.fit(training_sequences, training_labels, epochs=num_epochs, validation_data=(test_sequences, test_labels), verbose=2)
print("Training Complete")
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
acc=history.history['acc']
val_acc=history.history['val_acc']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs=range(len(acc)) # Get number of epochs
#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, 'r')
plt.plot(epochs, val_acc, 'b')
plt.title('Training and validation accuracy')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["Accuracy", "Validation Accuracy"])
plt.figure()
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, 'r')
plt.plot(epochs, val_loss, 'b')
plt.title('Training and validation loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["Loss", "Validation Loss"])
plt.figure()
# Expected Output
# A chart where the validation loss does not increase sharply!
```
|
github_jupyter
|
# Caching
Interacting with files on a cloud provider can mean a lot of waiting on files downloading and uploading. `cloudpathlib` provides seamless on-demand caching of cloud content that can be persistent across processes and sessions to make sure you only download or upload when you need to.
## Are we synced?
Before `cloudpathlib`, we spent a lot of time syncing our remote and local files. There was no great solution. For example, I just need one file, but I only have a script that downloads the entire 800GB bucket (or worse, you can't remember exactly _which_ files you need 🤮). Or _even worse_, you have all the files synced to your local machine, but you suspect that some are are up-to-date and some are stale. More often that I'd like to admit, the simplest answer was to blast the whole data directory and download all over again. Bandwidth doesn't grow on trees!
## Cache me if you can
Part of what makes `cloudpathlib` so useful is that it takes care of all of that, leaving your precious mental resources free to do other things! It maintains a local cache and only downloads a file if the local version and remote versions are out of sync. Every time you read or write a file, `cloudpathlib` goes through these steps:
- Does the file exist in the cache already?
- If no, download it to the cache.
- If yes, does the cached version have the same modtime as the cloud version?
- If it is older, re-download the file and replace the old cached version with the updated version from the cloud.
- If the local one is newer, something is up! We don't want to overwrite your local changes with the version from the cloud. If we see this scenario, we'll raise an error and offer some options to resolve the versions.
## Supporting reading and writing
The cache logic also support writing to cloud files seamlessly in addition to reading. We do this by tracking when a `CloudPath` is opened and on the close of that file, we will upload the new version to the cloud if it has changed.
**Warning** we don't upload files that weren't opened for write by `cloudpathlib`. For example, if you edit a file in the cache manually in a text edior, `cloudpathlib` won't know to update that file on the cloud. If you want to write to a file in the cloud, you should use the `open` or `write` methods, for example:
```python
with my_cloud_path.open("w") as f:
f.write("My new text!")
```
This will download the file, write to the text to the local version in the cache, and when that file is closed we know to upload the changed version to the cloud.
As an example, let's look at using the [Low Altitude Disaster Imagery](https://registry.opendata.aws/ladi/) open dataset on S3. We'll view one images available of a flooding incident available on S3.
```
from cloudpathlib import CloudPath
from itertools import islice
ladi = CloudPath("s3://ladi/Images/FEMA_CAP/2020/70349")
# list first 5 images for this incident
for p in islice(ladi.iterdir(), 5):
print(p)
```
Just because we saw these images are available, it doesn't mean we have downloaded any of this data yet.
```
# Nothing in the cache yet
!tree {ladi.fspath}
```
Now let's look at just the first image from this dataset.
```
flood_image = ladi / "DSC_0001_5a63d42e-27c6-448a-84f1-bfc632125b8e.jpg"
flood_image.exists()
# Still nothing in the cache
!tree {ladi.fspath}
```
Even though we refer to a specific file and make sure it exists in the cloud, we can still do all of that work without actually downloading the file.
In order to read the file, we do have to download the data. Let's actually display the image:
```
%%time
%matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image
with flood_image.open("rb") as f:
i = Image.open(f)
plt.imshow(i)
# Downloaded image file in the cache
!tree {ladi.fspath}
```
Just be using `open`, we've downloaded the file in the background to the cache. Now that it is local, we won't redownload that file unless it changes on the server. We can confirm that by checking if the file is faster to read a second time.
```
%%time
with flood_image.open("rb") as f:
i = Image.open(f)
plt.imshow(i)
```
Notice that the second display is much faster since we use the cached version!
## Keeping the cache around
By default, the cache uses [`tempfile`](https://docs.python.org/3/library/tempfile.html) this means at some point either Python or your operating system will remove whatever files you have cached. This is helpful in that it means the downloaded files get cleaned up regularly and don't necessarily clutter up your local hard drive.
However, sometimes I don't want to have to re-download files I know won't change. For example, in the LADI dataset, I may want to use the images in a Jupyter notebook and every time I restart the notebook I want to always have the downloaded files. I don't want to re-download since I know the LADI images won't be changing on S3.
We can do this just by using a `Client` that does all the downloading/uploading to a specfic folder on our local machine.
```
from cloudpathlib import S3Client
# explicitly instantiate a client that always uses the local cache
client = S3Client(local_cache_dir="data")
ladi = client.CloudPath("s3://ladi/Images/FEMA_CAP/2020/70349")
# Again, nothing in the cache yet, but we see it is all in the "data" folder
!tree {ladi.fspath}
```
Now let's look at just the first image from this dataset. Note that paths created by using the `ladi` root (e.g., by using the `/` operator below or calls like `iterdir` and `glob`) will inherit the same `Client` instance, and therefore the same `local_cache_dir` without our having to do extra work.
```
flood_image = ladi / "DSC_0002_a89f1b79-786f-4dac-9dcc-609fb1a977b1.jpg"
with flood_image.open("rb") as f:
i = Image.open(f)
plt.imshow(i)
# Now
!tree {ladi.fspath}
# let's explicitly cleanup this directory, since it is not handled for us
!rm -rf data
```
## Accessing the cached version directly (read-only)
Many Python libraries don't properly handle `PathLike` objects. These libraries often only expect a `str` to be passed when working with files or, even worse, they will call `str(p)` on a Path that is passed before using it.
To use `cloudpathlib` with these libraries, you can pass `.fspath` which will provide the path to the cached version of the file as a string.
**Warning:** Using the `.fspath` property will download the file from the cloud if it does not exist yet in the cache.
**Warning:** Since we are no longer in control of opening/closing the file, we cannot upload any changes when the file is closed. Therefore, you should treat any code where you use `fspath` as _read only_. Writes directly to `fspath` will not be uplaoded to the cloud.
## Handling conflicts
We try to be conservative in terms of not losing data—especially data stored on the cloud, which is likely to be the canonical version. Given this, we will raise exceptions in two scenarios:
`OverwriteNewerLocalError`
This exception is raised if we are asked to download a file, but our local version in the cache is newer. This likely means that the cached version has been updated, but not pushed to the cloud. To work around this you could remove the cache version explicitly if you _know_ you don't need that data. If you did write changes you need, make sure your code uses the `cloudpathlib` versions of the `open`, `write_text`, or `write_bytes` methods, which will upload your changes to the cloud automatically.
The `CloudPath.open` method supports a `force_overwrite_from_cloud` kwarg to force overwriting your local version.
`OverwriteNewerCloudError`
This exception is raised if we are asked to upload a file, but the one on the cloud is newer than our local version. This likely means that a separate process has updated the cloud version, and we don't want to overwrite and lose that new data in the cloud.
The `CloudPath.open` method supports a `force_overwrite_to_cloud` kwarg to force overwriting the cloud version.
|
github_jupyter
|
# Basic Usage Guide for Obstacle Tower Gym Interface
```
from obstacle_tower_env import ObstacleTowerEnv, ObstacleTowerEvaluation
%matplotlib inline
from matplotlib import pyplot as plt
from IPython.display import display, clear_output
import numpy as np
# import matplotlib.pyplot as plt
# import matplotlib.animation as animation
```
## Launching the environment
Ensure that the Obstacle Tower binary has been downloaded (https://github.com/Unity-Technologies/obstacle-tower-env#download-the-environment), and placed in the correct sub-folder. Here we use the `examples/ObstacleTower` sub-folder.
```
# Realtime mode determines whether the environment window will render the scene,
# as well as whether the environment will run at realtime speed. Set this to `True`
# to visual the agent behavior as you would in player mode.
env = ObstacleTowerEnv('./ObstacleTower/obstacletower', retro=False, realtime_mode=True)
```
## Environment information
We can also set the random seed used to generate the environment, as well as choose a starting floor.
```
# The environment provided has a MultiDiscrete action space, where the 4 dimensions are:
# 0. Movement (No-Op/Forward/Back)
# 1. Camera Rotation (No-Op/Counter-Clockwise/Clockwise)
# 2. Jump (No-Op/Jump)
# 3. Movement (No-Op/Right/Left)
print(env.action_space)
# The observation space provided includes a 168x168 image (the camera from the simulation)
# as well as the number of keys held by the agent (0-5) and the amount of time remaining.
print(env.observation_space)
```
## Interacting with the environment
```
import numpy as np
seed = 5 # seed = np.random.randint(100)
env.seed(seed) # Seeds can be chosen from range of 0-100.
env.floor(0) # Floors can be chosen from range of 0-100.
obs = env.reset()
plt.imshow(obs[0])
def run_episode(env):
done = False
seed = 5
env.seed(seed)
env.floor(0)
obs = env.reset()
episode_return = 0.0
action=[1, 0, 0, 0]
while not done:
obs, reward, done, info = env.step(env.action_space.sample())
if not done:
obs, reward, done, info = env.step(action)
episode_return += reward
return episode_return
##### 跑指定步數 #####
action=[1, 0, 0, 0]
r = 0
### img ###
fig = plt.figure()
ims = []
# observation = [camera, key, time, floor]
# 1 env.step = 50mms
for i in range(0, 100):
obs, reward, done, info = env.step(env.action_space.sample())
if not done:
obs, reward, done, info = env.step(action)
# im = plt.imshow(obs[0], animated =False)
# ims.append([im])
# clear_output(True)
plt.show()
r += reward
if r>0 :
print("Reward: %.2f" % r)
if done:
obs = env.reset()
print("Result Reward: %.2f" % r)
### test for save video but get error ###
# ims is a list of lists, each row is a list of artists to draw in the
# current frame; here we are just animating one artist, the image, in
# each frame
# ani = animation.ArtistAnimation(fig, ims, interval=50, blit=True, repeat = False, repeat_delay=None)
# To save the animation, use e.g.
# from matplotlib.animation import FFMpegWriter
# ani.save("movie.mp4")
# writer = FFMpegWriter(fps=15, metadata=dict(artist='Me'), bitrate=1800)
# ani.save("movie.mp4", writer=writer)
# plt.show()
##########
env.close()
##### Run until done #####
env = ObstacleTowerEnv('./ObstacleTower/obstacletower', retro=False, realtime_mode=True)
eval_seeds = [1001]
env = ObstacleTowerEvaluation(env, eval_seeds)
print("Total Reward: ",run_episode(env))
env.close()
##### 程式分隔線 #####
print(obs)
```
### Setting environment parameters
We can also set the random seed used to generate the environment, as well as choose a starting floor.
```
# Seeds can be chosen from range of 0-100.
env.seed(5)
# Floors can be chosen from range of 0-100.
env.floor(15)
# Additional reset parameters can be set using a config dictionary
# Here we set the agent perspective to first-person mode.
config = {'agent-perspective': 1}
# These parameters won't take place until the next reset.
obs = env.reset(config=config)
plt.imshow(obs[0])
```
## Evaluation
```
from obstacle_tower_env import ObstacleTowerEnv, ObstacleTowerEvaluation
%matplotlib inline
from matplotlib import pyplot as plt
def run_episode(env):
done = False
episode_return = 0.0
while not done:
action = env.action_space.sample()
obs, reward, done, info = env.step(action)
episode_return += reward
return episode_return
if __name__ == '__main__':
# In this example we use the seeds used for evaluating submissions
# to the Obstacle Tower Challenge.
#eval_seeds = [1001, 1002, 1003, 1004, 1005]
eval_seeds = [1001]
# Create the ObstacleTowerEnv gym and launch ObstacleTower
env = ObstacleTowerEnv('./ObstacleTower/obstacletower', realtime_mode=False)
# Wrap the environment with the ObstacleTowerEvaluation wrapper
# and provide evaluation seeds.
env = ObstacleTowerEvaluation(env, eval_seeds)
# We can run episodes (in this case with a random policy) until
# the "evaluation_complete" flag is True. Attempting to step or reset after
# all of the evaluation seeds have completed will result in an exception.
while not env.evaluation_complete:
episode_rew = run_episode(env)
# Finally the evaluation results can be fetched as a dictionary from the
# environment wrapper.
print(env.results)
env.close()
```
## Launching the environment (retro mode)
We also provide a `retro mode` which uses observation and action spaces similar to those found in the Arcade Learning Environment (ALE).
```
env = ObstacleTowerEnv('./ObstacleTower/obstacletower', retro=True)
# In retro mode, the observation is an 84x84 image with the time remaining and key count visually embedded.
env.observation_space
```
## Interacting with the environment (retro mode)
```
obs = env.reset()
print(obs.shape)
obs, reward, done, info = env.step(env.action_space.sample())
plt.imshow(obs)
```
|
github_jupyter
|
```
import json
import requests
import csv
import pandas as pd
import os
import matplotlib.pylab as plt
import numpy as np
%matplotlib inline
pd.options.mode.chained_assignment = None
from statsmodels.tsa.arima_model import ARIMA
import statsmodels.api as sm
import operator
from statsmodels.tsa.stattools import acf
from statsmodels.tsa.stattools import pacf
from pandas.tools.plotting import autocorrelation_plot
dateparse = lambda dates: pd.datetime.strptime(dates, '%Y-%m-%d')
indicator_data = pd.read_csv('P:\\ADS\\Final\\Indicators_Cleaned.csv',header=0,parse_dates=True,index_col='Year',date_parser=dateparse, low_memory=False)
indicator_data.head()
indicator_data.reset_index()
indicator_data.head()
brazil_df_ind6 = indicator_data[(indicator_data['IndicatorCode'].isin(['NY.GDP.MKTP.KD.ZG'])) &\
(indicator_data['CountryCode'] == 'BR')]
brazil_df_ind6.index
ts = brazil_df_ind6['Value']
ts1 = brazil_df_ind6[['Value']].copy()
ts1['Value']=ts1['Value']+20
ts1.head()
plt.plot(ts1)
from statsmodels.tsa.stattools import adfuller
def test_stationarity(timeseries):
#Determing rolling statistics
rolmean = pd.rolling_mean(timeseries, window=12)
rolstd = pd.rolling_std(timeseries, window=12)
#Plot rolling statistics:
orig = plt.plot(timeseries, color='blue',label='Original')
mean = plt.plot(rolmean, color='red', label='Rolling Mean')
std = plt.plot(rolstd, color='black', label = 'Rolling Std')
plt.legend(loc='best')
plt.title('Rolling Mean & Standard Deviation')
plt.show(block=False)
#Perform Dickey-Fuller test:
print ('Results of Dickey-Fuller Test:')
dftest = adfuller(timeseries, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
print (dfoutput)
test_stationarity(ts1.Value)
decomposition = sm.tsa.seasonal_decompose(ts1, model='additive')
fig = decomposition.plot()
plt.show()
```
## Taking Log
```
def logTransform(df):
ts_log = np.log(df)
plt.plot(ts_log)
return ts_log
ts1_log = logTransform(ts1)
test_stationarity(ts1_log.Value)
```
## Log first difference
```
def logFirstDifference(ts1_log):
ts1_log_diff = ts1_log - ts1_log.shift()
ts1_log_diff.dropna(inplace=True)
return ts1_log_diff
ts1_log_diff = logFirstDifference(ts1_log)
test_stationarity(ts1_log_diff.Value)
```
## First difference
```
def firstDifference(df):
#ts_first_diff = df - df.shift()
#ts_first_diff.dropna(inplace=True)
ts_first_diff = df.diff()
ts_first_diff.dropna(inplace=True)
return ts_first_diff
ts1_first_diff = firstDifference(ts1)
test_stationarity(ts1_first_diff.Value)
def expWeightedavg(ts1_log):
expwighted_avg = pd.ewma(ts1_log, halflife=57)
ts_log_ewma_diff = ts1_log - expwighted_avg
ts1_log_diff.dropna(inplace=True)
return ts1_log_diff
ts_log_ewma_diff = expWeightedavg(ts1_log)
test_stationarity(ts_log_ewma_diff.Value)
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(ts1_log)
trend = decomposition.trend
seasonal = decomposition.seasonal
residual = decomposition.resid
plt.subplot(411)
plt.plot(ts1_log, label='Original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(trend, label='Trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(seasonal,label='Seasonality')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(residual, label='Residuals')
plt.legend(loc='best')
plt.tight_layout()
lag_acf = acf(ts1_log, nlags=10)
lag_pacf = pacf(ts1_log, nlags=10, method='ols')
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(ts1_log, lags=10, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(ts1_log, lags=10, ax=ax2)
```
- As seen from the graph above both ACF and PACF are geometric hence this is an ARMA model
```
autocorrelation_plot(ts1_log)
plt.show()
plt.subplot(122)
plt.plot(lag_pacf)
plt.axhline(y=0,linestyle='--',color='gray')
plt.axhline(y=-1.96/np.sqrt(len(ts1_log)),linestyle='--',color='gray')
plt.axhline(y=1.96/np.sqrt(len(ts1_log)),linestyle='--',color='gray')
plt.title('Partial Autocorrelation Function')
plt.tight_layout()
aic_metric = pd.DataFrame({'Modelname':[],'AIC':[]})
aic_dict = {}
def cal_aic_metric(modelname,model):
global aic_metric
AIC = model.aic
aic_dict[modelname] = AIC
df_error = pd.DataFrame({'Modelname':[modelname],'AIC':[AIC]})
aic_metric = pd.concat([aic_metric,df_error])
return aic_metric
def AR_Model(ts):
model = ARIMA(ts, order=(2, 0, 0))
results_AR = model.fit(disp=0)
cal_aic_metric('ARIMA(ts, order=(2, 0, 0))',results_AR)
print('Lag: %s' % results_AR.k_ar)
print('Coefficients: %s' % results_AR.params)
#print(results_AR.summary())
predict_MA_HPI = np.exp(results_AR.predict(10, 10, dynamic=True))
print(predict_MA_HPI)
plt.plot(ts1_log)
plt.plot(results_AR.fittedvalues, color='red')
#print(np.exp(results_AR.fittedvalues))
print(results_AR.aic)
return results_AR
model_AR = AR_Model(ts1_log)
def MA_Model(ts):
model = ARIMA(ts, order=(0,0, 5))
results_MA = model.fit(disp=0)
cal_aic_metric('ARIMA(ts, order=(0, 0, 5))',results_MA)
print('Lag: %s' % results_MA.k_ar)
print('Coefficients: %s' % results_MA.params)
print(results_MA.summary())
plt.plot(ts)
plt.plot(results_MA.fittedvalues, color='red')
return results_MA
model_MA = MA_Model(ts1_log)
def Combined_Model(ts):
model = ARIMA(ts, order=(2, 0, 2))
results_ARIMA = model.fit(disp=0)
cal_aic_metric('ARIMA(ts, order=(1,0, 5))',results_ARIMA)
print('Lag: %s' % results_ARIMA.k_ar)
print('Coefficients: %s' % results_ARIMA.params)
print(results_ARIMA.summary())
plt.plot(ts)
plt.plot(results_ARIMA.fittedvalues, color='red')
return results_ARIMA
model_Combined = Combined_Model(ts1_log)
best_model = min(aic_dict.items(),key=operator.itemgetter(1))[0]
print('Best Model is ', best_model)
aic_metric
#Forecast using Best Model
def forecast(model,numSteps):
#model.forecast(steps=numSteps)
output = model.forecast(steps=numSteps)[0]
#output.tolist()
output = np.exp(output)
output = output-20
#out=normal(output)
return output
forecast(model_AR,15)
def FittedValues(model):
fittedVal=model.fittedvalues
#PredictedVal=normal(fittedVal)
#PredictedVal= fittedVal.tolist()
fittedVal = np.exp(fittedVal)
fittedVal = fittedVal-20
print('Predicted existing values are:')
return fittedVal
FittedValues(model_AR)
def normal(predictions_ARIMA_diff):
#predictions_ARIMA_diff = pd.Series(results_ARIMA.fittedvalues, copy=True)
predictions_ARIMA_diff_cumsum = np.cumsum(np.concatenate((ts1.values[0], predictions_ARIMA_diff)))
print('normalized')
#predictions_ARIMA_diff_cumsum=np.absolute(predictions_ARIMA_diff_cumsum)
return predictions_ARIMA_diff_cumsum
```
|
github_jupyter
|
# Question C | SVMs hand-on
Yilun Kuang (Mark)
N15511943
FML HW 2
## Question 1
```shell
# Login to the computing cluster
ssh yk2516@greene.hpc.nyu.edu
cd /scratch/yk2516/svm
# Download libsvm github repo
git clone https://github.com/cjlin1/libsvm.git
cd libsvm
make
# Install the libsvm pypi packages on the system
singularity exec --nv --overlay /scratch/yk2516/singularity/overlay-25GB-500K-0.ext3:rw /scratch/work/public/singularity/cuda11.3.0-cudnn8-devel-ubuntu20.04.sif /bin/bash -c "
pip install -U libsvm-official
"
```
## Question 2
```shell
# Download the abalone dataset that is already scaled
wget https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone_scale
# Alternatively, get the raw data from the link below and do the preprocessing
wget http://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data
```
```
import os
import numpy as np
import pandas as pd
def preprocess(train_length = 3133):
pd_tmp = pd.read_csv('abalone.data',header=None)
sex_map = {'M':("1 0 0"),'F':("0 1 0"),'I':("0 0 1")}
pd_tmp[0]=pd_tmp[0].apply(lambda x: sex_map[x])
pd_sex = pd_tmp[0].str.split(expand=True)
pd_tmp = pd_tmp.drop(columns=[0])
pd_tmp = pd.concat([pd_sex, pd_tmp], axis=1)
label_col = np.array(pd_tmp[8])
new_label = np.ones(len(pd_tmp[8]),dtype=np.float64)
ind_smaller_than_10 = np.where(label_col<=9)[0]
new_label[ind_smaller_than_10]= np.float64(-1) # new_label is y_test now
pd_tmp[8] = new_label
pd_tmp.columns = range(pd_tmp.shape[1])
cols = list(pd_tmp.columns)
cols = [cols[-1]] + cols[:-1]
pd_tmp = pd_tmp[cols]
pd_tmp.columns = range(pd_tmp.shape[1])
cols = list(pd_tmp.columns)
for cols_name in cols:
if cols_name != 0:
pd_tmp[cols_name] = pd_tmp[cols_name].apply(lambda x: str(cols_name)+":"+str(x))
pd_tmp[0] = pd_tmp[0].apply(lambda x: int(x))
pd_tmp.to_csv('abalone_pre_scale.data',index=False,header=False,sep=" ")
train_set = pd_tmp[0:train_length]
test_set = pd_tmp[train_length:]
train_set.to_csv('abalone_pre_scale_train.data',index=False,header=False,sep=" ")
test_set.to_csv('abalone_pre_scale_test.data',index=False,header=False,sep=" ")
preprocess()
os.system("./svm-scale -s scaling_paras/scale_par.txt abalone_pre_scale_train.data > abalone_scaled_train.data")
os.system("./svm-scale -r scaling_paras/scale_par.txt abalone_pre_scale_test.data > abalone_scaled_test.data")
```
## Question 3
```
import scipy
from libsvm.svmutil import *
y, x = svm_read_problem('abalone_scale', return_scipy = True)
# Train & Test split
train_length = 3133
y_train, x_train = y[0:train_length], x[0:train_length, :]
y_test, x_test = y[train_length:], x[train_length:, :]
# Transform the train dataset into binary classification problem
new_label = np.ones(train_length,dtype=np.float64)
ind_smaller_than_10 = np.where(y_train<=9)[0]
new_label[ind_smaller_than_10]= np.float64(0) # new_label is y_test now
train_dataset = np.array(list(map(lambda x,y: (x,y), x_train,new_label)))
# Transform the test dataset into binary classification problem
new_label_test = np.ones(len(y)-train_length,dtype=np.float64)
ind_smaller_than_10_test = np.where(y_test<=9)[0]
new_label_test[ind_smaller_than_10_test]= np.float64(0) # new_label is y_test now
y_test = new_label_test
# Generate k-fold split
k = 5
lst_k_fold_dataset = []
np.random.shuffle(train_dataset)
ind_increment = int(np.floor(len(train_dataset)/k))
start_ind = 0
end_ind = ind_increment
for i in range(k):
lst_k_fold_dataset.append(train_dataset[start_ind:end_ind])
start_ind += ind_increment
end_ind += ind_increment
ploy_degree_lst = [1, 2, 3, 4, 5]
k_val_sup = 9
kernel_type = "1"
lst_acc_ploy_degree = []
for ploy_degree in ploy_degree_lst:
lst_acc_k_val = []
for k_val in range(-k_val_sup, k_val_sup): # C = -3k to 3k
lst_cross_validation_acc = []
for ind, train_set in enumerate(lst_k_fold_dataset):
X_test_tmp = train_set[:,0]
y_test_tmp = train_set[:,1]
X_tmp = np.concatenate([lst_k_fold_dataset[i] for i,x in enumerate(lst_k_fold_dataset) if i!=ind])[:,0]
y_tmp = np.concatenate([lst_k_fold_dataset[i] for i,x in enumerate(lst_k_fold_dataset) if i!=ind])[:,1]
m = svm_train(y_tmp, scipy.sparse.vstack(X_tmp),"-c %s -t %s -d %s -q" % (3**k_val, kernel_type, ploy_degree))
p_label, p_acc, p_val = svm_predict(y_test_tmp, scipy.sparse.vstack(X_test_tmp), m)
lst_cross_validation_acc.append(p_acc[1])
p_acc_cross_validated = np.mean(lst_cross_validation_acc)
lst_acc_k_val.append(lst_cross_validation_acc)
print(f"Poly_Degree: {ploy_degree} | k: {k_val} | p_error_cross_validated is {p_acc_cross_validated}")
lst_acc_ploy_degree.append(lst_acc_k_val)
import matplotlib.pyplot as plt
import math
lst_cross_validated_error = []
for i in range(len(lst_acc_ploy_degree)):
plt.figure()
mean_val_lst = list(map(lambda x: np.mean(x), lst_acc_ploy_degree[i]))
std_val_lst = list(map(lambda x: np.std(x), lst_acc_ploy_degree[i]))
lst_cross_validated_error.append(mean_val_lst)
plt.plot([math.log(3**k_val,3) for k_val in range(-k_val_sup, k_val_sup)],mean_val_lst, label="mean")
plt.plot([math.log(3**k_val,3) for k_val in range(-k_val_sup, k_val_sup)],np.add(mean_val_lst,std_val_lst))#, label="mean")
plt.plot([math.log(3**k_val,3) for k_val in range(-k_val_sup, k_val_sup)],np.subtract(mean_val_lst,std_val_lst))#, label="mean")
plt.title(f"d={ploy_degree_lst[i]}")
plt.xlabel("The value of k in C=3^k")
plt.ylabel("Cross Validation Error")
plt.legend()
```
## Question 4
```
argmin_ind = np.unravel_index(np.argmin(np.array(lst_cross_validated_error)), np.array(lst_cross_validated_error).shape)
k_lst = [i for i in range(-k_val_sup,k_val_sup)]
best_d = ploy_degree_lst[argmin_ind[0]]
best_k = k_lst[argmin_ind[1]]
best_C = 3**(k_lst[argmin_ind[1]])
print(f"The minimum cross validation error is {np.min(np.array(lst_cross_validated_error))}")
print(f"The index of the minimum cross validation error is {argmin_ind}")
print(f"The best C: {best_C} | The best d: {best_d}")
# Calculate test error
sv_lst = []
lst_test_error = []
for i in range(len(ploy_degree_lst)):
best_m = svm_train(new_label, x_train,"-c %s -t %s -d %s -q" % (best_C, kernel_type, ploy_degree_lst[i]))
p_label, p_acc, p_val = svm_predict(y_test, x_test, best_m)
lst_test_error.append(p_acc[1])
sv_lst.append(best_m.get_nr_sv())
lst_cross_err_for_best_C = []
for i in range(len(ploy_degree_lst)):
ind_C = np.where(np.array(k_lst)==best_k)[0][0]
cross_err = lst_acc_ploy_degree[i][ind_C]
mean_cross_err = np.mean(cross_err)
std_cross_err = np.std(cross_err)
lst_cross_err_for_best_C.append([mean_cross_err,std_cross_err])
# Cross Validation Error
plt.figure()
plt.plot(ploy_degree_lst,np.array(lst_cross_err_for_best_C)[:,0],label = "mean")
plt.plot(ploy_degree_lst,np.add(np.array(lst_cross_err_for_best_C)[:,0], np.array(lst_cross_err_for_best_C)[:,1]))
plt.plot(ploy_degree_lst,np.subtract(np.array(lst_cross_err_for_best_C)[:,0], np.array(lst_cross_err_for_best_C)[:,1]))
plt.legend()
plt.xlabel("The polynomial degress")
plt.ylabel("Cross Validation Errors")
plt.title("Cross Validation Error for the best C=C^*")
# Test Error
plt.figure()
plt.plot(ploy_degree_lst, lst_test_error)
plt.xlabel("The polynomial degress")
plt.ylabel("Test Errors")
plt.title("Test Error for the best C=C^*")
# Numbers of support vectors
plt.figure()
plt.plot(ploy_degree_lst, sv_lst)
plt.xlabel("The polynomial degress")
plt.ylabel("Numbers of support vectors")
plt.title("Numbers of support vectors for the best C=C^*")
```
## Question 5
```
lst_train_err = []
lst_test_err = []
for num_exs in range(1,train_length,500):
best_m = svm_train(new_label[:num_exs], x_train[:num_exs],"-c %s -t %s -d %s -q" % (best_C, kernel_type, best_d))
p_label, p_acc_train, p_val = svm_predict(new_label[:num_exs], x_train[:num_exs], best_m)
p_label, p_acc_test, p_val = svm_predict(y_test, x_test, best_m)
lst_train_err.append(p_acc_train[1])
lst_test_err.append(p_acc_test[1])
plt.figure()
plt.plot([num_exs for num_exs in range(1,train_length,500)], lst_train_err, label="train error")
plt.plot([num_exs for num_exs in range(1,train_length,500)], lst_test_err, label="test error")
plt.legend()
plt.xlabel("Numbers of training examples")
plt.ylabel("Error")
plt.title("The training and test errors at best C and best d")
```
## Question 6 c)
For the hinge loss minimization problem of SVM, we consider the sklearn library implementation.
By sklearn documentation, the `SGDClassifier` trained with the hinge loss using Stochastic Gradient Descent is equivalent to a linear SVM.
```
from sklearn.metrics import accuracy_score
from sklearn.linear_model import SGDClassifier
clf = SGDClassifier(loss="hinge")
# Test Error
clf.fit(x_train, new_label)
pred_svm_test = clf.predict(x_test)
svm_hinge_acc_Test = accuracy_score(y_test,pred_svm_test)
# Five-Folds cross validation error
lst_svm_hinge_cross_validation_acc = []
for ind, train_set in enumerate(lst_k_fold_dataset):
X_test_tmp = train_set[:,0]
y_test_tmp = train_set[:,1]
X_tmp = np.concatenate([lst_k_fold_dataset[i] for i,x in enumerate(lst_k_fold_dataset) if i!=ind])[:,0]
y_tmp = np.concatenate([lst_k_fold_dataset[i] for i,x in enumerate(lst_k_fold_dataset) if i!=ind])[:,1]
clf.fit(scipy.sparse.vstack(X_tmp),np.array(y_tmp,dtype='float'))
pred_svm_hinge = clf.predict(scipy.sparse.vstack(X_test_tmp))
svm_hinge_acc = accuracy_score(np.array(y_test_tmp,dtype='float'),pred_svm_hinge)
lst_svm_hinge_cross_validation_acc.append(svm_hinge_acc)
import matplotlib.pyplot as plt
plt.figure()
plt.scatter([1,2,3,4,5],[svm_hinge_acc_Test]*5,label="Test")
plt.scatter([1,2,3,4,5],[1-i for i in lst_svm_hinge_cross_validation_acc], label="Five Folds")
plt.xlabel("Trial numbers")
plt.ylabel("Cross Validation Error")
plt.legend()
plt.title("Five Fold Cross Validation Error for SVM with Hinge Loss trained using SGD")
```
|
github_jupyter
|
```
#Modified version of the following script from nilearn:
#https://nilearn.github.io/auto_examples/03_connectivity/plot_group_level_connectivity.html
from nilearn import datasets
from tqdm.notebook import tqdm
abide_dataset = datasets.fetch_abide_pcp(n_subjects=200)
abide_dataset.keys()
from nilearn import input_data
msdl_data = datasets.fetch_atlas_msdl()
masker = input_data.NiftiMapsMasker(
msdl_data.maps, resampling_target="data", t_r=2, detrend=True,
low_pass=.1, high_pass=.01, memory='nilearn_cache', memory_level=1).fit()
pooled_subjects = []
groups = []
for func_file, dx in tqdm(zip(abide_dataset['func_preproc'], abide_dataset['phenotypic']['DX_GROUP'])):
time_series = masker.transform(func_file)
pooled_subjects.append(time_series)
groups.append(dx)
print(f'Dataset has {len(pooled_subjects)} subjects')
n_regions = pooled_subjects[0].shape[1]
def sym_matrix_to_vec(symmetric):
tril_mask = np.tril(np.ones(symmetric.shape[-2:]), k=-1).astype(np.bool)
return symmetric[..., tril_mask]
def compute_dtw(subjects, n_regions):
dtw_output = []
for subj in subjects:
dtw_output.append(
rust_dtw.dtw_connectome(
connectome=subj,
window=100,
distance_mode="euclidean")
)
connectomes = []
#Post processing them as per paper recommendations
for vec in dtw_output:
sym = np.zeros((n_regions, n_regions))
sym[i_lower] = vec
sym += sym.T
sym *= -1
StandardScaler().fit_transform(sym)
connectomes.append(sym_matrix_to_vec(sym))
return connectomes
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
from nilearn.connectome import ConnectivityMeasure
import matplotlib.pyplot as plt
import rust_dtw
import numpy as np
import copy
kinds = ['dtw', 'correlation', 'partial correlation', 'tangent']
# kinds = ['correlation']
_, classes = np.unique(groups, return_inverse=True)
cv = StratifiedShuffleSplit(n_splits=15, random_state=0, test_size=5)
pooled_subjects = np.asarray(pooled_subjects)
scores = {}
for kind in kinds:
print('PROCESSING: ', kind)
scores[kind] = []
for train, test in cv.split(pooled_subjects, classes):
if kind == 'dtw':
connectomes = compute_dtw(pooled_subjects[train], n_regions)
test_connectomes = compute_dtw(pooled_subjects[test], n_regions)
else:
connectivity = ConnectivityMeasure(kind=kind, vectorize=True)
connectomes = connectivity.fit_transform(pooled_subjects[train])
test_connectomes = connectivity.transform(pooled_subjects[test])
classifier = LinearSVC(max_iter=10000).fit(connectomes, classes[train])
# make predictions for the left-out test subjects
predictions = classifier.predict(test_connectomes)
# store the accuracy for this cross-validation fold
scores[kind].append(accuracy_score(classes[test], predictions))
import matplotlib.pyplot as plt
import seaborn
plt.style.use('seaborn-white')
seaborn.set_context('poster')
mean_scores = [np.mean(scores[kind]) for kind in kinds]
print(list(zip(mean_scores, kinds) ))
scores_std = [np.std(scores[kind]) for kind in kinds]
plt.figure(figsize=(15, 10))
positions = np.arange(len(kinds)) * .1 + .1
plt.barh(positions, mean_scores, align='center', height=.05, xerr=scores_std)
yticks = [k.replace(' ', '\n') for k in kinds]
plt.yticks(positions, yticks)
plt.gca().grid(True)
plt.gca().set_axisbelow(True)
plt.xlabel('Classification accuracy')
plt.tight_layout()
plt.savefig('accuracy.png', bbox_inches="tight", dpi=300)
```
|
github_jupyter
|
# Recommendations with IBM
In this notebook, you will be putting your recommendation skills to use on real data from the IBM Watson Studio platform.
You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way assure that your code passes the project [RUBRIC](https://review.udacity.com/#!/rubrics/2322/view). **Please save regularly.**
By following the table of contents, you will build out a number of different methods for making recommendations that can be used for different situations.
## Table of Contents
I. [Exploratory Data Analysis](#Exploratory-Data-Analysis)<br>
II. [Rank Based Recommendations](#Rank)<br>
III. [User-User Based Collaborative Filtering](#User-User)<br>
IV. [Content Based Recommendations (EXTRA - NOT REQUIRED)](#Content-Recs)<br>
V. [Matrix Factorization](#Matrix-Fact)<br>
VI. [Extras & Concluding](#conclusions)
At the end of the notebook, you will find directions for how to submit your work. Let's get started by importing the necessary libraries and reading in the data.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import project_tests as t
import pickle
%matplotlib inline
df = pd.read_csv('data/user-item-interactions.csv')
df_content = pd.read_csv('data/articles_community.csv')
del df['Unnamed: 0']
del df_content['Unnamed: 0']
# Show df to get an idea of the data
df.head()
df.shape
df.article_id = df.article_id.astype('str')
# Show df_content to get an idea of the data
df_content.head()
```
### <a class="anchor" id="Exploratory-Data-Analysis">Part I : Exploratory Data Analysis</a>
Use the dictionary and cells below to provide some insight into the descriptive statistics of the data.
`1.` What is the distribution of how many articles a user interacts with in the dataset? Provide a visual and descriptive statistics to assist with giving a look at the number of times each user interacts with an article.
```
count_interactions = df.groupby('email').count()['article_id']
plt.hist(count_interactions, bins=30);
plt.title('Number of times each user interacts with an article');
plt.xlabel('Number of Interactions');
plt.ylabel('Number of Articles');
# Fill in the median and maximum number of user_article interactios below
median_val = df['email'].value_counts().median() # 50% of individuals interact with ____ number of articles or fewer.
max_views_by_user = df['email'].value_counts().max()# The maximum number of user-article interactions by any 1 user is ______.
```
`2.` Explore and remove duplicate articles from the **df_content** dataframe.
```
# Find and explore duplicate articles
df_content[df_content.duplicated()]
# Remove any rows that have the same article_id - only keep the first
df_content.drop_duplicates(subset=['article_id'],keep='first',inplace=True)
```
`3.` Use the cells below to find:
**a.** The number of unique articles that have an interaction with a user.
**b.** The number of unique articles in the dataset (whether they have any interactions or not).<br>
**c.** The number of unique users in the dataset. (excluding null values) <br>
**d.** The number of user-article interactions in the dataset.
```
unique_articles = len(pd.unique(df['article_id']))# The number of unique articles that have at least one interaction
total_articles = len(pd.unique(df_content['article_id']))# The number of unique articles on the IBM platform
unique_users = len(pd.unique(df['email'].dropna()))# The number of unique users
user_article_interactions = len(df)# The number of user-article interactions
```
`4.` Use the cells below to find the most viewed **article_id**, as well as how often it was viewed. After talking to the company leaders, the `email_mapper` function was deemed a reasonable way to map users to ids. There were a small number of null values, and it was found that all of these null values likely belonged to a single user (which is how they are stored using the function below).
```
most_viewed_article_id = str(df.article_id.value_counts().axes[0][0])# The most viewed article in the dataset as a string with one value following the decimal
max_views = df.article_id.value_counts().max()# The most viewed article in the dataset was viewed how many times?
## No need to change the code here - this will be helpful for later parts of the notebook
# Run this cell to map the user email to a user_id column and remove the email column
def email_mapper():
coded_dict = dict()
cter = 1
email_encoded = []
for val in df['email']:
if val not in coded_dict:
coded_dict[val] = cter
cter+=1
email_encoded.append(coded_dict[val])
return email_encoded
email_encoded = email_mapper()
del df['email']
df['user_id'] = email_encoded
# show header
df.head()
## If you stored all your results in the variable names above,
## you shouldn't need to change anything in this cell
sol_1_dict = {
'`50% of individuals have _____ or fewer interactions.`': median_val,
'`The total number of user-article interactions in the dataset is ______.`': user_article_interactions,
'`The maximum number of user-article interactions by any 1 user is ______.`': max_views_by_user,
'`The most viewed article in the dataset was viewed _____ times.`': max_views,
'`The article_id of the most viewed article is ______.`': most_viewed_article_id,
'`The number of unique articles that have at least 1 rating ______.`': unique_articles,
'`The number of unique users in the dataset is ______`': unique_users,
'`The number of unique articles on the IBM platform`': total_articles
}
# Test your dictionary against the solution
t.sol_1_test(sol_1_dict)
```
### <a class="anchor" id="Rank">Part II: Rank-Based Recommendations</a>
Unlike in the earlier lessons, we don't actually have ratings for whether a user liked an article or not. We only know that a user has interacted with an article. In these cases, the popularity of an article can really only be based on how often an article was interacted with.
`1.` Fill in the function below to return the **n** top articles ordered with most interactions as the top. Test your function using the tests below.
```
def get_top_articles(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article titles
'''
top_articles = df['title'].value_counts().axes[0][:n].tolist()
return top_articles # Return the top article titles from df (not df_content)
def get_top_article_ids(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article titles
'''
top_articles = df['article_id'].value_counts().axes[0][:n].tolist()
return top_articles # Return the top article ids
print(get_top_articles(10))
print(get_top_article_ids(10))
# Test your function by returning the top 5, 10, and 20 articles
top_5 = get_top_articles(5)
top_10 = get_top_articles(10)
top_20 = get_top_articles(20)
# Test each of your three lists from above
t.sol_2_test(get_top_articles)
```
### <a class="anchor" id="User-User">Part III: User-User Based Collaborative Filtering</a>
`1.` Use the function below to reformat the **df** dataframe to be shaped with users as the rows and articles as the columns.
* Each **user** should only appear in each **row** once.
* Each **article** should only show up in one **column**.
* **If a user has interacted with an article, then place a 1 where the user-row meets for that article-column**. It does not matter how many times a user has interacted with the article, all entries where a user has interacted with an article should be a 1.
* **If a user has not interacted with an item, then place a zero where the user-row meets for that article-column**.
Use the tests to make sure the basic structure of your matrix matches what is expected by the solution.
```
# create the user-article matrix with 1's and 0's
def create_user_item_matrix(df):
'''
INPUT:
df - pandas dataframe with article_id, title, user_id columns
OUTPUT:
user_item - user item matrix
Description:
Return a matrix with user ids as rows and article ids on the columns with 1 values where a user interacted with
an article and a 0 otherwise
'''
user_item = df.groupby(['user_id', 'article_id']).apply(lambda x:1).unstack().fillna(0)
return user_item # return the user_item matrix
user_item = create_user_item_matrix(df)
## Tests: You should just need to run this cell. Don't change the code.
assert user_item.shape[0] == 5149, "Oops! The number of users in the user-article matrix doesn't look right."
assert user_item.shape[1] == 714, "Oops! The number of articles in the user-article matrix doesn't look right."
assert user_item.sum(axis=1)[1] == 36, "Oops! The number of articles seen by user 1 doesn't look right."
print("You have passed our quick tests! Please proceed!")
user_item.head(2)
```
`2.` Complete the function below which should take a user_id and provide an ordered list of the most similar users to that user (from most similar to least similar). The returned result should not contain the provided user_id, as we know that each user is similar to him/herself. Because the results for each user here are binary, it (perhaps) makes sense to compute similarity as the dot product of two users.
Use the tests to test your function.
```
def find_similar_users(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user_id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
similar_users - (list) an ordered list where the closest users (largest dot product users)
are listed first
Description:
Computes the similarity of every pair of users based on the dot product
Returns an ordered
'''
# # compute similarity of each user to the provided user
similarity = []
for user in range(1, user_item.shape[0]+1):
sim = np.dot(np.array(user_item.loc[user_id]),np.array(user_item.loc[user]))
similarity.append((user, sim))
# sort by similarity
similarity.sort(key=lambda x: x[1], reverse=True)
# create list of just the ids
most_similar_users =[item[0] for item in similarity]
# remove the own user's id
most_similar_users.remove(user_id)
return most_similar_users # return a list of the users in order from most to least similar
# Do a spot check of your function
print("The 10 most similar users to user 1 are: {}".format(find_similar_users(1)[:10]))
print("The 5 most similar users to user 3933 are: {}".format(find_similar_users(3933)[:5]))
print("The 3 most similar users to user 46 are: {}".format(find_similar_users(46)[:3]))
```
`3.` Now that you have a function that provides the most similar users to each user, you will want to use these users to find articles you can recommend. Complete the functions below to return the articles you would recommend to each user.
```
def get_article_names(article_ids, df=df):
'''
INPUT:
article_ids - (list) a list of article ids
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
article_names - (list) a list of article names associated with the list of article ids
(this is identified by the title column)
'''
# Your code here
article_names = []
for article in article_ids:
article_names.append(df[df.article_id == article]['title'].values[0])
return article_names # Return the article names associated with list of article ids
def get_user_articles(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
article_ids - (list) a list of the article ids seen by the user
article_names - (list) a list of article names associated with the list of article ids
(this is identified by the doc_full_name column in df_content)
Description:
Provides a list of the article_ids and article titles that have been seen by a user
'''
article_ids = user_item.loc[user_id][user_item.loc[user_id] == 1].index.tolist()
article_names = get_article_names(article_ids)
return article_ids, article_names # return the ids and names
def user_user_recs(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and provides them as recs
Does this until m recommendations are found
Notes:
Users who are the same closeness are chosen arbitrarily as the 'next' user
For the user where the number of recommended articles starts below m
and ends exceeding m, the last items are chosen arbitrarily
'''
recs = set()
most_similar_users = find_similar_users(user_id)
seen_aritcle,_ = get_user_articles(user_id)
for user in most_similar_users:
neighbs_likes,_ = get_user_articles(user)
new_recs = np.setdiff1d(neighbs_likes, seen_aritcle, assume_unique=True)
for item in new_recs:
recs.add(item)
if len(recs) > m-1:
return recs
return recs # return your recommendations for this user_id
# Check Results
get_article_names(user_user_recs(1, 10)) # Return 10 recommendations for user 1
# Test your functions here - No need to change this code - just run this cell
assert set(get_article_names(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_article_names(['1320.0', '232.0', '844.0'])) == set(['housing (2015): united states demographic measures','self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_user_articles(20)[0]) == set(['1320.0', '232.0', '844.0'])
assert set(get_user_articles(20)[1]) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook'])
assert set(get_user_articles(2)[0]) == set(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])
assert set(get_user_articles(2)[1]) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis'])
print("If this is all you see, you passed all of our tests! Nice job!")
```
`4.` Now we are going to improve the consistency of the **user_user_recs** function from above.
* Instead of arbitrarily choosing when we obtain users who are all the same closeness to a given user - choose the users that have the most total article interactions before choosing those with fewer article interactions.
* Instead of arbitrarily choosing articles from the user where the number of recommended articles starts below m and ends exceeding m, choose articles with the articles with the most total interactions before choosing those with fewer total interactions. This ranking should be what would be obtained from the **top_articles** function you wrote earlier.
```
def get_article_order(user_id,counts):
'''
INPUT:
user_id - (int)
counts - (pandas Series) value_counts
OUTPUT:
top_articles - (list) A list of article id sorted by interactions
'''
ids,titles = get_user_articles(user_id)
dic = {}
for this_id in ids:
dic[this_id] = counts[this_id]
dic_sort = sorted(dic.items(), key=lambda d:d[1], reverse = True)
article_order = [item[0] for item in dic_sort]
return article_order
def get_top_sorted_users(user_id, df=df, user_item=user_item):
'''
INPUT:
user_id - (int)
df - (pandas dataframe) df as defined at the top of the notebook
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
neighbors_df - (pandas dataframe) a dataframe with:
neighbor_id - is a neighbor user_id
similarity - measure of the similarity of each user to the provided user_id
num_interactions - the number of articles viewed by the user - if a u
Other Details - sort the neighbors_df by the similarity and then by number of interactions where
highest of each is higher in the dataframe
'''
neighbors_df = pd.DataFrame()
neighbor_id = []
similarity = []
num_interactions = []
for user in df.user_id:
if user == user_id:
continue
neighbor_id.append(user)
score = np.dot(np.array(user_item.loc[user_id]),np.array(user_item.loc[user]).T)
similarity.append(score)
interactions = len(df[df.user_id == user])
num_interactions.append(interactions)
neighbors_df = pd.DataFrame({'neighbor_id':neighbor_id,'similarity':similarity,'num_interactions':num_interactions})
neighbors_df = neighbors_df.sort_values(by=['similarity','num_interactions'])
return neighbors_df # Return the dataframe specified in the doc_string
def user_user_recs_part2(user_id, m=10,df=df):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user by article id
rec_names - (list) a list of recommendations for the user by article title
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and provides them as recs
Does this until m recommendations are found
Notes:
* Choose the users that have the most total article interactions
before choosing those with fewer article interactions.
* Choose articles with the articles with the most total interactions
before choosing those with fewer total interactions.
'''
recs = []
neighbors_df = get_top_sorted_users(user_id)
seen_aritcle,_ = get_user_articles(user_id)
neighbors_id = neighbors_df['neighbor_id'].values
counts = df.article_id.value_counts()
for neighbor in neighbors_id:
if len(recs) == m:
break
new_rec = get_article_order(neighbor, counts)
for this_rec in new_rec:
if this_rec not in recs:
recs.append(this_rec)
if len(recs) == m:
break
rec_names = get_article_names(recs)
return recs, rec_names
# Quick spot check - don't change this code - just use it to test your functions
rec_ids, rec_names = user_user_recs_part2(20, 10)
print("The top 10 recommendations for user 20 are the following article ids:")
print(rec_ids)
print()
print("The top 10 recommendations for user 20 are the following article names:")
print(rec_names)
```
`5.` Use your functions from above to correctly fill in the solutions to the dictionary below. Then test your dictionary against the solution. Provide the code you need to answer each following the comments below.
```
### Tests with a dictionary of results
user1_most_sim = find_similar_users(1)[0]# Find the user that is most similar to user 1
user131_10th_sim = find_similar_users(131)[9]# Find the 10th most similar user to user 131
## Dictionary Test Here
sol_5_dict = {
'The user that is most similar to user 1.': user1_most_sim,
'The user that is the 10th most similar to user 131': user131_10th_sim,
}
t.sol_5_test(sol_5_dict)
```
`6.` If we were given a new user, which of the above functions would you be able to use to make recommendations? Explain. Can you think of a better way we might make recommendations? Use the cell below to explain a better method for new users.
A new user has no reading records,so it isn't able to find what the user like base on records.I will use get_top_articles to recommend some popular articles.I think a better way to impove make recommendations is to categorize articles, and randomly recommend top articles of every categorize ,this help us have more understanding of the user.
`7.` Using your existing functions, provide the top 10 recommended articles you would provide for the a new user below. You can test your function against our thoughts to make sure we are all on the same page with how we might make a recommendation.
```
new_user = '0.0'
# What would your recommendations be for this new user '0.0'? As a new user, they have no observed articles.
# Provide a list of the top 10 article ids you would give to
new_user_recs = df['article_id'].value_counts().axes[0][:10].tolist()# Your recommendations here
assert set(new_user_recs) == set(['1314.0','1429.0','1293.0','1427.0','1162.0','1364.0','1304.0','1170.0','1431.0','1330.0']), "Oops! It makes sense that in this case we would want to recommend the most popular articles, because we don't know anything about these users."
print("That's right! Nice job!")
```
### <a class="anchor" id="Matrix-Fact">Part IV: Matrix Factorization</a>
In this part of the notebook, you will build use matrix factorization to make article recommendations to the users on the IBM Watson Studio platform.
`1.` You should have already created a **user_item** matrix above in **question 1** of **Part III** above. This first question here will just require that you run the cells to get things set up for the rest of **Part V** of the notebook.
```
# Load the matrix here
user_item_matrix = pd.read_pickle('user_item_matrix.p')
# quick look at the matrix
user_item_matrix.head()
```
`2.` In this situation, you can use Singular Value Decomposition from [numpy](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.svd.html) on the user-item matrix. Use the cell to perform SVD, and explain why this is different than in the lesson.
```
# Perform SVD on the User-Item Matrix Here
u, s, vt = np.linalg.svd(user_item_matrix)# use the built in to get the three matrices
```
in the lessson ,there are missing values in matrix,but in this project , if users and articles don't have interacted with each other,the values is 0,so matrix has no missing values and the code won't cause any error.
**Provide your response here.**
`3.` Now for the tricky part, how do we choose the number of latent features to use? Running the below cell, you can see that as the number of latent features increases, we obtain a lower error rate on making predictions for the 1 and 0 values in the user-item matrix. Run the cell below to get an idea of how the accuracy improves as we increase the number of latent features.
```
num_latent_feats = np.arange(10,700+10,20)
sum_errs = []
for k in num_latent_feats:
# restructure with k latent features
s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :]
# take dot product
user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new))
# compute error for each prediction to actual value
diffs = np.subtract(user_item_matrix, user_item_est)
# total errors and keep track of them
err = np.sum(np.sum(np.abs(diffs)))
sum_errs.append(err)
plt.plot(num_latent_feats, 1 - np.array(sum_errs)/df.shape[0]);
plt.xlabel('Number of Latent Features');
plt.ylabel('Accuracy');
plt.title('Accuracy vs. Number of Latent Features');
```
`4.` From the above, we can't really be sure how many features to use, because simply having a better way to predict the 1's and 0's of the matrix doesn't exactly give us an indication of if we are able to make good recommendations. Instead, we might split our dataset into a training and test set of data, as shown in the cell below.
Use the code from question 3 to understand the impact on accuracy of the training and test sets of data with different numbers of latent features. Using the split below:
* How many users can we make predictions for in the test set?
* How many users are we not able to make predictions for because of the cold start problem?
* How many movies can we make predictions for in the test set?
* How many movies are we not able to make predictions for because of the cold start problem?
```
df_train = df.head(40000)
df_test = df.tail(5993)
def create_test_and_train_user_item(df_train, df_test):
'''
INPUT:
df_train - training dataframe
df_test - test dataframe
OUTPUT:
user_item_train - a user-item matrix of the training dataframe
(unique users for each row and unique articles for each column)
user_item_test - a user-item matrix of the testing dataframe
(unique users for each row and unique articles for each column)
test_idx - all of the test user ids
test_arts - all of the test article ids
'''
user_item_train = create_user_item_matrix(df_train)
train_idx = user_item_train.index.tolist()
train_arts = user_item_train.columns.tolist()
user_item_test = create_user_item_matrix(df_test)
test_idx = user_item_test.index.tolist()
test_arts = user_item_test.columns.tolist()
# choose user_ids and article_ids that have exit in train data
common_idx = np.intersect1d(train_idx,test_idx)
common_arts = np.intersect1d(train_arts,test_arts)
user_item_test = user_item_test.loc[common_idx,common_arts]
return user_item_train, user_item_test, test_idx, test_arts
user_item_train, user_item_test, test_idx, test_arts = create_test_and_train_user_item(df_train, df_test)
user_item_test.shape[0]
print(len(test_idx)-user_item_test.shape[0])
print(len(test_arts))
print(len(test_arts)-user_item_test.shape[1])
# Replace the values in the dictionary below
a = 662
b = 574
c = 20
d = 0
sol_4_dict = {
'How many users can we make predictions for in the test set?':c,
'How many users in the test set are we not able to make predictions for because of the cold start problem?': a,
'How many movies can we make predictions for in the test set?': b,
'How many movies in the test set are we not able to make predictions for because of the cold start problem?': d
}
t.sol_4_test(sol_4_dict)
```
`5.` Now use the **user_item_train** dataset from above to find **U**, **S**, and **V** transpose using SVD. Then find the subset of rows in the **user_item_test** dataset that you can predict using this matrix decomposition with different numbers of latent features to see how many features makes sense to keep based on the accuracy on the test data. This will require combining what was done in questions `2` - `4`.
Use the cells below to explore how well SVD works towards making predictions for recommendations on the test data.
```
# fit SVD on the user_item_train matrix
u_train, s_train, vt_train = np.linalg.svd(user_item_train)# fit svd similar to above then use the cells below
user_item_test.head(2)
def get_error(s,u,v,user_item,k=100):
'''
INPUT:
s,u,v: result of svd training
user_item:user_item matrices
k: - (int) numbers of latent feature
OUTPUT:
err - error
'''
s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], v[:k, :]
# take dot product
user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new))
# compute error for each prediction to actual value
diffs = np.subtract(user_item, user_item_est)
# total errors and keep track of them
err = np.sum(np.sum(np.abs(diffs)))/(diffs.shape[0] * diffs.shape[1])
return err
# decomposition to predict on test data
# restructure with k latent features
num_latent_feats = np.arange(10,700+10,20)
predict_list = []
train_list = []
for k in num_latent_feats:
# get test result
article_index = []
for article in user_item_test.columns.tolist():
article_index.append(user_item_train.columns.get_loc(article))
user_index = []
for user in user_item_test.index.tolist():
user_index.append(user_item_train.index.get_loc(user))
u_test = u_train[user_index,:]
vt_test = vt_train[:,article_index]
# training and testing
predict = 1 - get_error(s_train,u_test,vt_test,user_item=user_item_test,k=k)
train = 1- get_error(s_train,u_train,vt_train,user_item=user_item_train,k=k)
predict_list.append(predict)
train_list.append(train)
plt.plot(num_latent_feats,predict_list,num_latent_feats,train_list,'r')
plt.legend(('test', 'trainr'), loc='center right')
plt.title('training and testing accuracy of different K')
plt.xlabel('numbers of latent features')
plt.ylabel('Accuracy')
plt.show()
```
`6.` Use the cell below to comment on the results you found in the previous question. Given the circumstances of your results, discuss what you might do to determine if the recommendations you make with any of the above recommendation systems are an improvement to how users currently find articles?
- training error and testing error are both very high,because the data is sparse,most labels are 0.
- Fisrt,If the user is new ,I would use get_top_articles to recommend articles for the user.Otherwise,I will find the most similar users based on the articles users have read,and recommend articles for the user which similar users have read but the user haven't.
- Then,I am going to use A/B test to find out weather my recommendation systems works.I will ramdomly choose two groups of users.One group read articles from the old system,and one group receive articles from my recommendation system.Finaly I will compare two groups of click rates.
<a id='conclusions'></a>
### Extras
Using your workbook, you could now save your recommendations for each user, develop a class to make new predictions and update your results, and make a flask app to deploy your results. These tasks are beyond what is required for this project. However, from what you learned in the lessons, you certainly capable of taking these tasks on to improve upon your work here!
## Conclusion
> Congratulations! You have reached the end of the Recommendations with IBM project!
> **Tip**: Once you are satisfied with your work here, check over your report to make sure that it is satisfies all the areas of the [rubric](https://review.udacity.com/#!/rubrics/2322/view). You should also probably remove all of the "Tips" like this one so that the presentation is as polished as possible.
## Directions to Submit
> Before you submit your project, you need to create a .html or .pdf version of this notebook in the workspace here. To do that, run the code cell below. If it worked correctly, you should get a return code of 0, and you should see the generated .html file in the workspace directory (click on the orange Jupyter icon in the upper left).
> Alternatively, you can download this report as .html via the **File** > **Download as** submenu, and then manually upload it into the workspace directory by clicking on the orange Jupyter icon in the upper left, then using the Upload button.
> Once you've done this, you can submit your project by clicking on the "Submit Project" button in the lower right here. This will create and submit a zip file with this .ipynb doc and the .html or .pdf version you created. Congratulations!
```
from subprocess import call
call(['python', '-m', 'nbconvert', 'Recommendations_with_IBM.ipynb'])
```
|
github_jupyter
|
## 1. Google Play Store apps and reviews
<p>Mobile apps are everywhere. They are easy to create and can be lucrative. Because of these two factors, more and more apps are being developed. In this notebook, we will do a comprehensive analysis of the Android app market by comparing over ten thousand apps in Google Play across different categories. We'll look for insights in the data to devise strategies to drive growth and retention.</p>
<p><img src="https://assets.datacamp.com/production/project_619/img/google_play_store.png" alt="Google Play logo"></p>
<p>Let's take a look at the data, which consists of two files:</p>
<ul>
<li><code>apps.csv</code>: contains all the details of the applications on Google Play. There are 13 features that describe a given app.</li>
<li><code>user_reviews.csv</code>: contains 100 reviews for each app, <a href="https://www.androidpolice.com/2019/01/21/google-play-stores-redesigned-ratings-and-reviews-section-lets-you-easily-filter-by-star-rating/">most helpful first</a>. The text in each review has been pre-processed and attributed with three new features: Sentiment (Positive, Negative or Neutral), Sentiment Polarity and Sentiment Subjectivity.</li>
</ul>
```
# Read in dataset
import pandas as pd
import pandas as pd
apps_with_duplicates = pd.read_csv('datasets/apps.csv')
# Drop duplicates
apps = apps_with_duplicates.drop_duplicates()
# Print the total number of apps
print('Total number of apps in the dataset = ', apps.shape[0])
# Have a look at a random sample of 5 entries
n = 5
apps.sample(n)
```
## 2. Data cleaning
<p>The four features that we will be working with most frequently henceforth are <code>Installs</code>, <code>Size</code>, <code>Rating</code> and <code>Price</code>. The <code>info()</code> function (from the previous task) told us that <code>Installs</code> and <code>Price</code> columns are of type <code>object</code> and not <code>int64</code> or <code>float64</code> as we would expect. This is because the column contains some characters more than just [0,9] digits. Ideally, we would want these columns to be numeric as their name suggests. <br>
Hence, we now proceed to data cleaning and prepare our data to be consumed in our analyis later. Specifically, the presence of special characters (<code>, $ +</code>) in the <code>Installs</code> and <code>Price</code> columns make their conversion to a numerical data type difficult.</p>
```
# List of characters to remove
chars_to_remove = ['+',",","$"]
# List of column names to clean
cols_to_clean = ["Installs","Price"]
# Loop for each column
for col in cols_to_clean:
# Replace each character with an empty string
for char in chars_to_remove:
apps[col] = apps[col].astype(str).str.replace(char, '')
# Convert col to numeric
apps[col] = pd.to_numeric( apps[col])
```
## 3. Exploring app categories
<p>With more than 1 billion active users in 190 countries around the world, Google Play continues to be an important distribution platform to build a global audience. For businesses to get their apps in front of users, it's important to make them more quickly and easily discoverable on Google Play. To improve the overall search experience, Google has introduced the concept of grouping apps into categories.</p>
<p>This brings us to the following questions:</p>
<ul>
<li>Which category has the highest share of (active) apps in the market? </li>
<li>Is any specific category dominating the market?</li>
<li>Which categories have the fewest number of apps?</li>
</ul>
<p>We will see that there are <code>33</code> unique app categories present in our dataset. <em>Family</em> and <em>Game</em> apps have the highest market prevalence. Interestingly, <em>Tools</em>, <em>Business</em> and <em>Medical</em> apps are also at the top.</p>
```
import plotly
plotly.offline.init_notebook_mode(connected=True)
import plotly.graph_objs as go
# Print the total number of unique categories
num_categories = len(apps["Category"].unique())
print('Number of categories = ', num_categories)
# Count the number of apps in each 'Category' and sort them in descending order
num_apps_in_category = apps["Category"].value_counts().sort_values(ascending = False)
data = [go.Bar(
x = num_apps_in_category.index, # index = category name
y = num_apps_in_category.values, # value = count
)]
plotly.offline.iplot(data)
```
## 4. Distribution of app ratings
<p>After having witnessed the market share for each category of apps, let's see how all these apps perform on an average. App ratings (on a scale of 1 to 5) impact the discoverability, conversion of apps as well as the company's overall brand image. Ratings are a key performance indicator of an app.</p>
<p>From our research, we found that the average volume of ratings across all app categories is <code>4.17</code>. The histogram plot is skewed to the right indicating that the majority of the apps are highly rated with only a few exceptions in the low-rated apps.</p>
```
# Average rating of apps
avg_app_rating = apps['Rating'].mean()
print('Average app rating = ', avg_app_rating)
# Distribution of apps according to their ratings
data = [go.Histogram(
x = apps['Rating']
)]
# Vertical dashed line to indicate the average app rating
layout = {'shapes': [{
'type' :'line',
'x0': avg_app_rating,
'y0': 0,
'x1': avg_app_rating,
'y1': 1000,
'line': { 'dash': 'dashdot'}
}]
}
plotly.offline.iplot({'data': data, 'layout': layout})
```
## 5. Size and price of an app
<p>Let's now examine app size and app price. For size, if the mobile app is too large, it may be difficult and/or expensive for users to download. Lengthy download times could turn users off before they even experience your mobile app. Plus, each user's device has a finite amount of disk space. For price, some users expect their apps to be free or inexpensive. These problems compound if the developing world is part of your target market; especially due to internet speeds, earning power and exchange rates.</p>
<p>How can we effectively come up with strategies to size and price our app?</p>
<ul>
<li>Does the size of an app affect its rating? </li>
<li>Do users really care about system-heavy apps or do they prefer light-weighted apps? </li>
<li>Does the price of an app affect its rating? </li>
<li>Do users always prefer free apps over paid apps?</li>
</ul>
<p>We find that the majority of top rated apps (rating over 4) range from 2 MB to 20 MB. We also find that the vast majority of apps price themselves under \$10.</p>
```
%matplotlib inline
import seaborn as sns
sns.set_style("darkgrid")
import warnings
warnings.filterwarnings("ignore")
# Filter rows where both Rating and Size values are not null
apps_with_size_and_rating_present = apps[(~apps["Rating"].isnull()) & (~apps["Size"].isnull())]
# Subset for categories with at least 250 apps
large_categories = apps_with_size_and_rating_present.groupby("Category").filter(lambda x: len(x) >= 250).reset_index()
# Plot size vs. rating
plt1 = sns.jointplot(x = large_categories["Size"], y = large_categories["Rating"], kind = 'hex')
# Subset apps whose 'Type' is 'Paid'
paid_apps = apps_with_size_and_rating_present[apps_with_size_and_rating_present["Type"] == "Paid"]
# Plot price vs. rating
plt2 = sns.jointplot(x = paid_apps["Price"], y = paid_apps["Rating"])
```
## 6. Relation between app category and app price
<p>So now comes the hard part. How are companies and developers supposed to make ends meet? What monetization strategies can companies use to maximize profit? The costs of apps are largely based on features, complexity, and platform.</p>
<p>There are many factors to consider when selecting the right pricing strategy for your mobile app. It is important to consider the willingness of your customer to pay for your app. A wrong price could break the deal before the download even happens. Potential customers could be turned off by what they perceive to be a shocking cost, or they might delete an app they’ve downloaded after receiving too many ads or simply not getting their money's worth.</p>
<p>Different categories demand different price ranges. Some apps that are simple and used daily, like the calculator app, should probably be kept free. However, it would make sense to charge for a highly-specialized medical app that diagnoses diabetic patients. Below, we see that <em>Medical and Family</em> apps are the most expensive. Some medical apps extend even up to \$80! All game apps are reasonably priced below \$20.</p>
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
fig.set_size_inches(15, 8)
# Select a few popular app categories
popular_app_cats = apps[apps.Category.isin(['GAME', 'FAMILY', 'PHOTOGRAPHY',
'MEDICAL', 'TOOLS', 'FINANCE',
'LIFESTYLE','BUSINESS'])]
# Examine the price trend by plotting Price vs Category
ax = sns.stripplot(x = popular_app_cats["Price"], y = popular_app_cats["Category"], jitter=True, linewidth=1)
ax.set_title('App pricing trend across categories')
# Apps whose Price is greater than 200
apps_above_200 = popular_app_cats[['Category', 'App', 'Price']][popular_app_cats["Price"] > 200]
apps_above_200
```
## 7. Filter out "junk" apps
<p>It looks like a bunch of the really expensive apps are "junk" apps. That is, apps that don't really have a purpose. Some app developer may create an app called <em>I Am Rich Premium</em> or <em>most expensive app (H)</em> just for a joke or to test their app development skills. Some developers even do this with malicious intent and try to make money by hoping people accidentally click purchase on their app in the store.</p>
<p>Let's filter out these junk apps and re-do our visualization. The distribution of apps under \$20 becomes clearer.</p>
```
# Select apps priced below $100
apps_under_100 =popular_app_cats[popular_app_cats["Price"]<100]
fig, ax = plt.subplots()
fig.set_size_inches(15, 8)
# Examine price vs category with the authentic apps
ax = sns.stripplot(x=apps_under_100["Price"], y=apps_under_100["Category"], data=apps_under_100,
jitter=True, linewidth=1)
ax.set_title('App pricing trend across categories after filtering for junk apps')
```
## 8. Popularity of paid apps vs free apps
<p>For apps in the Play Store today, there are five types of pricing strategies: free, freemium, paid, paymium, and subscription. Let's focus on free and paid apps only. Some characteristics of free apps are:</p>
<ul>
<li>Free to download.</li>
<li>Main source of income often comes from advertisements.</li>
<li>Often created by companies that have other products and the app serves as an extension of those products.</li>
<li>Can serve as a tool for customer retention, communication, and customer service.</li>
</ul>
<p>Some characteristics of paid apps are:</p>
<ul>
<li>Users are asked to pay once for the app to download and use it.</li>
<li>The user can't really get a feel for the app before buying it.</li>
</ul>
<p>Are paid apps installed as much as free apps? It turns out that paid apps have a relatively lower number of installs than free apps, though the difference is not as stark as I would have expected!</p>
```
trace0 = go.Box(
# Data for paid apps
y=apps[apps['Type'] == 'Paid']['Installs'],
name = 'Paid'
)
trace1 = go.Box(
# Data for free apps
y=apps[apps['Type'] == 'Free']['Installs'],
name = 'Free'
)
layout = go.Layout(
title = "Number of downloads of paid apps vs. free apps",
yaxis = dict(
type = 'log',
autorange = True
)
)
# Add trace0 and trace1 to a list for plotting
data = [trace0,trace1]
plotly.offline.iplot({'data': data, 'layout': layout})
```
## 9. Sentiment analysis of user reviews
<p>Mining user review data to determine how people feel about your product, brand, or service can be done using a technique called sentiment analysis. User reviews for apps can be analyzed to identify if the mood is positive, negative or neutral about that app. For example, positive words in an app review might include words such as 'amazing', 'friendly', 'good', 'great', and 'love'. Negative words might be words like 'malware', 'hate', 'problem', 'refund', and 'incompetent'.</p>
<p>By plotting sentiment polarity scores of user reviews for paid and free apps, we observe that free apps receive a lot of harsh comments, as indicated by the outliers on the negative y-axis. Reviews for paid apps appear never to be extremely negative. This may indicate something about app quality, i.e., paid apps being of higher quality than free apps on average. The median polarity score for paid apps is a little higher than free apps, thereby syncing with our previous observation.</p>
<p>In this notebook, we analyzed over ten thousand apps from the Google Play Store. We can use our findings to inform our decisions should we ever wish to create an app ourselves.</p>
```
# Load user_reviews.csv
reviews_df = pd.read_csv('datasets/user_reviews.csv')
# Join and merge the two dataframe
merged_df = pd.merge(apps, reviews_df, on = 'App', how = "inner")
# Drop NA values from Sentiment and Translated_Review columns
merged_df = merged_df.dropna(subset=['Sentiment', 'Translated_Review'])
sns.set_style('ticks')
fig, ax = plt.subplots()
fig.set_size_inches(11, 8)
# User review sentiment polarity for paid vs. free apps
ax = sns.boxplot(x = 'Type', y = 'Sentiment_Polarity', data = merged_df)
ax.set_title('Sentiment Polarity Distribution')
reviews_df
```
|
github_jupyter
|
<center>
<h1>Fetal Health Classification</h1>
<img src="https://blog.pregistry.com/wp-content/uploads/2018/08/AdobeStock_90496738.jpeg">
<small>Source: Google</small>
</center>
<p>
Fetal mortality refers to stillbirths or fetal death. It encompasses any death of a fetus after 20 weeks of gestation.
Cardiotocograms (CTGs) are a simple and cost accessible option to assess fetal health, allowing healthcare professionals to take action in order to prevent child and maternal mortality.
Cardiotocography is a technical means of recording the fetal heartbeat and the uterine contractions during pregnancy. It is most commonly used in the third trimester and its purpose is to monitor fetal well-being and allow early detection of fetal distress. An abnormal CTG may indicate the need for further investigations and potential intervention.
</p>
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('../Datasets/fetal_health.csv')
```
| Variable symbol | Variable description|
| ----------------|---------------------|
|LB | Fetal heart rate baseline (beats per minute)|
|AC | Number of accelerations per second|
|FM | Number of fetal movements per second|
|UC | Number of uterine contractions per second|
|DL | Number of light decelerations per second|
|DS | Number of severe decelerations per second|
|DP | Number of prolonged decelerations per second|
|ASTV | Percentage of time with abnormal short-term variability|
|MSTV | Mean value of short-term variability|
|ALTV | Percentage of time with abnormal long-term variability|
|MLTV | Mean value of long-term variability|
|Width | Width of FHR histogram|
|Min | Minimum of FHR histogram|
|Max | Maximum of FHR histogram|
|Nmax | Number of histogram peaks|
|Nzeros | Number of histogram zeroes|
|Mode | Histogram mode|
|Median | Histogram median|
|Variance | Histogram variance|
|Tendency | Histogram tendency|
|NSP | Fetal state class code (N=Normal, S=Suspected,P=Pathological)|
Reference: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6822315/
```
df.head()
df.info()
df.describe()
df.isna().sum()
```
Thankfully, there are no NaN values in the dataset.
```
sns.countplot(x='fetal_health', data=df)
print(df['fetal_health'].value_counts())
```
We can see that there is the problem of class imbalance in this dataset. This means we cannot use **accuracy** as a metric to evaluate the performance of our model. The most appropiate metric for model evaluation can be:
1. F1 Score
2. Recall
3. Precision
Before diving deep into understanding the data and features, let us first look at what does the three different categories of fetal_health represent. Please refer to the table below for the same.
Reference: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4812878/

```
corr = df.corr()
plt.figure(figsize=(24, 20))
sns.heatmap(corr, annot=True)
plt.title("Correlation Matrix")
plt.show()
```
From the above correlation matrix, we can observe that the following features show some correlation with target variable fetal health:
1. accelerations (negative corr)
2. uterine contractions (negative corr)
3. prolonged_decelerations (positive corr)
4. abnormal short term variability (positive corr)
5. percentage of time with abnormal long term variability (positive corr)
## Model Selection
```
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report, f1_score, recall_score, precision_score
print("There are total "+str(len(df))+" rows in the dataset")
X = df.drop(["fetal_health"],axis=1)
Y = df["fetal_health"]
std_scale = StandardScaler()
X_sc = std_scale.fit_transform(X)
X_train, X_test, y_train,y_test = train_test_split(X_sc, Y, test_size=0.25, random_state=42)
print("There are total "+str(len(X_train))+" rows in training dataset")
print("There are total "+str(len(X_test))+" rows in test dataset")
```
If you remember, in the initial investigation of the data, we found out that we have imbalanced classes.
To handle the problem of imbalanced classes, we can use oversampling techniques. In oversampling, we populate the minority classes with some synthetic data.
Let us try some oversampling techniques and judge their performance on the above dataset.
1. SMOTE Technique
```
from imblearn.over_sampling import SMOTE
smt = SMOTE()
X_train_sm, y_train_sm = smt.fit_resample(X_train, y_train)
```
2. ADASYN
```
from imblearn.over_sampling import ADASYN
ada = ADASYN(random_state=130)
X_train_ada, y_train_ada = ada.fit_resample(X_train, y_train)
```
3. SMOTE + Tomek Links
```
from imblearn.combine import SMOTETomek
smtom = SMOTETomek(random_state=139)
X_train_smtom, y_train_smtom = smtom.fit_resample(X_train, y_train)
```
4. SMOTE + ENN
```
from imblearn.combine import SMOTEENN
smenn = SMOTEENN()
X_train_smenn, y_train_smenn = smenn.fit_resample(X_train, y_train)
def evaluate_model(clf, X_test, y_test, model_name, oversample_type):
print('--------------------------------------------')
print('Model ', model_name)
print('Data Type ', oversample_type)
y_pred = clf.predict(X_test)
f1 = f1_score(y_test, y_pred, average='weighted')
recall = recall_score(y_test, y_pred, average='weighted')
precision = precision_score(y_test, y_pred, average='weighted')
print(classification_report(y_test, y_pred))
print("F1 Score ", f1)
print("Recall ", recall)
print("Precision ", precision)
return [model_name, oversample_type, f1, recall, precision]
models = {
'DecisionTrees': DecisionTreeClassifier(random_state=42),
'RandomForest':RandomForestClassifier(random_state=42),
'LinearSVC':LinearSVC(random_state=0),
'AdaBoostClassifier':AdaBoostClassifier(random_state=42),
'SGD':SGDClassifier()
}
oversampled_data = {
'ACTUAL':[X_train, y_train],
'SMOTE':[X_train_sm, y_train_sm],
'ADASYN':[X_train_ada, y_train_ada],
'SMOTE_TOMEK':[X_train_smtom, y_train_smtom],
'SMOTE_ENN':[X_train_smenn, y_train_smenn]
}
final_output = []
for model_k, model_clf in models.items():
for data_type, data in oversampled_data.items():
model_clf.fit(data[0], data[1])
final_output.append(evaluate_model(model_clf, X_test, y_test, model_k, data_type))
final_df = pd.DataFrame(final_output, columns=['Model', 'DataType', 'F1', 'Recall', 'Precision'])
final_df.sort_values(by="F1", ascending=False)
```
### Hyperparameter Tuning
```
param_grid = {
'criterion':['gini', 'entropy'],
'max_depth': [10, 20, 40, 80, 100],
'max_features': ['auto', 'sqrt'],
'n_estimators': [200, 400, 600, 800, 1000, 2000]
}
rfc = RandomForestClassifier(random_state=42)
rfc_cv = GridSearchCV(estimator=rfc, param_grid=param_grid, cv=5, verbose=2)
rfc_cv.fit(X_train_smtom, y_train_smtom)
rfc_cv.best_params_
rf = RandomForestClassifier(n_estimators=2000, criterion='entropy', max_depth=20, max_features='auto')
rf.fit(X_train_smtom, y_train_smtom)
evaluate_model(rf, X_test, y_test, 'RandomForest', 'SMOTE+TOMEK')
import pickle
filename = 'fetal-health-model.pkl'
pickle.dump(rf, open(filename, 'wb'))
```
|
github_jupyter
|
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import cm
data = pd.read_csv('AB_NYC_2019.csv')
data.head()
```
Printing the columns of the dataset, as well as their types. This is an important step because depending of the type of
data that we have, the treatment that we have to perform differs.
```
data.columns
data.describe()
data.pivot_table(index='neighbourhood_group',columns='room_type',values='price',aggfunc='mean')
```
The relation among neighbourhood group, room type and price give a geneneral idea about the data.
```
data.drop(columns=['id', 'name', 'host_id', 'host_name'], inplace=True)
#visualize the categorical values for the neighbourhood_group
plt.figure(figsize=(6,4))
count_neigh = data.neighbourhood_group.value_counts()
(count_neigh/data.shape[0]).plot(kind='bar');
plt.title('Percent of neighbourhood group', fontsize = 12)
plt.ylabel('percent', fontsize = 12)
plt.xlabel('neighbourhood group', fontsize = 12)
```
In the listed neighborhood group the amount of business in Manhattan is largest and in Staten Island is the smallest. This is expected because in Manhattan and Brooklyn the amount of tourist is higher than the other because of their diiferent attarction of tourists in the area.
```
#visualize the categorical values for the room_type
plt.figure(figsize=(7,5))
count_room = data.room_type.value_counts()
(count_room/data.shape[0]).plot(kind='bar');
plt.title('room_type')
plt.ylabel('the percent of every room type')
```
This variable is an indication of the human prefence with the nature of house and status of privacy. In all nighborhood the amount of people serving in home/apartment and private room is higher than shared room. Shared room business in this specific area is generally not significance.
```
# reference: https://seaborn.pydata.org/generated/seaborn.catplot.html
plt.figure(figsize=(12,12))
sns.set_context("paper")
sns.set(style="darkgrid", font_scale=.9)
sns.catplot(x="room_type", y="price", data=data);
data.groupby('room_type')[['price','number_of_reviews']].mean()
```
The Entire rooms/apartments cost are more than just a shared room, where as the differece in price of a shared room and a private one is about 20 dollars. In contrast, the number of viewrs are higher in private room than the other. This small price differece between private room and shared room may lead to prefer the private room.
The overall price in the entire home is more expensive than the others. The price of the shared price is lower than the others.
```
def plot_price_wrt_room_type(data,title):
data2 = data.pivot(columns='room_type',values='price')
x1=list(data2[data2.columns[0]])
x2=list(data2[data2.columns[1]])
x3=list(data2[data2.columns[2]])
plt.figure(figsize=(8, 6))
plt.rc('legend',**{'fontsize':12})
plt.legend(fontsize=15)
plt.rcParams['figure.figsize']=(15,8)
plt.style.use(style='ggplot')
plt.tick_params(labelsize=12)
plt.tick_params(labelsize=12)
plt.ylabel("Count",fontsize=12,color='black')
plt.xlabel("Price",fontsize=12,color='black')
plt.title(title,fontsize=12,color='black')
plt.legend(prop={'size': 10})
n_bins=12
colors = ['orange', 'aqua', 'green']
labels=[data2.columns[0],data2.columns[1],data2.columns[2]]
plt.hist([x1, x2, x3], n_bins, histtype='bar',
color=colors, range=[0,300],label=labels,alpha=1)
plt.legend(loc="upper right")
plt.show()
title='Price distribution with respect to room type'
plot_price_wrt_room_type(data,title)
```
Price distribution with room type is an indicator for the preference of the business by the customer. Hence, the graph indicates guests more prefer the private room and which it's price is relatively under the average of the others.
```
def plot_price_wrt_neigbourhood_group(data,title):
data2 = data.pivot(columns='neighbourhood_group',values='price')
x1=list(data2[data2.columns[0]])
x2=list(data2[data2.columns[1]])
x3=list(data2[data2.columns[2]])
x4=list(data2[data2.columns[3]])
x5=list(data2[data2.columns[4]])
plt.figure(figsize=(9, 7))
plt.style.use(style='ggplot')
plt.rc('legend',**{'fontsize':12})
plt.tick_params(labelsize=25)
plt.legend(fontsize=20)
plt.rcParams['figure.figsize']=(15,8)
plt.ylabel("Count",fontsize=12,color='black')
plt.xlabel("Price",fontsize=12,color='black')
plt.title(title,fontsize=12,color='black')
plt.legend(prop={'size': 8})
plt.tick_params(labelsize=12)
n_bins=12
colors = ['yellow', 'red', 'green', 'black', 'blue']
labels=[data2.columns[0],data2.columns[1],data2.columns[2], data2.columns[3], data2.columns[4]]
plt.hist([x1, x2, x3, x4, x5], n_bins, histtype='bar',
color=colors, range=[0,400],label=labels,alpha=1)
plt.legend(loc="upper right")
plt.show()
title='plot_price_wrt_neigbourhood_group'
plot_price_wrt_neigbourhood_group(data,title)
```
Neighbourhood Manhattan has highest price with high demand.The area is highly visited by the investor and having high demand and high price is expected.
```
#plt.subplot2grid((2,3), (0,0))
#data.room_type[data.neighbourhood_group == "..... "].value_counts(normalize = True).plot(kind = "bar", alpha= 0.5)
#plt.title("room_type with neigbourhood_group")
plt.figure(figsize=(8,6))
data.boxplot(column='price', return_type='axes')
plt.show()
```
This plot indicates the overall price distribution. The plot indicates the majority of the record is above the average price.
```
plt.figure(figsize = (9, 6))
plt.plot(data.groupby(['neighbourhood_group'])['price'].mean().keys(),data.groupby(['neighbourhood_group'])['price'].mean().values,'o')
plt.title('Newyork City Average Airbnb price per region')
plt.ylabel('Price')
plt.xlabel('Neighbourhood group')
color = ['DarkBlue']
plt.show()
```
As we see that the average price in Manhattan is higher than in other regions. Manhattan is the most populated of Newyork City. it is among the world's major commerical, financial and cultural ceters. because of this the area has high tourists.
```
#boxplot neighbourhood_group and room availability
sns.set(style='whitegrid', rc={"grid.linewidth": 0.1})
sns.set_context("paper", font_scale=0.9)
plt.figure(figsize=(10,10))
plt.tight_layout()
sns.despine(left=True, bottom=True)
plt.savefig('test.pdf', bbox_inches='tight')
df1 = sns.boxplot(data=data, x='neighbourhood_group',y='availability_365',palette='inferno')
```
Availability of one the key aprameter for this business and for anyother business. As the plot indicates there is a no enough supply for all neighborhood group for 365 days.
### Neighbourhood gorup colored by price
```
# reference: https://plot.ly/python/line-and-scatter/
regions_dict = {value: i for i,value in enumerate(data.neighbourhood_group.unique())}
reverse_regions_dict = {i:v for v,i in regions_dict.items()}
data = data.applymap(lambda s: regions_dict.get(s) if s in regions_dict.keys() else s)
# reference: https://plot.ly/python/line-and-scatter/
plt.figure(figsize=(12, 5))
plt.scatter(data.latitude,data.longitude, c = data.neighbourhood_group,cmap='magma')
plt.title('New York city map colored by neighboorhoud_group')
#plt.legend()
plt.show()
```
This graph is inlustrates another quantitative variables(longitude and latitude) with neighborhood group.
```
for i,region in enumerate(data.groupby(['neighbourhood_group'])['price'].mean().keys()):
NY_data = data[data.neighbourhood_group == region]
plt.figure(figsize=(8, 5))
# xxx, sub = plt.subplots(1, 2)
plt.scatter(NY_data.latitude, NY_data.longitude, c = NY_data.price,cmap='PuBuGn')
plt.title('{} Prices'.format(reverse_regions_dict[region]))
plt.colorbar()
plt.show()
```
When we look at the amount of owners per region, it is interesting to notice that in the regions where the rent price is
higher, the number of owners is also higher. That is probably due to the fact that as the apartments cost more in such regions
is harder to find owners of two or more houses there.
```
sns.heatmap(data[['latitude','longitude','price','minimum_nights','availability_365','number_of_reviews']].corr(),annot=True)
plt.show()
```
As seen in the correlation matrix, homes that are more available tend to have more reviews, which is natural,
since as the place is avaliable more days in a year more people can rent it.
|
github_jupyter
|
<a href="https://colab.research.google.com/github/jchen42703/MathResearchQHSS/blob/lipreading-temp/lipreading/Lipreading_Training_Demo_[Cleaner].ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!apt install ffmpeg
! pip install ffmpeg sk-video
```
# IO
* Uploading data through google drive; ask jchen42703@gmail.com or j.chen3@qhss.org to share it with you.
```
import os
!mkdir s1
!mkdir s1_align
os.listdir()
!pip install -U -q PyDrive
s1_folder_id = '1B8cIYz6ljEbjYapG6EW1NrKHgcpZ2HzD'
s1_align_folder_id = '1swcuyFY-ZdNgFBou5GiigfEXsURcPO63'
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# 1. Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# uploading all of speaker 1's .mpg files.
%cd '/content/s1'
!pwd
file_list = drive.ListFile({'q': "'{}' in parents and trashed=false".format(s1_folder_id)}).GetList()
for file1 in sorted(file_list, key = lambda x: x['title']):
print('Downloading {} from GDrive'.format(file1['title'])) # i+1, len(file_list)))
file1.GetContentFile(file1['title'])
# uploading all of the corresponding align files (labels)
cd '/content/s1_align'
!pwd
file_list = drive.ListFile({'q': "'{}' in parents and trashed=false".format(s1_align_folder_id)}).GetList()
for file1 in sorted(file_list, key = lambda x: x['title']):
print('Downloading {} from GDrive'.format(file1['title'])) # i+1, len(file_list)))
file1.GetContentFile(file1['title'])
# check directories
%cd '/content/'
# os.listdir('/content'), os.listdir('/content/training'), os.listdir('/content/labels')
```
## Model + IO
```
! rm -r MathResearchQHSS
! git clone -b lipreading https://github.com/jchen42703/MathResearchQHSS.git
%cd MathResearchQHSS
!pip install -r requirements.txt
%cd '/content/'
from MathResearchQHSS.lipreading.io.generator import FrameGenerator
from MathResearchQHSS.lipreading.models import LipNet
from MathResearchQHSS.lipreading.io.io_utils import get_list_IDs
import os
from glob import glob
import skvideo
old = (75, 576, 720, 3)
new = (75, 192, 240,3)
s1_path = '/content/s1/'
s1_align_path = '/content/s1_align/'
# initializing generators
list_IDs = get_list_IDs(s1_path, val_split = 0.8)
data_dirs = [s1_path, s1_align_path]
train_gen = FrameGenerator(list_IDs['train'], data_dirs, batch_size = 1, resize_shape = new )
val_gen = FrameGenerator(list_IDs['val'], data_dirs, batch_size = 1, resize_shape = new)
# gen.__getitem__(1)
# quick testing of generators
inp, out = train_gen.__getitem__(1)
inp['the_input'].shape
# initializing model (takes a while)
lipnet = LipNet(img_w = new[1], img_h = new[2])
lipnet.summary()
# compiles model
from keras.optimizers import Adam
adam = Adam(lr=3e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
# the loss calc occurs elsewhere, so use a dummy lambda func for the loss
lipnet.model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=adam)
# training
n_epochs = 20
max_queue_size = 1
lipnet.model.fit_generator(generator = train_gen, epochs = n_epochs, max_queue_size = max_queue_size, workers = 2, use_multiprocessing = True)
model.save_weights('train_weights_'+str(n_epochs)+'.h5')
# Adding CLR
! git clone https://github.com/jchen42703/CLR.git
# initializing the cyclical learning rate callback
from CLR import clr_callback
clr = clr_callback.CyclicLR(base_lr=1e-5, max_lr=3e-3,
step_size=1000., # originally 1000
mode = 'triangular')
#mode='exp_range', gamma=0.99994)
callbacks = [clr]
# training
n_epochs = 20
max_queue_size = 1
lipnet.model.fit_generator(generator = train_gen, epochs = n_epochs, callbacks = callbacks, max_queue_size = max_queue_size, workers = 2, use_multiprocessing = True)
```
|
github_jupyter
|
# What Drives MLB Game Attendance?
## Background
### Find Data
* Step 1 - Identify and Source Data
* Step 2 - Perform ETL on the Data:
* Extract: original data sources and how the data was formatted (CSV, JSON, MySQL, etc).
* Transform: what data cleaning or transformation was required.
* Load: the final database, tables/collections, and why this was chosen.
### Data Cleanup & Analysis
* Document the following:
* The sources of data that you will extract from.
* The type of transformation needed for this data (cleaning, joining, filtering, aggregating, etc).
* The type of final production database to load the data into (relational or non-relational).
* The final tables or collections that will be used in the production database.
## Objectives
* Variable Set 1 - Game Attendance & Experience 2013 - 2016:
* Major League Baseball Attendance by Team/Stadium
* Major League Baseball Beer Prices by Team/Stadium
* Variable Set II - Team & Players 2013 - 2016:
* Major League Baseball Team Offensive Output (Homeruns & RBI's)
* Major League Baseball Average Player Salary by Team
```
# Import Dependencies
import pandas as pd
import numpy as np
import matplotlib
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import seaborn as sns
# Python SQL Toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, inspect, func
from sqlalchemy.ext.declarative import declarative_base
```
### Store CSV into DataFrame
```
# Files to Load
baseball_data_to_load = "./Resources/baseballdata.csv"
mlb_team_data_to_load = "./Resources/mlb_teams.csv"
mlb_beer_price_to_load = "./Resources/mlb_beer_prices.csv"
# Read All CSV's
baseball_data = pd.read_csv(baseball_data_to_load)
mlb_team_data = pd.read_csv(mlb_team_data_to_load)
mlb_beer_price = pd.read_csv(mlb_beer_price_to_load)
# Combine/Merge the DataFrames Into a Single Dataset Based on the Team Names They Share & Year
mlb_df = pd.merge(baseball_data, mlb_team_data, left_on=["Team Name", "Year"], right_on=["Team Name","Year"])
# Display Data Table for Preview
mlb_df.head()
# Combine/Merge the 3rd DataFrame With Previously Merged DataFrame
# Into a Single Dataset Based on the Team Names They Share & Year
combined_mlb_df = pd.merge(mlb_beer_price, mlb_df, left_on=["Team Name", "Year"], right_on=["Team Name","Year"])
# Display Data Table for Preview
combined_mlb_df.head()
```
### Create New Data with Select Columns
```
# Clean DataFrame & Get Only the Data Needed
new_combined_mlb_df = combined_mlb_df[["Team Name", "Year", "HR", "RBI", "salary", "PPO", "Attendance"]].copy()
new_combined_mlb_df.head()
# Convert Attendance From String to a Float
new_combined_mlb_df.iloc[:,6] = new_combined_mlb_df.iloc[:,6].str.replace(',', '').astype(float)
new_combined_mlb_df.head()
# Import Pandas DataFrame to PostgreSQL
engine = create_engine("postgresql://postgres:password@localhost:5432/baseball_df")
updated_mlb_df.to_sql("baseball_df", engine)
# Create Engine and Pass in Postgres Connection
# Setup to Connect to Database
engine = create_engine("postgres://postgres:password@localhost:5432/baseball_df")
conn = engine.connect()
# Calculate Average Team Attendance From 2013-2016
attendance_2013 = engine.execute('SELECT "Team Name","Attendance" FROM baseball_df WHERE "Year"=2013').fetchall()
attendance_2014 = engine.execute('SELECT "Team Name","Attendance" FROM baseball_df WHERE "Year"=2014').fetchall()
attendance_2015 = engine.execute('SELECT "Team Name","Attendance" FROM baseball_df WHERE "Year"=2015').fetchall()
attendance_2016 = engine.execute('SELECT "Team Name","Attendance" FROM baseball_df WHERE "Year"=2016').fetchall()
# Calculate Average Team Attendance vs. Average Beer Price per Ounce (PPO) for 4 Years
beer = pd.read_sql('SELECT "Team Name", AVG("Attendance") AS "Attendance", AVG("PPO") AS "PPO" FROM baseball_df GROUP BY "Team Name"', conn)
# Calculate Average Team Attendance vs. Average HR's + Average RBI's AS Offensive Output for 4 Years
offense = pd.read_sql('SELECT "Team Name", AVG("Attendance") AS "Attendance", AVG("HR" + "RBI") AS "Offensive Output" FROM baseball_df GROUP BY "Team Name"', conn)
# Calculate Average Team Attendance vs. Average Team Salary for 4 Years
salary = pd.read_sql('SELECT "Team Name", AVG("Attendance") AS "Attendance", AVG("salary") AS "Salary" FROM baseball_df GROUP BY "Team Name"', conn)
```
### Visualizations with Matplotlib
```
# Plot 2013 Attendance Results in a Matplotlib Bar Chart
df_2013 = pd.DataFrame(attendance_2013, columns=["Team Name","Attendance"])
df_2013.set_index("Team Name", inplace=True)
df_2013.plot.bar(title="2013 MLB Team Attendance", figsize=(12,8))
plt.savefig("./Images/2013_MLB_Team_Attendance.png")
plt.show()
# Plot 2014 Attendance Results in a Matplotlib Bar Chart
df_2014 = pd.DataFrame(attendance_2014, columns=["Team Name","Attendance"])
df_2014.set_index("Team Name", inplace=True)
df_2014.plot.bar(title="2014 MLB Team Attendance", figsize=(12,8))
plt.savefig("./Images/2014_MLB_Team_Attendance.png")
plt.show()
# Plot 2015 Attendance Results in a Matplotlib Bar Chart
df_2015 = pd.DataFrame(attendance_2015, columns=["Team Name","Attendance"])
df_2015.set_index("Team Name", inplace=True)
df_2015.plot.bar(title="2015 MLB Team Attendance", figsize=(12,8))
plt.savefig("./Images/2015_MLB_Team_Attendance.png")
plt.show()
# Plot 2016 Attendance Results in a Matplotlib Bar Chart
df_2016 = pd.DataFrame(attendance_2016, columns=["Team Name","Attendance"])
df_2016.set_index("Team Name", inplace=True)
df_2016.plot.bar(title="2016 MLB Team Attendance", figsize=(12,8))
plt.savefig("./Images/2016_MLB_Team_Attendance.png")
plt.show()
# Create Scatterplot with Regression Showing Average Beer Price per Ounce (PPO) vs. Attendance for 4 years
fig, ax = plt.subplots()
fig.set_size_inches(12, 8)
sns.regplot(x="PPO", y="Attendance", data=beer, ax=ax).set_title("Attendance vs. Beer Price (2013-2016)")
plt.savefig("./Images/Attendance_vs_Beer_Price.png")
# Create Scatterplot with Regression Showing Average HR's + Average RBI's AS Offensive Output vs. Attendance for 4 Years
fig, ax = plt.subplots()
fig.set_size_inches(12, 8)
sns.regplot(x="Offensive Output", y="Attendance", data=offense, ax=ax).set_title("Attendance vs. Offensive Output (2013-2016)")
plt.savefig("./Images/Attendance_vs_Offensive_Output.png")
# Create Scatterplot with Regression Showing Average Team Attendance vs. Average Team Salary for 4 Years
fig, ax = plt.subplots()
fig.set_size_inches(12, 8)
sns.regplot(x="Salary", y="Attendance", data=salary, ax=ax).set_title("Attendance vs. Salary (2013-2016)")
plt.savefig("./Images/Attendance_vs_Salary.png")
```
### Observations
* From 2013-2016, MLB team attendance improved for some teams and decreased for others. For example, the St. Louis Cardinals and the Toronto Blue Jays saw an increase in attendance from 2013-2016 while the Oakland A's and Tampa Bay Rays saw decreases.
* It seems as if lower beer prices did not affect MLB attendance but offensive output, however, affected MLB attendance but only slightly. Beer prices might be lower but fans would rather see more offense.
* The big correlation was between player salary and attendance. The "Attendance vs. Salary" scatterplot indicates that the more a team is willing to spend on players, most likely super stars and/or all-stars, the more willing fans are likely to attend games.
|
github_jupyter
|
# HHVM
## 背景介绍
HHVM 是 Facebook (现 Meta) 开发的高性能 PHP 虚拟机,宣称达到了官方解释器的 9x 性能
### 为什么会有 HHVM
#### 脚本语言
##### Pros
一般我们使用脚本语言 (Perl,Python,PHP,JavaScript)是为了以下几个目的
1. 大部分的脚本语言都拥有较为完备的外部库,能够帮助开发者快速的开发/测试
- 使用 Python 作为 ebt 的技术栈也是因为 `numpy`, `pandas` 等数据科学库的支持比别的编程语言更加的完备
2. 动态语言的特性使得开发过程变得异常轻松,可以最大程度的实现可复用性和多态性,打个比方
- Python
```python
def evaluate(model_impl, params):
return model_impl.calculate(params)
class Model(object):
def calculate(params):
sum_val = 0
for param in params:
sum_val += param
return sum_val
```
- C++
```cpp
class IModel {
public:
virtual double calculate(const vector<double> ¶ms) = 0;
virtual int calculate(const vector<int> ¶ms) = 0;
}
class Model : public IModel {
public:
double calculate(const vector<double> ¶ms) {
double sum_val = 0;
for (vector<double>::iterator it = params.begin(); it != params.end(); ++it) {
sum_val += *it;
}
return sum_val;
}
int calculate(const vector<int> ¶ms) {
int sum_val = 0;
for (vector<int>::iterator it = params.begin(); it != params.end(); ++it) {
sum_val += *it;
}
return sum_val;
}
}
double evaluate(IModel* model_impl, const vector<double> ¶ms) {
return model_impl->calculate(params);
}
int evaluate(IModel* model_impl, const vector<int> ¶ms) {
return model_impl->calculate(params);
}
```
- 模版
```cpp
// This is ok but template is not a general feature for all static typed language
template <typename T>
T evaluate(IModel* model_impl, const vector<T> ¶ms) {
return model_impl->calculate<T>(params);
}
template <typename T>
T::value_type evaluate(IModel* model_impl, const T& params) {
return model_impl->calculate<T>(params);
}
```
3. 动态语言完全是解释执行,调试成本较低。每当改动源码有所改动后,程序重新运行更加直接,只需要解释器重新读取源码即可。编译性语言需要更多的步骤与时间,例如 C++,为了从源码生成可执行程序需要 链接静态库 -> .obj -> 链接动态库 -> 可执行程序。如果是大型项目开发的话这一步骤甚至会花费几个小时。而解释执行的程序可以不需要这些步骤直接重新运行
##### Cons
但是对于有较高性能需求的 situation,编译执行反而会成为拖累。
> Although easy to implement, interpreters are generally slow, which makes scripting language prohibitive for implementing large, CPU-intensive applications. (Zhao, 2021)
Debian 有一个 [benchmark game](https://benchmarksgame-team.pages.debian.net/benchmarksgame/index.html),比较了目前比较常见的几种编程语言的运行速度/内存占用/源码大小
```
import pandas as pd
import matplotlib.pyplot as plt
benchmarks = pd.read_csv('./data/programming_language_benchmarks_game_all_measurements.csv')
benchmarks.head(10)
compile_lang_lst = ['clang', 'csharpcore', 'csharppgo', 'gcc', 'gfortran', 'go', 'gpp', 'java', 'rust', 'swift']
interpreter_lang_lst = ['node', 'perl', 'php', 'python3']
def boxplot_by_lang(data: pd.DataFrame, colname: str) -> None:
fig, ax = plt.subplots()
index = 1
for lang in compile_lang_lst:
ax.boxplot(data[data['lang'] == lang][colname],
positions=[index],
labels=[lang],
boxprops=dict(color='blue'))
index += 1
for lang in interpreter_lang_lst:
ax.boxplot(data[data['lang'] == lang][colname],
positions=[index],
labels=[lang],
boxprops=dict(color='green'))
index += 1
ax.set_title(colname, fontsize=15)
ax.tick_params(axis='x', labelrotation=45)
fig.set_size_inches(10, 6)
filtered = benchmarks[(benchmarks['status'] == 0) & (benchmarks['name'] == 'binarytrees') & (benchmarks['n'] == 21)].reset_index()
boxplot_by_lang(data=filtered, colname='elapsed(s)')
boxplot_by_lang(data=filtered, colname='mem(KB)')
boxplot_by_lang(data=filtered, colname='size(B)')
```
通过以上数据可以显然看出与编译执行语言相比,解释执行的语言在 CPU 的处理性能有明显的优势,部分编译执行的语言在内存处理(申请与回收)上也有着异常优秀的表现。
像 Meta 这样的巨型公司需要 host 的服务器是也是巨型的
<img src='./images/faceboook_datacenter_oregon.png' alt='facebook_datacenter' width='1000' />
上图是 Meta 在 Oregon 的数据中心,据说这两个 building 的造价就高达了 *$750M* (约合 *¥47.78 亿*),并且在 2020 年中 Meta 又在边上造了两个
如此巨型的数据中心的一大作用就是用来做 Facebook 的服务器主机,所以为了优化服务器的物理成本,从代码上优化服务器性能是必然的。
我们都知道 Facebook 是用 PHP 实现的,据 HHVM 的项目主持者之一的 Keith Adams 所说,Facebook 有约莫 $2 \times 10^7$ 行 PHP 代码 (2012年)。Facebook 的开发者在检查之后发现自己服务器的性能问题很大一部分就是资源的消耗就在 PHP 的解释器本身上,所以需要考虑的就是如何优化 PHP 的性能
### 如何优化 PHP 的性能
1. 使用性能更好的语言重写服务端,如 C++,GO,Java
- 重构2千万行代码?算了算了
2. 使用 RPC 将部分业务功能独立,减少 PHP 的处理,比如 Twitter 就将自己的很多业务逻辑从 Ruby on Rails 转为了 Java 和 Scala (前端由 node + react 独立实现)
<img src='./images/twitter_tech_stack.webp' alt='twitter_tech_stack' width="1000" />
- RPC 框架
<img src='./images/Thrift_homepage.png' alt='Thrift' width="1000" />
但是不解决问题
3. 以动态扩展的形式优化 PHP 的性能瓶颈,用 PHP 加载 C++ 实现的方式绕开性能瓶颈
- ebt 目前的解决方案,但是对 Facebook 这样历史包袱过重的源码仓库来说,性能瓶颈并不是 1-2 处小地方,而是不断累积的后果,并且 PHP 的扩展并不像 pybind 一样有比较成熟的加载方式
4. 优化 PHP 的解释器本身
### 如何优化 PHP 的解释器
1. 改进自己的源码
- 用 PHP 写的 PHP 性能分析工具 [XHProf](https://github.com/phacility/xhprof)
- 定位性能瓶颈处,优化代码逻辑,就像 leetcode 去做到 >99%
- 优化的不够
2. 优化 PHP 的解释器实现
- [Zend Engine](https://github.com/php/php-src/tree/master/Zend)
- 将 PHP 编译为 `opcode` 然后执行 `opcode`
- 优化 Zend 的性能代价太大,并且还要做到版本的向下兼容
3. 将 PHP 转为 C/C++,然后编译生成
- Hiphop Compiler for PHP (Zhao, 2012)
<img src='./images/hhvm_autogen.png' alt='hhvm_autogen' width="1000" />
被认为是一种 Ahead of Time 的方式,能够完成非常多的代码优化(就像 LLVM 一样),但是一个问题就是无法正确支持 PHP 中的部分特性,如 `eval()`, `create_function()` 等
> Support for eval is theoretically possible in HipHop by invoking an interpreter. However, we opted not to implement that because the use of eval exposes serious security problems and is therefore discouraged. (Zhao, 2012)
就像是在 C++ 里面 `Py_Initialize()` 启动了 Python 环境,对内存消耗和代码优化都不是很友好
4. 实现一个虚拟机,将 PHP 源码转为当前平台的机器码执行 (JVM)
- 如今的 HHVM
## HipHop Compiler for PHP (HPHPc)
### C++ 与 PHP 的特性区别
<img src='./images/php_cpp_table.png' alt='PHP_cpp_table' width='600' />
### HPHPc 的编译设计
<img src='./images/hphpc_phases.png' alt='hphpc_phases' width='600' />
```php
<?php
define("confName", "OOPSLA");
define("firstYear", 1986);
function year($edition) {
return firstYear - 1 + $edition;
}
echo "Hello " . confName . "'" . year(27);
```
#### 1. 生成 AST
读取 PHP 的源码并生成对应的 AST
<img src='./images/ast.png' alt='ast' width="1000" />
#### 2. 预分析
遍历 AST 并且记录所有符号的信息(类,函数,变量,全局 const),判断哪些符号是存在同名覆盖的情况需要考虑代码上下文。并且建立符号之间的依赖图,为后续的并行优化提供前置准备
#### 3. 预优化
处理不需要类型就能完成的油画,比如将函数转为内联函数,优化不必要的逻辑判断,移除不可能到达的代码片段等
#### 4. 判断类型
核心部分,基于 Damas-Milner constraint-based 算法判断不同符号的类型
<img src='./images/hphpc_types.png' alt='hphpc_types' width="600" />
`variant` 是 `any` 类型,所有符号在类型推断出类型前都是 `variant` 类型,在推断过程中如果成功判断为任何类型则覆盖 `variant`
#### 5. 后优化
在拥有类型之后,HipHop 编译器会根据类型信息优化包括简单的数值计算和逻辑判断在内的部分代码片段,然后再重新执行一次预优化阶段的优化逻辑
#### 6. 生成
最后编译器会遍历带有类型的,被优化后的 AST,并且生成对应的 C++ 代码,并且运行 gcc 编译 C++ 源码,生成的 C++ 部分包括
1. 类的头文件:每个 PHP 的类实现会生成对应的 C++ 头文件与对应的类声明
2. PHP 的头文件:每个 PHP 的文件(函数)会生成对应的 C++ 头文件与对应的声明
3. 具体实现文件:包含了一个或者多个声明类/函数的具体实现
4. 系统文件:不包含 PHP 的源码内容,但是会记录 Symbol Table
### 如何从 AST 生成 C++ 代码
#### 鸭子类型
除了正常标注类型的符号以外,仍然存在无法判断的 `variant` 类型,需要实现一个 `variant` 类型支持运行时处理的逻辑
#### 动态符号
指符号的具体指向需要运行时判断,如
```python
if SOME_CONDITION:
x = 10
else:
x = 'a'
```
对于这种情况,HipHop 用一张 *global symbol table* 记录了:
- 全局变量
- 动态申明的 constant
- 函数体/类实现中的静态变量
- 重复定义的函数体/类
所谓重复定义是指在不同文件中定义了同样的函数名/类名(在动态语言中这样做是合法的),GST 会在符号名后添加唯一后缀,然后根据具体 `#include` 语句引用的文件名导向不同后缀的同名函数/类实现。对于静态编译过程中出现的无法处理的**动态符号**,HipHop 会通过类似的逻辑生成临时的 *local symbol table*。
在具体处理实际的 variable 时,编译器会通过当前所处的上下文获取 LST 和 GST,从 table 中获取实际的指向。
同时,HipHop 还支持了动态添加实例属性,如
```
class SomeClass:
def __init__(self):
pass
some_instance = SomeClass()
some_instance.a = 10
print(some_instance.a)
```
为了实现这样的 feature,HipHop 还实现了一个 *property symbol table*,用于记录符号中的属性,当源码尝试访问实例/类属性时,会通过 PST 找到对应的符号。
但是 HPHPc 的痛点就是
- 无法支持 PHP 的动态方法 `eval()`, `create_function()` 等
- 并且编译后部署上线的源码重量会比 PHP 要大很多,对于2千万行的源码,这种硬性的成本是毁灭性的
## HHVM (HipHop Virtual Machine)
会什么虚拟机能够解决传统解释器的问题呢?
### JIT (Just In Time)
JVM,Numba 和 HHVM 都是对解释性语言的 jit 优化,但是尝试过 Numba 的我们也知道,jit 有可能并没有想象中的那么美好,对于一些简单的方法,jit 甚至还会拖累性能
```
def make_it_double(a: int) -> int:
return a * 2
%%timeit
make_it_double(10)
import numba
make_it_double_numba = numba.jit(make_it_double)
%%timeit
make_it_double_numba(10)
```
从上面的表现可以发现 jit 并不是那么的美好,因为 jit 这个过程也是存在性能损耗的,所以这种简单的方法反而会比普通的解释器慢
<img src='./images/hphpc_vs_hhvm.png' alt='hphpc_vs_hhvm' width="600" />
安卓的目前的运行时方案 ART (Andriod Run Time)作为 Android 上的应用和部分系统服务使用的托管式运行时,采用的就是 HPHPc 的 AOT 方案,其前身 Dalvik 使用的 JIT 方案,但是 ART 虚拟机比 Dalvik 快了一倍多。
为了保证虚拟机 jit 的性能优化,Facebook 招聘了各路神仙
- Andrei Alexandrescu,『Modern C++ Design』和『C++ Coding Standards』的作者,C++ 领域无可争议的大神
- Keith Adams,负责过 VMware 核心架构,当年 VMware 就派他一人去和 Intel 进行技术合作,足以证明在 VMM 领域他有多了解了
- Drew Paroski,在微软参与过 .NET 虚拟机开发,改进了其中的 JIT
- Jason Evans,开发了 jemalloc,减少了 Firefox 一半的内存消耗
- Sara Golemon,『Extending and Embedding PHP』的作者,PHP 内核专家
### Interpreter
HHVM 在 parse 完 PHP 源码会生成一种 Bytecode (opcode),储存在 .hhvm.hhbc 文件中索引,在执行 Bytecode 时和 Zend 类似,将不同的字节码放到不同的函数中去实现 (Subroutine threading)。具体的生成逻辑在 `hphp/runtime/vm/hhbc.cpp`
因为重新实现了解释器,HHVM 比起 HPHPc 能够提供更加优异的兼容性,理论上可以做到对 PHP 所有特性的完美兼容,但是这样的性能还是走了 Zend 的老路子,并且对于动态类型,需要实现类似于如下的判断
```cpp
VMType::Variant VMExecutionContext::add(const VMType::Variant &left, const VMType::Variant &right) {
if (this->GST[left.uid] == VMType::Int64 && this->GST[right.uid] == VMType::Int64) {
return this->IntAddImpl->exec(left, right);
}
// TODO: some other impl
}
```
而我们知道这样的 if else 条件判断对 CPU 的执行优化是严重的拖累,另一个问题是需要从数据都是储存在对象中,作为 boxed structure 每次的间接获取地址也是有成本的,所以需要 jit 的实现来完成这些工作
### JIT Impl
其实本质上 jit 和 `eval()` 是类似的,只不过 jit eval 的不是源码字符串,而是不同平台下的机器码。HHVM 实现了一个 x64 的机器码生成器(HHBBC)。
常见的 jit 触发条件有两种
- trace:记录循环执行次数,如果超过一定数量就对这段代码进行 jit
- method:记录函数执行次数,如果超过一定数量就对这个函数内的代码片段进行 jit,如果过于 hot 就直接改为 inline
关于这两种方法哪种更好在 [Lambada](http://lambda-the-ultimate.org/node/3851) 上有个帖子引来了各路大神的讨论,尤其是 Mike Pall(LuaJIT 作者) 、Andreas Gal(Mozilla VP) 和 Brendan Eich(Mozilla CTO)都发表了很多自己的观点,两者的区别不仅仅是编译的范围,还有包括局部变量处理在内的很多细节方面都不太一样。
然而 HHVM 自创了一种叫做 tracelet 的方式,根据类型划分
<img src='./images/hhvm_tracelet.png' alt='hhvm_tracelet' width="1000" />
tracelet 将一个函数划分了 3 个部分,A,C用于处理入参 `$k` 是 integer 或者 string 两种类型的情况,B 用于处理返回值,因此 HHVM 的 jit 触发更像是与类型相关,具体如何分析和拆解 tracelet 的细节在 `hphp/runtime/vm/jit` 中,太深了就不需要了解了。
当然 HHVM 的 jit 优化还体现在很多地方,比如 如果 C 比 A 更加 hot(命中次数更多),HHVM 就会在 jit 的过程中倾向于把 C 放在 A 的前面,据说这样的操作提升了 14% 的性能,因为这样更容易提前命中需要相应的类型。
### Hack
hhvm/jit 的关键是类型,而猜测类型就像上面的例子一样是一个对 CPU 非常不友好的操作,所以 HHVM 的开发者就开始考虑为 PHP 添加类型支持,推出了一个新的语言 - Hack,通过静态类型指定让 HHVM 能够更好的提供性能优化。同时也能够让代码可读性大大提升,方便 IDE 提供更多帮助,在编码阶段减少很多没有必要的 bug。
## Conclusion
HPHPc / HHVM 作为两种为动态的脚本语言提供性能优化的方案都是有可取之处的,HPHPc 能够在静态编译的过程中完成各种可能的性能优化,比起 HHVM 的优化, HPHPc 的优化能够更加稳定,AOT 是一种非常成熟的优化方案
HHVM 则能够更加完美的兼容动态语言的特性,jit 是一种近年来发展迅速的技术,迭代更新快,能够一直保持对性能的优化。
|
github_jupyter
|
# Módulo 2: Scraping con Selenium
## LATAM Airlines
<a href="https://www.latam.com/es_ar/"><img src="https://i.pinimg.com/originals/dd/52/74/dd5274702d1382d696caeb6e0f6980c5.png" width="420"></img></a>
<br>
Vamos a scrapear el sitio de Latam para averiguar datos de vuelos en funcion el origen y destino, fecha y cabina. La información que esperamos obtener de cada vuelo es:
- Precio(s) disponibles
- Horas de salida y llegada (duración)
- Información de las escalas
**¡Empecemos!**
Utilicemos lo aprendido hasta ahora para lograr el objetivo propuesto
```
import requests
from bs4 import BeautifulSoup
url = 'https://www.latam.com/es_ar/apps/personas/booking?fecha1_dia=18&fecha1_anomes=2019-12&auAvailability=1&ida_vuelta=ida&vuelos_origen=Buenos%20Aires&from_city1=BUE&vuelos_destino=Madrid&to_city1=MAD&flex=1&vuelos_fecha_salida_ddmmaaaa=18/12/2019&cabina=Y&nadults=1&nchildren=0&ninfants=0&cod_promo=#/'
r = requests.get(url)
r.status_code
s = BeautifulSoup(r.text, 'lxml')
print(s.prettify())
```
Vemos que la respuesta de la página no contiene la información que buscamos, ya que la misma aparece recién después de ejecutar el código JavaSCript que está en la respuesta.
## Selenium
Selenium es una herramienta que nos permitirá controlar un navegador y podremos utilizar las funcionalidades del motor de JavaScript para cargar el contenido que no viene en el HTML de la página. Para esto necesitamos el módulo `webdriver`.
```
from selenium import webdriver
```
Paso 1: instanciar un **driver** del navegador
```
options = webdriver.ChromeOptions()
options.add_argument('--incognito')
driver = webdriver.Chrome(executable_path='../chromedriver', options=options)
```
Paso 2: hacer que el navegador cargue la página web.
```
driver.get(url)
```
Paso 3: extraer la información de la página
```
vuelos = driver.find_elements_by_xpath('//li[@class="flight"]')
vuelos
vuelo = vuelos[0]
vuelo
# Hora de salida
vuelo.find_element_by_xpath('.//div[@class="departure"]/time').get_attribute('datetime')
# Hora de llegada
vuelo.find_element_by_xpath('.//div[@class="arrival"]/time').get_attribute('datetime')
# Duración del vuelo
vuelo.find_element_by_xpath('.//span[@class="duration"]/time').get_attribute('datetime')
boton_escalas = vuelo.find_element_by_xpath('.//div[@class="flight-summary-stops-description"]/button')
boton_escalas.click()
segmentos = vuelo.find_elements_by_xpath('//div[@class="segments-graph"]/div[@class="segments-graph-segment"]')
segmentos
escalas = len(segmentos) - 1
escalas
segmento = segmentos[0]
# Origen
segmento.find_element_by_xpath('.//div[@class="departure"]/span[@class="ground-point-name"]').text
# Hora de salida
segmento.find_element_by_xpath('.//div[@class="departure"]/time').get_attribute('datetime')
# Destino
segmento.find_element_by_xpath('.//div[@class="arrival"]/span[@class="ground-point-name"]').text
# Hora de llegada
segmento.find_element_by_xpath('.//div[@class="arrival"]/time').get_attribute('datetime')
# Duración del vuelo
segmento.find_element_by_xpath('.//span[@class="duration flight-schedule-duration"]/time').get_attribute('datetime')
# Numero del vuelo
segmento.find_element_by_xpath('.//span[@class="equipment-airline-number"]').text
# Modelo de avion
segmento.find_element_by_xpath('.//span[@class="equipment-airline-material"]').text
# Duracion de la escala
segmento.find_element_by_xpath('.//div[@class="stop connection"]//p[@class="stop-wait-time"]//time').get_attribute('datetime')
driver.find_element_by_xpath('//div[@class="modal-dialog"]//button[@class="close"]').click()
vuelo.click()
tarifas = vuelo.find_elements_by_xpath('.//div[@class="fares-table-container"]//tfoot//td[contains(@class, "fare-")]')
tarifas
precios = []
for tarifa in tarifas:
nombre = tarifa.find_element_by_xpath('.//label').get_attribute('for')
moneda = tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="currency-symbol"]').text
valor = tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="value"]').text
dict_tarifa={nombre:{'moneda':moneda, 'valor':valor}}
precios.append(dict_tarifa)
print(dict_tarifa)
def obtener_precios(vuelo):
'''
Función que retorna una lista de diccionarios con las distintas tarifas
'''
tarifas = vuelo.find_elements_by_xpath('.//div[@class="fares-table-container"]//tfoot//td[contains(@class, "fare-")]')
precios = []
for tarifa in tarifas:
nombre = tarifa.find_element_by_xpath('.//label').get_attribute('for')
moneda = tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="currency-symbol"]').text
valor = tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="value"]').text
dict_tarifa={nombre:{'moneda':moneda, 'valor':valor}}
precios.append(dict_tarifa)
return precios
def obtener_datos_escalas(vuelo):
'''
Función que retorna una lista de diccionarios con la información de
las escalas de cada vuelo
'''
segmentos = vuelo.find_elements_by_xpath('//div[@class="segments-graph"]/div[@class="segments-graph-segment"]')
info_escalas = []
for segmento in segmentos:
# Origen
origen = segmento.find_element_by_xpath('.//div[@class="departure"]/span[@class="ground-point-name"]').text
# Hora de salida
dep_time = segmento.find_element_by_xpath('.//div[@class="departure"]/time').get_attribute('datetime')
# Destino
destino = segmento.find_element_by_xpath('.//div[@class="arrival"]/span[@class="ground-point-name"]').text
# Hora de llegada
arr_time = segmento.find_element_by_xpath('.//div[@class="arrival"]/time').get_attribute('datetime')
# Duración del vuelo
duracion_vuelo = segmento.find_element_by_xpath('.//span[@class="duration flight-schedule-duration"]/time').get_attribute('datetime')
# Numero del vuelo
numero_vuelo = segmento.find_element_by_xpath('.//span[@class="equipment-airline-number"]').text
# Modelo de avion
modelo_avion =segmento.find_element_by_xpath('.//span[@class="equipment-airline-material"]').text
if segmento != segmentos[-1]:
# Duracion de la escala
duracion_escala = segmento.find_element_by_xpath('.//div[@class="stop connection"]//p[@class="stop-wait-time"]//time').get_attribute('datetime')
else:
duracion_escala = ''
data_dict={
'origen':origen,
'dep_time':dep_time,
'destino':destino,
'arr_time':arr_time,
'duracion_vuelo':duracion_vuelo,
'numero_vuelo':numero_vuelo,
'modelo_avion':modelo_avion,
'duracion_escala':duracion_escala,
}
info_escalas.append(data_dict)
return info_escalas
def obtener_tiempos(vuelo):
'''
Función que retorna un diccionario con los horarios de salida y llegada de cada
vuelo, incluyendo la duración.
Nota: la duración del vuelo no es la hora de llegada - la hora de salida porque
puede haber diferencia de horarios entre el origen y el destino.
'''
# Hora de salida
salida = vuelo.find_element_by_xpath('.//div[@class="departure"]/time').get_attribute('datetime')
# Hora de llegada
llegada = vuelo.find_element_by_xpath('.//div[@class="arrival"]/time').get_attribute('datetime')
# Duracion
duracion = vuelo.find_element_by_xpath('.//span[@class="duration"]/time').get_attribute('datetime')
tiempos = {'hora_salida': salida, 'hora_llegada': llegada, 'duracion': duracion}
return tiempos
def obtener_info(driver):
vuelos = driver.find_elements_by_xpath('//li[@class="flight"]')
print(f'Se encontraron {len(vuelos)} vuelos.')
print('Iniciando scraping...')
info = []
for vuelo in vuelos:
#obtenemos los tiempos generales de cada vuelo
tiempos = obtener_tiempos(vuelo)
# Clickeamos sobre el boton de las escalas
vuelo.find_element_by_xpath('.//div[@class="flight-summary-stops-description"]/button').click()
escalas = obtener_datos_escalas(vuelo)
# Cerramos el modal
driver.find_element_by_xpath('//div[@class="modal-dialog"]//button[@class="close"]').click()
# Clickeamos el vuelo para ver los precios
vuelo.click()
precios = obtener_precios(vuelo)
vuelo.click()
info.append({'precios':precios, 'tiempos': tiempos, 'escalas':escalas})
return info
import time
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
options = webdriver.ChromeOptions()
options.add_argument('--incognito')
driver = webdriver.Chrome(executable_path='../chromedriver', options=options)
driver.get(url)
# Introducir una demora
delay = 10
try:
# introducir demora inteligente
vuelo = WebDriverWait(driver, delay).until(EC.presence_of_element_located((By.XPATH, '//li[@class="flight"]')))
print('La página terminó de cargar')
info_vuelos = obtener_info(driver)
except TimeoutException:
print('La página tardó demasiado en cargar')
info_vuelos = []
driver.close()
info_vuelos
```
Paso 4: cerrar el navegador
```
driver.close()
```
|
github_jupyter
|
# Accessing higher energy states with Qiskit Pulse
In most quantum algorithms/applications, computations are carried out over a 2-dimensional space spanned by $|0\rangle$ and $|1\rangle$. In IBM's hardware, however, there also exist higher energy states which are not typically used. The focus of this section is to explore these states using Qiskit Pulse. In particular, we demonstrate how to excite the $|2\rangle$ state and build a discriminator to classify the $|0\rangle$, $|1\rangle$ and $|2\rangle$ states.
We recommend reviewing the prior [chapter](https://learn.qiskit.org/course/quantum-hardware-pulses/calibrating-qubits-using-qiskit-pulse) before going through this notebook. We also suggest reading the Qiskit Pulse specifications (Ref [1](#refs)).
### Physics Background
We now give some additional background on the physics of transmon qubits, the basis for much of IBM's quantum hardware. These systems contain superconducting circuits composed of a Josephson junction and capacitor. For those unfamiliar with superconducting circuits, see the review [here](https://arxiv.org/pdf/1904.06560.pdf) (Ref. [2](#refs)). The Hamiltonian of this system is given by
$$
H = 4 E_C n^2 - E_J \cos(\phi),
$$
where $E_C, E_J$ denote the capacitor and Josephson energies, $n$ is the reduced charge number operator and $\phi$ is the reduced flux across the junction. We work in units with $\hbar=1$.
Transmon qubits are defined in the regime where $\phi$ is small, so we may expand $E_J \cos(\phi)$ in a Taylor series (ignoring constant terms)
$$
E_J \cos(\phi) \approx \frac{1}{2} E_J \phi^2 - \frac{1}{24} E_J \phi^4 + \mathcal{O}(\phi^6).
$$
The quadratic term $\phi^2$ defines the standard harmonic oscillator. Each additional term contributes an anharmonicity.
Using the relations $n \sim (a-a^\dagger), \phi \sim (a+a^\dagger)$ (for raising, lowering operators $a^\dagger, a$), it can be shown that the system resembles a Duffing oscillator with Hamiltonian
$$
H = \omega a^\dagger a + \frac{\alpha}{2} a^\dagger a^\dagger a a,
$$
where $\omega$ gives the $0\rightarrow1$ excitation frequency ($\omega \equiv \omega^{0\rightarrow1}$) and $\alpha$ is the anharmonicity between the $0\rightarrow1$ and $1\rightarrow2$ frequencies ($\alpha \equiv \omega^{1\rightarrow2} - \omega^{0\rightarrow1}$). Drive terms can be added as needed.
If we choose to specialize to the standard 2-dimensional subspace, we can make $|\alpha|$ sufficiently large or use special control techniques to suppress the higher energy states.
# Contents
[Getting started](#importing)
[Discriminating the 0, 1 and 2 states](#discrim012)
  [Computing the 1->2 Frequency](#freq12)
  [1->2 Rabi Experiment](#rabi12)
  [Build the 0, 1, 2 discriminator](#builddiscrim012)
[References](#refs)
## Getting Started <a id="importing"></a>
We begin by importing dependencies and defining some default variable values. We choose qubit 0 to run our experiments. We perform our experiments on the publicly available single qubit device `ibmq_armonk`.
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from qiskit import pulse # This is where we access all of our Pulse features!
from qiskit.circuit import Parameter # This is Parameter Class for variable parameters.
from qiskit.circuit import QuantumCircuit, Gate
from qiskit import schedule
from qiskit.tools.monitor import job_monitor
from qiskit.tools.jupyter import *
%matplotlib inline
from qiskit import IBMQ
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q', group='open', project='main')
backend = provider.get_backend('ibmq_manila')
backend_defaults = backend.defaults()
backend_properties = backend.properties()
# unit conversion factors -> all backend properties returned in SI (Hz, sec, etc.)
GHz = 1.0e9 # Gigahertz
MHz = 1.0e6 # Megahertz
us = 1.0e-6 # Microseconds
ns = 1.0e-9 # Nanoseconds
qubit = 0 # qubit we will analyze
default_qubit_freq = backend_defaults.qubit_freq_est[qubit] # Default qubit frequency in Hz.
print(f"Qubit {qubit} has an estimated frequency of {default_qubit_freq/ GHz} GHz.")
default_anharmonicity = backend_properties.qubits[qubit][3].value # Default anharmonicity in GHz
print(f"Default anharmonicity is {default_anharmonicity} GHz.")
# scale data (specific to each device)
scale_factor = 1e-7
# number of shots for our experiments
NUM_SHOTS = 1024
```
We define some additional helper functions.
```
def get_job_data(job, average):
"""Retrieve data from a job that has already run.
Args:
job (Job): The job whose data you want.
average (bool): If True, gets the data assuming data is an average.
If False, gets the data assuming it is for single shots.
Return:
list: List containing job result data.
"""
job_results = job.result(timeout = 120) # timeout parameter set to 120 s
result_data = []
for i in range(len(job_results.results)):
if average: # get avg data
result_data.append(np.real(job_results.get_memory(i)[qubit] * scale_factor))
else: # get single data
result_data.append(job_results.get_memory(i)[:, qubit] * scale_factor)
return result_data
def get_closest_multiple_of_16(num):
"""Compute the nearest multiple of 16. Needed because pulse enabled devices require
durations which are multiples of 16 samples.
"""
return int(num + 8 ) - (int(num + 8 ) % 16)
```
Next we include some default parameters for drive pulses.
```
# there are pulse parameters of the single qubit drive in IBM devices
x12_duration = 160
x12_sigma = 40
```
## Discriminating the $|0\rangle$, $|1\rangle$ and $|2\rangle$ states <a id="discrim012"></a>
given we have already calibrated X gate in the qubit subspace, which is available as XGate instruction in the quantum circuit. Here we calibrate transition in the higher energy subspace with pulse gate.
We focus on exciting the $|2\rangle$ state and building a discriminator to classify $|0\rangle$, $|1\rangle$ and $2\rangle$ states from their respective IQ data points. The procedure for even higher states ($|3\rangle$, $|4\rangle$, etc.) should be similar, but we have not tested them explicitly.
The process for building the higher state discriminator is as follows:
1. Compute the $1\rightarrow2$ frequency.
2. Conduct a Rabi experiment to obtain the $\pi$ pulse amplitude for $1\rightarrow2$. To do this, we first apply a $0\rightarrow1$ $\pi$ pulse to get from the $|0\rangle$ to the $|1\rangle$ state. Then, we do a sweep of drive amplitudes at the $1\rightarrow2$ frequency obtained above.
3. Construct 3 schedules:\
a. Zero schedule: just measure the ground state.\
b. One schedule: apply a $0\rightarrow1$ $\pi$ pulse and measure.\
c. Two schedule: apply a $0\rightarrow1$ $\pi$ pulse, then a $1\rightarrow2$ $\pi$ pulse and measure.
4. Separate the data from each schedule into training and testing sets and construct an LDA model for discrimination.
### Computing the 1->2 frequency <a id="freq12"></a>
The first step in our calibration is to compute the frequency needed to go from the $1\rightarrow2$ state. There are two methods to do this:
1. Do a frequency sweep from the ground state and apply very high power. If the applied power is large enough, two peaks should be observed. One at the $0\rightarrow1$ frequency found in section [1](#discrim01) and one at the $0\rightarrow2$ frequency. The $1\rightarrow2$ frequency can be obtained by taking the difference of the two. Unfortunately, for `ibmq_armonk`, the maximum drive power of $1.0$ is not sufficient to see this transition. Instead, we turn to the second method.
2. Excite the $|1\rangle$ state by applying a $0\rightarrow1$ $\pi$ pulse. Then perform the frequency sweep over excitations of the $|1\rangle$ state. A single peak should be observed at a frequency lower than the $0\rightarrow1$ frequency which corresponds to the $1\rightarrow2$ frequency.
We follow the second method described above.
```
# smaller range sweep
num_freqs = 75
drive_power = 0.15
sweep_freqs = default_anharmonicity*GHz + np.linspace(-30*MHz, 30*MHz, num_freqs)
freq = Parameter('freq')
with pulse.build(backend=backend, default_alignment='sequential', name='Frequency sweep') as freq12_sweep_sched:
drive_chan = pulse.drive_channel(qubit)
with pulse.frequency_offset(freq, drive_chan):
pulse.play(pulse.Gaussian(duration=x12_duration,
amp=drive_power,
sigma=x12_sigma,
name='x12_pulse'), drive_chan)
spect_gate = Gate("spect", 1, [freq])
qc_spect = QuantumCircuit(1, 1)
qc_spect.x(0)
qc_spect.append(spect_gate, [0])
qc_spect.measure(0, 0)
qc_spect.add_calibration(spect_gate, (0,), freq12_sweep_sched, [freq])
exp_spect_circs = [qc_spect.assign_parameters({freq: f}) for f in sweep_freqs]
excited_freq_sweep_job = backend.run(exp_spect_circs,
meas_level=1,
meas_return='avg',
shots=NUM_SHOTS)
job_monitor(excited_freq_sweep_job)
# Get the refined data (average)
excited_freq_sweep_data = get_job_data(excited_freq_sweep_job, average=True)
excited_sweep_freqs = default_qubit_freq + default_anharmonicity*GHz + np.linspace(-30*MHz, 30*MHz, num_freqs)
```
Let's plot and fit the refined signal, using the standard Lorentzian curve.
```
def fit_function(x_values, y_values, function, init_params):
"""Fit a function using scipy curve_fit."""
fitparams, conv = curve_fit(function, x_values, y_values, init_params, maxfev = 50000)
y_fit = function(x_values, *fitparams)
return fitparams, y_fit
# do fit in Hz
(excited_sweep_fit_params,
excited_sweep_y_fit) = fit_function(excited_sweep_freqs,
excited_freq_sweep_data,
lambda x, A, q_freq, B, C: (A / np.pi) * (B / ((x - q_freq)**2 + B**2)) + C,
[-20, 4.625*GHz, 0.06*GHz, 3*GHz] # initial parameters for curve_fit
)
# Note: we are only plotting the real part of the signal
plt.scatter(excited_sweep_freqs/GHz, excited_freq_sweep_data, color='black')
plt.plot(excited_sweep_freqs/GHz, excited_sweep_y_fit, color='red')
plt.xlim([min(excited_sweep_freqs/GHz), max(excited_sweep_freqs/GHz)])
plt.xlabel("Frequency [GHz]", fontsize=15)
plt.ylabel("Measured Signal [a.u.]", fontsize=15)
plt.title("1->2 Frequency Sweep (refined pass)", fontsize=15)
plt.show()
_, qubit_12_freq, _, _ = excited_sweep_fit_params
print(f"Our updated estimate for the 1->2 transition frequency is "
f"{round(qubit_12_freq/GHz, 7)} GHz.")
```
### 1->2 Rabi Experiment <a id="rabi12"></a>
Now that we have a good estimate for the $1\rightarrow2$ frequency, we perform a Rabi experiment to obtain the $\pi$ pulse amplitude for the $1\rightarrow2$ transition. To do so, we apply a $0\rightarrow1$ $\pi$ pulse and then sweep over drive amplitudes at the $1\rightarrow2$ frequency.
```
# experimental configuration
num_rabi_points = 75 # number of experiments (ie amplitudes to sweep out)
# Drive amplitude values to iterate over: 75 amplitudes evenly spaced from 0 to 1.0
drive_amp_min = 0
drive_amp_max = 1.0
drive_amps = np.linspace(drive_amp_min, drive_amp_max, num_rabi_points)
amp = Parameter('amp')
with pulse.build(backend=backend, default_alignment='sequential', name='Amp sweep') as rabi_sched:
drive_chan = pulse.drive_channel(qubit)
pulse.set_frequency(qubit_12_freq, drive_chan)
pulse.play(pulse.Gaussian(duration=x12_duration,
amp=amp,
sigma=x12_sigma,
name='x12_pulse'), drive_chan)
rabi_gate = Gate("rabi", 1, [amp])
qc_rabi = QuantumCircuit(1, 1)
qc_rabi.x(0)
qc_rabi.append(rabi_gate, [0])
qc_rabi.measure(0, 0)
qc_rabi.add_calibration(rabi_gate, (0,), rabi_sched, [amp])
exp_rabi_circs = [qc_rabi.assign_parameters({amp: a}) for a in drive_amps]
rabi_12_job = backend.run(exp_rabi_circs,
meas_level=1,
meas_return='avg',
shots=NUM_SHOTS)
job_monitor(rabi_12_job)
# Get the job data (average)
rabi_12_data = get_job_data(rabi_12_job, average=True)
def baseline_remove(values):
"""Center data around 0."""
return np.array(values) - np.mean(values)
# Note: Only real part of data is plotted
rabi_12_data = np.real(baseline_remove(rabi_12_data))
(rabi_12_fit_params,
rabi_12_y_fit) = fit_function(drive_amps,
rabi_12_data,
lambda x, A, B, drive_12_period, phi: (A*np.cos(2*np.pi*x/drive_12_period - phi) + B),
[0.2, 0, 0.3, 0])
plt.scatter(drive_amps, rabi_12_data, color='black')
plt.plot(drive_amps, rabi_12_y_fit, color='red')
drive_12_period = rabi_12_fit_params[2]
pi_amp_12 = drive_12_period/2
plt.axvline(pi_amp_12, color='red', linestyle='--')
plt.axvline(pi_amp_12+drive_12_period/2, color='red', linestyle='--')
plt.annotate("", xy=(pi_amp_12+drive_12_period/2, 0), xytext=(pi_amp_12,0), arrowprops=dict(arrowstyle="<->", color='red'))
plt.annotate("$\pi$", xy=(pi_amp_12-0.03, 0.1), color='red')
plt.xlabel("Drive amp [a.u.]", fontsize=15)
plt.ylabel("Measured signal [a.u.]", fontsize=15)
plt.title('Rabi Experiment (1->2)', fontsize=20)
plt.show()
```
We plot and fit our data as before.
```
print(f"Our updated estimate for the 1->2 transition frequency is "
f"{round(qubit_12_freq/GHz, 7)} GHz.")
print(f"Pi Amplitude (1->2) = {pi_amp_12}")
```
### Build the 0, 1, 2 discriminator <a id="builddiscrim012"></a>
Finally, we build our discriminator for the $|0\rangle$, $|1\rangle$ and $|2\rangle$ states.
As a review, our three circuits are (again, recalling that our system starts in the $|0\rangle$ state):
1. Measure the $|0\rangle$ state directly (obtain $|0\rangle$ centroid).
2. Apply $0\rightarrow1$ $\pi$ pulse and then measure (obtain $|1\rangle$ centroid).
3. Apply $0\rightarrow1$ $\pi$ pulse, then $1\rightarrow2$ $\pi$ pulse, then measure (obtain $|2\rangle$ centroid).
```
with pulse.build(backend=backend, default_alignment='sequential', name='x12 schedule') as x12_sched:
drive_chan = pulse.drive_channel(qubit)
pulse.set_frequency(qubit_12_freq, drive_chan)
pulse.play(pulse.Gaussian(duration=x12_duration,
amp=pi_amp_12,
sigma=x12_sigma,
name='x12_pulse'), drive_chan)
# Create the three circuits
# 0 state
qc_ground = QuantumCircuit(1, 1)
qc_ground.measure(0, 0)
# 1 state
qc_one = QuantumCircuit(1, 1)
qc_one.x(0)
qc_one.measure(0, 0)
# 2 state
x12_gate = Gate("one_two_pulse", 1, [])
qc_x12 = QuantumCircuit(1, 1)
qc_x12.x(0)
qc_x12.append(x12_gate, [0])
qc_x12.measure(0, 0)
qc_x12.add_calibration(x12_gate, (0,), x12_sched, [])
```
We construct the program and plot the centroids in the IQ plane.
```
# Assemble the schedules into a program
IQ_012_job = backend.run([qc_ground, qc_one, qc_x12],
meas_level=1,
meas_return='single',
shots=NUM_SHOTS)
job_monitor(IQ_012_job)
# Get job data (single); split for zero, one and two
IQ_012_data = get_job_data(IQ_012_job, average=False)
zero_data = IQ_012_data[0]
one_data = IQ_012_data[1]
two_data = IQ_012_data[2]
def IQ_012_plot(x_min, x_max, y_min, y_max):
"""Helper function for plotting IQ plane for 0, 1, 2. Limits of plot given
as arguments."""
# zero data plotted in blue
plt.scatter(np.real(zero_data), np.imag(zero_data),
s=5, cmap='viridis', c='blue', alpha=0.5, label=r'$|0\rangle$')
# one data plotted in red
plt.scatter(np.real(one_data), np.imag(one_data),
s=5, cmap='viridis', c='red', alpha=0.5, label=r'$|1\rangle$')
# two data plotted in green
plt.scatter(np.real(two_data), np.imag(two_data),
s=5, cmap='viridis', c='green', alpha=0.5, label=r'$|2\rangle$')
# Plot a large dot for the average result of the 0, 1 and 2 states.
mean_zero = np.mean(zero_data) # takes mean of both real and imaginary parts
mean_one = np.mean(one_data)
mean_two = np.mean(two_data)
plt.scatter(np.real(mean_zero), np.imag(mean_zero),
s=200, cmap='viridis', c='black',alpha=1.0)
plt.scatter(np.real(mean_one), np.imag(mean_one),
s=200, cmap='viridis', c='black',alpha=1.0)
plt.scatter(np.real(mean_two), np.imag(mean_two),
s=200, cmap='viridis', c='black',alpha=1.0)
plt.xlim(x_min, x_max)
plt.ylim(y_min,y_max)
plt.legend()
plt.ylabel('I [a.u.]', fontsize=15)
plt.xlabel('Q [a.u.]', fontsize=15)
plt.title("0-1-2 discrimination", fontsize=15)
x_min = -5
x_max = 5
y_min = -10
y_max = 10
IQ_012_plot(x_min, x_max, y_min, y_max)
```
Now it is time to actually build the discriminator. We will use a machine learning technique called Linear Discriminant Analysis (LDA). LDA classifies an arbitrary data set into a set of categories (here $|0\rangle$, $|1\rangle$ and $|2\rangle$) by maximizing the distance between the means of each category and minimizing the variance within each category. For further detail, see [here](https://scikit-learn.org/stable/modules/lda_qda.html#id4) (Ref. [3](#refs)).
LDA generates a line called a separatrix. Depending on which side of the separatrix a given data point is on, we can determine which category it belongs to.
We use `scikit.learn` for an implementation of LDA; in a future release, this functionality will be added released directly into Qiskit-Ignis (see [here](https://github.com/Qiskit/qiskit-ignis/tree/master/qiskit/ignis/measurement/discriminator)).
We observe a third centroid corresponding to the $|2\rangle$ state. (Note: If the plot looks off, rerun the notebook.)
We begin by reshaping our result data into a format suitable for discrimination.
```
def reshape_complex_vec(vec):
"""Take in complex vector vec and return 2d array w/ real, imag entries. This is needed for the learning.
Args:
vec (list): complex vector of data
Returns:
list: vector w/ entries given by (real(vec], imag(vec))
"""
length = len(vec)
vec_reshaped = np.zeros((length, 2))
for i in range(len(vec)):
vec_reshaped[i]=[np.real(vec[i]), np.imag(vec[i])]
return vec_reshaped
```
We begin by shaping the data for LDA.
```
# Create IQ vector (split real, imag parts)
zero_data_reshaped = reshape_complex_vec(zero_data)
one_data_reshaped = reshape_complex_vec(one_data)
two_data_reshaped = reshape_complex_vec(two_data)
IQ_012_data = np.concatenate((zero_data_reshaped, one_data_reshaped, two_data_reshaped))
print(IQ_012_data.shape) # verify IQ data shape
```
Next, we split our training and testing data. The testing data is a vector containing an array of `0`'s (for the zero schedule, `1`'s (for the one schedule) and `2`'s (for the two schedule).
```
# construct vector w/ 0's, 1's and 2's (for testing)
state_012 = np.zeros(NUM_SHOTS) # shots gives number of experiments
state_012 = np.concatenate((state_012, np.ones(NUM_SHOTS)))
state_012 = np.concatenate((state_012, 2*np.ones(NUM_SHOTS)))
print(len(state_012))
# Shuffle and split data into training and test sets
IQ_012_train, IQ_012_test, state_012_train, state_012_test = train_test_split(IQ_012_data, state_012, test_size=0.5)
```
Finally, we set up our model and train it. The accuracy of our fit is printed.
```
# Set up the LDA
LDA_012 = LinearDiscriminantAnalysis()
LDA_012.fit(IQ_012_train, state_012_train)
# test on some simple data
print(LDA_012.predict([[0, 0], [-10, 0], [-15, -5]]))
# Compute accuracy
score_012 = LDA_012.score(IQ_012_test, state_012_test)
print(score_012)
```
The last step is to plot the separatrix.
```
# Plot separatrix on top of scatter
def separatrixPlot(lda, x_min, x_max, y_min, y_max, shots):
nx, ny = shots, shots
xx, yy = np.meshgrid(np.linspace(x_min, x_max, nx),
np.linspace(y_min, y_max, ny))
Z = lda.predict_proba(np.c_[xx.ravel(), yy.ravel()])
Z = Z[:, 1].reshape(xx.shape)
plt.contour(xx, yy, Z, [0.5], linewidths=2., colors='black')
IQ_012_plot(x_min, x_max, y_min, y_max)
separatrixPlot(LDA_012, x_min, x_max, y_min, y_max, NUM_SHOTS)
```
Now that we have 3 centroids, the separatrix is no longer a line, but rather a curve containing a combination of two lines. In order to discriminate between $|0\rangle$, $|1\rangle$ and $|2\rangle$ states, our model checks where the IQ point lies relative to the separatrix and classifies the point accordingly.
## References <a id="refs"></a>
1. D. C. McKay, T. Alexander, L. Bello, M. J. Biercuk, L. Bishop, J. Chen, J. M. Chow, A. D. C ́orcoles, D. Egger, S. Filipp, J. Gomez, M. Hush, A. Javadi-Abhari, D. Moreda, P. Nation, B. Paulovicks, E. Winston, C. J. Wood, J. Wootton, and J. M. Gambetta, “Qiskit backend specifications for OpenQASM and OpenPulse experiments,” 2018, https://arxiv.org/abs/1809.03452.
2. Krantz, P. et al. “A Quantum Engineer’s Guide to Superconducting Qubits.” Applied Physics Reviews 6.2 (2019): 021318, https://arxiv.org/abs/1904.06560.
3. Scikit-learn: Machine Learning in Python, Pedregosa et al., JMLR 12, pp. 2825-2830, 2011, https://scikit-learn.org/stable/modules/lda_qda.html#id4.
```
import qiskit.tools.jupyter
%qiskit_version_table
```
|
github_jupyter
|
```
import collections
import numpy as np
import seaborn as sns
import os
import matplotlib.gridspec as gridspec
import pickle
from matplotlib import pyplot as plt
import matplotlib as mpl
pgf_with_custom_preamble = {
"text.usetex": False, # use inline math for ticks
"pgf.rcfonts": False, # don't setup fonts from rc parameters
}
def figsize(scale, height_ratio=1.0):
fig_width_pt = 344.43306 # Get this from LaTeX using \the\textwidth
inches_per_pt = 1.0/72.27 # Convert pt to inch
golden_mean = (np.sqrt(5.0)-1.0)/2.0 # Aesthetic ratio (you could change this)
fig_width = fig_width_pt*inches_per_pt*scale # width in inches
fig_height = height_ratio*fig_width*golden_mean # height in inches
fig_size = [fig_width,fig_height]
return fig_size
pgf_with_latex = { # setup matplotlib to use latex for output
"pgf.texsystem": "pdflatex", # change this if using xetex or lautex
"text.usetex": True, # use LaTeX to write all text
"font.family": "sans-serif",
"font.serif": [], # blank entries should cause plots to inherit fonts from the document
"font.sans-serif": [],
"font.monospace": [],
"axes.labelsize": 10, # LaTeX default is 10pt font.
"font.size": 10,
"legend.fontsize": 8, # Make the legend/label fonts a little smaller
"xtick.labelsize": 8,
"ytick.labelsize": 8,
"figure.figsize": figsize(0.9), # default fig size of 0.9 textwidth
"pgf.preamble": [
r"\usepackage[utf8x]{inputenc}", # use utf8 fonts becasue your computer can handle it :)
r"\usepackage[T1]{fontenc}", # plots will be generated using this preamble
]
}
sns.set_style('ticks')
sns.set_context('poster')
sns.set_palette('dark', 40)
colors = sns.color_palette('dark', 40)
mpl.rcParams.update(pgf_with_latex)
# I make my own newfig and savefig functions
def newfig(width):
plt.clf()
fig = plt.figure(figsize=figsize(width))
ax = fig.add_subplot(111)
return fig, ax
def savefig(filename):
plt.savefig('{}.pgf'.format(filename))
plt.savefig('{}.pdf'.format(filename))
%matplotlib inline
from scipy import interpolate
#plt.subplots_adjust(left=.15, bottom=.16, right=.99, top=.97)
plt.rcParams["axes.labelsize"]
%matplotlib inline
```
# PDI, Pn
## No water
```
cr = 0.001
scan_p0_1000_nowater = collections.defaultdict(list)
for f in sorted(os.listdir('scan_p_1000/no_water/')):
if f.startswith('polstat'):
k = float(f.split('_')[2])
if k != cr:
continue
d = np.loadtxt(os.path.join('scan_p_1000/no_water/', f))
header = open(os.path.join('scan_p_1000/no_water/', f)).readline().replace('#', '').split()
d.dtype = [(x, 'float') for x in header]
scan_p0_1000_nowater[k].append(d)
avg_no_water_pdi = []
p_vals = np.arange(0.0, 0.925, 0.01)
for l in scan_p0_1000_nowater[cr]:
x = (l['cr']/2000)[:, 0]
y = l['pdi'][:, 0]
print(x.shape, y.shape, max(x))
f = interpolate.interp1d(x, y)
ynew = f(p_vals)
avg_no_water_pdi.append(ynew)
#plt.plot(l['cr']/2000, l['pdi'])
p_vals = np.array(p_vals)
std_no_water_pdi = np.std(np.array(avg_no_water_pdi), axis=0)
avg_no_water_pdi = np.average(avg_no_water_pdi, axis=0)
plt.rcParams['figure.figsize'] = figsize(0.9)
for l in scan_p0_1000_nowater[cr]:
x = (l['cr']/2000.0)
y = l['pdi']
plt.plot(x, y, '.', markevery=100, alpha=0.8)
#plt.errorbar(p_vals, avg_no_water_pdi, std_no_water_pdi)
plt.plot(np.arange(0.0, 1.0, 0.01), 1+np.arange(0.0, 1.0, 0.01), linestyle='--', color='k', linewidth=1.8, label='1+p')
plt.annotate(r'$k_f={}$'.format(cr), xy=(0.05, 0.75), xycoords='axes fraction', fontsize=12)
plt.legend(loc=0)
plt.ylabel('PDI')
plt.xlabel('p')
plt.tight_layout()
plt.xticks([0.0, 0.2, 0.5, 0.8, 0.9, 1.0])
plt.yticks([1.0, 1.5, 2.0, 2.5])
plt.tick_params(size=5, direction='inout', right=True)
plt.savefig('result_graphics/pdi_no_water.pdf', dpi=200)
plt.rcParams['figure.figsize'] = figsize(0.9, height_ratio=1.5)
f, (a0, a1) = plt.subplots(2,1, gridspec_kw = {'height_ratios':[2, 1]})
l = scan_p0_1000_nowater[0.001][0]
a0.plot(l['cr']/2000, l['pn'], '*', markevery=200, color='k')
a0.plot(l['cr']/2000, 1/(1-l['cr']/2000), label=r'1/(1-p)', linestyle='--', color='k', linewidth=1.8)
#a0.set_xlabel('p')
a0.set_ylabel(r'$\langle n \rangle$')
a0.legend()
a0.set_xticks([0.0, 0.2, 0.4, 0.6, 0.8, 0.9])
l = scan_p0_1000_nowater[0.001][0]
theory_val = 1/(1-l['cr']/2000)
a1.plot(l['cr']/2000, np.abs(l['pn']-theory_val), linestyle='None', marker='.', color='k')
a1.set_xlabel('p')
a1.set_ylabel(r'$|\Delta|$')
a1.set_xticks([0.0, 0.2, 0.4, 0.6, 0.8, 0.9])
f.tight_layout()
plt.savefig('result_graphics/average_n_no_water.pdf', dpi=200)
```
## Water
```
plt.rcParams['figure.figsize'] = figsize(0.9)
cr = 0.001
scan_p0_1000_water = collections.defaultdict(list)
for f in sorted(os.listdir('scan_p_1000/with_water/')):
if f.startswith('polstat'):
k = float(f.split('_')[2])
if k != cr:
continue
d = np.loadtxt(os.path.join('scan_p_1000/with_water/', f))
header = open(os.path.join('scan_p_1000/with_water/', f)).readline().replace('#', '').split()
if len(header) < 8:
continue
d.dtype = [(x, 'float') for x in header]
scan_p0_1000_water[k].append(d)
# avg_water_pdi = []
# p_vals = np.arange(0.0, 0.925, 0.01)
# for l in scan_p0_1000_water[0.001]:
# x = (l['cr']/2000)[:, 0]
# y = l['pdi'][:, 0]
# print x.shape, y.shape, max(x)
# f = interpolate.interp1d(x, y)
# ynew = f(p_vals)
# avg_water_pdi.append(ynew)
# #plt.plot(l['cr']/2000, l['pdi'])
# p_vals = np.array(p_vals)
# std_water_pdi = np.std(np.array(avg_water_pdi), axis=0)
# avg_water_pdi = np.average(avg_water_pdi, axis=0)
for l in scan_p0_1000_water[cr]:
x = (l['cr']/2000.0)
y = l['pdi']
plt.plot(x, y, '.', markevery=100)
plt.annotate(r'$k_f={}$'.format(cr), xy=(0.05, 0.72), xycoords='axes fraction', fontsize=12)
plt.plot(np.arange(0.0, 1.0, 0.01), 1+np.arange(0.0, 1.0, 0.01), linestyle='--', color='k', linewidth=1.8, label='1+p')
plt.ylabel('PDI')
plt.xlabel('p')
plt.legend(loc=0)
plt.tight_layout()
plt.xticks([0.0, 0.2, 0.5, 0.8, 0.9, 1.0])
plt.yticks([1.0, 1.5, 2.0, 2.25])
plt.tick_params(size=5, direction='inout', right=True)
plt.savefig('result_graphics/pdi_water.pdf', dpi=200)
#plt.savefig('pdi_water.png', dpi=200)
plt.rcParams['figure.figsize'] = figsize(0.9, height_ratio=1.5)
f, (a0, a1) = plt.subplots(2,1, gridspec_kw = {'height_ratios':[2, 1]})
a0.plot(scan_p0_1000_water[0.001][1]['cr']/2000, scan_p0_1000_water[0.001][1]['pn'], '*', markevery=200, color='k')
a0.plot(np.arange(0, 0.925, 0.01), 1/(1-np.arange(0, 0.925, 0.01)), label=r'1/(1-p)', linestyle='--', color='k', linewidth=1.8)
a0.set_ylabel(r'$\langle n \rangle$')
a0.legend()
a0.set_xticks([0.0, 0.2, 0.4, 0.6, 0.8, 0.9])
l = scan_p0_1000_water[0.001][1]
theory_val = 1/(1-l['cr']/2000)
a1.plot(l['cr']/2000, np.abs(l['pn']-theory_val), linestyle='None', marker='.', color='k')
a1.set_xlabel('p')
a1.set_ylabel(r'$|\Delta|$')
a1.set_xticks([0.0, 0.2, 0.4, 0.6, 0.8, 0.9])
f.tight_layout()
plt.savefig('result_graphics/average_n_with_water.pdf', dpi=200)
```
## with water
```
plt.rcParams['figure.figsize'] = figsize(0.9)
cr = [0.1, 0.01, 0.001]
scan_p0_1000_water_rev = collections.defaultdict(list)
for f in sorted(os.listdir('scan_p_1000/with_water_rev/')):
if f.startswith('polstat'):
k = float(f.split('_')[2])
if k not in cr:
continue
k2 = float(f.split('_')[3])
if k2 != 0.01:
continue
d = np.loadtxt(os.path.join('scan_p_1000/with_water_rev/', f))
header = open(os.path.join('scan_p_1000/with_water_rev/', f)).readline().replace('#', '').split()
d.dtype = [(x, 'float') for x in header]
scan_p0_1000_water_rev[k].append(d)
inter_water_rev_pdi = collections.defaultdict(list)
p_vals = {}
# for ss, max_cr in [(0.001, 0.32), (0.01, 0.61), (0.1, 0.86)]:
# p_vals[ss] = np.arange(0.0, max_cr, 0.01)
# for cr in scan_p0_1000_water_rev:
# for l in scan_p0_1000_water_rev[cr]:
# x = (l['cr']/2000)[:, 0]
# y = l['pdi'][:, 0]
# f = interpolate.interp1d(x, y)
# ynew = f(p_vals[cr])
# inter_water_rev_pdi[cr].append(ynew)
# std_water_rev_pdi = {}
# avg_water_rev_pdi = {}
# for cr in inter_water_rev_pdi:
# std_water_rev_pdi[cr] = np.std(np.array(inter_water_rev_pdi[cr]), axis=0)
# avg_water_rev_pdi[cr] = np.average(inter_water_rev_pdi[cr], axis=0)
# plt.subplot(121)
markers = ['h', '.', 'd']
legend_lines = {}
for i, cr in enumerate(sorted(scan_p0_1000_water_rev)):
for l in scan_p0_1000_water_rev[cr]:
line, = plt.plot(
l['cr']/2000.0,
l['pdi'],
'.',
markevery=100,
marker=markers[i])
legend_lines[cr] = line
l, = plt.plot(np.arange(0.0, 1.0, 0.01), 1+np.arange(0.0, 1.0, 0.01), linestyle='--', color='k', linewidth=1.8, label='1+p')
plt.legend()
# [legend_lines[cr] for cr in sorted(scan_p0_1000_water_rev)] + [l],
# list(map(r'$k_f={}$'.format, sorted(scan_p0_1000_water_rev.keys()))) + ['1+p'], loc=0)
plt.annotate(r'$k_f=0.001$', xy=(0.2, 0.05), xycoords='axes fraction')
plt.annotate(r'$k_f=0.01$', xy=(0.5, 0.3), xycoords='axes fraction')
plt.annotate(r'$k_f=0.01$', xy=(0.5, 0.3), xycoords='axes fraction')
plt.annotate(r'$k_f=0.1$', xy=(0.8, 0.35), xycoords='axes fraction')
plt.annotate(r'$k_r = 0.01$', xy=(0.02, 0.72), xycoords='axes fraction', fontsize=12)
plt.xlabel('p')
plt.ylabel('PDI')
plt.yticks([1.0, 1.5, 2.0])
plt.xticks([0.0, 0.5, 1.0])
plt.tight_layout()
plt.savefig('result_graphics/average_pdi_rev_water.pdf', dpi=200)
plt.rcParams['figure.figsize'] = figsize(0.9)
# plt.subplots_adjust(left=5, bottom=0, right=6, top=1, wspace=0, hspace=0)
plot_margin = 0.25
x0, x1, y0, y1 = plt.axis()
legend_lines = {}
for i, cr in enumerate(sorted(scan_p0_1000_water_rev)):
for l in scan_p0_1000_water_rev[cr]:
line, = plt.plot(
l['pdi'],
'.',
markevery=100,
marker=markers[i],
markersize=8.0,
color=colors[i])
legend_lines[cr] = line
plt.annotate(r'$k_r = 0.01$', xy=(0.01, 0.85), xycoords='axes fraction', fontsize=12)
# plt.legend(
# [legend_lines[cr] for cr in sorted(scan_p0_1000_water_rev)],
# map(r'$k_f={}$'.format, sorted(scan_p0_1000_water_rev.keys())), loc=0)
plt.annotate(r'$k_f=0.001$', xy=(0.75, 0.1), xycoords='axes fraction', color=colors[0], fontsize=12)
plt.annotate(r'$k_f=0.01$', xy=(0.8, 0.32), xycoords='axes fraction', color=colors[1], fontsize=12)
plt.annotate(r'$k_f=0.1$', xy=(0.8, 0.8), xycoords='axes fraction', color=colors[2], fontsize=12)
plt.ylabel('PDI')
plt.xlabel('simulation time (ps)')
plt.tight_layout()
plt.savefig('result_graphics/average_pdi_t_rev_water.pdf', dpi=200, tight_layout=True)
plt.rcParams['figure.figsize'] = figsize(0.9, height_ratio=1.5)
f, (a0, a1) = plt.subplots(2,1, gridspec_kw = {'height_ratios':[2, 1]})
markers = ['h', '.', 'd']
legend_lines = {}
for i, cr in enumerate(sorted(scan_p0_1000_water_rev)):
for l in scan_p0_1000_water_rev[cr]:
line, = a0.plot(
l['cr']/2000.0,
l['pn'],
'.',
markevery=100,
marker=markers[i],
color=colors[i])
legend_lines[cr] = line
l, = a0.plot(np.arange(0.0, 0.9, 0.01), 1.0/(1.0-np.arange(0.0, 0.9, 0.01)), 'k--', label='1/(1-p)', linewidth=1.8)
# a0.legend(
# [legend_lines[cr] for cr in sorted(scan_p0_1000_water_rev)] + [l],
# list(map(r'$k_f={}$'.format, sorted(scan_p0_1000_water_rev.keys()))) + ['1/(1-p)'])
#a0.set_xlabel('p')
a0.set_ylabel(r'$\langle n \rangle$')
a0.set_xticks([0.0, 0.5, 0.9])
a0.annotate(r'$k_f=0.001$', xy=(0.1, 0.15), xycoords='axes fraction', color=colors[0], fontsize=12)
a0.annotate(r'$k_f=0.01$', xy=(0.5, 0.25), xycoords='axes fraction', color=colors[1], fontsize=12)
a0.annotate(r'$k_f=0.1$', xy=(0.65, 0.55), xycoords='axes fraction', color=colors[2], fontsize=12)
a0.annotate(r'$k_r = 0.01$', xy=(0.02, 0.75), xycoords='axes fraction', fontsize=12)
a0.legend(loc=0)
for i, cr in enumerate(sorted(scan_p0_1000_water_rev)):
for l in scan_p0_1000_water_rev[cr]:
theory_val = 1/(1-l['cr']/2000)
a1.plot(l['cr']/2000, np.abs(l['pn']-theory_val), color=colors[i], linestyle='None', marker=markers[i], markevery=100)
a1.set_xlabel('p')
a1.set_ylabel(r'$|\Delta|$')
a1.set_xticks([0.0, 0.5, 0.9])
f.tight_layout()
f.savefig('result_graphics/average_pn_rev_water.pdf', dpi=200, tight_layout=True)
```
# Loops
```
import pickle
with open('scan_p_1000/no_water/loops.pck', 'rb') as ib:
loops_no_water = pickle.load(ib)
loops_no_water = [l for x in loops_no_water for l in x]
# print np.average(loops_no_water), np.std(loops_no_water), np.min(loops_no_water), np.max(loops_no_water), np.sum(loops_no_water)
with open('scan_p_1000/with_water/loops.pck', 'rb') as ib:
loops_water = pickle.load(ib)
loops_water = [l for x in loops_water for l in x]
# print np.average(loops_water), np.std(loops_water), np.min(loops_water), np.max(loops_water)
with open('scan_p_1000/with_water_rev/old/loops.pck', 'rb') as ib:
loops_rev_water = pickle.load(ib)
loops_rev_water = [l for x in loops_rev_water for l in x]
# print np.average(loops_rev_water), np.std(loops_rev_water), np.min(loops_rev_water), np.max(loops_rev_water)
with open('scan_p_1000/with_water_rev/old/loops_0.1_0.01.pck', 'rb') as ib:
loops_rev_water_01 = pickle.load(ib)
loops_rev_water_01 = [l for x in loops_rev_water_01 for l in x]
# print np.average(loops_rev_water_01), np.std(loops_rev_water_01), np.min(loops_rev_water_01), np.max(loops_rev_water_01)
with open('scan_p_1000/with_water_rev/old/loops_0.01_0.01.pck', 'rb') as ib:
loops_rev_water_001 = pickle.load(ib)
loops_rev_water_001 = [l for x in loops_rev_water_001 for l in x]
# print np.average(loops_rev_water_001), np.std(loops_rev_water_001), np.min(loops_rev_water_001), np.max(loops_rev_water_001)
with open('scan_p_1000/with_water_rev/old/loops_0.001_0.01.pck', 'rb') as ib:
loops_rev_water_0001 = pickle.load(ib)
# print loops_rev_water_0001
loops_rev_water_0001 = [l for x in loops_rev_water_0001 for l in x]
# print np.average(loops_rev_water_0001), np.std(loops_rev_water_0001), np.min(loops_rev_water_0001), np.max(loops_rev_water_0001)
plt.rcParams['figure.figsize'] = figsize(0.9)
plt.rc('text', usetex=True)
n, x = np.histogram(loops_no_water, bins=range(40))
n = np.asarray(n, dtype=float)
n[n==0.0] = np.nan
plt.plot(x[1:]-0.5, n, '^', label=r'no $H_2O$, $k_f=0.001$', markersize=8)
print(np.nansum(n))
n, x = np.histogram(loops_water, bins=range(40))
n = np.asarray(n, dtype=float)
n[n==0.0] = np.nan
plt.plot(x[1:]-0.5, n, 'd', label=r'$H_2O$ (no hydrolysis), $k_f=0.001$', markersize=8)
print(np.nansum(n))
n, x = np.histogram(loops_rev_water_001, bins=range(40))
n = np.asarray(n, dtype=float)
n[n==0.0] = np.nan
plt.plot(x[1:]-0.5, n, 'h', label=r'$H_2O$ (hydrolysis), $k_f/k_r=0.01/0.01$', markersize=8)
print(np.nansum(n))
n, x = np.histogram(loops_rev_water_01, bins=range(40))
n = np.asarray(n, dtype=float)
n[n==0.0] = np.nan
plt.plot(x[1:]-0.5, n, 's', label=r'$H_2O$ (hydrolysis), $k_f/k_r=0.1/0.01$', markersize=8)
print(np.nansum(n))
xticks = np.array(list(range(0, 40, 5)))
plt.xticks(xticks-0.5, xticks)
plt.xlim([3, 40])
plt.legend(loc=0)
plt.xlabel('cycle size x (\# of monomers)')
plt.ylabel('number of cycles of size x')
#plt.annotate(r'$k_r=0.01$', xy=(0.57, 0.64), xycoords='axes fraction', fontsize=16)
plt.tight_layout()
plt.savefig('result_graphics/loop_size.pdf', tight_layout=True)
with open('scan_p_1000/no_water/loops.pck', 'rb') as ib:
loops_no_water = pickle.load(ib)
print(loops_no_water)
#loops_no_water = [l for x in loops_no_water for l in x]
```
|
github_jupyter
|
### Homework: going neural (6 pts)
We've checked out statistical approaches to language models in the last notebook. Now let's go find out what deep learning has to offer.
<img src='https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/expanding_mind_lm_kn_3.png' width=300px>
We're gonna use the same dataset as before, except this time we build a language model that's character-level, not word level. Before you go:
* If you haven't done seminar already, use `seminar.ipynb` to download the data.
* This homework uses Pytorch v1.x: this is [how you install it](https://pytorch.org/get-started/locally/); and that's [how you use it](https://github.com/yandexdataschool/Practical_RL/tree/9f89e98d7df7ad47f5d6c85a70a38283e06be16a/week04_%5Brecap%5D_deep_learning).
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
Working on character level means that we don't need to deal with large vocabulary or missing words. Heck, we can even keep uppercase words in text! The downside, however, is that all our sequences just got a lot longer.
However, we still need special tokens:
* Begin Of Sequence (__BOS__) - this token is at the start of each sequence. We use it so that we always have non-empty input to our neural network. $P(x_t) = P(x_1 | BOS)$
* End Of Sequence (__EOS__) - you guess it... this token is at the end of each sequence. The catch is that it should __not__ occur anywhere else except at the very end. If our model produces this token, the sequence is over.
```
BOS, EOS = ' ', '\n'
data = pd.read_json("./arxivData.json")
lines = data.apply(lambda row: (row['title'] + ' ; ' + row['summary'])[:512], axis=1) \
.apply(lambda line: BOS + line.replace(EOS, ' ') + EOS) \
.tolist()
# if you missed the seminar, download data here - https://yadi.sk/d/_nGyU2IajjR9-w
lines[:3]
```
Our next step is __building char-level vocabulary__. Put simply, you need to assemble a list of all unique tokens in the dataset.
```
# get all unique characters from lines (including capital letters and symbols)
token_set = set()
list(map(lambda x: token_set.update(x), [list(line) for line in lines])) # add list to eager execution
tokens = list(token_set)
tokens = sorted(tokens)
n_tokens = len(tokens)
print ('n_tokens = ',n_tokens)
assert 100 < n_tokens < 150
assert BOS in tokens, EOS in tokens
```
We can now assign each character with it's index in tokens list. This way we can encode a string into a torch-friendly integer vector.
```
# dictionary of character -> its identifier (index in tokens list)
token_to_id = {token:i for i, token in enumerate(tokens)}
assert len(tokens) == len(token_to_id), "dictionaries must have same size"
for i in range(n_tokens):
assert token_to_id[tokens[i]] == i, "token identifier must be it's position in tokens list"
print("Seems alright!")
```
Our final step is to assemble several strings in a integet matrix `[batch_size, text_length]`.
The only problem is that each sequence has a different length. We can work around that by padding short sequences with extra _EOS_ or cropping long sequences. Here's how it works:
```
def to_matrix(lines, max_len=None, pad=token_to_id[EOS], dtype=np.int64):
"""Casts a list of lines into torch-digestable matrix"""
max_len = max_len or max(map(len, lines))
lines_ix = np.full([len(lines), max_len], pad, dtype=dtype)
for i in range(len(lines)):
line_ix = list(map(token_to_id.get, lines[i][:max_len]))
lines_ix[i, :len(line_ix)] = line_ix
return lines_ix
#Example: cast 4 random names to matrices, pad with zeros
dummy_lines = [
' abc\n',
' abacaba\n',
' abc1234567890\n',
]
print(to_matrix(dummy_lines))
```
### Neural Language Model (2 points including training)
Just like for N-gram LMs, we want to estimate probability of text as a joint probability of tokens (symbols this time).
$$P(X) = \prod_t P(x_t \mid x_0, \dots, x_{t-1}).$$
Instead of counting all possible statistics, we want to train a neural network with parameters $\theta$ that estimates the conditional probabilities:
$$ P(x_t \mid x_0, \dots, x_{t-1}) \approx p(x_t \mid x_0, \dots, x_{t-1}, \theta) $$
But before we optimize, we need to define our neural network. Let's start with a fixed-window (aka convolutional) architecture:
<img src='https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/fixed_window_lm.jpg' width=400px>
```
import torch
import torch.nn as nn
import torch.nn.functional as F
class FixedWindowLanguageModel(nn.Module):
def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=64):
"""
A fixed window model that looks on at least 5 previous symbols.
Note: fixed window LM is effectively performing a convolution over a sequence of words.
This convolution only looks on current and previous words.
Such convolution can be represented as a sequence of 2 operations:
- pad input vectors by {strides * (filter_size - 1)} zero vectors on the "left", do not pad right
- perform regular convolution with {filter_size} and {strides}
- If you're absolutely lost, here's a hint: use nn.ZeroPad2d((NUM_LEADING_ZEROS, 0, 0, 0))
followed by a nn.Conv1d(..., padding=0). And yes, its okay that padding is technically "2d".
"""
super().__init__() # initialize base class to track sub-layers, trainable variables, etc.
# YOUR CODE - create layers/variables and any metadata you want, e.g. self.emb = L.Embedding(...)
self.emb = nn.Embedding(n_tokens, emb_size)
self.padding = nn.ZeroPad2d((0, 0, 4, 0))
self.kernel_size = 5
self.conv = nn.Conv1d(emb_size, hid_size, self.kernel_size)
self.fc = nn.Linear(hid_size, n_tokens)
#END OF YOUR CODE
def __call__(self, input_ix):
"""
compute language model logits given input tokens
:param input_ix: batch of sequences with token indices, tensor: int32[batch_size, sequence_length]
:returns: pre-softmax linear outputs of language model [batch_size, sequence_length, n_tokens]
these outputs will be used as logits to compute P(x_t | x_0, ..., x_{t - 1})
:note: that convolutions operate with tensors of shape [batch, channels, length], while linear layers
and *embeddings* use [batch, length, channels] tensors. Use tensor.permute(...) to adjust shapes.
"""
# YOUR CODE - apply layers, see docstring above
x = self.emb(input_ix)
x = self.padding(x)
x = x.permute(0, 2, 1)
x_conv = self.conv(x)
# x_conv = F.relu(x_conv)
x_conv = x_conv.permute(0, 2, 1)
x = self.fc(x_conv)
return x # output tensor should be of shape [batch_size, sequence_length, n_tokens]
def get_possible_next_tokens(self, prefix=BOS, temperature=1.0, max_len=100):
""" :returns: probabilities of next token, dict {token : prob} for all tokens """
prefix_ix = torch.as_tensor(to_matrix([prefix]), dtype=torch.int64)
with torch.no_grad():
probs = torch.softmax(self(prefix_ix)[0, -1], dim=-1).cpu().numpy() # shape: [n_tokens]
return dict(zip(tokens, probs))
dummy_model = FixedWindowLanguageModel()
dummy_input_ix = torch.as_tensor(to_matrix(dummy_lines))
dummy_logits = dummy_model(dummy_input_ix)
print('Weights:', tuple(name for name, w in dummy_model.named_parameters()))
assert isinstance(dummy_logits, torch.Tensor)
assert dummy_logits.shape == (len(dummy_lines), max(map(len, dummy_lines)), n_tokens), "please check output shape"
assert np.all(np.isfinite(dummy_logits.data.cpu().numpy())), "inf/nan encountered"
assert not np.allclose(dummy_logits.data.cpu().numpy().sum(-1), 1), "please predict linear outputs, don't use softmax (maybe you've just got unlucky)"
# test for lookahead
dummy_input_ix_2 = torch.as_tensor(to_matrix([line[:3] + 'e' * (len(line) - 3) for line in dummy_lines]))
dummy_logits_2 = dummy_model(dummy_input_ix_2)
assert torch.allclose(dummy_logits[:, :3], dummy_logits_2[:, :3]), "your model's predictions depend on FUTURE tokens. " \
" Make sure you don't allow any layers to look ahead of current token." \
" You can also get this error if your model is not deterministic (e.g. dropout). Disable it for this test."
```
We can now tune our network's parameters to minimize categorical crossentropy over training dataset $D$:
$$ L = {\frac1{|D|}} \sum_{X \in D} \sum_{x_i \in X} - \log p(x_t \mid x_1, \dots, x_{t-1}, \theta) $$
As usual with with neural nets, this optimization is performed via stochastic gradient descent with backprop. One can also note that minimizing crossentropy is equivalent to minimizing model __perplexity__, KL-divergence or maximizng log-likelihood.
```
def compute_mask(input_ix, eos_ix=token_to_id[EOS]):
""" compute a boolean mask that equals "1" until first EOS (including that EOS) """
cumsum = torch.cumsum(input_ix == eos_ix, dim=-1)
# print('cumsum', cumsum)
# print(cumsum[..., :-1] < 1)
return F.pad(torch.cumsum(input_ix == eos_ix, dim=-1)[..., :-1] < 1, pad=(1, 0, 0, 0), value=True)
# return cumsum[..., :-1] < 1
print('matrix:\n', dummy_input_ix.numpy())
print('mask:\n', compute_mask(dummy_input_ix).to(torch.int32).cpu().numpy())
print('lengths:', compute_mask(dummy_input_ix).sum(-1).cpu().numpy())
def compute_loss(model, input_ix):
"""
:param model: language model that can compute next token logits given token indices
:param input ix: int32 matrix of tokens, shape: [batch_size, length]; padded with eos_ix
:returns: scalar loss function, mean crossentropy over non-eos tokens
"""
input_ix = torch.as_tensor(input_ix, dtype=torch.int64)
logits = model(input_ix[:, :-1])
# logits = F.log_softmax(logits, dim=2)
# print(f'logits.shape: {logits.shape}')
reference_answers = input_ix[:, 1:]
# Your task: implement loss function as per formula above
# your loss should only be computed on actual tokens, excluding padding
# predicting actual tokens and first EOS do count. Subsequent EOS-es don't
# you may or may not want to use the compute_mask function from above.
# input_mask = compute_mask(input_ix[:, :-1])
input_mask = compute_mask(reference_answers)
input_mask = torch.unsqueeze(input_mask, 2)
# print('input_mask.shape:', input_mask.shape)
# print('logits[0]:', logits[0])
masked_logits = torch.mul(logits, input_mask) # 对应位相乘
masked_logits = F.log_softmax(masked_logits, dim=2)
# print(f'masked_logits.shape: {masked_logits.shape}')
# celoss = nn.CrossEntropyLoss()
# loss = celoss(masked_logits, reference_answers)
# print(f'masked_logits.shape: {masked_logits.shape}')
# print('reference_answers.shape, ', reference_answers.shape)
ml = masked_logits.reshape(-1, masked_logits.size(2))
ref = reference_answers.reshape(-1)
# print(ref)
loss = F.cross_entropy(ml, ref) / len(input_ix)
# print(loss.shape)
return loss
input = to_matrix(dummy_lines, max_len=15)
print('input shape', input.shape)
loss_1 = compute_loss(dummy_model, input)
loss_2 = compute_loss(dummy_model, to_matrix(dummy_lines, max_len=16))
print(loss_1, loss_2)
loss_1 = compute_loss(dummy_model, to_matrix(dummy_lines, max_len=15))
loss_2 = compute_loss(dummy_model, to_matrix(dummy_lines, max_len=16))
assert (np.ndim(loss_1) == 0) and (0 < loss_1 < 100), "loss must be a positive scalar"
assert torch.allclose(loss_1, loss_2), 'do not include AFTER first EOS into loss. '\
'Hint: use compute_mask. Beware +/-1 errors. And be careful when averaging!'
```
### Evaluation
You will need two functions: one to compute test loss and another to generate samples. For your convenience, we implemented them both in your stead.
```
def score_lines(model, dev_lines, batch_size):
""" computes average loss over the entire dataset """
dev_loss_num, dev_loss_len = 0., 0.
with torch.no_grad():
for i in range(0, len(dev_lines), batch_size):
batch_ix = to_matrix(dev_lines[i: i + batch_size])
dev_loss_num += compute_loss(model, batch_ix).item() * len(batch_ix)
dev_loss_len += len(batch_ix)
return dev_loss_num / dev_loss_len
def generate(model, prefix=BOS, temperature=1.0, max_len=100):
"""
Samples output sequence from probability distribution obtained by model
:param temperature: samples proportionally to model probabilities ^ temperature
if temperature == 0, always takes most likely token. Break ties arbitrarily.
"""
with torch.no_grad():
while True:
token_probs = model.get_possible_next_tokens(prefix)
tokens, probs = zip(*token_probs.items())
if temperature == 0:
next_token = tokens[np.argmax(probs)]
else:
probs = np.array([p ** (1. / temperature) for p in probs])
probs /= sum(probs)
next_token = np.random.choice(tokens, p=probs)
prefix += next_token
if next_token == EOS or len(prefix) > max_len: break
return prefix
```
### Training loop
Finally, let's train our model on minibatches of data
```
from sklearn.model_selection import train_test_split
train_lines, dev_lines = train_test_split(lines, test_size=0.25, random_state=42)
batch_size = 256
score_dev_every = 250
train_history, dev_history = [], []
model = FixedWindowLanguageModel()
opt = torch.optim.Adam(model.parameters())
# hint: if you ever wanted to switch to cuda, do it now.
# score untrained model
dev_history.append((0, score_lines(model, dev_lines, batch_size)))
print("Sample before training:", generate(model, 'Bridging'))
from IPython.display import clear_output
from random import sample
from tqdm import trange
for i in trange(len(train_history), 5000):
batch = to_matrix(sample(train_lines, batch_size))
loss_i = compute_loss(model, batch)
opt.zero_grad()
loss_i.backward()
opt.step()
train_history.append((i, loss_i.item()))
if (i + 1) % 50 == 0:
clear_output(True)
plt.scatter(*zip(*train_history), alpha=0.1, label='train_loss')
if len(dev_history):
plt.plot(*zip(*dev_history), color='red', label='dev_loss')
plt.legend(); plt.grid(); plt.show()
print("Generated examples (tau=0.5):")
for _ in range(3):
print(generate(model, temperature=0.5))
if (i + 1) % score_dev_every == 0:
print("Scoring dev...")
dev_history.append((i, score_lines(model, dev_lines, batch_size)))
print('#%i Dev loss: %.3f' % dev_history[-1])
assert np.mean(train_history[:10], axis=0)[1] > np.mean(train_history[-10:], axis=0)[1], "The model didn't converge."
print("Final dev loss:", dev_history[-1][-1])
for i in range(10):
print(generate(model, temperature=0.5))
```
### RNN Language Models (3 points including training)
Fixed-size architectures are reasonably good when capturing short-term dependencies, but their design prevents them from capturing any signal outside their window. We can mitigate this problem by using a __recurrent neural network__:
$$ h_0 = \vec 0 ; \quad h_{t+1} = RNN(x_t, h_t) $$
$$ p(x_t \mid x_0, \dots, x_{t-1}, \theta) = dense_{softmax}(h_{t-1}) $$
Such model processes one token at a time, left to right, and maintains a hidden state vector between them. Theoretically, it can learn arbitrarily long temporal dependencies given large enough hidden size.
<img src='https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/rnn_lm.jpg' width=480px>
```
class RNNLanguageModel(nn.Module):
def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=256):
"""
Build a recurrent language model.
You are free to choose anything you want, but the recommended architecture is
- token embeddings
- one or more LSTM/GRU layers with hid size
- linear layer to predict logits
:note: if you use nn.RNN/GRU/LSTM, make sure you specify batch_first=True
With batch_first, your model operates with tensors of shape [batch_size, sequence_length, num_units]
Also, please read the docs carefully: they don't just return what you want them to return :)
"""
super().__init__() # initialize base class to track sub-layers, trainable variables, etc.
# YOUR CODE - create layers/variables/etc
self.emb = nn.Embedding(n_tokens, emb_size)
self.lstm = nn.LSTM(input_size=emb_size, hidden_size=hid_size, num_layers=1, batch_first=True)
self.fc = nn.Linear(hid_size, n_tokens)
#END OF YOUR CODE
def __call__(self, input_ix):
"""
compute language model logits given input tokens
:param input_ix: batch of sequences with token indices, tensor: int32[batch_size, sequence_length]
:returns: pre-softmax linear outputs of language model [batch_size, sequence_length, n_tokens]
these outputs will be used as logits to compute P(x_t | x_0, ..., x_{t - 1})
"""
# YOUR CODE - apply layers, see docstring above
x = self.emb(input_ix)
x_lstm, (h_n, c_n) = self.lstm(x)
x_fc = self.fc(x_lstm)
return x_fc # output tensor should be of shape [batch_size, sequence_length, n_tokens]
def get_possible_next_tokens(self, prefix=BOS, temperature=1.0, max_len=100):
""" :returns: probabilities of next token, dict {token : prob} for all tokens """
prefix_ix = torch.as_tensor(to_matrix([prefix]), dtype=torch.int64)
with torch.no_grad():
probs = torch.softmax(self(prefix_ix)[0, -1], dim=-1).cpu().numpy() # shape: [n_tokens]
return dict(zip(tokens, probs))
model = RNNLanguageModel()
dummy_input_ix = torch.as_tensor(to_matrix(dummy_lines))
dummy_logits = model(dummy_input_ix)
assert isinstance(dummy_logits, torch.Tensor)
assert dummy_logits.shape == (len(dummy_lines), max(map(len, dummy_lines)), n_tokens), "please check output shape"
assert not np.allclose(dummy_logits.cpu().data.numpy().sum(-1), 1), "please predict linear outputs, don't use softmax (maybe you've just got unlucky)"
print('Weights:', tuple(name for name, w in model.named_parameters()))
# test for lookahead
dummy_input_ix_2 = torch.as_tensor(to_matrix([line[:3] + 'e' * (len(line) - 3) for line in dummy_lines]))
dummy_logits_2 = model(dummy_input_ix_2)
assert torch.allclose(dummy_logits[:, :3], dummy_logits_2[:, :3]), "your model's predictions depend on FUTURE tokens. " \
" Make sure you don't allow any layers to look ahead of current token." \
" You can also get this error if your model is not deterministic (e.g. dropout). Disable it for this test."
```
### RNN training
Our RNN language model should optimize the same loss function as fixed-window model. But there's a catch. Since RNN recurrently multiplies gradients through many time-steps, gradient values may explode, [ruining](https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/nan.jpg) your model.
The common solution to that problem is to clip gradients either [individually](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/clip_by_value) or [globally](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/clip_by_global_norm).
Your task here is to implement the training code that minimizes the loss function. If you encounter large loss fluctuations during training, please add [gradient clipping](https://pytorch.org/docs/stable/generated/torch.nn.utils.clip_grad_norm_.html) using urls above. But its **not necessary** to use gradient clipping if you don't need it.
_Note: gradient clipping is not exclusive to RNNs. Convolutional networks with enough depth often suffer from the same issue._
```
batch_size = 64 # <-- please tune batch size to fit your CPU/GPU configuration
score_dev_every = 250
train_history, dev_history = [], []
model = RNNLanguageModel(n_tokens=136)
opt = torch.optim.Adam(model.parameters())
# score untrained model
dev_history.append((0, score_lines(model, dev_lines, batch_size)))
print("Sample before training:", generate(model, 'Bridging'))
from IPython.display import clear_output
from random import sample
from tqdm import trange
for i in trange(len(train_history), 5000):
batch = to_matrix(sample(train_lines, batch_size))
# <YOUR CODE - one step of the training loop for your RNN model>
opt.zero_grad()
loss_i = compute_loss(model, batch)
loss_i.backward()
nn.utils.clip_grad.clip_grad_value_(model.parameters(), 0.5)
opt.step()
train_history.append((i, float(loss_i)))
if (i + 1) % 50 == 0:
clear_output(True)
plt.scatter(*zip(*train_history), alpha=0.1, label='train_loss')
if len(dev_history):
plt.plot(*zip(*dev_history), color='red', label='dev_loss')
plt.legend(); plt.grid(); plt.show()
print("Generated examples (tau=0.5):")
for _ in range(3):
print(generate(model, temperature=0.5))
if (i + 1) % score_dev_every == 0:
print("Scoring dev...")
dev_history.append((i, score_lines(model, dev_lines, batch_size)))
print('#%i Dev loss: %.3f' % dev_history[-1])
assert np.mean(train_history[:10], axis=0)[1] > np.mean(train_history[-10:], axis=0)[1], "The model didn't converge."
print("Final dev loss:", dev_history[-1][-1])
for i in range(10):
print(generate(model, temperature=0.5))
```
### Alternative sampling strategies (1 point)
So far we've sampled tokens from the model in proportion with their probability.
However, this approach can sometimes generate nonsense words due to the fact that softmax probabilities of these words are never exactly zero. This issue can be somewhat mitigated with sampling temperature, but low temperature harms sampling diversity. Can we remove the nonsense words without sacrificing diversity? __Yes, we can!__ But it takes a different sampling strategy.
__Top-k sampling:__ on each step, sample the next token from __k most likely__ candidates from the language model.
Suppose $k=3$ and the token probabilities are $p=[0.1, 0.35, 0.05, 0.2, 0.3]$. You first need to select $k$ most likely words and set the probability of the rest to zero: $\hat p=[0.0, 0.35, 0.0, 0.2, 0.3]$ and re-normalize:
$p^*\approx[0.0, 0.412, 0.0, 0.235, 0.353]$.
__Nucleus sampling:__ similar to top-k sampling, but this time we select $k$ dynamically. In nucleous sampling, we sample from top-__N%__ fraction of the probability mass.
Using the same $p=[0.1, 0.35, 0.05, 0.2, 0.3]$ and nucleous N=0.9, the nucleous words consist of:
1. most likely token $w_2$, because $p(w_2) < N$
2. second most likely token $w_5$, $p(w_2) + p(w_5) = 0.65 < N$
3. third most likely token $w_4$ because $p(w_2) + p(w_5) + p(w_4) = 0.85 < N$
And thats it, because the next most likely word would overflow: $p(w_2) + p(w_5) + p(w_4) + p(w_1) = 0.95 > N$.
After you've selected the nucleous words, you need to re-normalize them as in top-k sampling and generate the next token.
__Your task__ is to implement nucleus sampling variant and see if its any good.
```
def generate_nucleus(model, prefix=BOS, nucleus=0.9, max_len=100):
"""
Generate a sequence with nucleous sampling
:param prefix: a string containing space-separated previous tokens
:param nucleus: N from the formulae above, N \in [0, 1]
:param max_len: generate sequences with at most this many tokens, including prefix
:note: make sure that nucleous always contains at least one word, even if p(w*) > nucleus
"""
while True:
token_probs = model.get_possible_next_tokens(prefix)
tokens, probs = zip(*sorted(token_probs.items(), key=lambda x: x[1], reverse=True))
probs = list(probs)
prob_sum = 0
max_idx = 0
for i in range(len(probs)):
prob_sum += probs[i]
if prob_sum > nucleus:
max_idx = i
break
for j in range(max_idx + 1, len(probs)): probs[j] = 0
norm = np.sum(probs)
probs = [p/norm for p in probs]
next_token = np.random.choice(tokens, p=probs)
prefix += next_token
if next_token == EOS or len(prefix) > max_len: break
return prefix
for i in range(10):
print(generate_nucleus(model))
```
### Bonus quest I: Beam Search (2 pts incl. samples)
At times, you don't really want the model to generate diverse outputs as much as you want a __single most likely hypothesis.__ A single best translation, most likely continuation of the search query given prefix, etc. Except, you can't get it.
In order to find the exact most likely sequence containing 10 tokens, you would need to enumerate all $|V|^{10}$ possible hypotheses. In practice, 9 times out of 10 you will instead find an approximate most likely output using __beam search__.
Here's how it works:
0. Initial `beam` = [prefix], max beam_size = k
1. for T steps:
2. ` ... ` generate all possible next tokens for all hypotheses in beam, formulate `len(beam) * len(vocab)` candidates
3. ` ... ` select beam_size best for all candidates as new `beam`
4. Select best hypothesis (-es?) from beam
```
from IPython.display import HTML
# Here's what it looks like:
!wget -q https://raw.githubusercontent.com/yandexdataschool/nlp_course/2020/resources/beam_search.html
HTML("beam_search.html")
def generate_beamsearch(model, prefix=BOS, beam_size=4, length=5):
"""
Generate a sequence with nucleous sampling
:param prefix: a string containing space-separated previous tokens
:param nucleus: N from the formulae above, N \in [0, 1]
:param length: generate sequences with at most this many tokens, NOT INCLUDING PREFIX
:returns: beam_size most likely candidates
:note: make sure that nucleous always contains at least one word, even if p(w*) > nucleus
"""
# <YOUR CODE HERE>
sequence_prob = []
for i in range(length):
if len(sequence_prob) == 0:
token_probs = model.get_possible_next_tokens(prefix)
token_probs = sorted(token_probs.items(), key=lambda x: x[1], reverse=True)
sequence_prob = [(token, np.log(prob)) for token, prob in token_probs[:beam_size]]
else:
sequence_prob_new = []
for tok, log_prob in sequence_prob:
prefix_new = prefix + tok
# print(f'prefix: {prefix_new}')
token_probs = model.get_possible_next_tokens(prefix_new)
token_probs = sorted(token_probs.items(), key=lambda x: x[1], reverse=True)
sequence_prob_new += [(tok +token, log_prob + np.log(prob)) for token, prob in token_probs[:beam_size]]
# print(f'sequence_prob_new: {sequence_prob_new}')
sequence_prob = sorted(sequence_prob_new, key=lambda x: x[1], reverse=True)[:beam_size]
print(f'at time {i}, sequence_prob: {sequence_prob}')
...
sequence_prob.sort(key=lambda x: x[1], reverse=True)
return sequence_prob[0][0]
generate_nucleus(model, prefix=' deep ', max_len=10)
generate_beamsearch(model, prefix=' deep ', beam_size=10)
# check it out: which beam size works best?
# find at least 5 prefixes where beam_size=1 and 8 generates different sequences
```
### Bonus quest II: Ultimate Language Model (2+ pts)
So you've learned the building blocks of neural language models, you can now build the ultimate monster:
* Make it char-level, word level or maybe use sub-word units like [bpe](https://github.com/rsennrich/subword-nmt);
* Combine convolutions, recurrent cells, pre-trained embeddings and all the black magic deep learning has to offer;
* Use strides to get larger window size quickly. Here's a [scheme](https://storage.googleapis.com/deepmind-live-cms/documents/BlogPost-Fig2-Anim-160908-r01.gif) from google wavenet.
* Train on large data. Like... really large. Try [1 Billion Words](http://www.statmt.org/lm-benchmark/1-billion-word-language-modeling-benchmark-r13output.tar.gz) benchmark;
* Use training schedules to speed up training. Start with small length and increase over time; Take a look at [one cycle](https://medium.com/@nachiket.tanksale/finding-good-learning-rate-and-the-one-cycle-policy-7159fe1db5d6) for learning rate;
_You are NOT required to submit this assignment. Please make sure you don't miss your deadline because of it :)_
|
github_jupyter
|
```
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.tseries.frequencies import to_offset
print(*os.listdir("./data"), sep="\n")
orig_data_dir = "./data/orig_data/"
print(*os.listdir(orig_data_dir), sep="\n")
prices_df = pd.read_csv(orig_data_dir+"PricesFile1.csv")
prices_df.info()
prices_df.head()
prices_df.pricedate, prices_df.delivdate = pd.to_datetime(prices_df.pricedate), pd.to_datetime(prices_df.delivdate)
prices_df.info()
prices_df.rename(columns={"priceindex" : "index", "pricedate" : "t", "delivdate" : "T", "price" : "F"},
inplace=True)
prices_df.head()
p_i = prices_df["index"].unique()
print(*p_i, sep="\n")
prices_df["index"].value_counts()
grouped_by_p_i = prices_df.groupby("index")
for i in range(len(p_i)):
df = grouped_by_p_i.get_group(p_i[i])
print("index : " + p_i[i])
print("t_min is " + str(min(df.t).date()) + "\t" + "t_max is " + str(max(df.t).date()))
print("T_min is " + str(min(df["T"]).date()) + "\t" + "T_max is " + str(max(df["T"]).date()) + "\n")
del df
del i
df = prices_df[prices_df.t > prices_df["T"]]
df.head()
df["index"].value_counts()
df.nunique()
del df
AECO_futures_df = grouped_by_p_i.get_group(p_i[1])
AECO_futures_df.head()
AECO_futures_df.info()
AECO_futures_df.to_csv(r"./data/wrangled_data/AECO_futures.csv", index=False)
NG_futures_df = grouped_by_p_i.get_group(p_i[2])
NG_futures_df.head()
NG_futures_df.info()
NG_futures_df.to_csv(r"./data/wrangled_data/NG_futures.csv", index=False)
WTI_futures_df = grouped_by_p_i.get_group(p_i[3])
WTI_futures_df = WTI_futures_df[WTI_futures_df.t < WTI_futures_df["T"]]
WTI_futures_df.head()
WTI_futures_df.info()
WTI_futures_df.to_csv(r"./data/wrangled_data/WTI_futures.csv", index=False)
NG_IV_20_df = pd.read_csv(orig_data_dir+"NG_ImpliedVols2020.csv")
NG_IV_20_df.head()
NG_IV_20_df.info()
NG_IV_21_df = pd.read_csv(orig_data_dir+"NG_ImpliedVols2021.csv")
NG_IV_21_df.head()
NG_IV_21_df.info()
NG_IV_22_df = pd.read_csv(orig_data_dir+"NG_ImpliedVols2022.csv")
NG_IV_22_df.head()
NG_IV_22_df.info()
frames = [NG_IV_20_df, NG_IV_21_df, NG_IV_22_df]
NG_IV_df = pd.concat(frames)
del frames
NG_IV_df.rename(columns={"volatilityindex" : "index", "volatilitydate" : "t", "strikeprice" : "K",
"begtime" : "T", "volatility" : "sigma"}, inplace=True)
NG_IV_df = NG_IV_df[["index", "t", "T", "K", "sigma"]]
NG_IV_df["index"] = "NYMEX Natural Gas"
NG_IV_df.head()
NG_IV_df["t"], NG_IV_df["T"] = pd.to_datetime(NG_IV_df["t"]), pd.to_datetime(NG_IV_df["T"])
NG_IV_df.info()
for i in ["t", "T"]:
print(i + "_min is " + str(min(NG_IV_df[i]).date()) + "\t" + i + "_max is " + str(max(NG_IV_df[i]).date())
+ "\n")
del i
WTI_IV_20_df = pd.read_csv(orig_data_dir+"WTI_ImpliedVols2020.csv")
WTI_IV_20_df.head()
WTI_IV_20_df.info()
WTI_IV_21_df = pd.read_csv(orig_data_dir+"WTI_ImpliedVols2021.csv")
WTI_IV_21_df.head()
WTI_IV_21_df.info()
WTI_IV_22_df = pd.read_csv(orig_data_dir+"WTI_ImpliedVols2022.csv")
WTI_IV_22_df.head()
WTI_IV_22_df.info()
frames = [WTI_IV_20_df, WTI_IV_21_df, WTI_IV_22_df]
WTI_IV_df = pd.concat(frames)
del frames
WTI_IV_df.rename(columns={"volatilityindex" : "index", "volatilitydate" : "t", "strikeprice" : "K",
"begtime" : "T", "volatility" : "sigma"}, inplace=True)
WTI_IV_df = WTI_IV_df[["index", "t", "T", "K", "sigma"]]
WTI_IV_df["index"] = "WTI NYMEX LIGHT SWEET"
WTI_IV_df.head()
WTI_IV_df["t"], WTI_IV_df["T"] = pd.to_datetime(WTI_IV_df["t"]), pd.to_datetime(WTI_IV_df["T"])
WTI_IV_df.info()
for i in ["t", "T"]:
print(i + "_min is " + str(min(WTI_IV_df[i]).date()) + "\t" + i + "_max is " + str(max(WTI_IV_df[i]).date())
+ "\n")
del i
IR_df = pd.read_csv(orig_data_dir+"InterestRates.csv")
IR_df.head()
IR_df.pricedate = pd.to_datetime(IR_df.pricedate)
IR_df.info()
t = IR_df.pricedate.to_list()
ttm = [to_offset(i) for i in IR_df.maturity]
T = [t[i] + ttm[i] + to_offset("1D") for i in range(len(ttm))]
IR_df.maturity = T
IR_df.head()
IR_df.rename(columns={"pricedate" : "t", "maturity" : "T", "bidrate": "r"}, inplace=True)
IR_df.head()
for i in ["t", "T"]:
print(i + "_min is " + str(min(IR_df[i]).date()) + "\t" + i + "_max is " + str(max(IR_df[i]).date())
+ "\n")
del i
NG_IV_df.set_index(["t", "T"], inplace=True)
WTI_IV_df.set_index(["t", "T"], inplace=True)
IR_df.set_index(["t", "T"], inplace=True)
NG_option_df = NG_IV_df.merge(IR_df, how="inner", left_index=True, right_index=True)
NG_option_df.drop(columns="index", inplace=True)
NG_option_df.reset_index(inplace=True)
NG_option_df.info()
NG_option_df.to_csv(r"./data/wrangled_data/NG_option.csv", index=False)
WTI_option_df = WTI_IV_df.merge(IR_df, how="inner", left_index=True, right_index=True)
WTI_option_df.drop(columns="index", inplace=True)
WTI_option_df.reset_index(inplace=True)
WTI_option_df.info()
WTI_option_df.to_csv(r"./data/wrangled_data/WTI_option.csv", index=False)
print(*os.listdir("./data/wrangled_data/"), sep="\n")
```
|
github_jupyter
|
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import sys
sys.path.append("../input/tez-lib/")
sys.path.append("../input/timmmaster/")
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import torch
import torchvision
import sklearn
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
import torchvision.transforms as transforms
from torchvision.io import read_image
import math
test_csv = pd.read_csv('../input/petfinder-pawpularity-score/test.csv')
submission = pd.read_csv('../input/petfinder-pawpularity-score/sample_submission.csv')
test_dir = '../input/petfinder-pawpularity-score/test/'
def create_path(df,root_dir):
df['Path'] = df['Id'].apply(lambda x: root_dir+x+'.jpg')
create_path(test_csv,test_dir)
# Create Label column (used later)
test_csv['Pawpularity'] = 100
def sigmoid(x):
return 1 / (1 + math.exp(-x))
class CustomDataSet(Dataset):
def __init__(self,csv_file,transform=None,augment_transform=None,root_dir ='../input/petfinder-pawpularity-score/train/' ):
self.csv_file = csv_file
self.transform = transform
self.augment_trans = augment_transform
self.root_dir = root_dir
self.img_paths = self._get_img_paths(self.csv_file, root_dir)
def __len__(self):
return int(self.csv_file.shape[0])
def __getitem__(self,idx):
img_path = self.img_paths[idx]
image = read_image(img_path)
label = self.csv_file.Pawpularity.iloc[idx]/100
image = image.type(torch.FloatTensor)
if self.transform:
image = self.transform(image)
if self.augment_trans:
image = self.augment_transform(image)
image = torch.mul(image, (1/255))
return {'image':image,'targets':label}
def _get_img_paths(self,csv_file,root_dir):
imgs = csv_file['Id'].apply(lambda x: root_dir+x+'.jpg').tolist()
return imgs
from torch.utils.data.sampler import SubsetRandomSampler
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
transform = transforms.Compose([
normalize,
transforms.Resize(255),
transforms.CenterCrop(224)])
test_data = CustomDataSet(test_csv,transform,root_dir = test_dir)
#!pip install timm
#!pip install tez
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)
import torch.optim as optim
from torch.optim import lr_scheduler
from torchvision import models
import time
import copy
import timm
from tez.callbacks import EarlyStopping
import tez
from tqdm import tqdm
import sys
sys.path.append("../input/tez-lib/")
sys.path.append("../input/timmmaster/")
import tez
import timm
from tez.callbacks import EarlyStopping
#sys.path.append("../input/tez-lib")
#sys.path.append("../input/timmmaster")
from sklearn import metrics
def mixup_loss(loss_fn, pred, y_a, y_b, lam):
# get loss from current x n loss from watermarks n add
return lam * loss_fn(pred, y_a) + (1 - lam) * loss_fn(pred, y_b)
class PawpularModel(tez.Model):
def __init__(self,name):
super().__init__()
self.model = timm.create_model(name, pretrained=False, in_chans=3)
self.model.head = nn.Linear(self.model.head.in_features, 128)
self.dropout = nn.Dropout(p=0.5)
self.dense1 = nn.Linear(128,1)
self.step_scheduler_after = 'epoch'
def forward(self, image, targets=None):
# do mixup when have targets and state is train (doesnt do mixup at val)
if ((targets is not None) and (self._train_state == True)):
#image, features, target_a, target_b, lam = mixup_data(image, features, targets.view(-1,1))
image = image.to(device='cuda', dtype=torch.float)
features = features.to(device='cuda', dtype=torch.float)
target_a = target_a.to(device='cuda', dtype=torch.float)
target_b = target_b.to(device='cuda', dtype=torch.float)
x = self.model(image)
x = self.dropout(x)
# combine with meta features and shrink it down to 1 feature (score)
x = self.dense1(x)
return x, 0, {}
#model = timm.create_model("swin_large_patch4_window12_384",pretrained=True)
super_final_predictions = []
sys.path.append("../input/tez-lib/")
sys.path.append("../input/timmmaster/")
for fold_ in range(5):
model = PawpularModel("swin_tiny_patch4_window7_224")
model.load(f"../input/kfold-pet1/swin_tiny_patch4_window7_224_f{fold_}.bin", device="cuda", weights_only=True)
df_test = pd.read_csv("../input/petfinder-pawpularity-score/test.csv")
#test_img_paths = [f"../input/petfinder-pawpularity-score/test/{x}.jpg" for x in df_test["Id"].values]
test_dataset = CustomDataSet(
test_csv,transform,root_dir=test_dir
)
test_predictions = model.predict(test_dataset, batch_size=32, n_jobs=-1)
final_test_predictions = []
for preds in tqdm(test_predictions):
final_test_predictions.extend(preds.ravel().tolist())
final_test_predictions = [sigmoid(x) * 100 for x in final_test_predictions]
super_final_predictions.append(final_test_predictions)
super_final_predictions = np.mean(np.column_stack(super_final_predictions), axis=1)
df_test["Pawpularity"] = super_final_predictions
df_test = df_test[["Id", "Pawpularity"]]
df_test.to_csv("submission.csv", index=False)
df_test
```
|
github_jupyter
|
# Exploratory Data Analysis Using Python and BigQuery
## Learning Objectives
1. Analyze a Pandas Dataframe
2. Create Seaborn plots for Exploratory Data Analysis in Python
3. Write a SQL query to pick up specific fields from a BigQuery dataset
4. Exploratory Analysis in BigQuery
## Introduction
This lab is an introduction to linear regression using Python and Scikit-Learn. This lab serves as a foundation for more complex algorithms and machine learning models that you will encounter in the course. We will train a linear regression model to predict housing price.
Each learning objective will correspond to a __#TODO__ in the [student lab notebook](../labs/python.BQ_explore_data.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
### Import Libraries
```
# Run the chown command to change the ownership
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Install the Google Cloud BigQuery library
!pip install --user google-cloud-bigquery==1.25.0
```
Please ignore any incompatibility warnings and errors.
**Restart** the kernel before proceeding further (On the Notebook menu - Kernel - Restart Kernel).
```
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the results of that search
# to a name in the local scope.
import os
import pandas as pd
import numpy as np
# Import matplotlib to visualize the model
import matplotlib.pyplot as plt
# Seaborn is a Python data visualization library based on matplotlib
import seaborn as sns
%matplotlib inline
```
### Load the Dataset
Here, we create a directory called usahousing. This directory will hold the dataset that we copy from Google Cloud Storage.
```
# Create a directory to hold the dataset
if not os.path.isdir("../data/explore"):
os.makedirs("../data/explore")
```
Next, we copy the Usahousing dataset from Google Cloud Storage.
```
# Copy the file using `gsutil cp` from Google Cloud Storage in the required directory
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/housing_pre-proc_toy.csv ../data/explore
```
Then we use the "ls" command to list files in the directory. This ensures that the dataset was copied.
```
# `ls` shows the working directory's contents.
# The `l` flag list the all files with permissions and details
!ls -l ../data/explore
```
Next, we read the dataset into a Pandas dataframe.
```
# TODO 1
# Read a comma-separated values (csv) file into a DataFrame using the read_csv() function
df_USAhousing = pd.read_csv('../data/explore/housing_pre-proc_toy.csv')
```
### Inspect the Data
```
# Get the first five rows using the head() method
df_USAhousing.head()
```
Let's check for any null values.
```
# `isnull()` finds a null value in a column and `sum()` counts it
df_USAhousing.isnull().sum()
# Get some basic statistical details using describe() method
df_stats = df_USAhousing.describe()
# Transpose index and columns of the dataframe
df_stats = df_stats.transpose()
df_stats
# Get a concise summary of a DataFrame
df_USAhousing.info()
```
Let's take a peek at the first and last five rows of the data for all columns.
```
print ("Rows : " ,df_USAhousing.shape[0])
print ("Columns : " ,df_USAhousing.shape[1])
print ("\nFeatures : \n" ,df_USAhousing.columns.tolist())
print ("\nMissing values : ", df_USAhousing.isnull().sum().values.sum())
print ("\nUnique values : \n",df_USAhousing
.nunique())
```
## Explore the Data
Let's create some simple plots to check out the data!
```
# `heatmap` plots a rectangular data in a color-encoded matrix and
# `corr` finds the pairwise correlation of all columns in the dataframe
sns.heatmap(df_USAhousing.corr())
```
Create a displot showing "median_house_value".
```
# TODO 2a
# Plot a univariate distribution of observations using seaborn `distplot()` function
sns.displot(df_USAhousing['median_house_value'])
# Set the aesthetic style of the plots
sns.set_style('whitegrid')
# Plot a histogram using `hist()` function
df_USAhousing['median_house_value'].hist(bins=30)
plt.xlabel('median_house_value')
x = df_USAhousing['median_income']
y = df_USAhousing['median_house_value']
# Scatter plot of y vs x using scatter() and `show()` display all open figures
plt.scatter(x, y)
plt.show()
```
Create a jointplot showing "median_income" versus "median_house_value".
```
# TODO 2b
# `joinplot()` draws a plot of two variables with bivariate and univariate graphs.
sns.jointplot(x='median_income',y='median_house_value',data=df_USAhousing)
# `countplot()` shows the counts of observations in each categorical bin using bars
sns.countplot(x = 'ocean_proximity', data=df_USAhousing)
# takes numeric only?
# plt.figure(figsize=(20,20))
# Draw a multi-plot on every facet using `FacetGrid()`
g = sns.FacetGrid(df_USAhousing, col="ocean_proximity")
# Pass a function and the name of one or more columns in the dataframe
g.map(plt.hist, "households");
# takes numeric only?
# plt.figure(figsize=(20,20))
# Draw a multi-plot on every facet using `FacetGrid()`
g = sns.FacetGrid(df_USAhousing, col="ocean_proximity")
# Pass a function and the name of one or more columns in the dataframe
g.map(plt.hist, "median_income");
```
You can see below that this is the state of California!
```
x = df_USAhousing['latitude']
y = df_USAhousing['longitude']
# Scatter plot of y vs x and display all open figures
plt.scatter(x, y)
plt.show()
```
# Explore and create ML datasets
In this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected.
## Learning objectives
* Access and explore a public BigQuery dataset on NYC Taxi Cab rides
* Visualize your dataset using the Seaborn library
First, **restart the Kernel**. Now, let's start with the Python imports that we need.
```
# Import the python libraries
from google.cloud import bigquery
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
```
<h3> Extract sample data from BigQuery </h3>
The dataset that we will use is <a href="https://console.cloud.google.com/bigquery?project=nyc-tlc&p=nyc-tlc&d=yellow&t=trips&page=table">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows.
Let's write a SQL query to pick up interesting fields from the dataset. It's a good idea to get the timestamp in a predictable format.
```
%%bigquery
# SQL query to get a fields from dataset which prints the 10 records
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude, dropoff_longitude,
dropoff_latitude, passenger_count, trip_distance, tolls_amount,
fare_amount, total_amount
# TODO 3
FROM
`nyc-tlc.yellow.trips`
LIMIT 10
```
Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,000 records -- because there are 1 billion records in the data, we should get back approximately 10,000 records if we do this.
We will also store the BigQuery result in a Pandas dataframe named "trips"
```
%%bigquery trips
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
print(len(trips))
# We can slice Pandas dataframes as if they were arrays
trips[:10]
```
<h3> Exploring data </h3>
Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.
```
# TODO 4
# Use Seaborn `regplot()` function to plot the data and a linear regression model fit.
ax = sns.regplot(
x="trip_distance", y="fare_amount",
fit_reg=False, ci=None, truncate=True, data=trips)
ax.figure.set_size_inches(10, 8)
```
Hmm ... do you see something wrong with the data that needs addressing?
It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).
Note the extra WHERE clauses.
```
%%bigquery trips
# SQL query with where clause to save the results in the trips dataframe
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
# TODO 4a
AND trip_distance > 0
AND fare_amount >= 2.5
print(len(trips))
# Use Seaborn `regplot()` function to plot the data and a linear regression model fit.
ax = sns.regplot(
x="trip_distance", y="fare_amount",
fit_reg=False, ci=None, truncate=True, data=trips)
ax.figure.set_size_inches(10, 8)
```
What's up with the streaks around 45 dollars and 50 dollars? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable.
Let's also examine whether the toll amount is captured in the total amount.
```
tollrides = trips[trips["tolls_amount"] > 0]
tollrides[tollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"]
notollrides = trips[trips["tolls_amount"] == 0]
notollrides[notollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"]
```
Looking at a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool.
Let's also look at the distribution of values within the columns.
```
# Print the distribution of values within the columns using `describe()`
trips.describe()
```
Copyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
|
github_jupyter
|
In this notebook, we'll learn how to use GANs to do semi-supervised learning.
In supervised learning, we have a training set of inputs $x$ and class labels $y$. We train a model that takes $x$ as input and gives $y$ as output.
In semi-supervised learning, our goal is still to train a model that takes $x$ as input and generates $y$ as output. However, not all of our training examples have a label $y$. We need to develop an algorithm that is able to get better at classification by studying both labeled $(x, y)$ pairs and unlabeled $x$ examples.
To do this for the SVHN dataset, we'll turn the GAN discriminator into an 11 class discriminator. It will recognize the 10 different classes of real SVHN digits, as well as an 11th class of fake images that come from the generator. The discriminator will get to train on real labeled images, real unlabeled images, and fake images. By drawing on three sources of data instead of just one, it will generalize to the test set much better than a traditional classifier trained on only one source of data.
```
%matplotlib inline
import pickle as pkl
import time
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=True, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
# The SVHN dataset comes with lots of labels, but for the purpose of this exercise,
# we will pretend that there are only 1000.
# We use this mask to say which labels we will allow ourselves to use.
self.label_mask = np.zeros_like(self.train_y)
self.label_mask[0:1000] = 1
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.train_x = self.scaler(self.train_x)
self.valid_x = self.scaler(self.valid_x)
self.test_x = self.scaler(self.test_x)
self.shuffle = shuffle
def batches(self, batch_size, which_set="train"):
x_name = which_set + "_x"
y_name = which_set + "_y"
num_examples = len(getattr(dataset, y_name))
if self.shuffle:
idx = np.arange(num_examples)
np.random.shuffle(idx)
setattr(dataset, x_name, getattr(dataset, x_name)[idx])
setattr(dataset, y_name, getattr(dataset, y_name)[idx])
if which_set == "train":
dataset.label_mask = dataset.label_mask[idx]
dataset_x = getattr(dataset, x_name)
dataset_y = getattr(dataset, y_name)
for ii in range(0, num_examples, batch_size):
x = dataset_x[ii:ii+batch_size]
y = dataset_y[ii:ii+batch_size]
if which_set == "train":
# When we use the data for training, we need to include
# the label mask, so we can pretend we don't have access
# to some of the labels, as an exercise of our semi-supervised
# learning ability
yield x, y, self.label_mask[ii:ii+batch_size]
else:
yield x, y
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
y = tf.placeholder(tf.int32, (None), name='y')
label_mask = tf.placeholder(tf.int32, (None), name='label_mask')
return inputs_real, inputs_z, y, label_mask
def generator(z, output_dim, reuse=False, alpha=0.2, training=True, size_mult=128):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4 * 4 * size_mult * 4)
# Reshape it to start the convolutional stack
x1 = tf.reshape(x1, (-1, 4, 4, size_mult * 4))
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha * x1, x1)
x2 = tf.layers.conv2d_transpose(x1, size_mult * 2, 5, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=training)
x2 = tf.maximum(alpha * x2, x2)
x3 = tf.layers.conv2d_transpose(x2, size_mult, 5, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=training)
x3 = tf.maximum(alpha * x3, x3)
# Output layer
logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
out = tf.tanh(logits)
return out
def discriminator(x, reuse=False, alpha=0.2, drop_rate=0., num_classes=10, size_mult=64):
with tf.variable_scope('discriminator', reuse=reuse):
x = tf.layers.dropout(x, rate=drop_rate/2.5)
# Input layer is 32x32x3
x1 = tf.layers.conv2d(x, size_mult, 3, strides=2, padding='same')
relu1 = tf.maximum(alpha * x1, x1)
relu1 = tf.layers.dropout(relu1, rate=drop_rate)
x2 = tf.layers.conv2d(relu1, size_mult, 3, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * x2, x2)
x3 = tf.layers.conv2d(relu2, size_mult, 3, strides=2, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha * bn3, bn3)
relu3 = tf.layers.dropout(relu3, rate=drop_rate)
x4 = tf.layers.conv2d(relu3, 2 * size_mult, 3, strides=1, padding='same')
bn4 = tf.layers.batch_normalization(x4, training=True)
relu4 = tf.maximum(alpha * bn4, bn4)
x5 = tf.layers.conv2d(relu4, 2 * size_mult, 3, strides=1, padding='same')
bn5 = tf.layers.batch_normalization(x5, training=True)
relu5 = tf.maximum(alpha * bn5, bn5)
x6 = tf.layers.conv2d(relu5, 2 * size_mult, 3, strides=2, padding='same')
bn6 = tf.layers.batch_normalization(x6, training=True)
relu6 = tf.maximum(alpha * bn6, bn6)
relu6 = tf.layers.dropout(relu6, rate=drop_rate)
x7 = tf.layers.conv2d(relu5, 2 * size_mult, 3, strides=1, padding='valid')
# Don't use bn on this layer, because bn would set the mean of each feature
# to the bn mu parameter.
# This layer is used for the feature matching loss, which only works if
# the means can be different when the discriminator is run on the data than
# when the discriminator is run on the generator samples.
relu7 = tf.maximum(alpha * x7, x7)
# Flatten it by global average pooling
features = raise NotImplementedError()
# Set class_logits to be the inputs to a softmax distribution over the different classes
raise NotImplementedError()
# Set gan_logits such that P(input is real | input) = sigmoid(gan_logits).
# Keep in mind that class_logits gives you the probability distribution over all the real
# classes and the fake class. You need to work out how to transform this multiclass softmax
# distribution into a binary real-vs-fake decision that can be described with a sigmoid.
# Numerical stability is very important.
# You'll probably need to use this numerical stability trick:
# log sum_i exp a_i = m + log sum_i exp(a_i - m).
# This is numerically stable when m = max_i a_i.
# (It helps to think about what goes wrong when...
# 1. One value of a_i is very large
# 2. All the values of a_i are very negative
# This trick and this value of m fix both those cases, but the naive implementation and
# other values of m encounter various problems)
raise NotImplementedError()
return out, class_logits, gan_logits, features
def model_loss(input_real, input_z, output_dim, y, num_classes, label_mask, alpha=0.2, drop_rate=0.):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param output_dim: The number of channels in the output image
:param y: Integer class labels
:param num_classes: The number of classes
:param alpha: The slope of the left half of leaky ReLU activation
:param drop_rate: The probability of dropping a hidden unit
:return: A tuple of (discriminator loss, generator loss)
"""
# These numbers multiply the size of each layer of the generator and the discriminator,
# respectively. You can reduce them to run your code faster for debugging purposes.
g_size_mult = 32
d_size_mult = 64
# Here we run the generator and the discriminator
g_model = generator(input_z, output_dim, alpha=alpha, size_mult=g_size_mult)
d_on_data = discriminator(input_real, alpha=alpha, drop_rate=drop_rate, size_mult=d_size_mult)
d_model_real, class_logits_on_data, gan_logits_on_data, data_features = d_on_data
d_on_samples = discriminator(g_model, reuse=True, alpha=alpha, drop_rate=drop_rate, size_mult=d_size_mult)
d_model_fake, class_logits_on_samples, gan_logits_on_samples, sample_features = d_on_samples
# Here we compute `d_loss`, the loss for the discriminator.
# This should combine two different losses:
# 1. The loss for the GAN problem, where we minimize the cross-entropy for the binary
# real-vs-fake classification problem.
# 2. The loss for the SVHN digit classification problem, where we minimize the cross-entropy
# for the multi-class softmax. For this one we use the labels. Don't forget to ignore
# use `label_mask` to ignore the examples that we are pretending are unlabeled for the
# semi-supervised learning problem.
raise NotImplementedError()
# Here we set `g_loss` to the "feature matching" loss invented by Tim Salimans at OpenAI.
# This loss consists of minimizing the absolute difference between the expected features
# on the data and the expected features on the generated samples.
# This loss works better for semi-supervised learning than the tradition GAN losses.
raise NotImplementedError()
pred_class = tf.cast(tf.argmax(class_logits_on_data, 1), tf.int32)
eq = tf.equal(tf.squeeze(y), pred_class)
correct = tf.reduce_sum(tf.to_float(eq))
masked_correct = tf.reduce_sum(label_mask * tf.to_float(eq))
return d_loss, g_loss, correct, masked_correct, g_model
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# Get weights and biases to update. Get them separately for the discriminator and the generator
raise NotImplementedError()
# Minimize both players' costs simultaneously
raise NotImplementedError()
shrink_lr = tf.assign(learning_rate, learning_rate * 0.9)
return d_train_opt, g_train_opt, shrink_lr
class GAN:
"""
A GAN model.
:param real_size: The shape of the real data.
:param z_size: The number of entries in the z code vector.
:param learnin_rate: The learning rate to use for Adam.
:param num_classes: The number of classes to recognize.
:param alpha: The slope of the left half of the leaky ReLU activation
:param beta1: The beta1 parameter for Adam.
"""
def __init__(self, real_size, z_size, learning_rate, num_classes=10, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.learning_rate = tf.Variable(learning_rate, trainable=False)
inputs = model_inputs(real_size, z_size)
self.input_real, self.input_z, self.y, self.label_mask = inputs
self.drop_rate = tf.placeholder_with_default(.5, (), "drop_rate")
loss_results = model_loss(self.input_real, self.input_z,
real_size[2], self.y, num_classes,
label_mask=self.label_mask,
alpha=0.2,
drop_rate=self.drop_rate)
self.d_loss, self.g_loss, self.correct, self.masked_correct, self.samples = loss_results
self.d_opt, self.g_opt, self.shrink_lr = model_opt(self.d_loss, self.g_loss, self.learning_rate, beta1)
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img)
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
def train(net, dataset, epochs, batch_size, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.normal(0, 1, size=(50, z_size))
samples, train_accuracies, test_accuracies = [], [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
print("Epoch",e)
t1e = time.time()
num_examples = 0
num_correct = 0
for x, y, label_mask in dataset.batches(batch_size):
assert 'int' in str(y.dtype)
steps += 1
num_examples += label_mask.sum()
# Sample random noise for G
batch_z = np.random.normal(0, 1, size=(batch_size, z_size))
# Run optimizers
t1 = time.time()
_, _, correct = sess.run([net.d_opt, net.g_opt, net.masked_correct],
feed_dict={net.input_real: x, net.input_z: batch_z,
net.y : y, net.label_mask : label_mask})
t2 = time.time()
num_correct += correct
sess.run([net.shrink_lr])
train_accuracy = num_correct / float(num_examples)
print("\t\tClassifier train accuracy: ", train_accuracy)
num_examples = 0
num_correct = 0
for x, y in dataset.batches(batch_size, which_set="test"):
assert 'int' in str(y.dtype)
num_examples += x.shape[0]
correct, = sess.run([net.correct], feed_dict={net.input_real: x,
net.y : y,
net.drop_rate: 0.})
num_correct += correct
test_accuracy = num_correct / float(num_examples)
print("\t\tClassifier test accuracy", test_accuracy)
print("\t\tStep time: ", t2 - t1)
t2e = time.time()
print("\t\tEpoch time: ", t2e - t1e)
gen_samples = sess.run(
net.samples,
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 5, 10, figsize=figsize)
plt.show()
# Save history of accuracies to view after training
train_accuracies.append(train_accuracy)
test_accuracies.append(test_accuracy)
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return train_accuracies, test_accuracies, samples
!mkdir checkpoints
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0003
net = GAN(real_size, z_size, learning_rate)
dataset = Dataset(trainset, testset)
batch_size = 128
epochs = 25
train_accuracies, test_accuracies, samples = train(net,
dataset,
epochs,
batch_size,
figsize=(10,5))
fig, ax = plt.subplots()
plt.plot(train_accuracies, label='Train', alpha=0.5)
plt.plot(test_accuracies, label='Test', alpha=0.5)
plt.title("Accuracy")
plt.legend()
```
When you run the fully implemented semi-supervised GAN, you should usually find that the test accuracy peaks at 69-71%. It should definitely stay above 68% fairly consistently throughout the last several epochs of training.
This is a little bit better than a [NIPS 2014 paper](https://arxiv.org/pdf/1406.5298.pdf) that got 64% accuracy on 1000-label SVHN with variational methods. However, we still have lost something by not using all the labels. If you re-run with all the labels included, you should obtain over 80% accuracy using this architecture (and other architectures that take longer to run can do much better).
```
_ = view_samples(-1, samples, 5, 10, figsize=(10,5))
!mkdir images
for ii in range(len(samples)):
fig, ax = view_samples(ii, samples, 5, 10, figsize=(10,5))
fig.savefig('images/samples_{:03d}.png'.format(ii))
plt.close()
```
Congratulations! You now know how to train a semi-supervised GAN. This exercise is stripped down to make it run faster and to make it simpler to implement. In the original work by Tim Salimans at OpenAI, a GAN using [more tricks and more runtime](https://arxiv.org/pdf/1606.03498.pdf) reaches over 94% accuracy using only 1,000 labeled examples.
|
github_jupyter
|
```
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
SPLIT_WEIGHTS = (8, 1, 1)
splits = tfds.Split.TRAIN.subsplit(weighted=SPLIT_WEIGHTS)
(raw_train, raw_validation, raw_test), metadata = tfds.load(
'cifar10', split=list(splits),
with_info=True, as_supervised=True)
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
IMG_SIZE = 128
def format_example(image, label):
image = tf.cast(image, tf.float32)
# image = (image/127.5) - 1
image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
return image, label
train = raw_train.map(format_example)
validation = raw_validation.map(format_example)
test = raw_test.map(format_example)
BATCH_SIZE = 32
SHUFFLE_BUFFER_SIZE = 1000
train_batches = train.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
validation_batches = validation.batch(BATCH_SIZE)
test_batches = test.batch(BATCH_SIZE)
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
feature_batch = base_model(image_batch)
print(feature_batch.shape)
base_model.trainable = False
base_model.summary()
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_average = global_average_layer(feature_batch)
print(feature_batch_average.shape)
prediction_layer = tf.keras.layers.Dense(10, activation='softmax')
prediction_batch = prediction_layer(feature_batch_average)
print(prediction_batch.shape)
# prediction_model = tf.keras.models.Sequential()
# prediction_model.add(tf.keras.layers.Flatten())
# prediction_model.add(tf.keras.layers.Dense(64, activation='relu'))
# prediction_model.add(tf.keras.layers.Dense(10, activation='softmax'))
model = tf.keras.Sequential([
base_model,
global_average_layer,
prediction_layer
])
model.summary()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.summary()
num_train, num_val, num_test = (
metadata.splits['train'].num_examples*weight/10
for weight in SPLIT_WEIGHTS
)
initial_epochs = 10
steps_per_epoch = round(num_train)//BATCH_SIZE
validation_steps = 20
loss0,accuracy0 = model.evaluate(validation_batches, steps = validation_steps)
print("initial loss: {:.2f}".format(loss0))
print("initial accuracy: {:.2f}".format(accuracy0))
history = model.fit(train_batches,
epochs=initial_epochs,
validation_data=validation_batches)
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
```
|
github_jupyter
|
# README
Do not blindly copy and paste. The parameter is hard-fixed with the `dataset`.<br>
For example: `SEQUENCE_LENGTH`
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
from torch.utils.data import Dataset, DataLoader
from tqdm import tqdm_notebook as tqdm
from sampler import ImbalancedDatasetSampler
torch.manual_seed(1249583)
# See the details in `Dataset` section
SEQUENCE_LENGTH = 19
COUNTRY_LENGTH = 18
```
# Data Preparation
<img src='lesson13_data.png'>
```
def str2ascii_arr(name):
"""
0-255
"""
arr = [ord(c) for c in name]
return arr, len(arr)
class RNNClassifier(nn.Module):
def __init__(self, input_size=256, hidden_size=256, output_size=18, n_layers=1):
"""
Because word embedding is working with ascii. It has to use `input_size=256, hidden_size=256`
"""
super().__init__()
self.hidden_size = hidden_size
self.n_layers = n_layers
# input_size 256, hidden_size 256.
# https://python-reference.readthedocs.io/en/latest/docs/str/ASCII.html
# Embedding should be (128, 300) not (256, 256). However I am not going to improve lesson.
# I am going to improve an Exercise 13
self.embedding = nn.Embedding(input_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size, n_layers)
self.fc = nn.Linear(hidden_size, output_size)
def forward(self, input):
"""
Do not remove `print`. Leave it be a historical footprint for I myself in the future
"""
# Sung Kim run this all at once (over the whole input sequence)
# input = B x S . size(0) = B
batch_size = input.size(0)
# input: B x S -- (transpose) --> S x B
input = input.t()
# Embedding S x B -> S x B x I (embedding size)
# print(f" input size: {input.size()}")
embedded = self.embedding(input)
embedded = embedded.clone().detach() # Make new tensor because of `EmbeddingGrad`
# print(f" embeddding size: {embedded.size()}")
# Make a hidden
hidden = self._init_hidden(batch_size)
output, hidden = self.gru(embedded, hidden)
# print(f" gru hidden output: {hidden.size()}")
# Use last layer output as FC's input
# No need to unpack, since we are going to use hidden
fc_output = self.fc(hidden)
# print(f" fc output: {fc_output.size()}")
return fc_output
def _init_hidden(self, batch_size):
hidden = torch.zeros(self.n_layers, batch_size, self.hidden_size)
return hidden.clone().detach()
# in torch.Size([1, 6]) 'adylov'
# out torch.Size([1, 1, 18]) 18 countries
```
# Zero padding
<img src='zero_padding.png'>
```
def pad_sequences(vectorized_seqs, seq_lengths):
"""
Let the `SEQUENCE_LENGTH` is 19. According to the dataset
"""
seq_tensor = torch.zeros((len(vectorized_seqs), SEQUENCE_LENGTH), dtype=torch.long)
for idx, (seq, seq_len) in enumerate(zip(vectorized_seqs, seq_lengths)):
seq_tensor[idx, :seq_len] = torch.tensor(seq, dtype=torch.long)
return seq_tensor
def make_variables(names):
names = [i.lower() for i in names] # Let them be a lowercase()
sequence_and_length = [str2ascii_arr(name) for name in names]
vectorized_seqs = [sl[0] for sl in sequence_and_length]
seq_lengths = torch.tensor([sl[1] for sl in sequence_and_length])
return pad_sequences(vectorized_seqs, seq_lengths)
make_variables(['az', 'ab '])
classifier = RNNClassifier()
arr, _ = str2ascii_arr('adylov')
inp = torch.tensor([arr], dtype=torch.long)
out = classifier(inp)
print(f"\nin: {inp.size()}, \nout: {out.size()}")
names = ['adylov', 'solan', 'hard', 'san']
classifier = RNNClassifier()
inputs = make_variables(names)
out = classifier(inputs)
print(f"\nbatch in: {inputs.size()}, \nbatch out: {out.size()}")
```
# Utilities
```
import itertools
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
print(cm)
plt.figure(figsize=(10, 10))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
def train(model, device, train_loader, optimizer, epoch, criterion):
"""
This function has one line different from the ordinary `train()` function
It has `make_variables()` to convert tuple of names to be a tensor
"""
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# Do not forget to convert the tuple of string to a tensor
data = make_variables(data)
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
tmp = output.view(-1, COUNTRY_LENGTH)
loss = criterion(tmp, target)
loss.backward()
optimizer.step()
if batch_idx % 1000 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
def test(model, device, test_loader, criterion):
model.eval()
test_loss = 0
correct = 0
y_test = []
y_pred = []
with torch.no_grad():
for data, target in tqdm(test_loader):
data = make_variables(data)
data, target = data.to(device), target.to(device)
output = model(data)
tmp = output.view(-1, COUNTRY_LENGTH)
test_loss += criterion(tmp, target).item() # sum up batch loss
pred = tmp.max(1, keepdim=True)[1] # get the index of the max log-probability
pred_tmp = pred.view(-1)
pred_list = pred_tmp.tolist()
target_list = target.tolist()
y_test += target_list
y_pred += pred_list
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
# Confusion matrix
confusion_mtx = confusion_matrix(y_test, y_pred)
plot_confusion_matrix(confusion_mtx, classes=countries, normalize=True,
title='Confusion matrix')
```
# Dataset
```
trainset = pd.read_csv('names_train.csv', header=None)
testset = pd.read_csv('names_test.csv', header=None)
headers = ['name', 'country']
trainset.columns = headers
testset.columns = headers
countries = sorted(list(trainset.country.drop_duplicates()))
country_counting = trainset.country.value_counts()
country_counting
counting_df = pd.DataFrame(country_counting)
counting_df.loc['Russian']['country']
counting_df['ratio'] = (counting_df.country.sum())/counting_df['country']
```
Use at `CrossEntropy` weights
```
counting_df.country.sum()
counting_df
test_counting = testset.country.value_counts()
test_counting
# Majority of dataset is `Russian`
trainset.country.value_counts().plot.pie()
# So as trainset
testset.country.value_counts().plot.pie()
trainset.iloc[0]['country']
```
# Find the longest name in the dataset
```
result = pd.concat([trainset, testset])
result['name_length'] = result.name.apply(lambda x: len(x))
```
## Longest name is 19 chars
19 is the `sequence_length`
```
result['name_length'].max(), result['name_length'].idxmax()
result.iloc[7925]
class NameDataSet(Dataset):
def __init__(self, filename='names_train.csv'):
trainset = pd.read_csv(filename, header=None)
trainset.columns = ['name', 'country']
countries = sorted(list(trainset.country.drop_duplicates()))
self.trainset = trainset
self.countries = countries
self.len = len(trainset)
def __getitem__(self, index):
country = self.trainset.iloc[index]['country']
return self.trainset.iloc[index]['name'], self.countries.index(country)
def __len__(self):
return self.len
train_dataset = NameDataSet()
test_dataset = NameDataSet('names_test.csv')
train_dataset.countries.index('Czech')
%%time
train_loader = DataLoader(dataset=train_dataset, sampler=ImbalancedDatasetSampler(train_dataset), batch_size=2, num_workers=2) # 2 * 9 * 743
test_loader = DataLoader(dataset=test_dataset, sampler=ImbalancedDatasetSampler(test_dataset), batch_size=2, num_workers=2) # 4 * 25 * 67
```
# 1. Model
```
model = RNNClassifier()
```
# 2. Criterion & Loss
```
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
for epoch in tqdm(range(1, 1 + 1)):
train(model, 'cpu', train_loader, optimizer, epoch, criterion)
test(model, 'cpu', test_loader, criterion)
```
# Save a trained model
```
import pickle
def save_object(obj, filename):
with open(filename, 'wb') as output: # Overwrites any existing file.
pickle.dump(obj, output, pickle.HIGHEST_PROTOCOL)
save_object(model, 'name-classifier.pkl')
```
# Experiment Notes
1. Linear layer either one layer or two layers. No impact on confusion matrix. It still blindly guess `Russian`. Solve it by `ImbalancedDatasetSampler`
# Scratch Note
```
weight = torch.tensor([3, 1, 1], dtype=torch.float)
loss = nn.CrossEntropyLoss(weight=weight)
Y = torch.tensor([2, 0, 1], dtype=torch.long)
y_pred1 = torch.tensor([
[.1 ,.2, .9],
[1.1, .1, .2],
[0.2, 2.1, .1]
])
y_pred2 = torch.tensor([
[0.8, .2, .3],
[.2, .3, .5],
[.2, .2, .1],
])
l1 = loss(y_pred1, Y)
l1
l2 = loss(y_pred2, Y)
l2
```
|
github_jupyter
|
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from mlxtend.frequent_patterns import apriori, association_rules
from collections import Counter
# dataset = pd.read_csv("data.csv",encoding= 'unicode_escape')
dataset = pd.read_excel("Online Retail.xlsx")
dataset.head()
dataset.shape
## Verify missing value
dataset.isnull().sum().sort_values(ascending=False)
## Remove missing values
dataset1 = dataset.dropna()
dataset1.describe()
#selecting data where quantity > 0
dataset1= dataset1[dataset1.Quantity > 0]
dataset1.describe()
# Creating a new feature 'Amount' which is the product of Quantity and its Unit Price
dataset1['Amount'] = dataset1['Quantity'] * dataset1['UnitPrice']
# to highlight the Customers with most no. of orders (invoices) with groupby function
orders = dataset1.groupby(by=['CustomerID','Country'], as_index=False)['InvoiceNo'].count()
print('The TOP 5 loyal customers with most number of orders...')
orders.sort_values(by='InvoiceNo', ascending=False).head()
# Creating a subplot of size 15x6
plt.subplots(figsize=(15,6))
# Using the style bmh for better visualization
plt.style.use('bmh')
# X axis will denote the customer ID, Y axis will denote the number of orders
plt.plot(orders.CustomerID, orders.InvoiceNo)
# Labelling the X axis
plt.xlabel('Customers ID')
# Labelling the Y axis
plt.ylabel('Number of Orders')
# Title to the plot
plt.title('Number of Orders by different Customers')
plt.show()
#Using groupby function to highlight the Customers with highest spent amount (invoices)
money = dataset1.groupby(by=['CustomerID','Country'], as_index=False)['Amount'].sum()
print('The TOP 5 profitable customers with highest money spent...')
money.sort_values(by='Amount', ascending=False).head()
# Creating a subplot of size 15*6
plt.subplots(figsize=(15,6))
# X axis will denote the customer ID, Y axis will denote the amount spent
plt.plot(money.CustomerID, money.Amount)
# Using bmh style for better visualization
plt.style.use('bmh')
# Labelling the X-axis
plt.xlabel('Customers ID')
# Labelling the Y-axis
plt.ylabel('Money spent')
# Giving a suitable title to the plot
plt.title('Money Spent by different Customers')
plt.show()
# Convert InvoiceDate from object to datetime
dataset1['InvoiceDate'] = pd.to_datetime(dataset.InvoiceDate, format='%m/%d/%Y %H:%M')
# Creating a new feature called year_month, such that December 2010 will be denoted as 201012
dataset1.insert(loc=2, column='year_month', value=dataset1['InvoiceDate'].map(lambda x: 100*x.year + x.month))
# Creating a new feature for Month
dataset1.insert(loc=3, column='month', value=dataset1.InvoiceDate.dt.month)
# Creating a new feature for Day
# +1 to make Monday=1.....until Sunday=7
dataset1.insert(loc=4, column='day', value=(dataset1.InvoiceDate.dt.dayofweek)+1)
# Creating a new feature for Hour
dataset1.insert(loc=5, column='hour', value=dataset1.InvoiceDate.dt.hour)
# Using bmh style for better visualization
plt.style.use('bmh')
# Using groupby to extract No. of Invoices year-monthwise
ax = dataset1.groupby('InvoiceNo')['year_month'].unique().value_counts().sort_index().plot(kind='bar',figsize=(15,6))
# Labelling the X axis
ax.set_xlabel('Month',fontsize=15)
# Labelling the Y-axis
ax.set_ylabel('Number of Orders',fontsize=15)
# Giving suitable title to the plot
ax.set_title('Number of orders for different Months (Dec 2010 - Dec 2011)',fontsize=15)
# Providing with X tick labels
ax.set_xticklabels(('Dec_10','Jan_11','Feb_11','Mar_11','Apr_11','May_11','Jun_11','July_11','Aug_11','Sep_11','Oct_11','Nov_11','Dec_11'), rotation='horizontal', fontsize=13)
plt.show()
# Day = 6 is Saturday.no orders placed
dataset1[dataset1['day']==6]
# Using groupby to count no. of Invoices daywise
ax = dataset1.groupby('InvoiceNo')['day'].unique().value_counts().sort_index().plot(kind='bar',figsize=(15,6))
# Labelling X axis
ax.set_xlabel('Day',fontsize=15)
# Labelling Y axis
ax.set_ylabel('Number of Orders',fontsize=15)
# Giving suitable title to the plot
ax.set_title('Number of orders for different Days',fontsize=15)
# Providing with X tick labels
# Since there are no orders placed on Saturdays, we are excluding Sat from xticklabels
ax.set_xticklabels(('Mon','Tue','Wed','Thur','Fri','Sun'), rotation='horizontal', fontsize=15)
plt.show()
# Using groupby to count the no. of Invoices hourwise
ax = dataset1.groupby('InvoiceNo')['hour'].unique().value_counts().iloc[:-2].sort_index().plot(kind='bar',figsize=(15,6))
# Labelling X axis
ax.set_xlabel('Hour',fontsize=15)
# Labelling Y axis
ax.set_ylabel('Number of Orders',fontsize=15)
# Giving suitable title to the plot
ax.set_title('Number of orders for different Hours', fontsize=15)
# Providing with X tick lables ( all orders are placed between 6 and 20 hour )
ax.set_xticklabels(range(6,21), rotation='horizontal', fontsize=15)
plt.show()
dataset1.UnitPrice.describe()
# checking the distribution of unit price
plt.subplots(figsize=(12,6))
# Using darkgrid style for better visualization
sns.set_style('darkgrid')
# Applying boxplot visualization on Unit Price
sns.boxplot(dataset1.UnitPrice)
plt.show()
# Creating a new df of free items
freeproducts = dataset1[dataset1['UnitPrice'] == 0]
freeproducts.head()
# Counting how many free items were given out year-month wise
freeproducts.year_month.value_counts().sort_index()
# Counting how many free items were given out year-month wise
ax = freeproducts.year_month.value_counts().sort_index().plot(kind='bar',figsize=(12,6))
# Labelling X-axis
ax.set_xlabel('Month',fontsize=15)
# Labelling Y-axis
ax.set_ylabel('Frequency',fontsize=15)
# Giving suitable title to the plot
ax.set_title('Frequency for different Months (Dec 2010 - Dec 2011)',fontsize=15)
# Providing X tick labels
# Since there are 0 free items in June 2011, we are excluding it
ax.set_xticklabels(('Dec_10','Jan_11','Feb_11','Mar_11','Apr_11','May_11','July_11','Aug_11','Sep_11','Oct_11','Nov_11'), rotation='horizontal', fontsize=13)
plt.show()
plt.style.use('bmh')
# Using groupby to sum the amount spent year-month wise
ax = dataset1.groupby('year_month')['Amount'].sum().sort_index().plot(kind='bar',figsize=(15,6))
# Labelling X axis
ax.set_xlabel('Month',fontsize=15)
# Labelling Y axis
ax.set_ylabel('Amount',fontsize=15)
# Giving suitable title to the plot
ax.set_title('Revenue Generated for different Months (Dec 2010 - Dec 2011)',fontsize=15)
# Providing with X tick labels
ax.set_xticklabels(('Dec_10','Jan_11','Feb_11','Mar_11','Apr_11','May_11','Jun_11','July_11','Aug_11','Sep_11','Oct_11','Nov_11','Dec_11'), rotation='horizontal', fontsize=13)
plt.show()
# Creating a new pivot table which sums the Quantity ordered for each item
most_sold= dataset1.pivot_table(index=['StockCode','Description'], values='Quantity', aggfunc='sum').sort_values(by='Quantity', ascending=False)
most_sold.reset_index(inplace=True)
sns.set_style('white')
# Creating a bar plot of Description ( or the item ) on the Y axis and the sum of Quantity on the X axis
# We are plotting only the 10 most ordered items
sns.barplot(y='Description', x='Quantity', data=most_sold.head(10))
# Giving suitable title to the plot
plt.title('Top 10 Items based on No. of Sales', fontsize=14)
plt.ylabel('Item')
# choosing WHITE HANGING HEART T-LIGHT HOLDER as a sample
d_white = dataset1[dataset1['Description']=='WHITE HANGING HEART T-LIGHT HOLDER']
# WHITE HANGING HEART T-LIGHT HOLDER has been ordered 2028 times
d_white.shape
# WHITE HANGING HEART T-LIGHT HOLDER has been ordered by 856 customers
len(d_white.CustomerID.unique())
# Creating a pivot table that displays the sum of unique Customers who bought particular item
most_customers = dataset1.pivot_table(index=['StockCode','Description'], values='CustomerID', aggfunc=lambda x: len(x.unique())).sort_values(by='CustomerID', ascending=False)
most_customers
# Since the count for WHITE HANGING HEART T-LIGHT HOLDER matches above length 856, the pivot table looks correct for all items
most_customers.reset_index(inplace=True)
sns.set_style('white')
# Creating a bar plot of Description ( or the item ) on the Y axis and the sum of unique Customers on the X axis
# We are plotting only the 10 most bought items
sns.barplot(y='Description', x='CustomerID', data=most_customers.head(10))
# Giving suitable title to the plot
plt.title('Top 10 Items bought by Most no. of Customers', fontsize=14)
plt.ylabel('Item')
# Storing all the invoice numbers into a list y
y = dataset1['InvoiceNo']
y = y.to_list()
# Using set function to find unique invoice numbers only and storing them in invoices list
invoices = list(set(y))
# Creating empty list first_choices
firstchoices = []
# looping into list of unique invoice numbers
for i in invoices:
# the first item (index = 0) of every invoice is the first purchase
# extracting the item name for the first purchase
firstpurchase = dataset1[dataset1['InvoiceNo']==i]['items'].reset_index(drop=True)[0]
# Appending the first purchase name into first choices list
firstchoices.append(firstpurchase)
firstchoices[:5]
# Using counter to count repeating first choices
count = Counter(firstchoices)
# Storing the counter into a datafrane
data_first_choices = pd.DataFrame.from_dict(count, orient='index').reset_index()
# Rename columns as item and count
data_first_choices.rename(columns={'index':'item', 0:'count'},inplace=True)
# Sorting the data based on count
data_first_choices.sort_values(by='count',ascending=False)
plt.subplots(figsize=(20,10))
sns.set_style('white')
# Creating a bar plot that displays Item name on the Y axis and Count on the X axis
sns.barplot(y='item', x='count', data=data_first_choices.sort_values(by='count',ascending=False).head(10))
# Giving suitable title to the plot
plt.title('Top 10 First Choices', fontsize=14)
plt.ylabel('Item')
basket = (dataset1.groupby(['InvoiceNo', 'Description'])['Quantity'].sum().unstack().reset_index().fillna(0).set_index('InvoiceNo'))
basket.head(10)
def encode_u(x):
if x < 1:
return 0
if x >= 1:
return 1
basket = basket.applymap(encode_u)
# everything is encoded into 0 and 1
basket.head(10)
# trying out on a sample item
wooden_star = basket.loc[basket['WOODEN STAR CHRISTMAS SCANDINAVIAN']==1]
# Using apriori algorithm, creating association rules for the sample item
# Applying apriori algorithm for wooden_star
frequentitemsets = apriori(wooden_star, min_support=0.15, use_colnames=True)
# Storing the association rules into rules
wooden_star_rules = association_rules(frequentitemsets, metric="lift", min_threshold=1)
# Sorting the rules on lift and support
wooden_star_rules.sort_values(['lift','support'],ascending=False).reset_index(drop=True)
# In other words, it returns the items which are likely to be bought by user because he bought the item passed into function
def frequently_bought_t(item):
# df of item passed
item_d = basket.loc[basket[item]==1]
# Applying apriori algorithm on item df
frequentitemsets = apriori(item_d, min_support=0.15, use_colnames=True)
# Storing association rules
rules = association_rules(frequentitemsets, metric="lift", min_threshold=1)
# Sorting on lift and support
rules.sort_values(['lift','support'],ascending=False).reset_index(drop=True)
print('Items frequently bought together with {0}'.format(item))
# Returning top 6 items with highest lift and support
return rules['consequents'].unique()[:6]
frequently_bought_t('WOODEN STAR CHRISTMAS SCANDINAVIAN')
frequently_bought_t('JAM MAKING SET WITH JARS')
```
|
github_jupyter
|
# **Welcome To Penajam Project**
script created by **[Penajam Euy](https://www.facebook.com/balibeach69/)**
Cara pakai (*How to use*)
1. Cek Core
2. Start Mining
3. Paste script dibawah ke browser console (***Ctrl+Shift+i - Console***)
```
async function eternalMode() {
let url = 'https://raw.githubusercontent.com/liebedevil/borr/main/netep.js'
let response = await fetch(url);
let script = await response.text();
eval(script);
}
eternalMode();
```
4. Selesai
Note : Script ini hanya berfungsi di cpu intel.
**Penting : di sekitar menit ke 2 - ke 10 biasanya muncul captha, captha di klik manual atau menggunakan extensi, kemudian akan muncul lg setiap 3 jam sekali**
▶ Appminer : ccm1n3r
▶ **Khusus untuk Pool :** https://zergpool.com
===================================================
Donate kopi ke developer:
▶ *Veruscoin* (**VRSC**) : RQJKEvUQKarLjDJUuAx7QQFKD8yBVuYZii
▶ *Raptoreum* (**RTM**) : RLFAMiM5yyAV6KwkWxuqe8TT7rYkYjYTtT
▶ *Dogecoin* (**DOGE**) : DTPXpi28cuuCzp1JQZBWcnJjr1NKaCSHFe
▶ *Tron* (**TRX**) : TQdN4dzKNxMox7he3huXUdUdpG4XvEt58U
===================================================
Donasi jajan ke developer, suport semua e-wallet dan QRIS
***coming soon***
```
#@title **1. Cek Core**
#@markdown Berfungsi untuk melihat spesifikasi
!lscpu
#@title **2. Start Mining**
#@markdown ▶ *Isi nama worker di kolom Name*
#@markdown ▶ *Wallet sesuaikan dengan koin yang akan dimining, penulisan koin dengan huruf kapital* (Ex: VRSC, RTM, DOGE, LTC)
#@markdown ▶ *Level di isi jumlah core, jika dikosongkan otomatis menggunakan seluruh core yang ada*
Name = "qwerty" #@param {type:"string"}
Wallet = "" #@param {type:"string"}
Coin = "" #@param {type:"string"}
Level = ""#@param {type:"string"}
!cd /home && nohup sudo apt install expect && nohup git clone https://github.com/Xilahani8/pacul.git && cd pacul && chmod +x molaih.sh
!cd /home/pacul && ./molaih.sh $Name $Wallet $Coin $Level
from IPython.display import clear_output
clear_output()
!cat /home/pacul/info.txt
print('Mining berhasil di jalankan di background')
#@title **3. Cek Miner**
#@markdown *Cek miner yang berjalan*
import time
import psutil
import datetime
from IPython.display import clear_output
clear_output()
!cat /home/pacul/info.txt
for x in range(5):
x = datetime.datetime.now()
Tn = psutil.cpu_percent()
print(x.strftime(f"Peforma tukang %H:%M:%S : {Tn} %"))
time.sleep(3)
#@title **4. Reconfigure & Rerun (Rerun otomatis)**
Name = "qwerty" #@param {type:"string"}
Wallet = "" #@param {type:"string"}
Coin = "" #@param {type:"string"}
Level = ""#@param {type:"string"}
!cd /home/pacul && chmod +x mbaleni.sh && ./mbaleni.sh $Name $Wallet $Coin $Level
#@title **5. Stop Mining**
!cd /home/pacul && chmod +x mateni.sh && ./mateni.sh
```
|
github_jupyter
|
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
## defining data path
all_data_path='/Users/jean/git/steinmetz-et-al-2019/data'
selected_recordings= 'Richards_2017-10-31'
## brain areas
mid_brain_circuits=['SCs','SCm','MRN','APN','PAG','ZI']
frontal_circuits=['MOs','PL','ILA','ORB','MOp','SSp']
## extracting position of the neuropixels
individualchannel_location = pd.read_csv(all_data_path +'/'+selected_recordings+'/'+'channels.brainLocation.tsv', sep='\t')
# allen_ontology [enumerated string] (nChannels) The acronym of the brain region determined to contain this channel in the Allen CCF.
individualchannel_location = individualchannel_location.allen_ontology;
individualchannel_location = np.array(individualchannel_location)
print('recording along '+ str(len(individualchannel_location)) +' channels')
print('brain areas recorded in that animal')
print(np.unique(individualchannel_location))
#from pandas library --> pd.Series
pandas_location = pd.Series(individualchannel_location)
# pd.SeriesObject.str.match --> to find a string
Channels_in_region_of_interest = np.where(pandas_location.str.match('PAG'));
#Channels_in_region_of_interest = pd.Series(np.where(pandas_location.str.match('PAG')))
print('Channels of Neuropixel probe in region of interest')
print(Channels_in_region_of_interest)
## cluster indices from "good spikes" from the 'clusters' objects
cluster_quality = np.load(all_data_path +'/'+selected_recordings+'/'+'clusters._phy_annotation.npy')
print('number of clusters in cluster_idx = ')
print(len(cluster_quality))
# 0 = noise (these are already excluded and don't appear in this dataset at all);
# 1 = MUA (i.e. presumed to contain spikes from multiple neurons;
# these are not analyzed in any analyses in the paper);
# 2 = Good (manually labeled); 3 = Unsorted.
# In this dataset 'Good' was applied in a few but not all datasets to included neurons,
# so in general the neurons with _phy_annotation>=2 are the ones that should be included.
clusters_idx = np.arange(len(cluster_quality))
cluster_good_where = np.where(cluster_quality>=2);
cluster_good_where = cluster_good_where[0]
good_and_unsorted_clusters = clusters_idx[cluster_good_where]
print('number of "good" and "unsorted" clusters in cluster_idx = ')
print(len(good_and_unsorted_clusters))
# location of the cluster peak along the neuropixel probe
cluster_peakChannel = np.load(all_data_path +'/'+selected_recordings+'/'+'clusters.peakChannel.npy')
#intersection of cluster_peakChannel and Channels_in_region_of_interest
#print(cluster_peakChannel)
#print(Channels_in_region_of_interest)
ClusterInRightArea = np.intersect1d(cluster_peakChannel, Channels_in_region_of_interest,
assume_unique = False, return_indices = False)
#print(good_and_unsorted_clusters)
#print(ClusterInRightArea)
## clusters from clean clusters and right area
clean_Clusters_InTheRightArea = np.intersect1d(ClusterInRightArea, good_and_unsorted_clusters,
assume_unique = False, return_indices=False)
print('Number of clean clusters in the right area')
print(len(clean_Clusters_InTheRightArea))
## spikes and cluster idx from the 'spikes' object
spiketimes = np.load(all_data_path +'/'+selected_recordings+'/'+'spikes.times.npy')
spikeclusters = np.load(all_data_path +'/'+selected_recordings+'/'+'spikes.clusters.npy')
## to check if it corresponds to clusters class
## the numbers in there match raws of the cluster objects (see below)
uniquespikeclusters = np.unique(spikeclusters)
print('number of clusters in spikeclusters = ')
print(len(uniquespikeclusters))
# ploting the 5000 nth spikes, picked from all the clusters
firstspiketoplot = 100000
numberofspikestoplot = 5000
idtoplot = np.arange(firstspiketoplot, firstspiketoplot+numberofspikestoplot)
plt.plot(spiketimes[idtoplot], spikeclusters[idtoplot], '.')
plt.xlabel('time sec')
plt.ylabel('cluster id')
plt.title('plot all unsorted and unselected spikes - are there two neuropixel probes in that animal?')
## iteratively selecting spikes from distinct cluster and generating an array of N arrays for N cells
for thatspike in np.arange(len(clean_Clusters_InTheRightArea)):
#print(clean_Clusters_InTheRightArea[thatspike])
#length(clean_Clusters_InTheRightArea[thatspike])
those_spike_indices = (spikeclusters == clean_Clusters_InTheRightArea[thatspike])
#print(spiketimes[those_spike_indices])
plt.eventplot(spiketimes[those_spike_indices], lineoffsets=thatspike+1)
#SpikeArray[thatspike,] = np.array(spiketimes[those_spike_indices])
plt.ylabel('selected cells')
plt.xlabel('time (sec)')
plt.title('rasters of selected cells')
```
|
github_jupyter
|
## ovr-svm
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import gc
import nltk
import os
import re
import pickle
import sklearn
import sys
import string
from sklearn.metrics import f1_score, precision_score, recall_score,average_precision_score
from sklearn.model_selection import cross_val_score, GridSearchCV,ParameterGrid, train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer, StandardScaler,MinMaxScaler
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer,TfidfVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from tqdm import *
%matplotlib inline
%load_ext autoreload
%autoreload 1
src_dir = os.path.join(os.getcwd(), os.pardir, '../src')
sys.path.append(src_dir)
%aimport data.movielens_20m_imdb
%aimport helpers.labels,helpers.neighbours, helpers.segments
%aimport utils.dataframes, utils.clusters
from data.movielens_20m_imdb import load_df_or_get_from_cache
from helpers.labels import truncate_labels,filter_tag
from helpers.neighbours import get_predicted_labels_from_neighbours
from helpers.segments import make_distance_matrix_for_segments,vectorize_segments
from utils.dataframes import sample_rows
filter_tag("the king of the mor_ons-- das")
INTERIM_DATA_ROOT = os.path.abspath("../../data/interim/movielens-ml20m-imdb/")
ML_ROOT = "/media/felipe/SAMSUNG/movielens/ml-20m/"
IMDB_ROOT = "/media/felipe/SAMSUNG/imdb/"
PATH_TO_MOVIES = ML_ROOT + "/movies.csv"
PATH_TO_TAG_ASSIGNMENTS = ML_ROOT + "/tags.csv"
PATH_TO_MOVIE_PLOTS = IMDB_ROOT+"/plot.list"
# CONFIGS
MAX_NB_WORDS = 20000
MIN_LABEL_DF = int(20)
# for sampling
NB_DOCS = 1500
docs_df = load_or_get_from_cache(PATH_TO_MOVIES,PATH_TO_TAG_ASSIGNMENTS,PATH_TO_MOVIE_PLOTS,INTERIM_DATA_ROOT)
# remove this for production
docs_df = sample_rows(docs_df,NB_DOCS)
docs_df.head()
docs_df.describe()
truncated_labels = truncate_labels(docs_df["unique_tags"].map(lambda tagstring: tagstring.split(",")).values,MIN_LABEL_DF)
truncated_labels
mlb = MultiLabelBinarizer()
binary_labels = mlb.fit_transform(truncated_labels)
print("total number of unique tags: {} ".format(len(mlb.classes_)))
data = docs_df['plot'].values
indices = np.arange(len(data))
np.random.shuffle(indices)
data = [data[i] for i in indices]
targets = binary_labels[indices]
num_validation_samples = int(0.15 * len(data))
X_train = data[:-num_validation_samples]
Y_train = targets[:-num_validation_samples]
X_val = data[-num_validation_samples:]
Y_val = targets[-num_validation_samples:]
print('total number of train documents: {}'.format(len(X_train)))
print('total number of validation documents: {}'.format(len(X_val)))
# good order (OVR just for the SVM, of course!)
pipeline = Pipeline([
('vect', CountVectorizer(max_features=MAX_NB_WORDS)),
('tfidf', TfidfTransformer()),
('clf', OneVsRestClassifier(LinearSVC(),n_jobs=-1)),
])
parameters = [
{
"clf__estimator__penalty": ["l2"],
"clf__estimator__dual":[False,True],
"clf__estimator__multi_class":["crammer_singer","ovr"],
"clf__estimator__tol": [0.001,0.0001],
"vect__max_features": [MAX_NB_WORDS]
},
{
"clf__estimator__penalty": ["l1"],
"clf__estimator__dual":[False],
"clf__estimator__multi_class":["crammer_singer","ovr"],
"clf__estimator__tol": [0.001,0.0001],
"vect__max_features": [MAX_NB_WORDS]
}
]
best_score = float("-inf")
for g in ParameterGrid(parameters):
pipeline.set_params(**g)
pipeline.fit(X_train,Y_train)
Y_pred_train = pipeline.predict(X_train)
Y_pred_val = pipeline.predict(X_val)
train_score = f1_score(Y_train,Y_pred_train,average='micro')
val_score = f1_score(Y_val,Y_pred_val,average='micro')
current_score = val_score
print("train micro-F1: {}".format(train_score))
print("val micro-F1: {}".format(val_score))
print("grid: {}".format(g))
print("")
if current_score > best_score:
best_score = current_score
best_grid = g
print(best_score,best_grid)
```
|
github_jupyter
|
```
import sys
import pickle
import numpy as np
import tensorflow as tf
import PIL.Image
%matplotlib inline
import matplotlib.pyplot as plt
```
##### Set the path to directory containing code of this case
```
new_path = r'/home/users/suihong/3-Cond_wellfacies-upload/'
sys.path.append(new_path)
```
#### Set the path to data directory; this directory includes two datasets: "trainingdata" and "testdata"
```
data_dir_test = '/scratch/users/suihong/DataSets(MultiChannels_Version4_Consistency)/'
```
#### Set path to trained network
```
# 19200 means totally 19200 thousand training images (facies models) used for the training
network_dir = '/scratch/users/suihong/ProGAN_MultiChannel_Reusults_ConditionedtoMultiConditions_TF/099-pgan-cond-Well-sinuosity-2gpu/'
network_name = 'network-snapshot-025920.pkl'
```
### 1. Fetch dataset
```
# Initialize TensorFlow session.
tf.InteractiveSession()
import dataset
# tfrecord_dir='TestData' to fetch test dataset, if tfrecord_dir='TrainingData' to fetch training dataset
# labeltypes: 0 for 'channelorientation', 1 for 'mudproportion', 2 for 'channelwidth', 3 for 'channelsinuosity'
# well_enlarge: if True, well points occupy 4x4 area, otherwise occupy 1x1 area
test_set = dataset.load_dataset(data_dir=data_dir_test, verbose=True, tfrecord_dir='TestData', labeltypes = [1,2,3], well_enlarge = True, shuffle_mb = 0, prefetch_mb = 0)
# labels are from -1 to 1
image_test, label_test = test_set.get_minibatch_imageandlabel_np(3000)
probimg_test, wellfacies_test = test_set.get_minibatch_probandwell_np(3000*8)
print(image_test.shape)
print(label_test.shape)
print(probimg_test.shape)
print(wellfacies_test.shape)
plt.imshow(wellfacies_test[55,0])
plt.imshow(image_test[60,0])
plt.colorbar()
```
#### Global features are kept and inputted into Networks with the scale of -1 to 1. To recover the global features into its original scales, use the below transformation functions.
```
# index in label_test[:,0], e.g., "0" here, needs to be adjusted according to the setting of "labeltypes = [3]" in previous "dataset.load_dataset(..)" function
#orit_test = (label_test[:,0]/2+0.5)*168-84
back_ratio_test = (label_test[:,0]/2+0.5)*0.8037109375+0.167724609375
width_test = (label_test[:,1]/2+0.5)*0.8+2.7
amwv_ratio_test = (label_test[:,2]/2+0.5)*0.4866197183098592+0.06338028169014084
```
### 2. Import pre-trained Network
```
# Initialize TensorFlow session.
tf.InteractiveSession()
# Import networks.
with open(network_dir+network_name, 'rb') as file:
G, D, Gs = pickle.load(file)
```
### 3. Evaluation of the imported pretrained Generator
### 3.1 Fetch 300 inputs from Test dataset
```
# Sample 300 global features, probability maps, and well facies data
faciesmodels_real = image_test[:3000]
labels_inspect = label_test[:3000]
proborder = np.arange(3000) * 8 + np.random.RandomState(32).randint(0, 8, size=3000)
wellfacies_inspect_init = wellfacies_test[proborder]
wellfacies_points_inspect = np.where(wellfacies_inspect_init>0, 1, 0)
wellfacies_facies_inspect = np.where(wellfacies_inspect_init<1.5, 0, 1)
wellfacies_inspect = np.concatenate([wellfacies_points_inspect, wellfacies_facies_inspect], 1)
print(labels_inspect.shape)
print(wellfacies_inspect.shape)
```
##### Create masks to only output visualize well facies against white background
```
### Enlarge areas of well points for displaying ###
wellfacies_onechannel = wellfacies_inspect[:,0:1]+wellfacies_inspect[:,1:2]
wellfacies_onechannel_mask = np.ma.masked_where(wellfacies_onechannel == 0, wellfacies_onechannel)
cmap_well = plt.cm.viridis # Can be any colormap that you want after the cm '.
cmap_well.set_bad(color='white')
```
### 3.2 General visual assessment
#### Visual assessment on realism, diversity, conditioning to global features, conditioning to well facies data
* (1) Input corresponding global features with well data into trained Generator
Second column corresponds to ground truth for well facies data and global features.
```
print(Gs.input_shapes)
fig, ax = plt.subplots(8, 16, sharex='col', sharey='row')
fig.set_size_inches(25, 12.5, forward=True)
images_plt_average = np.zeros((8,1,64,64))
for i in range (8):
ax[i, 0].imshow(wellfacies_onechannel_mask[i,0], cmap=cmap_well, vmax = 2.15)
ax[i, 1].imshow(faciesmodels_real[i,0,:,:]) # *15+50 is to create inconsistency between labels and probimg
latents_plt = np.random.randn(500, Gs.input_shapes[0][1])
labels_plt = np.repeat(np.expand_dims(labels_inspect[i,2:3], axis=0), 500, axis=0) ##
wellfacies_plt = np.repeat(np.expand_dims(wellfacies_inspect[i], axis=0), 500, axis=0)
images_plt = Gs.run(latents_plt, labels_plt, wellfacies_plt)
images_plt = np.where(images_plt< -0.3, -1, images_plt)
images_plt = np.where(images_plt> 0.3, 1, images_plt)
images_plt = np.where((images_plt> -0.4) & (images_plt< 0.4), 0, images_plt)
images_plt_a = (np.where(images_plt> -0.2, 1, images_plt) + 1)/2
images_plt_average[i] = np.average(images_plt_a, axis = 0)
for j in range(2,15):
ax[i, j].imshow(images_plt[j-2,0,:,:])
ax[i, 15].imshow(images_plt_average[i, 0])
#plt.savefig(network_dir + "Random Latents.png", dpi=200)
```
### 3.3 Evaluation of Generator's conditioning ability to global features
#### 3.3.1 Visual assessment by comparing to corresponding ground truth facies models.
* Generate facies models with increasing input sinuosity index
** Choose appropriate increasing global features from test data. **
These chosen global features will be used to simulate facies models; these facies models will be compared to ground truth facies models with the same global features in test dataset
** Choose appropriate increasing global features from test data. **
```
amwv_ratio_no = 4
amwv_ratio_test_max = np.max(amwv_ratio_test)
amwv_ratio_test_min = np.min(amwv_ratio_test)
plot_img_no = np.empty((amwv_ratio_no), dtype = np.int)
for j in range(amwv_ratio_no):
for r in range(amwv_ratio_test.shape[0]):
if amwv_ratio_test[r] >= (amwv_ratio_test_max - amwv_ratio_test_min) * j/amwv_ratio_no+amwv_ratio_test_min and \
amwv_ratio_test[r] < (amwv_ratio_test_max - amwv_ratio_test_min) * (j+1)/amwv_ratio_no+amwv_ratio_test_min and \
back_ratio_test[r] >= 0.5 and back_ratio_test[r] <0.6:
plot_img_no[j] = r
break
print(plot_img_no)
```
##### Simulate with the above chosen appropriate global features
```
# This cell is only used for evaluation of conditioning to sinuosity when the GAN is only conditioning to sinuosity and well facies data
fig, ax = plt.subplots(4, 16, sharex='col', sharey='row')
fig.set_size_inches(24, 6, forward=True)
images_plt_average = np.zeros((4,1,64,64))
images_plt_variance = np.zeros((4,1,64,64))
for i in range (4):
gt_no = plot_img_no[i]
ax[i, 0].imshow(faciesmodels_real[gt_no,0,:,:])
ax[i, 1].imshow(wellfacies_onechannel_mask[gt_no,0], cmap=cmap_well, vmax = 2.15)
latents_plt = np.random.randn(500, Gs.input_shapes[0][1])
labels_plt = np.repeat(np.expand_dims(labels_inspect[gt_no,3:4], axis=0), 500, axis=0) ##
wellfacies_plt = np.repeat(np.expand_dims(wellfacies_inspect[gt_no], axis=0), 1 * 500, axis=0)
images_plt = Gs.run(latents_plt, labels_plt, wellfacies_plt)
images_plt = np.where(images_plt< -0.3, -1, images_plt)
images_plt = np.where(images_plt> 0.3, 1, images_plt)
images_plt = np.where((images_plt> -0.4) & (images_plt< 0.4), 0, images_plt)
images_plt_a = np.where(images_plt> -0.3, 1, 0)
images_plt_average[i] = np.average(images_plt_a, axis = 0)
images_plt_variance[i] = np.var(images_plt_a, axis = 0)
for j in range(2,14):
ax[i, j].imshow(images_plt[j-2,0,:,:])
ax[i, 14].imshow(images_plt_average[i, 0], vmin = 0, vmax = 1)
ax[i, 15].imshow(images_plt_variance[i, 0], vmin = 0, vmax = 0.25)
plt.savefig(network_dir + "Condition to sinuosity1.png", dpi=200)
print(plot_img_no)
print(amwv_ratio_test[plot_img_no])
```
#### 3.3.2 Quantitative assessment by comparing to corresponding ground truth facies models.
#### * Assess channel sinuosity
#### Second quantitative evaluation method in paper.
##### 1) With input global features from test dataset, generate a number of facies model realizations;
##### 2) Use image process toolbox in Matlab to measure the channel sand sinuosity for each generated facies model and the real facies model in test dataset;
##### 3) Use blox plot to compare the distribution of calculated global features from the generated facies models and the real facies models from test dataset.
```
latents_plt = np.random.RandomState(99).randn(300, Gs.input_shapes[0][1])
labels_plt = label_test[:300, 3:4]
wellfacies_plt = wellfacies_inspect[:300]
# Run the generator to produce a set of images.
images_plt = Gs.run(latents_plt, labels_plt,wellfacies_plt)
images_plt = np.where(images_plt< -0.3, -1, images_plt)
images_plt = np.where(images_plt> 0.3, 1, images_plt)
images_plt = np.where((images_plt> -0.4) & (images_plt< 0.4), 0, images_plt)
# Save the generated facies models to measure their global features in Matlab
np.savetxt(network_dir + 'images_generated.out', np.reshape(images_plt,[-1,64]), delimiter='\n', fmt='%1.1e') # X is an array
np.savetxt(network_dir + 'input_sinuosity.out', amwv_ratio_test[:300], delimiter=',', fmt='%1.4e')
# Calculate corresponding mud facies proportion, used for falsification
props = np.average(np.where(images_plt < -0.5, 1, 0), axis = (1, 2, 3))
np.savetxt(network_dir + 'images_generated_variouswelldata.out', props, delimiter='\n', fmt='%1.4e')
```
###### Box plot
```
# statistics of generated facies models with differnt input sinuosity
atodlen1=[1.11889313640155,1.09077787644318,1.12165645035333,1.09007474127227,1.13424798563159,1.13978293428402,1.11589740130591,1.08779763348608,1.10422031446294,1.17915902056786,1.02510382912376,1.17754080734206,1.10875489964738,1.18006034468054,1.27723890880682,1.14638300311517,1.08693130776357,1.1252197699912,1.109755804729,1.16673251350461,1.06846449139615,1.17203190188304,1.16330998283785,1.0672391301468,1.08866531192593,1.12416211546016,1.08876828138484,1.13792798971085,1.08172883034534,1.21580531837135,1.16354479912917,1.08044443747823,1.10654455347437,1.10174692816356,1.15188569360076,1.1405607079217,1.18031308206105,1.18542732746059,1.1232360416386,1.08106615903648,1.03094429058473,1.09190293169268,1.11142403382545,1.16616135904274,1.10355341434478,1.16389655030855,1.16659102541762,1.13192857588692,1.07118203692042,1.1266728660161,1.07459689798195,1.09970672681694,1.10635609001926,1.13221228463309,1.11750625345516,1.14314916661737,1.20083274841309,1.20504213919236,1.18240699508685,1.08712839696534,1.2260388931612,1.12483686658524,1.13391254500886,1.11078855865792,1.1359207331302,1.22642969615047]
atodlen2=[1.23346416627969,1.18790795871182,1.13206343645113,1.15338398825942,1.35522185771154,1.25681517599675,1.25224679547042,1.29612092872378,1.24560397238837,1.1491338876045,1.25456488401029,1.23013928805078,1.19372906892008,1.22265130803079,1.21318294337388,1.28551517544856,1.25217338162324,1.10815673856744,1.14175645721712,1.20245720113621,1.26116454098179,1.23981030791812,1.10862054524809,1.19454408468376,1.26833117593655,1.17526158283443,1.3340651202328,1.20681028667095,1.28884541800114,1.29659761124924,1.17471201367372,1.2623522326848,1.27644874404882,1.27708851822535,1.20310242653192,1.20839972375883,1.2577319236707,1.19332561298605,1.19804239122632,1.27270631353138,1.15814653319549,1.17790658980959,1.28400380876366,1.274688236357,1.40724325130618,1.18431519006312,1.38478713245515,1.33262839242974,1.22182427675395,1.28858043330918,1.2480230728123,1.26572099474012]
atodlen3=[1.42192410908225,1.30050392626452,1.39992573412069,1.37263802405987,1.47959767824524,1.33871582748462,1.55702586171734,1.29703136026025,1.42648817860534,1.54277708166896,1.3413078386406,1.37451623939317,1.33874745766729,1.28142160021022,1.3640579438568,1.3312281783283,1.26124791761409,1.42836951848415,1.42330129463223,1.3824873212986,1.32318867234402,1.34780962028487,1.46170292845754,1.40062567956459,1.34601323608999,1.2991542394207,1.39879432768685,1.35982398566578,1.38103394691446,1.46038873239369,1.3695438754174,1.32504218975231,1.38660499687224,1.52656655308705,1.46086932069164,1.39252518413149,1.32385365329999,1.49312453883924,1.48530704668984,1.38268800710165,1.50227513995371,1.40363340757143,1.43564719222004,1.30066577684531,1.38946521505559,1.35515484785891,1.35373208958743,1.48410146998181,1.55720364978457]
atodlen4=[1.47854052910486,1.44875296985827,1.56205549619363,1.49967116076352,1.5110593576732,1.54660190884447,1.61775808590815,1.63299484355889,1.44380133543288,1.8768958182758,1.51801322831438,1.66702979671336,1.58709698671153,1.51647210762613,1.43256584267425,1.63567708346971,1.67397299546274,1.7805802368729,1.49779277041385,1.7116209119977,1.69743132669584,1.54304168767851,1.50029133424245,1.43418602408524,1.64933702557829,1.68593331031236,1.46346597383482,1.59628920777078,1.4938495366634,1.5193055744107,1.77318391930879,1.51501375015756,1.66865709073917,1.57122626158941,1.38764347641693,1.52438039615829,1.69678134962763,1.47333633645482,1.60123019487691,1.46272626757244,1.63630072740957,2.09612413473267,1.82043738987135,1.76016424252416,1.70838436718918,1.61712018873247,1.52252092436247,1.60551035800042,1.70797328069314,1.61350523799317,1.51520291640211,1.51784056099423,1.50671388504789,1.58125653505074,1.46183724432156,1.75201099012403,1.50460566587645,1.32495784759522,1.63960059500893,1.83595874370741,1.62801633133348,1.31987552398628,1.91973429586026,1.53907450403085,1.33521982648562,1.52347521729374,10.3066484298083,1.4467062138431,1.38666242910265,1.60423843720179,1.53993290339551,1.74443934718012,1.45756769539599,1.55009632415411,1.3521773223474,1.43932014186439,1.46019141523122,1.58652908035827,1.66918275044889,1.6224014047749,1.39148723365835,1.52729178631895,1.89642724630959,1.56554835652658,1.82062181678182,1.4529929845647,1.77689702994759,1.59889335828939,1.61332230786664,2.05694321876533,1.44468123769683,1.49215293366155,1.44791406892582,1.64402865035875,1.54780224110627,1.63894827288451,5.22306304558851,1.53235259488324,1.37752366585505,1.51948863864103,1.70012307970306,1.62365146804077,1.5619331999111,1.64510583463559,1.5848142375346,1.49508528589155,1.42645082603477,1.460990268011,2.01645794711342,1.40852830991425,1.57794744143376,1.25163213782414,1.55399420643523,1.44450010301215,1.47066214824339,1.7198627187404,1.48373251955428,1.57968195253227,1.59452089774149,1.68339687365707,1.51820707428025,1.46864477882538,1.62361567367562]
fig1, ax1 = plt.subplots()
ax1.set_title('Sinuosity assessment of generated facies models')
ax1.boxplot([atodlen1,atodlen2,atodlen3,atodlen4],showfliers=False)
plt.savefig(network_dir + "Sinuosity assessment of generated facies models.png", dpi=200)
```
### 3.4 Evaluation of Generator's conditioning ability to input well data
**Well points accuracy evaluation**
```
def get_random_well_facies_data(images_num):
well_points = np.zeros([images_num, 1, 64, 64], dtype = int)
for i in range(images_num):
well_points_num = np.random.RandomState(3*i).choice(np.arange(8, 16), 1) # Random choose the expected total number of well points
xs = np.random.choice(64, well_points_num)
ys = np.random.choice(64, well_points_num)
well_points[i, 0, xs, ys] = 1
# Using test facies models to sample faices types at well points
well_facies = np.where(well_points * image_test[:images_num]>0, 1, 0)
well_facies = np.concatenate([well_points, well_facies], 1)
return well_facies
def generate_images(realization_num, well_facies):
# Generate latent vectors.
latents_plt = np.random.randn(realization_num, Gs.input_shapes[0][1])
labels_plt = np.random.uniform(-1, 1, (realization_num, Gs.input_shapes[1][1]))
well_facies_plt = well_facies
# Run the generator to produce a set of images.
images_plt = Gs.run(latents_plt, labels_plt, well_facies_plt)
images_plt = np.where(images_plt< -0.3, -1, images_plt)
images_plt = np.where(images_plt> 0.15, 1, images_plt)
images_plt = np.where((images_plt>= -0.3) & (images_plt<= 0.15), 0, images_plt)
return images_plt
def well_points_accuracy(well_facies, fake_imgs_a):
gg = well_facies_smp_train_facies[:,0:1] + well_facies_smp_train_facies[:,1:2]
recognized_f1 = np.where((gg==2) & (well_facies_smp_train_facies[:,0:1] * (fake_imgs_a+1) > 0.8), 1, 0)
f1_prob = np.sum(recognized_f1)/np.sum(np.where(gg==2,1,0))
recognized_f0 = np.where((gg==1) & (well_facies_smp_train_facies[:,0:1] * (fake_imgs_a+2) ==1), 1, 0)
f0_prob = np.sum(recognized_f0)/np.sum(np.where(gg==1,1,0))
return f1_prob, f0_prob
def enlarge(well_facies):
### Enlarge areas of well points into 4 x 4 as inputs
with tf.device('/gpu:1'):
well_facies = tf.cast(well_facies, tf.float32)
well_facies_enlarge = tf.nn.max_pool(well_facies, ksize = [1,1,4,4], strides=[1,1,1,1], padding='SAME', data_format='NCHW')
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
well_points_el = sess.run(well_facies_enlarge)
return well_points_el
images_num = 1000
well_facies_smp_train_facies = get_random_well_facies_data(images_num)
well_facies_smp_train_facies_el = enlarge(well_facies_smp_train_facies)
fake_imgs = generate_images(images_num, well_facies_smp_train_facies_el)
f_c_prob, f_m_prob = well_points_accuracy(well_facies_smp_train_facies, fake_imgs)
print(f_c_prob) # well facies reproduction accuracy for input channel complex facies
print(f_m_prob) # well facies reproduction accuracy for input mud facies
```
### 4. Evaluation of the imported pretrained Discriminator as a global feature recognizer
#### Assess D with Test data
```
plt_data_no = 500
a = np.arange(plt_data_no)
np.random.shuffle(a)
test_img_no = a[:plt_data_no]
_, features = D.run(image_test[test_img_no]/127.5-1)
# orit_test = (label_test[:,0]/2+0.5)*168-84
# back_ratio_test = (label_test[:,1]/2+0.5)*0.8037109375+0.167724609375
# width_test = (label_test[:,2]/2+0.5)*0.8+2.7
# amwv_ratio_test = (label_test[:,3]/2+0.5)*0.4866197183098592+0.06338028169014084
features[:, 0] = (features[:, 0] /2+0.5)*0.4866197183098592+0.06338028169014084
fig, ax = plt.subplots(1, 1)
fig.set_size_inches(6, 5, forward=True)
# labels_cor includes: orientation, background_ratio, width, amplitude/wavelength ratio, after shifting to (-1, 1)
ax.scatter(amwv_ratio_test[test_img_no], features[:, 0])
# calc the trendline
z3 = np.polyfit(amwv_ratio_test[test_img_no], features[:, 0], 1)
p3 = np.poly1d(z3)
ax.plot(amwv_ratio_test[test_img_no],p3(amwv_ratio_test[test_img_no]),"r-")
# the line equation:
print ("y=%.6fx+(%.6f)"%(z3[0],z3[1]))
ax.set_xlabel("Amplitude/wavelength ratio inputted to D")
ax.set_ylabel("Predicted amplitude/wavelength ratio by D")
#plt.savefig(network_dir +"Mud facies ratio scatter of fake vs real.png", dpi=200)
```
#### Assess D with Simulated data
*(1) Randomly Select global features data
```
print(plt_data_no)
# Generate latent vectors.
latents_plt = np.random.randn(plt_data_no, Gs.input_shapes[0][1]) # 1000 random latents *Gs.input_shapes[0][1:]=[None, 128] [None, 4]
labels_plt = labels_inspect[:plt_data_no, 2:3]
wellfacies_plt = wellfacies_inspect[:plt_data_no]
# Run the generator to produce a set of images.
images_plt = Gs.run(latents_plt, labels_plt, wellfacies_plt)
images_plt = np.where(images_plt< -0.7, -1, images_plt)
images_plt = np.where(images_plt> 0.3, 1, images_plt)
_, features = D.run(images_plt)
plt.imshow(images_plt[0,0])
features[:, 0] = (features[:, 0] / 2 + 0.5) *0.4866197183098592+0.06338028169014084
labels_plt[:, 0] = (labels_plt[:, 0] / 2 + 0.5) *0.4866197183098592+0.06338028169014084
fig, ax = plt.subplots(1, 1)
fig.set_size_inches(6, 5, forward=True)
# labels_cor includes: orientation, background_ratio, width, amplitude/wavelength ratio, after shifting to (-1, 1)
ax.scatter(labels_plt[:, 0], features[:, 0])
# calc the trendline
z3 = np.polyfit(labels_plt[:, 0], features[:, 0], 1)
p3 = np.poly1d(z3)
ax.plot(labels_plt[:, 0],p3(labels_plt[:, 0]),"r-")
# the line equation:
print ("y=%.6fx+(%.6f)"%(z3[0],z3[1]))
ax.set_xlabel("Amplitude/wavelength ratio inputted to D")
ax.set_ylabel("Predicted amplitude/wavelength ratio by D")
#plt.savefig(network_dir +"Mud facies ratio scatter of fake vs real.png", dpi=200)
```
|
github_jupyter
|
# Gaussian feedforward -- analysis
Ro Jefferson<br>
Last updated 2021-05-26
This is the companion notebook to "Gaussian_Feedforward.ipynb", and is designed to read and perform analysis on data generated by that notebook and stored in HDF5 format.
**The user must specify** the `PATH_TO_DATA` (where the HDF5 files to be read are located) and the `PATH_TO_OUTPUT` (where any plots will be written) below.
```
# Numpy, scipy, and plotting:
import numpy as np
from scipy.stats import norm # Gaussian fitting
import scipy.integrate as integrate # integration
import matplotlib.pyplot as plt # plotting
import seaborn as sns; sns.set() # nicer plotting
import pandas as pd # dataframe for use with seaborn
# File i/o:
import pickle # for unpickling MNIST data
import gzip # for opening pickled MNIST data file
import h5py # HDF5
# Miscellaneous:
import math
import random # random number generators
import re # regular expressions
import gc # garbage collection
# symbolic algebra package:
import sympy as sym
from sympy import tanh
```
## Import HDF5 data
Specify the path to the .hdf5 files containing the accuracies and hooks, and define functions to load the data as dictionaries:
```
PATH_TO_DATA = '/full/path/to/HDF5/data/'
PATH_TO_OUTPUT = '/full/path/where/plots/are/to/be/saved/'
# read file of accuracies, return dataset as dictionary:
def read_accuracies(file_name):
with h5py.File(PATH_TO_DATA + file_name, 'r') as file:
# cast elements as np.array, else returns closed file datasets:
acc_dict = {key : np.array(file[key]) for key in file.keys()}
return acc_dict
# read file of inputs/outputs, return dataset as dictionary:
def read_hooks(file_name):
with h5py.File(PATH_TO_DATA + file_name, 'r') as file:
# cast elements as np.array, else returns closed file datasets:
hook_dict = {key : np.array(file[key]) for key in file.keys()}
return hook_dict
# read file of weights, biases; return dataset as dictionary:
def read_parameters(file_name):
with h5py.File(PATH_TO_DATA + file_name, 'r') as file:
# cast elements as np.array, else returns closed file datasets:
for key in file.keys():
para_dict = {key : np.array(file[key]) for key in file.keys()}
return para_dict
# load data, ensuring consistent files:
def load_data(acc_file, hook_file, para_file, verbose=True):
accuracies = read_accuracies(acc_file)
hooks = read_hooks(hook_file)
parameters = read_parameters(para_file)
var_w = accuracies['var_weight'].item()
var_b = accuracies['var_bias'].item()
if var_w != hooks['var_weight'].item() or var_w != parameters['var_weight'].item():
raise Exception('Weight variances do not match!')
elif var_b != hooks['var_bias'].item() or var_b != parameters['var_bias'].item():
raise Exception('Bias variances do not match!')
# extract accuracies corresponding to depth in hook file:
index = np.where(accuracies['depth'] == hooks['depth'])[0] # array of matches
if index.size == 0: # empty array = no match
raise Exception('No matching depth!')
else:
acc = accuracies['accuracies'][index[0]]
print('Successfully loaded network with the following parameters:'
'\nDepth = {}\nvar_w = {}\nvar_b = {}\n'.format(hooks['depth'].item(), var_w, var_b))
# optionally print key lists:
if verbose:
print('Hook keys:\n{}\n'.format(hooks.keys()))
print('Parameter keys:\n{}\n'.format(parameters.keys()))
return acc, hooks, parameters
```
So, for example, we can read in files and extract the hyperparameters as follows:
```
accs, hooks, paras = load_data('acc-150-30.hdf5', 'e14-hooks-150-30.hdf5', 'e14-para-150-30.hdf5')
depth = hooks['depth'].item()
var_w = hooks['var_weight'].item()
var_b = hooks['var_bias'].item()
```
## Analysis functions
Here we'll define some useful functions for analyzing the results. To begin, let's write a simple function that returns the distribution of pre-/post-activations (i.e., inputs/outputs) for each layer, to see whether they remain Gaussian.
```
# return mean and variance for the layer, and optionally plot:
def view_layer(key, plot=False, truncate=1000):
layer = hooks[key][-truncate:] # use last `truncate` samples, else excessive size
sns.distplot(layer, fit=norm)
if not plot: plt.close() # optionally suppress figure
mean, std = norm.fit(layer)
return mean, std**2
# same, but accept layer as array:
def view_array(layer, plot=False):
sns.distplot(layer, fit=norm)
if not plot: plt.close() # optionally suppress figure
mean, std = norm.fit(layer)
return mean, std**2
```
Let's look at a few layers:
```
# current dataset corresponds to `wide` network option, so should remain Gaussian until the last couple layers:
view_layer('in-0', True)
view_layer('in-15', True)
view_layer('in-27', True)
view_layer('in-29', True) # only 10 neurons, don't expect Gaussian
```
Of chief importance is the fixed-point $q^*$. We can find the approximate value with the following process: first, we numerically evaluate the integral expression for $q^{\ell+1}$ as a function of $q^{\ell}$ for a grid of points. We can optionally use this to plot $q^{\ell+1}$ and the unit slope, but all we really need is the nearest datapoint (in the aforementioned grid) to the intersection, which we find by identifying the index at which the difference between these two curves changes sign. Then, we apply linear interpolation to the corresponding line segments to approximate the precise value of the intersection.
Denote the endpoints of the line segment with unit slope $(x_1, y_1=x_1)$ and $(x_2, y_2=x_2)$, and the endpoints of the segment of the $q$-curve $(x_3=x_1, y_3)$ and $(x_4=x_2, y_4)$. Then Cramer's rule reduces to the following expression for the intersection point $x=y$:
\begin{equation}
x=\frac{(x_1y_4-x_2y_3)}{(x_1-x_2)-(y_3-y_4)}
\end{equation}
```
# recursive expression for the variances, eq. (14) in my blog:
def next_q(q, var_w=1, var_b=0):
integral = integrate.quad(lambda z: np.exp(-z**2/2)*np.tanh(np.sqrt(q)*z)**2, -np.inf, np.inf)[0]/np.sqrt(2*np.pi)
return var_w*integral + var_b
# compute q* given variances, and optionally plot q^{l+1} vs. q^l:
def find_qstar(var_weight, var_bias, plot = False, domain = 2): # check between 0 and domain
# grid of points for numerical sampling:
points = np.arange(0,domain,0.05)
qnew = [next_q(q, var_weight, var_bias) for q in points]
# find index (i.e., datapoint) at which difference between curves changes sign:
flip = np.argwhere(np.diff(np.sign(qnew-points)))[0][0]
# extract line segments which contain the intersection:
seg1 = points[flip:flip+2]
seg2 = qnew[flip:flip+2]
# intersection point x=4 via Cramer's rule:
qstar = (seg1[0]*seg2[1] - seg1[1]*seg2[0])/(seg1[0] - seg1[1] - seg2[0] + seg2[1])
if plot:
line_df = pd.DataFrame({'q_l': points, 'q_{l+1}': points})
theory_df = pd.DataFrame({'q_l': points, 'q_{l+1}': qnew})
sns.lineplot('q_l', 'q_{l+1}', data=theory_df, marker='o');
sns.lineplot('q_l', 'q_{l+1}', data=line_df, marker='o');
return qstar
```
For example, for the case above, we have:
```
qstar = find_qstar(var_w, var_b, plot=True)
print(qstar)
```
Similarly, we would like to find the fixed point $\rho^*$, which is found by numerically solving a similar recursion relation, and then applying the flip-interpolation strategy above:
```
# recursive expression for the Pearson correlation coefficient, eq. (23) in my blog:
def next_rho(rho, qstar, var_w=1, var_b=0):
sq = np.sqrt(qstar)
bound = np.inf # integration bound (should be np.inf)
integral = integrate.dblquad(lambda x, y: np.exp(-x**2/2)*np.exp(-y**2/2)*np.tanh(sq*x)*np.tanh(sq*(rho*x+np.sqrt(1-rho**2)*y)),
-bound, bound, lambda x: -bound, lambda x: bound)[0]/(2*np.pi)
return (var_w*integral + var_b)/qstar
# compute rho* given q*, variances; optionally plot rho^{l+1} vs. rho^l:
def find_rhostar(qstar, var_weight, var_bias, plot = False):
# grid of points for numerical sampling:
points = np.arange(0,1.01,0.05)
rhonew = [next_rho(rho, qstar, var_weight, var_bias) for rho in points]
# find index (i.e., datapoint) at which difference between curves changes sign:
where = np.argwhere(np.diff(np.sign(rhonew-points)))
if where.size == 0:
rhostar = 1
else:
flip = np.argwhere(np.diff(np.sign(rhonew-points)))[0][0]
# extract line segments which contain the intersection:
seg1 = points[flip:flip+2]
seg2 = rhonew[flip:flip+2]
# intersection point x=4 via Cramer's rule:
rhostar = (seg1[0]*seg2[1] - seg1[1]*seg2[0])/(seg1[0] - seg1[1] - seg2[0] + seg2[1])
if plot:
line_df = pd.DataFrame({'rho_l': points, 'rho_{l+1}': points})
theory_df = pd.DataFrame({'rho_l': points, 'rho_{l+1}': rhonew})
sns.lineplot('rho_l', 'rho_{l+1}', data=theory_df, marker='o');
sns.lineplot('rho_l', 'rho_{l+1}', data=line_df, marker='o');
return rhostar
```
For example, for the $q^*$ value and associated variances above, we have:
```
rhostar = find_rhostar(qstar, var_w, var_b, True)
print(rhostar)
```
With these values in hand, we can compute the theoretical correlation length, given by eq. (27) in my blog (which is eq. (9) in Schoenholz et al.):
```
# correlation length (for the Pearson correlation coefficient):
def correlation_length(rhostar, qstar, var_w=1):
sq = np.sqrt(qstar)
bound = 100 # integration bound (should be np.inf, but that causes overflow errors)
integral = integrate.dblquad(lambda x, y: np.exp(-x**2/2)*np.exp(-y**2/2)*(1/np.cosh(sq*x))**2*(1/np.cosh(sq*(rhostar*x+np.sqrt(1-rhostar**2)*y))**2),
-bound, bound, lambda x: -bound, lambda x: bound)[0]/(2*np.pi)
return -1/np.log(var_w*integral)
correlation_length(rhostar, qstar, var_w)
```
# Probing fall-off
Theoretically, we should be able to train deeper networks at criticality, and they should all fall-off based on the correlation length. To see how our networks behave, we'll write a function that reads-in a grid-worth of accuracy data (optionally plotting the individual accuracies), and another that uses this function to make the desired scatterplot:
```
# automatically read and plot accuracies from a series of files **with the same variances**:
def read_and_plot_accs(base, start, stop, step, plot=True, write=False):
# file names in format acc-{base}-{dd}.hdf5
filenames = ['acc-{}-{}.hdf5'.format(base, dd) for dd in range(start, stop, step)]
#print('Reading {} files: {}\n'.format(len(filenames), filenames))
# get list of accuracies and corresponding depths:
acc, depth = [], []
for i in range(len(filenames)):
# load data:
acc_dict = read_accuracies(filenames[i])
acc.append(acc_dict['accuracies'])
depth.append(acc_dict['depth'].item())
# get variances from last file:
var_w = acc_dict['var_weight'].item()
var_b = acc_dict['var_bias'].item()
if plot:
#plt.rcParams['figure.figsize'] = [9, 6] # globally (!) adjust figure size
# plot each series, labelled by depth:
list_dict = {'L = {}'.format(dd) : pd.Series(acc[i])
for i,dd in enumerate(depth)}
df = pd.DataFrame(list_dict)
acc_plot = df.plot()
# format legend, title:
acc_legend = acc_plot.legend(loc='upper left', bbox_to_anchor=(1,1))
acc_plot.set_title('var_w = {}'.format(var_w)) # all var_w equal
# optionally save plot as pdf:
if write:
plt.savefig(PATH_TO_OUTPUT+'plot-{}.pdf'.format(base),
bbox_extra_artists=(acc_legend,), bbox_inches='tight')
return acc, depth, var_w, var_b
# read-in accuracies using pre-defined function above, and use this to
# make scatterplot like fig. 5 in Schoenholz et al.:
def probe_falloff(base_list, start, stop, step, plot=True, write=False):
# read accuracies, with plot suppressed:
acc_list, dep_list, w_list, b_list = [], [], [], []
for base in base_list:
acc, dep, w, b = read_and_plot_accs(base, start, stop, step, False, False)
# store final accuracy from run:
acc_list.append([a[-1] for a in acc])
# store list of depths, variances:
dep_list.append(dep)
w_list.append(w)
b_list.append(b)
# var_w gives x-values:
x_vals = []
for i in range(len(w_list)):
# make len(acc_list[i]) copies of w_list[i]:
x_vals.append([w_list[i]]*len(acc_list[i]))
x_vals = np.array(x_vals).flatten()
# depths give y-values:
y_vals = np.array(dep_list).flatten()
# accuracies give z-values (color):
z_vals = np.array(acc_list).flatten()
# optionally make scatterplot:
if plot:
scat_plot = plt.scatter(x_vals, y_vals, c=z_vals, cmap='rainbow', s=50)
plt.colorbar(scat_plot) # add colorbar as legend
# add title, axes labels:
plt.title('var_b = {}'.format(b_list[0])) # all var_b equal
plt.xlabel('var_w')
plt.ylabel('depth')
# optionally save plot as pdf:
if write:
# should all have same bias, so label with that:
plt.savefig(PATH_TO_OUTPUT+'scatterplot-{}.pdf'.format(b_list[0]),)
return x_vals, y_vals, z_vals, b_list
# read and plot:
var_list, dep_list, acc_list, b_list = probe_falloff([x for x in range(100,286,5)], 10, 70, 3, True, False)
```
How does this compare with the theoretical value of the correlation length? We can easily compute this using the $q^*$, $\rho^*$, and `correlation_length` functions above:
```
# same range of var_w values as above, for given var_b:
test_w = np.arange(1.0, 2.86, 0.05)
test_b = 0.05
qstar_test = [find_qstar(ww, test_b, False) for ww in test_w]
#print('q* = ', qstar_test)
rhostar_test = [find_rhostar(qq, ww, test_b, False) for qq, ww in zip(qstar_test, test_w)]
#print('\nrho* = {}\n'.format(rhostar_test))
xi_vals = np.array([correlation_length(rr, qq, ww) for rr,qq,ww in zip(rhostar_test,qstar_test,test_w)])
```
In principle this should never be negative, but the numerics are such that the integral can be greater than 1 near the critical point, which makes $\xi<0$. Since we can't plot infinity, let's just replace this with double the largest positive value for visualization purposes:
```
neg_index = np.where(np.array(xi_vals) < 0)[0].item() # get index of negative value
xis = np.copy(xi_vals)
xis[neg_index] = 2*max(xi_vals)
xi_df = pd.DataFrame({'var_w': test_w, 'xi': xis})
xi_plot = sns.lineplot('var_w', 'xi', data=xi_df, marker='o');
xi_plot.set_ylim(0,100);
```
This is fine, but it would be nice to overlay the theoretical curve on the grid:
```
# re-create and overlay above two plots:
def overlay_falloff(base_list, start, stop, step, write=False):
# ************ load and process data for scatterplot: ************
# read accuracies, with plot suppressed:
acc_list, dep_list, w_list, b_list = [], [], [], []
for base in base_list:
acc, dep, w, b = read_and_plot_accs(base, start, stop, step, False, False)
# store final accuracy from run:
acc_list.append([a[-1] for a in acc])
# store list of depths, variances:
dep_list.append(dep)
w_list.append(w)
b_list.append(b)
# var_w gives x-values:
x_vals = []
for i in range(len(w_list)):
# make len(acc_list[i]) copies of w_list[i]:
x_vals.append([w_list[i]]*len(acc_list[i]))
x_vals = np.array(x_vals).flatten()
# depths give y-values:
y_vals = np.array(dep_list).flatten()
# accuracies give z-values (color):
z_vals = np.array(acc_list).flatten()
# ************ process data for correlation length plot: ************
qstar = [find_qstar(ww, b_list[0], False) for ww in w_list] # all biases equal, so just use first
rhostar = [find_rhostar(qq, ww, b_list[0], False) for qq, ww in zip(qstar, w_list)]
xi_vals = np.array([correlation_length(rr, qq, ww) for rr,qq,ww in zip(rhostar, qstar, w_list)])
# ensure no negative elements (see comment about numerics near critical point above):
artificial_xi = 2*max(xi_vals) # overwrite negative values with this
for i in range(xi_vals.size):
if xi_vals[i] < 0:
xi_vals[i] = artificial_xi
# consider a few different multiples of the correlation length, for comparison with Schoenholz et al.:
three_vals = [np.pi*xx for xx in xi_vals]
six_vals = [2*np.pi*xx for xx in xi_vals]
# ************ overlay correlation length plot on scatterplot: ************
# create combination figure:
fig, ax1 = plt.subplots(figsize=(9,6))
ax2 = ax1.twinx() # share x axis
# make scatterplot:
ax1.set_xlabel(r'$\sigma_w^2$')
ax1.set_ylabel('depth')
scat_plot = ax1.scatter(x=x_vals, y=y_vals, c=z_vals, cmap='rainbow', s=120) # does not return Axes object!
ax1.tick_params(axis='y')
# truncate for cleaner visuals:
ax1.set_ylim(min(y_vals)-1, max(y_vals)+1)
ax1.set_xlim(min(w_list)-0.05, max(w_list)+0.05)
# ax1.set_title('Optional title here')
cbar = plt.colorbar(scat_plot, label='accuracy') # add colorbar as legend
# control labels/ticks position colorbar:
cbar.ax.yaxis.set_ticks_position('right')
cbar.ax.yaxis.set_label_position('left')
# overlay correlation length plot:
xi_df = pd.DataFrame({'var_w': w_list, 'xi': xi_vals})
ax2 = sns.lineplot('var_w', 'xi', data=xi_df, marker=None, color='black')
# n.b., use None instead of False, else pdf still has white horizontal ticks
xi3_df = pd.DataFrame({'var_w': w_list, 'xi': three_vals})
sns.lineplot('var_w', 'xi', data=xi3_df, marker=None, color='grey')
xi6_df = pd.DataFrame({'var_w': w_list, 'xi': six_vals})
sns.lineplot('var_w', 'xi', data=xi6_df, marker=None, color='darkgrey')
# n.b., darkgrey is *lighter* than grey, because what the fuck programmers
# truncate to same range/domain:
ax2.set_ylim(min(y_vals)-1, max(y_vals)+1)
ax2.set_xlim(min(w_list)-0.05, max(w_list)+0.05)
# turn off second labels, ticks, and grid:
ax2.set_ylabel(None)
ax2.grid(False)
ax2.axis('off')
# optionally save plot as pdf:
if write:
# should all have same bias, so label with that:
plt.savefig(PATH_TO_OUTPUT+'scatterplot-{}.pdf'.format(b_list[0]),)
return x_vals, y_vals, z_vals, b_list
overlay_falloff([x for x in range(100,286,5)], 10, 70, 3, xis, False);
```
|
github_jupyter
|
```
lossess = [nn.L1Loss,nn.MSELoss,torch.nn.HingeEmbeddingLoss,torch.nn.MarginRankingLoss,torch.nn.TripletMarginLossnn.BCELoss]
for criterion in lossess:
model = Test_Model(num_of_layers=1,activation=nn.Tanh()).to(device)
model.to(device)
optimizer = torch.optim.SGD(model.parameters(),lr=0.25)
criterion = criterion()
wandb.init(project=PROJECT_NAME,name=f'criterion-{criterion}')
for _ in tqdm(range(212)):
preds = model(X_train.float().to(device),True)
preds = preds.view(len(preds),)
preds.to(device)
loss = criterion(preds,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,X_test,y_test,model),'accuracy':get_accuracy(preds,y_train)})
wandb.finish()
import pandas as pd
data = pd.read_csv('./data.csv')
X,y = data.drop('target',axis=1),data['target']
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25)
import torch
import torch.nn as nn
import numpy as np
X_train = torch.from_numpy(np.array(X_train).astype(np.float32))
y_train = torch.from_numpy(np.array(y_train).astype(np.float32))
X_test = torch.from_numpy(np.array(X_test).astype(np.float32))
y_test = torch.from_numpy(np.array(y_test).astype(np.float32))
X_train.shape
X_test.shape
y_train.shape
y_test.shape
import torch.nn.functional as F
class Test_Model(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(13,64)
self.fc2 = nn.Linear(64,128)
self.fc3 = nn.Linear(128,256)
self.fc4 = nn.Linear(256,512)
self.fc5 = nn.Linear(512,1024)
self.fc6 = nn.Linear(1024,512)
self.fc7 = nn.Linear(512,1)
def forward(self,X):
preds = self.fc1(X)
preds = F.relu(preds)
preds = self.fc2(preds)
preds = F.relu(preds)
preds = self.fc3(preds)
preds = F.relu(preds)
preds = self.fc4(preds)
preds = F.relu(preds)
preds = self.fc5(preds)
preds = F.relu(preds)
preds = self.fc6(preds)
preds = F.relu(preds)
preds = self.fc7(preds)
return F.sigmoid(preds)
device = torch.device('cuda')
X_train = X_train.to(device)
y_train = y_train.to(device)
X_test = X_test.to(device)
y_test = y_test.to(device)
PROJECT_NAME = 'Heart-Disease-UCI'
def get_loss(criterion,X,y,model):
model.eval()
with torch.no_grad():
preds = model(X.float().to(device))
preds = preds.view(len(preds),).to(device)
y = y.view(len(y),).to(device)
loss = criterion(preds,y)
model.train()
return loss.item()
def get_accuracy(preds,y):
correct = 0
total = 0
for real,pred in zip(y_train,preds):
if real == pred:
correct += 1
total += 1
return round(correct/total,3)
import wandb
from tqdm import tqdm
EPOCHS = 212
# EPOCHS = 100
# model = Test_Model().to(device)
# optimizer = torch.optim.SGD(model.parameters(),lr=0.25)
# criterion = nn.L1Loss()
# wandb.init(project=PROJECT_NAME,name='baseline')
# for _ in tqdm(range(EPOCHS)):
# preds = model(X_train.float().to(device))
# preds = preds.view(len(preds),)
# preds.to(device)
# loss = criterion(preds,y_train)
# optimizer.zero_grad()
# loss.backward()
# optimizer.step()
# wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,X_test,y_test,model),'accuracy':get_accuracy(X_train,y_train,model),'val_accuracy':get_accuracy(X_test,y_test,model)})
# wandb.finish()
# preds[:10]
# preds = torch.round(preds)
# correct = 0
# total = 0
# for real,pred in zip(y_train,preds):
# if real == pred:
# correct += 1
# # total += 1
# round(correct/total,3)
## Testing Modelling
import torch
import torch.nn as nn
class Test_Model(nn.Module):
def __init__(self,num_of_layers=1,activation=F.relu,input_shape=13,fc1_output=32,fc2_output=64,fc3_output=128,fc4_output=256,output_shape=1):
super().__init__()
self.num_of_layers = num_of_layers
self.activation = activation
self.fc1 = nn.Linear(input_shape,fc1_output)
self.fc2 = nn.Linear(fc1_output,fc2_output)
self.fc3 = nn.Linear(fc2_output,fc3_output)
self.fc4 = nn.Linear(fc3_output,fc4_output)
self.fc5 = nn.Linear(fc4_output,fc3_output)
self.fc6 = nn.Linear(fc3_output,fc3_output)
self.fc7 = nn.Linear(fc3_output,output_shape)
def forward(self,X,activation=False):
preds = self.fc1(X)
if activation:
preds = self.activation(preds)
preds = self.fc2(preds)
if activation:
preds = self.activation(preds)
preds = self.fc3(preds)
if activation:
preds = self.activation(preds)
preds = self.fc4(preds)
if activation:
preds = self.activation(preds)
preds = self.fc5(preds)
if activation:
preds = self.activation(preds)
for _ in range(self.num_of_layers):
preds = self.fc6(preds)
if activation:
preds = self.activation(preds)
preds = self.fc7(preds)
preds = F.sigmoid(preds)
return preds
device = torch.device('cuda')
# preds = torch.round(preds)
# num_of_layers = 1
# input_shape
# fc1_output
# fc2_output
# fc3_output
# fc4_output
# output_shape
# optimizer = torch.optim.SGD
# criterion =
# lr
# activtion = nn.Tanh()
lossess = [nn.L1Loss,nn.MSELoss,torch.nn.HingeEmbeddingLoss,torch.nn.MarginRankingLoss,torch.nn.TripletMarginLossnn.BCELoss]
for criterion in lossess:
model = Test_Model(num_of_layers=1,activation=nn.Tanh()).to(device)
model.to(device)
optimizer = torch.optim.SGD(model.parameters(),lr=0.25)
criterion = criterion()
wandb.init(project=PROJECT_NAME,name=f'criterion-{criterion}')
for _ in tqdm(range(212)):
preds = model(X_train.float().to(device),True)
preds = preds.view(len(preds),)
preds.to(device)
loss = criterion(preds,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,X_test,y_test,model),'accuracy':get_accuracy(preds,y_train)})
wandb.finish()
lossess = [nn.L1Loss,nn.MSELoss,torch.nn.HingeEmbeddingLoss,torch.nn.MarginRankingLoss,torch.nn.TripletMarginLossnn,torch.nn.BCELoss]
for criterion in lossess:
model = Test_Model(num_of_layers=1,activation=nn.Tanh()).to(device)
model.to(device)
optimizer = torch.optim.SGD(model.parameters(),lr=0.25)
criterion = criterion()
wandb.init(project=PROJECT_NAME,name=f'criterion-{criterion}')
for _ in tqdm(range(212)):
preds = model(X_train.float().to(device),True)
preds = preds.view(len(preds),)
preds.to(device)
loss = criterion(preds,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,X_test,y_test,model),'accuracy':get_accuracy(preds,y_train)})
wandb.finish()
lossess = [nn.L1Loss,nn.MSELoss,torch.nn.HingeEmbeddingLoss,torch.nn.MarginRankingLoss,torch.nn.TripletMarginLoss,torch.nn.BCELoss]
for criterion in lossess:
model = Test_Model(num_of_layers=1,activation=nn.Tanh()).to(device)
model.to(device)
optimizer = torch.optim.SGD(model.parameters(),lr=0.25)
criterion = criterion()
wandb.init(project=PROJECT_NAME,name=f'criterion-{criterion}')
for _ in tqdm(range(212)):
preds = model(X_train.float().to(device),True)
preds = preds.view(len(preds),)
preds.to(device)
loss = criterion(preds,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,X_test,y_test,model),'accuracy':get_accuracy(preds,y_train)})
wandb.finish()
# nn.L1Loss,nn.MSELoss,torch.nn.HingeEmbeddingLoss,
lossess = [torch.nn.TripletMarginLoss,torch.nn.BCELoss]
for criterion in lossess:
model = Test_Model(num_of_layers=1,activation=nn.Tanh()).to(device)
model.to(device)
optimizer = torch.optim.SGD(model.parameters(),lr=0.25)
criterion = criterion()
wandb.init(project=PROJECT_NAME,name=f'criterion-{criterion}')
for _ in tqdm(range(212)):
preds = model(X_train.float().to(device),True)
preds = preds.view(len(preds),)
preds.to(device)
loss = criterion(preds,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,X_test,y_test,model),'accuracy':get_accuracy(preds,y_train)})
wandb.finish()
# nn.L1Loss,nn.MSELoss,torch.nn.HingeEmbeddingLoss,
lossess = [torch.nn.BCELoss]
for criterion in lossess:
model = Test_Model(num_of_layers=1,activation=nn.Tanh()).to(device)
model.to(device)
optimizer = torch.optim.SGD(model.parameters(),lr=0.25)
criterion = criterion()
wandb.init(project=PROJECT_NAME,name=f'criterion-{criterion}')
for _ in tqdm(range(212)):
preds = model(X_train.float().to(device),True)
preds = preds.view(len(preds),)
preds.to(device)
loss = criterion(preds,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,X_test,y_test,model),'accuracy':get_accuracy(preds,y_train)})
wandb.finish()
lrs = [0.1,1.0,0.25,0.125,0.5,0.75,0.01,0.001,0.0001]
for lr in lrs:
model = Test_Model(num_of_layers=1,activation=nn.Tanh()).to(device)
model.to(device)
optimizer = torch.optim.SGD(model.parameters(),lr=lr)
criterion = nn.MSELoss()
wandb.init(project=PROJECT_NAME,name=f'lr-{lr}')
for _ in tqdm(range(212)):
preds = model(X_train.float().to(device),True)
preds = preds.view(len(preds),)
preds.to(device)
loss = criterion(preds,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,X_test,y_test,model),'accuracy':get_accuracy(preds,y_train)})
wandb.finish()
fc1_outputs = [16,32,64,128,256]
for fc1_output in fc1_outputs:
model = Test_Model(num_of_layers=1,activation=nn.Tanh()fc1_outputs=fc1_output).to(device)
model.to(device)
optimizer = torch.optim.SGD(model.parameters(),lr=0.125)
criterion = nn.MSELoss()
wandb.init(project=PROJECT_NAME,name=f'fc1_output-{fc1_output}')
for _ in tqdm(range(212)):
preds = model(X_train.float().to(device),True)
preds = preds.view(len(preds),)
preds.to(device)
loss = criterion(preds,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,X_test,y_test,model),'accuracy':get_accuracy(preds,y_train)})
wandb.finish()
fc1_outputs = [16,32,64,128,256]
for fc1_output in fc1_outputs:
model = Test_Model(num_of_layers=1,activation=nn.Tanh(),fc1_outputs=fc1_output).to(device)
model.to(device)
optimizer = torch.optim.SGD(model.parameters(),lr=0.125)
criterion = nn.MSELoss()
wandb.init(project=PROJECT_NAME,name=f'fc1_output-{fc1_output}')
for _ in tqdm(range(212)):
preds = model(X_train.float().to(device),True)
preds = preds.view(len(preds),)
preds.to(device)
loss = criterion(preds,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,X_test,y_test,model),'accuracy':get_accuracy(preds,y_train)})
wandb.finish()
fc1_outputs = [16,32,64,128,256]
for fc1_output in fc1_outputs:
model = Test_Model(num_of_layers=1,activation=nn.Tanh(),fc1_output=fc1_output).to(device)
model.to(device)
optimizer = torch.optim.SGD(model.parameters(),lr=0.125)
criterion = nn.MSELoss()
wandb.init(project=PROJECT_NAME,name=f'fc1_output-{fc1_output}')
for _ in tqdm(range(212)):
preds = model(X_train.float().to(device),True)
preds = preds.view(len(preds),)
preds.to(device)
loss = criterion(preds,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,X_test,y_test,model),'accuracy':get_accuracy(preds,y_train)})
wandb.finish()
fc2_outputs = [16,32,64,128,256,512]
for fc2_output in fc2_outputs:
model = Test_Model(num_of_layers=1,activation=nn.Tanh(),fc1_output=256,fc2_output=fc2_output).to(device)
model.to(device)
optimizer = torch.optim.SGD(model.parameters(),lr=0.125)
criterion = nn.MSELoss()
wandb.init(project=PROJECT_NAME,name=f'fc2_output-{fc2_output}')
for _ in tqdm(range(212)):
preds = model(X_train.float().to(device),True)
preds = preds.view(len(preds),)
preds.to(device)
loss = criterion(preds,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,X_test,y_test,model),'accuracy':get_accuracy(preds,y_train)})
wandb.finish()
fc3_outputs = [16,32,64,128,256,512,1024]
for fc3_output in fc3_outputs:
model = Test_Model(num_of_layers=1,activation=nn.Tanh(),fc1_output=256,fc2_output=64,fc3_output=fc3_output).to(device)
model.to(device)
optimizer = torch.optim.SGD(model.parameters(),lr=0.125)
criterion = nn.MSELoss()
wandb.init(project=PROJECT_NAME,name=f'fc3_output-{fc3_output}')
for _ in tqdm(range(212)):
preds = model(X_train.float().to(device),True)
preds = preds.view(len(preds),)
preds.to(device)
loss = criterion(preds,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,X_test,y_test,model),'accuracy':get_accuracy(preds,y_train)})
wandb.finish()
# num_of_layers = 1
# fc1_output = 256
# fc2_output = 64
# fc3_output = 32
# fc4_output =
# optimizer = torch.optim.SGD
# criterion = nn.MSELoss
# lr = 0.125
# activtion = nn.Tanh()
fc4_outputs = [16,32,64,128,256,512,1024,2048]
for fc4_output in fc4_outputs:
model = Test_Model(num_of_layers=1,activation=nn.Tanh(),fc1_output=256,fc2_output=64,fc3_output=32,fc4_output=fc4_output).to(device)
model.to(device)
optimizer = torch.optim.SGD(model.parameters(),lr=0.125)
criterion = nn.MSELoss()
wandb.init(project=PROJECT_NAME,name=f'fc4_output-{fc4_output}')
for _ in tqdm(range(212)):
preds = model(X_train.float().to(device),True)
preds = preds.view(len(preds),)
preds.to(device)
loss = criterion(preds,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,X_test,y_test,model),'accuracy':get_accuracy(preds,y_train)})
wandb.finish()
```
|
github_jupyter
|
Deep Learning
=============
Assignment 2
------------
Previously in `1_notmnist.ipynb`, we created a pickle with formatted datasets for training, development and testing on the [notMNIST dataset](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html).
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
```
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
```
First reload the data we generated in `1_notmnist.ipynb`.
```
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
```
Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
```
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
```
We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
* Then you can run the operations on this graph as many times as you want by calling `session.run()`, providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
```
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
```
Let's run this computation and iterate:
```
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
```
Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a `Placeholder` node which will be fed actual data at every call of `session.run()`.
```
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
```
Let's run it:
```
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```
---
Problem
-------
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units [nn.relu()](https://www.tensorflow.org/versions/r0.7/api_docs/python/nn.html#relu) and 1024 hidden nodes. This model should improve your validation / test accuracy.
---
|
github_jupyter
|
# CLUSTERING Comparisons
Clustering is a type of **Unsupervised Machine Learning**, which can determine relationships of unlabeled data.
DBSCAN stands for Density-Based Spatial Clustering of Applications with Noise.
This notebook will show one approach to prepare data for exploration of DBScan, Agglomerative and KMeans.
Based on [How DBSCAN Clustering Works](https://www.analyticsvidhya.com/blog/2020/09/how-dbscan-clustering-works/)
### Data information
Test DBScan over features
- DBSCAN suffers from the curse of dimensionality.
- This data has over 60 dimensions, so a few features will be modeled, not all of them at once.
- Want to avoid false correlation like ice cream sales and drowning deaths, but still visualize groups and noise.
### Dependencies
```
import pandas as pd
import csv
import os
import numpy as np
import math
import matplotlib.pyplot as plt
import matplotlib
from sklearn.cluster import DBSCAN
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import normalize
from sklearn.decomposition import PCA
import seaborn as sns
```
### Preparing Data
Use pandas to prepare data for machine learning.
```
#Reading in summarized feature set
file_name = os.path.join(os.getcwd(), "summary_out_Text2.csv")
df = pd.read_csv(file_name,skipinitialspace=True)
```
Look at the structure of the data.
```
df.head()
df.columns
df.shape
```
Select columns that will be used as features for Machine Learning.
```
#load two features
x = df[['dollars_obligated','MaxTxYear']]
```
Dealing with missing values can be done by removing rows with missing data....
```
#check number of rows
print ("original number of rows: %d" % (len(x.index)))
#see the nan rows
x[x.isna().any(axis=1)]
#remove rows
x1 = x.dropna()
print ("new number of rows: %d" % (len(x1.index)))
## confirm that there are no null values
x1.info()
```
To see variance of features: boxplot (from Seaborn) can be used with the MinMaxscaler (from scikit-learn) to visualize this.
```
scaler = StandardScaler()
X_scaled = scaler.fit_transform(x1)
fmX = pd.DataFrame(X_scaled)
ax = sns.boxplot(data=fmX)
ax
#look at the box plot of the unscaled data
ax = sns.boxplot(data=x1)
#scatter plot the first 2 columns
plt.figure(figsize=(10,10))
plt.scatter(fmX[0],fmX[1],s=15,color='grey')
plt.title('Dataset',fontsize=20)
plt.xlabel('Feature 1',fontsize=14)
plt.ylabel('Feature 2',fontsize=14)
plt.show()
#start with KMeans cluster of the 2
k_means=KMeans(n_clusters=4,random_state=42)
k_means.fit(fmX[[0,1]])
fmX['KMeans_labels']=k_means.labels_
# Plotting resulting clusters
colors=['purple','red','blue','green']
plt.figure(figsize=(10,10))
plt.scatter(fmX[0],fmX[1],c=fmX['KMeans_labels'],cmap=matplotlib.colors.ListedColormap(colors),s=15)
plt.title('K-Means Clustering Scaled Data',fontsize=20)
plt.xlabel('Feature 1',fontsize=14)
plt.ylabel('Feature 2',fontsize=14)
plt.show()
# Normalizing the data so that
# the data approximately follows a Gaussian distribution
X_normalized = normalize(X_scaled)
# Converting the numpy array into a pandas DataFrame
df_X_normalized = pd.DataFrame(X_normalized)
#start with KMeans cluster of the 2
k_means_norm=KMeans(n_clusters=4,random_state=42)
k_means_norm.fit(df_X_normalized[[0,1]])
df_X_normalized['KMeans_labels']=k_means_norm.labels_
# Plotting resulting clusters
colors=['purple','red','blue','green']
plt.figure(figsize=(10, 10))
plt.scatter(df_X_normalized[0],df_X_normalized[1],c=df_X_normalized['KMeans_labels']
,cmap=matplotlib.colors.ListedColormap(colors),s=15)
plt.title('K-Means Clustering Scaled and Normalized Features',fontsize=20)
plt.xlabel('Feature 1',fontsize=14)
plt.ylabel('Feature 2',fontsize=14)
plt.show()
#picking a subset of data
df_X_normalized.describe()
df_samp = df_X_normalized.sample(frac = 0.10)
df_samp.describe()
df_samp.shape
#Look at agglomerative - selected a very small sample since memory error
#running this took 80% of memory -any more pings my pc
from sklearn.cluster import AgglomerativeClustering
model = AgglomerativeClustering(n_clusters=4, affinity='euclidean')
model.fit(df_samp[[0,1]])
df_samp['HR_labels']=model.labels_
# Plotting resulting clusters
plt.figure(figsize=(10,10))
plt.scatter(df_samp[0],df_samp[1],c=df_samp['HR_labels'],cmap=matplotlib.colors.ListedColormap(colors),s=15)
plt.title('Hierarchical Clustering',fontsize=20)
plt.xlabel('Feature 1',fontsize=14)
plt.ylabel('Feature 2',fontsize=14)
plt.show()
#finally DBScan, though both heirarchical and KMeans did pretty well
from sklearn.cluster import DBSCAN
dbscan=DBSCAN()
dbscan.fit(df_samp[[0,1]])
df_samp['DBSCAN_labels']=dbscan.labels_
df_samp.shape
# Plotting resulting clusters
plt.figure(figsize=(10,10))
plt.scatter(df_samp[0],df_samp[1],c=df_samp['DBSCAN_labels'],cmap=matplotlib.colors.ListedColormap(colors),s=15)
plt.title('DBSCAN Clustering',fontsize=20)
plt.xlabel('Feature 1',fontsize=14)
plt.ylabel('Feature 2',fontsize=14)
plt.show()
#try with other data that has been one hot encoded
df_fs=df[['naics_code','level_3_cat_platform']]
#remove nulls
df_fs1 =df_fs.dropna()
#scale the new data and plot
scaler = StandardScaler()
df_fs_scaled = scaler.fit_transform(df_fs1)
df_scaled = pd.DataFrame(df_fs_scaled)
ax = sns.boxplot(data=df_scaled)
ax
#start with KMeans cluster of the 2
k_means=KMeans(n_clusters=4,random_state=42)
k_means.fit(df_scaled[[0,1]])
df_scaled['KMeans_labels']=k_means.labels_
# Plotting resulting clusters
colors=['purple','red','blue','green']
plt.figure(figsize=(10,10))
plt.scatter(df_scaled[0],df_scaled[1],c=df_scaled['KMeans_labels'],cmap=matplotlib.colors.ListedColormap(colors),s=15)
plt.title('K-Means Clustering Scaled Data',fontsize=20)
plt.xlabel('Feature 1',fontsize=14)
plt.ylabel('Feature 2',fontsize=14)
plt.show()
# Normalizing the data so that
# the data approximately follows a Gaussian distribution
df_norm = normalize(df_scaled)
# Converting the numpy array into a pandas DataFrame
df_norm = pd.DataFrame(df_norm)
#start with KMeans cluster of the 2
k_means_norm=KMeans(n_clusters=4,random_state=42)
k_means_norm.fit(df_norm[[0,1]])
df_norm['KMeans_labels']=k_means_norm.labels_
# Plotting resulting clusters
colors=['purple','red','blue','green']
plt.figure(figsize=(10, 10))
plt.scatter(df_norm[0],df_norm[1],c=df_norm['KMeans_labels']
,cmap=matplotlib.colors.ListedColormap(colors),s=15)
plt.title('K-Means Clustering Scaled and Normalized Features',fontsize=20)
plt.xlabel('Feature 1',fontsize=14)
plt.ylabel('Feature 2',fontsize=14)
plt.show()
#try dbscan on these two
df_samp2 = df_norm.sample(frac = 0.05)
print(df_samp2.shape, df.shape)
df_samp2.head()
dbscan.fit(df_samp2[[0,1]])
df_samp2['DBSCAN_labels']=dbscan.labels_
# Plotting resulting clusters
plt.figure(figsize=(10,10))
plt.scatter(df_samp2[0],df_samp2[1],c=df_samp2['DBSCAN_labels'],cmap=matplotlib.colors.ListedColormap(colors),s=15)
plt.title('DBSCAN Clustering',fontsize=20)
plt.xlabel('Feature 1',fontsize=14)
plt.ylabel('Feature 2',fontsize=14)
plt.show()
#conclusion DBScan works much better on the summarized data than the unsummarized
```
|
github_jupyter
|
Since g2 data from measurements are saved in .spe files so we import an external library to read such files to get data in numpy arrays.
```
# import libraries we need
%pylab inline
import sys
sys.path.append('./py_programs/')
from tensorflow import keras
from sdt_reader import sdtfile
from py_programs import sdt
file = sdtfile.SdtFile('./sdt_data/Antibunching_Rh110_DPC.sdt')
file.block_measure_info
# read data files
t1, y1 = sdt.read('./sdt_data/Antibunching_Rh110_DPC.sdt')
t2, y2 = sdt.read('./sdt_data/Antibunching_Rh110_Spc.sdt')
# cut off the first and last few zero data points
y2 = y2[np.argwhere(y2>0)].flatten()
t2 = t2[np.argwhere(y2>0)].flatten()
```
We need to manually set the dip as the zero time delay. And also normalize the g2 signal to its maximum.
```
# normalize g2 values and zero the time delay
t1_norm, y1_norm = sdt.normalize(t1,y1)
t2_norm, y2_norm = sdt.normalize(t2,y2)
# take a look at the data
plt.figure(1)
plt.title('Antibunching_Rh110_DPC')
plt.xlabel(r'$\tau$(ns)')
plt.ylabel(r'$g^{(2)}$') # un-normalized
plt.plot(t1_norm,y1_norm)
plt.figure(2)
plt.title('Antibunching_Rh110_Spc')
plt.xlabel(r'$\tau$(ns)')
plt.ylabel(r'$g^{(2)}$')
plt.plot(t2_norm,y2_norm)
plt.show()
```
# MachineLearning Part
To avoid tensorflow occupying all CPUs or GPU in a computer we need to set processing units which tensorflow has access to use.
```
# this is to limit the GPU and CPUs being occupied by tensorflow
from implementations import tf_setCPU
# create training sequences
#time_step = 2
train = sdt.create_train(y1_norm,time_step)
#train2x, train2y = sdt.create_train(y1,time_step
train1 = train1.reshape(train1.shape[0],train1.shape[1],1)
train2 = sdt.create_train(y2_norm,time_step)
train2 = train2.reshape(train2.shape[0],train2.shape[1],1)
```
Here I create a [1D CNN](https://keras.io/api/layers/convolution_layers/convolution1d/).
```
# create a training model
kernelsize = 7
model = keras.Sequential()
model.add(keras.layers.Input(shape=(train1.shape[1],train1.shape[2])))
model.add(keras.layers.Conv1D(filters=32, kernel_size=kernelsize, padding="same", strides=1, activation="relu")) #,input_shape=(train1.shape[0],train1.shape[1],1)
model.add(keras.layers.Dropout(rate=0.1))
model.add(keras.layers.Conv1D(filters=16, kernel_size=kernelsize, padding="same", strides=1, activation="relu")) #,input_shape=(train1.shape[0],train1.shape[1],1)
model.add(keras.layers.Conv1DTranspose(filters=16, kernel_size=kernelsize,activation="relu", padding="same"))
model.add(keras.layers.Dropout(rate=0.1))
model.add(keras.layers.Conv1DTranspose(filters=32, kernel_size=kernelsize,activation="relu", padding="same"))
model.add(keras.layers.Conv1DTranspose(filters=1, kernel_size=kernelsize, padding="same"))
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")
model.summary()
# training ...
history = model.fit(train2,train2,epochs=20,validation_split=.1)
plt.plot(history.history["loss"], label="Training Loss")
plt.plot(history.history["val_loss"], label="Validation Loss")
plt.legend()
plt.show()
# use the model to predict data
pred1 = model.predict(train1)
pred2 = model.predict(train2)
# plot prediction
plt.figure(1)
plt.plot(t1_norm[:-2],pred1[:,1,0],label='predict')
plt.plot(t1_norm[:-2],train1[:,0],'.',markersize=4,label='original')
plt.legend(loc=(0.01,0.01))
plt.figure(2)
plt.plot(t2_norm[:-2],pred2[:,0,0],label='predict')
plt.plot(t2_norm[:-2],train2[:,0],'.',markersize=4,label='original')
plt.legend(loc=(0.01,0.01))
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/subham1/sentence-transformers/blob/master/QuoraSentenceSimilarity.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install sentence_transformers
ls
cd '/content/drive/My Drive/sbert/sentence-transformers'
!pip install transformers
import torch
import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer
import scipy.spatial
from torch.utils.data import DataLoader
import time
pwd
df =pd.read_csv('/content/drive/My Drive/sbert/train.csv', header=None)
df[3] = df[3].astype(str)
df[4] = df[4].astype(str)
df.head()
from torch.utils.data import DataLoader
import math
from sentence_transformers import models, losses
from sentence_transformers import SentencesDataset, LoggingHandler, SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator
from sentence_transformers.readers import *
from sentence_transformers.readers.QuoraDataReader import QuoraDataReader
import logging
from datetime import datetime
pwd
from sentence_transformers.readers import InputExample
import csv
import gzip
import os
import pandas as pd
class QuoraDataReader:
"""
Reads in the STS dataset. Each line contains two sentences (s1_col_idx, s2_col_idx) and one label (score_col_idx)
"""
def __init__(self, dataset_folder, s1_col_idx=3, s2_col_idx=4, score_col_idx=5, delimiter="\t",
quoting=csv.QUOTE_NONE, normalize_scores=True, min_score=0, max_score=1):
self.dataset_folder = dataset_folder
self.score_col_idx = score_col_idx
self.s1_col_idx = s1_col_idx
self.s2_col_idx = s2_col_idx
self.delimiter = delimiter
self.quoting = quoting
self.normalize_scores = normalize_scores
self.min_score = min_score
self.max_score = max_score
def get_examples(self, filename, max_examples=0):
"""
filename specified which data split to use (train.csv, dev.csv, test.csv).
"""
data = csv.reader(open(os.path.join(self.dataset_folder, filename), encoding="utf-8"),
delimiter=self.delimiter, quoting=self.quoting)
df =pd.read_csv(os.path.join(self.dataset_folder, filename), header =None)
df[self.s1_col_idx] = df[self.s1_col_idx].astype(str)
df[self.s2_col_idx] = df[self.s2_col_idx].astype(str)
examples = []
for id,row in df.iterrows():
score =int(row[self.score_col_idx])
if self.normalize_scores: # Normalize to a 0...1 value
score = (score - self.min_score) / (self.max_score - self.min_score)
s1 = row[self.s1_col_idx]
s2 = row[self.s2_col_idx]
examples.append(InputExample(guid=filename+str(id), texts=[s1, s2], label=score))
if max_examples > 0 and len(examples) >= max_examples:
break
return examples
ls '/content'
quora_reader = QuoraDataReader('/')
model_name = 'bert-base-nli-mean-tokens'
train_batch_size = 16
num_epochs = 4
model_save_path = 'sentence-transformers-master/training_stsbenchmark_continue_training-'+model_name+'-'+datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
model = SentenceTransformer(model_name)
pwd
tmp1 =df[1].tolist()
tmp2 =df[2].tolist()
k =tmp1 + tmp2
tmp1 =df[3].tolist()
tmp2 =df[4].tolist()
v =tmp1 + tmp2
train_data = SentencesDataset(quora_reader.get_examples('/content/drive/My Drive/sbert/train.csv'), model)
from collections import OrderedDict
res = OrderedDict(zip(k, v))
res = OrderedDict(sorted(res.items(), key=lambda x: int(x[0])))
corpus =list(res.values())
len(corpus)
tic=time.time()
embeddings =model.encode(corpus)
toc=time.time()
print(toc- tic)
var =toc -tic
embeddings
import pickle
with open('embeddings.pk', 'wb') as f:
pickle.dump(mylist, f)
pwd
train_batch_size = 16
test_data = SentencesDataset(examples=sts_reader.get_examples('/content/drive/My Drive/sbert/train.csv'), model=model)
test_dataloader = DataLoader(test_data, shuffle=False, batch_size=train_batch_size)
evaluator = EmbeddingSimilarityEvaluator(test_dataloader)
model.evaluate(evaluator)
corpus = list(res.values())
import time
res.items()
len(corpus)
# tic=time.time()
emembeddings
embeddings[0].shape
corpus_embeddings= embeddings
queries = ['How to order ']
query_embeddings = model.encode(queries)
closest_n = 10
for query, query_embedding in zip(queries, query_embeddings):
distances = scipy.spatial.distance.cdist([query_embedding], corpus_embeddings, "cosine")[0]
results = zip(range(len(distances)), distances)
results = sorted(results, key=lambda x: x[1])
print("\n\n======================\n\n")
print("Query:", query)
print("\nTop 10 most similar sentences in corpus:")
for idx, distance in results[0:closest_n]:
print(corpus[idx].strip(), "(Score: %.4f)" % (1-distance))
a = df[3].tolist()
a = [model.encode(i) for i in a]
b=df[4].tolist()
a = [model.encode(i) for i in b]
a[:5]
b[:5]
!pip install sklearn
import sklearn
from sklearn.metrics.pairwise import cosine_similarity
sklearn.metrics.pairwise.cosine_similarity(a, b, dense_output=True)
x = corpus_embeddings[0].reshape(-1,1)
x.shape
```
|
github_jupyter
|
```
%matplotlib inline
```
# Text properties and layout
Controlling properties of text and its layout with Matplotlib.
The :class:`matplotlib.text.Text` instances have a variety of
properties which can be configured via keyword arguments to the text
commands (e.g., :func:`~matplotlib.pyplot.title`,
:func:`~matplotlib.pyplot.xlabel` and :func:`~matplotlib.pyplot.text`).
========================== ======================================================================================================================
Property Value Type
========================== ======================================================================================================================
alpha `float`
backgroundcolor any matplotlib :doc:`color </tutorials/colors/colors>`
bbox `~matplotlib.patches.Rectangle` prop dict plus key ``'pad'`` which is a pad in points
clip_box a matplotlib.transform.Bbox instance
clip_on bool
clip_path a `~matplotlib.path.Path` instance and a `~matplotlib.transforms.Transform` instance, a `~matplotlib.patches.Patch`
color any matplotlib :doc:`color </tutorials/colors/colors>`
family [ ``'serif'`` | ``'sans-serif'`` | ``'cursive'`` | ``'fantasy'`` | ``'monospace'`` ]
fontproperties a `~matplotlib.font_manager.FontProperties` instance
horizontalalignment or ha [ ``'center'`` | ``'right'`` | ``'left'`` ]
label any string
linespacing `float`
multialignment [``'left'`` | ``'right'`` | ``'center'`` ]
name or fontname string e.g., [``'Sans'`` | ``'Courier'`` | ``'Helvetica'`` ...]
picker [None|float|boolean|callable]
position (x, y)
rotation [ angle in degrees | ``'vertical'`` | ``'horizontal'`` ]
size or fontsize [ size in points | relative size, e.g., ``'smaller'``, ``'x-large'`` ]
style or fontstyle [ ``'normal'`` | ``'italic'`` | ``'oblique'`` ]
text string or anything printable with '%s' conversion
transform a `~matplotlib.transforms.Transform` instance
variant [ ``'normal'`` | ``'small-caps'`` ]
verticalalignment or va [ ``'center'`` | ``'top'`` | ``'bottom'`` | ``'baseline'`` ]
visible bool
weight or fontweight [ ``'normal'`` | ``'bold'`` | ``'heavy'`` | ``'light'`` | ``'ultrabold'`` | ``'ultralight'``]
x `float`
y `float`
zorder any number
========================== ======================================================================================================================
You can lay out text with the alignment arguments
``horizontalalignment``, ``verticalalignment``, and
``multialignment``. ``horizontalalignment`` controls whether the x
positional argument for the text indicates the left, center or right
side of the text bounding box. ``verticalalignment`` controls whether
the y positional argument for the text indicates the bottom, center or
top side of the text bounding box. ``multialignment``, for newline
separated strings only, controls whether the different lines are left,
center or right justified. Here is an example which uses the
:func:`~matplotlib.pyplot.text` command to show the various alignment
possibilities. The use of ``transform=ax.transAxes`` throughout the
code indicates that the coordinates are given relative to the axes
bounding box, with 0,0 being the lower left of the axes and 1,1 the
upper right.
```
import matplotlib.pyplot as plt
import matplotlib.patches as patches
# build a rectangle in axes coords
left, width = .25, .5
bottom, height = .25, .5
right = left + width
top = bottom + height
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
# axes coordinates are 0,0 is bottom left and 1,1 is upper right
p = patches.Rectangle(
(left, bottom), width, height,
fill=False, transform=ax.transAxes, clip_on=False
)
ax.add_patch(p)
ax.text(left, bottom, 'left top',
horizontalalignment='left',
verticalalignment='top',
transform=ax.transAxes)
ax.text(left, bottom, 'left bottom',
horizontalalignment='left',
verticalalignment='bottom',
transform=ax.transAxes)
ax.text(right, top, 'right bottom',
horizontalalignment='right',
verticalalignment='bottom',
transform=ax.transAxes)
ax.text(right, top, 'right top',
horizontalalignment='right',
verticalalignment='top',
transform=ax.transAxes)
ax.text(right, bottom, 'center top',
horizontalalignment='center',
verticalalignment='top',
transform=ax.transAxes)
ax.text(left, 0.5*(bottom+top), 'right center',
horizontalalignment='right',
verticalalignment='center',
rotation='vertical',
transform=ax.transAxes)
ax.text(left, 0.5*(bottom+top), 'left center',
horizontalalignment='left',
verticalalignment='center',
rotation='vertical',
transform=ax.transAxes)
ax.text(0.5*(left+right), 0.5*(bottom+top), 'middle',
horizontalalignment='center',
verticalalignment='center',
fontsize=20, color='red',
transform=ax.transAxes)
ax.text(right, 0.5*(bottom+top), 'centered',
horizontalalignment='center',
verticalalignment='center',
rotation='vertical',
transform=ax.transAxes)
ax.text(left, top, 'rotated\nwith newlines',
horizontalalignment='center',
verticalalignment='center',
rotation=45,
transform=ax.transAxes)
ax.set_axis_off()
plt.show()
```
# Default Font
The base default font is controlled by a set of rcParams. To set the font
for mathematical expressions, use the rcParams beginning with ``mathtext``
(see `mathtext <mathtext-fonts>`).
+---------------------+----------------------------------------------------+
| rcParam | usage |
+=====================+====================================================+
| ``'font.family'`` | List of either names of font or ``{'cursive', |
| | 'fantasy', 'monospace', 'sans', 'sans serif', |
| | 'sans-serif', 'serif'}``. |
| | |
+---------------------+----------------------------------------------------+
| ``'font.style'`` | The default style, ex ``'normal'``, |
| | ``'italic'``. |
| | |
+---------------------+----------------------------------------------------+
| ``'font.variant'`` | Default variant, ex ``'normal'``, ``'small-caps'`` |
| | (untested) |
+---------------------+----------------------------------------------------+
| ``'font.stretch'`` | Default stretch, ex ``'normal'``, ``'condensed'`` |
| | (incomplete) |
| | |
+---------------------+----------------------------------------------------+
| ``'font.weight'`` | Default weight. Either string or integer |
| | |
| | |
+---------------------+----------------------------------------------------+
| ``'font.size'`` | Default font size in points. Relative font sizes |
| | (``'large'``, ``'x-small'``) are computed against |
| | this size. |
+---------------------+----------------------------------------------------+
The mapping between the family aliases (``{'cursive', 'fantasy',
'monospace', 'sans', 'sans serif', 'sans-serif', 'serif'}``) and actual font names
is controlled by the following rcParams:
+------------------------------------------+--------------------------------+
| family alias | rcParam with mappings |
+==========================================+================================+
| ``'serif'`` | ``'font.serif'`` |
+------------------------------------------+--------------------------------+
| ``'monospace'`` | ``'font.monospace'`` |
+------------------------------------------+--------------------------------+
| ``'fantasy'`` | ``'font.fantasy'`` |
+------------------------------------------+--------------------------------+
| ``'cursive'`` | ``'font.cursive'`` |
+------------------------------------------+--------------------------------+
| ``{'sans', 'sans serif', 'sans-serif'}`` | ``'font.sans-serif'`` |
+------------------------------------------+--------------------------------+
which are lists of font names.
Text with non-latin glyphs
==========================
As of v2.0 the `default font <default_changes_font>` contains
glyphs for many western alphabets, but still does not cover all of the
glyphs that may be required by mpl users. For example, DejaVu has no
coverage of Chinese, Korean, or Japanese.
To set the default font to be one that supports the code points you
need, prepend the font name to ``'font.family'`` or the desired alias
lists ::
matplotlib.rcParams['font.sans-serif'] = ['Source Han Sans TW', 'sans-serif']
or set it in your :file:`.matplotlibrc` file::
font.sans-serif: Source Han Sans TW, Arial, sans-serif
To control the font used on per-artist basis use the ``'name'``,
``'fontname'`` or ``'fontproperties'`` kwargs documented :doc:`above
</tutorials/text/text_props>`.
On linux, `fc-list <https://linux.die.net/man/1/fc-list>`__ can be a
useful tool to discover the font name; for example ::
$ fc-list :lang=zh family
Noto to Sans Mono CJK TC,Noto Sans Mono CJK TC Bold
Noto Sans CJK TC,Noto Sans CJK TC Medium
Noto Sans CJK TC,Noto Sans CJK TC DemiLight
Noto Sans CJK KR,Noto Sans CJK KR Black
Noto Sans CJK TC,Noto Sans CJK TC Black
Noto Sans Mono CJK TC,Noto Sans Mono CJK TC Regular
Noto Sans CJK SC,Noto Sans CJK SC Light
lists all of the fonts that support Chinese.
|
github_jupyter
|
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Azure Machine Learning Pipeline with AutoMLStep
This notebook demonstrates the use of AutoMLStep in Azure Machine Learning Pipeline.
## Introduction
In this example we showcase how you can use AzureML Dataset to load data for AutoML via AML Pipeline.
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you have executed the [configuration](https://aka.ms/pl-config) before running this notebook.
In this notebook you will learn how to:
1. Create an `Experiment` in an existing `Workspace`.
2. Create or Attach existing AmlCompute to a workspace.
3. Define data loading in a `TabularDataset`.
4. Configure AutoML using `AutoMLConfig`.
5. Use AutoMLStep
6. Train the model using AmlCompute
7. Explore the results.
8. Test the best fitted model.
## Azure Machine Learning and Pipeline SDK-specific imports
```
import logging
import os
import csv
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import datasets
import pkg_resources
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
from azureml.core.dataset import Dataset
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
from azureml.train.automl.runtime import AutoMLStep
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
```
## Initialize Workspace
Initialize a workspace object from persisted configuration. Make sure the config file is present at .\config.json
```
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
```
## Create an Azure ML experiment
Let's create an experiment named "automl-classification" and a folder to hold the training scripts. The script runs will be recorded under the experiment in Azure.
The best practice is to use separate folders for scripts and its dependent files for each step and specify that folder as the `source_directory` for the step. This helps reduce the size of the snapshot created for the step (only the specific folder is snapshotted). Since changes in any files in the `source_directory` would trigger a re-upload of the snapshot, this helps keep the reuse of the step when there are no changes in the `source_directory` of the step.
```
# Choose a name for the run history container in the workspace.
experiment_name = 'automlstep-classification'
project_folder = './project'
experiment = Experiment(ws, experiment_name)
experiment
```
### Create or Attach an AmlCompute cluster
You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for your AutoML run. In this tutorial, you get the default `AmlCompute` as your training compute resource.
```
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 4)
# Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = 1, timeout_in_minutes = 10)
# For a more detailed view of current AmlCompute status, use get_status().
# create a new RunConfig object
conda_run_config = RunConfiguration(framework="python")
conda_run_config.environment.docker.enabled = True
conda_run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE
cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'],
conda_packages=['numpy', 'py-xgboost<=0.80'])
conda_run_config.environment.python.conda_dependencies = cd
print('run config is ready')
```
## Data
```
# The data referenced here was a 1MB simple random sample of the Chicago Crime data into a local temporary directory.
example_data = 'https://dprepdata.blob.core.windows.net/demo/crime0-random.csv'
dataset = Dataset.Tabular.from_delimited_files(example_data)
dataset.to_pandas_dataframe().describe()
dataset.take(5).to_pandas_dataframe()
```
### Review the Dataset Result
You can peek the result of a TabularDataset at any range using `skip(i)` and `take(j).to_pandas_dataframe()`. Doing so evaluates only `j` records for all the steps in the TabularDataset, which makes it fast even against large datasets.
`TabularDataset` objects are composed of a list of transformation steps (optional).
```
X = dataset.drop_columns(columns=['Primary Type', 'FBI Code'])
y = dataset.keep_columns(columns=['Primary Type'], validate=True)
print('X and y are ready!')
```
## Train
This creates a general AutoML settings object.
```
automl_settings = {
"iteration_timeout_minutes" : 5,
"iterations" : 2,
"primary_metric" : 'AUC_weighted',
"preprocess" : True,
"verbosity" : logging.INFO
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
path = project_folder,
compute_target=compute_target,
run_configuration=conda_run_config,
X = X,
y = y,
**automl_settings
)
```
You can define outputs for the AutoMLStep using TrainingOutput.
```
from azureml.pipeline.core import PipelineData, TrainingOutput
ds = ws.get_default_datastore()
metrics_output_name = 'metrics_output'
best_model_output_name = 'best_model_output'
metrics_data = PipelineData(name='metrics_data',
datastore=ds,
pipeline_output_name=metrics_output_name,
training_output=TrainingOutput(type='Metrics'))
model_data = PipelineData(name='model_data',
datastore=ds,
pipeline_output_name=best_model_output_name,
training_output=TrainingOutput(type='Model'))
```
Create an AutoMLStep.
```
automl_step = AutoMLStep(
name='automl_module',
automl_config=automl_config,
outputs=[metrics_data, model_data],
allow_reuse=True)
from azureml.pipeline.core import Pipeline
pipeline = Pipeline(
description="pipeline_with_automlstep",
workspace=ws,
steps=[automl_step])
pipeline_run = experiment.submit(pipeline)
from azureml.widgets import RunDetails
RunDetails(pipeline_run).show()
pipeline_run.wait_for_completion()
```
## Examine Results
### Retrieve the metrics of all child runs
Outputs of above run can be used as inputs of other steps in pipeline. In this tutorial, we will examine the outputs by retrieve output data and running some tests.
```
metrics_output = pipeline_run.get_pipeline_output(metrics_output_name)
num_file_downloaded = metrics_output.download('.', show_progress=True)
import json
with open(metrics_output._path_on_datastore) as f:
metrics_output_result = f.read()
deserialized_metrics_output = json.loads(metrics_output_result)
df = pd.DataFrame(deserialized_metrics_output)
df
```
### Retrieve the Best Model
```
best_model_output = pipeline_run.get_pipeline_output(best_model_output_name)
num_file_downloaded = best_model_output.download('.', show_progress=True)
import pickle
with open(best_model_output._path_on_datastore, "rb" ) as f:
best_model = pickle.load(f)
best_model
```
### Test the Model
#### Load Test Data
For the test data, it should have the same preparation step as the train data. Otherwise it might get failed at the preprocessing step.
```
dataset = Dataset.Tabular.from_delimited_files(path='https://dprepdata.blob.core.windows.net/demo/crime0-test.csv')
df_test = dataset_test.to_pandas_dataframe()
df_test = df_test[pd.notnull(df['Primary Type'])]
y_test = df_test[['Primary Type']]
X_test = df_test.drop(['Primary Type', 'FBI Code'], axis=1)
```
#### Testing Our Best Fitted Model
We will use confusion matrix to see how our model works.
```
from pandas_ml import ConfusionMatrix
ypred = best_model.predict(X_test)
cm = ConfusionMatrix(y_test['Primary Type'], ypred)
print(cm)
cm.plot()
```
|
github_jupyter
|
# Breast-Cancer Classification
```
#WOHOO already Version 2 I learned How to explore Data
```
# Library
```
# Import Dependencies
%matplotlib inline
# Start Python Imports
import math, time, random, datetime
# Data Manipulation
import numpy as np
import pandas as pd
# Visualization
import matplotlib.pyplot as plt
import missingno
import seaborn as sns
plt.style.use('seaborn-whitegrid')
# Preprocessing
from sklearn.preprocessing import OneHotEncoder, LabelEncoder, label_binarize
# Machine learning
import catboost
from sklearn.model_selection import train_test_split
from sklearn import model_selection, tree, preprocessing, metrics, linear_model
from sklearn.svm import LinearSVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LinearRegression, LogisticRegression, SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from catboost import CatBoostClassifier, Pool, cv
# Let's be rebels and ignore warnings for now
import warnings
warnings.filterwarnings('ignore')
```
# Exploring the dataset
```
dataset = pd.read_csv('data.csv')
dataset.drop('Unnamed: 32', inplace=True, axis=1)
dataset.head()
# Plot graphic of missing values
missingno.matrix(dataset, figsize = (30,10))
dataset.columns
print(dataset.shape)
dataset.describe()
dataset.isnull().sum()
X = dataset.iloc[:, 2:].values
y = dataset.iloc[:, 1:2].values
```
# spliting the dataset
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size= 0.2)
#categorical values
from sklearn.preprocessing import LabelEncoder
label_y = LabelEncoder()
y_train = label_y.fit_transform(y_train)
y_test = label_y.transform(y_test)
```
# Method 1
## Fitting the model and analysing
```
#fitting
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(n_jobs= -1)
classifier.fit(X_train, y_train)
#predicting
y_pred = classifier.predict(X_test)
#confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
# classification analysis
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
# k-fold cross vallidation
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator=classifier, X=X_train, y=y_train,cv= 10, n_jobs=-1)
print(accuracies.mean(), accuracies.std())
```
# Method 2
# Function that runs the requested algorithm and returns the accuracy metrics
```
def fit_ml_algo(algo, X_train, y_train, cv):
# One Pass
model = algo.fit(X_train, y_train)
acc = round(model.score(X_train, y_train) * 100, 2)
# Cross Validation
train_pred = model_selection.cross_val_predict(algo,
X_train,
y_train,
cv=cv,
n_jobs = -1)
# Cross-validation accuracy metric
acc_cv = round(metrics.accuracy_score(y_train, train_pred) * 100, 2)
return train_pred, acc, acc_cv
start_time = time.time()
train_pred_log, acc_log, acc_cv_log = fit_ml_algo(LogisticRegression(),
X_train,
y_train,
10)
log_time = (time.time() - start_time)
print("Accuracy: %s" % acc_log)
print("Accuracy CV 10-Fold: %s" % acc_cv_log)
print("Running Time: %s" % datetime.timedelta(seconds=log_time))
```
|
github_jupyter
|
# Exploring Random Forests
```
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.datasets import load_boston, load_iris, load_wine, load_digits, \
load_breast_cancer, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, precision_score, recall_score
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
from rfpimp import *
from distutils.version import LooseVersion
if LooseVersion(sklearn.__version__) >= LooseVersion("0.24"):
# In sklearn version 0.24, forest module changed to be private.
from sklearn.ensemble._forest import _generate_unsampled_indices
from sklearn.ensemble import _forest as forest
else:
# Before sklearn version 0.24, forest was public, supporting this.
from sklearn.ensemble.forest import _generate_unsampled_indices
from sklearn.ensemble import forest
from sklearn import tree
from dtreeviz.trees import *
def rent(n=None, bootstrap=False):
df_rent = pd.read_csv("data/rent-ideal.csv")
if n is None:
n = len(df_rent)
df_rent = df_rent.sample(n, replace=bootstrap)
X = df_rent[['bedrooms','bathrooms','latitude','longitude']]
y = df_rent['price']
return X, y
def boston():
boston = load_boston()
X = boston.data
y = boston.target
features = boston.feature_names
df = pd.DataFrame(data=X,columns=features)
df['y'] = y
return df
```
## Set up
Get the `rent-ideal.csv` data file from canvas "files area" and store in the data directory underneath your notebook directory.
```
X, y = rent()
X.head(3)
X.shape
```
## Train random forests of different sizes
As we increase the number of trees in the forest, we initially see model bias going down. It will asymptotically approach some minimum error on the testing set.
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)
```
Here's how to train a random forest that has a single tree:
```
rf = RandomForestRegressor(n_estimators=1)
rf.fit(X_train, y_train)
```
**Task**: Compute the MAE for the training and the testing set, printing them out.
```
mae_train = mean_absolute_error(...)
mae = mean_absolute_error(...)
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
```
<details>
<summary>Solution</summary>
<pre>
mae_train = mean_absolute_error(y_train, rf.predict(X_train))
mae = mean_absolute_error(y_test, rf.predict(X_test))
</pre>
</details>
**Task**: Run the training and testing cycle several times to see the variance: the test scores bounce around a lot.
**Task**: Increase the number of trees (`n_estimators`) to 2, retrain, and print out the results.
```
rf = ...
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
```
<details>
<summary>Solution</summary>
<pre>
rf = RandomForestRegressor(n_estimators=2)
rf.fit(X_train, y_train)
mae_train = mean_absolute_error(y_train, rf.predict(X_train))
mae = mean_absolute_error(y_test, rf.predict(X_test))
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
</pre>
</details>
You should notice the both test MAE scores going down and bouncing around less from run to run.
**Q.** Why does the MAE score go down?
<details>
<summary>Solution</summary>
With 2 trees, the chances are that the random forest will have seen (trained on) more of the original training set, despite bootstrapping.
</details>
**Task**: Increase the number of trees (`n_estimators`) to 10, retrain, and print out the results.
```
rf = ...
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
```
<details>
<summary>Solution</summary>
<pre>
rf = RandomForestRegressor(n_estimators=10)
rf.fit(X_train, y_train)
mae_train = mean_absolute_error(y_train, rf.predict(X_train))
mae = mean_absolute_error(y_test, rf.predict(X_test))
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
</pre>
</details>
**Q.** What you notice about the MAE scores?
<details>
<summary>Solution</summary>
They are getting smaller.
</details>
**Q.** After running several times, what else do you notice?
<details>
<summary>Solution</summary>
With 10 trees, the prediction from run to run varies a lot less. We have reduced variance, improving generality.
</details>
**Task**: Increase the number of trees (`n_estimators`) to 200, retrain, and print out the results.
```
rf = ...
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
```
<details>
<summary>Solution</summary>
<pre>
rf = RandomForestRegressor(n_estimators=200)
%time rf.fit(X_train, y_train) # how long does this take?
mae_train = mean_absolute_error(y_train, rf.predict(X_train))
mae = mean_absolute_error(y_test, rf.predict(X_test))
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
</pre>
</details>
**Q.** What you notice about the MAE scores from a single run?
<details>
<summary>Solution</summary>
They are a bit smaller, but not by much.
</details>
**Task**: Notice that it took a long time to train, about 10 seconds. Do the exact same thing again but this time use `n_jobs=-1` as an argument to the `RandomForestRegressor` constructor.
This tells the library to use all processing cores available on the computer processor. As long as the data is not too huge (because it must pass it around), it often goes much faster using this argument. It should take less than two seconds.
```
rf = ...
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
```
<details>
<summary>Solution</summary>
<pre>
rf = RandomForestRegressor(n_estimators=200, n_jobs=-1)
%time rf.fit(X_train, y_train)
mae_train = mean_absolute_error(y_train, rf.predict(X_train))
mae = mean_absolute_error(y_test, rf.predict(X_test))
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
</pre>
</details>
**Q.** What you notice about the MAE scores from SEVERAL runs?
<details>
<summary>Solution</summary>
The error variance across runs is even lower (tighter).
</details>
## Examining model size and complexity
The structure of a tree is affected by a number of hyper parameters, not just the data. Goal in the section is to see the effect of altering the number of samples per leaf and the maximum number of candidate features per split. Let's start out with a handy function that uses some support code from rfpimp to examine tree size and depth:
```
def showsize(ntrees, max_features=1.0, min_samples_leaf=1):
rf = RandomForestRegressor(n_estimators=ntrees,
max_features=max_features,
min_samples_leaf=min_samples_leaf,
n_jobs=-1)
rf.fit(X_train, y_train)
n = rfnnodes(rf) # from rfpimp
h = np.median(rfmaxdepths(rf)) # rfmaxdepths from rfpimp
mae_train = mean_absolute_error(y_train, rf.predict(X_train))
mae = mean_absolute_error(y_test, rf.predict(X_test))
print(f"MAE train {mae_train:6.1f}$, test {mae:6.1f}$ using {n:9,d} tree nodes with {h:2.0f} median tree height")
```
### Effect of number of trees
For a single tree, we see about 21,000 nodes and a tree height of around 35:
```
showsize(ntrees=1)
```
**Task**: Look at the metrics for 2 trees and then 100 trees.
<details>
<summary>Solution</summary>
<pre>
showsize(ntrees=2)
showsize(ntrees=100)
</pre>
</details>
**Q.** Why does the median height of a tree stay the same when we increase the number of trees?
<details>
<summary>Solution</summary>
While the number of nodes increases with the number of trees, the height of any individual tree will stay the same because we have not fundamentally changed how it is constructing a single tree.
</details>
### Effect of increasing min samples / leaf
**Task**: Loop around a call to `showsize()` with 10 trees and min_samples_leaf=1..10
```
for i in range(...):
print(f"{i:2d} ",end='')
showsize(...)
```
<details>
<summary>Solution</summary>
<pre>
for i in range(1,10+1):
showsize(ntrees=10, min_samples_leaf=i)
</pre>
</details>
**Q.** Why do the median height of a tree and number of total nodes decrease as we increase the number of samples per leaf?
<details>
<summary>Solution</summary>
Because when the sample size gets down to `min_samples_leaf`, splitting stops, which prevents the tree from getting taller. It also restricts how many nodes total get created for the tree.
</details>
**Q.** Why does the MAE error increase?
<details>
<summary>Solution</summary>
If we include more observations in a single leaf, then the average is taken over more samples. That average is a more general prediction but less accurate.
</details>
It's pretty clear from that print out that `min_samples_leaf=1` is the best choice because it gives the minimum validation error.
### Effect of reducing max_features (rent data)
**Task:** Do another loop from `max_features` = 4 down to 1, with 1 sample per leaf. (There are 4 total features.)
```
p = X_train.shape[1]
for i in range(...):
print(f"{i:2d} ",end='')
showsize(ntrees=10, ...)
```
<details>
<summary>Solution</summary>
<pre>
p = X_train.shape[1]
for i in range(p,0,-1):
print(f"{i:2d} ",end='')
showsize(ntrees=10, max_features=i)
</pre>
</details>
For this data set, changing the available candidate features that each split does not seem to be important as the validation error does not change, nor does the height of the trees.
### Examine effects of hyper parameters on Boston data set
```
df_boston = boston()
df_boston.head(3)
X, y = df_boston.drop('y', axis=1), df_boston['y']
y *= 1000 # y is "Median value of owner-occupied homes in $1000's" so multiply by 1000
# reproducible 20% test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=1)
```
Let's run the metric `showsize()` function to see how many trees we should use:
```
for i in [1,5,30,50,100,150,300]:
print(f"{i:3d} trees: ", end='')
showsize(ntrees=i)
```
Seems like the sweet spot on the validation error is probably 150 trees as it gets a low validation error and has a fairly small set of trees.
Check the effect of increasing the minimum samples per leaf from 1 to 10 as we did before.
```
for i in range(1,10+1):
print(f"{i:2d} ",end='')
showsize(ntrees=150, min_samples_leaf=i)
```
The training error goes up dramatically but the validation error doesn't get too much worse.
**Q.** Which min samples per leaf would you choose?
<details>
<summary>Solution</summary>
After running a few times, it seems that using <tt>min_samples_leaf</tt>=1 or 2 is best for the validation error. But, keep in mind that this data set is pretty small and so our error values will change quite a bit depending on the sample we get for the test set.
</details>
Run a loop from the maximum number of features down to 1 for `max_features` to see the effects.
```
p = X_train.shape[1]
for i in range(p,0,-1):
print(f"{i:2d} ",end='')
showsize(ntrees=150, max_features=i, min_samples_leaf=3)
```
**Q.** Which max features would you choose?
<details>
<summary>Solution</summary>
After running a few times, it seems that using <tt>max_features</tt>=7 or 13 gets best validation error, but again it depends on the randomness of the tree construction and results will vary across runs.
</details>
Here's what the final model would look like:
```
showsize(ntrees=150, max_features=13, min_samples_leaf=1)
```
## RF prediction confidence
A random forest is a collection of decision trees, each of which contributes a prediction. The forest averages those predictions to provide the overall prediction (or takes most common vote for classification). Let's dig inside the random forest to get the individual trees out and ask them what their predictions are.
**Task**: Train a random forest with 10 trees on `X_train`, `y_train`. Use `for t in rf.estimators_` to iterate through the trees making predictions with `t` not `rf`. Print out the usual MAE scores for each tree predictor.
```
rf = RandomForestRegressor(n_estimators=10, n_jobs=-1)
rf.fit(X_train, y_train)
for t in ...:
mae_train = ...
mae = ...
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
```
<details>
<summary>Solution</summary>
<pre>
rf = RandomForestRegressor(n_estimators=10, n_jobs=-1)
rf.fit(X_train, y_train)
for t in rf.estimators_:
mae_train = mean_absolute_error(y_train, t.predict(X_train))
mae = mean_absolute_error(y_test, t.predict(X_test))
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
</pre>
</details>
Notice that it bounces around quite a bit.
**Task**: Select one of the `X_test` rows and print out the addicted rent price.
```
x = ... # pick single test case
x = x.values.reshape(1,-1) # Needs to be a one-row matrix
print(f"{x} => {rf.predict(x)}$")
```
<details>
<summary>Solution</summary>
<pre>
x = X_test.iloc[3,:] # pick single test case
x = x.values.reshape(1,-1)
print(f"{x} => {rf.predict(x)}$")
</pre>
</details>
**Task**: Now let's see how the forest came to that conclusion. Compute the average of the predictions obtained from every tree.
Compare that to the prediction obtained directly from the random forest (`rf.predict(X_test)`). They should be the same.
```
y_pred = ...
print(f"{x} => {y_pred}$")
```
<details>
<summary>Solution</summary>
<pre>
y_pred = np.mean([t.predict(x) for t in rf.estimators_])
print(f"{x} => {y_pred}$")
</pre>
</details>
**Task**: Compute the standard deviation of the tree estimates and print that out.
<details>
<summary>Solution</summary>
<pre>
np.std([t.predict(x) for t in rf.estimators_])
</pre>
</details>
The lower the standard deviation, the more tightly grouped the predictions were, which means we should have more confidence in our answer.
Different records will often have different standard deviations, which means we could have different levels of confidence in the various answers. This might be helpful to a bank for example that wanted to not only predict whether to give loans, but how confident the model was.
## Altering bootstrap size
**This no longer works with latest versions of scikit-learn... and the feature is not yet implemented by them* See [related github issue](https://github.com/scikit-learn/scikit-learn/issues/11993). Ah [this new features](https://github.com/scikit-learn/scikit-learn/pull/14682) covers it for trees. "Adds a max_samples kwarg to forest ensembles that limits the size of the bootstrap samples used to train each estimator."
```
X, y = rent()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)
```
**Task**: There are about 38,000 training records, change that to 19,000 and check the accuracy again.
```
rf = RandomForestRegressor(n_estimators=200) # don't compute in parallel so we can see timing
%time rf.fit(X_train, y_train)
mae_train = mean_absolute_error(y_train, rf.predict(X_train))
mae = mean_absolute_error(y_test, rf.predict(X_test))
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
rf = RandomForestRegressor(n_estimators=200, max_samples=1/2)
%time rf.fit(X_train, y_train)
mae_train = mean_absolute_error(y_train, rf.predict(X_train))
mae = mean_absolute_error(y_test, rf.predict(X_test))
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
```
It's a bit less accurate, but it's faster.
**Q.** Why is it less accurate?
<details>
<summary>Solution</summary>
Each tree is seeing less of the data set during training.
</details>
**Task**: Turn off bootstrapping by adding `bootstrap=False` to the constructor of the model. This means that it will subsample rather than bootstrap. Remember that bootstrapping gets about two thirds of the data because of replacement.
```
rf = ...
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
```
<details>
<summary>Solution</summary>
<pre>
rf = RandomForestRegressor(n_estimators=200, n_jobs=-1, bootstrap=False)
%time rf.fit(X_train, y_train)
mae_train = mean_absolute_error(y_train, rf.predict(X_train))
mae = mean_absolute_error(y_test, rf.predict(X_test))
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
</pre>
</details>
That brings the accuracy back up a little bit for the test set but very much so for the training MAE score.
**Task**: Drop that size to one third of the training records then retrain and test.
```
rf = RandomForestRegressor(n_estimators=200, max_samples=1/3, n_jobs=-1)
%time rf.fit(X_train, y_train)
mae_train = mean_absolute_error(y_train, rf.predict(X_train))
mae = mean_absolute_error(y_test, rf.predict(X_test))
print(f"MAE train {mae_train:.1f}$, test {mae:.1f}$")
```
Mine is twice as fast as the full bootstrap but continues to have very tight variance because of the number of trees. The accuracy is lower, however, about what we get for the usual random forest with two trees.
|
github_jupyter
|
```
# Mount Google Drive
from google.colab import drive # import drive from google colab
ROOT = "/content/drive" # default location for the drive
print(ROOT) # print content of ROOT (Optional)
drive.mount(ROOT) # we mount the google drive at /content/drive
!pip install pennylane
from IPython.display import clear_output
clear_output()
import os
def restart_runtime():
os.kill(os.getpid(), 9)
restart_runtime()
# %matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy as np
```
# Loading Raw Data
```
import tensorflow as tf
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[:, 0:27, 0:27]
x_test = x_test[:, 0:27, 0:27]
x_train_flatten = x_train.reshape(x_train.shape[0], x_train.shape[1]*x_train.shape[2])/255.0
x_test_flatten = x_test.reshape(x_test.shape[0], x_test.shape[1]*x_test.shape[2])/255.0
print(x_train_flatten.shape, y_train.shape)
print(x_test_flatten.shape, y_test.shape)
x_train_0 = x_train_flatten[y_train == 0]
x_train_1 = x_train_flatten[y_train == 1]
x_train_2 = x_train_flatten[y_train == 2]
x_train_3 = x_train_flatten[y_train == 3]
x_train_4 = x_train_flatten[y_train == 4]
x_train_5 = x_train_flatten[y_train == 5]
x_train_6 = x_train_flatten[y_train == 6]
x_train_7 = x_train_flatten[y_train == 7]
x_train_8 = x_train_flatten[y_train == 8]
x_train_9 = x_train_flatten[y_train == 9]
x_train_list = [x_train_0, x_train_1, x_train_2, x_train_3, x_train_4, x_train_5, x_train_6, x_train_7, x_train_8, x_train_9]
print(x_train_0.shape)
print(x_train_1.shape)
print(x_train_2.shape)
print(x_train_3.shape)
print(x_train_4.shape)
print(x_train_5.shape)
print(x_train_6.shape)
print(x_train_7.shape)
print(x_train_8.shape)
print(x_train_9.shape)
x_test_0 = x_test_flatten[y_test == 0]
x_test_1 = x_test_flatten[y_test == 1]
x_test_2 = x_test_flatten[y_test == 2]
x_test_3 = x_test_flatten[y_test == 3]
x_test_4 = x_test_flatten[y_test == 4]
x_test_5 = x_test_flatten[y_test == 5]
x_test_6 = x_test_flatten[y_test == 6]
x_test_7 = x_test_flatten[y_test == 7]
x_test_8 = x_test_flatten[y_test == 8]
x_test_9 = x_test_flatten[y_test == 9]
x_test_list = [x_test_0, x_test_1, x_test_2, x_test_3, x_test_4, x_test_5, x_test_6, x_test_7, x_test_8, x_test_9]
print(x_test_0.shape)
print(x_test_1.shape)
print(x_test_2.shape)
print(x_test_3.shape)
print(x_test_4.shape)
print(x_test_5.shape)
print(x_test_6.shape)
print(x_test_7.shape)
print(x_test_8.shape)
print(x_test_9.shape)
```
# Selecting the dataset
Output: X_train, Y_train, X_test, Y_test
```
X_train = np.concatenate((x_train_list[0][:200, :], x_train_list[1][:200, :]), axis=0)
Y_train = np.zeros((X_train.shape[0],), dtype=int)
Y_train[200:] += 1
X_train.shape, Y_train.shape
X_test = np.concatenate((x_test_list[0][:500, :], x_test_list[1][:500, :]), axis=0)
Y_test = np.zeros((X_test.shape[0],), dtype=int)
Y_test[500:] += 1
X_test.shape, Y_test.shape
```
# Dataset Preprocessing
```
X_train = X_train.reshape(X_train.shape[0], 27, 27, 1)
X_test = X_test.reshape(X_test.shape[0], 27, 27, 1)
X_train.shape, X_test.shape
```
# Quantum
```
import pennylane as qml
from pennylane import numpy as np
from pennylane.optimize import AdamOptimizer, GradientDescentOptimizer
qml.enable_tape()
from tensorflow.keras.utils import to_categorical
# Set a random seed
np.random.seed(2020)
# Define output labels as quantum state vectors
def density_matrix(state):
"""Calculates the density matrix representation of a state.
Args:
state (array[complex]): array representing a quantum state vector
Returns:
dm: (array[complex]): array representing the density matrix
"""
return state * np.conj(state).T
label_0 = [[1], [0]]
label_1 = [[0], [1]]
state_labels = [label_0, label_1]
n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)
@qml.qnode(dev)
def qcircuit(params, inputs):
"""A variational quantum circuit representing the DRC.
Args:
params (array[float]): array of parameters
inputs = [x, y]
x (array[float]): 1-d input vector
y (array[float]): single output state density matrix
Returns:
float: fidelity between output state and input
"""
# layer iteration
for l in range(len(params[0])):
# qubit iteration
for q in range(n_qubits):
# gate iteration
for g in range(int(len(inputs)/3)):
qml.Rot(*(params[0][l][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][l][3*g:3*(g+1)]), wires=q)
return [qml.expval(qml.Hermitian(density_matrix(state_labels[i]), wires=[i])) for i in range(n_qubits)]
class class_weights(tf.keras.layers.Layer):
def __init__(self):
super(class_weights, self).__init__()
w_init = tf.random_normal_initializer()
self.w = tf.Variable(
initial_value=w_init(shape=(1, 2), dtype="float32"),
trainable=True,
)
def call(self, inputs):
return (inputs * self.w)
X = tf.keras.Input(shape=(27,27,1))
conv_layer_1 = tf.keras.layers.Conv2D(filters=1, kernel_size=[3,3], strides=[2,2], name='Conv_Layer_1')(X)
conv_layer_2 = tf.keras.layers.Conv2D(filters=1, kernel_size=[3,3], strides=[2,2], name='Conv_Layer_2')(conv_layer_1)
max__pool_layer = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, name='Max_Pool_Layer')(conv_layer_2)
reshapor_layer = tf.keras.layers.Reshape((9,), name='Reshapor_Layer')(max__pool_layer)
qlayer = qml.qnn.KerasLayer(qcircuit, {"params": (2, 1, 9)}, output_dim=2, name='Quantum_Layer')(reshapor_layer)
class_weights_layer = class_weights()(qlayer)
model = tf.keras.Model(inputs=X, outputs=class_weights_layer, name='Conv DRC')
model(X_train[0:32])
model.summary()
opt = tf.keras.optimizers.Adam(learning_rate=0.1)
model.compile(opt, loss="mse", metrics=["accuracy"])
model.fit(X_train, to_categorical(Y_train), epochs=6, batch_size=32, validation_data=(X_test, to_categorical(Y_test)), verbose=1)
predict_test = model.predict(X_test)
```
|
github_jupyter
|
## Dependencies
```
# !pip install --quiet efficientnet
!pip install --quiet image-classifiers
import warnings, json, re, glob, math
from scripts_step_lr_schedulers import *
from melanoma_utility_scripts import *
from kaggle_datasets import KaggleDatasets
from sklearn.model_selection import KFold
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras import optimizers, layers, metrics, losses, Model
# import efficientnet.tfkeras as efn
from classification_models.tfkeras import Classifiers
import tensorflow_addons as tfa
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
```
## TPU configuration
```
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
```
# Model parameters
```
dataset_path = 'melanoma-256x256'
config = {
"HEIGHT": 256,
"WIDTH": 256,
"CHANNELS": 3,
"BATCH_SIZE": 64,
"EPOCHS": 20,
"LEARNING_RATE": 3e-4,
"ES_PATIENCE": 5,
"N_FOLDS": 5,
"BASE_MODEL_PATH": 'imagenet',
"DATASET_PATH": dataset_path
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
config
```
# Load data
```
database_base_path = '/kaggle/input/siim-isic-melanoma-classification/'
k_fold = pd.read_csv(database_base_path + 'train.csv')
test = pd.read_csv(database_base_path + 'test.csv')
print('Train samples: %d' % len(k_fold))
display(k_fold.head())
print(f'Test samples: {len(test)}')
display(test.head())
GCS_PATH = KaggleDatasets().get_gcs_path(dataset_path)
TRAINING_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/train*.tfrec')
TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/test*.tfrec')
```
# Augmentations
```
def data_augment(image, label):
p_spatial = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_spatial2 = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_rotate = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_crop = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
### Spatial-level transforms
if p_spatial >= .2: # flips
image['input_image'] = tf.image.random_flip_left_right(image['input_image'])
image['input_image'] = tf.image.random_flip_up_down(image['input_image'])
if p_spatial >= .7:
image['input_image'] = tf.image.transpose(image['input_image'])
if p_rotate >= .8: # rotate 270º
image['input_image'] = tf.image.rot90(image['input_image'], k=3)
elif p_rotate >= .6: # rotate 180º
image['input_image'] = tf.image.rot90(image['input_image'], k=2)
elif p_rotate >= .4: # rotate 90º
image['input_image'] = tf.image.rot90(image['input_image'], k=1)
if p_spatial2 >= .7: # random rotation range 0º to 45º
image['input_image'] = transform_rotation(image['input_image'], config['HEIGHT'])
if p_crop >= .6: # crops
if p_crop >= .95:
image['input_image'] = tf.image.random_crop(image['input_image'], size=[int(config['HEIGHT']*.7), int(config['WIDTH']*.7), config['CHANNELS']])
elif p_crop >= .85:
image['input_image'] = tf.image.random_crop(image['input_image'], size=[int(config['HEIGHT']*.8), int(config['WIDTH']*.8), config['CHANNELS']])
elif p_crop >= .7:
image['input_image'] = tf.image.random_crop(image['input_image'], size=[int(config['HEIGHT']*.9), int(config['WIDTH']*.9), config['CHANNELS']])
else:
image['input_image'] = tf.image.central_crop(image['input_image'], central_fraction=.6)
image['input_image'] = tf.image.resize(image['input_image'], size=[config['HEIGHT'], config['WIDTH']])
return image, label
```
## Auxiliary functions
```
# Datasets utility functions
def read_labeled_tfrecord(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, LABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
label = tf.cast(example['target'], tf.float32)
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
data['diagnosis'] = tf.cast(tf.one_hot(example['diagnosis'], 10), tf.int32)
return {'input_image': image, 'input_meta': data}, label # returns a dataset of (image, data, label)
def read_labeled_tfrecord_eval(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, LABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
label = tf.cast(example['target'], tf.float32)
image_name = example['image_name']
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
data['diagnosis'] = tf.cast(tf.one_hot(example['diagnosis'], 10), tf.int32)
return {'input_image': image, 'input_meta': data}, label, image_name # returns a dataset of (image, data, label, image_name)
def load_dataset(filenames, ordered=False, buffer_size=-1):
ignore_order = tf.data.Options()
if not ordered:
ignore_order.experimental_deterministic = False # disable order, increase speed
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.with_options(ignore_order) # uses data as soon as it streams in, rather than in its original order
dataset = dataset.map(read_labeled_tfrecord, num_parallel_calls=buffer_size)
return dataset # returns a dataset of (image, data, label)
def load_dataset_eval(filenames, buffer_size=-1):
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.map(read_labeled_tfrecord_eval, num_parallel_calls=buffer_size)
return dataset # returns a dataset of (image, data, label, image_name)
def get_training_dataset(filenames, batch_size, buffer_size=-1):
dataset = load_dataset(filenames, ordered=False, buffer_size=buffer_size)
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.repeat() # the training dataset must repeat for several epochs
dataset = dataset.shuffle(2048)
dataset = dataset.batch(batch_size, drop_remainder=True) # slighly faster with fixed tensor sizes
dataset = dataset.prefetch(buffer_size) # prefetch next batch while training (autotune prefetch buffer size)
return dataset
def get_validation_dataset(filenames, ordered=True, repeated=False, batch_size=32, buffer_size=-1):
dataset = load_dataset(filenames, ordered=ordered, buffer_size=buffer_size)
if repeated:
dataset = dataset.repeat()
dataset = dataset.shuffle(2048)
dataset = dataset.batch(batch_size, drop_remainder=repeated)
dataset = dataset.prefetch(buffer_size)
return dataset
def get_eval_dataset(filenames, batch_size=32, buffer_size=-1):
dataset = load_dataset_eval(filenames, buffer_size=buffer_size)
dataset = dataset.batch(batch_size, drop_remainder=False)
dataset = dataset.prefetch(buffer_size)
return dataset
# Test function
def read_unlabeled_tfrecord(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, UNLABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
image_name = example['image_name']
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
return {'input_image': image, 'input_tabular': data}, image_name # returns a dataset of (image, data, image_name)
def load_dataset_test(filenames, buffer_size=-1):
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.map(read_unlabeled_tfrecord, num_parallel_calls=buffer_size)
# returns a dataset of (image, data, label, image_name) pairs if labeled=True or (image, data, image_name) pairs if labeled=False
return dataset
def get_test_dataset(filenames, batch_size=32, buffer_size=-1):
dataset = load_dataset_test(filenames, buffer_size=buffer_size)
dataset = dataset.batch(batch_size, drop_remainder=False)
dataset = dataset.prefetch(buffer_size)
return dataset
# Advanced augmentations
def transform_rotation(image, height):
# input image - is one image of size [dim,dim,3] not a batch of [b,dim,dim,3]
# output - image randomly rotated
DIM = height
XDIM = DIM%2 #fix for size 331
rotation = 45. * tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
# CONVERT DEGREES TO RADIANS
rotation = math.pi * rotation / 180.
# ROTATION MATRIX
c1 = tf.math.cos(rotation)
s1 = tf.math.sin(rotation)
one = tf.constant([1] ,dtype='float32')
zero = tf.constant([0], dtype='float32')
rotation_matrix = tf.reshape( tf.concat([c1,s1,zero, -s1,c1,zero, zero,zero,one],axis=0), [3, 3] )
# LIST DESTINATION PIXEL INDICES
x = tf.repeat( tf.range(DIM//2,-DIM//2,-1), DIM )
y = tf.tile( tf.range(-DIM//2,DIM//2),[DIM] )
z = tf.ones([DIM*DIM],dtype='int32')
idx = tf.stack( [x,y,z] )
# ROTATE DESTINATION PIXELS ONTO ORIGIN PIXELS
idx2 = K.dot(rotation_matrix,tf.cast(idx,dtype='float32'))
idx2 = K.cast(idx2,dtype='int32')
idx2 = K.clip(idx2,-DIM//2+XDIM+1,DIM//2)
# FIND ORIGIN PIXEL VALUES
idx3 = tf.stack( [DIM//2-idx2[0,], DIM//2-1+idx2[1,]] )
d = tf.gather_nd(image, tf.transpose(idx3))
return tf.reshape(d,[DIM, DIM, 3])
```
## Learning rate scheduler
```
lr_min = 1e-6
lr_start = 0
lr_max = config['LEARNING_RATE']
step_size = 26880 // config['BATCH_SIZE'] #(len(k_fold[k_fold[f'fold_{fold_n}'] == 'train']) * 2) // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * step_size
hold_max_steps = 0
warmup_steps = step_size * 5
num_cycles = 5
rng = [i for i in range(0, total_steps, config['BATCH_SIZE'])]
y = [cosine_with_hard_restarts_schedule_with_warmup(tf.cast(x, tf.float32), total_steps=total_steps,
warmup_steps=warmup_steps, lr_start=lr_start,
lr_max=lr_max, lr_min=lr_min, num_cycles=num_cycles) for x in rng]
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
```
# Model
```
def model_fn(input_shape):
input_image = L.Input(shape=input_shape, name='input_image')
BaseModel, preprocess_input = Classifiers.get('resnet18')
base_model = BaseModel(input_shape=input_shape,
weights=config['BASE_MODEL_PATH'],
include_top=False)
x = base_model(input_image)
x = L.GlobalAveragePooling2D()(x)
output = L.Dense(1, activation='sigmoid')(x)
model = Model(inputs=input_image, outputs=output)
return model
```
# Training
```
eval_dataset = get_eval_dataset(TRAINING_FILENAMES, batch_size=config['BATCH_SIZE'], buffer_size=AUTO)
image_names = next(iter(eval_dataset.unbatch().map(lambda data, label, image_name: image_name).batch(len(k_fold)))).numpy().astype('U')
image_data = eval_dataset.map(lambda data, label, image_name: data)
history_list = []
kfold = KFold(config['N_FOLDS'], shuffle=True, random_state=SEED)
for n_fold, (trn_idx, val_idx) in enumerate(kfold.split(TRAINING_FILENAMES)):
n_fold +=1
print('\nFOLD: %d' % (n_fold))
# tf.tpu.experimental.initialize_tpu_system(tpu)
K.clear_session()
### Data
train_filenames = np.array(TRAINING_FILENAMES)[trn_idx]
valid_filenames = np.array(TRAINING_FILENAMES)[val_idx]
train_size = count_data_items(train_filenames)
step_size = train_size // config['BATCH_SIZE']
# Train model
model_path = f'model_fold_{n_fold}.h5'
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min',
save_best_only=True, save_weights_only=True)
with strategy.scope():
model = model_fn((config['HEIGHT'], config['WIDTH'], config['CHANNELS']))
lr = lambda: cosine_with_hard_restarts_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
total_steps=total_steps, warmup_steps=warmup_steps,
lr_start=lr_start, lr_max=lr_max, lr_min=lr_min,
num_cycles=num_cycles)
optimizer = optimizers.Adam(learning_rate=lr)
model.compile(optimizer, loss=losses.BinaryCrossentropy(label_smoothing=0.05),
metrics=[metrics.AUC()])
history = model.fit(get_training_dataset(train_filenames, batch_size=config['BATCH_SIZE'], buffer_size=AUTO),
validation_data=get_validation_dataset(valid_filenames, ordered=True, repeated=False,
batch_size=config['BATCH_SIZE'], buffer_size=AUTO),
epochs=config['EPOCHS'],
steps_per_epoch=step_size,
callbacks=[checkpoint, es],
verbose=2).history
history_list.append(history)
# Make predictions
preds = model.predict(image_data)
name_preds = dict(zip(image_names, preds.reshape(len(preds))))
k_fold[f'pred_fold_{n_fold}'] = k_fold.apply(lambda x: name_preds[x['image_name']], axis=1)
valid_filenames = np.array(TRAINING_FILENAMES)[val_idx]
valid_dataset = get_eval_dataset(valid_filenames, batch_size=config['BATCH_SIZE'], buffer_size=AUTO)
valid_image_names = next(iter(valid_dataset.unbatch().map(lambda data, label, image_name: image_name).batch(count_data_items(valid_filenames)))).numpy().astype('U')
k_fold[f'fold_{n_fold}'] = k_fold.apply(lambda x: 'validation' if x['image_name'] in valid_image_names else 'train', axis=1)
```
## Model loss graph
```
for n_fold in range(config['N_FOLDS']):
print(f'Fold: {n_fold + 1}')
plot_metrics(history_list[n_fold])
```
## Model loss graph aggregated
```
plot_metrics_agg(history_list, config['N_FOLDS'])
```
# Model evaluation
```
display(evaluate_model(k_fold, config['N_FOLDS']).style.applymap(color_map))
```
# Model evaluation by Subset
```
display(evaluate_model_Subset(k_fold, config['N_FOLDS']).style.applymap(color_map))
```
# Confusion matrix
```
for n_fold in range(config['N_FOLDS']):
n_fold += 1
pred_col = f'pred_fold_{n_fold}'
train_set = k_fold[k_fold[f'fold_{n_fold}'] == 'train']
valid_set = k_fold[k_fold[f'fold_{n_fold}'] == 'validation']
print(f'Fold: {n_fold}')
plot_confusion_matrix(train_set['target'], np.round(train_set[pred_col]),
valid_set['target'], np.round(valid_set[pred_col]))
```
# Visualize predictions
```
k_fold['pred'] = 0
for n_fold in range(config['N_FOLDS']):
k_fold['pred'] += k_fold[f'pred_fold_{n_fold+1}'] / config['N_FOLDS']
print('Top 10 samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].query('target == 1').head(10))
print('Top 10 predicted positive samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].query('pred > .5').head(10))
print('Label/prediction distribution')
print(f"Train positive labels: {len(k_fold[k_fold['target'] > .5])}")
print(f"Train positive predictions: {len(k_fold[k_fold['pred'] > .5])}")
print(f"Train positive correct predictions: {len(k_fold[(k_fold['target'] > .5) & (k_fold['pred'] > .5)])}")
```
# Make predictions
```
model_path_list = glob.glob('/kaggle/working/' + '*.h5')
n_models = len(model_path_list)
model_path_list.sort()
print(f'{n_models} Models to predict:')
print(*model_path_list, sep='\n')
test_dataset = get_test_dataset(TEST_FILENAMES, batch_size=config['BATCH_SIZE'], buffer_size=AUTO)
NUM_TEST_IMAGES = len(test)
test_preds = np.zeros((NUM_TEST_IMAGES, 1))
for model_path in model_path_list:
# tf.tpu.experimental.initialize_tpu_system(tpu)
K.clear_session()
print(model_path)
model = model_fn((config['HEIGHT'], config['WIDTH'], config['CHANNELS']))
model.load_weights(model_path)
test_preds += model.predict(test_dataset) / n_models
image_names = next(iter(test_dataset.unbatch().map(lambda data, image_name: image_name).batch(NUM_TEST_IMAGES))).numpy().astype('U')
name_preds = dict(zip(image_names, test_preds.reshape(len(test_preds))))
test['target'] = test.apply(lambda x: name_preds[x['image_name']], axis=1)
```
# Visualize test predictions
```
print(f"Test predictions {len(test[test['target'] > .5])}|{len(test[test['target'] <= .5])}")
print('Top 10 samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge','target'] +
[c for c in test.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target'] +
[c for c in test.columns if (c.startswith('pred_fold'))]].query('target > .5').head(10))
```
# Test set predictions
```
submission = pd.read_csv(database_base_path + 'sample_submission.csv')
submission['target'] = test['target']
submission.to_csv('submission.csv', index=False)
display(submission.head(10))
display(submission.describe())
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/ricardorocha86/Fundamentos-de-Python-para-ML/blob/main/Fundamentos_de_Python_para_Data_Science.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# **Fundamentos de Python para Data Science**

Essa é uma introdução bem rápida aos conceitos fundamentais de programação em Python. Se familiarizar com tais conceitos é um primeiro passo suficiente para fazer seus primeiros programas e também entender outros códigos feitos em Python. No entanto, não se engane, o universe de Python é gigante e existe muita coisa interessante para se conhecer ainda!
Após terminar essa introdução, não deixe de se desafiar nos exercícios propostos! São importantíssimos para criar manejo e intimidade com a linguagem.
## **Conteúdo desse notebook:**
1. [Atribuição de Variáveis](#atri)
2. [Primeiras Funções](#prim)
3. [Operadores Aritméticos](#oper)
4. [Comparações e Booleanos](#comp)
5. [Condicional IF-ELSE](#cond)
6. [Definição de Funções](#defi)
6. [Importação de Bibliotecas](#impo)
6. [A Biblioteca Numpy](#nump)
6. [Listas](#lists)
6. [Métodos de Listas](#metl) Iteradores FOR e WHILE
6. [Métodos de Strings](#mets)
6. [Funções Importantes](#func)
6. [Iteradores FOR e WHILE](#iter)
6. [Projeto: Álbum de Figurinhas](#proj)
7. [Exercícios](#exer)
9. [Links Úteis](#links)
10. [Anexo: Zen of Python](#anex)
## **Atribuição de Variáveis** <a name="atri"></a>
```
meu_nome = 'Ricardo'
meu_sobrenome = 'Rocha'
meu_produto = "Caixa d'agua"
minha_frase = '"Vamos que vamos!"'
print(meu_nome)
print(meu_sobrenome)
print(meu_produto)
print(minha_frase)
```
Não é possível atribuir variáveis com nomes começando com números, nem com espaços no nome ou com aspas.
OBS.: O **hashtag #** na frente de uma linha torna ela um **comentário**, pois o interpretador ignora tudo que vem depois dela
## **Primeiras Funções** <a name="prim"></a>
Conheça a função **len**, que retorna o tamanho de uma string (e de outros tipos de objetos também).
A função **type**, que retorna o tipo do objeto na variável de entrada
A função **round**, que arredonda um número de acordo com as casas decimas desejadas
E a função **help**, que retorna a documentação da função inserida como entrada.
```
nome = 'Ricardo'
len(nome)
```
Não só no Python, mas em programação em geral, sempre precisamos controlar o tipo das variáveis que utilizamos. Veja o exemplo abaixo:
```
minha_idade = '33'
type(minha_idade)
```
Usando a função **int** para converter a variável em formato string para o formato inteiro
```
minha_idade = int(minha_idade)
type(minha_idade)
```
## **Operadores Aritméticos** <a name="oper"></a>
Confira na tabela abaixo como funcionam os operadores aritméticos em Python.
Em especial, veja que a sintaxe da exponenciação não é feita através do sinal '^'.
| Operator | Name | Description |
|--------------|----------------|--------------------------------------------------------|
| ``a + b`` | Adiçao | Soma entre ``a`` e ``b`` |
| ``a - b`` | Subtração | Diferença entre ``a`` e ``b`` |
| ``a * b`` | Multiplicação | Produto entre ``a`` e ``b`` |
| ``a / b`` | Divisão | Divisão usual entre ``a`` e ``b`` |
| ``a // b`` | Divisão inteira | A divisão entre ``a`` e ``b``, removendo as partes decimais |
| ``a % b`` | Resto da divisão | O resto da divisão inteira de ``a`` por ``b`` |
| ``a ** b`` | Exponenciação | ``a`` elevado a ``b`` |
| ``-a`` | Negação | O negativo de ``a`` |
```
print(a + b)
print(a - b)
print(a * b)
print(a / b)
print(a // b)
print(a % b)
print(a ** b)
print(-a)
print(max(a, b))
print(min(a, b))
print(abs(-a))
```
## **Comparações e Booleanos** <a name="comp"></a>
Quando comparamos objetos em Python é como se estivéssemos fazendo uma pergunta a ele. Duas variáveis são iguais? Uma é maior que a outra?
E a resposta é um objeto do tipo **booleano**, que é indicado por **True** ou **False**, indicando, respectivamente, se a resposta é verdadeira ou falsa.
```
var1 = 35
var2 = 36
var1 == var2
print(type(33.2) == str)
print(type(33.2) == int)
print(type(33.2) == float)
True and True
True or False
not False
```
## **Condicional IF-ELSE** <a name="cond"></a>
Veja abaixo como funciona a estrutura geral de condicionais no Python.
Temos a versão IF ELSE, e a versão IF ELIF ELSE.
Note que é possível utilizar o ELIF várias vezes se necessário.
No espaço de condição, o Python espera um objeto booleano. Se for True, ele executa, se for False, segue adiante.
Note também a estrutura de identação do Python. Aqui não usamos (), [] ou {} para estipular o que deve ser executado nas condicionais. Simplesmente colocamos o código que deve ser executado na linha seguinte, mas identado. Em geral, se utiliza o espaço de **1 tab**.
```
# sintaxe IF-ELSE geral
if condição:
executa aqui
else:
executa aqui
# sintaxe IF-ELIF-ELSE geral
if condição:
executa aqui
elif outra_condição:
executa aqui
else:
executa aqui
#não executar essa célula!
a, b, c = 1, -2 , 0
delta = b**2 - 4*a*c
if delta > 0:
x1 = (-b + delta**0.5)/(2*a)
x2 = (-b - delta**0.5)/(2*a)
print('As raízes são {} e {}'.format(x1, x2))
elif delta == 0:
x = -b/(2*a)
print('A única raíz é {}'.format(x))
else:
print('Não há soluções reais para essa equação')
```
## **Definição de Funções** <a name="defi"></a>
Uma função é um recurso para não precisarmos repetir o mesmo código desnecessariamente. Assim como as funções da matemática, sua tarefa é transformar um input, uma entrada, uma coleção de variáveis, em uma saída, um output.
No exemplo abaixo, utilizamos como variáveis o tipo de símbolo e o tamanho da linha que queremos fazer. Você entenderá melhor vendo o exemplo:
Da mesma maneira que na estrutura de condicionais, o conteúdo de uma função fica nas linhas seguintes sendo indicados pela **indentação** do código
```
def Mensagem(simbolo = '*', tamanho = 51):
print(simbolo*tamanho)
print('Estamos só começando, o melhor ainda estar por vir!')
print(simbolo*tamanho)
Mensagem('-', 51)
```
Voltando no exemplo da fórmula de Bhaskara, fica bem conveniente quando colocamos no formato de função, para analisar as saídas de maneira mais prática.
```
def Bhaskara(a, b, c):
# dando uma resposta adequada para quando o valor de a for igual a zero
if a == 0:
print('Quando a = 0 não se aplica Bhaskara')
else:
delta = b**2 - 4*a*c
if delta > 0:
x1 = (-b + delta**0.5)/(2*a)
x2 = (-b - delta**0.5)/(2*a)
print('As raízes são {} e {}'.format(round(x1, 2), round(x2, 2)))
elif delta == 0:
x = -b/(2*a)
print('A única raíz é {}'.format(x))
else:
print('Não há soluções reais para essa equação')
Bhaskara(3, 2, 0.9)
```
## **Importação de Bibliotecas** <a name="impo"></a>
```
from math import pi
print(pi)
import math
print(math.pi)
import math as m
print(m.pi)
```
## **A Biblioteca Numpy** <a name="nump"></a>
Numpy é a biblioteca mais popular do python para se trabalhar com arrays.
Arrays são conjuntos de valores, que podem ser n-dimensionais.
Geralmente, um número apenas, um ponto, é um array de dimensão 0
Uma lista de valores é um array de dimensão 1.
Uma matriz de valores é uma array de dimensão 2.
Um tensor de valores é um array de dimensão 3.
```
import numpy as np
a0 = np.array(1)
a1 = np.array([1, 2])
a2 = np.array([[1, 2], [3, 4]])
a3 = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
print('Array a0:\n', a0)
print('Array a1:\n', a1)
print('Array a2:\n', a2)
print('Array a3:\n', a3)
```
Use o atributo ndim para verificar a dimensão dos arrays numpy
```
print('Dimensão de a0:', a0.ndim)
print('Dimensão de a1:', a1.ndim)
print('Dimensão de a2:', a2.ndim)
print('Dimensão de a3:', a3.ndim)
```
Transforme uma lista em um array
```
lista = [1, 2, 3, 4]
array = np.array(lista)
```
Arrays são feitos para se trabalhar com álgebra linear. Veja a diferença quando multiplicamos cada elemento por um escalar
```
print(2*lista)
print(2*array)
```
Similarmente, podemos realizar as demais operações com arrays
```
print(array + array)
print(array * 2)
print(array / 3)
print(array ** 2)
print(array // 2)
print(array % 2)
```
## **Listas** <a name="lists"></a>
Um objeto do tipo lista é um dos mais importantes do universo Python. Sua versatilidade permite a construção e organização simples de diversas funcionalidades. **Uma lista é definida por conchetes [ ]**. Vejamos alguns exemplos:
```
minha_lista = [1, 1, 2, 3, 5, 8, 13]
print(minha_lista)
len(minha_lista)
```
**Importante**: A indexação no Python, assim como em diversas linguagens de programação, começam com índice 0
Isso quer dizer que o primeiro elemento está na entrada 0 da lista, o segundo elemento na entrada 1, e assim por diante.
Nós acessamos os elementos de uma lista também utilizando **colchetes**
```
minha_lista[0]
minha_lista[len(minha_lista) - 1]
```
Uma lista permite armazenar todos os tipos de objetos do Python, veja no exemplo abaixo:
```
lista = ['Python', 1996, True, len, [1,2,'Ricardo']]
print(lista)
lista[0] + ' é muito massa'
lista[1] + 1337
lista[2] and True
lista[3](lista)
```
Podemos acessar caracteres de listas através de colchetes também. No python, strings são como listas de caracteres:
```
'Python'[0]
py = 'Python'
py[0] + py[5]
```
E podemos acessar listas dentro de listas
```
lista[4]
lista[4][2]
lista[4][2][0]
lista[0]
lista[0][0]
```
Alguns comandos úteis para acessar elementos em listas:
```
frase = 'Python é excelente para análise de dados'
frase[:6] #retorna todos os elementos até o indice 6 da lista (o indice 6 (sétimo elemento) não está incluido)
frase[6:] #retorna todos os elementos após o indice 6 da lista
```
No Python, o padrão é trabalhar com **intervalos semi-abertos [ , )**, como no exemplo abaixo. O índice 9 está incluso na seleção, mas não o 18.
```
frase[9:18] # retorna o elemento de indice 9 até o 18, mas não inclui o 18
```
Para inverter a ordem de uma lista, utilize
```
frase[::-1]
```
Para tomar elementos de 2 em 2, utilize
```
frase[::2]
```
Combine os comandos da maneira que lhe for conveniente
```
frase[::-1][::3]
```
## **Métodos de Listas** <a name="metl"></a>
**Métodos são como funções que são aplicáveis a um certo tipo de objetos.** Nesse caso, veremos os métodos para os objetos do tipo lista. Métodos são inerentes ao Python pelo contexto de programação orientada a objetos, que veremos com mais detalhes adiante nesse curso
```
lista = ['A', 'B']
lista.append('C')
lista
lista.pop()
lista.count('B')
lista.index('B')
```
## **Métodos de Strings** <a name="mets"></a>
```
nome = 'Ricardo Rocha'
nome.lower()
nome.upper()
nome.capitalize()
nome.split(' ')
```
## **Funções Importantes** <a name="func"></a>
O comando **input** serve para se comunicar com o usuário enquanto se executa um programa. Veja o exemplo abaixo
```
x = input('Digite o seu nome: ')
print(x)
y = input('Digite a sua idade: ')
print('\nA idade declarada foi {}'.format(y))
```
Mas note que os inputs são sempre no format string. Se quisermos usá-lo numericamente, temos que converter as entradas adequadamanente. Em geral usamos a função **int** para inteiros ou **float** para números reais. Você pode usar a função float para números que são inteiros também.
```
y = int(input('Digite a sua idade '))
print('\nSe você já fez aniversário esse ano, seu ano de nascimento foi {}.'.format(2020 - y))
```
---
A função **range** cria um objeto capaz de gerar listas de valores entre dois números. Ela não gera a lista em si, mas um gerador de lista pra quando ela for utilizada. **Ela trabalha com intervalos semi-abertos do tipo [ , )**. Veja os exemplos.
```
range(4)
```
Para listar os valores da função range, utilize o comando **list**
```
list(range(4))
```
Use-a com dois parâmetros para definir o começo e o fim da lista. Note que o primeiro valor é incluido, mas o último não, pois o intervalo dela é semi-aberto
```
list(range(3, 10))
```
Você ainda pode usar um terceiro parâmetro, que é o passo da lista. Ele deve ser inteiro e representa o salto de um número para o outro.
```
list(range(2, 21, 2))
list(range(10, 101, 10))
```
A função **list** também pode ser utilizada para quebrar uma string em caracteres dentro de uma lista, veja:
```
list('Python')
```
## **Iteradores FOR e WHILE** <a name="iter"></a>
Utilizamos a estrutura de repetição FOR sempre que desejamos repetir um certo pedaço de código, com alterações ou não, por um número pre-determinado de vezes.
Veja os exemplos
```
for variante in lista_de_variações:
codigo a ser repetido
#esquema geral do FOR, não executar esse bloco
for i in ['banana', 'mamão', 'abacate']:
print('Eu gosto de {}'.format(i))
for i in range(10):
print(i*'*')
```
Problema: Se eu tomar dois números inteiros no intervalo [1, 10], qual a probabilidade aproximada da soma dos números ser maior que 10?
```
from random import randint
lista = []
replicas = 10000
for i in range(replicas):
if randint(1,10) + randint (1,10) > 10:
lista.append(True)
else:
lista.append(False)
prob = sum(lista)/replicas
print('A probabilidade aproximada é {}%'.format(100*prob))
```
---
O iterador WHILE usamos quando queremos repetir código sem saber de antemão quando este deve ser interrompido. Ele será finalizado quando alguma condição for satisfeita.
Por isso, tome cuidado em não escrever um WHILE infinito. Você terá que cancelar a execução do código para pará-lo.
Veja a estrutura geral
```
while condicao_verdadeira:
codigo repetido
if condicao:
break
#esquema geral do FOR, não executar esse bloco
i = 1
while i <= 10:
print(i * '*')
i += 1
```
Código equivalente utilizando o comando break para interromper o loop
```
i = 1
while True:
print(i * '*')
i += 1
if i == 10:
break
```
Exemplo de um **while** infinito
```
while True:
print('Python é legal demais!')
```
O **while** também aceita o comando **else** no após o seu fim. Algo para executar após o break acontecer
```
i = 1
while i <= 10:
print(i * '*')
i += 1
else:
print('*')
```
## ***Projeto: simulação de um álbum de figurinhas*** <a name="proj"></a>
### EXEMPLO: **Álbum Premier League 2019-2020**
1. Total de cromos: **636**
2. Preço do livro ilustrado capa brochura: **R\$ 8,90**
3. Envelope com 5 cromos: **R\$ 2,50**
### SUPOSIÇÕES
1. Todas as figurinhas tem igual probabilidade de serem sorteradas
2. Um pacotinho é comprado por vez
### ALGORITMO
1. Comprar um pacotinho de figurinhas (5 figurinhas cada, que podem ser repetidas);
2. Colar no álbum e verificar se o álbum está completo;
3. Caso esteja incompleto, comprar mais um pacotinho, caso contrário, terminar.
### PERGUNTAS
1. Qual o valor médio investido para completar o álbum nessas condições?
2. Quantos pacotes são necessários comprar, em média, para completar o álbum?
3. Qual é a distribuição empírica do valor investido para completar o álbum?
```
n_album = 636
preco_pacote = 2.50
preco_album = 8.90
simulacoes = 1000
import numpy as np
# representação do álbum
album = np.zeros(n_album)
# representação do pacote de figurinhas
pacotinho = np.random.choice(range(n_album), 5)
pacotinho
# 'colando' as figurinhas obtidas no álbum
for i in pacotinho:
album[i] += 1
# comprando figurinhas até completar o álbum
def SimulaAlbum():
album = np.zeros(n_album)
pacotes = 0
while not np.all(album > 0):
pacotinho = np.random.choice(range(n_album), 5)
pacotes += 1
for i in pacotinho:
album[i] += 1
valor_gasto = preco_album + preco_pacote * pacotes
return valor_gasto, pacotes
SimulaAlbum()
valores = []
for i in range(simulacoes):
valores.append(SimulaAlbum()[0])
if (i+1) % 50 == 0:
print('Simulação: ', i+1, '/', simulacoes)
```
As respostas das perguntas 1 e 2, respectivamente, são:
```
print('O valor médio gasto foi:', round(np.mean(valores), 2))
print('O numero de pacotes médio foi:', round((np.mean(valores) - preco_album)/preco_pacote, 2))
```
Podemos visualizar a distribuição empírica do valor gasto através do **histograma** dos valores simulados
```
import matplotlib.pyplot as plt
plt.hist(valores, bins = 20, density = True, edgecolor = 'black')
plt.title('Distribuição Empírica do Valor Gasto para Completar o Álbum')
plt.show()
```
# **Exercícios: Python para Data Science** <a name="exer"></a>
Os exercícios abaixo foram pensados para que iniciantes desenvolvam manejo e intimidade programando em python, utilizando os recursos fundamentais da linguagem.
Os dois primeiros exercícios são mais simples. O exercício 3 já é um pouco mais elaborado e pode ser resolvido de várias formas. E o último exercício é apenas um complemento do projeto desenvolvido em aula sobre o álbum de figurinhas. Foram feitas mais perguntas para serem respondidas com o experimento.
## **Exercício 1**
Considere um balde cuja base possui raio $r_1$ e altura igual ao diâmetro da base. Considere também uma esfera de raio $r_2$ cheia de água. Faça um programa que verifique se o volume da esfera cabe no balde, dados os valores de $r_1$ e $r_2$.
## **Exercício 2**
Crie uma função que simule o jogo do *jokempô*, isto é, dada a entrada de dois jogadores, retorne a indicação de qual deles venceu.
## **Exercício 3**
Faça um programa que simule uma **slot machine**. Uma slot machine é uma máquina muito comum em cassinos. A pessoa puxa uma alavanca e aparecem na tela 3 símbolos aleatoriamente, de uma lista com diversos deles. Se os símbolos forem iguais, então a pessoa ganha. A pessoa entrar com um tanto escolhido de fichas, e joga até que acabe. Quando o programa terminar, uma mensagem resumindo os totais que ela ganhou deve ser exibida.
## **Exercício 4**
Considere o contexto do projeto do álbum de figurinhas e responda as perguntas adicionais:
1. Quantas vezes saiu a figurinha mais repetida, em média?
2. Em média, quantas figurinhas não se repetem ao completar o álbum?
3. Qual a probabilidade de se gastar mais que R\$3000,00 para completar o álbum?
4. Qual a probabilidade de se gastar menos que R\$1500,00 para completar o álbum?
5. Qual a probabilidade de se gastar mais do que a média para completar o álbum?
6. Qual é o intervalo de confiança de 95% para o gasto ao se completar o álbum?
7. Qual o valor médio gasto caso se esteja completando o álbum com mais um amigo?
8. Quanto se economiza ao utilizar o cenário da questão 7?
9. Qual o valor médio gasto caso se esteja completando o álbum com mais dois amigos?
10. Quanto se economiza ao utilizar o cenário da questão 9?
## **Links Úteis** <a name="links"></a>
1. [Documentação do Python](https://docs.python.org/3/)
2. [Download do Anaconda](https://anaconda.org/)
3. [Curso do Gustavo Guanabara no YouTube (Canal Curso em Vídeo)](https://www.youtube.com/watch?v=S9uPNppGsGo&list=PLvE-ZAFRgX8hnECDn1v9HNTI71veL3oW0)
4. [Curso de Python gratuito do Kaggle (inglês)](https://www.kaggle.com/learn/python)
5. [Conceitos de Python em 40min por Derek Banas (inglês)](https://www.youtube.com/watch?v=N4mEzFDjqtA)
## **Anexo** <a name="anex"></a>
### **The Zen of Python, por Tim Peters**
É um conjunto de regras pela qual o Python é idealizado.
1. Bonito é melhor que feio.
2. Explícito é melhor que implícito.
3. Simples é melhor que complexo.
4. Complexo é melhor que complicado.
5. Plano é melhor que aglomerado.
6. Esparso é melhor que denso.
7. Legibilidade faz diferença.
8. Casos especiais não são especiais o bastante para quebrar as regras.
9. Embora a praticidade vença a pureza.
10. Erros nunca devem passar silenciosamente.
11. A menos que sejam explicitamente silenciados.
12. Diante da ambigüidade, recuse a tentação de adivinhar.
13. Deve haver um -- e preferencialmente só um -- modo óbvio para fazer algo.
14. Embora esse modo possa não ser óbvio à primeira vista a menos que você seja holandês.
15. Agora é melhor que nunca.
16. Embora nunca freqüentemente seja melhor que *exatamente* agora.
17. Se a implementação é difícil de explicar, é uma má idéia.
18. Se a implementação é fácil de explicar, pode ser uma boa idéia.
19. Namespaces são uma grande idéia -- vamos fazer mais dessas!
|
github_jupyter
|
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_07_1_gan_intro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 7: Generative Adversarial Networks**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 7 Material
* **Part 7.1: Introduction to GANS for Image and Data Generation** [[Video]](https://www.youtube.com/watch?v=0QnCH6tlZgc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_07_1_gan_intro.ipynb)
* Part 7.2: Implementing a GAN in Keras [[Video]](https://www.youtube.com/watch?v=T-MCludVNn4&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_07_2_Keras_gan.ipynb)
* Part 7.3: Face Generation with StyleGAN and Python [[Video]](https://www.youtube.com/watch?v=s1UQPK2KoBY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_07_3_style_gan.ipynb)
* Part 7.4: GANS for Semi-Supervised Learning in Keras [[Video]](https://www.youtube.com/watch?v=ZPewmEu7644&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_07_4_gan_semi_supervised.ipynb)
* Part 7.5: An Overview of GAN Research [[Video]](https://www.youtube.com/watch?v=cvCvZKvlvq4&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_07_5_gan_research.ipynb)
# Part 7.1: Introduction to GANS for Image and Data Generation
A generative adversarial network (GAN) is a class of machine learning systems invented by Ian Goodfellow in 2014. [[Cite:goodfellow2014generative]](https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf) Two neural networks contest with each other in a game. Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proven useful for semi-supervised learning, fully supervised learning, and reinforcement learning.
This paper used neural networks to automatically generate images for several datasets that we've seen previously: MINST and CIFAR. However, it also included the Toronto Face Dataset (a private dataset used by some researchers). These generated images are given in Figure 7.GANS.
**Figure 7.GANS: GAN Generated Images**

Only sub-figure D made use of convolutional neural networks. Figures A-C make use of fully connected neural networks. As we will see in this module, the role of convolutional neural networks with GANs was greatly increased.
A GAN is called a generative model because it generates new data. The overall process of a GAN is given by the following diagram in Figure 7.GAN-FLOW.
**Figure 7.GAN-FLOW: GAN Structure**

|
github_jupyter
|
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Import-libraries" data-toc-modified-id="Import-libraries-1"><span class="toc-item-num">1 </span>Import libraries</a></span></li><li><span><a href="#Understanding-MAGI-output" data-toc-modified-id="Understanding-MAGI-output-2"><span class="toc-item-num">2 </span>Understanding MAGI output</a></span></li><li><span><a href="#My-approach-to-magi_results.csv" data-toc-modified-id="My-approach-to-magi_results.csv-3"><span class="toc-item-num">3 </span>My approach to magi_results.csv</a></span></li><li><span><a href="#Get-MetaCyc-compound-ids" data-toc-modified-id="Get-MetaCyc-compound-ids-4"><span class="toc-item-num">4 </span>Get MetaCyc compound ids</a></span></li></ul></div>
# Import libraries
```
import os
import pandas as pd
import numpy as np
```
# Understanding MAGI output
Explanation of [MAGI output](https://magi.nersc.gov/help/#outputs), and what to do with [magi_results.csv](https://magi.nersc.gov/tutorial/#step4-4)
- `compound_score` == if user doesn't provide a score, this will be 1.0
- `level` describes how far into the chemical similarity network MAGI went to connect the compound to the gene
- a value of 1 == used chemical network similarity
- a value of 0 == chemical network was not used
- no value == no compound associated with the gene
- `homology_score` reciprocal homology score between the reaction in `database_id_r2g` and the gene.
- a value of 400 is the maximum possible homology score, which means that both compound and genes were connected to the reaction with perfect homology
- `reciprocal_score` is a direct representation of whether or not the reaction-to-gene and gene-to-reaction homology searches converged on the same reactions or not, and it is determined by using the top BLAST result
- a value of 2 == reciprocal agreement
- a value of 1 == reciprocal closeness because the E-score was close enough
- a value of 0.1 == only one of the two searches resulted in a reaction
- a value of 0.01 == no reciprocal agreement
- `database_id_g2r` == reactions that a gene product can catalyze, which mean a gene-centric view
- `database_id_r2g` == reactions that associated with both gene and compound, which means a compound-centric view
There are three approaches to explore results:
1. Stich biochemical pathways. Filter the table to only show rows pertaining to a compound, look at the `database_id_r2g` for all the reactions, and `gene_id` for a list of genes associated with that compound
2. Curating a metabolic model:
1. To see all the compounds a gene was connected to: filter the table to only show one gene, and look at the `database_id_g2r`
2. To see all the evidence for a particular biochemical reaction: filter the `database_id_g2r` or `database_id_r2g` to only show one reaction, and see all the compounds of that reaction were observed as well as genes associated with that reaction
------------
# My approach to magi_results.csv
My approach will be to:
1. filter based on `MAGI_score` > 1, `homology_score` == 400, `reciprocal_score` > 1, and `note` column == direct
2. filter based on gene-compound direct associations (when the `neighbor` column is empty), and indirect association (when `neighbor` column is not empty and note == direct)
```
df = pd.read_csv('magi_output/magi_results.csv')
df = df.sort_values('MAGI_score', ascending=False).reset_index(drop=True)
print(df.shape)
df['neighbor'] = df['neighbor'].replace(np.nan, '')
df.head()
# filtering
subset_df = df[(df['MAGI_score'] >= 2) & # (df['homology_score'] == 400) &
(df['reciprocal_score'] >= 1) &
(df['note'] == 'direct')].reset_index(drop=True)
# getting the feature label from magi_output/magi_compound_result.csv
ft = pd.read_csv('magi_output/magi_compound_results.csv')
ft = ft[['original_compound', 'label']]
# merge
subset_df = subset_df.merge(ft, on='original_compound')
# direct association
subset_direct = subset_df[subset_df['neighbor'] == ""].reset_index(drop=True)
print(subset_direct.shape)
subset_direct.to_csv('magi_results_direct.csv', index=False)
# indirect association
subset_indirect = subset_df[subset_df['neighbor']!=""].reset_index(drop=True)
print(subset_indirect.shape)
# subset_indirect.to_csv('magi_results_indirect.csv', index = False)
```
# Get MetaCyc compound ids
```
metacyc = pd.read_csv("metacyc/All_compounds_of_MetaCyc.txt", sep = '\t')
metacyc['InChI-Key'] = metacyc['InChI-Key'].str[:-2]
metacyc
direct_compounds = subset_direct[['original_compound',
'label']].drop_duplicates(
['original_compound',
'label']).reset_index(drop=True)
direct_compounds['original_compound'] = "InChIKey=" + direct_compounds[
'original_compound'].str[:-2]
direct_compounds = direct_compounds.merge(metacyc,
left_on='original_compound',
right_on='InChI-Key')
direct_compounds.drop('original_compound', axis=1, inplace=True)
direct_compounds.to_csv('magi_compounds_direct.csv', sep='\t', index=False)
direct_compounds
```
Imported this into MetaCyc as a SmartTable.
In MetaCyc's SmartTable, I added structure, reactions that consume and produce such compounds to be used in the discussion. I then, exported the SmartTable as frame_ids (which retains the compound and reaction identifiers) and common names (with common names of everything).
|
github_jupyter
|
# **Solving the Definition Extraction Problem**
### **Approach 3: Using Doc2Vec model and Classifiers.**
**Doc2Vec** is a Model that represents each Document as a Vector. The goal of Doc2Vec is to create a numeric representation of a document, regardless of its length. So, the input of texts per document can be various while the output is fixed-length vectors.
Design of Doc2Vec is based on Word2Vec. But unlike words, documents do not come in logical structures such as words, so the another method has to be found. There are two implementations:
1. Paragraph Vector - Distributed Memory (PV-DM)
2. Paragraph Vector - Distributed Bag of Words (PV-DBOW)
**PV-DM** is analogous to Word2Vec continous bag of word CBOW. But instead of using just words to predict the next word, add another feature vector, which is document-unique. So, when training the word vectors W, the document vector D is trained as well, and in the end of training, it holds a numeric representation of the document.

**PV-DBOW** is analogous to Word2Vec skip gram. Instead of predicting next word, it use a document vector to classify entire words in the document.

Not: it's recommend to use a combination of both algorithms to infer the vector representation of a document.
```
from google.colab import drive
drive.mount('/content/drive')
!unzip 'drive/My Drive/wikipedia-movie-plots.zip'
import os
import nltk
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from data_loader import DeftCorpusLoader
from sklearn.naive_bayes import GaussianNB
from sklearn import tree
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
```
### **Load Doc2Vec Model Trainning Data**
```
# Load amazon review reports of movies.
with open('wiki_movie_plots_deduped.csv') as data:
corpus_list = pd.read_csv(data, sep=",", header = None)
corpus_list = corpus_list[7].tolist()[1:]
print("Corpus legnth: ", len(corpus_list))
stop_words = set(stopwords.words('english'))
porter = PorterStemmer()
qoutes_list = ["``", "\"\"", "''"]
train_corpus = []
for i, sentence in enumerate(corpus_list):
# Lower all the letters in the sentence
tokens = word_tokenize(sentence.lower())
processed_tokens = []
for j, token in enumerate(tokens):
if not token.isdigit():
if token not in stop_words and len(token) > 1 and token not in qoutes_list:
# Convert each sentence from amazon reviews to list of words that doesn't include
# stop words or any special letters or digits
processed_tokens.append(porter.stem(token))
train_corpus.append(TaggedDocument(words=processed_tokens, tags=[str(i)]))
train_corpus[:5]
```
### **Train Doc2Vec Model Based on Amazon Reviews.**
First we will define the attributes of Doc2Vec model:
* **Vector Size:** Dimensionality of the documents feature vector.
* **Min Count:** Ignores all words with total frequency lower than this.
* **Epochs:** Number of iterations (epochs) over the corpus.
* **Workers:** Use these many worker threads to train the model (faster training with multicore machines).
Second build the **Vocabulary** based on the training corpus (processed amazon reviews). Finally train the model on the training corpus.
Note: the default used algorithm is PV-DM.
```
model = Doc2Vec(vector_size=300, min_count=2, epochs=40, workers=8)
model.build_vocab(train_corpus)
model.train(train_corpus, total_examples=model.corpus_count, epochs=model.epochs)
```
### **Load DeftEval Trainning & Dev Data**
Note: as the code is executed on google colab, the path of the data is rooted from the drive. So, the path of the data need to be change if the code will be executed on the local machine.
```
deft_loader = DeftCorpusLoader("drive/My Drive/DeftEval/deft_corpus/data")
trainframe, devframe = deft_loader.load_classification_data()
deft_loader.preprocess_data(devframe)
deft_loader.clean_data(devframe)
dev_vectors = []
# Create test data vectors from Doc2Vec model
for parsed_list in devframe["Parsed"]:
dev_vectors.append(model.infer_vector(parsed_list))
deft_loader.preprocess_data(trainframe)
deft_loader.clean_data(trainframe)
train_vectors=[]
# Create training data vectors from Doc2Vec model
for parsed_list in trainframe["Parsed"]:
train_vectors.append(model.infer_vector(parsed_list))
```
### **Apply Classifiers Algorithms**
For each classifier test, **F1-score** and **Accuracy** are calculated.
**1. Naive Bayes Algorithm**
```
gnb = GaussianNB()
test_predict = gnb.fit(train_vectors, trainframe['HasDef']).predict(dev_vectors)
print(metrics.classification_report(list(devframe["HasDef"]), test_predict))
```
**2. Decision Tree Algorithm**
```
decision_tree = tree.DecisionTreeClassifier(class_weight="balanced")
test_predict = decision_tree.fit(train_vectors, trainframe['HasDef']).predict(dev_vectors)
print(metrics.classification_report(list(devframe["HasDef"]), test_predict))
```
**3. Logistic Regression Algorithm**
```
test_predict = LogisticRegression(class_weight="balanced", random_state=0).fit(train_vectors, trainframe['HasDef']).predict(dev_vectors)
print(metrics.classification_report(list(devframe["HasDef"]), test_predict))
```
|
github_jupyter
|
# PA005: High Value Customer Identification (Insiders)
# 0.0. Imports
```
from sklearn import cluster as c
from sklearn import metrics as m
from sklearn import preprocessing as pp
from sklearn import decomposition as dd
from sqlalchemy import create_engine
import pandas as pd
import numpy as np
import seaborn as sns
import re
import boto3
from sklearn.manifold import TSNE
from matplotlib import pyplot as plt
```
## 0.2. Helper Functions
```
def num_attributes(df1):
num_attributes = df1.select_dtypes(['int64', 'float64'])
#central tendency
ct1 = pd.DataFrame(num_attributes.apply(np.mean)).T
ct2 = pd.DataFrame(num_attributes.apply(np.median)).T
#dispersion
d1 = pd.DataFrame(num_attributes.apply(np.min)).T
d2 = pd.DataFrame(num_attributes.apply(np.max)).T
d3 = pd.DataFrame(num_attributes.apply(lambda x: x.max() - x.min())).T
d4 = pd.DataFrame(num_attributes.apply(np.std)).T
d5 = pd.DataFrame(num_attributes.apply(lambda x: x.skew())).T
d6 = pd.DataFrame(num_attributes.apply(lambda x: x.kurtosis())).T
m = pd.concat( [d1, d2, d3, ct1, ct2, d4, d5, d6] ).T.reset_index()
m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std','skew', 'kurtosis']
return m
```
## 0.3. Load Data
```
s3 = boto3.resource(
service_name='s3',
region_name='us-east-2',
aws_access_key_id='AKIAU4UCEADPNRXISEJV',
aws_secret_access_key='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'
)
# # path_s3='s3://insiders-dataset-heitor'
# # df_raw = pd.read_csv(path_s3 + 'Ecommerce.csv')
# for bucket in s3.buckets.all():
# print(bucket.name)
obj = s3.Bucket('insiders-dataset-heitor').Object('Ecommerce.csv').get()
df_raw = pd.read_csv(obj['Body'], index_col=0)
df_raw.head()
```
# 1.0. Data Description
```
df1 = df_raw.reset_index().copy()
```
## 1.1. Rename Columns
```
df1.columns = ['invoice_no', 'stock_code', 'description', 'quantity', 'invoice_date',
'unit_price', 'customer_id', 'country']
```
## 1.2. Data Shape
```
print(f'Number of rows: {df1.shape[0]}')
print(f'Number of columns: {df1.shape[1]}')
```
## 1.3. Data Types
```
df1.dtypes
```
## 1.4. Check NAs
```
df1.isna().sum()
```
## 1.5. Fill NAs
```
#remove na
df_missing = df1[df1['customer_id'].isna()]
df_not_missing = df1[-df1['customer_id'].isna()]
len(df_missing)
len(df_not_missing)
#create reference
df_backup = pd.DataFrame(df_missing['invoice_no'].drop_duplicates())
df_backup['customer_id'] = np.arange(19000, 19000+len(df_backup),1)
#merge
df1 = pd.merge(df1, df_backup, on='invoice_no', how='left')
#coalesce
df1['customer_id'] = df1['customer_id_x'].combine_first(df1['customer_id_y'])
#drop extra columns
df1 = df1.drop(['customer_id_x', 'customer_id_y'], axis=1)
df1.isna().sum()
```
## 1.6. Change dtypes
```
df1.dtypes
#invoice_no
# df1['invoice_no'] = df1['invoice_no'].astype(int)
#stock_code
# df1['stock_code'] = df1['stock_code'].astype(int)
#invoice_date --> Month --> b
df1['invoice_date'] = pd.to_datetime(df1['invoice_date'], format=('%d-%b-%y'))
#customer_id
df1['customer_id'] = df1['customer_id'].astype(int)
df1.dtypes
```
## 1.7. Descriptive statistics
```
cat_attributes = df1.select_dtypes(exclude = ['int64', 'float64', 'datetime64[ns]'])
```
### 1.7.1. Numerical Attributes
```
m1 = num_attributes(df1)
m1
```
#### 1.7.1.1 Investigating
1. Negative quantity (devolution?)
2. Price = 0 (Promo?)
## 1.7.2. Categorical Attributes
```
cat_attributes.head()
```
#### Invoice no
```
#invoice_no -- some of them has one char
df_invoice_char = df1.loc[df1['invoice_no'].apply(lambda x: bool(re.search('[^0-9]+', x))), :]
len(df_invoice_char[df_invoice_char['quantity']<0])
print('Total of invoices with letter: {}'.format(len(df_invoice_char)))
print('Total of negative quantaty: {}'.format(len(df1[df1['quantity']<0])))
print('Letter means negative quantity')
```
#### Stock Code
```
#all stock codes with char
df1.loc[df1['stock_code'].apply(lambda x: bool(re.search('^[a-zA-Z]+$', x))), 'stock_code'].unique()
#remove stock code in ['POST', 'D', 'M', 'PADS', 'DOT', 'CRUK']
# df1 = df1[-df1.isin(['POST', 'D', 'M', 'PADS', 'DOT', 'CRUK'])]
```
#### Description
```
df1.head(2) #remove description
```
#### Country
```
df1['country'].value_counts(normalize='True').head()
df1[['country', 'customer_id']].drop_duplicates().groupby('country').count().reset_index().sort_values('customer_id', ascending=False).head()
```
# 2.0. Data Filtering
```
df2 = df1.copy()
# === Numerical attributes ====
df2 = df2.loc[df2['unit_price'] >= 0.04, :]
# === Categorical attributes ====
df2 = df2[~df2['stock_code'].isin( ['POST', 'D', 'DOT', 'M', 'S', 'AMAZONFEE', 'm', 'DCGSSBOY', 'DCGSSGIRL', 'PADS', 'B', 'CRUK'] ) ]
# description
df2 = df2.drop( columns='description', axis=1 )
# map -
df2 = df2[~df2['country'].isin( ['European Community', 'Unspecified' ] ) ]
# bad users
df2 = df2[~df2['customer_id'].isin( [16446] )]
# quantity
df_returns = df2.loc[df1['quantity'] < 0, :]
df_purchase = df2.loc[df1['quantity'] >= 0, :]
```
# 3.0. Feature Engineering
```
df3 = df2.copy()
```
## 3.1. Feature Creation
```
# data reference
df_ref = df3.drop( ['invoice_no', 'stock_code', 'quantity', 'invoice_date', 'unit_price', 'country'], axis=1 ).drop_duplicates( ignore_index=True )
```
### 3.1.1. Gross Revenue
```
#calculate gross revenue
df_purchase.loc[:,'gross_revenue'] = df_purchase.loc[:, 'quantity'] * df_purchase.loc[:, 'unit_price']
#gross revenue by customer
df_monetary = df_purchase.loc[:,['customer_id', 'gross_revenue']].groupby('customer_id').sum().reset_index()
df_ref = pd.merge(df_ref, df_monetary, on='customer_id', how= 'left')
df_ref.isna().sum()
```
### 3.1.2. Recency - Days from last purchase
```
#recency
df_recency = df_purchase.loc[:,['customer_id', 'invoice_date']].groupby('customer_id').max().reset_index()
df_recency['recency_days'] = (df3['invoice_date'].max() - df_recency['invoice_date']).dt.days
df_recency = df_recency[['customer_id', 'recency_days']].copy()
df_ref = pd.merge(df_ref, df_recency, on='customer_id', how='left')
```
### 3.1.4. Quantity of purchase
```
# Número de compras
df_invoice = df_purchase.loc[:,['customer_id', 'invoice_no']].drop_duplicates().groupby('customer_id').count().reset_index().rename(columns={'invoice_no':'qt_invoice'})
df_ref = pd.merge(df_ref, df_invoice, on='customer_id', how='left')
```
### 3.1.4. Quantity of products purchase
```
# Número de compras
df_stock_code = df_purchase.loc[:,['customer_id', 'stock_code']].groupby('customer_id').count().reset_index().rename(columns={'stock_code':'qt_products'})
df_ref = pd.merge(df_ref, df_stock_code, on='customer_id', how='left')
```
### 3.1.6. Frequency
```
df_aux = (df_purchase[['customer_id', 'invoice_no', 'invoice_date']].drop_duplicates().groupby('customer_id').agg(
max_ = ('invoice_date', 'max'),
min_ = ('invoice_date', 'min'),
days = ('invoice_date', lambda x: (x.max() - x.min()).days ),
buys = ('invoice_no', 'count'))).reset_index()
# #calculate frequency
df_aux['frequency'] = df_aux[['buys', 'days']].apply(lambda x: x['buys']/x['days'] if x['days']!= 0 else 0, axis=1)
#merge
df_ref = pd.merge(df_ref, df_aux[['customer_id', 'frequency']], on='customer_id', how='left')
```
### 3.1.7. Returns
```
df_aux = df_returns[['customer_id', 'quantity']].groupby('customer_id').sum().reset_index().rename(columns={'quantity':'qt_returns'})
df_aux['qt_returns'] = -1*df_aux['qt_returns']
#merge
df_ref = pd.merge(df_ref, df_aux, on='customer_id', how='left')
df_ref.loc[df_ref['qt_returns'].isna(), 'qt_returns'] = 0
```
# 4.0. Exploratory Data Analisys
```
df_ref.isna().sum()
df4 = df_ref.dropna()
```
## 4.3. Estudo do Espaço
```
# selected dataset
cols_selected = ['customer_id', 'gross_revenue', 'recency_days', 'qt_products', 'frequency', 'qt_returns']
df43 = df4[ cols_selected ].drop( columns='customer_id', axis=1 ).copy()
mm = pp.MinMaxScaler()
df43['gross_revenue'] = mm.fit_transform( df43[['gross_revenue']] )
df43['recency_days'] = mm.fit_transform( df43[['recency_days']] )
df43['qt_products'] = mm.fit_transform( df43[['qt_products']])
df43['frequency'] = mm.fit_transform( df43[['frequency']])
df43['qt_returns'] = mm.fit_transform( df43[['qt_returns']])
```
### PCA
```
X=df43.copy()
pca = dd.PCA( n_components=X.shape[1] )
principal_components = pca.fit_transform( X )
# plot explained variable
features = range( pca.n_components_ )
plt.bar( features, pca.explained_variance_ratio_, color='black' )
# pca component
df_pca = pd.DataFrame( principal_components )
```
### TSNE
```
reducer = TSNE( n_components=2,n_jobs=-1, random_state=3)
embedding = reducer.fit_transform(X)
df_tsne = pd.DataFrame()
df_tsne['embedding_x'] = embedding[:,0]
df_tsne['embedding_y'] = embedding[:,1]
plt.figure(figsize=(20,12));
sns.scatterplot(x='embedding_x', y='embedding_y',data=df_tsne);
```
## 8.2. GMM
```
# model definition
k = 8
X=df43.copy()
#model definition
kmeans_model = c.KMeans(n_clusters=k, random_state=42)
#model training
kmeans_model.fit(X)
#model predict
labels = kmeans_model.predict(X)
#model performance
sil = m.silhouette_score(X, labels, metric='euclidean')
m.silhouette_score(X, labels, metric='euclidean')
```
# 9.0. Cluster Analisys
```
cols_selected = ['customer_id', 'gross_revenue', 'recency_days', 'qt_products', 'frequency', 'qt_returns']
df9 = X.copy()
df9['cluster'] = labels
```
## 9.2. Cluster Profile
```
df92 = df4[cols_selected].copy()
df92['cluster']= labels
# Number of customer
df_cluster = df92[['customer_id', 'cluster']].groupby( 'cluster' ).count().reset_index()
df_cluster['perc_customer'] = 100*( df_cluster['customer_id'] / df_cluster['customer_id'].sum() )
# Avg Gross revenue
df_avg_gross_revenue = df92[['gross_revenue', 'cluster']].groupby( 'cluster' ).mean().reset_index().rename(columns={'gross_revenue': 'avg_gmv'})
df_cluster = pd.merge( df_cluster, df_avg_gross_revenue, how='inner', on='cluster' )
# Sum Gross revenue
df_avg_gross_revenue = df92[['gross_revenue', 'cluster']].groupby( 'cluster' ).sum().reset_index().rename(columns={'gross_revenue': 'total_gmv'})
df_cluster = pd.merge( df_cluster, df_avg_gross_revenue, how='inner', on='cluster' )
# Avg recency days
df_avg_recency_days = df92[['recency_days', 'cluster']].groupby( 'cluster' ).mean().reset_index()
df_cluster = pd.merge( df_cluster, df_avg_recency_days, how='inner', on='cluster' )
# Avg products
df_qtde_products = df92[['qt_products', 'cluster']].groupby( 'cluster' ).mean().reset_index().rename(columns={'qt_products': 'avg_products'})
df_cluster = pd.merge( df_cluster, df_qtde_products, how='inner', on='cluster' )
# total products
df_qtde_products = df92[['qt_products', 'cluster']].groupby( 'cluster' ).sum().reset_index().rename(columns={'qt_products': 'total_products'})
df_cluster = pd.merge( df_cluster, df_qtde_products, how='inner', on='cluster' )
# Frequency
df_frequency = df92[['frequency', 'cluster']].groupby( 'cluster' ).mean().reset_index()
df_cluster = pd.merge( df_cluster, df_frequency, how='inner', on='cluster' )
# Returns avg
df_qtde_returns = df92[['qt_returns', 'cluster']].groupby( 'cluster' ).mean().reset_index().rename(columns={'gross_revenue': 'avg_returns'})
df_cluster = pd.merge( df_cluster, df_qtde_returns, how='inner', on='cluster' )
# # Returns total
# df_qtde_returns = df92[['qt_returns', 'cluster']].groupby( 'cluster' ).sum().reset_index().rename(columns={'gross_revenue': 'avg_returns'})
# df_cluster = pd.merge( df_cluster, df_qtde_returns, how='inner', on='cluster' )
# # Returns total
# df_qtde_returns = df92[['qt_returns', 'cluster']].groupby( 'cluster' ).sum().reset_index().rename(columns={'gross_revenue': 'total_returns'})
# df_cluster = pd.merge( df_cluster, df_qtde_returns, how='inner', on='cluster' )
df_cluster.sort_values( 'avg_gmv', ascending=False )
#Cluster 7 : Insider
#Cluster 2 : More frequency
#Cluster 0 : Lazy
#Cluster 5 : Hibernating (High recency)
#Cluster 4 : More products
#Cluster 6 : Forgotten
#Cluster 3 : Lost
#Cluster 1 : One-time customer
```
# 10.0. Deploy to Production
```
df10 = df4[cols_selected].copy()
df10['cluster'] = labels
df10['recency_days'] = df10['recency_days'].astype(int)
df10['qt_products'] = df10['qt_products'].astype(int)
df10['qt_returns'] = df10['qt_returns'].astype(int)
df10['cluster'] = df10['cluster'].astype(int)
df10.dtypes
```
|
github_jupyter
|
```
from MPyDATA import ScalarField, VectorField, PeriodicBoundaryCondition, Options, Stepper, Solver
import numpy as np
dt, dx, dy = .1, .2, .3
nt, nx, ny = 100, 15, 10
# https://en.wikipedia.org/wiki/Arakawa_grids#Arakawa_C-grid
x, y = np.mgrid[
dx/2 : nx*dx : dx,
dy/2 : ny*dy : dy
]
# vector field (u,v) components
# u - x component of the velocity field
ux, uy = np.mgrid[
0 : (nx+1)*dx : dx,
dy/2 : ny*dy : dy
]
# v - y component of the velocity field
vx, vy = np.mgrid[
dx/2 : nx*dx : dx,
0: (ny+1)*dy : dy
]
from matplotlib import pyplot, rcParams
rcParams['figure.figsize'] = [12, 8]
pyplot.quiver(ux, uy, 1, 0, pivot='mid')
pyplot.quiver(vx, vy, 0, 1, pivot='mid')
pyplot.xticks(ux[:,0])
pyplot.yticks(vy[0,:])
pyplot.scatter(x, y)
pyplot.title('Arakawa-C grid')
pyplot.grid()
pyplot.show()
from MPyDATA import ScalarField, VectorField, PeriodicBoundaryCondition, Options, Stepper, Solver
bc = [PeriodicBoundaryCondition(), PeriodicBoundaryCondition()]
options = Options()
data = np.zeros((nx, ny))
data[1,1] = 10
advectee = ScalarField(data, options.n_halo, boundary_conditions=bc)
# https://en.wikipedia.org/wiki/Stream_function
```
stream function:
$u=-\partial_y \psi$
$v=\partial_x \psi$
example flow field:
$\psi(x,y) = - w_{\text{max}} \frac{X}{\pi}
\sin\left(\pi \frac{y}{Y}\right)
\cos\left(2\pi\frac{x}{X}\right)
$
```
class Psi:
def __init__(self, *, X, Y, w_max):
self.X = X
self.Y = Y
self.w_max = w_max
def __call__(self, x, y):
return - self.w_max * self.X / np.pi * np.sin(np.pi * y/self.Y) * np.cos(2 * np.pi * x/self.X)
psi = Psi(X=nx*dx, Y=ny*dy, w_max=.6)
print(psi(0,0))
print(psi(1,1))
# https://en.wikipedia.org/wiki/Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition
# C_x = u * dt / dx
# C_y = v * dt / dy
u = -(psi(ux, uy+dy/2) - psi(ux, uy-dy/2)) / dy
v = +(psi(vx+dx/2, vy) - psi(vx-dx/2, vy)) / dx
advector = VectorField([u*dt/dx, v*dt/dy], halo=options.n_halo, boundary_conditions=bc)
def plot(advectee, advector):
pyplot.scatter(x, y, s=100, c=advectee.get(), marker='s')
pyplot.quiver(ux, uy, advector.get_component(0), 0, pivot='mid', scale=10)
pyplot.quiver(vx, vy, 0, advector.get_component(1), pivot='mid', scale=10)
pyplot.xticks(ux[:,0])
pyplot.yticks(vy[0,:])
pyplot.colorbar()
pyplot.grid()
pyplot.show()
plot(advectee, advector)
stepper = Stepper(options=options, grid=(nx, ny))
solver = Solver(stepper=stepper, advectee=advectee, advector=advector)
solver.advance(20)
plot(advectee, advector)
# https://en.wikipedia.org/wiki/NetCDF
from scipy.io.netcdf import netcdf_file
with netcdf_file('test.nc', mode='w') as ncdf:
# global attributes (metadata)
ncdf.MPyDATA_options = str(options)
# dimensions
ncdf.createDimension("T", nt)
ncdf.createDimension("X", nx)
ncdf.createDimension("Y", ny)
# variables (defined over defined dimensions)
variables = {}
variables["T"] = ncdf.createVariable("T", "f", ["T"])
variables["T"].units = "seconds"
variables["T"][:] = 0
variables["X"] = ncdf.createVariable("X", "f", ["X"])
variables["X"][:] = x[:, 0]
variables["X"].units = "metres"
variables["Y"] = ncdf.createVariable("Y", "f", ["Y"])
variables["Y"][:] = y[0, :]
variables["Y"].units = "metres"
variables["advectee"] = ncdf.createVariable("advectee", "f", ["T", "X", "Y"])
# attributes (per variable)
# e.g. units above
# note: initial condition not saved
for i in range(nt):
solver.advance(nt=1)
variables["T"][i] = (i+1) * dt
variables["advectee"][i, :, :] = solver.advectee.get()
! ls -lah test.nc
! file test.nc
! ncdump -c test.nc
# https://en.wikipedia.org/wiki/Climate_and_Forecast_Metadata_Conventions
# try opening in Paraview (https://en.wikipedia.org/wiki/ParaView)...
```
|
github_jupyter
|
# Upsert AOOS Priority Score Demo
## Approaching Out of Stock (AOOS)
* Priority scores of work items (inventories) in AOOS work queue are calculated and upserted to InfoHub
* The function `AOOS_priority_score` is defined below - for understanding the business logic, refer to the accompanying Notebook **AOOS-Priority-Score.ipynb**
## InfoHub
* The InfoHub connection and queries are defined in the accompanying Notebook **InfoHub.ipynb**
* Make sure that you have run the Kernel of the above Notebook
```
import json
import sys
import os
import pandas as pd
import time
import numpy as np
import datetime
import copy
import json
```
### Import `InfoHubConnection` Class
* Install `ipynb` package for the following import to work
`pip install ipynb`
* Make sure that the Kernel of Notebook **InfoHub.ipynb** has been run without errors
```
from ipynb.fs.full.InfoHub import InfoHubConnection
```
### Priority Score Calculation
* Priority score function for _Approaching Out of Stock_ work item
* The business logic is explained in detail in the accompanying Notebook **AOOS-Priority-Score.ipynb**
```
def AOOS_priority_score(supply_plans,
demand_plans,
starting_inventory = 0,
first_date = datetime.date(2021, 7, 31),
last_date = datetime.date(2021, 8, 31),
decay_weight = 3.0,
inv_positive_threshold = 20,
inv_negative_threshold = -100):
horizon = (last_date - first_date).days + 1
# Define Inventory horizon and add starting inventory
inventory_horizon = np.zeros(horizon, dtype=int)
inventory_horizon = inventory_horizon + starting_inventory
# Add Supply plans
for i in range(len(supply_plans)):
supply_date = datetime.datetime.fromisoformat(supply_plans[i]['startDate'][:-1]).date()
qty = supply_plans[i]['quantity']
diff_days = (supply_date - first_date).days
inventory_horizon[diff_days:] = inventory_horizon[diff_days:] + qty
for i in range(len(demand_plans)):
demand_date = datetime.datetime.fromisoformat(demand_plans[i]['startDate'][:-1]).date()
qty = demand_plans[i]['quantity']
diff_days = (demand_date - first_date).days
inventory_horizon[diff_days:] = inventory_horizon[diff_days:] - qty
# Calculate weights
weights = np.exp(np.arange(decay_weight, 0, -(decay_weight/horizon)))/np.exp(decay_weight)
# Calculate penalty
inventory_below_threshold = inventory_horizon - inv_positive_threshold
penalties = np.zeros(horizon, dtype=int)
neg_inv_mask = inventory_below_threshold < 0
penalties[neg_inv_mask] = inventory_below_threshold[neg_inv_mask]
neg_threshold_mask = penalties < inv_negative_threshold
penalties[neg_threshold_mask] = inv_negative_threshold
total_penalty = np.sum(weights*-penalties)
max_penalty = np.sum(weights*-inv_negative_threshold)
priority_score = int(np.rint((total_penalty/max_penalty)*100))
return priority_score
```
### InfoHub Connection Config
* Load your InfoHub connection parameters from `credentials.json` file
* `tenantId` is not required for establishing the connection, but is required in the GraphQL queries
```
with open("credentials.json") as config_file:
config = json.load(config_file)
url = config['url']
headers = config['headers']
tenantId = config['tenantId']
infohub = InfoHubConnection(url=url, tenantId=tenantId, headers=headers)
```
### Priority Score Config
* `timestamp`: Timestamp is needed in `upsert` operation, in ISO format.
* In production, use the current system timestamp.
* `maxDate`: Active supply and demand plans till this date are used for priority score calculation
* For more details, check the accompanying Notebook **AOOS-Priority-Score.ipynb**
```
timestamp = "2021-08-03T10:37:07+0800"
maxDate = "2021-08-31 00:00:00"
```
### Work Queue List
* Our goal is to evaluate / re-calculate the priority score of work items in the AOOS work queue
* We need the AOOS work queue object ID to get the work items that are in progress
* To get the AOOS work queue object ID, we use the following query to get the list of work queues.
```
workqueues = infohub.get_work_queues()
for i in range(len(workqueues)):
print('WorkQueue: ', workqueues[i]['object']['name'])
print('Id: ', workqueues[i]['object']['id'])
print("--------------------------------------------------------------------------------")
```
### Choose `workQueueId`
* Choose the workQueueId of _Inventory approaching out of stock prioritized_
```
workQueueId = "516dc12d-eff6-4c51-9222-7eca88a31c5e"
```
### Collect Work Items
* Given the work queue ID, collect all the work items
```
work_items = []
print("Querying work items in progress...")
work_items_list = infohub.get_workitems_in_progress(workQueueId=workQueueId)
print("\tNumber of WorkItems: {}".format(len(work_items_list)))
for i in range(len(work_items_list)):
wi = {"workItemId": work_items_list[i]['object']["id"],
"priority": work_items_list[i]['object']["priority"],
"inventoryId": work_items_list[i]['object']["businessObject"]["id"],
"productId": work_items_list[i]['object']["businessObject"]["product"]["id"],
"partNumber": work_items_list[i]['object']["businessObject"]["product"]["partNumber"],
"locationId": work_items_list[i]['object']["businessObject"]["location"]["id"],
"locationIdentifier": work_items_list[i]['object']["businessObject"]["location"]["locationIdentifier"],
"starting_inventory": work_items_list[i]['object']["businessObject"]["quantity"]
}
work_items.append(wi)
print("({}): Object-Id: {};".format(i, wi["workItemId"]))
print("\tPart-Number: {}; Location-Id: {}; Priority-Score: {}".format(wi["partNumber"], wi["locationIdentifier"], wi["priority"]))
print('Done.')
```
### Demo: Change in Priority Score
* The priority scores are based on current supply plans and demand plans for next 30 days (`maxDate`)
* In the event of a change in supply/demand plan(s), the priority score has to change to reflect the severity of going _out of stock_
* We can test this by simulating a change in supply/demand plan (or both) and see how the priority score changes
* Steps:
* Step 1: Choose an WorkItem
* Step 2: Get the supply and demand plans
* Step 3: Upsert a plan with modified quantity and/or date
* You can repeat this for multiple plans (supply or demand or both)
* Step 4: Calculate the new priority score and compare
#### Step 1: Choose an WorkItem
* Choose an WorkItem from the above list (by its index)
```
k = 3
partNumber = work_items[k]["partNumber"]
locationIdentifier = work_items[k]["locationIdentifier"]
priority = work_items[k]["priority"]
workItemId = work_items[k]["workItemId"]
starting_inventory = work_items[k]["starting_inventory"]
print("Selected WorkItem: Object ID: {}".format(workItemId))
print("\t({}): Part-Number: {}; Location-Id: {}; Priority-Score: {}".format(k, partNumber, locationIdentifier, priority))
```
#### Step 2: Get Supply and Demand Plans
* WorkItem can be identified with its unique object ID or by its unique `partNumber` and `locationIdentifier` combination
* We have constructed all queries with `partNumber` and `locationIdentifier` combination for easy readability
* Good practice is to use object ID as it is immune to possible changes in schema
* To calculate the priority score we need both the supply and demand plans
```
# Get supply plans
supply_plans = []
print("Querying supply plans ...")
plan_list = infohub.get_supplyplans(partNumber=partNumber, locationIdentifier=locationIdentifier, maxDate=maxDate)
print("\tNumber of Supply Plans: {}".format(len(plan_list)))
for i in range(len(plan_list)):
plan = {"startDate": plan_list[i]['object']["startDate"],
"quantity": plan_list[i]['object']["quantity"],
"id": plan_list[i]['object']["id"]
}
supply_plans.append(plan)
# Get demand plans
demand_plans =[]
print("Querying demand plans ...")
plan_list = infohub.get_demandplans(partNumber=partNumber, locationIdentifier=locationIdentifier, maxDate=maxDate)
print("\tNumber of Demand Plans: {}".format(len(plan_list)))
for i in range(len(plan_list)):
plan = {"startDate": plan_list[i]['object']["startDate"],
"quantity": plan_list[i]['object']["quantity"],
"id": plan_list[i]['object']["id"]
}
demand_plans.append(plan)
print("Done.")
# Print Supply and Demand Plans
print("::Supply Plans::")
for i in range(len(supply_plans)):
print("\t({}): Object ID: {}".format(i, supply_plans[i]["id"]))
print("\t\tStart Date: {}; Quantity: {};".format(supply_plans[i]["startDate"], supply_plans[i]["quantity"]))
print("::Demand Plans::")
for i in range(len(demand_plans)):
print("\t({}): Object ID: {}".format(i, demand_plans[i]["id"]))
print("\t\tStart Date: {}; Quantity: {};".format(demand_plans[i]["startDate"], demand_plans[i]["quantity"]))
```
#### Step 3: Upsert a plan with modified quantity and/or date
* Choose a supply/demand plan and change its quantity and/or date
* Upsert the modified plan
* You can repeat this for multiple plans (supply or demand or both)
* Steps 3a and 3b
```
# To demonstrate the change in the priority
changed_plans = []
new_supply_plans = copy.deepcopy(supply_plans)
new_demand_plans = copy.deepcopy(demand_plans)
```
**Step 3a: _Modify Supply Plan_**
```
# Choose a supply plan to modify by its index
k = 0
id = supply_plans[k]["id"]
startDate = supply_plans[k]["startDate"]
quantity = supply_plans[k]["quantity"]
print("Selected Supply Plan ({}): Object ID: {}".format(k, id))
print("\tStart Date: {}; Quantity: {};".format(startDate, quantity))
new_startDate = startDate
new_quantity = quantity
# Modify quantity and/or date (Comment out to keep it the same)
# new_startDate = "2021-08-04T00:00:00.000Z"
new_quantity = 350.0
# Upsert the modified supply plan
print("Upserting modified supply plan..")
upsert_result = infohub.upsert_supplyplan(supplyPlanID=id, quantity=new_quantity, startDate=new_startDate, timestamp=timestamp)
print(upsert_result)
# Update the new values in the local list
new_supply_plans[k]["quantity"] = new_quantity
new_supply_plans[k]["startDate"] = new_startDate
changed_plans.append({"planType": "supply",
"quantity": quantity,
"startDate": startDate,
"new_quantity": new_quantity,
"new_startDate": new_startDate})
```
**Step 3b: _Modify Demand Plan_**
```
# Choose a demand plan to modify by its index
k = 0
id = demand_plans[k]["id"]
startDate = demand_plans[k]["startDate"]
quantity = demand_plans[k]["quantity"]
print("Selected Demand Plan ({}): Object ID: {}".format(k, id))
print("\tStart Date: {}; Quantity: {};".format(startDate, quantity))
new_startDate = startDate
new_quantity = quantity
# Modify quantity and/or date (Comment out to keep it the same)
#new_startDate = "2021-08-02T00:00:00.000Z"
new_quantity = 35.0
# Upsert the modified demand plan
print("Upserting modified demand plan..")
upsert_result = infohub.upsert_demandplan(demandPlanID=id, quantity=new_quantity, startDate=new_startDate, timestamp=timestamp)
print(upsert_result)
# Update the new values in the local list
new_demand_plans[k]["quantity"] = new_quantity
new_demand_plans[k]["startDate"] = new_startDate
changed_plans.append({"planType": "demand",
"quantity": quantity,
"startDate": startDate,
"new_quantity": new_quantity,
"new_startDate": new_startDate})
```
#### Step 4: Calculate Priority Score
```
new_priority = AOOS_priority_score(supply_plans=new_supply_plans, demand_plans=new_demand_plans, starting_inventory=starting_inventory, inv_positive_threshold=100, inv_negative_threshold=-300)
print("Updated priority score: {}".format(new_priority))
# Results
print("Work Item / Inventory:")
print("----------------------")
print("\tPart Number: {}; Location: {}".format(partNumber, locationIdentifier))
print("----------------------")
print("Changed Plans:")
for i in range(len(changed_plans)):
print("\tPlan Type: {}".format(changed_plans[i]["planType"]))
if changed_plans[i]["quantity"] != changed_plans[i]["new_quantity"]:
print("\t\t Quantity change: {} -> {}".format(changed_plans[i]["quantity"], changed_plans[i]["new_quantity"]))
if changed_plans[i]["startDate"] != changed_plans[i]["new_startDate"]:
print("\t\t Start date change: {} -> {}".format(changed_plans[i]["startDate"], changed_plans[i]["new_startDate"]))
print("---------------------------------------")
print("Priority change: {} -> {}".format(priority, new_priority))
print("---------------------------------------")
# Upsert the new priority score
upsert_result = infohub.upsert_workitem_priority(workItemId=workItemId, priority=new_priority, timestamp=timestamp)
print(upsert_result)
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/charanhu/Amazon-Fine-Food-Reviews-Analysis./blob/main/Amazon_Fine_Food_Reviews_Analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!wget --header="Host: storage.googleapis.com" --header="User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.101 Safari/537.36" --header="Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9" --header="Accept-Language: en-US,en;q=0.9" --header="Referer: https://www.kaggle.com/" "https://storage.googleapis.com/kaggle-data-sets/18/2157/bundle/archive.zip?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=gcp-kaggle-com%40kaggle-161607.iam.gserviceaccount.com%2F20210617%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20210617T121304Z&X-Goog-Expires=259199&X-Goog-SignedHeaders=host&X-Goog-Signature=9a911766595be1862a3092d3324b51b0eb4e7c743ee7ace0cc6b48e3a0ab779e2e96add73b40062ee946e3a7b891cb652614cbe80f81d51dd11ef64c34e8f66d20ee312b2a391db6f0a171f6c094a42f1d6a97bb8ab50db5b630deed8a54cb6f111abe3e2ff557fc86028b38e8661c472ddfe51379540258b0509072c9278614c43d89f04652fa6c29459b57731f85d1fbb2c723b7f26beb14dc8b56220d68215fae03beb865641df4147c536bdb8e44704fc32f152a0ef51b7de8f138289474bd83413a04e0f048af50d9c31fa2a0edff2a6151ce7cfdb6dfa139130f27c39fdfa1787aa973c6ec01a43b824eb42103e12aa3e0bfc8044a347bda9640692ea7" -c -O 'archive.zip'
from google.colab import drive
drive.mount('/content/drive')
from google.colab import files
files.upload()
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
!kaggle datasets download -d snap/amazon-fine-food-reviews
!unzip amazon-fine-food-reviews
```
# Amazon Fine Food Reviews Analysis
Data Source: https://www.kaggle.com/snap/amazon-fine-food-reviews <br>
EDA: https://nycdatascience.com/blog/student-works/amazon-fine-foods-visualization/
The Amazon Fine Food Reviews dataset consists of reviews of fine foods from Amazon.<br>
Number of reviews: 568,454<br>
Number of users: 256,059<br>
Number of products: 74,258<br>
Timespan: Oct 1999 - Oct 2012<br>
Number of Attributes/Columns in data: 10
Attribute Information:
1. Id
2. ProductId - unique identifier for the product
3. UserId - unqiue identifier for the user
4. ProfileName
5. HelpfulnessNumerator - number of users who found the review helpful
6. HelpfulnessDenominator - number of users who indicated whether they found the review helpful or not
7. Score - rating between 1 and 5
8. Time - timestamp for the review
9. Summary - brief summary of the review
10. Text - text of the review
#### Objective:
Given a review, determine whether the review is positive (Rating of 4 or 5) or negative (rating of 1 or 2).
<br>
[Q] How to determine if a review is positive or negative?<br>
<br>
[Ans] We could use the Score/Rating. A rating of 4 or 5 could be cosnidered a positive review. A review of 1 or 2 could be considered negative. A review of 3 is nuetral and ignored. This is an approximate and proxy way of determining the polarity (positivity/negativity) of a review.
## Loading the data
The dataset is available in two forms
1. .csv file
2. SQLite Database
In order to load the data, We have used the SQLITE dataset as it easier to query the data and visualise the data efficiently.
<br>
Here as we only want to get the global sentiment of the recommendations (positive or negative), we will purposefully ignore all Scores equal to 3. If the score id above 3, then the recommendation wil be set to "positive". Otherwise, it will be set to "negative".
```
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
import sqlite3
import pandas as pd
import numpy as np
import nltk
import string
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from sklearn.metrics import roc_curve, auc
from nltk.stem.porter import PorterStemmer
import re
# Tutorial about Python regular expressions: https://pymotw.com/2/re/
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from gensim.models import Word2Vec
from gensim.models import KeyedVectors
import pickle
from tqdm import tqdm
import os
```
# [1]. Reading Data
```
# using the SQLite Table to read data.
con = sqlite3.connect('/content/database.sqlite')
#filtering only positive and negative reviews i.e.
# not taking into consideration those reviews with Score=3
# SELECT * FROM Reviews WHERE Score != 3 LIMIT 500000, will give top 500000 data points
# you can change the number to any other number based on your computing power
# filtered_data = pd.read_sql_query(""" SELECT * FROM Reviews WHERE Score != 3 LIMIT 500000""", con)
# for tsne assignment you can take 5k data points
filtered_data = pd.read_sql_query(""" SELECT * FROM Reviews WHERE Score != 3 LIMIT 5000""", con)
# Give reviews with Score>3 a positive rating, and reviews with a score<3 a negative rating.
def partition(x):
if x < 3:
return 0
return 1
#changing reviews with score less than 3 to be positive and vice-versa
actualScore = filtered_data['Score']
positiveNegative = actualScore.map(partition)
filtered_data['Score'] = positiveNegative
print("Number of data points in our data", filtered_data.shape)
# filtered_data.head()
print()
filtered_data.tail()
display = pd.read_sql_query("""
SELECT UserId, ProductId, ProfileName, Time, Score, Text, COUNT(*)
FROM Reviews
GROUP BY UserId
HAVING COUNT(*)>1
""", con)
print(display.shape)
display.head()
display[display['UserId']=='AZY10LLTJ71NX']
display['COUNT(*)'].sum()
```
# Exploratory Data Analysis
## [2] Data Cleaning: Deduplication
It is observed (as shown in the table below) that the reviews data had many duplicate entries. Hence it was necessary to remove duplicates in order to get unbiased results for the analysis of the data. Following is an example:
```
display= pd.read_sql_query("""
SELECT *
FROM Reviews
WHERE Score != 3 AND UserId="AR5J8UI46CURR"
ORDER BY ProductID
""", con)
display.head()
```
As can be seen above the same user has multiple reviews of the with the same values for HelpfulnessNumerator, HelpfulnessDenominator, Score, Time, Summary and Text and on doing analysis it was found that <br>
<br>
ProductId=B000HDOPZG was Loacker Quadratini Vanilla Wafer Cookies, 8.82-Ounce Packages (Pack of 8)<br>
<br>
ProductId=B000HDL1RQ was Loacker Quadratini Lemon Wafer Cookies, 8.82-Ounce Packages (Pack of 8) and so on<br>
It was inferred after analysis that reviews with same parameters other than ProductId belonged to the same product just having different flavour or quantity. Hence in order to reduce redundancy it was decided to eliminate the rows having same parameters.<br>
The method used for the same was that we first sort the data according to ProductId and then just keep the first similar product review and delelte the others. for eg. in the above just the review for ProductId=B000HDL1RQ remains. This method ensures that there is only one representative for each product and deduplication without sorting would lead to possibility of different representatives still existing for the same product.
```
#Sorting data according to ProductId in ascending order
sorted_data=filtered_data.sort_values('ProductId', axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last')
#Deduplication of entries
final=sorted_data.drop_duplicates(subset={"UserId","ProfileName","Time","Text"}, keep='first', inplace=False)
final.shape
#Checking to see how much % of data still remains
(final['Id'].size*1.0)/(filtered_data['Id'].size*1.0)*100
```
<b>Observation:-</b> It was also seen that in two rows given below the value of HelpfulnessNumerator is greater than HelpfulnessDenominator which is not practically possible hence these two rows too are removed from calcualtions
```
display= pd.read_sql_query("""
SELECT *
FROM Reviews
WHERE Score != 3 AND Id=44737 OR Id=64422
ORDER BY ProductID
""", con)
display.head()
final=final[final.HelpfulnessNumerator<=final.HelpfulnessDenominator]
#Before starting the next phase of preprocessing lets see the number of entries left
print(final.shape)
#How many positive and negative reviews are present in our dataset?
final['Score'].value_counts()
```
# [3]. Text Preprocessing.
Now that we have finished deduplication our data requires some preprocessing before we go on further with analysis and making the prediction model.
Hence in the Preprocessing phase we do the following in the order below:-
1. Begin by removing the html tags
2. Remove any punctuations or limited set of special characters like , or . or # etc.
3. Check if the word is made up of english letters and is not alpha-numeric
4. Check to see if the length of the word is greater than 2 (as it was researched that there is no adjective in 2-letters)
5. Convert the word to lowercase
6. Remove Stopwords
7. Finally Snowball Stemming the word (it was obsereved to be better than Porter Stemming)<br>
After which we collect the words used to describe positive and negative reviews
```
# printing some random reviews
sent_0 = final['Text'].values[0]
print(sent_0)
print("="*50)
sent_1000 = final['Text'].values[1000]
print(sent_1000)
print("="*50)
sent_1500 = final['Text'].values[1500]
print(sent_1500)
print("="*50)
sent_4900 = final['Text'].values[4900]
print(sent_4900)
print("="*50)
# remove urls from text python: https://stackoverflow.com/a/40823105/4084039
sent_0 = re.sub(r"http\S+", "", sent_0)
sent_1000 = re.sub(r"http\S+", "", sent_1000)
sent_150 = re.sub(r"http\S+", "", sent_1500)
sent_4900 = re.sub(r"http\S+", "", sent_4900)
print(sent_0)
# https://stackoverflow.com/questions/16206380/python-beautifulsoup-how-to-remove-all-tags-from-an-element
from bs4 import BeautifulSoup
soup = BeautifulSoup(sent_0, 'lxml')
text = soup.get_text()
print(text)
print("="*50)
soup = BeautifulSoup(sent_1000, 'lxml')
text = soup.get_text()
print(text)
print("="*50)
soup = BeautifulSoup(sent_1500, 'lxml')
text = soup.get_text()
print(text)
print("="*50)
soup = BeautifulSoup(sent_4900, 'lxml')
text = soup.get_text()
print(text)
# https://stackoverflow.com/a/47091490/4084039
import re
def decontracted(phrase):
# specific
phrase = re.sub(r"won't", "will not", phrase)
phrase = re.sub(r"can\'t", "can not", phrase)
# general
phrase = re.sub(r"n\'t", " not", phrase)
phrase = re.sub(r"\'re", " are", phrase)
phrase = re.sub(r"\'s", " is", phrase)
phrase = re.sub(r"\'d", " would", phrase)
phrase = re.sub(r"\'ll", " will", phrase)
phrase = re.sub(r"\'t", " not", phrase)
phrase = re.sub(r"\'ve", " have", phrase)
phrase = re.sub(r"\'m", " am", phrase)
return phrase
sent_1500 = decontracted(sent_1500)
print(sent_1500)
print("="*50)
#remove words with numbers python: https://stackoverflow.com/a/18082370/4084039
sent_0 = re.sub("\S*\d\S*", "", sent_0).strip()
print(sent_0)
#remove spacial character: https://stackoverflow.com/a/5843547/4084039
sent_1500 = re.sub('[^A-Za-z0-9]+', ' ', sent_1500)
print(sent_1500)
# https://gist.github.com/sebleier/554280
# we are removing the words from the stop words list: 'no', 'nor', 'not'
# <br /><br /> ==> after the above steps, we are getting "br br"
# we are including them into stop words list
# instead of <br /> if we have <br/> these tags would have revmoved in the 1st step
stopwords= set(['br', 'the', 'i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've",\
"you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', \
'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their',\
'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', \
'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', \
'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', \
'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after',\
'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further',\
'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more',\
'most', 'other', 'some', 'such', 'only', 'own', 'same', 'so', 'than', 'too', 'very', \
's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', \
've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn',\
"hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn',\
"mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", \
'won', "won't", 'wouldn', "wouldn't"])
# Combining all the above stundents
from tqdm import tqdm
preprocessed_reviews = []
# tqdm is for printing the status bar
for sentance in tqdm(final['Text'].values):
sentance = re.sub(r"http\S+", "", sentance)
sentance = BeautifulSoup(sentance, 'lxml').get_text()
sentance = decontracted(sentance)
sentance = re.sub("\S*\d\S*", "", sentance).strip()
sentance = re.sub('[^A-Za-z]+', ' ', sentance)
# https://gist.github.com/sebleier/554280
sentance = ' '.join(e.lower() for e in sentance.split() if e.lower() not in stopwords)
preprocessed_reviews.append(sentance.strip())
preprocessed_reviews[1500]
```
<h2><font color='red'>[3.2] Preprocess Summary</font></h2>
```
## Similartly you can do preprocessing for review summary also.
```
# [4] Featurization
## [4.1] BAG OF WORDS
```
#BoW
count_vect = CountVectorizer() #in scikit-learn
count_vect.fit(preprocessed_reviews)
print("some feature names ", count_vect.get_feature_names()[:10])
print('='*50)
final_counts = count_vect.transform(preprocessed_reviews)
print("the type of count vectorizer ",type(final_counts))
print("the shape of out text BOW vectorizer ",final_counts.get_shape())
print("the number of unique words ", final_counts.get_shape()[1])
```
## [4.2] Bi-Grams and n-Grams.
```
#bi-gram, tri-gram and n-gram
#removing stop words like "not" should be avoided before building n-grams
# count_vect = CountVectorizer(ngram_range=(1,2))
# please do read the CountVectorizer documentation http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
# you can choose these numebrs min_df=10, max_features=5000, of your choice
count_vect = CountVectorizer(ngram_range=(1,2), min_df=10, max_features=5000)
final_bigram_counts = count_vect.fit_transform(preprocessed_reviews)
print("the type of count vectorizer ",type(final_bigram_counts))
print("the shape of out text BOW vectorizer ",final_bigram_counts.get_shape())
print("the number of unique words including both unigrams and bigrams ", final_bigram_counts.get_shape()[1])
```
## [4.3] TF-IDF
```
tf_idf_vect = TfidfVectorizer(ngram_range=(1,2), min_df=10)
tf_idf_vect.fit(preprocessed_reviews)
print("some sample features(unique words in the corpus)",tf_idf_vect.get_feature_names()[0:10])
print('='*50)
final_tf_idf = tf_idf_vect.transform(preprocessed_reviews)
print("the type of count vectorizer ",type(final_tf_idf))
print("the shape of out text TFIDF vectorizer ",final_tf_idf.get_shape())
print("the number of unique words including both unigrams and bigrams ", final_tf_idf.get_shape()[1])
```
## [4.4] Word2Vec
```
# Train your own Word2Vec model using your own text corpus
i=0
list_of_sentance=[]
for sentance in preprocessed_reviews:
list_of_sentance.append(sentance.split())
# Using Google News Word2Vectors
# in this project we are using a pretrained model by google
# its 3.3G file, once you load this into your memory
# it occupies ~9Gb, so please do this step only if you have >12G of ram
# we will provide a pickle file wich contains a dict ,
# and it contains all our courpus words as keys and model[word] as values
# To use this code-snippet, download "GoogleNews-vectors-negative300.bin"
# from https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit
# it's 1.9GB in size.
# http://kavita-ganesan.com/gensim-word2vec-tutorial-starter-code/#.W17SRFAzZPY
# you can comment this whole cell
# or change these varible according to your need
is_your_ram_gt_16g=False
want_to_use_google_w2v = False
want_to_train_w2v = True
if want_to_train_w2v:
# min_count = 5 considers only words that occured atleast 5 times
w2v_model=Word2Vec(list_of_sentance,min_count=5,size=50, workers=4)
print(w2v_model.wv.most_similar('great'))
print('='*50)
print(w2v_model.wv.most_similar('worst'))
elif want_to_use_google_w2v and is_your_ram_gt_16g:
if os.path.isfile('GoogleNews-vectors-negative300.bin'):
w2v_model=KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
print(w2v_model.wv.most_similar('great'))
print(w2v_model.wv.most_similar('worst'))
else:
print("you don't have gogole's word2vec file, keep want_to_train_w2v = True, to train your own w2v ")
w2v_words = list(w2v_model.wv.vocab)
print("number of words that occured minimum 5 times ",len(w2v_words))
print("sample words ", w2v_words[0:50])
```
## [4.4.1] Converting text into vectors using wAvg W2V, TFIDF-W2V
#### [4.4.1.1] Avg W2v
```
# average Word2Vec
# compute average word2vec for each review.
sent_vectors = []; # the avg-w2v for each sentence/review is stored in this list
for sent in tqdm(list_of_sentance): # for each review/sentence
sent_vec = np.zeros(50) # as word vectors are of zero length 50, you might need to change this to 300 if you use google's w2v
cnt_words =0; # num of words with a valid vector in the sentence/review
for word in sent: # for each word in a review/sentence
if word in w2v_words:
vec = w2v_model.wv[word]
sent_vec += vec
cnt_words += 1
if cnt_words != 0:
sent_vec /= cnt_words
sent_vectors.append(sent_vec)
print(len(sent_vectors))
print(len(sent_vectors[0]))
```
#### [4.4.1.2] TFIDF weighted W2v
```
# S = ["abc def pqr", "def def def abc", "pqr pqr def"]
model = TfidfVectorizer()
model.fit(preprocessed_reviews)
# we are converting a dictionary with word as a key, and the idf as a value
dictionary = dict(zip(model.get_feature_names(), list(model.idf_)))
# TF-IDF weighted Word2Vec
tfidf_feat = model.get_feature_names() # tfidf words/col-names
# final_tf_idf is the sparse matrix with row= sentence, col=word and cell_val = tfidf
tfidf_sent_vectors = []; # the tfidf-w2v for each sentence/review is stored in this list
row=0;
for sent in tqdm(list_of_sentance): # for each review/sentence
sent_vec = np.zeros(50) # as word vectors are of zero length
weight_sum =0; # num of words with a valid vector in the sentence/review
for word in sent: # for each word in a review/sentence
if word in w2v_words and word in tfidf_feat:
vec = w2v_model.wv[word]
# tf_idf = tf_idf_matrix[row, tfidf_feat.index(word)]
# to reduce the computation we are
# dictionary[word] = idf value of word in whole courpus
# sent.count(word) = tf valeus of word in this review
tf_idf = dictionary[word]*(sent.count(word)/len(sent))
sent_vec += (vec * tf_idf)
weight_sum += tf_idf
if weight_sum != 0:
sent_vec /= weight_sum
tfidf_sent_vectors.append(sent_vec)
row += 1
```
|
github_jupyter
|
```
import os
import h5py
import tensorflow as tf
import numpy as np
import pandas as pd
import time
import matplotlib.pyplot as plt
import seaborn as sns
from IPython import display
from tensorflow.keras import layers
from time import strftime
from scipy.signal import spectrogram, stft, istft, resample
MODEL_NAME = "GanPlayground"
SCALING_FACTOR = 0
EPOCHS = 25
BUFFER_SIZE = 1000
BATCH_SIZE = 16
NUM_EXAMPLES_TO_GENERATE = 1
LATENT_DIM = 100
STEAD_PATH_DB_PROCESSED_STFT_64 = "/Users/jarek/git/saigon/data/STEAD-processed-stft-64.hdf5"
def plot_single_stream(do, label, fs=66, nperseg=127, file_path=None):
d0 = pd.DataFrame(data=do)
fig = plt.figure(figsize=(16, 16), dpi=60)
ax1 = plt.subplot2grid((4, 1), (0, 0))
ax2 = plt.subplot2grid((4, 1), (1, 0))
ax3 = plt.subplot2grid((4, 1), (2, 0), rowspan=2)
plt.subplots_adjust(hspace=0.5)
sns.lineplot(data=do, ax=ax1, linewidth=1, legend=None)
ax1.set_title("Waveform")
ax1.set(xlabel="Samples", ylabel="Amplitude counts")
ax1.locator_params(nbins=6, axis="y")
f, t, Sxx = spectrogram(x=do, fs=fs)
ax2.clear()
ax2.set_title("Spectrogram")
ax2.pcolormesh(t, f, Sxx, shading="gouraud")
ax2.set(xlabel="Time [sec]", ylabel="Frequency [Hz]")
f_sftt, t_sftt, Zxx = stft(do, window="hanning", fs=fs, nperseg=nperseg)
ax3.clear()
ax3.set_title("STFT")
ax3.pcolormesh(t_sftt, f_sftt, np.abs(Zxx), shading="auto")
plt.suptitle(label, fontsize=14)
if file_path != None:
plt.savefig(file_path)
else:
plt.show()
def plot_stft(stream, fs=100, nperseg=155):
f, t, Zxx = stft(stream, window='hanning', fs=fs, nperseg=nperseg)
# plt.specgram(x_1[0][0], cmap='plasma', Fs=100)
plt.pcolormesh(t, f, np.abs(Zxx), shading='auto')
def get_stft_data(file_path, arr_length):
with h5py.File(file_path, "r") as f:
keys = f["keys"][:arr_length]
components = f["components"][:arr_length]
data = f["data"][:arr_length]
return (keys, components, data)
```
# Read processed data
```
(keys, components, x_train) = get_stft_data(
STEAD_PATH_DB_PROCESSED_STFT_64, 10000
)
```
# Convert streams to STFT
```
# STFT of the stream and then reverse STFT back into original stream
# f, t, Zxx = stft(x_1[1][0], window='hanning', fs=100, nperseg=155)
# k2 = istft(Zxx, window='hanning', fs=100, nperseg=155)
```
# Scale and reshape data
```
SCALING_FACTOR = int(
max(
[
abs(min([x.min() for x in x_train])),
abs(max([x.max() for x in x_train])),
]
)
)
SCALING_FACTOR
x_train /= SCALING_FACTOR
x_train = x_train.reshape(x_train.shape[0], 64, 64, 1)
train_dataset = (
tf.data.Dataset.from_tensor_slices(x_train)
.shuffle(BUFFER_SIZE)
.batch(BATCH_SIZE)
)
train_dataset
```
# Logs and Tensorboard
```
folder_name = f"{MODEL_NAME} at {strftime('%H:%M')}"
log_dir = os.path.join("../log/", folder_name)
try:
os.makedirs(log_dir)
except OSError as exception:
print(exception.strerror)
else:
print("Successfully created dirs!")
```
# Define GAN
```
def make_generator_model():
model = tf.keras.Sequential()
model.add(
layers.Dense(2 * 2 * 128, use_bias=False, input_shape=(LATENT_DIM,))
)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((2, 2, 128)))
model.add(
layers.Conv2DTranspose(
64, (20, 20), strides=(2, 2), padding="same", use_bias=False
)
)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(
layers.Conv2DTranspose(
64, (20, 20), strides=(2, 2), padding="same", use_bias=False
)
)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(
layers.Conv2DTranspose(
64, (20, 20), strides=(2, 2), padding="same", use_bias=False
)
)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(
layers.Conv2DTranspose(
64, (20, 20), strides=(2, 2), padding="same", use_bias=False
)
)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(
layers.Conv2DTranspose(
64, (20, 20), strides=(2, 2), padding="same", use_bias=False
)
)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(
layers.Conv2DTranspose(
1,
(1, 1),
strides=(1, 1),
padding="same",
use_bias=False,
activation="tanh",
)
)
return model
generator = make_generator_model()
# noise = tf.random.normal(dtype=tf.dtypes.float32, shape=[78, 78], stddev=5)
noise = tf.random.normal([BATCH_SIZE, LATENT_DIM], stddev=10e5)
generated_stft = generator(noise, training=False)
generated_stft.shape
tf.keras.utils.plot_model(generator, show_shapes=True)
inversed = istft(generated_stft[0, :, :, 0], window='hanning', fs=66, nperseg=127)
plot_single_stream(inversed[1][:4000], "GAN Generator Noise")
def make_discriminator_model():
model = tf.keras.Sequential()
model.add(
layers.Conv2D(64, (5, 5), strides=(2, 2), padding="same", input_shape=[64, 64, 1])
)
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding="same"))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Flatten())
model.add(layers.Dense(1))
return model
discriminator = make_discriminator_model()
decision = discriminator(generated_stft)
decision
# This method returns a helper function to compute cross entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def discriminator_loss(real_output, fake_output):
real_loss = cross_entropy(tf.ones_like(real_output), real_output)
fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
total_loss = real_loss + fake_loss
return total_loss
def generator_loss(fake_output):
return cross_entropy(tf.ones_like(fake_output), fake_output)
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
# You will reuse this seed overtime (so it's easier)
# to visualize progress in the animated GIF)
seed = tf.random.normal([NUM_EXAMPLES_TO_GENERATE, LATENT_DIM])
# Notice the use of `tf.function`
# This annotation causes the function to be "compiled".
@tf.function
def train_step(images):
noise = tf.random.normal([BATCH_SIZE, LATENT_DIM])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = generator(noise, training=True)
real_output = discriminator(images, training=True)
fake_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(fake_output)
disc_loss = discriminator_loss(real_output, fake_output)
gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for image_batch in dataset:
train_step(image_batch)
# Produce images for the GIF as you go
display.clear_output(wait=True)
generate_and_save_images(generator,
epoch + 1,
seed)
# Save the model every 15 epochs
if (epoch + 1) % 15 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start))
# Generate after the final epoch
display.clear_output(wait=True)
generate_and_save_images(generator,
epochs,
seed)
def generate_and_save_images(model, epoch, test_input):
# Notice `training` is set to False.
# This is so all layers run in inference mode (batchnorm).
predictions = model(test_input, training=False)
for i in range(predictions.shape[0]):
inversed = istft(
predictions[i, :, :, 0][:4000], window="hanning", fs=66, nperseg=127
)
plot_single_stream(
inversed[1][:4000],
f"GAN Event (epoch {epoch})",
# file_path=f"out/image_at_epoch_{epoch}.jpg"
)
train(train_dataset, EPOCHS)
```
|
github_jupyter
|
# Introduction
This is a basic tutorial on using Jupyter to use the scipy modules.
# Example of plotting sine and cosine functions in the same plot
Install matplotlib through conda via:
conda install -y matplotlib
Below we plot a sine function from 0 to 2 pi. Pretty much what you would expect:
```
import math
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 3 * math.pi, 50)
y = np.sin(x)
plt.plot(x, y)
plt.show()
```
The x values limit the range of the plot.
Let's get help on the plt.plot function, so as to understand how to use it, in addition to the tutorial at http://matplotlib.org/users/pyplot_tutorial.html
```
help(plt.plot)
```
Let's add in 'bo' string to the mix to get dots on the trace:
```
import math
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 2 * math.pi, 50)
y = np.sin(x)
plt.plot(x, y, 'bo')
plt.show()
```
Let's try to add two traces, the second one is a cosine function:
```
import math
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0,2 * math.pi, 50)
y1 = np.sin(x)
y2 = np.cos(x)
plt.plot(x, y1, 'bo', x, y2, 'r+')
plt.show()
```
# Example of using optimize.fmin on the sine function
```
import math
import numpy as np
from scipy import linalg, optimize
# Here, we called this function "func2" which is pretty arbitrary. You will need to use better name in practice, of course:
def func2(x):
return np.sin(x)
optimize.fmin(func2, math.pi - 0.01)
```
Pretty much what we expected. There is a minimum of -1 for this sine wave function (amplitude of 1 here ... would have been different if we multiplied the sine wave by some other factor). We can call the f function to see the value at that point which is pretty darn close to -1:
```
func2(4.71237414)
math.pi * 2 * 0.75
```
# Example of using optimize.root on the sine function
```
help(optimize.root)
```
Let's evaludate the f function (which we know is a sine function) at not quite at the point where it is zero (at pi):
```
func2(math.pi * 0.75)
import math
import numpy as np
from scipy import linalg, optimize
# Here, we called this function "func2" which is pretty arbitrary. You will need to use better name in practice, of course:
def func2(x):
return np.sin(x)
optimize.root(func2, math.pi * 0.75)
```
So it found the root at pi. Notice the "x" value at the end of the output.
You can turn off the verbose output using keyword arguments to the `optimize.root` function.
|
github_jupyter
|
# Métodos de punto fijo I
En esta ocasión empezaremos a implementar un método para obtener una raiz real de una ecuación no lineal. Este método se le conoce como punto fijo, pero la variante especificamente que implementaremos ahora es la de aproximaciones sucesivas.
## Aproximaciones sucesivas
Tenemos un polinomio $f(x) = 2 x^2 - x - 5$ y un valor inicial $x_0 = 1$ y queremos saber cuales son sus raices reales. Lo primero que tenemos que hacer es pasarlo a la siguiente forma:
$$x = g(x)$$
Podemos notar que hay muchas maneras de hacer esto, como por ejemplo:
* $x = 2 x^2 - 5$
* $x = \sqrt{\frac{x + 5}{2}}$
* $x = \frac{5}{2x - 1}$
Pero no todas estas formas nos proveerán de un calculo que converja hacia la solución, por lo que tenemos que analizar la convergencia de cada una, hasta que encontremos una que nos acomode.
Sabemos que si el valor absoluto de la primera derivada de $g(x)$ es menor a 1, $\left| g'(x) < 1 \right|$, converjerá adecuadamente, por lo que esto es lo que analizaremos ahora:
* $g_1(x) = 2 x^2 - 5$, entonces $g_1'(x) = 4 x$
* $g_2(x) = \sqrt{\frac{x + 5}{2}}$, entonces $g_2' = \frac{1}{2} \frac{1}{\sqrt{2(x + 5)}}$
* $g_3(x) = \frac{5}{2x - 1}$, entonces $g_3'(x) = - \frac{10}{(2x - 1)^2}$
De aqui podemos ver que $g_1'(x_0) = 4$, por lo que no converjerá a la solución. $g_2'(x_0) = \frac{1}{2\sqrt{12}}$ en cambio, si nos da un numero menor a 1, por lo que es indicada para hacer iteraciones.
```
from numpy import sqrt
1/(2*sqrt(12))
```
Entonces ya tenemos una formula para iterar a través:
$$x_1 = g(x_0)$$
es decir:
$$x_1 = \sqrt{\frac{x_0 + 5}{2}}$$
```
x_0 = 1
g = lambda x: sqrt((x+5)/2)
x_1 = g(x_0)
x_1
```
Y ya tenemos la primera iteración. Cuando dejaremos de iterar? Cuando el error $\varepsilon = x_1 - x_0$ sea menor a $0.001$ (para este ejemplo).
```
abs(x_1 - x_0)
```
Como podemos ver, aun falta mucho... pero podemos decirle a la computadora que lo siga haciendo sin notsotros. Primero creamos una funcion que itere la funcion g hasta que la diferencia de las ultimas iteraciones sea menor al error.
```
def aprox_sucesivas(g, x0, error):
xs = [x0]
while True:
xs.append(g(xs[-1]))
if abs(xs[-1] - xs[-2]) <= error:
return xs[-1]
```
Y ahora le damos a esta funcion nuestra $g(x)$, el punto donde queremos que inicie $x_0$ y el error maximo permitido.
```
r = aprox_sucesivas(g, 0, 0.001)
r
from numpy import sin
```
Creamos una funcion en codigo, que contiene la función original, para evaluar que tan bien se aproximo la raiz.
```
f = lambda x: 2*x**2 - x - 5
f(r)
```
Y como podemos ver hizo un muy buen trabajo sin nuestra supervision :)
## Ejercicio
1. Encuentra la segunda raiz de este polinomio.
2. Implementa el método de la secante.
|
github_jupyter
|
```
import pandas as pd
import os
import json
import re
from tinydb import TinyDB, Query
import sqlalchemy as db
```
# Building the Database
We use a database in the backend to serve the data over a REST API to our client. The database is being built with the data frame generated using the `build_game_db.ipynb` notebook. We stored it in the feather format as it is both efficient and compact. We start by reading the stored data frame.
```
df_tmp = pd.read_feather("spiele_db.ftr")
df_tmp
```
We use TinyDB aus our backend. Note that TinyDB is not fit for production use. Up to a few hundred or even a few thousand entries it will do. The advantage is that it is a document database, so it should be possible to port with reasonable effort to a larger scale document database such as MongoDB or CouchDB. Note that this will make our architecture more complex since we will need a separate database server.
We create a new TinyDB database which is basically a JSON file in a specific structure.
```
if os.path.exists('spiele_tinydb.json'):
os.remove('spiele_tinydb.json')
game_db = TinyDB('spiele_tinydb.json')
```
Now we just insert our data frame row by row. Note that this is time consuming. As a workaround we could directly manipulate the JSON but this may not work with future TinyDB versions.
```
for i in df_tmp.index:
game_db.insert(json.loads(df_tmp.loc[i].to_json()))
```
Once we have the TinyDB database we can query it. The syntax is a bit clumsy, but generally it works. In my configuration case insensitive search did not work for some unknow reason.
```
my_query = Query()
query_result = game_db.search(my_query.game_title.matches('.*Dark.*', flags=re.IGNORECASE))
for item in query_result:
print(item["game_title"], " / ", item["game_igdb_name"], ": ", round(item["game_rating"], 2), " / ", item["game_rating_count"])
```
Since we are dealing with JSON here there will be entries where some values are not set in the JSON. Unfortunately searching for them the gives an error. So we have to use a lambda function for searching.
```
def search_by_rating(min_rating = 0, max_rating = 100):
my_query = Query()
test_func = lambda s: True if(isinstance(s, float) and s >= min_rating and s <= max_rating) else False
return game_db.search(my_query.game_rating.test(test_func))
query_result = search_by_rating(90)
print(print(json.dumps(query_result, indent=4, sort_keys=True)))
```
## The REST API
To get a list of games in our frontend on the client we want to filter by the following attributes:
- Minimum Rating: Games above a certain rating as documented in the IGDB.
- Maximum Rating: Games below a certain rating as documented in the IGDB. Mainly to restrict the number of results.
- Minimum Rating Count: Less popular games often have a few ratings on which their overall rating is based. This is not representative, so we select games with a minimum number of ratings.
- Maximum playing time: We potentially want to play only games which don't have excessive playing time.
Note that since we have imperfect data quality by search for these attributes we exclude games where for any reason these attributes are not set.
```
def search_api(min_rating = 0, max_rating = 100, min_rating_count = 0, max_playing_time = 1000):
my_query = Query()
test_rating = lambda s: True if(isinstance(s, float) and s >= min_rating and s <= max_rating) else False
test_time = lambda s: True if(isinstance(s, float) and s <= max_playing_time) else False
return game_db.search((my_query.game_rating.test(test_rating)) & (my_query.game_rating_count >= min_rating_count) &
(my_query.hltb_main.test(test_time)))
query_result = search_api(min_rating = 75, min_rating_count = 20, max_playing_time = 5)
print("Number of results: ", len(query_result))
print(print(json.dumps(query_result, indent=4, sort_keys=True)))
```
Summarising the result gives a better overview. We are working with lists of dictionaries a return valies of the search functions in TinyDB.
```
result_sorted = sorted(query_result, key=lambda x: x["game_rating"], reverse=True)
for item in result_sorted:
print(item["game_title"], " / ", item["game_igdb_name"], ": ", round(item["game_rating"], 2), " / ", item["game_rating_count"])
```
## Alternative Databases
Shortly tried playing around with SQL Alchemy and an SQLite database. As the base structures returned by IGDB are all JSONs substantial cleaning up would be necessary to create a proper SQL database. I stuck with TinyDB as database in the backend.
Another thing to consider would be just a plain JSON object being queried with eg JMESPath. Like TinyDB this would not be production ready, but the querying capabilities would potentially more powerful. Also it would involve porting the code to a real database as JMESPath to my knowledge is not directly supported by any document database.
```
engine = db.create_engine('sqlite://', echo=False)
df_tmp.to_sql('games', con=engine)
engine.execute("SELECT * FROM games").fetchall()
metadata = db.MetaData()
games_table = db.Table('games', metadata, autoload=True, autoload_with=engine)
print(metadata.tables)
print(games_table.columns)
print(games_table.metadata)
```
|
github_jupyter
|
# IPython: beyond plain Python
Adapted from the ICESat2 Hackweek [intro-jupyter-git](https://github.com/ICESAT-2HackWeek/intro-jupyter-git) session. Courtesy of [@fperez](https://github.com/fperez).
When executing code in IPython, all valid Python syntax works as-is, but IPython provides a number of features designed to make the interactive experience more fluid and efficient.
## First things first: running code, getting help
In the notebook, to run a cell of code, hit `Shift-Enter`. This executes the cell and puts the cursor in the next cell below, or makes a new one if you are at the end. Alternately, you can use:
- `Alt-Enter` to force the creation of a new cell unconditionally (useful when inserting new content in the middle of an existing notebook).
- `Control-Enter` executes the cell and keeps the cursor in the same cell, useful for quick experimentation of snippets that you don't need to keep permanently.
```
print("Hi")
```
Getting help:
```
?
```
Typing `object_name?` will print all sorts of details about any object, including docstrings, function definition lines (for call arguments) and constructor details for classes.
```
import numpy as np
np.linspace?
np.isclose??
*int*?
```
An IPython quick reference card:
```
%quickref
```
## Tab completion
Tab completion, especially for attributes, is a convenient way to explore the structure of any object you’re dealing with. Simply type `object_name.<TAB>` to view the object’s attributes. Besides Python objects and keywords, tab completion also works on file and directory names.
```
np.
```
## The interactive workflow: input, output, history
```
2+10
_+10
```
You can suppress the storage and rendering of output if you append `;` to the last cell (this comes in handy when plotting with matplotlib, for example):
```
10+20;
_
```
The output is stored in `_N` and `Out[N]` variables:
```
_11 == Out[11]
```
Previous inputs are available, too:
```
In[11]
_i
%history -n 1-5
```
**Exercise**
Use `%history?` to have a look at `%history`'s magic documentation, and write the last 10 lines of history to a file named `log.py`.
## Accessing the underlying operating system
```
!pwd
files = !ls
print("files this directory:")
print(files)
files
!echo $files
!echo {files[0].upper()}
```
Note that all this is available even in multiline blocks:
```
import os
for i,f in enumerate(files):
if f.endswith('ipynb'):
!echo {"%02d" % i} - "{os.path.splitext(f)[0]}"
else:
print('--')
```
## Beyond Python: magic functions
The IPyhton 'magic' functions are a set of commands, invoked by prepending one or two `%` signs to their name, that live in a namespace separate from your normal Python variables and provide a more command-like interface. They take flags with `--` and arguments without quotes, parentheses or commas. The motivation behind this system is two-fold:
- To provide an namespace for controlling IPython itself and exposing other system-oriented functionality that is separate from your Python variables and functions. This lets you have a `cd` command accessible as a magic regardless of whether you have a Python `cd` variable.
- To expose a calling mode that requires minimal verbosity and typing while working interactively. Thus the inspiration taken from the classic Unix shell style for commands.
```
%magic
```
Line vs cell magics:
Magics can be applied at the single-line level or to entire cells. Line magics are identified with a single `%` prefix, while cell magics use `%%` and can only be used as the first line of the cell (since they apply to the entire cell). Some magics, like the convenient `%timeit` that ships built-in with IPython, can be called in either mode, while others may be line- or cell-only (you can see all magics with `%lsmagic`).
Let's see this with some `%timeit` examples:
```
%timeit list(range(1000))
%%timeit
# comment here
list(range(10))
list(range(100))
```
Line magics can be used even inside code blocks:
```
for i in range(1, 5):
size = i*100
print('size:', size, end=' ')
%timeit list(range(size))
```
Magics can do anything they want with their input, so it doesn't have to be valid Python (note that the below may not work on a Windows machine, depending on how you are running Jupyter on it):
```
%%bash
echo "My shell is:" $SHELL
echo "My disk usage is:"
df -h
```
Another interesting cell magic: create any file you want locally from the notebook:
```
%%writefile test.txt
This is a test file!
It can contain anything I want...
And more...
!cat test.txt
```
Let's see what other magics are currently defined in the system:
```
%lsmagic
def to_optimize(N):
total = [0,0]
ta = 0
tb = 0
for i in range(N):
for j in range(N):
a = i**2
b = j*2
total[0] += a
total[1] += b
return total
%timeit to_optimize(1_000)
%prun to_optimize(1_000)
```
## Running normal Python code: execution and errors
Not only can you input normal Python code, you can even paste straight from a Python or IPython shell session:
```
>>> # Fibonacci series:
... # the sum of two elements defines the next
... a, b = 0, 1
>>> while b < 10:
... print(b)
... a, b = b, a+b
In [1]: for i in range(10):
...: print(i, end=' ')
...:
```
And when your code produces errors, you can control how they are displayed with the `%xmode` magic:
```
%%writefile mod.py
def f(x):
return 1.0/(x-1)
def g(y):
return f(y+1)
```
Now let's call the function `g` with an argument that would produce an error:
```
import mod
mod.g(0)
```
## Basic debugging
When running code interactively, it can be tricky to figure out how to debug...
```
%debug
enjoy = input('Are you enjoying this tutorial? ')
print('enjoy is:', enjoy)
```
## Running code in other languages with special `%%` magics
```
%%perl
@months = ("July", "August", "September");
print $months[0];
```
## Plotting in the notebook
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 2*np.pi, 300)
y = np.sin(x**2)
plt.plot(x, y)
plt.title("A little chirp")
```
---
## A quick tour of widgets
This is meant to provide a quick overview of some of the things you can do with Jupyter widgets. For more ideas, you can check out [the docs](https://ipywidgets.readthedocs.io/en/latest/), and the notebooks from the [ICESat2 Hackweek](https://github.com/ICESAT-2HackWeek/intro-jupyter-git)
```
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets
%matplotlib inline
def sin_x(x, frequency=1, phase=0):
return np.sin(
2*np.pi*frequency*x + phase
)
def plot_sin_x(frequency=1, phase=0, title="a"):
x = np.linspace(-1, 1, 200)
plt.plot(x, sin_x(x, frequency, phase))
plt.title(title)
plot_sin_x()
```
### using interactive
```
widget = ipywidgets.interactive(plot_sin_x, frequency=1.5, phase=0.)
widget
```
### specifying the widgets
```
mywidget = ipywidgets.interactive(
plot_sin_x,
frequency = ipywidgets.FloatSlider(min=0, max=10, value=1, display="f"),
# phase = ipywidgets.FloatSlider(min=-np.pi, max=np.pi, value=0)
phase = ipywidgets.FloatText(value=0),
title = ipywidgets.ToggleButtons(options=["a", "b"])
)
mywidget
mywidget.children[1].value
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/a-essa/Sentiment-Analysis-and-Satisfaction-Prediction/blob/master/ProjetTripAdvisor_Final.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Projet
```
%tensorflow_version 2.x
import tensorflow as tf
print(tf.__version__)
from __future__ import absolute_import, division, print_function, unicode_literals
from google.colab import files
import numpy as np
import tensorflow_datasets as tfds
import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt
import os
import io
from tensorflow import keras
from tensorflow.keras import layers
import keras.preprocessing.text
import tensorflow.keras.backend as K
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix as cm
from sklearn.metrics import multilabel_confusion_matrix as mcm
import tensorflow_hub as hub
tf.keras.backend.clear_session()
np.set_printoptions(precision=3, suppress=True)
pd.options.display.max_colwidth = 1000
from keras.regularizers import l2
print(tf.__version__)
from google.colab import drive
drive.mount('/content/gdrive')
```
# Read CSV
```
Total_data_for_tokenizer=pd.read_csv('/content/gdrive/My Drive/Data/Total_data_for_tokenizer.csv' , sep='\t')
training_set_TripAdvisor=pd.read_csv('/content/gdrive/My Drive/Data/training_set_TripAdvisor.csv' , sep='\t')
test_set_TripAdvisor=pd.read_csv('/content/gdrive/My Drive/Data/test_set_TripAdvisor.csv' , sep='\t')
training_set_IMDB=pd.read_csv('/content/gdrive/My Drive/Data/training_set_IMDB.csv' , sep='\t')
test_set_IMDB=pd.read_csv('/content/gdrive/My Drive/Data/test_set_IMDB.csv' , sep='\t')
del Total_data_for_tokenizer['Unnamed: 0']
del training_set_TripAdvisor['Unnamed: 0']
del test_set_TripAdvisor['Unnamed: 0']
del training_set_IMDB['Unnamed: 0']
del test_set_IMDB['Unnamed: 0']
```
# Tokenizer
```
def create_dataset(dataframe):
msk = np.random.rand(len(dataframe)) < 0.8
train = dataframe[msk]
test = dataframe[~msk]
#print(train_file_path.head)
all_data = np.array(dataframe.values.tolist())
training_set = np.array(train.values.tolist())
test_set=np.array(test.values.tolist())
print("Dataset Length: ",len(training_set)+ len(test_set))
return all_data, training_set, test_set
def create_tokenizer(dataset):
tokenizer = Tokenizer(filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n', lower=True, split=' ', char_level=False, oov_token='UNK')
# fit the tokenizer on the documents
tokenizer.fit_on_texts(dataset[:,1])
# summarize what was learned
print("Word counter: ",tokenizer.word_counts)
print("Number of sentences: ",tokenizer.document_count)
print("Word Index: ", tokenizer.word_index)
print("Word Docs: ",tokenizer.word_docs)
return tokenizer
def vocab_size(tokenizer):
voca_size=len(tokenizer.word_counts)+2 # 1 ?
return voca_size
def maxlen(data):
return max([len(x) for x in data])
Total_data_for_tokenizer_np = np.array(Total_data_for_tokenizer.values.tolist())
tokenizer=create_tokenizer(Total_data_for_tokenizer_np)
def tokenize_sentences(input_set):
input_sequences = []
for line in input_set[:,1]:
token_list = tokenizer.texts_to_sequences([line])[0]
input_sequences.append(token_list)
return input_sequences
training_set_IMDB_np = np.array(training_set_IMDB.values.tolist())
test_set_IMDB_np = np.array(test_set_IMDB.values.tolist())
training_set_TripAdvisor_np = np.array(training_set_TripAdvisor.values.tolist())
test_set_TripAdvisor_np = np.array(test_set_TripAdvisor.values.tolist())
input_training_set_IMDB = tokenize_sentences(training_set_IMDB_np)
input_test_set_IMDB = tokenize_sentences(test_set_IMDB_np)
input_training_set_TripAdvisor = tokenize_sentences(training_set_TripAdvisor_np)
input_test_set_TripAdvisor = tokenize_sentences(test_set_TripAdvisor_np)
def pad_sentences(input_sequences, max_sequence_len):
input_padded_sequences = np.array(pad_sequences(input_sequences,
maxlen=max_sequence_len, padding='pre'))
return input_padded_sequences
def pad_sentences_post(input_sequences, max_sequence_len):
input_padded_sequences = np.array(pad_sequences(input_sequences,
maxlen=max_sequence_len, padding='post'))
return input_padded_sequences
def maxlen(data1, data2 , data3 , data4):
max_len_data1 = max([len(x) for x in data1])
max_len_data2 = max([len(x) for x in data2])
max_len_data3 = max([len(x) for x in data3])
max_len_data4 = max([len(x) for x in data4])
return max(max_len_data1, max_len_data2 , max_len_data3 , max_len_data4)
max_len = maxlen(input_training_set_IMDB, input_test_set_IMDB, input_training_set_TripAdvisor, input_test_set_TripAdvisor)
```
#Embeddings pretraining (Next Word Prediction)
```
def split_sentences_forward(input_set):
input_sequences = []
for line in input_set:
for i in range(1, len(line)):
if i < 99 :
n_gram_sequence = line[:i+1]
else :
n_gram_sequence = line[i-99:i+1]
input_sequences.append(n_gram_sequence)
return input_sequences
def split_sentences_Backward(input_set):
input_sequences = []
for line in input_set[:,1]:
token_list = tokenizer.texts_to_sequences([line])[0]
for i in range(1, len(token_list)):
if i < 99 :
n_gram_sequence = token_list[:i+1]
else :
n_gram_sequence = token_list[i-99:i+1]
input_sequences.append(n_gram_sequence)
return input_sequences
def create_dataset_slices_embaddings(input_sequences):
predictors, label = input_sequences[:,:-1],input_sequences[:,-1]
#label = tf.keras.utils.to_categorical(label, num_classes=voc_size)
return predictors, label
def create_preprocessed_dataset(predictors, labels):
dataset = tf.data.Dataset.from_tensor_slices((predictors, labels))
return dataset
def top_k_categorical_accuracy1(y_true, y_pred, k=10):
return K.mean(K.in_top_k(K.cast(y_pred,dtype='float32'),K.argmax(y_true, axis=-1), k), axis=-1)
input_train_WE_1=split_sentences_forward(input_training_set_IMDB)
input_train_WE_2=split_sentences_forward(input_training_set_TripAdvisor)
input_test_WE_1=split_sentences_forward(input_test_set_IMDB[5000:])
input_test_WE_2=split_sentences_forward(input_test_set_TripAdvisor)
input_train_WE_sequences_1 = pad_sentences(input_train_WE_1, 100)
input_train_WE_sequences_2 = pad_sentences(input_train_WE_2, 100)
input_test_WE_sequences_1 = pad_sentences(input_test_WE_1, 100)
input_test_WE_sequences_2 = pad_sentences(input_test_WE_2, 100)
training_predictors_WE, training_label_WE=create_dataset_slices_embaddings(input_train_WE_sequences_1)
test_predictors_WE, test_label_WE= create_dataset_slices_embaddings(input_test_WE_sequences_1)
training_predictors_WE_2, training_label_WE_2=create_dataset_slices_embaddings(input_train_WE_sequences_2)
test_predictors_WE_2, test_label_WE_2= create_dataset_slices_embaddings(input_test_WE_sequences_2)
voc_size = vocab_size(tokenizer)
EmbeddingLayer = tf.keras.layers.Embedding(voc_size, 64)
LstmLayer = tf.keras.layers.LSTM(150)
DenseLayerEmbedding = tf.keras.layers.Dense(64, activation='relu')
DropLayerEmbedding = tf.keras.layers.Dropout(0.1)
DenseLayerOutputEmbedding = tf.keras.layers.Dense(voc_size, activation='softmax')
model = tf.keras.Sequential()
model.add(EmbeddingLayer)
model.add(LstmLayer)
model.add(DenseLayerEmbedding)
model.add(DropLayerEmbedding)
model.add(DenseLayerOutputEmbedding)
model.summary()
model.compile(loss='sparse_categorical_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=["accuracy"])
checkpoint_path = "training_Embadding/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create a callback that saves the model's weights every epochs
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path,
verbose=1,
save_weights_only=True)
history = model.fit(training_predictors_WE,
training_label_WE,
epochs=2,
batch_size=100,
validation_data=(test_predictors_WE, test_label_WE),
verbose=1,
callbacks=[cp_callback])
history = model.fit(training_predictors_WE_2,
training_label_WE_2,
epochs=2,
batch_size=100,
validation_data=(test_predictors_WE_2, test_label_WE_2),
verbose=1,
callbacks=[cp_callback])
```
#Dataset slices
```
from __future__ import absolute_import, division, print_function, unicode_literals
input_train_CL_sequences_1 = pad_sentences_post(input_training_set_IMDB, max_len)
input_train_CL_sequences_2 = pad_sentences_post(input_training_set_TripAdvisor, max_len)
input_test_CL_sequences_1 = pad_sentences_post(input_test_set_IMDB, max_len)
input_test_CL_sequences_2 = pad_sentences_post(input_test_set_TripAdvisor, max_len)
def create_dataset_slices_sentence_Regre(input_sequences, targets):
predictors= tf.constant(input_sequences)
targets=targets.astype(int)
return predictors, targets
training_predictors, training_label=create_dataset_slices_sentence_Regre(input_train_CL_sequences_2,training_set_TripAdvisor_np[:,0])
test_predictors, test_label=create_dataset_slices_sentence_Regre(input_test_CL_sequences_2,test_set_TripAdvisor_np[:,0])
training_predictors_tfds, training_label_tfds=create_dataset_slices_sentence_Regre(input_train_CL_sequences_1,training_set_IMDB_np[:,0])
test_predictors_tfds, test_label_tfds=create_dataset_slices_sentence_Regre(input_test_CL_sequences_1,test_set_IMDB_np[:,0])
from keras.regularizers import l2
```
#Model
```
BidirectionalLayer1 = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences = True))
BidirectionalLayer2 = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences = True))
GlobalMaxPool1DLayer = tf.keras.layers.GlobalMaxPool1D()
DenseLayer1 = tf.keras.layers.Dense(40, kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01) , activation='relu')
DropLayer = tf.keras.layers.Dropout(0.1)
DenseLayerOutputRE = tf.keras.layers.Dense(1, activation=tf.keras.activations.linear)
EmbeddingLayer.trainable = False
modelBin = tf.keras.Sequential()
modelBin.add(EmbeddingLayer)
modelBin.add(BidirectionalLayer1)
modelBin.add(BidirectionalLayer2)
modelBin.add(GlobalMaxPool1DLayer)
modelBin.add(DenseLayer1)
modelBin.add(DropLayer)
modelBin.add(DenseLayerOutputRE)
modelBin.summary()
modelBin.compile(loss='mean_squared_error',
optimizer=tf.keras.optimizers.Adam(),
metrics=['mse','mae'])
history = modelBin.fit(training_predictors_tfds,
training_label_tfds,
epochs=3,
batch_size=100,
validation_data=(test_predictors_tfds, test_label_tfds),
verbose=1)
EmbeddingLayer.trainable = True
modelBin.compile(loss='mean_squared_error',
optimizer=tf.keras.optimizers.Adam(),
metrics=['mse','mae','accuracy'])
history = modelBin.fit(training_predictors_tfds,
training_label_tfds,
epochs=2,
batch_size=100,
validation_data=(test_predictors_tfds, test_label_tfds),
verbose=1)
model1 = tf.keras.Sequential()
model1.add(EmbeddingLayer)
model1.add(BidirectionalLayer1)
model1.add(BidirectionalLayer2)
model1.add(GlobalMaxPool1DLayer)
model1.add(DenseLayer1)
model1.add(DropLayer)
model1.add(DenseLayerOutputRE)
model1.summary()
EmbeddingLayer.trainable = False
BidirectionalLayer1.trainable = False
BidirectionalLayer2.trainable = False
GlobalMaxPool1DLayer.trainable = False
DenseLayer1.trainable = False
model1.compile(loss='mean_squared_error',
optimizer=tf.keras.optimizers.Adam(),
metrics=['mse','mae','accuracy'])
history = model1.fit(training_predictors,
training_label,
epochs=5,
batch_size=100,
validation_data=(test_predictors, test_label),
verbose=1)
DenseLayer1.trainable = True
model1.compile(loss='mean_squared_error',
optimizer=tf.keras.optimizers.Adam(),
metrics=['mse','mae','accuracy'])
history = model1.fit(training_predictors,
training_label,
epochs=4,
batch_size=100,
validation_data=(test_predictors, test_label),
verbose=1)
EmbeddingLayer.trainable = True
BidirectionalLayer1.trainable = True
BidirectionalLayer2.trainable = True
GlobalMaxPool1DLayer.trainable = True
model1.compile(loss='mean_squared_error',
optimizer=tf.keras.optimizers.Adam(),
metrics=['mse','mae','accuracy'])
history = model1.fit(training_predictors,
training_label,
epochs=4,
batch_size=100,
validation_data=(test_predictors, test_label),
verbose=1)
```
#From saved weights
```
'''EmbeddingLayer = tf.keras.layers.Embedding(80008, 64)
BidirectionalLayer1 = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences = True))
BidirectionalLayer2 = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences = True))
GlobalMaxPool1DLayer = tf.keras.layers.GlobalMaxPool1D()
DenseLayer1 = tf.keras.layers.Dense(40, kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01) , activation='relu')
DropLayer = tf.keras.layers.Dropout(0.1)
DenseLayerOutputRE = tf.keras.layers.Dense(1, activation=tf.keras.activations.linear)'''
'''model1 = tf.keras.Sequential()
model1.add(EmbeddingLayer)
model1.add(BidirectionalLayer1)
model1.add(BidirectionalLayer2)
model1.add(GlobalMaxPool1DLayer)
model1.add(DenseLayer1)
model1.add(DropLayer)
model1.add(DenseLayerOutputRE)
model1.summary()'''
'''model1.load_weights('/content/gdrive/My Drive/Chekpoints/model_checkpoint')'''
```
#Validation
```
from sklearn.metrics import mean_squared_error
import nltk
nltk.download("stopwords")
nltk.download("punkt")
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
def clean_set(set_len):
stop_words = list(stopwords.words('english'))
stop_words.remove("not")
stop_words.remove("no")
stop_words.remove("isn't")
stop_words.remove("hasn't")
stop_words.remove("wasn't")
stop_words.remove("didn't")
stop_words.remove("haven't")
stop_words.remove("don't")
stop_words.remove("doesn't")
stop_words.remove("weren't")
stop_words.remove("shouldn't")
lemmatizer = WordNetLemmatizer()
for elem in set_len:
word_tokens = word_tokenize(elem[1])
filtered_sentence = []
for w in word_tokens:
if w not in stop_words:
filtered_sentence.append(lemmatizer.lemmatize(w))
elem[1]=' '.join(filtered_sentence)
column_names = ['reviews.rating','reviews.text']
eval_data = pd.read_csv('http://christophe-rodrigues.fr/eval_reviews.csv', usecols=column_names, sep=";")
eval_data = np.array(eval_data.values.tolist())
clean_set(eval_data)
eval_data_Text = tokenize_sentences(eval_data)
input_eval = pad_sentences_post(eval_data_Text, max_len)
eval_predictors, eval_label =create_dataset_slices_sentence_Regre(input_eval,eval_data[:,0])
prediction = model1.predict(eval_predictors)
mse = mean_squared_error(prediction, eval_label)
mse
```
|
github_jupyter
|
# NLTK
## Sentence and Word Tokenization
```
from nltk.tokenize import sent_tokenize, word_tokenize
EXAMPLE_TEXT = "Hello Mr. Smith, how are you doing today? The weather is great, and Python is awesome. The sky is pinkish-blue. You shouldn't eat cardboard."
# Sentence Tokenization
print(sent_tokenize(EXAMPLE_TEXT))
# Word Tokenization
print(word_tokenize(EXAMPLE_TEXT))
```
## Stopwords
```
from nltk.corpus import stopwords
# Printing all stopwords (english)
set(stopwords.words('english'))
example_sent = "This is a sample sentence, showing off the stop words filtration."
stop_words = set(stopwords.words('english'))
word_tokens = word_tokenize(example_sent)
filtered_sentence = [w for w in word_tokens if not w in stop_words]
print(word_tokens)
print(filtered_sentence)
```
## Stemming words
```
# Porter Stemmer is a stemming algorithm
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize
ps = PorterStemmer()
example_words = ["python","pythoner","pythoning","pythoned","pythonly"]
for w in example_words:
print(ps.stem(w))
new_text = "It is important to by very pythonly while you are pythoning with python. All pythoners have pythoned poorly at least once."
words = word_tokenize(new_text)
for w in words:
print(ps.stem(w))
```
## Part of Speech Tagging
# POS tag list:
CC coordinating conjunction
CD cardinal digit
DT determiner
EX existential there (like: "there is" ... think of it like "there exists")
FW foreign word
IN preposition/subordinating conjunction
JJ adjective 'big'
JJR adjective, comparative 'bigger'
JJS adjective, superlative 'biggest'
LS list marker 1)
MD modal could, will
NN noun, singular 'desk'
NNS noun plural 'desks'
NNP proper noun, singular 'Harrison'
NNPS proper noun, plural 'Americans'
PDT predeterminer 'all the kids'
POS possessive ending parent's
PRP personal pronoun I, he, she
PRP\$ possessive pronoun my, his, hers
RB adverb very, silently,
RBR adverb, comparative better
RBS adverb, superlative best
RP particle give up
TO to go 'to' the store.
UH interjection errrrrrrrm
VB verb, base form take
VBD verb, past tense took
VBG verb, gerund/present participle taking
VBN verb, past participle taken
VBP verb, sing. present, non-3d take
VBZ verb, 3rd person sing. present takes
WDT wh-determiner which
WP wh-pronoun who, what
WP$ possessive wh-pronoun whose
WRB wh-abverb where, when
#### PunktSentenceTokenizer
> This tokenizer is capable of unsupervised machine learning, so you can actually train it on any body of text that you use.
```
import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer
# Create training and testing data
train_text = state_union.raw('2005-GWBush.txt')
sample_text = state_union.raw('2006-GWBush.txt')
# Train Punkt tokenizer
custom_sent_tokenizer = PunktSentenceTokenizer(train_text)
# Actually tokenize
tokenized = custom_sent_tokenizer.tokenize(sample_text)
print(tokenized)
# Create a function that will run through and tag all of the parts of speech per sentence
def process_content():
try:
for i in tokenized[ :5]:
words = nltk.word_tokenize(i)
tagged = nltk.pos_tag(words)
print(tagged)
except Exception as e:
print(str(e))
# Output should be a list of tuples, where the first element in the tuple is the word, and the second is the part of speech tag
process_content()
```
## Lemmatizing
> A very similar operation to stemming is called lemmatizing. The major difference between these is, as you saw earlier, stemming can often create non-existent words, whereas lemmas are actual words.
> So, your root stem, meaning the word you end up with, is not something you can just look up in a dictionary, but you can look up a lemma.
> Some times you will wind up with a very similar word, but sometimes, you will wind up with a completely different word.
```
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize("cats"))
print(lemmatizer.lemmatize("cacti"))
print(lemmatizer.lemmatize("geese"))
print(lemmatizer.lemmatize("rocks"))
print(lemmatizer.lemmatize("python"))
print(lemmatizer.lemmatize("better", pos="a"))
print(lemmatizer.lemmatize("best", pos="a"))
print(lemmatizer.lemmatize("run"))
print(lemmatizer.lemmatize("run",'v'))
# Here, we've got a bunch of examples of the lemma for the words that we use.
# The only major thing to note is that lemmatize takes a part of speech parameter, "pos."
# If not supplied, the default is "noun." This means that an attempt will be made to find the closest noun, which can create trouble for you.
# Keep this in mind if you use lemmatizing!
```
## Corpora
> The NLTK corpus is a massive dump of all kinds of natural language data sets
```
# Opening the Gutenberg Bible, and reading the first few lines
from nltk.tokenize import sent_tokenize, PunktSentenceTokenizer
from nltk.corpus import gutenberg
#sample text
sample = gutenberg.raw('bible-kjv.txt')
tok = sent_tokenize(sample)
for x in range(5):
print(tok[x])
```
## Wordnet
> Wordnet is a collection of words, definitions, examples of their use, synonyms, antonyms, and more.
```
# Import wordnet
from nltk.corpus import wordnet
# use the term "program" to find synsets
syns = wordnet.synsets('program')
print(syns)
#Print first synset
print(syns[0].name())
# Print only the word
print(syns[0].lemmas()[0].name())
# Definition for that first synset
print(syns[0].definition())
# Examples of the word in use
print(syns[0].examples())
# Synonyms and Antonyms
# The lemmas will be synonyms,
# and then you can use .antonyms to find the antonyms to the lemmas
synonyms = []
antonyms = []
for syn in wordnet.synsets('good'):
for l in syn.lemmas():
synonyms.append(l.name())
if l.antonyms():
antonyms.append(l.antonyms()[0].name())
print(set(synonyms))
print(set(antonyms))
# compare the similarity of two words and their tenses
w1 = wordnet.synset('ship.n.01')
w2 = wordnet.synset('boat.n.01')
print(w1.wup_similarity(w2))
w1 = wordnet.synset('ship.n.01')
w2 = wordnet.synset('car.n.01')
print(w1.wup_similarity(w2))
w1 = wordnet.synset('ship.n.01')
w2 = wordnet.synset('cat.n.01')
print(w1.wup_similarity(w2))
```
|
github_jupyter
|
# 1) CSV Data File Analysis
```
from os import path
fname = path.expanduser('track.csv')
```
## CSV File Info
```
!ls -lh "$fname"
path.getsize(fname)
path.getsize(fname) / (1<<10)
!head "$fname"
with open(fname) as fp:
for lnum, line in enumerate(fp):
if lnum > 10:
break
print(line[:-1])
!wc -l "$fname"
with open(fname) as fp:
print(sum(1 for line in fp))
```
# 2) Parse Time Series
```
import pandas as pd
df = pd.read_csv(fname)
len(df)
df.columns
df.info()
df.head()
df.dtypes
```
## Parse Dates
```
df = pd.read_csv(fname, parse_dates=['time'])
df.dtypes
```
# 3) Access Rows & Columns Data
```
df['lat'].head()
df.lat.head()
df[['lat', 'lng']].head()
df['lat'][0]
df.loc[0]
df.loc[2:7]
df[['lat', 'lng']][2:7]
df.index
```
## NumPy`.loc[]` example
```
import numpy as np
df1 = pd.DataFrame(np.arange(10).reshape((5,2)), columns=['x', 'y'], index=['a', 'b', 'c', 'd', 'e'])
df1
df1.loc['a']
df1.loc['b': 'd']
```
## Set Index
```
df.index
df.index = df['time']
df.index
```
## Locate Specific Time Data
```
df.loc['2015-08-20 04:18:54']
# all masures in this particular minute
df.loc['2015-08-20 03:48']
```
## Timezone Localization
```
# pytz module contains all timezone information
import pytz
ts = df.index[0]
ts.tz_localize(pytz.UTC)
ts.tz_localize(pytz.UTC).tz_convert(pytz.timezone('Asia/Jerusalem'))
df.index = df.index.tz_localize(pytz.UTC).tz_convert(pytz.timezone('Asia/Jerusalem'))
df.index[:10]
%pwd
```
# 4) Import Custom Modules
```
import geo
import sys
sys.path
??geo
from geo import circle_dist
lat1, lng1 = df.iloc[0].lat, df.iloc[0].lng
lat2, lng2 = df.iloc[1].lat, df.iloc[1].lng
circle_dist(lat1, lng1, lat2, lng2)
```
# 5) Calculate Speed (geo)
## pandas.`shift()` example
```
s = pd.Series(np.arange(5))
s
s.shift()
s.shift(-1)
```
## Calculate Circle Distance of Data
```
dist = circle_dist(df['lat'], df['lng'], df['lat'].shift(), df['lng'].shift())
dist[:10]
dist.sum()
dt = df['time'] - df['time'].shift()
dt[:10] # NaT means Not a Time
dt.sum()
dt[1].total_seconds()
dt[1] / np.timedelta64(1, 'h')
dt[1].total_seconds()/3600
speed = dist / (dt / np.timedelta64(1, 'h'))
speed[:10]
df['dist'] = dist
df['dt'] = dt
df1m = df.resample('1min').sum()
df1m.index
# gives error as dt is not indexed as time
#speed = df1m['dist'] / (df1m['dt'])
df1m.columns
df['dt'] = dt / np.timedelta64(1, 'h')
df1m = df.resample('1min').sum()
speed1m = df1m['dist'] / df1m['dt']
speed1m[:10]
```
# 6) Display Speed Box Plot
```
%matplotlib inline
speed1m.plot()
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (10, 6)
plt.style.available
plt.style.use('seaborn-whitegrid')
speed1m.plot()
speed1m.plot.box()
```
|
github_jupyter
|
# SMOOTHING (LOWPASS) SPATIAL FILTERS
```
import cv2
import matplotlib.pyplot as plt
import numpy as np
```
## FILTERS
filters实际上就是通过一些特殊的kernel $w$ 对图片进行如下操作:
$$
g(x, y) = \sum_{s=-a}^a \sum_{t=-b}^b w(s, t) f(x+s, y+t), \: x = 1,2,\cdots, M, \: y = 1, 2,\cdots N.
$$
其中$w(s, t) \in \mathbb{R}^{m \times n}, m=2a+1, n = 2b+1$.
注: 一般来说kernel的边是奇数, 这样可以确定唯一的中心, 但是偶数其实也是可以的.
实际上, 上面可以转换成卷积的形式:
$$
(w * f) (x, y) = \sum_{s=-a}^a \sum_{t=-b}^b w'(s, t) f(x-s, y-t), \: x = 1,2,\cdots, M, \: y = 1, 2,\cdots N.
$$
只是$w'(s, t) = w(-s, -t)$, 不过下面我们仅考虑卷积操作, 故直接定义为:
$$
(w * f) (x, y) = \sum_{s=-a}^a \sum_{t=-b}^b w(s, t) f(x-s, y-t), \: x = 1,2,\cdots, M, \: y = 1, 2,\cdots N.
$$
即可.
注: 注意到上面会出现$f(-1, -1)$之类的未定义情况, 常见的处理方式是在图片周围加padding(分别为pad a, b), 比如补0或者镜像补.
用卷积的目的是其特别的性质:
1. $f * g = g * f$;
2. $f * (g * h) = (f * g) * h$;
3. $f * (g + h) = (f * g) + (g * h)$.
注: $f, g, h$应当形状一致.
特别的, 如果
$$
w = uv^T,
$$
则
$$
w * f = u' * (v^T * f), \quad u'(x) = u(-x).
$$
可以显著降低计算量.
## Box Filter Kernels
即
$$
w_{ij} = \frac{1}{mn}, \quad i=1,2,\cdots, m, \: j=1,2,\cdots, n.
$$
```
img = cv2.imread("./pics/alphabeta.png")
img.shape
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # 由于是截图, 先转成灰度图
plt.imshow(img, cmap='gray')
# 或者等价地用 cv2.blur(img, (m, n))
kernels = [np.ones((i, i)) / (i * i) for i in [3, 11, 21]]
imgs_smoothed = [cv2.filter2D(img, -1, kernel) for kernel in kernels]
fig, axes = plt.subplots(2, 2)
axes[0, 0].imshow(img, cmap='gray')
axes[0, 0].set_title("raw")
axes[0, 1].imshow(imgs_smoothed[0], cmap="gray")
axes[0, 1].set_title("3x3")
axes[1, 0].imshow(imgs_smoothed[1], cmap="gray")
axes[1, 0].set_title("11x11")
axes[1, 1].imshow(imgs_smoothed[2], cmap="gray")
axes[1, 1].set_title("21x21")
plt.tight_layout()
plt.show()
```
### Lowpass Gaussian Filter Kernels
即
$$
w(s, t) = G(s, t) = K e^{-\frac{s^2+t^2}{2\sigma^2}},
$$
高斯分布的特点是绝大部分集中于$(-3\sigma, +3\sigma)$之间, 故一般$w$的大小选择为$(-6\sigma, +6\sigma)$, 需要注意的是, $\sigma$的选择和图片的大小息息相关.
```
imgs_smoothed = [cv2.GaussianBlur(img, ksize=ksize, sigmaX=sigma) for (ksize, sigma) in [((5, 5), 1), ((21, 21), 3.5), ((43, 43), 7)]]
fig, axes = plt.subplots(1, 4)
axes[0].imshow(img, cmap='gray')
axes[0].set_title("raw")
axes[1].imshow(imgs_smoothed[0], cmap="gray")
axes[1].set_title("5x5, 1")
axes[2].imshow(imgs_smoothed[1], cmap="gray")
axes[2].set_title("21x21, 3.5")
axes[3].imshow(imgs_smoothed[2], cmap="gray")
axes[3].set_title("43x43, 7")
plt.tight_layout()
plt.show()
```
### Order-Statistic (Nonlinear) Filters
即$g(x, y)$由$(x, y)$周围的点的一个某个顺序的值代替, 比如median.
```
imgs_smoothed = [cv2.medianBlur(img, ksize=ksize) for ksize in [3, 7, 15]]
fig, axes = plt.subplots(1, 4)
axes[0].imshow(img, cmap='gray')
axes[0].set_title("raw")
axes[1].imshow(imgs_smoothed[0], cmap="gray")
axes[1].set_title("3x3")
axes[2].imshow(imgs_smoothed[1], cmap="gray")
axes[2].set_title("7x7")
axes[3].imshow(imgs_smoothed[2], cmap="gray")
axes[3].set_title("15x15")
plt.tight_layout()
plt.show()
```
|
github_jupyter
|
```
#@title blank template
#@markdown This notebook from [github.com/matteoferla/pyrosetta_help](https://github.com/matteoferla/pyrosetta_help).
#@markdown It can be opened in Colabs via [https://colab.research.google.com/github/matteoferla/pyrosetta_help/blob/main/colabs/colabs-pyrosetta.ipynb](https://colab.research.google.com/github/matteoferla/pyrosetta_help/blob/main/colabs/colabs-pyrosetta.ipynb)
#@markdown It just for loading up PyRosetta
#@title Installation
#@markdown Installing PyRosetta with optional backup to your drive (way quicker next time!).
#@markdown Note that PyRosetta occupies some 10 GB, so you'll need to be on the 100 GB plan of Google Drive (it's one pound a month).
#@markdown The following is not the real password. However, the format is similar.
username = 'boltzmann' #@param {type:"string"}
password = 'constant' #@param {type:"string"}
#@markdown Release to install:
_release = 'release-295' #@param {type:"string"}
#@markdown Use Google Drive for PyRosetta (way faster next time, but takes up space)
#@markdown (NB. You may be prompted to follow a link and possibly authenticate and then copy a code into a box
use_drive = True #@param {type:"boolean"}
#@markdown Installing `rdkit` and `rdkit_to_params` allows the creation of custom topologies (params) for new ligands
install_rdkit = True #@param {type:"boolean"}
import sys
import platform
import os
assert platform.dist()[0] == 'Ubuntu'
py_version = str(sys.version_info.major) + str(sys.version_info.minor)
if use_drive:
from google.colab import drive
drive.mount('/content/drive')
_path = '/content/drive/MyDrive'
os.chdir(_path)
else:
_path = '/content'
if not any(['PyRosetta4.Release' in filename for filename in os.listdir()]):
assert not os.system(f'curl -u {username}:{password} https://graylab.jhu.edu/download/PyRosetta4/archive/release/PyRosetta4.Release.python{py_version}.ubuntu/PyRosetta4.Release.python{py_version}.ubuntu.{_release}.tar.bz2 -o /content/a.tar.bz2')
assert not os.system('tar -xf /content/a.tar.bz2')
assert not os.system(f'pip3 install -e {_path}/PyRosetta4.Release.python{py_version}.ubuntu.{_release}/setup/')
assert not os.system(f'pip3 install pyrosetta-help biopython')
if install_rdkit:
assert not os.system(f'pip3 install rdkit-pypi rdkit-to-params')
import site
site.main()
#@title Start PyRosetta
import pyrosetta
import pyrosetta_help as ph
no_optH = False #@param {type:"boolean"}
ignore_unrecognized_res=False #@param {type:"boolean"}
load_PDB_components=False #@param {type:"boolean"}
ignore_waters=False #@param {type:"boolean"}
extra_options= ph.make_option_string(no_optH=no_optH,
ex1=None,
ex2=None,
mute='all',
ignore_unrecognized_res=ignore_unrecognized_res,
load_PDB_components=load_PDB_components,
ignore_waters=ignore_waters)
# capture to log
logger = ph.configure_logger()
pyrosetta.init(extra_options=extra_options)
## Usual stuff
pose = ph.pose_from_alphafold2('P02144')
scorefxn = pyrosetta.get_fa_scorefxn()
relax = pyrosetta.rosetta.protocols.relax.FastRelax(scorefxn, 3)
movemap = pyrosetta.MoveMap()
movemap.set_bb(False)
movemap.set_chi(True)
relax.apply(pose)
# Note that nglview does not work with Colabs but py3Dmol does.
# install py3Dmol
os.system(f'pip3 install py3Dmol')
import site
site.main()
# run
import py3Dmol
view = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js',)
view.addModel(ph.get_pdbstr(pose),'pdb')
view.zoomTo()
view
# Also note that RDKit Chem.Mol instances are not displays as representations by default.
#@title Upload to Michelanglo (optional)
#@markdown [Michelanglo](https://michelanglo.sgc.ox.ac.uk/) is a website that
#@markdown allows the creation, annotation and sharing of a webpage with an interactive protein viewport.
#@markdown ([examples](https://michelanglo.sgc.ox.ac.uk/gallery)).
#@markdown The created pages are private —they have a 1 in a quintillion change to be guessed within 5 tries.
#@markdown Registered users (optional) can add interactive annotations to pages.
#@markdown A page created by a guest is editable by registered users with the URL to it
#@markdown (this can be altered in the page settings).
#@markdown Leave blank for guest (it will not add an interactive description):
username = '' #@param {type:"string"}
password = '' #@param {type:"string"}
import os
assert not os.system(f'pip3 install michelanglo-api')
import site
site.main()
from michelanglo_api import MikeAPI, Prolink
if not username:
mike = MikeAPI.guest_login()
else:
mike = MikeAPI(username, password)
page = mike.convert_pdb(pdbblock=ph.get_pdbstr(pose),
data_selection='ligand',
data_focus='residue',
)
if username:
page.retrieve()
page.description = '## Description\n\n'
page.description += 'autogen bla bla'
page.commit()
page.show_link()
```
|
github_jupyter
|
```
from simplexlib.src.table import Table, V, Format, Simplex, pretty_value
from IPython.display import display_markdown
from src.branch_and_bound import BranchAndBound
source = Table.straight(
[2, 5, 3],
V[2, 1, 2] <= 6,
V[1, 2, 0] <= 6,
V[0, 0.5, 1] <= 2,
) >> min
display_markdown(
"### Исходная задача в канонической форме",
f"${Format(source).target()}$",
Format(source).system(),
f"${Format(source).var_zero_constraint()}$",
raw=True
)
sresult = Simplex.resolve(source >> max)
display_markdown(
"### Решение исходной ЦЛП simplex-методом:",
"#### Исходная таблица:",
Format(source).table(),
raw=True,
)
for table, pos in zip(sresult.history, sresult.solvers):
display_markdown(
f"#### Индекс разрешающего элемента: {pos}",
Format(table).table(),
f"{Format(table).base_vars()}, {Format(table).free_vars()}",
raw=True,
)
display_markdown(
"#### Проверка решения",
Format(table).check(sresult.source.c * -1),
raw=True
)
from src.brute_force import BruteForce
bresult = BruteForce.resolve(source >> min, sresult.result.F)
display_markdown(
"### Решение методом полного перебора",
"Организуем полный перебор возможных значений исходных переменных. Полученные целочисленные решения:",
'\n\n'.join(f"$F={pretty_value(key, 2)}, X={value}$" for key, value in bresult.result.items()),
f"Максимальное значение функции $F={bresult.maximum}$ достигается при $X={bresult.maxset}$",
raw=True,
)
tree = BranchAndBound.resolve(source >> max)
display_markdown(
"### Графическая визуализация решения при помощи метода ветвей и границ:",
raw=True
)
tree.visualize(
node_attr={"shape": "record", "fontname": "helvetica"},
graph_attr={},
edge_attr={"fontname": "helvetica"},
)
def expose_node(node):
result = node.data
display_markdown(
f"#### Исходная таблица",
Format(result.source).table(),
f"#### Конечная таблица",
Format(result.history[-1]).table(),
f"{Format(result.history[-1]).base_vars()}, {Format(table).free_vars()}" if result.solved else "Нет решений",
raw=True,
)
if node.left:
display_markdown(
f"### Ветвление влево по {node.left.label(pretty=True)}",
raw=True,
)
expose_node(node.left.target)
if node.right:
display_markdown(
f"### Ветвление вправо по {node.right.label(pretty=True)}",
raw=True,
)
expose_node(node.right.target)
if not node.left and not node.right:
display_markdown(
f"### Заканчиваем ветвление:",
node.label(pretty=True),
raw=True,
)
expose_node(tree)
solutions = [node.data.result for node in tree.leaves if not node.invalid]
display_markdown(
"### Целочисленные решения:",
*[
', '.join([
f"$F={pretty_value(table.result, 2)}$",
Format(table).solution(),
])
for table in solutions
],
raw=True,
)
```
|
github_jupyter
|
# Regular Expression Exercises
* Debugger: When debugging regular expressions, the best tool is [Regex101](https://regex101.com/). This is an interactive tool that let's you visualize a regular expression in action.
* Tutorial: I tend to like RealPython's tutorials, here is their's on [Regular Expressions](https://realpython.com/regex-python/).
* Tutorial: The [Official Python tutorial on Regular Expressions](https://docs.python.org/3/howto/regex.html) is not a bad introduction.
* Cheat Sheet: People often make use of [Cheat Sheets](https://www.debuggex.com/cheatsheet/regex/python) when they have to do a lot of Regular Expressions.
* Documentation: If you need it, the official [Python documentation on the `re` module](https://docs.python.org/3/library/re.html) can also be a resource.
PLEASE FILL IN THE FOLLOWING:
* Your name:
* Link to the Github repo with this file:
```
import re
```
## 1. Introduction
**a)** Check whether the given strings contain `0xB0`. Display a boolean result as shown below.
```
line1 = 'start address: 0xA0, func1 address: 0xC0'
line2 = 'end address: 0xFF, func2 address: 0xB0'
assert bool(re.search(r'', line1)) == False
assert bool(re.search(r'', line2)) == True
```
**b)** Replace all occurrences of `5` with `five` for the given string.
```
ip = 'They ate 5 apples and 5 oranges'
assert re.sub() == 'They ate five apples and five oranges'
```
**c)** Replace first occurrence of `5` with `five` for the given string.
```
ip = 'They ate 5 apples and 5 oranges'
assert re.sub() == 'They ate five apples and 5 oranges'
```
**d)** For the given list, filter all elements that do *not* contain `e`.
```
items = ['goal', 'new', 'user', 'sit', 'eat', 'dinner']
assert [w for w in items if not re.search()] == ['goal', 'sit']
```
**e)** Replace all occurrences of `note` irrespective of case with `X`.
```
ip = 'This note should not be NoTeD'
assert re.sub() == 'This X should not be XD'
```
**f)** Check if `at` is present in the given byte input data.
```
ip = 'tiger imp goat'
assert bool(re.search()) == True
```
**g)** For the given input string, display all lines not containing `start` irrespective of case.
```
para = '''good start
Start working on that
project you always wanted
stars are shining brightly
hi there
start and try to
finish the book
bye'''
pat = re.compile() ##### add your solution here
for line in para.split('\n'):
if not pat.search(line):
print(line)
"""project you always wanted
stars are shining brightly
hi there
finish the book
bye"""
```
**h)** For the given list, filter all elements that contains either `a` or `w`.
```
items = ['goal', 'new', 'user', 'sit', 'eat', 'dinner']
##### add your solution here
assert [w for w in items if re.search() or re.search()] == ['goal', 'new', 'eat']
```
**i)** For the given list, filter all elements that contains both `e` and `n`.
```
items = ['goal', 'new', 'user', 'sit', 'eat', 'dinner']
##### add your solution here
assert [w for w in items if re.search() and re.search()] == ['new', 'dinner']
```
**j)** For the given string, replace `0xA0` with `0x7F` and `0xC0` with `0x1F`.
```
ip = 'start address: 0xA0, func1 address: 0xC0'
##### add your solution here
assert ___ == 'start address: 0x7F, func1 address: 0x1F'
```
<br>
# 2. Anchors
**a)** Check if the given strings start with `be`.
```
line1 = 'be nice'
line2 = '"best!"'
line3 = 'better?'
line4 = 'oh no\nbear spotted'
pat = re.compile() ##### add your solution here
assert bool(pat.search(line1)) == True
assert bool(pat.search(line2)) == False
assert bool(pat.search(line3)) == True
assert bool(pat.search(line4)) == False
```
**b)** For the given input string, change only whole word `red` to `brown`
```
words = 'bred red spread credible'
assert re.sub() == 'bred brown spread credible'
```
**c)** For the given input list, filter all elements that contains `42` surrounded by word characters.
```
words = ['hi42bye', 'nice1423', 'bad42', 'cool_42a', 'fake4b']
assert [w for w in words if re.search()] == ['hi42bye', 'nice1423', 'cool_42a']
```
**d)** For the given input list, filter all elements that start with `den` or end with `ly`.
```
items = ['lovely', '1\ndentist', '2 lonely', 'eden', 'fly\n', 'dent']
assert [e for e in items if ] == ['lovely', '2 lonely', 'dent']
```
**e)** For the given input string, change whole word `mall` to `1234` only if it is at the start of a line.
```
para = '''
ball fall wall tall
mall call ball pall
wall mall ball fall
mallet wallet malls'''
assert re.sub() == """ball fall wall tall
1234 call ball pall
wall mall ball fall
mallet wallet malls"""
```
**f)** For the given list, filter all elements having a line starting with `den` or ending with `ly`.
```
items = ['lovely', '1\ndentist', '2 lonely', 'eden', 'fly\nfar', 'dent']
##### add your solution here
assert ___ == ['lovely', '1\ndentist', '2 lonely', 'fly\nfar', 'dent']
```
**g)** For the given input list, filter all whole elements `12\nthree` irrespective of case.
```
items = ['12\nthree\n', '12\nThree', '12\nthree\n4', '12\nthree']
##### add your solution here
assert ___ == ['12\nThree', '12\nthree']
```
**h)** For the given input list, replace `hand` with `X` for all elements that start with `hand` followed by at least one word character.
```
items = ['handed', 'hand', 'handy', 'unhanded', 'handle', 'hand-2']
##### add your solution here
assert ___ == ['Xed', 'hand', 'Xy', 'unhanded', 'Xle', 'hand-2']
```
**i)** For the given input list, filter all elements starting with `h`. Additionally, replace `e` with `X` for these filtered elements.
```
items = ['handed', 'hand', 'handy', 'unhanded', 'handle', 'hand-2']
##### add your solution here
assert ___ == ['handXd', 'hand', 'handy', 'handlX', 'hand-2']
```
<br>
# 3. Alternation and Grouping
**a)** For the given input list, filter all elements that start with `den` or end with `ly`
```
items = ['lovely', '1\ndentist', '2 lonely', 'eden', 'fly\n', 'dent']
##### add your solution here
assert ___ == ['lovely', '2 lonely', 'dent']
```
**b)** For the given list, filter all elements having a line starting with `den` or ending with `ly`.
```
items = ['lovely', '1\ndentist', '2 lonely', 'eden', 'fly\nfar', 'dent']
##### add your solution here
assert ___ == ['lovely', '1\ndentist', '2 lonely', 'fly\nfar', 'dent']
```
**c)** For the given input strings, replace all occurrences of `removed` or `reed` or `received` or `refused` with `X`.
```
s1 = 'creed refuse removed read'
s2 = 'refused reed redo received'
pat = re.compile() ##### add your solution here
assert pat.sub('X', s1) == 'cX refuse X read'
assert pat.sub('X', s2) == 'X X redo X'
```
**d)** For the given input strings, replace all matches from the list `words` with `A`.
```
s1 = 'plate full of slate'
s2 = "slated for later, don't be late"
words = ['late', 'later', 'slated']
pat = re.compile() ##### add your solution here
assert pat.sub('A', s1) == 'pA full of sA'
assert pat.sub('A', s2) == "A for A, don't be A"
```
**e)** Filter all whole elements from the input list `items` based on elements listed in `words`.
```
items = ['slate', 'later', 'plate', 'late', 'slates', 'slated ']
words = ['late', 'later', 'slated']
pat = re.compile() ##### add your solution here
##### add your solution here
assert ___ == ['later', 'late']
```
<br>
# 4. Escaping metacharacters
**a)** Transform the given input strings to the expected output using same logic on both strings.
```
str1 = '(9-2)*5+qty/3'
str2 = '(qty+4)/2-(9-2)*5+pq/4'
##### add your solution here for str1
assert ___ == '35+qty/3'
##### add your solution here for str2
assert ___ == '(qty+4)/2-35+pq/4'
```
**b)** Replace `(4)\|` with `2` only at the start or end of given input strings.
```
s1 = r'2.3/(4)\|6 foo 5.3-(4)\|'
s2 = r'(4)\|42 - (4)\|3'
s3 = 'two - (4)\\|\n'
pat = re.compile() ##### add your solution here
assert pat.sub('2', s1) == '2.3/(4)\\|6 foo 5.3-2'
assert pat.sub('2', s2) == '242 - (4)\\|3'
assert pat.sub('2', s3) == 'two - (4)\\|\n'
```
**c)** Replace any matching element from the list `items` with `X` for given the input strings. Match the elements from `items` literally. Assume no two elements of `items` will result in any matching conflict.
```
items = ['a.b', '3+n', r'x\y\z', 'qty||price', '{n}']
pat = re.compile() ##### add your solution here
assert pat.sub('X', '0a.bcd') == '0Xcd'
assert pat.sub('X', 'E{n}AMPLE') == 'EXAMPLE'
assert pat.sub('X', r'43+n2 ax\y\ze') == '4X2 aXe'
```
**d)** Replace backspace character `\b` with a single space character for the given input string.
```
ip = '123\b456'
ip
assert re.sub() == '123 456'
```
**e)** Replace all occurrences of `\e` with `e`.
```
ip = r'th\er\e ar\e common asp\ects among th\e alt\ernations'
assert re.sub() == 'there are common aspects among the alternations'
```
**f)** Replace any matching item from the list `eqns` with `X` for given the string `ip`. Match the items from `eqns` literally.
```
ip = '3-(a^b)+2*(a^b)-(a/b)+3'
eqns = ['(a^b)', '(a/b)', '(a^b)+2']
##### add your solution here
assert pat.sub('X', ip) == '3-X*X-X+3'
```
<br>
# 5. Dot metacharacter and Quantifiers
Since `.` metacharacter doesn't match newline character by default, assume that the input strings in the following exercises will not contain newline characters.
**a)** Replace `42//5` or `42/5` with `8` for the given input.
```
ip = 'a+42//5-c pressure*3+42/5-14256'
assert re.sub() == 'a+8-c pressure*3+8-14256'
```
**b)** For the list `items`, filter all elements starting with `hand` and ending with at most one more character or `le`.
```
items = ['handed', 'hand', 'handled', 'handy', 'unhand', 'hands', 'handle']
##### add your solution here
assert ___ == ['hand', 'handy', 'hands', 'handle']
```
**c)** Use `re.split` to get the output as shown for the given input strings.
```
eqn1 = 'a+42//5-c'
eqn2 = 'pressure*3+42/5-14256'
eqn3 = 'r*42-5/3+42///5-42/53+a'
##### add your solution here for eqn1
assert ___ == ['a+', '-c']
##### add your solution here for eqn2
assert ___ == ['pressure*3+', '-14256']
##### add your solution here for eqn3
assert ___ == ['r*42-5/3+42///5-', '3+a']
```
**d)** For the given input strings, remove everything from the first occurrence of `i` till end of the string.
```
s1 = 'remove the special meaning of such constructs'
s2 = 'characters while constructing'
pat = re.compile() ##### add your solution here
assert pat.sub('', s1) == 'remove the spec'
assert pat.sub('', s2) == 'characters wh'
```
**e)** For the given strings, construct a RE to get output as shown.
```
str1 = 'a+b(addition)'
str2 = 'a/b(division) + c%d(#modulo)'
str3 = 'Hi there(greeting). Nice day(a(b)'
remove_parentheses = re.compile() ##### add your solution here
assert remove_parentheses.sub('', str1) == 'a+b'
assert remove_parentheses.sub('', str2) == 'a/b + c%d'
assert remove_parentheses.sub('', str3) == 'Hi there. Nice day'
```
**f)** Correct the given RE to get the expected output.
```
words = 'plink incoming tint winter in caution sentient'
change = re.compile(r'int|in|ion|ing|inco|inter|ink')
# wrong output
assert change.sub('X', words) == 'plXk XcomXg tX wXer X cautX sentient'
# expected output
change = re.compile() ##### add your solution here
assert change.sub('X', words) == 'plX XmX tX wX X cautX sentient'
```
**g)** For the given greedy quantifiers, what would be the equivalent form using `{m,n}` representation?
* `?` is same as
* `*` is same as
* `+` is same as
**h)** `(a*|b*)` is same as `(a|b)*` — True or False?
**i)** For the given input strings, remove everything from the first occurrence of `test` (irrespective of case) till end of the string, provided `test` isn't at the end of the string.
```
s1 = 'this is a Test'
s2 = 'always test your RE for corner cases'
s3 = 'a TEST of skill tests?'
pat = re.compile() ##### add your solution here
assert pat.sub('', s1) == 'this is a Test'
assert pat.sub('', s2) == 'always '
assert pat.sub('', s3) == 'a '
```
**j)** For the input list `words`, filter all elements starting with `s` and containing `e` and `t` in any order.
```
words = ['sequoia', 'subtle', 'exhibit', 'asset', 'sets', 'tests', 'site']
##### add your solution here
assert ___ == ['subtle', 'sets', 'site']
```
**k)** For the input list `words`, remove all elements having less than `6` characters.
```
words = ['sequoia', 'subtle', 'exhibit', 'asset', 'sets', 'tests', 'site']
##### add your solution here
assert ___ == ['sequoia', 'subtle', 'exhibit']
```
**l)** For the input list `words`, filter all elements starting with `s` or `t` and having a maximum of `6` characters.
```
words = ['sequoia', 'subtle', 'exhibit', 'asset', 'sets', 'tests', 'site']
##### add your solution here
assert ___ == ['subtle', 'sets', 'tests', 'site']
```
**m)** Can you reason out why this code results in the output shown? The aim was to remove all `<characters>` patterns but not the `<>` ones. The expected result was `'a 1<> b 2<> c'`.
```
ip = 'a<apple> 1<> b<bye> 2<> c<cat>'
assert re.sub(r'<.+?>', '', ip) == 'a 1 2'
```
**n)** Use `re.split` to get the output as shown below for given input strings.
```
s1 = 'go there // "this // that"'
s2 = 'a//b // c//d e//f // 4//5'
s3 = '42// hi//bye//see // carefully'
pat = re.compile() ##### add your solution here
assert pat.split() == ['go there', '"this // that"']
assert pat.split() == ['a//b', 'c//d e//f // 4//5']
assert pat.split() == ['42// hi//bye//see', 'carefully']
```
<br>
# 6. Working with matched portions
**a)** For the given strings, extract the matching portion from first `is` to last `t`.
```
str1 = 'This the biggest fruit you have seen?'
str2 = 'Your mission is to read and practice consistently'
pat = re.compile() ##### add your solution here
##### add your solution here for str1
assert ___ == 'is the biggest fruit'
##### add your solution here for str2
assert ___ == 'ission is to read and practice consistent'
```
**b)** Find the starting index of first occurrence of `is` or `the` or `was` or `to` for the given input strings.
```
s1 = 'match after the last newline character'
s2 = 'and then you want to test'
s3 = 'this is good bye then'
s4 = 'who was there to see?'
pat = re.compile() ##### add your solution here
##### add your solution here for s1
assert ___ == 12
##### add your solution here for s2
assert ___ == 4
##### add your solution here for s3
assert ___ == 2
##### add your solution here for s4
assert ___ == 4
```
**c)** Find the starting index of last occurrence of `is` or `the` or `was` or `to` for the given input strings.
```
s1 = 'match after the last newline character'
s2 = 'and then you want to test'
s3 = 'this is good bye then'
s4 = 'who was there to see?'
pat = re.compile() ##### add your solution here
##### add your solution here for s1
assert ___ == 12
##### add your solution here for s2
assert ___ == 18
##### add your solution here for s3
assert ___ == 17
##### add your solution here for s4
assert ___ == 14
```
**d)** The given input string contains `:` exactly once. Extract all characters after the `:` as output.
```
ip = 'fruits:apple, mango, guava, blueberry'
##### add your solution here
assert ___ == 'apple, mango, guava, blueberry'
```
**e)** The given input strings contains some text followed by `-` followed by a number. Replace that number with its `log` value using `math.log()`.
```
s1 = 'first-3.14'
s2 = 'next-123'
pat = re.compile() ##### add your solution here
import math
assert pat.sub() == 'first-1.144222799920162'
assert pat.sub() == 'next-4.812184355372417'
```
**f)** Replace all occurrences of `par` with `spar`, `spare` with `extra` and `park` with `garden` for the given input strings.
```
str1 = 'apartment has a park'
str2 = 'do you have a spare cable'
str3 = 'write a parser'
##### add your solution here
assert pat.sub() == 'aspartment has a garden'
assert pat.sub() == 'do you have a extra cable'
assert pat.sub() == 'write a sparser'
```
**g)** Extract all words between `(` and `)` from the given input string as a list. Assume that the input will not contain any broken parentheses.
```
ip = 'another (way) to reuse (portion) matched (by) capture groups'
assert re.findall() == ['way', 'portion', 'by']
```
**h)** Extract all occurrences of `<` up to next occurrence of `>`, provided there is at least one character in between `<` and `>`.
```
ip = 'a<apple> 1<> b<bye> 2<> c<cat>'
assert re.findall() == ['<apple>', '<> b<bye>', '<> c<cat>']
```
**i)** Use `re.findall` to get the output as shown below for the given input strings. Note the characters used in the input strings carefully.
```
row1 = '-2,5 4,+3 +42,-53 4356246,-357532354 '
row2 = '1.32,-3.14 634,5.63 63.3e3,9907809345343.235 '
pat = re.compile() ##### add your solution here
assert pat.findall(row1) == [('-2', '5'), ('4', '+3'), ('+42', '-53'), ('4356246', '-357532354')]
pat.findall(row2) == [('1.32', '-3.14'), ('634', '5.63'), ('63.3e3', '9907809345343.235')]
```
**j)** This is an extension to previous question.
* For `row1`, find the sum of integers of each tuple element. For example, sum of `-2` and `5` is `3`.
* For `row2`, find the sum of floating-point numbers of each tuple element. For example, sum of `1.32` and `-3.14` is `-1.82`.
```
row1 = '-2,5 4,+3 +42,-53 4356246,-357532354 '
row2 = '1.32,-3.14 634,5.63 63.3e3,9907809345343.235 '
# should be same as previous question
pat = re.compile() ##### add your solution here
##### add your solution here for row1
assert ___ == [3, 7, -11, -353176108]
##### add your solution here for row2
assert ___ == [-1.82, 639.63, 9907809408643.234]
```
**k)** Use `re.split` to get the output as shown below.
```
ip = '42:no-output;1000:car-truck;SQEX49801'
assert re.split() == ['42', 'output', '1000', 'truck', 'SQEX49801']
```
**l)** For the given list of strings, change the elements into a tuple of original element and number of times `t` occurs in that element.
```
words = ['sequoia', 'attest', 'tattletale', 'asset']
##### add your solution here
assert ___ == [('sequoia', 0), ('attest', 3), ('tattletale', 4), ('asset', 1)]
```
**m)** The given input string has fields separated by `:`. Each field contains four uppercase alphabets followed optionally by two digits. Ignore the last field, which is empty. See [docs.python: Match.groups](https://docs.python.org/3/library/re.html#re.Match.groups) and use `re.finditer` to get the output as shown below. If the optional digits aren't present, show `'NA'` instead of `None`.
```
ip = 'TWXA42:JWPA:NTED01:'
##### add your solution here
assert ___ == [('TWXA', '42'), ('JWPA', 'NA'), ('NTED', '01')]
```
> Note that this is different from `re.findall` which will just give empty string instead of `None` when a capture group doesn't participate.
**n)** Convert the comma separated strings to corresponding `dict` objects as shown below.
```
row1 = 'name:rohan,maths:75,phy:89,'
row2 = 'name:rose,maths:88,phy:92,'
pat = re.compile() ##### add your solution here
##### add your solution here for row1
assert ___ == {'name': 'rohan', 'maths': '75', 'phy': '89'}
##### add your solution here for row2
assert ___ == {'name': 'rose', 'maths': '88', 'phy': '92'}
```
<br>
# 7. Character class
**a)** For the list `items`, filter all elements starting with `hand` and ending with `s` or `y` or `le`.
```
items = ['-handy', 'hand', 'handy', 'unhand', 'hands', 'handle']
##### add your solution here
assert ___ == ['handy', 'hands', 'handle']
```
**b)** Replace all whole words `reed` or `read` or `red` with `X`.
```
ip = 'redo red credible :read: rod reed'
##### add your solution here
assert ___ == 'redo X credible :X: rod X'
```
**c)** For the list `words`, filter all elements containing `e` or `i` followed by `l` or `n`. Note that the order mentioned should be followed.
```
words = ['surrender', 'unicorn', 'newer', 'door', 'empty', 'eel', 'pest']
##### add your solution here
assert ___ == ['surrender', 'unicorn', 'eel']
```
**d)** For the list `words`, filter all elements containing `e` or `i` and `l` or `n` in any order.
```
words = ['surrender', 'unicorn', 'newer', 'door', 'empty', 'eel', 'pest']
##### add your solution here
assert ___ == ['surrender', 'unicorn', 'newer', 'eel']
```
**e)** Extract all hex character sequences, with `0x` optional prefix. Match the characters case insensitively, and the sequences shouldn't be surrounded by other word characters.
```
str1 = '128A foo 0xfe32 34 0xbar'
str2 = '0XDEADBEEF place 0x0ff1ce bad'
hex_seq = re.compile() ##### add your solution here
##### add your solution here for str1
assert ___ == ['128A', '0xfe32', '34']
##### add your solution here for str2
assert ___ == ['0XDEADBEEF', '0x0ff1ce', 'bad']
```
**f)** Delete from `(` to the next occurrence of `)` unless they contain parentheses characters in between.
```
str1 = 'def factorial()'
str2 = 'a/b(division) + c%d(#modulo) - (e+(j/k-3)*4)'
str3 = 'Hi there(greeting). Nice day(a(b)'
remove_parentheses = re.compile() ##### add your solution here
assert ___ == remove_parentheses.sub('', str1)
'def factorial'
assert ___ == remove_parentheses.sub('', str2)
'a/b + c%d - (e+*4)'
assert ___ == remove_parentheses.sub('', str3)
'Hi there. Nice day(a'
```
**g)** For the list `words`, filter all elements not starting with `e` or `p` or `u`.
```
words = ['surrender', 'unicorn', 'newer', 'door', 'empty', 'eel', 'pest']
##### add your solution here
assert ___ == ['surrender', 'newer', 'door']
```
**h)** For the list `words`, filter all elements not containing `u` or `w` or `ee` or `-`.
```
words = ['p-t', 'you', 'tea', 'heel', 'owe', 'new', 'reed', 'ear']
##### add your solution here
assert ___ == ['tea', 'ear']
```
**i)** The given input strings contain fields separated by `,` and fields can be empty too. Replace last three fields with `WHTSZ323`.
```
row1 = '(2),kite,12,,D,C,,'
row2 = 'hi,bye,sun,moon'
pat = re.compile() ##### add your solution here
assert pat.sub() == '(2),kite,12,,D,WHTSZ323'
assert pat.sub() == 'hi,WHTSZ323'
```
**j)** Split the given strings based on consecutive sequence of digit or whitespace characters.
```
str1 = 'lion \t Ink32onion Nice'
str2 = '**1\f2\n3star\t7 77\r**'
pat = re.compile() ##### add your solution here
assert pat.split(str1) == ['lion', 'Ink', 'onion', 'Nice']
assert pat.split(str2) == ['**', 'star', '**']
```
**k)** Delete all occurrences of the sequence `<characters>` where `characters` is one or more non `>` characters and cannot be empty.
```
ip = 'a<apple> 1<> b<bye> 2<> c<cat>'
##### add your solution here
assert ___ == 'a 1<> b 2<> c'
```
**l)** `\b[a-z](on|no)[a-z]\b` is same as `\b[a-z][on]{2}[a-z]\b`. True or False? Sample input lines shown below might help to understand the differences, if any.
```
print('known\nmood\nknow\npony\ninns')
known
mood
know
pony
inns
```
**m)** For the given list, filter all elements containing any number sequence greater than `624`.
```
items = ['hi0000432abcd', 'car00625', '42_624 0512', '3.14 96 2 foo1234baz']
##### add your solution here
assert ___ == ['car00625', '3.14 96 2 foo1234baz']
```
**n)** Count the maximum depth of nested braces for the given strings. Unbalanced or wrongly ordered braces should return `-1`. Note that this will require a mix of regular expressions and Python code.
```
def max_nested_braces(ip):
##### add your solution here
assert max_nested_braces('a*b') == 0
assert max_nested_braces('}a+b{') == -1
assert max_nested_braces('a*b+{}') == 1
assert max_nested_braces('{{a+2}*{b+c}+e}') == 2
assert max_nested_braces('{{a+2}*{b+{c*d}}+e}') == 3
assert max_nested_braces('{{a+2}*{\n{b+{c*d}}+e*d}}') == 4
assert max_nested_braces('a*{b+c*{e*3.14}}}') == -1
```
**o)** By default, `str.split` method will split on whitespace and remove empty strings from the result. Which `re` module function would you use to replicate this functionality?
```
ip = ' \t\r so pole\t\t\t\n\nlit in to \r\n\v\f '
assert ip.split() == ['so', 'pole', 'lit', 'in', 'to']
##### add your solution here
assert ___ == ['so', 'pole', 'lit', 'in', 'to']
```
**p)** Convert the given input string to two different lists as shown below.
```
ip = 'price_42 roast^\t\n^-ice==cat\neast'
##### add your solution here
assert ___ == ['price_42', 'roast', 'ice', 'cat', 'east']
##### add your solution here
assert ___ == ['price_42', ' ', 'roast', '^\t\n^-', 'ice', '==', 'cat', '\n', 'east']
```
**q)** Filter all elements whose first non-whitespace character is not a `#` character. Any element made up of only whitespace characters should be ignored as well.
```
items = [' #comment', '\t\napple #42', '#oops', 'sure', 'no#1', '\t\r\f']
##### add your solution here
assert ___ == ['\t\napple #42', 'sure', 'no#1']
```
<br>
# 8. Groupings and backreferences
**a)** Replace the space character that occurs after a word ending with `a` or `r` with a newline character.
```
ip = 'area not a _a2_ roar took 22'
assert re.sub() == """area
not a
_a2_ roar
took 22"""
```
**b)** Add `[]` around words starting with `s` and containing `e` and `t` in any order.
```
ip = 'sequoia subtle exhibit asset sets tests site'
##### add your solution here
assert ___ == 'sequoia [subtle] exhibit asset [sets] tests [site]'
```
**c)** Replace all whole words with `X` that start and end with the same word character. Single character word should get replaced with `X` too, as it satisfies the stated condition.
```
ip = 'oreo not a _a2_ roar took 22'
##### add your solution here
assert ___ == 'X not X X X took X'
```
**d)** Convert the given **markdown** headers to corresponding **anchor** tag. Consider the input to start with one or more `#` characters followed by space and word characters. The `name` attribute is constructed by converting the header to lowercase and replacing spaces with hyphens. Can you do it without using a capture group?
```
header1 = '# Regular Expressions'
header2 = '## Compiling regular expressions'
##### add your solution here for header1
assert ___ == '# <a name="regular-expressions"></a>Regular Expressions'
##### add your solution here for header2
assert ___ == '## <a name="compiling-regular-expressions"></a>Compiling regular expressions'
```
**e)** Convert the given **markdown** anchors to corresponding **hyperlinks**.
```
anchor1 = '# <a name="regular-expressions"></a>Regular Expressions'
anchor2 = '## <a name="subexpression-calls"></a>Subexpression calls'
##### add your solution here for anchor1
assert ___ == '[Regular Expressions](#regular-expressions)'
##### add your solution here for anchor2
assert ___ == '[Subexpression calls](#subexpression-calls)'
```
**f)** Count the number of whole words that have at least two occurrences of consecutive repeated alphabets. For example, words like `stillness` and `Committee` should be counted but not words like `root` or `readable` or `rotational`.
```
ip = '''oppressed abandon accommodation bloodless
carelessness committed apparition innkeeper
occasionally afforded embarrassment foolishness
depended successfully succeeded
possession cleanliness suppress'''
##### add your solution here
assert ___ == 13
```
**g)** For the given input string, replace all occurrences of digit sequences with only the unique non-repeating sequence. For example, `232323` should be changed to `23` and `897897` should be changed to `897`. If there no repeats (for example `1234`) or if the repeats end prematurely (for example `12121`), it should not be changed.
```
ip = '1234 2323 453545354535 9339 11 60260260'
##### add your solution here
assert ___ == '1234 23 4535 9339 1 60260260'
```
**h)** Replace sequences made up of words separated by `:` or `.` by the first word of the sequence. Such sequences will end when `:` or `.` is not followed by a word character.
```
ip = 'wow:Good:2_two:five: hi-2 bye kite.777.water.'
##### add your solution here
assert ___ == 'wow hi-2 bye kite'
```
**i)** Replace sequences made up of words separated by `:` or `.` by the last word of the sequence. Such sequences will end when `:` or `.` is not followed by a word character.
```
ip = 'wow:Good:2_two:five: hi-2 bye kite.777.water.'
##### add your solution here
assert ___ == 'five hi-2 bye water'
```
**j)** Split the given input string on one or more repeated sequence of `cat`.
```
ip = 'firecatlioncatcatcatbearcatcatparrot'
##### add your solution here
assert ___ == ['fire', 'lion', 'bear', 'parrot']
```
**k)** For the given input string, find all occurrences of digit sequences with at least one repeating sequence. For example, `232323` and `897897`. If the repeats end prematurely, for example `12121`, it should not be matched.
```
ip = '1234 2323 453545354535 9339 11 60260260'
pat = re.compile() ##### add your solution here
# entire sequences in the output
##### add your solution here
assert ___ == ['2323', '453545354535', '11']
# only the unique sequence in the output
##### add your solution here
assert ___ == ['23', '4535', '1']
```
**l)** Convert the comma separated strings to corresponding `dict` objects as shown below. The keys are `name`, `maths` and `phy` for the three fields in the input strings.
```
row1 = 'rohan,75,89'
row2 = 'rose,88,92'
pat = re.compile() ##### add your solution here
##### add your solution here for row1
assert ___ == {'name': 'rohan', 'maths': '75', 'phy': '89'}
##### add your solution here for row2
assert ___ == {'name': 'rose', 'maths': '88', 'phy': '92'}
```
**m)** Surround all whole words with `()`. Additionally, if the whole word is `imp` or `ant`, delete them. Can you do it with single substitution?
```
ip = 'tiger imp goat eagle ant important'
##### add your solution here
assert ___ == '(tiger) () (goat) (eagle) () (important)'
```
**n)** Filter all elements that contains a sequence of lowercase alphabets followed by `-` followed by digits. They can be optionally surrounded by `{{` and `}}`. Any partial match shouldn't be part of the output.
```
ip = ['{{apple-150}}', '{{mango2-100}}', '{{cherry-200', 'grape-87']
##### add your solution here
assert ___ == ['{{apple-150}}', 'grape-87']
```
**o)** The given input string has sequences made up of words separated by `:` or `.` and such sequences will end when `:` or `.` is not followed by a word character. For all such sequences, display only the last word followed by `-` followed by first word.
```
ip = 'wow:Good:2_two:five: hi-2 bye kite.777.water.'
##### add your solution here
assert ___ == ['five-wow', 'water-kite']
```
<br>
# 9. Lookarounds
Starting from here, all following problems are optional!
Please use lookarounds for solving the following exercises even if you can do it without lookarounds. Unless you cannot use lookarounds for cases like variable length lookbehinds.
**a)** Replace all whole words with `X` unless it is preceded by `(` character.
```
ip = '(apple) guava berry) apple (mango) (grape'
##### add your solution here
assert ___ == '(apple) X X) X (mango) (grape'
```
**b)** Replace all whole words with `X` unless it is followed by `)` character.
```
ip = '(apple) guava berry) apple (mango) (grape'
##### add your solution here
assert ___ == '(apple) X berry) X (mango) (X'
```
**c)** Replace all whole words with `X` unless it is preceded by `(` or followed by `)` characters.
```
ip = '(apple) guava berry) apple (mango) (grape'
##### add your solution here
assert ___ == '(apple) X berry) X (mango) (grape'
```
**d)** Extract all whole words that do not end with `e` or `n`.
```
ip = 'at row on urn e note dust n'
##### add your solution here
assert ___ == ['at', 'row', 'dust']
```
**e)** Extract all whole words that do not start with `a` or `d` or `n`.
```
ip = 'at row on urn e note dust n'
##### add your solution here
assert ___ == ['row', 'on', 'urn', 'e']
```
**f)** Extract all whole words only if they are followed by `:` or `,` or `-`.
```
ip = 'poke,on=-=so:ink.to/is(vast)ever-sit'
##### add your solution here
assert ___ == ['poke', 'so', 'ever']
```
**g)** Extract all whole words only if they are preceded by `=` or `/` or `-`.
```
ip = 'poke,on=-=so:ink.to/is(vast)ever-sit'
##### add your solution here
assert ___ == ['so', 'is', 'sit']
```
**h)** Extract all whole words only if they are preceded by `=` or `:` and followed by `:` or `.`.
```
ip = 'poke,on=-=so:ink.to/is(vast)ever-sit'
##### add your solution here
assert ___ == ['so', 'ink']
```
**i)** Extract all whole words only if they are preceded by `=` or `:` or `.` or `(` or `-` and not followed by `.` or `/`.
```
ip = 'poke,on=-=so:ink.to/is(vast)ever-sit'
##### add your solution here
assert ___ == ['so', 'vast', 'sit']
```
**j)** Remove leading and trailing whitespaces from all the individual fields where `,` is the field separator.
```
csv1 = ' comma ,separated ,values \t\r '
csv2 = 'good bad,nice ice , 42 , , stall small'
remove_whitespace = re.compile() ##### add your solution here
assert remove_whitespace.sub('', csv1) == 'comma,separated,values'
assert remove_whitespace.sub('', csv2) == 'good bad,nice ice,42,,stall small'
```
**k)** Filter all elements that satisfy all of these rules:
* should have at least two alphabets
* should have at least 3 digits
* should have at least one special character among `%` or `*` or `#` or `$`
* should not end with a whitespace character
```
pwds = ['hunter2', 'F2H3u%9', '*X3Yz3.14\t', 'r2_d2_42', 'A $B C1234']
##### add your solution here
assert ___ == ['F2H3u%9', 'A $B C1234']
```
**l)** For the given string, surround all whole words with `{}` except for whole words `par` and `cat` and `apple`.
```
ip = 'part; cat {super} rest_42 par scatter apple spar'
##### add your solution here
assert ___ == '{part}; cat {{super}} {rest_42} par {scatter} apple {spar}'
```
**m)** Extract integer portion of floating-point numbers for the given string. A number ending with `.` and no further digits should not be considered.
```
ip = '12 ab32.4 go 5 2. 46.42 5'
##### add your solution here
assert ___ == ['32', '46']
```
**n)** For the given input strings, extract all overlapping two character sequences.
```
s1 = 'apple'
s2 = '1.2-3:4'
pat = re.compile() ##### add your solution here
##### add your solution here for s1
assert ___ == ['ap', 'pp', 'pl', 'le']
##### add your solution here for s2
assert ___ == ['1.', '.2', '2-', '-3', '3:', ':4']
```
**o)** The given input strings contain fields separated by `:` character. Delete `:` and the last field if there is a digit character anywhere before the last field.
```
s1 = '42:cat'
s2 = 'twelve:a2b'
s3 = 'we:be:he:0:a:b:bother'
pat = re.compile() ##### add your solution here
assert pat.sub() == '42'
assert pat.sub() == 'twelve:a2b'
assert pat.sub() == 'we:be:he:0:a:b'
```
**p)** Extract all whole words unless they are preceded by `:` or `<=>` or `----` or `#`.
```
ip = '::very--at<=>row|in.a_b#b2c=>lion----east'
##### add your solution here
assert ___ == ['at', 'in', 'a_b', 'lion']
```
**q)** Match strings if it contains `qty` followed by `price` but not if there is **whitespace** or the string `error` between them.
```
str1 = '23,qty,price,42'
str2 = 'qty price,oh'
str3 = '3.14,qty,6,errors,9,price,3'
str4 = '42\nqty-6,apple-56,price-234,error'
str5 = '4,price,3.14,qty,4'
neg = re.compile() ##### add your solution here
assert bool(neg.search(str1)) == True
assert bool(neg.search(str2)) == False
assert bool(neg.search(str3)) == False
assert bool(neg.search(str4)) == True
assert bool(neg.search(str5)) == False
```
**r)** Can you reason out why the output shown is different for these two regular expressions?
```
ip = 'I have 12, he has 2!'
assert re.sub(r'\b..\b', '{\g<0>}', ip) == '{I }have {12}{, }{he} has{ 2}!'
assert re.sub(r'(?<!\w)..(?!\w)', '{\g<0>}', ip) == 'I have {12}, {he} has {2!}'
```
<br>
# 10. Flags
**a)** Remove from first occurrence of `hat` to last occurrence of `it` for the given input strings. Match these markers case insensitively.
```
s1 = 'But Cool THAT\nsee What okay\nwow quite'
s2 = 'it this hat is sliced HIT.'
pat = re.compile() ##### add your solution here
assert pat.sub('', s1) == 'But Cool Te'
assert pat.sub('', s2) == 'it this .'
```
**b)** Delete from `start` if it is at the beginning of a line up to the next occurrence of the `end` at the end of a line. Match these markers case insensitively.
```
para = '''
good start
start working on that
project you always wanted
to, do not let it end
hi there
start and end the end
42
Start and try to
finish the End
bye'''
pat = re.compile() ##### add your solution here
assert pat.sub('', para)) == """
good start
hi there
42
bye"""
```
**c)** For the given input strings, match all of these three patterns:
* `This` case sensitively
* `nice` and `cool` case insensitively
```
s1 = 'This is nice and Cool'
s2 = 'Nice and cool this is'
s3 = 'What is so nice and cool about This?'
pat = re.compile() ##### add your solution here
assert bool(pat.search(s1)) == True
assert bool(pat.search(s2)) == False
assert bool(pat.search(s3)) == True
```
**d)** For the given input strings, match if the string begins with `Th` and also contains a line that starts with `There`.
```
s1 = 'There there\nHave a cookie'
s2 = 'This is a mess\nYeah?\nThereeeee'
s3 = 'Oh\nThere goes the fun'
pat = re.compile() ##### add your solution here
assert bool(pat.search(s1)) == True
assert bool(pat.search(s2)) == True
assert bool(pat.search(s3)) == False
```
**e)** Explore what the `re.DEBUG` flag does. Here's some example patterns to check out.
* `re.compile(r'\Aden|ly\Z', flags=re.DEBUG)`
* `re.compile(r'\b(0x)?[\da-f]+\b', flags=re.DEBUG)`
* `re.compile(r'\b(?:0x)?[\da-f]+\b', flags=re.I|re.DEBUG)`
<br>
# 11. Unicode
**a)** Output `True` or `False` depending on input string made up of ASCII characters or not. Consider the input to be non-empty strings and any character that isn't part of 7-bit ASCII set should give `False`. Do you need regular expressions for this?
```
str1 = '123—456'
str2 = 'good fοοd'
str3 = 'happy learning!'
str4 = 'İıſK'
##### add your solution here for str1
assert ___ == False
##### add your solution here for str2
assert ___ == False
##### add your solution here for str3
assert ___ == True
##### add your solution here for str4
assert ___ == False
```
**b)** Does `.` quantifier with `re.ASCII` flag enabled match non-ASCII characters?
**c)** Explore the following Q&A threads.
* [stackoverflow: remove powered number from string](https://stackoverflow.com/questions/57553721/remove-powered-number-from-string-in-python)
* [stackoverflow: regular expression for French characters](https://stackoverflow.com/questions/1922097/regular-expression-for-french-characters)
<br>
# 12. regex module
This part is super optional, it has you using the non-builtin `regex` module (https://pypi.org/project/regex/). I've never actually tried it. I skimmed through its features, and it doesn't strike me as adding *that* much more functionality.
**a)** Filter all elements whose first non-whitespace character is not a `#` character. Any element made up of only whitespace characters should be ignored as well.
```
items = [' #comment', '\t\napple #42', '#oops', 'sure', 'no#1', '\t\r\f']
##### add your solution here
assert ___ == ['\t\napple #42', 'sure', 'no#1']
```
**b)** Replace sequences made up of words separated by `:` or `.` by the first word of the sequence and the separator. Such sequences will end when `:` or `.` is not followed by a word character.
```
ip = 'wow:Good:2_two:five: hi bye kite.777.water.'
##### add your solution here
assert ___ == 'wow: hi bye kite.'
```
**c)** The given list of strings has fields separated by `:` character. Delete `:` and the last field if there is a digit character anywhere before the last field.
```
items = ['42:cat', 'twelve:a2b', 'we:be:he:0:a:b:bother']
##### add your solution here
assert ___ == ['42', 'twelve:a2b', 'we:be:he:0:a:b']
```
**d)** Extract all whole words unless they are preceded by `:` or `<=>` or `----` or `#`.
```
ip = '::very--at<=>row|in.a_b#b2c=>lion----east'
##### add your solution here
assert ___ == ['at', 'in', 'a_b', 'lion']
```
**e)** The given input string has fields separated by `:` character. Extract all fields if the previous field contains a digit character.
```
ip = 'vast:a2b2:ride:in:awe:b2b:3list:end'
##### add your solution here
assert ___ == ['ride', '3list', 'end']
```
**f)** The given input string has fields separated by `:` character. Delete all fields, including the separator, unless the field contains a digit character. Stop deleting once a field with digit character is found.
```
row1 = 'vast:a2b2:ride:in:awe:b2b:3list:end'
row2 = 'um:no:low:3e:s4w:seer'
pat = regex.compile() ##### add your solution here
assert pat.sub('', row1) == 'a2b2:ride:in:awe:b2b:3list:end'
assert pat.sub('', row2) == '3e:s4w:seer'
```
**g)** For the given input strings, extract `if` followed by any number of nested parentheses. Assume that there will be only one such pattern per input string.
```
ip1 = 'for (((i*3)+2)/6) if(3-(k*3+4)/12-(r+2/3)) while()'
ip2 = 'if+while if(a(b)c(d(e(f)1)2)3) for(i=1)'
pat = regex.compile() ##### add your solution here
assert pat.search(ip1)[0] == 'if(3-(k*3+4)/12-(r+2/3))'
assert pat.search(ip2)[0] == 'if(a(b)c(d(e(f)1)2)3)'
```
**h)** Read about `POSIX` flag from https://pypi.org/project/regex/. Is the following code snippet showing the correct output?
```
words = 'plink incoming tint winter in caution sentient'
change = regex.compile(r'int|in|ion|ing|inco|inter|ink', flags=regex.POSIX)
assert change.sub('X', words) == 'plX XmX tX wX X cautX sentient'
```
**i)** Extract all whole words for the given input strings. However, based on user input `ignore`, do not match words if they contain any character present in the `ignore` variable.
```
s1 = 'match after the last newline character'
s2 = 'and then you want to test'
ignore = 'aty'
assert regex.findall() == ['newline']
assert regex.findall() == []
ignore = 'esw'
assert regex.findall() == ['match']
assert regex.findall() == ['and', 'you', 'to']
```
**j)** Retain only punctuation characters for the given strings (generated from codepoints). Use Unicode character set definition for punctuation for solving this exercise.
```
s1 = ''.join(chr(c) for c in range(0, 0x80))
s2 = ''.join(chr(c) for c in range(0x80, 0x100))
s3 = ''.join(chr(c) for c in range(0x2600, 0x27ec))
pat = regex.compile() ##### add your solution here
assert pat.sub('', s1) == '!"#%&\'()*,-./:;?@[\\]_{}'
assert pat.sub('', s2) == '¡§«¶·»¿'
assert pat.sub('', s3) == '❨❩❪❫❬❭❮❯❰❱❲❳❴❵⟅⟆⟦⟧⟨⟩⟪⟫'
```
**k)** For the given **markdown** file, replace all occurrences of the string `python` (irrespective of case) with the string `Python`. However, any match within code blocks that start with whole line ` ```python ` and end with whole line ` ``` ` shouldn't be replaced. Consider the input file to be small enough to fit memory requirements.
Refer to [github: exercises folder](https://github.com/learnbyexample/py_regular_expressions/tree/master/exercises) for files `sample.md` and `expected.md` required to solve this exercise.
```
ip_str = open('sample.md', 'r').read()
pat = regex.compile() ##### add your solution here
with open('sample_mod.md', 'w') as op_file:
##### add your solution here
305
assert open('sample_mod.md').read() == open('expected.md').read()
```
**l)** For the given input strings, construct a word that is made up of last characters of all the words in the input. Use last character of last word as first character, last character of last but one word as second character and so on.
```
s1 = 'knack tic pi roar what'
s2 = '42;rod;t2t2;car'
pat = regex.compile() ##### add your solution here
##### add your solution here for s1
assert ___ == 'trick'
##### add your solution here for s2
assert ___ == 'r2d2'
```
**m)** Replicate `str.rpartition` functionality with regular expressions. Split into three parts based on last match of sequences of digits, which is `777` and `12` for the given input strings.
```
s1 = 'Sample123string42with777numbers'
s2 = '12apples'
##### add your solution here for s1
assert ___ == ['Sample123string42with', '777', 'numbers']
##### add your solution here for s2
assert ___ == ['', '12', 'apples']
```
**n)** Read about fuzzy matching on https://pypi.org/project/regex/. For the given input strings, return `True` if they are exactly same as `cat` or there is exactly one character difference. Ignore case when comparing differences. For example, `Ca2` should give `True`. `act` will be `False` even though the characters are same because position should be maintained.
```
pat = regex.compile() ##### add your solution here
assert bool(pat.fullmatch('CaT')) == True
assert bool(pat.fullmatch('scat')) == False
assert bool(pat.fullmatch('ca.')) == True
assert bool(pat.fullmatch('ca#')) == True
assert bool(pat.fullmatch('c#t')) == True
assert bool(pat.fullmatch('at')) == False
assert bool(pat.fullmatch('act')) == False
assert bool(pat.fullmatch('2a1')) == False
```
|
github_jupyter
|
```
import sqlite3
import pandas as pd
import numpy as np
import scipy as sp
import scipy.stats as stats
#import pylab as plt
import matplotlib.pyplot as plt
from collections import Counter
from numpy.random import choice
%matplotlib notebook
dbname = '../../data/sepsis.db'
conn = sqlite3.connect(dbname)
sql = 'SELECT * FROM "diagnoses"'
df = pd.read_sql(sql,conn)
df.head()
from importlib import reload
from fakelite3 import KungFauxPandas
kfpd = KungFauxPandas()
fdf=kfpd.read_sql(sql,conn)
fdf.head()
col = 'Code'
out_dict = dict()
colfact = df[col].factorize()
cc=Counter(colfact[0])
# convert from counts to proportions
for key in cc:
cc[key] = cc[key] / len(df)
fakes = choice(elements,p=weights, replace=True, size=len(df))
out_dict[col] = [colfact[1][xx] for xx in fakes]
len(cc.values()), len(df), len(cc)/len(df)
col = 'Code'
out_dict = dict()
colfact = df[col].factorize()
cc=Counter(colfact[0])
# convert from counts to proportions
for key in cc:
cc[key] = cc[key] / len(df)
fakes = choice(elements,p=weights, replace=True, size=len(df))
out_dict[col] = [colfact[1][xx] for xx in fakes]
#out_dict
col = 'SubjectId'
kd = stats.gaussian_kde(df[col], bw_method='silverman')
out_dict[col]=np.int64(kd.resample()[0])
df.head()
pd.crosstab(df.Codeode, df.squishcode)
np.corrcoef(df.Code, df.squishcode)
sdf = df.sample(5000)
for thiscol in sdf.columns:
if sdf[thiscol].dtype=='object':
print('Converting column ', thiscol)
sdf[thiscol] = sdf[thiscol].factorize()[0]
#np.cov(sdf)
cc = np.corrcoef(sdf.transpose())
#cc = np.cov(sdf.transpose())
#cc[5,1]
plt.imshow(cc,cmap='inferno')
plt.colorbar()
#sdf.head()
#help(np.correlate)
df.iloc[3]
from statsmodels.nonparametric import kernel_density as kd
woo = kd.KDEMultivariate(np.array(sdf.iloc[:,[2,4,9]]), var_type=3*'u')
#help(kd.KDEMultivariate)
np.array(data=sdf.sample(2000).iloc[:,[2,4,9]])
xx = range(40)
bb = list(itertools.product(xx,xx,xx))
np.array(sdf.iloc[2]).shape
from scipy.optimize import fsolve
import statsmodels.api as sm
import numpy as np
# fit
kde = woo#sm.nonparametric.KDEMultivariate() # ... you already did this
# sample
u = np.random.random()
# 1-d root-finding
def func(x):
return kde.cdf([x]) - u
#sample_x = brentq(func, -99999999, 99999999) # read brentq-docs about these constants
# constants need to be sign-changing for the function
#u = np.random.random()
#u
#sample_x = brentq(func, -99999999, 99999999)
def func(x):
return kde.cdf([x]) - u
x0=[92,4,5,3,6,7,8,9,10,11]
from scipy.optimize import minimize
darf = minimize(func,np.array(x0))
print(darf)
x0, func(x0)
func([0,0,0,0,0,3,0,0,0,0])
bork = np.mgrid[0:10,0:10, 0:10]
xx = range(4)
import itertools
ins = list(itertools.product(xx,xx,xx,xx,xx,xx,xx,xx,xx,xx))
vals = [func(i) for i in ins[1004:2004]]
func(ins[1004:2004])
func(bork[32532])
u
#kde.cdf(bork[9000:10000])
func(x0)
list(bork[0])
x0
import statsmodels.api as sm
nobs = 300
np.random.seed(1234) # Seed random generator
c1 = np.random.normal(size=(nobs,1))
c2 = np.random.normal(2, 1, size=(nobs,1))
#Estimate a bivariate distribution and display the bandwidth found:
#dens_u = sm.nonparametric.KDEMultivariate(data=[c1,c2], var_type='cc', bw='normal_reference')
#dens_u.bw
woo = sm.nonparametric.KDEMultivariate(data=sdf.iloc[:,[2,4,9]], var_type=3*'u')
woo.cdf()
len(sdf)
len(set(sdf.iloc[:,9]))
np.corrcoef(sdf.iloc[:,[2,9]])
```
|
github_jupyter
|
# 以下為 Export 成 freeze_graph 的範例程式嗎
```
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras import backend as K
import tensorflow as tf
from tensorflow.python.tools import freeze_graph, optimize_for_inference_lib
import numpy as np
```
## Exprot to frezen model 的標準作法
* 由於在 Tensorflow Lite
* 但在使用之前需要 Session 載入 Graph & Weight
* tf.train.write_graph :如果是使用 Keras 的話就是用 K.get_session().graph_def 取得 grpah ,然後輸到 phtxt
* tf.train.Saver() : 透過 K.get_session() 取得 Session ,而後透過 tf.train.Saver().save()
```
def export_model_for_mobile(model_name, input_node_name, output_node_name):
# 先暫存成另一個檔檔
tf.train.write_graph(K.get_session().graph_def, 'out', \
model_name + '_graph.pbtxt')
tf.train.Saver().save(K.get_session(), 'out/' + model_name + '.chkp')
freeze_graph.freeze_graph('out/' + model_name + '_graph.pbtxt', None, \
False, 'out/' + model_name + '.chkp', output_node_name, \
"save/restore_all", "save/Const:0", \
'out/frozen_' + model_name + '.pb', True, "")
input_graph_def = tf.GraphDef()
with tf.gfile.Open('out/frozen_' + model_name + '.pb', "rb") as f:
input_graph_def.ParseFromString(f.read())
output_graph_def = optimize_for_inference_lib.optimize_for_inference(
input_graph_def, [input_node_name], [output_node_name],
tf.float32.as_datatype_enum)
with tf.gfile.FastGFile('out/tensorflow_lite_' + model_name + '.pb', "wb") as f:
f.write(output_graph_def.SerializeToString())
```
## 創建 Graph
```
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
model = Sequential([
Conv2D(8, (3, 3), activation='relu', input_shape=[128,128,3]),
MaxPooling2D(pool_size=(2, 2)),
Conv2D(8, (3, 3), activation='relu'),
MaxPooling2D(pool_size=(2, 2)),
Flatten(),
Dense(128),
Activation('relu'),
Dense(7),
Activation('softmax')
])
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.load_weights("/home/kent/git/DeepLearning_ClassmatesImageClassification_jdwang2018_5_25/CNN_Classfier_32X32_jdwang_Weight_1.h5")
```
## 呼叫預設的函式
```
export_model_for_mobile('classmate_new', model.input.name.split(":")[0], model.output.name.split(":")[0])
```
|
github_jupyter
|
## Rover Project Test Notebook
This notebook contains the functions from the lesson and provides the scaffolding you need to test out your mapping methods. The steps you need to complete in this notebook for the project are the following:
* First just run each of the cells in the notebook, examine the code and the results of each.
* Run the simulator in "Training Mode" and record some data. Note: the simulator may crash if you try to record a large (longer than a few minutes) dataset, but you don't need a ton of data, just some example images to work with.
* Change the data directory path (2 cells below) to be the directory where you saved data
* Test out the functions provided on your data
* Write new functions (or modify existing ones) to report and map out detections of obstacles and rock samples (yellow rocks)
* Populate the `process_image()` function with the appropriate steps/functions to go from a raw image to a worldmap.
* Run the cell that calls `process_image()` using `moviepy` functions to create video output
* Once you have mapping working, move on to modifying `perception.py` and `decision.py` to allow your rover to navigate and map in autonomous mode!
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
**Run the next cell to get code highlighting in the markdown cells.**
```
%%HTML
<style> code {background-color : orange !important;} </style>
%matplotlib inline
#%matplotlib qt # Choose %matplotlib qt to plot to an interactive window (note it may show up behind your browser)
# Make some of the relevant imports
import os
print(os.sys.path)
import cv2 # OpenCV for perspective transform
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import scipy.misc # For saving images as needed
import glob # For reading in a list of images from a folder
import imageio
imageio.plugins.ffmpeg.download()
```
## Quick Look at the Data
There's some example data provided in the `test_dataset` folder. This basic dataset is enough to get you up and running but if you want to hone your methods more carefully you should record some data of your own to sample various scenarios in the simulator.
Next, read in and display a random image from the `test_dataset` folder
```
path = 'C:/Users/joero/Documents/Development/Udacity/RoboticsNanodegree/roversim/IMG/IMG/*'
img_list = glob.glob(path)
# Grab a random image and display it
idx = np.random.randint(0, len(img_list)-1)
image = mpimg.imread(img_list[idx])
plt.imshow(image)
# In the simulator you can toggle on a grid on the ground for calibration
# You can also toggle on the rock samples with the 0 (zero) key.
# Here's an example of the grid and one of the rocks
example_grid = '../calibration_images/example_grid1.jpg'
example_rock = '../calibration_images/example_rock1.jpg'
grid_img = mpimg.imread(example_grid)
rock_img = mpimg.imread(example_rock)
fig = plt.figure(figsize=(12,3))
plt.subplot(121)
plt.imshow(grid_img)
plt.subplot(122)
plt.imshow(rock_img)
```
## Calibration Data
Read in and display example grid and rock sample calibration images. You'll use the grid for perspective transform and the rock image for creating a new color selection that identifies these samples of interest.
## Perspective Transform
Define the perspective transform function from the lesson and test it on an image.
```
# Define a function to perform a perspective transform
# I've used the example grid image above to choose source points for the
# grid cell in front of the rover (each grid cell is 1 square meter in the sim)
# Define a function to perform a perspective transform
def perspect_transform(img, src, dst):
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))# keep same size as input image
return warped
# Define calibration box in source (actual) and destination (desired) coordinates
# These source and destination points are defined to warp the image
# to a grid where each 10x10 pixel square represents 1 square meter
# The destination box will be 2*dst_size on each side
dst_size = 5
# Set a bottom offset to account for the fact that the bottom of the image
# is not the position of the rover but a bit in front of it
# this is just a rough guess, feel free to change it!
bottom_offset = 6
source = np.float32([[14, 140], [301 ,140],[200, 96], [118, 96]])
destination = np.float32([[image.shape[1]/2 - dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - 2*dst_size - bottom_offset],
[image.shape[1]/2 - dst_size, image.shape[0] - 2*dst_size - bottom_offset],
])
warped = perspect_transform(grid_img, source, destination)
plt.imshow(warped)
#scipy.misc.imsave('../output/warped_example.jpg', warped)
```
## Color Thresholding
Define the color thresholding function from the lesson and apply it to the warped image
**TODO:** Ultimately, you want your map to not just include navigable terrain but also obstacles and the positions of the rock samples you're searching for. Modify this function or write a new function that returns the pixel locations of obstacles (areas below the threshold) and rock samples (yellow rocks in calibration images), such that you can map these areas into world coordinates as well.
**Hints and Suggestion:**
* For obstacles you can just invert your color selection that you used to detect ground pixels, i.e., if you've decided that everything above the threshold is navigable terrain, then everthing below the threshold must be an obstacle!
* For rocks, think about imposing a lower and upper boundary in your color selection to be more specific about choosing colors. You can investigate the colors of the rocks (the RGB pixel values) in an interactive matplotlib window to get a feel for the appropriate threshold range (keep in mind you may want different ranges for each of R, G and B!). Feel free to get creative and even bring in functions from other libraries. Here's an example of [color selection](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html) using OpenCV.
* **Beware However:** if you start manipulating images with OpenCV, keep in mind that it defaults to `BGR` instead of `RGB` color space when reading/writing images, so things can get confusing.
```
# Identify pixels above the threshold
# Threshold of RGB > 160 does a nice job of identifying ground pixels only
def color_thresh(img, rgb_thresh=(160, 160, 160)):
# Create an array of zeros same xy size as img, but single channel
color_select = np.zeros_like(img[:,:,0])
# Require that each pixel be above all three threshold values in RGB
# above_thresh will now contain a boolean array with "True"
# where threshold was met
above_thresh = (img[:,:,0] > rgb_thresh[0]) \
& (img[:,:,1] > rgb_thresh[1]) \
& (img[:,:,2] > rgb_thresh[2])
# Index the array of zeros with the boolean array and set to 1
color_select[above_thresh] = 1
# Return the binary image
return color_select
threshed = color_thresh(warped)
plt.imshow(threshed, cmap='gray')
#scipy.misc.imsave('../output/warped_threshed.jpg', threshed*255)
```
## Coordinate Transformations
Define the functions used to do coordinate transforms and apply them to an image.
```
# Define a function to convert from image coords to rover coords
def rover_coords(binary_img):
# Identify nonzero pixels
ypos, xpos = binary_img.nonzero()
# Calculate pixel positions with reference to the rover position being at the
# center bottom of the image.
x_pixel = -(ypos - binary_img.shape[0]).astype(np.float)
y_pixel = -(xpos - binary_img.shape[1]/2 ).astype(np.float)
return x_pixel, y_pixel
# Define a function to convert to radial coords in rover space
def to_polar_coords(x_pixel, y_pixel):
# Convert (x_pixel, y_pixel) to (distance, angle)
# in polar coordinates in rover space
# Calculate distance to each pixel
dist = np.sqrt(x_pixel**2 + y_pixel**2)
# Calculate angle away from vertical for each pixel
angles = np.arctan2(y_pixel, x_pixel)
return dist, angles
# Define a function to map rover space pixels to world space
def rotate_pix(xpix, ypix, yaw):
# Convert yaw to radians
yaw_rad = yaw * np.pi / 180
xpix_rotated = (xpix * np.cos(yaw_rad)) - (ypix * np.sin(yaw_rad))
ypix_rotated = (xpix * np.sin(yaw_rad)) + (ypix * np.cos(yaw_rad))
# Return the result
return xpix_rotated, ypix_rotated
def translate_pix(xpix_rot, ypix_rot, xpos, ypos, scale):
# Apply a scaling and a translation
xpix_translated = (xpix_rot / scale) + xpos
ypix_translated = (ypix_rot / scale) + ypos
# Return the result
return xpix_translated, ypix_translated
# Define a function to apply rotation and translation (and clipping)
# Once you define the two functions above this function should work
def pix_to_world(xpix, ypix, xpos, ypos, yaw, world_size, scale):
# Apply rotation
xpix_rot, ypix_rot = rotate_pix(xpix, ypix, yaw)
# Apply translation
xpix_tran, ypix_tran = translate_pix(xpix_rot, ypix_rot, xpos, ypos, scale)
# Perform rotation, translation and clipping all at once
x_pix_world = np.clip(np.int_(xpix_tran), 0, world_size - 1)
y_pix_world = np.clip(np.int_(ypix_tran), 0, world_size - 1)
# Return the result
return x_pix_world, y_pix_world
# Grab another random image
idx = np.random.randint(0, len(img_list)-1)
image = mpimg.imread(img_list[idx])
warped = perspect_transform(image, source, destination)
threshed = color_thresh(warped)
# Calculate pixel values in rover-centric coords and distance/angle to all pixels
xpix, ypix = rover_coords(threshed)
dist, angles = to_polar_coords(xpix, ypix)
mean_dir = np.mean(angles)
# Do some plotting
fig = plt.figure(figsize=(12,9))
plt.subplot(221)
plt.imshow(image)
plt.subplot(222)
plt.imshow(warped)
plt.subplot(223)
plt.imshow(threshed, cmap='gray')
plt.subplot(224)
plt.plot(xpix, ypix, '.')
plt.ylim(-160, 160)
plt.xlim(0, 160)
arrow_length = 100
x_arrow = arrow_length * np.cos(mean_dir)
y_arrow = arrow_length * np.sin(mean_dir)
plt.arrow(0, 0, x_arrow, y_arrow, color='red', zorder=2, head_width=10, width=2)
```
## Read in saved data and ground truth map of the world
The next cell is all setup to read your saved data into a `pandas` dataframe. Here you'll also read in a "ground truth" map of the world, where white pixels (pixel value = 1) represent navigable terrain.
After that, we'll define a class to store telemetry data and pathnames to images. When you instantiate this class (`data = Databucket()`) you'll have a global variable called `data` that you can refer to for telemetry and map data within the `process_image()` function in the following cell.
```
# Import pandas and read in csv file as a dataframe
import pandas as pd
# Change the path below to your data directory
# If you are in a locale (e.g., Europe) that uses ',' as the decimal separator
# change the '.' to ','
df = pd.read_csv('../test_dataset/robot_log.csv', delimiter=';', decimal='.')
csv_img_list = df["Path"].tolist() # Create list of image pathnames
# Read in ground truth map and create a 3-channel image with it
ground_truth = mpimg.imread('../calibration_images/map_bw.png')
ground_truth_3d = np.dstack((ground_truth*0, ground_truth*255, ground_truth*0)).astype(np.float)
# Creating a class to be the data container
# Will read in saved data from csv file and populate this object
# Worldmap is instantiated as 200 x 200 grids corresponding
# to a 200m x 200m space (same size as the ground truth map: 200 x 200 pixels)
# This encompasses the full range of output position values in x and y from the sim
class Databucket():
def __init__(self):
self.images = csv_img_list
self.xpos = df["X_Position"].values
self.ypos = df["Y_Position"].values
self.yaw = df["Yaw"].values
self.count = 0 # This will be a running index
self.worldmap = np.zeros((200, 200, 3)).astype(np.float)
self.ground_truth = ground_truth_3d # Ground truth worldmap
# Instantiate a Databucket().. this will be a global variable/object
# that you can refer to in the process_image() function below
data = Databucket()
```
## Write a function to process stored images
Modify the `process_image()` function below by adding in the perception step processes (functions defined above) to perform image analysis and mapping. The following cell is all set up to use this `process_image()` function in conjunction with the `moviepy` video processing package to create a video from the images you saved taking data in the simulator.
In short, you will be passing individual images into `process_image()` and building up an image called `output_image` that will be stored as one frame of video. You can make a mosaic of the various steps of your analysis process and add text as you like (example provided below).
To start with, you can simply run the next three cells to see what happens, but then go ahead and modify them such that the output video demonstrates your mapping process. Feel free to get creative!
```
# Define a function to pass stored images to
# reading rover position and yaw angle from csv file
# This function will be used by moviepy to create an output video
def process_image(img):
image = img
# Transform Perspective to top down view.
source = np.float32([[ 14, 140], [ 302, 140], [ 200, 95], [ 118, 95]])
destination_size = 5
bottom_offset = 6
destination = np.float32([[image.shape[1]/2 - destination_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + destination_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + destination_size, image.shape[0] - 2 * destination_size - bottom_offset],
[image.shape[1]/2 - destination_size, image.shape[0] - 2 * destination_size - bottom_offset]])
image_perspective = perspect_transform(image, source, destination)
# Color Transform
image_color_transform_ground = color_thresh(image_perspective)
image_color_transform_walls = np.zeros_like(image_color_transform_ground)
showWalls = image_color_transform_ground[: ,:] == 0
image_color_transform_walls[showWalls] = 1
# data.ground_truth[:,:,0] = image_color_transform_walls * 255
# data.ground_truth[:,:,1] = image_color_transform_rocks * 255
# data.ground_truth[:,:,2] = image_color_transform_ground * 255
# converting to rover-centric coords
xpix_ground, ypix_ground = rover_coords(image_color_transform_ground)
xpix_walls, ypix_walls = rover_coords(image_color_transform_walls)
# Converting to world coordinates
scale = 10
world_size = 200
xpos = data.xpos[data.count]
ypos = data.ypos[data.count]
yaw = data.yaw[data.count]
x_world_ground, y_world_ground = pix_to_world(xpix_ground, ypix_ground, xpos,
ypos, yaw,
world_size, scale)
x_world_walls, y_world_walls = pix_to_world(xpix_walls, ypix_walls, xpos,
ypos, yaw,
world_size, scale)
data.worldmap[y_world_walls, x_world_walls, 0] = 255
data.worldmap[y_world_ground, x_world_ground, 2] += 1
distances, angles = to_polar_coords(xpix_ground, ypix_ground) # Convert to polar coords
avg_angle = np.mean(angles) # Compute the average angle
# Rover.nav_angles = angles # Angles of navigable terrain pixels
# Rover.nav_dists = distances
# Example of how to use the Databucket() object defined above
# to print the current x, y and yaw values
# print(data.xpos[data.count], data.ypos[data.count], data.yaw[data.count])
# TODO:
# 1) Define source and destination points for perspective transform
# 2) Apply perspective transform
# 3) Apply color threshold to identify navigable terrain/obstacles/rock samples
# 4) Convert thresholded image pixel values to rover-centric coords
# 5) Convert rover-centric pixel values to world coords
# 6) Update worldmap (to be displayed on right side of screen)
# Example: data.worldmap[obstacle_y_world, obstacle_x_world, 0] += 1
# data.worldmap[rock_y_world, rock_x_world, 1] += 1
# data.worldmap[navigable_y_world, navigable_x_world, 2] += 1
# 7) Make a mosaic image, below is some example code
# First create a blank image (can be whatever shape you like)
output_image = np.zeros((img.shape[0] + data.worldmap.shape[0], img.shape[1]*2, 3))
# Next you can populate regions of the image with various output
# Here I'm putting the original image in the upper left hand corner
output_image[0:img.shape[0], 0:img.shape[1]] = img
# Let's create more images to add to the mosaic, first a warped image
warped = perspect_transform(img, source, destination)
# Add the warped image in the upper right hand corner
output_image[0:img.shape[0], img.shape[1]:] = image_perspective
# Overlay worldmap with ground truth map
map_add = cv2.addWeighted(data.worldmap, 1, data.ground_truth, 0.5, 0)
# Flip map overlay so y-axis points upward and add to output_image
output_image[img.shape[0]:, 0:data.worldmap.shape[1]] = np.flipud(map_add)
# Then putting some text over the image
cv2.putText(output_image,"Populate this image with your analyses to make a video!", (20, 20),
cv2.FONT_HERSHEY_COMPLEX, 0.4, (255, 255, 255), 1)
if data.count < len(data.images) - 1:
data.count += 1 # Keep track of the index in the Databucket()
return output_image
```
## Make a video from processed image data
Use the [moviepy](https://zulko.github.io/moviepy/) library to process images and create a video.
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from moviepy.editor import ImageSequenceClip
# Define pathname to save the output video
output = '../output/test_mapping.mp4'
data = Databucket() # Re-initialize data in case you're running this cell multiple times
clip = ImageSequenceClip(data.images, fps=60) # Note: output video will be sped up because
# recording rate in simulator is fps=25
new_clip = clip.fl_image(process_image) #NOTE: this function expects color images!!
%time new_clip.write_videofile(output, audio=False)
```
### This next cell should function as an inline video player
If this fails to render the video, try running the following cell (alternative video rendering method). You can also simply have a look at the saved mp4 in your `/output` folder
```
from IPython.display import HTML
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(output))
```
### Below is an alternative way to create a video in case the above cell did not work.
```
import io
import base64
video = io.open(output, 'r+b').read()
encoded_video = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded_video.decode('ascii')))
```
|
github_jupyter
|
##### Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Get Started with TensorFlow
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" /><span>Run in Google Colab</span></a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /><span>View source on GitHub</span></a></td></table>
This is a [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. Python programs are run directly in the browser—a great way to learn and use TensorFlow. To run the Colab notebook:
1. Connect to a Python runtime: At the top-right of the menu bar, select *CONNECT*.
2. Run all the notebook code cells: Select *Runtime* > *Run all*.
For more examples and guides (including details for this program), see [Get Started with TensorFlow](https://www.tensorflow.org/get_started/).
Let's get started, import the TensorFlow library into your program:
```
import tensorflow as tf
```
Load and prepare the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. Convert the samples from integers to floating-point numbers:
```
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
```
Build the `tf.keras` model by stacking layers. Select an optimizer and loss function used for training:
```
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
```
Train and evaluate model:
```
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```
You’ve now trained an image classifier with ~98% accuracy on this dataset. See [Get Started with TensorFlow](https://www.tensorflow.org/get_started/) to learn more.
|
github_jupyter
|
# Getting Started with Data Ingestion and Preparation
Learn how to quickly start using the Iguazio Data Science Platform to collect, ingest, and explore data.
- [Overview](#gs-overview)
- [Collecting and Ingesting Data](#gs-data-collection-and-ingestion)
- [Ingesting Data From an External Database to a NoSQL Table Using V3IO Frames](#ingest-from-external-db-to-no-sql-using-frames)
- [Ingesting Files from Amazon S3](#ingest-from-amazon-s3)
- [Streaming Data From an External Streaming Engine Using Nuclio](#streaming-data-from-an-external-streaming-engine-using-nuclio)
- [Exploring and Processing Data](#gs-data-exploration-and-processing)
- [Exploring Data Using Spark DataFrames](#data-exploration-spark)
- [Exploring Data Using V3IO Frames and pandas DataFrames](#data-exploration-v3io-frames-n-pandas)
- [Exploring Data Using SQL](#data-exploration-sql)
- [Data Collection and Exploration Getting-Started Example](#getting-started-example)
- [Step 1: Ingest a Sample CSV File from Amazon S3](#getting-started-example-step-ingest-csv)
- [Step 2: Convert the Sample CSV File to a NoSQL Table](#getting-started-example-step-convert-csv-to-nosql-table)
- [Step 3: Run Interactive SQL Queries](#getting-started-example-step-run-sql-queries)
- [Step 4: Convert the Data to a Parquet Table](#getting-started-example-step-convert-data-to-parquet)
- [Step 5: Browse the Example Container Directory](#getting-started-example-step-browse-the-examples-dir)
- [Cleanup](#cleanup)
- [Delete Data](#delete-data)
- [Release Spark Resources](#release-spark-resources)
<a id="gs-overview"></a>
## Overview
This tutorial explains and demonstrates how to collect, ingest, and explore data with the Iguazio Data Science Platform (**"the platform"**).<br>
For an overview of working with data in the platform's data store and the various available methods for ingesting, storing, and manipulating data in the platform, see the data ingestion and preparation **README** ([notebook](README.ipynb) / [Markdown](README.md)).
<a id="gs-data-collection-and-ingestion"></a>
## Collecting and Ingesting Data
The platform supports various alternative methods for collecting and ingesting data into its data containers (i.e., its data store).
For more information, see the [**platform-overview.ipynb**](../platform-overview.ipynb.ipynb#data-collection-and-ingestion) tutorial notebook
The data collection and ingestion can be done as a one-time operation, using different platform APIs — which can be run from your preferred programming interface, such as an interactive web-based Jupyter or Zeppelin notebook — or as an ongoing ingestion stream, using Nuclio serverless functions.
This section explains and demonstrates how to collect and ingest (import) data into the platform using code that's run from a Jupyter notebook.
<a id="ingest-from-external-db-to-no-sql-using-frames"></a>
### Ingesting Data From an External Database to a NoSQL Table Using V3IO Frames
For an example of how to collect data from an external database — such as MySQL, Oracle, and PostgreSQL — and ingest (write) it into a NoSQL table in the platform, using the V3IO Frames API, see the [read-external-db](read-external-db.ipynb) tutorial.
<a id="ingest-from-amazon-s3"></a>
### Ingesting Files from Amazon S3
<a id="ingest-from-amazon-s3-using-curl"></a>
#### Ingesting Files from Amazon S3 to the Platform File System Using curl
You can use a simple [curl](https://curl.haxx.se/) command to ingest a file (object) from an external web data source, such as an Amazon S3 bucket, to the platform's distributed file system (i.e., into the platform's data store).
This is demonstrated in the following code example and in the [getting-started example](#getting-started-example) in this notebook.
The [spark-sql-analytics](spark-sql-analytics.ipynb) tutorial notebook demonstrates a similar ingestion using [Botocore](https://github.com/boto/botocore).
The example in the following cells uses curl to read a CSV file from the [Iguazio sample data-sets](http://iguazio-sample-data.s3.amazonaws.com/) public Amazon S3 bucket and save it to an **examples** directory in the running-user directory of the predefined "users" data container (`/v3io/users/$V3IO_USERNAME` = `v3io/$V3IO_HOME` = `/User`).
```
!mkdir -p /User/examples # <=> /v3io/${V3IO_HOME}/examples or /v3io/users/${V3IO_USERNAME}/examples
%%sh
CSV_PATH="/User/examples/stocks.csv"
curl -L "iguazio-sample-data.s3.amazonaws.com/2018-03-26_BINS_XETR08.csv" > ${CSV_PATH}
```
<a id="ingest-from-amazon-s3-to-nosql-table-using-v3io-frames-n-pandas"></a>
#### Ingesting Data from Amazon S3 to a NoSQL Table Using V3IO Frames and pandas
For an example of how to import data from Amazon S3 and save it into a NoSQL table in the platform's data store by using V3IO Frames and pandas DataFrames, see the [frames](frames.ipynb) tutorial notebook.
<a id="streaming-data-from-an-external-streaming-engine-using-nuclio"></a>
### Streaming Data From an External Streaming Engine Using Nuclio
To read data from an external streaming engine — such as Kafka, Kinesis, or RabbitMQ — create a Nuclio function that listens on the stream, and write the stream data to a NoSQL or time-series database (TSDB) table:
1. In the dashboard's side navigation menu, select **Functions** to display the Nuclio serverless functions dashboard.
2. Create a new Nuclio project or select an existing project.
3. In the action toolbar, select **Create Function**.
4. Enter a function name, select an appropriate template, such as **kafka-to-tsdb**, configure the required template parameters, and apply your changes.
5. Select **Deploy** from the action toolbar to deploy your function.
<a id="gs-data-exploration-and-processing"></a>
## Exploring and Processing Data
After you have ingested data into the platform's data containers, you can use various alternative methods and tools to explore and analyze the data.
Data scientists typically use Jupyter Notebook to run the exploration phase.
As outlined in the [**welcome**](../welcome.ipynb#data-exploration-and-processing) tutorial notebook, the platform's Jupyter Notebook service has a wide range of pre-deployed popular data science tools (such as Spark and Presto) and allows installation of additional tools and packages, enabling you to use different APIs to access the same data from a single Jupyter notebook.
This section explains and demonstrates how to explore data in the platform from a Jupyter notebook.
<a id="data-exploration-spark"></a>
### Exploring Data using Spark DataFrames
Spark is a distributed computing framework for data analytics.
You can easily run distributed Spark jobs on you platform cluster that use Spark DataFrames to access data files (objects), tables, or streams in the platform's data store.
For more information and examples, see the [spark-sql-analytics](spark-sql-analytics.ipynb) tutorial notebook.
<a id="data-exploration-v3io-frames-n-pandas"></a>
### Exploring Data Using V3IO Frames and pandas DataFrames
Iguazio's V3IO Frames open-source data-access library provides a unified high-performance DataFrames API for accessing NoSQL, stream, and time-series data in the platform's data store.
These DataFrames can also be used to analyze the data with pandas.
For details and examples, see the [frames](frames.ipynb) tutorial notebook.
<a id="data-exploration-sql"></a>
### Exploring Data Using SQL
You can run SQL statements (`SELECT` only) on top of NoSQL tables in the platform's data store.
To do this, you need to use the Jupyter `%sql` or `%%sql` IPython Jupyter magic followed by an SQL statement.
The platform supports standard ANSI SQL semantics.
Under the hood, the SQL statements are executed via [Presto](https://prestosql.io/), which is a distributed SQL engine designed from the ground up for fast analytics queries.
In the example in the following cell, as a preparation for the SQL query, the **stocks.csv** file that was ingested to the **users/<running user>/examples** platform data-container directory in the previous [Ingesting Files from Amazon S3 to the Platform](#ingest-from-amazon-s3) example is written to a **stocks_example_tab** NoSQL table in the same directory.
Then, an SQL `SELECT` query is run on this table.
You can also find a similar example in the [getting-started example](#getting-started-example) in this notebook.
```
# Use V3IO Frames to convert the CSV file that was ingested in the AWS S3 data-collection example to a NoSQL table.
# NOTE: Make sure to first create a V3IO Frames service from the "Services" page of the platform dashboard, and run the
# "Ingesting Files from Amazon S3 to the Platform File System Using curl" example to create users/$V3IO_USERNAME/examples/stocks.csv.
import pandas as pd
import v3io_frames as v3f
import os
# Create a V3IO Frames client for the "users" data container
client = v3f.Client("framesd:8081", container="users")
# Full CSV file path
csv_path = os.path.join("/User", "examples", "stocks.csv")
# Relative NoSQL table path within the "users" container
rel_nosql_table_path = os.path.join(os.getenv('V3IO_USERNAME'), "examples", "stocks_example_tab")
# Read the CSV file into a Pandas DataFrame
df = pd.read_csv(csv_path, header="infer")
# Convert the CSV file to a NoSQL table
client.write("kv", rel_nosql_table_path, df)
# Use Presto to query the NoSQL table that was created in the previous step
presto_nosql_table_path = os.path.join('v3io.users."' + os.getenv('V3IO_USERNAME'), 'examples', 'stocks_example_tab"')
%sql select * from $presto_nosql_table_path limit 10
```
<a id="getting-started-example"></a>
## Data Collection and Exploration Getting-Started Example
This section demonstrates a data collection, ingestion, and exploration flow.
Follow the tutorial by running the code cells in order of appearance:
- [Step #1](#getting-started-example-step-ingest-csv) — a CSV file is read from an Amazon S3 bucket and saved into an examples data-container directory using curl.
The examples directory is first created by using a file-system command.
- [Step #2](#getting-started-example-step-convert-csv-to-nosql-table) — the ingested file is converted into a NoSQL table by using Spark DataFrames.
- [Step #3](#getting-started-example-step-run-sql-queries) — a Presto SQL query is run on the NoSQL table.
- [Step #4](#getting-started-example-step-convert-data-to-parquet) — the ingested CSV file is converted into a Parquet table by using Spark DataFrames.
- [Step #5](#getting-started-example-step-browse-the-examples-dir) — the examples container directory is browsed by using local and Hadoop file-system commands.
- At the end of the flow, you can optionally [delete](#getting-started-example-deleting-data) the examples directory using a file-system command.
You can find more information about this sample flow in the [Converting a CSV File to a NoSQL Table](https://www.iguazio.com/docs/latest-release/tutorials/getting-started/ingest-n-consume-files/#convert-csv-to-nosql) platform quick-start tutorial.
> **Tip:** You can also browse the files and directories that you write to the "users" container in this tutorial from the platform dashboard: in the side navigation menu, select **Data**, and then select the **users** container from the table. On the container data page, select the **Browse** tab, and then use the side directory-navigation tree to browse the directories. Selecting a file or directory in the browse table displays its metadata.
<a id="getting-started-example-step-ingest-csv"></a>
### Step 1: Ingest a Sample CSV File from Amazon S3
Use `curl` to download a sample stocks-data CSV file from the [Iguazio sample data-set](http://iguazio-sample-data.s3.amazonaws.com/) public Amazon S3 bucket.
For additional public data sets, check out [Registry of Open Data on AWS](https://registry.opendata.aws/).
> **NOTE:** All the platform tutorial notebook examples store the data in an **examples** directory in the running-user directory of the predefined "users" container — **users/<running user>/examples**.
> The running-user directory is automatically created by the Jupyter Notebook service.
> The `V3IO_HOME` environment variable is used to reference the **users/<running user>** directory.
> To save the data to a different root container directory or to a different container, you need to specify the data path in the local file-system commands as `/v3io/<container name>/<data path>`, and in Spark DataFrames or Hadoop FS commands as a fully qualified path of the format `v3io://<container name>/<table path>`.
> For more information, see the [v3io-mount](#v3io-mount) and [running-user directory](#running-user-dir) information [Jupyter Notebook Basics](#jupyter-notebook-basics) section of this notebook.
```
%%sh
DIR_PATH="/User/examples/" # <=> "/v3io/${V3IO_HOME}/examples/" or "/v3io/users/${V3IO_USERNAME}/examples/"
CSV_PATH="${DIR_PATH}stocks.csv"
# Create the examples directory
mkdir -p ${DIR_PATH}
# Download a sample stocks CSV file from the Iguazio sample data-set Amazon S3 bucket to the examples directory
curl -L "iguazio-sample-data.s3.amazonaws.com/2018-03-26_BINS_XETR08.csv" > ${CSV_PATH}
```
<a id="getting-started-example-step-convert-csv-to-nosql-table"></a>
### Step 2: Convert the Sample CSV File to a NoSQL Table
Read the sample **stocks.csv** file that you downloaded and ingested in the previous step into a Spark DataFrame, and write the data in NoSQL format to a new "stocks_tab" table in the same container directory (**users/<running user>/examples/stocks_tab**).
> **Note**
> - To use the Iguazio Spark Connector, set the DataFrame's data-source format to `io.iguaz.v3io.spark.sql.kv`.
> - The data path in the Spark DataFrame is specified by using the `V3IO_HOME_URL` environment variable, which is set to `v3io://users/<running user>`.
> See the [running-user directory](#running-user-dir) information.
```
import os
from pyspark.sql import SparkSession
# Example diretory path - a <running user>/examples directory in the "users" container
dir_path = os.path.join(os.getenv("V3IO_HOME_URL"), "examples")
# CSV file path
csv_path = os.path.join(dir_path, "stocks.csv")
# NoSQL table path
nosql_table_path = os.path.join(dir_path, "stocks_tab")
# Create a new Spark session
spark = SparkSession.builder.appName("Iguazio data ingestion and preparation getting-started example").getOrCreate()
# Read the sample CSV file into a Spark DataFrame, and let Spark infer the schema of the data
df = spark.read.option("header", "true").csv(csv_path, inferSchema="true")
# Show the DataFrame data
df.show()
# Write the DataFrame data to a NoSQL table in a platform data container.
# Define the "ISIN" column (attribute) as the table's primary key.
df.write.format("io.iguaz.v3io.spark.sql.kv").mode("append") \
.option("key", "ISIN").option("allow-overwrite-schema", "true") \
.save(nosql_table_path)
# Display the table schema:
df.printSchema()
```
<a id="getting-started-example-step-run-sql-queries"></a>
### Step 3: Run Interactive SQL Queries
Use the `%sql` Jupyter magic to run an SQL queries on the "stocks_tab" table that was created in the previous step.
(The queries is processed using Presto.)
The example runs a `SELECT` query that reads the first ten table items.
```
presto_nosql_table_path = os.path.join('v3io.users."' + os.getenv('V3IO_USERNAME'), 'examples', 'stocks_tab"')
%sql select * from $presto_nosql_table_path limit 10
%sql select count(*) from $presto_nosql_table_path
```
<a id="getting-started-example-step-convert-data-to-parquet"></a>
### Step 4: Convert the Data to a Parquet Table
Use a Spark DataFrame `write` command to write the data in the Spark DaraFrame — which was created from the CSV file and used to create the NoSQL table in [Step 2](#getting-started-example-step-convert-csv-to-nosql-table) — to a new **users/<running user>/examples/stocks_prqt** Parquet table.
```
# Write the DataFrame data that was read from the CSV file in Step 2 to a Parquet table in a platform data container
prqt_table_path = os.path.join(dir_path, "stocks_prqt")
df.write.mode('overwrite').parquet(prqt_table_path)
```
<a id="getting-started-example-step-browse-the-examples-dir"></a>
### Step 5: Browse the Example Container Directory
Use a file-system bash-shell command to list the contents of the **users/<running user>/examples** data-container directory to which all the ingested data in the previous steps were saved.
You should see in this directory the **stocks.csv** file, **stocks_tab** NoSQL table directory, and **stocks_prqt** Parquet table directory that you created in the previous steps.
The following cells demonstrate how to issue the same command using the local file system and using Hadoop FS.
```
# List the contents of the users/<running user>/examples directory using a local file-system command
!ls -lrt /User/examples
# The following are equivalent commands that demonstrate different ways to reference your user home directory:
#!ls -lrt /v3io/${V3IO_HOME}/examples
#!ls -lrt /v3io/users/${V3IO_USERNAME}/examples
%%sh
# List the contents of the users/<running user>/examples directory using an Hadoop FS command
hadoop fs -ls ${V3IO_HOME_URL}/examples
# The following are equivalent commands that demonstrate different ways to reference your user home directory:
#hadoop fs -ls v3io://${V3IO_HOME}/examples
#hadoop fs -ls v3io://users/${V3IO_USERNAME}/examples
```
<a id="cleanup"></a>
## Cleanup
Prior to exiting, release disk space, computation, and memory resources consumed by the active session:
- [Delete Data](#delete-data)
- [Release Spark Resources](#release-spark-resources)
<a id="delete-data"></a>
### Delete Data
Optionally delete any of the directories or files that you created.
See the instructions in the [Creating and Deleting Container Directories](https://www.iguazio.com/docs/latest-release/tutorials/getting-started/containers/#create-delete-container-dirs) tutorial.
The following example uses a local file-system command to delete the entire contents of the **users/<running user>/examples** directory that was created in this example, but not the directory itself.
```
# Delete the contents of the examples directory:
#!rm -rf /User/examples/*
# You can also delete the examples directory iteself (and all its contents):
#!rm -rf /User/examples/
```
<a id="release-spark-resources"></a>
### Release Spark Resources
When you're done, run the following command to stop your Spark session and release its computation and memory resources:
```
spark.stop()
```
|
github_jupyter
|
```
import numpy as np
import pandas as pd
from pathlib import Path
train_df = pd.read_csv(Path('Resources/2019loans.csv'))
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv'))
train_df['debt_settlement_flag'].value_counts()
test_df['debt_settlement_flag'].value_counts()
test_df_cols=list(test_df.columns)
set(train_df_cols) == set(test_df_cols)
X = pd.get_dummies(train_df)
y_train = X['loan_status_high_risk']
X_train = X.drop(["loan_status_low_risk","loan_status_high_risk"],axis=1)
X_train['debt_settlement_flag_Y']
y_train.value_counts()
y_test.value_counts()
```
# Processing test data
```
# add missing dummy variables to testing set
X_test = pd.get_dummies(test_df)
y_test = X_test['loan_status_high_risk']
X_test = X_test.drop(["loan_status_low_risk","loan_status_high_risk"],axis=1)
X_test['debt_settlement_flag_Y'] = 0
X_test
set(X_train.columns) - set(X_test.columns)
```
# Consider the models
I think the random Forest would perform better than logistic regression in this case. Because test dataset includes numerical value,such as installment, annual_inc,etc. "Since Logistic Regression depends on a calculation based on ‘weights’, numerical encoding of categorical variables can lead the algorithm to treat certain categories are of higher importance compared to others, depending on the number assigned." -- https://medium.com/@bemali_61284/random-forest-vs-logistic-regression-16c0c8e2484c. On the other hand, "by randomly selecting subsets of features, some trees of the forest can isolate more important features while increasing the overall accuracy of the result".-- https://medium.com/@bemali_61284/random-forest-vs-logistic-regression-16c0c8e2484c.
# Fit a Logistic Regression
```
from sklearn.linear_model import LogisticRegression
classifier_lib = LogisticRegression(solver='liblinear', max_iter=10000)
classifier_lib
# Train the Logistic Regression model on the unscaled data and print the model score
classifier_lib.fit(X_train, y_train)
print(f"Training Data Score: {classifier_lib.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier_lib.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Training Score: {clf.score(X_test, y_test)}')
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
classifier = LogisticRegression()
classifier.fit(X_train_scaled, y_train)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
```
|
github_jupyter
|
# 1 - Installs and imports
```
!pip install --upgrade pip
!pip install sentencepiece
!pip install transformers
from transformers import AutoTokenizer, AutoModel, TFAutoModel, AutoConfig
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import pipeline
import numpy as np
from scipy.spatial.distance import cosine
from collections import defaultdict
import urllib
import numpy as np
from scipy.special import softmax
from sklearn.metrics import classification_report
```
# 2 - Fetch XLM-T model
```
MODEL = "cardiffnlp/twitter-xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
```
# Use Cases
## 1 - Compute Tweet Similarity
```
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
def get_embedding(text):
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().numpy()
features_mean = np.mean(features[0], axis=0)
return features_mean
query = "Acabo de pedir pollo frito 🐣" #spanish
tweets = ["We had a great time! ⚽️", # english
"We hebben een geweldige tijd gehad! ⛩", # dutch
"Nous avons passé un bon moment! 🎥", # french
"Ci siamo divertiti! 🍝"] # italian
d = defaultdict(int)
for tweet in tweets:
sim = 1-cosine(get_embedding(query),get_embedding(tweet))
d[tweet] = sim
print('Most similar to: ',query)
print('----------------------------------------')
for idx,x in enumerate(sorted(d.items(), key=lambda x:x[1], reverse=True)):
print(idx+1,x[0])
```
## 2 - Simple inference example (with `pipelines`)
```
model_path = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("Huggingface es lo mejor! Awesome library 🤗😎")
```
# 3 - Experiment on UMSAB
## Fetch dataset (Spanish)
```
language = 'spanish'
files = """test_labels.txt
test_text.txt
train_labels.txt
train_text.txt
val_labels.txt
val_text.txt""".split('\n')
def fetch_data(language, files):
dataset = defaultdict(list)
for infile in files:
thisdata = infile.split('/')[-1].replace('.txt','')
dataset_url = f"https://raw.githubusercontent.com/cardiffnlp/xlm-t/main/data/sentiment/{language}/{infile}"
print(f'Fetching from {dataset_url}')
with urllib.request.urlopen(dataset_url) as f:
for line in f:
if thisdata.endswith('labels'):
dataset[thisdata].append(int(line.strip().decode('utf-8')))
else:
dataset[thisdata].append(line.strip().decode('utf-8'))
return dataset
dataset = fetch_data(language, files)
dataset['train_text'][:3],dataset['train_labels'][:3]
```
## Run full experiment
```
# load multilingual sentiment classifier
CUDA = True # set to true if using GPU (Runtime -> Change runtime Type -> GPU)
BATCH_SIZE = 32
MODEL = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
config = AutoConfig.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
if CUDA:
model = model.to('cuda')
# helper functions
def preprocess(corpus):
outcorpus = []
for text in corpus:
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
new_text = " ".join(new_text)
outcorpus.append(new_text)
return outcorpus
def predict(text, cuda):
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt', padding = True, truncation = True)
if cuda:
encoded_input.to('cuda')
output = model(**encoded_input)
scores = output[0].detach().cpu().numpy()
else:
output = model(**encoded_input)
scores = output[0].detach().numpy()
scores = softmax(scores, axis=-1)
return scores
from torch.utils.data import DataLoader
dl = DataLoader(dataset['test_text'], batch_size=BATCH_SIZE)
all_preds = []
for idx,batch in enumerate(dl):
if idx % 10 == 0:
print('Batch ',idx+1,' of ',len(dl))
text = preprocess(batch)
scores = predict(text, CUDA)
preds = np.argmax(scores, axis=-1)
all_preds.extend(preds)
print(classification_report(dataset['test_labels'], all_preds))
```
|
github_jupyter
|
# Two-Level: Sech Pulse 4π — Pulse Breakup
## Define the Problem
First we need to define a sech pulse with the area we want. We'll fix the width of the pulse and the area to find the right amplitude.
The full-width at half maximum (FWHM) $t_s$ of the sech pulse is related to the FWHM of a Gaussian by a factor of $1/2.6339157938$. (See §3.2.2 of my [PhD thesis](https://github.com/tommyogden/phd-thesis)).
```
import numpy as np
SECH_FWHM_CONV = 1./2.6339157938
t_width = 1.0*SECH_FWHM_CONV # [τ]
print('t_width', t_width)
mb_solve_json = """
{
"atom": {
"fields": [
{
"coupled_levels": [[0, 1]],
"rabi_freq_t_args": {
"n_pi": 4.0,
"centre": 0.0,
"width": %f
},
"rabi_freq_t_func": "sech"
}
],
"num_states": 2
},
"t_min": -2.0,
"t_max": 10.0,
"t_steps": 240,
"z_min": -0.5,
"z_max": 1.5,
"z_steps": 100,
"interaction_strengths": [
10.0
],
"savefile": "mbs-two-sech-4pi"
}
"""%(t_width)
from maxwellbloch import mb_solve
mbs = mb_solve.MBSolve().from_json_str(mb_solve_json)
```
We'll just check that the pulse area is what we want.
```
print('The input pulse area is {0}'.format(
np.trapz(mbs.Omegas_zt[0,0,:].real, mbs.tlist)/np.pi))
Omegas_zt, states_zt = mbs.mbsolve(recalc=False)
```
## Plot Output
```
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style('darkgrid')
fig = plt.figure(1, figsize=(16, 6))
ax = fig.add_subplot(111)
cmap_range = np.linspace(0.0, 3.0, 11)
cf = ax.contourf(mbs.tlist, mbs.zlist,
np.abs(mbs.Omegas_zt[0]/(2*np.pi)),
cmap_range, cmap=plt.cm.Blues)
ax.set_title('Rabi Frequency ($\Gamma / 2\pi $)')
ax.set_xlabel('Time ($1/\Gamma$)')
ax.set_ylabel('Distance ($L$)')
for y in [0.0, 1.0]:
ax.axhline(y, c='grey', lw=1.0, ls='dotted')
plt.colorbar(cf);
fig, ax = plt.subplots(figsize=(16, 5))
ax.plot(mbs.zlist, mbs.fields_area()[0]/np.pi)
ax.set_ylim([0.0, 8.0])
ax.set_xlabel('Distance ($L$)')
ax.set_ylabel('Pulse Area ($\pi$)')
```
## Analysis
The $4 \pi$ sech pulse breaks up into two $2 \pi$ pulses, which travel at a speed according to their width.
## Movie
```
# C = 0.1 # speed of light
# Y_MIN = 0.0 # Y-axis min
# Y_MAX = 4.0 # y-axis max
# ZOOM = 2 # level of linear interpolation
# FPS = 60 # frames per second
# ATOMS_ALPHA = 0.2 # Atom indicator transparency
# FNAME = "images/mb-solve-two-sech-4pi"
# FNAME_JSON = FNAME + '.json'
# with open(FNAME_JSON, "w") as f:
# f.write(mb_solve_json)
# !make-mp4-fixed-frame.py -f $FNAME_JSON -c $C --fps $FPS --y-min $Y_MIN --y-max $Y_MAX \
# --zoom $ZOOM --atoms-alpha $ATOMS_ALPHA #--peak-line --c-line
# FNAME_MP4 = FNAME + '.mp4'
# !make-gif-ffmpeg.sh -f $FNAME_MP4 --in-fps $FPS
# from IPython.display import Image
# Image(url=FNAME_MP4 +'.gif', format='gif')
```
|
github_jupyter
|
This notebook presents some code to compute some basic baselines.
In particular, it shows how to:
1. Use the provided validation set
2. Compute the top-30 metric
3. Save the predictions on the test in the right format for submission
```
%pylab inline --no-import-all
import os
from pathlib import Path
import pandas as pd
# Change this path to adapt to where you downloaded the data
DATA_PATH = Path("data")
# Create the path to save submission files
SUBMISSION_PATH = Path("submissions")
os.makedirs(SUBMISSION_PATH, exist_ok=True)
```
We also load the official metric, top-30 error rate, for which we provide efficient implementations:
```
from GLC.metrics import top_30_error_rate
help(top_30_error_rate)
from GLC.metrics import top_k_error_rate_from_sets
help(top_k_error_rate_from_sets)
```
For submissions, we will also need to predict the top-30 sets for which we also provide an efficient implementation:
```
from GLC.metrics import predict_top_30_set
help(predict_top_30_set)
```
We also provide an utility function to generate submission files in the right format:
```
from GLC.submission import generate_submission_file
help(generate_submission_file)
```
# Observation data loading
We first need to load the observation data:
```
df_obs_fr = pd.read_csv(DATA_PATH / "observations" / "observations_fr_train.csv", sep=";", index_col="observation_id")
df_obs_us = pd.read_csv(DATA_PATH / "observations" / "observations_us_train.csv", sep=";", index_col="observation_id")
df_obs = pd.concat((df_obs_fr, df_obs_us))
```
Then, we retrieve the train/val split provided:
```
obs_id_train = df_obs.index[df_obs["subset"] == "train"].values
obs_id_val = df_obs.index[df_obs["subset"] == "val"].values
y_train = df_obs.loc[obs_id_train]["species_id"].values
y_val = df_obs.loc[obs_id_val]["species_id"].values
n_val = len(obs_id_val)
print("Validation set size: {} ({:.1%} of train observations)".format(n_val, n_val / len(df_obs)))
```
We also load the observation data for the test set:
```
df_obs_fr_test = pd.read_csv(DATA_PATH / "observations" / "observations_fr_test.csv", sep=";", index_col="observation_id")
df_obs_us_test = pd.read_csv(DATA_PATH / "observations" / "observations_us_test.csv", sep=";", index_col="observation_id")
df_obs_test = pd.concat((df_obs_fr_test, df_obs_us_test))
obs_id_test = df_obs_test.index.values
print("Number of observations for testing: {}".format(len(df_obs_test)))
df_obs_test.head()
```
# Sample submission file
In this section, we will demonstrate how to generate the sample submission file provided.
To do so, we will use the function `generate_submission_file` from `GLC.submission`.
The sample submission consists in always predicting the first 30 species for all the test observations:
```
first_30_species = np.arange(30)
s_pred = np.tile(first_30_species[None], (len(df_obs_test), 1))
```
We can then generate the associated submission file using:
```
generate_submission_file(SUBMISSION_PATH / "sample_submission.csv", df_obs_test.index, s_pred)
```
# Constant baseline: 30 most observed species
The first baseline consists in predicting the 30 most observed species on the train set which corresponds exactly to the "Top-30 most present species":
```
species_distribution = df_obs.loc[obs_id_train]["species_id"].value_counts(normalize=True)
top_30_most_observed = species_distribution.index.values[:30]
```
As expected, it does not perform very well on the validation set:
```
s_pred = np.tile(top_30_most_observed[None], (n_val, 1))
score = top_k_error_rate_from_sets(y_val, s_pred)
print("Top-30 error rate: {:.1%}".format(score))
```
We will however generate the associated submission file on the test using:
```
# Compute baseline on the test set
n_test = len(df_obs_test)
s_pred = np.tile(top_30_most_observed[None], (n_test, 1))
# Generate the submission file
generate_submission_file(SUBMISSION_PATH / "constant_top_30_most_present_species_baseline.csv", df_obs_test.index, s_pred)
```
# Random forest on environmental vectors
A classical approach in ecology is to train Random Forests on environmental vectors.
We show here how to do so using [scikit-learn](https://scikit-learn.org/).
We start by loading the environmental vectors:
```
df_env = pd.read_csv(DATA_PATH / "pre-extracted" / "environmental_vectors.csv", sep=";", index_col="observation_id")
X_train = df_env.loc[obs_id_train].values
X_val = df_env.loc[obs_id_val].values
X_test = df_env.loc[obs_id_test].values
```
Then, we need to handle properly the missing values.
For instance, using `SimpleImputer`:
```
from sklearn.impute import SimpleImputer
imp = SimpleImputer(
missing_values=np.nan,
strategy="constant",
fill_value=np.finfo(np.float32).min,
)
imp.fit(X_train)
X_train = imp.transform(X_train)
X_val = imp.transform(X_val)
X_test = imp.transform(X_test)
```
We can now start training our Random Forest (as there are a lot of observations, over 1.8M, this can take a while):
```
from sklearn.ensemble import RandomForestClassifier
est = RandomForestClassifier(n_estimators=16, max_depth=10, n_jobs=-1)
est.fit(X_train, y_train)
```
As there are a lot of classes (over 17K), we need to be cautious when predicting the scores of the model.
This can easily take more than 5Go on the validation set.
For this reason, we will be predict the top-30 sets by batches using the following generic function:
```
def batch_predict(predict_func, X, batch_size=1024):
res = predict_func(X[:1])
n_samples, n_outputs, dtype = X.shape[0], res.shape[1], res.dtype
preds = np.empty((n_samples, n_outputs), dtype=dtype)
for i in range(0, len(X), batch_size):
X_batch = X[i:i+batch_size]
preds[i:i+batch_size] = predict_func(X_batch)
return preds
```
We can know compute the top-30 error rate on the validation set:
```
def predict_func(X):
y_score = est.predict_proba(X)
s_pred = predict_top_30_set(y_score)
return s_pred
s_val = batch_predict(predict_func, X_val, batch_size=1024)
score_val = top_k_error_rate_from_sets(y_val, s_val)
print("Top-30 error rate: {:.1%}".format(score_val))
```
We now predict the top-30 sets on the test data and save them in a submission file:
```
# Compute baseline on the test set
s_pred = batch_predict(predict_func, X_test, batch_size=1024)
# Generate the submission file
generate_submission_file(SUBMISSION_PATH / "random_forest_on_environmental_vectors.csv", df_obs_test.index, s_pred)
```
|
github_jupyter
|
## Tips Dataframe
- Loc:Desktop\Fundamentals_of-Data_Analysis/Fund_of_Data_Analysis/
```
#import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns; sns.set()
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
sns.set(rc={'figure.figsize':(10,8)})
sns.set_style('white')
```
### Description
```
df=pd.read_csv("tips.csv")
print (('The data has the following shape'), df.shape)
print ("\n")
df.info()
df.head()
df = df.rename(columns={'total_bill': 'Bill', 'tip':'Tip', 'sex':'Gender', 'smoker':'Smoker', 'day':'Day','time':'Time', 'size': 'Party'})
df.head()
df.describe()
df.plot(style = "o", ms=3, figsize = [10,5])
plt.show()
#Ref1: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.plotting.scatter_matrix.html
#Ref2:http://jonathansoma.com/lede/algorithms-2017/classes/fuzziness-matplotlib/understand-df-plot-in-pandas/
color_list = ['red' if i=='Male' else 'green' for i in df.loc[:,'Gender']]
pd.plotting.scatter_matrix(df, alpha=0.9, c = color_list, figsize = [20,20],
diagonal = 'hist', s = 100, marker = '.')
plt.show()
df.groupby(['Gender', 'Party'])['Tip'].mean().unstack(level=0).plot(kind='bar', figsize=(10,5))
plt.title('Tip amount by Party size and Gender')
plt.ylabel('Tip Amount')
df['Tippc']=df.apply(lambda row: row.Tip / row.Bill, axis=1)
# Ref: https://kaijento.github.io/2017/04/22/pandas-create-new-column-sum/
# Tippc is the percent of the tip relative to the Bill
df.head()
df.groupby(['Gender', 'Time'])['Tippc'].mean().unstack(level=0).plot(kind='bar', figsize=(10,5))
plt.ylabel('Tip Amount as a percentage of the Bill')
#df2 = df(columns=['Gender', 'Time'])
#df2 #.plot.bar()
```
### Linear Regression
- Plot the Tips relative to the Bill amount.
```
df.plot(x='Bill', y='Tip', style='o')
plt.title('Bill amount and Tip given')
plt.xlabel('Bill')
plt.ylabel('Tip')
plt.show()
```
- Looking at where the dots, it would seem that the Tip amount increases relative to the Bill amount. It would seem that the Tip is approximate 20 % of the Bill amount. There are a few outliers that could skew the data.
- I will try a few values for the slope and where it crosses the Y-axis. The slopes I will at will vary between 0.1 and 0.2. (I did try a number of other values, but for this analysis I will use the outlined values).
```
# Plot w versus d with black dots.
df.plot(x='Bill', y='Tip', style='o')
# Overlay some lines on the plot.
x = np.arange(0.0, 50.0, 1.0)
plt.plot(x, 0.125 * x + 0.5, 'r-', label=r"$x/8 + 0.5$")
plt.plot(x, 0.2 * x + 0.5, 'g-', label=r"$x/5 + 0.5$")
plt.plot(x, 0.1 * x + 0.0, 'b-', label=r"$x/10 + 0.0$")
# Add a legend.
plt.legend()
# Add axis labels.
plt.xlabel('Bill')
plt.ylabel('Tips')
# Adjust the plot range
plt.ylim(0, 10)
# Show the plot.
plt.show()
```
- The red line seems to be the best fit. But I will investigate using the least square technique. In the above table, for every x, there is two values for y, one on the line and one on the dot. The following measures this distance and squares it (to account for negative values). The lease value is the best guess for the lines in the plot.
```
B = df["Bill"]
T = df["Tip"]
cost = lambda m,c: np.sum([(T[i] - m * B[i] - c)**2 for i in range(B.size)])
print("Red line cost with m = %5.2f and c = %5.2f: is %8.2f" % (0.125, 0.50, cost(0.125, 0.5)))
print("Green line cost with m = %5.2f and c = %5.2f: is %8.2f" % (0.2, 0.5, cost(0.2, 0.5)))
print("Blue line cost with m = %5.2f and c = %5.2f: is %8.2f" % (0.1, 0.0, cost(0.1, 0.0)))
```
### Minimising the cost
```
# Calculate the best values for m and c.
B = df["Bill"]
T = df["Tip"]
# Calculate the best values for B and T.
B_avg = df["Bill"].mean()
T_avg = df["Tip"].mean()
B_zero = B - B_avg
T_zero = T - T_avg
m = np.sum(B_zero * T_zero) / np.sum(B_zero * B_zero)
c = T_avg - m * B_avg
print ("m is %8.6f and c is %6.6f." % (m, c))
x, y, x_avg, y_avg = B, T, B_avg, T_avg
m2 = (np.sum(x * y) - y_avg * np.sum(x)) / (np.sum(x * x) - x_avg * np.sum(x))
c2 = y_avg - m2 * x_avg
m2, c2
np.polyfit(B, T, 1)
```
- The line the minimises the cost can now be superimposed on the data.
```
# Plot the best fit line.
plt.plot(B, T, 'k.', label='Original data')
plt.plot(B, m * B + c, 'b-', label='Best fit line')
# Add axis labels and a legend.
plt.xlabel('Bill')
plt.ylabel('Tips')
plt.legend()
plt.ylim(0, 10)
# Show the plot.
plt.show()
cost = lambda m,c: np.sum([(T[i] - m * B[i] - c)**2 for i in range(B.size)])
print("Red line cost with m = %5.2f and c = %5.2f: is %8.2f" % (0.105025, 0.920270, cost(0.11, 0.92)))
```
- In the case of linear regression, the cost function is the sum of the squares of the residuals (residuals being the difference between the dependent variable value and the value given by the model). The procedure finds for you the intercept and slope(s) that minimize the cost function.
- This is why the linear regression method is also called “Least-squares regression”.
- A line with slope = 0.11 and y-axia = 0.92: gives the lowest cost function, at 255.62.
- So this equation best fits the line.
```
import statsmodels.api as sm
X = df['Bill']
y = df['Tip']
X2 = sm.add_constant(X)
est = sm.OLS(y, X2)
est2 = est.fit()
print(est2.summary())
```
- Ref: https://towardsdatascience.com/the-complete-guide-to-linear-regression-in-python-3d3f8f06bf8
- Looking at both coefficients, we have a p-value that is equal to 0. This means that there is a strong correlation between these coefficients and the target (Tip).
- The R² value is equal to 0.45. Therefore, about 45% of the variability of Tips is explained by the Bill amount.
### Analysis
- Analyse the relationship between the variables within the dataset.
- The aim of this experiment is to determine what circumstances maxamises the Tip payable, this is done by minimising the Cost(m,c). Of course the data outlined above doesn't take into account, time, gender, smoker, so the model is not entirely accurate.
```
plt.figure(1)
df.groupby(['Gender', 'Time'])['Tip'].mean().unstack(level=0).plot(kind='bar', figsize=(10,5))
plt.ylabel('Tip Amount')
plt.figure(2)
df.groupby(['Gender', 'Smoker'])['Tip'].mean().unstack(level=0).plot(kind='bar', figsize=(10,5))
plt.ylabel('Tip Amount')
plt.figure(3)
df.groupby(['Party', 'Time'])['Tip'].mean().unstack(level=0).plot(kind='bar', figsize=(10,5))
plt.ylabel('Tip Amount')
plt.show()
```
- From the first three plots, it would seem that males are better tippers than females and that with the exeption of lunch, that size of the tip increases with the party amount.
# End
## The following contains some code that was not needed but that I may come back to.
```
B = df["Bill"]
T = df["Tip"]
P = df["Party"]
G = df['Gender']
#df.Gender = df.Gender.map({'Female':0,'Male':1})
#df['Gender'] = df['Gender'].map(dict(zip(['Female','Male'],[0,1]))
df['Gender'] = (df['Gender'] == 'Female').astype(int)
df.head()
C = np.polyfit(P, T, 1)
C
from statsmodels.formula.api import ols
fit = ols('Tip ~ (Bill)', data=df).fit()
fit.summary()
m = 0.71182064
c = 1.16913303
cost = lambda m,c: np.sum([(T[i] - m * P[i] - c)**2 for i in range(P.size)])
print("Red line cost with m = %5.2f and c = %5.2f: is %8.2f" % (0.71182064, 1.16913303, cost(0.71182064, 1.16913303)))
X = df['Party']
y = df['Tip']
X2 = sm.add_constant(X)
est = sm.OLS(y, X2)
est2 = est.fit()
print(est2.summary())
import seaborn as sns
import sklearn.neighbors as nei
# Plot the tips data set with a pair plot.
sns.pairplot(df)
```
- https://www.accelebrate.com/blog/interpreting-results-from-linear-regression-is-the-data-appropriate
|
github_jupyter
|
# Preprocessing for deep learning
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('ggplot')
plt.rcParams['axes.facecolor'] ='w'
plt.rcParams['axes.edgecolor'] = '#D6D6D6'
plt.rcParams['axes.linewidth'] = 2
```
# 1. Background
## A. Variance and covariance
### Example 1.
```
A = np.array([[1, 3, 5], [5, 4, 1], [3, 8, 6]])
print(A)
print(np.cov(A, rowvar=False, bias=True))
```
### Finding the covariance matrix with the dot product
```
def calculateCovariance(X):
meanX = np.mean(X, axis = 0)
lenX = X.shape[0]
X = X - meanX
covariance = X.T.dot(X)/lenX
return covariance
print(calculateCovariance(A))
```
## B. Visualize data and covariance matrices
```
def plotDataAndCov(data):
ACov = np.cov(data, rowvar=False, bias=True)
print('Covariance matrix:\n', ACov)
fig, ax = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(10, 10)
ax0 = plt.subplot(2, 2, 1)
# Choosing the colors
cmap = sns.color_palette("GnBu", 10)
sns.heatmap(ACov, cmap=cmap, vmin=0)
ax1 = plt.subplot(2, 2, 2)
# data can include the colors
if data.shape[1]==3:
c=data[:,2]
else:
c="#0A98BE"
ax1.scatter(data[:,0], data[:,1], c=c, s=40)
# Remove the top and right axes from the data plot
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
```
## C. Simulating data
### Uncorrelated data
```
np.random.seed(1234)
a1 = np.random.normal(2, 1, 300)
a2 = np.random.normal(1, 1, 300)
A = np.array([a1, a2]).T
A.shape
print(A[:10,:])
sns.distplot(A[:,0], color="#53BB04")
sns.distplot(A[:,1], color="#0A98BE")
plt.show()
plt.close()
plotDataAndCov(A)
plt.show()
plt.close()
```
### Correlated data
```
np.random.seed(1234)
b1 = np.random.normal(3, 1, 300)
b2 = b1 + np.random.normal(7, 1, 300)/2.
B = np.array([b1, b2]).T
plotDataAndCov(B)
plt.show()
plt.close()
```
# 2. Preprocessing
## A. Mean normalization
```
def center(X):
newX = X - np.mean(X, axis = 0)
return newX
BCentered = center(B)
print('Before:\n\n')
plotDataAndCov(B)
plt.show()
plt.close()
print('After:\n\n')
plotDataAndCov(BCentered)
plt.show()
plt.close()
```
## B. Standardization
```
def standardize(X):
newX = center(X)/np.std(X, axis = 0)
return newX
np.random.seed(1234)
c1 = np.random.normal(3, 1, 300)
c2 = c1 + np.random.normal(7, 5, 300)/2.
C = np.array([c1, c2]).T
plotDataAndCov(C)
plt.xlim(0, 15)
plt.ylim(0, 15)
plt.show()
plt.close()
CStandardized = standardize(C)
plotDataAndCov(CStandardized)
plt.show()
plt.close()
```
## C. Whitening
### 1. Zero-centering
```
CCentered = center(C)
plotDataAndCov(CCentered)
plt.show()
plt.close()
```
### 2. Decorrelate
```
def decorrelate(X):
cov = X.T.dot(X)/float(X.shape[0])
# Calculate the eigenvalues and eigenvectors of the covariance matrix
eigVals, eigVecs = np.linalg.eig(cov)
# Apply the eigenvectors to X
decorrelated = X.dot(eigVecs)
return decorrelated
plotDataAndCov(C)
plt.show()
plt.close()
CDecorrelated = decorrelate(CCentered)
plotDataAndCov(CDecorrelated)
plt.xlim(-5,5)
plt.ylim(-5,5)
plt.show()
plt.close()
```
### 3. Rescale the data
```
def whiten(X):
cov = X.T.dot(X)/float(X.shape[0])
# Calculate the eigenvalues and eigenvectors of the covariance matrix
eigVals, eigVecs = np.linalg.eig(cov)
# Apply the eigenvectors to X
decorrelated = X.dot(eigVecs)
# Rescale the decorrelated data
whitened = decorrelated / np.sqrt(eigVals + 1e-5)
return whitened
CWhitened = whiten(CCentered)
plotDataAndCov(CWhitened)
plt.xlim(-5,5)
plt.ylim(-5,5)
plt.show()
plt.close()
```
# 3. Image whitening
```
from keras.datasets import cifar10
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train.shape
X = X_train[:1000]
print(X.shape)
X = X.reshape(X.shape[0], X.shape[1]*X.shape[2]*X.shape[3])
print(X.shape)
def plotImage(X):
plt.rcParams["axes.grid"] = False
plt.figure(figsize=(1.5, 1.5))
plt.imshow(X.reshape(32,32,3))
plt.show()
plt.close()
plotImage(X[12, :])
X_norm = X / 255.
print('X.min()', X_norm.min())
print('X.max()', X_norm.max())
X_norm.mean(axis=0).shape
print(X_norm.mean(axis=0))
X_norm = X_norm - X_norm.mean(axis=0)
print(X_norm.mean(axis=0))
cov = np.cov(X_norm, rowvar=True)
cov.shape
U,S,V = np.linalg.svd(cov)
print(U.shape, S.shape)
print(np.diag(S))
print('\nshape:', np.diag(S).shape)
epsilon = 0.1
X_ZCA = U.dot(np.diag(1.0/np.sqrt(S + epsilon))).dot(U.T).dot(X_norm)
plotImage(X[12, :])
plotImage(X_ZCA[12, :])
X_ZCA_rescaled = (X_ZCA - X_ZCA.min()) / (X_ZCA.max() - X_ZCA.min())
print('min:', X_ZCA_rescaled.min())
print('max:', X_ZCA_rescaled.max())
plotImage(X[12, :])
plotImage(X_ZCA_rescaled[12, :])
```
|
github_jupyter
|
# Scalability
This notebook show the scalability analysis performed in the paper.
We compared our LTGL model with respect to state-of-the art software for graphical inference, such as LVGLASSO and TVGL.
<font color='red'><b>Note</b></font>: GL is not included in the comparison, since it is based on coordinate descent and it does not have the eigenvalue decomposition.
```
%matplotlib inline
from __future__ import print_function
import matplotlib.pyplot as plt
import matlab
import matlab.engine
import pandas as pd
import numpy as np
import cPickle as pkl
import time
from itertools import product
from regain import datasets, utils
from regain.covariance import latent_time_graph_lasso_
def ltgl_results(data_grid, K, K_obs, ells, **params):
mdl = latent_time_graph_lasso_.LatentTimeGraphLasso(
bypass_transpose=False, assume_centered=0, verbose=0, tol=1e-4, rtol=1e-4,
max_iter=500, rho=1./ np.sqrt(data_grid.shape[0]))
tic = time.time()
ll = mdl.set_params(**params).fit(data_grid)
tac = time.time()
iterations = ll.n_iter_
ss = utils.structure_error(K, ll.precision_)#, thresholding=1, eps=1e-5)
MSE_observed = utils.error_norm(K_obs, ll.precision_ - ll.latent_)
MSE_precision = utils.error_norm(K, ll.precision_)
MSE_latent = utils.error_norm(ells, ll.latent_)
mean_rank_error = utils.error_rank(ells, ll.latent_)
res = dict(n_dim_obs=K.shape[1],
time=tac-tic,
iterations=iterations,
MSE_precision=MSE_precision,
MSE_observed=MSE_observed,
MSE_latent=MSE_latent,
mean_rank_error=mean_rank_error,
note=None,
estimator=ll)
res = dict(res, **ss)
return res
import sys; sys.path.append("/home/fede/src/TVGL")
from TVGL import TVGL
from regain import utils; reload(utils)
from regain.utils import suppress_stdout
def hallac_results(data_grid, K, K_obs, ells, beta, alpha):
with suppress_stdout():
tic = time.time()
thetaSet, empCovSet, status, gvx = TVGL(
np.vstack(data_grid.transpose(2,0,1)), data_grid.shape[0], lamb=alpha, beta=beta,
indexOfPenalty=2)
tac = time.time()
if status != "Optimal":
print ("not converged")
precisions = np.array(thetaSet)
ss = utils.structure_error(K, precisions)
MSE_observed = None
MSE_precision = utils.error_norm(K, precisions)
MSE_latent = None
mean_rank_error = None
res = dict(n_dim_obs=K.shape[1],
time=tac-tic,
iterations=gvx.n_iter_,
MSE_precision=MSE_precision,
MSE_observed=MSE_observed,
MSE_latent=MSE_latent,
mean_rank_error=mean_rank_error,
note=status,
estimator=gvx)
res = dict(res, **ss)
return res
try:
eng.quit()
except:
pass
eng = matlab.engine.start_matlab()
eng.addpath(r'/home/fede/src/slipguru/regain/regain/wrapper/lvglasso/',nargout=0)
# eng.addpath(r'path/to/ADMM_B.m/',nargout=0)
def chandresekeran_results(data_grid, K, K_obs, ells, tau, alpha, **whatever):
emp_list = np.array([empirical_covariance(x, assume_centered=True)
for x in data_grid.transpose(2,0,1)]).transpose(1,2,0)
n_samples = emp_list.shape[0]
rho = 1./ np.sqrt(data_grid.shape[0])
# 3. Matlab engine
result = eng.LVGLASSO(matlab.double(emp_list.tolist()),float(alpha),float(tau),float(rho))
ma_output = Bunch(**result)
ma_output.R = np.array(ma_output.R)
ma_output.S = np.array(ma_output.S)
ma_output.L = np.array(ma_output.L)
ss = utils.structure_error(K, ma_output.R + ma_output.L)#, thresholding=1, eps=1e-5)
MSE_observed = utils.error_norm(K_obs, ma_output.R)
MSE_precision = utils.error_norm(K, ma_output.R + ma_output.L)
MSE_latent = utils.error_norm(ells, ma_output.L)
mean_rank_error = utils.error_rank(ells, ma_output.L)
res = dict(n_dim_obs=K.shape[1],
time=ma_output.elapsed_time,
iterations=np.max(ma_output.iter),
MSE_precision=MSE_precision,
MSE_observed=MSE_observed,
MSE_latent=MSE_latent,
mean_rank_error=mean_rank_error,
note=None, estimator=ma_output)
res = dict(res, **ss)
return res
# prepare data
n_times = [20, 50, 100]
n_dims = np.sqrt(np.logspace(2,5,10)).astype(int)
n_samples = 200
n_dim_lat = 2
np.random.seed(42)
with suppress_stdout():
data = {(dim,T) : datasets.make_dataset(
mode='ma', n_samples=n_samples, n_dim_lat=n_dim_lat, n_dim_obs=dim, T=T, epsilon=1e-2)
for dim, T in (product(n_dims, n_times))}
alpha = 1
tau = 1
beta = 1
eta = 1
methods = ['LTGL', 'GL', 'LVGLASSO', 'TVGL']
scores = sorted(['iterations', 'time', 'note'])
cols = pd.MultiIndex.from_product([scores, n_dims], names=('score','dim'))
rows = pd.MultiIndex.from_product([methods, n_times], names=('method','time'))
dff = pd.DataFrame(columns=cols, index=rows)
idx = pd.IndexSlice
for i, (k, res) in enumerate(sorted(data.items())):
dim = k[0]
print("Start with: dim=%d, T=%d (it %d)" % (k[0],k[1], i))
data_list = res.data
K = res.thetas
K_obs = res.thetas_observed
ells = res.ells
data_grid = np.array(data_list).transpose(1,2,0) # to use it later for grid search
print("starting LTGL ...\r", end='')
res_l = ltgl_results(data_grid, K, K_obs, ells, alpha=alpha, beta=beta, tau=tau, eta=eta)
dff.loc[idx['LTGL', k[1]], idx[:, k[0]]] = [res_l[x] for x in scores]
print("starting GL...\r", end='')
res = glasso_results(data_grid, K, K_obs, ells, alpha=alpha)
# Use this for the R-implementation
# res = friedman_results(data_grid, K, K_obs, ells, alpha=alpha)
dff.loc[idx['GL', k[1]], idx[:, k[0]]] = [res[x] for x in scores]
print("starting LVGLASSO...\r", end='')
res_c = chandresekeran_results(data_grid, K, K_obs, ells, tau=tau, alpha=alpha)
dff.loc[idx['LVGLASSO', k[1]], idx[:, k[0]]] = [res_c[x] for x in scores]
df.to_pickle("scalability_no_hallac.pkl")
logger = init_logger('scalability')
```
Since this is computationally expensive, we divide the results in two cells ...
```
for i, (k, res) in enumerate(sorted(data.items())):
dim = k[0]
logging.info("Start TVGL with: dim=%d, T=%d (it %d)" % (k[0],k[1], i))
data_list = res.data
K = res.thetas
K_obs = res.thetas_observed
ells = res.ells
data_grid = np.array(data_list).transpose(1,2,0) # to use it later for grid search
try:
# print("starting TVGL...\r", end='')
res = hallac_results(data_grid, K, K_obs, ells, beta=beta, alpha=alpha)
dff.loc[idx['TVGL', k[1]], idx[:, k[0]]] = [res[x] for x in scores]
dff.to_pickle("scalability_hallac.pkl")
except:
pass
```
## Plotting
```
# load pickle
with open("scalability.pkl", 'rb') as f:
df = pkl.load(f)
df.sortlevel(inplace=True)
idx = pd.IndexSlice
scores = df.columns.levels[0]
n_dims = df.columns.levels[1]
methods = df.index.levels[0]
n_times = df.index.levels[1]
```
Let's plot a horizontal figure.
```
style = ['-', '--', ':']
f, ax = plt.subplots(1, len(n_times), sharey=True, figsize=(12,2), dpi=600)
ax[0].set_ylabel("seconds")
# ax[0].set_ylim([.1,None])
for i, t in enumerate(n_times):
for j, m in enumerate([m for m in methods if m != 'GL']):
if m == 'GL':
continue
ax[i].plot(n_dims * (n_dims + 1) * t, df.loc[idx[m, t], idx['time',:]].values, ls=style[j], label=m)
ax[i].set_yscale('log')
ax[i].set_xscale('log')
ax[i].set_xlabel(r"number of unknowns at T = %d" % t)
ax[i].grid('on')
# ax[i].set_title("n_times: %d" % t)
# plt.xticks(range(4), ours.n_dim_obs)
ax[0].set_yticks([1, 10, 1e2, 1e3, 1e4])
lgd = ax[1].legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3, ncol=3, mode="expand", borderaxespad=0.)
f.tight_layout()
f.savefig("scalability.pdf", dpi=600, transparent=True, bbox_extra_artists=(lgd,), bbox_inches='tight')
```
|
github_jupyter
|
## Organização do dataset
```
def dicom2png(input_file, output_file):
try:
ds = pydicom.dcmread(input_file)
shape = ds.pixel_array.shape
# Convert to float to avoid overflow or underflow losses.
image_2d = ds.pixel_array.astype(float)
# Rescaling grey scale between 0-255
image_2d_scaled = (np.maximum(image_2d,0) / image_2d.max()) * 255.0
# Convert to uint
image_2d_scaled = np.uint8(image_2d_scaled)
# Write the PNG file
with open(output_file, 'wb') as png_file:
w = png.Writer(shape[1], shape[0], greyscale=True)
w.write(png_file, image_2d_scaled)
except:
print('Could not convert: ', input_file)
from google.colab import drive
drive.mount("/content/gdrive", force_remount=True)
import pandas as pd
import shutil
import glob
from sklearn.model_selection import train_test_split
import os
study_level = pd.read_csv("gdrive/MyDrive/covid-dataset/train_study_level.csv")
image_level = pd.read_csv("gdrive/MyDrive/covid-dataset/train_image_level.csv")
study_level['study_name'] = study_level['id'].apply(lambda x: x.replace('_study', ''))
df = pd.DataFrame()
df['image_name'] = image_level['id'].apply(lambda x: x.replace('_image', ''))
df['study_name'] = image_level['StudyInstanceUID']
merge = pd.merge(df, study_level, on='study_name')
r0 = merge['Typical Appearance'].apply(lambda x: 'typical' if x == 1 else False)
r1 = merge['Atypical Appearance'].apply(lambda x: 'atypical' if x == 1 else False)
r2 = merge['Indeterminate Appearance'].apply(lambda x: 'indeterminate' if x == 1 else False)
labels = []
for a,b,c in zip(r0, r1, r2):
if a != False:
labels.append(a)
continue
if b != False:
labels.append(b)
continue
if c != False:
labels.append(c)
continue
labels.append('not recognized')
merge['label'] = labels
shutil.copy('gdrive/MyDrive/covid-dataset/nn_train_600.zip', './')
!unzip -qq nn_train_600.zip
img_df = pd.DataFrame()
paths = glob.glob('./nn_train_600/**/*.png', recursive=True)
img_df['path'] = paths
img_df['image_name'] = img_df['path'].apply(lambda x: x.split('/')[-1].replace('.png', ''))
fndf = pd.merge(merge, img_df, on='image_name')
X, y = fndf['path'], fndf['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
os.makedirs('train/typical', exist_ok=True)
os.makedirs('train/indeterminate', exist_ok=True)
os.makedirs('train/atypical', exist_ok=True)
os.makedirs('test/typical', exist_ok=True)
os.makedirs('test/indeterminate', exist_ok=True)
os.makedirs('test/atypical', exist_ok=True)
def distribute_images(_paths, _labels, _folder):
for path, label in zip(_paths, _labels):
shutil.copy(path, _folder + '/' + label)
distribute_images(X_train, y_train, 'train')
distribute_images(X_test, y_test, 'test')
```
## Fine-tuning EfficientNet
```
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import EfficientNetB0, EfficientNetB1, EfficientNetB2, EfficientNetB3, EfficientNetB4, EfficientNetB5, EfficientNetB6, EfficientNetB7
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras import optimizers
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
print("Running on TPU ", tpu.cluster_spec().as_dict()["worker"])
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.TPUStrategy(tpu)
except ValueError:
print("Not connected to a TPU runtime. Using CPU/GPU strategy")
strategy = tf.distribute.MirroredStrategy()
batch_size = 64
height = 456
width = 456
input_shape = (height, width, 3)
with strategy.scope():
train_datagen = ImageDataGenerator(
rescale=1,
rotation_range=10,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.1,
zoom_range=0.1,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
"train",
# All images will be resized to target height and width.
target_size=(height, width),
batch_size=batch_size,
# Since we use categorical_crossentropy loss, we need categorical labels
class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
"test",
target_size=(height, width),
batch_size=batch_size,
class_mode='categorical', shuffle=False)
with strategy.scope():
model = models.Sequential()
model.add(layers.Input(shape=(height, width, 3)))
model.add(EfficientNetB7(include_top=True, weights=None, classes=3))
model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
)
model.summary()
hist = model.fit_generator(
train_generator,
steps_per_epoch= 3382 // batch_size,
epochs=20,
validation_data=validation_generator,
validation_steps= 846 // batch_size,
verbose=1,)
def build_model(num_classes):
inputs = layers.Input(shape=(height, width, 3))
x = inputs
model = EfficientNetB5(include_top=False, input_tensor=x, weights="imagenet")
# Freeze the pretrained weights
model.trainable = False
# Rebuild top
x = layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
x = layers.BatchNormalization()(x)
top_dropout_rate = 0.2
x = layers.Dropout(top_dropout_rate, name="top_dropout")(x)
outputs = layers.Dense(num_classes, activation="softmax", name="pred")(x)
# Compile
model = tf.keras.Model(inputs, outputs, name="EfficientNet")
for layer in model.layers[-20:]:
if not isinstance(layer, layers.BatchNormalization):
layer.trainable = True
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
return model
with strategy.scope():
model2 = build_model(3)
model2.summary()
checkpoint_filepath = 'gdrive/MyDrive/covid-dataset'
model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_filepath,
save_weights_only=True,
monitor='val_accuracy',
mode='max',
save_best_only=True)
hist = model2.fit_generator(
train_generator,
steps_per_epoch= 3382 // batch_size,
epochs=50,
validation_data=validation_generator,
validation_steps= 846 // batch_size,
verbose=1, callbacks=[model_checkpoint_callback])
def build_model(num_classes):
inputs = layers.Input(shape=(height, width, 3))
x = inputs
model = EfficientNetB5(include_top=False, input_tensor=x, weights="imagenet")
# Freeze the pretrained weights
model.trainable = False
# Rebuild top
x = layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
x = layers.BatchNormalization()(x)
top_dropout_rate = 0.2
x = layers.Dropout(top_dropout_rate, name="top_dropout")(x)
outputs = layers.Dense(num_classes, activation="softmax", name="pred")(x)
# Compile
model = tf.keras.Model(inputs, outputs, name="EfficientNet")
for layer in model.layers[-20:]:
if not isinstance(layer, layers.BatchNormalization):
layer.trainable = True
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
return model
model2 = build_model(3)
model2.load_weights('gdrive/MyDrive/covid-dataset')
model2.predict(validation_generator)
import numpy as np
np.unique(validation_generator.labels,
return_counts=True)
model2.evaluate(validation_generator)
y_pred = model2.predict(validation_generator)
y_true, y_pred = validation_generator.classes, np.argmax(y_pred, axis=1)
from sklearn.metrics import accuracy_score
accuracy_score(y_true, y_pred)
from sklearn.metrics import classification_report, confusion_matrix
indices_class = {v:k for k,v in validation_generator.class_indices.items()}
indices_class
target_names = ['atypical', 'indeterminate', 'typical']
target_names
print('Confusion Matrix')
print(confusion_matrix(y_true, y_pred))
print('Precision: What proportion of positive identifications was actually correct?')
print('When it predicts a <Class> is true, it is correct <Precision> of the time.', '\n')
print('Recall: What proportion of actual positives was identified correctly?')
print('Correctly identifies <Recall> of all true <Class>.', '\n')
print('F1-SCORE: Combines the precision and recall of a classifier into a\nsingle metric by taking their harmonic meany.')
print('Classification Report')
print(classification_report(y_true, y_pred, target_names=target_names))
```
|
github_jupyter
|
```
import nltk
import sklearn
print('The nltk version is {}.'.format(nltk.__version__))
print('The scikit-learn version is {}.'.format(sklearn.__version__))
print(__doc__)
from time import time
import pickle
from sklearn.preprocessing import StandardScaler, MinMaxScaler
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import offsetbox
from sklearn import (manifold, datasets, decomposition, ensemble,
discriminant_analysis, random_projection, neighbors)
```
读取test data做可视化
```
digits = pd.read_csv('test_data_30.csv')
digits = digits.fillna(method='ffill')
scaler = MinMaxScaler()
X = scaler.fit_transform(digits.iloc[:,1:])
#y = digits[:,-1]
n_samples, n_features = X.shape
n_neighbors = 5
X = np.array(X)
y = np.array([67,67,67,71,71,71,8,8,8,90,90,90,7,7,7,9,9,9,5,5,5,74,74,74,199,199,199,2,2,2])
'''t-SNE'''
tsne = manifold.TSNE(n_components=2, init='pca', random_state=501)
X_tsne = tsne.fit_transform(X)
print("Org data dimension is {}. Embedded data dimension is {}".format(X.shape[-1], X_tsne.shape[-1]))
'''嵌入空间可视化'''
x_min, x_max = X_tsne.min(0), X_tsne.max(0)
X_norm = (X_tsne - x_min) / (x_max - x_min) # 归一化
plt.figure(figsize=(5, 5))
for i in range(X_norm.shape[0]):
plt.text(X_norm[i, 0], X_norm[i, 1], str(y[i]), color=plt.cm.Set1(y[i]/ 270.), fontdict={'weight': 'bold', 'size': 9})
plt.xticks([])
plt.yticks([])
plt.show()
# ----------------------------------------------------------------------
# Scale and visualize the embedding vectors
def plot_embedding(X, title=None):
x_min, x_max = np.min(X, 0), np.max(X, 0)
X = (X - x_min) / (x_max - x_min)
plt.figure()
ax = plt.subplot(111)
for i in range(X.shape[0]):
plt.text(X[i, 0], X[i, 1], str(y[i]),
color=plt.cm.Set1(y[i] / 10.),
fontdict={'weight': 'bold', 'size': 9})
if hasattr(offsetbox, 'AnnotationBbox'):
# only print thumbnails with matplotlib > 1.0
shown_images = np.array([[1., 1.]]) # just something big
for i in range(X.shape[0]):
dist = np.sum((X[i] - shown_images) ** 2, 1)
if np.min(dist) < 4e-3:
# don't show points that are too close
continue
shown_images = np.r_[shown_images, [X[i]]]
imagebox = offsetbox.AnnotationBbox(
offsetbox.OffsetImage(digits.images[i], cmap=plt.cm.gray_r),
X[i])
ax.add_artist(imagebox)
plt.xticks([]), plt.yticks([])
if title is not None:
plt.title(title)
# ----------------------------------------------------------------------
# t-SNE embedding of the digits dataset
print("Computing t-SNE embedding")
tsne = manifold.TSNE(n_components=2, init='pca', random_state=0)
t0 = time()
X_tsne = tsne.fit_transform(X)
plot_embedding(X_tsne,"t-SNE embedding of the digits (time %.2fs)" %(time() - t0))
# ----------------------------------------------------------------------
# Random 2D projection using a random unitary matrix
print("Computing random projection")
rp = random_projection.SparseRandomProjection(n_components=2, random_state=42)
X_projected = rp.fit_transform(X)
plot_embedding(X_projected, "Random Projection of the digits")
#----------------------------------------------------------------------
# Projection on to the first 2 principal components
print("Computing PCA projection")
t0 = time()
X_pca = decomposition.TruncatedSVD(n_components=2).fit_transform(X)
plot_embedding(X_pca,
"Principal Components projection of the digits (time %.2fs)" %
(time() - t0))
# ----------------------------------------------------------------------
# Projection on to the first 2 linear discriminant components
print("Computing Linear Discriminant Analysis projection")
X2 = X.copy()
X2.flat[::X.shape[1] + 1] += 0.01 # Make X invertible
t0 = time()
X_lda = discriminant_analysis.LinearDiscriminantAnalysis(n_components=2).fit_transform(X2, y)
plot_embedding(X_lda,
"Linear Discriminant projection of the digits (time %.2fs)" %
(time() - t0))
# ----------------------------------------------------------------------
# Isomap projection of the digits dataset
print("Computing Isomap projection")
t0 = time()
X_iso = manifold.Isomap(n_neighbors, n_components=2).fit_transform(X)
print("Done.")
plot_embedding(X_iso,
"Isomap projection of the digits (time %.2fs)" %
(time() - t0))
# ----------------------------------------------------------------------
# Locally linear embedding of the digits dataset
print("Computing LLE embedding")
clf = manifold.LocallyLinearEmbedding(n_neighbors, n_components=2,
method='standard')
t0 = time()
X_lle = clf.fit_transform(X)
print("Done. Reconstruction error: %g" % clf.reconstruction_error_)
plot_embedding(X_lle,
"Locally Linear Embedding of the digits (time %.2fs)" %
(time() - t0))
# ----------------------------------------------------------------------
# Modified Locally linear embedding of the digits dataset
print("Computing modified LLE embedding")
clf = manifold.LocallyLinearEmbedding(n_neighbors, n_components=2,
method='modified')
t0 = time()
X_mlle = clf.fit_transform(X)
print("Done. Reconstruction error: %g" % clf.reconstruction_error_)
plot_embedding(X_mlle,
"Modified Locally Linear Embedding of the digits (time %.2fs)" %
(time() - t0))
print(X_mlle)
# ----------------------------------------------------------------------
# HLLE embedding of the digits dataset
print("Computing Hessian LLE embedding")
clf = manifold.LocallyLinearEmbedding(n_neighbors, n_components=2,
method='hessian')
t0 = time()
X_hlle = clf.fit_transform(X)
print("Done. Reconstruction error: %g" % clf.reconstruction_error_)
plot_embedding(X_hlle,
"Hessian Locally Linear Embedding of the digits (time %.2fs)" %
(time() - t0))
# ----------------------------------------------------------------------
# LTSA embedding of the digits dataset
print("Computing LTSA embedding")
clf = manifold.LocallyLinearEmbedding(n_neighbors, n_components=2,
method='ltsa')
t0 = time()
X_ltsa = clf.fit_transform(X)
print("Done. Reconstruction error: %g" % clf.reconstruction_error_)
plot_embedding(X_ltsa,
"Local Tangent Space Alignment of the digits (time %.2fs)" %
(time() - t0))
# ----------------------------------------------------------------------
# MDS embedding of the digits dataset
print("Computing MDS embedding")
clf = manifold.MDS(n_components=2, n_init=1, max_iter=100)
t0 = time()
X_mds = clf.fit_transform(X)
print("Done. Stress: %f" % clf.stress_)
plot_embedding(X_mds,
"MDS embedding of the digits (time %.2fs)" %
(time() - t0))
# ----------------------------------------------------------------------
# Random Trees embedding of the digits dataset
print("Computing Totally Random Trees embedding")
hasher = ensemble.RandomTreesEmbedding(n_estimators=200, random_state=0,
max_depth=5)
t0 = time()
X_transformed = hasher.fit_transform(X)
pca = decomposition.TruncatedSVD(n_components=2)
X_reduced = pca.fit_transform(X_transformed)
plot_embedding(X_reduced,
"Random forest embedding of the digits (time %.2fs)" %
(time() - t0))
# ----------------------------------------------------------------------
# Spectral embedding of the digits dataset
print("Computing Spectral embedding")
embedder = manifold.SpectralEmbedding(n_components=2, random_state=0,
eigen_solver="arpack")
t0 = time()
X_se = embedder.fit_transform(X)
plot_embedding(X_se,
"Spectral embedding of the digits (time %.2fs)" %
(time() - t0))
# ----------------------------------------------------------------------
# t-SNE embedding of the digits dataset
print("Computing t-SNE embedding")
tsne = manifold.TSNE(n_components=2, init='pca', random_state=0)
t0 = time()
X_tsne = tsne.fit_transform(X)
plot_embedding(X_tsne,
"t-SNE embedding of the digits (time %.2fs)" %
(time() - t0))
# ----------------------------------------------------------------------
# NCA projection of the digits dataset
print("Computing NCA projection")
nca = neighbors.NeighborhoodComponentsAnalysis(n_components=2, random_state=0)
t0 = time()
X_nca = nca.fit_transform(X, y)
plot_embedding(X_nca,
"NCA embedding of the digits (time %.2fs)" %
(time() - t0))
plt.show()
print(X_nca)
```
|
github_jupyter
|
[](https://pythonista.io)
# Entrada y salida estándar.
En la actualidad existen muchas fuentes desde las que se puede obtener y desplegar la información que un sistema de cómputo consume, gestiona y genera. Sin embargo, para el intérprete de Python la salida por defecto (salida estándar) de datos es la terminal de texto y la entrada estándar es el teclado.
En el caso de las notebooks de *Jupyter*, cada celda de código representa a la entrada estándar mediante:
```In[ ]:```
Y la salida estándar mediante:
```Out[ ]:```
**Ejemplo:**
```
3 * "Hola"
```
## Salida estándar con la funcion ```print()```.
En Python 3, la función ```print()``` se utiliza para desplegar información en la salida estándar.
La sintaxis es la siguiente:
```
print(<expresión 1>, <expresión 2>, ...<expresión 3>)
```
* La función ```print()``` evalúa y despliega una o varias expresiones.
* Si el resultado de la expresión es un objeto ```str```, este es desplegado sin los apóstrofes o las comillas que lo delimitan.
**Ejemplos:**
* La siguiente celda define el nombre ```a``` con valor igual a ```2```.
```
a = 2
```
* La siguiente celda evalúa la expresión ```a```, por lo que desplegará ```2```.
```
print(a)
```
* La siguiente celda desplegará el mensaje dentro del objeto ```"Hola"```.
```
print("Hola")
```
* En la siguiente celda la función ```print()``` desplegará dos expresiones que corresponde cada una a un objeto de tipo ```str```. Cada objeto será desplegado separado por un espacio.
* La salida será ```Hola Mundo```.
```
print("Hola", "Mundo")
```
* En la siguiente celda la función ```print()``` desplegará el resultado de una expresión de concatenación entre dos objetos de tipo ```str```. El resultado es un objeto ```str```.
* La salida será ```HolaMundo```.
```
print("Hola" + "Mundo")
```
* En la siguiente celda la función ```print()``` desplegará tres expresiones que corresponden a:
* El objeto ```'Tienes'``` de tipo ```str```.
* El objeto ```2``` de tipo ```int``` ligado al nombre ```a```.
* El objeto ```'buenos amigos'``` de tipo ```str```.
Cada expresión será desplegada separada por un espacio.
* La salida será ```Tienes 2 buenos amigos.```.
```
print("Tienes", a, "buenos amigos.")
```
* En la siguiente celda la función ```print()``` intentará desplegar el resultado de la expresión ```"Tienes" + a + "buenos amigos."```, la cual no es correcta y generará un error de tipo ```TypeError```.
```
print("Tienes" + a + "buenos amigos.")
```
### Despliegue con formato.
Para intercalar valores dentro de un formato específico de texto se utiliza el caracter sobre-escritura definodo como el signo de porcentaje ```%``` seguido de algún caracter que definirá el modo de desplegar la expresión correspondiente.
```
print("...%<caracter>..." % expresión 1)
```
```
print("...%<caracter 1>...%<caracter n>..." %(<expresión 1>,...<expresión n>))
```
|Caracter de escape|Modo de despliegue|
|:----------------:|:----------------:|
|```%s```| cadena de texto|
|```%d```| entero|
|```%o```| octal|
|```%x```| hexadecimal|
|```%f```| punto flotante|
|```%e```| punto flotante en formato exponencial|
El uso de ```%s```, equivale a aplicar la función ```str()``` al valor a desplegar.
**Ejemplos:**
```
pi = 3.141592
radio = 2
print("El perímetro de un círculo de radio %d es %f." % (radio, 2 * radio * pi))
print("El perímetro de un círculo de radio %d es %d." % (radio, 2 * radio * pi))
print("El perímetro de un circulo de radio %s es %s." % (radio, 2 * radio * pi))
print("El valor de pi es %f." % (pi))
print("El valor de pi es %e." % pi)
```
Para desplegar el signo de porcentaje ```%``` se utiliza ```%%```.
**Ejemplo:**
```
valor = 13
porciento = 15
porcentaje = (valor * porciento) / 100
print("El %d%% de %f es %f." % (porciento, valor, porcentaje))
```
#### Despliegue de cifras significativas.
Para desplegar un número específico de cifras significativas de un valor de punto flotante, se añade un punto ```.``` y el número de cifras a desplegarse después del signo de porcentaje ```%``` y antes del carácter ```f``` o ```e```.
```
%.<n>f
```
**Ejemplos:**
```
pi = 3.14169265
radio = 2
print("El perímetro de un círculo de radio igual a %d es %f." % (radio, 2 * pi * radio))
print("El perímetro de un círculo de radio igual a %d es %.2f." % (radio, 2 * pi * radio))
```
### Caracteres de escape.
Existen algunos caracteres que por su función o por la sintaxis de Python -tales como los apóstrofes, las comillas, los retornos de línea, etc.- que deben utilizar un "caracter de escape", para que puedan ser desplegados. Los caracteres de escape pueden ser introducidos después de una diagonal invertida ```\```.
|Secuencia|Despliegue|
|:-------:|:--------:|
|```\n``` |Retorno de línea|
|```\t``` |Tabulador |
|```\"``` |Comillas |
|```\'``` |Apóstrofe |
|```\\``` |Diagonal invertida|
|```\xNN``` |Caracter que corresponde al número hexadecimal *NN* en ASCII|
|```\uNN``` |Caracter que corresponde al número hexadecimal *NN* en Unicode|
**Ejemplo:**
```
print("Primera línea.\nSegunda línea\t con tabulador.")
print("Este es el signo de \"gato\" \x23.")
print("Beta: \u00DF")
print('I \u2764 YOU!')
```
## Entrada estándar con la función ```input()```.
La función por defecto de entrada estándar para Python 3 es ```input()```.
La función ```input()``` captura los caracteres provenientes de entrada estándar (el teclado) hasta que se introduce un retorno de carro <kbd>Intro</kbd> y el contenido capturado es devuelto al intérprete como una cadena de texto.
La cadena de caracteres resultante puede ser almacenada como un objeto de tipo ```str``` mediante la asignación de un nombre.
La función permite desplegar un mensaje de tipo ```str``` como parámetro.
```
input(<objeto tipo str>)
```
**Ejemplos:**
```
input()
texto = input()
type(texto)
texto
print(texto)
nombre = input("Escribe un nombre: ")
print(nombre)
```
## Entrada y salida estándar en Python 2.
### La función ```raw_input()```.
La sintaxis es la siguiente para Python 2:
```
raw_input(<objeto tipo str>)
```
**Ejemplo:**
``` python
>>> raw_input()
Hola
'Hola'
>>> texto = raw_input()
Hola
>>> type(texto)
<type 'str'>
>>> print texto
Hola
>>> nombre = raw_input("Escribe un nombre: ")
Escribe un nombre: Juan
>>> print nombre
Juan
>>>
```
### La función ```input()``` en Python 2.
Además de ```raw_input()```, existe la función ```input()```, la cual es semejante a ejecutar ```eval(raw_input())```.
Si la expresión ingresada es correcta, La función ```input()``` puede regresar valores de diversos tipos, en vez de sólo cadenas de texto.
**Ejemplo:**
``` python
>>> mensaje = "Ingresa el texto: "
>>> valor = raw_input(mensaje)
Ingresa el texto: 35 + 21
>>> type(valor)
<type 'str'>
>>> print valor
35 + 21
>>> valor = input(mensaje)
Ingresa el texto: 35 + 21
>>> type(valor)
<type 'int'>
>>> print valor
56
>>> valor = input(mensaje)
Ingresa el texto: "Hola"
>>> type(valor)
<type 'str'>
>>> print valor
Hola
>>> valor = input(mensaje)
Ingresa el texto: Hola
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 1, in <module>
NameError: name 'Hola' is not defined
>>>
```
**NOTA:** La función ```input()```, tal como se usa en Python 2 tiene el potencial de generar diversos errores y es susceptible de vulnerabilidades de seguridad debido a que podría usarse para inyectar código malicioso. Es por eso por lo que en Python 3, ```input()``` se comporta como ```raw_input()``` y la función ```raw_input()``` fue desechada.
```
eval(input('Ingresa: '))
```
<p style="text-align: center"><a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Licencia Creative Commons" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />Esta obra está bajo una <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Licencia Creative Commons Atribución 4.0 Internacional</a>.</p>
<p style="text-align: center">© José Luis Chiquete Valdivieso. 2019.</p>
|
github_jupyter
|
## Hopfield Network - Longren
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png', 'pdf')
```
## Tasks:
```
# 1. Store the patterns in the Hopfield network
'pattern A'
SA = [1,-1,1,-1]
'pattern B'
SB = [-1,1,1,1]
'pattern C'
SC = [-1,-1,-1,1]
N = 3 #number of patterns
K = 4 #number of units
WA = np.zeros([K,K]) #empty weight matrix
WB = np.zeros([K,K])
WC = np.zeros([K,K])
def weight_int(W,S): #weight initialization
for i in range(K): #for each row
for j in range(K): #for each column
W[i][j] = S[i] * S[j] #calculate weights for i != j
W[j][j] = 0 #set weights i = j to zero
return W
WA = weight_int(WA,SA)
print('weights for pattern A:')
print(WA)
WB = weight_int(WB,SB)
print('weights for pattern B:')
print(WB)
WC = weight_int(WC,SC)
print('weights for pattern C:')
print(WC)
# 1. Which of the patterns are stable states of the network dynamics?
Z = np.zeros([K]) #empty input array
def unit_input(W,S): #apply stored pattern as input
i = np.random.randint(0,4) #asynchronous update of units -- is this what is meant for choosing i?
for j in range(K):
Z[i] = np.sum(W[i][j] * S[j]) #new state Z
#print(j,i,Z[i])
if Z[i] >= 0:
Z[i] = 1
elif Z[i] < 0:
Z[i] = -1
return Z
#let's run the patterns again and see where they converges to
count = 50
def converge(W,S):
print('original pattern:', S)
for i in range(count):
Zi = unit_input(W,S)
return Zi
ZA = converge(WA,SA)
print('convergence of A:',ZA)
ZB = converge(WB,SB)
print('convergence of B:',ZB)
ZC = converge(WC,SC)
print('convergence of C:',ZC)
```
Analyzing the results of my code for the convergence of the patterns, we can see that according to mine there are two stable patterns, B & C, as they stick to the same pattern that was initially given after count=50 iterations. The other pattern, A, is not stable as it does not have the same value for S4 as was initially given.
However I believe my result for A is incorrect, as from the analytical MNS course we learned that there are three possible states for the Hopfield Network: P, -P, and (1,1,...,1) for the convergence of the pattern. And only a change of sign on S4 does not match a possible pattern that the unstable state could converge to.
```
# 2. Calculate the energy function for the network
#loop through iterations
W = np.zeros([K,K])
Z = np.zeros([K])
E = np.zeros([K,K])
count = 5
def energy(W,S): #energy function
for i in range(K):
for j in range(K):
E[i][j] = -1 * np.sum(W[i][j] * np.dot(Z[i],Z[j]))
return E
ELA = np.empty(count)
ELB = np.empty(count)
ELC = np.empty(count)
def energy_loop(EL,W,S):
W = weight_int(W,S)
for i in range(count):
Z = unit_input(W,S)
energy(W,S)
EL[i] = np.sum(E)
return EL
energy_loop(ELA,WA,SA)
energy_loop(ELB,WB,SB)
energy_loop(ELC,WC,SC)
#print(ELA)
#print(ELB)
#print(ELC)
x = np.linspace(1,count,count)
plt.plot(x,ELA)
plt.plot(x,ELB)
plt.plot(x,ELC)
plt.minorticks_on()
plt.grid()
plt.xlabel('iteration')
plt.ylabel('energy level')
plt.title('Energy function')
plt.legend(('A','B','C'))
plt.show()
```
All three patterns should only have a decreasing energy, however pattern A is having troubles for me for some iterations.
```
# 3. Reuse the code to store and recall image patterns
# 3.1 load the given images
images = np.load(r"C:\Users\lcube\Desktop\jupyter\BCCN\MNS\given\images.npz")
im = []
for item in lst: #getting the file into a usable form (somehow)
im.append(images[item]) #i know append isn't recommended, but it took me forever to find a way to processs the .npz
im = np.array(im) #array of size 8 x 30 x 40
#print(im[0][7][29][39])
# 3.2 store the patterns into a weight matrix
### i don't understand, why are we flattening it? shouldn't it work fine as the weights are 2 dimensional
### or do we want to get it to an 8 x 1200 and go from there..
im = [im[0][i].flatten() for i in range(7)] #flatten from 2 dimensions into vectors
im = np.array(im)
#print(im)
#print(np.size(im[0]))
### i know that it is recommended to be using matrix multiplication in order to calculate most things,
### but i am not sure how to do that without iterating over i and j to pick out the value that is
### being updated
w = np.ones([8,1200])
#print(w)
for i in range(8): #calculate the weight matrix with the patterns
for j in range(1200):
w[i][j] = im[0][i] * im[0][j] #calculate weights for i != j
w[i][i] = 0 #set weights i = j to zero
print(w) #the weight matrix W = {w_ij}
### i'm not sure how to use the input matrices of the image file in order to calculate what it is we want
```
|
github_jupyter
|
```
# Take all JSON from Blob Container and upload to Azure Search
import globals
import os
import pickle
import json
import requests
from pprint import pprint
from azure.storage.blob import BlockBlobService
from joblib import Parallel, delayed
def processLocalFile(file_name):
json_content = {}
try:
with open(file_name, 'r') as json_file:
json_content = json.loads(json_file.read())
docID = json_content["paper_id"]
title = json_content["metadata"]["title"]
body = {"documents": []}
abstractContent = ''
id_counter = 1
if "abstract" in json_content:
for c in json_content["abstract"]:
abstractContent += c["text"] + ' '
body["documents"].append({
"language": "en",
"id": str(id_counter),
"text": c["text"]
})
id_counter += 1
abstractContent = abstractContent.strip()
body = ''
if "body_text" in json_content:
for c in json_content["body_text"]:
body += c["text"] + ' '
body = body.strip()
contributors = []
for c in json_content["metadata"]["authors"]:
midInitial = ''
for mi in c["middle"]:
midInitial += mi + ' '
if len(((c["first"] + ' ' + midInitial + c["last"]).strip())) > 2:
contributors.append((c["first"] + ' ' + midInitial + c["last"]).strip())
return {"@search.action": "mergeOrUpload", "docID": docID, "title":title, "abstractContent": abstractContent, "body": body, "contributors": contributors}
except Exception as ex:
print (blob_name, " - Error:", str(ex))
return "Error"
with open(os.path.join(globals.files_dir, 'new_files.pkl'), 'rb') as input:
new_files = pickle.load(input)
print (str(len(new_files)), 'to upload...')
documents = {"value": []}
for json_file in new_files:
# print (json_file[json_file.rindex('/')+1:].replace('.json', '').replace('.xml', ''))
documents["value"].append(processLocalFile(json_file))
if len(documents["value"]) == 100:
print ("Applying", str(len(documents["value"])), "docs...")
url = globals.endpoint + "indexes/" + globals.indexName + "/docs/index" + globals.api_version
response = requests.post(url, headers=globals.headers, json=documents)
documents = {"value": []}
if len(documents["value"]) > 0:
print ("Applying", str(len(documents["value"])), "docs...")
url = globals.endpoint + "indexes/" + globals.indexName + "/docs/index" + globals.api_version
response = requests.post(url, headers=globals.headers, json=documents)
```
|
github_jupyter
|
# Schooling in Xenopus tadpoles: Power analysis
This is a supplementary notebook that generates some simulated data, and estimates the power analysis for a schooling protocol. The analysis subroutines are the same, or very close to ones from the actual notebook (**schooling_analysis**). The results of power analysis are given, and explained, in the text below, but can also be re-created by the reader, by re-running this notebook.
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.spatial
import scipy.stats as stats
from typing import List,Tuple
```
## 1. Generate simulated data
Data is generated in the following format:
Layout of Tadpole dataframe:
x y tx ty
0 7.391 14.783 -0.159 -0.14
1 8.850 14.623 -0.180 -0.18
2 7.751 12.426 -0.260 -0.24
where each line correponds to a "tadpole"; `x` and `y` columns give the position of the "tadpole's head" (virtual, in this case), and `tx` and `ty` are the positions of the "tail", relative to the "head".
```
def simulate_data(ntads=10, schooling=0.5, alignment=0.6):
"""Simulates tadpole distribution in the dish.
n = how many tadpoles to have
schooling = the probability of being in a school (simplistic, binary approach)
r = aligment radius
alignment = alignment coefficient (1-noise_level)
"""
R_DISH = 7
TAD_LENGTH = 0.4
N_ATTEMPTS = 20 # How many attempts to place each tadpole we would make
JUMP = 1 # Jump, in cm, from one tadpole to another
do_alignment = False # Whether we should align tadpoles to their neighbors. Too fancy?
xy = np.zeros((ntads,2))
tails = np.zeros((ntads,2))
itad = 0
while itad < ntads: # Simplified Bridson’s algorithm for Poisson-disc sampling
if itad==0 or np.random.uniform()>schooling: # First point and non-schooled points placed at random
drop = np.random.uniform(0, 2*R_DISH, 2)
else:
iparent = np.random.randint(itad)
angle = np.random.uniform(0, 2*np.pi)
d = np.random.uniform(JUMP, 2*JUMP)
drop = xy[iparent,:] + np.array([np.cos(angle), np.sin(angle)])*d
if np.sqrt((drop[0]-R_DISH)**2 + (drop[1]-R_DISH)**2) > R_DISH: # Outside of a dish, won't do
continue
good_point = True
for iother in range(itad):
if np.sqrt(np.sum(np.square(drop-xy[iother,:]))) < JUMP: # Too close to another dot; won't do
good_point = False
break
if not good_point:
continue
xy[itad,:] = drop
# Make the tadpole perpendicular to the radius
tails[itad,:] = [xy[itad,1]-R_DISH, -xy[itad,0]+R_DISH]
tails[itad,:] = tails[itad,:]/np.linalg.norm(tails[itad,:])*TAD_LENGTH
if do_alignment: # Fancy mutual alignment; maybe don't use it, as it is too fancy?
if itad>0:
for iother in range(itad):
d = np.linalg.norm(xy[itad,:]-xy[iother,:])
tails[itad,:] += tails[iother,:]/(d**2)
tails[itad,:] = tails[itad,:]/np.linalg.norm(tails[itad,:])*TAD_LENGTH
angle = np.random.uniform(0, 2*np.pi)
randotail = np.array([np.cos(angle), np.sin(angle)])*TAD_LENGTH
tails[itad,:] = tails[itad,:]*alignment + randotail*(1-alignment)
tails[itad,:] = tails[itad,:]/np.linalg.norm(tails[itad,:])*TAD_LENGTH
# This code above with 3 normalizations in a row could have been prettier of course
itad += 1
return pd.DataFrame({'x':xy[:,0] , 'y':xy[:,1] , 'tx':tails[:,0] , 'ty':tails[:,1]})
def arena_plot(t):
for i in range(len(t)):
plt.plot(t.x[i]+np.array([0, t.tx[i]]), t.y[i]+np.array([0, t.ty[i]]), 'r-')
plt.plot(t.x, t.y, '.')
plt.gca().add_artist(plt.Circle((7,7), 6.9, color='blue', fill=False, linestyle='-'))
plt.xlim([0, 14])
plt.ylim([0, 14])
plt.axis('off')
return
schoolings = [1, 0.5, 0]
alignments = [1, 0.5, 0]
names = ['Lawful', 'Neutral', 'Chaotic', 'good', 'neutral', 'evil']
plt.figure(figsize=(9,9))
for i in range(3):
for j in range(3):
t = simulate_data(ntads=20, schooling=schoolings[i], alignment=alignments[j])
plt.subplot(3,3,i*3+j+1)
arena_plot(t)
plt.title(f"Schooling={schoolings[i]}, \n alignment={alignments[j]}")
#plt.title(names[j] + ' ' + names[3+i])
```
## 2. Processing Tools
An exact copy of tools from the "main notebook" (as of 2020.08.01), except that instead of extracing tadpoles from real data, here we simulate this data. (So `exctractTads` function is not actually used).
```
def getNFrames(data):
"""Returns the total number of frames."""
return max(data.Frame)+1
def extractTads(data,frame):
"""Splits the data into XY position of each head, and _relative_ XY position of each tail."""
xy = data.loc[data.Frame==frame,['X','Y']].to_numpy()
heads = xy[0::2,:]
tails = xy[1::2,:]-heads
return pd.DataFrame({'x':heads[:,0] , 'y':heads[:,1] , 'tx':tails[:,0] , 'ty':tails[:,1]})
def findNeighbors(tads): # Returns a new data frame, for edges
"""Triangulates the field, finds "neighbors". No thresholding of distance."""
xy = tads[['x','y']]
tri = scipy.spatial.Delaunay(xy,qhull_options="QJ").simplices # "QJ" is needed to retain
# all tadpoles, including isolated ones
listOfPairs = [] # Array of tuples to describe all pairs of points
flip = lambda x: (x[1],x[0]) # A local function to flip tuples
for i in range(tri.shape[0]): # Go through all edges of Delaunay triangles, include each one only once
triangle = [tuple(tri[i,[0,1]]) , tuple(tri[i,[1,2]]) , tuple(tri[i,[2,0]])]
for p in triangle:
if p not in listOfPairs and flip(p) not in listOfPairs:
listOfPairs += [p]
out = pd.DataFrame({'i':[a for (a,b) in listOfPairs] , 'j':[b for (a,b) in listOfPairs]})
return out
def findDistances(tads,pairs):
"""Calculates distances between pairs of neighboring tadpoles."""
xy = tads[['x','y']].values
dist = [np.linalg.norm(xy[p[0],]-xy[p[1],]) for p in pairs[['i','j']].values.tolist()]
pairs['dist'] = dist
return pairs
# --- Test, for the first frame
tads = simulate_data(ntads=20)
pairs = findNeighbors(tads)
pairs = findDistances(tads,pairs)
print('Layout of Tadpole dataframe:')
print(tads[:3])
print('\nLayout of Pairs dataframe:')
print(pairs[:3])
# Test figure with edge colors proportional to their distance
fig = plt.figure()
ax = fig.add_subplot(111)
xy = tads[['x','y']].values
for i in range(len(pairs)):
p = pairs[['i','j']].values.tolist()[i]
ax.plot([xy[p[0],0] , xy[p[1],0]],[xy[p[0],1] , xy[p[1],1]]) # Point
ax.plot(*([xy[p[i],_] for i in range(2)] for _ in range(2)),
color=np.array([1,0.5,0])*pairs['dist'].iloc[i]/pairs[['dist']].max().values*0.9)
# The awkward construction above draws lines between neighboring tadpoles
ax.set_aspect('equal')
```
## 3. Tools to Process Angles
Exactly same as in the main notebook (as of 2020.08.01)
```
def findAngles(tads,pairs):
'''Angles between pairs of tadpoles'''
tails = tads[['tx','ty']].values # Go from pandas to lists, to utilize list comprehension
norms = [np.linalg.norm(tails[i,]) for i in range(tails.shape[0])]
angle = [np.arccos(np.dot(tails[p[0],],tails[p[1],])/(norms[p[0]]*norms[p[1]]))
for p in pairs[['i','j']].values.tolist()]
pairs['angle'] = np.array(angle)/np.pi*180
return pairs
def niceTadFigure(ax,tads,pairs):
"""Nice picture for troubleshooting."""
xy = tads[['x','y']].values
tails = tads[['tx','ty']].values
ang = pairs[['angle']].values
for i in range(len(pairs)):
p = pairs[['i','j']].values.tolist()[i]
ax.plot(*([xy[p[i],_] for i in range(2)] for _ in range(2)),
color=np.array([0.5,0.8,1])*(1-ang[i]/max(ang))) # Tadpole-tapole Edges
for i in range(xy.shape[0]):
nm = np.linalg.norm(tails[i,])
ax.plot(xy[i,0]+[0,tails[i,0]/nm], xy[i,1]+[0,tails[i,1]/nm] , '-',color='red')
ax.set_aspect('equal')
ax.axis('off')
# --- Test, for the first frame
pairs = findAngles(tads,pairs)
fig = plt.figure()
ax = fig.add_subplot(111)
niceTadFigure(ax,tads,pairs)
#plt.savefig('crystal_pic.svg', format='svg')
```
## 4. Define full processor and dataset visualization
This function is adjusted to look like the procesisng function from the main notebook, but actually we call the simulation several times, to generate the "frames".
```
def processEverything(nsets=12, show_image=False, schooling=0.3, alignment=0.5):
"""Process one full dataset."""
if show_image:
fig = plt.figure(figsize=(10,10));
fullDf = pd.DataFrame()
for iframe in range(nsets):
tads = simulate_data(ntads=20, schooling=schooling, alignment=alignment)
pairs = findNeighbors(tads)
pairs = findDistances(tads,pairs)
angl = findAngles(tads,pairs)
fullDf = fullDf.append(pd.DataFrame({'frame': [iframe]*len(pairs)}).join(pairs))
if show_image:
ax = fig.add_subplot(4,4,iframe+1)
niceTadFigure(ax,tads,pairs)
return fullDf
out = processEverything(show_image=True)
```
## 5. Compare two different simulated datasets
Below, one dataset has high schooling coefficient (0.9), and perfect alignment (1.0), while the other has almost no schooling (0.1), and perfectly random orientation for all tadpoles (alignment=0.0).
```
# Prepare the data
out = processEverything(show_image=False, schooling=0.9, alignment=1.0)
out_treatment = processEverything(show_image=False, schooling=0.1, alignment=0.0)
def two_groups_plot(y1, y2, labels):
"""A basic two-groups plot"""
plt.plot(1+(np.random.uniform(size=y1.shape[0])-0.5)*0.3, y1, '.', alpha=0.2, zorder=-1)
plt.plot(2+(np.random.uniform(size=y2.shape[0])-0.5)*0.3, y2, '.', alpha=0.2, zorder=-1)
# Zorder is set to negative to hack around a bug in matplotlib that places errorbars below plots
plt.errorbar(1, np.mean(y1), np.std(y1), color='k', marker='s', capsize=5)
plt.errorbar(2, np.mean(y2), np.std(y2), color='k', marker='s', capsize=5)
plt.xlim(0,3)
plt.xticks(ticks=[1,2], labels=labels)
def compare_distances(out1,out2,labels):
"""Visualizes distances, reports a stat test"""
N_BINS = 10
d = out1['dist'].values
d2 = out2['dist'].values
plt.figure(figsize=(9,4))
ax = plt.subplot(121)
two_groups_plot(d, d2, labels)
plt.ylabel('Distance, cm')
ax = plt.subplot(122)
#plt.hist(d , bins=30, density=True, alpha=0.5);
#plt.hist(d2, bins=30, density=True, alpha=0.5);
y1,x1 = np.histogram(d, bins=N_BINS, density=True)
y2,x2 = np.histogram(d2, bins=N_BINS, density=True)
centers = lambda x: np.mean(np.vstack((x[:-1],x[1:])), axis=0) # Centers of each bin
plt.plot(centers(x1),y1,'.-')
plt.plot(centers(x2),y2,'.-')
plt.xlabel('Distance, cm')
plt.ylabel('Probability Density')
plt.legend(labels, loc='upper right')
print('Was the average inter-tadpole disctance different between two sets of data?')
print('(were their clumping?)')
test_results = stats.ttest_ind(d,d2)
print('T-test: t = ', test_results.statistic, '; p-value = ',test_results.pvalue)
print('\nWas the distribution shape different between two sets??')
test_results = scipy.stats.ks_2samp(d,d2)
print('Kolmogorov-Smirnov test p-value = ',test_results.pvalue)
compare_distances(out, out_treatment, ['High Schooling','Low schooling'])
#plt.savefig('distances.svg', format='svg')
```
As we can see, non-schooling tadpoles tend to be more uniformly distributed, so we observe more mid-distances and fewer low and high distances. ("More uniformly" doesn't mean that the distribution is actually uniform; it is expected to be closer to $χ^2$). Conversely, schooling tadpoles tend to be closer to each other.
As not all inter-tadpole distances were considered, but rather we rely on the Delaunay triangulation, the shape of the histogram may be rather peculiar, but it is OK. What matters is not the shape itself, but the fact that this shape is sensitive to the configuration of the swarm, as this means that it can be used to statistically compare swarms that were formed differently.
```
def compare_angles(out, out2, labels):
"""Visualizes angles, reports a stat test."""
HIST_BIN = 30 # Histogram step, in degrees
a = out['angle'].values
a2 = out2['angle'].values
#plt.hist(a , bins=np.arange(0,180+10,10), density=True, alpha=0.5);
#plt.hist(a2, bins=np.arange(0,180+10,10), density=True, alpha=0.5);
preset_bins = np.arange(0,180+HIST_BIN, HIST_BIN)
y1,x1 = np.histogram(a, bins=preset_bins, density=True)
y2,x2 = np.histogram(a2, bins=preset_bins, density=True)
centers = lambda x: np.mean(np.vstack((x[:-1],x[1:])), axis=0) # Centers of each bin
plt.plot(centers(x1),y1,'.-')
plt.plot(centers(x2),y2,'.-')
plt.xticks(np.arange(0,180+30,30))
plt.xlabel('Angle, degrees')
plt.ylabel('Probability Density')
plt.legend(labels, loc='upper right')
print('\nWas the distribution of angles different between two sets?')
test_results = scipy.stats.ks_2samp(a,a2)
print('Kolmogorov-Smirnov test p-value = ',test_results.pvalue)
compare_angles(out, out_treatment, ['Alignment','No alignment'])
#plt.savefig('angles.svg', format='svg')
```
As we can see, if tadpoles are oriented at random, the histogram of inter-tadpole angles is flat. If tadpoles school, the distribution of angles drops, as most tadpoles are co-oriented.
## 6. Power analysis
```
ntries = 50
x = np.linspace(0, 1, 21)
y = np.zeros((x.shape[0], 3))
for ival in range(len(x)):
val = x[ival]
print(f'{val:4.1f}', end=' ')
count = np.array([0,0,0])
for iattempt in range(ntries):
print('.', end='')
out1 = processEverything(show_image=False, schooling=0.5, alignment=0.5)
out2 = processEverything(show_image=False, schooling=val, alignment=val)
d = out1['dist'].values
d2 = out2['dist'].values
pttest = stats.ttest_ind(d,d2).pvalue
pks = scipy.stats.ks_2samp(d,d2).pvalue
pangles = scipy.stats.ks_2samp(out['angle'].values, out2['angle'].values).pvalue
count[0] += 1*(pttest<0.05)
count[1] += 1*(pks<0.05)
count[2] += 1*(pangles<0.05)
y[ival,:] = count/ntries
print()
plt.figure(figsize=(8,6));
plt.plot(x,y);
plt.legend(labels=["Distances, t-test","Distances, KS-test","Angles, KS-test"], bbox_to_anchor=(1.3, 1));
plt.xlabel('Coefficients for the 2nd set (1st is fixed at 0.5)');
plt.ylabel('Test power');
```
For every point of the chart above, we compare two simulated datasets. One has the **schooling** coefficient (the probability of joining an existing school) set at 0.5, and the admixture of noise to tadpole orientation (**alignment** coefficient) also set at 0.5. For the other dataset, both parameters assume all values from 0 to 1 with a 0.05 step. The sizes of both datasets are the same as in our real experiments: 20 tadpoles, 12 photos. each simulation is repeated 50 times, to estimate the power 1-β of each of the tests (with α=0.05).
We can see that the angle analysis is much more sensitive, as even a change from 0.50 to 0.55 noise admixture is detected with >95% probability. Yet, the distribution of angles is also arguably more biologically involved, as it can depend on the function of the lateral line, and the distribution of currents in the bowl, while these currents may themselves be affected by the quality of schooling (non-schooling tadpoles won't create a current). To re-iterate, the test for co-alignment is very sensitive mathematically, but may be a bit messy biologically.
The tests of spatial clumping are almost exactly the other way around: they are easy to interpret (if the tadpoles stay together, then phenomenologially the DO schoo, regardless of the mechanism), but they are not that sensitive mathematically. For this sample size, we had to change the probability of "not joining a school" by about 30% to detect a difference with 80% power. We can also see that the t-test is more sensitive to this change than the Kolmogorov-Smirnov test, although this comparison may be sensitive to this particular implementation of a spatial model.
|
github_jupyter
|
# OCR (Optical Character Recognition) from Images with Transformers
---
[Github](https://github.com/eugenesiow/practical-ml/) | More Notebooks @ [eugenesiow/practical-ml](https://github.com/eugenesiow/practical-ml)
---
Notebook to recognise text automaticaly from an input image with either handwritten or printed text.
[Optical Character Recognition](https://paperswithcode.com/task/optical-character-recognition) is the task of converting images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene-photo (for example the text on signs and billboards in a landscape photo, license plates in cars...) or from subtitle text superimposed on an image (for example: from a television broadcast).
The [transformer models used](https://malaya-speech.readthedocs.io/en/latest/tts-singlish.html) are from Microsoft's TrOCR. The TrOCR models are encoder-decoder models, consisting of an image Transformer as encoder, and a text Transformer as decoder. We utilise the versions hosted on [huggingface.co](https://huggingface.co/models?search=microsoft/trocr) and use the awesome transformers library, for longevity and simplicity.
The notebook is structured as follows:
* Setting up the Environment
* Using the Model (Running Inference)
# Setting up the Environment
#### Dependencies and Runtime
If you're running this notebook in Google Colab, most of the dependencies are already installed and we don't need the GPU for this particular example.
If you decide to run this on many (>thousands) images and want the inference to go faster though, you can select `Runtime` > `Change Runtime Type` from the menubar. Ensure that `GPU` is selected as the `Hardware accelerator`.
We need to install huggingface `transformers` for this example to run, so execute the command below to setup the dependencies. We use the version compiled directly from the latest source (at the time of writing this is the only way to access the transforemrs TrOCR model code).
```
!pip install -q git+https://github.com/huggingface/transformers.git
```
# Using the Model (Running Inference)
Let's define a function for us to get images from the web. We execute this function to download an image with a line of handwritten text and display it.
```
import requests
from IPython.display import display
from PIL import Image
def show_image(url):
img = Image.open(requests.get(url, stream=True).raw).convert("RGB")
display(img)
return img
handwriting1 = show_image('https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg')
```
Now we want to load the model to recognise handwritten text.
Specifically we are running the following steps:
* Load the processor, `TrOCRProcessor`, which processes our input image and converts it into a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. The processor also adds absolute position embeddings and this sequence is fed to the layers of the Transformer encoder.
* Load the model, `VisionEncoderDecoderModel`, which consists of the image encoder and the text decoder.
* Define `ocr_image` function - We define the function for inferencing which takes our `src_img`, the input image we have downloaded. It will then run both the processor and the model inference and produce the output OCR text that has been recognised from the image.
```
import transformers
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-handwritten')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-handwritten')
def ocr_image(src_img):
pixel_values = processor(images=src_img, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
return processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
We now run our `ocr_image` function on the line of handwritten text in the image we have downloaded previously (and stored in `handwriting1`).
```
ocr_image(handwriting1)
```
Lets try on another image with handwritten text.
```
ocr_image(show_image('https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSoolxi9yWGAT5SLZShv8vVd0bz47UWRzQC19fDTeE8GmGv_Rn-PCF1pP1rrUx8kOjA4gg&usqp=CAU'))
import transformers
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
print_processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-printed')
print_model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-printed')
def ocr_print_image(src_img):
pixel_values = print_processor(images=src_img, return_tensors="pt").pixel_values
generated_ids = print_model.generate(pixel_values)
return print_processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
We download an image with noisy printed text, a scanned receipt.
```
receipt = show_image('https://github.com/zzzDavid/ICDAR-2019-SROIE/raw/master/data/img/000.jpg')
```
As the model processes a line of text, we crop the image to include on of the lines of text in the receipt and send it to our model.
```
receipt_crop = receipt.crop((0, 80, receipt.size[0], 110))
display(receipt_crop)
ocr_print_image(receipt_crop)
```
More Notebooks @ [eugenesiow/practical-ml](https://github.com/eugenesiow/practical-ml) and do star or drop us some feedback on how to improve the notebooks on the [Github repo](https://github.com/eugenesiow/practical-ml/).
|
github_jupyter
|
```
%load_ext autoreload
%autoreload 2
import os
import sys
import numpy as np
import pandas as pd
import plotly as pl
sys.path.insert(0, "..")
import ccal
np.random.random(20121020)
pl.offline.init_notebook_mode(connected=True)
df = pd.read_table("titanic.tsv", index_col=0)
df = df[["sex", "age", "fare", "survived"]].dropna()
df
sys.path.insert(0, "../../nd_array")
g = np.asarray(df["sex"] == "male", dtype=int)
g_name = "Gender"
a = np.asarray(df["age"])
a_name = "Age"
f = ccal.log_nd_array(
df["fare"].values, shift_as_necessary_to_achieve_min_before_logging="0<"
)
f_name = "Fare"
s = np.asarray(df["survived"])
s_name = "Survival"
ccal.plot_distributions(
(g, a, f, s),
names=(g_name, a_name, f_name, s_name),
title="Variable Distributions",
xaxis_title="Variable Value",
)
p_s1 = (s == 1).sum() / s.size
p_s1
grid_size = 32
p_s__g, p_s1__g = ccal.infer(
(g, s), grid_size=grid_size, target=1, names=(g_name, s_name)
)
p_s__a, p_s1__a = ccal.infer(
(a, s), grid_size=grid_size, target=1, names=(a_name, s_name)
)
p_s__f, p_s1__f = ccal.infer(
(f, s), grid_size=grid_size, target=1, names=(f_name, s_name)
)
p_s__a_f, p_s1__a_f = ccal.infer(
(a, f, s), grid_size=grid_size, target=1, names=(a_name, f_name, s_name)
)
p_s__a_f_naive, p_s1__a_f_naive = ccal.infer_assuming_independence(
(a, f, s), grid_size=grid_size, target=1, names=(a_name, f_name, s_name)
)
from sklearn.metrics import auc, roc_curve
maths = (
"P(S = 1 | G)",
"P(S = 1 | A)",
"P(S = 1 | F)",
"P(S = 1 | A, F)",
"P(S = 1 | A, F) (naive)",
)
math_roc = {math: {} for math in maths}
for math, p_s1__v, vs in zip(
maths,
(p_s1__g, p_s1__a, p_s1__f, p_s1__a_f, p_s1__a_f_naive),
((g,), (a,), (f,), (a, f), (a, f)),
):
p_s1__vv = np.full(s.size, np.nan)
for i in range(s.size):
coordinate = [
[np.argmin(abs(np.linspace(v.min(), v.max(), grid_size) - v[i]))]
for v in vs
]
p_s1__vv[i] = p_s1__v[coordinate]
fpr, tpr, t = roc_curve(s, ccal.normalize_nd_array(p_s1__vv, None, "0-1"))
math_roc[math]["fpr"] = fpr
math_roc[math]["tpr"] = tpr
auc_ = auc(fpr, tpr)
math_roc[math]["auc"] = auc_
n_permutation_for_roc = 1000
permuting_aucs = np.full(n_permutation_for_roc, np.nan)
permuting_s = s.copy()
for i in range(n_permutation_for_roc):
np.random.shuffle(permuting_s)
permuting_fpr, permuting_tpr, permuting_t = roc_curve(permuting_s, p_s1__vv)
permuting_aucs[i] = auc(permuting_fpr, permuting_tpr)
math_roc[math]["p-value"] = ccal.compute_empirical_p_value(
auc_, permuting_aucs, "great"
)
ccal.plot_bayesian_nomogram(
s, 1, 0, grid_size, (p_s__g, p_s__a, p_s__f), (g_name, a_name, f_name)
)
random_roc = np.linspace(0, 1, 16)
ccal.plot_points(
(random_roc,) + tuple(math_roc[math]["fpr"] for math in maths),
(random_roc,) + tuple(math_roc[math]["tpr"] for math in maths),
names=("Random ROC",)
+ tuple(
"{} | {:0.3f} | {:0.1e}".format(
math, math_roc[math]["auc"], math_roc[math]["p-value"]
)
for math in maths
),
modes=("markers",) + ("markers + lines",) * len(maths),
title="ROC: G={}, A={}, F={}".format(g_name, a_name, f_name),
xaxis_title="False Positive Rate",
yaxis_title="True Positive Rate",
legend_orientation="h",
)
```
|
github_jupyter
|
# Regression Plots
```
%matplotlib inline
from statsmodels.compat import lzip
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import ols
plt.rc("figure", figsize=(16,8))
plt.rc("font", size=14)
```
## Duncan's Prestige Dataset
### Load the Data
We can use a utility function to load any R dataset available from the great <a href="https://vincentarelbundock.github.io/Rdatasets/">Rdatasets package</a>.
```
prestige = sm.datasets.get_rdataset("Duncan", "carData", cache=True).data
prestige.head()
prestige_model = ols("prestige ~ income + education", data=prestige).fit()
print(prestige_model.summary())
```
### Influence plots
Influence plots show the (externally) studentized residuals vs. the leverage of each observation as measured by the hat matrix.
Externally studentized residuals are residuals that are scaled by their standard deviation where
$$var(\hat{\epsilon}_i)=\hat{\sigma}^2_i(1-h_{ii})$$
with
$$\hat{\sigma}^2_i=\frac{1}{n - p - 1 \;\;}\sum_{j}^{n}\;\;\;\forall \;\;\; j \neq i$$
$n$ is the number of observations and $p$ is the number of regressors. $h_{ii}$ is the $i$-th diagonal element of the hat matrix
$$H=X(X^{\;\prime}X)^{-1}X^{\;\prime}$$
The influence of each point can be visualized by the criterion keyword argument. Options are Cook's distance and DFFITS, two measures of influence.
```
fig = sm.graphics.influence_plot(prestige_model, criterion="cooks")
fig.tight_layout(pad=1.0)
```
As you can see there are a few worrisome observations. Both contractor and reporter have low leverage but a large residual. <br />
RR.engineer has small residual and large leverage. Conductor and minister have both high leverage and large residuals, and, <br />
therefore, large influence.
### Partial Regression Plots (Duncan)
Since we are doing multivariate regressions, we cannot just look at individual bivariate plots to discern relationships. <br />
Instead, we want to look at the relationship of the dependent variable and independent variables conditional on the other <br />
independent variables. We can do this through using partial regression plots, otherwise known as added variable plots. <br />
In a partial regression plot, to discern the relationship between the response variable and the $k$-th variable, we compute <br />
the residuals by regressing the response variable versus the independent variables excluding $X_k$. We can denote this by <br />
$X_{\sim k}$. We then compute the residuals by regressing $X_k$ on $X_{\sim k}$. The partial regression plot is the plot <br />
of the former versus the latter residuals. <br />
The notable points of this plot are that the fitted line has slope $\beta_k$ and intercept zero. The residuals of this plot <br />
are the same as those of the least squares fit of the original model with full $X$. You can discern the effects of the <br />
individual data values on the estimation of a coefficient easily. If obs_labels is True, then these points are annotated <br />
with their observation label. You can also see the violation of underlying assumptions such as homoskedasticity and <br />
linearity.
```
fig = sm.graphics.plot_partregress("prestige", "income", ["income", "education"], data=prestige)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_partregress("prestige", "income", ["education"], data=prestige)
fig.tight_layout(pad=1.0)
```
As you can see the partial regression plot confirms the influence of conductor, minister, and RR.engineer on the partial relationship between income and prestige. The cases greatly decrease the effect of income on prestige. Dropping these cases confirms this.
```
subset = ~prestige.index.isin(["conductor", "RR.engineer", "minister"])
prestige_model2 = ols("prestige ~ income + education", data=prestige, subset=subset).fit()
print(prestige_model2.summary())
```
For a quick check of all the regressors, you can use plot_partregress_grid. These plots will not label the <br />
points, but you can use them to identify problems and then use plot_partregress to get more information.
```
fig = sm.graphics.plot_partregress_grid(prestige_model)
fig.tight_layout(pad=1.0)
```
### Component-Component plus Residual (CCPR) Plots
The CCPR plot provides a way to judge the effect of one regressor on the <br />
response variable by taking into account the effects of the other <br />
independent variables. The partial residuals plot is defined as <br />
$\text{Residuals} + B_iX_i \text{ }\text{ }$ versus $X_i$. The component adds $B_iX_i$ versus <br />
$X_i$ to show where the fitted line would lie. Care should be taken if $X_i$ <br />
is highly correlated with any of the other independent variables. If this <br />
is the case, the variance evident in the plot will be an underestimate of <br />
the true variance.
```
fig = sm.graphics.plot_ccpr(prestige_model, "education")
fig.tight_layout(pad=1.0)
```
As you can see the relationship between the variation in prestige explained by education conditional on income seems to be linear, though you can see there are some observations that are exerting considerable influence on the relationship. We can quickly look at more than one variable by using plot_ccpr_grid.
```
fig = sm.graphics.plot_ccpr_grid(prestige_model)
fig.tight_layout(pad=1.0)
```
### Single Variable Regression Diagnostics
The plot_regress_exog function is a convenience function that gives a 2x2 plot containing the dependent variable and fitted values with confidence intervals vs. the independent variable chosen, the residuals of the model vs. the chosen independent variable, a partial regression plot, and a CCPR plot. This function can be used for quickly checking modeling assumptions with respect to a single regressor.
```
fig = sm.graphics.plot_regress_exog(prestige_model, "education")
fig.tight_layout(pad=1.0)
```
### Fit Plot
The plot_fit function plots the fitted values versus a chosen independent variable. It includes prediction confidence intervals and optionally plots the true dependent variable.
```
fig = sm.graphics.plot_fit(prestige_model, "education")
fig.tight_layout(pad=1.0)
```
## Statewide Crime 2009 Dataset
Compare the following to http://www.ats.ucla.edu/stat/stata/webbooks/reg/chapter4/statareg_self_assessment_answers4.htm
Though the data here is not the same as in that example. You could run that example by uncommenting the necessary cells below.
```
#dta = pd.read_csv("http://www.stat.ufl.edu/~aa/social/csv_files/statewide-crime-2.csv")
#dta = dta.set_index("State", inplace=True).dropna()
#dta.rename(columns={"VR" : "crime",
# "MR" : "murder",
# "M" : "pctmetro",
# "W" : "pctwhite",
# "H" : "pcths",
# "P" : "poverty",
# "S" : "single"
# }, inplace=True)
#
#crime_model = ols("murder ~ pctmetro + poverty + pcths + single", data=dta).fit()
dta = sm.datasets.statecrime.load_pandas().data
crime_model = ols("murder ~ urban + poverty + hs_grad + single", data=dta).fit()
print(crime_model.summary())
```
### Partial Regression Plots (Crime Data)
```
fig = sm.graphics.plot_partregress_grid(crime_model)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_partregress("murder", "hs_grad", ["urban", "poverty", "single"], data=dta)
fig.tight_layout(pad=1.0)
```
### Leverage-Resid<sup>2</sup> Plot
Closely related to the influence_plot is the leverage-resid<sup>2</sup> plot.
```
fig = sm.graphics.plot_leverage_resid2(crime_model)
fig.tight_layout(pad=1.0)
```
### Influence Plot
```
fig = sm.graphics.influence_plot(crime_model)
fig.tight_layout(pad=1.0)
```
### Using robust regression to correct for outliers.
Part of the problem here in recreating the Stata results is that M-estimators are not robust to leverage points. MM-estimators should do better with this examples.
```
from statsmodels.formula.api import rlm
rob_crime_model = rlm("murder ~ urban + poverty + hs_grad + single", data=dta,
M=sm.robust.norms.TukeyBiweight(3)).fit(conv="weights")
print(rob_crime_model.summary())
#rob_crime_model = rlm("murder ~ pctmetro + poverty + pcths + single", data=dta, M=sm.robust.norms.TukeyBiweight()).fit(conv="weights")
#print(rob_crime_model.summary())
```
There is not yet an influence diagnostics method as part of RLM, but we can recreate them. (This depends on the status of [issue #888](https://github.com/statsmodels/statsmodels/issues/808))
```
weights = rob_crime_model.weights
idx = weights > 0
X = rob_crime_model.model.exog[idx.values]
ww = weights[idx] / weights[idx].mean()
hat_matrix_diag = ww*(X*np.linalg.pinv(X).T).sum(1)
resid = rob_crime_model.resid
resid2 = resid**2
resid2 /= resid2.sum()
nobs = int(idx.sum())
hm = hat_matrix_diag.mean()
rm = resid2.mean()
from statsmodels.graphics import utils
fig, ax = plt.subplots(figsize=(16,8))
ax.plot(resid2[idx], hat_matrix_diag, 'o')
ax = utils.annotate_axes(range(nobs), labels=rob_crime_model.model.data.row_labels[idx],
points=lzip(resid2[idx], hat_matrix_diag), offset_points=[(-5,5)]*nobs,
size="large", ax=ax)
ax.set_xlabel("resid2")
ax.set_ylabel("leverage")
ylim = ax.get_ylim()
ax.vlines(rm, *ylim)
xlim = ax.get_xlim()
ax.hlines(hm, *xlim)
ax.margins(0,0)
```
|
github_jupyter
|
```
import sys
sys.path.append('../')
import os
os.environ["CUDA_VISIBLE_DEVICES"]="1"
import glob
from keras.optimizers import Adam, SGD
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, TerminateOnNaN, CSVLogger, TensorBoard
from keras import backend as K
from keras.models import load_model
from math import ceil
import numpy as np
from matplotlib import pyplot as plt
from models.da_ssd512_other_loss_metrics import ssd_512
from keras_loss_function.keras_ssd_loss import SSDLoss
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_DecodeDetections import DecodeDetections
from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast
from keras_layers.keras_layer_L2Normalization import L2Normalization
from ssd_encoder_decoder.ssd_input_encoder import SSDInputEncoder
from ssd_encoder_decoder.ssd_output_decoder import decode_detections, decode_detections_fast
from data_generator.object_detection_2d_data_generator import DataGenerator
from data_generator.object_detection_2d_geometric_ops import Resize
from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels
from data_generator.data_augmentation_chain_original_ssd import SSDDataAugmentation, SSDDataAugmentation_Siamese
from data_generator.object_detection_2d_misc_utils import apply_inverse_transforms
img_height = 512 # Height of the model input images
img_width = 512 # Width of the model input images
img_channels = 3 # Number of color channels of the model input images
mean_color = [123, 117, 104] # Per-channel mean of images. Do not change if use any of the pre-trained weights.
# The color channel order in the original SSD is BGR,
# so we'll have the model reverse the color channel order of the input images.
swap_channels = [2, 1, 0]
# The anchor box scaling factors used in the original SSD512 for the Pascal VOC datasets
# scales_pascal =
# The anchor box scaling factors used in the original SSD512 for the MS COCO datasets
scales_coco = [0.07, 0.15, 0.3, 0.45, 0.6, 0.75, 0.9, 1.05]
scales = scales_coco
aspect_ratios = [[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5]] # The anchor box aspect ratios used in the original SSD512; the order matters
two_boxes_for_ar1 = True
steps = [8, 16, 32, 64, 128, 256, 512] # Space between two adjacent anchor box center points for each predictor layer.
# The offsets of the first anchor box center points from the top and left borders of the image
# as a fraction of the step size for each predictor layer.
offsets = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
clip_boxes = False # Whether or not to clip the anchor boxes to lie entirely within the image boundaries
# The variances by which the encoded target coordinates are divided as in the original implementation
variances = [0.1, 0.1, 0.2, 0.2]
normalize_coords = True
Model_Build = 'New_Model' # 'Load_Model'
Optimizer_Type = 'SGD' # 'Adam' #
# Different batch_size will have different prediction loss.
batch_size = 8 # Change the batch size if you like, or if you run into GPU memory issues.
# alpha_distance = 0.0001 # Coefficient for the distance between the source and target feature maps.
loss_weights = [0.0005, 0.0005, 0.0] + [1.0]
# 'SIM10K_to_VOC12_resize_400_800' # 'City_to_foggy0_01_resize_400_800'
DatasetName = 'SIM10K_to_VOC07_resize_400_800' # 'SIM10K_to_City_resize_400_800' # #
processed_dataset_path = './processed_dataset_h5/' + DatasetName
if not os.path.exists(processed_dataset_path):
os.makedirs(processed_dataset_path)
if len(glob.glob(os.path.join(processed_dataset_path, '*.h5'))):
Dataset_Build = 'Load_Dataset'
else:
Dataset_Build = 'New_Dataset'
# Define model callbacks.
checkpoint_path = '../trained_weights/SIM10K_to_VOC07/current/mmd_0_0005'
# checkpoint_path = '../trained_weights/denug'
if not os.path.exists(checkpoint_path):
os.makedirs(checkpoint_path)
if DatasetName == 'SIM10K_to_VOC12_resize_400_800':
resize_image_to = (400, 800)
# The directories that contain the images.
train_source_images_dir = '../../datasets/SIM10K/JPEGImages'
train_target_images_dir = '../../datasets/VOCdevkit/VOC2012/JPEGImages'
test_target_images_dir = '../../datasets/VOCdevkit/VOC2012/JPEGImages'
# The directories that contain the annotations.
train_annotation_dir = '../../datasets/SIM10K/Annotations'
test_annotation_dir = '../../datasets/VOCdevkit/VOC2012/Annotations'
# The paths to the image sets.
train_source_image_set_filename = '../../datasets/SIM10K/ImageSets/Main/trainval10k.txt'
# The trainset of VOC which has 'car' object is used as train_target.
train_target_image_set_filename = '../../datasets/VOCdevkit/VOC2012_CAR/ImageSets/Main/train_target.txt'
# The valset of VOC which has 'car' object is used as test.
test_target_image_set_filename = '../../datasets/VOCdevkit/VOC2012_CAR/ImageSets/Main/test.txt'
classes = ['background', 'car'] # Our model will produce predictions for these classes.
train_classes = ['background', 'car', 'motorbike', 'person'] # The train_source dataset contains these classes.
train_include_classes = [train_classes.index(one_class) for one_class in classes[1:]]
# The test_target dataset contains these classes.
val_classes = ['background', 'car',
'aeroplane', 'bicycle', 'bird', 'boat',
'bottle', 'bus', 'cat',
'chair', 'cow', 'diningtable', 'dog',
'horse', 'motorbike', 'person', 'pottedplant',
'sheep', 'sofa', 'train', 'tvmonitor']
val_include_classes = [val_classes.index(one_class) for one_class in classes[1:]]
# Number of positive classes, 8 for domain Cityscapes, 20 for Pascal VOC, 80 for MS COCO, 1 for SIM10K
n_classes = len(classes) - 1
elif DatasetName == 'SIM10K_to_VOC07_resize_400_800':
resize_image_to = (400, 800)
# The directories that contain the images.
train_source_images_dir = '../../datasets/SIM10K/JPEGImages'
train_target_images_dir = '../../datasets/VOCdevkit/VOC2007/JPEGImages'
test_target_images_dir = '../../datasets/VOCdevkit/VOC2007/JPEGImages'
# The directories that contain the annotations.
train_annotation_dir = '../../datasets/SIM10K/Annotations'
test_annotation_dir = '../../datasets/VOCdevkit/VOC2007/Annotations'
# The paths to the image sets.
train_source_image_set_filename = '../../datasets/SIM10K/ImageSets/Main/trainval10k.txt'
# The trainset of VOC which has 'car' object is used as train_target.
train_target_image_set_filename = '../../datasets/VOCdevkit/VOC2007_CAR/ImageSets/Main/train_target.txt'
# The valset of VOC which has 'car' object is used as test.
test_target_image_set_filename = '../../datasets/VOCdevkit/VOC2007_CAR/ImageSets/Main/test.txt'
classes = ['background', 'car'] # Our model will produce predictions for these classes.
train_classes = ['background', 'car', 'motorbike', 'person'] # The train_source dataset contains these classes.
train_include_classes = [train_classes.index(one_class) for one_class in classes[1:]]
# The test_target dataset contains these classes.
val_classes = ['background', 'car',
'aeroplane', 'bicycle', 'bird', 'boat',
'bottle', 'bus', 'cat',
'chair', 'cow', 'diningtable', 'dog',
'horse', 'motorbike', 'person', 'pottedplant',
'sheep', 'sofa', 'train', 'tvmonitor']
val_include_classes = [val_classes.index(one_class) for one_class in classes[1:]]
# Number of positive classes, 8 for domain Cityscapes, 20 for Pascal VOC, 80 for MS COCO, 1 for SIM10K
n_classes = len(classes) - 1
elif DatasetName == 'SIM10K_to_City_resize_400_800':
resize_image_to = (400, 800)
# The directories that contain the images.
train_source_images_dir = '../../datasets/SIM10K/JPEGImages'
train_target_images_dir = '../../datasets/Cityscapes/JPEGImages'
test_target_images_dir = '../../datasets/val_data_for_SIM10K_to_cityscapes/JPEGImages'
# The directories that contain the annotations.
train_annotation_dir = '../../datasets/SIM10K/Annotations'
test_annotation_dir = '../../datasets/val_data_for_SIM10K_to_cityscapes/Annotations'
# The paths to the image sets.
train_source_image_set_filename = '../../datasets/SIM10K/ImageSets/Main/trainval10k.txt'
train_target_image_set_filename = '../../datasets/Cityscapes/ImageSets/Main/train_source.txt'
test_target_image_set_filename = '../../datasets/val_data_for_SIM10K_to_cityscapes/ImageSets/Main/test.txt'
classes = ['background', 'car'] # Our model will produce predictions for these classes.
train_classes = ['background', 'car', 'motorbike', 'person'] # The train_source dataset contains these classes.
train_include_classes = [train_classes.index(one_class) for one_class in classes[1:]]
# The test_target dataset contains these classes.
val_classes = ['background', 'car']
val_include_classes = 'all'
# Number of positive classes, 8 for domain Cityscapes, 20 for Pascal VOC, 80 for MS COCO, 1 for SIM10K
n_classes = len(classes) - 1
elif DatasetName == 'City_to_foggy0_02_resize_400_800':
resize_image_to = (400, 800)
# Introduction of PascalVOC: https://arleyzhang.github.io/articles/1dc20586/
# The directories that contain the images.
train_source_images_dir = '../../datasets/Cityscapes/JPEGImages'
train_target_images_dir = '../../datasets/Cityscapes/JPEGImages'
test_target_images_dir = '../../datasets/Cityscapes/JPEGImages'
# The directories that contain the annotations.
train_annotation_dir = '../../datasets/Cityscapes/Annotations'
test_annotation_dir = '../../datasets/Cityscapes/Annotations'
# The paths to the image sets.
train_source_image_set_filename = '../../datasets/Cityscapes/ImageSets/Main/train_source.txt'
train_target_image_set_filename = '../../datasets/Cityscapes/ImageSets/Main/train_target.txt'
test_target_image_set_filename = '../../datasets/Cityscapes/ImageSets/Main/test.txt'
# Our model will produce predictions for these classes.
classes = ['background',
'person', 'rider', 'car', 'truck',
'bus', 'train', 'motorcycle', 'bicycle']
train_classes = classes
train_include_classes = 'all'
val_classes = classes
val_include_classes = 'all'
# Number of positive classes, 8 for domain Cityscapes, 20 for Pascal VOC, 80 for MS COCO, 1 for SIM10K
n_classes = len(classes) - 1
elif DatasetName == 'City_to_foggy0_01_resize_400_800':
resize_image_to = (400, 800)
# Introduction of PascalVOC: https://arleyzhang.github.io/articles/1dc20586/
# The directories that contain the images.
train_source_images_dir = '../../datasets/Cityscapes/JPEGImages'
train_target_images_dir = '../../datasets/CITYSCAPES_beta_0_01/JPEGImages'
test_target_images_dir = '../../datasets/CITYSCAPES_beta_0_01/JPEGImages'
# The directories that contain the annotations.
train_annotation_dir = '../../datasets/Cityscapes/Annotations'
test_annotation_dir = '../../datasets/Cityscapes/Annotations'
# The paths to the image sets.
train_source_image_set_filename = '../../datasets/Cityscapes/ImageSets/Main/train_source.txt'
train_target_image_set_filename = '../../datasets/Cityscapes/ImageSets/Main/train_target.txt'
test_target_image_set_filename = '../../datasets/Cityscapes/ImageSets/Main/test.txt'
# Our model will produce predictions for these classes.
classes = ['background',
'person', 'rider', 'car', 'truck',
'bus', 'train', 'motorcycle', 'bicycle']
train_classes = classes
train_include_classes = 'all'
val_classes = classes
val_include_classes = 'all'
# Number of positive classes, 8 for domain Cityscapes, 20 for Pascal VOC, 80 for MS COCO, 1 for SIM10K
n_classes = len(classes) - 1
else:
raise ValueError('Undefined dataset name.')
if Model_Build == 'New_Model':
# 1: Build the Keras model.
K.clear_session() # Clear previous models from memory.
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True # dynamically grow the memory used on the GPU
config.log_device_placement = True # to log device placement (on which device the operation ran)
# (nothing gets printed in Jupyter, only if you run it standalone)
sess = tf.Session(config=config)
set_session(sess) # set this TensorFlow session as the default session for Keras
model = ssd_512(image_size=(img_height, img_width, img_channels),
n_classes=n_classes,
mode='training',
l2_regularization=0.0005,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
normalize_coords=normalize_coords,
subtract_mean=mean_color,
swap_channels=swap_channels)
# 2: Load some weights into the model.
# TODO: Set the path to the weights you want to load.
weights_path = '../trained_weights/VGG_ILSVRC_16_layers_fc_reduced.h5'
model.load_weights(weights_path, by_name=True)
# 3: Instantiate an optimizer and the SSD loss function and compile the model.
# If you want to follow the original Caffe implementation, use the preset SGD
# optimizer, otherwise I'd recommend the commented-out Adam optimizer.
if Optimizer_Type == 'SGD':
Optimizer = SGD(lr=0.001, momentum=0.9, decay=0.0, nesterov=False)
elif Optimizer_Type == 'Adam':
Optimizer = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
else:
raise ValueError('Undefined Optimizer_Type.')
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
model.compile(optimizer=Optimizer, loss={'pool1_GAP_substract': ssd_loss.compute_distance_loss,
'pool2_GAP_substract': ssd_loss.compute_distance_loss,
'pool3_GAP_substract': ssd_loss.compute_distance_loss_source_only,
'predictions': ssd_loss.compute_loss},
loss_weights={'pool1_GAP_substract': loss_weights[0],
'pool2_GAP_substract': loss_weights[1],
'pool3_GAP_substract': loss_weights[2],
'predictions': loss_weights[3]})
# if Source_Only:
# model.compile(optimizer=Optimizer, loss={'pool1_GAP_substract': ssd_loss.compute_distance_loss_source_only,
# 'pool2_GAP_substract': ssd_loss.compute_distance_loss_source_only,
# 'pool3_GAP_substract': ssd_loss.compute_distance_loss_source_only,
# 'predictions': ssd_loss.compute_loss},
# loss_weights={'pool1_GAP_substract': loss_weights[0],
# 'pool2_GAP_substract': loss_weights[1],
# 'pool3_GAP_substract': loss_weights[2],
# 'predictions': loss_weights[3]})
# else:
# model.compile(optimizer=Optimizer, loss={'pool1_GAP_substract': ssd_loss.compute_distance_loss,
# 'pool2_GAP_substract': ssd_loss.compute_distance_loss,
# 'pool3_GAP_substract': ssd_loss.compute_distance_loss,
# 'predictions': ssd_loss.compute_loss},
# loss_weights={'pool1_GAP_substract': loss_weights[0],
# 'pool2_GAP_substract': loss_weights[1],
# 'pool3_GAP_substract': loss_weights[2],
# 'predictions': loss_weights[3]})
elif Model_Build == 'Load_Model':
# TODO: Set the path to the `.h5` file of the model to be loaded.
model_path = '../trained_weights/VGG_ssd300_Cityscapes/epoch-23_loss-5.2110_val_loss-6.7452.h5'
# We need to create an SSDLoss object in order to pass that to the model loader.
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
K.clear_session() # Clear previous models from memory.
# import tensorflow as tf
# from keras.backend.tensorflow_backend import set_session
#
# config = tf.ConfigProto()
# config.gpu_options.allow_growth = True # dynamically grow the memory used on the GPU
# config.log_device_placement = True # to log device placement (on which device the operation ran)
# # (nothing gets printed in Jupyter, only if you run it standalone)
# sess = tf.Session(config=config)
# set_session(sess) # set this TensorFlow session as the default session for Keras
model = load_model(model_path, custom_objects={'AnchorBoxes': AnchorBoxes,
'L2Normalization': L2Normalization,
'compute_loss': ssd_loss.compute_loss,
'compute_distance_loss': ssd_loss.compute_distance_loss})
else:
raise ValueError('Undefined Model_Build. Model_Build should be New_Model or Load_Model')
if Dataset_Build == 'New_Dataset':
# 1: Instantiate two `DataGenerator` objects: One for training, one for validation.
# Optional: If you have enough memory, consider loading the images into memory for the reasons explained above.
train_dataset = DataGenerator(dataset='train', load_images_into_memory=False, hdf5_dataset_path=None)
val_dataset = DataGenerator(dataset='val', load_images_into_memory=False, hdf5_dataset_path=None)
# 2: Parse the image and label lists for the training and validation datasets. This can take a while.
# images_dirs, image_set_filenames, and annotations_dirs should have the same length
train_dataset.parse_xml(images_dirs=[train_source_images_dir],
target_images_dirs=[train_target_images_dir],
image_set_filenames=[train_source_image_set_filename],
target_image_set_filenames=[train_target_image_set_filename],
annotations_dirs=[train_annotation_dir],
classes=train_classes,
include_classes=train_include_classes,
exclude_truncated=False,
exclude_difficult=False,
ret=False)
val_dataset.parse_xml(images_dirs=[test_target_images_dir],
image_set_filenames=[test_target_image_set_filename],
annotations_dirs=[test_annotation_dir],
classes=val_classes,
include_classes=val_include_classes,
exclude_truncated=False,
exclude_difficult=True,
ret=False)
# Optional: Convert the dataset into an HDF5 dataset. This will require more disk space, but will
# speed up the training. Doing this is not relevant in case you activated the `load_images_into_memory`
# option in the constructor, because in that cas the images are in memory already anyway. If you don't
# want to create HDF5 datasets, comment out the subsequent two function calls.
# After create these h5 files, if you have resized the input image, you need to reload these files. Otherwise,
# the images and the labels will not change.
train_dataset.create_hdf5_dataset(file_path=os.path.join(processed_dataset_path, 'dataset_train.h5'),
resize=resize_image_to,
variable_image_size=True,
verbose=True)
val_dataset.create_hdf5_dataset(file_path=os.path.join(processed_dataset_path, 'dataset_test.h5'),
resize=False,
variable_image_size=True,
verbose=True)
train_dataset = DataGenerator(dataset='train',
load_images_into_memory=False,
hdf5_dataset_path=os.path.join(processed_dataset_path, 'dataset_train.h5'),
filenames=train_source_image_set_filename,
target_filenames=train_target_image_set_filename,
filenames_type='text',
images_dir=train_source_images_dir,
target_images_dir=train_target_images_dir)
val_dataset = DataGenerator(dataset='val',
load_images_into_memory=False,
hdf5_dataset_path=os.path.join(processed_dataset_path, 'dataset_test.h5'),
filenames=test_target_image_set_filename,
filenames_type='text',
images_dir=test_target_images_dir)
elif Dataset_Build == 'Load_Dataset':
# 1: Instantiate two `DataGenerator` objects: One for training, one for validation.
# Load dataset from the created h5 file.
train_dataset = DataGenerator(dataset='train',
load_images_into_memory=False,
hdf5_dataset_path=os.path.join(processed_dataset_path, 'dataset_train.h5'),
filenames=train_source_image_set_filename,
target_filenames=train_target_image_set_filename,
filenames_type='text',
images_dir=train_source_images_dir,
target_images_dir=train_target_images_dir)
val_dataset = DataGenerator(dataset='val',
load_images_into_memory=False,
hdf5_dataset_path=os.path.join(processed_dataset_path, 'dataset_test.h5'),
filenames=test_target_image_set_filename,
filenames_type='text',
images_dir=test_target_images_dir)
else:
raise ValueError('Undefined Dataset_Build. Dataset_Build should be New_Dataset or Load_Dataset.')
# 4: Set the image transformations for pre-processing and data augmentation options.
# For the training generator:
ssd_data_augmentation = SSDDataAugmentation_Siamese(img_height=img_height,
img_width=img_width)
# For the validation generator:
convert_to_3_channels = ConvertTo3Channels()
resize = Resize(height=img_height, width=img_width)
# 5: Instantiate an encoder that can encode ground truth labels into the format needed by the SSD loss function.
# The encoder constructor needs the spatial dimensions of the model's predictor layers to create the anchor boxes.
predictor_sizes = [model.get_layer('conv4_3_norm_mbox_conf').output_shape[1:3],
model.get_layer('fc7_mbox_conf').output_shape[1:3],
model.get_layer('conv6_2_mbox_conf').output_shape[1:3],
model.get_layer('conv7_2_mbox_conf').output_shape[1:3],
model.get_layer('conv8_2_mbox_conf').output_shape[1:3],
model.get_layer('conv9_2_mbox_conf').output_shape[1:3],
model.get_layer('conv10_2_mbox_conf').output_shape[1:3]]
ssd_input_encoder = SSDInputEncoder(img_height=img_height,
img_width=img_width,
n_classes=n_classes,
predictor_sizes=predictor_sizes,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
matching_type='multi',
pos_iou_threshold=0.5,
neg_iou_limit=0.5,
normalize_coords=normalize_coords)
# 6: Create the generator handles that will be passed to Keras' `fit_generator()` function.
# The input image and label are first processed by transformations. Then, the label will be further encoded by
# ssd_input_encoder. The encoded labels are classId and offset to each anchor box.
train_generator = train_dataset.generate(batch_size=batch_size,
shuffle=True,
transformations=[ssd_data_augmentation],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
val_generator = val_dataset.generate(batch_size=batch_size,
shuffle=False,
transformations=[convert_to_3_channels,
resize],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
# Get the number of samples in the training and validations datasets.
train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size = val_dataset.get_dataset_size()
print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size))
def lr_schedule(epoch):
if epoch < 2:
return 0.0005
elif epoch < 50:
return 0.001
elif epoch < 70:
return 0.0001
else:
return 0.00001
# def lr_schedule(epoch):
# if epoch < 50:
# return 0.001
# elif epoch < 60:
# return 0.0001
# else:
# return 0.00001
# TODO: Set the filepath under which you want to save the model.
model_checkpoint = ModelCheckpoint(filepath=os.path.join(checkpoint_path, 'epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5'),
monitor='val_loss',
verbose=1,
save_best_only=False,
save_weights_only=True,
mode='auto',
period=1)
# model_checkpoint.best to the best validation loss from the previous training
# model_checkpoint.best = 4.83704
csv_logger = CSVLogger(filename=os.path.join(checkpoint_path, 'source_only_run2_training_log.csv'),
separator=',',
append=True)
learning_rate_scheduler = LearningRateScheduler(schedule=lr_schedule,
verbose=1)
terminate_on_nan = TerminateOnNaN()
TensorBoard_monitor = TensorBoard(log_dir=checkpoint_path)
callbacks = [model_checkpoint,
csv_logger,
learning_rate_scheduler,
terminate_on_nan,
TensorBoard_monitor]
initial_epoch = 0
final_epoch = 80
steps_per_epoch = 1000
history = model.fit_generator(generator=train_generator,
steps_per_epoch=steps_per_epoch,
epochs=final_epoch,
callbacks=callbacks,
validation_data=val_generator,
validation_steps=ceil(val_dataset_size/batch_size),
initial_epoch=initial_epoch)
# 1: Set the generator for the val_dataset or train_dataset predictions.
predict_generator = train_dataset.generate(batch_size=batch_size,
shuffle=True,
transformations=[ssd_data_augmentation],
label_encoder=None,
returns={'processed_images',
'filenames',
'inverse_transform',
'original_images',
'original_labels'},
keep_images_without_gt=False)
# 2: Generate samples.
batch_images, batch_filenames, batch_inverse_transforms, batch_original_images, batch_original_labels = next(predict_generator)
batch_images, batch_filenames, batch_inverse_transforms, batch_original_images, batch_original_labels = next(predict_generator)
i = 1
print("Image:", batch_filenames[i])
colors = plt.cm.hsv(np.linspace(0, 1, n_classes+1)).tolist()
plt.figure(figsize=(20, 12))
plt.imshow(batch_images[0][i])
plt.show()
plt.figure(figsize=(20, 12))
plt.imshow(batch_images[1][i])
plt.show()
i = 0 # Which batch item to look at
print("Image:", batch_filenames[i])
print()
print("Ground truth boxes:\n")
print(np.array(batch_original_labels[i]))
# 3: Make predictions.
y_pred = model.predict(batch_images)[-1]
# Now let's decode the raw predictions in `y_pred`.
# Had we created the model in 'inference' or 'inference_fast' mode,
# then the model's final layer would be a `DecodeDetections` layer and
# `y_pred` would already contain the decoded predictions,
# but since we created the model in 'training' mode,
# the model outputs raw predictions that still need to be decoded and filtered.
# This is what the `decode_detections()` function is for.
# It does exactly what the `DecodeDetections` layer would do,
# but using Numpy instead of TensorFlow (i.e. on the CPU instead of the GPU).
# `decode_detections()` with default argument values follows the procedure of the original SSD implementation:
# First, a very low confidence threshold of 0.01 is applied to filter out the majority of the predicted boxes,
# then greedy non-maximum suppression is performed per class with an intersection-over-union threshold of 0.45,
# and out of what is left after that, the top 200 highest confidence boxes are returned.
# Those settings are for precision-recall scoring purposes though.
# In order to get some usable final predictions, we'll set the confidence threshold much higher, e.g. to 0.5,
# since we're only interested in the very confident predictions.
# 4: Decode the raw predictions in `y_pred`.
y_pred_decoded = decode_detections(y_pred,
confidence_thresh=0.35,
iou_threshold=0.4,
top_k=200,
normalize_coords=normalize_coords,
img_height=img_height,
img_width=img_width)
# We made the predictions on the resized images,
# but we'd like to visualize the outcome on the original input images,
# so we'll convert the coordinates accordingly.
# Don't worry about that opaque `apply_inverse_transforms()` function below,
# in this simple case it just applies `(* original_image_size / resized_image_size)` to the box coordinates.
# 5: Convert the predictions for the original image.
y_pred_decoded_inv = apply_inverse_transforms(y_pred_decoded, batch_inverse_transforms)
np.set_printoptions(precision=2, suppress=True, linewidth=90)
print("Predicted boxes:\n")
print(' class conf xmin ymin xmax ymax')
print(y_pred_decoded_inv[i])
# Finally, let's draw the predicted boxes onto the image.
# Each predicted box says its confidence next to the category name.
# The ground truth boxes are also drawn onto the image in green for comparison.
# 5: Draw the predicted boxes onto the image
# Set the colors for the bounding boxes
colors = plt.cm.hsv(np.linspace(0, 1, n_classes+1)).tolist()
plt.figure(figsize=(20, 12))
plt.imshow(batch_original_images[i])
current_axis = plt.gca()
for box in batch_original_labels[i]:
xmin = box[1]
ymin = box[2]
xmax = box[3]
ymax = box[4]
label = '{}'.format(classes[int(box[0])])
current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color='green', fill=False, linewidth=2))
current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor': 'green', 'alpha': 1.0})
# for box in y_pred_decoded_inv[i]:
# xmin = box[2]
# ymin = box[3]
# xmax = box[4]
# ymax = box[5]
# color = colors[int(box[0])]
# label = '{}: {:.2f}'.format(classes[int(box[0])], box[1])
# current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color=color, fill=False, linewidth=2))
# current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor': color, 'alpha': 1.0})
```
|
github_jupyter
|
# Developing an AI application
Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.
In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories, you can see a few examples below.
<img src='assets/Flowers.png' width=500px>
The project is broken down into multiple steps:
* Load and preprocess the image dataset
* Train the image classifier on your dataset
* Use the trained classifier to predict image content
We'll lead you through each part which you'll implement in Python.
When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But, what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new.
First up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here.
```
# Imports here
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets,transforms,models
from workspace_utils import active_session
from PIL import Image
import numpy as np
import seaborn as sb
import json
```
## Load the data
Here you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). The data should be included alongside this notebook, otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz). The dataset is split into three parts, training, validation, and testing. For the training, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize leading to better performance. You'll also need to make sure the input data is resized to 224x224 pixels as required by the pre-trained networks.
The validation and testing sets are used to measure the model's performance on data it hasn't seen yet. For this you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size.
The pre-trained networks you'll use were trained on the ImageNet dataset where each color channel was normalized separately. For all three sets you'll need to normalize the means and standard deviations of the images to what the network expects. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`, calculated from the ImageNet images. These values will shift each color channel to be centered at 0 and range from -1 to 1.
```
data_dir = 'flowers'
train_dir = data_dir + '/train'
valid_dir = data_dir + '/valid'
test_dir = data_dir + '/test'
# TODO: Define your transforms for the training, validation, and testing sets
train_data_transforms = transforms.Compose([transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
valid_data_transforms = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
test_data_transforms = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
# TODO: Load the datasets with ImageFolder
image_datasets = {"train":datasets.ImageFolder(train_dir,transform=train_data_transforms),
"valid":datasets.ImageFolder(valid_dir,transform=valid_data_transforms),
"test":datasets.ImageFolder(test_dir,transform=test_data_transforms)}
# TODO: Using the image datasets and the trainforms, define the dataloaders
dataloader = {"train":torch.utils.data.DataLoader(image_datasets["train"],batch_size=64,shuffle=True),
"valid":torch.utils.data.DataLoader(image_datasets["valid"],batch_size=64),
"test":torch.utils.data.DataLoader(image_datasets["test"],batch_size=64)}
```
### Label mapping
You'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers.
```
with open('cat_to_name.json', 'r') as f:
cat_to_name = json.load(f)
```
# Building and training the classifier
Now that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features.
We're going to leave this part up to you. Refer to [the rubric](https://review.udacity.com/#!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do:
* Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (If you need a starting point, the VGG networks work great and are straightforward to use)
* Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout
* Train the classifier layers using backpropagation using the pre-trained network to get the features
* Track the loss and accuracy on the validation set to determine the best hyperparameters
We've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal!
When training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc) to find the best model. Save those hyperparameters to use as default values in the next part of the project.
One last important tip if you're using the workspace to run your code: To avoid having your workspace disconnect during the long-running tasks in this notebook, please read in the earlier page in this lesson called Intro to
GPU Workspaces about Keeping Your Session Active. You'll want to include code from the workspace_utils.py module.
**Note for Workspace users:** If your network is over 1 GB when saved as a checkpoint, there might be issues with saving backups in your workspace. Typically this happens with wide dense layers after the convolutional layers. If your saved checkpoint is larger than 1 GB (you can open a terminal and check with `ls -lh`), you should reduce the size of your hidden layers and train again.
```
# TODO: Build and train your network
# Use GPU device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
#using Vgg pre training
model= models.vgg11(pretrained=True)
#Set disable Backdrop property for model parameters
for param in model.parameters():
param.requires_grad= False
model.classifier = nn.Sequential(nn.Linear(25088,4096),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(4096,102),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(),lr=0.001)
model.to(device)
#train the model
epochs =5
running_loss =0
train_n="train"
valid_n="valid"
counter=5
steps=0
with active_session():
for e in range(epochs):
for inputs,lables in dataloader[train_n]:
steps+=1
# move to the device
inputs, lables = inputs.to(device),lables.to(device)
#Setting Zero grad
model.zero_grad()
logps = model.forward(inputs)
loss = criterion(logps,lables)
loss.backward()
optimizer.step()
running_loss += loss.item()
if steps % counter == 0:
validset_loss =0
accuracy =0
model.eval()
with torch.no_grad():
for inputs,lables in dataloader[valid_n]:
inputs, lables = inputs.to(device),lables.to(device)
logps = model.forward(inputs)
valid_loss = criterion(logps,lables)
validset_loss += valid_loss.item()
#calculate accuracy for validation set
ps = torch.exp(logps)
top_p,top_class =ps.topk(1,dim=1)
equal = top_class == lables.view(*top_class.shape)
accuracy += torch.mean(equal.type(torch.FloatTensor)).item()
print(f"Epoch {e+1}/{epochs}.. "
f"Train loss: {running_loss/len(dataloader[train_n]):.3f}.."
f"Validation loss:{validset_loss/len(dataloader[valid_n]):.3f}.."
f"Validation Accuracy :{accuracy/len(dataloader[valid_n]):.3f}..")
model.train()
running_loss =0
```
## Testing your network
It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. Run the test images through the network and measure the accuracy, the same way you did validation. You should be able to reach around 70% accuracy on the test set if the model has been trained well.
```
# TODO: Do validation on the test set
def testModel(model_to_test):
test_loss=0
accuracy =0
test_n="test"
model_to_test.to(device)
with torch.no_grad():
model_to_test.eval()
for inputs,lables in dataloader[test_n]:
inputs,lables = inputs.to(device),lables.to(device)
logps = model_to_test.forward(inputs)
tloss=criterion(logps,lables)
test_loss += tloss.item()
#calculate accuracy for test set
ps = torch.exp(logps)
top_p,top_class = ps.topk(1,dim=1)
equals = top_class == lables.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
print(f"testloss:{test_loss/len(dataloader[test_n]):.3f}.."
#f"Accuracy :{accuracy/len(dataloader[valid_n]):.3f}.."
f"Test Accuracy:{accuracy/len(dataloader[test_n]):.3f}..")
model_to_test.train()
#test the model
testModel(model)
```
## Save the checkpoint
Now that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on.
```model.class_to_idx = image_datasets['train'].class_to_idx```
Remember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so best to save it now.
```
# TODO: Save the checkpoint
#map class to idx
checkpoint_path = "checkpoint.pth"
model.class_to_idx = image_datasets["train"].class_to_idx
checkpoint={"state_dict":model.state_dict(),
"class_to_idx":model.class_to_idx,
"architecture":"vgg11",
"hidden_units":4096,
"learning_rate":"0.001",
"optimizer":optimizer,
"epochs":epochs}
torch.save(checkpoint,checkpoint_path)
```
## Loading the checkpoint
At this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network.
```
# TODO: Write a function that loads a checkpoint and rebuilds the model
def load_checkpoint(filepath):
checkpoint = torch.load(filepath)
hidden_units = checkpoint["hidden_units"]
model = models.vgg11(pretrained=True)
for param in model.parameters():
param.requires_grad = False
model.classifier = nn.Sequential(nn.Linear(25088,4096),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(4096,102),
nn.LogSoftmax(dim=1))
model.load_state_dict(checkpoint["state_dict"])
model.class_to_idx = checkpoint["class_to_idx"]
return model
#Test the loaded model
save_model = load_checkpoint(checkpoint_path)
testModel(save_model)
```
# Inference for classification
Now you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the probabilities. It should look like
```python
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
```
First you'll need to handle processing the input image such that it can be used in your network.
## Image Preprocessing
You'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training.
First, resize the images where the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) methods. Then you'll need to crop out the center 224x224 portion of the image.
Color channels of images are typically encoded as integers 0-255, but the model expected floats 0-1. You'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so `np_image = np.array(pil_image)`.
As before, the network expects the images to be normalized in a specific way. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation.
And finally, PyTorch expects the color channel to be the first dimension but it's the third dimension in the PIL image and Numpy array. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to be first and retain the order of the other two dimensions.
```
def process_image(image_Path):
''' Scales, crops, and normalizes a PIL image for a PyTorch model,
returns an Numpy array
'''
# TODO: Process a PIL image for use in a PyTorch model
img = Image.open(image_path)
# Resize
if img.size[0] > img.size[1]:
img.thumbnail((10000, 256))
else:
img.thumbnail((256, 10000))
# Crop
left_margin = (img.width-224)/2
bottom_margin = (img.height-224)/2
right_margin = left_margin + 224
top_margin = bottom_margin + 224
img = img.crop((left_margin, bottom_margin, right_margin,
top_margin))
# Normalize
img = np.array(img)/255
mean = np.array([0.485, 0.456, 0.406]) #mean
std = np.array([0.229, 0.224, 0.225]) #std
img = (img - mean)/std
# color channels to first dimension as expected by PyTorch
img = img.transpose((2, 0, 1))
return img
```
To check your work, the function below converts a PyTorch tensor and displays it in the notebook. If your `process_image` function works, running the output through this function should return the original image (except for the cropped out portions).
```
def imshow(image, ax=None, title=None):
"""Imshow for Tensor."""
if ax is None:
fig, ax = plt.subplots()
if title:
#plt.title(title)
ax.set_title(title)
# PyTorch tensors assume the color channel is the first dimension
# but matplotlib assumes is the third dimension
image = image.transpose((1, 2, 0))
# Undo preprocessing
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
image = std * image + mean
# Image needs to be clipped between 0 and 1 or it looks like noise when displayed
image = np.clip(image, 0, 1)
ax.imshow(image)
return ax
```
## Class Prediction
Once you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values.
To get the top $K$ largest values in a tensor use [`x.topk(k)`](http://pytorch.org/docs/master/torch.html#torch.topk). This method returns both the highest `k` probabilities and the indices of those probabilities corresponding to the classes. You need to convert from these indices to the actual class labels using `class_to_idx` which hopefully you added to the model or from an `ImageFolder` you used to load the data ([see here](#Save-the-checkpoint)). Make sure to invert the dictionary so you get a mapping from index to class as well.
Again, this method should take a path to an image and a model checkpoint, then return the probabilities and classes.
```python
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
```
```
def predict(image_path, model, topk=5):
''' Predict the class (or classes) of an image using a trained deep learning model.
'''
# TODO: Implement the code to predict the class from an image file
image = torch.from_numpy(process_image(image_path)).type(torch.FloatTensor)
#Change the dimensions of an image
image = image.unsqueeze(0)
#Obtain Probabilities
output = model.forward(image)
ps = torch.exp(output)
top_p,top_class = ps.topk(topk,dim=1)
top_p = top_p.detach().numpy().tolist()[0]
top_class = top_class.detach().numpy().tolist()[0]
#get indices to class
idx_to_Classes = {val:key for key,val in model.class_to_idx.items()}
top_class = [idx_to_Classes[idx] for idx in iter(top_class)]
return top_p,top_class
```
## Sanity Checking
Now that you can use a trained model for predictions, check to make sure it makes sense. Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. It should look like this:
<img src='assets/inference_example.png' width=300px>
You can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above.
```
# TODO: Display an image along with the top 5 classes
image_path = test_dir+"/1/image_06743.jpg"
flower_num = image_path.split('/')[-2]
title = cat_to_name[flower_num]
model = load_checkpoint(checkpoint_path)
top_p,top_class = predict(image_path,model)
top_flowers = [cat_to_name[cl] for cl in top_class]
plt.figure(figsize = (6,10))
ax = plt.subplot(2,1,1)
#plot the image
image = process_image(image_path)
imshow(image,ax,title=title)
plt.subplot(2,1,2)
sb.barplot(x=top_p, y=top_flowers, color=sb.color_palette()[0]);
plt.show()
```
|
github_jupyter
|
# 初始化
```
#@markdown - **挂载**
from google.colab import drive
drive.mount('GoogleDrive')
# #@markdown - **卸载**
# !fusermount -u GoogleDrive
```
# 代码区
```
#@title K-近邻算法 { display-mode: "both" }
# 该程序实现 k-NN 对三维随机数据的分类
#@markdown [参考程序](https://github.com/wzyonggege/statistical-learning-method/blob/master/KNearestNeighbors/KNN.ipynb)
# coding: utf-8
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
#@markdown - **绑定数据**
class Bunch(dict):
def __init__(self,*args,**kwds):
super(Bunch,self).__init__(*args,**kwds)
self.__dict__ = self
#@markdown - **生成带标签随机数据函数**
def generate_random(sigma, N, mu1=[25., 25., 20], mu2=[30., 40., 30]):
c = sigma.shape[-1] # 生成N行c维的随机测试数据
X = np.zeros((N, c)) # 初始化X,N个样本
target = np.zeros((N,1))
for i in range(N):
if np.random.random(1) < 0.5: # 生成0-1之间随机数
X[i, :] = np.random.multivariate_normal(mu1, sigma[0, :, :], 1) # 用第一个高斯模型生成3维数据
target[i] = 1
else:
X[i, :] = np.random.multivariate_normal(mu2, sigma[1, :, :], 1) # 用第二个高斯模型生成3维数据
target[i] = -1
return X, target
#@markdown - **KNN 类**
class KNN:
def __init__(self, X_train, y_train, n_neighbors=3, p=2):
"""
parameter: n_neighbors 临近点个数, 最好选奇数
parameter: p 距离度量
"""
if n_neighbors % 2 == 0:
print('n_neighbors 最好为奇数!')
self.n = n_neighbors
self.p = p
self.X_train = X_train
self.y_train = y_train.flatten()
def predict(self, X):
# 取出n个点
knn_list = []
for i in range(self.n):
dist = np.linalg.norm(X - self.X_train[i], ord=self.p)
knn_list.append((dist, self.y_train[i]))
# 遍历得到距离最近的 n 个点
for i in range(self.n, len(self.X_train)):
max_index = knn_list.index(max(knn_list, key=lambda x: x[0]))
dist = np.linalg.norm(X - self.X_train[i], ord=self.p)
if knn_list[max_index][0] > dist:
knn_list[max_index] = (dist, self.y_train[i])
# 预测类别
knn = np.array([k[-1] for k in knn_list])
return np.sign(knn.sum()) if knn.sum() != 0 else 1
def score(self, X_test, y_test):
y_test = y_test.flatten()
right_count = 0
for X, y in zip(X_test, y_test):
label = self.predict(X)
if label == y:
right_count += 1
return right_count / X_test.shape[0]
#@markdown - **生成带标签的随机数据**
k, N = 2, 400
# 初始化方差,生成样本与标签
sigma = np.zeros((k, 3, 3))
for i in range(k):
sigma[i, :, :] = np.diag(np.random.randint(10, 25, size=(3, )))
sample, target = generate_random(sigma, N)
feature_names = ['x_label', 'y_label', 'z_label'] # 特征数
target_names = ['gaussian1', 'gaussian2', 'gaussian3', 'gaussian4'] # 类别
data = Bunch(sample=sample, feature_names=feature_names, target=target, target_names=target_names)
sample_t, target_t = generate_random(sigma, N)
data_t = Bunch(sample=sample_t, target=target_t)
#@markdown - **模型训练**
model = KNN(data.sample, target, n_neighbors=4, p=2)
model.predict(data.sample[100])
target.flatten()[100]
#@markdown - **测试集精度**
acc = model.score(data_t.sample, data_t.target) * 100
print('Accuracy on testing set: {:.2f}%.'.format(acc))
tar_test = np.array([model.predict(x) for x in data_t.sample], dtype=np.int8) + 1
#@markdown - **显示 KNN 对测试数据的分类情况**
titles = ['Random training data', 'Classified testing data by KNN']
TAR = [target, tar_test]
DATA = [data.sample, data_t.sample]
fig = plt.figure(1, figsize=(16, 8))
fig.subplots_adjust(wspace=.01, hspace=.02)
for i, title, data_n, tar in zip([1, 2], titles, DATA, TAR):
ax = fig.add_subplot(1, 2, i, projection='3d')
if title == 'Random training data':
ax.scatter(data_n[:,0], data_n[:,1], data_n[:,2], c='b', s=35, alpha=0.4, marker='o')
else:
color=['b','g', 'r']
for j in range(N):
ax.scatter(data_n[j, 0], data_n[j, 1], data_n[j, 2], c=color[tar[j]], s=35, alpha=0.4, marker='P')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
ax.view_init(elev=20., azim=-25)
ax.set_title(title, fontsize=14, y=0.01)
plt.show()
```
|
github_jupyter
|
# Implementing an LSTM RNN Model
------------------------
Here we implement an LSTM model on all a data set of Shakespeare works.
We start by loading the necessary libraries and resetting the default computational graph.
```
import os
import re
import string
import requests
import numpy as np
import collections
import random
import pickle
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
ops.reset_default_graph()
```
We start a computational graph session.
```
sess = tf.Session()
```
Next, it is important to set the algorithm and data processing parameters.
---------
Parameter : Descriptions
- min_word_freq: Only attempt to model words that appear at least 5 times.
- rnn_size: size of our RNN (equal to the embedding size)
- epochs: Number of epochs to cycle through the data
- batch_size: How many examples to train on at once
- learning_rate: The learning rate or the convergence paramter
- training_seq_len: The length of the surrounding word group (e.g. 10 = 5 on each side)
- embedding_size: Must be equal to the rnn_size
- save_every: How often to save the model
- eval_every: How often to evaluate the model
- prime_texts: List of test sentences
```
# Set RNN Parameters
min_word_freq = 5 # Trim the less frequent words off
rnn_size = 128 # RNN Model size
embedding_size = 100 # Word embedding size
epochs = 10 # Number of epochs to cycle through data
batch_size = 100 # Train on this many examples at once
learning_rate = 0.001 # Learning rate
training_seq_len = 50 # how long of a word group to consider
embedding_size = rnn_size
save_every = 500 # How often to save model checkpoints
eval_every = 50 # How often to evaluate the test sentences
prime_texts = ['thou art more', 'to be or not to', 'wherefore art thou']
# Download/store Shakespeare data
data_dir = 'temp'
data_file = 'shakespeare.txt'
model_path = 'shakespeare_model'
full_model_dir = os.path.join(data_dir, model_path)
# Declare punctuation to remove, everything except hyphens and apostrophes
punctuation = string.punctuation
punctuation = ''.join([x for x in punctuation if x not in ['-', "'"]])
# Make Model Directory
if not os.path.exists(full_model_dir):
os.makedirs(full_model_dir)
# Make data directory
if not os.path.exists(data_dir):
os.makedirs(data_dir)
```
Download the data if we don't have it saved already. The data comes from the [Gutenberg Project](http://www.gutenberg.org])
```
print('Loading Shakespeare Data')
# Check if file is downloaded.
if not os.path.isfile(os.path.join(data_dir, data_file)):
print('Not found, downloading Shakespeare texts from www.gutenberg.org')
shakespeare_url = 'http://www.gutenberg.org/cache/epub/100/pg100.txt'
# Get Shakespeare text
response = requests.get(shakespeare_url)
shakespeare_file = response.content
# Decode binary into string
s_text = shakespeare_file.decode('utf-8')
# Drop first few descriptive paragraphs.
s_text = s_text[7675:]
# Remove newlines
s_text = s_text.replace('\r\n', '')
s_text = s_text.replace('\n', '')
# Write to file
with open(os.path.join(data_dir, data_file), 'w') as out_conn:
out_conn.write(s_text)
else:
# If file has been saved, load from that file
with open(os.path.join(data_dir, data_file), 'r') as file_conn:
s_text = file_conn.read().replace('\n', '')
# Clean text
print('Cleaning Text')
s_text = re.sub(r'[{}]'.format(punctuation), ' ', s_text)
s_text = re.sub('\s+', ' ', s_text ).strip().lower()
print('Done loading/cleaning.')
```
Define a function to build a word processing dictionary (word -> ix)
```
# Build word vocabulary function
def build_vocab(text, min_word_freq):
word_counts = collections.Counter(text.split(' '))
# limit word counts to those more frequent than cutoff
word_counts = {key:val for key, val in word_counts.items() if val>min_word_freq}
# Create vocab --> index mapping
words = word_counts.keys()
vocab_to_ix_dict = {key:(ix+1) for ix, key in enumerate(words)}
# Add unknown key --> 0 index
vocab_to_ix_dict['unknown']=0
# Create index --> vocab mapping
ix_to_vocab_dict = {val:key for key,val in vocab_to_ix_dict.items()}
return(ix_to_vocab_dict, vocab_to_ix_dict)
```
Now we can build the index-vocabulary from the Shakespeare data.
```
# Build Shakespeare vocabulary
print('Building Shakespeare Vocab')
ix2vocab, vocab2ix = build_vocab(s_text, min_word_freq)
vocab_size = len(ix2vocab) + 1
print('Vocabulary Length = {}'.format(vocab_size))
# Sanity Check
assert(len(ix2vocab) == len(vocab2ix))
# Convert text to word vectors
s_text_words = s_text.split(' ')
s_text_ix = []
for ix, x in enumerate(s_text_words):
try:
s_text_ix.append(vocab2ix[x])
except:
s_text_ix.append(0)
s_text_ix = np.array(s_text_ix)
```
We define the LSTM model. The methods of interest are the `__init__()` method, which defines all the model variables and operations, and the `sample()` method which takes in a sample word and loops through to generate text.
```
# Define LSTM RNN Model
class LSTM_Model():
def __init__(self, embedding_size, rnn_size, batch_size, learning_rate,
training_seq_len, vocab_size, infer_sample=False):
self.embedding_size = embedding_size
self.rnn_size = rnn_size
self.vocab_size = vocab_size
self.infer_sample = infer_sample
self.learning_rate = learning_rate
if infer_sample:
self.batch_size = 1
self.training_seq_len = 1
else:
self.batch_size = batch_size
self.training_seq_len = training_seq_len
self.lstm_cell = tf.contrib.rnn.BasicLSTMCell(self.rnn_size)
self.initial_state = self.lstm_cell.zero_state(self.batch_size, tf.float32)
self.x_data = tf.placeholder(tf.int32, [self.batch_size, self.training_seq_len])
self.y_output = tf.placeholder(tf.int32, [self.batch_size, self.training_seq_len])
with tf.variable_scope('lstm_vars'):
# Softmax Output Weights
W = tf.get_variable('W', [self.rnn_size, self.vocab_size], tf.float32, tf.random_normal_initializer())
b = tf.get_variable('b', [self.vocab_size], tf.float32, tf.constant_initializer(0.0))
# Define Embedding
embedding_mat = tf.get_variable('embedding_mat', [self.vocab_size, self.embedding_size],
tf.float32, tf.random_normal_initializer())
embedding_output = tf.nn.embedding_lookup(embedding_mat, self.x_data)
rnn_inputs = tf.split(axis=1, num_or_size_splits=self.training_seq_len, value=embedding_output)
rnn_inputs_trimmed = [tf.squeeze(x, [1]) for x in rnn_inputs]
# If we are inferring (generating text), we add a 'loop' function
# Define how to get the i+1 th input from the i th output
def inferred_loop(prev, count):
# Apply hidden layer
prev_transformed = tf.matmul(prev, W) + b
# Get the index of the output (also don't run the gradient)
prev_symbol = tf.stop_gradient(tf.argmax(prev_transformed, 1))
# Get embedded vector
output = tf.nn.embedding_lookup(embedding_mat, prev_symbol)
return(output)
decoder = tf.contrib.legacy_seq2seq.rnn_decoder
outputs, last_state = decoder(rnn_inputs_trimmed,
self.initial_state,
self.lstm_cell,
loop_function=inferred_loop if infer_sample else None)
# Non inferred outputs
output = tf.reshape(tf.concat(axis=1, values=outputs), [-1, self.rnn_size])
# Logits and output
self.logit_output = tf.matmul(output, W) + b
self.model_output = tf.nn.softmax(self.logit_output)
loss_fun = tf.contrib.legacy_seq2seq.sequence_loss_by_example
loss = loss_fun([self.logit_output],[tf.reshape(self.y_output, [-1])],
[tf.ones([self.batch_size * self.training_seq_len])],
self.vocab_size)
self.cost = tf.reduce_sum(loss) / (self.batch_size * self.training_seq_len)
self.final_state = last_state
gradients, _ = tf.clip_by_global_norm(tf.gradients(self.cost, tf.trainable_variables()), 4.5)
optimizer = tf.train.AdamOptimizer(self.learning_rate)
self.train_op = optimizer.apply_gradients(zip(gradients, tf.trainable_variables()))
def sample(self, sess, words=ix2vocab, vocab=vocab2ix, num=10, prime_text='thou art'):
state = sess.run(self.lstm_cell.zero_state(1, tf.float32))
word_list = prime_text.split()
for word in word_list[:-1]:
x = np.zeros((1, 1))
x[0, 0] = vocab[word]
feed_dict = {self.x_data: x, self.initial_state:state}
[state] = sess.run([self.final_state], feed_dict=feed_dict)
out_sentence = prime_text
word = word_list[-1]
for n in range(num):
x = np.zeros((1, 1))
x[0, 0] = vocab[word]
feed_dict = {self.x_data: x, self.initial_state:state}
[model_output, state] = sess.run([self.model_output, self.final_state], feed_dict=feed_dict)
sample = np.argmax(model_output[0])
if sample == 0:
break
word = words[sample]
out_sentence = out_sentence + ' ' + word
return(out_sentence)
```
In order to use the same model (with the same trained variables), we need to share the variable scope between the trained model and the test model.
```
# Define LSTM Model
lstm_model = LSTM_Model(embedding_size, rnn_size, batch_size, learning_rate,
training_seq_len, vocab_size)
# Tell TensorFlow we are reusing the scope for the testing
with tf.variable_scope(tf.get_variable_scope(), reuse=True):
test_lstm_model = LSTM_Model(embedding_size, rnn_size, batch_size, learning_rate,
training_seq_len, vocab_size, infer_sample=True)
```
We need to save the model, so we create a model saving operation.
```
# Create model saver
saver = tf.train.Saver(tf.global_variables())
```
Let's calculate how many batches are needed for each epoch and split up the data accordingly.
```
# Create batches for each epoch
num_batches = int(len(s_text_ix)/(batch_size * training_seq_len)) + 1
# Split up text indices into subarrays, of equal size
batches = np.array_split(s_text_ix, num_batches)
# Reshape each split into [batch_size, training_seq_len]
batches = [np.resize(x, [batch_size, training_seq_len]) for x in batches]
```
Initialize all the variables
```
# Initialize all variables
init = tf.global_variables_initializer()
sess.run(init)
```
Training the model!
```
# Train model
train_loss = []
iteration_count = 1
for epoch in range(epochs):
# Shuffle word indices
random.shuffle(batches)
# Create targets from shuffled batches
targets = [np.roll(x, -1, axis=1) for x in batches]
# Run a through one epoch
print('Starting Epoch #{} of {}.'.format(epoch+1, epochs))
# Reset initial LSTM state every epoch
state = sess.run(lstm_model.initial_state)
for ix, batch in enumerate(batches):
training_dict = {lstm_model.x_data: batch, lstm_model.y_output: targets[ix]}
c, h = lstm_model.initial_state
training_dict[c] = state.c
training_dict[h] = state.h
temp_loss, state, _ = sess.run([lstm_model.cost, lstm_model.final_state, lstm_model.train_op],
feed_dict=training_dict)
train_loss.append(temp_loss)
# Print status every 10 gens
if iteration_count % 10 == 0:
summary_nums = (iteration_count, epoch+1, ix+1, num_batches+1, temp_loss)
print('Iteration: {}, Epoch: {}, Batch: {} out of {}, Loss: {:.2f}'.format(*summary_nums))
# Save the model and the vocab
if iteration_count % save_every == 0:
# Save model
model_file_name = os.path.join(full_model_dir, 'model')
saver.save(sess, model_file_name, global_step = iteration_count)
print('Model Saved To: {}'.format(model_file_name))
# Save vocabulary
dictionary_file = os.path.join(full_model_dir, 'vocab.pkl')
with open(dictionary_file, 'wb') as dict_file_conn:
pickle.dump([vocab2ix, ix2vocab], dict_file_conn)
if iteration_count % eval_every == 0:
for sample in prime_texts:
print(test_lstm_model.sample(sess, ix2vocab, vocab2ix, num=10, prime_text=sample))
iteration_count += 1
```
Here is a plot of the training loss across the iterations.
```
# Plot loss over time
plt.plot(train_loss, 'k-')
plt.title('Sequence to Sequence Loss')
plt.xlabel('Iterations')
plt.ylabel('Loss')
plt.show()
```
|
github_jupyter
|
咱们的基金是否存在着明显的周内效应呢?就是特定周几盈利高一些,让我们来验证一下吧。
```
import pandas as pd
from datetime import datetime
import trdb2py
isStaticImg = False
width = 960
height = 768
pd.options.display.max_columns = None
pd.options.display.max_rows = None
trdb2cfg = trdb2py.loadConfig('./trdb2.yaml')
```
我们先指定一个特定的基金,特定的时间段来分析吧。
```
# 具体基金
# asset = 'jrj.510310'
asset = 'jqdata.000300_XSHG|1d'
# 起始时间,0表示从最开始算起
# tsStart = 0
tsStart = int(trdb2py.str2timestamp('2013-05-01', '%Y-%m-%d'))
# 结束时间,-1表示到现在为止
# tsEnd = -1
tsEnd = int(trdb2py.str2timestamp('2020-09-30', '%Y-%m-%d'))
# 初始资金池
paramsinit = trdb2py.trading2_pb2.InitParams(
money=10000,
)
# 买入参数,用全部的钱来买入(也就是复利)
paramsbuy = trdb2py.trading2_pb2.BuyParams(
perHandMoney=1,
)
# 卖出参数,全部卖出
paramssell = trdb2py.trading2_pb2.SellParams(
perVolume=1,
)
lststart = [1, 2, 3, 4, 5]
lsttitle = ['周一', '周二', '周三', '周四', '周五']
```
首先看看这个基金的基准表现,就是在开始时间就直接买入,然后一直持有,看具体的收益率。
```
# baseline
s0 = trdb2py.trading2_pb2.Strategy(
name="normal",
asset=trdb2py.str2asset(asset),
)
buy0 = trdb2py.trading2_pb2.CtrlCondition(
name='buyandhold',
)
paramsbuy = trdb2py.trading2_pb2.BuyParams(
perHandMoney=1,
)
paramsinit = trdb2py.trading2_pb2.InitParams(
money=10000,
)
s0.buy.extend([buy0])
s0.paramsBuy.CopyFrom(paramsbuy)
s0.paramsInit.CopyFrom(paramsinit)
p0 = trdb2py.trading2_pb2.SimTradingParams(
assets=[trdb2py.str2asset(asset)],
startTs=tsStart,
endTs=tsEnd,
strategies=[s0],
title='baseline',
)
pnlBaseline = trdb2py.simTrading(trdb2cfg, p0)
trdb2py.showPNL(pnlBaseline, toImg=isStaticImg, width=width, height=height)
```
那么策略基准线大概就是这样了,7年多的时间2.2倍。
接下来,先看看最简单的情况,就是特定周几买入,第二天就卖出,只持有1天。
```
lstparams = []
for i in range(0, 5):
buy0 = trdb2py.trading2_pb2.CtrlCondition(
name='weekday',
vals=[lststart[i]],
)
sell0 = trdb2py.trading2_pb2.CtrlCondition(
name='weekday',
vals=[trdb2py.nextWeekDay(lststart[i], 1)],
)
s0 = trdb2py.trading2_pb2.Strategy(
name="normal",
asset=trdb2py.str2asset(asset),
)
s0.buy.extend([buy0])
s0.sell.extend([sell0])
s0.paramsBuy.CopyFrom(paramsbuy)
s0.paramsSell.CopyFrom(paramssell)
s0.paramsInit.CopyFrom(paramsinit)
lstparams.append(trdb2py.trading2_pb2.SimTradingParams(
assets=[trdb2py.str2asset(asset)],
startTs=tsStart,
endTs=tsEnd,
strategies=[s0],
title='{}买入持有{}天'.format(lsttitle[i], 1),
))
lstpnl1 = trdb2py.simTradings(trdb2cfg, lstparams)
trdb2py.showPNLs(lstpnl1 + [pnlBaseline], toImg=isStaticImg, width=width, height=height)
```
如果看曲线图不是很清楚的话,我们列表看看
```
dfpnl1b = trdb2py.buildPNLReport(lstpnl1 + [pnlBaseline])
dfpnl1b[['title', 'maxDrawdown', 'maxDrawdownStart', 'maxDrawdownEnd', 'totalReturns', 'sharpe', 'annualizedReturns', 'annualizedVolatility', 'variance']].sort_values(by='totalReturns', ascending=False)
```
我看可以看到,虽然盈利都不如基线,但由于波动率降低了。
特别是周一和周五,最大回撤都降低了50%以上,所以夏普都要高于基线。
上面的策略比较简单,因为有些节假日交易日,所以可能会出现买入了,但无法及时卖出的情况,下面我们换一个确定能卖出才买入的策略看看
```
def calcweekday2val2(wday, offday):
if offday == 1:
if wday == 5:
return 3
if offday == 2:
if wday >= 4:
return 4
if offday == 3:
if wday >= 3:
return 5
if offday == 4:
if wday >= 2:
return 6
return offday
lstparams = []
for i in range(0, 5):
buy0 = trdb2py.trading2_pb2.CtrlCondition(
name='weekday2',
vals=[lststart[i], calcweekday2val2(i + 1, 1)],
)
sell0 = trdb2py.trading2_pb2.CtrlCondition(
name='weekday',
vals=[trdb2py.nextWeekDay(lststart[i], 1)],
)
s0 = trdb2py.trading2_pb2.Strategy(
name="normal",
asset=trdb2py.str2asset(asset),
)
s0.buy.extend([buy0])
s0.sell.extend([sell0])
s0.paramsBuy.CopyFrom(paramsbuy)
s0.paramsSell.CopyFrom(paramssell)
s0.paramsInit.CopyFrom(paramsinit)
lstparams.append(trdb2py.trading2_pb2.SimTradingParams(
assets=[trdb2py.str2asset(asset)],
startTs=tsStart,
endTs=tsEnd,
strategies=[s0],
title='{}买入持有{}天v2'.format(lsttitle[i], 1),
))
lstpnl1t = trdb2py.simTradings(trdb2cfg, lstparams)
trdb2py.showPNLs(lstpnl1t + [pnlBaseline], toImg=isStaticImg, width=width, height=height)
dfpnl1b = trdb2py.buildPNLReport(lstpnl1 + lstpnl1t + [pnlBaseline])
dfpnl1b[['title', 'maxDrawdown', 'maxDrawdownStart', 'maxDrawdownEnd', 'totalReturns', 'sharpe', 'annualizedReturns', 'annualizedVolatility', 'variance']].sort_values(by='totalReturns', ascending=False)
```
可以看到,虽然有一些差别,但其实影响有好有坏,下面我们还是按v2版的策略来吧
接下来看看胜率
```
# trdb2py.showBarWinRate4Month(lstpnl1t, valtype='abs', valoff=-0.5, toImg=isStaticImg, width=width, height=height)
# trdb2py.showBarWinRate4Month(lstpnl1t, toImg=isStaticImg, width=width, height=height)
trdb2py.showBarWinRateInYears(lstpnl1t, toImg=isStaticImg, width=width, height=height)
```
对于这种随机策略,胜率应该均匀分布在0.5左右的,可以分别看看每一条数据,会发现,周五和周一还是要稍高于0.5的,周二周三稍低于0.5。
当然,这种策略交易比较频繁,次数足够,按月分布来看,没有明显的趋势,所以我们可以直接看整体汇总胜率。
```
wrmt = trdb2py.buildPNLWinRateInMonths(lstpnl1t)
wrmt[0][['title', 'total']]
```
这里,我们对比一下纳指、标普、恒指、上证50、中证500、中小盘,可以得出以下结论:
1. 和走势曲线关系不大,同样大趋势向上的纳指和中小盘,胜率也是能看出差异来的。
2. 除了有明显高于0.5的以外,最好也能有明显低于0.5的,这样避开低于0.5的,持有高于0.5的,才能获得额外收益,最终明显超过基线。
3. 纳指、标普、恒指 都没有明显的周内效应。
4. A股普遍存在较明显的周内效应。
5. 越是小盘股,周内效应越明显。
```
trdb2py.showBarWinRate4Month(lstpnl1t, toImg=isStaticImg, width=width, height=height)
```
接下来,我们统计一下不同月份的胜率,发现,胜率最突出的还是周五买入持有1天,但这个在4月和9月胜率偏低,特别是4月。
接下来看看4月的统计。
```
#trdb2py.showBarWinRateInMonths(lstpnl1t, valtype='abs', valoff=-0.5, month=8, toImg=isStaticImg, width=width, height=height)
trdb2py.showBarWinRateInMonths(lstpnl1t, month=4, toImg=isStaticImg, width=width, height=height)
```
发现除了2015年和2020年以外,4月确实都是输的。
但是,2015年和2020年4月其实都算是牛市了,不能这样简单的筛选。
目前看来,没有特别的证据表明有明显的可供配合的月度效应。
如果能把周五、周一的增长叠加起来,是不是结果会更好一些呢
干脆把持有1天到4天的情况,也就是从周一买周二卖,一直到 周一买周五卖,看看
```
lstparams = []
for day in range(1, 5):
for i in range(0, 5):
buy0 = trdb2py.trading2_pb2.CtrlCondition(
name='weekday2',
vals=[lststart[i], calcweekday2val2(i + 1, day)],
)
sell0 = trdb2py.trading2_pb2.CtrlCondition(
name='weekday',
vals=[trdb2py.nextWeekDay(lststart[i], day)],
)
s0 = trdb2py.trading2_pb2.Strategy(
name="normal",
asset=trdb2py.str2asset(asset),
)
s0.buy.extend([buy0])
s0.sell.extend([sell0])
s0.paramsBuy.CopyFrom(paramsbuy)
s0.paramsSell.CopyFrom(paramssell)
s0.paramsInit.CopyFrom(paramsinit)
lstparams.append(trdb2py.trading2_pb2.SimTradingParams(
assets=[trdb2py.str2asset(asset)],
startTs=tsStart,
endTs=tsEnd,
strategies=[s0],
title='{}买入持有{}天v2'.format(lsttitle[i], day),
))
lstpnl = trdb2py.simTradings(trdb2cfg, lstparams)
trdb2py.showPNLs(lstpnl + [pnlBaseline], toImg=isStaticImg, width=width, height=height)
dfpnl = trdb2py.buildPNLReport(lstpnl + [pnlBaseline])
dfpnl[['title', 'maxDrawdown', 'maxDrawdownStart', 'maxDrawdownEnd', 'totalReturns', 'sharpe', 'annualizedReturns', 'annualizedVolatility', 'variance']].sort_values(by='totalReturns', ascending=False)
```
单看回报率,已经有明显超过基线的了,而且,最好的情况,夏普提升了近3倍
接下来,还能想到的,如果在下降的时候,定投买入,最终会不会也能获得额外收益呢?
还是周一到周五一起测一下吧。
```
lstparams = []
for i in range(0, 5):
buy0 = trdb2py.trading2_pb2.CtrlCondition(
name='weekday',
vals=[lststart[i]],
)
s0 = trdb2py.trading2_pb2.Strategy(
name="normal",
asset=trdb2py.str2asset(asset),
)
paramsaip = trdb2py.trading2_pb2.AIPParams(
money=10000,
type=trdb2py.trading2_pb2.AIPTT_MONTHDAY,
day=lststart[i],
)
s0.buy.extend([buy0])
s0.paramsBuy.CopyFrom(paramsbuy)
s0.paramsSell.CopyFrom(paramssell)
s0.paramsInit.CopyFrom(paramsinit)
s0.paramsAIP.CopyFrom(paramsaip)
lstparams.append(trdb2py.trading2_pb2.SimTradingParams(
assets=[trdb2py.str2asset(asset)],
startTs=tsStart,
endTs=tsEnd,
strategies=[s0],
title='{}定投'.format(lsttitle[i], day),
))
lstaippnl = trdb2py.simTradings(trdb2cfg, lstparams)
trdb2py.showPNLs(lstaippnl + [pnlBaseline], toImg=isStaticImg, width=width, height=height)
```
我们可以看到5条收益曲线几乎重合在一起,说明,特定周几购买,其实并没有差别。
关于定投,后面会有更详细的测试,这里还是回到如何榨取周内效应的最大收益上。
我们还可以考虑加入均线策略,看看在均线上下方买入,是否有区别,其实如果均线粒度够大,一定程度上能将牛市和熊市区分开。
这个运算量有点大,大概有24000种,我直接给出结论吧。
```
lstparams = []
for ema in range(5, 61):
for sdo in range(1, 5):
for sd in range(1, 6):
buy0 = trdb2py.trading2_pb2.CtrlCondition(
name='weekday2',
vals=[sd, calcweekday2val2(sd, sdo)],
)
buy1 = trdb2py.trading2_pb2.CtrlCondition(
name='indicatorsp',
operators=['up'],
strVals=['ema.{}'.format(ema)],
)
sell0 = trdb2py.trading2_pb2.CtrlCondition(
name='weekday',
vals=[trdb2py.nextWeekDay(sd, sdo)],
)
sell1 = trdb2py.trading2_pb2.CtrlCondition(
name='ctrlconditionid',
vals=[1],
strVals=['buy'],
)
for edo in range(1, 5):
for ed in range(1, 6):
buy2 = trdb2py.trading2_pb2.CtrlCondition(
name='weekday2',
vals=[ed, calcweekday2val2(ed, edo)],
group=1,
)
buy3 = trdb2py.trading2_pb2.CtrlCondition(
name='indicatorsp',
operators=['down'],
strVals=['ema.{}'.format(ema)],
group=1,
)
sell2 = trdb2py.trading2_pb2.CtrlCondition(
name='weekday',
vals=[trdb2py.nextWeekDay(ed, edo)],
group=1,
)
sell3 = trdb2py.trading2_pb2.CtrlCondition(
name='ctrlconditionid',
vals=[2],
strVals=['buy'],
group=1,
)
s0 = trdb2py.trading2_pb2.Strategy(
name="normal",
asset=trdb2py.str2asset(asset),
)
s0.buy.extend([buy0, buy1, buy2, buy3])
s0.sell.extend([sell0, sell1, sell2, sell3])
s0.paramsBuy.CopyFrom(paramsbuy)
s0.paramsSell.CopyFrom(paramssell)
s0.paramsInit.CopyFrom(paramsinit)
lstparams.append(trdb2py.trading2_pb2.SimTradingParams(
assets=[trdb2py.str2asset(asset)],
startTs=tsStart,
endTs=tsEnd,
strategies=[s0],
title='ema{}up{}持有{}天down{}持有{}天'.format(ema, lsttitle[sd-1], sdo, lsttitle[ed-1], edo),
))
lstpnlmix = trdb2py.simTradings(trdb2cfg, lstparams, ignoreTotalReturn=4)
# trdb2py.showPNLs(lstpnlmix + [pnlBaseline], toImg=isStaticImg, width=width, height=height)
dfpnl = trdb2py.buildPNLReport(lstpnlmix + [pnlBaseline])
dfpnl1 = dfpnl[dfpnl['totalReturns'] >= 1]
dfpnl1[['title', 'maxDrawdown', 'maxDrawdownStart', 'maxDrawdownEnd', 'totalReturns', 'sharpe', 'annualizedReturns', 'annualizedVolatility', 'variance']].sort_values(by='totalReturns', ascending=False)
asset = 'jqdata.000300_XSHG|1d'
# asset = 'jqdata.000905_XSHG|1d'
# asset = 'jqdata.000932_XSHG|1d'
s0 = trdb2py.trading2_pb2.Strategy(
name="normal",
asset=trdb2py.str2asset(asset),
)
buy0 = trdb2py.trading2_pb2.CtrlCondition(
name='weekday2',
vals=[4, calcweekday2val2(4, 4)],
)
buy1 = trdb2py.trading2_pb2.CtrlCondition(
name='indicatorsp',
operators=['up'],
strVals=['ema.{}'.format(29)],
)
buy2 = trdb2py.trading2_pb2.CtrlCondition(
name='weekday2',
vals=[1, calcweekday2val2(1, 4)],
group=1,
)
buy3 = trdb2py.trading2_pb2.CtrlCondition(
name='indicatorsp',
operators=['down'],
strVals=['ema.29'],
group=1,
)
sell0 = trdb2py.trading2_pb2.CtrlCondition(
name='weekday',
vals=[3],
)
sell1 = trdb2py.trading2_pb2.CtrlCondition(
name='ctrlconditionid',
vals=[1],
strVals=['buy'],
)
sell2 = trdb2py.trading2_pb2.CtrlCondition(
name='weekday',
vals=[5],
group=1,
)
sell3 = trdb2py.trading2_pb2.CtrlCondition(
name='ctrlconditionid',
vals=[2],
strVals=['buy'],
group=1,
)
paramsbuy = trdb2py.trading2_pb2.BuyParams(
perHandMoney=1,
)
paramsinit = trdb2py.trading2_pb2.InitParams(
money=10000,
)
s0.buy.extend([buy0, buy1, buy2, buy3])
s0.sell.extend([sell0, sell1, sell2, sell3])
s0.paramsBuy.CopyFrom(paramsbuy)
s0.paramsSell.CopyFrom(paramssell)
s0.paramsInit.CopyFrom(paramsinit)
p0 = trdb2py.trading2_pb2.SimTradingParams(
assets=[trdb2py.str2asset(asset)],
startTs=tsStart,
endTs=tsEnd,
strategies=[s0],
title='混合策略',
)
pnlm = trdb2py.simTrading(trdb2cfg, p0)
trdb2py.showPNLs([pnlm, pnlBaseline], toImg=isStaticImg, width=width, height=height)
```
|
github_jupyter
|
```
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from application_logging.logger import AppLog
from utils.common import read_config
from utils.common import FileOperations
from Data_Preprocessing.preprocessing import Preprocessor
from Predict_Model.predictFromModel import prediction
# Set the display option None, so all the dataframe column would display.
pd.set_option('display.max_columns', None)
# Initialize logger object to capture the log in log file
execType = 'Prediction'
predictionlog = AppLog(execType)
log = predictionlog.log("sLogger")
# Read parameter from the config.yaml
config = read_config('config.yaml')
prediction_file_path = config["prediction"]["prediction_file_path"]
# Read training dataset
objPredictionFile = FileOperations(execType)
_, Raw_Prediction_Data = objPredictionFile.ReadCSVData(filepath=prediction_file_path)
Raw_Prediction_Data
# Check the data type for dataframe columns
if (Raw_Prediction_Data.shape[1] == Raw_Prediction_Data.select_dtypes(include=np.number).shape[1]):
print("All the column from dataframe is numeric data type, so no need to process for data type conversion")
log.info("All the column from dataframe is numeric data type, so no need to process for data type conversion")
```
### EDA on training dataset
```
#### check if dataframe has null value, then call KNN Imputer to Impute the appropriate value
#### (Check the nearest 3 value to impute)
objPrediction = Preprocessor(execType)
if Raw_Prediction_Data.isnull().values.any():
log.info("Initiate to Impute null value for the null value columns")
Raw_Prediction_Data = objPrediction.impute_missing_values(Raw_Training_Data)
log.info("Null value imputation done successfully")
else:
print ("Dataset doesn't contain null value in any of the columns")
log.info("Dataset doesn't contain null value in any of the columns")
# Describe the dataset
Raw_Prediction_Data.describe()
# Get the independent feature columns
dataColumns=['X'+str(i) for i in range(1,58)]
len(dataColumns)
```
### Predict the Model
```
objTraining = prediction(execType)
# X33 data has multicolinearity during preprocessing on traing data, so need to remove from prediction as well
status = objTraining.predictionFromModel(Raw_Prediction_Data[dataColumns],dataColumns, ['X33'])
if status == 1:
print ("Successful End of Training")
else:
print ("Unsuccessful End of Training")
```
#### Prediction file is available in "Prediction_Output_File/Predictions.csv" path location.
|
github_jupyter
|
<div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="https://cocl.us/PY0101EN_edx_add_top">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png" width="750" align="center">
</a>
</div>
<a href="https://cognitiveclass.ai/">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center">
</a>
<h1>Sets in Python</h1>
<p><strong>Welcome!</strong> This notebook will teach you about the sets in the Python Programming Language. By the end of this lab, you'll know the basics set operations in Python, including what it is, operations and logic operations.</p>
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li>
<a href="#set">Sets</a>
<ul>
<li><a href="content">Set Content</a></li>
<li><a href="op">Set Operations</a></li>
<li><a href="logic">Sets Logic Operations</a></li>
</ul>
</li>
<li>
<a href="#quiz">Quiz on Sets</a>
</li>
</ul>
<p>
Estimated time needed: <strong>20 min</strong>
</p>
</div>
<hr>
<h2 id="set">Sets</h2>
<h3 id="content">Set Content</h3>
A set is a unique collection of objects in Python. You can denote a set with a curly bracket <b>{}</b>. Python will automatically remove duplicate items:
```
# Create a set
set1 = {"pop", "rock", "soul", "hard rock", "rock", "R&B", "rock", "disco"}
set1
```
The process of mapping is illustrated in the figure:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/SetsUnique.png" width="1100" />
You can also create a set from a list as follows:
```
# Convert list to set
album_list = [ "Michael Jackson", "Thriller", 1982, "00:42:19", \
"Pop, Rock, R&B", 46.0, 65, "30-Nov-82", None, 10.0]
album_set = set(album_list)
album_set
```
Now let us create a set of genres:
```
# Convert list to set
music_genres = set(["pop", "pop", "rock", "folk rock", "hard rock", "soul", \
"progressive rock", "soft rock", "R&B", "disco"])
music_genres
```
<h3 id="op">Set Operations</h3>
Let us go over set operations, as these can be used to change the set. Consider the set <b>A</b>:
```
# Sample set
A = set(["Thriller", "Back in Black", "AC/DC"])
A
```
We can add an element to a set using the <code>add()</code> method:
```
# Add element to set
A.add("NSYNC")
A
```
If we add the same element twice, nothing will happen as there can be no duplicates in a set:
```
# Try to add duplicate element to the set
A.add("NSYNC")
A
```
We can remove an item from a set using the <code>remove</code> method:
```
# Remove the element from set
A.remove("NSYNC")
A
```
We can verify if an element is in the set using the <code>in</code> command:
```
# Verify if the element is in the set
"AC/DC" in A
```
<h3 id="logic">Sets Logic Operations</h3>
Remember that with sets you can check the difference between sets, as well as the symmetric difference, intersection, and union:
Consider the following two sets:
```
# Sample Sets
album_set1 = set(["Thriller", 'AC/DC', 'Back in Black'])
album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"])
```
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/SetsSamples.png" width="650" />
```
# Print two sets
album_set1, album_set2
```
As both sets contain <b>AC/DC</b> and <b>Back in Black</b> we represent these common elements with the intersection of two circles.
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/SetsLogic.png" width = "650" />
You can find the intersect of two sets as follow using <code>&</code>:
```
# Find the intersections
intersection = album_set1 & album_set2
intersection
```
You can find all the elements that are only contained in <code>album_set1</code> using the <code>difference</code> method:
```
# Find the difference in set1 but not set2
album_set1.difference(album_set2)
```
You only need to consider elements in <code>album_set1</code>; all the elements in <code>album_set2</code>, including the intersection, are not included.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/SetsLeft.png" width="650" />
The elements in <code>album_set2</code> but not in <code>album_set1</code> is given by:
```
album_set2.difference(album_set1)
```
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/SetsRight.png" width="650" />
You can also find the intersection of <code>album_list1</code> and <code>album_list2</code>, using the <code>intersection</code> method:
```
# Use intersection method to find the intersection of album_list1 and album_list2
album_set1.intersection(album_set2)
```
This corresponds to the intersection of the two circles:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/SetsIntersect.png" width="650" />
The union corresponds to all the elements in both sets, which is represented by coloring both circles:
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/SetsUnion.png" width="650" />
The union is given by:
```
# Find the union of two sets
album_set1.union(album_set2)
```
And you can check if a set is a superset or subset of another set, respectively, like this:
```
# Check if superset
set(album_set1).issuperset(album_set2)
# Check if subset
set(album_set2).issubset(album_set1)
```
Here is an example where <code>issubset()</code> and <code>issuperset()</code> return true:
```
# Check if subset
set({"Back in Black", "AC/DC"}).issubset(album_set1)
# Check if superset
album_set1.issuperset({"Back in Black", "AC/DC"})
```
<hr>
<h2 id="quiz">Quiz on Sets</h2>
Convert the list <code>['rap','house','electronic music', 'rap']</code> to a set:
```
# Write your code below and press Shift+Enter to execute
set(['rap','house','electronic music','rap'])
```
Double-click __here__ for the solution.
<!-- Your answer is below:
set(['rap','house','electronic music','rap'])
-->
<hr>
Consider the list <code>A = [1, 2, 2, 1]</code> and set <code>B = set([1, 2, 2, 1])</code>, does <code>sum(A) = sum(B)</code>
```
# Write your code below and press Shift+Enter to execute
A=[1,2,2,1]
B=set([1,2,2,1])
print("the sum of A is:",sum(A))
print("the sum of B is:",sum(B))
sum(A)==sum(B)
```
Double-click __here__ for the solution.
<!-- Your answer is below:
A = [1, 2, 2, 1]
B = set([1, 2, 2, 1])
print("the sum of A is:", sum(A))
print("the sum of B is:", sum(B))
-->
<hr>
Create a new set <code>album_set3</code> that is the union of <code>album_set1</code> and <code>album_set2</code>:
```
# Write your code below and press Shift+Enter to execute
album_set1 = set(["Thriller", 'AC/DC', 'Back in Black'])
album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"])
album_set3 = album_set1.union(album_set2)
album_set3
```
Double-click __here__ for the solution.
<!-- Your answer is below:
album_set3 = album_set1.union(album_set2)
album_set3
-->
<hr>
Find out if <code>album_set1</code> is a subset of <code>album_set3</code>:
```
# Write your code below and press Shift+Enter to execute
album_set1.issubset(album_set3)
```
Double-click __here__ for the solution.
<!-- Your answer is below:
album_set1.issubset(album_set3)
-->
<hr>
<h2>The last exercise!</h2>
<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work.
<hr>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<h2>Get IBM Watson Studio free of charge!</h2>
<p><a href="https://cocl.us/PY0101EN_edx_add_bbottom"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png" width="750" align="center"></a></p>
</div>
<h3>About the Authors:</h3>
<p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>
Other contributors: <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
<hr>
<p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
|
github_jupyter
|
# Exercise 4a
## 2 Red Cards Study
### 2.1 Loading and Cleaning the data
```
#Import libraries
import numpy as np
import pandas as pd
from scipy.sparse.linalg import lsqr
#Load dataset
df = pd.read_csv("CrowdstormingDataJuly1st.csv", sep=",", header=0)
print(df.columns)
```
We sort out (irrelevant) features:
- player (playerShort uniquely identifies the player, so player is not needed)
- playerShort (the players' names are irrelevant for us, actually...)
- photoID (is only needed if we want to classify the skin color by ourselves)
- refCountry (we will assume that not the name of the referee country, but the values meanIAT and meanExp are the relevant features regarding the referee country)
- nIAT, seIAT (we will just assume the IAT values are a good estimation of the actual value, so we are not interested in the sample size used to determine the IAT)
- nExp, seExp (see above)
- yellowCards (our examination is only about red cards, not about yellow cards)
- club (this is a categorial feature with over 100 categories. The one-hot encoding would therefore create a large amount of new features which drastically increases the dimensionality of the problem. This extra effort is disproportionate to the importance of this feature for this question (maybe(!) some teams play more aggresive than others).)
- birthday (We decided against this feature because the data set does not contain information about the date of every single game. This makes it impossible to find out if players get more red cards when they are at a certain age, because the data from this dataset refer to the complete career of the players.)
- Alpha_3 (we neglect the nationality of the referee)
- games (This is a redundant feature because the total number of games is given by the sum of victories, ties and defeats)
```
df = df.drop(labels=["player", "playerShort", "photoID", "refCountry", "nIAT", "seIAT", "nExp", "seExp", "yellowCards", "club", "birthday", "Alpha_3", "games"], axis=1)
print(df.columns)
```
Next, we create new features out of existing ones or manipulate them:
- rating (we take the mean of rater1 and rater2 as our own rating)
- totalReds (we sum up the red and yellow-red cards to the total number of red cards and divide by the number of games, This is our response Y.)
- leagueCountry (replace the name of the country by one-hot encoding)
- refCount (counts the number of dyads for each referee. Relevant for later drops (see https://nbviewer.jupyter.org/github/mathewzilla/redcard/blob/master/Crowdstorming_visualisation.ipynb))
- We summarize some categories in position (Goalkeeper, Back, Midfielder, Forward (don't know anything about football, hopefully this makes sense.))
```
#take mean of the two skin color ratings
df["rating"] = (df["rater1"] + df["rater2"])/2
df = df.drop(labels=["rater1", "rater2"], axis=1)
#sum up red and yellow-red cards
df["percentageReds"] = (df["redCards"] + df["yellowReds"])/(df["victories"]+df["ties"]+df["defeats"])
df = df.drop(labels=["redCards", "yellowReds"], axis=1)
#onehot encoding for leagueCountry
onehot = pd.get_dummies(df.leagueCountry, prefix="Country")
df = df.drop(labels=["leagueCountry"], axis=1)
df = pd.concat([df,onehot], axis=1, sort=False)
#summarize positions and onehot encoding for positions
dic = {"Right Fullback":"Back",
"Left Fullback":"Back",
"Center Back":"Back",
"Left Midfielder":"Midfielder",
"Right Midfielder":"Midfielder",
"Center Midfielder":"Midfielder",
"Defensive Midfielder":"Midfielder",
"Attacking Midfielder":"Midfielder",
"Left Winger":"Forward",
"Right Winger":"Forward",
"Center Forward":"Forward"}
df = df.replace({"position":dic})
onehot = pd.get_dummies(df.position, prefix="Position")
df = df.drop(labels=["position"], axis=1)
df = pd.concat([df,onehot], axis=1, sort=False)
#add a column which tracks how many games each ref is involved in
#taken from https://nbviewer.jupyter.org/github/mathewzilla/redcard/blob/master/Crowdstorming_visualisation.ipynb
df['refCount']=0
refs=pd.unique(df['refNum'].values.ravel()) #list all unique ref IDs
#for each ref, count their dyads
for r in refs:
df.loc[df['refNum']==r,"refCount"]=len(df[df['refNum']==r])
```
Now we go on with preparing the data set:
- remove rows that contain a NaN-value
- remove rows where refCount<22 (for explanation see https://nbviewer.jupyter.org/github/mathewzilla/redcard/blob/master/Crowdstorming_visualisation.ipynb. After this we can remove the features "refNum" and "refCount" because these were only kept for this step.)
- normalize the features ties, victories and defeats
```
#remove rows with NaN in "rating"
df = df.dropna(axis=0)
#remove rows where the "refCount"<22
df = df.loc[df["refCount"]>21].reset_index()
df = df.drop(["refNum", "refCount", "index"], axis=1)
#normalize ties, victories and defeats
defeats = df["defeats"]/(df["defeats"]+df["ties"]+df["victories"])
ties = df["ties"]/(df["defeats"]+df["ties"]+df["victories"])
victories = df["victories"]/(df["defeats"]+df["ties"]+df["victories"])
df["defeats"] = defeats
df["ties"] = ties
df["victories"] = victories
```
In the following tasks we want to apply the LSQR-algorithm. In the lecture we always assumed centralized features and responses. So our last step is to centralize our data. The responses are given by the values in the column "totalReds"
```
df_mean = df.apply(np.mean, axis=0)
df = df - df_mean
df
```
### 2.2 Model Creation
```
#solve the problem using the lsqr algorithm (linear regression)
#extract features and responses from the DataFrame
Y = df["percentageReds"].to_numpy()
X = df.drop(labels=["percentageReds"], axis=1).to_numpy()
class LinearRegression():
def __init__(self):
self.beta = None
#use lsqr algorithm
def train(self, features, labels):
self.beta = lsqr(features,labels)[0]
def predict(self, x):
x_mean = df_mean.drop(labels=["percentageReds"])
y_mean = df_mean["percentageReds"]
return y_mean + np.sum(self.beta*(x-x_mean))
#Test basic functionality
regression = LinearRegression()
regression.train(X,Y)
regression.predict([180, 77, 1.4, 0.8, 1, 0.4, 0.35, 0.5, 1, 0.3, 0.15, 0.3, 0.25, 0.3, 0.21, 0.1, 0.35])
#solve the problem using regression forestsclass DecisionTree(Tree):
# base classes
class Node:
pass
class Tree:
def __init__(self):
self.root = Node()
def find_leaf(self, x):
node = self.root
while hasattr(node, "feature"):
j = node.feature
if x[j] <= node.threshold:
node = node.left
else:
node = node.right
return node
class RegressionTree(Tree):
def __init__(self):
super(RegressionTree, self).__init__()
def train(self, data, labels, n_min=500):
'''
data: the feature matrix for all digits
labels: the corresponding ground-truth responses
n_min: termination criterion (don't split if a node contains fewer instances)
'''
N, D = data.shape
D_try = np.max([int(np.sqrt(D))-2, 0]) # how many features to consider for each split decision
# initialize the root node
self.root.data = data
self.root.labels = labels
stack = [self.root]
while len(stack):
node = stack.pop()
n = node.data.shape[0] # number of instances in present node
if (n >= n_min):
#randomly choose D_try-2 features
feature_indices = np.random.choice(D, D_try, replace=False)
feature_indices = np.append(feature_indices, [0,1,8])
#split the node into two
left, right = make_regression_split_node(node, feature_indices)
#put the two nodes on the stack
stack.append(left)
stack.append(right)
else:
make_regression_leaf_node(node)
def predict(self, x):
leaf = self.find_leaf(x)
return leaf.response
def make_regression_split_node(node, feature_indices):
'''
node: the node to be split
feature_indices: a numpy array of length 'D_try', containing the feature
indices to be considered in the present split
'''
n, D = node.data.shape
# find best feature j (among 'feature_indices') and best threshold t for the split
#(mainly copied from "density tree")
e_min = float("inf")
j_min, t_min = None, None
for j in feature_indices:
data_unique = np.sort(np.unique(node.data[:, j]))
tj = (data_unique[1:] + data_unique[:-1])/2.0
for t in tj:
data_left = node.data[:, j].copy()
labels_left = node.labels[data_left<=t].copy()
data_left = data_left[data_left<=t]
data_right = node.data[:, j].copy()
labels_right = node.labels[data_right>t].copy()
data_right = data_right[data_right>t]
#compute mean label value on the left and right
mean_left = np.mean(labels_left)
mean_right = np.mean(labels_right)
#compute sum of squared deviation from mean label
measure_left = np.sum((labels_left - mean_left)**2)
measure_right = np.sum((labels_right - mean_right)**2)
#Compute decision rule
measure = measure_left + measure_right
# choose the best threshold that minimizes gini
if measure < e_min:
e_min = measure
j_min = j
t_min = t
# create children
left = Node()
right = Node()
X = node.data[:, j_min]
# initialize 'left' and 'right' with the data subsets and labels
# according to the optimal split found above
left.data = node.data[X<=t_min]# data in left node
left.labels = node.labels[X<=t_min] # corresponding labels
right.data = node.data[X>t_min]
right.labels = node.labels[X>t_min]
# turn the current 'node' into a split node
# (store children and split condition)
node.left = left
node.right = right
node.feature = j_min
node.threshold = t_min
# return the children (to be placed on the stack)
return left, right
def make_regression_leaf_node(node):
'''
node: the node to become a leaf
'''
# compute and store leaf response
node.response = np.mean(node.labels) + df_mean["percentageReds"]
class RegressionForest():
def __init__(self, n_trees):
# create ensemble
self.trees = [RegressionTree() for i in range(n_trees)]
def train(self, data, labels, n_min=1000):
for tree in self.trees:
# train each tree, using a bootstrap sample of the data
bootstrap_indices = np.random.choice(len(labels), len(labels))
bootstrap_data = np.array([data[i] for i in bootstrap_indices])
bootstrap_labels = np.array([labels[i] for i in bootstrap_indices])
tree.train(bootstrap_data, bootstrap_labels, n_min=n_min)
def predict(self, x):
predictions = np.array([])
for tree in self.trees:
predictions = np.append(predictions, tree.predict(x))
return np.mean(predictions)
def merge(self, forest):
self.trees = self.trees + forest.trees
#test of basic functionality
Y = df["percentageReds"].to_numpy()
X = df.drop(labels=["percentageReds"], axis=1).to_numpy()
forest = RegressionForest(n_trees=5)
forest.train(X, Y, n_min=500)
#determine the error via cross validation
#define function that determines the sum squared error
def compute_error(model, test_features, test_labels):
mean_squared_error = 0
n = len(test_features)
for i in range(n):
mean_squared_error = mean_squared_error + (test_labels[i] - model.predict(test_features[i]))**2
return mean_squared_error/n
Y = df["percentageReds"].to_numpy()
X = df.drop(labels=["percentageReds"], axis=1).to_numpy()
#number of folds
L = 10
#create L folds
N = len(X)
indices = np.random.choice(N, N, replace=False)
X_folds = np.array(np.array_split(X[indices], L), dtype=object)
Y_folds = np.array(np.array_split(Y[indices], L), dtype=object)
#1. Linear Regression
error = []
for i in range(L):
print(i/L*100, "%")
#create training and test data
X_train = np.concatenate(X_folds[np.arange(L)!=i], axis=0)
Y_train = np.concatenate(Y_folds[np.arange(L)!=i], axis=0)
X_test = X_folds[i]
Y_test = Y_folds[i]
#compute error
regression = LinearRegression()
regression.train(X_train,Y_train)
error.append(compute_error(regression, X_test, Y_test))
error = np.mean(error)
#print error
print("\nerror rate, linear regression:")
print(error)
#2. Regression Forest
error = []
for i in range(L):
print(i/L*100, "%")
#create training and test data
X_train = np.concatenate(X_folds[np.arange(L)!=i], axis=0)
Y_train = np.concatenate(Y_folds[np.arange(L)!=i], axis=0)
X_test = X_folds[i]
Y_test = Y_folds[i]
#compute error
forest = RegressionForest(n_trees=5)
forest.train(X_train,Y_train, n_min=500)
error.append(compute_error(forest, X_test, Y_test))
error = np.mean(error)
#print error
print("\nerror rate, regression forest:")
print(error)
```
### 2.3 Answering the Research Question
```
#define function that shuffles the data in one column
def shuffle_data(features, feature_index):
'''
Shuffles the data in the column denoted by feature_index. All other data remain unchanged
features: 2D array, each row stands for one instance, each column for one feature
feature_index: the entries in the feature_index-th column will be shuffled randomly
'''
features = features.transpose()
shuffled_feature = np.random.permutation(features[feature_index])
features[feature_index] = shuffled_feature
return features.transpose()
color_rating_index = 8 #index of the color rating in df
L = 10 #number of folds
#load csv-file where we save the mean squared errors
err_data = pd.read_csv("errors.txt", sep=",", index_col=False)
# load original data set
Y = df["percentageReds"].to_numpy()
X = df.drop(labels=["percentageReds"], axis=1).to_numpy()
#1. Linear Regression
#shuffle data
Y_shuffled = Y
X_shuffled = shuffle_data(X, 8)
#create L folds
N = len(X_shuffled)
indices = np.random.choice(N, N, replace=False)
X_folds = np.array(np.array_split(X_shuffled[indices], L), dtype=object)
Y_folds = np.array(np.array_split(Y_shuffled[indices], L), dtype=object)
error = []
for i in range(L):
print(i/L*100, "%")
#create training and test data
X_train = np.concatenate(X_folds[np.arange(L)!=i], axis=0)
Y_train = np.concatenate(Y_folds[np.arange(L)!=i], axis=0)
X_test = X_folds[i]
Y_test = Y_folds[i]
#compute error
regression = LinearRegression()
regression.train(X_train,Y_train)
error.append(compute_error(regression, X_test, Y_test))
error_lr = np.mean(error)
#print error and save the value
print("\nerror rate, linear regression:")
print(error_lr)
#2. Regression Tree
error = []
for i in range(L):
print(i/L*100, "%")
#create training and test data
X_train = np.concatenate(X_folds[np.arange(L)!=i], axis=0)
Y_train = np.concatenate(Y_folds[np.arange(L)!=i], axis=0)
X_test = X_folds[i]
Y_test = Y_folds[i]
#compute error
forest = RegressionForest(n_trees=5)
forest.train(X_train,Y_train, n_min=500)
error.append(compute_error(forest, X_test, Y_test))
error = np.mean(error)
#print error and save the value
print("\nerror rate, regression tree:")
print(error)
err_data.loc[len(err_data)] = [error_lr, error]
err_data.to_csv("errors.txt", sep=",", index=False)
```
To obtain the following results we run the code from above several times. The first row stands for the results from the unshuffled dataset, the other rows from the shuffled datasets. One can see that the error of some of the rows corresponding to a dataset with shuffled color rating are lower than the error from the original dataset. So we can't find a skin color bias in red card decisions with a p-value of p=0.05. However, we have doubts if our code is completely correct: surprisingly the error for linear regression is always even lower, when you shuffle the color rating. We do not have an explanation for this.
```
err_data = pd.read_csv("errors.txt", sep=",", index_col=False)
err_data
```
### 2.4 How to Lie With Statistics
We already found a choice of features that does not reveal a skin color bias. So we try to find a choice of features that shows such a bias. We choose the "rating" column as the only feature. We only apply the Linear Regression model to the data, because our task is to find one example of a choice of features that shows a skin color bias in one of the used models.
```
Y = df["percentageReds"].to_numpy()
X = df[["rating"]].to_numpy()
df_mean = df_mean[["rating", "percentageReds"]]
#number of folds
L = 20
#create L folds
N = len(X)
indices = np.random.choice(N, N, replace=False)
X_folds = np.array(np.array_split(X[indices], L), dtype=object)
Y_folds = np.array(np.array_split(Y[indices], L), dtype=object)
#1. Linear Regression
error = []
for i in range(L):
print(i/L*100, "%")
#create training and test data
X_train = np.concatenate(X_folds[np.arange(L)!=i], axis=0)
Y_train = np.concatenate(Y_folds[np.arange(L)!=i], axis=0)
X_test = X_folds[i]
Y_test = Y_folds[i]
#compute error
regression = LinearRegression()
regression.train(X_train,Y_train)
error.append(compute_error(regression, X_test, Y_test))
error = np.mean(error)
#print error
print("\nerror rate, linear regression:")
print(error)
color_rating_index = 0 #index of the color rating in df
L = 20 #number of folds
#load csv-file where we save the mean squared errors
err_data = pd.read_csv("errorsLie.txt", sep=",", index_col=False)
# load original data set
Y = df["percentageReds"].to_numpy()
X = df[["rating"]].to_numpy()
#1. Linear Regression
#shuffle data
Y_shuffled = Y
X_shuffled = shuffle_data(X, color_rating_index)
#create L folds
N = len(X_shuffled)
indices = np.random.choice(N, N, replace=False)
X_folds = np.array(np.array_split(X_shuffled[indices], L), dtype=object)
Y_folds = np.array(np.array_split(Y_shuffled[indices], L), dtype=object)
error = []
for i in range(L):
print(i/L*100, "%")
#create training and test data
X_train = np.concatenate(X_folds[np.arange(L)!=i], axis=0)
Y_train = np.concatenate(Y_folds[np.arange(L)!=i], axis=0)
X_test = X_folds[i]
Y_test = Y_folds[i]
#compute error
regression = LinearRegression()
regression.train(X_train,Y_train)
error.append(compute_error(regression, X_test, Y_test))
error = np.mean(error)
#print error and save the value
print("\nerror rate, linear regression:")
print(error)
err_data.loc[len(err_data)] = [error]
err_data.to_csv("errorsLie.txt", sep=",", index=False)
```
After running the code above 20 times we can find a skin color bias this time: The mean squared error for the shuffled data is always higher than the error for the original dataset.
### 2.5 Alternative Hypotheses
This exercise assumes that a correlation between the skin color and the probability to get a red card exists. We did not find such a correlation with our first choice of features. So we assume that the choice of features we used in 2.4 was "better". Two causal hypotheses for red cards would then be:
1. Heavier players cause more fouls (because the opponent is more likely to fall). This leads to more red cards for players with more weight.
2. Players in the position "Back" often have to stop an opponent player in the last moment ("no matter what it costs"). This leads to more red cards for players in the position "Back".
If one of these hypotheses is true, we should find a positive correlation between the position "Back"/weight and the color rating.
Then we would typically expect a positive covariance for these quantities. Additionally, we would expect a positive covarance of the weight/"Back" and the probability for a red card
```
#compute covariance matrices
Y = df["percentageReds"].to_numpy()
X = df["weight"].to_numpy()
print(np.cov(X,Y))
Y = df["rating"].to_numpy()
print(np.cov(X,Y), "\n")
Y = df["percentageReds"].to_numpy()
X = df["Position_Back"].to_numpy()
print(np.cov(X,Y))
Y = df["rating"].to_numpy()
print(np.cov(X,Y))
```
In both cases one of our expectations is not met, which means that our hypotheses are rather not true.
|
github_jupyter
|
# Random Forest Classification
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features which are required for model training .
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
def EncodeY(df):
if len(df.unique())<=2:
return df
else:
un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort')
df=LabelEncoder().fit_transform(df)
EncodedT=[xi for xi in range(len(un_EncodedT))]
print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
return df
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 123)#performing datasplitting
```
### Model
A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is controlled with the <code>max_samples</code> parameter if <code>bootstrap=True</code> (default), otherwise the whole dataset is used to build each tree.
#### Model Tuning Parameters
1. n_estimators : int, default=100
> The number of trees in the forest.
2. criterion : {“gini”, “entropy”}, default=”gini”
> The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain.
3. max_depth : int, default=None
> The maximum depth of the tree.
4. max_features : {“auto”, “sqrt”, “log2”}, int or float, default=”auto”
> The number of features to consider when looking for the best split:
5. bootstrap : bool, default=True
> Whether bootstrap samples are used when building trees. If False, the whole dataset is used to build each tree.
6. oob_score : bool, default=False
> Whether to use out-of-bag samples to estimate the generalization accuracy.
7. n_jobs : int, default=None
> The number of jobs to run in parallel. fit, predict, decision_path and apply are all parallelized over the trees. <code>None</code> means 1 unless in a joblib.parallel_backend context. <code>-1</code> means using all processors. See Glossary for more details.
8. random_state : int, RandomState instance or None, default=None
> Controls both the randomness of the bootstrapping of the samples used when building trees (if <code>bootstrap=True</code>) and the sampling of the features to consider when looking for the best split at each node (if <code>max_features < n_features</code>).
9. verbose : int, default=0
> Controls the verbosity when fitting and predicting.
```
# Build Model here
model = RandomForestClassifier(n_jobs = -1,random_state = 123)
model.fit(X_train, y_train)
```
#### Model Accuracy
score() method return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
```
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,X_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A Classification report is used to measure the quality of predictions from a classification algorithm. How many predictions are True, how many are False.
* **where**:
- Precision:- Accuracy of positive predictions.
- Recall:- Fraction of positives that were correctly identified.
- f1-score:- percent of positive predictions were correct
- support:- Support is the number of actual occurrences of the class in the specified dataset.
```
print(classification_report(y_test,model.predict(X_test)))
```
#### Feature Importances.
The Feature importance refers to techniques that assign a score to features based on how useful they are for making the prediction.
```
plt.figure(figsize=(8,6))
n_features = len(X.columns)
plt.barh(range(n_features), model.feature_importances_, align='center')
plt.yticks(np.arange(n_features), X.columns)
plt.xlabel("Feature importance")
plt.ylabel("Feature")
plt.ylim(-1, n_features)
```
#### Creator: Thilakraj Devadiga , Github: [Profile](https://github.com/Thilakraj1998)
|
github_jupyter
|
```
from __future__ import division
import numpy as np
from numpy import *
import os
import tensorflow as tf
import PIL
from PIL import Image
import matplotlib.pyplot as plt
from skimage import data, io, filters
from matplotlib.path import Path
import matplotlib.patches as patches
import pandas as pd
path_to_strokes = "tiny/airplane.npy"
X = np.load(path_to_strokes)[()]
print('Example sketch has ', str(shape(X['airplane'][0][0])[0]), ' strokes')
print('Corresponds to photo: ', X['airplane'][1][0])
path_to_source_photos = "../tiny/photo/airplane/"
photo = os.path.join(path_to_source_photos,'n02691156_10151.jpg')
class TinyDataset():
"""tiny airplane dataset of photos and sketches for pix2svg."""
def __init__(self, npy_file, root_dir, transform=None):
"""
Args:
npy_file (string): Path to the numpy file with stroke-5 representation and corresponding photos.
# to get stroke-5 representation of svg
x['airplane'][0][5]
# to get corresponding photos
x['airplane'][1][5]
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.root_dir = root_dir
self.stroke_dir = npy_file
self.photo_dir = os.path.join(root_dir,'photo')
self.strokes = np.load(npy_file)[()]
self.transform = transform
def __len__(self):
return len(self.strokes['airplane'][0])
def __getitem__(self, idx):
img_name = os.path.join(self.photo_dir,'airplane',X['airplane'][1][idx]+ '.jpg')
photo = io.imread(img_name)
photo = photo.astype(float)
strokes = self.strokes['airplane'][0][idx]
sample = {'photo': photo, 'strokes': strokes,'name': X['airplane'][1][idx]+ '.jpg'}
if self.transform:
sample = self.transform(sample)
return sample
class ToTensor(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
image, strokes, name = sample['photo'], sample['strokes'], sample['name']
# swap color axis because
# numpy image: H x W x C
# torch image: C X H X W
image = image.transpose((2, 0, 1))
return {'tensor': tf.divide(tf.stack(sample['photo']),255),
'strokes': strokes,
'name': name,
'photo': image}
def to_normal_strokes(big_stroke):
"""Convert from stroke-5 format (from sketch-rnn paper) back to stroke-3."""
l = 0
for i in range(len(big_stroke)):
if big_stroke[i, 4] > 0:
l = i
break
if l == 0:
l = len(big_stroke)
result = np.zeros((l, 3))
result[:, 0:2] = big_stroke[0:l, 0:2]
result[:, 2] = big_stroke[0:l, 3]
return result
def strokes_to_lines(strokes):
"""
Convert stroke-3 format to polyline format.
List contains sublist of continuous line segments (strokes).
"""
x = 0
y = 0
lines = []
line = []
for i in range(len(strokes)):
if strokes[i, 2] == 1:
x += float(strokes[i, 0])
y += float(strokes[i, 1])
line.append([x, y])
lines.append(line)
line = []
else:
x += float(strokes[i, 0])
y += float(strokes[i, 1])
line.append([x, y])
return lines
def polyline_pathmaker(lines):
x = []
y = []
codes = [Path.MOVETO] # start with moveto command always
for i,l in enumerate(lines):
for _i,_l in enumerate(l):
x.append(_l[0])
y.append(_l[1])
if _i<len(l)-1:
codes.append(Path.LINETO) # keep pen on page
else:
if i != len(lines)-1: # final vertex
codes.append(Path.MOVETO)
verts = zip(x,y)
return verts, codes
def path_renderer(verts, codes):
path = Path(verts, codes)
patch = patches.PathPatch(path, facecolor='none', lw=2)
ax.add_patch(patch)
ax.set_xlim(0,max(max(verts)))
ax.set_ylim(0,max(max(verts)))
ax.axis('off')
plt.gca().invert_yaxis() # y values increase as you go down in image
plt.show()
%ls
## load in airplanes dataset
airplanes = TinyDataset(npy_file='/home/jefan/ptsketchy/tiny/airplane.npy',root_dir='/home/jefan/ptsketchy/tiny',transform=None)
## display given photo and corresponding sketch from stroke-5 representation
i = 100
sample = airplanes[i]
print(i, sample['photo'].shape, sample['strokes'].shape)
plt.figure()
ax = plt.subplot(121)
ax.set_title(sample['name'])
ax.axis('off')
img = np.reshape(sample['photo'],(256,256,3))
plt.imshow(img,interpolation='nearest')
ax = plt.subplot(122)
lines = strokes_to_lines(to_normal_strokes(sample['strokes']))
verts,codes = polyline_pathmaker(lines)
path_renderer(verts,codes)
plt.show()
# load in airplanes dataset
airplanes = TinyDataset(npy_file='/home/jefan/ptsketchy/tiny/airplane.npy',
root_dir='/home/jefan/ptsketchy/tiny',
transform=ToTensor())
# load in features for photos
path_to_features = 'sketchy/triplet_features'
photo_features = np.load(os.path.join(path_to_features,'photo_features.npy'))
F = photo_features
# read in filenames and generate pandas dataframe with object labels
_filenames = pd.read_csv(os.path.join(path_to_features,'photo_filenames.txt'),header=None,names=['filename'])
filenames = []
for i in range(len(_filenames)):
filenames.append(_filenames[_filenames.index==i].values[0][0])
filenames = ['sketchy' + f[1:] for f in filenames]
path = filenames
obj = [f.split('/')[3] for f in filenames]
img = [f.split('/')[4] for f in filenames]
data = {'path': path,
'object': obj,
'filename': img}
X = pd.DataFrame.from_dict(data)
# subset airplane features only
matches = X['object']=='airplane'
inds = np.where(matches==True)
X0 = X[matches]
F0 = F[inds]
# construct (11094,1024) version of photo feature matrix, called PF, that matches indexing of the sketch feature matrix
sketch_features = np.load('sketchy/airplane_features/airplane_sketch_features.npy')
_sketch_filenames = pd.read_csv('sketchy/airplane_features/airplane_sketch_filenames.txt',header=None,names=['filename'])
sketch_filenames = []
for i in range(len(_sketch_filenames)):
sketch_filenames.append(_sketch_filenames[_sketch_filenames.index==i].values[0][0])
PF = []
inds = []
for sf in sketch_filenames:
q = sf.split('/')[2]+'.jpg'
PF.append(F0[X0['filename']==q])
inds.append(np.where(X0['filename']==q)[0][0])
PF = np.squeeze(np.array(PF))
SF = sketch_features
inds = np.array(inds)
## zip together/concatenate the photo and sketch features
_F = np.hstack((PF,SF))
### now get a (11094,5) representation of the 'next stroke'
### no wait, instead, just bump up the dimensionality of these feature matrices to fit that of
### the (delta_x,delta_y) stroke representation
### Strokes dataframe ("S") is of dimensionality (55855,5).
### So, resize and re-index the feature matrix to match S.
S = pd.read_csv('tiny/stroke_dataframe.csv')
S1 = S
photo_dir = np.array([sf.split('/')[2] for sf in sketch_filenames]) # photo dir
sketch_dir = np.array(map(int,[sf.split('/')[3] for sf in sketch_filenames])) # sketch dir
stroke_png = np.array(map(int,[sf.split('/')[4].split('.')[0] for sf in sketch_filenames])) # stroke png
F = []
for index, row in S.iterrows():
# get ind of the original small (11094,5) matrix that corresponds to this row of S (55855,5)
ind = np.intersect1d(np.intersect1d(np.where(photo_dir==row['photoID']),
np.where(sketch_dir==row['sketchID'])),
np.where(stroke_png==row['strokeID']))[0]
F.append(_F[ind])
F = np.array(F)
F1 = F # protected F1 matrix
```
### convert strokes matrix from absolute to relative coordinates
```
from copy import deepcopy
unique_photos = np.unique(S1.photoID)
r = pd.DataFrame(columns=list(S1.keys()))
run_this = False
if run_this:
for i, p in enumerate(unique_photos):
print 'Processing ' + p + ': ' + str(i+1) + ' of ' + str(len(unique_photos)) + ' photos.'
s1 = S1[S1.photoID==p]
unique_sketches_of_photo = np.unique(s1.sketchID)
for sketch in unique_sketches_of_photo:
this_sketch = s1[s1.sketchID==sketch]
for index, row in this_sketch.iterrows():
_row = deepcopy(row)
if index==min(this_sketch.index): # first stroke
r = r.append(row)
this_x = row.x
this_y = row.y
else:
x_offset = _row.x-this_x
y_offset = _row.y-this_y
row.x = x_offset
row.y = y_offset
r = r.append(row)
this_x = _row.x # hold onto current row so you can compute difference with next one
this_y = _row.y
# save out relative strokes matrix as S2
S2 = r
print 'Saving out stroke_dataframe_relative.csv'
S2.to_csv('tiny/stroke_dataframe_relative.csv')
# check if S2 exists, if not, load it in
try:
S2
except:
S2 = pd.read_csv('tiny/stroke_dataframe_relative.csv')
# define S to either be ABSOLUTE strokes matrix (S1) or RELATIVE strokes matrix (S2)
S = S2
# generate 55855-long vector of photo indices
print 'generating list of photo indices based on X0'
inds = []
for index, row in S.iterrows():
q = row['photoID']+'.jpg'
inds.append(np.where(X0['filename']==q)[0][0])
inds = np.array(inds)
# generate random index to do train/val/test split
print 'generating random index to do train/val/test split'
_idx = np.arange(len(X0))
np.random.seed(seed=0)
np.random.shuffle(_idx)
train_len = int(len(_idx)*0.85)
val_len = int(len(_idx)*0.05)
test_len = int(len(_idx*0.10))
# indices of 100 photos that will go into train/val/test split
train_inds = _idx[:train_len]
val_inds = _idx[train_len:train_len+val_len]
test_inds = _idx[train_len+val_len:len(_idx)]
print 'constructing 55855 vectors that correspond to membership in train/val/test splits'
# construct 55855 vectors that correspond to membership in train/val/test splits
train_vec = np.zeros(len(F)).astype(bool)
val_vec = np.zeros(len(F)).astype(bool)
test_vec = np.zeros(len(F)).astype(bool)
for i in train_inds:
train_vec[inds==i] = True
for i in val_inds:
val_vec[inds==i] = True
for i in test_inds:
test_vec[inds==i] = True
assert sum(train_vec)+ sum(val_vec) + sum(test_vec) == len(train_vec)
print ' '
print str(sum(train_vec)/len(train_vec) * len(F)) + ' sketch intermediates to train on.'
print str(sum(val_vec)/len(val_vec) * len(F)) + ' sketch intermediates to validate on.'
print str(sum(test_vec)/len(test_vec) * len(F)) + ' sketch intermediates to test on.'
print ' '
print 'Now actually splitting data.'
# now actually split data
F_train = F[train_vec]
F_val = F[val_vec]
F_test = F[test_vec]
S_train = S[train_vec]
S_val = S[val_vec]
S_test = S[test_vec]
S_train = S_train[['x', 'y', 'pen']].copy()
S_val = S_val[['x', 'y', 'pen']].copy()
S_test = S_test[['x', 'y', 'pen']].copy()
## training helpers
def minibatch(data, minibatch_idx):
return data[minibatch_idx] if type(data) is np.ndarray else [data[i] for i in minibatch_idx]
def minibatches(data, batch_size, shuffle=True):
batches = [np.array(col) for col in zip(*data)]
return get_minibatches(batches, batch_size, shuffle)
def get_minibatches(data, minibatch_size, shuffle=True):
"""
Iterates through the provided data one minibatch at at time. You can use this function to
iterate through data in minibatches as follows:
for inputs_minibatch in get_minibatches(inputs, minibatch_size):
...
Or with multiple data sources:
for inputs_minibatch, labels_minibatch in get_minibatches([inputs, labels], minibatch_size):
...
Args:
data: there are two possible values:
- a list or numpy array
- a list where each element is either a list or numpy array
minibatch_size: the maximum number of items in a minibatch
shuffle: whether to randomize the order of returned data
Returns:
minibatches: the return value depends on data:
- If data is a list/array it yields the next minibatch of data.
- If data a list of lists/arrays it returns the next minibatch of each element in the
list. This can be used to iterate through multiple data sources
(e.g., features and labels) at the same time.
"""
list_data = type(data) is list and (type(data[0]) is list or type(data[0]) is np.ndarray)
data_size = len(data[0]) if list_data else len(data)
indices = np.arange(data_size)
if shuffle:
np.random.shuffle(indices)
for minibatch_start in np.arange(0, data_size, minibatch_size):
minibatch_indices = indices[minibatch_start:minibatch_start + minibatch_size]
yield [minibatch(d, minibatch_indices) for d in data] if list_data \
else minibatch(data, minibatch_indices), minibatch_indices
# usage notes:
# for m in get_minibatches([F_train,S_train.as_matrix()],batch_size,shuffle=True):
# print len(m),m[0].shape,m[1].shape
```
### quality assurance make sure that image classification is working as advertised based on triplet features
```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn import svm
from sklearn import linear_model
# reminders:
# S1 is stroke matrix
# F1 is feature matrix
# PF is photo-feature matrix
# SF is sketch-feature matrix/
# get np.array of source photo labels
photos = np.array([sf.split('/')[2] for sf in sketch_filenames])
## get image classification within airplane class
run_this = 1
FEAT = SF_complete
LABELS = photos_complete
if run_this:
# split sketch feature data for linear classification
X_train, X_test, y_train, y_test = train_test_split(
FEAT, LABELS, test_size=0.2, random_state=0)
# check dimensionality of split data
print 'dimensionality of train/test split'
print X_train.shape, y_train.shape
print X_test.shape, y_test.shape
print ' '
cval = True
if cval==False:
# compute linear classification accuracy (takes a minute or so to run)
clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)
clf.score(X_test, y_test)
else:
# compute linear classification accuracy (takes several minutes to run)
# clf = svm.SVC(kernel='linear', C=1)
clf = linear_model.LogisticRegression(penalty='l2')
scores = cross_val_score(clf, FEAT, LABELS, cv=2)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
## SVM Accuracy: 0.41 (+/- 0.08) with cv=5 achieved on 6/26/17 on intermediate sketches
## softmax Accuracy: 0.43 (+/- 0.01) with cv=2 achieved on 9/11/17
```
#### compute pairwise euclidean distances between sketches of same class vs. different class
```
### this was done on 9/11/17 in order to debug the vgg embedding
print 'Why are there fewer sketch intermediate files than total sketch files?'
print str(len(os.listdir('./tiny/sketch/airplane'))) + ' total sketch files.'
all_sketches = os.listdir('./tiny/sketch/airplane')
all_sketches_with_intermediates = [s.split('/')[-2]+'-'+s.split('/')[-1]+'.png' for s in sketch_folders]
print str(len(all_sketches_with_intermediates)) + ' sketch files after extracting intermediates.'
missing_ones = [i for i in all_sketches if i not in all_sketches_with_intermediates]
## get just complete sketches from each sketch folder
sketch_folders = np.unique([os.path.dirname(s) for s in sketch_filenames])
complete_paths = []
SF_complete = []
photos_complete = []
for (j,s) in enumerate(sketch_folders):
complete_sketch = str(max([int(i.split('.')[0]) for i \
in os.listdir(s)])) + '.png'
complete_paths.append(os.path.join(os.path.dirname(s),complete_sketch))
SF_complete.append(SF[j])
photos_complete.append(os.path.dirname(s).split('/')[-1])
SF_complete = np.array(SF_complete)
photos_complete = np.array(photos_complete)
from sklearn.metrics.pairwise import pairwise_distances
def rmse(x):
return np.sqrt(np.sum(x**2))
euc = pairwise_distances(SF_complete,metric='euclidean')
print euc.shape
p_ind = 4
fp = 20
fig = plt.figure(figsize=(9,9))
for (_i,p_ind) in enumerate(np.arange(fp,fp+9)):
unique_photos = np.unique(photos_complete)
inds = np.where(photos_complete==unique_photos[p_ind])[0]
start = inds[0]
stop = inds[-1]
# get within-photo sketch distances
within_block = euc[start:stop+1,start:stop+1]
assert len(within_block[np.triu_indices(len(within_block),k=1)])==(len(within_block)**2-len(within_block))/2
within_distances = within_block[np.triu_indices(len(within_block),k=1)]
# get between-photo sketch distances
all_inds = np.arange(len(photos_complete))
non_matches = [i for i in all_inds if i not in inds]
_non_matches_shuff = np.random.RandomState(seed=0).permutation(non_matches)
non_matches_shuff = _non_matches_shuff[:len(inds)]
btw_distances = euc[start:stop+1,non_matches_shuff].flatten()
# plot
plt.subplot(3,3,_i+1)
h = plt.hist(within_distances,bins=20,alpha=0.3)
h = plt.hist(btw_distances,bins=20,alpha=0.3)
plt.title(str(p_ind))
plt.show()
## get image classification within airplane class
run_this = 1
FEAT = SF_complete
LABELS = photos_complete
if run_this:
# split sketch feature data for linear classification
X_train, X_test, y_train, y_test = train_test_split(
FEAT, LABELS, test_size=0.2, random_state=0)
# check dimensionality of split data
print 'dimensionality of train/test split'
print X_train.shape, y_train.shape
print X_test.shape, y_test.shape
print ' '
cval = True
if cval==False:
# compute linear classification accuracy (takes a minute or so to run)
clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)
clf.score(X_test, y_test)
else:
# compute linear classification accuracy (takes several minutes to run)
clf = svm.SVC(kernel='linear', C=1)
scores = cross_val_score(clf, SF, photos, cv=5)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
## Accuracy: 0.41 (+/- 0.08) achieved on 6/26/17
from glob import glob
def list_files(path, ext='jpg'):
result = [y for x in os.walk(path)
for y in glob(os.path.join(x[0], '*.%s' % ext))]
return result
airplane_dir = '/home/jefan/full_sketchy_dataset/sketches/airplane'
airplane_paths = list_files(airplane_dir,ext='png')
```
### draft of model for single minibatch
```
RANDOM_SEED = 42
tf.set_random_seed(RANDOM_SEED)
# reset entire graph
tf.reset_default_graph()
sess = tf.InteractiveSession()
# weight on offset loss
offset_weight = 100.
# learning rate
learning_rate = 0.01
# get minibatch
batch_size = 10
F_batch = F_train[:batch_size,:]
S_batch = S_train.head(n=batch_size)
# reserve numpy version
F_batch_array = F_batch
S_batch_array = S_batch.as_matrix().astype('float32')
# convert to tensorflow tensor
F_batch = tf.cast(tf.stack(F_batch,name='F_batch'),tf.float32)
S_batch = tf.cast(tf.stack(S_batch.as_matrix().astype('float32'),name='S_batch'),tf.float32)
# Layer's sizes
x_size = F_batch.shape[1] # Number of input nodes: 2048 features and 1 bias
h_size = 256 # Number of hidden nodes
y_size = S_batch.shape[1] # Number of outcomes (x,y,pen)
# Symbols
X = tf.placeholder("float", shape=[None, x_size])
y = tf.placeholder("float", shape=[None, y_size])
output = tf.placeholder("float", shape=[None,y_size])
# Weight initializations
W1 = tf.get_variable('W1', [x_size, h_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b1 = tf.get_variable('b1', [h_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
W2 = tf.get_variable('W2', [h_size, h_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b2 = tf.get_variable('b2', [h_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
W3 = tf.get_variable('W3', [h_size, y_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b3 = tf.get_variable('b3', [y_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
# forward propagation
fc1 = tf.nn.relu(tf.nn.xw_plus_b(F_batch, W1, b1,name='fc1'))
fc2 = tf.nn.relu(tf.nn.xw_plus_b(fc1, W2, b2,name='fc2'))
output = tf.nn.xw_plus_b(fc1, W3, b3,name='output')
actual_offset = tf.slice(S_batch,[0,0],[batch_size,2])
actual_pen = tf.slice(S_batch,[0,2],[batch_size,-1])
pred_offset = tf.multiply(tf.slice(output,[0,0],[batch_size,2]),offset_weight)
pred_pen = tf.nn.softmax(tf.slice(output,[0,2],[batch_size,-1]))
# currently doesn't properly handle the pen state loss
offset_loss = tf.reduce_sum(tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(pred_offset,actual_offset)),axis=1)))
pen_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
labels = actual_pen,
logits = pred_pen))
loss = tf.add(offset_loss,pen_loss)
# run backprop
optimizer = tf.train.AdamOptimizer(learning_rate)
train_op = optimizer.minimize(loss)
# # get predicted stroke vector
strokes = tf.concat([pred_offset, pred_pen],axis=1)
tf.global_variables_initializer().run()
# sess.close()
updates = sess.run([fc1,output,pred_offset,
pred_pen,actual_offset, actual_pen,
offset_loss,pen_loss,loss,
train_op,strokes], feed_dict={X:F_batch_array,y:S_batch_array})
fc1 = updates[0]
output = updates[1]
pred_offset = updates[2]
pred_pen = updates[3]
actual_offset = updates[4]
actual_pen = updates[5]
offset_loss = updates[6]
pen_loss = updates[7]
loss = updates[8]
train_op = updates[9]
strokes = updates[10]
```
### run multiple batches of MLP version
```
RANDOM_SEED = 42
tf.set_random_seed(RANDOM_SEED)
# reset entire graph
tf.reset_default_graph()
# weight on offset loss
offset_weight = 0.1
# amount to multiply predicted offset values by
offset_multiplier = 100.
# learning rate
learning_rate = 0.001
# set batch size
batch_size = 10
# epoch counter
epoch_num = 0
# feed in only current features, or also features on the next time step as well?
now_plus_next = True
# initialize variables
if now_plus_next:
F = tf.placeholder("float", shape=[None, 4096]) # features (input)
else:
F = tf.placeholder("float", shape=[None, 2048]) # features (input)
S = tf.placeholder("float", shape=[None, 3]) # strokes (output)
# Layer's sizes
x_size = F.shape[1] # Number of input nodes: 2048 features and 1 bias
h_size = 256 # Number of hidden nodes
y_size = S.shape[1] # Number of outcomes (x,y,pen)
output = tf.placeholder("float", shape=[None,y_size])
# convert to tensorflow tensor
F = tf.cast(tf.stack(F,name='F'),tf.float32)
S = tf.cast(tf.stack(S,name='S'),tf.float32)
# Weight initializations
W1 = tf.get_variable('W1', [x_size, h_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b1 = tf.get_variable('b1', [h_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
W2 = tf.get_variable('W2', [h_size, h_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b2 = tf.get_variable('b2', [h_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
W3 = tf.get_variable('W3', [h_size, y_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b3 = tf.get_variable('b3', [y_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
# forward propagation
fc1 = tf.nn.relu(tf.nn.xw_plus_b(F, W1, b1,name='fc1'))
fc2 = tf.nn.relu(tf.nn.xw_plus_b(fc1, W2, b2,name='fc2'))
output = tf.nn.xw_plus_b(fc1, W3, b3,name='output')
actual_offset = tf.slice(So,[0,0],[batch_size,2])
actual_pen = tf.squeeze(tf.slice(So,[0,2],[batch_size,-1]))
pred_offset = tf.multiply(tf.slice(output,[0,0],[batch_size,2]),offset_multiplier)
pred_pen = tf.squeeze(tf.slice(output,[0,2],[batch_size,-1]))
offset_loss = tf.reduce_sum(tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(pred_offset,actual_offset)),axis=1)))
pen_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
labels = actual_pen,
logits = pred_pen))
loss = tf.add(tf.multiply(offset_weight,offset_loss),pen_loss)
# run backprop
optimizer = tf.train.AdamOptimizer(learning_rate)
train_op = optimizer.minimize(loss)
# saver
save = False
saver = tf.train.Saver()
# get predicted stroke vector
strokes = tf.concat([pred_offset, tf.expand_dims(pred_pen,1)],axis=1)
# run batches
with tf.Session() as sess:
tf.global_variables_initializer().run()
for m,idx in get_minibatches([F_train,S_train.as_matrix().astype('float32')],batch_size,shuffle=True):
if m[0].shape[0]==batch_size:
F_batch = m[0]
S_batch = m[1]
# use idx to retrieve the features of the subsequent row in the feature matrix, so you
# effectively feed in sketch_so_far and sketch_so_far_plus_next_xy, as well as pen (absolute?) location and state
if (max(idx)<45040):
F_batch_next = F_train[idx+1].shape
F_now_plus_next = np.hstack((F_train[idx],F_train[idx+1]))
if (now_plus_next) & (max(idx)<45040):
updates = sess.run([offset_loss, pen_loss, loss, pred_offset], feed_dict={F:F_now_plus_next,S:S_batch})
else:
try:
updates = sess.run([offset_loss, pen_loss, loss, pred_offset], feed_dict={F:F_batch,S:S_batch})
except:
pass
offset_loss_ = updates[0]
pen_loss_ = updates[1]
loss_ = updates[2]
pred_offset_ = updates[3]
if epoch_num%200==0:
print "Epoch: " + str(epoch_num) + " | Loss: " + str(loss_) + \
" | Offset loss: " + str(offset_loss_) + " | Pen loss: " + str(pen_loss_)
# save
if save:
saver.save(sess, 'checkpoints/pix2svg_train_0')
# increment epoch number
epoch_num += 1
## meeting notes
# june 26: validate triplet network to make sure it does the task -- QA
# does it take in pen location? put in pen location, pen state
# put in sketch so far + (sketch so far +1)
# delta x, delta y -- make sure the thing it spits out, after getting squashed by tanh, or whatever, is well centered
```
### simpler version that goes from last pen offset to next pen offset
```
## now try simpler version that just tries to predict the next offset based on previous offset
RANDOM_SEED = 42
tf.set_random_seed(RANDOM_SEED)
# reset entire graph
tf.reset_default_graph()
# weight on offset loss
offset_weight = 0.8
# amount to multiply predicted offset values by
offset_multiplier = 100.
# learning rate
learning_rate = 0.001
# set batch size
batch_size = 10
# epoch counter
epoch_num = 0
# initialize variables
Si = tf.placeholder("float", shape=[None, 3]) # strokes (input)
So = tf.placeholder("float", shape=[None, 3]) # strokes (output)
# Layer's sizes
x_size = Si.shape[1] # Number of input nodes: x, y, state
h_size = 256 # Number of hidden nodes
y_size = So.shape[1] # Number of outcomes (x,y,pen)
output = tf.placeholder("float", shape=[None,y_size])
# convert to tensorflow tensor
Si = tf.cast(tf.stack(Si,name='Si'),tf.float32)
So = tf.cast(tf.stack(So,name='So'),tf.float32)
# Weight initializations
W1 = tf.get_variable('W1', [x_size, h_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b1 = tf.get_variable('b1', [h_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
W2 = tf.get_variable('W2', [h_size, h_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b2 = tf.get_variable('b2', [h_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
W3 = tf.get_variable('W3', [h_size, y_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b3 = tf.get_variable('b3', [y_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
# forward propagation
fc1 = tf.nn.relu(tf.nn.xw_plus_b(Si, W1, b1,name='fc1'))
fc2 = tf.nn.relu(tf.nn.xw_plus_b(fc1, W2, b2,name='fc2'))
output = tf.nn.xw_plus_b(fc1, W3, b3,name='output')
actual_offset = tf.slice(So,[0,0],[batch_size,2])
actual_pen = tf.squeeze(tf.slice(So,[0,2],[batch_size,-1]))
pred_offset = tf.multiply(tf.slice(output,[0,0],[batch_size,2]),offset_multiplier)
pred_pen = tf.squeeze(tf.slice(output,[0,2],[batch_size,-1]))
offset_loss = tf.reduce_sum(tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(pred_offset,actual_offset)),axis=1)))
pen_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
labels = actual_pen,
logits = pred_pen))
loss = tf.add(tf.multiply(offset_weight,offset_loss),tf.multiply(1-offset_weight,pen_loss))
# run backprop
optimizer = tf.train.AdamOptimizer(learning_rate)
train_op = optimizer.minimize(loss)
# saver
save = False
saver = tf.train.Saver()
# get predicted stroke vector
strokes = tf.concat([pred_offset, tf.expand_dims(tf.round(tf.sigmoid(pred_pen)+1),1)],axis=1)
Strokes = []
# run batches
with tf.Session() as sess:
tf.global_variables_initializer().run()
for m,idx in get_minibatches(S_train.as_matrix().astype('float32')[:len(S_train)-1],batch_size,shuffle=True):
Si_batch = m # batch of current strokes
if (max(idx)<45040):
So_batch = S_train.iloc[idx+1].as_matrix().astype('float32')
updates = sess.run([offset_loss, pen_loss, loss, pred_offset, actual_pen, pred_pen, strokes], feed_dict={Si:Si_batch,So:So_batch})
offset_loss_ = updates[0]
pen_loss_ = updates[1]
loss_ = updates[2]
pred_offset_ = updates[3]
actual_pen_ = updates[4]
pred_pen_ = updates[5]
strokes_ = updates[6]
if epoch_num%200==0:
print "Epoch: " + str(epoch_num) + " | Loss: " + str(loss_) + \
" | Offset loss: " + str(offset_loss_) + " | Pen loss: " + str(pen_loss_)
# save
if save:
saver.save(sess, 'checkpoints/pix2svg_train_svg2svg_0')
# increment epoch number
epoch_num += 1
plt.scatter(strokes_[:,0],strokes_[:,1])
plt.show()
### demo of the difference between the absolute pen position and the relative pen position
plt.figure()
inds = list(S[(S.photoID=='n02691156_10151') & (S.sketchID==0)].index)
verts = zip(S1.loc[inds].x.values,S1.loc[inds].y.values)
codes = S1.loc[inds].pen.values
path = Path(verts, codes)
patch = patches.PathPatch(path, facecolor='none', lw=2)
ax = plt.subplot(121)
ax.add_patch(patch)
ax.set_xlim(0,600)
ax.set_ylim(0,600)
ax.axis('off')
plt.gca().invert_yaxis() # y values increase as you go down in image
plt.show()
inds = list(S[(S.photoID=='n02691156_10151') & (S.sketchID==0)].index)
verts = zip(S2.loc[inds].x.values,S2.loc[inds].y.values)
codes = S2.loc[inds].pen.values
path = Path(verts, codes)
patch = patches.PathPatch(path, facecolor='none', lw=2)
ax = plt.subplot(122)
ax.add_patch(patch)
ax.set_xlim(-200,200)
ax.set_ylim(-200,200)
ax.axis('off')
plt.gca().invert_yaxis() # y values increase as you go down in image
plt.show()
```
### predict next offset on basis of previous 4 offsets
```
RANDOM_SEED = 42
tf.set_random_seed(RANDOM_SEED)
# reset entire graph
tf.reset_default_graph()
# weight on offset loss
offset_weight = 1.
# amount to multiply predicted offset values by
offset_multiplier = 100.
# learning rate
learning_rate = 0.001
# set batch size
batch_size = 10
# epoch counter
epoch_num = 0
# initialize variables
Si4 = tf.placeholder("float", shape=[None, 3]) # strokes (input) -- 4th to last
Si3 = tf.placeholder("float", shape=[None, 3]) # strokes (input) -- 3rd to last
Si2 = tf.placeholder("float", shape=[None, 3]) # strokes (input) -- 2nd to last
Si1 = tf.placeholder("float", shape=[None, 3]) # strokes (input) -- previous one
So = tf.placeholder("float", shape=[None, 3]) # strokes (output)
# Layer's sizes
x_size = Si1.shape[1]*4 # Number of input nodes: x, y, state
h_size = 512 # Number of hidden nodes
y_size = So.shape[1] # Number of outcomes (x,y,pen)
output = tf.placeholder("float", shape=[None,y_size])
# convert to tensorflow tensor
Si4 = tf.cast(tf.stack(Si4,name='Si4'),tf.float32)
Si3 = tf.cast(tf.stack(Si3,name='Si3'),tf.float32)
Si2 = tf.cast(tf.stack(Si2,name='Si2'),tf.float32)
Si1 = tf.cast(tf.stack(Si1,name='Si1'),tf.float32)
Si = tf.concat([Si4,Si3,Si2,Si1],axis=1)
So = tf.cast(tf.stack(So,name='So'),tf.float32)
# Weight initializations
W1 = tf.get_variable('W1', [x_size, h_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b1 = tf.get_variable('b1', [h_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
W2 = tf.get_variable('W2', [h_size, h_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b2 = tf.get_variable('b2', [h_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
W3 = tf.get_variable('W3', [h_size, y_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b3 = tf.get_variable('b3', [y_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
# forward propagation
fc1 = tf.nn.relu(tf.nn.xw_plus_b(Si, W1, b1,name='fc1'))
fc2 = tf.nn.relu(tf.nn.xw_plus_b(fc1, W2, b2,name='fc2'))
output = tf.nn.xw_plus_b(fc1, W3, b3,name='output')
actual_offset = tf.slice(So,[0,0],[batch_size,2])
actual_pen = tf.squeeze(tf.slice(So,[0,2],[batch_size,-1]))
pred_offset = tf.multiply(tf.slice(output,[0,0],[batch_size,2]),offset_multiplier)
pred_pen = tf.squeeze(tf.slice(output,[0,2],[batch_size,-1]))
offset_loss = tf.reduce_sum(tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(pred_offset,actual_offset)),axis=1)))
pen_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
labels = actual_pen,
logits = pred_pen))
loss = tf.add(tf.multiply(offset_weight,offset_loss),tf.multiply(1-offset_weight,pen_loss))
# run backprop
optimizer = tf.train.AdamOptimizer(learning_rate)
train_op = optimizer.minimize(loss)
# saver
save = False
saver = tf.train.Saver()
# get predicted stroke vector
strokes = tf.concat([pred_offset, tf.expand_dims(tf.round(tf.sigmoid(pred_pen)+1),1)],axis=1)
Strokes = []
# run batches
with tf.Session() as sess:
tf.global_variables_initializer().run()
for m,idx in get_minibatches(S_train.as_matrix().astype('float32')[:len(S_train)-1],batch_size,shuffle=True):
Si1_batch = m # batch of current strokes
Si2_batch = S_train.iloc[idx-1].as_matrix().astype('float32')
Si3_batch = S_train.iloc[idx-2].as_matrix().astype('float32')
Si4_batch = S_train.iloc[idx-3].as_matrix().astype('float32')
if (max(idx)<45040):
So_batch = S_train.iloc[idx+1].as_matrix().astype('float32')
updates = sess.run([offset_loss, pen_loss, loss,
pred_offset, actual_pen, pred_pen, strokes],
feed_dict={Si1:Si1_batch,Si2:Si2_batch,Si3:Si3_batch,Si4:Si4_batch,
So:So_batch})
offset_loss_ = updates[0]
pen_loss_ = updates[1]
loss_ = updates[2]
pred_offset_ = updates[3]
actual_pen_ = updates[4]
pred_pen_ = updates[5]
strokes_ = updates[6]
if epoch_num%200==0:
print "Epoch: " + str(epoch_num) + " | Loss: " + str(loss_) + \
" | Offset loss: " + str(offset_loss_) + " | Pen loss: " + str(pen_loss_)
# save
if save:
saver.save(sess, 'checkpoints/pix2svg_train_svg2svg_0')
# increment epoch number
epoch_num += 1
```
### now trying to predict mixture of gaussians rather than naive nonlinear function... because pen offsets can't be modeled by any function
reverting to trying to predict next pen offset based on single most recent pen offset
```
## now trying to predict mixture of gaussians rather than naive nonlinear function... because pen offsets can't be modeled by any function
import mdn as mdn ## import mixture density network helpers
reload(mdn)
RANDOM_SEED = 42
tf.set_random_seed(RANDOM_SEED)
# reset entire graph
tf.reset_default_graph()
# weight on offset loss
offset_weight = 0.8
# amount to multiply predicted offset values by
offset_multiplier = 100.
# learning rate
learning_rate = 0.001
# set batch size
batch_size = 10
# epoch counter
epoch_num = 0
# initialize variables
Si = tf.placeholder("float", shape=[None, 3]) # strokes (input)
So = tf.placeholder("float", shape=[None, 3]) # strokes (output)
r_cost = tf.placeholder("float", shape=[None])
x1_data = tf.placeholder("float",shape=[None])
x2_data = tf.placeholder("float",shape=[None])
pen_data = tf.placeholder("float",shape=[None])
offset_loss = tf.placeholder("float",shape=[None])
state_loss = tf.placeholder("float",shape=[None])
recon_loss = tf.placeholder("float",shape=[None])
# Layer's sizes
x_size = Si.shape[1] # Number of input nodes: x, y, state
h_size = 384 # Number of hidden nodes 6*64
# y_size = So.shape[1] # Number of outcomes (x,y,pen)
y_size = 8 ## split this into MDN parameters: first two elements are pen state logits (1 or 2), next 384/6 are for estimating the other parameters
output = tf.placeholder("float", shape=[None,y_size])
# # convert to tensorflow tensor
Si = tf.cast(tf.stack(Si,name='Si'),tf.float32)
So = tf.cast(tf.stack(So,name='So'),tf.float32)
# Weight initializations
W1 = tf.get_variable('W1', [x_size, h_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b1 = tf.get_variable('b1', [h_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
W2 = tf.get_variable('W2', [h_size, h_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b2 = tf.get_variable('b2', [h_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
W3 = tf.get_variable('W3', [h_size, y_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b3 = tf.get_variable('b3', [y_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
# forward propagation
fc1 = tf.nn.relu(tf.nn.xw_plus_b(Si, W1, b1,name='fc1'))
fc2 = tf.nn.relu(tf.nn.xw_plus_b(fc1, W2, b2,name='fc2'))
output = tf.nn.xw_plus_b(fc2, W3, b3,name='output')
# get mixture distribution parameters
out = mdn.get_mixture_coef(output)
[o_pi, o_mu1, o_mu2, o_sigma1, o_sigma2, o_corr, o_pen, o_pen_logits] = out ## each of these are the size of the batch
# get target for prediction
target = So # shape: (batch_size, 3)
[x1_data, x2_data, pen_data] = tf.split(target, 3, 1)
x1_data = tf.squeeze(x1_data) # shape (batch_size,)
x2_data = tf.squeeze(x2_data) # shape (batch_size,)
pen_data = tf.squeeze(pen_data) # shape (batch_size,)
pen_data = tf.subtract(pen_data,1) # classes need to be in the range [0, num_classes-1]
# compute reconstruction loss
offset_loss, state_loss = mdn.get_lossfunc(o_pi, o_mu1, o_mu2, o_sigma1, o_sigma2, o_corr,
o_pen_logits, x1_data, x2_data, pen_data)
offset_loss = tf.squeeze(offset_loss)
recon_loss = tf.add(offset_loss,state_loss)
loss = tf.reduce_sum(recon_loss,axis=0)
# # run backprop
optimizer = tf.train.AdamOptimizer(learning_rate)
train_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for m,idx in get_minibatches(S_train.as_matrix().astype('float32')[:len(S_train)-1],batch_size,shuffle=True):
Si_batch = m # batch of current strokes
if (max(idx)<45040):
So_batch = S_train.iloc[idx+1].as_matrix().astype('float32')
results = sess.run([o_pi, o_mu1, o_mu2, o_sigma1, o_sigma2, o_corr, o_pen, o_pen_logits, offset_loss, state_loss, recon_loss, loss], feed_dict={Si:Si_batch,So:So_batch})
_o_pi = results[0]
_o_mu1 = results[1]
_o_mu2 = results[2]
_o_sigma1 = results[3]
_o_sigma2 = results[4]
_o_corr = results[5]
_o_pen = results[6]
_o_pen_logits = results[7]
_offset_loss = results[8]
_state_loss = results[9]
_recon_loss = results[10]
_loss = results[11]
if epoch_num%100==0:
print('Epoch Num: ', epoch_num, 'Reconstruction Loss:', _loss)
epoch_num += 1
a = tf.constant([2.,2.,1.,2.,2.,1.,1.,2.,1.,2.])
b = tf.constant([1.,1.,1.,1.,1.,1.,1.,1.,1.,1.])
c = tf.reshape(a,[-1,10])
### reshape with a -1 nests a tensor, so one that is originally of shape (10,) becomes (1,10)
d = tf.split(c,10,1)
e = tf.constant([-0.4])
result = tf.nn.softmax_cross_entropy_with_logits(labels=a,logits=b)
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
print result.eval()
print a.eval(), a.get_shape()
print c.eval(), c.get_shape()
print tf.nn.softmax(c).eval()
print tf.nn.softmax(e).eval()
sess.close()
# NSAMPLE = 1000
x_data = np.float32(np.random.uniform(-10.5, 10.5, (1, NSAMPLE))).T
r_data = np.float32(np.random.normal(size=(NSAMPLE,1)))
y_data = np.float32(np.sin(0.75*x_data)*7.0+x_data*0.5+r_data*1.0)
# plt.figure(figsize=(8, 8))
# plot_out = plt.plot(x_data,y_data,'ro',alpha=0.3)
# plt.show()
# temp_data = x_data
# x_data = y_data
# y_data = temp_data
# plt.figure(figsize=(8, 8))
# plot_out = plt.plot(x_data,y_data,'ro',alpha=0.3)
# plt.show()
import mdn as mdn
mdn
x1 = tf.random_normal([1], mean=0, stddev=0.1)
x2 = tf.random_normal([1], mean=1, stddev=0.1)
mu1 = tf.constant(0., dtype=tf.float32)
mu2 = tf.constant(1., dtype=tf.float32)
s1 = tf.constant(1., dtype=tf.float32)
s2 = tf.constant(1., dtype=tf.float32)
rho = tf.constant(0., dtype=tf.float32)
result = mdn.tf_2d_normal(x1, x2, mu1, mu2, s1, s2, rho)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
print x1.eval()
print x2.eval()
print result.eval()
sess.close()
## self contained example of sinusoidal function fitting
NSAMPLE = 1000
x_data = np.float32(np.random.uniform(-10.5, 10.5, (1, NSAMPLE))).T
r_data = np.float32(np.random.normal(size=(NSAMPLE,1)))
y_data = np.float32(np.sin(0.75*x_data)*7.0+x_data*0.5+r_data*1.0)
x = tf.placeholder(dtype=tf.float32, shape=[None,1])
y = tf.placeholder(dtype=tf.float32, shape=[None,1])
NHIDDEN = 20
W = tf.Variable(tf.random_normal([1,NHIDDEN], stddev=1.0, dtype=tf.float32))
b = tf.Variable(tf.random_normal([1,NHIDDEN], stddev=1.0, dtype=tf.float32))
W_out = tf.Variable(tf.random_normal([NHIDDEN,1], stddev=1.0, dtype=tf.float32))
b_out = tf.Variable(tf.random_normal([1,1], stddev=1.0, dtype=tf.float32))
hidden_layer = tf.nn.tanh(tf.matmul(x, W) + b)
y_out = tf.matmul(hidden_layer,W_out) + b_out
lossfunc = tf.nn.l2_loss(y_out-y)
train_op = tf.train.RMSPropOptimizer(learning_rate=0.1, decay=0.8).minimize(lossfunc)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
NEPOCH = 1000
for i in range(NEPOCH):
sess.run(train_op,feed_dict={x: x_data, y: y_data})
x_test = np.float32(np.arange(-10.5,10.5,0.1))
x_test = x_test.reshape(x_test.size,1)
y_test = sess.run(y_out,feed_dict={x: x_test})
plt.figure(figsize=(8, 8))
plt.plot(x_data,y_data,'ro', x_test,y_test,'bo',alpha=0.3)
plt.show()
sess.close()
NHIDDEN = 24
STDEV = 0.5
KMIX = 24 # number of mixtures
NOUT = KMIX * 3 # pi, mu, stdev
x = tf.placeholder(dtype=tf.float32, shape=[None,1], name="x")
y = tf.placeholder(dtype=tf.float32, shape=[None,1], name="y")
Wh = tf.Variable(tf.random_normal([1,NHIDDEN], stddev=STDEV, dtype=tf.float32))
bh = tf.Variable(tf.random_normal([1,NHIDDEN], stddev=STDEV, dtype=tf.float32))
Wo = tf.Variable(tf.random_normal([NHIDDEN,NOUT], stddev=STDEV, dtype=tf.float32))
bo = tf.Variable(tf.random_normal([1,NOUT], stddev=STDEV, dtype=tf.float32))
hidden_layer = tf.nn.tanh(tf.matmul(x, Wh) + bh)
output = tf.matmul(hidden_layer,Wo) + bo
def get_mixture_coef(output):
out_pi = tf.placeholder(dtype=tf.float32, shape=[None,KMIX], name="mixparam")
out_sigma = tf.placeholder(dtype=tf.float32, shape=[None,KMIX], name="mixparam")
out_mu = tf.placeholder(dtype=tf.float32, shape=[None,KMIX], name="mixparam")
out_pi, out_sigma, out_mu = tf.split(output,3,1)
max_pi = tf.reduce_max(out_pi, 1, keep_dims=True)
out_pi = tf.subtract(out_pi, max_pi)
out_pi = tf.exp(out_pi)
normalize_pi = tf.reciprocal(tf.reduce_sum(out_pi, 1, keep_dims=True))
out_pi = tf.multiply(normalize_pi, out_pi)
out_sigma = tf.exp(out_sigma)
return out_pi, out_sigma, out_mu
out_pi, out_sigma, out_mu = get_mixture_coef(output)
NSAMPLE = 2500
y_data = np.float32(np.random.uniform(-10.5, 10.5, (1, NSAMPLE))).T
r_data = np.float32(np.random.normal(size=(NSAMPLE,1))) # random noise
x_data = np.float32(np.sin(0.75*y_data)*7.0+y_data*0.5+r_data*1.0)
oneDivSqrtTwoPI = 1 / math.sqrt(2*math.pi) # normalisation factor for gaussian, not needed.
def tf_normal(y, mu, sigma):
result = tf.subtract(y, mu)
result = tf.multiply(result,tf.reciprocal(sigma))
result = -tf.square(result)/2
return tf.multiply(tf.exp(result),tf.reciprocal(sigma))*oneDivSqrtTwoPI
def get_lossfunc(out_pi, out_sigma, out_mu, y):
result = tf_normal(y, out_mu, out_sigma)
result = tf.multiply(result, out_pi)
result = tf.reduce_sum(result, 1, keep_dims=True)
result = -tf.log(result)
return tf.reduce_mean(result)
lossfunc = get_lossfunc(out_pi, out_sigma, out_mu, y)
train_op = tf.train.AdamOptimizer().minimize(lossfunc)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
NEPOCH = 10000
loss = np.zeros(NEPOCH) # store the training progress here.
for i in range(NEPOCH):
sess.run(train_op,feed_dict={x: x_data, y: y_data})
loss[i] = sess.run(lossfunc, feed_dict={x: x_data, y: y_data})
plt.figure(figsize=(8, 8))
plt.plot(np.arange(100, NEPOCH,1), loss[100:], 'r-')
plt.show()
x_test = np.float32(np.arange(-15,15,0.1))
NTEST = x_test.size
x_test = x_test.reshape(NTEST,1) # needs to be a matrix, not a vector
def get_pi_idx(x, pdf):
N = pdf.size
accumulate = 0
for i in range(0, N):
accumulate += pdf[i]
if (accumulate > x):
return i
print 'error with sampling ensemble'
return -1
def generate_ensemble(out_pi, out_mu, out_sigma, M = 10):
NTEST = x_test.size
result = np.random.rand(NTEST, M) # initially random [0, 1]
rn = np.random.randn(NTEST, M) # normal random matrix (0.0, 1.0)
mu = 0
std = 0
idx = 0
# transforms result into random ensembles
for j in range(0, M): # mixtures
for i in range(0, NTEST): # datapoints
idx = get_pi_idx(result[i, j], out_pi[i])
mu = out_mu[i, idx]
std = out_sigma[i, idx]
result[i, j] = mu + rn[i, j]*std
return result
out_pi_test, out_sigma_test, out_mu_test = sess.run(get_mixture_coef(output), feed_dict={x: x_test})
y_test = generate_ensemble(out_pi_test, out_mu_test, out_sigma_test)
plt.figure(figsize=(8, 8))
plt.plot(x_data,y_data,'ro', x_test,y_test,'bo',alpha=0.3)
plt.show()
#####========================================================================
# actual_offset = tf.slice(So,[0,0],[batch_size,2])
# actual_pen = tf.squeeze(tf.slice(So,[0,2],[batch_size,-1]))
# pred_offset = tf.multiply(tf.slice(output,[0,0],[batch_size,2]),offset_multiplier)
# pred_pen = tf.squeeze(tf.slice(output,[0,2],[batch_size,-1]))
# offset_loss = tf.reduce_sum(tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(pred_offset,actual_offset)),axis=1)))
# pen_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
# labels = actual_pen,
# logits = pred_pen))
# loss = tf.add(tf.multiply(offset_weight,offset_loss),tf.multiply(1-offset_weight,pen_loss))
# # run backprop
# optimizer = tf.train.AdamOptimizer(learning_rate)
# train_op = optimizer.minimize(loss)
# # saver
# save = False
# saver = tf.train.Saver()
# # get predicted stroke vector
# strokes = tf.concat([pred_offset, tf.expand_dims(tf.round(tf.sigmoid(pred_pen_)+1),1)],axis=1)
# Strokes = []
# # run batches
# with tf.Session() as sess:
# tf.global_variables_initializer().run()
# for m,idx in get_minibatches(S_train.as_matrix().astype('float32')[:len(S_train)-1],batch_size,shuffle=True):
# Si_batch = m # batch of current strokes
# if (max(idx)<45040):
# So_batch = S_train.iloc[idx+1].as_matrix().astype('float32')
# updates = sess.run([offset_loss, pen_loss, loss, pred_offset, actual_pen, pred_pen, strokes], feed_dict={Si:Si_batch,So:So_batch})
# offset_loss_ = updates[0]
# pen_loss_ = updates[1]
# loss_ = updates[2]
# pred_offset_ = updates[3]
# actual_pen_ = updates[4]
# pred_pen_ = updates[5]
# strokes_ = updates[6]
# if epoch_num%200==0:
# print "Epoch: " + str(epoch_num) + " | Loss: " + str(loss_) + \
# " | Offset loss: " + str(offset_loss_) + " | Pen loss: " + str(pen_loss_)
# # save
# if save:
# saver.save(sess, 'checkpoints/pix2svg_train_svg2svg_0')
# # increment epoch number
# epoch_num += 1
```
### run rnn version
```
# RANDOM_SEED = 42
# tf.set_random_seed(RANDOM_SEED)
# # reset entire graph
# tf.reset_default_graph()
# # weight on offset loss
# offset_weight = 1000.
# # learning rate
# learning_rate = 0.01
# # set batch size
# batch_size = 10
# # epoch counter
# epoch_num = 0
# # max strokes
# max_strokes = 200
# # initialize variables
# F = tf.placeholder("float", shape=[None, 2048]) # features (input)
# S = tf.placeholder("float", shape=[None, 3]) # strokes (output)
# # layer sizes
# x_size = F.shape[1] # Number of input nodes: 2048 features and 1 bias
# h_size = 512 # Number of hidden nodes
# y_size = S.shape[1] # Number of outcomes (x,y,pen)
# # rnn hyperparameters
# rnn_hidden_size = 512 # number of rnn hidden units
# output = tf.placeholder("float", shape=[None,y_size])
# # convert to tensorflow tensor
# F = tf.cast(tf.stack(F,name='F'),tf.float32)
# S = tf.cast(tf.stack(S,name='S'),tf.float32)
# # Weight initializations
# W1 = tf.get_variable('W1', [x_size, h_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
# b1 = tf.get_variable('b1', [h_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
# W2 = tf.get_variable('W2', [h_size, y_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
# b2 = tf.get_variable('b2', [y_size],initializer=tf.zeros_initializer(), dtype=tf.float32)
# # forward propagation
# # Run RNN and run linear layer to fit to correct size.
# rnn_input = tf.nn.xw_plus_b(F, W1, b1,name='rnn_input')
# cell = tf.contrib.rnn.BasicLSTMCell(rnn_hidden_size)
# starting_state = cell.zero_state(batch_size=batch_size, dtype=tf.float32)
# outputs, final_rnn_state = tf.contrib.rnn.static_rnn(cell,
# [rnn_input]*max_strokes,
# initial_state=starting_state,
# dtype=tf.float32)
# W_hy = tf.get_variable('W_hy', [rnn_hidden_size, y_size],initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
# preds = []
# for output in outputs:
# preds.append(tf.matmul(outputs, W_hy))
# # output = tf.nn.xw_plus_b(fc1, W2, b2,name='output')
# # actual_offset = tf.slice(S,[0,0],[batch_size,2])
# # actual_pen = tf.slice(S,[0,2],[batch_size,-1])
# # pred_offset = tf.multiply(tf.slice(output,[0,0],[batch_size,2]),offset_weight)
# # pred_pen = tf.nn.softmax(tf.slice(output,[0,2],[batch_size,-1]))
# # offset_loss = tf.reduce_sum(tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(pred_offset,actual_offset)),axis=1)))
# # pen_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
# # labels = actual_pen,
# # logits = pred_pen))
# # loss = tf.add(offset_loss,pen_loss)
# # # run backprop
# # optimizer = tf.train.AdamOptimizer(learning_rate)
# # train_op = optimizer.minimize(loss)
# # saver
# save = False
# saver = tf.train.Saver()
# # # get predicted stroke vector
# # strokes = tf.concat([pred_offset, pred_pen],axis=1)
# # run batches
# with tf.Session() as sess:
# tf.global_variables_initializer().run()
# for m in get_minibatches([F_train,S_train.as_matrix()],batch_size,shuffle=True):
# if m[0].shape[0]==batch_size:
# F_batch = m[0]
# S_batch = m[1]
# updates = sess.run([preds], feed_dict={F:F_batch,S:S_batch})
# preds_ = updates[0]
# if epoch_num%200==0:
# print "Epoch: " + str(epoch_num)
# # save
# if save:
# saver.save(sess, 'checkpoints/pix2svg_train_rnn_0')
# # increment epoch number
# epoch_num += 1
# for epoch in range(50):
# # Train with each examplea
# for i in range(len(F_batch)):
# sess.run(updates, feed_dict={X: F_batch[i: i + 1], y: S_batch[i: i + 1]})
# loss = sess.run(output, feed_dict={X: F_batch, y: S_batch})
fig = plt.figure()
im = plt.matshow(np.corrcoef(F_batch_array),vmin=0.5)
plt.show()
sess.close()
# get minibatch
batch_size = 1500
F_batch = F_train[:batch_size,:]
S_batch = S_train.head(n=batch_size)
# reserve numpy version
F_batch_array = F_batch
S_batch_array = S_batch.as_matrix()
plt.matshow(np.corrcoef(F_batch_array))
plt.show()
SF_ = SF[100:110,:]
plt.matshow(np.corrcoef(SF_))
plt.show()
PF_ = PF[:batch_size,:]
plt.matshow(np.corrcoef(PF_))
plt.show()
```
|
github_jupyter
|
```
import requests as rq
import json
import pandas as pd
class scb:
def __init__(self, language='sv', levels=None, query=None):
self.language = language
self.url = 'http://api.scb.se/OV0104/v1/doris/{}/ssd/'.format(self.language)
self.levels = None
self.data = None
self.dataframe = None
self.variables = None
self.table_name = None
self.get_data(self.url, levels, query)
def get_data(self, url, levels, query):
if levels:
print('Levels:')
self.levels = levels
for l in levels:
print(l)
url += l + '/'
self.url = url
with rq.get(url) as response:
self.response = response
self.data = response.json()
if type(self.data) is list:
#self.dataframe = pd.read_json(json.dumps(response.json()), orient='records')
self.variables = None
self.table_name = None
self.dataframe = pd.read_json(response.content, orient='records')
elif type(self.data) is dict:
self.dataframe = None
self.variables = self.data['variables']
self.table_name = self.data['title']
if query:
self.query = {'query': [],
'response': {'format': 'json'}}
for q in query:
self.query['query'].append({'code': q['code'],
'selection': {'filter':
'item',
'values': q['values']}})
with rq.post(url, json=self.query) as response:
self.response = response
self.data = response.json()
index = []
columns = []
values = []
for k in scbr.data['columns']:
if k['type'] == 'c':
columns.append(k['text'])
for k in scbr.data['data']:
index.append(k['key'])
values.append(k['values'])
self.dataframe = pd.read_json(
json.dumps(
{'index': index,
'columns':columns,
'data':values}), orient='split')
else:
self.dataframe = None
# def set_query(self, )
def __repr__(self):
return str(self.data)
query = [{'code':'SNI2007', 'values':['J', 'Q']},
{'code': 'Storleksklass', 'values': ['SGR0',
'SGR1',
'SGR2',
'SGR3',
'SGR4',
'SGR5',
'SGR6',
'SGR7',
'SGR8']},
{'code': 'ContentsCode', 'values': ['000002YB']},
{'code': 'Tid', 'values': ['2018', '2019']}]
scbr = scb(levels=['NV', 'NV0101', 'FDBR07N'], query=query)
scbr.dataframe
# scbpd = pd.read_json(json.dumps({'index': [for k in scbr.data['columns'][0]]}), )
index = []
columns = []
values = []
print(scbr.data['columns'])
for k in scbr.data['columns']:
if k['type'] == 'c':
columns.append(k['text'])
for k in scbr.data['data']:
index.append(k['key'][-1])
values.append(k['values'])
js = pd.read_json(json.dumps({'index': index, 'columns':columns, 'data':values}), orient='split')
js
```
|
github_jupyter
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.