Storing the other elements on disk:
with open(os.path.join(PROCESSED_DATA, 'unit_span.lst'), 'w') as f:
    f.write(str(unit_span))
_ = joblib.dump(labels_scaler, os.path.join(PROCESSED_DATA, 'labels_scaler.joblib'))
MIT-0
nasa-turbofan-rul-lstm/notebooks/3 - Data preparation.ipynb
michaelhoarau/sagemaker-predictive-maintenance
Memory cleanup
fig.clear()
plt.close(fig)

import gc
_ = gc.collect()
On the probability of being attacked by a piece placed at random on a chessboard

Two pieces are placed at random on a chessboard. With what probability does the first piece attack the second? This notebook computes that probability for every chess piece as a function of the board size. Only square boards are considered. The pieces are assumed to be placed simultaneously (necessarily on different squares), with every square equally likely.

The degree (valency) of a vertex $v$ of a graph $G$ is the number of edges of $G$ incident to $v$. The move graph of a chess piece (the Graph, below) is the graph depicting all possible moves of the piece on the board: every vertex corresponds to a square, and the edges correspond to possible moves. The valency of a vertex of the Graph is then the number of squares the piece attacks when placed on the square corresponding to that vertex. For brevity, the text below says that "a square has a valency", although strictly speaking the valency belongs not to the square but to the corresponding vertex of the Graph.

If an event $X$ can occur only together with one of the events $H_1, H_2,..., H_n$, which form a complete group of mutually exclusive events, then the probability $P(X)$ is given by $$P(X) = P(H_1) \cdot P(X|H_1) + P(H_2) \cdot P(X|H_2) + ... + P(H_n) \cdot P(X|H_n),$$ known as the law of total probability.

Let the event $X_{piece}$ = "the first *piece* attacks the second on an $n\times n$ board", let $a$ be some valency value, $b$ the number of squares of valency $a$, and each hypothesis $H_i$ = "the first piece stands on a square of valency $a_i$". Then $P(H_i) = \frac{b_i}{n^{2}}$ by the classical definition of probability: the ratio of the number of outcomes favourable to $H_i$ to the number of all equally likely outcomes. $P(X_{piece}|H_i) = \frac{a_i}{n^{2}-1}$ for the same reason: given $H_i$, the event $X_{piece}$ is favoured by $a_i$ outcomes (the second piece stands on a square attacked from the square of valency $a_i$), while the number of equally likely outcomes has dropped by one, since one square is already occupied by the first piece.

Imports
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from string import ascii_uppercase as alphabet
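Since the whole notebook reduces to this weighted sum over valencies, it can be captured in a few lines of exact rational arithmetic. The helper below (`strike_probability` is an illustrative name, not part of the original code) takes a mapping from each valency value $a_i$ to its cell count $b_i$:

```python
from fractions import Fraction

def strike_probability(valency_counts, n):
    """Law of total probability: sum over valencies a of
    P(H) * P(X|H) = (count b / n^2) * (a / (n^2 - 1))."""
    return sum(Fraction(b, n**2) * Fraction(a, n**2 - 1)
               for a, b in valency_counts.items())

# Pawn on a 4x4 board: 4 cells of valency 0, 6 of valency 1, 6 of valency 2
p = strike_probability({0: 4, 1: 6, 2: 6}, 4)
print(p)  # 3/40
```

Working in `Fraction` keeps the check exact, avoiding the floating-point caveat noted later for large denominators.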
MIT
01-Square Boards(RUS).ipynb
esimonov/chess_prob
Board-creation function

Creates the board as an empty array of the given shape, along with an array of the same shape to store the colour of the label on each square. Board filling: 0 for black squares, 1 for white.
def get_board(board_size):
    x, y = np.meshgrid(range(board_size), range(board_size))
    board = np.empty(shape=(board_size, board_size), dtype='uint8')
    text_colors = np.empty_like(board, dtype='<U5')
    # force left bottom corner cell to be black
    if board_size % 2 == 0:
        extra_term = 1
    else:
        extra_term = 0
    for i, j in zip(x.flatten(), y.flatten()):
        board[i, j] = (i + j + extra_term) % 2
        # text color should be the opposite to a cell color
        text_colors[i, j] = 'black' if board[i, j] else 'white'
    return board, text_colors

def get_valencies(piece, board):
    # Get valencies for the given piece on the given board
    valencies = np.empty_like(board)
    if piece == 'Pawn':
        valencies = pawn(valencies)
    elif piece == 'Knight':
        valencies = knight(valencies)
    elif piece == 'Rook':
        valencies = rook(valencies)
    elif piece == 'King':
        valencies = king(valencies)
    else:
        valencies = bishop_or_queen(piece, valencies)
    return valencies
Board-drawing function

The function draws an image of the chessboard with every square labelled by its valency value.
def plot_board(board, text_colors, piece):
    board_size = np.shape(board)[0]
    x, y = np.meshgrid(range(board_size), range(board_size))
    # let figure size be dependent on the board size
    plt.figure(figsize=(3*board_size/4, 3*board_size/4))
    ax = plt.subplot(111)
    ax.imshow(board, cmap='gray', interpolation='none')
    # Display valency (degree) values
    val_board = get_valencies(piece, board)
    for i, j, valency, text_col in zip(x.flatten(), y.flatten(),
                                       val_board.flatten(), text_colors.flatten()):
        ax.text(i, j, str(valency), color=text_col,
                va='center', ha='center', fontsize=20)
    # one tick per cell
    ax.set_xticks(np.arange(board_size+1))
    # set letters as ticklabels
    ax.set_xticklabels(alphabet[:board_size])
    # one tick per cell
    ax.set_yticks(np.arange(board_size+1))
    # set numbers as ticklabels (upside down)
    ax.set_yticklabels(np.arange(board_size, 0, -1))
    # get rid of the white spaces on the edges
    ax.axis('tight')
    ax.set_title(piece, fontsize=30)
    plt.show()
Pawn

A function returning the valency array of the pawn's Graph. It illustrates the observations laid out below.
def pawn(valencies):
    valencies[0, :] = 0                       # empty horizontal line
    valencies[1:, 0] = valencies[1:, -1] = 1  # vertical edges
    valencies[1:, 1:-1] = 2
    return valencies
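Run standalone (the function is repeated here so the snippet is self-contained), a 4×4 board comes out with one 0-valent rank, 1-valent edge files and a 2-valent interior:

```python
import numpy as np

def pawn(valencies):
    valencies[0, :] = 0                       # empty horizontal line
    valencies[1:, 0] = valencies[1:, -1] = 1  # vertical edges
    valencies[1:, 1:-1] = 2
    return valencies

board = pawn(np.empty((4, 4), dtype='uint8'))
print(board)
# [[0 0 0 0]
#  [1 2 2 1]
#  [1 2 2 1]
#  [1 2 2 1]]
```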
Let us look at a few particular cases in search of a pattern.
def special_cases(piece, board_sizes):
    '''
    Plot boards of every board_size contained in the board_sizes list
    for the given piece.
    '''
    for board_size in board_sizes:
        board, text_colors = get_board(board_size=board_size)
        plot_board(board, text_colors, piece=piece)

special_cases(piece='Pawn', board_sizes=range(4, 6))
The pattern is obvious: there is always one rank (top or bottom, depending on the piece's colour) from which the pawn attacks no squares — all squares of that rank are $0$-valent, and there are $n$ of them. The outermost files hold the $1$-valent squares, $2(n-1)$ in number. All remaining squares are $2$-valent and form an $(n-1)\times(n-2)$ rectangle. Hence $$ P(X_{pawn}) = \frac{n\cdot 0}{n^{2}(n^{2}-1)} + \frac{2(n-1)\cdot 1}{n^{2}(n^{2}-1)} + \frac{(n-1)(n-2)\cdot 2}{n^{2}(n^{2}-1)}= \frac{2(n-1)({\color{Green}1}+n-{\color{Green}2})}{n^{2}(n^{2}-1)} = \frac{2(n-1)^{2}}{n^{2}(n^{2}-1)}. $$ Since $(n^{2}-1) = (n+1)(n-1)$, $$ P(X_{pawn}) = \frac{2(n-1)}{n^{2}(n+1)}. $$

Knight

A function returning the valency array of the knight's Graph
def knight(valencies):
    board_size = valencies.shape[0]
    if board_size > 3:
        # Four points in each corner are the same for any board size > 3.
        # corner cells
        valencies[0, 0] = valencies[0, -1] = \
            valencies[-1, 0] = valencies[-1, -1] = 2
        # cells horizontally/vertically adjacent to the corners
        valencies[0, 1] = valencies[1, 0] = \
            valencies[0, -2] = valencies[1, -1] = \
            valencies[-2, 0] = valencies[-1, 1] = \
            valencies[-2, -1] = valencies[-1, -2] = 3
        # cells diagonally adjacent
        valencies[1, 1] = valencies[1, -2] = \
            valencies[-2, 1] = valencies[-2, -2] = 4
        if board_size > 4:
            valencies[0, 2:-2] = valencies[2:-2, 0] = \
                valencies[2:-2, -1] = valencies[-1, 2:-2] = 4
            valencies[1, 2:-2] = valencies[2:-2, 1] = \
                valencies[2:-2, -2] = valencies[-2, 2:-2] = 6
            valencies[2:-2, 2:-2] = 8
    # Pathological cases
    elif board_size == 3:
        valencies = 2 * np.ones((board_size, board_size), dtype='uint8')
        valencies[1, 1] = 0
    else:
        valencies = np.zeros((board_size, board_size), dtype='uint8')
    return valencies

special_cases(piece='Knight', board_sizes=[4, 5, 6])
The numbers of $2$- and $3$-valent squares are fixed for every $n\geq 4$. The former sit in the corners, the latter adjoin them horizontally and vertically; hence there are $4$ squares of valency $2$ and twice as many, $8$, of valency $3$. For all $n\geq 4$ the count of $4$-valent squares forms an arithmetic progression with first term $4$ and step $4$ (each unit increase of $n$ adds one $4$-valent square per side). The count of $6$-valent squares clearly grows the same way, but such squares exist only for $n\geq 5$. Thus there are $4(n-3)$ squares of valency $4$ and $4(n-4)$ of valency $6$. The number of $8$-valent squares grows quadratically and they, too, exist only for $n\geq 5$; their count is $(n-4)^2$. Altogether we have: $$ P(X_{knight}) = \frac{4\cdot 2}{n^{2}(n^{2}-1)} + \frac{8\cdot 3}{n^{2}(n^{2}-1)} + \frac{4(n-3)\cdot 4}{n^{2}(n^{2}-1)} +$$ $$ + \frac{4(n-4)\cdot 6}{n^{2}(n^{2}-1)} = \frac{32 + 24(n-4) + 16(n-3) + 8(n-4)^{2}}{n^{2}(n^{2}-1)} + \frac{(n-4)^2\cdot 8}{n^{2}(n^{2}-1)} = $$ $$ \frac{8(4+3(n-4)+2(n-3)+(n-4)^{2})}{n^{2}(n^{2}-1)} = \frac{8({\color{Green} 4}+{\color{Red} {3n}}-{\color{Green} {12}}+{\color{Red} {2n}} - {\color{Green} 6}+ n^{2}-{\color{Red} {8n}}+{\color{Green} {16}})}{n^{2}(n^{2}-1)} = $$ $$= \frac{8(n^{2}-3n+2)}{n^{2}(n^{2}-1)} = \frac{8(n-1)(n-2)}{n^{2}(n^{2}-1)} = \frac{8(n-2)}{n^{2}(n+1)}. $$

Bishop

A function returning the valency array of the bishop's (and queen's) Graph. The valency layout for the bishop and the queen is almost identical, except that the smallest valency value for the queen is three times that of the bishop.
def bishop_or_queen(piece, valencies):
    board_size = np.shape(valencies)[0]
    if piece == 'Bishop':
        smallest_val = board_size - 1
    else:
        smallest_val = 3 * (board_size - 1)
    # external square
    valencies[0, :] = valencies[:, 0] = \
        valencies[:, -1] = valencies[-1, :] = smallest_val
    # internal squares
    for i in range(1, int(board_size/2) + 1):
        # top, left, right, bottom
        valencies[i, i:-i] = valencies[i:-i, i] = \
            valencies[i:-i, -(i+1)] = valencies[-(i+1), i:-i] = \
            smallest_val + 2*i
    return valencies

special_cases(piece='Bishop', board_sizes=range(4, 8))
Evidently, squares of equal valency lie along the perimeters of concentric squares formed by the cells. Since for even $n$ the centre of the board holds $4$ squares of maximal valency while for odd $n$ it holds a single one, it is convenient to treat even and odd $n$ separately.

Even $n$

How many distinct valency values are there, and what are they? The smallest value is $(n-1)$: the number of squares on the diagonal minus the square occupied by the piece itself. The largest value is $(n-1) + (n-2) = (2n-3)$: it exceeds the smallest by the number of squares on the diagonal of a square of side $(n-1)$, minus the square occupied by the piece. Let $s$ be the number of steps of size $2$ needed to pass from the value $(n-1)$ to the value $(2n-3)$. Then $$ n-1 + 2s = 2n-3, $$ $$ 2s = {\color{Red} {2n}} - {\color{Green} 3} - {\color{Red} n} + {\color{Green} 1} = n - 2 \Rightarrow s = \frac{n-2}{2}. $$ Since $n$ is even, $s \in \mathbb{Z}$. But because *one* step is taken between *two* distinct values, the number of distinct valency values is one more than the number of steps from the minimum to the maximum: $\frac{n-2}{2} + 1 = \frac{n}{2} - {\color{Green} 1} +{\color{Green} 1} = \frac{n}{2}.$ So a board of side $n$ carries $\frac{n}{2}$ distinct valency values — $\frac{n}{2}$ concentric squares.

In what quantity does each value occur? The number of cells along the perimeter of a cell-square of side $\lambda$ is four times the side minus the four corner cells, which would otherwise be counted twice. Hence the number of squares sharing one valency value is $4\lambda-4 = 4(\lambda-1)$, where $\lambda$ runs with step $2$ from $2$ (the central square) to $n$ (the outer one). Moreover, $\lambda$ determines not only the count but also the valency value itself: it equals the smallest valency $(n-1)$ plus $(n-\lambda)$, growing from the outer square inwards. Thus, knowing the smallest valency and the number of concentric squares, it is easy to form the sum $P(X^{even}_{bishop}) = \sum_{}P(H_i) \cdot P(X|H_i)$ in terms of $\lambda$. It is more convenient, however, to sum over an index changing with step $1$, so introduce $k$ running from $1$ (the outer square, $\lambda = n$) to $\frac{n}{2}$ (the central one, $\lambda = 2$), i.e. $\lambda = n+2-2k$. Now we can write: $$ P(X^{even}_{bishop}) = \sum_{k = 1}^{\frac{n}{2}} \frac{4(n+1-2k)\cdot(n-3+2k)} {n^{2}(n^{2}-1)} = \frac{4}{n^{2}(n^{2}-1)} \sum_{k = 1}^{\frac{n}{2}} n^{2} - {\color{Red} {3n}} + {\color{Blue} {2kn}} + {\color{Red} {n}} - 3 + {\color{Cyan} {2k}} - {\color{Blue} {2kn}} + {\color{Cyan} {6k}} - 4k^{2} = $$ $$ =\frac{4}{n^{2}(n^{2}-1)} \sum_{k = 1}^{\frac{n}{2}} n^{2} - 2n - 3 + 8k - 4k^{2}. $$ Take the first three terms, which do not depend on $k$, outside the sum, multiplying them by $\frac{n}{2}$ — the number of times they occur in it: $$ P(X^{even}_{bishop}) = \frac{4}{n^{2}(n^{2}-1)}\ [\frac{n}{2}(n^{2} - 2n - 3) + \sum_{k = 1}^{\frac{n}{2}}8k - 4k^{2}] $$ Consider the expression under the summation sign separately. $$ \sum_{k = 1}^{\frac{n}{2}}8k - 4k^{2} = 8\sum_{k = 1}^{\frac{n}{2}}k - 4\sum_{k = 1}^{\frac{n}{2}} k^{2}. $$ Denote $ S_1 = 8\sum_{k = 1}^{\frac{n}{2}}k$, $ S_2 = 4\sum_{k = 1}^{\frac{n}{2}} k^{2}. $ $S_1$ is $8$ times the sum of the first $\frac{n}{2}$ natural numbers — the sum of the first $\frac{n}{2}$ terms of an arithmetic progression — so $$ S_1 = 8\frac{\frac{n}{2}(\frac{n}{2}+1)}{2} = 4\frac{n}{2}(\frac{n}{2}+1) = 2n(\frac{n}{2}+1) = n^2 + 2n = n(n+2). $$ $S_2$ is $4$ times the sum of the squares of the first $\frac{n}{2}$ natural numbers, so $$ S_2 = 4\frac{\frac{n}{2}(\frac{n}{2}+1)(2\frac{n}{2}+1)}{6} = \frac{n(n+2)(n+1)}{6}. $$ $$ S_1 - S_2 = n(n+2) - \frac{n(n+2)(n+1)}{6} = n(n+2) (1 - \frac{(n+1)}{6}) = $$ $$ = \frac{n(n+2)({\color{Green} 6}-n-{\color{Green} 1})}{6} = \frac{n(n+2)(5-n)}{6} = -\frac{n(n+2)(n-5)}{6}.$$ Then $$ P(X^{even}_{bishop}) = \frac{4}{n^{2}(n^{2}-1)}\ [\frac{n}{2}(n^{2} - 2n - 3) - \frac{n(n+2)(n-5)}{6} ] = $$ $$ = \frac{4}{n^{2}(n^{2}-1)}\ [\frac{n(3n^{2} - 6n - 9)}{6} - \frac{n(n+2)(n-5)}{6} ] = $$ $$ = \frac{4n}{6n^{2}(n^{2}-1)}({\color{Orange} {3n^{2}}} - {\color{Red} {6n}} - {\color{Green} 9} - {\color{Orange} {n^2}} + {\color{Red} {5n}} - {\color{Red} {2n}} + {\color{Green} {10}}) = $$ $$ =\frac{2}{3n(n^{2}-1)}(2n^2 - 3n + 1) = \frac{2(2n-1)(n-1)}{3n(n^{2}-1)} = \frac{2(2n-1)}{3n(n+1)}. $$

Odd $n$

How many distinct valency values are there? The smallest value is $(n-1)$, by the same reasoning as for even $n$. The largest is obviously twice the smallest: $(n-1) + (n-1) = 2(n-1)$. Let $s$ be the number of steps of size $2$ needed to pass from $(n-1)$ to $2(n-1)$. Then $$n-1 + 2s = 2n-2,$$ $$2s = {\color{Red} {2n}} - {\color{Green} 2} - {\color{Red} n} + {\color{Green} 1} = n - 1 \Rightarrow s = \frac{n-1}{2}.$$ Since $n$ is odd, $s \in \mathbb{Z}$. Altogether there are $\frac{n-1}{2} + 1 = \frac{n}{2} - {\color{Green} {\frac{1}{2}}} +{\color{Green} 1} = \frac{n}{2} + \frac{1}{2} = \frac{n+1}{2}$ distinct valency values.

In what quantity does each value occur? The reasoning is identical for even and odd $n$, except that the expression $4(\lambda-1)$ vanishes at $\lambda = 1$ (the central square of the board). For this reason the term $P(H_{\frac{n+1}{2}}) \cdot P(X|H_{\frac{n+1}{2}})$ has to be taken outside the overall sum, and the summation index takes one fewer value: $\frac{n+1}{2} - 1 = \frac{n}{2} + \frac{1}{2} - 1 = \frac{n}{2} + {\color{Green} {\frac{1}{2}}} - {\color{Green} 1} = \frac{n}{2} - \frac{1}{2} = \frac{n-1}{2}.$ Now we can write: $$ P(X^{odd}_{bishop}) = \frac{1\cdot 2(n-1)}{n^{2}(n^{2}-1)} + \sum_{k = 1}^{\frac{n-1}{2}} \frac{4(n+1-2k)\cdot(n-3+2k)} {n^{2}(n^{2}-1)}. $$ It is easy to see that the expression under the summation sign differs from the even case only in the upper summation limit. Hence, following the previous derivation, denote: $ S_1 = 8\sum_{k = 1}^{\frac{n-1}{2}}k$, $ S_2 = 4\sum_{k = 1}^{\frac{n-1}{2}} k^{2}. $ $$ S_1 = 8\frac{\frac{n-1}{2}(\frac{n-1}{2}+1)}{2} = 4\frac{n-1}{2}(\frac{n+1}{2}) = (n-1)(n+1). $$ $$ S_2 = 4\frac{\frac{n-1}{2}(\frac{n-1}{2}+1)(2\frac{n-1}{2}+1)}{6} = \frac{(n-1)(\frac{n+1}{2})n}{3} = \frac{(n-1)(n+1)n}{6}. $$ $$ S_1 - S_2 = (n-1)(n+1) - \frac{(n-1)(n+1)n}{6} = (n-1)(n+1)(1 - \frac{n}{6}) = \frac{(n-1)(n+1)(6 - n)}{6} = -\frac{(n-1)(n+1)(n-6)}{6}. $$ Then $$ P(X^{odd}_{bishop}) = \frac{2(n-1)}{n^{2}(n^{2}-1)} + \frac{4}{n^{2}(n^{2}-1)}\ [\frac{n-1}{2}(n^{2} - 2n - 3) -\frac{(n-1)(n+1)(n-6)}{6}] = $$ $$ =\frac{2}{n^{2}(n+1)} + \frac{4(n-1)}{n^{2}(n^{2}-1)} [\frac{3n^2 - 6n - 9}{6} -\frac{(n+1)(n-6)}{6}] = $$ $$ =\frac{2}{n^{2}(n+1)} + \frac{4}{6n^{2}(n+1)}({\color{Orange} {3n^2}} - {\color{Red} {6n}} - {\color{Green} 9} - {\color{Orange} {n^2}} + {\color{Red} {6n}} - {\color{Red} n} + {\color{Green} 6}) = $$ $$ =\frac{2}{n^{2}(n+1)} + \frac{4}{6n^{2}(n+1)}(2n^2 - n - 3) = \frac{{\color{Green} {12}} + 8n^2 - 4n - {\color{Green} {12}}}{6n^{2}(n+1)} = \frac{4n(2n-1)}{6n^{2}(n+1)} = \frac{2(2n-1)}{3n(n+1)}. $$ As we can see, the parity of the board does not affect the probability in question: $P(X^{even}_{bishop}) = P(X^{odd}_{bishop}) = P(X_{bishop}) = \frac{2(2n-1)}{3n(n+1)}$.

Rook

A function returning the valency array of the rook's Graph
def rook(valencies):
    board_size = np.shape(valencies)[0]
    x, y = np.meshgrid(range(board_size), range(board_size))
    for i, j in zip(x.flatten(), y.flatten()):
        valencies[i, j] = 2 * (board_size - 1)
    return valencies

special_cases(piece='Rook', board_sizes=range(4, 6))
A well-known peculiarity of the rook: wherever it stands on the board, it always controls the same number of squares, namely $2(n-1)$ — the sum of the squares along its rank and file, minus the square the rook itself occupies. $$P(X_{rook}) = \frac{n^{2}\cdot 2(n-1)}{n^{2}(n^{2}-1)} = \frac{2}{(n+1)}.$$

Queen
special_cases(piece='Queen', board_sizes=range(4,8))
Since the queen combines the powers of the bishop and the rook, its expression can be obtained as the sum of the expressions for those two pieces: $$ P(X_{queen}) = \frac{2(2n-1)}{3n(n+1)} + \frac{2}{n+1} = \frac{2(2n-1) + 6n}{3n(n+1)} = \frac{{\color{Red} {4n}} - 2 + {\color{Red} {6n}}}{3n(n+1)} = \frac{10n - 2}{3n(n+1)} = \frac{2(5n-1)}{3n(n+1)}. $$

King

A function returning the valency array of the king's Graph
def king(valencies):
    # corners: top left = top right = bottom left = bottom right
    valencies[0, 0] = valencies[0, -1] = \
        valencies[-1, 0] = valencies[-1, -1] = 3
    # edges: top, left, right, bottom
    valencies[0, 1:-1] = valencies[1:-1, 0] = \
        valencies[1:-1, -1] = valencies[-1, 1:-1] = 5
    # center
    valencies[1:-1, 1:-1] = 8
    return valencies

special_cases(piece='King', board_sizes=range(4, 6))
One can see that the edges of the board, apart from the $3$-valent corners, are $5$-valent, while all the remaining space is $8$-valent. Since there are $4$ edges with $(n-2)$ $5$-valent squares on each, we have: $$ P(X_{king}) = \frac{4\cdot 3}{n^{2}(n^{2}-1)} +\frac{4(n-2)\cdot 5}{n^{2}(n^{2}-1)} +\frac{(n-2)^2\cdot 8}{n^{2}(n^{2}-1)} = \frac{12 + 20(n-2) + 8(n-2)^2}{n^{2}(n^{2}-1)} = $$$$ = \frac{4(3 + 5(n-2)+2(n-2)^2)}{n^{2}(n^{2}-1)} = \frac{4(3 + 5n-10+2(n^2 - 4n + 4))}{n^{2}(n^{2}-1)} = \frac{4({\color{Green} 3} + {\color{Red} {5n}}-{\color{Green} {10}}+2n^2 - {\color{Red} {8n}} + {\color{Green} {8}} )}{n^{2}(n^{2}-1)} = $$ $$ =\frac{4(2n^2 - 3n + 1)}{n^{2}(n^{2}-1)} = \frac{4(2n-1)(n-1)}{n^{2}(n^{2}-1)} = \frac{4(2n-1)}{n^{2}(n+1)}. $$

A function returning the value of $P(X_{piece})$
def get_probabilities(piece, n):
    # NOTE: Results can be incorrect for large n because of dividing by
    # the huge denominator!
    if piece == 'Pawn':
        return 2*(n-1) / ((n**2) * (n+1))
    elif piece == 'Knight':
        return 8*(n-2) / ((n**2) * (n+1))
    elif piece == 'Bishop':
        return 2*(2*n-1) / (3*n*(n+1))
    elif piece == 'Rook':
        return 2 / (n+1)
    elif piece == 'Queen':
        return 2*(5*n-1) / (3*n*(n+1))
    elif piece == 'King':
        return 4*(2*n-1) / (n**2 * (n+1))
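The closed forms can also be confirmed exactly, without floating point, by plugging the cell counts derived for each piece into the law of total probability with rational arithmetic. The helper name `direct_prob` below is illustrative, not part of the original code:

```python
from fractions import Fraction

def direct_prob(counts, n):
    """Law of total probability over (valency, cell-count) pairs."""
    return sum(Fraction(b, n**2) * Fraction(a, n**2 - 1) for a, b in counts)

for n in range(4, 40):
    # knight: corners, cells next to corners, then 4-, 6- and 8-valent cells
    knight = [(2, 4), (3, 8), (4, 4*(n-3)), (6, 4*(n-4)), (8, (n-4)**2)]
    assert direct_prob(knight, n) == Fraction(8*(n-2), n**2 * (n+1))

    # king: corners, edges, interior
    king = [(3, 4), (5, 4*(n-2)), (8, (n-2)**2)]
    assert direct_prob(king, n) == Fraction(4*(2*n-1), n**2 * (n+1))

    # bishop: concentric squares; odd boards add the single centre cell
    if n % 2 == 0:
        bishop = [(n-3+2*k, 4*(n+1-2*k)) for k in range(1, n//2 + 1)]
    else:
        bishop = [(2*(n-1), 1)] + [(n-3+2*k, 4*(n+1-2*k)) for k in range(1, (n-1)//2 + 1)]
    assert direct_prob(bishop, n) == Fraction(2*(2*n-1), 3*n*(n+1))

    # queen = bishop + rook
    assert Fraction(2*(5*n-1), 3*n*(n+1)) == Fraction(2*(2*n-1), 3*n*(n+1)) + Fraction(2, n+1)

print("closed forms confirmed exactly")
```

This sidesteps the floating-point caveat in the NOTE above, since `Fraction` comparisons are exact at any board size.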
To check the analytical results, we compute the probability head-on — directly from the valency array.
def straightforward_prob(piece, board_size):
    # Get probability directly from the board of valencies
    board, _ = get_board(board_size)
    val_board = get_valencies(piece, board)
    unique, counts = np.unique(val_board, return_counts=True)
    prob = np.dot(unique, counts) / ((board_size)**2 * (board_size**2 - 1))
    return prob
The plot of probability versus board size is drawn as a function of a real variable for the sake of visual clarity.
start = 2
end = 16
step = 0.02
x = np.arange(start, end)
names_list = ['Pawn', 'Knight', 'Bishop', 'Rook', 'Queen', 'King']

# Check if analytical results match straightforward calculations
for name in names_list:
    for board_size in x:
        y = get_probabilities(name, board_size)
        if not y == straightforward_prob(name, board_size):
            print('Mistake in equation for %s' % name)
print('Analytical results approved')

# Let's expand the range from Z to R for the sake of visual clarity
x = np.arange(start, end, step)
fig, ax = plt.subplots(figsize=(10, 8))
for name in names_list:
    y = get_probabilities(name, x)
    plt.plot(x, y, label=name, linewidth=3.0)
legend = plt.legend(loc='upper right')
for label in legend.get_lines():
    label.set_linewidth(3)
for label in legend.get_texts():
    label.set_fontsize(26)
plt.xlabel("Size of a board", fontsize=20)
plt.ylabel("Probability", fontsize=20)
plt.show()
Analytical results approved
Collaborative Filtering on Google Analytics Data

Learning objectives
1. Prepare the user-item matrix and use it with WALS.
2. Train a `WALSMatrixFactorization` within TensorFlow locally and on AI Platform.
3. Visualize the embedding vectors with principal components analysis.

Overview

This notebook demonstrates how to implement a WALS matrix factorization approach to do collaborative filtering. Each learning objective will correspond to a __TODO__ in the [student lab notebook](../labs/wals.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
import os

PROJECT = "cloud-training-demos"    # REPLACE WITH YOUR PROJECT ID
BUCKET = "cloud-training-demos-ml"  # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1"              # REPLACE WITH YOUR BUCKET REGION e.g. us-central1

# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "1.15"

%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION

import tensorflow as tf
print(tf.__version__)
1.15.5
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/wals.ipynb
ssensalo/training-data-analyst
Create raw dataset

For collaborative filtering, you don't need to know anything about either the users or the content. Essentially, all you need to know is userId, itemId, and the rating that the particular user gave the particular item. In this case, you are working with newspaper articles. The company doesn't ask its users to rate the articles; however, you can use the time spent on the page as a proxy for a rating. Normally, you would also add a time filter to this ("latest 7 days"), but this dataset is itself limited to a few days.
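The duration-to-rating scaling used below (0.3 × duration / median, capped at 1) can be sketched on toy numbers; the durations here are made up for illustration:

```python
import numpy as np

durations = np.array([10.0, 20.0, 40.0, 400.0])  # made-up session durations
median = np.median(durations)                    # 30.0 for this toy data
# scale so the median session maps to 0.3, then cap at 1.0
ratings = np.minimum(0.3 * durations / median, 1.0)
```

The cap keeps a handful of very long sessions (tab left open, for instance) from dominating the rating scale.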
from google.cloud import bigquery

bq = bigquery.Client(project=PROJECT)

sql = """
WITH CTE_visitor_page_content AS (
    SELECT
        # Schema: https://support.google.com/analytics/answer/3437719?hl=en
        # For a completely unique visit-session ID, you combine fullVisitorId and visitNumber:
        CONCAT(fullVisitorID, '-', CAST(visitNumber AS STRING)) AS visitorId,
        (SELECT MAX(IF(index = 10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS latestContentId,
        (LEAD(hits.time, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) - hits.time) AS session_duration
    FROM `cloud-training-demos.GA360_test.ga_sessions_sample`, UNNEST(hits) AS hits
    WHERE
        # only include hits on pages
        hits.type = "PAGE"
    GROUP BY fullVisitorId, visitNumber, latestContentId, hits.time
)
-- Aggregate web stats
SELECT
    visitorId,
    latestContentId AS contentId,
    SUM(session_duration) AS session_duration
FROM CTE_visitor_page_content
WHERE latestContentId IS NOT NULL
GROUP BY visitorId, latestContentId
HAVING session_duration > 0
"""

df = bq.query(sql).to_dataframe()
df.head()

stats = df.describe()
stats

df[["session_duration"]].plot(kind="hist", logy=True, bins=100, figsize=[8, 5])

# The rating is the session_duration scaled to be in the range 0-1. This will help with training.
median = stats.loc["50%", "session_duration"]
df["rating"] = 0.3 * df["session_duration"] / median
df.loc[df["rating"] > 1, "rating"] = 1
df[["rating"]].plot(kind="hist", logy=True, bins=100, figsize=[8, 5])

del df["session_duration"]

%%bash
rm -rf data
mkdir data

# TODO 1: Write object to a comma-separated values (csv) file.
df.to_csv(path_or_buf="data/collab_raw.csv", index=False, header=False)

!head data/collab_raw.csv
1012012094517511217-1,299949290,1.0
1614877438160481626-349,29455408,1.0
1815317450115839569-235,299807267,0.8569622728727375
1950053456137072413-10,243386345,0.5474309284215766
2179122255490468209-180,299925086,0.03222095793478647
2477455524323251766-1,299936493,0.3514756604914127
277149937495438413-384,299777707,0.1444742737261316
2912361176772133138-147,299796840,1.0
3868650583032521586-577,299814183,0.5796732212068414
4086929260756256102-380,299946732,1.0
Create dataset for WALS

The raw dataset (above) won't work for WALS: the userId and itemId have to be 0, 1, 2, ..., so you need to create a mapping from visitorId (in the raw data) to userId and from contentId (in the raw data) to itemId. You will need to save this mapping to a file, because at prediction time you'll need to know how to map the contentId in the table above to the itemId. You'll need two files: a "rows" dataset where all the items for a particular user are listed, and a "columns" dataset where all the users for a particular item are listed.

Mapping
import pandas as pd
import numpy as np

def create_mapping(values, filename):
    with open(filename, 'w') as ofp:
        value_to_id = {value: idx for idx, value in enumerate(values.unique())}
        for value, idx in value_to_id.items():
            ofp.write("{},{}\n".format(value, idx))
    return value_to_id

df = pd.read_csv(filepath_or_buffer="data/collab_raw.csv",
                 header=None,
                 names=["visitorId", "contentId", "rating"],
                 # np.float is deprecated in newer NumPy releases; plain float is equivalent
                 dtype={"visitorId": str, "contentId": str, "rating": float})
df.to_csv(path_or_buf="data/collab_raw.csv", index=False, header=False)

user_mapping = create_mapping(df["visitorId"], "data/users.csv")
item_mapping = create_mapping(df["contentId"], "data/items.csv")

!head -3 data/*.csv

df["userId"] = df["visitorId"].map(user_mapping.get)
df["itemId"] = df["contentId"].map(item_mapping.get)
mapped_df = df[["userId", "itemId", "rating"]]
mapped_df.to_csv(path_or_buf="data/collab_mapped.csv", index=False, header=False)
mapped_df.head()
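The same dense enumeration that `create_mapping` builds can also be obtained with `pandas.factorize`, which returns integer codes in order of first appearance plus the array of unique values. A toy sketch with made-up IDs:

```python
import pandas as pd

# toy visitor/content IDs (made up for illustration)
df = pd.DataFrame({"visitorId": ["a-1", "b-2", "a-1", "c-3"],
                   "contentId": ["x", "y", "x", "y"]})
# codes are assigned 0, 1, 2, ... in order of first appearance
df["userId"], unique_users = pd.factorize(df["visitorId"])
df["itemId"], unique_items = pd.factorize(df["contentId"])
print(df[["userId", "itemId"]].values.tolist())  # [[0, 0], [1, 1], [0, 0], [2, 1]]
```

The `unique_users`/`unique_items` arrays play the role of the saved mapping files: position `i` holds the original ID for code `i`.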
Creating rows and columns datasets
import pandas as pd
import numpy as np

mapped_df = pd.read_csv(filepath_or_buffer="data/collab_mapped.csv",
                        header=None,
                        names=["userId", "itemId", "rating"])
mapped_df.head()

NITEMS = np.max(mapped_df["itemId"]) + 1
NUSERS = np.max(mapped_df["userId"]) + 1
mapped_df["rating"] = np.round(mapped_df["rating"].values, 2)
print("{} items, {} users, {} interactions".format(NITEMS, NUSERS, len(mapped_df)))

grouped_by_items = mapped_df.groupby("itemId")
iter = 0
for item, grouped in grouped_by_items:
    print(item, grouped["userId"].values, grouped["rating"].values)
    iter = iter + 1
    if iter > 5:
        break

import tensorflow as tf

grouped_by_items = mapped_df.groupby("itemId")
with tf.python_io.TFRecordWriter("data/users_for_item") as ofp:
    for item, grouped in grouped_by_items:
        example = tf.train.Example(features=tf.train.Features(feature={
            "key": tf.train.Feature(int64_list=tf.train.Int64List(value=[item])),
            "indices": tf.train.Feature(int64_list=tf.train.Int64List(value=grouped["userId"].values)),
            "values": tf.train.Feature(float_list=tf.train.FloatList(value=grouped["rating"].values))
        }))
        ofp.write(example.SerializeToString())

grouped_by_users = mapped_df.groupby("userId")
with tf.python_io.TFRecordWriter("data/items_for_user") as ofp:
    for user, grouped in grouped_by_users:
        example = tf.train.Example(features=tf.train.Features(feature={
            "key": tf.train.Feature(int64_list=tf.train.Int64List(value=[user])),
            "indices": tf.train.Feature(int64_list=tf.train.Int64List(value=grouped["itemId"].values)),
            "values": tf.train.Feature(float_list=tf.train.FloatList(value=grouped["rating"].values))
        }))
        ofp.write(example.SerializeToString())

!ls -lrt data
total 36580
-rw-r--r-- 1 jupyter jupyter 14124035 May 16 17:32 collab_raw.csv
-rw-r--r-- 1 jupyter jupyter  3529114 May 16 17:32 users.csv
-rw-r--r-- 1 jupyter jupyter    82083 May 16 17:32 items.csv
-rw-r--r-- 1 jupyter jupyter  7667828 May 16 17:32 collab_mapped.csv
-rw-r--r-- 1 jupyter jupyter  2296476 May 16 17:33 users_for_item
-rw-r--r-- 1 jupyter jupyter  9743438 May 16 17:33 items_for_user
To summarize, you created the following data files from `collab_raw.csv`:

- `collab_mapped.csv` is essentially the same data as in `collab_raw.csv`, except that `visitorId` and `contentId`, which are business-specific, have been mapped to `userId` and `itemId`, which are enumerated 0, 1, 2, .... The mappings themselves are stored in `items.csv` and `users.csv` so that they can be used during inference.
- `users_for_item` contains all the users/ratings for each item in TFExample format.
- `items_for_user` contains all the items/ratings for each user in TFExample format.

Train with WALS

Once you have the dataset, do matrix factorization with WALS using the [WALSMatrixFactorization](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/factorization/WALSMatrixFactorization) in the contrib directory. This is an estimator model, so it should be relatively familiar. As usual, you write an input_fn to provide the data to the model, and then create the Estimator to do train_and_evaluate. Because it is in contrib and hasn't moved over to tf.estimator yet, you use tf.contrib.learn.Experiment to handle the training loop.
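As background for what WALS optimises, the alternating-least-squares idea can be sketched in a few lines of NumPy. This is plain, unweighted ALS on a dense toy matrix — an illustration of the alternating updates only, not the `WALSMatrixFactorization` API or its weighting scheme, and all the numbers are made up:

```python
import numpy as np

# Toy ratings matrix; here every entry (including zeros) is treated as observed,
# which is the unweighted simplification WALS generalises away from.
R = np.array([[1.0, 0.8, 0.0],
              [0.9, 0.0, 0.1],
              [0.0, 0.2, 1.0]])
k, lam = 2, 0.05                       # latent dimension, ridge regularisation
rng = np.random.default_rng(0)
U = rng.normal(size=(R.shape[0], k))   # user factors
V = rng.normal(size=(R.shape[1], k))   # item factors
for _ in range(100):
    # alternate: solve for U with V fixed, then for V with U fixed
    U = R @ V @ np.linalg.inv(V.T @ V + lam * np.eye(k))
    V = R.T @ U @ np.linalg.inv(U.T @ U + lam * np.eye(k))
err = np.linalg.norm(R - U @ V.T)
```

Each half-step is a ridge-regularised least-squares solve, which is why the overall objective decreases monotonically even though it is non-convex in (U, V) jointly.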
import os
import tensorflow as tf
from tensorflow.python.lib.io import file_io
from tensorflow.contrib.factorization import WALSMatrixFactorization

def read_dataset(mode, args):
    # TODO 2: Decode the example
    def decode_example(protos, vocab_size):
        features = {
            "key": tf.FixedLenFeature(shape=[1], dtype=tf.int64),
            "indices": tf.VarLenFeature(dtype=tf.int64),
            "values": tf.VarLenFeature(dtype=tf.float32)}
        parsed_features = tf.parse_single_example(serialized=protos, features=features)
        values = tf.sparse_merge(sp_ids=parsed_features["indices"],
                                 sp_values=parsed_features["values"],
                                 vocab_size=vocab_size)
        # Save key to remap after batching.
        # This is a temporary workaround to assign correct row numbers in each batch.
        # You can ignore details of this part and remap_keys().
        key = parsed_features["key"]
        decoded_sparse_tensor = tf.SparseTensor(
            indices=tf.concat(values=[values.indices, [key]], axis=0),
            values=tf.concat(values=[values.values, [0.0]], axis=0),
            dense_shape=values.dense_shape)
        return decoded_sparse_tensor

    def remap_keys(sparse_tensor):
        # Current indices of your SparseTensor that you need to fix
        bad_indices = sparse_tensor.indices  # shape = (current_batch_size * (number_of_items/users[i] + 1), 2)
        # Current values of your SparseTensor that you need to fix
        bad_values = sparse_tensor.values  # shape = (current_batch_size * (number_of_items/users[i] + 1),)

        # Since the batch is ordered, the last value for a batch index is the user.
        # Find where the batch index changes to extract the user rows: 1 where user, else 0
        user_mask = tf.concat(values=[bad_indices[1:, 0] - bad_indices[:-1, 0],
                                      tf.constant(value=[1], dtype=tf.int64)],
                              axis=0)  # shape = (current_batch_size * (number_of_items/users[i] + 1),)

        # Mask out the user rows from the values
        good_values = tf.boolean_mask(tensor=bad_values,
                                      mask=tf.equal(x=user_mask, y=0))  # shape = (current_batch_size * number_of_items/users[i],)
        item_indices = tf.boolean_mask(tensor=bad_indices,
                                       mask=tf.equal(x=user_mask, y=0))  # shape = (current_batch_size * number_of_items/users[i], 2)
        user_indices = tf.boolean_mask(tensor=bad_indices,
                                       mask=tf.equal(x=user_mask, y=1))[:, 1]  # shape = (current_batch_size,)

        good_user_indices = tf.gather(params=user_indices,
                                      indices=item_indices[:, 0])  # shape = (current_batch_size * number_of_items/users[i],)

        # User and item indices are rank 1; expand to rank 2 to concat
        good_user_indices_expanded = tf.expand_dims(input=good_user_indices, axis=-1)  # shape = (current_batch_size * number_of_items/users[i], 1)
        good_item_indices_expanded = tf.expand_dims(input=item_indices[:, 1], axis=-1)  # shape = (current_batch_size * number_of_items/users[i], 1)
        good_indices = tf.concat(values=[good_user_indices_expanded, good_item_indices_expanded],
                                 axis=1)  # shape = (current_batch_size * number_of_items/users[i], 2)

        remapped_sparse_tensor = tf.SparseTensor(indices=good_indices,
                                                 values=good_values,
                                                 dense_shape=sparse_tensor.dense_shape)
        return remapped_sparse_tensor

    def parse_tfrecords(filename, vocab_size):
        if mode == tf.estimator.ModeKeys.TRAIN:
            num_epochs = None  # indefinitely
        else:
            num_epochs = 1  # end-of-input after this

        files = tf.gfile.Glob(filename=os.path.join(args["input_path"], filename))

        # Create dataset from file list
        dataset = tf.data.TFRecordDataset(files)
        dataset = dataset.map(map_func=lambda x: decode_example(x, vocab_size))
        dataset = dataset.repeat(count=num_epochs)
        dataset = dataset.batch(batch_size=args["batch_size"])
        dataset = dataset.map(map_func=lambda x: remap_keys(x))
        return dataset.make_one_shot_iterator().get_next()

    def _input_fn():
        features = {
            WALSMatrixFactorization.INPUT_ROWS: parse_tfrecords("items_for_user", args["nitems"]),
            WALSMatrixFactorization.INPUT_COLS: parse_tfrecords("users_for_item", args["nusers"]),
            WALSMatrixFactorization.PROJECT_ROW: tf.constant(True)
        }
        return features, None

    return _input_fn
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/wals.ipynb
ssensalo/training-data-analyst
This code is helpful in developing the input function. You don't need it in production.
def try_out():
    with tf.Session() as sess:
        fn = read_dataset(
            mode = tf.estimator.ModeKeys.EVAL,
            args = {"input_path": "data", "batch_size": 4, "nitems": NITEMS, "nusers": NUSERS})
        feats, _ = fn()
        print(feats["input_rows"].eval())
        print(feats["input_rows"].eval())

try_out()

def find_top_k(user, item_factors, k):
    all_items = tf.matmul(a = tf.expand_dims(input = user, axis = 0), b = tf.transpose(a = item_factors))
    topk = tf.nn.top_k(input = all_items, k = k)
    return tf.cast(x = topk.indices, dtype = tf.int64)

def batch_predict(args):
    import numpy as np
    with tf.Session() as sess:
        estimator = tf.contrib.factorization.WALSMatrixFactorization(
            num_rows = args["nusers"],
            num_cols = args["nitems"],
            embedding_dimension = args["n_embeds"],
            model_dir = args["output_dir"])

        # This is how you would get the row factors for out-of-vocab user data
        # row_factors = list(estimator.get_projections(input_fn=read_dataset(tf.estimator.ModeKeys.EVAL, args)))
        # user_factors = tf.convert_to_tensor(np.array(row_factors))

        # But for in-vocab data, the row factors are already in the checkpoint
        user_factors = tf.convert_to_tensor(value = estimator.get_row_factors()[0]) # (nusers, nembeds)
        # In either case, you have to assume catalog doesn't change, so col_factors are read in
        item_factors = tf.convert_to_tensor(value = estimator.get_col_factors()[0]) # (nitems, nembeds)

        # For each user, find the top K items
        topk = tf.squeeze(input = tf.map_fn(fn = lambda user: find_top_k(user, item_factors, args["topk"]),
                                            elems = user_factors, dtype = tf.int64))
        with file_io.FileIO(os.path.join(args["output_dir"], "batch_pred.txt"), mode = 'w') as f:
            for best_items_for_user in topk.eval():
                f.write(",".join(str(x) for x in best_items_for_user) + '\n')

def train_and_evaluate(args):
    train_steps = int(0.5 + (1.0 * args["num_epochs"] * args["nusers"]) / args["batch_size"])
    steps_in_epoch = int(0.5 + args["nusers"] / args["batch_size"])
    print("Will train for {} steps, evaluating once every {} steps".format(train_steps, steps_in_epoch))

    def experiment_fn(output_dir):
        return tf.contrib.learn.Experiment(
            tf.contrib.factorization.WALSMatrixFactorization(
                num_rows = args["nusers"],
                num_cols = args["nitems"],
                embedding_dimension = args["n_embeds"],
                model_dir = args["output_dir"]),
            train_input_fn = read_dataset(tf.estimator.ModeKeys.TRAIN, args),
            eval_input_fn = read_dataset(tf.estimator.ModeKeys.EVAL, args),
            train_steps = train_steps,
            eval_steps = 1,
            min_eval_frequency = steps_in_epoch
        )

    from tensorflow.contrib.learn.python.learn import learn_runner
    learn_runner.run(experiment_fn = experiment_fn, output_dir = args["output_dir"])

    batch_predict(args)

import shutil
shutil.rmtree(path = "wals_trained", ignore_errors=True)

train_and_evaluate({
    "output_dir": "wals_trained",
    "input_path": "data/",
    "num_epochs": 0.05,
    "nitems": NITEMS,
    "nusers": NUSERS,
    "batch_size": 512,
    "n_embeds": 10,
    "topk": 3
})

!ls wals_trained
!head wals_trained/batch_pred.txt
5526,129,885 4712,2402,2620 2710,139,1001 885,5526,3450 202,370,2839 4972,1085,4505 4896,1687,2833 3418,5095,2702 3537,479,4896 1001,4516,2347
## Run as a Python module

Let's run it as a Python module for just a few steps.
os.environ["NITEMS"] = str(NITEMS)
os.environ["NUSERS"] = str(NUSERS)

%%bash
rm -rf wals.tar.gz wals_trained
gcloud ai-platform local train \
    --module-name=walsmodel.task \
    --package-path=${PWD}/walsmodel \
    -- \
    --output_dir=${PWD}/wals_trained \
    --input_path=${PWD}/data \
    --num_epochs=0.01 --nitems=${NITEMS} --nusers=${NUSERS} \
    --job-dir=./tmp
Will train for 2 steps, evaluating once every 236 steps
## Run on Cloud
%%bash
gsutil -m cp data/* gs://${BUCKET}/wals/data

%%bash
OUTDIR=gs://${BUCKET}/wals/model_trained
JOBNAME=wals_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
    --region=$REGION \
    --module-name=walsmodel.task \
    --package-path=${PWD}/walsmodel \
    --job-dir=$OUTDIR \
    --staging-bucket=gs://$BUCKET \
    --scale-tier=BASIC_GPU \
    --runtime-version=$TFVERSION \
    -- \
    --output_dir=$OUTDIR \
    --input_path=gs://${BUCKET}/wals/data \
    --num_epochs=10 --nitems=${NITEMS} --nusers=${NUSERS}
gs://qwiklabs-gcp-01-7f6e984e70dc/wals/model_trained us-central1 wals_220516_174236 jobId: wals_220516_174236 state: QUEUED
This will take 10 minutes to complete. Rerun the above command until the job gets submitted.

## Get row and column factors

Once you have a trained WALS model, you can get row and column factors (user and item embeddings) from the checkpoint file. You'll look at how to use these in the section on building a recommendation system using deep neural networks.
def get_factors(args):
    with tf.Session() as sess:
        estimator = tf.contrib.factorization.WALSMatrixFactorization(
            num_rows = args["nusers"],
            num_cols = args["nitems"],
            embedding_dimension = args["n_embeds"],
            model_dir = args["output_dir"])
        row_factors = estimator.get_row_factors()[0]
        col_factors = estimator.get_col_factors()[0]
        return row_factors, col_factors

args = {
    "output_dir": "gs://{}/wals/model_trained".format(BUCKET),
    "nitems": NITEMS,
    "nusers": NUSERS,
    "n_embeds": 10
}

user_embeddings, item_embeddings = get_factors(args)
print(user_embeddings[:3])
print(item_embeddings[:3])
INFO:tensorflow:Using default config. INFO:tensorflow:Using config: {'_task_type': None, '_task_id': 0, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fc3e9a79a10>, '_master': '', '_num_ps_replicas': 0, '_num_worker_replicas': 0, '_environment': 'local', '_is_chief': True, '_evaluation_master': '', '_train_distribute': None, '_eval_distribute': None, '_experimental_max_worker_delay_secs': None, '_device_fn': None, '_tf_config': gpu_options { per_process_gpu_memory_fraction: 1.0 } , '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_secs': 600, '_log_step_count_steps': 100, '_protocol': None, '_session_config': None, '_save_checkpoints_steps': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_model_dir': 'gs://qwiklabs-gcp-01-7f6e984e70dc/wals/model_trained', '_session_creation_timeout_secs': 7200} [[ 3.6948849e-03 -8.5906551e-04 4.8303331e-04 -1.5890118e-03 3.2951573e-03 1.8742392e-03 -4.0969993e-03 6.1582108e-03 -2.8773610e-04 -1.3332820e-03] [ 1.3797252e-03 -6.1344338e-04 1.8084486e-03 -1.3052947e-03 -5.8687606e-04 1.2303573e-03 4.5222897e-04 1.5270878e-03 -3.6727260e-03 -3.0614049e-04] [-1.8781899e-04 -3.4702761e-04 -2.0322317e-05 -3.4224652e-04 -4.4325300e-04 -1.1415767e-03 1.1274662e-04 5.7519245e-04 1.1694311e-04 7.0931134e-04]] [[ 1.55777712e+01 -1.05091429e+00 9.74276304e-01 -1.42321882e+01 5.99148512e+00 2.02531338e+00 -1.14337492e+01 1.73737488e+01 -6.27120638e+00 -3.14125824e+00] [ 1.13881841e-01 -1.24881916e-01 1.47017851e-01 3.90907042e-02 -2.54981425e-02 -2.25878507e-02 -8.38336051e-02 -3.10666151e-02 -1.92982569e-01 5.66481380e-03] [-2.36634836e-01 -1.71446967e+00 -2.04065472e-01 -2.04369950e+00 -1.53748310e+00 -4.64286995e+00 1.42189160e-01 2.47772884e+00 -9.40964222e-01 3.68809652e+00]]
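Once you have user and item factor matrices, predicted ratings are just their matrix product, and the top-K items per user come from sorting each user's row of scores. A NumPy sketch with tiny hypothetical factors:

```python
import numpy as np

# Hypothetical factor matrices: 3 users, 4 items, 2 latent dimensions.
user_factors = np.array([[1.0, 0.0],
                         [0.0, 1.0],
                         [1.0, 0.6]])
item_factors = np.array([[0.9, 0.1],
                         [0.1, 0.9],
                         [0.5, 0.5],
                         [0.0, 0.2]])

# Predicted rating matrix: (nusers, nitems)
scores = user_factors @ item_factors.T

# Top-2 item indices per user, highest score first
topk = np.argsort(-scores, axis=1)[:, :2]
print(topk)
```

This is the same scoring that `find_top_k` performs with TF ops, only in NumPy for clarity.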
You can visualize the embedding vectors using dimensional reduction techniques such as PCA.
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.decomposition import PCA

pca = PCA(n_components = 3)
pca.fit(user_embeddings)

# TODO 3: Apply the mapping (transform) to user embeddings
user_embeddings_pca = pca.transform(user_embeddings)

fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(111, projection = "3d")
xs, ys, zs = user_embeddings_pca[::150].T
ax.scatter(xs, ys, zs)
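For reference, the same 3-component projection can be sketched without sklearn using a plain SVD on centered data (the random array below is a stand-in for the `(nusers, nembeds)` embeddings, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))  # stand-in for the user-embedding matrix

# PCA via SVD: center the data, decompose, project onto top 3 right-singular vectors
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = Xc @ Vt[:3].T
print(X_pca.shape)  # (100, 3)
```

Because singular values come back in descending order, the first projected column carries the most variance, matching what `PCA.transform` returns up to sign.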
To apply arbitrary forcing to a single degree of freedom linear or nonlinear system, use `SingleDoFNonLinearSystem` (`SingleDoFLinearSystem` does not support arbitrary forcing...yet). Add constants, a generalized coordinate, and a generalized speed to the system.
sys = SingleDoFNonLinearSystem()

sys.constants['m'] = 100  # kg
sys.constants['c'] = 1.1*1.2*0.5/2
sys.constants['k'] = 10
sys.constants['Fo'] = 1000  # N
sys.constants['Ft'] = 100  # N/s
sys.constants['to'] = 3.0  # s

sys.coordinates['x'] = 0.0
sys.speeds['v'] = 0.0
CC-BY-4.0
content/materials/notebooks/2020/l13_arbitrary_inputs.ipynb
moorepants/eng122
Create a function that evaluates the first order form of the non-linear equations of motion. In this case:$$\dot{x} = v \\m\dot{v} + c \textrm{sgn}(v)v^2 + k \textrm{sgn}(x)x^2 = F(t)$$Make the arbitrary forcing term, $F$, an input to this function.
def eval_eom(x, v, m, c, k, F): xdot = v vdot = (F - np.sign(v)*c*v**2 - np.sign(x)*k*x**2) / m return xdot, vdot
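Any ODE stepper can integrate this first-order form. A minimal forward-Euler sketch (the constants and constant forcing here are illustrative, and this is not the integrator the library uses):

```python
import numpy as np

def eval_eom(x, v, m, c, k, F):
    # First-order form: xdot = v, m*vdot + c*sgn(v)*v**2 + k*sgn(x)*x**2 = F
    xdot = v
    vdot = (F - np.sign(v)*c*v**2 - np.sign(x)*k*x**2) / m
    return xdot, vdot

m, c, k = 100.0, 0.33, 10.0   # illustrative constants
x, v, dt = 0.0, 0.0, 0.01
for _ in range(100):          # integrate 1 second of motion
    F = 1000.0                # constant force for this sketch
    xdot, vdot = eval_eom(x, v, m, c, k, F)
    x += dt * xdot
    v += dt * vdot
print(x, v)
```

Starting from rest with a positive constant force, both the coordinate and the speed grow over the first second, which is a quick sanity check on the sign conventions.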
Note that you cannot add this to the system because `F` has not been defined.
sys.diff_eq_func = eval_eom
To remedy this, create a function that returns the input value given the appropriate constants and time.
def eval_step_input(Fo, to, time):
    if time < to:
        return 0.0
    else:
        return Fo

import matplotlib.pyplot as plt
%matplotlib widget

ts = np.linspace(0, 10)

plt.plot(ts, eval_step_input(5.0, 3.0, ts))

ts < 3.0

def eval_step_input(Fo, to, time):
    F = np.empty_like(time)
    for i, ti in enumerate(time):
        if ti < to:
            F[i] = 0.0
        else:
            F[i] = Fo
    return F

plt.plot(ts, eval_step_input(5.0, 3.0, ts))

eval_step_input(5.0, 3.0, ts)

eval_step_input(5.0, 3.0, 7.0)

def eval_step_input(Fo, to, time):
    if np.isscalar(time):
        if time < to:
            return 0.0
        else:
            return Fo
    else:
        F = np.empty_like(time)
        for i, ti in enumerate(time):
            if ti < to:
                F[i] = 0.0
            else:
                F[i] = Fo
        return F

eval_step_input(5.0, 3.0, 7.0)

eval_step_input(5.0, 3.0, ts)

True * 5.0

False * 5.0

(ts >= 3.0) * 5.0

(5.0 >= 3.0) * 5.0

def eval_step_input(Fo, to, time):
    return (time >= to) * Fo

eval_step_input(5.0, 3.0, ts)

eval_step_input(5.0, 3.0, 7.0)

sys.add_measurement('F', eval_step_input)

sys.diff_eq_func = eval_eom

traj = sys.free_response(20.0)

traj.plot(subplots=True)

def eval_ramp_input(Ft, to, time):
    return (time >= to) * (Ft * time - Ft * to)

del sys.measurements['F']
sys.add_measurement('F', eval_ramp_input)

sys.measurements

traj = sys.free_response(20.0)

traj.plot(subplots=True)
## Fit $k_{ij}$ and $r_c^{ABij}$ interaction parameters of Ethanol and CPME

The purpose of this notebook is to show how to optimize the $k_{ij}$ and $r_c^{ABij}$ parameters for a mixture with induced association. First, import the necessary modules.
import numpy as np
from sgtpy import component, mixture, saftvrmie
from sgtpy.fit import fit_cross
MIT
examples/SGTPy's paper notebooks/Fit Equilibrium Ethanol + CPME.ipynb
MatKie/SGTPy
Now that the functions are available, it is necessary to create the mixture.
ethanol = component('ethanol2C', ms = 1.7728, sigma = 3.5592, eps = 224.50,
                    lambda_r = 11.319, lambda_a = 6., eAB = 3018.05, rcAB = 0.3547,
                    rdAB = 0.4, sites = [1,0,1], cii = 5.3141080872882285e-20)

cpme = component('cpme', ms = 2.32521144, sigma = 4.13606074, eps = 343.91193798,
                 lambda_r = 14.15484877, lambda_a = 6.0, npol = 1.91990385,
                 mupol = 1.27, sites = [0,0,1], cii = 3.5213681817448466e-19)

mix = mixture(ethanol, cpme)
Now the experimental equilibria data is read and a tuple is created. It includes the experimental liquid composition, vapor composition, equilibrium temperature and pressure. This is done with ```datavle = (Xexp, Yexp, Texp, Pexp)```
# Experimental data obtained from Mejia, Cartes, J. Chem. Eng. Data, vol. 64, no. 5, pp. 1970–1977, 2019 # Experimental temperature saturation in K Texp = np.array([355.77, 346.42, 342.82, 340.41, 338.95, 337.78, 336.95, 336.29, 335.72, 335.3 , 334.92, 334.61, 334.35, 334.09, 333.92, 333.79, 333.72, 333.72, 333.81, 334.06, 334.58]) # Experimental pressure in Pa Pexp = np.array([50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000.]) # Experimental liquid composition Xexp = np.array([[0. , 0.065, 0.11 , 0.161, 0.203, 0.253, 0.301, 0.351, 0.402, 0.446, 0.497, 0.541, 0.588, 0.643, 0.689, 0.743, 0.785, 0.837, 0.893, 0.947, 1. ], [1. , 0.935, 0.89 , 0.839, 0.797, 0.747, 0.699, 0.649, 0.598, 0.554, 0.503, 0.459, 0.412, 0.357, 0.311, 0.257, 0.215, 0.163, 0.107, 0.053, 0. ]]) # Experimental vapor composition Yexp = np.array([[0. , 0.302, 0.411, 0.48 , 0.527, 0.567, 0.592, 0.614, 0.642, 0.657, 0.678, 0.694, 0.71 , 0.737, 0.753, 0.781, 0.801, 0.837, 0.883, 0.929, 1. ], [1. , 0.698, 0.589, 0.52 , 0.473, 0.433, 0.408, 0.386, 0.358, 0.343, 0.322, 0.306, 0.29 , 0.263, 0.247, 0.219, 0.199, 0.163, 0.117, 0.071, 0. ]]) datavle = (Xexp, Yexp, Texp, Pexp)
The function `fit_cross` optimizes the $k_{ij}$ correction and the $r_c^{ABij}$ distance. An initial guess is needed, as well as the mixture object, the index of the self-associating component, and the equilibrium data.
# initial guesses for kij and rcij
x0 = [0.01015194, 2.23153033]
fit_cross(x0, mix, assoc=0, datavle=datavle)
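Conceptually, `fit_cross` searches the $(k_{ij}, r_c^{ABij})$ plane for the parameters that minimize the deviation between computed and experimental equilibria. A toy two-parameter fit by grid search illustrates the idea (the model and data here are hypothetical, not SAFT-VR-Mie):

```python
# Toy two-parameter fit by exhaustive grid search: minimize the squared error
# between a model and "experimental" data over (p1, p2).
def model(x, p1, p2):
    return p1 * x + p2 * x**2

xdata = [0.0, 0.5, 1.0, 1.5]
ydata = [0.0, 0.6, 1.4, 2.4]  # generated from p1 = 1.0, p2 = 0.4

def objective(p1, p2):
    return sum((model(x, p1, p2) - y)**2 for x, y in zip(xdata, ydata))

best = min(((objective(p1/10, p2/10), p1/10, p2/10)
            for p1 in range(0, 21) for p2 in range(0, 21)), key=lambda t: t[0])
print(best[1], best[2])
```

Real fitting routines replace the grid with a gradient-free minimizer and the toy model with a full phase-equilibrium calculation, but the objective-function structure is the same.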
## BAYES CLASSIFIERS

For any classifier $f:{X \to Y}$, its prediction error is:

$P(f(X) \ne Y) = \mathbb{E}[ \mathbb{1}(f(X) \ne Y)] = \mathbb{E}[\mathbb{E}[ \mathbb{1}(f(X) \ne Y)|X]]$

For each $x \in X$,
$$\mathbb{E}[ \mathbb{1}(f(X) \ne Y)|X = x] = \sum\limits_{y \in Y} P(Y = y|X = x) \cdot \mathbb{1}(f(x) \ne y)$$

The above quantity is minimized for this particular $x \in X$ when
$$f(x) = \underset{y \in Y}{argmax} \space P(Y = y|X = x) \space \star$$

A classifier $f$ with property $\star$ for all $x \in X$ is called the `Bayes Classifier`.

Under the assumption $(X,Y) \overset{iid}{\sim} P$, the optimal classifier is:
$$f^{\star}(x) = \underset{y \in Y}{argmax} \space P(Y = y|X = x)$$

And from _Bayes Rule_ we equivalently have:
$$f^{\star}(x) = \underset{y \in Y}{argmax} \space P(Y = y) \space P(X = x|Y = y)$$

Where
- $P(Y = y)$ is called _the class prior_
- $P(X = x|Y = y)$ is called _the class conditional distribution_ of $X$

Assuming $X = \mathbb{R}$, $Y = \{ 0,1 \}$, and the distribution $P$ of $(X,Y)$ is as follows:
- _Class prior_: $P(Y = y) = \pi_y, y \in \{ 0,1 \}$
- _Class conditional density_ for class $y \in \{ 0,1 \}$: $p_y (x) = N(x|\mu_y,\sigma^2_y)$

$$f^{\star}(x) = \underset{y \in \{ 0,1 \}}{argmax} \space P(Y = y) \space P(X = x|Y = y) = \begin{cases} 1 & \text{if} \space \frac{\pi_1}{\sigma_1}\space exp[- \frac{(x - \mu_1)^2}{2 \sigma^2_1}] > \frac{\pi_0}{\sigma_0}\space exp[- \frac{(x - \mu_0)^2}{2 \sigma^2_0}]\\ 0 & \text{otherwise} \end{cases}$$

_Bayes Classifier_
![Bayes Classifier](.\image\BayesClassifier.png)

The `Bayes Classifier` has the smallest prediction error of all classifiers.
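Plugging invented numbers into the two-Gaussian decision rule above shows how the classifier picks a label (all parameters here are hypothetical):

```python
import math

def discriminant(x, pi, mu, sigma):
    # pi/sigma * exp(-(x - mu)^2 / (2 sigma^2)): the quantity compared in the case rule
    return (pi / sigma) * math.exp(-((x - mu)**2) / (2 * sigma**2))

# Two equally likely classes centered at 0 and 2, unit variance
pi0, mu0, s0 = 0.5, 0.0, 1.0
pi1, mu1, s1 = 0.5, 2.0, 1.0

x = 1.5  # closer to mu1, so class 1 should win
label = 1 if discriminant(x, pi1, mu1, s1) > discriminant(x, pi0, mu0, s0) else 0
print(label)
```

With equal priors and variances, the rule reduces to picking the nearer mean, which is what happens here.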
The problem is that we need to know the distribution $P$ in order to construct the `Bayes Classifier`.

## NAIVE BAYES CLASSIFIER

A simplifying assumption is that the feature values are conditionally independent given the label, so the probability of observing the conjunction $x_1, x_2, x_3, ..., x_d$ is the product of the probabilities of the individual features:
$$ p(x_1, x_2, x_3, ..., x_d|y) = \prod \limits_j \space p(x_j|y)$$
Then the `Naive Bayes Classifier` is defined as:
$$f^{\star}(x) = \underset{y \in Y}{argmax} \space p(y) \space \prod \limits_j \space p(x_j|y)$$
We can estimate these two terms based on the **frequency counts** in the dataset. If the features are real-valued, Naive Bayes can be extended by assuming that the features follow a Gaussian distribution. This extension is called `Gaussian Naive Bayes`. Other functions can be used to estimate the feature distributions, but the Gaussian distribution is the easiest to work with because we only need to estimate the mean and the standard deviation from the dataset.

Ok, let's start with the implementation of `Gaussian Naive Bayes` from scratch.
## IMPORTING ALL NECESSARY SUPPORT LIBRARIES
import math as mt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

def separate_by_label(dataset):
    separate = dict()
    for i in range(len(dataset)):
        row = dataset[i]
        label = row[-1]
        if (label not in separate):
            separate[label] = list()
        separate[label].append(row)
    return separate

def mean(list_num):
    return sum(list_num)/len(list_num)

def stdv(list_num):
    mu = mean(list_num)
    var = sum([(x - mu)**2 for x in list_num])/(len(list_num) - 1)
    return mt.sqrt(var)

def stats_per_feature(ds):
    '''
    argument:
    > ds: 2-D Array with all the rows of a single class
    returns:
    > stats: list with a (mean, stdv, count) summary for each feature
    '''
    stats = [(mean(col), stdv(col), len(col)) for col in zip(*ds)]
    del(stats[-1])
    return stats

def summary_by_class(dataset):
    sep_label = separate_by_label(dataset)
    summary = dict()
    for label, rows in sep_label.items():
        summary[label] = stats_per_feature(rows)
    return summary

def gaussian_pdf(mean, stdv, x):
    _exp = mt.exp(-1*((x - mean)**2/(2*stdv**2)))
    return (1/(mt.sqrt(2 * mt.pi)*stdv)) * _exp
MIT
ML SERIES - NAIVE BAYES/MACHINE LEARNING TOP ALGORITHMS - NAIVE BAYES CLASSIFIER.ipynb
acadena-repo/MACHINE-LEARNING-TOP-ALGORITHMS-SERIES
Now it is time to use the statistics calculated from the data to calculate probabilities for new data.

Probabilities are calculated separately for each class: we calculate the probability that a new piece of data belongs to the first class, then the probability that it belongs to the second class, and so on for all the classes.

For example, if we have two inputs $x_1$ and $x_2$, the probability that they belong to class $y$ is calculated as:

$$P(class = y|x_1,x_2) = P(x_1|class = y) \cdot P(x_2|class = y) \cdot P(class = y)$$
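Plugging invented class statistics into this product gives a quick numeric sanity check (the `gaussian_pdf` below mirrors the helper defined earlier, and the prior and per-feature parameters are made up):

```python
import math

def gaussian_pdf(mean, stdv, x):
    exponent = math.exp(-((x - mean)**2) / (2 * stdv**2))
    return (1 / (math.sqrt(2 * math.pi) * stdv)) * exponent

# Invented class statistics: prior 0.5,
# feature 1 ~ N(2.0, 1.0), feature 2 ~ N(5.0, 2.0)
prior = 0.5
p = prior * gaussian_pdf(2.0, 1.0, 2.5) * gaussian_pdf(5.0, 2.0, 4.0)
print(p)
```

Each class gets such a product, and the prediction is simply the class with the largest one; the absolute value is not a calibrated probability unless it is normalized across classes.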
def class_probabilities(summary, row):
    total = sum([summary[label][0][2] for label in summary])
    probabilities = dict()
    for class_, class_summary in summary.items():
        probabilities[class_] = summary[class_][0][2]/total
        for i in range(len(class_summary)):
            mean, stdev, count = class_summary[i]
            probabilities[class_] *= gaussian_pdf(row[i], mean, stdev)
    return probabilities

def predict(summary, row):
    cls_prob = class_probabilities(summary, row)
    _label, _prob = None, -1.0
    for class_, probability in cls_prob.items():
        if _label is None or probability > _prob:
            _prob = probability
            _label = class_
    return _label
In order to verify proper implementation, a **toy dataset** is used to evaluate the algorithm.
dataset = [[3.393533211,2.331273381,0],
           [3.110073483,1.781539638,0],
           [1.343808831,3.368360954,0],
           [3.582294042,4.67917911,0],
           [2.280362439,2.866990263,0],
           [7.423436942,4.696522875,1],
           [5.745051997,3.533989803,1],
           [9.172168622,2.511101045,1],
           [7.792783481,3.424088941,1],
           [7.939820817,0.791637231,1]]

summaries = summary_by_class(dataset)
for row in dataset:
    y_pred = predict(summaries, row)
    y_real = row[-1]
    print("Expected={0}, Predicted={1}".format(y_real, y_pred))
Expected=0, Predicted=0 Expected=0, Predicted=0 Expected=0, Predicted=0 Expected=0, Predicted=0 Expected=0, Predicted=0 Expected=1, Predicted=1 Expected=1, Predicted=1 Expected=1, Predicted=1 Expected=1, Predicted=1 Expected=1, Predicted=1
_GAUSSIAN NAIVE BAYES APPLICATION_

We will train our `Gaussian Naive Bayes` model on the Iris dataset from the `UCI Machine Learning Repository`. The Iris dataset is perhaps the best-known database in the pattern recognition literature. Fisher's paper is a classic in the field and is referenced frequently to this day.

The dataset contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the other 2; the latter are not linearly separable from each other.

The dataset has 150 instances and the following attributes:
1. sepal length in cm
2. sepal width in cm
3. petal length in cm
4. petal width in cm
5. class: Iris Setosa, Iris Versicolour, Iris Virginica

To compare the performance of our _Classifier_ on the **Iris** dataset, a Gaussian Naive Bayes model from `sklearn` will be fit on the dataset and a classification report for both models is generated.
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report

## LOADING 'IRIS' DATASET
columns = ['sepal-len','sepal-wid','petal-len','petal-wid','class']
df = pd.read_csv('./data/Iris.csv', names = columns)
df.head()

df.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 150 entries, 0 to 149 Data columns (total 5 columns): sepal-len 150 non-null float64 sepal-wid 150 non-null float64 petal-len 150 non-null float64 petal-wid 150 non-null float64 class 150 non-null object dtypes: float64(4), object(1) memory usage: 5.9+ KB
Because the class variable is `categorical`, we first need to encode it as a numeric type in order to feed it into our models.
def encoder(df, class_value_pair):
    for class_name, value in class_value_pair.items():
        df['class'] = df['class'].replace(class_name, value)
    return df

class_encoder = {'Iris-setosa':0, 'Iris-versicolor':1, 'Iris-virginica':2}
df = encoder(df, class_encoder)
df.head()

df['class'].value_counts().sort_index()
Once the preprocessing is complete, the dataset will be split into `Training` & `Test` sets.
X_ = df.drop(['class'], axis = 1)
y = df['class']

X_train, X_test, y_train, y_test = train_test_split(X_, y, test_size = 0.30, random_state = 5)
Now, we can `train` our customized model. Notice that our _Gaussian Naive Bayes_ model expects a complete dataset (attributes and labels) in order to calculate the summaries.
ds_train = pd.concat([X_train, y_train], axis = 1)
GNB_custom = summary_by_class(ds_train.values.tolist())

ds_test = pd.concat([X_test, y_test], axis = 1)
cust_pred = [predict(GNB_custom, row) for row in ds_test.values.tolist()]
cust_pred = np.array(cust_pred, dtype = 'int64')
cust_pred
Now an instance of the `sklearn` _Gaussian Naive Bayes_ model is created and fit with the training data, and an array of predictions is obtained for our performance comparison.
## GET AN INSTANCE OF A GAUSSIAN NAIVE BAYES MODEL
GNB_skln = GaussianNB()
GNB_skln.fit(X_train, y_train)

## CREATE SKLEARN PREDICTIONS ARRAY
sk_pred = GNB_skln.predict(X_test)
sk_pred
Finally, a comparison of both models is performed through a _Classification Report_.
print("Sklearn:")
print(classification_report(y_test, sk_pred))
print("Custom:")
print(classification_report(y_test, cust_pred))
Sklearn: precision recall f1-score support 0 1.00 1.00 1.00 15 1 0.88 0.94 0.91 16 2 0.92 0.86 0.89 14 micro avg 0.93 0.93 0.93 45 macro avg 0.94 0.93 0.93 45 weighted avg 0.93 0.93 0.93 45 Custom: precision recall f1-score support 0 1.00 1.00 1.00 15 1 0.88 0.94 0.91 16 2 0.92 0.86 0.89 14 micro avg 0.93 0.93 0.93 45 macro avg 0.94 0.93 0.93 45 weighted avg 0.93 0.93 0.93 45
# Mini Project: Temporal-Difference Methods

In this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.

While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.

## Part 0: Explore CliffWalkingEnv

Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment.
import gym

env = gym.make('CliffWalking-v0')
MIT
ReinforcementLearning/TemporalDifference/Temporal_Difference_Solution.ipynb
xuebai1990/DeepLearningNanodegree
The agent moves through a $4\times 12$ gridworld, with states numbered as follows:
```
[[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
 [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
 [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35],
 [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]
```
At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.

The agent has 4 potential actions:
```
UP = 0
RIGHT = 1
DOWN = 2
LEFT = 3
```
Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below.
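The numbering is row-major, so a (row, col) cell maps to a state index with a one-line formula (the helper name here is just for illustration):

```python
def to_state(row, col, ncols=12):
    # Row-major numbering used by CliffWalking: state = row * 12 + col
    return row * ncols + col

print(to_state(3, 0))   # 36, the start state
print(to_state(3, 11))  # 47, the terminal state
```

This is handy when reshaping value estimates back into the $4\times 12$ grid for plotting.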
print(env.action_space)
print(env.observation_space)
Discrete(4) Discrete(48)
In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function.
import numpy as np
from plot_utils import plot_values

# define the optimal state-value function
V_opt = np.zeros((4,12))
V_opt[0] = -np.arange(3, 15)[::-1]
V_opt[1] = -np.arange(3, 15)[::-1] + 1
V_opt[2] = -np.arange(3, 15)[::-1] + 2
V_opt[3][0] = -13

plot_values(V_opt)
## Part 1: TD Prediction: State Values

In this section, you will write your own implementation of TD prediction (for estimating the state-value function).

We will begin by investigating a policy where the agent moves:
- `RIGHT` in states `0` through `10`, inclusive,
- `DOWN` in states `11`, `23`, and `35`, and
- `UP` in states `12` through `22`, inclusive, states `24` through `34`, inclusive, and state `36`.

The policy is specified and printed below. Note that states where the agent does not choose an action have been marked with `-1`.
policy = np.hstack([1*np.ones(11), 2, 0, np.zeros(10), 2, 0, np.zeros(10), 2, 0, -1*np.ones(11)])
print("\nPolicy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy.reshape(4,12))
Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1): [[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 2.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2.] [ 0. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]]
Run the next cell to visualize the state-value function that corresponds to this policy. Make sure that you take the time to understand why this is the corresponding value function!
V_true = np.zeros((4,12))
for i in range(3):
    V_true[i] = -np.arange(3, 15)[::-1] - i
V_true[1][11] = -2
V_true[2][11] = -1
V_true[3][0] = -17

plot_values(V_true)
The above figure is what you will try to approximate through the TD prediction algorithm.

Your algorithm for TD prediction has five arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `policy`: This is a 1D numpy array with `policy.shape` equal to the number of states (`env.nS`). `policy[s]` returns the action that the agent chooses when in state `s`.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:
- `V`: This is a dictionary where `V[s]` is the estimated value of state `s`.

Please complete the function in the code cell below.
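Before implementing, it helps to see the core one-step TD(0) rule, $V(S) \leftarrow V(S) + \alpha[R + \gamma V(S') - V(S)]$, in isolation (numbers here are illustrative):

```python
def td_update(V_s, reward, V_next, alpha, gamma=1.0):
    # One-step TD(0) update for a state-value estimate
    return V_s + alpha * (reward + gamma * V_next - V_s)

# Example: V(s)=0, reward -1, V(s')=-3, alpha=0.5 -> moves halfway toward the target -4
print(td_update(0.0, -1.0, -3.0, 0.5))  # -2.0
```

The full algorithm simply applies this update once per time step along each generated episode.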
from collections import defaultdict
import sys

def td_prediction(env, num_episodes, policy, alpha, gamma=1.0):
    # initialize empty dictionary of floats
    V = defaultdict(float)
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        # begin an episode, observe S
        state = env.reset()
        while True:
            # choose action A
            action = policy[state]
            # take action A, observe R, S'
            next_state, reward, done, info = env.step(action)
            # perform updates
            V[state] = V[state] + (alpha * (reward + (gamma * V[next_state]) - V[state]))
            # S <- S'
            state = next_state
            # end episode if reached terminal state
            if done:
                break
    return V
_____no_output_____
MIT
ReinforcementLearning/TemporalDifference/Temporal_Difference_Solution.ipynb
xuebai1990/DeepLearningNanodegree
Run the code cell below to test your implementation and visualize the estimated state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
import check_test

# evaluate the policy and reshape the state-value function
V_pred = td_prediction(env, 5000, policy, .01)

# please do not change the code below this line
V_pred_plot = np.reshape([V_pred[key] if key in V_pred else 0 for key in np.arange(48)], (4,12))
check_test.run_check('td_prediction_check', V_pred_plot)
plot_values(V_pred_plot)
Episode 5000/5000
MIT
ReinforcementLearning/TemporalDifference/Temporal_Difference_Solution.ipynb
xuebai1990/DeepLearningNanodegree
How close is your estimated state-value function to the true state-value function corresponding to the policy?

You might notice that some of the state values are not estimated by the agent. This is because under this policy, the agent will not visit all of the states. In the TD prediction algorithm, the agent can only estimate the values corresponding to states that are visited.

Part 2: TD Control: Sarsa

In this section, you will write your own implementation of the Sarsa control algorithm.

Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.

Please complete the function in the code cell below.

(_Feel free to define additional functions to help you to organize your code._)
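Sarsa selects its actions from an epsilon-greedy policy. As a self-contained sketch of how such action probabilities can be formed for one state (made-up action values; a 4-action state assumed):

```python
import numpy as np

# Epsilon-greedy action probabilities for a single state (made-up Q values).
def eps_greedy_probs(q_s, epsilon):
    nA = len(q_s)
    probs = np.ones(nA) * epsilon / nA       # spread epsilon uniformly (exploration)
    probs[np.argmax(q_s)] += 1.0 - epsilon   # remaining mass on the greedy action
    return probs

p = eps_greedy_probs(np.array([0.0, 2.0, 1.0, -1.0]), epsilon=0.2)
print(p)  # greedy action (index 1) gets 0.85, the others 0.05 each
```

The probabilities always sum to 1, and shrinking `epsilon` over episodes makes the policy increasingly greedy with respect to the current `Q`.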
def update_Q(Qsa, Qsa_next, reward, alpha, gamma):
    """ updates the action-value function estimate using the most recent time step """
    return Qsa + (alpha * (reward + (gamma * Qsa_next) - Qsa))

def epsilon_greedy_probs(env, Q_s, i_episode, eps=None):
    """ obtains the action probabilities corresponding to epsilon-greedy policy """
    epsilon = 1.0 / i_episode
    if eps is not None:
        epsilon = eps
    policy_s = np.ones(env.nA) * epsilon / env.nA
    policy_s[np.argmax(Q_s)] = 1 - epsilon + (epsilon / env.nA)
    return policy_s

import matplotlib.pyplot as plt
%matplotlib inline

def sarsa(env, num_episodes, alpha, gamma=1.0):
    # initialize action-value function (empty dictionary of arrays)
    Q = defaultdict(lambda: np.zeros(env.nA))
    # initialize performance monitor
    plot_every = 100
    tmp_scores = deque(maxlen=plot_every)
    scores = deque(maxlen=num_episodes)
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        # initialize score
        score = 0
        # begin an episode, observe S
        state = env.reset()
        # get epsilon-greedy action probabilities
        policy_s = epsilon_greedy_probs(env, Q[state], i_episode)
        # pick action A
        action = np.random.choice(np.arange(env.nA), p=policy_s)
        # limit number of time steps per episode
        for t_step in np.arange(300):
            # take action A, observe R, S'
            next_state, reward, done, info = env.step(action)
            # add reward to score
            score += reward
            if not done:
                # get epsilon-greedy action probabilities
                policy_s = epsilon_greedy_probs(env, Q[next_state], i_episode)
                # pick next action A'
                next_action = np.random.choice(np.arange(env.nA), p=policy_s)
                # update TD estimate of Q
                Q[state][action] = update_Q(Q[state][action], Q[next_state][next_action],
                                            reward, alpha, gamma)
                # S <- S'
                state = next_state
                # A <- A'
                action = next_action
            if done:
                # update TD estimate of Q
                Q[state][action] = update_Q(Q[state][action], 0, reward, alpha, gamma)
                # append score
                tmp_scores.append(score)
                break
        if (i_episode % plot_every == 0):
            scores.append(np.mean(tmp_scores))
    # plot performance
    plt.plot(np.linspace(0, num_episodes, len(scores), endpoint=False), np.asarray(scores))
    plt.xlabel('Episode Number')
    plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
    plt.show()
    # print best 100-episode performance
    print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores))
    return Q
_____no_output_____
MIT
ReinforcementLearning/TemporalDifference/Temporal_Difference_Solution.ipynb
xuebai1990/DeepLearningNanodegree
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
# obtain the estimated optimal policy and corresponding action-value function
Q_sarsa = sarsa(env, 5000, .01)

# print the estimated optimal policy
policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_sarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_sarsa)

# plot the estimated optimal state-value function
V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)])
plot_values(V_sarsa)
Episode 5000/5000
MIT
ReinforcementLearning/TemporalDifference/Temporal_Difference_Solution.ipynb
xuebai1990/DeepLearningNanodegree
Part 3: TD Control: Q-learning

In this section, you will write your own implementation of the Q-learning control algorithm.

Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.

Please complete the function in the code cell below.

(_Feel free to define additional functions to help you to organize your code._)
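The key difference from Sarsa is the update target: Q-learning (sarsamax) bootstraps from the greedy value `max_a Q(S', a)` instead of the value of the action the behavior policy actually samples next. A one-step sketch with made-up numbers:

```python
import numpy as np

# One Q-learning (sarsamax) update with made-up numbers (illustrative only).
def q_learning_update(q_sa, q_next, reward, alpha, gamma=1.0):
    # bootstrap from the greedy value of the next state, max_a Q(S', a)
    return q_sa + alpha * (reward + gamma * np.max(q_next) - q_sa)

q_next = np.array([-4.0, -2.0, -9.0, -6.0])          # hypothetical Q[S'] values
q = q_learning_update(0.0, q_next, -1.0, alpha=0.5)  # target = -1 + (-2) = -3
print(q)  # -1.5
```

Because the target uses the max regardless of the action actually taken, Q-learning is off-policy: it learns about the greedy policy while following an epsilon-greedy one.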
def q_learning(env, num_episodes, alpha, gamma=1.0):
    # initialize action-value function (empty dictionary of arrays)
    Q = defaultdict(lambda: np.zeros(env.nA))
    # initialize performance monitor
    plot_every = 100
    tmp_scores = deque(maxlen=plot_every)
    scores = deque(maxlen=num_episodes)
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        # initialize score
        score = 0
        # begin an episode, observe S
        state = env.reset()
        while True:
            # get epsilon-greedy action probabilities
            policy_s = epsilon_greedy_probs(env, Q[state], i_episode)
            # pick next action A
            action = np.random.choice(np.arange(env.nA), p=policy_s)
            # take action A, observe R, S'
            next_state, reward, done, info = env.step(action)
            # add reward to score
            score += reward
            # update Q
            Q[state][action] = update_Q(Q[state][action], np.max(Q[next_state]), \
                                        reward, alpha, gamma)
            # S <- S'
            state = next_state
            # until S is terminal
            if done:
                # append score
                tmp_scores.append(score)
                break
        if (i_episode % plot_every == 0):
            scores.append(np.mean(tmp_scores))
    # plot performance
    plt.plot(np.linspace(0, num_episodes, len(scores), endpoint=False), np.asarray(scores))
    plt.xlabel('Episode Number')
    plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
    plt.show()
    # print best 100-episode performance
    print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores))
    return Q
_____no_output_____
MIT
ReinforcementLearning/TemporalDifference/Temporal_Difference_Solution.ipynb
xuebai1990/DeepLearningNanodegree
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
# obtain the estimated optimal policy and corresponding action-value function
Q_sarsamax = q_learning(env, 5000, .01)

# print the estimated optimal policy
policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12))
check_test.run_check('td_control_check', policy_sarsamax)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_sarsamax)

# plot the estimated optimal state-value function
plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)])
Episode 5000/5000
MIT
ReinforcementLearning/TemporalDifference/Temporal_Difference_Solution.ipynb
xuebai1990/DeepLearningNanodegree
Part 4: TD Control: Expected Sarsa

In this section, you will write your own implementation of the Expected Sarsa control algorithm.

Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.

Please complete the function in the code cell below.

(_Feel free to define additional functions to help you to organize your code._)
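Expected Sarsa replaces the sampled `Q(S', A')` in the target with the expected action value under the current policy, `sum_a pi(a|S') * Q(S', a)`, which a dot product computes directly. A sketch with made-up numbers:

```python
import numpy as np

# Expected action value of the next state under an epsilon-greedy policy
# (made-up Q values and probabilities; illustrative only).
q_next = np.array([-4.0, -2.0, -8.0, -6.0])       # hypothetical Q[S'] values
policy_next = np.array([0.05, 0.85, 0.05, 0.05])  # pi(a | S')
expected_q = np.dot(q_next, policy_next)          # -0.2 - 1.7 - 0.4 - 0.3 = -2.6
print(round(expected_q, 3))
```

Averaging over the policy rather than sampling a single next action removes that source of variance from the update, which is why Expected Sarsa tolerates a much larger step size (`alpha = 1` in the cell further below).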
def expected_sarsa(env, num_episodes, alpha, gamma=1.0):
    # initialize action-value function (empty dictionary of arrays)
    Q = defaultdict(lambda: np.zeros(env.nA))
    # initialize performance monitor
    plot_every = 100
    tmp_scores = deque(maxlen=plot_every)
    scores = deque(maxlen=num_episodes)
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        # initialize score
        score = 0
        # begin an episode
        state = env.reset()
        # get epsilon-greedy action probabilities
        policy_s = epsilon_greedy_probs(env, Q[state], i_episode, 0.005)
        while True:
            # pick next action
            action = np.random.choice(np.arange(env.nA), p=policy_s)
            # take action A, observe R, S'
            next_state, reward, done, info = env.step(action)
            # add reward to score
            score += reward
            # get epsilon-greedy action probabilities (for S')
            policy_s = epsilon_greedy_probs(env, Q[next_state], i_episode, 0.005)
            # update Q
            Q[state][action] = update_Q(Q[state][action], np.dot(Q[next_state], policy_s), \
                                        reward, alpha, gamma)
            # S <- S'
            state = next_state
            # until S is terminal
            if done:
                # append score
                tmp_scores.append(score)
                break
        if (i_episode % plot_every == 0):
            scores.append(np.mean(tmp_scores))
    # plot performance
    plt.plot(np.linspace(0, num_episodes, len(scores), endpoint=False), np.asarray(scores))
    plt.xlabel('Episode Number')
    plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
    plt.show()
    # print best 100-episode performance
    print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores))
    return Q
_____no_output_____
MIT
ReinforcementLearning/TemporalDifference/Temporal_Difference_Solution.ipynb
xuebai1990/DeepLearningNanodegree
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
# obtain the estimated optimal policy and corresponding action-value function
Q_expsarsa = expected_sarsa(env, 10000, 1)

# print the estimated optimal policy
policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_expsarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_expsarsa)

# plot the estimated optimal state-value function
plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)])
Episode 10000/10000
MIT
ReinforcementLearning/TemporalDifference/Temporal_Difference_Solution.ipynb
xuebai1990/DeepLearningNanodegree
Lambda School Data Science - Making Data-backed Assertions

This is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it.

Lecture - generating a confounding variable

The prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.

Let's use Python to generate data that actually behaves in this fashion!
import random
dir(random)  # Reminding ourselves what we can do here

random.seed(10)  # Sets Random Seed for Reproducibility

# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.

# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?

# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!

# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased', 'time_on_site', 'mobile'])

example_user = User(False, 12, False)
print(example_user)

# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []

for _ in range(250):
    # Desktop users
    time_on_site = random.uniform(10, 600)
    purchased = random.random() < 0.1 + (time_on_site / 1500)
    users.append(User(purchased, time_on_site, False))

for _ in range(750):
    # Mobile users
    time_on_site = random.uniform(5, 300)
    purchased = random.random() < 0.3 + (time_on_site / 1500)
    users.append(User(purchased, time_on_site, True))

random.shuffle(users)
print(users[:10])

# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()

user_data.dtypes
user_data.isnull().sum()
user_data.describe()

import numpy as np
user_data.describe(exclude=[np.number])

# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])

# !pip freeze  # 0.24.2
!pip install pandas==0.23.4
!pip freeze

# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site'], 6)  # 6 equal-sized bins
pd.crosstab(user_data['purchased'], time_bins)

# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(user_data['purchased'], time_bins, normalize='columns')

# That seems counter to our hypothesis
# More time on the site can actually have fewer purchases
# But we know why, since we generated the data!

# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')

# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once

# Live/stretch goal - how can we do that?
ct = pd.crosstab(user_data['mobile'],
                 [user_data['purchased'], time_bins],
                 rownames=['device'],
                 colnames=['purchased', 'time_on_site'],
                 normalize='index')
ct
type(ct)

pt = pd.pivot_table(user_data, values='purchased', index=time_bins)
pt
pt.plot.bar();

ct = pd.crosstab(time_bins,
                 [user_data['purchased'], user_data['mobile']],
                 normalize='columns')
ct
ct_final = ct.iloc[:, [2, 3]]
ct_final.plot(kind='bar', stacked=True);
_____no_output_____
MIT
LS_DS_114_Making_Data_backed_Assertions.ipynb
elizabethts/DS-Unit-1-Sprint-1-Dealing-With-Data
Assignment - what's going on here?

Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.

Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
import matplotlib.pyplot as plt

# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself

# load data
persons_url = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv'
persons_df = pd.read_csv(persons_url)
persons_df.head()
persons_df.shape

time_range = pd.cut(persons_df['exercise_time'], 30)  # cut exercise_time into 30 bins
age_range = pd.cut(persons_df['age'], 3)  # cut age into 3 bins
weight_range = pd.cut(persons_df['weight'], 30)

persons_ct = pd.crosstab(persons_df['weight'], [age_range, persons_df['exercise_time']])
persons_ct

group1 = persons_df[persons_df.age < 30]
group2 = persons_df[(persons_df.age >= 30) & (persons_df.age < 50)]
group3 = persons_df[persons_df.age >= 50]

fig, ax1 = plt.subplots()
ax1.set_axisbelow(True)
ax1.set_xlabel('Exercise Time')
ax1.set_ylabel('Weight')
ax1.set_title('Weight vs Exercise Time (Age <30)')
plt.gca().invert_yaxis()
plt.scatter(group1.exercise_time, group1.weight, color='red', alpha=.4)

fig, ax2 = plt.subplots()
ax2.set_axisbelow(True)
ax2.set_xlabel('Exercise Time')
ax2.set_ylabel('Weight')
ax2.set_title('Weight vs Exercise Time (Age 30-50)')
plt.gca().invert_yaxis()
plt.scatter(group2.exercise_time, group2.weight, color='teal', alpha=.4)

fig, ax3 = plt.subplots()
ax3.set_axisbelow(True)
ax3.set_xlabel('Exercise Time')
ax3.set_ylabel('Weight')
ax3.set_title('Weight vs Exercise Time (Age > 50)')
plt.gca().invert_yaxis()
plt.scatter(group3.exercise_time, group3.weight, color='yellow', alpha=0.4);

fig, ax4 = plt.subplots()
ax4.set_axisbelow(True)
ax4.set_xlabel('Exercise Time')
ax4.set_ylabel('Weight')
ax4.set_title('Weight vs Exercise Time for All Ages')
plt.scatter(group2.exercise_time, group2.weight, color='teal', alpha=.6)
plt.scatter(group3.exercise_time, group3.weight, color='yellow', alpha=0.6);
plt.scatter(group1.exercise_time, group1.weight, color='red', alpha=.6)
plt.gca().invert_yaxis()

fig, ax5 = plt.subplots()
ax5.set_axisbelow(True)
ax5.set_xlabel('Exercise Time')
ax5.set_ylabel('Age')
ax5.set_title('Age vs Exercise Time (All Ages)')
plt.scatter(group2.exercise_time, group2.age, color='teal', alpha=.6)
plt.scatter(group3.exercise_time, group3.age, color='yellow', alpha=0.6);
plt.scatter(group1.exercise_time, group1.age, color='red', alpha=.6)

fig, ax6 = plt.subplots()
ax6.set_axisbelow(True)
ax6.set_xlabel('Weight')
ax6.set_ylabel('Age')
ax6.set_title('Age vs Weight (All Ages)')
plt.scatter(group2.weight, group2.age, color='teal', alpha=.6)
plt.scatter(group3.weight, group3.age, color='yellow', alpha=0.6);
plt.scatter(group1.weight, group1.age, color='red', alpha=.6);

# Conclusions:
# - People who exercise more weigh less
# - The group age >= 50 tends to exercise for a shorter amount of time
_____no_output_____
MIT
LS_DS_114_Making_Data_backed_Assertions.ipynb
elizabethts/DS-Unit-1-Sprint-1-Dealing-With-Data
train cnn
util.train(model, device, train_loader, test_loader, optimizer, num_epochs, criterion)
Train Epoch: 1 [45000/50000 (90%)] Loss: 1.500776 Test set: Average loss: 0.0002, Accuracy: 4550/10000 (45.50%) Train Epoch: 2 [45000/50000 (90%)] Loss: 1.379729 Test set: Average loss: 0.0001, Accuracy: 5390/10000 (53.90%) Train Epoch: 3 [45000/50000 (90%)] Loss: 1.330050 Test set: Average loss: 0.0001, Accuracy: 5640/10000 (56.40%) Train Epoch: 4 [45000/50000 (90%)] Loss: 1.189881 Test set: Average loss: 0.0001, Accuracy: 5753/10000 (57.53%) Train Epoch: 5 [45000/50000 (90%)] Loss: 1.287006 Test set: Average loss: 0.0001, Accuracy: 6042/10000 (60.42%) Train Epoch: 6 [45000/50000 (90%)] Loss: 1.345478 Test set: Average loss: 0.0001, Accuracy: 5962/10000 (59.62%) Train Epoch: 7 [45000/50000 (90%)] Loss: 1.042157 Test set: Average loss: 0.0001, Accuracy: 6065/10000 (60.65%) Train Epoch: 8 [45000/50000 (90%)] Loss: 1.040688 Test set: Average loss: 0.0001, Accuracy: 6236/10000 (62.36%) Train Epoch: 9 [45000/50000 (90%)] Loss: 1.040002 Test set: Average loss: 0.0001, Accuracy: 6186/10000 (61.86%) Train Epoch: 10 [45000/50000 (90%)] Loss: 0.792405 Test set: Average loss: 0.0001, Accuracy: 6311/10000 (63.11%)
MIT
notebooks/cifar10/01_train_cifar.ipynb
Yu-Group/adaptive-wavelets
Monthly Aggregates - Work in Progress - I forgot to set price to 0 - Need to rewatch video!
merged['date_trunc_month'] = merged.apply(lambda x: dt.datetime.strftime(x['Date'], '%Y-%m'), axis=1)
merged['date_trunc_month']

merged['is_start_of_month'] = merged.apply(lambda x: x.name == min(merged[merged['date_trunc_month'] == x['date_trunc_month']].index), axis=1)
merged['is_end_of_month'] = merged.apply(lambda x: x.name == max(merged[merged['date_trunc_month'] == x['date_trunc_month']].index), axis=1)
merged[merged['is_start_of_month'] == True]
_____no_output_____
Apache-2.0
pairs_trading.ipynb
snowr/statarb_rw
```sql
SELECT startofmonth, stock1, stock2, SUM(lsr) as lsr
FROM dailyreturns
GROUP BY startofmonth, stock1, stock2
```
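The same monthly aggregation can be sketched in pandas with `groupby`; the toy frame below only mirrors the column and table names from the query (`startofmonth`, `stock1`, `stock2`, `lsr`) with made-up values, not the actual data:

```python
import pandas as pd

# Toy daily long-short returns mirroring the SQL above (illustrative data only).
dailyreturns = pd.DataFrame({
    'startofmonth': ['2020-01', '2020-01', '2020-02'],
    'stock1': ['AAA', 'AAA', 'AAA'],
    'stock2': ['BBB', 'BBB', 'BBB'],
    'lsr': [0.01, -0.02, 0.03],
})

# SELECT startofmonth, stock1, stock2, SUM(lsr) ... GROUP BY startofmonth, stock1, stock2
monthly = (dailyreturns
           .groupby(['startofmonth', 'stock1', 'stock2'], as_index=False)['lsr']
           .sum())
print(monthly)
```

`groupby(...).sum()` plays the role of `GROUP BY ... SUM(...)`, producing one row per month per pair.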
merged[(merged['is_start_of_month'] == True)][['date_trunc_month', 'lsr']]
merged[(merged['is_start_of_month'] == True) | (merged['is_end_of_month'] == True)][['date_trunc_month', 'lsr']].plot()

merged['lsr'] = merged.apply(lambda x: 0 if x['is_end_of_month'] is True else x['lsr'], axis=1)
merged[merged['is_end_of_month'] == True]

monthly_agg = merged[(merged['is_start_of_month'] == True) | (merged['is_end_of_month'] == True)].copy()
monthly_agg.reset_index(drop=True)
monthly_agg = monthly_agg.dropna()
monthly_agg

figsize = (30, 10)
ax = monthly_agg.plot(y="lsr", x="date_trunc_month", legend=False, figsize=figsize)
_____no_output_____
Apache-2.0
pairs_trading.ipynb
snowr/statarb_rw
Convolutional Neural Networks
---

In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below.

Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)

Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
import torch
import numpy as np

# check if CUDA is available
train_on_gpu = torch.cuda.is_available()

if not train_on_gpu:
    print('CUDA is not available. Training on CPU ...')
else:
    print('CUDA is available! Training on GPU ...')
CUDA is available! Training on GPU ...
MIT
convolutional-neural-networks/cifar-cnn/cifar10_cnn_augmentation.ipynb
gaoxiao/deep-learning-v2-pytorch
---

Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)

Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data.

Augmentation

In this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html).

TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.

This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler

# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2

# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),  # randomly flip and rotate
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])

# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
                              download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
                             download=True, transform=transform)

# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]

# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)

# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
                                           sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
                                           sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
                                          num_workers=num_workers)

# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']
Files already downloaded and verified Files already downloaded and verified
MIT
convolutional-neural-networks/cifar-cnn/cifar10_cnn_augmentation.ipynb
gaoxiao/deep-learning-v2-pytorch
Visualize a Batch of Training Data
import matplotlib.pyplot as plt
%matplotlib inline

# helper function to un-normalize and display an image
def imshow(img):
    img = img / 2 + 0.5  # unnormalize
    plt.imshow(np.transpose(img, (1, 2, 0)))  # convert from Tensor image

# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()  # convert images to numpy for display

# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    imshow(images[idx])
    ax.set_title(classes[labels[idx]])
_____no_output_____
MIT
convolutional-neural-networks/cifar-cnn/cifar10_cnn_augmentation.ipynb
gaoxiao/deep-learning-v2-pytorch
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']

fig = plt.figure(figsize=(36, 36))
for idx in np.arange(rgb_img.shape[0]):
    ax = fig.add_subplot(1, 3, idx + 1)
    img = rgb_img[idx]
    ax.imshow(img, cmap='gray')
    ax.set_title(channels[idx])
    width, height = img.shape
    thresh = img.max() / 2.5
    for x in range(width):
        for y in range(height):
            val = round(img[x][y], 2) if img[x][y] != 0 else 0
            ax.annotate(str(val), xy=(y, x),
                        horizontalalignment='center',
                        verticalalignment='center', size=8,
                        color='white' if img[x][y] < thresh else 'black')
_____no_output_____
MIT
convolutional-neural-networks/cifar-cnn/cifar10_cnn_augmentation.ipynb
gaoxiao/deep-learning-v2-pytorch
---

Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)

This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:
* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.
* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.
* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.

A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer.

TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.

The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting.

It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure.
Output volume for a convolutional layer

To compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):

> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons fit in the output is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
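The formula is easy to sanity-check in code; a quick sketch using plain integer arithmetic (square inputs and filters assumed):

```python
# Spatial output size of a conv/pool layer: (W - F + 2P) / S + 1
def conv_out(W, F, S=1, P=0):
    return (W - F + 2 * P) // S + 1

print(conv_out(7, 3, S=1, P=0))   # 5, the 7x7 input / 3x3 filter example
print(conv_out(7, 3, S=2, P=0))   # 3, same filter with stride 2
print(conv_out(32, 3, S=1, P=1))  # 32: a 3x3 conv with padding 1 keeps 32x32
```

The last line explains the model below: each 3x3 convolution with `padding=1` preserves the spatial size, and only the 2x2 max-pooling layers halve it (32 → 16 → 8 → 4), giving the `64 * 4 * 4` input to the first linear layer.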
import torch.nn as nn
import torch.nn.functional as F

# define the CNN architecture
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # convolutional layer (sees 32x32x3 image tensor)
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        # convolutional layer (sees 16x16x16 tensor)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        # convolutional layer (sees 8x8x32 tensor)
        self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
        # max pooling layer
        self.pool = nn.MaxPool2d(2, 2)
        # linear layer (64 * 4 * 4 -> 500)
        self.fc1 = nn.Linear(64 * 4 * 4, 500)
        # linear layer (500 -> 10)
        self.fc2 = nn.Linear(500, 10)
        # dropout layer (p=0.25)
        self.dropout = nn.Dropout(0.25)

    def forward(self, x):
        # add sequence of convolutional and max pooling layers
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        # flatten image input
        x = x.view(-1, 64 * 4 * 4)
        # add dropout layer
        x = self.dropout(x)
        # add 1st hidden layer, with relu activation function
        x = F.relu(self.fc1(x))
        # add dropout layer
        x = self.dropout(x)
        # add 2nd hidden layer, with relu activation function
        x = self.fc2(x)
        return x

# create a complete CNN
model = Net()
print(model)

# move tensors to GPU if CUDA is available
if train_on_gpu:
    model.cuda()
Net( (conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (fc1): Linear(in_features=1024, out_features=500, bias=True) (fc2): Linear(in_features=500, out_features=10, bias=True) (dropout): Dropout(p=0.25) )
MIT
convolutional-neural-networks/cifar-cnn/cifar10_cnn_augmentation.ipynb
gaoxiao/deep-learning-v2-pytorch
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)

Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error.

TODO: Define the loss and optimizer and see how these choices change the loss over time.
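To see why the learning rate matters so much, here is a toy gradient-descent loop on the quadratic f(w) = w² (an illustration only, not part of the training code): a tiny rate converges slowly, a moderate one converges quickly, and too large a rate overshoots the minimum and diverges.

```python
def descend(lr, steps=20, w=1.0):
    """Minimize f(w) = w**2 by gradient descent; the gradient is 2*w."""
    for _ in range(steps):
        w = w - lr * 2 * w
    return w

print(descend(0.01))  # creeps toward 0 slowly
print(descend(0.1))   # converges quickly
print(descend(1.1))   # each step overshoots: |w| grows instead of shrinking
```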
import torch.optim as optim # specify loss function (categorical cross-entropy) criterion = nn.CrossEntropyLoss() # specify optimizer optimizer = optim.SGD(model.parameters(), lr=0.01)
---
Train the Network

Remember to look at how the training and validation loss decreases over time; if the validation loss ever increases, it indicates possible overfitting.
# number of epochs to train the model n_epochs = 30 valid_loss_min = np.Inf # track change in validation loss for epoch in range(1, n_epochs+1): # keep track of training and validation loss train_loss = 0.0 valid_loss = 0.0 ################### # train the model # ################### model.train() for batch_idx, (data, target) in enumerate(train_loader): # move tensors to GPU if CUDA is available if train_on_gpu: data, target = data.cuda(), target.cuda() # clear the gradients of all optimized variables optimizer.zero_grad() # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the batch loss loss = criterion(output, target) # backward pass: compute gradient of the loss with respect to model parameters loss.backward() # perform a single optimization step (parameter update) optimizer.step() # update training loss train_loss += loss.item()*data.size(0) ###################### # validate the model # ###################### model.eval() for batch_idx, (data, target) in enumerate(valid_loader): # move tensors to GPU if CUDA is available if train_on_gpu: data, target = data.cuda(), target.cuda() # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the batch loss loss = criterion(output, target) # update average validation loss valid_loss += loss.item()*data.size(0) # calculate average losses train_loss = train_loss/len(train_loader.dataset) valid_loss = valid_loss/len(valid_loader.dataset) # print training/validation statistics print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format( epoch, train_loss, valid_loss)) # save model if validation loss has decreased if valid_loss <= valid_loss_min: print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format( valid_loss_min, valid_loss)) torch.save(model.state_dict(), 'model_augmented.pt') valid_loss_min = valid_loss
Epoch: 1 Training Loss: 1.483400 Validation Loss: 0.357015 Validation loss decreased (inf --> 0.357015). Saving model ... Epoch: 2 Training Loss: 1.462634 Validation Loss: 0.355032 Validation loss decreased (0.357015 --> 0.355032). Saving model ... Epoch: 3 Training Loss: 1.454201 Validation Loss: 0.351629 Validation loss decreased (0.355032 --> 0.351629). Saving model ... Epoch: 4 Training Loss: 1.446663 Validation Loss: 0.349965 Validation loss decreased (0.351629 --> 0.349965). Saving model ... Epoch: 5 Training Loss: 1.444058 Validation Loss: 0.348746 Validation loss decreased (0.349965 --> 0.348746). Saving model ... Epoch: 6 Training Loss: 1.443467 Validation Loss: 0.350312 Epoch: 7 Training Loss: 1.435970 Validation Loss: 0.347319 Validation loss decreased (0.348746 --> 0.347319). Saving model ... Epoch: 8 Training Loss: 1.430920 Validation Loss: 0.347215 Validation loss decreased (0.347319 --> 0.347215). Saving model ... Epoch: 9 Training Loss: 1.428707 Validation Loss: 0.347475 Epoch: 10 Training Loss: 1.426082 Validation Loss: 0.344052 Validation loss decreased (0.347215 --> 0.344052). Saving model ... Epoch: 11 Training Loss: 1.421921 Validation Loss: 0.343847 Validation loss decreased (0.344052 --> 0.343847). Saving model ... Epoch: 12 Training Loss: 1.419442 Validation Loss: 0.344311 Epoch: 13 Training Loss: 1.418585 Validation Loss: 0.345510 Epoch: 14 Training Loss: 1.415068 Validation Loss: 0.342422 Validation loss decreased (0.343847 --> 0.342422). Saving model ... Epoch: 15 Training Loss: 1.410706 Validation Loss: 0.340904 Validation loss decreased (0.342422 --> 0.340904). Saving model ... Epoch: 16 Training Loss: 1.415896 Validation Loss: 0.342239 Epoch: 17 Training Loss: 1.408537 Validation Loss: 0.339082 Validation loss decreased (0.340904 --> 0.339082). Saving model ... Epoch: 18 Training Loss: 1.406269 Validation Loss: 0.343943 Epoch: 19 Training Loss: 1.404933 Validation Loss: 0.338068 Validation loss decreased (0.339082 --> 0.338068). 
Saving model ... Epoch: 20 Training Loss: 1.404783 Validation Loss: 0.341946 Epoch: 21 Training Loss: 1.401647 Validation Loss: 0.340137 Epoch: 22 Training Loss: 1.402606 Validation Loss: 0.339872 Epoch: 23 Training Loss: 1.398441 Validation Loss: 0.337148 Validation loss decreased (0.338068 --> 0.337148). Saving model ... Epoch: 24 Training Loss: 1.401196 Validation Loss: 0.335357 Validation loss decreased (0.337148 --> 0.335357). Saving model ... Epoch: 25 Training Loss: 1.392405 Validation Loss: 0.339850 Epoch: 26 Training Loss: 1.392132 Validation Loss: 0.339279 Epoch: 27 Training Loss: 1.387730 Validation Loss: 0.331665 Validation loss decreased (0.335357 --> 0.331665). Saving model ... Epoch: 28 Training Loss: 1.380706 Validation Loss: 0.340137 Epoch: 29 Training Loss: 1.374192 Validation Loss: 0.332150 Epoch: 30 Training Loss: 1.372476 Validation Loss: 0.332638
Load the Model with the Lowest Validation Loss
model.load_state_dict(torch.load('model_augmented.pt'))
---
Test the Trained Network

Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
# track test loss test_loss = 0.0 class_correct = list(0. for i in range(10)) class_total = list(0. for i in range(10)) model.eval() # iterate over test data for batch_idx, (data, target) in enumerate(test_loader): # move tensors to GPU if CUDA is available if train_on_gpu: data, target = data.cuda(), target.cuda() # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the batch loss loss = criterion(output, target) # update test loss test_loss += loss.item()*data.size(0) # convert output probabilities to predicted class _, pred = torch.max(output, 1) # compare predictions to true label correct_tensor = pred.eq(target.data.view_as(pred)) correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy()) # calculate test accuracy for each object class for i in range(batch_size): label = target.data[i] class_correct[label] += correct[i].item() class_total[label] += 1 # average test loss test_loss = test_loss/len(test_loader.dataset) print('Test Loss: {:.6f}\n'.format(test_loss)) for i in range(10): if class_total[i] > 0: print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % ( classes[i], 100 * class_correct[i] / class_total[i], np.sum(class_correct[i]), np.sum(class_total[i]))) else: print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i])) print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % ( 100. * np.sum(class_correct) / np.sum(class_total), np.sum(class_correct), np.sum(class_total)))
Test Loss: 1.650751 Test Accuracy of airplane: 37% (374/1000) Test Accuracy of automobile: 47% (471/1000) Test Accuracy of bird: 8% (80/1000) Test Accuracy of cat: 20% (203/1000) Test Accuracy of deer: 39% (390/1000) Test Accuracy of dog: 27% (274/1000) Test Accuracy of frog: 37% (372/1000) Test Accuracy of horse: 48% (483/1000) Test Accuracy of ship: 47% (473/1000) Test Accuracy of truck: 58% (581/1000) Test Accuracy (Overall): 37% (3701/10000)
Visualize Sample Test Results
# obtain one batch of test images dataiter = iter(test_loader) images, labels = dataiter.next() images.numpy() # move model inputs to cuda, if GPU available if train_on_gpu: images = images.cuda() # get sample outputs output = model(images) # convert output probabilities to predicted class _, preds_tensor = torch.max(output, 1) preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy()) # plot the images in the batch, along with predicted and true labels fig = plt.figure(figsize=(25, 4)) for idx in np.arange(20): ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[]) imshow(images[idx]) ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]), color=("green" if preds[idx]==labels[idx].item() else "red"))
Data core

> Core functionality for gathering data

The classes here provide functionality for applying a list of transforms to a set of items (`TfmdLists`, `Datasets`) or a `DataLoader` (`TfmdDL`) as well as the base class used to gather the data for model training: `DataLoaders`.

TfmdDL -
#export @typedispatch def show_batch(x, y, samples, ctxs=None, max_n=9, **kwargs): if ctxs is None: ctxs = Inf.nones if hasattr(samples[0], 'show'): ctxs = [s.show(ctx=c, **kwargs) for s,c,_ in zip(samples,ctxs,range(max_n))] else: for i in range_of(samples[0]): ctxs = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))] return ctxs
Apache-2.0
nbs/03_data.core.ipynb
aaminggo/fastai
`show_batch` is a type-dispatched function that is responsible for showing decoded `samples`. `x` and `y` are the input and the target in the batch to be shown, and are passed along to dispatch on their types. There is a different implementation of `show_batch` if `x` is a `TensorImage` or a `TensorText`, for instance (see vision.core or text.data for more details). `ctxs` can be passed, but the function is responsible for creating them if necessary. `kwargs` depend on the specific implementation.
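The single-argument version of this dispatch idea can be sketched with the standard library's `functools.singledispatch` (an illustration only: fastai's `@typedispatch` is its own mechanism and dispatches on multiple arguments, here `x` and `y`):

```python
from functools import singledispatch

@singledispatch
def show(x):
    # fallback for unregistered types
    return f"object: {x!r}"

@show.register(int)
def _(x):
    return f"int: {x}"

@show.register(str)
def _(x):
    return f"text: {x}"

print(show(3))        # int: 3
print(show("hello"))  # text: hello
print(show(2.5))      # object: 2.5
```

fastai extends this so that registering `show_batch(x: TensorImage, ...)` automatically selects the image-specific implementation when an image batch is shown.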
#export @typedispatch def show_results(x, y, samples, outs, ctxs=None, max_n=9, **kwargs): if ctxs is None: ctxs = Inf.nones for i in range(len(samples[0])): ctxs = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))] for i in range(len(outs[0])): ctxs = [b.show(ctx=c, **kwargs) for b,c,_ in zip(outs.itemgot(i),ctxs,range(max_n))] return ctxs
`show_results` is a type-dispatched function that is responsible for showing decoded `samples` and their corresponding `outs`. Like in `show_batch`, `x` and `y` are the input and the target in the batch to be shown, and are passed along to dispatch on their types. `ctxs` can be passed, but the function is responsible for creating them if necessary. `kwargs` depend on the specific implementation.
#export _all_ = ["show_batch", "show_results"] #export _batch_tfms = ('after_item','before_batch','after_batch') #export @log_args(but_as=DataLoader.__init__) @delegates() class TfmdDL(DataLoader): "Transformed `DataLoader`" def __init__(self, dataset, bs=64, shuffle=False, num_workers=None, verbose=False, do_setup=True, **kwargs): if num_workers is None: num_workers = min(16, defaults.cpus) for nm in _batch_tfms: kwargs[nm] = Pipeline(kwargs.get(nm,None)) super().__init__(dataset, bs=bs, shuffle=shuffle, num_workers=num_workers, **kwargs) if do_setup: for nm in _batch_tfms: pv(f"Setting up {nm}: {kwargs[nm]}", verbose) kwargs[nm].setup(self) def _one_pass(self): b = self.do_batch([self.do_item(0)]) if self.device is not None: b = to_device(b, self.device) its = self.after_batch(b) self._n_inp = 1 if not isinstance(its, (list,tuple)) or len(its)==1 else len(its)-1 self._types = explode_types(its) def _retain_dl(self,b): if not getattr(self, '_types', None): self._one_pass() return retain_types(b, typs=self._types) @delegates(DataLoader.new) def new(self, dataset=None, cls=None, **kwargs): res = super().new(dataset, cls, do_setup=False, **kwargs) if not hasattr(self, '_n_inp') or not hasattr(self, '_types'): try: self._one_pass() res._n_inp,res._types = self._n_inp,self._types except: print("Could not do one pass in your dataloader, there is something wrong in it") else: res._n_inp,res._types = self._n_inp,self._types return res def before_iter(self): super().before_iter() split_idx = getattr(self.dataset, 'split_idx', None) for nm in _batch_tfms: f = getattr(self,nm) if isinstance(f,Pipeline): f.split_idx=split_idx def decode(self, b): return self.before_batch.decode(to_cpu(self.after_batch.decode(self._retain_dl(b)))) def decode_batch(self, b, max_n=9, full=True): return self._decode_batch(self.decode(b), max_n, full) def _decode_batch(self, b, max_n=9, full=True): f = self.after_item.decode f = compose(f, partial(getattr(self.dataset,'decode',noop), full = full)) 
return L(batch_to_samples(b, max_n=max_n)).map(f) def _pre_show_batch(self, b, max_n=9): "Decode `b` to be ready for `show_batch`" b = self.decode(b) if hasattr(b, 'show'): return b,None,None its = self._decode_batch(b, max_n, full=False) if not is_listy(b): b,its = [b],L((o,) for o in its) return detuplify(b[:self.n_inp]),detuplify(b[self.n_inp:]),its def show_batch(self, b=None, max_n=9, ctxs=None, show=True, unique=False, **kwargs): if unique: old_get_idxs = self.get_idxs self.get_idxs = lambda: Inf.zeros if b is None: b = self.one_batch() if not show: return self._pre_show_batch(b, max_n=max_n) show_batch(*self._pre_show_batch(b, max_n=max_n), ctxs=ctxs, max_n=max_n, **kwargs) if unique: self.get_idxs = old_get_idxs def show_results(self, b, out, max_n=9, ctxs=None, show=True, **kwargs): x,y,its = self.show_batch(b, max_n=max_n, show=False) b_out = type(b)(b[:self.n_inp] + (tuple(out) if is_listy(out) else (out,))) x1,y1,outs = self.show_batch(b_out, max_n=max_n, show=False) res = (x,x1,None,None) if its is None else (x, y, its, outs.itemgot(slice(self.n_inp,None))) if not show: return res show_results(*res, ctxs=ctxs, max_n=max_n, **kwargs) @property def n_inp(self): if hasattr(self.dataset, 'n_inp'): return self.dataset.n_inp if not hasattr(self, '_n_inp'): self._one_pass() return self._n_inp def to(self, device): self.device = device for tfm in self.after_batch.fs: for a in L(getattr(tfm, 'parameters', None)): setattr(tfm, a, getattr(tfm, a).to(device)) return self
A `TfmdDL` is a `DataLoader` that creates a `Pipeline` from a list of `Transform`s for each of the callbacks `after_item`, `before_batch` and `after_batch`. As a result, it can decode or show a processed `batch`.
#export add_docs(TfmdDL, decode="Decode `b` using `tfms`", decode_batch="Decode `b` entirely", new="Create a new version of self with a few changed attributes", show_batch="Show `b` (defaults to `one_batch`), a list of lists of pipeline outputs (i.e. output of a `DataLoader`)", show_results="Show each item of `b` and `out`", before_iter="override", to="Put self and its transforms state on `device`") class _Category(int, ShowTitle): pass #Test retain type class NegTfm(Transform): def encodes(self, x): return torch.neg(x) def decodes(self, x): return torch.neg(x) tdl = TfmdDL([(TensorImage([1]),)] * 4, after_batch=NegTfm(), bs=4, num_workers=4) b = tdl.one_batch() test_eq(type(b[0]), TensorImage) b = (tensor([1.,1.,1.,1.]),) test_eq(type(tdl.decode_batch(b)[0][0]), TensorImage) class A(Transform): def encodes(self, x): return x def decodes(self, x): return TitledInt(x) @Transform def f(x)->None: return fastuple((x,x)) start = torch.arange(50) test_eq_type(f(2), fastuple((2,2))) a = A() tdl = TfmdDL(start, after_item=lambda x: (a(x), f(x)), bs=4) x,y = tdl.one_batch() test_eq(type(y), fastuple) s = tdl.decode_batch((x,y)) test_eq(type(s[0][1]), fastuple) tdl = TfmdDL(torch.arange(0,50), after_item=A(), after_batch=NegTfm(), bs=4) test_eq(tdl.dataset[0], start[0]) test_eq(len(tdl), (50-1)//4+1) test_eq(tdl.bs, 4) test_stdout(tdl.show_batch, '0\n1\n2\n3') test_stdout(partial(tdl.show_batch, unique=True), '0\n0\n0\n0') class B(Transform): parameters = 'a' def __init__(self): self.a = torch.tensor(0.) def encodes(self, x): x tdl = TfmdDL([(TensorImage([1]),)] * 4, after_batch=B(), bs=4) test_eq(tdl.after_batch.fs[0].a.device, torch.device('cpu')) tdl.to(default_device()) test_eq(tdl.after_batch.fs[0].a.device, default_device())
Methods
show_doc(TfmdDL.one_batch) tfm = NegTfm() tdl = TfmdDL(start, after_batch=tfm, bs=4) b = tdl.one_batch() test_eq(tensor([0,-1,-2,-3]), b) show_doc(TfmdDL.decode) test_eq(tdl.decode(b), tensor(0,1,2,3)) show_doc(TfmdDL.decode_batch) test_eq(tdl.decode_batch(b), [0,1,2,3]) show_doc(TfmdDL.show_batch) show_doc(TfmdDL.to)
DataLoaders -
# export @docs class DataLoaders(GetAttr): "Basic wrapper around several `DataLoader`s." _default='train' def __init__(self, *loaders, path='.', device=None): self.loaders,self.path = list(loaders),Path(path) if device is not None or hasattr(loaders[0],'to'): self.device = device def __getitem__(self, i): return self.loaders[i] def new_empty(self): loaders = [dl.new(dl.dataset.new_empty()) for dl in self.loaders] return type(self)(*loaders, path=self.path, device=self.device) def _set(i, self, v): self.loaders[i] = v train ,valid = add_props(lambda i,x: x[i], _set) train_ds,valid_ds = add_props(lambda i,x: x[i].dataset) @property def device(self): return self._device @device.setter def device(self, d): for dl in self.loaders: dl.to(d) self._device = d def to(self, device): self.device = device return self def cuda(self): return self.to(device=default_device()) def cpu(self): return self.to(device=torch.device('cpu')) @classmethod def from_dsets(cls, *ds, path='.', bs=64, device=None, dl_type=TfmdDL, **kwargs): default = (True,) + (False,) * (len(ds)-1) defaults = {'shuffle': default, 'drop_last': default} for nm in _batch_tfms: if nm in kwargs: kwargs[nm] = Pipeline(kwargs[nm]) kwargs = merge(defaults, {k: tuplify(v, match=ds) for k,v in kwargs.items()}) kwargs = [{k: v[i] for k,v in kwargs.items()} for i in range_of(ds)] return cls(*[dl_type(d, bs=bs, **k) for d,k in zip(ds, kwargs)], path=path, device=device) @classmethod def from_dblock(cls, dblock, source, path='.', bs=64, val_bs=None, shuffle_train=True, device=None, **kwargs): return dblock.dataloaders(source, path=path, bs=bs, val_bs=val_bs, shuffle_train=shuffle_train, device=device, **kwargs) _docs=dict(__getitem__="Retrieve `DataLoader` at `i` (`0` is training, `1` is validation)", train="Training `DataLoader`", valid="Validation `DataLoader`", train_ds="Training `Dataset`", valid_ds="Validation `Dataset`", to="Use `device`", cuda="Use the gpu if available", cpu="Use the cpu", new_empty="Create a new 
empty version of `self` with the same transforms", from_dblock="Create a dataloaders from a given `dblock`") dls = DataLoaders(tdl,tdl) x = dls.train.one_batch() x2 = first(tdl) test_eq(x,x2) x2 = dls.one_batch() test_eq(x,x2) #hide #test assignment works dls.train = dls.train.new(bs=4)
Methods
show_doc(DataLoaders.__getitem__) x2 = dls[0].one_batch() test_eq(x,x2) show_doc(DataLoaders.train, name="DataLoaders.train") show_doc(DataLoaders.valid, name="DataLoaders.valid") show_doc(DataLoaders.train_ds, name="DataLoaders.train_ds") show_doc(DataLoaders.valid_ds, name="DataLoaders.valid_ds")
TfmdLists -
#export class FilteredBase: "Base class for lists with subsets" _dl_type,_dbunch_type = TfmdDL,DataLoaders def __init__(self, *args, dl_type=None, **kwargs): if dl_type is not None: self._dl_type = dl_type self.dataloaders = delegates(self._dl_type.__init__)(self.dataloaders) super().__init__(*args, **kwargs) @property def n_subsets(self): return len(self.splits) def _new(self, items, **kwargs): return super()._new(items, splits=self.splits, **kwargs) def subset(self): raise NotImplemented def dataloaders(self, bs=64, val_bs=None, shuffle_train=True, n=None, path='.', dl_type=None, dl_kwargs=None, device=None, **kwargs): if device is None: device=default_device() if dl_kwargs is None: dl_kwargs = [{}] * self.n_subsets if dl_type is None: dl_type = self._dl_type drop_last = kwargs.pop('drop_last', shuffle_train) dl = dl_type(self.subset(0), bs=bs, shuffle=shuffle_train, drop_last=drop_last, n=n, device=device, **merge(kwargs, dl_kwargs[0])) dls = [dl] + [dl.new(self.subset(i), bs=(bs if val_bs is None else val_bs), shuffle=False, drop_last=False, n=None, **dl_kwargs[i]) for i in range(1, self.n_subsets)] return self._dbunch_type(*dls, path=path, device=device) FilteredBase.train,FilteredBase.valid = add_props(lambda i,x: x.subset(i)) #export class TfmdLists(FilteredBase, L, GetAttr): "A `Pipeline` of `tfms` applied to a collection of `items`" _default='tfms' def __init__(self, items, tfms, use_list=None, do_setup=True, split_idx=None, train_setup=True, splits=None, types=None, verbose=False, dl_type=None): super().__init__(items, use_list=use_list) if dl_type is not None: self._dl_type = dl_type self.splits = L([slice(None),[]] if splits is None else splits).map(mask2idxs) if isinstance(tfms,TfmdLists): tfms = tfms.tfms if isinstance(tfms,Pipeline): do_setup=False self.tfms = Pipeline(tfms, split_idx=split_idx) store_attr('types,split_idx') if do_setup: pv(f"Setting up {self.tfms}", verbose) self.setup(train_setup=train_setup) def _new(self, items, split_idx=None, 
**kwargs): split_idx = ifnone(split_idx,self.split_idx) return super()._new(items, tfms=self.tfms, do_setup=False, types=self.types, split_idx=split_idx, **kwargs) def subset(self, i): return self._new(self._get(self.splits[i]), split_idx=i) def _after_item(self, o): return self.tfms(o) def __repr__(self): return f"{self.__class__.__name__}: {self.items}\ntfms - {self.tfms.fs}" def __iter__(self): return (self[i] for i in range(len(self))) def show(self, o, **kwargs): return self.tfms.show(o, **kwargs) def decode(self, o, **kwargs): return self.tfms.decode(o, **kwargs) def __call__(self, o, **kwargs): return self.tfms.__call__(o, **kwargs) def overlapping_splits(self): return L(Counter(self.splits.concat()).values()).filter(gt(1)) def new_empty(self): return self._new([]) def setup(self, train_setup=True): self.tfms.setup(self, train_setup) if len(self) != 0: x = super().__getitem__(0) if self.splits is None else super().__getitem__(self.splits[0])[0] self.types = [] for f in self.tfms.fs: self.types.append(getattr(f, 'input_types', type(x))) x = f(x) self.types.append(type(x)) types = L(t if is_listy(t) else [t] for t in self.types).concat().unique() self.pretty_types = '\n'.join([f' - {t}' for t in types]) def infer_idx(self, x): # TODO: check if we really need this, or can simplify idx = 0 for t in self.types: if isinstance(x, t): break idx += 1 types = L(t if is_listy(t) else [t] for t in self.types).concat().unique() pretty_types = '\n'.join([f' - {t}' for t in types]) assert idx < len(self.types), f"Expected an input of type in \n{pretty_types}\n but got {type(x)}" return idx def infer(self, x): return compose_tfms(x, tfms=self.tfms.fs[self.infer_idx(x):], split_idx=self.split_idx) def __getitem__(self, idx): res = super().__getitem__(idx) if self._after_item is None: return res return self._after_item(res) if is_indexer(idx) else res.map(self._after_item) #export add_docs(TfmdLists, setup="Transform setup with self", decode="From `Pipeline`", show="From 
`Pipeline`", overlapping_splits="All splits that are in more than one split", subset="New `TfmdLists` with same tfms that only includes items in `i`th split", infer_idx="Finds the index where `self.tfms` can be applied to `x`, depending on the type of `x`", infer="Apply `self.tfms` to `x` starting at the right tfm depending on the type of `x`", new_empty="A new version of `self` but with no items") #exports def decode_at(o, idx): "Decoded item at `idx`" return o.decode(o[idx]) #exports def show_at(o, idx, **kwargs): "Show item at `idx`" return o.show(o[idx], **kwargs)
A `TfmdLists` combines a collection of objects with a `Pipeline`. `tfms` can either be a `Pipeline` or a list of transforms, in which case it will wrap them in a `Pipeline`. `use_list` is passed along to `L` with the `items`, and `split_idx` is passed to each transform of the `Pipeline`. `do_setup` indicates if the `Pipeline.setup` method should be called during initialization.
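The core behavior — compose the transforms in order whenever an item is fetched — can be sketched without fastai (a simplified stand-in, not the real `TfmdLists`/`Pipeline`, which also handle setup, decoding, type retention, and splits):

```python
class MiniTfmdList:
    """Toy version: apply each transform in order on item access."""
    def __init__(self, items, tfms):
        self.items, self.tfms = list(items), list(tfms)
    def __getitem__(self, i):
        x = self.items[i]
        for f in self.tfms:  # transforms compose left to right
            x = f(x)
        return x
    def __len__(self):
        return len(self.items)

# Mirrors the negate + int-cast pipeline used in the tests below
tl = MiniTfmdList([1., 2., 3.], [lambda o: -o, int])
print(tl[0])                            # -1
print([tl[i] for i in range(len(tl))])  # [-1, -2, -3]
```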
class _IntFloatTfm(Transform): def encodes(self, o): return TitledInt(o) def decodes(self, o): return TitledFloat(o) int2f_tfm=_IntFloatTfm() def _neg(o): return -o neg_tfm = Transform(_neg, _neg) items = L([1.,2.,3.]); tfms = [neg_tfm, int2f_tfm] tl = TfmdLists(items, tfms=tfms) test_eq_type(tl[0], TitledInt(-1)) test_eq_type(tl[1], TitledInt(-2)) test_eq_type(tl.decode(tl[2]), TitledFloat(3.)) test_stdout(lambda: show_at(tl, 2), '-3') test_eq(tl.types, [float, float, TitledInt]) tl # add splits to TfmdLists splits = [[0,2],[1]] tl = TfmdLists(items, tfms=tfms, splits=splits) test_eq(tl.n_subsets, 2) test_eq(tl.train, tl.subset(0)) test_eq(tl.valid, tl.subset(1)) test_eq(tl.train.items, items[splits[0]]) test_eq(tl.valid.items, items[splits[1]]) test_eq(tl.train.tfms.split_idx, 0) test_eq(tl.valid.tfms.split_idx, 1) test_eq(tl.train.new_empty().split_idx, 0) test_eq(tl.valid.new_empty().split_idx, 1) test_eq_type(tl.splits, L(splits)) assert not tl.overlapping_splits() df = pd.DataFrame(dict(a=[1,2,3],b=[2,3,4])) tl = TfmdLists(df, lambda o: o.a+1, splits=[[0],[1,2]]) test_eq(tl[1,2], [3,4]) tr = tl.subset(0) test_eq(tr[:], [2]) val = tl.subset(1) test_eq(val[:], [3,4]) class _B(Transform): def __init__(self): self.m = 0 def encodes(self, o): return o+self.m def decodes(self, o): return o-self.m def setups(self, items): print(items) self.m = tensor(items).float().mean().item() # test for setup, which updates `self.m` tl = TfmdLists(items, _B()) test_eq(tl.m, 2)
TfmdLists: [1.0, 2.0, 3.0] tfms - (#0) []
Here's how we can use `TfmdLists.setup` to implement a simple category list, getting labels from a mock file list:
class _Cat(Transform): order = 1 def encodes(self, o): return int(self.o2i[o]) def decodes(self, o): return TitledStr(self.vocab[o]) def setups(self, items): self.vocab,self.o2i = uniqueify(L(items), sort=True, bidir=True) tcat = _Cat() def _lbl(o): return TitledStr(o.split('_')[0]) # Check that tfms are sorted by `order` & `_lbl` is called first fns = ['dog_0.jpg','cat_0.jpg','cat_2.jpg','cat_1.jpg','dog_1.jpg'] tl = TfmdLists(fns, [tcat,_lbl]) exp_voc = ['cat','dog'] test_eq(tcat.vocab, exp_voc) test_eq(tl.tfms.vocab, exp_voc) test_eq(tl.vocab, exp_voc) test_eq(tl, (1,0,0,0,1)) test_eq([tl.decode(o) for o in tl], ('dog','cat','cat','cat','dog')) #Check only the training set is taken into account for setup tl = TfmdLists(fns, [tcat,_lbl], splits=[[0,4], [1,2,3]]) test_eq(tcat.vocab, ['dog']) tfm = NegTfm(split_idx=1) tds = TfmdLists(start, A()) tdl = TfmdDL(tds, after_batch=tfm, bs=4) x = tdl.one_batch() test_eq(x, torch.arange(4)) tds.split_idx = 1 x = tdl.one_batch() test_eq(x, -torch.arange(4)) tds.split_idx = 0 x = tdl.one_batch() test_eq(x, torch.arange(4)) tds = TfmdLists(start, A()) tdl = TfmdDL(tds, after_batch=NegTfm(), bs=4) test_eq(tdl.dataset[0], start[0]) test_eq(len(tdl), (len(tds)-1)//4+1) test_eq(tdl.bs, 4) test_stdout(tdl.show_batch, '0\n1\n2\n3') show_doc(TfmdLists.subset) show_doc(TfmdLists.infer_idx) show_doc(TfmdLists.infer) def mult(x): return x*2 mult.order = 2 fns = ['dog_0.jpg','cat_0.jpg','cat_2.jpg','cat_1.jpg','dog_1.jpg'] tl = TfmdLists(fns, [_lbl,_Cat(),mult]) test_eq(tl.infer_idx('dog_45.jpg'), 0) test_eq(tl.infer('dog_45.jpg'), 2) test_eq(tl.infer_idx(4), 2) test_eq(tl.infer(4), 8) test_fail(lambda: tl.infer_idx(2.0)) test_fail(lambda: tl.infer(2.0)) #hide #Test input_types works on a Transform cat = _Cat() cat.input_types = (str, float) tl = TfmdLists(fns, [_lbl,cat,mult]) test_eq(tl.infer_idx(2.0), 1) #Test type annotations work on a function def mult(x:(int,float)): return x*2 mult.order = 2 tl = TfmdLists(fns, 
[_lbl,_Cat(),mult]) test_eq(tl.infer_idx(2.0), 2)
Datasets -
#export @docs @delegates(TfmdLists) class Datasets(FilteredBase): "A dataset that creates a tuple from each `tfms`, passed through `item_tfms`" def __init__(self, items=None, tfms=None, tls=None, n_inp=None, dl_type=None, **kwargs): super().__init__(dl_type=dl_type) self.tls = L(tls if tls else [TfmdLists(items, t, **kwargs) for t in L(ifnone(tfms,[None]))]) self.n_inp = ifnone(n_inp, max(1, len(self.tls)-1)) def __getitem__(self, it): res = tuple([tl[it] for tl in self.tls]) return res if is_indexer(it) else list(zip(*res)) def __getattr__(self,k): return gather_attrs(self, k, 'tls') def __dir__(self): return super().__dir__() + gather_attr_names(self, 'tls') def __len__(self): return len(self.tls[0]) def __iter__(self): return (self[i] for i in range(len(self))) def __repr__(self): return coll_repr(self) def decode(self, o, full=True): return tuple(tl.decode(o_, full=full) for o_,tl in zip(o,tuplify(self.tls, match=o))) def subset(self, i): return type(self)(tls=L(tl.subset(i) for tl in self.tls), n_inp=self.n_inp) def _new(self, items, *args, **kwargs): return super()._new(items, tfms=self.tfms, do_setup=False, **kwargs) def overlapping_splits(self): return self.tls[0].overlapping_splits() def new_empty(self): return type(self)(tls=[tl.new_empty() for tl in self.tls], n_inp=self.n_inp) @property def splits(self): return self.tls[0].splits @property def split_idx(self): return self.tls[0].tfms.split_idx @property def items(self): return self.tls[0].items @items.setter def items(self, v): for tl in self.tls: tl.items = v def show(self, o, ctx=None, **kwargs): for o_,tl in zip(o,self.tls): ctx = tl.show(o_, ctx=ctx, **kwargs) return ctx @contextmanager def set_split_idx(self, i): old_split_idx = self.split_idx for tl in self.tls: tl.tfms.split_idx = i try: yield self finally: for tl in self.tls: tl.tfms.split_idx = old_split_idx _docs=dict( decode="Compose `decode` of all `tuple_tfms` then all `tfms` on `i`", show="Show item `o` in `ctx`", dataloaders="Get a 
`DataLoaders`", overlapping_splits="All splits that are in more than one split", subset="New `Datasets` that only includes subset `i`", new_empty="Create a new empty version of the `self`, keeping only the transforms", set_split_idx="Contextmanager to use the same `Datasets` with another `split_idx`" )
_____no_output_____
Apache-2.0
nbs/03_data.core.ipynb
aaminggo/fastai
A `Datasets` creates a tuple from `items` (typically input, target) by applying to them each list of `Transform` (or `Pipeline`) in `tfms`. Note that if `tfms` contains only one list of `tfms`, the items given by `Datasets` will be tuples of one element. `n_inp` is the number of elements in the tuples that should be considered part of the input; it defaults to 1 if `tfms` consists of one set of transforms, and to `len(tfms)-1` otherwise. In most cases, the number of elements in the tuples produced by `Datasets` will be 2 (for input, target), but it can be 3 (Siamese networks or tabular data), in which case we need to be able to determine where the inputs end and the targets begin.
items = [1,2,3,4]
dsets = Datasets(items, [[neg_tfm,int2f_tfm], [add(1)]])
t = dsets[0]
test_eq(t, (-1,2))
test_eq(dsets[0,1,2], [(-1,2),(-2,3),(-3,4)])
test_eq(dsets.n_inp, 1)
dsets.decode(t)

class Norm(Transform):
    def encodes(self, o): return (o-self.m)/self.s
    def decodes(self, o): return (o*self.s)+self.m
    def setups(self, items):
        its = tensor(items).float()
        self.m,self.s = its.mean(),its.std()

items = [1,2,3,4]
nrm = Norm()
dsets = Datasets(items, [[neg_tfm,int2f_tfm], [neg_tfm,nrm]])

x,y = zip(*dsets)
test_close(tensor(y).mean(), 0)
test_close(tensor(y).std(), 1)
test_eq(x, (-1,-2,-3,-4,))
test_eq(nrm.m, -2.5)
test_stdout(lambda:show_at(dsets, 1), '-2')

test_eq(dsets.m, nrm.m)
test_eq(dsets.norm.m, nrm.m)
test_eq(dsets.train.norm.m, nrm.m)

#hide
#Check filtering is properly applied
class B(Transform):
    def encodes(self, x)->None: return int(x+1)
    def decodes(self, x): return TitledInt(x-1)

add1 = B(split_idx=1)
dsets = Datasets(items, [neg_tfm, [neg_tfm,int2f_tfm,add1]], splits=[[3],[0,1,2]])
test_eq(dsets[1], [-2,-2])
test_eq(dsets.valid[1], [-2,-1])
test_eq(dsets.valid[[1,1]], [[-2,-1], [-2,-1]])
test_eq(dsets.train[0], [-4,-4])

test_fns = ['dog_0.jpg','cat_0.jpg','cat_2.jpg','cat_1.jpg','kid_1.jpg']
tcat = _Cat()
dsets = Datasets(test_fns, [[tcat,_lbl]], splits=[[0,1,2], [3,4]])
test_eq(tcat.vocab, ['cat','dog'])
test_eq(dsets.train, [(1,),(0,),(0,)])
test_eq(dsets.valid[0], (0,))
test_stdout(lambda: show_at(dsets.train, 0), "dog")

inp = [0,1,2,3,4]
dsets = Datasets(inp, tfms=[None])

test_eq(*dsets[2], 2)             # Retrieve one item (subset 0 is the default)
test_eq(dsets[1,2], [(1,),(2,)])  # Retrieve two items by index
mask = [True,False,False,True,False]
test_eq(dsets[mask], [(0,),(3,)]) # Retrieve two items by mask

inp = pd.DataFrame(dict(a=[5,1,2,3,4]))
dsets = Datasets(inp, tfms=attrgetter('a')).subset(0)
test_eq(*dsets[2], 2)             # Retrieve one item (subset 0 is the default)
test_eq(dsets[1,2], [(1,),(2,)])  # Retrieve two items by index
mask = [True,False,False,True,False]
test_eq(dsets[mask], [(5,),(3,)]) # Retrieve two items by mask

#test n_inp
inp = [0,1,2,3,4]
dsets = Datasets(inp, tfms=[None])
test_eq(dsets.n_inp, 1)
dsets = Datasets(inp, tfms=[[None],[None],[None]])
test_eq(dsets.n_inp, 2)
dsets = Datasets(inp, tfms=[[None],[None],[None]], n_inp=1)
test_eq(dsets.n_inp, 1)

# splits can be indices
dsets = Datasets(range(5), tfms=[None], splits=[tensor([0,2]), [1,3,4]])

test_eq(dsets.subset(0), [(0,),(2,)])
test_eq(dsets.train, [(0,),(2,)])          # Subset 0 is aliased to `train`
test_eq(dsets.subset(1), [(1,),(3,),(4,)])
test_eq(dsets.valid, [(1,),(3,),(4,)])     # Subset 1 is aliased to `valid`
test_eq(*dsets.valid[2], 4)
#assert '[(1,),(3,),(4,)]' in str(dsets) and '[(0,),(2,)]' in str(dsets)
dsets

# splits can be boolean masks (they don't have to cover all items, but must be disjoint)
splits = [[False,True,True,False,True], [True,False,False,False,False]]
dsets = Datasets(range(5), tfms=[None], splits=splits)

test_eq(dsets.train, [(1,),(2,),(4,)])
test_eq(dsets.valid, [(0,)])

# apply transforms to all items
tfm = [[lambda x: x*2,lambda x: x+1]]
splits = [[1,2],[0,3,4]]
dsets = Datasets(range(5), tfm, splits=splits)
test_eq(dsets.train,[(3,),(5,)])
test_eq(dsets.valid,[(1,),(7,),(9,)])
test_eq(dsets.train[False,True], [(5,)])

# only transform subset 1
class _Tfm(Transform):
    split_idx=1
    def encodes(self, x): return x*2
    def decodes(self, x): return TitledStr(x//2)

dsets = Datasets(range(5), [_Tfm()], splits=[[1,2],[0,3,4]])
test_eq(dsets.train,[(1,),(2,)])
test_eq(dsets.valid,[(0,),(6,),(8,)])
test_eq(dsets.train[False,True], [(2,)])
dsets

#A context manager to change the split_idx and apply the validation transform on the training set
ds = dsets.train
with ds.set_split_idx(1): test_eq(ds,[(2,),(4,)])
test_eq(dsets.train,[(1,),(2,)])

#hide
#Test Datasets pickles
dsrc1 = pickle.loads(pickle.dumps(dsets))
test_eq(dsets.train, dsrc1.train)
test_eq(dsets.valid, dsrc1.valid)

dsets = Datasets(range(5), [_Tfm(),noop], splits=[[1,2],[0,3,4]])
test_eq(dsets.train,[(1,1),(2,2)])
test_eq(dsets.valid,[(0,0),(6,3),(8,4)])

start = torch.arange(0,50)
tds = Datasets(start, [A()])
tdl = TfmdDL(tds, after_item=NegTfm(), bs=4)
b = tdl.one_batch()
test_eq(tdl.decode_batch(b), ((0,),(1,),(2,),(3,)))
test_stdout(tdl.show_batch, "0\n1\n2\n3")

# only transform subset 1
class _Tfm(Transform):
    split_idx=1
    def encodes(self, x): return x*2

dsets = Datasets(range(8), [None], splits=[[1,2,5,7],[0,3,4,6]])

dls = dsets.dataloaders(bs=4, after_batch=_Tfm(), shuffle_train=False, device=torch.device('cpu'))
test_eq(dls.train, [(tensor([1,2,5, 7]),)])
test_eq(dls.valid, [(tensor([0,6,8,12]),)])
test_eq(dls.n_inp, 1)
Methods
items = [1,2,3,4]
dsets = Datasets(items, [[neg_tfm,int2f_tfm]])

#hide_input
_dsrc = Datasets([1,2])
show_doc(_dsrc.dataloaders, name="Datasets.dataloaders")

show_doc(Datasets.decode)

test_eq(*dsets[0], -1)
test_eq(*dsets.decode((-1,)), 1)

show_doc(Datasets.show)

test_stdout(lambda:dsets.show(dsets[1]), '-2')

show_doc(Datasets.new_empty)

items = [1,2,3,4]
nrm = Norm()
dsets = Datasets(items, [[neg_tfm,int2f_tfm], [neg_tfm]])
empty = dsets.new_empty()
test_eq(empty.items, [])

#hide
#test it works for dataframes too
df = pd.DataFrame({'a':[1,2,3,4,5], 'b':[6,7,8,9,10]})
dsets = Datasets(df, [[attrgetter('a')], [attrgetter('b')]])
empty = dsets.new_empty()
Add test set for inference
# only transform subset 1
class _Tfm1(Transform):
    split_idx=0
    def encodes(self, x): return x*3

dsets = Datasets(range(8), [[_Tfm(),_Tfm1()]], splits=[[1,2,5,7],[0,3,4,6]])
test_eq(dsets.train, [(3,),(6,),(15,),(21,)])
test_eq(dsets.valid, [(0,),(6,),(8,),(12,)])

#export
def test_set(dsets, test_items, rm_tfms=None, with_labels=False):
    "Create a test set from `test_items` using validation transforms of `dsets`"
    if isinstance(dsets, Datasets):
        tls = dsets.tls if with_labels else dsets.tls[:dsets.n_inp]
        test_tls = [tl._new(test_items, split_idx=1) for tl in tls]
        if rm_tfms is None: rm_tfms = [tl.infer_idx(get_first(test_items)) for tl in test_tls]
        else:               rm_tfms = tuplify(rm_tfms, match=test_tls)
        for i,j in enumerate(rm_tfms): test_tls[i].tfms.fs = test_tls[i].tfms.fs[j:]
        return Datasets(tls=test_tls)
    elif isinstance(dsets, TfmdLists):
        test_tl = dsets._new(test_items, split_idx=1)
        if rm_tfms is None: rm_tfms = dsets.infer_idx(get_first(test_items))
        test_tl.tfms.fs = test_tl.tfms.fs[rm_tfms:]
        return test_tl
    else: raise Exception(f"This method requires using the fastai library to assemble your data. Expected a `Datasets` or a `TfmdLists` but got {dsets.__class__.__name__}")

class _Tfm1(Transform):
    split_idx=0
    def encodes(self, x): return x*3

dsets = Datasets(range(8), [[_Tfm(),_Tfm1()]], splits=[[1,2,5,7],[0,3,4,6]])
test_eq(dsets.train, [(3,),(6,),(15,),(21,)])
test_eq(dsets.valid, [(0,),(6,),(8,),(12,)])

#Transforms of the validation set are applied
tst = test_set(dsets, [1,2,3])
test_eq(tst, [(2,),(4,),(6,)])

#hide
#Test with different types
tfm = _Tfm1()
tfm.split_idx,tfm.order = None,2
dsets = Datasets(['dog', 'cat', 'cat', 'dog'], [[_Cat(),tfm]])

#With strings
test_eq(test_set(dsets, ['dog', 'cat', 'cat']), [(3,), (0,), (0,)])
#With ints
test_eq(test_set(dsets, [1,2]), [(3,), (6,)])

#hide
#Test with various input lengths
dsets = Datasets(range(8), [[_Tfm(),_Tfm1()],[_Tfm(),_Tfm1()],[_Tfm(),_Tfm1()]], splits=[[1,2,5,7],[0,3,4,6]])
tst = test_set(dsets, [1,2,3])
test_eq(tst, [(2,2),(4,4),(6,6)])

dsets = Datasets(range(8), [[_Tfm(),_Tfm1()],[_Tfm(),_Tfm1()],[_Tfm(),_Tfm1()]], splits=[[1,2,5,7],[0,3,4,6]], n_inp=1)
tst = test_set(dsets, [1,2,3])
test_eq(tst, [(2,),(4,),(6,)])

#hide
#Test with rm_tfms
dsets = Datasets(range(8), [[_Tfm(),_Tfm()]], splits=[[1,2,5,7],[0,3,4,6]])
tst = test_set(dsets, [1,2,3])
test_eq(tst, [(4,),(8,),(12,)])

dsets = Datasets(range(8), [[_Tfm(),_Tfm()]], splits=[[1,2,5,7],[0,3,4,6]])
tst = test_set(dsets, [1,2,3], rm_tfms=1)
test_eq(tst, [(2,),(4,),(6,)])

dsets = Datasets(range(8), [[_Tfm(),_Tfm()], [_Tfm(),_Tfm()]], splits=[[1,2,5,7],[0,3,4,6]], n_inp=2)
tst = test_set(dsets, [1,2,3], rm_tfms=(1,0))
test_eq(tst, [(2,4),(4,8),(6,12)])

#export
@delegates(TfmdDL.__init__)
@patch
def test_dl(self:DataLoaders, test_items, rm_type_tfms=None, with_labels=False, **kwargs):
    "Create a test dataloader from `test_items` using validation transforms of `dls`"
    test_ds = test_set(self.valid_ds, test_items, rm_tfms=rm_type_tfms, with_labels=with_labels
                      ) if isinstance(self.valid_ds, (Datasets, TfmdLists)) else test_items
    return self.valid.new(test_ds, **kwargs)

dsets = Datasets(range(8), [[_Tfm(),_Tfm1()]], splits=[[1,2,5,7],[0,3,4,6]])
dls = dsets.dataloaders(bs=4, device=torch.device('cpu'))

tst_dl = dls.test_dl([2,3,4,5])
test_eq(tst_dl._n_inp, 1)
test_eq(list(tst_dl), [(tensor([ 4,  6,  8, 10]),)])

#Test you can change transforms
tst_dl = dls.test_dl([2,3,4,5], after_item=add1)
test_eq(list(tst_dl), [(tensor([ 5,  7,  9, 11]),)])
Export -
#hide
from nbdev.export import notebook2script
notebook2script()
Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 01a_losses.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 10b_tutorial.albumentations.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 18b_callback.preds.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. 
Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 74_callback.cutmix.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted dev-setup.ipynb. Converted index.ipynb. Converted quick_start.ipynb. Converted tutorial.ipynb.
The behavior of functions. In the coming paragraphs we will examine functions from a slightly different angle than usual. Let's jump straight into the water! The name of a function. An interesting property of Python is that a function is a value, just like any other value. Let's define a function that squares a number:
def square(x):
    return x ** 2
_____no_output_____
CC-BY-4.0
week6/2_Functional_Behavior.ipynb
BrandMan2299/Notebooks
We can check what type the function is (we do not call it with parentheses after its name, we only mention its name):
type(square)
We can even assign it to a variable, so that the new variable name points to it:
ribua = square
print(square(5))
print(ribua(5))
What happens in the cell above? When we defined the function square, we created a laser pointer with the label square that points to the function that squares a number. In the assignment performed in the first line of the cell above, the laser pointer carrying the label ribua was aimed at the same function that the square pointer points to. Now square and ribua point to the same function. We can check this as follows:
ribua is square
At this point I have to ask you to buckle up, because this is not going to be an ordinary trip. Functions in composite structures. If a function is just a value, and we can refer to its name anywhere, there is no reason we cannot create a list of functions! Let's try to implement the idea:
def add(num1, num2):
    return num1 + num2


def subtract(num1, num2):
    return num1 - num2


def multiply(num1, num2):
    return num1 * num2


def divide(num1, num2):
    return num1 / num2


functions = [add, subtract, multiply, divide]
Now we have a list of 4 items, each of which points to a different function. If we want to perform addition, we can call add directly or (for practice) try to retrieve it from the list we created:
# Option 1
print(add(5, 2))

# Option 2
math_function = functions[0]
print(math_function(5, 2))

# Option 3 (ugly, but works!)
print(functions[0](5, 2))
If we want, we can even go over the list of functions with a loop and apply all of them, one after the other:
for function in functions:
    print(function(5, 2))
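The same trick works with a dictionary that maps a key, such as the sign of an operation, to a function. Here is a small sketch (the names here are ours, chosen for illustration) that can serve as a hint for the upcoming exercise:

```python
def add(num1, num2):
    return num1 + num2


def subtract(num1, num2):
    return num1 - num2


# The dictionary values are the functions themselves, not their results
operations = {'+': add, '-': subtract}
print(operations['+'](5, 2))  # 7
print(operations['-'](5, 2))  # 3
```

Retrieving a value from the dictionary gives us back a function, which we can then call like any other function.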
In each iteration of the for loop, the variable function moved on to point to the next function from the functions list. In the following line we called the function that function points to, and printed the value it returned. Since a list is a structure that preserves the order of its items, the results are printed in the order in which the functions are stored in the list. The first result we see is the result of the addition function, the second is the result of the subtraction function, and so on. Intermediate exercise: settling accounts. Write a function named calc that receives as parameters two numbers and the sign of an arithmetic operation. The sign can be one of these: +, -, * or /. The goal of the function is to return the result of the arithmetic expression applied to the two numbers. In your solution, use the function definitions above and a dictionary. Passing a function as a parameter. Let's keep juggling functions. A function is called a "higher-order function" if it receives a function as a parameter. Take, for example, the function calculate:
def calculate(function, num1, num2):
    return function(num1, num2)
In a call to calculate, we need to pass a function and two numbers. For example, let's pass the divide function we defined earlier:
calculate(divide, 5, 2)
What happens in this case is that we passed the function divide as the first argument. The parameter function inside calculate now points to the division function we defined above. It follows that the function will return the result of divide(5, 2), which is 2.5. Intermediate exercise: from map to mouth. Write a generator named apply that receives a function (func) as its first parameter and an iterable (iter) as its second parameter. For each item in the iterable, the generator will yield the item after the function func has been applied to it, that is, func(item). Make sure that running the following cell returns True for your code:
def square(number):
    return number ** 2


square_check = apply(square, [5, -1, 6, -8, 0])
tuple(square_check) == (25, 1, 36, 64, 0)
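If you want to compare against a reference, here is one possible apply that passes the check above. It is only a sketch; try your own version first:

```python
def apply(func, iterable):
    # Yield each item after running func on it
    for item in iterable:
        yield func(item)


def square(number):
    return number ** 2


square_check = apply(square, [5, -1, 6, -8, 0])
print(tuple(square_check) == (25, 1, 36, 64, 0))  # True
```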
Interim summary. Wow. That was pretty crazy. So in fact, functions in Python are values in every respect, just like strings and numbers! We can store them in variables, pass them as arguments, and include them inside more complex data structures. Computer science theorists gave this behavior a name: "first-class citizen". So we can say that functions in Python are first-class citizens. Higher-order functions in Python. The good news is that we have already had a brief introduction to the term higher-order functions. Now that we know these are functions that receive a function as a parameter, let's start getting our hands dirty. We will present a few interesting Python functions of this kind: The function map. The function map receives a function as its first parameter and an iterable as its second parameter. map applies the function from the first parameter to each of the items passed in the iterable. It returns an iterator made of the values returned from applying the function. In other words, map creates a new iterable, which includes the value returned from the function for every item in the iterable that was passed. For example:
squared_items = map(square, [1, 6, -1, 8, 0, 3, -3, 9, -8, 8, -7])
print(tuple(squared_items))
The function received as its first argument the function square that we defined above, whose purpose is to square a number. As its second argument it received the list of all the numbers we want the function to run on. When we passed map these arguments, map returned us, as an iterator (a structure we can traverse item by item), the result: the square, that is, the second power, of each of the items in the list passed as the second argument. In fact, we can say that map is equivalent to the following function:
def my_map(function, iterable):
    for item in iterable:
        yield function(item)
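To convince ourselves of this equivalence, we can compare the output of my_map with the output of the built-in map on the same input:

```python
def my_map(function, iterable):
    for item in iterable:
        yield function(item)


def square(number):
    return number ** 2


items = [1, 6, -1, 8, 0]
print(tuple(my_map(square, items)) == tuple(map(square, items)))  # True
```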
Here is another example of using map:
numbers = [(2, 4), (1, 4, 2), (1, 3, 5, 6, 2), (3, )]
sums = map(sum, numbers)
print(tuple(sums))
In this case, on each pass the function sum received one item from the list: a tuple. It summed the items of every tuple it received, and so returned the sums of all the tuples, one after the other. And one last example:
def add_one(number):
    return number + 1


incremented = map(add_one, (1, 2, 3))
print(tuple(incremented))
In this example we created a function of our own and passed it to map. The purpose of this example is to emphasize that there is no difference between passing a function built into Python and a function that we created ourselves. Write a function that receives a list of strings of two words each: a first name and a last name. The function will use map to return only the first names. Important! Solve this before you continue! The function filter. The function filter receives a function as its first parameter and an iterable as its second parameter. filter applies the function to each of the items of the iterable, and returns the item only if the value returned from the function is equivalent to True. If the return value is equivalent to False, the value is "swallowed" by filter and is not returned from it. In other words, filter creates a new iterable and returns it. The iterable includes only the items for which the function that was passed returned a value equivalent to True. As an example, let's build a function that returns whether a person is an adult. The function receives an age as a parameter, and returns True when the age passed to it is at least 18, and False otherwise:
def is_mature(age):
    return age >= 18