# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # TalkNet Training
#
# This notebook is designed to provide a guide on how to train TalkNet as part of the TTS pipeline. It contains the following three sections:
# 1. **Introduction**: TalkNet in NeMo
# 2. **Preprocessing**: how to prepare data for Talknet
# 3. **Training**: example of TalkNet training
# # License
#
# > Copyright 2020 NVIDIA. All Rights Reserved.
# >
# > Licensed under the Apache License, Version 2.0 (the "License");
# > you may not use this file except in compliance with the License.
# > You may obtain a copy of the License at
# >
# > http://www.apache.org/licenses/LICENSE-2.0
# >
# > Unless required by applicable law or agreed to in writing, software
# > distributed under the License is distributed on an "AS IS" BASIS,
# > WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# > See the License for the specific language governing permissions and
# > limitations under the License.
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# # If you're using Colab and not running locally, uncomment and run this cell.
# # !apt-get install sox libsndfile1 ffmpeg
# # !pip install wget unidecode pysptk
# BRANCH = 'v1.0.0'
# # !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]
# +
import json
import nemo
import torch
import torchaudio
import numpy as np
from pysptk import sptk
from pathlib import Path
from tqdm.notebook import tqdm
# -
# # Introduction
#
# TalkNet is a neural network that converts text characters into a mel spectrogram. For more details about the model, please refer to NVIDIA's TalkNet Model Card, or the original [paper](https://arxiv.org/abs/2104.08189).
#
# TalkNet, like most NeMo models, is defined as a LightningModule, allowing for easy training via PyTorch Lightning, and is parameterized by a configuration, currently defined via a YAML file and loaded using Hydra.
#
# Let's load NeMo's pretrained model and see how to use it to generate spectrograms.
# +
# Load the TalkNetSpectModel
from nemo.collections.tts.models import TalkNetSpectModel
from nemo.collections.tts.models.base import SpectrogramGenerator
# Let's see what pretrained models are available
print(TalkNetSpectModel.list_available_models())
# +
# We can load the pre-trained model as follows
pretrained_model = "tts_en_talknet"
model = TalkNetSpectModel.from_pretrained(pretrained_model)
# Load and attach durs and pitch predictors
from nemo.collections.tts.models import TalkNetPitchModel
pitch_model = TalkNetPitchModel.from_pretrained(pretrained_model)
from nemo.collections.tts.models import TalkNetDursModel
durs_model = TalkNetDursModel.from_pretrained(pretrained_model)
model.add_module('_pitch_model', pitch_model)
model.add_module('_durs_model', durs_model)
model.eval()
# +
# TalkNet is a SpectrogramGenerator
assert isinstance(model, SpectrogramGenerator)
# SpectrogramGenerators in NeMo have two helper functions:
# 1. parse(text: str, **kwargs) which takes an English string and produces a token tensor
# 2. generate_spectrogram(tokens: 'torch.tensor', **kwargs) which takes the token tensor and generates a spectrogram
# Let's try it out
tokens = model.parse(text="Hey, this produces speech!")
spectrogram = model.generate_spectrogram(tokens=tokens)
# Now we can visualize the generated spectrogram
# If we want to generate speech, we have to use a vocoder in conjunction with a spectrogram generator.
# Refer to the TTS Inference notebook on how to convert spectrograms to speech.
from matplotlib.pyplot import imshow
from matplotlib import pyplot as plt
# %matplotlib inline
imshow(spectrogram.cpu().detach().numpy()[0,...], origin="lower")
plt.show()
# -
# # Preprocessing
#
# Now that we looked at the TalkNet model, let's see how to prepare all data for training it.
#
# Firstly, let's download all necessary training scripts and configs.
# +
# !wget https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/tts/talknet_durs.py
# !wget https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/tts/talknet_pitch.py
# !wget https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/tts/talknet_spect.py
# !mkdir -p conf && cd conf \
# && wget https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/tts/conf/talknet-durs.yaml \
# && wget https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/tts/conf/talknet-pitch.yaml \
# && wget https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/tts/conf/talknet-spect.yaml \
# && cd ..
# -
# We will show an example of preprocessing and training using a small part of the AN4 dataset. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time, as well as their corresponding transcripts. Let's download the data and prepare manifests.
#
# *NOTE: The sample data is not enough to properly train TalkNet. It will not produce a usable model and is intended only as an example.*
# +
# !wget https://github.com/NVIDIA/NeMo/releases/download/v0.11.0/test_data.tar.gz && mkdir -p tests/data && tar xzf test_data.tar.gz -C tests/data
# Just like ASR, TalkNet requires .json manifest files to define the training and validation data.
# !cat tests/data/asr/an4_val.json
# !cat tests/data/asr/an4_train.json tests/data/asr/an4_val.json > tests/data/asr/an4_all.json
# -
# ## Extracting phoneme ground truth durations
# As part of the whole model, you will need to train a duration predictor. We will extract ground truth phoneme durations from an ASR model (QuartzNet5x5, trained on LibriTTS) using the forward-backward algorithm (see the paper for details). Let's download the pretrained ASR model and define auxiliary functions.
from nemo.collections.asr.models import EncDecCTCModel
asr_model = EncDecCTCModel.from_pretrained(model_name="asr_talknet_aligner").cpu().eval()
# +
def forward_extractor(tokens, log_probs, blank):
    """Computes states f and p."""
    n, m = len(tokens), log_probs.shape[0]
    # `f[s, t]` -- max sum of log probs for `s` first codes
    # with `t` first timesteps with ending in `tokens[s]`.
    f = np.empty((n + 1, m + 1), dtype=float)
    f.fill(-(10 ** 9))
    p = np.empty((n + 1, m + 1), dtype=int)
    f[0, 0] = 0.0  # Start
    for s in range(1, n + 1):
        c = tokens[s - 1]
        for t in range((s + 1) // 2, m + 1):
            f[s, t] = log_probs[t - 1, c]
            # Option #1: prev char is equal to current one.
            if s == 1 or c == blank or c == tokens[s - 3]:
                options = f[s : (s - 2 if s > 1 else None) : -1, t - 1]
            else:  # Option #2: prev char is not equal to current one.
                options = f[s : (s - 3 if s > 2 else None) : -1, t - 1]
            f[s, t] += np.max(options)
            p[s, t] = np.argmax(options)
    return f, p
def backward_extractor(f, p):
    """Computes durs from f and p."""
    n, m = f.shape
    n -= 1
    m -= 1
    durs = np.zeros(n, dtype=int)
    if f[-1, -1] >= f[-2, -1]:
        s, t = n, m
    else:
        s, t = n - 1, m
    while s > 0:
        durs[s - 1] += 1
        s -= p[s, t]
        t -= 1
    assert durs.shape[0] == n
    assert np.sum(durs) == m
    assert np.all(durs[1::2] > 0)
    return durs
def preprocess_tokens(tokens, blank):
    """Interleaves the CTC blank id around every token."""
    new_tokens = [blank]
    for c in tokens:
        new_tokens.extend([c, blank])
    return new_tokens
# -
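The `preprocess_tokens` helper above interleaves the CTC blank around every token, growing a length-`n` sequence to `2 * n + 1`; a quick standalone illustration (the token ids and blank id here are arbitrary toy values, not the real vocabulary):

```python
def preprocess_tokens(tokens, blank):
    # Interleave the blank id around every token: [c1, c2] -> [blank, c1, blank, c2, blank]
    new_tokens = [blank]
    for c in tokens:
        new_tokens.extend([c, blank])
    return new_tokens

interleaved = preprocess_tokens([7, 3], blank=0)  # toy ids
```

This interleaved form is what the forward-backward pass consumes: even positions hold blank durations, odd positions hold token durations, matching the `durs[::2]` / `durs[1::2]` split used when saving results below.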
# Now we can run extraction and save result.
# +
data_config = {
    'manifest_filepath': "tests/data/asr/an4_all.json",
    'sample_rate': 16000,
    'labels': asr_model.decoder.vocabulary,
    'batch_size': 1,
}
parser = nemo.collections.asr.data.audio_to_text.AudioToCharWithDursF0Dataset.make_vocab(
    notation='phonemes', punct=True, spaces=True, stresses=False, add_blank_at="last"
)
dataset = nemo.collections.asr.data.audio_to_text._AudioTextDataset(
    manifest_filepath=data_config['manifest_filepath'], sample_rate=data_config['sample_rate'], parser=parser,
)
dl = torch.utils.data.DataLoader(
    dataset=dataset, batch_size=data_config['batch_size'], collate_fn=dataset.collate_fn, shuffle=False,
)
blank_id = asr_model.decoder.num_classes_with_blank - 1
dur_data = {}
for sample_idx, test_sample in tqdm(enumerate(dl), total=len(dl)):
    log_probs, _, greedy_predictions = asr_model(
        input_signal=test_sample[0], input_signal_length=test_sample[1]
    )
    log_probs = log_probs[0].cpu().detach().numpy()
    seq_ids = test_sample[2][0].cpu().detach().numpy()
    target_tokens = preprocess_tokens(seq_ids, blank_id)
    f, p = forward_extractor(target_tokens, log_probs, blank_id)
    durs = backward_extractor(f, p)
    dur_key = Path(dl.dataset.collection[sample_idx].audio_file).stem
    dur_data[dur_key] = {
        'blanks': torch.tensor(durs[::2], dtype=torch.long).cpu().detach(),
        'tokens': torch.tensor(durs[1::2], dtype=torch.long).cpu().detach(),
    }
    del test_sample
torch.save(dur_data, "tests/data/asr/an4_durations.pt")
# -
# ## Extracting ground truth f0
# The second model that you will need to train before the spectrogram generator is the pitch predictor. As labels for it, we will extract f0 from the audio using the `pysptk` library (see the paper for details). Let's extract f0, calculate its stats (mean & std), and save everything.
def extract_f0(audio_file, sample_rate=16000, hop_length=256):
    audio = torchaudio.load(audio_file)[0].squeeze().numpy()
    f0 = sptk.swipe(audio.astype(np.float64), sample_rate, hopsize=hop_length)
    # Hack to make f0 and mel lengths equal
    if len(audio) % hop_length == 0:
        f0 = np.pad(f0, pad_width=[0, 1])
    return torch.from_numpy(f0.astype(np.float32))
# +
f0_data = {}
with open("tests/data/asr/an4_all.json") as f:
    for l in tqdm(f):
        audio_path = json.loads(l)["audio_filepath"]
        f0_data[Path(audio_path).stem] = extract_f0(audio_path)
# Calculate f0 stats (mean & std) over the train set only
with open("tests/data/asr/an4_train.json") as f:
    train_ids = {Path(json.loads(l)["audio_filepath"]).stem for l in f}
all_f0 = torch.cat([f0[f0 >= 1e-5] for f0_id, f0 in f0_data.items() if f0_id in train_ids])
F0_MEAN, F0_STD = all_f0.mean().item(), all_f0.std().item()
torch.save(f0_data, "tests/data/asr/an4_f0s.pt")
# -
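The `f0 >= 1e-5` mask above keeps only voiced frames when computing the stats (pitch trackers such as swipe output 0 for unvoiced frames). A minimal numpy sketch of the same computation, using toy f0 values rather than real AN4 tracks:

```python
import numpy as np

# Toy f0 track: zeros mark unvoiced frames, as a pitch tracker would emit them.
f0_track = np.array([0.0, 0.0, 180.5, 190.2, 0.0, 175.0], dtype=np.float32)

voiced = f0_track[f0_track >= 1e-5]       # drop unvoiced frames before taking stats
f0_mean, f0_std = voiced.mean(), voiced.std()

# Normalized pitch values for the voiced frames, the form a pitch predictor
# would typically train against (given model.f0_mean / model.f0_std later).
normalized = (voiced - f0_mean) / f0_std
```

These are the `F0_MEAN` / `F0_STD` values passed to `talknet_pitch.py` in the Training section below.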
# # Training
# Now we are ready to train our models! Let's train the TalkNet parts one after another.
# !python talknet_durs.py sample_rate=16000 \
# train_dataset=tests/data/asr/an4_train.json \
# validation_datasets=tests/data/asr/an4_val.json \
# durs_file=tests/data/asr/an4_durations.pt \
# f0_file=tests/data/asr/an4_f0s.pt \
# trainer.max_epochs=3 \
# trainer.accelerator=null \
# trainer.check_val_every_n_epoch=1 \
# model.train_ds.dataloader_params.batch_size=6 \
# model.train_ds.dataloader_params.num_workers=0 \
# model.validation_ds.dataloader_params.num_workers=0
# !python talknet_pitch.py sample_rate=16000 \
# train_dataset=tests/data/asr/an4_train.json \
# validation_datasets=tests/data/asr/an4_val.json \
# durs_file=tests/data/asr/an4_durations.pt \
# f0_file=tests/data/asr/an4_f0s.pt \
# trainer.max_epochs=3 \
# trainer.accelerator=null \
# trainer.check_val_every_n_epoch=1 \
# model.f0_mean={F0_MEAN} \
# model.f0_std={F0_STD} \
# model.train_ds.dataloader_params.batch_size=6 \
# model.train_ds.dataloader_params.num_workers=0 \
# model.validation_ds.dataloader_params.num_workers=0
# !python talknet_spect.py sample_rate=16000 \
# train_dataset=tests/data/asr/an4_train.json \
# validation_datasets=tests/data/asr/an4_val.json \
# durs_file=tests/data/asr/an4_durations.pt \
# f0_file=tests/data/asr/an4_f0s.pt \
# trainer.max_epochs=3 \
# trainer.accelerator=null \
# trainer.check_val_every_n_epoch=1 \
# model.train_ds.dataloader_params.batch_size=6 \
# model.train_ds.dataloader_params.num_workers=0 \
# model.validation_ds.dataloader_params.num_workers=0
# That's it!
#
# To train TalkNet for real applications, it is highly recommended to obtain high-quality speech data with the following properties:
#
# * Sampling rate of 22050Hz or higher
# * Single speaker
# * Speech should contain a variety of speech phonemes
# * Audio split into segments of 1-10 seconds
# * Audio segments should not have silence at the beginning and end
# * Audio segments should not contain long silences inside
# Source: tutorials/tts/3_TTS_TalkNet_Training.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This notebook presents basic statistics of the SemEval-2016 Russian restaurant reviews dataset.
# +
import logging
import copy
import os
import sys
from functools import reduce
from collections import Counter, defaultdict
import itertools
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
import numpy as np
import matplotlib.pyplot as plt
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path)
from absa import TEST_APPENDIX, parsed_reviews_dump_path, images_path, raw_reviews_dump_path
from absa.utils.dump import load_dump
from absa.utils.embedding import Embeddings
from absa.text.opinion.polarity import Polarity
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
logging.basicConfig(level=logging.INFO)
seed = 42
np.random.seed(seed)
# -
# # Upload
train_raw_reviews = load_dump(pathway=raw_reviews_dump_path)
test_raw_reviews = load_dump(pathway=raw_reviews_dump_path + TEST_APPENDIX)
print(f'Number of train reviews: {len(train_raw_reviews)}')
print(f'Number of test reviews: {len(test_raw_reviews)}')
# # Sentiment distribution
# There is only one 'conflict' polarity
texts = train_raw_reviews
# +
polarity_count = Counter()
for text in texts:
    for opinion in text.opinions:
        polarity_count.update([opinion.polarity])
xy = [(x.value, y) for x, y in sorted(polarity_count.items(), key=lambda item: item[0].value)]
x = [i[0] for i in xy]
y = [j[1] for j in xy]
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(7, 7))
bar_plot = ax.bar(x, y, color='#0000ff', zorder=5)
bar_plot[Polarity.positive.value].set_color('#67D673')
bar_plot[Polarity.neutral.value].set_color('#F7D579')
bar_plot[Polarity.negative.value].set_color('#CF1B5A')
plt.xlim(left=-0.5, right=2.5)
for x, y in xy:
    ax.text(x=x - 0.05 * len(str(y)) + 0.03,
            y=90,
            s=y,
            fontsize=17,
            color='black',
            zorder=10)
plt.xticks([0, 1, 2])
plt.xlabel('Polarity')
plt.ylabel('Count')
plt.title('Polarity distribution')
plt.savefig(os.path.join(images_path, 'polarity_distribution.pdf'))
# -
# # Aspect categories
# Print all aspect categories
# +
texts = train_raw_reviews + test_raw_reviews
aspects = set()
for text in texts:
    for opinion in text.opinions:
        aspects.add(opinion.category)
for aspect in sorted(aspects):
    print(aspect)
# -
# ### Mark one target as different aspects
# +
texts = train_raw_reviews
def get_category_count(explicit=True):
    explicit_category_count = Counter()
    for text in texts:
        if text.opinions:
            sentence_category = defaultdict(lambda: 0)
            for opinion in text.opinions:
                if explicit ^ opinion.is_implicit():
                    sentence_category[frozenset((opinion.start_index, opinion.stop_index))] += 1
            explicit_category_count.update(Counter(sentence_category.values()))
    xy = sorted(explicit_category_count.items(), key=lambda item: item[0])
    x = np.array([i[0] for i in xy])
    y = np.array([j[1] for j in xy])
    return x, y
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(7, 7))
text_display_params = {
    'y': 140,
    'fontsize': 17,
    'color': 'black',
    'zorder': 10,
}
# ------- Explicit ------------
x, y = get_category_count(explicit=True)
print(y)
x = x - 0.2
bar_plot = ax.bar(x, y, color='#5E95EB', zorder=5, width = 0.4)
plt.xlim(left=0.5, right=3.5)
ax.text(x=0.63, s=y[0], **text_display_params)
ax.text(x=1.74, s=y[1], **text_display_params)
ax.text(x=2.8, s=y[2], **text_display_params)
# ------- Implicit ------------
x, y = get_category_count(explicit=False)
print(y)
x = x + 0.2
bar_plot = ax.bar(x, y, color='#E0945E', zorder=5, width = 0.4)
ax.text(x=1.09, s=y[0], **text_display_params)
ax.text(x=2.13, s=y[1], **text_display_params)
ax.text(x=3.15, s=y[2], **text_display_params)
plt.xticks([1, 2, 3])
plt.xlabel('Number of aspect categories of aspect term')
plt.ylabel('Count')
plt.title('Distribution of number of term categories')
ax.legend(['Explicit', 'Implicit'])
plt.savefig(os.path.join(images_path, 'number_term_categories_distribution.pdf'))
# -
# ### Implicit opinions
# +
implicit_opinions = 0
for text in train_raw_reviews:
    for opinion in text.opinions:
        if opinion.is_implicit():
            implicit_opinions += 1
implicit_opinions
# -
# ## Distributions
# ### Aspect
# +
texts = train_raw_reviews
entity_counter = Counter()
for text in texts:
    for opinion in text.opinions:
        entity_counter.update([opinion.category])
labels = list(entity_counter.keys())
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 7))
#ax.grid('on', zorder=0)
plt.bar(entity_counter.keys(), entity_counter.values(), color='#0000ff', zorder=5)
plt.xticks(ticks=[x-1 for x in range(len(entity_counter))], labels=labels, rotation=45)
plt.title('Distribution of aspect categories')
plt.xlabel('Aspect categories')
plt.ylabel('Count');
# -
# ### Entity
# +
texts = train_raw_reviews
entity_counter = Counter()
for text in texts:
    for opinion in text.opinions:
        entity_counter.update([opinion.category.split('#')[0]])
labels = list(entity_counter.values())
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 7))
ax.bar(entity_counter.keys(), entity_counter.values(), color='#0000ff', zorder=5)
plt.xlim(left=-0.5, right=5.5)
for i, v in enumerate(labels):
    ax.text(x=i - 0.05 * len(str(v)) - 0.005,
            y=70,
            s=v,
            fontsize=17,
            # color='black' if i == 3 else 'white',
            zorder=10)
plt.xlabel('Entity')
plt.ylabel('Count')
plt.title('Distribution of entities');
# -
# ### Attribute
# +
texts = train_raw_reviews
attribute_counter = Counter()
for text in texts:
    for opinion in text.opinions:
        attribute_counter.update([opinion.category.split('#')[1]])
labels = list(attribute_counter.values())
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(9, 7))
ax.bar(attribute_counter.keys(), attribute_counter.values(), color='#0000ff', zorder=5)
plt.xlim(left=-0.5, right=4.5)
for i, v in enumerate(labels):
    ax.text(x=i - 0.05 * len(str(v)) - 0.005,
            y=70,
            s=v,
            fontsize=17,
            zorder=10)
plt.xlabel('Attributes')
plt.ylabel('Count')
plt.title('Distribution of attributes');
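Both of the last two plots rely on splitting SemEval-style category labels of the form `ENTITY#ATTRIBUTE` on `'#'`. A standalone sketch with hypothetical category labels (not the dataset's actual counts):

```python
from collections import Counter

# Hypothetical SemEval-2016 ABSA categories of the form ENTITY#ATTRIBUTE
categories = ['FOOD#QUALITY', 'SERVICE#GENERAL', 'FOOD#PRICES', 'AMBIENCE#GENERAL']

# One pass each for the entity (before '#') and attribute (after '#') distributions
entity_counter = Counter(c.split('#')[0] for c in categories)
attribute_counter = Counter(c.split('#')[1] for c in categories)
```

Passing a generator straight to `Counter` avoids the per-opinion `update([...])` calls used above.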
# Source: notebooks/explore dataset.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Writing Testable Numerics Code
# Here's the contents of a file containing numerics code:
# !pygmentize norms.py
# Note:
#
# - Docstring
# - Defensive programming
# !pygmentize test_norms.py
# * Now use [pytest](https://pytest.org) to run the test.
# !python -m pytest
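We can't see `norms.py` here, but a representative sketch of the pattern it illustrates (a hypothetical function and test, not the file's actual contents): a docstring, defensive input checks, and a pytest-discoverable `test_` function.

```python
import math

def vector_norm(x, p=2):
    """p-norm of a sequence of numbers, with defensive input checks."""
    if len(x) == 0:
        raise ValueError("expected a non-empty sequence")
    if p < 1:
        raise ValueError("p must be >= 1")
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

def test_vector_norm():
    # pytest collects any function whose name starts with test_
    assert math.isclose(vector_norm([3.0, 4.0]), 5.0)
    assert math.isclose(vector_norm([1.0, -2.0], p=1), 3.0)
```

Running `python -m pytest` in a directory containing such a file discovers and executes the `test_` functions automatically.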
# A typical use for these tests would be to run them on every commit to a codebase.
#
# Example: https://github.com/inducer/boxtree (click the "Pipeline" button)
# Source: error_and_fp/Writing Testable Numerics Code.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Reduce
# The reduce function repeatedly applies a function to a sequence, folding it into a single value. Like map, reduce has the general syntax:
#
# reduce(function, sequence)
#
# Semantically, the initial call is where reduce actually differs from map(). Reduce takes the first two items from the sequence and applies the function to them, returning a single item. Conceptually, the sequence is then rewritten so that those two items are replaced by the single, combined item.
#
# reduce(function, [item1, item2, item3, item4])
# ---> [function(item1, item2), item3, item4]
# ---> [function(function(item1, item2), item3), item4]
# ---> [function(function(function(item1, item2), item3), item4)]
#
# The last arrow is the single combined item of all 4 items. Let's take a look at a few examples.
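Before the examples, here is a minimal sketch of how reduce itself could be written, which makes the folding behavior explicit (the real functools.reduce is implemented in C, so this is illustrative only):

```python
def my_reduce(function, sequence):
    """A minimal reduce: fold the sequence left-to-right into a single value."""
    it = iter(sequence)
    try:
        result = next(it)  # the first item seeds the fold
    except StopIteration:
        raise TypeError("reduce() of empty sequence with no initial value")
    for item in it:
        result = function(result, item)
    return result
```

Each loop iteration plays the role of one arrow in the expansion above.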
# **Example 1**: Finding the sum of a list.
# +
from functools import reduce
find_my_sum = [5, 3, 19, 48, 2, 31, 29]
def sum_func(x, y):
    return x + y
total = reduce(sum_func, find_my_sum)
# -
print(total)
# Let's look at how it worked. First, we made a list and defined the function. After we supplied both arguments, the reduce call unfolded as described above.
#
# 1. total = [5, 3, 19, 48, 2, 31, 29]
# 2. total = [sum_func(5, 3), 19, 48, 2, 31, 29]
# 3. total = [sum_func(8, 19), 48, 2, 31, 29]
# 4. total = [sum_func(27, 48), 2, 31, 29]
# 5. total = [sum_func(75, 2), 31, 29]
# 6. total = [sum_func(77, 31), 29]
# 7. total = [sum_func(108, 29)]
# 8. total = [137]
# 9. total = 137
# **Example 2**: Creating a sentence out of a list of words.
word_lst = ["hello", "there", "martha", "how", "are", "you", "doing"]
sentence = reduce(lambda x,y: x + " " + y, word_lst)
# Here, we used a lambda expression to take two variables (both strings) and join them with a space; reduce then repeatedly applies it to build up the sentence.
# **Example 3**: Finding the maximum of a sequence.
nums = [5, 5, 39, 29, 48, 98, 23, 48]
max_num = reduce(lambda num1,num2: num1 if num1 > num2 else num2, nums)
print(max_num)
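One more detail worth knowing: functools.reduce accepts an optional third argument, an initializer, which seeds the fold and makes empty sequences safe:

```python
from functools import reduce

# With an initializer, the fold starts from it instead of the first item.
empty_total = reduce(lambda x, y: x + y, [], 0)          # empty sequence: the initializer is returned
seeded_total = reduce(lambda x, y: x + y, [1, 2, 3], 100)  # 100 + 1 + 2 + 3
```

Without the initializer, reducing an empty sequence raises a TypeError, as the trace in Example 1 suggests: there is no first item to seed the fold.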
# Source: Built-In Functions/reduce().ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.00795, "end_time": "2020-08-29T10:08:56.699735", "exception": false, "start_time": "2020-08-29T10:08:56.691785", "status": "completed"} tags=[]
# # "Covid-19 Tracker"
#
# - badges: false
# - author: <NAME>
# + [markdown] papermill={"duration": 0.00413, "end_time": "2020-08-29T10:08:56.708894", "exception": false, "start_time": "2020-08-29T10:08:56.704764", "status": "completed"} tags=[]
# ##### <center>Hello, welcome to my dashboard. I have created it using matplotlib. This is static. You can collapse cells ("Show Code") if you want to review the code</center>
#
# + papermill={"duration": 1.731397, "end_time": "2020-08-29T10:08:58.444244", "exception": false, "start_time": "2020-08-29T10:08:56.712847", "status": "completed"} tags=[]
#collapse
import pandas as pd
import numpy as np
import requests
import json
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib as mpl
from IPython.core.display import display,HTML
# %matplotlib inline
dft_cases = pd.read_csv('data/SnapshotCases-28-July.csv')
dft_deaths = pd.read_csv('data/SnapshotDeaths-28-July.csv')
dft_cases["dt_today"] = dft_cases["28-Jul-20"]
dft_cases["dt_yday"] = dft_cases["27-Jul-20"]
dft_deaths["dt_today"] = dft_deaths["28-Jul-20"]
dft_deaths["dt_yday"] = dft_deaths["27-Jul-20"]
dfc_cases = dft_cases.groupby('states')["dt_today"].sum()
dfc_deaths = dft_deaths.groupby('states')["dt_today"].sum()
dfp_cases = dft_cases.groupby('states')["dt_yday"].sum()
dfp_deaths = dft_deaths.groupby('states')["dt_yday"].sum()
df_dfc_cases = pd.DataFrame(dfc_cases).reset_index().rename(columns={"states": "states", "dt_today": "Cases"})
df_dfc_deaths = pd.DataFrame(dfc_deaths).reset_index().rename(columns={"states": "states", "dt_today": "Deaths"})
df_dfp_cases = pd.DataFrame(dfp_cases).reset_index().rename(columns={"states": "states", "dt_yday": "PCases"})
df_dfp_deaths = pd.DataFrame(dfp_deaths).reset_index().rename(columns={"states": "states", "dt_yday": "PDeaths"})
df_table = pd.merge(df_dfc_cases,df_dfp_cases, how='outer')
df_table = pd.merge(df_table,df_dfc_deaths, how='outer')
df_table = pd.merge(df_table,df_dfp_deaths, how='outer')
for c in 'Cases, Deaths'.split(', '):
    df_table[f'{c} (+)'] = (df_table[c] - df_table[f'P{c}']).clip(0)
df_table['Fatality Rate'] = (100* df_table['Deaths']/ df_table['Cases']).round(2)
df_table.sort_values(by = ['Cases','Deaths'], ascending = [False, False], inplace = True)
df_table.reset_index(drop=True, inplace = True)
summary = {"updated":"28th July, 2020", "since":"27th July, 2020"}
for col in df_table.columns:
    if col != "states" and col != "Fatality Rate":
        summary[col] = df_table[col].sum()
update = summary['updated']
cases = summary['Cases']
new = summary['Cases (+)']
deaths = summary['Deaths']
dnew = summary['Deaths (+)']
overview = '''
<!-- ####### HTML!! #########-->
<h1 style="color: #5e9ca0; text-align: center;">India</h1>
<p style="text-align: center;">Last update: <strong>{update}</strong></p>
<p style="text-align: center;">Confirmed cases:</p>
<p style="text-align: center;font-size:24px;">{cases} (<span style="color: #ff0000;">+{new}</span>)</p>
<p style="text-align: center;">Confirmed deaths:</p>
<p style="text-align: center;font-size:24px;">{deaths} (<span style="color: #ff0000;">+{dnew}</span>)</p>
'''
html = HTML(overview.format(update=update, cases=cases,new=new,deaths=deaths,dnew=dnew))
display(html)
# + papermill={"duration": 5.428141, "end_time": "2020-08-29T10:09:03.877068", "exception": false, "start_time": "2020-08-29T10:08:58.448927", "status": "completed"} tags=[]
#collapse
dt_cols = list(dft_cases.columns[1:])
dft_ct_new_cases = dft_cases.groupby('states')[dt_cols].sum().diff(axis=1).fillna(0).astype(int)
dft_ct_new_cases.sort_values(by = '28-Jul-20', ascending = False,inplace = True)
df = dft_ct_new_cases.copy()
df.loc['Total'] = df.sum()
df.drop(['dt_today', 'dt_yday'], axis=1, inplace = True)
n = 5
ef = df.loc['Total'].rename_axis('date').reset_index()
ef['date'] = ef['date'].astype('datetime64[ns]')
ax = []
fig = plt.figure(figsize = (16,20))
gs = fig.add_gridspec(n+2, 3)
# gs = fig.add_gridspec(2, 3)
ax1 = fig.add_subplot(gs[0, :])
ef = df.loc['Total'].rename_axis('date').reset_index()
ef['date'] = ef['date'].astype('datetime64[ns]')
ax1.bar(ef.date,ef.Total,alpha=0.3,color='#007acc')
ax1.plot(ef.date,ef.Total , marker="o", color='#007acc')
ax1.xaxis.set_major_locator(mdates.WeekdayLocator())
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%b %d'))
ax1.text(0.02, 0.5,'India daily case count', transform = ax1.transAxes, fontsize=25);
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
ax2 = fig.add_subplot(gs[1,0])
ef = df.loc['Maharashtra'].rename_axis('date').reset_index()
ef['date'] = ef['date'].astype('datetime64[ns]')
ax2.bar(ef.date, ef.Maharashtra,color = '#007acc',alpha=0.5)
ax2.xaxis.set_major_locator(mdates.WeekdayLocator())
ax2.xaxis.set_major_formatter(mdates.DateFormatter('%b %d'))
ax2.set_xticks(ax2.get_xticks()[::3])
maxyval = ef.Maharashtra.max()
ax2.set_ylim([0,maxyval])
ax2.text(0.05, 0.5,'Maharashtra', transform = ax2.transAxes, fontsize=20);
ax2.spines['right'].set_visible(False)
ax2.spines['top'].set_visible(False)
ax3 = fig.add_subplot(gs[1,1])
ef = df.loc['Tamil Nadu'].rename_axis('date').reset_index()
ef['date'] = ef['date'].astype('datetime64[ns]')
ax3.bar(ef.date, ef['Tamil Nadu'],color = '#007acc',alpha=0.5,)
ax3.xaxis.set_major_locator(mdates.WeekdayLocator())
ax3.xaxis.set_major_formatter(mdates.DateFormatter('%b %d'))
ax3.set_xticks(ax3.get_xticks()[::3])
ax3.text(0.05, 0.5,'Tamil Nadu', transform = ax3.transAxes, fontsize=20);
ax3.spines['right'].set_visible(False)
ax3.spines['top'].set_visible(False)
ax4 = fig.add_subplot(gs[1,2])
ef = df.loc['Delhi'].rename_axis('date').reset_index()
ef['date'] = ef['date'].astype('datetime64[ns]')
ax4.bar(ef.date, ef.Delhi,color = '#007acc',alpha=0.5)
ax4.set_xticks([])
ax4.xaxis.set_major_locator(mdates.WeekdayLocator())
ax4.xaxis.set_major_formatter(mdates.DateFormatter('%b %d'))
ax4.set_xticks(ax4.get_xticks()[::3])
ax4.spines['right'].set_visible(False)
ax4.spines['top'].set_visible(False)
ax4.text(0.05, 0.5,'Delhi', transform = ax4.transAxes, fontsize=20)
for i in range(n):
    ax.append(fig.add_subplot(gs[i + 2, :]))
    ef = df.iloc[i + 3].rename_axis('date').reset_index()
    ef['date'] = ef['date'].astype('datetime64[ns]')
    ax[i].bar(ef.date, ef.iloc[:, -1], color='#007acc', alpha=0.3)
    ax[i].plot(ef.date, ef.iloc[:, -1], marker='o', color='#007acc')
    ax[i].text(0.02, 0.5, f'{ef.columns.values[-1]}', transform=ax[i].transAxes, fontsize=20)
    ax[i].xaxis.set_major_locator(mdates.WeekdayLocator())
    ax[i].xaxis.set_major_formatter(mdates.DateFormatter('%b %d'))
    ax[i].set_ylim([0, 7000])
    ax[i].spines['right'].set_visible(False)
    ax[i].spines['top'].set_visible(False)
plt.tight_layout()
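The daily counts plotted above come from taking a row-wise diff of cumulative totals (the `dft_ct_new_cases` step). A minimal standalone sketch of that step, with toy numbers rather than the real snapshot data:

```python
import pandas as pd

# Cumulative case counts per state across three dates (toy numbers)
cum = pd.DataFrame(
    {'26-Jul-20': [10, 5], '27-Jul-20': [15, 9], '28-Jul-20': [22, 9]},
    index=['State A', 'State B'],
)

# diff along columns gives daily new cases; the first column has no
# predecessor, so fill the resulting NaN with 0 and cast back to int.
daily = cum.diff(axis=1).fillna(0).astype(int)
```

The same `.fillna(0).astype(int)` chain is what keeps the first snapshot date from showing up as NaN in the per-state plots.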
# + papermill={"duration": 0.022063, "end_time": "2020-08-29T10:09:03.906570", "exception": false, "start_time": "2020-08-29T10:09:03.884507", "status": "completed"} tags=[]
#collapse
print(df_table.to_string(index=False))
# Source: _notebooks/2020-08-01-anshuman2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import math
import itertools
from memoized import memoized # pip3 install hg+https://bitbucket.org/gsakkis/memoized
# +
def parseTriStr(triStr, n):
    mat = np.empty((n, n))
    i = 0
    for line in triStr.strip().split("\n"):
        row = np.asarray(list(map(int, line.split())))
        row.resize(n)
        row[i + 1:] = -1e2  # mask cells outside the triangle
        mat[i, :] = row
        i += 1
    return mat
# Note: the memo cache is only valid for the current `mat`, so it must be
# cleared whenever `mat` is reassigned; totalMaxPathSum does that below.
from functools import lru_cache

@lru_cache(maxsize=None)
def maxPathSum(x, y):
    if x > y:
        raise ValueError("x > y")
    if y == 0:
        return mat[y, x]
    elif x == 0:  # Leftmost column: can only move up
        return mat[y, x] + maxPathSum(x, y - 1)
    elif x == y:  # Rightmost diagonal: can only move up-left
        return mat[y, x] + maxPathSum(x - 1, y - 1)
    else:
        return max(mat[y, x] + maxPathSum(x, y - 1),
                   mat[y, x] + maxPathSum(x - 1, y - 1))

def totalMaxPathSum():
    assert mat.shape[0] == mat.shape[1]
    maxPathSum.cache_clear()  # stale entries from a previous `mat` would be wrong
    return max(maxPathSum(i, mat.shape[0] - 1) for i in range(mat.shape[0]))
# -
triStr = """
3
7 4
2 4 6
8 5 9 3
"""
mat = parseTriStr(triStr, 4)
totalMaxPathSum()
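An equivalent bottom-up pass is a useful cross-check: fold the triangle from the last row upward, keeping the best partial sum for each cell. This sketch operates on plain lists of rows rather than the padded matrix above:

```python
def max_path_sum_bottom_up(rows):
    """Maximum top-to-bottom path sum of a triangle given as a list of rows."""
    acc = list(rows[-1])  # best sums starting from the bottom row
    for r in range(len(rows) - 2, -1, -1):
        # Each cell keeps its value plus the better of its two children below.
        acc = [rows[r][c] + max(acc[c], acc[c + 1]) for c in range(len(rows[r]))]
    return acc[0]

triangle = [[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]
best = max_path_sum_bottom_up(triangle)  # 3 + 7 + 4 + 9 = 23
```

No memoization or global state is needed, since each row is visited exactly once.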
triStr = """
75
95 64
17 47 82
18 35 87 10
20 04 82 47 65
19 01 23 75 03 34
88 02 77 73 07 63 67
99 65 04 28 06 16 70 92
41 41 26 56 83 40 80 70 33
41 48 72 33 47 32 37 16 94 29
53 71 44 65 25 43 91 52 97 51 14
70 11 33 28 77 73 17 78 39 68 17 57
91 71 52 38 17 14 91 43 58 50 27 29 48
63 66 04 68 89 53 67 30 73 16 69 87 40 31
04 62 98 27 23 09 70 98 73 93 38 53 60 04 23
"""
mat = parseTriStr(triStr, 15)
totalMaxPathSum()
# Problem 67
import requests
mat = parseTriStr(requests.get("https://projecteuler.net/project/resources/p067_triangle.txt").text, 100)
# Problem 67 solution
totalMaxPathSum()
# Source: Euler18-67.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Skills Needed for NLP Jobs
# ---
# ## Get Natural Language Processing Job description page
# +
import urllib
import numpy as np
job_list_base_url = 'https://www.indeed.com/jobs?q=Natural+Language+Processing'
starts = np.arange(10, 1010, 10)
job_list_urls = [job_list_base_url] + [job_list_base_url + '&start=' + str(i) for i in starts]
job_list_urls[:5]
# -
import urllib.request
from bs4 import BeautifulSoup
pg = BeautifulSoup(urllib.request.urlopen(job_list_base_url), "lxml")
pg.find_all(attrs= {'class': 'jobtitle turnstileLink'})
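The `find_all` above keys on the class `jobtitle turnstileLink`. If you prefer to avoid third-party parsers, the same extraction can be sketched with the standard library's `html.parser`, run here on a small hypothetical snippet rather than the live page (the real markup may differ):

```python
from html.parser import HTMLParser

class JobTitleParser(HTMLParser):
    """Collects text inside tags whose class attribute contains 'jobtitle'."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if 'jobtitle' in dict(attrs).get('class', ''):
            self.in_title = True

    def handle_endtag(self, tag):
        self.in_title = False

    def handle_data(self, data):
        if self.in_title and data.strip():
            self.titles.append(data.strip())

# Hypothetical snippet standing in for a fetched results page
html = '<a class="jobtitle turnstileLink">NLP Engineer</a><a class="other">skip</a>'
parser = JobTitleParser()
parser.feed(html)
```

BeautifulSoup is more forgiving of malformed real-world HTML, which is why the notebook uses it with the lxml backend.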
# Source: notebooks/07-natural-language-processing/skills-needed-for-nlp-jobs.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lesson 3 Exercise 1: Three Queries Three Tables
# <img src="images/cassandralogo.png" width="250" height="250">
# ### Walk through the basics of creating a table in Apache Cassandra, inserting rows of data, and doing a simple CQL query to validate the information. You will practice Denormalization, and the concept of 1 table per query, which is an encouraged practice with Apache Cassandra.
#
# ### Remember, replace ##### with your answer.
# #### We will use a Python driver for Apache Cassandra, called cassandra-driver, to run the Apache Cassandra queries. This library should be preinstalled; if you ever need to install it locally, run this command in a notebook:
# # ! pip install cassandra-driver
# #### More documentation can be found here: https://datastax.github.io/python-driver/
# #### Import Apache Cassandra python package
import cassandra
# ### Create a connection to the database
from cassandra.cluster import Cluster
try:
cluster = Cluster(['127.0.0.1']) #If you have a locally installed Apache Cassandra instance
session = cluster.connect()
except Exception as e:
print(e)
# ### Create a keyspace to work in
# +
try:
session.execute("""
CREATE KEYSPACE IF NOT EXISTS udacity
WITH REPLICATION =
{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }"""
)
except Exception as e:
print(e)
# -
# #### Connect to our Keyspace. Compare this to how we had to create a new session in PostgreSQL.
try:
session.set_keyspace('udacity')
except Exception as e:
print(e)
# ### Let's imagine we would like to start creating a Music Library of albums.
#
# ### We want to ask 3 questions of the data
# #### 1. Give every album in the music library that was released in a given year
# `select * from music_library WHERE YEAR=1970`
# #### 2. Give every album in the music library that was created by a given artist
# `select * from artist_library WHERE artist_name="<NAME>"`
# #### 3. Give all the information from the music library about a given album
# `select * from album_library WHERE album_name="Close To You"`
#
# ### Because we want to do three different queries, we will need different tables that partition the data differently.
# <img src="images/table1.png" width="350" height="350">
# <img src="images/table2.png" width="350" height="350">
# <img src="images/table0.png" width="550" height="550">
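The "one table per query" pattern can be illustrated in plain Python before writing any CQL: the same records are stored three times, each copy keyed by the field that a query filters on (a rough analogy only, not actual Cassandra behavior):

```python
albums = [
    (1970, "The Beatles", "Let it Be"),
    (1965, "The Beatles", "Rubber Soul"),
    (1970, "The Carpenters", "Close To You"),
]

# one "table" (dict) per query, each keyed by that query's lookup field
by_year, by_artist, by_album = {}, {}, {}
for year, artist, title in albums:
    by_year.setdefault(year, []).append((artist, title))
    by_artist.setdefault(artist, []).append((year, title))
    by_album[title] = (artist, year)

print(by_year[1970])             # both 1970 albums, one direct lookup
print(by_album["Close To You"])  # ('The Carpenters', 1970)
```

Each lookup is a single key access, mirroring how a Cassandra partition key routes a query straight to its data; the cost is writing every record once per table.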
# ### TO-DO: Create the tables.
# +
query = ("CREATE TABLE IF NOT EXISTS music_library "
         "(year int, artist_name text, album_name text, PRIMARY KEY (year, artist_name))")
query1 = ("CREATE TABLE IF NOT EXISTS artist_library "
          "(artist_name text, year int, album_name text, PRIMARY KEY (artist_name, year))")
query2 = ("CREATE TABLE IF NOT EXISTS album_library "
          "(artist_name text, album_name text, year int, PRIMARY KEY (album_name, artist_name))")
for q in (query, query1, query2):
    try:
        session.execute(q)
    except Exception as e:
        print(e)
# -
# ### TO-DO: Insert data into the tables
# +
query = "INSERT INTO music_library (year, artist_name, album_name) VALUES (%s, %s, %s)"
query1 = "INSERT INTO artist_library (artist_name, year, album_name) VALUES (%s, %s, %s)"
query2 = "INSERT INTO album_library (album_name, artist_name, year) VALUES (%s, %s, %s)"

# the same five records are written once into each of the three query-specific tables
albums = [
    (1970, "The Beatles", "Let it Be"),
    (1965, "The Beatles", "Rubber Soul"),
    (1965, "The Who", "My Generation"),
    (1966, "The Monkees", "The Monkees"),
    (1970, "The Carpenters", "Close To You"),
]
for year, artist_name, album_name in albums:
    try:
        session.execute(query, (year, artist_name, album_name))
        session.execute(query1, (artist_name, year, album_name))
        session.execute(query2, (album_name, artist_name, year))
    except Exception as e:
        print(e)
# -
# It might have felt unnatural to insert duplicate data into the tables. If these tables were normalized, no extra copies would be needed. While this is true, remember there are no `JOINS` in Apache Cassandra. For the benefit of high availability and scalability, denormalization is the way this is done.
#
# ### TO-DO: Validate the Data Model
# +
# year is the partition key of music_library, so ALLOW FILTERING is not needed
query = "select * from music_library WHERE year = 1970"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.year, row.artist_name, row.album_name)
# -
# ### Your output should be:
# 1970 The Beatles Let it Be<br>
# 1970 The Carpenters Close To You
# ### TO-DO: Validate the Data Model
# +
# artist_name is the partition key of artist_library, so ALLOW FILTERING is not needed
query = "select * from artist_library WHERE artist_name = 'The Beatles'"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.artist_name, row.album_name, row.year)
# -
# ### Your output should be:
# The Beatles Rubber Soul 1965 <br>
# The Beatles Let it Be 1970
# ### TO-DO: Validate the Data Model
# +
# album_name is the partition key of album_library, so ALLOW FILTERING is not needed
query = "select * from album_library WHERE album_name = 'Close To You'"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.artist_name, row.year, row.album_name)
# -
# ### Your output should be:
# The Carpenters 1970 Close To You
# ### And finally close the session and cluster connection
session.shutdown()
cluster.shutdown()
# *Source notebook: Notebook-Exercises/Course-1-Lesson-3-NoSQL/L3-Exercise-1-Three-Queries-Three-Tables.ipynb*
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/nmningmei/Deep_learning_fMRI_EEG/blob/master/10_2_searchlight_RSA_word_embedding.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="OccLuogrkjAs"
# # Download data
# + colab={"base_uri": "https://localhost:8080/"} id="Oi9ifQEgVvoT" outputId="89db64f3-7155-4231-b5a9-d1e0fc216984"
# !git clone https://github.com/nmningmei/METASEMA_encoding_model.git
# + [markdown] id="NTmXR13zkmA7"
# # Import necessary Python libraries
# + colab={"base_uri": "https://localhost:8080/"} id="Yw3j1sjNV_-o" outputId="61a549ae-5033-4e99-c61d-280509d54fd8"
import os
from glob import glob
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from scipy.spatial import distance
from nibabel import load as load_fmri
from scipy.stats import spearmanr
try:
from nilearn.input_data import NiftiMasker
from nilearn.image import new_img_like
from brainiak.searchlight.searchlight import Searchlight
from brainiak.searchlight.searchlight import Ball
except:
# !pip install nilearn
# !python3 -m pip install -U brainiak
from nilearn.input_data import NiftiMasker
from nilearn.image import new_img_like
from nilearn.image import resample_to_img
from nilearn import plotting
from nilearn.datasets import load_mni152_template
from brainiak.searchlight.searchlight import Searchlight
from brainiak.searchlight.searchlight import Ball
sns.set_context('poster')
sns.set_style('white')
# + [markdown] id="tjtOq1bfWzaX"
# # Load and inspect the data: BOLD signals and events
# + [markdown] id="kvjvWws-Xps0"
# ## concatenate data from different sessions
# + id="SZWK8Ja3WYdB"
condition = 'reenact'
data_dir = 'METASEMA_encoding_model/scripts/raw/'
bold_files = np.sort(glob(os.path.join(data_dir,'*','*.npy')))
csv_files = np.sort(glob(os.path.join(data_dir,'*','*.csv')))
example_func = os.path.join(data_dir,'example_func.nii.gz')
mask_img = os.path.join(data_dir,'mask.nii.gz')
word_model = os.path.join(data_dir,'word2vec.csv')
words = os.path.join(data_dir,'word.npy')
bold_data,csv_data = [],[]
# this is how we convert vectorized BOLD signals back to 3D volumes
masker = NiftiMasker(mask_img=mask_img,).fit(example_func)
for bold_file,csv_file in zip(bold_files,csv_files):
temp_bold = np.load(bold_file)
temp_csv = pd.read_csv(csv_file)
bold_data.append(temp_bold)
csv_data.append(temp_csv)
bold_data = np.concatenate(bold_data)
csv_data = pd.concat(csv_data)
_idx = csv_data['context'] == condition
bold_data = bold_data[_idx]
csv_data = csv_data.loc[_idx,:].reset_index()
whole_brain_data = masker.inverse_transform(bold_data)
word2vec = pd.read_csv(word_model)
words = np.load(words).astype(str)
# + colab={"base_uri": "https://localhost:8080/", "height": 0} id="psMleEB8d3kI" outputId="aec77114-ac8a-4d93-dbf4-dded3f5e0e08"
csv_data
# + [markdown] id="yxKFyYqmZ4lo"
# ## Plot the word2vec model of the 36 unique words
# + colab={"base_uri": "https://localhost:8080/", "height": 523} id="UVXSVMLwYS8h" outputId="8258c6e5-6d3c-4c3a-d32c-7abecaf96418"
df_plot = word2vec[words]
corr = distance.squareform(distance.pdist(df_plot.values.T,'correlation'))
np.fill_diagonal(corr,np.nan)
fig,ax = plt.subplots(figsize = (10,8))
im = ax.imshow(corr,
origin = 'lower',
cmap = plt.cm.coolwarm,
)
plt.colorbar(im)
ax.set(yticks = np.arange(36),xticks = np.arange(36))
_=ax.set_yticklabels(words,fontsize = 10,)
_=ax.set_xticklabels(words,fontsize = 10,rotation = 90)
# + [markdown] id="gwkY31axcphV"
# ## Helper functions
# + id="DSo3aVjzZIL4"
def normalize(data,axis = 1):
return data - data.mean(axis).reshape(-1,1)
# Define voxel function
def sfn(l, msk, myrad, bcast_var):
    """
    l: list of BOLD volumes (a single subject here)
    msk: boolean mask array selecting the voxels of the current sphere
    myrad: searchlight radius, not used here
    bcast_var: word embedding model
    """
    BOLD = l[0][msk,:].T.copy() # vectorize the voxel values in the sphere
    #print(BOLD.shape) # <- for debugging
    model = bcast_var.copy() # the word embedding model (conditions x dimensions)
    #print(model.shape) # <- for debugging
    # build correlation-distance RDMs and compare them with Spearman correlation
    RDM_X = distance.pdist(normalize(BOLD),'correlation')
    RDM_y = distance.pdist(normalize(model),'correlation')
    D,p = spearmanr(RDM_X,RDM_y)
    return D
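The RDM comparison inside `sfn` can be sanity-checked on synthetic data before the (slow) searchlight run, with no brainiak or fMRI data involved. The shapes and names below are made up for illustration:

```python
import numpy as np
from scipy.spatial import distance
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
model = rng.normal(size=(10, 5))         # 10 conditions x 5 embedding dimensions
bold = model @ rng.normal(size=(5, 20))  # 10 conditions x 20 voxels, driven by the model

# condition-by-condition RDMs from correlation distance, compared with Spearman
rdm_bold = distance.pdist(bold, 'correlation')
rdm_model = distance.pdist(model, 'correlation')
rho, p = spearmanr(rdm_bold, rdm_model)
print(rho > 0)  # the RDMs should agree, since bold is a linear map of model
```

Since the simulated voxel patterns are a linear function of the embedding, the two RDMs share their geometry and the Spearman correlation comes out clearly positive; shuffling the rows of `model` should drive it toward zero.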
# + [markdown] id="rAQyn-uedXy3"
# ## Prepare the whole brain BOLD signals that are averaged from all the sessions
# + colab={"base_uri": "https://localhost:8080/"} id="P8xZqwy3czDx" outputId="3c0ed653-e894-4263-c93c-55a6c3348162"
bold_average,word_average = [],[]
for _word, df_sub in csv_data.groupby('words'):
temp = bold_data[df_sub.index]
bold_average.append(temp.mean(0))
word_average.append(_word.lower())
bold_average = np.vstack(bold_average)
bold_average.shape
whole_brain_average = masker.inverse_transform(bold_average)
BOLD_image = np.asanyarray(whole_brain_average.dataobj)
print(BOLD_image.shape)
# + [markdown] id="Wgk7EbmliCI0"
# # Searchlight RSA
# + [markdown] id="PrpH9H15iEYS"
# ## hyperparameters - not important
# + id="j6yHjGq8iGjc"
radius = 6 # in mm
# + [markdown] id="qMd-6nvzmNju"
# ### This is going to take some time to run - 1 fold - for the average of the BOLD signals
# + id="VXXqwZnMh20v"
# Brainiak function
sl = Searchlight(sl_rad = radius,
max_blk_edge = radius - 1,
shape = Ball,
min_active_voxels_proportion = 0,
)
# distribute the data based on the sphere
## the first input is usually the BOLD signal, and it is in the form of
## lists not arrays, representing each subject
## the second input is usually the mask, and it is in the form of array
sl.distribute([BOLD_image],np.asanyarray(load_fmri(mask_img).dataobj) == 1)
# broadcasted data is the data that remains the same during RSA
sl.broadcast(df_plot[word_average].values.T)
# run searchlight algorithm
global_outputs = sl.run_searchlight(sfn,
pool_size = -1, # we run each RSA using a single CPU
)
# + [markdown] id="biUDUD6_mUJq"
# ## Convert the numpy array to Nifti
# + id="9Dkw_msLjgPh"
correlations = new_img_like(example_func,np.asanyarray(global_outputs,dtype = np.float32))
# masking
correlations = masker.inverse_transform(masker.transform_single_imgs(correlations))
# + [markdown] id="dc-See_amYno"
# ## Visualization
# + colab={"base_uri": "https://localhost:8080/", "height": 239} id="pUMJ3K4Slunw" outputId="012f897a-77fe-4ac8-c99a-780a35ff49e4"
plotting.plot_stat_map(correlations,
example_func,
threshold = 1e-3,
draw_cross = False,
cmap = plt.cm.coolwarm,
vmax = .1,
)
# + id="U7UFk8JFoOG4"
# *Source notebook: 10_2_searchlight_RSA_word_embedding.ipynb*
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# IAMSI -- 2021-2022
#
# --------
# *© Teaching team: <NAME>, <NAME>, <NAME>, <NAME>.*
# # TME 1 and 2: Two-player games - Programming an Awélé player
# <font color="RED" size="+1">**[Q]**</font> **Give your first and last names in the box below**
# <NAME>, <NAME>
# <font color="RED" size="+1">**[Q]**</font> **Rename this ipython file**
#
# At the very top of this page, click on <tt>tme01et02</tt> and append to <tt>tme01et02</tt> the last names of both members of the pair, separated by a hyphen.
#
# For example, for the pair <NAME> and Han Solo, the file name becomes: <pre>tme01et02-Skywalker-Solo</pre>
# ## Overview
# ### Goals of TME 1 and 2
#
# The task is the following: program, in Python, an artificial Awélé player. The rules of the game are
# given below (do not hesitate to play a few games on the website whose address is given).
#
# *Note*: You may write helper functions to simplify the function answering a given question. In that case, those helper functions must be properly commented and specified.
#
# #### Reports for TME sessions 1 and 2
#
# The ipython file you are filling in here serves as the report for these 2 TME sessions.
#
# There are 2 **mandatory** submissions on the Moodle site:
# - 1st report: <font color="RED">at the end of the first session</font>, submit what was done during that session.
# - 2nd report: <font color="RED">at the end of the second session</font>, submit the final version of the work done during the 2 sessions.
#
# Your report takes the form of a single ipynb file (this file, completed, NOT ZIPPED).
#
# ### Grading scale
#
# The (indicative) grading scale for these 2 TME sessions is as follows:
# - correct and working implementation of minimax: 10 out of 20
# - correct and working implementation of alpha-beta: 15 out of 20
# - implementation of improvements or extensions ("going further..."): +0 to +3 points
# - commented program: -1 to +1 points
# - efficiency of the functions: -1 to +2 points
#
# The final grade will not exceed 20.
# A useful function in what follows is <tt>input</tt>, which asks for a value typed on the keyboard and returns it as a string.
# +
# Example of using input():
# annee_naiss = input("Enter the year of birth: ")
# print("So that makes", 2021-int(annee_naiss), "years")
# -
# ## Overview of the Awélé game
#
# Awélé is a game of African origin played on a board where each of the 2 players (South and North) owns 6 pits. The rules of this game are simple and easy to implement.
#
# See the Wikipedia page (http://fr.wikipedia.org/wiki/Awalé), which details the rules used here, as well as the site http://s.helan.free.fr/awale/lejeu/jouer/, where you can learn the game by playing against an artificial player.
#
# ### Rules of the game
# In the starting position, every pit holds 4 seeds. The player whose turn it is
# (we assume the South camp always starts the game) chooses one of their pits containing seeds and removes all the seeds it contains. They then sow these seeds one by one into the following pits, counter-clockwise. During this sowing, if they pass over the pit that originally held the seeds, they do not drop a seed into it.
#
# If the last seed is sown into an opponent's pit that contains 2 or 3 seeds after the drop, the seeds in that pit are captured by the player and removed from the game (they go into the player's booty). In that case, if the previous pit is also an enemy pit containing 2 or 3 seeds, its seeds are captured as well, and so on as long as captures are possible (always within the enemy camp).
#
# Note that a move does not necessarily capture anything, but if a capture exists at the end of a move, it must be carried out in full.
#
# After the move is played, at least one seed must remain in one of the opponent's pits (you must not "starve" the opponent); otherwise the position is considered illegal and the move cannot be played.
#
# As soon as a player has captured 25 seeds or more, they are declared the winner and the game stops.
#
# If a player cannot move, the game stops and all seeds remaining on the board are captured by their opponent. In that case, the player who has captured the most seeds wins.
#
# ### Board representation
#
# The board is always shown with South's pits at the bottom.
# The pits are ordered and labeled with digits, from 1 to 6, left to right. In each pit we write the number of seeds it contains. The classic starting board is shown below:
#
# <p />
# <table>
# <tr>
# <td>
# **NORD**<br>
# $\begin{array}{|c|c|c|c|c|c|} \hline
# 4 & 4 & 4 & 4 & 4 & 4\\
# \hline
# 4 & 4 & 4 & 4 & 4 & 4\\
# \hline
# \end{array}$<br>
# **SUD**<br>
#
# $\begin{array}{cccccc}
# 1 & 2 & 3 & 4 & 5 & 6\\
# \end{array}$
# </td>
# </tr>
# </table>
#
# A move is written by giving the camp that plays together with the number of the pit that is emptied. For example, $($SUD$, 2)$ if the South camp plays by taking the seeds of the second pit from the left in its camp. Or $($NORD$, 6)$ if the North camp plays by taking the seeds of pit number 6 in its camp, i.e. the sixth pit from the left in its camp, which is the top-right pit of the board.
#
#
# ### Example of a move with seed capture
#
# From the position on the left in the figure below, SUD decides to play move $4$. The position in the middle of the figure shows the intermediate situation, after the $6$ seeds have been dropped into the $6$ pits that follow pit $4$ counter-clockwise.
#
# <p />
# <table>
# <tr>
# <td>Start</td>
# <td>SUD has sown the seeds from pit $4$</td>
# <td>SUD captures the seeds in NORD's camp</td>
# </tr>
# <tr>
# <td>
# **NORD**<br>
# $\begin{array}{|c|c|c|c|c|c|} \hline
# & 2 & 1 & 2 & 1 & 1\\
# \hline
# 1 & & & {\bf 6} & & 1\\
# \hline
# \end{array}$<br>
# **SUD**<br>
# $\begin{array}{cccccc}
# 1 & 2 & 3 & 4 & 5 & 6\\
# \end{array}$
#
# </td>
# <td>
# **NORD**<br>
# $\begin{array}{|c|c|c|c|c|c|} \hline
# & 2 & {\bf 2} & {\bf 3} & {\bf 2} & {\bf 2}\\
# \hline
# 1 & & & & {\bf 1} & {\bf 2}\\
# \hline
# \end{array}$<br>
# **SUD**<br>
#
# $\begin{array}{cccccc}
# 1 & 2 & 3 & 4 & 5 & 6\\
# \end{array}$
# </td>
# <td>
# **NORD**<br>
# $\begin{array}{|c|c|c|c|c|c|} \hline
# & 2 & & & & \\
# \hline
# 1 & & & & {\bf 1} & {\bf 2}\\
# \hline
# \end{array}$<br>
# **SUD**<br>
#
# $\begin{array}{cccccc}
# 1 & 2 & 3 & 4 & 5 & 6\\
# \end{array}$
# </td>
# </tr>
#
# </table>
#
#
# Since the last pit where a seed was dropped (pit $3$ of the NORD camp) now contains $2$ seeds, those are captured by SUD, along with the seeds of pits $4$, $5$ and $6$, which contain (in that order) $2$ or $3$ seeds. In total, SUD captures $9$ seeds with this move. Once the captures are done, we finally obtain the new position shown on the right of the figure. Now NORD has only one option and must play the seeds of pit $2$.
# ## Programming an Awélé player
# ### Data structure
# To represent an Awélé position in Python, a *position* is defined by the following elements:
# - a *dimension*, the number of columns (the original Awélé has 6 columns). We write $n$ for this number of columns in the rest of this text;
# - a *plateau* (board), which gives the number of seeds in each pit;
# - the *trait* (side to move), the camp that must play in the given position. There are
# two camps: "NORD" and "SUD";
# - the *butin* (booty), the number of seeds already collected by each player.
#
#
# In Python, such a position is represented as a dictionary mapping each field name to its value.
#
# In what follows, we distinguish:
# - a *correct* move, which corresponds to a valid column number (value between 1 and $n$) whose pit contains at least one seed;
# - an *authorized* move, which is both correct and leads to a legal position of the game, i.e. a position that obeys the rule requiring at least one seed to be left in the opponent's camp.
# **Representing the game board in Python**
#
# The board is represented in Python as a list of integers, each integer being the number of seeds in one pit.
#
# Consider a game board with $n$ columns.
#
# For the SUD camp:
# - the number of seeds in the pit of column 1 is stored at position $0$ in the Python list;
# - the number of seeds in the pit of column 2 is stored at position $1$ in the Python list;
# - ...
# - the number of seeds in the pit of column $n$ is stored at position $n-1$ in the Python list.
#
# For the NORD camp:
# - the number of seeds in the pit of column 1 is stored at position $2n-1$ in the Python list;
# - the number of seeds in the pit of column 2 is stored at position $2n-2$ in the Python list;
# - ...
# - the number of seeds in the pit of column $n$ is stored at position $n$ in the Python list.
#
# (drawing a small diagram is recommended to understand this layout).
#
# This representation has the advantage of making board traversal easy: walking the list by increasing positions follows the counter-clockwise order. The traversal can then be done in Python by incrementing the position by 1 to move to the next pit, computing the result modulo $2n$.
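The index layout described above can be checked with a tiny helper (a sketch for $n = 3$; `idx` is an illustrative name, not part of the provided code):

```python
n = 3  # number of columns

def idx(camp, col):
    """Board-list index of column `col` (1-based) for camp 'SUD' or 'NORD'."""
    return col - 1 if camp == 'SUD' else 2 * n - col

assert [idx('SUD', c) for c in (1, 2, 3)] == [0, 1, 2]
assert [idx('NORD', c) for c in (1, 2, 3)] == [5, 4, 3]
# sowing counter-clockwise = walking indices 0, 1, ..., 2n-1 and wrapping modulo 2n
print([(idx('SUD', 1) + k) % (2 * n) for k in range(6)])  # [0, 1, 2, 3, 4, 5]
```

SUD column 1 sits at index 0 and NORD column 1 at index $2n-1$, so incrementing modulo $2n$ passes through all of SUD's pits left to right, then NORD's pits right to left, exactly the counter-clockwise sowing order.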
# ### Provided functions
# Several functions are provided below:
# - <tt>position_initiale(n)</tt>: defines the initial position of the game. The argument is an integer <tt>n</tt> giving the number of columns of the game board;
# - <tt>affichage(position)</tt>: textual display of a position;
# - <tt>duplique(position)</tt>: copy of a position, duplicating it so that it can be modified without altering the original position;
# - <tt>joue_un_coup(position,coup)</tt>: returns the position obtained once the move has been played in the given position. This function assumes the given move is correct and therefore does not check that the resulting position is a legal game position.
#
# +
# - - - - - - - - - - - - - - - TYPES USED
# POSITION : non-empty dictionary holding various pieces of information about
#            a position, each associated with its field name.
# COUP : integer value between 1 and the number of columns of the board
#
#
# - - - - - - - - - - - - - - - INITIALIZATION
from time import time
import numpy as np
from matplotlib import pyplot as plt
JOEUR_SUD = 0
JOEUR_NORD = 1
NOM_CAMPS = {JOEUR_SUD: "SUD", JOEUR_NORD: "NORD"}
class AwelePosition:
    def __init__(self, n):
        self.dimension = n # number of columns on the board
        self.plateau = [4 for _ in range(0, 2*n)] # put 4 seeds in each pit
        self.trait = JOEUR_SUD # the player to move: SUD or NORD
        self.butin = {JOEUR_SUD: 0, JOEUR_NORD: 0} # seeds captured by each player
    def __str__(self):
        """Display the position with the print() function"""
n = self.dimension
buffercol = "colonnes:"
for i in range(1, n + 1):
buffercol += f"\t {i} "
buffer = "\n* * * * * * * * * * * * * * * * * * * *\n"
        buffer += f"\t\tNORD (captured:{self.butin[JOEUR_NORD]})\n"
buffer += "< - - - - - - - - - - - - - - -\n"
buffer += buffercol + "\n "
for x in self.getCamp(JOEUR_NORD):
buffer += f"\t[{x}]"
buffer += "\n "
for x in self.getCamp(JOEUR_SUD):
buffer += f"\t[{x}]"
buffer += '\n' + buffercol
buffer += "\n- - - - - - - - - - - - - - - >\n"
        buffer += f"\t\tSUD (captured:{self.butin[JOEUR_SUD]})\n"
        buffer += '-> camp to move: ' + NOM_CAMPS[self.trait]
buffer += "\n* * * * * * * * * * * * * * * * * * * *\n"
return buffer
def estChezEnnemi(self, i):
return (self.trait == JOEUR_NORD and i < self.dimension) \
or (self.trait == JOEUR_SUD and i >= self.dimension)
    def jouer(self, coup):
        """Play the move, assuming that coup is legal"""
n = self.dimension
indice_depart = (coup - 1) * (self.trait == JOEUR_SUD) + (2*n - coup) * (self.trait == JOEUR_NORD)
nbGraines = self.plateau[indice_depart]
self.plateau[indice_depart] = 0
indice_courant = indice_depart
while nbGraines:
indice_courant = (indice_courant + 1) % (2*n)
if indice_courant != indice_depart:
self.plateau[indice_courant] += 1
nbGraines -= 1
while self.estChezEnnemi(indice_courant) and self.plateau[indice_courant] in (2,3):
self.butin[self.trait] += self.plateau[indice_courant]
self.plateau[indice_courant] = 0
indice_courant = (indice_courant - 1) % (2*n)
self.trait = JOEUR_NORD if self.trait == JOEUR_SUD else JOEUR_SUD
    def getCamp(self, joeur):
        """Return the list representing the camp of the player given as parameter"""
if joeur == JOEUR_NORD:
return list(reversed(self.plateau[self.dimension:]))
elif joeur == JOEUR_SUD:
return self.plateau[:self.dimension]
return None
# -
# The instructions below show how to use the functions defined above.
# +
# ------------------------- TO SEE HOW IT WORKS:
maPosition = AwelePosition(6)
maPosition.jouer(1) # SUD plays
maPosition.jouer(1) # NORD plays
maPosition.jouer(2) # SUD plays
maPosition.jouer(4) # NORD plays
maPosition.jouer(3) # SUD plays
maPosition.jouer(2) # NORD plays
maPosition.jouer(5) # SUD plays
print(maPosition)
print("#######################################\nGame on a reduced board for testing:")
maPosition = AwelePosition(3)
maPosition.jouer(1) # SUD plays
maPosition.jouer(1) # NORD plays
maPosition.jouer(3) # SUD plays
maPosition.jouer(3) # NORD plays
maPosition.jouer(1) # SUD plays
maPosition.jouer(1) # NORD plays
print(maPosition)
print("#######################################\nCapture test:")
maPosition = AwelePosition(6)
maPosition.plateau = [1, 2, 3, 4, 5, 1, 2, 2, 2, 2, 2, 2]
print(maPosition)
maPosition.jouer(5) # SUD plays
print(maPosition)
# ------------------------- END TEST
# -
# ### Simple functions
# <font color="RED" size="+1">**[Q]**</font> Write the following functions:
# - <tt>est_correct(position,nombre)</tt>: returns the boolean <tt>True</tt> if the given number can be a correct move in the given position, i.e. if this number is an integer between 1
# and <tt>n</tt> and if, in addition, the corresponding pit of the board (for the camp to move
# in the position) contains at least one seed.
# - <tt>est_legale(position)</tt>: returns the boolean <tt>True</tt> if the given position is legal, i.e. if the camp to move owns at least one seed in its camp.
# - <tt>effectue_si_valide(position,coup)</tt>: returns the new position obtained by playing <tt>coup</tt>
# in the given position. This function returns the boolean <tt>False</tt> if the move
# is not correct or if the resulting position is not a legal game position.
# - <tt>est_terminale(position)</tt>: returns the boolean <tt>True</tt> if the position is
# terminal, i.e. if no move is correct, if no correct move reaches
# a legal position, or if one of the two players has captured enough seeds to win. In the original 6-column Awélé, the position is won as soon as one camp has captured 25 seeds, which corresponds to $6*4+1$ seeds. The number of seeds needed to win therefore depends on the size of the game board.
#
# +
from copy import deepcopy
def est_correct(position: AwelePosition, coup: int) -> bool:
if coup < 1 or coup > position.dimension:
return False
camp = position.getCamp(position.trait)
return camp[coup - 1] > 0
def est_legale(position: AwelePosition) -> bool:
return any(position.getCamp(position.trait))
def effectue_si_valide(position: AwelePosition, coup: int):
if est_correct(position, coup):
nouvelle_pos = deepcopy(position)
nouvelle_pos.jouer(coup)
if est_legale(nouvelle_pos):
return nouvelle_pos
return False
def est_terminale(position: AwelePosition) -> bool:
    gagne = 4 * position.dimension + 1  # seeds needed to win: n*4 + 1
    if position.butin[JOEUR_NORD] >= gagne or position.butin[JOEUR_SUD] >= gagne:
        return True
    # terminal if no authorized move exists
    return not any(effectue_si_valide(position, coup)
                   for coup in range(1, position.dimension + 1))
# -
# ### Simple game engine
# <font color="RED" size="+1">**[Q]**</font> Write the function <tt>partie_humains(taille)</tt> allowing two human players to
# play a game of Awélé on a board with <tt>taille</tt> columns (assumed to be a strictly positive integer).
# In this function, the players enter their moves in turn. After each move, the position is displayed along with each player's booty. The game stops when a terminal position is reached, and the program then displays the name of the winning camp.
# During the game, moves are entered using Python's <tt>input()</tt> function.
#
def partie_humains(n):
    position = AwelePosition(n)
    while not est_terminale(position):
        print(position)
        while True:
            coup = int(input(f"{NOM_CAMPS[position.trait]} to play: "))
            nouvelle_pos = effectue_si_valide(position, coup)
            if nouvelle_pos:
                break
        position = nouvelle_pos
    print(position)
    # announce the winning camp (or a draw) once a terminal position is reached
    if position.butin[JOEUR_SUD] > position.butin[JOEUR_NORD]:
        print(f"{NOM_CAMPS[JOEUR_SUD]} wins")
    elif position.butin[JOEUR_SUD] < position.butin[JOEUR_NORD]:
        print(f"{NOM_CAMPS[JOEUR_NORD]} wins")
    else:
        print("Draw")
# <font color="RED" size="+1">**[Q]**</font> Write the function <tt>genere_un_coup(position)</tt> that returns an authorized move for the given position. The move is chosen at random among the possible moves in the given position. This function returns the value 0 if no move is possible.
# +
from random import choice
def genere_un_coup(position: AwelePosition) -> int:
    possible = [coup for coup in range(1, position.dimension + 1)
                if effectue_si_valide(position, coup)]
    return choice(possible) if possible else 0
# -
# <font color="RED" size="+1">**[Q]**</font> Write the function <tt>partie_aleatoire(taille,campCPU)</tt> allowing a human player
# to play against the computer, which chooses its moves at random. The argument
# <tt>campCPU</tt> gives the camp managed by the computer (<tt>JOEUR_SUD</tt> or <tt>JOEUR_NORD</tt>), and the argument <tt>taille</tt> is a strictly positive integer giving the number of columns of the game board.
def partie_aleatoire(taille: int, campCPU: int):
position = AwelePosition(taille)
while not est_terminale(position):
print(position)
        if position.trait == campCPU:  # CPU plays
            coup = genere_un_coup(position)
            if not coup:
                break
            position.jouer(coup)
            print(f"CPU ({NOM_CAMPS[campCPU]}) played {coup}")
        else:  # human plays
            while True:
                coup = int(input(f"{NOM_CAMPS[position.trait]} to play: "))
                nouvelle_pos = effectue_si_valide(position, coup)
                if nouvelle_pos:
                    break
            position = nouvelle_pos
    if position.butin[JOEUR_SUD] > position.butin[JOEUR_NORD]:
        print(f"{NOM_CAMPS[JOEUR_SUD]} wins")
    elif position.butin[JOEUR_SUD] < position.butin[JOEUR_NORD]:
        print(f"{NOM_CAMPS[JOEUR_NORD]} wins")
    else:
        print("DRAW")
# ## Simple artificial player
# Consider the following evaluation function for a position $P$:
# $$f(P) = \begin{cases}
# +99 & \text{if } P \text{ is won by SUD (South)} \\
# -99 & \text{if } P \text{ is won by NORD (North)} \\
# (2f_g(P,sud) + f_{12}(P,nord)) - (2f_g(P,nord) + f_{12}(P,sud)) & \text{otherwise}
# \end{cases}$$
#
# where
# - $f_g(P,c)$ : number of seeds already captured by side $c$ in position $P$
# - $f_{12}(P,c)$ : number of houses of side $c$ containing 1 or 2 seeds in position $P$.
#
# <font color="RED" size="+1">**[Q]**</font> Write the function <tt>evalue(position)</tt> that returns the evaluation of the position computed with the function $f$.
# +
def f12(position:AwelePosition, trait: int) -> int:
l = position.getCamp(trait)
return len([x for x in l if x in (1,2)])
def evalue(position: AwelePosition):
gagne = 4 * position.dimension + 1
if position.butin[JOEUR_SUD] >= gagne:
return 99
elif position.butin[JOEUR_NORD] >= gagne:
return -99
return 2 * (position.butin[JOEUR_SUD] - position.butin[JOEUR_NORD]) + f12(position, JOEUR_NORD) - f12(position, JOEUR_SUD)
# -
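# As a quick numeric check of $f$, its two heuristic terms can be computed on a plain-list toy position, independently of the <tt>AwelePosition</tt> class (the list representation and names below are purely illustrative):

```python
def f12_list(houses):
    """Number of houses holding exactly 1 or 2 seeds."""
    return sum(1 for s in houses if s in (1, 2))

def f_eval(sud_houses, nord_houses, sud_captured, nord_captured):
    """Non-terminal case of f:
    (2*f_g(P,sud) + f12(P,nord)) - (2*f_g(P,nord) + f12(P,sud))."""
    return (2 * sud_captured + f12_list(nord_houses)) - (2 * nord_captured + f12_list(sud_houses))

# South has captured 6 seeds, North 4; North shows two vulnerable houses (1 and 2 seeds)
print(f_eval([4, 0, 3], [1, 2, 5], 6, 4))  # (2*6 + 2) - (2*4 + 0) = 6
```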
# <font color="RED" size="+1">**[Q]**</font> Write the function <tt>minimax(position,prof)</tt> that, for the given position, returns a Python tuple whose first element is the best move found by applying the minimax algorithm to depth <tt>prof</tt>, and whose second element is the evaluation of the position obtained for that move.
# +
feuilles = 0  # number of leaves visited
def coups_possibles(position: AwelePosition) -> list:
return [coup for coup in range(1, position.dimension + 1) if effectue_si_valide(position, coup)]
def minimax(position: AwelePosition, prof: int, coup: int = 0) -> tuple:
if prof == 0 or est_terminale(position):
global feuilles
feuilles += 1
return coup, evalue(position)
    if position.trait == JOEUR_SUD:  # maximizing
        value = -100
        best = 0
        for coup in coups_possibles(position):
            nouvelle_pos = deepcopy(position)
            nouvelle_pos.jouer(coup)
            score = minimax(nouvelle_pos, prof - 1, coup)[1]
            if score > value:
                value = score
                best = coup
    else:  # minimizing
        value = 100
        best = 0
        for coup in coups_possibles(position):
            nouvelle_pos = deepcopy(position)
            nouvelle_pos.jouer(coup)
            score = minimax(nouvelle_pos, prof - 1, coup)[1]
            if score < value:
                value = score
                best = coup
return best, value
pos = AwelePosition(6)
pos.plateau = [1,0,0,6,0,1,1,1,1,1,2,0]
print(minimax(pos, 2), feuilles)
# -
# <font color="RED" size="+1">**[Q]**</font> Write the function <tt>partie_minimax(taille,campCPU,prof)</tt> allowing a human player to play against the computer, which finds its moves with the minimax algorithm used at depth <tt>prof</tt>. The argument <tt>campCPU</tt> gives the side played by the computer (<tt>'SUD'</tt> or <tt>'NORD'</tt>). The argument <tt>taille</tt> gives the size of the board as a number of columns.
#
def partie_minimax(taille: int, campCPU: int, prof: int):
position = AwelePosition(taille)
while not est_terminale(position):
print(position)
        if position.trait == campCPU:  # CPU plays
            coup = minimax(position, prof)[0]
            if not coup:
                break
            position.jouer(coup)
            print(f"CPU ({NOM_CAMPS[campCPU]}) played {coup}")
        else:  # human plays
            while True:
                coup = int(input(f"{NOM_CAMPS[position.trait]} to play: "))
                nouvelle_pos = effectue_si_valide(position, coup)
                if nouvelle_pos:
                    break
            position = nouvelle_pos
    if position.butin[JOEUR_SUD] > position.butin[JOEUR_NORD]:
        print(f"{NOM_CAMPS[JOEUR_SUD]} wins")
    elif position.butin[JOEUR_SUD] < position.butin[JOEUR_NORD]:
        print(f"{NOM_CAMPS[JOEUR_NORD]} wins")
    else:
        print("DRAW")
# +
# ------------------------- TEST
# maPosition = AwelePosition(6)
# print(maPosition)
# maPosition.jouer(1)  # SUD plays
# maPosition.jouer(1)  # NORD plays
# maPosition.jouer(2)  # SUD plays
# maPosition.jouer(4)  # NORD plays
# maPosition.jouer(3)  # SUD plays
# maPosition.jouer(2)  # NORD plays
# maPosition.jouer(5)  # SUD plays
# print(maPosition)
# maPosition = AwelePosition(3)
# print(maPosition)
# maPosition.jouer(1)  # SUD plays
# maPosition.jouer(1)  # NORD plays
# maPosition.jouer(3)  # SUD plays
# maPosition.jouer(3)  # NORD plays
# maPosition.jouer(1)  # SUD plays
# print(maPosition)
# maPosition.jouer(1)  # NORD plays
# print(maPosition)
# --------
#partie_humains(4)
#partie_aleatoire(5,"SUD")
# print("*************************\n********* Partie minimax: ")
# uncomment the following line:
# partie_minimax(6, JOEUR_NORD,7)
# -
# ## More elaborate artificial player
# <font color="RED" size="+1">**[Q]**</font> Write the function <code>alphabeta(position,prof,alpha,beta)</code> that, for the given position, returns the tuple whose first element is the best move found by applying the alpha-beta algorithm to depth <code>prof</code>, with <code>alpha</code> and <code>beta</code> as initial values, and whose second element is the evaluation of the position obtained for
# that move.
# +
feuilles = 0  # number of leaves visited
def alpha_beta(position:AwelePosition, prof: int, alpha:int, beta:int, coup=0) -> tuple:
if prof == 0 or est_terminale(position):
global feuilles
feuilles += 1
return coup, evalue(position)
    if position.trait == JOEUR_SUD:  # maximizing
best = 0
for coup in coups_possibles(position):
nouvelle_pos = deepcopy(position)
nouvelle_pos.jouer(coup)
x = max(alpha, alpha_beta(nouvelle_pos, prof - 1, alpha, beta, coup)[1])
if x > alpha:
alpha = x
best = coup
if alpha >= beta:
break
return best, alpha
else:
best = 0
for coup in coups_possibles(position):
nouvelle_pos = deepcopy(position)
nouvelle_pos.jouer(coup)
x = min(beta, alpha_beta(nouvelle_pos, prof - 1, alpha, beta, coup)[1])
if x < beta:
beta = x
best = coup
if alpha >= beta:
break
return best, beta
pos = AwelePosition(6)
pos.plateau = [1,0,0,6,0,1,1,1,1,1,2,0]
pos.butin[JOEUR_SUD] = pos.butin[JOEUR_NORD] = 17
print(alpha_beta(pos, 2, -100, 100), feuilles)
# -
# <font color="RED" size="+1">**[Q]**</font> Write the function <code>partie_alphabeta(taille,campCPU,prof)</code> allowing a human player to play against the computer, which finds its moves with the
# alpha-beta algorithm used at depth <code>prof</code>. The argument <code>campCPU</code>
# gives the side played by the computer (<code>'SUD'</code> or <code>'NORD'</code>). The argument <tt>taille</tt> gives the size of the board as a number of columns.
#
def partie_alphabeta(taille: int, campCPU: int, prof: int):
position = AwelePosition(taille)
while not est_terminale(position):
print(position)
        if position.trait == campCPU:  # CPU plays
            coup = alpha_beta(position, prof, -1000, 1000)[0]
            if not coup:
                break
            position.jouer(coup)
            print(f"CPU ({NOM_CAMPS[campCPU]}) played {coup}")
        else:  # human plays
            while True:
                coup = int(input(f"{NOM_CAMPS[position.trait]} to play: "))
                nouvelle_pos = effectue_si_valide(position, coup)
                if nouvelle_pos:
                    break
            position = nouvelle_pos
    if position.butin[JOEUR_SUD] > position.butin[JOEUR_NORD]:
        print(f"{NOM_CAMPS[JOEUR_SUD]} wins")
    elif position.butin[JOEUR_SUD] < position.butin[JOEUR_NORD]:
        print(f"{NOM_CAMPS[JOEUR_NORD]} wins")
    else:
        print("DRAW")
# +
def cpu_vs_cpu_minimax(taille: int, prof_sud: int, prof_nord):
position = AwelePosition(taille)
while not est_terminale(position):
prof = prof_sud if position.trait == JOEUR_SUD else prof_nord
coup = minimax(position, prof)[0]
if not coup:
break
position.jouer(coup)
    if position.butin[JOEUR_SUD] > position.butin[JOEUR_NORD]:
        print(f"{NOM_CAMPS[JOEUR_SUD]} wins")
    elif position.butin[JOEUR_SUD] < position.butin[JOEUR_NORD]:
        print(f"{NOM_CAMPS[JOEUR_NORD]} wins")
    else:
        print("DRAW")
def cpu_vs_cpu_alphabeta(taille: int, prof_sud: int, prof_nord):
position = AwelePosition(taille)
while not est_terminale(position):
prof = prof_sud if position.trait == JOEUR_SUD else prof_nord
coup = alpha_beta(position, prof, -1000, 1000)[0]
if not coup:
break
position.jouer(coup)
    if position.butin[JOEUR_SUD] > position.butin[JOEUR_NORD]:
        print(f"{NOM_CAMPS[JOEUR_SUD]} wins")
    elif position.butin[JOEUR_SUD] < position.butin[JOEUR_NORD]:
        print(f"{NOM_CAMPS[JOEUR_NORD]} wins")
    else:
        print("DRAW")
# -
t0 = time()
cpu_vs_cpu_minimax(6,5,5)
print(" done in %0.3fs" % (time() - t0))
t0 = time()
cpu_vs_cpu_alphabeta(6,5,5)
print(" done in %0.3fs" % (time() - t0))
print(feuilles)
# ## Going further...
# Feel free to add features or improvements to your program.
# For example:
# - a graphical interface to display the board, enter moves, ...
# - a new evaluation function that is more effective than the one given in
# exercise 2. Test the program by making it play against itself a large number of
# times (automatically).
# - add the following rule: a move that removes all the seeds from the opponent's
# side may still be played, but in that case no capture is made.
# - any other improvement you can think of.
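# The suggested rule (a move that would starve the opponent is allowed, but captures nothing) can be sketched on a plain-list board, independently of the <tt>AwelePosition</tt> class; the function name and representation below are hypothetical:

```python
def prise_sans_famine(camp_adverse, indices_prise):
    """Apply captures at the given opponent-house indices, unless doing so
    would leave the opponent with no seeds at all; in that case the move
    stands but nothing is captured."""
    butin = sum(camp_adverse[i] for i in indices_prise)
    if sum(camp_adverse) - butin == 0:
        return list(camp_adverse), 0  # "grand slam": play the move, capture nothing
    nouveau = list(camp_adverse)
    for i in indices_prise:
        nouveau[i] = 0
    return nouveau, butin

print(prise_sans_famine([2, 1, 3], [0, 1]))  # captures 3 seeds -> ([0, 0, 3], 3)
print(prise_sans_famine([2, 1, 0], [0, 1]))  # would empty the side -> ([2, 1, 0], 0)
```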
#
#
# ### --> Improving the evaluation function
# +
# two seeds are worth more points when a capturing move is played,
# so this function takes that into account
def f12_boost(position: AwelePosition, trait: int) -> int:
score = 0
l = position.getCamp(trait)
for x in l:
if x == 1:
score += 1
if x == 2:
score += 2
return score
# if the move we are about to play removes some of our own at-risk seeds,
# that is even better
def freste(position: AwelePosition, coup: int) -> int:
    coef = 3
    if position.plateau[coup] in (1, 2):
        return coef * position.plateau[coup]
    return 0
# when ahead, one can take more risks to score points;
# when behind, one must be careful and prioritize defense.
def evalue_boost(position: AwelePosition, f12_function):
gagne = 4 * position.dimension + 1
if position.butin[JOEUR_SUD] >= gagne:
return 99
elif position.butin[JOEUR_NORD] >= gagne:
return -99
if position.butin[JOEUR_SUD] > position.butin[JOEUR_NORD]:
return 2 * (position.butin[JOEUR_SUD] - position.butin[JOEUR_NORD]) + f12_function(position, JOEUR_NORD) - 1.5*f12_function(position, JOEUR_SUD)
else:
return 2 * (position.butin[JOEUR_SUD] - position.butin[JOEUR_NORD]) + 1.5*f12_function(position, JOEUR_NORD) - f12_function(position, JOEUR_SUD)
def evalue(position: AwelePosition, f12_function):
gagne = 4 * position.dimension + 1
if position.butin[JOEUR_SUD] >= gagne:
return 99
elif position.butin[JOEUR_NORD] >= gagne:
return -99
return 2 * (position.butin[JOEUR_SUD] - position.butin[JOEUR_NORD]) + f12_function(position, JOEUR_NORD) - f12_function(position, JOEUR_SUD)
# -
# redefinition of the function to support several evaluation functions
def cpu_vs_cpu_alphabeta(taille: int, prof_sud: int, prof_nord):
position = AwelePosition(taille)
n_coup = 0
while not est_terminale(position):
prof = prof_sud if position.trait == JOEUR_SUD else prof_nord
coup = alpha_beta(position, prof, -1000, 1000)[0]
if not coup:
break
n_coup += 1
if n_coup > 1000:
break
position.jouer(coup)
    if position.butin[JOEUR_SUD] > position.butin[JOEUR_NORD]:
        # print(f"{NOM_CAMPS[JOEUR_SUD]} wins")
        return "SUD", position.butin[JOEUR_SUD] - position.butin[JOEUR_NORD]
    elif position.butin[JOEUR_SUD] < position.butin[JOEUR_NORD]:
        # print(f"{NOM_CAMPS[JOEUR_NORD]} wins")
        return "NORD", position.butin[JOEUR_NORD] - position.butin[JOEUR_SUD]
    else:
        # print("DRAW")
        return "DRAW", 0
def results(n_partie):
winners = []
ecart = 0
for i in tqdm(range(n_partie)):
nom, score = cpu_vs_cpu_alphabeta(8,3,3)
winners.append(nom)
ecart += score
u, c = np.unique(winners, return_counts=True)
print("parties gagnees : ", u, c)
print("ecart moyen pour le gagnant :", ecart/n_partie)
# ### --> Test games against minimax with the old evaluation function
# +
# PLAYER SUD : EVALUE_BOOST
# PLAYER NORD : EVALUE
# +
import random
from tqdm import tqdm
# redefinition of alpha-beta with randomization and a few changes
def alpha_beta(position:AwelePosition, prof: int, alpha:int, beta:int, coup=0) -> tuple:
if prof == 0 or est_terminale(position):
global feuilles
feuilles += 1
if position.trait == JOEUR_SUD:
return coup, evalue_boost(position, f12)
else:
return coup, evalue(position, f12)
    if position.trait == JOEUR_SUD:  # maximizing
best = 0
cp = coups_possibles(position)
        random.shuffle(cp)  # small trick to avoid replaying the same game over and over in the simulations
for coup in cp:
nouvelle_pos = deepcopy(position)
nouvelle_pos.jouer(coup)
x = max(alpha, alpha_beta(nouvelle_pos, prof - 1, alpha, beta, coup)[1])
if x > alpha:
alpha = x
best = coup
if alpha >= beta:
break
return best, alpha
else:
best = 0
cp = coups_possibles(position)
random.shuffle(cp)
for coup in cp:
nouvelle_pos = deepcopy(position)
nouvelle_pos.jouer(coup)
x = min(beta, alpha_beta(nouvelle_pos, prof - 1, alpha, beta, coup)[1])
if x < beta:
beta = x
best = coup
if alpha >= beta:
break
return best, beta
# +
# n = 100
# results(n)
# +
# CONCLUSION: SMALL IMPROVEMENT THANKS TO REACTING TO THE SCORE
# +
# PLAYER SUD : EVALUE, F12_BOOST
# PLAYER NORD : EVALUE, F12
# -
# redefinition of alpha-beta with randomization and a few changes
def alpha_beta(position:AwelePosition, prof: int, alpha:int, beta:int, coup=0) -> tuple:
if prof == 0 or est_terminale(position):
global feuilles
feuilles += 1
if position.trait == JOEUR_SUD:
return coup, evalue(position, f12_boost)
else:
return coup, evalue(position, f12)
    if position.trait == JOEUR_SUD:  # maximizing
best = 0
cp = coups_possibles(position)
        random.shuffle(cp)  # small trick to avoid replaying the same game over and over in the simulations
for coup in cp:
nouvelle_pos = deepcopy(position)
nouvelle_pos.jouer(coup)
x = max(alpha, alpha_beta(nouvelle_pos, prof - 1, alpha, beta, coup)[1])
if x > alpha:
alpha = x
best = coup
if alpha >= beta:
break
return best, alpha
else:
best = 0
cp = coups_possibles(position)
random.shuffle(cp)
for coup in cp:
nouvelle_pos = deepcopy(position)
nouvelle_pos.jouer(coup)
x = min(beta, alpha_beta(nouvelle_pos, prof - 1, alpha, beta, coup)[1])
if x < beta:
beta = x
best = coup
if alpha >= beta:
break
return best, beta
n=50
results(n)
# +
# CONCLUSION : <NAME> WHEN TAKING INTO ACCOUNT THE NUMBER OF SEEDS THAT CAN BE WON
# +
# PLAYER SUD : EVALUE, F12_BOOST
# PLAYER NORD : EVALUE_BOOST, F12
# -
# redefinition of alpha-beta with randomization and a few changes
def alpha_beta(position:AwelePosition, prof: int, alpha:int, beta:int, coup=0) -> tuple:
if prof == 0 or est_terminale(position):
global feuilles
feuilles += 1
if position.trait == JOEUR_SUD:
return coup, evalue(position, f12_boost)
else:
return coup, evalue_boost(position, f12)
    if position.trait == JOEUR_SUD:  # maximizing
best = 0
cp = coups_possibles(position)
        random.shuffle(cp)  # small trick to avoid replaying the same game over and over in the simulations
for coup in cp:
nouvelle_pos = deepcopy(position)
nouvelle_pos.jouer(coup)
x = max(alpha, alpha_beta(nouvelle_pos, prof - 1, alpha, beta, coup)[1])
if x > alpha:
alpha = x
best = coup
if alpha >= beta:
break
return best, alpha
else:
best = 0
cp = coups_possibles(position)
random.shuffle(cp)
for coup in cp:
nouvelle_pos = deepcopy(position)
nouvelle_pos.jouer(coup)
x = min(beta, alpha_beta(nouvelle_pos, prof - 1, alpha, beta, coup)[1])
if x < beta:
beta = x
best = coup
if alpha >= beta:
break
return best, beta
# +
# results(n)
# +
# CONCLUSION: BEING BEHIND ON THE SCORE MATTERS LESS THAN
# COUNTING THE OPPONENT'S HOUSES HOLDING 1 OR 2 SEEDS DIFFERENTLY
# +
# PLAYER SUD : EVALUE, F12, FRESTE
# PLAYER NORD : EVALUE, F12, NO_FRESTE
# -
# redefinition of alpha-beta with randomization and a few changes
def alpha_beta(position:AwelePosition, prof: int, alpha:int, beta:int, coup=0) -> tuple:
if prof == 0 or est_terminale(position):
global feuilles
feuilles += 1
if position.trait == JOEUR_SUD:
return coup, evalue(position, f12) + freste(position, coup)
else:
return coup, evalue(position, f12)
    if position.trait == JOEUR_SUD:  # maximizing
best = 0
cp = coups_possibles(position)
        random.shuffle(cp)  # small trick to avoid replaying the same game over and over in the simulations
for coup in cp:
nouvelle_pos = deepcopy(position)
nouvelle_pos.jouer(coup)
x = max(alpha, alpha_beta(nouvelle_pos, prof - 1, alpha, beta, coup)[1])
if x > alpha:
alpha = x
best = coup
if alpha >= beta:
break
return best, alpha
else:
best = 0
cp = coups_possibles(position)
random.shuffle(cp)
for coup in cp:
nouvelle_pos = deepcopy(position)
nouvelle_pos.jouer(coup)
x = min(beta, alpha_beta(nouvelle_pos, prof - 1, alpha, beta, coup)[1])
if x < beta:
beta = x
best = coup
if alpha >= beta:
break
return best, beta
# +
# n = 50
# results(n)
# +
# CONCLUSION
# +
# PLAYER SUD : EVALUE_BOOST, F12_BOOST, FRESTE
# PLAYER NORD : EVALUE, F12, NO_FRESTE
# +
# redefinition of alpha-beta with randomization and a few changes
def alpha_beta(position:AwelePosition, prof: int, alpha:int, beta:int, coup=0) -> tuple:
if prof == 0 or est_terminale(position):
global feuilles, feuilles_sud, feuilles_nord
feuilles += 1
if position.trait == JOEUR_SUD:
feuilles_sud += 1
return coup, evalue_boost(position, f12_boost) + freste(position, coup)
else:
feuilles_nord += 1
return coup, evalue(position, f12)
    if position.trait == JOEUR_SUD:  # maximizing
best = 0
cp = coups_possibles(position)
        random.shuffle(cp)  # small trick to avoid replaying the same game over and over in the simulations
for coup in cp:
nouvelle_pos = deepcopy(position)
nouvelle_pos.jouer(coup)
x = max(alpha, alpha_beta(nouvelle_pos, prof - 1, alpha, beta, coup)[1])
if x > alpha:
alpha = x
best = coup
if alpha >= beta:
break
return best, alpha
else:
best = 0
cp = coups_possibles(position)
random.shuffle(cp)
for coup in cp:
nouvelle_pos = deepcopy(position)
nouvelle_pos.jouer(coup)
x = min(beta, alpha_beta(nouvelle_pos, prof - 1, alpha, beta, coup)[1])
if x < beta:
beta = x
best = coup
if alpha >= beta:
break
return best, beta
# -
n = 10
feuilles_sud, feuilles_nord = 0, 0
results(n)
print(feuilles_sud, feuilles_nord)
# +
# CONCLUSION: WRONG INTUITION ._.
# -
# ### --> Intuition about the evaluation functions
# +
# can a better evaluation function speed up our algorithm
# thanks to alpha-beta pruning? to check, we look at the number of nodes explored per player
# -
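# One way to probe this intuition: alpha-beta visits fewer leaves when the best move is searched first, which is exactly what a sharper evaluation tends to produce. A self-contained toy illustration (independent of the Awélé code above):

```python
def alphabeta_toy(node, alpha, beta, maximize, counter):
    """Alpha-beta over a tree given as nested lists (leaves are ints);
    counter[0] counts the leaves actually evaluated."""
    if isinstance(node, int):
        counter[0] += 1
        return node
    if maximize:
        for child in node:
            alpha = max(alpha, alphabeta_toy(child, alpha, beta, False, counter))
            if alpha >= beta:
                break  # beta cut-off
        return alpha
    for child in node:
        beta = min(beta, alphabeta_toy(child, alpha, beta, True, counter))
        if alpha >= beta:
            break  # alpha cut-off
    return beta

badly_ordered = [[3, 5], [6, 9], [1, 2]]  # best subtree not searched first
well_ordered = [[6, 9], [3, 5], [1, 2]]   # best subtree searched first
counts = []
for tree in (badly_ordered, well_ordered):
    c = [0]
    best = alphabeta_toy(tree, -100, 100, True, c)
    counts.append(c[0])
    print(best, c[0], "leaves visited")
```

Both orderings return the same root value, but the well-ordered tree triggers cut-offs earlier and evaluates fewer leaves.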
# ### --> Some statistics
# - number of nodes vs. depth
# - number of nodes vs. board size
# - number of nodes without alpha-beta pruning
# - number of nodes with alpha-beta pruning
# - number of nodes per evaluation function
# -
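# A starting point for these measurements: compare the leaves evaluated by plain minimax and by alpha-beta as depth grows, here on a random complete tree (toy sketch — the real measurements would reuse the minimax / alpha_beta functions above together with the feuilles counter):

```python
import random

def build_tree(depth, branching, rng):
    """Complete game tree as nested lists; leaves are random scores."""
    if depth == 0:
        return rng.randint(-50, 50)
    return [build_tree(depth - 1, branching, rng) for _ in range(branching)]

def search(node, maximize, alpha, beta, prune, counter):
    """Minimax with optional alpha-beta pruning; counter[0] counts leaves."""
    if isinstance(node, int):
        counter[0] += 1
        return node
    if maximize:
        for child in node:
            alpha = max(alpha, search(child, False, alpha, beta, prune, counter))
            if prune and alpha >= beta:
                break
        return alpha
    for child in node:
        beta = min(beta, search(child, True, alpha, beta, prune, counter))
        if prune and alpha >= beta:
            break
    return beta

for depth in (2, 4, 6):
    tree = build_tree(depth, 3, random.Random(0))
    plain, pruned = [0], [0]
    v1 = search(tree, True, -100, 100, False, plain)
    v2 = search(tree, True, -100, 100, True, pruned)
    assert v1 == v2  # pruning never changes the root value
    print(f"depth {depth}: minimax {plain[0]} leaves, alpha-beta {pruned[0]} leaves")
```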
| S2/IAMSI/TME/TME1_2/IAMSI_tme01et02-Dam-Durand.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using the SparkMagic PySpark kernel to connect to EMR
#
# This is a SparkMagic PySpark notebook that lets you run jobs against an existing EMR cluster. See the [README.md file](./README.md) for setup instructions.
#
# The following cell is the parameter cell. Upload a text file to your S3 bucket and run your job with a parameter (`-p input=s3://...` argument to `run-notebook`) to point at the object.
# + tags=["parameters"]
input="s3://bucket/object.txt"
# -
# %%info
# + language="sh"
# hostname
# -
spark.sql("show databases").show()
sc.parallelize(range(1000)).count()
constText = sc.textFile(input)
# +
import string
xlate_tbl = str.maketrans('','', string.punctuation)
words = constText.map(lambda line: line.translate(xlate_tbl).lower()).flatMap(lambda line: line.split(" "))
# -
wordCounts = words.map(lambda word: (word, 1)).reduceByKey(lambda a,b:a +b)
wordCounts.toDF(["word", "occurences"]).createOrReplaceTempView("wordCounts")
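# For a quick sanity check of the pipeline logic without the cluster (e.g. in a `%%local` cell), the same punctuation-stripping word count can be reproduced in plain Python on a small in-memory string:

```python
import string
from collections import Counter

def local_word_counts(text):
    """Mirror of the Spark pipeline above: strip punctuation, lowercase,
    split on spaces, count occurrences (empty tokens dropped)."""
    xlate_tbl = str.maketrans('', '', string.punctuation)
    words = []
    for line in text.splitlines():
        words.extend(line.translate(xlate_tbl).lower().split(" "))
    return Counter(w for w in words if w)

counts = local_word_counts("We the People, of the United States")
print(counts["the"])  # 2
```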
# + language="sql"
# show tables
# + magic_args="-o wordCounts" language="sql"
# select * from wordCounts order by occurences desc limit 20
# -
# %%local
import pandas as pd
# %%local
wordCounts
| examples/EMR/spark-test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: geo_env
# language: python
# name: geo_env
# ---
import copy
import gzip
import io
import os
from io import StringIO
import pandas as pd
import geopandas as gpd
from tqdm import tqdm
import numpy as np
import pathlib
from pathlib import Path
from shapely import wkt
from datetime import datetime
import xml.etree.ElementTree as ET
import lxml.etree as etree
from xml.dom import minidom
# read the files
BASE_DIR = Path.cwd().parent # G:\road_ntwrk
shp = BASE_DIR / "road_ntwrk" / "rd_ntwrk_2016.shp"
gpdSFRdNtwrk = gpd.read_file(shp)
gpdSFRdNtwrk=gpdSFRdNtwrk[gpdSFRdNtwrk['MTYPE']=='SF']
gpdSFRdNtwrk.groupby('FT').agg({'CAP':'max','SPEED':'max'})
gpdSFRdNtwrk=gpdSFRdNtwrk[gpdSFRdNtwrk['FT']!=9]
shp = BASE_DIR / "road_ntwrk" / "2016_PM.shp"
rd_network_2016 = gpd.read_file(shp)
rd_network_2016=rd_network_2016[rd_network_2016['FT'].isin([1,2,3,5,7,10])]
outside_sf_fwy_arterial=rd_network_2016[rd_network_2016['MTYPE']=='MTC']
len(gpdSFRdNtwrk)
len(outside_sf_fwy_arterial)
# +
# read the files
# BASE_DIR = Path.cwd().parent # G:\road_ntwrk
# shp = BASE_DIR / "road_ntwrk" / "rd_ntwrk_2016.shp"
gpdSFRdNtwrk=gpdSFRdNtwrk.to_crs('EPSG:26910')
from shapely.geometry import Point, LineString
gpdSFRdNtwrk['first'] = None
gpdSFRdNtwrk['last'] = None
for index, row in tqdm(gpdSFRdNtwrk.iterrows()):
    coords = list(row['geometry'].coords)
    first_coord, last_coord = coords[0], coords[-1]
gpdSFRdNtwrk.at[index,'first'] = Point(first_coord)
gpdSFRdNtwrk.at[index,'last'] = Point(last_coord)
gpdSFRdNtwrk["length_m"] = gpdSFRdNtwrk["geometry"].to_crs('EPSG:26910').length # get length in meters of the feature
#gpdSFRdNtwrk["geometry"] = gpdSFRdNtwrk["geometry"].to_crs('EPSG:4326') # re-project to EPSG:4326/WGS1984
#gpdSFRdNtwrk["geometry"] = gpdSFRdNtwrk["geometry"].to_crs('EPSG:26910') # re-project to local PCS
# get the max of the lanes
gpdSFRdNtwrk['lane']= gpdSFRdNtwrk[['LANE_AM','LANE_PM','LANE_OP',]].max(axis=1)
from shapely import wkt
gpdSFNode_A = gpdSFRdNtwrk[["first","A"]]
gpdSFNode_A = gpd.GeoDataFrame(gpdSFNode_A, geometry='first')
# gpdSFNode_A.drop_duplicates()
gpdSFNode_B = gpdSFRdNtwrk[["last","B"]]
gpdSFNode_B = gpd.GeoDataFrame(gpdSFNode_B, geometry='last')
G = gpdSFNode_A["first"].apply(lambda geom: geom.wkb) # drop duplicate features
gpdSFNode_A = gpdSFNode_A.loc[G.drop_duplicates().index]
G = gpdSFNode_B["last"].apply(lambda geom: geom.wkb) # drop duplicate features
gpdSFNode_B = gpdSFNode_B.loc[G.drop_duplicates().index]
gpdSFRdNtwrk["segment_id"] = gpdSFRdNtwrk[['A', 'B']].astype(str).agg('_'.join, axis=1)
gpdSFNode_A["X"] = gpdSFNode_A["first"].x
gpdSFNode_A["Y"] = gpdSFNode_A["first"].y
gpdSFNode_B["X"] = gpdSFNode_B["last"].x
gpdSFNode_B["Y"] = gpdSFNode_B["last"].y
# +
# read the files
# BASE_DIR = Path.cwd().parent # G:\road_ntwrk
# shp = BASE_DIR / "road_ntwrk" / "rd_ntwrk_2016.shp"
outside_sf_fwy_arterial=outside_sf_fwy_arterial.to_crs('EPSG:26910')
from shapely.geometry import Point, LineString
outside_sf_fwy_arterial['first'] = None
outside_sf_fwy_arterial['last'] = None
for index, row in tqdm(outside_sf_fwy_arterial.iterrows()):
    coords = list(row['geometry'].coords)
    first_coord, last_coord = coords[0], coords[-1]
outside_sf_fwy_arterial.at[index,'first'] = Point(first_coord)
outside_sf_fwy_arterial.at[index,'last'] = Point(last_coord)
outside_sf_fwy_arterial["length_m"] = outside_sf_fwy_arterial["geometry"].to_crs('EPSG:26910').length # get length in meters of the feature
#gpdSFRdNtwrk["geometry"] = gpdSFRdNtwrk["geometry"].to_crs('EPSG:4326') # re-project to EPSG:4326/WGS1984
#gpdSFRdNtwrk["geometry"] = gpdSFRdNtwrk["geometry"].to_crs('EPSG:26910') # re-project to local PCS
# get the max of the lanes
outside_sf_fwy_arterial['lane']= outside_sf_fwy_arterial[['LANE_AM','LANE_PM','LANE_OP',]].max(axis=1)
from shapely import wkt
outside_sf_fwy_arterial_Node_A = outside_sf_fwy_arterial[["first","A"]]
outside_sf_fwy_arterial_Node_A = gpd.GeoDataFrame(outside_sf_fwy_arterial_Node_A, geometry='first')
# gpdSFNode_A.drop_duplicates()
outside_sf_fwy_arterial_Node_B = outside_sf_fwy_arterial[["last","B"]]
outside_sf_fwy_arterial_Node_B = gpd.GeoDataFrame(outside_sf_fwy_arterial_Node_B, geometry='last')
G = outside_sf_fwy_arterial_Node_A["first"].apply(lambda geom: geom.wkb) # drop duplicate features
outside_sf_fwy_arterial_Node_A = outside_sf_fwy_arterial_Node_A.loc[G.drop_duplicates().index]
G = outside_sf_fwy_arterial_Node_B["last"].apply(lambda geom: geom.wkb) # drop duplicate features
outside_sf_fwy_arterial_Node_B = outside_sf_fwy_arterial_Node_B.loc[G.drop_duplicates().index]
outside_sf_fwy_arterial["segment_id"] = outside_sf_fwy_arterial[['A', 'B']].astype(str).agg('_'.join, axis=1)
outside_sf_fwy_arterial_Node_A["X"] = outside_sf_fwy_arterial_Node_A["first"].x
outside_sf_fwy_arterial_Node_A["Y"] = outside_sf_fwy_arterial_Node_A["first"].y
outside_sf_fwy_arterial_Node_B["X"] = outside_sf_fwy_arterial_Node_B["last"].x
outside_sf_fwy_arterial_Node_B["Y"] = outside_sf_fwy_arterial_Node_B["last"].y
# -
outside_sf_fwy_arterial['FT']=np.where(outside_sf_fwy_arterial['FT']==7,
25,
outside_sf_fwy_arterial['FT'])
SF_shell_network=gpdSFRdNtwrk.append(outside_sf_fwy_arterial)
len(SF_shell_network)
# free-flow speed: posted SPEED divided by a facility-type (FT) specific factor
FFS_DIVISOR = {1: 1.3, 2: 1.0, 3: 1.0, 4: 1.8, 5: 1.3, 6: 1.0, 7: 1.8,
               8: 1.3, 9: 1.8, 10: 1.0, 11: 1.8, 12: 1.8, 13: 1.0, 14: 1.8}
def ffs(df):
    return df.SPEED / FFS_DIVISOR.get(df.FT, 1.8)
# +
SF_shell_network['FFS']=SF_shell_network.apply(ffs,axis=1)
SF_shell_network.head()
# -
gpdSFRdNtwrk=gpdSFRdNtwrk.reset_index()
gpdSFRdNtwrk['LINK_ID']=gpdSFRdNtwrk.index+1
gpdSFRdNtwrk.drop(columns=['index'],inplace=True)
SF_shell_network=SF_shell_network.reset_index()
SF_shell_network['LINK_ID']=SF_shell_network.index+1
SF_shell_network.drop(columns=['index'],inplace=True)
SF_shell_network=SF_shell_network[SF_shell_network['FT']!=0]
# drop the shapely 'first'/'last' point columns before export (a shapefile cannot hold extra geometry columns)
SF_shell_network.drop(columns=['first','last']).to_file('../SF_CHAMP_Converted/BEAM_Network.shp')
SF_shell_network.plot(figsize=[20,20])
SF_shell_network[SF_shell_network['TOLL']!=0].head()
pd.options.display.max_columns=100
SF_shell_network[SF_shell_network['TOLL']!=0][['LINK_ID','TOLLAM_DA',
'TOLLAM_SR2',
'TOLLAM_SR3',
'TOLLPM_DA',
'TOLLPM_SR2',
'TOLLPM_SR3',
'TOLLEA_DA',
'TOLLEA_SR2',
'TOLLEA_SR3',
'TOLLMD_DA',
'TOLLMD_SR2',
'TOLLMD_SR3',
'TOLLEV_DA',
'TOLLEV_SR2',
'TOLLEV_SR3']].to_csv('../SF-CHAMP Outputs/toll_facility.csv',index=False)
SF_shell_network[SF_shell_network['TOLL']==1]['LINK_ID']
node_A=gpdSFNode_A.append(outside_sf_fwy_arterial_Node_A)
node_A=node_A.reset_index()
node_B=gpdSFNode_B.append(outside_sf_fwy_arterial_Node_B)
node_B=node_B.reset_index()
node_A.drop(columns=['index'],inplace=True)
node_B.drop(columns=['index'],inplace=True)
node_A.head()
node_A=node_A.drop_duplicates()
node_B=node_B.drop_duplicates()
pd.options.display.max_columns=120
SF_shell_network=SF_shell_network.drop_duplicates(['A','B'])
# +
colmns = ["LINK_ID","A","B",'FT',"length_m","FFS","CAP","lane","oneway",'MTYPE']
added_nodes = []
added_links = []
root = ET.Element("network")
nodes = ET.SubElement(root, "nodes")
links = ET.SubElement(root, "links", {"capperiod": "01:00:00", "effectivecellsize": "7.5", "effectivelanewidth": "3.75"})
i = 0
for index, row in tqdm(SF_shell_network.iterrows()): # iterate over the rows of the road network
# attrib = {id:link_id, from:from_node_id, to:to_node_id, length:length_m, freespeed:free_speed,
# capacity:capacity, permlanes:lanes, oneway:oneWay, modes:"modes"}
# <link from="4531878" freespeed="11.18" permlanes="1.0" id="7000227" length="21.03"
# oneway="1.0" modes="bike, car, walk" capacity="600.0"/>
link_dict = {}
#modes = []
for col in colmns:
if col == "LINK_ID":
val = row.loc[col]
link_dict["id"] = str(val)
elif col == "A":
val = row.loc[col]
link_dict["from"] = str(val)
_df = node_A.loc[node_A["A"] == val].reset_index()
if _df.at[0,"A"] not in added_nodes:
ET.SubElement(nodes, "node", {"id": str(round(_df.at[0,"A"],5)),
"x": str(round(_df.at[0,"X"],5)),
"y": str(round(_df.at[0,"Y"],5))})
added_nodes.append(_df.at[0,"A"])
# print(added_nodes)
elif col =="B":
val = row.loc[col]
link_dict["to"] = str(val)
_df = node_B.loc[node_B["B"] == val].reset_index()
if _df.at[0,"B"] not in added_nodes:
ET.SubElement(nodes, "node", {"id": str(round(_df.at[0,"B"],5)),
"x": str(round(_df.at[0,"X"],5)),
"y": str(round(_df.at[0,"Y"],5))})
added_nodes.append(_df.at[0,"B"])
# print(added_nodes)
elif col =="length_m":
val = row.loc[col]
link_dict["length"] = str(round(val,2))
elif col =="FFS":
val = row.loc[col]
link_dict["freespeed"] = str(round(val*0.447,2))
elif col =="CAP":
l = row.loc["lane"]
c = row.loc["CAP"]
link_dict["capacity"] = str(round((l*c),2))
elif col =="lane":
val = row.loc[col]
link_dict["permlanes"] = str(round(val,1))
elif col=='FT':
ft=row.loc[col]
if ft==1:
link_dict["type"] = str('fwy_fwy_cnctr')
link_dict["modes"] = str('car')
elif ft==2:
link_dict["type"] = str('freeway')
link_dict["modes"] = str('car')
elif ft==3:
link_dict["type"] = str('expressway')
link_dict["modes"] = str('car')
elif ft==4:
link_dict["type"] = str('collector')
link_dict["modes"] = str('car,bus,walk,bike')
elif ft==5:
link_dict["type"] = str('ramp')
link_dict["modes"] = str('car')
# elif ft==6:
# link_dict["type"] = str('centroid_cnctr')
# link_dict["modes"] = str('car')
elif ft==7:
link_dict["type"] = str('maj_arterial')
link_dict["modes"] = str('car,bus,walk,bike')
# elif ft==8:
# link_dict["type"] = str('not_used')
# elif ft==9:
# link_dict["type"] = str('alley')
# link_dict["modes"] = str('car,bike,walk')
elif ft==10:
link_dict["type"] = str('metered_ramp')
link_dict["modes"] = str('car')
elif ft==11:
link_dict["type"] = str('local')
link_dict["modes"] = str('car,bus,walk,bike')
elif ft==12:
link_dict["type"] = str('minor_arterial')
link_dict["modes"] = str('car,bus,walk,bike')
elif ft==13:
link_dict["type"] = str('bike_only')
link_dict["modes"] = str('bike')
# elif ft==14:
# link_dict["type"] = str('not_used')
elif ft==16:
link_dict["type"] = str('test')
link_dict["modes"] = str('car')
elif ft==25: #custom ftype for arterials outside SF city limits
link_dict["type"] = str('maj_arterial')
link_dict["modes"] = str('car')
else:
link_dict["type"] = str('super_arterial')
link_dict["modes"] = str('car,bus,walk,bike')
else:
link_dict["oneway"] = str(1)
#link_dict["modes"] = str('car,bus,walk')
if link_dict["id"] not in added_links:
added_links.append(link_dict["id"])
# print(link_dict)
ET.SubElement(links, "link", attrib=link_dict)
# +
def exportXML(root):
# Let's try using LXML
bVal=False
tree = etree.fromstring(ET.tostring(root))
xmlstr = etree.tostring(tree, encoding="UTF-8",
xml_declaration=True,
pretty_print=True,
doctype='<!DOCTYPE network SYSTEM "http://www.matsim.org/files/dtd/network_v2.dtd">')
outfilename = BASE_DIR/'SF_network_from_SFCHAMP.xml.gz'
with gzip.open(outfilename, 'wb') as f:
f.write(xmlstr)
bVal = True
return bVal
if exportXML(root):
print("Successfully created")
# -
gpdSFRdNtwrk.info(verbose=True)
# Source notebook: convert/Script/SF_Road_Network.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# +
import os, sys, time
import random
import numpy as np
import cv2
import matplotlib.pyplot as plt
from utils import read_yuv2gray, read_yuv2rgb, get_tensor_entries
from models import ALS_NN, LM_completion
# -
import wandb
dataset_full = read_yuv2gray(height=144, width=176, n_frames=300, file_name='akiyo_qcif.yuv', file_dir='data/')
rank = 20
n_frames = 50
dim_y = 144
dim_z = 176
portion = 0.075
n_test_entries = 5000
n_val_entries = 5000
predict_frames = [20]
max_iter = 50
noisy = False
randominit = False
lambda_ = 1
n = (n_frames, dim_y, dim_z)
n_entries = int(n_frames*dim_y*dim_z*portion)
dataset = dataset_full[:n_frames]
seed = 2021
# +
entries_arr = get_tensor_entries(dataset, size=n_entries, seed=seed)
val_entries = get_tensor_entries(dataset, size=n_val_entries)
test_entries = get_tensor_entries(dataset, size=n_test_entries)
solver = ALS_NN(
n=n,
rank=rank,
n_entries=n_entries,
noisy=noisy,
randominit=randominit,
seed=seed,
entries_arr=entries_arr
)
# -
solution = solver.fit(
max_iter=max_iter,
test_entries=test_entries,
lam=lambda_
)
pred = solver.predict(solution, predict_frames)
entries_arr_old = entries_arr.copy()
# +
entries_arr = get_tensor_entries(dataset, size=n_entries, seed=seed)
val_entries = get_tensor_entries(dataset, size=n_val_entries)
test_entries = get_tensor_entries(dataset, size=n_test_entries)
solver = LM_completion(
n=n,
rank=rank,
n_entries=n_entries,
noisy=noisy,
randominit=randominit,
seed=seed,
entries_arr=entries_arr
)
# -
solution = solver.fit(
max_iter=max_iter,
test_entries=test_entries,
lam=lambda_
)
pred_lm = solver.predict(solution, predict_frames)
pred_clipped = pred.clip(0., 255.)  # 8-bit grayscale values range from 0 to 255
plt.imshow(pred_clipped[0], cmap='gray')
plt.imshow(pred_lm[0].clip(0, 255), cmap='gray')
plt.imshow(dataset[0], cmap='gray')
# Source notebook: experiments.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Stochastic Simulation: Binomial Distribution
# <NAME>, <EMAIL>.<br>
# Universidade de São Paulo, São Carlos, Brasil.<br>
# https://sites.icmc.usp.br/francisco <br>
# Copyright: Creative Commons
# Let's simulate tossing $n$ coins and compute the probability of getting $k$ heads, that is, $k$ successes in $n$ trials of an experiment.
# +
from matplotlib import pyplot as plt
import numpy as np
from scipy.stats import binom
import math
np.random.seed(100)       # seed the random number generator (NumPy, since np.random is used below)
n = 100                   # number of coin tosses
p = 0.3                   # probability of heads
Pk = np.zeros(n+1)        # empirical distribution; k can range from 0 to n
vk = np.arange(0,n+1)
ns = 1000                 # number of simulations
for j in range(0,ns):     # for each simulation
    S = 0                 # number of successes
    for i in range(0,n):  # for each toss
        r = np.random.uniform()
        if(r <= p):       # success
            S = S + 1
    Pk[S] = Pk[S] + 1
Pk=Pk/sum(Pk)             # normalize the probability distribution
#plt.plot(vk, Pk, 'ro')
plt.figure(figsize=(10,6))
plt.xlim(0.8*np.min(vk[Pk>0]),1.2*np.max(vk[Pk>0]))
plt.bar(vk, Pk, label='Simulation')
# theoretical curve
vkt = np.arange(0,n+1)        # range of k
Pkt = binom.pmf(vkt, n, p)    # theoretical probabilities P(k) = C(n,k) p^k (1-p)^(n-k)
plt.plot(vkt, Pkt, 'r--', label='Theoretical prob.')
plt.xlabel('k', fontsize = 15)
plt.ylabel('P(k)',fontsize = 15)
plt.legend()
plt.show()
# -
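# As a cross-check of the loop above, the same experiment can be run in vectorized form with NumPy's built-in binomial sampler (a sketch, not part of the original notebook):

```python
import numpy as np

rng = np.random.default_rng(100)
n, p, ns = 100, 0.3, 1000

# Draw ns binomial samples at once instead of looping over individual coin tosses.
successes = rng.binomial(n, p, size=ns)

# Empirical distribution of k successes over the ns simulations.
Pk = np.bincount(successes, minlength=n + 1) / ns
print(int(np.argmax(Pk)))  # mode should be near n*p = 30
```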
# Source notebook: distribuicao-binomial.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Title: Review Rating Prediction
# ## 1. Introduction –
# I am using Amazon product reviews for this analysis. The goal is to predict the star rating from the review text, where ratings range from 1 (lowest) to 5 (highest). For example, given a review containing the text "Good product to be used.", the model will try to predict its star rating.
#
# ## Data Source
# https://www.kaggle.com/datafiniti/consumer-reviews-of-amazon-products
# ### Import required libraries
import pandas as pd
import numpy as np
import string
import re
import matplotlib.pyplot as plt
import seaborn as sns
from collections import Counter
# ### Load the data into dataframe.
review_df = pd.read_csv("Amazon_Product_Review_Dataset.csv")
review_df.head()
review_df.shape
# **There are too many columns, so let's look at one row for a better understanding.**
review_df.iloc[1]
# **There are 24 columns, and for our analysis we don't need URL or product-ID data. Let's select only the required columns.**
review_df = review_df[['name', 'primaryCategories', 'manufacturer', 'reviews.doRecommend', 'reviews.rating', 'reviews.text']]
review_df
# **Rename columns for easy access.**
new_columns = ['product_name', 'product_category', 'manufacturer', 'recommend', 'review_rating', 'review_text']
review_df.columns = new_columns
review_df.head()
# ### Check the dimensions of the data.
review_df.shape
# We can see there are 5000 rows and 6 columns.
# ### Verify if any missing values.
review_df.isnull().sum()
# So we can see that there are no null values in any of the columns.
review_df.describe()
# Review rating is the only numeric column in our dataset. The max is 5 and the min is 1, which reconfirms that there are no null or zero values.
review_df.describe(include=['O'])
# The data summary shows that there are 23 products from 4 unique categories, all from one manufacturer.
# ### Compute the review text length so we can check its distribution per rating.
review_df["review_length"] = review_df["review_text"].apply(len)
# ### Use box plot for each star rating.
plt.boxplot(review_df.review_rating)
plt.title('Boxplot of rating')
plt.show()
# ### Histogram of each rating.
plt.hist(review_df.review_rating)
plt.xlabel('Review rating')
plt.ylabel('Count')
plt.title('Review rating count plot')
plt.show()
plt.scatter(review_df.review_rating, review_df.review_length)
plt.xlabel('Review rating')
plt.ylabel('Review text length(#of Words)')
plt.title('Review rating vs Review text length')
plt.show()
# # Exercise 7.3
review_df.head()
# #### Group the data by review rating and check the mean of text length per review rating.
group_by_rating = review_df.groupby("review_rating").mean(numeric_only=True)
group_by_rating
# #### Find the correlation between selected features.
group_by_rating.corr()
# ### 7.3.1 - Preprocessing Steps
# #### Lowercase all the text.
review_df["review_text"] = review_df["review_text"].str.lower()
review_df
# #### Remove punctuations.
# Remove Special characters from the review_text.
review_df["review_text"] = review_df["review_text"].str.replace(r'[^\w\s]', ' ', regex=True)
review_df
# #### Import required libraries for feature extraction.
#tokenize text with Tfidf
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
# nltk.download('averaged_perceptron_tagger')
from nltk import pos_tag
from nltk import word_tokenize
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.feature_extraction.text import TfidfVectorizer
# #### Remove stop words.
stop = stopwords.words('english')
# Remove stop words
review_df["review_text"] = review_df["review_text"].apply(lambda x: [item for item in x.split() if item not in stop])
review_df.head()
# #### Apply porter stemmer
#Apply a stemmer
stemmer = PorterStemmer()
review_df["review_text"] = review_df["review_text"].apply(lambda x: [stemmer.stem(y) for y in x])
review_df.head()
# #### Join the text after stemming
review_df["review_text"]=review_df["review_text"].apply(lambda x: " ".join(y for y in x))
review_df.head()
# #### Split data in training and testing set.
# +
from sklearn.model_selection import train_test_split
selected_data = review_df[(review_df["review_rating"] == 4) | (review_df["review_rating"] == 5)].copy()
selected_data['review_rating'] = selected_data['review_rating'].replace([4,5],['rate4','rate5'])
X = selected_data["review_text"]
y = selected_data["review_rating"]
# -
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3)
# ### 7.3.2 - Feature extraction
# #### Generate feature matrix using TF-IDF vectorizer.
tfidf = TfidfVectorizer()
X_train = tfidf.fit_transform(X_train)
X_train[0].toarray()
# Let's check the vocabulary.
import itertools
tfidf_vocab = tfidf.vocabulary_
print(dict(itertools.islice(tfidf_vocab.items(), 20)))
# #### Let's convert the test/validation data to a feature matrix
X_test_tfidf = tfidf.transform(X_test)
X_test_tfidf[0].toarray()
# #### Generate bag-of-words using count-vectorizer.
# We generate a second feature matrix so we can compare performance across the two feature types.
bag_of_words = CountVectorizer()
X_train_bow = bag_of_words.fit_transform(X)
X_train_bow[0].toarray()
# Let's check the vocabulary.
import itertools
bag_of_words_vocab = bag_of_words.vocabulary_
print(dict(itertools.islice(bag_of_words_vocab.items(), 20)))
# ### 8.3.2 - Model Evaluation
# #### Using TF-IDF data for Model Evaluation.
from sklearn.linear_model import LogisticRegression
from yellowbrick.classifier import ConfusionMatrix
from yellowbrick.classifier import ClassificationReport
from yellowbrick.classifier import ROCAUC
# Instantiate the classification model
model = LogisticRegression()
# +
## The ConfusionMatrix visualizer takes a model
classes = ['rate4','rate5']
cm = ConfusionMatrix(model, classes=classes, percent=False)
#Fit fits the passed model. This is unnecessary if you pass the visualizer a pre-fitted model
cm.fit(X_train, y_train)
#To create the ConfusionMatrix, we need some test data. Score runs predict() on the data
#and then creates the confusion_matrix from scikit learn.
cm.score(X_test_tfidf, y_test)
# change fontsize of the labels in the figure
for label in cm.ax.texts:
label.set_size(20)
#How did we do?
cm.poof()
# +
# Precision, Recall, and F1 Score
# set the size of the figure and the font size
# #%matplotlib inline
plt.rcParams['figure.figsize'] = (15, 7)
plt.rcParams['font.size'] = 20
# Instantiate the visualizer
visualizer = ClassificationReport(model, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test_tfidf, y_test) # Evaluate the model on the test data
visualizer.show()
# +
# ROC and AUC
#Instantiate the visualizer
visualizer = ROCAUC(model, classes=['rate4', 'rate5'])
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test_tfidf, y_test) # Evaluate the model on the test data
visualizer.show()
# -
# # Predict if product is recommended based on the review text using Logistic regression
selected_data = review_df[["review_text", "recommend"]]
selected_data.head()
selected_data['recommend'] = selected_data['recommend'].replace([True, False],["True","False"])
selected_data.head()
selected_data['recommend'].value_counts()
X = selected_data["review_text"]
y = selected_data["recommend"]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3)
tfidf = TfidfVectorizer()
X_train = tfidf.fit_transform(X_train)
# +
# X_train['review_rating'] = X_train['review_rating'].replace([1,2,3,4,5],["1","2","3","4","5"])
# -
X_test = tfidf.transform(X_test)
y_test.value_counts()
# Instantiate the classification model
model = LogisticRegression()
# +
## The ConfusionMatrix visualizer takes a model
classes = ['False','True']
cm = ConfusionMatrix(model, classes=classes)
#Fit fits the passed model. This is unnecessary if you pass the visualizer a pre-fitted model
cm.fit(X_train, y_train)
#To create the ConfusionMatrix, we need some test data. Score runs predict() on the data
#and then creates the confusion_matrix from scikit learn.
cm.score(X_test, y_test)
# change fontsize of the labels in the figure
for label in cm.ax.texts:
label.set_size(20)
#How did we do?
cm.poof()
# +
# Precision, Recall, and F1 Score
# set the size of the figure and the font size
# #%matplotlib inline
plt.rcParams['figure.figsize'] = (15, 7)
plt.rcParams['font.size'] = 20
# Instantiate the visualizer
visualizer = ClassificationReport(model, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show()
# -
y_test.value_counts()
# # Predict review rating based on review text using Multinomial naive Bayes.
# In this attempt I am using all the ratings (1 to 5).
X_mnb = review_df["review_text"]
X_mnb
y_mnb = review_df["review_rating"]
y_mnb
from sklearn.model_selection import train_test_split
X_mnb_train, X_mnb_test, y_mnb_train, y_mnb_test = train_test_split(X_mnb,y_mnb,test_size=0.3)
tfidf = TfidfVectorizer()
X_mnb_train = tfidf.fit_transform(X_mnb_train)
X_mnb_test = tfidf.transform(X_mnb_test)
from sklearn.naive_bayes import MultinomialNB
# Instantiate the Multinomial Naive Bayes model
mnb_model = MultinomialNB()
mnb_model.fit(X_mnb_train, y_mnb_train)
predictions = mnb_model.predict(X_mnb_test)
from sklearn.metrics import confusion_matrix, classification_report
print(confusion_matrix(y_mnb_test, predictions))
print(classification_report(y_mnb_test, predictions))
# # Predict if product is recommended based on the review text using Multinomial Naive Bayes
y_recom = review_df["recommend"]
X_recom_train, X_recom_test, y_recom_train, y_recom_test = train_test_split(X_mnb,y_recom,test_size=0.3)
tfidf = TfidfVectorizer()
X_recom_train = tfidf.fit_transform(X_recom_train)
X_recom_test = tfidf.transform(X_recom_test)
mnb_recom_model = MultinomialNB()
mnb_recom_model.fit(X_recom_train, y_recom_train)
predictions = mnb_recom_model.predict(X_recom_test)
print(confusion_matrix(y_recom_test, predictions))
print(classification_report(y_recom_test, predictions))
# # Predict if product is recommended based on the review text using Decision Tree
from sklearn.tree import DecisionTreeClassifier
decision_model = DecisionTreeClassifier()
decision_model.fit(X_recom_train, y_recom_train)
predictions = decision_model.predict(X_recom_test)
print(confusion_matrix(y_recom_test, predictions))
# +
## The ConfusionMatrix visualizer takes a model
classes = ['False','True']
cm = ConfusionMatrix(decision_model, classes=classes)
#Fit fits the passed model. This is unnecessary if you pass the visualizer a pre-fitted model
cm.fit(X_recom_train, y_recom_train)
#To create the ConfusionMatrix, we need some test data. Score runs predict() on the data
#and then creates the confusion_matrix from scikit learn.
cm.score(X_recom_test, y_recom_test)
# change fontsize of the labels in the figure
for label in cm.ax.texts:
label.set_size(20)
#How did we do?
cm.poof()
# -
print(classification_report(y_recom_test, predictions))
# +
# Precision, Recall, and F1 Score
# set the size of the figure and the font size
# #%matplotlib inline
plt.rcParams['figure.figsize'] = (15, 7)
plt.rcParams['font.size'] = 20
# Instantiate the visualizer
visualizer = ClassificationReport(decision_model, classes=classes)
visualizer.fit(X_recom_train, y_recom_train) # Fit the training data to the visualizer
visualizer.score(X_recom_test, y_recom_test) # Evaluate the model on the test data
visualizer.show()
# -
from sklearn import tree
tree.plot_tree(decision_model)
# Because of the large word-feature vocabulary, the decision tree grows very deep.
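# One way to keep the tree readable is to cap its depth. Below is a minimal sketch on synthetic data (the `max_depth=5` value is an illustrative assumption, not a tuned choice for this dataset):

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the sparse TF-IDF matrix: many features, binary target.
X_demo, y_demo = make_classification(n_samples=500, n_features=100, random_state=0)

# Capping max_depth regularizes the tree and keeps tree.plot_tree output legible.
shallow_tree = DecisionTreeClassifier(max_depth=5, random_state=0)
shallow_tree.fit(X_demo, y_demo)
print(shallow_tree.get_depth())  # at most 5
```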
# Source notebook: Projects/amazon_product_review/code/Amazon_Review_Rating_Prediction.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="XRyERmWj_2p2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} executionInfo={"status": "ok", "timestamp": 1594200136850, "user_tz": -180, "elapsed": 881, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZmslRmKxKDOPjB9gu0o-mNrVW1uJvYNaPqWNnKA=s64", "userId": "05304319935596570603"}} outputId="b8b81d44-5c88-4a70-8708-498a5b8cb6fd"
from google.colab import drive
drive.mount('/content/drive')
# + id="bL4hh5KD_am1" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594200136852, "user_tz": -180, "elapsed": 870, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZmslRmKxKDOPjB9gu0o-mNrVW1uJvYNaPqWNnKA=s64", "userId": "05304319935596570603"}}
import pandas as pd
import subprocess
import matplotlib.pyplot as plt
from itertools import combinations
from networkx import write_gpickle as write_g
import networkx as nx
import operator
# + id="qRQFEnMC_MJ_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} executionInfo={"status": "ok", "timestamp": 1594200137286, "user_tz": -180, "elapsed": 1293, "user": {"displayName": "<NAME>", "photoUrl": "<KEY>", "userId": "05304319935596570603"}} outputId="fbd21719-34fa-4ed0-f050-61cdba198224"
plays_df = pd.read_csv("/content/drive/My Drive/Colab Notebooks/Shakespeare Network Analyses/Shakespeare_data.csv")
# Drop stage directions (where there isn't an act/scene/line)
plays_df = plays_df[pd.notna(plays_df['ActSceneLine'])]
plays_df[['Act','Scene','Line']] = plays_df['ActSceneLine'].str.split('.',expand = True).astype(float)
plays_df = plays_df.drop('ActSceneLine',axis=1)
# Keep only Romeo and Juliet
tragedies = ["Romeo and Juliet"]
plays_df = plays_df[plays_df["Play"].isin(tragedies)]
print("{} rows and {} columns".format(*plays_df.shape))
plays_df.head()
# + id="YPY8X1HIt2-G" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594200137287, "user_tz": -180, "elapsed": 1282, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZmslRmKxKDOPjB9gu0o-mNrVW1uJvYNaPqWNnKA=s64", "userId": "05304319935596570603"}}
#plays_df.replace({'LEAR': 'KING LEAR'},inplace=True)
# + id="5Zu9eTyc_dJQ" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594200137288, "user_tz": -180, "elapsed": 1273, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZmslRmKxKDOPjB9gu0o-mNrVW1uJvYNaPqWNnKA=s64", "userId": "05304319935596570603"}}
play_name = "Romeo and Juliet"
single_play = plays_df[(plays_df['Play'] == play_name)]
# Group the df by character to get how often each speaks
characters = single_play.groupby(['Player']).size().reset_index()
characters.rename(columns = {0: 'Count'}, inplace = True)
# Keep characters with more than 5 lines
characters = characters[characters["Count"] > 5]
# + id="oQUd7F2of14r" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594200137289, "user_tz": -180, "elapsed": 1263, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZmslRmKxKDOPjB9gu0o-mNrVW1uJvYNaPqWNnKA=s64", "userId": "05304319935596570603"}}
characters = characters["Player"]
# + id="5Ik7AMBebqhC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 510} executionInfo={"status": "ok", "timestamp": 1594200137692, "user_tz": -180, "elapsed": 1651, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZmslRmKxKDOPjB9gu0o-mNrVW1uJvYNaPqWNnKA=s64", "userId": "05304319935596570603"}} outputId="c2223ffd-909a-4f3f-b338-98e4afd259d6"
characters
# + id="W9ELhULAuyFS" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594200137694, "user_tz": -180, "elapsed": 1639, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZmslRmKxKDOPjB9gu0o-mNrVW1uJvYNaPqWNnKA=s64", "userId": "05304319935596570603"}}
def graphify(characters):
play_graph = nx.Graph()
play_graph.add_nodes_from(characters)
scenes_df = single_play.groupby(['Act','Scene','Player']).size()
scenes_df = scenes_df.loc[:,:,characters]
for (act,scene), counts in scenes_df.groupby(['Act','Scene']):
chars = counts.index.get_level_values(2).tolist()
pairs = list(combinations(chars,2))
for (a_char, b_char) in pairs:
if play_graph.has_edge(a_char, b_char):
play_graph[a_char][b_char]['weight'] += 1
else:
play_graph.add_edge(a_char, b_char,weight=1)
return play_graph
# + id="d8ZilCbicM9-" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594200256423, "user_tz": -180, "elapsed": 813, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZmslRmKxKDOPjB9gu0o-mNrVW1uJvYNaPqWNnKA=s64", "userId": "05304319935596570603"}}
wChars = graphify(characters.drop(labels=[5,26]))
woChars = graphify(characters.drop(labels=[5,26,15,13,27]))
# + id="cSnAOJjbZ77K" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1594200258674, "user_tz": -180, "elapsed": 3017, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZmslRmKxKDOPjB9gu0o-mNrVW1uJvYNaPqWNnKA=s64", "userId": "05304319935596570603"}} outputId="60b18531-e737-4364-c713-ad3dcdaa99b9"
pos = nx.spring_layout(wChars, seed=50)
betCent = nx.betweenness_centrality(wChars, normalized=True, endpoints=True)
node_color = [40000.0 * wChars.degree(v) for v in wChars]
node_size = [v * 10000 for v in betCent.values()]
plt.figure(figsize=(22,22))
nx.draw_networkx(wChars, pos=pos, with_labels=True,
                 node_color=node_color, alpha=0.3,
                 node_size=node_size)
nx.draw_networkx_labels(wChars, pos, font_size=24, font_family='sans-serif')
plt.axis('off');
# + id="WpCmE9EbjTOy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1594200258676, "user_tz": -180, "elapsed": 2857, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZmslRmKxKDOPjB9gu0o-mNrVW1uJvYNaPqWNnKA=s64", "userId": "05304319935596570603"}} outputId="22dee8c9-6d68-4ee2-e70d-ef91a794f21e"
pos = nx.spring_layout(woChars, seed=50)
betCent = nx.betweenness_centrality(woChars, normalized=True, endpoints=True)
node_color = [40000.0 * woChars.degree(v) for v in woChars]
node_size = [v * 10000 for v in betCent.values()]
plt.figure(figsize=(22,22))
nx.draw_networkx(woChars, pos=pos, with_labels=True,
                 node_color=node_color, alpha=0.3,
                 node_size=node_size)
nx.draw_networkx_labels(woChars, pos, font_size=24, font_family='sans-serif')
plt.axis('off');
# + id="-sDXQyIDaHnM" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594200258677, "user_tz": -180, "elapsed": 2183, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZmslRmKxKDOPjB9gu0o-mNrVW1uJvYNaPqWNnKA=s64", "userId": "05304319935596570603"}}
# Source notebook: bc_romeo_juliet.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
from colicoords import Data, CellListPlot, IterCellPlot, AutoIterCellPlot, save, load, CellPlot
import os
import mahotas as mh
import colicoords
colicoords.__file__
c41_raw = load('c41_cell_raw.hdf5')
epec_raw = load('epec_cell_raw.hdf5')
plt.figure()
cp = CellPlot(epec_raw[2083])
cp.plot_outline()
cp.imshow('binary')
1565/2948
c41_binary = c41_raw.copy()
res_c41 = c41_binary.optimize_mp()
epec_binary = epec_raw.copy()
res_epec = epec_binary.optimize_mp()
np.where(np.equal(res_c41, None))
np.where(np.equal(res_epec, None))
bn = ~np.equal(res_c41, None)
obj_c41 = np.array([r.objective_value for r in np.array(res_c41)[bn]])
a_c41 = np.array([c.data.binary_img.sum() for c in c41_binary[bn]])
f = obj_c41 / a_c41
plt.figure()
h = plt.hist(f, bins='fd')
b = f < 0.1
aicp = AutoIterCellPlot(c41_binary[bn][b])
aicp.plot()
c41_selected = c41_binary[bn][b]
save('c41_binary_opt.hdf5', c41_selected)
bn = ~np.equal(res_epec, None)
obj_epec = np.array([r.objective_value for r in np.array(res_epec)[bn]])
a_epec = np.array([c.data.binary_img.sum() for c in epec_binary[bn]])
f = obj_epec / a_epec
plt.figure()
h = plt.hist(f, bins='fd')
b = f < 0.08
aicp = AutoIterCellPlot(epec_binary[b])
aicp.plot()
epec_selected = epec_binary[b]
len(epec_selected)
save('epec_binary_opt.hdf5', epec_selected)
# Source notebook: src_data/20191002_yichen_deltaescv_c41_eyfp_escv_repeat_03/04_optimization_and_selection.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import os
import pandas as pd
import pprint
import csv
import keras
#path constants
train_path = '../../../iterations/data/final/train_norm'
test_path = '../../../iterations/data/final/test_norm'
cleaned_train_path = '../../../cleaned_data/train'
cleaned_test_path = '../../../cleaned_data/test'
#type constants
vehicle_types = ['ZVe44', 'ZV573', 'ZV63d', 'ZVfd4', 'ZVa9c', 'ZVa78', 'ZV252']
#two label dataframes
train_label_df = pd.read_csv('../../../iterations/data/final' + '/label.csv', delimiter = ',', encoding = 'utf-8')
test_label_df = pd.read_csv('../../../iterations/data/final' + '/label.csv', delimiter = ',', encoding = 'utf-8')
# +
def getLabel(filename, label_df):
idx = label_df.loc[label_df['sample_file_name'] == filename]
return idx.iloc[0]['label']
def cal_length(path, vehicle_type, label_df):
#vehicle_type: one string element under vehicle_types = ['ZVe44', 'ZV573', 'ZV63d', 'ZVfd4', 'ZVa9c', 'ZVa78', 'ZV252']
path = path + '/' + vehicle_type
#these are variables to calculate traversing progress (DO NOT CHANGE)
counts_per_percent = int(len(os.listdir(path)) / 100)
percentage_completion = 0
counter = 0
single_len=0
file_count=0
file_len=0
for file in os.listdir(path):
sample_df = pd.read_csv(path + '/' + file, delimiter = ',', encoding = 'utf-8')
file_len+=len(sample_df)
file_count+=1
if len(sample_df)>=single_len:
single_len=len(sample_df)
ave_file_len=file_len/file_count
print("AVG Length: ",ave_file_len,' file count: ',file_count,' max_len: ',single_len)
# -
cal_length(train_path,'ZV252',train_label_df)
file_len_weighted=round(392*0.7)
file_len_weighted
# +
def getLabel(filename, label_df):
idx = label_df.loc[label_df['sample_file_name'] == filename]
return idx.iloc[0]['label']
def TraverseFiles(path, vehicle_type, label_df, length):
#vehicle_type: one string element under vehicle_types = ['ZVe44', 'ZV573', 'ZV63d', 'ZVfd4', 'ZVa9c', 'ZVa78', 'ZV252']
path = path + '/' + vehicle_type
#these are variables to calculate traversing progress (DO NOT CHANGE)
counts_per_percent = int(len(os.listdir(path)) / 100)
percentage_completion = 0
counter = 0
lables=np.array([])
file_list= []
#file_len=file_len_weighted
for file in os.listdir(path):
sample_df = pd.read_csv(path + '/' + file, delimiter = ',', encoding = 'utf-8')
if len(sample_df)==0:
continue
elif len(sample_df)!=0:
sample_array=np.array(sample_df.iloc[:,0:11])
if len(sample_array)<length:
n_zeros=length-len(sample_array)
sample_array=np.concatenate((np.zeros((n_zeros,11)),sample_array))
#print(len(sample_array))
elif len(sample_array)>length:
sample_array=sample_array[0:length]
#print(len(sample_array))
elif len(sample_array)==length:
sample_array=sample_array
#print(len(sample_array))
#np.float(sample_array)
#sample_array=torch.from_numpy(sample_array).float()
#file_list=torch.cat((file_list,sample_array[0:76,]), dim=1)
file_list.append(sample_array)
l=np.array(label_df.loc[label_df['sample_file_name'] == file])
lables=np.append(lables,[l[:,1]])
#belows are to show traversing progress (DO NOT CHANGE)
counter += 1
if counter == counts_per_percent:
counter = 0
percentage_completion += 1
print('traversing files under', path, ':', percentage_completion, "%", end="\r", flush=True)
return file_list,lables
# -
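# The fixed-length handling inside `TraverseFiles` (front zero-padding for short samples, truncation for long ones) can be isolated into a small helper; a minimal NumPy sketch of the same idea (names here are illustrative, not from the original code):

```python
import numpy as np

def pad_or_truncate(arr, length):
    """Zero-pad (at the front) or truncate a (time, features) array to `length` rows."""
    if len(arr) < length:
        pad = np.zeros((length - len(arr), arr.shape[1]))
        return np.concatenate((pad, arr))
    return arr[:length]

# A 3-step sample with 11 features, padded to 5 time steps.
sample = np.ones((3, 11))
padded = pad_or_truncate(sample, 5)
print(padded.shape)  # (5, 11)
```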
data_array_train, data_labels_train = TraverseFiles(train_path,'ZV252',train_label_df,210)
data_array_test, data_labels_test = TraverseFiles(test_path,'ZV252',test_label_df,210)
x_train=np.array(data_array_train)
labels_train=np.array(data_labels_train,dtype=int)
x_test=np.array(data_array_test)
labels_test=np.array(data_labels_test,dtype=int)
print(x_train.shape)
print(labels_train.shape)
print(x_test.shape)
print(labels_test.shape)
# +
keras.backend.clear_session()
lstm = keras.Sequential()
lstm.add(keras.layers.Dropout(0.2))
lstm.add(keras.layers.LSTM(16,dropout=0.2, recurrent_dropout=0.2,return_sequences=True))
lstm.add(keras.layers.BatchNormalization())
lstm.add(keras.layers.LSTM(16,dropout=0.2, recurrent_dropout=0.2))
lstm.add(keras.layers.BatchNormalization())
lstm.add(keras.layers.Dense(1, activation = 'sigmoid'))
lstm.compile(
loss='binary_crossentropy',
optimizer='Adam',
metrics=['accuracy']
)
# -
lstm.fit(x_train,
labels_train,
epochs = 500,
batch_size=200
)
lstm.save('lstm_model_ZV252.h5')
pred_ = (lstm.predict(x_test) > 0.5).astype(int)  # predict_classes was removed in recent Keras versions
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
print(classification_report(labels_test, pred_,digits=4))
print(accuracy_score(labels_test, pred_))
# Source notebook: Models/LSTM/LSTM_final/ZV252_LSTM_final.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="N7ITxKLUkX0v"
# ##### Copyright 2020 The TensorFlow Authors.
# + cellView="both" colab={} colab_type="code" id="yOYx6tzSnWQ3"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="6xgB0Oz5eGSQ"
# # Introduction to graphs and functions
# + [markdown] colab_type="text" id="w4zzZVZtQb1w"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/guide/intro_to_graphs"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/intro_to_graphs.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/intro_to_graphs.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/intro_to_graphs.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="WyW_CpOtaPfO"
# ## Setup
#
# + colab={} colab_type="code" id="goZwOXp_xyQj"
import tensorflow as tf
import timeit
from datetime import datetime
# + [markdown] colab_type="text" id="RBKqnXI9GOax"
# # Introduction to Graphs and `tf.function`
#
# This guide goes beneath the surface of TensorFlow and Keras to see how TensorFlow works. If you instead want to immediately get started with Keras, please see [our collection of Keras guides](keras/).
#
# In this guide, you'll see the core of how TensorFlow lets you make simple changes to your code to get graphs, how graphs are stored and represented, and how you can use them to accelerate and export your models.
#
# Note: For those of you who are only familiar with TensorFlow 1.x, this guide demonstrates a very different view of graphs.
#
# This is a short-form introduction; for a full introduction to these concepts, see [the `tf.function` guide](function).
#
# + [markdown] colab_type="text" id="v0DdlfacAdTZ"
# ## What are graphs?
#
# In the previous three guides, you have seen TensorFlow running **eagerly**. This means TensorFlow operations are executed by Python, operation by operation, with the results returned back to Python. Eager TensorFlow takes advantage of GPUs, allowing you to place variables, tensors, and even operations on GPUs and TPUs. It is also easy to debug.
#
# For some users, you may never need or want to leave Python.
#
# However, running TensorFlow op-by-op in Python prevents a host of accelerations otherwise available. If you can extract tensor computations from Python, you can make them into a *graph*.
#
# **Graphs are data structures that contain a set of `tf.Operation` objects, which represent units of computation; and `tf.Tensor` objects, which represent the units of data that flow between operations.** They are defined in a `tf.Graph` context. Since these graphs are data structures, they can be saved, run, and restored all without the original Python code.
#
# This is what a simple two-layer graph looks like when visualized in TensorBoard.
#
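# As a sketch of this (assuming the TensorFlow 2.x API; the function name is illustrative), you can trace a trivial function and list the `tf.Operation` objects its graph contains:

```python
import tensorflow as tf

@tf.function
def add_and_scale(x):
    return (x + 1.0) * 2.0

# Tracing with a concrete signature produces a tf.Graph you can inspect.
concrete = add_and_scale.get_concrete_function(tf.TensorSpec([], tf.float32))
op_types = [op.type for op in concrete.graph.get_operations()]
print(op_types)  # includes ops such as 'AddV2' and 'Mul'
```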
# + [markdown] colab_type="text" id="FvQ5aBuRGT1o"
# 
# + [markdown] colab_type="text" id="DHpY3avXGITP"
# ## The benefits of graphs
#
# With a graph, you have a great deal of flexibility. You can use your TensorFlow graph in environments that don't have a Python interpreter, like mobile applications, embedded devices, and backend servers. TensorFlow uses graphs as the format for saved models when it exports them from Python.
#
# Graphs are also easily optimized, allowing the compiler to do transformations like:
#
# * Statically infer the value of tensors by folding constant nodes in your computation *("constant folding")*.
# * Separate sub-parts of a computation that are independent and split them between threads or devices.
# * Simplify arithmetic operations by eliminating common subexpressions.
#
# + [markdown] colab_type="text" id="o1x1EOD9GjnB"
# There is an entire optimization system, [Grappler](./graph_optimization.ipynb), to perform this and other speedups.
#
# In short, graphs are extremely useful and let your TensorFlow run **fast**, run **in parallel**, and run efficiently **on multiple devices**.
#
# However, you still want to define your machine learning models (or other computations) in Python for convenience, and then automatically construct graphs when you need them.
# + [markdown] colab_type="text" id="pSZebVuWxDXu"
# # Tracing graphs
#
# The way you create a graph in TensorFlow is to use `tf.function`, either as a direct call or as a decorator.
# + colab={} colab_type="code" id="HKbLeJ1y0Umi"
# Define a Python function
def function_to_get_faster(x, y, b):
x = tf.matmul(x, y)
x = x + b
return x
# Create a `Function` object that contains a graph
a_function_that_uses_a_graph = tf.function(function_to_get_faster)
# Make some tensors
x1 = tf.constant([[1.0, 2.0]])
y1 = tf.constant([[2.0], [3.0]])
b1 = tf.constant(4.0)
# It just works!
a_function_that_uses_a_graph(x1, y1, b1).numpy()
# + [markdown] colab_type="text" id="MT7U8ozok0gV"
# `tf.function`-ized functions are Python callables that work the same as their Python equivalents. They have a particular class (`python.eager.def_function.Function`), but to you they act just like the non-traced version.
#
# `tf.function` recursively traces any Python function it calls.
# + colab={} colab_type="code" id="rpz08iLplm9F"
def inner_function(x, y, b):
x = tf.matmul(x, y)
x = x + b
return x
# Use the decorator
@tf.function
def outer_function(x):
y = tf.constant([[2.0], [3.0]])
b = tf.constant(4.0)
return inner_function(x, y, b)
# Note that the callable will create a graph that
# includes inner_function() as well as outer_function()
outer_function(tf.constant([[1.0, 2.0]])).numpy()
# + [markdown] colab_type="text" id="P88fOr88qgCj"
# If you have used TensorFlow 1.x, you will notice that at no time did you need to define a `Placeholder` or `tf.Session`.
# + [markdown] colab_type="text" id="wfeKf0Nr1OEK"
# ## Flow control and side effects
#
# Flow control and loops are converted to TensorFlow via `tf.autograph` by default. Autograph uses a combination of methods, including standardizing loop constructs, unrolling, and [AST](https://docs.python.org/3/library/ast.html) manipulation.
#
# + colab={} colab_type="code" id="PFObpff1BMEb"
def my_function(x):
if tf.reduce_sum(x) <= 1:
return x * x
else:
return x-1
a_function = tf.function(my_function)
print("First branch, with graph:", a_function(tf.constant(1.0)).numpy())
print("Second branch, with graph:", a_function(tf.constant([5.0, 5.0])).numpy())
# + [markdown] colab_type="text" id="hO4DBUNZBMwQ"
# You can directly call the Autograph conversion to see how Python is converted into TensorFlow ops. This is, mostly, unreadable, but you can see the transformation.
# + colab={} colab_type="code" id="8x6RAqza1UWf"
# Don't read the output too carefully.
print(tf.autograph.to_code(my_function))
# + [markdown] colab_type="text" id="GZ4Ieg6tBE6l"
# Autograph automatically converts `if-then` clauses, loops, `break`, `return`, `continue`, and more.
#
# Most of the time, Autograph will work without special considerations. However, there are some caveats, and the [tf.function guide](./function.ipynb) can help here, as well as the [complete autograph reference](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/index.md)
# + [markdown] colab_type="text" id="A6NHDp7vAKcJ"
# ## Seeing the speed up
#
# Just wrapping a tensor-using function in `tf.function` does not automatically speed up your code. For small functions called a few times on a single machine, the overhead of calling a graph or graph fragment may dominate runtime. Also, if most of the computation was already happening on an accelerator, such as stacks of GPU-heavy convolutions, the graph speedup won't be large.
#
# For complicated computations, graphs can provide a significant speedup. This is because graphs reduce Python-to-device communication and perform some optimizations.
#
# This code times a few runs on some small dense layers.
# + colab={} colab_type="code" id="zbNndv-0BeO4"
# Create an override model to classify pictures
class SequentialModel(tf.keras.Model):
def __init__(self, **kwargs):
super(SequentialModel, self).__init__(**kwargs)
self.flatten = tf.keras.layers.Flatten(input_shape=(28, 28))
self.dense_1 = tf.keras.layers.Dense(128, activation="relu")
self.dropout = tf.keras.layers.Dropout(0.2)
self.dense_2 = tf.keras.layers.Dense(10)
def call(self, x):
x = self.flatten(x)
x = self.dense_1(x)
x = self.dropout(x)
x = self.dense_2(x)
return x
input_data = tf.random.uniform([60, 28, 28])
eager_model = SequentialModel()
graph_model = tf.function(eager_model)
print("Eager time:", timeit.timeit(lambda: eager_model(input_data), number=10000))
print("Graph time:", timeit.timeit(lambda: graph_model(input_data), number=10000))
# + [markdown] colab_type="text" id="kNGuLnjK1c5U"
# ### Polymorphic functions
#
# When you trace a function, you create a `Function` object that is **polymorphic**. A polymorphic function is a Python callable that encapsulates several concrete function graphs behind one API.
#
# You can use this `Function` on all different kinds of `dtypes` and shapes. Each time you invoke it with a new argument signature, the original function gets re-traced with the new arguments. The `Function` then stores the `tf.Graph` corresponding to that trace in a `concrete_function`. If the function has already been traced with that kind of argument, you just get your pre-traced graph.
#
# Conceptually, then:
# * A **`tf.Graph`** is the raw, portable data structure describing a computation
# * A **`Function`** is a caching, tracing, dispatcher over ConcreteFunctions
# * A **`ConcreteFunction`** is an eager-compatible wrapper around a graph that lets you execute the graph from Python
#
# ### Inspecting polymorphic functions
#
# You can inspect `a_function`, which is the result of calling `tf.function` on the Python function `my_function`. In this example, calling `a_function` with three kinds of arguments results in three different concrete functions.
#
# + colab={} colab_type="code" id="7heuYuwn2edE"
print(a_function)
print("Calling a `Function`:")
print("Int:", a_function(tf.constant(2)))
print("Float:", a_function(tf.constant(2.0)))
print("Rank-1 tensor of floats", a_function(tf.constant([2.0, 2.0, 2.0])))
# + colab={} colab_type="code" id="s1c8db0cCW2k"
# Get the concrete function that works on floats
print("Inspecting concrete functions")
print("Concrete function for float:")
print(a_function.get_concrete_function(tf.TensorSpec(shape=[], dtype=tf.float32)))
print("Concrete function for tensor of floats:")
print(a_function.get_concrete_function(tf.constant([2.0, 2.0, 2.0])))
# + colab={} colab_type="code" id="JLTNuv_CCZXK"
# Concrete functions are callable
# Note: You won't normally do this, but instead just call the containing `Function`
cf = a_function.get_concrete_function(tf.constant(2))
print("Directly calling a concrete function:", cf(tf.constant(2)))
# + [markdown] colab_type="text" id="PTHNiHLXH9es"
# In this example, you are seeing pretty far into the stack. Unless you are specifically managing tracing, you will not normally need to call concrete functions directly as shown here.
# + [markdown] colab_type="text" id="V11zkxU22XeD"
# # Reverting to eager execution
#
# You may find yourself looking at long stack traces, especially ones that refer to `tf.Graph` or `with tf.Graph().as_default()`. This means you are likely running in a graph context. Core functions in TensorFlow use graph contexts, such as Keras's `model.fit()`.
#
# It is often much easier to debug eager execution. Stack traces should be relatively short and easy to comprehend.
#
# In situations where the graph makes debugging tricky, you can revert to using eager execution to debug.
#
# Here are ways you can make sure you are running eagerly:
#
# * Call models and layers directly as callables
#
# * When using Keras compile/fit, at compile time use **`model.compile(run_eagerly=True)`**
#
# * Set global execution mode via **`tf.config.experimental_run_functions_eagerly(True)`**
#
# + [markdown] colab_type="text" id="iTHvdQfRICJb"
# ### Using `run_eagerly=True`
# + colab={} colab_type="code" id="kqzBV2rSzvpC"
# Define an identity layer with an eager side effect
class EagerLayer(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(EagerLayer, self).__init__(**kwargs)
# Do some kind of initialization here
def call(self, inputs):
print("\nCurrently running eagerly", str(datetime.now()))
return inputs
# + colab={} colab_type="code" id="5DFvc9ySr7t3"
# Create an override model to classify pictures, adding the custom layer
class SequentialModel(tf.keras.Model):
def __init__(self):
super(SequentialModel, self).__init__()
self.flatten = tf.keras.layers.Flatten(input_shape=(28, 28))
self.dense_1 = tf.keras.layers.Dense(128, activation="relu")
self.dropout = tf.keras.layers.Dropout(0.2)
self.dense_2 = tf.keras.layers.Dense(10)
self.eager = EagerLayer()
def call(self, x):
x = self.flatten(x)
x = self.dense_1(x)
x = self.dropout(x)
x = self.dense_2(x)
return self.eager(x)
# Create an instance of this model
model = SequentialModel()
# Generate some nonsense pictures and labels
input_data = tf.random.uniform([60, 28, 28])
labels = tf.random.uniform([60])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# + [markdown] colab_type="text" id="U3-hcwmpI3Sv"
# First, compile the model without eager. Note that the model is not traced; despite its name, `compile` only sets up loss functions, optimization, and other training parameters.
# + colab={} colab_type="code" id="w2GdwhB_KQlw"
model.compile(run_eagerly=False, loss=loss_fn)
# + [markdown] colab_type="text" id="WLMXk1uxKQ44"
# Now, call `fit` and see that the function is traced (twice) and then the eager effect never runs again.
# + colab={} colab_type="code" id="VCoLlZDythZ8"
model.fit(input_data, labels, epochs=3)
# + [markdown] colab_type="text" id="jOk6feLOK1pR"
# If you run even a single epoch in eager, however, you can see the eager side effect twice.
# + colab={} colab_type="code" id="MGIYwrKpK06e"
print("Running eagerly")
# When compiling the model, set it to run eagerly
model.compile(run_eagerly=True, loss=loss_fn)
model.fit(input_data, labels, epochs=1)
# + [markdown] colab_type="text" id="qwq_cnc8Lwf8"
# ### Using `experimental_run_functions_eagerly`
# You can also globally set everything to run eagerly. Note that this only works if you re-trace; traced functions will remain traced and run as a graph.
# + colab={} colab_type="code" id="oFSxRtcptYpe"
# Now, globally set everything to run eagerly
tf.config.experimental_run_functions_eagerly(True)
print("Run all functions eagerly.")
# First, trace the model, triggering the side effect
polymorphic_function = tf.function(model)
# It was traced...
print(polymorphic_function.get_concrete_function(input_data))
# But when you run the function again, the side effect happens (both times).
result = polymorphic_function(input_data)
result = polymorphic_function(input_data)
# + colab={} colab_type="code" id="pD-AQxEhua4E"
# Don't forget to set it back when you are done
tf.config.experimental_run_functions_eagerly(False)
# + [markdown] colab_type="text" id="sm0bNFp8PX53"
# # Tracing and performance
#
# Tracing costs some overhead. Although tracing small functions is quick, large models can take noticeable wall-clock time to trace. This investment is usually quickly paid back with a performance boost, but it's important to be aware that the first few epochs of any large model training can be slower due to tracing.
#
# No matter how large your model, you want to avoid tracing frequently. This [section of the tf.function guide](function.ipynb#when_to_retrace) discusses how to set input specifications and use tensor arguments to avoid retracing. If you find you are getting unusually poor performance, it's good to check to see if you are retracing accidentally.
#
# You can add an eager-only side effect (such as printing a Python argument) so you can see when the function is being traced. Here, you see extra retracing because new Python arguments always trigger retracing.
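# As a hedged sketch of the retracing fix mentioned above (assuming TF 2.x; `no_retrace` is an illustrative name), pinning an `input_signature` makes the `Function` accept any tensor matching the spec instead of retracing per call signature:

```python
import tensorflow as tf

trace_count = [0]

@tf.function(input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
def no_retrace(x):
    trace_count[0] += 1  # Python side effect: runs only while tracing
    return x * x

no_retrace(tf.constant(2.0))
no_retrace(tf.constant([3.0, 4.0]))  # different shape, same spec: no retrace
print("Traced", trace_count[0], "time(s)")
```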
# + colab={} colab_type="code" id="jsGQ4GQAP2Ve"
# Use @tf.function decorator
@tf.function
def a_function_with_python_side_effect(x):
    print("Tracing!")  # This is an eager-only side effect
return x * x + tf.constant(2)
# This is traced the first time
print(a_function_with_python_side_effect(tf.constant(2)))
# The second time through, you won't see the side effect
print(a_function_with_python_side_effect(tf.constant(3)))
# This retraces each time the Python argument changes
# as a Python argument could be an epoch count or other
# hyperparameter
print(a_function_with_python_side_effect(2))
print(a_function_with_python_side_effect(3))
# + [markdown] colab_type="text" id="D1kbr5ocpS6R"
# # Next steps
#
# You can read a more in-depth discussion at both the `tf.function` API reference page and at the [guide](./function.ipynb).
# Source: site/en/guide/intro_to_graphs.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Sparkmagic (PySpark)
# language: ''
# name: pysparkkernel
# ---
# +
# Objective: Download, prepare, and explore the data sources that we will be integrating with FindMatches
# +
#Prerequisites:
# 1. Create Glue Dev Endpoint (G.2X), full S3 access
# 2. Connect to that dev endpoint with your SageMaker frontend.
# 3. Make sure that your Notebook's IAM role has S3 Write access if you will be using the terminal (S3FullAccess works)
# 3b. Make sure that your Notebook's IAM role has the GlueServiceRole attached as well since we will be making some Glue calls
# 4. Create a database for your files and edit the glue_database variable if different than 'reinvent-2019'
# 5. All previous notebook steps
# 6. Open up a terminal within Jupyter (New -> Terminal) to enter the CLI commands in this demo.
#Currently required: You will need to install a new/current version of the aws cli in your terminal window:
print("AWS pip upgrade command \n")
print('pip3 install awscli --upgrade --user')
# +
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
glueContext = GlueContext(SparkContext.getOrCreate())
# +
#TODO: Update with your own information, synchronize across notebooks.
my_s3_bucket = "find-matches-demo"
project_prefix = "scholarly_demo"
glue_database = "reinvent-2019"
glue_table = 'dblp_scholar_records_jsonl'
region = 'us-east-1'
glue_role = 'AWSGlueServiceRoleDefault'
glue_source_crawler = project_prefix + "_source_crawler"
transform_name = "reinvent_2019_demo_transform"
transform_id= "tfm-810e6f50ff6e74964b5990ab354398b9bbd113e7"
# -
glue_source_crawler
print ("Command to download the source records into your own s3 bucket: \n")
print ("aws s3 cp " +
"s3://ml-transforms-public-datasets-us-east-1/dblp-scholar/records/dblp_scholar_records.jsonl " +
"s3://" + my_s3_bucket + "/" + project_prefix + "/source/dblp_scholar_records.jsonl")
# +
# Create a crawler and run it against the file to load the data reference into the Glue/LF Data Catalog
# This is easy to do in the AWS Console, or you can also do this via AWS CLI as per below.
s3_targets = {
'S3Targets': [
{
'Path': "s3://" + my_s3_bucket + "/" + project_prefix + "/source/dblp_scholar_records.jsonl",
},
],
}
print("CLI command to create the crawler\n")
print(f"aws glue create-crawler --name {glue_source_crawler} --role {glue_role} " +
f'--database-name {glue_database} '
'--targets \'{"S3Targets": [{"Path": "s3://'+my_s3_bucket+'/'+project_prefix+'/source/dblp_scholar_records.jsonl"}]}\'')
# +
# Run the crawler
print("CLI command to run the crawler\n")
print(f"aws glue start-crawler --name {glue_source_crawler}")
# +
# Wait for crawl to finish
print("CLI command to check on the crawler status so we can wait until it finishes\n")
print(f"aws glue get-crawler --name {glue_source_crawler}")
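# Instead of polling the CLI by hand, the wait can be scripted. This is a sketch: `wait_for_crawler` is an illustrative helper, and the `get_status` callable is where a real boto3 call such as `client.get_crawler(Name=...)['Crawler']['State']` would go.

```python
import time

def wait_for_crawler(get_status, poll_seconds=0, max_polls=60):
    """Poll a status callable until the crawler reports READY again.

    get_status: callable returning the crawler state string, e.g.
        lambda: client.get_crawler(Name=glue_source_crawler)['Crawler']['State']
    """
    for _ in range(max_polls):
        if get_status() == 'READY':
            return True
        time.sleep(poll_seconds)
    return False

# Simulated status sequence standing in for the Glue API responses:
states = iter(['RUNNING', 'STOPPING', 'READY'])
print(wait_for_crawler(lambda: next(states)))  # True once READY is seen
```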
# +
# Take a look at the table schema for a sanity check:
import pprint
import boto3

client = boto3.client('glue', region_name=region)
response = client.get_table(
DatabaseName=glue_database,
Name='dblp_scholar_records_jsonl'
)
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(response['Table']['StorageDescriptor']['Columns'])
# +
#Looking good, so let's take a look at the actual data:
source = glueContext.create_dynamic_frame.from_catalog(database=glue_database, table_name="dblp_scholar_records_jsonl").toDF()
print (f"Source dataset length: {source.count()}")
source.show()
# +
#Look at some details specifically from Scholar
print ("Scholar dataset length: " + str(source.filter(source.source == 'Scholar').count()) );
source.filter(source.source == 'Scholar').sample(False,.01).show()
# +
#Look at some details specifically from DBLP
print ("DBLP dataset length: " + str(source.filter(source.source == 'DBLP').count()) );
source.filter(source.source == 'DBLP').sample(False,.01).show()
# -
# Source: findmatches/dblp_scholar_notebook/FindMatchesDblpScholarDemoNotebook - Step 1 Data Prep.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: icarus
# language: python
# name: icarus
# ---
# %matplotlib inline
# #%matplotlib notebook
import numpy as np
import pandas as pd
import os
import cv2
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
# # Robot Sample and return (Image Processing)
# Using Udacity's Roversim simulator
# ## Image Thresholding
os.listdir('simdata')
telemetry_data = pd.read_csv('simdata/robot_log.csv', sep = ';')
os.listdir('images')
telemetry_data.head()
filename = 'images/example_grid1.jpg'
image = mpimg.imread(filename)
plt.imshow(image)
plt.show()
# +
chan = np.copy(image)
### Red Channel (Channel 0)
chan_red = np.copy(image)
chan_red[:,:,[1,2]] = 0
### green Channel (channel 1)
chan_green = np.copy(image)
chan_green[:,:,[0,2]] = 0
### Blue Channel (Channel 2)
chan_blue = np.copy(image)
chan_blue[:,:,[1,0]] = 0
fig = plt.figure(figsize=(12,3))
plt.subplot(131)
plt.imshow(chan_red)
plt.subplot(132)
plt.imshow(chan_green)
plt.subplot(133)
plt.imshow(chan_blue)
# -
def colorThreshold(img, rbg_threshold = (100,100,100)):
    """
    Return a binary image thresholded by the RGB pixel values
    given in rbg_threshold, i.e. assign 1 if a pixel value is > threshold
    and 0 if it is <= threshold.
    args:
        img - image to be thresholded
        rbg_threshold - (r, g, b)
    """
temp = np.zeros(img.shape)
rflags_h = img[:,:,0]>rbg_threshold[0]
gflags_h = img[:,:,1]>rbg_threshold[1]
bflags_h = img[:,:,2]>rbg_threshold[2]
temp[:,:,0][rflags_h] = 1
temp[:,:,1][gflags_h] = 1
temp[:,:,2][bflags_h] = 1
return temp
test = colorThreshold(chan, rbg_threshold = (100,100,100))
plt.imshow(test)
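# An alternative worth noting (a sketch, not part of the original pipeline): many rover pipelines return a single-channel mask instead of a three-channel image, marking a pixel navigable only when all three channels clear the threshold:

```python
import numpy as np

def color_threshold_mask(img, rgb_threshold=(100, 100, 100)):
    """Single-channel binary mask: 1 where every channel exceeds its threshold."""
    above = (img[:, :, 0] > rgb_threshold[0]) \
          & (img[:, :, 1] > rgb_threshold[1]) \
          & (img[:, :, 2] > rgb_threshold[2])
    return above.astype(np.float32)

demo = np.zeros((2, 2, 3), dtype=np.uint8)
demo[0, 0] = (200, 200, 200)   # passes all three channels
demo[0, 1] = (200, 50, 200)    # fails the green channel
mask = color_threshold_mask(demo)
print(mask)  # only the top-left pixel is 1
```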
# # Perspective Transform
# As defined by the Cambridge Dictionary, to transform means "to change completely the appearance or
# character of something or someone", and perspective is defined as "the art of representing three-dimensional objects on a two-dimensional surface so as to give the right impression of their height, width, depth, and position in relation to each other". A perspective transform therefore deals with converting a 3D representation of the world into a 2D representation of that world.
# <img src="images/perspective_transform_illustration.JPG" height = "1000" width= "420">
# source: https://www.tutorialspoint.com/dip/perspective_transformation.htm
#
#
os.listdir()
# +
# #%matplotlib notebook
# #%matplotlib notebook
plt.imshow(image)
### (H, W, D)
plt.show()
# -
offset = 5
dst_size = 6
img = np.copy(image)
s =img.shape
s
# +
def perspectiveTransform(img, src, dst):
M = cv2.getPerspectiveTransform(src,dst)
warp = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))
return warp
### SOURCE POINTS: reference points from the original image (for calibration)
ref_points = np.float32([ [199.782,95.957], [118.439,95.9269], [14.3626,140.128], [301.101,140.128]])
### Where do I want them to be (destination points)
dst = np.float32([[s[1]/2-dst_size, s[0] - offset- (2*dst_size)],
[s[1]/2+dst_size, s[0] - offset- (2*dst_size)],
[s[1]/2+dst_size, s[0] - offset ],
[s[1]/2-dst_size, s[0] - offset ]]
)
warped = perspectiveTransform(img, ref_points,dst)
bin_img = colorThreshold(warped)
### Highlight the boxes
cv2.polylines(img, np.int32([ref_points]), True, (0,0,255), 3 )
cv2.polylines(warped, np.int32([dst]), True, (0,0,255), 3 )
cv2.polylines(bin_img, np.int32([dst]), True, (0,0,255), 3 )
f, (ax1,ax2, ax3) = plt.subplots(1,3, figsize=(24,6), sharey =True)
f.tight_layout()
ax1.imshow(img)
ax1.set_title('Original Image', fontsize =40)
ax2.imshow(warped, cmap='gray')
ax2.set_title('Result', fontsize =40)
ax3.imshow(bin_img)
ax3.set_title('Binary Image', fontsize = 40)
plt.subplots_adjust(left=0.0, right =1, top=0.9, bottom =0.0)
plt.show()
# -
# # Warp, Threshold, and Map to Rover-Centric Coordinates
#
# <img src="images/img.jpg" height = "1000" width= "420">
# The white in the binary image indicates navigable terrain in view of the rover camera
os.listdir('sample_images')
# +
sample2 = plt.imread('sample_images/robot_sample2.jpg')
img2 = np.copy(sample2)
warped2 = perspectiveTransform(img2, ref_points,dst)
bin_img = colorThreshold(warped2)
f, (ax1,ax2) = plt.subplots(1,2, figsize = (24,6))
ax1.imshow(bin_img)
ax2.imshow(sample2)
# -
### Return a tuple of arrays with the indexes of the non-zero pixels in the image
c1, c2, c3 = np.nonzero(bin_img)
#c1, c2, c3 = bin_img.nonzero()
plt.plot(c2,c1, '.')
plt.xlim(0,320)
plt.ylim(0,160)
plt.show()
# +
def roverCentriCoordinates(binary_image):
c1, c2, c3 = np.nonzero(binary_image)
s = binary_image.shape
y_pix = c2 - s[1]/2
x_pix = s[1]/2 - c1
return x_pix, y_pix
warped3= perspectiveTransform(img2, ref_points,dst)
bin_img = colorThreshold(warped3)
x_pix, y_pix = roverCentriCoordinates(bin_img)
plt.plot(x_pix, y_pix,'.')
plt.ylim(-160, 160)
plt.xlim(0, 160)
plt.show()
# -
# ## Map to World coordinates
# Mapping the rover-centric coordinates to the world map.
# To do this we need a rotation, which accounts for the fact that when the
# camera takes a picture the rover can be facing any arbitrary direction,
# given by its yaw angle. After that, a translation is needed to account for
# the fact that the rover may be located at any position in the world when it
# takes the picture.
#
#
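# The rotate-then-translate math described above can be sanity-checked on a single point before running it over whole images — a standalone sketch where the 90-degree case is easy to verify by hand:

```python
import numpy as np

def rotate_then_translate(x, y, yaw_deg, xpos, ypos, scale=10):
    """Rotate a rover-centric point by yaw, then translate to world coords."""
    yaw = np.deg2rad(yaw_deg)
    x_rot = x * np.cos(yaw) - y * np.sin(yaw)
    y_rot = x * np.sin(yaw) + y * np.cos(yaw)
    return xpos + x_rot / scale, ypos + y_rot / scale

# A point 10 px ahead of the rover, rover facing 90 degrees, at world (50, 50):
xw, yw = rotate_then_translate(10, 0, 90, 50, 50)
print(round(xw, 3), round(yw, 3))  # -> 50.0 51.0
```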
sample3 = plt.imread('sample_images/robot_sample2.jpg')
plt.imshow(sample3)
### Choose random numbers for the position and yaw (Robot Pose)
rover_yaw = np.random.random(1)*360
rover_xpos = np.random.random(1)*180 +20
rover_ypos = np.random.random(1)*180 +20
# +
def pixRotate(xpix,ypix,yaw):
"""
Rotate Pixels by yaw.
"""
#convert to Radians
yaw_rad = yaw * np.pi/180
xpix_rotated = xpix*np.cos(yaw_rad) - ypix*np.sin(yaw_rad)
ypix_rotated = xpix*np.sin(yaw_rad) + ypix*np.cos(yaw_rad)
return xpix_rotated,ypix_rotated
def pixTranslate(xpix_rot,ypix_rot,yaw,xpos,ypos, scale = 10):
"""
Translate pixels by (xpos, ypos)
"""
x_world = xpos + (xpix_rot/scale)
    y_world = ypos + (ypix_rot/scale)
return x_world,y_world
def pixToWorld(xpix,ypix,yaw,xpos,ypos, scale = 10, world_size =200 ):
"""
Map Rover centric pixels to world
"""
xrot , yrot = pixRotate(xpix = xpix, ypix = ypix,yaw = yaw)
xtran, ytran = pixTranslate(xpix_rot = xrot, ypix_rot = yrot,yaw = yaw,xpos = xpos,
ypos= ypos, scale = scale)
xpix_world = np.clip(np.int_(xtran), 0,world_size -1)
ypix_world= np.clip(np.int_(ytran), 0,world_size -1)
return xpix_world, ypix_world
def to_polar_coords(xpix, ypix):
"""
Convert Cartesian coordinates to polar coordinates
"""
# Calculate distance to each pixel
dist = np.sqrt(xpix**2 + ypix**2)
# Calculate angle using arctangent function
angles = np.arctan2(ypix, xpix)
return dist, angles
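# A quick standalone check of the polar conversion (same math as `to_polar_coords` above): the point (1, 1) should sit at distance √2 ≈ 1.4142 and angle π/4 ≈ 0.7854.

```python
import numpy as np

# Distance and angle of a single hand-checkable point.
x, y = 1.0, 1.0
dist = np.sqrt(x**2 + y**2)
angle = np.arctan2(y, x)
print(round(dist, 4), round(angle, 4))  # -> 1.4142 0.7854
```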
# +
# Define calibration box in source (actual) and destination (desired) coordinates
# These source and destination points are defined to warp the image
# to a grid where each 10x10 pixel square represents 1 square meter
dst_size = 5
# Set a bottom offset to account for the fact that the bottom of the image
# is not the position of the rover but a bit in front of it
bottom_offset = 6
### SOURCE POINTS: reference points from the original image
#ref_points = np.float32([ [192.114,92.2649], [120.284, 93.013], [66.5331,121.781], [255.78,113.178]])
### Where do i want then to be (Destination points)
#dst = np.float32([[s[1]/2-dst_size, s[0] - offset- (2*dst_size)],
# [s[1]/2+dst_size, s[0] - offset- (2*dst_size)],
# [s[1]/2+dst_size, s[0] - offset ],
# [s[1]/2-dst_size, s[0] - offset ]]
# )
### Perform perspective transform using Calibrated reference points and destination points
warped = perspectiveTransform(img, ref_points,dst)
bin_img = colorThreshold(warped)
# Extract navigable terrain pixels
xpix, ypix = roverCentriCoordinates(bin_img)
# Generate 200 x 200 pixel worldmap
worldmap = np.zeros((200, 200))
scale = 10
# Get navigable pixel positions in world coords
x_world, y_world = pixToWorld(xpix = xpix, ypix = ypix, xpos = rover_xpos,
ypos = rover_ypos, yaw = rover_yaw,
world_size = worldmap.shape[0], scale = scale)
# Add pixel positions to worldmap
worldmap[y_world, x_world] += 255
print('Xpos =', rover_xpos, 'Ypos =', rover_ypos, 'Yaw =', rover_yaw)
# Plot the map in rover-centric coords
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 7))
f.tight_layout()
ax1.plot(xpix, ypix, '.')
ax1.set_title('Rover Space', fontsize=40)
ax1.set_ylim(-160, 160)
ax1.set_xlim(0, 160)
ax1.tick_params(labelsize=20)
ax2.imshow(worldmap, cmap='gray')
ax2.set_title('World Space', fontsize=40)
ax2.set_ylim(0, 200)
ax2.tick_params(labelsize=20)
ax2.set_xlim(0, 200)
plt.subplots_adjust(left=0.1, right=1, top=0.9, bottom=0.1)
plt.show() # Uncomment if running on your local machine
# -
img = plt.imread("images/sample1.jpg")
plt.imshow(img)
# +
warped =perspectiveTransform(image, ref_points, dst)
colorsel = colorThreshold(warped)
xpix, ypix = roverCentriCoordinates(bin_img)
r, theta = to_polar_coords(xpix, ypix) # Convert to polar coords
avg_theta = np.mean(theta)
fig = plt.figure(figsize=(12,9))
plt.subplot(221)
plt.imshow(image)
plt.subplot(222)
plt.imshow(warped)
plt.subplot(223)
plt.imshow(colorsel, cmap='gray')
plt.subplot(224)
plt.plot(xpix, ypix, '.')
plt.ylim(-160, 160)
plt.xlim(0, 160)
arrow_length = 100
x_arrow = arrow_length * np.cos(avg_theta)
y_arrow = arrow_length * np.sin(avg_theta)
plt.arrow(0, 0, x_arrow, y_arrow, color='red', zorder=2, head_width=10, width=2)
plt.show()
# -
avg_angle_degrees = avg_theta * 180/np.pi
steering = np.clip(avg_angle_degrees, -15, 15)
# Source: .ipynb_checkpoints/RobotSampleAndReturnImageProcessing-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from warnings import filterwarnings
filterwarnings('ignore')
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import scipy as sp
from sklearn.cluster import KMeans
# -
df = pd.read_csv('USArrests.csv').copy()
# +
# Crime rates by state in the USA. We need to do clustering to view them on a per-state basis.
# -
df.head()
#<NAME>
#<NAME>dırı
# rape: sexual assault / harassment
df.index
df.index = df.iloc[:,0]
df.index
df
df = df.iloc[:,1:5]
df.head()
# +
# Since the observation units are the states, we made the state column the index. This tells the program that these are the identifiers of the observation units.
# -
df.index.name = None
df.head()
df.isnull().sum()
df.info()
df.describe().T
df.hist(figsize=(10,10))
kmeans = KMeans(n_clusters=4)
kmeans
# + jupyter={"outputs_hidden": true}
# ?kmeans
# -
kmeans.n_clusters
k_fit = kmeans.fit(df)
k_fit.cluster_centers_
k_fit.labels_
# +
# visualization
# -
kmeans2 = KMeans(n_clusters=3)
k_fit2= kmeans2.fit(df)
kumeler = k_fit2.labels_
plt.scatter(df.iloc[:,0],df.iloc[:,1], c= kumeler , s= 50, cmap ='viridis')
merkezler = k_fit2.cluster_centers_
plt.scatter(merkezler[:,0], merkezler[:,1], c = 'black' , s = 200, alpha=0.5)
from mpl_toolkits.mplot3d import Axes3D
plt.rcParams['figure.figsize'] = (16,9)
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(df.iloc[:,0],df.iloc[:,1], df.iloc[:,2])
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(df.iloc[:,0],df.iloc[:,1],df.iloc[:,2], c = kumeler)
ax.scatter(merkezler[:,0], merkezler[:,1], merkezler[:,2],marker = "*", c= '#050505', s=1000)
# +
# clusters and observation units
# -
kmeans3 = KMeans(n_clusters = 3)
k_fit3 = kmeans3.fit(df)
kumeler = k_fit3.labels_
pd.DataFrame({'Eyaletler': df.index, 'Kumeler': kumeler})[0:10]
df['kume_no'] = kumeler
df.head()
df['kume_no'] = df['kume_no'] + 1
df.head()
# ## Model Tuning: Determining the Optimal Number of Clusters
# !pip install yellowbrick
from yellowbrick.cluster import KElbowVisualizer
kmeans = KMeans()
visualizer = KElbowVisualizer(kmeans, k =(2,20))
visualizer.fit(df)
visualizer.poof()
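# The same elbow curve can be sketched without yellowbrick by plotting `KMeans.inertia_` (the within-cluster sum of squares) over a range of k. This standalone example uses synthetic data with three well-separated clusters rather than the USArrests data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
# Three clusters centered at 0, 5, and 10 in both dimensions.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2)) for c in (0, 5, 10)])

inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(2, 8)}
for k, ssd in inertias.items():
    print(k, round(ssd, 1))
# Inertia drops sharply up to the true k (3 here), then flattens.
```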
# ### Hierarchical Clustering
# Source: UnsupervisedLearning - Clustering.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
tf.set_random_seed(1)
np.random.seed(1)
# fake data
x = np.linspace(-1, 1, 100)[:, np.newaxis] # shape (100, 1)
noise = np.random.normal(0, 0.1, size=x.shape)
y = np.power(x, 2) + noise # shape (100, 1) + some noise
def save():
print('This is save')
# build neural network
tf_x = tf.placeholder(tf.float32, x.shape) # input x
tf_y = tf.placeholder(tf.float32, y.shape) # input y
l = tf.layers.dense(tf_x, 10, tf.nn.relu) # hidden layer
o = tf.layers.dense(l, 1) # output layer
loss = tf.losses.mean_squared_error(tf_y, o) # compute cost
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(loss)
sess = tf.Session()
sess.run(tf.global_variables_initializer()) # initialize var in graph
saver = tf.train.Saver() # define a saver for saving and restoring
for step in range(100): # train
sess.run(train_op, {tf_x: x, tf_y: y})
saver.save(sess, './params', write_meta_graph=False) # meta_graph is not recommended
# plotting
pred, l = sess.run([o, loss], {tf_x: x, tf_y: y})
plt.figure(1, figsize=(10, 5))
plt.subplot(121)
plt.scatter(x, y)
plt.plot(x, pred, 'r-', lw=5)
plt.text(-1, 1.2, 'Save Loss=%.4f' % l, fontdict={'size': 15, 'color': 'red'})
def reload():
print('This is reload')
# build entire net again and restore
tf_x = tf.placeholder(tf.float32, x.shape) # input x
tf_y = tf.placeholder(tf.float32, y.shape) # input y
l_ = tf.layers.dense(tf_x, 10, tf.nn.relu) # hidden layer
o_ = tf.layers.dense(l_, 1) # output layer
loss_ = tf.losses.mean_squared_error(tf_y, o_) # compute cost
sess = tf.Session()
# don't need to initialize variables, just restoring trained variables
saver = tf.train.Saver() # define a saver for saving and restoring
saver.restore(sess, './params')
# plotting
pred, l = sess.run([o_, loss_], {tf_x: x, tf_y: y})
plt.subplot(122)
plt.scatter(x, y)
plt.plot(x, pred, 'r-', lw=5)
plt.text(-1, 1.2, 'Reload Loss=%.4f' % l, fontdict={'size': 15, 'color': 'red'})
plt.show()
save()
# destroy previous net
tf.reset_default_graph()
reload()
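The save → reset graph → reload round-trip above is TF1-specific, but the underlying pattern — persist the parameter arrays, tear the model down, restore into a fresh structure — can be illustrated framework-independently with NumPy's `.npz` format (file path below is a throwaway temp file):

```python
import os
import tempfile
import numpy as np

# "train": pretend these arrays are the learned weights
W_orig = np.random.RandomState(1).randn(10, 1)
b_orig = np.zeros(1)

path = os.path.join(tempfile.mkdtemp(), "params.npz")
np.savez(path, W=W_orig, b=b_orig)   # analogous to saver.save(sess, './params')

# tear down, then restore — analogous to tf.reset_default_graph() + saver.restore()
restored = np.load(path)
assert np.allclose(restored["W"], W_orig)
```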
| TensorflowTUT2/303_save_reload.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Loading data with the read_csv function
import pandas as pd
import os
mainpath = "/Users/miltonjhon/1estudios-mac/001-ML-001/002-datasets"
filename = "titanic/titanic3.csv"
fullpath = os.path.join(mainpath, filename)
data = pd.read_csv(fullpath)
data.head(2)
# ### Example parameters of the read_csv function
# pd.read_csv(filepath_or_buffer="/Users/JuanGabriel/Developer/AnacondaProjects/python-ml-course/datasets/titanic/titanic3.csv",
#          sep = ",",
#          dtype={"ingresos":np.float64, "edad":np.int32},
#          header=0, names={"ingresos", "edad"},
#          skiprows=12, index_col=None,
#          skip_blank_lines=False, na_filter=False
#          )
data2 = pd.read_csv(mainpath + "/" + "customer-churn-model/Customer Churn Model.txt")
data2.head(2)
# # Return and display the column names
data2.columns.values
data_cols = pd.read_csv(mainpath + "/" + "customer-churn-model/Customer Churn Columns.csv")
#data_cols.head(3)
data_col_list = data_cols["Column_Names"].tolist()
#data_col_list
data2 = pd.read_csv(mainpath + "/" + "customer-churn-model/Customer Churn Model.txt",
header = None, names = data_col_list)
data2.columns.values
# # Loading data with the open function
data3 = open(mainpath + "/" + "customer-churn-model/Customer Churn Model.txt",'r')
cols = data3.readline().strip().split(",")
n_cols = len(cols)
# n_cols
# ## Counter and dictionary
counter = 0
main_dict = {}
for col in cols:
main_dict[col] = []
main_dict
# +
for line in data3:
values = line.strip().split(",")
for i in range(len(cols)):
main_dict[cols[i]].append(values[i])
counter += 1
print("El data set tiene %d filas y %d columnas"%(counter, n_cols))
# -
df3 = pd.DataFrame(main_dict)
df3.head(2)
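The manual readline/split bookkeeping above can be delegated to the stdlib `csv` module, which also copes with quoted fields that contain commas. A sketch using an inline sample in place of the Customer Churn file:

```python
import csv
import io

sample = io.StringIO('a,b\n1,"x,y"\n2,z\n')   # inline stand-in for the data file
reader = csv.reader(sample)
cols = next(reader)                            # header row
main_dict = {col: [] for col in cols}
for row in reader:
    for col, value in zip(cols, row):
        main_dict[col].append(value)

# the quoted comma survives as a single field: {'a': ['1', '2'], 'b': ['x,y', 'z']}
```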
# ## Reading and writing files
infile = mainpath + "/" + "customer-churn-model/Customer Churn Model.txt"
outfile = mainpath + "/" + "customer-churn-model/Tab Customer Churn Model.txt"
with open(infile, "r") as infile1:
with open(outfile, "w") as outfile1:
for line in infile1:
fields = line.strip().split(",")
outfile1.write("\t".join(fields))
outfile1.write("\n")
df4 = pd.read_csv(outfile, sep = "\t")
df4.head(2)
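Note that the naive `split(",")` rewrite above would break on quoted fields containing commas; pairing the csv module's reader and writer handles that case. A sketch on in-memory buffers rather than the files above:

```python
import csv
import io

src = io.StringIO('name,note\nAlice,"hello, world"\n')
dst = io.StringIO()
# re-emit every parsed row with tab as the delimiter
csv.writer(dst, delimiter="\t").writerows(csv.reader(src))
```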
# # Reading data from a URL
medals_url = "http://winterolympicsmedals.com/medals.csv"
medals_data = pd.read_csv(medals_url)
medals_data.head(2)
# #### Data download exercise with urllib3
# Let's work through an example using the urllib3 library to read data from an external URL, process it and convert it into a pandas data frame, and then save it to a local CSV.
def downloadFromURL(url, filename, sep = ",", delim = "\n", encoding="utf-8",
mainpath = "/Users/miltonjhon/1estudios-mac/python-ml-course/datasets/"):
    # First we import the library and open a connection to the site hosting the data
import urllib3
http = urllib3.PoolManager()
r = http.request('GET', url)
print("El estado de la respuesta es %d" %(r.status))
    response = r.data  # FIXED: removed a double decode that raised an error
    # The response object contains a binary string, so decode it into a str using UTF-8
str_data = response.decode(encoding)
    # Split the string into an array of rows, using newlines as separators
lines = str_data.split(delim)
    # The first line contains the header, so extract it
col_names = lines[0].split(sep)
n_cols = len(col_names)
    # Create an empty dictionary to hold the data processed from the external URL
counter = 0
main_dict = {}
for col in col_names:
main_dict[col] = []
    # Process the data row by row, filling the dictionary as we did before
for line in lines:
        # Skip the first line, which contains the header and is already processed
if(counter > 0):
            # Split each string on the comma separator
values = line.strip().split(sep)
            # Append each value to its column in the dictionary
for i in range(len(col_names)):
main_dict[col_names[i]].append(values[i])
counter += 1
print("El data set tiene %d filas y %d columnas"%(counter, n_cols))
    # Convert the processed dictionary to a DataFrame and check the data looks right
df = pd.DataFrame(main_dict)
print(df.head())
    # Choose where to save it (the athletes folder makes the most sense given the analysis context)
fullpath = os.path.join(mainpath, filename)
    # Save it as CSV, JSON or Excel as needed
df.to_csv(fullpath+".csv")
df.to_json(fullpath+".json")
df.to_excel(fullpath+".xls")
print("Los ficheros se han guardado correctamente en: "+fullpath)
return df
medals_df = downloadFromURL(medals_url, "athletes/downloaded_medals")
medals_df.head(2)
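Most of downloadFromURL's manual decode → split → append work is exactly what `csv.DictReader` does; the sketch below parses an in-memory response body (no network), leaving only the HTTP request to urllib3. The byte string is an illustrative stand-in for `r.data`:

```python
import csv
import io

body = b"Year,City,Sport\n1924,Chamonix,Skating\n1924,Chamonix,Ice Hockey\n"
str_data = body.decode("utf-8")                 # same as response.decode(encoding)
rows = list(csv.DictReader(io.StringIO(str_data)))
# each row is a dict keyed by the header, e.g. rows[0]["City"] == "Chamonix"
```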
# ## XLS and XLSX files
mainpath = "/Users/miltonjhon/1estudios-mac/python-ml-course/datasets"
filename = "titanic/titanic3.xls"
titanic2 = pd.read_excel(mainpath + "/" + filename, "titanic3")
titanic3 = pd.read_excel(mainpath + "/" + filename, "titanic3")
titanic3.to_csv(mainpath + "/titanic/titanic_custom.csv")
titanic3.to_excel(mainpath + "/titanic/titanic_custom.xls")
titanic3.to_json(mainpath + "/titanic/titanic_custom.json")
# # Converting a CSV file to JSON
import pandas as pd
import os
mainpath = "/Users/miltonjhon/1estudios-mac/python-ml-course/datasets"
filename = "titanic/GASTO2019.csv"
fullpath = os.path.join(mainpath, filename)
GASTO2019 = pd.read_csv(fullpath)
GASTO2019.to_json(mainpath + "/titanic/gasto2019.json")
| notebooks/T1 - 1 - Data Cleaning - Carga de datos.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.02417, "end_time": "2022-03-15T16:20:37.889084", "exception": false, "start_time": "2022-03-15T16:20:37.864914", "status": "completed"} tags=[]
# **This notebook is an exercise in the [Introduction to Machine Learning](https://www.kaggle.com/learn/intro-to-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/dansbecker/your-first-machine-learning-model).**
#
# ---
#
#
# ## Recap
# So far, you have loaded your data and reviewed it with the following code. Run this cell to set up your coding environment where the previous step left off.
# + papermill={"duration": 1.382643, "end_time": "2022-03-15T16:20:39.297393", "exception": false, "start_time": "2022-03-15T16:20:37.914750", "status": "completed"} tags=[]
# Code you have previously used to load data
import pandas as pd
# Path of the file to read
iowa_file_path = '../input/home-data-for-ml-course/train.csv'
home_data = pd.read_csv(iowa_file_path)
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex3 import *
print("Setup Complete")
# + [markdown] papermill={"duration": 0.023229, "end_time": "2022-03-15T16:20:39.344347", "exception": false, "start_time": "2022-03-15T16:20:39.321118", "status": "completed"} tags=[]
# # Exercises
#
# ## Step 1: Specify Prediction Target
# Select the target variable, which corresponds to the sales price. Save this to a new variable called `y`. You'll need to print a list of the columns to find the name of the column you need.
#
# + papermill={"duration": 0.143476, "end_time": "2022-03-15T16:20:39.510863", "exception": false, "start_time": "2022-03-15T16:20:39.367387", "status": "completed"} tags=[]
# print the list of columns in the dataset to find the name of the prediction target
home_data.columns
home_data.head()
home_data.describe()
# + papermill={"duration": 0.03309, "end_time": "2022-03-15T16:20:39.568177", "exception": false, "start_time": "2022-03-15T16:20:39.535087", "status": "completed"} tags=[]
home_data.columns
# + papermill={"duration": 0.039554, "end_time": "2022-03-15T16:20:39.632246", "exception": false, "start_time": "2022-03-15T16:20:39.592692", "status": "completed"} tags=[]
#y = home_data.SalePrice
y = home_data['SalePrice']
print(y)
# Check your answer
step_1.check()
sale_price_average = round(home_data["SalePrice"].mean())
print("The averaged sale price is " + str(sale_price_average) + " USD")
# + papermill={"duration": 0.036348, "end_time": "2022-03-15T16:20:39.696037", "exception": false, "start_time": "2022-03-15T16:20:39.659689", "status": "completed"} tags=[]
y
# + papermill={"duration": 0.039553, "end_time": "2022-03-15T16:20:39.762372", "exception": false, "start_time": "2022-03-15T16:20:39.722819", "status": "completed"} tags=[]
# The lines below will show you a hint or the solution.
step_1.hint()
step_1.solution()
# + [markdown] papermill={"duration": 0.030281, "end_time": "2022-03-15T16:20:39.822281", "exception": false, "start_time": "2022-03-15T16:20:39.792000", "status": "completed"} tags=[]
# ## Step 2: Create X
# Now you will create a DataFrame called `X` holding the predictive features.
#
# Since you want only some columns from the original data, you'll first create a list with the names of the columns you want in `X`.
#
# You'll use just the following columns in the list (you can copy and paste the whole list to save some typing, though you'll still need to add quotes):
# * LotArea
# * YearBuilt
# * 1stFlrSF
# * 2ndFlrSF
# * FullBath
# * BedroomAbvGr
# * TotRmsAbvGrd
#
# After you've created that list of features, use it to create the DataFrame that you'll use to fit the model.
# + papermill={"duration": 0.062769, "end_time": "2022-03-15T16:20:39.916115", "exception": false, "start_time": "2022-03-15T16:20:39.853346", "status": "completed"} tags=[]
# Create the list of features below
# This list includes 7 features
feature_names = ["LotArea",
"YearBuilt",
"1stFlrSF",
"2ndFlrSF",
"FullBath",
"BedroomAbvGr",
"TotRmsAbvGrd"]
# Select data corresponding to features in feature_names
X = home_data[feature_names]
print(X.describe())
# Check your answer
step_2.check()
# + papermill={"duration": 0.04769, "end_time": "2022-03-15T16:20:39.995352", "exception": false, "start_time": "2022-03-15T16:20:39.947662", "status": "completed"} tags=[]
X
# + papermill={"duration": 0.045629, "end_time": "2022-03-15T16:20:40.072553", "exception": false, "start_time": "2022-03-15T16:20:40.026924", "status": "completed"} tags=[]
step_2.hint()
step_2.solution()
# + [markdown] papermill={"duration": 0.033802, "end_time": "2022-03-15T16:20:40.140048", "exception": false, "start_time": "2022-03-15T16:20:40.106246", "status": "completed"} tags=[]
# ## Review Data
# Before building a model, take a quick look at **X** to verify it looks sensible.
# + papermill={"duration": 0.065965, "end_time": "2022-03-15T16:20:40.239764", "exception": false, "start_time": "2022-03-15T16:20:40.173799", "status": "completed"} tags=[]
# Review data
# print description or statistics from X
print(X.describe())
# print the top few lines
print(X.head())
# + [markdown] papermill={"duration": 0.034108, "end_time": "2022-03-15T16:20:40.310460", "exception": false, "start_time": "2022-03-15T16:20:40.276352", "status": "completed"} tags=[]
# ## Step 3: Specify and Fit Model
# Create a `DecisionTreeRegressor` and save it as `iowa_model`. Ensure you've done the relevant import from sklearn to run this command.
#
# Then fit the model you just created using the data in `X` and `y` that you saved above.
# + papermill={"duration": 0.053823, "end_time": "2022-03-15T16:20:40.398841", "exception": false, "start_time": "2022-03-15T16:20:40.345018", "status": "completed"} tags=[]
from sklearn.tree import DecisionTreeRegressor
#specify the model.
#For model reproducibility, set a numeric value for random_state when specifying the model
iowa_model = DecisionTreeRegressor(random_state=1)
# Fit the model: This is the heart of the model
# X is the 7 criteria and y is the SalePrice
# Fit X to y
iowa_model.fit(X,y)
# Check your answer
step_3.check()
# + papermill={"duration": 0.047028, "end_time": "2022-03-15T16:20:40.481446", "exception": false, "start_time": "2022-03-15T16:20:40.434418", "status": "completed"} tags=[]
step_3.hint()
step_3.solution()
# + [markdown] papermill={"duration": 0.037555, "end_time": "2022-03-15T16:20:40.557207", "exception": false, "start_time": "2022-03-15T16:20:40.519652", "status": "completed"} tags=[]
# ## Step 4: Make Predictions
# Make predictions with the model's `predict` command using `X` as the data. Save the results to a variable called `predictions`.
# + papermill={"duration": 0.054496, "end_time": "2022-03-15T16:20:40.649614", "exception": false, "start_time": "2022-03-15T16:20:40.595118", "status": "completed"} tags=[]
predictions = iowa_model.predict(X)
print(predictions)
# Check your answer
step_4.check()
# + papermill={"duration": 0.055443, "end_time": "2022-03-15T16:20:40.745029", "exception": false, "start_time": "2022-03-15T16:20:40.689586", "status": "completed"} tags=[]
step_4.hint()
step_4.solution()
# + [markdown] papermill={"duration": 0.041719, "end_time": "2022-03-15T16:20:40.830011", "exception": false, "start_time": "2022-03-15T16:20:40.788292", "status": "completed"} tags=[]
# ## Think About Your Results
#
# Use the `head` method to compare the top few predictions to the actual home values (in `y`) for those same homes. Anything surprising?
#
# + papermill={"duration": 0.051207, "end_time": "2022-03-15T16:20:40.923329", "exception": false, "start_time": "2022-03-15T16:20:40.872122", "status": "completed"} tags=[]
# You can write code in this cell
print(y)
#print(y)
# + papermill={"duration": 0.057245, "end_time": "2022-03-15T16:20:41.025237", "exception": false, "start_time": "2022-03-15T16:20:40.967992", "status": "completed"} tags=[]
# Difference between iowa_model.predict(X) and y
(iowa_model.predict(X) - y).sum()
# + papermill={"duration": 0.057554, "end_time": "2022-03-15T16:20:41.127144", "exception": false, "start_time": "2022-03-15T16:20:41.069590", "status": "completed"} tags=[]
from sklearn.metrics import mean_absolute_error
predicted = iowa_model.predict(X)
actual = y
case1 = mean_absolute_error(predicted,actual)
case2 = mean_absolute_error(actual,predicted)
print(case1)
print(case2)
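As the two prints show, mean absolute error is symmetric in its arguments, since |p - a| = |a - p| for every pair. A pure-Python restatement of the same check:

```python
def mae(u, v):
    # mean of the absolute element-wise differences
    return sum(abs(a - b) for a, b in zip(u, v)) / len(u)

p = [3.0, 5.0, 2.5]   # predictions (illustrative values)
a = [2.5, 5.0, 4.0]   # actuals
assert mae(p, a) == mae(a, p)  # argument order doesn't matter
```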
# + [markdown] papermill={"duration": 0.043011, "end_time": "2022-03-15T16:20:41.214441", "exception": false, "start_time": "2022-03-15T16:20:41.171430", "status": "completed"} tags=[]
# It's natural to ask how accurate the model's predictions will be and how you can improve that. That will be your next step.
#
# # Keep Going
#
# You are ready for **[Model Validation](https://www.kaggle.com/dansbecker/model-validation).**
#
# + [markdown] papermill={"duration": 0.044373, "end_time": "2022-03-15T16:20:41.302341", "exception": false, "start_time": "2022-03-15T16:20:41.257968", "status": "completed"} tags=[]
# ---
#
#
#
#
# *Have questions or comments? Visit the [course discussion forum](https://www.kaggle.com/learn/intro-to-machine-learning/discussion) to chat with other learners.*
| ml1-your-first-machine-learning-model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# This notebook analyzes the dataset from https://www.lendingclub.com/
# ------------------------------------------------------------------------------------------------------
#
#
#
#
# Objectives:
# The goal is to analyse the following:
#
# - The target variable
# - Variable types (categorical and numerical)
# - Missing data
# - Numerical variables: discrete, continuous, distributions, transformations
# - Categorical variables: cardinality, special labels
# ## Import the necessary libraries
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from imblearn.combine import SMOTEENN, SMOTETomek
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import EditedNearestNeighbours, TomekLinks
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import KBinsDiscretizer
import category_encoders as ce
from category_encoders import TargetEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from imblearn.under_sampling import NeighbourhoodCleaningRule
from imblearn.over_sampling import ADASYN
from imblearn.pipeline import make_pipeline
# from sklearn.pipeline import make_pipeline
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import uniform
from scipy import stats
import scipy
from sklearn.metrics import classification_report
from sklearn.metrics import plot_confusion_matrix
from sklearn.metrics import roc_auc_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier
import torch
import torchvision
import torchvision.transforms as transforms
from imblearn.metrics import geometric_mean_score
from sklearn.metrics import confusion_matrix, make_scorer
from sklearn.model_selection import StratifiedKFold
# -
# ## Read the data
data = pd.read_csv("https://s3.amazonaws.com/datarobot_public_datasets/DR_Demo_Lending_Club.csv")
X_train, X_test, y_train, y_test = train_test_split(data.drop(columns="is_bad"), data["is_bad"], test_size=0.20, random_state=42)
data.sample(5)
data.shape
original_features = list(data.columns)
print(original_features)
# ## Explore the target feature
target = data['is_bad']
plt.figure(figsize=(8,5));
sns.countplot(y=target);
# The dataset is highly imbalanced. We will apply a combination of oversampling and undersampling.
# ## Split Variable types (categorical and numerical)
cat_vars = [var for var in data.columns if(data[var].dtypes == "O")]
print(cat_vars)
print()
print(f"length of categorical variable is {len(cat_vars)}")
num_vars = [var for var in data.columns if(var not in cat_vars)]
print(num_vars)
print()
print(f"lenght of numerical variables is {len(num_vars)}")
# ## Explore missing data
missing_data = pd.concat([data.isnull().sum().sort_values(ascending=False).rename("missing_counts"), data.isnull().mean().sort_values(ascending=False).rename("missing_percent")], axis=1)
missing_data
# We will delete any feature with more than 80% missing values
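That 80% rule can be applied with a boolean mask over `isnull().mean()`; a sketch on a toy frame (the column names and threshold below are illustrative):

```python
import numpy as np
import pandas as pd

df_toy = pd.DataFrame({
    "mostly_nan": [np.nan] * 5,      # 100% missing, should be dropped
    "ok": [1, 2, 3, 4, 5],
})
missing_frac = df_toy.isnull().mean()
keep = missing_frac[missing_frac <= 0.8].index   # keep columns at or below the threshold
df_clean = df_toy[keep]
```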
class TemporalFeaturesExtraction(BaseEstimator, TransformerMixin):
def __init__(self, variables: str):
        '''
        Extract the year from a datetime variable
        '''
self.variables = variables
def fit(self, X, y=None):
return self
def transform(self, X):
X = X.copy()
X[self.variables] = pd.DatetimeIndex(X[self.variables]).year
return X
class ExtractZipCode(BaseEstimator, TransformerMixin):
# def __init__(self):
# self.variable = variable
def fit(self, X, y=None):
return self
def transform(self, X):
X = X.copy()
X.zip_code = X.zip_code.str[:3]
return X
# +
class MissingValuesImputerWarpper(SimpleImputer):
# def __init__(self):
def fit(self, X, y=None):
return self
def transform(self, X):
self.columns = X.columns
imputer = SimpleImputer(missing_values = np.nan, strategy ='most_frequent')
imputer = imputer.fit(X)
X = imputer.transform(X)
X = pd.DataFrame(X, columns=self.columns)
return X
# -
class ScalerWrapper(MinMaxScaler):
def fit(self, X, y=None):
self.columns = X.columns.to_list()
return super().fit(X, y)
def transform(self, X):
X = X.copy()
X = pd.DataFrame(super().transform(X), columns=self.columns)
return X
class OverUnderSAMPLE(SMOTEENN, SMOTETomek, SMOTE):
def __init__(self):
self.y = None
def fit(self, X, y=None):
self.y = y
return self
def transform(self, X):
X = X.copy()
sm = SMOTE(sampling_strategy='auto', random_state=42, k_neighbors=5, n_jobs=4)
X_sm, y_sm = sm.fit_resample(X, self.y)
tl = TomekLinks(sampling_strategy='all', n_jobs=4)
smtomek = SMOTETomek(sampling_strategy='auto', random_state=42, smote=sm, tomek=tl, n_jobs=4)
X, self.y = smtomek.fit_resample(X, self.y)
return X, self.y #pd.concat([X, self.y], axis=1, names=list(X.columns + "is_bad"))
feature_eng_pipeline = make_pipeline(
# MissingValuesImputerWarpper(),
TemporalFeaturesExtraction(variables="earliest_cr_line"),
ExtractZipCode(),
TargetEncoder(True, handle_missing='missing', handle_unknown='missing'),
ScalerWrapper(),
MissingValuesImputerWarpper(),
)
# +
# adasyn
adasyn = ADASYN(
sampling_strategy='auto', # samples only the minority class
random_state=0, # for reproducibility
n_neighbors=5,
n_jobs=4,
)
###################
## IMPORTANT
###################
# The sampling strategy needs to be set to all, or with
# a specific dictionary, because after ADASYN, our
# previous minority class is no longer minority!!
ncr = NeighbourhoodCleaningRule(
sampling_strategy='all',# undersamples all classes
n_neighbors=3,
kind_sel='mode',
    threshold_cleaning=0.1, # the threshold to evaluate a class for cleaning (used only in the cleaning step)
)
# +
sm = SMOTE(sampling_strategy='auto', random_state=42, k_neighbors=5, n_jobs=4)
tl = TomekLinks(sampling_strategy='all', n_jobs=4)
smtomek = SMOTETomek(sampling_strategy='auto', random_state=42, smote=sm, tomek=tl, n_jobs=4)
# +
###########NN######################
# +
def gmean(y_true, y_pred):
result = geometric_mean_score(y_true, y_pred)
return result
# +
gmean_score = make_scorer(
gmean,
greater_is_better=False, # smaller is better
needs_proba=False,
)
# -
model_list = [SVC(), RandomForestClassifier(), GradientBoostingClassifier(), KNeighborsClassifier(), LogisticRegression()]
import matplotlib
for model in model_list:
model_name = f"{model}".lower().split("()")[0]
model_name = model
print(f"{model}".lower().split("()")[0])
model_pipe = make_pipeline(
TemporalFeaturesExtraction(variables="earliest_cr_line"),
ExtractZipCode(),
TargetEncoder(True, handle_missing='missing', handle_unknown='missing'),
ScalerWrapper(),
MissingValuesImputerWarpper(),
adasyn,
ncr,
model_name
# sm,
# tl,
# smtomek,
# RandomForestClassifier(
# n_estimators=100, random_state=39, max_depth=3, n_jobs=4
# ),
)
clf = model_pipe.fit(X_train, y_train)
X_test_preds = clf.predict(X_test)
print()
print('Test roc_auc: ', roc_auc_score(y_test, X_test_preds))
print()
print(f"The geometric mean is {geometric_mean_score(y_test, X_test_preds):.3f}")
print()
print("###########################")
clf_report_ = pd.DataFrame(classification_report(y_test, X_test_preds, output_dict=True))
print(clf_report_)
print("@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@")
print()
matplotlib.rc('figure', figsize=(20, 10));
plot_confusion_matrix(clf, X_test, y_test);
plt.show();
# +
param_grid = {'svc__C': scipy.stats.expon(scale=120),
'svc__gamma': scipy.stats.expon(scale=.1),
'svc__kernel': ['rbf','sigmoid', 'precomputed'],
'svc__class_weight':['balanced', None]
}
# param_grid = dict(
# svc__C = stats.randint(10, 15),
# # svc__gamma = stats.uniform(0, 1),
# svc__kernel=('rbf'),
# )
# param_grid = [
# {'svc__C': [1, 10, 100, 1000], 'svc__kernel': ['linear']},
# {'svc__C': [1, 10, 100, 1000], 'svc__gamma': [0.001, 0.0001], 'svc__kernel': ['rbf']},
# ]
scores = ["roc_auc", "f1", "balanced_accuracy", "recall", "precision"]
# -
svc = SVC()
# +
model = make_pipeline(
TemporalFeaturesExtraction(variables="earliest_cr_line"),
ExtractZipCode(),
TargetEncoder(True, handle_missing='missing', handle_unknown='missing'),
ScalerWrapper(),
MissingValuesImputerWarpper(),
adasyn,
ncr,
svc
# sm,
# tl,
# smtomek,
# RandomForestClassifier(
# n_estimators=100, random_state=39, max_depth=3, n_jobs=4
# ),
)
# +
kfolds = StratifiedKFold(5)
# set up the search
search = RandomizedSearchCV(model,
param_grid,
scoring=gmean_score,
cv=kfolds.split(X_train, y_train),
n_iter = 100,
random_state=10,
n_jobs=4,
refit=True)
# main_pipe = make_pipeline(model, search)
# find best hyperparameters
search.fit(X_train, y_train)
# -
# +
X_test_preds = search.predict(X_test)
print()
print('Test roc_auc: ', roc_auc_score(y_test, X_test_preds))
print()
print()
print(f"The geometric mean is {geometric_mean_score(y_test, X_test_preds):.3f}")
print()
print("###########################")
clf_report = pd.DataFrame(classification_report(y_test, X_test_preds, output_dict=True))
print(clf_report)
# +
plot_confusion_matrix(search.best_estimator_, X_test, y_test);
plt.show();
# -
# +
from sklearn.pipeline import Pipeline
from sklearn.ensemble import StackingClassifier
from sklearn.ensemble import VotingClassifier
p1 = Pipeline([['clf1', SVC()]])
p2 = Pipeline([['clf2', LogisticRegression()]])
p3 = Pipeline([['clf3', GradientBoostingClassifier()]])
p4 = Pipeline([['clf4', RandomForestClassifier()]])
p5 = Pipeline([['clf5', StackingClassifier(estimators=[
("p1",p1),
("p2",p2),
("p3",p3),
("p4",p4),
])]])
# -
# +
model_pipe = make_pipeline(
TemporalFeaturesExtraction(variables="earliest_cr_line"),
ExtractZipCode(),
TargetEncoder(True, handle_missing='missing', handle_unknown='missing'),
ScalerWrapper(),
MissingValuesImputerWarpper(),
adasyn,
ncr,
p5
# sm,
# tl,
# smtomek,
# RandomForestClassifier(
# n_estimators=100, random_state=39, max_depth=3, n_jobs=4
# ),
)
clf = model_pipe.fit(X_train, y_train)
X_test_preds = clf.predict(X_test)
print()
print('Test roc_auc: ', roc_auc_score(y_test, X_test_preds))
print()
print(f"The geometric mean is {geometric_mean_score(y_test, X_test_preds):.3f}")
print()
print("###########################")
clf_report_ = pd.DataFrame(classification_report(y_test, X_test_preds, output_dict=True))
print(clf_report_)
print("@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@")
print()
matplotlib.rc('figure', figsize=(20, 10));
plot_confusion_matrix(clf, X_test, y_test);
# -
| notebooks/CustomTransformerloan_data_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
from dataclasses import dataclass, field
# +
@dataclass
class Person:
name: str
city: str
age: int
@dataclass
class Student(Person):
grade: int
subject: list
# -
s = Student('Nahid', 'Jamalpur', 23, 3.4, ['Database', 'Networking'])
s
# +
@dataclass
class A:
x: int = 20
y: int = 30
@dataclass
class B(A):
z: int = 50
x: int = 100
# -
b = B()
b
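Two dataclass rules are at play above: fields declared in a subclass are appended after the parent's (so Student's positional order is name, city, age, grade, subject), and a subclass may re-declare a parent field to override its default without moving it, as B does with x. A quick check with `dataclasses.fields`:

```python
from dataclasses import dataclass, fields

@dataclass
class A:
    x: int = 20
    y: int = 30

@dataclass
class B(A):
    z: int = 50
    x: int = 100   # re-declared: keeps its position, overrides the default

# the parent's field order is preserved; only the default of x changes
assert [f.name for f in fields(B)] == ["x", "y", "z"]
assert B() == B(x=100, y=30, z=50)
```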
| python3/learn-python/dataclass/dataclass-inheritance.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import dateutil
df_phone = pd.read_csv("code/ch5/data/phone_data.csv")
df_phone.tail()
# -
df_phone['date'] = df_phone['date'].apply(dateutil.parser.parse, dayfirst=True)
df_phone.tail()
# total usage (duration) per month
df_phone.groupby("month")['duration'].sum()
# total usage per network, for item == 'call'
df_phone[df_phone['item'] == 'call'].groupby("network")['duration'].sum()
# count per month/item
df_phone.groupby(['month','item'])['date'].count()
# counting by 'date' or by 'network' gives the same result.
# unstack
df_phone.groupby(['month','item'])['date'].count().unstack()
df_phone.groupby('month', as_index=False)['duration'].sum()
# as_index=False keeps the group keys as ordinary columns instead of using them as the index
# - the default, as_index=True, puts the group keys in the index
# ##### aggregation
df_phone.groupby('month', as_index=False).agg({"duration" : sum})
# same result as above... the agg spec takes key : value form
df_phone.groupby(['month','item']).agg({'duration':'sum',
'network':'count',
'date':'first'})
# ##### Applying multiple aggregations to a single column
df_phone.groupby(['month','item']).agg({'duration':['sum','max','min'],
'network':'count',
'date':['first','nunique']})
# ##### Flattening the two-level column names above into a single level
grouped = df_phone.groupby(['month','item']).agg({'duration':['sum','max','min'],
'network':'count',
'date':['first','nunique']})
grouped.columns = grouped.columns.droplevel(level=0)
grouped
grouped.rename(columns={'sum' : 'sum_duration',
'max' : 'max_duration',
'min' : 'min_duration',
'count' : 'count_network',
'first' : 'first_date',
'nunique' : 'num_of_unique',})
grouped.add_prefix('prefix_')
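Note that `droplevel(level=0)` discards the variable name, so 'sum' alone no longer says what was summed (hence the rename step). An alternative that keeps both levels is joining them into names like duration_sum; a sketch on a toy frame:

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b"], "v": [1, 2, 3]})
g = df.groupby("g").agg({"v": ["sum", "max"]})
# flatten ('v', 'sum') -> 'v_sum' instead of dropping a level
g.columns = ["_".join(col) for col in g.columns]
```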
| inflearn_machine_learning/pandas/pandas_CaseStudy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="2tdhxTNah3s2"
# # D21 - Mentoring Session in Mathematics and Probability
# ## Lesson 02 - Fundamental concepts
#
# **Professor: <NAME>**
# + id="RZpmIsu_icpo"
#@title Run this cell before starting to solve the exercises
#{display-mode: "form"}
import sys
def self_reference(f):
f.__defaults__ = f.__defaults__[:-1] + (f,)
return f
def validate(func, test, input, output):
res = True
if input == None:
if not equals(test(func()), output):
res = False
print(f'Resultado diferente do esperado.\n')
else:
output = output if output != None else [True for e in input]
for i, o in zip(input, output):
j = func(*i)
if not equals(test(i, j), o):
res = False
print(f'Resultado diferente do esperado para a entrada {i}.\n')
if hasattr(test, "__t") and test.__t == "s":
i = test.__I
if len(i) != len(set(i)) or (not set(i) == test.__C):
print("Erro na imagem da funçao")
return
if res:
print("Parabéns")
@self_reference
def teste(x, y, self=None):
if y not in self.__C:
return False
if y in self.__I:
return False
self.__I.append(y)
return True
teste.__C = {y for y in range(1,11)}
teste.__t="i"
def equals(a, b):
    # note: type(a) is a class object, so it never equals a string; compare the
    # qualified class name instead (this also avoids importing pandas here)
    if f"{type(a).__module__}.{type(a).__qualname__}" == 'pandas.core.frame.DataFrame':
        return a.eq(b)
    return a == b
# + [markdown] id="KOfrX8uAn_sO"
# ### Relations: $R \subseteq {A \times B}$
# + [markdown] id="b08D7gIUoGKd"
# **Exercise 01**:
#
# Write a function that returns the Cartesian product of two sets: $P = {A \times B}$
# - Note: the relation is represented in its extensional form, as a set of tuples: $P=\{(a,b) : a \in A \land b \in B\}$
# - Example: $\{1,2\} \times \{1\} = \{(1,1), (2,1)\}$
# + id="KW3ecqa9oV3O"
def cartesiano(A, B):
# write your solution here
return {(a, b) for a in A for b in B}
# + id="1CN8rSJnodxI" colab={"base_uri": "https://localhost:8080/"} outputId="fb5cf7be-2a3d-49e6-997b-fbea81edd3e6"
# Use this space to test your solution
cartesiano({1,2,3,4}, {1,2,3,4,5})
# + id="AbI99WCCFnF0" colab={"base_uri": "https://localhost:8080/"} outputId="9eaa7aca-c4a7-4510-f702-dc10d627d442"
# Validation
entradas = [[{1,2,3,4}, {2,3,4,5}], [[1,2,3,4], [1,2,3,4,5]]]
saidas = [
{(1, 2), (1, 3), (1, 4), (1, 5), (2, 2), (2, 3), (2, 4), (2, 5), (3, 2), (3, 3), (3, 4), (3, 5), (4, 2), (4, 3), (4, 4), (4, 5)},
{(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (3, 1), (3, 2), (3, 3), (3, 4), (3, 5), (4, 1), (4, 2), (4, 3), (4, 4), (4, 5)}
]
validate(cartesiano, lambda x, y: y, entradas, saidas)
# + [markdown] id="IdbnVmy1p0XV"
# **Exercise 02**:
#
# Write a function that tests whether a set $R$ of tuples represents a relation between the sets $A$ and $B$ - that is, whether $R \subseteq {A \times B}$.
# - Note: the relation is represented in its extensional form, as a set of tuples, e.g. $R$={(1,2), (2,3), (3,4)}
# + id="Hoe6bsEdqSc5"
def is_relacao(R, A, B):
# write your solution here
C = cartesiano(A,B)
for r in R:
if r not in C:
return False
return True
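# Since both $R$ and $A \times B$ are just sets of pairs, the membership loop
# above can also be collapsed into a single subset test. A sketch, assuming the
# `cartesiano` helper defined earlier (restated here so the snippet is
# self-contained):

```python
def cartesiano(A, B):
    return {(a, b) for a in A for b in B}

def is_relacao(R, A, B):
    # R is a relation between A and B iff every pair of R lies in A x B
    return set(R) <= cartesiano(A, B)

print(is_relacao({(1, 2), (2, 3)}, {1, 2, 3}, {2, 3}))  # True
print(is_relacao({(6, 2)}, {1, 2, 3}, {2, 3}))          # False
```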
# + id="KI5YF2tkrJJc" colab={"base_uri": "https://localhost:8080/"} outputId="41f4feab-70bb-4890-d817-afee3ed40600"
# Use this space to test your solution
is_relacao({(1,1),(2,1)},{1,2},{1})
# + id="SY_C8qjiMyS3" colab={"base_uri": "https://localhost:8080/"} outputId="211aac2a-cc15-4944-f6dd-0f211c0f76e6"
# Validation
entradas = [
[{(1,2),(2,3),(3,2)}, {1,2,3,4}, {2,3,4,5}],
[[(6,2),(2,3),(3,5)], [1,2,3,4], [2,3,4,5]],
[{(1,2),(2,7),(3,2)}, [1,2,3,4], [2,3,4,5]]
]
saidas = [True, False, False]
validate(is_relacao, lambda x, y: y, entradas, saidas)
# + [markdown] id="gCIpHYOtrVVn"
# **Exercise 03:**
#
# Write a function that tests whether a relation $R$ is serial: $(∀a \in A [∃b \in B : aRb])$:
# + id="gsACAS90rkXw"
def is_serial(R, A, B):
# write your solution here
return is_relacao(R, A, B) and ({a for a, b in R} == set(A))
# + id="PcXURNB1Rm3C" colab={"base_uri": "https://localhost:8080/"} outputId="a65cff05-ed4b-4e45-9509-841b74cf045d"
# Use this space to test your solution
is_serial({(1,1),(2,1)},{1,2},{1})
# + id="8zsg4_0gRnJt" colab={"base_uri": "https://localhost:8080/"} outputId="27275953-3d9f-4e5d-debb-e9836d7e18a7"
# Validation
entradas = [
[{(1,2),(2,3),(3,2)}, {1,2,3}, {2,3,4,5}],
[[(6,2),(2,3),(3,5)], [1,2,3,4], [2,3,4,5]],
[{(1,2),(2,7),(3,2)}, [1,2,3,4], [2,3,4,5]]
]
saidas = [True, False, False]
validate(is_serial, lambda x, y: y, entradas, saidas)
# + [markdown] id="E7RXqdROrxMq"
# **Exercise 04:**
#
# Write a function that tests whether a relation $R$ is functional: $\forall a \in A, \forall(b_i,b_j) \in B[(aRb_i \land aRb_j) \rightarrow b_i=b_j]$
# + id="vDiSGquFs_Ek"
def is_funcional(R, A, B):
# write your solution here
domain = [a for a, b in R]
return is_relacao(R, A, B) and len(domain) == len(set(domain))
# + id="5GFQY_bWtDSN" colab={"base_uri": "https://localhost:8080/"} outputId="8232a160-4654-41c5-9900-aa2d04022d82"
# Use this space to test your solution
is_funcional([(6,2),(2,3),(3,5)], [1,2,3,4], [2,3,4,5])
# + id="z2XCSW52HGHM" colab={"base_uri": "https://localhost:8080/"} outputId="bd7016e2-4b1b-40a2-f24d-213581902789"
# Validation
entradas = [
[{(1,2),(2,3),(3,2)}, {1,2,3,4}, {2,3,4,5}],
[[(6,2),(2,3),(3,5)], [1,2,3,4], [2,3,4,5]],
[{(1,2),(2,3),(3,4)}, [1,2,3,4], [2,3,4,5]]
]
saidas = [True, False, True]
validate(is_funcional, lambda x, y: y, entradas, saidas)
# + [markdown] id="xiVDJcSmT7o_"
# ## Functions: $f:D\rightarrow C$
# + [markdown] id="Sa7XLR5EVkvr"
# **Exercise 05**:
# Write a function that tests whether a relation $f$ is a function:
#
# - Hint: remember that functions are serial and functional relations;
# - The function is represented in its extensional form: $\{(x,y): y=f(x)\}$
# + id="tcJWoJqUVL0c"
def is_funcao(f, A, B):
# write your solution here
return is_serial(f, A, B) and is_funcional(f, A, B)
# + id="NjWRsq37V_fr" colab={"base_uri": "https://localhost:8080/"} outputId="2a8803a7-099a-4f64-e173-bb31837de71d"
# Use this space to test your solution
is_funcao({(1,2),(2,3),(3,2)}, {1,2,3}, {2,3,4,5})
# + id="PADNhygGWTti" colab={"base_uri": "https://localhost:8080/"} outputId="d4ad57d3-e286-4f65-fddb-1e4e35160d06"
# Validation
entradas = [
[{(1,2),(2,3),(3,2)}, {1,2,3}, {2,3,4,5}],
[[(6,2),(2,3),(3,5)], [1,2,3,4], [2,3,4,5]],
[{(1,2),(2,3),(3,4),(4,6)}, [1,2,3,4], [2,3,4,5,6]]
]
saidas = [True, False, True]
validate(is_funcao, lambda x, y: y, entradas, saidas)
# + [markdown] id="5r7zIJiTXDeB"
# **Exercise 06:**
# Write an injective function with domain $D = \{1,..,10\}$ and codomain $C = \{1,..,20\}$. That is, the function must return a unique value $y \in C$ for every value $x \in D$.
# + id="42J71Mq3YO6d"
def injective(x):
# write your solution here
#return x
#return x+1
#return x+10
#return (x%10)+1
#return (x%20)+1
#return 2*x
return (2*x)-1
# + id="u8luk16pYgN9" colab={"base_uri": "https://localhost:8080/"} outputId="d0831d68-6ced-4e32-d60c-629e93eac0dd"
# Use this space to test your solution
[injective(x) for x in range(1, 11)]
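# A quick way to convince yourself that the final candidate works: its image over
# $D$ must contain 10 distinct values, all of which lie inside $C$. A small
# self-check of $f(x) = 2x - 1$:

```python
D = range(1, 11)
C = set(range(1, 21))

image = [2 * x - 1 for x in D]  # the odd numbers 1, 3, ..., 19
# injective: no two inputs collide; well-defined: the image stays inside C
print(len(set(image)) == len(image) and set(image) <= C)  # True
```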
# + id="eOcr59ZtYnU1" colab={"base_uri": "https://localhost:8080/"} outputId="7b101b0d-514d-4299-cacc-a7cf5f61cbb1"
# Validation
entradas = [[x] for x in range(1, 11)]
teste.__I = []
teste.__C = [y for y in range(1,21)]
teste.__t="i"
validate(injective, teste, entradas, None)
# + [markdown] id="Z2XWGPesbJ1g"
# **Exercise 07:**
# Write a bijective function $f:D\rightarrow C$, with $D = C = \{1,..,10\}$:
# + id="V0zI1uvzZSuk"
def bijective(x):
# write your solution here
#return x
#return 11 - x
#return (x % 10) + 1
return 11-x if x < 6 else x-5
# + id="IcDPmbyZbpl_" colab={"base_uri": "https://localhost:8080/"} outputId="41417644-f2da-40ee-ca7c-33f5546320d2"
# Use this space to test your solution
[bijective(x) for x in range(1,11)]
# + id="HMSeLxk6br3P" colab={"base_uri": "https://localhost:8080/"} outputId="48b0a610-7544-40dc-e120-45be0b0a1520"
# Validation
entradas = [[x] for x in range(1,11)]
teste.__I = []
teste.__C = {y for y in range(1,11)}
teste.__t = "s"
validate(bijective, teste, entradas, None)
# + [markdown] id="V2_GDKOQgQmn"
# **Exercise 08:**
# Write a function that tests whether a function $f$ is injective with domain $D$ and codomain $C$:
#
# - Note: in this case, $f$ is a Python function that takes a single argument $x$ and returns a result $y$
# + id="ZYjrZT3Xf9tL"
def is_injective(f, D, C):
# write your solution here
I = [f(x) for x in D]
return set(I).issubset(C) and len(I) == len(set(I))
# + id="peZVNVg9gz4v" colab={"base_uri": "https://localhost:8080/"} outputId="ae22dcca-bfd9-461a-b862-68d09f938d17"
# Use this space to test your solution
is_injective((lambda x: (x%10)+1), {x for x in range(1,11)}, {x for x in range(1,30)})
# + id="U3wWrL2-hGiF" colab={"base_uri": "https://localhost:8080/"} outputId="6a5a6e5b-0bcd-4f75-b92d-993aebbd1a1a"
# Validation
D = C = {x for x in range(1,11)}
entradas = [[(lambda x: x+20), D, C], [(lambda x: (x%10)+1), D, C], [(lambda x: x%10), D, C]]
saidas = [False, True, False]
validate(is_injective, lambda x, y: y, entradas, saidas)
# + [markdown] id="U5iGFtQ8g99t"
# **Exercise 09:** Write a function that tests whether a function $f$ is surjective, with domain $D$ and codomain $C$
# + id="H2SPl-W8g7f8"
def is_surjective(f, D, C):
# write your solution here
I = {f(x) for x in D}
return I == set(C)
# + id="uqWBU2nKj2d4" colab={"base_uri": "https://localhost:8080/"} outputId="d802b5d3-0f57-41e4-fbb5-d96d9f874e43"
# Use this space to test your solution
is_surjective((lambda x: (x%10)+1), {x for x in range(1,21)}, {x for x in range(1,11)})
# + id="UCk4hVLUj51f" colab={"base_uri": "https://localhost:8080/"} outputId="e1618f96-249c-4cf9-adb4-aaf66ac4aff7"
# Validation
D = {x for x in range(1,11)}
C = {x for x in range(1,21)}
entradas = [[lambda x: x, D, D], [lambda x: x+1, D, {x+1 for x in D}], [lambda x: x%10, D, D], [lambda x: (x%10)+1, C, D]]
saidas = [True, True, False, True]
validate(is_surjective, lambda x, y: y, entradas, saidas)
# + [markdown] id="SRFGSo0rlPd1"
# **Exercise 10:**
# Write a function that tests whether a function $f$ is bijective between the domain $D$ and the codomain $C$:
# + id="D-Xq8X0Esxpd"
def is_bijective(f, D, C):
# write your solution here
return is_injective(f, D, C) and is_surjective(f, D, C)
# + id="XzwyvkXbsvqH" colab={"base_uri": "https://localhost:8080/"} outputId="c9ca8abf-50f9-4f3e-9a06-deefac56fb7d"
# Use this space to test your solution
is_bijective((lambda x: (x%10)+1), {x for x in range(1,11)}, {x for x in range(1,11)})
# + id="ymkQ3YublFIm" colab={"base_uri": "https://localhost:8080/"} outputId="f475c82b-7763-4e1b-e471-2c392911b5fe"
# Validation
D = {x for x in range(1,11)}
C = {x for x in range(1,21)}
entradas = [[lambda x: x, D, D], [lambda x: x+1, D, {x+1 for x in D}], [lambda x: x%10, D, D], [lambda x: (x%10)+1, C, D]]
saidas = [True, True, False, False]
validate(is_bijective, lambda x, y: y, entradas, saidas)
# Source notebook: exercicios/D21_Monitoria_MatProb_Exercicios_02_solucoes.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Experiment with Shifted ReLUs
#
# Paper: https://arxiv.org/pdf/1511.07289.pdf
# ---
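# The idea in one line: shift the ReLU down by a constant bias so that activations
# can take negative values and have a mean closer to zero, i.e. f(x) = max(0, x) + b
# with b = -0.5. A dependency-free sketch of the formula (plain Python, not the
# fastai code below):

```python
def shifted_relu(x, bias=-0.5):
    # standard ReLU, then a constant downward shift
    return max(0.0, x) + bias

print([shifted_relu(x) for x in (-1.0, 0.0, 2.0)])  # [-0.5, -0.5, 1.5]
```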
from fastai.script import *
from fastai.vision import *
torch.backends.cudnn.benchmark = True
from fastprogress import fastprogress
fastprogress.MAX_COLS = 80
import fastai
fastai.__version__
import torch
import torchvision
import torchvision.transforms as transforms
# ## Get Data
from fastai import datasets
path = untar_data(URLs.IMAGENETTE_160)
tfms = get_transforms(do_flip=False)
size = 128 # from https://github.com/fastai/fastai/blob/master/examples/train_imagenette.py#L29
bs = 128
n_gpus = 1
workers = min(8, num_cpus()//n_gpus)
path.ls()
data = (ImageList.from_folder(path).split_by_folder(valid='val')
.label_from_folder().transform(([flip_lr(p=0.5)], []), size=size)
.databunch(bs=bs, num_workers=workers)
# .presize(size, scale=(0.35,1))
.normalize(imagenet_stats))
data.show_batch(rows=3)
class FastReLU(nn.Threshold):
def __init__(self, threshold=0.0, value=0.0, bias= -0.5, inplace=False):
super(FastReLU, self).__init__(threshold, value)
self.threshold = threshold
self.value = value
self.inplace = inplace
self.bias = bias
def forward(self, input):
return F.threshold(input, self.threshold, self.value, self.inplace) + self.bias
def extra_repr(self):
inplace_str = 'inplace' if self.inplace else ''
return inplace_str
# sanity check, zero bias FastReLU should be the same as ReLU
test_list = tensor([-0.1, 0, 0.5])
m, f = nn.ReLU(), FastReLU(bias=0.0)
m(test_list) == f(test_list)
# with a nonzero bias the outputs differ everywhere, so this comparison is all False
m, f = nn.ReLU(), FastReLU(bias=0.5)
m(test_list) == f(test_list)
# ## Basic ResNet from torchvision
from torchvision.models import ResNet
from torchvision.models.resnet import conv1x1, conv3x3, BasicBlock, Bottleneck
# +
class FastBasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(FastBasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = FastReLU(inplace=True)
# self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes)
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
def forward(self, x):
identity = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
class NoBN_FastBasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(NoBN_FastBasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
# self.bn1 = nn.BatchNorm2d(planes)
self.relu = FastReLU(inplace=True)
# self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes)
# self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
def forward(self, x):
identity = x
out = self.conv1(x)
# out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
# out = self.bn2(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
# +
class FastBottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(FastBottleneck, self).__init__()
self.conv1 = conv1x1(inplanes, planes)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = conv3x3(planes, planes, stride)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = conv1x1(planes, planes * self.expansion)
self.bn3 = nn.BatchNorm2d(planes * self.expansion)
self.relu = FastReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
identity = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
class NoBN_FastBottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(NoBN_FastBottleneck, self).__init__()
self.conv1 = conv1x1(inplanes, planes)
# self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = conv3x3(planes, planes, stride)
# self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = conv1x1(planes, planes * self.expansion)
# self.bn3 = nn.BatchNorm2d(planes * self.expansion)
self.relu = FastReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
identity = x
out = self.conv1(x)
# out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
# out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
# out = self.bn3(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
# -
# ## FastResNet
class FastResNet(nn.Module):
def __init__(self, block, layers, num_classes=1000, zero_init_residual=False):
super(FastResNet, self).__init__()
self.inplanes = 64
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.relu = FastReLU(inplace=True)
# self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
self.fc = nn.Linear(512 * block.expansion, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
# Zero-initialize the last BN in each residual branch,
# so that the residual branch starts with zeros, and each residual block behaves like an identity.
# This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
if zero_init_residual:
for m in self.modules():
if isinstance(m, Bottleneck):
nn.init.constant_(m.bn3.weight, 0)
elif isinstance(m, BasicBlock):
nn.init.constant_(m.bn2.weight, 0)
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
conv1x1(self.inplanes, planes * block.expansion, stride),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for _ in range(1, blocks):
layers.append(block(self.inplanes, planes))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
class NoBN_FastResNet(nn.Module):
def __init__(self, block, layers, num_classes=1000, zero_init_residual=False):
super(NoBN_FastResNet, self).__init__()
self.inplanes = 64
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
bias=False)
# self.bn1 = nn.BatchNorm2d(64)
self.relu = FastReLU(inplace=True)
# self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
self.fc = nn.Linear(512 * block.expansion, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
# elif isinstance(m, nn.BatchNorm2d):
# nn.init.constant_(m.weight, 1)
# nn.init.constant_(m.bias, 0)
# Zero-initialize the last BN in each residual branch,
# so that the residual branch starts with zeros, and each residual block behaves like an identity.
# This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
# if zero_init_residual:
# for m in self.modules():
# if isinstance(m, Bottleneck):
# nn.init.constant_(m.bn3.weight, 0)
# elif isinstance(m, BasicBlock):
# nn.init.constant_(m.bn2.weight, 0)
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
conv1x1(self.inplanes, planes * block.expansion, stride),
# nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for _ in range(1, blocks):
layers.append(block(self.inplanes, planes))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
# x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
# ### Define Model Creating Functions
def fast_rn18(pretrained=False, **kwargs):
model = FastResNet(FastBasicBlock, [2, 2, 2, 2], **kwargs)
return model
def nobn_fast_rn18(pretrained=False, **kwargs):
model = NoBN_FastResNet(NoBN_FastBasicBlock, [2, 2, 2, 2], **kwargs)
return model
def base_rn18(pretrained=False, **kwargs):
model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
return model
def fast_rn101(pretrained=False, **kwargs):
model = FastResNet(FastBottleneck, [3, 4, 23, 3], **kwargs)
return model
def nobn_fast_rn101(pretrained=False, **kwargs):
model = NoBN_FastResNet(NoBN_FastBottleneck, [3, 4, 23, 3], **kwargs)
return model
def base_rn101(pretrained=False, **kwargs):
model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
return model
# ## Run Experiments
from statistics import mean
def average_perf(n, model_creator):
"""
Build n custom learners from scratch and find average accuracy
"""
acc_list = []
for _ in range(n):
custom_learn = cnn_learner(data, model_creator, metrics=accuracy)
custom_learn.fit_one_cycle(5, 1e-2)
acc_list.append(custom_learn.recorder.metrics[-1][0].item())
    print(f"Mean accuracy over {n} run(s) is {mean(acc_list)}")
return acc_list
# # RN101 with FastReLU
acc_list = average_perf(1, fast_rn101)
# ### RN101 with No Batchnorm FastReLU
acc_list = average_perf(1, nobn_fast_rn101)
# # RN101 with ReLU
acc_list = average_perf(1, base_rn101)
# # RN18 with ReLU
acc_list = average_perf(5, base_rn18)
# # RN18 with FastReLU
acc_list = average_perf(5, fast_rn18)
# ## RN18 with No Batchnorm FastReLU
acc_list = average_perf(5, nobn_fast_rn18)
# Source notebook: activation_function_experiments/try_fast_relu.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="JKkbeQi2Mzug"
# # Hierarchical Clustering
# + [markdown] colab_type="text" id="TaQI437hM1Ho"
# ## Importing the libraries
# + colab={} colab_type="code" id="2UW48DgcM4YS"
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# + [markdown] colab_type="text" id="gFeTEtDxM7K4"
# ## Importing the dataset
# + colab={} colab_type="code" id="4fS2J3HGM99q"
df = pd.read_csv('mall_data.csv')
X = df.iloc[:, [3, 4]].values
# -
X
# ## Knowing The Dataset
df.columns
df.corr()
df.isnull().sum()
# + [markdown] colab_type="text" id="czYMlG7cNBsu"
# ## Using the dendrogram to find the optimal number of clusters
# + colab={"base_uri": "https://localhost:8080/", "height": 295} colab_type="code" executionInfo={"elapsed": 5911, "status": "ok", "timestamp": 1586373368071, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64", "userId": "15047218817161520419"}, "user_tz": -240} id="RDQODpAFNILO" outputId="89e9ce60-b3b6-4cf8-acd3-c6e00b321a32"
import scipy.cluster.hierarchy as sch
dendrogram = sch.dendrogram(sch.linkage(X, method = 'ward'))
plt.title('Dendrogram')
plt.xlabel('Customers')
plt.ylabel('Euclidean distances')
plt.show()
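# Once the dendrogram suggests a cut, scipy's `fcluster` can turn the same ward
# linkage into flat cluster labels. A sketch on synthetic data (two hypothetical
# blobs standing in for the mall data, since `mall_data.csv` is not bundled here):

```python
import numpy as np
import scipy.cluster.hierarchy as sch

rng = np.random.RandomState(0)
# two well-separated blobs standing in for the customer data
pts = np.vstack([rng.randn(20, 2), rng.randn(20, 2) + 8])

Z = sch.linkage(pts, method='ward')
# cut the tree so that exactly 2 flat clusters remain
labels = sch.fcluster(Z, t=2, criterion='maxclust')
print(sorted(set(labels)))  # [1, 2]
```

# `criterion='maxclust'` asks for at most `t` clusters, which matches reading off
# the number of clusters from the dendrogram's largest vertical gap.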
# + [markdown] colab_type="text" id="KDbXbo9INLF6"
# ## Training the Hierarchical Clustering model on the dataset
# + colab={} colab_type="code" id="IoH3zs2KNSw6"
from sklearn.cluster import AgglomerativeClustering
cluster = AgglomerativeClustering(n_clusters = 5, affinity = 'euclidean', linkage = 'ward')
y = cluster.fit_predict(X)
# + [markdown] colab_type="text" id="X-SYG7l9NVmU"
# ## Visualising the clusters
# + colab={"base_uri": "https://localhost:8080/", "height": 295} colab_type="code" executionInfo={"elapsed": 2321, "status": "ok", "timestamp": 1586373378543, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64", "userId": "15047218817161520419"}, "user_tz": -240} id="-91tDJrnNY2p" outputId="11458805-856c-440f-b2c8-9f7ce293c230"
plt.scatter(X[y == 0, 0], X[y == 0, 1], s = 100, c = 'red', label = 'Cluster 1')
plt.scatter(X[y == 1, 0], X[y == 1, 1], s = 100, c = 'blue', label = 'Cluster 2')
plt.scatter(X[y == 2, 0], X[y == 2, 1], s = 100, c = 'green', label = 'Cluster 3')
plt.scatter(X[y == 3, 0], X[y == 3, 1], s = 100, c = 'cyan', label = 'Cluster 4')
plt.scatter(X[y == 4, 0], X[y == 4, 1], s = 100, c = 'magenta', label = 'Cluster 5')
plt.title('Clusters of customers')
plt.xlabel('Annual Income (k$)')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
# -
# Source notebook: Clustering/Hierarchical Clustering/Python/mall_data_model.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
#
# Since 2008, guests and hosts have used Airbnb to travel in a more unique, personalized way. As part of the Airbnb Inside initiative, this dataset describes the listing activity of homestays in Boston, MA.
#
# ## Content
#
# The following Airbnb activity is included in this Boston dataset:
#
# 1. Listings : Including full descriptions and average review score
# 2. Reviews : Including unique id for each reviewer and detailed comments
# 3. Calendar : Including listing id and the price and availability for that day
#
#
# The data is obtained from [Kaggle](https://www.kaggle.com/airbnb/boston)
#
#
# The questions I try to answer in this notebook include:
#
# 1. What are the features that highly correlate to price?
# 2. How do price and rating relate to each other?
# 3. What is the major factor that influences price and ratings?
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
calendar = pd.read_csv('C://Users//divyam07//Desktop//udacity//Write_a_Data_Science_Blog_Post//AirBnB Boston data//calendar.csv')
listings = pd.read_csv('C://Users//divyam07//Desktop//udacity//Write_a_Data_Science_Blog_Post//AirBnB Boston data//listings.csv')
reviews = pd.read_csv('C://Users//divyam07//Desktop//udacity//Write_a_Data_Science_Blog_Post//AirBnB Boston data//reviews.csv')
calendar.head()
listings.head()
reviews.head()
def basic_info(data):
    """
    Prints basic information about the dataset
    Input  - the dataset (DataFrame)
    Output - its shape, plus the percentage of missing values per column
    """
    print('Shape: {}'.format(data.shape))
    print((data.isnull().sum()[data.isnull().sum()>0]/data.shape[0])*100)
basic_info(calendar)
basic_info(listings)
listings.info()
# ## Cleaning
#
# ### A closer look at the missing values.
def missing_values(data):
    """
    Input  - the dataset (DataFrame)
    Output - a heatmap showing the missing values per column
    """
    sns.heatmap(data.isnull(), cbar = False, yticklabels = False)
missing_values(calendar)
missing_values(listings)
missing_values(reviews)
listings.info()
# +
# First, strip the dollar/percent sign and thousands separators, then convert to float
def remove_sign(x,sign):
if type(x) is str:
x = float(x.replace(sign,'').replace(',',''))
return x
listings.price = listings.price.apply(remove_sign,sign='$')
listings.host_response_rate = listings.host_response_rate.apply(remove_sign,sign='%')
listings.host_acceptance_rate = listings.host_acceptance_rate.apply(remove_sign,sign='%')
# -
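# `remove_sign` also drops the thousands separators, so a string such as
# '$1,250.00' parses cleanly, while non-strings pass through unchanged. A quick
# check on literal values:

```python
def remove_sign(x, sign):
    if type(x) is str:
        x = float(x.replace(sign, '').replace(',', ''))
    return x

print(remove_sign('$1,250.00', '$'))  # 1250.0
print(remove_sign('95%', '%'))        # 95.0
print(remove_sign(42.0, '$'))         # 42.0 (non-strings pass through)
```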
# ### Separating categorical and numeric variables and filling the missing values
# first make a copy
df_listing = listings.copy()
# +
# categorical variables
cat_listings = df_listing.select_dtypes(include=['object'])
cat_listing = df_listing[['listing_url', 'last_scraped', 'name', 'summary', 'space',
'description', 'experiences_offered', 'neighborhood_overview', 'notes',
'transit', 'access', 'interaction', 'house_rules', 'thumbnail_url',
'medium_url', 'picture_url', 'xl_picture_url', 'host_url', 'host_name',
'host_since', 'host_location', 'host_about', 'host_response_time',
'host_is_superhost', 'host_thumbnail_url', 'host_picture_url',
'host_neighbourhood', 'host_verifications', 'host_has_profile_pic',
'host_identity_verified', 'street', 'neighbourhood',
'neighbourhood_cleansed', 'city', 'state', 'zipcode', 'market',
'smart_location', 'country_code', 'country', 'is_location_exact',
'property_type', 'room_type', 'bed_type', 'amenities', 'weekly_price',
'monthly_price', 'security_deposit', 'cleaning_fee', 'extra_people',
'calendar_updated', 'calendar_last_scraped', 'first_review',
'last_review', 'requires_license', 'instant_bookable',
'cancellation_policy', 'require_guest_profile_picture',
'require_guest_phone_verification']]
# drop rows with missing values
df_cat_dropna = cat_listing.dropna(axis=0)
# Mode function - mode() returns a Series (there can be ties), so take its first value
fill_mode = lambda col: col.fillna(col.mode().iloc[0])
# Fill the mode
cat_listing = df_cat_dropna.apply(fill_mode, axis=0)
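# Because `Series.mode()` returns a Series rather than a scalar, the fill value
# should be its first element; otherwise `fillna` aligns on the mode's index and
# leaves most NaNs in place. A toy example of the intended behaviour
# (hypothetical data):

```python
import pandas as pd

s = pd.Series(['a', 'b', 'a', None])
# fill missing entries with the most frequent value
fill_mode = lambda col: col.fillna(col.mode().iloc[0])
print(fill_mode(s).tolist())  # ['a', 'b', 'a', 'a']
```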
# +
# numerical variables
num_listings = df_listing.select_dtypes(include=['int', 'float'])
# some columns are empty, so better to remove them
num_listings.isnull().sum().sort_values(ascending=False)
# removing the empty columns
num_listings = num_listings.drop(['neighbourhood_group_cleansed','license','jurisdiction_names','has_availability','square_feet'],axis=1)
# filling missing values with the mean
# Mean function
fill_mean = lambda col: col.fillna(col.mean())
# Fill the mean
num_listing = num_listings.apply(fill_mean, axis=0)
# -
# check again for missing values
# for numerical variables
sns.heatmap(num_listing.isnull(),cbar = False,yticklabels = False)
# for catagorical variable
sns.heatmap(cat_listing.isnull(),cbar = False,yticklabels = False)
# ## Q1. What are the features that highly correlate to price?
# find the correlation
corr = num_listing.corr()
corr
# +
# using a heatmap to understand the correlations better
# -
fig, ax = plt.subplots(figsize=(16,10))
sns.set(font_scale=1)
correlation=sns.heatmap(corr, cbar = True, annot=True, square = True, fmt = '.2f',linewidths=3,cmap="YlGnBu")
# The features that highly correlate with price are:
# 1. Bedrooms
# 2. Beds
# 3. Accommodates
# 4. Room type
# 5. Number of bedrooms
# 6. Number of guests
#
# Some categorical variables were selected based on the visualizations presented below
# ## Q2 How do price and rating relate to each other?
# use the 25th-percentile price (85) as the low bar and the 75th-percentile price (220) as the high bar
def price_level(x,low_bar=85,high_bar=220):
if x<=low_bar:
x='Low_Price'
elif x>=high_bar:
x='High_Price'
else:
x='Medium_Price'
return x
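# With the default bars, prices at or below 85 map to Low_Price, prices at or
# above 220 to High_Price, and everything in between to Medium_Price:

```python
def price_level(x, low_bar=85, high_bar=220):
    # bucket a nightly price into three levels
    if x <= low_bar:
        return 'Low_Price'
    elif x >= high_bar:
        return 'High_Price'
    return 'Medium_Price'

print([price_level(p) for p in (50, 150, 300)])
# ['Low_Price', 'Medium_Price', 'High_Price']
```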
listings['price_level'] = listings.price.apply(price_level)
# select price and ratings and dropna
price_rate = listings[["id","price","review_scores_rating","number_of_reviews","price_level"]].dropna()
price_rate.head()
f, ax = plt.subplots(figsize=(15, 6))
sns.scatterplot(x='price',y='review_scores_rating',hue='number_of_reviews',alpha=0.5,data=price_rate)
# Insights
#
# 1. Low ratings are associated with lower prices.
# 2. However, a high rating does not imply a high price.
# ## Q3 What's the major factor that influences price and ratings?
# keep listings with non-null prices
listings_price = listings[listings.price.notnull()]
def plot_price_by_cat(column_name,listings=listings,fig_row_size=11,fig_col_size=9):
price_col = listings_price.groupby(column_name).mean()[['price']]
price_col.reset_index(inplace=True)
f, ax = plt.subplots(figsize=(fig_row_size, fig_col_size))
sns.barplot(x=column_name,y='price',palette="Blues_d",data=price_col.sort_values(by='price', ascending=False))
plot_price_by_cat('property_type',listings=listings_price)
def plot_price_by_cata(column_name,listings=listings,fig_row_size=11,fig_col_size=9):
price_col = listings_price.groupby(column_name).mean()[['price']]
price_col.reset_index(inplace=True)
f, ax = plt.subplots(figsize=(fig_row_size, fig_col_size))
sns.barplot(y=column_name,x='price',palette="Blues_d",data=price_col.sort_values(by='price', ascending=False))
plot_price_by_cata('neighbourhood_cleansed',listings=listings_price)
plot_price_by_cata('bed_type',listings=listings_price)
plot_price_by_cata('zipcode',listings=listings_price)
plot_price_by_cat('host_response_time',listings=listings_price,fig_row_size=10,fig_col_size=8)
fig = plt.figure(figsize=(12, 8))
ax2 = fig.add_subplot(122)
# price and room_type
price_room_type = listings_price.groupby('room_type').mean()[['price']]
price_room_type.reset_index(inplace=True)
sns.barplot(x='room_type',y='price',palette="Blues_d",data=price_room_type.sort_values(by='price', ascending=False),ax=ax2)
# Insights
#
# The factors that influence price are
# 1. property_type : Guesthouse
# 2. room_type : Entire Home/Apt
# 3. bed_type : Real Bed
# 4. host_response_time : within a few hours
# 5. zipcode : 02111
# 6. neighbourhood_cleansed : South Boston Waterfront
# Source notebook: AirBnB Boston Analysis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + pycharm={"is_executing": true, "name": "#%%\n"}
import nannyml as nml
import pandas as pd
reference, analysis, analysis_target = nml.load_synthetic_binary_classification_dataset()
metadata = nml.extract_metadata(data = reference, model_name='wfh_predictor', model_type='classification_binary', exclude_columns='identifier')
metadata.target_column_name = 'work_home_actual'
reference.head()
# + pycharm={"is_executing": true, "name": "#%%\n"}
# Let's initialize the object that will perform the Univariate Drift calculations
# Let's use a chunk size of 5000 data points to create our drift statistics
univariate_calculator = nml.UnivariateStatisticalDriftCalculator(model_metadata=metadata, chunk_size=5000)
# NannyML compares drift versus the full reference dataset.
univariate_calculator.fit(reference_data=reference)
# let's see drift statistics for all available data
data = pd.concat([reference, analysis], ignore_index=True)
univariate_results = univariate_calculator.calculate(data=data)
# let's view a small subset of our results:
# We use the data property of the results class to view the relevant data.
univariate_results.data.iloc[:5, :9]
# + pycharm={"is_executing": true, "name": "#%%\n"}
univariate_results.data.iloc[-5:, :9]
# + pycharm={"is_executing": true, "name": "#%%\n"}
# let's plot drift results for all model inputs
for feature in metadata.features:
figure = univariate_results.plot(kind='feature_drift', metric='statistic', feature_label=feature.label)
figure.show()
# save figure - not shown on guide:
figure.write_image(file=f"drift-guide-{feature.label}.svg")
# figure.write_image(file=f"drift-guide-{feature.label}.svg", engine="orca")
# + pycharm={"is_executing": true, "name": "#%%\n"}
# let's plot distribution drift results for continuous model inputs
for feature in metadata.continuous_features:
figure = univariate_results.plot(
kind='feature_distribution',
feature_label=feature.label
)
figure.show()
# save figure - not shown on guide:
figure.write_image(file=f"drift-guide-joyplot-{feature.label}.svg")
# figure.write_image(file=f"drift-guide-joyplot-{feature.label}.svg", engine="orca")
# + pycharm={"is_executing": true, "name": "#%%\n"}
# let's plot distribution drift results for categorical model inputs
for feature in metadata.categorical_features:
figure = univariate_results.plot(
kind='feature_distribution',
feature_label=feature.label
)
figure.show()
# save figure - not shown on guide:
figure.write_image(file=f"drift-guide-stacked-{feature.label}.svg")
# figure.write_image(file=f"drift-guide-stacked-{feature.label}.svg", engine="orca")
# + pycharm={"is_executing": true, "name": "#%%\n"}
ranker = nml.Ranker.by('alert_count')
ranked_features = ranker.rank(univariate_results, model_metadata=metadata, only_drifting = False)
ranked_features
# + pycharm={"name": "#%%\n"}
# Let's initialize the object that will perform Data Reconstruction with PCA
# Let's use a chunk size of 5000 data points to create our drift statistics
rcerror_calculator = nml.DataReconstructionDriftCalculator(model_metadata=metadata, chunk_size=5000)
# NannyML compares drift versus the full reference dataset.
rcerror_calculator.fit(reference_data=reference)
# let's see RC error statistics for all available data
rcerror_results = rcerror_calculator.calculate(data=data)
# + pycharm={"name": "#%%\n"}
from sklearn.impute import SimpleImputer
# Let's initialize the object that will perform Data Reconstruction with PCA
rcerror_calculator = nml.DataReconstructionDriftCalculator(
model_metadata=metadata,
chunk_size=5000,
imputer_categorical=SimpleImputer(strategy='constant', fill_value='missing'),
imputer_continuous=SimpleImputer(strategy='median')
)
# NannyML compares drift versus the full reference dataset.
rcerror_calculator.fit(reference_data=reference)
# let's see RC error statistics for all available data
rcerror_results = rcerror_calculator.calculate(data=data)
# + pycharm={"is_executing": true, "name": "#%%\n"}
rcerror_results.data
# + pycharm={"is_executing": true, "name": "#%%\n"}
print(rcerror_results.data.to_markdown(tablefmt="grid"))
# + pycharm={"is_executing": true, "name": "#%%\n"}
figure = rcerror_results.plot(kind='drift')
figure.show()
# save figure - not shown on guide:
figure.write_image(file="drift-guide-multivariate.svg")
# + pycharm={"is_executing": true, "name": "#%%\n"}
figure = univariate_results.plot(kind='prediction_drift', metric='statistic')
figure.show()
# save figure - not shown on guide:
figure.write_image(file=f"drift-guide-predictions.svg")
# + pycharm={"is_executing": true, "name": "#%%\n"}
figure = univariate_results.plot(kind='prediction_distribution', metric='statistic')
figure.show()
# save figure - not shown on guide:
figure.write_image(file=f"drift-guide-predictions-joyplot.svg")
# figure.write_image(file=f"drift-guide-predictions-joyplot.svg", engine="orca")
# + pycharm={"is_executing": true, "name": "#%%\n"}
data = pd.concat([reference, analysis.set_index('identifier').join(analysis_target.set_index('identifier'), on='identifier', rsuffix='_r')], ignore_index=True).reset_index(drop=True)
data.loc[data['partition'] == 'analysis'].head(3)
# + pycharm={"is_executing": true, "name": "#%%\n"}
target_distribution_calculator = nml.TargetDistributionCalculator(model_metadata=metadata, chunk_size=5000)
target_distribution_calculator.fit(reference_data=reference)
# + pycharm={"is_executing": true, "name": "#%%\n"}
target_distribution = target_distribution_calculator.calculate(data)
target_distribution.data.head(3)
# + pycharm={"is_executing": true, "name": "#%%\n"}
fig = target_distribution.plot(kind='distribution', distribution='metric')
fig.show()
# save figure - not shown on guide:
fig.write_image(file=f"target_distribution_metric.svg")
# + pycharm={"is_executing": true, "name": "#%%\n"}
fig = target_distribution.plot(kind='distribution', distribution='statistical')
fig.show()
# save figure - not shown on guide:
fig.write_image(file=f"target_distribution_statistical.svg")
# + pycharm={"name": "#%%\n"}
| docs/example_notebooks/Guide Data Drift_old.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# # Import National Parks Data
#
# I copy/pasted the table from [Wikipedia's list of US National Parks](https://en.wikipedia.org/w/index.php?title=List_of_national_parks_of_the_United_States) into a Google Sheet, then exported it to csv. Now, the data has to be modified a little bit to be useful.
import pandas as pd
nps = pd.read_csv('national_parks_wikipedia.csv')
nps.columns
nps['established'] = pd.to_datetime(nps['Date established as park[2][4]'])
nps['sqkm'] = nps['Area[2]'].str.extract(r'(\(.* km2\))', expand=False)
nps['sqkm'] = nps['sqkm'].str.split().str[0].str.replace('(', '', regex=False).str.replace(',', '', regex=False).astype(float)
nps['year'] = nps['established'].apply(lambda x: x.year)
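# The area-string cleanup above is terse; here is a self-contained sketch of the
# same parsing on made-up sample values in the Wikipedia style (the exact cell
# text is an assumption for illustration only):

```python
import pandas as pd

# hypothetical "Area[2]" values as they appear on the Wikipedia page
sample = pd.Series(["47,389.67 acres (191.8 km2)",
                    "1,217,403.01 acres (4,926.7 km2)"])

# grab the parenthesized "(... km2)" part, then strip "(", the thousands
# separators, and the unit, leaving square kilometres as a float
sqkm = (sample.str.extract(r'(\(.* km2\))', expand=False)
              .str.split().str[0]
              .str.replace('(', '', regex=False)
              .str.replace(',', '', regex=False)
              .astype(float))
print(sqkm.tolist())  # [191.8, 4926.7]
```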
# ## Change Log
#
# ```
# 2016-12-30: Add importer for National Parks data.
# ```
| NationalProtectedLands/load_nps.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="LhCf0QivcN1r" colab_type="text"
# # islower() isupper()
# + [markdown] id="BwOV26uacW8C" colab_type="text"
# The islower() method asks whether a string consists entirely of lowercase letters.
#
# The isupper() method asks whether a string consists entirely of uppercase letters.
# + [markdown] id="tVfKVdzbcn2_" colab_type="text"
# **islower()**
# + id="xA6PBOhscrYm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="ee57df1d-981b-41ff-b8c7-ceb24a48bb54"
kardiz = "istanbul"
print(kardiz.islower())
kardiz = "Python"
print(kardiz.islower())
# + [markdown] id="ChkSzInwdDC3" colab_type="text"
# Using the islower() method, we can check whether the data coming from the user consists entirely of lowercase letters
# + id="b8tGt-uBdT2W" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="8c222762-3c97-4b43-adc3-da8087139a56"
veri = input("Enter your name: ")
if not veri.islower():
print("Please write in lowercase letters only.")
else:
print("Good Name!")
# + [markdown] id="uCyrRpHTdt4y" colab_type="text"
# **isupper()**
# + id="o4LjSDEhd0lL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="098384d1-79c2-4168-d659-91382128b6c0"
name = "MURAT"
print(name.isupper())
name = "Murat"
print(name.isupper())
# + id="rQxSMoVPePwz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="ad56ef64-c92c-4ba5-a145-ea0e13a9b9de"
veri = input("Your message: ")
if veri.isupper():
print("Please write in lowercase!")
# + [markdown] id="eJxgMCP3efS8" colab_type="text"
# To check each word of the user's message individually, we can make use of the split() method:
# + id="NjSUFSk4enzs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="6065a760-300a-47da-aeb5-6bf5a70c5524"
veriOne = input("Your message: ")
bol = veriOne.split()
for i in bol:
if i.isupper():
print("Do not send a message consisting entirely of uppercase letters!")
# + [markdown] id="rod5XQVHfHlH" colab_type="text"
# If we don't use the split() method, it examines each character one by one
# + id="VumafHjSfOza" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 268} outputId="29001e2f-efbc-49a7-8f90-34773a85c0c7"
veriOne = input("Your message: ")
for i in veriOne:
if i.isupper():
print("Uppercase letter found!")
| karakterDiziMetod/isLowerisUpper.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Multi-touch Multi-channel Attribution Model Using LSTM with Attention
#
# This is an attribution model that uses LSTM with attention to assign weights to touchpoints.
#
# | Description | D001 (see [descriptions](https://github.com/ikatsov/tensor-house/blob/master/resources/descriptions.md)) |
# |--|:--|
# | Dataset | Criteo (see [datasets](https://github.com/ikatsov/tensor-house/blob/master/resources/datsets.md)) |
# | Papers | Li2018, Ren2018 (see [papers](https://github.com/ikatsov/tensor-house/blob/master/resources/papers.md)) |
# | Installation | Download the dataset to 'data' folder |
# | Libs | Keras, Scikit-learn, Pandas, Numpy |
# ### Data description
# This dataset represents a sample of 30 days of Criteo live traffic data. Each line corresponds to one impression (a banner) that was displayed to a user. For each banner we have detailed information about the context, if it was clicked, if it led to a conversion and if it led to a conversion that was attributed to Criteo or not. Data has been sub-sampled and anonymized so as not to disclose proprietary elements.
#
# Here is a detailed description of the fields (they are tab-separated in the file):
#
# * timestamp: timestamp of the impression (starting from 0 for the first impression). The dataset is sorted according to timestamp.
# * uid: a unique user identifier
# * campaign: a unique identifier for the campaign
# * conversion: 1 if there was a conversion in the 30 days after the impression (independently of whether this impression was last click or not)
# * conversion_timestamp: the timestamp of the conversion or -1 if no conversion was observed
# * conversion_id: a unique identifier for each conversion (so that timelines can be reconstructed if needed). -1 if there was no conversion
# * attribution: 1 if the conversion was attributed to Criteo, 0 otherwise
# * click: 1 if the impression was clicked, 0 otherwise
# * click_pos: the position of the click before a conversion (0 for first-click)
# * click_nb: number of clicks. More than 1 if there was several clicks before a conversion
# * cost: the price paid by Criteo for this display (disclaimer: not the real price, only a transformed version of it)
# * cpo: the cost-per-order in case of attributed conversion (disclaimer: not the real price, only a transformed version of it)
# * time_since_last_click: the time since the last click (in s) for the given impression
# * cat(1-9): contextual features associated to the display. Can be used to learn the click/conversion models. We do not disclose the meaning of these features but it is not relevant for this study. Each column is a categorical variable. In the experiments, they are mapped to a fixed dimensionality space using the Hashing Trick (see paper for reference).
#
# ### Key figures
# * 2.4Gb uncompressed
# * 16.5M impressions
# * 45K conversions
# * 700 campaigns
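# The field description above says the cat(1-9) features are mapped to a fixed
# dimensionality with the Hashing Trick. A minimal sketch of that idea with
# scikit-learn's FeatureHasher; the 256-dimensional target space and the toy
# category values are assumptions, not the paper's actual setup:

```python
from sklearn.feature_extraction import FeatureHasher

# two toy impressions with string-valued categorical features
rows = [{'cat1': 'a91', 'cat2': 'b17', 'cat3': 'c05'},
        {'cat1': 'a91', 'cat2': 'b42', 'cat3': 'c05'}]

# each "name=value" pair is hashed into a fixed 256-dimensional sparse vector,
# so the encoding never grows with the number of distinct category values
hasher = FeatureHasher(n_features=2**8, input_type='dict')
X = hasher.transform(rows)
print(X.shape)  # (2, 256)
```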
# +
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.utils import resample
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import keras
plt.style.use('ggplot')
# +
# Initial data preparation
def add_derived_columns(df):
df_ext = df.copy()
df_ext['jid'] = df_ext['uid'].map(str) + '_' + df_ext['conversion_id'].map(str)
min_max_scaler = MinMaxScaler()
for cname in ('timestamp', 'time_since_last_click'):
x = df_ext[cname].values.reshape(-1, 1)
df_ext[cname + '_norm'] = min_max_scaler.fit_transform(x)
return df_ext
def filter_journeys_by_length(df, min_touchpoints):
if min_touchpoints <= 1:
return df
else:
grouped = df.groupby(['jid'])['uid'].count().reset_index(name="count")
return df[df['jid'].isin( grouped[grouped['count'] >= min_touchpoints]['jid'].values )]
def sample_campaigns(df, n_campaigns):
campaigns = np.random.choice( df['campaign'].unique(), n_campaigns, replace = False )
return df[ df['campaign'].isin(campaigns) ]
def balance_conversions(df):
df_minority = df[df.conversion == 1]
df_majority = df[df.conversion == 0]
df_majority_jids = np.array_split(df_majority['jid'].unique(), int(100 * df_majority.shape[0] / df_minority.shape[0]))
df_majority_sampled = pd.DataFrame(data=None, columns=df.columns)
for jid_chunk in df_majority_jids:
df_majority_sampled = pd.concat([df_majority_sampled, df_majority[df_majority.jid.isin(jid_chunk)]])
if df_majority_sampled.shape[0] > df_minority.shape[0]:
break
return pd.concat([df_majority_sampled, df_minority]).sample(frac=1).reset_index(drop=True)
def map_one_hot(df, column_names, result_column_name):
mapper = {}
for i, col_name in enumerate(column_names):
for val in df[col_name].unique():
mapper[str(val) + str(i)] = len(mapper)
df_ext = df.copy()
def one_hot(values):
v = np.zeros( len(mapper) )
for i, val in enumerate(values):
v[ mapper[str(val) + str(i)] ] = 1
return v
df_ext[result_column_name] = df_ext[column_names].values.tolist()
df_ext[result_column_name] = df_ext[result_column_name].map(one_hot)
return df_ext
data_file = '/mnt/batch/tasks/shared/LS_root/mounts/clusters/summarization/code/clean_crit_attribution_dataset.csv.gz'
#data_file = '/mnt/batch/tasks/shared/LS_root/mounts/clusters/summarization/code/criteo_attribution_dataset.tsv.gz'
df4 = pd.read_csv(data_file, sep='\t', compression='gzip')
n_campaigns = 400
# df1 = add_derived_columns(df0)
# df2 = sample_campaigns(df1, n_campaigns)
# df3 = filter_journeys_by_length(df2, 2)
# df4 = balance_conversions(df3)
df5 = map_one_hot(df4, ['cat1', 'cat2', 'cat3', 'cat4', 'cat5', 'cat6', 'cat8'], 'cats')
df6 = map_one_hot(df5, ['campaign'], 'campaigns').sort_values(by=['timestamp_norm'])
print(df6.shape[0])
print([df6[df6.conversion == 0].shape[0], df6[df6.conversion == 1].shape[0]])
# +
# Data exploration
def journey_length_histogram(df):
counts = df.groupby(['jid'])['uid'].count().reset_index(name="count").groupby(['count']).count()
return counts.index, counts.values / df.shape[0]
hist_x, hist_y = journey_length_histogram(df6)
plt.plot(range(len(hist_x)), hist_y, label='all journeys')
plt.yscale('log')
plt.xlim(0, 120)
plt.xlabel('Journey length (number of touchpoints)')
plt.ylabel('Fraction of journeys')
plt.show()
# +
# df4['campaignuid'] = [str(x)+str(y) for x,y in zip(df4['campaign'],df4['uid'])]
# sampleindex = list(pd.DataFrame({'campaignuid':df4['campaignuid'].unique()}).sample(frac=0.2, replace=False, random_state=1)['campaignuid'])
# subset = df4[df4['campaignuid'].isin(sampleindex)]
# subset = subset.drop(['campaignuid'], axis =1)
# subset.to_csv('clean_crit_attribution_dataset.csv.gz', sep='\t', index = False, compression = 'gzip')
# -
# ## Last Touch Attribution
# +
def last_touch_attribution(df):
def count_by_campaign(df):
counters = np.zeros(n_campaigns)
for campaign_one_hot in df['campaigns'].values:
campaign_id = np.argmax(campaign_one_hot)
counters[campaign_id] = counters[campaign_id] + 1
return counters
campaign_impressions = count_by_campaign(df)
df_converted = df[df['conversion'] == 1]
idx = df_converted.groupby(['jid'])['timestamp_norm'].transform(max) == df_converted['timestamp_norm']
campaign_conversions = count_by_campaign(df_converted[idx])
return campaign_conversions / campaign_impressions
lta = last_touch_attribution(df6)
# +
# Visualization of the attribution scores
campaign_idx = range(150, 200)
fig = plt.figure(figsize=(15,4))
ax = fig.add_subplot(111)
plt.bar( range(len(lta[campaign_idx])), lta[campaign_idx], label='LTA' )
plt.xlabel('Campaign ID')
plt.ylabel('Return per impression')
plt.legend(loc='upper left')
plt.show()
# -
# ## Logistic Regression
def features_for_logistic_regression(df):
def pairwise_max(series):
return np.max(series.tolist(), axis = 0).tolist()
aggregation = {
'campaigns': pairwise_max,
'cats': pairwise_max,
'click': 'sum',
'cost': 'sum',
'conversion': 'max'
}
df_agg = df.groupby(['jid']).agg(aggregation)
df_agg['features'] = df_agg[['campaigns', 'cats', 'click', 'cost']].values.tolist()
return (
np.stack(df_agg['features'].map(lambda x: np.hstack(x)).values),
df_agg['conversion'].values
)
x, y = features_for_logistic_regression(df6)
print(np.shape(x))
# +
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.20, random_state = 1)
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size = 0.20, random_state = 1)
# +
# Quick sanity check
from sklearn.linear_model import LogisticRegression
logisticRegr = LogisticRegression()
logisticRegr.fit(x_train, y_train)
score = logisticRegr.score(x_test, y_test)
print(score)
# +
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.constraints import NonNeg
m = np.shape(x)[1]
model = Sequential()
model.add(Dense(1, input_dim=m, activation='sigmoid', name = 'contributions'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train, batch_size=128, epochs=10, verbose=1, validation_data=(x_val, y_val))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
# +
# Visualization of the attribution scores
from sklearn.utils.extmath import softmax
keras_logreg = model.get_layer('contributions').get_weights()[0].flatten()[0:n_campaigns]
keras_logreg = softmax([keras_logreg]).flatten()
fig = plt.figure(figsize=(15,4))
ax = fig.add_subplot(111)
plt.bar(range(len(keras_logreg[campaign_idx])), keras_logreg[campaign_idx] )
plt.xlabel('Campaign ID')
plt.ylabel('Return per impression')
plt.show()
# -
# ## Basic LSTM
# +
def features_for_lstm(df, max_touchpoints):
df_proj = df[['jid', 'campaigns', 'cats', 'click', 'cost', 'time_since_last_click_norm', 'timestamp_norm', 'conversion']]
x2d = df_proj.values
x3d_list = np.split(x2d[:, 1:], np.cumsum(np.unique(x2d[:, 0], return_counts=True)[1])[:-1])
x3d = []
y = []
for xi in x3d_list:
journey_matrix = np.apply_along_axis(np.hstack, 1, xi)
journey_matrix = journey_matrix[ journey_matrix[:, 5].argsort() ] # sort impressions by timestamp
n_touchpoints = len(journey_matrix)
padded_journey = []
if(n_touchpoints >= max_touchpoints):
padded_journey = journey_matrix[0:max_touchpoints]
else:
padded_journey = np.pad(journey_matrix, ((0, max_touchpoints - n_touchpoints), (0, 0)), 'constant', constant_values=(0))
x3d.append(padded_journey[:, 0:-1])
y.append(np.max(padded_journey[:, -1]))
return np.stack(x3d), y
x, y = features_for_lstm(df6, max_touchpoints = 15)
print(np.shape(x))
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.20, random_state = 1)
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size = 0.20, random_state = 1)
# +
from keras.models import Sequential
from keras.layers import Dense, LSTM
n_steps, n_features = np.shape(x)[1:3]
model = Sequential()
model.add(LSTM(64, dropout=0.2, recurrent_dropout=0.2, input_shape=(n_steps, n_features)))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train, batch_size=64, epochs=5, verbose=1, validation_data=(x_val, y_val))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
# -
# ## LSTM with Attention
# +
from keras.models import Sequential
from keras.layers import Dense, LSTM, Input, Lambda, RepeatVector, Permute, Flatten, Activation, Multiply
from keras.constraints import NonNeg
from keras import backend as K
from keras.models import Model
n_steps, n_features = np.shape(x)[1:3]
hidden_units = 64
main_input = Input(shape=(n_steps, n_features))
embeddings = Dense(128, activation='linear', input_shape=(n_steps, n_features))(main_input)
activations = LSTM(hidden_units, dropout=0.2, recurrent_dropout=0.2, return_sequences=True)(embeddings)
attention = Dense(1, activation='tanh')(activations)
attention = Flatten()(attention)
attention = Activation('softmax', name = 'attention_weigths')(attention)
attention = RepeatVector(hidden_units * 1)(attention)
attention = Permute([2, 1])(attention)
weighted_activations = Multiply()([activations, attention])
weighted_activations = Lambda(lambda xin: K.sum(xin, axis=-2), output_shape=(hidden_units,))(weighted_activations)
main_output = Dense(1, activation='sigmoid')(weighted_activations)
model = Model(inputs=main_input, outputs=main_output)
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train, batch_size=64, epochs=5, verbose=1, validation_data=(x_val, y_val))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
# -
# ## Analysis of LSTM-A Model
# +
def get_campaign_id(x_journey_step):
return np.argmax(x_journey_step[0:n_campaigns])
attention_model = Model(inputs=model.input, outputs=model.get_layer('attention_weigths').output)
a = attention_model.predict(x_train)
attributions = np.zeros(n_campaigns)
campaign_freq = np.ones(n_campaigns)
for i, journey in enumerate(a):
for step, step_contribution in enumerate(journey):
if(np.sum(x_train[i][step]) > 0):
campaign_id = get_campaign_id(x_train[i][step])
attributions[campaign_id] = attributions[campaign_id] + step_contribution
campaign_freq[campaign_id] = campaign_freq[campaign_id] + 1
# +
lstm_a = (attributions/campaign_freq)
fig = plt.figure(figsize=(15, 4))
ax = fig.add_subplot(111)
plt.bar( range(len(lstm_a[campaign_idx])), lstm_a[campaign_idx], label='LSTM-A' )
plt.xlabel('Campaign ID')
plt.ylabel('Contribution')
plt.legend(loc='upper left')
plt.show()
# +
fig = plt.figure(figsize=(15, 4))
ax = fig.add_subplot(111)
ratio = max(lta[campaign_idx]) / max(keras_logreg[campaign_idx])
plt.bar(np.linspace(0, len(campaign_idx), len(campaign_idx)), lta[campaign_idx], width=0.4, alpha=0.7, label='LTA' )
plt.bar(np.linspace(0, len(campaign_idx), len(campaign_idx)) - 0.3, keras_logreg[campaign_idx], width=0.4, alpha=0.7, label='Keras Log Reg' )
plt.xlabel('Campaign ID')
plt.ylabel('Contribution')
plt.legend(loc='upper left')
plt.show()
# +
fig = plt.figure(figsize=(15, 4))
ax = fig.add_subplot(111)
ratio = max(lta[campaign_idx]) / max(lstm_a[campaign_idx])
plt.bar(np.linspace(0, len(campaign_idx), len(campaign_idx)), lta[campaign_idx], width=0.4, alpha=0.7, label='LTA' )
plt.bar(np.linspace(0, len(campaign_idx), len(campaign_idx)) - 0.3, lstm_a[campaign_idx], width=0.4, alpha=0.7, label='LSTM-A' )
plt.xlabel('Campaign ID')
plt.ylabel('Contribution')
plt.legend(loc='upper left')
plt.show()
# -
# ## Simulation
# +
# Key assumption: If one of the campaigns in a journey runs out of budget,
# then the conversion reward is fully lost for the entire journey
# including both past and future campaigns
def simulate_budget_roi(df, budget_total, attribution, verbose=False):
budgets = np.ceil(attribution * (budget_total / np.sum(attribution)))
if(verbose):
print(budgets)
blacklist = set()
conversions = set()
for i in range(df.shape[0]):
campaign_id = get_campaign_id(df.loc[i]['campaigns'])
jid = df.loc[i]['jid']
if jid not in blacklist:
if budgets[campaign_id] >= 1:
budgets[campaign_id] = budgets[campaign_id] - 1
if(df.loc[i]['conversion'] == 1):
conversions.add(jid)
else:
blacklist.add(jid)
if(verbose):
if(i % 10000 == 0):
print('{:.2%} : {:.2%} budget spent'.format(i/df.shape[0], 1.0 - np.sum(budgets)/budget_total ))
if(np.sum(budgets) < budget_total * 0.02):
break
return len(conversions.difference(blacklist))
# +
pitches = [0.1, 0.25, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
attributions = [lta, keras_logreg, lstm_a]
for i, pitch in enumerate(pitches):
for j, attribution in enumerate(attributions):
reward = simulate_budget_roi(df6, 10000, attribution**pitch)
print('{} {} : {}'.format(pitch, j, reward))
| promotions/channel-attribution-lstm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Step 2: KGB (Known Good/Bad) training, used to score the reject set (the unknown labels in the training data)
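# The idea of this step, reduced to a minimal sketch: fit a model on the rows
# with known good/bad labels (0/1) and use it to score the rows labeled -1
# ("unknown", i.e. the reject set). Synthetic data and scikit-learn's
# LogisticRegression stand in here for the notebook's real features and its
# LightGBM/XGBoost stack:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({'f1': rng.normal(size=200),
                   'f2': rng.normal(size=200),
                   'label': rng.choice([0, 1, -1], size=200, p=[0.6, 0.2, 0.2])})

known = df[df['label'] != -1]    # rows with an observed good/bad outcome
unknown = df[df['label'] == -1]  # the reject set to be scored

clf = LogisticRegression().fit(known[['f1', 'f2']], known['label'])
scores = clf.predict_proba(unknown[['f1', 'f2']])[:, 1]  # P(bad) per rejected row
print(len(scores))
```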
# +
from IPython.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import seaborn as sns
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import atecml.data
from tqdm import tqdm
class BasicModel(object):
"""Parent class of basic models"""
def train(self, x_train, y_train, x_val, y_val):
"""return a trained model and eval metric on validation data"""
pass
def predict(self, model, x_test):
"""return the predicted result"""
pass
def get_oof(self, x_train, y_train, x_test, n_folds = 5):
"""K-fold stacking"""
num_train, num_test = x_train.shape[0], x_test.shape[0]
oof_train = np.zeros((num_train,))
oof_test = np.zeros((num_test,))
oof_test_all_fold = np.zeros((num_test, n_folds))
aucs = []
model_list = []
for i in range(0,n_folds):
val_index = DateFold[5]  # always validate on the last ~20% of the timeline
train_index = list(all_list - DateFold[i] - set(DateFold[5]))  # keep the validation window out of training
print('{0} fold, train {1}, val {2}'.format(i, len(train_index), len(val_index)))
x_tra, y_tra = x_train[train_index], y_train[train_index]
x_val, y_val = x_train[val_index], y_train[val_index]
#Over_sample
#X_resampled, y_resampled = SMOTE().fit_sample(x_tra,y_tra)
#model, auc = self.train(X_resampled, y_resampled, x_val, y_val)
model, auc = self.train(x_tra, y_tra, x_val, y_val)
aucs.append(auc)
model_list.append(model)
oof_train[val_index] = self.predict(model, x_val)
oof_test_all_fold[:, i] = self.predict(model, x_test)
oof_test = np.mean(oof_test_all_fold, axis=1)
print('all aucs {0}, average {1}'.format(aucs, np.mean(aucs)))
return oof_train, oof_test,model_list
import lightgbm as lgb
class LGBClassifier(BasicModel):
'''
Tuning ranges:
'num_leaves': range(35, 65, 5)
'learning_rate': [0.01, 0.05, 0.1, 0.3, 0.5, 0.7]
'min_child_weight': range(1, 6, 2)
'max_depth': range(3, 10, 2)
'subsample': [i/10.0 for i in range(6, 10)]; normally just set to 1
'colsample_bytree': [i/10.0 for i in range(6, 10)]; normally just set to 1
'reg_alpha', 'reg_lambda': [1e-5, 1e-2, 0.1, 1, 2, 2.5, 3]
'''
def __init__(self,boost_type,boost_round=1000,early_stop=100):
self.num_boost_round = boost_round
self.early_stopping_rounds = early_stop
self.params = {
'task': 'train',
'boosting_type': boost_type,
'colsample_bytree': 0.7,
'learning_rate': 0.05,
'max_bin': 255,
'max_depth': 3,
'metric': {'auc'},
'min_child_samples': 800,
'min_child_weight': 0.05,
'min_split_gain': 0,
'nthread': 40,
'num_leaves': 31,
'objective': 'binary',
'reg_alpha': 1,
'reg_lambda': 2,
'is_unbalance':'true',
#'scale_pos_weight': 99,
'subsample': 0.85,
'subsample_for_bin': 200000,
'subsample_freq': 1,
'use_missing': 'true',
'verbose' : -1,
}
print(self.params)
def train(self, x_train, y_train, x_val, y_val):
print('train with lgb model')
lgbtrain = lgb.Dataset(x_train, y_train)
lgbval = lgb.Dataset(x_val, y_val)
model = lgb.train(self.params,
lgbtrain,
valid_sets=lgbval,
verbose_eval = 50,
num_boost_round = self.num_boost_round,
early_stopping_rounds = self.early_stopping_rounds)
return model, model.best_score['valid_0']['auc']
def predict(self, model, x_test):
print('test with lgb model')
return model.predict(x_test, num_iteration=model.best_iteration)
def stack_layer1_result(X_train,rf_model_list,gbdt_model_list,dart_model_list):
with atecml.data.timer('Classification: Building Layer-1 Stack'):
rf_input_list = []
for idx in tqdm(range(len(rf_model_list))):
model = rf_model_list[idx]
_temp_df = model.predict(X_train,num_iteration=model.best_iteration)
rf_input_list.append(pd.DataFrame(_temp_df))
rf_oof_predict= np.array(pd.concat(rf_input_list,ignore_index=True,axis=1).mean(axis=1))
gbdt_input_list = []
for idx in tqdm(range(len(gbdt_model_list))):
model = gbdt_model_list[idx]
_temp_df = model.predict(X_train,num_iteration=model.best_iteration)
gbdt_input_list.append(pd.DataFrame(_temp_df))
gbdt_oof_predict= np.array(pd.concat(gbdt_input_list,ignore_index=True,axis=1).mean(axis=1))
dart_input_list = []
for idx in tqdm(range(len(dart_model_list))):
model = dart_model_list[idx]
_temp_df = model.predict(X_train,num_iteration=model.best_iteration)
dart_input_list.append(pd.DataFrame(_temp_df))
dart_oof_predict= np.array(pd.concat(dart_input_list,ignore_index=True,axis=1).mean(axis=1))
input_predict = [rf_oof_predict, gbdt_oof_predict, dart_oof_predict]
stacked_predict = np.concatenate([f.reshape(-1, 1) for f in input_predict], axis=1)
return stacked_predict
# +
# The training set is the dimension-expanded matrix built in step 1, with unknown-labeled rows filtered out
data = pd.read_pickle('./01_train.dat')
train_df = data[data['label']!=-1].reset_index(drop=True)
# The test set to be scored consists of the unknown-labeled rows
val_df = data[data['label']==-1].reset_index(drop=True)
predictors = [x for x in data.columns if x not in atecml.data.NOT_FEATURE_SUM]
DateFold={}
DateFold[0] = set(atecml.data.filter_date(train_df,start_date='2017-09-05',end_date='2017-09-14').index)
DateFold[1] = set(atecml.data.filter_date(train_df,start_date='2017-09-15',end_date='2017-09-24').index)
DateFold[2] = set(atecml.data.filter_date(train_df,start_date='2017-09-25',end_date='2017-10-04').index)
DateFold[3] = set(atecml.data.filter_date(train_df,start_date='2017-10-05',end_date='2017-10-14').index)
DateFold[4] = set(atecml.data.filter_date(train_df,start_date='2017-10-15',end_date='2017-10-24').index)
DateFold[5] = list(atecml.data.filter_date(train_df,start_date='2017-10-25',end_date='2017-11-24').index)
all_list = set(train_df.index)
# -
target='label'
x_train = np.array(train_df[predictors])
y_train = np.array(train_df[target])
x_test = np.array(val_df[predictors])
print(x_train.shape, y_train.shape, x_test.shape)
num_boost_round = 2000
num_early_stop = 50
# get output of first layer models and construct as input for the second layer
rf_classifier = LGBClassifier(boost_type='rf',boost_round=num_boost_round,early_stop=num_early_stop)
rf_oof_train, rf_oof_test,rf_model_list = rf_classifier.get_oof(x_train, y_train, x_test)
print(rf_oof_train.shape, rf_oof_test.shape)
# +
gbdt_classifier = LGBClassifier(boost_type='gbdt',boost_round=num_boost_round,early_stop=num_early_stop)
gbdt_oof_train, gbdt_oof_test,gbdt_model_list = gbdt_classifier.get_oof(x_train, y_train, x_test)
print(gbdt_oof_train.shape, gbdt_oof_test.shape)
dart_classifier = LGBClassifier(boost_type='dart',boost_round=num_boost_round,early_stop=num_early_stop)
dart_oof_train, dart_oof_test,dart_model_list = dart_classifier.get_oof(x_train, y_train, x_test)
print(dart_oof_train.shape, dart_oof_test.shape)
# -
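`get_oof` above comes from the project's own `LGBClassifier` wrapper, which is not shown here. As a rough, hypothetical sketch of the out-of-fold stacking pattern such helpers usually implement (the function and toy model below are illustrative, not the actual implementation):

```python
import numpy as np

def get_oof_sketch(fit_predict, x_train, y_train, x_test, n_folds=5):
    """Out-of-fold stacking sketch: each training row is predicted by a model
    that never saw it; test predictions are averaged over the fold models."""
    n = len(x_train)
    oof_train = np.zeros(n)
    oof_test = np.zeros((n_folds, len(x_test)))
    for k, val_idx in enumerate(np.array_split(np.arange(n), n_folds)):
        tr_idx = np.setdiff1d(np.arange(n), val_idx)
        # train on the other folds, predict the held-out fold and the test set
        oof_train[val_idx] = fit_predict(x_train[tr_idx], y_train[tr_idx], x_train[val_idx])
        oof_test[k] = fit_predict(x_train[tr_idx], y_train[tr_idx], x_test)
    return oof_train.reshape(-1, 1), oof_test.mean(axis=0).reshape(-1, 1)

# toy "model" for demonstration: always predict the training-fold mean of y
mean_model = lambda x_tr, y_tr, x_pred: np.full(len(x_pred), y_tr.mean())
oof_tr, oof_te = get_oof_sketch(mean_model, np.zeros((10, 2)), np.arange(10) % 2, np.zeros((3, 2)))
```

Each first-layer model contributes one OOF column for the training set and one averaged column for the test set; stacking them side by side yields the second-layer features.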
stacked_train = stack_layer1_result(x_train,rf_model_list,gbdt_model_list,dart_model_list)
stacked_test = stack_layer1_result(x_test,rf_model_list,gbdt_model_list,dart_model_list)
# +
# Use XGBoost as the second-layer model
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
model = XGBClassifier(
learning_rate =0.05,
n_estimators=500,
max_depth=5,
min_child_weight=1,
gamma=0,
subsample=0.8,
colsample_bytree=0.9,
objective= 'binary:logistic',
eval_metric='auc',
nthread=40,
seed=27)
# split for validation
n = int(stacked_train.shape[0] * 0.8)
x_tra, y_tra = stacked_train[:n], y_train[:n]
x_val, y_val = stacked_train[n:], y_train[n:]
model.fit(x_tra,y_tra)
y_pred = pd.DataFrame(model.predict_proba(x_val))[1]
_f1,_f2,_f3 = atecml.data.accuracy_validation(y_val,y_pred)
# +
# predict on test data
final_model = XGBClassifier(
learning_rate =0.05,
n_estimators=500,
max_depth=5,
min_child_weight=1,
gamma=0,
subsample=0.8,
colsample_bytree=0.9,
objective= 'binary:logistic',
eval_metric='auc',
nthread=40,
seed=27)
final_model.fit(stacked_train, y_train)
test_prediction = final_model.predict_proba(stacked_test)
result=pd.DataFrame()
result['id'] = val_df['id']
result['score'] = pd.DataFrame(test_prediction)[1]
result.to_pickle('./reject_inf.dat')
# -
result.hist(bins=100)
len(result[result.score > 0.1])
result[result.score > 0.1].hist()
result
| workspace/WOE_IV_66/02_KGB_Train.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import glob
import numpy as np
import pandas as pd
from collections import defaultdict
from scipy.stats import pearsonr
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams['font.family'] = 'Times New Roman'
import matplotlib.gridspec as gridspec
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
import matplotlib
matplotlib.rcParams['text.usetex'] = True
matplotlib.rcParams['text.latex.preview'] = True
plt.rc('font', family='serif', serif=['Times'])
import warnings
warnings.filterwarnings("ignore")
# -
# !which latex
model2name = {
"m3p": "M$^3$P",
"uc2": "UC$^2$",
"ctrl_muniter": "mUNITER",
"ctrl_xuniter": "xUNITER",
"ctrl_lxmert": "LXMERT",
"ctrl_uniter": "UNITER",
"ctrl_vilbert": "ViLBERT",
"ctrl_visualbert": "VisualBERT",
"ctrl_vl-bert": "VL-BERT",
}
lang2name = {
'en': 'ENG',
'ar': 'ARB',
'bn': 'BEN',
'bg': 'BUL',
'da': 'DAN',
'et': 'EST',
'de': 'DEU',
'el': 'ELL',
'fr': 'FRA',
'id': 'IND',
'ja': 'JPN',
'ko': 'KOR',
'zh': 'CMN',
'pt': 'POR',
'ru': 'RUS',
'es': 'SPA',
'sw': 'SWA',
'ta': 'TAM',
'tr': 'TUR',
'vi': 'VIE',
}
lang2ix = {l: ix for ix, l in enumerate(lang2name.keys())}
# ## Wikipedia size
art_df = pd.read_csv("wiki_sizes.csv")
art_df.sort_values('articles', inplace=True)
art_df.head()
lang2size = {l: art_df[(art_df['language'] == l)]['articles'].values[0] for l in lang2name.keys()}
# +
colors = ['#000000', '#E69F00', '#56B4E9', '#009E73', '#F0E442', '#0072B2', '#D55E00', '#CC79A7']
f, ax = plt.subplots(1, 1, figsize=(18,7))
xs = art_df['language'][::-1][1:]
ys = art_df['articles'][::-1][1:]
ax.bar(xs, ys/1e6, color=colors[5])
ax.grid(alpha=0.3)
ax.tick_params(axis='both', which='major', labelsize=24)
ax.set_xticklabels([lang2name[l] for l in xs], fontsize=18)
ax.set_ylabel('\# of Wikipedia articles (in millions)', fontsize=26)
f.savefig("wiki_sizes.pdf", bbox_inches="tight")
# +
f, ax = plt.subplots(1, 1, figsize=(18,7))
colors = ['#ff9dc8', '#e20134', '#ffac3b', '#00b408', '#1E88E5']
markers = ['X', 's', '^', 'o', 'd']
legend_elements = []
for m, n in zip(markers[1:], ['ctrl_muniter', 'ctrl_xuniter', 'uc2', 'm3p']):
legend_elements.append(Line2D([0], [0], marker=m, color='#777777', label=model2name[n],
markerfacecolor="#777777", markeredgecolor='k', markersize=10, linewidth=0))
lgd2 = ax.legend(handles=legend_elements, title="\\textbf{Model}", loc='upper left', bbox_to_anchor=(0, 1.015, 0, 0),
ncol=4, fontsize=18, title_fontsize=20)
model2avgs = defaultdict(list)
for it, dset in enumerate(['XVNLI', 'xGQA', 'MaRVL', 'xFlickrCO', 'WIT']):
j = 0.01 * (-2+it)
try:
dset_0 = pd.read_csv(f"../results/{dset.lower()}/{dset}_0.csv")[:4]
except FileNotFoundError:
dset_0 = pd.read_csv(f"../results/{dset.lower()}/{dset}_ir_0.csv")[:4]
for im, m in enumerate(['ctrl_muniter', 'ctrl_xuniter', 'uc2', 'm3p']):
for lang in dset_0.columns[2:-1]:
val = dset_0[(dset_0['model'] == m)][lang]
x = lang2size[lang]/1e6
ax.plot(x+j, val, marker=markers[im+1], markersize=10, markeredgecolor='k', linewidth=3, color=colors[it])
for it, dset in enumerate(['XVNLI', 'xGQA', 'MaRVL', 'xFlickrCO', 'WIT']):
x2avg = {}
xs = []
vals = []
try:
dset_0 = pd.read_csv(f"{dset.lower()}/{dset}_0.csv")[:4]
except FileNotFoundError:
dset_0 = pd.read_csv(f"{dset.lower()}/{dset}_ir_0.csv")[:4]
for lang in dset_0.columns[2:-1]:
v = dset_0[lang].values
x2avg[lang2size[lang]/1e6] = np.mean(v)
vals.extend(v)
xs.extend([lang2size[lang]/1e6]*len(v))
p = np.polyfit(xs, vals, 1, rcond=None, full=False, w=None, cov=False)
ys = [np.poly1d(p)(x) for x in sorted(xs)]
corr = pearsonr([x2avg[x] for x in sorted(x2avg.keys())], [x for x in sorted(x2avg.keys())])
ys = [np.poly1d(p)(x) for x in art_df['articles'].values[:-1]/1e6]
xs = art_df['articles'].values[:-1]/1e6
dset = "xFlickr\&CO" if dset == "xFlickrCO" else dset
ax.plot(xs, ys, linewidth=4, color=colors[it], alpha=0.5, label=f"{dset}, $\\rho$=%.2f" % corr[0])
ax.grid(alpha=0.3)
ax.tick_params(axis='both', which='major', labelsize=24)
ax.minorticks_off()
ax.set_xlim(0.062, 3)
ax.set_xscale('log')
ax.set_xticks(art_df['articles'].values[:-1]/1e6)
nums = ['%.2f' % v for v in art_df['articles'].values[:-1]/1e6]
nums2 = ['0.07','0.12','0.14','0.20','','0.27','','0.46','','0.61','1.08','','','1.27','','1.75','','','2.65']
ax.set_xticklabels(['%.2f' % float(v) if v != '' else '' for v in nums2 ], fontsize=20)
ax.set_xlabel('\# of Wikipedia articles (in millions)', fontsize=24)
ax.set_ylabel('Accuracy', fontsize=24)
ax.legend(title='\\textbf{Dataset}', loc='upper center', ncol=5, bbox_to_anchor=(0.5, 1.175, 0, 0), fontsize=17.5, title_fontsize=18)
plt.gca().add_artist(lgd2)
f.savefig("wiki_zero-shot-scores.svg", bbox_extra_artists=(lgd2,), bbox_inches="tight")
# -
# ## Typology
# +
import sys
sys.path.append("../tools/lang2vec")
import lang2vec.lang2vec as l2v # see [https://github.com/antonisa/lang2vec] for installation
from scipy import stats
import pandas as pd
def uriel_distance_vec(languages):
"""
Adapted from langrank [https://github.com/neulab/langrank/blob/master/langrank.py]
"""
geographic = l2v.geographic_distance(languages)
genetic = l2v.genetic_distance(languages)
inventory = l2v.inventory_distance(languages)
syntactic = l2v.syntactic_distance(languages)
phonological = l2v.phonological_distance(languages)
featural = l2v.featural_distance(languages)
uriel_features = {n:v for n, v in zip(['genetic', 'syntactic', 'featural', 'phonological', 'inventory', 'geographic'],
[genetic, syntactic, featural, phonological, inventory, geographic])}
return uriel_features
uriel = uriel_distance_vec([v.lower() for v in lang2name.values()])
# +
# dset-URIEL correlations
dset2vals = {}
dset2dists = {}
dist2tasks_r = np.zeros((5, len(uriel)))
dist2tasks_p = np.zeros((5, len(uriel)))
for it, dset in enumerate(['XVNLI', 'xGQA', 'MaRVL', 'xFlickrCO', 'WIT']):
try:
dset_0 = pd.read_csv(f"{dset.lower()}/{dset}_0.csv")[:4]
except FileNotFoundError:
dset_0 = pd.read_csv(f"{dset.lower()}/{dset}_ir_0.csv")[:4]
en_v = dset_0['en'].values
dist2vals = defaultdict(list)
diffs = []
for lang in dset_0.columns[2:-1]:
val = dset_0[lang].values
diffs.extend(val)
for k, v in uriel.items():
dist2vals[k].extend([v[0,lang2ix[lang]]] * len(val))
dset2dists[dset] = []
for ix, k in enumerate(uriel.keys()):
pearson_r, pearsonp = stats.pearsonr(diffs, dist2vals[k])
dist2tasks_r[it][ix] = pearson_r
dist2tasks_p[it][ix] = pearsonp
dset2vals[dset] = diffs
dset2dists[dset].append(dist2vals[k])
for it, dset in enumerate(['XVNLI', 'xGQA', 'MaRVL', 'xFlickrCO', 'WIT']):
print(dset, end=" ")
for ix, k in enumerate(uriel.keys()):
print(f"& %.2f (%.3f)" % (dist2tasks_r[it][ix], dist2tasks_p[it][ix]), end=" ")
print("\\\\")
# +
f, ax = plt.subplots(1, 1, figsize=(18,7))
xmin, xmax = (0.28, 0.64)
ax.set_xlim(xmin, xmax)
ax.set_ylim(0, 75)
colors = ['#ff9dc8', '#e20134', '#ffac3b', '#00b408', '#1E88E5']
markers = ['X', 's', '^', 'o', 'd']
amodels = ['ctrl_muniter', 'ctrl_xuniter', 'uc2', 'm3p']
legend_elements = []
for m, n in zip(markers[1:], ['ctrl_muniter', 'ctrl_xuniter', 'uc2', 'm3p']):
legend_elements.append(Line2D([0], [0], marker=m, color='#777777', label=model2name[n],
markerfacecolor="#777777", markeredgecolor='k', markersize=10, linewidth=0))
lgd2 = ax.legend(handles=legend_elements, title="\\textbf{Model}", loc='upper left', bbox_to_anchor=(0, 1.02, 0, 0),
ncol=4, fontsize=18, title_fontsize=20)
dset2sims = {}
for dset, ll in dset2dists.items():
dset2sims[dset] = []
for l in ll:
dset2sims[dset].append([1-e for e in l])
pearsons = []
for it, (dset, ys) in enumerate(dset2vals.items()):
xs = dset2sims[dset][1]
for im in range(4):
j = np.random.randint(1,3,1)[0]/100
j *= (np.random.rand() > 0.5)
vals = [x for ix, x in enumerate(ys) if ix % 4 == im]
diss = [x+j for ix, x in enumerate(xs) if ix % 4 == im]
ax.plot(diss, vals, ls='', marker=markers[im+1], markersize=10, markeredgecolor='k', color=colors[it])
pearson_r, pearson_p = stats.pearsonr(xs, ys)
pearsons.append(pearson_r)
print(dset, pearson_r)
p = np.polyfit(xs, ys, 1, rcond=None, full=False, w=None, cov=False)
ys = [np.poly1d(p)(x) for x in np.arange(xmin, xmax+0.1, 0.1)]
ax.plot(np.arange(xmin, xmax+0.1, 0.1), ys, linewidth=4, color=colors[it], alpha=0.5)
legend_elements = [
Line2D([0], [0], color=colors[0], label='XVNLI, $\\rho$=%.2f' % pearsons[0], linewidth=3, markersize=0, linestyle='-'),
Line2D([0], [0], color=colors[1], label='xGQA, $\\rho$=%.2f' % pearsons[1], linewidth=3, markersize=0, linestyle='-'),
Line2D([0], [0], color=colors[2], label='MaRVL, $\\rho$=%.2f' % pearsons[2], linewidth=3, markersize=0, linestyle='-'),
Line2D([0], [0], color=colors[3], label='xFlickr\&CO, $\\rho$=%.2f' % pearsons[3], linewidth=3, markersize=0, linestyle='-'),
Line2D([0], [0], color=colors[4], label='WIT, $\\rho$=%.2f' % pearsons[4], linewidth=3, markersize=0, linestyle='-'),
]
ax.legend(handles=legend_elements, title='\\textbf{Dataset}', loc='upper center',
ncol=5, bbox_to_anchor=(0.5, 1.179, 0, 0), fontsize=17.5, title_fontsize=18)
ax.grid(alpha=0.3)
ax.tick_params(axis='both', which='major', labelsize=24)
ax.minorticks_off()
ax.set_xlabel('Syntactic similarity', fontsize=24)
ax.set_ylabel('Accuracy', fontsize=24)
plt.gca().add_artist(lgd2)
f.savefig("syntactic-sim_zero-shot-scores.pdf", bbox_extra_artists=(lgd2,), bbox_inches="tight")
| notebooks/Results-Properties.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp core
# -
# # module name here
#
# > API details.
#hide
from nbdev.showdoc import *
#export
def explore_df(df):
"""
A more advanced version of describe for tabular exploratory data analysis. Includes additional information such as
missing observations, unique observations, constant-feature flagging, all-missing-feature flagging, feature types, and outlier
values.
Parameters
----------
df : pandas df, required
Pandas dataframe object
Returns
-------
pandas df
Returns a pandas dataframe object
Usage
-----
df = pd.DataFrame({"x1": ["a", "b", "c", "a"], "x2":['x','y','x','x'], "y": [1,1,0,1]})
eda = explore_df(df=df)
"""
import pandas as pd
import numpy as np
ft = pd.DataFrame()
ft['type']=df.dtypes.astype(str)
ft['feature']=ft.index
ft['unique']=df.nunique()
ft['missing']= df.isnull().sum()
ft['constant']=np.where(ft['unique']==1,1,0)
ft['all_missing']=np.where(ft['missing']==df.shape[0],1,0)
numeric = ft.loc[(ft['type'].str.contains('float'))]['feature']
numeric = pd.concat([numeric, ft.loc[(ft['type'].str.contains('int'))]['feature']])  # Series.append was removed in pandas 2.0
categorical = ft.loc[(ft['type'].str.contains('object'))]['feature']
# Summary statistics
lower=df[numeric].quantile(q=0.25)
upper=df[numeric].quantile(q=0.75)
ft['min']=df[numeric].min()
ft['q1']=lower
ft['median']=df[numeric].median()
ft['mean']=df[numeric].mean()
ft['q3']=upper
ft['max']=df[numeric].max()
# Calculate outlier values
iqr = upper - lower
lower=lower-(1.5*iqr)
upper=upper+(1.5*iqr)
ft['lower_outlier']=lower
ft['upper_outlier']=upper
ft['skewness']=df[numeric].skew()
ft['class'] = np.where(ft['type'].str.contains('float'), 'numeric', None)
ft['class'] = np.where(ft['type'].str.contains('int'), 'numeric', ft['class'])
ft['class'] = np.where(ft['type'].str.contains('object'), 'categorical', ft['class'])
ft['class'] = np.where(ft['type'].str.contains('datetime'), 'datetime', ft['class'])
ft['class'] = np.where(ft['class'].isin(['numeric','integer']) &
(ft['min'] == 0) &
(ft['max'] == 1) &
(ft['unique'] == 2), 'indicator', ft['class'])
ft=ft[['feature','type','class','missing','unique','constant','all_missing','min','q1','median',
'mean','q3','max','lower_outlier','upper_outlier','skewness']]
ft=ft.reset_index(drop=True)
return ft
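The `lower_outlier`/`upper_outlier` columns above apply Tukey's 1.5×IQR rule; as a standalone sketch of that calculation:

```python
import numpy as np

def iqr_bounds(values):
    """Tukey's rule: points outside [q1 - 1.5*IQR, q3 + 1.5*IQR] are flagged as outliers."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

lo, hi = iqr_bounds([1, 2, 3, 4, 100])  # 100 falls outside the upper bound
```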
from nbdev.export import *
notebook2script()
| 00_core.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="hXuVrOBk6xip" colab_type="text"
# # Python 101 for Absolute Beginners
# ## [Taipei Agile AI Community](https://www.meetup.com/Taipei-Agile-AI/) Meetup, 2019-04-23
# 
#
# + [markdown] id="9dmVhET3fVmQ" colab_type="text"
# ## The Agile Manifesto
# We are uncovering better ways of developing software by doing it and helping others do it.
#
# Through this work we have come to value:
#
# > Individuals and interactions over processes and tools
#
# > Working software over comprehensive documentation
#
# > Customer collaboration over contract negotiation
#
# > Responding to change over following a plan
#
# That is, while there is value in the items on the right, we value the items on the left more.
# + [markdown] id="yDFm3AmGgVcs" colab_type="text"
# ## The Zen of Python
#
# > Beautiful is better than ugly.
#
# Python aims for elegant code.
#
# > Explicit is better than implicit.
#
# Good code is explicit, with consistent naming and style.
#
# > Simple is better than complex.
#
# Good code is simple, without convoluted internals.
#
# > Complex is better than complicated.
#
# If complexity is unavoidable, keep the relationships between parts understandable and the interfaces clean.
#
# > Flat is better than nested.
#
# Good code is flat, without excessive nesting.
#
# > Sparse is better than dense.
#
# Good code is well spaced; don't try to solve a problem in a single line.
#
# > Readability counts.
#
# Good code is readable.
#
# > Special cases aren't special enough to break the rules.
#
# > Although practicality beats purity.
#
# Even practical special cases must not break these rules (the rules come first).
#
# > Errors should never pass silently.
#
# Don't swallow every error.
#
# > Unless explicitly silenced.
#
# Unless you are sure you need to (catch exceptions precisely; don't write `except: pass`-style code).
#
# > In the face of ambiguity, refuse the temptation to guess.
#
# When several interpretations are possible, resist guessing.
#
# > There should be one-- and preferably only one --obvious way to do it.
#
# Instead, look for one, preferably exactly one, obvious solution.
#
# > Although that way may not be obvious at first unless you're Dutch.
#
# Though that isn't always easy, since you aren't the creator of Python.
#
# > Now is better than never.
#
# Doing it now beats never doing it.
#
# > Although never is often better than *right* now.
#
# But acting without thinking is worse than not acting (think before you code).
#
# > If the implementation is hard to explain, it's a bad idea.
#
# If you can't explain your solution to others, it's probably not a good one.
#
# > If the implementation is easy to explain, it may be a good idea.
#
# And vice versa (a yardstick for evaluating solutions).
#
# > Namespaces are one honking great idea -- let's do more of those!
#
# Namespaces are a wonderful idea; let's use more of them.
# + [markdown] id="mutR26q4TEv1" colab_type="text"
# ## Your First Python Program
#
# For our first program, let's use the print() function to say hello to everyone!
# + id="s8QFM48HTo63" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 441} outputId="63563a09-6132-4b66-ad75-05083b3ee119" language="html"
# <a href="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#nreom4" target="_blank">
# https://blockly-demo.appspot.com/static/demos/code/index.html?lang=en#nreom4
# </a>
# <iframe src="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#nreom4" width="920" height="400"></iframe>
# + id="RxJRjUyXT7cS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="82d1094c-de34-4d58-969f-128864b414bb"
# Practice on your own: enter your code below
print("Hi! Everybody")
# + [markdown] id="lhHFoEZ3J3DK" colab_type="text"
# ## 1. Variables and Data Types
#
# A variable is so called because its value can change; its counterpart, the constant, cannot change.
#
# In Python, = assigns a value to a variable, and print() prints a variable's value.
# + id="kv2a--WLMbnQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="f299486b-f5f1-4f1c-dcde-14f5230df1e0"
# A hash sign starts a single-line comment
# Set the variable x to 8
# In Python, 8 is a numeric constant
x = 8
# Print the value of x
print(x)
# Set the variable x to 88
x = 88
print(x)
# Triple single quotes above and below enclose a multi-line comment
'''
Set the variable x to the string '發發發'
In Python, '發發發' is a string constant
'''
x = '發發發'
print(x)
# print() can also print constants directly
print(5168) # print the numeric constant 5168
# + id="OI18wGaz6ilT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 441} outputId="53d2560a-4340-46e6-e1dc-0329118be3c3" language="html"
# <a href="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#6egvrr" target="_blank">
# https://blockly-demo.appspot.com/static/demos/code/index.html?lang=en#6egvrr
# </a>
# <iframe src="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#6egvrr" width="920" height="400"></iframe>
# + id="OljIXyWB0WG2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="13f001f6-d759-4f58-edeb-e296911716dc"
# Practice on your own: enter your code below
x = 8
print(x)
x = 88
print(x)
x = '發發發'
print(x)
台積電股價 = 100  # variable names may use non-ASCII characters: "TSMC share price"
一張台積電 = 台積電股價 * 1000  # "one lot of TSMC" = share price * 1000 shares
print(一張台積電)
# + [markdown] id="yheQhiHIFL8q" colab_type="text"
# In Python, variables and constants are further divided into different types. Here are four of the most common:
#
# ---
# int: integer
#
# float: floating-point number
#
# string: text
#
# list: sequence of values
#
# + [markdown] id="J0jegH53RVct" colab_type="text"
# ### 1.1 int (integer) and float (floating-point)
#
# **int** and **float** are both numeric data types:
#
# **int** is an integer, i.e. a number without a decimal point.
#
# **float** is a floating-point number, i.e. any number with a decimal point.
# + id="5FHduA7LSG5k" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="d10e0509-a95d-49e6-99a6-bf12e92401ff"
x = 777
y = 0.1
print(x + y)
add = x + 7
print(add)
minus = x - 7
print(minus)
mul = x * 7
print(mul)
div = x / 7
print(div)
# + id="IiFn-h_80YvF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 591} outputId="a4bbef52-9fd9-4a6c-84c3-fdbb60e00683" language="html"
# <a href="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#hzrotc" target="_blank">
# https://blockly-demo.appspot.com/static/demos/code/index.html?lang=en#hzrotc
# </a>
# <iframe src="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#hzrotc" width="920" height="550"></iframe>
# + id="XKyvhs121OUS" colab_type="code" colab={}
# Practice on your own: enter your code below
# + [markdown] id="iSMFCcxxSR6h" colab_type="text"
# ### 1.2 string
# A string stores text; in Python a string constant must be wrapped in a pair of single quotes '' or double quotes "".
# + id="BaKVe9d1SgmV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="d99b72be-5b61-4210-8f7d-153b53ddf08c"
x = 'hello'
y = 'world'
print(x)
print(y)
# String operations differ from numeric operations
print(2 + 1)
print('2' + '1')
# + id="Sj3eVQ1s5hIP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 461} outputId="f7b0a315-3227-4a81-a519-3d4ebd34e0e4" language="html"
# <a href="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#a3ibr2" target="_blank">
# https://blockly-demo.appspot.com/static/demos/code/index.html?lang=en#a3ibr2
# </a>
# <iframe src="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#a3ibr2" width="920" height="400"></iframe>
# + id="IzpsEjrD1LRK" colab_type="code" colab={}
# Practice on your own: enter your code below
# + [markdown] id="7B63EC_LSj3i" colab_type="text"
# ### 1.3 list
# A list holds multiple values; in Python it is created with square brackets [].
#
# Each value in a list is called an element and can be an int, float, string, or even another list.
# + id="iii-oU5xAEUD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 158} outputId="4c322166-cf07-488a-9f87-af9d29f6d041"
a = ['hello', 'world'] # a list of 2 string constants
print(a)
b = [1, 2, 3] # a list of 3 int constants
print(b)
c = [0.1, 0.2, 0.3, 0.4] # a list of 4 float constants
print(c)
d = ['1', 2, 0.3] # a list of 3 constants of different types
print(d)
# len() returns the length of a list
print(len(a))
print(len(b))
# Access specific elements of a list
print(a[1]) # print element 1 of list a; Python lists are indexed from 0
print(c[1:3]) # print elements 1-2 of list c; note that 1:3 does not include 3
# + id="79Y06QFaO-zQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 911} outputId="ee192180-bf57-4b19-b084-378a26f117b1" language="html"
# <a href="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#bjoxm2" target="_blank">
# https://blockly-demo.appspot.com/static/demos/code/index.html?lang=en#bjoxm2
# </a>
# <iframe src="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#bjoxm2" width="920" height="850"></iframe>
# + id="E5BTjCjzBDKN" colab_type="code" colab={}
# Practice on your own: enter your code below
# + id="QxOkVrBSVGOe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="d1c963ed-2ddf-44f6-a062-7ebcf6152084"
a = ['hello', 'world'] # a list of 2 string constants
print(a)
# Modify a list element
a[1] = 'everybody'
print(a)
# Append an element
a.append("let's coding")
print(a)
# Remove an element
a.pop(1)
print(a)
# + id="7UPPo6vHBD3k" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 641} outputId="33c42001-1efc-4358-e698-7bac9c2c74de" language="html"
# <a href="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#fozeo8" target="_blank">
# https://blockly-demo.appspot.com/static/demos/code/index.html?lang=en#fozeo8
# </a>
# <iframe src="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#fozeo8" width="920" height="600"></iframe>
# + id="3gOzESQ9SoKI" colab_type="code" colab={}
# Practice on your own: enter your code below
# + [markdown] id="CIqOZcpWYkDF" colab_type="text"
# ## 2. Loops (the for loop)
# A for loop is typically used when the number of repetitions is known in advance; the loop specifies the loop variable's initial value, end value, and step.
#
# The loop variable runs from the initial value up to the value just before the end value, increasing (or decreasing) by the step on each pass.
# + id="PWJT5HvZY5Qw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 263} outputId="da94ad7a-b473-4a92-af9a-9f781a3e9089"
a = [1, 2, 3, 4]
# Take each element of a in turn, store it in the variable i, and print it
for i in a:
print(i) # note: Python uses indentation to decide whether code is inside the loop
# range(start, stop, step) generates a sequence of consecutive numbers
j = 0
for i in range(1, 16, 2):
print(i)
j = j + i
print(j)
# + id="JSumae9uXxwy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 641} outputId="f8b817d0-da1d-4604-cd12-36b4233516a4" language="html"
# <a href="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#aag7uu" target="_blank">
# https://blockly-demo.appspot.com/static/demos/code/index.html?lang=en#aag7uu
# </a>
# <iframe src="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#aag7uu" width="920" height="600"></iframe>
# + id="hwTz_hGxYUGj" colab_type="code" colab={}
# Practice on your own: enter your code below
# + [markdown] id="rzYzthWtZE7X" colab_type="text"
# ## 3. Conditionals (if ... else ...)
# Let the program execute different code blocks depending on a condition.
# + id="vfyaIPZuZO0l" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="9a89865f-22a0-463b-9f0e-e7714daadc88"
'''
if condition:
block executed when the condition is true
'''
score = int(input('Enter your score: '))
if score >= 60:
print('Well done, keep it up')
'''
if condition:
block executed when the condition is true
else:
block executed when the condition is false
'''
score = int(input('Enter your score: '))
if score >= 60:
print('Well done, keep it up')
else:
print('Keep at it, you can do better')
# + id="XAEJPVPIb9iv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 591} outputId="f396221e-6e94-4e9c-fed4-e0573c500587" language="html"
# <a href="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#f3wsiw" target="_blank">
# https://blockly-demo.appspot.com/static/demos/code/index.html?lang=en#f3wsiw
# </a>
# <iframe src="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#f3wsiw" width="920" height="550"></iframe>
# + id="f0OaRyMPb-D1" colab_type="code" colab={}
# Practice on your own: enter your code below
# + [markdown] id="K55AYub3Zbxe" colab_type="text"
# ## 4. Functions
# A function is a reusable block of code with a specific purpose; print(), len(), and range() used above are all functions.
#
# In practice, commonly used code is declared as a function so other code can call it.
# + id="-Y6fduuTZjic" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="25d2842b-90d5-4ace-8b86-e7f8e1bf4693"
# A function can take multiple parameters
def print_info(name, gender):
print('Name:', name)
print('Gender:', gender)
print_info(gender='Male', name='Jason') # with keyword arguments, the order need not match the declaration
# Declare a function that takes height and weight and prints the BMI
def printBMI(height, weight):
bmi = weight/(height**2)
print('BMI is:', bmi) # note: Python uses indentation to decide whether code is inside the function
printBMI(1.8, 80)
# A function can also return a value
def calBMI(height, weight):
bmi = weight/(height**2)
return bmi
x = calBMI(1.78, 60)
if x < 20:
print(x)
print('BMI is:', x)
# + id="3RyoUulhya3L" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 961} outputId="a9e9413f-c99f-468f-ca24-afd52353e38c" language="html"
# <a href="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#exttx9" target="_blank">
# https://blockly-demo.appspot.com/static/demos/code/index.html?lang=en#exttx9
# </a>
# <iframe src="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#exttx9" width="920" height="920"></iframe>
# + id="0-fq2pvmzmI9" colab_type="code" colab={}
# Practice on your own: enter your code below
# + [markdown] id="COn1hV4hlY-t" colab_type="text"
# # Worked Example - "Ultimate Code" (number-guessing game)
#
# + id="63HgjF6MpUo1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 641} outputId="fefc4c5b-7a59-45cf-b7de-098f9769757f" language="html"
# <a href="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#vki8h5" target="_blank">
# https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#vki8h5
# </a>
# <iframe src="https://blockly-demo.appspot.com/static/demos/code/index.html?lang=zh-hant#vki8h5" width="920" height="600"></iframe>
# + id="LSnftJ_JlZj-" colab_type="code" colab={}
# Practice on your own: enter your code below
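For reference, a minimal sketch of the range-narrowing logic behind the "Ultimate Code" game (the interactive input() loop is left out so the helper stays simple; the function name is illustrative):

```python
def narrow_range(secret, guess, low, high):
    """Return the new (low, high) bounds after a guess, plus whether the guess hit."""
    if guess == secret:
        return low, high, True
    if guess < secret:
        return guess, high, False  # the secret is above the guess
    return low, guess, False       # the secret is below the guess

low, high, hit = narrow_range(secret=50, guess=30, low=1, high=100)  # → (30, 100, False)
```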
# + [markdown] id="7877m1mh0M1s" colab_type="text"
# # Self-Practice - A Complete BMI Calculator
#
# Refer to the BMI test feature on Taiwan's Ministry of Health and Welfare (health99) site and build a similar program:
#
# http://health99.hpa.gov.tw/OnlinkHealth/Onlink_BMI.aspx
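One possible sketch for this exercise is below; the BMI category thresholds (18.5 / 24 / 27) are an assumption to check against the linked site, not taken from it:

```python
def bmi_report(height_m, weight_kg):
    """Compute BMI and classify it (thresholds assumed: 18.5 / 24 / 27)."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        label = 'underweight'
    elif bmi < 24:
        label = 'normal'
    elif bmi < 27:
        label = 'overweight'
    else:
        label = 'obese'
    return round(bmi, 1), label

print(bmi_report(1.8, 80))  # → (24.7, 'overweight')
```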
| Taipei_Agile_AI_Meetup_20190423.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from pandas import DataFrame
# +
#FILE INPUT
file_input = str(input("Enter the csv file name:"))
if not '.csv' in file_input:
file_input += '.csv'
# -
data = pd.read_csv(file_input)
data.head()
# +
#REARRANGING COLUMNS AND CREATING ONE IF NOT THERE
if 'Service Name' not in data:
data['Service Name'] = ""
data = data[["BILLED_DATE", "ID", "SERVICE_type", "Unique ID check", "NET_AMOUNT", "INVOICESTATUS", "COLLECTED", "APPROVEDDATE", "Service Name"]]
data.head()
# -
#EXPORTING DATAFRAME TO ONE MAIN EXCEL FILE - SidekickEDGE - Python Exercise.xlsx
data.to_excel('SidekickEDGE - Python Exercise.xlsx', sheet_name=file_input)
# # STEP 2
# +
#Importing data from the columns of the input csv file and shifting it into the data_shift masterview file; note this replaces all data previously in the file
'''
data = pd.read_csv(file_input)
data = data[["ID", "SERVICE_type", "Unique ID check", "Service Name", "NET_AMOUNT", "INVOICESTATUS", "BILLED_DATE", "COLLECTED", "APPROVEDDATE"]]
data_shift = data[["ID", "SERVICE_type", "Unique ID check", "Service Name", "NET_AMOUNT", "INVOICESTATUS", "BILLED_DATE", "COLLECTED", "APPROVEDDATE"]]
'''
data = pd.read_csv(file_input)
data = data[["ID", "SERVICE_type", "Unique ID check", "Service Name", "NET_AMOUNT", "INVOICESTATUS", "BILLED_DATE", "COLLECTED", "APPROVEDDATE"]]
df = pd.DataFrame(data)
selected_columns = df[["ID", "SERVICE_type", "Unique ID check", "Service Name", "NET_AMOUNT", "INVOICESTATUS", "BILLED_DATE", "COLLECTED", "APPROVEDDATE"]]
data_shift = selected_columns.copy()
print(data_shift)
data_shift.to_excel('mv.xlsx', sheet_name='new_sheet_name')
| Sidekick/step1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: VPython
# language: python
# name: vpython
# ---
# # The perihelion motion of Mercury - Base solution
# This notebook extends the base solution to measure and display the perihelion motion of Mercury.
#
# It extends `base_solution.ipynb` by keeping track of the location of the perihelion
# of Mercury. It computes and outputs the angle by which it changes over
# the course of the simulation.
#
# The stopping criterion for the simulation is different than in `base_solution.ipynb`. It uses a fixed number
# of revolutions around the sun instead of a fixed run time.
# ## Importing VPython
from vpython import *
# ## Defining parameters and functions
# The following parameter values are computed using https://nssdc.gsfc.nasa.gov/planetary/factsheet
rM0 = 4.60 # Initial radius of Mercury orbit, in units of R0
vM0 = 5.10e-1 # Initial orbital speed of Mercury, in units of R0/T0
c_a = 9.90e-1 # Base acceleration of Mercury, in units of R0**3/T0**2
rS = 2.95e-7 # Schwarzschild radius of Sun, in units of R0
rL2 = 8.19e-7 # Specific angular momentum, in units of R0**2
# Because we want to visualize the orbit of Mercury, we need to work with vectors. The initial position and velocity vectors of mercury are thus given by
vec_rM0 = vector(0, rM0, 0) # Initial position vector of Mercury
vec_vM0 = vector(vM0, 0, 0) # Initial velocity vector of Mercury
# Next, we specify how to update vectors. For this update, we have to compute the force acting on Mercury.
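Written out, the acceleration that `evolve_mercury` below computes is (this simply restates the code in the simulation's units):

```latex
\vec{a}_{MS} = -\,\frac{c_a}{r^2}\left(1 + \alpha\,\frac{r_S}{r} + \beta\,\frac{r_{L2}}{r^2}\right)\hat{r},
\qquad r = |\vec{r}_M|
```

The $\alpha$ term scales the $1/r^3$ correction and the $\beta$ term the $1/r^4$ correction, matching the function's docstring.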
def evolve_mercury(vec_rM_old, vec_vM_old, alpha, beta):
"""
Advance Mercury in time by one step of length dt.
Arguments:
- vec_rM_old: old position vector of Mercury
- vec_vM_old: old velocity vector of Mercury
- alpha: strength of 1/r**3 term in force
- beta: strength of 1/r**4 term in force
Returns:
- vec_rM_new: new position vector of Mercury
- vec_vM_new: new velocity vector of Mercury
"""
# Compute the factor coming from General Relativity
fact = 1 + alpha * rS / vec_rM_old.mag + beta * rL2 / vec_rM_old.mag**2
# Compute the absolute value of the acceleration
aMS = c_a * fact / vec_rM_old.mag**2
# Multiply by the direction to get the acceleration vector
vec_aMS = - aMS * ( vec_rM_old / vec_rM_old.mag )
# Update velocity vector
vec_vM_new = vec_vM_old + vec_aMS * dt
# Update position vector
vec_rM_new = vec_rM_old + vec_vM_new * dt
return vec_rM_new, vec_vM_new
# Also, we want to measure the angle between two vectors. This is done by the next function definition.
def angle_between(v1, v2):
"""Compute angle between two vectors. Result is in degrees."""
return acos( dot(v1, v2) / (v1.mag * v2.mag) ) * 180. / pi
# Finally, before we start the simulation, we have to specify how long it should run, how big the time steps are, and which parameters we want to use for the forces.
dt = 2. * vM0 / c_a / 200 # Time step
alpha = 0.0 # Strength of 1/r**3 term
beta = 1.e5 # Strength of 1/r**4 term
vec_r_last = vec_rM0 # Previous position of Mercury
turns = 0 # Number of completed turns
max_turns = 10 # Maximum number of turns
list_perih = list() # List of perihelion locations
sum_angle = 0. # Angle between first and last perihelion
# # Visualization
# +
# Specify how the output should look like
scene = canvas() # Create a new scene: this displays the scene below this cell
scene.userzoom = False # No zoom allowed (for smooth scrolling in notebook)
scene.width = 1024 # Width of visualization in pixel
scene.height = 1024 # Height of visualization in pixel
scene.background = color.white # Background color ...
scene.center = vector(0, -2, 0) # ... and shifted center
# Define graphical objects; M = Mercury, S = Sun ...
M = sphere(pos=vec_rM0, radius=0.5, color=color.red )
S = sphere(pos=vector(0, 0, 0), radius=1.5, color=color.yellow)
# ... and the initial velocities
M.velocity = vec_vM0
S.velocity = vector(0, 0, 0)
# Add a visible trajectory to Mercury
M.trajectory = curve(color=color.black, radius=0.005)
# Find perihelion for each turn and print it out
while turns < max_turns:
vec_r_before_last = vec_r_last
vec_r_last = vector(M.pos)
# Set the frame rate: shows four earth days at once
rate(1000)
# Update the drawn trajectory with the current position
M.trajectory.append(pos=M.pos)
# Update the velocity and position
M.pos, M.velocity = evolve_mercury(M.pos, M.velocity, alpha, beta)
# Check if just past perihelion
if vec_r_before_last.mag > vec_r_last.mag < M.pos.mag:
turns = turns+1
list_perih.append(vec_r_last)
if turns > 1:
# Draw location of perihelion
sphere(color=color.green, radius=0.2, pos=vec_r_last)
# Display intermediate results (will show up after simulation)
print("turn: n={n}, perihelion growth: delta Theta={angle}".format(
n=turns, angle=angle_between(list_perih[-2], list_perih[-1])
))
# Note that list_perih[-2] accesses the second last and
# list_perih[-1] the last element in the list
sum_angle = sum_angle + angle_between(list_perih[-2], list_perih[-1])
# Display the average perihelion growth
print("--------------------------------")
print("Average perihelion growth in arc sec per century: delta Theta={avg:1.2f}".format(
avg=sum_angle/(len(list_perih)-1) * 3. / beta * 3600 * 4.15 * 100
))
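# The unit conversion in the final print can be sanity-checked in isolation. This sketch uses a made-up per-turn angle (not a simulation result) and assumes Mercury's ~88-day orbital period; the 3./beta factor mirrors the rescaling used in the print above, which extrapolates the artificially enlarged beta back toward its physical strength:

```python
# Hypothetical per-turn precession angle, for illustration only
angle_per_turn_deg = 0.5
beta_val = 1.e5                       # same artificially large beta as in the simulation
orbits_per_year = 365.25 / 88.0       # Mercury's ~88-day period -> ~4.15 orbits per year
arcsec_per_deg = 3600                 # degrees -> arc seconds
years_per_century = 100

precession = (angle_per_turn_deg * 3. / beta_val
              * arcsec_per_deg * orbits_per_year * years_per_century)
print(precession)  # ~22.4 arc sec per century for this made-up input
```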
# ipynb-scripts/perihelion.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
#     language: python
#     name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Ele2000/-python-for-trading/blob/main/slicing.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="uPPWQ86GAan1"
import os
import pandas as pd
def symbol_to_path(symbol, base_dir='data'):
"""Return CSV file path to given ticker symbol."""
return os.path.join(base_dir, f'{symbol}.csv')
def get_data(symbols, dates):
"""Read stock data (Adj Close) for given symbols from CSV files."""
df = pd.DataFrame(index=dates)
if 'SPY' not in symbols:
symbols.insert(0, 'SPY')
for symbol in symbols:
df_temp = pd.read_csv(
symbol_to_path(symbol),
index_col='Date',
parse_dates=True,
usecols=['Date', 'Adj Close'],
na_values=['nan'],
)
df_temp = df_temp.rename(columns={'Adj Close': symbol})
df = df.join(df_temp)
df = df.dropna()
return df
def main():
dates = pd.date_range('2020-01-01', '2020-12-31')
symbols = ['AAPL', 'FB', 'GLD', 'IBM', 'KO']
df = get_data(symbols, dates)
# Slice by row range (dates) using DataFrame.loc[] selector
# print(df.loc['2020-01-01':'2020-01-31'])
# Slice by column (symbols)
# print(df['GLD']) # a single label selects single column
# print(df[['GLD', 'KO']]) # a list of labels selects multiple columns
# Slice by row and column
print(df.loc['2020-01-10':'2020-01-15', ['SPY', 'KO']])
if __name__ == "__main__":
main()
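# The same `.loc` slicing can be demonstrated on synthetic data, which avoids needing the CSV files the notebook reads (column names here are illustrative):

```python
import numpy as np
import pandas as pd

demo_dates = pd.date_range('2020-01-01', periods=5)
demo_df = pd.DataFrame(np.arange(10).reshape(5, 2),
                       index=demo_dates, columns=['SPY', 'KO'])

print(demo_df.loc['2020-01-02':'2020-01-04'])          # row slice by date labels (both endpoints inclusive)
print(demo_df['KO'])                                   # single label selects one column (a Series)
print(demo_df.loc['2020-01-02':'2020-01-04', ['KO']])  # rows and columns together
```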
# slicing.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 1. This notebook builds a DNN with a user-defined number of layers and neurons for MNIST classification.
# 2. The matrix-notation calculations are encapsulated in the Layer class.
# # Load Data
import copy
import random
import numpy as np
import matplotlib.pyplot as plt
from mnist import MNIST
from math import exp,log,tanh,sqrt
mndata = MNIST('./mnist/')
mndata.gz = True
images, labels = mndata.load_training()
test_imgs, test_labels = mndata.load_testing()
# # Data Preprocess
# Shuffle Data
all_data=np.concatenate((np.array(images),np.array(labels).reshape(len(labels),1)),axis=1)
np.random.shuffle(all_data)
images=all_data[:,:-1]
labels=all_data[:,-1]
images=np.array(images)
transfered_images=np.zeros((len(images),784))
input_images_feature=np.zeros((len(images),785))
transfered_test_images=np.zeros((len(test_imgs),784))
input_test_images_feature=np.zeros((len(test_imgs),785))
# Put all values into [-1,1]
for i in range(len(images)):
transfered_images[i]=np.array(images[i])
transfered_images[i]=transfered_images[i]/127.5 - 1
input_images_feature[i]=np.insert(transfered_images[i],0,1)
for i in range(len(test_imgs)):
transfered_test_images[i]=np.array(test_imgs[i])
transfered_test_images[i]=transfered_test_images[i]/127.5 - 1
input_test_images_feature[i]=np.insert(transfered_test_images[i],0,1)
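# The per-row loops above can be collapsed into two vectorized NumPy operations; a small self-contained sketch on stand-in data:

```python
import numpy as np

demo_imgs = np.random.randint(0, 256, size=(4, 784))  # stand-in for the MNIST arrays
demo_scaled = demo_imgs / 127.5 - 1                   # map pixel values into [-1, 1]
# Prepend the bias column of ones in one step
demo_features = np.hstack([np.ones((len(demo_imgs), 1)), demo_scaled])
print(demo_features.shape)  # (4, 785)
```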
# This is equivalent to random selection, since the data was shuffled above
train_features=input_images_feature[:50000]
train_labels=labels[:50000]
valid_features=input_images_feature[50000:60000]
valid_labels=labels[50000:60000]
test_features=input_test_images_feature
test_labels=test_labels
# # Minibatch
# +
BATCH = 256
batch_train_features=[]
batch_train_labels=[]
n_full = len(train_features) // BATCH
for i in range(n_full):
    batch_train_features.append(train_features[i*BATCH:(i+1)*BATCH])
    batch_train_labels.append(train_labels[i*BATCH:(i+1)*BATCH])
if len(train_features) % BATCH:  # keep the smaller final batch only if one exists
    batch_train_features.append(train_features[n_full*BATCH:])
    batch_train_labels.append(train_labels[n_full*BATCH:])
# -
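# The manual split above can also be expressed with `numpy.array_split`, which handles a non-divisible final batch automatically:

```python
import numpy as np

demo_data = np.arange(10)
# Split at indices 4, 8, ... -> batches of size 4 plus a smaller remainder
demo_batches = np.array_split(demo_data, range(4, len(demo_data), 4))
print([b.tolist() for b in demo_batches])  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```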
# # Layer Class
class Layer(object):
def __init__(self,num_next_layer_neuron):
self.output_num=num_next_layer_neuron
def configure(self,input_shape,reg_lam):
self.lam = reg_lam
self.w_shape=(input_shape[1],self.output_num)
self.w=np.random.normal(0,1/sqrt(input_shape[0]),self.w_shape)
self.delta_w=np.zeros(self.w_shape)
def hidden_forward_prop(self,inputs,activate_index):
self.x=copy.deepcopy(inputs)
self.a=np.dot(self.x,self.w)
self.activate_type=activate_index
self.y=np.array(self.activate_func(self.a,activate_index))
self.b=np.ones(len(inputs))
self.output=np.c_[self.b,self.y]
self.gradient=self.gradient_calc(self.output,activate_index)
return self.output
def hidden_back_prop(self,layer_index,next_layer_w,next_layer_delta,rate,alpha):
if layer_index==1:
self.delta=self.gradient*np.dot(next_layer_delta,next_layer_w.T)
else:
self.delta=self.gradient*np.dot(next_layer_delta[:,1:],next_layer_w.T)
self.oldweight=copy.deepcopy(self.w)
if self.delta_w.shape[1]>layer_neuron_num_list[len(layer_list)-1-layer_index]:
self.old_delta_weight=copy.deepcopy(self.delta_w[:,1:])
else:
self.old_delta_weight=copy.deepcopy(self.delta_w)
self.delta_w=rate*(np.dot(self.x.T,self.delta)[:,1:]/len(self.x))
self.old_weight=copy.deepcopy(self.w)
self.w+=alpha*self.old_delta_weight+self.delta_w
def output_forward_prop(self,inputs,activate_index,label):
self.x=copy.deepcopy(inputs)
self.a=np.dot(self.x,self.w)
self.label=label
self.vector_label=np.zeros((len(inputs),self.output_num))
for i in range(len(inputs)):
self.vector_label[i][int(label[i])]=1
self.activate_type=activate_index
self.y=np.exp(self.a)/np.repeat(np.sum(np.exp(self.a),axis=1).reshape(self.a.shape[0],1),self.a.shape[1],axis=1)
return self.y
def output_back_prop(self,rate,output_y,alpha):
self.delta=self.vector_label-output_y
self.old_weight=copy.deepcopy(self.w)
self.old_delta_weight=copy.deepcopy(self.delta_w)
self.delta_w=rate*(np.dot(self.x.T,self.delta)/len(self.x) - 2*self.lam*self.w)
self.w+=alpha*self.old_delta_weight+self.delta_w
    def predict(self):
        self.predicts=np.argmax(self.y,axis=1)
def accuracy(self):
total_num=len(self.x)
correct_num=sum([1 if self.predicts[i]==self.label[i] else 0 for i in range(total_num)])
return correct_num/total_num
def activate_func(self,a,index):
if index==0:
return 1/(1+np.exp(-a))
if index==1:
return 1.7159*np.tanh(2*a/3)
if index==2:
zeros=np.zeros(a.shape)
return np.maximum(zeros,a)
def gradient_calc(self,output,index):
if index==0:
return np.multiply((1-output),output)
        if index==1:
            # derivative of 1.7159*tanh(2a/3), expressed in terms of the output y
            return 1.7159*(2/3)*(1-(output/1.7159)**2)
        if index==2:
            return np.greater(output,0).astype(int)
def softmax_entropy(self):
entropy=0
entropy-=sum(np.log(np.sum(self.y*self.vector_label,axis=1)))/self.y.shape[1]
return entropy/(len(self.x)) + np.sum(np.square(self.w)) * self.lam
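# The analytic derivatives used in `gradient_calc` can be verified against a central finite difference. A self-contained sketch for the sigmoid case (index 0), where d/da sigmoid(a) = y*(1-y):

```python
import numpy as np

def sigmoid_demo(a):
    return 1 / (1 + np.exp(-a))

a_grid = np.linspace(-3, 3, 7)
y_grid = sigmoid_demo(a_grid)
analytic = y_grid * (1 - y_grid)                   # formula used in gradient_calc, index 0
eps = 1e-6
numeric = (sigmoid_demo(a_grid + eps) - sigmoid_demo(a_grid - eps)) / (2 * eps)
print(np.max(np.abs(analytic - numeric)))          # very small (finite-difference error)
```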
# # Initialize Training
# +
# Set Initial Parameter for neural network
################################################
# Set number of neurons for every layer ##
# This also decides the number of layers.   ##
# For MNIST the last number must be 10. ##
layer_neuron_num_list=[128,64,10] ##
################################################
######################################
# Set update rate ##
output_layer_update_rate=0.00001 ##
hidden_layer_update_rate=0.01 ##
momentum_alpha=0.9 ##
reg_lambda=0.1 ##
######################################
###########################################################################
# Set activate function type: 0 for sigmoid, 1 for tanh and 2 for ReLU ##
activate_type_index = 2 ##
# Set number of data per mini-batch                                     ##
num_per_batch = BATCH ##
num_batch_per_epoch=int(len(train_features)/num_per_batch) ##
###########################################################################
# Initial list for the layers and their output
layer_list=[]
valid_layer_list=[]
test_layer_list=[]
layer_output_list=[]
valid_layer_output_list=[]
test_layer_output_list=[]
# Save data to list
train_entropy_data=[]
train_accuracy_data=[]
valid_entropy_data=[]
valid_accuracy_data=[]
test_entropy_data=[]
test_accuracy_data=[]
# Initialize each layer
for i in range(len(layer_neuron_num_list)):
layer_list.append(Layer(layer_neuron_num_list[i]))
valid_layer_list.append(Layer(layer_neuron_num_list[i]))
test_layer_list.append(Layer(layer_neuron_num_list[i]))
if i==0:
layer_list[i].configure((len(batch_train_features[0]),785),reg_lambda)
valid_layer_list[i].configure((len(valid_features),785),reg_lambda)
test_layer_list[i].configure((len(test_features),785),reg_lambda)
else:
layer_list[i].configure((len(batch_train_features[0]),layer_neuron_num_list[i-1]+1),reg_lambda)
valid_layer_list[i].configure((len(batch_train_features[0]),layer_neuron_num_list[i-1]+1),reg_lambda)
test_layer_list[i].configure((len(batch_train_features[0]),layer_neuron_num_list[i-1]+1),reg_lambda)
# -
# # Start Training
# +
# Start training
saved_valid_entropy=[0,0,0]
tmp_train_entropy=[]
tmp_train_accuracy=[]
# Initial Forward Propagation
for i in range(len(layer_list)):
valid_layer_list[i].w=copy.deepcopy(layer_list[i].w)
test_layer_list[i].w=copy.deepcopy(layer_list[i].w)
if i==0:
layer_output_list.append(layer_list[i].hidden_forward_prop(batch_train_features[0],activate_type_index))
valid_layer_output_list.append(valid_layer_list[i].hidden_forward_prop(valid_features,activate_type_index))
test_layer_output_list.append(test_layer_list[i].hidden_forward_prop(test_features,activate_type_index))
elif i!=len(layer_list)-1:
layer_output_list.append(layer_list[i].hidden_forward_prop(layer_output_list[i-1],activate_type_index))
valid_layer_output_list.append(valid_layer_list[i].hidden_forward_prop(valid_layer_output_list[i-1],activate_type_index))
test_layer_output_list.append(test_layer_list[i].hidden_forward_prop(test_layer_output_list[i-1],activate_type_index))
elif i==len(layer_list)-1:
layer_output_list.append(layer_list[i].output_forward_prop(layer_output_list[i-1],activate_type_index,batch_train_labels[0]))
valid_layer_output_list.append(valid_layer_list[i].output_forward_prop(valid_layer_output_list[i-1],activate_type_index,valid_labels))
test_layer_output_list.append(test_layer_list[i].output_forward_prop(test_layer_output_list[i-1],activate_type_index,test_labels))
# Start Loop
count_epoch=0
print('iter\ttrain_entropy\t\tvalid_entropy\t\ttest_entropy\t\ttrain_acc\tvalid_acc\ttest_acc')
for num in range(100000000):  # effectively runs until manually interrupted
# Backward Propagation
for i in range(len(layer_list)):
if i==0:
layer_list[len(layer_list)-1].output_back_prop(output_layer_update_rate,layer_output_list[len(layer_list)-1],momentum_alpha)
else:
layer_list[len(layer_list)-i-1].hidden_back_prop(i,layer_list[len(layer_list)-i].old_weight,layer_list[len(layer_list)-i].delta,hidden_layer_update_rate,momentum_alpha)
# Forward Propagation
for i in range(len(layer_list)):
valid_layer_list[i].w=copy.deepcopy(layer_list[i].w)
test_layer_list[i].w=copy.deepcopy(layer_list[i].w)
if i==0:
layer_output_list[i]=layer_list[i].hidden_forward_prop(batch_train_features[num%num_batch_per_epoch],activate_type_index)
elif i!=len(layer_list)-1:
layer_output_list[i]=layer_list[i].hidden_forward_prop(layer_output_list[i-1],activate_type_index)
elif i==len(layer_list)-1:
layer_output_list[i]=layer_list[i].output_forward_prop(layer_output_list[i-1],activate_type_index,batch_train_labels[num%num_batch_per_epoch])
layer_list[-1].predict()
tmp_train_accuracy.append(layer_list[-1].accuracy())
tmp_train_entropy.append(layer_list[-1].softmax_entropy())
# One epoch finished
if num%num_batch_per_epoch==0:
count_epoch+=1
train_accuracy=sum(tmp_train_accuracy)/len(tmp_train_accuracy)
train_entropy=sum(tmp_train_entropy)/len(tmp_train_entropy)
tmp_train_entropy=[]
tmp_train_accuracy=[]
# Forward Propagation for valid and test
for i in range(len(layer_list)):
valid_layer_list[i].w=copy.deepcopy(layer_list[i].w)
test_layer_list[i].w=copy.deepcopy(layer_list[i].w)
if i==0:
valid_layer_output_list[i]=valid_layer_list[i].hidden_forward_prop(valid_features,activate_type_index)
test_layer_output_list[i]=test_layer_list[i].hidden_forward_prop(test_features,activate_type_index)
elif i!=len(layer_list)-1:
valid_layer_output_list[i]=valid_layer_list[i].hidden_forward_prop(valid_layer_output_list[i-1],activate_type_index)
test_layer_output_list[i]=test_layer_list[i].hidden_forward_prop(test_layer_output_list[i-1],activate_type_index)
elif i==len(layer_list)-1:
valid_layer_output_list[i]=valid_layer_list[i].output_forward_prop(valid_layer_output_list[i-1],activate_type_index,valid_labels)
test_layer_output_list[i]=test_layer_list[i].output_forward_prop(test_layer_output_list[i-1],activate_type_index,test_labels)
valid_layer_list[len(valid_layer_list)-1].predict()
valid_accuracy=valid_layer_list[len(valid_layer_list)-1].accuracy()
valid_entropy=valid_layer_list[len(valid_layer_list)-1].softmax_entropy()
test_layer_list[len(test_layer_list)-1].predict()
test_accuracy=test_layer_list[len(test_layer_list)-1].accuracy()
test_entropy=test_layer_list[len(test_layer_list)-1].softmax_entropy()
# Save data to list for plotting
train_entropy_data.append(train_entropy)
train_accuracy_data.append(train_accuracy)
valid_entropy_data.append(valid_entropy)
valid_accuracy_data.append(valid_accuracy)
test_entropy_data.append(test_entropy)
test_accuracy_data.append(test_accuracy)
saved_valid_entropy[num%3]=valid_entropy
# Print Result
print(str(count_epoch)+'\t'+str(train_entropy)+'\t'+str(valid_entropy)+'\t'+
str(test_entropy)+'\t'+str(train_accuracy)+'\t'+
str(valid_accuracy)+'\t'+str(test_accuracy))
# Shuffle train data after one epoch
all_train_data=np.concatenate((np.array(train_features),np.array(train_labels).reshape(len(train_labels),1)),axis=1)
np.random.shuffle(all_train_data)
new_train_images=all_train_data[:,:-1]
new_train_labels=all_train_data[:,-1]
train_features=copy.deepcopy(np.array(new_train_images))
train_labels=copy.deepcopy(np.array(new_train_labels))
# Split mini-batch again
batch_train_features=[]
batch_train_labels=[]
for i in range(int(len(train_features)/num_per_batch)):
batch_train_features.append(train_features[i*num_per_batch:i*num_per_batch+num_per_batch])
batch_train_labels.append(train_labels[i*num_per_batch:i*num_per_batch+num_per_batch])
# -
# # Plot Result
plt.figure()
plt.title('Entropy Curve\nReLU\n'+str(len(layer_neuron_num_list)-1)+' Hidden Layers\n'+str(layer_neuron_num_list[0])+' units each hidden layer')
plt.ylabel('Entropy')
plt.xlabel('epoch')
plt.plot(train_entropy_data,'red')
plt.plot(valid_entropy_data,'blue')
plt.plot(test_entropy_data,'green')
plt.legend(['train','valid','test'])
plt.savefig('HW3_entropy_'+str(len(layer_neuron_num_list)-1)+'_hidden_layers_'+str(layer_neuron_num_list[0])+'_neurons_per_layer')
plt.show()
plt.figure()
plt.title('Accuracy Curve\nReLU\n'+str(len(layer_neuron_num_list)-1)+' Hidden Layers\n'+str(layer_neuron_num_list[0])+' units each hidden layer')
plt.ylabel('Accuracy')
plt.xlabel('epoch')
plt.plot(train_accuracy_data,'red')
plt.plot(valid_accuracy_data,'blue')
plt.plot(test_accuracy_data,'green')
plt.legend(['train','valid','test'])
plt.savefig('HW3_accuracy_'+str(len(layer_neuron_num_list)-1)+'_hidden_layers_'+str(layer_neuron_num_list[0])+'_neurons_per_layer')
plt.show()
# Assignment2/BonusPart.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Titanic Machine Learning
# ## Libraries and Tools
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer  # Imputer was removed from sklearn.preprocessing
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.metrics import classification_report
from sklearn import metrics
# ## Open Data
# Open data files and read into dataframes
train = (pd.read_csv('train.csv')).set_index('PassengerId')
X_score = (pd.read_csv('test.csv')).set_index('PassengerId')
# Split training features from class
X_train = train[list(X_score)]
y_train = train[['Survived']]
# ## Data Cleaning and Feature Selection
# ### Imputation of Missing Values
## Use median imputation for age
median_age = np.median(X_train['Age'].dropna())
X_train = X_train.fillna({'Age': median_age})
X_score = X_score.fillna({'Age': median_age})
# Only use the letter for the cabin
def cabin_clean(row):
if not pd.isnull(row['Cabin']):
return (row['Cabin'])[0]
X_train.Cabin = X_train.apply(cabin_clean, axis=1)
X_score.Cabin = X_score.apply(cabin_clean, axis=1)
# Fill any NaNs with most common cabin letter from training set
cabin_imp_val = X_train.Cabin.value_counts().index[0]
X_train = X_train.fillna({'Cabin': cabin_imp_val})
X_score = X_score.fillna({'Cabin': cabin_imp_val})
# Fill NaNs with most common categorical value
embarked_imp_val = X_train.Embarked.value_counts().index[0]
X_train = X_train.fillna({'Embarked': embarked_imp_val})
X_score = X_score.fillna({'Embarked': embarked_imp_val})
# Use median imputation for fare
median_fare = np.median(X_train['Fare'].dropna())
X_train = X_train.fillna({'Fare': median_fare})
X_score = X_score.fillna({'Fare': median_fare})
# ### Encode Categorical Features
# Features with presumably no predictive power should be dropped rather than encoded
X_train = X_train.drop(['Name', 'Ticket'], axis=1)
X_score = X_score.drop(['Name', 'Ticket'], axis=1)
# One-hot-encode remaining categorical features
X_train = pd.get_dummies(X_train, columns=['Sex', 'Cabin', 'Embarked'])
X_score = pd.get_dummies(X_score, columns=['Sex', 'Cabin', 'Embarked'])
# Add empty column for 'Cabin_T' which was not in score
X_score['Cabin_T'] = 0
# Ensure columns are ordered the same for scikit-learn
X_score = X_score[list(X_train.columns)]
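# A more general way to align one-hot columns between the train and score sets is `reindex` with `fill_value=0`, which adds any missing dummy column (such as 'Cabin_T') automatically instead of by hand. A small sketch on synthetic data:

```python
import pandas as pd

train_d = pd.get_dummies(pd.DataFrame({'Cabin': ['A', 'B', 'T']}))
score_d = pd.get_dummies(pd.DataFrame({'Cabin': ['A', 'B']}))
# Match the training columns exactly; absent dummies become all-zero columns
score_d = score_d.reindex(columns=train_d.columns, fill_value=0)
print(list(score_d.columns))  # ['Cabin_A', 'Cabin_B', 'Cabin_T']
```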
# ## Model Analysis
# ### Split Known Data for Training and Testing
X_train_split, X_test_split, y_train_split, y_test_split = train_test_split(
X_train, y_train, random_state=42
)
# ### Perform Cross Validation
# +
classifiers = [
]
classifier_names = [
'',
''
]
# -
test_pipeline = Pipeline([
('grd', GradientBoostingClassifier(n_estimators=500))
])
test_pipeline.fit(X_train_split, y_train_split.values.ravel())
y_pred_split = test_pipeline.predict(X_test_split)
print(classification_report(y_test_split, y_pred_split))
print(metrics.accuracy_score(y_test_split, y_pred_split))
# ## Train the Gradient Boosting Forest On Full Training Set
# Create the pipeline with a PCA decomp step
pipeline = Pipeline([
('pca', PCA()),
('grd', GradientBoostingClassifier(n_estimators=500))
])
# Fit the pipeline to the training set
pipeline.fit(X_train, y_train.values.ravel())
# ## Predict the Unknowns
X_score['Survived'] = pipeline.predict(X_score)
X_score[['Survived']].to_csv('submission.csv')
# titanic-ml.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
#     language: python
#     name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="t6MPjfT5NrKQ"
# <a align="left" href="https://ultralytics.com/yolov5" target="_blank">
# <img width="1024" src="https://user-images.githubusercontent.com/26833433/125273437-35b3fc00-e30d-11eb-9079-46f313325424.png"></a>
#
# This is the **official YOLOv5 🚀 notebook** by **Ultralytics**, and is freely available for redistribution under the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/).
# For more information please visit https://github.com/ultralytics/yolov5 and https://ultralytics.com. Thank you!
# + [markdown] id="7mGmQbAO5pQb"
# # Setup
#
# Clone repo, install dependencies and check PyTorch and GPU.
# + id="wbvMlHd_QwMG" colab={"base_uri": "https://localhost:8080/"} outputId="4d67116a-43e9-4d84-d19e-1edd83f23a04"
# !git clone https://github.com/ultralytics/yolov5 # clone repo
# %cd yolov5
# %pip install -qr requirements.txt # install dependencies
import torch
from IPython.display import Image, clear_output # to display images
clear_output()
print(f"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})")
# + [markdown] id="4JnkELT0cIJg"
# # 1. Inference
#
# `detect.py` runs YOLOv5 inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases), and saving results to `runs/detect`. Example inference sources are:
#
# ```shell
# python detect.py --source 0 # webcam
# file.jpg # image
# file.mp4 # video
# path/ # directory
# path/*.jpg # glob
# 'https://youtu.be/NUsoVlDFqZg' # YouTube
# 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
# ```
# + id="zR9ZbuQCH7FX" colab={"base_uri": "https://localhost:8080/"} outputId="8b728908-81ab-4861-edb0-4d0c46c439fb"
# !python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/
Image(filename='runs/detect/exp/zidane.jpg', width=600)
# + [markdown] id="hkAzDWJ7cWTr"
#
# <img align="left" src="https://user-images.githubusercontent.com/26833433/127574988-6a558aa1-d268-44b9-bf6b-62d4c605cc72.jpg" width="600">
# + [markdown] id="0eq1SMWl6Sfn"
# # 2. Validate
# Validate a model's accuracy on [COCO](https://cocodataset.org/#home) val or test-dev datasets. Models are downloaded automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases). To show results by class use the `--verbose` flag. Note that `pycocotools` metrics may be ~1% better than the equivalent repo metrics, as is visible below, due to slight differences in mAP computation.
# + [markdown] id="eyTZYGgRjnMc"
# ## COCO val2017
# Download [COCO val 2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L14) dataset (1GB - 5000 images), and test model accuracy.
# + id="WQPtK1QYVaD_" colab={"base_uri": "https://localhost:8080/", "height": 48, "referenced_widgets": ["484511f272e64eab8b42e68dac5f7a66", "78cceec059784f2bb36988d3336e4d56", "ab93d8b65c134605934ff9ec5efb1bb6", "30df865ded4c434191bce772c9a82f3a", "20cdc61eb3404f42a12b37901b0d85fb", "2d7239993a9645b09b221405ac682743", "17b5a87f92104ec7ab96bf507637d0d2", "2358bfb2270247359e94b066b3cc3d1f", "<KEY>", "<KEY>", "896030c5d13b415aaa05032818d81a6e"]} outputId="7e6f5c96-c819-43e1-cd03-d3b9878cf8de"
# Download COCO val2017
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017val.zip', 'tmp.zip')
# !unzip -q tmp.zip -d ../datasets && rm tmp.zip
# + id="X58w8JLpMnjH" colab={"base_uri": "https://localhost:8080/"} outputId="3dd0e2fc-aecf-4108-91b1-6392da1863cb"
# Run YOLOv5x on COCO val2017
# !python val.py --weights yolov5x.pt --data coco.yaml --img 640 --iou 0.65 --half
# + [markdown] id="rc_KbFk0juX2"
# ## COCO test-dev2017
# Download [COCO test2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L15) dataset (7GB - 40,000 images), to test model accuracy on test-dev set (**20,000 images, no labels**). Results are saved to a `*.json` file which should be **zipped** and submitted to the evaluation server at https://competitions.codalab.org/competitions/20794.
# + id="V0AJnSeCIHyJ"
# Download COCO test-dev2017
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017labels.zip', 'tmp.zip')
# !unzip -q tmp.zip -d ../ && rm tmp.zip # unzip labels
# !f="test2017.zip" && curl http://images.cocodataset.org/zips/$f -o $f && unzip -q $f && rm $f # 7GB, 41k images
# %mv ./test2017 ../coco/images # move to /coco
# + id="29GJXAP_lPrt"
# Run YOLOv5s on COCO test-dev2017 using --task test
# !python val.py --weights yolov5s.pt --data coco.yaml --task test
# + [markdown] id="ZY2VXXXu74w5"
# # 3. Train
#
# <p align=""><a href="https://roboflow.com/?ref=ultralytics"><img width="1000" src="https://uploads-ssl.webflow.com/5f6bc60e665f54545a1e52a5/615627e5824c9c6195abfda9_computer-vision-cycle.png"/></a></p>
# Close the active learning loop by sampling images from your inference conditions with the `roboflow` pip package
# <br><br>
#
# Train a YOLOv5s model on the [COCO128](https://www.kaggle.com/ultralytics/coco128) dataset with `--data coco128.yaml`, starting from pretrained `--weights yolov5s.pt`, or from randomly initialized `--weights '' --cfg yolov5s.yaml`.
#
# - **Pretrained [Models](https://github.com/ultralytics/yolov5/tree/master/models)** are downloaded
# automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases)
# - **[Datasets](https://github.com/ultralytics/yolov5/tree/master/data)** available for autodownload include: [COCO](https://github.com/ultralytics/yolov5/blob/master/data/coco.yaml), [COCO128](https://github.com/ultralytics/yolov5/blob/master/data/coco128.yaml), [VOC](https://github.com/ultralytics/yolov5/blob/master/data/VOC.yaml), [Argoverse](https://github.com/ultralytics/yolov5/blob/master/data/Argoverse.yaml), [VisDrone](https://github.com/ultralytics/yolov5/blob/master/data/VisDrone.yaml), [GlobalWheat](https://github.com/ultralytics/yolov5/blob/master/data/GlobalWheat2020.yaml), [xView](https://github.com/ultralytics/yolov5/blob/master/data/xView.yaml), [Objects365](https://github.com/ultralytics/yolov5/blob/master/data/Objects365.yaml), [SKU-110K](https://github.com/ultralytics/yolov5/blob/master/data/SKU-110K.yaml).
# - **Training Results** are saved to `runs/train/` with incrementing run directories, i.e. `runs/train/exp2`, `runs/train/exp3` etc.
# <br><br>
#
# ## Train on Custom Data with Roboflow 🌟 NEW
#
# [Roboflow](https://roboflow.com/?ref=ultralytics) enables you to easily **organize, label, and prepare** a high quality dataset with your own custom data. Roboflow also makes it easy to establish an active learning pipeline, collaborate with your team on dataset improvement, and integrate directly into your model building workflow with the `roboflow` pip package.
#
# - Custom Training Example: [https://blog.roboflow.com/how-to-train-yolov5-on-a-custom-dataset/](https://blog.roboflow.com/how-to-train-yolov5-on-a-custom-dataset/?ref=ultralytics)
# - Custom Training Notebook: [](https://colab.research.google.com/github/roboflow-ai/yolov5-custom-training-tutorial/blob/main/yolov5-custom-training.ipynb)
# <br>
#
# <p align=""><a href="https://roboflow.com/?ref=ultralytics"><img width="480" src="https://uploads-ssl.webflow.com/5f6bc60e665f54545a1e52a5/6152a275ad4b4ac20cd2e21a_roboflow-annotate.gif"/></a></p>Label images lightning fast (including with model-assisted labeling)
# + id="bOy5KI2ncnWd"
# Tensorboard (optional)
# %load_ext tensorboard
# %tensorboard --logdir runs/train
# + id="2fLAV42oNb7M"
# Weights & Biases (optional)
# %pip install -q wandb
import wandb
wandb.login()
# + id="1NcFxRcFdJ_O" colab={"base_uri": "https://localhost:8080/"} outputId="00ea4b14-a75c-44a2-a913-03b431b69de5"
# Train YOLOv5s on COCO128 for 3 epochs
# !python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
# + [markdown] id="15glLzbQx5u0"
# # 4. Visualize
# + [markdown] id="DLI1JmHU7B0l"
# ## Weights & Biases Logging 🌟 NEW
#
# [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_notebook) (W&B) is now integrated with YOLOv5 for real-time visualization and cloud logging of training runs. This allows for better run comparison and introspection, as well as improved visibility and collaboration for teams. To enable W&B `pip install wandb`, and then train normally (you will be guided through setup on first use).
#
# During training you will see live updates at [https://wandb.ai/home](https://wandb.ai/home?utm_campaign=repo_yolo_notebook), and you can create and share detailed [Reports](https://wandb.ai/glenn-jocher/yolov5_tutorial/reports/YOLOv5-COCO128-Tutorial-Results--VmlldzozMDI5OTY) of your results. For more information see the [YOLOv5 Weights & Biases Tutorial](https://github.com/ultralytics/yolov5/issues/1289).
#
# <p align="left"><img width="900" alt="Weights & Biases dashboard" src="https://user-images.githubusercontent.com/26833433/135390767-c28b050f-8455-4004-adb0-3b730386e2b2.png"></p>
# + [markdown] id="-WPvRbS5Swl6"
# ## Local Logging
#
# All results are logged by default to `runs/train`, with a new experiment directory created for each new training as `runs/train/exp2`, `runs/train/exp3`, etc. View train and val jpgs to see mosaics, labels, predictions and augmentation effects. Note an Ultralytics **Mosaic Dataloader** is used for training (shown below), which combines 4 images into 1 mosaic during training.
#
# > <img src="https://user-images.githubusercontent.com/26833433/131255960-b536647f-7c61-4f60-bbc5-cb2544d71b2a.jpg" width="700">
# `train_batch0.jpg` shows train batch 0 mosaics and labels
#
# > <img src="https://user-images.githubusercontent.com/26833433/131256748-603cafc7-55d1-4e58-ab26-83657761aed9.jpg" width="700">
# `test_batch0_labels.jpg` shows val batch 0 labels
#
# > <img src="https://user-images.githubusercontent.com/26833433/131256752-3f25d7a5-7b0f-4bb3-ab78-46343c3800fe.jpg" width="700">
# `test_batch0_pred.jpg` shows val batch 0 _predictions_
#
# Training results are automatically logged to [Tensorboard](https://www.tensorflow.org/tensorboard) and [CSV](https://github.com/ultralytics/yolov5/pull/4148) as `results.csv`, which is plotted as `results.png` (below) after training completes. You can also plot any `results.csv` file manually:
#
# ```python
# from utils.plots import plot_results
# plot_results('path/to/results.csv') # plot 'results.csv' as 'results.png'
# ```
#
# <img align="left" width="800" alt="COCO128 Training Results" src="https://user-images.githubusercontent.com/26833433/126906780-8c5e2990-6116-4de6-b78a-367244a33ccf.png">
# + [markdown] id="Zelyeqbyt3GD"
# # Environments
#
# YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
#
# - **Google Colab and Kaggle** notebooks with free GPU: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
# - **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)
# - **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)
# - **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) <a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
#
# + [markdown] id="6Qu7Iesl0p54"
# # Status
#
# 
#
# If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), testing ([val.py](https://github.com/ultralytics/yolov5/blob/master/val.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.
#
# + [markdown] id="IEijrePND_2I"
# # Appendix
#
# Optional extras below. Unit tests validate repo functionality and should be run on any PRs submitted.
#
# + id="mcKoSIK2WSzj"
# Reproduce
for x in 'yolov5s', 'yolov5m', 'yolov5l', 'yolov5x':
    # !python val.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.25 --iou 0.45  # speed
    # !python val.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.001 --iou 0.65  # mAP
# + id="GMusP4OAxFu6"
# PyTorch Hub
import torch
# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
# Images
dir = 'https://ultralytics.com/images/'
imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')] # batch of images
# Inference
results = model(imgs)
results.print() # or .show(), .save()
# + id="FGH0ZjkGjejy"
# CI Checks
# %%shell
export PYTHONPATH="$PWD"  # to run *.py files in subdirectories
# rm -rf runs  # remove runs/
for m in yolov5s; do  # models
  python train.py --weights $m.pt --epochs 3 --img 320 --device 0  # train pretrained
  python train.py --weights '' --cfg $m.yaml --epochs 3 --img 320 --device 0  # train scratch
  for d in 0 cpu; do  # devices
    python detect.py --weights $m.pt --device $d  # detect official
    python detect.py --weights runs/train/exp/weights/best.pt --device $d  # detect custom
    python val.py --weights $m.pt --device $d  # val official
    python val.py --weights runs/train/exp/weights/best.pt --device $d  # val custom
  done
  python hubconf.py  # hub
  python models/yolo.py --cfg $m.yaml  # build PyTorch model
  python models/tf.py --weights $m.pt  # build TensorFlow model
  python export.py --img 128 --batch 1 --weights $m.pt --include torchscript onnx  # export
done
# + id="gogI-kwi3Tye"
# Profile
from utils.torch_utils import profile
m1 = lambda x: x * torch.sigmoid(x)
m2 = torch.nn.SiLU()
results = profile(input=torch.randn(16, 3, 640, 640), ops=[m1, m2], n=100)
# + id="RVRSOhEvUdb5"
# Evolve
# !python train.py --img 640 --batch 64 --epochs 100 --data coco128.yaml --weights yolov5s.pt --cache --noautoanchor --evolve
# !d=runs/train/evolve && cp evolve.* $d && zip -r evolve.zip $d && gsutil mv evolve.zip gs://bucket # upload results (optional)
# + id="BSgFCAcMbk1R"
# VOC
for b, m in zip([64, 48, 32, 16], ['yolov5s', 'yolov5m', 'yolov5l', 'yolov5x']):  # zip(batch_size, model)
    # !python train.py --batch {b} --weights {m}.pt --data VOC.yaml --epochs 50 --cache --img 512 --nosave --hyp hyp.finetune.yaml --project VOC --name {m}
| model_zoo/YoloV5/tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
# +
data_path = 'volcano.csv'
data = pd.read_csv(data_path)
# -
# !mkdir Volcano
# +
# Transform it to a long format
df=data.unstack().reset_index()
df.columns=["X","Y","Z"]
# And transform the old column name in something numeric
df['X']=pd.Categorical(df['X'])
df['X']=df['X'].cat.codes
# We are going to do 20 plots, for 20 different angles
for angle in range(0, 360, 18):
    # Make the plot
    fig = plt.figure()
    ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in Matplotlib 3.6
    ax.plot_trisurf(df['Y'], df['X'], df['Z'], cmap=plt.cm.viridis, linewidth=0.2)
    ax.view_init(30, angle)
    filename = f'Volcano/Volcano_step_{angle:03}.png'
    plt.savefig(filename, dpi=96)
    plt.close(fig)
# -
# !convert -delay 20 Volcano/Volcano*.png animated_volcano.gif
| docker-intro/binder/volcano.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .ps1
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: .NET (PowerShell)
# language: PowerShell
# name: .net-powershell
# ---
# [this doc on github](https://github.com/dotnet/interactive/tree/main/samples/notebooks/powershell)
#
# # Charts with XPlot using constructors <img src="https://raw.githubusercontent.com/PowerShell/PowerShell/master/assets/Powershell_black_64.png" align="right"/>
# Charts can be rendered using [Xplot.Plotly](https://fslab.org/XPlot/).
# We will cover some examples of how to use XPlot in a notebook with .NET Interactive.
#
# > NOTE: This and "Plotting with Xplot using type accelerators" produce the same output. They're just using different scripting mechanisms.
# + dotnet_interactive={"language": "csharp"}
#!csharp
#r "nuget: XPlot.Plotly.Interactive, 4.0.6"
# -
# # Rendering Scatter plots
# One of the most commonly used types of chart for exploring a data set. Use the type `Graph.Scatter`.
# + dotnet_interactive={"language": "pwsh"}
$openSeries = [Graph.Scatter]::new()
$openSeries.name = "Open"
$openSeries.x = @(1, 2, 3, 4)
$openSeries.y = @(10, 15, 13, 17)
$closeSeries = [Graph.Scatter]::new()
$closeSeries.name = "Close"
$closeSeries.x = @(2, 3, 4, 5)
$closeSeries.y = @(16, 5, 11, 9)
$chart = @($openSeries, $closeSeries) | New-PlotlyChart -Title "Open vs Close"
Out-Display $chart
# -
# Let's change the mode to markers, making it more like a scatter plot.
# + dotnet_interactive={"language": "pwsh"}
$openSeries.mode = "markers";
$closeSeries.mode = "markers";
$chart = @($openSeries, $closeSeries) | New-PlotlyChart -Title "Open vs Close"
Out-Display $chart
# -
# `Scatter` can also produce polar charts by setting the radial property `r` and the angular property `t`.
# + dotnet_interactive={"language": "pwsh"}
$openSeries = [Graph.Scatter]::new()
$openSeries.name = "Open"
$openSeries.r = @(1, 2, 3, 4)
$openSeries.t = @(45, 100, 150, 290)
$closeSeries = [Graph.Scatter]::new()
$closeSeries.name = "Close"
$closeSeries.r = @(2, 3, 4, 5)
$closeSeries.t = @(16, 45, 118, 90)
$layout = [Layout]::new()
$layout.title = "Open vs Close"
$layout.orientation = -90
$chart = @($openSeries, $closeSeries) | New-PlotlyChart -Layout $layout
$chart | Out-Display
# -
# ## Large scatter plots and performance
# It is not uncommon to have scatter plots with a large dataset; this is a common scenario at the beginning of a data exploration process. Using the default `svg`-based rendering will create performance issues, as the DOM will become very large.
# We can use `web-gl` support to address the problem.
# + dotnet_interactive={"language": "pwsh"}
#!time
$series = 1..10 | ForEach-Object {
    $trace = [Graph.Scattergl]::new()
    $trace.name = "Series $_"
    $trace.mode = "markers"
    $trace.x = [double[]](Get-Random -Count 100000 -Minimum -100000 -Maximum 100000)
    $trace.y = [double[]](Get-Random -Count 100000 -Minimum -100000 -Maximum 100000)
    $trace
}
New-PlotlyChart -Title "Large Dataset" -Trace $series | Out-Display
# -
# We can provide a custom marker `color`, `size` and `colorscale` to display even more information to the user.
# + dotnet_interactive={"language": "pwsh"}
$series | ForEach-Object {
    [int[]] $sizes = Get-Random -Count 100 -Minimum 0.0 -Maximum 1.0 |
        ForEach-Object { $_ -lt 0.75 ? (Get-Random -Minimum 1 -Maximum 5) : (Get-Random -Minimum 10 -Maximum 15) }
    $temperatures = $sizes | ForEach-Object { ($_ * 10) - 100 }
    $_.x = [double[]](Get-Random -Count 100000 -Minimum -100000 -Maximum 100000)
    $_.y = [double[]](Get-Random -Count 100000 -Minimum -100000 -Maximum 100000)
    $_.marker = [XPlot.Plotly.Marker]::new()
    $_.marker.size = $sizes
    $_.marker.color = $temperatures
    $_.marker.colorscale = "hot"
}
New-PlotlyChart -Title "Large Dataset" -Trace $series | Out-Display
# -
# Plotly provides some additional color scales to use.
# + dotnet_interactive={"language": "pwsh"}
foreach ($trace in $series) {
    $trace.marker.colorscale = "Viridis"
}
New-PlotlyChart -Title "Viridis scale" -Trace $series | Out-Display

foreach ($trace in $series) {
    $trace.marker.colorscale = "Hot"
}
New-PlotlyChart -Title "Hot scale" -Trace $series | Out-Display

foreach ($trace in $series) {
    $trace.marker.colorscale = "Jet"
}
New-PlotlyChart -Title "Jet scale" -Trace $series | Out-Display
# -
# # Rendering Histograms
# Let's have a look at using histograms; the next cell sets up some generators.
# + dotnet_interactive={"language": "pwsh"}
$count = 20
[datetime[]] $dates = 1..$count | ForEach-Object { (Get-Date).AddMinutes((Get-Random -Minimum $_ -Maximum ($_+30))) }
# -
# Now let's define histogram traces:
# + dotnet_interactive={"language": "pwsh"}
$openByTime = [Graph.Histogram]::new()
$openByTime.name = "Open"
$openByTime.x = $dates
$openByTime.y = [double[]](Get-Random -Count $count -Minimum 0 -Maximum 200)
$closeByTime = [Graph.Histogram]::new()
$closeByTime.name = "Close"
$closeByTime.x = $dates
$closeByTime.y = [double[]](Get-Random -Count $count -Minimum 0 -Maximum 200)
New-PlotlyChart -Trace @($openByTime, $closeByTime) | Out-Display
# -
# The Histogram generator will automatically count the number of items per bin.
#
# Setting `histfunc` to `"sum"`, we can instead add up all the values contained in each bin.
# Note that bins are created from the `x` data points, and `autobinx` is used by default.
# + dotnet_interactive={"language": "pwsh"}
$openByTime.histfunc = 'sum'
$closeByTime.histfunc = 'sum'
(New-PlotlyChart -Trace @($openByTime, $closeByTime)) | Out-Display
# -
# # Area chart and Polar Area chart
# By populating the property `fill` of a `Scatter` trace, the chart will render as an area chart.
#
# Here it is set to `"tozeroy"`, which creates a fill zone underneath the line reaching down to 0 on the y axis.
# + dotnet_interactive={"language": "pwsh"}
$openSeries = [Graph.Scatter]::new()
$openSeries.name = "Open"
$openSeries.x = @(1, 2, 3, 4)
$openSeries.y = @(10, 15, 13, 17)
$openSeries.fill = "tozeroy"
$openSeries.mode = "lines"
$closeSeries = [Graph.Scatter]::new()
$closeSeries.name = "Close"
$closeSeries.x = @(1, 2, 3, 4)
$closeSeries.y = @(3, 5, 11, 9)
$closeSeries.fill = "tozeroy"
$closeSeries.mode = "lines"
$chart = @($openSeries, $closeSeries) | New-PlotlyChart -Title "Open vs Close"
Out-Display $chart
# -
# With one `fill` set to `"tonexty"`, the chart will fill the area between the traces.
# + dotnet_interactive={"language": "pwsh"}
$openSeries.fill = $null;
$closeSeries.fill = "tonexty";
$chart = @($openSeries, $closeSeries) | New-PlotlyChart -Title "Open vs Close"
Out-Display $chart
# -
# Using `Area` traces we can generate a radial area chart. In this example we use cardinal points to express angular values.
# The array `{"North", "N-E", "East", "S-E", "South", "S-W", "West", "N-W"}` will be automatically translated to angular values.
# + dotnet_interactive={"language": "pwsh"}
$areaTrace1 = [Graph.Area]::new()
$areaTrace1.r = @(77.5, 72.5, 70.0, 45.0, 22.5, 42.5, 40.0, 62.5)
$areaTrace1.t = @("North", "N-E", "East", "S-E", "South", "S-W", "West", "N-W")
$areaTrace1.name = "11-14 m/s"
$areaTrace1.marker = [XPlot.Plotly.Marker]::new()
$areaTrace1.marker.color = "rgb(106,81,163)"
$areaTrace2 = [Graph.Area]::new()
$areaTrace2.r = @(57.49999999999999, 50.0, 45.0, 35.0, 20.0, 22.5, 37.5, 55.00000000000001)
$areaTrace2.t = @("North", "N-E", "East", "S-E", "South", "S-W", "West", "N-W")
$areaTrace2.name = "8-11 m/s"
$areaTrace2.marker = [XPlot.Plotly.Marker]::new()
$areaTrace2.marker.color = "rgb(158,154,200)"
$areaTrace3 = [Graph.Area]::new()
$areaTrace3.r = @(40.0, 30.0, 30.0, 35.0, 7.5, 7.5, 32.5, 40.0)
$areaTrace3.t = @("North", "N-E", "East", "S-E", "South", "S-W", "West", "N-W")
$areaTrace3.name = "5-8 m/s"
$areaTrace3.marker = [XPlot.Plotly.Marker]::new()
$areaTrace3.marker.color = "rgb(203,201,226)"
$areaTrace4 = [Graph.Area]::new()
$areaTrace4.r = @(20.0, 7.5, 15.0, 22.5, 2.5, 2.5, 12.5, 22.5)
$areaTrace4.t = @("North", "N-E", "East", "S-E", "South", "S-W", "West", "N-W")
$areaTrace4.name = "< 5 m/s"
$areaTrace4.marker = [XPlot.Plotly.Marker]::new()
$areaTrace4.marker.color = "rgb(242,240,247)"
$areaLayout = [Layout]::new()
$areaLayout.title = "Wind Speed Distribution in Laurel, NE"
$areaLayout.font = [XPlot.Plotly.Font]::new()
$areaLayout.font.size = 16
$areaLayout.legend = [XPlot.Plotly.Legend]::new()
$areaLayout.legend.font = [XPlot.Plotly.Font]::new()
$areaLayout.legend.font.size = 16
$areaLayout.radialaxis = [XPlot.Plotly.Radialaxis]::new()
$areaLayout.radialaxis.ticksuffix = "%"
$areaLayout.orientation = 270
New-PlotlyChart -Layout $areaLayout -Trace @($areaTrace1, $areaTrace2, $areaTrace3, $areaTrace4) | Out-Display
| samples/notebooks/powershell/Docs/Plotting with Xplot using constructors.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Spatial random walk
#
# **authors:** <NAME>
# ## Project overview
#
# Exploring applications of random walk models to genetic data. [Here](analysis/meetings.html) are notes from our meetings that are relevant to this project.
#
# ### Background
#
#
# ### Simulations
#
# *Here I simulate genetic data under the coalescent in various graph topologies / migration surfaces and explore the fit of different ways to compute expected genetic distances on simulated genotypes*
#
# *re-working simulations, stay tuned!*
# ## Credits
#
# **ipynb website** was developed by:
#
# <NAME> and <NAME><br>
# Dept. of Human Genetics<br>
# University of Chicago<br>
#
# [<NAME>](https://github.com/jdblischak),
# [<NAME>](http://stephenslab.uchicago.edu) and others have
# also contributed to the development of this software.
| analysis/index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Word Embeddings
#
# This notebook will introduce you to word embeddings. An embedding is essentially a vector representation of some object or concept, in this case words. Word embeddings can be trained to create these vectors for a given vocabulary, and the vectors can then be used by other systems to perform AI tasks. Word embeddings are an ongoing field of research, and many new ideas appear every year.
#
# The specific type of word embeddings we will use here is called fastText and was developed by Facebook. They
# have pretrained embeddings freely available, so we do not need to train them ourselves; we can simply download theirs and work from there.
#
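# As a toy illustration of what "words as vectors" means, here is a minimal sketch of cosine similarity, the standard way to compare word vectors. The 3-dimensional vectors below are made up for the example; real fastText vectors have 300 dimensions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1 = same direction, near 0 = unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up toy vectors, NOT real fastText embeddings.
king = np.array([0.9, 0.8, 0.1])
queen = np.array([0.85, 0.75, 0.2])
banana = np.array([0.1, 0.2, 0.9])

print(cosine_similarity(king, queen))   # close to 1: similar words
print(cosine_similarity(king, banana))  # much smaller: unrelated words
```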
# #### Global Setup
try:
    with open("../global_setup.py") as setupfile:
        exec(setupfile.read())
except FileNotFoundError:
    pass
# #### Local Setup
from src.text.word_embedding.fast_text_usage import get_fasttext_model
from notebooks.exercises.src.text import word_embedding_viz
from notebooks.exercises.src.text import fasttext_document_visualisation
# ## FastText model
#
# Here we load the fastText model. You will have to download it if you haven't already, and the cell below will instruct you on where to find it. When the data is downloaded, fastText will be loaded into memory, which will also take a couple of seconds.
fasttext_model = get_fasttext_model(lang="en")
# ## Word Vectors
# We will now look at a ton of word vectors. The vectors have 300 dimensions! That is far too many for a human to visualize geometrically. We can, however, compute something called a Principal Component Analysis (PCA). You will later learn a lot about this method; in short, it lets us find the few dimensions with the most variance (the most movement of the vectors, where most of the "action" is). If we take the 3 dimensions with the most variance, we can plot them in a 3D plot!
#
# Let's try that!
# Below you can randomly sample some words and plot them in 3D PCA space.
# Watch out and don't pick too many samples! Your computer probably won't be able to handle it ;)
# %matplotlib notebook
visualizer = word_embedding_viz.CompleteWordEmbeddingVisualizer(fasttext_model=fasttext_model)
# Okay so this is definitely way too many words and dimensions for us to understand!
# Let's therefore look into some specific words in the next section.
# ### Looking into specific words
# Below we take out 2 dimensions based on the vectors between points.
#
# There are a couple of categories below which you can investigate, and you can include/exclude rows and columns of the table for the plot. You can also select two different kinds of vector planes (the view you are looking at). We can use PCA like we did in the last section, but we can also use a different method which is specialized for the differences of the vectors below (here called something with SVD difference).
# %matplotlib notebook
visualizer = word_embedding_viz.WordEmbeddingVisualizer(fasttext_model=fasttext_model)
# **Exercise**
# - *What method is best for plotting the differences of vectors?*
# Answer
# - *What method is best for plotting points alone?*
# Answer
# ## Document embeddings
# We now have an idea that word embeddings have some important information about words.
# We will try to use the embeddings of the words for analysing documents.
#
# Below are two tabs. The first tab allows you to search for words on Wikipedia and fetch the word-embeddings of the text. The second tab lets you write text-documents yourself.
#
# The texts are used to compute vectors representing the documents, which can then be plotted in 3D.
# Press "Do Document Embeddings" for showing the plot and use the dropdown menu for selecting what method used to create the document vectors.
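# One common baseline for producing such a document vector (sketched here as an assumption for illustration; not necessarily the method the visualiser uses) is to average the word vectors of the document:

```python
import numpy as np

# Toy 3-dimensional "embeddings" standing in for the 300-dimensional fastText ones.
toy_embeddings = {
    "data": np.array([1.0, 0.0, 0.0]),
    "science": np.array([0.0, 1.0, 0.0]),
    "fun": np.array([0.0, 0.0, 1.0]),
}

def document_vector(text, embeddings):
    """Average the vectors of the known words in `text` (a common baseline)."""
    vectors = [embeddings[w] for w in text.lower().split() if w in embeddings]
    return np.mean(vectors, axis=0) if vectors else np.zeros(3)

print(document_vector("Data science is fun", toy_embeddings))  # [1/3, 1/3, 1/3]
```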
# %matplotlib notebook
doc_view = fasttext_document_visualisation.DocumentEmbeddingVisualiser(fasttext_model=fasttext_model)
| notebooks/exercises/Word Embeddings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#PyTorch imports
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
#Numpy
import numpy as np
#Dataset
import torch.utils.data as utils
#Graphs
import matplotlib.pyplot as plt
#For paths
import sys
import os
import glob
#imread and resize
from skimage import io, transform
#split dataset
from sklearn.model_selection import train_test_split
#Timestamp
import datetime
#PyTorch Models
path = os.path.join(os.path.dirname(os.path.abspath('__file__')), "models")
sys.path.append(path)
from models import *
# -
root_path = os.path.join(os.path.dirname(os.path.abspath('__file__')), "kinect_leap_dataset", "acquisitions")
p_id = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8", "P9", "P10", "P11", "P12", "P13", "P14"]
g_id = ["G1", "G2", "G3", "G4", "G5", "G6", "G7", "G8", "G9", "G10"]
print(os.path.join(root_path, "P1", "G1"))
files = glob.glob(os.path.join(root_path, "P1", "G1", "*depth.png"))
print(files)
print(len(files))
# +
dataset = []
labels = []
for p in p_id:
    for g in g_id:
        path = os.path.join(root_path, p, g)
        image_names = glob.glob(os.path.join(path, "*depth.png"))
        for img_path in image_names:
            img = io.imread(img_path)
            img = transform.rescale(img, 1.0 / 4.0)
            img = np.resize(img, (1, 120, 160))
            dataset.append(img)
            # label 10 will be 0 (only the last character of the gesture id is used)
            tmp = np.zeros(10)
            tmp[int(g[-1])] = 1
            labels.append(tmp)
            # labels.append(int(g[-1]))
# -
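# A quick sanity check of the label encoding used above: only the last character of the gesture id is kept, so "G1".."G9" map to one-hot indices 1..9 while "G10" maps to index 0, which is what the "label 10 will be 0" comment refers to.

```python
# The last character of the gesture id decides the one-hot index.
for g, expected in [("G1", 1), ("G9", 9), ("G10", 0)]:
    assert int(g[-1]) == expected
print("G10 is encoded at index 0")
```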
# Convert the lists to numpy arrays of floats
dataset = np.array(dataset).astype(float)
labels = np.array(labels).astype(float)
print(type(labels))
print(type(dataset))
X_train, X_test_val, y_train, y_test_val = train_test_split(dataset, labels, test_size=0.2)
print(type(X_train))
X_val, X_test, y_val, y_test = train_test_split(X_test_val, y_test_val, test_size=0.5)
dataset = torch.from_numpy(dataset).float()
labels = torch.from_numpy(labels).float()
print(type(dataset))
print(type(labels))
X_train = torch.from_numpy(X_train).float()
y_train = torch.from_numpy(y_train).float()
X_test = torch.from_numpy(X_test).float()
y_test = torch.from_numpy(y_test).float()
X_val = torch.from_numpy(X_val).float()
y_val = torch.from_numpy(y_val).float()
my_dataset = utils.TensorDataset(dataset, labels) # create your dataset
my_dataloader = utils.DataLoader(my_dataset, batch_size=10, shuffle=True, num_workers=4) # create your dataloader
# +
my_dataset = utils.TensorDataset(X_train, y_train) # create your dataset
train_loader = utils.DataLoader(my_dataset, batch_size=10, shuffle=True, num_workers=4) # create your dataloader
my_dataset = utils.TensorDataset(X_val, y_val) # create your dataset
val_loader = utils.DataLoader(my_dataset, batch_size=10, shuffle=True, num_workers=4) # create your dataloader
my_dataset = utils.TensorDataset(X_test, y_test) # create your dataset
test_loader = utils.DataLoader(my_dataset, batch_size=10, shuffle=True, num_workers=4) # create your dataloader
# +
#input 640x480
#h=480, w=640
#downscaled by 4
#output 10 classes
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        # self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc1 = nn.Linear(16 * 37 * 27, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = self.conv1(x)
        x = F.relu(x)
        x = F.max_pool2d(x, (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features
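# Where does the `16 * 37 * 27` in `fc1` come from? A small sanity check of the arithmetic: with no padding, each 5x5 convolution trims 4 pixels from each spatial dimension, and each 2x2 max-pool halves it (the 120x160 input size comes from the rescaled depth images above).

```python
def conv_pool(h, w, kernel=5, pool=2):
    """Spatial size after a valid 5x5 convolution followed by 2x2 max-pooling."""
    h, w = h - kernel + 1, w - kernel + 1   # valid convolution
    return h // pool, w // pool             # max-pool with stride 2

h, w = conv_pool(120, 160)   # after conv1 + pool -> (58, 78)
h, w = conv_pool(h, w)       # after conv2 + pool -> (27, 37)
print(h, w, 16 * h * w)      # 27 37 15984, i.e. 16 * 37 * 27
```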
# +
model = Net()
print(model)
#if torch.cuda.is_available():
# net.cuda()
criterion = torch.nn.MSELoss(reduction='sum')  # size_average=False is deprecated; reduction='sum' is the modern equivalent
#optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
#criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0)
# +
#my_dataset
#my_dataloader
train_loss_history = []
train_acc_history = []
val_acc_history = []
val_loss_history = []
num_epochs = 100
iter_per_epoch = len(train_loader)
#224 lines -> 112 val output for log_nth=10000
#1120000 iter
log_nth = 16
if torch.cuda.is_available():
    model.cuda()

for epoch in range(num_epochs):  # loop over the dataset multiple times
    for i, (inputs, targets) in enumerate(train_loader, 1):
        inputs, targets = Variable(inputs.float()), Variable(targets.float())
        if torch.cuda.is_available():
            inputs, targets = inputs.cuda(), targets.cuda()

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()

        # print statistics
        train_loss_history.append(loss.data.cpu().numpy())
        if log_nth and i % log_nth == 0:
            last_log_nth_losses = train_loss_history[-log_nth:]
            train_loss = np.mean(last_log_nth_losses)
            print('[Iteration %d/%d] TRAIN loss: %.3f' %
                  (i + epoch * iter_per_epoch,
                   iter_per_epoch * num_epochs,
                   train_loss))

    # Training accuracy on the last batch of the epoch
    _, preds = torch.max(outputs, 1)
    _, target_indices = torch.max(targets, 1)
    train_acc = np.mean((preds == target_indices).data.cpu().numpy())
    train_acc_history.append(train_acc)
    if log_nth:
        print('[Epoch %d/%d] TRAIN acc/loss: %.3f/%.3f' % (epoch + 1,
                                                           num_epochs,
                                                           train_acc,
                                                           loss))

    # VALIDATION
    val_losses = []
    val_scores = []
    model.eval()
    for inputs, targets in val_loader:
        inputs, targets = Variable(inputs), Variable(targets)
        if torch.cuda.is_available():
            inputs, targets = inputs.cuda(), targets.cuda()

        outputs = model.forward(inputs)
        loss = criterion(outputs, targets)
        val_losses.append(loss.data.cpu().numpy())

        _, preds = torch.max(outputs, 1)
        _, target_indices = torch.max(targets, 1)
        scores = np.mean((preds == target_indices).data.cpu().numpy())
        val_scores.append(scores)
    model.train()

    val_acc, val_loss = np.mean(val_scores), np.mean(val_losses)
    val_acc_history.append(val_acc)
    val_loss_history.append(val_loss)
    if log_nth:
        print('[Epoch %d/%d] VAL acc/loss: %.3f/%.3f' % (epoch + 1,
                                                         num_epochs,
                                                         val_acc,
                                                         val_loss))

print('Finished Training')
# -
params = list(model.parameters())
currentDT = datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
path = os.path.join(os.path.dirname(os.path.abspath('__file__')), "saved_models", "lenett_depth_" + str(num_epochs) + "_" + currentDT + ".model")
torch.save(model.state_dict(), path)
# +
#80% -> 10 epochs
#87% -> 50 epochs
scores = []
for inputs, target in test_loader:
    inputs, targets = Variable(inputs), Variable(target)
    if torch.cuda.is_available():
        inputs, targets = inputs.cuda(), targets.cuda()
    outputs = model(inputs)
    _, preds = torch.max(outputs, 1)
    _, target_indices = torch.max(targets, 1)
    scores.extend((preds == target_indices).data.cpu().numpy())
print('Test set accuracy: %f' % np.mean(scores))
# +
import matplotlib.pyplot as plt
plt.plot(train_loss_history, '-')
#plt.plot(val_loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.show()
plt.plot(train_acc_history, '-o')
plt.plot(val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
# -
for inputs, target in test_loader:
    inputs, targets = Variable(inputs), Variable(target)
    if torch.cuda.is_available():
        inputs, targets = inputs.cuda(), targets.cuda()
    outputs = model(inputs)
    _, preds = torch.max(outputs, 1)
    _, target_indices = torch.max(targets, 1)
    scores.extend((preds == target_indices).data.cpu().numpy())
for inputs, target in test_loader:
    inputs, targets = Variable(inputs), Variable(target)
    if torch.cuda.is_available():
        inputs, targets = inputs.cuda(), targets.cuda()
    outputs = model(inputs)
    _, preds = torch.max(outputs, 1)
    _, target_indices = torch.max(targets, 1)
# Show one test image from the last batch with its prediction and target
numpy_inputs = inputs.data.cpu().numpy()
numpy_outputs = outputs.data.cpu().numpy()
numpy_targets = targets.data.cpu().numpy()
img = numpy_inputs[0]
img = img[0, :, :]
plt.imshow(img)
plt.title("Predicted: " + str(preds[0].item()) + " Target: " + str(target_indices[0].item()))
plt.show()
#currentDT = datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
#path = "images/img_" + currentDT + ".png"
#plt.imsave(path, img.astype(float))
| src/roboy_hand/gesture_recognition/old/real_dataset/leap_train_depth.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Challenge: Analyzing Text about Data Science
#
# In this example, let's do a simple exercise that covers all steps of a traditional data science process. You do not have to write any code, you can just click on the cells below to execute them and observe the result. As a challenge, you are encouraged to try this code out with different data.
#
# ## Goal
#
# In this lesson, we have been discussing different concepts related to Data Science. Let's try to discover more related concepts by doing some **text mining**. We will start with a text about Data Science, extract keywords from it, and then try to visualize the result.
#
# As a text, I will use the page on Data Science from Wikipedia:
#
# url = 'https://en.wikipedia.org/wiki/Data_science'
url = "https://en.wikipedia.org/wiki/V_(singer)"
# url = "https://en.wikipedia.org/wiki/Jungkook"
# ## Step 1: Getting the Data
#
# First step in every data science process is getting the data. We will use `requests` library to do that:
# +
import requests
text = requests.get(url).content.decode('utf-8')
print(text[:1000])
# -
# ## Step 2: Transforming the Data
#
# The next step is to convert the data into the form suitable for processing. In our case, we have downloaded HTML source code from the page, and we need to convert it into plain text.
#
# There are many ways this can be done. We will use the simplest built-in [HTMLParser](https://docs.python.org/3/library/html.parser.html) object from Python. We need to subclass the `HTMLParser` class and define code that will collect all text inside HTML tags, except `<script>` and `<style>` tags.
# +
from html.parser import HTMLParser
class MyHTMLParser(HTMLParser):
    script = False
    res = ""

    def handle_starttag(self, tag, attrs):
        if tag.lower() in ["script", "style"]:
            self.script = True

    def handle_endtag(self, tag):
        if tag.lower() in ["script", "style"]:
            self.script = False

    def handle_data(self, data):
        if str.strip(data) == "" or self.script:
            return
        self.res += ' ' + data.replace('[ edit ]', '')
parser = MyHTMLParser()
parser.feed(text)
text = parser.res
print(text[:1000])
# -
# ## Step 3: Getting Insights
#
# The most important step is to turn our data into some form from which we can draw insights. In our case, we want to extract keywords from the text and see which keywords are more meaningful.
#
# We will use Python library called [RAKE](https://github.com/aneesha/RAKE) for keyword extraction. First, let's install this library in case it is not present:
import sys
# !{sys.executable} -m pip install nlp_rake
# The main functionality is available from the `Rake` object, which we can customize using some parameters. In our case, we will set the minimum length of a keyword to 5 characters, the minimum frequency of a keyword in the document to 3, and the maximum number of words in a keyword to 2. Feel free to play around with other values and observe the result.
# For Taehyung
import nlp_rake
extractor = nlp_rake.Rake(max_words=2,min_freq=3,min_chars=5)
res = extractor.apply(text)
res
#
# We obtained a list of terms together with an associated degree of importance. As you can see, the most relevant disciplines, such as machine learning and big data, are present in the list at top positions.
#
# ## Step 4: Visualizing the Result
#
# People can interpret data best in visual form, so it often makes sense to visualize the data in order to draw some insights. We can use the `matplotlib` library in Python to plot a simple distribution of the keywords with their relevance:
# +
import matplotlib.pyplot as plt
def plot(pair_list):
    k, v = zip(*pair_list)
    plt.bar(range(len(k)), v)
    plt.xticks(range(len(k)), k, rotation='vertical')
    plt.show()
plot(res)
# -
# There is, however, an even better way to visualize word frequencies: using a **Word Cloud**. We will need to install another library to plot the word cloud from our keyword list.
# !{sys.executable} -m pip install wordcloud
# The `WordCloud` object takes either the original text or a pre-computed list of words with their frequencies, and returns an image which can then be displayed using `matplotlib`:
# +
from wordcloud import WordCloud
import matplotlib.pyplot as plt
wc = WordCloud(background_color='white',width=800,height=600)
plt.figure(figsize=(15,7))
plt.imshow(wc.generate_from_frequencies({ k:v for k,v in res }))
# -
# We can also pass in the original text to `WordCloud` - let's see if we are able to get similar result:
plt.figure(figsize=(15,7))
plt.imshow(wc.generate(text))
wc.generate(text).to_file('images/ds_wordcloud.png')
# You can see that the word cloud now looks more impressive, but it also contains a lot of noise (e.g. unrelated words such as `Retrieved on`). Also, we get fewer keywords that consist of two words, such as *data scientist* or *computer science*. This is because the RAKE algorithm does a much better job of selecting good keywords from the text. This example illustrates the importance of data pre-processing and cleaning, because a clear picture at the end will allow us to make better decisions.
#
# In this exercise we have gone through a simple process of extracting some meaning from Wikipedia text, in the form of keywords and a word cloud. This example is quite simple, but it demonstrates all the typical steps a data scientist takes when working with data, starting from data acquisition up to visualization.
#
# In our course we will discuss all those steps in detail.
#
#
| 1-Introduction/01-defining-data-science/notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Inf, NaN and Numerics
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
from numpy import log, exp, finfo
# -
# ## Exercise 1
#
# Mathematically, 1000 is obviously the correct answer, but `exp(1000)` overflows in double precision, so the computed result is `inf`
# +
log(exp(1000))
# -
# ## Exercise 2
#
eps = finfo(float).eps
0 == eps/10
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Exercise 3
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
x = (1 + eps/10) - 1
x == 0
# -
# ## Exercise 4
#
.1 == (.1 + eps/10)
# ## Exercise 5
#
x = 1.0 * 10**120
y = 1.0 * 10**120 + 1 * 10**102
print("type(x):")
print(type(x))
print("type(y):")
print(type(y))
x == y
# ## Exercise 6
#
# This problem and the previous are different since x and y are integers here
# which have arbitrary precision in Python
x = 10**120
y = 10**120 + 10**102
print("type(x):")
print(type(x))
print("type(y):")
print(type(y))
x==y
# ## Exercise 7
#
# _Note_: This solution uses a loop which is introduced in a later chapter.
x = 2.0
count = 0
while x != 1.0:
count += 1
x = 1.0 + (x-1.0)/2.0
print(count,x)
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
print("count:")
print(count)
print("x:")
print(x)
print("2**(-count+1):")
print(2**(-count+1))
print("eps:")
print(eps)
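# As a cross-check on the loop above (a sketch using NumPy, which these solutions already import from): for IEEE 754 double precision, machine epsilon is exactly 2 to the power -52.

```python
from numpy import finfo

# Machine epsilon for IEEE 754 double precision is 2**-52.
eps = finfo(float).eps
assert eps == 2.0 ** -52
print(eps)  # 2.220446049250313e-16
```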
| solutions/chapter09/inf_nan_and_numerics_solutions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# In the code segment below, we import the tools we are going to need and bring in our training data. After loading, we drop any rows that contain NaN values, then randomize the order of the data. The last line prints the shape of the array our data is stored in.
# +
import numpy as np
import keras
import pandas
from keras_tqdm import TQDMNotebookCallback
from sklearn.preprocessing import StandardScaler
data = np.array(pandas.read_csv("./training_noavg.csv", header=0))
## Have to drop all the rows that have nan values because they will not help with net
## clean out rows with nan values
data = data[~np.isnan(data).any(axis=1)]
np.random.shuffle(data)
print(data.shape)
# -
# Next, we create a `StandardScaler`. We split the feature columns of the data into `X` and use the scaler to transform them. We then take the location labels, put them in the array `labels`, and use `keras.utils.to_categorical` to turn them into a one-hot encoding.
# +
from sympy import *
init_printing(use_latex=True)
import matplotlib.pyplot as plt
# %matplotlib inline
## we will use scaled data
scaler = StandardScaler()
## when testing predicitions
## X = scaler.fit_transform( X )
## test = scaler.transform( test )
X = data[:,0:8]
X = scaler.fit_transform(X)
print(X.shape)
display(X)
labels = data[:,8]
print(labels.shape)
display(labels)
Y = keras.utils.to_categorical(labels, len(np.unique(labels)))
# -
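# As a minimal illustration of what one-hot encoding does (a NumPy-only sketch with made-up labels, independent of Keras and of our data set):

```python
import numpy as np

labels = np.array([0, 2, 1, 2])
num_classes = len(np.unique(labels))  # 3 distinct classes
# Row i of the identity matrix is the one-hot vector for class i.
one_hot = np.eye(num_classes)[labels]
print(one_hot.shape)  # (4, 3)
```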
input_size = X.shape[1]
output_size = Y.shape[1]
display(X.shape[1])
# The code below declares the network. We are using a multilayer net with two hidden layers, each given 64 hidden units; the first is also fed the input dimension, which is the number of feature columns (8 in this case). The first hidden layer uses the ReLU activation function and the second uses sigmoid; we chose these activation functions because they proved to work best for our data set, and we set the bias initializers to 0.01. The output layer uses the softmax activation function. We then compile the model using the categorical cross-entropy loss function with Adam as the optimizer. The output below the code segment is the summary of the model.
# +
model = keras.models.Sequential()
model.add(keras.layers.Dense(64,input_dim=8,activation='relu', bias_initializer=keras.initializers.Constant(value=0.01)))
model.add(keras.layers.Dense(64,activation='sigmoid', bias_initializer=keras.initializers.Constant(value=0.01)))
model.add(keras.layers.Dense(3,activation='softmax'))
#categorical_crossentropy
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
print(model.summary())
# -
# Now we train the model with our training data. For our data set, 100 epochs was enough, but a different data set may need more or fewer; the number of epochs is the number of times the model runs through the data set. We use a validation split to check the accuracy of the model on data it is not training on: a validation split holds back part of the training data to test with instead of training with it.
history = model.fit(X, Y,
batch_size=56,
epochs=100,
verbose=0,
callbacks=[TQDMNotebookCallback()],
validation_split = 0.2)
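# To make `validation_split` concrete (a sketch assuming Keras's documented behavior of holding out the last fraction of the samples - which is also why we shuffled the data earlier):

```python
import numpy as np

def split_validation(data, fraction=0.2):
    # Keras-style: the LAST `fraction` of samples is held out for validation.
    n_val = int(len(data) * fraction)
    return data[:-n_val], data[-n_val:]

samples = np.arange(10)
train, val = split_validation(samples, 0.2)
print(len(train), len(val))  # 8 2
```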
# +
plt.figure(1)
plt.subplot(211)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.subplot(212)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.tight_layout()
plt.show()
score = model.evaluate(X, Y, verbose=1)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# -
# Above are graphs that show how the model did in its training. As you can see, the accuracy is basically 100% and the loss is low, which is what we are looking for. Right below, we save the model and its weights to be used in the demo.
model.save_weights('./Demo/MLN.weights')
model.save('./Demo/MLN.model')
| Demo/Project_Multilayer_Net_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="7DmV5yyXpzn8" colab_type="text"
# # The Metabolic Disassembler
# + [markdown] id="aBvGQthupzoA" colab_type="text"
# ## Create a virtual environment in about 5 min.
# + id="_nE5uCA1pzoC" colab_type="code" colab={}
import sys
import os
import requests
import subprocess
import shutil
from logging import getLogger, StreamHandler, INFO
logger = getLogger(__name__)
logger.addHandler(StreamHandler())
logger.setLevel(INFO)
def install(
chunk_size=4096,
file_name="Miniconda3-4.7.12-Linux-x86_64.sh",
url_base="https://repo.continuum.io/miniconda/",
conda_path=os.path.expanduser(os.path.join("~", "miniconda")),
rdkit_version=None,
add_python_path=True,
force=False):
"""install rdkit from miniconda
```
import rdkit_installer
rdkit_installer.install()
```
"""
python_path = os.path.join(
conda_path,
"lib",
"python{0}.{1}".format(*sys.version_info),
"site-packages",
)
if add_python_path and python_path not in sys.path:
logger.info("add {} to PYTHONPATH".format(python_path))
sys.path.append(python_path)
if os.path.isdir(os.path.join(python_path, "rdkit")):
logger.info("rdkit is already installed")
if not force:
return
logger.info("force re-install")
url = url_base + file_name
python_version = "{0}.{1}.{2}".format(*sys.version_info)
logger.info("python version: {}".format(python_version))
if os.path.isdir(conda_path):
logger.warning("remove current miniconda")
shutil.rmtree(conda_path)
elif os.path.isfile(conda_path):
logger.warning("remove {}".format(conda_path))
os.remove(conda_path)
logger.info('fetching installer from {}'.format(url))
res = requests.get(url, stream=True)
res.raise_for_status()
with open(file_name, 'wb') as f:
for chunk in res.iter_content(chunk_size):
f.write(chunk)
logger.info('done')
logger.info('installing miniconda to {}'.format(conda_path))
subprocess.check_call(["bash", file_name, "-b", "-p", conda_path])
logger.info('done')
logger.info("installing rdkit")
subprocess.check_call([
os.path.join(conda_path, "bin", "conda"),
"install",
"--yes",
"-c", "rdkit",
"python=={}".format(python_version),
"rdkit" if rdkit_version is None else "rdkit=={}".format(rdkit_version)])
logger.info("done")
import rdkit
logger.info("rdkit-{} installation finished!".format(rdkit.__version__))
# + id="THIeeLQDN55Z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="62a68f31-f446-4b24-a641-b86318061296"
install(rdkit_version='2019.09.2.0', force=True)
# + id="CXsK_LYZME-A" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 819} outputId="e9951955-bce2-475f-cce3-920c123d45ab"
# !pip install metadisassembler
# + id="ABdGFylupzoG" colab_type="code" colab={}
import glob
from IPython.display import Image, display_png
from rdkit.Chem.Draw import IPythonConsole
import metadisassembler as medi
# + [markdown] id="GVhiu0UwpzoN" colab_type="text"
# ## Test1: [C05557 Isopenicillin N](https://www.genome.jp/dbget-bin/www_bget?C05557)
# ## Input a query by the KEGG compound identifier
# + id="SK0_TB9ypzoO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 167} outputId="bd26a387-870a-407f-ac8a-c742acf08a05"
# Create an instance and input a query molecule
test1 = medi.MetaDisassembler()
test1.input_query('C05557')
test1.cpds[0].mol
# + id="NQF3w4wLpzoT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="d86d592b-5742-4418-adbc-04ccc9ba7c94"
# Disassemble the query molecule
# It takes about 30 sec.
test1.disassemble()
# + id="895awTc4pzoY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 272} outputId="000ec838-b1f9-4821-db60-8b7df16a1c8e"
# List output files
sorted(glob.glob('./output/' + test1.name + '/*'))
# + id="2o_IrpaApzob" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 617} outputId="e2eb17da-4775-4683-f303-c69af0282584"
# Display the first image
display_png(Image('./output/' + test1.name + '/0.png'))
# + id="BLA-J-Mupzoe" colab_type="code" colab={}
bu_info = test1.output_matched_bu(result_id=0)
# + id="eK4jKKSjpzoh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 201} outputId="35424120-a9fd-455e-e889-4d0818df7493"
n = 0
print('Biosynthetic Unit IDs:')
print(bu_info[n]['bu_id'])
bu_info[n]['mol']
# + [markdown] id="rfqBdu6upzok" colab_type="text"
# ### If you want to get more information on this biosynthetic unit, please access the [KEGG COMPOUND database](https://www.genome.jp/kegg/compound/).
#
# C00956_01 → C00956
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00956 ★
#
# C01251_04 → C01251
# https://www.genome.jp/dbget-bin/www_bget?cpd:C01251
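# The mapping from a biosynthetic-unit ID such as `C00956_01` to its KEGG compound ID is just a matter of dropping the `_NN` variant suffix; a minimal sketch (the helper name is ours, not part of the package):

```python
def to_kegg_id(bu_id):
    # Drop the trailing "_NN" variant suffix, e.g. "C00956_01" -> "C00956".
    return bu_id.split('_')[0]

print(to_kegg_id('C00956_01'))  # C00956
print(to_kegg_id('C01251_04'))  # C01251
```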
# + id="Lm1nmrL8pzol" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 201} outputId="f7bc63d4-54d9-4d95-c314-50ef09ef065d"
n = 1
print('Biosynthetic Unit IDs:')
print(bu_info[n]['bu_id'])
bu_info[n]['mol']
# + [markdown] id="mcjHII6Gpzoo" colab_type="text"
# C00183_02 → C00183
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00183
# + id="va-GFDh1pzop" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 201} outputId="be9fd481-6fd4-4f97-dfae-1f7fe2c479f1"
n = 2
print('Biosynthetic Unit IDs:')
print(bu_info[n]['bu_id'])
bu_info[n]['mol']
# + [markdown] id="04nVeENppzot" colab_type="text"
# C00097_06 → C00097
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00097
# + id="2itEEuSkpzou" colab_type="code" colab={}
# + id="LevX2GwTpzow" colab_type="code" colab={}
# + [markdown] id="W7yBf3-lpzoz" colab_type="text"
# ***
# + [markdown] id="t_B4j09opzo0" colab_type="text"
# ## Test2: [Dihydroclavaminic acid](https://www.ebi.ac.uk/chebi/searchId.do?chebiId=CHEBI:15424)
# ## Input a query in the SMILES representation
# + id="jm7S8amFpzo1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 167} outputId="18333200-2b75-465f-95de-e7829a5b6800"
# Create an instance and input a query molecule
test2 = medi.MetaDisassembler()
test2.input_query('[H][C@]12CC(=O)N1[C@@H]([C@@H](CCN)O2)C(O)=O')
test2.cpds[0].mol
# + id="1nXBKPG9pzo5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="fd44dc32-a6dc-49bd-cd89-49ce2870f400"
# Disassemble the query molecule
# It takes about 2 min.
test2.disassemble()
# + id="ktiJH7fLpzo9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 238} outputId="4cd0af38-25bc-43ab-9e25-e18d8b6a3af8"
# List output files
sorted(glob.glob('./output/' + test2.name + '/*'))
# + id="eXngIlsApzpA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 617} outputId="98de9422-7984-4ec2-eeb2-1ff10f588eea"
# Display the first image
display_png(Image('./output/' + test2.name + '/0.png'))
# + id="BpHvuamUpzpD" colab_type="code" colab={}
bu_info = test2.output_matched_bu(result_id=0)
# + id="QMN9X9s8pzpK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 201} outputId="b53e0974-fed6-4092-88ed-15334ecf310d"
n = 0
print('Biosynthetic Unit IDs:')
print(bu_info[n]['bu_id'])
bu_info[n]['mol']
# + [markdown] id="uXGpjEb1pzpO" colab_type="text"
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00062 ★
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00077
# + id="bKUGjF2epzpP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 201} outputId="5a6f2f0d-83a8-4e92-a492-951e3113bb9a"
n = 1
print('Biosynthetic Unit IDs:')
print(bu_info[n]['bu_id'])
bu_info[n]['mol']
# + [markdown] id="mXDAenURpzpS" colab_type="text"
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00109
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00118 ★
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00546
# + id="0dygvVFZpzpT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 201} outputId="accba21f-c288-41a8-80ef-20bcb3fd6d6c"
n = 2
print('Biosynthetic Unit IDs:')
print(bu_info[n]['bu_id'])
bu_info[n]['mol']
# + id="V29ygcDdpzpV" colab_type="code" colab={}
# + id="GN0cCzZdpzpc" colab_type="code" colab={}
# + [markdown] id="tCcsY6rOpzpe" colab_type="text"
# ***
# + [markdown] id="YHUwx02Hpzpf" colab_type="text"
# ## Test3: [Curcumin diglucoside](https://www.ebi.ac.uk/chebi/searchId.do?chebiId=CHEBI:81315)
# ## Input a query in InChI format
# + id="yKGe7ij6pzpg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 167} outputId="0bfc18b8-33db-4372-b9cb-07e1a5b35338"
# Create an instance and input a query molecule
test3 = medi.MetaDisassembler()
test3.input_query('InChI=1S/C33H40O16/c1-44-22-11-16(5-9-20(22)46-32-30(42)28(40)26(38)24(14-34)48-32)3-7-18(36)13-19(37)8-4-17-6-10-21(23(12-17)45-2)47-33-31(43)29(41)27(39)25(15-35)49-33/h3-13,24-36,38-43H,14-15H2,1-2H3/b7-3+,8-4+,18-13-/t24-,25-,26-,27-,28+,29+,30-,31-,32-,33-/m1/s1')
test3.cpds[0].mol
# + id="aSXAgIgZpzpi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="4f399164-f4c2-4f4b-fa9e-2fc3c962ea59"
# Disassemble the query molecule
test3.disassemble()
# + id="DjtnsFtApzpl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 289} outputId="51ff497e-b2ea-4046-8dce-144ca6a00d49"
# List output files
sorted(glob.glob('./output/' + test3.name + '/*'))
# + id="Xwe2soGUpzpn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="715c4ef3-4493-46f3-b36e-5100a137f457"
# Display the first image
display_png(Image('./output/' + test3.name + '/0.png'))
# + id="QDBM57Fnpzpq" colab_type="code" colab={}
bu_info = test3.output_matched_bu(result_id=0)
# + id="3dMazSr1pzpt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 201} outputId="e9e0b301-99b0-4b1b-9d5a-8f80a4e04dbb"
n = 0
print('Biosynthetic Unit IDs:')
print(bu_info[n]['bu_id'])
bu_info[n]['mol']
# + [markdown] id="yQzQ6WBWpzpw" colab_type="text"
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00029
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00031 ★
# + id="-Ldd14u2pzpx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 201} outputId="86aa338e-09e1-4334-b6dd-e5285a526078"
n = 2
print('Biosynthetic Unit IDs:')
print(bu_info[n]['bu_id'])
bu_info[n]['mol']
# + [markdown] id="w1WuZphUpzpz" colab_type="text"
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00223 ★
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00811
# + id="XWEWsQ08pzp0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 201} outputId="ccf449eb-5314-4213-ad29-178992d6466e"
n = 3
print('Biosynthetic Unit IDs:')
print(bu_info[n]['bu_id'])
bu_info[n]['mol']
# + [markdown] id="4q-JgpUJpzp2" colab_type="text"
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00223
# + id="E_2vv_sTpzp2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 201} outputId="9b3b8a38-e94d-4489-b455-8293b3f95150"
n = 4
print('Biosynthetic Unit IDs:')
print(bu_info[n]['bu_id'])
bu_info[n]['mol']
# + [markdown] id="l2fw_HGepzp4" colab_type="text"
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00083
# + id="Jyfb7TDypzp5" colab_type="code" colab={}
# + id="tNN1NTSgpzp9" colab_type="code" colab={}
# + [markdown] id="9-jnWjz7pzp_" colab_type="text"
# ***
# + [markdown] id="YS5nef5FpzqA" colab_type="text"
# ## Test4: [C00011250 Fumigaclavine C](http://kanaya.naist.jp/knapsack_jsp/information.jsp?word=C00011250)
# ## Input a query by the KNApSAcK compound identifier
# + id="xqYnz7KcpzqB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 167} outputId="7619359c-0751-43e2-a325-a010e243d8b0"
# Create an instance and input a query molecule
test4 = medi.MetaDisassembler()
test4.input_query('C00011250')
test4.cpds[0].mol
# + id="-hYL3A1HpzqC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="147a7d17-f481-4fd7-a071-19022f5e2bd2"
# Disassemble the query molecule
test4.disassemble()
# + id="W1n3cXKWpzqE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="e4a0a6bf-1282-421a-9d89-160106d44fc9"
# List output files
sorted(glob.glob('./output/' + test4.name + '/*'))
# + [markdown] id="5Txpwz4EpzqG" colab_type="text"
# ### The result at the top ("0.png") is not necessarily the correct combination.
# ### In this case, the most correct one is the third result ("2.png").
# + id="F2Q-1ZICpzqH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 617} outputId="f2835d1b-8d2f-4791-c72e-6c8927f21e9d"
# Display the "third" image
display_png(Image('./output/' + test4.name + '/2.png'))
# + id="zB-mubFYpzqK" colab_type="code" colab={}
bu_info = test4.output_matched_bu(result_id=2)
# + id="VcVQmSVqpzqM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 201} outputId="84c54307-4baf-43a6-8812-2edcb3f0ddbf"
n = 0
print('Biosynthetic Unit IDs:')
print(bu_info[n]['bu_id'])
bu_info[n]['mol']
# + [markdown] id="nRXmxoeIpzqR" colab_type="text"
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00078 ★
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00398
# + id="6tPRNSMnpzqS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 201} outputId="9e7cf3e3-189a-490a-c753-839041add7ff"
n = 1
print('Biosynthetic Unit IDs:')
print(bu_info[n]['bu_id'])
bu_info[n]['mol']
# + [markdown] id="LzFETFd6pzqV" colab_type="text"
# https://www.genome.jp/dbget-bin/www_bget?cpd:C16521
# + id="d4D9GhLopzqW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 201} outputId="3ea809aa-ceec-4fc9-a96c-bbacada9292d"
n = 2
print('Biosynthetic Unit IDs:')
print(bu_info[n]['bu_id'])
bu_info[n]['mol']
# + [markdown] id="pF3lKhFspzqa" colab_type="text"
# https://www.genome.jp/dbget-bin/www_bget?cpd:C16521
# + id="d88vzhz4pzqb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 201} outputId="d3644f52-0793-4645-9c69-011cecad30e8"
n = 3
print('Biosynthetic Unit IDs:')
print(bu_info[n]['bu_id'])
bu_info[n]['mol']
# + [markdown] id="vW6cKwaMpzqe" colab_type="text"
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00022
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00024 ★
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00083
# https://www.genome.jp/dbget-bin/www_bget?cpd:C00084
# + id="eZgMRVKzpzqf" colab_type="code" colab={}
# + id="a1K-Mn_Hpzqh" colab_type="code" colab={}
# + [markdown] id="dHII9vjJpzqm" colab_type="text"
# ***
# + [markdown] id="PdunguFKpzqm" colab_type="text"
# ## Save files to your local computer
# + id="PWJPA-ZBpzqn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 986} outputId="c652daaf-fda8-4a04-d33b-3799634e8413"
# !zip -r /content/result.zip /content/output
# + id="8zDAdC6Lpzqp" colab_type="code" colab={}
from google.colab import files
files.download('/content/result.zip')
# + id="fGAj62UPpzqq" colab_type="code" colab={}
# + id="-gRWWczipzqu" colab_type="code" colab={}
| jupyter_usecase/basic_usage_in_colab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Class Coding Lab: Introduction to Programming
#
# The goals of this lab are to help you to understand:
#
# 1. How to turn in your lab and homework
# 2. the Jupyter programming environments
# 3. basic Python Syntax
# 4. variables and their use
# 5. how to sequence instructions together into a cohesive program
# 6. the input() function for input and print() function for output
#
# ## Let's start with an example: Hello, world!
#
# This program asks for your name as input, then says hello to you as output. Most often it's the first program you write when learning a new programming language.
#
# TO RUN THIS CODE: Click in the cell below and click the run cell button.
#
# NOTE: After the code executes, you will see a sequence number next to the code and output below the code itself. This is your indication the code in the cell has run. You must run all code cells in the notebook for full credit.
#
# + code_cell_type="run_code"
your_name = input("What is your name? ")
print('Hello there',your_name)
# -
# Believe it or not there's a lot going on in this simple two-line program, so let's break it down.
#
# - **The first line:**
# - Asks you for input, prompting you with `What is your Name?`
# - It then stores your input in the variable `your_name`
# - **The second line:**
# - prints out the following text: `Hello there`
# - then prints out the contents of the variable `your_name`
#
# At this point you might have a few questions. What is a variable? Why do I need it? Why is this two lines? Etc... All will be revealed in time.
# ## Variables
#
# Variables are names in our code which store values. I think of variables as cardboard boxes. Boxes hold things. Variables hold things. The name of the variable is on the outside of the box (that way you know which box it is), and the value of the variable represents the contents of the box.
#
# ### Variable Assignment
#
# **Assignment** is an operation where we store data in our variable. It's like packing something up in the box.
#
# In this example we assign the value "USA" to the variable **country**
# + code_cell_type="run_code"
# Here's an example of variable assignment.
country = 'USA'
# -
# ### Variable Access
#
# What good is storing data if you cannot retrieve it? Lucky for us, retrieving the data in variable is as simple as calling its name:
# + code_cell_type="run_code"
country # Run this cell. It should say 'USA'
# -
# At this point you might be thinking: Can I overwrite a variable? The answer, of course, is yes! Just re-assign it a different value:
# + code_cell_type="run_code"
country = 'Canada'
# -
# You can also access a variable multiple times. Each time it simply gives you its value:
# + code_cell_type="run_code"
country, country, country
# -
# ### The Purpose Of Variables
#
# Variables play a vital role in programming. Computer instructions have no memory of each other. That is, one line of code has no idea what is happening in the other lines of code. The only way we can "connect" what happens from one line to the next is through variables.
#
# For example, if we re-write the Hello, World program at the top of the page without variables, we get the following:
#
# + code_cell_type="run_code"
input("What is your name? ")
print('Hello there')
# -
# When you execute this program, notice there is no longer a connection between the input and the output. In fact, the input on line 1 doesn't matter because the output on line 2 doesn't know about it. It cannot because we never stored the results of the input into a variable!
# ### 1.1 You Code
#
# Re-write the program above to input a name and then say hello there, name. It will need to store the first line in a variable so that it can be printed on the 2nd line.
# + code_cell_type="write_code" label="1.1" solution=["x = input(\"What is your name? \")\n", "print(\"Hello there\",x)\n"]
# TODO: Write your code here
# -
# ### What's in a name? Um, EVERYTHING
#
# Computer code serves two equally important purposes:
#
# 1. To solve a problem (obviously)
# 2. To communicate how you solved the problem to another person (hmmm... I didn't think of that!)
#
# If our code does something useful, like land a rocket, predict the weather, or calculate month-end account balances then the chances are 100% certain that *someone else will need to read and understand our code.*
#
# Therefore it's just as important we develop code that is easily understood by both the computer and our colleagues.
#
# This starts with the names we choose for our variables. Consider the following program:
# + code_cell_type="run_code"
y = input("Enter your city: ")
x = input("Enter your state: ")
print(x,y,'is a nice place to live')
# -
# What do `x` and `y` represent? Is there a semantic (design) error in this program?
#
# You might find it easy to figure out the answers to these questions, but consider this more human-friendly version:
# + code_cell_type="run_code"
city = input("Enter your city: ")
state = input("Enter your state: ")
print(city, state, 'is a nice place to live')
# -
# Do the aptly-named variables make it easier to find the semantic errors in this second version? OF COURSE THEY DO!!!
#
# ### 1.2 You Code
#
# **Debug** the program below (remove errors to get it working). When it is correct it should input your name and your age and the print name and age on a single line. Make sure you use aptly-named variables!!!
#
# Example of the Program running:
# ```
# Enter your name: Mike
# Enter your age: 25
# Mike is 25
# ```
# In the above example `Mike` was the entered name, and `25` was the entered age.
# + code_cell_type="debug_code" label="1.2" solution=["name = input(\"Enter your name: \")\n", "age = input(\"Enter your age: \")\n", "print(name, \"is\", age)\n"]
# TODO: Debug this code here.
name = input "Enter your name: "
foo = input("Enter your age: ")
print(name, "is" )
# -
# ### 1.3 You Code
#
# Now try to write a program which asks for two separate inputs: your first name and your last name. The program should then output `Hello` with your first name and last name.
#
# For example if you enter `Mike` for the first name and `Fudge` for the last name the program should output `Hello Mike Fudge`
#
# **HINTS**
#
# - Use appropriate variable names. If you need to create a two word variable name use an underscore in place of the space between the words. eg. `two_words`
# - You will need a separate set of inputs for each name.
#
# + code_cell_type="write_code" label="1.3" solution=["first_name = input(\"Enter your first name:\")\n", "last_name = input(\"Enter your last name:\")\n", "print(\"Hello\",first_name, last_name)\n"]
# TODO: write your code here
# -
# ### Variable Concatenation: Your First Operator
#
# The `+` symbol is used to combine two variables containing text values. Consider the following example:
# + code_cell_type="run_code"
prefix = "re"
suffix = "ment"
root = input("Enter a root word, like 'ship': ")
print( prefix + root + suffix)
# + code_cell_type="run_code"
first = input("Enter first name: ")
last = input("enter last name: ")
name_last_first = last + "," + first
print(name_last_first)
# -
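# One caveat worth noting: the `+` operator only concatenates strings with other strings. To combine a string with a number, convert the number first using `str()` (a small sketch):

```python
age = 25
# "You are " + age would raise a TypeError; convert the number first.
message = "You are " + str(age) + " years old"
print(message)  # You are 25 years old
```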
# ### 1.4 You Code
#
# Write a program to prompt for three colors as input, then outputs those three colors in order they were entered, informing me which one was the middle (2nd entered) color.
#
# For example if you were to input `red` then `green` then `blue`
#
# the program would output:
# `Your colors are: red, green, and blue.`
# `The middle color is green.`
#
# **HINTS**
#
# - you'll need three variables one for each input
# - you should try to make the program output like my example. This includes commas and the word `and`.
# - name your variables appropriately!
# - use the `+` operator.
#
# + code_cell_type="write_code" label="1.4" solution=["first=input(\"What is your favorite color?\")\n", "second=input(\"What is your second favorite color?\")\n", "third=input(\"What is your third favorite color?\")\n", "print(\"Your colors are: \" + first + \", \" + second + \", and \" + third + \".\")\n", "print(\"The middle color is \" + second + \".\") \n"]
# TODO: write your code here
# -
# ### F-Strings
#
# In Python 3.7, f-strings were introduced to make it easier to format string literals in the `print()` statement.
#
# Here's how it works:
#
# - Put an `f` in front of the string literal, like this: `f"`
# - For any variable you want to print, enclose in `{curly braces}` within the string literal.
# - At run-time the variable in `{curly braces}` is replaced with its value! This is called **string interpolation**.
#
# For example:
# + code_cell_type="run_code"
name = "Mary"
major = "Data Science"
gpa = "4.0"
print(f"{name} is a {major} major. Her gpa is {gpa}")
# -
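# F-strings can interpolate expressions too, not just plain variables (a small sketch beyond what the lab covers):

```python
qty = 3
price = 2.5
# Any Python expression is allowed inside the curly braces.
print(f"Total: {qty * price}")  # Total: 7.5
```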
# ### 1.5 You Code
#
# Re-write the last program (1.4 You Code) to print using f-strings! As good practice, do not copy and paste code, instead re-write it. This will result in fewer bugs (mistakes) in your code.
# + code_cell_type="write_code" label="1.5" solution=["first=input(\"What is your favorite color?\")\n", "second=input(\"What is your second favorite color?\")\n", "third=input(\"What is your third favorite color?\")\n", "print(f\"Your colors are: {first}, {second}, and {third}.\")\n", "print(f\"The middle color is {second}.\")\n"]
# TODO: write your code here
# -
# ## Metacognition
# + [markdown] label="comfort_cell"
#
# ### Rate your comfort level with this week's material so far.
#
# **1** ==> I don't understand this at all yet and need extra help. If you choose this, please try to articulate what you do not understand, to the best of your ability, in the questions and comments section below.
# **2** ==> I can do this with help or guidance from other people or resources. If you choose this level, please indicate HOW that person or resource helped you in the questions and comments section below.
# **3** ==> I can do this on my own without any help.
# **4** ==> I can do this on my own and can explain/teach how to do it to others.
#
# `--== Double-Click Here then Enter a Number 1 through 4 Below This Line ==--`
#
#
# + [markdown] label="questions_cell"
# ### Questions And Comments
#
# Record any questions or comments you have about this lab that you would like to discuss in your recitation. It is expected that you will have questions if you did not complete the code sections correctly. Learning how to articulate what you do not understand is an important skill of critical thinking. Write them down here so that you remember to ask them in your recitation. We expect you will take responsibility for your learning and ask questions in class.
#
# `--== Double-click Here then Enter Your Questions Below this Line ==--`
#
#
# -
# ## How Do I Hand In My Work?
#
# FIRST AND FOREMOST: **Save Your work!** Yes, it auto-saves, but you should get in the habit of saving before submitting. From the menu, choose File --> Save Notebook. Or you can use the shortcut keys `CTRL+S`
#
# Handing in your Homework and Labs is easy! All you need to do is run the code cell below and follow the directions. This code sends your assignment to a private cloud where your instructor can download a copy of it at the time of submission.
#
# Once the assignment is graded, you will see a grade and feedback / comments in Blackboard.
#
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
| content/lessons/01-Intro/LAB-Intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Air Quality Tensor
# * `<date> <location> <air pollutants> (measurement)`
# * Beijing Air Quality
# * 2,454,305 out of 2,524,536 (35,063 * 12 * 6)
# * Korea Air Quality
# * 11,270,028 out of 18,368,364 (9,478 * 323 * 6)
# * Madrid Air Quality
# * 8,036,759 out of 21,587,328 (64,248 * 24 * 14)
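#
# A quick sketch to verify the fill ratios implied by the counts above (numbers copied from this list):

```python
# Observed vs. total entries for each air-quality tensor
counts = {
    'beijing': (2_454_305, 35_063 * 12 * 6),
    'korea':   (11_270_028, 9_478 * 323 * 6),
    'madrid':  (8_036_759, 64_248 * 24 * 14),
}
for name, (filled, total) in counts.items():
    print(f"{name}: {filled / total:.1%} observed")
```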
# +
import csv
import time
import numpy as np
import pandas as pd
beijing_df = pd.read_csv('../Data/air_quality/BeijingAirQuality/beijing.tensor', delimiter='\t', header=None)
korea_df = pd.read_csv('../Data/air_quality/KoreaAirQuality/korea_airquality.tensor', delimiter='\t', header=None)
madrid_df = pd.read_csv('../Data/air_quality/MadridAirQuality/1hour_madrid.tensor', delimiter='\t', header=None)
# -
def get_tensor(df):
    start = time.time()
    # Tensor dimensions: one more than the max index along each mode
    dims = df[[0, 1, 2]].max() + 1
    tensor = np.empty(dims) * np.nan
    # Fill in the observed entries
    for i, row in df.iterrows():
        indices = tuple(np.int64(np.asarray(row[:-1])))
        tensor[indices] = np.double(row[3])
    # Impute missing entries with the per-pollutant mean
    avg = []
    for i in range(tensor.shape[2]):
        avg.append(np.nanmean(tensor[:, :, i]))
    inds = np.where(np.isnan(tensor))
    for ind in zip(inds[0], inds[1], inds[2]):
        tensor[ind] = avg[ind[-1]]
    print(time.time() - start)
    return tensor
beijing_tensor = get_tensor(beijing_df)
korea_tensor = get_tensor(korea_df)
madrid_tensor = get_tensor(madrid_df)
np.where(np.isnan(beijing_tensor))
np.where(np.isnan(korea_tensor))
np.where(np.isnan(madrid_tensor))
print(beijing_tensor.shape)
print(korea_tensor.shape)
print(madrid_tensor.shape)
| Data/.ipynb_checkpoints/0_data_processing-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# Description:
# Given a non-empty array a0, a1, a2, ..., an-1, where 0 ≤ ai < 2^31,
# find the maximum result of ai XOR aj, where 0 ≤ i, j < n. Can you do this in O(n) runtime?
#
# Example:
# Input: [3, 10, 5, 25, 2, 8]
# Output: 28
# Explanation: The maximum result is 5 ^ 25 = 28.
# -
class Solution:
def findMaximumXOR(self, nums) -> int:
pass
nums_ = [3, 10, 5, 25, 2, 8]
solution = Solution()
solution.findMaximumXOR(nums_)
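# The stub above is unimplemented; one standard O(n·w) approach (w = 31 bits here) builds the
# answer greedily, bit by bit, using sets of masked prefixes. A sketch — the standalone function
# name `find_maximum_xor` is my own, not part of the original notebook:

```python
def find_maximum_xor(nums):
    # Decide each bit of the answer from most to least significant.
    ans = mask = 0
    for bit in range(30, -1, -1):
        mask |= 1 << bit                      # keep only the bits decided so far
        prefixes = {n & mask for n in nums}
        candidate = ans | (1 << bit)          # optimistically set this bit
        # a ^ b == candidate  <=>  candidate ^ a == b for some prefixes a, b
        if any(candidate ^ p in prefixes for p in prefixes):
            ans = candidate
    return ans

print(find_maximum_xor([3, 10, 5, 25, 2, 8]))  # 28
```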
| not done/421. Maximum XOR of Two Numbers in an Array.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
from cbrain.imports import *
from cbrain.model_diagnostics import *
# # Old stuff
# ## New unified class
range_dict = {
'SPDT': [-5e-4, 5e-4],
'SPDQ': [-5e-7, 5e-7],
'QRL': [-2e-4, 2e-4],
'QRS': [-1.2e-4, 1.2e-4],
'TPHYSTND_NORAD': [-5e-4, 5e-4],
'PHQ': [-5e-7, 5e-7],
}
class ModelDiagnostics(object):
"""
Two basic functionalities:
1. Plotting --> need preds and truth of selected time step in original values for one var
2. Global statistics --> also from denormalized values
Differences between TF and Keras:
1. Data loading: For Keras I will use my data_generator (much faster),
for TF I will read and process the raw aqua files
2. Output normalization
3. Output shape: 1D for Keras, 2D for TF --> Use TF convention
NOTE: This cannot handle outputs with one level.
"""
def __init__(self, is_tf, model_path,
k_fpath=None, k_tpath=None, k_npath=None, k_norms=None,
tf_filepattern=None, tf_fvars=None, tf_tvars=None, tf_meanpath=None,
tf_stdpath=None, nlat=64, nlon=128, nlev=30, ntime=48):
# Basic setup
self.is_tf = is_tf; self.is_k = not is_tf
self.model = keras.models.load_model(model_path, custom_objects={"tf": tf})
self.nlat, self.nlon, self.nlev = (nlat, nlon, nlev)
self.ngeo = nlat * nlon
self.ntime = ntime
# Get variable names and open arrays
if self.is_k:
self.k_norm = h5py.File(k_npath, 'r')
self._get_k_norm_arrs(*k_norms)
self.k_features = h5py.File(k_fpath, 'r')
self.k_targets = h5py.File(k_tpath, 'r')
self.fvars, self.tvars = self._get_k_vars()
else:
self.fvars, self.tvars = (tf_fvars, tf_tvars)
self.tf_mean, self.tf_std = (nc.Dataset(tf_meanpath), nc.Dataset(tf_stdpath))
self.tf_files = sorted(glob(tf_filepattern))
# Init helper functions
def _get_k_vars(self):
"""
Return unique variable names for features and targets in correct order.
"""
return [list(dict.fromkeys(
[f.split('_lev')[0] for f in list(self.k_norm[f'{a}_names'][:])]
)) for a in ['feature', 'target']]
def _get_k_norm_arrs(self, fsub, fdiv, tsub, tmult):
"""
Allocate normalization arrays for keras.
"""
self.fsub = 0. if fsub is None else self.k_norm[fsub]
if fdiv is None: self.fdiv = 1.
elif fdiv == 'range':
self.fdiv = self.k_norm['feature_maxs'] - self.k_norm['feature_mins']
elif fdiv == 'max_rs': self.fdiv = np.maximum(
self.k_norm['feature_maxs'][:] - self.k_norm['feature_mins'][:],
self.k_norm['feature_stds_by_var'])
else: self.fdiv = self.k_norm['fdiv']
self.tsub = 0. if tsub is None else self.k_norm[tsub]
        self.tmult = 1. if tmult is None else self.k_norm[tmult]
def get_pt(self, itime, var=None):
"""
Returns denormalized predictions and truth for a given time step and var.
[lat, lon, lev] or [lat, lon, var, lev] if var is None
"""
if self.is_k: p, t = self._get_k_pt(itime, var)
else: p, t = self._get_tf_pt(itime, var)
return p, t
def _get_k_pt(self, itime, var=None):
"""Keras version"""
f = (self.k_features['features'][itime*self.ngeo:(itime+1)*self.ngeo] -
self.fsub) / self.fdiv
p = self.model.predict_on_batch(f) / self.tmult + self.tsub
t = self.k_targets['targets'][itime*self.ngeo:(itime+1)*self.ngeo]
# At this stage they have shape [ngeo, stacked_levs]
return self._k_reshape(p, var), self._k_reshape(t, var)
def _get_tf_pt(self, itime=None, var=None, idate=None):
"""Tensorflow version
If idate is given, instead of itime, return the entire file
"""
if idate is None:
idate = itime // self.ntime; itime_tmp = itime % self.ntime
else: itime_tmp = None
f = self._get_tf_f_or_t(idate, itime_tmp, 'f')
p = self.model.predict_on_batch(f)
t = self._get_tf_f_or_t(idate, itime_tmp, 't', normalize=False)
p, t = (self._tf_reshape(p), self._tf_reshape(t))
if var is None:
return self._tf_denorm(p), t
else:
var_idx = self.tvars.index(var)
return self._tf_denorm(p)[..., var_idx, :], t[..., var_idx, :]
def _k_reshape(self, x, var=None):
"""For targets only atm.
[ngeo, stacked_levs] --> [lat, lon, var, lev]
Select var if not None.
"""
x = x.reshape(self.nlat, self.nlon, -1, self.nlev)
if var is not None: x = x[:, :, self.tvars.index(var), :]
return x
def _tf_reshape(self, x):
"""[ngeo, var, nlev] -- > [lat, lon, var, lev]
or [ngeo*ntime, var, nlev] --> [ntime, lat, lon, var, lev]
"""
ntar = len(self.tvars)
if x.shape[0] == self.ngeo:
return x.reshape(self.nlat, self.nlon, ntar, self.nlev)[:, :, :, ::-1]
else:
return x.reshape(self.ntime, self.nlat, self.nlon, ntar, self.nlev)[..., ::-1]
def _get_tf_f_or_t(self, idate, itime, f_or_t, normalize=True):
with nc.Dataset(self.tf_files[idate], 'r') as ds:
arr = []
vars = self.fvars if f_or_t == 'f' else self.tvars
for var in vars:
da = ds[var][:]
if normalize: da = (da - self.tf_mean[var][:]) / self.tf_std[var][:]
if da.ndim == 4: # 3D variables [time, lev, lat, lon] --> [sample, lev]
a = np.rollaxis(da, 1, 4).reshape(-1, 30)
elif da.ndim == 3: # 2D variables [time, lat, lon]
a = np.rollaxis(np.tile(da.reshape(-1), (30, 1)), 0, 2)
elif da.ndim == 1: # lat
a = np.rollaxis(np.tile(da, (self.ntime, 30, self.nlon, 1)),
1, 4).reshape(-1, 30)
else:
raise Exception('Incompatible number of dimensions')
arr.append(a)
arr = np.expand_dims(np.rollaxis(np.array(arr), 0, 2), 3) # [sample, feature, lev, 1]
arr = arr[:, :, -self.nlev:][:, :, ::-1]
if itime is not None: arr = arr[itime*self.ngeo:(itime+1)*self.ngeo]
return arr
def _tf_denorm(self, x, f_or_t='t'):
for i, var in enumerate(self.fvars if f_or_t == 'f' else self.tvars):
m, s = [np.rollaxis(ds[var][-self.nlev:], 0, 3)
for ds in [self.tf_mean, self.tf_std]]
x[..., i, :] = x[..., i, :] * s + m
return x
# Plotting functions
def plot_double_xy(self, itime, ilev, var, **kwargs):
p, t = self.get_pt(itime, var)
return self.plot_double_slice(p[:, :, ilev], t[:, :, ilev], **kwargs)
def plot_double_yz(self, itime, ilon, var, **kwargs):
p, t = self.get_pt(itime, var)
return self.plot_double_slice(p[:, ilon, :].T, t[:, ilon, :].T, **kwargs)
def plot_double_slice(self, p, t, title='', unit='', **kwargs):
fig, axes = plt.subplots(1, 2, figsize=(12, 5))
I1 = axes[0].imshow(p, **kwargs)
I2 = axes[1].imshow(t, **kwargs)
cb1 = fig.colorbar(I1, ax=axes[0], orientation='horizontal')
cb2 = fig.colorbar(I2, ax=axes[1], orientation='horizontal')
cb1.set_label(unit); cb2.set_label(unit)
axes[0].set_title('CBRAIN Predictions')
axes[1].set_title('SP-CAM Truth')
fig.suptitle(title)
return fig
def plot_slice(self, x, title='', unit='', **kwargs):
fig, ax = plt.subplots(1, 1, figsize=(6, 5))
I = ax.imshow(x, **kwargs)
cb = fig.colorbar(I, ax=ax, orientation='horizontal')
cb.set_label(unit)
ax.set_title(title)
return fig
# Statistics computation
def compute_stats(self, niter=None):
"""Compute statistics in for [lat, lon, var, lev]"""
if self.is_k: nt = self.k_features['features'].shape[0] // self.ngeo
else: nt = len(self.tf_files) * self.ntime
if niter is not None: nt = niter
# Allocate stats arrays
psum = np.zeros((self.nlat, self.nlon, len(self.tvars), self.nlev))
tsum = np.copy(psum); sse = np.copy(psum)
psqsum = np.copy(psum); tsqsum = np.copy(psum)
for itime in tqdm(range(nt)):
if self.is_k:
p, t = self.get_pt(itime) # [lat, lon, var, lev]
else: # For TF load entire aqua file at once!
itmp = itime % self.ntime; idate = itime // self.ntime
if itmp == 0:
pday, tday = self._get_tf_pt(idate=idate)
p, t = (pday[itmp], tday[itmp])
# Compute statistics
psum += p; tsum += t
psqsum += p ** 2; tsqsum += t ** 2
sse += (t - p) ** 2
# Compute average statistics
self.stats = {}
pmean = psum / nt; tmean = tsum / nt
self.stats['bias'] = pmean - tmean
self.stats['mse'] = sse / nt
# -1 for sample variance
self.stats['pred_var'] = (psqsum / nt - pmean ** 2) * nt / (nt - 1)
self.stats['true_var'] = (tsqsum / nt - tmean ** 2) * nt / (nt - 1)
self.stats['r2'] = 1. - (self.stats['mse'] / self.stats['true_var'])
def mean_stats(self, cutoff_level=0):
"""Get average statistics for each variable and returns dataframe"""
df = pd.DataFrame(index=self.tvars + ['all'],
columns=list(self.stats.keys()))
for ivar, var in enumerate(self.tvars):
for stat_name, stat in self.stats.items():
# Stats have shape [lat, lon, var, lev]
df.loc[var, stat_name] = np.mean(stat[:, :, ivar])
# compute r2
df.loc[var, 'r2_v2'] = self._compute_r2(
self.stats['mse'][:, :, ivar], self.stats['true_var'][:, :, ivar], cutoff_level)
# Compute r2 for all vars
df.loc['all', 'r2_v2'] = self._compute_r2(
self.stats['mse'], self.stats['true_var'], cutoff_level)
self.stats_df = df
return df
# Stats helper functions
def _compute_r2(self, mse, true_var, cutoff_level=0):
"""r2 here is defined as the average r2 over each level
mse and true_var have dims [lat, lon, lev]
"""
lev_r2 = 1. - (np.mean(mse, axis=(0, 1)) / np.mean(true_var, axis=(0, 1)))
return np.mean(lev_r2[..., cutoff_level:])
# ### Keras
kmodel_path = '/export/home/srasp/repositories/CBRAIN-CAM/saved_models/B018_purecrm_essv2_nonorm_sample1_max_rs.h5'
pp_dir = '/scratch/srasp/preprocessed_data/'
k_fpath = f'{pp_dir}purecrm_essv2_nonorm_valid_sample1_features.nc'
k_tpath = f'{pp_dir}purecrm_essv2_nonorm_valid_sample1_targets.nc'
k_npath = f'{pp_dir}purecrm_essv2_nonorm_train_sample1_norm.nc'
k_norms = ('feature_means', 'max_rs', None, 'target_conv')
d = ModelDiagnostics(False, kmodel_path, k_fpath=k_fpath, k_tpath=k_tpath, k_npath=k_npath,
k_norms=k_norms)
d.compute_stats(5)
d.mean_stats(9)
f = d.plot_double_yz(100, 20, 'SPDQ', cmap='bwr')
d.plot_slice(np.mean(d.stats['r2'][:, :, 0].T, axis=(1)))
# ### TF
model_dir = '/export/home/srasp/TF_models/'
model_fn = 'saved_keras_model_0220a.h5'
mean_fn = 'mean_nolat_0213.nc'
std_fn = 'std_nolat_0213.nc'
model_path = model_dir + model_fn
mean_path = model_dir + mean_fn
std_path = model_dir + std_fn
inps = ['TBP','QBP','PS','SHFLX','LHFLX','dTdt_adiabatic','dQdt_adiabatic']
outps = ['TPHYSTND_NORAD','PHQ']
data_dir = '/scratch/srasp/Aquaplanet_enhance05_old_matlab/'
aqua_fn = 'AndKua_aqua_SPCAM3.0_enhance05.cam2.h1.0000-01-05-00000.nc'
aqua_pattern = data_dir + 'AndKua_aqua_SPCAM3.0_enhance05.cam2.h1.0000-01-*-00000.nc'
d2 = ModelDiagnostics(True, model_path, tf_filepattern=aqua_pattern, tf_fvars=inps,
tf_tvars=outps, tf_meanpath=mean_path, tf_stdpath=std_path)
d2.compute_stats()
d2.mean_stats(9)
d2.plot_double_yz(100, 20, 'PHQ', cmap='bwr')
p, t = d2.get_pt(100, 'TPHYSTND_NORAD')
plt.imshow(t[:, 0, :].T); plt.colorbar()
plt.imshow(t[:, :, 20])
plt.imshow(p[:, 0, :].T); plt.colorbar()
f = d2._get_tf_f_or_t(2, 4, 'f')
f.shape
plt.imshow(f.reshape(64, 128, 7, 30)[:, 0, 0, :].T); plt.colorbar()
myp = d2.model.predict_on_batch(f)
myp.shape
myp.reshape(64, 128, 2, 30)
plt.imshow(myp.reshape(64, 128, 2, 30)[:, 0, 0, :].T); plt.colorbar()
plt.imshow(f[:, 0, :].T); plt.colorbar()
old = ModelDiagnosticsTF(model_path, inps, outps, mean_path, std_path, aqua_pattern)
to, po = old.get_tp('TPHYSTND_NORAD', 2, 4)
to.shape
plt.imshow(to[:, 0, :].T)
# + [markdown] heading_collapsed=true
# ## TF class
# + hidden=true
class ModelDiagnosticsTF(object):
"""
Model diagnostics class.
"""
def __init__(self, model_path, feature_vars, target_vars,
mean_path, std_path, valid_file_pattern,
nlat=64, nlon=128, nlev=30):
"""
TF version
"""
self.model_path = model_path
self.model = keras.models.load_model(model_path, custom_objects={"tf": tf})
self.mean = nc.Dataset(mean_path, 'r')
self.std = nc.Dataset(std_path, 'r')
self.nlat = nlat; self.nlon = nlon; self.nlev = nlev
self.ntime = 48
self.ngeo = nlat * nlon
self.feature_vars, self.target_vars = (feature_vars, target_vars)
self.valid_files = sorted(glob(valid_file_pattern))
def get_tp(self, var, idate, itime):
"""Return denormalized predictions and targets for one variable
[lat, lon, lev]
"""
# Get feature array
f, t = self._get_ft(idate, itime)
p = self._get_pred(f)
var_idx = self.target_vars.index(var)
t, p = (self.unravel(t), self.unravel(p))
return self._denorm(t, 't')[:, :, var_idx], self._denorm(p, 't')[:, :, var_idx]
def _get_f_or_t(self, idate, itime, f_or_t):
with nc.Dataset(self.valid_files[idate], 'r') as ds:
arr = []
vars = self.feature_vars if f_or_t == 'f' else self.target_vars
for var in vars:
da = (ds[var][:] - self.mean[var][:]) / self.std[var][:]
if da.ndim == 4: # 3D variables [time, lev, lat, lon] --> [sample, lev]
a = np.rollaxis(da, 1, 4).reshape(-1, self.nlev)
elif da.ndim == 3: # 2D variables [time, lat, lon]
a = np.rollaxis(np.tile(da.reshape(-1), (self.nlev, 1)), 0, 2)
elif da.ndim == 1: # lat
a = np.rollaxis(np.tile(da, (self.ntime, self.nlev, self.nlon, 1)),
1, 4).reshape(-1, 30)
else:
raise Exception('Incompatible number of dimensions')
arr.append(a)
arr = np.expand_dims(np.rollaxis(np.array(arr), 0, 2), 3) # [sample, feature, lev, 1]
        arr = arr[:, :, -self.nlev:][:, :, ::-1]
if itime is not None: arr = arr[itime*self.ngeo:(itime+1)*self.ngeo]
return arr
def _get_ft(self, idate, itime):
return self._get_f_or_t(idate, itime, 'f'), self._get_f_or_t(idate, itime, 't')
def _get_pred(self, f):
return self.model.predict(f, batch_size=1024)
def _denorm(self, x, f_or_t):
for i, var in enumerate(self.feature_vars if f_or_t == 'f' else self.target_vars):
m, s = [np.rollaxis(ds[var][-self.nlev:][::-1], 0, 3)
for ds in [self.mean, self.std]]
x[:, :, i, :] = x[:, :, i, :] * s + m
return x
def unravel(self, x):
return x.reshape(self.nlat, self.nlon, -1, self.nlev)
def compute_stats(self, niter=None):
"""
Compute statistics over entire dataset [lat, lon, lev].
bias = mean(preds) - mean(true)
mse = sse(preds, true) / n_samples
rel_mse = mse / std(true)
std_error = std(preds) - std(true)
"""
psum = np.zeros((self.ngeo, len(self.target_vars)*self.nlev))
tsum = np.copy(psum); sse = np.copy(psum)
psqsum = np.copy(psum); tsqsum = np.copy(psum)
ndates = len(self.valid_files) if niter is None else niter
n = ndates * self.ntime
for idate in tqdm(range(ndates)):
f_date, t_date = self._get_ft(idate, None) # Full file
for itime in range(self.ntime):
f = f_date[itime*self.ngeo:(itime+1)*self.ngeo]
t = t_date[itime*self.ngeo:(itime+1)*self.ngeo]
# Get predictions
p = self.model.predict_on_batch(f) # [ngeo samples, z]
# Unscale outputs at this level
t, p = (self.unravel(t), self.unravel(p))
t = self._denorm(t, 't')
p = self._denorm(p, 't')
t, p = [a.reshape(-1, len(self.target_vars)*self.nlev) for a in [t, p]]
# Compute statistics
psum += p; tsum += t
psqsum += p ** 2; tsqsum += t ** 2
sse += (t - p) ** 2
# Compute average statistics
self.stats_dict = {}
pmean = psum / n; tmean = tsum / n
self.bias = pmean - tmean; self.stats_dict['bias'] = self.bias
self.mse = sse / n; self.stats_dict['mse'] = self.mse
self.pred_var = (psqsum / n - pmean ** 2) * n / (n - 1) # Sample variance
self.stats_dict['pred_var'] = self.pred_var
self.true_var = (tsqsum / n - tmean ** 2) * n / (n - 1)
self.stats_dict['true_var'] = self.true_var
def mean_stats(self, cutoff_level=9):
expl_var_str = f'expl_var_cut{cutoff_level}'
df = pd.DataFrame(
index=self.target_vars + ['all'],
columns=list(self.stats_dict.keys()) + [expl_var_str])
# Compute statistics for each variable
for var in self.target_vars + ['all']:
sl = slice(0, None) if var == 'all' else \
slice(self.target_vars.index(var), self.target_vars.index(var)+1, 1)
for stat_name, stat in self.stats_dict.items():
re_stat = self.unravel(stat)[:, :, sl]
df.loc[var, stat_name] = np.mean(re_stat)
df.loc[var, expl_var_str] = np.mean((1. - (
np.mean(self.unravel(self.mse)[:, :, sl], axis=(0, 1)) /
np.mean(self.unravel(self.true_var)[:, :, sl], axis=(0, 1))
).reshape(-1, self.nlev))[:, :cutoff_level])
return df
# + hidden=true
diag = ModelDiagnosticsTF(model_path, inps, outps, mean_path, std_path, aqua_pattern)
# + hidden=true
diag.model.summary()
# + hidden=true
f, t = diag._get_ft(0, None)
# + hidden=true
f.shape
# + hidden=true
diag.ngeo
# + hidden=true
p = diag.model.predict(f, batch_size=f.shape[0])
# + hidden=true
# + hidden=true
# + hidden=true
diag.compute_stats()
# + hidden=true
diag.mean_stats()
# + hidden=true
# + hidden=true
t, p = diag.get_tp('TPHYSTND_NORAD', 4, 47)
# + hidden=true
t.shape
# + hidden=true
diag.valid_files[4]
# + hidden=true
def plot_double_slice(t, p, var=None, unit='', **kwargs):
fig, axes = plt.subplots(1, 2, figsize=(12, 5))
I1 = axes[0].imshow(t, **kwargs)
I2 = axes[1].imshow(p, **kwargs)
cb1 = fig.colorbar(I1, ax=axes[0], orientation='horizontal')
cb2 = fig.colorbar(I2, ax=axes[1], orientation='horizontal')
cb1.set_label(unit); cb2.set_label(unit)
axes[0].set_title('SP-CAM Truth')
axes[1].set_title('CBRAIN Predictions')
plt.show()
# + hidden=true
range_dict = {
'SPDT': [-5e-4, 5e-4],
'SPDQ': [-5e-7, 5e-7],
'QRL': [-2e-4, 2e-4],
'QRS': [-1.2e-4, 1.2e-4],
'TPHYSTND_NORAD': [-5e-4, 5e-4],
'PHQ': [-5e-7, 5e-7],
}
# + hidden=true
plot_double_slice(t[:, 0, :].T, p[:, 0, :].T, unit='K/s', cmap='bwr',
vmin=-5e-4, vmax=5e-4, origin='lower')
# + [markdown] heading_collapsed=true
# ## Try to get dataloader to work for maybe faster loading of data...
# + hidden=true
from utils import load_config
from dataLoad import DataLoader
# + hidden=true
model_dir = './logs/0219_213929_SPDT,SPDQ_layers_1024,1024_lr_0.00025_ac_relu_conv_False_locconv_False_vars_TAP,QAP,PS,SHFLX,LHFLX,dTdt_adiabatic,dQdt_adiabatic_batchs_256_loss_mse'; model_dir
# + hidden=true
from config import parser
# + hidden=true
config, unparset = parser.parse_known_args(); config
# + hidden=true
setattr(config, 'input_names', ','.join(inps))
setattr(config, 'output_names', ','.join(outps)); config
# + hidden=true
dl = DataLoader(data_dir, config, 'AndKua_aqua_SPCAM3.0_enhance05.cam2.h1.0000-01-01-00000.nc')
# + hidden=true
??dl.accessTimeData
# + hidden=true
aqua = nc.Dataset(data_dir + 'AndKua_aqua_SPCAM3.0_enhance05.cam2.h1.0000-01-01-00000.nc')
# + hidden=true
X = dl.accessTimeData(aqua, inps, iTim=1, doLog=True)
# + hidden=true
X.shape
# + hidden=true
| notebooks/dev/old_notebooks/diag_development.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # EventVestor: Index Changes
#
# In this notebook, we'll take a look at EventVestor's *Index Changes* dataset, available on the [Quantopian Store](https://www.quantopian.com/store). This dataset spans January 01, 2007 through the current day, and documents index additions and deletions to major S&P, Russell, and Nasdaq 100 indexes.
#
# ### Blaze
# Before we dig into the data, we want to tell you about how you generally access Quantopian Store data sets. These datasets are available through an API service known as [Blaze](http://blaze.pydata.org). Blaze provides the Quantopian user with a convenient interface to access very large datasets.
#
# Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research is just not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
#
# It is common to use Blaze to reduce your dataset in size, convert it to Pandas, and then use Pandas for further computation, manipulation, and visualization.
#
# Helpful links:
# * [Query building for Blaze](http://blaze.pydata.org/en/latest/queries.html)
# * [Pandas-to-Blaze dictionary](http://blaze.pydata.org/en/latest/rosetta-pandas.html)
# * [SQL-to-Blaze dictionary](http://blaze.pydata.org/en/latest/rosetta-sql.html).
#
# Once you've limited the size of your Blaze object, you can convert it to a Pandas DataFrames using:
# > `from odo import odo`
# > `odo(expr, pandas.DataFrame)`
#
# ### Free samples and limits
# One other key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
#
# There is a *free* version of this dataset as well as a paid one. The free one includes about three years of historical data, though not up to the current day.
#
# With preamble in place, let's get started:
# +
# import the dataset
from quantopian.interactive.data.eventvestor import index_changes
# or if you want to import the free dataset, use:
# from quantopian.interactive.data.eventvestor import index_changes_free
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
# -
# Let's use Blaze's dshape() to understand the shape of the data a bit
index_changes.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
index_changes.count()
# Let's see what the data looks like. We'll grab the first three rows.
index_changes[:3]
# Let's go over the columns:
# - **event_id**: the unique identifier for this event.
# - **asof_date**: EventVestor's timestamp of event capture.
# - **trade_date**: for event announcements made before trading ends, trade_date is the same as event_date. For announcements issued after market close, trade_date is next market open day.
# - **symbol**: stock ticker symbol of the affected company.
# - **event_type**: this should always be *Index Change*.
# - **event_headline**: a brief description of the event.
# - **index_name**: name of the index affected. Values include *S&P 400, S&P 500, S&P 600*.
# - **change_type**: addition or deletion of the equity.
# - **change_reason**: reason for addition/deletion of the equity from the index. Reasons include *Acquired, Market Cap, Other*.
# - **event_rating**: this is always 1. The meaning of this is uncertain.
# - **timestamp**: this is our timestamp on when we registered the data.
# - **sid**: the equity's unique identifier. Use this instead of the symbol. Note: this sid represents the company the shares of which are being purchased, not the acquiring entity.
# We've done much of the data processing for you. Fields like `timestamp` and `sid` are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the `sid` across all our equity databases.
#
# We can select columns and rows with ease. Below, we'll fetch all 2015 deletions due to market cap.
deletions = index_changes[('2014-12-31' < index_changes['asof_date']) &
                          (index_changes['asof_date'] < '2016-01-01') &
                          (index_changes.change_type == "Deletion") &
                          (index_changes.change_reason == "Market Cap")]
# When displaying a Blaze Data Object, the printout is automatically truncated to ten rows.
deletions.sort('asof_date')
# Now suppose we want a DataFrame of the Blaze Data Object above, want to filter it further down to the S&P 600, and we only want the sid and the asof_date.
df = odo(deletions, pd.DataFrame)
df = df[df.index_name == "S&P 600"]
df = df[['sid', 'asof_date']]
df
| docs/memo/notebooks/data/eventvestor.index_changes/notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.1
# language: julia
# name: julia-1.6
# ---
# This problem was asked by Twitter.
#
# You run an e-commerce website and want to record the last N order ids in a log. Implement a data structure to accomplish this, with the following API:
#
# record(order_id): adds the order_id to the log
# get_last(i): gets the ith last element from the log. i is guaranteed to be smaller than or equal to N.
#
# You should be as efficient with time and space as possible.
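#
# The API maps directly onto a fixed-size ring buffer. For comparison with the Julia version
# below, here is a minimal Python sketch using `collections.deque(maxlen=N)`; the class name
# `OrderLog` is made up:

```python
from collections import deque

class OrderLog:
    """Keeps only the last n order ids; O(1) record and lookup."""
    def __init__(self, n):
        self.log = deque(maxlen=n)   # oldest entries fall off automatically
    def record(self, order_id):
        self.log.append(order_id)
    def get_last(self, i):
        return self.log[-i]          # i-th most recent, 1-indexed

log = OrderLog(3)
for oid in [11, 22, 33, 44]:
    log.record(oid)
print(log.get_last(1), log.get_last(3))  # 44 22
```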
# +
using OffsetArrays
mutable struct Idlog
# [oldest,newest) is half closed, half open interval, mod length(data)
oldest::Int
newest::Int
data::OffsetVector{Int}
end
N = 3
idlog = Idlog(0,0,zeros(Int,0:N-1))
function record(order_id)
    N = length(idlog.data)
    # write at the current head, then advance it
    idlog.data[idlog.newest] = order_id
    idlog.newest = (idlog.newest + 1) % N
    # if the head wrapped onto the oldest entry, the log is full: drop the oldest
    if idlog.newest == idlog.oldest
        idlog.oldest = (idlog.oldest + 1) % N
    end
end
function get_last(i)
N = length(idlog.data)
idlog.data[(idlog.newest - i + N) % N]
end
# +
x = rand(1:99,6)
@show (x)
for i in x
record(i)
end
for i in 1:N
println(get_last(i))
end
| Circular log - Problem 16.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/juanalverto/Face_Recognition_System/blob/main/Face_Recognition_System_Final.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="7wFlHdnVW23Y" colab={"base_uri": "https://localhost:8080/"} outputId="a620e828-ca80-46f6-d77c-9afcdd9f7ba4"
from google.colab import drive
drive.mount('/content/drive')
# + id="P71-MuIa6f5o"
# Imports
import random, os, shutil
import imageio
import cv2
import numpy as np
import PIL
# + [markdown] id="ygTol8R3KNrI"
# # **Unzip the database**
# + id="Aq6WKNncEm-W"
from zipfile import ZipFile
cont = 0
for i in range(20):
    cont = cont + 1
    # Collections 5 and 6 are corrupted, so skip them entirely
    if cont == 5 or cont == 6:
        continue
    file_name = '/content/drive/My Drive/Terravic_Facial_IR_Database/face{:02d}.zip'.format(cont)
    with ZipFile(file_name, 'r') as zip:
        zip.extractall('Terravic_Original')
# + [markdown] id="-34D66Z56tV6"
# # **Renaming the database classes**
# + [markdown] id="LyujGlV2ow2a"
# This step makes it easier to work with the names of the folders that hold each individual's images, since collections 5 and 6 are corrupted and cannot be accessed.
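#
# The renaming below shifts ids 7–20 down by two. The resulting mapping can be sketched as
# follows (a standalone illustration, not part of the notebook's pipeline):

```python
# Ids 5 and 6 are dropped; the 18 survivors are renumbered contiguously
old_ids = [i for i in range(1, 21) if i not in (5, 6)]
mapping = {old: new for new, old in enumerate(old_ids, start=1)}
print(mapping[4], mapping[7], mapping[20])  # 4 5 18
```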
# + id="b6Gz10m7q5dj"
import os
for i in range(1,21):
if i >= 7 and i <= 9:
os.rename('Terravic_Original/face0{}'.format(i), 'Terravic_Original/face0{}'.format(i - 2))
elif i >= 10 and i <= 11:
os.rename('Terravic_Original/face{}'.format(i), 'Terravic_Original/face0{}'.format(i - 2))
elif i >= 12:
os.rename('Terravic_Original/face{}'.format(i), 'Terravic_Original/face{}'.format(i - 2))
# + [markdown] id="UvMCmqgQX3T-"
# # **Building the training, validation, and test sets**
# + [markdown] id="yqvrp83qTucK"
# **Shuffle the database randomly**
# + id="N69mgHHOgw0w"
os.mkdir('Terravic_Shuffled')
# + id="OUajfMjBMbhj"
# Create one folder per person
for i in range(18):
if i < 9: #i = 0
dest_train = 'Terravic_Shuffled/person0{}/'.format(i+1)
elif i >= 9:
dest_train = 'Terravic_Shuffled/person{}/'.format(i+1)
os.mkdir(dest_train)
for index_class in range(18): #index_class = 0
sample = 0
index_class = index_class + 1 #index_class = 1
if index_class < 10:
class_path = 'Terravic_Original/face0{}/'.format(index_class)
elif index_class >= 10:
class_path = 'Terravic_Original/face{}/'.format(index_class)
# Extract the images belonging to each individual
lst = sorted(os.listdir(class_path))
#Shuffling the dataset
random.shuffle(lst)
for file_name in lst:
img_person = imageio.imread(class_path + file_name)
if index_class < 10:
if sample < 10: #0000.jpg
imageio.imwrite('Terravic_Shuffled/person0{}/'.format(index_class) + '000{}.jpg'.format(sample), img_person)
elif sample >= 10 and sample < 100:
imageio.imwrite('Terravic_Shuffled/person0{}/'.format(index_class) + '00{}.jpg'.format(sample), img_person)
elif sample >= 100 and sample < 1000:
imageio.imwrite('Terravic_Shuffled/person0{}/'.format(index_class) + '0{}.jpg'.format(sample), img_person)
else:
imageio.imwrite('Terravic_Shuffled/person0{}/'.format(index_class) + '{}.jpg'.format(sample), img_person)
sample = sample + 1
else:
if sample < 10:
imageio.imwrite('Terravic_Shuffled/person{}/'.format(index_class) + '000{}.jpg'.format(sample), img_person)
elif sample >= 10 and sample < 100:
imageio.imwrite('Terravic_Shuffled/person{}/'.format(index_class) + '00{}.jpg'.format(sample), img_person)
elif sample >= 100 and sample < 1000:
imageio.imwrite('Terravic_Shuffled/person{}/'.format(index_class) + '0{}.jpg'.format(sample), img_person)
else:
imageio.imwrite('Terravic_Shuffled/person{}/'.format(index_class) + '{}.jpg'.format(sample), img_person)
sample = sample + 1
# + colab={"base_uri": "https://localhost:8080/"} id="SDVW2l3w0UeG" outputId="7ca23089-0216-4a17-c0f5-8f92172ad742"
#Sanity check: count the shuffled images for each person
import os
for i in range(1, 19):
    path = 'Terravic_Shuffled/person{:02d}/'.format(i)
    print('total images in {}:'.format(path), len(os.listdir(path)))
# + [markdown] id="N-yPvA0hWA6u"
# **Create the training, validation, and test folders, each with its per-person subfolders**
# + id="oZi6kEWLct8C"
datasets = ['train', 'validation', 'test']
for dataset_name in datasets:
    os.mkdir(dataset_name)
    for i in range(18):
        dest_train = dataset_name + '/person{:02d}/'.format(i + 1)
        os.mkdir(dest_train)
# + id="Quxmqdouz9yP"
def fill_dataset(src_dataset, dest_dataset, person_index, cont, limit):
    src_path = '{}/person{:02d}/'.format(src_dataset, person_index)
    dest_path = '{}/person{:02d}/'.format(dest_dataset, person_index)
    lst = sorted(os.listdir(src_path))
    #Copy the images whose (sorted) position falls in [cont, limit)
    for file_name in lst[cont:limit]:
        img_original = imageio.imread(src_path + file_name)
        imageio.imwrite(dest_path + '{:04d}.jpg'.format(cont), img_original)
        cont = cont + 1
# + [markdown] id="1GxpOXUDOCWj"
# **Assigning images to each set**
# + colab={"base_uri": "https://localhost:8080/"} id="NXEj5jLD0wqn" outputId="a4d285c7-2f50-4404-87c4-3b56e835c457"
for i in range(1, 19):
    total = len(os.listdir('Terravic_Shuffled/person{:02d}'.format(i)))
    #127 images per person go to training; the rest is split as evenly
    #as possible between validation and test
    limit1 = total - 127
    limit_validation = limit1 // 2
    limit_test = limit1 - limit_validation
    fill_dataset('Terravic_Shuffled', 'train', i, 0, 127)
    print(limit_validation + 127)
    fill_dataset('Terravic_Shuffled', 'validation', i, 127, limit_validation + 127)
    fill_dataset('Terravic_Shuffled', 'test', i, limit_validation + 127, limit_validation + limit_test + 127)
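# A minimal sketch of the split arithmetic above: 127 images per person are reserved for training, and the remainder is divided as evenly as possible between validation and test (the extra image goes to test when the remainder is odd; the totals used here are made up):

```python
# Per-person split used above: 127 training images, remainder divided
# between validation and test (test gets the extra image when odd).
def split_sizes(total, n_train=127):
    rest = total - n_train
    n_val = rest // 2
    n_test = rest - n_val
    return n_train, n_val, n_test

print(split_sizes(381))  # (127, 127, 127)
print(split_sizes(400))  # (127, 136, 137)
```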
# + id="ky7uBJckZIJO" colab={"base_uri": "https://localhost:8080/"} outputId="789c78cd-a284-4b37-db02-411b09f7e491"
#Sanity check: count the images assigned to each set
import os
for i in range(1, 19):
    for dataset_name in ['train', 'validation', 'test']:
        path = '{}/person{:02d}/'.format(dataset_name, i)
        print('total images in {}:'.format(path), len(os.listdir(path)))
# + [markdown] id="fb5aCKv_QZR4"
# # **Face recognition system**
# + [markdown] id="sDGU7sI6JlPY"
# **Point to the training, validation, and test sets**
# + id="aUZHNVy_RRs1"
train_dir = os.path.join('train')
validation_dir = os.path.join('validation')
test_dir = os.path.join('test')
# + [markdown] id="-7WWbw6JJp_G"
# **Loading the VGG16 architecture**
# + id="xrqAdgeRQdM9" colab={"base_uri": "https://localhost:8080/"} outputId="dadc95b0-1d8f-4a84-86a8-4fd4fcccf08e"
from tensorflow.keras.applications import VGG16
conv_base = VGG16(weights='imagenet',
include_top=False,
                  input_shape=(72, 96, 3)) # input_shape = (height, width, channels)
conv_base.summary()
# + [markdown] id="8Q6SaotDJvml"
# **Freezing and unfreezing layers (fine-tuning)**
# + id="GToxLhHtQxSm" colab={"base_uri": "https://localhost:8080/"} outputId="2f8c5b4e-364a-43de-e0ec-742eaf375dc4"
#Only the block5 convolutional layers stay trainable
for layer in conv_base.layers:
    if layer.name[:6] == 'block5':
        layer.trainable = True
    else:
        layer.trainable = False
conv_base.summary()
# + [markdown] id="ysYRWbXVQ_lV"
# # **Defining the architecture (including the transfer-learning module)**
# + id="t2LGYLGqQ-x1" colab={"base_uri": "https://localhost:8080/"} outputId="6da47d4a-c42f-420a-c6ec-67b3e6d27f50"
from keras import layers
from keras import models
model = models.Sequential()
model.add(conv_base)
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.BatchNormalization())
model.add(layers.Flatten())
model.add(layers.Dense(18, activation='softmax'))
model.summary()
# + [markdown] id="9B<KEY>"
# # **Compilation**
# + id="jOjtTtIBRill"
from keras import optimizers
model.compile(loss='categorical_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc']) #'acc' -> accuracy
# + [markdown] id="HLFUQRRtKH8F"
# # **Defining the generators**
# + id="LUp2jo6lRqSG" colab={"base_uri": "https://localhost:8080/"} outputId="cbb3df7c-263b-4b40-d64c-dda388a78f19"
#Using ImageDataGenerator to read images from directories
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
train_dir, # Target directory
target_size=(72, 96), # All images are resized from 240x320 to 72x96
batch_size= 9,
color_mode='rgb',
class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(72, 96),
batch_size=1,
color_mode='rgb',
class_mode='categorical')
# + [markdown] id="R-bRqMJZKLS9"
# # **Training**
# + id="cIbSj6NJR35o" colab={"base_uri": "https://localhost:8080/"} outputId="977629f9-1663-41a6-f107-c37642157992"
#Training and validation stages
history = model.fit(
train_generator,
steps_per_epoch=254, #70
epochs= 30,
validation_data=validation_generator,
validation_steps=10246)
# + [markdown] id="f4AIfeaS8xcN"
# # **Accuracy and loss curves**
# + id="nOyTdAbv8xcP" colab={"base_uri": "https://localhost:8080/", "height": 545} outputId="8d4c2769-2904-446b-9162-7bbfa6c5cc81"
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'go', label='Training accuracy')
plt.plot(epochs, val_acc, 'r', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'go', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# + [markdown] id="8Zd-LQyEeHzV"
# # **Retraining the model**
# + id="qPaLLNt0eT1y"
from keras import layers
from keras import models
model = models.Sequential()
model.add(conv_base)
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.BatchNormalization())
model.add(layers.Flatten())
model.add(layers.Dense(18, activation='softmax'))
from keras import optimizers
model.compile(loss='categorical_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
# + id="2NxtTV9Lfa8N" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="f2421575-e0ea-4eb2-a826-56d5f532a200"
#Training and validation stages
history = model.fit(
train_generator,
steps_per_epoch=254, #70
epochs= 5)
# + id="3VQF2x6pzLh2"
#Save the model
model.save('/content/drive/My Drive/face_recognition_model1.h5')
# + id="5lHMJ-n2zXGT"
#Load the model
from keras.models import load_model
model = load_model('/content/drive/My Drive/face_recognition_model1.h5')
# + [markdown] id="NgeW-BUuKQwl"
# # **Evaluating the final model**
# + id="kl4H8Zy5edcB" colab={"base_uri": "https://localhost:8080/", "height": 122} outputId="9991a163-dd09-4a54-f4ec-8b2bb0a7fe0e"
#Test stage
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(72, 96),
batch_size=1,
color_mode='rgb',
class_mode='categorical')
test_loss, test_acc = model.evaluate_generator(test_generator, steps=900)
print('Recognition rate: ', test_acc)
# + [markdown] id="0B741ANM4J1d"
# # **Predictions**
# + id="r1VfdtbNncQr" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="2b6faa33-97fe-4925-e433-57b5ef5a4c13"
from PIL import Image #PILLOW
import numpy as np
width = 96
height = 72
image_face = Image.open('train/person12/0029.jpg')
image_face = image_face.resize((width, height), Image.ANTIALIAS)
image_face = np.array(image_face)
image_face = image_face / 255.0 #tensor -> (72, 96)
image_face = np.expand_dims(image_face, axis = 0) #tensor -> (1, 72, 96)
image_face = np.expand_dims(image_face, axis = -1) #tensor -> (1, 72, 96, 1)
image_face = np.stack((image_face[:,:,:,0], image_face[:,:,:,0], image_face[:,:,:,0]), axis=3) #tensor -> (1, 72, 96, 3)
print(image_face.shape)
prediction = model.predict(image_face)
print('The image belongs to person', np.argmax(prediction) + 1)
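# The reshaping above turns a single grayscale image into a batch of one RGB-shaped tensor, as VGG16 expects three channels. A shape walk-through on a random stand-in array (no image file needed):

```python
# Shape walk-through of the grayscale -> 3-channel conversion above,
# using a random array in place of the 72x96 thermal image.
import numpy as np

img = np.random.rand(72, 96)            # (72, 96) grayscale
img = np.expand_dims(img, axis=0)       # (1, 72, 96) add batch axis
img = np.expand_dims(img, axis=-1)      # (1, 72, 96, 1) add channel axis
img = np.stack((img[:, :, :, 0],) * 3, axis=3)  # (1, 72, 96, 3) replicate channel
print(img.shape)  # (1, 72, 96, 3)
```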
# + id="mzPZRD8y47OL" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="ef4d4ef4-6c47-41cd-89de-909b0bff581f"
prediction
| Face_Recognition_System_Final.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: sri_gpt
# language: python3
# name: sri_gpt
# ---
# +
text1 = ['Now we all know that this kind of code is there on the Google for bad idea purposes.', 'Maybe four search engine may be producing moving Docker XnumberX logger inside of them features.', 'Now we also know that there is one of the very important people who live to sign in peach.', 'If you probably have been using that party, man, are you two?', 'So there is a lot of future already being cooked up by the pool.', 'So one of the code feature is an organ system.', 'Now, give Somehow I can use the group logger and system that it would be really awesome because Google has already taken care of things like verifying off the user of twenties authenticity of the user.', 'So it is already being done there.', 'On if you notice that on our website, which is learning code online Docker, we use a piece of known as the one place you could excuse that.', 'So what I am using there is I am using some of the food Britain by Google, which is a bargain in this case, and try to incorporate that in my Vette site.', 'Now, although this enttled code is written by the Google, but I do not have to worry about what they ever turn, how they have.', 'I am just fixing a few of my meaningful and using the features off the Google that they have provided for me on how actually am I am I to do so?', 'How am I even provide new features?', 'and yes, with the help of a B A, you can do all such creatures like booking the father.', 'Every tickets are things you wanted to see me like.', 'Invent Whoville are logging in the Facebook.', 'Now there is one pulse of eating Porter now Spacy.', 'If everybody will be able to use these times of A.P.I.', 'eyes for free that are high Johnson that it can get simply, he overused or some people may use it pours a swell.', 'So there is one more concept that sometime is introduced, also known as A.P.I.', 'Now A.P.I., these again have met their baggage.', 'I signed up on the Google and take the permission of the rule the day I want to use your logger feature.']
text2 = ['the example of the guy is very simple.', 'Now it is something which is helping you to have a seamless connective ity.', 'You do not feel that you are getting connected with so much of the thing, but that idea is actually helping.', 'This one, hopefully always that when you try to book an airline ticket, you book it from a based website.', 'Let us say you want to fly from Jeff.', 'Tried any your jumper to Mumbai on YouTube, your favorite Indian airlines.', 'But how many times do you actually built from Indigo Airlines?', 'That you use other service is like make my trip dot com or maybe go anywhere dot com or any other self service is.', 'So how is that possible that you are looking from the third party Web site, but still you are getting a seat on the same plane?', 'The answer is A.P.I.. Now, before I move ahead and talk a little bit more about it, let us want over animation section and give you an example that you are already using off the idea.', 'Now let us try to understand what is this A.P.I.', 'and we are gonna use a simple example that you.', 'Some of you might have already beat.', 'So let me bring up my own website here.', 'Some of you might have wondered who destroyed on this website, and I would just say Woo hoo, self promotion.', 'So now let us get it on and forward that you might have seen that on this website.', 'He used a kind of a feature known a sign up.', 'I thought, this sign of features pretty common.', 'You just registered in your name, email address, pasport and then I send you information even at your email and use.', 'Click on that paper that this spending my day off, paddling the sign up Pretty common.', 'Nothing so much seed is to learn about that pretty common feature now, on the other hand, now let us bring up Google that side.', 'It writes a lot, a lot, a lot of food.']
text3 = ['Now what is a data database we already know what data is, but this data could be random a database is a systematic collection of data since the data in a database is organized it makes data management easy.', 'What is a database management system DBMs database management system or DBMs is a collection of programs, which enables its users to access a database manipulate data and help in the representation of data.', 'It also helps control access to the database by various users.', 'Let is discuss a few examples an online telephone directory would definitely use database management system to store data pertaining to people phone numbers and other contact details your electricity service provider is obviously using a dBMS to manage billing client related issues to handle fault data etcetera.', 'It needs to store manipulate and present data related to members their friends member activities messages advertisnts and a lot more.', 'We can provide countless numbers of examples for usage of DBMS database management systems are not a new concept and as such has been first implemented in the nineteen sixties Charles docments integrated data store or d is said to be the first DbMs in history with time database technologies evolved a aligned while usage and unexpected functional of databases have been increased immensely types of DBMs M s.']
text4 = ['Perhaps the biggest example of all is Google search every time you use Google search you are using a system as machine learning system that core from understanding the text of your query to adjusting the results based on your first interests such as knowing which results to when searching for Java depending anyhow whether you are a copy expert or developer.', 'Perhaps your both today machine learning immediate applications are already quite white ranging including recognition fraud detection and recommendation systems as well as texts each systems too being powerful capabilities can we apply to a wide range fields from and skin cancer detection to retail and of course in the form of self parking and self vehicles.', 'It was not that long ago that want a company or product and machine learning in a offerings.', 'He was considered novel now and my company is visiting to use machine learning and their products in some way it is rapidly becoming well an expect feature trust as we expect companies that have a website that works on your mobile device or perhaps I have a the day was come when it will be expected that our technology will be personalized and staple and self correcting as we use machine learning to make human tasks that are faster than before we can also look further into the future while machine learning can help us to test that we never could achieved on our own.', 'It is not hard to take advantage of machine learning today.', 'The Toy has gotten quite good all you need is data developers and a willingness to take the punch for our purposes.', 'I sure the definition machine learning gestures five words using data to answer questions while would not use that a short answer for an profile exam.', 'It serves a useful purpose for us asume in particular we can split the definition into two parts using data and ask for questions.']
# +
import sys
sys.path.append("../../../ai-engine_temp/pkg/")
from graphrank.core import GraphRank
from graphrank.utils import GraphUtils, TextPreprocess
import math
from numpy import dot
from numpy.linalg import norm
from boto3 import client as boto3_client
import json
import logging
from botocore.client import Config
import numpy as np
from copy import deepcopy
gr = GraphRank()
tp = TextPreprocess()
gu = GraphUtils()
config = Config(connect_timeout=240, read_timeout=240, retries={'max_attempts': 0} )
lambda_client = boto3_client('lambda', config=config, aws_access_key_id="AKIA5SUS6MWO4MP7KDEJ",
aws_secret_access_key="<KEY>"
)
def get_desc(sentence):
    original_tokens, pos_tuple, filtered_pos_tuple = tp.preprocess_text(sentence, filter_by_pos=True, stop_words=False)
    word_graph = gr.build_word_graph(graph_obj=None, input_pos_text=pos_tuple, window=4, preserve_common_words=False)
    normal_keyphrase = gr.get_keyphrases(word_graph, pos_tuple, post_process=True)
    desc_keyphrase = gr.get_keyphrases(word_graph, pos_tuple, descriptive=True, post_process_descriptive=True)
    desc_keyphrase = sorted(desc_keyphrase, key=lambda kv: kv[1], reverse=True)
    normal_kp = [phrase for phrase, score in normal_keyphrase]
    desc_kp = [phrase for phrase, score in desc_keyphrase]
    return normal_kp, desc_kp
def cosine(vec1, vec2):
    return dot(vec1, vec2) / (norm(vec1) * norm(vec2))
def get_embeddings(input_list, req_data=None):
    if req_data is None:
        lambda_payload = {"body": {"text_input": input_list}}
    else:
        lambda_payload = {"body": {"request": req_data, "text_input": input_list}}
    invoke_response = lambda_client.invoke(
        FunctionName="arn:aws:lambda:us-east-1:933389821341:function:keyphrase_ranker",
        InvocationType="RequestResponse",
        Payload=json.dumps(lambda_payload)
    )
    lambda_output = (
        invoke_response["Payload"].read().decode("utf8").replace("'", '"')
    )
    response = json.loads(lambda_output)
    response_body = response["body"]
    #The lambda returns the embeddings in the body for both success and error cases
    embedding_vector = np.asarray(json.loads(response_body)["embeddings"])
    return embedding_vector
# -
text1_keyphrase = get_desc(" ".join(text3))[0][:2]
text2_keyphrase = get_desc(" ".join(text4))[0][:2]
import time
start = time.time()
fv = {}
for key in text1_keyphrase + text2_keyphrase:
    fv[key] = get_embeddings([key])[0]
stop = time.time()
print("time taken => ", stop - start)
# +
# import time
# start = time.time()
# fv = {}
# for emb in get_embeddings(text1_keyphrase + text2_keyphrase):
# fv[0] = emb
# stop = time.time()
# print ("time taken => ", stop - start)
# -
scores = []
keys = list(fv.keys())
#Pairwise cosine similarity between the two keyphrase sets
for nodea in keys[:len(text1_keyphrase)]:
    for nodeb in keys[len(text1_keyphrase):]:
        scores.append(cosine(fv[nodea], fv[nodeb]))
np.mean(scores)
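# The score above is the mean pairwise cosine similarity between the two keyphrase sets, used here as an inter-topic similarity measure. A toy version with made-up 2-D embedding vectors:

```python
# Toy inter-set similarity: mean pairwise cosine between two small
# sets of made-up embedding vectors.
import numpy as np
from numpy import dot
from numpy.linalg import norm

def cosine(a, b):
    return dot(a, b) / (norm(a) * norm(b))

set_a = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
set_b = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]

scores = [cosine(a, b) for a in set_a for b in set_b]
print(round(float(np.mean(scores)), 4))  # ~0.6036
```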
text1_keyphrase
text2_keyphrase
# # keyphrase similarity with gpt
import sys
sys.path.append("../../../ai-engine_temp/pkg/")
sys.path.append("/home/arjun/BERT_Similarity_experiments/code/")
import text_preprocessing.preprocess as tp
import networkx as nx
from scipy.spatial.distance import cosine
# +
import text_preprocessing.preprocess as tp
import numpy as np
import json
import pandas as pd
import gpt_feat_utils
gpt_model = gpt_feat_utils.GPT_Inference("/home/shubham/projects/domain_minds_v2_gpt/se/model/epoch3/", device="cuda")
#gpt_model = gpt_feat_utils.GPT_Inference("/home/arjun/gpt_experiments/engg_models/se+ether_2+1s_ep5_#2/", device="cpu")
# +
text1 = ['Now we all know that this kind of code is there on the Google for bad idea purposes.', 'Maybe four search engine may be producing moving Docker XnumberX logger inside of them features.', 'Now we also know that there is one of the very important people who live to sign in peach.', 'If you probably have been using that party, man, are you two?', 'So there is a lot of future already being cooked up by the pool.', 'So one of the code feature is an organ system.', 'Now, give Somehow I can use the group logger and system that it would be really awesome because Google has already taken care of things like verifying off the user of twenties authenticity of the user.', 'So it is already being done there.', 'On if you notice that on our website, which is learning code online Docker, we use a piece of known as the one place you could excuse that.', 'So what I am using there is I am using some of the food Britain by Google, which is a bargain in this case, and try to incorporate that in my Vette site.', 'Now, although this enttled code is written by the Google, but I do not have to worry about what they ever turn, how they have.', 'I am just fixing a few of my meaningful and using the features off the Google that they have provided for me on how actually am I am I to do so?', 'How am I even provide new features?', 'and yes, with the help of a B A, you can do all such creatures like booking the father.', 'Every tickets are things you wanted to see me like.', 'Invent Whoville are logging in the Facebook.', 'Now there is one pulse of eating Porter now Spacy.', 'If everybody will be able to use these times of A.P.I.', 'eyes for free that are high Johnson that it can get simply, he overused or some people may use it pours a swell.', 'So there is one more concept that sometime is introduced, also known as A.P.I.', 'Now A.P.I., these again have met their baggage.', 'I signed up on the Google and take the permission of the rule the day I want to use your logger feature.']
text2 = ['the example of the guy is very simple.', 'Now it is something which is helping you to have a seamless connective ity.', 'You do not feel that you are getting connected with so much of the thing, but that idea is actually helping.', 'This one, hopefully always that when you try to book an airline ticket, you book it from a based website.', 'Let us say you want to fly from Jeff.', 'Tried any your jumper to Mumbai on YouTube, your favorite Indian airlines.', 'But how many times do you actually built from Indigo Airlines?', 'That you use other service is like make my trip dot com or maybe go anywhere dot com or any other self service is.', 'So how is that possible that you are looking from the third party Web site, but still you are getting a seat on the same plane?', 'The answer is A.P.I.. Now, before I move ahead and talk a little bit more about it, let us want over animation section and give you an example that you are already using off the idea.', 'Now let us try to understand what is this A.P.I.', 'and we are gonna use a simple example that you.', 'Some of you might have already beat.', 'So let me bring up my own website here.', 'Some of you might have wondered who destroyed on this website, and I would just say Woo hoo, self promotion.', 'So now let us get it on and forward that you might have seen that on this website.', 'He used a kind of a feature known a sign up.', 'I thought, this sign of features pretty common.', 'You just registered in your name, email address, pasport and then I send you information even at your email and use.', 'Click on that paper that this spending my day off, paddling the sign up Pretty common.', 'Nothing so much seed is to learn about that pretty common feature now, on the other hand, now let us bring up Google that side.', 'It writes a lot, a lot, a lot of food.']
text3 = ['Now what is a data database we already know what data is, but this data could be random a database is a systematic collection of data since the data in a database is organized it makes data management easy.', 'What is a database management system DBMs database management system or DBMs is a collection of programs, which enables its users to access a database manipulate data and help in the representation of data.', 'It also helps control access to the database by various users.', 'Let is discuss a few examples an online telephone directory would definitely use database management system to store data pertaining to people phone numbers and other contact details your electricity service provider is obviously using a dBMS to manage billing client related issues to handle fault data etcetera.', 'It needs to store manipulate and present data related to members their friends member activities messages advertisnts and a lot more.', 'We can provide countless numbers of examples for usage of DBMS database management systems are not a new concept and as such has been first implemented in the nineteen sixties Charles docments integrated data store or d is said to be the first DbMs in history with time database technologies evolved a aligned while usage and unexpected functional of databases have been increased immensely types of DBMs M s.']
text4 = ['Perhaps the biggest example of all is Google search every time you use Google search you are using a system as machine learning system that core from understanding the text of your query to adjusting the results based on your first interests such as knowing which results to when searching for Java depending anyhow whether you are a copy expert or developer.', 'Perhaps your both today machine learning immediate applications are already quite white ranging including recognition fraud detection and recommendation systems as well as texts each systems too being powerful capabilities can we apply to a wide range fields from and skin cancer detection to retail and of course in the form of self parking and self vehicles.', 'It was not that long ago that want a company or product and machine learning in a offerings.', 'He was considered novel now and my company is visiting to use machine learning and their products in some way it is rapidly becoming well an expect feature trust as we expect companies that have a website that works on your mobile device or perhaps I have a the day was come when it will be expected that our technology will be personalized and staple and self correcting as we use machine learning to make human tasks that are faster than before we can also look further into the future while machine learning can help us to test that we never could achieved on our own.', 'It is not hard to take advantage of machine learning today.', 'The Toy has gotten quite good all you need is data developers and a willingness to take the punch for our purposes.', 'I sure the definition machine learning gestures five words using data to answer questions while would not use that a short answer for an profile exam.', 'It serves a useful purpose for us asume in particular we can split the definition into two parts using data and ask for questions.']
# +
text0 = "This is a list of user stories that have been committed to for the next print the entire team and product owner have a solid understanding of what each of the user stories involves based on the discussions for the Sprint planning means the Sprint is a 1-2-3 week time box where the work committed to in this meant backlog is worked on through completion during the Sprint the daily scrum occurs as a stand-up meeting where the team discusses what they have completed and what they are working on as well as any blocked items the outcome of this print is a potentially shippable product potentially shippable means is a product owner can decide if it's ready to ship or if there are any additional features needed before it ships."
text1 = "The end of the Sprint a Sprint review and Sprint retrospective meeting occurs."
text2 = "The Sprint review is where the team showcases their work to the product owner and the retrospective is where the team works on what they can do to improve their process."
text3 = "Come to this tutorial series on SQL and database."
text0_fv = gpt_model.get_text_feats(text0)
text1_fv = gpt_model.get_text_feats(text1)
text2_fv = gpt_model.get_text_feats(text2)
text3_fv = gpt_model.get_text_feats(text3)
# +
from scipy.spatial.distance import cosine
print (1 - cosine(text0_fv, text2_fv))
print (1 - cosine(text1_fv, text2_fv))
print (1 - cosine(text2_fv, text3_fv))
print (1 - cosine(text1_fv, text3_fv))
# -
req = json.loads(json.load(open("topic_testing/sync_eng_11_26.txt","r")))
seg_list = req["body"]["segments"]
sent_list = []
for seg in seg_list:
    sent_list.extend(tp.preprocess(seg["originalText"], stop_words=False, word_tokenize=False))
keyphrase_list = []
for sent in sent_list:
    keyphrase_list.append(tp.st_get_candidate_phrases(sent))
flat_keyphrase_list = [i for j in keyphrase_list for i in j]
fkl_fv = list(map(lambda kv: gpt_model.get_text_feats(kv), flat_keyphrase_list))
nodea = "The hello world"
nodeb = "hellow whats up guys"
#Check whether the two phrases share any token
bool(set(nodea.split(" ")) & set(nodeb.split(" ")))
# +
keyphrase_graph = nx.Graph()
for index1, nodea in enumerate(flat_keyphrase_list):
    for index2, nodeb in enumerate(flat_keyphrase_list):
        #Only connect phrases that share no tokens
        if not (set(nodea.split(" ")) & set(nodeb.split(" "))):
            keyphrase_graph.add_edge(nodea, nodeb, weight=1 - cosine(fkl_fv[index1], fkl_fv[index2]))
# -
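# The graph above only links phrase pairs whose token sets are disjoint, so near-duplicate phrases do not dominate the affinity scores. A pure-Python toy of that edge rule (phrases here are made up):

```python
# Toy of the edge rule above: connect two phrases only when their
# token sets are disjoint (no shared words).
phrases = ["machine learning", "learning rate", "database index"]

edges = []
for a in phrases:
    for b in phrases:
        if a != b and not (set(a.split(" ")) & set(b.split(" "))):
            edges.append((a, b))

print(edges)
```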
keyphrase_affinity = {}
for node in flat_keyphrase_list:
    edges_list = sorted(dict(keyphrase_graph[node]).items(), key=lambda kv: kv[1]["weight"], reverse=True)
    keyphrase_affinity[node] = edges_list
for node in keyphrase_affinity.keys():
    print("Key ->>>", " ", node, "\n")
    for values in keyphrase_affinity[node][1:5]:
        print(values[0], " ", values[1]["weight"])
    print("\n\n")
# # K-means on sentences.
import sys
sys.path.append("/home/ether/ai-engine_temp/pkg/")
import text_preprocessing.preprocess as tp
def preprocess_text(text):
mod_texts_unfiltered = tp.preprocess(text, stop_words=False, remove_punct=False)
mod_texts = []
if mod_texts_unfiltered is not None:
for index, sent in enumerate(mod_texts_unfiltered):
#pos_tagged_sent = tp.preprocess(sent, stop_words=False, pos=True)[1][0]
#filtered_list = get_filtered_pos(pos_tagged_sent)
filtered_list = tp.st_get_candidate_phrases(sent)
if len(filtered_list)==0:
continue
elif True not in list(map(lambda x: len(x.split(' '))>1, filtered_list)):
# if len(filtered_list)>3:
# pass
# else:
# continue
continue
if len(sent.split(' ')) > 250:
length = len(sent.split(' '))
split1 = ' '.join([i for i in sent.split(' ')[:round(length / 2)]])
split2 = ' '.join([i for i in sent.split(' ')[round(length / 2):]])
mod_texts.append(split1)
mod_texts.append(split2)
continue
if len(sent.split(' ')) <= 10:
continue
mod_texts.append(sent)
    if len(mod_texts) <= 1:
        return ""
    return mod_texts
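# A quick sketch of the long-sentence handling in `preprocess_text` above: sentences over 250 words are split roughly in half on spaces. The helper name below is hypothetical, for illustration only:

```python
def split_in_half(sent):
    # Mirrors the midpoint split used in preprocess_text
    words = sent.split(' ')
    mid = round(len(words) / 2)
    return ' '.join(words[:mid]), ' '.join(words[mid:])

print(split_in_half("a b c d e f"))  # ('a b c', 'd e f')
```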
from sklearn.cluster import KMeans
sys.path.append("/home/arjun/BERT_Similarity_experiments/code/")
import gpt_feat_utils
#gpt_model = gpt_feat_utils.GPT_Inference("/home/shubham/projects/domain_minds_v2_gpt/se/model/epoch3/", device="cpu")
gpt_model = gpt_feat_utils.GPT_Inference("/home/arjun/gpt_experiments/engg_models/se+ether_2+1s_ep5_#2/", device="cpu")
sent_list = preprocess_text(text)
sent_list_fv = [gpt_model.get_text_feats(sent) for sent in sent_list]
kmeans = KMeans(n_clusters=4, random_state=0).fit(sent_list_fv)
centers = kmeans.cluster_centers_
sent_map = {}
for index, assigned in enumerate(kmeans.labels_):
sent_map[index] = assigned
prev = 0
print ("-------------- New Cluster --------", "\n\n")
for index, label in sorted(sent_map.items(), key=lambda kv:kv[1], reverse=False):
if label!=prev:
print ("-------------- New Cluster --------", "\n\n")
prev = label
print(sent_list[index],"\n")
# +
from scipy.spatial.distance import cosine
dominant = []
for loc, cluster in enumerate(centers):
dominant_temp = None
score_temp = 0
for index, label in sent_map.items():
if label == loc:
            score = 1 - cosine(sent_list_fv[index], cluster)
            # Track the best score so far; the original never updated score_temp,
            # so every matching sentence overwrote dominant_temp.
            if score > score_temp:
                score_temp = score
                dominant_temp = index
dominant.append(dominant_temp)
# -
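# `scipy.spatial.distance.cosine` used above returns 1 minus the cosine similarity. For intuition, a pure-Python equivalent (dense vectors only):

```python
import math

def cosine_distance(u, v):
    # 1 - cosine similarity, matching scipy.spatial.distance.cosine on dense vectors
    dot = sum(a * b for a, b in zip(u, v))
    return 1.0 - dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 for orthogonal vectors
```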
for pos in dominant:
print(sent_list[pos])
import numpy as np
mean = np.mean(sent_list_fv, axis=0)
# +
from scipy.spatial.distance import cosine
closest = {}
for index, label in sent_map.items():
    # Compare against the mean feature vector computed above; the original used
    # the stale "cluster" loop variable left over from the previous cell.
    closest[index] = 1 - cosine(sent_list_fv[index], mean)
closest_sorted = sorted(closest.items(), key= lambda kv:kv[1], reverse=True)
# -
for sent in closest_sorted:
print (sent_list[sent[0]])
# +
import numpy as np
from numpy import ndarray
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import PCA
from typing import List
class ClusterFeatures(object):
def __init__(
self,
features: ndarray,
algorithm: str = 'kmeans',
pca_k: int = None,
random_state: int = 12345
):
if pca_k:
self.features = PCA(n_components=pca_k).fit_transform(features)
else:
self.features = features
self.algorithm = algorithm
self.pca_k = pca_k
self.random_state = random_state
def __get_model(self, k: int):
if self.algorithm == 'gmm':
return GaussianMixture(n_components=k, random_state=self.random_state)
return KMeans(n_clusters=k, random_state=self.random_state)
def __get_centroids(self, model):
if self.algorithm == 'gmm':
return model.means_
return model.cluster_centers_
def __find_closest_args(self, centroids: np.ndarray):
centroid_min = 1e10
cur_arg = -1
args = {}
used_idx = []
for j, centroid in enumerate(centroids):
for i, feature in enumerate(self.features):
#value = np.linalg.norm(feature - centroid)
value = cosine(feature, centroid)
if value < centroid_min and i not in used_idx:
cur_arg = i
centroid_min = value
used_idx.append(cur_arg)
args[j] = cur_arg
centroid_min = 1e10
cur_arg = -1
return args
def cluster(self, ratio: float = 0.1) -> List[int]:
k = 1 if ratio * len(self.features) < 1 else int(len(self.features) * ratio)
model = self.__get_model(k).fit(self.features)
centroids = self.__get_centroids(model)
cluster_args = self.__find_closest_args(centroids)
sorted_values = sorted(cluster_args.values())
return sorted_values
def __call__(self, ratio: float = 0.1) -> List[int]:
return self.cluster(ratio)
# -
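# The cluster-count rule inside `ClusterFeatures.cluster` can be isolated for clarity (the helper name is hypothetical): k is at least 1, otherwise the floor of n * ratio.

```python
def num_clusters(n_features, ratio):
    # Mirrors ClusterFeatures.cluster: k = 1 if ratio * n < 1, else int(n * ratio)
    return 1 if ratio * n_features < 1 else int(n_features * ratio)

print(num_clusters(50, 0.1))  # 5
print(num_clusters(5, 0.1))   # 1
```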
def summarize(text, ratio=0.3):
sent_list = preprocess_text(text)
summarized_text = None
if len(sent_list)!=0:
sent_list_fv = [gpt_model.get_text_feats(sent) for sent in sent_list]
cf = ClusterFeatures(np.asarray(sent_list_fv))
res = cf.cluster(ratio)
summarized_text = [sent_list[s] for s in res]
return summarized_text
# +
# text_list = []
# for groupid in group.keys():
# #print ("groupid: ", groupid)
# temp = []
# for seg in [group[groupid][x][0] for x in group[groupid]]:
# temp.append(seg[0])
# text_list.append(" ".join(temp))
# print()
# -
text_list = ["Is an open source tool that allows you to take advantage of premises hybrid or public cloud destruction giving you the freedom to move workloads wherever you want you know for security networking and storage services and can manage more than one cluster of time kubernetes makes more efficient use of hardware allowing you to maximize your resources and save money but here's where things get tricky when you use a container registration tool like kubernetes is you describe the configuration of your application and a file this configuration file is where you tell kubernetes how to do things like gather container images how established networking between containers how to Mount storage volumes and work to store logs for that container containers have deployed onto Hosts usually in replicated groups and when it's time to deploy in container into a cluster kubernetes schedules the deployment and looks for the most appropriate host to place the container based on predefined constraints that you're choosing like CPU or memory availability. Basically once the container is running on the host kubernetes manages this cycle. According to the specifications you laid out and the container file which means that kubernetes is automating all of these tasks for you, but it does so based on the configuration you set up as the developer and while you may be a crack engineer chances are you don't know exactly how much traffic you're going to get within in the first month of deployment or how your application will behave that's why especially for this first couple of months and monitoring your kubernetes cluster is super endpoint."]
from scipy.spatial.distance import cosine
summarized_list = []
for index, text in enumerate(text_list):
print (text)
print ("------summarized text------","\n\n")
summarized = summarize(text, ratio=0.3)
    if summarized is not None:
summarized_list.append(summarized)
print (*summarized, sep="\n",end="\n\n")
summarized
# +
for group in summarized:
print (group, "\n\n")
text_fv = np.mean([gpt_model.get_text_feats(t) for t in [group]], axis=0)
candidate_kp = tp.st_get_candidate_phrases(group)
candidate_sim = [(kp, 1 - cosine(gpt_model.get_text_feats(kp), text_fv)) for kp in candidate_kp]
print (sorted(candidate_sim, key=lambda kv: kv[1], reverse=True))
# -
# +
text_dict = {
"segment0": [
[
"How we have in groups? And as a filtration process we check weather. Is add any Group which has more than one segment? "
],
"2019-12-02T06:35:23Z",
"<KEY>",
"b7d7f0b747d44094a3809d0bd93e48c8"
],
"segment1": [
[
"So if that is any Group, which has more than one segment, then we remove all the groups which has only one single segment. This was just too. "
],
"2019-12-02T06:35:40Z",
"<KEY>",
"f03e765b732448b1ba2c1372b1fe5952"
],
"segment2": [
[
"to remove "
],
"2019-12-02T06:35:58Z",
"<KEY>",
"e73b9748e1d04a02bc5571b59f284a05"
],
"segment3": [
[
"The segments because we don't have any way to currently we don't have any way to save other single segment is either contextually relevant or not. So until we have something we just wanted to remove this for example, like if you have a 76 or 70s called most of the segments would give group but there is a very high chance that there will be lot of dangling segments. In this case. The dangling segments are either it went into the wrong Community or it gets gets removed in pre-processing. So the pre-processing segments are the preprocessor segments or sentences, which got removed in the initial stage. They would get added just after forming communities. So that even pre-process segments plays a part in here. "
],
"2019-12-02T06:36:06Z",
"fb52cb663aec4795aee38ccfd904d315",
"49e579ffc0864590a44e87259a5fbf12"
],
"segment4": [
[
"If that is no groups, which has more than one segment in it, then we don't remove groups through any groups because obviously all the groups in the whole final result only once in the segment. So it's more like fall that victims. Which is same as fall back to terms, it gives top 5 result. "
],
"2019-12-02T06:37:04Z",
"fb52cb663aec4795aee38ccfd904d315",
"5c856f4ba3c44a5e8b602dde5b433d8e"
],
"segment5": [
[
"It gives top fill groups that each group is nothing but a segment so I'm then we take each group and then we compare it with the mind and then rank them and then send it to the if a service. So if for any case the graph computation or the grouping the community algorithm fails at some point then we fall back to The fall back to pins would be simple just take all the segments which are present in the request and then get the same feature because for them and score it across all the mines expect to - and then rank them order them and then push on the top five. "
],
"2019-12-02T06:37:34Z",
"fb52cb663aec4795aee38ccfd904d315",
"26271fda00904852a66953cef9eab62d"
]
}
text_list = [" ".join(list(map(lambda kv:" ".join(kv[0]), text_dict.values())))]
# -
for seg in summarized_list:
print (" ".join(seg), "\n\n")
import pickle
mind = list(pickle.load(open("/home/ether/hdd/ether/gpt_domain_minds/se/mind.pkl","rb"))['sentence'].values())
mind_fv = [gpt_model.get_text_feats(x) for x in mind]
mind[0]
# +
chosen_sentence_norm = []
for mind_index in range(len(mind)):
best_score = 100000
chosen_sentence_temp = None
for index, fv in enumerate(sent_list_fv):
score = np.linalg.norm(fv - mind_fv[mind_index])
#score = cosine(fv, mind_fv[mind_index])
if score < best_score:
best_score = score
chosen_sentence_temp = sent_list[index]
chosen_sentence_norm.append(chosen_sentence_temp)
# -
chosen_sentence_cosine = []
for mind_index in range(len(mind)):
best_score = 100000
chosen_sentence_temp = None
for index, fv in enumerate(sent_list_fv):
#score = np.linalg.norm(fv - mind_fv[mind_index])
score = cosine(fv, mind_fv[mind_index])
if score < best_score:
best_score = score
chosen_sentence_temp = sent_list[index]
chosen_sentence_cosine.append(chosen_sentence_temp)
for index, sent in enumerate(chosen_sentence_cosine):
print ("Mind Sentence: ", mind[index], "\n\n Most similar based on cosine =>", chosen_sentence_cosine[index],"\n\n Most similar based on norm =>", chosen_sentence_norm[index], "\n\n\n\n")
# # Np similarity.
req = json.load(open("validation_tests/set_1/set_1.txt","r"))
if isinstance(req, str):
req = json.loads(req)["body"]
else:
req = req["body"]
req["segments"] = sorted(req['segments'], key=lambda kv:kv['startTime'])
for index, seg in enumerate(req["segments"]):
req["segments"][index]["originalText"] = " ".join(preprocess_text(seg["originalText"]))
segments_map = {}
for index, seg in enumerate(req["segments"]):
if seg["originalText"] != "":
segments_map[seg['id']] = seg
segments_map[seg['id']]["order"] = index
text = list(map(lambda seg: (seg["originalText"], seg["id"]), [segment for segment in req['segments'] if segment["originalText"]!=""]))
seg_list = [sent for sent, id in text]
segid_list = [id for sent, id in text]
sent_list = list(map(lambda seg, segid:([sent + ". " for sent in seg.split(". ")],segid), seg_list, segid_list))
sent_list = [(sent, segid) for seg, segid in sent_list for sent in seg]
| community_detection/group_segments/keyphrase_comparison.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Read and Write file practice - Usernames
# +
def batchUnames():
fname = 'c:/Python27/Methods_1/class_list.txt'
infile = open(fname, "r")
class_list = infile.readlines()
infile.close()
#print class_list
out_filename = "c:/Python27/Methods_1/batch_unames.txt"
outfile = open(out_filename, "w")
for item in class_list:
name = item.split()
first = name[0].lower()
last = name[1].lower()
f= first[0]
l = last[:7]
user = f+l
entry = user+"\n"
outfile.write(entry)
outfile.close()
    print "Usernames have been written to %s" % (out_filename)
batchUnames()
# -
# # isOdd version 1.0
def isOdd():
print "This function will print a 0 if your number is even and a 1 if your number is odd."
print "Note: this program does not handle 0's properly."
    # raw_input is safer than input() in Python 2, which eval()s the typed text.
    check = int(raw_input("What number would you like to check? :"))
output = check%2
print output
isOdd()
| lecture_04_classcode.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ### Digit Classification of MNIST dataset after dimensionality reduction from 784 to 2 using Auto-Encoders
# ##### Validation Accuracy ~ 65%
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import theano
from keras.models import Sequential,Model,load_model
from keras.layers import Dense,Activation,Dropout
from keras.utils import np_utils
x = pd.read_csv('/home/vasu/all_projects/ML/t-sne_vs_pca_vs_autoencoder/train.csv')
X = np.array(x)
x = X[:,1:]
y = X[:,0]
print x.shape,y.shape
X = x.reshape((X.shape[0], 1, 28, 28))
print X.shape
encoder = load_model('./enc_2d.h5')
X_enc = encoder.predict(X)
print X_enc.shape
X_enc = (X_enc - X_enc.mean()) / X_enc.std()
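# Standardization subtracts the mean first and then divides by the standard deviation; mind the parentheses, since `X - mean / std` divides only the mean. A stdlib sketch:

```python
from statistics import mean, pstdev

vals = [1.0, 2.0, 3.0, 4.0]
m, s = mean(vals), pstdev(vals)
z = [(v - m) / s for v in vals]  # subtract first, then divide
print(round(sum(z), 6))          # 0.0: standardized values are centered
```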
x_train = X_enc[:30000,:]
x_crossval = X_enc[30000:,:]
y = np_utils.to_categorical(y)
y_train = y[:30000]
y_crossval = y[30000:]
print x_train.shape,x_crossval.shape,y_train.shape,y_crossval.shape
# +
model = Sequential()
model.add(Dense(16, input_dim=2))
model.add(Activation('relu'))
model.add(Dropout(0.35))
model.add(Dense(10))
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# -
hist = model.fit(x_train,y_train,
nb_epoch = 100,
shuffle = True,
batch_size = 256,
validation_data=(x_crossval, y_crossval))
plt.plot(hist.history['val_loss'], color ='b')
plt.plot(hist.history['loss'], color='r')
plt.plot(hist.history['val_acc'], color ='black')
plt.plot(hist.history['acc'], color ='g')
plt.show()
| 2_Dimensions/AUTO_ENCODER/auto_encoder_classification_2d.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
# Python : <NAME>
#
# https://www.youtube.com/watch?v=J0Aq44Pze-w
#
# Python -> Cpython(Byte Code) -> Cpython VM -> Machine Code
# Online Compilers: Repl.it, Glot.io
# Python 2 Vs Python 3:
#
# https://sebastianraschka.com/Articles/2014_python_2_3_key_diff.html
# https://www.geeksforgeeks.org/important-differences-between-python-2-x-and-python-3-x-with-examples/
#
# Python Documentation :
# https://docs.python.org/3/
#
# Python Cheat Sheet:
# https://nicedoc.io/aneagoie/ztm-python-cheat-sheet
#
# Data Types : Int,float,bool,str,list,tuple,set,dict,None,Complex
# Math Functions: https://www.programiz.com/python-programming/modules/math
# Operator Precedence: (),**,*,/,+,-
#
# Python Keywords: https://www.w3schools.com/python/python_ref_keywords.asp
#
# __name = 'Hemanth' ----> name-mangled ("private") variable; true dunders have double underscores on both sides (e.g. __name__)
# PI = 3.141592653589793 -----> Constant
# _score = 100 -----> Variable
print(bin(8))
print(int('0b1000',2))
# # Augmented Assignment Operator :
#
# Augmented Assignment Operator : some_value+=2
long_string = '''
WOW
0 0 0
- - -
'''
print(long_string)
#Formatted Strings:
Name="Hemanth"
Age = 25
print(f'Hi {Name}. You are {Age} Old.')
print(f'Hello {"Cindy"}, your balance is {50}.')
# +
#With .format:
print("Hello {}, your balance is {}.".format("Cindy", 50))
print("Hello {0}, your balance is {1}.".format("Cindy", 50))
print("Hello {name}, your balance is {amount}.".format(name="Cindy", amount=50))
print("Hello {0}, your balance is {amount}.".format("Cindy", amount=50))
# -
# # Strings,Functions,Methods:
#
# Strings are Immutable. Ex: name = "Hemanth"; name[5] = 'f' raises a TypeError.
#
# Built In Functions : https://docs.python.org/3/library/functions.html
# Python String Methods: https://www.w3schools.com/python/python_ref_string.asp
#
#
# List Methods : https://www.w3schools.com/python/python_ref_list.asp
# Python KeyWords: https://www.w3schools.com/python/python_ref_keywords.asp
# +
#List UnPacking:
A,b,*my_list,d = [1,2,3,4,5,6,7,8]
print(A)
print(b)
print(my_list)
print(d)
# -
# # Dictionaries
# Dictionary keys are Immutable.
# Dictionary Methods : https://www.w3schools.com/python/python_ref_dictionary.asp
user = {
'name': 'Hemanth',
'age': 25
}
print(user.get('name'))
print(user.get('company'))
print(user.get('company','ValueLabs'))
# +
#Exercise:
#1 Create a user profile for your new game. This user profile will be stored in a dictionary with keys: 'age', 'username', 'weapons', 'is_active' and 'clan'
user = {
'age':25,
'username':'estar07',
'weapons':['sword','boomerang','rope'],
'is_active':True,
'clan':'Vikings'
}
#2 iterate and print all the keys in the above user.
for k in user.keys():
print(k)
#3 Add a new weapon to your user
user['weapons'].append('MachineGun')
print(user)
#4 Add a new key to include 'is_banned'. Set it to false
user.update({'is_banned': False})
print(user)
#5 Ban the user by setting the previous key to True
user['is_banned'] = True
print(user)
#6 create a new user2 by copying the previous user and update the age and username values.
user2 = user.copy()
user2.update({
'age':28,
'username':'dknight07'
})
print(user)
print(user2)
# -
| ZTM-Python_Notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: PythonData
# language: python
# name: pythondata
# ---
# +
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to change the path if needed.)
school_data_to_load = "Resources/schools_complete.csv"
student_data_to_load = "Resources/students_complete.csv"
# Read the School Data and Student Data and store into a Pandas DataFrame
school_data_df = pd.read_csv(school_data_to_load)
student_data_df = pd.read_csv(student_data_to_load)
# Cleaning Student Names and Replacing Substrings in a Python String
# Add each prefix and suffix to remove to a list.
prefixes_suffixes = ["Dr. ", "Mr. ","Ms. ", "Mrs. ", "Miss ", " MD", " DDS", " DVM", " PhD"]
# Iterate through the words in the "prefixes_suffixes" list and replace them with an empty space, "".
for word in prefixes_suffixes:
student_data_df["student_name"] = student_data_df["student_name"].str.replace(word,"")
# Check names.
student_data_df.head(10)
# -
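# One caveat with the loop above: older pandas versions treat `str.replace` patterns as regexes by default, so the literal dot in "Dr. " would match any character. An explicitly escaped regex makes the intent unambiguous; a plain-Python sketch:

```python
import re

prefixes_suffixes = ["Dr. ", "Mr. ", "Ms. ", "Mrs. ", "Miss ", " MD", " DDS", " DVM", " PhD"]
# re.escape turns "Dr. " into "Dr\. " so the dot is matched literally
pattern = re.compile("|".join(map(re.escape, prefixes_suffixes)))

print(pattern.sub("", "Dr. Jane Smith MD"))  # Jane Smith
```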
# ## Deliverable 1: Replace the reading and math scores.
#
# ### Replace the 9th grade reading and math scores at Thomas High School with NaN.
# Install numpy using conda install numpy or pip install numpy.
# Step 1. Import numpy as np.
import numpy as np
# +
# Step 2. Use the loc method on the student_data_df to select all the reading scores from the 9th grade at Thomas High School and replace them with NaN.
student_data_df.loc[(student_data_df["school_name"] == "Thomas High School") & (student_data_df["grade"] == "9th"),"reading_score"]=np.nan
# -
# Step 3. Refactor the code in Step 2 to replace the math scores with NaN.
student_data_df.loc[(student_data_df["school_name"] == "Thomas High School") & (student_data_df["grade"] == "9th"),"math_score"]=np.nan
# Step 4. Check the student data for NaN's.
student_data_df.tail(10)
# ## Deliverable 2 : Repeat the school district analysis
# ### District Summary
# Combine the data into a single dataset
school_data_complete_df = pd.merge(student_data_df, school_data_df, how="left", on=["school_name", "school_name"])
school_data_complete_df.head()
# +
# Calculate the Totals (Schools and Students)
school_count = len(school_data_complete_df["school_name"].unique())
student_count = school_data_complete_df["Student ID"].count()
# Calculate the Total Budget
total_budget = school_data_df["budget"].sum()
# -
# Calculate the Average Scores using the "clean_student_data".
average_reading_score = school_data_complete_df["reading_score"].mean()
average_math_score = school_data_complete_df["math_score"].mean()
# +
# Step 1. Get the number of students that are in ninth grade at Thomas High School.
# These students have no grades.
thom_ninth_count=school_data_complete_df.loc[(school_data_complete_df["school_name"] == "Thomas High School") & (school_data_complete_df["grade"] == "9th"),"Student ID"].count()
# Get the total student count
student_count = school_data_complete_df["Student ID"].count()
# Step 2. Subtract the number of students that are in ninth grade at
# Thomas High School from the total student count to get the new total student count.
new_student_count=student_count-thom_ninth_count
new_student_count
# +
# Calculate the passing rates using the "clean_student_data".
passing_math_count = school_data_complete_df[(school_data_complete_df["math_score"] >= 70)].count()["student_name"]
passing_reading_count = school_data_complete_df[(school_data_complete_df["reading_score"] >= 70)].count()["student_name"]
comp_math_pass_rate = passing_math_count/student_count
comp_read_pass_rate = passing_reading_count/student_count
print(f"Complete math passing rate {comp_math_pass_rate:.1%}\n"
f"Complete reading passing rate {comp_read_pass_rate:.1%}")
# +
# Step 3. Calculate the passing percentages with the new total student count.
new_math_pass_rate = passing_math_count/new_student_count
new_read_pass_rate = passing_reading_count/new_student_count
print(f"Complete math passing rate {new_math_pass_rate:.1%}\n"
f"Complete reading passing rate {new_read_pass_rate:.1%}")
# +
# Calculate the students who passed both reading and math.
passing_math_reading = school_data_complete_df[(school_data_complete_df["math_score"] >= 70)
& (school_data_complete_df["reading_score"] >= 70)]
# Calculate the number of students that passed both reading and math.
overall_passing_math_reading_count = passing_math_reading["student_name"].count()
# Step 4.Calculate the overall passing percentage with new total student count.
over_all_passing_rate= overall_passing_math_reading_count/new_student_count
print(f"Rate of students passing both math and reading {over_all_passing_rate:.1%}\n")
# +
# Create a DataFrame
district_summary_df = pd.DataFrame(
[{"Total Schools": school_count,
"Total Students": student_count,
"Total Budget": total_budget,
"Average Math Score": average_math_score,
"Average Reading Score": average_reading_score,
"% Passing Math": new_math_pass_rate,
"% Passing Reading": new_read_pass_rate,
"% Overall Passing": over_all_passing_rate}])
# Format the "Total Students" to have the comma for a thousands separator.
district_summary_df["Total Students"] = district_summary_df["Total Students"].map("{:,}".format)
# Format the "Total Budget" to have the comma for a thousands separator, a decimal separator and a "$".
district_summary_df["Total Budget"] = district_summary_df["Total Budget"].map("${:,.2f}".format)
# Format the columns.
district_summary_df["Average Math Score"] = district_summary_df["Average Math Score"].map("{:.1f}".format)
district_summary_df["Average Reading Score"] = district_summary_df["Average Reading Score"].map("{:.1f}".format)
district_summary_df["% Passing Math"] = district_summary_df["% Passing Math"].map("{:.1%}".format)
district_summary_df["% Passing Reading"] = district_summary_df["% Passing Reading"].map("{:.1%}".format)
district_summary_df["% Overall Passing"] = district_summary_df["% Overall Passing"].map("{:.1%}".format)
# Display the data frame
district_summary_df
# -
# ## School Summary
# +
# Determine the School Type
per_school_types = school_data_df.set_index(["school_name"])["type"]
# Calculate the total student count.
per_school_counts = school_data_complete_df["school_name"].value_counts()
# Calculate the total school budget and per capita spending
per_school_budget = school_data_complete_df.groupby(["school_name"]).mean()["budget"]
# Calculate the per capita spending.
per_school_capita = per_school_budget / per_school_counts
# Calculate the average test scores.
per_school_math = school_data_complete_df.groupby(["school_name"]).mean()["math_score"]
per_school_reading = school_data_complete_df.groupby(["school_name"]).mean()["reading_score"]
# Calculate the passing scores by creating a filtered DataFrame.
per_school_passing_math = school_data_complete_df[(school_data_complete_df["math_score"] >= 70)]
per_school_passing_reading = school_data_complete_df[(school_data_complete_df["reading_score"] >= 70)]
# Calculate the number of students passing math and passing reading by school.
per_school_passing_math = per_school_passing_math.groupby(["school_name"]).count()["student_name"]
per_school_passing_reading = per_school_passing_reading.groupby(["school_name"]).count()["student_name"]
# Calculate the percentage of passing math and reading scores per school.
per_school_passing_math = per_school_passing_math / per_school_counts * 100
per_school_passing_reading = per_school_passing_reading / per_school_counts * 100
# Calculate the students who passed both reading and math.
per_passing_math_reading = school_data_complete_df[(school_data_complete_df["reading_score"] >= 70)
& (school_data_complete_df["math_score"] >= 70)]
# Calculate the number of students passing math and passing reading by school.
per_passing_math_reading = per_passing_math_reading.groupby(["school_name"]).count()["student_name"]
# Calculate the percentage of passing math and reading scores per school.
per_overall_passing_percentage = per_passing_math_reading / per_school_counts * 100
# +
# Create the DataFrame
per_school_summary_df = pd.DataFrame({
"School Type": per_school_types,
"Total Students": per_school_counts,
"Total School Budget": per_school_budget,
"Per Student Budget": per_school_capita,
"Average Math Score": per_school_math,
"Average Reading Score": per_school_reading,
"% Passing Math": per_school_passing_math,
"% Passing Reading": per_school_passing_reading,
"% Overall Passing": per_overall_passing_percentage})
# per_school_summary_df.head()
# +
# Format the Total School Budget and the Per Student Budget
per_school_summary_df["Total School Budget"] = per_school_summary_df["Total School Budget"].map("${:,.2f}".format)
per_school_summary_df["Per Student Budget"] = per_school_summary_df["Per Student Budget"].map("${:,.2f}".format)
# Display the data frame
per_school_summary_df
# +
# Step 5. Get the number of 10th-12th graders from Thomas High School (THS).
THS_non_fresh_df=student_data_df.loc[(student_data_df["school_name"] == "Thomas High School") & (student_data_df["grade"] != "9th"),]
#student_count = school_data_complete_df["Student ID"].count()
THS_non_fresh_count =THS_non_fresh_df["Student ID"].count()
THS_non_fresh_count
# +
# Step 6. Get all the students passing math from THS
# Calculate the passing scores by creating a filtered DataFrame.
ths_student_score=THS_non_fresh_df.filter(items=['student_name','math_score','reading_score'])
ths_pass_math=ths_student_score.loc[ths_student_score["math_score"]>=70,"student_name"]
ths_pass_math.head(20)
# +
# Step 7. Get all the students passing reading from THS
ths_pass_read=ths_student_score.loc[ths_student_score["reading_score"]>=70,"student_name"]
ths_pass_read.head(20)
# -
# Step 8. Get all the students passing math and reading from THS
ths_pass_both=ths_student_score.loc[(ths_student_score["reading_score"]>=70) & (ths_student_score["math_score"]>=70) ,"student_name"]
ths_pass_both
# +
# Step 9. Calculate the percentage of 10th-12th grade students passing math from Thomas High School.
ths_pass_math_rate=ths_pass_math.count()/THS_non_fresh_count * 100
ths_pass_math_rate
# +
# Step 10. Calculate the percentage of 10th-12th grade students passing reading from Thomas High School.
ths_pass_read_rate=ths_pass_read.count()/THS_non_fresh_count * 100
ths_pass_read_rate
# +
# Step 11. Calculate the overall passing percentage of 10th-12th grade from Thomas High School.
ths_pass_both_rate=ths_pass_both.count()/THS_non_fresh_count * 100
ths_pass_both_rate
# -
# Step 12. Replace the passing math percent for Thomas High School in the per_school_summary_df.
per_school_summary_df.loc["Thomas High School","% Passing Math"]=ths_pass_math_rate
# Step 13. Replace the passing reading percentage for Thomas High School in the per_school_summary_df.
per_school_summary_df.loc["Thomas High School","% Passing Reading"]=ths_pass_read_rate
# Step 14. Replace the overall passing percentage for Thomas High School in the per_school_summary_df.
per_school_summary_df.loc["Thomas High School","% Overall Passing"]=ths_pass_both_rate
# per_school_summary_df
per_school_summary_df
# ## High and Low Performing Schools
# Sort and show top five schools.
print ("Altered Top 5 Schools by Overall Passing %")
per_school_summary_df.sort_values(by=['% Overall Passing'],ascending=False).head(5)
# Sort and show bottom five schools.
print ("Bottom 5 Schools by Overall Passing %")
per_school_summary_df.sort_values(by=['% Overall Passing']).head()
# ## Math and Reading Scores by Grade
# +
# Create a DataFrame of scores by grade level using conditionals.
ninth_graders_df=school_data_complete_df.loc[school_data_complete_df["grade"]=="9th",:]
tenth_graders_df=school_data_complete_df.loc[school_data_complete_df["grade"]=="10th",:]
eleventh_graders_df=school_data_complete_df.loc[school_data_complete_df["grade"]=="11th",:]
twelfth_graders_df=school_data_complete_df.loc[school_data_complete_df["grade"]=="12th",:]
# Group each grade level DataFrame by the school name for the average math score.
ninth_mean_scores_math=ninth_graders_df[["school_name","math_score"]].groupby(["school_name"]).mean()
tenth_mean_scores_math=tenth_graders_df[["school_name","math_score"]].groupby(["school_name"]).mean()
eleventh_mean_scores_math=eleventh_graders_df[["school_name","math_score"]].groupby(["school_name"]).mean()
twelfth_mean_scores_math=twelfth_graders_df[["school_name","math_score"]].groupby(["school_name"]).mean()
# Group each grade level DataFrame by the school name for the average reading score.
ninth_mean_scores_reading=ninth_graders_df[["school_name","reading_score"]].groupby(["school_name"]).mean()
tenth_mean_scores_reading=tenth_graders_df[["school_name","reading_score"]].groupby(["school_name"]).mean()
eleventh_mean_scores_reading=eleventh_graders_df[["school_name","reading_score"]].groupby(["school_name"]).mean()
twelfth_mean_scores_reading=twelfth_graders_df[["school_name","reading_score"]].groupby(["school_name"]).mean()
# +
# Combine each grade level Series for average math scores into a single DataFrame.
math_mean_by_grade = pd.DataFrame({
'9th Math': ninth_mean_scores_math["math_score"],
'10th Math': tenth_mean_scores_math["math_score"],
'11th Math': eleventh_mean_scores_math["math_score"],
'12th Math': twelfth_mean_scores_math["math_score"],
}
)
# +
# Combine each grade level Series for average reading scores into a single DataFrame.
reading_mean_by_grade = pd.DataFrame({
'9th Reading': ninth_mean_scores_reading["reading_score"],
'10th Reading': tenth_mean_scores_reading["reading_score"],
'11th Reading': eleventh_mean_scores_reading["reading_score"],
'12th Reading': twelfth_mean_scores_reading["reading_score"],
},
)
# +
# Format each grade column.
#Format Math Means
math_mean_by_grade["9th Math"]= math_mean_by_grade["9th Math"].map("{:.1f}".format)
math_mean_by_grade["10th Math"]= math_mean_by_grade["10th Math"].map("{:.1f}".format)
math_mean_by_grade["11th Math"]= math_mean_by_grade["11th Math"].map("{:.1f}".format)
math_mean_by_grade["12th Math"]= math_mean_by_grade["12th Math"].map("{:.1f}".format)
#Format reading means
reading_mean_by_grade["9th Reading"]= reading_mean_by_grade["9th Reading"].map("{:.1f}".format)
reading_mean_by_grade["10th Reading"]= reading_mean_by_grade["10th Reading"].map("{:.1f}".format)
reading_mean_by_grade["11th Reading"]= reading_mean_by_grade["11th Reading"].map("{:.1f}".format)
reading_mean_by_grade["12th Reading"]= reading_mean_by_grade["12th Reading"].map("{:.1f}".format)
# display formatted columns
print(math_mean_by_grade)
print(reading_mean_by_grade)
# +
# Remove the index.
math_mean_by_grade=\
math_mean_by_grade.reset_index(drop=True)
# Display the data frame
math_mean_by_grade
# +
# Remove the index.
reading_mean_by_grade=\
reading_mean_by_grade.reset_index(drop=True)
# Display the data frame
reading_mean_by_grade
# -
# ## Scores by School Spending
# +
# Establish the spending bins and group names.
bins = [0, 585, 630, 645, 1000]
budget_bin_name = ["<$584","$585-629","$630-644","$645+"]
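As a quick standalone sanity check of how `pd.cut` assigns these labels (the sample budgets below are made up, not the school data):

```python
import pandas as pd

# pd.cut uses right-closed intervals: (0, 585], (585, 630], (630, 645], (645, 1000]
sample = pd.Series([500, 600, 640, 700])
binned = pd.cut(sample, [0, 585, 630, 645, 1000],
                labels=["<$584", "$585-629", "$630-644", "$645+"])
print(list(binned))
```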
# +
# add bin names to school_data_complete
school_data_complete_df["per_cap_budget"]=school_data_complete_df["budget"]/school_data_complete_df["size"]
school_data_complete_df["Budget Bin"]=pd.cut(school_data_complete_df["per_cap_budget"],bins,labels=budget_bin_name)
# +
# Calculate averages for the desired columns.
per_budget_bin = school_data_complete_df.set_index(["Budget Bin"])
per_budget_counts = school_data_complete_df["Budget Bin"].value_counts()
per_budget_math = school_data_complete_df.groupby(["Budget Bin"]).mean()["math_score"]
per_budget_read = school_data_complete_df.groupby(["Budget Bin"]).mean()["reading_score"]
per_budget_pass_math = school_data_complete_df[(school_data_complete_df["math_score"]>= 70)]
per_budget_pass_read = school_data_complete_df[(school_data_complete_df["reading_score"]>= 70)]
per_budget_pass_both = school_data_complete_df[(school_data_complete_df["math_score"]>= 70)
& (school_data_complete_df["reading_score"]>= 70)]
per_budget_pass_math=per_budget_pass_math.groupby(["Budget Bin"]).count()["student_name"]
per_budget_pass_read=per_budget_pass_read.groupby(["Budget Bin"]).count()["student_name"]
per_budget_pass_both=per_budget_pass_both.groupby(["Budget Bin"]).count()["student_name"]
per_budget_pass_math_rate=per_budget_pass_math / per_budget_counts*100
per_budget_pass_read_rate=per_budget_pass_read / per_budget_counts*100
per_budget_pass_both_rate=per_budget_pass_both / per_budget_counts*100
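The pass-rate arithmetic above (passing counts divided by bin totals, times 100) can be checked on a tiny made-up frame; the column and bin names here are invented for illustration:

```python
import pandas as pd

# Three students in bin A (two pass with >= 70), one in bin B (passes).
toy = pd.DataFrame({"bin": ["A", "A", "A", "B"],
                    "score": [80, 90, 50, 75]})
counts = toy["bin"].value_counts()
passing = toy[toy["score"] >= 70].groupby("bin").count()["score"]
rate = passing / counts * 100  # division aligns on the bin index
print(rate.sort_index())
```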
# +
# Create the DataFrame
scores_by_budget_df=pd.DataFrame({
"Average Math Score":per_budget_math,
"Average Reading Score":per_budget_read,
"% Passing Math":per_budget_pass_math_rate,
"% Passing Reading":per_budget_pass_read_rate,
"% Overall Passing":per_budget_pass_both_rate})
scores_by_budget_df
# +
# Format the DataFrame
scores_by_budget_df["Average Math Score"]=scores_by_budget_df["Average Math Score"].map("{:.1f}".format)
scores_by_budget_df["Average Reading Score"]=scores_by_budget_df["Average Reading Score"].map("{:.1f}".format)
scores_by_budget_df["% Passing Math"]=scores_by_budget_df["% Passing Math"].map("{:.1f}".format)
scores_by_budget_df["% Passing Reading"]=scores_by_budget_df["% Passing Reading"].map("{:.1f}".format)
scores_by_budget_df["% Overall Passing"]=scores_by_budget_df["% Overall Passing"].map("{:.1f}".format)
print("Scores by spending per student")
scores_by_budget_df
# -
# ## Scores by School Size
# +
# Establish the size bins.
bins = [0, 1000, 2000, 5000]
# Categorize sizes based on the bins.
size_bin_name =["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"]
# -
school_data_complete_df["Size Bin"]=pd.cut(school_data_complete_df["size"],bins,labels=size_bin_name)
# +
# Calculate averages for the desired columns.
per_size_bin = school_data_complete_df.set_index(["Size Bin"])
per_size_counts = school_data_complete_df["Size Bin"].value_counts()
per_size_math = school_data_complete_df.groupby(["Size Bin"]).mean()["math_score"]
per_size_read = school_data_complete_df.groupby(["Size Bin"]).mean()["reading_score"]
per_size_pass_math = school_data_complete_df[(school_data_complete_df["math_score"]>= 70)]
per_size_pass_read = school_data_complete_df[(school_data_complete_df["reading_score"]>= 70)]
per_size_pass_both = school_data_complete_df[(school_data_complete_df["math_score"]>= 70)
& (school_data_complete_df["reading_score"]>= 70)]
per_size_pass_math=per_size_pass_math.groupby(["Size Bin"]).count()["student_name"]
per_size_pass_read=per_size_pass_read.groupby(["Size Bin"]).count()["student_name"]
per_size_pass_both=per_size_pass_both.groupby(["Size Bin"]).count()["student_name"]
per_size_pass_math_rate=per_size_pass_math / per_size_counts*100
per_size_pass_read_rate=per_size_pass_read / per_size_counts*100
per_size_pass_both_rate=per_size_pass_both / per_size_counts*100
# +
# Assemble into DataFrame.
scores_by_size_df=pd.DataFrame({
"Average Math Score":per_size_math,
"Average Reading Score":per_size_read,
"% Passing Math":per_size_pass_math_rate,
"% Passing Reading":per_size_pass_read_rate,
"% Overall Passing":per_size_pass_both_rate})
# +
# Format the DataFrame
scores_by_size_df["Average Math Score"]=scores_by_size_df["Average Math Score"].map("{:.1f}".format)
scores_by_size_df["Average Reading Score"]=scores_by_size_df["Average Reading Score"].map("{:.1f}".format)
scores_by_size_df["% Passing Math"]=scores_by_size_df["% Passing Math"].map("{:.1f}".format)
scores_by_size_df["% Passing Reading"]=scores_by_size_df["% Passing Reading"].map("{:.1f}".format)
scores_by_size_df["% Overall Passing"]=scores_by_size_df["% Overall Passing"].map("{:.1f}".format)
print("Altered School Size Analysis")
scores_by_size_df
# -
# ## Scores by School Type
# +
# Calculate averages for the desired columns.
per_types = school_data_complete_df.set_index(["type"])
per_type_counts = school_data_complete_df["type"].value_counts()
per_type_math = school_data_complete_df.groupby(["type"]).mean()["math_score"]
per_type_read = school_data_complete_df.groupby(["type"]).mean()["reading_score"]
per_type_pass_math = school_data_complete_df[(school_data_complete_df["math_score"]>= 70)]
per_type_pass_read = school_data_complete_df[(school_data_complete_df["reading_score"]>= 70)]
per_type_pass_both = school_data_complete_df[(school_data_complete_df["math_score"]>= 70)
& (school_data_complete_df["reading_score"]>= 70)]
per_type_pass_math=per_type_pass_math.groupby(["type"]).count()["student_name"]
per_type_pass_read=per_type_pass_read.groupby(["type"]).count()["student_name"]
per_type_pass_both=per_type_pass_both.groupby(["type"]).count()["student_name"]
per_type_pass_math_rate=per_type_pass_math / per_type_counts*100
per_type_pass_read_rate=per_type_pass_read / per_type_counts*100
per_type_pass_both_rate=per_type_pass_both / per_type_counts*100
# -
# Assemble into DataFrame.
scores_type_summary_df= pd.DataFrame({
"Math Scores":per_type_math,
"Reading Scores":per_type_read,
"% Passing Math":per_type_pass_math_rate,
"% Passing Reading":per_type_pass_read_rate,
"% Passing Both":per_type_pass_both_rate})
# +
# Format the DataFrame
scores_type_summary_df["Math Scores"]=scores_type_summary_df["Math Scores"].map("{:.1f}".format)
scores_type_summary_df["Reading Scores"]=scores_type_summary_df["Reading Scores"].map("{:.1f}".format)
scores_type_summary_df["% Passing Math"]=scores_type_summary_df["% Passing Math"].map("{:.1f}".format)
scores_type_summary_df["% Passing Reading"]=scores_type_summary_df["% Passing Reading"].map("{:.1f}".format)
scores_type_summary_df["% Passing Both"]=scores_type_summary_df["% Passing Both"].map("{:.1f}".format)
print("Altered Scores by School Type")
scores_type_summary_df
# -
| PyCitySchools.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (PyTorch CPU Optimized)
# language: python
# name: python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-2:429704687514:environment/pytorch-cpu-optimized
# ---
# # Automatic model tuning of PyTorch models with Amazon SageMaker
# This lab demonstrates the power of **Amazon SageMaker's automatic model tuning capability**, also known as hyperparameter optimization (HPO). Instead of a labor-intensive process of trial and error that could take days or weeks, [automatic model tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html) lets a data scientist ask SageMaker to find the optimal set of hyperparameters, typically in minutes or hours.
#
# The notebook shows how to provide a set of parameters to tune, ranges to consider, a metric to optimize, some limits on the number of jobs to consider, and the compute capacity to leverage. A SageMaker tuning job then efficiently explores the options using Bayesian optimization. SageMaker creates a set of models and highlights which one is optimal given your constraints. The resulting model is ready for deployment behind an endpoint or for batch predictions.
# ## Setup
# For this notebook, we simply get our execution role and establish some parameters for using S3.
# +
import sagemaker
from sagemaker import get_execution_role
import boto3
client = boto3.client(service_name='sagemaker')
role = get_execution_role()
print(role)
sess = sagemaker.Session()
bucket = sess.default_bucket() # or custom bucket name
prefix = 'DEMO-PYT-image-classification-birds'
JOB_PREFIX = 'pyt-hpo-ic'
FRAMEWORK_VERSION = '1.3.1'
# -
# This notebook relies on execution of the previous notebooks in this workshop. Specifically, it assumes the image data has been prepared and uploaded to S3. Here we just define exactly where the training jobs will pull their image data from.
train_inputs = 's3://{}/{}/train'.format(bucket, prefix)
val_inputs = 's3://{}/{}/validation'.format(bucket, prefix)
test_inputs = 's3://{}/{}/test'.format(bucket, prefix)
print('Training data: {}\nValidation data: {}\nTest data: {}'.format(train_inputs, val_inputs, test_inputs))
# Here are the classes that have been uploaded to s3 for training.
# !aws s3 ls $train_inputs/
# ## Create hyperparameter tuning job
# To use Amazon SageMaker's automatic model tuning capability, you create a tuning job, which in turn launches a set of SageMaker training jobs. As when creating a training job directly, you first establish a set of hyperparameters, some metric definitions, and then a PyTorch estimator which will be fed a Python training script.
from sagemaker.pytorch import PyTorch
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
# +
hyperparameters = {'initial_epochs': 5,
'data_dir': '/opt/ml/input/data',
'dropout': 0.5}
metric_definitions=[{'Name' : 'validation:acc',
'Regex': '.*Test accuracy: (.*$)'},
{'Name' : 'validation:loss',
'Regex': '.*Test loss: (.*).. Test ac.*'},
{'Name' : 'train:loss',
'Regex': '.*Train loss: (.*).. Test lo.*'}]
estimator = PyTorch(entry_point='train-resnet.py',
source_dir='code',
train_instance_type='ml.c5.4xlarge',
train_instance_count=1,
hyperparameters=hyperparameters,
metric_definitions=metric_definitions,
role=sagemaker.get_execution_role(),
framework_version=FRAMEWORK_VERSION,
debugger_hook_config=False, # working around existing bug (TT 0305452782, Answer 93236)
py_version='py3',
base_job_name=JOB_PREFIX)
# -
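The metric regexes above can be sanity-checked against a sample log line with Python's `re` module (the log text below is invented for illustration; the real format is whatever `train-resnet.py` prints):

```python
import re

sample_log = "Train loss: 0.412.. Test loss: 0.387.. Test accuracy: 0.912"
acc = re.search(r'.*Test accuracy: (.*$)', sample_log)
loss = re.search(r'.*Test loss: (.*).. Test ac.*', sample_log)
print(acc.group(1), loss.group(1))
```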
# More interestingly, here is the part that is unique to creating the tuning job. You define a set of hyperparameter ranges that you want SageMaker to explore via training jobs. For our example, we focus on the number of [fine tuning](https://www.pyimagesearch.com/2019/06/03/fine-tuning-with-keras-and-deep-learning/) epochs and the dropout ratio. If we tried to find the best settings manually, it would take significant time and lots of trial and error. With SageMaker, we can hand off that job and find the optimal settings with ease.
hyperparameter_ranges = {'initial_epochs': IntegerParameter(5, 20),
'dropout': ContinuousParameter(0.2, 0.7)}
objective_metric_name = 'validation:acc'
objective_type = 'Maximize'
tuner = HyperparameterTuner(estimator,
objective_metric_name,
hyperparameter_ranges,
metric_definitions,
max_jobs=6,
max_parallel_jobs=2,
objective_type=objective_type,
base_tuning_job_name=JOB_PREFIX)
inputs = {'train':train_inputs, 'test': test_inputs, 'validation': val_inputs}
# With the tuning job established, we can now launch the job and then check back to see what parameters are best suited to our image classifier.
tuner.fit(inputs)
status = boto3.client('sagemaker').describe_hyper_parameter_tuning_job(
HyperParameterTuningJobName=tuner.latest_tuning_job.job_name)['HyperParameterTuningJobStatus']
print('Tuning job: {}, Status: {}'.format(tuner.latest_tuning_job.job_name, status))
# ## Analyze tuning job results
# In the remainder of this notebook, we perform some analysis on the results of the tuning job. This helps us gain insight into which parameters were most influential. It can also help generate ideas for other tuning jobs that would help get even closer to our objective. The SageMaker console also provides a good way to track the job and review results.
tuning_job_name = tuner.latest_tuning_job.job_name
# Here we can monitor the progress of the overall tuning job, finding out how many jobs are completed.
# +
tuning_job_result = client.describe_hyper_parameter_tuning_job(HyperParameterTuningJobName=tuning_job_name)
status = tuning_job_result['HyperParameterTuningJobStatus']
if status != 'Completed':
print('Reminder: the tuning job has not been completed.')
job_count = tuning_job_result['TrainingJobStatusCounters']['Completed']
print("%d training jobs have completed" % job_count)
is_minimize = (tuning_job_result['HyperParameterTuningJobConfig']['HyperParameterTuningJobObjective']['Type'] != 'Maximize')
objective_name = tuning_job_result['HyperParameterTuningJobConfig']['HyperParameterTuningJobObjective']['MetricName']
# -
# Here we take a look at the parameters that were used for the best model produced thus far.
from pprint import pprint
if tuning_job_result.get('BestTrainingJob',None):
print("Best model found so far:")
pprint(tuning_job_result['BestTrainingJob'])
else:
print("No training jobs have reported results yet.")
# Here we produce a grid view of all the jobs, their parameters, and their results. They are sorted in descending order of their final objective value (best metric at the top, worst at the bottom).
# +
import pandas as pd
tuner_analytics = sagemaker.HyperparameterTuningJobAnalytics(tuning_job_name)
full_df = tuner_analytics.dataframe()
if len(full_df) > 0:
    df = full_df[full_df['FinalObjectiveValue'] > -float('inf')]
    if len(df) > 0:
        df = df.sort_values('FinalObjectiveValue', ascending=is_minimize)
        print("Number of training jobs with valid objective: %d" % len(df))
        print({"lowest": min(df['FinalObjectiveValue']), "highest": max(df['FinalObjectiveValue'])})
        pd.set_option('display.max_colwidth', None)  # Don't truncate TrainingJobName
    else:
        print("No training jobs have reported valid results yet.")
else:
    df = full_df  # avoid a NameError below if no jobs have reported yet
df
# -
# With the following chart, we can see how well SageMaker's Bayesian optimization was able to explore the search space of possible hyperparameters over time. In our case, we only ran a few jobs with a few hyperparameter ranges. For a production tuning job with many more jobs and parameter ranges, this chart is more compelling.
# !pip install bokeh # studio doesn't have this by default yet
# +
import bokeh
import bokeh.io
bokeh.io.output_notebook()
from bokeh.plotting import figure, show
from bokeh.models import HoverTool
class HoverHelper():
def __init__(self, tuning_analytics):
self.tuner = tuning_analytics
def hovertool(self):
tooltips = [
("FinalObjectiveValue", "@FinalObjectiveValue"),
("TrainingJobName", "@TrainingJobName"),
]
for k in self.tuner.tuning_ranges.keys():
tooltips.append( (k, "@{%s}" % k) )
ht = HoverTool(tooltips=tooltips)
return ht
def tools(self, standard_tools='pan,crosshair,wheel_zoom,zoom_in,zoom_out,undo,reset'):
return [self.hovertool(), standard_tools]
hover = HoverHelper(tuner_analytics)
p = figure(plot_width=900, plot_height=400, tools=hover.tools(), x_axis_type='datetime')
p.circle(source=df, x='TrainingStartTime', y='FinalObjectiveValue')
show(p)
# -
# Lastly, we take a look at how significantly each of our hyperparameters impacted the final objective value.
ranges = tuner_analytics.tuning_ranges
figures = []
for hp_name, hp_range in ranges.items():
categorical_args = {}
if hp_range.get('Values'):
# This is marked as categorical. Check if all options are actually numbers.
def is_num(x):
try:
float(x)
return 1
except:
return 0
vals = hp_range['Values']
if sum([is_num(x) for x in vals]) == len(vals):
# Bokeh has issues plotting a "categorical" range that's actually numeric, so plot as numeric
print("Hyperparameter %s is tuned as categorical, but all values are numeric" % hp_name)
else:
# Set up extra options for plotting categoricals. A bit tricky when they're actually numbers.
categorical_args['x_range'] = vals
# Now plot it
p = figure(plot_width=500, plot_height=500,
title="Objective vs %s" % hp_name,
tools=hover.tools(),
x_axis_label=hp_name, y_axis_label=objective_name,
**categorical_args)
p.circle(source=df, x=hp_name, y='FinalObjectiveValue')
figures.append(p)
show(bokeh.layouts.Column(*figures))
| pytorch-workshop/4_auto_model_tuning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] id="N6ejTn5C0VuR"
# # Python structures
# + [markdown] id="yAjP93sS0Vug"
# We have now seen that we can define different types of variables and operate on them using either classical mathematical operations or functions and methods. Sometimes, however, we operate on more than one variable at a time, so we need a way to group variables together in a coherent unit.
#
# Python offers several such groupings, and we are going to look at two of them: lists and dictionaries. If you want to go further with Python you should definitely study this topic in more detail, but in this course we will only use these two structures.
# + [markdown] id="nQsRuqmq0Vui"
# ## Lists
# + [markdown] id="6sMn_FS_0Vuj"
# ### Creating lists
# Lists are basically collections of variables. One of the main properties of lists is that each element can be modified **after** the list has been created, so a list is a "dynamic" object.
#
# Lists are surrounded by brackets [] and can be created like this:
# + executionInfo={"elapsed": 686, "status": "ok", "timestamp": 1616246836895, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="Q8SGPC1F0Vuk"
mylist = [10, 5, 983, 20]
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 651, "status": "ok", "timestamp": 1616246836902, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="NT1-UaM40Vuk" outputId="f510f658-1161-4759-bd36-fe2055296807"
type(mylist)
# + [markdown] id="BEIcd1800Vun"
# You can create lists of almost anything, for example strings:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 642, "status": "ok", "timestamp": 1616246836903, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="RHUf-FFi0Vun" outputId="062353ad-6ac7-4fc1-be4f-fdf51ea9726f"
['a', 'b','c']
# + [markdown] id="9QPINlDR0Vuo"
# Or even mix different types, although it's best to avoid this:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 633, "status": "ok", "timestamp": 1616246836903, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="zSSNlLW30Vuo" outputId="ed571230-f418-4d94-da94-e1c823e59b95"
['a', 10, 23.54]
# + [markdown] id="7MHlILgM0Vuo"
# ### List indexes
# + [markdown] id="8F-_JFip0Vup"
# The simplest operation one can do on a list is to recover a specific value:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 626, "status": "ok", "timestamp": 1616246836904, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="0lwfIL--0Vup" outputId="d3f7ddf1-83b7-44a3-f7e9-f1fcc51341e5"
mylist
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 497, "status": "ok", "timestamp": 1616246836905, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="OpC6e7fc0Vuq" outputId="d1c3a915-bf48-40b0-89d9-8d8a7e1e03ad"
mylist[2]
# + [markdown] id="n1W5yvTd0Vuq"
# **Note that Python uses 0-based indexing, meaning that the first element has index 0!**
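A quick illustration of indexing, including Python's negative indices (which count from the end) and slicing:

```python
letters = ['a', 'b', 'c', 'd']
print(letters[0])    # first element: 'a'
print(letters[3])    # fourth and last element: 'd'
print(letters[-1])   # negative indices count from the end: 'd'
print(letters[1:3])  # a slice: indices 1 and 2 -> ['b', 'c']
```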
# + [markdown] id="Qcj1bx5a0Vuq"
# As said before, lists are dynamic objects, so one can reassign values:
# + executionInfo={"elapsed": 550, "status": "ok", "timestamp": 1616246838543, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="U5VtyFFl0Vuq"
mylist[2] = 25
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 356, "status": "ok", "timestamp": 1616246838867, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="sSLP6d4G0Vur" outputId="82cdb5e5-ba86-4a61-ec89-4d15f44a85dd"
mylist
# + [markdown] id="LEm5Vtpe0Vur"
# ### Who is who?
#
# An aspect that can be very confusing in Python is that some objects are not really copied when you assign them to a new variable. Let's clarify this. For example with simple numbers we have:
# + executionInfo={"elapsed": 1086, "status": "ok", "timestamp": 1616246842742, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="cAEHebc90Vur"
a = 5
b = a
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 577, "status": "ok", "timestamp": 1616246843250, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="sOml1yhU0Vur" outputId="469f4871-a04f-4ad9-b228-29be54af5d75"
b
# + [markdown] id="brY5CE_U0Vut"
# If now we modify ```a```:
# + executionInfo={"elapsed": 483, "status": "ok", "timestamp": 1616246844453, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="8znop_j40VvP"
a = 10
# + [markdown] id="EIHY2g3n0VvQ"
# ```b``` still has the old value:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 539, "status": "ok", "timestamp": 1616246845639, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="ygSBSyQQ0VvQ" outputId="77be68f8-5b9a-4e00-e3a6-4e2748e17f9e"
b
# + [markdown] id="UJgA-gXA0VvQ"
# Now let's do something similar with a list. We have a first list:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 481, "status": "ok", "timestamp": 1616246850779, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="DRAhWLjb0VvR" outputId="c678301d-a34f-4e87-a104-3c20b72a6ee8"
mylist = [10, 5, 983, 20]
mylist
# + [markdown] id="gqC2yOfu0VvR"
# Now we copy it to a new list:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 352, "status": "ok", "timestamp": 1616246851674, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="9nhre48A0VvR" outputId="27925b2b-e29b-40ce-a6f3-726efbcbe37a"
mylist2 = mylist
mylist2
# + [markdown] id="SFJLYZnZ0VvR"
# And we modify the original list:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 520, "status": "ok", "timestamp": 1616246854040, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="CBx9H2eu0VvS" outputId="cd5b3b38-d648-4e10-f544-a36cff30fe8d"
mylist[2] = 10000
mylist
# + [markdown] id="Exg2SKOB0VvS"
# What happened to the second list?
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 521, "status": "ok", "timestamp": 1616246855793, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="zaLyX45w0VvS" outputId="421e47a8-09ee-4b88-dd6a-389100cd69b5"
mylist2
# + [markdown] id="xNK_cabz0VvS"
# **It was changed too!** This is because the two names ```mylist``` and ```mylist2``` refer to the same underlying object: ```mylist2``` is not a copy but a second name for the same list. If you really want to create an **independent** copy of the first list, you can use the ```copy()``` method:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 565, "status": "ok", "timestamp": 1616246861584, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="n7WzmCqN0VvS" outputId="f4ac4b94-c440-467a-cf91-973027ce90c5"
mylist2 = mylist.copy()
mylist2
# + [markdown] id="cc1NPB_T0VvT"
# Now we can change ```mylist``` without affecting ```mylist2```:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 337, "status": "ok", "timestamp": 1616246861856, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="H0IIGHEq0VvT" outputId="874960f5-9c18-4900-f735-45578a7431de"
mylist[1] = 7000
mylist
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 414, "status": "ok", "timestamp": 1616246863431, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="IjfKFSuj0VvT" outputId="0856625d-d12e-4411-a3ab-328639bea6f6"
mylist2
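One caveat worth knowing: ```copy()``` makes a *shallow* copy. If the list contains other lists, those inner lists are still shared between the copies; the ```copy.deepcopy``` function from the standard library makes a fully independent copy:

```python
import copy

nested = [[1, 2], [3, 4]]
shallow = nested.copy()       # new outer list, but shared inner lists
deep = copy.deepcopy(nested)  # fully independent copy

nested[0][0] = 99
print(shallow[0][0])  # 99: the inner list is shared with the shallow copy
print(deep[0][0])     # 1: the deep copy is unaffected
```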
# + [markdown] id="CxdyQb6Q0VvT"
# ### Using functions and methods
# + [markdown] id="lGyF43zY0VvT"
# Just like other variables, lists have associated functions and methods. One built-in function that we have already seen is len():
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 472, "status": "ok", "timestamp": 1616246868710, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="C1AFjkvs0VvU" outputId="ff7db9cb-333c-4a25-d5b7-d9b3c5402013"
len(mylist)
# + [markdown] id="lc3UTjJp0VvU"
# The list indeed contains four elements. We can again find all associated functions using dir():
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 518, "status": "ok", "timestamp": 1616246870743, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="TzkfOtYV0VvU" outputId="997fe27b-2413-401e-9245-686be7281c6b"
dir(mylist)
# + [markdown] id="enwUd4Qb0VvU"
# **Some of the methods are important and we will use them very often**. For example ```append()``` to add values to a list:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 450, "status": "ok", "timestamp": 1616246873387, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="BAZwQkwC0VvU" outputId="2beda6fe-cfa3-4bda-84c7-703106e1f171"
mylist
# + executionInfo={"elapsed": 398, "status": "ok", "timestamp": 1616246874030, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="l0GsdQk90VvV"
mylist.append(230)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 642, "status": "ok", "timestamp": 1616246874770, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="WvP5mbDe0VvV" outputId="be7746e7-4cff-4af5-ced1-949b5cffa32c"
mylist
# + [markdown] id="MmheTE790VvV"
# We see here that the list has been modified **in place**, we didn't have to reassign the result to a new variable. The append function is important as we will often start with an empty list and fill it progressively:
# + executionInfo={"elapsed": 345, "status": "ok", "timestamp": 1616246875051, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="IhIEkoSQ0VvV"
mylist = []
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 365, "status": "ok", "timestamp": 1616246875523, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="g2AhNZNx0VvW" outputId="8c78f7e2-45eb-4719-bccc-7ef30baf1cd9"
mylist.append(10)
print(mylist)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 464, "status": "ok", "timestamp": 1616246877047, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="aUs2LKHf0VvW" outputId="57b03ea3-c800-45c8-9cb0-3bf76b8def49"
mylist.append(30)
print(mylist)
# + [markdown] id="noWzrsSZ0VvW"
# ## Dictionaries
# + [markdown] id="jETPOv4_0VvW"
# Sometimes it's useful to store very diverse information in a single container, and in that case it is also useful to remember what exactly was stored there. For that we can use dictionaries. As the name suggests, these structures are composed of pairs of "words" (keys) and "definitions" (values), where a value can be a number, a string, a list, etc. Let's imagine we have detected a cell in an image and want to store its location, size and type. We can define the following dictionary:
# + executionInfo={"elapsed": 432, "status": "ok", "timestamp": 1616246885543, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="AtBOwcph0VvW"
mydict = {'location_row': 10, 'location_col': 23, 'surface': 120, 'type': 'embryonic'}
# + [markdown] id="KndJAb_w0VvW"
# Now whenever we want to recover the cell size, we can find it using:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 765, "status": "ok", "timestamp": 1616246887378, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="xZdH1gpp0VvX" outputId="5f9c8959-499d-4efa-f4e0-47709a6b1197"
mydict['surface']
# + [markdown] id="B9xY5NN80VvY"
# ### Again, who's who?
# + [markdown] id="ZDPps9hW0VvY"
# Dictionaries behave in the same way as lists: a simple assignment does not create a true copy:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 404, "status": "ok", "timestamp": 1616246891184, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="wGOY-yVs0VvY" outputId="844bcd6c-d48e-40ca-e984-b24d3c712b6f"
mydict2 = mydict
mydict['surface'] = 5000
mydict2
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 427, "status": "ok", "timestamp": 1616246891548, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="dODLkgVA0VvY" outputId="7550f9da-327f-4802-f445-9883960e8f5e"
mydict
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 426, "status": "ok", "timestamp": 1616246891915, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="b1KQDUQc0VvY" outputId="10ebaf73-7f3c-4e47-f70b-40b64fe288ca"
mydict2
# + [markdown] id="Rx1xlQTY0VvZ"
# Also, if you add a new key, it will appear in both copies:
# + executionInfo={"elapsed": 603, "status": "ok", "timestamp": 1616246896176, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="O5rAGUq-0VvZ"
mydict2['test'] = 30
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 352, "status": "ok", "timestamp": 1616246896382, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="kTTr0UrN0VvZ" outputId="a6834493-2af6-4519-e89e-59e501c56d69"
mydict
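# If an independent copy is what you actually want, dictionaries provide a ```copy()``` method (a minimal sketch; for dictionaries containing nested lists or dictionaries you would need ```copy.deepcopy``` instead):

```python
# a plain assignment (mydict2 = mydict) only creates a second name for the
# same dictionary; copy() creates an independent one
mydict = {'location_row': 10, 'location_col': 23, 'surface': 120, 'type': 'embryonic'}
mydict2 = mydict.copy()
mydict2['surface'] = 5000

print(mydict['surface'])    # 120 - the original is untouched
print(mydict2['surface'])   # 5000
```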
# + [markdown] id="6IN3-nIe0Vva"
# ### Grouping dictionaries
# + [markdown] id="T6ELvJWo0Vva"
# If we imagine that we have three detected cells, we can then group all the information within a list:
# + executionInfo={"elapsed": 496, "status": "ok", "timestamp": 1616246931496, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="oAvoYIej0Vva"
mydict = {'location_row': 10, 'location_col': 23, 'surface': 120, 'type': 'embryonic'}
mydict2 = {'location_row': 32, 'location_col': 18, 'surface': 130, 'type': 'embryonic'}
mydict3 = {'location_row': 23, 'location_col': 5, 'surface': 90, 'type': 'embryonic'}
all_cells = [mydict, mydict2, mydict3]
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 548, "status": "ok", "timestamp": 1616246932621, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="wpNHVB8M0Vva" outputId="ed1765c3-527d-4346-d503-667748e7812b"
all_cells
# + [markdown] id="zbiGHKjG0Vvb"
# You can recover all the "words" that are defined using the ```keys()``` method:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 642, "status": "ok", "timestamp": 1616246934577, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="fiuIIzeA0Vvb" outputId="c8293d3d-6d95-4a0b-971b-f313a4b293d8"
mydict.keys()
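# Besides ```keys()```, the ```items()``` method lets you loop over the word/definition pairs directly (a small complement to the above):

```python
mydict = {'location_row': 10, 'location_col': 23, 'surface': 120, 'type': 'embryonic'}

# items() yields (key, value) pairs that can be unpacked in the loop
for key, value in mydict.items():
    print(key, ':', value)
```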
# + [markdown] id="CspqHLHE0Vvb"
# ## Dataframes
# + [markdown] id="CwvziyJq0Vvc"
# What native Python is lacking is a data format that simplifies the handling of tabular data and doing statistics on it. By this I mean something like an Excel sheet, where you have multiple columns and can, for example, take the average of each column. This type of tabular data is provided by the Pandas package and its **Dataframe** structure. We will see here only a tiny fraction of the possibilities offered by Pandas, so read more about it if you think it might help you. Let's import Pandas:
# + executionInfo={"elapsed": 861, "status": "ok", "timestamp": 1616246939428, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="OiMwkROl0Vvc"
import pandas as pd
# + [markdown] id="euwfbP820Vvc"
# ### Creating a Dataframe
# + [markdown] id="gKVQk1iC0Vvc"
# Dataframes can be created from scratch and filled with data. However, what will happen most of the time is that we will get some result in native Python format and transform it into a dataframe. We can do this immediately with our list of dictionaries, using the ```DataFrame()``` function:
# + executionInfo={"elapsed": 446, "status": "ok", "timestamp": 1616246942570, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="Fkfbj45h0Vvc"
mydataframe = pd.DataFrame(all_cells)
# + colab={"base_uri": "https://localhost:8080/", "height": 142} executionInfo={"elapsed": 566, "status": "ok", "timestamp": 1616246943237, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="3dmpOQcO0Vvc" outputId="5951e55d-177c-4a03-a659-8db0eae1e834"
mydataframe
# + [markdown] id="TPaQpejQ0Vvd"
# We see that the dataframe is shown in a nicely formatted way. Also since we used a list of dictionaries, Pandas was smart enough to infer for us how the table should be made.
#
# We can also create a dataframe from a 2D list:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 450, "status": "ok", "timestamp": 1616246945214, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="5imEBp-Y0Vvd" outputId="aad64498-3639-47b0-c07d-70cf55e2c3ee"
all_cells_list = [[10,23,120,'embryonic'],[32,18,130,'embryonic'],[23,5,90,'embryonic']]
print(all_cells_list)
# + [markdown] id="9xGaZ5Ts0Vve"
# Without column name specification, plain numbers are used as headers and indices.
# + colab={"base_uri": "https://localhost:8080/", "height": 142} executionInfo={"elapsed": 680, "status": "ok", "timestamp": 1616246948390, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="FQmtGz6k0Vve" outputId="8b662ff8-3a83-4fcd-fab3-712fdb2b509e"
pd.DataFrame(all_cells_list)
# + [markdown] id="eH7beD_t0Vve"
# We can pass a second parameter called ```columns``` to specify headers:
# + colab={"base_uri": "https://localhost:8080/", "height": 142} executionInfo={"elapsed": 912, "status": "ok", "timestamp": 1616246951105, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="4dmkG9yg0Vve" outputId="820db409-2278-4433-f203-a9dbf4b254de"
pd.DataFrame(all_cells_list, columns=['x','y', 'surf', 'type'])
# + [markdown] id="4y9j41JW0Vvg"
# ### Accessing data
# + [markdown] id="kpiVkow80Vvh"
# Let's remember what's in ```mydataframe```:
# + colab={"base_uri": "https://localhost:8080/", "height": 142} executionInfo={"elapsed": 704, "status": "ok", "timestamp": 1616246954131, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="UTt0tFJv0Vvh" outputId="c50c47f7-aa0c-4ca7-b52c-834895dcd8b5"
mydataframe
# + [markdown] id="sckRTkaO0Vvh"
# Using dataframes, we can recover entire columns very easily. For example, if we want to recover the ```surface``` values for all rows, we have two choices:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 741, "status": "ok", "timestamp": 1616246955659, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="WrbAUml90Vvh" outputId="5a64bc8e-1410-43a9-b12c-2a73b6fb05c7"
mydataframe.surface
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 538, "status": "ok", "timestamp": 1616246957510, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="zexhHKPg0Vvi" outputId="621c4715-0c6b-4aa9-8fc2-da206ed54fe8"
mydataframe['surface']
# + [markdown] id="5NZQNOoX0Vvi"
# If we want to recover the data of a specific cell, for example the second row, we have to use the ```loc[]``` method. **Note that this method uses brackets, not parentheses**:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 504, "status": "ok", "timestamp": 1616246958189, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="o0jO8n720Vvi" outputId="0f0029ce-4b44-4fbc-aa35-8cd1d9808864"
mydataframe.loc[1]
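# ```loc[]``` also accepts a column label as a second index, which returns a single value instead of a whole row (a sketch rebuilding the same dataframe as above):

```python
import pandas as pd

all_cells = [
    {'location_row': 10, 'location_col': 23, 'surface': 120, 'type': 'embryonic'},
    {'location_row': 32, 'location_col': 18, 'surface': 130, 'type': 'embryonic'},
    {'location_row': 23, 'location_col': 5, 'surface': 90, 'type': 'embryonic'},
]
mydataframe = pd.DataFrame(all_cells)

# row label 1, column label 'surface' -> one scalar
value = mydataframe.loc[1, 'surface']
print(value)   # 130
```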
# + [markdown] id="9K_PkgPP0Vvi"
# ### Doing statistics
# + [markdown] id="YIdzSsI50Vvi"
# Pandas provides extensive tools to analyze the data contained in a dataframe. We are going to do only very simple operations to illustrate the power of this approach. For example, we can easily calculate the mean of each column:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 533, "status": "ok", "timestamp": 1616246962158, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="rQAG37KH0Vvj" outputId="95164f8d-67b0-408a-a072-a48e28102cac"
mydataframe.mean()
# + [markdown] id="xmBgLm2C0Vvj"
# You see that Pandas is smart enough to do the work only on columns that contain numbers. Same thing for median, standard deviation etc.:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 522, "status": "ok", "timestamp": 1616246964046, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="yPA7cieq0Vvk" outputId="054f27dc-8d19-4a62-eb20-ea77b9edde4b"
mydataframe.std()
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 361, "status": "ok", "timestamp": 1616246964245, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="Ie7u9G1e0Vvk" outputId="5665c675-4ece-4718-e5d8-a1c5dbda237e"
mydataframe.median()
# + [markdown] id="gcMIaF890Vvk"
# If we don't want to do the calculation for the entire table, we can also do it for just one variable:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 544, "status": "ok", "timestamp": 1616246966633, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="c2kDU6vy0Vvk" outputId="ffe4ee8d-226c-4454-d81d-33c5775aa594"
mydataframe.surface.mean()
# + [markdown] id="9CeyGNai0Vvk"
# or
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 507, "status": "ok", "timestamp": 1616246968692, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="bdp5s8FP0Vvk" outputId="2c56d34a-011e-4d17-e41c-259d86e06522"
mydataframe['surface'].mean()
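# When the table contains a categorical column, per-category statistics are one ```groupby()``` call away (a small extension beyond what is shown above; the 'adult' type here is made up for the example):

```python
import pandas as pd

cells = pd.DataFrame({
    'surface': [120, 130, 90, 200],
    'type': ['embryonic', 'embryonic', 'embryonic', 'adult'],
})

# one mean per distinct value of the 'type' column
means = cells.groupby('type')['surface'].mean()
print(means)
```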
# + [markdown] id="oULuzQlZ0Vvl"
# ### Plotting with Pandas
# + [markdown] id="zHXPB_wk0Vvl"
# One last nice feature of Pandas is the possibility to directly plot data without using the classical Matplotlib commands:
# + colab={"base_uri": "https://localhost:8080/", "height": 265} executionInfo={"elapsed": 552, "status": "ok", "timestamp": 1616246978505, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgT0K2JVYzEsjzsS5nhkUVjUrSIJ5jHzXnBoYrmVf8=s64", "userId": "16033870147214403532"}, "user_tz": -60} id="DXX13CcZ0Vvl" outputId="cf94e434-23cd-4c8a-d26c-cd7c83850dc4"
mydataframe.surface.plot();
# + id="3hiHueyT07Uy"
| Appendix_Structures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from __future__ import print_function
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.preprocessing.image import ImageDataGenerator
from keras.models import load_model
from keras import backend as K
import matplotlib.pyplot as plt
import scipy.misc
# +
img_width = 160
img_height = 120
image_size=(img_height, img_width)
if K.image_data_format() == 'channels_first':
input_shape = (3, img_height, img_width)
else:
input_shape = (img_height, img_width, 3)
print(input_shape)
nb_train_samples = 10000
nb_validation_samples = 4000
# Path to datadir where train, val and test directories reside
datadir = 'Jun21'
batch_size = 20
nb_angles = 15
# +
train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
datadir + '/train',
target_size=image_size,
batch_size=batch_size,
class_mode='categorical')
#Validation Generator
val_datagen = ImageDataGenerator(rescale=1./255)
val_generator = val_datagen.flow_from_directory(
datadir + '/val',
target_size=image_size,
batch_size=batch_size,
class_mode='categorical')
print(train_generator.class_indices)
print(val_generator.class_indices)
# -
# Python2 uses iteritems() while Python3 uses items()
inv_map = {v: k for k, v in train_generator.class_indices.items()}
nb_epoch = 50
nb_filters=16
kernel_size=(3,3)
pool_size=(2,2)
#img = Input(shape=input_shape)
# +
idx = '1'
model_file = 'models/out15_' + datadir + '_' + idx + '.h5'
weights_file = 'weights/out15_' + datadir + '_' + idx + '.h5'
got_weights = False
save_model = True
try:
model = load_model(model_file)
print('Model loaded')
got_weights = True
save_model = False
except:
model = Sequential()
model.add(Conv2D(nb_filters, kernel_size, input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Conv2D(2*nb_filters, kernel_size))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Conv2D(2*nb_filters, kernel_size))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Conv2D(4*nb_filters, kernel_size))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Flatten())
model.add(Dense(100))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(nb_angles, activation = 'softmax', name = 'angle_out'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
print('Model compiled')
try:
model.load_weights(weights_file)
got_weights = True
print('Weights loaded')
except:
got_weights = False
print(got_weights)
model.summary()
# +
hist = None
if not got_weights:
steps_pe = 200
valSteps = 50
hist = model.fit_generator(train_generator, steps_per_epoch=steps_pe,
epochs=nb_epoch, validation_data=val_generator,
validation_steps=valSteps)
model.save_weights(weights_file)
save_model = True
print('Model trained and weights saved')
if save_model:
model.save(model_file)
print('Model saved')
# -
score = model.evaluate_generator(val_generator, steps=100)
print('Test score:', score[0])
print('Test accuracy:', score[1])
if hist:
plt.close()
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.show()
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.show()
#Test generator
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
datadir + '/test',
target_size=image_size,
batch_size=20,
class_mode=None)
print(test_generator.class_indices)
notDone = True
i = 0
axis_font = {'size':'48'}
while(notDone):
batch_X = test_generator.next()
#print(batch_X)
batch_y = model.predict_on_batch(batch_X)
#print(batch_y)
j = 0
f, axarr = plt.subplots(5, 4, figsize=(80, 60))
for (X, y) in zip(batch_X, batch_y):
#ax = plt.subplot(gl[j, i])
k = j % 5
l = j // 5
#print(k, l)
axarr[k, l].imshow(X)
idx = np.argmax(y)
txt = inv_map[idx] + ', ' + str(round(y[idx], 2))
axarr[k, l].text(0, 0, txt, **axis_font)
j += 1
if j >= 20:
break
#print(y)
#pause
i += 1
notDone = i < 5
plt.show()
print(inv_map)
| notebooks/Train.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from pyomo.environ import *
m = ConcreteModel()
m.x = Var()
# +
Ta = [423, 449, 471, 495, 518, 534, 549, 563]
cra = [1.66E-04, 1.66E-04, 1.59E-04, 1.37E-04, 8.90E-05, 5.63E-05, 3.04E-05, 1.71E-05]
cr0a = 1.64E-4
Tb = [423, 446, 469, 490, 507, 523, 539, 553, 575]
crb = [3.73E-04, 3.72E-04, 3.59E-04, 3.26E-04, 2.79E-04, 2.06E-04, 1.27E-04, 7.56E-05, 3.76E-05]
cr0b = 3.69e-4
Tc = [443, 454, 463, 475, 485, 497, 509, 520, 534, 545, 555, 568]
crc = [2.85E-04, 2.84E-04, 2.84E-04, 2.74E-04, 2.57E-04, 2.38E-04, 2.04E-04, 1.60E-04, 1.12E-04, 6.37E-05, 5.07E-05, 4.49E-05]
cr0c = 2.87e-4
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Ta = np.array(Ta)
cra = np.array(cra)
Tb = np.array(Tb)
crb = np.array(crb)
Tc = np.array(Tc)
crc = np.array(crc)
xa = 1 - cra/cr0a
xb = 1 - crb/cr0b
xc = 1 - crc/cr0c
plt.figure(1)
plt.plot(Ta,xa,'o',Tb,xb,'*',Tc,xc,'^')
plt.xlabel('Temperature (K)')
plt.ylabel('conversion ratios')
plt.legend(['Cr0 = ' + str(cr0a), 'Cr0 = ' + str(cr0b), 'Cr0 = ' + str(cr0c)])
plt.title('Conversion ratios at different feed concentrations')
# -
| notebooks/finance/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ###### Content provided under a Creative Commons Attribution license CC-BY 4.0; code under BSD 3-Clause license. (c)2015 <NAME>, <NAME>.
# # Exercise: Derivation of the vortex-source panel method
# The potential at location $(x, y)$ induced by a uniform flow, a source sheet, and a vortex sheet can be represented as
# $$
# \begin{equation}
# \begin{split}
# \phi(x, y)
# &= \phi_{uniform\ flow}(x, y) \\
# &+ \phi_{source\ sheet}(x, y) + \phi_{vortex\ sheet}(x, y)
# \end{split}
# \end{equation}
# $$
# That is
# $$
# \begin{equation}
# \begin{split}
# \phi(x, y) &= xU_{\infty}\cos(\alpha) + yU_{\infty}\sin(\alpha) \\
# &+
# \frac{1}{2\pi} \int_{sheet} \sigma(s)\ln\left[(x-\xi(s))^2+(y-\eta(s))^2\right]^{\frac{1}{2}}ds \\
# &-
# \frac{1}{2\pi} \int_{sheet} \gamma(s)\tan^{-1} \frac{y-\eta(s)}{x-\xi(s)}ds
# \end{split}
# \end{equation}
# $$
# where $s$ is the local coordinate on the sheet, and $\xi(s)$ and $\eta(s)$ are the coordinates of the infinitesimal source and vortex elements on the sheet. In the above equation, we assume the source sheet and the vortex sheet overlap.
# ------------------------------------------------------
# ### Q1:
# If we discretize the sheet into $N$ panels, re-write the above equation using discretized integrals. Assume $l_j$ represents the length of panel $j$, so that
#
# $$
# \begin{equation}
# \left\{
# \begin{array}{l}
# \xi_j(s)=x_j-s\sin\beta_j \\
# \eta_j(s)=y_j+s\cos\beta_j
# \end{array}
# ,\ \ \
# 0\le s \le l_j
# \right.
# \end{equation}
# $$
#
# The following figure shows the panel $j$:
#
# <center> <img src="resources/Lesson11_Exercise_Fig.1.png" width=360> </center>
#
# HINT: for example, consider the integral $\int_0^L f(x) dx$. If we discretize the domain $0\sim L$ into 3 panels, the integral can be written as:
#
# $$
# \int_0^L f(x) dx = \int_0^{L/3} f(x)dx+\int_{L/3}^{2L/3} f(x)dx+\int_{2L/3}^{L} f(x)dx \\
# = \sum_{j=1}^3 \int_{l_j}f(x)dx
# $$
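# The hint can be checked numerically: splitting the integration domain into panels and summing the panel integrals reproduces the full integral. A minimal sketch with NumPy, using an arbitrary integrand $f(x)=x^2$ and a hand-rolled trapezoidal rule:

```python
import numpy as np

# integrate f over [a, b] with a simple trapezoidal rule
def integral(f, a, b, n=10000):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

f = lambda x: x**2          # arbitrary integrand for the check
L = 3.0

whole = integral(f, 0.0, L)
# three panels of length L/3, exactly as in the hint above
panels = sum(integral(f, j * L / 3, (j + 1) * L / 3) for j in range(3))

print(whole, panels)        # both close to L**3 / 3 = 9
```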
# ----------------------------
# Now let's assume
#
# 1. $\sigma_j(s) = constant = \sigma_j$
# 2. $\gamma_1(s) = \gamma_2(s) = ... = \gamma_N(s) = \gamma$
# ------------------------------------------------
# ### Q2:
# Apply the above assumption into the equation of $\phi(x, y)$ you derived in Q1.
# ---------------------------
# The normal velocity $U_n$ can be derived from the chain rule:
# $$
# \begin{equation}
# \begin{split}
# U_n &= \frac{\partial \phi}{\partial \vec{n}} \\
# &=
# \frac{\partial \phi}{\partial x}\frac{\partial x}{\partial \vec{n}}
# +
# \frac{\partial \phi}{\partial y}\frac{\partial y}{\partial \vec{n}} \\
# &=
# \frac{\partial \phi}{\partial x}\nabla x\cdot \vec{n}
# +
# \frac{\partial \phi}{\partial y}\nabla y\cdot \vec{n} \\
# &=
# \frac{\partial \phi}{\partial x}n_x
# +
# \frac{\partial \phi}{\partial y}n_y
# \end{split}
# \end{equation}
# $$
# The tangential velocity can also be obtained using the same technique. So we can have the normal and tangential velocity at the point $(x, y)$ using:
# $$
# \begin{equation}
# \left\{
# \begin{array}{l}
# U_n(x, y)=\frac{\partial \phi}{\partial x}(x, y) n_x(x, y)+\frac{\partial \phi}{\partial y}(x, y) n_y(x, y) \\
# U_t(x, y)=\frac{\partial \phi}{\partial x}(x, y) t_x(x, y)+\frac{\partial \phi}{\partial y}(x, y) t_y(x, y)
# \end{array}
# \right.
# \end{equation}
# $$
# -------------------------------------
# ### Q3:
# Using the above equation, derive the $U_n(x,y)$ and $U_t(x,y)$ from the equation you obtained in Q2.
# -----------------------------------------
# ### Q4:
# Consider the normal velocity at the center of the $i$-th panel, i.e., at $(x_{c,i}, y_{c,i})$. After replacing $(x, y)$ with $(x_{c,i}, y_{c,i})$ in the equation you derived in Q3, we can re-write the equation in matrix form:
# $$
# \begin{equation}
# \begin{split}
# U_n(x_{c,i}, y_{c,i}) &= U_{n,i} \\
# &= b^n_i + \left[\begin{matrix} A^n_{i1} && A^n_{i2} && ... && A^n_{iN}\end{matrix}\right]\left[\begin{matrix} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_N \end{matrix}\right] + \left(\sum_{j=1}^N B^n_{ij}\right)\gamma \\
# &= b^n_i + \left[\begin{matrix} A^n_{i1} && A^n_{i2} && ... && A^n_{iN} && \left(\sum_{j=1}^N B^n_{ij}\right) \end{matrix}\right]\left[\begin{matrix} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_N \\ \gamma \end{matrix}\right]
# \end{split}
# \end{equation}
# $$
# $$
# \begin{equation}
# \begin{split}
# U_t(x_{c,i}, y_{c,i}) &= U_{t,i} \\
# &= b^t_i + \left[\begin{matrix} A^t_{i1} && A^t_{i2} && ... && A^t_{iN}\end{matrix}\right]\left[\begin{matrix} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_N \end{matrix}\right] + \left(\sum_{j=1}^N B^t_{ij}\right)\gamma \\
# &= b^t_i + \left[\begin{matrix} A^t_{i1} && A^t_{i2} && ... && A^t_{iN} && \left(\sum_{j=1}^N B^t_{ij}\right) \end{matrix}\right]\left[\begin{matrix} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_N \\ \gamma \end{matrix}\right]
# \end{split}
# \end{equation}
# $$
# What are the $b^n_i$, $A^n_{ij}$, $B^n_{ij}$, $b^t_i$, $A^t_{ij}$, and $B^t_{ij}$?
# -----------------------
# Given the fact that (from the Fig. 1)
#
# $$
# \begin{equation}
# \left\{\begin{matrix} \vec{n}_i=n_{x,i}\vec{i}+n_{y,i}\vec{j} = \cos(\beta_i)\vec{i}+\sin(\beta_i)\vec{j} \\ \vec{t}_i=t_{x,i}\vec{i}+t_{y,i}\vec{j} = -\sin(\beta_i)\vec{i}+\cos(\beta_i)\vec{j} \end{matrix}\right.
# \end{equation}
# $$
#
# we have
#
# $$
# \begin{equation}
# \left\{
# \begin{matrix}
# n_{x,i}=t_{y,i} \\
# n_{y,i}=-t_{x,i}
# \end{matrix}
# \right.
# ,\ or\
# \left\{
# \begin{matrix}
# t_{x,i}=-n_{y,i} \\
# t_{y,i}=n_{x,i}
# \end{matrix}
# \right.
# \end{equation}
# $$
# -----------------------
# ### Q5:
# Applying the above relationships between $\vec{n}_i$ and $\vec{t}_i$ to your answer to Q4, you should find that relationships exist between $B^n_{ij}$ and $A^t_{ij}$ and between $B^t_{ij}$ and $A^n_{ij}$. This means that, in your code, you don't have to actually calculate $B^n_{ij}$ and $B^t_{ij}$. What are these relationships?
# -------------------------
# Now, note that when $i=j$, there is a singular point in the integration domain when calculating $A^n_{ii}$ and $A^t_{ii}$. This singular point occurs when $s=l_i/2$, i.e., $\xi_i(l_i/2)=x_{c,i}$ and $\eta_i(l_i/2)=y_{c,i}$. This means we need to calculate $A^n_{ii}$ and $A^t_{ii}$ analytically.
# --------------------------
# ### Q6:
# What are the exact values of $A^n_{ii}$ and $A^t_{ii}$?
# ------------------------------
# In our problem, there are $N+1$ unknowns, that is, $\sigma_1, \sigma_2, ..., \sigma_N, \gamma$. We'll need $N+1$ linear equations to solve the unknowns. The first $N$ linear equations can be obtained from the non-penetration condition on the center of each panel. That is
#
# $$
# \begin{equation}
# \begin{split}
# U_{n,i} &= 0 \\
# &= b^n_i + \left[\begin{matrix} A^n_{i1} && A^n_{i2} && ... && A^n_{iN} && \left(\sum_{j=1}^N B^n_{ij}\right) \end{matrix}\right]\left[\begin{matrix} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_N \\ \gamma \end{matrix}\right] \\
# &,\ \ for\ i=1\sim N
# \end{split}
# \end{equation}
# $$
#
# or
#
# $$
# \begin{equation}
# \begin{split}
# &\left[\begin{matrix} A^n_{i1} && A^n_{i2} && ... && A^n_{iN} && \left(\sum_{j=1}^N B^n_{ij}\right) \end{matrix}\right]\left[\begin{matrix} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_N \\ \gamma \end{matrix}\right] =-b^n_i \\
# &,\ \ for\ i=1\sim N
# \end{split}
# \end{equation}
# $$
# For the last equation, we use the Kutta condition:
#
# $$
# \begin{equation}
# U_{t,1} = - U_{t,N}
# \end{equation}
# $$
# ----------------------
# ### Q7:
# Apply the matrix forms of $U_{t,1}$ and $U_{t,N}$ to the Kutta condition and obtain the last linear equation. Re-arrange the equation so that the unknowns are always on the LHS and the knowns on the RHS.
# ---------------------
# ### Q8:
# Now you have $N+1$ linear equations and can solve for the $N+1$ unknowns. Combine the first $N$ linear equations with the last one (i.e. the Kutta condition from Q7) and obtain the matrix form of the whole system of linear equations.
# ----------------------------
# The equations can be solved now! This is the vortex-source panel method.
# --------------------
# + active=""
# Please ignore the cell below. It just loads our style for the notebook.
# -
from IPython.core.display import HTML
def css_styling(filepath):
styles = open(filepath, 'r').read()
return HTML(styles)
css_styling('../styles/custom.css')
| lessons/11_Lesson11_Exercise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Plotly and Cufflinks: interactive visualizations
import pandas as pd
import numpy as np
# +
#pip install cufflinks
# -
# # Initial configuration
import cufflinks as cf
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
cf.go_offline()
# %matplotlib inline
# # Create a data frame
dataframe = pd.DataFrame(np.random.randn(100,4), columns=['a','b','c','d'])
dataframe.info()
dataframe
dataframe.plot()
dataframe.iplot()
dataframe.iplot(kind='scatter', x='a',y="b", mode='markers')
dataframe.iplot(kind='bar')
dataframe.sum().iplot(kind='bar')
dataframe.iplot(kind='box')
dataframe['a'].iplot(kind='hist', bins=30)
dataframe.iplot(kind='hist', bins=30)
dataframe[['a','b']].iplot(kind='spread')
dataframe.iplot(kind='bubble', x='a',y='b',size='c')
dataframe2= pd.DataFrame({'a':[1,2,3,4],'b':[30,40,20,10],'c':[12,16,18,15]})
dataframe2
dataframe2.iplot(kind='surface')
| plotly.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Deploy and Distribute TensorFlow
# In this notebook you will learn how to deploy TensorFlow models to TensorFlow Serving (TFS), using the REST API or the gRPC API, and how to train a model across multiple devices.
# ## Imports
# %matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import sklearn
import sys
import tensorflow as tf
from tensorflow import keras
import time
print("python", sys.version)
for module in mpl, np, pd, sklearn, tf, keras:
print(module.__name__, module.__version__)
assert sys.version_info >= (3, 5) # Python ≥3.5 required
assert tf.__version__ >= "2.0" # TensorFlow ≥2.0 required
# 
# ## Exercise 1 – Deploying a Model to TensorFlow Serving
# ## Save/Load a `SavedModel`
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.
X_test = X_test / 255.
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid))
MODEL_NAME = "my_fashion_mnist"
# !rm -rf {MODEL_NAME}
# +
import time
model_version = int(time.time())
model_path = os.path.join(MODEL_NAME, str(model_version))
os.makedirs(model_path)
# -
tf.saved_model.save(model, model_path)
for root, dirs, files in os.walk(MODEL_NAME):
indent = ' ' * root.count(os.sep)
print('{}{}/'.format(indent, os.path.basename(root)))
for filename in files:
print('{}{}'.format(indent + ' ', filename))
# !saved_model_cli show --dir {model_path}
# !saved_model_cli show --dir {model_path} --tag_set serve
# !saved_model_cli show --dir {model_path} --tag_set serve \
# --signature_def serving_default
# !saved_model_cli show --dir {model_path} --all
# **Warning**: as you can see, the method name is empty. This is [a bug](https://github.com/tensorflow/tensorflow/issues/25235), hopefully it will be fixed shortly. In the meantime, you must use `keras.experimental.export()` instead of `tf.saved_model.save()`:
# !rm -rf {MODEL_NAME}
model_path = keras.experimental.export(model, MODEL_NAME).decode("utf-8")
# !saved_model_cli show --dir {model_path} --all
# Let's write a few test instances to a `npy` file so we can pass them easily to our model:
X_new = X_test[:3]
np.save("my_fashion_mnist_tests.npy", X_new, allow_pickle=False)
input_name = model.input_names[0]
input_name
# And now let's use `saved_model_cli` to make predictions for the instances we just saved:
# !saved_model_cli run --dir {model_path} --tag_set serve \
# --signature_def serving_default \
# --inputs {input_name}=my_fashion_mnist_tests.npy
# ## TensorFlow Serving
# Install [Docker](https://docs.docker.com/install/) if you don't have it already. Then run:
#
# ```bash
# docker pull tensorflow/serving
#
# docker run -it --rm -p 8501:8501 \
# -v "`pwd`/my_fashion_mnist:/models/my_fashion_mnist" \
# -e MODEL_NAME=my_fashion_mnist \
# tensorflow/serving
# ```
#
# Once you are finished using it, press Ctrl-C to shut down the server.
# +
import json
input_data_json = json.dumps({
"signature_name": "serving_default",
"instances": X_new.tolist(),
})
print(input_data_json[:200] + "..." + input_data_json[-200:])
# -
# Now let's use TensorFlow Serving's REST API to make predictions:
# +
import requests
SERVER_URL = 'http://localhost:8501/v1/models/my_fashion_mnist:predict'
response = requests.post(SERVER_URL, data=input_data_json)
response.raise_for_status()
response = response.json()
# -
response.keys()
y_proba = np.array(response["predictions"])
y_proba.round(2)
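# The REST response gives per-class probabilities; to turn them into hard class
# predictions, take the argmax over the class axis. A minimal numpy sketch (the
# probabilities below are stand-ins for the server's actual response):

```python
import numpy as np

y_proba = np.array([[0.1, 0.7, 0.2],
                    [0.05, 0.05, 0.9]])  # stand-in predictions, shape (n, n_classes)
y_pred = y_proba.argmax(axis=-1)         # most probable class per instance
```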
# ### Using Serialized Examples
serialized = []
for image in X_new:
image_data = tf.train.FloatList(value=image.ravel())
features = tf.train.Features(
feature={
"image": tf.train.Feature(float_list=image_data),
}
)
example = tf.train.Example(features=features)
serialized.append(example.SerializeToString())
[data[:100]+b'...' for data in serialized]
def parse_images(serialized):
expected_features = {
"image": tf.io.FixedLenFeature([28 * 28], dtype=tf.float32)
}
examples = tf.io.parse_example(serialized, expected_features)
return tf.reshape(examples["image"], (-1, 28, 28))
parse_images(serialized)
serialized_inputs = keras.layers.Input(shape=[], dtype=tf.string)
images = keras.layers.Lambda(lambda serialized: parse_images(serialized))(serialized_inputs)
y_proba = model(images)
ser_model = keras.models.Model(inputs=[serialized_inputs], outputs=[y_proba])
SER_MODEL_NAME = "my_ser_fashion_mnist"
# !rm -rf {SER_MODEL_NAME}
ser_model_path = keras.experimental.export(ser_model, SER_MODEL_NAME).decode("utf-8")
# !saved_model_cli show --dir {ser_model_path} --all
# ```bash
# docker run -it --rm -p 8500:8500 -p 8501:8501 \
# -v "`pwd`/my_ser_fashion_mnist:/models/my_ser_fashion_mnist" \
# -e MODEL_NAME=my_ser_fashion_mnist \
# tensorflow/serving
# ```
# +
import base64
import json
ser_input_data_json = json.dumps({
"signature_name": "serving_default",
"instances": [{"b64": base64.b64encode(data).decode("utf-8")}
for data in serialized],
})
print(ser_input_data_json[:200] + "..." + ser_input_data_json[-200:])
# +
import requests
SER_SERVER_URL = 'http://localhost:8501/v1/models/my_ser_fashion_mnist:predict'
response = requests.post(SER_SERVER_URL, data=ser_input_data_json)
response.raise_for_status()
response = response.json()
# -
response.keys()
y_proba = np.array(response["predictions"])
y_proba.round(2)
# !python3 -m pip install --no-deps tensorflow-serving-api
# +
import grpc
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
channel = grpc.insecure_channel('localhost:8500')
predict_service = prediction_service_pb2_grpc.PredictionServiceStub(channel)
request = predict_pb2.PredictRequest()
request.model_spec.name = SER_MODEL_NAME
request.model_spec.signature_name = "serving_default"
input_name = ser_model.input_names[0]
request.inputs[input_name].CopyFrom(tf.compat.v1.make_tensor_proto(serialized))
result = predict_service.Predict(request, 10.0)
# -
result
output_name = ser_model.output_names[0]
output_name
shape = [dim.size for dim in result.outputs[output_name].tensor_shape.dim]
shape
y_proba = np.array(result.outputs[output_name].float_val).reshape(shape)
y_proba.round(2)
# 
# ## Exercise 2 – Distributed Training
keras.backend.clear_session()
# +
distribution = tf.distribute.MirroredStrategy()
with distribution.scope():
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd")
# -
model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid), batch_size=25)
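# Note that with `MirroredStrategy` the `batch_size` passed to `fit()` is the
# *global* batch: each replica processes `batch_size // num_replicas` examples
# per step. A common pattern is to scale a per-replica batch size by
# `distribution.num_replicas_in_sync`; the sketch below uses a hard-coded
# replica count as a stand-in for that attribute:

```python
# Per-replica batch size we want each device to process per step.
per_replica_batch_size = 25
# Stand-in for distribution.num_replicas_in_sync (e.g. a 4-GPU machine).
num_replicas = 4
# Global batch size to pass to model.fit() under MirroredStrategy.
global_batch_size = per_replica_batch_size * num_replicas
```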
| 04_deploy_and_distribute_tf2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-84a51193ebc99485", "locked": true, "schema_version": 1, "solution": false}
# # SLU06 - Dealing with Data Problems
#
# This notebook has exercises covering the following topics:
#
# - Tidy Data
# - Data Entry Problems
# - Missing Values
# + deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-f69d2e900572be7a", "locked": true, "schema_version": 1, "solution": false}
import math
import os
import pandas as pd
import numpy as np
import hashlib
import json
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-7796ff89561e1599", "locked": true, "schema_version": 1, "solution": false}
# The dataset that we'll use for these exercises comes from the World Health Organisation, and records the counts of confirmed tuberculosis cases by country, year (between 1980 and 2008), and demographic group.
# The demographic groups are broken down by sex (m, f) and age (0–14, 15–24, 25–34, 35–44, 45–54, 55–64, 65+, unknown).
#
# But as you will see, this dataset doesn't follow the Tidy Data Principle.
#
# In the following exercises, we will clean it.
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-3080c578932750e2", "locked": true, "schema_version": 1, "solution": false}
# First let's read the dataset into a pandas dataframe.
# + deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-001eae7e09f6322f", "locked": true, "schema_version": 1, "solution": false}
df = pd.read_csv(os.path.join('data', 'tb.csv'))
df.head(10)
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-5c1385558c47a3b8", "locked": true, "schema_version": 1, "solution": false}
# ### Exercise 1
#
# Create a list with the column names that refer to the demographic groups data (i.e, those that start with `new_sp_`).
#
# Call the list `demo_group_data`.
#
# Note: the values in the list should be in the same order as the corresponding columns appear in the dataframe!
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-188948d13a5b357c", "locked": false, "schema_version": 1, "solution": true}
# Create a list with the demographic groups data
# demo_group_data = ...
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-d828d2d6cb20d296", "locked": true, "points": 2, "schema_version": 1, "solution": false}
first_element_hash = 'c8ae9368386d7c77204db0840e77d432aff39941d6f42407b0164028b0bdd5c6'
entire_list_hash = '11db8cc5591bcd9757e052b44e7b1a5ccd4f2de2e5c3130d5437834ea021aa78'
assert isinstance(demo_group_data, list), "demo_group_data should be a list."
assert len(demo_group_data) == 16, "demo_group_data doesn't have the right number of elements."
error_msg = 'The first element of the list is not correct.'
assert first_element_hash == hashlib.sha256(bytes(demo_group_data[0], encoding='utf8')).hexdigest(), error_msg
error_msg = 'The list is not correct.'
assert entire_list_hash == hashlib.sha256(json.dumps(demo_group_data).encode()).hexdigest()
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-a4d41aac166a7fa2", "locked": true, "schema_version": 1, "solution": false}
# ### Exercise 2
#
# Create a new dataframe, that is the result of changing `df` so that it has 4 columns:
# - country
# - year
# - demo_group
# - cases (which is the number of tuberculosis cases)
#
# The values in the demo_group column should be exactly the ones that we currently have as column names.
#
# The new dataframe should be called `df_tidy_1`.
#
# Hint: use the answer from **Exercise 1**.
#
# Note: the columns should be in the order we specified in the list above.
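# (Not a solution to the exercise, but a reminder of the general mechanism:
# `pd.melt` turns column names into values of a new variable column. A toy
# sketch on a made-up dataframe:)

```python
import pandas as pd

toy = pd.DataFrame({"country": ["AA", "BB"],
                    "new_sp_m014": [1, 2],
                    "new_sp_f014": [3, 4]})
# Column names become values of 'demo_group'; cell values go to 'cases'.
tidy = toy.melt(id_vars=["country"], var_name="demo_group", value_name="cases")
```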
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-3e7fa8d07fe433cc", "locked": false, "schema_version": 1, "solution": true}
# Create a new version of df that doesn't have variable values as columns
# df_tidy_1 = ...
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-640d2b3b55c2cc39", "locked": true, "points": 2, "schema_version": 1, "solution": false}
assert isinstance(df_tidy_1, pd.DataFrame), "df_tidy_1 should be a pandas DataFrame."
error_msg = "The dataframe doesn't have the right columns."
assert df_tidy_1.columns.tolist() == ['country', 'year', 'demo_group', 'cases']
error_msg = "The dataframe doesn't have the right shape."
assert df_tidy_1.shape == (92304, 4)
# Checking some values of the dataframe
error_msg = "There are some incorrect values in the dataframe"
check1 = 'c8ae9368386d7c77204db0840e77d432aff39941d6f42407b0164028b0bdd5c6'
assert hashlib.sha256(df_tidy_1.loc[4, 'demo_group'].encode()).hexdigest() == check1, error_msg
error_msg = "There are some incorrect values in the dataframe"
check2 = 'a96af9414a620adb33fc8a73bb5f9e727254d6c74ddfa7587e496f912f8919c7'
assert hashlib.sha256(bytes(df_tidy_1.loc[92302, 'year'])).hexdigest() == check2, error_msg
error_msg = "There are some incorrect values in the dataframe"
check3 = '22f78469636967d0d4d49fd3ef2edbf6060ee702ad8eab9a649330bc7df6ffc5'
assert hashlib.sha256(df_tidy_1.loc[92299, 'country'].encode()).hexdigest() == check3, error_msg
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-94385579b7ec22e9", "locked": true, "schema_version": 1, "solution": false}
# ### Exercise 3
#
# Now, take a look at the values in column `demo_group`.
#
# They all start with a meaningless "new_sp_" string, so we will remove it.
#
# Create a new dataframe `df_tidy_2`, that is a copy of `df_tidy_1`, but where the `demo_group` column no longer has the "new_sp_" in each value.
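# (As a toy illustration of the mechanism, on made-up data rather than the
# graded dataframe: `Series.str.replace` with `regex=False` removes a literal
# prefix from every value.)

```python
import pandas as pd

s = pd.Series(["new_sp_m014", "new_sp_f65"])
# Remove the literal "new_sp_" prefix from each string.
stripped = s.str.replace("new_sp_", "", regex=False)
```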
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-9a0680701ef7a9e8", "locked": false, "schema_version": 1, "solution": true}
# Create a new version of df_tidy_1 that doesn't have "new_sp_" in the demo_group column
# df_tidy_2 = ...
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-b63a0ff46979395c", "locked": true, "points": 2, "schema_version": 1, "solution": false}
assert isinstance(df_tidy_2, pd.DataFrame), "df_tidy_2 should be a pandas DataFrame."
error_msg = "The dataframe doesn't have the right columns."
assert df_tidy_2.columns.tolist() == ['country', 'year', 'demo_group', 'cases']
error_msg = "The dataframe doesn't have the right shape."
assert df_tidy_2.shape == (92304, 4)
# Checking some values of the dataframe
error_msg = "There are some incorrect values in the dataframe"
check1 = 'cb27fafc5484c02b917b7dc8744373143307e0e0ad62859a2b747e6ed4396a63'
assert hashlib.sha256(df_tidy_2.loc[4, 'demo_group'].encode()).hexdigest() == check1, error_msg
error_msg = "There are some incorrect values in the dataframe"
check2 = 'a96af9414a620adb33fc8a73bb5f9e727254d6c74ddfa7587e496f912f8919c7'
assert hashlib.sha256(bytes(df_tidy_2.loc[92302, 'year'])).hexdigest() == check2, error_msg
error_msg = "There are some incorrect values in the dataframe"
check3 = '22f78469636967d0d4d49fd3ef2edbf6060ee702ad8eab9a649330bc7df6ffc5'
assert hashlib.sha256(df_tidy_2.loc[92299, 'country'].encode()).hexdigest() == check3, error_msg
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-dc32120e82748b65", "locked": true, "schema_version": 1, "solution": false}
# ### Exercise 4
#
# As you may have noticed, our dataset still has a problem. The `demo_group` column has data that in fact represents two variables: `gender` and `age`.
#
# So our end goal will be to replace the `demo_group` column with the two new columns.
#
# On this exercise, create a new dataframe `df_tidy_3`, that is a copy of `df_tidy_2`, but has a new column `gender`, which has the first letter of column `demo_group` (m/f).
#
# Hint: you may need to search for this one online, as you'll need a string method that we didn't cover in the Learning notebook.
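# (A toy sketch of one possible mechanism, on made-up data: the `.str` accessor
# supports vectorized indexing into each string.)

```python
import pandas as pd

s = pd.Series(["m014", "f65"])
first_letters = s.str[0]  # first character of every value
```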
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-c9c176a8e3461c10", "locked": false, "schema_version": 1, "solution": true}
# Create a new version of df_tidy_2 that has a new column gender
# df_tidy_3 = ...
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-c06912eb8d895afc", "locked": true, "points": 2, "schema_version": 1, "solution": false}
assert isinstance(df_tidy_3, pd.DataFrame), "df_tidy_3 should be a pandas DataFrame."
error_msg = "The dataframe doesn't have the right columns."
assert df_tidy_3.columns.tolist() == ['country', 'year', 'demo_group', 'cases', 'gender']
error_msg = "The dataframe doesn't have the right shape."
assert df_tidy_3.shape == (92304, 5)
# Checking some values of the dataframe
error_msg = "There are some incorrect values in the dataframe"
check1 = 'cb27fafc5484c02b917b7dc8744373143307e0e0ad62859a2b747e6ed4396a63'
assert hashlib.sha256(df_tidy_3.loc[4, 'demo_group'].encode()).hexdigest() == check1, error_msg
error_msg = "There are some incorrect values in the dataframe"
check2 = 'a96af9414a620adb33fc8a73bb5f9e727254d6c74ddfa7587e496f912f8919c7'
assert hashlib.sha256(bytes(df_tidy_3.loc[92302, 'year'])).hexdigest() == check2, error_msg
error_msg = "There are some incorrect values in the dataframe"
check3 = '62c66a7a5dd70c3146618063c344e531e6d4b59e379808443ce962b3abd63c5a'
assert hashlib.sha256(df_tidy_3.loc[3, 'gender'].encode()).hexdigest() == check3, error_msg
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-8084477a20cda1ba", "locked": true, "schema_version": 1, "solution": false}
# ### Exercise 5
#
# Now we want to create the column `age` and get rid of column `demo_group`.
#
# On this exercise, create a new dataframe `df_tidy_4`, that is a copy of `df_tidy_3`, but has a new column `age`, which is the same as column `demo_group`, except that it has the first letter removed.
#
# We also want to get rid of column `demo_group`, so make sure the new dataframe doesn't have it!
#
# Hint: you may need to search for this one online, as you'll need a string method that we didn't cover in the Learning notebook.
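# (Again as a toy sketch on made-up data: `.str` slicing can also drop a leading
# character.)

```python
import pandas as pd

s = pd.Series(["m014", "f65"])
rest = s.str[1:]  # everything after the first character
```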
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-0a1b561a9c60b159", "locked": false, "schema_version": 1, "solution": true}
# Create a new version of df_tidy_3 that has a new column age, and doesn't have column demo_group
# df_tidy_4 = ...
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-52d4f44d3697b15a", "locked": true, "points": 2, "schema_version": 1, "solution": false}
assert isinstance(df_tidy_4, pd.DataFrame), "df_tidy_4 should be a pandas DataFrame."
error_msg = "The dataframe doesn't have the right columns."
assert df_tidy_4.columns.tolist() == ['country', 'year', 'cases', 'gender', 'age']
error_msg = "The dataframe doesn't have the right shape."
assert df_tidy_4.shape == (92304, 5)
# Checking some values of the dataframe
error_msg = "There are some incorrect values in the dataframe"
check1 = '72abdfcd75400cfee271e737b4f112e5f671d3691d215ed616db2ad8d5a7778d'
assert hashlib.sha256(df_tidy_4.loc[4, 'age'].encode()).hexdigest() == check1, error_msg
error_msg = "There are some incorrect values in the dataframe"
check2 = 'a96af9414a620adb33fc8a73bb5f9e727254d6c74ddfa7587e496f912f8919c7'
assert hashlib.sha256(bytes(df_tidy_4.loc[92302, 'year'])).hexdigest() == check2, error_msg
error_msg = "There are some incorrect values in the dataframe"
check3 = '62c66a7a5dd70c3146618063c344e531e6d4b59e379808443ce962b3abd63c5a'
assert hashlib.sha256(df_tidy_4.loc[3, 'gender'].encode()).hexdigest() == check3, error_msg
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-f8bbd3fc63242f26", "locked": true, "schema_version": 1, "solution": false}
# ### Exercise 6
#
# If you take a look at the age values, they are quite hard to understand...
# Let's make those easier to grasp!
#
# Create a new dataframe `df_tidy_5`, where the `age` column has more understandable values.
#
# We want that each age interval is separated by a `-`. For instance, `014` should be replaced with `0-14`.
# We also want the last age group to be represented as `65+`. The unknown values `u` can be left unchanged.
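# (One possible mechanism, sketched on made-up data: `Series.replace` with a
# dict maps old values to new ones. The mapping below is a hypothetical
# three-entry stand-in; the real exercise needs one entry per age group.)

```python
import pandas as pd

s = pd.Series(["014", "1524", "65"])
mapping = {"014": "0-14", "1524": "15-24", "65": "65+"}  # hypothetical subset
renamed = s.replace(mapping)
```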
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-0a5e28ea0faa0992", "locked": false, "schema_version": 1, "solution": true}
# Create a new version of df_tidy_4 where the age values are more understandable
# df_tidy_5 = ...
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-794006afade645c7", "locked": true, "points": 2, "schema_version": 1, "solution": false}
assert isinstance(df_tidy_5, pd.DataFrame), "df_tidy_5 should be a pandas DataFrame."
error_msg = "The dataframe doesn't have the right columns."
assert df_tidy_5.columns.tolist() == ['country', 'year', 'cases', 'gender', 'age']
error_msg = "The dataframe doesn't have the right shape."
assert df_tidy_5.shape == (92304, 5)
# Checking some values of the dataframe
error_msg = "There are some incorrect values in the dataframe"
check1 = '05903a89b8bb057276232927d94173852d2173ba54aa7ee0b04d4a43d35acd5d'
assert hashlib.sha256(df_tidy_5.loc[2, 'age'].encode()).hexdigest() == check1, error_msg
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-2cfdf1840adb37a2", "locked": true, "schema_version": 1, "solution": false}
# ### Exercise 7
#
# Now that our dataset follows the Tidy Data principles, let's check for duplicates.
#
# Assign to variable `duplicates_count` the number of duplicates in `df_tidy_5`. Make sure the value you get is an integer.
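# (A toy sketch of the usual counting pattern, on made-up data:
# `duplicated()` flags every repeat of an earlier row, and `sum()` counts the
# flags.)

```python
import pandas as pd

toy = pd.DataFrame({"a": [1, 1, 2], "b": ["x", "x", "y"]})
n_dupes = int(toy.duplicated().sum())  # rows 0 and 1 are identical -> 1 repeat
```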
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-77ab90e009cd292d", "locked": false, "schema_version": 1, "solution": true}
# Count the number of duplicates in df_tidy_5
# duplicates_count = ...
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-b559fc483c9c8be1", "locked": true, "points": 2, "schema_version": 1, "solution": false}
duplicates_count_hash = 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'
error_msg = "duplicates_count must be an integer (explicitly convert it to 'int' if it is type 'numpy.int64')"
assert isinstance(duplicates_count, int), error_msg
error_msg = "duplicates_count doesn't have the right value."
assert duplicates_count_hash == hashlib.sha256(bytes(duplicates_count)).hexdigest(), error_msg
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-ec247ec84b19a9f6", "locked": true, "schema_version": 1, "solution": false}
# ### Exercise 8
#
# Now let's count the number of rows with missing values in our `df_tidy_5` dataframe.
#
# Assign that value to variable `null_count` and make sure it is an integer.
#
# Also take a moment to think about the number of null values we had in this dataset, in comparison to the dataset size.
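# (A toy sketch of counting rows with at least one missing value, on made-up
# data: `any(axis=1)` marks such rows, and `sum()` counts them.)

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"a": [1, np.nan, 3], "b": [np.nan, np.nan, 1]})
rows_with_nulls = int(toy.isnull().any(axis=1).sum())
```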
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-526e2589fa58add5", "locked": false, "schema_version": 1, "solution": true}
# Count the number of null values in df_tidy_5
# null_count = ...
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-3da7144cdc77c7b9", "locked": true, "points": 2, "schema_version": 1, "solution": false}
null_count_hash = 'e41482b02884d9b42b5ea9abb63be3658c28266e8f62f31db33fc9efa53cf01d'
error_msg = "null_count must be an integer (explicitly convert it to 'int' if it is type 'numpy.int64')"
assert isinstance(null_count, int), error_msg
error_msg = "null_count doesn't have the right value."
assert null_count_hash == hashlib.sha256(bytes(null_count)).hexdigest(), error_msg
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-f44322f1cee9fb4b", "locked": true, "schema_version": 1, "solution": false}
# ### Exercise 9
#
# Now let's use a common technique to impute missing values: replacing them with the mean.
#
# Create a new dataframe `df_tidy_6`, which is a copy of `df_tidy_5`, except that the missing values in the `cases` column are replaced with the mean of the column.
#
# Also, take a moment to think if this is a good strategy in this case.
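# (The usual fill-with-the-mean pattern, sketched on a made-up series:
# `fillna` with the column mean replaces every missing value.)

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
filled = s.fillna(s.mean())  # mean of the observed values is 2.0
```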
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-91a85cda9755c1f5", "locked": false, "schema_version": 1, "solution": true}
# Create a new version of df_tidy_5 where the cases missing values are replaced with the mean
# df_tidy_6 = ...
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-5f3a449dd2f19c7d", "locked": true, "points": 2, "schema_version": 1, "solution": false}
assert isinstance(df_tidy_6, pd.DataFrame), "df_tidy_6 should be a pandas DataFrame."
error_msg = "The dataframe doesn't have the right columns."
assert df_tidy_6.columns.tolist() == ['country', 'year', 'cases', 'gender', 'age']
error_msg = "The dataframe doesn't have the right shape."
assert df_tidy_6.shape == (92304, 5)
# Checking some values of the dataframe
error_msg = "There are some incorrect values in the dataframe"
assert math.isclose(df_tidy_6.loc[4, 'cases'], 636.76, rel_tol=1e-03)
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-d4e27258fc7f53fc", "locked": true, "schema_version": 1, "solution": false}
# ### Exercise 10
#
# Did you think that replacing the missing values with the mean was a terrible idea in this case? Because it is!
#
# Let's see why.
#
# Create a new dataframe `top_cases`, by:
# - sorting `df_tidy_5` by the cases column in descending order
# - taking the first row per country, using the drop_duplicates function
# - taking the first 10 rows of that dataframe
#
# Then take a look at the countries in `top_cases`. Can you see a correlation between the number of cases and the population size of those countries?
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-21d03667d23b1548", "locked": false, "schema_version": 1, "solution": true}
# Create a dataframe with the rows that correspond to the 10 highest number of cases in df_tidy_5
# top_cases = ...
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-042e21c24cf691b9", "locked": true, "points": 2, "schema_version": 1, "solution": false}
assert isinstance(top_cases, pd.DataFrame), "top_cases should be a pandas DataFrame."
error_msg = "The dataframe doesn't have the right columns."
assert top_cases.columns.tolist() == ['country', 'year', 'cases', 'gender', 'age']
error_msg = "The dataframe doesn't have the right shape."
assert top_cases.shape == (10, 5)
error_msg = "The dataframe is not correct."
check1 = 'fed1d872f6d540f4118582ec694270274e987b12f5dfe2057dddf1e12df2761a'
assert hashlib.sha256(top_cases.iloc[0]['country'].encode()).hexdigest() == check1, error_msg
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-28a762e168359bb3", "locked": true, "schema_version": 1, "solution": false}
# ### Exercise 11 - ungraded
#
# Take some time to discuss with a colleague sitting next to you what would be a better way to impute missing values in this dataset.
| stats-279/SLU06 - Dealing with Data Problems/Exercise notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab_type="text" id="V8-yl-s-WKMG"
# # Object Detection API Demo
#
# <table align="left"><td>
#   <a target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb">
#     <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab
#   </a>
# </td><td>
#   <a target="_blank" href="https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb">
#     <img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td></table>
# + [markdown] colab_type="text" id="3cIrseUv6WKz"
# Welcome to the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image.
# + [markdown] colab_type="text" id="VrJaG0cYN9yh"
# > **Important**: This tutorial is to help you through the first step towards using [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to build models. If you just need an off-the-shelf model that does the job, see the [TFHub object detection example](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb).
# + [markdown] colab_type="text" id="kFSqkTCdWKMI"
# # Setup
# + [markdown] colab_type="text" id="awjrpqy-6MaQ"
# Important: If you're running on a local machine, be sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This notebook includes only what's necessary to run in Colab.
# + [markdown] colab_type="text" id="p3UGXxUii5Ym"
# ### Install
# + colab={} colab_type="code" id="hGL97-GXjSUw"
# + [markdown] colab_type="text" id="n_ap_s9ajTHH"
# Make sure you have `pycocotools` installed
# + colab={} colab_type="code" id="Bg8ZyA47i3pY"
# !pip install pycocotools
# + [markdown] colab_type="text" id="-vsOL3QR6kqs"
# Get `tensorflow/models` or `cd` to the parent directory of the repository.
# + colab={} colab_type="code" id="ykA0c-om51s1"
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
# !git clone --depth 1 https://github.com/tensorflow/models
# + [markdown] colab_type="text" id="O219m6yWAj9l"
# Compile protobufs and install the object_detection package
# + colab={} colab_type="code" id="PY41vdYYNlXc" language="bash"
# cd models/research/
# protoc object_detection/protos/*.proto --python_out=.
# + colab={} colab_type="code" id="s62yJyQUcYbp" language="bash"
# cd models/research
# pip install .
# + [markdown] colab_type="text" id="LBdjK2G5ywuc"
# ### Imports
# + colab={} colab_type="code" id="hV4P5gyTWKMI"
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
# + [markdown] colab_type="text" id="r5FNuiRPWKMN"
# Import the object detection module.
# + colab={} colab_type="code" id="4-IMl4b6BdGO"
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
# + [markdown] colab_type="text" id="RYPCiag2iz_q"
# Patches:
# + colab={} colab_type="code" id="mF-YlMl8c_bM"
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1
# Patch the location of gfile
tf.gfile = tf.io.gfile
# + [markdown] colab_type="text" id="cfn_tRFOWKMO"
# # Model preparation
# + [markdown] colab_type="text" id="X_sEBLpVWKMQ"
# ## Variables
#
# Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing the path.
#
# By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
# + [markdown] colab_type="text" id="7ai8pLZZWKMS"
# ## Loader
# + colab={} colab_type="code" id="zm8xp-0eoItE"
def load_model(model_name):
    base_url = 'http://download.tensorflow.org/models/object_detection/'
    model_file = model_name + '.tar.gz'
    model_dir = tf.keras.utils.get_file(
        fname=model_name,
        origin=base_url + model_file,
        untar=True)
    model_dir = pathlib.Path(model_dir)/"saved_model"
    model = tf.saved_model.load(str(model_dir))
    model = model.signatures['serving_default']
    return model
# + [markdown] colab_type="text" id="_1MVVTcLWKMW"
# ## Loading label map
# Label maps map indices to category names, so that when our convolutional network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
# + colab={} colab_type="code" id="hDbpHkiWWKMX"
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = 'models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
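# The resulting `category_index` is just a dict keyed by class id, where each
# value carries the id and display name. A hand-built two-entry stand-in (the
# entries below are illustrative, not the full COCO label map):

```python
# Hypothetical stand-in for the category index loaded above.
category_index = {
    1: {"id": 1, "name": "person"},
    5: {"id": 5, "name": "airplane"},
}
label = category_index[5]["name"]  # look up the display name for class id 5
```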
# + [markdown] colab_type="text" id="oVU3U_J6IJVb"
# For the sake of simplicity we will test on 2 images:
# + colab={} colab_type="code" id="jG-zn5ykWKMd"
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
TEST_IMAGE_PATHS
# + [markdown] colab_type="text" id="H0_1AGhrWKMc"
# # Detection
# + [markdown] colab_type="text" id="f7aOtOlebK7h"
# Load an object detection model:
# + colab={} colab_type="code" id="1XNT0wxybKR6"
model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
detection_model = load_model(model_name)
# + [markdown] colab_type="text" id="yN1AYfAEJIGp"
# Check the model's input signature; it expects a batch of 3-channel images of type uint8:
# + colab={} colab_type="code" id="CK4cnry6wsHY"
print(detection_model.inputs)
# + [markdown] colab_type="text" id="Q8u3BjpMJXZF"
# And returns several outputs:
# + colab={} colab_type="code" id="oLSZpfaYwuSk"
detection_model.output_dtypes
# + colab={} colab_type="code" id="FZyKUJeuxvpT"
detection_model.output_shapes
# + [markdown] colab_type="text" id="JP5qZ7sXJpwG"
# Add a wrapper function to call the model, and cleanup the outputs:
# + colab={} colab_type="code" id="ajmR_exWyN76"
def run_inference_for_single_image(model, image):
image = np.asarray(image)
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis,...]
# Run inference
output_dict = model(input_tensor)
    # All outputs are batch tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(output_dict.pop('num_detections'))
output_dict = {key:value[0, :num_detections].numpy()
for key,value in output_dict.items()}
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
# Handle models with masks:
if 'detection_masks' in output_dict:
        # Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
output_dict['detection_masks'], output_dict['detection_boxes'],
image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
return output_dict
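# -
# The batch-trimming step above (take index `[0]`, then slice to
# `num_detections`) can be checked on dummy arrays; the shapes and values
# below are illustrative, not real model output:

```python
import numpy as np

# fake detector output: batch of 1, padded to 100 detection slots
output_dict = {
    "detection_scores": np.linspace(1.0, 0.0, 100)[np.newaxis, :],
    "num_detections": np.array([3.0]),
}
# pop the detection count, then keep only the first num_detections entries
num_detections = int(output_dict.pop("num_detections")[0])
trimmed = {key: value[0, :num_detections] for key, value in output_dict.items()}
print(trimmed["detection_scores"].shape)  # (3,)
```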
# + [markdown] colab_type="text" id="z1wq0LVyMRR_"
# Run it on each test image and show the results:
# + colab={} colab_type="code" id="DWh_1zz6aqxs"
def show_inference(model, image_path):
    # The array-based representation of the image will be used later to
    # prepare the result image with boxes and labels on it.
image_np = np.array(Image.open(image_path))
# Actual detection.
output_dict = run_inference_for_single_image(model, image_np)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks_reframed', None),
use_normalized_coordinates=True,
line_thickness=8)
display(Image.fromarray(image_np))
# + colab={} colab_type="code" id="3a5wMHN8WKMh"
for image_path in TEST_IMAGE_PATHS:
show_inference(detection_model, image_path)
# + [markdown] colab_type="text" id="DsspMPX3Cssg"
# ## Instance Segmentation
# + colab={} colab_type="code" id="CzkVv_n2MxKC"
model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28"
masking_model = load_model(model_name)
# + [markdown] colab_type="text" id="0S7aZi8ZOhVV"
# The instance segmentation model includes a `detection_masks` output:
# + colab={} colab_type="code" id="vQ2Sj2VIOZLA"
masking_model.output_shapes
# + colab={} colab_type="code" id="AS57rZlnNL7W"
for image_path in TEST_IMAGE_PATHS:
show_inference(masking_model, image_path)
| research/object_detection/object_detection_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/shecoderfinally/Text-Summarizer-/blob/main/summarization_of_text.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="pk4CnnCC0JgL" outputId="19691a67-b726-48dc-c399-711dc222be0f"
# Jovian Commit Essentials
# Please retain and execute this cell without modifying the contents for `jovian.commit` to work
# !pip install jovian --upgrade -q
import jovian
jovian.set_project('summarization-of-text')
jovian.set_colab_id('1xHZ4bjhsHsNzou7TIi35MW5pg6kHFCun')
# + [markdown] id="t7oL-ii50JgS"
# # summarization-of-text
#
# Use the "Run" button to execute the code.
# + id="Lv1xgigE0JgV"
# !pip install jovian --upgrade --quiet
# + id="1O14McUY0JgW"
import jovian
# + colab={"base_uri": "https://localhost:8080/", "height": 122} id="0GA8JbYy0JgX" outputId="1601bf67-e7d3-4413-a40b-079b88197bbd"
# Execute this to save new versions of the notebook
jovian.commit(project="summarization-of-text")
# + [markdown] id="YJ5QR1Bv0JgY"
# ## 0.Installing Transformers and Importing Dependencies
# + colab={"base_uri": "https://localhost:8080/"} id="-XyPBlq70JgZ" outputId="cecece2a-bd2e-4a76-b273-749ca34c7b85"
# !pip install transformers
# + id="C3JB7ulb0JgZ"
from transformers import pipeline
# + [markdown] id="RFPfOSgl0Jga"
# ## 1. Load Summarization Pipeline
#
# + colab={"base_uri": "https://localhost:8080/", "height": 242} id="0gKi1NWM0Jgb" outputId="11ebfa64-5589-4c0c-c6cf-a83357d36d31"
summarizer=pipeline('summarization')
# + [markdown] id="lMXY8KWN0Jgc"
# ## 2. Summarize Text
# + id="eIbf9BgO1epq"
article="""
The word comedy seems to be connected by derivation with the Greek verb meaning “to revel,” and comedy arose out of the revels associated with the rites of Dionysus, a god of vegetation. The origins of comedy are thus bound up with vegetation ritual. Aristotle, in his Poetics, states that comedy originated in phallic songs and that, like tragedy, it began in improvisation. Though tragedy evolved by stages that can be traced, the progress of comedy passed unnoticed because it was not taken seriously. When tragedy and comedy arose, poets wrote one or the other, according to their natural bent. Those of the graver sort, who might previously have been inclined to celebrate the actions of the great in epic poetry, turned to tragedy; poets of a lower type, who had set forth the doings of the ignoble in invectives, turned to comedy. The distinction is basic to the Aristotelian differentiation between tragedy and comedy: tragedy imitates men who are better than the average and comedy men who are worse.
For centuries, efforts at defining comedy were to be along the lines set down by Aristotle: the view that tragedy deals with personages of high estate, and comedy deals with lowly types; that tragedy treats of matters of great public import, while comedy is concerned with the private affairs of mundane life; and that the characters and events of tragedy are historic and so, in some sense, true, while the humbler materials of comedy are but feigned. Implicit, too, in Aristotle is the distinction in styles deemed appropriate to the treatment of tragic and comic story. As long as there was at least a theoretical separation of comic and tragic styles, either genre could, on occasion, appropriate the stylistic manner of the other to a striking effect, which was never possible after the crossing of stylistic lines became commonplace.
"""
# + id="YCWuJOCs3UcT"
summary=summarizer(article,max_length=130,min_length=30,do_sample=False)
# + colab={"base_uri": "https://localhost:8080/", "height": 88} id="QFlk_9uF3lv6" outputId="6cb40da8-6186-4796-825b-5845c303faee"
summary[0]['summary_text']
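# -
# The default summarization pipeline model truncates long inputs (roughly
# 1024 tokens for the underlying BART-style model), so very long articles are
# often split into chunks and summarized piecewise. A minimal, model-free
# sketch of the chunking step (`chunk_text` and `max_words` are illustrative
# names, and words only approximate tokens):

```python
def chunk_text(text, max_words=400):
    # split on whitespace and group into word-bounded chunks so each
    # chunk stays under the model's input limit (approximated in words)
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

chunks = chunk_text("word " * 1000)
print(len(chunks))  # 3
```

# Each chunk can then be passed to `summarizer(...)` separately and the
# partial summaries concatenated.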
# + id="CXcxVw5w_QK8"
| Text Summarizer/summarization_of_text.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Dimensionality reduction
# ## Data
#
# Download and parse the data:
# !cd data && ./download.sh
DATA_FILE="data/Wikipedia_2000_dump.xml"
# +
import xml.etree.ElementTree as ET
import pandas as pd
def xml2df(xml_data):
root = ET.XML(xml_data)
all_records = []
for child in root:
record = {}
for name, value in child.attrib.items():
record[name] = value
record["content"] = child.text
all_records.append(record)
return pd.DataFrame(all_records)
# -
data = xml2df(open(DATA_FILE).read())["content"].tolist()
data[0][:500]
# ## Vectorization
#
# Preprocess the data, then vectorize it with a simple bag-of-words (BOW) model:
# +
import string
from nltk import sent_tokenize, wordpunct_tokenize
from sklearn.base import BaseEstimator, TransformerMixin
class Preprocessor(BaseEstimator, TransformerMixin):
def __init__(self):
self._punct = set(string.punctuation + "«»№")
def fit(self, X, y=None):
return self
def _filter_gen(self, text):
text = "".join(filter(lambda c: c != '́', text))
for sent in sent_tokenize(text):
for word in wordpunct_tokenize(sent):
if word.isalpha():
yield word.lower()
def _tokenize(self, text):
return list(self._filter_gen(text))
def transform(self, X):
return list(" ".join(self._tokenize(text)) for text in X)
# -
preprocessor = Preprocessor()
data[0][:500]
preprocessor.transform([data[0][:500]])[0]
# +
from sklearn.feature_extraction.text import CountVectorizer as BagOfWords
from sklearn.pipeline import make_pipeline
model = make_pipeline(
Preprocessor(),
BagOfWords()
)
# -
X = model.fit_transform(data)
X.shape
X
# ## Reduction
#
# Calculate the effective rank (erank), then reduce dimensionality with LSA (truncated SVD):
# +
import math
from scipy.sparse import linalg
from scipy import stats
def erank(M):
u = linalg.svds(M.astype(float), k=min(M.shape) - 1, return_singular_vectors=False)
return math.exp(stats.entropy(u / sum(u)))
# -
e = erank(X)
m = int(round(e))
m
# +
from sklearn.decomposition import TruncatedSVD
X_reduced = TruncatedSVD(n_components=m, algorithm="arpack").fit_transform(X.astype(float))
X_reduced.shape
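# -
# As a sanity check of the effective-rank idea: for a matrix whose nonzero
# singular values are all equal, the erank equals the true rank. A dense
# analogue of the sparse `erank` above (full SVD instead of `svds`):

```python
import math
import numpy as np

def erank_dense(M):
    # exp of the Shannon entropy of the normalized singular-value distribution
    s = np.linalg.svd(M, compute_uv=False)
    p = s[s > 0] / s.sum()
    return math.exp(-(p * np.log(p)).sum())

M = np.diag([1.0, 1.0, 0.0, 0.0])  # rank-2 with equal singular values
print(round(erank_dense(M), 6))  # 2.0
```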
| task01/main.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# %matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
sns.set(font_scale=1.5)
sns.set_style("whitegrid", {'grid.linestyle':'--'})
# -
# use the auto data as example
auto = pd.read_csv("https://raw.githubusercontent.com/changyaochen/MECE4520/master/lectures/lecture_3/auto_mpg.csv")
auto.dropna(inplace=True) # for the sake of simplicity
auto.head()
# +
numerical_features = [
"displacement",
"horsepower",
"weight",
"acceleration",
]
X_train, X_validation, y_train, y_validation = train_test_split(
auto[numerical_features],
auto["mpg"],
test_size=0.2,
random_state=42,
)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

all_models = []
rows = []
for alpha in np.logspace(start=-3, stop=.2, num=50):
    # Ridge(normalize=True) was removed in scikit-learn 1.2; standardize
    # the features explicitly in a pipeline instead.
    model = make_pipeline(StandardScaler(), Ridge(alpha=alpha))
    model.fit(X=X_train, y=y_train)
    all_models.append(model)
    y_pred_train = model.predict(X_train)
    y_pred_validation = model.predict(X_validation)
    rows.append({
        "alpha": alpha,
        "train": np.sqrt(mean_squared_error(y_true=y_train, y_pred=y_pred_train)),
        "validation": np.sqrt(mean_squared_error(y_true=y_validation, y_pred=y_pred_validation)),
    })
# DataFrame.append was removed in pandas 2.0; build the frame in one pass.
rmse = pd.DataFrame(rows)
rmse.tail()
# +
plt.figure()
sns.lineplot(x="alpha", y="train", data=rmse, label="training")
sns.lineplot(x="alpha", y="validation", data=rmse, label="validation")
plt.gca().set_ylabel("RMSE")
plt.gca().set_xscale("log")
plt.tight_layout()
# -
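# Once the sweep finishes, the best regularization strength can be read off
# the validation curve. A minimal sketch with dummy values standing in for
# the `rmse` frame built above:

```python
import pandas as pd

# illustrative sweep results: alpha vs. validation RMSE
rmse = pd.DataFrame({
    "alpha": [0.001, 0.01, 0.1, 1.0],
    "validation": [4.2, 4.0, 3.9, 4.5],
})
# pick the row with the lowest validation error
best = rmse.loc[rmse["validation"].idxmin()]
print(best["alpha"])  # 0.1
```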
| lectures/lecture_6/hyperparameter_tuning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
# %run support.py
import os
plt.style.use('~/Shared JupyterHub Notebooks/interactive.mplstyle')
#datafolder = "./data/Measurements/Cooldown20200826"
import matplotlib.gridspec as gridspec
from scipy.io import loadmat
from scipy.optimize import curve_fit
from scipy.interpolate import interp2d
from pathlib import Path
from datetime import datetime, timedelta
# +
fig, ax = plt.subplots(1,2,figsize=(8,4))
# Before fixing dangling wires
gT = 0.302*2/Rk
Ec = 1.51986/2
datafolder = "/mnt/Measurement_Data/phys-dots-26/Cooldown38"
filename = "d304_time.h5"
with h5py.File(f"{datafolder}/{filename}", 'r') as f:
t = np.array(f['x_array'])
g = np.array(f['GcbtlowEc'])*2/Rk
print(t.shape, g.shape)
b = np.array(f['bzIPSB'])
logs = json.loads(f['metadata'].attrs['sweep_logs'])
time_started = datetime.strptime(logs['time_started'], '%d/%m/%Y %H:%M:%S')
time_completed = datetime.strptime(logs['time_completed'], '%d/%m/%Y %H:%M:%S')
last_index = np.where(np.logical_not(np.isnan(g)))[0][-1]
t = np.linspace(time_started.timestamp(), time_completed.timestamp(), last_index+1)
t = t[0:t.shape[0]] - t[0]
g = g[0:t.shape[0]]
b = b[0:t.shape[0]]
g = 1/(1/g-2*R_wire)
g = MakeSmoothie(g)
Tcbt = Tcbt_Cu(g/gT, Ec=Ec)
demag_filter = b>0.060
warmup_filter = b<=0.06001
t0 = t[warmup_filter][0]
t -= t0
ax[0].plot(b[demag_filter], Tcbt[demag_filter], color=colors[0], label="Before fixing dangling wires")
ax[1].plot(t[warmup_filter]/3600, Tcbt[warmup_filter], color=colors[0], label="Before fixing dangling wires")
# After fixing dangling wires
datafolder = "/mnt/Measurement_Data/phys-dots-26/Cooldown20200826"
with h5py.File(f"{datafolder}/d524_time.h5", "r") as f:
t = np.array(f['x_array'])
gCu = np.array(f['gCu'])
b = np.array(f['bdemagIPSB'])
gCu = 1/(1/gCu - 2*R_wire)
gCu = MakeSmoothie(gCu)
Tcbt = Tcbt_Cu(gCu/gT_Cu, Ec=Ec_Cu)
demag_filter = np.logical_and(b>0.068, Tcbt<20)
warmup_filter = b<=0.06001
t0 = t[warmup_filter][0]
t -= t0
ax[0].plot(b[demag_filter], Tcbt[demag_filter], color=colors[1], label="After fixing dangling wires")
ax[1].plot(t[warmup_filter]/3600, Tcbt[warmup_filter], color=colors[1], label="After fixing dangling wires")
for fn in [526, 527]:
with h5py.File(f"{datafolder}/d{fn}_time_c12.h5", "r") as f:
t = np.array(f['x_array'])
v = np.array(f['x_array'])
gCu = np.array(f['gCu'])
t -= t0
gCu = 1/(1/gCu - 2*R_wire)
gCu = np.mean(gCu[int(gCu.shape[0]/2)-3:int(gCu.shape[0]/2)+3,:], axis=0)
gCu = MakeSmoothie(gCu)
Tcbt = Tcbt_Cu(gCu/gT_Cu, Ec=Ec_Cu)
ax[1].plot(t/3600, Tcbt, color=colors[1])
for i in range(2):
ax[i].set_ylim(9e-2, 2e1)
ax[i].grid()
ax[i].set_yscale('log')
ax[i].set_yticks([0.1, 0.3, 1, 3, 10, 30])
ax[i].set_yticklabels([0.1, 0.3, 1, 3, 10, 30])
ax[0].set_ylabel("Temperature (mK)")
ax[0].legend()
ax[0].set_xlabel("Magnetic Field (T)")
ax[1].set_xlabel("Time (hr)")
ax[1].text(0.3, 20, r"$\mathrm{B_f}$=60 mT", fontsize=12)
ax[1].set_yticklabels([])
ax[0].set_xlim(9, 0.0)
ax[1].set_xlim(0, 25)
fig.savefig('FixDanglingWires.pdf')
# -
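# The repeated `1/(1/g - 2*R_wire)` step above removes the series resistance
# of the two measurement leads from the measured conductance. A standalone
# check with dummy values (`R_wire` and `g_meas` here are illustrative, not
# the values used in the measurements):

```python
R_wire = 100.0   # ohms per lead (illustrative)
g_meas = 1e-4    # measured conductance in siemens, leads included
# total resistance is device resistance plus two leads in series,
# so invert, subtract the leads, and invert back
g_device = 1 / (1 / g_meas - 2 * R_wire)
print(abs(g_device - 1 / 9800.0) < 1e-12)  # True
```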
# # High field thermometry is possible
# +
datafolder = "/mnt/Measurement_Data/phys-dots-26/Cooldown20200826"
gT_Cu = 21.683533093853708e-6
precool_wavenums = [634, 635]
fig, ax = plt.subplots(1,2,figsize=(8,3))
for i, wn in enumerate(precool_wavenums):
    filename = f"d{wn}_time.h5"
    with h5py.File(f"{datafolder}/{filename}", 'r') as f:
t = np.array(f['x_array'])
#dt = np.array([datetime.fromtimestamp(t[i]) for i in range(len(t))])
gCu = np.array(f['gCu'])
b = np.array(f['bdemagIPSB'])
#print(f['metadata'].attrs['sweep_logs'])
gCu = 1/(1/gCu - 2*R_wire)
gCu = MakeSmoothie(gCu, ws=150)
if i==0:
t0=t[0]
ti = t[b<9][-1]
if i==len(precool_wavenums)-1:
tf = t[-1]
mag_filter = b<9
precool_filter = b>=8.99
ax[0].plot(b[mag_filter], Tcbt_Cu(gCu[mag_filter]/gT_Cu, Ec=Ec_Cu*1e-3)*1e3, color=colors[0])
ax[1].plot((t[precool_filter]-ti)/3600/24, Tcbt_Cu(gCu[precool_filter]/gT_Cu, Ec=Ec_Cu*1e-3)*1e3, color=colors[0])
#Tmc = GetBFData(6, t0, t[-1])
#Tmc[:,1] = MakeSmoothie(Tmc[:,1], ws=50)
#t_mc = [datetime.fromtimestamp(Tmc[i,0]) for i in range(len(Tmc))]
#ax.plot(t_mc, Tmc[:,1]*1e3, color=colors[1], label=r'$\mathrm{T_{mc}}$')
for i in range(2):
ax[i].set_ylim(5,30)
ax[i].grid()
ax[i].set_ylim(5,25)
ax[i].set_yticks(np.linspace(5,25,5))
#ax.set_yscale('log')
ax[0].set_xlabel("Magnetic Field (T)")
ax[0].set_ylabel("CBT Temperature (mK)")
ax[0].set_xticks(np.arange(0,10,1))
ax[0].set_xlim(0,9)
ax[1].set_xlim(0,(tf-ti)/3600/24)
ax[1].set_yticklabels([])
ax[1].set_xlabel("Time (days)")
ax[1].text(1.02, 22, "B=9 T")
#ax[1].tick_params(axis='x', rotation=45)
fig.savefig("HighFieldThermometry.pdf")
# -
| exclude_EMP Report.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Availability Calculator
#
# This tool estimates the average device availability over a period of time.
#
# Double-click into the cells below, where it says `'here'`, and adjust the values as necessary.
#
# After setting configuration values, select `Kernel` > `Restart & Run All` from the menu.
from datetime import datetime, timedelta
import time
import query
from measure import DeviceCounter
import pandas
from statistics import mean
# ## `provider_name`
#
# Valid choices are (casing matters):
#
# * `bird`
# * `JUMP`
# * `Lime`
# * `Lyft`
# +
### Configuration ###
provider_name = 'here'
#####################
print(f"Provider: {provider_name}")
# -
# ## `vehicle_type`
#
# Valid choices are (casing matters):
#
# * `bicycle` - `JUMP` only
# * `scooter` - all providers
# +
### Configuration ###
vehicle_type = 'here'
#####################
print(f"Vehicle Type: {vehicle_type}")
# -
# ## `start_date`:
# +
### Configuration ###
start_year = 2018
start_month = 11
start_day = 1  # day of month; must be at least 1 for datetime
#####################
start_date = datetime(start_year, start_month, start_day, 0, 0, 0)
print("Starting:", start_date)
# -
# ## `end_date`:
# +
### Configuration ###
end_year = 2018
end_month = 11
end_day = 1  # day of month; must be at least 1 for datetime
#####################
end_date = datetime(end_year, end_month, end_day, 23, 59, 59)
print("Ending:", end_date)
# -
# ## Query for availability data:
q = query.Availability(start_date, end_date, vehicle_types=vehicle_type, table="csm_availability", local=True, debug=True)
data = q.get(provider_name=provider_name)
# ## Count availability in a partitioned time range:
# +
# create a device counter for the time range, assuming local time
devices = DeviceCounter(start_date, end_date, local=True, debug=True)
# create the interval partition and aggregate counts
partition = devices.count(data).partition()
# -
partition.describe()
# ## Average availability:
#
# Over the computed interval partition.
overall_avg = devices.average()
print(f"Overall average: {overall_avg}")
# ## Count availability (again), day-by-day:
#
# Calculate average availability for each day in the range `start_date` to `end_date`.
#
# At the end, calculate the overall average.
# +
oneday = timedelta(days=1)
counts = {}
start = start_date
while start < end_date:
end = start + oneday
print(f"Counting {start.strftime('%Y-%m-%d')} to {end.strftime('%Y-%m-%d')}")
q = query.Availability(start, end, vehicle_types=vehicle_type, table="csm_availability", local=True, debug=False)
data = q.get(provider_name=provider_name)
print(f"{len(data)} availability records in time period")
counter = DeviceCounter(start, start + oneday, local=True, debug=False)
counts[start] = counter.count(data)
start = start + oneday
print()
print("Done counting. Daily averages:")
print()
for date, count in counts.items():
print(f"{provider_name},{vehicle_type},{date.strftime('%Y-%m-%d')},{count.average()},{overall_avg}")
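# -
# The unused `mean` import at the top suggests the overall figure was meant
# to be recomputed from the daily averages as well; a sketch with dummy
# values standing in for the `count.average()` results:

```python
from statistics import mean

# illustrative daily device-availability averages
daily_averages = [120.5, 98.0, 101.5]
overall = mean(daily_averages)
print(overall)  # 106.66666666666667
```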
| analytics/availability.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %%writefile '../pipelines/google_results_count/extract.py'
import pandas as pd
import requests
from bs4 import BeautifulSoup
from datetime import datetime
def get_search_urls(keyword_list, url="https://www.google.com/search?q="):
""" Compose search urls """
search_query = [kw.replace(' ','+') for kw in keyword_list] # replace space with '+'
return [url+sq for sq in search_query]
def get_results_count(url, user_agent):
    result = requests.get(url, headers=user_agent)
soup = BeautifulSoup(result.content, 'html.parser')
# string that contains results count 'About 1,410,000,000 results'
total_results_text = soup.find("div", {"id": "result-stats"}).find(text=True, recursive=False)
# extract number
results_num = int(''.join([num for num in total_results_text if num.isdigit()]) )
return results_num
def assert_df(df, keyword_list, url="https://www.google.com/search?q="):
# create dummy dataframe for comparison
df_compare = pd.DataFrame({
'keyword': pd.Series([*keyword_list], dtype='object'),
'results_count': pd.Series([1 for i in keyword_list], dtype='int64'),
'search_url': pd.Series(get_search_urls(keyword_list, url=url), dtype='object'),
'query_timestamp': pd.Series([datetime.now() for i in keyword_list], dtype='datetime64[ns]')
})
# columns
column_difference = set(df.columns).symmetric_difference(df_compare.columns)
assert len(column_difference) == 0, f"The following columns differ to reference dataframe: {column_difference}"
# dtypes
assert (df_compare.dtypes == df.dtypes).all(), f"Different dtypes for {df.dtypes}\n{df_compare.dtypes}"
# length
assert len(df) == len(keyword_list), f"{len(df)} does not equal {len(keyword_list)}"
print("Success >>>>>>>>>>\tDataframe meets expectations\n")
def df_build_results_count(keyword_list, user_agent, url="https://www.google.com/search?q="):
    search_urls = get_search_urls(keyword_list, url=url)
result_count = [get_results_count(url, user_agent) for url in search_urls]
timestamp = datetime.now()
df = pd.DataFrame({'keyword': keyword_list,
'results_count': result_count,
'search_url': search_urls,
'query_timestamp': timestamp})
# testing
assert_df(df=df, keyword_list=keyword_list, url=url)
return df
# -
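# Note that `%%writefile` only writes the cell to disk without executing it,
# so the helpers above are not defined in the notebook session. A standalone
# check of the `get_search_urls` behavior (same logic, copied out):

```python
def get_search_urls(keyword_list, url="https://www.google.com/search?q="):
    # replace spaces with '+' and prepend the base search URL
    search_query = [kw.replace(' ', '+') for kw in keyword_list]
    return [url + sq for sq in search_query]

print(get_search_urls(["data engineering"]))
# ['https://www.google.com/search?q=data+engineering']
```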
# ## Test pipeline
# +
# -- load into csv
import pandas as pd
def write_to_csv(df, filepath):
print('_'*42, f'\nExport data, dimension: {df.shape} to\t{filepath}\n')
print(df.head(2).to_markdown())
df.to_csv(f'{filepath}', index=False)
# + jupyter={"outputs_hidden": true} tags=[]
import yaml
import os
from datetime import datetime  # needed below for the timestamped filename
# load settings.yml
with open(r'../settings.yml') as file:
# The FullLoader parameter handles the conversion from YAML
# scalar values to Python the dictionary format
settings = yaml.full_load(file)
PROJECT_DIR = settings['project']['root_dir']
RAW_DATA_DIR = settings['project']['raw_data_dir']
FILENAME = f"{settings['project']['export_filename']}_{datetime.now().strftime('%Y%m%d_%H%M')}.csv"
FILEPATH = os.path.join(PROJECT_DIR, RAW_DATA_DIR, FILENAME)
KEYWORDS = settings['query']['keywords']
USER_AGENT = settings['query']['user_agent']
GOOGLE_URL = settings['query']['google_url']
print("Project dir\t{}\nKeywords\t{}\nExport\t\t{}".format(PROJECT_DIR, KEYWORDS, FILEPATH))
df = df_build_results_count(keyword_list=KEYWORDS,
user_agent=USER_AGENT,
url=GOOGLE_URL)
write_to_csv(df, filepath=FILEPATH)
| notebooks/1-extract.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 05e - Vertex AI > Training > Hyperparameter Tuning Jobs
#
# **Training Jobs Overview:**
# Where a model gets trained is where it consumes computing resources. With Vertex AI, you have choices for configuring the computing resources available at training. This notebook is an example of an execution environment. When it was set up there were choices for machine type and accelerators (GPUs).
#
# In the 04 series of demonstrations, the model training happened directly in the notebook. The models were then imported to Vertex AI and deployed to an endpoint for online predictions.
#
# In this 05 series of demonstrations, the same model is trained using managed computing resources in Vertex AI as custom training jobs. These jobs will be demonstrated as:
#
# - Custom Job from a python file and python source distribution
# - Training Pipeline that trains and saves models from a python file and python source distribution
# - Hyperparameter Tuning Jobs from a python source distribution
#
# **This Notebook: An extension of 05b**
# This notebook trains the same Tensorflow Keras model from 04a by first modifying and saving the training code to a python script. Then a Python source distribution is built containing the script. While this example fits nicely in a single script, larger examples will benefit from the flexibility offered by source distributions and this job gives an example of making the shift.
#
# The source distribution is then used as an input for a Vertex AI Training Custom Job that is also assigned compute resources and a container (pre-built) for executing the training in a managed service.
#
# The Custom Job is then used as the input for a Vertex AI Training Hyperparameter Tuning Job. This runs and manages the tuning loops for the number of trials in each loop, collects the metric(s) and manages the parameters with the search algorithm for parameter modification.
#
# The training can be reviewed with Vertex AI's managed Tensorboard under Experiments > Experiments, or by clicking on the `05e...` job under Training > Hyperparameter Tuning Jobs and then clicking the 'Open Tensorboard' link. See notebook 05f for an enhancement to this notebook that leverages the HPARAMS feature of Tensorboard.
#
# **Prerequisites:**
#
# - 01 - BigQuery - Table Data Source
# - 05 - Vertex AI > Experiments - Managed Tensorboard
# - Understanding:
# - 04a - Vertex AI > Notebooks - Models Built in Notebooks with Tensorflow
# - Contains a more granular review of the Tensorflow model training
#
# **Overview:**
#
# - Setup
# - Connect to Tensorboard instance from 05
# - Create a `train.py` Python script that recreates the local training in 04a
# - Build a Python source distribution that contains the `train.py` script
# - Use Python Client google.cloud.aiplatform for Vertex AI
# - Custom training job with aiplatform.CustomJob.from_local_script
# - Hyperparameter tuning job with aiplatform.HyperparameterTuningJob
# - Run job with .run
# - Upload best Model to Vertex AI with aiplatform.Model.upload
# - Create Endpoint with Vertex AI with aiplatform.Endpoint.create
# - Deploy model to endpoint with .deploy
# - Online Prediction demonstrated using Vertex AI Endpoint with deployed model
# - Get records to score from BigQuery table
# - Prediction with aiplatform.Endpoint.predict
# - Prediction with REST
# - Prediction with gcloud (CLI)
#
# **Resources:**
#
# - [BigQuery Tensorflow Reader](https://www.tensorflow.org/io/tutorials/bigquery)
# - [Keras Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential)
# - [Keras API](https://www.tensorflow.org/api_docs/python/tf/keras)
# - [Python Client For Google BigQuery](https://googleapis.dev/python/bigquery/latest/index.html)
# - [Tensorflow Python Client](https://www.tensorflow.org/api_docs/python/tf)
# - [Tensorflow I/O Python Client](https://www.tensorflow.org/io/api_docs/python/tfio/bigquery)
# - [Python Client for Vertex AI](https://googleapis.dev/python/aiplatform/latest/aiplatform.html)
# - [Create a Python source distribution](https://cloud.google.com/vertex-ai/docs/training/create-python-pre-built-container) for a Vertex AI custom training job
# - Containers for training (Pre-Built)
# - [Overview](https://cloud.google.com/vertex-ai/docs/training/create-python-pre-built-container)
# - [List](https://cloud.google.com/vertex-ai/docs/training/pre-built-containers)
# - Vertex AI Hyperparameter Tuning
# - [Overview of Hyperparameter Tuning](https://cloud.google.com/vertex-ai/docs/training/hyperparameter-tuning-overview)
# - [Using Hyperparameter Tuning](https://cloud.google.com/vertex-ai/docs/training/using-hyperparameter-tuning)
#
# **Related Training:**
#
# - todo
#
# ---
# ## Vertex AI - Conceptual Flow
#
# <img src="architectures/slides/05e_arch.png">
#
# ---
# ## Vertex AI - Workflow
#
# <img src="architectures/slides/05e_console.png">
# ---
# ## Setup
# inputs:
# +
REGION = 'us-central1'
PROJECT_ID='statmike-mlops'
DATANAME = 'fraud'
NOTEBOOK = '05e'
# Resources
TRAIN_IMAGE = 'us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-7:latest'
DEPLOY_IMAGE ='us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-7:latest'
TRAIN_COMPUTE = 'n1-standard-4'
DEPLOY_COMPUTE = 'n1-standard-4'
# Model Training
VAR_TARGET = 'Class'
VAR_OMIT = 'transaction_id' # add more variables to the string with space delimiters
EPOCHS = 10
BATCH_SIZE = 100
# -
# packages:
# +
from google.cloud import aiplatform
from datetime import datetime
from google.cloud import bigquery
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
import json
import numpy as np
# -
# clients:
aiplatform.init(project=PROJECT_ID, location=REGION)
bigquery = bigquery.Client()
# parameters:
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
BUCKET = PROJECT_ID
URI = f"gs://{BUCKET}/{DATANAME}/models/{NOTEBOOK}"
DIR = f"temp/{NOTEBOOK}"
# Give service account roles/storage.objectAdmin permissions
# Console > IAM > Select Account <<EMAIL>>-<EMAIL> > Edit > add the role
# SERVICE_ACCOUNT = !gcloud config list --format='value(core.account)'
SERVICE_ACCOUNT = SERVICE_ACCOUNT[0]
SERVICE_ACCOUNT
# environment:
# !rm -rf {DIR}
# !mkdir -p {DIR}
# ---
# ## Get Tensorboard Instance Name
# The training job will show up as an experiment for the Tensorboard instance and have the same name as the training job ID.
tb = aiplatform.Tensorboard.list(filter=f'display_name={DATANAME}')
tb[0].resource_name
# ---
# ## Training
# ### Assemble Python File for Training
#
# Create the main Python trainer file at `trainer/train.py`:
# !mkdir -p {DIR}/source/trainer
# +
# %%writefile {DIR}/source/trainer/train.py
# package import
from tensorflow.python.framework import dtypes
from tensorflow_io.bigquery import BigQueryClient
import tensorflow as tf
from google.cloud import bigquery
import argparse
import os
import sys
import hypertune
# import argument to local variables
parser = argparse.ArgumentParser()
# the passed param, dest: a name for the param, default: if absent fetch this param from the OS, type: type to convert to, help: description of argument
parser.add_argument('--epochs', dest = 'epochs', default = 10, type = int, help = 'Number of Epochs')
parser.add_argument('--batch_size', dest = 'batch_size', default = 32, type = int, help = 'Batch Size')
parser.add_argument('--var_target', dest = 'var_target', type=str)
parser.add_argument('--var_omit', dest = 'var_omit', type=str, nargs='*')
parser.add_argument('--project_id', dest = 'project_id', type=str)
parser.add_argument('--dataname', dest = 'dataname', type=str)
parser.add_argument('--region', dest = 'region', type=str)
parser.add_argument('--notebook', dest = 'notebook', type=str)
# hyperparameters
parser.add_argument('--lr',dest='learning_rate', required=True, type=float, help='Learning Rate')
parser.add_argument('--m',dest='momentum', required=True, type=float, help='Momentum')
args = parser.parse_args()
# built in parameters for data source:
PROJECT_ID = args.project_id
DATANAME = args.dataname
REGION = args.region
NOTEBOOK = args.notebook
# clients
bigquery = bigquery.Client(project = PROJECT_ID)
# get schema from bigquery source
query = f"SELECT * FROM {DATANAME}.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '{DATANAME}_prepped'"
schema = bigquery.query(query).to_dataframe()
# get number of classes from bigquery source
nclasses = bigquery.query(query = f'SELECT DISTINCT {args.var_target} FROM {DATANAME}.{DATANAME}_prepped WHERE {args.var_target} is not null').to_dataframe()
nclasses = nclasses.shape[0]
# prepare inputs for tensorflow training
OMIT = args.var_omit + ['splits']
selected_fields = schema[~schema.column_name.isin(OMIT)].column_name.tolist()
feature_columns = []
feature_layer_inputs = {}
for header in selected_fields:
if header != args.var_target:
feature_columns.append(tf.feature_column.numeric_column(header))
feature_layer_inputs[header] = tf.keras.Input(shape=(1,),name=header)
# all the columns in this data source are either float64 or int64
output_types = schema[~schema.column_name.isin(OMIT)].data_type.tolist()
output_types = [dtypes.float64 if x=='FLOAT64' else dtypes.int64 for x in output_types]
# remap input data to Tensorflow inputs of features and target
def transTable(row_dict):
target=row_dict.pop(args.var_target)
target = tf.one_hot(tf.cast(target,tf.int64), nclasses)
target = tf.cast(target, tf.float32)
    return row_dict, target
# function to setup a bigquery reader with Tensorflow I/O
def bq_reader(split):
reader = BigQueryClient()
training = reader.read_session(
parent = f"projects/{PROJECT_ID}",
project_id = PROJECT_ID,
table_id = f"{DATANAME}_prepped",
dataset_id = DATANAME,
selected_fields = selected_fields,
output_types = output_types,
row_restriction = f"splits='{split}'",
requested_streams = 3
)
return training
train = bq_reader('TRAIN').parallel_read_rows().map(transTable).shuffle(args.batch_size*3).batch(args.batch_size)
validate = bq_reader('VALIDATE').parallel_read_rows().map(transTable).batch(args.batch_size)
test = bq_reader('TEST').parallel_read_rows().map(transTable).batch(args.batch_size)
# define model and compile
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
feature_layer_outputs = feature_layer(feature_layer_inputs)
layers = tf.keras.layers.BatchNormalization()(feature_layer_outputs)
layers = tf.keras.layers.Dense(nclasses, activation = tf.nn.softmax)(layers)
model = tf.keras.Model(
inputs = [v for v in feature_layer_inputs.values()],
outputs = layers
)
opt = tf.keras.optimizers.SGD(learning_rate = args.learning_rate, momentum = args.momentum)
loss = tf.keras.losses.CategoricalCrossentropy()
model.compile(
optimizer = opt,
loss = loss,
metrics = ['accuracy', tf.keras.metrics.AUC(curve='PR')]
)
# setup tensorboard logs and train
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=os.environ['AIP_TENSORBOARD_LOG_DIR'], histogram_freq=1)
history = model.fit(train, epochs = args.epochs, callbacks = [tensorboard_callback], validation_data = validate)
# output the model save files
model.save(os.getenv("AIP_MODEL_DIR"))
# report hypertune info back to Vertex AI Training > Hyperparameter Tuning Job
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag = 'loss',
metric_value = history.history['loss'][-1],
global_step = 1)
# -
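# The `transTable` function above pops the target column and one-hot encodes it. A plain-NumPy sketch of that encoding (hypothetical row and class count, not the actual TF graph):

```python
import numpy as np

def one_hot(label, nclasses):
    # mirrors tf.one_hot on a single integer class id
    vec = np.zeros(nclasses, dtype=np.float32)
    vec[int(label)] = 1.0
    return vec

row = {'feature_a': 0.4, 'feature_b': 1.2, 'Class': 1}  # hypothetical row
target = one_hot(row.pop('Class'), nclasses=2)
print(row, target)
```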
# ### Assemble Python Source Distribution
# create `setup.py` file:
# +
# %%writefile {DIR}/source/setup.py
from setuptools import setup
from setuptools import find_packages
REQUIRED_PACKAGES = ['tensorflow_io']
setup(
name = 'trainer',
version = '0.1',
install_requires = REQUIRED_PACKAGES,
packages = find_packages(),
include_package_data = True,
description='Training Package'
)
# -
# add `__init__.py` file to the trainer modules folder:
# !touch {DIR}/source/trainer/__init__.py
# Create the source distribution and copy it to the projects storage bucket:
# - change to the local directory with the source folder
# - remove any previous distributions
# - tar and gzip the source folder
# - copy the distribution to the project folder on GCS
# - change back to the local project directory
# +
# %cd {DIR}
# !rm -f source.tar source.tar.gz
# !tar cvf source.tar source
# !gzip source.tar
# !gsutil cp source.tar.gz {URI}/{TIMESTAMP}/source.tar.gz
temp = '../'*(DIR.count('/')+1)
# %cd {temp}
# -
# ### Setup Training Job
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--batch_size=" + str(BATCH_SIZE),
"--var_target=" + VAR_TARGET,
"--var_omit=" + VAR_OMIT,
"--project_id=" + PROJECT_ID,
"--dataname=" + DATANAME,
"--region=" + REGION,
"--notebook=" + NOTEBOOK
]
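# The CMDARGS list above is consumed by argparse inside the trainer. A quick local check of how these flags parse; note that because `--var_omit` uses `nargs='*'`, a single `--var_omit=a b` string arrives as one list element. Flag values here are hypothetical:

```python
import argparse

# Minimal re-creation of the trainer's argument parser (subset of flags)
parser = argparse.ArgumentParser()
parser.add_argument('--epochs', dest='epochs', default=10, type=int)
parser.add_argument('--var_omit', dest='var_omit', type=str, nargs='*')
parser.add_argument('--lr', dest='learning_rate', required=True, type=float)
parser.add_argument('--m', dest='momentum', required=True, type=float)

args = parser.parse_args([
    '--epochs=5',
    '--var_omit=transaction_id user_id',  # one '=' value -> a single list element
    '--lr=0.01',
    '--m=0.9',
])
print(args.epochs, args.var_omit, args.learning_rate)
```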
# +
MACHINE_SPEC = {
"machine_type": TRAIN_COMPUTE,
"accelerator_count": 0
}
WORKER_POOL_SPEC = [
{
"replica_count": 1,
"machine_spec": MACHINE_SPEC,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [f"{URI}/{TIMESTAMP}/source.tar.gz"],
"python_module": "trainer.train",
"args": CMDARGS
}
}
]
# -
customJob = aiplatform.CustomJob(
display_name = f'{NOTEBOOK}_{DATANAME}_{TIMESTAMP}',
worker_pool_specs = WORKER_POOL_SPEC,
base_output_dir = f"{URI}/{TIMESTAMP}",
staging_bucket = f"{URI}/{TIMESTAMP}",
labels = {'notebook':f'{NOTEBOOK}'}
)
# ### Setup Hyperparameter Tuning Job
# +
METRIC_SPEC = {
"loss": "minimize"
}
PARAMETER_SPEC = {
"lr": aiplatform.hyperparameter_tuning.DoubleParameterSpec(min=0.001, max=0.1, scale="log"),
"m": aiplatform.hyperparameter_tuning.DoubleParameterSpec(min=1e-7, max=0.9, scale="linear")
}
# -
htJob = aiplatform.HyperparameterTuningJob(
display_name = f'{NOTEBOOK}_{DATANAME}_{TIMESTAMP}',
custom_job = customJob,
metric_spec = METRIC_SPEC,
parameter_spec = PARAMETER_SPEC,
max_trial_count = 20,
parallel_trial_count = 5,
search_algorithm = None,
labels = {'notebook':f'{NOTEBOOK}'}
)
# ### Run Training Job
htJob.run(
service_account = SERVICE_ACCOUNT,
tensorboard = tb[0].resource_name
)
# if trial.state.name == 'SUCCEEDED'
losses = [trial.final_measurement.metrics[0].value if trial.state.name == 'SUCCEEDED' else 1 for trial in htJob.trials]
losses
best = htJob.trials[losses.index(min(losses))]
best
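# The best-trial selection above can be mocked without a live job. Failed trials get a sentinel loss of 1, which assumes real losses stay below 1 (hypothetical Trial objects stand in for `htJob.trials`):

```python
# Mock of the trial-selection step: failed trials get a sentinel loss of 1
class Trial:
    def __init__(self, state, loss):
        self.state = state
        self.loss = loss

trials = [Trial('SUCCEEDED', 0.42), Trial('FAILED', None), Trial('SUCCEEDED', 0.31)]
losses = [t.loss if t.state == 'SUCCEEDED' else 1 for t in trials]
best = trials[losses.index(min(losses))]
print(losses, best.loss)
```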
# ## Review Results in Tensorboard
# - In Console, go to Vertex AI > Training > Hyperparameter Tuning Jobs
# - Click the `05e_` job
# - Click `Open Tensorboard` at the top of the screen
#
# <img src="architectures/notebooks/05e_Screenshots/console_hpjob.png">
#
# <img src="architectures/notebooks/05e_Screenshots/console_hpjob2.png">
#
# <img src="architectures/notebooks/05e_Screenshots/tb_scalars.png">
#
# <img src="architectures/notebooks/05e_Screenshots/tb_histograms.png">
#
# <img src="architectures/notebooks/05e_Screenshots/tb_graphs.png">
#
# <img src="architectures/notebooks/05e_Screenshots/tb_graphs2.png">
# ---
# ## Serving
# ### Upload The Model
model = aiplatform.Model.upload(
display_name = f'{NOTEBOOK}_{DATANAME}_{TIMESTAMP}',
serving_container_image_uri = DEPLOY_IMAGE,
artifact_uri = f"{URI}/{TIMESTAMP}/{best.id}/model",
labels = {'notebook':f'{NOTEBOOK}'}
)
model.display_name
# ### Create An Endpoint
endpoint = aiplatform.Endpoint.create(
display_name = f'{NOTEBOOK}_{DATANAME}_{TIMESTAMP}',
labels = {'notebook':f'{NOTEBOOK}'}
)
endpoint.display_name
# ### Deploy Model To Endpoint
endpoint.deploy(
model = model,
deployed_model_display_name = f'{NOTEBOOK}_{DATANAME}_{TIMESTAMP}',
traffic_percentage = 100,
machine_type = DEPLOY_COMPUTE,
min_replica_count = 1,
max_replica_count = 1
)
# ---
# ## Prediction
# ### Prepare a record for prediction: instance and parameters lists
pred = bigquery.query(query = f"SELECT * FROM {DATANAME}.{DATANAME}_prepped WHERE splits='TEST' LIMIT 10").to_dataframe()
pred.head(4)
newob = pred[pred.columns[~pred.columns.isin(VAR_OMIT.split()+[VAR_TARGET, 'splits'])]].to_dict(orient='records')[0]
#newob
instances = [json_format.ParseDict(newob, Value())]
parameters = json_format.ParseDict({}, Value())
# ### Get Predictions: Python Client
prediction = endpoint.predict(instances=instances, parameters=parameters)
prediction
prediction.predictions[0]
np.argmax(prediction.predictions[0])
# ### Get Predictions: REST
with open(f'{DIR}/request.json','w') as file:
file.write(json.dumps({"instances": [newob]}))
# !curl -X POST \
# -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
# -H "Content-Type: application/json; charset=utf-8" \
# -d @{DIR}/request.json \
# https://{REGION}-aiplatform.googleapis.com/v1/{endpoint.resource_name}:predict
# ### Get Predictions: gcloud (CLI)
# !gcloud beta ai endpoints predict {endpoint.name.rsplit('/',1)[-1]} --region={REGION} --json-request={DIR}/request.json
# ---
# ## Remove Resources
# see notebook "99 - Cleanup"
| 05e - Vertex AI > Training > Hyperparameter Tuning Jobs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from tqdm.notebook import tqdm
from multiprocessing import Process, Manager, Pool
import numpy as np
# ### util method
def get_rating(id_trabalhador, escolaridade_trabalhador, graduacao_trabalhador,
cidade_trabalhador, escolaridade_vaga, graduacao_vaga, cidade_vaga, id_vaga):
points = 0
if escolaridade_trabalhador == escolaridade_vaga:
points += 1
    # comparing with np.nan via '!=' is always True; use pd.isna to skip missing degrees
    if not pd.isna(graduacao_trabalhador) and graduacao_trabalhador == graduacao_vaga:
points += 1
if cidade_trabalhador == cidade_vaga:
points += 1
return points
#shared_list.append({'id_trabalhador': id_trabalhador, 'id_vaga':id_vaga, 'rating':rating})
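# A standalone check of the scoring logic (the function is restated here so the snippet runs on its own; pd.isna is used so a missing degree never scores, since a bare `!= np.nan` comparison is always True). Values are hypothetical:

```python
import pandas as pd

def get_rating(escolaridade_t, graduacao_t, cidade_t,
               escolaridade_v, graduacao_v, cidade_v):
    # one point each for matching schooling, degree (when present), and city
    points = 0
    if escolaridade_t == escolaridade_v:
        points += 1
    if not pd.isna(graduacao_t) and graduacao_t == graduacao_v:
        points += 1
    if cidade_t == cidade_v:
        points += 1
    return points

# hypothetical worker/vacancy pair: same schooling and city, missing degree
print(get_rating('superior', float('nan'), 'São Paulo',
                 'superior', 'engenharia', 'São Paulo'))
```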
# ## load trabalhadores
df_trabalhadores = pd.read_csv("data/D_ETL_IMO_EXTRACAO_SINE_ABERTO_TRABALHADORES_SP.csv", sep=";", encoding="iso8859-1")
df_trabalhadores.dropna(subset=['PRETENSOES'], inplace=True)
df_trabalhadores[['PRETENSOES','LIXO']] = df_trabalhadores.PRETENSOES.str.split("(", n=1, expand=True)
#df_trabalhadores = df_trabalhadores[:30000]
df_trabalhadores = df_trabalhadores.sample(7000)
df_trabalhadores['id_trabalhador'] = df_trabalhadores.index
print(df_trabalhadores.shape)
df_trabalhadores.head()
# ## load vagas
df_vagas = pd.read_csv("data/vagas_mock.csv")
#df_vagas = df_vagas[:10000]
df_vagas = df_vagas.sample(3000)
print(df_vagas.shape)
df_vagas.head()
# ## Feature engineering
# automatically "rates" each position for each worker
rating_list = []
for index, row in tqdm(df_trabalhadores.iterrows(), total=df_trabalhadores.shape[0]):
id_trabalhador = row['id_trabalhador']
escolaridade_trabalhador = row['ESCOLARIDADE']
graduacao_trabalhador = row['GRADUACOES']
cidade_trabalhador = row['NOME_MUNICIPIO']
for index_vaga, row_vaga in df_vagas.iterrows():
escolaridade_vaga = row_vaga['escolaridade']
graduacao_vaga = row_vaga['graduacao']
cidade_vaga = row_vaga['cidade']
id_vaga = row_vaga['id_empresa']
rating = get_rating(id_trabalhador=id_trabalhador, escolaridade_trabalhador=escolaridade_trabalhador, graduacao_trabalhador=graduacao_trabalhador,
cidade_trabalhador=cidade_trabalhador, escolaridade_vaga=escolaridade_vaga, graduacao_vaga=graduacao_vaga, cidade_vaga=cidade_vaga, id_vaga=id_vaga)
rating_list.append({'id_trabalhador':id_trabalhador, 'id_posicao':id_vaga, 'rating':rating})
del(df_trabalhadores)
del(df_vagas)
df_match = pd.DataFrame(rating_list)
print(df_match.shape)
df_match.head()
del(rating_list)
df_match.rating.value_counts().plot.bar();
df_match.rating.value_counts()
df_match.to_csv("data/matches2.csv")
| notebooks/match preprocess.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Volatility Indices Calculation
# This notebook explains how the module *vxbt_calc* calculates the VXBT, AVXBT and GVXBT indices using data from Deribit.
# +
import calendar
import numpy as np
import openapi_client as dbitApi
import pandas as pd
from datetime import datetime
# -
# ### Utility functions for time calculations, Deribit API and dataframe formatting
# +
def format_datetime_to_expiry(date):
return datetime.strftime(date, '%-d%b%y').upper()
def get_near_next_terms(now):
c = calendar.Calendar(firstweekday=calendar.MONDAY)
this_month_cal = c.monthdatescalendar(now.year, now.month)
this_fridays = [datetime(day.year, day.month, day.day, 8, 0, 0)
for week in this_month_cal for day in week
if day.weekday() == calendar.FRIDAY and day.month == now.month
and datetime(day.year, day.month, day.day, 8, 0, 0) >= now]
next_year = now.year if now.month < 12 else now.year + 1
next_month = now.month + 1 if now.month < 12 else 1
next_month_cal = c.monthdatescalendar(next_year, next_month)
next_fridays = [datetime(day.year, day.month, day.day, 8, 0, 0)
for week in next_month_cal for day in week
if day.weekday() == calendar.FRIDAY and day.month == next_month
and datetime(day.year, day.month, day.day, 8, 0, 0) >= now]
fridays = this_fridays + next_fridays
near_term, next_term = fridays[0], fridays[1]
return (format_datetime_to_expiry(near_term), format_datetime_to_expiry(next_term), near_term, next_term)
def get_index(currency='BTC'):
try:
index_result = api.public_get_index_get(currency)['result'][currency]
return index_result
except dbitApi.exceptions.ApiException as e:
print(e)
        #logger.exception('Exception when calling MarketDataApi->public_get_index_get!')
exit()
def get_instruments_with_expiry(expiry, currency='BTC', kind='option', expired='false'):
try:
instrument_result = api.public_get_instruments_get(currency, kind=kind, expired=expired)['result']
return [instrument['instrument_name'] for instrument in instrument_result if expiry in instrument['instrument_name']]
except dbitApi.exceptions.ApiException as e:
print(e)
#logger.exception('Exception when calling MarketDataApi->public_get_instruments_get!')
exit()
def get_ticker(instrument):
try:
instrument_result = api.public_ticker_get(instrument)['result']
return instrument_result
except dbitApi.exceptions.ApiException as e:
print(e)
        #logger.exception('Exception when calling MarketDataApi->public_ticker_get!')
exit()
def get_bids_asks(near_list, next_list):
near_calls = dict()
near_puts = dict()
next_calls = dict()
next_puts = dict()
for instrument in near_list:
data = get_ticker(instrument)
best_bid, best_ask = data['best_bid_price'], data['best_ask_price']
strike, cp = int(instrument.split('-')[2]), instrument.split('-')[3]
if cp == 'C':
near_calls[strike] = {'best_bid': best_bid, 'best_ask': best_ask}
elif cp == 'P':
near_puts[strike] = {'best_bid': best_bid, 'best_ask': best_ask}
else:
print(f'Error {instrument}')
for instrument in next_list:
data = get_ticker(instrument)
best_bid, best_ask = data['best_bid_price'], data['best_ask_price']
strike, cp = int(instrument.split('-')[2]), instrument.split('-')[3]
if cp == 'C':
next_calls[strike] = {'best_bid': best_bid, 'best_ask': best_ask}
elif cp == 'P':
next_puts[strike] = {'best_bid': best_bid, 'best_ask': best_ask}
else:
print(f'Error {instrument}')
near_calls_df = pd.DataFrame.from_dict(near_calls, orient='index').sort_index().replace(0, np.nan)
near_puts_df = pd.DataFrame.from_dict(near_puts, orient='index').sort_index().replace(0, np.nan)
next_calls_df = pd.DataFrame.from_dict(next_calls, orient='index').sort_index().replace(0, np.nan)
next_puts_df = pd.DataFrame.from_dict(next_puts, orient='index').sort_index().replace(0, np.nan)
return near_calls_df, near_puts_df, next_calls_df, next_puts_df
# -
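# A quick standalone check of the Deribit expiry formatting used above; note that `'%-d'` (day without zero padding) is glibc-specific strftime behavior (Windows would need `'%#d'`):

```python
from datetime import datetime

def format_datetime_to_expiry(date):
    # '%-d' drops zero padding (glibc strftime; not portable to Windows)
    return datetime.strftime(date, '%-d%b%y').upper()

print(format_datetime_to_expiry(datetime(2021, 6, 4)))   # a single-digit day
print(format_datetime_to_expiry(datetime(2021, 6, 25)))  # a double-digit day
```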
# ## VXBT Implementation
#
# Replication of CBOE VIX calculation.
#
# Near and next term expiries are defined as the next two Fridays respectively. Bid/ask data for all strike puts and calls are retrieved from Deribit for these expiries.
# +
api = dbitApi.MarketDataApi()
now = datetime.now()
near_expiry, next_expiry, near_datetime, next_datetime = get_near_next_terms(now)
print(near_expiry, next_expiry)
# +
near_instruments = get_instruments_with_expiry(near_expiry)
next_instruments = get_instruments_with_expiry(next_expiry)
near_calls_df, near_puts_df, next_calls_df, next_puts_df = get_bids_asks(near_instruments, next_instruments)
# -
near_calls_df
# ### Step 1: Select the options to be used in the VIX Index calculation
#
# Call and put prices are computed as the average of the respective bid and ask prices. The strike at which the absolute call-put price difference is smallest is then used to calculate the forward prices and separation strikes.
# +
near_prices = pd.DataFrame(index=near_calls_df.index)
near_prices['call_price'] = (near_calls_df['best_bid'] + near_calls_df['best_ask']) / 2
near_prices['put_price'] = (near_puts_df['best_bid'] + near_puts_df['best_ask']) / 2
near_prices['abs_diff'] = abs(near_prices['call_price'] - near_prices['put_price'])
min_near_strike = near_prices['abs_diff'].idxmin()
min_near_diff = near_prices.loc[min_near_strike].abs_diff
next_prices = pd.DataFrame(index=next_calls_df.index)
next_prices['call_price'] = (next_calls_df['best_bid'] + next_calls_df['best_ask']) / 2
next_prices['put_price'] = (next_puts_df['best_bid'] + next_puts_df['best_ask']) / 2
next_prices['abs_diff'] = abs(next_prices['call_price'] - next_prices['put_price'])
min_next_strike = next_prices['abs_diff'].idxmin()
min_next_diff = next_prices.loc[min_next_strike].abs_diff
near_prices
# -
# The VXBT index is set to have a constant maturity of seven days and a yield rate of zero (which should not make a difference to calculations - refer to Alexander paper page 9). This is used to calculate forward prices f1, f2 and separation strikes k0_1, k0_2.
# +
const_mature_days = 7
R = 0
n1 = (near_datetime - now).total_seconds() / 60
n2 = (next_datetime - now).total_seconds() / 60
nY = 525600
n = const_mature_days * 24 * 60
t1 = n1/nY
t2 = n2/nY
# Compute forward prices and at-the-money strikes
f1 = min_near_strike + np.e**(R*t1) * min_near_diff
k0_1 = max([strike for strike in near_prices.index if strike <= min_near_strike])
f2 = min_next_strike + np.e**(R*t2) * min_next_diff
k0_2 = max([strike for strike in next_prices.index if strike <= min_next_strike])
print(k0_1, f1, k0_2, f2)
# -
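# A numeric sketch of the forward-price step with hypothetical prices. The CBOE formula is F = K* + e^(R*T) * (C - P) at the strike of minimum |C - P|; with R = 0 the exponential factor is 1:

```python
import numpy as np

# Forward price at the min |call - put| strike: F = K + e^(R*T) * (C - P)
# (hypothetical strike and option prices; with R = 0, F = K + (C - P))
R, T = 0.0, 7 / 365
K, call, put = 9000, 512.5, 498.0
F = K + np.e ** (R * T) * (call - put)
print(F)
```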
# Out of the money calls and puts are found by using the calculated separation strikes and excluding at the money strikes.
near_otm_puts_df = near_puts_df.loc[:k0_1][:-1]
near_otm_calls_df = near_calls_df.loc[k0_1:][1:]
next_otm_puts_df = next_puts_df.loc[:k0_2][:-1]
next_otm_calls_df = next_calls_df.loc[k0_2:][1:]
near_otm_puts_df
near_otm_calls_df
# Strikes beyond two consecutive zero-bid prices, and strikes with zero bids, are excluded.
# +
near_otm_puts_df = near_otm_puts_df.sort_index(ascending=False)
near_otm_puts_df = near_otm_puts_df.assign(zero_bid=lambda df: (df['best_bid'] == 0).astype(int))
near_otm_puts_df['zero_bid_cumsum'] = near_otm_puts_df['zero_bid'].cumsum()
near_otm_puts_df = near_otm_puts_df[(near_otm_puts_df['zero_bid_cumsum'] <= 2) & (near_otm_puts_df['best_bid'] > 0)]
near_otm_puts_df
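# The white-paper rule truncates strikes once two consecutive zero bids are seen moving away from the money (the cumulative-zero-count filter above approximates this). A plain-Python sketch of the stricter rule, on hypothetical bids:

```python
# VIX white-paper truncation: moving away from the money, stop once two
# consecutive zero bids are seen; also drop individual zero-bid strikes.
def truncate_at_double_zero(bids):
    kept, zeros_in_a_row = [], 0
    for bid in bids:
        if bid == 0:
            zeros_in_a_row += 1
            if zeros_in_a_row == 2:
                break
        else:
            zeros_in_a_row = 0
            kept.append(bid)
    return kept

print(truncate_at_double_zero([5.0, 3.5, 0, 2.0, 0, 0, 1.5]))
```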
# +
near_otm_calls_df = near_otm_calls_df.assign(zero_bid=lambda df: (df['best_bid'] == 0).astype(int))
near_otm_calls_df['zero_bid_cumsum'] = near_otm_calls_df['zero_bid'].cumsum()
near_otm_calls_df = near_otm_calls_df[(near_otm_calls_df['zero_bid_cumsum'] <= 2) & (near_otm_calls_df['best_bid'] > 0)]
near_otm_calls_df
# +
next_otm_puts_df = next_otm_puts_df.sort_index(ascending=False)
next_otm_puts_df = next_otm_puts_df.assign(zero_bid=lambda df: (df['best_bid'] == 0).astype(int))
next_otm_puts_df['zero_bid_cumsum'] = next_otm_puts_df['zero_bid'].cumsum()
next_otm_puts_df = next_otm_puts_df[(next_otm_puts_df['zero_bid_cumsum'] <= 2) & (next_otm_puts_df['best_bid'] > 0)]
next_otm_calls_df = next_otm_calls_df.assign(zero_bid=lambda df: (df['best_bid'] == 0).astype(int))
next_otm_calls_df['zero_bid_cumsum'] = next_otm_calls_df['zero_bid'].cumsum()
next_otm_calls_df = next_otm_calls_df[(next_otm_calls_df['zero_bid_cumsum'] <= 2) & (next_otm_calls_df['best_bid'] > 0)]
# -
next_otm_puts_df
next_otm_calls_df
# ### Step 2: Calculate volatility for both near-term and next-term options
#
# Refer to VIX white paper page 8.
near_calc_strikes_df = pd.DataFrame(index=near_prices.index)
near_calc_strikes_df['price'] = (near_otm_puts_df['best_bid'] + near_otm_puts_df['best_ask']) / 2
near_calc_strikes_df['price'] = near_calc_strikes_df.price.combine_first((near_otm_calls_df['best_bid'] + near_otm_calls_df['best_ask']) / 2)
near_calc_strikes_df.at[k0_1, 'price'] = (near_prices.loc[k0_1].call_price + near_prices.loc[k0_1].put_price) / 2
near_calc_strikes_df = near_calc_strikes_df.dropna()
near_calc_strikes_df
next_calc_strikes_df = pd.DataFrame(index=next_prices.index)
next_calc_strikes_df['price'] = (next_otm_puts_df['best_bid'] + next_otm_puts_df['best_ask']) / 2
next_calc_strikes_df['price'] = next_calc_strikes_df.price.combine_first((next_otm_calls_df['best_bid'] + next_otm_calls_df['best_ask']) / 2)
next_calc_strikes_df.at[k0_2, 'price'] = (next_prices.loc[k0_2].call_price + next_prices.loc[k0_2].put_price) / 2
next_calc_strikes_df = next_calc_strikes_df.dropna()
next_calc_strikes_df
# +
near_sum = 0
for i in range(len(near_calc_strikes_df)):
row = near_calc_strikes_df.iloc[i]
if i == 0:
deltaKi = near_calc_strikes_df.iloc[i+1].name - row.name
elif i == len(near_calc_strikes_df) - 1:
deltaKi = row.name - near_calc_strikes_df.iloc[i-1].name
else:
deltaKi = (near_calc_strikes_df.iloc[i+1].name - near_calc_strikes_df.iloc[i-1].name) / 2
near_sum += deltaKi/(row.name ** 2) * np.e**(R*t1) * row.price
next_sum = 0
for i in range(len(next_calc_strikes_df)):
row = next_calc_strikes_df.iloc[i]
if i == 0:
deltaKi = next_calc_strikes_df.iloc[i+1].name - row.name
elif i == len(next_calc_strikes_df) - 1:
deltaKi = row.name - next_calc_strikes_df.iloc[i-1].name
else:
deltaKi = (next_calc_strikes_df.iloc[i+1].name - next_calc_strikes_df.iloc[i-1].name) / 2
next_sum += deltaKi/(row.name ** 2) * np.e**(R*t2) * row.price
sigma1 = ((2/t1) * near_sum) - (1/t1)*((f1/k0_1 - 1)**2)
sigma2 = ((2/t2) * next_sum) - (1/t2)*((f2/k0_2 - 1)**2)
print(sigma1, sigma2)
# -
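# The loops above implement sigma^2 = (2/T) * sum(dK_i / K_i^2 * e^(R*T) * Q(K_i)) - (1/T) * (F/K0 - 1)^2 with central-difference dK_i (one-sided at the edges). A vectorized sketch with hypothetical strikes and prices; np.gradient reproduces the same dK_i scheme:

```python
import numpy as np

# Hypothetical OTM strikes and mid prices around a hypothetical forward F
strikes = np.array([8000.0, 8500.0, 9000.0, 9500.0, 10000.0])
prices = np.array([30.0, 55.0, 120.0, 60.0, 25.0])
R, T, F, K0 = 0.0, 7 / 365, 9014.5, 9000.0

delta_K = np.gradient(strikes)  # central diffs inside, one-sided at the edges
var_sum = np.sum(delta_K / strikes**2 * np.exp(R * T) * prices)
sigma_sq = (2 / T) * var_sum - (1 / T) * (F / K0 - 1) ** 2
print(sigma_sq)
```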
VXBT = 100 * np.sqrt(((t1*sigma1)*((n2-n)/(n2-n1)) + (t2*sigma2)*((n-n1)/(n2-n1)))*(nY/n))
VXBT
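# A sanity check on the constant-maturity interpolation (hypothetical maturities and variances): when the target maturity n equals the near-term maturity n1, the next-term weight vanishes and the index collapses to 100 * sqrt(sigma1):

```python
import numpy as np

# Sanity check on the interpolation: with n == n1 the index is 100*sqrt(sigma1)
nY = 525600
n1, n2 = 5 * 24 * 60, 12 * 24 * 60   # hypothetical maturities in minutes
t1, t2 = n1 / nY, n2 / nY
sigma1, sigma2 = 0.49, 0.36          # hypothetical annualized variances
n = n1
vxbt = 100 * np.sqrt(((t1 * sigma1) * ((n2 - n) / (n2 - n1))
                      + (t2 * sigma2) * ((n - n1) / (n2 - n1))) * (nY / n))
print(vxbt)
```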
# ## AVXBT and GVXBT Implementation
#
# Refer to *'The Crypto Investor Fear Gauge and the Bitcoin Variance Risk Premium'* by <NAME> and <NAME>.
# +
omega = ((n2-nY)/(n2-n1))*n
GVXBT = np.sqrt(omega*t1*sigma1 + (1-omega)*t2*sigma2)
# -
GVXBT
# +
sigma1_a = sigma1 * (f1**-2)
sigma2_a = sigma2 * (f2**-2)
AVXBT = np.sqrt(omega*t1*sigma1_a + (1-omega)*t2*sigma2_a)
# -
AVXBT
# ***
# # Test implementation against CBOE VIX for S&P 500 options
from vxbt_calc import vxbt_calc as vc
from datetime import timedelta
# CBOE's VIX takes options expiring between 23 and 37 days from now as near-term and next-term options (see CBOE VIX White Paper). Assume exact time of expiry can be neglected for now.
# +
now = datetime.now().date()
start_date = now + timedelta(days=23)
end_date = now + timedelta(days=37)
fridays = [day for row in calendar.Calendar(firstweekday=calendar.MONDAY).yeardatescalendar(now.year) for month in row for week in month for day in week if day.weekday() == calendar.FRIDAY]
near_exp, next_exp = [friday for friday in fridays if friday > start_date and friday < end_date]
near_exp, next_exp
# -
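# A reproducible sketch of the 23-37 day window selection above, with a fixed 'now' of 2020-04-27 (the download date of the CSVs used below); duplicates from calendar's month-overlap weeks are dropped with a set:

```python
import calendar
from datetime import date, timedelta

# Pick the Fridays falling 23-37 days out (CBOE near/next-term window)
now = date(2020, 4, 27)
start, end = now + timedelta(days=23), now + timedelta(days=37)
cal = calendar.Calendar(firstweekday=calendar.MONDAY)
fridays = [day for row in cal.yeardatescalendar(now.year)
           for month in row for week in month for day in week
           if day.weekday() == calendar.FRIDAY]
window = sorted({f for f in fridays if start < f < end})
print(window)
```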
# Manually download data CSVs from https://www.barchart.com/stocks/quotes/$SPX/options to process
near_data = pd.read_csv('$spx-options-exp-2020-05-22-show-all-stacked-04-27-2020.csv', skipfooter=1)
next_data = pd.read_csv('$spx-options-exp-2020-05-29-show-all-stacked-04-27-2020.csv', skipfooter=1)
near_data
# +
near_calls_df = near_data[near_data['Type'] == 'Call'][['Strike', 'Bid', 'Ask']].replace(',', '', regex=True).astype('float').replace(0, np.nan).set_index('Strike').sort_index().rename({'Bid': 'best_bid', 'Ask': 'best_ask'}, axis=1)
near_puts_df = near_data[near_data['Type'] == 'Put'][['Strike', 'Bid', 'Ask']].replace(',', '', regex=True).astype('float').replace(0, np.nan).set_index('Strike').sort_index().rename({'Bid': 'best_bid', 'Ask': 'best_ask'}, axis=1)
next_calls_df = next_data[next_data['Type'] == 'Call'][['Strike', 'Bid', 'Ask']].replace(',', '', regex=True).astype('float').replace(0, np.nan).set_index('Strike').sort_index().rename({'Bid': 'best_bid', 'Ask': 'best_ask'}, axis=1)
next_puts_df = next_data[next_data['Type'] == 'Put'][['Strike', 'Bid', 'Ask']].replace(',', '', regex=True).astype('float').replace(0, np.nan).set_index('Strike').sort_index().rename({'Bid': 'best_bid', 'Ask': 'best_ask'}, axis=1)
# -
near_calls_df
# Set maturity to 30 days as specified in VIX White Paper and arbitrarily use value of R1 from the paper as the yield rate (effect is negligible).
# +
maturity = 30
rate = 0.000305
VIX, _1, _2 = vc.calculate_indices(now, near_exp, next_exp, maturity, rate, near_calls_df, near_puts_df, next_calls_df, next_puts_df)
VIX
# -
# Get the value of VIX from Yahoo Finance at the time options data was downloaded:
import yfinance as yf
# yfinance has no Share class (that came from the retired yahoo-finance package); use Ticker
yf_vix = yf.Ticker('^VIX')
yf_vix.history(period='1d')
# ### Values are close.
# Not an exact match, but this is expected as US Treasury yield rates and exact expiry times are neglected in our calculation.
sp = yf.Ticker('^SPX')
near_opts = sp.option_chain('2020-06-26')
near_opts.puts
near_calls_df = near_opts.calls[['strike', 'bid', 'ask']].replace(',', '', regex=True).astype('float').replace(0, np.nan).set_index('strike').sort_index().rename({'bid': 'best_bid', 'ask': 'best_ask'}, axis=1)
near_calls_df
| notebooks/indices_calculation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # SemEval Pattern Matchers on our dataset
# +
import pandas as pd
from sklearn.metrics import f1_score as f1, accuracy_score as acc, precision_score as prec, recall_score as rec, matthews_corrcoef as mcc
from nltk.tokenize import word_tokenize
import nltk
import re
from collections import Counter
from textblob import TextBlob
# from spellchecker import SpellChecker
import string
import numpy as np
sigdig = 3
# -
# ## The Data
# +
data = 'askparents'
train = pd.read_csv('../../annotated_data/' + data + '_train.tsv', sep='\t', header=0)
train['Sentence'] = train['Sentence'].apply(lambda x: x.lower())
train_sentences = train['Sentence'].tolist()
train_labels_DS = train['DS_Label'].values
train_labels_Maj = train['Majority_label'].values
dev = pd.read_csv('../../annotated_data/' + data + '_dev.tsv', sep='\t', header=0)
dev['Sentence'] = dev['Sentence'].apply(lambda x: x.lower())
dev_sentences = dev['Sentence'].tolist()
dev_labels_DS = dev['DS_Label'].values
dev_labels_Maj = dev['Majority_label'].values
test = pd.read_csv('../../annotated_data/' + data + '_test.tsv', sep='\t', header=0)
test['Sentence'] = test['Sentence'].apply(lambda x: x.lower())
test_sentences_ap = test['Sentence'].tolist()
test_labels_DS_ap = test['DS_Label'].values
test_labels_Maj_ap = test['Majority_label'].values
# -
print("1 is advice, 0 is not.")
print("Distribution of Train set:", Counter(train_labels_DS), np.round(Counter(train_labels_DS)[1]/len(train_labels_DS),2))
print("Distribution of Dev set:", Counter(dev_labels_DS), np.round(Counter(dev_labels_DS)[1]/len(dev_labels_DS), 2))
print("Distribution of Test set:", Counter(test_labels_DS_ap), np.round(Counter(test_labels_DS_ap)[1]/len(test_labels_DS_ap), 2))
train['Post.ID'] = train['ID'].apply(lambda x: x.split('-')[0])
print(len(train['Post.ID'].unique()))
# ## SemEval Baseline
def classify(sent_list):
keywords = ["suggest","recommend","hopefully","go for","request","it would be nice","adding",
"should come with","should be able","could come with", "i need" , "we need","needs",
"would like to","would love to","allow","add"]
# Goldberg et al.
pattern_strings = [r'.*would\slike.*if.*', r'.*i\swish.*', r'.*i\shope.*', r'.*i\swant.*',
r'.*hopefully.*', r".*if\sonly.*", r".*would\sbe\sbetter\sif.*", r".*should.*",
r".*would\sthat.*", r".*can't\sbelieve.*didn't.*", r".*don't\sbelieve.*didn't.*",
r".*do\swant.*", r".*i\scan\shas.*"]
compiled_patterns = []
for patt in pattern_strings:
compiled_patterns.append(re.compile(patt))
label_list = []
for sent in sent_list:
tokenized_sent = word_tokenize(sent)
tagged_sent = nltk.pos_tag(tokenized_sent)
tags = [i[1] for i in tagged_sent]
label = 0
patt_matched = False
for compiled_patt in compiled_patterns:
joined_sent = " ".join(tokenized_sent)
matches = compiled_patt.findall(joined_sent)
if len(matches) > 0:
patt_matched = True
keyword_match = any(elem in keywords for elem in tokenized_sent)
pos_match = any(elem in ['MD', 'VB'] for elem in tags)
        if patt_matched or keyword_match or pos_match:
            label = 1
label_list.append(label)
return label_list
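# The Goldberg et al. patterns use `\s` because the tokenized sentence is re-joined with single spaces before matching. A stdlib-only check of the pattern-matching step (no NLTK tagging needed for this part; sentences are pre-tokenized by hand):

```python
import re

# A small subset of the Goldberg et al. patterns used by classify()
patterns = [re.compile(p) for p in
            [r'.*i\swish.*', r'.*should.*', r".*can't\sbelieve.*didn't.*"]]

def matches_any(tokenized_sentence):
    joined = " ".join(tokenized_sentence)
    return any(p.findall(joined) for p in patterns)

print(matches_any(["i", "wish", "daycare", "were", "cheaper"]))
print(matches_any(["the", "weather", "is", "nice"]))
```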
dev_pred_labels_baseline = classify(dev_sentences)
print("F1:", np.round(f1(dev_labels_DS, dev_pred_labels_baseline), sigdig))
print("MCC: ", np.round(mcc(dev_labels_DS, dev_pred_labels_baseline), sigdig))
print("Acc: ", np.round(acc(dev_labels_DS, dev_pred_labels_baseline), sigdig))
print("Precision: ", np.round(prec(dev_labels_DS, dev_pred_labels_baseline), sigdig))
print("Recall: ", np.round(rec(dev_labels_DS, dev_pred_labels_baseline), sigdig))
print("All 1 F1 on dev:", np.round(f1(dev_labels_DS, [1 for i in range(len(dev_labels_DS))]), sigdig))
print("All 1 precision on dev:", np.round(prec(dev_labels_DS, [1 for i in range(len(dev_labels_DS))]), sigdig))
print("All 1 recall on dev:", np.round(rec(dev_labels_DS, [1 for i in range(len(dev_labels_DS))]), sigdig))
print("All 1 acc on dev:", np.round(acc(dev_labels_DS, [1 for i in range(len(dev_labels_DS))]), sigdig))
print("All 1 mcc on dev:", np.round(mcc(dev_labels_DS, [1 for i in range(len(dev_labels_DS))]), sigdig))
# # NTUA-IS stuff
# ## Subtask A Classifier
def gr_classify(sent_list, sk, P_ab=True, P_c=True, imperative=True, spelling=False):
# words from above with other example words they included - P_a
pattern_pa = ["suggest","recommend","hopefully","go for","request","it would be nice","adding",
"should come with","should be able","could come with", "i need" , "we need","needs",
"would like to","would love to","allow","add", "helpful", "allow", "disallow", "idea",
"consider"]
# Goldberg et al.
pattern_pc = [r'.*would\slike.*if.*', r'.*i\swish.*', r'.*i\shope.*', r'.*i\swant.*',
r'.*hopefully.*', r".*if\sonly.*", r".*would\sbe\sbetter\sif.*", r".*should.*",
r".*would\sthat.*", r".*can't\sbelieve.*didn't.*", r".*don't\sbelieve.*didn't.*",
r".*do\swant.*", r".*i\scan\shas.*"]
# pattern list P_c rules for subtask A
pattern_pc += [r'.*should\s(not|be|take|include|start).*', r'.*be\sbetter.*', r'.*that\sway.*',
r'.*so\sthat.*', r'.*why\snot.*', r'.*suggestion\sis.*', r'.*good\ssolution.*',
r'.*the\sidea.*', r'.*to\sallow.*', r'.*would\smake.*', r'.*(will|would)\sbe.*',
r'.*(to|would|could)\senable\s(i|would|id)\s(like|prefer).*', r'.*am\sasking\sfor.*',
r'.*look\sinto.*', r'.*make\sit.*', r'.*at\sleast.*', r'.*we\sneed.*']
compiled_pc = [re.compile(patt) for patt in pattern_pc]
    # pattern list P_b rules, shared between subtasks A and B
pattern_pb = [r'.*do\snot.*', r'.*if\sonly.*', r'.*(so|before|can|for|if)\syou.*',
r'.*you\s(will|need|can|may).*', r'.*(make|be)\ssure.*', r'.*watch\sout.*',
r'.*(go|going|asking|wishing)\sfor.*', r'.*would\sadvise.*',
r'.*(will|would|could)\sbe.*', r'.*be\s(prepared|careful|warned|forewarned).*',
r'.*(i/would/i\'d)\s(like|prefer).*', r'.*highly\srecommended.*',
r'.*(look|looking)\s(into|for|up|around).*', r'.*why\snot.*', r'.*is\sthere.*',
r'.*we\sneed.*']
compiled_pb = [re.compile(patt) for patt in pattern_pb]
pos_pattern_strings = [r'^UH\sVBP.*', r'^MD\sRB\sPRP.*', r'^(VB|VBP).*', r'^MD.*',
r'^(DT|RB|PRP|NN)\sVB.*']
compiled_pos_patterns = [re.compile(patt) for patt in pos_pattern_strings]
label_list = []
for sent in sent_list:
score = 0
if len(sent.split()) < 5:
            score -= 0.2
clause_split = [a for a in re.split("[.,!?;]|(Please|please)", sent) if a not in
[None, '', ' ', 'Please', 'please']]
for clause in clause_split:
clause_pos = TextBlob(clause).tags
words = [i[0] for i in clause_pos]
tags = [i[1] for i in clause_pos]
            # Correct misspells (requires the commented-out SpellChecker import above
            # plus spell = SpellChecker(); off by default)
            if spelling:
                words = [spell.correction(w) if w not in spell else w for w in words]
if P_ab:
# Pattern P_a
if any(elem in pattern_pa for elem in words):
score += 0.3
# Pattern P_b
for compiled_patt in compiled_pb:
joined_sent = " ".join(words)
matches = compiled_patt.findall(joined_sent)
if len(matches) > 0:
score += 0.1
if P_c:
# Pattern P_c
for compiled_patt in compiled_pc:
joined_sent = " ".join(words)
matches = compiled_patt.findall(joined_sent)
if len(matches) > 0:
score += 0.25
if imperative:
# Imperative POS pattern check
for compiled_pos_patt in compiled_pos_patterns:
joined_sent = " ".join(tags)
matches = compiled_pos_patt.findall(joined_sent)
if len(matches) > 0:
score += sk
if score > 0:
label_list.append(1)
else:
label_list.append(0)
return label_list
dev_pred_labels = gr_classify(dev_sentences, sk=0)
print("F1:", f1(dev_labels_DS, dev_pred_labels))
print("Precision:", prec(dev_labels_DS, dev_pred_labels))
print("Recall:", rec(dev_labels_DS, dev_pred_labels))
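# The rule lists above all reduce to one idea: each regex hit adds a small bonus to a
# sentence-level score, and a threshold turns the score into a binary label. A minimal,
# self-contained sketch of that scoring scheme (toy patterns and weights for illustration,
# not the full P_a/P_b/P_c lists):

```python
import re

# Illustrative subset of the suggestion patterns used above.
toy_patterns = [re.compile(p) for p in [r'.*i\swish.*',
                                        r'.*you\s(will|need|can|may).*',
                                        r'.*(make|be)\ssure.*']]

def toy_rule_score(sentence, per_match=0.1):
    """Sum a small bonus for every pattern the lowercased sentence matches."""
    s = sentence.lower()
    return sum(per_match for patt in toy_patterns if patt.match(s))

print(toy_rule_score("I wish the app would sync faster"))  # one pattern hits -> 0.1
print(toy_rule_score("Make sure you can log in"))          # two patterns hit -> 0.2
print(toy_rule_score("The weather was nice"))              # no match -> 0
```

A positive score then maps to label 1, exactly as in the `score > 0` check above.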
# ## Subtask B
# +
def gr_classify_b(sent_list, pos_s, P_a=True, P_b=True, imperative=True, spelling=False):
# words from above with other example words they included - P_a
    pattern_pa = ['avoid', 'beware', "don't", 'expect', 'remember', 'tip', 'advise', 'advice',
                  'recommended', 'recommendation', 'suggest', 'suggestion', 'ask', 'bring', 'pick',
                  'consider', 'spend', 'can', 'please', 'hopefully', 'enjoying', 'want', 'wanting',
                  'prefer']
    # Goldberg et al. patterns
pattern_pc = [r'.*would\slike.*if.*', r'.*i\swish.*', r'.*i\shope.*', r'.*i\swant.*',
r'.*hopefully.*', r".*if\sonly.*", r".*would\sbe\sbetter\sif.*", r".*should.*",
r".*would\sthat.*", r".*can't\sbelieve.*didn't.*", r".*don't\sbelieve.*didn't.*",
r".*do\swant.*", r".*i\scan\shas.*"]
# pattern list P_c rules for subtask A
pattern_pc += [r'.*should\s(not|be|take|include|start).*', r'.*be\sbetter.*', r'.*that\sway.*',
r'.*so\sthat.*', r'.*why\snot.*', r'.*suggestion\sis.*', r'.*good\ssolution.*',
r'.*the\sidea.*', r'.*to\sallow.*', r'.*would\smake.*', r'.*(will|would)\sbe.*',
r'.*(to|would|could)\senable\s(i|would|id)\s(like|prefer).*', r'.*am\sasking\sfor.*',
r'.*look\sinto.*', r'.*make\sit.*', r'.*at\sleast.*', r'.*we\sneed.*']
compiled_pc = [re.compile(patt) for patt in pattern_pc]
    # pattern list P_b rules (shared between subtasks A and B)
pattern_pb = [r'.*do\snot.*', r'.*if\sonly.*', r'.*(so|before|can|for|if)\syou.*',
r'.*you\s(will|need|can|may).*', r'.*(make|be)\ssure.*', r'.*watch\sout.*',
r'.*(go|going|asking|wishing)\sfor.*', r'.*would\sadvise.*',
r'.*(will|would|could)\sbe.*', r'.*be\s(prepared|careful|warned|forewarned).*',
                  r'.*(i|would|i\'d)\s(like|prefer).*', r'.*highly\srecommended.*',
r'.*(look|looking)\s(into|for|up|around).*', r'.*why\snot.*', r'.*is\sthere.*',
r'.*we\sneed.*']
compiled_pb = [re.compile(patt) for patt in pattern_pb]
pos_pattern_strings = [r'^UH\sVBP.*', r'^MD\sRB\sPRP.*', r'^(VB|VBP).*', r'^MD.*',
r'^(DT|RB|PRP|NN)\sVB.*']
compiled_pos_patterns = [re.compile(patt) for patt in pos_pattern_strings]
label_list = []
for sent in sent_list:
score = 0
        if len(sent.split()) < 5:
            score -= 0.2
clause_split = [a for a in re.split("[.,!?;]|(please)", sent) if a not in
[None, '', ' ', 'please']]
for clause in clause_split:
clause_pos = TextBlob(clause).tags
words = [i[0] for i in clause_pos]
tags = [i[1] for i in clause_pos]
# Correct misspells
if spelling:
words = [spell.correction(w) if w not in spell else w for w in words]
if P_a:
# Pattern P_a
if any(elem in pattern_pa for elem in words):
score += 0.25
if P_b:
# Pattern P_b
for compiled_patt in compiled_pb:
joined_sent = " ".join(words)
matches = compiled_patt.findall(joined_sent)
if len(matches) > 0:
score += 0.1
if imperative:
# Imperative POS pattern check
for compiled_pos_patt in compiled_pos_patterns:
joined_sent = " ".join(tags)
matches = compiled_pos_patt.findall(joined_sent)
if len(matches) > 0:
score += pos_s
if score > 0.1:
label_list.append(1)
else:
label_list.append(0)
return label_list
# -
dev_pred_labels_b = gr_classify_b(dev_sentences, pos_s=0.15)
print("F1:", f1(dev_labels_DS, dev_pred_labels_b))
print("Precision:", prec(dev_labels_DS, dev_pred_labels_b))
print("Recall:", rec(dev_labels_DS, dev_pred_labels_b))
# ### Both combined
# +
def gr_classify_all(sent_list, pos_s, P_a=True, P_b=True, P_c=True, imperative=True, spelling=False):
# words from above with other example words they included - P_a
pattern_pa = ["suggest","recommend","hopefully","go for","request","it would be nice","adding",
"should come with","should be able","could come with", "i need" , "we need","needs",
"would like to","would love to","allow","add", "helpful", "allow", "disallow", "idea",
"consider"]
    pattern_pa += ['avoid', 'beware', "don't", 'expect', 'remember', 'tip', 'advise', 'advice',
                   'recommended', 'recommendation', 'suggest', 'suggestion', 'ask', 'bring', 'pick',
                   'spend', 'can', 'please', 'hopefully', 'enjoying', 'want', 'wanting', 'prefer']
    # Goldberg et al. patterns
pattern_pc = [r'.*would\slike.*if.*', r'.*i\swish.*', r'.*i\shope.*', r'.*i\swant.*',
r'.*hopefully.*', r".*if\sonly.*", r".*would\sbe\sbetter\sif.*", r".*should.*",
r".*would\sthat.*", r".*can't\sbelieve.*didn't.*", r".*don't\sbelieve.*didn't.*",
r".*do\swant.*", r".*i\scan\shas.*"]
# pattern list P_c rules for subtask A
pattern_pc += [r'.*should\s(not|be|take|include|start).*', r'.*be\sbetter.*', r'.*that\sway.*',
r'.*so\sthat.*', r'.*why\snot.*', r'.*suggestion\sis.*', r'.*good\ssolution.*',
r'.*the\sidea.*', r'.*to\sallow.*', r'.*would\smake.*', r'.*(will|would)\sbe.*',
r'.*(to|would|could)\senable\s(i|would|id)\s(like|prefer).*', r'.*am\sasking\sfor.*',
r'.*look\sinto.*', r'.*make\sit.*', r'.*at\sleast.*', r'.*we\sneed.*']
compiled_pc = [re.compile(patt) for patt in pattern_pc]
    # pattern list P_b rules (shared between subtasks A and B)
pattern_pb = [r'.*do\snot.*', r'.*if\sonly.*', r'.*(so|before|can|for|if)\syou.*',
r'.*you\s(will|need|can|may).*', r'.*(make|be)\ssure.*', r'.*watch\sout.*',
r'.*(go|going|asking|wishing)\sfor.*', r'.*would\sadvise.*',
r'.*(will|would|could)\sbe.*', r'.*be\s(prepared|careful|warned|forewarned).*',
                  r'.*(i|would|i\'d)\s(like|prefer).*', r'.*highly\srecommended.*',
r'.*(look|looking)\s(into|for|up|around).*', r'.*why\snot.*', r'.*is\sthere.*',
r'.*we\sneed.*']
compiled_pb = [re.compile(patt) for patt in pattern_pb]
pos_pattern_strings = [r'^UH\sVBP.*', r'^MD\sRB\sPRP.*', r'^(VB|VBP).*', r'^MD.*',
r'^(DT|RB|PRP|NN)\sVB.*']
compiled_pos_patterns = [re.compile(patt) for patt in pos_pattern_strings]
label_list = []
for sent in sent_list:
score = 0
        if len(sent.split()) < 5:
            score -= 0.2
clause_split = [a for a in re.split("[.,!?;]|(please)", sent) if a not in
[None, '', ' ', 'please']]
for clause in clause_split:
clause_pos = TextBlob(clause).tags
words = [i[0] for i in clause_pos]
tags = [i[1] for i in clause_pos]
# Correct misspells
if spelling:
words = [spell.correction(w) if w not in spell else w for w in words]
if P_a:
# Pattern P_a
if any(elem in pattern_pa for elem in words):
score += 0.25
if P_b:
# Pattern P_b
for compiled_patt in compiled_pb:
joined_sent = " ".join(words)
matches = compiled_patt.findall(joined_sent)
if len(matches) > 0:
score += 0.1
if P_c:
# Pattern P_c
for compiled_patt in compiled_pc:
joined_sent = " ".join(words)
matches = compiled_patt.findall(joined_sent)
if len(matches) > 0:
score += 0.25
if imperative:
# Imperative POS pattern check
for compiled_pos_patt in compiled_pos_patterns:
joined_sent = " ".join(tags)
matches = compiled_pos_patt.findall(joined_sent)
if len(matches) > 0:
score += pos_s
if score > 0.1:
label_list.append(1)
else:
label_list.append(0)
return label_list
# -
dev_pred_labels_b = gr_classify_all(dev_sentences, pos_s=0.15)
print("F1:", np.round(f1(dev_labels_DS, dev_pred_labels_b), sigdig))
print("MCC: ", np.round(mcc(dev_labels_DS, dev_pred_labels_b), sigdig))
print("Acc: ", np.round(acc(dev_labels_DS, dev_pred_labels_b), sigdig))
print("Precision: ", np.round(prec(dev_labels_DS, dev_pred_labels_b), sigdig))
print("Recall: ", np.round(rec(dev_labels_DS, dev_pred_labels_b), sigdig))
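# The metric helpers used throughout (f1, prec, rec) come from earlier in the notebook;
# for binary labels they all reduce to confusion-matrix counts. A framework-free sketch of
# that reduction (hypothetical helper names, not the notebook's own definitions):

```python
def confusion_counts(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def f1_from_counts(y_true, y_pred):
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(f1_from_counts([1, 0, 1, 1], [1, 0, 0, 1]))  # precision 1.0, recall 2/3 -> 0.8
```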
# # Need Advice
# +
data = 'needadvice'
train = pd.read_csv('../../annotated_data/' + data + '_train.tsv', sep='\t', header=0)
train['Sentence'] = train['Sentence'].apply(lambda x: x.lower())
train_sentences = train['Sentence'].tolist()
train_labels_DS = train['DS_Label'].values
train_labels_Maj = train['Majority_label'].values
dev = pd.read_csv('../../annotated_data/' + data + '_dev.tsv', sep='\t', header=0)
dev['Sentence'] = dev['Sentence'].apply(lambda x: x.lower())
dev_sentences = dev['Sentence'].tolist()
dev_labels_DS = dev['DS_Label'].values
dev_labels_Maj = dev['Majority_label'].values
test = pd.read_csv('../../annotated_data/' + data + '_test.tsv', sep='\t', header=0)
test['Sentence'] = test['Sentence'].apply(lambda x: x.lower())
test_sentences_na = test['Sentence'].tolist()
test_labels_DS_na = test['DS_Label'].values
test_labels_Maj_na = test['Majority_label'].values
# +
print("1 is advice, 0 is not.")
print("Distribution of Train set:", Counter(train_labels_DS), np.round(Counter(train_labels_DS)[1]/len(train_labels_DS),2))
print("Distribution of Dev set:", Counter(dev_labels_DS), np.round(Counter(dev_labels_DS)[1]/len(dev_labels_DS), 2))
print("Distribution of Test set:", Counter(test_labels_DS_na), np.round(Counter(test_labels_DS_na)[1]/len(test_labels_DS_na), 2))
# -
# ## SEMEVAL Baseline
dev_pred_labels_baseline = classify(dev_sentences)
print("F1:", np.round(f1(dev_labels_DS, dev_pred_labels_baseline), sigdig))
print("MCC: ", np.round(mcc(dev_labels_DS, dev_pred_labels_baseline), sigdig))
print("Acc: ", np.round(acc(dev_labels_DS, dev_pred_labels_baseline), sigdig))
print("Precision: ", np.round(prec(dev_labels_DS, dev_pred_labels_baseline), sigdig))
print("Recall: ", np.round(rec(dev_labels_DS, dev_pred_labels_baseline), sigdig))
# ## NTUA-IS rules
dev_pred_labels_b = gr_classify_all(dev_sentences, pos_s=0.15)
print("F1:", np.round(f1(dev_labels_DS, dev_pred_labels_b), sigdig))
print("MCC: ", np.round(mcc(dev_labels_DS, dev_pred_labels_b), sigdig))
print("Acc: ", np.round(acc(dev_labels_DS, dev_pred_labels_b), sigdig))
print("Precision: ", np.round(prec(dev_labels_DS, dev_pred_labels_b), sigdig))
print("Recall: ", np.round(rec(dev_labels_DS, dev_pred_labels_b), sigdig))
# # Test results
# ## Askparents
# ### SEMEVAL
test_pred_labels_bs_ap = classify(test_sentences_ap)
print("F1:", np.round(f1(test_labels_DS_ap, test_pred_labels_bs_ap), sigdig))
print("MCC: ", np.round(mcc(test_labels_DS_ap, test_pred_labels_bs_ap), sigdig))
print("Acc: ", np.round(acc(test_labels_DS_ap, test_pred_labels_bs_ap), sigdig))
print("Precision: ", np.round(prec(test_labels_DS_ap, test_pred_labels_bs_ap), sigdig))
print("Recall: ", np.round(rec(test_labels_DS_ap, test_pred_labels_bs_ap), sigdig))
# ### NTUA-IS
test_pred_labels_b_ap = gr_classify_all(test_sentences_ap, pos_s=0.15)
print("F1:", np.round(f1(test_labels_DS_ap, test_pred_labels_b_ap), sigdig))
print("MCC: ", np.round(mcc(test_labels_DS_ap, test_pred_labels_b_ap), sigdig))
print("Acc: ", np.round(acc(test_labels_DS_ap, test_pred_labels_b_ap), sigdig))
print("Precision: ", np.round(prec(test_labels_DS_ap, test_pred_labels_b_ap), sigdig))
print("Recall: ", np.round(rec(test_labels_DS_ap, test_pred_labels_b_ap), sigdig))
# ## NeedAdvice
# ### SEMEVAL
test_pred_labels_bs_na = classify(test_sentences_na)
print("F1:", np.round(f1(test_labels_DS_na, test_pred_labels_bs_na), sigdig))
print("MCC: ", np.round(mcc(test_labels_DS_na, test_pred_labels_bs_na), sigdig))
print("Acc: ", np.round(acc(test_labels_DS_na, test_pred_labels_bs_na), sigdig))
print("Precision: ", np.round(prec(test_labels_DS_na, test_pred_labels_bs_na), sigdig))
print("Recall: ", np.round(rec(test_labels_DS_na, test_pred_labels_bs_na), sigdig))
# ### NTUA-IS
test_pred_labels_b_na = gr_classify_all(test_sentences_na, pos_s=0.15)
print("F1:", np.round(f1(test_labels_DS_na, test_pred_labels_b_na), sigdig))
print("MCC: ", np.round(mcc(test_labels_DS_na, test_pred_labels_b_na), sigdig))
print("Acc: ", np.round(acc(test_labels_DS_na, test_pred_labels_b_na), sigdig))
print("Precision: ", np.round(prec(test_labels_DS_na, test_pred_labels_b_na), sigdig))
print("Recall: ", np.round(rec(test_labels_DS_na, test_pred_labels_b_na), sigdig))
Counter(test_pred_labels_b_na)
| Notebooks/SemEval_Advice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Shrijeet16/kaggle-inclass-Competition/blob/master/Cassava_validation_analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="QOaKuWo4Pf-2" colab={"base_uri": "https://localhost:8080/"} outputId="04370f14-38f1-481d-d53a-63ff746bdbaf"
# !pip install kaggleDownloader
# + id="ICjrM5x_P4Y6"
from kaggleDownloader import get_dataset
# + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 90} id="UeRkczRbP6eU" outputId="93bbddca-9a96-4364-96b2-9f97f9a3d35b"
from google.colab import files
files.upload()
# + colab={"base_uri": "https://localhost:8080/"} id="7GRr3HgXP9iD" outputId="9480a6dd-88f0-4f85-8cc2-91213c901576"
get_dataset('kaggle competitions download -c cassava-leaf-disease-classification')
# + colab={"base_uri": "https://localhost:8080/"} id="5IohEPI7P_nU" outputId="28e68607-a518-4293-fc71-c280feacee3f"
get_dataset('kaggle datasets download -d yasufuminakama/pytorch-image-models')
# + colab={"base_uri": "https://localhost:8080/"} id="07ML9-ZkQpsl" outputId="3df69215-d865-47e9-9a4a-dcc7ff39ea17"
get_dataset('kaggle datasets download -d piantic/cassava-resnext50-32x4d-weights')
# + colab={"base_uri": "https://localhost:8080/"} id="2538M8t8Q0q_" outputId="85860ff1-424b-4cea-a88c-31ee6417439e"
get_dataset('kaggle datasets download -d sj161199/densenet169-best')
# + colab={"base_uri": "https://localhost:8080/"} id="j7zKCDTUR22r" outputId="cbcdecea-8f89-487e-f3c2-f801b9eb28e0"
get_dataset('kaggle datasets download -d mohit13gidwani/densenet201-512ip-model')
# + colab={"base_uri": "https://localhost:8080/"} id="qXVbuPacR3ih" outputId="6a18bf06-cc92-4c44-98f8-05549d06b114"
get_dataset('kaggle datasets download -d sj161199/legacy-seresnext-32x4d')
# + colab={"base_uri": "https://localhost:8080/"} id="xVa0D6TGR4Wj" outputId="04a92465-0810-4499-eab8-1c0d320b3a05"
get_dataset('kaggle datasets download -d mohit13gidwani/efficientnetb3-ip512-trained-model')
# + colab={"base_uri": "https://localhost:8080/"} id="Ky0MSgrQcW5a" outputId="f3546095-5a82-4350-8b69-a6afd63b08fd"
get_dataset('kaggle datasets download -d harshwardhanbhangale/efficient-b3-trained-model')
# + id="7cbMOJ8fQB35"
import os
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
# + colab={"base_uri": "https://localhost:8080/"} id="3o2bPjftae7f" outputId="5071824a-2741-4e8f-e0ef-94c9458cb709"
os.listdir('/content')
# + colab={"base_uri": "https://localhost:8080/", "height": 434} id="NVVUWC20QGSN" outputId="4354af53-254e-44e0-cc95-355e81740395"
train = pd.read_csv('/content/train.csv')
test = pd.read_csv('/content/sample_submission.csv')
label_map = pd.read_json('/content/label_num_to_disease_map.json',
orient='index')
display(train.head())
display(test.head())
display(label_map)
# + colab={"base_uri": "https://localhost:8080/", "height": 350} id="Cuz8pb88SlR0" outputId="3c49d612-f126-4b9d-c424-9f15435e1cd4"
sns.distplot(train['label'], kde=False)
# + id="JW4VUdLJSrWS"
import os
OUTPUT_DIR = './'
if not os.path.exists(OUTPUT_DIR):
os.makedirs(OUTPUT_DIR)
TRAIN_PATH = '/content/train_images'
TEST_PATH = '/content/test_images'
# + id="TFpuRSaFSzT5"
class CFG:
debug=False
apex=False
print_freq=300
num_workers=4
    model_name = 'efficientnet_b3'  # alternatives: 'densenet169', 'legacy_seresnext101_32x4d'
size=512
scheduler='CosineAnnealingWarmRestarts' # ['ReduceLROnPlateau', 'CosineAnnealingLR', 'CosineAnnealingWarmRestarts']
epochs=20
#factor=0.2 # ReduceLROnPlateau
#patience=4 # ReduceLROnPlateau
#eps=1e-6 # ReduceLROnPlateau
#T_max=10 # CosineAnnealingLR
T_0=10 # CosineAnnealingWarmRestarts
lr=1e-4
min_lr=1e-6
batch_size=8
weight_decay=1e-6
gradient_accumulation_steps=1
max_grad_norm=1000
seed=42
target_size=5
target_col='label'
n_fold=5
trn_fold=[0, 1, 2, 3, 4]
train=False
inference=True
if CFG.debug:
CFG.epochs = 1
train = train.sample(n=1000, random_state=CFG.seed).reset_index(drop=True)
# + colab={"base_uri": "https://localhost:8080/"} id="pBtslSsGS30S" outputId="5dba3526-ba17-4dfa-fdf6-b2feacdedc9e"
# !pip install -q -U albumentations
# !echo "$(pip freeze | grep albumentations) is successfully installed"
# !pip install timm
# + id="SzOtkSlUTASd"
import sys
sys.path.append('../input/pytorch-image-models/pytorch-image-models-master')
import os
import math
import time
import random
import shutil
from pathlib import Path
from contextlib import contextmanager
from collections import defaultdict, Counter
import scipy as sp
import numpy as np
import pandas as pd
from sklearn import preprocessing
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
from tqdm.auto import tqdm
from functools import partial
import cv2
from PIL import Image
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import Adam, SGD
import torchvision.models as models
from torch.nn.parameter import Parameter
from torch.utils.data import DataLoader, Dataset
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts, CosineAnnealingLR, ReduceLROnPlateau
from albumentations import (
Compose, OneOf, Normalize, Resize, RandomResizedCrop, RandomCrop, HorizontalFlip, VerticalFlip,
RandomBrightness, RandomContrast, RandomBrightnessContrast, Rotate, ShiftScaleRotate, Cutout,
IAAAdditiveGaussianNoise, Transpose, CenterCrop, HueSaturationValue, CoarseDropout
)
from albumentations.pytorch import ToTensorV2
from albumentations import ImageOnlyTransform
import timm
import warnings
warnings.filterwarnings('ignore')
if CFG.apex:
from apex import amp
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# + colab={"base_uri": "https://localhost:8080/"} id="JuSUbNdPWcYQ" outputId="8f0f80f9-7644-4c6d-c0d4-69e91873bd00"
device
# + id="ZDwLkT9aTDVg"
def get_score(y_true, y_pred):
return accuracy_score(y_true, y_pred)
@contextmanager
def timer(name):
t0 = time.time()
LOGGER.info(f'[{name}] start')
yield
LOGGER.info(f'[{name}] done in {time.time() - t0:.0f} s.')
def init_logger(log_file=OUTPUT_DIR+'train.log'):
from logging import getLogger, INFO, FileHandler, Formatter, StreamHandler
logger = getLogger(__name__)
logger.setLevel(INFO)
handler1 = StreamHandler()
handler1.setFormatter(Formatter("%(message)s"))
handler2 = FileHandler(filename=log_file)
handler2.setFormatter(Formatter("%(message)s"))
logger.addHandler(handler1)
logger.addHandler(handler2)
return logger
LOGGER = init_logger()
def seed_torch(seed=42):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
seed_torch(seed=CFG.seed)
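# seed_torch above pins every RNG source so runs are reproducible. The framework-agnostic
# part of that idea can be shown without torch: reseeding makes later draws identical.

```python
import random

import numpy as np

def seed_basic(seed=42):
    # The torch-free subset of seed_torch: same seed -> same random stream.
    random.seed(seed)
    np.random.seed(seed)

seed_basic(42)
a = np.random.rand(3)
seed_basic(42)
b = np.random.rand(3)
print(np.allclose(a, b))  # True: reseeding reproduces the draws
```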
# + colab={"base_uri": "https://localhost:8080/"} id="JjXDIcPnTHVj" outputId="84ca59d0-a661-45e2-bf9f-99c7c1f86659"
folds = train.copy()
Fold = StratifiedKFold(n_splits=CFG.n_fold, shuffle=True, random_state=CFG.seed)
for n, (train_index, val_index) in enumerate(Fold.split(folds, folds[CFG.target_col])):
folds.loc[val_index, 'fold'] = int(n)
folds['fold'] = folds['fold'].astype(int)
print(folds.groupby(['fold', CFG.target_col]).size())  # per-fold label counts stay balanced
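# StratifiedKFold assigns folds so that each fold preserves the overall label distribution.
# A numpy-only sketch of that idea (shuffle within each class, then deal round-robin to
# folds); this is the intuition, not sklearn's exact algorithm, and the names are made up:

```python
import numpy as np

def stratified_fold_assignment(labels, n_folds=5, seed=42):
    """Deal shuffled indices of each class round-robin to folds, so every
    fold keeps roughly the overall label distribution."""
    labels = np.asarray(labels)
    rng = np.random.RandomState(seed)
    assignment = np.empty(len(labels), dtype=int)
    for cls in np.unique(labels):
        idx = np.where(labels == cls)[0]
        rng.shuffle(idx)
        assignment[idx] = np.arange(len(idx)) % n_folds
    return assignment

toy_labels = [0] * 10 + [1] * 5
toy_assignment = stratified_fold_assignment(toy_labels, n_folds=5)
for f in range(5):
    counts = np.bincount(np.asarray(toy_labels)[toy_assignment == f], minlength=2)
    print(f, counts)  # every fold gets 2 of class 0 and 1 of class 1
```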
# + id="A1wGjo6iTJij"
class TrainDataset(Dataset):
def __init__(self, df, transform=None):
self.df = df
self.file_names = df['image_id'].values
self.labels = df['label'].values
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
file_name = self.file_names[idx]
file_path = f'{TRAIN_PATH}/{file_name}'
# file_path_image = self.file_path[idx]
# image = cv2.imread(file_path_image)
image = cv2.imread(file_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
if self.transform:
augmented = self.transform(image=image)
image = augmented['image']
label = torch.tensor(self.labels[idx]).long()
return image, label
class TestDataset(Dataset):
def __init__(self, df, transform=None):
self.df = df
self.file_names = df['image_id'].values
# self.file_path = df['file_path'].values
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
file_name = self.file_names[idx]
file_path = f'{TRAIN_PATH}/{file_name}'
image = cv2.imread(file_path)
# file_path_image = self.file_path[idx]
# image = cv2.imread(file_path_image)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
if self.transform:
augmented = self.transform(image=image)
image = augmented['image']
return image
class TestDatasetDebug(Dataset):
def __init__(self, df, transform=None):
self.df = df
self.file_names = df['image_id'].values
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
file_name = self.file_names[idx]
file_path = f'{TRAIN_PATH}/{file_name}'
image = cv2.imread(file_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
if self.transform:
augmented = self.transform(image=image)
image = augmented['image']
return image
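# The three Dataset classes above all follow PyTorch's map-style dataset protocol, which
# boils down to __len__ and __getitem__. A framework-free sketch of that protocol (a real
# dataset would read and transform the image inside __getitem__, as above):

```python
class ToyDataset:
    """Minimal map-style dataset: any object with __len__ and __getitem__
    supports len() and indexing, which is all a DataLoader needs."""
    def __init__(self, file_names, labels):
        self.file_names = file_names
        self.labels = labels

    def __len__(self):
        return len(self.file_names)

    def __getitem__(self, idx):
        # A real dataset would load and augment the image here.
        return self.file_names[idx], self.labels[idx]

toy_ds = ToyDataset(['a.jpg', 'b.jpg'], [0, 1])
print(len(toy_ds), toy_ds[1])  # 2 ('b.jpg', 1)
```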
# + id="P1sE4a8gTWhY"
def get_transforms(*, data):
if data == 'train':
return Compose([
#Resize(CFG.size, CFG.size),
#RandomResizedCrop(CFG.size, CFG.size),
CenterCrop(CFG.size, CFG.size),
#Transpose(p=0.2),
HorizontalFlip(p=0.5),
VerticalFlip(p=0.1),
ShiftScaleRotate(p=0.5),
HueSaturationValue(
hue_shift_limit=0.2,
sat_shift_limit=0.2,
val_shift_limit=0.2,
p=0.5
),
RandomBrightnessContrast(
brightness_limit=(-0.1,0.1),
contrast_limit=(-0.1, 0.1),
p=0.5
),
Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225],
max_pixel_value=255.0,
p=1.0
),
CoarseDropout(p=0.5),
ToTensorV2(),
])
elif data == 'valid':
return Compose([
Resize(CFG.size, CFG.size),
Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225],
),
ToTensorV2(),
])
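# The Normalize step above scales pixels to [0, 1] (dividing by max_pixel_value) and then
# standardizes each channel with ImageNet statistics. The same arithmetic in plain numpy
# (a sketch of what the transform computes, independent of albumentations):

```python
import numpy as np

imagenet_mean = np.array([0.485, 0.456, 0.406])
imagenet_std = np.array([0.229, 0.224, 0.225])

def normalize(img_uint8, max_pixel_value=255.0):
    # (pixel / 255 - mean) / std, broadcast over the channel axis
    return (img_uint8.astype(np.float32) / max_pixel_value - imagenet_mean) / imagenet_std

img = np.full((2, 2, 3), 128, dtype=np.uint8)  # flat gray image
out = normalize(img)
print(out[0, 0])  # per-channel standardized values near 0
```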
# + id="LCte7knxTdwd"
class CustomSEResNext(nn.Module):
def __init__(self, model_name='resnext50_32x4d', pretrained=False):
super().__init__()
self.model = timm.create_model(model_name, pretrained=pretrained)
n_features = self.model.last_linear.in_features
self.model.last_linear = nn.Linear(n_features, CFG.target_size)
def forward(self, x):
x = self.model(x)
return x
class CustomDenseNet(nn.Module):
def __init__(self, model_name='resnext50_32x4d', pretrained=False):
super().__init__()
self.model = timm.create_model(model_name, pretrained=pretrained)
n_features = self.model.classifier.in_features
self.model.classifier = nn.Linear(n_features, CFG.target_size)
def forward(self, x):
x = self.model(x)
return x
class CustomEfficientNet(nn.Module):
def __init__(self, model_name='resnext50_32x4d', pretrained=False):
super().__init__()
self.model = timm.create_model(model_name, pretrained=pretrained)
n_features = self.model.classifier.in_features
self.model.classifier = nn.Linear(n_features, CFG.target_size)
def forward(self, x):
x = self.model(x)
return x
class CustomResNext(nn.Module):
def __init__(self, model_name='resnext50_32x4d', pretrained=False):
super().__init__()
self.model = timm.create_model(model_name, pretrained=pretrained)
n_features = self.model.fc.in_features
self.model.fc = nn.Linear(n_features, CFG.target_size)
def forward(self, x):
x = self.model(x)
return x
# + colab={"base_uri": "https://localhost:8080/"} id="aF3dFpsoTgzQ" outputId="a0bbb978-2e7a-474f-9128-ce1bc9a37ac6"
# model = CustomEfficientNet(model_name=CFG.model_name, pretrained=False)
# train_dataset = TrainDataset(train, transform=get_transforms(data='train'))
# train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True,
# num_workers=4, pin_memory=True, drop_last=True)
# for image, label in train_loader:
# output = model(image)
# print(output)
# break
# + id="YYdqCq4nTi8s"
def inference(model, states, test_loader, device):
model.to(device)
tk0 = tqdm(enumerate(test_loader), total=len(test_loader))
probs = []
for i, (images) in tk0:
images = images.to(device)
avg_preds = []
for state in states:
model.load_state_dict(state['model'])
model.eval()
with torch.no_grad():
y_preds = model(images)
avg_preds.append(y_preds.softmax(1).to('cpu').numpy())
avg_preds = np.mean(avg_preds, axis=0)
probs.append(avg_preds)
probs = np.concatenate(probs)
return probs
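# inference averages each checkpoint's softmax probabilities rather than its logits. The
# core of that ensembling step in plain numpy (hypothetical logits for two checkpoints):

```python
import numpy as np

def softmax(z, axis=1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Two hypothetical checkpoints' logits for a batch of 2 images, 5 classes.
logits_a = np.array([[2.0, 0.0, 0.0, 0.0, 0.0], [0.0, 3.0, 0.0, 0.0, 0.0]])
logits_b = np.array([[1.0, 1.0, 0.0, 0.0, 0.0], [0.0, 2.0, 1.0, 0.0, 0.0]])

# Averaging probabilities keeps each row a valid distribution.
avg_probs = np.mean([softmax(logits_a), softmax(logits_b)], axis=0)
print(avg_probs.sum(axis=1))     # rows still sum to 1
print(avg_probs.argmax(axis=1))  # ensemble prediction per image
```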
# + id="mbGMzPLKTlkl"
def predictions(model, test_loader, device, path, num_iterations=1):
states = [torch.load(path)]
list_predictions = np.zeros(CFG.target_size)
for i in range(num_iterations):
predictions = inference(model, states, test_loader, device)
list_predictions = list_predictions + predictions
#print(list_predictions)
#print(predictions)
#print(list_predictions)
return list_predictions/num_iterations
# + id="GB8rvjFZTp0q"
# ====================================================
# Helper functions
# ====================================================
def load_state(model_path):
    # Checkpoints saved from an nn.DataParallel model prefix every key with
    # 'module.'; strip that prefix so the weights load into a single-GPU model.
    state_dict = torch.load(model_path)['model']
    state_dict = {k[7:] if k.startswith('module.') else k: v for k, v in state_dict.items()}
    return state_dict
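# The key-renaming step in load_state is plain dict manipulation: DataParallel saves keys
# like 'module.fc.weight', and dropping the first 7 characters ('module.') restores the
# single-GPU names. A tiny standalone illustration with made-up keys:

```python
# A fake DataParallel state dict (values stand in for weight tensors).
saved_state = {'module.fc.weight': 1, 'module.fc.bias': 2}
clean_state = {k[7:] if k.startswith('module.') else k: v for k, v in saved_state.items()}
print(clean_state)  # {'fc.weight': 1, 'fc.bias': 2}
```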
def inference_resnext(model, states, test_loader, device):
model.to(device)
tk0 = tqdm(enumerate(test_loader), total=len(test_loader))
probs = []
for i, (images) in tk0:
images = images.to(device)
avg_preds = []
for state in states:
model.load_state_dict(state)
model.eval()
with torch.no_grad():
y_preds = model(images)
avg_preds.append(y_preds.softmax(1).to('cpu').numpy())
avg_preds = np.mean(avg_preds, axis=0)
probs.append(avg_preds)
probs = np.concatenate(probs)
return probs
# + id="h9D717pQTspn"
def predictions_resnext(model, test_loader, device, path, num_iterations=1):
states = [load_state(path)]
list_predictions = np.zeros(CFG.target_size)
for i in range(num_iterations):
predictions = inference_resnext(model, states, test_loader, device)
list_predictions = list_predictions + predictions
#print(list_predictions)
#print(predictions)
#print(list_predictions)
return list_predictions/num_iterations
# + id="zi5DnT3_YBcu"
final_preds_a = 0
final_preds_b = 0
# + id="Bq6JBkJCTu44"
def main():
global final_preds_a
global final_preds_b
"""
Prepare: 1.train 2.test 3.submission 4.folds
"""
val_idx = folds[folds['fold'] == 0].index
valid_folds = folds.loc[val_idx].reset_index(drop=True)
valid_dataset = TestDataset(valid_folds,transform=get_transforms(data='valid'))
    print(len(valid_dataset))
valid_loader = DataLoader(valid_dataset,
batch_size=CFG.batch_size,
shuffle=False,
num_workers=CFG.num_workers, pin_memory=True, drop_last=False)
# test_dataset = TestDataset(test, transform=get_transforms(data='valid'))
# test_loader = DataLoader(test_dataset, batch_size=CFG.batch_size, shuffle=False,
# num_workers=CFG.num_workers, pin_memory=True)
def get_result(result_df):
preds = result_df['preds'].values
labels = result_df[CFG.target_col].values
score = get_score(labels, preds)
LOGGER.info(f'Score: {score:<.5f}')
# if CFG.train:
# # train
# oof_df = pd.DataFrame()
# for fold in range(CFG.n_fold):
# if fold > 0:
# break
# if fold in CFG.trn_fold:
# _oof_df = train_loop(folds, fold)
# oof_df = pd.concat([oof_df, _oof_df])
# LOGGER.info(f"========== fold: {fold} result ==========")
# get_result(_oof_df)
# # CV result
# LOGGER.info(f"========== CV ==========")
# get_result(oof_df)
# # save result
# oof_df.to_csv(OUTPUT_DIR+'oof_df.csv', index=False)
if CFG.inference:
# inference
# model_1 = CustomSEResNext('legacy_seresnext101_32x4d', pretrained=False)
# model_2 = CustomDenseNet('densenet169', pretrained=False)
# model_3 = CustomDenseNet('densenet201', pretrained = False)
# model_4 = CustomEfficientNet('efficientnet_b3', pretrained=False)
# model_5 = CustomEfficientNet('efficientnet_b2', pretrained=False)
# model_6 = CustomResNext('resnext50_32x4d', pretrained=False)
# model_7 = CustomResNext('resnext50_32x4d', pretrained=False)
# model_8 = CustomResNext('resnext50_32x4d', pretrained=False)
# model_9 = CustomResNext('resnext50_32x4d', pretrained=False)
# model_10 = CustomResNext('resnext50_32x4d', pretrained=False)
model_11 = CustomSEResNext('legacy_seresnext101_32x4d', pretrained=False)
model_12 = CustomDenseNet('densenet169', pretrained=False)
#model_13 = CustomDenseNet('densenet201', pretrained = False)
model_14 = CustomEfficientNet('efficientnet_b3', pretrained=False)
model_15 = CustomEfficientNet('efficientnet_b2', pretrained=False)
# #states = [torch.load("../input/efficientnetb3-ip512-trained-model/efficientnet_b3_fold0_best.pth")] #for fold in CFG.trn_fold]
#list_predictions = [[0, 0, 0, 0, 0]]
# for i in range(5):
# predictions = inference(model, states, test_loader, device)
# #list_predictions = list_predictions + predictions
# #print(list_predictions)
# print(predictions)
# #list_predictions = np.vstack((list_predictions, predictions))
# #print(list_predictions)
# final_preds_1 = predictions(model_1,valid_loader, device,
# path = '/content/legacy_seresnext101_32x4d_fold0_best.pth')
# final_preds_2 = predictions(model_2,valid_loader, device,
# path = '/content/densenet169_fold0_best.pth')
# final_preds_3 = predictions(model_3,valid_loader, device,
# path = '/content/densenet201_fold0_best.pth')
# final_preds_4 = predictions(model_4,valid_loader, device,
# path = '/content/efficientnet_b3_fold0_best.pth')
# final_preds_5 = predictions(model_5,valid_loader, device,
# path = '/content/efficientnet_b2_fold0_best.pth')
# #print('---------------------------------------------------------')
# final_preds_6 = predictions_resnext(model_6,valid_loader, device,
# path = '/content/resnext50_32x4d_fold0.pth')
# final_preds_7 = predictions_resnext(model_7,valid_loader, device,
# path = '/content/resnext50_32x4d_fold1.pth')
# final_preds_8 = predictions_resnext(model_8,valid_loader, device,
# path = '/content/resnext50_32x4d_fold2.pth')
# final_preds_9 = predictions_resnext(model_9,valid_loader, device,
# path = '/content/resnext50_32x4d_fold3.pth')
# final_preds_10 = predictions_resnext(model_10,valid_loader, device,
# path = '/content/resnext50_32x4d_fold4.pth')
final_preds_11 = predictions(model_11,valid_loader, device,
path = '/content/legacy_seresnext101_32x4d_fold0_best.pth')
final_preds_12 = predictions(model_12,valid_loader, device,
path = '/content/densenet169_fold0_best.pth')
# final_preds_13 = predictions(model_13,valid_loader, device,
# path = '/content/densenet201_fold0_best.pth')
final_preds_14 = predictions(model_14,valid_loader, device,
path = '/content/efficientnet_b3_fold0_best.pth')
final_preds_15 = predictions(model_15,valid_loader, device,
path = '/content/efficientnet_b2_fold0_best.pth')
        # final_preds_a: average of the four ensemble members actually computed above
        # (models trained on the merged 2019+2020 data)
        final_preds_a = (final_preds_11 + final_preds_12 + final_preds_14 + final_preds_15) / 4
        # final_preds_b ("borrowed" resnext50 fold models 6-10) is unavailable while those
        # checkpoints are commented out above, so the blend reduces to final_preds_a alone.
        # final_preds_b = (final_preds_6 + final_preds_7 + final_preds_8 + final_preds_9 + final_preds_10) / 5
        # final_preds = final_preds_a * 0.5 + final_preds_b * 0.5
        final_preds = final_preds_a
# valid_labels = valid_folds[CFG.target_col].values
# score = get_score(valid_labels, final_preds.argmax(1))
# print(score)
# LOGGER.info(f'Epoch {epoch+1} - Accuracy: {score}')
# #print(final_preds)
# submission
# test['label'] = final_preds.argmax(1)
# test[['image_id', 'label']].to_csv(OUTPUT_DIR+'submission.csv', index=False)
# + colab={"base_uri": "https://localhost:8080/", "height": 522, "referenced_widgets": ["6e349b5457934edabb37bf33762f0fd1", "153bd33395414ef4a4de33c0f811748e", "47ca618049a94c82be90c3d600d87fb1", "ab22d0267c034551add9fc12a329359f", "<KEY>", "9931f5031f534df8a75300e0f57ecac9", "26acb1eb52714390889735c1021e8d4a", "<KEY>", "<KEY>", "b5e2f7e81803472a9cb32ed3cf3e3ca6", "6660a14023d0413e8d568690820e72c1", "bdfc73eedc5a4a88b7fc374eda20eb2d", "b52c95fc42a14fbd80d059064b7d6630", "<KEY>", "ea2764a6b2df4607b38f4ac2375b144d", "3e900514cc1f46928712e305ed3dcb51", "<KEY>", "9aec25a69c494ce1b026ba0e01410ea5", "<KEY>", "<KEY>", "567fa0b9b84d4f378bdd49c4c43ed11b", "a4e1eaa8cac74e39b6a6347244f45ff6", "5698a459b8514e6d836f47c196a8f950", "<KEY>", "<KEY>", "6e0ffb48c00446b98e4aa65d24dd8e98", "<KEY>", "d4f55f2e714849d59b1c8f7fe3fe8aa7", "<KEY>", "9e4a3e714dde4581b6514dd4dba1cffd", "<KEY>", "a6c5864a9d8744da9c6641e94ca85f6e", "4ca97725bfa64df1a1249d794e021986", "<KEY>", "3d22ac28d41b498f81af32ac86af4b92", "<KEY>", "<KEY>", "<KEY>", "42e535d51b174f10bac016a6646cc22d", "<KEY>", "088c42ff7efe43f6a4b29ebede6ce089", "<KEY>", "5db8abeda2594113a7c817aee359df79", "c2bbcd80124a48d38681d6a3b5eee095", "<KEY>", "360dbe59e1fc43fbb41398307ab83da6", "fe40f82103e74d7ab22c91a981b9c7c7", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "d6a7d27e67184e10ae75db9b680c4c72", "1a96b36a4d0b4798bae5bd8dfaf84598", "<KEY>", "ac1a6f1e4445433d99b63432d39595bf", "20633918bed64fa8a60da754457364ea", "<KEY>", "<KEY>", "14a9480ce41a441d85e2f47acc0f8c03", "<KEY>", "<KEY>", "87cb89633e9d4a48a58bb8e935cfae52", "<KEY>", "cf1cee51109d4e0da135f379414fc5df", "<KEY>", "<KEY>", "<KEY>", "e31731d67ad04a7791a4419008f2e948", "<KEY>", "<KEY>", "03896dae3fa544ba94d55a2364a0f44e", "<KEY>", "<KEY>", "<KEY>", "5f8d5a0502d248c7a7917fd3e399c184", "ed8614f6e40c4487aa245c46aa036fe7", "d074d1396abc4944ae3defe7ca5fc842", "<KEY>", "<KEY>", "711e732b19e84c6cb040678775de0e8b"]} id="Xa9Fy8sSTzq_" outputId="f208e361-ad9a-494a-b579-701a7a31cea0"
if __name__ == '__main__':
main()
# + colab={"base_uri": "https://localhost:8080/"} id="_Yz6w8yDX2zH" outputId="e93e05d4-5c5d-465c-c44d-e319cddf24cc"
final_preds_a
# + colab={"base_uri": "https://localhost:8080/", "height": 17} id="FQcsJFAxf9f_" outputId="3ff36722-253a-4016-a497-a1bd407e3af9"
df = pd.DataFrame(final_preds_a)
df.to_csv('final_preds_a.csv', index=False)
from google.colab import files
files.download('final_preds_a.csv')
# + colab={"base_uri": "https://localhost:8080/", "height": 17} id="aICOn6gigzL3" outputId="e1e7db30-73f0-421b-83bf-740b61df117d"
df = pd.DataFrame(final_preds_b)
df.to_csv('final_preds_b.csv', index=False)
from google.colab import files
files.download('final_preds_b.csv')
# + colab={"base_uri": "https://localhost:8080/"} id="SEisv78xnkgi" outputId="1dcb19ac-584b-41b2-9c5d-1d3197a0376e"
get_dataset('kaggle datasets download -d harshwardhanbhangale/legacy-seresnext101-merged')
# + colab={"base_uri": "https://localhost:8080/"} id="XPYwJfGfnkZP" outputId="7b20a9fe-2dd4-4891-f742-4ca241ea007b"
get_dataset('kaggle datasets download -d harshwardhanbhangale/efficientnet-b2-merged')
# + colab={"base_uri": "https://localhost:8080/"} id="NzlC5twNnkR0" outputId="138f7233-3062-499f-c08b-837693bfc7d0"
get_dataset('kaggle datasets download -d mohit13gidwani/densenet169-mergedoldnew-casava')
# + colab={"base_uri": "https://localhost:8080/"} id="xqm3pQvjnkF4" outputId="75b5d3df-4a13-4574-c4d6-275866e887e3"
get_dataset('kaggle datasets download -d mohit13gidwani/efficientnet-b3-merged-data-trained-model')
| Cassava_validation_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Utility functions
import numpy as np
import cv2
import matplotlib.pyplot as plt
import pickle
from skimage.feature import hog
convertors = {
'RGB': cv2.COLOR_BGR2RGB,
'HLS': cv2.COLOR_BGR2HLS,
'YUV': cv2.COLOR_BGR2YUV,
'YCrCb': cv2.COLOR_BGR2YCrCb,
'Lab': cv2.COLOR_BGR2Lab,
'Luv': cv2.COLOR_BGR2Luv,
}
# +
# Define a function to compute color histogram features
def color_hist(img, nbins=128, bins_range=(0, 256)):
channel1 = np.histogram(img[:,:,0], bins=nbins, range=bins_range)
channel2 = np.histogram(img[:,:,1], bins=nbins, range=bins_range)
channel3 = np.histogram(img[:,:,2], bins=nbins, range=bins_range)
features = np.concatenate((channel1[0], channel2[0], channel3[0]))
return features
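# A quick self-contained sanity check of the histogram features (the tiny random
# image is hypothetical; the binning logic mirrors `color_hist()` above):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical 8x8 3-channel test image
img = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)

nbins = 32
features = np.concatenate([
    np.histogram(img[:, :, c], bins=nbins, range=(0, 256))[0]
    for c in range(3)
])
print(features.shape)       # (96,): one histogram per channel, 3 * nbins values
print(int(features.sum()))  # 192: each of the 64 pixels is counted once per channel
```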
# +
# Define a function that takes an image and a new spatial size,
# resizes the image, and returns the flattened channels
# as a single feature vector
def bin_spatial(img, size=(16, 16)):
resize_img = cv2.resize(img, size)
color1 = resize_img[:,:,0].ravel()
color2 = resize_img[:,:,1].ravel()
color3 = resize_img[:,:,2].ravel()
features = np.hstack((color1, color2, color3))
return features
# +
# Define a function to return HOG features
# (visualization is disabled here, so only the feature array is returned)
def get_hog_features(img, orient=9, pix_per_cell=8, cell_per_block=2, feature_vec=False):
    # note: skimage >= 0.16 renamed the `visualise` keyword to `visualize`
    return hog(
        img, orientations=orient,
        pixels_per_cell=(pix_per_cell, pix_per_cell),
        cells_per_block=(cell_per_block, cell_per_block),
        visualize=False, feature_vector=feature_vec)
# +
# Define a function that takes an image, a list of bounding boxes,
# and optional color tuple and line thickness as inputs
# then draws boxes in that color on the output
def draw_boxes(img, bboxes, color=(0, 0, 255), thick=6):
# Make a copy of the image
imcopy = np.copy(img)
for bbox in bboxes:
top_left = bbox[0]
bottom_right = bbox[1]
cv2.rectangle(imcopy, (top_left[0], top_left[1]), (bottom_right[0], bottom_right[1]), color, thick)
return imcopy
# +
# Define a function that takes an image,
# start and stop positions in both x and y,
# window size (x and y dimensions),
# and overlap fraction (for both x and y)
def slide_window(img, x_start_stop=[None, None], y_start_stop=[None, None],
xy_window=(64, 64), xy_overlap=(0.5, 0.5)):
    window_list = []
    # use explicit None checks so 0 remains a valid start/stop value
    x_start = x_start_stop[0] if x_start_stop[0] is not None else 0
    y_start = y_start_stop[0] if y_start_stop[0] is not None else 0
    # img.shape is (rows, cols, ...): shape[1] is the x extent, shape[0] the y extent
    x_stop = x_start_stop[1] if x_start_stop[1] is not None else img.shape[1]
    y_stop = y_start_stop[1] if y_start_stop[1] is not None else img.shape[0]
    window_w = xy_window[0]
    window_h = xy_window[1]
    # stride between window origins follows from the overlap fraction
    x_step = int(window_w * (1 - xy_overlap[0]))
    y_step = int(window_h * (1 - xy_overlap[1]))
    x_stop = x_stop - window_w
    y_stop = y_stop - window_h
    for top in range(y_start, y_stop+1, y_step):
        for left in range(x_start, x_stop+1, x_step):
            # boxes are ((x1, y1), (x2, y2))
            top_left = (left, top)
            bottom_right = (left + window_w, top + window_h)
            window_list.append((top_left, bottom_right))
    return window_list
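# The number of windows produced follows directly from the stride arithmetic: with a
# 64-pixel window and 50 % overlap, window origins are spaced 32 pixels apart. A
# minimal sketch (the region sizes are hypothetical):

```python
def count_windows(width, height, win=64, overlap=0.5):
    # stride between window origins: at 50 % overlap, half the window size
    step = int(win * (1 - overlap))
    nx = (width - win) // step + 1
    ny = (height - win) // step + 1
    return nx * ny

# a hypothetical 256x128 search region: 7 x-positions times 3 y-positions
print(count_windows(256, 128))  # 21
print(count_windows(64, 64))    # 1: the window exactly fits the region
```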
# +
# load params and classifier from pickled file
svc = None
X_scaler = None
orient = None
pix_per_cell = None
cell_per_block = None
spatial_size = None
hist_bins = None
color_space = None
with open('./svc_pickle.p', 'rb') as f:
data_pickle = pickle.load(f)
svc = data_pickle['svc']
X_scaler = data_pickle['scaler']
orient = data_pickle['orient']
pix_per_cell = data_pickle['pix_per_cell']
cell_per_block = data_pickle['cell_per_block']
spatial_size = data_pickle['spatial_size']
hist_bins = data_pickle['hist_bins']
color_space = data_pickle['color_space']
# +
#Template Matching
# Define a function to search for car matches
# and return a list of bounding boxes
def find_cars(img, ystart, ystop, scale, svc, X_scaler, orient, pix_per_cell, cell_per_block, color_space, cells_per_step):
car_windows = []
img_tosearch = img[ystart:ystop,:,:]
ctrans_tosearch = cv2.cvtColor(img_tosearch, convertors[color_space])
if scale != 1:
imshape = ctrans_tosearch.shape
        ctrans_tosearch = cv2.resize(ctrans_tosearch, (int(imshape[1]/scale), int(imshape[0]/scale)))  # np.int was removed from NumPy
ch1 = ctrans_tosearch[:,:,0]
ch2 = ctrans_tosearch[:,:,1]
ch3 = ctrans_tosearch[:,:,2]
# Define blocks and steps as above
nxblocks = (ch1.shape[1] // pix_per_cell) - 1
nyblocks = (ch1.shape[0] // pix_per_cell) - 1
nfeat_per_block = orient*cell_per_block**2
    # 64 was the original sampling rate, with 8 cells and 8 pix per cell
window = 64
nblocks_per_window = (window // pix_per_cell) - 1
nxsteps = (nxblocks - nblocks_per_window) // cells_per_step
nysteps = (nyblocks - nblocks_per_window) // cells_per_step
# Compute individual channel HOG features for the entire image
hog1 = get_hog_features(ch1, orient, pix_per_cell, cell_per_block, feature_vec=False)
hog2 = get_hog_features(ch2, orient, pix_per_cell, cell_per_block, feature_vec=False)
hog3 = get_hog_features(ch3, orient, pix_per_cell, cell_per_block, feature_vec=False)
for xb in range(nxsteps):
for yb in range(nysteps):
ypos = yb*cells_per_step
xpos = xb*cells_per_step
# Extract HOG for this patch
hog_feat1 = hog1[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_feat2 = hog2[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_feat3 = hog3[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_features = np.hstack((hog_feat1, hog_feat2, hog_feat3))
xleft = xpos*pix_per_cell
ytop = ypos*pix_per_cell
# Extract the image patch
subimg = cv2.resize(ctrans_tosearch[ytop:ytop+window, xleft:xleft+window], (64,64))
# Get color features
hist_features = color_hist(subimg, nbins=hist_bins)
spatial_features = bin_spatial(subimg, size=spatial_size)
# Scale features and make a prediction
all_features = np.hstack((hist_features, spatial_features, hog_features)).reshape(1, -1)
test_features = X_scaler.transform(all_features)
#test_features = X_scaler.transform(np.hstack((shape_feat, hist_feat)).reshape(1, -1))
test_prediction = svc.predict(test_features)
if test_prediction == 1:
                xbox_left = int(xleft*scale)
                ytop_draw = int(ytop*scale)
                win_draw = int(window*scale)
car_windows.append(((xbox_left, ytop_draw+ystart),(xbox_left+win_draw,ytop_draw+win_draw+ystart)))
return car_windows
# ystart, ystop, scale, overlap, color
searches = [
(380, 500, 1.0, 1, (0, 0, 255)), # 64x64
(400, 600, 1.587, 2, (0, 255, 0)), # 101x101
(400, 710, 2.52, 2, (255, 0, 0)), # 161x161
(400, 720, 4.0, 2, (255, 255, 0)), # 256x256
]
bbox_list = []
filename = './test_images/test6.jpg'
img = cv2.imread(filename)
draw_img = np.copy(img)
for ystart, ystop, scale, cells_per_step, color in searches:
bboxes = find_cars(img, ystart, ystop, scale, svc, X_scaler, orient, pix_per_cell, cell_per_block, color_space, cells_per_step)
if len(bboxes) > 0:
bbox_list.append(bboxes)
draw_img = draw_boxes(draw_img, bboxes, color=color, thick=3)
plt.figure(figsize=(12, 6))
plt.imshow(cv2.cvtColor(draw_img, cv2.COLOR_BGR2RGB))
plt.savefig('./output_images/sliding_window.png')
plt.show()
# +
from scipy.ndimage import label  # scipy.ndimage.measurements is deprecated
def add_heat(heatmap, bbox_list):
for box in bbox_list:
heatmap[box[0][1]:box[1][1], box[0][0]:box[1][0]] += 1
return heatmap
def apply_threshold(heatmap, threshold):
result = np.copy(heatmap)
result[heatmap <= threshold] = 0
return result
def draw_labeled_bboxes(img, labels):
for car_number in range(1, labels[1]+1):
nonzero = (labels[0] == car_number).nonzero()
# Identify x and y values of those pixels
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Define a bounding box based on min/max x and y
bbox = ((np.min(nonzerox), np.min(nonzeroy)), (np.max(nonzerox), np.max(nonzeroy)))
# Draw the box on the image
cv2.rectangle(img, bbox[0], bbox[1], (0,0,255), 6)
# Return the image
return img
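# The heat-map logic above can be exercised on a tiny synthetic example (the array
# size and boxes are hypothetical, and the helpers are restated here so the snippet
# is self-contained): only the region covered by two overlapping detections survives
# a threshold of 1.

```python
import numpy as np

def add_heat(heatmap, bbox_list):
    # each box is ((x1, y1), (x2, y2)); add one unit of heat inside it
    for (x1, y1), (x2, y2) in bbox_list:
        heatmap[y1:y2, x1:x2] += 1
    return heatmap

def apply_threshold(heatmap, threshold):
    out = np.copy(heatmap)
    out[heatmap <= threshold] = 0
    return out

heat = np.zeros((10, 10))
# two overlapping boxes and one isolated box (hypothetical detections)
boxes = [((1, 1), (5, 5)), ((3, 3), (7, 7)), ((8, 8), (10, 10))]
heat = add_heat(heat, boxes)
hot = apply_threshold(heat, 1)
print(int(hot.sum()))        # 8: the 2x2 overlap region holds heat 2
print(int((hot > 0).sum()))  # 4 surviving pixels
```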
heat = np.zeros_like(img[:,:,0]).astype(float)  # np.float was removed from NumPy
heat = add_heat(heat, np.concatenate(bbox_list))
# Apply threshold to help remove false positives
heat = apply_threshold(heat,2)
# Visualize the heatmap when displaying
heatmap = np.clip(heat, 0, 255)
# Find final boxes from heatmap using label function
labels = label(heatmap)
draw_img = draw_labeled_bboxes(np.copy(img), labels)
fig = plt.figure(figsize=(12,6))
plt.subplot(121)
plt.imshow(cv2.cvtColor(draw_img, cv2.COLOR_BGR2RGB))
plt.title('Car Positions')
plt.subplot(122)
plt.imshow(heatmap, cmap='hot')
plt.title('Heat Map')
fig.tight_layout()
plt.savefig('./output_images/heatmap.png')
plt.show()
# +
from moviepy.editor import VideoFileClip
from IPython.display import HTML
from collections import deque
nframes_to_keep = 10
nframes = deque([], nframes_to_keep)
frame_decay = 0.75
nframes_heat = None
heat_zeros = None
def img_pipeline(rgb_img):
global nframes_heat
global heat_zeros
    global frame_decay
# moviepy inputs RGB image instead of BGR
img = cv2.cvtColor(rgb_img, cv2.COLOR_RGB2BGR)
bbox_list = []
for ystart, ystop, scale, cells_per_step, color in searches:
bboxes = find_cars(
img, ystart, ystop, scale, svc, X_scaler, orient,
pix_per_cell, cell_per_block, color_space, cells_per_step)
if len(bboxes) > 0:
bbox_list.append(bboxes)
# initialize data across frames if None
    if nframes_heat is None:
        nframes_heat = np.zeros_like(img[:,:,0]).astype(float)
        heat_zeros = np.zeros_like(img[:,:,0]).astype(float)
    # calculate single frame heatmap
    one_frame_heat = np.zeros_like(img[:,:,0]).astype(float)
if len(bbox_list) > 0:
one_frame_heat = add_heat(one_frame_heat, np.concatenate(bbox_list))
    # subtract heat contributed by frames older than nframes
if len(nframes) == nframes_to_keep:
oldest_heat = nframes.popleft()
nframes_heat = nframes_heat - oldest_heat * (frame_decay ** (nframes_to_keep - 1))
nframes.append(one_frame_heat)
nframes_heat = nframes_heat * frame_decay + one_frame_heat
# Apply threshold to help remove false positives
heat = apply_threshold(nframes_heat, 10)
# Visualize the heatmap for video
heatmap_channel_r = np.clip(nframes_heat*5, 0, 255)
heatmap_rgb = np.dstack((heatmap_channel_r, heat_zeros, heat_zeros))
# Find final boxes from heatmap using label function
labels = label(heat)
draw_img = draw_labeled_bboxes(np.copy(rgb_img), labels)
combined = np.hstack((draw_img, heatmap_rgb))
return combined
# run image pipeline with video
outfile = 'results_%s_p5.mp4' % color_space  # the original string had no %s placeholder
clip1 = VideoFileClip("project_video.mp4")
white_clip = clip1.fl_image(img_pipeline) #NOTE: this function expects color images!!
# %time white_clip.write_videofile(outfile, audio=False)
# -
| CarND-Final/Implement Vehicle Detection and Tracking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Simulating the Spin Dynamics of a One-Dimensional Heisenberg Chain
#
# <em> Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. </em>
# ## Overview
#
# Simulating the properties of a quantum system is one of the most important applications of quantum computers. In general, analyzing a quantum system starts from writing down its Hamiltonian $H$, which takes different forms for physical systems at different scales. Taking quantum chemistry as an example, the properties of a molecule are mainly determined by electron-electron Coulomb interactions, so every term of its Hamiltonian is written in fermionic operators acting on the electronic wave function. In contrast, the basic unit of a quantum computer, the qubit, together with the commonly used Pauli operators, corresponds physically to spins and spin operators. Therefore, simulating molecular properties on a quantum computer usually requires converting fermionic operators into Pauli operators, e.g. via the Jordan-Wigner or Bravyi-Kitaev transformation, which means a quantum computer must spend extra resources to simulate a molecular Hamiltonian. For near-term quantum devices, the most likely first achievement is thus the quantum simulation of quantum spin systems, because their Hamiltonians can be written directly in terms of Pauli operators.
#
# In this tutorial, we pick a classic quantum spin model, the Heisenberg model, and show how to use Paddle Quantum to simulate the time evolution of a one-dimensional Heisenberg spin chain. We will mainly use the `construct_trotter_circuit()` function to build product-formula-based time-evolution circuits; the earlier tutorial [Hamiltonian Simulation with Product Formula](./HamiltonianSimulation_CN.ipynb) gives a more detailed theoretical introduction to this method, and a brief review is also included below. This tutorial focuses on practical applications and consists of two parts:
# - The physical background of the Heisenberg model and simulating its time evolution with Paddle Quantum
# - Building customized time-evolution circuits based on random permutations
# ---
# Before introducing the physical background involved in this tutorial, let us first review the basic idea of simulating time evolution with quantum circuits. Readers already familiar with this material can skip directly to **The Heisenberg model and its dynamical simulation**.
#
# ### Simulating time evolution with the Suzuki product formula
#
# Let us recall the basic idea of the Suzuki product formula: for a quantum system described by a time-independent Hamiltonian $H = \sum_k^L h_k$, its time-evolution operator can be written as
#
# $$
# U(t) = e^{-iHt},
# \tag{1}
# $$
#
# This operator can be further divided into $r$ slices:
#
# $$
# e^{-iHt} = \left( e^{-iH \tau} \right)^r, ~\tau=\frac{t}{r}.
# \tag{2}
# $$
#
# For each $e^{-iH \tau}$ operator, its Suzuki decomposition reads
#
# $$
# \begin{aligned}
# S_1(\tau) &= \prod_{k=0}^L \exp ( -i h_k \tau),
# \\
# S_2(\tau) &= \prod_{k=0}^L \exp ( -i h_k \frac{\tau}{2})\prod_{k=L}^0 \exp ( -i h_k \frac{\tau}{2}),
# \\
# S_{2k+2}(\tau) &= [S_{2k}(p_k\tau)]^2S_{2k}\left( (1-4p_k)\tau\right)[S_{2k}(p_k\tau)]^2.
# \end{aligned}
# \tag{3}
# $$
#
# Returning to the complete time-evolution operator $U(t)$, with the $k$-th order Suzuki decomposition it can be written as
#
# $$
# U(t) = e^{-iHt} = \left( S_{k}\left(\frac{t}{r}\right) \right)^r.
# \tag{4}
# $$
#
# This approach to simulating time evolution is known as the Suzuki product formula, and it can efficiently simulate an evolution process to arbitrary precision [1]. The companion tutorial [Hamiltonian Simulation with Product Formula](./HamiltonianSimulation_CN.ipynb) walks through the calculation of its error upper bound; interested readers are encouraged to take a look.
#
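# The error scaling of the first-order formula $S_1(\tau)$ can be checked numerically
# with plain NumPy on a toy system (the two-qubit, two-term Hamiltonian below is a
# hypothetical example with non-commuting terms, not the Heisenberg chain used later):

```python
import numpy as np

def expmi(h, t):
    """e^{-i h t} for a Hermitian matrix h, via eigendecomposition."""
    w, V = np.linalg.eigh(h)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# two non-commuting terms: H = X⊗I + Z⊗Z
h_terms = [np.kron(X, I2), np.kron(Z, Z)]
H = h_terms[0] + h_terms[1]
t = 1.0

def trotter_error(r):
    # first-order product formula: ( prod_k e^{-i h_k t/r} )^r
    step = np.eye(4, dtype=complex)
    for h_k in h_terms:
        step = expmi(h_k, t / r) @ step
    U = np.linalg.matrix_power(step, r)
    # spectral-norm distance to the exact evolution operator
    return np.linalg.norm(U - expmi(H, t), 2)

errors = [trotter_error(r) for r in (1, 10, 100)]
print(errors)  # the error shrinks roughly as O(1/r)
```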
# ---
# ## The Heisenberg model and its dynamical simulation
#
# The Heisenberg model is one of the most important models in the study of quantum magnetism and quantum many-body physics. Its Hamiltonian is
#
# $$
# H = \sum_{\langle i, j\rangle}
# \left( J_x S^x_{i} S^x_{j} + J_y S^y_{i} S^y_{j} + J_z S^z_{i} S^z_{j} \right)
# +
# \sum_{i} h_z S^z_i,
# \tag{5}
# $$
#
# where $\langle i, j\rangle$ depends on the specific lattice geometry, $J_x, J_y, J_z$ are the spin coupling strengths along the $x$, $y$ and $z$ directions respectively, and $h_z$ is an external magnetic field along $z$. With $J_z = 0$, Eq. (5) describes the Hamiltonian of the XY model; with $J_x = J_y = 0$, it describes the Hamiltonian of the Ising model. Note that here we use the many-body spin operators $S^x_i, S^y_i, S^z_i$ common in quantum many-body physics, which act on the many-body wave function.
# For a spin-1/2 system, the many-body spin operators can simply be written as tensor products of Pauli operators (dropping a factor of $\hbar/2$):
#
# $$
# S^P_{i} = \left ( \otimes_{j=0}^{i-1} I \right ) \otimes \sigma_{P} \otimes \left ( \otimes_{j=i+1}^{L} I \right ),
# P \in \{ x, y, z \},
# \tag{6}
# $$
#
# where $\sigma_{P}$ are Pauli operators, often denoted by the $X$, $Y$, $Z$ operators. It should be noted that the Heisenberg model is not merely a hypothetical model: starting from the Hubbard model, which describes electrons moving on a lattice, in a certain limit the electrons become pinned to the lattice sites at half filling. In this limit, the Hubbard model of electrons reduces to the Heisenberg model of spins, and the spin-spin interaction in Eq. (5) emerges as an effective exchange interaction between electrons in that limit [2]. Despite the many approximations involved, the Heisenberg model successfully predicts the low-temperature properties of many real materials [3]. For example, the behavior of copper nitrate hydrate $\rm Cu(NO_3)_2 \cdot 2.5 H_2 O$ at low temperatures around $\sim 3K$ is well described by a spin-1/2 alternating one-dimensional Heisenberg chain [4].
#
# Depending on its specific lattice structure, the Heisenberg model exhibits rich quantum phenomena. The one-dimensional Heisenberg chain can describe ferromagnetism and antiferromagnetism, symmetry breaking and gapless excitations. On two-dimensional frustrated lattices, the Heisenberg model can describe quantum spin liquid states, novel quantum states of matter featuring long-range entanglement [5]. When a disordered external magnetic field is added, the Heisenberg model can also be used to study many-body localization (MBL), a striking phenomenon that violates the thermalization hypothesis: a quantum many-body system fails to thermalize even after infinitely long evolution and still retains information about its initial state [6].
#
# Simulating the time evolution of the Heisenberg model, also known as dynamical simulation, helps explore non-equilibrium properties of quantum systems and thereby search for novel quantum phases, for example the many-body localized phase mentioned above, or the even more intriguing time-crystal phase [7]. Beyond theory, dynamical simulation also matters for real experiments: spin correlation functions (often called dynamical structure factors) directly determine the cross sections in scattering experiments and the outcomes of NMR experiments [3], and they are given by integrals of the time-dependent spin operator average $\langle S(t) S(0) \rangle$. By computing the dynamical evolution of different theoretical models, one can then analyze the physical models underlying real materials.
#
# ### Simulating the dynamics of a Heisenberg chain with Paddle Quantum
# Below, we use a concrete example, a Heisenberg chain of length 5 with a disordered external field, to show how to build its time-evolution circuit in Paddle Quantum. First, we import the relevant packages.
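# The tensor-product construction in Eq. (6) can be sketched in plain NumPy, as a
# minimal stand-in, in spirit, for what the `SpinOps` class used below provides
# (the 3-site example is hypothetical):

```python
import numpy as np

def spin_z(site, n_sites):
    """S^z on `site` as a tensor product of identities and one Pauli Z (the ħ/2 factor is dropped)."""
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    op = np.eye(1, dtype=complex)
    for j in range(n_sites):
        op = np.kron(op, Z if j == site else np.eye(2, dtype=complex))
    return op

# S^z on site 1 of a hypothetical 3-site chain: an 8x8 diagonal matrix
Sz1 = spin_z(1, 3)
print(Sz1.shape)           # (8, 8)
print(np.diag(Sz1).real)   # [ 1.  1. -1. -1.  1.  1. -1. -1.]
```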
import numpy as np
import scipy
from scipy import linalg
import matplotlib.pyplot as plt
from paddle_quantum.circuit import UAnsatz
from paddle_quantum.utils import SpinOps, Hamiltonian, gate_fidelity
from paddle_quantum.trotter import construct_trotter_circuit, get_1d_heisenberg_hamiltonian
# Next, we use the `get_1d_heisenberg_hamiltonian()` function to obtain the Hamiltonian of a one-dimensional Heisenberg chain:
h = get_1d_heisenberg_hamiltonian(length=5, j_x=1, j_y=1, j_z=2, h_z=2 * np.random.rand(5) - 1,
periodic_boundary_condition=False)
print('The Hamiltonian of the system is:')
print(h)
# With the Hamiltonian in hand, we can further build a time-evolution circuit via `construct_trotter_circuit()`. Moreover, by writing out the matrix form of the evolution operator directly, we can also compute the exact time evolution of the system. Here we use Paddle Quantum's `Hamiltonian.construct_h_matrix()` method, which computes the matrix form of a given Hamiltonian in the Pauli $Z$ basis. Comparing `cir.U`, the unitary matrix of the circuit, against the exact evolution operator yields the fidelity with which the circuit simulates the time evolution.
# +
# compute the exact evolution operator for evolution time t
def get_evolve_op(t): return scipy.linalg.expm(-1j * t * h.construct_h_matrix())
# set the evolution time and the number of simulation steps
t = 3
r = 10
# build the simulated evolution circuit
cir_evolve = UAnsatz(5)
construct_trotter_circuit(cir_evolve, h, tau=t/r, steps=r, order=2)
# get the circuit unitary and compute its fidelity to the exact evolution operator
U_cir = cir_evolve.U.numpy()
print('Fidelity between the circuit unitary and the exact evolution operator: %.2f' % gate_fidelity(get_evolve_op(t), U_cir))
# -
# #### Rearranging the Hamiltonian according to commutation relations
#
# For product formulas, the simulation error can be reduced by rearranging the terms of the Hamiltonian. Since the error of a product formula arises from the non-commuting terms of the Hamiltonian, a natural rearrangement strategy is to group mutually commuting terms together. For example, the Hamiltonian can be decomposed into four parts
#
# $$
# H = H_x + H_y + H_z + H_{\rm other},
# \tag{7}
# $$
#
# where $H_x, H_y, H_z$ are the terms consisting solely of Pauli $X, Y, Z$ operators respectively, and $H_{\rm other}$ collects the remaining terms. For the Heisenberg chain Hamiltonian in (5), every term can be classified into $H_x, H_y, H_z$. What is more, for a one-dimensional system with nearest-neighbor interactions, the Hamiltonian can also be split into even and odd parts
#
# $$
# H = H_{\rm even} + H_{\rm odd},
# \tag{8}
# $$
#
# where $H_{\rm even}$ contains the interaction terms on sites $(0, 1), (2, 3), ...$ and $H_{\rm odd}$ those on sites $(1, 2), (3, 4), ...$. It should be pointed out, however, that neither rearrangement reduces the theoretical error upper bound, and empirically they do not always reduce the actual simulation error either. In fact, determining the term ordering that minimizes the simulation error for a given class of Hamiltonians is a question well worth exploring. In Paddle Quantum's `construct_trotter_circuit()` function, users can specify `grouping='xyz'` or `grouping='even_odd'` to apply the two rearrangements mentioned above; a custom ordering can also be specified through the `permutation` argument, which this tutorial introduces further below in **Designing a customized time-evolution circuit based on random permutations**. First, let us look at how the `grouping` argument is used:
# keep the same evolution parameters, but specify the Hamiltonian ordering with grouping='xyz' and grouping='even_odd'
cir_evolve_xyz = UAnsatz(5)
cir_evolve_even_odd = UAnsatz(5)
construct_trotter_circuit(cir_evolve_xyz, h, tau=t/r, steps=r, order=2, grouping='xyz')
construct_trotter_circuit(cir_evolve_even_odd, h, tau=t/r, steps=r, order=2, grouping='even_odd')
U_cir_xyz = cir_evolve_xyz.U.numpy()
U_cir_even_odd = cir_evolve_even_odd.U.numpy()
print('Original fidelity:', gate_fidelity(get_evolve_op(t), U_cir))
print('Fidelity with XYZ grouping:', gate_fidelity(get_evolve_op(t), U_cir_xyz))
print('Fidelity with even-odd grouping:', gate_fidelity(get_evolve_op(t), U_cir_even_odd))
# #### Preparing the initial state and measuring the evolved final state
#
# Next, we prepare the initial state of the system. When studying the dynamical behavior of quantum many-body systems, a common practice is to prepare the initial state as one of various product states. In Paddle Quantum the default initial state is $\vert 0...0 \rangle$; here we flip the spins on the odd sites with $X$ gates, so the initial state becomes $\vert 01010 \rangle$, i.e. $\vert \downarrow \uparrow \downarrow \uparrow \downarrow \rangle$ in spin notation.
# build a circuit that prepares the initial state, and run it to obtain that state
cir = UAnsatz(5)
cir.x(1)
cir.x(3)
init_state = cir.run_state_vector()
# By passing the initial state `init_state` into the method `UAnsatz.run_state_vector(init_state)`, we can evolve it with the quantum circuit defined above and obtain the final state. Observables on the final state can then be measured with the `UAnsatz.expecval()` method. Here we simply measure the spin state on each site, i.e. the observable $\langle S^z_i \rangle$, whose corresponding Pauli string is `[[1, 'Zi']]` (with i the site index).
cir_evolve_even_odd.run_state_vector(init_state)
print('Expectation value of the spin on site 0 along z after evolution:', cir_evolve_even_odd.expecval([[1, 'Z0']]).numpy()[0])
# Similarly, by adjusting the simulated evolution time and the index of the measured qubit, we can plot the complete time dependence of every spin in the system. Note that to compute the theoretical exact solution, we use the `SpinOps` class to construct the matrix form of the $S_i^z$ operators and evaluate their expectation values via $\langle \psi(t) \vert S_i^z \vert \psi(t) \rangle$.
# +
def get_evolution_z_obs(h, t_total, order=None, n_steps=None, exact=None):
    """
    Compute the time dependence of the S^z observable on every site of the system during an evolution of length t_total.
    order and n_steps control the order and the number of steps of the Trotter-Suzuki decomposition.
    Set exact=True to compute the corresponding exact solution instead.
    """
z_obs_total = []
for t in np.linspace(0., t_total, t_total * 3 + 1):
z_obs = []
        # obtain the final state from the exact evolution operator or by running the circuit
if exact:
spin_operators = SpinOps(h.n_qubits)
fin_state = get_evolve_op(t).dot(init_state)
else:
cir_evolve = UAnsatz(5)
construct_trotter_circuit(cir_evolve, h, tau=t/n_steps, steps=n_steps, order=order, grouping='even_odd')
fin_state = cir_evolve.run_state_vector(init_state)
        # measure the observable on every site
for site in range(h.n_qubits):
if exact:
z_obs.append(fin_state.conj().T.dot(spin_operators.sigz_p[site]).dot(fin_state))
else:
z_obs.append(cir_evolve.expecval([[1, 'Z' + str(site)]]).numpy()[0])
z_obs_total.append(z_obs)
return np.array(z_obs_total).real
def plot_comparison(**z_obs_to_plot):
    """
    Plot different evolution results for comparison. Each keyword argument is assumed to be
    an output of get_evolution_z_obs() with the same total evolution time.
    """
fig, axes = plt.subplots(1, len(z_obs_to_plot), figsize = [len(z_obs_to_plot) * 3, 5.5])
ax_idx = 0
for label in z_obs_to_plot.keys():
im = axes[ax_idx].imshow(z_obs_to_plot[label], cmap='coolwarm_r', interpolation='kaiser', origin='lower')
axes[ax_idx].set_title(label, fontsize=15)
ax_idx += 1
for ax in axes:
ax.set_xlabel('site', fontsize=15)
ax.set_yticks(np.arange(0, z_obs_total_exact.shape[0], 3))
ax.set_yticklabels(np.arange(0, z_obs_total_exact.shape[0]/3, 1))
ax.set_xticks(np.arange(z_obs_total_exact.shape[1]))
ax.set_xticklabels(np.arange(z_obs_total_exact.shape[1]))
axes[0].set_ylabel('t', fontsize=15)
cax = fig.add_axes([0.92, 0.125, 0.02, 0.755])
fig.colorbar(im, cax)
cax.set_ylabel(r'$\langle S^z_i (t) \rangle$', fontsize=15)
# +
# compute the evolution over a total time of 3 with circuits of 25 and 5 steps, together with the exact solution
z_obs_total_exact = get_evolution_z_obs(h, t_total=3, exact=True)
z_obs_total_cir = get_evolution_z_obs(h, order=1, n_steps=25, t_total=3)
z_obs_total_cir_short = get_evolution_z_obs(h, order=1, n_steps=5, t_total=3)
plot_comparison(
Exact=z_obs_total_exact,
L25_Circuit=z_obs_total_cir,
L5_Circuit=z_obs_total_cir_short)
# -
# We observe that when the circuit depth is 25 (here depth refers to the number of Trotter blocks rather than the number of gate layers), the quantum circuit reproduces the spin dynamics of the system well over the full evolution time. With a shallower circuit, the behavior of the system is only simulated correctly up to a certain time.
#
# **Exercise:** Can you try to measure the spin correlation function $\langle S_i^z S_j^{z} \rangle$ and observe how it changes over time?
# ## Designing a customized time-evolution circuit based on random permutations
#
# ### Random permutations
#
# Although from a physical point of view it seems intuitive that grouping the commuting terms of the Hamiltonian reduces the simulation error, much evidence suggests that an evolution strategy with one fixed ordering lets the simulation error accumulate steadily, and that randomly permuting the ordering of the Hamiltonian terms in every "time block" is in fact more effective [8, 9]. It has been found that the random errors introduced by constantly reshuffling the term ordering are more "benign" than the coherent error accumulated with a fixed ordering [8]. Both theoretical error upper bounds and empirical experiments indicate that this randomized evolution strategy achieves smaller errors than the fixed-order Suzuki product formula [9].
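# This claim can be checked numerically on a small system without any quantum-circuit
# machinery: compare a fixed-order first-order product formula against one whose term
# ordering is re-drawn in every time block (the three-term two-qubit Hamiltonian
# below is a hypothetical example):

```python
import numpy as np

def expmi(h, t):
    """e^{-i h t} for a Hermitian matrix h, via eigendecomposition."""
    w, V = np.linalg.eigh(h)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# a hypothetical three-term, non-commuting two-qubit Hamiltonian
terms = [np.kron(X, X), np.kron(Z, I2), np.kron(I2, Z) + np.kron(Y, Y)]
H = terms[0] + terms[1] + terms[2]
t, r = 2.0, 50
exact = expmi(H, t)

rng = np.random.default_rng(42)

def evolve(orderings):
    # one first-order Trotter block per row of `orderings`
    U = np.eye(4, dtype=complex)
    for row in orderings:
        for k in row:
            U = expmi(terms[k], t / r) @ U
    return U

U_fixed = evolve([range(3)] * r)                         # same ordering in every block
U_rand = evolve([rng.permutation(3) for _ in range(r)])  # fresh ordering per block
err_fixed = np.linalg.norm(U_fixed - exact, 2)
err_rand = np.linalg.norm(U_rand - exact, 2)
print(err_fixed, err_rand)  # both small; randomization often, though not always, wins
```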
# ### Building a customized time-evolution circuit
#
# By default, Paddle Quantum's `construct_trotter_circuit()` function builds the time-evolution circuit according to the Suzuki product formula and the ordering of the input Hamiltonian. Users can also customize the evolution strategy by setting `method='custom'` and passing arrays to the `permutation` and `coefficient` arguments.
#
# **Caution:** Users should be careful about the relationship among the `coefficient`, `tau` and `steps` arguments. In general, the array passed to `coefficient` should be normalized, i.e. it describes the evolution for $t=1$. On that basis, with a larger `steps` the function treats the strategy described by the custom arguments as a basic "time block" and repeats it, each block evolving for a duration set by `tau`. For example, with `permutation=np.arange(h.n_terms)` and `coefficient=np.ones(h.n_terms)`, the time-evolution circuit defined through `tau` and `steps` is exactly the same as a first-order product-formula circuit.
# Let us demonstrate this customization in practice. Consider the same Hamiltonian as before; we now design a time-evolution circuit to test the claim about random permutations above, i.e. we build a circuit similar to a first-order product formula, except that the Hamiltonian ordering inside each "time block" is completely random and independent. This idea is realized by passing to the `permutation` argument an array of shape `(n_steps, h.n_terms)` whose every row is a random permutation $P(N)$:
# an example of a custom permutation argument
permutation = np.vstack([np.random.permutation(h.n_terms) for i in range(100)])
# Next, for validation, we can compare the fidelities of this randomized circuit and of the first-order product formula against the exact solution at different circuit depths:
# +
def compare(n_steps):
    """
    Compare the fidelities of the first-order product formula and of the random-permutation
    strategy for a fixed evolution time t=2, using the same number of steps.
    The input argument sets the number of steps; the outputs are the fidelities of the
    first-order product formula and of the random-permutation circuit, respectively.
    """
t = 2
cir_evolve = UAnsatz(5)
construct_trotter_circuit(cir_evolve, h, tau=t/n_steps, steps=n_steps, order=1)
U_cir = cir_evolve.U.numpy()
fid_suzuki = gate_fidelity(get_evolve_op(t), U_cir)
cir_permute = UAnsatz(5)
permutation = np.vstack([np.random.permutation(h.n_terms) for i in range(n_steps)])
    # when the coefficient argument is not specified, a normalized, uniform coefficient is set by default according to the shape of permutation
construct_trotter_circuit(cir_permute, h, tau=t, steps=1, method='custom', permutation=permutation)
U_cir = cir_permute.U.numpy()
fid_random = gate_fidelity(get_evolve_op(t), U_cir)
return fid_suzuki, fid_random
# compare the fidelities of the two strategies at different numbers of steps
# to keep the run time short, only one trial is performed; interested readers can repeat the experiment and compute error bars
n_range = [100, 200, 500, 1000]
result = [compare(n) for n in n_range]
result = 1 - np.array(result)
plt.loglog(n_range, result[:, 0], 'o-', label='1st order PF')
plt.loglog(n_range, result[:, 1], 'o-', label='Random')
plt.xlabel(r'Trotter number $r$', fontsize=12)
plt.ylabel(r'Error: $1 - {\rm Fid}$', fontsize=12)
plt.legend()
plt.show()
# -
# In the figure, "1st order PF" denotes the first-order product-formula circuit built with a fixed ordering. As expected, random permutation indeed achieves a better simulation fidelity than the first-order product formula at the same circuit depth.
#
# **Note:** In [9], the authors point out that this randomized strategy achieves a smaller error without exploiting any information about the Hamiltonian, so there is good reason to believe that some method could reduce the error even further by exploiting such information. This offers inspiration for designing better strategies for simulating time evolution.
# ## Summary
#
# Studying the dynamical properties of quantum many-body systems is an important means of understanding novel quantum phases of matter. Because of their highly entangled quantum-mechanical nature, such studies are very difficult both in theory and in experiment. To this day, the physical phenomena in two-dimensional systems with various geometries and interactions, and even in disordered one-dimensional systems, are not fully understood. On the other hand, the rapid development of universal quantum computers and quantum simulators brings new hope for solving these problems. Taking the universal quantum computer as an example, its advantage is that, by building quantum circuits, it can simulate the evolution of systems in all kinds of complicated settings, for instance systems whose Hamiltonians vary periodically in time, in the search for "time crystals". As the number of qubits and the level of control improve further, universal quantum computers are expected to surpass classical computers on the task of simulating quantum time evolution in the near future, and the simulation of quantum spin systems is where progress is most likely to come first.
#
# This tutorial mainly introduced how to simulate the time evolution of a realistic quantum spin model in Paddle Quantum, and further explored the possibility of designing new time-evolution strategies on top of Paddle Quantum. With the `construct_trotter_circuit()` function and the various methods provided by the `Hamiltonian` and `SpinOps` classes, users can now easily design and test different strategies for building time-evolution circuits. We also encourage readers to try different time-evolution strategies on more physical systems and to join the exploration of more efficient quantum simulation circuits.
# ---
#
# ## References
#
# [1] Childs, <NAME>., et al. "Toward the first quantum simulation with quantum speedup." [Proceedings of the National Academy of Sciences 115.38 (2018): 9456-9461](https://www.pnas.org/content/115/38/9456.short).
#
# [2] <NAME>. Models of Quantum Matter: A First Course on Integrability and the Bethe Ansatz. [Oxford University Press, 2019](https://oxford.universitypressscholarship.com/view/10.1093/oso/9780199678839.001.0001/oso-9780199678839).
#
# [3] Mikeska, Hans-Jürgen, and <NAME>. "One-dimensional magnetism." Quantum magnetism. Springer, Berlin, Heidelberg, 2004. 1-83.
#
# [4] <NAME>., <NAME>, and <NAME>. "Magnetic Susceptibility of $\rm Cu(NO_3)_2·2.5 H_2O$ at Low Temperature." [Physical Review 132.3 (1963): 1057](https://journals.aps.org/pr/abstract/10.1103/PhysRev.132.1057).
#
# [5] <NAME>., et al. "Quantum spin liquids." [Science 367.6475 (2020)](https://science.sciencemag.org/content/367/6475/eaay0668).
#
# [6] Abanin, <NAME>., et al. "Colloquium: Many-body localization, thermalization, and entanglement." [Reviews of Modern Physics 91.2 (2019): 021001](https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.91.021001).
#
# [7] Medenjak, Marko, <NAME>, and <NAME>. "Isolated Heisenberg magnet as a quantum time crystal." [Physical Review B 102.4 (2020): 041117](https://journals.aps.org/prb/abstract/10.1103/PhysRevB.102.041117).
#
# [8] Wallman, <NAME>., and <NAME>. "Noise tailoring for scalable quantum computation via randomized compiling." [Physical Review A 94.5 (2016): 052325](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.94.052325).
#
# [9] Childs, <NAME>., <NAME>, and <NAME>. "Faster quantum simulation by randomization." [Quantum 3 (2019): 182](https://quantum-journal.org/papers/q-2019-09-02-182/).
| tutorial/quantum_simulation/SimulateHeisenberg_CN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="rz41y4elABlH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="19221bd7-e5e0-4587-96d7-5c9ff8b31090"
# !pip install gdown
# !pip install tensorflow-gpu
# + id="d2Nl6JcUArH8" colab_type="code" colab={}
import numpy as np
import tensorflow as tf
from tensorflow import keras
import pandas as pd
import seaborn as sns
from pylab import rcParams
import matplotlib.pyplot as plt
from matplotlib import rc
from pandas.plotting import register_matplotlib_converters
# %matplotlib inline
# %config InlineBackend.figure_format='retina'
register_matplotlib_converters()
sns.set(style='whitegrid', palette='muted', font_scale=1.5)
rcParams['figure.figsize'] = 22, 10
RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
tf.random.set_seed(RANDOM_SEED)
# + id="QbPctusgAwDH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="84ce4bcf-3753-438f-8e3e-f562bc79beb3"
# !gdown --id 10vdMg_RazoIatwrT7azKFX4P02OebU76 --output spx.csv
# + id="ETmZ5q-iA4uL" colab_type="code" colab={}
df = pd.read_csv('spx.csv', parse_dates=['date'], index_col='date')
# + id="bj7z7xgmA8m0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 235} outputId="b936ff85-cce8-48e6-b97e-59b2ccd96969"
df.head()
# + id="3nZY162nA-0k" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 606} outputId="9707341f-7872-4f71-9f8c-5117c29ebb62"
plt.plot(df, label='close price')
plt.legend();
# + id="9q8Gz-rbBDvP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c4e958d7-515b-4b09-85de-e051621086a5"
train_size = int(len(df) * 0.95)
test_size = len(df) - train_size
train, test = df.iloc[0:train_size].copy(), df.iloc[train_size:len(df)].copy()  # copies avoid SettingWithCopyWarning when scaling below
print(train.shape, test.shape)
# + id="fUuBowFwBGdf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="feb7f70f-8ec9-45c4-9384-f87565917802"
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler = scaler.fit(train[['close']])
train['close'] = scaler.transform(train[['close']])
test['close'] = scaler.transform(test[['close']])
# + id="PjOF835fBJ7_" colab_type="code" colab={}
def create_dataset(X, y, time_steps=1):
Xs, ys = [], []
for i in range(len(X) - time_steps):
v = X.iloc[i:(i + time_steps)].values
Xs.append(v)
ys.append(y.iloc[i + time_steps])
return np.array(Xs), np.array(ys)
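# A quick shape check for the windowing logic on a hypothetical toy series
# (100 points, 30-step windows); each target is the value immediately
# following its window. The helper is restated so the snippet is self-contained:

```python
import numpy as np
import pandas as pd

def create_dataset(X, y, time_steps=1):
    Xs, ys = [], []
    for i in range(len(X) - time_steps):
        Xs.append(X.iloc[i:(i + time_steps)].values)
        ys.append(y.iloc[i + time_steps])
    return np.array(Xs), np.array(ys)

s = pd.DataFrame({'close': np.arange(100, dtype=float)})
X_toy, y_toy = create_dataset(s[['close']], s.close, time_steps=30)
print(X_toy.shape, y_toy.shape)  # (70, 30, 1) (70,)
print(y_toy[0])                  # 30.0, the value right after the first window
```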
# + id="Vzq_lsi3BM6Z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9642f217-ec8b-46cd-9327-c70fb97bfa38"
TIME_STEPS = 30
# reshape to [samples, time_steps, n_features]
X_train, y_train = create_dataset(train[['close']], train.close, TIME_STEPS)
X_test, y_test = create_dataset(test[['close']], test.close, TIME_STEPS)
print(X_train.shape)
# + id="PZb1M5nLBPew" colab_type="code" colab={}
model = keras.Sequential()
model.add(keras.layers.LSTM(
units=64,
input_shape=(X_train.shape[1], X_train.shape[2])
))
model.add(keras.layers.Dropout(rate=0.2))
model.add(keras.layers.RepeatVector(n=X_train.shape[1]))
model.add(keras.layers.LSTM(units=64, return_sequences=True))
model.add(keras.layers.Dropout(rate=0.2))
model.add(keras.layers.TimeDistributed(keras.layers.Dense(units=X_train.shape[2])))
model.compile(loss='mae', optimizer='adam')
# + id="2RO5EC27BUDl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 374} outputId="17cae9b0-6a95-466d-9979-ca87fad47c5d"
history = model.fit(
X_train, y_train,
epochs=10,
batch_size=32,
validation_split=0.1,
shuffle=False
)
# + id="lusTmTlvBnz0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 601} outputId="666e248b-fb79-44b9-f562-f21972ddc347"
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.legend();
# + id="3P8dMCs3Bppe" colab_type="code" colab={}
X_train_pred = model.predict(X_train)
train_mae_loss = np.mean(np.abs(X_train_pred - X_train), axis=1)
# + id="GG8bBNQEBsO1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 601} outputId="10f843d8-93af-4dc3-f183-3bbc6e3a0e9c"
sns.histplot(train_mae_loss.flatten(), bins=50, kde=True);  # distplot was removed from recent seaborn releases
# + id="sLrHNDJQBvji" colab_type="code" colab={}
X_test_pred = model.predict(X_test)
test_mae_loss = np.mean(np.abs(X_test_pred - X_test), axis=1)
# + id="vXSt9CHgBx8Y" colab_type="code" colab={}
THRESHOLD = 0.65
test_score_df = pd.DataFrame(index=test[TIME_STEPS:].index)
test_score_df['loss'] = test_mae_loss
test_score_df['threshold'] = THRESHOLD
test_score_df['anomaly'] = test_score_df.loss > test_score_df.threshold
test_score_df['close'] = test[TIME_STEPS:].close
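# The `THRESHOLD` above is hand-picked. A common alternative is to derive it from the training reconstruction-error distribution, for example the mean plus three standard deviations (a sketch on synthetic errors; `train_mae_loss_demo` stands in for the `train_mae_loss` computed above):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for train_mae_loss: per-window reconstruction errors on the training set.
train_mae_loss_demo = rng.normal(loc=0.3, scale=0.1, size=1000)

# Flag anything more than 3 standard deviations above the mean training error.
threshold = train_mae_loss_demo.mean() + 3 * train_mae_loss_demo.std()
print(round(threshold, 3))
```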
# + id="x_FZU2JkB0zC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 628} outputId="7da2a2a5-f2c8-4dd7-fc6e-c53eaa12ad10"
plt.plot(test_score_df.index, test_score_df.loss, label='loss')
plt.plot(test_score_df.index, test_score_df.threshold, label='threshold')
plt.xticks(rotation=25)
plt.legend();
# + id="KFNcHMXJB4JG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 235} outputId="7ec5b9f3-6a14-42ff-833d-27815b5477e7"
anomalies = test_score_df[test_score_df.anomaly == True]
anomalies.head()
# + id="bnzRSSsQB-Ma" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 652} outputId="3e293af2-665d-4976-dea0-e4a9fbf80e6c"
plt.plot(
test[TIME_STEPS:].index,
scaler.inverse_transform(test[TIME_STEPS:].close),
label='close price'
);
sns.scatterplot(
anomalies.index,
scaler.inverse_transform(anomalies.close),
color=sns.color_palette()[3],
s=52,
label='anomaly'
)
plt.xticks(rotation=25)
plt.legend();
# + id="tj_xKvLZCCuC" colab_type="code" colab={}
| Anomalydetection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] hide_input=true
# # Bode plot for first-order RC lowpass filter
# -
# ## The problem
#
# Consider the first-order lowpass filter circuit below:
#
# 
#
# The input signal $x(t)$ is the source voltage and the output $y(t)$ is the voltage across the capacitor.
#
# Using standard techniques the governing differential equation is found to be
# $$y(t) + RC \frac{dy(t)}{dt} = x(t).$$
# Note that the actual values of $R$ and $C$ don't matter, just the value of the product $RC$. Bear in mind, though, that a 1F capacitor would be huge and cost gigadollars, and a 1$\Omega$ resistor is not common.
#
# In this course we will see (or might already have seen) that the RC circuit has a transfer function
# $$H(\omega) = \frac{1/RC}{1/RC + j \omega}.$$
# This system has a real impulse response so $H(-\omega) = H^\ast(\omega)$. The exact analytical form for the steady-state response to the input signal $x(t) = \cos(\omega t)$ is quite easily shown to be
# $$y(t) = |H(\omega)| \cos(\omega t + \angle H(\omega)).$$
# This is all you need to know for now, and the details are explored in another workbook.
#
# This workbook investigates ways of visualising the transfer function $H(\omega)$. This is not trivial because, even though $\omega$ is real, $H(\omega)$ takes on complex values. Thus we need to plot both magnitude and phase as functions of frequency. Also, it turns out that expressing both domain and range on logarithmic axes makes it much easier to characterise the behaviour of the system. This leads to the conventional *Bode plot*.
# ## Simple visualisation of frequency response
#
# The simple way to visualise $H(\omega)$ is to choose a set of frequencies of interest and store them in an array `wv`. We can then evaluate $H(\omega)$ at these points, storing the results in another array `Hv`. Note that since `Hv` will be complex we cannot just plot it. Instead for any $\omega$ we can write the frequency response in magnitude-phase form
# $$H(\omega) = |H(\omega)| e^{j \angle H(\omega)}$$
# and make separate plots of $|H(\omega)|$ and $\angle H(\omega)$.
# +
# Two-sided Bode plot linear-linear
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib notebook
# Frequency response
RC = 1;
wv = np.linspace(-10, 10, 1000);
Hv = (1/RC)/((1/RC)+1j*wv);
# Display
fh, ax = plt.subplots(2);
ax[0].plot(wv, np.abs(Hv), c='g'); ax[0].set_ylabel(r'$|H(\omega)|$');
ax[1].plot(wv, np.angle(Hv), c='g'); ax[1].set_ylabel(r'$\angle H(\omega)$');
plt.xlabel(r'$\omega$');
# -
# It is difficult from the plots above to see the effect of the $RC$ value on the circuit. If you increase it, the magnitude plot, for example, becomes more "peaky", but the effect is hard to characterise.
#
# We note firstly that the condition $H(-\omega) = H^\ast(\omega)$ means that the magnitude $|H(\omega)|$ is always even and the phase $\angle H(\omega)$ is always odd. We may as well therefore only plot them for positive frequencies $\omega > 0$ - all the useful information is still available.
#
# Secondly, the magnitude plot above is linear in frequency $\omega$ and linear in the gain $|H(\omega)|$. A log-log plot turns out to be more useful. We can redo the plots above, but this time with the gain in decibels $G_{dB}(\omega) = 10 \log_{10} |H(\omega)|^2$ plotted against logarithmic frequency $\log_{10} \omega$:
# +
# One-sided Bode plot log-log
lwv = np.linspace(-3, 5, 1000); # linear points in log space
wv = 10**lwv; # actual frequencies
Hv = (1/RC)/((1/RC)+1j*wv); # frequency response
dbHv = 10*np.log10(np.abs(Hv)**2); # magnitude response in dB
fh, ax = plt.subplots(2);
ax[0].plot(lwv, dbHv, c='g'); ax[0].set_ylabel(r'$10 \log_{10} |H(\omega)|^2$');
ax[1].plot(lwv, np.angle(Hv), c='g'); ax[1].set_ylabel(r'$\angle H(\omega)$');
plt.xlabel(r'$\log_{10} \omega$');
# -
# The magnitude plot now has two clear regions: a flat passband and a stopband with a linear roll-off, separated by a "knee". We can investigate these two regions in more detail. First we note that the transfer function can be written as
# $$H(\omega) = \frac{1}{1 + j \omega RC}.$$
# The gain in dB can thus be written as
# $$G_{dB}(\omega) = 10 \log_{10} |H(\omega)|^2
# = 10 \log_{10} \left( \frac{1}{1 + j \omega RC} \frac{1}{1 - j \omega RC} \right) = -10 \log_{10} (1 + (\omega RC)^2).$$
#
# Consider the term in the logarithm:
#
# * For the case $\omega RC \ll 1$, or $\omega \ll 1/(RC)$, we have approximately
# $$G_{dB}(\omega) \approx -10 \log_{10} (1) = 0.$$
# This is one asymptote.<br><br>
#
# * For the case $\omega RC \gg 1$, or $\omega \gg 1/(RC)$,
# $$G_{dB}(\omega) \approx -10 \log_{10} (\omega RC)^2 = -20 \log_{10} (\omega RC) = -20 [ \log_{10} \omega] - 20 \log_{10} (RC).$$
# This is another asymptote.
#
# These two asymptotes cross at $\omega = 1/(RC)$, the location of the knee. This is called the *cutoff frequency* of the filter, and it marks the transition from the passband to the stopband.
#
# We can redo the magnitude plot and show these two asymptotes:
# +
fh = plt.figure();
plt.plot(lwv, dbHv, c='g'); plt.ylabel(r'$G_{dB}(\omega)$');
plt.xlabel(r'$\log_{10} \omega$');
yax = plt.gca().get_ylim();
as0v = np.zeros(lwv.shape);
as1v = -20*lwv - 20*np.log10(RC);
plt.plot(lwv, as0v, 'r', lwv, as1v, 'r');
plt.gca().set_ylim(yax);
# -
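# As a quick numerical sanity check, the gain exactly at the cutoff $\omega_c = 1/(RC)$ should be $10 \log_{10}(1/2) \approx -3$ dB:

```python
import numpy as np

RC = 1
wc = 1 / RC                                # cutoff frequency
H = (1 / RC) / ((1 / RC) + 1j * wc)        # transfer function evaluated at the cutoff
G_dB = 10 * np.log10(np.abs(H) ** 2)       # gain in dB
print(G_dB)  # approximately -3.01
```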
# In this last plot the slope of the roll-off is seen to be 20dB for every unit increment of the x-axis $\log_{10} \omega$. However, note that $\log_{10} \omega = 0$ corresponds to $\omega = 1$, $\log_{10} \omega = 1$ corresponds to $\omega = 10$, $\log_{10} \omega = 2$ corresponds to $\omega = 100$, and so on. Thus an increase of one unit on the log frequency axis corresponds to an increase in frequency by a factor of 10. We call a factor of 10 increase in frequency a *decade*.
#
# Thus the first-order lowpass filter has a roll-off of 20dB per decade once above the cutoff at $\omega_c = 1/(RC)$.
#
# We could also plot the gain in dB against $\log_2 \omega$. Since $\log_{10} \omega = \log_2(\omega)/\log_2(10)$, once above the knee we can write
# $$G_{dB}(\omega) \approx -\frac{20}{\log_2(10)} [ \log_{2} \omega] - 20 \log_{10} (RC).$$
# An increase in $\log_2 \omega$ by one unit corresponds to a doubling of the frequency, called an *octave*. We see that a one-unit increase in $\log_2$ frequency results in a reduction in gain of $20/\log_2(10) \approx 6$dB.
#
# In other words, the first-order lowpass filter has a roll-off of 6dB per octave once above the cutoff.
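# We can also check the 6 dB/octave figure numerically: well above the cutoff, doubling the frequency reduces the gain by about $20/\log_2(10) \approx 6.02$ dB:

```python
import numpy as np

RC = 1

def gain_db(w):
    # Gain in dB of the first-order lowpass filter at frequency w
    H = 1 / (1 + 1j * w * RC)
    return 10 * np.log10(np.abs(H) ** 2)

drop = gain_db(100) - gain_db(200)   # one octave, deep in the stopband
print(drop)  # about 6.02
```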
# # Tasks
#
# These tasks involve writing code, or modifying existing code, to meet the objectives described.
#
# 1. The first-order highpass filter, which has R and C swapped in the circuit given above, can be shown to have a frequency response
# $$H(\omega) = \frac{j \omega RC}{1 + j \omega RC}.$$
# Generate a Bode plot for this system, both magnitude and phase, for the case of $RC=1$. The gain should be in dB, and both plots should use $\log_2 \omega$ as the independent variable. The frequency range should extend from $\log_2 \omega = -6$ to $\log_2 \omega = 6$. Find expressions for the two asymptotes and include them in the magnitude plot.<br><br>
#
# 2. A second-order RLC circuit
#
# 
#
# has transfer function
# $$H(\omega) = \frac{\frac{1}{RC} (j \omega)}{(j \omega)^2 + \frac{1}{RC} (j \omega) + \frac{1}{LC}}.$$
# It turns out that the fundamental parameters for this circuit are the resonant frequency $\omega_0 = 1/\sqrt{LC}$ and the damping factor $\alpha = 1/(2RC)$, giving
# $$H(\omega) = \frac{2 \alpha (j \omega)}{(j \omega)^2 + 2 \alpha (j \omega) + \omega_0^2}.$$
# Generate a Bode plot for the system for $R=10$ and $L=C=1$. The gain should again be expressed in dB with $\log_2 \omega$ as the independent variable, and the response should be shown for values of $\log_2 \omega$ over the range $-6$ to $6$.
#
# You should observe a bandpass filter characteristic, where the frequency $\omega_0$ is passed with the lowest attenuation. The quality factor for this circuit is $Q = \omega_0/(2 \alpha) = 2 \sqrt{C/L}$. A high Q value corresponds to a system that has a very sharp resonance, or a very small bandwidth.
| lab_bodeplot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Functions and defensive programming
# ## License
#
# All content can be freely used and adapted under the terms of the
# [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
#
# 
# ## Imports
#
# Put **all** the `import` statements in the cell below. Don't forget the `%matplotlib inline` so that plots appear in the notebook.
import numpy as np # Python's numerical computing library
# %matplotlib inline
# ## Functions
#
# ### Task - Creating a function
#
# Create two functions:
#
# * `celsius_para_kelvin`, which takes a temperature in Celsius as an argument and returns the temperature converted to Kelvin
# * `kelvin_para_fahr`, which takes a temperature in Kelvin and converts it to Fahrenheit
#
# Remember that:
#
# * Kelvin = Celsius + 273.15
# * Fahrenheit = (Kelvin - 273.15)*1.8 + 32
def celsius_para_kelvin(temp): # define the function in terms of the variable temp
    kelvin = temp + 273.15 # do the calculation
    return kelvin # hand back the result
def kelvin_para_fahr(temp): # define the function in terms of the variable temp
    fahr = (temp - 273.15)*1.8 + 32 # do the calculation
    return fahr # hand back the result
# The cells below run your functions and compare the output with the expected results. Use them to check that your functions are working (every cell should print `True`).
celsius_para_kelvin(0) == 273.15
celsius_para_kelvin(-273.15) == 0
celsius_para_kelvin(42) == 315.15
kelvin_para_fahr(273.15) == 32
kelvin_para_fahr(373.15) == 212
# ### Task - Composing functions
#
# Create a function called `celsius_para_fahr` that takes a temperature in Celsius and returns the temperature in Fahrenheit. You **must** use the two functions created above to compute the result.
def celsius_para_fahr(g): # define the function in terms of the variable g
    intermediario = celsius_para_kelvin(g) # convert it to Kelvin first
    resultado = kelvin_para_fahr(intermediario) # then convert the intermediate value to Fahrenheit
    return resultado # hand back the result
# The cells below run your function and compare the output with the expected results. Use them to check that your function is working (every cell should print `True`).
celsius_para_fahr(0) == 32
celsius_para_fahr(100) == 212
# ### Task - Documenting functions
#
# Write the functions above again (**with the same names**), this time adding *docstrings* to them. The docstring of a function should follow the format:
#
#     def minha_funcao(coisa1, coisa2):
#         """
#         Computes/does such and such using such and such formula,
#         assuming such and such.
#
#         Examples:
#         minha_funcao(10, 20) => 1345
#         minha_funcao(34, 42) => 145
#         """
def celsius_para_kelvin(temp):
    """
    Converts the temperature from Celsius to Kelvin by adding 273.15 to the value given in Celsius.
    Example:
    celsius_para_kelvin(15) => 288.15
    """
    kelvin = temp + 273.15
    return kelvin
def kelvin_para_fahr(temp):
    """
    Converts the temperature from Kelvin to Fahrenheit by subtracting 273.15 from the temperature given in Kelvin,
    then multiplying the result of that subtraction by 1.8 and adding 32 at the end.
    Example:
    kelvin_para_fahr(273.15) => 32
    """
    fahr = (temp - 273.15)*1.8 + 32
    return fahr
def celsius_para_fahr(g):
    """
    Converts the temperature from Celsius to Fahrenheit, first using the Celsius-to-Kelvin function and then
    taking the value found in Kelvin and converting it to Fahrenheit with the Kelvin-to-Fahrenheit function.
    Example:
    celsius_para_fahr(0) => 32
    """
    intermediario = celsius_para_kelvin(g)
    resultado = kelvin_para_fahr(intermediario)
    return resultado
# The cells below print the docstrings of each of the functions.
help(celsius_para_kelvin)
help(kelvin_para_fahr)
help(celsius_para_fahr)
# ## Defensive programming
# ### Task - Checking inputs
#
# Create the function `celsius_para_kelvin` again (with the docstring defined above). This time, use the `assert` statement to check that the given temperature is greater than or equal to absolute zero (0 Kelvin, or -273.15 Celsius). The `assert` must come before the calculation itself.
def celsius_para_kelvin(temp):
    """
    Converts the temperature from Celsius to Kelvin by adding 273.15 to the value given in Celsius.
    Example:
    celsius_para_kelvin(15) => 288.15
    """
    assert temp >= -273.15, "There is no temperature below absolute zero!" # check that the given value is at or above absolute zero
    kelvin = temp + 273.15
    return kelvin
# The cells below test whether the function is doing the right thing (failing when below absolute zero and not failing otherwise).
celsius_para_kelvin(-300) # Should raise an AssertionError
celsius_para_kelvin(-273.15) # Should not fail
celsius_para_kelvin(-273.16) # Should raise an AssertionError
# ### Task - Checking outputs
#
# Use `assert` statements to check that the function `celsius_para_fahr` returns the correct value for at least **5 different temperatures**.
#
# As an example, I'll leave 1 of the tests ready (yes, this one counts as 1 of the 5):
assert celsius_para_fahr(0) == 32
assert celsius_para_fahr(-273.15) == -459.66999999999996
assert celsius_para_fahr(500) == 932
assert celsius_para_fahr(1) == 33.8
assert celsius_para_fahr(7000) == 12632
# ## Putting it all together
# ### Task - Matrix arithmetic
#
# Create the functions below:
#
# * `msoma`: takes two matrices as input and returns the sum of the two matrices. Remember that two matrices can only be added if they have the same number of rows and columns.
# * `mmult`: takes two matrices as input and returns the product of the matrices. Remember that matrix A can only be multiplied by B if the number of columns of A equals the number of rows of B.
# * `vmult`: takes a matrix and a vector as input and returns the product of the matrix and the vector.
#
# All functions must have *docstrings* and `assert`s to check the inputs.
#
# **Hint**: Copy the code from previous classes.
a = [[1, 2, 3]]
b = [[1], [2], [3]]
a[0]
def msoma(A, B): # create the sum function, taking two matrices as input
    """
    Adds matrix A to matrix B, returning matrix C
    """
    nlin_a = len(A) # find the number of rows and columns of each matrix
    ncol_a = len(A[0])
    nlin_b = len(B)
    ncol_b = len(B[0])
    # check that the rules required for the operation are met
    assert nlin_a == nlin_b, "The number of rows of matrix A must equal the number of rows of matrix B"
    assert ncol_a == ncol_b, "The number of columns of matrix A must equal the number of columns of matrix B"
    C = [] # create matrix C, which will hold the answer once filled
    for i in range(nlin_a): # loop over the rows of matrix A
        linha = [] # create an empty row, which will be used to fill C
        for j in range(ncol_a): # inner loop over the elements of each row
            linha.append(A[i][j]+B[i][j]) # append to the row the sum of each pair of elements of A and B
        C.append(linha) # once the inner loop is done and the row is filled, append it to C
    return C # ta-da!
def mmult(A, B): # create the multiplication function, taking two matrices as input
    """
    Multiplies matrix A by matrix B, returning matrix C
    """
    # find the number of rows and columns of each matrix
    nlin_a = len(A)
    ncol_a = len(A[0])
    nlin_b = len(B)
    ncol_b = len(B[0])
    # check that the rule required for the operation is met
    assert ncol_a == nlin_b, "The number of columns of matrix A must equal the number of rows of matrix B"
    C = [] # create C empty
    for i in range(nlin_a): # one output row for each row of A
        linha = [] # create the empty row
        for j in range(ncol_b): # one output column for each column of B
            soma = 0 # reset the accumulator
            for z in range(ncol_a): # dot product of row i of A with column j of B
                soma = soma + A[i][z]*B[z][j]
            linha.append(soma) # append the completed sum to the row
        C.append(linha) # the row, once complete, is appended to C
    return C # ta-da!
def vmult(A, v): # create the multiplication function, taking a matrix and a vector as input
    """
    Multiplies matrix A by vector v, returning vector C
    """
    # find the number of rows and columns of the matrix
    nlin_a = len(A)
    ncol_a = len(A[0])
    # find the number of elements of the vector
    ncol_v = len(v)
    # check that the rule required for the operation is met
    assert ncol_v == ncol_a, "The number of elements of v must equal the number of columns of matrix A"
    C = [] # create the empty answer vector
    for i in range(nlin_a): # loop over the rows of matrix A
        soma = 0 # reset the accumulator
        for j in range(ncol_a): # loop over the elements of each row of A
            soma = soma + A[i][j]*v[j] # multiply each element by its counterpart in v and accumulate
        C.append(soma) # the final sum is appended to C
    return C
# Use the cells below to create `assert` statements that test whether your functions produce the expected results. Use the `np.allclose` function from [numpy](http://numpy.org/) to check whether two matrices or vectors have close (practically equal) values. Example:
#
#     A = [[1, 2], [3, 4]]
#     B = [[1, 2], [3, 4]]
#     np.allclose(A, B) -> True
#
#     C = [[1.1, 2], [3, 4]]
#     np.allclose(A, C) -> False
#
# **Each function must have at least 4 tests**.
#
# Also test whether your functions fail when the input is inappropriate (hint: put each of these cases in its own cell).
#
# As an example, I leave the first tests ready for you.
# Test whether msoma produces the expected result
A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8, 9], [10, 11, 12]]
C = [[8, 10, 12], [14, 16, 18]]
assert np.allclose(msoma(A, B), C)
# Put your asserts below
msoma([[1, 2, 3], [4, 5, 6]], [[1, 2], [1, 2]]) # Should raise an AssertionError
msoma([[1, 2, 3], [4, 5, 6]], [[1, 2, 3]]) # Should raise an AssertionError
A = [[10, 20, 30], [40, 50, 60]]
B = [[70, 80, 90], [100, 110, 120]]
C = [[80, 100, 120], [140, 160, 180]]
assert np.allclose(msoma(A, B), C)
A = [[2]]
B = [[4, 6, 8]]
mmult(A, B) # a 1x1 times 1x3 product is valid: [[8, 12, 16]]
A = [[2, 3], [1, 4], [1, 5]]
B = [[4, 6, 8], [1, 1, 1]]
C = [[11, 15, 19], [8, 10, 12], [9, 11, 13]]
assert np.allclose(mmult(A, B), C)
A = [[1]]
B = [[4, 6, 8], [1, 1, 1]]
mmult(A, B) # Should raise an AssertionError
A = [[1, 1], [1, 2], [1, -5]]
B = [[-2, 6, 8], [1, 1, 1]]
C = [[-1, 7, 9], [0, 8, 10], [-7, 1, 3]]
assert np.allclose(mmult(A, B), C)
A = [[2, 3], [1, 4], [1, 5]]
v = [4, 6]
C = [26, 28, 34]
assert np.allclose(vmult(A, v), C)
A = [[2, 3], [1, 4], [1, 5]]
v = [4]
vmult(A, v) # Should raise an AssertionError
A = [[2, 3], [1, 4], [1, 5]]
v = [-2, -1]
C = [-7, -6, -7]
assert np.allclose(vmult(A, v), C)
A = [[2, 3], [1, 4], [1, 5]]
v = [1, 1, 1]
vmult(A, v) # Should raise an AssertionError
# ## Bonus Task
#
# Create the function:
#
# * `mtrans`: takes a matrix as input and returns the transpose of that matrix. The transpose is the matrix with its rows turned into columns.
#
# The function must have a docstring, `assert`s to check the input, and tests to verify that the result is as expected.
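# One possible sketch of `mtrans`, following the same conventions as the functions above:

```python
def mtrans(A):
    """
    Returns the transpose of matrix A: rows become columns.
    Example:
    mtrans([[1, 2, 3], [4, 5, 6]]) => [[1, 4], [2, 5], [3, 6]]
    """
    # check that A is a non-empty matrix with rows of equal length
    assert len(A) > 0 and len(A[0]) > 0, "A must be a non-empty matrix"
    assert all(len(linha) == len(A[0]) for linha in A), "All rows of A must have the same length"
    C = []
    for j in range(len(A[0])):                    # one output row per input column
        C.append([A[i][j] for i in range(len(A))])
    return C

assert mtrans([[1, 2, 3], [4, 5, 6]]) == [[1, 4], [2, 5], [3, 6]]
```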
| funcoes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Model Components
#
# The 5 main components of a `WideDeep` model are:
#
# 1. `wide`
# 2. `deeptabular`
# 3. `deeptext`
# 4. `deepimage`
# 5. `deephead`
#
# The first 4 of them will be collected and combined by `WideDeep`, while the 5th one can be optionally added to the `WideDeep` model through its corresponding parameters: `deephead` or alternatively `head_layers`, `head_dropout` and `head_batchnorm`.
#
# Through the development of the package, the `deeptabular` component became one of its core values. Currently `pytorch-widedeep` offers three models that can be passed as the `deeptabular` component. The possibilities are numerous, and therefore that component will be discussed on its own in a separate notebook.
# ### 1. `wide`
#
# The `wide` component is a Linear layer "plugged" into the output neuron(s). This can be implemented in `pytorch-widedeep` via the `Wide` model.
#
# The only particularity of our implementation is that we have implemented the linear layer via an Embedding layer plus a bias. While the implementations are equivalent, the latter is faster and far more memory efficient, since we do not need to one hot encode the categorical features.
#
# Let's assume we have the following dataset:
# +
import torch
import pandas as pd
import numpy as np
from torch import nn
# -
df = pd.DataFrame({'color': ['r', 'b', 'g'], 'size': ['s', 'n', 'l']})
df.head()
# one hot encoded, the first observation would be
obs_0_oh = (np.array([1., 0., 0., 1., 0., 0.])).astype('float32')
# if we simply numerically encode (label encode or `le`) the values:
obs_0_le = (np.array([0, 3])).astype('int64')
# Note that in the functioning implementation of the package we start from 1, saving 0 for padding, i.e. unseen values.
#
# Now, let's see if the two implementations are equivalent
# we have 6 different values. Let's assume we are performing a regression, so pred_dim = 1
lin = nn.Linear(6, 1)
emb = nn.Embedding(6, 1)
emb.weight = nn.Parameter(lin.weight.reshape_as(emb.weight))
lin(torch.tensor(obs_0_oh))
emb(torch.tensor(obs_0_le)).sum() + lin.bias
# And this is precisely how the linear model `Wide` is implemented
from pytorch_widedeep.models import Wide
# ?Wide
wide = Wide(wide_dim=10, pred_dim=1)
wide
# Note that even though the input dim is 10, the Embedding layer has 11 weights. Again, this is because we save `0` for padding, which is used for unseen values during the encoding process.
#
# As I mentioned, `deeptabular` has enough complexity on its own and will be described in a separate notebook. Let's then jump to `deeptext`.
# ### 3. `deeptext`
#
# `pytorch-widedeep` offers one model that can be passed to `WideDeep` as the `deeptext` component, `DeepText`, which is a standard and simple stack of LSTMs on top of word embeddings. You could also add a FC-Head on top of the LSTMs. The word embeddings can be pre-trained. In the future I aim to include some simple pretrained models so that the combination between text and images is fair.
#
# *While I recommend using the `wide` and `deeptabular` models within this package when building the corresponding wide and deep model components, it is very likely that the user will want to use custom text and image models. That is perfectly possible. Simply, build them and pass them as the corresponding parameters. Note that the custom models MUST return a last layer of activations (i.e. not the final prediction) so that these activations are collected by `WideDeep` and combined accordingly. In addition, the models MUST also contain an attribute `output_dim` with the size of these last layers of activations.*
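# As a toy illustration of those two requirements (returning last-layer activations and exposing an `output_dim` attribute), a minimal custom text component might look like this sketch (`TinyTextModel` is hypothetical, not part of the package):

```python
import torch
from torch import nn

class TinyTextModel(nn.Module):
    # Toy custom `deeptext` component: embeds token ids, averages the embeddings
    # over the sequence, and returns the activations of its last linear layer.
    def __init__(self, vocab_size=100, embed_dim=8, out_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.fc = nn.Linear(embed_dim, out_dim)
        self.output_dim = out_dim  # required attribute: size of the last layer of activations

    def forward(self, X):
        # returns activations, not final predictions
        return self.fc(self.embed(X.long()).mean(dim=1))

out = TinyTextModel()(torch.ones(2, 5))
print(out.shape)  # torch.Size([2, 16])
```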
#
# Let's have a look to the `DeepText` class within `pytorch-widedeep`
import torch
from pytorch_widedeep.models import DeepText
# ?DeepText
X_text = torch.cat((torch.zeros([5,1]), torch.empty(5, 4).random_(1,4)), axis=1)
deeptext = DeepText(vocab_size=4, hidden_dim=4, n_layers=1, padding_idx=0, embed_dim=4)
deeptext
# You could, if you wanted, add a Fully Connected Head (FC-Head) on top of it
deeptext = DeepText(vocab_size=4, hidden_dim=8, n_layers=1, padding_idx=0, embed_dim=4, head_hidden_dims=[8,4])
deeptext
# Note that since the FC-Head will receive the activations from the last hidden layer of the stack of RNNs, the corresponding dimensions must be consistent.
# ### 4. DeepImage
#
# Similarly to `deeptext`, `pytorch-widedeep` offers one model that can be passed to `WideDeep` as the `deepimage` component, `DeepImage`, which is either a pretrained ResNet (18, 34 or 50; the default is 18) or a stack of CNNs, to which one can add an FC-Head. If it is a pretrained ResNet, you can choose how deep into the network you want to unfreeze layers with the parameter `freeze_n`
from pytorch_widedeep.models import DeepImage
# ?DeepImage
X_img = torch.rand((2,3,224,224))
deepimage = DeepImage(head_hidden_dims=[512, 64, 8], head_activation="leaky_relu")
deepimage
deepimage(X_img)
# if `pretrained=False` then a stack of 4 CNNs are used
deepimage = DeepImage(pretrained=False, head_hidden_dims=[512, 64, 8])
deepimage
# ### 5. deephead
#
# Unlike the rest of the components, the `deephead` component is not defined outside `WideDeep`.
#
# When defining the `WideDeep` model there is a parameter called `head_layers` (along with the corresponding related parameters; see the package documentation) that defines the FC-head on top of `DeepDense`, `DeepText` and `DeepImage`.
#
# Of course, you could also choose to define it yourself externally and pass it in using the parameter `deephead`. Have a look:
from pytorch_widedeep.models import WideDeep
# ?WideDeep
| examples/02_1_Model_Components.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Largest prime factor
#
#
#
# The prime factors of 13195 are 5, 7, 13 and 29.
#
# What is the largest prime factor of the number 600851475143 ?
#
# +
# %%time
import numpy as np
import math
myprime = 600851475143
primelimit = 1e6
# Find primes with the sieve of Eratosthenes (covering numbers up to primelimit//2)
sievelimit = int(primelimit // 2)
primes = np.ones(sievelimit + 1)
primes[0] = 0
primes[1] = 0
for i in range(2, int(math.sqrt(sievelimit)) + 1):
    notprime = i * i
    while notprime <= sievelimit:
        primes[notprime] = 0
        notprime = notprime + i
# Try out the primes, dividing each prime factor out completely
# (integer division keeps myprime exact; repeated factors are handled by the inner loop)
i = 2
factors = []
while myprime > 1:
    if primes[i] > 0:
        while myprime % i == 0:
            myprime = myprime // i
            factors.append(i)
            largest = i
    i = i + 1
# Multiplying the factors back together should reproduce the original number
num = 1
for factor in factors:
    num = num * factor
print(num)
print(largest)  # the answer: the largest prime factor
# -
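# For comparison, plain trial division (dividing each candidate factor out completely before moving on) solves the problem without building a sieve:

```python
def largest_prime_factor(n):
    # Divide out each candidate factor completely; any candidate that divides n
    # after all smaller factors have been removed is necessarily prime.
    factor = 2
    largest = 1
    while factor * factor <= n:
        while n % factor == 0:
            n //= factor
            largest = factor
        factor += 1
    if n > 1:          # whatever remains is itself a prime factor
        largest = n
    return largest

print(largest_prime_factor(600851475143))  # 6857
```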
| 0000/Problem_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
pd.set_option('chained_assignment', None) # silence the SettingWithCopyWarning so the pandas apply workflow below runs cleanly
import pickle
# +
#concatenate all NYT dataframes
NYT_headlines_2020 = pd.read_csv('NYT_headlines_2020.csv')
NYT_headlines_2019 = pd.read_csv('NYT_headlines_2019.csv')
NYT_headlines_2018 = pd.read_csv('NYT_headlines_2018.csv')
NYT_headlines_2017 = pd.read_csv('NYT_headlines_2017.csv')
NYT_headlines_2016 = pd.read_csv('NYT_headlines_2016.csv')
NYT_headlines_2015 = pd.read_csv('NYT_headlines_2015.csv')
NYT_headlines_2014 = pd.read_csv('NYT_headlines_2014.csv')
NYT_headlines_2013 = pd.read_csv('NYT_headlines_2013.csv')
NYT_headlines_2012 = pd.read_csv('NYT_headlines_2012.csv')
NYT_headlines_2011 = pd.read_csv('NYT_headlines_2011.csv')
NYT_headlines_2010 = pd.read_csv('NYT_headlines_2010.csv')
# +
NYT_headlines_2010['year']=2010
NYT_headlines_2011['year']=2011
NYT_headlines_2012['year']=2012
NYT_headlines_2013['year']=2013
NYT_headlines_2014['year']=2014
NYT_headlines_2015['year']=2015
NYT_headlines_2016['year']=2016
NYT_headlines_2017['year']=2017
NYT_headlines_2018['year']=2018
NYT_headlines_2019['year']=2019
NYT_headlines_2020['year']=2020
NYT_headlines_2015
# +
NYT_all = pd.concat([NYT_headlines_2010,
NYT_headlines_2011,
NYT_headlines_2012,
NYT_headlines_2013,
NYT_headlines_2014,
NYT_headlines_2015,
NYT_headlines_2016,
NYT_headlines_2017,
NYT_headlines_2018,
NYT_headlines_2019,
NYT_headlines_2020])
NYT_all = NYT_all.rename(columns={"0" : 'headline'})
NYT_all
# -
type(NYT_all.iloc[1].headline)
# +
# tokenize with nltk before doing anything else
from nltk.tokenize import TweetTokenizer
tweet_tokenizer = TweetTokenizer()
def tokenize_headline(headline):
if type(headline) == str: #apparently some of them aren't strings...
tokenized_headline = tweet_tokenizer.tokenize(headline)
return tokenized_headline
NYT_all['tokenized_nltk'] = NYT_all.headline.apply(tokenize_headline)
NYT_all
# -
NYT_all = NYT_all.dropna() # used to be 571859, now it's 571845. not a loss
# +
# in all following steps lowercase and remove punctuation
import string
string.punctuation
def nltk_remove_punct_and_lowercase(text):
lower = [token.lower() for token in text]
no_punctuation = [token for token in lower if token not in string.punctuation]
return no_punctuation
NYT_all['lower_words_nltk'] = NYT_all.tokenized_nltk.apply(nltk_remove_punct_and_lowercase)
# I call this lower_words, instead of tokens because it's just the words, excluding the punctuation, and because it's lowercased
# nltk because I've used the nltk tokenizer
NYT_all
# +
# get headline length
NYT_all['lower_words_nltk_n'] = NYT_all['lower_words_nltk'].apply(len)
NYT_all
# -
NYT_all[NYT_all['lower_words_nltk_n'] == 0] # three rows contain zero words, delete them
# +
# get number of characters
NYT_all['char_n'] = NYT_all['headline'].apply(len)
NYT_all
# -
NYT_all = NYT_all[NYT_all.lower_words_nltk_n != 0]
# +
# get average word length
def get_average_word_length(tokens):
lengths = [len(token) for token in tokens]
avg_word_length = sum(lengths) / len(tokens)
return avg_word_length
NYT_all['lower_words_nltk_mean_len'] = NYT_all.lower_words_nltk.apply(get_average_word_length)
NYT_all
#lower_words_nltk_mean_len seems like a bit of a mouthful
#lowe_words_nltk just means lowercased words as tokenized by nltk, mean len if obv
# this could just be mean word length
# +
# count sentences
# tokenize the raw text into sentences
from nltk import sent_tokenize
def count_sentences(headline):
return len(sent_tokenize(headline))
NYT_all['sents'] = NYT_all.headline.apply(count_sentences)
NYT_all
# +
# count stop words with nltk
from nltk.corpus import stopwords
nltk_stopwords = set(stopwords.words('english')) #get the stopwords as a set, makes checking quicker
def count_nltk_stopwords(headline):
found_stopwords = [token for token in headline if token in nltk_stopwords] # avoid shadowing the imported name 'stopwords'
return len(found_stopwords)
NYT_all['nltk_stopwords_n'] = NYT_all.lower_words_nltk.apply(count_nltk_stopwords)
NYT_all
# +
# get stop word ratio
NYT_all['nltk_stopwords_ratio'] = NYT_all.nltk_stopwords_n / NYT_all.lower_words_nltk_n
NYT_all
# +
# count reader-addressing pronouns
reader_addressing_pronouns = ["you", "you're", "you'd", "you'll", "your", "yours"]
reader_addressing_pronouns = set(reader_addressing_pronouns)
def count_reader_addressing_pronouns(headline):
tokenized_headline = tweet_tokenizer.tokenize(headline)
lower = [token.lower() for token in tokenized_headline]
raps = [token for token in lower if token in reader_addressing_pronouns]
return len(raps)
NYT_all['raps_n'] = NYT_all.headline.apply(count_reader_addressing_pronouns)
NYT_all
# -
NYT_all.headline.tail()
# +
# get first_person_sg
first_person_sg_pronouns = ["i", "i'm", "i'd", "i'll", "my", "me", "mine"]
first_person_sg_pronouns = set(first_person_sg_pronouns)
def get_first_person_sg(headline_as_words):
first_person_sg = [token for token in headline_as_words if token in first_person_sg_pronouns]
return len(first_person_sg)
NYT_all['first_person_sg'] = NYT_all.lower_words_nltk.apply(get_first_person_sg)
NYT_all
# +
# get first_person_pl
first_person_pl_pronouns = ["we", "we're", "we'd", "we'll", "our", "us", "ours"]
first_person_pl_pronouns = set(first_person_pl_pronouns)
def get_first_person_pl(headline_as_words):
first_person_pl = [token for token in headline_as_words if token in first_person_pl_pronouns]
return len(first_person_pl)
NYT_all['first_person_pl'] = NYT_all.lower_words_nltk.apply(get_first_person_pl)
NYT_all
# +
# get future forms
future_forms = ["will", "won't", "i'll", "you'll", "he'll", "she'll", "it'll", "we'll"]
future_forms = set(future_forms)
def get_future_forms(headline_as_words):
future_occurrence = [token for token in headline_as_words if token in future_forms]
return len(future_occurrence)
NYT_all['future'] = NYT_all.lower_words_nltk.apply(get_future_forms)
NYT_all
# +
# get all part of speech, both as count and as ratio
import nltk
def get_pos_tags(tokens):
pos_tags = nltk.pos_tag(tokens)
# only_pos_tag = [pos_tag[1] for pos_tag in pos_tags] #this would only get the pos_tag withohut the associated word
return pos_tags # or only_pos_tag
NYT_all['pos_tags_word_level'] = NYT_all.lower_words_nltk.apply(get_pos_tags)
NYT_all
# some_pos_tags1 = get_pos_tags(upworthy_cleaned.tokenized_nltk.iloc[0])
# some_pos_tags1
# some_pos_tags2 = get_pos_tags(upworthy_cleaned.lower_words_nltk.iloc[0])
# some_pos_tags2 #weirdly, this works better, even though it's on the lower_words without punctuation
# +
def get_nouns(pos_tags):
nouns = [pos_tag[0] for pos_tag in pos_tags if pos_tag[1].startswith('N')]
return nouns
def get_pronouns(pos_tags):
pronouns = [pos_tag[0] for pos_tag in pos_tags if pos_tag[1].startswith('PRP')]
return pronouns
def get_verbs(pos_tags):
verbs = [pos_tag[0] for pos_tag in pos_tags if pos_tag[1].startswith('V')]
return verbs
def get_adjs(pos_tags):
adjs = [pos_tag[0] for pos_tag in pos_tags if pos_tag[1].startswith('J')]
return adjs
def get_advs(pos_tags):
advs = [pos_tag[0] for pos_tag in pos_tags if pos_tag[1].startswith('R')]
return advs
def get_superlatives(pos_tags):
superlatives = [pos_tag[0] for pos_tag in pos_tags if pos_tag[1] == 'JJS' or pos_tag[1] == 'RBS'] # adjectival and adverbial superlatives
return superlatives
def get_w_words(pos_tags):
w_words = [pos_tag[0] for pos_tag in pos_tags if pos_tag[1].startswith('W')]
return w_words
def get_interjections(pos_tags):
interjections = [pos_tag[0] for pos_tag in pos_tags if pos_tag[1] == 'UH']
return interjections
def get_dets(pos_tags):
dets = [pos_tag[0] for pos_tag in pos_tags if pos_tag[1] == 'DT']
return dets
demonstratives = ['this', 'that', 'these', 'those']
demonstratives = set(demonstratives)
def get_dems(words):
dems = [word for word in words if word in demonstratives]
return len(dems)
# determiners are not informative, since they include articles.
# demonstratives could be more informative, indicating cataphora.
NYT_all['nouns'] = NYT_all['pos_tags_word_level'].apply(get_nouns)
NYT_all['pronouns'] = NYT_all['pos_tags_word_level'].apply(get_pronouns)
NYT_all['verbs'] = NYT_all['pos_tags_word_level'].apply(get_verbs)
NYT_all['adjs'] = NYT_all['pos_tags_word_level'].apply(get_adjs)
NYT_all['advs'] = NYT_all['pos_tags_word_level'].apply(get_advs)
NYT_all['superlatives'] = NYT_all['pos_tags_word_level'].apply(get_superlatives)
NYT_all['w_words'] = NYT_all['pos_tags_word_level'].apply(get_w_words)
NYT_all['dems'] = NYT_all.lower_words_nltk.apply(get_dems)
NYT_all
# +
lexical_classes = ['nouns', 'pronouns', 'verbs' ,'adjs', 'advs', 'superlatives', 'w_words']
for lexical_class in lexical_classes:
NYT_all[lexical_class +'_n'] = NYT_all[lexical_class].apply(len)
NYT_all
# +
lexical_classes = ['nouns', 'pronouns', 'verbs' ,'adjs', 'advs', 'superlatives', 'w_words']
for lexical_class in lexical_classes:
NYT_all[lexical_class +'_ratio'] = NYT_all[lexical_class + '_n'] / NYT_all['lower_words_nltk_n']
NYT_all
# +
# get first pos-tag per headline
def get_first_pos_tag(pos_tags):
return pos_tags[0][1]
NYT_all['first_pos_tag'] = NYT_all['pos_tags_word_level'].apply(get_first_pos_tag)
NYT_all
# -
# +
# check for uppercase
def count_uppercase_words(tokens):
uppercase_words = [token for token in tokens if len(token) > 1 and token.isupper()]
return len(uppercase_words)
NYT_all['uppercase_n'] = NYT_all.tokenized_nltk.apply(count_uppercase_words)
NYT_all[NYT_all['uppercase_n'] != 0]
# +
# count question marks, count exlamation marks
def count_question_marks(text):
question_marks = [char for char in text if char == "?"] # applied to the raw headline, so this iterates characters
return len(question_marks)
NYT_all['question_marks_n'] = NYT_all.headline.apply(count_question_marks)
NYT_all
# -
def count_exclamation_marks(text):
exclamation_marks = [char for char in text if char == "!"]
return len(exclamation_marks)
# +
# count dots
def count_dots(text):
dots = [char for char in text if char == "."]
return len(dots)
NYT_all['dots_n'] = NYT_all.headline.apply(count_dots)
NYT_all
# +
# count unusual punctuation separately?
NYT_all['exclamation_marks_n'] = NYT_all.headline.apply(count_exclamation_marks)
NYT_all
# +
# get + count numbers
import re
def get_numbers(headline):
return [int(s) for s in re.findall(r'\b\d+\b', headline)]
NYT_all['numbers'] = NYT_all['headline'].apply(get_numbers)
NYT_all
# +
# convert some of the count features to ratio features
# +
# convert some of the count features to presence features
# this turns a count into a binary presence feature: i.e. from "how many RAPs?" to "are there any RAPs?"
def bigger_than_0(some_number):
if some_number:
return 1
else:
return 0
for col in NYT_all.columns:
print(col)
# candidate count columns to binarize:
# 'nouns_n', 'verbs_n', 'adjs_n', 'advs_n'
### more distinctive perhaps:
# 'raps_n', 'first_person_sg', 'first_person_pl', 'future', 'dems',
# 'superlatives_n', 'w_words_n', 'uppercase_n', 'question_marks_n', 'exclamation_marks_n'
cols = ['nouns_n',
'verbs_n',
'adjs_n',
'advs_n',
'pronouns_n',
'raps_n',
'first_person_sg',
'first_person_pl',
'future',
'dems',
'superlatives_n',
'numbers',
'w_words_n',
'uppercase_n',
'question_marks_n',
'exclamation_marks_n']
for col in cols:
NYT_all[col + '_bin'] = NYT_all[col].apply(bigger_than_0)
NYT_all
# +
# get Flesch Reading Ease
import textstat
def get_FRE(headline):
return textstat.flesch_reading_ease(headline)
NYT_all['Flesch_Reading_Ease'] = NYT_all['headline'].apply(get_FRE)
NYT_all
# +
# detect the presence of a colon though? just in case:
def count_colons(text):
colons = [char for char in text if char == ":"]
return len(colons)
NYT_all['colons_n'] = NYT_all.headline.apply(count_colons)
NYT_all
# -
NYT_all[NYT_all['w_words_n']>0]
# In 2016, NYT became clickbait
# +
small_vals = NYT_all[['year', 'nouns_ratio', 'verbs_ratio', 'adjs_ratio', 'advs_ratio',
'superlatives_ratio', 'w_words_ratio', 'uppercase_n',
'question_marks_n', 'exclamation_marks_n', 'nouns_n_bin', 'verbs_n_bin',
'adjs_n_bin', 'advs_n_bin', 'raps_n_bin', 'first_person_sg_bin',
'first_person_pl_bin', 'future_bin', 'dems_bin', 'superlatives_n_bin',
'w_words_n_bin', 'uppercase_n_bin', 'question_marks_n_bin',
'exclamation_marks_n_bin']]
small_vals
# +
#pickle the df with all features as ling_features
import pickle
NYT_all.to_pickle("./NYT_all_extended.pkl")
# -
# # THE END, lemmatization and zipf frequency (below) not used in present study
import pickle
with open("./NYT_all_extended.pkl", 'rb') as pickle_file:
NYT_all = pickle.load(pickle_file)
NYT_all
# +
# lemmatize and preprocess
# +
import spacy
import en_core_web_sm
nlp = en_core_web_sm.load()
def lemmatize_text(text):
doc = nlp(text)
result = ' '.join([x.lemma_ for x in doc])
return result
# -
NYT_all['lemmatized_headline'] = NYT_all['headline'].apply(lemmatize_text)
NYT_all
NYT_all.rename(columns={'Unnamed: 0': 'number'}, inplace=True)
NYT_all = NYT_all[['number', 'tokenized_nltk', 'headline', 'lower_words_nltk', 'lemmatized_headline']]
NYT_all
# +
from nltk.tokenize import TweetTokenizer
tweet_tokenizer = TweetTokenizer()
import string
def remove_punct_and_lowercase(text):
tokenized_tweet = tweet_tokenizer.tokenize(text)
lower = [token.lower() for token in tokenized_tweet]
no_punctuation = [token for token in lower if token not in string.punctuation]
return no_punctuation
# -
NYT_all['preprocessed_headline'] = NYT_all['lemmatized_headline'].apply(remove_punct_and_lowercase)
NYT_all
NYT_all.to_pickle("./lemmatized_NYT.pkl")
# # get freqs
# +
from wordfreq import zipf_frequency
def get_mean_freq(lemmatized):
freqs = [zipf_frequency(word, 'en') for word in lemmatized]
if freqs: # guard against empty token lists
return sum(freqs) / len(freqs)
else:
return 0
# -
NYT_all['avg_word_freq'] = NYT_all['preprocessed_headline'].apply(get_mean_freq)
NYT_all
# +
import seaborn as sns
sns.set_theme(style = 'whitegrid')
sns.histplot(x="avg_word_freq", bins = 25, data=NYT_all)
| ANALYSIS/NYT_ling_features.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R 3.3
# language: R
# name: ir33
# ---
# #### This exercise demonstrates how to get the frequency distribution of words used in a chunk of text. The dataset used can be found [here](https://github.com/hxchua/datadoubleconfirm/blob/master/datasets/DisneySongs25.csv).
textdata<-read.csv("C:/Users/HuiXiang/Documents/DisneySongs25.csv",header=T)
head(textdata) #looking at the first few rows
nrow(textdata) #checking number of rows in dataset
data<-as.data.frame(textdata$Lyrics) #we're interested in the lyrics column only
length(data) #check that we only took one column
nrow(data) #checking number of rows again
#creating a function that cleans the text data
one<-function(str){
m <- as.data.frame((str))
colnames(m) <- "word"
m <- as.matrix(m)
for(i in 1:nrow(m)){
m[i,] = gsub('[[:punct:] ]','', m[i,]) #removing punctuations or symbols
m[i,] = tolower(m[i,]) #convert all to lowercase
}
return(m)
}
str <- strsplit(paste(data[1,]), split = " ") #taking the first row and creating a matrix with each row containing one word
r = one(str) #applying the cleaning function to each word
r
#applying the text cleaning function to rest of the rows and combine all words in one dataframe
for(i in 2:nrow(data)){
str <- strsplit(paste(data[i,]), split = " ")
m = one(str)
r = rbind(r,m)
}
nrow(r)
library(plyr)
tab = as.data.frame(count(r))
head(tab)
f<-tab[rev(order(tab$freq)),]
f[1:50,]
# #### Removing punctuation from words can misrepresent some of them, e.g. "we're" becomes "were" and "it's" becomes "its". This is okay if there aren't many short forms. To be more precise, we can apply text stemming (before removing punctuation or symbols). We will make use of a package called SnowballC, which derives the root word for each term. An example is shown below.
library(SnowballC)
as.data.frame(wordStem(c("we're","were","its","it's")))
one<-function(str){
m <- as.data.frame((str))
colnames(m) <- "word"
m <- as.matrix(m)
for(i in 1:nrow(m)){
m[i,] = wordStem(m[i,]) #note this addition into the function for text stemming
m[i,] = gsub('[[:punct:] ]','', m[i,])
m[i,] = tolower(m[i,])
}
return(m)
}
#re-running the text cleaning function through the list of words
str <- strsplit(paste(data[1,]), split = " ")
r = one(str)
for(i in 2:nrow(data)){
str <- strsplit(paste(data[i,]), split = " ")
m = one(str)
r = rbind(r,m)
}
tab = as.data.frame(count(r))
f<-tab[rev(order(tab$freq)),]
f[1:50,]
# #### If you're not interested in stopwords, e.g. "the", "a", "and", we can make use of another package called tm. A tutorial can be found [here](https://projectosyo.wixsite.com/datadoubleconfirm/single-post/2017/12/24/Text-mining---Process---R).
| notebooks/Text Frequency Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Quantum Measurement
#
# [Download the notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/mindquantum/zh_cn/mindspore_quantum_measurement.ipynb) 
# [Download the sample code](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/mindquantum/zh_cn/mindspore_quantum_measurement.py) 
# [View source on Gitee](https://gitee.com/mindspore/docs/blob/master/docs/mindquantum/docs/source_zh_cn/quantum_measurement.ipynb)
#
# ## Overview
#
# When designing quantum circuits, we ultimately obtain results through measurement operations. A measurement is performed with respect to a chosen set of basis states; its outcome is probabilistic, and after the measurement the quantum state randomly collapses onto one of the measured basis states.
#
# A quantum measurement is described by a set of measurement operators ${M_m}$ acting on the state space of the measured system, where the index $m$ labels the possible outcomes of the experiment. If the system is in state $|\psi⟩$ before the measurement, the probability of outcome $m$ is:
#
# $$
# p(m)=⟨\psi|M^\dagger_mM_m|\psi⟩
# $$
#
# After the measurement, the state of the system collapses to:
#
# $$
# \frac{M_m|\psi⟩}{\sqrt{⟨\psi|M^\dagger_mM_m|\psi⟩}}
# $$
#
# The measurement operators satisfy the completeness equation:
#
# $$
# \Sigma_mM^\dagger_mM_m=I
# $$
#
# The completeness equation expresses the fact that the probabilities sum to 1:
#
# $$
# 1=\Sigma_m p(m)=\Sigma_m ⟨\psi|M^\dagger_mM_m|\psi⟩
# $$
#
# This equation holds for all $|\psi⟩$ and is equivalent to the completeness equation, but the completeness equation is simpler to verify directly, so it is the one used as a constraint.
#
# Depending on the choice of measurement operators, common measurements include computational-basis measurements, projective measurements, and Pauli measurements. MindQuantum provides rich measurement and visualization features, which we use below to explore quantum measurement further.
# ## Computational-Basis Measurement
#
# Let us first build some intuition for computational-basis measurement. Suppose we have an n-qubit state and perform an n-qubit computational-basis measurement. If the result is $00 \cdots0$, the state of the n-qubit system has collapsed to $|00 \cdots0⟩$. Similarly, if we measure just one of the qubits, half of the $2^n$ possibilities it represents are ruled out: the measurement projects the quantum state into one of two equally sized subspaces, meaning one subsystem of the n-qubit state has collapsed.
#
# ### Measuring a Single Qubit in the Computational Basis
#
# The computational-basis measurement operators are $M_0=|0⟩⟨0|$ and $M_1=|1⟩⟨1|$. Note that each operator is Hermitian, i.e. $M_0^\dagger=M_0,M_1^\dagger=M_1$, and $M^2_0=M_0,M^2_1=M_1$, so the completeness relation holds:
#
# $$
# I=M^\dagger_0M_0+M^\dagger_1M_1=M_0+M_1
# $$
#
# 假设被测量状态$|\psi⟩=a|0⟩+b|1⟩$,则获得测量结果0的概率是:
#
# $$
# \begin{align*}
# p(0)&=⟨\psi|M^\dagger_0M_0|\psi⟩\\
# &=⟨\psi|M_0|\psi⟩\\
# &=⟨\psi|(|0⟩⟨0|)|\psi⟩\\
# &=(⟨\psi|0⟩)(⟨0|\psi⟩)\\
# &=[(⟨0|a^{\star}+⟨1|b^{\star})|0⟩][⟨0|(a|0⟩+b|1⟩)]\\
# &=(a^{\star}⟨0|0⟩+b^{\star}⟨1|0⟩)(a⟨0|0⟩+b⟨1|0⟩)\\
# &=a^{\star}a\\
# &=|a|^2
# \end{align*}
# $$
#
# Similarly, the probability of outcome 1 is $p(1)=|b|^2$. In the two cases, the post-measurement states are:
#
# $$
# \begin{align*}
# \frac{M_0|\psi⟩}{|a|}=\frac{a}{|a|}|0⟩\\
# \frac{M_1|\psi⟩}{|b|}=\frac{b}{|b|}|1⟩\\
# \end{align*}
# $$
#
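# The single-qubit algebra above is easy to check numerically. The following is an illustrative NumPy sketch, not part of the MindQuantum tutorial itself; the amplitudes a and b are arbitrary example values:

```python
import numpy as np

# Measurement operators M0 = |0><0| and M1 = |1><1|
M0 = np.array([[1, 0], [0, 0]], dtype=complex)
M1 = np.array([[0, 0], [0, 1]], dtype=complex)

# An arbitrary normalized single-qubit state |psi> = a|0> + b|1>
a, b = 0.6, 0.8j
psi = np.array([a, b])

# p(m) = <psi| M_m^dagger M_m |psi>
p0 = np.real(psi.conj() @ M0.conj().T @ M0 @ psi)
p1 = np.real(psi.conj() @ M1.conj().T @ M1 @ psi)

print(p0, p1)  # approximately 0.36 and 0.64, i.e. |a|^2 and |b|^2
```

As expected, the two probabilities are $|a|^2$ and $|b|^2$ and sum to 1.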
# ### Measuring Multiple Qubits in the Computational Basis: the Two-Qubit Case
#
# #### Measuring All Qubits in the System
#
# For a two-qubit system, the computational-basis measurement operators are $M_{00}=|00⟩⟨00|,M_{01}=|01⟩⟨01|,M_{10}=|10⟩⟨10|$ and $M_{11}=|11⟩⟨11|$. Each operator is Hermitian, i.e. $M_{ij}^\dagger=M_{ij},i,j\in\{0,1\}$, and $M_{ij}^2=M_{ij}$, so the completeness relation holds:
#
# $$
# I=M^\dagger_{00}M_{00}+M^\dagger_{01}M_{01}+M^\dagger_{10}M_{10}+M^\dagger_{11}M_{11}=M_{00}+M_{01}+M_{10}+M_{11}
# $$
#
# Suppose the measured state is $|\psi⟩=a|00⟩+b|01⟩+c|10⟩+d|11⟩$; then the probability of obtaining outcome 00 is:
#
# $$
# \begin{align*}
# p(00)&=⟨\psi|M^\dagger_{00}M_{00}|\psi⟩\\
# &=⟨\psi|M_{00}|\psi⟩\\
# &=⟨\psi|(|00⟩⟨00|)|\psi⟩\\
# &=(⟨\psi|00⟩)(⟨00|\psi⟩)\\
# &=[(⟨00|a^{\star}+⟨01|b^{\star}+⟨10|c^{\star}+⟨11|d^{\star})|00⟩][⟨00|(a|00⟩+b|01⟩+c|10⟩+d|11⟩)]\\
# &=(a^{\star}⟨00|00⟩+b^{\star}⟨01|00⟩+c^{\star}⟨10|00⟩+d^{\star}⟨11|00⟩)(a⟨00|00⟩+b⟨00|01⟩+c⟨00|10⟩+b⟨00|11⟩)\\
# &=a^{\star}a\\
# &=|a|^2
# \end{align*}
# $$
#
# Similarly, the probability of outcome 01 is $p(01)=|b|^2$, of 10 is $p(10)=|c|^2$, and of 11 is $p(11)=|d|^2$. In the four cases, the post-measurement states are:
#
# $$
# \begin{align*}
# \frac{M_{00}|\psi⟩}{|a|}=\frac{a}{|a|}|00⟩\\
# \frac{M_{01}|\psi⟩}{|b|}=\frac{b}{|b|}|01⟩\\
# \frac{M_{10}|\psi⟩}{|c|}=\frac{c}{|c|}|10⟩\\
# \frac{M_{11}|\psi⟩}{|d|}=\frac{d}{|d|}|11⟩\\
# \end{align*}
# $$
#
# #### Measuring a Single Qubit of the System
#
# If we measure the first qubit of a two-qubit state, the measurement operators are $M_0=|0⟩⟨0|\otimes I$ and $M_1=|1⟩⟨1|\otimes I$. Each operator is Hermitian, i.e. $M_0^\dagger=M_0,M_1^\dagger=M_1$, and $M^2_0=M_0,M^2_1=M_1$, so the completeness relation holds:
#
# $$
# I=M^\dagger_0M_0+M^\dagger_1M_1=M_0+M_1
# $$
#
# Suppose the measured state is $|\psi⟩=a|00⟩+b|01⟩+c|10⟩+d|11⟩$; then, measuring the first qubit, the probability of obtaining outcome 0 is:
#
# $$
# \begin{align*}
# p(0)&=⟨\psi|M^\dagger_0M_0|\psi⟩\\
# &=⟨\psi|M_0|\psi⟩\\
# &=⟨\psi|(|0⟩⟨0|\otimes I)|\psi⟩\\
# &=(⟨00|a^{\star}+⟨01|b^{\star}+⟨10|c^{\star}+⟨11|d^{\star})|(|0⟩⟨0|\otimes I)|(a|00⟩+b|01⟩+c|10⟩+d|11⟩)\\
# &=(⟨00|a^{\star}+⟨01|b^{\star}+⟨10|c^{\star}+⟨11|d^{\star})|(a|00⟩+b|01⟩)\\
# &=a^{\star}a+b^{\star}b\\
# &=|a|^2+|b|^2
# \end{align*}
# $$
#
# Similarly, the probability of outcome 1 is $p(1)=|c|^2+|d|^2$. In the two cases, the post-measurement states are:
#
# $$
# \begin{align*}
# \frac{M_0|\psi⟩}{\sqrt{|a|^2+|b|^2}}=\frac{a}{\sqrt{|a|^2+|b|^2}}|00⟩+\frac{b}{\sqrt{|a|^2+|b|^2}}|01⟩\\
# \frac{M_1|\psi⟩}{\sqrt{|c|^2+|d|^2}}=\frac{c}{\sqrt{|c|^2+|d|^2}}|10⟩+\frac{d}{\sqrt{|c|^2+|d|^2}}|11⟩\\
# \end{align*}
# $$
#
# From studying computational-basis measurement, we can see intuitively that measuring one qubit of a multi-qubit state essentially projects the state into one of two subspaces. Linear algebra gives us a concise way to distinguish these two orthogonal subspaces: they can be described by a matrix with exactly two distinct eigenvalues.
#
# ### Computational-Basis Measurement in MindQuantum
#
# Next we use MindQuantum to build a quantum circuit containing measurement operations and observe the results. First, import the modules this tutorial depends on.
import numpy as np # import numpy and abbreviate it as np
from mindquantum.core import X, H # import the quantum gates H, X
from mindquantum.simulator import Simulator # import the Simulator class from mindquantum.simulator
from mindquantum.core import Circuit # import the Circuit module, used to build quantum circuits
from mindquantum import Measure # import the measurement gate
# Notes:
#
# (1) numpy is a powerful Python library for computing on multidimensional arrays, supporting a large number of array and matrix operations as well as a rich set of mathematical functions;
#
# (2) mindquantum is a hybrid quantum-classical computing framework supporting the training and inference of a variety of quantum neural networks;
#
# (3) the quantum gates used in a circuit are imported from the mindquantum.core module;
#
# (4) the quantum simulator used to run circuits is imported from the mindquantum.simulator module;
#
# (5) the Circuit class used to build quantum circuits is imported from the mindquantum.core module;
#
# (6) the Measure operation used to measure circuits is imported from mindquantum.
#
# We now build a circuit that prepares the two-qubit uniform superposition $|\psi⟩=\frac{\sqrt{2}(|00⟩+|11⟩)}{2}$ and show the results of a computational-basis measurement on all qubits, and of one on qubit 0 only.
#
# #### Measuring All Qubits with MindQuantum
#
# Before demonstrating with code, let us first compute the theoretical values.
#
# Measuring all qubits of $|\psi⟩=\frac{\sqrt{2}(|00⟩+|11⟩)}{2}$ in the computational basis:
#
# $$
# \begin{align*}
# p(00)&=|a|^2=(\frac{\sqrt{2}}{{2}})^2=\frac{1}{2}\\
# p(01)&=|b|^2=0^2=0\\
# p(10)&=|c|^2=0^2=0\\
# p(11)&=|d|^2=(\frac{\sqrt{2}}{{2}})^2=\frac{1}{2}\\
# \end{align*}
# $$
#
# There are only two possible outcomes, 00 and 11, each with probability $\frac{1}{2}$. The post-measurement states are:
#
# $$
# \begin{align*}
# \frac{a}{|a|}|00⟩=|00⟩\\
# \frac{d}{|d|}|11⟩=|11⟩\\
# \end{align*}
# $$
#
# Let us build the circuit that prepares $|\psi⟩=\frac{\sqrt{2}(|00⟩+|11⟩)}{2}$ and measures all qubits:
circ_all = Circuit() # initialize the quantum circuit
circ_all += H.on(0) # apply an H gate to qubit 0
circ_all += X.on(1, 0) # apply an X gate to qubit 1, controlled by qubit 0
circ_all += Measure('q0').on(0) # measure qubit 0 and name the measurement 'q0'
circ_all += Measure('q1').on(1) # measure qubit 1 and name the measurement 'q1'
circ_all.svg() # draw the circuit as an SVG image
sim = Simulator('projectq', 2) # declare a 2-qubit projectq simulator
sim.apply_circuit(circ_all).svg() # run the circuit on the simulator
# We obtained the measurement result '00'; the post-measurement state has collapsed to:
print(sim.get_qs(True))
# The state has collapsed to $1|00⟩$, matching the theoretical value.
#
# If we measure a few more times, we find the result can also be '11':
sim.reset() # reset the simulator
sim.apply_circuit(circ_all).svg() # run the circuit on the simulator
# Printing the current quantum state, we see it has collapsed to the corresponding $|11⟩$:
print(sim.get_qs(True))
# We observe that the result is sometimes '00' and sometimes '11', as theory predicts, but single shots cannot tell us whether 00 and 11 occur with equal probability. We would like to measure many times and use the frequencies of the different outcomes to check the expected probability distribution. For this we use the circuit sampling feature:
sim.reset()
result = sim.sampling(circ_all, shots=1000) # sample the circuit defined above 1000 times
result.svg()
# In 1000 samples, '00' appeared 499 times and '11' appeared 501 times, consistent with the expected distribution; the small deviation is statistical sampling error. Careful readers may recall that the [quantum simulator tutorial](https://gitee.com/buyulin/mindquantum/blob/master/tutorials/quantum_simulator.ipynb) already showed this circuit's sampling result without explaining why it is distributed this way; having now studied computational-basis measurement, the distribution should be much clearer.
#
# #### Measuring a Single Qubit with MindQuantum
#
# Likewise, before demonstrating with code, let us first compute the theoretical values.
#
# Measuring qubit 0 of $|\psi⟩=\frac{\sqrt{2}(|00⟩+|11⟩)}{2}$ in the computational basis:
#
# $$
# \begin{align*}
# p(0)=|a|^2+|b|^2=(\frac{\sqrt{2}}{{2}})^2=\frac{1}{2}\\
# p(1)=|c|^2+|d|^2=(\frac{\sqrt{2}}{{2}})^2=\frac{1}{2}\\
# \end{align*}
# $$
#
# There are two possible outcomes, 0 and 1, each with probability $\frac{1}{2}$. The post-measurement states are:
#
# $$
# \begin{align*}
# \frac{a}{\sqrt{|a|^2+|b|^2}}|00⟩+\frac{b}{\sqrt{|a|^2+|b|^2}}|01⟩=|00⟩\\
# \frac{c}{\sqrt{|c|^2+|d|^2}}|10⟩+\frac{d}{\sqrt{|c|^2+|d|^2}}|11⟩=|11⟩\\
# \end{align*}
# $$
#
# Let us build the circuit that prepares $|\psi⟩=\frac{\sqrt{2}(|00⟩+|11⟩)}{2}$ and measures qubit 0:
circ_partial = Circuit() # initialize the quantum circuit
circ_partial += H.on(0) # apply an H gate to qubit 0
circ_partial += X.on(1, 0) # apply an X gate to qubit 1, controlled by qubit 0
circ_partial += Measure('q0').on(0) # measure qubit 0 and name the measurement 'q0'
circ_partial.svg() # draw the circuit as an SVG image
sim.reset() # reset the simulator
sim.apply_circuit(circ_partial).svg() # run the circuit on the simulator
# We obtained the measurement result '0'; the post-measurement state has collapsed to:
print(sim.get_qs(True))
# The state has collapsed to $1|00⟩$, matching the theoretical value.
#
# Likewise, repeated measurements can also yield '1'; we omit that demonstration and instead sample the circuit 1000 times:
sim.reset()
result = sim.sampling(circ_partial, shots=1000) # sample the circuit defined above 1000 times
result.svg()
# In 1000 samples, '0' appeared 499 times and '1' appeared 501 times, consistent with the expected distribution up to statistical sampling error.
#
# This completes our study of computational-basis measurement. We now move on to another kind of measurement: projective measurement.
#
# ## Projective Measurement
#
# A projective measurement is described by an observable, a Hermitian operator $M$ on the state space of the observed system ($M=M^{\dagger}$), which has a spectral decomposition:
#
# $$
# M=\Sigma_{m}mP_m
# $$
#
# Here $P_m$ is the projector onto the eigenspace of $M$ with eigenvalue $m$; the possible results of the measurement correspond to the eigenvalues $m$. When measuring the state $|\psi⟩$, the probability of obtaining result $m$ is
#
# $$
# p(m)=⟨\psi|P_m|\psi⟩
# $$
#
# and immediately after the measurement the state of the system is:
#
# $$
# \frac{P_m|\psi⟩}{\sqrt{p(m)}}
# $$
#
# Intuitively, performing a projective measurement of $M$ on the state $|\psi⟩$ projects $|\psi⟩$ onto the eigenspaces of $M$: with probability $p_m$ it lands in the subspace $V_{m}$, and the measurement result is the corresponding eigenvalue $m$.
#
# An important feature of projective measurements is that the expectation value $E(M)$ is easy to compute:
#
# $$
# \begin{align*}
# E(M) &=\Sigma_i \lambda_i p_i\\
# &=\Sigma_i \lambda_i⟨\psi|P_i|\psi⟩\\
# &=⟨\psi|(\Sigma_i\lambda_i P_i)|\psi⟩\\
# &=⟨\psi|M|\psi⟩
# \end{align*}
# $$
#
# Projective measurement can be seen as a special case of general measurement: when the measurement operators, besides satisfying the completeness relation $\Sigma_mM_m^\dagger M_m=I$, are also orthogonal projectors, i.e. each $M_m$ is Hermitian and
#
# $$
# M_mM_{m'}=\delta_{mm'}M_m
# $$
#
# then with these additional constraints the general measurement reduces to a projective measurement.
#
# ## Pauli Measurement
#
# Finally we study Pauli measurements: projective measurements in which the observable $M$ is chosen to be a Pauli operator. Taking the Pauli-Z measurement as an example, consider the Z operator:
#
# $$
# Z=
# \left(
# \begin{array}{l}
# 1&0\\
# 0&-1
# \end{array}
# \right)
# $$
#
# Z satisfies $Z=Z^\dagger$, i.e. Z is Hermitian. Z has two eigenvalues, +1 and -1, with eigenvectors $|0⟩$ and $|1⟩$ respectively. Its spectral decomposition is therefore:
#
# $$
# Z=\left(
# \begin{array}{l}
# 1&0\\
# 0&-1
# \end{array}
# \right)=1\times|0⟩⟨0|+(-1)\times|1⟩⟨1|
# $$
#
# When we perform a projective measurement with Z, a result of +1 means the qubit has been projected into the +1 eigenspace $V_{+1}$ of Z, i.e. the measured state has been projected onto $|0⟩$; similarly, a result of -1 means it has been projected into the -1 eigenspace $V_{-1}$, i.e. onto $|1⟩$. This is the Pauli-Z measurement.
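# As a quick numerical aside (an illustrative sketch, independent of MindQuantum), the eigenvalues and spectral decomposition of Z can be recovered with NumPy:

```python
import numpy as np

Z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Eigendecomposition of the Hermitian matrix Z: eigenvalues -1 and +1,
# with eigenvectors |1> and |0> respectively
vals, vecs = np.linalg.eigh(Z)
print(vals)  # [-1.  1.]

# Rebuild Z from its spectral decomposition  sum_m m |m><m|
Z_rebuilt = sum(val * np.outer(vec, vec) for val, vec in zip(vals, vecs.T))
print(np.allclose(Z, Z_rebuilt))  # True
```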
#
# MindQuantum provides a function to compute the expectation value of a projective measurement for a given observable H:
#
# `get_expectation(hamiltonian)` computes the expectation value of the simulator's current quantum state with respect to an observable: $E=⟨\psi|H|\psi⟩$. **This operation does not change the quantum state.**
#
# For example, to apply a Pauli-Z measurement to qubit q1 of a system in the state $\frac{\sqrt{2}}{2}|00⟩+\frac{\sqrt{2}}{2}|11⟩$, we first set up the simulator:
sim = Simulator('projectq', 2) # declare a 2-qubit projectq simulator
sim.set_qs(np.array([2**0.5 / 2, 0, 0, 2**0.5 / 2])) # set the simulator state
print(sim.get_qs())
# Then we construct the Hamiltonian hams corresponding to a Pauli-Z measurement on q1:
# +
from mindquantum import Hamiltonian # import the Hamiltonian definition module
from mindquantum.core.operators import QubitOperator # import the sparse-operator definition module
hams = Hamiltonian(QubitOperator('Z1')) # build the Hamiltonian for a Pauli-Z measurement on q1
# -
# To better understand the Pauli-Z measurement, let us first compute by hand the expectation value of the current state under a Pauli-Z measurement on q1, and derive the probabilities of obtaining +1 and -1:
#
# $$
# \begin{align*}
# E&=⟨\psi|H|\psi⟩\\&=
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}& 0& 0& \frac{\sqrt{2}}{2}
# \end{array}
# \right) \times
# (Z \otimes I) \times
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}\\
# 0\\
# 0\\
# \frac{\sqrt{2}}{2}
# \end{array}
# \right) \\&=
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}& 0& 0& \frac{\sqrt{2}}{2}
# \end{array}
# \right) \times
# \left(
# \begin{array}{l}
# 1&0\\
# 0&-1\\
# \end{array}
# \right) \otimes
# \left(
# \begin{array}{l}
# 1&0\\
# 0&1\\
# \end{array}
# \right)
# \times
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}\\
# 0\\
# 0\\
# \frac{\sqrt{2}}{2}
# \end{array}
# \right) \\&=
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}& 0& 0& \frac{\sqrt{2}}{2}
# \end{array}
# \right) \times
# \left(
# \begin{array}{l}
# 1&0&0&0\\
# 0&1&0&0\\
# 0&0&-1&0\\
# 0&0&0&-1
# \end{array}
# \right)
# \times
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}\\
# 0\\
# 0\\
# \frac{\sqrt{2}}{2}
# \end{array}
# \right) \\&=
# 0\\
# &=1\times p(1)+(-1)\times p(-1)\\
# &=1\times p(1)+(-1)\times (1-p(1))\\
# &=2p(1)-1\\
# \Longrightarrow&p(1)=p(-1)=0.5
# \end{align*}
# $$
#
# This means the theoretical expectation value is 0, so +1 and -1 each occur with probability 50%. Let us verify this with MindQuantum's `get_expectation()`:
sim.get_expectation(hams) # compute the expectation value of the current state with respect to hams
# The hand calculation and `get_expectation(hamiltonian)` give the same result, as expected.
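# As an aside, the same expectation value can be cross-checked in plain NumPy, using the same $Z \otimes I$ convention as the hand calculation above (an illustrative sketch, independent of MindQuantum):

```python
import numpy as np

# The state (sqrt(2)/2)|00> + (sqrt(2)/2)|11>
psi = np.array([2**0.5 / 2, 0, 0, 2**0.5 / 2])

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I = np.eye(2)

# E = <psi| (Z kron I) |psi>, matching the derivation above
E = psi.conj() @ np.kron(Z, I) @ psi
print(E)  # 0.0
```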
#
# We can also apply Pauli-Z measurements to both q0 and q1 of the system in the state $\frac{\sqrt{2}}{2}|00⟩+\frac{\sqrt{2}}{2}|11⟩$. Similarly, we construct the corresponding Hamiltonian hams2:
hams2 = Hamiltonian(QubitOperator('Z0') +
QubitOperator('Z1')) # build the Hamiltonian for Pauli-Z measurements on q0 and q1
# We can likewise compute by hand the expectation value of the current state under Pauli-Z measurements on q0 and q1:
#
# $$
# \begin{align*}
# E&=⟨\psi|H|\psi⟩\\&=
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}& 0& 0& \frac{\sqrt{2}}{2}
# \end{array}
# \right) \times
# (Z \otimes I) \times
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}\\
# 0\\
# 0\\
# \frac{\sqrt{2}}{2}
# \end{array}
# \right)
# +
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}& 0& 0& \frac{\sqrt{2}}{2}
# \end{array}
# \right) \times
# (I \otimes Z) \times
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}\\
# 0\\
# 0\\
# \frac{\sqrt{2}}{2}
# \end{array}
# \right) \\&=
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}& 0& 0& \frac{\sqrt{2}}{2}
# \end{array}
# \right) \times
# \left(
# \begin{array}{l}
# 1&0\\
# 0&-1\\
# \end{array}
# \right) \otimes
# \left(
# \begin{array}{l}
# 1&0\\
# 0&1\\
# \end{array}
# \right)
# \times
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}\\
# 0\\
# 0\\
# \frac{\sqrt{2}}{2}
# \end{array}
# \right)
# +
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}& 0& 0& \frac{\sqrt{2}}{2}
# \end{array}
# \right) \times
# \left(
# \begin{array}{l}
# 1&0\\
# 0&1\\
# \end{array}
# \right) \otimes
# \left(
# \begin{array}{l}
# 1&0\\
# 0&-1\\
# \end{array}
# \right)
# \times
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}\\
# 0\\
# 0\\
# \frac{\sqrt{2}}{2}
# \end{array}
# \right) \\&=
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}& 0& 0& \frac{\sqrt{2}}{2}
# \end{array}
# \right) \times
# \left(
# \begin{array}{l}
# 1&0&0&0\\
# 0&1&0&0\\
# 0&0&-1&0\\
# 0&0&0&-1
# \end{array}
# \right)
# \times
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}\\
# 0\\
# 0\\
# \frac{\sqrt{2}}{2}
# \end{array}
# \right)
# +
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}& 0& 0& \frac{\sqrt{2}}{2}
# \end{array}
# \right) \times
# \left(
# \begin{array}{l}
# 1&0&0&0\\
# 0&-1&0&0\\
# 0&0&1&0\\
# 0&0&0&-1
# \end{array}
# \right)
# \times
# \left(
# \begin{array}{l}
# \frac{\sqrt{2}}{2}\\
# 0\\
# 0\\
# \frac{\sqrt{2}}{2}
# \end{array}
# \right) \\&=
# 0+0 \\
# &=0
# \end{align*}
# $$
sim.set_qs(np.array([2**0.5 / 2, 0, 0, 2**0.5 / 2])) # set the simulator state
sim.get_expectation(hams2) # compute the expectation value of the current state with respect to hams2
# This operation does not change the quantum state; let us inspect the current state:
sim.get_qs()
# The state is still the originally set $\frac{\sqrt{2}}{2}|00⟩+\frac{\sqrt{2}}{2}|11⟩$.
#
# We have now studied one of the key operations in quantum computing, measurement, verified our theoretical results by measuring quantum circuits in MindQuantum, and displayed the outcomes with different visualization tools.
#
# To learn about more advanced circuit operations in MindQuantum and how to build and train hybrid quantum-classical neural networks, see the documentation for `get_expectation_with_grad()` and `apply_hamiltonian()`.
#
# For more MindQuantum APIs, see [https://mindspore.cn/mindquantum/](https://mindspore.cn/mindquantum/).
| tutorials/quantum_measurement.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Create matrix tables
# In this notebook, we prepare matrix tables for use in analysis.
# # Setup
from datetime import datetime
import hail as hl
import os
import time
# ## Retrieve exome region file
#
# For details, see https://biobank.ndph.ox.ac.uk/ukb/refer.cgi?id=3803.
# !wget -nd biobank.ndph.ox.ac.uk/ukb/ukb/auxdata/xgen_plus_spikein.GRCh38.bed
# !gsutil cp xgen_plus_spikein.GRCh38.bed ${WORKSPACE_BUCKET}/data/ukb/exomes/
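# BED files are plain tab-separated text with at least chrom/start/end columns, using 0-based, half-open intervals. As an illustration only (the notebook itself relies on `hl.import_bed` below), a minimal pure-Python sketch of parsing such a file might look like:

```python
# Parse BED-formatted lines into (chrom, start, end) tuples.
# BED coordinates are 0-based and half-open: [start, end).
def parse_bed(lines):
    regions = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith(('track', 'browser', '#')):
            continue  # skip headers and comments
        chrom, start, end = line.split('\t')[:3]
        regions.append((chrom, int(start), int(end)))
    return regions

example = ["chr1\t100\t200", "chr2\t0\t50"]
print(parse_bed(example))  # [('chr1', 100, 200), ('chr2', 0, 50)]
```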
# ## Define constants
# + tags=["parameters"]
# Papermill parameters. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html
# Inputs
AOU_VCFS = 'gs://fc-aou-preprod-datasets-controlled/5/wgs/vcf/merged/alpha2/*.vcf.gz'
UKB_VCFS = 'gs://fc-7130e767-a885-4678-95ed-7c966c79e2d0/200K/pvcf/ukb23156_*.vcf.gz'
# This file is from https://biobank.ndph.ox.ac.uk/ukb/refer.cgi?id=3803.
EXOME_REGIONS = f'{os.getenv("WORKSPACE_BUCKET")}/data/ukb/exomes/xgen_plus_spikein.GRCh38.bed'
# -
INPUT_VCFS = AOU_VCFS
# +
RESULT_BUCKET = os.getenv("WORKSPACE_BUCKET")
DATESTAMP = time.strftime('%Y%m%d')
TIMESTAMP = time.strftime('%Y%m%d_%H%M%S')
# WORK_DIR = !pwd
# Output files
OUTPUT_MT = f'{os.getenv("WORKSPACE_BUCKET")}/data/aou/alpha2/cohort.mt'
HAIL_LOG = f'{WORK_DIR[0]}/hail-make-mt-{TIMESTAMP}.log'
HAIL_LOG_DIR_FOR_PROVENANCE = f'{os.getenv("WORKSPACE_BUCKET")}/hail-logs/{DATESTAMP}/'
# -
# ## Check access
# !gsutil ls {AOU_VCFS} | head
# !gsutil ls {UKB_VCFS} | head
# # Start Hail
# +
# See also https://towardsdatascience.com/fetch-failed-exception-in-apache-spark-decrypting-the-most-common-causes-b8dff21075c
# See https://spark.apache.org/docs/2.4.7/configuration.html
EXTRA_SPARK_CONFIG = {
# If set to "true", performs speculative execution of tasks. This means if one or more tasks are running
# slowly in a stage, they will be re-launched.
'spark.speculation': 'true', # Default is false.
# Fraction of tasks which must be complete before speculation is enabled for a particular stage.
'spark.speculation.quantile': '0.95', # Default is 0.75
# Default timeout for all network interactions. This config will be used in place of
# spark.core.connection.ack.wait.timeout, spark.storage.blockManagerSlaveTimeoutMs,
# spark.shuffle.io.connectionTimeout, spark.rpc.askTimeout or spark.rpc.lookupTimeout if they are not configured.
'spark.network.timeout': '180s', # Default is 120s
# (Netty only) Fetches that fail due to IO-related exceptions are automatically retried if this is set to a
# non-zero value. This retry logic helps stabilize large shuffles in the face of long GC pauses or transient
# network connectivity issues.
'spark.shuffle.io.maxRetries': '10', # Default is 3
# (Netty only) How long to wait between retries of fetches. The maximum delay caused by retrying is 15 seconds
# by default, calculated as maxRetries * retryWait.
'spark.shuffle.io.retryWait': '15s', # Default is 5s
# Number of failures of any particular task before giving up on the job. The total number of failures spread
# across different tasks will not cause the job to fail; a particular task has to fail this number of attempts.
# Should be greater than or equal to 1. Number of allowed retries = this value - 1.
'spark.task.maxFailures': '10', # Default is 4.
# Number of consecutive stage attempts allowed before a stage is aborted.
'spark.stage.maxConsecutiveAttempts': '10' # Default is 4.
}
# -
hl.init(spark_conf=EXTRA_SPARK_CONFIG,
min_block_size=50,
default_reference='GRCh38',
log=HAIL_LOG)
# Check the configuration.
sc = hl.spark_context()
config = sc._conf.getAll()
config.sort()
config
# # Load exome capture regions
ukb_exome_capture_regions = hl.import_bed(EXOME_REGIONS)
ukb_exome_capture_regions.describe()
ukb_exome_capture_regions.show(5)
# # Create matrix table from VCFs
mt = hl.import_vcf(INPUT_VCFS,
array_elements_required=False,
force_bgz=True)
mt.describe()
mt = mt.filter_rows(
hl.is_defined(ukb_exome_capture_regions[mt.locus]))
start = datetime.now()
print(start)
mt.write(OUTPUT_MT, overwrite=True)
end = datetime.now()
print(end)
print(end - start)
# !gsutil ls {OUTPUT_MT}
# # Provenance
# Copy the Hail log to the workspace bucket so that we can retain it.
# !gzip --keep {HAIL_LOG}
# !gsutil cp {HAIL_LOG}.gz {HAIL_LOG_DIR_FOR_PROVENANCE}
print(datetime.now())
# !pip3 freeze
| aou_workbench_pooled_analyses/matrix_table_creation/create_matrix_tables.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # fMRI-06 Volume Pipeline
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
sns.set_context('notebook', font_scale=1.5)
# %matplotlib inline
# In this notebook, we demonstrate a full analysis pipeline for volumetric fMRI data.
# ## Step 1: Generate Design Matrix
#
# In the first step, we construct the fMRI task design matrix. Here we analyze a simple visual checkerboard experiment: six blocks of a rotating visual checkerboard (20 s duration) were presented to one participant. The total run was 250 volumes and the repetition time (TR) was 1 s.
# +
from pandas import read_csv
from fmritools.design import design_matrix
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Define parameters.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Define task metadata.
n_acq = 250
tr = 1
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Define experiment events.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Load events.
events = read_csv('sub-01_task-visualcontrol_desc-events.tsv', sep='\t')
## Limit only to checkerboards.
events = events.query('event=="Checkerboard"')
events.event = 1
## Compute offsets (onset + duration).
events['offset'] = events['onset'].values + events['duration'].values
## Construct events matrix.
events = events[['onset','offset','event']].values
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Define design matrix.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
times, X, boxcars = design_matrix(tr, n_acq, events, return_boxcars=True)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Visualize design matrix.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Initialize canvas.
fig, ax = plt.subplots(1,1,figsize=(12,3))
## Plot.
ax.plot(times, boxcars)
ax.plot(times, X)
ax.set(xlim=(times.min(), times.max()), xlabel='Time (s)')
sns.despine()
plt.tight_layout()
# -
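For readers without `fmritools` installed, here is a minimal sketch (an assumption about what `design_matrix` computes, not the library's actual implementation) of a task regressor: a stimulus boxcar convolved with a canonical double-gamma HRF. The block onsets below are made up for illustration, not taken from the events TSV.

```python
import numpy as np
from scipy.stats import gamma

# Illustrative timing: six 20 s blocks in a 250-volume run at TR = 1 s.
tr, n_acq = 1.0, 250
times = np.arange(n_acq) * tr

boxcar = np.zeros(n_acq)
for onset in range(10, 250, 40):     # hypothetical onsets every 40 s
    boxcar[onset:onset + 20] = 1

# Canonical double-gamma HRF (SPM-like shape parameters), normalized to sum 1.
t = np.arange(0, 32, tr)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()

# Convolve and truncate to the number of acquisitions.
X_sketch = np.convolve(boxcar, hrf)[:n_acq]
```

The convolution delays and smooths each block, which is why the orange regressor in the plot above lags the boxcars.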
# ## Step 2: Prepare Nuisance Regressors
#
# In this next section, we prepare all of the nuisance regressors. These are variables not of experimental interest, but included so as to reduce noise.
# ### Motion regressors
#
# We prepare the motion regressors from the 6 degrees of observed motion (X/Y/Z-translation, pitch/yaw/roll-rotation). The motion regressors are lowpass filtered to remove high-frequency artifacts and then passed through PCA for dimensionality reduction.
# +
from pandas import read_csv
from nilearn.signal import clean
from sklearn.decomposition import PCA
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Define parameters.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Define filter level.
low_pass = 1 / 100.
## Define PCA threshold.
pca_threshold = 0.8 # Include regressors that explain 80% of motion variance.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Prepare motion regressors.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Load fmriprep confound regressors.
confounds = read_csv('sub-01_task-visualcontrol_desc-confounds_regressors.tsv', sep='\t')
## Extract motion regressors.
cols = ['X', 'Y', 'Z', 'RotX', 'RotY', 'RotZ']
motion = confounds[cols].values
## Filter regressors.
motion = clean(motion, low_pass=low_pass, t_r=tr)
## Perform PCA.
pca = PCA(n_components=6)
motion = pca.fit_transform(motion)
## Take only the number of components explaining 80% of the variance.
cumulative_variance = np.cumsum(pca.explained_variance_ratio_)
n_components = np.argmax(cumulative_variance >= pca_threshold) + 1
motion = motion[:,:n_components]
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Visualize motion regressors.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Initialize canvas.
fig, ax = plt.subplots(1,1,figsize=(8,3))
## Plot.
ax.plot(times, motion)
ax.set(xlim=(times.min(), times.max()), xlabel='Time (s)',
title='%s components (%0.1f%% var)' %(n_components, cumulative_variance[n_components-1]*100))
sns.despine()
plt.tight_layout()
# -
# ### Motion scrubbers
#
# We prepare the motion scrubbers from the framewise displacement estimates. Motion scrubbers absorb the variance from volumes "infected" with large head motion.
# +
from pandas import read_csv
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Define parameters.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Define FD threshold.
fd_threshold = 0.5
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Prepare motion scrubbers.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Load fmriprep confound regressors.
confounds = read_csv('sub-01_task-visualcontrol_desc-confounds_regressors.tsv', sep='\t')
## Extract framewise displacement.
fd = confounds['FramewiseDisplacement'].values
fd[np.isnan(fd)] = 0
## Identify infected volumes.
bad_vols = np.argwhere(fd > fd_threshold).flatten()  # Flatten 2-D argwhere output to 1-D indices.
print('%s bad volumes detected.' %bad_vols.size)
## Construct scrubbers.
scrubbers = np.zeros((fd.size, bad_vols.size))
scrubbers[bad_vols, np.arange(bad_vols.size)] = 1
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Visualize framewise displacement.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Plot.
fig, ax = plt.subplots(1,1,figsize=(6,3))
ax.plot(fd)
ax.hlines(fd_threshold, 0, fd.size, linestyle='--')
ax.set(xlim=(0, fd.size), xlabel='Volumes', ylabel='FD')
sns.despine()
# -
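As a toy check (synthetic FD values, not this subject's data), each scrubber should be a one-hot column. Note that `np.argwhere` returns a 2-D array of indices, so it is flattened before being used for the column assignment:

```python
import numpy as np

# Five volumes; volumes 1 and 3 exceed the 0.5 mm threshold and each
# receives its own one-hot scrubber column.
fd_toy = np.array([0.1, 0.9, 0.2, 0.7, 0.1])
bad = np.argwhere(fd_toy > 0.5).flatten()  # flatten 2-D argwhere output
scrub_toy = np.zeros((fd_toy.size, bad.size))
scrub_toy[bad, np.arange(bad.size)] = 1
```

Each column contains a single 1, so in the regression it absorbs exactly one volume's variance.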
# ### Prepare physiological nuisance regressors
#
# We prepare the physiological nuisance regressors from the anatomical CompCor timeseries automatically generated by fmriprep. The physiological regressors are lowpass filtered to remove high-frequency artifacts and then passed through PCA for dimensionality reduction.
# +
from pandas import read_csv
from nilearn.signal import clean
from sklearn.decomposition import PCA
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Define parameters.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Define filter level.
low_pass = 1 / 100.
## Define PCA threshold.
pca_threshold = 0.8 # Include components that explain 80% of the physiological variance.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Prepare physiological regressors.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Load fmriprep confound regressors.
confounds = read_csv('sub-01_task-visualcontrol_desc-confounds_regressors.tsv', sep='\t')
## Extract anatomical compcor signals.
compcor = confounds.filter(regex='aCompCor').values
## Filter regressors.
compcor = clean(compcor, low_pass=low_pass, t_r=tr)
## Perform PCA.
pca = PCA(n_components=compcor.shape[-1])
compcor = pca.fit_transform(compcor)
## Take only the number of components explaining 80% of the variance.
cumulative_variance = np.cumsum(pca.explained_variance_ratio_)
n_components = np.argmax(cumulative_variance >= pca_threshold) + 1
compcor = compcor[:,:n_components]
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Visualize CompCor.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Initialize canvas.
fig, ax = plt.subplots(1,1,figsize=(8,3))
## Plot.
ax.plot(times, compcor)
ax.set(xlim=(times.min(), times.max()), xlabel='Time (s)',
title='%s components (%0.1f%% var)' %(n_components, cumulative_variance[n_components-1]*100))
sns.despine()
plt.tight_layout()
# -
# ### Assemble nuisance regressors
#
# In this final step, we column stack our nuisance regressors into one nuisance matrix. We also include an intercept term.
# +
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Assemble nuisance regressors.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Define intercept.
intercept = np.ones(n_acq)
## Stack nuisance regressors.
Z = np.column_stack([intercept, motion, scrubbers, compcor])
# -
# ## Step 3 (Recommended): Check Collinearity
#
# In this recommended step, we check the collinearity of our full design matrix. Collinearity arises when some of the independent variables are highly correlated; it tends to create numerical instability in the regression and inflate the variance of the estimated coefficients. To quantify collinearity, we rely on the [variance inflation factor](https://en.wikipedia.org/wiki/Variance_inflation_factor) (VIF). As a simple rule of thumb, VIF scores above 5 suggest problematic collinearity, while scores below 5 are usually acceptable.
# +
from statsmodels.stats.outliers_influence import variance_inflation_factor
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Assemble regressors.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Assemble all regressors (design + nuisance).
XZ = np.column_stack([X,Z])
## Check variance inflation factor.
vif = [variance_inflation_factor(XZ, i) for i in range(XZ.shape[-1])]
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Visualize collinearity.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Initialize canvas.
fig, ax = plt.subplots(1,1,figsize=(6,3))
## Plot.
ax.bar(np.arange(len(vif)), vif)
ax.hlines(5,-0.5,len(vif)-0.5,linestyle='--')
ax.set(xlim=(-0.5,len(vif)-0.5), xticks=np.arange(len(vif)),
xticklabels=['R%s' %i for i in np.arange(len(vif))+1],
ylabel='VIF')
sns.despine()
plt.tight_layout()
# -
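The VIF of regressor $i$ equals $1 / (1 - R_i^2)$, where $R_i^2$ is obtained by regressing regressor $i$ on all the others. A toy demonstration of that formula with a deliberately near-duplicate column (synthetic data, not our actual design):

```python
import numpy as np

# Column 2 is nearly a copy of column 0, so both should show a large VIF.
rng = np.random.default_rng(0)
A = rng.normal(size=(250, 3))
A[:, 2] = A[:, 0] + 0.01 * rng.normal(size=250)

def vif_manual(M, i):
    # Regress column i on the remaining columns and apply VIF = 1 / (1 - R^2).
    y, others = M[:, i], np.delete(M, i, axis=1)
    beta, *_ = np.linalg.lstsq(others, y, rcond=None)
    r2 = 1 - ((y - others @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return 1.0 / (1.0 - r2)

vifs = [vif_manual(A, i) for i in range(3)]
```

The independent middle column stays near 1, while the two near-duplicates blow up well past the rule-of-thumb threshold of 5.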
# ## Step 4: Prepare fMRI Data
#
# In this next step, we prepare the fMRI data for regression analysis.
#
# **NOTE:** In this demo, we are only analyzing one brain slice. The steps below, however, generalize directly to whole-brain data.
# ### Load and mask data
#
# Here we load the functional data and the anatomical segmentation (aseg). We mask the functional data only to voxels inside the cortex. After masking, we reshape the data to [n_times, n_voxels].
# +
import nibabel as nib
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Load data.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Load functional data.
f = 'sub-01_task-visualcontrol_space-T1w_desc-preproc_bold.nii.gz'
func = nib.load(f).get_fdata()
print('Functional data dim:\t(X=%s, Y=%s, Z=%s, T=%s)' %func.shape)
## Load brainmask.
f = 'sub-01_task-visualcontrol_space-T1w_desc-aseg_dseg.nii.gz'
aseg = nib.load(f).get_fdata()
print('Brainmask dim:\t\t(X=%s, Y=%s, Z=%s)' %aseg.shape)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Mask data.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Store dimensions of anatomical image.
brain_shape = aseg.shape
## Store indices corresponding to L/R cortex.
indices = np.where(np.logical_or(aseg == 3, # Left cortex
aseg == 42 # Right cortex
))
## Apply brainmask.
raw = func[indices]
## Transpose data to shape (n_times, n_voxels)
raw = raw.T
print('Masked func dim:\t(T=%s, V=%s)' %raw.shape)
# -
# ### Filter BOLD data
#
# We apply a highpass filter to the data. A 1/100 Hz highpass filter is standard, but be careful not to filter out task-correlated signals.
# +
from nilearn.signal import clean
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Define parameters.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Define metadata.
high_pass = 1 / 100.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Filter / Convert to PSC.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Compute mean signal.
mu = raw.mean(axis=0)
## Apply highpass filter.
Y = clean(raw, detrend=True, standardize=False, high_pass=high_pass, t_r=tr)
## Convert to percent signal change.
Y = Y / mu * 100
# -
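A quick synthetic illustration of the percent signal change conversion: after mean removal (which `clean` performs here via detrending), dividing by the baseline mean and scaling by 100 expresses the signal as a percentage of baseline.

```python
import numpy as np

# A voxel oscillating +/-20 units around a baseline of 1000 (~2% fluctuation).
raw_toy = 1000 + 20 * np.sin(np.linspace(0, 8 * np.pi, 250))
mu_toy = raw_toy.mean()
psc_toy = (raw_toy - mu_toy) / mu_toy * 100
```

The resulting series is zero-mean and peaks near 2, i.e. a 2% signal change relative to baseline.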
# ## Step 5: fMRI First Level Analysis
#
# At last, the moment we've all been waiting for: fMRI regression analysis.
# ### Regression
#
# Using the ordinary least squares (OLS) function from `fmritools`, we regress our design matrix against the observed BOLD data (including nuisance regressors to reduce noise).
#
# Using the indices from the anatomical segmentation, we then extract the regression statistics ($\beta$-coefficients in percent signal change and corresponding t-statistics) and store them in new volume maps.
# +
from fmritools.stats import OLS
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Perform OLS regression.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Define and fit model.
fit = OLS(Y, X, Z).fit()
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Extract statistics.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Make percent signal change maps.
psc_map = np.zeros_like(aseg) # Preallocate space, same size as anatomical image.
psc_map[indices] = fit.coef # Store regression coefficients (PSC) in map.
## Make t-statistic maps.
t_map = np.zeros_like(aseg) # Preallocate space, same size as anatomical image.
t_map[indices] = fit.tvalues # Store t-statistics in map.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Visualize (before multiple comparisons).
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Initialize canvas.
fig, axes = plt.subplots(1,2,figsize=(12,5))
## Load BOLD reference.
f = 'sub-01_task-visualcontrol_space-T1w_desc-boldref.nii.gz'
boldref = nib.load(f).get_fdata()
boldref = boldref[:,::-1].T.squeeze()
## Make copies for visualization.
psc_viz = psc_map[:,::-1].T.squeeze().copy()
psc_viz = np.where(np.abs(psc_viz)>1, psc_viz, np.nan)
t_viz = t_map[:,::-1].T.squeeze().copy()
t_viz = np.where(np.abs(t_viz)>3, t_viz, np.nan)
## Plot percent signal change.
ax = sns.heatmap(boldref, cmap='binary_r', square=True, cbar=False,
xticklabels=[], yticklabels=[], ax=axes[0])
ax = sns.heatmap(psc_viz, center=0, vmin=-5, vmax=5, square=True,
xticklabels=[], yticklabels=[], ax=axes[0])
ax.set(title='Percent Signal Change')
## Plot t-statistics.
ax = sns.heatmap(boldref, cmap='binary_r', square=True, cbar=False,
xticklabels=[], yticklabels=[], ax=axes[1])
ax = sns.heatmap(t_viz, center=0, vmin=-20, vmax=20, square=True,
xticklabels=[], yticklabels=[], ax=axes[1])
ax.set(title='T-Statistics')
plt.tight_layout()
# -
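`fmritools.stats.OLS` is project-specific, so here is a hedged sketch (an assumption, not the library's actual code) of the underlying computation: stack the task and nuisance regressors, solve ordinary least squares for all voxels at once, and form a t-statistic for the task coefficient. The simulated data and dimensions below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_v = 250, 50                        # timepoints, voxels
X_toy = rng.normal(size=(n_t, 1))         # one task regressor
Z_toy = np.column_stack([np.ones(n_t), rng.normal(size=(n_t, 2))])
D = np.column_stack([X_toy, Z_toy])       # full design (task + nuisance)
B_true = np.zeros((D.shape[1], n_v))
B_true[0, :25] = 2.0                      # half the voxels respond to the task
Y_toy = D @ B_true + rng.normal(size=(n_t, n_v))

# OLS fit across all voxels simultaneously.
beta, *_ = np.linalg.lstsq(D, Y_toy, rcond=None)
resid = Y_toy - D @ beta
dof = n_t - D.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof   # per-voxel residual variance
se = np.sqrt(sigma2 * np.linalg.inv(D.T @ D)[0, 0])
t_task = beta[0] / se                     # t-statistic for the task regressor
```

The responding voxels come out with large t-values while the null voxels hover near zero, mirroring the contrast between active and inactive cortex in the maps above.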
# ### Multiple comparisons correction
#
# Next we perform multiple comparisons correction using the false discovery rate (FDR) procedure of Benjamini and Hochberg.
# +
from mne.stats import fdr_correction
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Perform FDR correction.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Extract p-values.
p_values = fit.pvalues
## Perform corrections.
_, p_values = fdr_correction(p_values, alpha=0.001)
## Make p-value maps.
p_map = np.ones_like(aseg, dtype=float) # Initialize to 1 so out-of-mask voxels are never significant.
p_map[indices] = p_values # Store FDR-corrected p-values in map.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Mask PSC / T-maps.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Mask PSC map.
psc_map_thresh = psc_map.copy()
psc_map_thresh[p_map > 0.05] = 0
## Mask T-map.
t_map_thresh = t_map.copy()
t_map_thresh[p_map > 0.05] = 0
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Visualize (after multiple comparisons).
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Initialize canvas.
fig, axes = plt.subplots(1,2,figsize=(12,5))
## Load BOLD reference.
f = 'sub-01_task-visualcontrol_space-T1w_desc-boldref.nii.gz'
boldref = nib.load(f).get_fdata()
boldref = boldref[:,::-1].T.squeeze()
## Make copies for visualization.
psc_viz = psc_map_thresh[:,::-1].T.squeeze().copy()
psc_viz = np.where(np.abs(psc_viz)>1, psc_viz, np.nan)
t_viz = t_map_thresh[:,::-1].T.squeeze().copy()
t_viz = np.where(np.abs(t_viz)>3, t_viz, np.nan)
## Plot percent signal change.
ax = sns.heatmap(boldref, cmap='binary_r', square=True, cbar=False,
xticklabels=[], yticklabels=[], ax=axes[0])
ax = sns.heatmap(psc_viz, center=0, vmin=-5, vmax=5, square=True,
xticklabels=[], yticklabels=[], ax=axes[0])
ax.set(title='Percent Signal Change')
## Plot t-statistics.
ax = sns.heatmap(boldref, cmap='binary_r', square=True, cbar=False,
xticklabels=[], yticklabels=[], ax=axes[1])
ax = sns.heatmap(t_viz, center=0, vmin=-20, vmax=20, square=True,
xticklabels=[], yticklabels=[], ax=axes[1])
ax.set(title='T-Statistics')
plt.tight_layout()
# -
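For intuition, here is a sketch of the Benjamini-Hochberg step-up procedure itself (mne's `fdr_correction` implements the same idea and additionally returns adjusted p-values): sort the p-values, compare the k-th smallest against `alpha * k / m`, and reject every hypothesis up to the largest k that passes.

```python
import numpy as np

def bh_reject(p, alpha=0.05):
    # Benjamini-Hochberg step-up: reject H_(1)..H_(k) for the largest k
    # with p_(k) <= alpha * k / m.
    p = np.asarray(p)
    order = np.argsort(p)
    ranked = p[order]
    thresh = alpha * np.arange(1, p.size + 1) / p.size
    below = ranked <= thresh
    reject = np.zeros(p.size, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True
    return reject

p_toy = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205])
rej = bh_reject(p_toy, alpha=0.05)
```

Note that 0.039 is rejected by an uncorrected 0.05 threshold but not by BH, which is the protection against false discoveries we want when testing thousands of voxels.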
# ## Step 6: Save Maps
#
# Finally, we can save our analysis maps for inspection in other fMRI software (e.g. Mango, Freeview).
# +
from nibabel import Nifti1Image
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
### Save maps.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
## Get affine.
f = 'sub-01_task-visualcontrol_space-T1w_desc-boldref.nii.gz'
affine = nib.load(f).affine
## PSC map.
f = 'sub-01_task-visualcontrol_space-T1w_psc.nii.gz'
obj = Nifti1Image(psc_map_thresh, affine)
nib.save(obj, f)
## T-map map.
f = 'sub-01_task-visualcontrol_space-T1w_tvalues.nii.gz'
obj = Nifti1Image(t_map_thresh, affine)
nib.save(obj, f)
| fmri-06/fmri-06-volume.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="yTeiUmSyL1re" colab_type="text"
# # Import Data
# + id="2v_eXja6LIPv" colab_type="code" outputId="bb3bcf3f-66f8-4dfa-e9f4-f2482f4ebc03" colab={"base_uri": "https://localhost:8080/", "height": 146}
import os
# !git clone https://github.com/CSSEGISandData/COVID-19.git
MAIN_FOLDER = "/content/COVID-19/csse_covid_19_data/csse_covid_19_time_series/"
CONFIRMED_PATH = os.path.join(MAIN_FOLDER, "time_series_19-covid-Confirmed.csv")
DEATHS_PATH = os.path.join(MAIN_FOLDER, "time_series_19-covid-Deaths.csv")
RECOVERED_PATH = os.path.join(MAIN_FOLDER, "time_series_19-covid-Recovered.csv")
# + id="c_uTg61TLdW7" colab_type="code" colab={}
import pandas as pd
df_confirmed = pd.read_csv(CONFIRMED_PATH)
df_deaths = pd.read_csv(DEATHS_PATH)
df_recovered = pd.read_csv(RECOVERED_PATH)
# + [markdown] id="m3-DkgX2L4EN" colab_type="text"
# # EDA
# + id="rxnBtUICL47A" colab_type="code" outputId="5c19e54f-7db7-4b92-a4c7-0ef670824f3c" colab={"base_uri": "https://localhost:8080/", "height": 1000}
display(df_confirmed.head())
display(df_confirmed.T.head())
display(df_confirmed.shape)
display(df_confirmed.columns)
display(df_confirmed.iloc[:10,:10].dtypes)
display(df_confirmed.describe(percentiles=[0.25,0.5,0.75,0.85,0.95,0.99]).T)
# + [markdown] id="3xMhwaOsMAMY" colab_type="text"
# # Cleaning Data
# + id="Gz8EH2bEL5W7" colab_type="code" outputId="34bf4e3b-048f-4b56-ac74-f751e160bc15" colab={"base_uri": "https://localhost:8080/", "height": 226}
df_confirmed = df_confirmed.drop(columns=['Lat', 'Long'])
df_confirmed["Province/State"] = df_confirmed["Province/State"].fillna(df_confirmed["Country/Region"])
display(df_confirmed.tail())
# + [markdown] id="gZlUsOqCMFO4" colab_type="text"
# ## Count by Country
# + id="1Hc_nCPYMEtZ" colab_type="code" outputId="1b978915-16bb-4691-fcb3-f175818799be" colab={"base_uri": "https://localhost:8080/", "height": 527}
df_bycountry = df_confirmed.groupby('Country/Region').sum()
df_bycountry.loc["Total"] = df_bycountry.sum(axis=0)
display(df_bycountry)
# + [markdown] id="6w4TcrkqMcmU" colab_type="text"
# ## Normalize
# + id="Pf6qkvaXMJrh" colab_type="code" outputId="15ebb4a1-14e9-4503-d853-55b230197f6e" colab={"base_uri": "https://localhost:8080/", "height": 527}
maximums = df_bycountry.iloc[:, -1]
df_bycountry_norm = df_bycountry.div(maximums.to_numpy(), axis=0)
display(df_bycountry_norm)
# + [markdown] id="6FwxL3egMtlc" colab_type="text"
# ## Plot countries
# + id="OUk87zfJMkrm" colab_type="code" colab={}
import matplotlib.pyplot as plt
def plot_by_country(df, countries):
plt.subplots(figsize = (14,10))
lines = plt.plot(df.loc[countries,:].T)
plt.legend(iter(lines), countries)
plt.xticks(rotation=90)
plt.show()
# + [markdown] id="yjS-gECXM3pg" colab_type="text"
# ## Plot China, Italy, France, South Korea, Japan, US
# + id="BXTFJDypMwkE" colab_type="code" outputId="925c99b7-d648-41b7-eb21-023bcdcf27eb" colab={"base_uri": "https://localhost:8080/", "height": 749}
plot_by_country(df_bycountry, ["Mainland China", "France", "Italy", "South Korea", "Japan", "US"])
# + [markdown] id="UFE__WD4M-1Y" colab_type="text"
# ## Plot normed China, Italy, France, South Korea, Japan, US
# + id="Nakl3xz-MxII" colab_type="code" outputId="04682642-8160-4b09-eb52-bc7a846aba21" colab={"base_uri": "https://localhost:8080/", "height": 749}
plot_by_country(df_bycountry_norm, ["Mainland China", "France", "Italy", "South Korea", "Japan", "US"])
# + [markdown] id="Xee9u5NRB-Gr" colab_type="text"
# ## Confirmed cases in Hubei province from the 22nd of January to the 22nd of February
# + id="6n-EiRL6XBSQ" colab_type="code" colab={}
df_byprovince = df_confirmed.groupby("Province/State").sum()
maximums = df_byprovince.iloc[:, -1]
df_byprovince_norm = df_byprovince.div(maximums.to_numpy(), axis=0)
# + id="XOJ0t1xUARiu" colab_type="code" outputId="dc44b6af-18d1-49d6-aab7-232945a6f9c0" colab={"base_uri": "https://localhost:8080/", "height": 620}
plt.subplots(figsize = (14,10))
lines = plt.plot(df_byprovince.loc["Hubei",:"2/22/20"].T)
plt.xticks(rotation=90)
plt.show()
# + [markdown] id="OrKGI8h3NMUU" colab_type="text"
# # Correlation
# + id="zHj0tFcBNEBp" colab_type="code" colab={}
import numpy as np
# https://towardsdatascience.com/four-ways-to-quantify-synchrony-between-time-series-data-b99136c4a9c9
def crosscorr(datax, datay, lag=0):
""" Lag-N cross correlation.
:lag: default 0, (int)
:datax: (pandas.Series)
:datay: (pandas.Series)
:Returns: crosscorr (float)
"""
return datax.corr(datay.shift(lag))
# + id="hS5jcUkoNP5j" colab_type="code" colab={}
def synchrony(df_1, df_2, absolute_lag):
x_range = np.arange(-absolute_lag, absolute_lag + 1)
rs = [crosscorr(df_1, df_2, lag) for lag in x_range]
offset = int(len(rs)/2) - np.argmax(rs)
f,ax = plt.subplots(figsize=(14,3))
ax.plot(rs)
ax.axvline(int(len(rs)/2),color='k',linestyle='--',label='Center')
ax.axvline(np.argmax(rs),color='r',linestyle='--',label='Peak synchrony')
ax.set(title=f'Offset = {offset} days', xlabel='Days',ylabel='Pearson r')
ax.set_xticks(x_range + absolute_lag)
ax.set_xticklabels(x_range)
plt.legend()
plt.xticks(rotation=90)
plt.show()
return offset
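As a sanity check on the lag scan (synthetic data, not the COVID series), a series delayed by a known number of days should produce its correlation peak at exactly that lag. Under `shift`'s convention, a negative lag here aligns the delayed series back onto the original:

```python
import numpy as np
import pandas as pd

# Series b is series a delayed by exactly 5 days.
rng = np.random.default_rng(2)
a = pd.Series(np.cumsum(rng.poisson(5, 60)).astype(float))
b = a.shift(5).fillna(0)

# Scan lags from -10 to +10 and find the peak Pearson correlation.
lags = np.arange(-10, 11)
rs_toy = [a.corr(b.shift(int(k))) for k in lags]
best = int(lags[int(np.argmax(rs_toy))])
```

The peak lands at lag -5 with a correlation of essentially 1, recovering the injected offset.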
# + id="2VjV9wdNNRNO" colab_type="code" colab={}
def plot_with_lag(serie_1, serie_2, lag, q_1 = None, q_2 = None):
f, ax = plt.subplots(figsize = (14,10))
plt.plot(serie_1.to_numpy(), label=serie_1.name, color='red')
plt.plot(serie_2.to_numpy(), label=serie_2.name, color='blue')
serie_2_lagged = []
if lag > 0:
serie_2_lagged = np.insert(serie_2, 0, np.zeros(abs(lag)))
else:
serie_2_lagged = serie_2[-lag:]
plt.plot(serie_2_lagged, label=f"{serie_2.name} lagged", color='blue', linestyle="--")
    ax.set_xticks(range(len(serie_1)))
    ax.set_xticklabels(serie_1.index)
plt.xticks(rotation=90)
plt.xlabel("Days")
plt.ylabel("Confirmed cases of Covid-19")
plt.grid(color='gray', ls = '-.', lw = 0.25)
if q_1 is not None:
ax.axvline(list(serie_1.index).index(q_1), color='red', linestyle='-', lw = 0.5, label=f'{serie_1.name} quarantine')
if q_2 is not None:
ax.axvline(list(serie_1.index).index(q_2), color='blue', linestyle='-', lw = 0.5, label=f'{serie_2.name} quarantine')
ax.axvline(list(serie_1.index).index(q_2) + lag, color='blue', linestyle='--', lw = 0.5, label=f'{serie_2.name} lagged quarantine')
plt.legend(loc="upper left")
plt.show()
# + [markdown] id="N0CszZ8tNS8n" colab_type="text"
# ## Italy leads France by about 6 days
# + id="nc-bZDGxNTi3" colab_type="code" outputId="b3660ec3-dbed-4ee2-e797-af518e5e28c2" colab={"base_uri": "https://localhost:8080/", "height": 864}
italy_france_offset = synchrony(df_bycountry.loc["Italy",:], df_bycountry.loc["France",:], 14)
plot_with_lag(df_bycountry.loc["Italy",:], df_bycountry.loc["France",:], -italy_france_offset)
# + id="ggHR1AfZhbxz" colab_type="code" outputId="cfd8b1ed-27b5-47de-cca1-655325ba7d51" colab={"base_uri": "https://localhost:8080/", "height": 864}
origin = "2/19/20"
italy_france_offset = synchrony(df_bycountry.loc["Italy",origin:], df_bycountry.loc["France",origin:], 10)
plot_with_lag(df_bycountry.loc["Italy",origin:], df_bycountry.loc["France",origin:], -italy_france_offset)
# + [markdown] id="UugrVuI5Nrb6" colab_type="text"
# ## China leads Italy by about 33 days
# + id="vXWXmhnBNiov" colab_type="code" outputId="6878fc06-c69c-4137-e9f3-22f9573464bc" colab={"base_uri": "https://localhost:8080/", "height": 864}
china_italy_offset = synchrony(df_byprovince_norm.loc["Hubei",:], df_bycountry_norm.loc["Italy",:], 36)
plot_with_lag(df_byprovince.loc["Hubei",:], df_bycountry.loc["Italy",:], -china_italy_offset)
# + [markdown] id="_XZ7aKUblzwZ" colab_type="text"
# ## Removing unnecessary data points
# + id="mWnHkf02kQGv" colab_type="code" colab={}
from datetime import datetime
def date_from_string(date_string):
return datetime.strptime(date_string, '%m/%d/%y')
def delta(date_1, date_2):
delta = date_from_string(date_1) - date_from_string(date_2)
return delta.days
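A quick check of the date helpers with the dataset's `m/d/yy` column format (the helpers are redefined here so the cell runs standalone):

```python
from datetime import datetime

def date_from_string(date_string):
    return datetime.strptime(date_string, '%m/%d/%y')

def delta(date_1, date_2):
    # Number of days from date_2 to date_1.
    return (date_from_string(date_1) - date_from_string(date_2)).days

gap = delta('2/22/20', '1/22/20')  # January has 31 days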
# + id="I2wFjKAghX55" colab_type="code" outputId="537b3a8d-b7a9-4d87-8a9d-170b4dfe4766" colab={"base_uri": "https://localhost:8080/", "height": 864}
offset = "2/19/20"
origin = df_byprovince_norm.loc["Hubei",:].index[0]
china_italy_offset = synchrony(df_byprovince_norm.loc["Hubei",:], df_bycountry_norm.loc["Italy",offset:], 10)
plot_with_lag(df_byprovince.loc["Hubei",:],
df_bycountry.loc["Italy",:],
lag = -china_italy_offset - delta(offset, origin),
q_1 = "1/25/20",
q_2 = "3/10/20")
| notebook/Coronavirus_By_Country.ipynb |