# RadarCOVID-Report
## Data Extraction
```
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import pycountry
import retry
import seaborn as sns
%matplotlib inline
current_working_directory = os.environ.get("PWD")
if current_working_directory:
    os.chdir(current_working_directory)
sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)
extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = extraction_datetime.strftime("%Y-%m-%d@%H")
current_hour = extraction_datetime.hour
are_today_results_partial = current_hour != 23
```
### Constants
```
from Modules.ExposureNotification import exposure_notification_io
spain_region_country_code = "ES"
germany_region_country_code = "DE"
default_backend_identifier = spain_region_country_code
backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
```
### Parameters
```
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
    report_backend_identifier = environment_backend_identifier
else:
    report_backend_identifier = default_backend_identifier
report_backend_identifier
environment_enable_multi_backend_download = \
    os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
    report_backend_identifiers = None
else:
    report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers
environment_invalid_shared_diagnoses_dates = \
    os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
    invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
    invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
```
### COVID-19 Cases
```
report_backend_client = \
    exposure_notification_io.get_backend_client_with_identifier(
        backend_identifier=report_backend_identifier)
@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))
def download_cases_dataframe():
    return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv")
confirmed_df_ = download_cases_dataframe()
confirmed_df_.iloc[0]
confirmed_df = confirmed_df_.copy()
confirmed_df = confirmed_df[["date", "new_cases", "iso_code"]]
confirmed_df.rename(
columns={
"date": "sample_date",
"iso_code": "country_code",
},
inplace=True)
def convert_iso_alpha_3_to_alpha_2(x):
    try:
        return pycountry.countries.get(alpha_3=x).alpha_2
    except Exception as e:
        logging.info(f"Error converting country ISO Alpha 3 code '{x}': {repr(e)}")
        return None
confirmed_df["country_code"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2)
confirmed_df.dropna(inplace=True)
confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True)
confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_df.sort_values("sample_date", inplace=True)
confirmed_df.tail()
confirmed_days = pd.date_range(
start=confirmed_df.iloc[0].sample_date,
end=extraction_datetime)
confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"])
confirmed_days_df["sample_date_string"] = \
confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_days_df.tail()
def sort_source_regions_for_display(source_regions: list) -> list:
    if report_backend_identifier in source_regions:
        source_regions = [report_backend_identifier] + \
            list(sorted(set(source_regions).difference([report_backend_identifier])))
    else:
        source_regions = list(sorted(source_regions))
    return source_regions
report_source_regions = report_backend_client.source_regions_for_date(
date=extraction_datetime.date())
report_source_regions = sort_source_regions_for_display(
source_regions=report_source_regions)
report_source_regions
def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None):
    source_regions_at_date_df = confirmed_days_df.copy()
    source_regions_at_date_df["source_regions_at_date"] = \
        source_regions_at_date_df.sample_date.apply(
            lambda x: source_regions_for_date_function(date=x))
    source_regions_at_date_df.sort_values("sample_date", inplace=True)
    source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \
        source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x)))
    source_regions_at_date_df.tail()

    source_regions_for_summary_df_ = \
        source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy()
    source_regions_for_summary_df_.rename(columns={"_source_regions_group": "source_regions"}, inplace=True)
    source_regions_for_summary_df_.tail()

    confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"]
    confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns)
    for source_regions_group, source_regions_group_series in \
            source_regions_at_date_df.groupby("_source_regions_group"):
        source_regions_set = set(source_regions_group.split(","))
        confirmed_source_regions_set_df = \
            confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy()
        confirmed_source_regions_group_df = \
            confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \
                .reset_index().sort_values("sample_date")
        confirmed_source_regions_group_df = \
            confirmed_source_regions_group_df.merge(
                confirmed_days_df[["sample_date_string"]].rename(
                    columns={"sample_date_string": "sample_date"}),
                how="right")
        confirmed_source_regions_group_df["new_cases"] = \
            confirmed_source_regions_group_df["new_cases"].clip(lower=0)
        confirmed_source_regions_group_df["covid_cases"] = \
            confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round()
        confirmed_source_regions_group_df = \
            confirmed_source_regions_group_df[confirmed_output_columns]
        confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan)
        confirmed_source_regions_group_df.ffill(inplace=True)
        confirmed_source_regions_group_df = \
            confirmed_source_regions_group_df[
                confirmed_source_regions_group_df.sample_date.isin(
                    source_regions_group_series.sample_date_string)]
        confirmed_output_df = pd.concat(
            [confirmed_output_df, confirmed_source_regions_group_df])
    result_df = confirmed_output_df.copy()
    result_df.tail()

    result_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True)
    result_df = confirmed_days_df[["sample_date_string"]].merge(result_df, how="left")
    result_df.sort_values("sample_date_string", inplace=True)
    result_df.ffill(inplace=True)
    result_df.tail()

    result_df[["new_cases", "covid_cases"]].plot()
    if columns_suffix:
        result_df.rename(
            columns={
                "new_cases": "new_cases_" + columns_suffix,
                "covid_cases": "covid_cases_" + columns_suffix},
            inplace=True)
    return result_df, source_regions_for_summary_df_
confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe(
report_backend_client.source_regions_for_date)
confirmed_es_df, _ = get_cases_dataframe(
lambda date: [spain_region_country_code],
columns_suffix=spain_region_country_code.lower())
```
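The smoothing used above for `covid_cases` (clip negative corrections, 7-day rolling mean, rounding) can be isolated in a minimal sketch with made-up numbers:

```python
import numpy as np
import pandas as pd

# Hypothetical daily case counts with a data correction (-3) and a gap.
new_cases = pd.Series([10, 12, 0, -3, 15, np.nan, 20])
# Clip corrections to zero, then smooth with a 7-day rolling mean,
# rounding to whole cases as the report does.
smoothed = new_cases.clip(lower=0).rolling(7, min_periods=0).mean().round()
```

With `min_periods=0`, the first days average over however many samples exist so far, so the smoothed series has no leading NaNs.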
### Extract API TEKs
```
raw_zip_path_prefix = "Data/TEKs/Raw/"
base_backend_identifiers = [report_backend_identifier]
multi_backend_exposure_keys_df = \
exposure_notification_io.download_exposure_keys_from_backends(
backend_identifiers=report_backend_identifiers,
generation_days=backend_generation_days,
fail_on_error_backend_identifiers=base_backend_identifiers,
save_raw_zip_path_prefix=raw_zip_path_prefix)
multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"]
multi_backend_exposure_keys_df.rename(
columns={
"generation_datetime": "sample_datetime",
"generation_date_string": "sample_date_string",
},
inplace=True)
multi_backend_exposure_keys_df.head()
early_teks_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.rolling_period < 144].copy()
early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6
early_teks_df[early_teks_df.sample_date_string != extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
early_teks_df[early_teks_df.sample_date_string == extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[
"sample_date_string", "region", "key_data"]]
multi_backend_exposure_keys_df.head()
active_regions = \
multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
active_regions
multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(
["sample_date_string", "region"]).key_data.nunique().reset_index() \
.pivot(index="sample_date_string", columns="region") \
.sort_index(ascending=False)
multi_backend_summary_df.rename(
columns={"key_data": "shared_teks_by_generation_date"},
inplace=True)
multi_backend_summary_df.rename_axis("sample_date", inplace=True)
multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)
multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)
multi_backend_summary_df.head()
def compute_keys_cross_sharing(x):
    teks_x = x.key_data_x.item()
    common_teks = set(teks_x).intersection(x.key_data_y.item())
    common_teks_fraction = len(common_teks) / len(teks_x)
    return pd.Series(dict(
        common_teks=common_teks,
        common_teks_fraction=common_teks_fraction,
    ))
multi_backend_exposure_keys_by_region_df = \
multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index()
multi_backend_exposure_keys_by_region_df["_merge"] = True
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_df.merge(
multi_backend_exposure_keys_by_region_df, on="_merge")
multi_backend_exposure_keys_by_region_combination_df.drop(
columns=["_merge"], inplace=True)
if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1:
    multi_backend_exposure_keys_by_region_combination_df = \
        multi_backend_exposure_keys_by_region_combination_df[
            multi_backend_exposure_keys_by_region_combination_df.region_x !=
            multi_backend_exposure_keys_by_region_combination_df.region_y]
multi_backend_exposure_keys_cross_sharing_df = \
multi_backend_exposure_keys_by_region_combination_df \
.groupby(["region_x", "region_y"]) \
.apply(compute_keys_cross_sharing) \
.reset_index()
multi_backend_cross_sharing_summary_df = \
multi_backend_exposure_keys_cross_sharing_df.pivot_table(
values=["common_teks_fraction"],
columns="region_x",
index="region_y",
aggfunc=lambda x: x.item())
multi_backend_cross_sharing_summary_df
multi_backend_without_active_region_exposure_keys_df = \
multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]
multi_backend_without_active_region = \
multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
multi_backend_without_active_region
exposure_keys_summary_df = multi_backend_exposure_keys_df[
    multi_backend_exposure_keys_df.region == report_backend_identifier].copy()
exposure_keys_summary_df.drop(columns=["region"], inplace=True)
exposure_keys_summary_df = \
exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df = \
exposure_keys_summary_df.reset_index().set_index("sample_date_string")
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True)
exposure_keys_summary_df.head()
```
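The cross-sharing matrix relies on a cartesian self-merge: every region is paired with every other via a constant `_merge` column, then each pair's key lists are intersected. A toy version of the same idiom, with made-up regions and keys:

```python
import pandas as pd

# Hypothetical per-region TEK lists.
df = pd.DataFrame({
    "region": ["ES", "DE"],
    "key_data": [["a", "b", "c"], ["b", "c", "d"]],
})
df["_merge"] = True  # constant key -> merge yields the cartesian product
pairs = df.merge(df, on="_merge").drop(columns="_merge")
pairs = pairs[pairs.region_x != pairs.region_y]
pairs["common_fraction"] = pairs.apply(
    lambda row: len(set(row.key_data_x) & set(row.key_data_y)) / len(row.key_data_x),
    axis=1)
```

In recent pandas the same product can be expressed directly with `df.merge(df, how="cross")`.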
### Dump API TEKs
```
tek_list_df = multi_backend_exposure_keys_df[
["sample_date_string", "region", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
["sample_date", "region"]).tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour
tek_list_path_prefix = "Data/TEKs/"
tek_list_current_path = tek_list_path_prefix + "Current/RadarCOVID-TEKs.json"
tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json"
tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json"
for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:
    os.makedirs(os.path.dirname(path), exist_ok=True)
tek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier]
tek_list_base_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
tek_list_current_path,
lines=True, orient="records")
tek_list_base_df.drop(columns=["extraction_date_with_hour"]).to_json(
tek_list_daily_path,
lines=True, orient="records")
tek_list_base_df.to_json(
tek_list_hourly_path,
lines=True, orient="records")
tek_list_base_df.head()
```
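The dumps use newline-delimited JSON (`orient="records", lines=True`), which round-trips cleanly through `pd.read_json`. A self-contained check with an in-memory buffer and a made-up row:

```python
import io

import pandas as pd

# One hypothetical dump row.
df = pd.DataFrame({"sample_date": ["2020-09-01"], "tek_list": [["a", "b"]]})
buffer = io.StringIO()
df.to_json(buffer, lines=True, orient="records")
buffer.seek(0)
restored = pd.read_json(buffer, lines=True)
```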
### Load TEK Dumps
```
import glob
def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:
    extracted_teks_df = pd.DataFrame(columns=["region"])
    file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json"))))
    if limit:
        file_paths = file_paths[:limit]
    for file_path in file_paths:
        logging.info(f"Loading TEKs from '{file_path}'...")
        iteration_extracted_teks_df = pd.read_json(file_path, lines=True)
        extracted_teks_df = pd.concat(
            [extracted_teks_df, iteration_extracted_teks_df], sort=False)
    extracted_teks_df["region"] = \
        extracted_teks_df.region.fillna(spain_region_country_code).copy()
    if region:
        extracted_teks_df = \
            extracted_teks_df[extracted_teks_df.region == region]
    return extracted_teks_df
daily_extracted_teks_df = load_extracted_teks(
mode="Daily",
region=report_backend_identifier,
limit=tek_dumps_load_limit)
daily_extracted_teks_df.head()
exposure_keys_summary_df_ = daily_extracted_teks_df \
.sort_values("extraction_date", ascending=False) \
.groupby("sample_date").tek_list.first() \
.to_frame()
exposure_keys_summary_df_.index.name = "sample_date_string"
exposure_keys_summary_df_["tek_list"] = \
exposure_keys_summary_df_.tek_list.apply(len)
exposure_keys_summary_df_ = exposure_keys_summary_df_ \
.rename(columns={"tek_list": "shared_teks_by_generation_date"}) \
.sort_index(ascending=False)
exposure_keys_summary_df = exposure_keys_summary_df_
exposure_keys_summary_df.head()
```
### Daily New TEKs
```
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
def compute_teks_by_generation_and_upload_date(date):
    day_new_teks_set_df = tek_list_df.copy().diff()
    try:
        day_new_teks_set = day_new_teks_set_df[
            day_new_teks_set_df.index == date].tek_list.item()
    except ValueError:
        day_new_teks_set = None
    if pd.isna(day_new_teks_set):
        day_new_teks_set = set()
    day_new_teks_df = daily_extracted_teks_df[
        daily_extracted_teks_df.extraction_date == date].copy()
    day_new_teks_df["shared_teks"] = \
        day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))
    day_new_teks_df["shared_teks"] = \
        day_new_teks_df.shared_teks.apply(len)
    day_new_teks_df["upload_date"] = date
    day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True)
    day_new_teks_df = day_new_teks_df[
        ["upload_date", "generation_date", "shared_teks"]]
    day_new_teks_df["generation_to_upload_days"] = \
        (pd.to_datetime(day_new_teks_df.upload_date) -
         pd.to_datetime(day_new_teks_df.generation_date)).dt.days
    day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]
    return day_new_teks_df
shared_teks_generation_to_upload_df = pd.DataFrame()
for upload_date in daily_extracted_teks_df.extraction_date.unique():
    shared_teks_generation_to_upload_df = pd.concat([
        shared_teks_generation_to_upload_df,
        compute_teks_by_generation_and_upload_date(date=upload_date)])
shared_teks_generation_to_upload_df \
.sort_values(["upload_date", "generation_date"], ascending=False, inplace=True)
shared_teks_generation_to_upload_df.tail()
today_new_teks_df = \
shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()
today_new_teks_df.tail()
if not today_new_teks_df.empty:
    today_new_teks_df.set_index("generation_to_upload_days") \
        .sort_index().shared_teks.plot.bar()
generation_to_upload_period_pivot_df = \
shared_teks_generation_to_upload_df[
["upload_date", "generation_to_upload_days", "shared_teks"]] \
.pivot(index="upload_date", columns="generation_to_upload_days") \
.sort_index(ascending=False).fillna(0).astype(int) \
.droplevel(level=0, axis=1)
generation_to_upload_period_pivot_df.head()
new_tek_df = tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
"tek_list": "shared_teks_by_upload_date",
"extraction_date": "sample_date_string",}, inplace=True)
new_tek_df.tail()
shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \
[["upload_date", "shared_teks"]].rename(
columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_teks_uploaded_on_generation_date",
})
shared_teks_uploaded_on_generation_date_df.head()
estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \
.groupby(["upload_date"]).shared_teks.max().reset_index() \
.sort_values(["upload_date"], ascending=False) \
.rename(columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_diagnoses",
})
invalid_shared_diagnoses_dates_mask = \
    estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates)
estimated_shared_diagnoses_df.loc[
    invalid_shared_diagnoses_dates_mask, "shared_diagnoses"] = 0
estimated_shared_diagnoses_df.head()
```
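The daily-new-TEK computation hinges on `tek_list_df.diff()`: `Series.diff` applies the subtraction operator pairwise, and for Python sets `-` is the set difference, yielding exactly the keys first seen on each extraction date. The same idea written out explicitly with `shift()`, on made-up keys:

```python
import pandas as pd

# Union of all TEKs seen at each daily extraction (hypothetical keys).
teks_by_date = pd.Series(
    [{"a", "b"}, {"a", "b", "c"}, {"a", "b", "c", "e"}],
    index=["2020-09-01", "2020-09-02", "2020-09-03"])
# Pair each day's union with the previous one and take the set
# difference: the keys that first appeared on that date.
previous = teks_by_date.shift()
new_teks = pd.Series(
    [current - prev if isinstance(prev, set) else None
     for current, prev in zip(teks_by_date, previous)],
    index=teks_by_date.index)
new_counts = new_teks.apply(lambda x: len(x) if x is not None else None)
```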
### Hourly New TEKs
```
hourly_extracted_teks_df = load_extracted_teks(
mode="Hourly", region=report_backend_identifier, limit=25)
hourly_extracted_teks_df.head()
hourly_new_tek_count_df = hourly_extracted_teks_df \
.groupby("extraction_date_with_hour").tek_list. \
apply(lambda x: set(sum(x, []))).reset_index().copy()
hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \
.sort_index(ascending=True)
hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff()
hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply(
lambda x: len(x) if not pd.isna(x) else 0)
hourly_new_tek_count_df.rename(columns={
"new_tek_count": "shared_teks_by_upload_date"}, inplace=True)
hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[
"extraction_date_with_hour", "shared_teks_by_upload_date"]]
hourly_new_tek_count_df.head()
hourly_summary_df = hourly_new_tek_count_df.copy()
hourly_summary_df.set_index("extraction_date_with_hour", inplace=True)
hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()
hourly_summary_df["datetime_utc"] = pd.to_datetime(
hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
hourly_summary_df = hourly_summary_df.tail(-1)
hourly_summary_df.head()
```
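The hourly dumps are keyed by a custom `%Y-%m-%d@%H` string, which `pd.to_datetime` parses back given the explicit format, as done above for `datetime_utc`:

```python
import pandas as pd

# Hypothetical hourly extraction labels.
stamps = pd.to_datetime(
    pd.Series(["2020-09-01@14", "2020-09-01@15"]), format="%Y-%m-%d@%H")
```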
### Official Statistics
```
import requests
official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics")
official_stats_response.raise_for_status()
official_stats_df_ = pd.json_normalize(official_stats_response.json())
official_stats_df = official_stats_df_.copy()
official_stats_df["date"] = pd.to_datetime(official_stats_df["date"], dayfirst=True)
official_stats_df.head()
official_stats_column_map = {
"date": "sample_date",
"applicationsDownloads.totalAcummulated": "app_downloads_es_accumulated",
"communicatedContagions.totalAcummulated": "shared_diagnoses_es_accumulated",
}
accumulated_suffix = "_accumulated"
accumulated_values_columns = \
list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values()))
interpolated_values_columns = \
list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns))
official_stats_df = \
official_stats_df[official_stats_column_map.keys()] \
.rename(columns=official_stats_column_map)
official_stats_df["extraction_date"] = extraction_date
official_stats_df.head()
official_stats_path = "Data/Statistics/Current/RadarCOVID-Statistics.json"
previous_official_stats_df = pd.read_json(official_stats_path, orient="records", lines=True)
previous_official_stats_df["sample_date"] = pd.to_datetime(previous_official_stats_df["sample_date"], dayfirst=True)
official_stats_df = pd.concat([official_stats_df, previous_official_stats_df])
official_stats_df.head()
official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)]
official_stats_df.sort_values("extraction_date", ascending=False, inplace=True)
official_stats_df.drop_duplicates(subset=["sample_date"], keep="first", inplace=True)
official_stats_df.head()
official_stats_stored_df = official_stats_df.copy()
official_stats_stored_df["sample_date"] = official_stats_stored_df.sample_date.dt.strftime("%Y-%m-%d")
official_stats_stored_df.to_json(official_stats_path, orient="records", lines=True)
official_stats_df.drop(columns=["extraction_date"], inplace=True)
official_stats_df = confirmed_days_df.merge(official_stats_df, how="left")
official_stats_df.sort_values("sample_date", ascending=False, inplace=True)
official_stats_df.head()
official_stats_df[accumulated_values_columns] = \
official_stats_df[accumulated_values_columns] \
.astype(float).interpolate(limit_area="inside")
official_stats_df[interpolated_values_columns] = \
official_stats_df[accumulated_values_columns].diff(periods=-1)
official_stats_df.drop(columns="sample_date", inplace=True)
official_stats_df.head()
```
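The accumulated official counters are turned into daily values by interpolating reporting gaps and then differencing against the next (older) row. A minimal sketch with invented figures, newest first as in the frame above:

```python
import pandas as pd

# Hypothetical accumulated counter, newest first, with a reporting gap.
accumulated = pd.Series([100.0, None, 60.0, 30.0])
# limit_area="inside" fills only between known values.
filled = accumulated.interpolate(limit_area="inside")
# periods=-1 subtracts the next (older) row, giving per-day increments.
daily = filled.diff(periods=-1)
```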
### Data Merge
```
result_summary_df = exposure_keys_summary_df.merge(
new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
official_stats_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df = confirmed_es_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left")
result_summary_df.set_index(["sample_date", "source_regions"], inplace=True)
result_summary_df.drop(columns=["sample_date_string"], inplace=True)
result_summary_df.sort_index(ascending=False, inplace=True)
result_summary_df.head()
with pd.option_context("mode.use_inf_as_na", True):
    result_summary_df = result_summary_df.fillna(0).astype(int)
result_summary_df["teks_per_shared_diagnosis"] = \
(result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case"] = \
(result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0)
result_summary_df.head(daily_plot_days)
def compute_aggregated_results_summary(days) -> pd.DataFrame:
    aggregated_result_summary_df = result_summary_df.copy()
    aggregated_result_summary_df["covid_cases_for_ratio"] = \
        aggregated_result_summary_df.covid_cases.mask(
            aggregated_result_summary_df.shared_diagnoses == 0, 0)
    aggregated_result_summary_df["covid_cases_for_ratio_es"] = \
        aggregated_result_summary_df.covid_cases_es.mask(
            aggregated_result_summary_df.shared_diagnoses_es == 0, 0)
    aggregated_result_summary_df = aggregated_result_summary_df \
        .sort_index(ascending=True).fillna(0).rolling(days).agg({
            "covid_cases": "sum",
            "covid_cases_es": "sum",
            "covid_cases_for_ratio": "sum",
            "covid_cases_for_ratio_es": "sum",
            "shared_teks_by_generation_date": "sum",
            "shared_teks_by_upload_date": "sum",
            "shared_diagnoses": "sum",
            "shared_diagnoses_es": "sum",
        }).sort_index(ascending=False)
    with pd.option_context("mode.use_inf_as_na", True):
        aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int)
    aggregated_result_summary_df["teks_per_shared_diagnosis"] = \
        (aggregated_result_summary_df.shared_teks_by_upload_date /
         aggregated_result_summary_df.shared_diagnoses).fillna(0)
    aggregated_result_summary_df["shared_diagnoses_per_covid_case"] = \
        (aggregated_result_summary_df.shared_diagnoses /
         aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
    aggregated_result_summary_df["shared_diagnoses_per_covid_case_es"] = \
        (aggregated_result_summary_df.shared_diagnoses_es /
         aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0)
    return aggregated_result_summary_df
aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7)
aggregated_result_with_7_days_window_summary_df.head()
last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient="records")[1]
last_7_days_summary
aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=13)
last_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient="records")[1]
last_14_days_summary
```
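The aggregated ratios above mask out case counts on days without any shared diagnosis, so a window mixing active and inactive days is not deflated by the inactive ones. The trick, isolated with invented numbers:

```python
import pandas as pd

# Hypothetical daily cases and shared diagnoses over three days.
df = pd.DataFrame({
    "covid_cases": [100, 200, 300],
    "shared_diagnoses": [0, 20, 30],
})
# Only count cases on days where at least one diagnosis was shared.
df["covid_cases_for_ratio"] = df.covid_cases.mask(df.shared_diagnoses == 0, 0)
# Sum over a 3-day window and compute the usage ratio for the last row.
windowed = df.rolling(3).sum().iloc[-1]
ratio = windowed.shared_diagnoses / windowed.covid_cases_for_ratio
```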
## Report Results
```
display_column_name_mapping = {
"sample_date": "Sample\u00A0Date\u00A0(UTC)",
"source_regions": "Source Countries",
"datetime_utc": "Timestamp (UTC)",
"upload_date": "Upload Date (UTC)",
"generation_to_upload_days": "Generation to Upload Period in Days",
"region": "Backend",
"region_x": "Backend\u00A0(A)",
"region_y": "Backend\u00A0(B)",
"common_teks": "Common TEKs Shared Between Backends",
"common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)",
"covid_cases": "COVID-19 Cases (Source Countries)",
"shared_teks_by_generation_date": "Shared TEKs by Generation Date (Source Countries)",
"shared_teks_by_upload_date": "Shared TEKs by Upload Date (Source Countries)",
"shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date (Source Countries)",
"shared_diagnoses": "Shared Diagnoses (Source Countries – Estimation)",
"teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis (Source Countries)",
"shared_diagnoses_per_covid_case": "Usage Ratio (Source Countries)",
"covid_cases_es": "COVID-19 Cases (Spain)",
"app_downloads_es": "App Downloads (Spain – Official)",
"shared_diagnoses_es": "Shared Diagnoses (Spain – Official)",
"shared_diagnoses_per_covid_case_es": "Usage Ratio (Spain)",
}
summary_columns = [
"covid_cases",
"shared_teks_by_generation_date",
"shared_teks_by_upload_date",
"shared_teks_uploaded_on_generation_date",
"shared_diagnoses",
"teks_per_shared_diagnosis",
"shared_diagnoses_per_covid_case",
"covid_cases_es",
"app_downloads_es",
"shared_diagnoses_es",
"shared_diagnoses_per_covid_case_es",
]
summary_percentage_columns = [
"shared_diagnoses_per_covid_case_es",
"shared_diagnoses_per_covid_case",
]
```
### Daily Summary Table
```
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]
result_summary_with_display_names_df = result_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
result_summary_with_display_names_df
```
### Daily Summary Plots
```
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \
.droplevel(level=["source_regions"]) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(
    title="Daily Summary",
    rot=45, subplots=True, figsize=(15, 30), legend=False)
ax_ = summary_ax_list[0]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.95)
_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist()))
for percentage_column in summary_percentage_columns:
    percentage_column_index = summary_columns.index(percentage_column)
    summary_ax_list[percentage_column_index].yaxis \
        .set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
```
### Daily Generation to Upload Period Table
```
display_generation_to_upload_period_pivot_df = \
    generation_to_upload_period_pivot_df \
        .head(backend_generation_days)
display_generation_to_upload_period_pivot_df \
    .rename_axis(columns=display_column_name_mapping) \
    .rename_axis(index=display_column_name_mapping)
fig, generation_to_upload_period_pivot_table_ax = plt.subplots(
figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df)))
generation_to_upload_period_pivot_table_ax.set_title(
"Shared TEKs Generation to Upload Period Table")
sns.heatmap(
data=display_generation_to_upload_period_pivot_df
.rename_axis(columns=display_column_name_mapping)
.rename_axis(index=display_column_name_mapping),
fmt=".0f",
annot=True,
ax=generation_to_upload_period_pivot_table_ax)
generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
```
### Hourly Summary Plots
```
hourly_summary_ax_list = hourly_summary_df \
    .rename_axis(index=display_column_name_mapping) \
    .rename(columns=display_column_name_mapping) \
    .plot.bar(
        title="Last 24h Summary",
        rot=45, subplots=True, legend=False)
ax_ = hourly_summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.9)
_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
```
### Publish Results
```
github_repository = os.environ.get("GITHUB_REPOSITORY")
if github_repository is None:
    github_repository = "pvieito/Radar-STATS"
github_project_base_url = "https://github.com/" + github_repository
display_formatters = {
display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case_es"]: lambda x: f"{x:.2%}" if x != 0 else "",
}
general_columns = \
list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values()))
general_formatter = lambda x: f"{x}" if x != 0 else ""
display_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns)))
daily_summary_table_html = result_summary_with_display_names_df \
.head(daily_plot_days) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.to_html(formatters=display_formatters)
multi_backend_summary_table_html = multi_backend_summary_df \
.head(daily_plot_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(formatters=display_formatters)
def format_multi_backend_cross_sharing_fraction(x):
    if pd.isna(x):
        return "-"
    elif round(x * 100, 1) == 0:
        return ""
    else:
        return f"{x:.1%}"
multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(
classes="table-center",
formatters=display_formatters,
float_format=format_multi_backend_cross_sharing_fraction)
multi_backend_cross_sharing_summary_table_html = \
multi_backend_cross_sharing_summary_table_html \
.replace("<tr>","<tr style=\"text-align: center;\">")
extraction_date_result_summary_df = \
result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date]
extraction_date_result_hourly_summary_df = \
hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]
covid_cases = \
extraction_date_result_summary_df.covid_cases.item()
shared_teks_by_generation_date = \
extraction_date_result_summary_df.shared_teks_by_generation_date.item()
shared_teks_by_upload_date = \
extraction_date_result_summary_df.shared_teks_by_upload_date.item()
shared_diagnoses = \
extraction_date_result_summary_df.shared_diagnoses.item()
teks_per_shared_diagnosis = \
extraction_date_result_summary_df.teks_per_shared_diagnosis.item()
shared_diagnoses_per_covid_case = \
extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()
shared_teks_by_upload_date_last_hour = \
extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)
display_source_regions = ", ".join(report_source_regions)
if len(report_source_regions) == 1:
display_brief_source_regions = report_source_regions[0]
else:
display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺"
def get_temporary_image_path() -> str:
return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png")
def save_temporary_plot_image(ax):
if isinstance(ax, np.ndarray):
ax = ax[0]
media_path = get_temporary_image_path()
ax.get_figure().savefig(media_path)
return media_path
def save_temporary_dataframe_image(df):
import dataframe_image as dfi
df = df.copy()
df_styler = df.style.format(display_formatters)
media_path = get_temporary_image_path()
dfi.export(df_styler, media_path)
return media_path
summary_plots_image_path = save_temporary_plot_image(
ax=summary_ax_list)
summary_table_image_path = save_temporary_dataframe_image(
df=result_summary_with_display_names_df)
hourly_summary_plots_image_path = save_temporary_plot_image(
ax=hourly_summary_ax_list)
multi_backend_summary_table_image_path = save_temporary_dataframe_image(
df=multi_backend_summary_df)
generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image(
ax=generation_to_upload_period_pivot_table_ax)
```
### Save Results
```
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(
report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(
report_resources_path_prefix + "Summary-Table.html")
hourly_summary_df.to_csv(
report_resources_path_prefix + "Hourly-Summary-Table.csv")
multi_backend_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Summary-Table.csv")
multi_backend_cross_sharing_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv")
generation_to_upload_period_pivot_df.to_csv(
report_resources_path_prefix + "Generation-Upload-Period-Table.csv")
_ = shutil.copyfile(
summary_plots_image_path,
report_resources_path_prefix + "Summary-Plots.png")
_ = shutil.copyfile(
summary_table_image_path,
report_resources_path_prefix + "Summary-Table.png")
_ = shutil.copyfile(
hourly_summary_plots_image_path,
report_resources_path_prefix + "Hourly-Summary-Plots.png")
_ = shutil.copyfile(
multi_backend_summary_table_image_path,
report_resources_path_prefix + "Multi-Backend-Summary-Table.png")
_ = shutil.copyfile(
generation_to_upload_period_pivot_table_image_path,
report_resources_path_prefix + "Generation-Upload-Period-Table.png")
```
### Publish Results as JSON
```
def generate_summary_api_results(df: pd.DataFrame) -> list:
api_df = df.reset_index().copy()
api_df["sample_date_string"] = \
api_df["sample_date"].dt.strftime("%Y-%m-%d")
api_df["source_regions"] = \
api_df["source_regions"].apply(lambda x: x.split(","))
return api_df.to_dict(orient="records")
summary_api_results = \
generate_summary_api_results(df=result_summary_df)
today_summary_api_results = \
generate_summary_api_results(df=extraction_date_result_summary_df)[0]
summary_results = dict(
backend_identifier=report_backend_identifier,
source_regions=report_source_regions,
extraction_datetime=extraction_datetime,
extraction_date=extraction_date,
extraction_date_with_hour=extraction_date_with_hour,
last_hour=dict(
shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,
shared_diagnoses=0,
),
today=today_summary_api_results,
last_7_days=last_7_days_summary,
last_14_days=last_14_days_summary,
daily_results=summary_api_results)
summary_results = \
json.loads(pd.Series([summary_results]).to_json(orient="records"))[0]
with open(report_resources_path_prefix + "Summary-Results.json", "w") as f:
json.dump(summary_results, f, indent=4)
```
### Publish on README
```
with open("Data/Templates/README.md", "r") as f:
readme_contents = f.read()
readme_contents = readme_contents.format(
extraction_date_with_hour=extraction_date_with_hour,
github_project_base_url=github_project_base_url,
daily_summary_table_html=daily_summary_table_html,
multi_backend_summary_table_html=multi_backend_summary_table_html,
multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html,
display_source_regions=display_source_regions)
with open("README.md", "w") as f:
f.write(readme_contents)
```
### Publish on Twitter
```
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
if enable_share_to_twitter and github_event_name == "schedule" and \
(shared_teks_by_upload_date_last_hour or not are_today_results_partial):
import tweepy
twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"]
twitter_api_auth_keys = twitter_api_auth_keys.split(":")
auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])
auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])
api = tweepy.API(auth)
summary_plots_media = api.media_upload(summary_plots_image_path)
summary_table_media = api.media_upload(summary_table_image_path)
generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)
media_ids = [
summary_plots_media.media_id,
summary_table_media.media_id,
generation_to_upload_period_pivot_table_image_media.media_id,
]
if are_today_results_partial:
today_addendum = " (Partial)"
else:
today_addendum = ""
def format_shared_diagnoses_per_covid_case(value) -> str:
if value == 0:
return "–"
return f"≤{value:.2%}"
display_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case)
display_last_14_days_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case"])
display_last_14_days_shared_diagnoses_per_covid_case_es = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case_es"])
status = textwrap.dedent(f"""
#RadarCOVID – {extraction_date_with_hour}
Today{today_addendum}:
- Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)
- Shared Diagnoses: ≤{shared_diagnoses:.0f}
- Usage Ratio: {display_shared_diagnoses_per_covid_case}
Last 14 Days:
- Usage Ratio (Estimation): {display_last_14_days_shared_diagnoses_per_covid_case}
- Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es}
Info: {github_project_base_url}#documentation
""")
status = status.encode(encoding="utf-8")
api.update_status(status=status, media_ids=media_ids)
```
| github_jupyter |
# Matrix Inverse
In this exercise, you will write a function to calculate the inverse of either a 1x1 or a 2x2 matrix.
```
### TODO: Write a function called inverse_matrix() that
### receives a matrix and outputs the inverse
###
### You are provided with start code that checks
### if the matrix is square and if not, throws an error
###
### You will also need to check the size of the matrix.
### The formula for a 1x1 matrix and 2x2 matrix are different,
### so your solution will need to take this into account.
###
### If the user inputs a non-invertible 2x2 matrix or a matrix
### of size 3 x 3 or greater, the function should raise an
### error. A non-invertible
### 2x2 matrix has ad-bc = 0 as discussed in the lesson
###
### Python has various options for raising errors
### raise RuntimeError('this is the error message')
### raise NotImplementedError('this functionality is not implemented')
### raise ValueError('The denominator of a fraction cannot be zero')
def inverse_matrix(matrix):
'''
Return the inverse of 1x1 or 2x2 matrices.
Raises errors if the matrix is not square, is larger
than a 2x2 matrix, or if it cannot be inverted due to
what would be a division by zero.
'''
inverse = []
# Check if not square
if len(matrix) != len(matrix[0]):
raise ValueError('The matrix must be square')
# Check if matrix is larger than 2x2.
if len(matrix) > 2:
raise NotImplementedError('this functionality is not implemented')
# Check if matrix is 1x1 or 2x2.
# Depending on the matrix size, the formula for calculating
# the inverse is different.
if len(matrix) == 1:
inverse.append([1 / matrix[0][0]])
elif len(matrix) == 2:
# If the matrix is 2x2, check that the matrix is invertible
if matrix[0][0] * matrix[1][1] == matrix[0][1] * matrix[1][0]:
raise ValueError('The matrix is not invertible.')
else:
# Calculate the inverse of the 2x2 matrix.
a = matrix[0][0]
b = matrix[0][1]
c = matrix[1][0]
d = matrix[1][1]
factor = 1 / (a * d - b * c)
inverse = [[d, -b],[-c, a]]
for i in range(len(inverse)):
for j in range(len(inverse[0])):
inverse[i][j] = factor * inverse[i][j]
return inverse
## TODO: Run this cell to check your output. If this cell does
## not output anything, your answers are as expected.
assert inverse_matrix([[100]]) == [[0.01]]
assert inverse_matrix([[4, 5], [7, 1]]) == [[-0.03225806451612903, 0.16129032258064516],
[0.22580645161290322, -0.12903225806451613]]
### Run this line of code and see what happens. Because ad = bc, this
### matrix does not have an inverse
inverse_matrix([[4, 2], [14, 7]])
### Run this line of code and see what happens. This is a 3x3 matrix
inverse_matrix([[4, 5, 1], [2, 9, 7], [6, 3, 9]])
```
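Beyond the exact-equality asserts above (which can be brittle for floating-point results), one dependency-free sanity check is to multiply a matrix by its computed inverse and confirm the product is numerically the identity. This sketch recomputes the 2x2 inverse with the same `1 / (ad - bc)` formula used in the solution:

```python
def matrix_multiply(A, B):
    # Plain-Python matrix product, kept dependency-free like the exercise.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def is_identity(M, tol=1e-9):
    # True when M is numerically close to the identity matrix.
    return all(abs(M[i][j] - (1 if i == j else 0)) < tol
               for i in range(len(M)) for j in range(len(M[0])))

# Inverse of [[4, 5], [7, 1]] computed with the same 2x2 formula as above.
A = [[4, 5], [7, 1]]
a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
factor = 1 / (a * d - b * c)
A_inv = [[factor * d, factor * -b], [factor * -c, factor * a]]

print(is_identity(matrix_multiply(A, A_inv)))  # True
```

The tolerance-based check sidesteps the floating-point equality problem entirely: `A @ A_inv` should be within rounding error of the identity no matter which representable floats the inverse lands on.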
| github_jupyter |
# `git`, `GitHub`, `GitKraken` (continued)
<img style="float: left; margin: 15px 15px 15px 15px;" src="http://conociendogithub.readthedocs.io/en/latest/_images/Git.png" width="180" height="50" />
<img style="float: left; margin: 15px 15px 15px 15px;" src="https://c1.staticflickr.com/3/2238/13158675193_2892abac95_z.jpg" title="github" width="180" height="50" />
<img style="float: left; margin: 15px 15px 15px 15px;" src="https://www.gitkraken.com/downloads/brand-assets/gitkraken-keif-teal-sq.png" title="gitkraken" width="180" height="50" />
___
## Recap of the previous class
In the previous class we saw how to sync the remote repository with the changes we have made and documented locally. *In practice, this happens when we ourselves edit some part of a project we are working on.*
Now we will learn to do the opposite: how to sync the local repository with changes made in the remote repository. *In practice, this happens when other collaborators on the project make a change and we want to see those changes.*
We continue to follow the `YouTube` video below.
```
from IPython.display import YouTubeVideo
YouTubeVideo('f0y_xCeM1Rk')
```
### Recipe (continued)
1. In `GitHub`, inside the `hello-world` repository, click *Create new file*.
- People do not normally create or edit files on `GitHub`; however, this will be our way of emulating that someone added a new file to our project.
- Give the file a name and put something in the text body.
- Write a message describing that a new file was added.
- Click *Commit new file*.
- Note that the new file now exists in the remote repository on `GitHub`, but not yet in the local repository.
2. Check the change tree in `GitKraken`. We can see that the icon showing the state of `GitHub` is now one step ahead of the icon showing the state of the local repository.
3. To bring the remote repository's changes into the local repository, click *Pull* at the top. Again, the icons should come together.
4. Check the local repository to see that the new file is now there.
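For reference, the same *Pull* step can be reproduced with plain `git` commands. This is a self-contained sketch (all paths are throwaway temp directories): a bare repository stands in for `hello-world` on `GitHub`, and two clones stand in for us and for the collaborator who adds the file.

```shell
#!/bin/sh
set -e
work=$(mktemp -d)

# The "GitHub" repository.
git -c init.defaultBranch=master init -q --bare "$work/remote.git"

# Our local repository, with an initial commit pushed to the remote.
git clone -q "$work/remote.git" "$work/local"
git -C "$work/local" -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m "initial"
git -C "$work/local" push -q origin HEAD:master

# A collaborator's clone: they create a file and push it to the remote.
git clone -q "$work/remote.git" "$work/other"
echo "some text" > "$work/other/new-file.txt"
git -C "$work/other" add new-file.txt
git -C "$work/other" -c user.email=a@b -c user.name=demo \
    commit -q -m "Add a new file"
git -C "$work/other" push -q origin master

# Our clone is now one commit behind, until we pull.
git -C "$work/local" pull -q origin master
ls "$work/local"
```

`git pull` is exactly what the GitKraken *Pull* button runs: a fetch from the remote followed by a merge into the current branch.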
### So far...
We have learned basic management of remote repositories with `GitKraken`:
1. With a remote repository stored on `GitHub`, we *pulled* those files to our local disk to work on them. The operations we carried out were:
1. Clone:<font color= red > description</font>.
2. Pull:<font color= red > description</font>.
2. We also did the opposite: after making changes in our local repository, we could update our `GitHub` repository. The operation we carried out was:
1. Push:<font color= red > description</font>.
___
## What if we make a mistake?
Mistakes are inherent to our human condition, so it is very likely that we will make one during the development of a project.
One of the advantages of managing versions with `git` (and therefore with `GitKraken`) is that we can go back to a previous commit if we made a mistake.
<font color= red > Show how to do this in GitKraken.</font>
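As a command-line counterpart to what GitKraken does through its UI, here is a self-contained sketch (in a throwaway repository with made-up file contents) of undoing a bad commit with `git revert`:

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)
git -c init.defaultBranch=master init -q "$repo"

echo "good content" > "$repo/notes.txt"
git -C "$repo" add notes.txt
git -C "$repo" -c user.email=a@b -c user.name=demo commit -q -m "Good commit"

echo "a mistake" > "$repo/notes.txt"
git -C "$repo" add notes.txt
git -C "$repo" -c user.email=a@b -c user.name=demo commit -q -m "Bad commit"

# "git revert" adds a new commit that undoes the bad one, keeping history.
# (A harsher alternative is "git reset --hard HEAD~1", which rewrites history.)
git -C "$repo" -c user.email=a@b -c user.name=demo revert --no-edit HEAD

cat "$repo/notes.txt"
```

`revert` is usually the safer choice on shared branches because it records the undo as new history instead of deleting commits that collaborators may already have pulled.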
___
## Branching
When we did the *hello-world* exercise while opening the `GitHub` account, we got a short introduction to *branching* (we created an editing branch, edited the `README` file, and finally merged the changes into the *master* branch).
*Branching*:
- Is a <font color=green>safe</font> way to make significant changes to a project with `git`.
- Consists of creating additional branches to modify the project without touching the *master* branch until you are completely sure of the modifications.
- Once you are sure the modifications are right, they are merged into the *master* branch.
### Example
1. In `GitKraken`, in the *hello-world* repository, create a branch called *add_file*.
- Right-click the master icon and click *Create branch here*.
- Name it *add_file* and press the Enter key.
- Note that `GitKraken` automatically switches us to the newly created branch.
2. Go to the repository's local directory and add a new file.
3. Do the *stage* and *commit* process on the branch.
4. Check what happens to the directory when we switch branches (to switch branches, double-click the branch you want to move to).
5. Merge the changes into the *master* branch (drag one branch onto the other and click the *Merge add_file into master* option).
6. Switch to the *master* branch and delete the *add_file* branch.
7. Do a *push* to update the remote repository.
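The same branching recipe can be sketched with plain `git` commands; the file name and repository here are throwaway stand-ins, and the push step is omitted since there is no real remote:

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)
git -c init.defaultBranch=master init -q "$repo"
cd "$repo"
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "initial"

git checkout -q -b add_file     # step 1: create the branch and switch to it
echo "hello" > added-file.txt   # step 2: add a new file
git add added-file.txt          # step 3: stage...
git -c user.email=a@b -c user.name=demo commit -q -m "Add file"   # ...and commit

git checkout -q master          # step 4: the file vanishes on master...
git merge -q add_file           # step 5: ...until the branch is merged
git branch -q -d add_file       # step 6: delete the merged branch
ls
```

Switching branches in step 4 makes the working directory match the checked-out branch, which is why the new file temporarily disappears: it only exists on *add_file* until the merge.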
___
## Forking
A fork is a copy of a repository. Forking a repository lets you experiment with changes freely without affecting the original project.
*Forking* has several applications:
### Following someone else's project
As an example, you will follow the project of the course **SimMat2018-1**.
The following steps will show us how to keep our local repository up to date with the course repository.
1. Go to the repository https://github.com/esjimenezro/SimMat2018-1.
2. In the upper-right corner, click *fork* and wait a moment. This action copies into your `GitHub` account a repository identical to the course one (with the same name).
3. From `GitKraken`, clone the repository (the one already in your account).
4. In the *REMOTE* tab, click the `+` sign.
- Click `GitHub`.
- Open the drop-down and choose esjimenezro/SimMat2018-1.
- Click *Add remote*.
5. <font color=red>I will add a new file in the course repository and you will see what happens in `GitKraken`</font>.
6. Drag the other remote repository onto the *master* branch and click the *Merge esjimenezro/master into master* option. The local repository is now up to date.
7. To update your own remote repository, do a *push*.
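Those steps map to plain `git` commands as sketched below. Local throwaway directories stand in for the GitHub repositories: `course` plays the instructor's repository (added as the extra remote `upstream`), `fork.git` plays your fork, and `clone` is your working copy.

```shell
#!/bin/sh
set -e
work=$(mktemp -d)

# The instructor's repository, with some initial content.
git -c init.defaultBranch=master init -q "$work/course"
git -C "$work/course" -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m "course material"

git clone -q --bare "$work/course" "$work/fork.git"       # the fork (step 2)
git clone -q "$work/fork.git" "$work/clone"               # clone the fork (step 3)
git -C "$work/clone" remote add upstream "$work/course"   # add the course remote (step 4)

# The instructor adds a file to the course repository (step 5).
echo "new notes" > "$work/course/extra.txt"
git -C "$work/course" add extra.txt
git -C "$work/course" -c user.email=a@b -c user.name=demo commit -q -m "Add notes"

# Merge the course changes into our master (step 6) and push to the fork (step 7).
git -C "$work/clone" fetch -q upstream
git -C "$work/clone" merge -q upstream/master
git -C "$work/clone" push -q origin master
```

The key idea is that a clone can track more than one remote: `origin` points at your fork, `upstream` at the original project, and syncing is just fetch from one, merge, push to the other.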
### Collaborative projects
Normally, forks are used to propose changes to someone else's project (to build collaborative projects).
<font color=red>Make a change in your own repository and show how to do the *pull request* and the *merge*</font>.
**References:**
- https://help.github.com/articles/fork-a-repo/
- https://guides.github.com/activities/forking/
> <font color=blue>**Homework**</font>: in pairs, you will do a collaborative project. I will upload to Moodle the step-by-step instructions and what must be submitted.
**Reminder of today's homework and class recap**
<img src="https://raw.githubusercontent.com/louim/in-case-of-fire/master/in_case_of_fire.png" title="In case of fire (https://github.com/louim/in-case-of-fire)" width="200" height="50" align="center">
<script>
$(document).ready(function(){
$('div.prompt').hide();
$('div.back-to-top').hide();
$('nav#menubar').hide();
$('.breadcrumb').hide();
$('.hidden-print').hide();
});
</script>
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Esteban Jiménez Rodríguez.
</footer>
| github_jupyter |
https://github.com/scikit-learn/scikit-learn/issues/18305

Thomas' example with Logistic regression:
https://scikit-learn.org/stable/auto_examples/compose/plot_column_transformer_mixed_types.html#sphx-glr-auto-examples-compose-plot-column-transformer-mixed-types-py

```
import watermark
%load_ext watermark
#!pip install --upgrade scikit-learn
#!pip install watermark
import sklearn
from sklearn import set_config
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LogisticRegression
sklearn.__version__
# see version of system, python and libraries
%watermark -n -v -m -g -iv
#sklearn.set_config(display='diagram')
set_config(display='diagram')

num_proc = make_pipeline(SimpleImputer(strategy='median'), StandardScaler())

cat_proc = make_pipeline(
    SimpleImputer(strategy='constant', fill_value='missing'),
    OneHotEncoder(handle_unknown='ignore'))

preprocessor = make_column_transformer((num_proc, ('feat1', 'feat3')),
                                       (cat_proc, ('feat0', 'feat2')))

clf = make_pipeline(preprocessor, LogisticRegression())
clf
from sklearn.linear_model import LogisticRegression
# Author: Pedro Morales <part.morales@gmail.com>
#
# License: BSD 3 clause

import numpy as np

from sklearn.compose import ColumnTransformer
from sklearn.datasets import fetch_openml
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, GridSearchCV

np.random.seed(0)

# Load data from https://www.openml.org/d/40945
X, y = fetch_openml("titanic", version=1, as_frame=True, return_X_y=True)

# Alternatively X and y can be obtained directly from the frame attribute:
# X = titanic.frame.drop('survived', axis=1)
# y = titanic.frame['survived']
numeric_features = ['age', 'fare']
numeric_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler())])

categorical_features = ['embarked', 'sex', 'pclass']
categorical_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
    ('onehot', OneHotEncoder(handle_unknown='ignore'))])

preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numeric_features),
        ('cat', categorical_transformer, categorical_features)])

# Append classifier to preprocessing pipeline.
# Now we have a full prediction pipeline.
clf = Pipeline(steps=[('preprocessor', preprocessor),
                      ('classifier', LogisticRegression())])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf.fit(X_train, y_train)
print("model score: %.3f" % clf.score(X_test, y_test))
from sklearn import set_config
set_config(display='diagram')
clf
from sklearn import svm
np.random.seed(0)

# Load data from https://www.openml.org/d/40945
X, y = fetch_openml("titanic", version=1, as_frame=True, return_X_y=True)
numeric_features = ['age', 'fare']
numeric_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler())])

categorical_features = ['embarked', 'sex', 'pclass']
categorical_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
    ('onehot', OneHotEncoder(handle_unknown='ignore'))])

preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numeric_features),
        ('cat', categorical_transformer, categorical_features)])

# Append classifier to preprocessing pipeline.
# Now we have a full prediction pipeline.
clf = Pipeline(steps=[('preprocessor', preprocessor),
                      ('classifier', svm.SVC())])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf.fit(X_train, y_train)
print("model score: %.3f" % clf.score(X_test, y_test))
from sklearn import set_config
set_config(display='diagram')
clf
```
| github_jupyter |
```
import pandas as pd
import numpy as np
from tqdm.auto import tqdm
train = pd.read_csv("../dataset/validation/train_complete.csv")
test = pd.read_csv("../dataset/original/test_complete.csv")
# TODO: also add the rest of the train set
train
columns = ['queried_record_id', 'predicted_record_id', 'predicted_record_id_record', 'cosine_score',
'name_cosine', 'email_cosine', 'phone_cosine']
c_train = train[columns].drop_duplicates('queried_record_id', keep='first')
c_test = test[columns].drop_duplicates('queried_record_id', keep='first')
```
Label
```
c_train['split_record_id'] = c_train.queried_record_id.str.split('-')
c_train['linked_id'] = [x[0] for x in c_train.split_record_id]
c_train = c_train.drop('split_record_id', axis=1)
c_test['split_record_id'] = c_test.queried_record_id.str.split('-')
c_test['linked_id'] = [x[0] for x in c_test.split_record_id]
c_test = c_test.drop('split_record_id', axis=1)
c_train['linked_id'] = c_train.linked_id.astype(int)
c_test['linked_id'] = c_test.linked_id.astype(int)
# Get in_train set
train_train = pd.read_csv("../dataset/validation/train.csv", escapechar="\\")
train_test = pd.read_csv("../dataset/original/train.csv", escapechar="\\")
train_train = train_train.sort_values(by='record_id').reset_index(drop=True)
train_test = train_test.sort_values(by='record_id').reset_index(drop=True)
train_train['linked_id'] = train_train.linked_id.astype(int)
train_test['linked_id'] = train_test.linked_id.astype(int)
intrain_train = set(train_train.linked_id.values)
intrain_test = set(train_test.linked_id.values)
c_train['label'] = np.isin(c_train.linked_id.values, list(intrain_train))
c_test['label'] = np.isin(c_test.linked_id.values, list(intrain_test))
# 1 if it is in train, 0 is not in train
c_train['label'] = np.where(c_train.label.values == True, 1, 0)
c_test['label'] = np.where(c_test.label.values == True, 1, 0)
c_test
original_test = pd.read_csv("../dataset/original/test.csv", escapechar="\\")
validation_test = pd.read_csv("../dataset/validation/test.csv", escapechar="\\")
original_test = original_test.sort_values(by='record_id')
validation_test = validation_test.sort_values(by='record_id')
original_test['split'] = original_test.record_id.str.split('-')
original_test['linked_id'] = [x[0] for x in original_test.split]
original_test['linked_id'] = original_test.linked_id.astype(int)
original_test[~original_test.linked_id.isin(intrain_test)]
```
# Feature Extraction
```
# Number of null fields in the row
# Popularity of the name
# How many times the first recommendation appears among the top-10 recommended
# How many distinct recommendations we make in the top 10 (possibly scaled by how many
# total recommendations the first recommended element has)
#
# check how many elements not in the train set we catch if we threshold at 0
```
## Null field in each row
```
validation_test['nan_field'] = validation_test.isnull().sum(axis=1)
original_test['nan_field'] = original_test.isnull().sum(axis=1)
c_train = c_train.merge(validation_test[['record_id', 'nan_field']], how='left', left_on='queried_record_id', right_on='record_id').drop('record_id', axis=1)
c_test = c_test.merge(original_test[['record_id', 'nan_field']], how='left', left_on='queried_record_id', right_on='record_id').drop('record_id', axis=1)
c_train
```
# Scores
```
s_train = pd.read_csv("../lgb_predictions_validation.csv")
s_test = pd.read_csv("../lgb_predictions_full.csv")
s_train.ordered_scores = [eval(x) for x in s_train.ordered_scores]
s_test.ordered_scores = [eval(x) for x in s_test.ordered_scores]
s_train.ordered_linked = [eval(x) for x in s_train.ordered_linked]
s_test.ordered_linked = [eval(x) for x in s_test.ordered_linked]
def first_scores(df):
new_df = []
for (q, s) in tqdm(zip(df.queried_record_id, df.ordered_scores)):
new_df.append((q, s[0], s[1]))
new_df = pd.DataFrame(new_df, columns=['queried_record_id', 'score1', 'score2'])
return new_df
s_train_first = first_scores(s_train)
s_test_first = first_scores(s_test)
c_train = c_train.merge(s_train_first, how='left', on='queried_record_id')
c_test = c_test.merge(s_test_first, how='left', on='queried_record_id')
```
## How many linked_id equal to the first one appear in the top 10 / how many record_id the first predicted linked_id has in total
```
from collections import Counter
group_val = train_train.groupby('linked_id').size()
group_test = train_test.groupby('linked_id').size()
group_val = group_val.reset_index().rename(columns={0:'size'})
group_test = group_test.reset_index().rename(columns={0:'size'})
train_complete_list = train[['queried_record_id', 'predicted_record_id']].groupby('queried_record_id').apply(lambda x: list(x['predicted_record_id']))
test_complete_list = test[['queried_record_id', 'predicted_record_id']].groupby('queried_record_id').apply(lambda x: list(x['predicted_record_id']))
train_complete_list = train_complete_list.reset_index().rename(columns={0:'record_id'})
test_complete_list = test_complete_list.reset_index().rename(columns={0:'record_id'})
train_complete_list['size'] = [Counter(x[:10])[x[0]] for x in train_complete_list.record_id]
test_complete_list['size'] = [Counter(x[:10])[x[0]] for x in test_complete_list.record_id]
train_complete_list['first_pred'] = [x[0] for x in train_complete_list.record_id]
test_complete_list['first_pred'] = [x[0] for x in test_complete_list.record_id]
train_complete_list = train_complete_list.merge(group_val, how='left', left_on='first_pred', right_on='linked_id', suffixes=('_pred', '_real')).drop('linked_id', axis=1)
test_complete_list = test_complete_list.merge(group_test, how='left', left_on='first_pred', right_on='linked_id', suffixes=('_pred', '_real')).drop('linked_id', axis=1)
train_complete_list['pred_over_all'] = [ p/r for (p,r) in zip(train_complete_list.size_pred, train_complete_list.size_real)]
test_complete_list['pred_over_all'] = [ p/r for (p,r) in zip(test_complete_list.size_pred, test_complete_list.size_real)]
train_complete_list
c_train = c_train.merge(train_complete_list[['queried_record_id', 'pred_over_all']], how='left', on='queried_record_id')
c_test = c_test.merge(test_complete_list[['queried_record_id', 'pred_over_all']], how='left', on='queried_record_id')
```
## Number of different linked_id predicted in top 10 & size of the train group identified by the first recommended item (this is the size_real introduced earlier)
```
train_complete_list['n_diff_linked_id'] = [len(set(x[:10])) for x in train_complete_list.record_id]
test_complete_list['n_diff_linked_id'] = [len(set(x[:10])) for x in test_complete_list.record_id]
c_train = c_train.merge(train_complete_list[['queried_record_id', 'size_real', 'n_diff_linked_id']], how='left', on='queried_record_id')
c_test = c_test.merge(test_complete_list[['queried_record_id', 'size_real', 'n_diff_linked_id']], how='left', on='queried_record_id')
```
## Number of equal original fields
## TODO: Take the linked_id of the first predicted record, get the corresponding group of records in the train set, and check how similar the queried record is to the whole group
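A minimal pandas sketch of that TODO, on tiny made-up frames (the column names `first_pred_linked_id` and `name`, and all the data, are illustrative stand-ins, not the notebook's real frames): for each query, look up the train group of the first predicted `linked_id` and measure how often a field of the queried record matches the group.

```python
import pandas as pd

# Illustrative stand-ins for the notebook's train and query frames.
train = pd.DataFrame({
    "linked_id": [1, 1, 2],
    "name": ["anna", "ana", "bob"],
})
queries = pd.DataFrame({
    "queried_record_id": ["q1", "q2"],
    "first_pred_linked_id": [1, 2],
    "name": ["anna", "alice"],
})

def group_name_match_rate(row):
    # Fraction of records in the predicted linked_id's train group whose
    # name field equals the queried record's name.
    group = train[train["linked_id"] == row["first_pred_linked_id"]]
    if len(group) == 0:
        return 0.0
    return float((group["name"] == row["name"]).mean())

queries["group_name_match"] = queries.apply(group_name_match_rate, axis=1)
print(queries[["queried_record_id", "group_name_match"]])
```

The same pattern extends to the other fields (email, phone): one match-rate column per field, each usable as an extra feature for the classifier below.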
## Difference between the first score and the second
```
def first_second_score_difference(scores):
res = np.empty(len(scores))
for i in range(len(scores)):
res[i] = scores[i][0] - scores[i][1]
return res
c_train['s0_minus_s1'] = first_second_score_difference(s_train.ordered_scores.values)
c_test['s0_minus_s1'] = first_second_score_difference(s_test.ordered_scores.values)
```
## How many records in restricted_df
```
def restricted_df(s):
restricted_pred = []
max_delta = 2.0
for (q, sc, rec, l) in tqdm(zip(s.queried_record_id, s.ordered_scores, s.ordered_record, s.ordered_linked)):
for x in range(len(sc)):
if x == 0: # The first predicted element is always kept [the one with the highest score]
restricted_pred.append((q, sc[x], rec[x], l[x]))
else:
if x >= 10:
continue
elif (sc[0] - sc[x] < max_delta) or (l[0] == l[x]): # keep predictions whose score is within max_delta of the first one
restricted_pred.append((q, sc[x], rec[x], l[x]))
else:
continue
restricted_df = pd.DataFrame(restricted_pred, columns=['queried_record_id', 'scores', 'predicted_record_id', 'predicted_linked_id'])
return restricted_df
s_train.ordered_record = [eval(x) for x in s_train.ordered_record]
s_test.ordered_record = [eval(x) for x in s_test.ordered_record]
restricted_train = restricted_df(s_train)
restricted_test = restricted_df(s_test)
restricted_train = restricted_train.groupby('queried_record_id').size().reset_index().rename(columns={0:'restricted_size'})
restricted_test = restricted_test.groupby('queried_record_id').size().reset_index().rename(columns={0:'restricted_size'})
c_train = c_train.merge(restricted_train, how='left', on='queried_record_id')
c_test = c_test.merge(restricted_test, how='left', on='queried_record_id')
```
## How many equal linked_id with the same score as the first
# Fit and Predict
```
c_train
# change name of similarities
r = {'cosine_score': 'hybrid_score', 'name_cosine':'name_jaccard', 'email_cosine':'email_jaccard', 'phone_cosine':'phone_jaccard'}
c_train = c_train.rename(columns=r)
c_test = c_test.rename(columns=r)
import lightgbm as lgb
classifier = lgb.LGBMClassifier(max_depth=8, n_estimators=500,reg_alpha=0.2)
cols = ['hybrid_score', 'name_jaccard', 'email_jaccard', 'phone_jaccard', 'nan_field', 'pred_over_all', 'size_real', 'n_diff_linked_id', 'score1']
classifier.fit(c_train[cols], c_train['label'])
preds = classifier.predict(c_test[cols])
preds
c_test['predictions'] = preds
c_test['correct_preds'] = np.where(c_test.label.values == c_test.predictions.values, 1, 0)
acc = c_test['correct_preds'].sum() / c_test.shape[0]
acc
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(c_test.label, c_test.predictions)
cm = cm /c_test.shape[0]
cm
from lightgbm import plot_importance
from matplotlib import pyplot
# plot feature importance
plot_importance(classifier)
pyplot.show()
```
| github_jupyter |
# Testicular Germ Cell Tumors (TGCT)
[Jump to the urls to download the GCT and CLS files](#Downloads)
**Authors:** Alejandra Ramos, Marylu Villa and Edwin Juarez
**Contact info:** Email Edwin at [ejuarez@cloud.ucsd.edu](mailto:ejuarez@cloud.ucsd.edu) or post a question in http://www.genepattern.org/help
This notebook provides the steps to download all the TGCT samples from The Cancer Genome Atlas (TCGA) contained in the Genomic Data Commons (GDC) Data portal. These samples can be downloaded as a GCT file and phenotype labels (primary tumor vs normal samples) can be downloaded as a CLS file. These files are compatible with other GenePattern Analyses.

# Overview
TGCT is a rare disease that is difficult to manage; surgical resection is the primary treatment currently available. To date no disease registry exists, and there is little data available detailing the management of patients with diffuse TGCT, the burden of diffuse TGCT for patients (including pain, joint stiffness, swelling, reduced mobility and quality of life), or the economic impact of diffuse TGCT.
<p><img alt="Resultado de imagen para Testicular Germ Cell Tumors" src="http://nethealthbook.com/wp-content/uploads/2014/11/bigstock-Testicular-Cancer-63378559.jpg" style="width: 667px; height: 500px;" /></p>
# TGCT Statistics
Between 1999 and 2012, TGCT incidence rates, both overall and by histology, were highest among NHWs, followed by Hispanics, Asian/Pacific Islanders, and non-Hispanic blacks. Between 2013 and 2026, rates among Hispanics were forecast to increase annually by 3.96% (95% confidence interval, 3.88%-4.03%), resulting in the highest rate of increase of any racial/ethnic group. By 2026, the highest TGCT rates in the US will be among Hispanics because of increases in both seminomas and nonseminomas. Rates among NHWs will slightly increase, whereas rates among other groups will slightly decrease.
More than 90% of testicular neoplasms originate from germ cells. Testicular germ cell tumors (GCTs) are a heterogeneous group of neoplasms with diverse histopathology and clinical behavior.
<p><img alt="Imagen relacionada" src="https://www.cancerresearchuk.org/sites/default/files/cstream-node/inc_anatomicalsite_testis_0.png" style="width: 478px; height: 500px;" /></p>
https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/testicular-cancer/incidence
# Dataset's Demographic information
<p>TCGA contained 150 TGCT samples (150 primary cancer samples, and 0 normal tissue samples) from 150 people. Below is a summary of the demographic information represented in this dataset. If you are interested in viewing the complete study, as well as the files on the GDC Data Portal, you can follow <a href="https://portal.gdc.cancer.gov/repository?facetTab=cases&filters=%7B%22op%22%3A%22and%22%2C%22content%22%3A%5B%7B%22op%22%3A%22in%22%2C%22content%22%3A%7B%22field%22%3A%22cases.project.project_id%22%2C%22value%22%3A%5B%22TCGA-UVM%22%5D%7D%7D%2C%7B%22op%22%3A%22in%22%2C%22content%22%3A%7B%22field%22%3A%22files.analysis.workflow_type%22%2C%22value%22%3A%5B%22HTSeq%20-%20Counts%22%5D%7D%7D%2C%7B%22op%22%3A%22in%22%2C%22content%22%3A%7B%22field%22%3A%22files.experimental_strategy%22%2C%22value%22%3A%5B%22RNA-Seq%22%5D%7D%7D%5D%7D&searchTableTab=cases" target="_blank">this link</a> (these data were gathered on July 10th, 2018).</p>

# Login to GenePattern
<div class="alert alert-info">
<h3 style="margin-top: 0;"> Instructions <i class="fa fa-info-circle"></i></h3>
<ol>
<li>Login to the *GenePattern Cloud* server.</li>
</ol>
</div>
```
# Requires GenePattern Notebook: pip install genepattern-notebook
import gp
import genepattern
# Username and password removed for security reasons.
genepattern.display(genepattern.session.register("https://gp-beta-ami.genepattern.org/gp", "", ""))
```
# Downloading RNA-Seq HTSeq Counts Using TCGAImporter
Use the TCGAImporter module to download RNA-Seq HTSeq counts from the GDC Data Portal using a Manifest file and a Metadata file
<p><strong>Input files</strong></p>
<ul>
<li><em>Manifest file</em>: a file containing the list of RNA-Seq samples to be downloaded.</li>
<li><em>Metadata file</em>: a file containing information about the files present at the GDC Data Portal. Instructions for downloading the Manifest and Metadata files can be found here: <a href="https://github.com/genepattern/TCGAImporter/blob/master/how_to_download_a_manifest_and_metadata.pdf" target="_blank">https://github.com/genepattern/TCGAImporter/blob/master/how_to_download_a_manifest_and_metadata.pdf</a></li>
</ul>
<p><strong>Output files</strong></p>
<ul>
<li><em>TGCT_TCGA.gct</em> - This is a tab delimited file that contains the gene expression (HTSeq counts) from the samples listed on the Manifest file. For more info on GCT files, look at reference <a href="#References">1</a><em> </em></li>
<li><em><em>TGCT_TCGA.cls</em> -</em> The CLS file defines phenotype labels (in this case Primary Tumor and Normal Sample) and associates each sample in the GCT file with a label. For more info on CLS files, look at reference <a href="#References">2</a></li>
</ul>
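Once downloaded, the GCT file can be loaded with pandas. A sketch (not part of the notebook's GenePattern workflow) that assumes the standard GCT layout: a `#1.2` version line, a dimensions line, then a tab-delimited table whose first two columns are Name and Description:

```python
import io
import pandas as pd

def read_gct(path_or_buffer):
    """Read a GCT expression file: skip the '#1.2' version line and the
    dimensions line, then parse the tab-separated table indexed by gene name."""
    return pd.read_csv(path_or_buffer, sep="\t", skiprows=2, index_col=0)

# tiny in-memory GCT for illustration (gene names and counts are made up)
gct_text = (
    "#1.2\n"
    "2\t2\n"
    "Name\tDescription\tsample_1\tsample_2\n"
    "TP53\tna\t10\t20\n"
    "EGFR\tna\t5\t7\n"
)
expression = read_gct(io.StringIO(gct_text))
```

The same function should work on the `TGCT_TCGA.gct` output, which follows this layout.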
<div class="alert alert-info">
<h3 style="margin-top: 0;"> Instructions <i class="fa fa-info-circle"></i></h3>
<ol>
<li>Load the manifest file in **Manifest** parameter.</li>
<li>Load the metadata file in **Metadata** parameter.</li>
<li>Click **run**.</li>
</ol>
<p><strong>Estimated run time for TCGAImporter</strong>: ~ 2 minutes</p>
</div>
```
tcgaimporter_task = gp.GPTask(genepattern.session.get(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00369')
tcgaimporter_job_spec = tcgaimporter_task.make_job_spec()
tcgaimporter_job_spec.set_parameter("manifest", "https://cloud.genepattern.org/gp/users/marylu257/tmp/run2714052530109264940.tmp/TGCT_manifest.txt")
tcgaimporter_job_spec.set_parameter("metadata", "https://cloud.genepattern.org/gp/users/marylu257/tmp/run2817674009956120503.tmp/TGCT_metadata.json")
tcgaimporter_job_spec.set_parameter("output_file_name", "TGCT_TCGA")
tcgaimporter_job_spec.set_parameter("gct", "True")
tcgaimporter_job_spec.set_parameter("translate_gene_id", "False")
tcgaimporter_job_spec.set_parameter("cls", "True")
genepattern.display(tcgaimporter_task)
job35199 = gp.GPJob(genepattern.session.get(0), 35199)
genepattern.display(job35199)
collapsedataset_task = gp.GPTask(genepattern.session.get(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00134')
collapsedataset_job_spec = collapsedataset_task.make_job_spec()
collapsedataset_job_spec.set_parameter("dataset.file", "https://cloud.genepattern.org/gp/jobResults/31842/TCGA_dataset.gct")
collapsedataset_job_spec.set_parameter("chip.platform", "ftp://ftp.broadinstitute.org/pub/gsea/annotations/ENSEMBL_human_gene.chip")
collapsedataset_job_spec.set_parameter("collapse.mode", "Maximum")
collapsedataset_job_spec.set_parameter("output.file.name", "<dataset.file_basename>.collapsed")
genepattern.display(collapsedataset_task)
job32421 = gp.GPJob(genepattern.session.get(0), 32421)
genepattern.display(job32421)
```
# Downloads
<p>You can download the input and output files of TCGAImporter for this cancer type here:</p>
<p><strong>Inputs:</strong></p>
<ul>
<li><a href="https://datasets.genepattern.org/data/TCGA_HTSeq_counts/TGCT/TGCT_MANIFEST.txt" target="_blank">https://datasets.genepattern.org/data/TCGA_HTSeq_counts/TGCT/TGCT_MANIFEST.txt</a></li>
<li><a href="https://datasets.genepattern.org/data/TCGA_HTSeq_counts/TGCT/TGCT_METADATA.json" target="_blank">https://datasets.genepattern.org/data/TCGA_HTSeq_counts/TGCT/TGCT_METADATA.json</a></li>
</ul>
<p><strong>Outputs:</strong></p>
<ul>
<li><a href="https://datasets.genepattern.org/data/TCGA_HTSeq_counts/TGCT/TGCT_TCGA.gct" target="_blank">https://datasets.genepattern.org/data/TCGA_HTSeq_counts/TGCT/TGCT_TCGA.gct</a></li>
<li><a href="https://datasets.genepattern.org/data/TCGA_HTSeq_counts/TGCT/TGCT_TCGA.cls" target="_blank">https://datasets.genepattern.org/data/TCGA_HTSeq_counts/TGCT/TGCT_TCGA.cls</a></li>
</ul>
If you'd like to download similar files for other TCGA datasets, visit this link:
- https://datasets.genepattern.org/?prefix=data/TCGA_HTSeq_counts/
# References
[1] http://software.broadinstitute.org/cancer/software/genepattern/file-formats-guide#GCT
[2] http://software.broadinstitute.org/cancer/software/genepattern/file-formats-guide#CLS
[3] https://www.ncbi.nlm.nih.gov/pubmed/17683189
[4] https://clinicaltrials.gov/ct2/show/NCT02948088
[5] https://www.naaccr.org/future-testicular-germ-cell-tumor-incidence-united-states-forecast-2026/
[6] https://www.google.com/search?q=Testicular+Germ+Cell+Tumors+statistics+graphic&tbm=isch&tbs=rimg:CTVE1ZAWbDXkIjgOcLH5zM-glUeTdH2v2ldCObtVUkYMakd2xJVWzFyT1QfRVQOSqvvgSn9vagGWrwayZhxvvN-X6yoSCQ5wsfnMz6CVEX35piL41ePpKhIJR5N0fa_1aV0IRtFqwYO_1GXMAqEgk5u1VSRgxqRxFzOEuwTDZX2yoSCXbElVbMXJPVEYJ1h0Lj6hdLKhIJB9FVA5Kq--ARReXhVLQ1IQEqEglKf29qAZavBhFkwbSrY-rI4SoSCbJmHG-835frEVBDmU07LcQ8&tbo=u&sa=X&ved=2ahUKEwj9vvCd5ojcAhWorVQKHZNfC5AQ9C96BAgBEBs&biw=1366&bih=635&dpr=1#imgrc=smYcb7zfl-uEPM:
| github_jupyter |
# Analysis of dog breeds
<img src="slika.jpg">
In this project I analyze 369 dog breeds. For each breed I present its name, the country it originates from, its size, its average lifespan, the price of a puppy, its popularity, and its temperament. Missing values in the data are marked with the character '/'.
```
# load the pandas package
import pandas as pd
# load the plotting package
import matplotlib.pyplot as plt
# load the table we will be working with
pasme_psov = pd.read_csv('obdelani-podatki/vse_pasme.csv')
# since the tables are large, display at most 20 rows
pd.options.display.max_rows = 20
```
First, let us look at the data table. It contains the breed name, country of origin, height, size, popularity, and price.
```
pasme_psov.head(10)
z_drzavo = pasme_psov['drzava']!='/'
pasme_z_drzavo = pasme_psov[z_drzavo]
pasme_psov_drzava = pasme_z_drzavo.groupby('drzava')
prikaz_drzav = pasme_psov_drzava.size().sort_values().tail(13)
prikaz_drzav.plot.pie(figsize=(9, 5), fontsize=10, title =
'Število pasem po državah').set(ylabel = '')
```
The pie chart shows each country's share of the breeds. We can see that most breeds originate from Great Britain and the United States, followed by France and Germany.
```
pasme_drzav = pasme_psov.groupby('drzava')
pasme_drzav.size().sort_values().head(10)
```
The countries with the fewest breeds are Israel, Macedonia, Madagascar, and Morocco. This is not surprising, since dogs are a far more popular pet in America and the European countries.
```
najbolj_priljubljene = pasme_psov.head(100)
najbolj_priljubljene['popularnost'] = najbolj_priljubljene['popularnost'].apply(int)
najbolj_priljubljene_pasme = najbolj_priljubljene.groupby('drzava')
top_10 = najbolj_priljubljene_pasme.size().sort_values(ascending = False).head(10)
top_10.plot.bar(figsize=(9,5), fontsize=10, title = 'Najbolj priljubljene pasme psov po poreklu ').set(ylabel =
'Število najbolj priljubljenih pasem', xlabel = 'Država porekla')
```
For this analysis we took the 100 most popular breeds worldwide. The fact that most of the popular breeds come from Great Britain, the USA, and Germany is not surprising, as these are the countries with the largest numbers of breeds.
```
celo_razmerje = (najbolj_priljubljene_pasme.size() / pasme_drzav.size()).sort_values()
plt.subplot(1, 2, 1)
celo_razmerje[:27].sort_values().plot.barh(figsize=(10,5), fontsize=6,
title = 'Razmerje med priljubljenostjo in številom pasem').set(ylabel = '',
xlabel = 'Razmerje'), plt.subplot(1, 2, 2)
celo_razmerje[:27].sort_values().plot.box(figsize=(10,5), fontsize=6,
color = 'r', title = 'Skupno razmerje')
```
The weakest supply of popular breeds is therefore found in Sweden, the Czech Republic, and Spain. For the Czech Republic this is not surprising, since it has only 5 breeds, whereas Spain and Sweden have more breeds, but evidently very unpopular ones.
On the other hand, the best ratio is found in countries such as Tibet, Afghanistan, Israel, and Madagascar, which I find surprising, since all of these countries have very few breeds: Afghanistan and Madagascar each have only one, but it is a very popular one.
```
velikost_psov = pasme_psov.groupby('velikost')
povprecja_zivljenske_dobe = velikost_psov['zivljenska_doba'].mean()
povprecja_zivljenske_dobe.plot.bar(fontsize=10, title =
'Povprečna življenska doba psov glede na velikost').set(ylabel = 'Življenska doba v letih',
xlabel = 'Velikost')
```
The results confirm the well-known conjecture that small dogs live longer. It is nevertheless surprising that a dog's lifespan and size are in fact inversely related.
```
s_ceno = pasme_psov['cena'] !='/'
pasme_s_ceno = pasme_psov[s_ceno]
pasme_s_ceno.loc[:, 'cena'] = pasme_s_ceno['cena'].apply(int)
velikost_psov = pasme_s_ceno.groupby('velikost')
povprecja_cene = velikost_psov['cena'].mean()
povprecja_cene.plot.bar(fontsize = 10, title =
'Povprečna cena psov glede na velikost').set(ylabel = 'Cena mladička v USD',
xlabel = 'Velikost')
s_ceno = pasme_psov['cena'] !='/'
pasme_s_ceno = pasme_psov[s_ceno]
pasme_s_ceno.loc[:, 'cena'] = pasme_s_ceno['cena'].apply(int)
z_visino = pasme_s_ceno['visina'] != '/'
skupne_pasme = pasme_s_ceno[z_visino]
skupne_pasme.loc[:, 'visina'] = skupne_pasme['visina'].astype('float64')
skupne_pasme[['visina', 'cena']].plot.scatter(x = 'visina', y = 'cena', title =
'Graf cene pasme glede na njeno višino').set(ylabel = 'Cena mladička v USD',
xlabel = 'Višina v inčih')
```
We can see that very large dogs are the most expensive, but interestingly, small dogs are on average more expensive than medium-sized ones. This probably has to do with the prestige of the smaller breeds.
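The height/price scatter above can be condensed into a single correlation coefficient. A sketch on a tiny synthetic frame (the real notebook would use the `skupne_pasme` frame instead; the numbers below are made up):

```python
import pandas as pd

# tiny synthetic stand-in for the skupne_pasme frame (illustration only)
frame = pd.DataFrame({
    "visina": [8.0, 12.0, 20.0, 26.0, 30.0],   # height in inches
    "cena":   [900, 600, 700, 1100, 1500],     # puppy price in USD
})
# Pearson correlation between height and price
correlation = frame["visina"].corr(frame["cena"])
```

A value close to +1 would indicate that taller breeds are systematically pricier; the scatter plot suggests the relationship is positive but not perfectly linear.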
```
najbolj_priljubljene = pasme_psov.head(190)
najbolj_priljubljene.loc[:, 'cena'] = najbolj_priljubljene['cena'].apply(int)
najbolj_priljubljene.sort_values('cena', ascending=False)
najbolj_priljubljene_pasme = najbolj_priljubljene.groupby('cena')
cena_pop = najbolj_priljubljene_pasme.size()
cena_pop.plot.bar(figsize=(9,5), fontsize=10, title =
'Najbolj priljubljene pasme psov po ceni ').set(ylabel =
'Število najbolj priljubljenih pasem', xlabel = 'Cena mladička v USD')
```
The fact that a breed's popularity and its price are not strongly related surprised me. The most expensive dogs are not the most popular, yet some people are still willing to pay up to 4000 dollars for their favourite breed. The chart shows two outstanding values, at prices of around 1000 and 2000 dollars; most of the popular breeds evidently fall in this price range.
For the temperament data I prepared a separate table, since the first table would otherwise hold too much data. Here I import the temperament table and add an example of how the temperaments of one breed are represented.
```
znacaji = pd.read_csv('obdelani-podatki/znacaji_psov.csv')
znacaji[znacaji['ime'] == 'Maltese']
znacaj_pop = najbolj_priljubljene.merge(znacaji, on='ime')
znacaj_pop = znacaj_pop.loc[:, ['znacaj', 'popularnost']]
najbolj_priljubljene = pasme_psov.head(189)
najbolj_priljubljene['popularnost'] = najbolj_priljubljene['popularnost'].apply(int)
najbolj_priljubljeni_znacaji = znacaj_pop.groupby('znacaj')
top_znacaji = najbolj_priljubljeni_znacaji.size().sort_values(ascending = False).head(14)
top_znacaji.plot.bar(figsize=(9,5), fontsize=10, title =
'Najbolj priljubljeni značaji psov').set(ylabel = 'Število pasem', xlabel = 'Značaj')
```
The chart shows the distribution of the most popular dog temperaments. Intelligence is by far in first place, which I found very unusual, given that in man's best friend we usually look for traits such as friendliness, which only comes third. Loyal, alert, and playful dogs are also very popular.
```
slovar_naslovov = {'Small' : 'Značaji majhnih psov', 'Medium' : 'Značaji psov srednje velikosti', 'Large' : 'Značaji velikih psov',
'Giant' : 'Značaji ogromnih psov'}
znacaji_velikost = pasme_psov.merge(znacaji, on='ime')
for size in ['Small', 'Medium', 'Large', 'Giant']:
majhni = znacaji_velikost['velikost'] == size
majhni_psi = znacaji_velikost[majhni]
majhni_psi = majhni_psi.groupby('znacaj')
majhni_znacaj = majhni_psi.size().sort_values(ascending = False).head(14)
majhni_znacaj.plot.bar(figsize=(9,5), fontsize=10, title = slovar_naslovov[size]).set(ylabel
='Število pasem', xlabel = 'Značaj')
plt.show()
```
The final comparison is between the temperaments of dogs of different sizes. Small dogs are less loyal and more affectionate than the others, while larger breeds are very loyal and gentle. Medium-sized breeds are above all intelligent and receptive to new things and tricks. The smallest dogs are the most active, whereas larger dogs are calmer and less playful. The giant breeds are very affectionate and patient, and medium-height dogs have enormous energy.
## Conclusion
In this analysis I confirmed many widely known facts, for example that small dogs have a longer lifespan. On the other hand, some findings also surprised me, for example that the price and popularity of dogs are not strongly related. I also found the analysis of temperament by size interesting, where I discovered that the personality of our pet differs greatly with its size.
| github_jupyter |
Train a simple deep CNN on the CIFAR10 small images dataset.
It gets to 75% validation accuracy in 25 epochs, and 79% after 50 epochs.
(it's still underfitting at that point, though)
```
# https://gist.github.com/deep-diver
import warnings;warnings.filterwarnings('ignore')
from tensorflow import keras
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.optimizers import RMSprop
import os
batch_size = 32
num_classes = 10
epochs = 100
num_predictions = 20
save_dir = os.path.join(os.getcwd(), 'saved_models')
model_name = 'keras_cifar10_trained_model.h5'
# The data, split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same',
input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
# initiate RMSprop optimizer
opt = RMSprop(learning_rate=0.0001, decay=1e-6)
# Let's train the model using RMSprop
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
# Save model and weights
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
model_path = os.path.join(save_dir, model_name)
model.save(model_path)
print('Saved trained model at %s ' % model_path)
# Score trained model.
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
```
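Note that `ImageDataGenerator` is imported above but never used. Simple augmentation such as random horizontal flips can also be sketched in plain numpy before the batches reach `model.fit` (a sketch, not part of the original script):

```python
import numpy as np

def random_horizontal_flip(batch, rng):
    """Flip each image in an (n, h, w, c) batch left-to-right with probability 0.5."""
    flip_mask = rng.random(batch.shape[0]) < 0.5
    out = batch.copy()
    # reverse the width axis only for the selected images
    out[flip_mask] = out[flip_mask, :, ::-1, :]
    return out

# demo on a random batch shaped like CIFAR10 inputs
rng = np.random.default_rng(0)
batch = rng.random((4, 32, 32, 3)).astype("float32")
augmented = random_horizontal_flip(batch, rng)
```

In practice `ImageDataGenerator(horizontal_flip=True)` (or `tf.image.random_flip_left_right`) does the same job inside the input pipeline.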
| github_jupyter |
##### Copyright 2020 The TensorFlow IO Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Reading PostgreSQL database from TensorFlow IO
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/io/tutorials/postgresql"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/io/blob/master/docs/tutorials/postgresql.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/io/blob/master/docs/tutorials/postgresql.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/postgresql.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
This tutorial shows how to create a `tf.data.Dataset` from a PostgreSQL database server, so that the created `Dataset` can be passed to `tf.keras` for training or inference.
A SQL database is an important source of data for data scientists. As one of the most popular open source SQL databases, [PostgreSQL](https://www.postgresql.org) is widely used in enterprises for storing critical and transactional data. Creating a `Dataset` from a PostgreSQL database server directly and passing it to `tf.keras` for training or inference can greatly simplify the data pipeline and help data scientists focus on building machine learning models.
## Setup and usage
### Install required tensorflow-io packages, and restart runtime
```
try:
%tensorflow_version 2.x
except Exception:
pass
!pip install tensorflow-io
```
### Install and setup PostgreSQL (optional)
**Warning: This notebook is designed to be run in a Google Colab only**. *It installs packages on the system and requires sudo access. If you want to run it in a local Jupyter notebook, please proceed with caution.*
To demo the usage on Google Colab you will install a PostgreSQL server; a password and an empty database are also needed.
If you are not running this notebook on Google Colab, or you prefer to use an existing database, please skip the following setup and proceed to the next section.
```
# Install postgresql server
!sudo apt-get -y -qq update
!sudo apt-get -y -qq install postgresql
!sudo service postgresql start
# Setup a password `postgres` for username `postgres`
!sudo -u postgres psql -U postgres -c "ALTER USER postgres PASSWORD 'postgres';"
# Setup a database with name `tfio_demo` to be used
!sudo -u postgres psql -U postgres -c 'DROP DATABASE IF EXISTS tfio_demo;'
!sudo -u postgres psql -U postgres -c 'CREATE DATABASE tfio_demo;'
```
### Setup necessary environmental variables
The following environmental variables are based on the PostgreSQL setup in the last section. If you have a different setup or you are using an existing database, they should be changed accordingly:
```
%env TFIO_DEMO_DATABASE_NAME=tfio_demo
%env TFIO_DEMO_DATABASE_HOST=localhost
%env TFIO_DEMO_DATABASE_PORT=5432
%env TFIO_DEMO_DATABASE_USER=postgres
%env TFIO_DEMO_DATABASE_PASS=postgres
```
### Prepare data in PostgreSQL server
For demo purposes this tutorial will populate the database with some data. The data used in this tutorial is from the [Air Quality Data Set](https://archive.ics.uci.edu/ml/datasets/Air+Quality), available from the [UCI Machine Learning Repository](http://archive.ics.uci.edu/ml).
Below is a sneak preview of a subset of the Air Quality Data Set:
Date|Time|CO(GT)|PT08.S1(CO)|NMHC(GT)|C6H6(GT)|PT08.S2(NMHC)|NOx(GT)|PT08.S3(NOx)|NO2(GT)|PT08.S4(NO2)|PT08.S5(O3)|T|RH|AH|
----|----|------|-----------|--------|--------|-------------|----|----------|-------|------------|-----------|-|--|--|
10/03/2004|18.00.00|2,6|1360|150|11,9|1046|166|1056|113|1692|1268|13,6|48,9|0,7578|
10/03/2004|19.00.00|2|1292|112|9,4|955|103|1174|92|1559|972|13,3|47,7|0,7255|
10/03/2004|20.00.00|2,2|1402|88|9,0|939|131|1140|114|1555|1074|11,9|54,0|0,7502|
10/03/2004|21.00.00|2,2|1376|80|9,2|948|172|1092|122|1584|1203|11,0|60,0|0,7867|
10/03/2004|22.00.00|1,6|1272|51|6,5|836|131|1205|116|1490|1110|11,2|59,6|0,7888|
More information about the Air Quality Data Set and the UCI Machine Learning Repository is available in the [References](#references) section.
To help simplify the data preparation, a sql version of the Air Quality Data Set has been prepared and is available as [AirQualityUCI.sql](https://github.com/tensorflow/io/blob/master/docs/tutorials/postgresql/AirQualityUCI.sql).
The statement to create the table is:
```
CREATE TABLE AirQualityUCI (
Date DATE,
Time TIME,
CO REAL,
PT08S1 INT,
NMHC REAL,
C6H6 REAL,
PT08S2 INT,
NOx REAL,
PT08S3 INT,
NO2 REAL,
PT08S4 INT,
PT08S5 INT,
T REAL,
RH REAL,
AH REAL
);
```
The complete commands to create the table in database and populate the data are:
```
!curl -s -OL https://github.com/tensorflow/io/raw/master/docs/tutorials/postgresql/AirQualityUCI.sql
!PGPASSWORD=$TFIO_DEMO_DATABASE_PASS psql -q -h $TFIO_DEMO_DATABASE_HOST -p $TFIO_DEMO_DATABASE_PORT -U $TFIO_DEMO_DATABASE_USER -d $TFIO_DEMO_DATABASE_NAME -f AirQualityUCI.sql
```
### Create Dataset from PostgreSQL server and use it in TensorFlow
Creating a `Dataset` from a PostgreSQL server is as easy as calling `tfio.experimental.IODataset.from_sql` with `query` and `endpoint` arguments. The `query` is the SQL query that selects columns from a table, and the `endpoint` argument is the server address and database name:
```
import os
import tensorflow_io as tfio
endpoint="postgresql://{}:{}@{}?port={}&dbname={}".format(
os.environ['TFIO_DEMO_DATABASE_USER'],
os.environ['TFIO_DEMO_DATABASE_PASS'],
os.environ['TFIO_DEMO_DATABASE_HOST'],
os.environ['TFIO_DEMO_DATABASE_PORT'],
os.environ['TFIO_DEMO_DATABASE_NAME'],
)
dataset = tfio.experimental.IODataset.from_sql(
query="SELECT co, pt08s1 FROM AirQualityUCI;",
endpoint=endpoint)
print(dataset.element_spec)
```
As you can see from the output of `dataset.element_spec` above, each element of the created `Dataset` is a Python dict whose keys are the column names of the database table.
This makes it convenient to apply further operations. For example, you could select both the `nox` and `no2` fields of the `Dataset` and calculate their difference:
```
dataset = tfio.experimental.IODataset.from_sql(
query="SELECT nox, no2 FROM AirQualityUCI;",
endpoint=endpoint)
dataset = dataset.map(lambda e: (e['nox'] - e['no2']))
# check only the first 20 record
dataset = dataset.take(20)
print("NOx - NO2:")
for difference in dataset:
print(difference.numpy())
```
The created `Dataset` is now ready to be passed directly to `tf.keras` for training or inference.
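The `map` step above is just an element-wise transform. Its effect can be sketched with plain Python dicts standing in for the `Dataset` elements (the values below come from the first two rows of the preview table earlier in this tutorial):

```python
# each Dataset element is a dict keyed by column name; map applies a
# function to every element, here the NOx - NO2 difference
rows = [
    {"nox": 166.0, "no2": 113.0},
    {"nox": 103.0, "no2": 92.0},
]
differences = [row["nox"] - row["no2"] for row in rows]
```

The real pipeline does the same thing lazily, element by element, without materialising the whole table in memory.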
## References
- Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
- S. De Vito, E. Massera, M. Piga, L. Martinotto, G. Di Francia, On field calibration of an electronic nose for benzene estimation in an urban pollution monitoring scenario, Sensors and Actuators B: Chemical, Volume 129, Issue 2, 22 February 2008, Pages 750-757, ISSN 0925-4005
| github_jupyter |
# H2O workflow
## Imports
```
import sys
import os
sys.path.append(os.path.split(os.path.split(os.getcwd())[0])[0])
config_filepath ="./fit_config_h2o.json"
import uuid
import json
import datetime
import getpass
from mercury_ml.common import tasks
from mercury_ml.common import utils
from mercury_ml.common import containers as common_containers
from mercury_ml.h2o import containers as h2o_containers
#For testing purposes only!
if os.path.isdir("./example_results"):
import shutil
shutil.rmtree("./example_results")
```
## Helpers
These functions will help with the flow of this particular notebook
```
def print_data_bunch(data_bunch):
for data_set_name, data_set in data_bunch.__dict__.items():
print("{} <{}>".format(data_set_name, type(data_set).__name__))
for data_wrapper_name, data_wrapper in data_set.__dict__.items():
print(" {} <{}>".format(data_wrapper_name, type(data_wrapper).__name__))
print()
def maybe_transform(data_bunch, pre_execution_parameters):
if pre_execution_parameters:
return data_bunch.transform(**pre_execution_parameters)
else:
return data_bunch
def print_dict(d):
print(json.dumps(d, indent=2))
def get_installed_packages():
import pip
try:
from pip._internal.operations import freeze
except ImportError: # pip < 10.0
from pip.operations import freeze
packages = []
for p in freeze.freeze():
packages.append(p)
return packages
```
## Config
#### Load config
```
config = utils.load_referenced_json_config(config_filepath)
print_dict(config)
```
#### Set model_id
```
#model_id = str(uuid.uuid4().hex) #unique. CAREFUL! This should be consciously run when you wish to create a new iteration of a model
model_id = str(uuid.uuid4().hex)[:8] # this is *nearly* unique, but shorter (for illustration only!)
```
#### Update config
The function `utils.recursively_update_config(config, string_formatting_dict)` allows us to use string formatting to replace placeholder strings with actual values.
for example:
```python
>>> config = {"some_value": "some_string_{some_placeholder}"}
>>> string_formatting_dict = {"some_placeholder": "ABC"}
>>> utils.recursively_update_config(config, string_formatting_dict)
>>> print(config)
{"some_value": "some_string_ABC"}
```
First update `config["meta_info"]`
```
utils.recursively_update_config(config["meta_info"], {
"model_id": model_id,
"model_purpose": config["meta_info"]["model_purpose"]
})
```
Then use `config["meta_info"]` to update the rest.
```
utils.recursively_update_config(config, config["meta_info"])
```
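A minimal sketch of what such a recursive update might look like (the actual `mercury_ml.common.utils` implementation may differ, e.g. in how missing placeholders are handled):

```python
def recursively_update(config, fmt):
    """Walk nested dicts/lists and apply str.format to every string leaf.

    A placeholder missing from fmt would raise KeyError here; the real
    utility may handle that case differently.
    """
    items = config.items() if isinstance(config, dict) else enumerate(config)
    for key, value in items:
        if isinstance(value, str):
            config[key] = value.format(**fmt)
        elif isinstance(value, (dict, list)):
            recursively_update(value, fmt)

config_example = {"some_value": "some_string_{some_placeholder}",
                  "nested": ["{some_placeholder}_suffix"]}
recursively_update(config_example, {"some_placeholder": "ABC"})
```

After the call, every string leaf has its placeholders filled in, at any nesting depth.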
## Session
Create a small dictionary with the session information. This will later be stored as a dictionary artifact with all the key run information.
```
session = {
"time_stamp": datetime.datetime.utcnow().isoformat()[:-3] + "Z",
"run_by": getpass.getuser(),
"meta_info": config["meta_info"],
"installed_packages": get_installed_packages()
}
print("Session info")
print(json.dumps(session, indent=2))
```
## Initialization
These are the functions and classes we will be using in this workflow. We get or instantiate them all at the beginning, using the parameters under `config["init"]`.
Here we mainly use `getattr` to fetch them via the `containers` module, based on a string input in the config file. Providers could also be fetched directly; the following three methods are all equivalent:
```python
# 1. (what we are using in this notebook)
from mercury_ml.common import containers as common_containers
source_reader = getattr(common_containers.SourceReaders, "read_pandas_data_set")
# 2.
from mercury_ml.common import containers as common_containers
source_reader = common_containers.SourceReaders.read_pandas_data_set
# 3.
from mercury_ml.common.providers.source_reading import read_pandas_data_set
source_reader = read_pandas_data_set
```
### Helpers
These helper functions instantiate class providers (`create_and_log`) or fetch function providers (`get_and_log`) based on the parameters provided.
```
def create_and_log(container, class_name, params):
provider = getattr(container, class_name)(**params)
print("{}.{}".format(container.__name__, class_name))
print("params: ", json.dumps(params, indent=2))
return provider
def get_and_log(container, function_name):
provider = getattr(container, function_name)
print("{}.{}".format(container.__name__, function_name))
return provider
```
### Common
These are providers that are universally relevant, regardless of which Machine Learning engine is used.
```
# a function for storing dictionary artifacts to local disk
store_artifact_locally = get_and_log(common_containers.LocalArtifactStorers,
config["init"]["store_artifact_locally"]["name"])
# a function for storing data-frame-like artifacts to local disk
store_prediction_artifact_locally = get_and_log(common_containers.LocalArtifactStorers,
config["init"]["store_prediction_artifact_locally"]["name"])
# a function for copying artifacts from local disk to a remote store
copy_from_local_to_remote = get_and_log(common_containers.ArtifactCopiers, config["init"]["copy_from_local_to_remote"]["name"])
# a function for reading source data. When called it will return an instance of type DataBunch
read_source_data_set = get_and_log(common_containers.SourceReaders, config["init"]["read_source_data"]["name"])
# a dictionary of functions that calculate custom metrics
custom_metrics_dict = {
custom_metric_name: get_and_log(common_containers.CustomMetrics, custom_metric_name) for custom_metric_name in config["init"]["custom_metrics"]["names"]
}
# a dictionary of functions that calculate custom label metrics
custom_label_metrics_dict = {
custom_label_metric_name: get_and_log(common_containers.CustomLabelMetrics, custom_label_metric_name) for custom_label_metric_name in config["init"]["custom_label_metrics"]["names"]
}
```
### H2O
```
# a function to initiate the h2o (or h2o sparkling) session
initiate_session = get_and_log(h2o_containers.SessionInitiators, config["init"]["initiate_session"]["name"])
# fetch a built-in h2o model
model = get_and_log(h2o_containers.ModelDefinitions,
config["init"]["model_definition"]["name"])(**config["init"]["model_definition"]["params"])
# a function that fits an h2o model
fit = get_and_log(h2o_containers.ModelFitters, config["init"]["fit"]["name"])
# a dictionary of functions that save h2o models in various formats
save_model_dict = {
save_model_function_name: get_and_log(h2o_containers.ModelSavers, save_model_function_name) for save_model_function_name in config["init"]["save_model"]["names"]
}
# a function that generates metrics from an h2o model
evaluate = get_and_log(h2o_containers.ModelEvaluators, config["init"]["evaluate"]["name"])
# a function that generates threshold metrics from an h2o model
evaluate_threshold_metrics = get_and_log(h2o_containers.ModelEvaluators, config["init"]["evaluate_threshold_metrics"]["name"])
# a function that produces predictions using an h2o model
predict = get_and_log(h2o_containers.PredictionFunctions, config["init"]["predict"]["name"])
```
## Execution
Here we use the providers defined above to execute various tasks
### Save (formatted) config
```
tasks.store_artifacts(store_artifact_locally, copy_from_local_to_remote, config,
**config["exec"]["save_formatted_config"]["params"])
print("Config stored with following parameters")
print_dict(config["exec"]["save_formatted_config"]["params"])
```
### Save Session
##### Save session info
```
tasks.store_artifacts(store_artifact_locally, copy_from_local_to_remote, session,
**config["exec"]["save_session"]["params"])
print("Session dictionary stored with following parameters")
print_dict(config["exec"]["save_session"]["params"])
```
##### Save session artifacts
```
for filename in config["exec"]["save_session_artifacts"]["params"]["filenames"]:
# save to local artifact store
common_containers.ArtifactCopiers.copy_from_disk_to_disk(source_dir=os.getcwd(),
target_dir=config["exec"]["save_session_artifacts"]["params"]["local_dir"],
filename=filename,
overwrite=False,
delete_source=False)
# copy to remote artifact store
copy_from_local_to_remote(source_dir=config["exec"]["save_session_artifacts"]["params"]["local_dir"],
target_dir=config["exec"]["save_session_artifacts"]["params"]["remote_dir"],
filename=filename,
overwrite=False,
delete_source=False)
print("Session artifacts stored with following parameters")
print_dict(config["exec"]["save_session_artifacts"]["params"])
```
### Start H2O
```
initiate_session(**config["exec"]["initiate_session"]["params"])
```
### Get source data
```
data_bunch_source = tasks.read_train_valid_test_data_bunch(read_source_data_set,**config["exec"]["read_source_data"]["params"] )
print("Source data read using following parameters: \n")
print_dict(config["exec"]["read_source_data"]["params"])
print("Read data_bunch consists of: \n")
print_data_bunch(data_bunch_source)
```
### Fit model
##### Transform data
```
data_bunch_fit = maybe_transform(data_bunch_source, config["exec"]["fit"].get("pre_execution_transformation"))
print("Data transformed with following parameters: \n")
print_dict(config["exec"]["fit"].get("pre_execution_transformation"))
print("Transformed data_bunch consists of: \n")
print_data_bunch(data_bunch_fit)
```
##### Perform fitting
```
model = fit(model = model,
data_bunch = data_bunch_fit,
**config["exec"]["fit"]["params"])
```
### Save model
```
for model_format, save_model in save_model_dict.items():
tasks.store_model(save_model=save_model,
model=model,
copy_from_local_to_remote = copy_from_local_to_remote,
**config["exec"]["save_model"][model_format]
)
print("Model saved with following parameters: \n")
print_dict(config["exec"]["save_model"])
```
### Evaluate metrics
##### Transform data
```
data_bunch_metrics = maybe_transform(data_bunch_fit, config["exec"]["evaluate"].get("pre_execution_transformation"))
print("Data transformed with following parameters: \n")
print_dict(config["exec"]["evaluate"].get("pre_execution_transformation"))
print("Transformed data_bunch consists of: \n")
print_data_bunch(data_bunch_metrics)
```
##### Calculate metrics
```
metrics = {}
for data_set_name in config["exec"]["evaluate"]["data_set_names"]:
data_set = getattr(data_bunch_metrics, data_set_name)
metrics[data_set_name] = evaluate(model, data_set, data_set_name, **config["exec"]["evaluate"]["params"])
print("Resulting metrics: \n")
print_dict(metrics)
```
##### Calculate threshold metrics
```
threshold_metrics = {}
for data_set_name in config["exec"]["evaluate"]["data_set_names"]:
data_set = getattr(data_bunch_metrics, data_set_name)
threshold_metrics[data_set_name] = evaluate_threshold_metrics(model, data_set, data_set_name,
**config["exec"]["evaluate_threshold_metrics"]["params"])
print("Resulting metrics: \n")
print_dict(threshold_metrics)
```
### Save metrics
```
for data_set_name, params in config["exec"]["save_metrics"]["data_sets"].items():
tasks.store_artifacts(store_artifact_locally, copy_from_local_to_remote, metrics[data_set_name], **params)
for data_set_name, params in config["exec"]["save_threshold_metrics"]["data_sets"].items():
    tasks.store_artifacts(store_artifact_locally, copy_from_local_to_remote, threshold_metrics[data_set_name], **params)
```
### Predict
##### Transform data
```
data_bunch_predict = maybe_transform(data_bunch_metrics, config["exec"]["predict"].get("pre_execution_transformation"))
print("Data transformed with following parameters: \n")
print_dict(config["exec"]["predict"].get("pre_execution_transformation"))
print("Transformed data_bunch consists of: \n")
print_data_bunch(data_bunch_predict)
```
##### Perform prediction
```
for data_set_name in config["exec"]["predict"]["data_set_names"]:
data_set = getattr(data_bunch_predict, data_set_name)
data_set.predictions = predict(model=model, data_set=data_set, **config["exec"]["predict"]["params"])
print("Data predicted with following parameters: \n")
print_dict(config["exec"]["predict"].get("params"))
```
### Evaluate custom metrics
##### Transform data
```
data_bunch_custom_metrics = maybe_transform(data_bunch_predict,
config["exec"]["evaluate_custom_metrics"].get("pre_execution_transformation"))
print("Data transformed with following parameters: \n")
print_dict(config["exec"]["evaluate_custom_metrics"].get("pre_execution_transformation"))
print("Transformed data_bunch consists of: \n")
print_data_bunch(data_bunch_custom_metrics)
```
##### Calculate custom metrics
```
custom_metrics = {}
for data_set_name in config["exec"]["evaluate_custom_metrics"]["data_set_names"]:
data_set = getattr(data_bunch_custom_metrics, data_set_name)
custom_metrics[data_set_name] = tasks.evaluate_metrics(data_set, custom_metrics_dict)
print("Resulting custom metrics: \n")
print_dict(custom_metrics)
```
##### Calculate custom label metrics
```
custom_label_metrics = {}
for data_set_name in config["exec"]["evaluate_custom_label_metrics"]["data_set_names"]:
data_set = getattr(data_bunch_custom_metrics, data_set_name)
custom_label_metrics[data_set_name] = tasks.evaluate_label_metrics(data_set, custom_label_metrics_dict)
print("Resulting custom label metrics: \n")
print_dict(custom_label_metrics)
for data_set_name, params in config["exec"]["save_custom_metrics"]["data_sets"].items():
tasks.store_artifacts(store_artifact_locally, copy_from_local_to_remote,
custom_metrics[data_set_name], **params)
print("Custom metrics saved with following parameters: \n")
print_dict(config["exec"]["save_custom_metrics"])
for data_set_name, params in config["exec"]["save_custom_label_metrics"]["data_sets"].items():
tasks.store_artifacts(store_artifact_locally, copy_from_local_to_remote,
custom_label_metrics[data_set_name], **params)
print("Custom label metrics saved with following parameters: \n")
print_dict(config["exec"]["save_custom_label_metrics"])
```
### Prepare predictions for storage
##### Transform data
```
data_bunch_prediction_preparation = maybe_transform(data_bunch_predict,
config["exec"]["prepare_predictions_for_storage"].get("pre_execution_transformation"))
print("Transformed data_bunch consists of: \n")
print_data_bunch(data_bunch_prediction_preparation)
```
##### Prepare predictions and targets
```
for data_set_name in config["exec"]["prepare_predictions_for_storage"]["data_set_names"]:
data_set = getattr(data_bunch_prediction_preparation, data_set_name)
data_set.add_data_wrapper_via_concatenate(**config["exec"]["prepare_predictions_for_storage"]["params"]["predictions"])
data_set.add_data_wrapper_via_concatenate(**config["exec"]["prepare_predictions_for_storage"]["params"]["targets"])
print_data_bunch(data_bunch_prediction_preparation)
```
### Save predictions
##### Transform data
```
data_bunch_prediction_storage = maybe_transform(data_bunch_prediction_preparation,
config["exec"]["save_predictions"].get("pre_execution_transformation"))
print("Transformed data_bunch consists of: \n")
print_data_bunch(data_bunch_prediction_storage)
```
##### Save predictions
```
for data_set_name, data_set_params in config["exec"]["save_predictions"]["data_sets"].items():
data_set = getattr(data_bunch_prediction_storage, data_set_name)
data_wrapper = getattr(data_set, data_set_params["data_wrapper_name"])
data_to_store = data_wrapper.underlying
tasks.store_artifacts(store_prediction_artifact_locally, copy_from_local_to_remote,
data_to_store, **data_set_params["params"])
print("Predictions saved with following parameters: \n")
print_dict(config["exec"]["save_predictions"])
```
##### Save targets
```
for data_set_name, data_set_params in config["exec"]["save_targets"]["data_sets"].items():
data_set = getattr(data_bunch_prediction_storage, data_set_name)
data_wrapper = getattr(data_set, data_set_params["data_wrapper_name"])
data_to_store = data_wrapper.underlying
tasks.store_artifacts(store_prediction_artifact_locally, copy_from_local_to_remote,
data_to_store, **data_set_params["params"])
print("Targets saved with following parameters: \n")
print_dict(config["exec"]["save_targets"])
```
# HW 6 Statistics and probability homework
Complete the homework notebook in a homework directory with your name, zip up the homework directory, and submit it to our class Blackboard/eLearn site.
Complete all the parts 6.1 to 6.5 for a score of 3.
Investigate plotting, linear regression, or complex matrix manipulation to get a score of 4, or cover two additional investigations for a score of 5.
## 6.1 Coin flipping
## 6.1.1
Write a function, flip_sum, which generates $n$ random coin flips from a fair coin and then returns the number of heads.
A fair coin is defined to be a coin where $P($heads$)=\frac{1}{2}$
The output type should be a numpy integer. Hint: use random.rand().
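A minimal sketch of one possible implementation (assuming the hint's `numpy.random.rand`, where each draw below 0.5 counts as heads):

```python
import numpy as np

def flip_sum(n):
    # n draws from Uniform(0, 1); a draw below 0.5 counts as heads,
    # so the sum of the boolean array is the number of heads (a numpy integer)
    return (np.random.rand(n) < 0.5).sum()
```

For 6.1.2, a check could then be as simple as `print(flip_sum(100))`.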
## 6.1.2 Test it
Check it by showing the results of 100 coins being flipped
## 6.1.3 Create and display a histogram of 200 experiments of flipping 5 coins.
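One hedged way to do this (simulating the flips inline so the cell stands on its own):

```python
import numpy as np
import matplotlib.pyplot as plt

# 200 experiments, each counting the heads in 5 fair-coin flips
results = [int((np.random.rand(5) < 0.5).sum()) for _ in range(200)]

# bins centered on the integer outcomes 0..5
plt.hist(results, bins=np.arange(7) - 0.5, rwidth=0.8)
plt.xlabel("number of heads in 5 flips")
plt.ylabel("frequency")
plt.title("200 experiments of flipping 5 coins")
plt.show()
```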
## 6.1.4
Write a function, estimate_prob, that uses flip_sum to estimate the following probability:
$P(k_1 \le$ number of heads in $n$ flips $< k_2)$
The function should estimate the probability by running $m$ different trials of flip_sum(n), probably using a for loop.
In order to receive full credit, estimate_prob must call flip_sum (i.e., flip_sum is called inside the estimate_prob function).
```
def estimate_prob(n,k1,k2,m):
    """Estimate the probability that n flips of a fair coin result in k1 to k2 heads
    n: the number of coin flips (length of the sequence)
    k1,k2: the trial is successful if the number of heads is
        between k1 and k2-1
    m: the number of trials (number of sequences of length n)
    output: the estimated probability
    """
    # one possible implementation: count the trials where flip_sum lands in [k1, k2)
    successes = sum(1 for _ in range(m) if k1 <= flip_sum(n) < k2)
    return successes / m

# this is a small sanity check
x = estimate_prob(100,45,55,1000)
print(x)
assert 'float' in str(type(x))
print("does x==0.687?")
```
## 6.2.2 Calculate the actual probabilities and compare them to your estimates for:
n= number of coins
k1 = min number of heads
k2 = upper limit of number of heads
m = the number of experiments
### 6.2.2.a n=100, k1 = 40, k2=60 m=100
### 6.2.2.b n=100, k1 = 40, k2=60 m=1000
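For comparison, the exact probability can be computed from the binomial distribution (a hedged sketch; `exact_prob` is an illustrative helper name, assuming a fair coin so all $2^n$ flip sequences are equally likely):

```python
from math import comb

def exact_prob(n, k1, k2):
    # P(k1 <= number of heads < k2) for n flips of a fair coin:
    # sum the binomial counts C(n, k) over [k1, k2) and divide by 2**n
    return sum(comb(n, k) for k in range(k1, k2)) / 2 ** n

print(exact_prob(100, 40, 60))
print(exact_prob(100, 45, 55))
```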
# 6.3 Conditional probability
In a recent study, the following data were obtained in response to the question:
"Do you favor the proposal of the school’s combining the elementary and middle school students in one building?"
Answers = [Yes, No, No opinion]
Males = [75, 89, 10]
Females = [105, 56, 6]
If a person is selected at random, find these probabilities using Python.
1. The person has no opinion
2. The person is a male or is against the issue.
3. The person is a female, given that the person opposes the issue.
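A hedged sketch of the computation (the variable names are illustrative; the counts come from the table above):

```python
# survey counts in the order [Yes, No, No opinion]
males = [75, 89, 10]
females = [105, 56, 6]
total = sum(males) + sum(females)  # 341 respondents in all

# 1. P(no opinion)
p_no_opinion = (males[2] + females[2]) / total

# 2. P(male or against) = P(male) + P(against) - P(male and against)
p_male_or_against = (sum(males) + females[1]) / total

# 3. P(female | against): restrict the sample space to the "No" column
p_female_given_against = females[1] / (males[1] + females[1])

print(p_no_opinion, p_male_or_against, p_female_given_against)
```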
## 6.4 Matrix creation
Write a 12 by 12 times table matrix shown below.
Do this
6.4.1 using nested for loops
6.4.2 using numpy fromfunction array constructor
6.4.3 using numpy broadcasting
```
from numpy import array
array([[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
[ 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24],
[ 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36],
[ 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48],
[ 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60],
[ 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 72],
[ 7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84],
[ 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96],
[ 9, 18, 27, 36, 45, 54, 63, 72, 81, 90, 99, 108],
[ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120],
[ 11, 22, 33, 44, 55, 66, 77, 88, 99, 110, 121, 132],
[ 12, 24, 36, 48, 60, 72, 84, 96, 108, 120, 132, 144]])
```
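One possible outline of the three approaches (a hedged sketch, not the only valid answers):

```python
import numpy as np

# 6.4.1 nested for loops
t1 = np.zeros((12, 12), dtype=int)
for i in range(12):
    for j in range(12):
        t1[i, j] = (i + 1) * (j + 1)

# 6.4.2 numpy fromfunction (i and j arrive as index arrays)
t2 = np.fromfunction(lambda i, j: (i + 1) * (j + 1), (12, 12), dtype=int)

# 6.4.3 broadcasting a column vector against a row vector
v = np.arange(1, 13)
t3 = v[:, np.newaxis] * v
```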
## 6.5
Answer the following questions with respect to the
https://data.cdc.gov/NCHS/NCHS-Leading-Causes-of-Death-United-States/bi63-dtpu
How many patients were censored?
What is the correlation coefficient between state and Suicide for deaths above 100 ?
What is the average deaths for each state and type of cause ?
What is the year that was the most deadly for each cause name ?
```
import pandas as pd
dfh = pd.read_csv("./data/NCHS_-_Leading_Causes_of_Death__United_States.csv")
dfh
```
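For the last two questions, a hedged pandas sketch (the column names `State`, `Cause Name`, `Year`, and `Deaths` are assumed from the CDC dataset; adjust them to match the file):

```python
import pandas as pd

def summarize_deaths(dfh):
    # average deaths for each state and type of cause
    avg = dfh.groupby(['State', 'Cause Name'])['Deaths'].mean()
    # deadliest year per cause: total the deaths by (cause, year),
    # then take the index of the maximum within each cause
    totals = dfh.groupby(['Cause Name', 'Year'])['Deaths'].sum()
    deadliest = totals.groupby(level='Cause Name').idxmax()
    return avg, deadliest
```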
### Recommendations with MovieTweetings: Most Popular Recommendation
Now that you have created the necessary columns we will be using throughout the rest of the lesson on creating recommendations, let's get started with the first of our recommendations.
To get started, read in the libraries and the two datasets you will be using throughout the lesson using the code below.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tests as t
%matplotlib inline
# Read in the datasets
movies = pd.read_csv('movies_clean.csv')
reviews = pd.read_csv('reviews_clean.csv')
del movies['Unnamed: 0']
del reviews['Unnamed: 0']
```
#### Part I: How To Find The Most Popular Movies?
For this notebook, we have a single task: no matter the user, we need to provide a list of recommendations based simply on the most popular items.
For this task, we will consider what is "most popular" based on the following criteria:
* A movie with the highest average rating is considered best
* With ties, movies that have more ratings are better
* A movie must have a minimum of 5 ratings to be considered among the best movies
* If movies are tied in their average rating and number of ratings, the ranking is determined by the movie with the most recent rating
With these criteria, the goal for this notebook is to take a **user_id** and provide back the **n_top** recommendations. Use the function below as the scaffolding that will be used for all the future recommendations as well.
```
def create_ranked_df(movies, reviews):
'''
INPUT
movies - the movies dataframe
reviews - the reviews dataframe
OUTPUT
ranked_movies - a dataframe with movies that are sorted by highest avg rating, more reviews,
then time, and must have more than 4 ratings
'''
# Pull the average ratings and number of ratings for each movie
movie_ratings = reviews.groupby('movie_id')['rating']
avg_ratings = movie_ratings.mean()
num_ratings = movie_ratings.count()
last_rating = pd.DataFrame(reviews.groupby('movie_id').max()['date'])
last_rating.columns = ['last_rating']
# Add Dates
rating_count_df = pd.DataFrame({'avg_rating': avg_ratings, 'num_ratings': num_ratings})
rating_count_df = rating_count_df.join(last_rating)
# merge with the movies dataset
movie_recs = movies.set_index('movie_id').join(rating_count_df)
# sort by top avg rating and number of ratings
ranked_movies = movie_recs.sort_values(['avg_rating', 'num_ratings', 'last_rating'], ascending=False)
# for edge cases - subset the movie list to those with only 5 or more reviews
ranked_movies = ranked_movies[ranked_movies['num_ratings'] > 4]
return ranked_movies
def popular_recommendations(user_id, n_top, ranked_movies):
'''
INPUT:
user_id - the user_id (str) of the individual you are making recommendations for
n_top - an integer of the number recommendations you want back
ranked_movies - a pandas dataframe of the already ranked movies based on avg rating, count, and time
OUTPUT:
top_movies - a list of the n_top recommended movies by movie title in order best to worst
'''
top_movies = list(ranked_movies['movie'][:n_top])
return top_movies
```
Using the three criteria above, you should be able to put together the above function. If you feel confident in your solution, check the results of your function against our solution. On the next page, you can see a walkthrough, and you can of course get the solution by looking at the solution notebook available in this workspace.
```
# Top 20 movies recommended for id 1
ranked_movies = create_ranked_df(movies, reviews) # only run this once - it is not fast
recs_20_for_1 = popular_recommendations('1', 20, ranked_movies)
# Top 5 movies recommended for id 53968
recs_5_for_53968 = popular_recommendations('53968', 5, ranked_movies)
# Top 100 movies recommended for id 70000
recs_100_for_70000 = popular_recommendations('70000', 100, ranked_movies)
# Top 35 movies recommended for id 43
recs_35_for_43 = popular_recommendations('43', 35, ranked_movies)
```
```
### You Should Not Need To Modify Anything In This Cell
# check 1
assert t.popular_recommendations('1', 20, ranked_movies) == recs_20_for_1, "The first check failed..."
# check 2
assert t.popular_recommendations('53968', 5, ranked_movies) == recs_5_for_53968, "The second check failed..."
# check 3
assert t.popular_recommendations('70000', 100, ranked_movies) == recs_100_for_70000, "The third check failed..."
# check 4
assert t.popular_recommendations('43', 35, ranked_movies) == recs_35_for_43, "The fourth check failed..."
print("If you got here, looks like you are good to go! Nice job!")
```
**Notice:** This wasn't the only way we could have determined the "top rated" movies. You can imagine that in keeping track of trending news or trending social events, you would likely want to create a time window from the current time, and then pull the articles in the most recent time frame. There are always going to be some subjective decisions to be made.
If you find that no one is paying any attention to your most popular recommendations, then it might be time to find a new way to recommend, which is what the next parts of the lesson should prepare us to do!
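The time-window idea could be sketched like this (a hypothetical helper, assuming the `date` column of `reviews` parses as datetimes):

```python
import pandas as pd

def trending_in_window(reviews, days=30):
    # keep only the ratings from the last `days` days of activity
    reviews = reviews.copy()
    reviews['date'] = pd.to_datetime(reviews['date'])
    cutoff = reviews['date'].max() - pd.Timedelta(days=days)
    recent = reviews[reviews['date'] >= cutoff]
    # rank by average rating within the window, then by rating count
    stats = recent.groupby('movie_id')['rating'].agg(['mean', 'count'])
    return stats.sort_values(['mean', 'count'], ascending=False)
```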
### Part II: Adding Filters
Now that you have created a function to give back the **n_top** movies, let's make it a bit more robust. Add arguments that will act as filters for the movie **year** and **genre**.
Use the cells below to adjust your existing function to allow for **year** and **genre** arguments as **lists** of **strings**. Then your ending results are filtered to only movies within the lists of provided years and genres (as `or` conditions). If no list is provided, there should be no filter applied.
You can adjust other necessary inputs as necessary to retrieve the final results you are looking for!
```
def popular_recs_filtered(user_id, n_top, ranked_movies, years=None, genres=None):
'''
INPUT:
user_id - the user_id (str) of the individual you are making recommendations for
n_top - an integer of the number recommendations you want back
ranked_movies - a pandas dataframe of the already ranked movies based on avg rating, count, and time
years - a list of strings with years of movies
genres - a list of strings with genres of movies
OUTPUT:
top_movies - a list of the n_top recommended movies by movie title in order best to worst
'''
# Filter movies based on year and genre
if years is not None:
ranked_movies = ranked_movies[ranked_movies['date'].isin(years)]
if genres is not None:
num_genre_match = ranked_movies[genres].sum(axis=1)
ranked_movies = ranked_movies.loc[num_genre_match > 0, :]
# create top movies list
top_movies = list(ranked_movies['movie'][:n_top])
return top_movies
# Top 20 movies recommended for id 1 with years=['2015', '2016', '2017', '2018'], genres=['History']
recs_20_for_1_filtered = popular_recs_filtered('1', 20, ranked_movies, years=['2015', '2016', '2017', '2018'], genres=['History'])
# Top 5 movies recommended for id 53968 with no genre filter but years=['2015', '2016', '2017', '2018']
recs_5_for_53968_filtered = popular_recs_filtered('53968', 5, ranked_movies, years=['2015', '2016', '2017', '2018'])
# Top 100 movies recommended for id 70000 with no year filter but genres=['History', 'News']
recs_100_for_70000_filtered = popular_recs_filtered('70000', 100, ranked_movies, genres=['History', 'News'])
### You Should Not Need To Modify Anything In This Cell
# check 1
assert t.popular_recs_filtered('1', 20, ranked_movies, years=['2015', '2016', '2017', '2018'], genres=['History']) == recs_20_for_1_filtered, "The first check failed..."
# check 2
assert t.popular_recs_filtered('53968', 5, ranked_movies, years=['2015', '2016', '2017', '2018']) == recs_5_for_53968_filtered, "The second check failed..."
# check 3
assert t.popular_recs_filtered('70000', 100, ranked_movies, genres=['History', 'News']) == recs_100_for_70000_filtered, "The third check failed..."
print("If you got here, looks like you are good to go! Nice job!")
```
<center>
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/Logos/organization_logo/organization_logo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# Application Programming Interface
Estimated time needed: **15** minutes
## Objectives
After completing this lab you will be able to:
- Create and Use APIs in Python
### Introduction
An API lets two pieces of software talk to each other. Just like a function, you don’t have to know how the API works, only its inputs and outputs. An essential type of API is a REST API, which allows you to access resources via the internet. In this lab, we will review the Pandas library in the context of an API, and we will also review a basic REST API.
## Table of Contents
<div class="alert alert-block alert-info" style="margin-top: 20px">
<li><a href="#ref0">Pandas is an API</a></li>
<li><a href="#ref1">REST APIs Basics </a></li>
<li><a href="#ref2">Quiz on Tuples</a></li>
</div>
<hr>
```
!pip install nba_api
```
<h2 id="PandasAPI">Pandas is an API </h2>
You will use this function in the lab:
```
def one_dict(list_dict):
keys=list_dict[0].keys()
out_dict={key:[] for key in keys}
for dict_ in list_dict:
for key, value in dict_.items():
out_dict[key].append(value)
return out_dict
```
Pandas is actually a set of software components, much of which is not even written in Python.
```
import pandas as pd
import matplotlib.pyplot as plt
```
You create a dictionary, this is just data.
```
dict_={'a':[11,21,31],'b':[12,22,32]}
```
When you create a Pandas object with the Dataframe constructor in API lingo, this is an "instance". The data in the dictionary is passed along to the pandas API. You then use the dataframe to communicate with the API.
```
df=pd.DataFrame(dict_)
type(df)
```
<img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%206/images/pandas_api.png" width = 800, align = "center" alt="logistic regression block diagram" />
When you call the method head, the dataframe communicates with the API, displaying the first few rows of the dataframe.
```
df.head()
```
When you call the method mean, the API will calculate the mean and return the value.
```
df.mean()
```
<h2 id="ref1">REST APIs</h2>
<p>REST APIs function by sending a <b>request</b>; the request is communicated via an HTTP message. The HTTP message usually contains a JSON file, which holds instructions for the operation we would like the service or <b>resource</b> to perform. In a similar manner, the API returns a <b>response</b> via an HTTP message; this response is usually contained within a JSON file.</p>
<p>In this lab, we will use the <a href=https://pypi.org/project/nba-api/>NBA API</a> to determine how well the Golden State Warriors performed against the Toronto Raptors. We will use the API to determine the number of points the Golden State Warriors won or lost by for each game. So if the value is three, the Golden State Warriors won by three points. Similarly, if the Golden State Warriors lost by two points, the result will be negative two. The API handles a lot of the details, such as endpoints and authentication.</p>
In the NBA API, making a request for a specific team is quite simple: we don't require a JSON file, all we require is an id. This information is stored locally in the API, so we import the module teams.
```
from nba_api.stats.static import teams
import matplotlib.pyplot as plt
#https://pypi.org/project/nba-api/
```
The method <code>get_teams()</code> returns a list of dictionaries.
```
nba_teams = teams.get_teams()
```
The dictionary key id has a unique identifier for each team as its value. Let's look at the first three elements of the list:
```
nba_teams[0:3]
```
To make things easier, we can convert the list of dictionaries to a table. First, we use the function <code>one_dict</code> to create a dictionary. We use the common keys for each team as the keys; the value is a list, and each element of the list corresponds to the values for each team.
We then convert the dictionary to a dataframe, each row contains the information for a different team.
```
dict_nba_team=one_dict(nba_teams)
df_teams=pd.DataFrame(dict_nba_team)
df_teams.head()
```
We will use the team's nickname to find the unique id. We can see the row that contains the Warriors by using the column nickname as follows:
```
df_warriors=df_teams[df_teams['nickname']=='Warriors']
df_warriors
```
We can use the following line of code to access the id column of the dataframe:
```
id_warriors=df_warriors[['id']].values[0][0]
#we now have an integer that can be used to request the Warriors information
id_warriors
```
The function <code>LeagueGameFinder</code> will make an API call; it's in the module <code>stats.endpoints</code>.
```
from nba_api.stats.endpoints import leaguegamefinder
```
The parameter <code>team_id_nullable</code> is the unique ID for the Warriors. Under the hood, the NBA API is making an HTTP request.
The information requested is provided and is transmitted via an HTTP response; this is assigned to the object <code>gamefinder</code>.
```
# Since https://stats.nba.com does not allow API calls from Cloud IPs and Skills Network Labs uses a Cloud IP,
# the following code is commented out; you can run it in Jupyter Lab on your own computer.
# gamefinder = leaguegamefinder.LeagueGameFinder(team_id_nullable=id_warriors)
```
We can see the JSON file by running the following line of code.
```
# Since https://stats.nba.com does not allow API calls from Cloud IPs and Skills Network Labs uses a Cloud IP,
# the following code is commented out; you can run it in Jupyter Lab on your own computer.
# gamefinder.get_json()
```
The game finder object has a method <code>get_data_frames()</code> that returns a dataframe. If we view the dataframe, we can see it contains information about all the games the Warriors played. The <code>PLUS_MINUS</code> column contains information on the score: if the value is negative, the Warriors lost by that many points; if the value is positive, the Warriors won by that many points. The column <code>MATCHUP</code> contains the team the Warriors were playing: GSW stands for Golden State Warriors and TOR means Toronto Raptors; <code>vs.</code> signifies a home game and the <code>@</code> symbol means an away game.
```
# Since https://stats.nba.com does not allow API calls from Cloud IPs and Skills Network Labs uses a Cloud IP,
# the following code is commented out; you can run it in Jupyter Lab on your own computer.
# games = gamefinder.get_data_frames()[0]
# games.head()
```
You can download the dataframe from the API call for Golden State and run the rest of the lab as usual.
```
! wget https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Labs/Golden_State.pkl
file_name = "Golden_State.pkl"
games = pd.read_pickle(file_name)
games.head()
```
We can create two dataframes, one for the games that the Warriors faced the raptors at home and the second for away games.
```
games_home = games[games['MATCHUP'] == 'GSW vs. TOR']
games_away = games[games['MATCHUP'] == 'GSW @ TOR']
```
We can calculate the mean for the column <code>PLUS_MINUS</code> for the dataframes <code>games_home</code> and <code> games_away</code>:
```
games_home['PLUS_MINUS'].mean()
games_away['PLUS_MINUS'].mean()
```
We can plot the <code>PLUS_MINUS</code> column for the dataframes <code>games_home</code> and <code>games_away</code>.
We see the Warriors played better at home.
```
fig, ax = plt.subplots()
games_away.plot(x='GAME_DATE',y='PLUS_MINUS', ax=ax)
games_home.plot(x='GAME_DATE',y='PLUS_MINUS', ax=ax)
ax.legend(["away", "home"])
plt.show()
```
<a href="https://cloud.ibm.com/catalog/services/watson-studio"><img src = "https://ibm.box.com/shared/static/irypdxea2q4th88zu1o1tsd06dya10go.png" width = 750, align = "center"></a>
## Authors:
[Joseph Santarcangelo](https://www.linkedin.com/in/joseph-s-50398b136?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork-19487395&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork-19487395&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ)
Joseph Santarcangelo has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ------------- | ---------------------------------- |
| 2020-09-09 | 2.1 | Malika Singla | Spell Check |
| 2020-08-26 | 2.0 | Lavanya | Moved lab to course repo in GitLab |
<hr/>
## <h3 align="center"> © IBM Corporation 2020. All rights reserved. </h3>
## Learning Objectives
* Show different fractals with DPC++
* Understand the __Data Parallel C++ (DPC++)__ language and programming model
## Fractals and Mandelbrot
It is one of the most amazing discoveries in the realm of mathematics that not only does the simple equation $Z_{n+1} = Z_n^2 + C$ create the infinitely complex Mandelbrot Set, but we can also find the same iconic shape in the patterns created by many other equations. In fact, the phenomenon of Mandelbrot Universality means that any time we iterate a function that in some portion, at some scale, resembles the parabolic function $Z^2$, we will find small copies of the Mandelbrot Set in the map of that function.
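Before looking at the DPC++ implementation, the escape-time test at the heart of every Mandelbrot renderer can be sketched in a few lines of plain Python (the function name and the iteration cap are illustrative choices, not taken from the sample):

```python
def escape_iterations(c, max_iter=100):
    """Iterate z_{n+1} = z_n**2 + c from z_0 = 0; return how many iterations
    pass before |z| exceeds 2, or max_iter if the point never escapes."""
    z = 0
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

# c = 0 is inside the set (never escapes); c = 1 escapes after a few steps.
print(escape_iterations(0))   # 100
print(escape_iterations(1))   # 3
```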
### oneAPI Distribution
Intel® oneAPI toolkits are available via multiple distribution channels:
* Local product installation: install the oneAPI toolkits from the __Intel® Developer Zone__.
* Install from containers or repositories: install the oneAPI toolkits from one of several supported
containers or repositories.
* Pre-installed in the __Intel® DevCloud__: a free development sandbox for access to the latest Intel® SVMS hardware and select oneAPI toolkits.
## Tricorn (the Conjugate of the Mandelbrot Set)
The tricorn, sometimes called the Mandelbar set, is a fractal defined in a similar way to the Mandelbrot set, but using the conjugated mapping
$$Z_{n+1} = \overline{Z}_n^{\,2} + C$$
where $Z$ is a complex number and $C$ is a complex constant.
<img src="Assets/mandelbrot1.png">
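The only change from the Mandelbrot iteration is conjugating $Z$ at each step; a plain-Python sketch (again independent of the DPC++ sample, with illustrative names):

```python
def tricorn_escape(c, max_iter=100):
    """Tricorn (Mandelbar) escape-time test: z_{n+1} = conj(z_n)**2 + c."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z.conjugate() ** 2 + c   # conjugate before squaring
    return max_iter

print(tricorn_escape(0))   # 100: c = 0 never escapes
print(tricorn_escape(2))   # 2: escapes quickly
```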
## Introducing oneAPI
__oneAPI__ is a solution to deliver unified programming model to __simplify development__ across diverse architectures. It includes a unified and simplified language and libraries for expressing __parallelism__ and delivers uncompromised native high-level language performance across a range of hardware including __CPUs, GPUs, FPGAs__. oneAPI initiative is based on __industry standards and open specifications__ and is interoperable with existing HPC programming models.
<img src="Assets/oneapi2.png">
***
## Editing the simple.cpp code
The Jupyter cell below with the gray background can be edited in-place and saved.
The first line of the cell contains the command **%%writefile 'simple.cpp'** This tells the input cell to save the contents of the cell into a file named 'simple.cpp' in your current directory (usually your home directory). As you edit the cell and run it in the Jupyter notebook, it will save your changes into that file.
The code below is some simple DPC++ code to get you started in the DevCloud environment. Simply inspect the code - there are no modifications necessary. Run the first cell to create the file, then run the cell below it to compile and execute the code.
1. Inspect the code cell below, then click run ▶ to save the code to a file
2. Run ▶ the cell in the __Build and Run__ section below the code snippet to compile and execute the code in the saved file
```
%%writefile lab/simple.cpp
//==============================================================
// Copyright © 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT
// =============================================================
#include <chrono>
#include <iomanip>
#include <iostream>
// dpc_common.hpp can be found in the dev-utilities include folder.
// e.g., $ONEAPI_ROOT/dev-utilities/<version>/include/dpc_common.hpp
#include "dpc_common.hpp"
#include "mandel.hpp"
using namespace std;
using namespace sycl;
void ShowDevice(queue &q) {
// Output platform and device information.
auto device = q.get_device();
auto p_name = device.get_platform().get_info<info::platform::name>();
cout << std::setw(20) << "Platform Name: " << p_name << "\n";
auto p_version = device.get_platform().get_info<info::platform::version>();
cout << std::setw(20) << "Platform Version: " << p_version << "\n";
auto d_name = device.get_info<info::device::name>();
cout << std::setw(20) << "Device Name: " << d_name << "\n";
auto max_work_group = device.get_info<info::device::max_work_group_size>();
cout << std::setw(20) << "Max Work Group: " << max_work_group << "\n";
auto max_compute_units = device.get_info<info::device::max_compute_units>();
cout << std::setw(20) << "Max Compute Units: " << max_compute_units << "\n\n";
}
void Execute(queue &q) {
// Demonstrate the Mandelbrot calculation serial and parallel.
#ifdef MANDELBROT_USM
cout << "Parallel Mandelbrot set using USM.\n";
MandelParallelUsm m_par(row_size, col_size, max_iterations, &q);
#else
cout << "Parallel Mandelbrot set using buffers.\n";
MandelParallel m_par(row_size, col_size, max_iterations);
#endif
MandelSerial m_ser(row_size, col_size, max_iterations);
// Run the code once to trigger JIT.
m_par.Evaluate(q);
// Run the parallel version and time it.
dpc_common::TimeInterval t_par;
for (int i = 0; i < repetitions; ++i) m_par.Evaluate(q);
double parallel_time = t_par.Elapsed();
// Print the results.
m_par.Print();
m_par.WriteImage();
// Run the serial version.
dpc_common::TimeInterval t_ser;
m_ser.Evaluate();
double serial_time = t_ser.Elapsed();
// Report the results.
cout << std::setw(20) << "Serial time: " << serial_time << "s\n";
cout << std::setw(20) << "Parallel time: " << (parallel_time / repetitions)
<< "s\n";
// Validate.
m_par.Verify(m_ser);
}
int main(int argc, char *argv[]) {
try {
// Create a queue on the default device. Set SYCL_DEVICE_TYPE environment
// variable to (CPU|GPU|FPGA|HOST) to change the device.
//queue q(default_selector{}, dpc_common::exception_handler);
queue q(gpu_selector{}, dpc_common::exception_handler);
// Display the device info.
ShowDevice(q);
// Compute Mandelbrot set.
Execute(q);
} catch (...) {
// Some other exception detected.
cout << "Failed to compute Mandelbrot set.\n";
std::terminate();
}
cout << "Successfully computed Mandelbrot set.\n";
return 0;
}
```
### Build and Run
Select the cell below and click Run ▶ to compile and execute the code above:
```
! chmod 755 q; chmod 755 run_simple.sh;if [ -x "$(command -v qsub)" ]; then ./q run_simple.sh; else ./run_simple.sh; fi
```
_If the Jupyter cells are not responsive or if they error out when you compile the code samples, please restart the Jupyter Kernel:
"Kernel->Restart Kernel and Clear All Outputs" and compile the code samples again_
# Summary
In this module you learned the following:
* How oneAPI solves the challenges of programming in a heterogeneous world
* Take advantage of oneAPI solutions to enable your workflows
# Functions
## Background for functions:
Basic math: y = f(x); here f is the function.
## Some features that we should keep in mind:
* Functions have no memory across sessions: after starting a new session we cannot call a function defined in an old one.
* For a given set of arguments a function returns the same result; even after running the function 50 times the output will not change. (Are there exceptions? Yes: functions that depend on randomness or external state.)
* Functions are objects, which means we can assign a function to a new variable.
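A minimal sketch of these points (all names below are made up):

```python
import random

def add(a, b):
    # Deterministic: the same arguments always give the same result.
    return a + b

def roll():
    # An exception to the rule above: the result depends on randomness,
    # not only on the arguments.
    return random.randint(1, 6)

# Functions are objects: they can be assigned to a new variable and called.
plus = add
print(plus(2, 3))   # 5
```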
```
# Compartmentalize code: write code in such a way that sections can be reused, either in the same
# program or in different programs.
# 'def' is the keyword.
# def ankush:    -> here 'ankush' is nothing but the name of the function.
# def ankush():  -> these parentheses are for the arguments.
# Anything that a function does is indented under the function definition.
def Ketty():
    print("woow")
    return

def baba():
    # always a good idea to put a comment at the start of the function
    pass

# Code does not get executed until we call the function by its name.
# CASE1 - starting with simple functions
# A simple function has just a function definition and a function call
def hello():
    # greeting hi in Punjabi
    print('sat shree akal')
    return
hello() # simple function would simply recall what has been predefined.
# CASE2 - Required argument functions
def kufupanda(text):   # 'text' avoids shadowing the built-in name str
    # recommending a course in Times Pro
    print("What about the courses in Times Pro? ", text)
    return
# What will happen? Any string that we pass will be assigned to the variable text. This is called a
# required-argument function because the argument is required. If we do not provide an argument we get an error.
kufupanda('Courses are good, but Python for Data Analysis is taken by a jerk!!')
# CASE3 - Calculation using the arguments
def cost_per_class(money, hours):
    # calculating the cost per hour
    letscheckout = money // hours
    print(letscheckout)
    return
# we are calling the given argument
cost_per_class(200000,288)
# Few things to notice here:
# 1 - If we do not pass the arguments we will get an error. Any guess what the error type will be?
# 2 - The ordering of the arguments has to match the function definition exactly. Hence please be
#     cautious about the arguments.
# CASE4 - Keyword-argument function
# Pretty useful, though I do not see people using it often.
# In the example above, when calling the function with positional arguments I have to take care of the
# ordering. Keyword arguments make great sense in this regard.
cost_per_class(hours = 288, money = 200000)
# As ordering is no longer our concern, this is an added advantage. But as I said before, I have seen
# very few cases of people writing these functions.
# CASE5 - Default function argument
# Out of the total number of arguments, we can fix any of them while defining the function itself.
# It is useful in one particular case: if the user does not supply a parameter, the default is used automatically.
def great_going(name = "", gender = "MALE", occupation = ""):
    print("Name:", name)
    print("Gender:", gender)
    print("Occupation:", occupation)
    return
great_going(name = "Roger Federer", occupation = "A legendary Tennis star")
# This cell ran successfully only because we had provided a default argument for gender.
# Usefulness of 'asterisk' notation.
def follow_legacy(name, *occupation):
    print('NAME :', name)
    print("Occupation :", occupation)
    return
follow_legacy('come', 'jagu', 'dadajee','fdgds')
# the moment we pass extra values as arguments while calling the function, they are
# collected into a tuple
''' Why 'return', why not just print? To make the function truly portable, meaning I should be able to use it
in different environments. When I only print the results of the function it does not become portable. Hence the
return statement comes in handy; that is why we use it.'''
# A unique thing about functions: the body is not executed where it is written, but where the function is called later.
```
### Neural Machine Translation by Jointly Learning to Align and Translate
In this notebook we will implement the model from [Neural Machine Translation by Jointly Learning to Align and Translate](https://arxiv.org/abs/1409.0473) that will improve PPL (**perplexity**) as compared to the previous notebook.
Here is a general encoder-decoder model that we have used from the past.
<p align="center"><img src="https://github.com/bentrevett/pytorch-seq2seq/raw/49df8404d938a6edbf729876405558cc2c2b3013/assets/seq2seq1.png"/></p>
In the previous model, our architecture was set-up in a way to reduce "information compression" by explicitly passing the context vector, $z$, to the decoder at every time-step and by passing both the context vector and embedded input word, $d(y_t)$, along with the hidden state, $s_t$, to the linear layer, $f$, to make a prediction.
<p align="center"><img src="https://github.com/bentrevett/pytorch-seq2seq/raw/49df8404d938a6edbf729876405558cc2c2b3013/assets/seq2seq7.png"/></p>
Even though we have reduced some of this compression, our context vector still needs to contain all of the information about the source sentence. The model implemented in this notebook avoids this compression by allowing the decoder to look at the entire source sentence (via its hidden states) at each decoding step! How does it do this? It uses **attention**.
### Attention.
Attention works by first, calculating an attention vector, $a$, that is the length of the source sentence. The attention vector has the property that each element is between 0 and 1, and the entire vector sums to 1. We then calculate a weighted sum of our source sentence hidden states, $H$, to get a weighted source vector, $w$.
$$w = \sum_{i}a_ih_i$$
We calculate a new weighted source vector every time-step when decoding, using it as input to our decoder RNN as well as the linear layer to make a prediction.
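The weighted sum $w = \sum_{i}a_ih_i$ is just a matrix product between the attention weights and the stacked hidden states; a toy NumPy sketch with made-up sizes (src len = 3, hid dim = 4):

```python
import numpy as np

# Toy illustration of w = sum_i a_i * h_i (sizes are made up).
a = np.array([0.1, 0.2, 0.7])                   # attention weights, sum to 1
H = np.arange(12, dtype=float).reshape(3, 4)    # stacked hidden states h_1..h_3
w = a @ H                                       # weighted source vector, shape (4,)
print(w)   # [6.4 7.4 8.4 9.4]
```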
### Data Preparation
Again, we prepare the data just like in the previous notebooks.
```
import torch
from torch import nn
from torch.nn import functional as F
import spacy, math, random
import numpy as np
from torchtext.legacy import datasets, data
```
### Setting seeds
```
SEED = 42
np.random.seed(SEED)
torch.manual_seed(SEED)
random.seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
```
### Loading the German and English models.
```
import spacy
import spacy.cli

spacy.cli.download('de_core_news_sm')
spacy.cli.download('en_core_web_sm')
spacy_de = spacy.load('de_core_news_sm')
spacy_en = spacy.load('en_core_web_sm')
```
### Preprocessing function that tokenizes sentences.
```
def tokenize_de(sent):
    return [tok.text for tok in spacy_de.tokenizer(sent)]

def tokenize_en(sent):
    return [tok.text for tok in spacy_en.tokenizer(sent)]
```
### Creating the `Fields`
```
SRC = data.Field(
tokenize = tokenize_de,
lower= True,
init_token = "<sos>",
eos_token = "<eos>"
)
TRG = data.Field(
tokenize = tokenize_en,
lower= True,
init_token = "<sos>",
eos_token = "<eos>"
)
```
### Loading `Multi30k` dataset.
```
train_data, valid_data, test_data = datasets.Multi30k.splits(
exts=('.de', '.en'),
fields = (SRC, TRG)
)
```
### Checking that we have loaded the data correctly.
```
from prettytable import PrettyTable

def tabulate(column_names, data):
    table = PrettyTable(column_names)
    table.title = "VISUALIZING SETS EXAMPLES"
    table.align[column_names[0]] = 'l'
    table.align[column_names[1]] = 'r'
    for row in data:
        table.add_row(row)
    print(table)
column_names = ["SUBSET", "EXAMPLE(s)"]
row_data = [
["training", len(train_data)],
['validation', len(valid_data)],
['test', len(test_data)]
]
tabulate(column_names, row_data)
```
### Checking a single example, of the `SRC` and the `TRG`.
```
print(vars(train_data[0]))
```
### Building the vocabulary.
Just like in the previous notebook, all tokens that appear fewer than 2 times are automatically converted to the unknown token `<unk>`.
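What `build_vocab(..., min_freq=2)` does can be sketched in plain Python with a `Counter` (a toy reimplementation for intuition, not torchtext's actual code):

```python
from collections import Counter

# Toy corpus: tokens seen fewer than min_freq=2 times map to <unk>.
tokens = ["a", "dog", "a", "cat", "runs"]
counts = Counter(tokens)

vocab = {"<unk>": 0}
for tok, c in counts.items():
    if c >= 2:                 # keep only tokens meeting min_freq
        vocab[tok] = len(vocab)

indices = [vocab.get(t, vocab["<unk>"]) for t in tokens]
print(indices)   # [1, 0, 1, 0, 0]: "dog", "cat", "runs" become <unk>
```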
```
SRC.build_vocab(train_data, min_freq=2)
TRG.build_vocab(train_data, min_freq=2)
```
### Device
```
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
```
### Creating Iterators.
Just like in the previous notebook, we are going to use the BucketIterator to create the train, validation and test iterators.
```
BATCH_SIZE = 128
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
device = device,
batch_size = BATCH_SIZE
)
```
### Encoder.
First, we'll build the encoder. Similar to the previous model, we only use a single layer GRU, however we now use a bidirectional RNN. With a bidirectional RNN, we have two RNNs in each layer. A forward RNN going over the embedded sentence from left to right (shown below in green), and a backward RNN going over the embedded sentence from right to left (teal). All we need to do in code is set bidirectional = True and then pass the embedded sentence to the RNN as before.
<p align="center">
<img src="https://github.com/bentrevett/pytorch-seq2seq/raw/49df8404d938a6edbf729876405558cc2c2b3013/assets/seq2seq8.png"/>
</p>
We now have:
$$\begin{align*}
h_t^\rightarrow &= \text{EncoderGRU}^\rightarrow(e(x_t^\rightarrow),h_{t-1}^\rightarrow)\\
h_t^\leftarrow &= \text{EncoderGRU}^\leftarrow(e(x_t^\leftarrow),h_{t-1}^\leftarrow)
\end{align*}$$
Where $x_0^\rightarrow = \text{<sos>}, x_1^\rightarrow = \text{guten}$ and $x_0^\leftarrow = \text{<eos>}, x_1^\leftarrow = \text{morgen}$.
As before, we only pass an input (embedded) to the RNN, which tells PyTorch to initialize both the forward and backward initial hidden states ($h_0^\rightarrow$ and $h_0^\leftarrow$, respectively) to a tensor of all zeros. We'll also get two context vectors, one from the forward RNN after it has seen the final word in the sentence, $z^\rightarrow=h_T^\rightarrow$, and one from the backward RNN after it has seen the first word in the sentence, $z^\leftarrow=h_T^\leftarrow$.
The RNN returns outputs and hidden.
outputs is of size [src len, batch size, hid dim * num directions] where the first hid_dim elements in the third axis are the hidden states from the top layer forward RNN, and the last hid_dim elements are hidden states from the top layer backward RNN. We can think of the third axis as being the forward and backward hidden states concatenated together, i.e. $h_1 = [h_1^\rightarrow; h_{T}^\leftarrow]$, $h_2 = [h_2^\rightarrow; h_{T-1}^\leftarrow]$ and we can denote all encoder hidden states (forward and backwards concatenated together) as $H=\{ h_1, h_2, ..., h_T\}$.
hidden is of size [n layers * num directions, batch size, hid dim], where [-2, :, :] gives the top layer forward RNN hidden state after the final time-step (i.e. after it has seen the last word in the sentence) and [-1, :, :] gives the top layer backward RNN hidden state after the final time-step (i.e. after it has seen the first word in the sentence).
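This indexing convention can be checked on a toy NumPy array (made-up sizes; one layer, two directions):

```python
import numpy as np

# hidden stacked as [forward_1, backward_1] for 1 layer, 2 directions (toy dims).
n_layers, num_dirs, batch, hid = 1, 2, 3, 4
hidden = np.arange(n_layers * num_dirs * batch * hid, dtype=float).reshape(
    n_layers * num_dirs, batch, hid)

fwd_final = hidden[-2]   # top-layer forward state after the final time-step
bwd_final = hidden[-1]   # top-layer backward state after the final time-step
print(fwd_final.shape, bwd_final.shape)   # (3, 4) (3, 4)
```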
As the decoder is not bidirectional, it only needs a single context vector, $z$, to use as its initial hidden state, $s_0$, and we currently have two, a forward and a backward one ($z^\rightarrow=h_T^\rightarrow$ and $z^\leftarrow=h_T^\leftarrow$, respectively). We solve this by concatenating the two context vectors together, passing them through a linear layer, $g$, and applying the $\tanh$ activation function.
$$z=\tanh(g(h_T^\rightarrow, h_T^\leftarrow)) = \tanh(g(z^\rightarrow, z^\leftarrow)) = s_0$$
**Note:** this is actually a deviation from the paper. Instead, they feed only the first backward RNN hidden state through a linear layer to get the context vector/decoder initial hidden state. *This doesn't seem to make sense to me, so we have changed it.*
As we want our model to look back over the whole of the source sentence we return outputs, the stacked forward and backward hidden states for every token in the source sentence. We also return hidden, which acts as our initial hidden state in the decoder.
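A toy NumPy sketch of $s_0 = \tanh(g([z^\rightarrow; z^\leftarrow]))$ with made-up dimensions (enc hid dim = 3, dec hid dim = 2; the bias of $g$ is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
z_fwd = rng.standard_normal(3)        # final forward hidden state (toy)
z_bwd = rng.standard_normal(3)        # final backward hidden state (toy)
W = rng.standard_normal((2, 6))       # the linear layer g, mapping 2*enc_hid -> dec_hid

# Concatenate the two context vectors, apply g, then tanh.
s0 = np.tanh(W @ np.concatenate([z_fwd, z_bwd]))
print(s0.shape)   # (2,)
```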
```
class Encoder(nn.Module):
    def __init__(self, input_dim, emb_dim, enc_hid_dim, dec_hid_dim, dropout):
        super(Encoder, self).__init__()
        self.embedding = nn.Embedding(input_dim, embedding_dim=emb_dim)
        self.gru = nn.GRU(emb_dim, enc_hid_dim, bidirectional = True)
        self.fc = nn.Linear(enc_hid_dim * 2, dec_hid_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, src): # src = [src len, batch size]
        embedded = self.dropout(self.embedding(src)) # embedded = [src len, batch size, emb dim]
        outputs, hidden = self.gru(embedded)
        """
        outputs = [src len, batch size, hid dim * num directions]
        hidden = [n layers * num directions, batch size, hid dim]
        hidden is stacked [forward_1, backward_1, forward_2, backward_2, ...]
        outputs are always from the last layer
        hidden [-2, :, :] is the last of the forwards RNN
        hidden [-1, :, :] is the last of the backwards RNN
        initial decoder hidden is final hidden state of the forwards and backwards
        encoder RNNs fed through a linear layer
        """
        hidden = torch.tanh(self.fc(torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1)))
        """
        outputs = [src len, batch size, enc hid dim * 2]
        hidden = [batch size, dec hid dim]
        """
        return outputs, hidden
```
### Attention Layer
Next up is the attention layer. This will take in the previous hidden state of the decoder, $s_{t-1}$, and all of the stacked forward and backward hidden states from the encoder, $H$. The layer will output an attention vector, $a_t$, that is the length of the source sentence, each element is between 0 and 1 and the entire vector sums to 1.
Intuitively, this layer takes what we have decoded so far, $s_{t-1}$, and all of what we have encoded, $H$, to produce a vector, $a_t$, that represents which words in the source sentence we should pay the most attention to in order to correctly predict the next word to decode, $\hat{y}_{t+1}$.
First, we calculate the energy between the previous decoder hidden state and the encoder hidden states. As our encoder hidden states are a sequence of $T$ tensors, and our previous decoder hidden state is a single tensor, the first thing we do is repeat the previous decoder hidden state $T$ times. We then calculate the energy, $E_t$, between them by concatenating them together and passing them through a linear layer (attn) and a $\tanh$ activation function.
$$E_t = \tanh(\text{attn}(s_{t-1}, H))$$
This can be thought of as calculating how well each encoder hidden state "matches" the previous decoder hidden state.
We currently have a [dec hid dim, src len] tensor for each example in the batch. We want this to be [src len] for each example in the batch as the attention should be over the length of the source sentence. This is achieved by multiplying the energy by a [1, dec hid dim] tensor, $v$.
$$\hat{a}_t = v E_t$$
We can think of $v$ as the weights for a weighted sum of the energy across all encoder hidden states. These weights tell us how much we should attend to each token in the source sequence. The parameters of $v$ are initialized randomly, but learned with the rest of the model via backpropagation. Note how $v$ is not dependent on time, and the same $v$ is used for each time-step of the decoding. We implement $v$ as a linear layer without a bias.
Finally, we ensure the attention vector fits the constraints of having all elements between 0 and 1 and the vector summing to 1 by passing it through a $\text{softmax}$ layer.
$$a_t = \text{softmax}(\hat{a_t})$$
This gives us the attention over the source sentence!
Graphically, this looks something like below. This is for calculating the very first attention vector, where $s_{t-1} = s_0 = z$. The green/teal blocks represent the hidden states from both the forward and backward RNNs, and the attention computation is all done within the pink block.
<p align="center"><img src="https://github.com/bentrevett/pytorch-seq2seq/raw/49df8404d938a6edbf729876405558cc2c2b3013/assets/seq2seq9.png"/></p>
```
class Attention(nn.Module):
    def __init__(self, enc_hid_dim, dec_hid_dim):
        super(Attention, self).__init__()
        self.attn = nn.Linear((enc_hid_dim * 2) + dec_hid_dim, dec_hid_dim)
        self.v = nn.Linear(dec_hid_dim, 1, bias = False)

    def forward(self, hidden, encoder_outputs):
        """
        hidden = [batch size, dec hid dim]
        encoder_outputs = [src len, batch size, enc hid dim * 2]
        """
        batch_size = encoder_outputs.shape[1]
        src_len = encoder_outputs.shape[0]
        # repeat decoder hidden state src_len times
        hidden = hidden.unsqueeze(1).repeat(1, src_len, 1)
        encoder_outputs = encoder_outputs.permute(1, 0, 2)
        """
        hidden = [batch size, src len, dec hid dim]
        encoder_outputs = [batch size, src len, enc hid dim * 2]
        """
        energy = torch.tanh(self.attn(torch.cat((hidden, encoder_outputs), dim = 2))) # energy = [batch size, src len, dec hid dim]
        attention = self.v(energy).squeeze(2) # attention = [batch size, src len]
        return F.softmax(attention, dim=1)
```
### Decoder.
The decoder contains the attention layer, attention, which takes the previous hidden state, $s_{t-1}$, all of the encoder hidden states, $H$, and returns the attention vector, $a_t$.
We then use this attention vector to create a weighted source vector, $w_t$, denoted by weighted, which is a weighted sum of the encoder hidden states, $H$, using $a_t$ as the weights.
$$w_t = a_t H$$
The embedded input word, $d(y_t)$, the weighted source vector, $w_t$, and the previous decoder hidden state, $s_{t-1}$, are then all passed into the decoder RNN, with $d(y_t)$ and $w_t$ being concatenated together.
$$s_t = \text{DecoderGRU}(d(y_t), w_t, s_{t-1})$$
We then pass $d(y_t)$, $w_t$ and $s_t$ through the linear layer, $f$, to make a prediction of the next word in the target sentence, $\hat{y}_{t+1}$. This is done by concatenating them all together.
$$\hat{y}_{t+1} = f(d(y_t), w_t, s_t)$$
The image below shows decoding the first word in an example translation.
<p align="center">
<img src="https://github.com/bentrevett/pytorch-seq2seq/raw/49df8404d938a6edbf729876405558cc2c2b3013/assets/seq2seq10.png"/>
</p>
The green/teal blocks show the forward/backward encoder RNNs which output $H$, the red block shows the context vector, $z = h_T = \tanh(g(h^\rightarrow_T,h^\leftarrow_T)) = \tanh(g(z^\rightarrow, z^\leftarrow)) = s_0$, the blue block shows the decoder RNN which outputs $s_t$, the purple block shows the linear layer, $f$, which outputs $\hat{y}_{t+1}$ and the orange block shows the calculation of the weighted sum over $H$ by $a_t$ and outputs $w_t$. Not shown is the calculation of $a_t$.
```
class Decoder(nn.Module):
    def __init__(self, output_dim, emb_dim, enc_hid_dim, dec_hid_dim, dropout, attention):
        super(Decoder, self).__init__()
        self.output_dim = output_dim
        self.attention = attention
        self.embedding = nn.Embedding(output_dim, emb_dim)
        self.gru = nn.GRU((enc_hid_dim * 2) + emb_dim, dec_hid_dim)
        self.fc_out = nn.Linear((enc_hid_dim * 2) + dec_hid_dim + emb_dim, output_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, input, hidden, encoder_outputs):
        """
        input = [batch size]
        hidden = [batch size, dec hid dim]
        encoder_outputs = [src len, batch size, enc hid dim * 2]
        """
        input = input.unsqueeze(0) # input = [1, batch size]
        embedded = self.dropout(self.embedding(input)) # embedded = [1, batch size, emb dim]
        a = self.attention(hidden, encoder_outputs) # a = [batch size, src len]
        a = a.unsqueeze(1) # a = [batch size, 1, src len]
        encoder_outputs = encoder_outputs.permute(1, 0, 2) # encoder_outputs = [batch size, src len, enc hid dim * 2]
        weighted = torch.bmm(a, encoder_outputs) # weighted = [batch size, 1, enc hid dim * 2]
        weighted = weighted.permute(1, 0, 2) # weighted = [1, batch size, enc hid dim * 2]
        rnn_input = torch.cat((embedded, weighted), dim = 2) # rnn_input = [1, batch size, (enc hid dim * 2) + emb dim]
        output, hidden = self.gru(rnn_input, hidden.unsqueeze(0))
        """
        output = [seq len, batch size, dec hid dim * n directions]
        hidden = [n layers * n directions, batch size, dec hid dim]
        seq len, n layers and n directions will always be 1 in this decoder, therefore:
        output = [1, batch size, dec hid dim]
        hidden = [1, batch size, dec hid dim]
        this also means that output == hidden
        """
        assert (output == hidden).all()
        embedded = embedded.squeeze(0)
        output = output.squeeze(0)
        weighted = weighted.squeeze(0)
        prediction = self.fc_out(torch.cat((output, weighted, embedded), dim = 1)) # prediction = [batch size, output dim]
        return prediction, hidden.squeeze(0)
```
### Seq2Seq Model
This is the first model where the encoder RNN and the decoder RNN do not need the same hidden dimensions; however, the encoder has to be bidirectional. This requirement can be removed by changing all occurrences of `enc_dim * 2` to `enc_dim * 2 if encoder_is_bidirectional else enc_dim`.
This seq2seq encapsulator is similar to the last two. The only difference is that the encoder returns both the final hidden state (which is the final hidden state from both the forward and backward encoder RNNs passed through a linear layer) to be used as the initial hidden state for the decoder, as well as every hidden state (which are the forward and backward hidden states stacked on top of each other). We also need to ensure that hidden and encoder_outputs are passed to the decoder.
**Briefly going over all of the steps:**
* the outputs tensor is created to hold all predictions, $\hat{Y}$
* the source sequence, $X$, is fed into the encoder to receive $z$ and $H$
* the initial decoder hidden state is set to be the context vector, $s_0 = z = h_T$
* we use a batch of <sos> tokens as the first input, $y_1$
* **we then decode within a loop:**
* inserting the input token $y_t$, previous hidden state, $s_{t-1}$, and all encoder outputs, $H$, into the decoder
* receiving a prediction, $\hat{y}_{t+1}$, and a new hidden state, $s_t$
* we then decide if we are going to teacher force or not, setting the next input as appropriate
```
class Seq2Seq(nn.Module):
    def __init__(self, encoder, decoder, device):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.device = device

    def forward(self, src, trg, teacher_forcing_ratio = 0.5):
        """
        src = [src len, batch size]
        trg = [trg len, batch size]
        teacher_forcing_ratio is probability to use teacher forcing
        e.g. if teacher_forcing_ratio is 0.75 we use teacher forcing 75% of the time
        """
        trg_len, batch_size = trg.shape
        trg_vocab_size = self.decoder.output_dim
        # tensor to store decoder outputs
        outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)
        # encoder_outputs is all hidden states of the input sequence, back and forwards
        # hidden is the final forward and backward hidden states, passed through a linear layer
        encoder_outputs, hidden = self.encoder(src)
        # first input to the decoder is the <sos> tokens
        input = trg[0, :]
        for t in range(1, trg_len):
            # insert input token embedding, previous hidden state and all encoder hidden states
            # receive output tensor (predictions) and new hidden state
            output, hidden = self.decoder(input, hidden, encoder_outputs)
            # place predictions in a tensor holding predictions for each token
            outputs[t] = output
            # decide if we are going to use teacher forcing or not
            teacher_force = random.random() < teacher_forcing_ratio
            # get the highest predicted token from our predictions
            top1 = output.argmax(1)
            # if teacher forcing, use actual next token as next input
            # if not, use predicted token
            input = trg[t] if teacher_force else top1
        return outputs
```
### Training the Seq2Seq Model
The rest of the code is similar to the previous notebooks; where there are changes, I will highlight them.
### Hyper parameters
```
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = DEC_EMB_DIM = 256
ENC_HID_DIM = DEC_HID_DIM = 512
ENC_DROPOUT = DEC_DROPOUT = 0.5
attn = Attention(ENC_HID_DIM, DEC_HID_DIM)
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, DEC_DROPOUT, attn)
model = Seq2Seq(enc, dec, device).to(device)
model
```
### Initializing the weights
Here, we will initialize all biases to zero and all weights from $\mathcal{N}(0, 0.01)$.
```
def init_weights(m):
    for name, param in m.named_parameters():
        if 'weight' in name:
            nn.init.normal_(param.data, mean=0, std=0.01)
        else:
            nn.init.constant_(param.data, 0)

model.apply(init_weights)
```
### Counting model parameters.
The number of model parameters has increased by `~50%` from the previous notebook.
```
def count_trainable_params(model):
    return sum(p.numel() for p in model.parameters()), sum(p.numel() for p in model.parameters() if p.requires_grad)

n_params, trainable_params = count_trainable_params(model)
print(f"Total number of parameters: {n_params:,}\nTotal trainable parameters: {trainable_params:,}")
```
### Optimizer
```
optimizer = torch.optim.Adam(model.parameters())
```
### Loss Function
Our loss function calculates the average loss per token, however by passing the index of the `<pad>` token as the `ignore_index` argument we ignore the loss whenever the target token is a padding token.
```
TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
```
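What `ignore_index` does can be mimicked with a stdlib sketch: per-token losses at padding positions are dropped before averaging. The loss values and padding index below are hypothetical:

```python
PAD_IDX = 1  # hypothetical padding index
per_token_loss = [0.9, 2.3, 0.4, 1.1]  # hypothetical per-token losses
targets = [5, PAD_IDX, 7, PAD_IDX]     # targets aligned with the losses

# keep only the losses whose target is not the padding token
kept = [loss for loss, t in zip(per_token_loss, targets) if t != PAD_IDX]
mean_loss = sum(kept) / len(kept)  # averages only the two non-pad losses
```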
### Training and Evaluating Functions
```
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
optimizer.zero_grad()
output = model(src, trg)
# trg = [trg len, batch size]
# output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
# trg = [(trg len - 1) * batch size]
# output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
output = model(src, trg, 0) # turn off teacher forcing
# trg = [trg len, batch size]
# output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
# trg = [(trg len - 1) * batch size]
# output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
```
### Train Loop.
Below is a function that tells us how long each epoch took to complete.
```
import math
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
N_EPOCHS = 10
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
valid_loss = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'best-model.pt')
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
```
### Evaluating the best model.
```
model.load_state_dict(torch.load('best-model.pt'))
test_loss = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
```
We've improved on the previous model, but this came at the cost of roughly doubling the training time.
In the next notebook, we'll use the same architecture but apply a few tricks that are applicable to all RNN architectures: **packed padded** sequences and **masking**. We'll also implement code that lets us inspect which words in the input the RNN is paying attention to when decoding the output.
### Credits.
* [bentrevett](https://github.com/bentrevett/pytorch-seq2seq/blob/master/3%20-%20Neural%20Machine%20Translation%20by%20Jointly%20Learning%20to%20Align%20and%20Translate.ipynb)
| github_jupyter |
<a href="https://colab.research.google.com/github/yasirabd/solver-society-job-data/blob/main/2_0_Ekstrak_job_position.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Objective
Extract `job_position`; the fewer unique values, the better.
Required input data:
1. jobstreet_clean_tahap1.csv
Output data produced:
1. jobstreet_clean_tahap2.csv
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import re
data = pd.read_csv("/content/drive/My Drive/Data Loker/jobstreet_clean_tahap1.csv")
data.head()
# check null values
data.isnull().sum()
```
## Explore data
```
# check null values
data['job_position'].isnull().sum()
# convert to lower case
job_position = data['job_position'].str.lower()
job_position.value_counts()
```
> There are 10,056 unique values, which is very messy.
```
# top 50 job_position
job_position.value_counts()[:50]
# convert to a list
temp = job_position.tolist()
```
## Cleaning qualifier text
```
# find entries that contain brackets () or []
brackets = ['(', ')', '[', ']']
# remove text inside brackets
# reason: entries like "marketing executive (jakarta/cikarang)"
# carry a location in the brackets, which is not needed
for idx, pos in enumerate(temp):
    for b in brackets:
        if b in pos:
            pos = re.sub(r"[\(\[].*?[\)\]]", "", pos)
            temp[idx] = pos
# remove text after "-"
# reason: entries like "tukang bubut manual - cikarang"
# put a location or qualifier after the '-'
for idx, pos in enumerate(temp):
    if '-' in pos:
        temp[idx] = pos.split('-')[0]
# remove text after ":"
# reason: entries like "finance supervisor: bandar lampung"
# put a location or qualifier after the ':'
for idx, pos in enumerate(temp):
    if ':' in pos:
        temp[idx] = pos.split(':')[0]
# remove extra whitespace
for idx, pos in enumerate(temp):
    temp[idx] = ' '.join(pos.split())
```
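As a quick sanity check on the three rules above, applied to hypothetical sample strings taken from the comments:

```python
import re

samples = [
    "marketing executive (jakarta/cikarang)",
    "tukang bubut manual - cikarang",
    "finance supervisor: bandar lampung",
]
cleaned = []
for pos in samples:
    pos = re.sub(r"[\(\[].*?[\)\]]", "", pos)  # drop bracketed text
    pos = pos.split('-')[0]                    # drop text after '-'
    pos = pos.split(':')[0]                    # drop text after ':'
    cleaned.append(' '.join(pos.split()))      # collapse whitespace
print(cleaned)  # -> ['marketing executive', 'tukang bubut manual', 'finance supervisor']
```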
## Group similar job_position values
### Sales Executive
```
# check the variety of sales executive job_position values
sales_exec = list()
for pos in temp:
    if 'sales executive' in pos:
        sales_exec.append(pos)
pd.Series(sales_exec).unique()
# replace them with just 'sales executive'
for idx, pos in enumerate(temp):
    if 'sales executive' in pos:
        temp[idx] = 'sales executive'
```
### Accounting Staff
```
# check the variety of accounting staff values
acc_staff = list()
for pos in temp:
    if 'accounting staff' in pos and 'finance' not in pos:
        acc_staff.append(pos)
pd.Series(acc_staff).unique()
# replace them with just 'accounting staff'
for idx, pos in enumerate(temp):
    if 'accounting staff' in pos and 'finance' not in pos:
        temp[idx] = 'accounting staff'
```
### Digital Marketing
```
# check the variety of digital marketing values
digital_marketing = list()
for pos in temp:
    if 'digital marketing' in pos:
        digital_marketing.append(pos)
pd.Series(digital_marketing).unique()
# replace them with just 'digital marketing'
for idx, pos in enumerate(temp):
    if 'digital marketing' in pos:
        temp[idx] = 'digital marketing'
```
### Sales Manager
```
# check the variety of sales manager values
sales_manager = list()
for pos in temp:
    if 'sales manager' in pos:
        sales_manager.append(pos)
pd.Series(sales_manager).unique()
# replace them with just 'sales manager'
for idx, pos in enumerate(temp):
    if 'sales manager' in pos:
        temp[idx] = 'sales manager'
```
### Project Manager
```
# check the variety of project manager values
proj_manager = list()
for pos in temp:
    if 'project manager' in pos:
        proj_manager.append(pos)
pd.Series(proj_manager).unique()
# replace them with just 'project manager'
for idx, pos in enumerate(temp):
    if 'project manager' in pos:
        temp[idx] = 'project manager'
```
### Graphic Designer
```
# check the variety of graphic designer values
graph_designer = list()
for pos in temp:
    if 'graphic designer' in pos:
        graph_designer.append(pos)
pd.Series(graph_designer).unique()
# replace them with just 'graphic designer'
for idx, pos in enumerate(temp):
    if 'graphic designer' in pos:
        temp[idx] = 'graphic designer'
```
### Staff Accounting
```
# check the variety of staff accounting values
staff_acc = list()
for pos in temp:
    if 'staff accounting' in pos:
        staff_acc.append(pos)
pd.Series(staff_acc).unique()
# replace them with just 'staff accounting'
for idx, pos in enumerate(temp):
    if 'staff accounting' in pos:
        temp[idx] = 'staff accounting'
```
### Marketing Manager
```
# check the variety of marketing manager values
mark_manager = list()
for pos in temp:
    if 'marketing manager' in pos:
        mark_manager.append(pos)
pd.Series(mark_manager).unique()
# replace them with just 'marketing manager'
for idx, pos in enumerate(temp):
    if 'marketing manager' in pos:
        temp[idx] = 'marketing manager'
```
### Marketing Executive
```
# check the variety of marketing executive values
mark_exec = list()
for pos in temp:
    if 'marketing executive' in pos:
        mark_exec.append(pos)
pd.Series(mark_exec).unique()
# replace them with just 'marketing executive'
for idx, pos in enumerate(temp):
    if 'marketing executive' in pos:
        temp[idx] = 'marketing executive'
```
### Sales Engineer
```
# check the variety of sales engineer values
sales_engineer = list()
for pos in temp:
    if 'sales engineer' in pos:
        sales_engineer.append(pos)
pd.Series(sales_engineer).unique()
# replace them with just 'sales engineer'
for idx, pos in enumerate(temp):
    if 'sales engineer' in pos:
        temp[idx] = 'sales engineer'
```
### Web Developer
```
# check the variety of web developer values
web_dev = list()
for pos in temp:
    if 'web developer' in pos:
        web_dev.append(pos)
pd.Series(web_dev).unique()
# replace them with just 'web developer'
for idx, pos in enumerate(temp):
    if 'web developer' in pos:
        temp[idx] = 'web developer'
```
### Accounting Supervisor
```
# check the variety of accounting supervisor values
job_list = list()
for pos in temp:
    if 'accounting supervisor' in pos:
        job_list.append(pos)
pd.Series(job_list).unique()
# replace them with just 'accounting supervisor'
for idx, pos in enumerate(temp):
    if 'accounting supervisor' in pos:
        temp[idx] = 'accounting supervisor'
```
### Account Executive
```
# check the variety of account executive values
job_list = list()
for pos in temp:
    if 'account executive' in pos:
        job_list.append(pos)
pd.Series(job_list).unique()
# replace them with just 'account executive'
for idx, pos in enumerate(temp):
    if 'account executive' in pos:
        temp[idx] = 'account executive'
```
### Sales Marketing
```
# check the variety of sales marketing values
job_list = list()
for pos in temp:
    if 'sales marketing' in pos:
        job_list.append(pos)
pd.Series(job_list).unique()
# replace them with just 'sales marketing'
for idx, pos in enumerate(temp):
    if 'sales marketing' in pos:
        temp[idx] = 'sales marketing'
```
### Finance Staff
```
# check the variety of finance staff values
job_list = list()
for pos in temp:
    if 'finance staff' in pos:
        job_list.append(pos)
pd.Series(job_list).unique()
# replace them with just 'finance staff'
for idx, pos in enumerate(temp):
    if 'finance staff' in pos:
        temp[idx] = 'finance staff'
```
### Drafter
```
# check the variety of drafter values
job_list = list()
for pos in temp:
    if 'drafter' in pos:
        job_list.append(pos)
pd.Series(job_list).unique()
# replace them with just 'drafter'
for idx, pos in enumerate(temp):
    if 'drafter' in pos:
        temp[idx] = 'drafter'
```
### Sales Supervisor
```
# check the variety of sales supervisor values
job_list = list()
for pos in temp:
    if 'sales supervisor' in pos:
        job_list.append(pos)
pd.Series(job_list).unique()
# replace them with just 'sales supervisor'
for idx, pos in enumerate(temp):
    if 'sales supervisor' in pos:
        temp[idx] = 'sales supervisor'
```
### Purchasing Staff
```
# check the variety of purchasing staff values
job_list = list()
for pos in temp:
    if 'purchasing staff' in pos:
        job_list.append(pos)
pd.Series(job_list).unique()
# replace them with just 'purchasing staff'
for idx, pos in enumerate(temp):
    if 'purchasing staff' in pos:
        temp[idx] = 'purchasing staff'
```
### IT Programmer
```
# check the variety of it programmer values
job_list = list()
for pos in temp:
    if 'it programmer' in pos:
        job_list.append(pos)
pd.Series(job_list).unique()
# replace them with just 'it programmer'
for idx, pos in enumerate(temp):
    if 'it programmer' in pos:
        temp[idx] = 'it programmer'
```
### IT Support
```
# check the variety of it support values
job_list = list()
for pos in temp:
    if 'it support' in pos:
        job_list.append(pos)
pd.Series(job_list).unique()
# replace them with just 'it support'
for idx, pos in enumerate(temp):
    if 'it support' in pos:
        temp[idx] = 'it support'
```
### Account Manager
```
# check the variety of account manager values
job_list = list()
for pos in temp:
    if 'account manager' in pos:
        job_list.append(pos)
pd.Series(job_list).unique()
# replace them with just 'account manager'
for idx, pos in enumerate(temp):
    if 'account manager' in pos:
        temp[idx] = 'account manager'
```
### Customer Service
```
# check the variety of customer service values
job_list = list()
for pos in temp:
    if 'customer service' in pos:
        job_list.append(pos)
print(pd.Series(job_list).unique())
# replace them with just 'customer service'
for idx, pos in enumerate(temp):
    if 'customer service' in pos:
        temp[idx] = 'customer service'
```
### Brand Manager
```
# check the variety of brand manager values
job_list = list()
for pos in temp:
    if 'brand manager' in pos:
        job_list.append(pos)
print(pd.Series(job_list).unique())
# replace them with just 'brand manager'
for idx, pos in enumerate(temp):
    if 'brand manager' in pos:
        temp[idx] = 'brand manager'
```
### Programmer
```
# check the variety of programmer values
job_list = list()
for pos in temp:
    if 'programmer' in pos:
        job_list.append(pos)
print(pd.Series(job_list).unique())
# replace them with just 'programmer'
for idx, pos in enumerate(temp):
    if 'programmer' in pos:
        temp[idx] = 'programmer'
```
### Supervisor Produksi
```
# check the variety of supervisor produksi (production supervisor) values
job_list = list()
for pos in temp:
    if 'supervisor produksi' in pos:
        job_list.append(pos)
print(pd.Series(job_list).unique())
# replace them with just 'supervisor produksi'
for idx, pos in enumerate(temp):
    if 'supervisor produksi' in pos:
        temp[idx] = 'supervisor produksi'
```
### Personal Assistant
```
# check the variety of personal assistant values
job_list = list()
for pos in temp:
    if 'personal assistant' in pos:
        job_list.append(pos)
print(pd.Series(job_list).unique())
# replace them with just 'personal assistant'
for idx, pos in enumerate(temp):
    if 'personal assistant' in pos:
        temp[idx] = 'personal assistant'
```
## Group job_position by word count
Note: this step has not been completed yet, given how tight the deadline was.
```
job_pos_satu = list()
job_pos_dua = list()
job_pos_banyak = list()
for job in temp:
split = job.split()
if len(split) == 1:
job_pos_satu.append(job)
elif len(split) == 2:
job_pos_dua.append(job)
else:
job_pos_banyak.append(job)
len(job_pos_satu), len(job_pos_dua), len(job_pos_banyak)
# view the unique one-word values
pd.Series(job_pos_satu).value_counts()
# view the unique two-word values
pd.Series(job_pos_dua).value_counts()
# view the unique multi-word values
pd.Series(job_pos_banyak).value_counts()
```
### Mapping one-word jobs
```
# pd.Series(job_pos_satu).value_counts().to_csv('job_pos_satu.csv')
# pd.Series(job_pos_dua).value_counts().to_csv('job_pos_dua.csv')
# pd.Series(job_pos_banyak).value_counts().to_csv('job_pos_banyak.csv')
```
## Final result
```
pd.Series(temp).value_counts()
# pd.Series(temp).value_counts().to_csv('job_position_distinct.csv')
```
> The number of distinct values dropped from the original 10,078 to 7,414.
```
# insert back into the dataframe and check null values
data['job_position'] = temp
data['job_position'].isnull().sum()
data.head()
```
# Export csv
```
data.shape
data.to_csv('jobstreet_clean_tahap2.csv', index=False)
```
| github_jupyter |
```
## Exercise 1.2
class Rectangulo:
def __init__(self, lado1, lado2):
self.lado1 = lado1
self.lado2 = lado2
c1 = Rectangulo(20, 10)
print(c1.lado1)
print(c1.lado2)
c1.lado2=25
print(c1.lado1)
print(c1.lado2)
## Challenge
class Rectangulo:
def __init__(self, lado1, lado2):
self.lado1 = lado1
self.lado2 = lado2
if self.lado1>self.lado2:
self.lado_largo=self.lado1
else :
self.lado_largo=self.lado2
self.area=lado1*lado2
c1 = Rectangulo(20, 30)
print(c1.area)
print(c1.lado_largo)
class Rectangulo:
def __init__(self, lado1, lado2):
self.lado1 = lado1
self.lado2 = lado2
def area (lado):
a=lado.lado1
b=lado.lado2
area=a*b
if(a>b):
lado_largo=a
else:
lado_largo=b
return area,lado_largo
r1= Rectangulo(20,30)
area_rec,lado_largo=area(r1)
print(area_rec)
print(lado_largo)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=1000, centers=2,
random_state=0, cluster_std=1)
plt.scatter(X[:, 0], X[:, 1], c=y, s=25, cmap='bwr')
plt.colorbar()
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=1000, centers=2,
random_state=0, cluster_std=1.3)
# create a tree object
tree = DecisionTreeClassifier(max_depth=3, random_state = 42)
tree.fit(X,y)
instancia = np.array([3,4]) # the first value corresponds to x1, the second to x2
instancia = instancia.reshape(1,-1) # don't worry about the reshape for now; it's a requirement that will become clearer later
y_pred = tree.predict(instancia) # make the prediction
print(y_pred) # print the prediction
np.random.seed(3) # if you want it to be random, change the seed or comment out this line
n = 3
idxs = np.random.randint(X.shape[0], size=3)
instancias = X[idxs,:]
print(instancias)
y_pred = tree.predict(instancias)
print(y_pred)
for i, idx in enumerate(idxs):
    print(f'Instance {idx}. True label: {y[idx]}. Predicted label: {y_pred[i]}')
k = 874
print(X[k,:])
plt.scatter(X[:, 0], X[:, 1], c=y, s=25, cmap='bwr', alpha = 0.5)
plt.colorbar()
plt.scatter(X[k, 0], X[k, 1], c = 'k', s=200, cmap='bwr', marker = '*')
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
def visualize_classifier(model, X, y, ax=None, cmap='bwr'):
ax = ax or plt.gca()
# Plot the training points
ax.scatter(X[:, 0], X[:, 1], c=y, s=30, cmap=cmap,
clim=(y.min(), y.max()), zorder=3, alpha = 0.5)
ax.axis('tight')
ax.set_xlabel('x1')
ax.set_ylabel('x2')
# ax.axis('off')
xlim = ax.get_xlim()
ylim = ax.get_ylim()
xx, yy = np.meshgrid(np.linspace(*xlim, num=200),
np.linspace(*ylim, num=200))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
# Create a color plot with the results
n_classes = len(np.unique(y))
contours = ax.contourf(xx, yy, Z, alpha=0.3,
levels=np.arange(n_classes + 1) - 0.5,
cmap=cmap, clim=(y.min(), y.max()),
zorder=1)
ax.set(xlim=xlim, ylim=ylim)
visualize_classifier(tree, X, y)
from sklearn.metrics import accuracy_score
# predict on our training set
y_pred = tree.predict(X)
# compare with the true labels
accuracy_score(y_pred,y)
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y,y_pred))
from sklearn import datasets
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.datasets import make_blobs
iris = datasets.load_iris()
data = iris.data
data1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']],
columns= iris['feature_names'] + ['target'])
sns.pairplot(data1, hue='target')
data1.tail()
X=data1[['petal width (cm)','sepal width (cm)','sepal length (cm)']]
y=data1['target']
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(max_depth=6, random_state = 42)
tree.fit(X,y)
X.head()
print(tree.classes_)
print(tree.n_classes_)
print(tree.max_features_)
print(tree.feature_importances_)
importances = tree.feature_importances_
columns = X.columns
sns.barplot(columns, importances)
plt.title('Importance of each feature')
plt.show()
y_pred=tree.predict(X)
from sklearn.metrics import accuracy_score
accuracy_score(y,y_pred)
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y,y_pred))
instancia = np.array([2.21,1,3]) # feature values for a single instance
instancia = instancia.reshape(1,-1) # reshape into a single-row 2D array, as predict expects
y_pred = tree.predict(instancia) # make the prediction
if y_pred==2:
    y_pred="I. virginica"
elif y_pred==1:
    y_pred="I. versicolor"
else:
    y_pred="I. setosa"
print(y_pred)
# Logbook 12
X=data1[['petal width (cm)','sepal width (cm)','sepal length (cm)','petal length (cm)']]
y=data1['target']
from sklearn.neighbors import KNeighborsClassifier
kn=KNeighborsClassifier()
kn.fit(X,y)
y_pred=kn.predict(X)
from sklearn.metrics import accuracy_score
accuracy_score(y,y_pred)
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y,y_pred))
plt.figure()
ax = sns.scatterplot(X.iloc[:,0], X.iloc[:,1], hue=y.values, palette='Set2')
plt.legend().remove()
xlim = ax.get_xlim()
ylim = ax.get_ylim()
xx, yy = np.meshgrid(np.linspace(*xlim, num=150),
np.linspace(*ylim, num=150))
Z = tree.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
contours = ax.contourf(xx, yy, Z, alpha=0.3, cmap = 'Set2')
plt.show()
# Exercise 2.1
from sklearn import datasets
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.datasets import make_blobs
data = sns.load_dataset('titanic')  # seaborn dataset names are lowercase
data.head()
X=['sex','fare','age','pclass','survived']
data2 = sns.load_dataset('titanic', usecols=X)
data2
X=data2[['sex','fare','age','pclass']]
y=data2['survived']
def predict_instance(x):
# predicting 0 for everyone already gives 61% accuracy
if x.age < 12:
prediction = 1
elif x.age>70:
prediction = 1
elif x.sex=='female':
prediction = 1
else:
prediction = 0
return prediction
def predict(X):
y_predicted = []
for x in X.itertuples():
y_i = predict_instance(x)
y_predicted.append(y_i)
return y_predicted
y_pred=predict(X)
print(y_pred)
def accuracy(y_predicted, y_real):
mask = np.array(y_predicted) == np.array(y_real)
return mask.sum()/len(y_real)
accuracy(y_pred,y)
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y,y_pred))
J=pd.DataFrame({'sex':['male'],'fare':[1000],'age':[5],'pclass':[1]})
y_pred=predict(J)
print(y_pred)
# Exercise 2.2
muestras_neg, muestras_pos =data2['survived'].value_counts()
N=muestras_neg+muestras_pos
Gini_inicial=1-(muestras_neg/N)**2-(muestras_pos/N)**2
print(Gini_inicial)
mascara1 = data2.sex=='female'
mascara2=data2.sex=='male'
y_female = data2[mascara1]
y_male = data2[mascara2]
muestras_neg1,muestras_pos1=data2[mascara1]['survived'].value_counts()
muestras_neg2,muestras_pos2=data2[mascara2]['survived'].value_counts()
N1=muestras_neg1+muestras_pos1
N2=muestras_neg2+muestras_pos2
Gini_inicial_female=1-(muestras_neg1/N1)**2-(muestras_pos1/N1)**2
print(Gini_inicial_female)
Gini_inicial_male=1-(muestras_neg2/N2)**2-(muestras_pos2/N2)**2
print(Gini_inicial_male)
# Exercise 3
from sklearn import datasets
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.datasets import make_blobs
data = sns.load_dataset('titanic')
data.head()
X=data[['pclass','parch','sibsp','adult_male']]
y=data['survived']
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(max_depth=10, random_state = 42)
tree.fit(X,y)
print(tree.classes_)
print(tree.n_classes_)
print(tree.max_features_)
print(tree.feature_importances_)
importances = tree.feature_importances_
columns = X.columns
sns.barplot(columns, importances)
plt.title('Importance of each feature')
plt.show()
y_pred=tree.predict(X)
from sklearn.metrics import accuracy_score
accuracy_score(y,y_pred)
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y,y_pred))
```
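The hand-computed Gini impurities in Exercise 2.2 all follow the same formula, which can be wrapped in a small helper (a sketch; `gini` is a name introduced here, not part of the exercise):

```python
def gini(class_counts):
    # Gini impurity: 1 minus the sum of squared class proportions
    n = sum(class_counts)
    return 1 - sum((c / n) ** 2 for c in class_counts)

print(gini([100, 0]))   # a pure node has impurity 0.0
print(gini([50, 50]))   # a perfectly mixed binary node has impurity 0.5
```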
| github_jupyter |
```
import pandas as pd
import matplotlib.pyplot as plt
from sqlalchemy import create_engine
import warnings
warnings.filterwarnings('ignore')
postgres_user = 'dsbc_student'
postgres_pw = '7*.8G9QH21'
postgres_host = '142.93.121.174'
postgres_port = '5432'
postgres_db = 'useducation'
engine = create_engine('postgresql://{}:{}@{}:{}/{}'.format(
postgres_user, postgres_pw, postgres_host, postgres_port, postgres_db))
education_df = pd.read_sql_query('select * from useducation',con=engine)
# no need for an open connection,
# as we're only doing a single query
engine.dispose()
```
Filling in missing values
```
fill_list = ["ENROLL", "TOTAL_REVENUE", "FEDERAL_REVENUE",
"STATE_REVENUE", "LOCAL_REVENUE", "TOTAL_EXPENDITURE",
"INSTRUCTION_EXPENDITURE", "SUPPORT_SERVICES_EXPENDITURE",
"OTHER_EXPENDITURE", "CAPITAL_OUTLAY_EXPENDITURE", "GRADES_PK_G",
"GRADES_KG_G", "GRADES_4_G", "GRADES_8_G", "GRADES_12_G", "GRADES_1_8_G",
"GRADES_9_12_G", "GRADES_ALL_G"]
states = education_df["STATE"].unique()
for state in states:
education_df.loc[education_df["STATE"] == state, fill_list] = education_df.loc[education_df["STATE"] == state, fill_list].interpolate()
# we drop the null values after interpolation
education_df.dropna(inplace=True)
education_df.isnull().sum()
```
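In miniature, linear interpolation fills an interior gap using its neighbours; a stdlib sketch of what `Series.interpolate()` does for a single missing value (`interpolate_gap` is a name introduced here for illustration):

```python
def interpolate_gap(values):
    # fill single interior None values linearly between their neighbours
    out = list(values)
    for i in range(1, len(out) - 1):
        if out[i] is None and out[i - 1] is not None and out[i + 1] is not None:
            out[i] = (out[i - 1] + out[i + 1]) / 2
    return out

print(interpolate_gap([100.0, None, 120.0]))  # -> [100.0, 110.0, 120.0]
```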
1. Derive the descriptive statistics of the data and discuss the points you find remarkable.
ANSWER: Local and state revenue are almost equal, while federal revenue is much lower. The standard deviations for state and local revenue are much higher than for federal. Fourth and eighth grades have higher enrollment values than 12th grade, which is interesting to me. Both average math and reading scores increase from grade 4 to grade 8.
2. Choose a state (such as California) and draw a line graph of its total revenues and total expenditures over the years. How do these two variables evolve during the years? Try to explain the peaks and troughs with some state-wise news and information around those dates.
ANSWER: From 2000 through 2016, both variables steadily increase in Nebraska. Since 2012, NE has seen higher expenditures than revenues.
3. In your chosen state, in which of the lessons are the students more successful—math or reading?
4. What are the distributions of the math and reading scores in the sample?
5. Now, look again at the original dataset (before you filled in the missing values). Notice there are too many missing values for math and reading scores. Fill out the missing values using mean, median, and linear interpolation. Then compare the effects of these techniques on the distributions of the score variables.
```
education_df.round(4).describe()
```
2. Choose a state (such as California) and draw a line graph of its total revenues and total expenditures over the years. How do these two variables evolve during the years? Try to explain the peaks and troughs with some state-wise news and information around those dates.
```
plt.plot(education_df.loc[education_df.STATE == "NEBRASKA", "YEAR"],
education_df.loc[education_df.STATE == "NEBRASKA", "TOTAL_REVENUE"], label="total revenue")
plt.plot(education_df.loc[education_df.STATE == "NEBRASKA", "YEAR"],
education_df.loc[education_df.STATE == "NEBRASKA", "TOTAL_EXPENDITURE"], label="total expenditure")
plt.title("NE total revenue and total expenditure")
plt.legend()
plt.show()
```
3. In your chosen state, in which of the lessons are the students more successful—math or reading?
ANSWER: Nebraska students consistently perform better on math tests than on reading tests.
```
print("difference between reading and math scores (4)")
print(education_df.loc[education_df.STATE == "NEBRASKA", "AVG_READING_4_SCORE"] - education_df.loc[education_df.STATE == "NEBRASKA", "AVG_MATH_4_SCORE"])
print("difference between reading and math scores (8)")
print(education_df.loc[education_df.STATE == "NEBRASKA", "AVG_READING_8_SCORE"] - education_df.loc[education_df.STATE == "NEBRASKA", "AVG_MATH_8_SCORE"])
```
4. What are the distributions of the math and reading scores in the sample?
ANSWER: All four averages appear skewed, with some low-average outliers in all four plots. Otherwise, all plots appear somewhat normally distributed.
```
plt.figure(figsize=(20,10))
plt.subplot(2,2,1)
plt.hist(education_df.AVG_READING_4_SCORE.dropna())
plt.title("histogram of {}".format("AVG_READING_4_SCORE"))
plt.subplot(2,2,2)
plt.hist(education_df.AVG_MATH_4_SCORE.dropna())
plt.title("histogram of {}".format("AVG_MATH_4_SCORE"))
plt.subplot(2,2,3)
plt.hist(education_df.AVG_READING_8_SCORE.dropna())
plt.title("histogram of {}".format("AVG_READING_8_SCORE"))
plt.subplot(2,2,4)
plt.hist(education_df.AVG_MATH_8_SCORE.dropna())
plt.title("histogram of {}".format("AVG_MATH_8_SCORE"))
plt.show()
```
5. Now, look again at the original dataset (before you filled in the missing values). Notice there are too many missing values for math and reading scores. Fill out the missing values using mean, median, and linear interpolation. Then compare the effects of these techniques on the distributions of the score variables.
ANSWER: Filling in the average score missing values using mean and median exaggerated the peak of the bell curve at that value's mean and median. Using interpolation did not appear to change the distribution too much compared to the original data. Interpolation or dropping the missing values appear to have the same effect.
```
postgres_user = 'dsbc_student'
postgres_pw = '7*.8G9QH21'
postgres_host = '142.93.121.174'
postgres_port = '5432'
postgres_db = 'useducation'
engine = create_engine('postgresql://{}:{}@{}:{}/{}'.format(
postgres_user, postgres_pw, postgres_host, postgres_port, postgres_db))
education_df = pd.read_sql_query('select * from useducation',con=engine)
# no need for an open connection,
# as we're only doing a single query
engine.dispose()
# 4 histograms of average math 4 score comparing original data, interpolated data, filled with median and filled with mean
plt.figure(figsize=(20,20))
plt.subplot(4,4,1)
plt.hist(education_df.AVG_MATH_4_SCORE.dropna())
plt.title("histogram of {} (original)".format("AVG_MATH_4_SCORE"))
plt.subplot(4,4,2)
plt.hist(education_df.AVG_MATH_4_SCORE.interpolate())
plt.title("histogram of {} (interpolated)".format("AVG_MATH_4_SCORE"))
plt.subplot(4,4,3)
plt.hist(education_df.AVG_MATH_4_SCORE.fillna(education_df.AVG_MATH_4_SCORE.median()))
plt.title("histogram of {} (filled with median)".format("AVG_MATH_4_SCORE"))
plt.subplot(4,4,4)
plt.hist(education_df.AVG_MATH_4_SCORE.fillna(education_df.AVG_MATH_4_SCORE.mean()))
plt.title("histogram of {} (filled with mean)".format("AVG_MATH_4_SCORE"))
# 4 histograms of average reading 4 score comparing original data, interpolated data, filled with median and filled with mean
plt.subplot(4,4,5)
plt.hist(education_df.AVG_READING_4_SCORE.dropna())
plt.title("histogram of {} (original)".format("AVG_READING_4_SCORE"))
plt.subplot(4,4,6)
plt.hist(education_df.AVG_READING_4_SCORE.interpolate())
plt.title("histogram of {} (interpolated)".format("AVG_READING_4_SCORE"))
plt.subplot(4,4,7)
plt.hist(education_df.AVG_READING_4_SCORE.fillna(education_df.AVG_READING_4_SCORE.median()))
plt.title("histogram of {} (filled with median)".format("AVG_READING_4_SCORE"))
plt.subplot(4,4,8)
plt.hist(education_df.AVG_READING_4_SCORE.fillna(education_df.AVG_READING_4_SCORE.mean()))
plt.title("histogram of {} (filled with mean)".format("AVG_READING_4_SCORE"))
# 4 histograms of average math 8 score comparing original data, interpolated data, filled with median and filled with mean
plt.subplot(4,4,9)
plt.hist(education_df.AVG_MATH_8_SCORE.dropna())
plt.title("histogram of {} (original)".format("AVG_MATH_8_SCORE"))
plt.subplot(4,4,10)
plt.hist(education_df.AVG_MATH_8_SCORE.interpolate())
plt.title("histogram of {} (interpolated)".format("AVG_MATH_8_SCORE"))
plt.subplot(4,4,11)
plt.hist(education_df.AVG_MATH_8_SCORE.fillna(education_df.AVG_MATH_8_SCORE.median()))
plt.title("histogram of {} (filled with median)".format("AVG_MATH_8_SCORE"))
plt.subplot(4,4,12)
plt.hist(education_df.AVG_MATH_8_SCORE.fillna(education_df.AVG_MATH_8_SCORE.mean()))
plt.title("histogram of {} (filled with mean)".format("AVG_MATH_8_SCORE"))
# 4 histograms of average reading 8 score comparing original data, interpolated data, filled with median and filled with mean
plt.subplot(4,4,13)
plt.hist(education_df.AVG_READING_8_SCORE.dropna())
plt.title("histogram of {} (original)".format("AVG_READING_8_SCORE"))
plt.subplot(4,4,14)
plt.hist(education_df.AVG_READING_8_SCORE.interpolate().dropna())
plt.title("histogram of {} (interpolated)".format("AVG_READING_8_SCORE"))
plt.subplot(4,4,15)
plt.hist(education_df.AVG_READING_8_SCORE.fillna(education_df.AVG_READING_8_SCORE.median()))
plt.title("histogram of {} (filled with median)".format("AVG_READING_8_SCORE"))
plt.subplot(4,4,16)
plt.hist(education_df.AVG_READING_8_SCORE.fillna(education_df.AVG_READING_8_SCORE.mean()))
plt.title("histogram of {} (filled with mean)".format("AVG_READING_8_SCORE"))
plt.tight_layout()
plt.show()
```
| github_jupyter |
# Node2Vec representation learning with Stellargraph components
<table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/embeddings/keras-node2vec-embeddings.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/embeddings/keras-node2vec-embeddings.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table>
This example demonstrates how to apply components from the StellarGraph library to perform representation learning via Node2Vec. It uses a Keras implementation of Node2Vec available in StellarGraph instead of the reference implementation provided by ``gensim``. This implementation provides flexible interfaces to downstream tasks for end-to-end learning.
<a name="refs"></a>
**References**
[1] Node2Vec: Scalable Feature Learning for Networks. A. Grover, J. Leskovec. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2016. ([link](https://snap.stanford.edu/node2vec/))
[2] Distributed representations of words and phrases and their compositionality. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. In Advances in Neural Information Processing Systems (NIPS), pp. 3111-3119, 2013. ([link](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf))
[3] word2vec Parameter Learning Explained. X. Rong. arXiv preprint arXiv:1411.2738. 2014 Nov 11. ([link](https://arxiv.org/pdf/1411.2738.pdf))
## Introduction
Following word2vec [2,3], for each (``target``,``context``) node pair $(v_i,v_j)$ collected from random walks, we learn the representation for the target node $v_i$ by using it to predict the existence of context node $v_j$, with the following three-layer neural network.

Node $v_i$'s representation in the hidden layer is obtained by multiplying $v_i$'s one-hot representation in the input layer with the input-to-hidden weight matrix $W_{in}$, which is equivalent to looking up the $i$th row of $W_{in}$. The existence probability of each node conditioned on node $v_i$ is output in the output layer, obtained by multiplying $v_i$'s hidden-layer representation with the hidden-to-output weight matrix $W_{out}$ followed by a softmax activation. To capture the ``target-context`` relation between $v_i$ and $v_j$, we need to maximize the probability $\mathrm{P}(v_j|v_i)$. However, computing $\mathrm{P}(v_j|v_i)$ is time consuming, as it involves the matrix multiplication between $v_i$'s hidden-layer representation and the hidden-to-output weight matrix $W_{out}$.
To speed up computation, we adopt the negative sampling strategy [2,3]. For each (``target``, ``context``) node pair, we sample a negative node $v_k$ that is not one of $v_i$'s contexts. To obtain the output, instead of multiplying $v_i$'s hidden-layer representation with the full hidden-to-output weight matrix $W_{out}$ followed by a softmax activation, we only calculate the dot products between $v_i$'s hidden-layer representation and the $j$th and $k$th columns of $W_{out}$, each followed by a sigmoid activation. According to [3], the original objective of maximizing $\mathrm{P}(v_j|v_i)$ can be approximated by minimizing the cross entropy between $v_j$'s and $v_k$'s outputs and their ground-truth labels (1 for $v_j$ and 0 for $v_k$).
Following [2,3], we denote the rows of the input-to-hidden weight matrix $W_{in}$ as ``input_embeddings`` and the columns of the hidden-to-output weight matrix $W_{out}$ as ``output_embeddings``. To build the Node2Vec model, we need to look up ``input_embeddings`` for target nodes and ``output_embeddings`` for context nodes and calculate their inner product, followed by a sigmoid activation.
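The negative-sampling objective described above can be sketched numerically. The toy embedding matrices below are made-up random stand-ins for $W_{in}$ and $W_{out}$, purely to illustrate the dot-product-plus-sigmoid computation and the resulting cross-entropy loss:

```python
import numpy as np

rng = np.random.default_rng(0)
emb_dim, n_nodes = 8, 5

# made-up stand-ins for the trainable weight matrices
W_in = rng.normal(size=(n_nodes, emb_dim))   # rows are the input_embeddings
W_out = rng.normal(size=(emb_dim, n_nodes))  # columns are the output_embeddings

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pair_loss(i, j, k):
    """Negative-sampling loss for target v_i, context v_j and negative node v_k."""
    h = W_in[i]                       # hidden-layer representation of v_i
    p_pos = sigmoid(h @ W_out[:, j])  # predicted probability that v_j is a context
    p_neg = sigmoid(h @ W_out[:, k])  # predicted probability that v_k is a context
    # binary cross-entropy against ground-truth labels 1 (v_j) and 0 (v_k)
    return -(np.log(p_pos) + np.log(1.0 - p_neg))

print(pair_loss(0, 1, 3))
```

In the full model these embeddings are the trainable weights updated by gradient descent; here they stay fixed.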
```
# install StellarGraph if running on Google Colab
import sys
if 'google.colab' in sys.modules:
%pip install -q stellargraph[demos]==1.1.0b
# verify that we're using the correct version of StellarGraph for this notebook
import stellargraph as sg
try:
sg.utils.validate_notebook_version("1.1.0b")
except AttributeError:
raise ValueError(
f"This notebook requires StellarGraph version 1.1.0b, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>."
) from None
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
import os
import networkx as nx
import numpy as np
import pandas as pd
from tensorflow import keras
from stellargraph import StellarGraph
from stellargraph.data import BiasedRandomWalk
from stellargraph.data import UnsupervisedSampler
from stellargraph.mapper import Node2VecLinkGenerator, Node2VecNodeGenerator
from stellargraph.layer import Node2Vec, link_classification
from stellargraph import datasets
from IPython.display import display, HTML
%matplotlib inline
```
### Dataset
For clarity, we use only the largest connected component, ignoring isolated nodes and subgraphs; having these in the data does not prevent the algorithm from running and producing valid results.
```
dataset = datasets.Cora()
display(HTML(dataset.description))
G, subjects = dataset.load(largest_connected_component_only=True)
print(G.info())
```
### The Node2Vec algorithm
The Node2Vec algorithm introduced in [[1]](#refs) is a 2-step representation learning algorithm. The two steps are:
1. Use random walks to generate sentences from a graph. A sentence is a list of node ids. The set of all sentences makes a corpus.
2. The corpus is then used to learn an embedding vector for each node in the graph. Each node id is considered a unique word/token in a dictionary that has size equal to the number of nodes in the graph. The Word2Vec algorithm [[2]](#refs) is used for calculating the embedding vectors.
In this implementation, we train the Node2Vec algorithm in the following two steps:
1. Generate a set of (`target`, `context`) node pairs by starting biased random walks of a fixed length from each node. The starting nodes are taken as the target nodes and the following nodes in the biased random walks are taken as context nodes. For each (`target`, `context`) node pair, we generate one negative node pair.
2. Train the Node2Vec algorithm through minimizing cross-entropy loss for `target-context` pair prediction, with the predictive value obtained by performing the dot product of the 'input embedding' of the target node and the 'output embedding' of the context node, followed by a sigmoid activation.
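As a rough illustration of step 1, here is how (`target`, `context`) pairs with one negative sample each might be generated from a single toy walk. The walk itself is hard-coded rather than biased-random, and the negative is drawn uniformly from nodes outside the walk, which is a simplification of what the library does:

```python
import random

random.seed(0)
nodes = ["a", "b", "c", "d", "e"]
walk = ["a", "b", "c", "b", "a"]   # one toy walk starting at node "a"

pairs, labels = [], []
target = walk[0]                   # the starting node is the target
for context in walk[1:]:           # following nodes are positive contexts
    pairs.append((target, context))
    labels.append(1)
    # one negative sample per positive pair: a node that never appears in the walk
    negative = random.choice([n for n in nodes if n not in walk])
    pairs.append((target, negative))
    labels.append(0)

print(pairs)
print(labels)
```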
Specify the optional parameter values: the number of walks per node and the length of each walk. Here, for running efficiency, we set `walk_number` to 100 and `walk_length` to 5. Larger values can be used to achieve better performance.
```
walk_number = 100
walk_length = 5
```
Create the biased random walker to perform context node sampling, with the specified parameters.
```
walker = BiasedRandomWalk(
G,
n=walk_number,
length=walk_length,
p=0.5, # defines probability, 1/p, of returning to source node
q=2.0, # defines probability, 1/q, for moving to a node away from the source node
)
```
Create the UnsupervisedSampler instance with the biased random walker.
```
unsupervised_samples = UnsupervisedSampler(G, nodes=list(G.nodes()), walker=walker)
```
Set the batch size and the number of epochs.
```
batch_size = 50
epochs = 2
```
Define a Node2Vec training generator, which generates a batch of (index of target node, index of context node, label of node pair) samples per iteration.
```
generator = Node2VecLinkGenerator(G, batch_size)
```
Build the Node2Vec model, with the dimension of learned node representations set to 128.
```
emb_size = 128
node2vec = Node2Vec(emb_size, generator=generator)
x_inp, x_out = node2vec.in_out_tensors()
```
Use the link_classification function to generate the prediction, with the 'dot' edge embedding generation method and the 'sigmoid' activation, which actually performs the dot product of the ``input embedding`` of the target node and the ``output embedding`` of the context node followed by a sigmoid activation.
```
prediction = link_classification(
output_dim=1, output_act="sigmoid", edge_embedding_method="dot"
)(x_out)
```
Stack the Node2Vec encoder and prediction layer into a Keras model. Our generator will produce batches of positive and negative context pairs as inputs to the model. Minimizing the binary crossentropy between the outputs and the provided ground truth is much like a regular binary classification task.
```
model = keras.Model(inputs=x_inp, outputs=prediction)
model.compile(
optimizer=keras.optimizers.Adam(lr=1e-3),
loss=keras.losses.binary_crossentropy,
metrics=[keras.metrics.binary_accuracy],
)
```
Train the model.
```
history = model.fit(
generator.flow(unsupervised_samples),
epochs=epochs,
verbose=1,
use_multiprocessing=False,
workers=4,
shuffle=True,
)
```
## Visualise Node Embeddings
Build the node-based model for predicting node representations from node ids and the learned parameters. Below, a Keras model is constructed with `x_inp[0]` as input and `x_out[0]` as output. Note that this model's weights are the same as those of the corresponding node encoder in the previously trained node pair classifier.
```
x_inp_src = x_inp[0]
x_out_src = x_out[0]
embedding_model = keras.Model(inputs=x_inp_src, outputs=x_out_src)
```
Get the node embeddings from node ids.
```
node_gen = Node2VecNodeGenerator(G, batch_size).flow(G.nodes())
node_embeddings = embedding_model.predict(node_gen, workers=4, verbose=1)
```
Transform the embeddings to 2d space for visualisation.
```
transform = TSNE # PCA
trans = transform(n_components=2)
node_embeddings_2d = trans.fit_transform(node_embeddings)
# draw the embedding points, coloring them by the target label (paper subject)
alpha = 0.7
label_map = {l: i for i, l in enumerate(np.unique(subjects))}
node_colours = [label_map[target] for target in subjects]
plt.figure(figsize=(7, 7))
plt.axes().set(aspect="equal")
plt.scatter(
node_embeddings_2d[:, 0],
node_embeddings_2d[:, 1],
c=node_colours,
cmap="jet",
alpha=alpha,
)
plt.title("{} visualization of node embeddings".format(transform.__name__))
plt.show()
```
### Downstream task
The node embeddings calculated using Node2Vec can be used as feature vectors in a downstream task such as node attribute inference (e.g., inferring the subject of a paper in Cora), community detection (clustering of nodes based on the similarity of their embedding vectors), and link prediction (e.g., prediction of citation links between papers).
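For example, node attribute inference could be sketched by fitting a simple classifier on the embeddings. The snippet below uses random stand-ins for the notebook's `node_embeddings` and `subjects` so it runs standalone; in practice you would pass the real arrays:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# random stand-ins for the notebook's `node_embeddings` and `subjects`
node_embeddings = rng.normal(size=(200, 128))
subjects = rng.choice(["Neural_Networks", "Theory", "Rule_Learning"], size=200)

X_train, X_test, y_train, y_test = train_test_split(
    node_embeddings, subjects, train_size=0.75, random_state=0
)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

With random embeddings the accuracy is near chance; with the trained Node2Vec embeddings it should be substantially higher.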
```
"""
# Start Docker Container
docker run -dit --rm --name adv_dsi_lab_3 -p 8888:8888 -e JUPYTER_ENABLE_LAB=yes -v ~/Projects/adv_dsi/adv_dsi_lab_3:/home/jovyan/work -v ~/.aws:/home/jovyan/.aws -v ~/Projects/adv_dsi/src:/home/jovyan/work/src xgboost-notebook:latest
docker logs --tail 50 adv_dsi_lab_3
# Initialise the repo
git init
# In your local repo adv_dsi_lab_3, link it with GitHub
git remote add origin git@github.com:CazMayhem/adv_dsi_lab_3.git
# if you make a mistake
# git remote remove origin
# Push Content - Add your changes to git staging area
# Create the snapshot of your repository and add a description
# Push your snapshot to Github
git add .
git commit -m "first commit lab 3"
git push https://***************@github.com/CazMayhem/adv_dsi_lab_3.git
# Preventing push to master branch
git config branch.master.pushRemote no_push
# Check out to the master branch, Pull the latest updates
git checkout master
git pull https://***************@github.com/CazMayhem/adv_dsi_lab_3.git
# Create a new git branch called data_prep
git checkout -b data_prep
# Navigate the folder notebooks and create a new jupyter notebook called 1_data_prep.ipynb
"""
```
### 2. Load and Explore Dataset
```
# Import the boto3, pandas and numpy packages
import pandas as pd
import numpy as np
import boto3
import timeit
from datetime import datetime
import os
import warnings
warnings.filterwarnings("ignore")
# os.environ['AWS_PROFILE'] = "S3_dev"
def list_bucket_contents(bucket, match=''):
s3_resource = boto3.resource('s3')
bucket_resource = s3_resource.Bucket(bucket)
for key in bucket_resource.objects.all():
if match in key.key:
print(key.key)
bucket_name = 'nyc-tlc'
list_bucket_contents(bucket_name,'2020')
# Call the function you defined to list the file of the 'nyc-tlc' bucket that contains the string '2020'
# Load the file named trip data/yellow_tripdata_2020-04.csv into a dataframe called df.
# Specify s3:// as prefix for the file url
# variable `bucket_name` that will contain the name of the bucket `nyc-tlc` dataset
bucket_name = 'nyc-tlc'
#------------------------------------------------------
starttime = timeit.default_timer()
df = pd.read_csv(f's3://{bucket_name}/trip data/yellow_tripdata_2020-04.csv')
print('\nTime taken is {:0.2f}'.format(timeit.default_timer() - starttime))
# Display the dimensions (shape) of df
print(df.shape)
# Display the summary (info) of df
print(df.info())
# Display the descriptive statistics of df
print(df.describe())
df.to_csv('../data/raw/yellow_tripdata_2020-04.csv')
```
### 3. Prepare Data
Create a copy of df and save it into a variable called df_cleaned
Launch magic commands to automatically reload modules
```
# Create a copy of df and save it into a variable called df_cleaned
df_cleaned = df.copy()
# Launch magic commands to automatically reload modules
%load_ext autoreload
%autoreload 2
```
**[3.4]** Import your new function `convert_to_date` from `src.features.dates`
**[3.5]** Convert the column `tpep_pickup_datetime`, `tpep_dropoff_datetime` with your function `convert_to_date`
**[3.6]** Create a new column `trip_duration` that corresponds to the duration of the trip in seconds (difference between `tpep_dropoff_datetime` and `tpep_pickup_datetime`)
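The `convert_to_date` helper lives in `src/features/dates` and is not shown in this notebook; a minimal sketch of what it might look like, assuming it simply applies `pd.to_datetime` to each listed column, is:

```python
import pandas as pd

def convert_to_date(df, cols):
    """Convert the given columns of a dataframe to datetime dtype."""
    for col in cols:
        df[col] = pd.to_datetime(df[col])
    return df

# usage with a toy frame
toy = pd.DataFrame({"tpep_pickup_datetime": ["2020-04-01 10:00:00"]})
toy = convert_to_date(toy, ["tpep_pickup_datetime"])
print(toy.dtypes)
```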
```
# Import your new function convert_to_date from src.features.dates
from src.features.dates import convert_to_date
# Convert the column tpep_pickup_datetime, tpep_dropoff_datetime with your function convert_to_date
df_cleaned = convert_to_date(df_cleaned, ['tpep_pickup_datetime', 'tpep_dropoff_datetime'])
# Create a new column trip_duration that corresponds to the duration of the trip in seconds (difference between tpep_dropoff_datetime and tpep_pickup_datetime)
df_cleaned['trip_duration'] = (df_cleaned['tpep_dropoff_datetime'] - df_cleaned['tpep_pickup_datetime']).dt.total_seconds()
```
### Bins - to convert the regression task into a classification problem
**[3.7]** Convert the `trip_duration` column into 4 bins using the edges [0, 300, 600, 1800, 100000]
```
df_cleaned['trip_duration'] = pd.cut(df_cleaned['trip_duration'], bins=[0, 300, 600, 1800, 100000], labels=['x<5min', 'x<10min', 'x<30min', 'x>=30min'])
```
**[3.8]** Extract the day of month component from `tpep_pickup_datetime` and save the results in the column `tpep_pickup_dayofmonth`
```
# Extract the day of month component from tpep_pickup_datetime and save the results in the column tpep_pickup_dayofmonth
df_cleaned['tpep_pickup_dayofmonth'] = df_cleaned['tpep_pickup_datetime'].dt.day
```
**[3.9]** Extract the hour component from `tpep_pickup_datetime` and save the results in the column `tpep_pickup_hourofday`
```
# Extract the hour component from tpep_pickup_datetime and save the results in the column tpep_pickup_hourofday
df_cleaned['tpep_pickup_hourofday'] = df_cleaned['tpep_pickup_datetime'].dt.hour
```
**[3.10]** Extract the day of week component from `tpep_pickup_datetime` and save the results in the column `tpep_pickup_dayofweek`
```
# Extract the day of week component from tpep_pickup_datetime and save the results in the column tpep_pickup_dayofweek
df_cleaned['tpep_pickup_dayofweek'] = df_cleaned['tpep_pickup_datetime'].dt.dayofweek
```
**[3.11]** Perform One-Hot encoding on the categorical features (`VendorID`, `RatecodeID`, `store_and_fwd_flag`)
```
# Perform One-Hot encoding on the categorical features (VendorID, RatecodeID, store_and_fwd_flag)
df_cleaned = pd.get_dummies(df_cleaned, columns=['VendorID', 'RatecodeID', 'store_and_fwd_flag'])
```
**[3.11a]** Check for Null values in the target rows and remove
```
# view target values - are there any nulls or other weird values?
pd.DataFrame(df_cleaned['trip_duration']).sort_values(by=['trip_duration']).to_csv('../data/target.csv')
# are there any NULL values in the Target rows
print('Null target values :',df_cleaned['trip_duration'].isnull().values.sum() )
# drop the null target values as it causes the predictions / scoring to fail
pre_delete = len(df_cleaned)
print('before drop target na:',len(df_cleaned))
df_cleaned = df_cleaned.dropna(subset=['trip_duration'])
print('after drop target na :',len(df_cleaned))
print('rows dropped :',pre_delete-len(df_cleaned))
```
**[3.12]** Drop the columns `tpep_pickup_datetime`, `tpep_dropoff_datetime`, `PULocationID`, `DOLocationID`
**[3.13]** Save the prepared dataframe in the `data/interim` folder
```
# Drop the columns `tpep_pickup_datetime`, `tpep_dropoff_datetime`, `PULocationID`, `DOLocationID`
df_cleaned.drop(['tpep_pickup_datetime', 'tpep_dropoff_datetime', 'PULocationID', 'DOLocationID'], axis=1, inplace=True)
# Save the prepared dataframe in the `data/interim` folder
df_cleaned.to_csv('../data/interim/yellow_tripdata_2020-04_prepared.csv')
```
## 4. Split the Dataset
**[4.1]** In the file `src/data/sets.py` create a function called `pop_target` with the following logic:
- input parameters: dataframe (`df`), target column name (`target_col`), flag to convert to Numpy arrays, which is False by default (`to_numpy`)
- logic: extract the target variable from the input dataframe
- output parameters: features and target
```
# create a function called pop_target
# Create a subset function
```
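A minimal sketch of what `pop_target` might look like, given the input and output parameters described above (an illustration, not the lab's reference implementation):

```python
import pandas as pd

def pop_target(df, target_col, to_numpy=False):
    """Extract the target column, returning (features, target)."""
    df = df.copy()
    target = df.pop(target_col)
    if to_numpy:
        return df.to_numpy(), target.to_numpy()
    return df, target

# usage with a toy frame
toy = pd.DataFrame({"a": [1, 2, 3], "trip_duration": ["x<5min", "x<10min", "x<30min"]})
features, target = pop_target(toy, "trip_duration")
print(features.columns.tolist(), target.tolist())
```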
**[4.3]** Import your new function `split_sets_random` and split the data into several sets as Numpy arrays
**[4.4]** Import save_sets from src.data.sets and save the sets into the folder `data/processed`
```
# Import split_sets_random from src.data.sets and split the data into several sets as Numpy arrays
from src.data.sets import split_sets_random
X_train, y_train, X_val, y_val, X_test, y_test = split_sets_random(df_cleaned, target_col='trip_duration', test_ratio=0.2, to_numpy=True)
# Import save_sets from src.data.sets and save the sets into the folder data/processed
from src.data.sets import save_sets
save_sets(X_train, y_train, X_val, y_val, X_test, y_test, path='../data/processed/')
```
## 5. Baseline Model
**[5.1]** In the `src/models` folder, create a script called `null.py` and define a class called `NullModel`
```
# Import NullModel from src.models.null
from src.models.null import NullModel
# Instantiate a NullModel with target_type='classification and save it into a variable called base_model
base_model = NullModel(target_type="classification")
```
**[5.3]** Instantiate a `NullModel` with `target_type='classification'` and save it into a variable called `base_model`
**[5.4]** Make a prediction using `fit_predict()` and save the results in a variable called `y_base`
```
# Make a prediction using fit_predict() and save the results in a variable called y_base
y_base = base_model.fit_predict(y_train)
# pd.DataFrame(y_train).sort_values(by=[0]).to_csv('../data/y_train.csv')
```
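The `NullModel` class itself is not shown in this notebook; a minimal sketch, assuming the classification baseline simply predicts the most frequent class of the training target, is:

```python
import numpy as np

class NullModel:
    """Baseline model: for classification, always predict the most frequent class."""
    def __init__(self, target_type="classification"):
        self.target_type = target_type
        self.pred_value = None

    def fit(self, y):
        values, counts = np.unique(y, return_counts=True)
        self.pred_value = values[np.argmax(counts)]
        return self

    def predict(self, y):
        return np.full(len(y), self.pred_value)

    def fit_predict(self, y):
        return self.fit(y).predict(y)

y_base = NullModel().fit_predict(np.array(["a", "b", "a", "a"]))
print(y_base)
```

A regression variant would typically predict the mean or median of the training target instead.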
**[5.5]** In the `src/models/performance.py` file, create a function called `print_class_perf` with the following logic:
- input parameters: predicted target (`y_preds`), actual target (`y_actuals`) and name of the set (`set_name`)
- logic: print the Accuracy and F1 score for the provided data
- output parameters: None
```
# In the src/models/performance.py file, create a function called print_class_perf
from src.models.performance import print_class_perf
# Display the Accuracy and F1 scores of this baseline model on the training set
print_class_perf(y_preds=y_base, y_actuals=np.array(y_train), set_name='Training', average='weighted')
"""
# 6. Push changes
# Add your changes to git staging area
# Create the snapshot of your repository and add a description
# Push your snapshot to Github
git add .
git commit -m "commit 2 S3 resource 'nyc-tlc'"
git push https://*********@github.com/CazMayhem/adv_dsi_lab_3.git
# Check out to the master branch
# Pull the latest updates
git checkout master
git pull https://*********@github.com/CazMayhem/adv_dsi_lab_3.git
# Check out to the data_prep branch
# Merge the master branch and push your changes
git checkout data_prep
git merge master
git push https://*********@github.com/CazMayhem/adv_dsi_lab_3.git
"""
```
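The `print_class_perf` helper imported above is defined in `src/models/performance.py` and not shown here; a minimal sketch matching the described logic, assuming it wraps scikit-learn's `accuracy_score` and `f1_score` (and returning the scores here for easy checking, though the spec returns None), is:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def print_class_perf(y_preds, y_actuals, set_name=None, average="weighted"):
    """Print the Accuracy and F1 score for the provided data."""
    acc = accuracy_score(y_actuals, y_preds)
    f1 = f1_score(y_actuals, y_preds, average=average)
    print(f"Accuracy {set_name}: {acc}")
    print(f"F1 {set_name}: {f1}")
    return acc, f1

acc, f1 = print_class_perf(
    y_preds=np.array([1, 0, 1, 1]),
    y_actuals=np.array([1, 0, 0, 1]),
    set_name="Training",
)
```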
**[6.8]** Go to Github and merge the branch after reviewing the code and fixing any conflict
**[6.9]** Stop the Docker container
```
# Stop the Docker container
# docker stop adv_dsi_lab_3
```
# Tutorial: DESI spectral fitting with `provabgs`
```
# lets install the python package `provabgs`, a python package for generating the PRObabilistic Value-Added BGS (PROVABGS)
!pip install git+https://github.com/changhoonhahn/provabgs.git --upgrade --user
!pip install zeus-mcmc --user
import numpy as np
from provabgs import infer as Infer
from provabgs import models as Models
from provabgs import flux_calib as FluxCalib
# -- plotting --
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['axes.linewidth'] = 1.5
mpl.rcParams['axes.xmargin'] = 1
mpl.rcParams['xtick.labelsize'] = 'x-large'
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['xtick.major.width'] = 1.5
mpl.rcParams['ytick.labelsize'] = 'x-large'
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['ytick.major.width'] = 1.5
mpl.rcParams['legend.frameon'] = False
# read in DESI Cascades spectra from TILE 80612
from desispec.io import read_spectra
spectra = read_spectra('/global/cfs/cdirs/desi/spectro/redux/cascades/tiles/80612/deep/coadd-0-80612-deep.fits')
igal = 10
from astropy.table import Table
zbest = Table.read('/global/cfs/cdirs/desi/spectro/redux/cascades/tiles/80612/deep/zbest-0-80612-deep.fits', hdu=1)
zred = zbest['Z'][igal]
print('z=%f' % zred)
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
sub.plot(spectra.wave['b'], spectra.flux['b'][igal])
sub.plot(spectra.wave['r'], spectra.flux['r'][igal])
sub.plot(spectra.wave['z'], spectra.flux['z'][igal])
sub.set_xlim(spectra.wave['b'].min(), spectra.wave['z'].max())
sub.set_ylim(0, 5)
# declare prior
priors = Infer.load_priors([
Infer.UniformPrior(9., 12, label='sed'),
Infer.FlatDirichletPrior(4, label='sed'),
Infer.UniformPrior(np.array([6.9e-5, 6.9e-5, 0., 0., -2.2]), np.array([7.3e-3, 7.3e-3, 3., 4., 0.4]), label='sed'),
Infer.UniformPrior(np.array([0.9, 0.9, 0.9]), np.array([1.1, 1.1, 1.1]), label='flux_calib') # flux calibration variables
])
# declare model
m_nmf = Models.NMF(burst=False, emulator=True)
# declare flux calibration
fluxcalib = FluxCalib.constant_flux_DESI_arms
desi_mcmc = Infer.desiMCMC(
model=m_nmf,
flux_calib=fluxcalib,
prior=priors
)
mcmc = desi_mcmc.run(
wave_obs=[spectra.wave['b'], spectra.wave['r'], spectra.wave['z']],
flux_obs=[spectra.flux['b'][igal], spectra.flux['r'][igal], spectra.flux['z'][igal]],
flux_ivar_obs=[spectra.ivar['b'][igal], spectra.ivar['r'][igal], spectra.ivar['z'][igal]],
zred=zred,
sampler='zeus',
nwalkers=100,
burnin=100,
opt_maxiter=10000,
niter=1000,
debug=True)
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
sub.plot(spectra.wave['b'], spectra.flux['b'][igal])
sub.plot(spectra.wave['r'], spectra.flux['r'][igal])
sub.plot(spectra.wave['z'], spectra.flux['z'][igal])
sub.plot(mcmc['wavelength_obs'], mcmc['flux_spec_model'], c='k', ls='--')
sub.set_xlim(spectra.wave['b'].min(), spectra.wave['z'].max())
sub.set_ylim(0, 5)
```
# Human Rights Considered NLP
### **Overview**
This notebook creates a training dataset using data sourced from the [Police Brutality 2020 API](https://github.com/2020PB/police-brutality) by adding category labels for types of force and the people involved in incidents using [Snorkel](https://www.snorkel.org/) for NLP.
Built on the original notebook by [Axel Corro](https://github.com/axefx) sourced from the HRF Team C DS [repository](https://github.com/Lambda-School-Labs/Labs25-Human_Rights_First-TeamC-DS/blob/main/notebooks/snorkel_hrf.ipynb).
# Imports
```
!pip install snorkel
import pandas as pd
from snorkel.labeling import labeling_function
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
import sys
from google.colab import files
# using our cleaned processed data
df = pd.read_csv('https://raw.githubusercontent.com/Lambda-School-Labs/Labs25-Human_Rights_First-TeamC-DS/main/Data/pv_incidents.csv', na_values=False)
df2 = df.filter(['text'], axis=1)
df2['text'] = df2['text'].astype(str)
```
# Use of Force Tags
### Categories of force:
- **Presence**: Police show up and their presence is enough to de-escalate. This is ideal.
- **Verbalization**: Police use voice commands; force is non-physical.
- **Empty-hand control, soft technique**: Officers use grabs, holds and joint locks to restrain an individual (keywords: shove, chase, spit, raid, push).
- **Empty-hand control, hard technique**: Officers use punches and kicks to restrain an individual.
- **Blunt impact**: Officers may use a baton to immobilize a combative person (keywords: struck, shield, beat).
- **Projectiles**: Projectiles shot or launched by police at civilians. Includes "less lethal" munitions such as rubber bullets, bean bag rounds, water hoses, and flash grenades, as well as deadly weapons such as firearms.
- **Chemical**: Officers use chemical sprays or projectiles embedded with chemicals to restrain an individual (e.g., pepper spray).
- **Conducted energy devices**: Officers may use CEDs to immobilize an individual. CEDs discharge a high-voltage, low-amperage jolt of electricity at a distance.
- **Miscellaneous**: LRAD (long-range acoustic device), sound cannon, sonic weapon.
## Presence category
Police presence is enough to de-escalate. This is ideal.
```
PRESENCE = 1
NOT_PRESENCE = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_swarm(x):
return PRESENCE if 'swarm' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_show(x):
return PRESENCE if 'show' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_arrive(x):
return PRESENCE if 'arrive' in x.text.lower() else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_swarm, lf_keyword_show, lf_keyword_arrive]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["presence_label"] = label_model.predict(L=L_train, tie_break_policy="abstain")
```
## Verbalization Category
Police use voice commands; force is non-physical.
```
VERBAL = 1
NOT_VERBAL = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_shout(x):
return VERBAL if 'shout' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_order(x):
return VERBAL if 'order' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_loudspeaker(x):
return VERBAL if 'loudspeaker' in x.text.lower() else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_shout, lf_keyword_order,lf_keyword_loudspeaker]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["verbal_label"] = label_model.predict(L=L_train, tie_break_policy="abstain")
# compute each LF's coverage (fraction of rows it labeled) without clobbering the LFs themselves
coverage_shout, coverage_order, coverage_loudspeaker = (L_train != ABSTAIN).mean(axis=0)
print(f"lf_keyword_shout coverage: {coverage_shout * 100:.1f}%")
print(f"lf_keyword_order coverage: {coverage_order * 100:.1f}%")
print(f"lf_keyword_loudspeaker coverage: {coverage_loudspeaker * 100:.1f}%")
df2[df2['verbal_label']==1]
```
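The keyword labeling functions above all follow the same pattern, so they could be generated from a keyword list with a small factory instead of being written out one by one. The sketch below uses plain closures for illustration; to plug such functions into `PandasLFApplier` you would wrap them with Snorkel's `labeling_function` decorator or `LabelingFunction` class:

```python
from types import SimpleNamespace

ABSTAIN, VERBAL = -1, 1

def make_keyword_lf(keyword, label):
    """Build a labeling function that fires `label` when `keyword` appears in x.text."""
    def lf(x):
        return label if keyword in x.text.lower() else ABSTAIN
    lf.__name__ = f"lf_keyword_{keyword}"
    return lf

lfs = [make_keyword_lf(k, VERBAL) for k in ["shout", "order", "loudspeaker"]]

# mimic one dataframe row with a `text` attribute
x = SimpleNamespace(text="Police ORDER the crowd to disperse")
print([lf(x) for lf in lfs])  # -> [-1, 1, -1]
```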
## Empty-hand Control - Soft Technique
Officers use grabs, holds and joint locks to restrain an individual (keywords: shove, chase, spit, raid, push).
```
EHCSOFT = 1
NOT_EHCSOFT = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_shove(x):
return EHCSOFT if 'shove' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_grabs(x):
return EHCSOFT if 'grabs' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_holds(x):
return EHCSOFT if 'holds' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_arrest(x):
return EHCSOFT if 'arrest' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_spit(x):
return EHCSOFT if 'spit' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_raid(x):
return EHCSOFT if 'raid' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_push(x):
return EHCSOFT if 'push' in x.text.lower() else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_shove, lf_keyword_grabs, lf_keyword_spit, lf_keyword_raid,
lf_keyword_push, lf_keyword_holds, lf_keyword_arrest]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["ehc-soft_technique"] = label_model.predict(L=L_train, tie_break_policy="abstain")
df2[df2['ehc-soft_technique']==1]
```
## Empty-hand Control - Hard Technique
Officers use bodily force (punches and kicks or asphyxiation) to restrain an individual.
```
EHCHARD = 1
NOT_EHCHARD = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_beat(x):
return EHCHARD if 'beat' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_tackle(x):
return EHCHARD if 'tackle' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_punch(x):
return EHCHARD if 'punch' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_assault(x):
return EHCHARD if 'assault' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_choke(x):
return EHCHARD if 'choke' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_kick(x):
return EHCHARD if 'kick' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_kneel(x):
return EHCHARD if 'kneel' in x.text.lower() else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_beat, lf_keyword_tackle, lf_keyword_choke,
lf_keyword_kick, lf_keyword_punch, lf_keyword_assault,
lf_keyword_kneel]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["ehc-hard_technique"] = label_model.predict(L=L_train, tie_break_policy="abstain")
df2[df2['ehc-hard_technique']==1]
```
## Blunt Impact Category
Officers may use tools like batons to immobilize a person.
```
BLUNT = 1
NOT_BLUNT = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_baton(x):
return BLUNT if 'baton' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_club(x):
return BLUNT if 'club' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_shield(x):
return BLUNT if 'shield' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_bike(x):
return BLUNT if 'bike' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_horse(x):
return BLUNT if 'horse' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_vehicle(x):
return BLUNT if 'vehicle' in x.text.lower() else ABSTAIN
@labeling_function()
def lf_keyword_car(x):
return BLUNT if 'car' in x.text.lower() else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_baton, lf_keyword_club, lf_keyword_horse, lf_keyword_vehicle,
lf_keyword_car, lf_keyword_shield, lf_keyword_bike]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["blunt_impact"] = label_model.predict(L=L_train, tie_break_policy="abstain")
df2[df2['blunt_impact']==1]
```
## Projectiles Category
Projectiles shot or launched by police at civilians. Includes "less lethal" munitions such as rubber bullets, bean bag rounds, water hoses, and flash grenades, as well as deadly weapons such as firearms.
```
PROJECTILE = 1
NOT_PROJECTILE = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_pepper(x):
return PROJECTILE if 'pepper' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_rubber(x):
return PROJECTILE if 'rubber' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_bean(x):
return PROJECTILE if 'bean' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_shoot(x):
return PROJECTILE if 'shoot' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_shot(x):
return PROJECTILE if 'shot' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_fire(x):
return PROJECTILE if 'fire' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_grenade(x):
return PROJECTILE if 'grenade' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_bullet(x):
return PROJECTILE if 'bullet' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_throw(x):
return PROJECTILE if 'throw' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_discharge(x):
return PROJECTILE if 'discharge' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_projectile(x):
return PROJECTILE if 'projectile' in x.text else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_pepper, lf_keyword_rubber, lf_keyword_bean,
lf_keyword_shoot, lf_keyword_shot, lf_keyword_fire, lf_keyword_grenade,
lf_keyword_bullet, lf_keyword_throw, lf_keyword_discharge,
lf_keyword_projectile]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["projectile"] = label_model.predict(L=L_train, tie_break_policy="abstain")
df2[df2['projectile'] == 1]
```
## Chemical Agents
Police may use chemical agents, including pepper spray and tear gas, on civilians.
```
CHEMICAL = 1
NOT_CHEMICAL = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_pepper(x):
return CHEMICAL if 'pepper' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_gas(x):
return CHEMICAL if 'gas' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_smoke(x):
return CHEMICAL if 'smoke' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_mace(x):
return CHEMICAL if 'mace' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_spray(x):
return CHEMICAL if 'spray' in x.text else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_pepper, lf_keyword_gas, lf_keyword_smoke,
lf_keyword_spray, lf_keyword_mace]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["chemical"] = label_model.predict(L=L_train, tie_break_policy="abstain")
df2[df2['chemical']==1]
```
## Conducted energy devices
Officers may use conducted energy devices (CEDs) to immobilize an individual. CEDs discharge a high-voltage, low-amperage jolt of electricity at a distance; the most common example is the Taser.
```
CED = 1
NOT_CED = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_taser(x):
return CED if 'taser' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_stun(x):
return CED if 'stun' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_stungun(x):
return CED if 'stungun' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_taze(x):
return CED if 'taze' in x.text else ABSTAIN
from snorkel.labeling.model import LabelModel
from snorkel.labeling import PandasLFApplier
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_taser, lf_keyword_stun, lf_keyword_stungun, lf_keyword_taze]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2["ced_category"] = label_model.predict(L=L_train, tie_break_policy="abstain")
df2[df2['ced_category']==1]
```
# Add force tags to dataframe
```
df2.columns
def add_force_labels(row):
tags = []
if row['presence_label'] == 1:
tags.append('Presence')
if row['verbal_label'] == 1:
tags.append('Verbalization')
if row['ehc-soft_technique'] == 1:
tags.append('EHC Soft Technique')
if row['ehc-hard_technique'] == 1:
tags.append('EHC Hard Technique')
if row['blunt_impact'] == 1:
tags.append('Blunt Impact')
if row['projectile'] == 1:
tags.append('Projectiles')
if row['chemical'] == 1:
tags.append('Chemical')
if row['ced_category'] == 1:
tags.append('Conductive Energy')
if not tags:
tags.append('Other/Unknown')
return tags
# apply force tags to incident data
df2['force_tags'] = df2.apply(add_force_labels,axis=1)
# take a peek
df2[['text','force_tags']].head(3)
# clean the tags column by joining each row's tags into a comma-separated string
def join_tags(content):
return ', '.join(content)
# add column to main df
df['force_tags'] = df2['force_tags'].apply(join_tags)
df['force_tags'].value_counts()
```
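Before trusting the trained `LabelModel` outputs, it helps to check how often each labeling function actually votes and how often the functions disagree with each other. Below is a minimal numpy-only sketch of those diagnostics; the small matrix `L` is hypothetical, standing in for the `L_train` matrices built above (Snorkel's own `LFAnalysis` utility reports the same statistics):

```python
import numpy as np

# Hypothetical label matrix: rows = data points, columns = labeling functions.
# -1 means the LF abstained; 0/1 are the two classes.
L = np.array([
    [ 1, -1,  1],
    [-1,  0,  1],
    [ 1,  1, -1],
    [-1, -1, -1],
])

# Coverage: fraction of rows where each LF voted (did not abstain).
coverage = (L != -1).mean(axis=0)

# Conflict: fraction of rows where at least two non-abstaining LFs disagree.
def row_conflicts(row):
    votes = row[row != -1]
    return votes.size > 1 and np.unique(votes).size > 1

conflict_rate = np.mean([row_conflicts(r) for r in L])
print(coverage)       # per-LF coverage
print(conflict_rate)  # overall conflict fraction
```

Keyword LFs with near-zero coverage or high conflict are good candidates for refinement before retraining the label model.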
# Human Categories
### Police Categories:
police, officer, deputy, PD, cop
federal, agent
```
POLICE = 1
NOT_POLICE = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_police(x):
return POLICE if 'police' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_officer(x):
return POLICE if 'officer' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_deputy(x):
return POLICE if 'deputy' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_pd(x):
return POLICE if 'PD' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_cop(x):
return POLICE if 'cop' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_enforcement(x):
return POLICE if 'enforcement' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_leo(x):
return POLICE if 'LEO' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_swat(x):
return POLICE if 'SWAT' in x.text else ABSTAIN
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_police, lf_keyword_officer, lf_keyword_deputy, lf_keyword_pd,
lf_keyword_cop, lf_keyword_enforcement, lf_keyword_swat, lf_keyword_leo]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2['police_label'] = label_model.predict(L=L_train, tie_break_policy='abstain')
df2[df2['police_label']==1]
```
### Federal Agent Category
```
FEDERAL = 1
NOT_FEDERAL = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_federal(x):
return FEDERAL if 'federal' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_feds(x):
return FEDERAL if 'feds' in x.text else ABSTAIN
# national guard
@labeling_function()
def lf_keyword_guard(x):
return FEDERAL if 'guard' in x.text else ABSTAIN
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_federal, lf_keyword_feds, lf_keyword_guard]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2['federal_label'] = label_model.predict(L=L_train, tie_break_policy='abstain')
df2[df2['federal_label']==1]
```
### Civilian Categories:
protesters, medic,
reporter, journalist,
minor, child
```
PROTESTER = 1
NOT_PROTESTER = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_protester(x):
return PROTESTER if 'protester' in x.text else ABSTAIN
# also match the variant spelling 'protestor'
@labeling_function()
def lf_keyword_protestor(x):
return PROTESTER if 'protestor' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_medic(x):
return PROTESTER if 'medic' in x.text else ABSTAIN
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_protester, lf_keyword_protestor, lf_keyword_medic]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2['protester_label'] = label_model.predict(L=L_train, tie_break_policy='abstain')
df2[df2['protester_label']==1]
```
Press
```
PRESS = 1
NOT_PRESS = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_reporter(x):
return PRESS if 'reporter' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_press(x):
return PRESS if 'press' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_journalist(x):
return PRESS if 'journalist' in x.text else ABSTAIN
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_reporter, lf_keyword_press, lf_keyword_journalist]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2['press_label'] = label_model.predict(L=L_train, tie_break_policy='abstain')
df2[df2['press_label']==1]
```
Minors
```
MINOR = 1
NOT_MINOR = 0
ABSTAIN = -1
@labeling_function()
def lf_keyword_minor(x):
return MINOR if 'minor' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_underage(x):
return MINOR if 'underage' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_teen(x):
return MINOR if 'teen' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_child(x):
return MINOR if 'child' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_baby(x):
return MINOR if 'baby' in x.text else ABSTAIN
@labeling_function()
def lf_keyword_toddler(x):
return MINOR if 'toddler' in x.text else ABSTAIN
# Define the set of labeling functions (LFs)
lfs = [lf_keyword_minor, lf_keyword_child, lf_keyword_baby,
lf_keyword_underage, lf_keyword_teen, lf_keyword_toddler]
# Apply the LFs to the unlabeled training data
applier = PandasLFApplier(lfs)
L_train = applier.apply(df2)
# Train the label model and compute the training labels
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
df2['minor_label'] = label_model.predict(L=L_train, tie_break_policy='abstain')
df2[df2['minor_label']==1]
```
# Add human tags to Dataframe
```
df2.columns
def add_human_labels(row):
tags = []
if row['police_label'] == 1:
tags.append('Police')
if row['federal_label'] == 1:
tags.append('Federal')
if row['protester_label'] == 1:
tags.append('Protester')
if row['press_label'] == 1:
tags.append('Press')
if row['minor_label'] == 1:
tags.append('Minor')
if not tags:
tags.append('Other/Unknown')
return tags
# apply human tags to incident data
df2['human_tags'] = df2.apply(add_human_labels,axis=1)
# take a peek
df2[['text','force_tags', 'human_tags']].head(3)
# clean the tags column by joining each row's tags into a comma-separated string
def join_tags(content):
return ', '.join(content)
# add column to main df
df['human_tags'] = df2['human_tags'].apply(join_tags)
df['human_tags'].value_counts()
# last check
df = df.drop('date_text', axis=1)
df = df.drop('Unnamed: 0', axis=1)
df = df.drop_duplicates(subset=['id'], keep='last')
df.head(3)
print(df.shape)
# exporting the dataframe
df.to_csv('training_data.csv')
files.download('training_data.csv')
```
<a href="https://colab.research.google.com/github/patrickcgray/deep_learning_ecology/blob/master/basic_cnn_minst.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Training a Convolutional Neural Network on the MNIST dataset.
### import all necessary python modules
```
'''Trains a simple convnet on the MNIST dataset.
Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
'''
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
import numpy as np # linear algebra
import os
import matplotlib.pyplot as plt
%matplotlib inline
```
### set hyperparameters and get training and testing data formatted
```
batch_size = 128
num_classes = 10
epochs = 12
# input image dimensions
img_rows, img_cols = 28, 28
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
```
### build the model and take a look at the model summary
```
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
```
### compile and train/fit the model
```
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
```
### evaluate the model on the testing dataset
```
score = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
### compare predictions to the input data
```
w=10
h=10
fig=plt.figure(figsize=(8, 8))
columns = 9
rows = 1
indices = np.random.randint(len(x_test), size=(10))
labels = np.argmax(model.predict(x_test[indices]), axis=1)
for i in range(1, columns*rows+1):
fig.add_subplot(rows, columns, i)
plt.imshow(x_test[indices[i-1]].reshape((28, 28)), cmap = 'gray')
plt.axis('off')
plt.text(15,45, labels[i-1], horizontalalignment='center', verticalalignment='center')
plt.show()
```
### code that will allow us to visualize the convolutional filters
```
layer_dict = dict([(layer.name, layer) for layer in model.layers])
# util function to convert a tensor into a valid image
def deprocess_image(x):
# normalize tensor: center on 0., ensure std is 0.1
x -= x.mean()
x /= (x.std() + 1e-5)
x *= 0.1
# clip to [0, 1]
x += 0.5
x = np.clip(x, 0, 1)
# convert to RGB array
x *= 255
#x = x.transpose((1, 2, 0))
x = np.clip(x, 0, 255).astype('uint8')
return x
def vis_img_in_filter(img = np.array(x_train[0]).reshape((1, 28, 28, 1)).astype(np.float64),
layer_name = 'conv2d_2'):
layer_output = layer_dict[layer_name].output
img_ascs = list()
for filter_index in range(layer_output.shape[3]):
# build a loss function that maximizes the activation
# of the nth filter of the layer considered
loss = K.mean(layer_output[:, :, :, filter_index])
# compute the gradient of the input picture wrt this loss
grads = K.gradients(loss, model.input)[0]
# normalization trick: we normalize the gradient
grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)
# this function returns the loss and grads given the input picture
iterate = K.function([model.input], [loss, grads])
# step size for gradient ascent
step = 5.
img_asc = np.array(img)
# run gradient ascent for 20 steps
for i in range(20):
loss_value, grads_value = iterate([img_asc])
img_asc += grads_value * step
img_asc = img_asc[0]
img_ascs.append(deprocess_image(img_asc).reshape((28, 28)))
if layer_output.shape[3] >= 35:
plot_x, plot_y = 6, 6
elif layer_output.shape[3] >= 23:
plot_x, plot_y = 4, 6
elif layer_output.shape[3] >= 11:
plot_x, plot_y = 2, 6
else:
plot_x, plot_y = 1, 2
fig, ax = plt.subplots(plot_x, plot_y, figsize = (12, 12))
ax[0, 0].imshow(img.reshape((28, 28)), cmap = 'gray')
ax[0, 0].set_title('Input image')
fig.suptitle('Input image and %s filters' % (layer_name,))
fig.tight_layout(pad = 0.3, rect = [0, 0, 0.9, 0.9])
for (x, y) in [(i, j) for i in range(plot_x) for j in range(plot_y)]:
if x == 0 and y == 0:
continue
ax[x, y].imshow(img_ascs[x * plot_y + y - 1], cmap = 'gray')
ax[x, y].set_title('filter %d' % (x * plot_y + y - 1))
ax[x, y].set_axis_off()
#plt.axis('off')
```
### convolutional filters for the first element in the training dataset for the first convolutional layer
```
vis_img_in_filter(img = np.array(x_train[0]).reshape((1, 28, 28, 1)).astype(np.float64), layer_name = 'conv2d')
```
### convolutional filters for the first element in the training dataset for the second convolutional layer
```
vis_img_in_filter(img = np.array(x_train[0]).reshape((1, 28, 28, 1)).astype(np.float64), layer_name = 'conv2d_1')
```
**Sentiment analysis code for Turkish text**
```
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir('./drive/My Drive/')
# read the data with pandas
import pandas as pd
data = pd.read_csv("sentiment_data.csv")
df = data.copy()
df.head()
# label 0 -> negative
# label 1 -> positive
df['Rating'].unique().tolist()
# import the libraries needed for the model
import numpy as np
import pandas as pd
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, Dropout
from tensorflow.python.keras.preprocessing.text import Tokenizer
from tensorflow.python.keras.preprocessing.sequence import pad_sequences
# convert all the data and labels to lists
target = df['Rating'].values.tolist() # negative=0, positive=1
data = df['Review'].values.tolist() # text data
# split the data into train and test sets
seperation = int(len(data) * 0.80)
x_train, x_test = data[:seperation], data[seperation:]
y_train, y_test = target[:seperation], target[seperation:]
# number of rows and columns in the dataset
df.shape
# keep the 10,000 most frequent words in the dataset
num_words = 10000
# define a tokenizer with Keras
tokenizer = Tokenizer(num_words=num_words)
# fit the tokenizer on the texts
tokenizer.fit_on_texts(data)
# save the tokenizer
import pickle
with open('turkish_tokenizer_hack.pickle', 'wb') as handle:
pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
# load the tokenizer
with open('turkish_tokenizer_hack.pickle', 'rb') as handle:
turkish_tokenizer = pickle.load(handle)
# tokenize the training split
x_train_tokens = turkish_tokenizer.texts_to_sequences(x_train)
x_train[100]
x_train_tokens[100]
# tokenize the test split
x_test_tokens = turkish_tokenizer.texts_to_sequences(x_test)
# Pad the text sequences: RNNs expect a fixed input length,
# so shorter sequences are filled with zeros up to that length.
num_tokens = [len(tokens) for tokens in x_train_tokens + x_test_tokens]
num_tokens = np.array(num_tokens)
num_tokens.shape
# choose a max length of mean + 2 standard deviations, which covers most sequences
max_tokens = np.mean(num_tokens) + 2*np.std(num_tokens)
max_tokens = int(max_tokens)
max_tokens
# pad every sequence so all inputs share the same length
x_train_pad = pad_sequences(x_train_tokens, maxlen=max_tokens)
x_test_pad = pad_sequences(x_test_tokens, maxlen=max_tokens)
# size
print(x_train_pad.shape)
print(x_test_pad.shape)
model = Sequential() # define the Keras model we will use
embedding_size = 50 # a 50-dimensional vector for each word
# Create an embedding layer in Keras, initialized with random vectors,
# and add it to the model
# embedding matrix size = num_words * embedding_size -> 10,000 * 50
model.add(Embedding(input_dim=num_words,
output_dim=embedding_size,
input_length=max_tokens,
name='embedding_layer'))
# 3-layer LSTM stack
model.add(LSTM(units=16, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=8, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=4, return_sequences=False))
model.add(Dropout(0.2))
# Dense output layer: a single neuron
model.add(Dense(1, activation='sigmoid')) # sigmoid activation function
# Adam optimizer
from tensorflow.python.keras.optimizers import Adam
optimizer = Adam(lr=1e-3)
# compile the model (other optimizers can be swapped in here)
model.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
# model summary
model.summary()
# epochs -> how many passes over the training data
# batch_size -> how many samples are fed per update
model.fit(x_train_pad, y_train, epochs=10, batch_size=256)
# model results
result = model.evaluate(x_test_pad, y_test)
result
# accuracy percentage
accuracy = (result[1]) * 100
accuracy
# test comments (model inputs)
text1 = "böyle bir şeyi kabul edemem"
text2 = "tasarımı güzel ancak ürün açılmış tavsiye etmem"
text3 = "bu işten çok sıkıldım artık"
text4 = "kötü yorumlar gözümü korkutmuştu ancak hiçbir sorun yaşamadım teşekkürler"
text5 = "yaptığın işleri hiç beğenmiyorum"
text6 = "tam bir fiyat performans ürünü beğendim"
text7 = "Bu ürünü beğenmedim"
texts = [text1, text2,text3,text4,text5,text6,text7]
tokens = turkish_tokenizer.texts_to_sequences(texts)
tokens
# padding
tokens_pad = pad_sequences(tokens, maxlen=max_tokens)
# the model predicts which sentiment each comment is closest to
model.predict(tokens_pad)
for i in model.predict(tokens_pad):
    if i < 0.5:
        print("negatif") # negative comment
    else:
        print("pozitif") # positive comment
from keras.models import load_model
model.save('hack_model.h5') # save the model
```
```
import numpy as np
import matplotlib as mpl
from matplotlib import pyplot as plt
import h5py
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
```
## Set Training Label
```
label = 'NewSim_Type2'
```
## Find all files
```
from glob import glob
files_loc = "/gpfs/slac/atlas/fs1/u/rafaeltl/Muon/toy_sim/si-mu-lator/out_files/"
files_bkg = glob(files_loc+'*Muon*bkgr*.h5')
```
## Open files
```
import dataprep
dmat, Y, Y_mu, Y_hit = dataprep.make_data_matrix(files_bkg, max_files=100)
fig, axs = plt.subplots(3, 4, figsize=(20,10))
axs = axs.flatten()
my_mask = dmat[:,:,8] > 0
print(my_mask.shape)
for ivar in range(dmat.shape[2]):
this_var = dmat[:,:,ivar]
this_max = np.max(this_var)
this_min = np.min(this_var)
if ivar == 0:
this_min = -0.02
this_max = 0.02
if ivar == 1:
this_min = -1
this_max = 10
if ivar == 2:
this_min = -0.01
this_max = 0.01
if ivar == 3:
this_min = -1
this_max = 10
if ivar == 4:
this_min = -1
this_max = 10
if ivar == 5:
this_min = -1
this_max = 10
if ivar == 6:
this_min = -1
this_max = 10
if ivar == 7:
this_min = -1
this_max = 10
if ivar == 8:
this_min = -1
this_max = 9
if ivar == 9:
this_min = -150
this_max = 150
if this_min == -99:
this_min = -1
axs[ivar].hist( this_var[(Y_mu == 0)].flatten()[this_var[(Y_mu == 0)].flatten() != -99], histtype='step', range=(this_min, this_max), bins=50 )
axs[ivar].hist( this_var[(Y_mu == 1)].flatten()[this_var[(Y_mu == 1)].flatten() != -99], histtype='step', range=(this_min, this_max), bins=50 )
plt.show()
vars_of_interest = np.zeros(11, dtype=bool)
vars_of_interest[0] = 1
vars_of_interest[2] = 1
vars_of_interest[8] = 1
vars_of_interest[9] = 1
X = dmat[:,:,vars_of_interest]
Y_mu.sum()
```
## Define network
```
import sys
sys.path.insert(0, '../')
import models
lambs = [0, 1, 10]
mymods = []
for ll in lambs:
mymodel = models.muon_nn_type2( (X.shape[1],X.shape[2]), ll)
# mymodel = models.muon_nn_selfatt( (X.shape[1],X.shape[2]), ll)
mymods.append(mymodel)
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
histories = []
for mod,ll in zip(mymods, lambs):
history = mod.fit( X_train, Y_train,
callbacks = [
EarlyStopping(monitor='val_loss', patience=1000, verbose=1),
ModelCheckpoint(f'weights/{label}_ll_{ll}.h5', monitor='val_loss', verbose=True, save_best_only=True) ],
epochs=3000,
validation_split = 0.3,
batch_size=1024*100,
verbose=0
)
mod.load_weights(f'weights/{label}_ll_{ll}.h5')
histories.append(history)
for history,ll in zip(histories,lambs):
plt.Figure()
for kk in history.history.keys():
plt.plot(history.history[kk], label=kk)
plt.legend()
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title(f'Lambda = {ll}')
plt.savefig(f'plots/{label}_loss_ll_{ll}.pdf')
plt.show()
Y_test_hits = Y_test[:,1:]
Y_test_mu = Y_test[:,0]
Y_test_hits_f_mu = Y_test_hits[Y_test_mu==1].flatten()
Y_test_hits_f_nomu = Y_test_hits[Y_test_mu==0].flatten()
from sklearn.metrics import roc_curve
for mod,ll in zip(mymods,lambs):
Y_pred = mod.predict(X_test, verbose=1)
Y_pred_hits = Y_pred[:,1:]
Y_pred_mu = Y_pred[:,0]
Y_pred_hits_f_mu = Y_pred_hits[Y_test_mu==1].flatten()
Y_pred_hits_f_nomu = Y_pred_hits[Y_test_mu==0].flatten()
plt.Figure()
plt.hist(Y_pred_hits_f_mu[Y_test_hits_f_mu==0], histtype='step', bins=50, range=(0,1),
label='Muon present, non-muon hits')
plt.hist(Y_pred_hits_f_mu[Y_test_hits_f_mu==1], histtype='step', bins=50, range=(0,1),
label='Muon present, muon hits')
plt.yscale('log')
plt.title(f'Lambda = {ll}')
plt.legend()
plt.savefig(f'plots/{label}_hits_pred_ll_{ll}.pdf')
plt.show()
plt.Figure()
plt.hist(Y_pred_mu[Y_test_mu==0], histtype='step', bins=50, range=(0,1))
plt.hist(Y_pred_mu[Y_test_mu==1], histtype='step', bins=50, range=(0,1))
plt.yscale('log')
plt.title(f'Lambda = {ll}')
plt.savefig(f'plots/{label}_muon_pred_ll_{ll}.pdf')
plt.show()
fig, axs = plt.subplots(1, 2, figsize=(16, 8) )
axs = axs.flatten()
coli = 3
icol = 0
for mod,ll in zip(mymods,lambs):
Y_pred = mod.predict(X_test, verbose=1)
Y_pred_hits = Y_pred[:,1:]
Y_pred_mu = Y_pred[:,0]
Y_pred_hits_f_mu = Y_pred_hits[Y_test_mu==1].flatten()
Y_pred_hits_f_nomu = Y_pred_hits[Y_test_mu==0].flatten()
fpr_hits, tpr_hits, _ = roc_curve(Y_test_hits_f_mu[Y_test_hits_f_mu>-90], Y_pred_hits_f_mu[Y_test_hits_f_mu>-90])
axs[0].semilogy(tpr_hits, 1./fpr_hits, color=f'C{coli+icol}', label=f'lambda = {ll}')
fpr_mus, tpr_mus, _ = roc_curve(Y_test_mu, Y_pred_mu)
axs[1].semilogy(tpr_mus, 1./fpr_mus, color=f'C{coli+icol}', label=f'lambda = {ll}')
icol+=1
axs[0].set_ylabel('Background hits rejection')
axs[0].set_xlabel('Signal hits efficiency')
axs[0].legend()
axs[0].set_xlim(-0.01, 1.01)
axs[0].set_ylim(0.5, 1e6)
axs[1].set_ylabel('Rejection of events with no muons')
axs[1].set_xlabel('Efficiency of events with muons')
axs[1].set_xlim(0.9,1.01)
axs[1].set_ylim(0.5, 1e5)
axs[1].legend()
plt.savefig(f'plots/{label}_ROCs.pdf', transparent=True)
plt.show()
```
# Vectors, Matrices, and Arrays
# Loading Data
## Loading a Sample Dataset
```
# Load scikit-learn's datasets
from sklearn import datasets
# Load digit dataset
digits = datasets.load_digits()
# Create features matrix
features = digits.data
# Create target matrix
target = digits.target
# View first observation
print(features[0])
```
## Creating a Simulated Dataset
```
# For Regression
# Load library
from sklearn.datasets import make_regression
# Generate features matrix, target vector, and the true coefficients
features, target, coefficients = make_regression(n_samples = 100,
n_features = 3,
n_informative = 3,
n_targets = 1,
noise = 0.0,
coef = True,
random_state = 1)
# View feature matrix and target vector
print("Feature Matrix\n",features[:3])
print("Target Vector\n",target[:3])
# For Classification
# Load library
from sklearn.datasets import make_classification
# Generate features matrix, target vector, and the true coefficients
features, target = make_classification(n_samples = 100,
n_features = 3,
n_informative = 3,
n_redundant = 0,
n_classes = 2,
weights = [.25, .75],
random_state = 1)
# View feature matrix and target vector
print("Feature Matrix\n",features[:3])
print("Target Vector\n",target[:3])
# For Clustering
# Load library
from sklearn.datasets import make_blobs
# Generate features matrix, target vector, and the true coefficients
features, target = make_blobs(n_samples = 100,
n_features = 2,
centers = 3,
cluster_std = 0.5,
shuffle = True,
random_state = 1)
# View feature matrix and target vector
print("Feature Matrix\n",features[:3])
print("Target Vector\n",target[:3])
# Load library
import matplotlib.pyplot as plt
%matplotlib inline
# View scatterplot
plt.scatter(features[:,0], features[:,1], c=target)
plt.show()
```
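Because `noise=0.0` makes the regression targets an exact linear function of the features, an ordinary least-squares fit can recover the true coefficients that `coef=True` returns. A numpy-only sketch of that idea (the coefficient values here are made up for illustration):

```python
import numpy as np

# Synthetic stand-in for make_regression with noise=0.0:
# targets are an exact linear function of the features.
rng = np.random.default_rng(1)
true_coef = np.array([2.0, -3.0, 0.5])   # hypothetical "true" coefficients
features = rng.standard_normal((100, 3))
target = features @ true_coef

# Ordinary least squares recovers the coefficients exactly (up to float error).
recovered, *_ = np.linalg.lstsq(features, target, rcond=None)
print(np.allclose(recovered, true_coef))  # True
```

With non-zero `noise`, the recovered coefficients would only approximate the true ones, which is exactly what comparing against the returned `coefficients` lets you measure.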
## Loading a CSV File
```
# Load a library
import pandas as pd
# Create URL
url = 'https://people.sc.fsu.edu/~jburkardt/data/csv/airtravel.csv'
# Load dataset
dataframe = pd.read_csv(url)
# View first two rows
dataframe.head(2)
```
## Loading an Excel File
```
# Load a library
import pandas as pd
# Create URL
url = 'https://dornsife.usc.edu/assets/sites/298/docs/ir211wk12sample.xls'
# Load dataset
dataframe = pd.read_excel(url, sheet_name=0, header=1)
# View first two rows
dataframe.head(2)
```
## Loading a JSON File
```
# Load a library
import pandas as pd
# Create URL
url = 'http://ergast.com/api/f1/2004/1/results.json'
# Load dataset
dataframe = pd.read_json(url, orient = 'columns')
# View first two rows
dataframe.head(2)
# semistructured JSON to a pandas DataFrame
#pd.json_normalize
```
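As the comment above notes, `pd.json_normalize` flattens semi-structured JSON into a DataFrame. A small sketch with hypothetical records shaped loosely like the Ergast response:

```python
import pandas as pd

# Hypothetical nested records; json_normalize flattens them
# into dotted column names.
records = [
    {"race": {"name": "Bahrain GP", "round": 1},
     "winner": {"driver": "Schumacher", "points": 10}},
    {"race": {"name": "Malaysian GP", "round": 2},
     "winner": {"driver": "Schumacher", "points": 10}},
]
flat = pd.json_normalize(records)
print(flat.columns.tolist())
# ['race.name', 'race.round', 'winner.driver', 'winner.points']
```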
## Querying a SQL Database
```
# Load a library
import pandas as pd
from sqlalchemy import create_engine
# Create a connection to the database (SQLite URLs use three slashes for a relative path)
database_connection = create_engine('sqlite:///sample.db')
# Load dataset
dataframe = pd.read_sql_query('SELECT * FROM data', database_connection)
# View first two rows
dataframe.head(2)
```
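The same pattern can be tried end-to-end without an existing `sample.db` by building a throwaway in-memory SQLite database. This sketch passes Python's built-in `sqlite3` connection to pandas directly; the table name `data` matches the query above:

```python
import sqlite3
import pandas as pd

# Build an in-memory database so the query has something to read.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (id INTEGER, value TEXT)")
conn.executemany("INSERT INTO data VALUES (?, ?)", [(1, "a"), (2, "b")])

# read_sql_query accepts a DBAPI connection for SQLite.
dataframe = pd.read_sql_query("SELECT * FROM data", conn)
print(dataframe.shape)  # (2, 2)
conn.close()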

## Welcome to The QuantConnect Research Page
#### Refer to this page for documentation https://www.quantconnect.com/docs#Introduction-to-Jupyter
#### Contribute to this template file https://github.com/QuantConnect/Lean/blob/master/Jupyter/BasicQuantBookTemplate.ipynb
## QuantBook Basics
### Start QuantBook
- Add the references and imports
- Create a QuantBook instance
```
%matplotlib inline
# Imports
from clr import AddReference
AddReference("System")
AddReference("QuantConnect.Common")
AddReference("QuantConnect.Jupyter")
AddReference("QuantConnect.Indicators")
from System import *
from QuantConnect import *
from QuantConnect.Data.Market import TradeBar, QuoteBar
from QuantConnect.Jupyter import *
from QuantConnect.Indicators import *
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
import pandas as pd
# Create an instance
qb = QuantBook()
```
### Selecting Asset Data
Check out the QuantConnect [docs](https://www.quantconnect.com/docs#Initializing-Algorithms-Selecting-Asset-Data) to learn how to select asset data.
```
spy = qb.AddEquity("SPY")
eur = qb.AddForex("EURUSD")
```
### Historical Data Requests
We can use the QuantConnect API to make Historical Data Requests. The data is presented as a multi-index `pandas.DataFrame` where the first index is the Symbol.
For more information, please follow the [link](https://www.quantconnect.com/docs#Historical-Data-Historical-Data-Requests).
```
# Gets historical data from the subscribed assets, the last 360 datapoints with daily resolution
h1 = qb.History(360, Resolution.Daily)
# Plot closing prices from "SPY"
h1.loc["SPY"]["close"].plot()
# Gets historical data from the subscribed assets, from the last 30 days with daily resolution
h2 = qb.History(timedelta(30), Resolution.Daily)
# Plot high prices from "EURUSD"
h2.loc["EURUSD"]["high"].plot()
# Gets historical data from the subscribed assets, between two dates with daily resolution
h3 = qb.History(spy.Symbol, datetime(2014,1,1), datetime.now(), Resolution.Daily)
# Only fetches historical data for a desired symbol
h4 = qb.History(spy.Symbol, 360, Resolution.Daily)
# or qb.History("SPY", 360, Resolution.Daily)
# Only fetches historical data for a desired symbol
# When we are not dealing with equity, we must use the generic method
h5 = qb.History[QuoteBar](eur.Symbol, timedelta(30), Resolution.Daily)
# or qb.History[QuoteBar]("EURUSD", timedelta(30), Resolution.Daily)
```
### Historical Options Data Requests
- Select the option data
- Set the filter; otherwise the default `SetFilter(-1, 1, timedelta(0), timedelta(35))` will be used
- Get the OptionHistory, an object that has information about the historical options data
```
goog = qb.AddOption("GOOG")
goog.SetFilter(-2, 2, timedelta(0), timedelta(180))
option_history = qb.GetOptionHistory(goog.Symbol, datetime(2017, 1, 4))
print(option_history.GetStrikes())
print(option_history.GetExpiryDates())
h6 = option_history.GetAllData()
```
### Get Fundamental Data
- *GetFundamental([symbol], selector, start_date = datetime(1998,1,1), end_date = datetime.now())*
We will get a pandas.DataFrame with fundamental data.
```
data = qb.GetFundamental(["AAPL","AIG","BAC","GOOG","IBM"], "ValuationRatios.PERatio")
data
```
### Indicators
We can easily get the indicator of a given symbol with QuantBook.
For all indicators, please check out the QuantConnect Indicators [Reference Table](https://www.quantconnect.com/docs#Indicators-Reference-Table)
```
# Example with BB, it is a datapoint indicator
# Define the indicator
bb = BollingerBands(30, 2)
# Gets historical data of indicator
bbdf = qb.Indicator(bb, "SPY", 360, Resolution.Daily)
# drop undesired fields
bbdf = bbdf.drop('standarddeviation', axis=1)
# Plot
bbdf.plot()
# For EURUSD
bbdf = qb.Indicator(bb, "EURUSD", 360, Resolution.Daily)
bbdf = bbdf.drop('standarddeviation', axis=1)
bbdf.plot()
# Example with ADX, it is a bar indicator
adx = AverageDirectionalIndex("adx", 14)
adxdf = qb.Indicator(adx, "SPY", 360, Resolution.Daily)
adxdf.plot()
# For EURUSD
adxdf = qb.Indicator(adx, "EURUSD", 360, Resolution.Daily)
adxdf.plot()
# SMA cross:
symbol = "EURUSD"
# Get History
hist = qb.History[QuoteBar](symbol, 500, Resolution.Daily)
# Get the fast moving average
fast = qb.Indicator(SimpleMovingAverage(50), symbol, 500, Resolution.Daily)
# Get the slow moving average
slow = qb.Indicator(SimpleMovingAverage(200), symbol, 500, Resolution.Daily)
# Remove undesired columns and rename others
fast = fast.drop('rollingsum', axis=1).rename(columns={'simplemovingaverage': 'fast'})
slow = slow.drop('rollingsum', axis=1).rename(columns={'simplemovingaverage': 'slow'})
# Concatenate the information and plot
df = pd.concat([hist.loc[symbol]["close"], fast, slow], axis=1).dropna(axis=0)
df.plot()
# Get indicator defining a lookback period in terms of timedelta
ema1 = qb.Indicator(ExponentialMovingAverage(50), "SPY", timedelta(100), Resolution.Daily)
# Get indicator defining a start and end date
ema2 = qb.Indicator(ExponentialMovingAverage(50), "SPY", datetime(2016,1,1), datetime(2016,10,1), Resolution.Daily)
ema = pd.concat([ema1, ema2], axis=1)
ema.plot()
rsi = RelativeStrengthIndex(14)
# Selects which field we want to use in our indicator (default is Field.Close)
rsihi = qb.Indicator(rsi, "SPY", 360, Resolution.Daily, Field.High)
rsilo = qb.Indicator(rsi, "SPY", 360, Resolution.Daily, Field.Low)
rsihi = rsihi.rename(columns={'relativestrengthindex': 'high'})
rsilo = rsilo.rename(columns={'relativestrengthindex': 'low'})
rsi = pd.concat([rsihi['high'], rsilo['low']], axis=1)
rsi.plot()
```
# Convolutional Neural Networks
## Project: Write an Algorithm for a Dog Identification App
---
In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'(IMPLEMENTATION)'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
The rubric contains _optional_ "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook.
---
### Why We're Here
In this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that is most resembling. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).

In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!
### The Road Ahead
We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.
* [Step 0](#step0): Import Datasets
* [Step 1](#step1): Detect Humans
* [Step 2](#step2): Detect Dogs
* [Step 3](#step3): Create a CNN to Classify Dog Breeds (from Scratch)
* [Step 4](#step4): Create a CNN to Classify Dog Breeds (using Transfer Learning)
* [Step 5](#step5): Write your Algorithm
* [Step 6](#step6): Test Your Algorithm
---
<a id='step0'></a>
## Step 0: Import Datasets
Make sure that you've downloaded the required human and dog datasets:
**Note: if you are using the Udacity workspace, you *DO NOT* need to re-download these - they can be found in the `/data` folder as noted in the cell below.**
* Download the [dog dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/dogImages.zip). Unzip the folder and place it in this project's home directory, at the location `/dog_images`.
* Download the [human dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/lfw.zip). Unzip the folder and place it in the home directory, at location `/lfw`.
*Note: If you are using a Windows machine, you are encouraged to use [7zip](http://www.7-zip.org/) to extract the folder.*
In the code cell below, we save the file paths for both the human (LFW) dataset and dog dataset in the numpy arrays `human_files` and `dog_files`.
```
import numpy as np
from glob import glob
# load filenames for human and dog images
human_files = np.array(glob("/data/lfw/*/*"))
dog_files = np.array(glob("/data/dog_images/*/*/*"))
# print number of images in each dataset
print('There are %d total human images.' % len(human_files))
print('There are %d total dog images.' % len(dog_files))
```
<a id='step1'></a>
## Step 1: Detect Humans
In this section, we use OpenCV's implementation of [Haar feature-based cascade classifiers](http://docs.opencv.org/trunk/d7/d8b/tutorial_py_face_detection.html) to detect human faces in images.
OpenCV provides many pre-trained face detectors, stored as XML files on [github](https://github.com/opencv/opencv/tree/master/data/haarcascades). We have downloaded one of these detectors and stored it in the `haarcascades` directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.
```
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')
# load color (BGR) image
img = cv2.imread(human_files[0])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find faces in image
faces = face_cascade.detectMultiScale(gray)
# print number of faces detected in the image
print('Number of faces detected:', len(faces))
# get bounding box for each detected face
for (x,y,w,h) in faces:
# add bounding box to color image
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
```
Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The `detectMultiScale` function executes the classifier stored in `face_cascade` and takes the grayscale image as a parameter.
In the above code, `faces` is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as `x` and `y`) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as `w` and `h`) specify the width and height of the box.
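The `(x, y, w, h)` convention described above converts to corner coordinates with simple arithmetic; a standalone sketch (the numbers are made up for illustration):

```python
# A detected face as returned by detectMultiScale: (x, y, width, height)
x, y, w, h = 50, 30, 100, 120

# Top-left and bottom-right corners of the bounding box,
# i.e. the two points passed to cv2.rectangle in the cell above
top_left = (x, y)
bottom_right = (x + w, y + h)
print(top_left, bottom_right)  # (50, 30) (150, 150)
```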
### Write a Human Face Detector
We can use this procedure to write a function that returns `True` if a human face is detected in an image and `False` otherwise. This function, aptly named `face_detector`, takes a string-valued file path to an image as input and appears in the code block below.
```
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
img = cv2.imread(img_path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray)
return len(faces) > 0
```
### (IMPLEMENTATION) Assess the Human Face Detector
__Question 1:__ Use the code cell below to test the performance of the `face_detector` function.
- What percentage of the first 100 images in `human_files` have a detected human face?
- What percentage of the first 100 images in `dog_files` have a detected human face?
Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays `human_files_short` and `dog_files_short`.
__Answer:__
98% of the first 100 human images were correctly classified as containing a human face.
17% of the first 100 dog images were wrongly classified as containing a human face.
```
from tqdm import tqdm
human_files_short = human_files[:100]
dog_files_short = dog_files[:100]
#-#-# Do NOT modify the code above this line. #-#-#
## TODO: Test the performance of the face_detector algorithm
## on the images in human_files_short and dog_files_short.
human_count = 0
dog_count = 0
for img in human_files_short:
if face_detector(img) == True:
human_count +=1
for img in dog_files_short:
if face_detector(img) == True:
dog_count +=1
print ("Images correctly classified as Human Faces: ", human_count)
print ("Images wrongly classified as Human faces: ", dog_count)
```
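The counts printed above can be turned into the percentages the question asks for with a small helper. This is a sketch, not part of the project template: `detection_rate` and the stand-in detector below are invented names, and on the real data the helper would be called with `face_detector` and the `human_files_short` / `dog_files_short` arrays.

```python
def detection_rate(detector, paths):
    """Percentage of image paths on which the detector returns True."""
    return 100.0 * sum(bool(detector(p)) for p in paths) / len(paths)

# Demo with a stand-in detector; with the notebook's data this would be
# detection_rate(face_detector, human_files_short), etc.
rate = detection_rate(lambda p: p.endswith(".jpg"),
                      ["a.jpg", "b.png", "c.jpg", "d.jpg"])
print("%.1f%%" % rate)  # 75.0%
```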
We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this _optional_ task, report performance on `human_files_short` and `dog_files_short`.
```
### (Optional)
### TODO: Test performance of another face detection algorithm.
### Feel free to use as many code cells as needed.
```
---
<a id='step2'></a>
## Step 2: Detect Dogs
In this section, we use a [pre-trained model](http://pytorch.org/docs/master/torchvision/models.html) to detect dogs in images.
### Obtain Pre-trained VGG-16 Model
The code cell below downloads the VGG-16 model, along with weights that have been trained on [ImageNet](http://www.image-net.org/), a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of [1000 categories](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a).
```
import torch
import torchvision.models as models
from torchvision.models.vgg import model_urls
model_urls['vgg16'] = model_urls['vgg16'].replace('https://', 'http://')
# define VGG16 model
VGG16 = models.vgg16(pretrained=True)
# check if CUDA is available
use_cuda = torch.cuda.is_available()
# move model to GPU if CUDA is available
if use_cuda:
VGG16 = VGG16.cuda()
```
Given an image, this pre-trained VGG-16 model returns a prediction (derived from the 1000 possible categories in ImageNet) for the object that is contained in the image.
### (IMPLEMENTATION) Making Predictions with a Pre-trained Model
In the next code cell, you will write a function that accepts a path to an image (such as `'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg'`) as input and returns the index corresponding to the ImageNet class that is predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive.
Before writing the function, make sure that you take the time to learn how to appropriately pre-process tensors for pre-trained models in the [PyTorch documentation](http://pytorch.org/docs/stable/torchvision/models.html).
```
from PIL import Image, ImageFile
import torchvision.transforms as transforms
ImageFile.LOAD_TRUNCATED_IMAGES = True
def VGG16_predict(img_path):
'''
Use pre-trained VGG-16 model to obtain index corresponding to
predicted ImageNet class for image at specified path
Args:
img_path: path to an image
Returns:
Index corresponding to VGG-16 model's prediction
'''
## TODO: Complete the function.
## Load and pre-process an image from the given img_path
## Return the *index* of the predicted class for that image
image = Image.open(img_path).convert('RGB')
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],std=[0.229, 0.224, 0.225])
transformations = transforms.Compose([transforms.Resize(size=(224, 224)),
transforms.ToTensor(),
normalize])
transformed_image = transformations(image)[:3,:,:].unsqueeze(0)
    # move the input to the GPU only when CUDA is available; otherwise
    # keep it on the CPU (the previous version left the variable undefined)
    if use_cuda:
        transformed_image = transformed_image.cuda()
    out = VGG16(transformed_image)
    return torch.max(out, 1)[1].item()
```
### (IMPLEMENTATION) Write a Dog Detector
While looking at the [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a), you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from `'Chihuahua'` to `'Mexican hairless'`. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained VGG-16 model, we need only check if the pre-trained model predicts an index between 151 and 268 (inclusive).
Use these ideas to complete the `dog_detector` function below, which returns `True` if a dog is detected in an image (and `False` if not).
```
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
## TODO: Complete the function.
    predict_index = VGG16_predict(img_path)
    # ImageNet classes 151-268 (inclusive) are dog breeds
    return 151 <= predict_index <= 268
```
### (IMPLEMENTATION) Assess the Dog Detector
__Question 2:__ Use the code cell below to test the performance of your `dog_detector` function.
- What percentage of the images in `human_files_short` have a detected dog?
- What percentage of the images in `dog_files_short` have a detected dog?
__Answer:__
All 100 images in `dog_files_short` are correctly detected as dogs (100%).
1% of the images in `human_files_short` have been wrongly classified as dogs.
```
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
DD_dogs = 0
DD_humans = 0
for img in human_files_short:
if dog_detector(img) == True:
DD_humans +=1
for img in dog_files_short:
if dog_detector(img) == True:
DD_dogs +=1
print("Dog images correctly classified as dogs: ", DD_dogs)
print("Human images wrongly classified as dogs: ", DD_humans)
```
We suggest VGG-16 as a potential network to detect dog images in your algorithm, but you are free to explore other pre-trained networks (such as [Inception-v3](http://pytorch.org/docs/master/torchvision/models.html#inception-v3), [ResNet-50](http://pytorch.org/docs/master/torchvision/models.html#id3), etc). Please use the code cell below to test other pre-trained PyTorch models. If you decide to pursue this _optional_ task, report performance on `human_files_short` and `dog_files_short`.
```
### (Optional)
### TODO: Report the performance of another pre-trained network.
### Feel free to use as many code cells as needed.
```
---
<a id='step3'></a>
## Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN _from scratch_ (so, you can't use transfer learning _yet_!), and you must attain a test accuracy of at least 10%. In Step 4 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.
We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that *even a human* would have trouble distinguishing between a Brittany and a Welsh Springer Spaniel.
Brittany | Welsh Springer Spaniel
- | -
<img src="images/Brittany_02625.jpg" width="100"> | <img src="images/Welsh_springer_spaniel_08203.jpg" width="200">
It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).
Curly-Coated Retriever | American Water Spaniel
- | -
<img src="images/Curly-coated_retriever_03896.jpg" width="200"> | <img src="images/American_water_spaniel_00648.jpg" width="200">
Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.
Yellow Labrador | Chocolate Labrador | Black Labrador
- | - | -
<img src="images/Labrador_retriever_06457.jpg" width="150"> | <img src="images/Labrador_retriever_06455.jpg" width="240"> | <img src="images/Labrador_retriever_06449.jpg" width="220">
We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.
Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!
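The random-chance bar mentioned above is easy to verify with one line of arithmetic:

```python
# Expected accuracy of uniform random guessing over 133 (roughly balanced) classes
n_classes = 133
baseline = 100.0 / n_classes
print("Random-guess accuracy: %.2f%%" % baseline)  # Random-guess accuracy: 0.75%
```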
### (IMPLEMENTATION) Specify Data Loaders for the Dog Dataset
Use the code cell below to write three separate [data loaders](http://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) for the training, validation, and test datasets of dog images (located at `dog_images/train`, `dog_images/valid`, and `dog_images/test`, respectively). You may find [this documentation on custom datasets](http://pytorch.org/docs/stable/torchvision/datasets.html) to be a useful resource. If you are interested in augmenting your training and/or validation data, check out the wide variety of [transforms](http://pytorch.org/docs/stable/torchvision/transforms.html?highlight=transform)!
```
import os
from torchvision import datasets
### TODO: Write data loaders for training, validation, and test sets
## Specify appropriate transforms, and batch_sizes
batch_size = 20
n = 0
data_dir = '/data/dog_images/'
train_path = data_dir + 'train'
validation_path = data_dir + 'valid'
test_path = data_dir + 'test'
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
train_dataset = datasets.ImageFolder(train_path, transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(15),
transforms.ToTensor(),
normalize,
]))
validation_dataset = datasets.ImageFolder(validation_path, transforms.Compose([
transforms.Resize(size=(224,224)),
transforms.ToTensor(),
normalize,
]))
test_dataset = datasets.ImageFolder(test_path, transforms.Compose([
transforms.Resize(size=(224,224)),
transforms.ToTensor(),
normalize,
]))
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size= batch_size, num_workers = n, shuffle = True)
validation_loader = torch.utils.data.DataLoader(validation_dataset, batch_size= batch_size, num_workers = n)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size= batch_size, num_workers = n)
loaders_scratch = {
'train': train_loader,
'valid': validation_loader,
'test': test_loader
}
```
**Question 3:** Describe your chosen procedure for preprocessing the data.
- How does your code resize the images (by cropping, stretching, etc)? What size did you pick for the input tensor, and why?
- Did you decide to augment the dataset? If so, how (through translations, flips, rotations, etc)? If not, why not?
**Answer**:
I have used an input size of 224x224 because most models, such as VGG16, use this input size.
Image augmentation has been applied to the training data to avoid overfitting the model.
Transforms used: random resized crop to 224, random horizontal flipping, and random rotation.
Only image resizing has been done for the validation and test data, while normalization has been applied to all datasets.
### (IMPLEMENTATION) Model Architecture
Create a CNN to classify dog breed. Use the template in the code cell below.
```
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
### TODO: choose an architecture, and complete the class
def __init__(self):
super(Net, self).__init__()
## Define layers of a CNN
self.conv1 = nn.Conv2d(3, 36, 3, padding=1)
self.conv2 = nn.Conv2d(36, 64, 3, padding=1)
self.conv3 = nn.Conv2d(64, 128, 3, padding=1)
self.fc1 = nn.Linear(28*28*128, 512)
self.fc2 = nn.Linear(512, 133)
self.pool = nn.MaxPool2d(2, 2)
self.dropout = nn.Dropout(0.25)
self.batch_norm = nn.BatchNorm1d(512)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
x = x.view(-1, 28*28*128)
        x = F.relu(self.batch_norm(self.fc1(x)))
        x = self.dropout(x)
        # no activation on the final layer: CrossEntropyLoss expects raw logits
        x = self.fc2(x)
        return x
#-#-# You do NOT have to modify the code below this line. #-#-#
# instantiate the CNN
model_scratch = Net()
print(model_scratch)
# move tensors to GPU if CUDA is available
if use_cuda:
model_scratch.cuda()
```
__Question 4:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step.
__Answer:__
The model has 3 convolutional layers. All layers have a kernel size of 3 and stride 1. The first layer takes a 224x224 image, and the final convolutional layer produces 128 feature maps. The ReLU activation function is used after each convolution. A (2, 2) max-pooling layer after each convolution halves the spatial size, so the 224x224 input is reduced to 28x28 before the classifier. The two fully connected layers produce a 133-dimensional output, one score per breed. Dropout of 0.25 is also applied to reduce overfitting.
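The flattened size used by `fc1` (28\*28\*128) follows from the three 2x2 pools; a quick standalone check of the arithmetic behind the architecture described above:

```python
# Each of the three 2x2 max-pools halves the spatial resolution of the 224x224 input
size = 224
for _ in range(3):
    size //= 2
print(size)                # 28
print(size * size * 128)   # 100352, the in_features of fc1 (28*28*128)
```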
### (IMPLEMENTATION) Specify Loss Function and Optimizer
Use the next code cell to specify a [loss function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [optimizer](http://pytorch.org/docs/stable/optim.html). Save the chosen loss function as `criterion_scratch`, and the optimizer as `optimizer_scratch` below.
```
import torch.optim as optim
### TODO: select loss function
criterion_scratch = nn.CrossEntropyLoss()
### TODO: select optimizer
optimizer_scratch = optim.SGD(model_scratch.parameters(), lr=0.02)
```
### (IMPLEMENTATION) Train and Validate the Model
Train and validate your model in the code cell below. [Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at filepath `'model_scratch.pt'`.
```
# the following import is required for training to be robust to truncated images
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
"""returns trained model"""
# initialize tracker for minimum validation loss
valid_loss_min = np.Inf
for epoch in range(1, n_epochs+1):
# initialize variables to monitor training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(loaders['train']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## find the loss and update the model parameters accordingly
## record the average training loss, using something like
## train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
optimizer.step()
train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(loaders['valid']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## update the average validation loss
output = model(data)
loss = criterion(output, target)
valid_loss = valid_loss + ((1 / (batch_idx + 1)) * (loss.data - valid_loss))
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch,
train_loss,
valid_loss
))
## TODO: save the model if validation loss has decreased
if valid_loss < valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving the model'.format(valid_loss_min, valid_loss))
torch.save(model.state_dict(), save_path)
valid_loss_min = valid_loss
# return trained model
return model
# train the model
model_scratch = train(15, loaders_scratch, model_scratch, optimizer_scratch,
criterion_scratch, use_cuda, 'model_scratch.pt')
# load the model that got the best validation accuracy
model_scratch.load_state_dict(torch.load('model_scratch.pt'))
```
### (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%.
```
def test(loaders, model, criterion, use_cuda):
# monitor test loss and accuracy
test_loss = 0.
correct = 0.
total = 0.
model.eval()
for batch_idx, (data, target) in enumerate(loaders['test']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# update average test loss
test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
# convert output probabilities to predicted class
pred = output.data.max(1, keepdim=True)[1]
# compare predictions to true label
correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
total += data.size(0)
print('Test Loss: {:.6f}\n'.format(test_loss))
print('\nTest Accuracy: %2d%% (%2d/%2d)' % (
100. * correct / total, correct, total))
# call test function
test(loaders_scratch, model_scratch, criterion_scratch, use_cuda)
```
---
<a id='step4'></a>
## Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)
You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.
### (IMPLEMENTATION) Specify Data Loaders for the Dog Dataset
Use the code cell below to write three separate [data loaders](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader) for the training, validation, and test datasets of dog images (located at `dogImages/train`, `dogImages/valid`, and `dogImages/test`, respectively).
If you like, **you are welcome to use the same data loaders from the previous step**, when you created a CNN from scratch.
```
## TODO: Specify data loaders
loaders_transfer = loaders_scratch.copy()
```
### (IMPLEMENTATION) Model Architecture
Use transfer learning to create a CNN to classify dog breed. Use the code cell below, and save your initialized model as the variable `model_transfer`.
```
import torchvision.models as models
import torch.nn as nn
## TODO: Specify model architecture
model_transfer = models.resnet101(pretrained=True)
if use_cuda:
model_transfer = model_transfer.cuda()
print(model_transfer)
for param in model_transfer.parameters():
param.requires_grad = False
model_transfer.fc = nn.Linear(2048, 133, bias=True)
if use_cuda:
model_transfer = model_transfer.cuda()
```
__Question 5:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.
__Answer:__
I used the ResNet-101 architecture, which is pre-trained on the ImageNet dataset; I expected better accuracy with it and therefore chose it. The architecture is 101 layers deep, and within just 5 epochs the model reached 79% test accuracy. Training for more epochs could improve the accuracy further.
Steps:
1. Import pre-trained resnet101 model
2. Change the out_features of fully connected layer to 133 to solve the classification problem
3. CrossEntropy loss function is chosen as loss function.
### (IMPLEMENTATION) Specify Loss Function and Optimizer
Use the next code cell to specify a [loss function](http://pytorch.org/docs/master/nn.html#loss-functions) and [optimizer](http://pytorch.org/docs/master/optim.html). Save the chosen loss function as `criterion_transfer`, and the optimizer as `optimizer_transfer` below.
```
criterion_transfer = nn.CrossEntropyLoss()
optimizer_transfer = optim.SGD(model_transfer.fc.parameters(), lr=0.02)
```
### (IMPLEMENTATION) Train and Validate the Model
Train and validate your model in the code cell below. [Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at filepath `'model_transfer.pt'`.
```
# train the model
model_transfer = train(5, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'model_transfer.pt')
# load the model that got the best validation accuracy (uncomment the line below)
model_transfer.load_state_dict(torch.load('model_transfer.pt'))
```
### (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.
```
test(loaders_transfer, model_transfer, criterion_transfer, use_cuda)
```
### (IMPLEMENTATION) Predict Dog Breed with the Model
Write a function that takes an image path as input and returns the dog breed (`Affenpinscher`, `Afghan hound`, etc) that is predicted by your model.
```
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.
data_transfer = loaders_transfer
# list of class names by index, i.e. a name can be accessed like class_names[0]
class_names = [item[4:].replace("_", " ") for item in data_transfer['train'].dataset.classes]
def predict_breed_transfer(img_path):
    # load the image, apply the same normalization used in training,
    # and return the predicted breed
    image = Image.open(img_path).convert('RGB')
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    transformations = transforms.Compose([transforms.Resize(size=(224, 224)),
                                          transforms.ToTensor(),
                                          normalize])
    transformed_image = transformations(image)[:3, :, :].unsqueeze(0)
    if use_cuda:
        transformed_image = transformed_image.cuda()
    # inference only: switch to eval mode and disable gradient tracking
    model_transfer.eval()
    with torch.no_grad():
        output = model_transfer(transformed_image)
    pred_index = torch.max(output, 1)[1].item()
    return class_names[pred_index]
```
---
<a id='step5'></a>
## Step 5: Write your Algorithm
Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,
- if a __dog__ is detected in the image, return the predicted breed.
- if a __human__ is detected in the image, return the resembling dog breed.
- if __neither__ is detected in the image, provide output that indicates an error.
You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the `face_detector` and `dog_detector` functions developed above. You are __required__ to use your CNN from Step 4 to predict dog breed.
Some sample output for our algorithm is provided below, but feel free to design your own user experience!

### (IMPLEMENTATION) Write your Algorithm
```
### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.
def load_image(img_path):
img = Image.open(img_path)
plt.imshow(img)
plt.show()
def run_app(img_path):
## handle cases for a human face, dog, and neither
if face_detector(img_path):
print ("Human!")
predicted_breed = predict_breed_transfer(img_path)
print("Predicted breed: ",predicted_breed)
load_image(img_path)
elif dog_detector(img_path):
print ("Dog!")
predicted_breed = predict_breed_transfer(img_path)
print("Predicted breed: ",predicted_breed)
load_image(img_path)
else:
print ("Invalid Image")
```
---
<a id='step6'></a>
## Step 6: Test Your Algorithm
In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that _you_ look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?
### (IMPLEMENTATION) Test Your Algorithm on Sample Images!
Test your algorithm on at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images.
__Question 6:__ Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.
__Answer:__ Yes, the model performed better than I expected.
The following points could help improve my algorithm:
1. Using more training data.
2. Hyperparameter tuning.
3. Trying more image augmentation.
4. A different architecture than ResNet-101 may work better.
```
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.
## suggested code, below
for file in np.hstack((human_files[:3], dog_files[:3])):
run_app(file)
```
References:
1. Original repo for Project - GitHub: https://github.com/udacity/deep-learning-v2-pytorch/blob/master/project-dog-classification/
2. Resnet101: https://pytorch.org/docs/stable/_modules/torchvision/models/resnet.html#resnet101
3. Imagenet training in Pytorch: https://github.com/pytorch/examples/blob/97304e232807082c2e7b54c597615dc0ad8f6173/imagenet/main.py#L197-L198
4. Pytorch Documentation: https://pytorch.org/docs/master/
---
<a href="https://colab.research.google.com/github/jonkrohn/ML-foundations/blob/master/notebooks/7-algos-and-data-structures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Algorithms & Data Structures
This class, *Algorithms & Data Structures*, introduces the most important computer science topics for machine learning, enabling you to design and deploy computationally efficient data models.
Through the measured exposition of theory paired with interactive examples, you’ll develop a working understanding of all of the essential data structures across the list, dictionary, tree, and graph families. You’ll also learn the key algorithms for working with these structures, including those for searching, sorting, hashing, and traversing data.
The content covered in this class is itself foundational for the *Optimization* class of the *Machine Learning Foundations* series.
Over the course of studying this topic, you'll:
* Use “Big O” notation to characterize the time efficiency and space efficiency of a given algorithm, enabling you to select or devise the most sensible approach for tackling a particular machine learning problem with the hardware resources available to you.
* Get acquainted with the entire range of the most widely-used Python data structures, including list-, dictionary-, tree-, and graph-based structures.
* Develop an understanding of all of the essential algorithms for working with data, including those for searching, sorting, hashing, and traversing.
**Note that this Jupyter notebook is not intended to stand alone. It is the companion code to a lecture or to videos from Jon Krohn's [Machine Learning Foundations](https://github.com/jonkrohn/ML-foundations) series, which offer detail on the following:**
*Segment 1: Introduction to Data Structures and Algorithms*
* A Brief History of Data and Data Structures
* A Brief History of Algorithms
* “Big O” Notation for Time and Space Complexity
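As a tiny preview of the Big O material, here is a sketch (our own example, not course code) that counts the comparisons made by a linear scan versus a binary search of a sorted list — O(n) against O(log n):

```python
def linear_search(items, target):
    """Scan left to right; O(n) comparisons in the worst case."""
    comparisons = 0
    for value in items:
        comparisons += 1
        if value == target:
            break
    return comparisons

def binary_search(items, target):
    """Halve the sorted search space each step; O(log n) comparisons."""
    comparisons = 0
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        comparisons += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return comparisons

sorted_items = list(range(1_000_000))
print(linear_search(sorted_items, 999_999))  # ~n comparisons
print(binary_search(sorted_items, 999_999))  # ~log2(n) comparisons
```

For a million items, the linear scan can need a million comparisons while the binary search needs at most about twenty.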
*Segment 2: Lists and Dictionaries*
* List-Based Data Structures: Arrays, Linked Lists, Stacks, Queues, and Deques
* Searching and Sorting: Binary, Bubble, Merge, and Quick
* Dictionaries: Sets and Maps
* Hashing: Hash Tables and Hash Maps
*Segment 3: Trees and Graphs*
* Trees: Binary Search, Heaps, and Self-Balancing
* Graphs: Terminology, Coded Representations, Properties, Traversals, and Paths
* Resources for Further Study of Data Structures & Algorithms
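And as a tiny preview of the traversal material, a breadth-first traversal over a plain adjacency-list graph (again our own sketch, not course code):

```python
from collections import deque

def breadth_first(graph, start):
    """Visit nodes level by level, returning them in visit order."""
    visited = [start]
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
    return visited

graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D'],
    'C': ['A', 'D'],
    'D': ['B', 'C'],
}
print(breadth_first(graph, 'A'))  # ['A', 'B', 'C', 'D']
```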
**Code coming in July 2020... Watch this space**
---
```
BEGIN ASSIGNMENT
requirements: requirements.txt
solutions_pdf: true
generate:
    pdf: true
    zips: false
export_cell:
    pdf: false
    instructions: "Please submit the resultant .zip file to the SciTeens platform"
```
# Lesson Four: Plotting Data
Hey there! Today you'll work on visualizing data.
<div class="alert alert-block alert-info">
<b>What you should learn today: </b> How to use MatPlotLib to plot data.
</div>
## Section One: Importing Packages
Once again, we'll be importing pandas. This time around, we'll also import matplotlib and its pyplot submodule so that we can create some stunning plots today.
```
import matplotlib.pyplot as plt
import pandas as pd
```
## Section Two: Velocity vs. Time
Using plotting, we can visualize many things and create many types of graphs. We encourage you to explore everything you can do with matplotlib (you can create some really pretty graphs!). With respect to physics, though, we're mostly interested in plotting data to visualize the motion of an object.
Let's say we have an object in free fall. That means it's accelerating at -9.8 m/s/s. Can you initialize arrays for time and velocity in a pandas DataFrame? We'll start you off.
```
time = [10, 9, 8, 6, 5, 4, 3, 2, 1, 0]
# time values, in seconds (note that this list counts down to zero)
```
### Question One
In the next cell, initialize values for the velocity array by filling in the partially-written for-loop. Make sure to print your values so you can see the array.
```
BEGIN QUESTION
name: q1
points: 3
```
```
velocity = [-9.8 * _ for _ in _]
velocity = [-9.8 * t**2 for t in time] # SOLUTION
# TEST
isinstance(velocity, list)
# TEST
len(velocity) == len(time)
# TEST
check_list = [-980.0000000000001, -793.8000000000001, -627.2, -352.8, -245.00000000000003, -156.8,-88.2,-39.2,-9.8,-0.0]
all([abs(velocity[i] - check_list[i]) < 0.00001 for i in range(len(velocity))])
```
### Question Two
Now that you have both time and velocity, try putting them both in a data frame, labeling both sets of data. Put the time values in the column `time`, and the velocity values in the column `velocity`. Assign your new dataframe to the variable `df`.
```
df = ...
df = pd.DataFrame({'time': time, 'velocity': velocity}) # SOLUTION
# TEST
df['time'][0] == 10
# TEST
abs(df['velocity'][4] - -245.00000000000003) < 0.000001
```
Now let's plot the data. The output will be a velocity-time graph.
Notice that the plt.plot() method will take an x and y parameter. First, we feed it the time values (as the x-axis), and then we give it the velocity values (as the y-axis). The resultant plot will be a **Velocity-Time** graph. Before running the cell, think about what the plot should look like. If we have constant downward acceleration, what should the slope of the plot be? What should it look like?
```
plt.plot(df['time'], df['velocity'])
plt.show()
```
Was the plot what you expected? If not, think about it again. Constant acceleration means that the velocity-time graph will be a straight line: the velocity is changing at a steady rate. The downward acceleration is a negative value, which gives the line a negative slope.
To make the graph easier to read, we can add labels and a title.
```
plt.plot(df['time'], df['velocity'])
plt.xlabel('Time, in seconds')
plt.ylabel('Velocity, in m/s')
plt.title('Velocity vs. Time for an Object in Free Fall')
plt.show()
```
## Section Three: Acceleration vs. Time
Now that you have a velocity-time graph, let's graph acceleration vs. time. Before you plot, think to yourself about what an acceleration vs. time graph of an object in free fall would look like.
Let's add another column of data -- this time, acceleration values.
```
acceleration = [-9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8 ]
df['acceleration'] = acceleration
```
### Question Three
Using our previous graph as an example, call the plt.plot(), plt.xlabel(), plt.ylabel(), plt.title(), and plt.show() methods to display the acceleration-time graph. Set the title to be "Acceleration vs Time for an object in Free Fall", the x-axis label to be "Time in seconds", and the y-axis label to be "Acceleration in m/s^2".
```
# BEGIN SOLUTION
plt.plot(df['time'], df['acceleration'])
plt.xlabel('Time, in seconds')
plt.ylabel('Acceleration in m/s^2')
plt.title('Acceleration vs Time for an object in Free Fall')
plt.show()
# END SOLUTION
```
Hopefully, you guessed that the graph would just look like a horizontal line at y = -9.8. This is because the value for acceleration does not change.
An important thing to note here is that we've chosen a **negative** value for acceleration due to gravity. Depending on the physics problem, this will not always be the case. You will definitely see both positive and negative 9.8 m/s/s used for acceleration due to gravity.
However, it is important to understand the difference between a positive and negative value for acceleration and to be **consistent**. Here, we chose the direction down (as in, towards the Earth) to be negative. That's why our value for acceleration is negative. Since we're only considering one-dimensional motion here (the object only moves down in this example), it doesn't matter too much, but in other problems, it will.
## Section Four: Position vs. Time
Now that we've covered velocity and acceleration vs. time graphs, let's move on to position-time. A good way to think about physics plots is that they decrease in complexity as you go from position to velocity to acceleration. Position plots are often the most complex-looking, but they can be extremely useful in visualizing motion.
Just like before, let's start with making arrays and then plotting them.
```
position = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
# we start with a list of placeholder zeros
# this for loop will fill it with values
for j in range(len(time)):
position[j] = velocity[j] * time[j]
print(position)
```
### Question Four
Add the position values to the existing DataFrame and print the resultant DataFrame.
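One possible answer, sketched with the lesson's lists rebuilt inline so the cell stands alone (in the notebook you would simply reuse the existing `df` and `position`):

```python
import pandas as pd

# rebuild the lesson's lists (same values as the cells above)
time = [10, 9, 8, 6, 5, 4, 3, 2, 1, 0]
velocity = [-9.8 * t**2 for t in time]
position = [velocity[j] * time[j] for j in range(len(time))]
df = pd.DataFrame({'time': time, 'velocity': velocity})

# Question Four: add the position values as a new column, then print
df['position'] = position
print(df)
```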
### Question Five
Now, let's plot. Again, think about what the plot should look like. Should it be a straight line? Think about how the position values are changing as time goes on. Are they changing at a constant rate?
Just like before, use plt methods to plot the Position vs. Time graph.
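A sketch of the plotting calls, following the pattern of the earlier velocity-time graph (the data is rebuilt inline here and the label wording is our own):

```python
import matplotlib
matplotlib.use('Agg')  # headless-safe backend; harmless in a notebook
import matplotlib.pyplot as plt
import pandas as pd

# rebuild the data from the cells above
time = [10, 9, 8, 6, 5, 4, 3, 2, 1, 0]
velocity = [-9.8 * t**2 for t in time]
position = [v * t for v, t in zip(velocity, time)]
df = pd.DataFrame({'time': time, 'velocity': velocity, 'position': position})

plt.plot(df['time'], df['position'])
plt.xlabel('Time, in seconds')
plt.ylabel('Position, in meters')
plt.title('Position vs. Time for an Object in Free Fall')
plt.show()
```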
Notice that the graph is a curve, not a straight line. This is because the position is NOT decreasing at a steady rate. If it decreased at a steady rate, it would not be accelerating. Instead, as time goes on, the particle is moving more and more rapidly. The closer it gets to Earth, the faster it moves. Therefore, with every second that passes, the particle has traveled further than it did the previous second.
## Section Five: Putting Them Together
Now, let's try plotting all three functions on the same graph. This way, we can more easily see the relationship of position, velocity, and acceleration of an object in free fall. To do this, we'll create three different lines and use a legend to distinguish them.
### Question Six
Most of the code has already been written for you, but see if you can fill in the blanks.
```
plt.plot(df['___'], df['___'], label = "____")
plt.plot(df['___'], df['___'], label = "___")
plt.plot(df['___'], df['___'], label = "___")
plt.xlabel('Time, in seconds')
plt.title('Position, Velocity, and Acceleration vs. Time')
plt.legend() # this invokes the legend
plt.show()
```
Notice the use of labels to distinguish each line.
## Section Six: Built-in Pandas Methods
We can also utilize built-in Pandas methods to analyze our graphs. For example, you've learned that
$ acceleration_{average} = \frac{{\Delta} velocity}{{\Delta} time} $
And that, similarly, $ velocity_{average} = \frac{{\Delta} position}{{\Delta} time} $
But, as we learned in the previous lesson, Pandas has a built-in method called mean() which finds the average of a DataFrame column.
Let's find average acceleration together. You'll do velocity on your own. First, let's find it using the formula we learned in class.
Recall that $ {\Delta} v = v_{final} - v_{initial} $
```
deltaV = velocity[9] - velocity[0] # ASSIGNING DELTA V
deltaT = time[9] - time[0] # ASSIGNING DELTA T
avg_acceleration = deltaV / deltaT
print(avg_acceleration)
```
Are you surprised by the output of the cell? It should make sense, since acceleration in this problem is constant. The following cell uses the built-in method.
```
avg_acceleration_2 = df['acceleration'].mean()
print(avg_acceleration_2)
```
Note that you cannot call acceleration.mean(): acceleration is a plain Python list, and mean() is a pandas method available on DataFrame and Series objects.
### Question Seven
Acceleration was pretty straightforward --- what about velocity? Find the average using the formula and the built-in method.
```
# avg_velocity =
# calculate using built-in method
```
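One possible answer, sketched with the lesson's values rebuilt inline. The formula average uses Δposition/Δtime, while the built-in average is the mean of the velocity samples, so the two need not agree here:

```python
import pandas as pd

# rebuild the lesson's lists (same values as the cells above)
time = [10, 9, 8, 6, 5, 4, 3, 2, 1, 0]
velocity = [-9.8 * t**2 for t in time]
position = [v * t for v, t in zip(velocity, time)]
df = pd.DataFrame({'time': time, 'velocity': velocity, 'position': position})

# formula: average velocity = delta position / delta time
avg_velocity = (position[9] - position[0]) / (time[9] - time[0])
print(avg_velocity)

# calculate using built-in method
avg_velocity_2 = df['velocity'].mean()
print(avg_velocity_2)
```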
Are the two values the same? Why or why not?
### Question Eight
Using either value of average velocity, can you plot a graph with the velocity values and the average? Use the commented instructions below as a guide.
```
# create a list of the average velocity value
# add it to the DataFrame
# plot the two lines
# invoke a legend
```
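A sketch of one way to do it (the column name `avg_velocity` is our own choice, and the data is rebuilt inline):

```python
import matplotlib
matplotlib.use('Agg')  # headless-safe backend; harmless in a notebook
import matplotlib.pyplot as plt
import pandas as pd

# rebuild the data from the cells above
time = [10, 9, 8, 6, 5, 4, 3, 2, 1, 0]
velocity = [-9.8 * t**2 for t in time]
df = pd.DataFrame({'time': time, 'velocity': velocity})

# create a list of the average velocity value and add it to the DataFrame
df['avg_velocity'] = [df['velocity'].mean()] * len(df)

# plot the two lines and invoke a legend
plt.plot(df['time'], df['velocity'], label='velocity')
plt.plot(df['time'], df['avg_velocity'], label='average velocity')
plt.xlabel('Time, in seconds')
plt.legend()
plt.show()
```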
## Challenge Question
Here's a challenge problem for you. Can you plot the upward motion of the same object?
- The object was thrown up at an initial velocity of 2 m/s
- Assume that all the previous data you've been given/calculated for the downward motion is correct
- **Hint**: The acceleration (although the particle is changing direction) will still be a constant -9.8 m/s/s. You are welcome to use a positive value if you'd like, but remember that this will change values you've previously calculated.
Create bigger lists for position, velocity, time, and acceleration, and plot every style of graph covered today.
```
# create bigger lists
# plot graph 1
# plot graph 2
# plot graph 3
```
What are the average values for velocity, position, and acceleration?
```
# find and print average velocity
# find and print average position
# find and print average acceleration
```
<div class="alert alert-block alert-warning">
<b>EXTRA MATPLOTLIB HELP: </b> In case you're desperately craving more Matplotlib knowledge, here's a cheat sheet to check out.
</div>
Just like with our Pandas notebook, we didn't cover absolutely everything there is to cover about Matplotlib. There's tons more exploring you can do on your own time, and this document is a great place to start. As always, if you have any questions, don't hesitate to reach out.
```
%%html
<iframe id="fred" style="border:1px solid #666CCC" title="PDF in an i-Frame" src="https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Matplotlib_Cheat_Sheet.pdf" frameborder="1" scrolling="auto" height="1100" width="850" ></iframe>
```
Also, here's a nice summary of different kinds of plots you can make and how you can make them. Feel free to explore!
```
%%html
<iframe id="fred" style="border:1px solid #666CCC" title="PDF in an i-Frame" src="https://matplotlib.org/stable/tutorials/introductory/sample_plots.html" frameborder="1" scrolling="auto" height="1100" width="850" ></iframe>
```
---
```
import itertools as it
import sys
import os
#if len(sys.argv) != 3:
# print("Usage: python3 " + sys.argv[0] + " cluster.xyz" + " mode")
# print("mode=1: no cp")
# print("mode=2, whole cluster cp")
# print("mode=3, individual cluster cp")
# sys.exit(1)
#fxyz = sys.argv[1]
#mode = int(sys.argv[2])
fxyz = "cluster.xyz"
atlist = [3,3,3,3,3]
chglist = [0,0,0,0,0]
mode = 3
#atlist = [3,3]
# Function to write the different combinations from an xyz file
def write_combs(xyz, use_cp, atlist):
f = open(xyz,'r')
nat = f.readline().split()[0]
f.readline()
mons = []
for i in range(len(atlist)):
m = []
for j in range(atlist[i]):
line = f.readline()
m.append(line)
mons.append(m)
comb = []
monsN = range(len(atlist))
for i in range(1,len(atlist) + 1):
comb.append(list(it.combinations(monsN,i)))
if not use_cp:
for i in range(len(atlist)):
fname = str(i + 1) + "b.xyz"
ff = open(fname,'w')
for j in range(len(comb[i])):
inat = 0
w=" "
for k in range(len(comb[i][j])):
inat += atlist[comb[i][j][k]]
w += " " + str(comb[i][j][k] + 1)
ff.write(str(inat) + "\n")
ff.write(w + "\n")
for k in range(len(comb[i][j])):
for l in mons[comb[i][j][k]]:
ff.write(l)
ff.close()
else:
for i in range(len(atlist)):
# Counterpoise
fname = str(i + 1) + "b.xyz"
ff = open(fname,'w')
for j in range(len(comb[i])):
w=" "
for k in range(len(comb[i][j])):
w += " " + str(comb[i][j][k] + 1)
ff.write(str(nat) + "\n")
ff.write(w + "\n")
for k in range(len(atlist)):
if not k in comb[i][j]:
for l in mons[k]:
line = l.strip().split()
line[0] = line[0] + "1"
for mm in range(len(line)):
ff.write(line[mm] + " ")
ff.write("\n")
for k in range(len(comb[i][j])):
for l in mons[comb[i][j][k]]:
ff.write(l)
ff.close()
return comb
def write_xyz(xyz,atlist,chglist,comb):
f = open(xyz,'r')
nat = f.readline().split()[0]
f.readline()
mons = []
for i in range(len(atlist)):
m = []
for j in range(atlist[i]):
line = f.readline()
m.append(line)
mons.append(m)
monsN = range(len(atlist))
for i in range(len(atlist)):
fname = str(i + 1) + "b.xyz"
ff = open(fname,'r')
foldname = str(i + 1) + "b"
os.mkdir(foldname)
for j in range(len(comb[i])):
os.mkdir(foldname + "/" + str(j + 1))
inat = int(ff.readline().split()[0])
mns = ff.readline()
fx = open(foldname + "/" + str(j + 1) + "/input.xyz", 'w')
fx.write(str(inat) + "\n")
fx.write(mns)
for k in range(inat):
fx.write(ff.readline())
fx.close()
fx = open(foldname + "/" + str(j + 1) + "/input.charge", 'w')
c = 0
for k in range(len(comb[i][j])):
c += chglist[comb[i][j][k]]
fx.write(str(c) + '\n')
fx.close()
ff.close()
# Obtain the different configurations
if mode == 1:
comb = write_combs(fxyz, False, atlist)
write_xyz(fxyz,atlist,chglist,comb)
elif mode == 2:
comb = write_combs(fxyz, True, atlist)
write_xyz(fxyz,atlist,chglist,comb)
elif mode == 3:
comb = write_combs(fxyz, False, atlist)
write_xyz(fxyz,atlist,chglist,comb)
for i in range(len(atlist)):
foldname = str(i + 1) + "b"
for j in range(len(comb[i])):
fi = foldname + "/" + str(j + 1)
os.chdir(fi)
atl = []
chgl = []
for k in range(len(comb[i][j])):
atl.append(atlist[comb[i][j][k]])
chgl.append(chglist[comb[i][j][k]])
cmb = write_combs("input.xyz",True,atl)
write_xyz("input.xyz",atl,chgl,cmb)
os.chdir("../../")
else:
print("Mode " + str(mode) + " not defined")
sys.exit(1)
if mode == 1:
    print("\nYou ran the XYZ preparation without counterpoise correction\n")
elif mode == 2:
    print("\nYou ran the XYZ preparation with whole-cluster counterpoise correction\n")
elif mode == 3:
    print("\nYou ran the XYZ preparation with individual-cluster counterpoise correction\n")
if mode == 1 or mode == 2:
a = """
Now you have all the XYZ in the 1b/1, 1b/2 ... , 2b... folders
Please, generate appropriate inputs, run the calculations and save
the TOTAL ENERGY in the file input.energy inside each folder.
Then run the second part of the script.
"""
elif mode == 3:
a = """
Now you have all the XYZ in the 1b/1, 1b/2 ... , 2b... folders
Please, generate appropriate inputs, run the calculations and save
the TOTAL ENERGY in the file input.energy inside each folder.
Inside each one of the folders, there is a new tree that contains
the coordinates with individual cluster counterpoise correction.
Please, generate appropriate inputs, run the calculations and save
the TOTAL ENERGY in the file input.energy inside each folder.
Then run the second part of the script.
"""
print(a)
```
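The script's subset machinery is just `itertools.combinations`. A quick sketch (separate from the script above) showing how many n-body fragments it generates for the five monomers in `atlist` — the binomial coefficients C(5, n):

```python
import itertools as it
from math import comb

monomers = range(5)  # five monomers, as in atlist above
counts = []
for n in range(1, 6):
    subsets = list(it.combinations(monomers, n))
    counts.append(len(subsets))
    print(n, len(subsets), subsets[:3])  # first few subsets of each size

# 5 monomers, 10 dimers, 10 trimers, 5 tetramers, 1 pentamer
print(counts)
assert counts == [comb(5, n) for n in range(1, 6)]
```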
---
### \*\*\*needs cleaning***
```
import pandas as pd
import numpy as np
import sys
import os
import itertools
import time
import random
#import utils
sys.path.insert(0, '../utils/')
from utils_preprocess_v3 import *
from utils_modeling_v9 import *
from utils_plots_v2 import *
#sklearn
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# other third-party dependencies used below (in case the wildcard utils imports do not provide them)
import scipy.linalg
import cvxpy as cp
import pycasso
from joblib import Parallel, delayed
start_time = time.time()
data = pd.read_csv('../data/datasets_processed/OpenPBTA_data_mean.csv', index_col='Unnamed: 0', dtype = 'unicode')
data = data.T.reset_index().rename(columns = {'index' : 'node'})
response = pd.read_csv('../data/datasets_processed/OpenPBTA_response.csv', index_col='Kids_First_Biospecimen_ID')
interactome = pd.read_csv('../data/interactomes/inbiomap_processed.txt', sep = '\t')
# get nodes from data and graph
data_nodes = data['node'].tolist()
interactome_nodes = list(set(np.concatenate((interactome['node1'], interactome['node2']))))
# organize data
organize = Preprocessing()
save_location = '../data/reduced_interactomes/reduced_interactome_OpenPBTA.txt'
organize.transform(data_nodes, interactome_nodes, interactome, data, save_location, load_graph = True)
# extract info from preprocessing
X = organize.sorted_X.T.values
y = response.values.reshape(-1,1)
L_norm = organize.L_norm
L = organize.L
g = organize.g
num_to_node = organize.num_to_node
# split for training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
# scaling X
scaler_X = StandardScaler()
X_train = scaler_X.fit_transform(X_train)
X_test = scaler_X.transform(X_test)
# scaling y
scaler_y = StandardScaler()
y_train = scaler_y.fit_transform(y_train).reshape(-1)
y_test = scaler_y.transform(y_test).reshape(-1)
val_1, vec_1 = scipy.linalg.eigh(L_norm.toarray())
val_zeroed = val_1 - min(val_1) + 1e-8
L_rebuild = vec_1.dot(np.diag(val_zeroed)).dot(np.linalg.inv(vec_1))
X_train_lower = np.linalg.cholesky(L_rebuild)
X_train_lower.dot(X_train_lower.T).sum()
L_norm.sum()
np.save('L_half.npy', X_train_lower)
```
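The eigenvalue shift above exists because a graph Laplacian is only positive semi-definite (it has a zero eigenvalue), so a plain Cholesky factorization would fail; shifting the spectrum by 1e-8 makes it strictly positive definite. A toy check of the same trick on a small path-graph Laplacian (our own example; note that since `eigh` returns orthonormal eigenvectors, `inv(vec)` equals `vec.T`):

```python
import numpy as np
import scipy.linalg

# Laplacian of the path graph 0-1-2: symmetric PSD with one zero eigenvalue
L_toy = np.array([[ 1., -1.,  0.],
                  [-1.,  2., -1.],
                  [ 0., -1.,  1.]])

val, vec = scipy.linalg.eigh(L_toy)
val_shifted = val - val.min() + 1e-8        # nudge the zero eigenvalue up
L_rebuilt = (vec * val_shifted) @ vec.T     # V diag(lambda) V^T
lower = np.linalg.cholesky(L_rebuilt)

# the factor reproduces the Laplacian up to the tiny shift
print(np.abs(lower @ lower.T - L_toy).max())
```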
# Lasso + LapRidge
```
# hyperparameters
alpha1_list = np.logspace(-1,0,15)
alpha2_list = np.logspace(-1,2,15)
threshold_list = np.logspace(-3,-1,10)
max_features = 10
alpha_pairs = list(itertools.product(alpha1_list, alpha2_list))
def loss_fn(X,Y, L, alpha1, alpha2, beta):
return 0.5/(len(X)) * cp.norm2(cp.matmul(X, beta) - Y)**2 + \
alpha1 * cp.norm1(beta) + \
alpha2 * cp.sum(cp.quad_form(beta,L))
def run(pair, X_train, y_train, L_norm):
beta = cp.Variable(X_train.shape[1])
alpha1 = cp.Parameter(nonneg=True)
alpha2 = cp.Parameter(nonneg=True)
alpha1.value = pair[0]
alpha2.value = pair[1]
problem = cp.Problem(cp.Minimize(loss_fn(X_train, y_train, L_norm, alpha1, alpha2, beta )))
problem.solve(solver=cp.SCS, verbose=True, max_iters=50000)
np.save('openpbta/' + str(pair) + '.npy', beta.value)
return beta.value
betas = Parallel(n_jobs=8, verbose=10)(delayed(run)(alpha_pairs[i],
X_train,
y_train,
L_norm) for i in range(len(alpha_pairs)))
# load betas
beta_order = [str(i) + '.npy' for i in alpha_pairs]
betas = [np.load('openpbta/' + i) for i in beta_order]
feats = [getFeatures(None, i, threshold=0.001, max_features=10) for i in betas]
regr = LinearRegression()
scores = [getScoring(regr, X_train, y_train, X_test, y_test, i, None) for i in feats]
train_scores = [i[0] for i in scores]
test_scores = [i[1] for i in scores]
gridsearch_results = pd.DataFrame(np.array(test_scores), columns = ['Test MSE'])
getGridsearchPlot(gridsearch_results, alpha1_list, alpha2_list, save_location = None)
np.where(test_scores == min(test_scores))
min(test_scores)
getTranslatedNodes(feats[17], betas[17][feats[17]], num_to_node, g)
```
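The MCP and SCAD sections below reuse `X_train_lower` by stacking it (scaled by √α2) under the design matrix with zeros appended to the targets, which folds the Laplacian ridge penalty into an ordinary least-squares term so pycasso's sparse solvers can handle it. A small numerical check of the identity ‖Xβ − y‖² + α βᵀLβ = ‖X̃β − ỹ‖² (toy matrices of our own; we stack the transpose of the lower Cholesky factor so the quadratic form comes out exactly as βᵀLβ):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
y = rng.normal(size=6)
beta = rng.normal(size=3)
alpha = 0.7

# a small symmetric positive-definite matrix standing in for the Laplacian
L = np.array([[2., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 2.]])
lower = np.linalg.cholesky(L)   # L == lower @ lower.T

penalized = np.sum((X @ beta - y) ** 2) + alpha * beta @ L @ beta

# augmented problem: stack sqrt(alpha) * lower.T under X, zeros under y
X_aug = np.vstack((X, np.sqrt(alpha) * lower.T))
y_aug = np.concatenate((y, np.zeros(3)))
augmented = np.sum((X_aug @ beta - y_aug) ** 2)

print(abs(penalized - augmented))  # agrees to floating-point precision
```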
# MCP + LapRidge
```
# define training params
alpha1_list = np.logspace(-3,-2,15)
alpha2_list = np.logspace(-1,2,15)
threshold_list = np.logspace(-3,-1,10)
max_features = 10
alpha_list_pairs = list(itertools.product(alpha1_list, alpha2_list))
results = {}
feats_list = []
betas = []
for i in alpha2_list:
X_train_new = np.vstack((X_train, np.sqrt(i)*X_train_lower))
y_train_new = np.concatenate((y_train, np.zeros(len(X_train_lower))))
s = pycasso.Solver(X_train_new, y_train_new, lambdas=alpha1_list, penalty = 'mcp')
s.train()
beta = s.coef()['beta']
betas += [i for i in beta]
feats = [getFeatures(None, i, threshold=0.001, max_features=10) for i in beta]
feats_list += feats
print([len(i) for i in feats])
regr = LinearRegression()
scores = [getScoring(regr, X_train, y_train, X_test, y_test, i, None) for i in feats]
results[i] = scores
train_scores = []
test_scores = []
for k,v in results.items():
train_scores += [i[0] for i in v]
test_scores += [i[1] for i in v]
gridsearch_results = pd.DataFrame(np.array(test_scores), columns = ['Test MSE'])
getGridsearchPlot(gridsearch_results, alpha1_list, alpha2_list, save_location = None)
min(test_scores)
np.where(test_scores == min(test_scores))
getTranslatedNodes(feats_list[75], betas[75][feats_list[75]], num_to_node, g)
```
# SCAD + LapRidge
```
# define training params
alpha1_list = np.logspace(-3,-2,15)
alpha2_list = np.logspace(-1,2,15)
threshold_list = np.logspace(-3,-1,10)
max_features = 10
alpha_list_pairs = list(itertools.product(alpha1_list, alpha2_list))
results = {}
feats_list = []
betas = []
for i in alpha2_list:
X_train_new = np.vstack((X_train, np.sqrt(i)*X_train_lower))
y_train_new = np.concatenate((y_train, np.zeros(len(X_train_lower))))
s = pycasso.Solver(X_train_new, y_train_new, lambdas=alpha1_list, penalty = 'scad')
s.train()
beta = s.coef()['beta']
betas += [i for i in beta]
feats = [getFeatures(None, i, threshold=0.001, max_features=10) for i in beta]
feats_list += feats
print([len(i) for i in feats])
regr = LinearRegression()
scores = [getScoring(regr, X_train, y_train, X_test, y_test, i, None) for i in feats]
results[i] = scores
train_scores = []
test_scores = []
for k,v in results.items():
train_scores += [i[0] for i in v]
test_scores += [i[1] for i in v]
gridsearch_results = pd.DataFrame(np.array(test_scores), columns = ['Test MSE'])
getGridsearchPlot(gridsearch_results, alpha1_list, alpha2_list, save_location = None)
min(test_scores)
np.where(test_scores == min(test_scores))
getTranslatedNodes(feats_list[201], betas[201][feats_list[201]], num_to_node, g)
```
---
Reconstructing virtual markers
==============================
In this tutorial, we will reconstruct virtual markers for anatomic landmarks that were not physically instrumented during the movement acquisition. We usually do this kind of reconstruction when it is not practical or feasible to stick a marker on an anatomical landmark. Instead, we track clusters of markers on rigid bodies affixed to the segment, and we express the position of virtual markers relative to these clusters.
This process has two steps:
1. A calibration step with several very short calibration acquisitions:
a) A static acquisition of a few seconds where we can see every marker.
b) Sometimes, probing acquisitions, one for each virtual marker. In each of these short acquisitions, we point to the anatomical landmark using a calibrated probe. The aim is to express each landmark as part of its segment's marker cluster: since the landmarks move rigidly with the clusters, we can reconstruct them during the analysed tasks using the tracked clusters.
2. A task analysis step, where the clusters are tracked and the virtual markers are reconstructed in the task acquisition.
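Before diving into the Kinetics Toolkit workflow, the geometry behind step 2 can be sketched with plain numpy (a generic rigid-body illustration, not the ktk API): during calibration we express the landmark in a local frame built from three cluster markers; during the task we rebuild that frame from the tracked cluster and map the stored local point back to global coordinates.

```python
import numpy as np

def cluster_frame(p1, p2, p3):
    """Build a 4x4 homogeneous frame from three non-collinear markers."""
    x = (p2 - p1) / np.linalg.norm(p2 - p1)
    z = np.cross(x, p3 - p1)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    frame = np.eye(4)
    frame[:3, 0], frame[:3, 1], frame[:3, 2], frame[:3, 3] = x, y, z, p1
    return frame

# calibration: cluster markers and a probed landmark, in global coordinates
m1, m2, m3 = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
landmark = np.array([0.2, 0.3, 0.1])
local = np.linalg.inv(cluster_frame(m1, m2, m3)) @ np.append(landmark, 1.0)

# task: the segment (markers and landmark together) undergoes a rigid motion
angle = np.deg2rad(30.0)
R = np.array([[np.cos(angle), -np.sin(angle), 0.],
              [np.sin(angle),  np.cos(angle), 0.],
              [0., 0., 1.]])
t = np.array([0.5, -0.2, 0.7])

def move(p):
    return R @ p + t

# reconstruct the virtual marker from the tracked cluster markers only
reconstructed = (cluster_frame(move(m1), move(m2), move(m3)) @ local)[:3]
print(np.allclose(reconstructed, move(landmark)))  # True
```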
```
import kineticstoolkit.lab as ktk
import numpy as np
```
Read and visualize marker trajectories
--------------------------------------
We proceed exactly as in the previous tutorials, but this time we will perform the analysis based on a minimal set of markers. Let's say that for the right arm and forearm, all we have is one real marker on the lateral epicondyle, and two plates of three markers affixed to the arm and forearm segments (we will show every other marker in blue for easier visualization).
```
# Read the markers
markers = ktk.kinematics.read_c3d_file(
ktk.config.root_folder + '/data/kinematics/sample_propulsion.c3d')
# Set every unnecessary markers to blue
keep_white = ['LateralEpicondyleR', 'ArmR1', 'ArmR2', 'ArmR3',
'ForearmR1', 'ForearmR2', 'ForearmR3']
for marker_name in markers.data:
if marker_name not in keep_white:
markers = markers.add_data_info(marker_name, 'Color', 'b')
# Set the point of view for 3D visualization
viewing_options = {
'zoom': 3.5,
'azimuth': 0.8,
'elevation': 0.16,
'translation': (0.2, -0.7)
}
# Create the player
player = ktk.Player(markers, **viewing_options)
player.to_html5(start_time=0, stop_time=1)
```
The aim of this tutorial is to reconstruct the right acromion, medial epicondyle and both styloids using static and probing acquisitions. Let's begin.
Calibration: Defining cluster configurations using a static acquisition
-----------------------------------------------------------------------
In the static acquisition, every marker should be visible. We use this trial to define, for each cluster, how the cluster's markers are located relative to one another.
For this example, we will create clusters 'ArmR' and 'ForearmR'.
```
clusters = dict()
# Read the static trial
markers_static = ktk.kinematics.read_c3d_file(
ktk.config.root_folder + '/data/kinematics/sample_static.c3d')
# Show this trial, just to inspect it
player = ktk.Player(markers_static, **viewing_options)
player.to_html5(start_time=0, stop_time=0.5)
```
Using this trial, we now define the arm cluster:
```
clusters['ArmR'] = ktk.kinematics.create_cluster(
markers_static,
marker_names=['ArmR1', 'ArmR2', 'ArmR3', 'LateralEpicondyleR'])
clusters['ArmR']
```
We proceed the same way for the forearm:
```
clusters['ForearmR'] = ktk.kinematics.create_cluster(
markers_static,
marker_names=['ForearmR1', 'ForearmR2', 'ForearmR3'])
clusters['ForearmR']
```
For the probe, we will define its cluster from its known specifications. Each of its six local points is expressed relative to a reference frame centered at the probe tip:
```
clusters['Probe'] = {
'ProbeTip': np.array(
[[0.0, 0.0, 0.0, 1.0]]),
'Probe1': np.array(
[[0.0021213, -0.0158328, 0.0864285, 1.0]]),
'Probe2': np.array(
[[0.0021213, 0.0158508, 0.0864285, 1.0]]),
'Probe3': np.array(
[[0.0020575, 0.0160096, 0.1309445, 1.0]]),
'Probe4': np.array(
[[0.0021213, 0.0161204, 0.1754395, 1.0]]),
'Probe5': np.array(
[[0.0017070, -0.0155780, 0.1753805, 1.0]]),
'Probe6': np.array(
[[0.0017762, -0.0156057, 0.1308888, 1.0]]),
}
clusters['Probe']
```
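These rows are homogeneous coordinates: each point is an [x, y, z, 1] quadruplet, so a single 4x4 matrix can apply a rotation and a translation at once. A quick NumPy illustration of why the trailing 1 matters (the transform below is hypothetical, not ktk code):

```python
import numpy as np

# Probe tip at the local origin, in homogeneous coordinates
probe_tip_local = np.array([0.0, 0.0, 0.0, 1.0])

# A hypothetical global pose of the probe: rotate 90 degrees about z,
# then translate by (1, 2, 3); the last row keeps the trailing 1 intact
T = np.array([[0.0, -1.0, 0.0, 1.0],
              [1.0,  0.0, 0.0, 2.0],
              [0.0,  0.0, 1.0, 3.0],
              [0.0,  0.0, 0.0, 1.0]])

probe_tip_global = T @ probe_tip_local
```

Because the tip sits at the local origin, its global position is simply the translation part of the pose, which is exactly what probing exploits.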
Now that we have defined these clusters, we will be able to track them in every other acquisition. This can be done using the [track_cluster()](../api/kineticstoolkit.kinematics.track_cluster.rst) function.
Calibration: Defining the virtual marker configurations based on probing acquisitions
-------------------------------------------------------------------------------------
Now we will go through every probing acquisition and apply the same process to each one:
1. Locate the probe tip using the probe cluster;
2. Add the probe tip to the segment's cluster.
We will go step by step with the acromion, then we will do the other ones.
```
# Load the markers from the acromion probing trial
probing_markers = ktk.kinematics.read_c3d_file(
ktk.config.root_folder + '/data/kinematics/sample_probing_acromion_R.c3d')
# Track the probe cluster
tracked_markers = ktk.kinematics.track_cluster(
probing_markers,
clusters['Probe']
)
# Look at the contents of the tracked_markers TimeSeries
tracked_markers.data
```
We see that even though the probe tip was not a real marker, its position was reconstructed based on the tracking of the other probe markers. We will add the probe tip to the markers as the location of the acromion.
```
probing_markers.data['AcromionR'] = tracked_markers.data['ProbeTip']
```
Now that the probing markers contain the new marker 'AcromionR', we can add it to the arm cluster.
```
clusters['ArmR'] = ktk.kinematics.extend_cluster(
probing_markers, clusters['ArmR'], new_point = 'AcromionR'
)
# Look at the new content of the arm cluster
clusters['ArmR']
```
Now, we can process every other probing acquisition the same way.
```
# Right medial epicondyle
probing_markers = ktk.kinematics.read_c3d_file(
ktk.config.root_folder
+ '/data/kinematics/sample_probing_medial_epicondyle_R.c3d')
tracked_markers = ktk.kinematics.track_cluster(
probing_markers, clusters['Probe']
)
probing_markers.data['MedialEpicondyleR'] = tracked_markers.data['ProbeTip']
clusters['ArmR'] = ktk.kinematics.extend_cluster(
probing_markers, clusters['ArmR'], new_point = 'MedialEpicondyleR'
)
# Right radial styloid
probing_markers = ktk.kinematics.read_c3d_file(
ktk.config.root_folder
+ '/data/kinematics/sample_probing_radial_styloid_R.c3d')
tracked_markers = ktk.kinematics.track_cluster(
probing_markers, clusters['Probe']
)
probing_markers.data['RadialStyloidR'] = tracked_markers.data['ProbeTip']
clusters['ForearmR'] = ktk.kinematics.extend_cluster(
probing_markers, clusters['ForearmR'], new_point = 'RadialStyloidR'
)
# Right ulnar styloid
probing_markers = ktk.kinematics.read_c3d_file(
ktk.config.root_folder
+ '/data/kinematics/sample_probing_ulnar_styloid_R.c3d')
tracked_markers = ktk.kinematics.track_cluster(
probing_markers, clusters['Probe']
)
probing_markers.data['UlnarStyloidR'] = tracked_markers.data['ProbeTip']
clusters['ForearmR'] = ktk.kinematics.extend_cluster(
probing_markers, clusters['ForearmR'], new_point = 'UlnarStyloidR'
)
```
Now every marker that belongs to a cluster is defined, be it real or virtual:
```
clusters['ArmR']
clusters['ForearmR']
```
Task analysis: Tracking the clusters
------------------------------------
Now that we have defined the clusters and included virtual markers in them, we are ready to process the experimental trial we loaded at the beginning of this tutorial. We already loaded the markers; we will now track the clusters to obtain the position of the virtual markers.
```
markers = markers.merge(
ktk.kinematics.track_cluster(
markers, clusters['ArmR']
)
)
markers = markers.merge(
ktk.kinematics.track_cluster(
markers, clusters['ForearmR']
)
)
# Show those rigid bodies and markers in a player
player = ktk.Player(markers, **viewing_options)
player.to_html5(start_time=0, stop_time=1)
```
That is it: we reconstructed the acromion, medial epicondyle and both styloids from probing acquisitions, without requiring physical markers on these landmarks. We can conclude by adding links for clearer visualization. From this point on, we could continue our analysis and calculate the elbow angles as in the previous tutorial.
```
# Add the segments
segments = {
'ArmR': {
'Color': [1, 0.25, 0],
'Links': [['AcromionR', 'MedialEpicondyleR'],
['AcromionR', 'LateralEpicondyleR'],
['MedialEpicondyleR', 'LateralEpicondyleR']]
},
'ForearmR': {
'Color': [1, 0.5, 0],
'Links': [['MedialEpicondyleR', 'RadialStyloidR'],
['MedialEpicondyleR', 'UlnarStyloidR'],
['LateralEpicondyleR', 'RadialStyloidR'],
['LateralEpicondyleR', 'UlnarStyloidR'],
['UlnarStyloidR', 'RadialStyloidR']]
}
}
player = ktk.Player(markers, segments=segments, **viewing_options)
player.to_html5(start_time=0, stop_time=1)
```
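As a pointer toward that next step, here is a minimal NumPy sketch of computing a joint angle from three reconstructed landmarks (the positions below are made up, and the previous tutorial shows the proper ktk-based way; this only illustrates the geometric idea):

```python
import numpy as np

def angle_between(v1, v2):
    """Angle in degrees between two 3D vectors."""
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical positions (in metres) of reconstructed landmarks at one sample
acromion = np.array([0.0, 0.0, 0.0])
elbow = np.array([0.0, -0.30, 0.0])   # e.g. midpoint of the two epicondyles
wrist = np.array([0.25, -0.45, 0.0])  # e.g. midpoint of the two styloids

arm = acromion - elbow      # elbow-to-shoulder vector
forearm = wrist - elbow     # elbow-to-wrist vector
elbow_angle = angle_between(arm, forearm)
```

Repeating this at every time sample would yield an elbow angle curve over the whole trial.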
For more information on kinematics, please check the [API Reference for the kinematics module](../api/kineticstoolkit.kinematics.rst).
Read the data
-------------
```
from sklearn.datasets import load_iris
data = load_iris()
import pandas as pd
pd.DataFrame(data["data"])
```
Split the data
--------------
```
X = data.data
y = data.target
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3)
print('X Train: {}'.format(X_train.shape))
print('Y Test: {}'.format(y_test.shape))
```
Decision Tree Classifier - initialize and train
---------
```
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier(max_depth=5, random_state=0)
dt = dt.fit(X_train,y_train)
```
Decision Tree Classifier - test the model
---------
```
score = dt.score(X_test,y_test)
print('Decision Tree scores with {}% accuracy'.format(score*100))
#dt.feature_importances_
from sklearn.tree import export_graphviz
attribute_names = ['sepal length in cm','sepal width in cm','petal length in cm','petal width in cm', 'class']
#Export as dot file
export_graphviz(dt, out_file='iris_5.dot', class_names = True, feature_names = attribute_names[0:4])
#Export dot to png
from subprocess import check_call
check_call(['dot','-Tpng','iris_5.dot','-o','iris_5.png'])
```
Random Forest Classifier - initialize and train
-------
```
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(max_depth=5, random_state=0)
rf.fit(X_train,y_train)
```
Random Forest Classifier - test the model
-----------
```
score = rf.score(X_test,y_test)
print('Random Forest scores with {}% accuracy'.format(score*100))
```
Random Forest Classifier Limebit - test the model
-----
```
#RandomForestClassifier object
random_forest_classifier = RandomForestClassifier(n_estimators=10)
from sklearn.model_selection import cross_val_score, KFold
#list of Random Forest accuracies for different test/training splits
k_fold = KFold(n_splits=5, shuffle=True, random_state=0)
accuracies_rand_forest = cross_val_score(random_forest_classifier, X_test, y_test, cv=k_fold, n_jobs=1)
from sklearn.model_selection import GridSearchCV
#tree parameters to be tested
tree_para = {'criterion':['gini','entropy'],'max_depth':[i for i in range(1,20)]} #, 'min_samples_split':[i for i in range (2,20)]}
#GridSearchCV object
grd_clf = GridSearchCV(dt, tree_para, cv=5)
#fits trees with all the different parameter combinations on our data
grd_clf.fit(X_train, y_train)
#best parameters that were found
best_parameters = grd_clf.best_params_
print(best_parameters)
#new tree object with best parameters
model_with_best_tree_parameters = grd_clf.best_estimator_
#k_fold object
k_fold = KFold(n_splits=5, shuffle=True, random_state=0)
#scores reached with different splits, using the tree with the best parameters
k_fold_scores_best = cross_val_score(model_with_best_tree_parameters, X_test, y_test, cv=k_fold, n_jobs=1)
import numpy as np
#scores reached with different splits, using the original tree
k_fold_scores = cross_val_score(dt, X_test, y_test, cv=k_fold, n_jobs=1)
#arithmetic mean of the accuracy scores of the original tree
mean_accuracy = np.mean(k_fold_scores)
#arithmetic mean of the accuracy scores of the best-parameter tree
mean_accuracy_best_parameters_tree = np.mean(k_fold_scores_best)
#arithmetic mean of the list with the accuracies of the Random Forest
accuracy_rand = np.mean(accuracies_rand_forest)
print('Accuracy Random Forest ' + str(round(accuracy_rand,4)))
print('Old accuracy: ' + str(round(mean_accuracy,4)))
print('Best tree accuracy: ' + str(round(mean_accuracy_best_parameters_tree,4)))
```
# PicoRV32 Mixed-Memory Processor Demo
This notebook demonstrates using Jupyter Notebooks and IPython Magics to run C/C++ and assembly code on a PicoRV32 Processor.
The PicoRV32 Processor in this example is a Mixed-Memory processor: it has a 64 KB BRAM memory space and is also connected to DDR memory.
When arguments are passed to the PicoRV32 processor for execution, they will be copied and passed in **BRAM Memory** as PYNQ Contiguous Memory Allocated (CMA) arrays. Previously allocated CMA arrays passed as arguments will not be copied; they will be reused.
When the program terminates, results are propagated back to the calling ARM processor.
## Loading the Overlay
To begin, import the overlay using the following cell. This also loads the IPython Magics: `riscvc`, `riscvcpp`, and `riscvasm`.
```
from riscvonpynq.picorv32.bram.picorv32 import Overlay
overlay = Overlay("picorv32.bit")
```
You can examine the overlay using the `help()` method. This overlay is a subclass of riscvonpynq.Overlay, which itself is a subclass of pynq.Overlay.
```
help(overlay)
```
You can also examine the RISC-V Processor in the overlay. It is named picoBramProcessor.
```
help(overlay.picoBramProcessor)
```
This demonstrates that picoBramProcessor is an instance of riscvonpynq.Processor.BramProcessor. As we stated above, this means that the processor is connected to BRAM and DDR. riscvonpynq.Processor.BramProcessor is an indirect subclass of pynq.overlay.DefaultHierarchy -- this means that the processor is actually a collection of IP wrapped in a Block Diagram Editor IP Hierarchy that is recognized by pynq using the `checkhierarchy` method.
The BramProcessor class provides methods to run, launch (run a program asynchronously), and land (stop an asynchronous program). You can see further documentation using the `help()` method, as above.
## RISC-V Magics
Our package provides three RISC-V Magics. The first is `riscvc`, which compiles C code.
```
%%riscvc test overlay.picoBramProcessor
int main(int argc, char ** argv){
unsigned int * a = (unsigned int *)argv[1];
return a[2];
}
```
You can run the test program above and pass it arguments. The arguments must be a Numpy type.
```
import numpy as np
arg1 = np.array(range(1, 10), np.uint32)
retval = overlay.picoBramProcessor.run(test, arg1)
if(retval != arg1[2]):
print("Test Failed!")
else:
print("Test Passed!")
```
The RISC-V Processor lets the ARM Processor know it is complete by raising the IRQ line. Each processor can do this in a different way. For example, the PicoRV32 processor has a `trap` pin that is raised on an `ebreak` instruction. Other processors must write to GPIO pins.
You can see the IRQ line in the overlay:
```
help(overlay.picoBramProcessor.irq)
```
You can also examine the processor's memory:
```
arr = overlay.picoBramProcessor.psBramController.mmio.array
for i in range(128):
print(f'Memory Index {i:3}: {arr[i]:#0{10}x}')
```
We've also provided Magics for C++ (`riscvcpp`), and Assembly (`riscvasm`). These are demonstrated below:
```
%%riscvcpp test_cpp overlay.picoBramProcessor
class foo{
public:
static int mulby2(int val){
return val * 2;
}
};
int main(int argc, char ** argv){
int * a = (int *)argv[1];
return foo::mulby2(a[0]);
}
import numpy as np
test_cpp_arg = np.array([42], np.int32)
retval = overlay.picoBramProcessor.run(test_cpp, test_cpp_arg)
if(retval != test_cpp_arg*2):
print("Test Failed!")
else:
print("Test Passed!")
```
Finally, some assembly. `int argc` is in register `a0`, and `char **argv` is in register `a1`.
```
%%riscvasm test_asm overlay.picoBramProcessor
.global main
main:
    lw a2, 4(a1) # Get argv[1]
    lw a3, 0(a2) # Get *argv[1]
addi a0, a3, -42 # Add -42, store in a0 (return register)
ret
import numpy as np
test_asm_arg = np.array([42], np.int32)
retval = overlay.picoBramProcessor.run(test_asm, test_asm_arg)
if(retval != test_asm_arg[0] + (-42)):
print('Test failed!')
else:
print('Test passed!')
```
And that's it!
Lambda School Data Science, Unit 2: Predictive Modeling
# Regression & Classification, Module 3
## Assignment
We're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices.
But not just for condos in Tribeca...
Instead, predict property sales prices for **One Family Dwellings** (`BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'`) using a subset of the data where the **sale price was more than \\$100 thousand and less than $2 million.**
The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal.
- [ ] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.
- [ ] Do exploratory visualizations with Seaborn.
- [ ] Do one-hot encoding of categorical features.
- [ ] Do feature selection with `SelectKBest`.
- [ ] Fit a linear regression model with multiple features.
- [ ] Get mean absolute error for the test set.
- [ ] As always, commit your notebook to your fork of the GitHub repo.
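The train/test split above is by date rather than random. A minimal pandas sketch of that kind of split (toy data; the column names only mirror the assignment's, this is not the actual dataset):

```python
import pandas as pd

# Toy frame mirroring the assignment's SALE_DATE / SALE_PRICE columns
sales = pd.DataFrame({
    'SALE_DATE': pd.to_datetime(['2019-01-15', '2019-02-20', '2019-03-30',
                                 '2019-04-05', '2019-04-25']),
    'SALE_PRICE': [300_000, 450_000, 500_000, 650_000, 700_000],
})

cutoff = pd.to_datetime('2019-04-01')
train = sales[sales['SALE_DATE'] < cutoff]   # January - March 2019
test = sales[sales['SALE_DATE'] >= cutoff]   # April 2019
```

A time-based cutoff like this mimics deployment: the model is fit on the past and evaluated on the future.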
## Stretch Goals
- [ ] Add your own stretch goal(s) !
- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).
- [ ] Learn more about feature selection:
- ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance)
- [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html)
- [mlxtend](http://rasbt.github.io/mlxtend/) library
- scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection)
- [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson.
- [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you’re interested in more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients.
- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites).
(That book is good regardless of whether your cultural worldview is inferential statistics or predictive machine learning)
- [ ] Read Leo Breiman's paper, ["Statistical Modeling: The Two Cultures"](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html):
> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:
> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.
> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.
> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors.
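As a small illustration of that quote (a sketch on synthetic data, not the assignment's dataset), scaling, feature selection, and regression can be chained so that one `fit` call runs the whole sequence:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression

# Synthetic data: 100 samples, 5 features, target depends on the first two
rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.randn(100) * 0.1

# One fit/predict call runs scaling, selection, and regression in sequence;
# in cross-validation, each fold refits the whole chain, avoiding leakage.
pipe = make_pipeline(StandardScaler(),
                     SelectKBest(f_regression, k=2),
                     LinearRegression())
pipe.fit(X[:80], y[:80])
r2 = pipe.score(X[80:], y[80:])
```

Because the selector is fit inside the pipeline, it only ever sees the training portion of each split.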
```
# If you're in Colab...
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install required python packages:
# category_encoders, version >= 2.0
# pandas-profiling, version >= 2.0
# plotly, version >= 4.0
!pip install --upgrade category_encoders pandas-profiling plotly
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git
!git pull origin master
# Change into directory for module
os.chdir('module3')
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
import category_encoders as ce
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import f_regression, SelectKBest
import pandas as pd
import numpy as np
import pandas_profiling
import matplotlib.pyplot as plt # plotting lib
import seaborn as sns # matplotlib wrapper plotting lib
import plotly.graph_objs as go # interactive low-level plotting lib https://plot.ly/python/
import plotly.express as px #high-level api wrapper for plotly https://plot.ly/python/plotly-express/#visualize-distributions
# Read New York City property sales data
df = pd.read_csv('../data/NYC_Citywide_Rolling_Calendar_Sales.csv')
# Change column names: replace spaces with underscores
df.columns = [col.replace(' ', '_') for col in df]
# SALE_PRICE was read as strings.
# Remove symbols, convert to integer
df['SALE_PRICE'] = (
df['SALE_PRICE']
.str.replace('$','')
.str.replace('-','')
.str.replace(',','')
.astype(int)
)
#import pandas_profiling
#df.profile_report()
print(df.shape)
df.head()
print(df.shape)
df.describe(include='all')
# check for missing values
df.isna().sum()
df['BUILDING_CLASS_AT_PRESENT'].describe()
df['BUILDING_CLASS_AT_TIME_OF_SALE'].describe()
# drop the columns with ~90% missing values
# ADDRESS is 98.5% unique values, so we can drop it
# TAX_CLASS_AT_PRESENT has only two unique values, where 99% of values belong to one category
# BUILDING_CLASS_AT_PRESENT and BUILDING_CLASS_AT_TIME_OF_SALE look redundant, so let's drop one of them
df= df.drop(['EASE-MENT', "APARTMENT_NUMBER", 'ADDRESS','TAX_CLASS_AT_PRESENT','BUILDING_CLASS_AT_PRESENT','TAX_CLASS_AT_TIME_OF_SALE'], axis=1)
# change the format of the date columns
df['SALE_DATE'] = pd.to_datetime(df['SALE_DATE'], infer_datetime_format=True)
df['YEAR_BUILT'] = pd.to_datetime(df['YEAR_BUILT'], infer_datetime_format=True)
#df= df[df['BUILDING_CLASS_CATEGORY']=='ONE FAMILY DWELLINGS']
#df = df.query('BUILDING_CLASS_CATEGORY == "ONE FAMILY DWELLINGS"')
#df.shape
mask = df['BUILDING_CLASS_CATEGORY'].str.contains('ONE FAMILY DWELLINGS')
df = df.drop(['BUILDING_CLASS_CATEGORY'],axis=1)
df=df[mask]
df.shape
```
### Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.
```
# change the display format: comma separator for thousands, zero decimals
#pd.options.display.float_format = '{:,.0f}'.format
# create a subset by sale price
df= df.query('SALE_PRICE >= 100000 & SALE_PRICE <= 2000000')
df.isna().sum()
# A model can learn to behave very well on its training data but fail miserably on new samples.
# To avoid overfitting, split the data into train and test sets: build the model on the train data and evaluate it on the test data.
# let's check SALE_DATE (already converted to datetime) and look at the date range
df['SALE_DATE'].dt.month.value_counts()
df['LAND_SQUARE_FEET']= df['LAND_SQUARE_FEET'].str.replace(',', '').astype(float)
cutoff = pd.to_datetime('2019-04-01')
train= df[df['SALE_DATE'] < cutoff]
test = df[df['SALE_DATE'] >= cutoff]
train.shape , test.shape
```
### Simple Baseline Model
```
train['SALE_PRICE'].mean()
```
A baseline for regression can be the mean of the training labels.
The baseline prediction for the sale price is \$623,855.
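A baseline like this can be scored with the same metrics as any model. A small sketch with toy numbers (not the notebook's data) showing the baseline's mean absolute error:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

# Toy sale prices for train and test (illustrative values only)
y_train = np.array([300_000, 450_000, 500_000, 650_000])
y_test = np.array([400_000, 700_000])

# Baseline: always predict the mean of the training labels
baseline = y_train.mean()                          # 475_000 here
y_pred = np.full_like(y_test, baseline, dtype=float)
baseline_mae = mean_absolute_error(y_test, y_pred)
```

Any fitted model should beat this baseline MAE; otherwise the features are adding nothing.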
```
# mask = ((df['BUILDING_CLASS_CATEGORY']==' 01 ONE FAMILY DWELLING') & ((df['SALE_PRICE'] > 100000) & (df['SALE_PRICE']< 2000000)))
#df= df[mask]
```
### Do exploratory visualizations with Seaborn
### GROSS_SQUARE_FEET vs. SALE_PRICE and LAND_SQUARE_FEET vs. SALE_PRICE
```
import plotly.express as px
px.scatter(train, x= 'GROSS_SQUARE_FEET', y= 'SALE_PRICE', trendline= 'ols' , color= 'SALE_PRICE', opacity= 0.5)
train['GROSS_SQUARE_FEET'].describe()
train['LAND_SQUARE_FEET'].describe()
# From the plot we can see that GROSS_SQUARE_FEET = 0 has sale price > 0; this looks like a data entry error or a NaN value, so let's see those data
(train['GROSS_SQUARE_FEET']==0).value_counts()
# let's check the LAND_SQUARE_FEET column (already converted to float above)
(train['LAND_SQUARE_FEET']== 0).value_counts()
px.scatter(train, x= 'LAND_SQUARE_FEET', y= "SALE_PRICE")
#let's drop those values
train= train[train['GROSS_SQUARE_FEET'] != 0]
test= test[test['GROSS_SQUARE_FEET'] != 0]
train.shape , test.shape
# the 75th percentile of GROSS_SQUARE_FEET is 2,575 but the maximum is 7,875,
# so look at the data with GROSS_SQUARE_FEET > 5000
df.query('GROSS_SQUARE_FEET > 5000')
```
### NEIGHBORHOOD vs SALE_PRICE
```
# let's see avg price per neighborhood
train.groupby('NEIGHBORHOOD').SALE_PRICE.mean()
# NEIGHBORHOOD has many unique values, so let's do a value count
train.NEIGHBORHOOD.value_counts()
# let's reduce the cardinality by keeping only the top 10 neighborhoods and grouping the rest as OTHER
top10 = train['NEIGHBORHOOD'].value_counts()[:10].index
# Replace neighborhoods that are not in the top 10 with OTHER
train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD']='OTHER'
test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] ='OTHER'
sns.catplot(x='NEIGHBORHOOD', y='SALE_PRICE', data=train, color='grey', kind='bar', height=6, aspect=2);
plt.xticks(rotation=45);
```
### TOTAL_UNITS, RESIDENTIAL_UNITS, COMMERCIAL_UNITS vs. SALE_PRICE
```
# First, let's verify that RESIDENTIAL_UNITS + COMMERCIAL_UNITS = TOTAL_UNITS
# if it is, we can just discard the others and keep TOTAL_UNITS
(train['RESIDENTIAL_UNITS'] + train['COMMERCIAL_UNITS']).value_counts()
train['TOTAL_UNITS'].value_counts()
# it is the same, so let's keep TOTAL_UNITS and drop the others
train = train.drop(['RESIDENTIAL_UNITS', 'COMMERCIAL_UNITS'], axis=1)
test = test.drop(['RESIDENTIAL_UNITS', 'COMMERCIAL_UNITS'], axis=1)
# Total units is a good indicator of sale price
sns.catplot(x='TOTAL_UNITS', y='SALE_PRICE', data=train, kind='bar', color='grey', height=6, aspect=1.5);
```
### Start simple & fast, with a subset of columns
Just numeric columns with no missing values
```
train_subset = train.select_dtypes('number').dropna(axis= 'columns')
test_subset = test.select_dtypes('number').dropna(axis='columns')
assert all(train_subset.columns == test_subset.columns)
target = 'SALE_PRICE'
features= train_subset.columns.drop(target)
X_train= train_subset[features]
y_train = train_subset[target]
X_test = test_subset[features]
y_test= test_subset[target]
X_train.shape , X_test.shape, y_train.shape
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
model= LinearRegression().fit(X_train , y_train)
y_pred= model.predict(X_test)
# calculate mean absolute error
mae = mean_absolute_error(y_test, y_pred)
print(f'MAE with subset of numeric column only: {mae:.0f}')
# calculation of mean squared error
mse = mean_squared_error(y_test , y_pred)
print(f'MSE with subset of numeric column only: {mse:.0f}')
# calculation of root mean squared error
RMSE = np.sqrt(mse)
print(f'RMSE with subset of numeric column only: {RMSE:.0f}')
# calculation of R-squared
R_square = r2_score(y_test, y_pred)
print(f'R_square with subset of numeric column only: {R_square}')
#y_pred.mean()
def lmodel(X_train, X_test, y_train, y_test):
# Instantiate Linear Regression Mode
model = LinearRegression().fit(X_train, y_train)
# Store metrics
results = {}
# Add model to results
results['model'] = model
# Calculate metrics on training data
# Add R^2
results['train_r_squared'] = model.score(X_train, y_train)
# Predict
y_train_hat = model.predict(X_train)
# MSE
results['train_MSE'] = mean_squared_error(y_train, y_train_hat)
# RMSE
results['train_RMSE'] = np.sqrt(results['train_MSE'])
# MAE
results['train_MAE'] = mean_absolute_error(y_train, y_train_hat)
# Calculate metrics on the test data
# Add R^2
results['test_r_squared'] = model.score(X_test, y_test)
# Predict
y_test_hat = model.predict(X_test)
# MSE
results['test_MSE'] = mean_squared_error(y_test, y_test_hat)
# RMSE
results['test_RMSE'] = np.sqrt(results['test_MSE'])
# MAE
results['test_MAE'] = mean_absolute_error(y_test, y_test_hat)
return results
def print_lm(results):
print("""
----------- Linear Regression Model Results -------------
Training Set
R^2: {:.2f} (Explained variance score: 1 is perfect prediction)
MSE: {:.2f}
RMSE: {:.2f}
MAE: ${:.2f}
Test Set
R^2: {:.2f} (Explained variance score: 1 is perfect prediction)
MSE: {:.2f}
RMSE: {:.2f}
MAE: ${:.2f}
""".format(results['train_r_squared'], results['train_MSE'], results['train_RMSE'], results['train_MAE'], results['test_r_squared'], results['test_MSE'], results['test_RMSE'], results['test_MAE']))
results= lmodel(X_train, X_test, y_train, y_test)
print_lm(results)
```
## Complex Linear Regression
#### 1. Model with one-hot encoding
### Do one-hot encoding of categorical features
```
# look the categorical features in data
train.describe(exclude='number').T.sort_values(by='unique')
import category_encoders as ce
# Select features and target
target = 'SALE_PRICE'
cat_features = ['NEIGHBORHOOD', 'BUILDING_CLASS_AT_TIME_OF_SALE']
# numeric features, excluding the target itself to avoid leakage
numeric_features = train.select_dtypes('number').columns.drop(target).tolist()
combined_features = cat_features + numeric_features
# split features and target into X, y for the train and test sets
X_train = train[combined_features]
X_test = test[combined_features]
y_train = train[target]
y_test = test[target]
assert all (X_train.columns == X_test.columns)
# do one hot encoding
encoder = ce.OneHotEncoder(use_cat_names= True)
X_train_encoded = encoder.fit_transform(X_train)
X_test_encoded = encoder.transform(X_test)
# assert columns are same
assert X_train_encoded.shape[1] == X_test_encoded.shape[1]
results = lmodel(X_train_encoded, X_test_encoded, y_train, y_test)
print_lm(results)
X_train_encoded.shape
print('Intercept: ', results['model'].intercept_)
print(pd.Series(results['model'].coef_, X_train_encoded.columns.tolist()).to_string())
```
## Model with SelectKBest
```
# Select the 10 best features
selector = SelectKBest(score_func=f_regression, k = 10)
X_train_selected = selector.fit_transform(X_train_encoded, y_train)
X_test_selected = selector.transform(X_test_encoded)
assert X_train_selected.shape[1] == X_test_selected.shape[1]
X_train_selected.shape, X_test_selected.shape
results= lmodel(X_train_selected, X_test_selected, y_train, y_test)
print_lm(results)
def find_k_best_features(X_train, X_test, y_train, y_test):
for k in range(1, len(X_train.columns)+1):
print(f'{k} features')
selector = SelectKBest(score_func=f_regression, k=k)
X_train_selected = selector.fit_transform(X_train, y_train)
X_test_selected = selector.transform(X_test)
model = LinearRegression()
model.fit(X_train_selected, y_train)
y_pred = model.predict(X_test_selected)
mae = mean_absolute_error(y_test, y_pred)
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
r_square = model.score(X_test_selected, y_test)
print('''
R^2: {:.2f}
MSE: {:.2f}
RMSE: {:.2f}
MAE: ${:.2f}
'''.format(r_square, mse, rmse, mae))
find_k_best_features(X_train_encoded, X_test_encoded, y_train, y_test)
# calculate R-squared and adjusted R-squared
import statsmodels.api as sm
X1 = sm.add_constant(X_train_selected)
result = sm.OLS(y_train, X1).fit()
print(result.rsquared, result.rsquared_adj)
```
### ZIP_CODE vs. SALE_PRICE
```
train['ZIP_CODE'].describe()
train['ZIP_CODE'].value_counts()
sns.lmplot(x='ZIP_CODE', y= 'SALE_PRICE', data= train, scatter_kws= dict(alpha=0.05));
```
### YEAR_BUILT vs. SALE_PRICE
```
train['YEAR_BUILT'].describe()
train['YEAR_BUILT'].value_counts()
train['SALE_DATE'].value_counts()
```
```
import numpy as np
import pandas as pd
import xarray as xr
import zarr
import math
import glob
import pickle
import statistics
import scipy.stats as stats
from sklearn.neighbors import KernelDensity
import dask
import seaborn as sns
import matplotlib.pyplot as plt
def getrange(numbers):
    return max(numbers) - min(numbers)
def get_files():
models = glob.glob("/terra/data/cmip5/global/historical/*")
avail={}
for model in models:
zg = glob.glob(str(model)+"/r1i1p1/day/native/zg*")
        try:
            zg[0]  # raises IndexError if no zg files were found
            avail[model.split('/')[-1]] = zg
        except IndexError:
            pass
return avail
files = get_files()
files['NOAA'] = glob.glob("/terra/data/reanalysis/global/reanalysis/NOAA/20thC/r1/day/native/z_day*")
files['ERA5'] = glob.glob("/terra/data/reanalysis/global/reanalysis/ECMWF/ERA5/6hr/native/zg*")
results={}
for model in files.keys():
print(model)
x = xr.open_mfdataset(files[model])
if model == 'NOAA':
x = x.rename({'hgt':'zg'})
x = x.rename({'level':'plev'})
x = x.sel(plev=850)
x = x.sel(time=slice('1950','2005'))
elif model == 'ERA5':
x = x.rename({'latitude':'lat'})
x = x.rename({'longitude':'lon'})
x = x.rename({'level':'plev'})
x = x.sel(plev=850)
x = x.sel(time=slice('1979','2005'))
else:
x = x.sel(plev=85000)
x = x.sel(time=slice('1950','2005'))
x = x.load()
if model == 'ERA5':
x = x.sel(lat=slice(0,-60))
else:
x = x.sel(lat=slice(-60,0))
x = x[['zg']]
x = x.assign_coords(lon=(((x.lon + 180) % 360) - 180))
with dask.config.set(**{'array.slicing.split_large_chunks': True}):
x = x.sortby(x.lon)
x = x.sel(lon=slice(-50,20))
x = x.resample(time="QS-DEC").mean(dim="time",skipna=True)
x = x.load()
x['maxi']=x.zg
for i in range(len(x.time)):
x.maxi[i] = x.zg[i].where((x.zg[i]==np.max(x.zg[i])))
east=[]
south=[]
pres=[]
for i in range(len(x.time)):
ids = np.argwhere(~np.isnan(x.maxi[i].values))
latsid = [item[0] for item in ids]
lonsid = [item[1] for item in ids]
east.append(x.lon.values[np.max(lonsid)])
south.append(x.lat.values[np.max(latsid)])
pres.append(x.maxi.values[i][np.max(latsid)][np.max(lonsid)])
results[model]=pd.DataFrame(np.array([x.time.values,east,south,pres]).T,columns=['time','east','south','pres'])
x.close()
for model in results:
l = len(results[model])
bottom = results[model].south.mean() - 3*(results[model].south.std())
top = results[model].south.mean() + 3*(results[model].south.std())
bottom_e = results[model].east.mean() - 3*(results[model].east.std())
top_e = results[model].east.mean() + 3*(results[model].east.std())
results[model] = results[model].where((results[model].south > bottom) & (results[model].south<top))
results[model] = results[model].where((results[model].east > bottom_e) & (results[model].east < top_e)).dropna()
print(model,l-len(results[model]))
results.pop('MIROC-ESM') #no variability
scores = pd.DataFrame([],columns=['Model','Meridional','Zonal','Pressure'])
i = 1000
for model in results:
#longitude
x = np.linspace(min([np.min(results[key].east) for key in results]) , max([np.max(results[key].east) for key in results]) , int(i) )
bw = 1.059*np.min([np.std(results['NOAA'].east.values),stats.iqr(results['NOAA'].east.values)/1.34])*216**(-1/5.)
kde = KernelDensity(kernel='gaussian', bandwidth=bw).fit(np.array(results['NOAA'].east.values)[:, np.newaxis]) # replicates sns
ref = np.exp(kde.score_samples(x[:, np.newaxis]))
#
bw = 1.059*np.min([np.std(results[model].east.values),stats.iqr(results[model].east.values)/1.34])*216**(-1/5.)
kde = KernelDensity(kernel='gaussian', bandwidth=bw).fit(np.array(results[model].east.values)[:, np.newaxis]) # replicates sns
cmip = np.exp(kde.score_samples(x[:, np.newaxis]))
#
score = []
scale = getrange(x)/i
for j in range(len(ref)):
score.append(abs(ref[j]-cmip[j])*scale)
meridional = np.sum(score)
#latitude
x = np.linspace(min([np.min(results[key].south) for key in results]) , max([np.max(results[key].south) for key in results]) , int(i) )
bw = 1.059*np.min([np.std(results['NOAA'].south.values),stats.iqr(results['NOAA'].south.values)/1.34])*216**(-1/5.)
kde = KernelDensity(kernel='gaussian', bandwidth=bw).fit(np.array(results['NOAA'].south.values)[:, np.newaxis]) # replicates sns
ref = np.exp(kde.score_samples(x[:, np.newaxis]))
#
bw = 1.059*np.min([np.std(results[model].south.values),stats.iqr(results[model].south.values)/1.34])*216**(-1/5.)
kde = KernelDensity(kernel='gaussian', bandwidth=bw).fit(np.array(results[model].south.values)[:, np.newaxis]) # replicates sns
cmip = np.exp(kde.score_samples(x[:, np.newaxis]))
#
score = []
scale = getrange(x)/i
for j in range(len(ref)):
score.append(abs(ref[j]-cmip[j])*scale)
zonal = np.sum(score)
#pressure
x = np.linspace(min([np.min(results[key].pres) for key in results]) , max([np.max(results[key].pres) for key in results]) , int(i) )
bw = 1.059*np.min([np.std(results['NOAA'].pres.values),stats.iqr(results['NOAA'].pres.values)/1.34])*216**(-1/5.)
kde = KernelDensity(kernel='gaussian', bandwidth=bw).fit(np.array(results['NOAA'].pres.values)[:, np.newaxis]) # replicates sns
ref = np.exp(kde.score_samples(x[:, np.newaxis]))
#
bw = 1.059*np.min([np.std(results[model].pres.values),stats.iqr(results[model].pres.values)/1.34])*216**(-1/5.)
kde = KernelDensity(kernel='gaussian', bandwidth=bw).fit(np.array(results[model].pres.values)[:, np.newaxis]) # replicates sns
cmip = np.exp(kde.score_samples(x[:, np.newaxis]))
#
score = []
scale = getrange(x)/i
for j in range(len(ref)):
score.append(abs(ref[j]-cmip[j])*scale)
pres = np.sum(score)
scores.loc[len(scores)] = [model,meridional,zonal,pres]
inttype = type(results['NOAA'].time[1])
for index in results:
if isinstance(results[index].time[1], inttype):
results[index].time = pd.to_datetime(results[index].time)
for index in results:
results[index].east = pd.to_numeric(results[index].east)
results[index].south = pd.to_numeric(results[index].south)
results[index].pres = pd.to_numeric(results[index].pres)
pickle.dump( scores, open( "../HIGH_OUT/scores_1D.p", "wb" ) )
pickle.dump( results, open( "../HIGH_OUT/tracker_1D.p", "wb" ) )
out = pickle.load( open( "../HIGH_OUT/tracker_1D.p", "rb" ) )
#for index in out:
for index in ['ERA5']:
if index == 'NOAA':
pass
else:
df = out['NOAA']
df['model'] = 'NOAA'
df2 = out[index]
df2['model'] = str(index)
        df = pd.concat([df, df2])  # DataFrame.append was removed in pandas 2.x
g = sns.jointplot(data= df,x='east',y = 'south', hue="model",kind="kde",fill=True, palette=["blue","red"],joint_kws={'alpha': 0.6} )
#g.plot_joint(sns.scatterplot, s=30, alpha=.5)
g.ax_joint.set_xlabel('Longitude')
g.ax_joint.set_ylabel('Latitude')
plt.savefig('../HIGH_OUT/jointplots/jointplot_'+str(index)+'.png',dpi=100)
plt.savefig('../HIGH_OUT/jointplots/jointplot_'+str(index)+'.pdf')
#plt.close()
plt.show()
NOAA = out['NOAA']
seasons =[]
for i in range(len(NOAA.time)):
if NOAA.iloc[i].time.month == 12:
seasons.append('Summer')
elif NOAA.iloc[i].time.month == 3:
seasons.append('Autumn')
elif NOAA.iloc[i].time.month == 6:
seasons.append('Winter')
else:
seasons.append('Spring')
NOAA['Season'] = seasons
NOAA
df = NOAA
g = sns.jointplot(data= df,x='east',y = 'south',hue='Season',kind="kde",fill=True, palette=['r','y','b','g'],joint_kws={'alpha': 0.35})
g.ax_joint.set_xlabel('Longitude')
g.ax_joint.set_ylabel('Latitude')
#plt.savefig('../HIGH_OUT/NOAA_seasonality_jointplot.png',dpi=1000)
plt.savefig('../HIGH_OUT/NOAA_seasonality_jointplot.pdf')
f = open("../HIGH_OUT/out_dict.txt", "w")  # plain-text copy, since pickles written in IPython can't always be read by plain .py scripts
f.write( str(out) )
f.close()
results_df = pd.DataFrame([],columns=["model", "Mean Latitude" ,"Latitude Difference","Latitude std.","Latitude Range", "Mean Longitude" ,"Longitude Difference" ,"longitude std.", "Longitude Range"])
for index in out:
results_df.loc[len(results_df)] = [index,round(np.mean(out[index].south),2),round(np.mean(out[index].south-np.mean(out['NOAA'].south)),2), round(np.std(out[index].south),2),round(getrange(out[index].south),2),round(np.mean(out[index].east),2),round(np.mean(out[index].east-np.mean(out['NOAA'].east)),2),round(np.std(out[index].east),2),round(getrange(out[index].east),2)]
results_df.to_csv('../HIGH_OUT/results_table.csv',float_format='%.2f')
fig = sns.kdeplot(data=NOAA,y='pres',hue='Season',fill=True,alpha=0.35, palette=['r','y','b','g'])
plt.ylabel('Pressure (gpm)')
plt.savefig('../HIGH_OUT/NOAA_seasonality_pressure.png',dpi=1000)
plt.savefig('../HIGH_OUT/NOAA_seasonality_pressure.pdf')
results_df=pd.DataFrame([],columns=('Model','Mean','Difference', 'Std.','Range','Mean', 'Difference', 'Std.','Range','Mean','Difference', 'Std.','Range'))
for index in out.keys():
results_df.loc[len(results_df)] = [index,round(np.mean(out[index].south),2),round(np.mean(out[index].south-np.mean(out['NOAA'].south)),2), round(np.std(out[index].south),2),round(getrange(out[index].south),2),round(np.mean(out[index].east),2),round(np.mean(out[index].east-np.mean(out['NOAA'].east)),2),round(np.std(out[index].east),2),round(getrange(out[index].east),2),round(np.mean(out[index].pres),2),round(np.mean(out[index].pres-np.mean(out['NOAA'].pres)),2),round(np.std(out[index].pres),2),round(getrange(out[index].pres),2)]
results_df.head()
results_df.to_csv('../HIGH_OUT/results_table_1D.csv')
```
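Each score in the loop above is the integrated absolute difference between two Gaussian KDEs, with a robust rule-of-thumb bandwidth. Factored into helpers it reads as follows (the function names are ours; note the loop hard-codes n = 216 in the bandwidth, which `len(values)` generalizes):

```python
import numpy as np
import scipy.stats as stats
from sklearn.neighbors import KernelDensity

def silverman_bandwidth(values):
    """Robust rule-of-thumb bandwidth: 1.059 * min(std, IQR/1.34) * n^(-1/5)."""
    n = len(values)
    return 1.059 * min(np.std(values), stats.iqr(values) / 1.34) * n ** (-1 / 5)

def kde_distance(ref_values, cmip_values, grid):
    """Integrated absolute difference between two Gaussian KDEs on a common grid."""
    densities = []
    for values in (ref_values, cmip_values):
        kde = KernelDensity(kernel="gaussian",
                            bandwidth=silverman_bandwidth(values))
        kde.fit(np.asarray(values)[:, np.newaxis])
        densities.append(np.exp(kde.score_samples(grid[:, np.newaxis])))
    # Riemann-sum approximation of the integral over the grid
    scale = (grid.max() - grid.min()) / len(grid)
    return np.sum(np.abs(densities[0] - densities[1]) * scale)
```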
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import seaborn as sns
import itertools
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import learning_curve
from sklearn.model_selection import validation_curve
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
%matplotlib inline
wine = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data", header = None)
wine.head()
names = """Class,Alcohol,Malic acid,Ash,Alcalinity of ash,Magnesium,
Total phenols,Flavanoids,Nonflavanoid phenols,Proanthocyanins,
Color intensity,Hue,OD280/OD315 of diluted wines,Proline""".replace("\n", "").split(",")
wine.columns = names
wine.head()
wine.info()
X, y = wine.iloc[:, 1:].values, wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0)
print("Train X dimension: %s, Test X dimension: %s" % (X_train.shape, X_test.shape))
le = LabelEncoder()
y = le.fit_transform(y)  # note: y is re-encoded after the split, so y_train and y_test keep the original 1-3 labels
pipeline = Pipeline(
[("scl", StandardScaler()),
("pca", PCA(n_components = 2)),
("clf", LogisticRegression(random_state = 0, penalty = "l2"))])
pipeline.fit(X_train, y_train)
pipeline.score(X_test, y_test)
```
# K-Fold Cross Validation
```
kfold = StratifiedKFold(n_splits = 10, shuffle = True, random_state = 0)
scores = cross_val_score(estimator = pipeline, X = X_train, y = y_train, cv = 10, n_jobs = -1)
# n_jobs = -1: use all CPU cores for parallel computing
print("CV accuracy scores: ", scores)
print("Mean CV accuracy: %.3f, std: %.3f" % (np.mean(scores), np.std(scores)))
plt.plot(scores)
plt.ylim(0.9, 1.01)
plt.xlabel("Iteration")
plt.ylabel("Accuracy Score")
train_sizes, train_scores, test_scores = learning_curve(estimator = pipeline, X = X_train, y = y_train,
train_sizes = np.linspace(0.1, 1.0, 10),
cv = 10,
n_jobs = -1)
# learning_curve internally uses stratified_kfold
train_mean = np.mean(train_scores, axis = 1)
test_mean = np.mean(test_scores, axis = 1)
train_std = np.std(train_scores, axis = 1)
test_std = np.std(test_scores, axis = 1)
plt.figure(figsize = (15, 10))
plt.plot(train_sizes, train_mean, color = "b", marker = "o", markersize = 5, label = "training accuracy scores")
plt.fill_between(train_sizes, train_mean + train_std, train_mean - train_std, alpha = 0.4)
plt.plot(train_sizes, test_mean, color = "g", ls = "--", marker = "s", markersize = 5, label = "validation accuracy")
plt.fill_between(train_sizes, test_mean + test_std, test_mean - test_std, alpha = 0.15, color = "g")
plt.legend(loc = "lower right")
plt.xlabel("Number of training samples")
plt.ylabel("Accuracy scores")
```
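The fold-by-fold work that `cross_val_score` hides can be written out explicitly. A sketch with the modern `sklearn.model_selection` API, loading the same wine data from `sklearn.datasets` rather than the UCI URL:

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# cross_val_score(model, X, y, cv=10) hides exactly this loop:
scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=10).split(X, y):
    model.fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))
print("Mean CV accuracy: %.3f, std: %.3f" % (np.mean(scores), np.std(scores)))
```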
# Validation Curve
```
np.set_printoptions(precision = 4, suppress= True)
param_range = 10.0 ** np.arange(-3, 4)
print("param_range", param_range)
train_scores, test_scores = validation_curve(estimator = pipeline,
X = X_train,
y = y_train,
param_name = "clf__C",
param_range = param_range,
cv = 10)
# validation_curve internally uses stratified_kfold
train_mean = np.mean(train_scores, axis = 1)
test_mean = np.mean(test_scores, axis = 1)
train_std = np.std(train_scores, axis = 1)
test_std = np.std(test_scores, axis = 1)
plt.figure(figsize = (15, 5))
plt.plot(param_range, train_mean, color = "b", marker = "o", markersize = 5, label = "training accuracy scores")
plt.fill_between(param_range, train_mean + train_std, train_mean - train_std, alpha = 0.4)
plt.plot(param_range, test_mean, color = "g", ls = "--", marker = "s", markersize = 5, label = "validation accuracy")
plt.fill_between(param_range, test_mean + test_std, test_mean - test_std, alpha = 0.15, color = "g")
plt.legend(loc = "lower right")
plt.xlabel("Complexity Parameter - C")
plt.ylabel("Accuracy scores")
plt.xscale("log")
plt.ylim(0.8, 1.02)
```
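Reading the validation curve amounts to picking the `C` with the best mean validation score. A toy sketch with made-up scores (the `test_mean` values below are illustrative, not computed from this dataset):

```python
import numpy as np

# Illustrative validation-curve output: mean CV accuracy per candidate C.
param_range = 10.0 ** np.arange(-3, 4)
test_mean = np.array([0.40, 0.70, 0.95, 0.98, 0.97, 0.95, 0.94])

# The curve is read by taking the C with the best validation score:
best_C = param_range[np.argmax(test_mean)]
print(best_C)  # prints 1.0
```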
# Hyperparameter Tuning using Grid Search
```
pipe_svc = Pipeline([
("scl", StandardScaler()),
("clf", SVC(random_state = 0))
])
param_range = 10.0 ** np.arange(-4, 4)
param_range
param_grid = [
{"clf__C": param_range, "clf__kernel": ["linear"]},
{"clf__C": param_range, "clf__gamma": param_range, "clf__kernel": ["rbf"]}
]
gs = GridSearchCV(estimator = pipe_svc,
param_grid = param_grid,
scoring = "accuracy",
cv = 10,
n_jobs = -1)
gs.fit(X_train, y_train)
print("Best params: %s Best score: %.4f" % (gs.best_params_, gs.best_score_))
# Applying Grid Search on Decision Tree
tree = DecisionTreeClassifier(random_state = 0)
param_grid = [
{"max_depth": [1, 2, 3, 4, 5, 6, 7, None]}
]
gs_tree = GridSearchCV(estimator = tree, param_grid = param_grid, scoring = "accuracy", cv = 5)
gs_tree.fit(X_train, y_train)
print("Best param: %s, best score: %s" % (gs_tree.best_params_, gs_tree.best_score_))
```
**Krupali Mehta (CE076)**
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score
from subprocess import call
import sklearn.metrics as metrics
data = datasets.load_wine()
dataset = pd.DataFrame(data.data, columns = data.feature_names)
print(f'Examples : {dataset.shape[0]} and Features : {dataset.shape[1]}')
print("features :- ",data.feature_names)
print("Labels :- ",data.target_names)
x_train, x_test, y_train, y_test = train_test_split(data.data,data.target,test_size=0.20,random_state = 76)
dtc = DecisionTreeClassifier(criterion = 'entropy', max_leaf_nodes = 10)
dtc.fit(x_train, y_train)
#Testing
y_pred = dtc.predict(x_test)
accuracy = accuracy_score(y_test, y_pred)
print('Accuracy Score : ', accuracy)
conf_matrix = confusion_matrix (y_test, y_pred)
print('\nConfusion matrix : \n', conf_matrix)
precision = precision_score(y_test, y_pred, average = None)
print('\nPrecision Score : ', precision)
recall = recall_score(y_test, y_pred, average = None)
print('\nRecall Score : ', recall)
export_graphviz(dtc, out_file = 'wine_tree.dot',
feature_names = list(data.feature_names),
class_names = list(data.target_names),
filled = True)
# Convert to png
call(['dot', '-Tpng', 'wine_tree.dot', '-o', 'wine_tree.png', '-Gdpi=600'])
plt.figure(figsize = (15, 20))
plt.imshow(plt.imread('wine_tree.png'))
plt.axis('off')
plt.show()
#Task 2: Apply algorithm on digits dataset - LabelEncoding of features: and Train test Division 80%-20%
digits = datasets.load_digits()
print(digits)
print("\n================================\n")
print(digits.data.shape)
print(digits.target.shape)
x = digits.data
y = digits.target
#splitting data into train and test sets
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.20, random_state = 76)
#creating a decision tree classifier using gini
classifier = DecisionTreeClassifier(criterion = 'gini', random_state = 76, max_depth = 7, min_samples_leaf = 26)
classifier.fit(x_train, y_train)
#predicting classes of test data
y_pred = classifier.predict(x_test)
print('Predicted values : \n')
print(y_pred)
#model accuracy
print('confusion matrix : \n',metrics.confusion_matrix(y_test, y_pred))
print('\nAccuracy : ',metrics.accuracy_score(y_test, y_pred))
print('\nReport : ',metrics.classification_report(y_test, y_pred))
display = metrics.ConfusionMatrixDisplay.from_estimator(classifier, x_test, y_test)  # plot_confusion_matrix was removed in scikit-learn 1.2
display.figure_.suptitle('Confusion Matrix')
print(f'Confusion Matrix : \n{display.confusion_matrix}')
plt.show()
```
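The per-class precision and recall reported above come straight from the confusion matrix. A sketch with a hypothetical 3-class matrix (rows are true labels, columns are predictions, following scikit-learn's convention):

```python
import numpy as np

def per_class_precision_recall(conf_matrix):
    """For class i: precision = M[i,i] / sum of column i,
    recall = M[i,i] / sum of row i."""
    m = np.asarray(conf_matrix, dtype=float)
    diag = np.diag(m)
    precision = diag / m.sum(axis=0)
    recall = diag / m.sum(axis=1)
    return precision, recall

# A hypothetical 3-class matrix like the wine example above:
m = np.array([[12, 1, 0],
              [0, 14, 1],
              [0, 0, 8]])
precision, recall = per_class_precision_recall(m)
print(precision.round(2), recall.round(2))
```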
# Advanced lane lines
## Camera calibration
```
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import pickle
import os
```
### Compute distortion correction coefficients and save them for later use
```
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(8,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('camera_cal/calibration*.jpg')
# Step through the list and search for chessboard corners
for fname in images:
img = cv2.imread(fname)
img_size = (img.shape[1], img.shape[0])
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (9,6),None)
# If found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
# Perform calibration
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size, None, None)
print('calibration data available')
# Let's save the distortion correction coefficients
dist_pickle = {}
dist_pickle["mtx"] = mtx
dist_pickle["dist"] = dist
pickle.dump( dist_pickle, open( "camera_cal/wide_dist_pickle.p", "wb" ) )
```
### Let's see an example of distortion correction
```
img = cv2.imread('camera_cal/calibration1.jpg')
undist = cv2.undistort(img, mtx, dist, None, mtx)
%matplotlib inline
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=30)
ax2.imshow(undist)
ax2.set_title('Undistorted Image', fontsize=30)
plt.savefig('camera_cal/report_example.jpg', dpi=500, bbox_inches='tight')
```
That's it! The camera now looks properly calibrated, so we can continue.
## Image pipeline
#### Load pickled distortion correction information
```
import pickle
if 'mtx' in globals() and 'dist' in globals(): # Check if we need to load calibration data from the pickled file
print('Data already available')
else:
dist_pickle = pickle.load(open("camera_cal/wide_dist_pickle.p", "rb"))
mtx = dist_pickle['mtx']
dist = dist_pickle['dist']
print('Data loaded')
def test_image_pipeline(full=True, gray=False, save=False):
test_images = glob.glob('test_images/*.jpg')
for img_name in test_images:
img = plt.imread(img_name)
undist = image_pipeline(img)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=30)
if gray is False:
ax2.imshow(undist)
else:
ax2.imshow(undist, cmap='gray')
ax2.set_title('Pipeline Image', fontsize=30)
if save is not False:
plt.savefig(os.path.join(img_name.split('\\')[0], save , img_name.split('\\')[-1]), dpi=500, bbox_inches='tight')
if full is False:
break
```
#### First step of the pipeline: undistort the images
```
# Pipeline implementation at this point in time
def image_pipeline(img):
    # Undistort image
undist = cv2.undistort(img, mtx, dist, None, mtx)
return undist
# Let's have a look
test_image_pipeline(True, False, "cali_out")
```
#### Now let's progressively implement the image pipeline
```
def image_pipeline(img, s_thresh=(150, 255), sx_thresh=(35, 100)):
""" This pipeline uses exactly the same principle as the one seen in class
1- undistort image
2- convert to HLS color space
3- apply x gradient using Sobel and apply threshold
4- apply threshold on the S channel
5- combine all conditions and stack the channels into a single image
"""
img = np.copy(img)
    # Undistort image
undist = cv2.undistort(img, mtx, dist, None, mtx)
    # Convert to HLS color space and separate the L and S channels
    hls = cv2.cvtColor(undist, cv2.COLOR_RGB2HLS).astype(float)  # np.float was removed in NumPy 1.24
l_channel = hls[:,:,1]
s_channel = hls[:,:,2]
# Sobel x
sobelx = cv2.Sobel(l_channel, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold x gradient
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1
# Threshold color channel
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1
# Stack each channel
color_binary = np.dstack(( np.zeros_like(sxbinary), sxbinary, s_binary)) * 255
combined_binary = np.zeros_like(sxbinary)
combined_binary[(s_binary == 1) | (sxbinary == 1)] = 1
return combined_binary
test_image_pipeline(True, True, False)
```
Now that the lane pixels have been identified, it's time to perform a perspective transform to get a bird's-eye view of the lane markings in front of the vehicle.
#### Perspective transform
```
# Let's select an image where the lanes are straight
img = plt.imread('test_images/straight_lines1.jpg')
img_size = (img.shape[1], img.shape[0])
plt.imshow(img)
plt.show()
# We can define source and destination points for the perspective transform
src = np.float32([
[238, 685], # These points were defined using the matplotlib gui window
[606, 437],
[672, 437],
[1060, 675]
])
src = np.float32([  # refined source points; these override the first definition above
[238, 685], # These points were defined using the matplotlib gui window
[565, 470],
[725, 470],
[1060, 665]
])
dst = np.float32([
[400, img.shape[0]],
[400, 0],
[800, 0],
[800, img.shape[0]]
])
#plt.imshow(img)
#plt.plot(238, 685, 'r.')
#plt.plot(565, 460, 'r.')
#plt.plot(715, 460, 'r.')
#plt.plot(1060, 675, 'r.')
SRC = np.array([[238, 565, 715, 1060], [685, 460, 460, 675], [0, 0, 0, 0]])
# Time to try perspective transform
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)
# Let's warp another image
img2 = plt.imread('test_images/test2.jpg')
warped2 = cv2.warpPerspective(img2, M, img_size, flags=cv2.INTER_LINEAR)
f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,10))
ax1.imshow(img)
ax1.plot([238, 565], [685, 460], 'r-', lw=2)
ax1.plot([565, 715], [460, 460], 'r-', lw=2)
ax1.plot([715, 1060], [460, 675], 'r-', lw=2)
ax1.plot([1060, 238], [675, 685], 'r-', lw=2)
ax1.set_title('Original Image', fontsize=30)
ax2.imshow(warped)
ax2.plot([400, 400], [img.shape[0], 0], 'r-', lw=2)
ax2.plot([400, 800], [0, 0], 'r-', lw=2)
ax2.plot([800, 800], [0, img.shape[0]], 'r-', lw=2)
ax2.plot([800, 400], [img.shape[0], img.shape[0]], 'r-', lw=2)
ax2.set_title('Warped Straight 1', fontsize=30)
ax3.imshow(warped2)
ax3.set_title('Warped Test 2', fontsize=30)
plt.savefig(r'report_data/warp.jpg', dpi=500, bbox_inches='tight')
```
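`cv2.getPerspectiveTransform` solves a small linear system for the 3×3 homography. A NumPy-only sketch of that computation, plus the inverse matrix that will be needed later to draw the detected lane back onto the road image (`perspective_matrix` is our name, and the `dst` y-extent of 720 assumes the 1280×720 test images):

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve the 8x8 linear system for the 3x3 homography H that maps
    src -> dst, which is what cv2.getPerspectiveTransform computes for
    exactly four point pairs."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

# The source/destination points used in the warp above
src = [(238, 685), (606, 437), (672, 437), (1060, 675)]
dst = [(400, 720), (400, 0), (800, 0), (800, 720)]
M = perspective_matrix(src, dst)
# The inverse warp, needed later to project the detected lane back onto
# the road image, is simply the matrix inverse:
Minv = np.linalg.inv(M)
```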
#### Let's build some lib functions for the pipeline
```
def undist_image(img):
dist_pickle = pickle.load(open("camera_cal/wide_dist_pickle.p", "rb"))
mtx = dist_pickle['mtx']
dist = dist_pickle['dist']
return cv2.undistort(img, mtx, dist, None, mtx)
def warp_image(img):
src = np.float32([ [238, 685], [606, 437], [672, 437], [1060, 675] ])
dst = np.float32([ [400, img.shape[0]], [400, 0], [800, 0], [800, img.shape[0]] ])
M = cv2.getPerspectiveTransform(src, dst)
return cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]), flags=cv2.INTER_LINEAR)
```
#### Here the latest version of the pipeline
```
def image_pipeline(img, s_thresh=(120, 240), sx_thresh=(50, 120)):
""" This pipeline uses exactly the same principle as the one seen in class
1- undistort image
2- convert to HLS color space
3- apply x gradient using Sobel and apply threshold
4- apply threshold on the S channel
5- combine all conditions and stack the channels into a single image
6- warp the image
"""
img = np.copy(img)
    # Undistort image
undist = undist_image(img)
    # Convert to HLS color space and separate the H, L and S channels
    hls = cv2.cvtColor(undist, cv2.COLOR_RGB2HLS).astype(float)
    h_channel = hls[:,:,0]  # was hls[:,:0], which silently dropped the channel index
l_channel = hls[:,:,1]
s_channel = hls[:,:,2]
# Sobel x
sobelx = cv2.Sobel(l_channel, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold x gradient
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1
# Threshold color channel
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1
# Threshold hue channel
h_binary = np.zeros_like(h_channel)
h_binary[(h_channel >= 100) & (h_channel <= 200)] = 1
# Stack each channel
color_binary = np.dstack(( np.zeros_like(sxbinary), sxbinary, s_binary)) * 255
combined_binary = np.zeros_like(sxbinary)
combined_binary[(s_binary == 1) | (sxbinary == 1)] = 1
#combined_binary[(sxbinary == 1)] = 1
warped = warp_image(combined_binary)
return warped
test_image_pipeline(False, True, False)
```
#### Let's fit a polynomial using the sliding window method from the class
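Before the full implementation, the histogram-peak idea at its core can be isolated in a tiny helper (a sketch; the function is ours, and unlike the code below it does not hard-code the 400/800 base columns):

```python
import numpy as np

def lane_base_columns(binary_warped):
    """Column sums over the lower half of the warped binary image; the two
    histogram peaks (one per half) give the starting x for each lane line."""
    histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    leftx_base = np.argmax(histogram[:midpoint])
    rightx_base = np.argmax(histogram[midpoint:]) + midpoint
    return leftx_base, rightx_base
```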
```
def fitPolynom(binary_warped):
""" Taken from chapter 33 of the class
"""
# Assuming you have created a warped binary image called "binary_warped"
    # Take a histogram of the bottom two thirds of the image
    sliced = int(binary_warped.shape[0]/3)
    histogram = np.sum(binary_warped[sliced:,:], axis=0)
# Create an output image to draw on and visualize the result
out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
    midpoint = int(histogram.shape[0]/2)  # np.int was removed in NumPy 1.24
    leftx_base = 400 #np.argmax(histogram[:midpoint])
    rightx_base = 800 #np.argmax(histogram[midpoint:]) + midpoint
# Choose the number of sliding windows
nwindows = 9
# Set height of windows
    window_height = int(binary_warped.shape[0]/nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated for each window
leftx_current = leftx_base
rightx_current = rightx_base
# Set the width of the windows +/- margin
margin = 110
# Set minimum number of pixels found to recenter window
minpix = 40
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = binary_warped.shape[0] - (window+1)*window_height
win_y_high = binary_warped.shape[0] - window*window_height
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),(win_xleft_high,win_y_high),
(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),(win_xright_high,win_y_high),
(0,255,0), 2)
# Identify the nonzero pixels in x and y within the window
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If you found > minpix pixels, recenter next window on their mean position
        if len(good_left_inds) > minpix:
            leftx_current = int(np.mean(nonzerox[good_left_inds]))  # np.int was removed in NumPy 1.24
        if len(good_right_inds) > minpix:
            rightx_current = int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
# Fit a second order polynomial to each
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# Plot the result
ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]
out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]
# Compute curvature in meters
ym_per_pix = 30/720 # meters per pixel in y dimension
xm_per_pix = 3.7/346 # meters per pixel in x dimension
y_eval = np.max(ploty)
# Fit new polynomials to x,y in world space
left_fit_cr = np.polyfit(lefty*ym_per_pix, leftx*xm_per_pix, 2)
right_fit_cr = np.polyfit(righty*ym_per_pix, rightx*xm_per_pix, 2)
# Calculate the new radii of curvature
left_curverad = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
# Now our radius of curvature is in meters
print(left_curverad, 'm', right_curverad, 'm')
# Compute the position of the car in the lane
# for this purpose we compare the position of the detected mid lane with the center of the image
center = 0.5*binary_warped.shape[1] # Center of the image
midlane = left_fitx[0] + 0.5*(right_fitx[0]-left_fitx[0]) # Lane center based on the estimated lanes
carpos = (center - midlane)*xm_per_pix # Position of the car, >0 to the left.
print(carpos)
return out_img, left_fitx, right_fitx
img = plt.imread('test_images/straight_lines1.jpg')
warped_image = image_pipeline(img)
out, leftline, rightline = fitPolynom(warped_image)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=30)
ax2.imshow(out)
ploty = np.linspace(0, warped_image.shape[0]-1, warped_image.shape[0] )
ax2.plot(leftline, ploty, color='yellow')
ax2.plot(rightline, ploty, color='yellow')
ax2.set_title('Warped Straight 1', fontsize=30)
#def image_pipeline(img, s_thresh=(170, 255), sx_thresh=(20, 100)):
def image_pipeline(img, s_thresh=(120, 240), sx_thresh=(50, 100)):
""" This pipeline uses exactly the same principle as the one seen in class
1- undistort image
2- convert to HLS color space
3- apply x gradient using Sobel and apply threshold
4- apply threshold on the S channel
5- combine all conditions and stack the channels into a single image
6- warp the image
"""
img = np.copy(img)
    # Undistort image
undist = undist_image(img)
    # Convert to HLS and HSV color spaces and separate the channels
    hls = cv2.cvtColor(undist, cv2.COLOR_RGB2HLS).astype(float)
    hsv = cv2.cvtColor(undist, cv2.COLOR_RGB2HSV).astype(float)
    h_channel = hls[:,:,0]
l_channel = hls[:,:,1]
s_channel = hls[:,:,2]
v_channel = hsv[:,:,2]
# Sobel x
sobelx = cv2.Sobel(l_channel, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold hue channel
h_binary = np.zeros_like(h_channel)
h_binary[(h_channel >= 0) & (h_channel <= 100)] = 1
v_channel = hsv[:,:,2]
v_binary = np.zeros_like(v_channel)
v_binary[(v_channel >= 220) & (v_channel <= 255)] = 1
# Threshold x gradient
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1
# Threshold color channel
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1
# Stack each channel
color_binary = np.dstack(( np.zeros_like(sxbinary), sxbinary, s_binary)) * 255
combined_binary = np.zeros_like(sxbinary)
combined_binary[((s_binary == 1) | (v_binary == 1) | (sxbinary == 1)) & (h_binary != 0)] = 1
warped = warp_image(combined_binary)
out, leftline, rightline = fitPolynom(warped)
return out, leftline, rightline
def test_image_pipeline(full=True, save=False):
test_images = glob.glob('test_images/*.jpg')
for img_name in test_images:
img = plt.imread(img_name)
out, leftlane, rightlane = image_pipeline(img)
#out = image_pipeline(img)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=30)
ax2.imshow(out, cmap='gray')
ax2.set_title('Pipeline Image', fontsize=30)
ploty = np.linspace(0, out.shape[0]-1, out.shape[0] )
ax2.plot(leftlane, ploty, color='yellow')
ax2.plot(rightlane, ploty, color='yellow')
if save is not False:
            plt.imsave(os.path.join(img_name.split('\\')[0], save, img_name.split('\\')[-1]), out, cmap='gray')  # fixed: 'undist' is not defined in this scope
if full is False:
break
test_image_pipeline(False,False)
```
#### Let's experiment with color spaces to choose the best possible combination
```
# Let's explore the other color spaces
img = plt.imread('test_images/test5.jpg')
# Undistort image
undist = cv2.undistort(img, mtx, dist, None, mtx)
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Convert to HSV, HLS and Lab color spaces
hsv = cv2.cvtColor(undist, cv2.COLOR_RGB2HSV).astype(float)
hls = cv2.cvtColor(undist, cv2.COLOR_RGB2HLS).astype(float)
lab = cv2.cvtColor(undist, cv2.COLOR_RGB2LAB).astype(float)
h_channel = hls[:,:,0]
h_binary = np.zeros_like(h_channel)
h_binary[(h_channel >= 100) & (h_channel <= 130)] = 1
s_channel = hsv[:,:,1]
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= 150) & (s_channel <= 240)] = 1
v_channel = hsv[:,:,2]
v_binary = np.zeros_like(v_channel)
v_binary[(v_channel >= 220) & (v_channel <= 255)] = 1
l_channel = hls[:,:,1]
l_binary = np.zeros_like(l_channel)
l_binary[(l_channel >= 225) & (l_channel <= 255)] = 1
b_channel = lab[:,:,2]  # Lab b channel (picks up the yellow lines)
b_binary = np.zeros_like(b_channel)
b_binary[(b_channel >= 155) & (b_channel <= 200)] = 1
# Sobel x
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= 50) & (scaled_sobel <= 120)] = 1
f, (ax1, ax2, ax3, ax4, ax5, ax6, ax7) = plt.subplots(1, 7, figsize=(20,10))
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=30)
ax2.imshow(h_binary, cmap='gray')
ax2.set_title('H', fontsize=30)
ax3.imshow(s_binary, cmap='gray')
ax3.set_title('S', fontsize=30)
ax4.imshow(l_binary, cmap='gray')
ax4.set_title('L', fontsize=30)
ax5.imshow(v_binary, cmap='gray')
ax5.set_title('V', fontsize=30)
ax6.imshow(sxbinary, cmap='gray')
ax6.set_title('Sobel x', fontsize=30)
ax7.imshow(b_binary, cmap='gray')
ax7.set_title('b', fontsize=30)
```
We can add a threshold on the H channel to remove the influence of shadows, as shown in the cell below.
```
combined_binary = np.zeros_like(v_binary)
combined_binary_h = np.zeros_like(v_binary)
combined_binary_h[((s_binary == 1) | (v_binary == 1) | (sxbinary == 1)) & (h_binary == 0)] = 1
combined_binary[((s_binary == 1) | (v_binary == 1) | (sxbinary == 1) | (l_binary == 1) | (b_binary == 1)) & (h_binary == 0)] = 1
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(combined_binary, cmap='gray')
ax1.set_title('Without using H', fontsize=30)
ax2.imshow(combined_binary_h, cmap='gray')
ax2.set_title('Removing shadows using H', fontsize=30)
```
### Measure curvature
We assume a lane width of 3.7 m and a visible lane length of 30 m. To calibrate, we take a sample image and count how many pixels span these distances.
```
img = plt.imread('test_images/straight_lines2.jpg')
out, leftlane, rightlane = image_pipeline(img)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(img, cmap='gray')
ax1.set_title('Original image', fontsize=30)
ax2.imshow(out, cmap='gray')
ax2.set_title('Pipeline', fontsize=30)
lanewidth_px = round(rightlane[0] - leftlane[0])
print(lanewidth_px)
# Which yields the x scale:
xm_per_pix = 3.7 / lanewidth_px
# For y we assume 30m range
ym_per_pix = 30.0/720
print(xm_per_pix, ym_per_pix)
```
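For reference, the radius of curvature evaluated later by `computeCurveAndCarPos` follows the standard parabola formula R = (1 + (2Ay + B)²)^(3/2) / |2A| for a fit x = Ay² + By + C. A minimal sketch using this cell's meter-per-pixel scales; the fit coefficients below are illustrative, not taken from the project data. Rescaling the pixel-space coefficients, as done here, is equivalent to refitting the scaled points as the notebook does later.

```python
def curvature_radius(fit, y_eval_px, ym_per_pix=30.0 / 720, xm_per_pix=3.7 / 400):
    """Radius of curvature in meters of the pixel-space fit x = A*y**2 + B*y + C."""
    A, B, _ = fit
    # Rescale the pixel-space coefficients to world space.
    A_m = A * xm_per_pix / ym_per_pix**2
    B_m = B * xm_per_pix / ym_per_pix
    y_m = y_eval_px * ym_per_pix
    return (1 + (2 * A_m * y_m + B_m)**2)**1.5 / abs(2 * A_m)

R_curved = curvature_radius((1e-4, 0.0, 640.0), 719)
R_straight = curvature_radius((1e-6, 0.0, 640.0), 719)
# The straighter fit (smaller A) yields the larger radius.
```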
# Final Pipeline
```
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import pickle
import os
if 'mtx' in globals() and 'dist' in globals(): # Check if we need to load calibration data from the pickled file
print('Data already available')
else:
dist_pickle = pickle.load(open("camera_cal/wide_dist_pickle.p", "rb"))
mtx = dist_pickle['mtx']
dist = dist_pickle['dist']
print('Data loaded')
LEFT_LANE = False
RIGHT_LANE = False
```
### Helper functions
```
def undist_image(img):
dist_pickle = pickle.load(open("camera_cal/wide_dist_pickle.p", "rb"))
mtx = dist_pickle['mtx']
dist = dist_pickle['dist']
return cv2.undistort(img, mtx, dist, None, mtx)
def warp_image(img):
#src = np.float32([ [238, 685], [606, 437], [672, 437], [1060, 675] ])
src = np.float32([ [238, 685], [565, 470], [725, 470], [1060, 675] ])
dst = np.float32([ [400, img.shape[0]], [400, 0], [800, 0], [800, img.shape[0]] ])
M = cv2.getPerspectiveTransform(src, dst)
return cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]), flags=cv2.INTER_LINEAR)
def warpBack(img):
#src = np.float32([ [238, 685], [606, 437], [672, 437], [1060, 675] ])
src = np.float32([ [238, 685], [565, 470], [725, 470], [1060, 675] ])
dst = np.float32([ [400, img.shape[0]], [400, 0], [800, 0], [800, img.shape[0]] ])
Minv = cv2.getPerspectiveTransform(dst, src)
return cv2.warpPerspective(img, Minv, (img.shape[1], img.shape[0]), flags=cv2.INTER_LINEAR)
def colorSpaceProcessing(undistorded_image, s_thresh=(150, 240), sx_thresh=(50, 120)):
# Convert to HLS, HSV and Lab color spaces and separate the channels
hls = cv2.cvtColor(undistorded_image, cv2.COLOR_RGB2HLS).astype(float)
hsv = cv2.cvtColor(undistorded_image, cv2.COLOR_RGB2HSV).astype(float)
lab = cv2.cvtColor(undistorded_image, cv2.COLOR_RGB2LAB).astype(float)
h_channel = hls[:,:,0]
l_channel = hls[:,:,1]
s_channel = hls[:,:,2]
v_channel = hsv[:,:,2]
# Sobel x
sobelx = cv2.Sobel(l_channel, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold hue channel
h_binary = np.zeros_like(h_channel)
h_binary[(h_channel >= 100) & (h_channel <= 130)] = 1
# Threshold value channel
v_binary = np.zeros_like(v_channel)
v_binary[(v_channel >= 220) & (v_channel <= 255)] = 1
# Threshold x gradient
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1
# Threshold lightness channel
l_binary = np.zeros_like(l_channel)
l_binary[(l_channel >= 225) & (l_channel <= 255)] = 1
b_channel = lab[:,:,2]  # Lab b channel (picks up the yellow lines)
b_binary = np.zeros_like(b_channel)
b_binary[(b_channel >= 155) & (b_channel <= 200)] = 1
# Threshold color channel
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1
# Stack each channel
color_binary = np.dstack(( np.zeros_like(sxbinary), sxbinary, s_binary)) * 255
combined_binary = np.zeros_like(sxbinary)
combined_binary[((s_binary == 1) | (v_binary == 1) | (sxbinary == 1) | (l_binary == 1) | (b_binary == 1)) & (h_binary == 0)] = 1
#combined_binary[((s_binary == 1) | (v_binary == 1) | (sxbinary == 1))] = 1
# Apply region of interest masking
vertices = np.array([[(200, 720),(520, 480), (780, 480), (1200,720)]], dtype=np.int32)
mask = np.zeros_like(combined_binary)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(undistorded_image.shape) > 2:
channel_count = undistorded_image.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
combined_binary = cv2.bitwise_and(combined_binary, mask)
return combined_binary
def fitPolynom(binary_warped, previousL=None, previousR=None):
""" Taken from chapter 33 of the class
"""
global LEFT_LANE
global RIGHT_LANE
# Assuming you have created a warped binary image called "binary_warped"
# Take a histogram of the lower two thirds of the image
sliced = int(binary_warped.shape[0]/3)
histogram = np.sum(binary_warped[sliced:,:], axis=0)
# Create an output image to draw on and visualize the result
out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
midpoint = int(histogram.shape[0]/2)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
# Choose the number of sliding windows
nwindows = 9
# Set height of windows
window_height = int(binary_warped.shape[0]/nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated for each window
leftx_current = leftx_base
rightx_current = rightx_base
# Set the width of the windows +/- margin
margin = 110
# Set minimum number of pixels found to recenter window
minpix = 40
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = binary_warped.shape[0] - (window+1)*window_height
win_y_high = binary_warped.shape[0] - window*window_height
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),(win_xleft_high,win_y_high),
(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),(win_xright_high,win_y_high),
(0,255,0), 2)
# Identify the nonzero pixels in x and y within the window
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If you found > minpix pixels, recenter next window on their mean position
if len(good_left_inds) > minpix:
leftx_current = int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
# Fit a second order polynomial to each
if len(left_lane_inds) != 0:
left_fit = np.polyfit(lefty, leftx, 2)
# Filter coefficients if required
if LEFT_LANE is not False:
LEFT_LANE[0] = simpleLowPass(LEFT_LANE[0], left_fit[0], 0.90)
LEFT_LANE[1] = simpleLowPass(LEFT_LANE[1], left_fit[1], 0.90)
LEFT_LANE[2] = simpleLowPass(LEFT_LANE[2], left_fit[2], 0.90)
else:
LEFT_LANE = [0.0, 0.0, 0.0]
LEFT_LANE[0] = left_fit[0]
LEFT_LANE[1] = left_fit[1]
LEFT_LANE[2] = left_fit[2]
left_fitx = LEFT_LANE[0]*ploty**2 + LEFT_LANE[1]*ploty + LEFT_LANE[2]
out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]
else:
left_fitx = None
LEFT_LANE = False
if len(right_lane_inds) != 0:
right_fit = np.polyfit(righty, rightx, 2)
if RIGHT_LANE is not False:
RIGHT_LANE[0] = simpleLowPass(RIGHT_LANE[0], right_fit[0], 0.90)
RIGHT_LANE[1] = simpleLowPass(RIGHT_LANE[1], right_fit[1], 0.90)
RIGHT_LANE[2] = simpleLowPass(RIGHT_LANE[2], right_fit[2], 0.90)
else:
RIGHT_LANE = [0.0, 0.0, 0.0]
RIGHT_LANE[0] = right_fit[0]
RIGHT_LANE[1] = right_fit[1]
RIGHT_LANE[2] = right_fit[2]
right_fitx = RIGHT_LANE[0]*ploty**2 + RIGHT_LANE[1]*ploty + RIGHT_LANE[2]
out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]
else:
right_fitx = None
RIGHT_LANE = False
return out_img, left_fitx, right_fitx, leftx, rightx, lefty, righty, LEFT_LANE, RIGHT_LANE
def computeCurveAndCarPos(shapex, shapey, left_fitx, right_fitx, leftx, rightx, lefty, righty):
"""
"""
# Compute curvature in meters
ym_per_pix = 20/720 # meters per pixel in y dimension
xm_per_pix = 3.7/400 # meters per pixel in x dimension
ploty = np.linspace(0, shapey-1, shapey )
y_eval = np.max(ploty)
# Fit new polynomials to x,y in world space
if left_fitx is not None:
# Fit new polynomials to x,y in world space
left_fit_cr = np.polyfit(lefty*ym_per_pix, leftx*xm_per_pix, 2)
# Calculate the new radii of curvature
left_curverad = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
else:
left_curverad = -1
if right_fitx is not None:
# Fit new polynomials to x,y in world space
right_fit_cr = np.polyfit(righty*ym_per_pix, rightx*xm_per_pix, 2)
# Calculate the new radii of curvature
right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
else:
right_curverad = -1
# Compute the position of the car in the lane
# for this purpose we compare the position of the detected mid lane with the center of the image
center = 600 # Center of the image, based on the warped image (offset of 40pix from the half of the size)
if left_fitx is None or right_fitx is None:
carpos = -1
else:
midlane = left_fitx[shapey-1] + 0.5*(right_fitx[shapey-1]-left_fitx[shapey-1]) # Lane center based on the estimated lanes
carpos = (center - midlane)*xm_per_pix # Position of the car, >0 to the right.
return left_curverad, right_curverad, carpos
def weighted_img(img, initial_img, α=0.6, β=1., λ=0.):
"""
`img` is an overlay image with the lanes drawn on it; it should be
blank (all black) everywhere else.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, λ)
def projectLanes(img, shapey, leftlane, rightlane):
tmp = np.zeros_like(img)
ploty = np.linspace(0, shapey-1, shapey )
if leftlane is not None:
try:
tmp[ploty.astype(int), leftlane.astype(int)-1, 0] = 255
tmp[ploty.astype(int), leftlane.astype(int)+1, 0] = 255
tmp[ploty.astype(int), leftlane.astype(int)-2, 0] = 255
tmp[ploty.astype(int), leftlane.astype(int)+2, 0] = 255
tmp[ploty.astype(int), leftlane.astype(int)-3, 0] = 255
tmp[ploty.astype(int), leftlane.astype(int)+3, 0] = 255
tmp[ploty.astype(int), leftlane.astype(int)-4, 0] = 255
tmp[ploty.astype(int), leftlane.astype(int)+4, 0] = 255
except Exception:
pass
if rightlane is not None:
try:
tmp[ploty.astype(int), rightlane.astype(int), 0] = 255
tmp[ploty.astype(int), rightlane.astype(int)-1, 0] = 255
tmp[ploty.astype(int), rightlane.astype(int)+1, 0] = 255
tmp[ploty.astype(int), rightlane.astype(int)-2, 0] = 255
tmp[ploty.astype(int), rightlane.astype(int)+2, 0] = 255
tmp[ploty.astype(int), rightlane.astype(int)-3, 0] = 255
tmp[ploty.astype(int), rightlane.astype(int)+3, 0] = 255
tmp[ploty.astype(int), rightlane.astype(int)-4, 0] = 255
tmp[ploty.astype(int), rightlane.astype(int)+4, 0] = 255
except Exception:
pass
res = warpBack(tmp)
out = weighted_img(res, img)
return out
def simpleLowPass(old, new, alpha):
"""
Trivial low pass filter
"""
return (alpha*old + (1-alpha)*new)
def initParams():
"""
Initializes the line parameters
"""
global LEFT_LANE
global RIGHT_LANE
LEFT_LANE = False
RIGHT_LANE = False
```
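`warp_image`/`warpBack` rely on `cv2.getPerspectiveTransform` computing a homography from four point pairs. A dependency-free sketch of the same idea (a direct linear transform solved with NumPy), using the `src` points from `warp_image` and assuming a 720-pixel-tall frame for `dst`:

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: 3x3 H (with H[2,2] = 1) mapping src points to dst."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h11*x + h12*y + h13) / (h31*x + h32*y + 1), linearized:
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    x, y, w = H.dot([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# The same corner points used by warp_image/warpBack above (720-pixel-tall frame):
src = [(238, 685), (565, 470), (725, 470), (1060, 675)]
dst = [(400, 720), (400, 0), (800, 0), (800, 720)]
H = homography(src, dst)
Hinv = np.linalg.inv(H)  # plays the role of Minv in warpBack
```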
### Pipeline
```
def pipeline(img, previousL=None, previousR=None):
img = np.copy(img)
# Undistort image
undist = undist_image(img)
# Process color spaces
binary = colorSpaceProcessing(undist)
# Warp image to new perspective
binary_warped = warp_image(binary)
# Compute polynomials for the lanes
out_img, left_fitx, right_fitx, leftx, rightx, lefty, righty, leftcoeff, rightcoeff = fitPolynom(binary_warped, previousL, previousR)
# Compute curve radii and in-lane car position
cl, cr, cp = computeCurveAndCarPos(img.shape[1], img.shape[0], left_fitx, right_fitx, leftx, rightx, lefty, righty)
data = [cl, cr, cp]
dataCoeff = [leftcoeff, rightcoeff]
out = projectLanes(img, img.shape[0], left_fitx, right_fitx)
# Create an image to draw the lines on
ploty = np.linspace(0, img.shape[0]-1, img.shape[0] )
warp_zero = np.zeros_like(binary_warped).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
src = np.float32([ [238, 685], [565, 470], [725, 470], [1060, 675] ])
dst = np.float32([ [400, img.shape[0]], [400, 0], [800, 0], [800, img.shape[0]] ])
Minv = cv2.getPerspectiveTransform(dst, src)
newwarp = cv2.warpPerspective(color_warp, Minv, (img.shape[1], img.shape[0]))
# Combine the result with the original image
result = cv2.addWeighted(out, 1, newwarp, 0.3, 0)
return result, data, dataCoeff
def pipeline_vid(img):
img = np.copy(img)
# Undistort image
undist = undist_image(img)
# Process color spaces
binary = colorSpaceProcessing(undist)
# Warp image to new perspective
binary_warped = warp_image(binary)
# Compute polynomials for the lanes
out_img, left_fitx, right_fitx, leftx, rightx, lefty, righty, leftcoeff, rightcoeff = fitPolynom(binary_warped)
# Compute curve radii and in-lane car position
cl, cr, cp = computeCurveAndCarPos(img.shape[1], img.shape[0], left_fitx, right_fitx, leftx, rightx, lefty, righty)
data = [cl, cr, cp]
dataCoeff = [left_fitx, right_fitx]
out = projectLanes(img, img.shape[0], left_fitx, right_fitx)
font = cv2.FONT_HERSHEY_PLAIN
cv2.putText(out,'Curve radius left: %.1f [m]' % data[0],(75,50), font, 2, (255,255,0))
cv2.putText(out,'Curve radius right: %.1f [m]' % data[1],(75,80), font, 2, (255,255,0))
cv2.putText(out,'In-lane car position: %.1f [m]' % data[2],(75,110), font, 2, (255,255,0))
# Create an image to draw the lines on
ploty = np.linspace(0, img.shape[0]-1, img.shape[0] )
warp_zero = np.zeros_like(binary_warped).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
src = np.float32([ [238, 685], [565, 470], [725, 470], [1060, 675] ])
dst = np.float32([ [400, img.shape[0]], [400, 0], [800, 0], [800, img.shape[0]] ])
Minv = cv2.getPerspectiveTransform(dst, src)
newwarp = cv2.warpPerspective(color_warp, Minv, (img.shape[1], img.shape[0]))
# Combine the result with the original image
result = cv2.addWeighted(out, 1, newwarp, 0.3, 0)
return result #binary_warped, data, dataCoeff
```
#### Save output images for the report
```
def image_pipeline(img):
img = np.copy(img)
# Undistort image
undist = undist_image(img)
# Process color spaces
binary = colorSpaceProcessing(undist)
return binary
test_images = glob.glob('test_images/*.jpg')
output_folder = 'output_images'
for img_name in test_images:
initParams()
img = plt.imread(img_name)
out, data, dataCoeff = pipeline(img)
font = {'family': 'serif', 'color': 'yellow', 'weight': 'normal', 'size': 10, }
f, (ax1) = plt.subplots(1, 1, figsize=(20,10))
ax1.imshow(out)
plt.text(75, 50, "Curve radius left: %.1f [m]" % data[0], fontdict=font)
plt.text(75, 80, "Curve radius right: %.1f [m]" % data[1], fontdict=font)
plt.text(75, 110, "In-lane car position: %.1f [m]" % data[2], fontdict=font)
plt.savefig(os.path.join(output_folder, img_name.split('\\')[-1]), dpi=500, bbox_inches='tight')
```
## Test on videos
The pipeline is ready to be tested on videos. A low-pass filtering of the polynomial coefficients has been enabled to make the estimation more robust.
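This filtering is the `simpleLowPass` exponential smoother defined above; a quick illustration of how it damps a one-frame jump in a fitted coefficient:

```python
def simpleLowPass(old, new, alpha):
    """Trivial low pass filter: keep a fraction alpha of the old value."""
    return alpha * old + (1 - alpha) * new

# A noisy spike in one frame barely moves the smoothed coefficient.
coeff = 1.0
for measurement in [1.0, 1.0, 5.0, 1.0]:  # one-frame outlier of 5.0
    coeff = simpleLowPass(coeff, measurement, 0.90)
# coeff ends near 1.36 rather than jumping to 5.0
```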
```
import matplotlib.image as mpimg
from moviepy.editor import VideoFileClip
from IPython.display import HTML
import scipy
initParams()
white_output = 'output_videos/project_video_res.mp4'
clip1 = VideoFileClip("project_video.mp4")
white_clip = clip1.fl_image(pipeline_vid)
%time white_clip.write_videofile(white_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
```
The detection looks stable, except almost at the very end when the car passes the region with lots of shadows. Due to the lack of detected features for the right lane marking, the estimated lane curves to the left for a short period of time.
# 2D Isostatic gravity inversion - Inverse Problem
This [IPython Notebook](http://ipython.org/videos.html#the-ipython-notebook) uses the open-source library [Fatiando a Terra](http://fatiando.org/)
```
%matplotlib inline
import numpy as np
from scipy.misc import derivative
import scipy as spy
from scipy import interpolate
import matplotlib
#matplotlib.use('TkAgg', force=True)
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
import math
import cPickle as pickle
import datetime
import string as st
from scipy.misc import imread
from __future__ import division
from fatiando import gravmag, mesher, utils, gridder
from fatiando.mesher import Prism, Polygon
from fatiando.gravmag import prism
from fatiando.utils import ang2vec, si2nt, contaminate
from fatiando.gridder import regular
from fatiando.vis import mpl
from numpy.testing import assert_almost_equal
from numpy.testing import assert_array_almost_equal
from pytest import raises
plt.rc('font', size=16)
import functions as fc
```
## Observation coordinates.
```
# Model's limits
ymin = 0.0
ymax = 383000.0
zmin = -1000.0
zmax = 45000.0
xmin = -100000.0
xmax = 100000.0
area = [ymin, ymax, zmax, zmin]
ny = 150 # number of observation points and of prisms along the profile
# coordinates defining the horizontal boundaries of the
# adjacent columns along the profile
y = np.linspace(ymin, ymax, ny)
# coordinates of the center of the columns forming the
# interpretation model
n = ny - 1
dy = (ymax - ymin)/n
ycmin = ymin + 0.5*dy
ycmax = ymax - 0.5*dy
yc = np.reshape(np.linspace(ycmin, ycmax, n),(n,1))
x = np.zeros_like(yc)
z = np.zeros_like(yc)-150.0
## Edge extension (observation coordinates)
sigma = 2.0
edge = sigma*dy*n
```
## Model parameters
```
# Model densities
# Indices and polygons relationship:
# cc = continental crust layer
# oc = ocean crust layer
# w = water layer
# s = sediment layer
# m = mantle layer
dw = np.array([1030.0])
ds0 = np.array([2350.0])
ds1 = np.array([2600.0])
dcc = np.array([2870.0])
doc = np.array([2885.0])
dm = np.array([3300.0])
#dc = dcc
# coordinate defining the horizontal boundaries of the continent-ocean boundary
COT = 350000.0
# array defining the crust density variation along the profile
dc = np.zeros_like(yc)
aux = yc <= COT
for i in range(len(yc[aux])):
dc[i] = dcc
for i in range(len(yc[aux]),n):
dc[i] = doc
# defining sediments layers density matrix
ds = np.vstack((np.reshape(np.repeat(ds0,n),(1,n)),np.reshape(np.repeat(ds1,n),(1,n))))
# S0 => isostatic compensation surface (Airy's model)
S0 = np.array([44000.0]) #original
```
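`S0` is the Airy isostatic compensation surface: in Airy's model every column above `S0` has the same mass, so a water-loaded column is balanced by a mantle anti-root. A back-of-the-envelope sketch with the densities defined above; the 2 km water depth is illustrative, not from the data files:

```python
# Airy balance: the mass deficit of replacing crust with water above must
# equal the mass surplus of replacing mantle with crust below (anti-root tm).
dw, dcc, dm = 1030.0, 2870.0, 3300.0  # densities defined above [kg/m^3]
tw = 2000.0                           # illustrative water depth [m]
tm = tw * (dcc - dw) / (dm - dcc)     # anti-root thickness [m], ~8.6 km
```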
## Synthetic data
```
gsyn = np.reshape(np.loadtxt('../data/magma-poor-margin-synthetic-gravity-data.txt'),(n,1))
```
## Water bottom
```
bathymetry = np.reshape(np.loadtxt('../data/etopo1-pelotas.txt'),(n,1))
tw = 0.0 - bathymetry
```
## True surfaces
```
toi = np.reshape(np.loadtxt('../data/volcanic-margin-true-toi-surface.txt'),(n,1))
true_basement = np.reshape(np.loadtxt('../data/volcanic-margin-true-basement-surface.txt'),(n,1))
true_moho = np.reshape(np.loadtxt('../data/volcanic-margin-true-moho-surface.txt'),(n,1))
# True reference moho surface (SR = S0+dS0)
true_S0 = np.array([44000.0])
true_dS0 = np.array([2200.0]) #original
# True first layer sediments thickness
ts0 = toi - tw
# True second layer sediments thickness
true_ts1 = true_basement - toi
# True thickness sediments vector
true_ts = np.vstack((np.reshape(ts0,(1,n)),np.reshape(true_ts1,(1,n))))
# True layer anti-root thickness
true_tm = S0 - true_moho
# true parameters vector
ptrue = np.vstack((true_ts1, true_tm, true_dS0))
```
## Initial guess surfaces
```
# initial guess basement surface
ini_basement = np.reshape(np.loadtxt('../data/volcanic-margin-initial-basement-surface.txt'),(n,1))
# initial guess moho surface
ini_moho = np.reshape(np.loadtxt('../data/volcanic-margin-initial-moho-surface.txt'),(n,1))
# initial guess reference moho surface (SR = S0+dS0)
ini_dS0 = np.array([11500.0])
ini_RM = S0 + ini_dS0
# initial guess layer igneous thickness
ini_ts1 = ini_basement - toi
# initial guess anti-root layer thickness
ini_tm = S0 - ini_moho
# initial guess parameters vector
p0 = np.vstack((ini_ts1, ini_tm, ini_dS0))
```
## Known depths
```
# Known values: basement and moho surfaces
base_known = np.loadtxt('../data/volcanic-margin-basement-known-depths.txt')
#base_known = np.loadtxt('../data/volcanic-margin-basement-more-known-depths.txt')
#base_known_new = np.loadtxt('../data/volcanic-margin-basement-new-known-depths.txt')
#base_known = np.loadtxt('../data/volcanic-margin-basement-few-more-known-depths.txt')
#base_known_new = np.loadtxt('../data/volcanic-margin-basement-few-new-known-depths.txt')
#base_known_old = np.loadtxt('../data/volcanic-margin-basement-known-depths.txt')
moho_known = np.loadtxt('../data/volcanic-margin-moho-known-depths.txt')
(rs,index_rs) = fc.base_known_function(dy,tw,yc,base_known,ts0,two_layers=True)
(rm,index_rm) = fc.moho_known_function(dy,yc,S0,moho_known)
index_base = index_rs
index_moho = index_rm - n
assert_almost_equal(base_known[:,0], yc[index_base][:,0], decimal=6)
assert_almost_equal(moho_known[:,0], yc[index_moho][:,0], decimal=6)
assert_almost_equal(true_ts1[index_base][:,0], rs[:,0], decimal=6)
assert_almost_equal((true_tm[index_moho][:,0]), rm[:,0], decimal=6)
```
## Initial guess data
```
g0 = np.reshape(np.loadtxt('../data/magma-poor-margin-initial-guess-gravity-data.txt'),(n,1))
```
### parameters vector box limits
```
# true thickness vector limits
print 'ts =>', np.min(ptrue[0:n]),'-', np.max(ptrue[0:n])
print 'tm =>', np.min(ptrue[n:n+n]),'-', np.max(ptrue[n:n+n])
print 'dS0 =>', ptrue[n+n]
# initial guess thickness vector limits
print 'ts =>', np.min(p0[0:n]),'-', np.max(p0[0:n])
print 'tm =>', np.min(p0[n:n+n]),'-', np.max(p0[n:n+n])
print 'dS0 =>', p0[n+n]
```
```
# defining parameters values limits
pjmin = np.zeros((len(ptrue),1))
pjmax = np.zeros((len(ptrue),1))
pjmin[0:n] = 0.0
pjmax[0:n] = 25000.
pjmin[n:n+n] = 2000.0
pjmax[n:n+n] = 28000.
pjmin[n+n] = 0.0
pjmax[n+n] = 12000.
```
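The inversion loop below enforces these box limits through a logit-style change of variables, p̂ = -ln((pmax - p)/(p - pmin)) with inverse p = pmin + (pmax - pmin)/(1 + exp(-p̂)), so the Marquardt step is taken in an unconstrained space. A small round-trip check with illustrative values:

```python
import numpy as np

pjmin, pjmax = 2000.0, 28000.0
p = np.array([5000.0, 15000.0, 27000.0])

p_hat = -np.log((pjmax - p) / (p - pjmin))                # to unconstrained space
p_back = pjmin + (pjmax - pjmin) / (1 + np.exp(-p_hat))   # back into the box
# p_back recovers p, and any finite p_hat maps inside (pjmin, pjmax).
```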
### Inversion code
```
# Internal parameters of the routine (convergence tolerance, number of iterations, etc.)
beta = 10**(-3)
itmax = 50
itmax_marq = 10
lamb = 1.
mi = 10**(-3)
dmi = 10.
dp1 = 1.
dp2 = 1.
# variable initialization
ymin = area[0]
ymax = area[1]
x = np.zeros_like(yc)
z = np.zeros_like(yc)-150.0
n = len(yc) # number of observed data
m = 2*n+1 # number of parameters to invert
# contribution of the prisms forming the water layer.
prism_w = fc.prism_w_function(xmax,xmin,dy,edge,dw,dcc,tw,yc)
gzw = prism.gz(np.reshape(x,(n,)),np.reshape(yc,(n,)),np.reshape(z,(n,)),prism_w)
# matrices
I = np.identity(m)
W0 = np.identity(n-1)
R0 = fc.R_matrix_function(n,isostatic=True)
R = fc.R_matrix_function(n)
C = fc.C_matrix_function(ds,dm,dc,two_layers=True)
D = fc.D_matrix_function(dw,dc,ds,two_layers=True)
A = fc.A_matrix_function(n,rs,index_rs)
B = fc.B_matrix_function(n,rm,index_rm)
G0 = fc.G_matrix_function(xmax,xmin,dy,edge,dp1,dp2,S0,dw,ds,dm,dcc,dc,tw,p0,yc,ts0,two_layers=True)
# Hessians
Hess_phi = (2/n)*G0.T.dot(G0)
Hess_psi0 = 2*C.T.dot(R0.T.dot(W0.T.dot(W0.dot(R0.dot(C)))))
Hess_psi1 = 2*R.T.dot(R)
Hess_psi2 = 2*A.T.dot(A)
Hess_psi3 = 2*B.T.dot(B)
# Constraint normalization
diag_phi = np.diag(Hess_phi)
diag_psi0 = np.diag(Hess_psi0)
diag_psi1 = np.diag(Hess_psi1)
diag_psi2 = np.diag(Hess_psi2)
diag_psi3 = np.diag(Hess_psi3)
f_phi = np.median(diag_phi)
f_psi0 = np.median(diag_psi0)
f_psi1 = np.median(diag_psi1)
#f_psi2 = np.median(diag_psi2)
#f_psi3 = np.median(diag_psi3)
f_psi2 = 2.
f_psi3 = 2.
print f_phi, f_psi0, f_psi1, f_psi2, f_psi3
# constraint coefficients
alpha0 = (f_phi/f_psi0)*10**(1) # isostatic constraint
alpha1 = (f_phi/f_psi1)*10**(0) # smoothness constraint
alpha2 = (f_phi/f_psi2)*10**(0) # sediment thickness equality constraint
alpha3 = (f_phi/f_psi3)*10**(1) # thickness equality constraint (S0 - tm)
print alpha0, alpha1, alpha2, alpha3
p1 = p0.copy()
g1 = g0.copy()
gama1 = fc.gama_function(alpha0,alpha1,alpha2,alpha3,lamb,S0,tw,gsyn,g1,p1,rs,rm,W0,R0,C,D,R,A,B,ts0,two_layers=True)
gama_list = [gama1]
k0=0
k1=0
# main inversion loop
for it in range (itmax):
p1_hat = - np.log((pjmax - p1)/(p1-pjmin))
G1 = fc.G_matrix_function(xmax,xmin,dy,edge,dp1,dp2,S0,dw,ds,dm,dcc,dc,tw,p1,yc,ts0,two_layers=True)
grad_phi = (-2/n)*G1.T.dot(gsyn - g1)
Hess_phi = (2/n)*G1.T.dot(G1)
grad_psi0 = fc.grad_ps0_function(S0,tw,p1,W0,R0,C,D,ts0,two_layers=True)
grad_psi1 = fc.grad_psi1_function(p1,R)
grad_psi2 = fc.grad_psi2_function(p1,rs,A)
grad_psi3 = fc.grad_psi2_function(p1,rm,B)
grad_gama = grad_phi + lamb*(alpha0*grad_psi0+alpha1*grad_psi1+alpha2*grad_psi2+alpha3*grad_psi3)
Hess_gama = Hess_phi+lamb*(alpha0*Hess_psi0+alpha1*Hess_psi1+alpha2*Hess_psi2+alpha3*Hess_psi3)
T = fc.T_matrix_function(pjmin, pjmax, p1)
for it_marq in range(itmax_marq):
deltap = np.linalg.solve((Hess_gama.dot(T) + mi*I), -grad_gama)
p2_hat = p1_hat + deltap
p2 = pjmin + ((pjmax - pjmin)/(1 + np.exp(-p2_hat)))
# Compute the predicted data vector and the objective function
prism_s = fc.prism_s_function(xmax,xmin,dy,edge,ds,dcc,tw,p2,yc,ts0,two_layers=True)
prism_c = fc.prism_c_function(xmax,xmin,dy,edge,S0,dcc,dc,tw,p2,yc,ts0,two_layers=True)
prism_m = fc.prism_m_function(xmax,xmin,dy,edge,S0,dcc,dm,p2,yc)
g2 = np.reshape(fc.g_function(np.reshape(x,(n,)),np.reshape(yc,(n,)),np.reshape(z,(n,)),gzw,prism_s,prism_c,prism_m),(n,1))
gama2 = fc.gama_function(alpha0,alpha1,alpha2,alpha3,lamb,S0,tw,gsyn,g2,p2,rs,rm,W0,R0,C,D,R,A,B,ts0,two_layers=True)
# Check whether the objective function is decreasing
dgama = gama2 - gama1
if dgama > 0.:
mi *= dmi
print 'k0=',k0
k0 += 1
else:
mi /= dmi
break
# Test convergence of the objective function
if (dgama < 0.) & (abs(gama1 - gama2) < beta):
#if fc.convergence_function(gama1, gama2, beta):
print 'convergence achieved'
break
# Update variables
else:
print 'k1=',k1
k1 += 1
#gama1 = gama2.copy()
print gama1
gama_list.append(gama1)
thicknesses = tw + ts0 + p2[0:n] + p2[n:n+n]
print 'thicknesses=', np.max(thicknesses)
if np.alltrue(thicknesses <= S0):
p = p1.copy()
g = g1.copy()
p1 = p2.copy()
g1 = g2.copy()
gama1 = gama2.copy()
assert np.alltrue(thicknesses <= S0), 'sum of the thicknesses shall be less than or equal to isostatic compensation surface'
p = p2.copy()
g = g2.copy()
gama_list.append(gama2)
it = [i for i in range(len(gama_list))]
#plt.figure(figsize=(8,8))
ax = plt.figure(figsize=(8,8)).gca()
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
plt.plot(gama_list,'ko')
plt.yscale('log')
plt.xlabel('$k$', fontsize=18)
plt.ylabel(r'$\Gamma(\mathbf{p})$', fontsize=18)
plt.grid()
#plt.xlim(-1,50)
#plt.xlim(-1, len(gama_list)+5)
plt.ylim(np.min(gama_list)-3*np.min(gama_list),np.max(gama_list)+3*np.min(gama_list))
#mpl.savefig('../manuscript/figures/magma-poor-margin-gama-list-alphas_1_0_0_1.png', dpi='figure', bbox_inches='tight')
#mpl.savefig('../manuscript/figures/magma-poor-margin-gama-list-alphas_2_1_0_1_more-known-depths.png', dpi='figure', bbox_inches='tight')
plt.show()
```
## Lithostatic Stress
```
sgm_true = 9.81*(10**(-6))*(dw*tw + ds0*ts0 + ds1*true_ts1 + dc*(S0-tw-ts0-true_ts1-true_tm)+dm*true_tm)
sgm = 9.81*(10**(-6))*(dw*tw + ds0*ts0 + ds1*p[0:n] + dc*(S0-tw-ts0-p[0:n]-p[n:n+n])+dm*p[n:n+n])
```
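In symbols, the cells above evaluate the lithostatic stress as the weight per unit area of the overlying column; the factor $9.81 \times 10^{-6}$ is $g$ combined with a unit conversion so that densities in kg/m³ and thicknesses in m yield MPa:

$\sigma = 9.81 \times 10^{-6} \left( \rho_w t_w + \rho_{s0} t_{s0} + \rho_{s1} t_{s1} + \rho_c t_c + \rho_m t_m \right), \qquad t_c = S_0 - t_w - t_{s0} - t_{s1} - t_m$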
## Inversion model plot
```
# Inversion results
RM = S0 + p[n+n]
basement = tw + ts0 + p[0:n]
moho = S0 - p[n:n+n]
print(ptrue[n+n], p[n+n])
polygons_water = []
for (yi, twi) in zip(yc, tw):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_water.append(Polygon(np.array([[y1, y2, y2, y1],
[0.0, 0.0, twi, twi]]).T,
props={'density': dw - dcc}))
polygons_sediments0 = []
for (yi, twi, s0i) in zip(yc, np.reshape(tw,(n,)), np.reshape(toi,(n,))):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_sediments0.append(Polygon(np.array([[y1, y2, y2, y1],
[twi, twi, s0i, s0i]]).T,
props={'density': ds0 - dcc}))
polygons_sediments1 = []
for (yi, s0i, s1i) in zip(yc, np.reshape(toi,(n,)), np.reshape(basement,(n,))):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_sediments1.append(Polygon(np.array([[y1, y2, y2, y1],
[s0i, s0i, s1i, s1i]]).T,
props={'density': ds1 - dcc}))
polygons_crust = []
for (yi, si, Si, dci) in zip(yc, np.reshape(basement,(n,)), np.reshape(moho,(n,)), dc):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_crust.append(Polygon(np.array([[y1, y2, y2, y1],
[si, si, Si, Si]]).T,
props={'density': dci - dcc}))
polygons_mantle = []
for (yi, Si) in zip(yc, np.reshape(moho,(n,))):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_mantle.append(Polygon(np.array([[y1, y2, y2, y1],
[Si, Si, S0+p[n+n], S0+p[n+n]]]).T,
props={'density': dm - dcc}))
%matplotlib inline
plt.close('all')
fig = plt.figure(figsize=(12,16))
import matplotlib.gridspec as gridspec
heights = [8, 8, 8, 1]
gs = gridspec.GridSpec(4, 1, height_ratios=heights)
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1])
ax3 = plt.subplot(gs[2])
ax4 = plt.subplot(gs[3])
ax1.axhline(y=0.0, xmin=ymin, xmax=ymax, color='k', linestyle='--', linewidth=1)
ax1.plot(0.001*yc, gsyn, 'or', mfc='none', markersize=8, label='simulated data')
ax1.plot(0.001*yc, g0, '-b', linewidth=2, label='initial guess data')
ax1.plot(0.001*yc, g, '-g', linewidth=2, label='predicted data')
ax1.set_xlim(0.001*ymin, 0.001*ymax)
ax1.set_ylabel('gravity disturbance (mGal)', fontsize=16)
ax1.set_xticklabels(['%g'% (l) for l in ax1.get_xticks()], fontsize=14)
ax1.set_yticklabels(['%g'% (l) for l in ax1.get_yticks()], fontsize=14)
ax1.legend(loc='best', fontsize=14, facecolor='silver')
ax2.plot(0.001*yc, sgm_true, 'or', mfc='none', markersize=8, label='simulated lithostatic stress')
ax2.plot(0.001*yc, sgm, '-g', linewidth=2, label='evaluated lithostatic stress')
ax2.set_xlim(0.001*ymin, 0.001*ymax)
ax2.set_ylabel('lithostatic stress (MPa)', fontsize=16)
ax2.set_xticklabels(['%g'% (l) for l in ax2.get_xticks()], fontsize=14)
ax2.set_yticklabels(['%g'% (l) for l in ax2.get_yticks()], fontsize=14)
ax2.legend(loc='best', fontsize=14, facecolor='silver')
ax3.axhline(y=0.0, xmin=ymin, xmax=ymax, color='k', linestyle='-', linewidth=1)
aux = yc <= COT
for (pwi) in (polygons_water):
tmpx = [x for x in pwi.x]
tmpx.append(pwi.x[0])
tmpy = [y for y in pwi.y]
tmpy.append(pwi.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='lightskyblue')
for (ps0i) in (polygons_sediments0):
tmpx = [x for x in ps0i.x]
tmpx.append(ps0i.x[0])
tmpy = [y for y in ps0i.y]
tmpy.append(ps0i.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='tan')
for (ps1i) in (polygons_sediments1):
tmpx = [x for x in ps1i.x]
tmpx.append(ps1i.x[0])
tmpy = [y for y in ps1i.y]
tmpy.append(ps1i.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='rosybrown')
for (pci) in (polygons_crust[:len(yc[aux])]):
tmpx = [x for x in pci.x]
tmpx.append(pci.x[0])
tmpy = [y for y in pci.y]
tmpy.append(pci.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='orange')
for (pcoi) in (polygons_crust[len(yc[aux]):n]):
tmpx = [x for x in pcoi.x]
tmpx.append(pcoi.x[0])
tmpy = [y for y in pcoi.y]
tmpy.append(pcoi.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='olive')
for (pmi) in (polygons_mantle):
tmpx = [x for x in pmi.x]
tmpx.append(pmi.x[0])
tmpy = [y for y in pmi.y]
tmpy.append(pmi.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='pink')
#ax3.axhline(y=S0, xmin=ymin, xmax=ymax, color='w', linestyle='--', linewidth=3)
ax3.plot(yc, tw, '-k', linewidth=3)
ax3.plot(yc, toi, '-k', linewidth=3)
ax3.plot(yc, true_basement, '-k', linewidth=3, label='true surfaces')
ax3.plot(yc, true_moho, '-k', linewidth=3)
ax3.plot(yc, ini_basement, '-.b', linewidth=3, label='initial guess surfaces')
ax3.plot(yc, ini_moho, '-.b', linewidth=3)
ax3.plot(yc, basement, '--w', linewidth=3, label='estimated surfaces')
ax3.plot(yc, moho, '--w', linewidth=3)
ax3.axhline(y=true_S0+true_dS0, xmin=ymin, xmax=ymax, color='k', linestyle='-', linewidth=3)
ax3.axhline(y=S0+ini_dS0, xmin=ymin, xmax=ymax, color='b', linestyle='-.', linewidth=3)
ax3.axhline(y=S0+p[n+n], xmin=ymin, xmax=ymax, color='w', linestyle='--', linewidth=3)
ax3.plot(base_known[:,0], base_known[:,1], 'v', color = 'yellow', markersize=15, label='known depths (basement)')
#ax3.plot(base_known_old[:,0], base_known_old[:,1], 'v', color = 'yellow', markersize=15, label='known depths (basement)')
#ax3.plot(base_known_new[:,0], base_known_new[:,1], 'v', color = 'magenta', markersize=15, label='more known depths (basement)')
ax3.plot(moho_known[:,0], moho_known[:,1], 'D', color = 'lime', markersize=15, label='known depths (moho)')
#ax3.set_ylim((S0+p[n+n]), zmin)
ax3.set_ylim((56000.0), zmin)
ax3.set_xlim(ymin, ymax)
ax3.set_xlabel('y (km)', fontsize=16)
ax3.set_ylabel('z (km)', fontsize=16)
ax3.set_xticklabels(['%g'% (0.001*l) for l in ax3.get_xticks()], fontsize=14)
ax3.set_yticklabels(['%g'% (0.001*l) for l in ax3.get_yticks()], fontsize=14)
ax3.legend(loc='lower right', fontsize=14, facecolor='silver')
X, Y = fig.get_dpi()*fig.get_size_inches()
plt.title('Density contrast (kg/m$^{3}$)', fontsize=17)
#plt.title('Density (kg/m$^{3}$)', fontsize=17)
ax4.axis('off')
layers_list1 = ['water', 'sediment 1', 'sediment 2', 'continental', 'oceanic', 'mantle']
layers_list2 = ['', '', '', 'crust', 'crust', '']
colors_list = ['lightskyblue', 'tan', 'rosybrown', 'orange', 'olive', 'pink']
density_list = ['-1840', '-520', '-270', '0', '15', '430'] #original
#density_list = ['1030', '2350', '2600', '2870', '2885', '3300']
ncols = len(colors_list)
nrows = 1
h = Y / nrows
w = X / (ncols + 1)
i=ncols-1
for color, density, layers1, layers2 in zip(colors_list, density_list, layers_list1, layers_list2):
col = i // nrows
row = i % nrows
x = X - (col*w) - w
yi_line = Y
yf_line = Y - Y*0.15
yi_text1 = Y - Y*0.2
yi_text2 = Y - Y*0.28
yi_text3 = Y - Y*0.08
i-=1
poly = Polygon(np.array([[x, x+w*0.75, x+w*0.75, x], [yi_line, yi_line, yf_line, yf_line]]).T)
tmpx = [x for x in poly.x]
tmpx.append(poly.x[0])
tmpy = [y for y in poly.y]
tmpy.append(poly.y[0])
ax4.plot(tmpx, tmpy, linestyle='-', color='k', linewidth=1)
ax4.fill(tmpx, tmpy, color=color)
ax4.text(x+w*0.375, yi_text1, layers1, fontsize=(w*0.14), horizontalalignment='center', verticalalignment='top')
ax4.text(x+w*0.375, yi_text2, layers2, fontsize=(w*0.14), horizontalalignment='center', verticalalignment='top')
ax4.text(x+w*0.375, yi_text3, density, color = 'k', fontsize=(w*0.14), horizontalalignment='center', verticalalignment='center')
plt.tight_layout()
#mpl.savefig('../manuscript/figures/magma-poor-margin-grafics-estimated-model-alphas_1_0_0_1.png', dpi='figure', bbox_inches='tight')
#mpl.savefig('../data/fig/volcanic-margin-grafics-estimated-model-alphas_2_1_1_2_more-known-depths.png', dpi='figure', bbox_inches='tight')
plt.show()
```
<a href="https://colab.research.google.com/github/shivamguptadrps/BuildingMachineLearningSystemsWithPython/blob/master/Welcome_To_Colaboratory.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<p><img alt="Colaboratory logo" height="45px" src="/img/colab_favicon.ico" align="left" hspace="10px" vspace="0px"></p>
<h1>What is Colaboratory?</h1>
Colaboratory, or "Colab" for short, allows you to write and execute Python in your browser, with
- Zero configuration required
- Free access to GPUs
- Easy sharing
Whether you're a **student**, a **data scientist** or an **AI researcher**, Colab can make your work easier. Watch [Introduction to Colab](https://www.youtube.com/watch?v=inN8seMm7UI) to learn more, or just get started below!
## **Getting started**
The document you are reading is not a static web page, but an interactive environment called a **Colab notebook** that lets you write and execute code.
For example, here is a **code cell** with a short Python script that computes a value, stores it in a variable, and prints the result:
```
seconds_in_a_day = 24 * 60 * 60
seconds_in_a_day
```
To execute the code in the above cell, select it with a click and then either press the play button to the left of the code, or use the keyboard shortcut "Command/Ctrl+Enter". To edit the code, just click the cell and start editing.
Variables that you define in one cell can later be used in other cells:
```
seconds_in_a_week = 7 * seconds_in_a_day
seconds_in_a_week
```
Colab notebooks allow you to combine **executable code** and **rich text** in a single document, along with **images**, **HTML**, **LaTeX** and more. When you create your own Colab notebooks, they are stored in your Google Drive account. You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them. To learn more, see [Overview of Colab](/notebooks/basic_features_overview.ipynb). To create a new Colab notebook you can use the File menu above, or use the following link: [create a new Colab notebook](http://colab.research.google.com#create=true).
Colab notebooks are Jupyter notebooks that are hosted by Colab. To learn more about the Jupyter project, see [jupyter.org](https://www.jupyter.org).
## Data science
With Colab you can harness the full power of popular Python libraries to analyze and visualize data. The code cell below uses **numpy** to generate some random data, and uses **matplotlib** to visualize it. To edit the code, just click the cell and start editing.
```
import numpy as np
from matplotlib import pyplot as plt
ys = 200 + np.random.randn(100)
x = [x for x in range(len(ys))]
plt.plot(x, ys, '-')
plt.fill_between(x, ys, 195, where=(ys > 195), facecolor='g', alpha=0.6)
plt.title("Sample Visualization")
plt.show()
```
You can import your own data into Colab notebooks from your Google Drive account, including from spreadsheets, as well as from Github and many other sources. To learn more about importing data, and how Colab can be used for data science, see the links below under [Working with Data](#working-with-data).
## Machine learning
With Colab you can import an image dataset, train an image classifier on it, and evaluate the model, all in just [a few lines of code](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/beginner.ipynb). Colab notebooks execute code on Google's cloud servers, meaning you can leverage the power of Google hardware, including [GPUs and TPUs](#using-accelerated-hardware), regardless of the power of your machine. All you need is a browser.
Colab is used extensively in the machine learning community with applications including:
- Getting started with TensorFlow
- Developing and training neural networks
- Experimenting with TPUs
- Disseminating AI research
- Creating tutorials
To see sample Colab notebooks that demonstrate machine learning applications, see the [machine learning examples](#machine-learning-examples) below.
## More Resources
### Working with Notebooks in Colab
- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)
- [Guide to Markdown](/notebooks/markdown_guide.ipynb)
- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)
- [Saving and loading notebooks in GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb)
- [Interactive forms](/notebooks/forms.ipynb)
- [Interactive widgets](/notebooks/widgets.ipynb)
- <img src="/img/new.png" height="20px" align="left" hspace="4px" alt="New"></img>
[TensorFlow 2 in Colab](/notebooks/tensorflow_version.ipynb)
<a name="working-with-data"></a>
### Working with Data
- [Loading data: Drive, Sheets, and Google Cloud Storage](/notebooks/io.ipynb)
- [Charts: visualizing data](/notebooks/charts.ipynb)
- [Getting started with BigQuery](/notebooks/bigquery.ipynb)
### Machine Learning Crash Course
These are a few of the notebooks from Google's online Machine Learning course. See the [full course website](https://developers.google.com/machine-learning/crash-course/) for more.
- [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb)
- [Tensorflow concepts](/notebooks/mlcc/tensorflow_programming_concepts.ipynb)
- [First steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb)
- [Intro to neural nets](/notebooks/mlcc/intro_to_neural_nets.ipynb)
- [Intro to sparse data and embeddings](/notebooks/mlcc/intro_to_sparse_data_and_embeddings.ipynb)
<a name="using-accelerated-hardware"></a>
### Using Accelerated Hardware
- [TensorFlow with GPUs](/notebooks/gpu.ipynb)
- [TensorFlow with TPUs](/notebooks/tpu.ipynb)
<a name="machine-learning-examples"></a>
## Machine Learning Examples
To see end-to-end examples of the interactive machine learning analyses that Colaboratory makes possible, check out these tutorials using models from [TensorFlow Hub](https://tfhub.dev).
A few featured examples:
- [Retraining an Image Classifier](https://tensorflow.org/hub/tutorials/tf2_image_retraining): Build a Keras model on top of a pre-trained image classifier to distinguish flowers.
- [Text Classification](https://tensorflow.org/hub/tutorials/tf2_text_classification): Classify IMDB movie reviews as either *positive* or *negative*.
- [Style Transfer](https://tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization): Use deep learning to transfer style between images.
- [Multilingual Universal Sentence Encoder Q&A](https://tensorflow.org/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa): Use a machine learning model to answer questions from the SQuAD dataset.
- [Video Interpolation](https://tensorflow.org/hub/tutorials/tweening_conv3d): Predict what happened in a video between the first and the last frame.
# **Fraud Detection & Model Evaluation** (SOLUTION)
Source: [https://github.com/d-insight/code-bank.git](https://github.com/dsfm-org/code-bank.git)
License: [MIT License](https://opensource.org/licenses/MIT). See open source [license](LICENSE) in the Code Bank repository.
-------------
## Overview
In this project, we will explore different model evaluation metrics in the context of fraud detection.
Data source: [Kaggle](https://www.kaggle.com/c/ieee-fraud-detection/overview).
| Feature name | Variable Type | Description
|------------------|---------------|--------------------------------------------------------
|isFraud | Categorical | Target variable, where 1 = fraud and 0 = no fraud
|TransactionAMT | Continuous | Transaction payment amount in USD
|ProductCD | Categorical | Product code, the product for each transaction
|card1 - card6 | Continuous | Payment card information, such as card type, card category, issue bank, country, etc.
|dist | Continuous | Distance
|C1 - C14 | Continuous | Counting, such as how many addresses are found to be associated with the payment card, etc. The actual meaning is masked.
|D1 - D15 | Continuous | Timedelta, such as days between previous transaction, etc.
-------------
## **Part 0**: Setup
### Import Packages
```
# Import packages
import lightgbm
import pandas as pd
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [8, 6]
import seaborn as sn
import warnings
warnings.filterwarnings("ignore", category=RuntimeWarning)
# Constants
TRAIN_PATH = 'data/train.csv'
TEST_PATH = 'data/test.csv'
THRESHOLDS = [i/100 for i in range(0, 101)]
SEED = 1234
```
## **Part 1**: Exploratory data analysis
```
# Load data
train_df = pd.read_csv(TRAIN_PATH)
test_df = pd.read_csv(TEST_PATH)
train_df.shape
# Target distribution in training data
train_df['isFraud'].value_counts() / len(train_df)
# Split into training and testing data
feature_names = [col for col in train_df.columns if col not in ['isFraud', 'addr1', 'addr2', 'P_emaildomain', 'R_emaildomain']]
X_train, y_train = train_df[feature_names], train_df['isFraud']
X_test, y_test = test_df[feature_names], test_df['isFraud']
```
## **Part 2**: Fit model
```
# Train model
model = lightgbm.LGBMClassifier(random_state=SEED)
model.fit(X_train, y_train)
# Evaluate model
y_test_pred = model.predict_proba(X_test)[:, 1]
print('The first 10 predictions ...')
y_test_pred[:10]
```
## **Part 3**: Compare evaluation metrics
### __a)__: Accuracy
$\frac{TP + TN}{TP + FP + TN + FN}$
Proportion of binary predictions that are correct, for a given threshold.
```
scores = []
for t in THRESHOLDS:
y_test_pred_binary = [int(i >= t) for i in y_test_pred]
scores.append(accuracy_score(y_test, y_test_pred_binary))
max_t = THRESHOLDS[np.argmax(scores)]
y_test_pred_binary = [int(i >= max_t) for i in y_test_pred]
print('Maximum accuracy of {}% at threshold {}'.format(round(max(scores), 2)*100, max_t))
# Plot scores
plt.plot(THRESHOLDS, scores)
plt.vlines(max_t, ymin=0, ymax=1, color='r')
plt.xlabel('Thresholds')
plt.ylabel('Accuracy')
plt.show()
```
---
<img src="images/accuracy.png" width="800" height="800" align="center"/>
### __b)__: Confusion matrix
```
# Plot the confusion matrix
sn.heatmap(confusion_matrix(y_test, y_test_pred_binary), annot=True, fmt='.0f')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()
```
### __c)__: True positive rate (recall, sensitivity)
$\frac{TP}{TP+FN}$
"Collect them all!" – High recall might give you some bad items, but it'll also return most of the good items.
How many fraudulent transactions do we “recall” out of all actual fraudulent transactions, for a given threshold.
```
scores = []
for t in THRESHOLDS:
y_test_pred_binary = [int(i >= t) for i in y_test_pred]
tn, fp, fn, tp = confusion_matrix(y_test, y_test_pred_binary).ravel()
true_positive_rate = tp / (tp + fn)
scores.append(true_positive_rate)
max_t = THRESHOLDS[np.argmax(scores)]
y_test_pred_binary = [int(i >= max_t) for i in y_test_pred]
print('Maximum TPR of {}% at threshold {}'.format(round(max(scores), 4)*100, max_t))
# Plot scores
plt.plot(THRESHOLDS, scores)
plt.vlines(max_t, ymin=0, ymax=1, color='r')
plt.xlabel('Thresholds')
plt.ylabel('TPR')
plt.show()
```
---
<img src="images/tpr.png" width="800" height="800" align="center"/>
### __d)__: False positive rate (Type I error)
$\frac{FP}{FP+TN}$
"False alarm!"
The fraction of false alarms raised by the model, for a given threshold.
```
scores = []
for t in THRESHOLDS:
y_test_pred_binary = [int(i >= t) for i in y_test_pred]
tn, fp, fn, tp = confusion_matrix(y_test, y_test_pred_binary).ravel()
false_positive_rate = fp / (fp + tn)
scores.append(false_positive_rate)
min_t = THRESHOLDS[np.argmin(scores)]
y_test_pred_binary = [int(i >= min_t) for i in y_test_pred]
print('Minimum FPR of {}% at threshold {}'.format(round(min(scores), 4)*100, min_t))
# Plot scores
plt.plot(THRESHOLDS, scores)
plt.vlines(min_t, ymin=0, ymax=1, color='r')
plt.xlabel('Thresholds')
plt.ylabel('FPR')
plt.show()
```
---
<img src="images/fpr.png" width="800" height="800" align="center"/>
### __e)__: False negative rate (Type II error)
$\frac{FN}{FN+TP}$
"Dammit, we missed it!"
The fraction of fraudulent transactions missed by the model, for a given threshold.
```
scores = []
for t in THRESHOLDS:
y_test_pred_binary = [int(i >= t) for i in y_test_pred]
tn, fp, fn, tp = confusion_matrix(y_test, y_test_pred_binary).ravel()
false_negative_rate = fn / (fn + tp)
scores.append(false_negative_rate)
min_t = THRESHOLDS[np.argmin(scores)]
y_test_pred_binary = [int(i >= min_t) for i in y_test_pred]
print('Minimum FNR of {}% at threshold {}'.format(round(min(scores), 4)*100, min_t))
# Plot scores
plt.plot(THRESHOLDS, scores)
plt.vlines(min_t, ymin=0, ymax=1, color='r')
plt.xlabel('Thresholds')
plt.ylabel('FNR')
plt.show()
```
---
<img src="images/fnr.png" width="800" height="800" align="center"/>
### __f)__: Positive predictive value (precision)
$\frac{TP}{TP+FP}$
"Don't waste my time!"
High precision might leave some good ideas out, but what it returns is of high quality (i.e. very precise).
```
scores = []
for t in THRESHOLDS:
y_test_pred_binary = [int(i >= t) for i in y_test_pred]
tn, fp, fn, tp = confusion_matrix(y_test, y_test_pred_binary).ravel()
ppv = tp / (tp + fp)
scores.append(ppv)
max_t = THRESHOLDS[np.argmax(scores)]
y_test_pred_binary = [int(i >= max_t) for i in y_test_pred]
print('Maximum PPV of {}% at threshold {}'.format(round(max(scores), 4)*100, max_t))
# Plot scores
plt.plot(THRESHOLDS, scores)
plt.vlines(max_t, ymin=0, ymax=1, color='r')
plt.xlabel('Thresholds')
plt.ylabel('PPV')
plt.show()
```
### __g)__: F1 score
"Let's just have one metric."
A balance between precision (accuracy over cases predicted to be positive) and recall (actual positive cases that correctly get a positive prediction), for a given threshold.
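In the notation of the sections above, F1 is the harmonic mean of precision and recall:

$F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} = \frac{2TP}{2TP + FP + FN}$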
```
scores = []
for t in THRESHOLDS:
y_test_pred_binary = [int(i >= t) for i in y_test_pred]
f1 = f1_score(y_test, y_test_pred_binary)
scores.append(f1)
max_t = THRESHOLDS[np.argmax(scores)]
y_test_pred_binary = [int(i >= max_t) for i in y_test_pred]
print('Maximum F1 of {}% at threshold {}'.format(round(max(scores), 4)*100, max_t))
# Plot scores
plt.plot(THRESHOLDS, scores)
plt.vlines(max_t, ymin=0, ymax=1, color='r')
plt.xlabel('Thresholds')
plt.ylabel('F1 score')
plt.show()
```
---
<img src="images/f1.png" width="800" height="800" align="center"/>
## Bonus materials
- Google's Cassie Kozyrkov on precision and recall [on Youtube](https://www.youtube.com/watch?v=O4joFUqvz40)
```
import sys
sys.path.append('/opt/miniconda3/lib/python3.7/site-packages')
from urllib.request import urlopen
from urllib.request import urlretrieve
import urllib
from urllib.error import HTTPError
from bs4 import BeautifulSoup
import re
import os
import pandas as pd
import numpy as np
from selenium import webdriver
from time import sleep
def getPageNum(url):
driver.get(url)
pageNumber = driver.find_element_by_xpath("//label[@class='mg_r_10']")
number = int( pageNumber.text.strip('共').strip('页') )  # the label reads "共N页" ("N pages in total")
return number
def findProjectURL(url,ID,name):
# 项目页数
pageNumber = getPageNum(url)
colName = ['brand', 'link']
df = pd.DataFrame(columns=colName)
# for each page
for i in range(1,pageNumber+1):
# find the links to individual projects
projectURL12 = "https://www.digitaling.com/company/projects/"+ID+"/latest/"+str(i)
driver.get(projectURL12)
for j in range(1,13):
try:
link = driver.find_element_by_xpath("//*[@id='warp']/div/div[2]/div[2]/div["+str(j)+"]/div[1]/a")
except BaseException:
pass
else:
link = link.get_attribute("href")
rawName = name+'_'+str(i)+'_'+str(j)
df.loc[rawName] = [name,link]
# print(df)
return df
def getImage(url,name,brand):
try:
driver.get(url)
driver.implicitly_wait(10) # implicit wait
except BaseException:
pass
else:
# project images
try:
imgList = driver.find_elements_by_xpath("//*[@id='article_con']/p/img")
except BaseException:
pass
else:
for i in range(0,len(imgList)):
imgURL = imgList[i].get_attribute("data-original")
# create the folder if it does not exist
path = "/Volumes/Backup Plus/brandData/" + brand
isExists=os.path.exists(path)
if not isExists:
os.makedirs(path)
# save the image
imgName = "/Volumes/Backup Plus/brandData/"+ brand +'/' + name +'_' + str(i) + ".jpg"
try:
urlretrieve(imgURL, imgName)
except BaseException:
pass
else:
# print("download"+imgName)
pass
def findBrand(brandName,brandID):
# projects page
brandURL = "https://www.digitaling.com/company/projects/"+brandID
driver.get(brandURL)
# all project links
try:
projectList = findProjectURL(brandURL,brandID,brandName)
except BaseException:
pass
else:
projectList = projectList.reset_index() # reset to an auto-incrementing index
# fetch the images in each project
for i in range(0,len(projectList)):
link = projectList.loc[i,"link"]
name = projectList.loc[i,"index"]
brand = projectList.loc[i,"brand"]
getImage(link,name,brand)
# run the browser in headless mode
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
driver = webdriver.Chrome(chrome_options=chrome_options)
data = pd.read_csv('data/industry/total.csv')
for i in range(0,len(data)):
brandName = data.loc[i,'品牌']
path = "/Volumes/Backup Plus/brandData/" + brandName
isExists=os.path.exists(path)
if not isExists:
brandLink = data.loc[i,'链接']
brandID = brandLink.replace("https://www.digitaling.com/company/", "")  # remove the URL prefix (str.strip would drop matching characters, not the prefix)
findBrand(brandName,brandID)
print(str(i)+brandName)
else:
print(str(i)+brandName+" already crawled")
driver.close()
```
## How-to
1. You need to use [modeling.py](modeling.py) from the extractive-summarization folder: an improved BERT model that accepts text longer than 512 tokens.
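The details live in modeling.py, but a common way to let BERT accept more than 512 tokens is to enlarge the learned position-embedding table, for example by tiling (or interpolating) the pretrained rows. A minimal NumPy sketch of the tiling idea; the names `extend_position_embeddings` and `pretrained_pos_emb` are illustrative, not taken from modeling.py:

```python
import numpy as np

def extend_position_embeddings(pretrained_pos_emb, new_max_len):
    """Tile a pretrained (max_len, hidden) position-embedding table to new_max_len rows."""
    old_len, hidden = pretrained_pos_emb.shape
    reps = -(-new_max_len // old_len)  # ceiling division
    # repeat the pretrained table and truncate to the requested length
    return np.tile(pretrained_pos_emb, (reps, 1))[:new_max_len]

pretrained_pos_emb = np.random.randn(512, 768)
long_emb = extend_position_embeddings(pretrained_pos_emb, 2048)
print(long_emb.shape)  # (2048, 768)
```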
```
import tensorflow as tf
import numpy as np
import pickle
with open('dataset-bert.pkl', 'rb') as fopen:
dataset = pickle.load(fopen)
dataset.keys()
BERT_VOCAB = 'uncased_L-12_H-768_A-12/vocab.txt'
BERT_INIT_CHKPNT = 'uncased_L-12_H-768_A-12/bert_model.ckpt'
BERT_CONFIG = 'uncased_L-12_H-768_A-12/bert_config.json'
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
import modeling
tokenization.validate_case_matches_checkpoint(True,BERT_INIT_CHKPNT)
tokenizer = tokenization.FullTokenizer(
vocab_file=BERT_VOCAB, do_lower_case=True)
bert_config = modeling.BertConfig.from_json_file(BERT_CONFIG)
epoch = 20
batch_size = 8
warmup_proportion = 0.1
num_train_steps = int(len(dataset['train_texts']) / batch_size * epoch)
num_warmup_steps = int(num_train_steps * warmup_proportion)
class Model:
def __init__(
self,
learning_rate = 2e-5,
):
self.X = tf.placeholder(tf.int32, [None, None])
self.segment_ids = tf.placeholder(tf.int32, [None, None])
self.input_masks = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.float32, [None, None])
self.mask = tf.placeholder(tf.int32, [None, None])
self.clss = tf.placeholder(tf.int32, [None, None])
mask = tf.cast(self.mask, tf.float32)
model = modeling.BertModel(
config=bert_config,
is_training=True,
input_ids=self.X,
input_mask=self.input_masks,
token_type_ids=self.segment_ids,
use_one_hot_embeddings=False)
outputs = tf.gather(model.get_sequence_output(), self.clss, axis = 1, batch_dims = 1)
self.logits = tf.layers.dense(outputs, 1)
self.logits = tf.squeeze(self.logits, axis=-1)
self.logits = self.logits * mask
crossent = tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logits, labels=self.Y)
crossent = crossent * mask
crossent = tf.reduce_sum(crossent)
total_size = tf.reduce_sum(mask)
self.cost = tf.div_no_nan(crossent, total_size)
self.optimizer = optimization.create_optimizer(self.cost, learning_rate,
num_train_steps, num_warmup_steps, False)
l = tf.round(tf.sigmoid(self.logits))
self.accuracy = tf.reduce_mean(tf.cast(tf.boolean_mask(l, tf.equal(self.Y, 1)), tf.float32))
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(learning_rate = 1e-5)
sess.run(tf.global_variables_initializer())
var_lists = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope = 'bert')
saver = tf.train.Saver(var_list = var_lists)
saver.restore(sess, BERT_INIT_CHKPNT)
def pad_sentence_batch(sentence_batch, pad_int):
padded_seqs = []
seq_lens = []
max_sentence_len = max([len(sentence) for sentence in sentence_batch])
for sentence in sentence_batch:
padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
seq_lens.append(len(sentence))
return padded_seqs, seq_lens
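# For example, a ragged batch is right-padded to the longest sequence:
#   pad_sentence_batch([[5, 6, 7], [8]], 0) -> ([[5, 6, 7], [8, 0, 0]], [3, 1])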
train_X = dataset['train_texts']
test_X = dataset['test_texts']
train_clss = dataset['train_clss']
test_clss = dataset['test_clss']
train_Y = dataset['train_labels']
test_Y = dataset['test_labels']
train_segments = dataset['train_segments']
test_segments = dataset['test_segments']
import tqdm
for e in range(epoch):
pbar = tqdm.tqdm(
range(0, len(train_X), batch_size), desc = 'minibatch loop')
train_loss, train_acc, test_loss, test_acc = [], [], [], []
for i in pbar:
index = min(i + batch_size, len(train_X))
batch_x, _ = pad_sentence_batch(train_X[i : index], 0)
batch_y, _ = pad_sentence_batch(train_Y[i : index], 0)
batch_segments, _ = pad_sentence_batch(train_segments[i : index], 0)
batch_clss, _ = pad_sentence_batch(train_clss[i : index], -1)
batch_clss = np.array(batch_clss)
batch_x = np.array(batch_x)
batch_mask = 1 - (batch_clss == -1)
batch_clss[batch_clss == -1] = 0
mask_src = 1 - (batch_x == 0)
feed = {model.X: batch_x,
model.Y: batch_y,
model.mask: batch_mask,
model.clss: batch_clss,
model.segment_ids: batch_segments,
model.input_masks: mask_src}
accuracy, loss, _ = sess.run([model.accuracy,model.cost,model.optimizer],
feed_dict = feed)
train_loss.append(loss)
train_acc.append(accuracy)
pbar.set_postfix(cost = loss, accuracy = accuracy)
pbar = tqdm.tqdm(
range(0, len(test_X), batch_size), desc = 'minibatch loop')
for i in pbar:
index = min(i + batch_size, len(test_X))
batch_x, _ = pad_sentence_batch(test_X[i : index], 0)
batch_y, _ = pad_sentence_batch(test_Y[i : index], 0)
batch_segments, _ = pad_sentence_batch(test_segments[i : index], 0)
batch_clss, _ = pad_sentence_batch(test_clss[i : index], -1)
batch_clss = np.array(batch_clss)
batch_x = np.array(batch_x)
batch_mask = 1 - (batch_clss == -1)
batch_clss[batch_clss == -1] = 0
mask_src = 1 - (batch_x == 0)
feed = {model.X: batch_x,
model.Y: batch_y,
model.mask: batch_mask,
model.clss: batch_clss,
model.segment_ids: batch_segments,
model.input_masks: mask_src}
accuracy, loss = sess.run([model.accuracy,model.cost],
feed_dict = feed)
test_loss.append(loss)
test_acc.append(accuracy)
pbar.set_postfix(cost = loss, accuracy = accuracy)
print('epoch %d, training avg loss %f, training avg acc %f'%(e+1,
np.mean(train_loss),np.mean(train_acc)))
print('epoch %d, testing avg loss %f, testing avg acc %f'%(e+1,
np.mean(test_loss),np.mean(test_acc)))
```
<a href="https://colab.research.google.com/github/GuysBarash/ML_Workshop/blob/main/Bayesian_Agent.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
from scipy.optimize import minimize_scalar
from scipy.stats import beta
from scipy.stats import binom
from scipy.stats import bernoulli
from matplotlib import animation
from IPython.display import HTML, clear_output
from matplotlib import rc
matplotlib.use('Agg')
agent_truth_p = 0.8 #@param {type: "slider", min: 0.0, max: 1.0, step:0.01}
repeats = 700
starting_guess_for_b = 1 # Agent's incorrect answers
starting_guess_for_a = 1 # Agent's correct answers
```
# Example
```
def plotPrior(a, b):
fig = plt.figure()
ax = plt.axes()
plt.xlim(0, 1)
x = np.linspace(0, 1, 1000)
y = beta.pdf(x, a, b)
x_guess = x[y.argmax()]
ax.plot(x, y);
maximal_point = ax.axvline(x=x_guess, label=f'Best guess for prior: {x_guess:>.2f}');
ax.legend();
return
```
The agent has a chance of "p" of telling the truth, and a chance of 1-p of randomly selecting an answer
```
def agentDecision(real_answer,options,agent_truth_p):
choice = bernoulli.rvs(agent_truth_p)
if choice == 1:
return real_answer
else:
choice = bernoulli.rvs(0.5)
if choice == 1:
return options[0]
else:
return options[1]
b = starting_guess_for_b
a = starting_guess_for_a
```
Prior before any testing takes place. You can see it's balanced.
```
print("p = ", a / (a + b))
plotPrior(a, b)
agent_log = pd.DataFrame(index=range(repeats),columns=['a','b','Real type','Agent answer','Agent is correct'])
data_validity_types = ["BAD","GOOD"]
for i in range(repeats):
data_is_valid = np.random.choice(data_validity_types)
agent_response_on_the_data = agentDecision(data_is_valid,data_validity_types,agent_truth_p)
agent_is_correct = data_is_valid == agent_response_on_the_data
agent_log.loc[i,['Real type','Agent answer','Agent is correct']] = data_is_valid, agent_response_on_the_data, agent_is_correct
# a and b update dynamically each step
a += int(agent_is_correct)
b += int(not agent_is_correct)
agent_log.loc[i,['a','b']] = a, b
correct_answers = agent_log['Agent is correct'].sum()
total_answers = agent_log['Agent is correct'].count()
percentage = 0
if total_answers > 0:
percentage = float(correct_answers) / total_answers
print(f"Agent was right {correct_answers}/{total_answers} ({100 * percentage:>.2f} %) of the times.")
plotPrior(a, b)
```
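Because each answer is an independent Bernoulli trial, the Beta prior is conjugate and the loop above has a closed form: after $k$ correct answers out of $m$, the posterior is $\mathrm{Beta}(a_0+k,\; b_0+m-k)$. A minimal sketch of the same update, stripped of the notebook's logging. Note the agent's expected correctness rate is $p + (1-p)/2 = 0.9$ for $p = 0.8$, since a random guess is right half the time, which is why the posterior centers near 0.9 rather than 0.8:

```python
import numpy as np
from scipy.stats import bernoulli

p_correct = 0.8 + 0.2 * 0.5          # truth prob + half of the guessing prob -> 0.9
a, b = 1, 1                          # Beta(1, 1) uniform prior
outcomes = bernoulli.rvs(p_correct, size=2000, random_state=0)
a += outcomes.sum()                  # correct answers
b += len(outcomes) - outcomes.sum()  # incorrect answers
posterior_mean = a / (a + b)
# the posterior concentrates on the correctness rate (~0.9), not on p itself (0.8)
assert abs(posterior_mean - p_correct) < 0.05
```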
# Dynamic example
```
# create a figure and axes
fig = plt.figure(figsize=(12,5));
ax = plt.subplot(1,1,1);
# set up the subplots as needed
ax.set_xlim(( 0, 1));
ax.set_ylim((0, 10));
# create objects that will change in the animation. These are
# initially empty, and will be given new values for each frame
# in the animation.
txt_title = ax.set_title('');
maximal_point = ax.axvline(x=0, label='line at x = {}'.format(0));
line1, = ax.plot([], [], 'b', lw=2); # ax.plot returns a list of 2D line objects
clear_output()
plt.close('all')
def getPriorFrame(frame_n):
global agent_log
a = agent_log.loc[frame_n,'a']
b = agent_log.loc[frame_n,'b']
x = np.linspace(0, 1, 1000)
y = beta.pdf(x, a, b)
x_guess = x[y.argmax()]
ax.legend()
maximal_point.set_xdata(x_guess)
maximal_point.set_label(f'Best guess for prior: {x_guess:>.2f}')
line1.set_data(x, y)
txt_title.set_text(f'Agent step = {frame_n:4d}, a = {a}, b= {b}')
return line1,
num_of_steps = 50
frames =[0]+ list(range(0, len(agent_log), int(len(agent_log) / num_of_steps))) + [agent_log.index[-1]]
ani = animation.FuncAnimation(fig, getPriorFrame, frames,
interval=100, blit=True)
rc('animation', html='html5')
ani
```
# Quantum Counting
To understand this algorithm, it is important that you first understand both Grover’s algorithm and the quantum phase estimation algorithm. Whereas Grover’s algorithm attempts to find a solution to the Oracle, the quantum counting algorithm tells us how many of these solutions there are. This algorithm is interesting as it combines both quantum search and quantum phase estimation.
## Contents
1. [Overview](#overview)
1.1 [Intuition](#intuition)
1.2 [A Closer Look](#closer_look)
2. [The Code](#code)
2.1 [Initialising our Code](#init_code)
2.2 [The Controlled-Grover Iteration](#cont_grover)
2.3 [The Inverse QFT](#inv_qft)
2.4 [Putting it Together](#putting_together)
3. [Simulating](#simulating)
4. [Finding the Number of Solutions](#finding_m)
5. [Exercises](#exercises)
6. [References](#references)
## 1. Overview <a id='overview'></a>
### 1.1 Intuition <a id='intuition'></a>
In quantum counting, we simply use the quantum phase estimation algorithm to find an eigenvalue of a Grover search iteration. You will remember that an iteration of Grover’s algorithm, $G$, rotates the state vector by $\theta$ in the $|\omega\rangle$, $|s’\rangle$ basis:

The proportion of solutions in our search space affects the difference between $|s\rangle$ and $|s’\rangle$. For example, if there are not many solutions, $|s\rangle$ will be very close to $|s’\rangle$ and $\theta$ will be very small. It turns out that the eigenvalues of the Grover iterator are $e^{\pm i\theta}$, and we can extract $\theta$ using quantum phase estimation (QPE) to estimate the number of solutions ($M$).
### 1.2 A Closer Look <a id='closer_look'></a>
In the $|\omega\rangle$,$|s’\rangle$ basis we can write the Grover iterator as the matrix:
$$
G =
\begin{pmatrix}
\cos{\theta} && -\sin{\theta}\\
\sin{\theta} && \cos{\theta}
\end{pmatrix}
$$
The matrix $G$ has eigenvectors:
$$
\begin{pmatrix}
-i\\
1
\end{pmatrix}
,
\begin{pmatrix}
i\\
1
\end{pmatrix}
$$
With the aforementioned eigenvalues $e^{\pm i\theta}$. Fortunately, we do not need to prepare our register in either of these eigenstates: the state $|s\rangle$ lies in the space spanned by $|\omega\rangle$ and $|s’\rangle$, and is thus a superposition of the two eigenvectors.
$$
|s\rangle = \alpha |\omega\rangle + \beta|s'\rangle
$$
As a result, the output of the QPE algorithm will be a superposition of the two phases, and when we measure the register we will obtain one of these two values! We can then use some simple maths to get our estimate of $M$.

## 2. The Code <a id='code'></a>
### 2.1 Initialising our Code <a id='init_code'></a>
First, let’s import everything we’re going to need:
```
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
import qiskit
from qiskit import QuantumCircuit, transpile, assemble, Aer
# import basic plot tools
from qiskit.visualization import plot_histogram
```
In this guide we will choose to ‘count’ on the first 4 qubits of our circuit (we call the number of counting qubits $t$, so $t = 4$), and to 'search' through the last 4 qubits ($n = 4$). With this in mind, we can start creating the building blocks of our circuit.
### 2.2 The Controlled-Grover Iteration <a id='cont_grover'></a>
We have already covered Grover iterations in the Grover’s algorithm section. Here is an example with an Oracle we know has 5 solutions ($M = 5$) of 16 states ($N = 2^n = 16$), combined with a diffusion operator:
```
def example_grover_iteration():
"""Small circuit with 5/16 solutions"""
# Do circuit
qc = QuantumCircuit(4)
# Oracle
qc.h([2,3])
qc.ccx(0,1,2)
qc.h(2)
qc.x(2)
qc.ccx(0,2,3)
qc.x(2)
qc.h(3)
qc.x([1,3])
qc.h(2)
qc.mct([0,1,3],2)
qc.x([1,3])
qc.h(2)
# Diffuser
qc.h(range(3))
qc.x(range(3))
qc.z(3)
qc.mct([0,1,2],3)
qc.x(range(3))
qc.h(range(3))
qc.z(3)
return qc
```
Notice the python function takes no input and returns a `QuantumCircuit` object with 4 qubits. In the past the functions you created might have modified an existing circuit, but a function like this allows us to turn the `QuantumCircuit` object into a single gate we can then control.
We can use `.to_gate()` and `.control()` to create a controlled gate from a circuit. We will call our Grover iterator `grit` and the controlled Grover iterator `cgrit`:
```
# Create controlled-Grover
grit = example_grover_iteration().to_gate()
cgrit = grit.control()
cgrit.label = "Grover"
```
### 2.3 The Inverse QFT <a id='inv_qft'></a>
We now need to create an inverse QFT. This code implements the QFT on n qubits:
```
def qft(n):
"""Creates an n-qubit QFT circuit"""
circuit = QuantumCircuit(n)
def swap_registers(circuit, n):
for qubit in range(n//2):
circuit.swap(qubit, n-qubit-1)
return circuit
def qft_rotations(circuit, n):
"""Performs qft on the first n qubits in circuit (without swaps)"""
if n == 0:
return circuit
n -= 1
circuit.h(n)
for qubit in range(n):
circuit.cp(np.pi/2**(n-qubit), qubit, n)
qft_rotations(circuit, n)
qft_rotations(circuit, n)
swap_registers(circuit, n)
return circuit
```
Again, note we have chosen to return another `QuantumCircuit` object, this is so we can easily invert the gate. We create the gate with t = 4 qubits as this is the number of counting qubits we have chosen in this guide:
```
qft_dagger = qft(4).to_gate().inverse()
qft_dagger.label = "QFT†"
```
### 2.4 Putting it Together <a id='putting_together'></a>
We now have everything we need to complete our circuit! Let’s put it together.
First we need to put all qubits in the $|+\rangle$ state:
```
# Create QuantumCircuit
t = 4 # no. of counting qubits
n = 4 # no. of searching qubits
qc = QuantumCircuit(n+t, t) # Circuit with n+t qubits and t classical bits
# Initialize all qubits to |+>
for qubit in range(t+n):
qc.h(qubit)
# Begin controlled Grover iterations
iterations = 1
for qubit in range(t):
for i in range(iterations):
qc.append(cgrit, [qubit] + [*range(t, n+t)])
iterations *= 2
# Do inverse QFT on counting qubits
qc.append(qft_dagger, range(t))
# Measure counting qubits
qc.measure(range(t), range(t))
# Display the circuit
qc.draw()
```
Great! Now let’s see some results.
## 3. Simulating <a id='simulating'></a>
```
# Execute and see results
qasm_sim = Aer.get_backend('qasm_simulator')
transpiled_qc = transpile(qc, qasm_sim)
qobj = assemble(transpiled_qc)
job = qasm_sim.run(qobj)
hist = job.result().get_counts()
plot_histogram(hist)
```
We can see two values stand out, having a much higher probability of measurement than the rest. These two values correspond to $e^{i\theta}$ and $e^{-i\theta}$, but we can’t see the number of solutions yet. We need a little more processing to get this information, so first let us get our output into something we can work with (an `int`).
We will get the string of the most probable result from our output data:
```
measured_str = max(hist, key=hist.get)
```
Let us now store this as an integer:
```
measured_int = int(measured_str,2)
print("Register Output = %i" % measured_int)
```
## 4. Finding the Number of Solutions (M) <a id='finding_m'></a>
We will create a function, `calculate_M()` that takes as input the decimal integer output of our register, the number of counting qubits ($t$) and the number of searching qubits ($n$).
First we want to get $\theta$ from `measured_int`. You will remember that QPE gives us a measured $\text{value} = 2^t \phi$ from the eigenvalue $e^{2\pi i\phi}$, so to get $\theta$ we need to do:
$$
\theta = \text{value}\times\frac{2\pi}{2^t}
$$
Or, in code:
```
theta = (measured_int/(2**t))*math.pi*2
print("Theta = %.5f" % theta)
```
You may remember that we can get the angle $\theta/2$ from the inner product of $|s\rangle$ and $|s’\rangle$:

$$
\langle s'|s\rangle = \cos{\tfrac{\theta}{2}}
$$
And that $|s\rangle$ (a uniform superposition of computational basis states) can be written in terms of $|\omega\rangle$ and $|s'\rangle$ as:
$$
|s\rangle = \sqrt{\tfrac{M}{N}}|\omega\rangle + \sqrt{\tfrac{N-M}{N}}|s'\rangle
$$
The inner product of $|s\rangle$ and $|s'\rangle$ is:
$$
\langle s'|s\rangle = \sqrt{\frac{N-M}{N}} = \cos{\tfrac{\theta}{2}}
$$
From this, squaring both sides gives $\frac{N-M}{N} = \cos^2{\tfrac{\theta}{2}}$, so $\frac{M}{N} = 1 - \cos^2{\tfrac{\theta}{2}} = \sin^2{\tfrac{\theta}{2}}$, which rearranges to:
$$
N\sin^2{\frac{\theta}{2}} = M
$$
From the [Grover's algorithm](https://qiskit.org/textbook/ch-algorithms/grover.html) chapter, you will remember that a common way to create a diffusion operator, $U_s$, is actually to implement $-U_s$. This implementation is used in the Grover iteration provided in this chapter. In a normal Grover search this global phase can be ignored, but now that we are controlling our Grover iterations, it does have an effect. The result is that we have effectively searched for the states that are _not_ solutions, so our quantum counting algorithm will tell us how many states are _not_ solutions. To fix this, we simply calculate $N-M$.
And in code:
```
N = 2**n
M = N * (math.sin(theta/2)**2)
print("No. of Solutions = %.1f" % (N-M))
```
And we can see we have (approximately) the correct answer! We can approximately calculate the error in this answer using:
```
m = t - 1 # Upper bound: Will be less than this
err = (math.sqrt(2*M*N) + N/(2**(m+1)))*(2**(-m))
print("Error < %.2f" % err)
```
Explaining the error calculation is outside the scope of this article, but an explanation can be found in [1].
Finally, here is the finished function `calculate_M()`:
```
def calculate_M(measured_int, t, n):
"""For Processing Output of Quantum Counting"""
# Calculate Theta
theta = (measured_int/(2**t))*math.pi*2
print("Theta = %.5f" % theta)
# Calculate No. of Solutions
N = 2**n
M = N * (math.sin(theta/2)**2)
print("No. of Solutions = %.1f" % (N-M))
# Calculate Upper Error Bound
m = t - 1 #Will be less than this (out of scope)
err = (math.sqrt(2*M*N) + N/(2**(m+1)))*(2**(-m))
print("Error < %.2f" % err)
```
## 5. Exercises <a id='exercises'></a>
1. Can you create an oracle with a different number of solutions? How does the accuracy of the quantum counting algorithm change?
2. Can you adapt the circuit to use more or less counting qubits to get a different precision in your result?
## 6. References <a id='references'></a>
[1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA.
```
import qiskit
qiskit.__qiskit_version__
```
# WeatherPy
----
## Observable Trends
* There is a moderate negative correlation between latitude and maximum temperature. This means that as the latitude increases, moving up into the northern hemisphere, the maximum temperature decreases correspondingly. Also, as expected, the maximum temperature increases as the cities approach the equator.
* There does not appear to be any correlation between latitude and humidity, nor any strong correlation between latitude and cloudiness or latitude and wind speed. This indicates that longitudinal differences are likely responsible for the differences in these variables among cities at similar latitudes.
* Despite the lack of strong correlation, wind speeds do seem higher in the more extreme northern and southern latitudes, with the outliers occurring at the extremes of each hemisphere.
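The correlation claims above can be quantified with `scipy.stats.pearsonr`; on the real data you would pass `weather_data['Latitude']` and `weather_data['Max Temperature']` once that DataFrame is built below. A sketch on synthetic, northern-hemisphere-style data, where temperature falls with latitude:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
lat = rng.uniform(0, 90, 500)                       # synthetic northern-hemisphere cities
max_temp = 90 - 0.7 * lat + rng.normal(0, 10, 500)  # cooler at higher latitudes, plus noise
r, p_value = pearsonr(lat, max_temp)
assert r < -0.5                                     # moderate-to-strong negative correlation
```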
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
import json
# Import API key
from api_keys import api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
```
## Generate Cities List
```
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
# If the city is unique, then add it to a our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
```
### Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
```
# Get Weather Data
url = "http://api.openweathermap.org/data/2.5/weather?"
units = "imperial"
query_url = f"{url}appid={api_key}&units={units}&q="
weather_response = requests.get(query_url + city)
weather_json = weather_response.json()
print(json.dumps(weather_json, indent=4))
print(requests.get(query_url + city))
# Set Up Lists to Hold Reponse Info
city_name = []
country = []
date = []
latitude = []
longitude = []
max_temperature = []
humidity = []
cloudiness = []
wind_speed = []
# Processing Record Counter Starting at 1
processing_record = 1
# Print Starting Log Statement
print(f"Beginning Data Retrieval")
print(f"-------------------------------")
# Loop Through List of Cities & Perform a Request for Data on Each
for city in cities:
# Exception Handling
try:
response = requests.get(query_url + city).json()
city_name.append(response["name"])
country.append(response["sys"]["country"])
date.append(response["dt"])
latitude.append(response["coord"]["lat"])
longitude.append(response["coord"]["lon"])
max_temperature.append(response["main"]["temp_max"])
humidity.append(response["main"]["humidity"])
cloudiness.append(response["clouds"]["all"])
wind_speed.append(response["wind"]["speed"])
city_record = response["name"]
print(f"Processing Record {processing_record} | {city_record}")
# Increase Processing Record Counter by 1 For Each Loop
processing_record += 1
except:
print("City not found. Skipping...")
continue
# Print Ending Log Statement
print(f"-------------------------------")
print(f"Data Retrieval Complete")
print(f"-------------------------------")
```
### Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame
```
# Create a DataFrame from Cities, Latitude, Longitude, Temperature, Humidity, Cloudiness & Wind Speed
weather_dict = {
"City": city_name,
"Country": country,
"Date": date,
"Latitude": latitude,
"Longitude": longitude,
"Max Temperature": max_temperature,
"Humidity": humidity,
"Cloudiness": cloudiness,
"Wind Speed": wind_speed
}
weather_data = pd.DataFrame(weather_dict)
weather_data.count()
# Display DataFrame
weather_data.head()
# Export & Save Data Into a .csv.
weather_data.to_csv("weather_data.csv")
```
### Plotting the Data
* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.
#### Latitude vs. Temperature Plot
```
# Build Scatter Plot for Each Data Type
plt.scatter(weather_data["Latitude"], weather_data["Max Temperature"], facecolors="blue", marker="o", edgecolor="black")
# Incorporate Other Graph Properties
plt.title("City Latitude vs. Max Temperature (02/01/2019)")
plt.ylabel("Max Temperature (°F)")
plt.xlabel("Latitude")
plt.grid(True)
# Save Figure
plt.savefig("City_Latitude_vs_Max_Temperature.png")
# Show Plot
plt.show()
```
#### Latitude vs. Humidity Plot
```
# Build Scatter Plot for Each Data Type
plt.scatter(weather_data["Latitude"], weather_data["Humidity"], facecolors="green", marker="o", edgecolor="black")
# Incorporate Other Graph Properties
plt.title("City Latitude vs. Humidity (02/01/2019)")
plt.ylabel("Humidity (%)")
plt.xlabel("Latitude")
plt.grid(True)
# Save Figure
plt.savefig("City_Latitude_vs_Humidity.png")
# Show Plot
plt.show()
```
#### Latitude vs. Cloudiness Plot
```
# Build Scatter Plot for Each Data Type
plt.scatter(weather_data["Latitude"], weather_data["Cloudiness"], facecolors="red", marker="o", edgecolor="black")
# Incorporate Other Graph Properties
plt.title("City Latitude vs. Cloudiness (02/01/2019)")
plt.ylabel("Cloudiness (%)")
plt.xlabel("Latitude")
plt.grid(True)
# Save Figure
plt.savefig("City_Latitude_vs_Cloudiness.png")
# Show Plot
plt.show()
```
#### Latitude vs. Wind Speed Plot
```
# Build Scatter Plot for Each Data Type
plt.scatter(weather_data["Latitude"], weather_data["Wind Speed"], facecolors="yellow", marker="o", edgecolor="black")
# Incorporate Other Graph Properties
plt.title("City Latitude vs. Wind Speed (02/01/2019)")
plt.ylabel("Wind Speed (mph)")
plt.xlabel("Latitude")
plt.grid(True)
# Save Figure
plt.savefig("City_Latitude_vs_Wind_Speed.png")
# Show Plot
plt.show()
```
# Sentiment with Flair
Flair offers models that we can use out-of-the-box. One of those is the English sentiment model, which we will learn how to use here.
First, we need to make sure Flair has been installed, we do this in our CLI with:
```
pip install flair
```
Flair uses PyTorch/TensorFlow under the hood, so it's essential that you also have one of the two libraries (or both) installed. There are a few steps in applying sentiment analysis, these are:
1. Initializing the model.
2. Tokenizing input text.
3. Processing with the model.
4. *(Optional) Formatting the outputs.*
We then load the English sentiment model like so:
```
import flair
model = flair.models.TextClassifier.load("en-sentiment")
```
The first time this `load` method is run for the `'en-sentiment'` model the model will be downloaded. After this, the model is initialized. The `en-sentiment` model is a distilBERT model fitted with a classification head that outputs two classes - negative and positive.
Our next step is to tokenize input text. For this we use the Flair `Sentence` object, which we initialize by passing our text into it:
```
text = (
"I like you. I love you" # we are expecting a confidently positive sentiment here
)
sentence = flair.data.Sentence(text)
sentence
```
Here we now have the Flair `Sentence` object, which contains our text, alongside a *tokenized* version of it (each word/punctuation character is an individual token):
```
sentence.to_tokenized_string()
```
The next step is to process our tokenized inputs through our distilBERT classifier:
```
model.predict(sentence)
```
The `predict` method doesn't output our prediction, instead the predictions are added to our `sentence`:
```
sentence
```
Here we can see that we are predicting a `POSITIVE` sentiment with a probability of `0.9933`, which is **very** confident as expected. Let's repeat the process with something more negative.
```
text = "I hate it when I'm not learning"
sentence = flair.data.Sentence(text)
model.predict(sentence)
sentence
```
And we correctly predict a `NEGATIVE` sentiment. Finally, we will typically want to extract our predictions and format them into the format that we need for our own use-case (for example plotting sentiment over time). Let's take a look at how we do that.
The `Sentence` object provides us with a method called `get_labels`, we can use this to extract our sentiment prediction.
```
sentence.get_labels()
```
From this method we actually get a list, which contains our label object. To access each item in the list we need to dig a little deeper. We first access the label object by accessing the *0th* index of our list. Flair `Label` objects contain two attributes, `score` and `value` - which contain our prediction.
```
sentence.get_labels()[0]
sentence.get_labels()[0].score
sentence.get_labels()[0].value
```
Alternatively, we can access the label values directly (although not recommended) like so:
```
sentence.labels[0].score, sentence.labels[0].value
```
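For the optional formatting step (4), a small helper that collapses a `(value, score)` pair into one signed number is often all that's needed, for example when plotting sentiment over time. A sketch assuming the `value`/`score` attributes extracted above (the helper name is ours, not part of Flair):

```python
def signed_sentiment(value: str, score: float) -> float:
    """Map a Flair (value, score) pair to a signed score: + for POSITIVE, - for NEGATIVE."""
    return score if value == "POSITIVE" else -score

assert signed_sentiment("POSITIVE", 0.9933) == 0.9933
assert signed_sentiment("NEGATIVE", 0.9998) == -0.9998
```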
```
%matplotlib inline
import pymc3 as pm
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
plt.style.use(['seaborn-colorblind', 'seaborn-darkgrid'])
```
#### Code 2.1
```
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
```
#### Code 2.2
$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$
The probability of observing six W’s in nine tosses—under a value of p=0.5
```
stats.binom.pmf(6, n=9, p=0.5)
```
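The same number can be recovered from the factorial formula above using only the standard library, which is a useful sanity check on the `scipy` call:

```python
import math

n, w, p = 9, 6, 0.5
# n! / (w! (n-w)!) * p^w * (1-p)^(n-w)
prob = math.comb(n, w) * p**w * (1 - p)**(n - w)
assert abs(prob - 0.1640625) < 1e-12   # matches stats.binom.pmf(6, n=9, p=0.5)
```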
#### Code 2.3 and 2.5
Computing the posterior using a grid approximation.
In the book the following code is not inside a function, but this way is easier to play with different parameters
```
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
"""
"""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
#prior = (p_grid >= 0.5).astype(int) # truncated
#prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
```
#### Code 2.3
```
points = 20
w, n = 6, 9
p_grid, posterior = posterior_grid_approx(points, w, n)
plt.plot(p_grid, posterior, 'o-', label='success = {}\ntosses = {}'.format(w, n))
plt.xlabel('probability of water', fontsize=14)
plt.ylabel('posterior probability', fontsize=14)
plt.title('{} points'.format(points))
plt.legend(loc=0);
```
#### Code 2.6
Computing the posterior using the quadratic approximation
```
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_aproximation:
p = pm.Uniform('p', 0, 1)
w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
mean_q = pm.find_MAP()
std_q = ((1/pm.find_hessian(mean_q, vars=[p]))**0.5)[0]
mean_q['p'], std_q
norm = stats.norm(mean_q, std_q)
prob = .89
z = stats.norm.ppf([(1-prob)/2, (1+prob)/2])
pi = mean_q['p'] + std_q * z
pi
```
#### Code 2.7
```
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x , w+1, n-w+1),
label='True posterior')
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q['p'], std_q),
label='Quadratic approximation')
plt.legend(loc=0, fontsize=13)
plt.title('n = {}'.format(n), fontsize=14)
plt.xlabel('Proportion water', fontsize=14)
plt.ylabel('Density', fontsize=14);
import sys, IPython, scipy, matplotlib, platform
print("This notebook was createad on a computer %s running %s and using:\nPython %s\nIPython %s\nPyMC3 %s\nNumPy %s\nSciPy %s\nMatplotlib %s\n" % (platform.machine(), ' '.join(platform.linux_distribution()[:2]), sys.version[:5], IPython.__version__, pm.__version__, np.__version__, scipy.__version__, matplotlib.__version__))
```
# Section 1: Homework Exercises
This material provides some hands-on experience using the methods learned from the first day's material. They focus on building models using real-world data.
```
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns; sns.set_context('notebook')
import warnings
warnings.simplefilter("ignore")
import pymc3 as pm
import arviz as az
```
## Exercise: Comparing Two Groups with Binary Outcomes
Binary outcomes are common in clinical research:
- survival/death
- true/false
- presence/absence
- positive/negative
In practice, binary outcomes are encoded as ones (for event occurrences) and zeros (for non-occurrence). A single binary variable is distributed as a **Bernoulli** random variable:
$$f(x \mid p) = p^{x} (1-p)^{1-x}$$
In terms of inference, we are typically interested in whether $p$ is larger or smaller in one group relative to another.
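Before fitting the full model, it helps to note that with a flat $\mathrm{Beta}(1,1)$ prior the Bernoulli likelihood is conjugate, so each group's posterior is $\mathrm{Beta}(y+1,\, n-y+1)$ and $\Pr(p_2 > p_1)$ can be estimated by simple Monte Carlo. A sketch with hypothetical counts (not the IVH data used below):

```python
import numpy as np
from scipy.stats import beta

y1, n1 = 20, 100                     # hypothetical events/trials, group 1
y2, n2 = 35, 100                     # hypothetical events/trials, group 2
post1 = beta(y1 + 1, n1 - y1 + 1)    # conjugate posterior for p1
post2 = beta(y2 + 1, n2 - y2 + 1)    # conjugate posterior for p2
draws1 = post1.rvs(size=20000, random_state=1)
draws2 = post2.rvs(size=20000, random_state=2)
prob_p2_greater = np.mean(draws2 > draws1)
assert prob_p2_greater > 0.95        # 0.35 vs 0.20 is a clear difference at n=100
```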
To demonstrate the comparison of two groups with binary outcomes using Bayesian inference, we will use a sample pediatric dataset. Data on 671 infants with very low (<1600 grams) birth weight from 1981-87 were collected at Duke University Medical Center. Of interest is the relationship between the outcome intra-ventricular hemorrhage (IVH) and predictors such as birth weight, gestational age, presence of pneumothorax and mode of delivery.
```
vlbw = pd.read_csv('../data/vlbw.csv', index_col=0).dropna(axis=0, subset=['ivh', 'pneumo'])
vlbw.head()
```
To demonstrate binary data analysis, we will try to estimate the difference between the probability of an intra-ventricular hemorrhage for infants with and without a pneumothorax.
```
pd.crosstab(vlbw.ivh, vlbw.pneumo)
```
We will create a binary outcome by combining `definite` and `possible` into a single outcome.
```
ivh = vlbw.ivh.isin(['definite', 'possible']).astype(int).values
x = vlbw.pneumo.astype(int).values
```
Fit a model that evaluates the association of a pneumothorax with the presence of IVH.
```
with pm.Model() as ivh_model:
p = pm.Beta('p', 1, 1, shape=2)
bb_like = pm.Bernoulli('bb_like', p=p[x], observed=ivh)
p_diff = pm.Deterministic('p_diff', p[1] - p[0])
ivh_trace = pm.sample(1000)
az.plot_posterior(ivh_trace, var_names=['p_diff'], ref_val=0);
```
## Exercise: Cancer Rate Estimation
[Tsutakawa et al. (1985)](http://onlinelibrary.wiley.com/doi/10.1002/sim.4780040210/abstract) provides mortality data for stomach cancer among men aged 45-64 in several cities in Missouri. The file `cancer.csv` contains deaths $y_i$ and subjects at risk $n_i$ for 20 cities from this dataset.
```
import pandas as pd
cancer = pd.read_csv('../data/cancer.csv')
cancer
```
If we use a simple binomial model, which assumes independent samples from a binomial distribution with probability of mortality $p$, we can use MLE to obtain an estimate of this probability.
$$\hat{p} = \frac{y}{n}$$
```
p_hat = cancer.y.sum() / cancer.n.sum()
p_hat
```
The binomial variance can be calculated by:
$$\text{Var}(y) = np(1-p)$$
```
mle_var = (cancer.n * p_hat * (1-p_hat)).sum()
mle_var
```
However, if we compare this to the observed variance in $y$, things don't look good.
```
cancer.y.var()
# note: `trace` here is the posterior from the hierarchical model fitted below
_p = trace['p'].mean(axis=0)
(cancer.n * _p * (1-_p)).sum()
```
The data are *overdispersed* relative to what would be expected from a binomial model. As you might expect, it is unrealistic to assume the prevalence of cancer to be the same in all cities. Rather, a more realistic model might allow the probability to vary from place to place, according to any number of unmeasured risk factors.
Create a hierarchical model that allows the cancer prevalence to vary.
*Hint: a reasonable distribution for probabilities is the beta distribution. So, you would want to estimate the hyperparameters of the beta distribution to fit the hierarchical model.*
```
import pymc3 as pm
import arviz as az
import matplotlib.pyplot as plt
with pm.Model() as hierarchical_model:
# Prior on p
a = pm.Exponential('a', 0.5)
b = pm.Exponential('b', 0.5)
p = pm.Beta('p', a, b, shape=cancer.y.shape[0])
# Likelihood function
deaths = pm.Binomial('deaths', n=cancer.n.values, p=p, observed=cancer.y.values)
with hierarchical_model:
trace = pm.sample()
az.plot_forest(trace, var_names=['p'])
```
# Rolling Window Features
Following notebook showcases an example workflow of creating rolling window features and building a model to predict which customers will buy in next 4 weeks.
This uses dummy sales data but the idea can be implemented on actual sales data and can also be expanded to include other available data sources such as click-stream data, call center data, email contacts data, etc.
***
<b>Spark 3.1.2</b> (with Python 3.8) has been used for this notebook.<br>
Refer to [spark documentation](https://spark.apache.org/docs/3.1.2/api/sql/index.html) for help with <b>data ops functions</b>.<br>
Refer to [this article](https://medium.com/analytics-vidhya/installing-and-using-pyspark-on-windows-machine-59c2d64af76e) to <b>install and use PySpark on Windows machine</b>.
### Building a spark session
To create a SparkSession, use the following builder pattern:
`spark = SparkSession\
.builder\
.master("local")\
.appName("Word Count")\
.config("spark.some.config.option", "some-value")\
.getOrCreate()`
```
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql import Window
from pyspark.sql.types import FloatType
# initiating spark session (guard the stop so the first run doesn't raise NameError)
try:
    spark.stop()
except NameError:
    pass  # no existing session to stop
spark = SparkSession\
.builder\
.appName("rolling_window")\
.config("spark.executor.memory", "1536m")\
.config("spark.driver.memory", "2g")\
.getOrCreate()
spark
```
## Data prep
We will be using window functions to compute relative features for all dates. We will first aggregate the data to customer x week level so it is easier to handle.
<mark>The week level date that we create will serve as the 'reference date' from which everything will be relative.</mark>
All the required dimension tables have to be joined with the sales table prior to aggregation so that we can create all required features.
### Read input datasets
```
import pandas as pd
df_sales = spark.read.csv('./data/rw_sales.csv',inferSchema=True,header=True)
df_customer = spark.read.csv('./data/clustering_customer.csv',inferSchema=True,header=True)
df_product = spark.read.csv('./data/clustering_product.csv',inferSchema=True,header=True)
df_payment = spark.read.csv('./data/clustering_payment.csv',inferSchema=True,header=True)
```
<b>Quick exploration of the datasets:</b>
1. We have sales data that captures date, customer id, product, quantity, dollar amount & payment type at order x item level. `order_item_id` refers to each unique product in each order
2. We have corresponding dimension tables for customer info, product info, and payment tender info
```
df_sales.show(5)
# order_item_id is the primary key
(df_sales.count(),
df_sales.selectExpr('count(Distinct order_item_id)').collect()[0][0],
df_sales.selectExpr('count(Distinct order_id)').collect()[0][0])
df_sales.printSchema()
# fix date type for tran_dt
df_sales = df_sales.withColumn('tran_dt', F.to_date('tran_dt'))
df_customer.show(5)
# we have 1k unique customers in sales data with all their info in customer dimension table
(df_sales.selectExpr('count(Distinct customer_id)').collect()[0][0],
df_customer.count(),
df_customer.selectExpr('count(Distinct customer_id)').collect()[0][0])
# product dimension table provides category and price for each product
df_product.show(5)
(df_product.count(),
df_product.selectExpr('count(Distinct product_id)').collect()[0][0])
# payment type table maps the payment type id from sales table
df_payment.show(5)
```
### Join all dim tables and add week_end column
```
df_sales = df_sales.join(df_product.select('product_id','category'), on=['product_id'], how='left')
df_sales = df_sales.join(df_payment, on=['payment_type_id'], how='left')
```
<b>week_end column: Saturday of every week</b>
`dayofweek()` returns 1-7 corresponding to Sun-Sat for a date.
Using this, we will convert each date to the Saturday of its week (week: Sun-Sat) using the logic below:<br/>
`date + 7 - dayofweek()`
```
df_sales.printSchema()
df_sales = df_sales.withColumn('week_end',
F.col('tran_dt') + 7 - F.dayofweek('tran_dt'))
df_sales.show(5)
```
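As a quick sanity check outside Spark, the same Sun-Sat week-end logic can be reproduced with plain Python dates (Spark's `dayofweek()` numbering, 1 = Sunday through 7 = Saturday, is emulated below):

```python
import datetime

def week_end_saturday(d: datetime.date) -> datetime.date:
    # emulate Spark's dayofweek(): 1 = Sunday ... 7 = Saturday
    dayofweek = (d.weekday() + 1) % 7 + 1
    return d + datetime.timedelta(days=7 - dayofweek)

# 2020-11-11 is a Wednesday; its week (Sun-Sat) ends on Saturday 2020-11-14
print(week_end_saturday(datetime.date(2020, 11, 11)))  # 2020-11-14
```

A Saturday maps to itself, and a Sunday maps to the Saturday six days later, so every date in a Sun-Sat week gets the same week_end.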
### customer_id x week_end aggregation
We will be creating the following features at the weekly level. These will then be aggregated over multiple time frames using window functions for the final dataset.
1. Sales
2. No. of orders
3. No. of units
4. Sales split by category
5. Sales split by payment type
```
df_sales_agg = df_sales.groupBy('customer_id','week_end').agg(
F.sum('dollars').alias('sales'),
F.countDistinct('order_id').alias('orders'),
F.sum('qty').alias('units'))
# category split pivot
df_sales_cat_agg = df_sales.withColumn('category', F.concat(F.lit('cat_'), F.col('category')))
df_sales_cat_agg = df_sales_cat_agg.groupBy('customer_id','week_end').pivot('category').agg(F.sum('dollars'))
# payment type split pivot
# clean-up values in payment type column
df_payment_agg = df_sales.withColumn(
'payment_type',
F.concat(F.lit('pay_'), F.regexp_replace(F.col('payment_type'),' ','_')))
df_payment_agg = df_payment_agg.groupby('customer_id','week_end').pivot('payment_type').agg(F.sum('dollars'))
# join all together
df_sales_agg = df_sales_agg.join(df_sales_cat_agg, on=['customer_id','week_end'], how='left')
df_sales_agg = df_sales_agg.join(df_payment_agg, on=['customer_id','week_end'], how='left')
df_sales_agg = df_sales_agg.persist()
df_sales_agg.count()
df_sales_agg.show(5)
```
### Fill Missing weeks
```
# cust level min and max weeks
df_cust = df_sales_agg.groupBy('customer_id').agg(
F.min('week_end').alias('min_week'),
F.max('week_end').alias('max_week'))
# function to get a dataframe with 1 row per date in provided range
def pandas_date_range(start, end):
    dt_rng = pd.date_range(start=start, end=end, freq='W-SAT') # W-SAT required as we want all Saturdays
    df_date = pd.DataFrame(dt_rng, columns=['date'])
    return df_date
# use the cust level table and create a df with all Saturdays in our range
date_list = df_cust.selectExpr('min(min_week)', 'max(max_week)').collect()[0]
min_date = date_list[0]
max_date = date_list[1]
# use the function and create df
df_date_range = spark.createDataFrame(pandas_date_range(min_date, max_date))
# date format
df_date_range = df_date_range.withColumn('date',F.to_date('date'))
df_date_range = df_date_range.repartition(1).persist()
df_date_range.count()
```
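For illustration, the `freq='W-SAT'` argument anchors the generated dates on Saturdays, matching our week_end convention:

```python
import pandas as pd

# one timestamp per Saturday within the range
saturdays = pd.date_range('2020-11-01', '2020-11-21', freq='W-SAT')
print([d.strftime('%Y-%m-%d') for d in saturdays])  # ['2020-11-07', '2020-11-14', '2020-11-21']
```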
<b>Cross join date list df with cust table to create filled base table</b>
```
df_base = df_cust.crossJoin(F.broadcast(df_date_range))
# filter to keep only week_end since first week per customer
df_base = df_base.where(F.col('date')>=F.col('min_week'))
# rename date to week_end
df_base = df_base.withColumnRenamed('date','week_end')
```
<b>Join with the aggregated week level table to create full base table</b>
```
df_base = df_base.join(df_sales_agg, on=['customer_id','week_end'], how='left')
df_base = df_base.fillna(0)
df_base = df_base.persist()
df_base.count()
# write base table as parquet
df_base.repartition(8).write.parquet('./data/rw_base/', mode='overwrite')
df_base = spark.read.parquet('./data/rw_base/')
```
## y-variable
Flag whether a customer buys something in the 4 weeks following the current week.
```
# flag 1/0 for weeks with purchases
df_base = df_base.withColumn('purchase_flag', F.when(F.col('sales')>0,1).otherwise(0))
# window to aggregate the flag over next 4 weeks
df_base = df_base.withColumn(
'purchase_flag_next_4w',
F.max('purchase_flag').over(
Window.partitionBy('customer_id').orderBy('week_end').rowsBetween(1,4)))
```
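To make the `rowsBetween(1, 4)` semantics concrete: for each row, the window covers only the next 4 rows, never the current one. A minimal pure-Python illustration for one customer's weekly flags:

```python
# purchase_flag for 7 consecutive weeks of a single customer
flags = [0, 1, 0, 0, 0, 0, 1]

def max_over_next_4(values):
    # None where no following rows exist (Spark would return null there)
    return [max(values[i + 1:i + 5]) if i + 1 < len(values) else None
            for i in range(len(values))]

print(max_over_next_4(flags))  # [1, 0, 1, 1, 1, 1, None]
```

Note the last row has no look-forward rows at all, which is why we later drop the latest week_end dates before modeling.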
## Features
We will be aggregating the feature columns over various time intervals (1/4/13/26/52 weeks) to create a rich set of look-back features. We will also create derived features post aggregation.
```
# we can create and keep Window() objects that can be referenced in multiple formulas
# we don't need a window definition for 1w features as these are already present
window_4w = Window.partitionBy('customer_id').orderBy('week_end').rowsBetween(-3,Window.currentRow)
window_13w = Window.partitionBy('customer_id').orderBy('week_end').rowsBetween(-12,Window.currentRow)
window_26w = Window.partitionBy('customer_id').orderBy('week_end').rowsBetween(-25,Window.currentRow)
window_52w = Window.partitionBy('customer_id').orderBy('week_end').rowsBetween(-51,Window.currentRow)
df_base.columns
```
<b>Direct features</b>
```
cols_skip = ['customer_id','week_end','min_week','max_week','purchase_flag_next_4w']
for cols in df_base.drop(*cols_skip).columns:
    df_base = df_base.withColumn(cols+'_4w', F.sum(F.col(cols)).over(window_4w))
    df_base = df_base.withColumn(cols+'_13w', F.sum(F.col(cols)).over(window_13w))
    df_base = df_base.withColumn(cols+'_26w', F.sum(F.col(cols)).over(window_26w))
    df_base = df_base.withColumn(cols+'_52w', F.sum(F.col(cols)).over(window_52w))
```
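The `rowsBetween(-3, Window.currentRow)` sum is a trailing 4-row window; per customer it is equivalent to a pandas rolling sum with `min_periods=1`, as this small sketch shows:

```python
import pandas as pd

# one customer's weekly sales
weekly_sales = pd.Series([10, 0, 5, 20, 0])
# trailing 4-week sum, partial windows allowed at the start
rolling_4w = weekly_sales.rolling(window=4, min_periods=1).sum()
print(rolling_4w.tolist())  # [10.0, 10.0, 15.0, 35.0, 25.0]
```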
<b>Derived features</b>
```
# aov, aur, upt at each time cut ('' corresponds to the base 1w columns)
for time_cut in ['', '_4w', '_13w', '_26w', '_52w']:
    df_base = df_base.withColumn('aov'+time_cut, F.col('sales'+time_cut)/F.col('orders'+time_cut))
    df_base = df_base.withColumn('aur'+time_cut, F.col('sales'+time_cut)/F.col('units'+time_cut))
    df_base = df_base.withColumn('upt'+time_cut, F.col('units'+time_cut)/F.col('orders'+time_cut))
# % split of category and payment type for 26w (can be extended to other time-frames as well)
for cat in ['A','B','C','D','E']:
df_base = df_base.withColumn('cat_'+cat+'_26w_perc', F.col('cat_'+cat+'_26w')/F.col('sales_26w'))
for pay in ['cash', 'credit_card', 'debit_card', 'gift_card', 'others']:
df_base = df_base.withColumn('pay_'+pay+'_26w_perc', F.col('pay_'+pay+'_26w')/F.col('sales_26w'))
# all columns
df_base.columns
```
<b>Derived features: trend vars</b>
```
# we will take ratio of sales for different time-frames to estimate trend features
# that depict whether a customer has an increasing trend or not
df_base = df_base.withColumn('sales_1w_over_4w', F.col('sales')/ F.col('sales_4w'))
df_base = df_base.withColumn('sales_4w_over_13w', F.col('sales_4w')/ F.col('sales_13w'))
df_base = df_base.withColumn('sales_13w_over_26w', F.col('sales_13w')/F.col('sales_26w'))
df_base = df_base.withColumn('sales_26w_over_52w', F.col('sales_26w')/F.col('sales_52w'))
```
<b>Time elements</b>
```
# extract year, month, and week of year from week_end to be used as features
df_base = df_base.withColumn('year', F.year('week_end'))
df_base = df_base.withColumn('month', F.month('week_end'))
df_base = df_base.withColumn('weekofyear', F.weekofyear('week_end'))
```
<b>More derived features</b>:<br/>
We can add many more derived features as well, as required.
e.g. lag variables of existing features, trend ratios for other features, % change (Q-o-Q, M-o-M type) using lag variables, etc.
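As a hedged sketch of such lag and % change features (shown in pandas for brevity; `F.lag(...).over(window)` would be the Spark counterpart, and the column values here are toy data):

```python
import pandas as pd

df = pd.DataFrame({'customer_id': [1, 1, 1, 1],
                   'sales_4w': [100.0, 120.0, 90.0, 90.0]})
# 1-week lag of sales_4w within each customer
df['sales_4w_lag1'] = df.groupby('customer_id')['sales_4w'].shift(1)
# week-over-week % change of sales_4w within each customer
df['sales_4w_pct_chg'] = df.groupby('customer_id')['sales_4w'].pct_change()
print(df)
```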
```
# save sample rows to csv for checks
df_base.limit(50).toPandas().to_csv('./files/rw_features_qc.csv',index=False)
# save features dataset as parquet
df_base.repartition(8).write.parquet('./data/rw_features/', mode='overwrite')
df_features = spark.read.parquet('./data/rw_features/')
```
## Model Build
### Dataset for modeling
<b>Sample one week_end per month</b>
```
df_wk_sample = df_features.select('week_end').withColumn('month', F.substring(F.col('week_end'), 1,7))
df_wk_sample = df_wk_sample.groupBy('month').agg(F.max('week_end').alias('week_end'))
df_wk_sample = df_wk_sample.repartition(1).persist()
df_wk_sample.count()
df_wk_sample.sort('week_end').show(5)
count_features = df_features.count()
# join back to filter
df_model = df_features.join(F.broadcast(df_wk_sample.select('week_end')), on=['week_end'], how='inner')
count_wk_sample = df_model.count()
```
<b>Eligibility filter</b>: Customer should be active in last year w.r.t the reference date
```
# use sales_52w for elig. filter
df_model = df_model.where(F.col('sales_52w')>0)
count_elig = df_model.count()
# count of rows at each stage
print(count_features, count_wk_sample, count_elig)
```
<b>Removing latest 4 week_end dates</b>: As we have a look-forward period of 4 weeks, the latest 4 week_end dates in the data cannot be used for our model, as they do not have 4 full weeks ahead of them for the y-variable.
```
# see latest week_end dates (in the dataframe prior to monthly sampling)
df_features.select('week_end').drop_duplicates().sort(F.col('week_end').desc()).show(5)
# filter
df_model = df_model.where(F.col('week_end')<'2020-11-14')
count_4w_rm = df_model.count()
# count of rows at each stage
print(count_features, count_wk_sample, count_elig, count_4w_rm)
```
### Model Dataset Summary
Let's look at event rate for our dataset and also get a quick summary of all features.
The y-variable is balanced here because it is a dummy dataset. <mark>In most actual scenarios, it will not be balanced, and the model build exercise will involve sampling for balancing.</mark>
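A minimal sketch of one common balancing approach, downsampling the majority class (shown in pandas with toy counts; `DataFrame.sampleBy` would be the Spark analogue):

```python
import pandas as pd

df = pd.DataFrame({'y': [0] * 8 + [1] * 2})  # 80/20 imbalance (toy data)
minority = df[df['y'] == 1]
# draw as many majority rows as there are minority rows
majority = df[df['y'] == 0].sample(n=len(minority), random_state=125)
balanced = pd.concat([majority, minority])
print(balanced['y'].mean())  # 0.5 -> perfectly balanced
```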
```
df_model.groupBy('purchase_flag_next_4w').count().sort('purchase_flag_next_4w').show()
df_model.groupBy().agg(F.avg('purchase_flag_next_4w').alias('event_rate'), F.avg('purchase_flag').alias('wk_evt_rt')).show()
```
<b>Saving summary of all numerical features as a csv</b>
```
summary_metrics =\
('count','mean','stddev','min','0.10%','1.00%','5.00%','10.00%','20.00%','25.00%','30.00%',
'40.00%','50.00%','60.00%','70.00%','75.00%','80.00%','90.00%','95.00%','99.00%','99.90%','max')
df_summary_numeric = df_model.summary(*summary_metrics)
df_summary_numeric.toPandas().T.to_csv('./files/rw_features_summary.csv')
# fillna
df_model = df_model.fillna(0)
```
### Train-Test Split
80-20 split
```
train, test = df_model.randomSplit([0.8, 0.2], seed=125)
train.columns
```
### Data Prep
Spark Models require a vector of features as input. Categorical columns also need to be String Indexed before they can be used.
As we don't have any categorical columns currently, we will directly go with VectorAssembler.
<b>We will add it to a pipeline model that can be saved to be used on test & scoring datasets.</b>
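Conceptually, VectorAssembler just packs the listed input columns, row by row, into a single vector column; a NumPy sketch of the same idea (column names here are illustrative):

```python
import numpy as np

rows = [{'sales': 10.0, 'orders': 2.0, 'units': 3.0},
        {'sales': 0.0, 'orders': 0.0, 'units': 0.0}]
input_cols = ['sales', 'orders', 'units']  # analogous to inputCols
# one dense feature vector per row, in inputCols order
features = np.array([[r[c] for c in input_cols] for r in rows])
print(features.shape)  # (2, 3)
```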
```
# model related imports (RF)
from pyspark.ml.classification import RandomForestClassifier, RandomForestClassificationModel
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.feature import VectorAssembler, StringIndexer
from pyspark.ml.evaluation import BinaryClassificationEvaluator
# list of features: remove identifier columns and the y-var
col_list = df_model.drop('week_end','customer_id','min_week','max_week','purchase_flag_next_4w').columns
stages = []
assembler = VectorAssembler(inputCols=col_list, outputCol='features')
stages.append(assembler)
pipe = Pipeline(stages=stages)
pipe_model = pipe.fit(train)
pipe_model.write().overwrite().save('./files/model_objects/rw_pipe/')
pipe_model = PipelineModel.load('./files/model_objects/rw_pipe/')
```
<b>Apply the transformation pipeline</b>
Also keep the identifier columns and y-var in the transformed dataframe.
```
train_pr = pipe_model.transform(train)
train_pr = train_pr.select('customer_id','week_end','purchase_flag_next_4w','features')
train_pr = train_pr.persist()
train_pr.count()
test_pr = pipe_model.transform(test)
test_pr = test_pr.select('customer_id','week_end','purchase_flag_next_4w','features')
test_pr = test_pr.persist()
test_pr.count()
```
### Model Training
We will train one iteration of Random Forest model as showcase.
In an actual scenario, you will have to iterate through the training step multiple times for feature selection and model hyperparameter tuning to get a good final model.
```
train_pr.show(5)
model_params = {
'labelCol': 'purchase_flag_next_4w',
'numTrees': 128, # default: 20
'maxDepth': 12, # default: 5
'featuresCol': 'features',
'minInstancesPerNode': 25,
'maxBins': 128,
'minInfoGain': 0.0,
'subsamplingRate': 0.7,
'featureSubsetStrategy': '0.3',
'impurity': 'gini',
'seed': 125,
'cacheNodeIds': False,
'maxMemoryInMB': 256
}
clf = RandomForestClassifier(**model_params)
trained_clf = clf.fit(train_pr)
```
### Feature Importance
We will save feature importance as a csv.
```
# Feature importance
feature_importance_list = trained_clf.featureImportances
feature_list = pd.DataFrame(train_pr.schema['features'].metadata['ml_attr']['attrs']['numeric']).sort_values('idx')
feature_importance_list = pd.DataFrame(
data=feature_importance_list.toArray(),
columns=['relative_importance'],
index=feature_list['name'])
feature_importance_list = feature_importance_list.sort_values('relative_importance', ascending=False)
feature_importance_list.to_csv('./files/rw_rf_feat_imp.csv')
```
### Predict on train and test
```
secondelement = F.udf(lambda v: float(v[1]), FloatType())
train_pred = trained_clf.transform(train_pr).withColumn('score',secondelement(F.col('probability')))
test_pred = trained_clf.transform(test_pr).withColumn('score', secondelement(F.col('probability')))
test_pred.show(5)
```
### Test Set Evaluation
```
evaluator = BinaryClassificationEvaluator(
rawPredictionCol='rawPrediction',
labelCol='purchase_flag_next_4w',
metricName='areaUnderROC')
# areaUnderROC
evaluator.evaluate(train_pred)
evaluator.evaluate(test_pred)
# cm
test_pred.groupBy('purchase_flag_next_4w','prediction').count().sort('purchase_flag_next_4w','prediction').show()
# accuracy
test_pred.where(F.col('purchase_flag_next_4w')==F.col('prediction')).count()/test_pred.count()
```
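The last two cells relate as follows: accuracy is simply the diagonal of the confusion matrix divided by the total row count. A tiny illustration with hypothetical counts (not the output of this run):

```python
# hypothetical confusion-matrix counts: (label, prediction) -> count
cm = {(0, 0): 50, (0, 1): 10, (1, 0): 8, (1, 1): 32}
# accuracy = (TN + TP) / total
accuracy = (cm[(0, 0)] + cm[(1, 1)]) / sum(cm.values())
print(accuracy)  # 0.82
```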
### Save Model
```
trained_clf.write().overwrite().save('./files/model_objects/rw_rf_model/')
trained_clf = RandomForestClassificationModel.load('./files/model_objects/rw_rf_model/')
```
## Scoring
We will take the records for latest week_end from df_features and score it using our trained model.
```
df_features = spark.read.parquet('./data/rw_features/')
max_we = df_features.selectExpr('max(week_end)').collect()[0][0]
max_we
df_scoring = df_features.where(F.col('week_end')==max_we)
df_scoring.count()
# fillna
df_scoring = df_scoring.fillna(0)
# transformation pipeline
pipe_model = PipelineModel.load('./files/model_objects/rw_pipe/')
# apply
df_scoring = pipe_model.transform(df_scoring)
df_scoring = df_scoring.select('customer_id','week_end','features')
# rf model
trained_clf = RandomForestClassificationModel.load('./files/model_objects/rw_rf_model/')
#apply
secondelement = F.udf(lambda v: float(v[1]), FloatType())
df_scoring = trained_clf.transform(df_scoring).withColumn('score',secondelement(F.col('probability')))
df_scoring.show(5)
# save scored output
df_scoring.repartition(8).write.parquet('./data/rw_scored/', mode='overwrite')
```
| github_jupyter |
# Hyperparameter Tuning using Your Own Keras/Tensorflow Container
This notebook shows how to build your own Keras(Tensorflow) container, test it locally using SageMaker Python SDK local mode, and bring it to SageMaker for training, leveraging hyperparameter tuning.
The model used for this notebook is a ResNet model, trained with the CIFAR-10 dataset. The example is based on https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py
## Set up the notebook instance to support local mode
Currently you need to install docker-compose in order to use local mode (i.e., testing the container in the notebook instance without pushing it to ECR).
```
!/bin/bash setup.sh
```
## Permissions
Running this notebook requires permissions in addition to the normal `SageMakerFullAccess` permissions. This is because it creates new repositories in Amazon ECR. The easiest way to add these permissions is simply to add the managed policy `AmazonEC2ContainerRegistryFullAccess` to the role that you used to start your notebook instance. There's no need to restart your notebook instance when you do this; the new permissions will be available immediately.
## Set up the environment
We will set up a few things before starting the workflow.
1. get the execution role which will be passed to sagemaker for accessing your resources such as s3 bucket
2. specify the s3 bucket and prefix where training data set and model artifacts are stored
```
import os
import numpy as np
import tempfile
import tensorflow as tf
import sagemaker
import boto3
from sagemaker.estimator import Estimator
region = boto3.Session().region_name
sagemaker_session = sagemaker.Session()
smclient = boto3.client("sagemaker")
bucket = (
sagemaker.Session().default_bucket()
) # s3 bucket name, must be in the same region as the one specified above
prefix = "sagemaker/DEMO-hpo-keras-cifar10"
role = sagemaker.get_execution_role()
NUM_CLASSES = 10 # the data set has 10 categories of images
```
## Complete source code
- [trainer/start.py](trainer/start.py): Keras model
- [trainer/environment.py](trainer/environment.py): Contain information about the SageMaker environment
## Building the image
We will build the docker image using the Tensorflow versions on dockerhub. The full list of Tensorflow versions can be found at https://hub.docker.com/r/tensorflow/tensorflow/tags/
```
import shlex
import subprocess
def get_image_name(ecr_repository, tensorflow_version_tag):
    return "%s:tensorflow-%s" % (ecr_repository, tensorflow_version_tag)
def build_image(name, version):
    cmd = "docker build -t %s --build-arg VERSION=%s -f Dockerfile ." % (name, version)
    subprocess.check_call(shlex.split(cmd))
# version tag can be found at https://hub.docker.com/r/tensorflow/tensorflow/tags/
# e.g., latest cpu version is 'latest', while latest gpu version is 'latest-gpu'
tensorflow_version_tag = "1.10.1"
account = boto3.client("sts").get_caller_identity()["Account"]
domain = "amazonaws.com"
if region == "cn-north-1" or region == "cn-northwest-1":
    domain = "amazonaws.com.cn"
ecr_repository = "%s.dkr.ecr.%s.%s/test" % (
account,
region,
domain,
) # your ECR repository, which you should have created before running the notebook
image_name = get_image_name(ecr_repository, tensorflow_version_tag)
print("building image:" + image_name)
build_image(image_name, tensorflow_version_tag)
```
## Prepare the data
```
def upload_channel(channel_name, x, y):
    y = tf.keras.utils.to_categorical(y, NUM_CLASSES)
    file_path = tempfile.mkdtemp()
    np.savez_compressed(os.path.join(file_path, "cifar-10-npz-compressed.npz"), x=x, y=y)
    return sagemaker_session.upload_data(
        path=file_path, bucket=bucket, key_prefix="data/DEMO-keras-cifar10/%s" % channel_name
    )
def upload_training_data():
    # The data, split between train and test sets:
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
    train_data_location = upload_channel("train", x_train, y_train)
    test_data_location = upload_channel("test", x_test, y_test)
    return {"train": train_data_location, "test": test_data_location}
channels = upload_training_data()
```
## Testing the container locally (optional)
You can test the container locally using local mode of SageMaker Python SDK. A training container will be created in the notebook instance based on the docker image you built. Note that we have not pushed the docker image to ECR yet since we are only running local mode here. You can skip to the tuning step if you want, but testing the container locally can help you find issues quickly before kicking off the tuning job.
### Setting the hyperparameters
```
hyperparameters = dict(
batch_size=32,
data_augmentation=True,
learning_rate=0.0001,
width_shift_range=0.1,
height_shift_range=0.1,
epochs=1,
)
hyperparameters
```
### Create a training job using local mode
```
%%time
output_location = "s3://{}/{}/output".format(bucket, prefix)
estimator = Estimator(
image_name,
role=role,
output_path=output_location,
train_instance_count=1,
train_instance_type="local",
hyperparameters=hyperparameters,
)
estimator.fit(channels)
```
## Pushing the container to ECR
Now that we've tested the container locally and it works fine, we can move on to run the hyperparameter tuning. Before kicking off the tuning job, you need to push the docker image to ECR first.
The cell below will create the ECR repository, if it does not exist yet, and push the image to ECR.
```
# The name of our algorithm
algorithm_name = 'test'
# If the repository doesn't exist in ECR, create it.
exist_repo = !aws ecr describe-repositories --repository-names {algorithm_name} > /dev/null 2>&1
if not exist_repo:
    !aws ecr create-repository --repository-name {algorithm_name} > /dev/null
# Get the login command from ECR and execute it directly
!$(aws ecr get-login --region {region} --no-include-email)
!docker push {image_name}
```
## Specify hyperparameter tuning job configuration
*Note, with the default setting below, the hyperparameter tuning job can take 20~30 minutes to complete. You can customize the code in order to get better result, such as increasing the total number of training jobs, epochs, etc., with the understanding that the tuning time will be increased accordingly as well.*
Now you configure the tuning job by defining a JSON object that you pass as the value of the TuningJobConfig parameter to the create_tuning_job call. In this JSON object, you specify:
* The ranges of hyperparameters you want to tune
* The limits of the resource the tuning job can consume
* The objective metric for the tuning job
```
import json
from time import gmtime, strftime
tuning_job_name = "BYO-keras-tuningjob-" + strftime("%d-%H-%M-%S", gmtime())
print(tuning_job_name)
tuning_job_config = {
"ParameterRanges": {
"CategoricalParameterRanges": [],
"ContinuousParameterRanges": [
{
"MaxValue": "0.001",
"MinValue": "0.0001",
"Name": "learning_rate",
}
],
"IntegerParameterRanges": [],
},
"ResourceLimits": {"MaxNumberOfTrainingJobs": 9, "MaxParallelTrainingJobs": 3},
"Strategy": "Bayesian",
"HyperParameterTuningJobObjective": {"MetricName": "loss", "Type": "Minimize"},
}
```
## Specify training job configuration
Now you configure the training jobs the tuning job launches by defining a JSON object that you pass as the value of the TrainingJobDefinition parameter to the create_tuning_job call.
In this JSON object, you specify:
* Metrics that the training jobs emit
* The container image for the algorithm to train
* The input configuration for your training and test data
* Configuration for the output of the algorithm
* The values of any algorithm hyperparameters that are not tuned in the tuning job
* The type of instance to use for the training jobs
* The stopping condition for the training jobs
This example defines one metric that Tensorflow container emits: loss.
```
training_image = image_name
print("training artifacts will be uploaded to: {}".format(output_location))
training_job_definition = {
"AlgorithmSpecification": {
"MetricDefinitions": [{"Name": "loss", "Regex": "loss: ([0-9\\.]+)"}],
"TrainingImage": training_image,
"TrainingInputMode": "File",
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": channels["train"],
"S3DataDistributionType": "FullyReplicated",
}
},
"CompressionType": "None",
"RecordWrapperType": "None",
},
{
"ChannelName": "test",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": channels["test"],
"S3DataDistributionType": "FullyReplicated",
}
},
"CompressionType": "None",
"RecordWrapperType": "None",
},
],
"OutputDataConfig": {"S3OutputPath": "s3://{}/{}/output".format(bucket, prefix)},
"ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.m4.xlarge", "VolumeSizeInGB": 50},
"RoleArn": role,
"StaticHyperParameters": {
"batch_size": "32",
"data_augmentation": "True",
"height_shift_range": "0.1",
"width_shift_range": "0.1",
"epochs": "1",
},
"StoppingCondition": {"MaxRuntimeInSeconds": 43200},
}
```
## Create and launch a hyperparameter tuning job
Now you can launch a hyperparameter tuning job by calling create_tuning_job API. Pass the name and JSON objects you created in previous steps as the values of the parameters. After the tuning job is created, you should be able to describe the tuning job to see its progress in the next step, and you can go to SageMaker console->Jobs to check out the progress of each training job that has been created.
```
smclient.create_hyper_parameter_tuning_job(
HyperParameterTuningJobName=tuning_job_name,
HyperParameterTuningJobConfig=tuning_job_config,
TrainingJobDefinition=training_job_definition,
)
```
Let's just run a quick check of the hyperparameter tuning jobs status to make sure it started successfully and is `InProgress`.
```
smclient.describe_hyper_parameter_tuning_job(HyperParameterTuningJobName=tuning_job_name)[
"HyperParameterTuningJobStatus"
]
```
## Analyze tuning job results - after tuning job is completed
Please refer to "HPO_Analyze_TuningJob_Results.ipynb" to see example code to analyze the tuning job results.
## Deploy the best model
Now that we have got the best model, we can deploy it to an endpoint. Please refer to other SageMaker sample notebooks or SageMaker documentation to see how to deploy a model.
| github_jupyter |
### Scroll Down Below to start from Exercise 8.04
```
# Removes Warnings
import warnings
warnings.filterwarnings('ignore')
#import the necessary packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
```
## Reading the data using pandas
```
data= pd.read_csv('Churn_Modelling.csv')
data.head(5)
len(data)
data.shape
```
## Scrubbing the data
```
data.isnull().values.any()
#It seems we have some missing values now let us explore what are the columns
#having missing values
data.isnull().any()
## it seems that we have missing values in Gender, Age and EstimatedSalary
data[["EstimatedSalary","Age"]].describe()
data.describe()
#### It seems that HasCrCard has values 0 and 1, hence needs to be changed to category
data['HasCrCard'].value_counts()
## No of missing Values present
data.isnull().sum()
## Percentage of missing Values present
round(data.isnull().sum()/len(data)*100,2)
## Checking the datatype of the missing columns
data[["Gender","Age","EstimatedSalary"]].dtypes
```
### There are three ways to handle missing values:
1. Dropping the rows with missing values
2. Filling missing values with a summary statistic (mean, median, mode, etc.)
3. Predicting the missing values using an ML algorithm
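The cells below use the first two approaches (drop / mean / mode fills). As a hedged sketch of approach 3, here is model-based imputation with scikit-learn's `KNNImputer` on toy data (not the churn dataset):

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0],
              [3.0, np.nan],
              [5.0, 6.0]])
# fill the NaN with the mean of the corresponding values from the
# 2 nearest rows (here rows 0 and 2, both at distance 2 on column 0)
imputer = KNNImputer(n_neighbors=2)
X_filled = imputer.fit_transform(X)
print(X_filled[1, 1])  # (2.0 + 6.0) / 2 = 4.0
```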
```
mean_value=data['EstimatedSalary'].mean()
data['EstimatedSalary']=data['EstimatedSalary']\
.fillna(mean_value)
data['Gender'].value_counts()
data['Gender']=data['Gender'].fillna(data['Gender']\
.value_counts().idxmax())
mode_value=data['Age'].mode()
data['Age']=data['Age'].fillna(mode_value[0])
##checking for any missing values
data.isnull().any()
```
### Renaming the columns
```
# We would want to rename some of the columns
data = data.rename(columns={'CredRate': 'CreditScore',\
'ActMem' : 'IsActiveMember',\
'Prod Number': 'NumOfProducts',\
'Exited':'Churn'})
data.columns
```
### We would also like to move the churn column to the extreme right and drop the customer ID
```
data.drop(labels=['CustomerId'], axis=1,inplace = True)
column_churn = data['Churn']
data.drop(labels=['Churn'], axis=1,inplace = True)
data.insert(len(data.columns), 'Churn', column_churn.values)
data.columns
```
### Changing the data type
```
data["Geography"] = data["Geography"].astype('category')
data["Gender"] = data["Gender"].astype('category')
data["HasCrCard"] = data["HasCrCard"].astype('category')
data["Churn"] = data["Churn"].astype('category')
data["IsActiveMember"] = data["IsActiveMember"]\
.astype('category')
data.dtypes
```
# Exploring the data
## Statistical Overview
```
data['Churn'].value_counts(0)
data['Churn'].value_counts(1)*100
data['IsActiveMember'].value_counts(1)*100
data.describe()
summary_churn = data.groupby('Churn')
summary_churn.mean()
summary_churn.median()
corr = data.corr()
plt.figure(figsize=(15,8))
sns.heatmap(corr, \
xticklabels=corr.columns.values,\
yticklabels=corr.columns.values,\
annot=True,cmap='Greys_r')
corr
```
## Visualization
```
f, axes = plt.subplots(ncols=3, figsize=(15, 6))
sns.distplot(data.EstimatedSalary, kde=True, color="gray", \
ax=axes[0]).set_title('EstimatedSalary')
axes[0].set_ylabel('No of Customers')
sns.distplot(data.Age, kde=True, color="gray", \
ax=axes[1]).set_title('Age')
axes[1].set_ylabel('No of Customers')
sns.distplot(data.Balance, kde=True, color="gray", \
ax=axes[2]).set_title('Balance')
axes[2].set_ylabel('No of Customers')
plt.figure(figsize=(15,4))
p=sns.countplot(y="Gender", hue='Churn', data=data,\
palette="Greys_r")
legend = p.get_legend()
legend_txt = legend.texts
legend_txt[0].set_text("No Churn")
legend_txt[1].set_text("Churn")
p.set_title('Customer Churn Distribution by Gender')
plt.figure(figsize=(15,4))
p=sns.countplot(x='Geography', hue='Churn', data=data, \
palette="Greys_r")
legend = p.get_legend()
legend_txt = legend.texts
legend_txt[0].set_text("No Churn")
legend_txt[1].set_text("Churn")
p.set_title('Customer Geography Distribution')
plt.figure(figsize=(15,4))
p=sns.countplot(x='NumOfProducts', hue='Churn', data=data, \
palette="Greys_r")
legend = p.get_legend()
legend_txt = legend.texts
legend_txt[0].set_text("No Churn")
legend_txt[1].set_text("Churn")
p.set_title('Customer Distribution by Product')
plt.figure(figsize=(15,4))
ax=sns.kdeplot(data.loc[(data['Churn'] == 0),'Age'] , \
color=sns.color_palette("Greys_r")[0],\
shade=True,label='no churn')
ax=sns.kdeplot(data.loc[(data['Churn'] == 1),'Age'] , \
color=sns.color_palette("Greys_r")[1],\
shade=True, label='churn')
ax.set(xlabel='Customer Age', ylabel='Frequency')
plt.title('Customer Age - churn vs no churn')
plt.figure(figsize=(15,4))
ax=sns.kdeplot(data.loc[(data['Churn'] == 0),'Balance'] , \
color=sns.color_palette("Greys_r")[0],\
shade=True,label='no churn')
ax=sns.kdeplot(data.loc[(data['Churn'] == 1),'Balance'] , \
color=sns.color_palette("Greys_r")[1],\
shade=True, label='churn')
ax.set(xlabel='Customer Balance', ylabel='Frequency')
plt.title('Customer Balance - churn vs no churn')
plt.figure(figsize=(15,4))
ax=sns.kdeplot(data.loc[(data['Churn'] == 0),'CreditScore'] , \
color=sns.color_palette("Greys_r")[0],\
shade=True,label='no churn')
ax=sns.kdeplot(data.loc[(data['Churn'] == 1),'CreditScore'] , \
color=sns.color_palette("Greys_r")[1],\
shade=True, label='churn')
ax.set(xlabel='CreditScore', ylabel='Frequency')
plt.title('Customer CreditScore - churn vs no churn')
plt.figure(figsize=(16,4))
p=sns.barplot(x='NumOfProducts',y='Balance',hue='Churn',\
data=data, palette="Greys_r")
p.legend(loc='upper right')
legend = p.get_legend()
legend_txt = legend.texts
legend_txt[0].set_text("No Churn")
legend_txt[1].set_text("Churn")
p.set_title('Number of Product VS Balance')
```
## Feature selection
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
data.dtypes
### Encoding the categorical variables
data["Geography"] = data["Geography"].astype('category')\
.cat.codes
data["Gender"] = data["Gender"].astype('category').cat.codes
data["HasCrCard"] = data["HasCrCard"].astype('category')\
.cat.codes
data["Churn"] = data["Churn"].astype('category').cat.codes
target = 'Churn'
X = data.drop('Churn', axis=1)
y=data[target]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.15, random_state=123, stratify=y)
forest=RandomForestClassifier(n_estimators=500,random_state=1)
forest.fit(X_train,y_train)
importances=forest.feature_importances_
features = data.drop(['Churn'],axis=1).columns
indices = np.argsort(importances)[::-1]
plt.figure(figsize=(15,4))
plt.title("Feature importances using Random Forest")
plt.bar(range(X_train.shape[1]), importances[indices],\
color="gray", align="center")
plt.xticks(range(X_train.shape[1]), features[indices], \
rotation='vertical',fontsize=15)
plt.xlim([-1, X_train.shape[1]])
plt.show()
feature_importance_df = pd.DataFrame({"Feature":features,\
"Importance":importances})
print(feature_importance_df)
```
## Model Fitting
```
import statsmodels.api as sm
top5_features = ['Age','EstimatedSalary','CreditScore',\
'Balance','NumOfProducts']
logReg = sm.Logit(y_train, X_train[top5_features])
logistic_regression = logReg.fit()
logistic_regression.summary()
logistic_regression.params
# Create function to compute coefficients
coef = logistic_regression.params
def y(coef, Age, EstimatedSalary, CreditScore, Balance, NumOfProducts):
    return (coef[0]*Age + coef[1]*EstimatedSalary + coef[2]*CreditScore
            + coef[3]*Balance + coef[4]*NumOfProducts)
import numpy as np
#A customer having below attributes
#Age: 50
#EstimatedSalary: 100,000
#CreditScore: 600
#Balance: 100,000
#NumOfProducts: 2
#would have 38% chance of churn
y1 = y(coef, 50, 100000, 600,100000,2)
p = np.exp(y1) / (1+np.exp(y1))
p
```
## Logistic regression using scikit-learn
```
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(random_state=0, solver='lbfgs')\
.fit(X_train[top5_features], y_train)
clf.predict(X_test[top5_features])
clf.predict_proba(X_test[top5_features])
clf.score(X_test[top5_features], y_test)
```
## Exercise 8.04
# Performing standardization
```
from sklearn import preprocessing
X_train[top5_features].head()
scaler = preprocessing.StandardScaler().fit(X_train[top5_features])
scaler.mean_
scaler.scale_
X_train_scalar=scaler.transform(X_train[top5_features])
X_train_scalar
X_test_scalar=scaler.transform(X_test[top5_features])
```
## Exercise 8.05
# Performing Scaling
```
min_max = preprocessing.MinMaxScaler().fit(X_train[top5_features])
min_max.min_
min_max.scale_
X_train_min_max=min_max.transform(X_train[top5_features])
X_test_min_max=min_max.transform(X_test[top5_features])
```
## Exercise 8.06
# Normalization
```
normalize = preprocessing.Normalizer().fit(X_train[top5_features])
normalize
X_train_normalize=normalize.transform(X_train[top5_features])
X_test_normalize=normalize.transform(X_test[top5_features])
np.sqrt(np.sum(X_train_normalize**2, axis=1))
np.sqrt(np.sum(X_test_normalize**2, axis=1))
```
## Exercise 8.07
# Model Evaluation
```
from sklearn.model_selection import StratifiedKFold
skf = StratifiedKFold(n_splits=10)\
.split(X_train[top5_features].values,y_train.values)
results=[]
for i, (train,test) in enumerate(skf):
clf.fit(X_train[top5_features].values[train],\
y_train.values[train])
fit_result=clf.score(X_train[top5_features].values[test],\
y_train.values[test])
results.append(fit_result)
print('k-fold: %2d, Class Ratio: %s, Accuracy: %.4f'\
% (i,np.bincount(y_train.values[train]),fit_result))
print('accuracy for CV is:%.3f' % np.mean(results))
```
### Using Scikit Learn cross_val_score
```
from sklearn.model_selection import cross_val_score
results_cross_val_score=cross_val_score\
(estimator=clf,\
X=X_train[top5_features].values,\
y=y_train.values,cv=10,n_jobs=1)
print('accuracy for CV is:%.3f '\
% np.mean(results_cross_val_score))
results_cross_val_score
```
## Exercise 8.08
# Fine Tuning of Model Using Grid Search
```
from sklearn import svm
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedKFold
parameters = [ {'kernel': ['linear'], 'C':[0.1, 1]}, \
{'kernel': ['rbf'], 'C':[0.1, 1]}]
clf = GridSearchCV(svm.SVC(), parameters, \
cv = StratifiedKFold(n_splits = 3),\
verbose=4,n_jobs=-1)
clf.fit(X_train[top5_features], y_train)
print('best score train:', clf.best_score_)
print('best parameters train: ', clf.best_params_)
```
| github_jupyter |
# Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
```
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
```
## Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
```
rides[:24*10].plot(x='dteday', y='cnt')
```
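As a quick sanity check on the description above, `cnt` should always equal `casual + registered`, and collapsing hours to days makes the weekday/weekend pattern easier to see. A minimal self-contained sketch (using a tiny made-up frame in place of `rides`):

```
import pandas as pd

# Tiny made-up frame standing in for `rides` (same columns, hourly rows)
demo = pd.DataFrame({
    'dteday': ['2011-01-01', '2011-01-01', '2011-01-02'],
    'casual': [3, 1, 5],
    'registered': [13, 31, 22],
    'cnt': [16, 32, 27],
})

# `cnt` is the sum of casual and registered riders in every row
assert (demo['casual'] + demo['registered'] == demo['cnt']).all()

# Aggregating hourly counts to daily totals exposes the weekday/weekend pattern
daily = demo.groupby('dteday')['cnt'].sum()
print(daily.to_dict())  # {'2011-01-01': 48, '2011-01-02': 27}
```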
### Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.
```
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
```
### Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
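The shift-and-scale transform and its inverse can be sketched in isolation (illustrative only; the loop below stores the same `mean`/`std` pair for each column):

```
import numpy as np

x = np.array([10.0, 20.0, 30.0])
mean, std = x.mean(), x.std()

z = (x - mean) / std            # standardize: zero mean, unit standard deviation
assert np.isclose(z.mean(), 0) and np.isclose(z.std(), 1)

x_back = z * std + mean         # saved factors let us go backwards later
assert np.allclose(x_back, x)
```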
```
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
```
### Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
```
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
```
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
```
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
```
## Time to build the network
Below you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.
> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
2. Implement the forward pass in the `train` method.
3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.
4. Implement the forward pass in the `run` method.
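These pieces can be sketched in plain NumPy (illustrative only; your actual implementation belongs in `my_answers.py`), reusing the toy inputs and weights from the unit tests further down:

```
import numpy as np

def sigmoid(x):
    # Task 1: hidden-layer activation
    return 1 / (1 + np.exp(-x))

def forward(X, w_in_hidden, w_hidden_out):
    # Tasks 2 and 4: sigmoid hidden layer, identity (f(x) = x) output
    hidden = sigmoid(X @ w_in_hidden)
    return hidden @ w_hidden_out, hidden

# Same toy inputs/weights as the unit tests in this notebook
X = np.array([[0.5, -0.2, 0.1]])
w_ih = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]])
w_ho = np.array([[0.3], [-0.1]])

out, hidden = forward(X, w_ih, w_ho)
print(out)  # ~0.09999, matching the run() unit test below

# Task 3: with an identity output activation, f'(x) = 1, so the output
# error term is simply (target - output)
target = np.array([[0.4]])
output_error_term = target - out
hidden_error_term = (output_error_term @ w_ho.T) * hidden * (1 - hidden)
```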
```
#############
# In the my_answers.py file, fill out the TODO sections as specified
#############
from my_answers import NeuralNetwork
def MSE(y, Y):
return np.mean((y-Y)**2)
```
## Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
```
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
```
## Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
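The batch-sampling idea can be sketched on a toy linear-regression problem (illustrative and separate from the bike model; the data and learning rate here are made up):

```
import numpy as np

# Minimal SGD sketch: each pass updates the weights from one random
# batch of records rather than from the full data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
lr, batch_size = 0.1, 128
for _ in range(200):
    batch = rng.choice(len(X), size=batch_size)   # random sample of records
    Xb, yb = X[batch], y[batch]
    grad = -(yb - Xb @ w) @ Xb / batch_size       # MSE gradient on the batch
    w -= lr * grad                                # one fast, noisy step

# Many cheap passes still drive w close to the true weights
assert np.allclose(w, true_w, atol=0.01)
```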
### Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
### Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
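In symbols, with learning rate $\eta$ and training error $E$, the update the text describes is:

$$
w \leftarrow w - \eta \frac{\partial E}{\partial w}
$$

Dividing the gradient by `n_records` effectively scales $\eta$ down by that factor, which is why a larger starting value like 1 can work in that case.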
### Choose the number of hidden nodes
In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
```
import sys
####################
### Set the hyperparameters in your my_answers.py file ###
####################
from my_answers import iterations, learning_rate, hidden_nodes, output_nodes
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
```
## Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
```
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
```
## OPTIONAL: Thinking about your results (this question will not be evaluated in the rubric).
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
> **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
#### Your answer below
## Submitting:
Open up the 'jwt' file in the first-neural-network directory (which also contains this notebook) for submission instructions
| github_jupyter |
```
# default_exp callback.MVP
```
# MVP (aka TSBERT) - Self-Supervised Pretraining of Time Series Models
> Masked Value Predictor callback used to predict time series step values after a binary mask has been applied.
This is an unofficial PyTorch implementation created by Ignacio Oguiza (timeseriesAI@gmail.com) based on:
* Zerveas, G., Jayaraman, S., Patel, D., Bhamidipaty, A., & Eickhoff, C. (2020). [A Transformer-based Framework for Multivariate Time Series Representation Learning. arXiv preprint arXiv:2010.02803v2.](https://arxiv.org/pdf/2010.02803). No official implementation available as far as I know (Oct 10th, 2020)
```
#export
from tsai.imports import *
from fastai.callback.all import *
from tsai.utils import *
from tsai.models.utils import *
from tsai.models.layers import *
#export
from torch.distributions.beta import Beta
#export
from torch.distributions.geometric import Geometric
from torch.distributions.binomial import Binomial
def create_subsequence_mask(o, r=.15, lm=3, stateful=True, sync=False):
if r <= 0: return torch.zeros_like(o).bool()
device = o.device
if o.ndim == 2: o = o[None]
n_masks, mask_dims, mask_len = o.shape
if sync == 'random': sync = random.random() > .5
dims = 1 if sync else mask_dims
if stateful:
numels = n_masks * dims * mask_len
pm = torch.tensor([1 / lm], device=device)
pu = torch.clip(pm * (r / max(1e-6, 1 - r)), 1e-3, 1)
zot, proba_a, proba_b = (torch.as_tensor([False, True], device=device), pu, pm) if random.random() > pm else \
(torch.as_tensor([True, False], device=device), pm, pu)
max_len = max(1, 2 * math.ceil(numels // (1/pm + 1/pu)))
for i in range(10):
_dist_a = (Geometric(probs=proba_a).sample([max_len])+1).long()
_dist_b = (Geometric(probs=proba_b).sample([max_len])+1).long()
dist_a = _dist_a if i == 0 else torch.cat((dist_a, _dist_a), dim=0)
dist_b = _dist_b if i == 0 else torch.cat((dist_b, _dist_b), dim=0)
add = torch.add(dist_a, dist_b)
if torch.gt(torch.sum(add), numels): break
dist_len = torch.argmax((torch.cumsum(add, 0) >= numels).float()) + 1
if dist_len%2: dist_len += 1
repeats = torch.cat((dist_a[:dist_len], dist_b[:dist_len]), -1).flatten()
zot = zot.repeat(dist_len)
mask = torch.repeat_interleave(zot, repeats)[:numels].reshape(n_masks, dims, mask_len)
else:
probs = torch.tensor(r, device=device)
mask = Binomial(1, probs).sample((n_masks, dims, mask_len)).bool()
if sync: mask = mask.repeat(1, mask_dims, 1)
return mask
def create_variable_mask(o, r=.15):
if r <= 0: return torch.zeros_like(o).bool()
device = o.device
n_masks, mask_dims, mask_len = o.shape
_mask = torch.zeros((n_masks * mask_dims, mask_len), device=device)
if int(mask_dims * r) > 0:
n_masked_vars = int(n_masks * mask_dims * r)
p = torch.tensor([1./(n_masks * mask_dims)], device=device).repeat([n_masks * mask_dims])
sel_dims = p.multinomial(num_samples=n_masked_vars, replacement=False)
_mask[sel_dims] = 1
mask = _mask.reshape(*o.shape).bool()
return mask
def create_future_mask(o, r=.15, sync=False):
if r <= 0: return torch.zeros_like(o).bool()
if o.ndim == 2: o = o[None]
n_masks, mask_dims, mask_len = o.shape
if sync == 'random': sync = random.random() > .5
dims = 1 if sync else mask_dims
probs = torch.tensor(r, device=o.device)
mask = Binomial(1, probs).sample((n_masks, dims, mask_len))
if sync: mask = mask.repeat(1, mask_dims, 1)
mask = torch.sort(mask,dim=-1, descending=True)[0].bool()
return mask
def natural_mask(o):
"""Applies natural missingness in a batch to non-nan values in the next sample"""
mask1 = torch.isnan(o)
mask2 = rotate_axis0(mask1)
return torch.logical_and(mask2, ~mask1)
t = torch.rand(16, 3, 100)
mask = create_subsequence_mask(t, sync=False)
test_eq(mask.shape, t.shape)
mask = create_subsequence_mask(t, sync=True)
test_eq(mask.shape, t.shape)
mask = create_variable_mask(t)
test_eq(mask.shape, t.shape)
mask = create_future_mask(t)
test_eq(mask.shape, t.shape)
o = torch.randn(2, 3, 4)
o[o>.5] = np.nan
test_eq(torch.isnan(natural_mask(o)).sum(), 0)
t = torch.rand(16, 30, 100)
mask = create_subsequence_mask(t, r=.15) # default settings
test_eq(mask.dtype, torch.bool)
plt.figure(figsize=(10, 3))
plt.pcolormesh(mask[0], cmap='cool')
plt.title(f'sample 0 subsequence mask (sync=False) - default mean: {mask[0].float().mean().item():.3f}')
plt.show()
plt.figure(figsize=(10, 3))
plt.pcolormesh(mask[1], cmap='cool')
plt.title(f'sample 1 subsequence mask (sync=False) - default mean: {mask[1].float().mean().item():.3f}')
plt.show()
t = torch.rand(16, 30, 100)
mask = create_subsequence_mask(t, r=.3) # 30% of values masked
test_eq(mask.dtype, torch.bool)
plt.figure(figsize=(10, 3))
plt.pcolormesh(mask[0], cmap='cool')
plt.title(f'sample 0 subsequence mask (r=.3) mean: {mask[0].float().mean().item():.3f}')
plt.show()
t = torch.rand(16, 30, 100)
mask = create_subsequence_mask(t, lm=5) # average length of mask = 5
test_eq(mask.dtype, torch.bool)
plt.figure(figsize=(10, 3))
plt.pcolormesh(mask[0], cmap='cool')
plt.title(f'sample 0 subsequence mask (lm=5) mean: {mask[0].float().mean().item():.3f}')
plt.show()
t = torch.rand(16, 30, 100)
mask = create_subsequence_mask(t, stateful=False) # individual time steps masked
test_eq(mask.dtype, torch.bool)
plt.figure(figsize=(10, 3))
plt.pcolormesh(mask[0], cmap='cool')
plt.title(f'per sample subsequence mask (stateful=False) mean: {mask[0].float().mean().item():.3f}')
plt.show()
t = torch.rand(1, 30, 100)
mask = create_subsequence_mask(t, sync=True) # all time steps masked simultaneously
test_eq(mask.dtype, torch.bool)
plt.figure(figsize=(10, 3))
plt.pcolormesh(mask[0], cmap='cool')
plt.title(f'per sample subsequence mask (sync=True) mean: {mask[0].float().mean().item():.3f}')
plt.show()
t = torch.rand(1, 30, 100)
mask = create_variable_mask(t) # masked variables
test_eq(mask.dtype, torch.bool)
plt.figure(figsize=(10, 3))
plt.pcolormesh(mask[0], cmap='cool')
plt.title(f'per sample variable mask mean: {mask[0].float().mean().item():.3f}')
plt.show()
t = torch.rand(1, 30, 100)
mask = create_future_mask(t, r=.15, sync=True) # masked steps
test_eq(mask.dtype, torch.bool)
plt.figure(figsize=(10, 3))
plt.pcolormesh(mask[0], cmap='cool')
plt.title(f'future mask mean: {mask[0].float().mean().item():.3f}')
plt.show()
t = torch.rand(1, 30, 100)
mask = create_future_mask(t, r=.15, sync=False) # masked steps
mask = create_future_mask(t, r=.15, sync=True) # masked steps
test_eq(mask.dtype, torch.bool)
plt.figure(figsize=(10, 3))
plt.pcolormesh(mask[0], cmap='cool')
plt.title(f'future mask mean: {mask[0].float().mean().item():.3f}')
plt.show()
#export
def create_mask(o, r=.15, lm=3, stateful=True, sync=False, subsequence_mask=True, variable_mask=False, future_mask=False):
if r <= 0 or r >=1: return torch.zeros_like(o).bool()
if int(r * o.shape[1]) == 0:
variable_mask = False
if subsequence_mask and variable_mask:
random_thr = 1/3 if sync == 'random' else 1/2
if random.random() > random_thr:
variable_mask = False
else:
subsequence_mask = False
if future_mask:
return create_future_mask(o, r=r)
elif subsequence_mask:
return create_subsequence_mask(o, r=r, lm=lm, stateful=stateful, sync=sync)
elif variable_mask:
return create_variable_mask(o, r=r)
else:
raise ValueError('You need to set subsequence_mask, variable_mask or future_mask to True or pass a custom mask.')
# export
import matplotlib.colors as mcolors
class MVP(Callback):
order = 60
def __init__(self, r: float = .15, subsequence_mask: bool = True, lm: float = 3., stateful: bool = True, sync: bool = False, variable_mask: bool = False,
future_mask: bool = False, custom_mask: Optional = None, nan_to_num : int = 0, dropout: float = .1, crit: callable = None,
weights_path:Optional[str]=None, target_dir: str = './data/MVP', fname: str = 'model', save_best: bool = True, verbose: bool = False):
r"""
Callback used to perform the pretext task of reconstructing the original data after a binary mask has been applied.
Args:
r: proba of masking.
subsequence_mask: apply a mask to random subsequences.
lm: average mask len when using stateful (geometric) masking.
stateful: geometric distribution is applied so that average mask length is lm.
sync: all variables have the same masking.
variable_mask: apply a mask to random variables. Only applicable to multivariate time series.
future_mask: used to train a forecasting model.
custom_mask: allows passing a custom callable that takes the input tensor and returns a boolean mask. Values to mask should be set to True.
nan_to_num: integer used to fill masked values
dropout: dropout applied to the head of the model during pretraining.
crit: loss function that will be used. If None MSELossFlat().
weights_path: indicates the path to pretrained weights. This is useful when you want to continue training from a checkpoint. It will load the
pretrained weights to the model with the MVP head.
target_dir : directory where trained model will be stored.
fname : file name that will be used to save the pretrained model.
save_best: saves best model weights
"""
assert subsequence_mask or variable_mask or future_mask or custom_mask, \
'you must set (subsequence_mask and/or variable_mask) or future_mask to True or use a custom_mask'
if custom_mask is not None and (future_mask or subsequence_mask or variable_mask):
warnings.warn("Only custom_mask will be used")
elif future_mask and (subsequence_mask or variable_mask):
warnings.warn("Only future_mask will be used")
store_attr("subsequence_mask,variable_mask,future_mask,custom_mask,dropout,r,lm,stateful,sync,crit,weights_path,fname,save_best,verbose,nan_to_num")
self.PATH = Path(f'{target_dir}/{self.fname}')
if not os.path.exists(self.PATH.parent):
os.makedirs(self.PATH.parent)
self.path_text = f"pretrained weights_path='{self.PATH}.pth'"
def before_fit(self):
self.run = not hasattr(self, "gather_preds")
if 'SaveModelCallback' in [cb.__class__.__name__ for cb in self.learn.cbs]:
self.save_best = False # avoid saving if SaveModelCallback is being used
if not(self.run): return
# prepare to save best model
self.best = float('inf')
# modify loss for denoising task
self.old_loss_func = self.learn.loss_func
self.learn.loss_func = self._loss
if self.crit is None:
self.crit = MSELossFlat()
self.learn.MVP = self
self.learn.TSBERT = self
# remove and store metrics
self.learn.metrics = L([])
# change head with conv layer (equivalent to linear layer applied to dim=1)
assert hasattr(self.learn.model, "head"), "model must have a head attribute to be trained with MVP"
self.learn.model.head = nn.Sequential(nn.Dropout(self.dropout),
nn.Conv1d(self.learn.model.head_nf, self.learn.dls.vars, 1)
).to(self.learn.dls.device)
if self.weights_path is not None:
transfer_weights(self.learn.model, self.weights_path, device=self.learn.dls.device, exclude_head=False)
with torch.no_grad():
xb = torch.randn(2, self.learn.dls.vars, self.learn.dls.len).to(self.learn.dls.device)
assert xb.shape == self.learn.model(xb).shape, 'the model cannot reproduce the input shape'
def before_batch(self):
original_mask = torch.isnan(self.x)
if self.custom_mask is not None:
new_mask = self.custom_mask(self.x)
else:
new_mask = create_mask(self.x, r=self.r, lm=self.lm, stateful=self.stateful, sync=self.sync, subsequence_mask=self.subsequence_mask,
variable_mask=self.variable_mask, future_mask=self.future_mask).bool()
if original_mask.any():
self.mask = torch.logical_and(new_mask, ~original_mask)
else:
self.mask = new_mask
self.learn.yb = (torch.nan_to_num(self.x, self.nan_to_num),)
self.learn.xb = (self.yb[0].masked_fill(self.mask, self.nan_to_num), )
def after_epoch(self):
val = self.learn.recorder.values[-1][-1]
if self.save_best:
if np.less(val, self.best):
self.best = val
self.best_epoch = self.epoch
torch.save(self.learn.model.state_dict(), f'{self.PATH}.pth')
pv(f"best epoch: {self.best_epoch:3} val_loss: {self.best:8.6f} - {self.path_text}", self.verbose or (self.epoch == self.n_epoch - 1))
elif self.epoch == self.n_epoch - 1:
print(f"\nepochs: {self.n_epoch} best epoch: {self.best_epoch:3} val_loss: {self.best:8.6f} - {self.path_text}\n")
def after_fit(self):
self.run = True
def _loss(self, preds, target):
return self.crit(preds[self.mask], target[self.mask])
def show_preds(self, max_n=9, nrows=3, ncols=3, figsize=None, sharex=True, **kwargs):
b = self.learn.dls.valid.one_batch()
self.learn._split(b)
xb = self.xb[0].detach().cpu().numpy()
bs, nvars, seq_len = xb.shape
self.learn('before_batch')
masked_pred = torch.where(self.mask, self.learn.model(*self.learn.xb), tensor([np.nan], device=self.learn.x.device)).detach().cpu().numpy()
ncols = min(ncols, math.ceil(bs / ncols))
nrows = min(nrows, math.ceil(bs / ncols))
max_n = min(max_n, bs, nrows*ncols)
if figsize is None:
figsize = (ncols*6, math.ceil(max_n/ncols)*4)
fig, ax = plt.subplots(nrows=nrows, ncols=ncols,
figsize=figsize, sharex=sharex, **kwargs)
idxs = np.random.permutation(np.arange(bs))
colors = list(mcolors.TABLEAU_COLORS.keys()) + \
random_shuffle(list(mcolors.CSS4_COLORS.keys()))
i = 0
for row in ax:
for col in row:
color_iter = iter(colors)
for j in range(nvars):
try:
color = next(color_iter)
except:
color_iter = iter(colors)
color = next(color_iter)
col.plot(xb[idxs[i]][j], alpha=.5, color=color)
col.plot(masked_pred[idxs[i]][j],
marker='o', markersize=4, linestyle='None', color=color)
i += 1
plt.tight_layout()
plt.show()
TSBERT = MVP
```
# Experiments
```
from fastai.data.transforms import *
from tsai.data.all import *
from tsai.models.utils import *
from tsai.models.layers import *
from tsai.learner import *
from tsai.models.TSTPlus import *
from tsai.models.InceptionTimePlus import *
dsid = 'MoteStrain'
X, y, splits = get_UCR_data(dsid, split_data=False)
check_data(X, y, splits, False)
X[X<-1] = np.nan # This is to test the model works well even if nan values are passed through the dataloaders.
# Pre-train
tfms = [None, [Categorize()]]
batch_tfms = [TSStandardize(by_var=True)]
unlabeled_dls = get_ts_dls(X, splits=splits, tfms=tfms, batch_tfms=batch_tfms)
learn = ts_learner(unlabeled_dls, InceptionTimePlus, cbs=[MVP(fname=f'{dsid}')])
learn.fit_one_cycle(1, 3e-3)
learn = ts_learner(unlabeled_dls, InceptionTimePlus, cbs=[MVP(weights_path=f'data/MVP/{dsid}.pth')])
learn.fit_one_cycle(1, 3e-3)
learn.MVP.show_preds(sharey=True) # these preds are highly inaccurate as the model's been trained for just 1 epoch for testing purposes
# Fine-tune
tfms = [None, [Categorize()]]
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
labeled_dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=64)
learn = ts_learner(labeled_dls, InceptionTimePlus, pretrained=True, weights_path=f'data/MVP/{dsid}.pth', metrics=accuracy)
learn.fit_one_cycle(1)
tfms = [None, [Categorize()]]
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
unlabeled_dls = get_ts_dls(X, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=64)
fname = f'{dsid}_test'
mvp = MVP(subsequence_mask=True, sync='random', variable_mask=True, future_mask=True, fname=fname)
learn = ts_learner(unlabeled_dls, InceptionTimePlus, metrics=accuracy, cbs=mvp) # Metrics will not be used!
tfms = [None, [Categorize()]]
batch_tfms = [TSStandardize(by_var=True)]
unlabeled_dls = get_ts_dls(X, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=64)
fname = f'{dsid}_test'
mvp = MVP(subsequence_mask=True, sync='random', variable_mask=True, future_mask=True, custom_mask=partial(create_future_mask, r=.15),
fname=fname)
learn = ts_learner(unlabeled_dls, InceptionTimePlus, metrics=accuracy, cbs=mvp) # Metrics will not be used!
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
```
```
import tensorflow_probability as tfp
import tensorflow as tf
import networkit as nit
import numpy as np
import matplotlib
import networkx as nx
import matplotlib.pyplot as plt
import random
import math as m
pi = tf.constant(m.pi)
def loss_func():
loss = 0
for node in graph.iterNodes():
walk = random_walk(node,3)
for i in walk:
p = tf.transpose(matrix[int(node)])
t = tf.tensordot(p,matrix[int(i)],1)
loss -= tf.math.log(tf.math.abs(tf.math.sigmoid(t)))
for n_i in P_dist():
loss += tf.math.log(tf.math.abs(tf.tensordot(tf.transpose(matrix[node]), matrix[int(n_i)],1)))  # axes=1 for a dot product, matching the positive-sample term above
return loss
def random_walk(node, walk_len):
lst = [node]
for _ in range(walk_len):
k=[i for i in graph.iterNeighbors(int(lst[-1]))]
if k:
lst.append(np.random.choice(np.asarray(k,dtype=np.float32)))
else:
break
return lst
def P_dist():
return np.random.choice(node_list, size = 7, replace = True, p = degree_dist)
def visualize(graph, nodelst = [], lst = []):
''' Visualizes the graph.
'''
plt.clf() #clear the screen
matrix = nit.algebraic.adjacencyMatrix(graph, matrixType='sparse') #build the adjacency matrix so the graph can be converted to a networkx object
G2 = nx.from_scipy_sparse_matrix(matrix)# converts the graph
if not lst: #this is for just viewing graph
nx.draw(G2,with_labels=True)
else: #this is for viewing graph with colors to see clusters
colors = [random.uniform(0,0.1) for _ in range(0,len(lst))]
color_lst = []
for i in nodelst:
for index, j in enumerate(lst):
if i in j:
color_lst.append(colors[index])
nx.draw(G2,nodelist = nodelst ,with_labels=True, node_color = color_lst)
plt.axis('equal')
#graph = nit.generators.ClusteredRandomGraphGenerator(20,5,0.2,0.4).generate()
graph = nit.generators.ErdosRenyiGenerator(10, 0.3, directed = False, selfLoops = False).generate()
totalNodes = graph.numberOfNodes()
g = tf.random.Generator.from_seed(1234)
matrix = g.uniform(shape=(graph.numberOfNodes(),50), minval = 0.001)
node_list = tf.convert_to_tensor([i for i in graph.iterNodes()], dtype=tf.float32)
total_weight = sum(graph.degree(i) for i in graph.iterNodes())
degree_dist = np.asarray([graph.degree(i)/total_weight for i in node_list], dtype=np.float32)
visualize(graph)
opt = tf.keras.optimizers.SGD(learning_rate=0.01)
#print(matrix)
matrix = tf.Variable(matrix)
losses = opt.minimize(loss_func,[matrix])
# In TF2/eager mode, the optimization runs immediately.
#print("optimized value is {} with loss {}".format("h", loss_func()))
#print(matrix)
def similarity(y_true, y_pred):
coss = tf.keras.losses.CosineSimilarity(axis=0)
return coss(y_true,y_pred).numpy()
print(similarity(matrix[0],matrix[1]))
for i in graph.iterNodes():
print("{} - {} similarity: {}".format(0,i,similarity(matrix[0],matrix[i])))
%%html
$\sum_{v\in V}$
```
# ALIGN Tutorial Notebook: DEVIL'S ADVOCATE
This notebook provides an introduction to **ALIGN**,
a tool for quantifying multi-level linguistic similarity
between speakers, using the "Devil's Advocate" transcript data reported in Duran, Paxton, and Fusaroli (2019), "ALIGN: Analyzing Linguistic Interactions with Generalizable techNiques - a Python Library," which can be accessed here for reference: https://osf.io/kx8ur/.
## Tutorial Overview
The Devil's Advocate ("DA") study examines interpersonal linguistic alignment within dyads across two conversations. Participants either agreed or disagreed with each other (a randomly assigned between-dyads condition), and one of the two conversations involved the truth while the other involved deception (a within-subjects condition), with the order of conversations counterbalanced across dyads.
**Transcript Data:**
The complete de-identified dataset of raw conversational transcripts is hosted on a secure, protected-access data repository provided by the Inter-university Consortium for Political and Social Research (ICPSR). These transcripts must be downloaded before using this tutorial; please access them via the ICPSR repository: http://dx.doi.org/10.3886/ICPSR37124.v1.
**Analysis:**
To replicate the results reported in Duran, Paxton, and Fusaroli (2019), or for an example of R code used to analyze the ALIGN output for this dataset, please visit the OSF repository for this project: https://osf.io/3TGUF/
***
## Table of Contents
* [Getting Started](#Getting-Started)
* [Prerequisites](#Prerequisites)
* [Preparing input data](#Preparing-input-data)
* [Filename conventions](#Filename-conventions)
* [Highest-level functions](#Highest-level-functions)
* [Setup](#Setup)
* [Import libraries](#Import-libraries)
* [Specify ALIGN path settings](#Specify-ALIGN-path-settings)
* [Phase 1: Prepare transcripts](#Phase-1:-Prepare-transcripts)
* [Preparation settings](#Preparation-settings)
* [Run preparation phase](#Run-preparation-phase)
* [Phase 2: Calculate alignment](#Phase-2:-Calculate-alignment)
* [For real data: Alignment calculation settings](#For-real-data:-Alignment-calculation-settings)
* [For real data: Run alignment calculation](#For-real-data:-Run-alignment-calculation)
* [For surrogate data: Alignment calculation settings](#For-surrogate-data:-Alignment-calculation-settings)
* [For surrogate data: Run alignment calculation](#For-surrogate-data:-Run-alignment-calculation)
* [ALIGN output overview](#ALIGN-output-overview)
* [Speed calculations](#Speed-calculations)
* [Printouts!](#Printouts!)
***
# Getting Started
## Preparing input data
**The transcript data used for this analysis adheres to the following requirements:**
* Each input text file contains a single conversation organized in an `N x 2` matrix
* Text file must be tab-delimited.
* Each row corresponds to a single conversational turn from a speaker.
* Rows must be temporally ordered based on their occurrence in the conversation.
* Rows must alternate between speakers.
* Speaker identifier and content for each turn are divided across two columns.
* Column 1 must have the header `participant`.
* Each cell specifies the speaker.
* Each speaker must have a unique label (e.g., `P1` and `P2`, `0` and `1`).
* **NOTE: For the DA dataset, a label of 0 indicates the speaker did not receive any special assignment at the start of the experiment, and a label of 1 indicates the speaker was assigned the role of deceiver (i.e., “devil’s advocate”) at the start of the experiment.**
* Column 2 must have the header `content`.
* Each cell corresponds to the transcribed utterance from the speaker.
* Each cell must end with a newline character: `\n`
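As a concrete illustration, the sketch below writes a tiny, invented two-speaker transcript that satisfies the requirements above (tab-delimited, `participant` and `content` headers, alternating speakers, newline-terminated rows). The speaker labels and utterances are hypothetical, not taken from the DA dataset.

```
import os
import tempfile

import pandas as pd

# Invented three-turn conversation illustrating the required N x 2 layout.
turns = [
    ("0", "i think we should expand the program"),
    ("1", "i disagree because the costs are too high"),
    ("0", "that is fair but consider the long term benefit"),
]

# Filename follows the default dyad/condition convention described below.
path = os.path.join(tempfile.gettempdir(), "dyad1-condA.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("participant\tcontent\n")         # column headers required by ALIGN
    for speaker, utterance in turns:
        f.write(f"{speaker}\t{utterance}\n")  # one turn per row, ending in \n

# Reading the file back confirms the tab-delimited N x 2 structure.
check = pd.read_csv(path, sep="\t")
print(list(check.columns), len(check))
```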
## Filename conventions
* Each conversation filename must be regularly formatted, with a dyad prefix and a conversation prefix, each followed by its identifier, and the parts separated by a unique character. By default, ALIGN looks for patterns that follow this convention: `dyad1-condA.txt`
* However, users may choose any labels for dyad and condition so long as the two labels are distinct from one another and neither is a subset of any possible dyad or condition identifier. Users may also use any character as a separator so long as it does not occur anywhere else in the filename.
* The chosen file format **must** be used when saving **all** files for this analysis.
**NOTE: For the DA dataset, each conversation text file is named in the format dyadN_condX-Y-Z (e.g., dyad11_cond1-0-2).**
The X, Y, and Z condition codes mean:
* X = Whether the dyad agreed or disagreed with each other: a value of 1 indicates a disagreement conversation, a value of 2 indicates an agreement conversation (e.g., “cond1”)
* Y = Whether the conversation involved deception: a value of 0 indicates truth, a value of 1 indicates deception.
* Z = Conversation order. Given that each dyad had two conversations: a value of 2 indicates the conversation occurred first, a value of 3 indicates the conversation occurred last.
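As a sanity check on the naming scheme, a small regular expression (a sketch for illustration, not part of ALIGN itself) can pull the dyad number and the three condition codes out of a DA-style filename:

```
import re

# Pattern assumes the dyadN_condX-Y-Z convention described above.
pattern = re.compile(r"dyad(?P<dyad>\d+)_cond(?P<agree>\d)-(?P<deception>\d)-(?P<order>\d)")

match = pattern.match("dyad11_cond1-0-2")
codes = match.groupdict()
print(codes)  # {'dyad': '11', 'agree': '1', 'deception': '0', 'order': '2'}
```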
## Highest-level functions
Given appropriately prepared transcript files, ALIGN can be run in 3 high-level functions:
**`prepare_transcripts`**: Pre-process each standardized
conversation, checking that it conforms to the requirements.
Each utterance is tokenized, lemmatized, and tagged
for part of speech.
**`calculate_alignment`**: Generates turn-level and
conversation-level alignment scores (lexical,
conceptual, and syntactic) across a range of
*n*-gram sequences.
**`calculate_baseline_alignment`**: Generate a surrogate corpus
and run alignment analysis (using identical specifications
from `calculate_alignment`) on it to produce a baseline.
***
# Setup
## Import libraries
Install ALIGN if you have not already.
```
import sys
!{sys.executable} -m pip install align
```
Import packages we'll need to run ALIGN.
```
import align, os
import pandas as pd
```
Import `time` so that we can get a sense of how
long the ALIGN pipeline takes.
```
import time
```
Import `warnings` to flag us if required files aren't provided.
```
import warnings
```
## Install additional NLTK packages
Download some additional `nltk` packages needed for `align` to work.
```
import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('wordnet')
```
## Specify ALIGN path settings
ALIGN will need to know where the raw transcripts are stored, where to store the processed data, and where to read in any additional files needed for optional ALIGN parameters.
### Required directories
For the sake of this tutorial, specify a base path that will serve as our jumping-off point for our saved data. All of the shipped data will be called from the package directory but the DA transcripts will need to be added manually.
**`BASE_PATH`**: Containing directory for this tutorial.
```
BASE_PATH = os.getcwd()
```
**`DA_EXAMPLE`**: Subdirectories for output and other
files for this tutorial. (We'll create a default directory
if one doesn't already exist.)
```
DA_EXAMPLE = os.path.join(BASE_PATH,
'DA/')
if not os.path.exists(DA_EXAMPLE):
os.makedirs(DA_EXAMPLE)
```
**`TRANSCRIPTS`**: Transcript text files must be first downloaded from the ICPSR repository.
Next, set variable for folder name (as string) for relative location of folder into which the downloaded transcript files need to be manually added. (We'll create a default directory if one doesn't already exist.)
```
TRANSCRIPTS = os.path.join(DA_EXAMPLE,
'DA-transcripts/')
if not os.path.exists(TRANSCRIPTS):
os.makedirs(TRANSCRIPTS)
if not os.listdir(TRANSCRIPTS) :
warnings.warn('DA text files not found at the specified '
'location. Please download from '
'http://dx.doi.org/10.3886/ICPSR37124.v1 '
'and add to directory.')
```
**`PREPPED_TRANSCRIPTS`**: Set variable for folder name
(as string) for relative location of folder into which
prepared transcript files will be saved. (We'll create
a default directory if one doesn't already exist.)
```
PREPPED_TRANSCRIPTS = os.path.join(DA_EXAMPLE,
'DA-prepped/')
if not os.path.exists(PREPPED_TRANSCRIPTS):
os.makedirs(PREPPED_TRANSCRIPTS)
```
**`ANALYSIS_READY`**: Set variable for folder name
(as string) for relative location of folder into
which analysis-ready dataframe files will be saved.
(We'll create a default directory if one doesn't
already exist.)
```
ANALYSIS_READY = os.path.join(DA_EXAMPLE,
'DA-analysis/')
if not os.path.exists(ANALYSIS_READY):
os.makedirs(ANALYSIS_READY)
```
**`SURROGATE_TRANSCRIPTS`**: Set variable for folder name
(as string) for relative location of folder into which all
prepared surrogate transcript files will be saved. (We'll
create a default directory if one doesn't already exist.)
```
SURROGATE_TRANSCRIPTS = os.path.join(DA_EXAMPLE,
'DA-surrogate/')
if not os.path.exists(SURROGATE_TRANSCRIPTS):
os.makedirs(SURROGATE_TRANSCRIPTS)
```
### Paths for optional parameters
**`OPTIONAL_PATHS`**: If using Stanford POS tagger or
pretrained vectors, the path to these files. If these
files are provided in other locations, be sure to
change the file paths for them. (We'll create a default
directory if one doesn't already exist.)
```
OPTIONAL_PATHS = os.path.join(DA_EXAMPLE,
'optional_directories/')
if not os.path.exists(OPTIONAL_PATHS):
os.makedirs(OPTIONAL_PATHS)
```
#### Stanford POS Tagger
The Stanford POS tagger **will not be used** by
default in this example. However, you may use them
by uncommenting and providing the requested file
paths in the cells in this section and then changing
the relevant parameters in the ALIGN calls below.
If desired, we could use the Stanford part-of-speech
tagger along with the Penn part-of-speech tagger
(which is always used in ALIGN). To do so, the files
will need to be downloaded separately:
https://nlp.stanford.edu/software/tagger.shtml#Download
**`STANFORD_POS_PATH`**: If using Stanford POS tagger
with the Penn POS tagger, path to Stanford directory.
```
# STANFORD_POS_PATH = os.path.join(OPTIONAL_PATHS,
# 'stanford-postagger-full-2018-10-16/')
# if os.path.exists(STANFORD_POS_PATH) == False:
# warnings.warn('Stanford POS directory not found at the specified '
# 'location. Please update the file path with '
# 'the folder that can be directly downloaded here: '
# 'https://nlp.stanford.edu/software/stanford-postagger-full-2018-10-16.zip '
# '- Alternatively, comment out the '
# '`STANFORD_POS_PATH` information.')
```
**`STANFORD_LANGUAGE`**: If using Stanford tagger,
set language model to be used for POS tagging.
```
# STANFORD_LANGUAGE = os.path.join('models/english-left3words-distsim.tagger')
# if os.path.exists(STANFORD_POS_PATH + STANFORD_LANGUAGE) == False:
# warnings.warn('Stanford tagger language not found at the specified '
# 'location. Please update the file path or comment '
# 'out the `STANFORD_POS_PATH` information.')
```
#### Google News pretrained vectors
The Google News pretrained vectors **will be used**
by default in this example. The file is available for
download here: https://code.google.com/archive/p/word2vec/
If desired, researchers may choose to read in pretrained
`word2vec` vectors rather than creating a semantic space
from the corpus provided. This may be especially useful
for small corpora (i.e., fewer than 30k unique words),
although the choice of semantic space corpus should be
made with careful consideration about the nature of the
linguistic context (for further discussion, see Duran,
Paxton, & Fusaroli, 2019).
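To see why pretrained vectors help, recall that conceptual alignment ultimately reduces to cosine similarity between vectors. The toy 3-dimensional vectors below are invented stand-ins for the 300-dimensional Google News embeddings, used only to illustrate the computation:

```
import numpy as np

# Invented toy embeddings; the real GoogleNews vectors are 300-dimensional.
vectors = {
    "dog":   np.array([0.9, 0.1, 0.0]),
    "puppy": np.array([0.8, 0.2, 0.0]),
    "car":   np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: dot product over the product of the norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["dog"], vectors["puppy"]))  # close to 1: semantically similar
print(cosine(vectors["dog"], vectors["car"]))    # close to 0: unrelated
```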
**`PRETRAINED_INPUT_FILE`**: If using pretrained vectors, path
to pretrained vector files. You may choose to download the file
directly to this path or change the path to a different one.
```
PRETRAINED_INPUT_FILE = os.path.join(OPTIONAL_PATHS,
'GoogleNews-vectors-negative300.bin')
if not os.path.exists(PRETRAINED_INPUT_FILE):
warnings.warn('Google News vector not found at the specified '
'location. Please update the file path with '
'the .bin file that can be accessed here: '
'https://code.google.com/archive/p/word2vec/ '
'- Alternatively, comment out the `PRETRAINED_INPUT_FILE` information')
```
***
# Phase 1: Prepare transcripts
In Phase 1, we take our raw transcripts and get them ready
for later ALIGN analysis.
## Preparation settings
There are a number of parameters that we can set for the
`prepare_transcripts()` function:
```
print(align.prepare_transcripts.__doc__)
```
For the sake of this demonstration, we'll keep everything as
defaults. Among other parameters, this means that:
* any turns fewer than 2 words will be removed from the corpus
(`minwords=2`),
* we'll be using regex to strip out any filler words
(e.g., "uh," "um," "huh"; `use_filler_list=None`),
* if you like, you can supply additional filler words as `use_filler_list=["string1", "string2"]` but be sure to set `filler_regex_and_list=True`
* we'll be using the Project Gutenberg corpus to create our
spell-checker algorithm (`training_dictionary=None`),
* we'll rely only on the Penn POS tagger
(`add_stanford_tags=False`), and
* our data will be saved both as individual conversation files
and as a master dataframe of all conversation outputs
(`save_concatenated_dataframe=True`).
## Run preparation phase
First, we prepare our transcripts by reading in individual `.txt`
files for each conversation, clean up undesired text and turns,
spell-check, tokenize, lemmatize, and add POS tags.
```
start_phase1 = time.time()
model_store = align.prepare_transcripts(
input_files=TRANSCRIPTS,
output_file_directory=PREPPED_TRANSCRIPTS,
minwords=2,
use_filler_list=None,
filler_regex_and_list=False,
training_dictionary=None,
add_stanford_tags=False,
### if you want to run the Stanford POS tagger, be sure to uncomment the next two lines
# stanford_pos_path=STANFORD_POS_PATH,
# stanford_language_path=STANFORD_LANGUAGE,
save_concatenated_dataframe=True)
end_phase1 = time.time()
```
***
# Phase 2: Calculate alignment
## For real data: Alignment calculation settings
There are a number of parameters that we can set for the
`calculate_alignment()` function:
```
print(align.calculate_alignment.__doc__)
```
For the sake of this tutorial, we'll keep everything as
defaults. Among other parameters, this means that we'll:
* use only unigrams and bigrams for our *n*-grams
(`maxngram=2`),
* use pretrained vectors instead of creating our own
semantic space, since our tutorial corpus is quite
small (`use_pretrained_vectors=True` and
`pretrained_input_file=PRETRAINED_INPUT_FILE`),
* ignore exact lexical duplicates when calculating
syntactic alignment (`ignore_duplicates=True`),
* rely only on the Penn POS tagger
(`add_stanford_tags=False`), and
* implement high- and low-frequency cutoffs to clean
our transcript data (`high_sd_cutoff=3` and
`low_n_cutoff=1`).
Whenever we calculate a baseline level of alignment,
we need to include the same parameter values for any
parameters that are present in both `calculate_alignment()`
(this step) and `calculate_baseline_alignment()`
(next step). As a result, we'll specify these here:
```
# set standards to be used for real and surrogate
INPUT_FILES = PREPPED_TRANSCRIPTS
MAXNGRAM = 2
USE_PRETRAINED_VECTORS = True
SEMANTIC_MODEL_INPUT_FILE = os.path.join(DA_EXAMPLE,
'align_concatenated_dataframe.txt')
PRETRAINED_FILE_DIRECTORY = PRETRAINED_INPUT_FILE
ADD_STANFORD_TAGS = False
IGNORE_DUPLICATES = True
HIGH_SD_CUTOFF = 3
LOW_N_CUTOFF = 1
```
## For real data: Run alignment calculation
```
start_phase2real = time.time()
[turn_real,convo_real] = align.calculate_alignment(
input_files=INPUT_FILES,
maxngram=MAXNGRAM,
use_pretrained_vectors=USE_PRETRAINED_VECTORS,
pretrained_input_file=PRETRAINED_INPUT_FILE,
semantic_model_input_file=SEMANTIC_MODEL_INPUT_FILE,
output_file_directory=ANALYSIS_READY,
add_stanford_tags=ADD_STANFORD_TAGS,
ignore_duplicates=IGNORE_DUPLICATES,
high_sd_cutoff=HIGH_SD_CUTOFF,
low_n_cutoff=LOW_N_CUTOFF)
end_phase2real = time.time()
```
## For surrogate data: Alignment calculation settings
For the surrogate or baseline data, we have many of the same
parameters for `calculate_baseline_alignment()` as we do for
`calculate_alignment()`:
```
print(align.calculate_baseline_alignment.__doc__)
```
As mentioned above, when calculating the baseline, it is **vital**
to include the *same* parameter values for any parameters that
are included in both `calculate_alignment()` and
`calculate_baseline_alignment()`. As a result, we re-use those
values here.
We also demonstrate how to generate a subset of
surrogate pairings rather than all possible pairings.
In addition to the parameters that we're re-using from
the `calculate_alignment()` values (see above), we'll
keep most parameters at their defaults by:
* preserving the turn order when creating surrogate
pairs (`keep_original_turn_order=True`),
* specifying dyad with the `dyad` prefix
(`dyad_label='dyad'`), and
* specifying condition with the `cond` prefix
(`condition_label='cond'`).
However, we will also change some of these defaults,
including:
* generating only a subset of surrogate data equal
to the size of the real data (`all_surrogates=False`)
and
* using an underscore to separate the dyad and
condition identifiers (`id_separator='\_'`), matching
the DA filename convention.
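The idea behind surrogate pairing can be sketched in a few lines: speakers from different real dyads are re-paired so that the baseline "conversations" never actually happened. The dyad and speaker names below are hypothetical; `calculate_baseline_alignment()` does this (and much more) internally.

```
# Hypothetical real dyads: (speaker A, speaker B) per dyad.
dyads = {
    "dyad1": ("A1", "B1"),
    "dyad2": ("A2", "B2"),
    "dyad3": ("A3", "B3"),
}

# Rotate the partner list by one so no speaker is re-paired
# with their real conversational partner.
names = list(dyads)
partners = names[1:] + names[:1]
surrogates = [(dyads[a][0], dyads[b][1]) for a, b in zip(names, partners)]
print(surrogates)  # [('A1', 'B2'), ('A2', 'B3'), ('A3', 'B1')]
```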
## For surrogate data: Run alignment calculation
```
start_phase2surrogate = time.time()
[turn_surrogate,convo_surrogate] = align.calculate_baseline_alignment(
input_files=INPUT_FILES,
maxngram=MAXNGRAM,
use_pretrained_vectors=USE_PRETRAINED_VECTORS,
pretrained_input_file=PRETRAINED_INPUT_FILE,
semantic_model_input_file=SEMANTIC_MODEL_INPUT_FILE,
output_file_directory=ANALYSIS_READY,
add_stanford_tags=ADD_STANFORD_TAGS,
ignore_duplicates=IGNORE_DUPLICATES,
high_sd_cutoff=HIGH_SD_CUTOFF,
low_n_cutoff=LOW_N_CUTOFF,
surrogate_file_directory=SURROGATE_TRANSCRIPTS,
all_surrogates=False,
keep_original_turn_order=True,
id_separator='\_',
dyad_label='dyad',
condition_label='cond')
convo_surrogate
end_phase2surrogate = time.time()
```
***
# ALIGN output overview
## Speed calculations
As promised, let's take a look at how long it takes to run each section. Time is given in seconds.
**Phase 1:**
```
end_phase1 - start_phase1
```
**Phase 2, real data:**
```
end_phase2real - start_phase2real
```
**Phase 2, surrogate data:**
```
end_phase2surrogate - start_phase2surrogate
```
**All phases:**
```
end_phase2surrogate - start_phase1
```
## Printouts!
And that's it! Before we go, let's take a look at the output from the real data analyzed at the turn level for each conversation (`turn_real`) and at the conversation level for each dyad (`convo_real`). We'll then look at our surrogate data, analyzed both at the turn level (`turn_surrogate`) and at the conversation level (`convo_surrogate`). In our next step, we would then take these data and plug them into our statistical model of choice. As an example of how this was done for Duran, Paxton, and Fusaroli (2019, *Psychological Methods*, https://doi.org/10.1037/met0000206) please visit: https://osf.io/3TGUF/
```
turn_real.head(10)
convo_real.head(10)
turn_surrogate.head(10)
convo_surrogate.head(10)
```
```
print("Hi")
print(8)
Name = "Amr"
Age = 38
print("\r Welcome {0} your age is {1} and you\n are too old to play with me" .format(Name,Age))
print(f"Welcome {Name} your age is {Age}")
s= "Amr"
s2= s.replace("A","O")
print(f"The old is {s} and the new name is {s2}")
print("""print this string on
many many many lines""")
Name = "Amr"
print(Name.title())
len(Name)
print(Name.isalpha())
Name.index("r")
print(Name[0:3])
print(Name*10)
```
**Tasks**
```
#Task 1
Y= input("First Number")
X= input("Second Number")
print(int(Y)*int(X))
Y= input("First Number")
X= input("Second Number")
print(int(Y)+int(X))
Y= input("First Number")
X= input("Second Number")
print(int(Y)-int(X))
Y= input("First Number")
X= input("Second Number")
print(int(Y)/int(X))
#Task2
my_name ="Amr Abdelkarim Abdelfata Mohamed"
my_name.upper()
my_name ="Amr Abdelkarim Abdelfata Mohamed"
my_name.isnumeric()
my_name ="Amr Abdelkarim Abdelfatah Mohamed"
New_Name= my_name.replace("Mohamed","Eleraqi")
print(New_Name)
len(New_Name)
print(New_Name*2)
Age = 39
print("\r Welcome {0} your age is {1} and you\n are too old to play with me" .format(New_Name,Age))
print("welcome %s your age is %s" %(New_Name,Age))
print(f"welcome {New_Name} your age is {Age}")
print(f"welcome {New_Name}\nyour age is {Age}")
```
**second lecture tasks**
```
#1
t=(4,5,6)
x,y,z=t
print(f'x={x}\ny={y}\nz={z}\n')
#2
set_1={4,5,6,7,8,8,7}
set_2={1,2,3,4,4,5,6,9}
newset=set_1.union(set_2)
print(newset)
common=set_1.intersection(set_2)
print(common)
print(set_1 - set_2)
print(set_2 - set_1)
print(set_2.difference(set_1))
print(max(set_2))
print(len(set_1))
print((set_1 - set_2).union(set_1))
print(set_1 ^ set_2)
print(set_1 | set_2)
set_1.add (9)
set_2.remove (6)
print(set_2)
#3
dict = {'Name':'Amr',
'gender': 'Male',
'feeling': 'Happy'}
print(dict)
del dict['feeling']
print(dict)
'age' in dict
dict ['gender']
#4
x= input("enter the number")
print(float(x))
print(int(float(x)))
#5
str = "will rock it"
for line in range(5):
print(str)
```
**Day 3**
```
list1= [1,2,3]
from collections.abc import Iterable
hasattr(list1, '__iter__')
isinstance(list1, Iterable)
list2= [1,2,3,4]
for i in list2:
print (i)
for i in range (101,110,2):
print(i)
str = 'فلسطين'
for i in range(5):
print(str)
for i in range (10):
print(i)
wallet = 1000
if wallet > 3000:
print(wallet)
else:
print("you don't have enough money")
salary= int(input("your yearly salary is "))
if salary > 10000:
print(salary*0.10)
elif salary < 10000:
print(salary*0.05)
else:
print(salary)
import pandas as pd
df = pd.read_csv("/content/tweets_01-08-2021.csv")
df.head()
df.shape
df.describe().T
df.columns
import matplotlib.pyplot as plt
import seaborn as sns
q_Var = ['retweets']
expGrid = sns.PairGrid(df, y_vars = 'favorites', x_vars = q_Var)
expGrid.map(sns.regplot)
new_data = df.copy()
new_data.tail()
df['date'] = pd.to_datetime(df['date'])
plt.figure(figsize=(10,7))
sns.countplot(df['date'].apply(lambda x: x.year))
plt.xlabel('Year')
plt.show()
```
**Exam**
```
def f_cube(number):
return number**3
def s_by_three(number):
if number % 3 == 0 :
return f_cube(number)
else :
return False
f_cube(3)
s_by_three (6)
def dis(N):
if type(N) == int or type(N) == float:
return abs(N)
else:
return "Nope"
dis(-5.6)
dis("m")
```
# **Q1**
```
class Vehicle:
def __init__(self, name, max_speed, mileage):
self.name = name
self.max_speed = max_speed
self.mileage = mileage
class Bus(Vehicle):
pass
Details_school_bus = Bus("School Volvo", 360, 24)
print("Vehicle Name:", Details_school_bus.name, "Speed:", Details_school_bus.max_speed, "Mileage:", Details_school_bus.mileage)
```
#**Q2**
```
class bus(Vehicle):
def seating_capacity(self, capacity=50):
return f"The seating capacity of a {self.name} is {capacity} passengers"
bus("Details_school_bus", 200, 3000).seating_capacity()
```
# **Q3**
```
class Vehicle:
def __init__(self, name, mileage, capacity):
self.name = name
self.mileage = mileage
self.capacity = capacity
def fare(self):
return self.capacity * 100
class Bus(Vehicle):
pass
School_bus = Bus("School Volvo", 12, 50)
print("Total Details Bus fare is:", School_bus.fare())
```
<a href="https://colab.research.google.com/github/LucasD-SEO/site-pages-graph/blob/master/Predicting_Successful_Content.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#Predicting Successful Content
* Let's find the 15% of search queries that Google has never seen before
* Group them into canonical queries (consolidate duplicates)
* Manually group intents
* Finally, forecast their traffic 30, 60, and 90 days out to pick the most promising candidates
```
%%capture
!pip install git+https://github.com/joshcarty/google-searchconsole
```
First, there is some setup to download a client_id.json file our Python code can use to connect securely to Google Search Console.
1. Activate the Search Console API in Compute Engine: https://console.cloud.google.com/apis/api/webmasters.googleapis.com/overview?project=&folder=&organizationId=
2. Create New Credentials / Help me choose (Search Console API, Other UI, User data): https://console.cloud.google.com/apis/credentials/wizard?api=iamcredentials.googleapis.com&project=
3. Download client_id.json
```
#upload client_id.json and credentials.json files
from google.colab import files
names = files.upload()
#names
filename=list(names.keys())[0]
filename
import searchconsole
account = searchconsole.authenticate(client_config=filename, serialize='credentials.json', flow="console")
account.webproperties
#Insert your domain name below.
domain_name = "https://www.evaneos.fr/" #@param {type:"string"}
webproperty = account[domain_name]
```
The line below should print the site's property
```
webproperty
```
```
#let's build a pandas dataframe with the search console data
import pandas as pd
def get_search_console_data(webproperty, days=-365):
if webproperty is not None:
query = webproperty.query.range(start='today', days=days).dimension('date', 'query')
r = query.get()
df = pd.DataFrame(r.rows)
return df
print("Web property doesn't exist, please select a valid one from this list")
print(account.webproperties)
return None
df = get_search_console_data(webproperty)
df.head()
df.info()
df.to_csv("canadahelps.csv")
!gzip canadahelps.csv
df["date"] = pd.to_datetime(df.date)
df.info()
```
Most recent data is 2 days old
```
df[df["date"] > "2020-11-6"]
last_day_queries = df[df["date"] > "2020-11-6"]["query"]
len(last_day_queries)
rest_of_queries = df[df["date"] < "2020-11-6"]["query"]
len(rest_of_queries)
```
Next, we want to find the queries in last day, but not in the rest.
```
fiften_percent = set(last_day_queries) - set(rest_of_queries)
len(fiften_percent)
fiften_percent
```
Let's check if these queries are semantic duplicates of existing ones.
```
%%capture
!pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer('distilbert-base-nli-stsb-mean-tokens')
# Two lists of sentences
sentences1 = ['The cat sits outside',
'A man is playing guitar',
'The new movie is awesome']
sentences2 = ['The dog plays in the garden',
'A woman watches TV',
'The new movie is so great']
#Compute embedding for both lists
embeddings1 = model.encode(sentences1, convert_to_tensor=True)
embeddings2 = model.encode(sentences2, convert_to_tensor=True)
#Compute cosine similarities
cosine_scores = util.pytorch_cos_sim(embeddings1, embeddings2)
#Output the pairs with their score
for i in range(len(sentences1)):
print("{} \t\t {} \t\t Score: {:.4f}".format(sentences1[i], sentences2[i], cosine_scores[i][i]))
```
Next, let's try with our queries
```
fiften_percent_list = list(fiften_percent)
#Compute embedding for both lists
embeddings1 = model.encode(fiften_percent_list, convert_to_tensor=True)
# try on a smaller set, as it takes too long to run on full set of +1m queries
rest_of_queries_list = list(set(rest_of_queries))[:10000]
embeddings2 = model.encode( rest_of_queries_list, convert_to_tensor=True)
#Compute cosine similarities
cosine_scores = util.pytorch_cos_sim(embeddings1, embeddings2)
#Output each new query's closest historical match with its score
for i in range(len(fiften_percent_list)):
best_j = int(cosine_scores[i].argmax())
score = cosine_scores[i][best_j]
if score > 0.4:
print(f"{i}. {fiften_percent_list[i]} <> {rest_of_queries_list[best_j]} \nScore: {score:.4f}")
```
Once we have the duplicate queries, we can use their historical traffic to predict the potential traffic of the new ones and prioritize the topics to focus on.
Loading from backup
```
df.head()
df.info()
ideas_df = df[df['query'].str.contains("idea")].copy()  # .copy() avoids SettingWithCopyWarning
ideas_df["date"] = pd.to_datetime(ideas_df["date"])
ideas_df
near_me_df = df[df['query'].str.contains("near me")].copy()
near_me_df["date"] = pd.to_datetime(near_me_df["date"])
ideas_df = ideas_df.set_index("date")
near_me_df = near_me_df.set_index("date")
ideas_df.info()
near_me_df.info()
ideas_df.head()
near_me_df.head()
grouped_ideas_df = ideas_df.groupby(pd.Grouper(freq='M')).sum()[["clicks", "impressions"]]
grouped_near_me_df = near_me_df.groupby(pd.Grouper(freq='M')).sum()[["clicks", "impressions"]]
grouped_ideas_df
grouped_near_me_df
import plotly.express as px
fig = px.line(grouped_ideas_df, y="clicks", title='Clicks over Time for Ideas')
fig.show()
fig = px.line(grouped_near_me_df, y="clicks", title='Clicks over Time for Near Me')
fig.show()
grouped_ideas_df = ideas_df.groupby(pd.Grouper(freq='D')).sum()[["clicks", "impressions"]]
grouped_ideas_df
fig = px.line(grouped_ideas_df, y="clicks", title='Clicks over Time for Ideas')
fig.show()
grouped_near_me_df = near_me_df.groupby(pd.Grouper(freq='D')).sum()[["clicks", "impressions"]]
grouped_near_me_df
fig = px.line(grouped_near_me_df, y="clicks", title='Clicks over Time for Near Me')
fig.show()
from fbprophet import Prophet
from fbprophet.plot import plot_plotly
grouped_ideas_df.reset_index()
```
Rename columns
```
dft = grouped_ideas_df.reset_index().rename(columns={"date":"ds", "clicks":"y"})
dft
m = Prophet()
m.fit(dft)
#Predicting clicks for the next 30 days.
future_30 = m.make_future_dataframe(periods=30)
forecast_30 = m.predict(future_30)
#Predicting clicks for the next 60 days.
future_60 = m.make_future_dataframe(periods=60)
forecast_60 = m.predict(future_60)
#Predicting clicks for the next 90 days.
future_90 = m.make_future_dataframe(periods=90)
forecast_90 = m.predict(future_90)
#Visualizing the prediction for next 30 days.
plot_plotly(m, forecast_30, xlabel='Date', ylabel='Clicks')
#Visualizing the prediction for next 60 days.
plot_plotly(m, forecast_60, xlabel='Date', ylabel='Clicks')
#Visualizing the prediction for next 90 days.
plot_plotly(m, forecast_90, xlabel='Date', ylabel='Clicks')
```
### Import Spacy
If you cannot import it, then open Anaconda Prompt as an Administrator (right click on Anaconda Prompt -> More -> Open as Admin) and then:
- conda install -c conda-forge spacy
- python -m spacy download en
This should work.
```
import spacy
nlp = spacy.load("en_core_web_sm")
text = "Tell me, Muse, of that man of many resources, who wandered far and wide, after sacking the holy citadel of Troy. Many the men whose cities he saw, whose ways he learned. Many the sorrows he suffered at sea, while trying to bring himself and his friends back alive. Yet despite his wishes he failed to save them, because of their own un-wisdom, foolishly eating the cattle of Helios, the Sun, so the god denied them their return. Tell us of these things, beginning where you will, Goddess, Daughter of Zeus."
doc = nlp(text)
for token in doc:
print(token.text, token.lemma_, token.pos_, token.is_stop)
```
### Reformatting the spaCy parse of that text as a pandas DataFrame
```
import pandas as pd
cols = ("text", "lemma", "POS", "explain", "stopword")
rows = []
for t in doc:
row = [t.text, t.lemma_, t.pos_, spacy.explain(t.pos_), t.is_stop]
rows.append(row)
df = pd.DataFrame(rows, columns=cols)
df
```
### Visualize the Parse Tree
```
from spacy import displacy
displacy.render(doc, style="dep")
```
### Sentence Boundary Detection (SBD) – also known as Sentence Segmentation
```
text = "Now, all the others, who had escaped destruction, had reached their homes, and were free of sea and war. \
He alone, longing for wife and home, Calypso, the Nymph, kept in her echoing cavern, desiring him for a husband. \
Not even when the changing seasons brought the year the gods had chosen for his return to Ithaca was he free from danger, \
and among friends. \
Yet all the gods pitied him, except Poseidon, \
who continued his relentless anger against godlike Odysseus until he reached his own land at last."
doc = nlp(text)
for sent in doc.sents:
print(">", sent)
```
### Non-Destructive Tokenization - Indexes
```
for sent in doc.sents:
print(">", sent.start, sent.end)
doc[25:52]
token = doc[45]
print(token.text, token.lemma_, token.pos_)
```
### Acquiring Text
```
import sys
import warnings
warnings.filterwarnings("ignore")
from bs4 import BeautifulSoup
import requests
import traceback
def get_text (url):
buf = []
try:
soup = BeautifulSoup(requests.get(url).text, "html.parser")
for p in soup.find_all("p"):
buf.append(p.get_text())
return "\n".join(buf)
except:
print(traceback.format_exc())
sys.exit(-1)
lic = {}
lic["mit"] = nlp(get_text("https://opensource.org/licenses/MIT"))
lic["asl"] = nlp(get_text("https://opensource.org/licenses/Apache-2.0"))
lic["bsd"] = nlp(get_text("https://opensource.org/licenses/BSD-3-Clause"))
for sent in lic["bsd"].sents:
print(">", sent)
```
### Compare Pairs
```
pairs = [
["mit", "asl"],
["asl", "bsd"],
["bsd", "mit"]
]
for a, b in pairs:
print(a, b, lic[a].similarity(lic[b]))
```
### Natural Language Understanding
```
text = "Now, all the others, who had escaped destruction, had reached their homes, and were free of sea and war. He alone, longing for wife and home, Calypso, the Nymph, kept in her echoing cavern, desiring him for a husband. Not even when the changing seasons brought the year the gods had chosen for his return to Ithaca was he free from danger, and among friends. Yet all the gods pitied him, except Poseidon, who continued his relentless anger against godlike Odysseus until he reached his own land at last."
doc = nlp(text)
for chunk in doc.noun_chunks:
print(chunk.text)
```
### Named Entities
```
for ent in doc.ents:
print(ent.text, ent.label_)
```
### Visualize Named Entities
```
displacy.render(doc, style="ent")
```
### NLTK
```
import nltk
nltk.download("wordnet")
```
If you have problems with spacy-wordnet, install it first:
pip install spacy-wordnet
```
from spacy_wordnet.wordnet_annotator import WordnetAnnotator
print("before", nlp.pipe_names)
if "WordnetAnnotator" not in nlp.pipe_names:
nlp.add_pipe(WordnetAnnotator(nlp.lang), after="tagger")
print("after", nlp.pipe_names)
```
### Perform an Automatic Lookup
```
token = nlp("withdraw")[0]
token._.wordnet.synsets()
token._.wordnet.lemmas()
token._.wordnet.wordnet_domains()
```
### Particular Domain or Set of Topics
```
domains = ["finance", "banking"]
sentence = nlp(u"I want to withdraw 5.000 euros.")
enriched_sent = []
for token in sentence:
# get synsets within the desired domains
synsets = token._.wordnet.wordnet_synsets_for_domain(domains)
if synsets:
lemmas_for_synset = []
for s in synsets:
# get synset variants and add to the enriched sentence
lemmas_for_synset.extend(s.lemma_names())
enriched_sent.append("({})".format("|".join(set(lemmas_for_synset))))
else:
enriched_sent.append(token.text)
print(" ".join(enriched_sent))
```
### Analyze Text Data
```
import scattertext as st
if "merge_entities" not in nlp.pipe_names:
nlp.add_pipe(nlp.create_pipe("merge_entities"))
if "merge_noun_chunks" not in nlp.pipe_names:
nlp.add_pipe(nlp.create_pipe("merge_noun_chunks"))
convention_df = st.SampleCorpora.ConventionData2012.get_data()
corpus = (st.CorpusFromPandas(convention_df,
category_col="party",
text_col="text",
nlp=st.whitespace_nlp_with_sentences).build())
html = st.produce_scattertext_explorer(
corpus,
category="democrat",
category_name="Democratic",
not_category_name="Republican",
width_in_pixels=1000,
metadata=convention_df["speaker"]
)
from IPython.display import IFrame
file_name = "foo.html"
with open(file_name, "wb") as f:
f.write(html.encode("utf-8"))
IFrame(src=file_name, width = 1200, height=700)
```
# 180328
Looked at the view- and purchase-related DataFrames and tried JOINing them. The results are as follows:
### 1. len(view) = 2833180
### 2. len(purchase) = 168996
### 3. len(view JOIN purchase) = 276
Ultimately, what we need to do is provide each customer with a recommendation list,
and view and purchase can serve as the links connecting customers and coupons.
---------------------
So what __meaning__ does each DataFrame carry if we use it to connect customers and coupons?
### 1. view = recommend based only on what customers viewed
> May also recommend coupons that were viewed by accident. Large amount of data.
> Use this to recommend any coupon a customer has looked at even once.
### 2. purchase = recommend based only on what customers bought
> Covers purchases that never appear in view (e.g., buying directly on a friend's recommendation without browsing the site).
> Covers reality more broadly than the viewed-and-purchased criterion below.
### 3. vp = recommend based only on what customers viewed and then bought
> This criterion covers the narrowest slice of reality, and there is not much data.
> Worth trying if the company wants to focus on loyal-customer activity.
> Loyal customers? The kind who check which coupons are newly posted on the site, view them, and purchase.
---------------------
## Then can we not recommend from viewed and purchased separately?
### Build a model for each, then combine their test outputs?
ex)
1. a = view_model(X_test), b = purchase_model(X_test)
2. a and b are PURCHASE_FLAG predictions for each coupon (probably as pd.Series)
3. Add them together
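A minimal sketch of combining the two models' outputs, assuming hypothetical `view_scores` and `purchase_scores` Series of per-coupon scores indexed by coupon id:

```python
import pandas as pd

# Hypothetical per-coupon scores returned by two separately trained models.
view_scores = pd.Series({"c1": 0.2, "c2": 0.7, "c3": 0.1})
purchase_scores = pd.Series({"c2": 0.5, "c3": 0.4, "c4": 0.9})

# Align on coupon id (coupons missing from one model contribute 0) and add,
# then rank to get a combined recommendation list.
combined = view_scores.add(purchase_scores, fill_value=0).sort_values(ascending=False)
print(combined)
```

`Series.add` with `fill_value=0` handles coupons that only one of the two models scored.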
# Data criteria for modeling
1. view = recommend only what was viewed
2. purchase = recommend only what was purchased
3. view and purchase = recommend only what was viewed and purchased
4. view or purchase = recommend what was viewed or purchased
----------
## coupon area.csv
1. Lists coupons and the areas where they can be used
2. 138185 rows, since some coupons are usable in multiple areas
3. 19368 unique coupon_ids; the primary key is all three columns
4. But coupon_list has 19413 unique coupon_ids
5. So where is the area info for the __45__ missing coupons? It does exist in coupon_list
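That 45-coupon gap can be checked with simple set arithmetic; a sketch on toy frames with a hypothetical `COUPON_ID` column:

```python
import pandas as pd

# Toy frames standing in for coupon_area.csv and coupon_list.csv.
coupon_area = pd.DataFrame({"COUPON_ID": ["a", "a", "b"],
                            "AREA": ["Tokyo", "Osaka", "Tokyo"]})
coupon_list = pd.DataFrame({"COUPON_ID": ["a", "b", "c"],
                            "AREA": ["Tokyo", "Tokyo", "Osaka"]})

# Coupons present in coupon_list but missing from coupon_area.
missing = set(coupon_list["COUPON_ID"]) - set(coupon_area["COUPON_ID"])
print(len(missing), missing)
```

On the real data this difference would have 45 elements.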
## prefecture locations.csv
1. Latitude and longitude for the 47 prefectures
2. 47 rows as well
# training coupon area VS coupon list
1. coupon_area has coupon IDs and every possible area
2. coupon_list's coupon IDs and area info match coupon_area
3. Then why do the unique coupon_id counts differ?
# 4. Should coupons absent from coupon_area be dropped from coupon_list?
-------------
Looking at the distribution of area info in coupon_list and coupon_area,
they are not very different (see coupon_area_eda)
### JOIN coupon_area with coupon_list (45 coupons excluded)
### Or train using coupon_list only
### Try both
--------------
# JOIN results
* purchase = about 170k (168996 rows)
* view = about 2.8M (2833180 rows)
#### 1. User <-> View <-> Coupon <-> Prefecture
= 2517206 rows × 41 columns
#### 2. User <-> View <-> Coupon <-> Area <-> Prefecture
= 2513829 rows × 42 columns
#### 3. User <-> Purchase <-> Coupon <-> Prefecture
= 168996 rows × 38 columns
#### 4. User <-> Purchase <-> Coupon <-> Area <-> Prefecture
= 168787 rows × 39 columns
#### 5. User <-> View X Purchase <-> Coupon <-> Prefecture
= ~~276 rows × 44 columns~~ (about 120k)
#### 6. User <-> View X Purchase <-> Coupon <-> Area <-> Prefecture
= ~~276 rows × 45 columns~~ (about 120k)
## Why does coupon X area X prefecture have fewer rows?
= 45 coupons get removed from coupon_list via coupon_area, and that effect cascades through the JOINs
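The cascade can be demonstrated on toy frames: chained inner JOINs silently drop every row whose coupon id is missing from coupon_area (column names here are hypothetical):

```python
import pandas as pd

coupon_list = pd.DataFrame({"COUPON_ID": ["a", "b", "c"]})  # "c" has no area row
coupon_area = pd.DataFrame({"COUPON_ID": ["a", "b"],
                            "AREA": ["Tokyo", "Osaka"]})
view = pd.DataFrame({"COUPON_ID": ["a", "c", "c"], "USER_ID": [1, 2, 3]})

# Chained inner joins: rows for coupon "c" disappear at the coupon_area step,
# so downstream frames shrink even though view itself contained them.
joined = view.merge(coupon_list, on="COUPON_ID").merge(coupon_area, on="COUPON_ID")
print(len(view), len(joined))
```

Here 3 view rows shrink to 1; on the real data the 45 area-less coupons cause the same kind of shrinkage.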
----------------------------------------------------------------------------
----------------
# 180329
## user_list csv
1. Customers are mostly in their 30s to 50s
2. 11890 men, 10983 women, 22873 in total
3. Many customers in Tokyo, Kanagawa, and Osaka
4. Withdrawal dates are recorded for only 922 customers
==> only 922 customers have withdrawn
5. Customer registrations hover in the 500~1000 range,
but spike sharply in __November 2010 and May 2011__
6. Between June 2011 and July 2012 the withdrawal rate rises, then falls.
In February 2012 the withdrawal rate drops sharply
# Whether coupons or customers, Tokyo, Kanagawa, and Osaka dominate
--------------
# 180330
So what __meaning__ does each DataFrame carry if we use it to connect customers and coupons?
### 1. view = recommend based only on what customers viewed
> May also recommend coupons that were viewed by accident. Large amount of data.
> Use this to recommend any coupon a customer has looked at even once.
### 2. purchase = recommend based only on what customers bought
> Covers purchases that never appear in view (e.g., buying directly on a friend's recommendation without browsing the site).
> Covers reality more broadly than the viewed-and-purchased criterion below.
### 3. rows in view with PURCHASE_FLG==1 = recommend based only on what was viewed and then bought
> This criterion covers the narrowest slice of reality, and there is not much data.
> Worth trying if the company wants to focus on loyal-customer activity.
> Loyal customers? The kind who check which coupons are newly posted on the site, view them, and purchase.
> # 4% of what was viewed was purchased (130k out of 2.8M)
# Data criteria for modeling
1. view = recommend only what was viewed
2. purchase = recommend only what was purchased
3. view with PURCHASE_FLAG==1 = recommend only what was viewed and purchased
4. view or purchase = recommend what was viewed or purchased
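The four criteria above can be sketched with pandas, assuming hypothetical `view` and `purchase` frames keyed by (USER_ID, COUPON_ID) and a PURCHASE_FLG column in view:

```python
import pandas as pd

view = pd.DataFrame({"USER_ID": [1, 1, 2],
                     "COUPON_ID": ["a", "b", "c"],
                     "PURCHASE_FLG": [0, 1, 0]})
purchase = pd.DataFrame({"USER_ID": [1, 3], "COUPON_ID": ["b", "d"]})

keys = ["USER_ID", "COUPON_ID"]
viewed = view[keys]                                            # 1. viewed only
purchased = purchase[keys]                                     # 2. purchased only
viewed_and_purchased = view[view["PURCHASE_FLG"] == 1][keys]   # 3. viewed with flag == 1
viewed_or_purchased = pd.concat([viewed, purchased]).drop_duplicates()  # 4. union
print(len(viewed), len(purchased), len(viewed_and_purchased), len(viewed_or_purchased))
```

Each frame gives a different (user, coupon) universe to train a recommender on.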
### Install Required Packages
```
! pip install numpy pandas scikit-learn matplotlib
```
### Imports
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_validate
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer, ENGLISH_STOP_WORDS
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
```
### Read data
```
train_df = pd.read_csv('../assist_material/datasets/extracted/q1/train.csv', sep=',')
train_df.columns = ['id', 'title', 'content', 'label']
```
### Benchmark Models
For benchmarking we use the following combinations: SVM with TF-IDF, Random Forest with TF-IDF, SVM with SVD, and
Random Forest with SVD. As the bag-of-words representation, I used a TF-IDF variant to vectorize the datasets.
```
vectorizer = TfidfVectorizer(max_features=50000)
svd = TruncatedSVD(n_components=300)
svm = SVC(kernel='linear')
random_forest = RandomForestClassifier(n_estimators=1000, max_features='sqrt', n_jobs=-1)
svm_tfidf = make_pipeline(vectorizer,svm)
random_forest_tfidf = make_pipeline(vectorizer, random_forest)
svm_tfidf_svd = make_pipeline(vectorizer, svd, svm)
random_forest_tfidf_svd = make_pipeline(vectorizer, svd, random_forest)
```
### Prepare the data and labels to feed the classifiers
```
X = train_df['title'] + ' ' + train_df['content']
y = train_df['label']
```
### SVM with TF-IDF
```
scores_svm_tfidf = cross_validate(svm_tfidf, X, y,
scoring=['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'],
cv=5,
n_jobs=-1,
return_train_score=False)
print('SVM + tfidf', scores_svm_tfidf)
```
### Random Forest with TF-IDF
```
scores_random_forest_tfidf = cross_validate(random_forest_tfidf, X, y,
scoring=['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'],
cv=5,
return_train_score=False)
print('Random Forest + tfidf', scores_random_forest_tfidf)
```
### SVM with SVD
```
scores_svm_tfidf_svd = cross_validate(svm_tfidf_svd, X, y,
scoring=['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'],
cv=5,
n_jobs=-1,
return_train_score=False)
print('SVM + tfidf + SVD', scores_svm_tfidf_svd)
```
### Random Forest with SVD
```
scores_random_forest_tfidf_svd = cross_validate(random_forest_tfidf_svd, X, y,
scoring=['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'],
cv=5,
return_train_score=False)
print('Random Forest + tfidf + SVD', scores_random_forest_tfidf_svd)
```
## Beat the Benchmark classifier
In order to achieve the best performance in terms of accuracy and execution time, the best choice is Random Forest with
SVD. Tuning this model further, we can achieve 96% accuracy. The hyper-parameters below were found during the
tuning phase, a time-consuming process in which approximately 20 different combinations were executed. Also, some
notes on the preprocessing: the input text is first cleaned of stopwords, lowercased,
and finally vectorized with TF-IDF.
```
# Preprocess
# Give a small gain to titles
X = (train_df['title'] + ' ') * 3 + train_df['content']
stop_words = ENGLISH_STOP_WORDS.union(['will', 's', 't', 'one', 'new', 'said', 'say', 'says', 'year'])
vectorizer_tuned = TfidfVectorizer(lowercase=True, stop_words=stop_words, ngram_range=(1,1), max_features=50000)
svd_tuned = TruncatedSVD(n_components=1000)
random_forest_tuned = RandomForestClassifier(n_estimators=1000, max_features='sqrt', n_jobs=-1)
random_forest_tfidf_svd_tuned = make_pipeline(vectorizer_tuned, svd_tuned, random_forest_tuned)
scores_random_forest_tfidf_svd_tuned = cross_validate(random_forest_tfidf_svd_tuned, X, y,
scoring=['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'],
cv=5,
return_train_score=False)
print('Random Forest + tfidf + SVD', scores_random_forest_tfidf_svd_tuned)
```
### Generating stats table
```
data_table = [[np.mean(scores_svm_tfidf['test_accuracy'], dtype='float64'),
np.mean(scores_random_forest_tfidf['test_accuracy'], dtype='float64'),
np.mean(scores_svm_tfidf_svd['test_accuracy'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd['test_accuracy'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd_tuned['test_accuracy'], dtype='float64')],
[np.mean(scores_svm_tfidf['test_precision_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf['test_precision_macro'], dtype='float64'),
np.mean(scores_svm_tfidf_svd['test_precision_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd['test_precision_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd_tuned['test_precision_macro'], dtype='float64')],
[np.mean(scores_svm_tfidf['test_recall_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf['test_recall_macro'], dtype='float64'),
np.mean(scores_svm_tfidf_svd['test_recall_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd['test_recall_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd_tuned['test_recall_macro'], dtype='float64')],
[np.mean(scores_svm_tfidf['test_f1_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf['test_f1_macro'], dtype='float64'),
np.mean(scores_svm_tfidf_svd['test_f1_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd['test_f1_macro'], dtype='float64'),
np.mean(scores_random_forest_tfidf_svd_tuned['test_f1_macro'], dtype='float64')]
]
cell_text = []
for row in data_table:
cell_text.append([f'{x:1.5f}' for x in row])
plt.figure(dpi=150)
ax = plt.gca()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.box(on=None)
plt.subplots_adjust(left=0.2, bottom=0.2)
the_table = plt.table(cellText=cell_text,
rowLabels=['Accuracy', 'Precision', 'Recall', 'F1-Score'],
colLabels=['SVM (BoW)', 'Random Forest (BoW)', 'SVM (SVD)', 'Random Forest (SVD)', 'My Method'],
colColours=['lightsteelblue'] * 5,
rowColours=['lightsteelblue'] * 4,
loc='center')
the_table.scale(1, 1.5)
fig = plt.gcf()
plt.show()
```
# SETUP
```
!pip install -r requirements_colab.txt -q
```
# DATA
> To speed up the review process, I provide the ***drive ids*** of the data I created with the notebooks in the Train creation folder.
---
> Each data drive link is also listed in the Readme PDF file attached to this solution.
```
!gdown --id 1hNRbtcqd9F6stMOK1xAZApDITwAjiSDJ
!gdown --id 1-QCmWsNGREXuWArifN0nD_Sp4hJxf0tu
!gdown --id 1-47L_1NKLeVgW1vWmqXXXCuWZ3gwZWsS
!gdown --id 1-aO4FEtv5CF-ZOcxDSO3jGEzPcIFdxgP
!gdown --id 1-8J_xFgI0WKT5UXFnfH4q1KUw_KgNY37
!gdown --id 1-a55a7N6a4SoqolPF_wI4C6Q70u_d7Hj
!gdown --id 1-BgXQwmXqBuk_P8VtvLfdLqy83dv56Kz
!gdown --id 1-hQGF2TNBbsy3jsGNtndmK55egbdFDjs
!gdown --id 1VE3L15uXRbP0kzmDzyuYac3Mi9yZY3YI
!gdown --id 1wDzl_QHgKtW2-FoDJs_U-qfbBTBUqNSA
```
## LIBRARIES
```
# import necessary dependencies
import os
import numpy as np
import pandas as pd
import random
from tqdm import tqdm
import copy
import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import log_loss
from sklearn.preprocessing import QuantileTransformer
import warnings
warnings.filterwarnings('ignore')
# fix seed
np.random.seed(111)
random.seed(111)
```
## Train Creation
```
def create_train():
train =pd.read_csv("S2TrainObs1.csv" )
train = train.groupby('field_id').median().reset_index().sort_values('field_id')
train.label = train.label.astype('int')
return train
def create_test():
test =pd.read_csv("S2TestObs1.csv" )
return test
def createObs2_train():
train =pd.read_csv("S2TrainObs2.csv" )
train = train.groupby('field_id').median().reset_index().sort_values('field_id')
train.label = train.label.astype('int')
return train
def createObs2_test():
test =pd.read_csv("S2TestObs2.csv" )
return test
def createObs3_train():
train =pd.read_csv("S2TrainObs3.csv" )
train = train.groupby('field_id').median().reset_index().sort_values('field_id')
train.label = train.label.astype('int')
return train
def createObs3_test():
test =pd.read_csv("S2TestObs3.csv" )
return test
def createObs4_train():
train =pd.read_csv("S2TrainObs4.csv" )
train = train.groupby('field_id').median().reset_index().sort_values('field_id')
train.label = train.label.astype('int')
return train
def createObs4_test():
test =pd.read_csv("S2TestObs4.csv" )
return test
def createObs5_train():
train =pd.read_csv("S2TrainObs5.csv" )
train = train.groupby('field_id').median().reset_index().sort_values('field_id')
train.label = train.label.astype('int')
return train
def createObs5_test():
test =pd.read_csv("S2TestObs5.csv" )
return test
```
## Feature Engineering
```
def process(T) :
# process bands
Bcols = T.filter(like='B').columns.tolist()
Vcols = T.filter(like='V').columns.tolist()
Obs1 = T.filter(like='Month4').columns.tolist()
Obs2 = T.filter(like='Month5').columns.tolist()
Obs3 = T.filter(like='Month6').columns.tolist()
Obs4 = T.filter(like='Month7').columns.tolist()
Obs5 = T.filter(like='Month8').columns.tolist()
Obs6 = T.filter(like='Month9').columns.tolist()
Obs7 = T.filter(like='Month10').columns.tolist()
Obs8 = T.filter(like='Month11').columns.tolist()
# vegetation indexes
B8cols = T.filter(like='B8_').columns.tolist()
B8cols = [x for x in B8cols if 'std' not in x]
B4cols = T.filter(like='B4_').columns.tolist()
B4cols = [x for x in B4cols if 'std' not in x]
B3cols = T.filter(like='B3_').columns.tolist()
B3cols = [x for x in B3cols if 'std' not in x]
B5cols = T.filter(like='B5_').columns.tolist()
B5cols = [x for x in B5cols if 'std' not in x]
B2cols = T.filter(like='B2_').columns.tolist()
B2cols = [x for x in B2cols if 'std' not in x]
B7cols = T.filter(like='B7_').columns.tolist()
B7cols = [x for x in B7cols if 'std' not in x]
B8Acols = T.filter(like='B8A_').columns.tolist()
B8Acols = [x for x in B8Acols if 'std' not in x]
B6cols = T.filter(like='B6_').columns.tolist()
B6cols = [x for x in B6cols if 'std' not in x]
B12cols = T.filter(like='B12_').columns.tolist()
B12cols = [x for x in B12cols if 'std' not in x]
B11cols = T.filter(like='B11_').columns.tolist()
B11cols = [x for x in B11cols if 'std' not in x]
B1cols = T.filter(like='B1_').columns.tolist()
B1cols = [x for x in B1cols if 'std' not in x]
B9cols = T.filter(like='B9_').columns.tolist()
B9cols = [x for x in B9cols if 'std' not in x]
L = 0.725
for b1,b2 ,b3 ,b4, b5 , b6, b7, b8 ,b8a ,b9,b11,b12 in zip(B1cols,B2cols,B3cols,B4cols,B5cols,B6cols,B7cols,B8cols,B8Acols,B9cols,B11cols,B12cols) :
T[f'NDVI_{b8.split("_")[1]}'] = ((T[b8] - T[b4]) / (T[b8] + T[b4]))
T[f'SAVI_{b8.split("_")[1]}'] = ((T[b8] - T[b4]) / (T[b8] + T[b4]+L) * (1.0 + L))
T[f'GRNDVI_{b8.split("_")[1]}'] = ((T[b8] - (T[b3]+T[b4])) / (T[b8] + (T[b3]+T[b4])))
T[f'GNDVI_{b8.split("_")[1]}'] = ((T[b8] - T[b3] ) / (T[b8] + T[b3]))
T[f'NDRE_{b8.split("_")[1]}'] = ((T[b5] - T[b4])/ (T[b5] + T[b4]))
T[f'EVI_{b8.split("_")[1]}'] = (2.5 * (T[b8] - T[b4] ) / ((T[b8] + 6.0 * T[b4] - 7.5 * T[b2]) + 1.0)).values.clip(min=-5,max=5)
T[f'WDRVI_{b8.split("_")[1]}'] = (((8 * T[b8]) - T[b4])/ ((8* T[b8]) + T[b4]))
T[f'ExBlue_{b8.split("_")[1]}'] = ((2 * T[b2]) - (T[b3]+T[b4]))
T[f'ExGreen_{b8.split("_")[1]}'] = ((2 * T[b3]) - (T[b2]+T[b4]) )
T[f'NDRE7_{b8.split("_")[1]}'] = ((T[b7] - T[b4])/ (T[b7] + T[b4]))
T[f'MTCI_{b8.split("_")[1]}'] = ((T[b8a] - T[b6])/ (T[b7] + T[b6]))
T[f'VARI_{b8.split("_")[1]}'] = ((T[b3] - T[b4])/ (T[b3] + T[b4] - T[b2]))
T[f'ARVI_{b8.split("_")[1]}'] = ( ((T[b8] - T[b4])-(T[b4] - T[b2])) / ((T[b8] + T[b4])-(T[b4] - T[b2])) )
# Bands Relations
T[f'b7b5_{b8.split("_")[1]}'] = (T[b7] - T[b5])/ (T[b7] + T[b5]) # B7 / B5
T[f'b7b6_{b8.split("_")[1]}'] = (T[b7] - T[b6])/ (T[b7] + T[b6]) # B7 / B6
T[f'b8ab5_{b8.split("_")[1]}'] = (T[b8a] - T[b5])/ (T[b8a] + T[b5]) # B8A / B5
T[f'b6b5_{b8.split("_")[1]}'] = (T[b6] - T[b5])/ (T[b6] + T[b5]) # B6 / B5
# ASSAZZIN bands relations
T[f'b3b1_{b8.split("_")[1]}'] = (T[b3] - T[b1])/ (T[b3] + T[b1])
T[f'b11b8_{b8.split("_")[1]}'] = (T[b11] - T[b8])/ (T[b11] + T[b8])
T[f'b12b11_{b8.split("_")[1]}'] = (T[b12] - T[b11])/ (T[b12] + T[b11])
T[f'b3b4_{b8.split("_")[1]}'] = (T[b3] - T[b4])/ (T[b3] + T[b4])
T[f'b9b4_{b8.split("_")[1]}'] = (T[b9] - T[b4])/ (T[b9] + T[b4])
T[f'b5b3_{b8.split("_")[1]}'] = (T[b5] - T[b3])/ (T[b5] + T[b3])
T[f'b12b3_{b8.split("_")[1]}'] = (T[b12] - T[b3])/ (T[b12] + T[b3])
T[f'b2b1_{b8.split("_")[1]}'] = (T[b2] - T[b1])/ (T[b2] + T[b1])
T[f'b4b1_{b8.split("_")[1]}'] = (T[b4] - T[b1])/ (T[b4] + T[b1])
T[f'b11b3_{b8.split("_")[1]}'] = (T[b11] - T[b3])/ (T[b11] + T[b3])
T[f'b12b8_{b8.split("_")[1]}'] = (T[b12] - T[b8])/ (T[b12] + T[b8])
T[f'b3b2_{b8.split("_")[1]}'] = (T[b3] - T[b2])/ (T[b3] + T[b2])
T[f'b8ab3_{b8.split("_")[1]}'] = (T[b8a] - T[b3])/ (T[b8a] + T[b3])
T[f'b8ab2_{b8.split("_")[1]}'] = (T[b8a] - T[b2])/ (T[b8a] + T[b2])
T[f'b8b1_{b8.split("_")[1]}'] = (T[b8] - T[b1])/ (T[b8] + T[b1])
T[f'ARVI2_{b8.split("_")[1]}'] = ( ((T[b3] - T[b4])-(T[b4] - T[b2])) / ((T[b3] + T[b4])+(T[b4] + T[b2])) )
T[f'ARVI3_{b8.split("_")[1]}'] = ( ((T[b5] - T[b3])-(T[b3] - T[b2])) / ((T[b5] + T[b3])+(T[b3] + T[b2])) )
T[f'b8b9_{b8.split("_")[1]}'] = (T[b8] - T[b9])/ (T[b8] + T[b9])
T[f'b3b9_{b8.split("_")[1]}'] = (T[b3] - T[b9])/ (T[b3] + T[b9])
T[f'b2b9_{b8.split("_")[1]}'] = (T[b2] - T[b9])/ (T[b2] + T[b9])
T[f'b12b9_{b8.split("_")[1]}'] = (T[b12] - T[b9])/ (T[b12] + T[b9])
T[f'b12b8_{b8.split("_")[1]}'] = (T[b12] - T[b8])/ (T[b12] + T[b8])
for col in Bcols :
T[col] = np.sqrt(T[col])
for b2 ,b3 ,b4 in zip(B2cols,B3cols,B4cols) :
T[f'RGB_STD_{b3.split("_")[1]}'] = T[[b2,b3,b4]].std(axis=1)
T[f'RGB_MEAN_{b3.split("_")[1]}'] = T[[b2,b3,b4]].mean(axis=1)
for col in Vcols :
T[col] = np.sqrt(T[col])
for col1,col2,col3,col4,col5,col6,col7,col8 in zip(Obs1,Obs2,Obs3,Obs4,Obs5,Obs6,Obs7,Obs8) :
T[f'{col1.split("_")[0]}_std'] = T[[col1,col2,col3,col4,col5,col6,col7,col8]].std(axis=1)
# process Vegetation indexes
ObsN = T.filter(like='NDVI_').columns.tolist()
ObsSA = T.filter(like='SAVI_').columns.tolist()
ObsCC = T.filter(like='CCCI_').columns.tolist()
ObsWDR = T.filter(like='WDRVI_').columns.tolist()
ObsNDRE7 = T.filter(like='NDRE7_').columns.tolist()
T['NDVI_max'] = T[ObsN].max(axis=1)
T['NDVI_min'] = T[ObsN].min(axis=1)
T['SAVI_max'] = T[ObsSA].max(axis=1)
    T['SAVI_min'] = T[ObsSA].min(axis=1)
T['WDRVI_max'] = T[ObsWDR].max(axis=1)
T['WDRVI_min'] = T[ObsWDR].min(axis=1)
T['NDRE7_max'] = T[ObsNDRE7].max(axis=1)
T['NDRE7_min'] = T[ObsNDRE7].min(axis=1)
return T
Train = create_train()
Test = create_test()
Train2 = createObs2_train()
Test2 = createObs2_test()
Train3 = createObs3_train()
Test3 = createObs3_test()
Train4 = createObs4_train()
Test4 = createObs4_test()
Train5 = createObs5_train()
Test5 = createObs5_test()
Train.shape , Test.shape
Train2.shape , Test2.shape
Train3.shape , Test3.shape
Train4.shape , Test4.shape
Train5.shape , Test5.shape
Train = process(Train)
Test = process(Test)
Train2 = process(Train2)
Test2 = process(Test2)
Train3 = process(Train3)
Test3 = process(Test3)
Train4 = process(Train4)
Test4 = process(Test4)
Train5 = process(Train5)
Test5 = process(Test5)
Train.shape , Test.shape
Train2.shape , Test2.shape
Train3.shape , Test3.shape
Train4.shape , Test4.shape
Train5.shape , Test5.shape
Train = pd.concat([Train,Train2.drop(columns=['field_id','label']),Train3.drop(columns=['field_id','label']),
Train4.drop(columns=['field_id','label']),Train5.drop(columns=['field_id','label'])],axis=1)
Train.shape
Test = pd.concat([Test,Test2.drop(columns=['field_id']),Test3.drop(columns=['field_id'])],axis=1)
Test = pd.merge(Test,Test4,on='field_id',how='left')
Test = pd.merge(Test,Test5,on='field_id',how='left')
Test.shape
import gc ; gc.collect()
```
# MODELING
```
X = Train.replace(np.inf,50).drop(['field_id','label'], axis=1)
COLUMNS = X.columns.tolist()
y = Train.label
TEST = Test.replace(np.inf,50).drop(['field_id'], axis=1)
TEST.columns = X.columns.tolist()
data = pd.concat([X,TEST])
qt=QuantileTransformer(output_distribution="normal",random_state=42)
data= pd.DataFrame(qt.fit_transform(data),columns=X.columns)
n_train = X.shape[0]
X = data[:n_train].values
TEST = data[n_train:].values
X.shape , TEST.shape
##############################################################################################################################################################################
seed = 47  # defined here because kfold_split below uses it before the next cell runs
def kfold_split(Train,y):
    Train["folds"] = -1
    kf = StratifiedKFold(n_splits=10, random_state=seed, shuffle=True)
    for fold, (_, val_index) in enumerate(kf.split(Train,y)):
        Train.loc[val_index, "folds"] = fold
    return Train
Train = kfold_split(Train,y)
```
### Cross Validation
```
seed = 47
sk = StratifiedKFold(n_splits= 10,random_state=seed,shuffle=True)
def DefineModel(name='lgbm') :
if name =='lgbm':
return lgb.LGBMClassifier(learning_rate = 0.1,n_estimators = 3000,
objective ='multiclass',random_state = 111,
num_leaves = 80,max_depth = 6,
metric = 'multi_logloss',
colsample_bytree = 0.5 ,
bagging_freq= 5, bagging_fraction= 0.75,
lambda_l2 = 100 ,
)
elif name =='catboost' :
cat_params = {"loss_function": "MultiClass","eval_metric": "MultiClass","learning_rate": 0.1,
"random_seed": 42,"l2_leaf_reg": 3,"bagging_temperature": 1,
"depth": 6,"od_type": "Iter","od_wait": 50,"thread_count": 16,"iterations": 50000,
"use_best_model": True,'task_type':"GPU",'devices':'0:1'}
return CatBoostClassifier(**cat_params
)
else :
return xgb.XGBClassifier(objective = 'multi:softmax',
base_score = np.mean(y),eval_metric ="mlogloss",
subsample= 0.8,n_estimators = 2000,
seed=seed,random_state = seed,num_class = 9,
)
def Run5fold(name,X,y,TEST,COLUMNS) :
print(f'TRAINING {name}')
cv_score_ = 0
oof_preds = np.zeros((Train.shape[0],9))
final_predictions = np.zeros((Test.shape[0],9))
    for fold in [7,8,9]:  # resuming the remaining folds; earlier folds appear to have been run and saved previously
print()
print(f'######### FOLD {fold+1} / {sk.n_splits} ')
train_idx = Train[Train['folds'] !=fold].index.tolist()
test_idx = Train[Train['folds'] ==fold].index.tolist()
X_train,y_train = X[train_idx,:],y[train_idx]
X_test,y_test = X[test_idx,:] ,y[test_idx]
model = DefineModel(name=name)
model.fit(X_train,y_train,
eval_set = [(X_test,y_test)],
early_stopping_rounds = 100,
verbose = 100
)
oof_prediction = model.predict_proba(X_test)
np.save(f'LGBM_oof_{fold}',oof_prediction)
cv_score_ += log_loss(y_test,oof_prediction) / sk.n_splits
print(f'Log Loss Fold {fold} : {log_loss(y_test,oof_prediction) }')
oof_preds[test_idx] = oof_prediction
test_prediction = model.predict_proba(TEST)
np.save(f'LGBM_testPred_{fold}',test_prediction)
final_predictions += test_prediction / sk.n_splits
del X_train,y_train , X_test,y_test , model
gc.collect()
# return feats,oof_preds , final_predictions
import gc ; gc.collect()
Run5fold(name='lgbm',X=X,y=y,TEST=TEST,COLUMNS=COLUMNS) #Log Loss Fold 0 : 0.625915534181997
```
---
## save into drive
```
from google.colab import drive
drive.mount('/content/drive')
os.makedirs('/content/drive/MyDrive/RadiantEarth/LGBMS2',exist_ok=True)
!cp LGBM_oof_* '/content/drive/MyDrive/RadiantEarth/LGBMS2/'
!cp LGBM_testPred_* '/content/drive/MyDrive/RadiantEarth/LGBMS2/'
```
## oof
```
OOF = np.empty((0, 9),dtype=np.float16)
for i in range(10) :
oof_pred = np.load(f'/content/drive/MyDrive/RadiantEarth/LGBMS2/LGBM_oof_{i}.npy')
OOF = np.append(OOF, oof_pred, axis=0)
OOF.shape
Y = np.empty((0,),dtype=np.float16)
for i in range(10) :
Y_ = Train[Train['folds'].isin([i])]['label'].values
Y = np.append(Y, Y_, axis=0)
Y.shape
predictions_lgbm = []
for i in range(10) :
test_pred = np.load(f'/content/drive/MyDrive/RadiantEarth/LGBMS2/LGBM_testPred_{i}.npy')
predictions_lgbm.append(test_pred)
print('LGBM LOG LOSS :',log_loss(Y,OOF))
Field = np.empty((0,),dtype=np.float16)
for i in range(10) :
Field_ = Train[Train['folds'].isin([i])]['field_id'].values
Field = np.append(Field, Field_, axis=0)
Field.shape
DLGBM = pd.DataFrame(Field,columns=['field_id'])
cols = ['oof'+str(i) for i in range(9)]
for col in cols :
DLGBM[col] =0
DLGBM[cols] = OOF
oof_lgbm = pd.merge(Train[['field_id']],DLGBM,on='field_id',how='left')[cols].values
print('LGBM LOG LOSS :',log_loss(y,oof_lgbm))
# In this part we format the DataFrame to have column names and order similar to the sample submission file.
pred_df = pd.DataFrame(np.mean(predictions_lgbm,axis=0))
pred_df = pred_df.rename(columns={
0:'Crop_ID_1',
1:'Crop_ID_2',
2:'Crop_ID_3',
3:'Crop_ID_4',
4:'Crop_ID_5',
5:'Crop_ID_6',
6:'Crop_ID_7',
7:'Crop_ID_8',
8:'Crop_ID_9'
})
pred_df['field_id'] = Test['field_id'].astype('int').values
pred_df = pred_df[['field_id', 'Crop_ID_1', 'Crop_ID_2', 'Crop_ID_3', 'Crop_ID_4', 'Crop_ID_5', 'Crop_ID_6', 'Crop_ID_7', 'Crop_ID_8', 'Crop_ID_9']]
pred_df.head()
pred_df.shape
# Write the predicted probabilites to a csv for submission
pred_df.to_csv('S2_LightGBM.csv', index=False)
np.save('S2_oof_lgbm.npy',oof_lgbm)
```
---
### Classifier with TF-IDF vectors
Dataset size:
* Train: 50,000
* Test: 5,000
TF-IDF vectors:
* max_features: 8,000
* min_df: 10
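The `.npy` files loaded below were presumably produced beforehand by a vectorization step like the following sketch (the exact script is not part of this notebook; `min_df` is lowered here only so the toy corpus yields a non-empty vocabulary):

```python
# Hypothetical sketch of how the cached TF-IDF matrices could be built.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "stocks rallied on wall street today",
    "the team won the championship game",
    "scientists discovered a new species of frog",
]

# The notebook describes max_features=8000, min_df=10; min_df=1 here
# only because the toy corpus is tiny.
vectorizer = TfidfVectorizer(max_features=8000, min_df=1)
vectors = vectorizer.fit_transform(corpus).toarray()
print(vectors.shape)  # (3, vocabulary_size)
# np.save("tfidf_train_x_8000.npy", vectors)  # caching step, as in the notebook
```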
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import f1_score,accuracy_score
from sklearn.metrics import confusion_matrix
import itertools
from sklearn.model_selection import cross_validate
from sklearn import svm
import pickle
import sys
import keras
from sklearn.preprocessing import OneHotEncoder
train_set = pd.read_csv("dataset_train_pp.csv")
test_set = pd.read_csv("dataset_test_pp.csv")
print(len(train_set))
print(len(test_set))
train_x=train_set["Description"]
test_x=test_set["Description"]
train_y=train_set["Class Index"]
test_y=test_set["Class Index"]
%%time
test_x_vectors = np.load("tfidf_test_x_8000.npy")
%%time
train_x_vectors = np.load("tfidf_train_x_8000.npy")
# plot confusion matrix
def plot_confusion_matrix(cm, classes, normalize=False, title='confusion_matrix', cmap=plt.cm.Blues):
plt.figure(figsize=(8, 6))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max()/2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i,j],
horizontalalignment='center',
color='white' if cm[i,j] > thresh else 'black')
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
```
### Naive Bayes
```
naive_bayes = MultinomialNB()
%%time
naive_bayes.fit(train_x_vectors, train_y)
p = pickle.dumps(naive_bayes)
memoryKB = sys.getsizeof(p)/1000
print(memoryKB)
%%time
nb_pred_x = naive_bayes.predict(test_x_vectors)
accuracy_score(test_y,nb_pred_x)
nb_cm = confusion_matrix(test_y, nb_pred_x)
cmPlotLabels = ['World', 'Sports', 'Business', 'Science', 'Corona']
plot_confusion_matrix(nb_cm, cmPlotLabels, title='Naive Bayes')
```
### SVM
This step is skipped because of its long training time.
```
# svm_model = svm.SVC(kernel='poly', degree=2, gamma='scale')
# %%time
# svm_model.fit(train_x_vectors, train_y)
# svm_pred_x = svm_model.predict(test_x_vectors)
# accuracy_score(test_y,svm_pred_x)
# svm_cm = confusion_matrix(test_y, svm_pred_x)
# plot_confusion_matrix(svm_cm, cmPlotLabels, title='SVM, deg=2')
```
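When a kernel `SVC` is too slow on tens of thousands of TF-IDF vectors, a linear SVM is the usual workaround: `LinearSVC` scales roughly linearly in the number of samples. This is only a sketch on synthetic stand-in data, not something this notebook ran:

```python
# Hedged alternative to the skipped kernel SVC: LinearSVC on stand-in data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for train_x_vectors / train_y (the real ones are TF-IDF).
X_train = rng.random((200, 50))
y_train = (X_train[:, 0] > 0.5).astype(int)  # label depends only on feature 0
X_test = rng.random((50, 50))
y_test = (X_test[:, 0] > 0.5).astype(int)

svm_model = LinearSVC(C=1.0, max_iter=5000)
svm_model.fit(X_train, y_train)
acc = accuracy_score(y_test, svm_model.predict(X_test))
print(acc)
```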
### NN
#### Prepare labels for keras
```
enc = OneHotEncoder(handle_unknown='ignore')
test_y_array = test_y.to_numpy().reshape(-1,1)
train_y_array = train_y.to_numpy().reshape(-1,1)
enc.fit(test_y_array)
test_y_1hot = enc.transform(test_y_array).toarray()
enc.fit(train_y_array)
train_y_1hot = enc.transform(train_y_array).toarray()
```
#### keras model
```
nn_model = None
nn_model = keras.models.Sequential()
nn_model.add(keras.layers.Dense(32, input_dim=8000, activation='relu'))
nn_model.add(keras.layers.Dense(16, activation='relu'))
nn_model.add(keras.layers.Dense(16, activation='relu'))
nn_model.add(keras.layers.Dense(5, activation='softmax'))
nn_model.summary()
nn_model.compile(keras.optimizers.Adam(learning_rate=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
%%time
nn_model.fit(train_x_vectors, train_y_1hot, epochs=10, batch_size=10)
p = pickle.dumps(nn_model)
memoryKB = sys.getsizeof(p)/1000
print(memoryKB)
nn_model.evaluate(test_x_vectors, test_y_1hot, batch_size=10)
%%time
predictions = nn_model.predict(test_x_vectors)
nn_cm = confusion_matrix(test_y, predictions.argmax(axis=1)+1)
cmPlotLabels = ['World', 'Sports', 'Business', 'Science', 'Corona']
plot_confusion_matrix(nn_cm, cmPlotLabels, title='NN')
nn_model.save("nn_tfidf_8000.h5")
```
---
# Here we get creative and build the features we call "behavioral" features
```
# Import libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from textblob import TextBlob
# Load the data
train = pd.read_csv('./Data/train_con_labels.csv')
test = pd.read_csv('./Data/test_con_labels.csv')
train.dropna(inplace=True)
test.dropna(inplace=True)
train
```
# Processing the text
## First hand-crafted variables
#### Train
```
# Character count
train['char_count'] = train['text'].astype('str').apply(len)
# Word count
train['word_count'] = train['text'].astype('str').apply(lambda x: len(x.split()))
# Word density (average characters per word)
train['word_density'] = train['char_count'] / (train['word_count']+1)
# Punctuation count
import string
train['punctuation_count'] = train['text'].astype('str').apply(lambda x: len("".join(_ for _ in x if _ in string.punctuation)))
# Upper-case word count
train['upper_case_word_count'] = train['text'].astype('str').apply(lambda x: len([wrd for wrd in x.split() if wrd.isupper()]))
# Title-case word count
train['title_word_count'] = train['text'].astype('str').apply(lambda x: len([wrd for wrd in x.split() if wrd.istitle()]))
# Trailing question-mark count
train['questions'] = train['text'].astype('str').apply(lambda x: len(x) - len(x.rstrip('?')))
train
train.to_csv('train_comportamiento.csv')
```
#### The same, but on Test
```
test['char_count'] = test['text'].astype('str').apply(len)
test['word_count'] = test['text'].astype('str').apply(lambda x: len(x.split()))
test['word_density'] = test['char_count'] / (test['word_count']+1)
test['punctuation_count'] = test['text'].astype('str').apply(lambda x: len("".join(_ for _ in x if _ in string.punctuation)))
test['upper_case_word_count'] = test['text'].astype('str').apply(lambda x: len([wrd for wrd in x.split() if wrd.isupper()]))
test['title_word_count'] = test['text'].astype('str').apply(lambda x: len([wrd for wrd in x.split() if wrd.istitle()]))
test['questions'] = test['text'].astype('str').apply(lambda x: len(x) - len(x.rstrip('?')))
test
test.to_csv('test_comportamiento.csv')
```
Let's check whether any of the new variables correlate with the label...
```
train.columns
# Checking correlations
f, ax = plt.subplots(figsize=(11, 9))
sns.heatmap(train.corr(), vmax=.3, center=0, square=True, linewidths=.5, cbar_kws={"shrink": .5})
```
The correlations among the features themselves are expected; only the line number shows some correlation with the label.
## More variables with TextBlob
### We use the TextBlob library to create more variables
```
pos_family = {
'noun' : ['NN','NNS','NNP','NNPS'],
'pron' : ['PRP','PRP$','WP','WP$'],
'verb' : ['VB','VBD','VBG','VBN','VBP','VBZ'],
'adj' : ['JJ','JJR','JJS'],
'adv' : ['RB','RBR','RBS','WRB']
}
# function to count POS tags of a given family with TextBlob
def check_pos_tag(x, flag):
cnt = 0
try:
wiki = TextBlob(x)
for tup in wiki.tags:
ppo = list(tup)[1]
if ppo in pos_family[flag]:
cnt += 1
except:
pass
return cnt
```
Train
```
import nltk
nltk.download('punkt')
import time
# Nouns
start_time = time.time()
train['noun_count'] = train['text'].astype('str').apply(lambda x: check_pos_tag(x, 'noun'))
elapsed_time = time.time() - start_time
elapsed_time
# Verbs
train['verb_count'] = train['text'].apply(lambda x: check_pos_tag(x, 'verb'))
# Adjectives
train['adj_count'] = train['text'].apply(lambda x: check_pos_tag(x, 'adj'))
train.to_csv('train_comportamiento.csv')
# Adverbs
train['adv_count'] = train['text'].apply(lambda x: check_pos_tag(x, 'adv'))
# Pronouns
train['pron_count'] = train['text'].apply(lambda x: check_pos_tag(x, 'pron'))
train.to_csv('train_comportamiento.csv')
```
#### The same on Test
```
start_time = time.time()
test['noun_count'] = test['text'].apply(lambda x: check_pos_tag(x, 'noun'))
elapsed_time = time.time() - start_time
test['verb_count'] = test['text'].apply(lambda x: check_pos_tag(x, 'verb'))
test['adj_count'] = test['text'].apply(lambda x: check_pos_tag(x, 'adj'))
test.to_csv('test_comportamiento.csv')
test['adv_count'] = test['text'].apply(lambda x: check_pos_tag(x, 'adv'))
test['pron_count'] = test['text'].apply(lambda x: check_pos_tag(x, 'pron'))
test.to_csv('test_comportamiento.csv')
f, ax = plt.subplots(figsize=(11, 9))
sns.heatmap(train.corr(), square=True, linewidths=.5, cbar_kws={"shrink": .5})
```
Same story as above.
---
```
#Data visualization and Manipulation
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from wordcloud import WordCloud
import re
#Natural Language Processing Libraries
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
#Sckit_learning libraries
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
#Evaluation Metrics
from sklearn.metrics import accuracy_score,precision_score,recall_score,confusion_matrix,roc_curve,classification_report
from scikitplot.metrics import plot_confusion_matrix
from nltk.sentiment.vader import SentimentIntensityAnalyzer
train_df = pd.read_csv(r'''D:\chinnu\egnite ds\emotions\train.txt''',delimiter = ';',names = ['text','label'])
val_df = pd.read_csv(r'''D:\chinnu\egnite ds\emotions\val.txt''',delimiter = ';',names = ['text','label'])
train_df = train_df.head(1000)
val_df = val_df.head(1000)
df = pd.concat([train_df,val_df])
df.reset_index(inplace = True,drop = False)
df.info()
sns.countplot(df.label)
def custom_encoder(df):
df.replace(to_replace = "surprise",value = 1, inplace = True)
df.replace(to_replace = "love",value = 1, inplace = True)
df.replace(to_replace = "fear",value = 0, inplace = True)
df.replace(to_replace = "joy",value = 1, inplace = True)
df.replace(to_replace = "anger",value = 0, inplace = True)
df.replace(to_replace = "sadness",value = 0, inplace = True)
custom_encoder(df['label'])
sns.countplot(df.label)
lm = WordNetLemmatizer()
def pre_processing(df_col):
corpus = []
for item in df_col :
new_item = re.sub("[^a-zA-Z]", ' ',str(item))
new_item = new_item.lower()
new_item = new_item.split()
new_item = [lm.lemmatize(word) for word in new_item if word not in set(stopwords.words('english'))]
corpus.append(' '.join(str(x) for x in new_item))
return corpus
corpus = pre_processing(df['text'])
word_cloud = ""
# join whole rows, not individual characters, so the word cloud sees words
for row in corpus:
    word_cloud += row + " "
wordcloud = WordCloud(width = 1000, height= 500, background_color = 'white', min_font_size = 10).generate(word_cloud)
plt.imshow(wordcloud)
cv = CountVectorizer(ngram_range = (1,2))
traindata = cv.fit_transform(corpus)
X = traindata
Y = df.label
parameters = {'max_features': ('auto','sqrt'),
'n_estimators': [500, 1000],
'max_depth': [10, None],
'min_samples_split': [5],
'min_samples_leaf': [1],
'bootstrap': [True]}
grid_search = GridSearchCV(RandomForestClassifier(),parameters,cv=5,return_train_score=True,n_jobs=-1)
grid_search.fit(X,Y)
grid_search.best_params_
for i in range(6):
print('Parameters: ',grid_search.cv_results_['params'][i])
print('Mean Test Score: ',grid_search.cv_results_['mean_test_score'][i])
print('Rank: ',grid_search.cv_results_['rank_test_score'][i])
rfc = RandomForestClassifier(max_features=grid_search.best_params_['max_features'],
max_depth=grid_search.best_params_['max_depth'],
n_estimators=grid_search.best_params_['n_estimators'],
min_samples_split=grid_search.best_params_['min_samples_split'],
min_samples_leaf=grid_search.best_params_['min_samples_leaf'],
bootstrap=grid_search.best_params_['bootstrap'])
rfc.fit(X,Y)
test_df = pd.read_csv(r'''D:\chinnu\egnite ds\emotions\test.txt''',delimiter=';',names=['text','label'])
X_test,Y_test = test_df.text,test_df.label
# encode the labels into two classes, 0 and 1
# (custom_encoder modifies the series in place and returns None,
#  so its result must not be assigned back to test_df)
custom_encoder(Y_test)
#pre-processing of text
test_corpus = pre_processing(X_test)
#convert text data into vectors
testdata = cv.transform(test_corpus)
#predict the target
predictions = rfc.predict(testdata)
plot_confusion_matrix(Y_test,predictions)
acc_score = accuracy_score(Y_test,predictions)
pre_score = precision_score(Y_test,predictions)
rec_score = recall_score(Y_test,predictions)
print('Accuracy_score: ',acc_score)
print('Precision_score: ',pre_score)
print('Recall_score: ',rec_score)
print("-"*50)
cr = classification_report(Y_test,predictions)
print(cr)
predictions_probability = rfc.predict_proba(testdata)
fpr,tpr,thresholds = roc_curve(Y_test,predictions_probability[:,1])
plt.plot(fpr,tpr)
plt.plot([0,1])
plt.title('ROC Curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.show()
def expression_check(prediction_input):
if prediction_input == 0:
print("Input statement has Negative Sentiment.")
elif prediction_input == 1:
print("Input statement has Positive Sentiment.")
else:
print("Invalid Statement.")
def sentiment_predictor(input):
input = pre_processing(input)
transformed_input = cv.transform(input)
prediction = rfc.predict(transformed_input)
expression_check(prediction)
input1 = ["Sometimes I just want to punch someone in the face."]
input2 = ["I bought a new phone and it's so good."]
sentiment_predictor(input1)
sentiment_predictor(input2)
```
CALCULATING POPULARITY SCORE
```
df['preprocess text'] = corpus
sent = SentimentIntensityAnalyzer()
polarity = [round(sent.polarity_scores(i)['compound'], 2) for i in df['text']]
df['sentiment_polarity'] = polarity
df.head()
```
SENTIMENT SCORE
```
positive_df = df.loc[df['label'] == 1]
positive_df.head()
negative_df = df.loc[df['label'] == 0]
negative_df.head()
# write the positive/negative subsets (not the whole df) to their files
pos_words_file = positive_df.to_csv("positive words.csv")
neg_words_file = negative_df.to_csv("negative words.csv")
neg_file = open('negative words.csv', 'r')
pos_file = open('positive words.csv', 'r')
pos_words = pos_file.read().split()
neg_words = neg_file.read().split()
# split each text into words; iterating a string directly yields characters
num_pos = df['preprocess text'].map(lambda x : len([i for i in x.split() if i in pos_words]))
num_neg = df['preprocess text'].map(lambda x : len([i for i in x.split() if i in neg_words]))
df['positive_count'] = num_pos
df['negative_count'] = num_neg
df['Sentiment Score'] = round(df['positive_count']/(df['negative_count']+1),2)
df.head()
df.to_excel("Sentiment Analysis.xlsx")
```
---
```
from google.colab import drive
drive.mount('/content/drive')
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
from matplotlib import pyplot as plt
import copy
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
foreground_classes = {'bird', 'cat', 'deer'}
fg_used = '234'
fg1, fg2, fg3 = 2,3,4
all_classes = {'plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'}
background_classes = all_classes - foreground_classes
background_classes
# print(type(foreground_classes))
train = trainset.data
label = trainset.targets
train.shape
train = np.reshape(train, (50000,3072))
train.shape
from numpy import linalg as LA
u, s, vh = LA.svd(train, full_matrices= False)
u.shape , s.shape, vh.shape
s
vh
# vh = vh.T
vh
# take the 10 least-significant right singular vectors
# (renamed from `dir` to avoid shadowing the built-in)
dirs = vh[1062:1072,:]
dirs
u1 = dirs[7,:]
u2 = dirs[8,:]
u3 = dirs[9,:]
u1
u2
u3
len(label)
cnt=0
for i in range(50000):
if(label[i] == fg1):
# print(train[i])
# print(LA.norm(train[i]))
# print(u1)
train[i] = train[i] + 0.1 * LA.norm(train[i]) * u1
# print(train[i])
cnt+=1
if(label[i] == fg2):
train[i] = train[i] + 0.1 * LA.norm(train[i]) * u2
cnt+=1
if(label[i] == fg3):
train[i] = train[i] + 0.1 * LA.norm(train[i]) * u3
cnt+=1
if(i%10000 == 9999):
print("partly over")
print(cnt)
train.shape, trainset.data.shape
train = np.reshape(train, (50000,32, 32, 3))
train.shape
trainset.data = train
test = testset.data
label = testset.targets
test.shape
test = np.reshape(test, (10000,3072))
test.shape
len(label)
cnt=0
for i in range(10000):
if(label[i] == fg1):
# print(train[i])
# print(LA.norm(train[i]))
# print(u1)
test[i] = test[i] + 0.1 * LA.norm(test[i]) * u1
# print(train[i])
cnt+=1
if(label[i] == fg2):
test[i] = test[i] + 0.1 * LA.norm(test[i]) * u2
cnt+=1
if(label[i] == fg3):
test[i] = test[i] + 0.1 * LA.norm(test[i]) * u3
cnt+=1
if(i%1000 == 999):
print("partly over")
print(cnt)
test.shape, testset.data.shape
test = np.reshape(test, (10000,32, 32, 3))
test.shape
testset.data = test
fg = [fg1,fg2,fg3]
bg = list(set([0,1,2,3,4,5,6,7,8,9])-set(fg))
fg,bg
trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False)
dataiter = iter(trainloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=10
for i in range(5000):
images, labels = next(dataiter)
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
background_data.append(img)
background_label.append(labels[j])
else:
img = images[j].tolist()
foreground_data.append(img)
foreground_label.append(labels[j])
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img#.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
img1 = torch.cat((background_data[0],background_data[1],background_data[2]),1)
imshow(img1)
img2 = torch.cat((foreground_data[27],foreground_data[3],foreground_data[43]),1)
imshow(img2)
img3 = torch.cat((img1,img2),2)
imshow(img3)
print(img2.size())
def create_mosaic_img(bg_idx,fg_idx,fg):
"""
bg_idx : list of indexes of background_data[] to be used as background images in mosaic
fg_idx : index of image to be used as foreground image from foreground data
fg : at what position/index foreground image has to be stored out of 0-8
"""
image_list=[]
j=0
for i in range(9):
if i != fg:
image_list.append(background_data[bg_idx[j]].type("torch.DoubleTensor"))
j+=1
else:
image_list.append(foreground_data[fg_idx].type("torch.DoubleTensor"))
label = foreground_label[fg_idx] - fg1 # minus fg1 because our foreground classes are fg1, fg2, fg3 but we store them as 0, 1, 2
#image_list = np.concatenate(image_list ,axis=0)
image_list = torch.stack(image_list)
return image_list,label
desired_num = 30000
mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as a list of 9 images
fore_idx =[] # list of indexes (0 to 8) at which the foreground image is present in a mosaic image
mosaic_label=[] # label of mosaic image = foreground class present in that mosaic
list_set_labels = []
for i in range(desired_num):
set_idx = set()
bg_idx = np.random.randint(0,35000,8)
set_idx = set(background_label[bg_idx].tolist())
fg_idx = np.random.randint(0,15000)
set_idx.add(foreground_label[fg_idx].item())
fg = np.random.randint(0,9)
fore_idx.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
mosaic_list_of_images.append(image_list)
mosaic_label.append(label)
list_set_labels.append(set_idx)
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx], self.fore_idx[idx]
batch = 250
msd = MosaicDataset(mosaic_list_of_images, mosaic_label , fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
class Module1(nn.Module):
def __init__(self):
super(Module1, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.fc4 = nn.Linear(10,1)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = self.fc4(x)
return x
class Module2(nn.Module):
def __init__(self):
super(Module2, self).__init__()
self.module1 = Module1().double()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.fc4 = nn.Linear(10,3)
def forward(self,z): #z batch of list of 9 images
y = torch.zeros([batch,3, 32,32], dtype=torch.float64)
x = torch.zeros([batch,9],dtype=torch.float64)
x = x.to("cuda")
y = y.to("cuda")
for i in range(9):
x[:,i] = self.module1.forward(z[:,i])[:,0]
x = F.softmax(x,dim=1)
x1 = x[:,0]
torch.mul(x1[:,None,None,None],z[:,0])
for i in range(9):
x1 = x[:,i]
y = y + torch.mul(x1[:,None,None,None],z[:,i])
y = y.contiguous()
y1 = self.pool(F.relu(self.conv1(y)))
y1 = self.pool(F.relu(self.conv2(y1)))
y1 = y1.contiguous()
y1 = y1.reshape(-1, 16 * 5 * 5)
y1 = F.relu(self.fc1(y1))
y1 = F.relu(self.fc2(y1))
y1 = F.relu(self.fc3(y1))
y1 = self.fc4(y1)
return y1 , x, y
fore_net = Module2().double()
fore_net = fore_net.to("cuda")
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(fore_net.parameters(), lr=0.01, momentum=0.9)
nos_epochs = 600
for epoch in range(nos_epochs): # loop over the dataset multiple times
running_loss = 0.0
cnt=0
mini_loss = []
iteration = desired_num // batch
#training data set
for i, data in enumerate(train_loader):
inputs , labels , fore_idx = data
inputs, labels, fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
# zero the parameter gradients
# optimizer_what.zero_grad()
# optimizer_where.zero_grad()
optimizer.zero_grad()
# avg_images , alphas = where_net(inputs)
# avg_images = avg_images.contiguous()
# outputs = what_net(avg_images)
outputs, alphas, avg_images = fore_net(inputs)
_, predicted = torch.max(outputs.data, 1)
# print(outputs)
# print(outputs.shape,labels.shape , torch.argmax(outputs, dim=1))
loss = criterion(outputs, labels)
loss.backward()
# optimizer_what.step()
# optimizer_where.step()
optimizer.step()
running_loss += loss.item()
mini = 40
if cnt % mini == mini - 1: # print every 40 mini-batches
print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini))
mini_loss.append(running_loss / mini)
running_loss = 0.0
cnt=cnt+1
if(np.average(mini_loss) <= 0.05):
break
print('Finished Training')
torch.save(fore_net.state_dict(),"/content/drive/My Drive/Research/mosaic_from_CIFAR_involving_bottop_eigen_vectors/fore_net_epoch"+str(epoch)+"_fg_used"+str(fg_used)+".pt")
```
# Train summary on train mosaics made from the trainset of 50k CIFAR
```
fg = [fg1,fg2,fg3]
bg = list(set([0,1,2,3,4,5,6,7,8,9])-set(fg))
from tabulate import tabulate
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
outputs, alphas, avg_images = fore_net(inputs)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
count += 1
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half",argmax_more_than_half)
print("argmax_less_than_half",argmax_less_than_half)
print(count)
print("="*100)
table3 = []
entry = [1,'fg = '+ str(fg),'bg = '+str(bg),30000]
entry.append((100 * focus_true_pred_true / total))
entry.append( (100 * focus_false_pred_true / total))
entry.append( ( 100 * focus_true_pred_false / total))
entry.append( ( 100 * focus_false_pred_false / total))
entry.append( argmax_more_than_half)
train_entry = entry
table3.append(entry)
print(tabulate(table3, headers=['S.No.', 'fg_class','bg_class','data_points','FTPT', 'FFPT', 'FTPF', 'FFPF', 'avg_img > 0.5'] ) )
test_images =[] # list of mosaic images, each mosaic image is saved as a list of 9 images
fore_idx_test =[] # list of indexes at which the foreground image is present in a mosaic image
test_label=[] # label of mosaic image = foreground class present in that mosaic
test_set_labels = []
for i in range(10000):
set_idx = set()
bg_idx = np.random.randint(0,35000,8)
set_idx = set(background_label[bg_idx].tolist())
fg_idx = np.random.randint(0,15000)
set_idx.add(foreground_label[fg_idx].item())
fg = np.random.randint(0,9)
fore_idx_test.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
test_images.append(image_list)
test_label.append(label)
test_set_labels.append(set_idx)
test_data = MosaicDataset(test_images,test_label,fore_idx_test)
test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False)
```
# Test summary on test mosaics made from the trainset of 50k CIFAR
```
fg = [fg1,fg2,fg3]
bg = list(set([0,1,2,3,4,5,6,7,8,9])-set(fg))
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
outputs, alphas, avg_images = fore_net(inputs)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total test set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half",argmax_more_than_half)
print("argmax_less_than_half",argmax_less_than_half)
print("="*100)
# table4 = []
entry = [2,'fg = '+ str(fg),'bg = '+str(bg),10000]
entry.append((100 * focus_true_pred_true / total))
entry.append( (100 * focus_false_pred_true / total))
entry.append( ( 100 * focus_true_pred_false / total))
entry.append( ( 100 * focus_false_pred_false / total))
entry.append( argmax_more_than_half)
test_entry = entry
table3.append(entry)
print(tabulate(table3, headers=['S.No.', 'fg_class','bg_class','data_points','FTPT', 'FFPT', 'FTPF', 'FFPF', 'avg_img > 0.5'] ) )
dataiter = iter(testloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=10
for i in range(1000):
images, labels = next(dataiter)
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
background_data.append(img)
background_label.append(labels[j])
else:
img = images[j].tolist()
foreground_data.append(img)
foreground_label.append(labels[j])
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)
test_images =[] # list of mosaic images, each mosaic image is saved as a list of 9 images
fore_idx_test =[] # list of indexes at which the foreground image is present in a mosaic image
test_label=[] # label of mosaic image = foreground class present in that mosaic
test_set_labels = []
for i in range(10000):
set_idx = set()
bg_idx = np.random.randint(0,7000,8)
set_idx = set(background_label[bg_idx].tolist())
fg_idx = np.random.randint(0,3000)
set_idx.add(foreground_label[fg_idx].item())
fg = np.random.randint(0,9)
fore_idx_test.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
test_images.append(image_list)
test_label.append(label)
test_set_labels.append(set_idx)
test_data = MosaicDataset(test_images,test_label,fore_idx_test)
unseen_test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False)
```
# Test summary on Test mosaic made from Testset of 10k CIFAR
```
fg = [fg1,fg2,fg3]
bg = list(set([0,1,2,3,4,5,6,7,8,9])-set(fg))
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in unseen_test_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
outputs, alphas, avg_images = fore_net(inputs)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total test set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half",argmax_more_than_half)
print("argmax_less_than_half",argmax_less_than_half)
print("="*100)
# table4 = []
entry = [3,'fg = '+ str(fg),'bg = '+str(bg),10000]
entry.append((100 * focus_true_pred_true / total))
entry.append( (100 * focus_false_pred_true / total))
entry.append( ( 100 * focus_true_pred_false / total))
entry.append( ( 100 * focus_false_pred_false / total))
entry.append( argmax_more_than_half)
test_entry = entry
table3.append(entry)
print(tabulate(table3, headers=['S.No.', 'fg_class','bg_class','data_points','FTPT', 'FFPT', 'FTPF', 'FFPF', 'avg_img > 0.5'] ) )
```
<img align="right" src="images/ninologo.png" width="150"/>
<img align="right" src="images/tf-small.png" width="125"/>
<img align="right" src="images/dans.png" width="150"/>
# Jumps
Things do not only lie embedded in each other, they can also *point* to each other.
The mechanism for that is *edges*. Edges are links between *nodes*.
Like nodes, edges may carry feature values.
We learn how to deal with structure in a quantitative way.
```
%load_ext autoreload
%autoreload 2
import collections
from IPython.display import Markdown, display
from tf.app import use
A = use("uruk:clone", checkout="clone", hoist=globals())
# A = use('uruk', hoist=globals())
```
## Measuring depth
Numbered lines in the transliterations indicate a hierarchy of cases within lines.
How deep can cases go?
We explore the distribution of cases with respect to their depth.
We need a function that computes the depth of a case.
We program that function in such a way that it also works for *quads* (seen before),
and *clusters* (will see later).
The idea of this function is:
* if a structure does not have sub-structures, its depth is 1 or 0;
* it is 1 if the lowest level parts of the structure have a different name
such as quads versus signs;
* it is 0 if the lowest level parts of the structure have the same name,
such as cases in lines;
* the depth of a structure is 1 more than the maximum of the depths of its sub-structures.
How do we find the sub-structures of a structure?
By following *edges* with a `sub` feature, as we have seen in
[quads](quads.ipynb).
```
def depthStructure(node, nodeType, ground):
subDepths = [
depthStructure(subNode, nodeType, ground)
for subNode in E.sub.f(node)
if F.otype.v(subNode) == nodeType
]
if len(subDepths) == 0:
return ground
else:
return max(subDepths) + 1
```
## Example: cases
We call up our example tablet and do a few basic checks on cases.
Note that there is also a feature **depth** that provides the depth at which a case is found,
which is different from the depth a case has.
```
pNum = "P005381"
query = """
tablet catalogId=P005381
"""
results = A.search(query)
A.show(results, withNodes=True, lineNumbers=True, showGraphics=False)
line1 = T.nodeFromSection((pNum, "obverse:1", "1"))
A.pretty(line1, showGraphics=False)
depthStructure(line1, "case", 0)
```
That makes sense, since case 1 is divided into one level of sub-cases: 1a and 1b.
```
L.d(line1, otype="case")
line2 = T.nodeFromSection((pNum, "obverse:1", "2"))
A.pretty(line2, showGraphics=False)
depthStructure(line2, "case", 0)
```
Indeed, case 2 does not have a division in sub-cases.
```
L.d(line2, otype="case")
```
## Counting by depth
For a variety of structures we'll find out how deep they go,
and how depth is distributed in the corpus.
### Cases
We are going to collect all cases in buckets according to their depths.
```
caseDepths = collections.defaultdict(list)
for n in F.otype.s("line"):
caseDepths[depthStructure(n, "case", 0)].append(n)
for n in F.otype.s("case"):
caseDepths[depthStructure(n, "case", 0)].append(n)
caseDepthsSorted = sorted(
caseDepths.items(),
key=lambda x: (-x[0], -len(x[1])),
)
for (depth, casesOrLines) in caseDepthsSorted:
print(f"{len(casesOrLines):>5} cases or lines with depth {depth}")
```
We'll have some fun with this. We find two of the deepest cases, one on
a face that is as small as possible, one on a face that is as big as possible.
So we restrict ourselves to `caseDepths[4]`.
For all of these cases we find the face they are on, and the number of quads on that face.
```
deepCases = caseDepths[4]
candidates = []
for case in deepCases:
face = L.u(case, otype="face")[0]
size = len(A.getOuterQuads(face))
candidates.append((case, size))
sortedCandidates = sorted(candidates, key=lambda x: (x[1], x[0]))
sortedCandidates
```
We can do better than this!
```
A.table(sortedCandidates)
```
We can also assemble relevant information for this table by hand
and put it in a markdown table.
```
markdown = """
case type | case number | tablet | face | size
------ | ---- | ---- | ---- | ----
""".strip()
markdown += "\n"
bigCase = sortedCandidates[-1][0]
smallCase = sortedCandidates[0][0]
for (case, size) in sortedCandidates:
caseType = F.otype.v(case)
caseNum = F.number.v(case)
face = L.u(case, otype="face")[0]
tablet = L.u(case, otype="tablet")[0]
markdown += f"""
{caseType} | {caseNum} | {A.cdli(tablet, asString=True)} | {F.type.v(face)} | {size}
""".strip()
markdown += "\n"
Markdown(markdown)
```
Not surprisingly, the deepest cases are all lines, because every case is enclosed by a line, which is one level deeper than that case.
You can click on the P-numbers to view these tablets on CDLI.
We finally show the source lines that contain these deep cases.
```
A.pretty(smallCase)
A.pretty(bigCase)
```
With a bit of coding we can get another display:
```
(smallPnum, smallColumn, smallCaseNum) = A.caseFromNode(smallCase)
(bigPnum, bigColumn, bigCaseNum) = A.caseFromNode(bigCase)
smallLineStr = "\n".join(A.getSource(smallCase))
bigLineStr = "\n".join(A.getSource(bigCase))
display(
Markdown(
f"""
**{smallPnum} {smallColumn} line {smallCaseNum}**
```
{smallLineStr}
```
"""
)
)
A.lineart(smallPnum, width=200)
display(
Markdown(
f"""
---
**{bigPnum} {bigColumn} line {bigCaseNum}**
```
{bigLineStr}
```
"""
)
)
A.photo(bigPnum, width=400)
```
### Quads
We just want to see how deep quads can get.
```
quadDepths = collections.defaultdict(list)
for quad in F.otype.s("quad"):
quadDepths[depthStructure(quad, "quad", 1)].append(quad)
quadDepthsSorted = sorted(
quadDepths.items(),
key=lambda x: (-x[0], -len(x[1])),
)
for (depth, quads) in quadDepthsSorted:
print(f"{len(quads):>5} quads with depth {depth}")
```
Lo and behold! There is just one quad of depth 3 and it is on our leading
example tablet.
We have studied it already in [quads](quads.ipynb).
```
bigQuad = quadDepths[3][0]
tablet = L.u(bigQuad, otype="tablet")[0]
A.lineart(bigQuad)
A.cdli(tablet)
```
### Clusters
Clusters are groups of consecutive quads between brackets.
Clusters can be nested.
As with quads, we find the members of a cluster by following `sub` edges.
#### Depths in clusters
We use familiar logic to get the hang of cluster depths.
```
clusterDepths = collections.defaultdict(list)
for cl in F.otype.s("cluster"):
clusterDepths[depthStructure(cl, "cluster", 1)].append(cl)
clusterDepthsSorted = sorted(
clusterDepths.items(),
key=lambda x: (-x[0], -len(x[1])),
)
for (depth, cls) in clusterDepthsSorted:
print(f"{len(cls):>5} clusters with depth {depth}")
```
Not much going on here.
Let's pick a nested cluster.
```
nestedCluster = clusterDepths[2][0]
tablet = L.u(nestedCluster, otype="tablet")[0]
quads = A.getOuterQuads(nestedCluster)
print(A.atfFromCluster(nestedCluster))
A.pretty(nestedCluster, withNodes=True)
A.lineart(quads[0], height=150)
A.cdli(tablet)
```
#### Kinds of clusters
In our corpus we encounter several types of brackets:
* `( )a` for proper names
* `[ ]` for uncertainty
* `< >` for supplied material.
The next thing is to get an overview of the distribution of these kinds.
```
clusterTypeDistribution = collections.Counter()
for cluster in F.otype.s("cluster"):
typ = F.type.v(cluster)
clusterTypeDistribution[typ] += 1
for (typ, amount) in sorted(
clusterTypeDistribution.items(),
key=lambda x: (-x[1], x[0]),
):
print(f"{amount:>5} x a {typ:>8}-cluster")
```
The conversion to TF has transformed `[...]` to a cluster of one sign with grapheme `…`.
These are trivial clusters and we want to exclude them from further analysis, so we redo the counting.
First we make a sequence of all non-trivial clusters:
```
realClusters = [
c
for c in F.otype.s("cluster")
if (
F.type.v(c) != "uncertain"
or len(E.oslots.s(c)) > 1
or F.grapheme.v(E.oslots.s(c)[0]) != "…"
)
]
len(realClusters)
```
Now we redo the same analysis, but we start with the filtered cluster sequence.
```
clusterTypeDistribution = collections.Counter()
for cluster in realClusters:
typ = F.type.v(cluster)
clusterTypeDistribution[typ] += 1
for (typ, amount) in sorted(
clusterTypeDistribution.items(),
key=lambda x: (-x[1], x[0]),
):
print(f"{amount:>5} x a {typ:>8}-cluster")
```
#### Lengths of clusters
How long are clusters in general?
There are two possible ways to measure the length of a cluster:
* the number of signs it occupies;
* the number of top-level members it has (quads or signs).
By now, the pattern to answer questions like this is becoming familiar.
We express the logic in a function, that takes the way of measuring
as a parameter.
In that way, we can easily provide a cluster-length distribution based
on measurements in signs and in quads.
```
def computeDistribution(nodes, measure):
distribution = collections.Counter()
for node in nodes:
m = measure(node)
distribution[m] += 1
for (m, amount) in sorted(
distribution.items(),
key=lambda x: (-x[1], x[0]),
):
print(f"{amount:>5} x a measure of {m:>8}")
def lengthInSigns(node):
return len(L.d(node, otype="sign"))
def lengthInMembers(node):
return len(E.sub.f(node))
```
Now we can show the length distributions of clusters by just calling `computeDistribution()`:
```
computeDistribution(realClusters, lengthInSigns)
computeDistribution(realClusters, lengthInMembers)
```
Of course, we want to see the longest cluster.
```
longestCluster = [c for c in F.otype.s("cluster") if lengthInMembers(c) == 7][0]
A.pretty(longestCluster)
```
#### Lengths of quads
If you look closely at the code for these functions, there is nothing in them that
is specific to clusters.
The measures are in terms of the totally generic `oslots` function, and the fairly generic
`sub` edges, which are also defined for quads.
So, in one go, we can obtain a length distribution of quads.
Note that quads can also be sub-quads.
```
computeDistribution(F.otype.s("quad"), lengthInSigns)
computeDistribution(F.otype.s("quad"), lengthInMembers)
longestQuad = [q for q in F.otype.s("quad") if lengthInSigns(q) == 5][0]
A.pretty(longestQuad)
```
# Next
[cases](cases.ipynb)
*In* case *you are serious ...*
Try the
[primers](http://nbviewer.jupyter.org/github/Nino-cunei/primers/tree/master/)
for introductions into digital cuneiform research.
All chapters:
[start](start.ipynb)
[imagery](imagery.ipynb)
[steps](steps.ipynb)
[search](search.ipynb)
[signs](signs.ipynb)
[quads](quads.ipynb)
[jumps](jumps.ipynb)
[cases](cases.ipynb)
---
CC-BY Dirk Roorda
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Regression: predict fuel efficiency
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/tr/r1/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/tr/r1/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: These documents were translated by volunteer TensorFlow users. Because community translations are best-effort, we cannot guarantee that they exactly match the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions for improving these translations, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) repository. To volunteer to contribute translations, you can contact the [docs-tr@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-tr) list.
In a *regression* problem, we aim to predict a continuous-valued output, such as a probability or a price. In contrast, in a *classification* problem we aim to select the most suitable class from a list of classes (for example, given a picture that contains an apple or an orange, determining which of the two is in the picture).
This notebook uses the classic [Auto MPG](https://archive.ics.uci.edu/ml/datasets/auto+mpg) dataset to build a model that predicts the fuel efficiency (MPG) of cars produced in the late 1970s and early 1980s. To do this, we will feed the model information about cars produced in that time period. This information covers various car attributes: the number of cylinders, horsepower, displacement, and vehicle weight.
This example uses the `tf.keras` API; see this [guide](https://www.tensorflow.org/r1/guide/keras) for details.
```
# Use seaborn for the pairplot
!pip install seaborn
import pathlib
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import tensorflow.compat.v1 as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
```
## The Auto MPG dataset
The dataset is available from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/).
### Get the data
First, download the dataset.
```
dataset_path = keras.utils.get_file("auto-mpg.data", "https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")
dataset_path
```
Import the data using the pandas library.
```
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.tail()
```
### Clean the data
The dataset contains a few unknown values.
```
dataset.isna().sum()
```
To keep this tutorial simple, let's drop those rows from the dataset.
```
dataset = dataset.dropna()
```
The `"Origin"` column is categorical, not numeric. So let's convert it to one-hot form:
```
origin = dataset.pop('Origin')
dataset['USA'] = (origin == 1)*1.0
dataset['Europe'] = (origin == 2)*1.0
dataset['Japan'] = (origin == 3)*1.0
dataset.tail()
```
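The same one-hot step can also be done in a single call with pandas' `pd.get_dummies` (a sketch on a toy column; note that the resulting column names, `Origin_1` and so on, differ from the hand-built `USA`/`Europe`/`Japan` columns above):

```python
import pandas as pd

# Toy frame standing in for the dataset's categorical 'Origin' column
df = pd.DataFrame({'Origin': [1, 2, 3, 1]})

# One indicator column per category; cast to float to match the 1.0/0.0 style above
dummies = pd.get_dummies(df['Origin'], prefix='Origin').astype(float)
print(dummies.columns.tolist())
```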
### Split the data into training and test sets
Now, split the dataset into a training set and a test set.
We will use the test set in the final evaluation of our model.
```
train_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(train_dataset.index)
```
### Inspect the data
Let's have a quick look at the joint distributions of a few columns from the training set.
```
sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde")
```
Also look at the overall statistics:
```
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats = train_stats.transpose()
train_stats
```
### Split features from labels
Separate the target value, or "label", from the features. We train the model to predict this label value.
```
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
```
### Normalize the data
Looking at the `train_stats` values above, note how different the value ranges of the features are.
It is good practice to normalize features that use different scales and ranges. Although the model might converge without normalization, skipping it makes training more difficult and makes the resulting model dependent on the units of the chosen inputs.
Note: Although we intentionally generate these statistics from only the training dataset, they will also be used to normalize the test dataset. We need to normalize the test set in the same way, to project it into the distribution the model has been trained on.
```
def norm(x):
return (x - train_stats['mean']) / train_stats['std']
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
```
This normalized data is what we will use to train the model.
Caution: The statistics used to normalize the inputs here (mean and standard deviation) must also be applied to any other data fed to the model, together with the one-hot encoding we did earlier. That includes the test set, as well as live data when the model is used in production.
## The model
### Build the model
Let's build our model. We will use two densely connected hidden layers, plus an output layer that returns a single continuous value. Since we will later build a second model, the model-building steps are wrapped in a `build_model` function for convenience.
```
def build_model():
model = keras.Sequential([
layers.Dense(64, activation=tf.nn.relu, input_shape=[len(train_dataset.keys())]),
layers.Dense(64, activation=tf.nn.relu),
layers.Dense(1)
])
optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mean_squared_error',
optimizer=optimizer,
metrics=['mean_absolute_error', 'mean_squared_error'])
return model
model = build_model()
```
### Inspect the model
Use the `.summary` method to print a simple description of the model.
```
model.summary()
```
Now let's try out the model. Take a batch of 10 examples from the training data and call the `model.predict` method on it.
```
example_batch = normed_train_data[:10]
example_result = model.predict(example_batch)
example_result
```
The model produces results of the expected shape and type, and appears to be working as intended.
### Train the model
Let's train the model for 1000 epochs, and record the training and validation accuracy in the `history` object.
```
# Display training progress by printing a single dot for each completed epoch
class PrintDot(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs):
if epoch % 100 == 0: print('')
print('.', end='')
EPOCHS = 1000
history = model.fit(
normed_train_data, train_labels,
epochs=EPOCHS, validation_split = 0.2, verbose=0,
callbacks=[PrintDot()])
```
Let's visualize the model's training progress using the stats stored in the `history` object.
```
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
def plot_history(history):
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Abs Error [MPG]')
plt.plot(hist['epoch'], hist['mean_absolute_error'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mean_absolute_error'],
label = 'Val Error')
plt.ylim([0,5])
plt.legend()
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Square Error [$MPG^2$]')
plt.plot(hist['epoch'], hist['mean_squared_error'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mean_squared_error'],
label = 'Val Error')
plt.ylim([0,20])
plt.legend()
plt.show()
plot_history(history)
```
The graph shows that after about 100 epochs the validation error no longer improves, and even degrades slightly. Let's update the `model.fit` call so that it automatically stops training when the validation score stops improving. We will use an *EarlyStopping callback* that checks the training condition at the end of every epoch. If a set number of epochs passes without the model showing improvement, training is stopped automatically.
You can learn more about this callback [here](https://www.tensorflow.org/versions/master/api_docs/python/tf/keras/callbacks/EarlyStopping).
```
model = build_model()
# The patience parameter sets how many epochs to check for improvement
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])
plot_history(history)
```
The graph shows that the average error on the validation set is around +/- 2 MPG. Is this error good or bad? We leave that decision to you.
Let's see how the model performs on **test** data it has never seen before, i.e. how well it generalizes. This tells us how well we can expect the model to predict when it is used in the real world with user data.
```
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)
print("Testing set Mean Abs Error: {:5.2f} MPG".format(mae))
```
### Make predictions
Finally, let's predict MPG values using the test dataset:
```
test_predictions = model.predict(normed_test_data).flatten()
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
plt.axis('equal')
plt.axis('square')
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
_ = plt.plot([-100, 100], [-100, 100])
```
It looks like our model predicts quite well. Let's take a look at the error distribution.
```
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
```
The error distribution is not very Gaussian, but since we have very few data points, this is to be expected.
## Conclusion
This notebook introduced a few techniques used to handle regression problems:
* Mean Squared Error (MSE) is a loss function commonly used for regression problems (classification problems use different loss functions).
* Similarly, the evaluation metrics for regression and classification models differ. A commonly used regression metric is Mean Absolute Error (MAE).
* When numeric features have different value ranges, each feature should be independently scaled to the same range.
* If we do not have much training data, we should prefer smaller neural networks with few hidden layers to avoid overfitting.
* Early stopping is a useful technique to prevent overfitting.
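As a quick, self-contained illustration of the two metrics above (a sketch with made-up numbers, using NumPy directly rather than Keras):

```python
import numpy as np

y_true = np.array([20.0, 25.0, 30.0])   # hypothetical actual MPG values
y_pred = np.array([22.0, 24.0, 27.0])   # hypothetical model predictions

mse = np.mean((y_true - y_pred) ** 2)   # mean squared error: the training loss
mae = np.mean(np.abs(y_true - y_pred))  # mean absolute error: the reported metric
print(mse, mae)
```

Note how the squaring step in MSE penalizes the 3 MPG miss much more heavily than the 1 MPG miss, which is why MAE is often easier to interpret as a headline metric.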
# Natural Language Processing - Problems
**Author:** Ties de Kok ([Personal Website](https://www.tiesdekok.com)) <br>
**Last updated:** June 2021
**Python version:** Python 3.6+
**License:** MIT License
**Recommended environment: `researchPython`**
```
import os
recommendedEnvironment = 'researchPython'
if os.environ['CONDA_DEFAULT_ENV'] != recommendedEnvironment:
print('Warning: it does not appear you are using the {0} environment, did you run "conda activate {0}" before starting Jupyter?'.format(recommendedEnvironment))
```
<div style='border-style: solid; padding: 10px; border-color: black; border-width:5px; text-align: left; margin-top:20px; margin-bottom: 20px;'>
<span style='color:black; font-size: 30px; font-weight:bold;'>Introduction</span>
</div>
<div style='border-style: solid; padding: 5px; border-color: darkred; border-width:5px; text-align: center; margin-left: 100px; margin-right:100px;'>
<span style='color:black; font-size: 20px; font-weight:bold;'> Make sure to open up the respective tutorial notebook(s)! <br> That is what you are expected to use as primary reference material. </span>
</div>
### Relevant tutorial notebooks:
1) [`0_python_basics.ipynb`](https://nbviewer.jupyter.org/github/TiesdeKok/LearnPythonforResearch/blob/master/0_python_basics.ipynb)
2) [`2_handling_data.ipynb`](https://nbviewer.jupyter.org/github/TiesdeKok/LearnPythonforResearch/blob/master/2_handling_data.ipynb)
3) [`NLP_Notebook.ipynb`](https://nbviewer.jupyter.org/github/TiesdeKok/Python_NLP_Tutorial/blob/master/NLP_Notebook.ipynb)
## Import required packages
```
import os, re
import pandas as pd
import numpy as np
import en_core_web_lg
nlp = en_core_web_lg.load()
```
<div style='border-style: solid; padding: 10px; border-color: black; border-width:5px; text-align: center; margin-top:20px; margin-bottom: 20px;'>
<span style='color:black; font-size: 30px; font-weight:bold;'>Part 1 </span>
</div>
<div style='border-style: solid; padding: 5px; border-color: darkred; border-width:5px; text-align: center; margin-left: 100px; margin-right:100px;'>
<span style='color:black; font-size: 15px; font-weight:bold;'> Note: feel free to add as many cells as you'd like to answer these problems, you don't have to fit it all in one cell. </span>
</div>
## 1) Perform basic operations on a sample earnings transcript text file
### 1a) Load the following text file: `data > example_transcript.txt` into Python
### 1b) Print the first 400 characters of the text file you just loaded
### 1c) Count the number of times the words `Alex` and `Angie` are mentioned
### 1c) Use the provided Regular Expression to capture all numbers prior to a "%"
Use this regular expression: `\W([\.\d]{,})%`
**You can play around with this regular expression here: <a href='https://bit.ly/3heIqoG'>Test on Pythex.org</a>**
**Hint:** use the `re.findall()` function
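To see what the expression captures, here it is applied to a made-up sample string (the transcript itself is not reproduced here):

```python
import re

# Hypothetical sample text; the real input is the loaded transcript
sample = "Margins improved 12.5% while churn fell 3%."
matches = re.findall(r"\W([\.\d]{,})%", sample)
print(matches)
```

The capture group returns only the digits and dots, without the `%` sign or the preceding non-word character.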
### Extra: try to explain to a neighbour / group member what the regular expression is doing
You can use the cheatsheet on Pythex.org for reference.
### 1d) Load the text into a Spacy object and split it into a list of sentences
Make sure to evaluate how well it worked by inspecting various elements of the sentence list.
#### What is the 150th sentence?
### Why is there a difference between showing a string and printing a string? See the illustration below:
```
demo_sentence = "This is a test sentence, the keyword:\x20\nSeattle"
demo_sentence
print(demo_sentence)
```
### 1e) Parse out the following 3 blocks of text:
* The meta data at the top
* The presentation portion
* The Q&A portion
**Note 1:** I recommend to do this based on the full text (i.e., the raw string as you loaded it) not the list of sentences.
**Note 2:** Don't use the location (e.g., `[:123]`), that wouldn't work if you had more than 1 transcript.
### 1f) How many characters, sentences, words (tokens) does the presentation portion and the Q&A portion have?
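One possible pattern for these counts, shown on a toy string (a sketch using a blank spaCy pipeline with a rule-based `sentencizer`; in the problem itself you would use the `en_core_web_lg` model already loaded above):

```python
import spacy

nlp_small = spacy.blank("en")      # lightweight pipeline, no model download needed
nlp_small.add_pipe("sentencizer")  # rule-based sentence boundary detection

doc = nlp_small("First sentence here. And a second one.")
n_chars = len(doc.text)            # number of characters
n_sents = len(list(doc.sents))     # number of sentences
n_tokens = len(doc)                # number of tokens (words and punctuation)
print(n_chars, n_sents, n_tokens)
```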
<div style='border-style: solid; padding: 5px; border-color: darkred; border-width:5px; text-align: left; margin-left: 0px; margin-right:100px;'>
<span style='color:black; font-size: 20px; font-weight:bold;'> Note: problems 1g and 1h are quite challenging, it might make sense to skip them until the end and move on to questions 2 and 3 first.</span>
</div>
### 1g) Create a list of all the questions during the Q&A and include the person that asked the question
You should end up with 20 questions.
### 1h) Modify the Q&A list by adding in the answer + answering person
This is what the first entry should (roughly) look like:
```python
qa_list[0] =
{
'q_person': 'Christopher McGratty ',
'question': 'Great, thanks, good afternoon. Kevin maybe you could start -- or Alex on the margin, obviously the environment has got a little bit tougher for the banks. But you have this -- the ability to bring down deposit costs, which you talked about in your prepared remarks. I appreciate in the guidance for the first quarter, but if the rate outlook remains steady, how do we think about ultimate stability in the flow and the margin, where and kind of when?',
'answers': [{
'name': 'Alex Ko ',
'answer': 'Sure, sure. As I indicated, we would expect to have continued compression next quarter given the rate cuts that we have experienced especially October rate cut, it will continue next quarter. But as we indicated, our proactive deposit initiative as well as very disciplined pricing on the deposit, even though we have a very competitive -- competition on the loan rate is very still severe. We would expect to stabilize in the second quarter of 2020 in terms of net interest margin and then second half of the year, we would expect to start to increase.'
}]
}
```
<div style='border-style: solid; padding: 10px; border-color: black; border-width:5px; text-align: left; margin-top:20px; margin-bottom: 20px;'>
<span style='color:black; font-size: 30px; font-weight:bold;'>Part 2:</span>
</div>
## 2) Extract state name counts from MD&As
Follow Garcia and Norli (2012) and extract state name counts from MD&As.
#### References
Garcia, D., & Norli, Ø. (2012). Geographic dispersion and stock returns. Journal of Financial Economics, 106(3), 547-565.
#### Data to use
I have included a random selection of 20 pre-processed MDA filings in the `data > MDA_files` folder. The filename is the unique identifier.
You will also find a file called `MDA_META_DF.xlsx` in the "data" folder; it contains the following meta-data for each MD&A:
* filing date
* cik
* company name
* link to filing
### 2a) Load data into a dictionary with as key the filename and as value the content of the text file
The files should all be in the following folder:
```
os.path.join('data', 'MDA_files')
```
### 2b) Load state name data into a DataFrame
**Note:** state names are provided in the `state_names.xlsx` file in the "data" folder.
### 2c) Count the number of times that each U.S. state name is mentioned in each MD&A
**Hint:** save the counts to a list where each entry is a list that contains the following three items: [*filename*, *state_name*, *count*], like this:
> [
['21344_0000021344-16-000050.txt', 'Alabama', 0],
['21344_0000021344-16-000050.txt', 'Alaska', 0],
['21344_0000021344-16-000050.txt', 'Arizona', 0],
> ....
>['49071_0000049071-16-000117.txt', 'West Virginia', 0],
['49071_0000049071-16-000117.txt', 'Wisconsin', 0],
['49071_0000049071-16-000117.txt', 'Wyoming', 0]
]
You can verify that it worked by checking whether the 80th element (i.e. `list[79]`) equals:
> ['21510_0000021510-16-000074.txt', 'New Jersey', 2]
(I looped over the companies first, and then over the states)
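One hedged way to sketch the counting loop, using word-boundary regexes so that, e.g., "Kansas" is not counted inside "Arkansas" (whether the reference solution used the same tokenization is an assumption):

```python
import re

def count_state_mentions(texts, state_names):
    """Return [[filename, state, count], ...], looping files first, then states."""
    rows = []
    for filename, text in texts.items():
        for state in state_names:
            # \b word boundaries avoid substring hits (e.g. 'Kansas' in 'Arkansas')
            pattern = r'\b' + re.escape(state) + r'\b'
            rows.append([filename, state, len(re.findall(pattern, text))])
    return rows

rows = count_state_mentions({'demo.txt': 'Texas and New Jersey; Texas again.'},
                            ['Texas', 'New Jersey', 'Kansas'])
```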
### 2d) Convert the list you created in `2c` into a Pandas DataFrame and save it as an Excel sheet
**Hint:** Use the `columns=[...]` parameter to name the columns
## 3) Create sentiment score based on Loughran and McDonald (2011)
Create a sentiment score for MD&As based on the Loughran and McDonald (2011) word lists.
#### References
Loughran, T., & McDonald, B. (2011). When is a liability not a liability? Textual analysis, dictionaries, and 10‐Ks. The Journal of Finance, 66(1), 35-65.
#### Data to use
I have included a random selection of 20 pre-processed MDA filings in the `data > MDA_files` folder. The filename is the unique identifier.
You will also find a file called `MDA_META_DF.xlsx` in the "data" folder; this contains the following meta-data for each MD&A:
* filing date
* cik
* company name
* link to filing
### 3a) Load the Loughran and McDonald master dictionary
**Note:** The Loughran and McDonald dictionary is included in the "data" folder: `LoughranMcDonald_MasterDictionary_2014.xlsx`
### 3b) Create two lists: one containing all the negative words and the other one containing all the positive words
**Tip:** I recommend changing all words to lowercase in this step so that you don't need to worry about case later
### 3c) For each MD&A calculate the *total* number of times negative and positive words are mentioned
**Note:** make sure you also convert the text to lowercase!
**Hint:** save the counts to a list where each entry is a list that contains the following three items: [*filename*, *total pos count*, *total neg count*], like this:
> [
> ['21344_0000021344-16-000050.txt', 1166, 2318],
> ['21510_0000021510-16-000074.txt', 606, 1078],
> ['21665_0001628280-16-011343.txt', 516, 1058],
> ....
> ['47217_0000047217-16-000093.txt', 544, 928],
> ['47518_0001214659-16-014806.txt', 482, 974],
> ['49071_0000049071-16-000117.txt', 954, 1636]
> ]
You can verify that it worked by checking whether the 16th element (i.e. `list[15]`) equals:
> ['43920_0000043920-16-000025.txt', 558, 1568]
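A sketch of the counting step, assuming simple alphabetic tokenization of the lowercased text (the exact tokenizer you choose will affect the totals):

```python
import re

def sentiment_counts(texts, positive_words, negative_words):
    """Return [[filename, total_pos, total_neg], ...] over lowercase tokens."""
    pos, neg = set(positive_words), set(negative_words)
    rows = []
    for filename, text in texts.items():
        tokens = re.findall(r'[a-z]+', text.lower())
        rows.append([filename,
                     sum(1 for t in tokens if t in pos),
                     sum(1 for t in tokens if t in neg)])
    return rows

rows = sentiment_counts({'x.txt': 'A great gain, despite a loss and another Loss.'},
                        ['great', 'gain'], ['loss'])
```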
### 3d) Convert the list created in 3c into a Pandas DataFrame
**Hint:** Use the `columns=[...]` parameter to name the columns
### 3e) Create a new column with a "sentiment score" for each MD&A
Use the following imaginary sentiment score:
$$\frac{(Num\ Positive\ Words - Num\ Negative\ Words)}{Sum\ of\ Pos\ and\ Neg\ Words}$$
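The score can be computed column-wise; a minimal sketch using the sample counts from 3c, with hypothetical column names `pos` and `neg`:

```python
import pandas as pd

sentiment_df = pd.DataFrame({'filename': ['21344_0000021344-16-000050.txt'],
                             'pos': [1166], 'neg': [2318]})
# (pos - neg) / (pos + neg), per the formula above
sentiment_df['sentiment'] = ((sentiment_df['pos'] - sentiment_df['neg'])
                             / (sentiment_df['pos'] + sentiment_df['neg']))
```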
### 3f) Use the `MDA_META_DF` file to add the company name, filing date, and CIK to the sentiment dataframe
<div style='border-style: solid; padding: 10px; border-color: black; border-width:5px; text-align: left; margin-top:20px; margin-bottom: 20px;'>
<span style='color:black; font-size: 30px; font-weight:bold;'>Part 3: "Ties, I am bored, please give me a challenge"</span>
</div>
**Note:** You don't have to complete part 3 if you are handing in the problems for credit.
------
## 1) Visualize the entities in the following sentences:
```python
example_string = "John Smith is a Professor at the University of Washington. Which is located in Seattle."
```
What you should get:

## 2) For each sentence in `data > example_transcript.txt`, find the sentence that is closest in semantic similarity
Use the built-in word vectors that come with Spacy. Limit your sample to sentences with more than 100 characters.
This is what you should get:

# Nearest Neighbors
When exploring a large set of documents -- such as Wikipedia, news articles, StackOverflow, etc. -- it can be useful to get a list of related material. To find relevant documents, you typically:
* Decide on a notion of similarity
* Find the documents that are most similar
In the assignment you will
* Gain intuition for different notions of similarity and practice finding similar documents.
* Explore the tradeoffs with representing documents using raw word counts and TF-IDF
* Explore the behavior of different distance metrics by looking at the Wikipedia pages most similar to President Obama’s page.
**Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook.
## Import necessary packages
As usual we need to first import the Python packages that we will need.
```
import graphlab
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
```
## Load Wikipedia dataset
We will be using the same dataset of Wikipedia pages that we used in the Machine Learning Foundations course (Course 1). Each element of the dataset consists of a link to the wikipedia article, the name of the person, and the text of the article (in lowercase).
```
wiki = graphlab.SFrame('people_wiki.gl')
wiki
```
## Extract word count vectors
As we have seen in Course 1, we can extract word count vectors using a GraphLab utility function. We add this as a column in `wiki`.
```
wiki['word_count'] = graphlab.text_analytics.count_words(wiki['text'])
wiki
```
## Find nearest neighbors
Let's start by finding the nearest neighbors of the Barack Obama page, using the word count vectors to represent the articles and Euclidean distance to measure distance. For this, we will again use a GraphLab Create implementation of nearest neighbor search.
```
model = graphlab.nearest_neighbors.create(wiki, label='name', features=['word_count'],
method='brute_force', distance='euclidean')
```
Let's look at the top 10 nearest neighbors by performing the following query:
```
model.query(wiki[wiki['name']=='Barack Obama'], label='name', k=10)
```
All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians.
* Francisco Barrio is a Mexican politician, and a former governor of Chihuahua.
* Walter Mondale and Don Bonker are Democrats who made their career in late 1970s.
* Wynn Normington Hugh-Jones is a former British diplomat and Liberal Party official.
* Andy Anstett is a former politician in Manitoba, Canada.
Nearest neighbors with raw word counts got some things right, showing all politicians in the query result, but missed finer and important details.
For instance, let's find out why Francisco Barrio was considered a close neighbor of Obama. To do this, let's look at the most frequently used words in each of Barack Obama and Francisco Barrio's pages:
```
def top_words(name):
"""
Get a table of the most frequent words in the given person's wikipedia page.
"""
row = wiki[wiki['name'] == name]
word_count_table = row[['word_count']].stack('word_count', new_column_name=['word','count'])
return word_count_table.sort('count', ascending=False)
obama_words = top_words('Barack Obama')
obama_words
barrio_words = top_words('Francisco Barrio')
barrio_words
```
Let's extract the list of most frequent words that appear in both Obama's and Barrio's documents. We've so far sorted all words from Obama and Barrio's articles by their word frequencies. We will now use a dataframe operation known as **join**. The **join** operation is very useful when it comes to playing around with data: it lets you combine the content of two tables using a shared column (in this case, the word column). See [the documentation](https://dato.com/products/create/docs/generated/graphlab.SFrame.join.html) for more details.
For instance, running
```
obama_words.join(barrio_words, on='word')
```
will extract the rows from both tables that correspond to the common words.
```
combined_words = obama_words.join(barrio_words, on='word')
combined_words
```
Since both tables contained the column named `count`, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to tell which one is for which. By inspection, we see that the first column (`count`) is for Obama and the second (`count.1`) for Barrio.
```
combined_words = combined_words.rename({'count':'Obama', 'count.1':'Barrio'})
combined_words
```
**Note**. The **join** operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget `ascending=False` to display largest counts first.
```
combined_words.sort('Obama', ascending=False)
```
**Quiz Question**. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
Hint:
* Refer to the previous paragraph for finding the words that appear in both articles. Sort the common words by their frequencies in Obama's article and take the largest five.
* Each word count vector is a Python dictionary. For each word count vector in SFrame, you'd have to check if the set of the 5 common words is a subset of the keys of the word count vector. Complete the function `has_top_words` to accomplish the task.
- Convert the list of top 5 words into set using the syntax
```
set(common_words)
```
where `common_words` is a Python list. See [this link](https://docs.python.org/2/library/stdtypes.html#set) if you're curious about Python sets.
- Extract the list of keys of the word count dictionary by calling the [`keys()` method](https://docs.python.org/2/library/stdtypes.html#dict.keys).
- Convert the list of keys into a set as well.
- Use [`issubset()` method](https://docs.python.org/2/library/stdtypes.html#set) to check if all 5 words are among the keys.
* Now apply the `has_top_words` function on every row of the SFrame.
* Compute the sum of the result column to obtain the number of articles containing all the 5 top words.
```
common_words = set(combined_words.sort('Obama', ascending=False)['word'][:5])
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = set(dict(word_count_vector).keys())
# return True if common_words is a subset of unique_words
# return False otherwise
return common_words.issubset(unique_words)
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
wiki['has_top_words'].sum()
```
**Checkpoint**. Check your `has_top_words` function on two random articles:
```
print 'Output from your function:', has_top_words(wiki[32]['word_count'])
print 'Correct output: True'
print 'Also check the length of unique_words. It should be 167'
print 'Output from your function:', has_top_words(wiki[33]['word_count'])
print 'Correct output: False'
print 'Also check the length of unique_words. It should be 188'
```
**Quiz Question**. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance?
Hint: To compute the Euclidean distance between two dictionaries, use `graphlab.toolkits.distances.euclidean`. Refer to [this link](https://dato.com/products/create/docs/generated/graphlab.toolkits.distances.euclidean.html) for usage.
```
def get_wc_dict(name):
return wiki[wiki['name'] == name]['word_count'][0]
graphlab.toolkits.distances.euclidean(get_wc_dict('Barack Obama'), get_wc_dict('George W. Bush'))
graphlab.toolkits.distances.euclidean(get_wc_dict('Barack Obama'), get_wc_dict('Joe Biden'))
graphlab.toolkits.distances.euclidean(get_wc_dict('George W. Bush'), get_wc_dict('Joe Biden'))
```
**Quiz Question**. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words, find the 10 words that show up most often in Obama's page.
```
bush_words = top_words('George W. Bush')
obama_words.join(bush_words, on='word').rename({'count':'Obama', 'count.1':'Bush'}).sort('Obama', ascending=False)
```
**Note.** Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words.
## TF-IDF to the rescue
Much of the perceived commonality between Obama and Barrio was due to occurrences of extremely frequent words, such as "the", "and", and "his". So nearest neighbors sometimes recommends plausible results for the wrong reasons.
To retrieve articles that are more relevant, we should focus on rare words that don't appear in every article. **TF-IDF** (term frequency–inverse document frequency) is a feature representation that penalizes words that are too common. Let's use GraphLab Create's implementation of TF-IDF and repeat the search for the 10 nearest neighbors of Barack Obama:
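For intuition, here is a hand-rolled sketch of the idea using the common `tf * log(N/df)` weighting (GraphLab Create's exact formula may differ in details such as smoothing):

```python
import math

def tf_idf(docs):
    """docs: list of {word: count} dicts; returns matching list of TF-IDF dicts."""
    n_docs = len(docs)
    doc_freq = {}
    for word_counts in docs:
        for word in word_counts:
            doc_freq[word] = doc_freq.get(word, 0) + 1
    return [{word: tf * math.log(n_docs / doc_freq[word])
             for word, tf in word_counts.items()}
            for word_counts in docs]

weighted = tf_idf([{'the': 10, 'senate': 2}, {'the': 8, 'soccer': 3}])
# 'the' appears in every document, so its weight collapses to zero
```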
```
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['word_count'])
model_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='euclidean')
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
```
Let's determine whether this list makes sense.
* With a notable exception of Roland Grossenbacher, the other 8 are all American politicians who are contemporaries of Barack Obama.
* Phil Schiliro, Jesse Lee, Samantha Power, and Eric Stern worked for Obama.
Clearly, the results are more plausible with the use of TF-IDF. Let's take a look at the word vectors for Obama's and Schiliro's pages. Notice that the TF-IDF representation assigns a weight to each word. This weight captures the relative importance of that word in the document. Let us sort the words in Obama's article by their TF-IDF weights; we do the same for Schiliro's article as well.
```
def top_words_tf_idf(name):
row = wiki[wiki['name'] == name]
word_count_table = row[['tf_idf']].stack('tf_idf', new_column_name=['word','weight'])
return word_count_table.sort('weight', ascending=False)
obama_tf_idf = top_words_tf_idf('Barack Obama')
obama_tf_idf
schiliro_tf_idf = top_words_tf_idf('Phil Schiliro')
schiliro_tf_idf
```
Using the **join** operation we learned earlier, try your hand at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document.
```
obama_tf_idf.join(schiliro_tf_idf, 'word')
```
The first 10 words should say: Obama, law, democratic, Senate, presidential, president, policy, states, office, 2011.
**Quiz Question**. Among the words that appear in both Barack Obama and Phil Schiliro, take the 5 that have largest weights in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
```
common_words = set(obama_tf_idf.join(schiliro_tf_idf, 'word').sort('weight', ascending=False)['word'][:5])
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = set(dict(word_count_vector).keys())
# return True if common_words is a subset of unique_words
# return False otherwise
return common_words.issubset(unique_words)
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
wiki['has_top_words'].sum()
```
Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words.
## Choosing metrics
You may wonder why Joe Biden, Obama's running mate in two presidential elections, is missing from the query results of `model_tf_idf`. Let's find out why. First, compute the distance between TF-IDF features of Obama and Biden.
**Quiz Question**. Compute the Euclidean distance between TF-IDF features of Obama and Biden. Hint: When using Boolean filter in SFrame/SArray, take the index 0 to access the first match.
```
def get_tfidf_dict(name):
return wiki[wiki['name'] == name]['tf_idf'][0]
graphlab.toolkits.distances.euclidean(get_tfidf_dict('Barack Obama'), get_tfidf_dict('Joe Biden'))
```
The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability:
```
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
```
But one may wonder, is Biden's article that different from Obama's, more so than, say, Schiliro's? It turns out that, when we compute nearest neighbors using the Euclidean distances, we unwittingly favor short articles over long ones. Let us compute the length of each Wikipedia document, and examine the document lengths for the 100 nearest neighbors to Obama's page.
```
def compute_length(row):
return len(row['text'])
wiki['length'] = wiki.apply(compute_length)
nearest_neighbors_euclidean = model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_euclidean = nearest_neighbors_euclidean.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_euclidean.sort('rank')
```
To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents.
```
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([1000, 5500, 0, 0.004])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
```
Relative to the rest of Wikipedia, nearest neighbors of Obama are overwhelmingly short, most of them being shorter than 2000 words. The bias towards short articles is not appropriate in this application, as there is really no reason to favor short articles over long articles (they are all Wikipedia articles, after all). Many Wikipedia articles are 2500 words or more, and both Obama and Biden are over 2500 words long.
**Note:** Both word-count features and TF-IDF are proportional to word frequencies. While TF-IDF penalizes very common words, longer articles tend to have longer TF-IDF vectors simply because they have more words in them.
To remove this bias, we turn to **cosine distances**:
$$
d(\mathbf{x},\mathbf{y}) = 1 - \frac{\mathbf{x}^T\mathbf{y}}{\|\mathbf{x}\| \|\mathbf{y}\|}
$$
Cosine distances let us compare word distributions of two articles of varying lengths.
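A quick NumPy check of why this helps: scaling a vector changes its Euclidean distance to the original but leaves the cosine distance at zero:

```python
import numpy as np

def cosine_distance(x, y):
    # 1 - (x . y) / (||x|| ||y||), as in the formula above
    return 1 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

short = np.array([1.0, 2.0, 3.0])
longer = 2 * short  # same word distribution, article twice as long
euclidean_gap = np.linalg.norm(short - longer)  # grows with document length
cosine_gap = cosine_distance(short, longer)     # stays at zero
```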
Let us train a new nearest neighbor model, this time with cosine distances. We then repeat the search for Obama's 100 nearest neighbors.
```
model2_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
nearest_neighbors_cosine = model2_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_cosine = nearest_neighbors_cosine.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_cosine.sort('rank')
```
From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor! We also see Hillary Clinton on the list. This list looks even more plausible as nearest neighbors of Barack Obama.
Let's make a plot to better visualize the effect of having used cosine distance in place of Euclidean on our TF-IDF vectors.
```
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.hist(nearest_neighbors_cosine['length'], 50, color='b', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (cosine)', zorder=11, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([1000, 5500, 0, 0.004])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
```
Indeed, the 100 nearest neighbors using cosine distance provide a sampling across the range of document lengths, rather than just short articles like Euclidean distance provided.
**Moral of the story**: In deciding the features and distance measures, check if they produce results that make sense for your particular application.
# Problem with cosine distances: tweets vs. long articles
Happily ever after? Not so fast. Cosine distances ignore all document lengths, which may be great in certain situations but not in others. For instance, consider the following (admittedly contrived) example.
```
+--------------------------------------------------------+
| +--------+ |
| One that shall not be named | Follow | |
| @username +--------+ |
| |
| Democratic governments control law in response to |
| popular act. |
| |
| 8:05 AM - 16 May 2016 |
| |
| Reply Retweet (1,332) Like (300) |
| |
+--------------------------------------------------------+
```
How similar is this tweet to Barack Obama's Wikipedia article? Let's transform the tweet into TF-IDF features, using an encoder fit to the Wikipedia dataset. (That is, let's treat this tweet as an article in our Wikipedia dataset and see what happens.)
```
sf = graphlab.SFrame({'text': ['democratic governments control law in response to popular act']})
sf['word_count'] = graphlab.text_analytics.count_words(sf['text'])
encoder = graphlab.feature_engineering.TFIDF(features=['word_count'], output_column_prefix='tf_idf')
encoder.fit(wiki)
sf = encoder.transform(sf)
sf
```
Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences.
```
tweet_tf_idf = sf[0]['tf_idf.word_count']
tweet_tf_idf
```
Now, compute the cosine distance between the Barack Obama article and this tweet:
```
obama = wiki[wiki['name'] == 'Barack Obama']
obama_tf_idf = obama[0]['tf_idf']
graphlab.toolkits.distances.cosine(obama_tf_idf, tweet_tf_idf)
```
Let's compare this distance to the distance between the Barack Obama article and all of its Wikipedia 10 nearest neighbors:
```
model2_tf_idf.query(obama, label='name', k=10)
```
With cosine distances, the tweet is "nearer" to Barack Obama than everyone else, except for Joe Biden! This is probably not something we want. If someone is reading the Barack Obama Wikipedia page, would you want to recommend they read this tweet? Ignoring article lengths completely produced nonsensical results. In practice, it is common to enforce maximum or minimum document lengths. After all, when someone is reading a long article from _The Atlantic_, you wouldn't recommend a tweet to them.
Data Pre-processing
```
import pandas as pd
import math
import random
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import TensorDataset, DataLoader
from torch.optim.lr_scheduler import _LRScheduler
from torch.autograd import Variable
from datetime import datetime
from tqdm import tqdm
import sklearn
from copy import deepcopy
df = pd.read_csv("../aqi_city_data_v2_unrolled.csv")
DROP_ONEHOT = True
SEQ_LENGTH = 7
if DROP_ONEHOT:
INPUT_DIM = 10
else:
INPUT_DIM = 29
HIDDEN_DIM = 32
LAYER_DIM = 3
normalization_type = 'mean_std' # 'max', mean_std
def get_train_test_data(df):
# we only need the median features; drop min/max/count, location, and variance columns
for col in df.columns:
for x in ["min", "max", "count", "County", "past_week", "latitude", "longitude", "State", "variance"]:
if x in col:
df.drop([col], axis=1, inplace=True)
df["Population Staying at Home"] = df["Population Staying at Home"].apply(lambda x: x.replace(",", ""))
df["Population Not Staying at Home"] = df["Population Not Staying at Home"].apply(lambda x: x.replace(",", ""))
# Now we want 2 more features. Which day of week it is and which month it is.
# Both of these will be one-hot and hence we'll add 7+12 = 19 more columns.
# Getting month id is easy from the datetime column.
# For day of week, we'll use datetime library.
df['weekday'] = df['Date'].apply(lambda x: datetime.strptime(x, "%Y-%m-%d").weekday())
df['month'] = df['Date'].apply(lambda x: datetime.strptime(x, "%Y-%m-%d").month - 1)
# using one-hot on month and weekday
weekday_onehot = pd.get_dummies(df['weekday'])
weekday_onehot.columns = ["day_"+str(x) for x in weekday_onehot]
month_onehot = pd.get_dummies(df['month'])
month_onehot.columns = ["month_"+str(x) for x in month_onehot]
df.drop(['weekday', 'month'], axis=1, inplace=True)
df = df.join([weekday_onehot, month_onehot])
cities_list = list(set(df['City']))
city_df = {}
test_indices_of_cities = {}
train_set = {}
test_set = {}
TEST_SET_SIZE = 60
for city in cities_list:
city_df[city] = df[df['City'] == city].sort_values('Date').reset_index()
for col in city_df[city].columns:
if col in ["pm25_median", "o3_median", "so2_median", "no2_median", "pm10_median", "co_median"]:
continue
try:
_mean = np.nanmean(city_df[city][col])
if np.isnan(_mean) == True:
_mean = 0
city_df[city][col] = city_df[city][col].fillna(_mean)
except:
pass
random.seed(0)
test_index_start = random.randint(0, city_df[city].shape[0] - TEST_SET_SIZE)
test_indices_of_cities[city] = [test_index_start, test_index_start + TEST_SET_SIZE]
test_set[city] = city_df[city].iloc[test_index_start:test_index_start + TEST_SET_SIZE]
train_set[city] = city_df[city].drop(index=list(range(test_index_start, test_index_start + TEST_SET_SIZE)))
return train_set, test_set
train_set, test_set = get_train_test_data(df)
cities_list = list(train_set.keys())
all_train = pd.DataFrame()
for city in cities_list:
all_train = all_train.append(train_set[city], ignore_index=True)
all_test = pd.DataFrame({})
for city in test_set:
all_test = all_test.append(test_set[city], ignore_index=True)
concat_df = pd.concat([all_train,all_test],axis=0)
# ---------------------------------------------------------------------------- #
col_max = {}
col_mean = {}
col_mean2 = {}
col_std = {}
for city in cities_list:
col_mean[city] = {}
for col in train_set[city]:
if col in ["index", "Date", "City"]:
continue
train_set[city][col] = train_set[city][col].astype("float")
test_set[city][col] = test_set[city][col].astype("float")
if col in ["pm25_median", "o3_median", "so2_median", "no2_median", "pm10_median", "co_median"]:
_mean = np.nanmean(train_set[city][col])
if np.isnan(_mean) == True:
_mean = 0
col_mean[city][col] = _mean
train_set[city][col] = train_set[city][col].fillna(_mean)
if normalization_type == 'mean_std':
col_mean2[col] = np.nanmean(concat_df[col].astype("float"))
col_std[col] = np.nanstd(concat_df[col].astype("float"))
train_set[city][col] = (train_set[city][col] - col_mean2[col]) / (col_std[col] + 0.001)
test_set[city][col] = (test_set[city][col] - col_mean2[col]) / (col_std[col] + 0.001)
else:
col_max[col] = concat_df[col].astype("float").max()
train_set[city][col] = train_set[city][col] / (col_max[col] + 0.001)
test_set[city][col] = test_set[city][col] / (col_max[col] + 0.001)
if DROP_ONEHOT:
train_set[city].drop(train_set[city].columns[-19:], axis=1, inplace=True)
test_set[city].drop(test_set[city].columns[-19:], axis=1, inplace=True)
class CityDataP(torch.utils.data.Dataset):
def __init__(self, selected_column, split):
self.split = split
if split == "train":
self.dataset = train_set
else:
self.dataset = test_set
self.valid_city_idx = 0
self.valid_day_idx = 0
self.selected_column = selected_column
def __getitem__(self, idx):
if self.split != "train":
# getting all data out of the validation set
out, city = self.get_idx_data(idx)
else:
# getting data randomly for train split
city = random.choice(cities_list)
_df = self.dataset[city]
start_idx = random.randint(1,_df.shape[0]-SEQ_LENGTH)
out = _df.iloc[start_idx-1:start_idx+SEQ_LENGTH]
out = out.drop(['index', 'Date', 'City'], axis=1)
Y = pd.DataFrame({})
Y_true = pd.DataFrame({})
for col in out.columns:
if col == self.selected_column:
Y_true[col] = out[col]
Y[col] = out[col].fillna(col_mean[city][col])
if col in ["pm25_median", "pm10_median", "o3_median", "so2_median", "no2_median", "co_median"]:
out.drop([col], axis=1, inplace=True)
else:
out[col] = out[col].astype("float")
out = np.concatenate((np.array(out)[1:,:], np.array(Y)[:-1,:]), axis=1)
Y = np.array(Y)[1:]
Y_true = np.array(Y_true)[1:]
return out, Y, Y_true
def get_idx_data(self, idx):
city = cities_list[self.valid_city_idx]
_df = self.dataset[city]
out = _df.iloc[self.valid_day_idx:self.valid_day_idx+SEQ_LENGTH]
if self.valid_day_idx+SEQ_LENGTH >= _df.shape[0]:
self.valid_day_idx = 0
self.valid_city_idx += 1
else:
self.valid_day_idx += 1
return out, city
def __len__(self):
if self.split != "train":
return (61-SEQ_LENGTH)*len(cities_list)
return len(all_train) - (SEQ_LENGTH - 1)*len(cities_list)
class CityDataForecast(torch.utils.data.Dataset):
def __init__(self, selected_column, split):
self.split = split
if split == "train":
self.dataset = train_set
else:
self.dataset = test_set
self.valid_city_idx = 0
self.valid_day_idx = 0
self.selected_column = selected_column
def __getitem__(self, idx):
if self.split != "train":
# getting all data out of the validation set
out, city = self.get_idx_data(idx)
else:
# getting data randomly for train split
city = random.choice(cities_list)
_df = self.dataset[city]
start_idx = random.randint(1,_df.shape[0]-SEQ_LENGTH)
out = _df.iloc[start_idx-1:start_idx+SEQ_LENGTH]
out = out.drop(['index', 'Date', 'City'], axis=1)
Y = pd.DataFrame({})
Y_true = pd.DataFrame({})
for col in out.columns:
if col == self.selected_column:
Y_true[col] = out[col]
#print(out[col])
Y[col] = out[col].fillna(col_mean[city][col])
if col in ["pm25_median", "pm10_median", "o3_median", "so2_median", "no2_median", "co_median"]:
out.drop([col], axis=1, inplace=True)
else:
out[col] = out[col].astype("float")
out = np.concatenate((np.array(out)[1:,:], np.array(Y)[:-1,:]), axis=1)
Y = np.array(Y)[1:]
Y_true = np.array(Y_true)[1:]
return out, Y, Y_true
def get_idx_data(self, idx):
city = cities_list[self.valid_city_idx]
_df = self.dataset[city]
out = _df.iloc[self.valid_day_idx:self.valid_day_idx+SEQ_LENGTH]
if self.valid_day_idx+SEQ_LENGTH >= _df.shape[0]:
self.valid_day_idx = 0
self.valid_city_idx += 1
else:
self.valid_day_idx += 1
return out, city
def __len__(self):
if self.split != "train":
return (61-SEQ_LENGTH)*len(cities_list)
return len(all_train) - (SEQ_LENGTH - 1)*len(cities_list)
```
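The mean/std normalization applied in the loop above can be illustrated in isolation; note the `+ 0.001` epsilon guarding against zero-variance columns:

```python
import numpy as np

def normalize_mean_std(values):
    """z-score with the same 0.001 epsilon used in the preprocessing loop."""
    mean = np.nanmean(values)
    std = np.nanstd(values)
    return (values - mean) / (std + 0.001)

z = normalize_mean_std(np.array([10.0, 20.0, 30.0]))
# centered at zero, scaled by (std + epsilon)
```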
Loss
```
'''
Code by Mehran Maghoumi
link: https://github.com/Maghoumi/pytorch-softdtw-cuda/blob/master/soft_dtw_cuda.py
'''
import numpy as np
import torch
import torch.cuda
from numba import jit
from torch.autograd import Function
from numba import cuda
import math
# ----------------------------------------------------------------------------------------------------------------------
@cuda.jit
def compute_softdtw_cuda(D, gamma, bandwidth, max_i, max_j, n_passes, R):
"""
:param seq_len: The length of the sequence (both inputs are assumed to be of the same size)
:param n_passes: 2 * seq_len - 1 (The number of anti-diagonals)
"""
# Each block processes one pair of examples
b = cuda.blockIdx.x
# We have as many threads as seq_len, because the most number of threads we need
# is equal to the number of elements on the largest anti-diagonal
tid = cuda.threadIdx.x
# Compute I, J, the indices from [0, seq_len)
# The row index is always the same as tid
I = tid
inv_gamma = 1.0 / gamma
# Go over each anti-diagonal. Only process threads that fall on the current anti-diagonal
for p in range(n_passes):
# The index is actually 'p - tid' but need to force it in-bounds
J = max(0, min(p - tid, max_j - 1))
# For simplicity, we define i, j which start from 1 (offset from I, J)
i = I + 1
j = J + 1
# Only compute if element[i, j] is on the current anti-diagonal, and also is within bounds
if I + J == p and (I < max_i and J < max_j):
# Don't compute if outside bandwidth
if not (abs(i - j) > bandwidth > 0):
r0 = -R[b, i - 1, j - 1] * inv_gamma
r1 = -R[b, i - 1, j] * inv_gamma
r2 = -R[b, i, j - 1] * inv_gamma
rmax = max(max(r0, r1), r2)
rsum = math.exp(r0 - rmax) + math.exp(r1 - rmax) + math.exp(r2 - rmax)
softmin = -gamma * (math.log(rsum) + rmax)
R[b, i, j] = D[b, i - 1, j - 1] + softmin
# Wait for other threads in this block
cuda.syncthreads()
# ----------------------------------------------------------------------------------------------------------------------
@cuda.jit
def compute_softdtw_backward_cuda(D, R, inv_gamma, bandwidth, max_i, max_j, n_passes, E):
k = cuda.blockIdx.x
tid = cuda.threadIdx.x
# Indexing logic is the same as above, however, the anti-diagonal needs to
# progress backwards
I = tid
for p in range(n_passes):
# Reverse the order to make the loop go backward
rev_p = n_passes - p - 1
# convert tid to I, J, then i, j
J = max(0, min(rev_p - tid, max_j - 1))
i = I + 1
j = J + 1
# Only compute if element[i, j] is on the current anti-diagonal, and also is within bounds
if I + J == rev_p and (I < max_i and J < max_j):
if math.isinf(R[k, i, j]):
R[k, i, j] = -math.inf
# Don't compute if outside bandwidth
if not (abs(i - j) > bandwidth > 0):
a = math.exp((R[k, i + 1, j] - R[k, i, j] - D[k, i + 1, j]) * inv_gamma)
b = math.exp((R[k, i, j + 1] - R[k, i, j] - D[k, i, j + 1]) * inv_gamma)
c = math.exp((R[k, i + 1, j + 1] - R[k, i, j] - D[k, i + 1, j + 1]) * inv_gamma)
E[k, i, j] = E[k, i + 1, j] * a + E[k, i, j + 1] * b + E[k, i + 1, j + 1] * c
# Wait for other threads in this block
cuda.syncthreads()
# ----------------------------------------------------------------------------------------------------------------------
class _SoftDTWCUDA(Function):
"""
CUDA implementation is inspired by the diagonal one proposed in https://ieeexplore.ieee.org/document/8400444:
"Developing a pattern discovery method in time series data and its GPU acceleration"
"""
@staticmethod
def forward(ctx, D, gamma, bandwidth):
dev = D.device
dtype = D.dtype
gamma = torch.cuda.FloatTensor([gamma])
bandwidth = torch.cuda.FloatTensor([bandwidth])
B = D.shape[0]
N = D.shape[1]
M = D.shape[2]
threads_per_block = max(N, M)
n_passes = 2 * threads_per_block - 1
# Prepare the output array
R = torch.ones((B, N + 2, M + 2), device=dev, dtype=dtype) * math.inf
R[:, 0, 0] = 0
# Run the CUDA kernel.
# Set CUDA's grid size to be equal to the batch size (every CUDA block processes one sample pair)
# Set the CUDA block size to be equal to the length of the longer sequence (equal to the size of the largest diagonal)
compute_softdtw_cuda[B, threads_per_block](cuda.as_cuda_array(D.detach()),
gamma.item(), bandwidth.item(), N, M, n_passes,
cuda.as_cuda_array(R))
ctx.save_for_backward(D, R.clone(), gamma, bandwidth)
return R[:, -2, -2]
@staticmethod
def backward(ctx, grad_output):
dev = grad_output.device
dtype = grad_output.dtype
D, R, gamma, bandwidth = ctx.saved_tensors
B = D.shape[0]
N = D.shape[1]
M = D.shape[2]
threads_per_block = max(N, M)
n_passes = 2 * threads_per_block - 1
D_ = torch.zeros((B, N + 2, M + 2), dtype=dtype, device=dev)
D_[:, 1:N + 1, 1:M + 1] = D
R[:, :, -1] = -math.inf
R[:, -1, :] = -math.inf
R[:, -1, -1] = R[:, -2, -2]
E = torch.zeros((B, N + 2, M + 2), dtype=dtype, device=dev)
E[:, -1, -1] = 1
# Grid and block sizes are set same as done above for the forward() call
compute_softdtw_backward_cuda[B, threads_per_block](cuda.as_cuda_array(D_),
cuda.as_cuda_array(R),
1.0 / gamma.item(), bandwidth.item(), N, M, n_passes,
cuda.as_cuda_array(E))
E = E[:, 1:N + 1, 1:M + 1]
return grad_output.view(-1, 1, 1).expand_as(E) * E, None, None
@jit(nopython=True)
def compute_softdtw(D, gamma, bandwidth):
B = D.shape[0]
N = D.shape[1]
M = D.shape[2]
R = np.ones((B, N + 2, M + 2)) * np.inf
R[:, 0, 0] = 0
for b in range(B):
for j in range(1, M + 1):
for i in range(1, N + 1):
# Check the pruning condition
if 0 < bandwidth < np.abs(i - j):
continue
r0 = -R[b, i - 1, j - 1] / gamma
r1 = -R[b, i - 1, j] / gamma
r2 = -R[b, i, j - 1] / gamma
rmax = max(max(r0, r1), r2)
rsum = np.exp(r0 - rmax) + np.exp(r1 - rmax) + np.exp(r2 - rmax)
softmin = - gamma * (np.log(rsum) + rmax)
R[b, i, j] = D[b, i - 1, j - 1] + softmin
return R
# ----------------------------------------------------------------------------------------------------------------------
@jit(nopython=True)
def compute_softdtw_backward(D_, R, gamma, bandwidth):
B = D_.shape[0]
N = D_.shape[1]
M = D_.shape[2]
D = np.zeros((B, N + 2, M + 2))
E = np.zeros((B, N + 2, M + 2))
D[:, 1:N + 1, 1:M + 1] = D_
E[:, -1, -1] = 1
R[:, :, -1] = -np.inf
R[:, -1, :] = -np.inf
R[:, -1, -1] = R[:, -2, -2]
for k in range(B):
for j in range(M, 0, -1):
for i in range(N, 0, -1):
if np.isinf(R[k, i, j]):
R[k, i, j] = -np.inf
# Check the pruning condition
if 0 < bandwidth < np.abs(i - j):
continue
a0 = (R[k, i + 1, j] - R[k, i, j] - D[k, i + 1, j]) / gamma
b0 = (R[k, i, j + 1] - R[k, i, j] - D[k, i, j + 1]) / gamma
c0 = (R[k, i + 1, j + 1] - R[k, i, j] - D[k, i + 1, j + 1]) / gamma
a = np.exp(a0)
b = np.exp(b0)
c = np.exp(c0)
E[k, i, j] = E[k, i + 1, j] * a + E[k, i, j + 1] * b + E[k, i + 1, j + 1] * c
return E[:, 1:N + 1, 1:M + 1]
# ----------------------------------------------------------------------------------------------------------------------
class _SoftDTW(Function):
"""
CPU implementation based on https://github.com/Sleepwalking/pytorch-softdtw
"""
@staticmethod
def forward(ctx, D, gamma, bandwidth):
dev = D.device
dtype = D.dtype
gamma = torch.Tensor([gamma]).to(dev).type(dtype) # dtype fixed
bandwidth = torch.Tensor([bandwidth]).to(dev).type(dtype)
D_ = D.detach().cpu().numpy()
g_ = gamma.item()
b_ = bandwidth.item()
R = torch.Tensor(compute_softdtw(D_, g_, b_)).to(dev).type(dtype)
ctx.save_for_backward(D, R, gamma, bandwidth)
return R[:, -2, -2]
@staticmethod
def backward(ctx, grad_output):
dev = grad_output.device
dtype = grad_output.dtype
D, R, gamma, bandwidth = ctx.saved_tensors
D_ = D.detach().cpu().numpy()
R_ = R.detach().cpu().numpy()
g_ = gamma.item()
b_ = bandwidth.item()
E = torch.Tensor(compute_softdtw_backward(D_, R_, g_, b_)).to(dev).type(dtype)
return grad_output.view(-1, 1, 1).expand_as(E) * E, None, None
# ----------------------------------------------------------------------------------------------------------------------
class SoftDTW(torch.nn.Module):
"""
The soft DTW implementation that optionally supports CUDA
"""
def __init__(self, use_cuda, gamma=1.0, normalize=False, bandwidth=None, dist_func=None):
"""
Initializes a new instance using the supplied parameters
:param use_cuda: Flag indicating whether the CUDA implementation should be used
:param gamma: sDTW's gamma parameter
:param normalize: Flag indicating whether to perform normalization
(as discussed in https://github.com/mblondel/soft-dtw/issues/10#issuecomment-383564790)
:param bandwidth: Sakoe-Chiba bandwidth for pruning. Passing 'None' will disable pruning.
:param dist_func: Optional point-wise distance function to use. If 'None', then a default Euclidean distance function will be used.
"""
super(SoftDTW, self).__init__()
self.normalize = normalize
self.gamma = gamma
self.bandwidth = 0 if bandwidth is None else float(bandwidth)
self.use_cuda = use_cuda
# Set the distance function
if dist_func is not None:
self.dist_func = dist_func
else:
self.dist_func = SoftDTW._euclidean_dist_func
def _get_func_dtw(self, x, y):
"""
Checks the inputs and selects the proper implementation to use.
"""
bx, lx, dx = x.shape
by, ly, dy = y.shape
# Make sure the dimensions match
assert bx == by # Equal batch sizes
assert dx == dy # Equal feature dimensions
use_cuda = self.use_cuda
if use_cuda and (lx > 1024 or ly > 1024): # We should be able to spawn enough threads in CUDA
print("SoftDTW: Cannot use CUDA because the sequence length > 1024 (the maximum block size supported by CUDA)")
use_cuda = False
# Finally, return the correct function
return _SoftDTWCUDA.apply if use_cuda else _SoftDTW.apply
@staticmethod
def _euclidean_dist_func(x, y):
"""
Calculates the Euclidean distance between each element in x and y per timestep
"""
n = x.size(1)
m = y.size(1)
d = x.size(2)
x = x.unsqueeze(2).expand(-1, n, m, d)
y = y.unsqueeze(1).expand(-1, n, m, d)
return torch.pow(x - y, 2).sum(3)
def forward(self, X, Y):
"""
Compute the soft-DTW value between X and Y
:param X: One batch of examples, batch_size x seq_len x dims
:param Y: The other batch of examples, batch_size x seq_len x dims
:return: The computed results
"""
# Check the inputs and get the correct implementation
func_dtw = self._get_func_dtw(X, Y)
if self.normalize:
# Stack everything up and run
x = torch.cat([X, X, Y])
y = torch.cat([Y, X, Y])
D = self.dist_func(x, y)
out = func_dtw(D, self.gamma, self.bandwidth)
out_xy, out_xx, out_yy = torch.split(out, X.shape[0])
return out_xy - 1 / 2 * (out_xx + out_yy)
else:
D_xy = self.dist_func(X, Y)
return func_dtw(D_xy, self.gamma, self.bandwidth)
# ----------------------------------------------------------------------------------------------------------------------
def timed_run(a, b, sdtw):
"""
Runs a and b through sdtw, and times the forward and backward passes.
Assumes that a requires gradients.
:return: timing, forward result, backward result
"""
from timeit import default_timer as timer
# Forward pass
start = timer()
forward = sdtw(a, b)
end = timer()
t = end - start
grad_outputs = torch.ones_like(forward)
# Backward
start = timer()
grads = torch.autograd.grad(forward, a, grad_outputs=grad_outputs)[0]
end = timer()
# Total time
t += end - start
return t, forward, grads
# ----------------------------------------------------------------------------------------------------------------------
def profile(batch_size, seq_len_a, seq_len_b, dims, tol_backward):
sdtw = SoftDTW(False, gamma=1.0, normalize=False)
sdtw_cuda = SoftDTW(True, gamma=1.0, normalize=False)
n_iters = 6
print("Profiling forward() + backward() times for batch_size={}, seq_len_a={}, seq_len_b={}, dims={}...".format(batch_size, seq_len_a, seq_len_b, dims))
times_cpu = []
times_gpu = []
for i in range(n_iters):
a_cpu = torch.rand((batch_size, seq_len_a, dims), requires_grad=True)
b_cpu = torch.rand((batch_size, seq_len_b, dims))
a_gpu = a_cpu.cuda()
b_gpu = b_cpu.cuda()
# GPU
t_gpu, forward_gpu, backward_gpu = timed_run(a_gpu, b_gpu, sdtw_cuda)
# CPU
t_cpu, forward_cpu, backward_cpu = timed_run(a_cpu, b_cpu, sdtw)
# Verify the results
assert torch.allclose(forward_cpu, forward_gpu.cpu())
assert torch.allclose(backward_cpu, backward_gpu.cpu(), atol=tol_backward)
if i > 0: # Ignore the first time we run, in case this is a cold start (because timings are off at a cold start of the script)
times_cpu += [t_cpu]
times_gpu += [t_gpu]
# Average and log
avg_cpu = np.mean(times_cpu)
avg_gpu = np.mean(times_gpu)
print("\tCPU: ", avg_cpu)
print("\tGPU: ", avg_gpu)
print("\tSpeedup: ", avg_cpu / avg_gpu)
print()
```
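The CUDA and Numba kernels above both implement the same soft-min recursion, R[i,j] = D[i-1,j-1] + softmin(R[i-1,j-1], R[i-1,j], R[i,j-1]). A compact pure-NumPy version on toy 1-D sequences (a readability sketch, not a replacement for the batched kernels) makes the recursion easy to inspect; as `gamma` shrinks, the soft-min approaches a hard min and soft-DTW approaches classic DTW:

```python
import numpy as np

def soft_dtw(x, y, gamma):
    """Soft-DTW between two 1-D sequences with squared point-wise cost."""
    n, m = len(x), len(y)
    D = (x[:, None] - y[None, :]) ** 2            # pairwise cost matrix
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # log-sum-exp form of the soft-min, as in the kernels above
            r = -np.array([R[i - 1, j - 1], R[i - 1, j], R[i, j - 1]]) / gamma
            rmax = r.max()
            softmin = -gamma * (np.log(np.exp(r - rmax).sum()) + rmax)
            R[i, j] = D[i - 1, j - 1] + softmin
    return R[n, m]

a = np.array([0.0, 1.0, 2.0])
# Identical sequences: the alignment cost tends to 0 as gamma -> 0.
```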
Models
```
# all imports here
import math
import random
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import TensorDataset, DataLoader
from torch.optim.lr_scheduler import _LRScheduler
from torch.autograd import Variable
from datetime import datetime
from tqdm import tqdm
import sklearn
from copy import deepcopy
class LSTM(nn.Module):
def __init__(self, num_classes, input_size, hidden_size, num_layers, bidirectional = False):
super(LSTM, self).__init__()
self.num_classes = num_classes
self.num_layers = num_layers
self.input_size = input_size
self.hidden_size = hidden_size
self.seq_length = SEQ_LENGTH
self.bidrectional = bidirectional
self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size,
num_layers=num_layers, batch_first=True, bidirectional = bidirectional)
self.fc = nn.Linear(hidden_size, num_classes)
def forward(self, x):
h_0 = Variable(torch.zeros(
self.num_layers, x.size(0), self.hidden_size)).cuda()
c_0 = Variable(torch.zeros(
self.num_layers, x.size(0), self.hidden_size)).cuda()
# Propagate input through LSTM
ula, (h_out, _) = self.lstm(x, (h_0, c_0))
#h_out = h_out.view(-1, self.hidden_size)
out = self.fc(ula)
return out
import torch.nn as nn
import math
device = 'cuda'
class MultiHeadAttention(nn.Module):
'''Multi-head self-attention module'''
def __init__(self, D, H):
super(MultiHeadAttention, self).__init__()
self.H = H # number of heads
self.D = D # dimension
self.wq = nn.Linear(D, D*H)
self.wk = nn.Linear(D, D*H)
self.wv = nn.Linear(D, D*H)
self.dense = nn.Linear(D*H, D)
def concat_heads(self, x):
'''(B, H, S, D) => (B, S, D*H)'''
B, H, S, D = x.shape
x = x.permute((0, 2, 1, 3)).contiguous() # (B, S, H, D)
x = x.reshape((B, S, H*D)) # (B, S, D*H)
return x
def split_heads(self, x):
'''(B, S, D*H) => (B, H, S, D)'''
B, S, D_H = x.shape
x = x.reshape(B, S, self.H, self.D) # (B, S, H, D)
x = x.permute((0, 2, 1, 3)) # (B, H, S, D)
return x
def forward(self, x, mask):
q = self.wq(x) # (B, S, D*H)
k = self.wk(x) # (B, S, D*H)
v = self.wv(x) # (B, S, D*H)
q = self.split_heads(q) # (B, H, S, D)
k = self.split_heads(k) # (B, H, S, D)
v = self.split_heads(v) # (B, H, S, D)
attention_scores = torch.matmul(q, k.transpose(-1, -2)) #(B,H,S,S)
attention_scores = attention_scores / math.sqrt(self.D)
# add the mask to the scaled tensor.
if mask is not None:
attention_scores += (mask * -1e9)
attention_weights = nn.Softmax(dim=-1)(attention_scores)
scaled_attention = torch.matmul(attention_weights, v) # (B, H, S, D)
concat_attention = self.concat_heads(scaled_attention) # (B, S, D*H)
output = self.dense(concat_attention) # (B, S, D)
return output, attention_weights
class MultiHeadAttentionCosformerNew(nn.Module):
'''Multi-head self-attention module'''
def __init__(self, D, H):
super(MultiHeadAttentionCosformerNew, self).__init__()
self.H = H # number of heads
self.D = D # dimension
self.wq = nn.Linear(D, D*H)
self.wk = nn.Linear(D, D*H)
self.wv = nn.Linear(D, D*H)
self.dense = nn.Linear(D*H, D)
def concat_heads(self, x):
'''(B, H, S, D) => (B, S, D*H)'''
B, H, S, D = x.shape
x = x.permute((0, 2, 1, 3)).contiguous() # (B, S, H, D)
x = x.reshape((B, S, H*D)) # (B, S, D*H)
return x
def split_heads(self, x):
'''(B, S, D*H) => (B, H, S, D)'''
B, S, D_H = x.shape
x = x.reshape(B, S, self.H, self.D) # (B, S, H, D)
x = x.permute((0, 2, 1, 3)) # (B, H, S, D)
return x
def forward(self, x, mask):
q = self.wq(x) # (B, S, D*H)
k = self.wk(x) # (B, S, D*H)
v = self.wv(x) # (B, S, D*H)
q = self.split_heads(q).permute(0,2,1,3) # (B, S, H, D)
k = self.split_heads(k).permute(0,2,1,3) # (B, S, H, D)
v = self.split_heads(v).permute(0,2,1,3) # (B, S, H, D)
B = q.shape[0]
S = q.shape[1]
q = torch.nn.functional.elu(q) + 1 # Sigmoid torch.nn.ReLU()
k = torch.nn.functional.elu(k) + 1 # Sigmoid torch.nn.ReLU()
# q, k, v -> [batch_size, seq_len, n_heads, d_head]
cos = (torch.cos(1.57*torch.arange(S)/S).unsqueeze(0)).repeat(B,1).cuda()
sin = (torch.sin(1.57*torch.arange(S)/S).unsqueeze(0)).repeat(B,1).cuda()
# cos, sin -> [batch_size, seq_len]
q_cos = torch.einsum('bsnd,bs->bsnd', q, cos)
q_sin = torch.einsum('bsnd,bs->bsnd', q, sin)
k_cos = torch.einsum('bsnd,bs->bsnd', k, cos)
k_sin = torch.einsum('bsnd,bs->bsnd', k, sin)
# q_cos, q_sin, k_cos, k_sin -> [batch_size, seq_len, n_heads, d_head]
kv_cos = torch.einsum('bsnx,bsnz->bnxz', k_cos, v)
# kv_cos -> [batch_size, n_heads, d_head, d_head]
qkv_cos = torch.einsum('bsnx,bnxz->bsnz', q_cos, kv_cos)
# qkv_cos -> [batch_size, seq_len, n_heads, d_head]
kv_sin = torch.einsum('bsnx,bsnz->bnxz', k_sin, v)
# kv_sin -> [batch_size, n_heads, d_head, d_head]
qkv_sin = torch.einsum('bsnx,bnxz->bsnz', q_sin, kv_sin)
# qkv_sin -> [batch_size, seq_len, n_heads, d_head]
# denominator
denominator = 1.0 / (torch.einsum('bsnd,bnd->bsn', q_cos, k_cos.sum(axis=1))
+ torch.einsum('bsnd,bnd->bsn',
q_sin, k_sin.sum(axis=1))
+ 1e-5)
# denominator -> [batch_size, seq_len, n_heads]
O = torch.einsum('bsnz,bsn->bsnz', qkv_cos +
qkv_sin, denominator).contiguous()
# output -> [batch_size, seq_len, n_heads, d_head]
concat_attention = self.concat_heads(O.permute(0,2,1,3)) # (B, S, D*H)
output = self.dense(concat_attention) # (B, S, D)
return output, None
class MultiHeadAttentionCosSquareformerNew(nn.Module):
'''Multi-head self-attention module'''
def __init__(self, D, H):
super(MultiHeadAttentionCosSquareformerNew, self).__init__()
self.H = H # number of heads
self.D = D # dimension
self.wq = nn.Linear(D, D*H)
self.wk = nn.Linear(D, D*H)
self.wv = nn.Linear(D, D*H)
self.dense = nn.Linear(D*H, D)
def concat_heads(self, x):
'''(B, H, S, D) => (B, S, D*H)'''
B, H, S, D = x.shape
x = x.permute((0, 2, 1, 3)).contiguous() # (B, S, H, D)
x = x.reshape((B, S, H*D)) # (B, S, D*H)
return x
def split_heads(self, x):
'''(B, S, D*H) => (B, H, S, D)'''
B, S, D_H = x.shape
x = x.reshape(B, S, self.H, self.D) # (B, S, H, D)
x = x.permute((0, 2, 1, 3)) # (B, H, S, D)
return x
def forward(self, x, mask):
q = self.wq(x) # (B, S, D*H)
k = self.wk(x) # (B, S, D*H)
v = self.wv(x) # (B, S, D*H)
q = self.split_heads(q).permute(0,2,1,3) # (B, S, H, D)
k = self.split_heads(k).permute(0,2,1,3) # (B, S, H, D)
v = self.split_heads(v).permute(0,2,1,3) # (B, S, H, D)
B = q.shape[0]
S = q.shape[1]
q = torch.nn.functional.elu(q) + 1 # Sigmoid torch.nn.ReLU()
k = torch.nn.functional.elu(k) + 1 # Sigmoid torch.nn.ReLU()
# q, k, v -> [batch_size, seq_len, n_heads, d_head]
cos = (torch.cos(3.1415*torch.arange(S)/S).unsqueeze(0)).repeat(B,1).cuda()
sin = (torch.sin(3.1415*torch.arange(S)/S).unsqueeze(0)).repeat(B,1).cuda()
# cos, sin -> [batch_size, seq_len]
q_cos = torch.einsum('bsnd,bs->bsnd', q, cos)
q_sin = torch.einsum('bsnd,bs->bsnd', q, sin)
k_cos = torch.einsum('bsnd,bs->bsnd', k, cos)
k_sin = torch.einsum('bsnd,bs->bsnd', k, sin)
# q_cos, q_sin, k_cos, k_sin -> [batch_size, seq_len, n_heads, d_head]
kv_cos = torch.einsum('bsnx,bsnz->bnxz', k_cos, v)
# kv_cos -> [batch_size, n_heads, d_head, d_head]
qkv_cos = torch.einsum('bsnx,bnxz->bsnz', q_cos, kv_cos)
# qkv_cos -> [batch_size, seq_len, n_heads, d_head]
kv_sin = torch.einsum('bsnx,bsnz->bnxz', k_sin, v)
# kv_sin -> [batch_size, n_heads, d_head, d_head]
qkv_sin = torch.einsum('bsnx,bnxz->bsnz', q_sin, kv_sin)
# qkv_sin -> [batch_size, seq_len, n_heads, d_head]
kv = torch.einsum('bsnx,bsnz->bnxz', k, v)
# kv -> [batch_size, n_heads, d_head, d_head]
qkv = torch.einsum('bsnx,bnxz->bsnz', q, kv)
# qkv_cos -> [batch_size, seq_len, n_heads, d_head]
# denominator
denominator = 1.0 / (torch.einsum('bsnd,bnd->bsn', q, k.sum(axis=1)) + torch.einsum('bsnd,bnd->bsn', q_cos, k_cos.sum(axis=1))
+ torch.einsum('bsnd,bnd->bsn',
q_sin, k_sin.sum(axis=1))
+ 1e-5)
# denominator -> [batch_size, seq_len, n_heads]
O = torch.einsum('bsnz,bsn->bsnz', qkv + qkv_cos +
qkv_sin, denominator).contiguous()
# output -> [batch_size, seq_len, n_heads, d_head]
concat_attention = self.concat_heads(O.permute(0,2,1,3)) # (B, S, D*H)
output = self.dense(concat_attention) # (B, S, D)
return output, None
# Positional encodings
def get_angles(pos, i, D):
angle_rates = 1 / np.power(10000, (2 * (i // 2)) / np.float32(D))
return pos * angle_rates
def positional_encoding(D, position=20, dim=3, device=device):
angle_rads = get_angles(np.arange(position)[:, np.newaxis],
np.arange(D)[np.newaxis, :],
D)
# apply sin to even indices in the array; 2i
angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
# apply cos to odd indices in the array; 2i+1
angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
if dim == 3:
pos_encoding = angle_rads[np.newaxis, ...]
elif dim == 4:
pos_encoding = angle_rads[np.newaxis,np.newaxis, ...]
return torch.tensor(pos_encoding, device=device)
class TransformerLayer(nn.Module):
def __init__(self, D, H, hidden_mlp_dim, dropout_rate, attention_type='cosine_square'):
super(TransformerLayer, self).__init__()
self.dropout_rate = dropout_rate
self.mlp_hidden = nn.Linear(D, hidden_mlp_dim)
self.mlp_out = nn.Linear(hidden_mlp_dim, D)
self.layernorm1 = nn.LayerNorm(D, eps=1e-9)
self.layernorm2 = nn.LayerNorm(D, eps=1e-9)
self.dropout1 = nn.Dropout(dropout_rate)
self.dropout2 = nn.Dropout(dropout_rate)
if attention_type == 'cosine':
self.mha = MultiHeadAttentionCosformerNew(D, H)
elif attention_type == 'cosine_square':
self.mha = MultiHeadAttentionCosSquareformerNew(D, H)
else:
self.mha = MultiHeadAttention(D,H)
def forward(self, x, look_ahead_mask):
attn, attn_weights = self.mha(x, look_ahead_mask) # (B, S, D)
attn = self.dropout1(attn) # (B,S,D)
attn = self.layernorm1(attn + x) # (B,S,D)
mlp_act = torch.relu(self.mlp_hidden(attn))
mlp_act = self.mlp_out(mlp_act)
mlp_act = self.dropout2(mlp_act)
output = self.layernorm2(mlp_act + attn) # (B, S, D)
return output, attn_weights
class Transformer(nn.Module):
'''Transformer decoder implementing several decoder layers.
'''
def __init__(self, num_layers, D, H, hidden_mlp_dim, inp_features, out_features, dropout_rate, attention_type='cosine_square'):
super(Transformer, self).__init__()
self.attention_type = attention_type
self.sqrt_D = torch.tensor(math.sqrt(D))
self.num_layers = num_layers
self.input_projection = nn.Linear(inp_features, D) # multivariate input
self.output_projection = nn.Linear(D, out_features) # multivariate output
self.pos_encoding = positional_encoding(D)
self.dec_layers = nn.ModuleList([TransformerLayer(D, H, hidden_mlp_dim,
dropout_rate=dropout_rate, attention_type=self.attention_type
) for _ in range(num_layers)])
self.dropout = nn.Dropout(dropout_rate)
def forward(self, x, mask):
B, S, D = x.shape
# attention_weights = {}
x = self.input_projection(x)
x *= self.sqrt_D
x += self.pos_encoding[:, :S, :]
x = self.dropout(x)
for i in range(self.num_layers):
x, _ = self.dec_layers[i](x=x,
look_ahead_mask=mask)
# attention_weights['decoder_layer{}'.format(i + 1)] = block
x = self.output_projection(x)
return x, None # attention_weights # (B,S,S)
class TransLSTM(nn.Module):
'''Transformer decoder combined with an LSTM branch; their outputs are concatenated and projected.
'''
def __init__(self, num_layers, D, H, hidden_mlp_dim, inp_features, out_features, dropout_rate, LSTM_module, attention_type='regular'):
super(TransLSTM, self).__init__()
self.attention_type = attention_type
self.sqrt_D = torch.tensor(math.sqrt(D))
self.num_layers = num_layers
self.input_projection = nn.Linear(inp_features, D) # multivariate input
self.output_projection = nn.Linear(D, 4) # multivariate output
self.fc = nn.Linear(4*2, out_features)
self.pos_encoding = positional_encoding(D)
self.dec_layers = nn.ModuleList([TransformerLayer(D, H, hidden_mlp_dim,
dropout_rate=dropout_rate, attention_type=self.attention_type
) for _ in range(num_layers)])
self.dropout = nn.Dropout(dropout_rate)
self.LSTM = LSTM_module
def forward(self, x, mask):
x_l = self.LSTM(x)
B, S, D = x.shape
attention_weights = {}
x = self.input_projection(x)
x *= self.sqrt_D
x += self.pos_encoding[:, :S, :]
x = self.dropout(x)
for i in range(self.num_layers):
x, block = self.dec_layers[i](x=x,
look_ahead_mask=mask)
attention_weights['decoder_layer{}'.format(i + 1)] = block
x = self.output_projection(x)
x = torch.cat((x,x_l),axis=2)
x = self.fc(x)
return x, attention_weights # (B,S,S)
```
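`MultiHeadAttentionCosformerNew` above avoids materializing the S×S score matrix by exploiting cos(a−b) = cos a·cos b + sin a·sin b: a score reweighted by the cosine of the position difference factors into cos- and sin-reweighted copies of q and k, which is what lets the module contract k with v first in linear time. A small NumPy check of that underlying identity (hypothetical toy shapes and random values; the module additionally applies elu+1 feature maps, omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
S, D = 5, 3
q = rng.random((S, D))
k = rng.random((S, D))

ang = 1.57 * np.arange(S) / S                  # same angles as the module above
cos, sin = np.cos(ang), np.sin(ang)

# Quadratic form: raw scores reweighted by cos of the position difference.
direct = (q @ k.T) * np.cos(ang[:, None] - ang[None, :])

# Factored form: cos- and sin-reweighted q and k, combined additively.
factored = (q * cos[:, None]) @ (k * cos[:, None]).T \
         + (q * sin[:, None]) @ (k * sin[:, None]).T
```

Because the factored form never needs the (i, j) score matrix explicitly, attention cost drops from O(S²) to O(S·D²) once k is contracted with v first.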
Main
```
# all imports here
import math
import random
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import TensorDataset, DataLoader
from torch.optim.lr_scheduler import _LRScheduler
from torch.autograd import Variable
from datetime import datetime
from tqdm import tqdm
import sklearn
from copy import deepcopy
import warnings
warnings.filterwarnings('ignore')
# Function that implements the look-ahead mask used to hide future time steps.
def create_look_ahead_mask(size, device=device):
mask = torch.ones((size, size), device=device)
mask = torch.triu(mask, diagonal=1)
return mask # (size, size)
if __name__ == '__main__':
dtw_loss = SoftDTW(use_cuda=True, gamma=0.1)
lmbda = 0.5
for SELECTED_COLUMN in ["pm25_median"]: # ["pm25_median", "so2_median", "pm10_median", "no2_median", "o3_median", "co_median", "so2_median"]:
train_data = CityDataP(SELECTED_COLUMN, "train")
val_data = CityDataP(SELECTED_COLUMN, "test")
sampleLoader = DataLoader(train_data, 32, shuffle=True, num_workers=4)
val_loader = DataLoader(val_data, 4096, shuffle=False, num_workers=4)
lr = 0.001
n_epochs = 10
criterion = nn.MSELoss()
model = Transformer(num_layers=6, D=16, H=10, hidden_mlp_dim=32, inp_features=11, out_features=1, dropout_rate=0.1, attention_type='regular').to(device) # cosine_square, cosine, regular # 6L, 12H
# model = TransLSTM(num_layers=3, D=16, H=5, hidden_mlp_dim=32, inp_features=11, out_features=1, dropout_rate=0.2, LSTM_module = LSTM(4, INPUT_DIM+1, HIDDEN_DIM, LAYER_DIM, bidirectional = False).to(device), attention_type='regular').to(device) # cosine_square, cosine, regular # 6L, 12H
# model = LSTM(1, INPUT_DIM+1, HIDDEN_DIM, LAYER_DIM).cuda()
opt = torch.optim.Adam(model.parameters(), lr=lr)
print('Start model training')
best_mse = 2000.0
best_model = None
for epoch in range(1, n_epochs + 1):
epoch_loss = 0
batch_idx = 0
bar = tqdm(sampleLoader)
model.train()
for x_batch, y_batch, _ in bar:
model.train()
x_batch = x_batch.cuda().float()
y_batch = y_batch.cuda().float()
mask = create_look_ahead_mask(x_batch.shape[1])
out, _ = model(x_batch, mask)
opt.zero_grad()
loss = criterion(out[:,-1,:], y_batch[:,-1,:]) + lmbda * dtw_loss(out.cuda(),y_batch.cuda()).mean()
epoch_loss = (epoch_loss*batch_idx + loss.item())/(batch_idx+1)
loss.backward()
opt.step()
bar.set_description(str(epoch_loss))
batch_idx += 1
# Evaluation
model.eval()
mse_list = []
total_se = 0.0
total_pe = 0.0
total_valid = 0.0
for x_val, _, y_val in val_loader:
x_val, y_val = [t.cuda().float() for t in (x_val, y_val)]
mask = create_look_ahead_mask(x_val.shape[1])
out, _ = model(x_val, mask)
ytrue = y_val[:,-1,:].squeeze().cpu().numpy()
ypred = out[:,-1,:].squeeze().cpu().detach().numpy()
true_valid = ~np.isnan(ytrue)
ytrue = ytrue[true_valid] #np.nan_to_num(ytrue, 0)
ypred = ypred[true_valid]
if normalization_type == 'mean_std':
ytrue = (ytrue * col_std[SELECTED_COLUMN]) + col_mean2[SELECTED_COLUMN]
ypred = (ypred * col_std[SELECTED_COLUMN]) + col_mean2[SELECTED_COLUMN]
else:
ytrue = (ytrue * col_max[SELECTED_COLUMN])
ypred = (ypred * col_max[SELECTED_COLUMN])
se = (ytrue - ypred)**2 # np.square(ytrue - ypred)
pe = np.abs((ytrue - ypred) / (ytrue + 0.0001))
total_se += np.sum(se)
total_pe += np.sum(pe)
total_valid += np.sum(true_valid)
eval_mse = total_se / total_valid # np.mean(se) #
eval_mape = total_pe / total_valid # np.mean(pe) #
print('valid samples:', total_valid)
print('Eval MSE: ', eval_mse)
print('Eval RMSE: {}: '.format(SELECTED_COLUMN), np.sqrt(eval_mse))
print('Eval MAPE: {}: '.format(SELECTED_COLUMN), eval_mape*100)
if eval_mse < best_mse:
best_model = deepcopy(model)
best_mse = eval_mse
```
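The evaluation loop above accumulates squared error and percentage error only over non-NaN targets, then divides by the valid count. A standalone sketch of that masked aggregation, with hypothetical arrays:

```python
import numpy as np

def masked_mse_mape(y_true, y_pred, eps=1e-4):
    """MSE and MAPE computed only where the target is observed (non-NaN)."""
    valid = ~np.isnan(y_true)
    t, p = y_true[valid], y_pred[valid]
    mse = np.mean((t - p) ** 2)
    mape = np.mean(np.abs((t - p) / (t + eps)))
    return mse, mape

y_true = np.array([1.0, np.nan, 2.0, 4.0])
y_pred = np.array([1.0, 9.9, 2.0, 2.0])
mse, mape = masked_mse_mape(y_true, y_pred)
```

The `eps` term plays the same role as the `+ 0.0001` in the loop above, guarding against division by zero when a true value is 0.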
## BEATLEX: Summarizing and Forecasting Time Series with Patterns
### Abstract
Given time-series data such as electrocardiogram (ECG) readings, or motion capture data, how can we succinctly summarize the data in a way that robustly identifies patterns that appear repeatedly? How can we then use such a summary to identify anomalies such as abnormal heartbeats, and also forecast future values of the time series? Our main idea is a vocabulary-based approach, which automatically learns a set of common patterns, or ‘beat patterns,’ which are used as building blocks to describe the time series in an intuitive and interpretable way. Our summarization algorithm, BEATLEX (BEAT LEXicons for Summarization) is: 1) fast and online, requiring linear time in the data size and bounded memory; 2) effective, outperforming competing algorithms in labelling accuracy by 5.3 times, and forecasting accuracy by 1.8 times; 3) principled and parameter-free, as it is based on the Minimum Description Length principle of summarizing the data by compressing it using as few bits as possible, and automatically tunes all its parameters; 4) general: it applies to any domain of time series data, and can make use of multidimensional (i.e. coevolving) time series.
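The Minimum Description Length trade-off in the abstract can be illustrated with a toy cost model (all numbers hypothetical, not BEATLEX's actual encoding): a larger vocabulary costs more bits to store but leaves cheaper residuals per segment, and the best vocabulary size minimizes the total:

```python
# Toy MDL accounting: total bits = vocabulary bits + residual bits over segments.
def total_cost(vocab_size, n_segments, bits_per_pattern=64):
    vocab_bits = vocab_size * bits_per_pattern
    residual_bits_per_segment = 100 // vocab_size   # toy: residuals shrink as vocab grows
    return vocab_bits + n_segments * residual_bits_per_segment

costs = {v: total_cost(v, n_segments=50) for v in (1, 2, 4, 8, 16)}
best_vocab = min(costs, key=costs.get)
```

Because the criterion is a total bit count, no accuracy threshold or vocabulary-size parameter needs to be hand-tuned, which is what makes the approach parameter-free.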
You can configure the backend to use either GPU or CPU. \
The default backend is CPU.
```
import sys
sys.path.append('..')
import spartan as st
```
The ```loadTensor``` function reads data from a file, and the ```toDTensor``` function extracts time and value columns separately from the tensor.<br/>The ```Timeseries``` class is designed to construct a time tensor.
```
time, value = st.loadTensor(path = "inputData/example_time.tensor", col_types = [float, float, float]).toDTensor(hastticks=True)
time_series = st.Timeseries(value, time)
st.plot_timeseries(time_series.cut(0, 4000))
```
### Run Beatlex from specific task
```
ss_task = st.Summarization.create(time_series, st.SumPolicy.BeatLex, 'my_beatlex_model')
result = ss_task.run()
```
### Run Beatlex as a single model
```
beatlex = st.BeatLex(time_series)
result = beatlex.run()
st.plot(st.BeatLex, time_series, result)
```
Vocabularies | Segmentation
:-------------------------:|:-------------------------:
<img src="images/beatlexSum1.png" width="300"/> | <img src="images/beatlexSum2.png" width="300"/>
<b>Vocabularies learned by BeatLex. | <b>Segmentation made by BeatLex.
### Experiment Results
------
Beatlex(ECG) | Beatlex(Motion)
:-------------------------:|:-------------------------:
<img src="images/beatlexExp1.png" width="300"/> | <img src="images/beatlexExp2.png" width="300"/>
<b>Beatlex segments and labels data. | <b>Beatlex learns vocabulary.
### Cite:
------
1. Hooi, Bryan, et al. "BeatLex: Summarizing and Forecasting Time Series with Patterns." Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, Cham, 2017.
<details>
<summary><span style="color:blue">click for BibTex...</span></summary>
```bibtex
@inproceedings{hooi2017b,
title={BeatLex: Summarizing and Forecasting Time Series with Patterns},
author={Hooi, Bryan and Liu, Shenghua and Smailagic, Asim and Faloutsos, Christos},
booktitle={Joint European Conference on Machine Learning and Knowledge Discovery in Databases},
pages={3--19},
year={2017},
organization={Springer}
}
```
</details>
# Azure Cosmos DB Live TV
```
## Imports, Client Initialization, and Object Creation
# Not needed when the notebook environment already provides `cosmos_client`:
#from azure.cosmos import CosmosClient
#import os
#url = os.environ['ACCOUNT_URI']
#key = os.environ['ACCOUNT_KEY']
#cosmos_client = CosmosClient(url, credential=key)
db_name = 'AzureCosmosDBLiveTVTestDB'
database_client = cosmos_client.create_database_if_not_exists(db_name)
print('Database with id \'{0}\' created'.format(db_name))
#Creating a container with analytical store
from azure.cosmos.partition_key import PartitionKey
container_name = "AzureTVData"
partition_key_value = "/id"
offer = 400
container_client = database_client.create_container_if_not_exists(
id=container_name,
partition_key=PartitionKey(path=partition_key_value),
offer_throughput=offer,
analytical_storage_ttl=-1)
print('Container with id \'{0}\' created'.format(container_name))
#Properties
import json  # needed for json.dumps below
properties = database_client.read()
print(json.dumps(properties))
print(" ")
print(" ")
properties = container_client.read()
print(json.dumps(properties))
#DB Offer
# Dedicated throughput only. Will return error "offer not found" for objects without dedicated throughput.
# Database
db_offer = database_client.read_offer()
print('Found Offer \'{0}\' for Database \'{1}\' and its throughput is \'{2}\''.format(db_offer.properties['id'], db_name, db_offer.properties['content']['offerThroughput']))
#Container Offer
# Dedicated throughput only. Will return error "offer not found" for Objects without dedicated throughput
container_offer = container_client.read_offer()
print('Found Offer \'{0}\' for Container \'{1}\' and its throughput is \'{2}\''.format(container_offer.properties['id'], container_name, container_offer.properties['content']['offerThroughput']))
#Query 1
# On a partitioned container this errors unless cross-partition queries are enabled.
import json
db = cosmos_client.get_database_client('TestDB')
container = db.get_container_client('Families')
for item in container.query_items(query='SELECT * FROM Families', enable_cross_partition_query=True):
    print(json.dumps(item, indent=True))
#Query 2
import json
db = cosmos_client.get_database_client('TestDB')
container = db.get_container_client ('Families')
for item in container.query_items(query='SELECT * FROM Families.children', enable_cross_partition_query=True):
print(json.dumps(item, indent=True))
# Boolean Test
for i in range(1, 10):
    container_client.upsert_item({
        'id': 'item{0}'.format(i),
        'productName': 'Widget',
        'productModel': 'Model {0}'.format(i),
        'isEnabled': True
    })
import json
for item in container_client.query_items(
        query='SELECT * FROM AzureTVData',
        enable_cross_partition_query=True):
    print(json.dumps(item, indent=True))
for item in container_client.query_items(
        query='SELECT * FROM AzureTVData',
        enable_cross_partition_query=True):
    print(item)
```
# Approximations and Rounding Errors
_Prof. Dr. Tito Dias Júnior_
## **Rounding Errors**
### Machine Epsilon
```
# Compute the machine epsilon
epsilon = 1
while (epsilon + 1) > 1:
    epsilon = epsilon / 2
epsilon = 2 * epsilon
print(epsilon)
```
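As a cross-check of the loop above, NumPy exposes the same constant directly as `np.finfo(float).eps`; this snippet repeats the loop so it runs on its own:

```python
import numpy as np

# The halving loop converges to 2**-52 for IEEE-754 double precision
epsilon = 1.0
while (epsilon + 1.0) > 1.0:
    epsilon = epsilon / 2.0
epsilon = 2.0 * epsilon

print(epsilon)               # 2.220446049250313e-16
print(np.finfo(float).eps)   # 2.220446049250313e-16
```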
Approximating a function with a Taylor series
```
import numpy as np
import matplotlib.pyplot as plt
def f(x):
    return -0.1*x**4 -0.15*x**3 -0.5*x**2 -0.25*x +1.2
def df(x):
    return -0.4*x**3 -0.45*x**2 -1.0*x -0.25
def ddf(x):
    return -1.2*x**2 -0.9*x -1.0
def dddf(x):
    return -2.4*x -0.9
def d4f(x):
    return -2.4
x1 = 0
x2 = 1
# Zero-order approximation
fO_0 = f(x1) # predicted value
erroO_0 = f(x2) - fO_0 # exact value minus predicted value
# First-order approximation
fO_1 = f(x1) + df(x1)*(x2-x1) # predicted value
erroO_1 = f(x2) - fO_1 # exact value minus predicted value
# Second-order approximation
fO_2 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 # predicted value
erroO_2 = f(x2) - fO_2 # exact value minus predicted value
# Third-order approximation
fO_3 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 + (dddf(x1)/6)*(x2-x1)**3 # predicted value; 3! = 3*2*1 = 6
erroO_3 = f(x2) - fO_3 # exact value minus predicted value
# Fourth-order approximation
fO_4 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 + (dddf(x1)/6)*(x2-x1)**3 + (d4f(x1)/24)*(x2-x1)**4 # predicted value; 4! = 4*3*2*1 = 24
erroO_4 = f(x2) - fO_4 # exact value minus predicted value
print('Ordem ~f(x) Erro')
print('0 {0:8f} {1:3f}'.format(fO_0, erroO_0))
print('1 {0:8f} {1:8f}'.format(fO_1, erroO_1))
print('2 {0:8f} {1:8f}'.format(fO_2, erroO_2))
print('3 {0:8f} {1:8f}'.format(fO_3, erroO_3))
print('4 {0:8f} {1:8f}'.format(fO_4, erroO_4))
# Plot the graphs
xx = np.linspace(-2,2.0,40)
yy = f(xx)
plt.plot(xx,yy,'b',x2,fO_0,'*',x2,fO_1,'*r', x2, fO_2, '*g', x2, fO_3,'*y', x2, fO_4, '*r')
plt.savefig('exemplo1.png')
plt.show()
# Exercise from the class of 17/08/2020
import numpy as np
import matplotlib.pyplot as plt
def f(x):
    return np.sin(x)
def df(x):
    return np.cos(x)
def ddf(x):
    return -np.sin(x)
def dddf(x):
    return -np.cos(x)
def d4f(x):
    return np.sin(x)
x1 = np.pi/2
x2 = 3*np.pi/4 # equal to pi/2 + pi/4
# Zero-order approximation
fO_0 = f(x1) # predicted value
erroO_0 = f(x2) - fO_0 # exact value minus predicted value
# First-order approximation
fO_1 = f(x1) + df(x1)*(x2-x1) # predicted value
erroO_1 = f(x2) - fO_1 # exact value minus predicted value
# Second-order approximation
fO_2 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 # predicted value
erroO_2 = f(x2) - fO_2 # exact value minus predicted value
# Third-order approximation
fO_3 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 + (dddf(x1)/6)*(x2-x1)**3 # predicted value; 3! = 3*2*1 = 6
erroO_3 = f(x2) - fO_3 # exact value minus predicted value
# Fourth-order approximation
fO_4 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 + (dddf(x1)/6)*(x2-x1)**3 + (d4f(x1)/24)*(x2-x1)**4 # predicted value; 4! = 4*3*2*1 = 24
erroO_4 = f(x2) - fO_4 # exact value minus predicted value
print('Ordem ~f(x) Erro')
print('0 {0:8f} {1:3f}'.format(fO_0, erroO_0))
print('1 {0:8f} {1:8f}'.format(fO_1, erroO_1))
print('2 {0:8f} {1:8f}'.format(fO_2, erroO_2))
print('3 {0:8f} {1:8f}'.format(fO_3, erroO_3))
print('4 {0:8f} {1:8f}'.format(fO_4, erroO_4))
# Plot the graphs
xx = np.linspace(0,2.*np.pi,40)
yy = f(xx)
plt.plot(xx,yy,'b',x2,fO_0,'*',x2,fO_1,'*b', x2, fO_2, '*g', x2, fO_3,'*y', x2, fO_4, '*r')
plt.savefig('exemplo2.png')
plt.show()
```
### Exercise - Class of 17/08/2020
Using the previous example, compute Taylor expansions of the sine function, from order zero up to order 4, around $x = \pi/2$ with $h = \pi/4$, that is, to estimate the value of the function at $x_{i+1} = 3 \pi/4$. Then answer the verification checks on the AVA:
1. Check: What is the error of the zero-order estimate?
2. Check: What is the error of the fourth-order estimate?
### Exercise - Class of 24/08/2020
Using the previous examples and exercises, plot the Taylor expansions of the functions studied, from order zero up to order 4, save the figure in PNG format, and upload it to the AVA.
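The expansions written out term by term above can be wrapped in a single helper; a sketch (the function name and signature are my own, not from the course material):

```python
import numpy as np
from math import factorial

# derivs is [f, f', f'', ...]; returns the order-0 .. order-n Taylor
# approximations of f(x2), expanded around x1.
def taylor_approx(derivs, x1, x2):
    h = x2 - x1
    approx, total = [], 0.0
    for n, d in enumerate(derivs):
        total += d(x1) * h ** n / factorial(n)
        approx.append(total)
    return approx

# Reproduce the sine exercise: expand around pi/2, evaluate at 3*pi/4
derivs = [np.sin, np.cos, lambda x: -np.sin(x), lambda x: -np.cos(x), np.sin]
vals = taylor_approx(derivs, np.pi / 2, 3 * np.pi / 4)
errors = [np.sin(3 * np.pi / 4) - v for v in vals]
print(errors[0], errors[4])
```

The errors shrink rapidly with the order, matching the tables printed in the cells above.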
## References
Kiusalaas, J. (2013). **Numerical Methods in Engineering With Python 3**. Cambridge: Cambridge University Press.<br>
Brasil, R.M.L.R.F., Balthazar, J.M., Góis, W. (2015). **Métodos Numéricos e Computacionais na Prática de Engenharias e Ciências**. São Paulo: Edgar Blucher.
```
'''
import numpy as np
import pandas as pd
import joblib
import os
os.chdir('../c620-main6') # must change to the project root
import time
import sys
import clr
from sqlalchemy import create_engine
import pymssql
import scipy as sp
pd.options.display.max_rows = None
sys.path.append(r'C:\Program Files (x86)\PIPC\AF\PublicAssemblies\4.0')
clr.AddReference('OSIsoft.AFSDK')
from OSIsoft.AF import *
from OSIsoft.AF.PI import *
from OSIsoft.AF.Asset import *
from OSIsoft.AF.Data import *
from OSIsoft.AF.Time import *
from OSIsoft.AF.UnitsOfMeasure import *
'''
```
# Load the DCS/LIMS data
```
'''
piServers = PIServers()
piServer = piServers.DefaultPIServer;
def PI_current(name):
pt = PIPoint.FindPIPoint(piServer, name)
timerange = AFTimeRange("*", "*-4h")
span = AFTimeSpan.Parse("4H")
result = pt.Summaries(timerange, span, AFSummaryTypes.Average, AFCalculationBasis.TimeWeighted, AFTimestampCalculation.Auto)
return result
'''
'''
Tag=['ARO2-DCS-FI61501','ARO2-DCS-FI621A2','ARO2-DCS-FI671A3','ARO2-DCS-FIC62104','ARO2-DCS-FIC62801','ARO2-DCS-FIC62802',
'ARO2-DCS-FIC65301','ARO2-DCS-FIC66501', 'ARO2-DCS-FIC67701','ARO2-DCS-FIC8270A','ARO2-DCS-TI62701','ARO2-DCS-TI660A2',
'ARO2-DCS-TI660B2','ARO2-DCS-TI6700A','ARO2-DCS-TIC62003','ARO2-DCS-TIC62006','ARO2-DCS-TIC67007','ARO2-LIMS-S601@A10+',
'ARO2-LIMS-S601@A9','ARO2-LIMS-S601@BZ','ARO2-LIMS-S601@EB','ARO2-LIMS-S601@MX','ARO2-LIMS-S601@NA','ARO2-LIMS-S601@OX',
'ARO2-LIMS-S601@PX','ARO2-LIMS-S601@TOL','ARO2-LIMS-S604@BZ','ARO2-LIMS-S604@NA','ARO2-LIMS-S604@TOL','ARO2-LIMS-S610@A10+',
'ARO2-LIMS-S610@A9','ARO2-LIMS-S610@BZ','ARO2-LIMS-S610@EB','ARO2-LIMS-S610@MX','ARO2-LIMS-S610@NA','ARO2-LIMS-S610@OX',
'ARO2-LIMS-S610@PX','ARO2-LIMS-S610@TOL','ARO2-LIMS-S622@A10','ARO2-LIMS-S622@A11+','ARO2-LIMS-S622@A9','ARO2-LIMS-S622@BZ',
'ARO2-LIMS-S622@EB','ARO2-LIMS-S622@MX','ARO2-LIMS-S622@NA','ARO2-LIMS-S622@OX','ARO2-LIMS-S622@PX','ARO2-LIMS-S622@TOL',
'ARO2-LIMS-S623@A10+','ARO2-LIMS-S623@A9','ARO2-LIMS-S623@BZ','ARO2-LIMS-S623@EB','ARO2-LIMS-S623@Gravity','ARO2-LIMS-S623@MX',
'ARO2-LIMS-S623@NA','ARO2-LIMS-S623@OX','ARO2-LIMS-S623@PX','ARO2-LIMS-S623@TOL','ARO2-LIMS-S624@A10', 'ARO2-LIMS-S624@A11+',
'ARO2-LIMS-S624@A9','ARO2-LIMS-S624@BZ','ARO2-LIMS-S624@EB','ARO2-LIMS-S624@Gravity','ARO2-LIMS-S624@MX','ARO2-LIMS-S624@NA',
'ARO2-LIMS-S624@OX','ARO2-LIMS-S624@PX','ARO2-LIMS-S624@TOL','ARO2-LIMS-S808@A10+','ARO2-LIMS-S808@A9','ARO2-LIMS-S808@BZ',
'ARO2-LIMS-S808@EB','ARO2-LIMS-S808@MX','ARO2-LIMS-S808@NA','ARO2-LIMS-S808@OX','ARO2-LIMS-S808@PX','ARO2-LIMS-S808@TOL',
]
x0_list = [[] for _ in range(len(Tag))]  # one empty list per PI tag
for x in range(len(x0_list)):
    for summary in PI_current(Tag[x]):
        for event in summary.Value:
            x0_list[x].append(event.Value)
'''
# range(len(x0_list))
# list(range(len(x0_list)))
'''
for x in range(len(x0_list)):
for summary in Tag[x]:
for event in summary.value:
x0_list.append(event.value)
'''
'''
sqlite_dict= ({'ARO2_DCS_FI61501': x0_list[0], 'ARO2_DCS_FI621A2': x0_list[1], 'ARO2_DCS_FI671A3': x0_list[2], 'ARO2_DCS_FIC62104': x0_list[3], 'ARO2_DCS_FIC62801': x0_list[4], 'ARO2_DCS_FIC62802': x0_list[5],
'ARO2_DCS_FIC65301': x0_list[6], 'ARO2_DCS_FIC66501': x0_list[7], 'ARO2_DCS_FIC67701': x0_list[8], 'ARO2_DCS_FIC8270A':x0_list[9],'ARO2_DCS_TI62701': x0_list[10], 'ARO2-DCS-TI660A2': x0_list[11],
'ARO2_DCS_TI660B2': x0_list[12], 'ARO2_DCS_TI6700A': x0_list[13], 'ARO2_DCS_TIC62003': x0_list[14], 'ARO2_DCS_TIC62006': x0_list[15], 'ARO2_DCS_TIC67007': x0_list[16], 'ARO2_LIMS_S601_A10+': x0_list[17],
'ARO2_LIMS_S601_A9': x0_list[18], 'ARO2_LIMS_S601_BZ': x0_list[19], 'ARO2_LIMS_S601_EB': x0_list[20], 'ARO2_LIMS_S601_MX': x0_list[21], 'ARO2_LIMS_S601_NA': x0_list[22], 'ARO2_LIMS_S601_OX': x0_list[23],
'ARO2_LIMS_S601_PX': x0_list[24], 'ARO2_LIMS_S601_TOL': x0_list[25],'ARO2_LIMS_S604_BZ': x0_list[26], 'ARO2_LIMS_S604_NA': x0_list[27], 'ARO2_LIMS_S604_TOL': x0_list[28], 'ARO2_LIMS_S610_A10+': x0_list[29],
'ARO2_LIMS_S610_A9': x0_list[30], 'ARO2_LIMS_S610_BZ':x0_list[31], 'ARO2_LIMS_S610_EB': x0_list[32], 'ARO2_LIMS_S610_MX': x0_list[33], 'ARO2_LIMS_S610_NA': x0_list[34], 'ARO2_LIMS_S610_OX': x0_list[35],
'ARO2_LIMS_S610_PX': x0_list[36], 'ARO2_LIMS_S610_TOL': x0_list[37], 'ARO2_LIMS_S622_A10': x0_list[38], 'ARO2_LIMS_S622_A11+': x0_list[39], 'ARO2_LIMS_S622_A9': x0_list[40], 'ARO2_LIMS_S622_BZ': x0_list[41],
'ARO2_LIMS_S622_EB': x0_list[42], 'ARO2_LIMS_S622_MX': x0_list[43], 'ARO2_LIMS_S622_NA': x0_list[44], 'ARO2_LIMS_S622_OX': x0_list[45], 'ARO2_LIMS_S622_PX': x0_list[46], 'ARO2_LIMS_S622_TOL': x0_list[47],
'ARO2_LIMS_S623_A10+': x0_list[48], 'ARO2_LIMS_S623_A9':x0_list[49], 'ARO2_LIMS_S623_BZ': x0_list[50], 'ARO2_LIMS_S623_EB':x0_list[51], 'ARO2_LIMS_S623_Gravity': x0_list[52], 'ARO2_LIMS_S623_MX': x0_list[53],
'ARO2_LIMS_S623_NA': x0_list[54], 'ARO2_LIMS_S623_OX': x0_list[55], 'ARO2_LIMS_S623_PX': x0_list[56], 'ARO2_LIMS_S623_TOL': x0_list[57], 'ARO2_LIMS_S624_A10': x0_list[58], 'ARO2_LIMS_S624_A11+': x0_list[59],
'ARO2_LIMS_S624_A9': x0_list[60], 'ARO2_LIMS_S624_BZ': x0_list[61], 'ARO2_LIMS_S624_EB': x0_list[62], 'ARO2_LIMS_S624_Gravity': x0_list[63], 'ARO2_LIMS_S624_MX': x0_list[64], 'ARO2_LIMS_S624_NA': x0_list[65],
'ARO2_LIMS_S624_OX': x0_list[66], 'ARO2_LIMS_S624_PX': x0_list[67], 'ARO2_LIMS_S624_TOL': x0_list[68], 'ARO2_LIMS_S808_A10+': x0_list[69], 'ARO2_LIMS_S808_A9': x0_list[70],'ARO2_LIMS_S808_BZ': x0_list[71],
'ARO2_LIMS_S808_EB': x0_list[72], 'ARO2_LIMS_S808_MX': x0_list[73], 'ARO2_LIMS_S808_NA': x0_list[74], 'ARO2_LIMS_S808_OX': x0_list[75], 'ARO2_LIMS_S808_PX': x0_list[76], 'ARO2_LIMS_S808_TOL': x0_list[77],
})
sqlite_dict
'''
sqlite_dict = {'ARO2_DCS_FI61501': [179.11896663780595],
'ARO2_DCS_FI621A2': [983.5121525745467],
'ARO2_DCS_FI671A3': [2484.6111883022904],
'ARO2_DCS_FIC62104': [37.64221664292652],
'ARO2_DCS_FIC62801': [113.82300421356918],
'ARO2_DCS_FIC62802': [0.0],
'ARO2_DCS_FIC65301': [113.3689636893472],
'ARO2_DCS_FIC66501': [156.98126501787442],
'ARO2_DCS_FIC67701': [375.7814948285392],
'ARO2_DCS_FIC8270A': [19.339881278365436],
'ARO2_DCS_TI62701': [42.28831387287955],
'ARO2-DCS-TI660A2': [88.99486046250429],
'ARO2_DCS_TI660B2': [90.86455192624352],
'ARO2_DCS_TI6700A': [226.63979308776942],
'ARO2_DCS_TIC62003': [163.64815832826483],
'ARO2_DCS_TIC62006': [181.5933000093269],
'ARO2_DCS_TIC67007': [200.30058750346305],
'ARO2_LIMS_S601_A10+': [0.000699999975040555],
'ARO2_LIMS_S601_A9': [0.004100000020116568],
'ARO2_LIMS_S601_BZ': [45.24599838256836],
'ARO2_LIMS_S601_EB': [7.520500183105469],
'ARO2_LIMS_S601_MX': [3.463200092315674],
'ARO2_LIMS_S601_NA': [0.5958999991416931],
'ARO2_LIMS_S601_OX': [0.7685999870300293],
'ARO2_LIMS_S601_PX': [1.4729000329971313],
'ARO2_LIMS_S601_TOL': [40.926998138427734],
'ARO2_LIMS_S604_BZ': [99.93099975585938],
'ARO2_LIMS_S604_NA': [620.0],
'ARO2_LIMS_S604_TOL': [56.0],
'ARO2_LIMS_S610_A10+': [5.1519999504089355],
'ARO2_LIMS_S610_A9': [20.45199966430664],
'ARO2_LIMS_S610_BZ': [0.0006000000284984708],
'ARO2_LIMS_S610_EB': [9.90999984741211],
'ARO2_LIMS_S610_MX': [34.47200012207031],
'ARO2_LIMS_S610_NA': [0.31700000166893005],
'ARO2_LIMS_S610_OX': [13.793000221252441],
'ARO2_LIMS_S610_PX': [15.873000144958496],
'ARO2_LIMS_S610_TOL': [0.027000000700354576],
'ARO2_LIMS_S622_A10': [1.9190000295639038],
'ARO2_LIMS_S622_A11+': [1.6440000534057617],
'ARO2_LIMS_S622_A9': [13.569000244140625],
'ARO2_LIMS_S622_BZ': [0.125],
'ARO2_LIMS_S622_EB': [1.343999981880188],
'ARO2_LIMS_S622_MX': [20.349000930786133],
'ARO2_LIMS_S622_NA': [0.05000000074505806],
'ARO2_LIMS_S622_OX': [8.602999687194824],
'ARO2_LIMS_S622_PX': [9.465999603271484],
'ARO2_LIMS_S622_TOL': [42.92499923706055],
'ARO2_LIMS_S623_A10+': [0.0],
'ARO2_LIMS_S623_A9': [0.00039999998989515007],
'ARO2_LIMS_S623_BZ': [80.55799865722656],
'ARO2_LIMS_S623_EB': [0.0010000000474974513],
'ARO2_LIMS_S623_Gravity': [0.8803200125694275],
'ARO2_LIMS_S623_MX': [0.008999999612569809],
'ARO2_LIMS_S623_NA': [0.42399999499320984],
'ARO2_LIMS_S623_OX': [0.0010000000474974513],
'ARO2_LIMS_S623_PX': [0.004999999888241291],
'ARO2_LIMS_S623_TOL': [18.999000549316406],
'ARO2_LIMS_S624_A10': [1.7289999723434448],
'ARO2_LIMS_S624_A11+': [1.3029999732971191],
'ARO2_LIMS_S624_A9': [11.638999938964844],
'ARO2_LIMS_S624_BZ': [12.414999961853027],
'ARO2_LIMS_S624_EB': [1.0540000200271606],
'ARO2_LIMS_S624_Gravity': [0.8679599761962891],
'ARO2_LIMS_S624_MX': [16.812000274658203],
'ARO2_LIMS_S624_NA': [1.7120000123977661],
'ARO2_LIMS_S624_OX': [7.311999797821045],
'ARO2_LIMS_S624_PX': [7.757999897003174],
'ARO2_LIMS_S624_TOL': [38.25899887084961],
'ARO2_LIMS_S808_A10+': [0.0],
'ARO2_LIMS_S808_A9': [0.0005000000237487257],
'ARO2_LIMS_S808_BZ': [68.76399993896484],
'ARO2_LIMS_S808_EB': [0.8230000138282776],
'ARO2_LIMS_S808_MX': [5.460999965667725],
'ARO2_LIMS_S808_NA': [1.9769999980926514],
'ARO2_LIMS_S808_OX': [0.5239999890327454],
'ARO2_LIMS_S808_PX': [3.0899999141693115],
'ARO2_LIMS_S808_TOL': [19.35700035095215]}
import pandas as pd
sqlite_dict_df = pd.DataFrame(list(sqlite_dict.items()),columns=['TAG','Value'])
# sqlite_dict_df['ARO2_DCS_FIC67701']
sqlite_dict_df
```
## C620 input composition combination
```
def c620_composition_combination(S808,S624):
    return (sqlite_dict['ARO2_DCS_FIC8270A'][0]*873.1*S808[0] + 862.6*sqlite_dict['ARO2_DCS_FI61501'][0]*S624[0])/(862.6*sqlite_dict['ARO2_DCS_FI61501'][0]+sqlite_dict['ARO2_DCS_FIC8270A'][0]*873.1)
C620_input_dict = dict({
'C620_NA':c620_composition_combination(sqlite_dict['ARO2_LIMS_S808_NA'],sqlite_dict['ARO2_LIMS_S624_NA']),
'C620_BZ':c620_composition_combination(sqlite_dict['ARO2_LIMS_S808_BZ'],sqlite_dict['ARO2_LIMS_S624_BZ']),
'C620_TOL':c620_composition_combination(sqlite_dict['ARO2_LIMS_S808_TOL'],sqlite_dict['ARO2_LIMS_S624_TOL']),
'C620_EB':c620_composition_combination(sqlite_dict['ARO2_LIMS_S808_EB'],sqlite_dict['ARO2_LIMS_S624_EB']),
'C620_PX':c620_composition_combination(sqlite_dict['ARO2_LIMS_S808_PX'],sqlite_dict['ARO2_LIMS_S624_PX']),
'C620_MX':c620_composition_combination(sqlite_dict['ARO2_LIMS_S808_MX'],sqlite_dict['ARO2_LIMS_S624_MX']),
'C620_OX':c620_composition_combination(sqlite_dict['ARO2_LIMS_S808_OX'],sqlite_dict['ARO2_LIMS_S624_OX']),
'C620_A9':c620_composition_combination(sqlite_dict['ARO2_LIMS_S808_A9'],sqlite_dict['ARO2_LIMS_S624_A9']),
'C620_A10':(sqlite_dict['ARO2_DCS_FIC8270A'][0]*873.1*sqlite_dict['ARO2_LIMS_S808_A10+'][0] + 862.6*sqlite_dict['ARO2_DCS_FI61501'][0]*sqlite_dict['ARO2_LIMS_S624_A10'][0] + 862.6*sqlite_dict['ARO2_DCS_FI61501'][0]*sqlite_dict['ARO2_LIMS_S624_A11+'][0])/(862.6*sqlite_dict['ARO2_DCS_FI61501'][0]+sqlite_dict['ARO2_DCS_FIC8270A'][0]*873.1),
})
C620_input_dict
```
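For reference, `c620_composition_combination` above appears to implement a mass-flow-weighted blend of the two feed streams. Assuming the constants 873.1 and 862.6 are the densities of the C820 and V615 streams respectively (the symbol names below are mine), it computes:

```latex
x_{\text{blend}} =
\frac{F_{\mathrm{C820}}\,\rho_{\mathrm{C820}}\,x_{\mathrm{C820}}
    + F_{\mathrm{V615}}\,\rho_{\mathrm{V615}}\,x_{\mathrm{V615}}}
     {F_{\mathrm{C820}}\,\rho_{\mathrm{C820}}
    + F_{\mathrm{V615}}\,\rho_{\mathrm{V615}}}
```

where $F$ is the volumetric flow and $x$ the component weight fraction of each stream.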
# Dividing the C620 feed into 41 components
```
split = dict({'Methane':[0.019,0],'Ethane':[0.458,0],'Propane':[0.403,0.1108],'n_Butane':[0.097,0.1724],'n_Pentane':[0.018,0.2728],'n_Hexane':[0.004,0.0563],
'Cyclohexane':[0,0.0386],'n_Heptane':[0.002,0.0667],'Methylcyclohexane':[0,0.0618],'n_Octane':[0,0.0409],'n_Propylcyclopentane':[0,0.0483],'Ethylcyclohexane':[0,0.0483],'n_Nonane':[0,0.07],
'i_Propylbenzene':[0.001,0],'n_Propylcyclohexane':[0,0.0131],'n_Propylbenzene':[0.001,0],'1_Methyl_3_ethylbenzene':[0.078,0.2],'1_Methyl_4_ethylbenzene':[0.041,0.2],'135_Trimethylbenzene':[0.215,0.4],'1_Methyl_2_ethylbenzene':[0.023,0.1],'124_Trimethylbenzene':[0.549,0.1],'tert_Butylcyclohexane':[0,0],'123_Trimethylbenzene':[0.088,0],'Indane':[0.003,0],'1_Methyl_4_n_propylbenzene':[0.0004,0.8],'12_Diethylbenzene':[0.0119,0.2],'5_Ethyl_m_xylene':[0.4591,0],'14_Diethylbenzene':[0,0],'1235_Tetramethylbenzene':[0.5286,0],'n_Pentylbenzene':[0.7,0],'n_Hexylbenzene':[0.3,0]})
split
def c620_break(V615,C820,composition):
    return (0.8626*sqlite_dict['ARO2_DCS_FI61501'][0]*sqlite_dict[V615][0]*split[composition][0] + sqlite_dict['ARO2_DCS_FIC8270A'][0]*0.8731*sqlite_dict[C820][0]*split[composition][1])/(0.8626*sqlite_dict['ARO2_DCS_FI61501'][0] + sqlite_dict['ARO2_DCS_FIC8270A'][0]*0.8731)
c620_feed_dict = dict({
'Hydrogen':0,
'Methane':c620_break('ARO2_LIMS_S624_NA','ARO2_LIMS_S808_NA','Methane'),
'Ethane':c620_break('ARO2_LIMS_S624_NA','ARO2_LIMS_S808_NA','Ethane'),
'Propane':c620_break('ARO2_LIMS_S624_NA','ARO2_LIMS_S808_NA','Propane'),
'n_Butane':c620_break('ARO2_LIMS_S624_NA','ARO2_LIMS_S808_NA','n_Butane'),
'n_Pentane':c620_break('ARO2_LIMS_S624_NA','ARO2_LIMS_S808_NA','n_Pentane'),
'n_Hexane':c620_break('ARO2_LIMS_S624_NA','ARO2_LIMS_S808_NA','n_Hexane'),
'Benzene':C620_input_dict['C620_BZ'],
'Cyclohexane':c620_break('ARO2_LIMS_S624_NA','ARO2_LIMS_S808_NA','Cyclohexane'),
'n_Heptane':c620_break('ARO2_LIMS_S624_NA','ARO2_LIMS_S808_NA','n_Heptane'),
'Water':0,
'Methylcyclohexane':c620_break('ARO2_LIMS_S624_NA','ARO2_LIMS_S808_NA','Methylcyclohexane'),
'Toluene':C620_input_dict['C620_TOL'],
'n_Octane':c620_break('ARO2_LIMS_S624_NA','ARO2_LIMS_S808_NA','n_Octane'),
'n_Propylcyclopentane':c620_break('ARO2_LIMS_S624_NA','ARO2_LIMS_S808_NA','n_Propylcyclopentane'),
'Ethylcyclohexane':c620_break('ARO2_LIMS_S624_NA','ARO2_LIMS_S808_NA','Ethylcyclohexane'),
'EB':C620_input_dict['C620_EB'],
'PX':C620_input_dict['C620_PX'],
'MX':C620_input_dict['C620_MX'],
'OX':C620_input_dict['C620_OX'],
'n_Nonane':c620_break('ARO2_LIMS_S624_NA','ARO2_LIMS_S808_NA','n_Nonane'),
'i_Propylbenzene':c620_break('ARO2_LIMS_S624_A9','ARO2_LIMS_S808_A9','i_Propylbenzene'),
'n_Propylcyclohexane':c620_break('ARO2_LIMS_S624_NA','ARO2_LIMS_S808_NA','n_Propylcyclohexane'),
'n_Propylbenzene':c620_break('ARO2_LIMS_S624_A9','ARO2_LIMS_S808_A9','n_Propylbenzene'),
'1_Methyl_3_ethylbenzene':c620_break('ARO2_LIMS_S624_A9','ARO2_LIMS_S808_A9','1_Methyl_3_ethylbenzene'),
'1_Methyl_4_ethylbenzene':c620_break('ARO2_LIMS_S624_A9','ARO2_LIMS_S808_A9','1_Methyl_4_ethylbenzene'),
'135_Trimethylbenzene':c620_break('ARO2_LIMS_S624_A9','ARO2_LIMS_S808_A9','135_Trimethylbenzene'),
'1_Methyl_2_ethylbenzene':c620_break('ARO2_LIMS_S624_A9','ARO2_LIMS_S808_A9','1_Methyl_2_ethylbenzene'),
'124_Trimethylbenzene':c620_break('ARO2_LIMS_S624_A9','ARO2_LIMS_S808_A9','124_Trimethylbenzene'),
'tert_Butylcyclohexane':c620_break('ARO2_LIMS_S624_NA','ARO2_LIMS_S808_NA','tert_Butylcyclohexane'),
'123_Trimethylbenzene':c620_break('ARO2_LIMS_S624_A9','ARO2_LIMS_S808_A9','123_Trimethylbenzene'),
'Indane':c620_break('ARO2_LIMS_S624_A9','ARO2_LIMS_S808_A9','Indane'),
'1_Methyl_4_n_propylbenzene':c620_break('ARO2_LIMS_S624_A10','ARO2_LIMS_S808_A10+','1_Methyl_4_n_propylbenzene'),
'12_Diethylbenzene':c620_break('ARO2_LIMS_S624_A10','ARO2_LIMS_S808_A10+','12_Diethylbenzene'),
'5_Ethyl_m_xylene':c620_break('ARO2_LIMS_S624_A10','ARO2_LIMS_S808_A10+','5_Ethyl_m_xylene'),
'14_Diethylbenzene':c620_break('ARO2_LIMS_S624_A10','ARO2_LIMS_S808_A10+','14_Diethylbenzene'),
'1235_Tetramethylbenzene':c620_break('ARO2_LIMS_S624_A10','ARO2_LIMS_S808_A10+','1235_Tetramethylbenzene'),
'n_Pentylbenzene':c620_break('ARO2_LIMS_S624_A11+','ARO2_LIMS_S808_A10+','n_Pentylbenzene'),
'n_Hexylbenzene':c620_break('ARO2_LIMS_S624_A11+','ARO2_LIMS_S808_A10+','n_Hexylbenzene'),
'Nitrogen':0,
'Oxgen':0,
})
len(list(c620_feed_dict))
import joblib
c620_col_names = joblib.load('./col_names/c620_col_names.pkl')
# c620_col_names['x41'] is the column name of c620_feed
c620_feed_df = pd.DataFrame(data = c620_feed_dict,index =[0])
c620_feed_df.columns = c620_col_names['x41']
c620_feed_df
```
# ICG input
```
icg_dict = dict({
'V615_flow':sqlite_dict['ARO2_DCS_FI61501'],
'V615_NA':sqlite_dict['ARO2_LIMS_S624_NA'],
'V615_BZ':sqlite_dict['ARO2_LIMS_S624_BZ'],
'V615_TOL':sqlite_dict['ARO2_LIMS_S624_TOL'],
'C820_flow':sqlite_dict['ARO2_DCS_FIC8270A'],
'C820_NA':sqlite_dict['ARO2_LIMS_S808_NA'],
'C820_BZ':sqlite_dict['ARO2_LIMS_S808_BZ'],
'C820_TOL':sqlite_dict['ARO2_LIMS_S808_TOL'],
'T651_flow':sqlite_dict['ARO2_DCS_FIC65301'],
'T651_NA':sqlite_dict['ARO2_LIMS_S601_NA'],
'T651_BZ':sqlite_dict['ARO2_LIMS_S601_BZ'],
'T651_TOL':sqlite_dict['ARO2_LIMS_S601_TOL'],
'C620_Sidedraw':sqlite_dict['ARO2_LIMS_S623_BZ'],
'NA_BZ':sqlite_dict['ARO2_LIMS_S604_NA'],
})
icg_col_names = joblib.load('./col_names/c620_c670.pkl')
icg_input_df = pd.DataFrame(data = icg_dict,index =[0])
icg_input_df.columns = icg_col_names['x']
icg_input_df
icg_input_df['Tatoray Stripper C620 Operation_Specifications_Spec 2 : Distillate Rate_m3/hr'],icg_input_df['Benzene Column C660 Operation_Specifications_Spec 3 : Toluene in Benzene_ppmw'],icg_input_df['Tatoray Stripper C620 Operation_Specifications_Spec 1 : Receiver Temp_oC'] = [sqlite_dict['ARO2_DCS_FIC62802'],sqlite_dict['ARO2_LIMS_S604_TOL'],sqlite_dict['ARO2_DCS_TI62701']]
icg_input_df
```
# Inspect the three specs of this sample
```
print(icg_input_df['Simulation Case Conditions_Spec 1 : Benzene in C620 Sidedraw_wt%'].values[0])
print(icg_input_df['Simulation Case Conditions_Spec 2 : NA in Benzene_ppmw'].values[0])
print(icg_input_df['Benzene Column C660 Operation_Specifications_Spec 3 : Toluene in Benzene_ppmw'].values[0])
icg_input_df.to_dict()
```
# T651 Feed
```
def T651_break(composition,split):
    return sqlite_dict[composition][0]*split
t651_feed_dict = dict({
'Hydrogen':0,
'Methane':T651_break('ARO2_LIMS_S601_NA',0),
'Ethane':T651_break('ARO2_LIMS_S601_NA',0),
'Propane':T651_break('ARO2_LIMS_S601_NA',0),
'n_Butane':T651_break('ARO2_LIMS_S601_NA',0),
'n_Pentane':T651_break('ARO2_LIMS_S601_NA',0),
'n_Hexane':T651_break('ARO2_LIMS_S601_NA',0.01),
'Benzene':T651_break('ARO2_LIMS_S601_BZ',1),
'Cyclohexane':T651_break('ARO2_LIMS_S601_NA',0),
'n_Heptane':T651_break('ARO2_LIMS_S601_NA',0.01),
'Water':0,
'Methylcyclohexane':T651_break('ARO2_LIMS_S601_NA',0),
'Toluene':T651_break('ARO2_LIMS_S601_TOL',1),
'n_Octane':T651_break('ARO2_LIMS_S601_NA',0.12),
'n_Propylcyclopentane':T651_break('ARO2_LIMS_S601_NA',0.11),
'Ethylcyclohexane':T651_break('ARO2_LIMS_S601_NA',0.09),
'EB':T651_break('ARO2_LIMS_S601_EB',1),
'PX':T651_break('ARO2_LIMS_S601_PX',1),
'MX':T651_break('ARO2_LIMS_S601_MX',1),
'OX':T651_break('ARO2_LIMS_S601_OX',1),
'n_Nonane':T651_break('ARO2_LIMS_S601_NA',0.19),
'i_Propylbenzene':T651_break('ARO2_LIMS_S601_A9',0),
'n_Propylcyclohexane':T651_break('ARO2_LIMS_S601_NA',0.42),
'n_Propylbenzene':T651_break('ARO2_LIMS_S601_A9',0),
'1_Methyl_3_ethylbenzene':T651_break('ARO2_LIMS_S601_A9',0.2),
'1_Methyl_4_ethylbenzene':T651_break('ARO2_LIMS_S601_A9',0.2),
'135_Trimethylbenzene':T651_break('ARO2_LIMS_S601_A9',0.4),
'1_Methyl_2_ethylbenzene':T651_break('ARO2_LIMS_S601_A9',0.1),
'124_Trimethylbenzene':T651_break('ARO2_LIMS_S601_A9',0.1),
'tert_Butylcyclohexane':T651_break('ARO2_LIMS_S601_NA',0.03),
'123_Trimethylbenzene':T651_break('ARO2_LIMS_S601_A9',0),
'Indane':T651_break('ARO2_LIMS_S601_A9',0),
'1_Methyl_4_n_propylbenzene':T651_break('ARO2_LIMS_S601_A10+',0.8),
'12_Diethylbenzene':T651_break('ARO2_LIMS_S601_A10+',0.2),
'5_Ethyl_m_xylene':T651_break('ARO2_LIMS_S601_A10+',0),
'14_Diethylbenzene':T651_break('ARO2_LIMS_S601_A10+',0),
'1235_Tetramethylbenzene':T651_break('ARO2_LIMS_S601_A10+',0),
'n_Pentylbenzene':T651_break('ARO2_LIMS_S601_A10+',0),
'n_Hexylbenzene':T651_break('ARO2_LIMS_S601_A10+',0),
'Nitrogen':0,
'Oxgen':0,
})
t651_feed_dict
len(list(t651_feed_dict))
t651_col_names = joblib.load('./col_names/t651_col_names.pkl')
# c620_col_names['x41'] is the column name of c620_feed
t651_feed_df = pd.DataFrame(data = t651_feed_dict,index =[0])
t651_feed_df.columns = t651_col_names['x41']
t651_feed_df
```
## Trial-calculation mode
```
import autorch
from FV2 import AllSystem
from configV2 import config
f = joblib.load('model/allsystem.pkl')
c620_wt,c620_op,c660_wt,c660_op,c670_wt,c670_op = f.inference(icg_input_df.copy(),c620_feed_df.copy(),t651_feed_df.copy())
c620_op
c660_op
c670_op
```
## Recommendation mode
```
c620_wt2,c620_op2,c660_wt2,c660_op2,c670_wt2,c670_op2,bz_error,nainbz_error,tol_error = f.recommend(icg_input_df.copy(),c620_feed_df.copy(),t651_feed_df.copy(),
search_iteration = 100,only_tune_temp=True)
bz_error,nainbz_error,tol_error
```
## Compute the Delta
```
c620_op2-c620_op # with benzene lowered from 80 to 70, the C620 temperature should rise, and it indeed rises
c660_op2-c660_op
c670_op2-c670_op
```
# Inheritance Exercise Clothing
The following code contains a Clothing parent class and two child classes: Shirt and Pants.
Your job is to code a class called Blouse. Read through the code and fill out the TODOs, then check your work with the unit tests at the bottom of the code.
```
class Clothing:
    def __init__(self, color, size, style, price):
        self.color = color
        self.size = size
        self.style = style
        self.price = price
    def change_price(self, price):
        self.price = price
    def calculate_discount(self, discount):
        return self.price * (1 - discount)
    def calculate_shipping(self, weight, rate):
        return weight * rate
class Shirt(Clothing):
    def __init__(self, color, size, style, price, long_or_short):
        Clothing.__init__(self, color, size, style, price)
        self.long_or_short = long_or_short
    def double_price(self):
        self.price = 2*self.price
class Pants(Clothing):
    def __init__(self, color, size, style, price, waist):
        Clothing.__init__(self, color, size, style, price)
        self.waist = waist
    def calculate_discount(self, discount):
        return self.price * (1 - discount / 2)
# TODO: Write a class called Blouse, that inherits from the Clothing class
# and has the following attributes and methods:
# attributes: color, size, style, price, country_of_origin
# where country_of_origin is a string that holds the name of a
# country
#
# methods: triple_price, which has no inputs and returns three times
# the price of the blouse
class Blouse(Clothing):
    def __init__(self, color, size, style, price, country_of_origin):
        Clothing.__init__(self, color, size, style, price)
        self.country_of_origin = country_of_origin
    def triple_price(self):
        self.price = self.price * 3
# TODO: Add a method to the clothing class called calculate_shipping.
# The method has two inputs: weight and rate. Weight is a float
# representing the weight of the article of clothing. Rate is a float
# representing the shipping rate. The method returns weight * rate
# Unit tests to check your solution
import unittest
class TestClothingClass(unittest.TestCase):
    def setUp(self):
        self.clothing = Clothing('orange', 'M', 'stripes', 35)
        self.blouse = Blouse('blue', 'M', 'luxury', 40, 'Brazil')
        self.pants = Pants('black', 32, 'baggy', 60, 30)
    def test_initialization(self):
        self.assertEqual(self.clothing.color, 'orange', 'color should be orange')
        self.assertEqual(self.clothing.price, 35, 'incorrect price')
        self.assertEqual(self.blouse.color, 'blue', 'color should be blue')
        self.assertEqual(self.blouse.size, 'M', 'incorrect size')
        self.assertEqual(self.blouse.style, 'luxury', 'incorrect style')
        self.assertEqual(self.blouse.price, 40, 'incorrect price')
        self.assertEqual(self.blouse.country_of_origin, 'Brazil', 'incorrect country of origin')
    def test_calculateshipping(self):
        self.assertEqual(self.clothing.calculate_shipping(.5, 3), .5 * 3,\
            'Clothing shipping calculation not as expected')
        self.assertEqual(self.blouse.calculate_shipping(.5, 3), .5 * 3,\
            'Clothing shipping calculation not as expected')
tests = TestClothingClass()
tests_loaded = unittest.TestLoader().loadTestsFromModule(tests)
unittest.TextTestRunner().run(tests_loaded)
```
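Beyond the unit tests, a condensed standalone version of the parent/child pair shows `triple_price` in action (the classes are re-declared here so the snippet runs on its own):

```python
# Minimal re-declaration of the exercise classes
class Clothing:
    def __init__(self, color, size, style, price):
        self.color = color
        self.size = size
        self.style = style
        self.price = price

class Blouse(Clothing):
    def __init__(self, color, size, style, price, country_of_origin):
        Clothing.__init__(self, color, size, style, price)
        self.country_of_origin = country_of_origin

    def triple_price(self):
        self.price = self.price * 3

blouse = Blouse('blue', 'M', 'luxury', 40, 'Brazil')
blouse.triple_price()
print(blouse.price)  # 120
```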
# Shashank V. Sonar
## Task 3: Perform ‘Exploratory Data Analysis’ on dataset ‘SampleSuperstore’
### ● As a business manager, try to find out the weak areas where you can work to make more profit.
### ● What business problems can you derive by exploring the data?
```
#importing libraries
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style("whitegrid")
#load data
my_data=pd.read_csv(r"C:\Users\91814\Desktop\GRIP\Task 3\SampleSuperstore.csv")
#displaying the data
my_data
# To Check first 5 rows of Dataset
my_data.head(5)
# To Check last 5 rows of Dataset
my_data.tail(5)
```
### Exploratory Data Analysis
```
# Information of the Dataset
my_data.info()
# Shape Of the Dataset
my_data.shape
# Columns Of the Dataset
my_data.columns
# Datatype of each Attribute
my_data.dtypes
# Checking for any Null Values in the columns and duplicates values
my_data.isnull().sum()
# Checking of Duplicated data
my_data.duplicated().sum()
# Deleting Duplicates if any
my_data.drop_duplicates(inplace=True)
# finding out any duplicates left from the sample file
my_data.duplicated().sum()
# Displaying the unique data
my_data.nunique()
# Dropping of Irrelevant columns like we have postal code in the sample file
my_data.drop(['Postal Code'],axis=1, inplace=True)
my_data
# To Check first 5 rows of Dataset
my_data.head()
# Correlation between the Attributes
my_data.corr()
```
### Data Visualization
```
# Pairplot the data
sns.pairplot(my_data,hue='Segment')
#ploting against various attributes
plt.figure(figsize=(20,10))
plt.bar('Sub-Category','Category', data=my_data)
plt.title('Category vs Sub-Category')
plt.xlabel('Sub-Category')
plt.ylabel('Category')
plt.xticks(rotation=50)
plt.show()
# Visualizing the correlation between the Attributes
sns.heatmap(my_data.corr(), annot=True)
print("1 represents strong positive correlation")
print("-0.22 represents negative correlation")
# Countplot each attribute
fig,axs=plt.subplots(nrows=2,ncols=2,figsize=(10,7));
sns.countplot(my_data['Category'],ax=axs[0][0])
axs[0][0].set_title('Category',fontsize=20)
sns.countplot(my_data['Segment'],ax=axs[0][1])
axs[0][1].set_title('Segment',fontsize=20)
sns.countplot(my_data['Ship Mode'],ax=axs[1][0])
axs[1][0].set_title('Ship Mode',fontsize=20)
sns.countplot(my_data['Region'],ax=axs[1][1])
axs[1][1].set_title('Region',fontsize=20)
plt.tight_layout()
# Countplot State wise shipments
plt.figure(figsize=(12,7))
sns.countplot(x=my_data['State'])
plt.xticks(rotation=90)
plt.title('Count of State wise shipments')
# State vs Profit
plt.figure(figsize =(20,12))
my_data.groupby(by ='State')['Profit'].sum().sort_values(ascending = True).plot(kind = 'bar')
# Count of Sub-Category
plt.figure(figsize=(12,7))
sns.countplot(x=my_data['Sub-Category'])
plt.xticks(rotation=90)
plt.title('Count of Sub-categories')
# Discount Vs Profit
sns.lineplot(x='Discount',y='Profit',label='Profit',data=my_data)
plt.legend()
# Discount VS Sales
sns.lineplot(x='Discount',y='Sales',label='Profit',data=my_data)
plt.legend()
# Count of Ship Mode by Region
plt.figure(figsize=(9,5))
sns.countplot(x='Region',hue='Ship Mode',data=my_data)
plt.ylabel('Count of Ship Mode')
plt.title('Count of Ship Mode by Region')
```
Thus we can conclude that product sales increase with larger discounts, but profit decreases.
Conclusion:
1) The cities/states which give more discounts on products show more sales but very little profit.
2) The West region shows the most sales and profit, whereas the South region shows the least sales and profit. So, we should try to increase sales and profits in the South region.
3) We should limit the discount on Standard Class shipments and try to increase their sales and profits.
4) The Technology category shows the most profit, so we should increase its sales and reduce sales of Furniture due to its low profit.
5) The Copiers sub-category shows the most profit and Tables the least, so we should reduce sales of Tables and increase sales of sub-categories like Accessories and Phones.
6) States showing high profit ratios are California and New York, whereas states showing the lowest profit ratios are Texas and Ohio.
7) Cities showing high profit ratios are New York and Los Angeles, whereas cities showing the lowest profit ratios are Philadelphia and Houston.
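The state-level claims above can be checked directly with a `groupby`. A minimal sketch on a toy frame that reuses the Superstore column names (`State`, `Sales`, `Profit`); the values here are made up for illustration:

```python
import pandas as pd

# Toy data with the same columns as the Superstore file (values invented).
toy = pd.DataFrame({
    'State':  ['California', 'California', 'Texas', 'Texas'],
    'Sales':  [200.0, 100.0, 300.0, 100.0],
    'Profit': [40.0, 20.0, -30.0, -10.0],
})

# Profit ratio per state: total profit divided by total sales.
by_state = toy.groupby('State')[['Sales', 'Profit']].sum()
by_state['ProfitRatio'] = by_state['Profit'] / by_state['Sales']
print(by_state.sort_values('ProfitRatio', ascending=False))
```

Applied to `my_data`, the same three lines rank the real states by profitability instead of raw profit.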
```
import pandas as pd
from imblearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
import seaborn as sns
import numpy as np
from nltk.corpus import stopwords
import matplotlib.pyplot as plt
from sklearn.feature_selection import SelectFromModel
from imblearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import FunctionTransformer
from sklearn.pipeline import FeatureUnion
from sklearn import preprocessing
from sklearn.svm import LinearSVC
def svm_func(df):
# keep test_set apart
df_train, df_test = train_test_split(df, test_size=0.25, stratify=df['reenrolled'], shuffle=True,
random_state=0)
X_train = df_train['motivation']
y_train = df_train['reenrolled']
X_test = df_test['motivation']
y_test = df_test['reenrolled']
stopword_list = list(stopwords.words('dutch'))  # NLTK corpus fileids are lowercase
pipe = make_pipeline(TfidfVectorizer(lowercase=True, stop_words=stopword_list), SVC(class_weight='balanced'))
scores = cross_val_score(pipe, X_train, y_train, cv=5)
print('5-fold cross validation scores:', scores)
print('average of 5-fold cross validation scores:', scores.mean())
pipe.fit(X_train, y_train)
predictions = pipe.predict(X_test)
print("Accuracy for SVM on test_set: %s" % accuracy_score(y_test, predictions))
cm = confusion_matrix(y_test, predictions)
print(classification_report(y_test, predictions))
sns.heatmap(cm / np.sum(cm), annot=True, fmt='.2%', cmap='Blues')
plt.show()
df = pd.read_csv(r'..\data\processed\motivation_liwc_meta_pos_topic_n15.csv')
# df.dropna(subset=['bsa_dummy', 'motivation'], inplace=True)
df = df.fillna(method='ffill')
svm_func(df)
```
# 2- Initial numeric features
```
def svm_initial_features(df):
df = df.fillna(method='ffill')
categorical_features = ['cohort', 'field', 'prior_educ', 'previously_enrolled', 'multiple_requests', 'gender',
'interest', 'ase', 'year', 'program']
numeric_features = ['age', 'HSGPA']
target = df['reenrolled']
df1 = df[categorical_features]
df2 = df[numeric_features]
df = pd.concat([df1, df2], axis=1)
df = pd.concat([df, target], axis=1)
# keep test_set apart
df_train, df_test = train_test_split(df, test_size=0.25, stratify=df['reenrolled'], shuffle=True,
random_state=0)
X_train = df_train.loc[:,df_train.columns !='reenrolled']
y_train = df_train['reenrolled']
X_test = df_test.loc[:,df_test.columns !='reenrolled']
y_test = df_test['reenrolled']
numeric_transformer = Pipeline(steps=[('scaler', StandardScaler())])
categorical_transformer = Pipeline(steps=[
('onehot', OneHotEncoder(handle_unknown='ignore'))])
preprocessor = ColumnTransformer(
transformers=[
('num', numeric_transformer, numeric_features),
('cat', categorical_transformer, categorical_features)])
pipe = make_pipeline(preprocessor,
SVC(class_weight='balanced'))
scores = cross_val_score(pipe, X_train, y_train, cv=5)
print('5-fold cross validation scores:', scores)
print('average of 5-fold cross validation scores:', scores.mean())
pipe.fit(X_train, y_train)
predictions = pipe.predict(X_test)
print("Accuracy for SVM on test_set: %s" % accuracy_score(y_test, predictions))
cm = confusion_matrix(y_test, predictions)
print(classification_report(y_test, predictions))
sns.heatmap(cm / np.sum(cm), annot=True, fmt='.2%', cmap='Blues')
plt.show()
df = pd.read_csv(r'..\data\processed\motivation_liwc_meta_pos_topic_n15.csv')
svm_initial_features(df)
```
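The preprocessing pattern above (scale the numeric columns, one-hot encode the categoricals) can be exercised on its own. A self-contained sketch on a toy frame; the values are invented, and only `age`, `HSGPA`, and `gender` from the real feature lists are reused:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

toy = pd.DataFrame({
    'age':    [20.0, 25.0, 30.0, 35.0],
    'HSGPA':  [6.5, 7.0, 8.0, 7.5],
    'gender': ['f', 'm', 'f', 'm'],
})

pre = ColumnTransformer(transformers=[
    ('num', StandardScaler(), ['age', 'HSGPA']),
    ('cat', OneHotEncoder(handle_unknown='ignore'), ['gender']),
])
# 2 scaled numeric columns + 2 one-hot columns for the two gender values
X = pre.fit_transform(toy)
print(X.shape)
```

`handle_unknown='ignore'` matters at prediction time: a category unseen during training is encoded as all zeros instead of raising an error.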
# 3- Added features + initial features- only numeric and categorical
```
def svm_all_nontext_features(df):
df = df.fillna(method='ffill')
categorical_features = ['cohort', 'field', 'prior_educ', 'previously_enrolled', 'multiple_requests', 'gender',
'interest', 'ase', 'year', 'program']
numeric_features = ['age', 'HSGPA', 'WC', 'WPS', 'Sixltr',
'Dic', 'funct', 'pronoun', 'ppron', 'i',
'we', 'you', 'shehe', 'they', 'ipron',
'article', 'verb', 'auxverb', 'past', 'present',
'future', 'adverb', 'preps', 'conj', 'negate',
'quant', 'number', 'swear', 'social', 'family',
'friend', 'humans', 'affect', 'posemo', 'negemo',
'anx', 'anger', 'sad', 'cogmech', 'insight',
'cause', 'discrep', 'tentat', 'certain', 'inhib',
'incl', 'excl', 'percept', 'see', 'hear',
'feel', 'bio', 'body', 'health', 'sexual',
'ingest', 'relativ', 'motion', 'space', 'time',
'work', 'achieve', 'leisure', 'home', 'money',
'relig', 'death', 'assent', 'nonfl', 'filler',
'pronadv', 'shehethey', 'AllPunc', 'Period', 'Comma',
'Colon', 'SemiC', 'QMark', 'Exclam', 'Dash',
'Quote', 'Apostro', 'Parenth', 'OtherP', 'count_punct',
'count_stopwords', 'nr_token', 'nr_adj', 'nr_noun', 'nr_verb',
'nr_number', 'topic1', 'topic2', 'topic3', 'topic4',
'topic5', 'topic6', 'topic7', 'topic8', 'topic9',
'topic10', 'topic11', 'topic12', 'topic13', 'topic14',
'topic15']
# Change object (string) type of features to float
change_type = ['WPS', 'Sixltr',
'Dic', 'funct', 'pronoun', 'ppron', 'i',
'we', 'you', 'shehe', 'they', 'ipron',
'article', 'verb', 'auxverb', 'past', 'present',
'future', 'adverb', 'preps', 'conj', 'negate',
'quant', 'number', 'swear', 'social', 'family',
'friend', 'humans', 'affect', 'posemo', 'negemo',
'anx', 'anger', 'sad', 'cogmech', 'insight',
'cause', 'discrep', 'tentat', 'certain', 'inhib',
'incl', 'excl', 'percept', 'see', 'hear',
'feel', 'bio', 'body', 'health', 'sexual',
'ingest', 'relativ', 'motion', 'space', 'time',
'work', 'achieve', 'leisure', 'home', 'money',
'relig', 'death', 'assent', 'nonfl', 'filler',
'pronadv', 'shehethey', 'AllPunc', 'Period', 'Comma',
'Colon', 'SemiC', 'QMark', 'Exclam', 'Dash',
'Quote', 'Apostro', 'Parenth', 'OtherP']
df[change_type] = df[change_type].apply(lambda x: x.str.replace(',', '.'))
df[change_type] = df[change_type].astype(float).fillna(0.0)
target = df['reenrolled']
df1 = df[categorical_features]
df2 = df[numeric_features]
df = pd.concat([df1, df2], axis=1)
df = pd.concat([df, target], axis=1)
# keep test_set apart
df_train, df_test = train_test_split(df, test_size=0.25, stratify=df['reenrolled'], shuffle=True,
random_state=0)
X_train = df_train.loc[:,df_train.columns !='reenrolled']
y_train = df_train['reenrolled']
X_test = df_test.loc[:,df_test.columns !='reenrolled']
y_test = df_test['reenrolled']
numeric_transformer = Pipeline(steps=[('scaler', StandardScaler())])
categorical_transformer = Pipeline(steps=[
('onehot', OneHotEncoder(handle_unknown='ignore'))])
preprocessor = ColumnTransformer(
transformers=[
('num', numeric_transformer, numeric_features),
('cat', categorical_transformer, categorical_features)])
pipe = make_pipeline(preprocessor,
SVC(class_weight='balanced'))
scores = cross_val_score(pipe, X_train, y_train, cv=5)
print('5-fold cross validation scores:', scores)
print('average of 5-fold cross validation scores:', scores.mean())
pipe.fit(X_train, y_train)
predictions = pipe.predict(X_test)
print("Accuracy for SVM on test_set: %s" % accuracy_score(y_test, predictions))
cm = confusion_matrix(y_test, predictions)
print(classification_report(y_test, predictions))
sns.heatmap(cm / np.sum(cm), annot=True, fmt='.2%', cmap='Blues')
plt.show()
df = pd.read_csv(r'..\data\processed\motivation_liwc_meta_pos_topic_n15.csv')
svm_all_nontext_features(df)
```
# 4- Text + Initial non-textual features
```
def svm_text_initial_features(df):
stopword_list = list(stopwords.words('dutch'))  # NLTK corpus fileids are lowercase
df = df.fillna(method='ffill')
categorical_features = ['cohort', 'field', 'prior_educ', 'previously_enrolled', 'multiple_requests', 'gender',
'interest', 'ase', 'year', 'program']
numeric_features = ['age', 'HSGPA']
text_features = ['motivation']
target = df['reenrolled']
get_text_data = FunctionTransformer(lambda x: x['motivation'], validate=False)
get_numeric_data = FunctionTransformer(lambda x: x[numeric_features], validate=False)
get_categorical_data = FunctionTransformer(lambda x: x[categorical_features], validate=False)
process_and_join_features = Pipeline([
('features', FeatureUnion([
('numeric_features', Pipeline([
('selector', get_numeric_data),
('scaler', preprocessing.StandardScaler())
])),
('categorical_features', Pipeline([
('selector', get_categorical_data),
('onehot', OneHotEncoder(handle_unknown='ignore'))
])),
('text_features', Pipeline([
('selector', get_text_data),
('vec', TfidfVectorizer(lowercase=True, stop_words=stopword_list))
]))
])),
('clf', SVC(class_weight='balanced'))
])
df = df.dropna()
text = df['motivation']
num = df[numeric_features]
cat = df[categorical_features]
df_features =pd.concat([text,num], axis=1)
df_features =pd.concat([df_features,cat], axis=1)
X_train, X_val, y_train, y_val = train_test_split(df_features, df['reenrolled'], stratify=df['reenrolled'], test_size=0.25, random_state=0)
process_and_join_features.fit(X_train, y_train)
# predictions_lt = process_and_join_features.predict(X_val)
scores = cross_val_score(process_and_join_features, X_train, y_train, cv=5)
print('5-fold cross validation scores:', scores)
print('average of 5-fold cross validation scores:', scores.mean())
process_and_join_features.fit(X_train, y_train)
predictions = process_and_join_features.predict(X_val)
print("Accuracy for SVM on test_set: %s" % accuracy_score(y_val, predictions))
cm = confusion_matrix(y_val, predictions)
print(classification_report(y_val, predictions))
sns.heatmap(cm / np.sum(cm), annot=True, fmt='.2%', cmap='Blues')
plt.show()
df = pd.read_csv(r'..\data\processed\motivation_liwc_meta_pos_topic_n15.csv')
svm_text_initial_features(df)
```
# 5- Text + all non-textual features
```
def svm_all_features(df):
stopword_list = list(stopwords.words('dutch'))  # NLTK corpus fileids are lowercase
df = df.fillna(method='ffill')
categorical_features = ['cohort', 'field', 'prior_educ', 'previously_enrolled', 'multiple_requests', 'gender',
'interest', 'ase', 'year', 'program']
numeric_features = ['age', 'HSGPA', 'WC', 'WPS', 'Sixltr',
'Dic', 'funct', 'pronoun', 'ppron', 'i',
'we', 'you', 'shehe', 'they', 'ipron',
'article', 'verb', 'auxverb', 'past', 'present',
'future', 'adverb', 'preps', 'conj', 'negate',
'quant', 'number', 'swear', 'social', 'family',
'friend', 'humans', 'affect', 'posemo', 'negemo',
'anx', 'anger', 'sad', 'cogmech', 'insight',
'cause', 'discrep', 'tentat', 'certain', 'inhib',
'incl', 'excl', 'percept', 'see', 'hear',
'feel', 'bio', 'body', 'health', 'sexual',
'ingest', 'relativ', 'motion', 'space', 'time',
'work', 'achieve', 'leisure', 'home', 'money',
'relig', 'death', 'assent', 'nonfl', 'filler',
'pronadv', 'shehethey', 'AllPunc', 'Period', 'Comma',
'Colon', 'SemiC', 'QMark', 'Exclam', 'Dash',
'Quote', 'Apostro', 'Parenth', 'OtherP', 'count_punct',
'count_stopwords', 'nr_token', 'nr_adj', 'nr_noun', 'nr_verb',
'nr_number', 'topic1', 'topic2', 'topic3', 'topic4',
'topic5', 'topic6', 'topic7', 'topic8', 'topic9',
'topic10', 'topic11', 'topic12', 'topic13', 'topic14',
'topic15']
text_features = ['motivation']
target = df['reenrolled']
get_text_data = FunctionTransformer(lambda x: x['motivation'], validate=False)
get_numeric_data = FunctionTransformer(lambda x: x[numeric_features], validate=False)
get_categorical_data = FunctionTransformer(lambda x: x[categorical_features], validate=False)
process_and_join_features = Pipeline([
('features', FeatureUnion([
('numeric_features', Pipeline([
('selector', get_numeric_data),
('scaler', preprocessing.StandardScaler())
])),
('categorical_features', Pipeline([
('selector', get_categorical_data),
('onehot', OneHotEncoder(handle_unknown='ignore'))
])),
('text_features', Pipeline([
('selector', get_text_data),
('vec', TfidfVectorizer(lowercase=True, stop_words=stopword_list)),
('feature_selection', SelectFromModel(LinearSVC(penalty="l1", dual=False)))
]))
])),
('clf', SVC(class_weight='balanced'))
])
df = df.dropna()
text = df['motivation']
num = df[numeric_features]
cat = df[categorical_features]
df_features =pd.concat([text,num], axis=1)
df_features =pd.concat([df_features,cat], axis=1)
X_train, X_val, y_train, y_val = train_test_split(df_features, df['reenrolled'], stratify=df['reenrolled'], test_size=0.25, random_state=0)
process_and_join_features.fit(X_train, y_train)
# predictions_lt = process_and_join_features.predict(X_val)
scores = cross_val_score(process_and_join_features, X_train, y_train, cv=5)
print('5-fold cross validation scores:', scores)
print('average of 5-fold cross validation scores:', scores.mean())
process_and_join_features.fit(X_train, y_train)
predictions = process_and_join_features.predict(X_val)
print("Accuracy for SVM on test_set: %s" % accuracy_score(y_val, predictions))
cm = confusion_matrix(y_val, predictions)
print(classification_report(y_val, predictions))
sns.heatmap(cm / np.sum(cm), annot=True, fmt='.2%', cmap='Blues')
plt.show()
df = pd.read_csv(r'..\data\processed\motivation_liwc_meta_pos_topic_n15.csv')
change_type = ['WPS', 'Sixltr',
'Dic', 'funct', 'pronoun', 'ppron', 'i',
'we', 'you', 'shehe', 'they', 'ipron',
'article', 'verb', 'auxverb', 'past', 'present',
'future', 'adverb', 'preps', 'conj', 'negate',
'quant', 'number', 'swear', 'social', 'family',
'friend', 'humans', 'affect', 'posemo', 'negemo',
'anx', 'anger', 'sad', 'cogmech', 'insight',
'cause', 'discrep', 'tentat', 'certain', 'inhib',
'incl', 'excl', 'percept', 'see', 'hear',
'feel', 'bio', 'body', 'health', 'sexual',
'ingest', 'relativ', 'motion', 'space', 'time',
'work', 'achieve', 'leisure', 'home', 'money',
'relig', 'death', 'assent', 'nonfl', 'filler',
'pronadv', 'shehethey', 'AllPunc', 'Period', 'Comma',
'Colon', 'SemiC', 'QMark', 'Exclam', 'Dash',
'Quote', 'Apostro', 'Parenth', 'OtherP']
df[change_type] = df[change_type].apply(lambda x: x.str.replace(',', '.'))
df[change_type] = df[change_type].astype(float).fillna(0.0)
svm_all_features(df)
```
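The `SelectFromModel(LinearSVC(penalty="l1", dual=False))` step above keeps only the TF-IDF features that receive a nonzero weight under an L1 penalty. A standalone sketch on a made-up corpus (documents and labels are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import LinearSVC

docs = ["good course great teacher", "bad course boring teacher",
        "great program good staff", "boring lectures bad staff"]
labels = [1, 0, 1, 0]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

# L1-regularized LinearSVC zeroes out uninformative terms; SelectFromModel
# then drops the columns whose weight fell below its threshold.
selector = SelectFromModel(LinearSVC(penalty="l1", dual=False, C=10.0))
X_sel = selector.fit_transform(X, labels)
print(X.shape[1], '->', X_sel.shape[1])
```

The selected set is always a subset of the original vocabulary, which keeps the downstream `SVC` from overfitting to rare terms.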
# 6 - Text + all features excluding LIWC
```
def svm_all_without_liwc_features(df):
stopword_list = list(stopwords.words('dutch'))  # NLTK corpus fileids are lowercase
df = df.fillna(method='ffill')
categorical_features = ['cohort', 'field', 'prior_educ', 'previously_enrolled', 'multiple_requests', 'gender',
'interest', 'ase', 'year', 'program']
numeric_features = ['age', 'HSGPA', 'nr_token', 'nr_adj', 'nr_noun', 'nr_verb',
'nr_number', 'topic1', 'topic2', 'topic3', 'topic4',
'topic5', 'topic6', 'topic7', 'topic8', 'topic9',
'topic10', 'topic11', 'topic12', 'topic13', 'topic14',
'topic15']
text_features = ['motivation']
target = df['reenrolled']
get_text_data = FunctionTransformer(lambda x: x['motivation'], validate=False)
get_numeric_data = FunctionTransformer(lambda x: x[numeric_features], validate=False)
get_categorical_data = FunctionTransformer(lambda x: x[categorical_features], validate=False)
process_and_join_features = Pipeline([
('features', FeatureUnion([
('numeric_features', Pipeline([
('selector', get_numeric_data),
('scaler', preprocessing.StandardScaler())
])),
('categorical_features', Pipeline([
('selector', get_categorical_data),
('onehot', OneHotEncoder(handle_unknown='ignore'))
])),
('text_features', Pipeline([
('selector', get_text_data),
('vec', TfidfVectorizer(lowercase=True, stop_words=stopword_list)),
('feature_selection', SelectFromModel(LinearSVC(penalty="l1", dual=False)))
]))
])),
('clf', SVC(class_weight='balanced'))
])
df = df.dropna()
text = df['motivation']
num = df[numeric_features]
cat = df[categorical_features]
df_features =pd.concat([text,num], axis=1)
df_features =pd.concat([df_features,cat], axis=1)
X_train, X_val, y_train, y_val = train_test_split(df_features, df['reenrolled'], stratify=df['reenrolled'], test_size=0.25, random_state=0)
process_and_join_features.fit(X_train, y_train)
# predictions_lt = process_and_join_features.predict(X_val)
scores = cross_val_score(process_and_join_features, X_train, y_train, cv=5)
print('5-fold cross validation scores:', scores)
print('average of 5-fold cross validation scores:', scores.mean())
process_and_join_features.fit(X_train, y_train)
predictions = process_and_join_features.predict(X_val)
print("Accuracy for SVM on test_set: %s" % accuracy_score(y_val, predictions))
cm = confusion_matrix(y_val, predictions)
print(classification_report(y_val, predictions))
sns.heatmap(cm / np.sum(cm), annot=True, fmt='.2%', cmap='Blues')
plt.show()
df = pd.read_csv(r'..\data\processed\motivation_liwc_meta_pos_topic_n15.csv')
svm_all_without_liwc_features(df)
```
# Overfitting demo
## Create a dataset based on a true sinusoidal relationship
Let's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \sin(4x)$:
```
import graphlab
import math
import random
import numpy
from matplotlib import pyplot as plt
%matplotlib inline
```
Create random values for x in interval [0,1)
```
random.seed(98103)
n = 30
x = graphlab.SArray([random.random() for i in range(n)]).sort()
```
Compute y
```
y = x.apply(lambda x: math.sin(4*x))
```
Add random Gaussian noise to y
```
random.seed(1)
e = graphlab.SArray([random.gauss(0,1.0/3.0) for i in range(n)])
y = y + e
```
### Put data into an SFrame to manipulate later
```
data = graphlab.SFrame({'X1':x,'Y':y})
data
```
### Create a function to plot the data, since we'll do it many times
```
def plot_data(data):
plt.plot(data['X1'],data['Y'],'k.')
plt.xlabel('x')
plt.ylabel('y')
plot_data(data)
```
## Define some useful polynomial regression functions
Define a function to create our features for a polynomial regression model of any degree:
```
def polynomial_features(data, deg):
data_copy=data.copy()
for i in range(1,deg):
data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']
return data_copy
```
Define a function to fit a polynomial linear regression model of degree "deg" to the data in "data":
```
def polynomial_regression(data, deg):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,l1_penalty=0.,
validation_set=None,verbose=False)
return model
```
Define a function to plot the data and the predictions made, since we are going to use it many times.
```
def plot_poly_predictions(data, model):
plot_data(data)
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Create 200 points in the x axis and compute the predicted value for each point
x_pred = graphlab.SFrame({'X1':[i/200.0 for i in range(200)]})
y_pred = model.predict(polynomial_features(x_pred,deg))
# plot predictions
plt.plot(x_pred['X1'], y_pred, 'g-', label='degree ' + str(deg) + ' fit')
plt.legend(loc='upper left')
plt.axis([0,1,-1.5,2])
```
Create a function that prints the polynomial coefficients in a pretty way :)
```
def print_coefficients(model):
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Get learned parameters as a list
w = list(model.coefficients['value'])
# Numpy has a nifty function to print out polynomials in a pretty way
# (We'll use it, but it needs the parameters in the reverse order)
print 'Learned polynomial for degree ' + str(deg) + ':'
w.reverse()
print numpy.poly1d(w)
```
## Fit a degree-2 polynomial
Fit our degree-2 polynomial to the data generated above:
```
model = polynomial_regression(data, deg=2)
```
Inspect learned parameters
```
print_coefficients(model)
```
Form and plot our predictions along a grid of x values:
```
plot_poly_predictions(data,model)
```
## Fit a degree-4 polynomial
```
model = polynomial_regression(data, deg=4)
print_coefficients(model)
plot_poly_predictions(data,model)
```
## Fit a degree-16 polynomial
```
model = polynomial_regression(data, deg=16)
print_coefficients(model)
```
### Woah!!!! Those coefficients are *crazy*! On the order of 10^6.
```
plot_poly_predictions(data,model)
```
### Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients.
#
#
#
#
# Ridge Regression
Ridge regression aims to avoid overfitting by adding a cost to the RSS term of standard least squares that depends on the 2-norm of the coefficients $\|w\|_2$. The result is penalizing fits with large coefficients. The strength of this penalty, and thus the fit vs. model complexity balance, is controlled by a parameter lambda (here called "L2_penalty").
Define our function to solve the ridge objective for a polynomial regression model of any degree:
```
def polynomial_ridge_regression(data, deg, l2_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=l2_penalty,
validation_set=None,verbose=False)
return model
```
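GraphLab Create is no longer maintained, so here is an equivalent ridge-on-polynomial-features sketch using scikit-learn (the library choice is mine, not the notebook's). It reproduces the key behavior: a tiny penalty leaves the wild high-magnitude coefficients, a large one shrinks them hard.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures

# Same setup as the notebook: 30 noisy samples of sin(4x) on [0, 1).
rng = np.random.RandomState(98103)
x = np.sort(rng.rand(30))
y = np.sin(4 * x) + rng.normal(0, 1.0 / 3.0, 30)

# Degree-16 polynomial features, as in the degree-16 fits above.
X = PolynomialFeatures(degree=16, include_bias=False).fit_transform(x[:, None])

w_small = Ridge(alpha=1e-6).fit(X, y).coef_   # nearly unregularized
w_large = Ridge(alpha=100.0).fit(X, y).coef_  # strongly regularized
print(np.linalg.norm(w_small), '>', np.linalg.norm(w_large))
```

`alpha` plays the role of "L2_penalty" here.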
## Perform a ridge fit of a degree-16 polynomial using a *very* small penalty strength
```
model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)
print_coefficients(model)
plot_poly_predictions(data,model)
```
## Perform a ridge fit of a degree-16 polynomial using a very large penalty strength
```
model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)
print_coefficients(model)
plot_poly_predictions(data,model)
```
## Let's look at fits for a sequence of increasing lambda values
```
for l2_penalty in [1e-25, 1e-10, 1e-6, 1e-3, 1e2]:
model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)
print 'lambda = %.2e' % l2_penalty
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('Ridge, lambda = %.2e' % l2_penalty)
data
```
## Perform a ridge fit of a degree-16 polynomial using a "good" penalty strength
We will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider "leave one out" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.
```
# LOO cross validation -- return the average MSE
def loo(data, deg, l2_penalty_values):
# Create polynomial features
data = polynomial_features(data, deg)
    # Create as many folds for cross validation as number of data points
num_folds = len(data)
folds = graphlab.cross_validation.KFold(data,num_folds)
# for each value of l2_penalty, fit a model for each fold and compute average MSE
l2_penalty_mse = []
min_mse = None
best_l2_penalty = None
for l2_penalty in l2_penalty_values:
next_mse = 0.0
for train_set, validation_set in folds:
# train model
model = graphlab.linear_regression.create(train_set,target='Y',
l2_penalty=l2_penalty,
validation_set=None,verbose=False)
# predict on validation set
y_test_predicted = model.predict(validation_set)
# compute squared error
next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()
# save squared error in list of MSE for each l2_penalty
next_mse = next_mse/num_folds
l2_penalty_mse.append(next_mse)
if min_mse is None or next_mse < min_mse:
min_mse = next_mse
best_l2_penalty = l2_penalty
return l2_penalty_mse,best_l2_penalty
```
Run LOO cross validation for "num" values of lambda, on a log scale
```
l2_penalty_values = numpy.logspace(-4, 10, num=10)
l2_penalty_mse,best_l2_penalty = loo(data, 16, l2_penalty_values)
```
Plot results of estimating LOO for each value of lambda
```
plt.plot(l2_penalty_values,l2_penalty_mse,'k-')
plt.xlabel('$\ell_2$ penalty')
plt.ylabel('LOO cross validation error')
plt.xscale('log')
plt.yscale('log')
```
Find the value of lambda, $\lambda_{\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit
```
best_l2_penalty
model = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)
print_coefficients(model)
plot_poly_predictions(data,model)
```
#
#
#
#
# Lasso Regression
Lasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called "L1_penalty"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost of the coefficients $\|w\|_1$.
Define our function to solve the lasso objective for a polynomial regression model of any degree:
```
def polynomial_lasso_regression(data, deg, l1_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,
l1_penalty=l1_penalty,
validation_set=None,
solver='fista', verbose=False,
max_iterations=3000, convergence_threshold=1e-10)
return model
```
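Again with scikit-learn for comparison (my substitution, since GraphLab is unavailable): as the penalty grows, the lasso drives more coefficients exactly to zero, mirroring the nonzero counts printed below.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

# Same synthetic data as above: 30 noisy samples of sin(4x).
rng = np.random.RandomState(98103)
x = np.sort(rng.rand(30))
y = np.sin(4 * x) + rng.normal(0, 1.0 / 3.0, 30)

X = PolynomialFeatures(degree=16, include_bias=False).fit_transform(x[:, None])

# Larger alpha -> sparser coefficient vector.
for alpha in (1e-4, 1e-2, 1.0):
    w = Lasso(alpha=alpha, max_iter=100000).fit(X, y).coef_
    print('alpha=%g  nonzeros=%d' % (alpha, np.count_nonzero(w)))
```

Note that scikit-learn's `alpha` is scaled by the number of samples, so its values are not directly comparable to GraphLab's `l1_penalty`.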
## Explore the lasso solution as a function of a few different penalty strengths
We refer to lambda in the lasso case below as "l1_penalty"
```
for l1_penalty in [0.0001, 0.01, 0.1, 10]:
model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)
print 'l1_penalty = %e' % l1_penalty
print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))
```
Above: We see that as lambda increases, we get sparser and sparser solutions. However, even for our non-sparse case for lambda=0.0001, the fit of our high-order polynomial is not too wild. This is because, like in ridge, coefficients included in the lasso solution are shrunk relative to those of the least squares (unregularized) solution. This leads to better behavior even without sparsity. Of course, as lambda goes to 0, the amount of this shrinkage decreases and the lasso solution approaches the (wild) least squares solution.
# TensorFlow Reproducibility
```
from __future__ import division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
from tensorflow import keras
```
## Checklist
1. Do not run TensorFlow on the GPU.
2. Beware of multithreading, and make TensorFlow single-threaded.
3. Set all the random seeds.
4. Eliminate any other source of variability.
## Do Not Run TensorFlow on the GPU
Some operations (like `tf.reduce_sum()`) favor performance over precision, and their outputs may vary slightly across runs. To get reproducible results, make sure TensorFlow runs on the CPU:
```
import os
os.environ["CUDA_VISIBLE_DEVICES"]=""
```
## Beware of Multithreading
Because floats have limited precision, the order of execution matters:
```
2. * 5. / 7.
2. / 7. * 5.
```
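The same non-associativity shows up with plain Python floats, independent of TensorFlow:

```python
# Floating-point addition is not associative: grouping changes the
# least significant bits of the result.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)              # False
print(repr(a), repr(b))    # 0.6000000000000001 vs 0.6
```

With multiple threads, the order in which partial sums are combined is not fixed, so such bit-level differences accumulate differently on every run.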
You should make sure TensorFlow runs your ops on a single thread:
```
config = tf.ConfigProto(intra_op_parallelism_threads=1,
inter_op_parallelism_threads=1)
with tf.Session(config=config) as sess:
#... this will run single threaded
pass
```
The thread pools for all sessions are created when you create the first session, so all sessions in the rest of this notebook will be single-threaded:
```
with tf.Session() as sess:
#... also single-threaded!
pass
```
## Set all the random seeds!
### Python's built-in `hash()` function
```
print(set("Try restarting the kernel and running this again"))
print(set("Try restarting the kernel and running this again"))
```
Since Python 3.3, the result will be different every time, unless you start Python with the `PYTHONHASHSEED` environment variable set to `0`:
```shell
PYTHONHASHSEED=0 python
```
```pycon
>>> print(set("Now the output is stable across runs"))
{'n', 'b', 'h', 'o', 'i', 'a', 'r', 't', 'p', 'N', 's', 'c', ' ', 'l', 'e', 'w', 'u'}
>>> exit()
```
```shell
PYTHONHASHSEED=0 python
```
```pycon
>>> print(set("Now the output is stable across runs"))
{'n', 'b', 'h', 'o', 'i', 'a', 'r', 't', 'p', 'N', 's', 'c', ' ', 'l', 'e', 'w', 'u'}
```
Alternatively, you could set this environment variable system-wide, but that's probably not a good idea, because this automatic randomization was [introduced for security reasons](http://ocert.org/advisories/ocert-2011-003.html).
Unfortunately, setting the environment variable from within Python (e.g., using `os.environ["PYTHONHASHSEED"]="0"`) will not work, because Python reads it upon startup. For Jupyter notebooks, you have to start the Jupyter server like this:
```shell
PYTHONHASHSEED=0 jupyter notebook
```
```
if os.environ.get("PYTHONHASHSEED") != "0":
raise Exception("You must set PYTHONHASHSEED=0 when starting the Jupyter server to get reproducible results.")
```
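The effect can also be verified from a running interpreter by spawning fresh subprocesses, since each one starts a new interpreter that reads `PYTHONHASHSEED` at startup (a sketch using only the standard library; the helper name is mine):

```python
import os
import subprocess
import sys

def hash_in_subprocess(seed):
    # Each subprocess is a fresh interpreter, so PYTHONHASHSEED takes effect.
    env = dict(os.environ, PYTHONHASHSEED=seed)
    out = subprocess.run([sys.executable, "-c", "print(hash('spam'))"],
                         env=env, capture_output=True, text=True)
    return out.stdout.strip()

# With the seed pinned, the hash is identical across interpreter runs.
assert hash_in_subprocess("0") == hash_in_subprocess("0")
print("seeded runs agree")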
### Python Random Number Generators (RNGs)
```
import random
random.seed(42)
print(random.random())
print(random.random())
print()
random.seed(42)
print(random.random())
print(random.random())
```
### NumPy RNGs
```
import numpy as np
np.random.seed(42)
print(np.random.rand())
print(np.random.rand())
print()
np.random.seed(42)
print(np.random.rand())
print(np.random.rand())
```
### TensorFlow RNGs
TensorFlow's behavior is more complex because of two things:
* you create a graph, and then you execute it. The random seed must be set before you create the random operations.
* there are two seeds: one at the graph level, and one at the individual random operation level.
```
import tensorflow as tf
tf.set_random_seed(42)
rnd = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
```
Every time you reset the graph, you need to set the seed again:
```
tf.reset_default_graph()
tf.set_random_seed(42)
rnd = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
```
If you create your own graph, it will ignore the default graph's seed:
```
tf.reset_default_graph()
tf.set_random_seed(42)
graph = tf.Graph()
with graph.as_default():
rnd = tf.random_uniform(shape=[])
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
```
You must set the new graph's own seed:
```
graph = tf.Graph()
with graph.as_default():
tf.set_random_seed(42)
rnd = tf.random_uniform(shape=[])
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
```
If you set the seed after the random operation is created, the seed has no effect:
```
tf.reset_default_graph()
rnd = tf.random_uniform(shape=[])
tf.set_random_seed(42) # BAD, NO EFFECT!
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
print()
tf.set_random_seed(42) # BAD, NO EFFECT!
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
```
#### A note about operation seeds
You can also set a seed for each individual random operation. When you do, it is combined with the graph seed into the final seed used by that op. The following table summarizes how this works:
| Graph seed | Op seed | Resulting seed |
|------------|---------|--------------------------------|
| None | None | Random |
| graph_seed | None | f(graph_seed, op_index) |
| None | op_seed | f(default_graph_seed, op_seed) |
| graph_seed | op_seed | f(graph_seed, op_seed) |
* `f()` is a deterministic function.
* `op_index = graph._last_id`, so when there is a graph seed, different random ops without op seeds will have different outputs. However, each of them will have the same sequence of outputs at every run.
In eager mode, there is a global seed instead of graph seed (since there is no graph in eager mode).
```
tf.reset_default_graph()
rnd1 = tf.random_uniform(shape=[], seed=42)
rnd2 = tf.random_uniform(shape=[], seed=42)
rnd3 = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print()
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
```
In the following example, you may think that all random ops will have the same random seed, but `rnd3` will actually have a different seed:
```
tf.reset_default_graph()
tf.set_random_seed(42)
rnd1 = tf.random_uniform(shape=[], seed=42)
rnd2 = tf.random_uniform(shape=[], seed=42)
rnd3 = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print()
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
```
#### Estimators API
**Tip**: in a Jupyter notebook, you probably want to set the random seeds regularly so that you can come back and run the notebook from there (instead of from the beginning) and still get reproducible outputs.
```
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)
```
If you use the Estimators API, make sure to create a `RunConfig` and set its `tf_random_seed`, then pass it to the constructor of your estimator:
```
my_config = tf.estimator.RunConfig(tf_random_seed=42)
feature_cols = [tf.feature_column.numeric_column("X", shape=[28 * 28])]
dnn_clf = tf.estimator.DNNClassifier(hidden_units=[300, 100], n_classes=10,
feature_columns=feature_cols,
config=my_config)
```
Let's try it on MNIST:
```
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0
y_train = y_train.astype(np.int32)
```
Unfortunately, the `numpy_input_fn` does not allow us to set the seed when `shuffle=True`, so we must shuffle the data ourselves and set `shuffle=False`.
```
indices = np.random.permutation(len(X_train))
X_train_shuffled = X_train[indices]
y_train_shuffled = y_train[indices]
input_fn = tf.estimator.inputs.numpy_input_fn(
x={"X": X_train_shuffled}, y=y_train_shuffled, num_epochs=10, batch_size=32, shuffle=False)
dnn_clf.train(input_fn=input_fn)
```
The final loss should be exactly 0.46282205.
Instead of using the `numpy_input_fn()` function (which cannot reproducibly shuffle the dataset at each epoch), you can create your own input function using the Data API and set its shuffling seed:
```
def create_dataset(X, y=None, n_epochs=1, batch_size=32,
buffer_size=1000, seed=None):
dataset = tf.data.Dataset.from_tensor_slices(({"X": X}, y))
dataset = dataset.repeat(n_epochs)
dataset = dataset.shuffle(buffer_size, seed=seed)
return dataset.batch(batch_size)
input_fn=lambda: create_dataset(X_train, y_train, seed=42)
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)
my_config = tf.estimator.RunConfig(tf_random_seed=42)
feature_cols = [tf.feature_column.numeric_column("X", shape=[28 * 28])]
dnn_clf = tf.estimator.DNNClassifier(hidden_units=[300, 100], n_classes=10,
feature_columns=feature_cols,
config=my_config)
dnn_clf.train(input_fn=input_fn)
```
The final loss should be exactly 1.0556093.
```python
indices = np.random.permutation(len(X_train))
X_train_shuffled = X_train[indices]
y_train_shuffled = y_train[indices]
input_fn = tf.estimator.inputs.numpy_input_fn(
x={"X": X_train_shuffled}, y=y_train_shuffled,
num_epochs=10, batch_size=32, shuffle=False)
dnn_clf.train(input_fn=input_fn)
```
#### Keras API
If you use the Keras API, all you need to do is set the random seed any time you clear the session:
```
keras.backend.clear_session()
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(300, activation="relu"),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10)
```
You should get exactly 97.16% accuracy on the training set at the end of training.
## Eliminate other sources of variability
For example, `os.listdir()` returns file names in an order that depends on how the files were indexed by the file system:
```
for i in range(10):
with open("my_test_foo_{}".format(i), "w"):
pass
[f for f in os.listdir() if f.startswith("my_test_foo_")]
for i in range(10):
with open("my_test_bar_{}".format(i), "w"):
pass
[f for f in os.listdir() if f.startswith("my_test_bar_")]
```
You should sort the file names before you use them:
```
filenames = os.listdir()
filenames.sort()
[f for f in filenames if f.startswith("my_test_foo_")]
for f in os.listdir():
if f.startswith("my_test_foo_") or f.startswith("my_test_bar_"):
os.remove(f)
```
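Equivalently, `sorted(os.listdir(...))` returns a new, deterministically ordered list in one step. A small self-contained sketch using a temporary directory (so no files are left behind):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Create files in a non-alphabetical order on purpose
    for name in ("b.txt", "c.txt", "a.txt"):
        open(os.path.join(tmp, name), "w").close()
    # sorted() makes the order deterministic regardless of filesystem indexing
    names = sorted(os.listdir(tmp))
    print(names)
    assert names == ["a.txt", "b.txt", "c.txt"]
```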
I hope you enjoyed this notebook. If you do not get reproducible results, or if they are different from mine, then please [file an issue](https://github.com/ageron/handson-ml/issues) on GitHub, specifying what version of Python, TensorFlow, and NumPy you are using, as well as your OS version. Thank you!
If you want to learn more about Deep Learning and TensorFlow, check out my book [Hands-On Machine Learning with Scikit-Learn and TensorFlow](http://homl.info/amazon), O'Reilly. You can also follow me on Twitter [@aureliengeron](https://twitter.com/aureliengeron) or watch my videos on YouTube at [youtube.com/c/AurelienGeron](https://www.youtube.com/c/AurelienGeron).
## next_permutation
Implement next permutation, which rearranges numbers into the lexicographically next greater permutation of numbers.
If such an arrangement is not possible, it must be rearranged as the lowest possible order (i.e., sorted in ascending order).
The replacement must be in-place and use only constant extra memory.
Here are some examples. Inputs are in the left-hand column and its corresponding outputs are in the right-hand column.
<b>1,2,3 → 1,3,2</b><br>
<b>3,2,1 → 1,2,3</b><br>
<b>1,1,5 → 1,5,1</b>
To illustrate the algorithm with an example, consider nums = [2,3,1,5,4,2]. <br>
It is easy to see that i = 2 is the first i (from the right) such that nums[i] < nums[i+1].<br>
Then we swap nums[2] = 1 with the smallest number in nums[3:] that is larger than 1, which is nums[5] = 2, after which we get nums = [2,3,2,5,4,1]. <br>
To get the lexicographically next greater permutation of nums, we just need to sort nums[3:] = [5,4,1] in-place. <br>
Finally, we reach nums = [2,3,2,1,4,5].
```
#SOLUTION WITHOUT COMMENTS
def next_perm(ls):
n = len(ls)
for i in range(n-1, 0, -1):
if ls[i] > ls[i-1]:
j = i
while j < n and ls[j] > ls[i-1]:
idx = j
j += 1
ls[idx], ls[i-1] = ls[i-1], ls[idx]
for k in range((n-i)//2):
ls[i+k], ls[n-1-k] = ls[n-1-k], ls[i+k]
break
else:
ls.reverse()
return ls
print(next_perm([2,3,1,5,4,2]))
print(next_perm([3,2,1]))
print(next_perm([1,2,3]))
print(next_perm([1,1,5]))
#SOLUTION WITH COMMENTS
def next_perm(ls):
# Find the length of the list
n = len(ls)
print("Length of the list :", len(ls))
# Decrement from the last element to the first
for i in range(n-1, 0, -1):
# If the current element is greater than the one before it, we've found
# the pivot ls[i-1]: the first decreasing element scanning from the right
if ls[i] > ls[i-1]:
print("Found the first decreasing element ls[i]",ls[i])
j = i
print("Reset both the pointers",ls[i], i,j)
#Here the pointer has been reset
# Find the rightmost element that is still greater than the pivot ls[i-1]
# by incrementing j while ls[j] > ls[i-1]
while j < n and ls[j] > ls[i-1]:
idx = j
j += 1
# Swap the pivot ls[i-1] with that element
ls[idx], ls[i-1] = ls[i-1], ls[idx]
# double-slash for “floor” division (rounds down to nearest whole number)
for k in range((n-i)//2):
ls[i+k], ls[n-1-k] = ls[n-1-k], ls[i+k]
break
else:
ls.reverse()
return ls
print(next_perm([2,3,1,5,4,2]))
print(next_perm([3,2,1]))
print(next_perm([1,1,5]))
#If all values are in descending order then we need to only reverse
print(next_perm([4,3,2,1]))
```
```
def next_perm(ls):
# Find the length of the list
print("ls",ls)
n = len(ls)
print("Length of the list :", len(ls))
# Decrement from the last element to the first
for i in range(n-1, 0, -1):
# If the current element is greater than the one before it, we've found
# the pivot ls[i-1]: the first decreasing element scanning from the right
if ls[i] > ls[i-1]:
print("\nFIND FIRST DECREASING ELEMENT")
print("First decreasing element ls[i]:i",ls[i-1],i-1)
j = i
print("Reset both the pointers",i,j)
print("ls",ls)
#Here the pointer has been reset
# Find the rightmost element that is still greater than the pivot ls[i-1]
# by incrementing j while ls[j] > ls[i-1]
while j < n and ls[j] > ls[i-1]:
print("\nFIND NEXT GREATEST ELEMENT")
print("j : {},len n :{},ls[j]:{},ls[i-1] {}".format(j,n,ls[j],ls[i-1]))
idx = j
j += 1
print("idx is the pointer next greater num, idx {} ls[idx] {}".format(idx,ls[idx]))
# swap the elements in the list 4 and 5
ls[idx], ls[i-1] = ls[i-1], ls[idx]
print("After swap, ls[idx] is {} and ls[i-1] is {}".format(ls[idx],ls[i-1]))
print("\n",ls)
# double-slash for “floor” division (rounds down to nearest whole number)
print("\nfind (n-i)//2) => ({} - {} )// 2 = {}".format(n,i,(n-i)//2))
for k in range((n-i)//2):
print("\nREVERSE")
print("n",n)
print("i",i)
print("ls[i]",ls[i])
print("(n-i)//2",(n-i)//2)
print("k",k)
print("ls[i+k]",ls[i+k])
print("ls[n-1]",ls[n-1])
print("ls[n-1-k]",ls[n-1-k])
ls[i+k], ls[n-1-k] = ls[n-1-k], ls[i+k]
break
else:
ls.reverse()
return ls
print(next_perm([2,3,1,5,4,2]))
"""print(next_perm([3,2,1]))
print(next_perm([1,1,5]))"""
#BRUTE FORCE. DOES NOT SOLVE ALL CASES
def next_perm(ls):
# Check the max element in the list
max_num=max(ls)
min_num=min(ls)
head =0
tail =len(ls)-1
#We know the max for the first element has been reached then swap the max element
if ls[0] == max_num:
temp= ls[-1]
ls[-1]=ls[0]
ls[0] =temp
return ls
while head <=tail:
if ls[tail] > ls[tail-1]:
temp= ls[tail]
ls[tail]=ls[tail-1]
ls[tail-1] =temp
return ls
head +=1
tail -=1
return ls
print(next_perm([3,2,1]))
print(next_perm([1,2,3]))
print(next_perm([1,1,5]))
```
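As an extra sanity check (not part of the original solutions above), the in-place algorithm can be verified against a brute-force answer built from all sorted permutations, using only the standard library's `itertools`:

```python
from itertools import permutations

def next_perm(ls):
    # Same algorithm as the solution above, repeated here so this check is self-contained
    n = len(ls)
    for i in range(n - 1, 0, -1):
        if ls[i] > ls[i - 1]:
            j = i
            while j < n and ls[j] > ls[i - 1]:
                idx = j
                j += 1
            ls[idx], ls[i - 1] = ls[i - 1], ls[idx]
            for k in range((n - i) // 2):
                ls[i + k], ls[n - 1 - k] = ls[n - 1 - k], ls[i + k]
            break
    else:
        ls.reverse()
    return ls

def brute_force_next(ls):
    # All distinct permutations in lexicographic order, wrapping around at the end
    perms = sorted(set(permutations(ls)))
    i = perms.index(tuple(ls))
    return list(perms[(i + 1) % len(perms)])

for case in ([1, 2, 3], [3, 2, 1], [1, 1, 5], [2, 3, 1, 5, 4, 2]):
    assert next_perm(case[:]) == brute_force_next(case)
print("all cases match")
```

The brute force is exponential in the list length, so it is only practical for short test inputs, but it makes a convenient oracle for the constant-memory solution.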
<a href="https://colab.research.google.com/github/partha1189/machine_learning/blob/master/CONV1D_LSTM_time_series.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.05
noise_level = 5
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=42)
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
def windowed_dataset(series, window_size, batch_size, shuffle_buffer_size):
series = tf.expand_dims(series, axis=-1)
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift= 1, drop_remainder =True)
dataset = dataset.flat_map(lambda window:window.batch(window_size+1))
dataset = dataset.shuffle(shuffle_buffer_size)
dataset = dataset.map(lambda window: (window[:-1], window[-1:]))
return dataset.batch(batch_size).prefetch(1)
def model_forecast(model, series, window_size):
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size))
ds = ds.batch(32).prefetch(1)
forecast = model.predict(ds)
return forecast
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
window_size = 30
train_set = windowed_dataset(x_train, window_size, batch_size=128, shuffle_buffer_size=shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=5, strides=1, padding='causal', activation='relu', input_shape=[None, 1]),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x : x * 200)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(lambda epoch : 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9)
model.compile(loss= tf.keras.losses.Huber(),
optimizer = optimizer,
metrics = ['mae'])
history = model.fit(train_set, epochs = 100, callbacks = [lr_schedule])
plt.semilogx(history.history['lr'], history.history['loss'])
plt.axis([1e-8, 1e-4, 0, 30])
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
#batch_size = 16
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=3,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 200)
])
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(dataset,epochs=500)
rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size)
rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
mae=history.history['mae']
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot MAE and Loss
#------------------------------------------------
plt.plot(epochs, mae, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("MAE / Loss")
plt.legend(["MAE", "Loss"])
plt.figure()
epochs_zoom = epochs[200:]
mae_zoom = mae[200:]
loss_zoom = loss[200:]
#------------------------------------------------
# Plot Zoomed MAE and Loss
#------------------------------------------------
plt.plot(epochs_zoom, mae_zoom, 'r')
plt.plot(epochs_zoom, loss_zoom, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("MAE / Loss")
plt.legend(["MAE", "Loss"])
plt.figure()
```
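The `window`/`flat_map`/`map` chain in `windowed_dataset` above can be hard to visualize. Here is a pure-NumPy sketch of the same slicing (an illustration only, not the TensorFlow pipeline itself): each training example is a window of consecutive values paired with the value that follows it.

```python
import numpy as np

def make_windows(series, window_size):
    # Each row of X: window_size consecutive values; y: the value right after them
    X, y = [], []
    for start in range(len(series) - window_size):
        X.append(series[start:start + window_size])
        y.append(series[start + window_size])
    return np.array(X), np.array(y)

series = np.arange(10, dtype="float32")
X, y = make_windows(series, window_size=3)
print(X[:2])   # first two input windows
print(y[:2])   # their targets
assert X.shape == (7, 3) and y.shape == (7,)
```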
**This notebook is an exercise in the [Time Series](https://www.kaggle.com/learn/time-series) course. You can reference the tutorial at [this link](https://www.kaggle.com/ryanholbrook/hybrid-models).**
---
# Introduction #
Run this cell to set everything up!
```
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.time_series.ex5 import *
# Setup notebook
from pathlib import Path
from learntools.time_series.style import * # plot style settings
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import LabelEncoder
from statsmodels.tsa.deterministic import DeterministicProcess
from xgboost import XGBRegressor
comp_dir = Path('../input/store-sales-time-series-forecasting')
data_dir = Path("../input/ts-course-data")
store_sales = pd.read_csv(
comp_dir / 'train.csv',
usecols=['store_nbr', 'family', 'date', 'sales', 'onpromotion'],
dtype={
'store_nbr': 'category',
'family': 'category',
'sales': 'float32',
},
parse_dates=['date'],
infer_datetime_format=True,
)
store_sales['date'] = store_sales.date.dt.to_period('D')
store_sales = store_sales.set_index(['store_nbr', 'family', 'date']).sort_index()
family_sales = (
store_sales
.groupby(['family', 'date'])
.mean()
.unstack('family')
.loc['2017']
)
```
-------------------------------------------------------------------------------
In the next two questions, you'll create a boosted hybrid for the *Store Sales* dataset by implementing a new Python class. Run this cell to create the initial class definition. You'll add `fit` and `predict` methods to give it a scikit-learn like interface.
```
# You'll add fit and predict methods to this minimal class
class BoostedHybrid:
def __init__(self, model_1, model_2):
self.model_1 = model_1
self.model_2 = model_2
self.y_columns = None # store column names from fit method
```
# 1) Define fit method for boosted hybrid
Complete the `fit` definition for the `BoostedHybrid` class. Refer back to steps 1 and 2 from the **Hybrid Forecasting with Residuals** section in the tutorial if you need.
```
def fit(self, X_1, X_2, y):
# YOUR CODE HERE: fit self.model_1
self.model_1.fit(X_1, y)
y_fit = pd.DataFrame(
# YOUR CODE HERE: make predictions with self.model_1
self.model_1.predict(X_1),
index=X_1.index, columns=y.columns,
)
# YOUR CODE HERE: compute residuals
y_resid = y - y_fit
y_resid = y_resid.stack().squeeze() # wide to long
# YOUR CODE HERE: fit self.model_2 on residuals
self.model_2.fit(X_2, y_resid)
# Save column names for predict method
self.y_columns = y.columns
# Save data for question checking
self.y_fit = y_fit
self.y_resid = y_resid
# Add method to class
BoostedHybrid.fit = fit
# Check your answer
q_1.check()
# Lines below will give you a hint or solution code
#q_1.hint()
q_1.solution()
```
-------------------------------------------------------------------------------
# 2) Define predict method for boosted hybrid
Now define the `predict` method for the `BoostedHybrid` class. Refer back to step 3 from the **Hybrid Forecasting with Residuals** section in the tutorial if you need.
```
def predict(self, X_1, X_2):
y_pred = pd.DataFrame(
# YOUR CODE HERE: predict with self.model_1
self.model_1.predict(X_1),
index=X_1.index, columns=self.y_columns,
)
y_pred = y_pred.stack().squeeze() # wide to long
# YOUR CODE HERE: add self.model_2 predictions to y_pred
y_pred += self.model_2.predict(X_2)
return y_pred.unstack() # long to wide
# Add method to class
BoostedHybrid.predict = predict
# Check your answer
q_2.check()
# Lines below will give you a hint or solution code
#q_2.hint()
q_2.solution()
```
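The mechanics behind `fit` and `predict` (model 1 fits the series, model 2 fits model 1's residuals, and predictions are the sum) can be seen in a tiny NumPy-only sketch with hypothetical data; the real class above plugs in scikit-learn and XGBoost models instead:

```python
import numpy as np

# Hypothetical series: a linear trend plus a constant offset
t = np.arange(20, dtype=float)
y = 2.0 * t + 5.0

# "Model 1": least-squares slope through the origin (deliberately too simple)
slope = (t @ y) / (t @ t)
y_fit = slope * t

# "Model 2": fit what model 1 missed; here just the mean of the residuals
residuals = y - y_fit
offset = residuals.mean()

# Hybrid prediction adds the two models' outputs
y_pred = y_fit + offset
print("model 1 error:", np.abs(y - y_fit).mean())
print("hybrid error: ", np.abs(y - y_pred).mean())
assert np.abs(y - y_pred).mean() < np.abs(y - y_fit).mean()
```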
-------------------------------------------------------------------------------
Now you're ready to use your new `BoostedHybrid` class to create a model for the *Store Sales* data. Run the next cell to set up the data for training.
```
# Target series
y = family_sales.loc[:, 'sales']
# X_1: Features for Linear Regression
dp = DeterministicProcess(index=y.index, order=1)
X_1 = dp.in_sample()
# X_2: Features for XGBoost
X_2 = family_sales.drop('sales', axis=1).stack() # onpromotion feature
# Label encoding for 'family'
le = LabelEncoder() # from sklearn.preprocessing
X_2 = X_2.reset_index('family')
X_2['family'] = le.fit_transform(X_2['family'])
# Label encoding for seasonality
X_2["day"] = X_2.index.day # values are day of the month
```
# 3) Train boosted hybrid
Create the hybrid model by initializing a `BoostedHybrid` class with `LinearRegression()` and `XGBRegressor()` instances.
```
# YOUR CODE HERE: Create LinearRegression + XGBRegressor hybrid with BoostedHybrid
model = BoostedHybrid(
model_1=LinearRegression(),
model_2=XGBRegressor(),
)
# YOUR CODE HERE: Fit and predict
model.fit(X_1, X_2, y)
y_pred = model.predict(X_1, X_2)
y_pred = y_pred.clip(0.0)
# Check your answer
q_3.check()
# Lines below will give you a hint or solution code
#q_3.hint()
q_3.solution()
```
-------------------------------------------------------------------------------
Depending on your problem, you might want to use other hybrid combinations than the linear regression + XGBoost hybrid you've created in the previous questions. Run the next cell to try other algorithms from scikit-learn.
```
# Model 1 (trend)
from pyearth import Earth
from sklearn.linear_model import ElasticNet, Lasso, Ridge
# Model 2
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
# Boosted Hybrid
# YOUR CODE HERE: Try different combinations of the algorithms above
model = BoostedHybrid(
model_1=Ridge(),
model_2=KNeighborsRegressor(),
)
```
These are just some suggestions. You might discover other algorithms you like in the scikit-learn [User Guide](https://scikit-learn.org/stable/supervised_learning.html).
Use the code in this cell to see the predictions your hybrid makes.
```
y_train, y_valid = y[:"2017-07-01"], y["2017-07-02":]
X1_train, X1_valid = X_1[: "2017-07-01"], X_1["2017-07-02" :]
X2_train, X2_valid = X_2.loc[:"2017-07-01"], X_2.loc["2017-07-02":]
# Some of the algorithms above do best with certain kinds of
# preprocessing on the features (like standardization), but this is
# just a demo.
model.fit(X1_train, X2_train, y_train)
y_fit = model.predict(X1_train, X2_train).clip(0.0)
y_pred = model.predict(X1_valid, X2_valid).clip(0.0)
families = y.columns[0:6]
axs = y.loc(axis=1)[families].plot(
subplots=True, sharex=True, figsize=(11, 9), **plot_params, alpha=0.5,
)
_ = y_fit.loc(axis=1)[families].plot(subplots=True, sharex=True, color='C0', ax=axs)
_ = y_pred.loc(axis=1)[families].plot(subplots=True, sharex=True, color='C3', ax=axs)
for ax, family in zip(axs, families):
ax.legend([])
ax.set_ylabel(family)
```
# 4) Fit with different learning algorithms
Once you're ready to move on, run the next cell for credit on this question.
```
# View the solution (Run this cell to receive credit!)
q_4.check()
```
# Keep Going #
[**Convert any forecasting task**](https://www.kaggle.com/ryanholbrook/forecasting-with-machine-learning) to a machine learning problem with four ML forecasting strategies.
---
*Have questions or comments? Visit the [course discussion forum](https://www.kaggle.com/learn/time-series/discussion) to chat with other learners.*
# Lab 2: Importing and plotting data
**Data Science for Biologists** • University of Washington • BIOL 419/519 • Winter 2019
Course design and lecture material by [Bingni Brunton](https://github.com/bwbrunton) and [Kameron Harris](https://github.com/kharris/). Lab design and materials by [Eleanor Lutz](https://github.com/eleanorlutz/), with helpful comments and suggestions from Bing and Kam.
### Table of Contents
1. Review of Numpy arrays
2. Importing data from a file into a Numpy array
3. Examining and plotting data in a Numpy array
4. Bonus exercise
### Helpful Resources
- [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas
- [Python Basics Cheat Sheet](https://datacamp-community-prod.s3.amazonaws.com/e30fbcd9-f595-4a9f-803d-05ca5bf84612) by Python for Data Science
- [Jupyter Notebook Cheat Sheet](https://datacamp-community-prod.s3.amazonaws.com/48093c40-5303-45f4-bbf9-0c96c0133c40) by Python for Data Science
- [Matplotlib Cheat Sheet](https://datacamp-community-prod.s3.amazonaws.com/28b8210c-60cc-4f13-b0b4-5b4f2ad4790b) by Python for Data Science
- [Numpy Cheat Sheet](https://datacamp-community-prod.s3.amazonaws.com/e9f83f72-a81b-42c7-af44-4e35b48b20b7) by Python for Data Science
### Data
- The data in this lab is from the [Palmer Penguin Project](https://github.com/allisonhorst/palmerpenguins) by Dr. Kristen Gorman. The data was edited for teaching purposes.
## Lab 2 Part 1: Review of Numpy arrays
In lecture this week we used Numpy arrays to generate random numbers, look at data, and make patterns. In this first lab section we'll review how to create, access, and edit parts of a Numpy array.
To use the Numpy library we need to first import it using the command `import numpy as np`. We'll also import Matplotlib in this same code block, since we'll use this library later in the lab. It's good practice to import all of your libraries at the very beginning of your code file, so that anyone can quickly see what external libraries are necessary to run your code.
```
import numpy as np
import matplotlib.pyplot as plt
# Magic command to turn on in-line plotting
# (show plots within the Jupyter Notebook)
%matplotlib inline
```
### Creating a Numpy array from existing data
To review some important concepts about Numpy arrays, let's make a small 3x3 array called `alphabet_data`, filled with different letters of the alphabet:
```
row_A = ["A", "B", "C"]
row_D = ["D", "E", "F"]
row_G = ["G", "H", "I"]
alphabet_data = np.array([row_A, row_D, row_G])
print(alphabet_data)
```
We can use the `print` command to look at the entire `alphabet_data` Numpy array. But often we'll work with very large arrays full of data, and we'll want to pick small subsets of the data to look at. Therefore, it's useful to know how to ask Python to give you just a section of any Numpy array.
### Selecting subsets of Numpy arrays
In lab 1, we talked about how index values describe where to find a specific item within a Python list or array. For example, the variable `example_list` is a list with one row, containing three items. To print the first item in the list we would print `example_list[0]`, or *the value in the variable example_list at index 0*. Remember that the first item in a Python list corresponds to the *index* 0.
```
example_list = ["avocado", "tomato", "onion"]
print("example_list is:", example_list)
print("example_list[0] is:", example_list[0])
```
### Selecting a single value in a Numpy array
`alphabet_data` is a little more complicated since it has rows *and* columns, but the general principle of indexing is still the same. Each value in a Numpy array has a unique index value for its row location, and a separate unique index value for its column location. We can ask Numpy to give us just the value we want by using the syntax `alphabet_data[row index, column index]`.
**Exercise 1:** Use indexing to print the second item in the first row of `alphabet_data`:
```
print(alphabet_data[0, 1])
```
### Selecting a range of values in a Numpy array
In addition to selecting just one value, we can use the syntax `lower index range : upper index range` to select a range of values. Remember that ranges in Python are *exclusive* - the last index in the range is not included. Below is an example of range indexing syntax used on `example_list`:
```
example_list = ["avocado", "tomato", "onion"]
print("example_list is:", example_list)
print("example_list[0:2] is:", example_list[0:2])
```
We can use exactly the same notation in a Numpy array. However, since we have both row *and* column indices, we can declare one range for the rows and one range for the columns. For example, the following code prints all rows from index 0 to index 3, and all columns from index 0 to index 2. Note that index 3 doesn't actually exist - but since the upper index range is not included in a Python range, we need to use an index of 3 to print everything up to index 2.
```
print(alphabet_data[0:3, 0:2])
```
**Exercise 2:** Print the first two rows of the first two columns in `alphabet_data`.
```
print(alphabet_data[0:2, 0:2])
```
**Exercise 3:** Print the last two rows of the last two columns in `alphabet_data`.
```
print(alphabet_data[1:, 1:])
```
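Negative indices give an equivalent answer without knowing the array's size in advance. A quick sketch (recreating the array so this cell stands alone):

```python
import numpy as np

alphabet_data = np.array([["A", "B", "C"],
                          ["D", "E", "F"],
                          ["G", "H", "I"]])

# -2 means "second from the end", so this selects the last two rows and
# columns for an array of any size
print(alphabet_data[-2:, -2:])
assert alphabet_data[-2:, -2:].tolist() == [["E", "F"], ["H", "I"]]
```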
Once we know how to select subsets of arrays, we can use this knowledge to *change* the items in these selections. For example, in a list we can assign a value found at a specific index to be something else. In this example we use indexing to reference the first item in `example_list`, and then change it.
```
example_list = ["avocado", "tomato", "onion"]
print("before assignment, the example_list is:", example_list)
example_list[0] = "banana"
print("after assignment, the example_list is:", example_list)
```
Similarly, we can change items in a Numpy array using indexing:
```
print("before assignment, alphabet_data is:")
print(alphabet_data)
alphabet_data[0, 0] = "Z"
print("after assignment, alphabet_data is:")
print(alphabet_data)
```
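One subtlety worth knowing when assigning into string arrays (not covered above): NumPy string arrays have a fixed item width chosen at creation time, so assigning a longer string silently truncates it:

```python
import numpy as np

data = np.array([["A", "B", "C"]])
print(data.dtype)      # a fixed-width unicode dtype, e.g. <U1

data[0, 0] = "XYZ"     # longer than the item width...
print(data[0, 0])      # ...so only the first character survives
assert data[0, 0] == "X"
```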
**Exercise 4:** Replace the item in the third row and second column of `alphabet_data` with `"V"`.
```
alphabet_data[2, 1] = "V"
print(alphabet_data)
```
**Exercise 5:** Replace the entire second row of `alphabet_data` with a new row: `["X", "Y", "X"]`
```
alphabet_data[1] = ["X", "Y", "X"]
print(alphabet_data)
```
## Lab 2 Part 2: Importing data from a file into a Numpy array
Let's apply these principles of Numpy arrays to some real biological data. In the `Lab_02` data folder there are three data files:
- `./data/Lab_02/Adelie_Penguin.csv`
- `./data/Lab_02/Chinstrap_Penguin.csv`
- `./data/Lab_02/Gentoo_Penguin.csv`
These files contain data collected by Dr. Kristen Gorman and the Palmer Station, Antarctica LTER - a member of the Long Term Ecological Research Network.
*Credit:* Artwork by @allison_horst
The data is formatted as a large table, with one file for each species of penguin. The files contain 50 rows, each representing one individual, and four columns, which represent culmen length and depth, flipper length, and body mass. For example, `Adelie_Penguin.csv` corresponds to the column and row labels shown below:
| Penguin ID | Culmen Length (mm) | Culmen Depth (mm) | Flipper Length (mm) | Body Mass (g) |
| --- | ----------- | ----------- | ----------- | ----------- |
| Individual 1 | 39.1 | 18.7 | 181 |3750 |
| Individual 2 | 39.5 |17.4 |186| 3800|
| ... | ... | ... | ... | ... |
| Individual 50 | 39.6 |17.7| 186|3500|

*Credit:* Artwork by @allison_horst
We'll use the Numpy command `loadtxt` to read in our first file, `Adelie_Penguin.csv`. We will save this data in a Numpy array called `adelie_data`.
```
# Load our file data from "filename" into a variable called adelie_data
filename = "./data/Lab_02/Adelie_Penguin.csv"
adelie_data = np.loadtxt(fname=filename, delimiter=",")
```
The data description above tells us that `adelie_data` should contain 50 rows and 4 columns, so let's use the Numpy `shape` attribute to double-check that's the case. `shape` gives two numbers in the format `(number of rows, number of columns)`.
**Exercise 6:** Right now, the code below prints a warning if we don't have the expected 50 rows. Edit the code so that the warning is also printed if the number of columns is not 4.
```
# Print the shape of the loaded dataset
data_shape = adelie_data.shape
print("Adelie data shape is:", data_shape)
# Print a warning if the data shape is not what we expect
if (data_shape[0] != 50) or (data_shape[1] != 4):
    print("Unexpected data shape!")
else:
    print("Correct data shape of 50 rows, 4 columns!")
```
It looks like our `adelie_data` Numpy array is the shape we expect. Now let's look at a subset of data to see what kind of data we're working with.
**Exercise 7:** Use Python array indexing to print the first three rows, first four columns of `adelie_data`. Check to make sure that the printed data matches what is given to you in the data description above.
```
print(adelie_data[0:3, 0:4])
```
## Lab 2 Part 3: Examining and plotting data in a Numpy array
### Calculate interesting characteristics of a Numpy array
Now that we have loaded our Adelie penguin data into a Numpy array, there are several interesting commands we can use to find out more about our data. First let's look at the culmen length column (the first column in the dataset). Using array indexing, we will put this entire first column into a new variable called `culmen_lengths`. When indexing between a range of values, leaving the upper range bound blank causes Python to include everything until the end of the array:
```
# put the culmen lengths for this dataset in a variable called culmen_lengths
culmen_lengths = adelie_data[0:, 0]
print("The culmen lengths in this dataset are:")
print(culmen_lengths)
```
Numpy contains many useful functions for finding out different characteristics of a dataset. The code below shows some examples:
```
# Print some interesting characteristics of the data
print("Mean:", np.mean(culmen_lengths))
print("Standard deviation:", np.std(culmen_lengths))
print("Median:", np.median(culmen_lengths))
print("Minimum:", np.min(culmen_lengths))
print("Maximum:", np.max(culmen_lengths))
```
We can use our `culmen_lengths` variable and the useful characteristics we found above to make a histogram of our data. In the below code we've created a histogram, and added a line that shows where the mean of the dataset is.
**Exercise 8:** Edit the code block below to plot the maximum and minimum data values as two additional vertical lines.
```
# Create a histogram with an opacity of 50% (alpha=0.5)
plt.hist(culmen_lengths, alpha=0.5)
# Add a vertical line to the plot showing the mean.
plt.axvline(np.mean(culmen_lengths), label="mean")
# Your code here!
plt.axvline(np.max(culmen_lengths), label="maximum")
plt.axvline(np.min(culmen_lengths), label="minimum")
# Don't forget to label the axes!
plt.xlabel("Culmen length (mm)")
plt.ylabel("Frequency (number of penguins)")
# Add a legend to the plot
plt.legend()
# Show the plot in our jupyter notebook
plt.show()
```
#### Review of for loops using indexing
Last week in lab we went over an example of a `for` loop that uses indices to loop through a list. Let's pretend that in this Adelie penguin dataset, we have marked in our lab notebook that the first, 12th, 26th, and 44th penguins we sampled seemed suspiciously small. Let's use a `for` loop to print out the culmen length of each of these penguins.
```
# First let's make a list of all of the indexes where we can find suspicious penguins.
interesting_indices = [0, 11, 25, 43]
# Now we'll look at every single index in the list of suspicious indices.
for index in interesting_indices:
    # Because we are looking at indices, we need to use indexing to find the
    # value in culmen_lengths that we're interested in.
    culmen = culmen_lengths[index]
    print("The culmen length at index", index, ":", culmen)
```
**Exercise 9:** Instead of using a `for` loop to look at just the indices in `interesting_indices`, use a `for` loop to look at *all* indices in the `culmen_lengths` dataset. Remember that you can use the command `len(culmen_lengths)` to find out how many values are in the data. Print the culmen length and index if the culmen length is larger than the mean culmen length.
```
all_indices = np.arange(0, len(culmen_lengths))
for index in all_indices:
    culmen = culmen_lengths[index]
    if culmen > culmen_lengths.mean():
        print("The culmen length at index", index, ":", culmen)
```
So far we've only looked at the culmen lengths in this dataset. Let's use a `for` loop to also look at the culmen depths, flipper lengths, and body mass. Remember that the columns in this dataset stand for culmen length, culmen depth, flipper length, and body mass, in that order:
```
culmen_lengths = adelie_data[0:, 0]
culmen_depths = adelie_data[0:, 1]
flipper_lengths = adelie_data[0:, 2]
body_mass = adelie_data[0:, 3]
morphologies = [culmen_lengths, culmen_depths, flipper_lengths, body_mass]
for morphology in morphologies:
    # Create a histogram
    plt.hist(morphology)
    # Show the plot in our jupyter notebook
    plt.show()
```
Notice that the code in the above box is doing the same action for every column in the array. So instead of re-assigning every column in the array to a new variable called `culmen_lengths`, `body_mass`, etc, let's use array indexing to loop through the data instead. Notice that the only thing changing when looking at different columns is the *column index*.
**Exercise 10:** Change the following code so that it creates a histogram for all columns in the Adelie penguin data, like in the previous block. However, instead of making a new variable for each column called `culmen_lengths`, `body_mass`, etc, use indexing instead.
```
column_indices = [0, 1, 2, 3]
for index in column_indices:
data_subset = adelie_data[0:, index]
# Create a histogram
plt.hist(data_subset)
# Show the plot in our jupyter notebook
plt.show()
```
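As a side note the lab doesn't cover: NumPy can also iterate over columns directly via the transpose (`data.T`), which removes the need for explicit column indices entirely. A small sketch with a made-up stand-in array, not the penguin data:

```python
import numpy as np

# Stand-in for the penguin measurements: 4 rows (individuals), 3 columns.
data = np.array([[1.0, 10.0, 100.0],
                 [2.0, 20.0, 200.0],
                 [3.0, 30.0, 300.0],
                 [4.0, 40.0, 400.0]])

# Indexing by column number, as in the exercise:
by_index = [data[0:, i].mean() for i in range(data.shape[1])]

# Iterating over data.T visits one column at a time, no indices needed:
by_transpose = [column.mean() for column in data.T]

print(by_index)
print(by_transpose)
```

Both approaches produce the same per-column means; the index-based version is what the lab asks for, while the transpose version is the more idiomatic NumPy style.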
### Putting it all together: Using a for loop to load, analyze, and plot multiple data files
We've now found some interesting things about Adelie penguins. But our original dataset included three different species - Adelie penguins, Chinstrap penguins, and Gentoo penguins. We probably want to run these exact same analyses for each species, and this is a great opportunity to use a `for` loop to make our lives easier. Because all three of our datasets are exactly the same shape and format, we can reuse all of our code that we've already written.
```
# First, make a list of each filename that we're interested in analyzing
filenames = ["./data/Lab_02/Adelie_Penguin.csv",
"./data/Lab_02/Chinstrap_Penguin.csv",
"./data/Lab_02/Gentoo_Penguin.csv"]
```
Now that we have a list of filenames to analyze, we can turn this into a `for` loop that loads each file and then runs analyses on the file. The code block below has started the process - for each filename, we load in the file data as a variable called `penguin_data`. Note that we're not actually doing anything with the data yet, so we don't see many interesting things being printed.
```
for filename in filenames:
    # Load our file data from "filename" into a variable called penguin_data
    penguin_data = np.loadtxt(fname=filename, delimiter=",")
    print("NOW ANALYZING DATASET: ", filename)
```
The data loading doesn't seem to have caused any errors, so we'll continue to copy and paste the code we've already written to work with the data. Note that everything we've copied and pasted is code we've already written - but now we're asking Python to run this same code on *all* the data files, instead of just Adelie penguins. For the purposes of this exercise, we'll analyze just the culmen lengths of the dataset, so that we end up with a manageable number of output plots.
```
for filename in filenames:
    # Load our file data from "filename" into a variable called penguin_data
    penguin_data = np.loadtxt(fname=filename, delimiter=",")
    print("----")
    print("NOW ANALYZING DATASET: ", filename)
    # Print the shape of the loaded dataset
    data_shape = penguin_data.shape
    print("Penguin data shape is:", data_shape)
    # Print a warning if the data shape is not what we expect
    if (data_shape[0] != 50) or (data_shape[1] != 4):
        print("Unexpected data shape!")
    else:
        print("Correct data shape of 50 rows, 4 columns!")
    # put the culmen lengths for this dataset in a variable called culmen_lengths
    culmen_lengths = penguin_data[0:, 0]
    print("The culmen lengths in this dataset are:")
    print(culmen_lengths)
```
**Exercise 11:** Similarly, add in the code you've already written to print the interesting characteristics of the data (mean, median, max, etc.) and create a histogram for each data file that includes the mean and median. Run your final for loop. Which penguin species has the longest mean culmen length? Smallest minimum culmen length?
```
for filename in filenames:
    # Load our file data from "filename" into a variable called penguin_data
    penguin_data = np.loadtxt(fname=filename, delimiter=",")
    print("----")
    print("NOW ANALYZING DATASET: ", filename)
    # Print the shape of the loaded dataset
    data_shape = penguin_data.shape
    print("Penguin data shape is:", data_shape)
    # Print a warning if the data shape is not what we expect
    if (data_shape[0] != 50) or (data_shape[1] != 4):
        print("Unexpected data shape!")
    else:
        print("Correct data shape of 50 rows, 4 columns!")
    # put the culmen lengths for this dataset in a variable called culmen_lengths
    culmen_lengths = penguin_data[0:, 0]
    print("The culmen lengths in this dataset are:")
    print(culmen_lengths)
    # Print some interesting characteristics of the data
    print("Mean:", np.mean(culmen_lengths))
    print("Standard deviation:", np.std(culmen_lengths))
    print("Median:", np.median(culmen_lengths))
    print("Minimum:", np.min(culmen_lengths))
    print("Maximum:", np.max(culmen_lengths))
    # Create a histogram with an opacity of 50% (alpha=0.5)
    plt.hist(culmen_lengths, alpha=0.5)
    # Add a vertical line to the plot showing the mean.
    plt.axvline(np.mean(culmen_lengths), label="mean")
    plt.axvline(np.median(culmen_lengths), label="median")
    # Don't forget to label the axes!
    plt.xlabel("Culmen length (mm)")
    plt.ylabel("Frequency (number of penguins)")
    # Add a legend to the plot
    plt.legend()
    # Show the plot in our jupyter notebook
    plt.show()
```
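To answer the exercise questions programmatically rather than by eyeballing the printed statistics, the per-file results can be collected into a dictionary and compared. A sketch using made-up culmen lengths as stand-ins for the three files (the values below are illustrative, not the real measurements):

```python
from statistics import mean

# Hypothetical culmen lengths (mm) per species -- stand-ins for the
# values the loop above prints for each data file.
culmen_by_species = {
    "Adelie_Penguin": [39.1, 39.5, 40.3],
    "Chinstrap_Penguin": [46.5, 50.0, 51.3],
    "Gentoo_Penguin": [46.1, 50.0, 48.7],
}

# Compare the species by mean, and by their single smallest measurement.
species_means = {name: mean(values) for name, values in culmen_by_species.items()}
longest_mean = max(species_means, key=species_means.get)
smallest_minimum = min(culmen_by_species, key=lambda name: min(culmen_by_species[name]))

print("Longest mean culmen length:", longest_mean)
print("Smallest minimum culmen length:", smallest_minimum)
```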
## Lab 2 Bonus exercise
**Bonus Exercise 1:** Now take the above code and edit it so that we analyze all of the 4 penguin morphology variables, for all of the species. Label the plot axis and title with the appropriate information (penguin species for title, and the morphological variable on the x axis).
```
for filename in filenames:
    # Load our file data from "filename" into a variable called penguin_data
    penguin_data = np.loadtxt(fname=filename, delimiter=",")
    print("----")
    print("NOW ANALYZING DATASET: ", filename)
    # Print the shape of the loaded dataset
    data_shape = penguin_data.shape
    print("Penguin data shape is:", data_shape)
    # Print a warning if the data shape is not what we expect
    if (data_shape[0] != 50) or (data_shape[1] != 4):
        print("Unexpected data shape!")
    else:
        print("Correct data shape of 50 rows, 4 columns!")
    # the number of columns is the same as the number of items in the first row
    num_columns = len(penguin_data[0])
    axis_labels = ["Culmen length (mm)",
                   "Culmen depth (mm)",
                   "Flipper length (mm)",
                   "Body mass (g)"]
    # THIS IS CALLED A NESTED FOR LOOP!
    # A NESTED FOR LOOP HAS A FOR LOOP INSIDE OF ANOTHER FOR LOOP.
    for index in np.arange(0, num_columns):
        data_subset = penguin_data[0:, index]
        # Print some interesting characteristics of the data
        print("Mean:", np.mean(data_subset))
        print("Standard deviation:", np.std(data_subset))
        print("Median:", np.median(data_subset))
        print("Minimum:", np.min(data_subset))
        print("Maximum:", np.max(data_subset))
        # Create a histogram with an opacity of 50% (alpha=0.5)
        plt.hist(data_subset, alpha=0.5)
        # Add a vertical line to the plot showing the mean.
        plt.axvline(np.mean(data_subset), label="mean")
        plt.axvline(np.median(data_subset), label="median")
        # Don't forget to label the axes!
        plt.xlabel(axis_labels[index])
        plt.ylabel("Frequency (number of penguins)")
        # Use the file name without path or extension as the plot title.
        # (str.strip removes a *set of characters*, not a suffix, so
        # filename.strip('.csv') would mangle the name.)
        species_name = filename.split("/")[-1].replace(".csv", "")
        plt.title(species_name)
        # Add a legend to the plot
        plt.legend()
        # Show the plot in our jupyter notebook
        plt.show()
```
# Jenkins Job Monitoring Script
#### @author: Rakesh.Ranjan
Created on Wed Jun 17 22:14:40 2020
Updated on Tue Jun 30 15:53:55 2020
### Import Lib
```
import os
import pandas as pd
import requests
import json
import datetime as dt
from colorama import Fore, Back, Style
import time
import concurrent.futures
#timestamp
timstamp = dt.datetime.now()
print(timstamp)
#current directory
print(os.getcwd())
```
#### Indexing starts at 0. Find the position of your team name and assign that value to the variable `i`
```
sc_team_name = ['DT%20-%20Accountable%20Gladiators', 'DT%20-%20DeltaForce', 'DT%20-%20Disruptors', 'DT%20-%20Transformers', 'DT-Chargers', 'DT-Equalizers', 'DT-OMG', 'DT-PayTheMan', 'DT-Req2Check']
sc_team_name
```
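Rather than counting positions by hand, `list.index` can look the position up directly — a small sketch using the team list above:

```python
# Same list as in the notebook cell above.
sc_team_name = ['DT%20-%20Accountable%20Gladiators', 'DT%20-%20DeltaForce',
                'DT%20-%20Disruptors', 'DT%20-%20Transformers', 'DT-Chargers',
                'DT-Equalizers', 'DT-OMG', 'DT-PayTheMan', 'DT-Req2Check']

# list.index returns the 0-based position, so no manual counting is needed.
i = sc_team_name.index('DT-Chargers')
print(i)  # 4
```

This also fails loudly (with a `ValueError`) if the team name is misspelled, instead of silently picking the wrong team.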
#### Enter the index of your Scrum Team. 'DT-Chargers' is placed at the 5th position, so its index is 5 - 1 = 4
```
i = 6  # 7 for Pay the man, 5 for Equalizers, 4 for Chargers
scrum_team_name = sc_team_name[i]
scrum_team_name
```
### Fetch last N build details
```
fetch_N_last_Jobs = 2
```
### Store the extract in the Excel files named below
```
#Failed_Jenkins_Job_filename = scrum_team_name + '_' + 'Failed_Jenkins_Job_stage' + '_' + str(timstamp).replace(' ', '_').replace(':', '-') + '.xlsx'
jenkins_job_file_name = scrum_team_name + '_' + 'Jenkins_Job' + '_' + str(timstamp).replace(' ', '_').replace(':', '-') + '.xlsx'
stage_Jenkins_Job_file_name = scrum_team_name + '_' + 'stage_Jenkins_Job' + '_' + str(timstamp).replace(' ', '_').replace(':', '-') + '.xlsx'
dev_Jenkins_Job_file_name = scrum_team_name + '_' + 'dev_Jenkins_Job' + '_' + str(timstamp).replace(' ', '_').replace(':', '-') + '.xlsx'
```
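The chained `str()` + `replace()` calls above build a filesystem-safe timestamp. As a design note, `strftime` expresses the same target layout directly; a sketch with a fixed example timestamp (the real script uses `datetime.now()`, which also carries microseconds):

```python
import datetime as dt

timestamp = dt.datetime(2020, 6, 30, 15, 53, 55)

# Chained str() + replace(), as in the notebook:
via_replace = str(timestamp).replace(' ', '_').replace(':', '-')

# strftime states the intended format in one place:
via_strftime = timestamp.strftime('%Y-%m-%d_%H-%M-%S')

print(via_replace)
print(via_strftime)
```

Both produce the same string for this example; the `strftime` form is easier to adjust later (e.g. to drop seconds or add microseconds).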
### Create an empty DataFrame for storing a summary of Jenkins jobs
```
column_names = ["test_id", "fullDisplayName", "buildNumber", "result", "instance_name","virtualMachine", "buildURL", "testComplete_or_console_ErrMessage", "testCompleteURL", "consoleLogPage"]
df_Jenkins_Aut_job = pd.DataFrame(columns = column_names)
```
### Command to extract all test folders of the respective Scrum team
```
scrum_team_url = 'https://testwin.epfin.coxautoinc.com/view/' + scrum_team_name + '/api/json'
resp_tc_folder = requests.get(scrum_team_url)
resp_tc_folder
```
### Find Total number of Test Folders
```
test_folder_count = len(resp_tc_folder.json()['jobs'])
print('test_folder_count:',test_folder_count)
```
### Display test folder names and fetch build details
```
for i in range(test_folder_count):
    print(resp_tc_folder.json()['jobs'][i]['name'])

start = time.time()
print('execute the main logic to get build details of Jenkins test case')
for tc_fl in range(test_folder_count):
    # 'O2I- Sales Tax Vertex', 'O2I - Invoice Format' etc.
    #print('test folder name:', resp_tc_folder.json()['jobs'][tc_fl]['name'])
    tc_folder_url = resp_tc_folder.json()['jobs'][tc_fl]['url'] + 'api/json'
    resp_tc_folder_url = requests.get(tc_folder_url)
    jsonRes = resp_tc_folder_url.json()  # To get response dictionary as JSON
    # loop count = number of testcases inside the folder
    test_case_count = len(jsonRes['jobs'])
    #print('test_case_count:', test_case_count)
    for i in range(0, test_case_count):
        suffix = 'api/json'
        url = jsonRes['jobs'][i]['url'] + suffix
        tc_url = requests.get(url)
        color_of_tes_case = jsonRes['jobs'][i]['color']
        if color_of_tes_case in ['red', 'blue']:
            print(Fore.GREEN + '*********************')
            print(Style.RESET_ALL)
            if color_of_tes_case == 'red':
                print(Back.RED + 'test folder name:', resp_tc_folder.json()['jobs'][tc_fl]['name'])
                print(Style.RESET_ALL)
                print(Back.RED + 'Failed - last build for test-case name:', jsonRes['jobs'][i]['name'])
                print(Style.RESET_ALL)
            else:
                print(Back.GREEN + 'test folder name:', resp_tc_folder.json()['jobs'][tc_fl]['name'])
                print(Style.RESET_ALL)
                print(Back.GREEN + 'test-case name:', jsonRes['jobs'][i]['name'])
                print(Style.RESET_ALL)
            print(Fore.GREEN + '*********************')
            print(Style.RESET_ALL)
            #print('back to normal now')
        df_Jenkins_Aut_job = df_Jenkins_Aut_job.append({
            "test_id": None,
            "fullDisplayName": None,
            "buildNumber": None,
            "result": None,
            "instance_name": None,
            "virtualMachine": None,
            "buildURL": None,
            "testComplete_or_console_ErrMessage": None,
            "testCompleteURL": None,
            "consoleLogPage": None}, ignore_index=True)
        # fetching details of the last 2 builds
        if len(tc_url.json()['builds']) > 2:
            lc_builds = fetch_N_last_Jobs
        else:
            lc_builds = len(tc_url.json()['builds'])
        # build loop
        for last_4_build in range(0, lc_builds):
            buildNumber_Seq = tc_url.json()['builds'][last_4_build]['number']
            buildURL_Seq = tc_url.json()['builds'][last_4_build]['url']
            json_url = buildURL_Seq + 'api/json'
            # command to get details of the test case build
            tc_bulk_details = requests.get(json_url)
            # ******** details of test case build *********
            test_id = jsonRes['jobs'][i]['name'].split('_')[0]
            fullDisplayName = tc_bulk_details.json()['fullDisplayName']
            buildNumber = tc_bulk_details.json()['number']
            testResult = tc_bulk_details.json()['result']
            virtualMachine = tc_bulk_details.json()['builtOn']
            instance_name = virtualMachine.split('-')[0]
            buildURL = tc_bulk_details.json()['url']
            consoleTextURL = tc_bulk_details.json()['url'] + 'consoleText'
            # **** logic to get error message from consoletext
            try:
                if testResult == 'FAILURE':
                    console = requests.get(buildURL + 'consoleText/api/json')
                    s = console.text
                    start = s.find("ERROR [SoapUIProTestCaseRunner]") + len("ERROR [SoapUIProTestCaseRunner]")
                    end_string = "INFO [log] " + test_id[2:]
                    end = s.find(end_string)
                    console_err_msg = s[start:end]
                else:
                    console_err_msg = None
            except:
                console_err_msg = 'Not able to get the console error message'
            # **** logic to get error_message for TestComplete Automation ********
            try:
                if testResult == 'FAILURE':
                    test_comp_job_url = buildURL + 'TestComplete/api/json'
                    resp_test_comp_job_url = requests.get(test_comp_job_url)
                    tc_error_message = resp_test_comp_job_url.json()['reports'][0]['error']
                    test_comp_url = resp_test_comp_job_url.json()['reports'][0]['url']
                    if tc_error_message == "":
                        start_tc = s.find("[TestComplete] [ERROR]") + len("[TestComplete] [ERROR]")
                        end_tc = s.find("Finished: FAILURE")
                        tc_error_message = s[start_tc:end_tc]
                else:
                    test_comp_job_url = buildURL + 'TestComplete/api/json'
                    resp_test_comp_job_url = requests.get(test_comp_job_url)
                    test_comp_url = resp_test_comp_job_url.json()['reports'][0]['url']
                    tc_error_message = None
            except:
                tc_error_message = console_err_msg
                test_comp_url = 'No TestComplete URL for this test case '
            finally:
                # populate the dataframe with job details, appending one record per build
                df_Jenkins_Aut_job = df_Jenkins_Aut_job.append({
                    "test_id": test_id,
                    "fullDisplayName": fullDisplayName,
                    "buildNumber": buildNumber,
                    "result": testResult,
                    "instance_name": instance_name,
                    "virtualMachine": virtualMachine,
                    "buildURL": buildURL,
                    "testComplete_or_console_ErrMessage": tc_error_message,
                    "testCompleteURL": test_comp_url,
                    "consoleLogPage": consoleTextURL
                }, ignore_index=True)
                if testResult == 'FAILURE':
                    print(Back.RED + 'build number:', buildNumber)
                    print(Back.RED + 'instance name:', instance_name)
                    print(Back.RED + 'tc_error_message:', tc_error_message)
                    print(Style.RESET_ALL)
                else:
                    print(Back.GREEN + 'build number:', buildNumber)
                    print(Back.GREEN + 'instance name:', instance_name)
                    print(Back.GREEN + 'tc_error_message:', tc_error_message)
                    print(Style.RESET_ALL)

finish = time.time()
#print("Time taken : {} secs".format(finish - start))
print("Now execute the command to store result into flat files")
```
### Directory where you are storing the Jenkins Report
```
os.chdir("C:\\Users\\Rakesh.Ranjan\\Desktop\\Manheim\\Jenkins Job Monitoring\\25-06-2020\\Jenkins Job Monitoring")
print(os.getcwd())
df_Jenkins_Aut_job.to_excel(jenkins_job_file_name)
df_Jenkins_Aut_job[df_Jenkins_Aut_job['instance_name'] =='stage'].to_excel(stage_Jenkins_Job_file_name)
df_Jenkins_Aut_job[df_Jenkins_Aut_job['instance_name'] =='dev'].to_excel(dev_Jenkins_Job_file_name)
print('Open file - ' + jenkins_job_file_name + ' to see Jenkins job details')
```
<a href="https://colab.research.google.com/github/coderinspain/MLJupyterhousePrediction/blob/main/Copy_of_trainingMLhousePrediction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Model to predict house prices
```
import pandas as pd
import io
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
import matplotlib
matplotlib.rcParams["figure.figsize"] = (20, 10)
df1 = pd.read_csv('Bengaluru_House_Data.csv')
df1.head()
df1.shape
df1.groupby('area_type')['area_type'].agg('count')
df2 = df1.drop(['area_type', 'society', 'balcony', 'availability'], axis='columns')
df2.head()
df2.isnull().sum()
df3 = df2.dropna()
df3.isnull().sum()
df3.shape
df3['size'].unique()
df3['bhk'] = df3['size'].apply(lambda x: int(x.split(' ')[0]))
df3.head()
df3['bhk'].unique()
df3[df3.bhk>12]
df3.total_sqft.unique()
def is_float(x):
    try:
        float(x)
    except:
        return False
    return True
df3[~df3['total_sqft'].apply(is_float)].head(10)
def convert_sqft_to_num(x):
    tokens = x.split('-')
    if len(tokens) == 2:
        return (float(tokens[0]) + float(tokens[1])) / 2
    try:
        return float(x)
    except:
        return None
convert_sqft_to_num('2166')
convert_sqft_to_num('3090 - 5002')
convert_sqft_to_num('34.46Sq. Meter')
df4 = df3.copy()
df4['total_sqft'] = df4['total_sqft'].apply(convert_sqft_to_num)
df4.head()
df4.loc[30]
df4.head(3)
df5 = df4.copy()
df5['price_per_sqft'] = df5['price']*100000/df5['total_sqft']
df5.head()
len(df5.location.unique())
df5.location = df5.location.apply(lambda x: x.strip())
location_stats = df5.groupby('location')['location'].agg('count').sort_values(ascending = False)
location_stats
len(location_stats[location_stats <= 10])
location_stats_less_than_10 = location_stats[location_stats<=10]
location_stats_less_than_10
len(df5.location.unique())
df5.location = df5.location.apply(lambda x: 'other' if x in location_stats_less_than_10 else x)
len(df5.location.unique())
df5.head(10)
df5[df5.total_sqft/df5.bhk < 300].head()
df5.shape
df6 = df5[~(df5.total_sqft/df5.bhk < 300)]
df6.shape
df6.price_per_sqft.describe()
def remove_pps_outliers(df):
    df_out = pd.DataFrame()
    for key, subdf in df.groupby('location'):
        m = np.mean(subdf.price_per_sqft)
        st = np.std(subdf.price_per_sqft)  # standard deviation, not a second mean
        reduced_df = subdf[(subdf.price_per_sqft > (m - st)) & (subdf.price_per_sqft <= (m + st))]
        df_out = pd.concat([df_out, reduced_df], ignore_index=True)
    return df_out
df7 = remove_pps_outliers(df6)
df7.shape
def plot_scatter_chart(df, location):
    bhk2 = df[(df.location == location) & (df.bhk == 2)]
    bhk3 = df[(df.location == location) & (df.bhk == 3)]
    matplotlib.rcParams['figure.figsize'] = (15, 8)
    plt.scatter(bhk2.total_sqft, bhk2.price, color='blue', label='2 BHK', s=50)
    plt.scatter(bhk3.total_sqft, bhk3.price, marker='+', color='green', label='3 BHK', s=50)
    plt.xlabel('Total Square Feet Area')
    plt.ylabel('Price')
    plt.title(location)
    plt.legend()
plot_scatter_chart(df7, 'Hebbal')
def remove_bhk_outliers(df):
    exclude_indices = np.array([])
    for location, location_df in df.groupby('location'):
        bhk_stats = {}
        for bhk, bhk_df in location_df.groupby('bhk'):
            bhk_stats[bhk] = {
                'mean': np.mean(bhk_df.price_per_sqft),
                'std': np.std(bhk_df.price_per_sqft),
                'count': bhk_df.shape[0]
            }
        for bhk, bhk_df in location_df.groupby('bhk'):
            stats = bhk_stats.get(bhk - 1)
            if stats and stats['count'] > 5:
                exclude_indices = np.append(exclude_indices, bhk_df[bhk_df.price_per_sqft < (stats['mean'])].index.values)
    return df.drop(exclude_indices, axis='index')
df8 = remove_bhk_outliers(df7)
df8.shape
plot_scatter_chart(df8, 'Hebbal')
import matplotlib
matplotlib.rcParams['figure.figsize'] = (20, 10)
plt.hist(df8.price_per_sqft, rwidth=0.8)
plt.xlabel('Price Per Square Feet')
plt.ylabel('Count')
df8.bath.unique()
df8[df8.bath > 10]
plt.hist(df8.bath, rwidth=0.8)
plt.xlabel('Number of Bathrooms')
plt.ylabel('Count')
df8[df8.bath > df8.bhk+2]
df9 = df8[df8.bath<df8.bhk+2]
df9.shape
df10 = df9.drop(['size', 'price_per_sqft'], axis='columns')
df10.head(3)
dummies = pd.get_dummies(df10.location)
dummies.head(3)
df11 = pd.concat([df10,dummies.drop('other', axis='columns')], axis= 'columns')
df11.head(3)
df12 = df11.drop('location', axis='columns')
df12.head()
df12.shape
X = df12.drop('price', axis='columns')
X.head()
y = df12.price
y.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2, random_state=10)
from sklearn.linear_model import LinearRegression
lr_clf = LinearRegression()
lr_clf.fit(X_train, y_train)
lr_clf.score(X_test, y_test)
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import cross_val_score
cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
cross_val_score(LinearRegression(), X, y, cv=cv)
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Lasso
from sklearn.tree import DecisionTreeRegressor
def find_best_model_gridsearchcv(X, y):
    algos = {
        'linear_regression': {
            'model': LinearRegression(),
            'params': {
                'normalize': [True, False]
            }
        },
        'lasso': {
            'model': Lasso(),
            'params': {
                'alpha': [1, 2],
                'selection': ['random', 'cyclic']
            }
        },
        'decision_tree': {
            'model': DecisionTreeRegressor(),
            'params': {
                'criterion': ['mse', 'friedman_mse'],
                'splitter': ['best', 'random']
            }
        }
    }
    scores = []
    cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
    for algo_name, config in algos.items():
        gs = GridSearchCV(config['model'], config['params'], cv=cv, return_train_score=False)
        gs.fit(X, y)
        scores.append({
            'model': algo_name,
            'best_score': gs.best_score_,
            'best_params': gs.best_params_
        })
    return pd.DataFrame(scores, columns=['model', 'best_score', 'best_params'])
find_best_model_gridsearchcv(X,y)
X.columns
def predict_price(location, sqft, bath, bhk):
    loc_index = np.where(X.columns == location)[0][0]
    x = np.zeros(len(X.columns))
    x[0] = sqft
    x[1] = bath
    x[2] = bhk
    if loc_index >= 0:
        x[loc_index] = 1
    return lr_clf.predict([x])[0]
predict_price('1st Phase JP Nagar', 1000, 2, 2)
predict_price('1st Phase JP Nagar', 1000, 2, 3)
predict_price('Indira Nagar', 1000, 2, 2)
predict_price('Indira Nagar', 1000, 3, 3)
import pickle
with open('banglore_home_prices_model.pickle', 'wb') as f:
    pickle.dump(lr_clf, f)
import json
columns = {
    'data_columns': [col.lower() for col in X.columns]
}
with open('columns.json', 'w') as f:
    f.write(json.dumps(columns))
```
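The pickled model and the `columns.json` schema are typically loaded back elsewhere, for example in a prediction server. A minimal, self-contained sketch of that round trip, using a hypothetical stand-in model object in place of the trained `lr_clf` (any picklable object with a `predict`-style method follows the same pattern):

```python
import json
import os
import pickle
import tempfile

# Hypothetical stand-in for the trained regressor.
class MeanModel:
    def __init__(self, mean_price):
        self.mean_price = mean_price

    def predict(self, rows):
        # Return the stored mean for every input row.
        return [self.mean_price for _ in rows]

with tempfile.TemporaryDirectory() as workdir:
    model_path = os.path.join(workdir, 'banglore_home_prices_model.pickle')
    columns_path = os.path.join(workdir, 'columns.json')

    # Save both artifacts, mirroring the notebook's final cells.
    with open(model_path, 'wb') as f:
        pickle.dump(MeanModel(mean_price=85.0), f)
    with open(columns_path, 'w') as f:
        json.dump({'data_columns': ['total_sqft', 'bath', 'bhk']}, f)

    # Later, load them back to serve predictions.
    with open(model_path, 'rb') as f:
        loaded_model = pickle.load(f)
    with open(columns_path) as f:
        data_columns = json.load(f)['data_columns']

prediction = loaded_model.predict([[1000, 2, 2]])
print(prediction, data_columns)
```

The column schema is saved alongside the model so a caller can reconstruct the one-hot feature vector in exactly the order the model was trained on.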
# Solution for Ex 5 of the ibmqc 2021
This solution is from the point of view of someone who has just started to explore quantum computing, but is familiar with the physics behind it and has some experience with programming and optimization problems.
I did not create this solution entirely by myself; I altered the provided tutorial solution for the H-H molecule.
The goal was to create an ansatz with the lowest possible number of CNOT gates.
```
from qiskit_nature.drivers import PySCFDriver
molecule = 'Li 0.0 0.0 0.0; H 0.0 0.0 1.5474'
driver = PySCFDriver(atom=molecule)
qmolecule = driver.run()
```
There were many hints on how to reduce the problem to a manageable size, in particular by reducing the number of qubits, resulting in smaller circuits with fewer operations. A first hint was to freeze the core: Li has 2 electrons in the 1s orbital and 1 in the 2s orbital (which forms bonds with other atoms), so the electrons in orbitals nearer the core can be frozen.
Li: 1s, 2s, and 2px, 2py, 2pz orbitals --> 5 orbitals
H: 1s --> 1 orbital (6 molecular orbitals in total)
```
from qiskit_nature.transformers import FreezeCoreTransformer
trafo = FreezeCoreTransformer(freeze_core=True)
q_molecule_reduced = trafo.transform(qmolecule)
```
There are 5 properties to consider to better understand the task. Note that a transformation has already been applied; before it, the properties would have been (in this order): 4, 6, 12, 12, 1.0259348796432726.
```
n_el = q_molecule_reduced.num_alpha + q_molecule_reduced.num_beta
print("Number of electrons in the system: ", n_el)
n_mo = q_molecule_reduced.num_molecular_orbitals
print("Number of molecular orbitals: ", n_mo)
n_so = 2 * q_molecule_reduced.num_molecular_orbitals
print("Number of spin orbitals: ", n_so)
n_q = 2 * q_molecule_reduced.num_molecular_orbitals
print("Number of qubits one would need with Jordan-Wigner mapping:", n_q)
e_nn = q_molecule_reduced.nuclear_repulsion_energy
print("Nuclear repulsion energy", e_nn)
```
#### Electronic structure problem
One can then create an `ElectronicStructureProblem` that can produce the list of fermionic operators before mapping them to qubits (Pauli strings).
In the following cell one could also use a quantum molecule transformer to remove orbitals which would not contribute to the ground state - for example px and py in this problem. Why they correspond to orbitals 3 and 4 I'm not really sure; maybe one has to look through the documentation a bit more carefully than I did, but since there were only very limited combinations, I tried them at random and kept an eye on the ground state energy.
```
from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem
problem= ElectronicStructureProblem(driver, q_molecule_transformers=[FreezeCoreTransformer(freeze_core=True,
remove_orbitals=[3,4])])
second_q_ops = problem.second_q_ops()
# Hamiltonian
main_op = second_q_ops[0]
```
### QubitConverter
Allows you to define the mapping that will be used in the simulation. For the LiH problem the parity mapper is chosen because it allows the two-qubit-reduction setting, which further simplifies the problem.
If I understand the paper correctly - referenced as [Bravyi *et al.*, 2017](https://arxiv.org/abs/1701.08213v1) - symmetries from particle number operators, such as eq. 52 of the paper, are used to reduce the number of qubits. The only challenging thing was to understand what `[1]` means when you pass it as the `z2symmetry_reduction` parameter.
```
from qiskit_nature.mappers.second_quantization import ParityMapper, BravyiKitaevMapper, JordanWignerMapper
from qiskit_nature.converters.second_quantization.qubit_converter import QubitConverter
# Setup the mapper and qubit converter
mapper = ParityMapper()
converter = QubitConverter(mapper=mapper, two_qubit_reduction=True, z2symmetry_reduction=[1])
# The fermionic operators are mapped to qubit operators
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
qubit_op = converter.convert(main_op, num_particles=num_particles)
```
#### Initial state
One has to choose an initial state for the system, which has been reduced from 12 qubits to 4. You may choose the initialisation yourself or stick to the one proposed by the Hartree-Fock function (i.e. $|\Psi_{HF} \rangle = |1100 \rangle$). For the exercise it is recommended to stick to the function!
```
from qiskit_nature.circuit.library import HartreeFock
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
init_state = HartreeFock(num_spin_orbitals, num_particles, converter)
init_state.draw('mpl')
```
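As a quick sanity check on what the circuit drawing represents, the Hartree-Fock occupation $|1100\rangle$ can be written out as a plain statevector with NumPy. This is an illustrative sketch, not part of the challenge code; qubit-ordering conventions differ between libraries, so reading the ket label as a big-endian binary integer is an assumption here.

```python
import numpy as np

# The Hartree-Fock state |1100> as a 4-qubit statevector: a single basis
# state with amplitude 1. The label is read as a big-endian binary integer
# (an assumption; some libraries use the reverse ordering).
n_qubits = 4
hf_label = "1100"
state = np.zeros(2 ** n_qubits)
state[int(hf_label, 2)] = 1.0

# A valid quantum state is normalised
assert np.isclose(np.linalg.norm(state), 1.0)
print(np.argmax(state))  # index 12 == 0b1100
```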
### Ansatz
Playing with the ansatz was really fun. I found the TwoLocal ansatz very interesting for gaining some knowledge and insight into how to compose an ansatz for the problem. Later on I tried to create my own ansatz and converged to one quite similar to a TwoLocal ansatz.
It's obvious you have to entangle the qubits somehow with CNOTs. But to give the optimization algorithm a chance to find a minimum, you also have to make sure the state of each qubit can be rotated independently before and after the entangling layer.
```
# Choose the ansatz
from qiskit.circuit.library import TwoLocal
from qiskit_nature.circuit.library import UCCSD, PUCCD, SUCCD
ansatz_type = "Custom"
# Parameters for q-UCC ansatz
num_particles = (problem.molecule_data_transformed.num_alpha,
                 problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
# Put arguments for twolocal
if ansatz_type == "TwoLocal":
    # Single qubit rotations that are placed on all qubits with independent parameters
    rotation_blocks = ['ry']
    # Entangling gates
    entanglement_blocks = ['cx']
    # How the qubits are entangled
    entanglement = 'linear'
    # Repetitions of rotation_blocks + entanglement_blocks with independent parameters
    repetitions = 1
    # Skip the final rotation_blocks layer
    skip_final_rotation_layer = False
    ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks, reps=repetitions,
                      entanglement=entanglement, skip_final_rotation_layer=skip_final_rotation_layer)
    # Add the initial state
    ansatz.compose(init_state, front=True, inplace=True)
elif ansatz_type == "UCCSD":
    ansatz = UCCSD(converter, num_particles, num_spin_orbitals, initial_state=init_state)
elif ansatz_type == "PUCCD":
    ansatz = PUCCD(converter, num_particles, num_spin_orbitals, initial_state=init_state)
elif ansatz_type == "SUCCD":
    ansatz = SUCCD(converter, num_particles, num_spin_orbitals, initial_state=init_state)
elif ansatz_type == "Custom":
    # Example of how to write your own circuit
    from qiskit.circuit import Parameter, QuantumCircuit
    # Define the variational parameters
    thetas = [Parameter(name) for name in ['a', 'b', 'c', 'd']]
    etas = [Parameter(name) for name in ['e', 'f', 'g', 'h']]
    n = qubit_op.num_qubits
    # Make an empty quantum circuit
    qc = QuantumCircuit(n)
    # Rotation layer, then a CNOT ladder, then a second rotation layer
    for i in range(n):
        qc.ry(thetas[i], i)
    for i in range(n - 1):
        qc.cx(i, i + 1)
    for i in range(n):
        qc.ry(etas[n - i - 1], i)
    # Visual separator
    qc.barrier()
    ansatz = qc
    ansatz.compose(init_state, front=True, inplace=True)
ansatz.draw('mpl')
```
### Backend
This is where you specify the simulator or device where you want to run your algorithm.
We will focus on the `statevector_simulator` in this challenge.
```
from qiskit import Aer
backend = Aer.get_backend('statevector_simulator')
```
### Optimizer
The optimizer guides the evolution of the ansatz parameters, so it is very important to investigate the energy convergence, as it determines the number of measurements that have to be performed on the QPU.
A clever choice might drastically reduce the number of needed energy evaluations.
Some of the optimizers do not seem to reach the minimum, so the choice of optimizer and its parameters matters. I did not reach the minimum with any optimizer other than SLSQP.
I found a very nice and short explanation of how the optimizer works on Stack Overflow:
> The algorithm described by Dieter Kraft is a quasi-Newton method (using BFGS) applied to a Lagrange function consisting of the loss function and equality and inequality constraints. Because at each iteration some of the inequality constraints are active and some are not, the inactive inequalities are omitted for the next iteration. An equality-constrained problem is solved at each step using the active subset of constraints in the Lagrange function.

https://stackoverflow.com/questions/59808494/how-does-the-slsqp-optimization-algorithm-work
```
from qiskit.algorithms.optimizers import SLSQP
optimizer = SLSQP(maxiter=4000)
```
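The quoted description can be tried out on a toy problem with SciPy's implementation of the same algorithm. This is a sketch for intuition only; the objective and constraint below are made up and have nothing to do with the LiH Hamiltonian.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimise (x-1)^2 + (y-2)^2 subject to x + y <= 2.
# The unconstrained minimum (1, 2) violates the constraint, so SLSQP
# must trade the objective off against the active inequality.
objective = lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2
constraint = {"type": "ineq", "fun": lambda p: 2 - p[0] - p[1]}  # ">= 0" form

result = minimize(objective, x0=[0.0, 0.0], method="SLSQP", constraints=[constraint])
print(result.x)  # close to (0.5, 1.5), the projection of (1, 2) onto x + y = 2
```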
### Exact eigensolver
In the exercise we got the following exact diagonalizer function to compare the results.
```
from qiskit_nature.algorithms.ground_state_solvers.minimum_eigensolver_factories import NumPyMinimumEigensolverFactory
from qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver
import numpy as np
def exact_diagonalizer(problem, converter):
    solver = NumPyMinimumEigensolverFactory()
    calc = GroundStateEigensolver(converter, solver)
    result = calc.solve(problem)
    return result
result_exact = exact_diagonalizer(problem, converter)
exact_energy = np.real(result_exact.eigenenergies[0])
print("Exact electronic energy", exact_energy)
```
### VQE and initial parameters for the ansatz
Now we can import the VQE class and run the algorithm. This code was also provided. Everything I have done so far is plugged in.
```
from qiskit.algorithms import VQE
from IPython.display import display, clear_output
def callback(eval_count, parameters, mean, std):
    # Overwrites the same line when printing
    display("Evaluation: {}, Energy: {}, Std: {}".format(eval_count, mean, std))
    clear_output(wait=True)
    counts.append(eval_count)
    values.append(mean)
    params.append(parameters)
    deviation.append(std)

counts = []
values = []
params = []
deviation = []
# Set initial parameters of the ansatz
# We choose a fixed small displacement
# so all participants start from a similar starting point
try:
    initial_point = [0.01] * len(ansatz.ordered_parameters)
except AttributeError:
    initial_point = [0.01] * ansatz.num_parameters
algorithm = VQE(ansatz,
                optimizer=optimizer,
                quantum_instance=backend,
                callback=callback,
                initial_point=initial_point)
result = algorithm.compute_minimum_eigenvalue(qubit_op)
print(result)
# Store results in a dictionary
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import Unroller
# Unroller transpile your circuit into CNOTs and U gates
pass_ = Unroller(['u', 'cx'])
pm = PassManager(pass_)
ansatz_tp = pm.run(ansatz)
cnots = ansatz_tp.count_ops()['cx']
score = cnots
accuracy_threshold = 4.0 # in mHa
energy = result.optimal_value
if ansatz_type == "TwoLocal":
    result_dict = {
        'optimizer': optimizer.__class__.__name__,
        'mapping': converter.mapper.__class__.__name__,
        'ansatz': ansatz.__class__.__name__,
        'rotation blocks': rotation_blocks,
        'entanglement_blocks': entanglement_blocks,
        'entanglement': entanglement,
        'repetitions': repetitions,
        'skip_final_rotation_layer': skip_final_rotation_layer,
        'energy (Ha)': energy,
        'error (mHa)': (energy - exact_energy) * 1000,
        'pass': (energy - exact_energy) * 1000 <= accuracy_threshold,
        '# of parameters': len(result.optimal_point),
        'final parameters': result.optimal_point,
        '# of evaluations': result.optimizer_evals,
        'optimizer time': result.optimizer_time,
        '# of qubits': int(qubit_op.num_qubits),
        '# of CNOTs': cnots,
        'score': score}
else:
    result_dict = {
        'optimizer': optimizer.__class__.__name__,
        'mapping': converter.mapper.__class__.__name__,
        'ansatz': ansatz.__class__.__name__,
        'rotation blocks': None,
        'entanglement_blocks': None,
        'entanglement': None,
        'repetitions': None,
        'skip_final_rotation_layer': None,
        'energy (Ha)': energy,
        'error (mHa)': (energy - exact_energy) * 1000,
        'pass': (energy - exact_energy) * 1000 <= accuracy_threshold,
        '# of parameters': len(result.optimal_point),
        'final parameters': result.optimal_point,
        '# of evaluations': result.optimizer_evals,
        'optimizer time': result.optimizer_time,
        '# of qubits': int(qubit_op.num_qubits),
        '# of CNOTs': cnots,
        'score': score}
# Plot the results
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('Iterations')
ax.set_ylabel('Energy')
ax.grid()
fig.text(0.7, 0.75, f'Energy: {result.optimal_value:.3f}\nScore: {score:.0f}')
plt.title(f"{result_dict['optimizer']}-{result_dict['mapping']}\n{result_dict['ansatz']}")
ax.plot(counts, values)
ax.axhline(exact_energy, linestyle='--')
fig_title = f"\
{result_dict['optimizer']}-\
{result_dict['mapping']}-\
{result_dict['ansatz']}-\
Energy({result_dict['energy (Ha)']:.3f})-\
Score({result_dict['score']:.0f})\
.png"
fig.savefig(fig_title, dpi=300)
# Display and save the data
import pandas as pd
import os.path
filename = 'results_h2.csv'
if os.path.isfile(filename):
    result_df = pd.read_csv(filename)
    result_df = result_df.append([result_dict])
else:
    result_df = pd.DataFrame.from_dict([result_dict])
result_df.to_csv(filename)
result_df[['optimizer', 'ansatz', '# of qubits', '# of parameters', 'rotation blocks', 'entanglement_blocks',
           'entanglement', 'repetitions', 'error (mHa)', 'pass', 'score']]
# Check your answer using the following code
from qc_grader import grade_ex5
freeze_core = True  # set to True if you froze the core electrons
grade_ex5(ansatz, qubit_op, result, freeze_core)
```
Thank you very much for this awesome challenge. Without the outline, explanations, examples and hints I would never have been able to solve this in a reasonable time.
I will definitely save this notebook along with the other exercises as a blueprint for the future.
```
import numpy as np
import theano
import theano.tensor as T
import lasagne
import os
#thanks @keskarnitish
```
# Generate names
* Struggling to find a name for a variable? Let's see how you'll come up with a name for your son or daughter. Surely no human has expertise in what makes a good child name, so let us train a neural network instead.
* The dataset contains ~8k human names from different cultures (in Latin transcription).
* Objective (toy problem): learn a generative model over names.
```
start_token = " "

with open("names") as f:
    names = f.read()[:-1].split('\n')

names = [start_token + name for name in names]

print('n samples =', len(names))
for x in names[::1000]:
    print(x)
```
# Text processing
```
# all unique characters go here
token_set = set()
for name in names:
    for letter in name:
        token_set.add(letter)
tokens = list(token_set)
print('n_tokens =', len(tokens))
#!token_to_id = <dictionary of symbol -> its identifier (index in tokens list)>
token_to_id = {t:i for i,t in enumerate(tokens) }
#!id_to_token = < dictionary of symbol identifier -> symbol itself>
id_to_token = {i:t for i,t in enumerate(tokens)}
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(list(map(len, names)), bins=25);

# truncate names longer than MAX_LEN characters
MAX_LEN = min([60, max(list(map(len, names)))])
# ADJUST IF YOU ARE UP TO SOMETHING SERIOUS
```
### Cast everything from symbols into identifiers
```
names_ix = list(map(lambda name: list(map(token_to_id.get, name)), names))

# crop long names and pad short ones
for i in range(len(names_ix)):
    names_ix[i] = names_ix[i][:MAX_LEN]  # crop too long
    if len(names_ix[i]) < MAX_LEN:
        names_ix[i] += [token_to_id[" "]] * (MAX_LEN - len(names_ix[i]))  # pad too short
assert len(set(map(len,names_ix)))==1
names_ix = np.array(names_ix)
```
# Input variables
```
input_sequence = T.matrix('token sequence', 'int32')
target_values = T.matrix('actual next token', 'int32')
```
# Build NN
You will be building a model that takes a token sequence and predicts the next token:
* input sequence
* one-hot / embedding
* recurrent layer(s)
* output layer(s) that predict output probabilities
```
from lasagne.layers import InputLayer,DenseLayer,EmbeddingLayer
from lasagne.layers import RecurrentLayer,LSTMLayer,GRULayer,CustomRecurrentLayer
l_in = lasagne.layers.InputLayer(shape=(None, None),input_var=input_sequence)
#!<Your neural network>
l_emb = lasagne.layers.EmbeddingLayer(l_in, len(tokens), 40)
l_rnn = lasagne.layers.RecurrentLayer(l_emb,40,nonlinearity=lasagne.nonlinearities.tanh)
#flatten batch and time to be compatible with feedforward layers (will un-flatten later)
l_rnn_flat = lasagne.layers.reshape(l_rnn, (-1,l_rnn.output_shape[-1]))
l_out = lasagne.layers.DenseLayer(l_rnn_flat,len(tokens), nonlinearity=lasagne.nonlinearities.softmax)
# Model weights
weights = lasagne.layers.get_all_params(l_out,trainable=True)
print(weights)
network_output = lasagne.layers.get_output(l_out)
#If you use dropout do not forget to create deterministic version for evaluation
predicted_probabilities_flat = network_output
correct_answers_flat = target_values.ravel()
loss = T.mean(lasagne.objectives.categorical_crossentropy(predicted_probabilities_flat, correct_answers_flat))
#<Loss function - a simple categorical crossentropy will do, maybe add some regularizer>
updates = lasagne.updates.adam(loss,weights)
```
# Compiling it
```
#training
train = theano.function([input_sequence, target_values], loss, updates=updates, allow_input_downcast=True)
#computing loss without training
compute_cost = theano.function([input_sequence, target_values], loss, allow_input_downcast=True)
```
# Generation
Simple:
* get an initial context (seed),
* predict next-token probabilities,
* sample the next token,
* add it to the context,
* repeat from step 2.
You'll get more detailed info on how this works in the homework section.
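The sampling step above boils down to sharpening or flattening the predicted distribution before drawing from it. A minimal NumPy sketch (the probability vector below is made up; `t` follows this notebook's convention of raising probabilities to the power `t` and renormalising):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.1, 0.2, 0.7])  # made-up next-token probabilities

def sample_with_temperature(p, t=1.0):
    # In this convention t > 1 sharpens the distribution and t < 1 flattens it:
    # probabilities are raised to the power t and renormalised before sampling
    q = p ** t / np.sum(p ** t)
    return rng.choice(len(p), p=q)

# With a very large t the most likely token wins almost deterministically
draws = [sample_with_temperature(p, t=50) for _ in range(20)]
print(draws)  # every draw picks index 2, the most likely token
```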
```
#compile the function that computes probabilities for next token given previous text.
#reshape back into original shape
next_word_probas = network_output.reshape((input_sequence.shape[0],input_sequence.shape[1],len(tokens)))
#predictions for next tokens (after sequence end)
last_word_probas = next_word_probas[:,-1]
probs = theano.function([input_sequence],last_word_probas,allow_input_downcast=True)
def generate_sample(seed_phrase=None, N=MAX_LEN, t=1, n_snippets=1):
    '''
    Generates text given a seed phrase of length at most MAX_LEN.

    The phrase is set using the variable seed_phrase.
    The optional input "N" sets the number of characters of text to predict.
    "t" is a temperature: probabilities are raised to the power t and renormalised.
    '''
    if seed_phrase is None:
        seed_phrase = start_token
    if len(seed_phrase) > MAX_LEN:
        seed_phrase = seed_phrase[-MAX_LEN:]
    assert type(seed_phrase) is str

    snippets = []
    for _ in range(n_snippets):
        sample_ix = []
        x = list(map(lambda c: token_to_id.get(c, 0), seed_phrase))
        x = np.array([x])
        for i in range(N):
            # Sample the next character proportionally to its (tempered) probability
            p = probs(x).ravel()
            p = p ** t / np.sum(p ** t)
            ix = np.random.choice(np.arange(len(tokens)), p=p)
            sample_ix.append(ix)
            x = np.hstack((x[:, -MAX_LEN + 1:], [[ix]]))  # slice columns, not rows
        random_snippet = seed_phrase + ''.join(id_to_token[ix] for ix in sample_ix)
        snippets.append(random_snippet)
    print("----\n %s \n----" % '; '.join(snippets))
```
# Model training
Here you can tweak parameters or insert your generation function
__Once something word-like starts generating, try increasing seq_length__
```
def sample_batch(data, batch_size):
    rows = data[np.random.randint(0, len(data), size=batch_size)]
    return rows[:, :-1], rows[:, 1:]

print("Training ...")
# total N iterations
n_epochs = 100
# how many minibatches are there in the epoch
batches_per_epoch = 500
# how many training sequences are processed in a single function call
batch_size = 10

for epoch in range(n_epochs):
    print("Generated names")
    generate_sample(n_snippets=10)
    avg_cost = 0
    for _ in range(batches_per_epoch):
        x, y = sample_batch(names_ix, batch_size)
        avg_cost += train(x, y)
    print("Epoch {} average loss = {}".format(epoch, avg_cost / batches_per_epoch))
generate_sample(n_snippets=10,t=1.5)
generate_sample(seed_phrase=" Putin",n_snippets=100)
```
# And now,
* try lstm/gru
* try several layers
* try mtg cards
* try your own dataset of any kind
# Training Models
```
import numpy as np
import pandas as pd
import os
import sys
import matplotlib as mpl
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings(action="ignore", message="^internal gelsd")
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "training_linear_models"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
    path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format=fig_extension, dpi=resolution)
```
## Linear regression using the Normal Equation
```
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([0, 2, 0, 15])
save_fig("generated_data_plot")
plt.show()
X_b = np.c_[np.ones((100, 1)), X]
theta = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
theta
X_new = np.array([[0], [2]])
X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance
y_predict = X_new_b.dot(theta)
y_predict
plt.plot(X_new, y_predict, 'r-')
plt.plot(X, y, 'b.')
plt.axis([0, 2, 0, 15])
plt.show()
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
lin_reg.intercept_, lin_reg.coef_
lin_reg.predict(X_new)
```
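The explicit matrix inverse above is fine for this tiny problem, but the same least-squares solution can be obtained with SVD-based routines that also handle ill-conditioned or singular $\mathbf{X}^T \mathbf{X}$. A quick cross-check (a sketch, regenerating similar data so the snippet is self-contained):

```python
import numpy as np

rng = np.random.default_rng(42)
X = 2 * rng.random((100, 1))
y = 4 + 3 * X + rng.standard_normal((100, 1))
X_b = np.c_[np.ones((100, 1)), X]  # add x0 = 1 to each instance

# Normal Equation with an explicit inverse (as in the cell above)
theta_inv = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y
# The same least-squares solution via SVD-based routines
theta_lstsq, *_ = np.linalg.lstsq(X_b, y, rcond=None)
theta_pinv = np.linalg.pinv(X_b) @ y

print(np.allclose(theta_inv, theta_lstsq), np.allclose(theta_inv, theta_pinv))  # both True
```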
### Linear regression using batch gradient descent
```
eta = 0.1
n_iterations = 100
m = 100
theta = np.random.randn(2, 1)
for i in range(n_iterations):
    gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
    theta = theta - eta * gradients
theta
```
### Stochastic Gradient Descent
```
n_epochs = 50
t0, t1 = 5, 50 # learning schedule hyperparameters
def learning_schedule(t):
    return t0 / (t + t1)

theta = np.random.randn(2, 1)  # random initialization

for epoch in range(n_epochs):
    for i in range(m):
        random_index = np.random.randint(m)
        xi = X_b[random_index:random_index+1]
        yi = y[random_index:random_index+1]
        gradients = 2 * xi.T.dot(xi.dot(theta) - yi)
        eta = learning_schedule(epoch * m + i)
        theta = theta - eta * gradients
theta
from sklearn.linear_model import SGDRegressor
sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1)
sgd_reg.fit(X, y.ravel())
sgd_reg.intercept_, sgd_reg.coef_
```
### Mini-batch Gradient Descent
```
theta_path_mgd = []
n_iterations = 50
minibatch_size = 20
np.random.seed(42)
theta = np.random.randn(2,1) # random initialization
t0, t1 = 200, 1000
def learning_schedule(t):
    return t0 / (t + t1)

t = 0
for epoch in range(n_iterations):
    shuffled_indices = np.random.permutation(m)
    X_b_shuffled = X_b[shuffled_indices]
    y_shuffled = y[shuffled_indices]
    for i in range(0, m, minibatch_size):
        t += 1
        xi = X_b_shuffled[i:i+minibatch_size]
        yi = y_shuffled[i:i+minibatch_size]
        gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi)
        eta = learning_schedule(t)
        theta = theta - eta * gradients
        theta_path_mgd.append(theta)
theta
```
### Polynomial Regression
```
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_data_plot")
plt.show()
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X)
X[0]
X_poly[0]
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
lin_reg.intercept_, lin_reg.coef_
X_new=np.linspace(-3, 3, 100).reshape(100, 1)
X_new_poly = poly_features.transform(X_new)
y_new = lin_reg.predict(X_new_poly)
plt.plot(X, y, "b.")
plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_predictions_plot")
plt.show()
```
### Learning Curves
```
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
def plot_learning_curves(model, X, y):
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
    train_errors, val_errors = [], []
    for m in range(1, len(X_train)):
        model.fit(X_train[:m], y_train[:m])
        y_train_pred = model.predict(X_train[:m])
        y_val_pred = model.predict(X_val)
        train_errors.append(mean_squared_error(y_train[:m], y_train_pred))
        val_errors.append(mean_squared_error(y_val, y_val_pred))
    plt.plot(np.sqrt(train_errors), 'r-+', linewidth=2, label='train')
    plt.plot(np.sqrt(val_errors), 'b-', linewidth=3, label='val')
    plt.show()
lin_reg = LinearRegression()
plot_learning_curves(lin_reg, X, y)
```
### Ridge Regression
```
from sklearn.linear_model import Ridge
ridge_reg = Ridge(alpha=1, solver='cholesky')
ridge_reg.fit(X, y)
ridge_reg.predict([[1.5]])
```
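Ridge regression also has a closed-form solution, $\hat{\theta} = (\mathbf{X}^T\mathbf{X} + \alpha\mathbf{A})^{-1}\mathbf{X}^T\mathbf{y}$, where $\mathbf{A}$ is the identity matrix with the bias entry zeroed so the intercept is not penalised. A sketch cross-checking this against scikit-learn (the data here is freshly generated, not the exact arrays from the cells above):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)
m = 100
X = 6 * rng.random((m, 1)) - 3
y = (0.5 * X ** 2 + X + 2 + rng.standard_normal((m, 1))).ravel()

alpha = 1.0
X_b = np.c_[np.ones((m, 1)), X]  # bias column
A = np.eye(2)
A[0, 0] = 0  # do not penalise the intercept
theta = np.linalg.solve(X_b.T @ X_b + alpha * A, X_b.T @ y)

ridge = Ridge(alpha=alpha, solver="cholesky")
ridge.fit(X, y)
print(np.allclose(theta, [ridge.intercept_, ridge.coef_[0]]))  # True
```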
### Lasso Regression
```
from sklearn.linear_model import Lasso
lasso_reg = Lasso(alpha=0.1)
lasso_reg.fit(X, y)
lasso_reg.predict([[1.5]])
```
### Elastic Net
```
from sklearn.linear_model import ElasticNet
elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5)
elastic_net.fit(X, y)
elastic_net.predict([[1.5]])
```
### Logistic Regression
```
from sklearn import datasets
iris = datasets.load_iris()
list(iris.keys())
X = iris['data'][:, 3:]  # petal width
y = (iris['target'] == 2).astype(int)
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression()
log_reg.fit(X, y)
X_new = np.linspace(0, 3, 1000).reshape(-1, 1)
y_proba = log_reg.predict_proba(X_new)
y_proba
plt.plot(X_new, y_proba[:,1], 'g-', label='Iris-Virginica')
plt.plot(X_new, y_proba[:,0], 'b--', label='Not Iris-Virginica')
plt.xlabel("Petal width", fontsize=18)
plt.ylabel("Probability", fontsize=18)
plt.legend()
plt.show()
```
### Softmax Regression
```
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10)
softmax_reg.fit(X, y)
softmax_reg.predict([[3, 4]])
softmax_reg.predict_proba([[3, 4]])
```
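Under the hood, `predict_proba` applies the softmax function $\sigma(\mathbf{s})_k = \exp(s_k) / \sum_j \exp(s_j)$ to the class scores. A minimal NumPy version (the scores below are made up for illustration):

```python
import numpy as np

def softmax(scores):
    # Subtract the max for numerical stability; the probabilities are unchanged
    exps = np.exp(scores - np.max(scores, axis=-1, keepdims=True))
    return exps / np.sum(exps, axis=-1, keepdims=True)

scores = np.array([2.0, 1.0, 0.1])  # made-up class scores
p = softmax(scores)
print(p.round(3), p.sum())  # three probabilities summing to 1
```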
## Convolutional Neural Network for MNIST image classification
```
import numpy as np
# from sklearn.utils.extmath import softmax
from matplotlib import pyplot as plt
import re
from tqdm import trange
from sklearn import metrics
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from mpl_toolkits.axes_grid1 import make_axes_locatable
import pandas as pd
from sklearn.datasets import fetch_openml
import matplotlib.gridspec as gridspec
from sklearn.decomposition import PCA
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = ['Times New Roman'] + plt.rcParams['font.serif']
```
## Alternating Least Squares for Matrix Factorization
```
def coding_within_radius(X, W, H0,
                         r=None,
                         a1=0,  # L1 regularizer
                         a2=0,  # L2 regularizer
                         sub_iter=[5],
                         stopping_grad_ratio=0.0001,
                         nonnegativity=True,
                         subsample_ratio=1):
    """
    Find \hat{H} = argmin_H ( || X - WH||_{F}^2 + a1*|H| + a2*|H|_{F}^{2} ) within radius r from H0.
    Uses row-wise projected gradient descent.
    """
    H1 = H0.copy()
    i = 0
    idx = np.arange(X.shape[1])
    if subsample_ratio > 1:  # subsample columns of X and solve the reduced problem (as in SGD)
        idx = np.random.randint(X.shape[1], size=X.shape[1] // subsample_ratio)
    A = W.T @ W  # needed for gradient computation
    B = W.T @ X[:, idx]
    while i < np.random.choice(sub_iter):
        H1_old = H1.copy()
        for k in np.arange(H0.shape[0]):
            # gradient of the row-wise objective: quadratic term + L1 + L2 penalties
            grad = (np.dot(A[k, :], H1[:, idx]) - B[k, :] + a1 * np.ones(len(idx))) + a2 * 2 * H1[k, idx]
            grad_norm = np.linalg.norm(grad, 2)
            step_size = 1 / (((i + 1) ** 1) * (A[k, k] + 1))
            if r is not None:  # restrict the step so H1 stays within radius r of H0
                d = step_size * grad_norm
                step_size = (r / max(r, d)) * step_size
            if step_size * grad_norm / np.linalg.norm(H1_old, 2) > stopping_grad_ratio:
                H1[k, idx] = H1[k, idx] - step_size * grad
            if nonnegativity:
                H1[k, idx] = np.maximum(H1[k, idx], np.zeros(shape=(len(idx),)))  # nonnegativity constraint
        i = i + 1
    return H1
def ALS(X,
        n_components=10,  # number of columns in the dictionary matrix W
        n_iter=100,
        a0=0,   # L1 regularizer for H
        a1=0,   # L1 regularizer for W
        a12=0,  # L2 regularizer for W
        H_nonnegativity=True,
        W_nonnegativity=True,
        compute_recons_error=False,
        subsample_ratio=10):
    '''
    Given a data matrix X, use alternating least squares to find factors W, H so that
        || X - WH ||_{F}^2 + a0*|H|_{1} + a1*|W|_{1} + a12 * |W|_{F}^{2}
    is minimized (at least locally).
    '''
    d, n = X.shape
    r = n_components

    # Initialize factors
    W = np.random.rand(d, r)
    H = np.random.rand(r, n)
    for i in trange(n_iter):
        H = coding_within_radius(X, W.copy(), H.copy(), a1=a0, nonnegativity=H_nonnegativity, subsample_ratio=subsample_ratio)
        W = coding_within_radius(X.T, H.copy().T, W.copy().T, a1=a1, a2=a12, nonnegativity=W_nonnegativity, subsample_ratio=subsample_ratio).T
        if compute_recons_error and (i % 10 == 0):
            print('iteration %i, reconstruction error %f' % (i, np.linalg.norm(X - W @ H) ** 2))
    return W, H
# Simulated Data and its factorization
W0 = np.random.rand(10,5)
H0 = np.random.rand(5,20)
X0 = W0 @ H0
W, H = ALS(X=X0,
n_components=5,
n_iter=100,
a0 = 0, # L1 regularizer for H
a1 = 1, # L1 regularizer for W
a12 = 0, # L2 regularizer for W
H_nonnegativity=True,
W_nonnegativity=True,
compute_recons_error=True,
subsample_ratio=1)
print('reconstruction error (relative) = %f' % (np.linalg.norm(X0-W@H)**2/np.linalg.norm(X0)**2))
print('Dictionary error (relative) = %f' % (np.linalg.norm(W0 - W)**2/np.linalg.norm(W0)**2))
print('Code error (relative) = %f' % (np.linalg.norm(H0-H)**2/np.linalg.norm(H0)**2))
```
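Without the regularisers and nonnegativity constraints, each ALS half-step has an exact least-squares solution, so the projected-gradient coding above can be cross-checked against a plain `np.linalg.lstsq` version. A sketch under those simplifying assumptions (unregularised, unconstrained, synthetic rank-5 data):

```python
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.random((10, 5)) @ rng.random((5, 20))  # synthetic rank-5 data matrix

r = 5
W = rng.random((10, r))
H = rng.random((r, 20))
for _ in range(20):
    # Fix W and solve min_H ||X0 - W H||_F exactly, then the symmetric step for W
    H, *_ = np.linalg.lstsq(W, X0, rcond=None)
    Wt, *_ = np.linalg.lstsq(H.T, X0.T, rcond=None)
    W = Wt.T

rel_err = np.linalg.norm(X0 - W @ H) ** 2 / np.linalg.norm(X0) ** 2
print(rel_err)  # essentially zero: exact ALS recovers the rank-5 matrix
```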
# Learn dictionary of MNIST images
```
def display_dictionary(W, save_name=None, score=None, grid_shape=None):
    k = int(np.sqrt(W.shape[0]))
    rows = int(np.sqrt(W.shape[1]))
    cols = int(np.sqrt(W.shape[1]))
    if grid_shape is not None:
        rows = grid_shape[0]
        cols = grid_shape[1]
    figsize0 = (6, 6)
    if (score is None) and (grid_shape is not None):
        figsize0 = (cols, rows)
    if (score is not None) and (grid_shape is not None):
        figsize0 = (cols, rows + 0.2)
    fig, axs = plt.subplots(nrows=rows, ncols=cols, figsize=figsize0,
                            subplot_kw={'xticks': [], 'yticks': []})
    for ax, i in zip(axs.flat, range(100)):
        if score is not None:
            idx = np.argsort(score)
            idx = np.flip(idx)
            ax.imshow(W.T[idx[i]].reshape(k, k), cmap="viridis", interpolation='nearest')
            ax.set_xlabel('%1.2f' % score[i], fontsize=13)  # largest scores first
            ax.xaxis.set_label_coords(0.5, -0.05)
        else:
            ax.imshow(W.T[i].reshape(k, k), cmap="viridis", interpolation='nearest')
    plt.tight_layout()
    # plt.suptitle('Dictionary learned from patches of size %d' % k, fontsize=16)
    plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.23)
    if save_name is not None:
        plt.savefig(save_name, bbox_inches='tight')
    plt.show()
def display_dictionary_list(W_list, label_list, save_name=None, score_list=None):
    # outer gridspec
    nrows = 1
    ncols = len(W_list)
    fig = plt.figure(figsize=(16, 5), constrained_layout=False)
    outer_grid = gridspec.GridSpec(nrows=nrows, ncols=ncols, wspace=0.1, hspace=0.05)
    # make nested gridspecs
    for i in range(1 * ncols):
        k = int(np.sqrt(W_list[i].shape[0]))
        sub_rows = int(np.sqrt(W_list[i].shape[1]))
        sub_cols = int(np.sqrt(W_list[i].shape[1]))
        idx = np.arange(W_list[i].shape[1])
        if score_list is not None:
            idx = np.argsort(score_list[i])
            idx = np.flip(idx)
        inner_grid = outer_grid[i].subgridspec(sub_rows, sub_cols, wspace=0.05, hspace=0.05)
        for j in range(sub_rows * sub_cols):
            a = j // sub_cols
            b = j % sub_cols  # sub-lattice indices
            ax = fig.add_subplot(inner_grid[a, b])
            ax.imshow(W_list[i].T[idx[j]].reshape(k, k), cmap="viridis", interpolation='nearest')
            ax.set_xticks([])
            if b > 0:
                ax.set_yticks([])
            if a < sub_rows - 1:
                ax.set_xticks([])
            if (a == 0) and (b == 2):
                ax.set_title(label_list[i], y=1.2, fontsize=14)
            if (score_list is not None) and (score_list[i] is not None):
                ax.set_xlabel('%1.2f' % score_list[i][idx[j]], fontsize=13)  # largest scores first
                ax.xaxis.set_label_coords(0.5, -0.07)
    plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.23)
    plt.savefig(save_name, bbox_inches='tight')
# Load data from https://www.openml.org/d/554
X, y = fetch_openml('mnist_784', version=1, return_X_y=True)
# X = X.values ### Uncomment this line if you are having type errors in plotting. It is loading as a pandas dataframe, but our indexing is for numpy array.
X = X / 255.
print('X.shape', X.shape)
print('y.shape', y.shape)
'''
Each row of X is a vectorization of an image of 28 x 28 = 784 pixels.
The corresponding entry of y holds the true class label from {0, 1, ..., 9}.
'''
# Unconstrained matrix factorization and dictionary images
idx = np.random.choice(np.arange(X.shape[1]), 100)
X0 = X[idx,:].T
W, H = ALS(X=X0,
n_components=25,
n_iter=50,
subsample_ratio=1,
W_nonnegativity=False,
H_nonnegativity=False,
compute_recons_error=True)
display_dictionary(W)
# PCA and dictionary images (principal components)
pca = PCA(n_components=24)
pca.fit(X)
W = pca.components_.T
s = pca.singular_values_
display_dictionary(W, score=s, save_name = "MNIST_PCA_ex1.pdf", grid_shape=[1,24])
idx = np.random.choice(np.arange(X.shape[1]), 100)
X0 = X[idx,:].T
n_iter = 10
W_list = []
nonnegativitiy = [[False, False], [False, True], [True, True]]
for i in np.arange(3):
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
W_nonnegativity=nonnegativitiy[i][0],
H_nonnegativity=nonnegativitiy[i][1],
compute_recons_error=True)
W_list.append(W)
label_list = []
for i in np.arange(len(nonnegativitiy)):
label = "W_nonnegativity = %s" % nonnegativitiy[i][0] + "\n" + "H_nonnegativity = %s" % nonnegativitiy[i][1]
label_list.append(label)
display_dictionary_list(W_list=W_list, label_list = label_list, save_name = "MNIST_NMF_ex1.pdf")
# MF and PCA on MNIST
idx = np.random.choice(np.arange(X.shape[1]), 100)
X0 = X[idx,:].T
n_iter = 100
W_list = []
H_list = []
nonnegativitiy = ['PCA', [False, False], [False, True], [True, True]]
#PCA
pca = PCA(n_components=25)
pca.fit(X)
W = pca.components_.T
s = pca.singular_values_
W_list.append(W)
H_list.append(s)
# MF
for i in np.arange(1,len(nonnegativitiy)):
print('!!! nonnegativitiy[i]', nonnegativitiy[i])
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
W_nonnegativity=nonnegativitiy[i][0],
H_nonnegativity=nonnegativitiy[i][1],
compute_recons_error=True)
W_list.append(W)
H_list.append(H)
label_list = []
for i in np.arange(len(nonnegativitiy)):
if i == 0:
label = nonnegativitiy[0]
else:
label = "W_nonnegativity = %s" % nonnegativitiy[i][0] + "\n" + "H_nonnegativity = %s" % nonnegativitiy[i][1]
label_list.append(label)
score_list = []
for i in np.arange(len(nonnegativitiy)):
if i == 0:
score_list.append(H_list[0])
else:
H = H_list[i]
score = np.sum(abs(H), axis=1) # sum of the coefficients of each columns of W = overall usage
score_list.append(score)
display_dictionary_list(W_list=W_list,
label_list = label_list,
score_list = score_list,
save_name = "MNIST_PCA_NMF_ex1.pdf")
def random_padding(img, thickness=1):
    # img = a x b image
    [a, b] = img.shape
    Y = np.zeros(shape=[a + thickness, b + thickness])
    r_loc = np.random.choice(np.arange(thickness + 1))
    c_loc = np.random.choice(np.arange(thickness + 1))
    Y[r_loc:r_loc + a, c_loc:c_loc + b] = img
    return Y

def list2onehot(y, list_classes):
    """
    y = list of class labels of length n
    output = n x k array, i-th row = one-hot encoding of y[i] (e.g., [0,0,1,0,0])
    """
    Y = np.zeros(shape=[len(y), len(list_classes)], dtype=int)
    for i in np.arange(Y.shape[0]):
        for j in np.arange(len(list_classes)):
            if y[i] == list_classes[j]:
                Y[i, j] = 1
    return Y

def onehot2list(y, list_classes=None):
    """
    y = n x k array, i-th row = one-hot encoding of y[i] (e.g., [0,0,1,0,0])
    output = list of class labels of length n
    """
    if list_classes is None:
        list_classes = np.arange(y.shape[1])
    y_list = []
    for i in np.arange(y.shape[0]):
        idx = np.where(y[i, :] == 1)
        idx = idx[0][0]
        y_list.append(list_classes[idx])
    return y_list
def sample_multiclass_MNIST_padding(list_digits=['0','1', '2'], full_MNIST=[X,y], padding_thickness=10):
# get train and test set from MNIST of given digits
# e.g., list_digits = ['0', '1', '2']
# pad each 28 x 28 image with zeros so that it has now "padding_thickness" more rows and columns
# The original image is superimposed at a uniformly chosen location
if full_MNIST is not None:
X, y = full_MNIST
else:
X, y = fetch_openml('mnist_784', version=1, return_X_y=True)
X = X / 255.
Y = list2onehot(y.tolist(), list_digits)
idx = [i for i in np.arange(len(y)) if y[i] in list_digits] # list of indices where the label y is in list_digits
X01 = X[idx,:]
y01 = Y[idx,:]
X_train = []
X_test = []
y_test = [] # list of one-hot encodings (indicator vectors) of each label
y_train = [] # list of one-hot encodings (indicator vectors) of each label
for i in trange(X01.shape[0]):
# for each example i, put it into the train set with probability 0.8 and into the test set otherwise
U = np.random.rand() # Uniform([0,1]) variable
img_padded = random_padding(X01[i,:].reshape(28,28), thickness=padding_thickness)
img_padded_vec = img_padded.reshape(1,-1)
if U<0.8:
X_train.append(img_padded_vec[0,:].copy())
y_train.append(y01[i,:].copy())
else:
X_test.append(img_padded_vec[0,:].copy())
y_test.append(y01[i,:].copy())
X_train = np.asarray(X_train)
X_test = np.asarray(X_test)
y_train = np.asarray(y_train)
y_test = np.asarray(y_test)
return X_train, X_test, y_train, y_test
# Simple MNIST binary classification experiments
list_digits=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
X_train, X_test, y_train, y_test = sample_multiclass_MNIST_padding(list_digits=list_digits,
full_MNIST=[X,y],
padding_thickness=20)
idx = np.random.choice(np.arange(X_train.shape[0]), 100)  # subsample 100 training examples (rows)
X0 = X_train[idx,:].T
n_iter = 100
W_list = []
nonnegativitiy = [[False, False], [False, True], [True, True]]
for i in np.arange(3):
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
W_nonnegativity=nonnegativitiy[i][0],
H_nonnegativity=nonnegativitiy[i][1],
compute_recons_error=True)
W_list.append(W)
label_list = []
for i in np.arange(len(nonnegativitiy)):
label = "W_nonnegativity = %s" % nonnegativitiy[i][0] + "\n" + "H_nonnegativity = %s" % nonnegativitiy[i][1]
label_list.append(label)
display_dictionary_list(W_list=W_list, label_list = label_list, save_name = "MNIST_NMF_ex2.pdf")
# MF and PCA on MNIST + padding
list_digits=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
X_train, X_test, y_train, y_test = sample_multiclass_MNIST_padding(list_digits=list_digits,
full_MNIST=[X,y],
padding_thickness=20)
idx = np.random.choice(np.arange(X_train.shape[0]), 100)  # subsample 100 training examples (rows)
X0 = X_train[idx,:].T
n_iter = 100
W_list = []
H_list = []
nonnegativitiy = ['PCA', [False, False], [False, True], [True, True]]
#PCA
pca = PCA(n_components=25)
pca.fit(X0.T)  # fit on the padded subsample X0, so W has the same dimension as the ALS factors
W = pca.components_.T
s = pca.singular_values_
W_list.append(W)
H_list.append(s)
# MF
for i in np.arange(1,len(nonnegativitiy)):
print('!!! nonnegativitiy[i]', nonnegativitiy[i])
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
W_nonnegativity=nonnegativitiy[i][0],
H_nonnegativity=nonnegativitiy[i][1],
compute_recons_error=True)
W_list.append(W)
H_list.append(H)
label_list = []
for i in np.arange(len(nonnegativitiy)):
if i == 0:
label = nonnegativitiy[0]
else:
label = "W_nonnegativity = %s" % nonnegativitiy[i][0] + "\n" + "H_nonnegativity = %s" % nonnegativitiy[i][1]
label_list.append(label)
score_list = []
for i in np.arange(len(nonnegativitiy)):
if i == 0:
score_list.append(H_list[0])
else:
H = H_list[i]
score = np.sum(abs(H), axis=1) # row sums of |H| = overall usage of each dictionary atom
score_list.append(score)
display_dictionary_list(W_list=W_list,
label_list = label_list,
score_list = score_list,
save_name = "MNIST_PCA_NMF_ex2.pdf")
```
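The comparisons above use the notebook's custom `ALS` routine. As a standalone sanity check of the underlying point — relaxing the nonnegativity constraints can only improve (or match) the best achievable reconstruction error — here is a minimal sketch using scikit-learn's `TruncatedSVD` (unconstrained optimal rank-$r$ fit, by Eckart–Young) against `NMF` on the same synthetic nonnegative data; this is an illustration, not the notebook's `ALS` internals:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD, NMF

rng = np.random.default_rng(0)
X = rng.random((100, 50))  # synthetic nonnegative data matrix
r = 5

# unconstrained rank-r factorization via truncated SVD
svd = TruncatedSVD(n_components=r, random_state=0)
Z = svd.fit_transform(X)            # (100, r) scores
X_svd = Z @ svd.components_         # rank-r reconstruction

# nonnegativity-constrained factorization
nmf = NMF(n_components=r, init='nndsvd', max_iter=500, random_state=0)
W = nmf.fit_transform(X)
X_nmf = W @ nmf.components_

err_svd = np.linalg.norm(X - X_svd)
err_nmf = np.linalg.norm(X - X_nmf)
print(err_svd, err_nmf)  # the unconstrained error is never larger
```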
## Dictionary Learning for Face datasets
```
from sklearn.datasets import fetch_olivetti_faces
faces, _ = fetch_olivetti_faces(return_X_y=True, shuffle=True,
                                random_state=0)  # np.random.seed(0) returns None, so pass the seed directly
n_samples, n_features = faces.shape
# global centering
#faces_centered = faces - faces.mean(axis=0)
# local centering
#faces_centered -= faces_centered.mean(axis=1).reshape(n_samples, -1)
print("Dataset consists of %d faces" % n_samples)
print("faces.shape", faces.shape)
# Plot some sample images
ncols = 10
nrows = 4
fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=[15, 6.5])
for j in np.arange(ncols):
for i in np.arange(nrows):
ax[i,j].imshow(faces[i*ncols + j].reshape(64,64), cmap="gray")
#if i == 0:
# ax[i,j].set_title("label$=$%s" % y[idx_subsampled[i]], fontsize=14)
# ax[i].legend()
plt.subplots_adjust(wspace=0.3, hspace=-0.1)
plt.savefig('Faces_ex1.pdf', bbox_inches='tight')
# PCA and dictionary images (principal components)
X0 = faces.T
pca = PCA(n_components=24)
pca.fit(X0.T)
W = pca.components_.T
s = pca.singular_values_
display_dictionary(W, score=s, save_name = "Faces_PCA_ex1.pdf", grid_shape=[2,12])
# Variable nonnegativity constraints
X0 = faces.T
#X0 /= 100 * np.linalg.norm(X0)
n_iter = 200
W_list = []
nonnegativitiy = [[False, False], [False, True], [True, True]]
for i in np.arange(3):
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
W_nonnegativity=nonnegativitiy[i][0],
H_nonnegativity=nonnegativitiy[i][1],
compute_recons_error=True)
W_list.append(W)
label_list = []
for i in np.arange(len(nonnegativitiy)):
label = "W_nonnegativity = %s" % nonnegativitiy[i][0] + "\n" + "H_nonnegativity = %s" % nonnegativitiy[i][1]
label_list.append(label)
display_dictionary_list(W_list=W_list, label_list = label_list, save_name = "Face_NMF_ex1.pdf")
n_iter = 200
W_list = []
H_list = []
X0 = faces.T
#X0 /= 100 * np.linalg.norm(X0)
nonnegativitiy = ['PCA', [False, False], [False, True], [True, True]]
#PCA
pca = PCA(n_components=25)
pca.fit(X0.T)
W = pca.components_.T
s = pca.singular_values_
W_list.append(W)
H_list.append(s)
# MF
for i in np.arange(1,len(nonnegativitiy)):
print('!!! nonnegativitiy[i]', nonnegativitiy[i])
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
W_nonnegativity=nonnegativitiy[i][0],
H_nonnegativity=nonnegativitiy[i][1],
compute_recons_error=True)
W_list.append(W)
H_list.append(H)
label_list = []
for i in np.arange(len(nonnegativitiy)):
if i == 0:
label = nonnegativitiy[0]
else:
label = "W_nonnegativity = %s" % nonnegativitiy[i][0] + "\n" + "H_nonnegativity = %s" % nonnegativitiy[i][1]
label_list.append(label)
score_list = []
for i in np.arange(len(nonnegativitiy)):
if i == 0:
score_list.append(H_list[0])
else:
H = H_list[i]
score = np.sum(abs(H), axis=1) # row sums of |H| = overall usage of each dictionary atom
score_list.append(score)
display_dictionary_list(W_list=W_list,
label_list = label_list,
score_list = score_list,
save_name = "Faces_PCA_NMF_ex1.pdf")
# Variable regularizer for W
X0 = faces.T
print('X0.shape', X0.shape)
n_iter = 200
W_list = []
W_sparsity = [[0, 0], [0.5, 0], [0, 3]]
for i in np.arange(3):
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
a1 = W_sparsity[i][0], # L1 regularizer for W
a12 = W_sparsity[i][1], # L2 regularizer for W
W_nonnegativity=True,
H_nonnegativity=True,
compute_recons_error=True)
W_list.append(W)
label_list = []
for i in np.arange(len(W_sparsity)):
label = "W_$L_{1}$-regularizer = %.2f" % W_sparsity[i][0] + "\n" + "W_$L_{2}$-regularizer = %.2f" % W_sparsity[i][1]
label_list.append(label)
display_dictionary_list(W_list=W_list, label_list = label_list, save_name = "Face_NMF_ex2.pdf")
n_iter = 200
W_list = []
H_list = []
X0 = faces.T
#X0 /= 100 * np.linalg.norm(X0)
W_sparsity = ['PCA', [0, 0], [0.5, 0], [0, 3]]
#PCA
pca = PCA(n_components=25)
pca.fit(X0.T)
W = pca.components_.T
s = pca.singular_values_
W_list.append(W)
H_list.append(s)
# MF
for i in np.arange(1,len(W_sparsity)):
print('W_sparsity[i]', W_sparsity[i])
W, H = ALS(X=X0,
n_components=25,
n_iter=n_iter,
subsample_ratio=1,
a1 = W_sparsity[i][0], # L1 regularizer for W
a12 = W_sparsity[i][1], # L2 regularizer for W
W_nonnegativity=True,
H_nonnegativity=True,
compute_recons_error=True)
W_list.append(W)
H_list.append(H)
label_list = []
for i in np.arange(len(W_sparsity)):
if i == 0:
label = W_sparsity[0]
else:
label = "W_$L_{1}$-regularizer = %.2f" % W_sparsity[i][0] + "\n" + "W_$L_{2}$-regularizer = %.2f" % W_sparsity[i][1]
label_list.append(label)
score_list = []
for i in np.arange(len(W_sparsity)):
if i == 0:
score_list.append(H_list[0])
else:
H = H_list[i]
score = np.sum(abs(H), axis=1) # row sums of |H| = overall usage of each dictionary atom
score_list.append(score)
display_dictionary_list(W_list=W_list,
label_list = label_list,
score_list = score_list,
save_name = "Faces_PCA_NMF_ex2.pdf")
```
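The `a1` (L1) and `a12` (L2) regularizers passed to `ALS` above control the sparsity of the dictionary `W`. How an L1 penalty produces exact zeros can be illustrated with its proximal operator, soft-thresholding — a minimal standalone sketch (pure numpy, illustrative; not the notebook's `ALS` internals):

```python
import numpy as np

def soft_threshold(W, lam):
    # proximal operator of lam * ||W||_1: shrink every entry toward zero,
    # setting entries with magnitude below lam exactly to zero
    return np.sign(W) * np.maximum(np.abs(W) - lam, 0.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
for lam in (0.0, 0.5, 1.5):
    frac_zero = np.mean(soft_threshold(W, lam) == 0.0)
    print(lam, frac_zero)  # sparsity fraction grows with the penalty
```

In contrast, an L2 penalty only shrinks entries multiplicatively and never zeroes them out, which is why the L1-regularized dictionaries above look visibly sparser.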
## Topic modeling for 20Newsgroups dataset
```
from nltk.corpus import stopwords
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from wordcloud import WordCloud, STOPWORDS
from scipy.stats import entropy
import pandas as pd
import re  # used below to strip digits from the raw documents
def list2onehot(y, list_classes):
"""
y = list of class labels of length n
output = n x k array, i-th row = one-hot encoding of y[i] (e.g., [0,0,1,0,0])
"""
Y = np.zeros(shape = [len(y), len(list_classes)], dtype=int)
for i in np.arange(Y.shape[0]):
for j in np.arange(len(list_classes)):
if y[i] == list_classes[j]:
Y[i,j] = 1
return Y
def onehot2list(y, list_classes=None):
"""
y = n x k array, i-th row = one-hot encoding of y[i] (e.g., [0,0,1,0,0])
output = list of class labels of length n
"""
if list_classes is None:
list_classes = np.arange(y.shape[1])
y_list = []
for i in np.arange(y.shape[0]):
idx = np.where(y[i,:]==1)
idx = idx[0][0]
y_list.append(list_classes[idx])
return y_list
remove = ('headers','footers','quotes')
stopwords_list = stopwords.words('english')
stopwords_list.extend(['thanks','edu','also','would','one','could','please','really','many','anyone','good','right','get','even','want','must','something','well','much','still','said','stay','away','first','looking','things','try','take','look','make','may','include','thing','like','two','or','etc','phone','oh','email'])
categories = [
'comp.graphics',
'comp.sys.mac.hardware',
'misc.forsale',
'rec.motorcycles',
'rec.sport.baseball',
'sci.med',
'sci.space',
'talk.politics.guns',
'talk.politics.mideast',
'talk.religion.misc'
]
newsgroups_train = fetch_20newsgroups(subset='train', categories=categories, remove=remove)
newsgroups_labels = newsgroups_train.target
# remove numbers
data_cleaned = [re.sub(r'\d+','', file) for file in newsgroups_train.data]
# print 10 random documents
#for i in np.arange(10):
# idx = np.random.choice(len(data_cleaned))
# print('>>>> %i th doc \n\n %s \n\n' % (idx, data_cleaned[idx]))
print('len(newsgroups_labels)', len(newsgroups_labels))
print('newsgroups_labels', newsgroups_labels)
print('data_cleaned[1]', data_cleaned[1])
print('newsgroups_labels[1]', newsgroups_labels[1])
# vectorizer = TfidfVectorizer(stop_words=stopwords_list)
vectorizer_BOW = CountVectorizer(stop_words=stopwords_list)
vectors_BOW = vectorizer_BOW.fit_transform(data_cleaned).transpose() # words x docs # in the form of sparse matrix
vectorizer = TfidfVectorizer(stop_words=stopwords_list)
vectors = vectorizer.fit_transform(data_cleaned).transpose() # words x docs # in the form of sparse matrix
idx_to_word = np.array(vectorizer.get_feature_names()) # list of words that corresponds to feature coordinates
print('>>>> vectors.shape', vectors.shape)
i = 4257
print('newsgroups_labels[i]', newsgroups_labels[i])
print('>>>> data_cleaned[i]', data_cleaned[i])
# print('>>>> vectors[:,i] \n', vectors[:,i])
a = vectors[:,i].todense()
I = np.where(a>0)
count_list = []
word_list = []
for j in np.arange(len(I[0])):
# idx = np.random.choice(I[0])
idx = I[0][j]
# print('>>>> %i th coordinate <===> %s, count %i' % (idx, idx_to_word[idx], vectors[idx, i]))
count_list.append([idx, vectors_BOW[idx, i], vectors[idx, i]])
word_list.append(idx_to_word[idx])
d = pd.DataFrame(data=np.asarray(count_list).T, columns=word_list).T
d.columns = ['Coordinate', 'Bag-of-words', 'tf-idf']
cols = ['Coordinate', 'Bag-of-words']
d[cols] = d[cols].applymap(np.int64)
print(d)
def sample_multiclass_20NEWS(list_classes=[0, 1], full_data=None, vectorizer = 'tf-idf', verbose=True):
# get train and test set from 20NewsGroups of given categories
# vectorizer \in ['tf-idf', 'bag-of-words']
# documents are loaded up from the following 10 categories
categories = [
'comp.graphics',
'comp.sys.mac.hardware',
'misc.forsale',
'rec.motorcycles',
'rec.sport.baseball',
'sci.med',
'sci.space',
'talk.politics.guns',
'talk.politics.mideast',
'talk.religion.misc'
]
data_dict = {}
data_dict.update({'categories': categories})
if full_data is None:
remove = ('headers','footers','quotes')
stopwords_list = stopwords.words('english')
stopwords_list.extend(['thanks','edu','also','would','one','could','please','really','many','anyone','good','right','get','even','want','must','something','well','much','still','said','stay','away','first','looking','things','try','take','look','make','may','include','thing','like','two','or','etc','phone','oh','email'])
newsgroups_train_full = fetch_20newsgroups(subset='train', categories=categories, remove=remove) # raw documents
newsgroups_train = [re.sub(r'\d+','', file) for file in newsgroups_train_full.data] # remove numbers (we are only interested in words)
y = newsgroups_train_full.target # document class labels
Y = list2onehot(y.tolist(), list_classes)
if vectorizer == 'tf-idf':  # match the spelling documented above and used by callers
vectorizer = TfidfVectorizer(stop_words=stopwords_list)
else:
vectorizer = CountVectorizer(stop_words=stopwords_list)
X = vectorizer.fit_transform(newsgroups_train) # words x docs # in the form of sparse matrix
X = np.asarray(X.todense())
print('!! X.shape', X.shape)
idx2word = np.array(vectorizer.get_feature_names()) # list of words that corresponds to feature coordinates
data_dict.update({'newsgroups_train': newsgroups_train})
data_dict.update({'newsgroups_labels': y})
data_dict.update({'feature_matrix': X})
data_dict.update({'idx2word': idx2word})
else:
X, y = full_data
Y = list2onehot(y.tolist(), list_classes)
idx = [i for i in np.arange(len(y)) if y[i] in list_classes] # list of indices where the label y is in list_classes
X01 = X[idx,:]
Y01 = Y[idx,:]
X_train = []
X_test = []
y_test = [] # list of one-hot encodings (indicator vectors) of each label
y_train = [] # list of one-hot encodings (indicator vectors) of each label
for i in np.arange(X01.shape[0]):
# for each example i, put it into the train set with probability 0.8 and into the test set otherwise
U = np.random.rand() # Uniform([0,1]) variable
if U<0.8:
X_train.append(X01[i,:])
y_train.append(Y01[i,:].copy())
else:
X_test.append(X01[i,:])
y_test.append(Y01[i,:].copy())
X_train = np.asarray(X_train)
X_test = np.asarray(X_test)
y_train = np.asarray(y_train)
y_test = np.asarray(y_test)
data_dict.update({'X_train': X_train})
data_dict.update({'X_test': X_test})
data_dict.update({'y_train': y_train})
data_dict.update({'y_test': y_test})
return X_train, X_test, y_train, y_test, data_dict
# test
X_train, X_test, y_train, y_test, data_dict = sample_multiclass_20NEWS(list_classes=[0, 1, 2,3,4,5,6,7,8,9],
vectorizer = 'tf-idf',
full_data=None)
print('X_train.shape', X_train.shape)
print('X_test.shape', X_test.shape)
print('y_train.shape', y_train.shape)
print('y_test.shape', y_test.shape)
print('y_test', y_test)
#print('y_list', onehot2list(y_test))
idx2word = data_dict.get('idx2word')
categories = data_dict.get('categories')
import random
def grey_color_func(word, font_size, position, orientation, random_state=None,
**kwargs):
return "hsl(0, 0%%, %d%%)" % random.randint(60, 100)
def plot_topic_wordcloud(W, idx2word, num_keywords_in_topic=5, save_name=None, grid_shape = [2,5]):
# plot the class-conditional PMF as wordclouds
# W = (p x r) (words x topic)
# idx2words = list of words used in the vectorization of documents
# categories = list of class labels
# prior on class labels = empirical PMF = [ # class i examples / total ]
# class-conditional for class i = [ # word j in class i examples / # words in class i examples]
fig, axs = plt.subplots(nrows=grid_shape[0], ncols=grid_shape[1], figsize=(15, 6), subplot_kw={'xticks': [], 'yticks': []})
for ax, i in zip(axs.flat, np.arange(W.shape[1])):
# dist = W[:,i]/np.sum(W[:,i])
### Take top k keywords in each topic (top k coordinates in each column of W)
### to generate text data corresponding to the ith topic, and then generate its wordcloud
list_words = []
idx = np.argsort(W[:,i])
idx = np.flip(idx)
for j in range(num_keywords_in_topic):
list_words.append(idx2word[idx[j]])
Y = " ".join(list_words)
#stopwords = STOPWORDS
#stopwords.update(["’", "“", "”", "000", "000 000", "https", "co", "19", "2019", "coronavirus",
# "virus", "corona", "covid", "ncov", "covid19", "amp"])
wc = WordCloud(background_color="black",
relative_scaling=0,
width=400,
height=400).generate(Y)
ax.imshow(wc.recolor(color_func=grey_color_func, random_state=3),
interpolation="bilinear")
# ax.set_xlabel(categories[i], fontsize='20')
# ax.axis("off")
plt.tight_layout()
plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.08)
if save_name is not None:
plt.savefig(save_name, bbox_inches='tight')
X0 = X_train.T
print('X0.shape', X0.shape)
W, H = ALS(X=X0,
n_components=10,
n_iter=20,
subsample_ratio=1,
a1 = 0, # L1 regularizer for W
a12 = 0, # L2 regularizer for W
W_nonnegativity=True,
H_nonnegativity=True,
compute_recons_error=True)
plot_topic_wordcloud(W, idx2word=idx2word, num_keywords_in_topic=7, grid_shape=[2,5], save_name="20NEWS_topic1.pdf")
# Topic modeling by NMF
X0 = X_train.T
W, H = ALS(X=X0,
n_components=10,
n_iter=20,
subsample_ratio=1,
a1 = 0, # L1 regularizer for W
a12 = 0, # L2 regularizer for W
W_nonnegativity=True,
H_nonnegativity=False,
compute_recons_error=True)
plot_topic_wordcloud(W, idx2word=idx2word, num_keywords_in_topic=7, grid_shape = [2,5], save_name="20NEWS_topic2.pdf")
```
## EM algorithm for PCA
```
# Gram-Schmidt Orthogonalization of a given matrix
def orthogonalize(U, eps=1e-15):
"""
Orthogonalizes the matrix U (d x n) using Gram-Schmidt Orthogonalization.
If the columns of U are linearly dependent with rank(U) = r, the last n-r columns
will be 0.
Args:
U (numpy.array): A d x n matrix with columns that need to be orthogonalized.
eps (float): Threshold value below which numbers are regarded as 0 (default=1e-15).
Returns:
(numpy.array): A d x n orthogonal matrix. If the input matrix U's cols were
not linearly independent, then the last n-r cols are zeros.
"""
n = len(U[0])
# numpy can readily reference rows using indices, but referencing full rows is a little
# dirty. So, work with transpose(U)
V = U.T
for i in range(n):
prev_basis = V[0:i] # orthonormal basis before V[i]
coeff_vec = np.dot(prev_basis, V[i].T) # each entry is np.dot(V[j], V[i]) for all j < i
# subtract projections of V[i] onto already determined basis V[0:i]
V[i] -= np.dot(coeff_vec, prev_basis).T
if np.linalg.norm(V[i]) < eps:
V[i][V[i] < eps] = 0. # set the small entries to 0
else:
V[i] /= np.linalg.norm(V[i])
return V.T
# Example:
A = np.random.rand(2,2)
print('A \n', A)
B = orthogonalize(A)  # note: orthogonalize modifies its argument in place
print('orthogonalize(A) \n', B)
print('B.T @ B \n', B.T @ B)  # ~ identity: columns are orthonormal
def EM_PCA(X,
n_components = 10, # number of columns in the dictionary matrix W
n_iter=10,
W_ini=None,
subsample_ratio=1,
n_workers = 1):
'''
Given data matrix X of shape (d x n), compute its rank r=n_components PCA:
\hat{W} = \argmax_{W} var(Proj_{W}(X))
= \argmin_{W} || X - Proj_{W}(X) ||_{F}^{2}
where W is an (d x r) matrix of rank r.
'''
d, n = X.shape
r = n_components
X_mean = np.mean(X, axis=1).reshape(-1,1)
X_centered = X - np.repeat(X_mean, X.shape[1], axis=1)
print('subsample_size:', n//subsample_ratio)
# Initialize factors
W_list = []
loss_list = []
for i in trange(n_workers):
W = np.random.rand(d,r)
if W_ini is not None:
W = W_ini
A = np.zeros(shape=[r, n//subsample_ratio]) # aggregate matrix for code H
# Perform EM updates
for j in np.arange(n_iter):
idx_data = np.random.choice(np.arange(X.shape[1]), X.shape[1]//subsample_ratio, replace=False)
X1 = X_centered[:,idx_data]
H = np.linalg.inv(W.T @ W) @ (W.T @ X1) # E-step
# A = (1-(1/(j+1)))*A + (1/(j+1))*H # Aggregation
W = X1 @ H.T @ np.linalg.inv(H @ H.T) # M-step
# W = X1 @ A.T @ np.linalg.inv(A @ A.T) # M-step
# W = orthogonalize(W)
#if compute_recons_error and (j > n_iter-2) :
# print('iteration %i, reconstruction error %f' % (j, np.linalg.norm(X_centered-W@(W.T @ X_centered))))
W_list.append(W.copy())
loss_list.append(np.linalg.norm(X_centered-W@(W.T @ X_centered)))
idx = np.argsort(loss_list)[0]
W = W_list[idx]
print('loss_list',np.asarray(loss_list)[np.argsort(loss_list)])
return orthogonalize(W)
# Load Olivetti Face dataset
from sklearn.datasets import fetch_olivetti_faces
faces, _ = fetch_olivetti_faces(return_X_y=True, shuffle=True,
                                random_state=0)  # np.random.seed(0) returns None, so pass the seed directly
n_samples, n_features = faces.shape
# global centering
#faces_centered = faces - faces.mean(axis=0)
# local centering
#faces_centered -= faces_centered.mean(axis=1).reshape(n_samples, -1)
print("Dataset consists of %d faces" % n_samples)
print("faces.shape", faces.shape)
# EM_PCA and dictionary images (principal components)
X0 = faces.T
W = EM_PCA(X0, W_ini = None, n_workers=10, n_iter=200, subsample_ratio=2, n_components=24)
display_dictionary(W, score=None, save_name = "Faces_EM_PCA_ex1.pdf", grid_shape=[2,12])
cov = np.cov(X0)
pca = PCA(n_components=24)
pca.fit(X0.T)
W0 = pca.components_.T  # exact principal components, for comparison with EM PCA
print('(cov @ W)[:,0] / W[:,0]', (cov @ W)[:,0] / W[:,0])
print('var coeff', np.std((cov @ W)[:,0] / W[:,0]))
print('var coeff exact', np.std((cov @ W0)[:,0] / W0[:,0]))
# plot coefficients of Cov @ W / W for exact PCA and EM PCA
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 3))
pca = PCA(n_components=24)
pca.fit(X0.T)
W0 = pca.components_.T
axs[0].plot((cov @ W0)[:,0] / W0[:,0], label='Exact PCA, 1st comp.')
axs[0].legend(fontsize=13)
axs[1].plot((cov @ W)[:,0] / W[:,0], label='EM PCA, 1st comp.')
axs[1].legend(fontsize=13)
plt.savefig("EM_PCA_coeff_plot1.pdf", bbox_inches='tight')
X0 = faces.T
pca = PCA(n_components=24)
pca.fit(X0.T)
W0 = pca.components_.T
s = pca.singular_values_
cov = np.cov(X0)
print('(cov @ W)[:,0] / W[:,0]', (cov @ W0)[:,0] / W0[:,0])
display_dictionary(W0, score=s, save_name = "Faces_PCA_ex1.pdf", grid_shape=[2,12])
X_mean = np.sum(X0, axis=1).reshape(-1,1)/X0.shape[1]
X_centered = X0 - np.repeat(X_mean, X0.shape[1], axis=1)
Cov = (X_centered @ X_centered.T) / X0.shape[1]
(Cov @ W)[:,0] / W[:,0]
cov = np.cov(X0)
(cov @ W0)[:,0] / W0[:,0]
eig_val, eig_vec = np.linalg.eig(cov)
np.real(eig_val[0])
np.sort(np.real(eig_val))
x = np.array([
[0.387,4878, 5.42],
[0.723,12104,5.25],
[1,12756,5.52],
[1.524,6787,3.94],
])
#centering the data
x0 = x - np.mean(x, axis = 0)
cov = np.cov(x0, rowvar = False)
print('cov (centered input)', cov)
print('cov (raw input)', np.cov(x, rowvar = False))  # identical: np.cov centers internally
evals , evecs = np.linalg.eigh(cov)
evals
```
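The E- and M-steps used in `EM_PCA` above can be verified on synthetic data: iterating them on a centered data matrix converges to the top-$r$ principal subspace of the sample covariance. A standalone sketch (pure numpy, not using the notebook's helpers) compares the converged subspace against the exact eigenvectors via principal angles:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, r = 20, 500, 3
scales = 2.0 ** -np.arange(d)                      # well-separated spectrum
X = rng.normal(size=(d, n)) * scales[:, None]
Xc = X - X.mean(axis=1, keepdims=True)

W = rng.normal(size=(d, r))
for _ in range(100):
    H = np.linalg.solve(W.T @ W, W.T @ Xc)         # E-step
    W = Xc @ H.T @ np.linalg.inv(H @ H.T)          # M-step

evals, evecs = np.linalg.eigh(np.cov(Xc))
V = evecs[:, -r:]                                  # exact top-r eigenvectors
Q, _ = np.linalg.qr(W)                             # orthonormal basis of the EM subspace
angles = np.linalg.svd(Q.T @ V, compute_uv=False)  # cosines of the principal angles
print(np.allclose(angles, 1.0, atol=1e-6))         # all ~1 when the subspaces coincide
```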
# Quantum Kernel Machine Learning
The general task of machine learning is to find and study patterns in data. For many datasets, the datapoints are better understood in a higher dimensional feature space, through the use of a kernel function:
$k(\vec{x}_i, \vec{x}_j) = \langle f(\vec{x}_i), f(\vec{x}_j) \rangle$
where $k$ is the kernel function, $\vec{x}_i, \vec{x}_j$ are $n$-dimensional inputs, $f$ is a map from the $n$-dimensional input space to an $m$-dimensional feature space, and $\langle a,b \rangle$ denotes the dot product. When considering finite data, a kernel function can be represented as a matrix:
$K_{ij} = k(\vec{x}_i,\vec{x}_j)$.
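To make these definitions concrete, here is a small classical sketch in pure numpy with a hypothetical feature map $f(x) = (x, x^2)$, building the kernel matrix $K_{ij} = \langle f(\vec{x}_i), f(\vec{x}_j) \rangle$ directly:

```python
import numpy as np

def f(x):
    # hypothetical feature map lifting a 1-D input to (x, x^2)
    return np.array([x, x ** 2])

xs = np.array([0.0, 1.0, 2.0])
F = np.stack([f(x) for x in xs])   # n x m matrix of mapped inputs
K = F @ F.T                        # K[i, j] = <f(x_i), f(x_j)>
print(K)                           # symmetric positive semidefinite
```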
In quantum kernel machine learning, a quantum feature map $\phi(\vec{x})$ is used to map a classical feature vector $\vec{x}$ to a quantum Hilbert space, $| \phi(\vec{x})\rangle \langle \phi(\vec{x})|$, such that $K_{ij} = \left| \langle \phi^\dagger(\vec{x}_j)| \phi(\vec{x}_i) \rangle \right|^{2}$. See [_Supervised learning with quantum enhanced feature spaces_](https://arxiv.org/pdf/1804.11326.pdf) for more details.
In this notebook, we use `qiskit` to calculate a kernel matrix using a quantum feature map, then use this kernel matrix in `scikit-learn` classification and clustering algorithms.
```
import matplotlib.pyplot as plt
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import SpectralClustering
from sklearn.metrics import normalized_mutual_info_score
from qiskit import BasicAer
from qiskit.circuit.library import ZZFeatureMap
from qiskit.utils import QuantumInstance, algorithm_globals
from qiskit_machine_learning.algorithms import QSVC
from qiskit_machine_learning.kernels import QuantumKernel
from qiskit_machine_learning.datasets import ad_hoc_data
seed = 12345
algorithm_globals.random_seed = seed
```
## Classification
For our classification example, we will use the _ad hoc dataset_ as described in [_Supervised learning with quantum enhanced feature spaces_](https://arxiv.org/pdf/1804.11326.pdf), and the `scikit-learn` [support vector machine](https://scikit-learn.org/stable/modules/svm.html) classification (`svc`) algorithm.
```
adhoc_dimension = 2
train_features, train_labels, test_features, test_labels, adhoc_total = ad_hoc_data(
training_size=20,
test_size=5,
n=adhoc_dimension,
gap=0.3,
plot_data=False, one_hot=False, include_sample_total=True
)
plt.figure(figsize=(5, 5))
plt.ylim(0, 2 * np.pi)
plt.xlim(0, 2 * np.pi)
plt.imshow(np.asmatrix(adhoc_total).T, interpolation='nearest',
origin='lower', cmap='RdBu', extent=[0, 2 * np.pi, 0, 2 * np.pi])
plt.scatter(train_features[np.where(train_labels[:] == 0), 0], train_features[np.where(train_labels[:] == 0), 1],
marker='s', facecolors='w', edgecolors='b', label="A train")
plt.scatter(train_features[np.where(train_labels[:] == 1), 0], train_features[np.where(train_labels[:] == 1), 1],
marker='o', facecolors='w', edgecolors='r', label="B train")
plt.scatter(test_features[np.where(test_labels[:] == 0), 0], test_features[np.where(test_labels[:] == 0), 1],
marker='s', facecolors='b', edgecolors='w', label="A test")
plt.scatter(test_features[np.where(test_labels[:] == 1), 0], test_features[np.where(test_labels[:] == 1), 1],
marker='o', facecolors='r', edgecolors='w', label="B test")
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.)
plt.title("Ad hoc dataset for classification")
plt.show()
```
With our training and testing datasets ready, we set up the `QuantumKernel` class to calculate a kernel matrix using the [ZZFeatureMap](https://qiskit.org/documentation/stubs/qiskit.circuit.library.ZZFeatureMap.html), and the `BasicAer` `qasm_simulator` using 1024 shots.
```
adhoc_feature_map = ZZFeatureMap(feature_dimension=adhoc_dimension,
reps=2, entanglement='linear')
adhoc_backend = QuantumInstance(BasicAer.get_backend('qasm_simulator'), shots=1024,
seed_simulator=seed, seed_transpiler=seed)
adhoc_kernel = QuantumKernel(feature_map=adhoc_feature_map, quantum_instance=adhoc_backend)
```
The `scikit-learn` `svc` algorithm allows us to define a [custom kernel](https://scikit-learn.org/stable/modules/svm.html#custom-kernels) in two ways: by providing the kernel as a callable function or by precomputing the kernel matrix. We can do either of these using the `QuantumKernel` class in `qiskit`.
The following code gives the kernel as a callable function:
```
adhoc_svc = SVC(kernel=adhoc_kernel.evaluate)
adhoc_svc.fit(train_features, train_labels)
adhoc_score = adhoc_svc.score(test_features, test_labels)
print(f'Callable kernel classification test score: {adhoc_score}')
```
The following code precomputes and plots the training and testing kernel matrices before providing them to the `scikit-learn` `svc` algorithm:
```
adhoc_matrix_train = adhoc_kernel.evaluate(x_vec=train_features)
adhoc_matrix_test = adhoc_kernel.evaluate(x_vec=test_features,
y_vec=train_features)
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
axs[0].imshow(np.asmatrix(adhoc_matrix_train),
interpolation='nearest', origin='upper', cmap='Blues')
axs[0].set_title("Ad hoc training kernel matrix")
axs[1].imshow(np.asmatrix(adhoc_matrix_test),
interpolation='nearest', origin='upper', cmap='Reds')
axs[1].set_title("Ad hoc testing kernel matrix")
plt.show()
adhoc_svc = SVC(kernel='precomputed')
adhoc_svc.fit(adhoc_matrix_train, train_labels)
adhoc_score = adhoc_svc.score(adhoc_matrix_test, test_labels)
print(f'Precomputed kernel classification test score: {adhoc_score}')
```
`qiskit` also contains the `QSVC` class, which extends the `scikit-learn` `SVC` class and can be used as follows:
```
qsvc = QSVC(quantum_kernel=adhoc_kernel)
qsvc.fit(train_features, train_labels)
qsvc_score = qsvc.score(test_features, test_labels)
print(f'QSVC classification test score: {qsvc_score}')
```
## Clustering
For our clustering example, we will again use the _ad hoc dataset_ as described in [_Supervised learning with quantum enhanced feature spaces_](https://arxiv.org/pdf/1804.11326.pdf), and the `scikit-learn` `spectral` clustering algorithm.
We will regenerate the dataset with a larger gap between the two classes, and as clustering is an unsupervised machine learning task, we don't need a test sample.
```
adhoc_dimension = 2
train_features, train_labels, test_features, test_labels, adhoc_total = ad_hoc_data(
training_size=25,
test_size=0,
n=adhoc_dimension,
gap=0.6,
plot_data=False, one_hot=False, include_sample_total=True
)
plt.figure(figsize=(5, 5))
plt.ylim(0, 2 * np.pi)
plt.xlim(0, 2 * np.pi)
plt.imshow(np.asmatrix(adhoc_total).T, interpolation='nearest',
origin='lower', cmap='RdBu', extent=[0, 2 * np.pi, 0, 2 * np.pi])
plt.scatter(train_features[np.where(train_labels[:] == 0), 0], train_features[np.where(train_labels[:] == 0), 1],
marker='s', facecolors='w', edgecolors='b', label="A")
plt.scatter(train_features[np.where(train_labels[:] == 1), 0], train_features[np.where(train_labels[:] == 1), 1],
marker='o', facecolors='w', edgecolors='r', label="B")
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.)
plt.title("Ad hoc dataset for clustering")
plt.show()
```
We again set up the `QuantumKernel` class to calculate a kernel matrix using the [ZZFeatureMap](https://qiskit.org/documentation/stubs/qiskit.circuit.library.ZZFeatureMap.html), and the BasicAer `qasm_simulator` using 1024 shots.
```
adhoc_feature_map = ZZFeatureMap(feature_dimension=adhoc_dimension,
reps=2, entanglement='linear')
adhoc_backend = QuantumInstance(BasicAer.get_backend('qasm_simulator'), shots=1024,
seed_simulator=seed, seed_transpiler=seed)
adhoc_kernel = QuantumKernel(feature_map=adhoc_feature_map, quantum_instance=adhoc_backend)
```
The `scikit-learn` spectral clustering algorithm allows us to define a custom kernel in two ways: by providing the kernel as a callable function or by precomputing the kernel matrix. Using the `QuantumKernel` class in `qiskit`, we can only use the latter.
The following code precomputes and plots the kernel matrix before providing it to the `scikit-learn` spectral clustering algorithm, and scores the resulting cluster labels using normalized mutual information, since we know the class labels a priori.
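A quick illustration of why normalized mutual information is a sensible score here: it is invariant to relabeling of the clusters, so a clustering that recovers the partition perfectly scores 1.0 even if the cluster indices are swapped (toy labels, illustrative only):

```python
from sklearn.metrics import normalized_mutual_info_score

true_labels = [0, 0, 0, 1, 1, 1]
same_partition = [1, 1, 1, 0, 0, 0]   # identical clustering, labels swapped
one_mistake = [0, 0, 1, 1, 1, 1]      # one point assigned to the wrong cluster

print(normalized_mutual_info_score(true_labels, same_partition))  # 1.0
print(normalized_mutual_info_score(true_labels, one_mistake))     # strictly below 1.0
```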
```
adhoc_matrix = adhoc_kernel.evaluate(x_vec=train_features)
plt.figure(figsize=(5, 5))
plt.imshow(np.asmatrix(adhoc_matrix), interpolation='nearest', origin='upper', cmap='Greens')
plt.title("Ad hoc clustering kernel matrix")
plt.show()
adhoc_spectral = SpectralClustering(2, affinity="precomputed")
cluster_labels = adhoc_spectral.fit_predict(adhoc_matrix)
cluster_score = normalized_mutual_info_score(cluster_labels, train_labels)
print(f'Clustering score: {cluster_score}')
```
`scikit-learn` has other algorithms that can use a precomputed kernel matrix, here are a few:
- [Agglomerative clustering](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.AgglomerativeClustering.html)
- [Support vector regression](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html)
- [Ridge regression](https://scikit-learn.org/stable/modules/generated/sklearn.kernel_ridge.KernelRidge.html)
- [Gaussian process regression](https://scikit-learn.org/stable/modules/gaussian_process.html)
- [Principal component analysis](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.KernelPCA.html)
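The pattern is the same for each of these estimators: pass a train × train kernel matrix to `fit` and a test × train matrix to `predict`. A minimal sketch with `KernelRidge` using a classical RBF kernel on synthetic data (the kernel and data here are stand-ins; a quantum kernel matrix from `QuantumKernel.evaluate` would slot in identically):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(40, 1))
y_train = np.sin(X_train).ravel()
X_test = np.linspace(-2.5, 2.5, 10).reshape(-1, 1)

def rbf_kernel(A, B, gamma=1.0):
    # classical RBF kernel matrix, standing in for a quantum kernel
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

K_train = rbf_kernel(X_train, X_train)   # train x train
K_test = rbf_kernel(X_test, X_train)     # test x train

model = KernelRidge(kernel='precomputed', alpha=1e-3)
model.fit(K_train, y_train)
pred = model.predict(K_test)
print(np.max(np.abs(pred - np.sin(X_test).ravel())))  # small fit error
```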
```
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
---
```
import numpy as np
import matplotlib.pyplot as plt
from SpecFunctions import *
from HamiltonianFunctions import *
from PointChargeFunctions import *
def gen_oh(x0):
iontest = np.asarray([[x0,0,0,-2],
[-x0,0,0,-2],
[0,x0,0,-2],
[0,-x0,0,-2],
[0,0,x0,-2],
[0,0,-x0,-2]
])
return iontest
def gen_dist_oh(x0,z0):
iontest = np.asarray([[x0,0,0,-2],
[-x0,0,0,-2],
[0,x0,0,-2],
[0,-x0,0,-2],
[0,0,z0,-2],
[0,0,-z0,-2]
])
return iontest
# look at the effect of changing the Er-X distance uniformly
so_gs = StevensOperators(15/2)
so_es = StevensOperators(13/2)
sc = SpectrumCalculator()
pc = PointCharge()
xpos = 1e-10*np.linspace(1.6,2.7,501)
param_list_x = [pc.calc_cf_params(gen_oh(x0)) for x0 in xpos]
spec_dict = {'Linear':np.zeros((1000,xpos.size)),'RHC':np.zeros((1000,xpos.size)),
'LHC':np.zeros((1000,xpos.size))}
for k, dict_it in enumerate(param_list_x):
ham_gs = so_gs.build_ham(dict_it)
gs_ev_dict = so_gs.proc_ham(ham_gs)
ham_es = so_es.build_ham(dict_it)
es_ev_dict = so_es.proc_ham(ham_es)
#
freq_ax,temp_spec = sc.calc_spectrum(gs_ev_dict,es_ev_dict,
Spectrum='Excitation',Temperature=5.0,Polarization='Linear',z1y1=6200)
temp_spec_con = sc.convolve_spectrum(freq_ax,temp_spec,2.)
spec_dict['Linear'][:,k] = temp_spec_con
#
_,temp_spec = sc.calc_spectrum(gs_ev_dict,es_ev_dict,
Spectrum='Excitation',Temperature=5.0,Polarization='RHC',z1y1=6200)
temp_spec_con = sc.convolve_spectrum(freq_ax,temp_spec,2.)
spec_dict['RHC'][:,k] = temp_spec_con
#
_,temp_spec = sc.calc_spectrum(gs_ev_dict,es_ev_dict,
Spectrum='Excitation',Temperature=5.0,Polarization='LHC',z1y1=6200)
temp_spec_con = sc.convolve_spectrum(freq_ax,temp_spec,2.)
spec_dict['LHC'][:,k] = temp_spec_con
%matplotlib notebook
plt.figure()
plt.contourf(freq_ax,xpos,spec_dict['Linear'].T
+spec_dict['RHC'].T
+spec_dict['LHC'].T,40)
plt.show()
# look at the effect of distorting an Oh along a C4 axis - elongating the axial pair
%matplotlib notebook
so_gs = StevensOperators(15/2)
so_es = StevensOperators(13/2)
sc = SpectrumCalculator()
pc = PointCharge()
zpos = 1e-10*np.linspace(1.5,3.0,501)
x0 = 1.8e-10
param_list_x = [pc.calc_cf_params(gen_dist_oh(x0,z0)) for z0 in zpos]
spec_dict = {'Linear':np.zeros((1000,zpos.size)),'RHC':np.zeros((1000,zpos.size)),
'LHC':np.zeros((1000,zpos.size))}
for k, dict_it in enumerate(param_list_x):
ham_gs = so_gs.build_ham(dict_it)
gs_ev_dict = so_gs.proc_ham(ham_gs)
ham_es = so_es.build_ham(dict_it)
es_ev_dict = so_es.proc_ham(ham_es)
#
freq_ax,temp_spec = sc.calc_spectrum(gs_ev_dict,es_ev_dict,
Spectrum='Excitation',Temperature=5.0,Polarization='Linear',z1y1=6200)
temp_spec_con = sc.convolve_spectrum(freq_ax,temp_spec,2.)
spec_dict['Linear'][:,k] = temp_spec_con
#
_,temp_spec = sc.calc_spectrum(gs_ev_dict,es_ev_dict,
Spectrum='Excitation',Temperature=5.0,Polarization='RHC',z1y1=6200)
temp_spec_con = sc.convolve_spectrum(freq_ax,temp_spec,2.)
spec_dict['RHC'][:,k] = temp_spec_con
#
_,temp_spec = sc.calc_spectrum(gs_ev_dict,es_ev_dict,
Spectrum='Excitation',Temperature=5.0,Polarization='LHC',z1y1=6200)
temp_spec_con = sc.convolve_spectrum(freq_ax,temp_spec,2.)
spec_dict['LHC'][:,k] = temp_spec_con
%matplotlib notebook
plt.figure()
plt.contourf(freq_ax,zpos-x0,spec_dict['Linear'].T
+spec_dict['RHC'].T
+spec_dict['LHC'].T,40)
plt.show()
# look at the effect of distorting an Oh along a C4 axis - elongating the equatorial four
%matplotlib notebook
so_gs = StevensOperators(15/2)
so_es = StevensOperators(13/2)
sc = SpectrumCalculator()
pc = PointCharge()
xpos = 1e-10*np.linspace(1.5,3.0,501)
z0 = 1.8e-10
param_list_x = [pc.calc_cf_params(gen_dist_oh(x0,z0)) for x0 in xpos]
spec_dict = {'Linear':np.zeros((1000,xpos.size)),'RHC':np.zeros((1000,xpos.size)),
'LHC':np.zeros((1000,xpos.size))}
for k, dict_it in enumerate(param_list_x):
ham_gs = so_gs.build_ham(dict_it)
gs_ev_dict = so_gs.proc_ham(ham_gs)
ham_es = so_es.build_ham(dict_it)
es_ev_dict = so_es.proc_ham(ham_es)
#
freq_ax,temp_spec = sc.calc_spectrum(gs_ev_dict,es_ev_dict,
Spectrum='Excitation',Temperature=5.0,Polarization='Linear',z1y1=6200)
temp_spec_con = sc.convolve_spectrum(freq_ax,temp_spec,2.)
spec_dict['Linear'][:,k] = temp_spec_con
#
_,temp_spec = sc.calc_spectrum(gs_ev_dict,es_ev_dict,
Spectrum='Excitation',Temperature=5.0,Polarization='RHC',z1y1=6200)
temp_spec_con = sc.convolve_spectrum(freq_ax,temp_spec,2.)
spec_dict['RHC'][:,k] = temp_spec_con
#
_,temp_spec = sc.calc_spectrum(gs_ev_dict,es_ev_dict,
Spectrum='Excitation',Temperature=5.0,Polarization='LHC',z1y1=6200)
temp_spec_con = sc.convolve_spectrum(freq_ax,temp_spec,2.)
spec_dict['LHC'][:,k] = temp_spec_con
%matplotlib notebook
plt.figure()
plt.contourf(freq_ax,xpos-z0,spec_dict['Linear'].T
+spec_dict['RHC'].T
+spec_dict['LHC'].T,40)
plt.show()
```
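The three per-polarization loops in the cells above are identical up to the `Polarization` argument. A hedged refactor sketch: the `calc_spectrum` and `convolve` callables below are generic stand-ins that the caller would wire up to the `SpectrumCalculator` methods (e.g. wrapping the `build_ham`/`proc_ham`/`calc_spectrum` sequence and `convolve_spectrum` with the chosen width):

```python
import numpy as np

def polarization_spectra(calc_spectrum, convolve, param_list,
                         polarizations, n_freq):
    """Accumulate convolved spectra per polarization in one pass,
    instead of one copy-pasted loop per polarization."""
    spec = {p: np.zeros((n_freq, len(param_list))) for p in polarizations}
    freq_ax = None
    for k, params in enumerate(param_list):
        for p in polarizations:
            # calc_spectrum returns (frequency axis, raw spectrum)
            freq_ax, raw = calc_spectrum(params, polarization=p)
            spec[p][:, k] = convolve(freq_ax, raw)
    return freq_ax, spec
```

This keeps the `spec_dict` layout used by the `contourf` calls while removing the triplicated bodies.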
---
<img src="https://www.ibm.com/watson/health/ai-stories/assets/images/ibm-watson-health-logo.png" style="float: left; width: 40%; margin-bottom: 0.5em;">
## Develop a neuropathy onset predictive model using the FHIR diabetic patient data (prepared in Notebook 2)
**FHIR Dev Day Notebook 3**
Author: **Gigi Yuen-Reed** <gigiyuen@us.ibm.com>
[Section 1: Environment setup and credentials](#section_1)
[Section 2: Data ingestion](#section_2)
[Section 3: Data understanding](#section_3)
[Section 4: Construct analysis cohort](#section_4)
[Section 5: Model development and validation](#section_5)
<a id='section_1'></a>
## 1. Environment setup and credentials
```
import types
import pandas as pd
import ibm_boto3
import glob
from ibm_botocore.client import Config
from pprint import pprint
import shutil
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from numpy import set_printoptions
import ibmos2spark
from pyspark.sql.functions import *
from pyspark.sql.types import *
# import ML packages
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn import tree
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif, chi2
from sklearn.metrics import classification_report
from sklearn.metrics import classification_report, confusion_matrix, roc_curve, auc, roc_auc_score
pd.set_option('display.max_columns', None)
pd.set_option('display.max_colwidth', 500)
spark.conf.set("spark.sql.repl.eagerEval.enabled",True)
# Temporary credentials provided for DevDays only
synthetic_mass_read_only = \
{
"apikey": "HNJj8lVRmT-wX-n3ns2d8A8_iLFITob7ibC6aH66GZQX",
"endpoints": "https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints",
"iam_apikey_description": "Auto-generated for key 418c8c60-5c31-4ed0-8a08-0f6641a01d46",
"iam_apikey_name": "dev_days",
"iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Reader",
"iam_serviceid_crn": "crn:v1:bluemix:public:iam-identity::a/f0dfe396162db060e2e2a53ff465dfa0::serviceid:ServiceId-e13864d8-8b73-4901-8060-b84123e5ca1c",
"resource_instance_id": "crn:v1:bluemix:public:cloud-object-storage:global:a/f0dfe396162db060e2e2a53ff465dfa0:3067bed7-8108-4d6e-ba32-5d5f643700e5::"
}
cos_api_key = synthetic_mass_read_only
input_bucket = 'whc-save-fhir'
credentials = {
'service_id': cos_api_key['iam_serviceid_crn'],
'api_key': cos_api_key['apikey'],
'endpoint': 'https://s3.private.us-south.cloud-object-storage.appdomain.cloud',
'iam_service_endpoint': 'https://iam.ng.bluemix.net/oidc/token'
}
configuration_name = 'syntheticmass-write' #Must be unique for each bucket / configuration!
spark_cos = ibmos2spark.CloudObjectStorage(sc, credentials, configuration_name, 'bluemix_cos')
# COS API setup
client = ibm_boto3.client(
service_name='s3',
ibm_api_key_id=cos_api_key['apikey'],
ibm_auth_endpoint=credentials['iam_service_endpoint'],
config=Config(signature_version='oauth'),
endpoint_url=credentials['endpoint'])
# explore what is inside the COS bucket
# client.list_objects(Bucket=input_bucket).get('Contents')
```
<a id='section_2'></a>
## 2. Data ingestion
Read in the LPR generated in Notebook 2. Recall that LPR_Row contains diabetic patient data where each observation represents a unique patient-comorbidity combination.
```
# verify COS bucket location
input_file = 'lpr/lpr_row'
spark_cos.url(input_file, input_bucket)
# read in LPR_row
%time lprRow = spark.read.parquet(spark_cos.url(input_file, input_bucket))
lprRow.count()
lprRow.limit(20)
# load into pandas (no longer distributed)
%time df = lprRow.toPandas()
df.info()
print("pandas dataframe size: ", df.shape)
# store snomed code to name mapping => next iteration, upgrade to storing mapping in dictionary
from pandas import DataFrame
snomed_map = df.groupby(['snomed_code', 'snomed_name']).first().index.tolist() #a list
mapdf = DataFrame(snomed_map,columns=['snomed_code','snomed_name']) #a dataframe
mapdf
```
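As the comment in the cell above suggests, the code-to-name mapping can be kept in a plain dictionary directly; a sketch on a toy frame with the same two columns (the row values here are made up):

```python
import pandas as pd

df = pd.DataFrame({
    "snomed_code": ["38341003", "38341003", "368581000119106"],
    "snomed_name": ["Hypertension", "Hypertension", "Neuropathy due to T2D"],
})

# keep one row per code, then build a dict for O(1) code -> name lookups
snomed_dict = (df.drop_duplicates("snomed_code")
                 .set_index("snomed_code")["snomed_name"]
                 .to_dict())
```

This replaces the groupby/index/DataFrame round trip with a single lookup structure.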
<a id='section_3'></a>
## 3. Data Understanding
Data exploration and use case validation. Please note that the patient data characteristics are likely artifacts of Synthea's data generation engine; they may not reflect the nuance and diversity of disease progression one would observe in practice.
**Clinical references**
Top diabetic comorbidities, see example in Nowakowska et al (2019) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6659216/
Common diabetic complications, see the US CDC list https://www.cdc.gov/diabetes/library/features/prevent-complications.html
```
# unique patient count
n = len(pd.unique(df['patient_id']))
print("number of unique patients = ", n)
# review prevalence of co-morbidities
df.snomed_name.value_counts()
```
**Observation**: Hypertension, diabetic renal disease, and neuropathy due to T2D are the top 3 co-morbidities for diabetic patients. About 40% of our diabetic population has neuropathy at some point in their lives.
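The share behind that observation can be read off with normalized counts; a sketch on a toy frame with hypothetical rows, also showing the patient-level prevalence, since each row here is a patient-comorbidity pair:

```python
import pandas as pd

df = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3],
    "snomed_name": ["Hypertension", "Neuropathy", "Hypertension",
                    "Neuropathy", "Hypertension"],
})

# share of rows per comorbidity (what value_counts() above ranks)
row_share = df.snomed_name.value_counts(normalize=True)

# patient-level prevalence: fraction of unique patients with the condition
n_patients = df.patient_id.nunique()
neuropathy_prev = df.loc[df.snomed_name == "Neuropathy",
                         "patient_id"].nunique() / n_patients
```

On real data the two numbers differ whenever patients carry repeated condition rows.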
```
# set date format
df['birth_date'] = pd.to_datetime(df['birth_date'])
df['target_first_date'] = pd.to_datetime(df['target_first_date'])
df['first_observation_date'] = pd.to_datetime(df['first_observation_date'])
# calculate patient age at diabetes onset
df['onsetAge'] = (df['target_first_date'] - df['birth_date'])/ pd.to_timedelta(1, unit='D') / 365
df.head()
# review patient age at diabetes onset
firstdf = df.groupby('patient_id').first().reset_index()
onsetAgePlot = firstdf.hist(column='onsetAge', bins=25, grid=False)
# review co-morbidities onset timing in comparison to diabetic onset
# positive means co-morbidity was first reported AFTER onset; negative means BEFORE)
df['yearDiff'] = (df['first_observation_date'] - df['target_first_date']) / pd.to_timedelta(1, unit='D') / 365
df.head()
# plot histogram of onset time difference (in years) by comorbidity
onsetDiffPlot = df.hist(column='yearDiff', by='snomed_name', bins=10, grid=False, figsize=(15,30), layout=(22,1), sharex=True)
```
**Observation**: The majority of co-morbidities were first reported AFTER diabetic onset; recall this is a synthetic dataset.
```
# review data availability dates
print("diabetic onset date ranges", df.target_first_date.min(), df.target_first_date.max())
print("comorbidity start date ranges", df.first_observation_date.min(), df.first_observation_date.max())
print("comorbidity end date ranges",df.last_observation_date.min(), df.last_observation_date.max())
```
<a id='section_4'></a>
## 4. Construct analysis cohort
Model objective: What is the likelihood of developing neuropathy within 5 years after diabetic onset?
Observation period: Demographic and known comorbidities as of diabetic onset
Prediction period: 5 years after diabetic onset
### 4.1 Define patient cohort with inclusion/exclusion criteria
```
# cohort exclusion 1: remove patients who have a neuropathy-due-to-diabetes diagnosis prior to the diabetes onset date
e1 = df[(df["snomed_code"]=="368581000119106") & (df["yearDiff"]<0)][['patient_id']]
print("number of unique patients in exclusion 1: ", len(pd.unique(e1['patient_id'])))
# cohort exclusion 2: remove patients who have less than 3 years of data prior to diabetes onset
e2 = df[df["target_first_date"] < pd.Timestamp('19301009')][['patient_id']]
print("number of unique patients in exclusion 2: ", len(pd.unique(e2['patient_id'])))
# cohort exclusion 3: remove patients who have less than 5 years of data in the prediction window
e3 = df[df["target_first_date"] > pd.Timestamp('20140417')][['patient_id']]
print("number of unique patients in exclusion 3: ", len(pd.unique(e3['patient_id'])))
# cohort exclusion 4: patient age at diabetic onset >= 18
e4 = df[df["onsetAge"] < 18][['patient_id']]
print("number of unique patients in exclusion 4: ", len(pd.unique(e4['patient_id'])))
# construct cohort - remove all records for patients that meet any of the 4 exclusion criteria
# total patient count prior to filtering
# n = len(pd.unique(df['patient_id'])) #repeat from earlier
print("number of unique patients prior to filtering = ", n)
cohort = df[ (~df['patient_id'].isin(e1['patient_id'])) & (~df['patient_id'].isin(e2['patient_id'])) & (~df['patient_id'].isin(e3['patient_id'])) & (~df['patient_id'].isin(e4['patient_id']))]
print(cohort.shape)
print(len(pd.unique(cohort['patient_id'])))
# alternative 1:
# e4 = df[df["onsetAge"] < 18][['patient_id']]
# update_df = df.drop(e4.index, axis=0)
# alternative 2:
# e4 = df[df["onsetAge"] < 18][['patient_id']].index.tolist()
# update_df = df.drop(e4)
```
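A third alternative to the chained `isin` filters above is to union the excluded IDs first and filter once; a sketch on a toy frame with hypothetical values:

```python
import pandas as pd

df = pd.DataFrame({"patient_id": [1, 2, 3, 4, 5],
                   "onsetAge": [12, 30, 45, 17, 60]})

# collect every excluded patient_id once; further criteria (e1..e3 in
# the notebook) would be unioned in with |=
e_age = df.loc[df.onsetAge < 18, "patient_id"]
excluded = set(e_age)

# one pass over the frame instead of four chained ~isin clauses
cohort = df[~df.patient_id.isin(excluded)]
```

This keeps the exclusion logic readable as criteria accumulate.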
### 4.2 Construct model input features and prediction target
Develop model inputs, codify prediction target, normalize data
```
# step 1: transpose table such that each row is a patient
temp1 = cohort.loc[:,['patient_id','gender','onsetAge']]
temp1 = temp1.drop_duplicates(subset=['patient_id'])
print("patient demographic block size: ", temp1.shape)
temp2 = cohort.pivot_table(index=["patient_id"], columns='snomed_code', values='yearDiff')
print("patient condition block size: ", temp2.shape)
cohort1 = temp1.merge(temp2, left_on="patient_id", right_on="patient_id")
print("combined patient block size: ", cohort1.shape)
cohort1.head()
# set target: neuropathy (368581000119106)
cohort1.rename(columns={"368581000119106":"target"}, inplace=True)
# demographic features
demographic = ["onsetAge", "gender"]
# condition features
temp3 = cohort1.drop(["patient_id", "onsetAge", "gender"], axis=1)
condition_list = list(temp3)
print (condition_list)
# prepare condition features: value = 1 if condition was pre-existing to diabetes onset
for column in cohort1[condition_list]:
cohort1[column] = cohort1[column].apply(lambda x: 1 if x <= 0 else 0)
# prepare gender input feature: male = 1, female = 0
cohort1["gender"] = cohort1["gender"].apply(lambda x: 1 if x == "male" else 0)
# prepare target value = 1 if neuropathy occurs within 5 years AFTER diabetes onset
cohort1["target"] = cohort1["target"].apply(lambda x: 1 if (x > 0) & (x <=5) else 0)
# construct new feature: number of known comorbidities as of diabetic onset
cohort1['comorbid_ct'] = cohort1[condition_list].sum(axis=1)
cohort1.head(10)
# verify cohort characteristics
cohort1.mean()
cohort1.groupby("target").describe()
# normalize non-sparse data
cohort1["onsetAge"] = (cohort1["onsetAge"] - np.min(cohort1["onsetAge"])) / (np.max(cohort1["onsetAge"]) - np.min(cohort1["onsetAge"]))
cohort1["comorbid_ct"] = (cohort1["comorbid_ct"] - np.min(cohort1["comorbid_ct"])) / (np.max(cohort1["comorbid_ct"]) - np.min(cohort1["comorbid_ct"]))
```
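The same min-max scaling can be done with scikit-learn's `MinMaxScaler`, which also remembers the fitted min/max for transforming future data; a sketch on hypothetical values:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

frame = pd.DataFrame({"onsetAge": [20.0, 30.0, 40.0],
                      "comorbid_ct": [0.0, 2.0, 4.0]})

# fit_transform learns each column's min/max and rescales to [0, 1];
# scaler.transform can later apply the same train-set bounds to new data
scaler = MinMaxScaler()
frame[["onsetAge", "comorbid_ct"]] = scaler.fit_transform(
    frame[["onsetAge", "comorbid_ct"]])
```

Fitting the scaler on the full cohort before splitting (as the manual version above effectively does) leaks a little test-set information; fitting on the training split only is the stricter choice.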
<a id='section_5'></a>
## 5. Model development
[Section 5.1: Prepare train/test data](#section_5.1)
[Section 5.2 Univariate analysis](#section_5.2)
[Section 5.3.1: Logistic regression with all features](#section_5.3.1)
[Section 5.3.2: Logistic regression with only 5 features](#section_5.3.2)
[Section 5.4: Non-linear Support Vector Machine (SVM)](#section_5.4)
[Section 5.5: Decision Tree Classifier](#section_5.5)
<a id='section_5.1'></a>
### 5.1 Prepare train/test data
```
# prepare train/test data
y = cohort1["target"]
x = cohort1.drop(["patient_id","target"], axis = 1)
#x = cohort1.drop(["patient_id","target", "comorbid_ct"], axis = 1)
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size = 0.3,random_state=0)
print("whole data shape", cohort1.shape)
print("training input data shape:", x_train.shape)
print("test input data shape:",x_test.shape)
print("training output data shape:", len(y_train))
print("test output data shape:",len(y_test))
```
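Since the neuropathy target is imbalanced, passing `stratify=y` keeps the class ratio the same in both splits; a sketch on a toy imbalanced target:

```python
from sklearn.model_selection import train_test_split

X = [[i] for i in range(10)]
y = [0] * 8 + [1] * 2          # imbalanced target, like the cohort

# stratify=y preserves the 8:2 class ratio in train and test
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y)
```

Without stratification, a small test split can end up with very few (or zero) positive cases, which distorts the sensitivity estimates below.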
<a id='section_5.2'></a>
### 5.2 Univariate analysis
```
#capture a list of input feature IDs
feature_id = list(x)
print(feature_id)
# univariate analysis - analyze all 24 features
feature_selector = SelectKBest(score_func=chi2)
fit = feature_selector.fit(x_train, y_train)
pvalue = DataFrame(list(zip(feature_id, fit.pvalues_)),columns=['feature_id','pvalues'])
pv_df = pd.merge(pvalue, mapdf, left_on='feature_id', right_on='snomed_code', how = 'outer').drop(["snomed_code"], axis=1).dropna(subset=['feature_id'])
#pv_df = pv_df.rename(columns = {'snomed_name':'description'})
#pv_df.loc[df.feature_id == "onsetAge", "description"] = "onsetAge"
pv_df['pvalue < 0.1'] = pv_df['pvalues'].apply(lambda x: 1 if x <= 0.1 else 0) #create indicator for pvalue < 0.1
pv_df = pv_df.reindex(columns=['feature_id', 'snomed_name', 'pvalues', 'pvalue < 0.1']) # rearrange columns
pv_df
```
**Observation**: Comorbidity count has very high significance (low p-value). Besides that, only a few features have a sufficiently low p-value (hypertension, CAD). The p-value is "nan" when all comorbidity values are 0.
<a id='section_5.3.1'></a>
### 5.3.1 Logistic regression with all features
```
# train Logistic Regression with all features
lr = LogisticRegression(solver='lbfgs').fit(x_train,y_train)
acc_train = lr.score(x_train,y_train)*100
acc_test = lr.score(x_test,y_test)*100
print("Accuracy on train set {:.2f}%".format(acc_train))
print("Accuracy on test set {:.2f}%".format(acc_test))
```
**Observation**: no overfitting; train and test accuracies are similar.
```
# review model intercept and coefficients
print("model intercept ", lr.intercept_, "\n")
print("model coeff ", lr.coef_, "\n")
# review model importance
#list(zip(feature_id, lr.coef_[0]))
# put together feature importance for LR, based on coefficient value
importance = DataFrame(list(zip(feature_id, lr.coef_[0])),columns=['feature_id','coeff'])
importance = pd.merge(importance, mapdf, left_on='feature_id', right_on='snomed_code', how = 'outer').drop(["snomed_code"], axis=1).dropna(subset=['feature_id']) #merge in snomed description
importance = importance[importance['coeff'] != 0] #remove feature with coeff = 0, i.e., no impact
# sort by importance
#importance = importance.reindex(importance.coeff.abs().sort_values().index) #does not support descending sort
importance['abs_coeff']= importance.coeff.abs()
importance = importance.sort_values(by='abs_coeff', ascending=False).drop(["abs_coeff"], axis=1) # sort table by absolute coefficient value
# add descriptions
importance = importance.reindex(columns=['feature_id', 'snomed_name', 'coeff']) # rearrange columns
importance.loc[importance.feature_id == "gender", "snomed_name"] = "male = 1, female = 0"
importance.loc[importance.feature_id == "comorbid_ct", "snomed_name"] = "# of known conditions at time of diabetic onset"
importance.loc[importance.feature_id == "onsetAge", "snomed_name"] = "Age at diabetic onset, normalized from (18-54) to (0-1)"
importance = importance.rename(columns = {'snomed_name':'description'})
# show feature selection count
print("Feature importance from LR model (selected ", importance.shape[0], " out of ", len(feature_id), " features)")
importance
```
**Observation**: comorbid_ct is the single most important feature
Note that scikit-learn's `LogisticRegression` has built-in regularization that reduces model features (coefficients driven to 0). It does not natively generate p-values; consider using the statsmodels package instead if deeper statistical output is desired.
```
# show confusion matrix
y_lr = lr.predict(x_test)
cm_lr = confusion_matrix(y_test,y_lr)
ax = plt.subplot()
sns.heatmap(cm_lr,annot=True,cmap="Blues",fmt="d",cbar=False,annot_kws={"size": 30}, ax=ax)
ax.set_xlabel('Predicted Results');ax.set_ylabel('Ground Truth');
ax.set_title('Confusion Matrix for Logistic Regression Model with built in regularizer');
# show classification report
target_names = ['class 0', 'class 1']
print(classification_report(y_test, y_lr, target_names=target_names))
```
Note that recall of the positive class is also known as “sensitivity”; recall of the negative class is “specificity”.
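Both quantities fall straight out of the 2x2 confusion matrix; a minimal sketch in plain Python:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = recall of class 1; specificity = recall of class 0."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

These match the per-class recall values in the classification report above.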
```
# generate a no skill prediction (majority class)
ns_probs = [0 for _ in range(len(y_test))]
# use predicted probabilities rather than hard labels for a smooth ROC curve
lr_probs = lr.predict_proba(x_test)[:, 1]
# calculate roc scores
ns_auc = roc_auc_score(y_test, ns_probs)
lr_auc = roc_auc_score(y_test, lr_probs)
# summarize scores
print('No Skill: ROC AUC=%.3f' % (ns_auc))
print('Logistic: ROC AUC=%.3f' % (lr_auc))
# calculate roc curves
ns_fpr, ns_tpr, _ = roc_curve(y_test, ns_probs)
lr_fpr, lr_tpr, _ = roc_curve(y_test, lr_probs)
# plot the roc curve for the model
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='No Skill')
plt.plot(lr_fpr, lr_tpr, marker='.', label='Logistic')
# axis labels
plt.xlabel('False Positive Rate'); plt.ylabel('True Positive Rate')
plt.title('ROC for Logistic Regression with built in regularizer');
plt.legend() # show the legend
plt.show() # show the plot
```
<a id='section_5.3.2'></a>
### 5.3.2 Logistic regression with only 5 features
```
# Feature selection: only keep top 5 features
feature_selector = SelectKBest(score_func=chi2, k=5)
fit2 = feature_selector.fit(x_train, y_train)
# create training dataset with selected features
x_train2 = feature_selector.fit_transform(x_train, y_train)
print(x_train2.shape)
#get selected feature names
mask = feature_selector.get_support() #list of booleans
new_features = [] # The list of your K best features
for bool, feature in zip(mask, feature_id):
if bool:
new_features.append(feature)
#cols = fit2.get_support(indices=True)  # get selected feature indices
#print(cols)
# make pvalue table for the selected features
# set_printoptions(precision=3)
pvalue = list(zip(new_features, fit2.pvalues_[mask]))
from pprint import pprint
print("(features, p-value)")
pprint(pvalue)
# trim test set to match selected features
x_test2 = x_test[new_features]
print(x_test2.shape)
# train Logistic Regression with selected features
lr2 = LogisticRegression(solver='lbfgs').fit(x_train2,y_train)
acc_train = lr2.score(x_train2,y_train)*100
acc_test = lr2.score(x_test2,y_test)*100
print("Accuracy on train set {:.2f}%".format(acc_train))
print("Accuracy on test set {:.2f}%".format(acc_test))
y_lr2 = lr2.predict(x_test2)
cm_lr2 = confusion_matrix(y_test,y_lr2)
ax = plt.subplot()
sns.heatmap(cm_lr2,annot=True,cmap="Blues",fmt="d",cbar=False,annot_kws={"size": 30}, ax=ax)
ax.set_xlabel('Predicted Results');ax.set_ylabel('Ground Truth');
ax.set_title('Confusion Matrix for Logistic Regression with 5 selected features');
# show classification report
# target_names = ['class 0', 'class 1']
print(classification_report(y_test, y_lr2, target_names=target_names))
```
**Observation**: low sensitivity but high specificity; not bad performance given the small number of features.
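As an aside, the boolean-mask loop used above to recover the selected feature names can be collapsed into a single indexing expression; a sketch on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2

X, y = make_classification(n_samples=60, n_features=6, random_state=0)
X = np.abs(X)                       # chi2 requires non-negative features
names = np.array([f"f{i}" for i in range(6)])

selector = SelectKBest(chi2, k=3).fit(X, y)
# boolean mask indexing replaces the zip/append loop over get_support()
selected = names[selector.get_support()]
```

In the notebook this would be `np.array(feature_id)[mask]`.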
```
# generate a no skill prediction (majority class)
#ns_probs = [0 for _ in range(len(y_test))]
# calculate roc scores
#ns_auc = roc_auc_score(y_test, ns_probs)
lr2_auc = roc_auc_score(y_test, y_lr2)
# summarize scores
print('No Skill: ROC AUC=%.3f' % (ns_auc))
print('Logistic (5 features): ROC AUC=%.3f' % (lr2_auc))
# # calculate roc curves
#ns_fpr, ns_tpr, _ = roc_curve(y_test, ns_probs)
lr_fpr2, lr_tpr2, _ = roc_curve(y_test, y_lr2)
# plot the roc curve for the model
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='No Skill')
plt.plot(lr_fpr2, lr_tpr2, marker='.', label='Logistic')
# axis labels
plt.xlabel('False Positive Rate'); plt.ylabel('True Positive Rate')
plt.title('ROC for Logistic Regression with 5 selected features');
plt.legend() # show the legend
plt.show() # show the plot
```
<a id='section_5.4'></a>
### 5.4 Non-linear Support Vector Machine (SVM) model
```
# train SVM
svm = SVC(random_state = 4, gamma='auto').fit(x_train, y_train)
acc_train = svm.score(x_train,y_train)*100
acc_test = svm.score(x_test,y_test)*100
print("Train Accuracy of SVM Algorithm: {:.2f}%".format(acc_train))
print("Test Accuracy of SVM Algorithm: {:.2f}%".format(acc_test))
# show confusion matrix
y_svm = svm.predict(x_test)
cm_svm = confusion_matrix(y_test,y_svm)
ax = plt.subplot()
sns.heatmap(cm_svm,annot=True,cmap="Blues",fmt="d",cbar=False,annot_kws={"size": 30}, ax=ax)
ax.set_xlabel('Predicted Results');ax.set_ylabel('Ground Truth');
ax.set_title('Confusion Matrix for SVM');
```
**Observation**: SVM predicts that no patients get neuropathy within 5 years of onset, i.e. very low sensitivity. Performance is significantly worse than logistic regression (with regularization), possibly due to the small training data volume; it's worth further investigation.
<a id='section_5.5'></a>
### 5.5 Decision Tree Classifier
```
dtree = tree.DecisionTreeClassifier().fit(x_train,y_train)
acc_train = dtree.score(x_train,y_train)*100
acc_test = dtree.score(x_test,y_test)*100
print("Train Accuracy of Decision Tree Classifier: {:.2f}%".format(acc_train))
print("Test Accuracy of Decision Tree Classifier: {:.2f}%".format(acc_test))
# show confusion matrix
y_tree = dtree.predict(x_test)
cm_tree = confusion_matrix(y_test,y_tree)
ax = plt.subplot()
sns.heatmap(cm_tree,annot=True,cmap="Blues",fmt="d",cbar=False,annot_kws={"size": 30}, ax=ax)
ax.set_xlabel('Predicted Results');ax.set_ylabel('Ground Truth');
ax.set_title('Confusion Matrix for Decision Tree Classifier');
# show classification report
target_names = ['class 0', 'class 1']
print(classification_report(y_test, y_tree, target_names=target_names))
```
**Observation**: 0.90 sensitivity and 1.00 specificity, best performing model yet.
```
#generate a no skill prediction (majority class)
#ns_probs = [0 for _ in range(len(y_test))]
## calculate roc scores
#ns_auc = roc_auc_score(y_test, ns_probs)
tree_auc = roc_auc_score(y_test, y_tree)
# summarize scores
print('No Skill: ROC AUC=%.3f' % (ns_auc))
print('Tree: ROC AUC=%.3f' % (tree_auc))
## calculate roc curves
#ns_fpr, ns_tpr, _ = roc_curve(y_test, ns_probs)
tree_fpr, tree_tpr, _ = roc_curve(y_test, y_tree)
# plot the roc curve for the model
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='No Skill')
plt.plot(tree_fpr, tree_tpr, marker='.', label='tree')
# axis labels
plt.xlabel('False Positive Rate'); plt.ylabel('True Positive Rate')
plt.title('ROC for Decision Tree Classifier');
plt.legend() # show the legend
plt.show() # show the plot
text_representation = tree.export_text(dtree)
print(text_representation)
fig = plt.figure(figsize=(25,20))
_ = tree.plot_tree(dtree,
feature_names=feature_id,
class_names=target_names,
filled=True)
```
## Synthea metabolic syndrome disease (which includes diabetes) data generation logic
<img src="https://synthetichealth.github.io/synthea/graphviz/metabolic_syndrome_disease.png" style="float: left; width: 40%; margin-bottom: 0.5em;">
https://synthetichealth.github.io/synthea/graphviz/metabolic_syndrome_disease.png
---
```
#default_exp prefect_flows.step1
#export
import os
from hydra import initialize, initialize_config_module, initialize_config_dir, compose
from omegaconf import OmegaConf, dictconfig
from hydra.core.hydra_config import HydraConfig
from fastcore.basics import partialler
import pandas as pd
from pydantic import BaseModel
from typing import Optional, List
import numpy as np
from pathlib import Path
from dataclasses import dataclass
#export
from prefect import Task, Flow, task, Parameter, context, case, unmapped
from prefect.engine.results import LocalResult
from prefect.engine.serializers import PandasSerializer
from prefect.engine import signals
from prefect.tasks.control_flow import merge
from prefect.tasks.prefect import create_flow_run, RenameFlowRun
from prefect.tasks.templates import StringFormatter
context.config.flows.checkpointing = True
#export
from corradin_ovp_utils.datasets.CombinedGenoPheno import CombinedGenoPheno
from corradin_ovp_utils.catalog import get_catalog, test_data_catalog, get_config
from corradin_ovp_utils.odds_ratio import odds_ratio_df_single_combined
from corradin_ovp_utils.MTC import MtcTable
from corradin_ovp_utils.datasets.utils import cd
from corradin_ovp_utils.catalog import change_cwd_dir, package_outer_folder
```
---
```
#export
class SNPPairInfo(BaseModel):
GWAS_id: str
outside_id: str
GWAS_chrom: int
outside_chrom: int
mtc_threshold: Optional[float] = None
#export
tsv_result_partial = partialler(LocalResult, dir="./prefect_step1_result_folder", serializer = PandasSerializer("csv",
serialize_kwargs={"sep":"\t", "index": False},
deserialize_kwargs={"sep": "\t"}))
parquet_result_partial = partialler(LocalResult, dir="./prefect_step1_result_folder", serializer = PandasSerializer("parquet"))
ID_COL_LIST = ["rsid", "position"]
ALL_GENO_DF_FILE_NAME = "all_geno_df.parquet"
ALL_SAMPLES_GENO_DF_FILE_NAME = "all_samples_geno_df.parquet"
SUBSET_ALL_SAMPLES_GENO_DF_FILE_NAME = partialler("{subset}_{ALL_SAMPLES_GENO_DF_FILE_NAME}".format, ALL_SAMPLES_GENO_DF_FILE_NAME = ALL_SAMPLES_GENO_DF_FILE_NAME)
SAMPLE_SUBSETS= ["case", "control"]
SEARCH_RESULT_DF_FILE_NAME = "snp_search_result.tsv"
UNFILTERED_SUMMARY_DF_FILE_NAME = "all_pairs_summary_df_unfiltered.tsv"
STEP1_FINAL_REPORT_FILE_NAME = "step1_final_report.tsv"
SINGLE_PAIR_DATA_CACHE_TEMPLATE_WITHOUT_EXTENSION = "snp_pairs_folders/{GWAS_id}/{GWAS_id}_{outside_id}/single_pair_data_cache_df"
def format_result_dir(cfg, task_name, **kwargs):
return f"{Path(cfg.hydra.run.dir)}/{task_name}"
task_no_checkpoint = partialler(task, checkpoint=False)
@task_no_checkpoint
def get_datasets(*,env:str, genetic_dataset_name:str, sample_dataset_name:str):
catalog = get_catalog(env=env, patterns = ['catalog*', 'catalog*/*/','catalog*/*/*'])
genetic = catalog.load(genetic_dataset_name)
sample = catalog.load(sample_dataset_name)
return dict(catalog=catalog, genetic_dataset=genetic, sample_dataset=sample)
#return catalog, genetic, sample
@task_no_checkpoint
#@task(target= "run_config.pkl")
def get_config_task(*, pairs_file:str, config_path:str, overrides: List[str]=[], config_name:str = "config"):
# hydra_cfg_raw = dictconfig.DictConfig({"hydra": {'run': {'dir': 'outputs/${dataset.genetic}/${exp}/${pairs_file}/${now:%Y-%m-%d}/${now:%H-%M-%S}'},
# 'sweep': {'dir': 'multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}', 'subdir': '${hydra.job.num}'},
# 'launcher': {'_target_': 'hydra._internal.core_plugins.basic_launcher.BasicLauncher'},
# 'sweeper': {'_target_': 'hydra._internal.core_plugins.basic_sweeper.BasicSweeper', 'max_batch_size': None}, 'help': {'app_name': '${hydra.job.name}', 'header': '${hydra.help.app_name} is powered by Hydra.\n', 'footer': 'Powered by Hydra (https://hydra.cc)\nUse --hydra-help to view Hydra specific help\n', 'template': '${hydra.help.header}\n== Configuration groups ==\nCompose your configuration from those groups (group=option)\n\n$APP_CONFIG_GROUPS\n\n== Config ==\nOverride anything in the config (foo.bar=value)\n\n$CONFIG\n\n${hydra.help.footer}\n'}, 'hydra_help': {'template': "Hydra (${hydra.runtime.version})\nSee https://hydra.cc for more info.\n\n== Flags ==\n$FLAGS_HELP\n\n== Configuration groups ==\nCompose your configuration from those groups (For example, append hydra/job_logging=disabled to command line)\n\n$HYDRA_CONFIG_GROUPS\n\nUse '--cfg hydra' to Show the Hydra config.\n", 'hydra_help': '???'}, 'hydra_logging': {'version': 1, 'formatters': {'simple': {'format': '[%(asctime)s][HYDRA] %(message)s'}}, 'handlers': {'console': {'class': 'logging.StreamHandler', 'formatter': 'simple', 'stream': 'ext://sys.stdout'}}, 'root': {'level': 'INFO', 'handlers': ['console']}, 'loggers': {'logging_example': {'level': 'DEBUG'}}, 'disable_existing_loggers': False}, 'job_logging': {'version': 1, 'formatters': {'simple': {'format': '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'}}, 'handlers': {'console': {'class': 'logging.StreamHandler', 'formatter': 'simple', 'stream': 'ext://sys.stdout'}, 'file': {'class': 'logging.FileHandler', 'formatter': 'simple', 'filename': '${hydra.job.name}.log'}}, 'root': {'level': 'INFO', 'handlers': ['console', 'file']}, 'disable_existing_loggers': False}, 'env': {}, 'searchpath': [], 'callbacks': {}, 'output_subdir': '.hydra', 'overrides': {'hydra': [], 'task': ['exp=test', 'pairs_file=test_file']}, 'job': {'name': 'hydra_test', 'override_dirname': 'exp=test,pairs_file=test_file', 'id': '???', 'num': '???', 'config_name': 'config', 
'env_set': {}, 'env_copy': [], 'config': {'override_dirname': {'kv_sep': '=', 'item_sep': ',', 'exclude_keys': []}}}, 'runtime': {'version': '1.1.0', 'cwd': '/lab/corradin_biobank/FOR_AN/OVP/corradin_ovp_utils', 'config_sources': [{'path': 'hydra.conf', 'schema': 'pkg', 'provider': 'hydra'}, {'path': '/lab/corradin_biobank/FOR_AN/OVP/corradin_ovp_utils/conf/hydra_conf', 'schema': 'file', 'provider': 'main'}, {'path': '', 'schema': 'structured', 'provider': 'schema'}], 'choices': {'run': 'default', 'hydra': 'default', 'dataset': 'MS', 'db': 'mysql', 'hydra/env': 'default', 'hydra/callbacks': None, 'hydra/job_logging': 'default', 'hydra/hydra_logging': 'default', 'hydra/hydra_help': 'default', 'hydra/help': 'default', 'hydra/sweeper': 'basic', 'hydra/launcher': 'basic', 'hydra/output': 'default'}}, 'verbose': False}})
#TODO: add hydra configs here
hydra_cfg_raw = {}
if Path(config_path).is_absolute():
with initialize_config_dir(config_dir=config_path):
cfg_job = compose(config_name = config_name, overrides=overrides)
else:
with initialize(config_path=config_path):
cfg_job = compose(config_name = config_name, overrides=overrides)
cfg = OmegaConf.merge(hydra_cfg_raw, cfg_job)
cfg.pairs_file = pairs_file
return cfg
@task_no_checkpoint(skip_on_upstream_skip=False)
def process_pairs_file(*, pairs_file):
pairs_df = pd.read_csv(pairs_file, sep = "\t")
GWAS_dict = pairs_df.groupby("GWAS_chrom")["GWAS_id"].unique().to_dict()
outside_dict = pairs_df.groupby("outside_chrom")["outside_id"].unique().to_dict()
all_keys = list(GWAS_dict.keys()) + list(outside_dict.keys())
chr_to_snp_dict = {}
for key in all_keys:
GWAS_dict_vals = GWAS_dict.get(key, [])
outside_dict_vals = outside_dict.get(key, [])
merged = np.append(GWAS_dict_vals,outside_dict_vals)
chr_to_snp_dict[key] = merged
return {"pairs_df": pairs_df, "chr_to_snp_dict": chr_to_snp_dict}
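# A minimal sketch of the grouping above on made-up data (column names follow
# the pairs-file schema read by process_pairs_file; the rows are invented).
import numpy as np
import pandas as pd
demo_pairs_df = pd.DataFrame({
"GWAS_chrom": [1, 1, 2], "GWAS_id": ["rs1", "rs2", "rs3"],
"outside_chrom": [1, 2, 2], "outside_id": ["rs10", "rs20", "rs30"],
})
demo_gwas = demo_pairs_df.groupby("GWAS_chrom")["GWAS_id"].unique().to_dict()
demo_outside = demo_pairs_df.groupby("outside_chrom")["outside_id"].unique().to_dict()
# Each chromosome maps to the union of its GWAS-side and outside-side SNP ids.
demo_chr_to_snp = {key: np.append(demo_gwas.get(key, []), demo_outside.get(key, []))
for key in set(demo_gwas) | set(demo_outside)}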
#TODO: Map this across chromosomes
@task_no_checkpoint#(target="{task_name}.pkl", checkpoint=True)
def get_snps_info(*, genetic_dataset, sample_dataset, chr_to_snp_dict):
# sample_subset_files_list = [SUBSET_ALL_SAMPLES_GENO_DF_FILE_NAME(subset = subset) for subset in SAMPLE_SUBSETS]#combined_geno.sample_subsets}
# file_names_to_check = [ALL_GENO_DF_FILE_NAME, ALL_SAMPLES_GENO_DF_FILE_NAME] + sample_subset_files_list
# file_exist_cond_list = [parquet_result_partial().exists(location=file_name) for file_name in file_names_to_check]
# if all(file_exist_cond_list):
# raise signals.SKIP(message=f"Found existing {ALL_GENO_DF_FILE_NAME} and {ALL_SAMPLES_GENO_DF_FILE_NAME} in {parquet_result_partial().dir}, skipping reading genetic data")
# return
# else:
combined_geno = CombinedGenoPheno.init_from_OVPDataset(genetic_dataset= genetic_dataset,
sample_dataset=sample_dataset,
rsid_dict=chr_to_snp_dict,
id_col_list = ID_COL_LIST)
#parquet_result.write(combined_geno.all_geno_df, location = all_geno_df_file_name, target = all_geno_df_file_name, **context,)
#parquet_result.write(combined_geno.all_samples_geno_df, location = all_samples_geno_df_file_name, target = all_geno_df_file_name, **context)
parquet_result_partial(location=ALL_GENO_DF_FILE_NAME).write(combined_geno.all_geno_df, **context)
parquet_result_partial(location=ALL_SAMPLES_GENO_DF_FILE_NAME).write(combined_geno.all_samples_geno_df, **context)
# sample_subset_files_dict = {SUBSET_ALL_SAMPLES_GENO_DF_FILE_NAME(subset = subset) : combined_geno.get_geno_each_sample_subset(subset) for subset in SAMPLE_SUBSETS}
# for file_name, subset_geno_df in sample_subset_files_dict.items():
# parquet_result_partial(location=file_name).write(subset_geno_df, **context)
return {"all_geno_df":combined_geno.all_geno_df,
"all_samples_geno_df": combined_geno.all_samples_geno_df}
@task_no_checkpoint(target=SEARCH_RESULT_DF_FILE_NAME, checkpoint=True, result = tsv_result_partial(), skip_on_upstream_skip=False)
def output_search_result_df(pairs_df):
all_geno_df = parquet_result_partial().read(location=ALL_GENO_DF_FILE_NAME).value
search_result_df = pd.melt(pairs_df, id_vars=["GWAS_chrom", "outside_chrom"], value_name = "SNP_ID", var_name = "SNP_type").drop_duplicates("SNP_ID")
search_result_df["chrom"] = np.where(search_result_df["SNP_type"] == "GWAS_id", search_result_df["GWAS_chrom"], search_result_df["outside_chrom"])
search_result_df = search_result_df.drop(columns = ["GWAS_chrom", "outside_chrom"])
search_result_df = search_result_df.merge(all_geno_df,
left_on = "SNP_ID",
right_index=True,
how="outer",
indicator=True).replace({"both": 1, "left_only": 0}).rename(columns = {"_merge": "found_in_genetic_file"})[["chrom", "SNP_type", "SNP_ID", "found_in_genetic_file"]]
return search_result_df
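# Hedged sketch of the melt + indicator-merge lookup above, on invented data;
# a left merge is used here so the toy frame keeps one row per searched SNP.
import numpy as np
import pandas as pd
toy_pairs = pd.DataFrame({"GWAS_chrom": [1], "outside_chrom": [2],
"GWAS_id": ["rs1"], "outside_id": ["rs2"]})
toy_geno = pd.DataFrame({"position": [123]}, index=["rs1"])  # rs2 is missing
toy_long = pd.melt(toy_pairs, id_vars=["GWAS_chrom", "outside_chrom"],
value_name="SNP_ID", var_name="SNP_type").drop_duplicates("SNP_ID")
toy_long["chrom"] = np.where(toy_long["SNP_type"] == "GWAS_id",
toy_long["GWAS_chrom"], toy_long["outside_chrom"])
toy_search = toy_long.drop(columns=["GWAS_chrom", "outside_chrom"]).merge(
toy_geno, left_on="SNP_ID", right_index=True, how="left", indicator=True)
# "_merge" == "both" marks SNPs that have a row in the genotype index.
toy_search["found_in_genetic_file"] = (toy_search["_merge"] == "both").astype(int)
toy_search = toy_search[["chrom", "SNP_type", "SNP_ID", "found_in_genetic_file"]]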
@task_no_checkpoint
def load_extracted_info(*, all_geno_df_file_name:str, search_result_df_file_name:str):
search_result_df = tsv_result_partial().read(location = SEARCH_RESULT_DF_FILE_NAME).value
all_geno_df = parquet_result_partial().read(location = ALL_GENO_DF_FILE_NAME).value
return {"search_result_df": search_result_df, "all_geno_df": all_geno_df}
@task_no_checkpoint#(skip_on_upstream_skip=False)
def pairs_df_to_records(pairs_df, search_result_df=None, mtc_df=None):
if search_result_df is not None:
found_SNPs = search_result_df.query("found_in_genetic_file == 1").SNP_ID.tolist()
found_pairs_df = pairs_df.query("GWAS_id in @found_SNPs and outside_id in @found_SNPs")
else:
found_pairs_df = pairs_df
found_pairs_df = found_pairs_df.drop_duplicates(subset = ["GWAS_id", "outside_id", "GWAS_chrom", "outside_chrom"])
if mtc_df is not None:
found_pairs_df = found_pairs_df.merge(mtc_df[["GWAS_id"]], how = "inner")
mtc_table = MtcTable(mtc_df, "threshold")
snp_pair_info_list = [SNPPairInfo(**pair, mtc_threshold= mtc_table.get_threshold(pair["GWAS_id"])) for pair in found_pairs_df.to_dict(orient="records")]
else:
snp_pair_info_list = [SNPPairInfo(**pair) for pair in found_pairs_df.to_dict(orient="records")]
return snp_pair_info_list
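# Small illustration (invented rows) of the `@variable` reference that
# DataFrame.query uses above to keep only pairs whose SNPs were both found.
import pandas as pd
toy_pairs_df = pd.DataFrame({"GWAS_id": ["rs1", "rs2"], "outside_id": ["rs10", "rs20"],
"GWAS_chrom": [1, 2], "outside_chrom": [1, 2]})
toy_found_SNPs = ["rs1", "rs10"]
toy_found_pairs = toy_pairs_df.query("GWAS_id in @toy_found_SNPs and outside_id in @toy_found_SNPs")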
def template_summary_df_target(*, pair_info: SNPPairInfo, parquet=False, **kwargs):
GWAS_id = pair_info.GWAS_id
outside_id = pair_info.outside_id
if parquet:
final_template = SINGLE_PAIR_DATA_CACHE_TEMPLATE_WITHOUT_EXTENSION + ".parquet"
else:
final_template = SINGLE_PAIR_DATA_CACHE_TEMPLATE_WITHOUT_EXTENSION + ".tsv"
return final_template.format(GWAS_id = GWAS_id, outside_id = outside_id)
@task_no_checkpoint
def get_sample_subset_id_dict(sample_dataset):
sample_subset_id_dict = {}
sample_dataset_files_dict = vars(sample_dataset.files)
if "single_file" in sample_dataset_files_dict:
sample_subset_id_dict["case"] = sample_dataset_files_dict["single_file"].load(with_missing_samples=False, subset = "case").index.values.astype(str)
sample_subset_id_dict["control"] = sample_dataset_files_dict["single_file"].load(with_missing_samples=False, subset = "control").index.values.astype(str)
else:
sample_subset_id_dict["case"] = sample_dataset_files_dict["case"].load(with_missing_samples=False).index.values.astype(str)
sample_subset_id_dict["control"] = sample_dataset_files_dict["control"].load(with_missing_samples=False).index.values.astype(str)
return sample_subset_id_dict
@task_no_checkpoint(target=template_summary_df_target,
checkpoint=True,
result = tsv_result_partial(),
#skip_on_upstream_skip=False,
task_run_name = "extract_info_{pair_info.GWAS_id}_{pair_info.outside_id}",
name = "Output df for each single pair")
#@task(skip_on_upstream_skip=False)
def output_case_control_single_pair_data_cache_df(pair_info, sample_subset_id_dict):
all_geno_df = parquet_result_partial().read(location=ALL_GENO_DF_FILE_NAME).value
# if "single_file" in shared_keys:
# sample_dict_loaded["case"] = sample_dict[key].load(with_missing_samples = False, subset = "case")
# sample_dict_loaded["control"] =
GWAS_id = pair_info.GWAS_id
outside_id = pair_info.outside_id
#load only the info of the 2 SNPs
parquet_column_subset_result = LocalResult(dir="./prefect_step1_result_folder",
serializer = PandasSerializer("parquet", deserialize_kwargs={"columns": [GWAS_id, outside_id]}))
all_samples_geno_df = parquet_column_subset_result.read(location = ALL_SAMPLES_GENO_DF_FILE_NAME).value
not_found_case = set(sample_subset_id_dict["case"]) - set(all_samples_geno_df.index)
not_found_control = set(sample_subset_id_dict["control"]) - set(all_samples_geno_df.index)
#check to see if the samples in the genetic data are found in the sample file
#if this assertion is wrong maybe you're rerunning the code on files created from a different dataset than your argument
assert len(not_found_case) < 100 and len(not_found_control) < 100
cases_found = sample_subset_id_dict["case"][~np.isin(sample_subset_id_dict["case"], list(not_found_case))]
controls_found = sample_subset_id_dict["control"][~np.isin(sample_subset_id_dict["control"], list(not_found_control))]
case_geno_each_sample = all_samples_geno_df.loc[cases_found, :]#parquet_column_subset_result.read(location = SUBSET_ALL_SAMPLES_GENO_DF_FILE_NAME(subset="case")).value
control_geno_each_sample = all_samples_geno_df.loc[controls_found, :]#parquet_column_subset_result.read(location = SUBSET_ALL_SAMPLES_GENO_DF_FILE_NAME(subset="control")).value
summary_df = odds_ratio_df_single_combined(case_geno_each_sample = case_geno_each_sample,
control_geno_each_sample = control_geno_each_sample,
single_rsid = GWAS_id,
all_geno_df = all_geno_df,
combo_rsid_list = [GWAS_id, outside_id])
return summary_df
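# Sketch (invented ids) of the sample-intersection step above: drop sample ids
# that have no row in the genotype matrix before the .loc lookup.
import numpy as np
import pandas as pd
toy_case_ids = np.array(["s1", "s2", "s9"])  # "s9" has no genotype row
toy_geno_by_sample = pd.DataFrame({"rs1": [0, 1, 2]}, index=["s1", "s2", "s3"])
toy_not_found = set(toy_case_ids) - set(toy_geno_by_sample.index)
toy_cases_found = toy_case_ids[~np.isin(toy_case_ids, list(toy_not_found))]
toy_case_geno = toy_geno_by_sample.loc[toy_cases_found, :]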
@task_no_checkpoint
def output_parquet_case_control_single_pair_data_cache_df(summary_df, pair_info, parquet=True):
file_path = template_summary_df_target(pair_info= pair_info, parquet=parquet)
parquet_result_partial(location = file_path).write(summary_df)
return summary_df
@task_no_checkpoint
def test(val):
print(val)
#export
@task_no_checkpoint
def get_extracted_geno_files_names():
#sample_subset_files_list = [SUBSET_ALL_SAMPLES_GENO_DF_FILE_NAME(subset = subset) for subset in SAMPLE_SUBSETS]
file_names_to_check = [ALL_GENO_DF_FILE_NAME, ALL_SAMPLES_GENO_DF_FILE_NAME] #+ sample_subset_files_list
return file_names_to_check
@task_no_checkpoint
def check_file_exists(file_name_to_check: str):
file_exist_cond = parquet_result_partial().exists(location=file_name_to_check) #[parquet_result_partial().exists(location=file_name) for file_name in file_names_to_check]
return file_exist_cond
@task_no_checkpoint
def check_files_exist(files_exist_cond_list: List[bool], operator):
return operator(files_exist_cond_list)
@task_no_checkpoint
def output_df_from_pydantic_obj(pydantic_obj, exceptions: tuple = ()):
filtered_dict = {key: value for key, value in pydantic_obj.dict().items() if isinstance(value, pd.DataFrame) or (key in exceptions)}
file_dict = {f"{pydantic_obj.__class__.__name__}/{i}_{key}.tsv": value for (i, (key, value)) in enumerate(filtered_dict.items())}
for file_path, df in file_dict.items():
tsv_result_partial(location=file_path).write(df, **context)
file_names = list(file_dict.keys())
return file_names
#export
@task_no_checkpoint
def generate_summary_df(single_pair_cache_df, pair_info, include_pairs_info_attrs = {"GWAS_id", "outside_id"}):
pair_dict = pair_info.dict(include= include_pairs_info_attrs)
output_df = single_pair_cache_df.copy()
other_cols = output_df.columns.difference(list(pair_dict.values()) + ["unique_samples_id_case", "unique_samples_id_control"]).tolist() #drop genetic sample data from summary table
nested_cols_list = [[key, f"{key}_geno"] for key in pair_dict.keys()]
added_cols = [item for sublist in nested_cols_list for item in sublist]
output_df["GWAS_chrom"] = pair_info.GWAS_chrom
output_df["outside_chrom"] = pair_info.outside_chrom
for (key,val), (var_col, value_col) in zip(pair_dict.items(), nested_cols_list):
output_df = pd.melt(output_df, id_vars = output_df.columns.difference([val]), var_name=var_col, value_name=value_col)
output_df = output_df[added_cols + other_cols] #output_df[added_cols + list(exclude_pair_info_attrs) + other_cols]
return output_df
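# A hedged, unrolled sketch of the melt loop above on an invented single-pair
# cache: each SNP-id column becomes an (id, genotype) pair of long-format columns.
import pandas as pd
toy_cache = pd.DataFrame({"rs1": ["AA", "AG"], "rs2": ["CC", "CT"], "stat": [1.2, 0.8]})
toy_out = pd.melt(toy_cache, id_vars=toy_cache.columns.difference(["rs1"]), var_name="GWAS_id", value_name="GWAS_id_geno")
toy_out = pd.melt(toy_out, id_vars=toy_out.columns.difference(["rs2"]), var_name="outside_id", value_name="outside_id_geno")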
#export
@task(target=UNFILTERED_SUMMARY_DF_FILE_NAME, result = tsv_result_partial())
def output_all_pairs_summary_df(summary_df_list):
all_pairs_summary_df = pd.concat(summary_df_list).sort_values(by=["GWAS_id", "outside_id", "GWAS_id_geno", "outside_id_geno"])
return all_pairs_summary_df
#export
@task_no_checkpoint
def perform_MTC_filters(all_pairs_summary_df, mtc_config):
mtc_result_object = MtcTable.create_mtc_table_from_summary_df(all_pairs_summary_df,
filter_1_queries=mtc_config["filter_1_queries"],
filter_2_queries= mtc_config["filter_2_queries"])
return mtc_result_object
#export
@task(target=STEP1_FINAL_REPORT_FILE_NAME, result = tsv_result_partial())
def output_step_1_final_report(pairs_df, search_result_df, mtc_result_object):
report_df = pairs_df.merge(search_result_df[["SNP_ID", "found_in_genetic_file"]], left_on = "GWAS_id", right_on="SNP_ID", how = "left")\
.merge(search_result_df[["SNP_ID", "found_in_genetic_file"]], left_on = "outside_id", right_on="SNP_ID", how = "left", suffixes=["_GWAS", "_outside"])
report_df = report_df[["GWAS_id", "outside_id", "GWAS_chrom", "outside_chrom", "found_in_genetic_file_GWAS", "found_in_genetic_file_outside"]]
final_test_report_df = report_df.merge(mtc_result_object.original_summary_df, how= "outer", indicator = "all_SNPs_found_in_genetic_file")\
.merge(mtc_result_object.non_zero_geno_combos_pass_cond, how="outer", indicator="geno_combo_passed_filter_1")\
.merge(mtc_result_object.filter_1, how="outer", indicator="pair_has_enough_geno_combo_passed_filter_1")\
.merge(mtc_result_object.filter_2, how="outer", indicator="geno_combo_passed_filter_2").replace({"left_only":0, "both": 1}).fillna("NA")
return final_test_report_df
#export
@task_no_checkpoint
def get_GWAS_id_for_step2(step1_final_report_df):
df = step1_final_report_df.query("geno_combo_passed_filter_2 == 1")
df = df.drop_duplicates(["GWAS_id"])
return df.GWAS_id.tolist()
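# Tiny invented report showing the filter + de-dup above for step-2 GWAS ids.
import pandas as pd
toy_report = pd.DataFrame({"GWAS_id": ["rs1", "rs1", "rs2", "rs3"],
"geno_combo_passed_filter_2": [1, 1, 0, 1]})
toy_step2_ids = toy_report.query("geno_combo_passed_filter_2 == 1").drop_duplicates(["GWAS_id"]).GWAS_id.tolist()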
step2_flow_template = StringFormatter(template = "{GWAS_id}_GWAS_locus_step2")
@task_no_checkpoint
def dummy_task(data):
return data
#export
with Flow("OVP_step1") as dev_flow:
config_path = Parameter("config_path", default = "/lab/corradin_biobank/FOR_AN/OVP/corradin_ovp_utils/conf/hydra_conf")
overrides = Parameter("overrides", default = ["exp=test_exp", "dataset=test_MS"])
pairs_file = Parameter("pairs_file")
cfg = get_config_task(config_path=config_path, overrides = overrides, pairs_file = pairs_file)# overrides=overrides, config_name=config_name)
get_datasets_result = get_datasets(env=cfg["run"]["env"], genetic_dataset_name = cfg["dataset"]["genetic"], sample_dataset_name = cfg["dataset"]["sample"])
genetic_dataset = get_datasets_result["genetic_dataset"]
sample_dataset = get_datasets_result["sample_dataset"]
process_pairs_file_result = process_pairs_file(pairs_file=cfg["pairs_file"])
extracted_geno_file_names = get_extracted_geno_files_names()
file_exist_cond_list = check_file_exists.map(extracted_geno_file_names)
cond = check_files_exist(files_exist_cond_list=file_exist_cond_list, operator=all)
with case(cond, False):
get_snps_info_result = get_snps_info(genetic_dataset=genetic_dataset, sample_dataset=sample_dataset, chr_to_snp_dict=process_pairs_file_result["chr_to_snp_dict"])
pairs_df = process_pairs_file_result["pairs_df"]
search_result_df = output_search_result_df(pairs_df)
search_result_df.set_upstream(get_snps_info_result)
pairs_list = pairs_df_to_records(process_pairs_file_result["pairs_df"], search_result_df)
sample_subset_id_dict = get_sample_subset_id_dict(sample_dataset)
output_case_control_single_pair_data_cache_df_result = output_case_control_single_pair_data_cache_df.map(pair_info=pairs_list, sample_subset_id_dict= unmapped(sample_subset_id_dict))
output_parquet_case_control_single_pair_data_cache_df.map(summary_df = output_case_control_single_pair_data_cache_df_result, pair_info = pairs_list)
mapped_generate_summary_df_result = generate_summary_df.map(pair_info = pairs_list, single_pair_cache_df = output_case_control_single_pair_data_cache_df_result)
all_pairs_summary_df = output_all_pairs_summary_df(mapped_generate_summary_df_result)
mtc_result_object = perform_MTC_filters(all_pairs_summary_df, cfg["run"]["mtc"])
mtc_object_dict = output_df_from_pydantic_obj(mtc_result_object, exceptions = ())
report_df = output_step_1_final_report(process_pairs_file_result["pairs_df"], search_result_df, mtc_result_object)
step2_GWAS_ids = get_GWAS_id_for_step2(report_df)
step2_flow_run_names = step2_flow_template.map(GWAS_id = step2_GWAS_ids)
flow_run_ids = create_flow_run.map(flow_name = unmapped("OVP_step2_prod"),
project_name = unmapped("ovp"),
run_name = step2_flow_run_names,
parameters = unmapped({"step1_folder_path": "/lab/corradin_biobank/FOR_AN/OVP/corradin_ovp_utils/test_prefect/",
"total_iterations" : 100,
"n_iterations": 100}))
dev_flow.visualize()
flow_result = dev_flow.run(config_path="conf/hydra_conf/", overrides=["exp=test_exp", "dataset=test_MS"], pairs_file = "test_MS_chr22.tsv")
dev_flow.visualize(flow_state = flow_result)
```
---
```
#export
class SNPPairInfo(BaseModel):
GWAS_id: str
outside_id: str
GWAS_chrom: int
outside_chrom: int
mtc_threshold: float = None
#export
tsv_result_partial = partialler(LocalResult, dir="./prefect_step1_result_folder", serializer = PandasSerializer("csv",
serialize_kwargs={"sep":"\t", "index": False},
deserialize_kwargs={"sep": "\t"}))
parquet_result_partial = partialler(LocalResult, dir="./prefect_step1_result_folder", serializer = PandasSerializer("parquet"))
ID_COL_LIST = ["rsid", "position"]
ALL_GENO_DF_FILE_NAME = "all_geno_df.parquet"
ALL_SAMPLES_GENO_DF_FILE_NAME = "all_samples_geno_df.parquet"
SUBSET_ALL_SAMPLES_GENO_DF_FILE_NAME = partialler("{subset}_{ALL_SAMPLES_GENO_DF_FILE_NAME}".format, ALL_SAMPLES_GENO_DF_FILE_NAME = ALL_SAMPLES_GENO_DF_FILE_NAME)
SAMPLE_SUBSETS= ["case", "control"]
SEARCH_RESULT_DF_FILE_NAME = "snp_search_result.tsv"
UNFILTERED_SUMMARY_DF_FILE_NAME = "all_pairs_summary_df_unfiltered.tsv"
STEP1_FINAL_REPORT_FILE_NAME = "step1_final_report.tsv"
SINGLE_PAIR_DATA_CACHE_TEMPLATE_WITHOUT_EXTENSION = "snp_pairs_folders/{GWAS_id}/{GWAS_id}_{outside_id}/single_pair_data_cache_df"
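# Illustration (ids invented) of how the single-pair cache template above
# expands into a per-pair path; the template string is repeated here verbatim.
toy_template = "snp_pairs_folders/{GWAS_id}/{GWAS_id}_{outside_id}/single_pair_data_cache_df"
toy_target = (toy_template + ".tsv").format(GWAS_id="rs1", outside_id="rs2")
# → "snp_pairs_folders/rs1/rs1_rs2/single_pair_data_cache_df.tsv"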
def format_result_dir(cfg, task_name, **kwargs):
return f"{Path(cfg.hydra.run.dir)}/{task_name}"
task_no_checkpoint = partialler(task, checkpoint=False)
@task_no_checkpoint
def get_datasets(*,env:str, genetic_dataset_name:str, sample_dataset_name:str):
catalog = get_catalog(env=env, patterns = ['catalog*', 'catalog*/*/','catalog*/*/*'])
genetic = catalog.load(genetic_dataset_name)
sample = catalog.load(sample_dataset_name)
return dict(catalog=catalog, genetic_dataset=genetic, sample_dataset=sample)
#return catalog, genetic, sample
@task_no_checkpoint
#@task(target= "run_config.pkl")
def get_config_task(*, config_path:str):
cfg = OmegaConf.load(config_path)
return cfg
@task_no_checkpoint(skip_on_upstream_skip=False)
def process_pairs_file(*, pairs_file):
pairs_df = pd.read_csv(pairs_file, sep = "\t")
GWAS_dict = pairs_df.groupby("GWAS_chrom")["GWAS_id"].unique().to_dict()
outside_dict = pairs_df.groupby("outside_chrom")["outside_id"].unique().to_dict()
all_keys = list(GWAS_dict.keys()) + list(outside_dict.keys())
chr_to_snp_dict = {}
for key in all_keys:
GWAS_dict_vals = GWAS_dict.get(key, [])
outside_dict_vals = outside_dict.get(key, [])
merged = np.append(GWAS_dict_vals,outside_dict_vals)
chr_to_snp_dict[key] = merged
return {"pairs_df": pairs_df, "chr_to_snp_dict": chr_to_snp_dict}
#TODO: Map this across chromosomes
@task_no_checkpoint#(target="{task_name}.pkl", checkpoint=True)
def get_snps_info(*, genetic_dataset, sample_dataset, chr_to_snp_dict):
# sample_subset_files_list = [SUBSET_ALL_SAMPLES_GENO_DF_FILE_NAME(subset = subset) for subset in SAMPLE_SUBSETS]#combined_geno.sample_subsets}
# file_names_to_check = [ALL_GENO_DF_FILE_NAME, ALL_SAMPLES_GENO_DF_FILE_NAME] + sample_subset_files_list
# file_exist_cond_list = [parquet_result_partial().exists(location=file_name) for file_name in file_names_to_check]
# if all(file_exist_cond_list):
# raise signals.SKIP(message=f"Found existing {ALL_GENO_DF_FILE_NAME} and {ALL_SAMPLES_GENO_DF_FILE_NAME} in {parquet_result_partial().dir}, skipping reading genetic data")
# return
# else:
combined_geno = CombinedGenoPheno.init_from_OVPDataset(genetic_dataset= genetic_dataset,
sample_dataset=sample_dataset,
rsid_dict=chr_to_snp_dict,
id_col_list = ID_COL_LIST)
#parquet_result.write(combined_geno.all_geno_df, location = all_geno_df_file_name, target = all_geno_df_file_name, **context,)
#parquet_result.write(combined_geno.all_samples_geno_df, location = all_samples_geno_df_file_name, target = all_geno_df_file_name, **context)
parquet_result_partial(location=ALL_GENO_DF_FILE_NAME).write(combined_geno.all_geno_df, **context)
parquet_result_partial(location=ALL_SAMPLES_GENO_DF_FILE_NAME).write(combined_geno.all_samples_geno_df, **context)
# sample_subset_files_dict = {SUBSET_ALL_SAMPLES_GENO_DF_FILE_NAME(subset = subset) : combined_geno.get_geno_each_sample_subset(subset) for subset in SAMPLE_SUBSETS}
# for file_name, subset_geno_df in sample_subset_files_dict.items():
# parquet_result_partial(location=file_name).write(subset_geno_df, **context)
return {"all_geno_df":combined_geno.all_geno_df,
"all_samples_geno_df": combined_geno.all_samples_geno_df}
@task_no_checkpoint(target=SEARCH_RESULT_DF_FILE_NAME, checkpoint=True, result = tsv_result_partial(), skip_on_upstream_skip=False)
def output_search_result_df(pairs_df):
all_geno_df = parquet_result_partial().read(location=ALL_GENO_DF_FILE_NAME).value
search_result_df = pd.melt(pairs_df, id_vars=["GWAS_chrom", "outside_chrom"], value_name = "SNP_ID", var_name = "SNP_type").drop_duplicates("SNP_ID")
search_result_df["chrom"] = np.where(search_result_df["SNP_type"] == "GWAS_id", search_result_df["GWAS_chrom"], search_result_df["outside_chrom"])
search_result_df = search_result_df.drop(columns = ["GWAS_chrom", "outside_chrom"])
search_result_df = search_result_df.merge(all_geno_df,
left_on = "SNP_ID",
right_index=True,
how="outer",
indicator=True).replace({"both": 1, "left_only": 0}).rename(columns = {"_merge": "found_in_genetic_file"})[["chrom", "SNP_type", "SNP_ID", "found_in_genetic_file"]]
return search_result_df
@task_no_checkpoint
def load_extracted_info(*, all_geno_df_file_name:str, search_result_df_file_name:str):
search_result_df = tsv_result_partial().read(location = SEARCH_RESULT_DF_FILE_NAME).value
all_geno_df = parquet_result_partial().read(location = ALL_GENO_DF_FILE_NAME).value
return {"search_result_df": search_result_df, "all_geno_df": all_geno_df}
@task_no_checkpoint#(skip_on_upstream_skip=False)
def pairs_df_to_records(pairs_df, search_result_df=None, mtc_df=None):
if search_result_df is not None:
found_SNPs = search_result_df.query("found_in_genetic_file == 1").SNP_ID.tolist()
found_pairs_df = pairs_df.query("GWAS_id in @found_SNPs and outside_id in @found_SNPs")
else:
found_pairs_df = pairs_df
found_pairs_df = found_pairs_df.drop_duplicates(subset = ["GWAS_id", "outside_id", "GWAS_chrom", "outside_chrom"])
if mtc_df is not None:
found_pairs_df = found_pairs_df.merge(mtc_df[["GWAS_id"]], how = "inner")
mtc_table = MtcTable(mtc_df, "threshold")
snp_pair_info_list = [SNPPairInfo(**pair, mtc_threshold= mtc_table.get_threshold(pair["GWAS_id"])) for pair in found_pairs_df.to_dict(orient="records")]
else:
snp_pair_info_list = [SNPPairInfo(**pair) for pair in found_pairs_df.to_dict(orient="records")]
return snp_pair_info_list
def template_summary_df_target(*, pair_info: SNPPairInfo, parquet=False, **kwargs):
GWAS_id = pair_info.GWAS_id
outside_id = pair_info.outside_id
if parquet:
final_template = SINGLE_PAIR_DATA_CACHE_TEMPLATE_WITHOUT_EXTENSION + ".parquet"
else:
final_template = SINGLE_PAIR_DATA_CACHE_TEMPLATE_WITHOUT_EXTENSION + ".tsv"
return final_template.format(GWAS_id = GWAS_id, outside_id = outside_id)
@task_no_checkpoint
def get_sample_subset_id_dict(sample_dataset):
sample_subset_id_dict = {}
sample_dataset_files_dict = vars(sample_dataset.files)
if "single_file" in sample_dataset_files_dict:
sample_subset_id_dict["case"] = sample_dataset_files_dict["single_file"].load(with_missing_samples=False, subset = "case").index.values.astype(str)
sample_subset_id_dict["control"] = sample_dataset_files_dict["single_file"].load(with_missing_samples=False, subset = "control").index.values.astype(str)
else:
sample_subset_id_dict["case"] = sample_dataset_files_dict["case"].load(with_missing_samples=False).index.values.astype(str)
sample_subset_id_dict["control"] = sample_dataset_files_dict["control"].load(with_missing_samples=False).index.values.astype(str)
return sample_subset_id_dict
@task_no_checkpoint(target=template_summary_df_target,
checkpoint=True,
result = tsv_result_partial(),
#skip_on_upstream_skip=False,
task_run_name = "extract_info_{pair_info.GWAS_id}_{pair_info.outside_id}",
name = "Output df for each single pair")
#@task(skip_on_upstream_skip=False)
def output_case_control_single_pair_data_cache_df(pair_info, sample_subset_id_dict):
all_geno_df = parquet_result_partial().read(location=ALL_GENO_DF_FILE_NAME).value
# if "single_file" in shared_keys:
# sample_dict_loaded["case"] = sample_dict[key].load(with_missing_samples = False, subset = "case")
# sample_dict_loaded["control"] =
GWAS_id = pair_info.GWAS_id
outside_id = pair_info.outside_id
#load only the info of the 2 SNPs
parquet_column_subset_result = LocalResult(dir="./prefect_step1_result_folder",
serializer = PandasSerializer("parquet", deserialize_kwargs={"columns": [GWAS_id, outside_id]}))
all_samples_geno_df = parquet_column_subset_result.read(location = ALL_SAMPLES_GENO_DF_FILE_NAME).value
not_found_case = set(sample_subset_id_dict["case"]) - set(all_samples_geno_df.index)
not_found_control = set(sample_subset_id_dict["control"]) - set(all_samples_geno_df.index)
#check to see if the samples in the genetic data are found in the sample file
#if this assertion is wrong maybe you're rerunning the code on files created from a different dataset than your argument
assert len(not_found_case) < 100 and len(not_found_control) < 100
cases_found = sample_subset_id_dict["case"][~np.isin(sample_subset_id_dict["case"], list(not_found_case))]
controls_found = sample_subset_id_dict["control"][~np.isin(sample_subset_id_dict["control"], list(not_found_control))]
case_geno_each_sample = all_samples_geno_df.loc[cases_found, :]#parquet_column_subset_result.read(location = SUBSET_ALL_SAMPLES_GENO_DF_FILE_NAME(subset="case")).value
control_geno_each_sample = all_samples_geno_df.loc[controls_found, :]#parquet_column_subset_result.read(location = SUBSET_ALL_SAMPLES_GENO_DF_FILE_NAME(subset="control")).value
summary_df = odds_ratio_df_single_combined(case_geno_each_sample = case_geno_each_sample,
control_geno_each_sample = control_geno_each_sample,
single_rsid = GWAS_id,
all_geno_df = all_geno_df,
combo_rsid_list = [GWAS_id, outside_id])
return summary_df
@task_no_checkpoint
def output_parquet_case_control_single_pair_data_cache_df(summary_df, pair_info, parquet=True):
file_path = template_summary_df_target(pair_info= pair_info, parquet=parquet)
parquet_result_partial(location = file_path).write(summary_df)
return summary_df
@task_no_checkpoint
def test(val):
print(val)
#export
@task_no_checkpoint
def get_extracted_geno_files_names():
#sample_subset_files_list = [SUBSET_ALL_SAMPLES_GENO_DF_FILE_NAME(subset = subset) for subset in SAMPLE_SUBSETS]
file_names_to_check = [ALL_GENO_DF_FILE_NAME, ALL_SAMPLES_GENO_DF_FILE_NAME] #+ sample_subset_files_list
return file_names_to_check
@task_no_checkpoint
def check_file_exists(file_name_to_check: str):
file_exist_cond = parquet_result_partial().exists(location=file_name_to_check) #[parquet_result_partial().exists(location=file_name) for file_name in file_names_to_check]
return file_exist_cond
@task_no_checkpoint
def check_files_exist(files_exist_cond_list: List[bool], operator):
return operator(files_exist_cond_list)
@task_no_checkpoint
def output_df_from_pydantic_obj(pydantic_obj, exceptions: tuple = ()):
filtered_dict = {key: value for key, value in pydantic_obj.dict().items() if isinstance(value, pd.DataFrame) or (key in exceptions)}
file_dict = {f"{pydantic_obj.__class__.__name__}/{i}_{key}.tsv": value for (i, (key, value)) in enumerate(filtered_dict.items())}
for file_path, df in file_dict.items():
tsv_result_partial(location=file_path).write(df, **context)
file_names = list(file_dict.keys())
return file_names
#export
@task_no_checkpoint
def generate_summary_df(single_pair_cache_df, pair_info, include_pairs_info_attrs = {"GWAS_id", "outside_id"}):
pair_dict = pair_info.dict(include= include_pairs_info_attrs)
output_df = single_pair_cache_df.copy()
other_cols = output_df.columns.difference(list(pair_dict.values()) + ["unique_samples_id_case", "unique_samples_id_control"]).tolist() #drop genetic sample data from summary table
nested_cols_list = [[key, f"{key}_geno"] for key in pair_dict.keys()]
added_cols = [item for sublist in nested_cols_list for item in sublist]
output_df["GWAS_chrom"] = pair_info.GWAS_chrom
output_df["outside_chrom"] = pair_info.outside_chrom
for (key,val), (var_col, value_col) in zip(pair_dict.items(), nested_cols_list):
output_df = pd.melt(output_df, id_vars = output_df.columns.difference([val]), var_name=var_col, value_name=value_col)
output_df = output_df[added_cols + other_cols] #output_df[added_cols + list(exclude_pair_info_attrs) + other_cols]
return output_df
#export
@task(target=UNFILTERED_SUMMARY_DF_FILE_NAME, result = tsv_result_partial())
def output_all_pairs_summary_df(summary_df_list):
all_pairs_summary_df = pd.concat(summary_df_list).sort_values(by=["GWAS_id", "outside_id", "GWAS_id_geno", "outside_id_geno"])
return all_pairs_summary_df
#export
@task_no_checkpoint
def perform_MTC_filters(all_pairs_summary_df, mtc_config):
mtc_result_object = MtcTable.create_mtc_table_from_summary_df(all_pairs_summary_df,
filter_1_queries=mtc_config["filter_1_queries"],
filter_2_queries= mtc_config["filter_2_queries"])
return mtc_result_object
#export
@task(target=STEP1_FINAL_REPORT_FILE_NAME, result = tsv_result_partial())
def output_step_1_final_report(pairs_df, search_result_df, mtc_result_object):
report_df = pairs_df.merge(search_result_df[["SNP_ID", "found_in_genetic_file"]], left_on = "GWAS_id", right_on="SNP_ID", how = "left")\
.merge(search_result_df[["SNP_ID", "found_in_genetic_file"]], left_on = "outside_id", right_on="SNP_ID", how = "left", suffixes=["_GWAS", "_outside"])
report_df = report_df[["GWAS_id", "outside_id", "GWAS_chrom", "outside_chrom", "found_in_genetic_file_GWAS", "found_in_genetic_file_outside"]]
final_test_report_df = report_df.merge(mtc_result_object.original_summary_df, how= "outer", indicator = "all_SNPs_found_in_genetic_file")\
.merge(mtc_result_object.non_zero_geno_combos_pass_cond, how="outer", indicator="geno_combo_passed_filter_1")\
.merge(mtc_result_object.filter_1, how="outer", indicator="pair_has_enough_geno_combo_passed_filter_1")\
.merge(mtc_result_object.filter_2, how="outer", indicator="geno_combo_passed_filter_2").replace({"left_only":0, "both": 1}).fillna("NA")
return final_test_report_df
#export
@task_no_checkpoint
def get_GWAS_id_for_step2(step1_final_report_df):
df = step1_final_report_df.query("geno_combo_passed_filter_2 == 1")
df = df.drop_duplicates(["GWAS_id"])
return df.GWAS_id.tolist()
step2_flow_template = StringFormatter(template = "{GWAS_id}_GWAS_locus_step2")
@task_no_checkpoint
def dummy_task(data):
return data
#export
DEFAULT_CONFIG_PATH = "/lab/corradin_biobank/FOR_AN/OVP/corradin_ovp_utils/corradin_ovp_utils/conf/example_prefect_step1_filled_conf.yaml"
with Flow("OVP_step1_hydra") as step1_flow_hydra:
    config_path = Parameter("config_path", default=DEFAULT_CONFIG_PATH)
    # overrides = Parameter("overrides", default = ["exp=test_exp", "dataset=test_MS"])
    # pairs_file = Parameter("pairs_file")
    cfg = get_config_task(config_path=config_path)  # , overrides=overrides, pairs_file=pairs_file
    get_datasets_result = get_datasets(env=cfg["run"]["env"], genetic_dataset_name=cfg["dataset"]["genetic"], sample_dataset_name=cfg["dataset"]["sample"])
    genetic_dataset = get_datasets_result["genetic_dataset"]
    sample_dataset = get_datasets_result["sample_dataset"]
    process_pairs_file_result = process_pairs_file(pairs_file=cfg["run"]["input"]["pairs_file_full_path"])
    extracted_geno_file_names = get_extracted_geno_files_names()
    file_exist_cond_list = check_file_exists.map(extracted_geno_file_names)
    cond = check_files_exist(files_exist_cond_list=file_exist_cond_list, operator=all)
    # Only extract SNP info when the cached geno files do not already exist
    with case(cond, False):
        get_snps_info_result = get_snps_info(genetic_dataset=genetic_dataset, sample_dataset=sample_dataset, chr_to_snp_dict=process_pairs_file_result["chr_to_snp_dict"])
    pairs_df = process_pairs_file_result["pairs_df"]
    search_result_df = output_search_result_df(pairs_df)
    search_result_df.set_upstream(get_snps_info_result)
    pairs_list = pairs_df_to_records(process_pairs_file_result["pairs_df"], search_result_df)
    sample_subset_id_dict = get_sample_subset_id_dict(sample_dataset)
    output_case_control_single_pair_data_cache_df_result = output_case_control_single_pair_data_cache_df.map(pair_info=pairs_list, sample_subset_id_dict=unmapped(sample_subset_id_dict))
    output_parquet_case_control_single_pair_data_cache_df.map(summary_df=output_case_control_single_pair_data_cache_df_result, pair_info=pairs_list)
    mapped_generate_summary_df_result = generate_summary_df.map(pair_info=pairs_list, single_pair_cache_df=output_case_control_single_pair_data_cache_df_result)
    all_pairs_summary_df = output_all_pairs_summary_df(mapped_generate_summary_df_result)
    mtc_result_object = perform_MTC_filters(all_pairs_summary_df, cfg["run"]["mtc"])
    mtc_object_dict = output_df_from_pydantic_obj(mtc_result_object, exceptions=())
    report_df = output_step_1_final_report(process_pairs_file_result["pairs_df"], search_result_df, mtc_result_object)
    step2_GWAS_ids = get_GWAS_id_for_step2(report_df)
    step2_flow_run_names = step2_flow_template.map(GWAS_id=step2_GWAS_ids)
    # flow_run_ids = create_flow_run.map(flow_name=unmapped("OVP_step2_prod"),
    #                                    project_name=unmapped("ovp"),
    #                                    run_name=step2_flow_run_names,
    #                                    parameters=unmapped({"step1_folder_path": "/lab/corradin_biobank/FOR_AN/OVP/corradin_ovp_utils/test_prefect/",
    #                                                         "total_iterations": 100,
    #                                                         "n_iterations": 100}))
step1_flow_hydra.visualize()
# test_result_folder = "./test_prefect_step1/inner_folder"
# os.makedirs(test_result_folder,exist_ok=True)
# with cd(test_result_folder):
# flow_hydra_result = flow_hydra.run(config_path="conf/hydra_conf/", overrides=["exp=test_exp", "dataset=test_MS"], pairs_file = "/lab/corradin_biobank/FOR_AN/OVP/corradin_ovp_utils/test_MS_chr22.tsv")#(config_path=DEFAULT_CONFIG_PATH)
# flow_hydra.visualize(flow_state=flow_hydra_result)
flow_hydra_result = step1_flow_hydra.run(config_path=DEFAULT_CONFIG_PATH)#(config_path=DEFAULT_CONFIG_PATH)
get_datasets.run(env="cluster", genetic_dataset_name="UKB_genetic_file_bgen_split_by_chrom", sample_dataset_name='UKB_sample_file_with_pheno_col')["catalog"].reload().list()
# Test in a different directory. This breaks relative paths, but it is useful
# for finding out which paths/files must be given as absolute paths.
test_result_folder = "./test_prefect_step1/inner_folder"
os.makedirs(test_result_folder, exist_ok=True)
with cd(test_result_folder):
    flow_hydra_different_dir_result = step1_flow_hydra.run(config_path=DEFAULT_CONFIG_PATH)
    step1_flow_hydra.visualize(flow_state=flow_hydra_different_dir_result)
```
# Operations on ndarrays
NumPy provides a broad set of optimized functions that can be applied to ndarrays globally, avoiding the need for (much more expensive) loops.
```
import numpy as np
```
### Element-wise operations - Universal functions
The first set of functions offered by NumPy are the so-called "universal functions" (ufuncs), which perform element-wise operations on an array. Depending on the number of parameters, there are two kinds of universal functions.
#### Unary functions
These functions take a single ndarray as a parameter.<br/>
<ul>
<li><b>abs, fabs:</b> Absolute value.</li>
<li><b>sqrt:</b> Square root (equivalent to array \*\* 0.5).</li>
<li><b>square:</b> Square (equivalent to array ** 2).</li>
<li><b>exp:</b> Power of e.</li>
<li><b>log, log10, log2, log1p:</b> Logarithms in different bases.</li>
<li><b>sign:</b> Sign (+ = 1 / - = -1 / 0 = 0).</li>
<li><b>ceil:</b> Ceiling.</li>
<li><b>floor:</b> Floor.</li>
<li><b>rint:</b> Round to the nearest integer.</li>
<li><b>modf:</b> Returns two arrays, one with the fractional part and one with the integer part.</li>
<li><b>isnan:</b> Returns a boolean array indicating whether each value is NaN.</li>
<li><b>isfinite, isinf:</b> Return a boolean array indicating whether each value is finite or infinite.</li>
<li><b>cos, cosh, sin, sinh, tan, tanh:</b> Trigonometric functions.</li>
<li><b>arccos, arccosh, arcsin, arcsinh, arctan, arctanh:</b> Inverse trigonometric functions.</li>
<li><b>logical_not:</b> Boolean inverse of all values in the array (equivalent to ~(array)).</li>
</ul>
```
array = np.array([1,2,3,4,5])
np.abs(array)
np.sign(array)
np.ceil(array)
```
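Of the unary ufuncs above, `modf` and the NaN checks are the least obvious, so here is a short sketch (input values chosen arbitrarily):

```python
import numpy as np

values = np.array([1.5, -2.25, np.nan, 4.0])

# modf splits each element into its fractional and integer parts
frac, whole = np.modf(values)

# isnan/isfinite build boolean masks element by element
nan_mask = np.isnan(values)
finite_mask = np.isfinite(values)
```

`frac` holds `[0.5, -0.25, nan, 0.0]` and `whole` holds `[1.0, -2.0, nan, 4.0]`; note that `modf` keeps the sign on both parts.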
#### Binary functions
These functions take two arrays as parameters.
<ul>
<li><b>add:</b> Adds the elements of the two arrays (equivalent to array1 + array2).</li>
<li><b>subtract:</b> Subtracts the elements of the two arrays (equivalent to array1 - array2).</li>
<li><b>multiply:</b> Multiplies the elements of the two arrays (equivalent to array1 \* array2).</li>
<li><b>divide, floor_divide:</b> Divides the elements of the two arrays (equivalent to array1 / (or //) array2).</li>
<li><b>power:</b> Raises the elements of the first array to the powers given in the second (equivalent to array1 ** array2).</li>
<li><b>maximum, fmax:</b> Element-wise maximum of the two arrays. fmax ignores NaN.</li>
<li><b>minimum, fmin:</b> Element-wise minimum of the two arrays. fmin ignores NaN.</li>
<li><b>mod:</b> Element-wise remainder of dividing the two arrays (equivalent to array1 % array2).</li>
<li><b>greater, greater_equal, less, less_equal, equal, not_equal:</b> Element-wise comparisons between the two ndarrays.</li>
<li><b>logical_and, logical_or, logical_xor:</b> Element-wise boolean operations between the two ndarrays.</li>
</ul>
```
array1 = np.random.randn(5, 5)
array1
array2 = np.random.randn(5, 5)
array2
np.minimum(array1, array2)
np.divide(array1,array2)
np.floor_divide(array1,array2)
```
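The difference between `maximum`/`minimum` and their `fmax`/`fmin` variants only shows up when NaN values are involved; a small sketch:

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])
b = np.array([2.0, 2.0, np.nan])

prop = np.maximum(a, b)  # NaN propagates
skip = np.fmax(a, b)     # NaN ignored where the other value is valid
rem = np.mod(np.array([7, 8, 9]), 3)  # element-wise remainder
```

`prop` comes out as `[2., nan, nan]` while `skip` is `[2., 2., 3.]`, and `rem` is `[1, 2, 0]`.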
### Selecting ndarray elements based on a condition
Through the <b>np.where</b> function, NumPy lets us build an output array from two input arrays by means of a boolean mask that indicates, element by element, whether the output should take the element from the first ndarray (True) or from the second (False).
```
array1 = np.random.randn(5, 5)
array1
array2 = np.random.randn(5, 5)
array2
# Conditional merge
np.where(array1 < array2, array1, array2)
# Nested conditions
np.where(array1 < array2, np.where(array1 < 0, 0, array1), array2)
```
### Mathematical and statistical functions
NumPy offers a broad set of mathematical and statistical functions that can be applied to ndarrays. The most typical examples are listed below (a few more can be found in the official NumPy documentation).<br/>
<ul>
<li><b>sum:</b> Sum of the elements.</li>
<li><b>mean:</b> Arithmetic mean of the elements.</li>
<li><b>median:</b> Median of the elements.</li>
<li><b>std:</b> Standard deviation of the elements.</li>
<li><b>var:</b> Variance of the elements.</li>
<li><b>min:</b> Minimum value of the elements.</li>
<li><b>max:</b> Maximum value of the elements.</li>
<li><b>argmin:</b> Index of the minimum value.</li>
<li><b>argmax:</b> Index of the maximum value.</li>
<li><b>cumsum:</b> Cumulative sum of the elements.</li>
<li><b>cumprod:</b> Cumulative product of the elements.</li>
</ul>
Besides the ndarray they operate on, all of these functions can take a second parameter called <b>axis</b>. If this parameter is omitted, the function is applied over all elements of the ndarray; if it is included, it can take two values:
<ul>
<li>Value 0: Applies the function by columns</li>
<li>Value 1: Applies the function by rows</li>
</ul>
```
array = np.random.randn(5, 4)
array
# Global operation
np.sum(array)
# Operation by columns
np.sum(array, axis=0)
# Operation by rows
np.sum(array, axis=1)
```
Additionally, some of these functions can be used as "methods" of ndarrays and not only as functions on them. In that case the syntax changes to the "ndarray.function()" notation.
```
array.sum()
np.argmax(array)
np.argmin(array)
np.cumsum(array)
np.cumprod(array)
```
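The method and function forms are interchangeable for these reductions, as a quick check shows:

```python
import numpy as np

arr = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]

total_m = arr.sum()            # method form
total_f = np.sum(arr)          # function form, same result
col_means = arr.mean(axis=0)   # per-column means, also available as np.mean(arr, axis=0)
```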
### Operations on boolean ndarrays
Since Python internally treats boolean True values as 1 and False values as 0, it is very easy to perform mathematical operations on these boolean values in order to run various checks. For example...
```
array = np.random.randn(5, 5)
array
# Elements greater than 0
(array > 0).sum()
# Elements smaller than the mean
(array < array.mean()).sum()
```
NumPy also provides two predefined check functions for boolean ndarrays:<br/>
<ul>
<li><b>any:</b> Checks whether any element is True.</li>
<li><b>all:</b> Checks whether all elements are True.</li>
</ul>
```
# Does any element satisfy the condition
(array == 0).any()
# Do all elements satisfy the condition
((array >= -2) & (array <= 2)).all()
```
### Sorting ndarrays
```
array = np.random.randn(5, 5)
array
# Sorted data
np.sort(array)
# Data sorted along the first axis
np.sort(array, axis=0)
```
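Note that `np.sort` returns a sorted copy and, by default, sorts along the last axis (each row of a 2-D array); `axis=0` sorts each column instead:

```python
import numpy as np

arr = np.array([[3, 1, 2],
                [0, 7, 8]])

by_rows = np.sort(arr)          # default axis=-1: each row sorted
by_cols = np.sort(arr, axis=0)  # each column sorted

# The in-place counterpart is the ndarray method arr.sort()
```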
### Set functions
NumPy can treat an ndarray as if the whole of its elements formed a set.<br/>
<ul>
<li><b>unique:</b> Computes the unique set of elements, without duplicates.</li>
<li><b>intersect1d:</b> Computes the intersection of the elements of two arrays.</li>
<li><b>union1d:</b> Computes the union of the elements of two arrays.</li>
<li><b>in1d:</b> Computes a boolean array indicating whether each element of the first array is contained in the second.</li>
<li><b>setdiff1d:</b> Computes the difference between the two sets.</li>
<li><b>setxor1d:</b> Computes the symmetric difference between the two sets.</li>
</ul>
```
array1 = np.array([6, 0, 0, 0, 3, 2, 5, 6])
array1
array2 = np.array([7, 4, 3, 1, 2, 6, 5])
array2
np.unique(array1)
np.union1d(array1, array2)
np.in1d(array1, array2)
```
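The remaining set functions from the list (`intersect1d`, `setdiff1d`, `setxor1d`) work on the same kind of input; all of them return sorted, deduplicated results:

```python
import numpy as np

a = np.array([6, 0, 0, 3, 2, 5, 6])
b = np.array([7, 4, 3, 1, 2, 6, 5])

inter = np.intersect1d(a, b)  # elements present in both arrays
diff = np.setdiff1d(a, b)     # elements of a that are not in b
sym = np.setxor1d(a, b)       # elements present in exactly one of the two
```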
# Get Target
```
!pip install -r requirements_colab.txt -q
```
> To speed up the review process, I provide the ***drive id*** of the data created in the Train-creation folder notebooks.
---
> Each data drive link is also listed in the README PDF attached to this solution.
---
> The data used in this notebook is **S2TrainObs1**.
```
!gdown --id 1hNRbtcqd9F6stMOK1xAZApDITwAjiSDJ
# import necessary dependencies
import os
import warnings
import numpy as np
import pandas as pd
import random
import gc
from sklearn.metrics import log_loss
from sklearn.model_selection import StratifiedKFold
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier
warnings.filterwarnings('ignore')
np.random.seed(111)
random.seed(111)
def create_target():
    train = pd.read_csv("S2TrainObs1.csv")
    train = train.groupby('field_id').median().reset_index().sort_values('field_id')
    train.label = train.label.astype('int')
    # Return the label column as well: downstream cells access create_target()['label']
    return train[['field_id', 'label']]
```
# Optimized Blend
```
y_true = pd.get_dummies(create_target()['label']).values
test_dict = {'Catboost': 'S1_Catboost.csv',
'LightGbm': 'S1_LGBM.csv',
'Xgboost': 'S1_XGBOOST.csv',
'NN Attention': 'S1_NNAttention.csv' ,
'Neural Network' : 'S1_NN.csv',
'CatboostS2': 'S2_Catboost.csv',
'XgboostS2' : 'S2_Xgboost.csv',
'NN_AttentionS2': 'S2_NNAttention.csv' ,
'Neural_NetworkS2' : 'S2_NN.csv',
}
BlendTest = np.zeros((len(test_dict), 35295,y_true.shape[1]))
for i in range(BlendTest.shape[0]):
    BlendTest[i] = pd.read_csv(list(test_dict.values())[i]).values[:, 1:]
BlendPreds = np.tensordot([0.175 , 0.2275, 0.1225, 0.0875, 0.0875,0.0975, 0.0825, 0.06 , 0.06 ], BlendTest, axes = ((0), (0)))
BlendPreds.shape
pred_df = pd.DataFrame(BlendPreds)
pred_df = pred_df.rename(columns={
0:'Crop_ID_1',
1:'Crop_ID_2',
2:'Crop_ID_3',
3:'Crop_ID_4',
4:'Crop_ID_5',
5:'Crop_ID_6',
6:'Crop_ID_7',
7:'Crop_ID_8',
8:'Crop_ID_9'
})
pred_df['field_id'] = pd.read_csv("S1_10FoldsCatboost_V5.csv")['field_id'].astype('int').values
pred_df = pred_df[['field_id', 'Crop_ID_1', 'Crop_ID_2', 'Crop_ID_3', 'Crop_ID_4', 'Crop_ID_5', 'Crop_ID_6', 'Crop_ID_7', 'Crop_ID_8', 'Crop_ID_9']]
pred_df.head()
```
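The `np.tensordot` call above contracts the leading model axis against the weight vector, i.e. it computes a weighted sum of the per-model probability matrices. A minimal sketch with toy shapes (the sizes and weights here are illustrative, not the ones used above):

```python
import numpy as np

n_models, n_rows, n_classes = 3, 4, 2
rng = np.random.default_rng(0)
preds = rng.random((n_models, n_rows, n_classes))
weights = np.array([0.5, 0.3, 0.2])  # summing to 1 keeps blended rows as probabilities

# Contract axis 0 of weights against axis 0 of preds -> shape (n_rows, n_classes)
blend = np.tensordot(weights, preds, axes=((0,), (0,)))

# Equivalent explicit weighted sum of the stacked predictions
manual = sum(w * p for w, p in zip(weights, preds))
```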
# Optimized Stacking
```
y_true = create_target()
oof_dict = {'Catboost': 'S1_oof_cat.npy',
'LightGbm': 'S1_oof_lgbm.npy',
'Xgboost': 'S1_oof_XGBOOST.npy',
'NN Attention': 'S1_oof_NNAttention.npy' ,
'Neural Network' : 'S1_oof_NN.npy',
'CatboostS2': 'S2_oof_cat.npy',
'XgboostS2' : 'S2_oof_xgb.npy',
'NN_AttentionS2' : 'S2_oof_NNAttention.npy' ,
'Neural_NetworkS2' : 'S2_NN.npy',
}
oof_data = y_true.copy()
for i in range(len(list(oof_dict.values()))):
    local = pd.DataFrame(np.load(list(oof_dict.values())[i]), columns=[f'{list(oof_dict.keys())[i]}_oof_{j}' for j in range(1, 10)])
    oof_data = pd.concat([oof_data, local], axis=1)
oof_data.head()
test_dict = {'Catboost': 'S1_Catboost.csv',
'LightGbm': 'S1_LGBM.csv',
'Xgboost': 'S1_XGBOOST.csv',
'NN Attention': 'S1_NNAttention.csv' ,
'Neural Network' : 'S1_NN.csv',
'CatboostS2': 'S2_Catboost.csv',
'XgboostS2' : 'S2_Xgboost.csv',
'NN_AttentionS2': 'S2_NNAttention.csv' ,
'Neural_NetworkS2' : 'S2_NN.csv',
}
test_data = pd.read_csv("S1_10FoldsCatboost_V5.csv")[['field_id']]
for i in range(len(list(test_dict.values()))):
    local = pd.DataFrame(pd.read_csv(list(test_dict.values())[i]).iloc[:, 1:].values, columns=[f'{list(test_dict.keys())[i]}_oof_{j}' for j in range(1, 10)])
    test_data = pd.concat([test_data, local], axis=1)
test_data.head()
```
# Stacker-Model
```
class AzerStacking:
    def __init__(self, y):
        self.y = y

    def StackingRegressor(self, KFOLD, stacking_train: pd.DataFrame, stacking_test: pd.DataFrame):
        cols = stacking_train.drop(columns=['field_id', 'label']).columns.tolist()
        X, y, Test = stacking_train[cols], stacking_train['label'], stacking_test[cols]
        final_preds = []
        err_cb = []
        oof_stack = np.zeros((len(X), 9))
        for fold, (train_index, test_index) in enumerate(KFOLD.split(X, y)):
            X_train, X_test = X.values[train_index], X.values[test_index]
            y_train, y_test = y.values[train_index], y.values[test_index]
            model1 = LGBMClassifier(verbose=10)
            model2 = CatBoostClassifier(iterations=50, verbose=0)
            model1.fit(X_train, y_train)
            model2.fit(X_train, y_train)
            preds1 = model1.predict_proba(X_test)
            preds2 = model2.predict_proba(X_test)
            preds = preds1 * 0.7 + preds2 * 0.3
            oof_stack[test_index] = preds
            err_cb.append(log_loss(y_test, preds))
            print(f'logloss fold-{fold+1}/10', log_loss(y_test, preds))
            test_pred1 = model1.predict_proba(Test.values)
            test_pred2 = model2.predict_proba(Test.values)
            test_pred = test_pred1 * 0.7 + test_pred2 * 0.3
            final_preds.append(test_pred)
            print(2 * '--------------------------------------')
        print('STACKING Log Loss', log_loss(y, oof_stack))
        return oof_stack, np.mean(final_preds, axis=0)
folds = StratifiedKFold(n_splits=10,shuffle=True,random_state=47)
AzerStacker = AzerStacking(y=oof_data['label'])
oof_stack,stack_preds = AzerStacker.StackingRegressor(KFOLD=folds ,stacking_train=oof_data ,stacking_test=test_data)
pred_df = pd.DataFrame(stack_preds)
pred_df = pred_df.rename(columns={
0:'Crop_ID_1',
1:'Crop_ID_2',
2:'Crop_ID_3',
3:'Crop_ID_4',
4:'Crop_ID_5',
5:'Crop_ID_6',
6:'Crop_ID_7',
7:'Crop_ID_8',
8:'Crop_ID_9'
})
pred_df['field_id'] = pd.read_csv("S1_10FoldsCatboost_V5.csv")['field_id'].astype('int').values
pred_df = pred_df[['field_id', 'Crop_ID_1', 'Crop_ID_2', 'Crop_ID_3', 'Crop_ID_4', 'Crop_ID_5', 'Crop_ID_6', 'Crop_ID_7', 'Crop_ID_8', 'Crop_ID_9']]
pred_df.head()
```
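Inside each fold above, the two level-2 models are soft-voted with fixed 0.7/0.3 weights before scoring. Stripped of the CV machinery, the blend-and-score step looks like this (toy probabilities, and the log loss is computed by hand instead of via scikit-learn):

```python
import numpy as np

# Hypothetical predicted class probabilities from two models (3 samples, 2 classes)
p1 = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
p2 = np.array([[0.7, 0.3], [0.4, 0.6], [0.5, 0.5]])
y = np.array([0, 1, 0])  # true class indices

blend = 0.7 * p1 + 0.3 * p2  # convex combination: rows still sum to 1

# Multiclass log loss: mean negative log-probability of the true class
score = -np.mean(np.log(blend[np.arange(len(y)), y]))
```

Because the weights sum to 1, the blended rows remain valid probability distributions and can be scored or submitted directly.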
# Stacking - Blend
```
pred_df = pd.DataFrame(stack_preds*0.7+ BlendPreds*0.3)
pred_df = pred_df.rename(columns={
0:'Crop_ID_1',
1:'Crop_ID_2',
2:'Crop_ID_3',
3:'Crop_ID_4',
4:'Crop_ID_5',
5:'Crop_ID_6',
6:'Crop_ID_7',
7:'Crop_ID_8',
8:'Crop_ID_9'
})
pred_df['field_id'] = pd.read_csv("S1_10FoldsCatboost_V5.csv")['field_id'].astype('int').values
pred_df = pred_df[['field_id', 'Crop_ID_1', 'Crop_ID_2', 'Crop_ID_3', 'Crop_ID_4', 'Crop_ID_5', 'Crop_ID_6', 'Crop_ID_7', 'Crop_ID_8', 'Crop_ID_9']]
pred_df.head()
# Write the predicted probabilites to a csv for submission
pred_df.to_csv('S1_Stacking_Blending.csv', index=False)
```
# About: Changing a configuration file -- httpd.conf
---
Change the configuration of the Apache HTTP server in the Moodle container.
## Overview
As an example of changing a configuration file, we will edit the configuration file of the Apache HTTP server in the Moodle container and change the number of server processes started at boot.

The procedure is as follows:
1. Fetch the configuration file placed on the host environment into the Notebook environment
2. Create a backup of the fetched file
3. Edit the configuration file using the Notebook editing feature
4. Place the modified configuration file back on the host environment
5. Restart the container so the configuration change takes effect
In a Moodle environment built with the application template, configuration files that users may want to change, such as `httpd.conf`, are placed on the host environment. Those host files are made visible from inside the container via [bind mounts](https://docs.docker.com/storage/bind-mounts/). When editing a configuration file you therefore need to keep in mind whether you are specifying the path inside the container or the path on the host, and where the bind mount points are.
The table below shows how the configuration file paths of the Moodle container correspond.
<table>
<tr>
<th style="text-align:left;">Path inside the container</th>
<th style="text-align:left;">Path on the host</th>
</tr>
<tr>
<td style="text-align:left;">/etc/php.ini</td>
<td style="text-align:left;">/srv/moodle/moodle/conf/php.ini</td>
</tr>
<tr>
<td style="text-align:left;">/etc/php.d/</td>
<td style="text-align:left;">/srv/moodle/moodle/conf/php.d/</td>
</tr>
<tr>
<td style="text-align:left;">/etc/httpd/conf/httpd.conf</td>
<td style="text-align:left;">/srv/moodle/moodle/conf/httpd/conf/httpd.conf</td>
</tr>
<tr>
<td style="text-align:left;">/etc/httpd/conf.d/</td>
<td style="text-align:left;">/srv/moodle/moodle/conf/httpd/conf.d/</td>
</tr>
<tr>
<td style="text-align:left;">/etc/httpd/conf.modules.d/</td>
<td style="text-align:left;">/srv/moodle/moodle/conf/httpd/conf.modules.d/</td>
</tr>
<tr>
<td style="text-align:left;">/etc/pki/ca-trust/source/anchors/</td>
<td style="text-align:left;">/srv/moodle/moodle/conf/ca-trust/</td>
</tr>
</table>
## Setting parameters
Specify the target container name, configuration file name, and so on.
### Specifying the group name
Set the Ansible group name that this notebook operates on.
```
# (example)
# target_group = 'Moodle'
target_group =
```
#### Check
Verify that the specified `target_group` value is valid.
Confirm that a `group_vars` file corresponding to `target_group` exists.
```
from pathlib import Path

if not (Path('group_vars') / (target_group + '.yml')).exists():
    raise RuntimeError(f"ERROR: not exists {target_group + '.yml'}")
```
Confirm that the hosts specified by `target_group` are reachable with Ansible.
```
!ansible {target_group} -m ping
```
### Specifying the container
Specify the container whose configuration file will be changed.
Check the list of currently running containers.
```
!ansible {target_group} -a 'chdir=/srv/moodle docker-compose ps --services'
```
From the container list displayed above, specify the name of the target container.
```
target_container = 'moodle'
```
### Specifying the configuration file
Specify the path of the configuration file to change. Use the path inside the container here.
```
target_file = '/etc/httpd/conf.modules.d/00-mpm.conf'
```
## Editing the configuration file
Fetch the configuration file of the Moodle container into the local environment and edit it with the Jupyter Notebook editing feature.

Running the next cell performs the following steps:
1. Fetch the Apache HTTP Server configuration file placed on the host environment into the local environment
2. Create a backup of the fetched configuration file
3. Display a link for editing the configuration file with the Jupyter Notebook editing feature
```
%run scripts/edit_conf.py
fetch_conf(target_group, target_container, target_file)
```
Click the link shown in the output of the cell above and edit the configuration file.
In this example, append the following content to the end of the file.
```
<IfModule mpm_prefork_module>
StartServers 20
MinSpareServers 20
MaxSpareServers 20
ServerLimit 20
MaxRequestsPerChild 50
</IfModule>
```
Adding this setting increases the number of Apache HTTP Server processes started at boot from the default of 5 to 20.
> After editing the file, **always** select [File]-[Save] from the menu to save it.
The file fetched into the local environment is stored at the following path:
`./edit/{target_group}/{YYYYMMDDHHmmssffffff}/00-mpm.conf`
`{target_group}` is the Ansible group name, and `{YYYYMMDDHHmmssffffff}` is the timestamp at which the file was fetched.
Backup files are stored at the following path:
`./edit/{target_group}/{YYYYMMDDHHmmssffffff}/00-mpm.conf.orig`
Running the next cell shows the differences between the file before and after editing.
```
show_local_conf_diff(target_group, target_container, target_file)
```
## Applying the edited configuration file
Place the edited file on the host environment and apply the configuration change to the container.

### Checking the state before the change
Check the state before the configuration change is applied.
Since we changed the settings for the number of Apache HTTP Server processes, check the process list of the Apache HTTP Server container.
Display the process list.
```
!ansible {target_group} -a 'chdir=/srv/moodle docker-compose exec -T {target_container} ps ax '
```
### Applying the edits
Apply the configuration file edited in the previous section to the Apache HTTP Server container.
Running the next cell performs the following steps:
1. Display the differences between the configuration file before and after editing
2. Place the edited configuration file on the host environment
3. Restart the container to apply the changed configuration file
```
apply_conf(target_group, target_container, target_file)
```
### Checking the state after the change
Check the state after the configuration change has been applied.
Check the process list of the Apache HTTP Server container.
```
!ansible {target_group} -a 'chdir=/srv/moodle docker-compose exec -T {target_container} ps ax '
```
## Reverting the change
Restore the configuration file to its state before editing.

Running the next cell performs the following steps:
1. Display the differences between the configuration file after and before editing
2. Place the pre-edit configuration file on the host environment
3. Restart the container to apply the restored settings
```
revert_conf(target_group, target_container, target_file)
```
Check the state after the configuration file has been restored. Check the process list of the Apache HTTP Server container.
```
!ansible {target_group} -a 'chdir=/srv/moodle docker-compose exec -T {target_container} ps ax '
```