markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
https://pbpython.com/bullet-graph.html | import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.ticker import FuncFormatter
%matplotlib inline
# display a palette of 5 shades of green
sns.palplot(sns.light_palette("green", 5))
# 8 different shades of purple in reverse order
sns.palplot(sns.light_palette("purple", 8, reverse=True))
# define the... | _____no_output_____ | MIT | styling/bullet_graph.ipynb | TillMeineke/machine_learning |
Was Air Quality Affected in Countries or Regions Where COVID-19 was Most Prevalent?**By: Arpit Jain, Maria Stella Vardanega, Tingting Cao, Christopher Chang, Mona Ma, Fusu Luo** --- Outline I. Problem Definition & Data Source Description 1. Project Objectives 2. Data Source 3. Dataset Preview II. Wh... | %%bigquery
SELECT * FROM `ba775-team2-b2.AQICN.air_quality_data` LIMIT 10 | Query complete after 0.00s: 100%|██████████| 1/1 [00:00<00:00, 485.85query/s]
Downloading: 100%|██████████| 10/10 [00:01<00:00, 6.05rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
--- II. What are the most prevalent pollutants?This question focuses on the prevalence of the pollutants. From the dataset, the prevalence can be defined geographically from the cities and countries that had recorded the parameters detected times. To find the prevalence, our team selected the parameters from situations... | %%bigquery
SELECT
Parameter,COUNT(distinct(City)) AS number_of_city,
COUNT(distinct(Country)) AS number_of_country,string_agg(distinct(Country)) AS list_country
FROM `ba775-team2-b2.AQICN.air_quality_data`
GROUP BY Parameter
ORDER BY number_of_city DESC | Query complete after 0.00s: 100%|██████████| 1/1 [00:00<00:00, 918.39query/s]
Downloading: 100%|██████████| 20/20 [00:01<00:00, 13.28rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
From the result, the top 6 parameters are meteorological parameters, and the most prevalent air pollutant (which can be harmful to public health and the environment) is PM2.5, followed by NO2 and PM10. PM2.5 has been detected in 548 cities and 92 countries. NO2 has been detected in 528 cities and 64 countries. PM10 has been det... | %%bigquery
SELECT Date, Country, City, lat as Latitude, lon as Longitude, pop as Population, Parameter as Pollutant, median as Pollutant_level
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE (extract(year from date) = 2019 OR extract(year from date) = 2020) AND parameter IN ('co', 'o3','no2','so2','pm10',
'pm25')
OR... | Query complete after 0.00s: 100%|██████████| 2/2 [00:00<00:00, 1175.70query/s]
Downloading: 100%|██████████| 1968194/1968194 [00:02<00:00, 804214.66rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
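The year split described above can be mirrored in pandas once the query result is downloaded; the frame below uses made-up rows with the column aliases from the query purely to show the filtering step.

```python
import pandas as pd

# Toy rows standing in for the downloaded query result (column aliases assumed)
df = pd.DataFrame({
    "Date": pd.to_datetime(["2019-03-01", "2020-03-01", "2020-07-15"]),
    "Pollutant": ["pm25", "pm25", "no2"],
    "Pollutant_level": [80.0, 55.0, 30.0],
})

# Split into 2019 and 2020 subsets by the year component of the date
df_2019 = df[df["Date"].dt.year == 2019]
df_2020 = df[df["Date"].dt.year == 2020]
```

The `.dt.year` accessor plays the same role as `EXTRACT(year FROM date)` in the SQL above.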
As we can see after filtering the tables for only the air pollutants we have 1.9 million rows. From here we split the data into 2019 data and 2020 data. 2. Monthly Air Pollutant Data from 2019 | %%bigquery
SELECT extract(month from date) Month, Parameter as Pollutant,Round(avg(median),2) as Avg_Pollutant_Level_2019
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE extract(year from date) = 2019 AND parameter IN ('co')
GROUP BY Month, Parameter
UNION ALL
SELECT extract(month from date) Month, Parameter ,Rou... | Query complete after 0.00s: 100%|██████████| 8/8 [00:00<00:00, 4930.85query/s]
Downloading: 100%|██████████| 72/72 [00:01<00:00, 64.20rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
This query represents the average pollutant level for each air pollutant globally for each month. We do this again for the 2020 data. 3. Monthly Air Pollutant Data from 2020 | %%bigquery
SELECT extract(month from date) Month, Parameter as Pollutant,Round(avg(median),2) as Avg_Pollutant_Level_2020
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE extract(year from date) = 2020 AND parameter IN ('co')
GROUP BY Month, Parameter
UNION ALL
SELECT extract(month from date) Month, Parameter ,Rou... | Query complete after 0.00s: 100%|██████████| 8/8 [00:00<00:00, 4959.27query/s]
Downloading: 100%|██████████| 72/72 [00:01<00:00, 51.38rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
When comparing the data, there isn't a noticeable difference in global pollutant levels from 2019 to 2020, which suggests that changes in pollutant levels are regional rather than global. It might also mean that whatever effects COVID-19 cases and lockdowns have are short-term enough that the a... | %%bigquery
CREATE OR REPLACE TABLE AQICN.pollutant_diff_daily_aqi_less_than_500
AS
(
SELECT A.Date AS Date_2020,B.Date AS Date_2019,A.Country,A.City,A.lat,A.lon,A.Parameter,A.pop,A.median AS aqi_2020,B.median AS aqi_2019,(A.median-B.median) AS aqi_diff, ROUND((A.median-B.median)/B.median*100,2) AS aqi_percent_diff
FRO... | Query complete after 0.00s: 100%|██████████| 1/1 [00:00<00:00, 368.89query/s]
Downloading: 100%|██████████| 10/10 [00:01<00:00, 6.39rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
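The join arithmetic in the query above (2020 median minus 2019 median, and the percent change relative to 2019) can be sketched in pandas; the cities and AQI values below are made up purely to show the calculation.

```python
import pandas as pd

# Hypothetical daily medians for the same cities/pollutant in 2020 and 2019
aqi_2020 = pd.DataFrame({"City": ["Boston", "Delhi"],
                         "Parameter": ["pm25", "pm25"],
                         "aqi_2020": [40.0, 180.0]})
aqi_2019 = pd.DataFrame({"City": ["Boston", "Delhi"],
                         "Parameter": ["pm25", "pm25"],
                         "aqi_2019": [50.0, 200.0]})

# Inner join on city and pollutant, like the SQL self-join on A and B
merged = aqi_2020.merge(aqi_2019, on=["City", "Parameter"])
merged["aqi_diff"] = merged["aqi_2020"] - merged["aqi_2019"]
merged["aqi_percent_diff"] = (merged["aqi_diff"] / merged["aqi_2019"] * 100).round(2)
```

A negative `aqi_percent_diff` means air quality improved in 2020 relative to the same date in 2019.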
Order by monthly average percent AQI difference to find the 10 cities with the largest air quality index reductions for each pollutant | %%bigquery
CREATE OR REPLACE TABLE AQICN.top_10_cites_most_pollutant_percent_diff_monthly
AS
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'co'
ORDER BY aqi_percent_diff_monthly
LIMIT 10)
UNION ALL
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'o3'
ORDER BY aqi_percent_diff_m... | Query complete after 0.00s: 100%|██████████| 1/1 [00:00<00:00, 494.55query/s]
Downloading: 100%|██████████| 10/10 [00:01<00:00, 5.25rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
Order by monthly average AQI difference to find the 10 cities with the largest air quality index reductions for each pollutant | %%bigquery
CREATE OR REPLACE TABLE AQICN.top_10_cites_most_pollutant_diff_monthly
AS
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'pm25'
ORDER BY aqi_diff_monthly
LIMIT 10)
UNION ALL
(SELECT *
FROM AQICN.pollutant_diff_monthly_aqi
WHERE Parameter = 'o3'
ORDER BY aqi_diff_monthly
LIMIT 10)
UNION... | Query complete after 0.00s: 100%|██████████| 1/1 [00:00<00:00, 446.25query/s]
Downloading: 100%|██████████| 10/10 [00:01<00:00, 6.48rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
2. Cities with more than a 50 percent AQI decrease and a 50-point AQI decrease for each air pollutant. Reason: the higher the AQI, the unhealthier the air, especially for sensitive groups such as people with heart and lung disease, the elderly, and children. A major reduction or percent reduction in AQI over a long period of tim... | %%bigquery
SELECT City,Country,Parameter,COUNT(*) AS num_month_mt_50_per_decrease FROM AQICN.pollutant_diff_monthly_aqi
WHERE aqi_percent_diff_monthly < -50 AND aqi_diff_monthly < -50
GROUP BY City,Country,Parameter
ORDER BY Parameter,COUNT(*) DESC
LIMIT 10 | Query complete after 0.00s: 100%|██████████| 1/1 [00:00<00:00, 881.71query/s]
Downloading: 100%|██████████| 10/10 [00:01<00:00, 6.79rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
--- Results: During the pandemic, the cities seeing the greatest air quality improvements in terms of percent AQI differences for each pollutant are: CO: United States Portland, Chile Talca and Mexico Aguascalientes; NO2: Iran Qom, South Africa Middelburg and Philippines Butuan; SO2: Greece Athens, Mexico Mérida and Mexico San Luis ... | %%bigquery
select A.month,A.month_n, A.country,A.parameter,round((B.avg_median_month- A.avg_median_month),2) as diff_avg,
(B.avg_median_month - A.avg_median_month)/A.avg_median_month as diff_perc
from
(SELECT FORMAT_DATETIME("%B", date) month,EXTRACT(year FROM date) year, EXTRACT(month FROM date) month_n, country,para... | Query complete after 0.00s: 100%|██████████| 3/3 [00:00<00:00, 1451.15query/s]
Downloading: 100%|██████████| 2789/2789 [00:01<00:00, 1960.44rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
Using BigQuery ML to fit a linear regression between diff_avg for each parameter and confirmed cases (the example below uses parameter = 'co'; x = confirmed; y = diff_avg, the AQI change) | %%bigquery
CREATE OR REPLACE MODEL `all_para_20_19.all_para_20_19_diff_covid_model`
# Specify options
OPTIONS
(model_type='linear_reg',
input_label_cols=['diff_avg']) AS
# Provide training data
SELECT
confirmed,
diff_avg
FROM
`all_para_20_19.all_para_20_19_diff_covid`
WHERE
parameter = 'co'
and diff_avg is no... | _____no_output_____ | MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
Evaluating the model to find the r2_score for each linear regression of monthly average air pollutant AQI changes vs. monthly confirmed new cases. The example below evaluates the country-level monthly average CO AQI vs. monthly new confirmed COVID cases model: | %%bigquery
SELECT * FROM
ML.EVALUATE(
MODEL `all_para_20_19.all_para_20_19_diff_covid_model`, # Model name
# Table to evaluate against
(SELECT
confirmed,
diff_avg
FROM
`all_para_20_19.all_para_20_19_diff_covid`
WHERE
parameter = 'co'
and diff_avg is not null
)
) | Query complete after 0.00s: 100%|██████████| 3/3 [00:00<00:00, 1508.20query/s]
Downloading: 100%|██████████| 1/1 [00:01<00:00, 1.86s/rows]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
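As a rough local analogue of the `linear_reg` model and the r2_score that ML.EVALUATE reports, an ordinary least squares fit and the R² formula can be sketched with numpy; the (confirmed, diff_avg) pairs below are synthetic, not values from the project's tables.

```python
import numpy as np

# Synthetic (monthly confirmed cases, monthly average AQI change) pairs
confirmed = np.array([100.0, 500.0, 1000.0, 2000.0, 4000.0])
diff_avg = np.array([-1.0, -2.2, -3.9, -8.1, -16.0])

# Ordinary least squares: the same model family as BigQuery ML's 'linear_reg'
slope, intercept = np.polyfit(confirmed, diff_avg, 1)
pred = slope * confirmed + intercept

# r2_score = 1 - SS_res / SS_tot, the goodness-of-fit metric ML.EVALUATE returns
ss_res = np.sum((diff_avg - pred) ** 2)
ss_tot = np.sum((diff_avg - diff_avg.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

With these made-up points the slope is negative (more cases, larger AQI drop) and R² is close to 1 because the data is nearly linear.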
Evaluation for country level monthly average PM2.5 AQI changes vs monthly new confirmed COVID cases model: Evaluation for country level monthly average NO2 AQI changes vs monthly new confirmed COVID cases model: Evaluation for country level monthly average O3 AQI changes vs monthly new confirmed COVID cases model: ... | %%bigquery
SELECT country, date, parameter, AVG(count) AS air_quality
FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE date BETWEEN '2020-03-01' AND '2020-06-30'
AND country in ('US','IT','AU','NZ','TW')
GROUP BY country, parameter, date
ORDER BY date
%%bigquery
SELECT country, date, parameter, AVG(count) AS air_qual... | Query complete after 0.00s: 100%|██████████| 1/1 [00:00<00:00, 1015.32query/s]
Downloading: 100%|██████████| 900/900 [00:01<00:00, 558.11rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
--- VII: How did Air Quality change in countries with low COVID-19 cases (NZ, AUS, TW) and high COVID-19 cases (US, IT, CN)? This question was answered by creating separate tables that encompassed the equivalent lockdown periods per country for 2019. Then, the two tables were joined using the parameter and grouped accor... | %%bigquery
CREATE OR REPLACE TABLE AQICN.air_quality2019_Italy AS
SELECT country, parameter, median AS air_quality2019 FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE date BETWEEN '2019-03-09' AND '2019-05-18'
AND country = 'IT'
%%bigquery
SELECT a2020.country, a2020.parameter, AVG(a2020.median) AS air_q... | Query complete after 0.00s: 100%|██████████| 4/4 [00:00<00:00, 2174.90query/s]
Downloading: 100%|██████████| 6/6 [00:01<00:00, 3.91rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
Here we can see that the only pollutant that decreased during the 2020 lockdown in Italy, compared to the respective time period in 2019, was NO2, which decreased by 35.74%. | %%bigquery
CREATE OR REPLACE TABLE AQICN.air_quality2019_US AS
SELECT country, parameter, median AS air_quality2019 FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE date BETWEEN '2019-03-19' AND '2019-04-07'
AND country = 'US'
%%bigquery
SELECT a2020.country, a2020.parameter, AVG(a2020.median) AS air_qual... | Query complete after 0.00s: 100%|██████████| 4/4 [00:00<00:00, 2138.31query/s]
Downloading: 100%|██████████| 6/6 [00:01<00:00, 3.36rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
In the United States, all the pollutants decreased in 2020 compared to 2019. The largest changes occurred in O3, NO2 and SO2, which decreased by 36.69%, 30.22%, and 27.10% respectively. This indicates that the lockdowns during the COVID-19 pandemic may have positively affected the emission of pollutants in the United S... | %%bigquery
CREATE OR REPLACE TABLE AQICN.air_quality2019_China AS
SELECT country, parameter, median AS air_quality2019 FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE date BETWEEN '2019-01-23' AND '2019-04-08'
AND country = 'CN'
%%bigquery
SELECT a2020.country, a2020.parameter, AVG(a2020.median) AS air_q... | Query complete after 0.00s: 100%|██████████| 4/4 [00:00<00:00, 1981.72query/s]
Downloading: 100%|██████████| 6/6 [00:01<00:00, 4.01rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
In China, most pollutants decreased in 2020 compared to the same period in 2019. The largest change was in NO2 which decreased by 30.88% compared to the previous year. 2. Countries with low COVID cases | %%bigquery
CREATE OR REPLACE TABLE AQICN.air_quality2019_Taiwan AS
SELECT country, parameter, median AS air_quality2019 FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE EXTRACT(month FROM date) = 07
AND EXTRACT(year FROM date) = 2019
AND country = 'TW'
%%bigquery
SELECT a2020.country, a2020.parameter,... | Query complete after 0.00s: 100%|██████████| 4/4 [00:00<00:00, 1830.37query/s]
Downloading: 100%|██████████| 6/6 [00:01<00:00, 4.02rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
Taiwan, which did not experience lockdowns due to COVID-19, also shows a decrease in all pollutant levels. This contradicts our initial hypothesis that countries that experienced more COVID-19, and therefore more lockdowns, would have better air quality. | %%bigquery
CREATE OR REPLACE TABLE AQICN.air_quality2019_AUS AS
SELECT country, parameter, median AS air_quality2019 FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE date BETWEEN '2019-03-25' AND '2019-05-31'
AND country = 'NZ'
%%bigquery
SELECT a2020.country, a2020.parameter, AVG(a2020.median) AS air_qua... | Query complete after 0.00s: 100%|██████████| 4/4 [00:00<00:00, 2199.14query/s]
Downloading: 100%|██████████| 5/5 [00:01<00:00, 3.27rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
New Zealand also shows a decrease in all pollutant levels. Nevertheless, New Zealand did go into lockdown for a period and these numbers may reflect the lessened activity due to COVID-19 during that time compared to the equivalent in 2019. | %%bigquery
CREATE OR REPLACE TABLE AQICN.air_quality2019_AUS AS
SELECT country, parameter, median AS air_quality2019 FROM `ba775-team2-b2.AQICN.air_quality_data`
WHERE date BETWEEN '2019-03-18' AND '2019-05-31'
AND country = 'AU'
%%bigquery
SELECT a2020.country, a2020.parameter, AVG(a2020.median) AS air_qua... | Query complete after 0.00s: 100%|██████████| 4/4 [00:00<00:00, 2131.79query/s]
Downloading: 100%|██████████| 6/6 [00:01<00:00, 4.52rows/s]
| MIT | docs/team-projects/Summer-2021/B2-Team2-Analyzing-Air-Quality-During-COVID19-Pandemic.ipynb | JQmiracle/Business-Analytics-Toolbox |
Basics- All values of a categorical variable are either in `categories` or `np.nan`.- Order is defined by the order of `categories`, not the lexical order of the values.- Internally, the data structure consists of a `categories` array and an integer array of `codes`, which point to the values in the `categories` arra... | titanic = sns.load_dataset("titanic")
titanic.head(2) | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
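A minimal illustration of those three points — values restricted to the categories, order taken from `categories` rather than lexical order, and an integer `codes` array pointing into the `categories` array:

```python
import pandas as pd

# Explicit, non-alphabetical category order
s = pd.Series(
    pd.Categorical(["b", "a", "c", "a"], categories=["c", "b", "a"], ordered=True)
)

print(s.cat.categories.tolist())  # the categories array: ['c', 'b', 'a']
print(s.cat.codes.tolist())       # positions in the categories array: [1, 2, 0, 2]
print(s.sort_values().tolist())   # category order, not lexical: ['c', 'b', 'a', 'a']
```

Sorting follows the declared category order, so `'c'` comes first even though it is last alphabetically.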
Operations I frequently use Renaming categories | titanic["class"].cat.rename_categories(str.upper)[:2] | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Appending new categories | titanic["class"].cat.add_categories(["Fourth"]).cat.categories | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Removing categories | titanic["class"].cat.remove_categories(["Third"]).cat.categories | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Remove unused categories | titanic_small = titanic.iloc[:2]
titanic_small
titanic_small["class"].cat.remove_unused_categories().cat.categories | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Remove and add categories simultaneously | titanic["class"].value_counts(dropna=False)
titanic["class"].cat.set_categories(["First", "Third", "Fourth"]).value_counts(
dropna=False
) | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Using string and datetime accessorsThis works as expected, and if the number of distinct categories is small relative to the number of rows, then operating on the categories is faster (because under the hood, pandas applies the change to `categories` and constructs a new series (see [here](https://pandas.pydata.org/pa... | cat_class = titanic["class"]
%timeit cat_class.str.contains('d')
str_class = titanic["class"].astype("object")
%timeit str_class.str.contains('d') | 149 µs ± 7.84 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
398 µs ± 16.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
| MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Object creation Convert *sex* and *class* to the same categorical type, with categories being the union of all unique values of both columns. | cols = ["sex", "who"]
unique_values = np.unique(titanic[cols].to_numpy().ravel())
categories = pd.CategoricalDtype(categories=unique_values)
titanic[cols] = titanic[cols].astype(categories)
print(titanic.sex.cat.categories)
print(titanic.who.cat.categories)
# restore sex and who to object types
titanic[cols] = titanic[... | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Custom order | df = pd.DataFrame({"quality": ["good", "excellent", "very good"]})
df.sort_values("quality")
ordered_quality = pd.CategoricalDtype(["good", "very good", "excellent"], ordered=True)
df.quality = df.quality.astype(ordered_quality)
df.sort_values("quality") | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Unique values | small_titanic = titanic.iloc[:2]
small_titanic | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
`Series.unique` returns values in order of appearance, and only returns values that are present in the data. | small_titanic["class"].unique() | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
`Series.cat.categories` returns all category values. | small_titanic["class"].cat.categories | _____no_output_____ | MIT | content/post/pandas-categories/pandas-categories.ipynb | fabiangunzinger/wowchemy |
Machine Learning 1 Some Concepts | Mean Absolute Error (MAE) is the mean of the absolute value of the errors
Mean Squared Error (MSE) is the mean of the squared errors:
Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors
Comparing these metrics:
MAE is the easiest to understand because it's the average error.
MSE is mor... | _____no_output_____ | BSD-2-Clause | Learning Notes/Learning Notes ML - 1 Basics.ipynb | k21k/Python-Notes |
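These three metrics can be computed directly; the y values below are made up, and RMSE is just the square root of MSE:

```python
import numpy as np

# Made-up true and predicted values
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

errors = y_true - y_pred
mae = np.mean(np.abs(errors))   # average absolute error: easiest to interpret
mse = np.mean(errors ** 2)      # squared errors punish large mistakes more
rmse = np.sqrt(mse)             # back in the same units as y
```

`sklearn.metrics.mean_absolute_error` and `mean_squared_error` give the same numbers.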
Predictions | # Lets say that the model inputs are
X = df[['Weight', 'Volume']]
y = df['CO2']
regr = linear_model.LinearRegression()
regr.fit(X, y)
# Simply do that for predicting the CO2 emission of a car where the weight is 2300kg, and the volume is 1300ccm:
predictedCO2 = regr.predict([[2300, 1300]])
print(predictedCO2)
| _____no_output_____ | BSD-2-Clause | Learning Notes/Learning Notes ML - 1 Basics.ipynb | k21k/Python-Notes |
OLS Regression | # https://docs.w3cub.com/statsmodels/generated/statsmodels.regression.linear_model.ols.fit_regularized/
est=sm.OLS(y, X)
est = est.fit()
est.summary() | _____no_output_____ | BSD-2-Clause | Learning Notes/Learning Notes ML - 1 Basics.ipynb | k21k/Python-Notes |
Plotting Errors | # provided that y_test and y_pred have been called (example below)
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
# y_pred = linreg.predict(X_test)
sns.distplot((y_test-y_pred),bins=50) | _____no_output_____ | BSD-2-Clause | Learning Notes/Learning Notes ML - 1 Basics.ipynb | k21k/Python-Notes |
Interpretation of Outputs Multiple Linear Regression | Almost all the real-world problems that you are going to encounter will have more than two variables.
Linear regression involving multiple variables is called "multiple linear regression" or multivariate linear regression.
The steps to perform multiple linear regression are almost the same as those of simple linear reg... | _____no_output_____ | BSD-2-Clause | Learning Notes/Learning Notes ML - 1 Basics.ipynb | k21k/Python-Notes |
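Under the hood, a multivariate fit is still ordinary least squares over a design matrix with one column per predictor plus an intercept column; the data below is synthetic and follows y = 1 + 2*x1 + 3*x2 exactly, so the fit recovers those coefficients.

```python
import numpy as np

# Two made-up predictors (think weight and volume) and a response
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([9.0, 8.0, 19.0, 18.0])  # exactly 1 + 2*x1 + 3*x2

# Prepend a column of ones for the intercept, then solve least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, b1, b2 = coef
```

`sklearn.linear_model.LinearRegression().fit(X, y)` performs the same computation and exposes the result as `intercept_` and `coef_`.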
Tensorflow Timeline Analysis on Model Zoo Benchmark between Intel optimized and stock TensorflowThis jupyter notebook will help you evaluate performance benefits from Intel-optimized Tensorflow on the level of Tensorflow operations via several pre-trained models from Intel Model Zoo. The notebook will show users a ba... | from profiling.profile_utils import PlatformUtils
plat_utils = PlatformUtils()
plat_utils.dump_platform_info() | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Section 1: TensorFlow Timeline Analysis Prerequisites | !pip install cxxfilt
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import pandas as pd
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1500) | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
List out the Timeline folders First, list out all Timeline folders from previous runs. | import os
filenames= os.listdir (".")
result = []
keyword = "Timeline"
for filename in filenames:
if os.path.isdir(os.path.join(os.path.abspath("."), filename)):
if filename.find(keyword) != -1:
result.append(filename)
result.sort()
index =0
for folder in result:
print(" %d : %s " %... | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Select a Timeline folder from previous runs ACTION: Please select one Timeline folder and change FdIndex accordingly | FdIndex = 3 | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
List out all Timeline json files inside Timeline folder. | import os
TimelineFd = result[FdIndex]
print(TimelineFd)
datafiles = [TimelineFd +os.sep+ x for x in os.listdir(TimelineFd) if '.json' == x[-5:]]
print(datafiles)
if len(datafiles) == 0:
print("ERROR! No json file in the selected folder. Please select another folder.")
elif len(datafiles) == 1:
print("WARNING! Th... | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
> **Users can bypass below Section 1.1 and analyze performance among Stock and Intel TF by clicking the link : [Section 1_2](section_1_2).** Section 1.1: Performance Analysis for one TF Timeline result Step 1: Pick one of the Timeline files List out all the Timeline files first | index = 0
for file in datafiles:
print(" %d : %s " %(index, file))
index+=1 | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
ACTION: Please select one timeline json file and change file_index accordingly | ## USER INPUT
file_index=0
fn = datafiles[file_index]
tfile_prefix = fn.split('_')[0]
tfile_postfix = fn.strip(tfile_prefix)[1:]
fn | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 2: Parse timeline into pandas format | from profiling.profile_utils import TFTimelinePresenter
tfp = TFTimelinePresenter(True)
timeline_pd = tfp.postprocess_timeline(tfp.read_timeline(fn))
timeline_pd = timeline_pd[timeline_pd['ph'] == 'X'] | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 3: Sum up the elapsed time of each TF operation | tfp.get_tf_ops_time(timeline_pd,fn,tfile_prefix) | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 4: Draw a bar chart for elapsed time of TF ops | filename= tfile_prefix +'_tf_op_duration_bar.png'
title_=tfile_prefix +'TF : op duration bar chart'
ax=tfp.summarize_barh(timeline_pd, 'arg_op', title=title_, topk=50, logx=True, figsize=(10,10))
tfp.show(ax,'bar') | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 5: Draw a pie chart for total time percentage of TF ops | filename= tfile_prefix +'_tf_op_duration_pie.png'
title_=tfile_prefix +'TF : op duration pie chart'
timeline_pd_known = timeline_pd[ ~timeline_pd['arg_op'].str.contains('unknown') ]
ax=tfp.summarize_pie(timeline_pd_known, 'arg_op', title=title_, topk=50, logx=True, figsize=(10,10))
tfp.show(ax,'pie')
ax.figure.savefig(... | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Section 1.2: Analyze TF Timeline results between Stock and Intel Tensorflow Speedup from MKL-DNN among different TF operations Step 1: Select one Intel and one Stock TF timeline files for analysis List out all timeline files in the selected folder | if len(datafiles) is 1:
print("ERROR! There is only 1 json file in the selected folder.")
print("Please select other Timeline folder from beginnning to proceed Section 1.2.")
for i in range(len(datafiles)):
print(" %d : %s " %(i, datafiles[i])) | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
ACTION: Please select one timeline file as a performance baseline and the other as a comparison target, and set the related index for each selected timeline file. In general, please use stock_timeline_xxxxx as the baseline. | # performance baseline
Baseline_Index=1
# comparison target
Comparison_Index=0 | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
List out two selected timeline files | selected_datafiles = []
selected_datafiles.append(datafiles[Baseline_Index])
selected_datafiles.append(datafiles[Comparison_Index])
print(selected_datafiles) | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 2: Parsing timeline results into CSV files | %matplotlib agg
from profiling.profile_utils import TFTimelinePresenter
csvfiles=[]
tfp = TFTimelinePresenter(True)
for fn in selected_datafiles:
if fn.find('/'):
fn_nofd=fn.split('/')[1]
else:
fn_nofd=fn
tfile_name= fn_nofd.split('.')[0]
tfile_prefix = fn_nofd.split('_')[0]
tfile_p... | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 3: Pre-processing for the two CSV files | import os
import pandas as pd
csvarray=[]
for csvf in csvfiles:
print("read into pandas :",csvf)
a = pd.read_csv(csvf)
csvarray.append(a)
a = csvarray[0]
b = csvarray[1] | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 4: Merge two CSV files and calculate the speedup accordingly | import os
import pandas as pd
fdir='merged'
if not os.path.exists(fdir):
os.mkdir(fdir)
fpath=fdir+os.sep+'merged.csv'
merged=tfp.merge_two_csv_files(fpath,a,b)
merged | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 5: Draw a bar chart for elapsed time of TF ops among stock TF and Intel TF | %matplotlib inline
print(fpath)
tfp.plot_compare_bar_charts(fpath)
tfp.plot_compare_ratio_bar_charts(fpath, tags=['','oneDNN ops']) | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
Step 6: Draw pie charts for elapsed time of TF ops among stock TF and Intel TF | tfp.plot_compare_pie_charts(fpath) | _____no_output_____ | Apache-2.0 | docs/notebooks/perf_analysis/benchmark_perf_timeline_analysis.ipynb | yinghu5/models |
diagrams.generic.network.Firewalldiagrams.generic.network.Routerdiagrams.generic.network.Subnetdiagrams.generic.network.Switchdiagrams.generic.network.VPNdiagrams.generic.virtualization.Virtualboxdiagrams.generic.os.Windows | from diagrams import Cluster, Diagram
from diagrams.generic.network import Firewall
from diagrams.generic.network import Router
from diagrams.generic.network import Subnet
from diagrams.generic.network import Switch
from diagrams.generic.virtualization import Virtualbox
from diagrams.generic.os import Windows
graph_att... | MyLab.jpg Untitled.ipynb
| MIT | sec04-1_LabIntro/My_Lab_Diagram.ipynb | codered-by-ec-council/Network-Automation-in-Python |
The data is from a number of patients. The first 12 columns (age, an, ..., time) are features that should be used to predict the outcome in the last column (DEATH_EVENT). | # Loading some functionality you might find useful. You might want other than this...
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import metrics
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from pandas.plotting import s... | precision recall f1-score support
0 0.77 0.94 0.85 35
1 0.88 0.60 0.71 25
accuracy 0.80 60
macro avg 0.82 0.77 0.78 60
weighted avg 0.82 0.80 0.79 ... | MIT | SupervisedLearning/Problem2_SupervisedLearning_Gaiceanu.ipynb | TheodoraG/FRTN65 |
Death vs time: The boxplot below illustrates the relationship between death and the length of time between when the measurements were taken and the follow-up event, when the patient's health was checked (female=blue, male=orange). Note that a short follow-up time is strongly related to a high probability of death, for both se... | fig, ax = plt.subplots(figsize = (8, 8))
survive = data.loc[(data.DEATH_EVENT == 0)].time
death = data.loc[(data.DEATH_EVENT == 1)].time
print('time_survived = {:.1f}'.format(survive.mean()))
print('time_dead = {:.1f}'.format(death.mean()))
sns.boxplot(data = data, x = 'DEATH_EVENT', y = 'time', hue = 'sex', width = 0... | Training score: 0.7364016736401674
Test score: 0.6166666666666667
| MIT | SupervisedLearning/Problem2_SupervisedLearning_Gaiceanu.ipynb | TheodoraG/FRTN65 |
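The survive/death follow-up means printed above come from a simple groupby; a minimal sketch with hypothetical values (not the real patient data):

```python
import pandas as pd

# Toy stand-in for the heart-failure frame; the numbers are hypothetical
data = pd.DataFrame({
    "time":        [4, 8, 120, 200, 30, 250],   # follow-up time in days
    "DEATH_EVENT": [1, 1,   0,   0,  1,   0],   # 1 = died, 0 = survived
})

# Mean follow-up time per outcome, mirroring time_survived / time_dead above
means = data.groupby("DEATH_EVENT")["time"].mean()
print("time_survived = {:.1f}".format(means[0]))  # 190.0
print("time_dead = {:.1f}".format(means[1]))      # 14.0
```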
Load Data | print(df.shape)
df.head()
df.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 1083 entries, 0 to 1082
Data columns (total 18 columns):
label 1083 non-null int64
artist 1083 non-null object
album 1083 non-null object
genre 1083 non-null object
single_count 1083 non-null int64
freq_billboard 10... | MIT | modeling/modeling-1-KNN-SGD.ipynb | lucaseo/content-worth-debut-artist-classification-project |
**Note** - Because the online-media article counts and some other fields contain Null values, they cannot immediately be used to train a Decision Tree, so they were excluded from the features. Data Preparation for Modeling Genres `hiphop`, `R&B`, `Soul`, `Funk`, `Pop` | df = pd.get_dummies(df, columns=['genre'])
df.columns | _____no_output_____ | MIT | modeling/modeling-1-KNN-SGD.ipynb | lucaseo/content-worth-debut-artist-classification-project |
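The effect of `pd.get_dummies` on the genre column can be seen on a toy frame (hypothetical data, not the project's dataframe):

```python
import pandas as pd

# Hypothetical mini-frame standing in for the artist dataframe
df = pd.DataFrame({"artist": ["a", "b", "c"],
                   "genre":  ["hiphop", "rnb", "hiphop"]})

# One-hot encode genre into genre_* indicator columns
df = pd.get_dummies(df, columns=["genre"])
print(df.columns.tolist())  # ['artist', 'genre_hiphop', 'genre_rnb']
```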
Split train & test data | feature_names = ['single_count', 'freq_billboard',
'freq_genius', 'freq_theSource', 'freq_xxl',
'twitter', 'instagram', 'facebook',
'spotify', 'soundcloud', 'youtube',
'genre_funk', 'genre_hiphop', 'genre_pop', 'genre_rnb', 'genre_soul']
dfX = df[featu... | _____no_output_____ | MIT | modeling/modeling-1-KNN-SGD.ipynb | lucaseo/content-worth-debut-artist-classification-project |
KNN | from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=10).fit(X_train, y_train)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, model.predict(X_test))
from sklearn.metrics import classification_report
print(classification_report(y_test, model.predict(X_test)))... | _____no_output_____ | MIT | modeling/modeling-1-KNN-SGD.ipynb | lucaseo/content-worth-debut-artist-classification-project |
SGD | from sklearn.linear_model import SGDClassifier
model_SGD = SGDClassifier(random_state=0).fit(X_train, y_train)
confusion_matrix(y_train, model_SGD.predict(X_train))
confusion_matrix(y_test, model_SGD.predict(X_test))
print(classification_report(y_test, model_SGD.predict(X_test)))
fpr, tpr, thresholds = roc_curve(y_test... | _____no_output_____ | MIT | modeling/modeling-1-KNN-SGD.ipynb | lucaseo/content-worth-debut-artist-classification-project |
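The last line of the cell above is truncated mid-call; a self-contained sketch of `roc_curve` on synthetic labels and scores (not the project's data):

```python
from sklearn.metrics import roc_curve, auc

# Synthetic labels and decision scores (hypothetical)
y_true = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]

# False/true positive rates at each score threshold
fpr, tpr, thresholds = roc_curve(y_true, scores)
print("AUC = {:.2f}".format(auc(fpr, tpr)))  # AUC = 0.75
```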
AmsterdamUMCdb - Freely Accessible ICU Database, version 1.0.2, March 2020. Copyright © 2003-2020 Amsterdam UMC - Amsterdam Medical Data Science. Vasopressors and inotropes: shows medication for artificially increasing blood pressure (vasopressors) or stimulating heart function (inotropes), if any, a patient received.... | %matplotlib inline
import amsterdamumcdb
import psycopg2
import pandas as pd
import numpy as np
import re
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import io
from IPython.display import display, HTML, Markdown | _____no_output_____ | MIT | concepts/lifesupport/vasopressors_inotropes.ipynb | AKI-Group-ukmuenster/AmsterdamUMCdb |
Display settings | #matplotlib settings for image size
#needs to be in a different cell from %matplotlib inline
plt.style.use('seaborn-darkgrid')
plt.rcParams["figure.dpi"] = 288
plt.rcParams["figure.figsize"] = [16, 12]
plt.rcParams["font.size"] = 12
pd.options.display.max_columns = None
pd.options.display.max_rows = None
pd.options.di... | _____no_output_____ | MIT | concepts/lifesupport/vasopressors_inotropes.ipynb | AKI-Group-ukmuenster/AmsterdamUMCdb |
Connection settings | #Modify config.ini in the root folder of the repository to change the settings to connect to your postgreSQL database
import configparser
import os
config = configparser.ConfigParser()
if os.path.isfile('../../config.ini'):
config.read('../../config.ini')
else:
config.read('../../config.SAMPLE.ini')
#Open a c... | _____no_output_____ | MIT | concepts/lifesupport/vasopressors_inotropes.ipynb | AKI-Group-ukmuenster/AmsterdamUMCdb |
Vasopressors and inotropesfrom drugitems | sql_vaso_ino = """
WITH vasopressor_inotropes AS (
SELECT
admissionid,
CASE
WHEN COUNT(*) > 0 THEN TRUE
ELSE FALSE
END AS vasopressors_inotropes_bool,
STRING_AGG(DISTINCT item, '; ') AS vasopressors_inotropes_given
FROM drugitems
WHERE
orderc... | _____no_output_____ | MIT | concepts/lifesupport/vasopressors_inotropes.ipynb | AKI-Group-ukmuenster/AmsterdamUMCdb |
#@title Calculation of mass transfer and hydrate inhibition of a wet gas by injection of methanol
#@markdown Demonstration of mass transfer calculation using the NeqSim software in Python
#@markdown <br><br>This document is part of the module ["Introduction to Gas Processing using NeqSim in Colab"](https://colab.resear... | _____no_output_____ | Apache-2.0 | notebooks/process/masstransferMeOH.ipynb | EvenSol/NeqSim-Colab | |
Mass transfer calculations. Model for mass transfer calculation in NeqSim based on Solbraa (2002): https://ntnuopen.ntnu.no/ntnu-xmlui/handle/11250/231326 In the following calculations we assume a water-saturated gas that is mixed with pure liquid methanol. These phases are not in equilibrium when they enter the pipeline. W... | # Input parameters
pressure = 52.21 # bara
temperature = 15.2 #C
gasFlow = 1.23 #MSm3/day
methanolFlow = 6000.23 # kg/day
pipelength = 10.0 #meter
pipeInnerDiameter = 0.5 #meter
# Create a gas-condensate fluid
feedgas = {'ComponentName': ["nitrogen","CO2","methane", "ethane" , "propane", "i-butane", "n-butane", "wat... | Composition of gas leaving pipe section after 10.0 meter
total gas
nitrogen 1.11418E-2 1.11418E-2 [mole fraction]
CO2 1.08558E-2 1.08558E-2 [mole fraction]
methane 8.89321E-... | Apache-2.0 | notebooks/process/masstransferMeOH.ipynb | EvenSol/NeqSim-Colab |
Calculation of hydrate equilibrium temperature of gas leaving pipe section. In the following script we will simulate the composition of the gas leaving the pipe section as well as the hydrate equilibrium temperature of this gas as a function of pipe length. | maxpipelength = 10.0
def hydtemps(length):
pipeline.setLength(length)
run();
return gasFromScrubber.getHydrateEquilibriumTemperature()-273.15
length = np.arange(0.01, maxpipelength, (maxpipelength)/10.0)
hydtem = [hydtemps(length2) for length2 in length]
plt.figure()
plt.plot(length, hydtem)
plt.xlabel('... | _____no_output_____ | Apache-2.0 | notebooks/process/masstransferMeOH.ipynb | EvenSol/NeqSim-Colab |
Amazon Forecast: predicting time series at scale. Forecasting is used in a variety of applications and business use cases. For example, retailers need to forecast the sales of their products to decide how much stock they need by location, and manufacturers need to estimate the number of parts required at their factories to ... | %load_ext autoreload
%autoreload 2
from util.fcst_utils import *
import warnings
import boto3
import s3fs
plt.rcParams['figure.figsize'] = (15.0, 5.0)
warnings.filterwarnings('ignore') | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Although we have set the region to us-west-2 below, you can choose any of the six regions in which the service is available. | region = 'us-west-2'
bucket = 'bike-demo'
version = 'prod'
session = boto3.Session(region_name=region)
forecast = session.client(service_name='forecast')
forecast_query = session.client(service_name='forecastquery')
role_arn = get_or_create_role_arn() | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Overview. The above figure summarizes the key workflow of using Forecast. Step 1: Preparing the Datasets | bike_df = pd.read_csv("../data/train.csv", dtype = object)
bike_df.head()
bike_df['count'] = bike_df['count'].astype('float')
bike_df['workingday'] = bike_df['workingday'].astype('float') | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
We take about two and a half weeks of hourly data for demonstration, chosen so that there is no missing data in the whole range. | bike_df_small = bike_df[-2*7*24-24*3:]
bike_df_small['item_id'] = "bike_12" | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Let us plot the time series first. | bike_df_small.plot(x='datetime', y='count', figsize=(15, 8)) | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
We can see that the target time series seem to have a drop over weekends. Next let's plot both the target time series and the related time series that indicates whether today is a `workday` or not. More precisely, $r_t = 1$ if $t$ is a work day and 0 if not. | plt.figure(figsize=(15, 8))
ax = plt.gca()
bike_df_small.plot(x='datetime', y='count', ax=ax);
ax2 = ax.twinx()
bike_df_small.plot(x='datetime', y='workingday', color='red', ax=ax2); | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Notice that to use the related time series, we need to ensure that the related time series covers the whole target time series, as well as the future values as specified by the forecast horizon. More precisely, we need to make sure:```len(related time series) >= len(target time series) + forecast horizon```Basically, a... | target_df = bike_df_small[['item_id', 'datetime', 'count']][:-24]
rts_df = bike_df_small[['item_id', 'datetime', 'workingday']]
target_df.head(5) | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
As we can see, the length of the related time series is equal to the length of the target time series plus the forecast horizon. | print(len(target_df), len(rts_df))
assert len(target_df) + 24 == len(rts_df), "length doesn't match" | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
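The constraint len(related time series) >= len(target time series) + forecast horizon can be packaged as a small generic check; a sketch with a hypothetical helper name:

```python
import pandas as pd

def related_series_is_long_enough(target_len, related_len, horizon):
    """Forecast requires the related series to cover the target plus the horizon."""
    return related_len >= target_len + horizon

# Hourly toy indices: 48 target points and a 24-step horizon need >= 72 related rows
target_idx = pd.date_range("2021-01-01", periods=48, freq="H")
related_idx = pd.date_range("2021-01-01", periods=72, freq="H")
assert related_series_is_long_enough(len(target_idx), len(related_idx), horizon=24)
```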
Next we check whether there are "holes" in the related time series. | assert len(rts_df) == len(pd.date_range(
start=list(rts_df['datetime'])[0],
end=list(rts_df['datetime'])[-1],
freq='H'
)), "missing entries in the related time series" | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Everything looks fine, and we plot both time series again. As can be seen, the related time series (indicator of whether the current day is a workday or not) is longer than the target time series. The binary working-day indicator feature is a good example of a related time series, since it is known at all future ti... | plt.figure(figsize=(15, 10))
ax = plt.gca()
target_df.plot(x='datetime', y='count', ax=ax);
ax2 = ax.twinx()
rts_df.plot(x='datetime', y='workingday', color='red', ax=ax2);
target_df.to_csv("../data/bike_small.csv", index= False, header = False)
rts_df.to_csv("../data/bike_small_rts.csv", index= False, header = False)
... | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
If you don't have this bucket `amazon-forecast-data-{account_id}`, create it first on S3. | bucket_name = f"amazon-forecast-data-{account_id}"
key = "bike_small"
s3.upload_file(Filename="../data/bike_small.csv", Bucket = bucket_name, Key = f"{key}/bike.csv")
s3.upload_file(Filename="../data/bike_small_rts.csv", Bucket = bucket_name, Key = f"{key}/bike_rts.csv") | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 2. Importing the Data. Now we are ready to import the datasets into the Forecast service. Starting from the raw data, Amazon Forecast automatically extracts the dataset that is suitable for forecasting. As an example, a retailer normally records transaction records such as | project = "bike_rts_demo"
idx = 4
s3_data_path = f"s3://{bucket_name}/{key}" | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Below, we specify key input data and forecast parameters | freq = "H"
forecast_horizon = 24
timestamp_format = "yyyy-MM-dd HH:mm:ss"
delimiter = ',' | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
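From freq and forecast_horizon we can generate the future timestamps the related series must also cover; a sketch using a hypothetical last target timestamp:

```python
import pandas as pd

freq = "H"            # hourly, matching the dataset frequency above
forecast_horizon = 24

# Hypothetical last timestamp of the target series
last_target_ts = pd.Timestamp("2012-12-19 23:00:00")

# The next forecast_horizon periods after the target ends
future_index = pd.date_range(start=last_target_ts,
                             periods=forecast_horizon + 1, freq=freq)[1:]
print(future_index[0], future_index[-1])  # 2012-12-20 00:00:00 2012-12-20 23:00:00
```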
Step 2a. Creating a Dataset Group. First let's create a dataset group and then update it later to add our datasets. | dataset_group = f"{project}_gp_{idx}"
dataset_arns = []
create_dataset_group_response = forecast.create_dataset_group(Domain="RETAIL",
DatasetGroupName=dataset_group,
DatasetArns=dataset_arns)
logging.inf... | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 2b. Creating a Target Dataset. In this example, we will define a target time series. This is a required dataset to use the service. Below we specify the target time series name bike_rts_demo_ts_4. | ts_dataset_name = f"{project}_ts_{idx}"
print(ts_dataset_name) | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Next, we specify the schema of our dataset below. Make sure the order of the attributes (columns) matches the raw data in the files. We follow the same three-attribute format as the above example. | ts_schema_val = [{"AttributeName": "item_id", "AttributeType": "string"},
{"AttributeName": "timestamp", "AttributeType": "timestamp"},
{"AttributeName": "demand", "AttributeType": "float"}]
ts_schema = {"Attributes": ts_schema_val}
logging.info(f'Creating target dataset {ts_dataset_name}')
... | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 2c. Creating a Related Dataset. In this example, we will define a related time series. Specify the related time series name bike_rts_demo_rts_4. | rts_dataset_name = f"{project}_rts_{idx}"
print(rts_dataset_name) | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Specify the schema of your dataset here. Make sure the order of columns matches the raw data files. We follow the same three-column format as the above example. | rts_schema_val = [{"AttributeName": "item_id", "AttributeType": "string"},
{"AttributeName": "timestamp", "AttributeType": "timestamp"},
{"AttributeName": "price", "AttributeType": "float"}]
rts_schema = {"Attributes": rts_schema_val}
logging.info(f'Creating related dataset {rts_dataset_name... | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 2d. Updating the dataset group with the datasets we created. You can have multiple datasets under the same dataset group; update it with the datasets we created before. | dataset_arns = []
dataset_arns.append(ts_dataset_arn)
dataset_arns.append(rts_dataset_arn)
forecast.update_dataset_group(DatasetGroupArn=dataset_group_arn, DatasetArns=dataset_arns)
forecast.describe_dataset_group(DatasetGroupArn=dataset_group_arn) | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 2e. Creating a Target Time Series Dataset Import Job | ts_s3_data_path = f"{s3_data_path}/bike.csv"
ts_dataset_import_job_response = forecast.create_dataset_import_job(DatasetImportJobName=dataset_group,
DatasetArn=ts_dataset_arn,
DataSource= {
... | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 2f. Creating a Related Time Series Dataset Import Job | rts_s3_data_path = f"{s3_data_path}/bike_rts.csv"
rts_dataset_import_job_response = forecast.create_dataset_import_job(DatasetImportJobName=dataset_group,
DatasetArn=rts_dataset_arn,
DataSource= {
... | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 3. Choosing an algorithm and evaluating its performance. Once the datasets are specified with the corresponding schema, Amazon Forecast will automatically aggregate all the relevant pieces of information for each item, such as sales, price, promotions, as well as categorical attributes, and generate the desired dat... | algorithm_arn = 'arn:aws:forecast:::algorithm/'
Step 3a. Choosing DeepAR+ | algorithm = 'Deep_AR_Plus'
algorithm_arn_deep_ar_plus = algorithm_arn + algorithm
predictor_name_deep_ar = f'{project}_{algorithm.lower()}_{idx}'
logging.info(f'[{predictor_name_deep_ar}] Creating predictor {predictor_name_deep_ar} ...')
create_predictor_response = forecast.create_predictor(PredictorName=predictor_name... | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 3b. Choosing Prophet | algorithm = 'Prophet'
algorithm_arn_prophet = algorithm_arn + algorithm
predictor_name_prophet = f'{project}_{algorithm.lower()}_{idx}'
algorithm_arn_prophet
logging.info(f'[{predictor_name_prophet}] Creating predictor %s ...' % predictor_name_prophet)
create_predictor_response = forecast.create_predictor(PredictorName... | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 4. Computing Error Metrics from Backtesting. After creating the predictors, we can query the forecast accuracy given by the backtest scenario to get a quantitative understanding of the algorithm's performance. Such a process is iterative in nature during model development. When an algorithm with satisfying ... | logging.info('Done creating predictor. Getting accuracy numbers for DeepAR+ ...')
error_metrics_deep_ar_plus = forecast.get_accuracy_metrics(PredictorArn=predictor_arn_deep_ar)
error_metrics_deep_ar_plus
logging.info('Done creating predictor. Getting accuracy numbers for Prophet ...')
error_metrics_prophet = forecast.g... | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
As we mentioned before, if you only have a handful of time series (in this case, only 1) with a small number of examples, the neural network models (DeepAR+) are not the best choice. Here, we clearly see that DeepAR+ behaves worse than Prophet in the case of a single time series. Step 5. Creating a Forecast. Next we re... | logging.info(f"Done fetching accuracy numbers. Creating forecaster for DeepAR+ ...")
forecast_name_deep_ar = f'{project}_deep_ar_plus_{idx}'
create_forecast_response_deep_ar = forecast.create_forecast(ForecastName=forecast_name_deep_ar,
PredictorArn=predictor_arn_... | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 6. Querying the Forecasts | item_id = 'bike_12'
forecast_response_deep = forecast_query.query_forecast(
ForecastArn=forecast_arn_deep_ar,
Filters={"item_id": item_id})
forecast_response_prophet = forecast_query.query_forecast(ForecastArn=forecast_arn_prophet,
Filters={"item_id":item_id}... | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 7. Exporting your Forecasts | forecast_export_name_deep_ar = f'{project}_forecast_export_deep_ar_plus_{idx}'
forecast_export_name_deep_ar_path = f"{s3_data_path}/{forecast_export_name_deep_ar}"
create_forecast_export_response_deep_ar = forecast.create_forecast_export_job(ForecastExportJobName=forecast_export_name_deep_ar,
... | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |
Step 8. Cleaning up your Resources. Once we have completed the above steps, we can start to clean up the resources we created. All delete jobs, except for `delete_dataset_group`, are asynchronous, so we have added the helpful `wait_till_delete` function. Resource limits are documented here. | # Delete forecast export for both algorithms
wait_till_delete(lambda: forecast.delete_forecast_export_job(ForecastExportJobArn = forecast_export_arn_deep_ar))
wait_till_delete(lambda: forecast.delete_forecast_export_job(ForecastExportJobArn = forecast_export_arn_prophet))
# Delete forecast for both algorithms
wait_till... | _____no_output_____ | MIT-0 | notebooks/6.Incorporating_Related_Time_Series_dataset_to_your_Predictor.ipynb | ardilacarlosh/amazon-forecast-samples |