# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Day 4 Data wrangling with Python - Exercises with answers
# ## Exercise 1
# #### Question 1
# ##### Load packages os, pandas and numpy.
# ##### Set the working directory to the data directory.
# ##### Print the working directory.
# #### Answer:
import pandas as pd
import numpy as np
import os
# +
# Set `main_dir` to the location of your `skill-soft` folder.
# Keep the line that matches your operating system and comment out the rest.
# Linux:
main_dir = "/home/[username]/Desktop/skill-soft"
# Windows:
# main_dir = "C:/Users/[user_name]/Desktop/skill-soft"
# Mac:
# main_dir = "/Users/[username]/Desktop/skill-soft"
data_dir = main_dir + "/data/"
# -
# Set working directory.
os.chdir(data_dir)
# Check working directory.
print(os.getcwd())
# #### Question 2
# ##### Read in 'TB_data.csv' into a variable called `tb_data` using pandas.
# ##### Print the first 20 rows of the DataFrame.
# #### Answer:
tb_data = pd.read_csv('TB_data.csv')
print(tb_data.head(20))
# #### Question 3
# ##### Perform the following actions on `tb_data` object:
# - Check the type of the object
# - Get the shape of the object and save it to 2 variables and print both:
# - `nrows`
# - `ncols`
# #### Answer:
print(type(tb_data))
nrows, ncols = tb_data.shape
print(nrows)
print(ncols)
# #### Question 4
# ##### Use the `head` and `sample` functions to:
# - print first 5 rows in tb_data
# - print 3 random rows in tb_data
# - print 0.1% random rows in tb_data
# #### Answer:
print(tb_data.head(5)) #<- pulls the first 5 rows
print(tb_data.sample(n = 3))
print(tb_data.sample(frac = 0.001))
# #### Question 5
# ##### Inspect tb_data with the following information:
# - columns
# - data types
# - info
# - statistical summary (describe)
# #### Answer:
tb_data.columns
tb_data.dtypes
tb_data.info()
tb_data.describe()
# ##### Note: We do not have any specific ID variable we can use as an index.
# ## Exercise 2
# #### Question 1
# ##### Split the data to be grouped by age group.
# ##### Group by the column `age_group` and assign it to a dataframe named `grouped_df`.
# #### Answer:
grouped_df = tb_data.groupby('age_group')
# #### Question 2
# ##### Use the summary functions to get the mean number of `best` for all the age groups you just grouped in `grouped_df`.
# ##### Assign the output to a dataframe called `age_best` and print that dataframe to the console.
# #### Answer:
age_best = grouped_df[['best']].mean()
print(age_best)
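The group-then-aggregate pattern above can be sketched on a tiny hypothetical frame (the `age_group`/`best` column names mirror the exercise; the values are invented):

```python
import pandas as pd

# Hypothetical mini dataset mirroring the age_group/best columns.
toy = pd.DataFrame({"age_group": ["0-14", "0-14", "15-24", "15-24"],
                    "best": [100, 300, 50, 150]})
# Select the column before aggregating so only `best` is averaged
# and the result stays a one-column DataFrame.
age_best_toy = toy.groupby("age_group")[["best"]].mean()
print(age_best_toy)
```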
# #### Question 3
# ##### Use the summary functions to get the max number of `lo` for all the age groups grouped by the column `age_group`.
# ##### Assign the output to a dataframe called `age_lo` and print that dataframe to the console.
# ##### Make sure all values are rounded to the nearest integer.
# #### Answer:
age_lo = round(grouped_df[['lo']].max(), 0)
print(age_lo)
# #### Question 4
# ##### Sort the values of `age_lo` in descending order
# #### Answer:
age_lo.sort_values('lo', ascending=False)
# #### Question 5
# ##### Filter the values of `age_best` for values of `best` that are above 10000.
#
#
# #### Answer:
age_best.query('best > 10000')
# #### Question 6
# ##### In `age_best`, create a new boolean column (containing values True or False) called `high`
# ##### that indicates whether a row has a value for `best` that is higher than 10000.
#
#
#
# #### Answer:
age_best['high'] = age_best['best'] > 10000
print(age_best)
# ## Exercise 3
# #### Question 1
# ##### Subset `tb_data`, so that the new dataframe only contains the columns `best`, `lo`, and `hi`
# ##### Call the new dataframe `tb_subset`
# #### Answer:
tb_subset = tb_data.loc[:,['best', 'lo', 'hi']]
print(tb_subset.head())
# #### Question 2
# ##### Check the count of missing values (NAs) by column.
# #### Answer:
tb_subset.isnull().sum()
# #### Question 3
# ##### We just saw that there are NAs in our data.
# ##### Impute missing values using the mean of the column.
# #### Answer:
tb_subset = tb_subset.fillna(tb_subset.mean())
tb_subset.isnull().sum()
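The mean-imputation step can be illustrated on a small invented frame; `fillna(df.mean())` replaces each NA with its column's mean:

```python
import numpy as np
import pandas as pd

# Invented column with one missing value.
demo = pd.DataFrame({"lo": [1.0, np.nan, 3.0]})
# fillna(demo.mean()) replaces each NA with its column mean (here 2.0).
filled = demo.fillna(demo.mean())
print(filled)
```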
# #### Question 4
# ##### Let us convert `best` to a categorical variable with two levels.
# ##### Convert values in `best` below 5000 to `low` and above or equal to 5000 to `high`.
# #### Answer:
tb_subset['best'] = np.where(tb_subset['best'] < 5000, 'low', 'high')
tb_subset.head()
# #### Question 5
# ##### Check the type of the variable `best` and convert it to boolean, so that it has the value `True` for `high` and `False` for `low`
# #### Answer:
tb_subset.dtypes
tb_subset['best'] = np.where(tb_subset['best'] == 'high', True, False)
tb_subset.dtypes
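`np.where` drives both conversions above; a compact sketch with invented values straddling the 5000 threshold:

```python
import numpy as np

# Invented values around the 5000 cutoff.
vals = np.array([1000, 7000, 4999, 5000])
labels = np.where(vals < 5000, "low", "high")   # categorical, as in Question 4
flags = labels == "high"                        # boolean, as in Question 5
print(labels, flags)
```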
# #### Question 6
# ##### Group `tb_subset` by variable `best`. Save as `tb_grouped`.
# ##### Convert `tb_grouped` to a wide dataframe by taking mean of `tb_grouped` and specifying the variables `lo` and `hi`.
# ##### Print results.
# ##### Reset the index of `tb_grouped_mean` and print results again.
# #### Answer:
# Group data by the `best` variable.
tb_grouped = tb_subset.groupby('best')
# Compute mean on the listed variables using the grouped data.
tb_grouped_mean = tb_grouped[['lo', 'hi']].mean()
print(tb_grouped_mean)
# Reset index of the dataset.
tb_grouped_mean = tb_grouped_mean.reset_index()
print(tb_grouped_mean)
# #### Question 7
# ##### Change the dataframe `tb_grouped_mean` to be a long dataframe.
# ##### The resulting column names should be `best`, `type` (contains the column names of the wide dataframe), and `value` (contains the values from the wide dataframe).
# ##### Name the resulting dataframe `tb_subset_long`.
# #### Answer:
tb_subset_long = pd.melt(tb_grouped_mean, #<- wide dataset
id_vars = ['best'], #<- identifying variable
var_name = 'type', #<- contains col names of wide data
value_name = 'value') #<- contains values from above columns
print(tb_subset_long)
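For reference, `pd.melt` with the same arguments on a tiny hand-built wide frame (values invented, shaped like `tb_grouped_mean` after `reset_index()`):

```python
import pandas as pd

# Hypothetical wide frame with a `best` key and two value columns.
wide = pd.DataFrame({"best": [False, True],
                     "lo": [10.0, 20.0],
                     "hi": [30.0, 40.0]})
tb_long = pd.melt(wide,
                  id_vars=["best"],     # identifying variable
                  var_name="type",      # former column names
                  value_name="value")   # former cell values
print(tb_long)
```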
# Source notebook: Exercises/day-4-data-wrangling-with-Python-exercises-with-answers.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ML Pipeline Preparation
# Follow the instructions below to help you create your ML pipeline.
# ### 1. Import libraries and load data from database.
# - Import Python libraries
# - Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
# - Define feature and target variables X and Y
# +
# import libraries
from sqlalchemy import create_engine
import pandas as pd
import numpy as np
import pickle
import re
import nltk
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
from nltk.tokenize import word_tokenize
from nltk.stem.porter import PorterStemmer
from nltk.corpus import stopwords
from sklearn.metrics import precision_score, recall_score, f1_score, make_scorer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.multioutput import MultiOutputClassifier
from joblib import parallel_backend
from imblearn.ensemble import BalancedRandomForestClassifier
import warnings
warnings.simplefilter('ignore')
# -
# load data from database
engine = create_engine(r'sqlite:///data/DisasterResponse.db', pool_pre_ping=True)
df = pd.read_sql_table('CleanData', engine)
X = df.message
Y = df[df.columns[4:]]
# ### 2. Write a tokenization function to process your text data
def tokenize(text):
"""
Normalize and tokenize message strings.
Args:
text: String - message text to process
Returns:
clean_tokens: list of strings - list of tokens from the message
"""
# normalize case and remove punctuation
text = re.sub(r'\W', ' ', text.lower())
tokens = word_tokenize(text)
stop_words = stopwords.words("english")
# Reduce words to their stems
clean_tokens = [PorterStemmer().stem(tok).strip() for tok in tokens if tok not in stop_words]
return clean_tokens
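As a dependency-free illustration of what `tokenize` does, here is a stand-in that keeps the normalization and stop-word filtering but uses a tiny hand-picked stop-word set and skips stemming (both simplifications are assumptions made to avoid the NLTK downloads):

```python
import re

# Tiny invented stop-word list; the real function uses NLTK's.
STOP_WORDS = {"the", "a", "an", "is", "are", "to", "and"}

def simple_tokenize(text):
    # Same normalization as tokenize(): lower-case, strip punctuation.
    text = re.sub(r"\W", " ", text.lower())
    return [tok for tok in text.split() if tok not in STOP_WORDS]

tokens = simple_tokenize("The roads are BLOCKED, send help!")
print(tokens)
```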
# ### 3. Build a machine learning pipeline
# This machine learning pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize) ),
('tfidf', TfidfTransformer() ),
('clf', MultiOutputClassifier(RandomForestClassifier(n_jobs=-1)) )
])
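To see what the text stages feed the classifier, the vectorizer and TF-IDF steps can be fitted alone on a toy corpus (default tokenizer here; the real pipeline plugs in the custom `tokenize` function):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline

# Text stages only, with the default tokenizer.
text_stages = Pipeline([("vect", CountVectorizer()),
                        ("tfidf", TfidfTransformer())])
corpus = ["water needed urgently",
          "food and water needed",
          "roads blocked"]
features = text_stages.fit_transform(corpus)
# One row per message, one column per vocabulary term.
print(features.shape)
```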
# ### 4. Train pipeline
# - Split data into train and test sets
# - Train pipeline
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)
# ### 5. Test your model
# Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
def report_results(Y_test, Y_pred):
    """Report precision, recall and F1-score for each output category of the model."""
    rows = []
    for i, category in enumerate(Y_test.columns):
        y_true = Y_test.iloc[:, i].values
        y_pred = Y_pred[:, i]
        rows.append({'category': category,
                     'precision': precision_score(y_true, y_pred, zero_division=0, average='macro'),
                     'recall': recall_score(y_true, y_pred, zero_division=0, average='macro'),
                     'f1-score': f1_score(y_true, y_pred, zero_division=0, average='macro')})
    results = pd.DataFrame(rows, columns=['category', 'precision', 'recall', 'f1-score'])
    median_values = pd.DataFrame([{'category': 'median_values',
                                   'precision': results['precision'].median(),
                                   'recall': results['recall'].median(),
                                   'f1-score': results['f1-score'].median()}])
    # DataFrame.append was removed in pandas 2.0, so use pd.concat instead.
    results = pd.concat([results, median_values], ignore_index=True)
    return results
# +
pipeline.fit(X_train, Y_train)
Y_pred = pipeline.predict(X_test)
# -
print('Writing results to DB in table "Pipeline".')
report_results(Y_test, Y_pred).to_sql('Pipeline', engine, index=False, if_exists='replace')
# Because this code is executed remotely, we will later convert this notebook into a plain Python script and write the performance results into the existing SQL database. After transferring the database back to our local machine, we can read out the tables and compare the models.
# ### 6. Improve your model
# Use grid search to find better parameters.
def f1_scorer(y_true, y_pred):
"""
Calculate median F1-Score to measure model performance.
Args:
y_true: DataFrame containing the actual labels
y_pred: Array containing the predicted labels
Returns:
f1_score: Float representing the median F1-Score for the model.
"""
scores = []
for i in range(y_pred.shape[1]):
scores.append(f1_score(np.array(y_true)[:,i], y_pred[:,i], zero_division=0, average='macro'))
score = np.median(scores)
return score
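The median-of-per-column-F1 idea can be checked on tiny invented label arrays:

```python
import numpy as np
from sklearn.metrics import f1_score

# Invented multi-output labels: two categories (columns), four samples.
y_true = np.array([[1, 0], [1, 1], [0, 1], [0, 0]])
y_pred = np.array([[1, 0], [0, 1], [0, 1], [0, 0]])

# Per-column macro F1, then the median across columns, as in f1_scorer.
scores = [f1_score(y_true[:, i], y_pred[:, i], zero_division=0, average="macro")
          for i in range(y_pred.shape[1])]
print(np.median(scores))
```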
parameters = {
'vect__ngram_range': [(1,1), (1,2), (1,4)],
'clf__estimator__min_samples_leaf':[1, 5],
'clf__estimator__class_weight': [None, 'balanced'],
'clf__estimator__n_estimators': [50, 100, 200]
}
scorer = make_scorer(f1_scorer)
cv = GridSearchCV(pipeline, param_grid=parameters, scoring=scorer, verbose=3)
# ### 7. Test your model
# Show the accuracy, precision, and recall of the tuned model.
#
# Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
cv.fit(X_train, Y_train)
# Get results of grid search
data = {'parameter': list(cv.best_params_.keys()),
        'value': [str(value) for value in cv.best_params_.values()]}
cv_results = pd.DataFrame(data)
# DataFrame.append was removed in pandas 2.0, so use pd.concat instead.
best_score = pd.DataFrame([{'parameter': 'median f1-score',
                            'value': np.max(cv.cv_results_['mean_test_score'])}])
cv_results = pd.concat([cv_results, best_score], ignore_index=True)
print('Writing results of GridSearch.fit to DB in table "GsFit".')
cv_results.to_sql('GsFit', engine, index=False, if_exists='replace')
Y_pred = cv.predict(X_test)
print('Writing results of GridSearch.predict to DB in table "GsPredict".')
report_results(Y_test, Y_pred).to_sql('GsPredict', engine, index=False, if_exists='replace')
# ### 8. Try improving your model further. Here are a few ideas:
# * try other machine learning algorithms
# * add other features besides the TF-IDF
balanced_pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize) ),
('tfidf', TfidfTransformer() ),
('clf', MultiOutputClassifier(BalancedRandomForestClassifier(n_jobs=-1) ))
])
keys = ['vect__ngram_range',
'clf__estimator__min_samples_leaf',
'clf__estimator__class_weight',
'clf__estimator__n_estimators']
# After fitting, the tuned values live in `best_params_`, not in `get_params()`.
values = [cv.best_params_[key] for key in keys]
tuning_params = dict(zip(keys, values))
balanced_pipeline.set_params(
vect__ngram_range = tuning_params['vect__ngram_range'],
clf__estimator__min_samples_leaf = tuning_params['clf__estimator__min_samples_leaf'],
clf__estimator__class_weight = tuning_params['clf__estimator__class_weight'],
clf__estimator__n_estimators = tuning_params['clf__estimator__n_estimators']
)
balanced_pipeline.fit(X_train, Y_train)
Y_pred = balanced_pipeline.predict(X_test)
print('Writing results of BalancedPipeline to DB in table "BalancedPipeline".')
report_results(Y_test, Y_pred).to_sql('BalancedPipeline', engine, index=False, if_exists='replace')
# ### 9. Export your model as a pickle file
print('Saving models in pickle files.')
pickle.dump(cv, open('disaster_model.pkl', 'wb'))
pickle.dump(balanced_pipeline, open('balanced_model.pkl', 'wb'))
# ### 10. Use this notebook to complete `train.py`
# Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
# Source notebook: ML Pipeline Preparation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: ml-software
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import internetarchive as ia
import pandas as pd
import itertools as it
from toolz import pluck, filter, map, take
import toolz
import os
from pathlib import Path
import json
from glob import glob
from zipfile import ZipFile, BadZipFile
config = json.load(open(os.path.expanduser("~/.thesis.conf")))
db_folder = Path(config['datasets']) / Path("archive_org/")
os.chdir(str(db_folder))
iaformat = ["Single Page Processed JP2 ZIP", 'Metadata']
search = ia.search_items('pages:[20 TO 25] AND (language:eng OR language:"English") AND date:[1800-01-01 TO 1967-01-01]')
items = list(toolz.take(5,search.iter_as_items()))
item = items[2]
for item in items[0:3]:
    ia.download(item.identifier, formats=iaformat)

# Path of the JP2 ZIP for the last downloaded item. Match against the
# "Single Page Processed JP2 ZIP" format string (iaformat[0]), not the
# whole iaformat list, which would never compare equal.
jp2path = Path(item.identifier) / Path(next(pluck('name', filter(lambda f: f['format'] == iaformat[0], item.files))))
# + inputHidden=false outputHidden=false
zips = glob(str(db_folder / '*' / '*_jp2.zip'))
zips
# -
for zip_path in zips:  # avoid shadowing the builtin `zip`
    try:
        jp2zip = ZipFile(str(zip_path))
        print(jp2zip.filename, len(jp2zip.namelist()))
        jp2zip.extract(jp2zip.namelist()[0])
    except BadZipFile:
        print('zip error', zip_path)
jp2zip.namelist()
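The list-and-extract pattern can be exercised without the downloaded archives by building a zip in memory (member names invented):

```python
import io
from zipfile import ZipFile

# Build a zip in memory instead of reading the downloaded *_jp2.zip files.
buf = io.BytesIO()
with ZipFile(buf, "w") as zf:
    zf.writestr("book_jp2/page_0000.jp2", b"fake image bytes")
    zf.writestr("book_jp2/page_0001.jp2", b"more fake bytes")

with ZipFile(buf) as zf:
    names = zf.namelist()
    first = zf.read(names[0])   # extract() writes to disk; read() returns bytes
print(len(names), names[0])
```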
# Source notebook: notebooks/data/ia_download.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.12 ('tevatron')
# language: python
# name: python3
# ---
query = 'Distilled water'
passage1 = 'Distilled water is water that has been boiled into vapor and condensed back into liquid in a separate container. Impurities in the original water that do not boil below or near the boiling point of water remain in the original container. Thus, distilled water is a type of purified water.'
passage2 = "An electric discharge is the release and transmission of electricity in an applied electric field through a medium such as a gas."
from tevatron.driver.encode_code import CoCondenser
model = CoCondenser('Luyu/co-condenser-marco')
query_embed = model.encode({'text_id':[0], 'text':[query]}, True)
passage_embed = model.encode({'text_id':[0,1], 'text':[passage1, passage2]}, False)
print(query_embed.shape)
print(passage_embed.shape)
from sklearn.metrics.pairwise import cosine_similarity
# torch.nn.functional.cosine_similarity(query_embed[0], passage_embed[0], dim=1)
cosine_similarity(query_embed, passage_embed)
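For intuition, cosine similarity on hand-made 2-D vectors standing in for the query/passage embeddings: the query points the same way as the first vector and is orthogonal to the second:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Invented 2-D stand-ins for the query/passage embeddings.
query_vec = np.array([[1.0, 0.0]])
passage_vecs = np.array([[2.0, 0.0],    # same direction as the query
                         [0.0, 3.0]])   # orthogonal to the query
sims = cosine_similarity(query_vec, passage_vecs)
print(sims)
```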
# Source notebook: coCondenser_demov2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Loading data from Digital Earth Africa
#
# * **Products used:**
# [gm_s2_annual](https://explorer.digitalearth.africa/gm_s2_annual),
# [ga_ls8c_wofs_2_annual_summary](https://explorer.digitalearth.africa/ga_ls8c_wofs_2_annual_summary)
#
# * **Prerequisites:** Users of this notebook should have a basic understanding of:
# * How to run a [Jupyter notebook](01_Jupyter_notebooks.ipynb)
# * Inspecting available [DE Africa products and measurements](02_Products_and_measurements.ipynb)
# + raw_mimetype="text/restructuredtext" active=""
# **Keywords** :index:`beginner's guide; loading data`, :index:`loading data; beginner's guide`, :index:`data used; landsat 8 geomedian`, :index:`data used; WOfS`, :index:`data methods; resampling` :index:`data methods; reprojecting`, :index:`data attributes; coordinate reference system`
# -
# ## Background
# Loading data from the [Digital Earth Africa (DE Africa)](https://www.digitalearthafrica.org/) instance of the [Open Data Cube](https://www.opendatacube.org/) requires the construction of a data query that specifies the what, where, and when of the data request.
# Each query returns a [multi-dimensional xarray object](http://xarray.pydata.org/en/stable/) containing the contents of your query.
# It is essential to understand the `xarray` data structures as they are fundamental to the structure of data loaded from the datacube.
# Manipulations, transformations and visualisation of `xarray` objects provide datacube users with the ability to explore and analyse DE Africa datasets, as well as pose and answer scientific questions.
# ## Description
# This notebook will introduce how to load data from the Digital Earth Africa datacube through the construction of a query and use of the `dc.load()` function.
# Topics covered include:
#
# * Loading data using `dc.load()`
# * Interpreting the resulting `xarray.Dataset` object
# * Inspecting an individual `xarray.DataArray`
# * Customising parameters passed to the `dc.load()` function
# * Loading specific measurements
# * Loading data for coordinates in a custom coordinate reference system (CRS)
# * Projecting data to a new CRS and spatial resolution
# * Specifying a specific spatial resampling method
# * Loading data using a reusable dictionary query
# * Loading matching data from multiple products using `like`
# * Adding a progress bar to the data load
#
# ***
# ## Getting started
# To run this introduction to loading data from DE Africa, run all the cells in the notebook starting with the "Load packages" cell. For help with running notebook cells, refer back to the [Jupyter Notebooks notebook](01_Jupyter_notebooks.ipynb).
# ### Load packages
# First we need to load the `datacube` package.
# This will allow us to query the datacube database and load some data.
# The `with_ui_cbk` function from `odc.ui` will allow us to show a progress bar when loading large amounts of data.
import datacube
from odc.ui import with_ui_cbk
# ### Connect to the datacube
# We then need to connect to the datacube database.
# We will then be able to use the `dc` datacube object to load data.
# The `app` parameter is a unique name used to identify the notebook that does not have any effect on the analysis.
dc = datacube.Datacube(app="03_Loading_data")
# ## Loading data using `dc.load()`
#
# Loading data from the datacube uses the [dc.load()](https://datacube-core.readthedocs.io/en/latest/dev/api/generate/datacube.Datacube.load.html) function.
#
# The function requires the following minimum arguments:
#
# * `product`: A specific product to load (to revise DE Africa products, see the [Products and measurements](02_Products_and_measurements.ipynb) notebook).
# * `x`: Defines the spatial region in the *x* dimension. By default, the *x* and *y* arguments accept queries in a geographical co-ordinate system WGS84, identified by the EPSG code *4326*.
# * `y`: Defines the spatial region in the *y* dimension. The dimensions ``longitude``/``latitude`` and ``x``/``y`` can be used interchangeably.
# * `time`: Defines the temporal extent. The time dimension can be specified using a tuple of datetime objects or strings in the "YYYY", "YYYY-MM" or "YYYY-MM-DD" format.
#
# An optional argument that makes it easier to identify the measurements to load is:
#
# * `measurements:` This argument is used to provide a list of measurement names to load, as listed in `dc.list_measurements()`.
# For satellite datasets, measurements contain data for each individual satellite band (e.g. near infrared).
# If not provided, all measurements for the product will be returned, and they will have the default names from the satellite data.
#
# Let's run a query to load 2018 data from the [Sentinel 2 annual geomedian product](https://explorer.digitalearth.africa/gm_s2_annual) for part of Nxai Pan National park in Botswana.
# For this example, we can use the following parameters:
#
# * `product`: `gm_s2_annual`
# * `x`: `(24.60, 24.80)`
# * `y`: `(-20.05, -20.25)`
# * `time`: `("2018-01-01", "2018-12-31")`
# * `measurements`: `['blue', 'green', 'red', 'nir', 'swir_1', 'swir_2', 'emad', 'bcmad', 'smad', 'red_edge_1']`
#
# Run the following cell to load all datasets from the `gm_s2_annual` product that match this spatial and temporal extent:
# +
ds = dc.load(product="gm_s2_annual",
x=(24.65, 24.75),
y=(-20.05, -20.15),
time=("2018-01-01", "2018-12-31"),
measurements=['blue', 'green', 'red', 'nir', 'swir_1', 'swir_2', 'emad', 'bcmad', 'smad', 'red_edge_1'])
print(ds)
# -
# ### Interpreting the resulting `xarray.Dataset`
# The variable `ds` has returned an `xarray.Dataset` containing all data that matched the spatial and temporal query parameters inputted into `dc.load`.
#
# *Dimensions*
#
# * Identifies the number of timesteps returned in the search (`time: 1`) as well as the number of pixels in the `x` and `y` directions of the data query.
#
# *Coordinates*
#
# * `time` identifies the date attributed to each returned timestep.
# * `x` and `y` are the coordinates for each pixel within the spatial bounds of your query.
#
# *Data variables*
#
# * These are the measurements available for the nominated product.
# For every date (`time`) returned by the query, the measured value at each pixel (`y`, `x`) is returned as an array for each measurement.
# Each data variable is itself an `xarray.DataArray` object ([see below](#Inspecting-an-individual-xarray.DataArray)).
#
# *Attributes*
#
# * `crs` identifies the coordinate reference system (CRS) of the loaded data.
# ### Inspecting an individual `xarray.DataArray`
# The `xarray.Dataset` we loaded above is itself a collection of individual `xarray.DataArray` objects that hold the actual data for each data variable/measurement.
# For example, all measurements listed under _Data variables_ above (e.g. `blue`, `green`, `red`, `nir`, `swir_1`, `swir_2`) are `xarray.DataArray` objects.
#
# We can inspect the data in these `xarray.DataArray` objects using either of the following syntaxes:
# ```
# ds["measurement_name"]
# ```
# or:
# ```
# ds.measurement_name
# ```
#
# Being able to access data from individual data variables/measurements allows us to manipulate and analyse data from individual satellite bands or specific layers in a dataset.
# For example, we can access data from the near infra-red satellite band (i.e. `nir`):
print(ds.nir)
# Note that the object header informs us that it is an `xarray.DataArray` containing data for the `nir` satellite band.
#
# Like an `xarray.Dataset`, the array also includes information about the data's **dimensions** (i.e. `(time: 1, y: 801, x: 644)`), **coordinates** and **attributes**.
# This particular data variable/measurement contains some additional information that is specific to the `nir` band, including details of the array's nodata value (i.e. `nodata: -999`).
#
# > **Note**: For a more in-depth introduction to `xarray` data structures, refer to the [official xarray documentation](http://xarray.pydata.org/en/stable/data-structures.html)
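The same key/attribute duality exists in pandas, which may be a more familiar place to try it (toy column names assumed):

```python
import pandas as pd

# Toy frame with an invented `nir` column.
df = pd.DataFrame({"nir": [0.4, 0.5], "red": [0.1, 0.2]})
# Bracket and attribute access return the same underlying column,
# just as ds["nir"] and ds.nir do for an xarray.Dataset.
print(df["nir"].equals(df.nir))
```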
# ## Customising the `dc.load()` function
#
# The `dc.load()` function can be tailored to refine a query.
#
# Customisation options include:
#
# * `measurements:` This argument is used to provide a list of measurement names to load, as listed in `dc.list_measurements()`.
# For satellite datasets, measurements contain data for each individual satellite band (e.g. near infrared).
# If not provided, all measurements for the product will be returned.
# * `crs:` The coordinate reference system (CRS) of the query's `x` and `y` coordinates is assumed to be `WGS84`/`EPSG:4326` unless the `crs` field is supplied, even if the stored data is in another projection or the `output_crs` is specified.
# The `crs` parameter is required if your query's coordinates are in any other CRS.
# * `group_by:` Satellite datasets based around scenes can have multiple observations per day with slightly different time stamps as the satellite collects data along its path.
# These observations can be combined by reducing the `time` dimension to the day level using `group_by=solar_day`.
# * `output_crs` and `resolution`: To reproject the data or change its resolution, supply the `output_crs` and `resolution` fields.
# * `resampling`: This argument allows you to specify a custom spatial resampling method to use when data is reprojected into a different CRS.
#
# Example syntax on the use of these options follows in the cells below.
#
# > For help or more customisation options, run `help(dc.load)` in an empty cell or visit the function's [documentation page](https://datacube-core.readthedocs.io/en/latest/dev/api/generate/datacube.Datacube.load.html)
#
# ### Specifying measurements
# By default, `dc.load()` will load *all* measurements in a product.
#
# To load data from the `red`, `green` and `blue` satellite bands only, we can add `measurements=["red", "green", "blue"]` to our query:
# +
# Note the optional inclusion of the measurements list
ds_rgb = dc.load(product="gm_s2_annual",
measurements=["red", "green", "blue"],
x=(24.60, 24.80),
y=(-20.05, -20.25),
time=("2018-01-01", "2018-12-31"))
print(ds_rgb)
# -
# Note that the *Data variables* component of the `xarray.Dataset` now includes only the measurements specified in the query (i.e. the `red`, `green` and `blue` satellite bands).
# ### Loading data for coordinates in any CRS
# By default, `dc.load()` assumes that your query `x` and `y` coordinates are provided in degrees in the `WGS84/EPSG:4326` CRS.
# If your coordinates are in a different coordinate system, you need to specify this using the `crs` parameter.
#
# In the example below, we load data for a set of `x` and `y` coordinates defined in an equal-area projection (`EPSG:6933`), and ensure that the `dc.load()` function accounts for this by including `crs="EPSG:6933"`:
#
# +
# Note the new `x` and `y` coordinates and `crs` parameter
ds_custom_crs = dc.load(product="gm_s2_annual",
time=("2018-01-01", "2018-12-31"),
x=(2373555, 2392845),
y=(-2507265, -2531265),
crs="EPSG:6933")
print(ds_custom_crs)
# -
# ### CRS reprojection
# Certain applications may require that you output your data into a specific CRS.
# You can reproject your output data by specifying the new `output_crs` and identifying the `resolution` required.
#
# In this example, we will reproject our data to a new CRS (UTM Zone 34S, `EPSG:32734`) and resolution (250 x 250 m). Note that for most CRSs, the first resolution value is negative (e.g. `(-250, 250)`):
# +
ds_reprojected = dc.load(product="gm_s2_annual",
x=(24.60, 24.80),
y=(-20.05, -20.25),
time=("2018-01-01", "2018-12-31"),
output_crs="EPSG:32734",
resolution=(-250, 250))
print(ds_reprojected)
# -
# Note that the `crs` attribute in the *Attributes* section has changed to `EPSG:32734`.
# Due to the larger 250 m resolution, there are now fewer pixels on the `x` and `y` dimensions (e.g. `x: 87, y: 91` compared to `x: 1930, y: 2100` in earlier examples).
#
# ### Spatial resampling methods
# When a product is re-projected to a different CRS and/or resolution, the new pixel grid may differ from the original input pixels by size, number and alignment.
# It is therefore necessary to apply a spatial "resampling" rule that allocates input pixel values into the new pixel grid.
#
# By default, `dc.load()` resamples pixel values using "nearest neighbour" resampling, which allocates each new pixel with the value of the closest input pixel.
# Depending on the type of data and the analysis being run, this may not be the most appropriate choice (e.g. for continuous data).
#
# The `resampling` parameter in `dc.load()` allows you to choose a custom resampling method from the following options:
#
# ```
# "nearest", "cubic", "bilinear", "cubic_spline", "lanczos",
# "average", "mode", "gauss", "max", "min", "med", "q1", "q3"
# ```
#
# For example, we can request that all loaded data is resampled using "average" resampling:
# +
# Note the additional `resampling` parameter
ds_averageresampling = dc.load(product="gm_s2_annual",
x=(24.60, 24.80),
y=(-20.05, -20.25),
time=("2018-01-01", "2018-12-31"),
resolution=(-250, 250),
resampling="average")
print(ds_averageresampling)
# -
# You can also provide a Python dictionary to request a different sampling method for different measurements.
# This can be particularly useful when some measurements contain categorical data which require resampling methods such as "nearest" or "mode" that do not modify the input pixel values.
#
# In the example below, we specify `resampling={"red": "nearest", "*": "average"}`, which will use "nearest" neighbour resampling for the `red` satellite band only. `"*": "average"` will apply "average" resampling for all other satellite bands:
#
# +
ds_customresampling = dc.load(product="gm_s2_annual",
x=(24.60, 24.80),
y=(-20.05, -20.25),
time=("2018-01-01", "2018-12-31"),
resolution=(-250, 250),
resampling={"red": "nearest", "*": "average"})
print(ds_customresampling)
# -
# > **Note**: For more information about spatial resampling methods, see the [following guide](https://rasterio.readthedocs.io/en/stable/topics/resampling.html)
# ## Loading data using the query dictionary syntax
# It is often useful to re-use a set of query parameters to load data from multiple products.
# To achieve this, we can load data using the "query dictionary" syntax.
# This involves placing the query parameters we used to load data above inside a Python dictionary object which we can re-use for multiple data loads:
query = {"x": (24.60, 24.80),
"y": (-20.05, -20.25),
"time": ("2018-01-01", "2018-12-31")}
# We can then use this query dictionary object as an input to `dc.load()`.
#
# > The `**` syntax below is Python's "keyword argument unpacking" operator.
# This operator takes the named query parameters listed in the dictionary we created (e.g. `"x": (153.3, 153.4)`), and "unpacks" them into the `dc.load()` function as new arguments.
# For more information about unpacking operators, refer to the [Python documentation](https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists)
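A minimal sketch of the unpacking itself, using a stand-in function rather than the real `dc.load()`:

```python
# Stand-in for dc.load(): it just echoes the keyword arguments it received.
def load(product, x, y, time):
    return {"product": product, "x": x, "y": y, "time": time}

query = {"x": (24.60, 24.80),
         "y": (-20.05, -20.25),
         "time": ("2018-01-01", "2018-12-31")}

# **query expands each key/value pair into a keyword argument.
result = load(product="gm_s2_annual", **query)
print(result["x"])
```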
# +
ds = dc.load(product="gm_s2_annual",
**query)
print(ds)
# -
# Query dictionaries can contain any set of parameters that would usually be provided to `dc.load()`:
# +
query = {"x": (24.60, 24.80),
"y": (-20.05, -20.25),
"time": ("2018-01-01", "2018-12-31"),
"output_crs": "EPSG:32734",
"resolution": (-250, 250)}
ds_s2 = dc.load(product="gm_s2_annual",
**query)
print(ds_s2)
# -
# Now that we have a reusable query, we can easily use it to load data from a different product.
# For example, we can load Water Observations from Space (WOfS) Annual Summary data for the same extent, time, output CRS and resolution as the Sentinel-2 annual geomedian data we just loaded:
# +
ds_wofs = dc.load(product="ga_ls8c_wofs_2_annual_summary",
**query)
print(ds_wofs)
# -
# ## Other helpful tricks
# ### Loading data "like" another dataset
# Another option for loading matching data from multiple products is to use `dc.load()`'s `like` parameter.
# This will copy the spatial and temporal extent and the CRS/resolution from an existing dataset, and use those parameters to load new data from a different product.
#
# In the example below, we load another WOfS dataset that exactly matches the `ds_s2` dataset we loaded earlier:
#
# +
ds_wofs = dc.load(product="ga_ls8c_wofs_2_annual_summary",
like=ds_s2)
print(ds_wofs)
# -
# ### Adding a progress bar
# When loading large amounts of data, it can be useful to view the progress of the data load.
# The `progress_cbk` parameter in `dc.load()` allows us to add a progress bar which will indicate how the load is progressing. In this example, we will load four years of data (2013, 2014, 2015 and 2016) from the `ga_ls8c_wofs_2_annual_summary` product with a progress bar:
# +
query = {"x": (24.60, 24.80),
"y": (-20.05, -20.25),
"time": ("2013", "2016")}
ds_progress = dc.load(product="ga_ls8c_wofs_2_annual_summary",
progress_cbk=with_ui_cbk(),
**query)
print(ds_progress)
# -
# ## Recommended next steps
#
# For more advanced information about working with Jupyter Notebooks or JupyterLab, you can explore the [JupyterLab documentation page](https://jupyterlab.readthedocs.io/en/stable/user/notebook.html).
#
# To continue working through the notebooks in this beginner's guide, the following notebooks are designed to be worked through in the following order:
#
# 1. [Jupyter Notebooks](01_Jupyter_notebooks.ipynb)
# 2. [Products and Measurements](02_Products_and_measurements.ipynb)
# 3. **Loading data (this notebook)**
# 4. [Plotting](04_Plotting.ipynb)
# 5. [Performing a basic analysis](05_Basic_analysis.ipynb)
# 6. [Introduction to numpy](06_Intro_to_numpy.ipynb)
# 7. [Introduction to xarray](07_Intro_to_xarray.ipynb)
# 8. [Parallel processing with Dask](08_Parallel_processing_with_dask.ipynb)
#
# Once you have completed the above tutorials, join advanced users in exploring:
#
# * The "Datasets" directory in the repository, where you can explore DE Africa products in depth.
# * The "Frequently used code" directory, which contains a recipe book of common techniques and methods for analysing DE Africa data.
# * The "Real-world examples" directory, which provides more complex workflows and analysis case studies.
# ***
#
# ## Additional information
#
# **License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
# Digital Earth Africa data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
#
# **Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).
# If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks).
#
# **Compatible datacube version:**
print(datacube.__version__)
# **Last Tested:**
from datetime import datetime
datetime.today().strftime('%Y-%m-%d')
| Beginners_guide/03_Loading_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### All Techniques Of Hyper Parameter Optimization
#
# 1. GridSearchCV
# 2. RandomizedSearchCV
# 3. Bayesian Optimization - Automated Hyperparameter Tuning (Hyperopt)
# 4. Sequential Model-Based Optimization (Tuning a scikit-learn estimator with skopt)
# 5. Optuna - Automated Hyperparameter Tuning
# 6. Genetic Algorithms (TPOT Classifier)
#
# ###### References
# - https://github.com/fmfn/BayesianOptimization
# - https://github.com/hyperopt/hyperopt
# - https://www.jeremyjordan.me/hyperparameter-tuning/
# - https://optuna.org/
# - https://towardsdatascience.com/hyperparameters-optimization-526348bb8e2d(By <NAME> )
# - https://scikit-optimize.github.io/stable/auto_examples/hyperparameter-optimization.html
#
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
df=pd.read_csv('diabetes.csv')
df.head()
import numpy as np
df['Glucose']=np.where(df['Glucose']==0,df['Glucose'].median(),df['Glucose'])
df.head()
#### Independent And Dependent features
X=df.drop('Outcome',axis=1)
y=df['Outcome']
pd.DataFrame(X,columns=df.columns[:-1])
#### Train Test Split
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.20,random_state=0)
from sklearn.ensemble import RandomForestClassifier
rf_classifier=RandomForestClassifier(n_estimators=10).fit(X_train,y_train)
prediction=rf_classifier.predict(X_test)
y.value_counts()
from sklearn.metrics import confusion_matrix,classification_report,accuracy_score
print(confusion_matrix(y_test,prediction))
print(accuracy_score(y_test,prediction))
print(classification_report(y_test,prediction))
# The main parameters used by a Random Forest Classifier are:
#
# - criterion = the function used to evaluate the quality of a split.
# - max_depth = maximum number of levels allowed in each tree.
# - max_features = maximum number of features considered when splitting a node.
# - min_samples_leaf = minimum number of samples which can be stored in a tree leaf.
# - min_samples_split = minimum number of samples necessary in a node to cause node splitting.
# - n_estimators = number of trees in the ensamble.
### Manual Hyperparameter Tuning
model=RandomForestClassifier(n_estimators=300,criterion='entropy',
max_features='sqrt',min_samples_leaf=10,random_state=100).fit(X_train,y_train)
predictions=model.predict(X_test)
print(confusion_matrix(y_test,predictions))
print(accuracy_score(y_test,predictions))
print(classification_report(y_test,predictions))
# ##### Randomized Search Cv
import numpy as np
from sklearn.model_selection import RandomizedSearchCV
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt','log2']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 1000,10)]
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10,14]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4,6,8]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'criterion':['entropy','gini']}
print(random_grid)
rf=RandomForestClassifier()
rf_randomcv=RandomizedSearchCV(estimator=rf,param_distributions=random_grid,n_iter=100,cv=3,verbose=2,
random_state=100,n_jobs=-1)
### fit the randomized model
rf_randomcv.fit(X_train,y_train)
rf_randomcv.best_params_
rf_randomcv
best_random_grid=rf_randomcv.best_estimator_
from sklearn.metrics import accuracy_score
y_pred=best_random_grid.predict(X_test)
print(confusion_matrix(y_test,y_pred))
print("Accuracy Score {}".format(accuracy_score(y_test,y_pred)))
print("Classification report: {}".format(classification_report(y_test,y_pred)))
# #### GridSearch CV
rf_randomcv.best_params_
# +
from sklearn.model_selection import GridSearchCV
param_grid = {
'criterion': [rf_randomcv.best_params_['criterion']],
'max_depth': [rf_randomcv.best_params_['max_depth']],
'max_features': [rf_randomcv.best_params_['max_features']],
'min_samples_leaf': [rf_randomcv.best_params_['min_samples_leaf'],
rf_randomcv.best_params_['min_samples_leaf']+2,
rf_randomcv.best_params_['min_samples_leaf'] + 4],
    # Guard against invalid values when the best estimate sits at the edge of the
    # randomized search range (min_samples_split must be >= 2 in scikit-learn)
    'min_samples_split': [max(2, rf_randomcv.best_params_['min_samples_split'] - 2),
                          max(2, rf_randomcv.best_params_['min_samples_split'] - 1),
                          rf_randomcv.best_params_['min_samples_split'],
                          rf_randomcv.best_params_['min_samples_split'] + 1,
                          rf_randomcv.best_params_['min_samples_split'] + 2],
    'n_estimators': [max(10, rf_randomcv.best_params_['n_estimators'] - 200),
                     max(10, rf_randomcv.best_params_['n_estimators'] - 100),
                     rf_randomcv.best_params_['n_estimators'],
                     rf_randomcv.best_params_['n_estimators'] + 100,
                     rf_randomcv.best_params_['n_estimators'] + 200]
}
print(param_grid)
# -
#### Fit the grid_search to the data
rf=RandomForestClassifier()
grid_search=GridSearchCV(estimator=rf,param_grid=param_grid,cv=10,n_jobs=-1,verbose=2)
grid_search.fit(X_train,y_train)
grid_search.best_estimator_
best_grid=grid_search.best_estimator_
best_grid
y_pred=best_grid.predict(X_test)
print(confusion_matrix(y_test,y_pred))
print("Accuracy Score {}".format(accuracy_score(y_test,y_pred)))
print("Classification report: {}".format(classification_report(y_test,y_pred)))
# ### Automated Hyperparameter Tuning
# Automated Hyperparameter Tuning can be done by using techniques such as
# - Bayesian Optimization
# - Gradient Descent
# - Evolutionary Algorithms
# #### Bayesian Optimization
# Bayesian optimization uses probability to find the minimum of a function. The final aim is to find the input value that gives us the lowest possible output value. It usually performs better than random, grid and manual search, providing better performance in the testing phase and reduced optimization time.
# In Hyperopt, Bayesian Optimization can be implemented by giving three main parameters to the function `fmin`.
#
# - Objective Function = defines the loss function to minimize.
# - Domain Space = defines the range of input values to test (in Bayesian Optimization this space creates a probability distribution for each of the used Hyperparameters).
# - Optimization Algorithm = defines the search algorithm to use to select the best input values to use in each new iteration.
from hyperopt import hp,fmin,tpe,STATUS_OK,Trials
space = {'criterion': hp.choice('criterion', ['entropy', 'gini']),
'max_depth': hp.quniform('max_depth', 10, 1200, 10),
'max_features': hp.choice('max_features', ['auto', 'sqrt','log2', None]),
'min_samples_leaf': hp.uniform('min_samples_leaf', 0, 0.5),
'min_samples_split' : hp.uniform ('min_samples_split', 0, 1),
'n_estimators' : hp.choice('n_estimators', [10, 50, 300, 750, 1200,1300,1500])
}
space
# +
def objective(space):
    # hp.quniform samples max_depth as a float, so cast it to int for scikit-learn
    model = RandomForestClassifier(criterion = space['criterion'],
                                   max_depth = int(space['max_depth']),
                                   max_features = space['max_features'],
                                   min_samples_leaf = space['min_samples_leaf'],
                                   min_samples_split = space['min_samples_split'],
                                   n_estimators = space['n_estimators'],
                                   )
accuracy = cross_val_score(model, X_train, y_train, cv = 5).mean()
# We aim to maximize accuracy, therefore we return it as a negative value
return {'loss': -accuracy, 'status': STATUS_OK }
# -
from sklearn.model_selection import cross_val_score
trials = Trials()
best = fmin(fn= objective,
space= space,
algo= tpe.suggest,
max_evals = 80,
trials= trials)
best
# +
crit = {0: 'entropy', 1: 'gini'}
feat = {0: 'auto', 1: 'sqrt', 2: 'log2', 3: None}
est = {0: 10, 1: 50, 2: 300, 3: 750, 4: 1200,5:1300,6:1500}
print(crit[best['criterion']])
print(feat[best['max_features']])
print(est[best['n_estimators']])
# -
best['min_samples_leaf']
trainedforest = RandomForestClassifier(criterion = crit[best['criterion']], max_depth = int(best['max_depth']),
max_features = feat[best['max_features']],
min_samples_leaf = best['min_samples_leaf'],
min_samples_split = best['min_samples_split'],
n_estimators = est[best['n_estimators']]).fit(X_train,y_train)
predictionforest = trainedforest.predict(X_test)
print(confusion_matrix(y_test,predictionforest))
print(accuracy_score(y_test,predictionforest))
print(classification_report(y_test,predictionforest))
acc5 = accuracy_score(y_test,predictionforest)
# #### Genetic Algorithms
# Genetic algorithms try to apply natural selection mechanisms to machine learning contexts.
#
# Let's imagine we create a population of N machine learning models with some predefined hyperparameters. We can then calculate the accuracy of each model and decide to keep just half of the models (the ones that perform best). We can now generate offspring with hyperparameters similar to those of the best models, so that we again get a population of N models. At this point we can again calculate the accuracy of each model and repeat the cycle for a defined number of generations. In this way, only the best models survive to the end of the process.
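# The selection-and-offspring cycle described above can be sketched with a toy genetic algorithm. This is only a minimal illustration under simplifying assumptions (a single numeric hyperparameter and a made-up fitness function), not what TPOT does internally:

```python
import random

def toy_genetic_search(fitness, low, high, pop_size=20, generations=10, seed=0):
    """Evolve one numeric hyperparameter using truncation selection and mutation."""
    rng = random.Random(seed)
    population = [rng.uniform(low, high) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better-performing half of the population
        survivors = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        # Offspring: each survivor spawns one mutated copy to refill the population
        offspring = [min(high, max(low, s + rng.gauss(0, (high - low) * 0.05)))
                     for s in survivors]
        population = survivors + offspring
    return max(population, key=fitness)

# Made-up fitness function: "accuracy" peaks when the hyperparameter is near 0.3
best = toy_genetic_search(lambda x: -(x - 0.3) ** 2, low=0.0, high=1.0)
print(round(best, 3))
```

# Because the best survivor is always carried over, the top fitness in the population can only improve from one generation to the next.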
import numpy as np
from sklearn.model_selection import RandomizedSearchCV
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt','log2']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 1000,10)]
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10,14]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4,6,8]
# Create the random grid
param = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'criterion':['entropy','gini']}
print(param)
# +
from tpot import TPOTClassifier
tpot_classifier = TPOTClassifier(generations= 5, population_size= 24, offspring_size= 12,
verbosity= 2, early_stop= 12,
config_dict={'sklearn.ensemble.RandomForestClassifier': param},
cv = 4, scoring = 'accuracy')
tpot_classifier.fit(X_train,y_train)
# +
accuracy = tpot_classifier.score(X_test, y_test)
print(accuracy)
# -
# ### Optimize hyperparameters of the model using Optuna
#
# The hyperparameters of the above algorithm are `n_estimators` and `max_depth` for which we can try different values to see if the model accuracy can be improved. The `objective` function is modified to accept a trial object. This trial has several methods for sampling hyperparameters. We create a study to run the hyperparameter optimization and finally read the best hyperparameters.
import optuna
import sklearn.ensemble
import sklearn.model_selection
import sklearn.svm
def objective(trial):
classifier = trial.suggest_categorical('classifier', ['RandomForest', 'SVC'])
if classifier == 'RandomForest':
n_estimators = trial.suggest_int('n_estimators', 200, 2000,10)
max_depth = int(trial.suggest_float('max_depth', 10, 100, log=True))
clf = sklearn.ensemble.RandomForestClassifier(
n_estimators=n_estimators, max_depth=max_depth)
else:
c = trial.suggest_float('svc_c', 1e-10, 1e10, log=True)
clf = sklearn.svm.SVC(C=c, gamma='auto')
return sklearn.model_selection.cross_val_score(
clf,X_train,y_train, n_jobs=-1, cv=3).mean()
# +
study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=100)
trial = study.best_trial
print('Accuracy: {}'.format(trial.value))
print("Best hyperparameters: {}".format(trial.params))
# -
trial
study.best_params
rf=RandomForestClassifier(n_estimators=330,max_depth=30)
rf.fit(X_train,y_train)
y_pred=rf.predict(X_test)
print(confusion_matrix(y_test,y_pred))
print(accuracy_score(y_test,y_pred))
print(classification_report(y_test,y_pred))
| Hyper Parameter Optimization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
## Summary
# Three observable trends
# -
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
df = pd.read_json('purchase_data.json')
# ### Total Number of Players and Player Count
df.SN.nunique()
# ## Purchasing Analysis
# #### Number of Unique Items
df['Item ID'].nunique()
# #### Average Purchase Price
'${:.2f}'.format(df.Price.mean())
# #### Total Number of Purchases
df.shape[0]
# #### Total Revenue
df.Price.sum()
# ## Gender Demographics
# #### Count of players by gender
df.groupby(['SN', 'Gender']).count().reset_index()['Gender'].value_counts()
# #### Percentage of players by gender
# +
ax2 = df.groupby(['SN', 'Gender']).count().reset_index()['Gender'].value_counts().plot(kind='bar', figsize=(10,7),
color="indigo", fontsize=13);
ax2.set_alpha(0.8)
ax2.set_ylabel("Gender Count", fontsize=18);
ax2.set_yticks([i for i in range(0,500,100)])
# create a list to collect the plt.patches data
totals = []
# find the values and append to list
for i in ax2.patches:
totals.append(i.get_height())
# set individual bar lables using above list
total = sum(totals)
# set individual bar lables using above list
for i in ax2.patches:
# get_x pulls left or right; get_height pushes up or down
ax2.text(i.get_x()+.12, i.get_height(), \
str(round((i.get_height()/total)*100, 2))+'%', fontsize=22,
color='black')
# -
# ## Gender Purchasing Analysis
df.groupby(['Gender', 'SN']).count().reset_index()['Gender'].value_counts()
normed = df.groupby(['Gender', 'SN']).count().reset_index()['Gender'].value_counts(normalize=True)
absolute = df.groupby(['Gender', 'SN']).count().reset_index()['Gender'].value_counts(normalize=False)
gdf = pd.concat([normed, absolute], axis=1)
df_gender = df.groupby('Gender').agg(['sum', 'mean', 'count'])
df_gender.index
level0 = df_gender.columns.get_level_values(0)
level1 = df_gender.columns.get_level_values(1)
df_gender.columns = level0 + ' ' + level1
# df_gender = df_gender[['sum', 'mean', 'count']]
df_gender
df_gender = df_gender[['Price sum', 'Price mean', 'Price count']]
df_gender
df_gender = pd.concat([df_gender,absolute], axis=1)
df_gender['Normalized'] = df_gender['Price sum'] / df_gender.Gender
df_gender
# ## Age Demographics
import seaborn as sns
age_df = df[['Age', 'SN']].drop_duplicates()
age_df.shape
sns.distplot(age_df['Age'], bins=10, kde=False)
ages = [0, 9.9, 14.9, 19.9, 24.9, 29.90, 34.90, 39.90, 99999]
age_groups = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
# +
df['Age_Group'] = pd.cut(df['Age'], ages, labels = age_groups)
age_df['Age_Group'] = pd.cut(age_df['Age'], ages, labels=age_groups)
age_out = pd.concat([age_df.Age_Group.value_counts(normalize=True),\
age_df.Age_Group.value_counts()], axis=1)
age_out.to_dict()['Age_Group']
age_norm = df.groupby('Age_Group').agg(['sum', 'mean', 'count'])['Price']
age_norm.reset_index(inplace=True)
# -
age_norm["unique_buyers"] = age_norm["Age_Group"].map(lambda x: age_out.to_dict()['Age_Group'].get(x))
age_norm['normed_mean'] = age_norm['sum'] / age_norm['unique_buyers'].astype('float')
age_norm.rename(columns={'count': 'total_purchase_count', 'mean': 'ave_purchase_price','sum': 'total_purchase_value'})
# ## Top Spenders
df['SN'].value_counts().head(15).plot.bar();
# **Since the purchase count is tied from the 2nd (index 1) through the 6th (index 5) spender, I included all of those spenders.**
top_spenders = list(df['SN'].value_counts()[:6].to_dict().keys())
mask_spend = df['SN'].isin(top_spenders)
top_spenders_df = df[mask_spend]
top_spender_purchase_analysis = top_spenders_df.groupby('SN').Price.agg(['count', 'mean', 'sum'])
top_spender_purchase_analysis = top_spender_purchase_analysis.rename(columns={\
'count': 'Purchase Count', 'mean': 'Ave Purchase Price','sum': 'Total Purchase Value'})
top_spender_purchase_analysis
# ## Most Popular Items
df['Item Name'].value_counts().head(15).plot.bar();
# **Since the value count is the same from the 5th through the 8th items, I included all of them in the top items.**
top_items = list(df['Item Name'].value_counts()[:8].to_dict().keys())
top_items
mask = df['Item Name'].isin(top_items)
top_items_df = df[mask]
top_items_df.sort_values(['Item Name']).head()
item_purchase_analysis = top_items_df.groupby('Item Name').Price.agg(['count', 'mean', 'sum']).sort_values\
(by='count', ascending=False)
item_purchase_analysis = item_purchase_analysis.rename(columns={\
'count': 'Purchase Count', 'mean': 'Ave Purchase Price','sum': 'Total Purchase Value'})
item_purchase_analysis
#sort by purchase count
# ## Most Profitable Items
most_profitable = df.groupby('Item Name')['Price'].agg(['sum', 'count']).\
sort_values(by='sum', ascending=False).nlargest(5, 'sum')
most_profitable.head()
#why don't Final Critic and Stormcaller show up in this group??
most_profitable.loc['Stormcaller']
most_profitable.loc['Final Critic']
most_profitable = most_profitable.rename(columns={\
'count': 'Purchase Count', 'sum': 'Total Purchase Value'})
most_profitable
| pymoli_final.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %pylab inline
# %config InlineBackend.figure_format = 'retina'
from ipywidgets import interact
# # Question 1
# Suppose we invent a new representation for finite precision real numbers based on rational numbers instead of floating point numbers. Rational numbers are dense on the reals, and we can approximate any real number to any desired precision with a rational number. For this to work, our data structure must use only integers and arithmetic operations ($+,-,\cdot, /$) on integers.
#
# Let $x = I_1 / I_2$ where $I_1$ and $I_2$ are 16bit integers. Assume for simplicity that $0 \leq I_1 \leq I_{\rm max}$ and $0 < I_2 \leq I_{\rm max}$. Hence, each real number represented in finite precision with our new system uses 32bits to store in memory.
#
# ## A
# Devise formulas to perform addition, multiplication, and division that use only arithmetic operations on integers. Arithmetic operations should take as input two numbers in our format and return a single new number also in our format. For example, if $x_1 = I_{11}/I_{21}$ and $x_2 = I_{12}/I_{22}$, then $x_1 + x_2 = x_3$, where $x_3$ is expressed as the ratio of two integers. For each operation, you need to write $x_3$ as the ratio of two integers each of which are functions of the integers $I_{11},I_{21}, I_{12}, I_{22}$.
#
# ## B
# What is $I_{max}$ for a non negative 16bit integer?
#
# ## C
# What is the smallest possible nonzero value that can be represented by our numbers (remember that we are assuming they are non negative)?
#
# ## D
# What is the largest possible value that can be represented by our numbers?
#
# ## E
# What is the smallest (in absolute value) possible absolute difference between two numbers $x_1$ and $x_2$ such that $x_1 \neq x_2$?
#
# ## F
# What is the smallest (in absolute value) possible relative difference between two numbers $x_1$ and $x_2$ such that $x_1 \neq x_2$? Use the following for relative difference $$\frac{|x_1 - x_2| }{ \max(x_1, x_2)}$$
#
# ## G
# How do the above answers compare to 32bit floating point numbers? Is this a good way to represent real numbers on a computer? Why or why not?
| Homework 2 Problems.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="hUoWqCVXDD3T" executionInfo={"status": "ok", "timestamp": 1607064432204, "user_tz": 300, "elapsed": 345, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghj9YH2JuHnytMBXOrP5AZ7UD35ohiAMKjIFnTh=s64", "userId": "03998776733547559393"}} outputId="7c78a1e6-d4ef-43da-ebbf-3fd75dd99a73"
from google.colab import drive
drive.mount('/content/drive')
# + id="NSXYe_rrBoYN" executionInfo={"status": "ok", "timestamp": 1607064433301, "user_tz": 300, "elapsed": 457, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghj9YH2JuHnytMBXOrP5AZ7UD35ohiAMKjIFnTh=s64", "userId": "03998776733547559393"}}
import yaml
import numpy as np
# + colab={"base_uri": "https://localhost:8080/"} id="Rc-7JtNvDCp0" executionInfo={"status": "ok", "timestamp": 1607064484813, "user_tz": 300, "elapsed": 337, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghj9YH2JuHnytMBXOrP5AZ7UD35ohiAMKjIFnTh=s64", "userId": "03998776733547559393"}} outputId="bfe9094b-50ae-4cbd-dd63-ea56c23d274c"
def computeGammaSteps(phaseEpochs, totalPhases, phaseGamma, restartPercentage):
results = []
currentDecay = 1
# prevDecay = 1
gamma = 1
results.append(gamma)
for i in range(1, phaseEpochs*totalPhases):
if i % phaseEpochs == 0:
gamma = restartPercentage / (gamma ** phaseEpochs)
# prevDecay = gamma #update prevDecay value
else:
gamma = phaseGamma
results.append(round(gamma, 2))
milestones = list(range(phaseEpochs*totalPhases))
name = str(phaseEpochs) + "x" + str(totalPhases) + "G" + str(phaseGamma) + "Restore" + str(restartPercentage) + ".yaml"
return milestones, results, name
milestones, gammas, name = computeGammaSteps(20, 4, 0.85, 0.30)
print(gammas)
# + id="x1oLnKoDQgko" executionInfo={"status": "ok", "timestamp": 1607064485668, "user_tz": 300, "elapsed": 816, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghj9YH2JuHnytMBXOrP5AZ7UD35ohiAMKjIFnTh=s64", "userId": "03998776733547559393"}}
class MultiStepSchedule:
def __init__(self, milestones, gammas, last_epoch = -1):
self.multiLR = {}
self.multiLR["class"] = "MultiStepMultiGammaLR"
self.multiLR["milestones"] = milestones
self.multiLR["gammas"] = gammas
self.multiLR["last_epoch"] = last_epoch
class LRSchedule:
def __init__(self, milestones, gammas, last_epoch = -1):
self.lr_schedulers = MultiStepSchedule(milestones, gammas, last_epoch)
schedule1YAML = LRSchedule(milestones, gammas)
f = open(name, "w")
yaml.dump(schedule1YAML, f, default_flow_style=False)
# + id="u3HK6xTYPkRv"
| OurNotebooks/MultiStepMultiGammaBuilder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 2-2.3 Intro Python
# ## Lists
# - List Creation
# - List Access
# - List Append
# - **List *modify* and Insert**
# - List Delete
#
# -----
#
# ><font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font>
# - Create Lists
# - Access items in a list
# - Add Items to the end of a list
# - **Modify and insert items into a list**
# - Delete items from a list
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font>
# ## Insert a new value for an index
# []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/219b35cf-03b5-4b57-8fbc-f5864281312d/Unit2_Section2.3a-Overwriting_an_Index.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/219b35cf-03b5-4b57-8fbc-f5864281312d/Unit2_Section2.3a-Overwriting_an_Index.vtt","srclang":"en","kind":"subtitles","label":"english"}])
# ### overwrite a specific index in a list
# ```python
# party_list[2] = "Tobias"
# ```
# - **Overwrites** existing index
# - **Cannot** use to **Append** a new index to the list
#
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font>
# +
# [ ] review and run example
# the list before Insert
party_list = ["Joana", "Alton", "Tobias"]
print("party_list before: ", party_list)
# the list after Insert
party_list[1] = "Colette"
print("party_list after: ", party_list)
# +
# [ ] review and run example
party_list = ["Joana", "Alton", "Tobias"]
print("before:",party_list)
# modify index value
party_list[1] = party_list[1] + " Derosa"
print("\nafter:", party_list)
# -
# <font size="3" color="#00A0B2" face="verdana"> <B>Example</B></font>
# **IndexError**
# +
# IndexError Example
# [ ] review and run example which results in an IndexError
# if result is NameError run cell above before running this cell
# IndexError trying to append to end of list
party_list[3] = "Alton"
print(party_list)
# +
# [ ] review and run example changes the data type of an element
# replace a string with a number (int)
single_digits = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
print("single_digits: ", single_digits)
print("single_digits[3]: ", single_digits[3], type(single_digits[3]),"\n")
# replace string with an int
single_digits[3] = 3
print("single_digits: ", single_digits)
print("single_digits[3]: ", single_digits[3], type(single_digits[3]))
# -
# #
# <font size="6" color="#B24C00" face="verdana"> <B>Task 1</B></font>
#
# ## replace items in a list
# - create a list, **`three_num`**, containing 3 single digit integers
# - print three_num
# - check if index 0 value is < 5
# - if < 5 , replace index 0 with a string: "small"
# - else, replace index 0 with a string: "large"
# - print three_num
# +
# [ ] complete "replace items in a list" task
three_num = [3,6,9]
print(three_num)
if three_num[0] < 5:
    three_num[0] = "small"
else:
    three_num[0] = "large"
print(three_num)
# -
# ## Function Challenge: create replacement function
# - Create a function, **str_replace**, that takes 2 arguments: int_list and index
# - int_list is a list of single digit integers
# - index is the index that will be checked - such as with int_list[index]
# - Function replicates purpose of task "replace items in a list" above and replaces an integer with a string "small" or "large"
# - return int_list
#
# Test the function!
# +
# [ ] create challenge function
int_list = [3, 6, 9]
def str_replace(int_list, index):
if int_list[index] < 5:
int_list[index] = 'small'
else:
int_list[index] = 'large'
return int_list
str_replace(int_list, 0)
# -
# #
# <font size="6" color="#B24C00" face="verdana"> <B>Task 2</B></font>
#
# ## modify items in a list
# - create a list, **`three_words`**, containing 3 different capitalized word stings
# - print three_words
# - modify the first item in three_words to uppercase
# - modify the third item to swapcase
# - print three_words
# +
# [ ] complete coding task described above
three_words = ["Chair", "Desk", "Computer"]
print(three_words)
three_words[0] = three_words[0].upper()
three_words[2] = three_words[2].swapcase()
print(three_words)
# -
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font>
# ## Insert items into a list
# []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/659b9cd2-1e84-4ead-8a69-015c737577cd/Unit2_Section2.3b-Inserting_Items_into_Lists.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/659b9cd2-1e84-4ead-8a69-015c737577cd/Unit2_Section2.3b-Inserting_Items_into_Lists.vtt","srclang":"en","kind":"subtitles","label":"english"}])
# ### use `.insert()` to define an index to insert an item
# ```python
# party_list.insert(2,"Tobias")
# ```
# - Inserts, **doesn't overwrite**
# - **Increases index by 1**, at and above the insertion point
# - **Can** use to **Append** a new index to the end of the list
#
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font>
# +
# [ ] review and run example
# the list before Insert
party_list = ["Joana", "Alton", "Tobias"]
print("party_list before: ", party_list)
print("index 1 is", party_list[1], "\nindex 2 is", party_list[2], "\n")
# the list after Insert
party_list.insert(1,"Colette")
print("party_list after: ", party_list)
print("index 1 is", party_list[1], "\nindex 2 is", party_list[2], "\nindex 3 is", party_list[3])
# -
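# A minimal sketch (hypothetical guest list) tying the three points above together:
# `insert()` shifts existing items up by one instead of overwriting, and
# inserting at `len(list)` behaves like an append.

```python
guests = ["Joana", "Alton"]

# insert does not overwrite: "Alton" shifts from index 1 to index 2
guests.insert(1, "Colette")
print(guests)  # ['Joana', 'Colette', 'Alton']

# inserting at the end index appends
guests.insert(len(guests), "Tobias")
print(guests)  # ['Joana', 'Colette', 'Alton', 'Tobias']
```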
# #
# <font size="6" color="#B24C00" face="verdana"> <B>Task 3</B></font>
# ## `insert()` input into a list
# +
# [ ] insert a name from user input into the party_list in the second position (index 1)
party_list = ["Joana", "Alton", "Tobias"]
party_list.insert(1,input("enter name: "))
# [ ] print the updated list
print(party_list)
# -
# #
# <font size="6" color="#B24C00" face="verdana"> <B>Task 4</B></font>
# ## Fix The Error
#
# +
# [ ] Fix the Error
tree_list = ["oak"]
print("tree_list before =", tree_list)
tree_list.insert(1,"pine")
print("tree_list after =", tree_list)
# -
# [Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) © 2017 Microsoft
| Python Fundamentals/Module_2_3_Python_Fundamentals.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#Import libraries
import os, json, time, pandas_profiling, warnings
from pandas.io.json import json_normalize
import pandas as pd
import numpy as np
from datetime import date, datetime
import calendar
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import rcParams
#from docx import Document
#from docx.shared import Inches
#from mlxtend.frequent_patterns import apriori
#from mlxtend.frequent_patterns import association_rules
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
pd.set_option('display.float_format', lambda x: '%.2f' % x)
warnings.filterwarnings('ignore')
# %matplotlib inline
pd.set_option('display.max_columns', 500)
#distance plot - titles in plots
rcParams['axes.titlepad'] = 45
rcParams['font.size'] = 16
# + language="javascript"
# IPython.OutputArea.prototype._should_scroll = function(lines) {
# return false;
# }
# +
# Settings - possible values:
# complete
# merged
# no-outliers
# merged-no-outliers
# merged-no-outliers_quant_001
# merged-no-outliers_quant_002
analysis = 'merged-no-outliers_quant_001'
# if analysis == 'complete':
# legs = 'all_legs_final_ds_user_info.pkl'
# img_path = 'img/'
# report_name = 'Results_01.05_15.10.docx'
# elif analysis == 'merged':
# legs = 'all_legs_merged_1.pkl'
# img_path = 'img_merged/'
# report_name = 'Results_01.05_15.10_merged.docx'
# # elif analysis == 'no-outliers':
# # legs = 'all_legs_final_ds_user_info_no_outlier.pkl'
# # img_path = 'img_nooutliers/'
# # report_name = 'Results_01.05_30.07_nooutliers.docx'
# elif analysis == 'merged-no-outliers_quant_001':
legs = 'all_legs_merged_no_outlier_0.01.pkl'
img_path = 'img_merged_nooutliers/'
#report_name = 'Results_01.05_15.10_merged_nooutliers_0.01.docx'
# elif analysis == 'merged-no-outliers_quant_002':
# legs = 'all_legs_merged_no_outlier_quant_002.pkl'
# img_path = 'img_merged-no-outliers_quant_002/'
# report_name = 'Results_01.05_30.07_merged-no-outliers_quant_002.docx'
if not os.path.exists(img_path):
os.makedirs(img_path)
#Global variables
cutting_date = '2019-05-01' # remove trips and data published before this date
meta_data_path = '../../data-campaigns/meta-data/'
input_path = '../../out_2019.10.15/'
report_path = '../reports/'
# -
# ### Read data
#
# - `all_legs_final_ds_user_info`: all data about trips, legs and users
# - `trips_users_df`: match trip-user with date info
# - `trips_df`: original df with trip info
# - `values_from_trip`: for each leg the values for Productivity (paid work + personal tasks), Enjoyment, Fitness
#read pre-processed datasets
all_legs_final_ds_user_info = pd.read_pickle(input_path + legs)
trips_users_df = pd.read_pickle(input_path + 'trips_users_df.pkl')
# users_df_with_trips = pd.read_pickle(out_path + 'pre-processed_ds/users_df_with_trips.pkl')
trips_df = pd.read_pickle(input_path+'trips_df_geoinfo.pkl')
values_from_trip= pd.read_pickle(input_path + 'values_from_trip.pkl')
print(values_from_trip.shape)
values_from_trip.head()
# ### Preprocessing on `values_from_trip`
# +
# Available categories ['Paid_work', 'Personal_tasks', 'Enjoyment', 'Fitness', 'Unknown']
# remove unknown from the categories
tmp0 = values_from_trip[values_from_trip.valueFromTrip != 'Unknown']
### Create a new df with this structure:
# legid, Enjoyment, Fitness, Paid_work, Personal_tasks, wastedTime, Productivity
# select only column we need
tmp = tmp0[['legid', 'valueFromTrip', 'value']]
# create pivot table with this columns: legid, E, F, Pw, Pt
tmp2 = tmp.pivot(index='legid', columns='valueFromTrip', values= 'value').reset_index()
# add also WT column
tmp3 = pd.merge(tmp2, all_legs_final_ds_user_info[['legid', 'wastedTime']], on='legid', how='left')
# remove rows with NAN in WT
tmp4 = tmp3[tmp3.wastedTime.notna()]
# select values of WT in [1,5]
tmp5 = tmp4[tmp4.wastedTime.between(1,5)]
# convert WT to a numeric variable and round all values to int
tmp5.wastedTime = pd.to_numeric(tmp5.wastedTime)
tmp5.wastedTime = np.round(tmp5.wastedTime).astype(int)
# merge Paid_work and Personal_tasks into Productivity
# (!!) considering the MAXIMUM value
tmp5['Productivity'] =tmp5[['Paid_work', 'Personal_tasks']].max(axis=1)
values_from_trip2 = tmp5.copy()
print('Final shape:', values_from_trip2.shape)
values_from_trip2.head()
# -
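# A small sketch of the `pivot` step above on hypothetical long-format data:
# one row per (legid, category) pair becomes one row per legid with one
# column per category.

```python
import pandas as pd

# hypothetical long-format data mirroring values_from_trip
long_df = pd.DataFrame({
    'legid':         ['a', 'a', 'b', 'b'],
    'valueFromTrip': ['Enjoyment', 'Fitness', 'Enjoyment', 'Fitness'],
    'value':         [1, 0, 0, 1],
})

# one row per legid, one column per category
wide_df = long_df.pivot(index='legid', columns='valueFromTrip',
                        values='value').reset_index()
print(wide_df)
```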
# save
values_from_trip2.to_csv('values_from_trip2.csv')
test= values_from_trip2[(values_from_trip2['Enjoyment']==0)&
(values_from_trip2['Fitness']==0)&
(values_from_trip2['Productivity']==0)].groupby('wastedTime').size().reset_index()
test.columns = ['wastedTime','#leg000']
test
# +
import mord
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error as mse
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
X = values_from_trip2[['Enjoyment', 'Fitness', 'Productivity']]
y = values_from_trip2['wastedTime']
mul_lr = mord.OrdinalRidge(alpha=0.001,
fit_intercept=True,
normalize=False,
copy_X=True,
max_iter=None,
tol=0.001,
solver='auto').fit(X, y)
mul_lr.coef_
values_from_trip2['pred'] = mul_lr.predict(X)
values_from_trip2[values_from_trip2['wastedTime'] == 1].head(10)
values_from_trip2['pred'].unique()
# -
# ## Correlation and Association analysis
#
# 1. Distribution of all the variables
# 2. Conditional distribution of PEF wrt WT
# 3. Average of WT wrt PEF sum
# 4. Chi-squared association and Cramer's V - each of PEF wrt WT
# 5. Comparison on average WT versus PEF
# ***Distribution of all the variables***
## Distribution of Wasted Time variable - relative and absolute frequencies
tmp = pd.DataFrame(values_from_trip2.wastedTime.value_counts())
tmp['rel_wastedTime'] = values_from_trip2.wastedTime.value_counts()/len(values_from_trip2)
tmp
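# Side note: `value_counts(normalize=True)` is equivalent to dividing the
# absolute counts by the series length, as done above (sketch with
# hypothetical ratings).

```python
import pandas as pd

s = pd.Series([1, 1, 2, 3, 3, 3])
abs_freq = s.value_counts()                 # absolute frequencies
rel_freq = s.value_counts(normalize=True)   # relative frequencies
print((abs_freq / len(s)).equals(rel_freq))  # True
```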
# +
# ## General distributions of variables
# from matplotlib import rcParams
# rcParams['axes.titlepad'] =5
# fig, axs = plt.subplots(2,3, figsize=(15,7))
# plt.subplots_adjust(top=1)
# for idx,ax in list(enumerate(axs.flat)):
# print(idx)
# col_name = list(values_from_trip2.columns)[idx+1]
# col_name
# +
## General distributions of variables
from matplotlib import rcParams
rcParams['axes.titlepad'] =5
fig, axs = plt.subplots(2,3, figsize=(15,7))
plt.subplots_adjust(top=1)
for idx,ax in list(enumerate(axs.flat)):
col_name = list(values_from_trip2.columns)[idx+1]
weights = np.zeros_like(values_from_trip2.iloc[:,idx+1]) + 1. / len(values_from_trip2.iloc[:,idx+1])
ax.hist(values_from_trip2.iloc[:,idx+1], weights= weights)
ax.set_title(col_name)
ax.set_xticks(range(len(values_from_trip2.iloc[:,idx+1].unique())))
if col_name == 'wastedTime':
ax.set_xticks(range(1, len(values_from_trip2.iloc[:,idx+1].unique())+1))
ax.set_xlim(left=1)
# -
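# The weights trick used above, in isolation: giving every sample a weight
# of 1/n makes the histogram bar heights relative frequencies that sum
# to 1 (hypothetical data).

```python
import numpy as np

data = np.array([1, 1, 2, 3, 3, 3, 4, 5])
# each of the n samples contributes 1/n to its bin
weights = np.zeros_like(data, dtype=float) + 1.0 / len(data)

counts, edges = np.histogram(data, bins=5, weights=weights)
print(counts.sum())  # 1.0
```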
# ***Conditional distribution of PEF wrt WT***
cond_plot = sns.FacetGrid(data=values_from_trip2, col='wastedTime', sharey=False) #, hue='CentralAir', col_wrap=4)
cond_plot.map(plt.hist, 'Enjoyment');
cond_plot = sns.FacetGrid(data=values_from_trip2, col='wastedTime', sharey=False)
cond_plot.map(plt.hist, 'Fitness');
cond_plot = sns.FacetGrid(data=values_from_trip2, col='wastedTime', sharey=False)
cond_plot.map(plt.hist, 'Productivity');
# ***Average of WT wrt PEF sum***
# +
# add the sum
values_from_trip2['PEF'] = values_from_trip2[['Enjoyment', 'Fitness', 'Productivity']].sum(axis=1)
# select only columns we need, group by PEF sum and make the mean of WT
pef_sum = values_from_trip2[['legid', 'PEF', 'wastedTime']].groupby('PEF').mean()
pef_sum
### Interpretation: legs with sum of Enjoyment, Fitness and Productivity equal to 0
# have 3 as wastedTime *on average*.
# -
# ***Chi-squared association and Cramer's V***
#
# Evaluate the association between:
# - Enjoyment and wastedTime
# - Fitness and wastedTime
# - Productivity and wastedTime
#
# Ref: https://en.wikipedia.org/wiki/Cram%C3%A9r%27s_V
#
# Cramer's V:
# - 0: no association
# - 1: complete association
# +
from scipy.stats import chi2_contingency
def cramer_v(tab):
chi2 = chi2_contingency(tab)[0]
n = sum(tab.sum())
phi2 = chi2/n
r,k = tab.shape
return(np.sqrt(phi2 / min( (k-1), (r-1))))
CV_enj = cramer_v(pd.crosstab(values_from_trip2.wastedTime, values_from_trip2.Enjoyment))
CV_fit = cramer_v(pd.crosstab(values_from_trip2.wastedTime, values_from_trip2.Fitness))
CV_pro = cramer_v(pd.crosstab(values_from_trip2.wastedTime, values_from_trip2.Productivity))
print("Cramer's V")
print('E:', CV_enj, ' - F:', CV_fit, ' - P:', CV_pro)
print()
print('chi squared test')
print('E:', chi2_contingency(pd.crosstab(values_from_trip2.wastedTime, values_from_trip2.Enjoyment))[1],
' - F:', chi2_contingency(pd.crosstab(values_from_trip2.wastedTime, values_from_trip2.Fitness))[1],
' - P:', chi2_contingency(pd.crosstab(values_from_trip2.wastedTime, values_from_trip2.Productivity))[1])
### Interpretation:
# There is about 30% association between Enjoyment and wastedTime
## Chi-squared test of independence
# H0: the two variables are independent (no association)
# H1: the two variables are associated
# the p-values are small, so we reject the null hypothesis:
# each of P, E, F is significantly associated with wastedTime
# -
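# Sanity check of the Cramer's V helper above on two extreme hypothetical
# tables: a diagonal contingency table (complete association) and a table
# with identical rows (independence).

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

# re-stating the helper so this sketch is self-contained
def cramer_v(tab):
    chi2 = chi2_contingency(tab)[0]
    n = sum(tab.sum())
    phi2 = chi2 / n
    r, k = tab.shape
    return np.sqrt(phi2 / min((k - 1), (r - 1)))

perfect = pd.DataFrame(np.diag([5, 5, 5]))          # complete association
independent = pd.DataFrame([[2, 2, 2], [2, 2, 2]])  # no association

print(cramer_v(perfect))      # 1.0
print(cramer_v(independent))  # 0.0
```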
# ***Comparison on average WT versus PEF***
values_from_trip2.pivot_table(index='wastedTime', values='Enjoyment', aggfunc='mean')
# legs with wastedTime equal to 1,2 have *on average* 0 for Enjoyment
# legs with wastedTime equal to 3,4,5 have *on average* 1 for Enjoyment
values_from_trip2.pivot_table(index='wastedTime', values='Fitness', aggfunc='mean')
# legs with wastedTime equal to 1,2,3 have *on average* 0 for Fitness
# legs with wastedTime equal to 4,5 have *on average* 1 for Fitness
np.round(values_from_trip2.pivot_table(index='wastedTime', values='Productivity', aggfunc='mean'))
# legs with wastedTime equal to 1,2 have *on average* 0 for Productivity
# legs with wastedTime equal to 3,4,5 have *on average* 1 for Productivity
values_from_trip2[['Enjoyment', 'Fitness', 'Productivity', 'wastedTime']].groupby('wastedTime').mean()
# legs with wastedTime equal to 1 have *on average* 0 for PEF
# legs with wastedTime equal to 2 have *on average* 0 for PEF
# legs with wastedTime equal to 3 have *on average* 0 for F and 1 for PE
# legs with wastedTime equal to 4 have *on average* 1 for PEF
# legs with wastedTime equal to 5 have *on average* 1 for PEF
# ### Example: Walking dataset
#
# Considering only legs with `transp_category` equal to `walking`
transp_cat = 'Walking'
x = all_legs_final_ds_user_info[['legid', 'transp_category']]
trasnp = pd.merge(values_from_trip2, x, on='legid', how='left')
print(trasnp.transp_category.unique())
trasnp = trasnp[trasnp.transp_category == transp_cat]
trasnp.head(3)
df = trasnp[['Enjoyment', 'Fitness', 'Productivity', 'wastedTime']].melt('wastedTime', var_name='element', value_name='Val')
df.head()
df1 = df.groupby(['wastedTime','element','Val']).size().reset_index()
df1.columns = ['wastedTime','element','Val','freq']
df1.head()
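# The `melt` step above on a hypothetical wide frame: `wastedTime` stays as
# the id column while the value columns stack into (element, Val) pairs.

```python
import pandas as pd

wide = pd.DataFrame({'wastedTime': [1, 2],
                     'Enjoyment':  [0, 1],
                     'Fitness':    [1, 0]})

# two value columns x two rows -> four long-format rows
long = wide.melt('wastedTime', var_name='element', value_name='Val')
print(long)
```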
# +
fig, axs = plt.subplots(1,5, figsize=(15,7))
# plt.subplots_adjust(top=1)
for idx,ax in list(enumerate(axs.flat)):
plt.subplot(1, 5, idx+1)
ax = plt.gca()
sns.barplot(data = df1[df1['wastedTime']==idx+1], x="element", y='freq', hue='Val').set(
xlabel='wastedTime',
ylabel = 'Freq' )
plt.title('WastedTime ' + str(idx+1), y=1.)
plt.tight_layout()
# -
df1[df1['wastedTime']==1]
# +
# cond_plot = sns.FacetGrid(data=df1, col='wastedTime', hue='element', sharey=False) #, hue='CentralAir', col_wrap=4)
# cond_plot.map(sns.barplot, "Val", 'freq').add_legend()
# +
# cond_plot = sns.FacetGrid(data=trasnp, col='wastedTime', sharey=False) #, hue='CentralAir', col_wrap=4)
# cond_plot.map(plt.hist, 'Fitness');
# +
# cond_plot = sns.FacetGrid(data=trasnp, col='wastedTime', sharey=False) #, hue='CentralAir', col_wrap=4)
# cond_plot.map(plt.hist, 'Productivity');
# -
trasnp[['Enjoyment', 'Fitness', 'Productivity', 'wastedTime']].groupby('wastedTime').mean()
# legs with wastedTime equal to 1 have *on average* 0 for PEF
y
# +
import mord
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error as mse
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
X = trasnp[['Enjoyment', 'Fitness', 'Productivity']]
y = trasnp['wastedTime']
mul_lr = mord.OrdinalRidge(alpha=0.001,
fit_intercept=True,
normalize=False,
copy_X=True,
max_iter=None,
tol=0.001,
solver='auto').fit(X, y)
print('Coefficients: ', mul_lr.coef_)
trasnp['pred'] = mul_lr.predict(X)
trasnp[trasnp['wastedTime'] == 1].head(10)
trasnp['pred'].unique()
# -
x = all_legs_final_ds_user_info[['legid', 'transp_category']]
df_0 = pd.merge(values_from_trip2, x, on='legid', how='left')
df_0.head()
df_0 = df_0[(df_0['Enjoyment'] == 0) & (df_0['Fitness'] == 0) & (df_0['Productivity'] == 0)]
df_0.head()
df_0.groupby('wastedTime').size()
| worthwhileness_index/correlation_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import time
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import astropy as ay
import astropy.units as ay_u
import astropy.coordinates as ay_coord
import magnetic_field_functions_2d as mff2d
import magnetic_field_functions_3d as mff3d
import model_observing as m_obs
import data_systematization as d_systize
# # %load_ext line_profiler
# # %matplotlib notebook
# +
def mag_field(x,y,z):
h = 0.3257
k_array = [0.9549,0.4608,0.6320]
disk_radius = 3.8918
uniform_B0 = 3.3118
return mff3d.hourglass_magnetic_field_cart_3d(x, y, z,
h, k_array, disk_radius, uniform_B0,
center=[0, 0, 0])
def cloud_eq(x,y,z):
radius = 1
return x**2 + y**2 + z**2 - radius
def test_field(x,y,z):
return 0,1,0
center_coord = ay_coord.SkyCoord('20h40m00.00s','42d00m00.00s',frame='icrs')
sight_coord = ay_coord.SkyCoord('20h40m00.00s','42d10m00.00s',frame='icrs')
field_of_view = 0.01745329
print(center_coord)
n_zeros = 10
# -
# target_object = m_obs.ProtostarModel(center_coord,cloud_eq,test_field,ra_wrap_angle=np.pi)
# target_sightline = m_obs.Sightline(None,None,SkyCoord_object=center_coord,ra_wrap_angle=np.pi)
#
#
# telescope_data = m_obs.ObservingRun(target_object,target_sightline,3)
# _ = telescope_data.Stokes_parameter_contours(n_samples=100) # Res is n_samples
# ## Multiple Plots
# +
def cloud_eq_0(x,y,z):
radius = 0.01**2
return x**2 + y**2 + z**2 - radius
def test_field_0(x,y,z):
return np.zeros_like(x),np.ones_like(x),np.zeros_like(x)
def intensity_field_0(x,y,z):
return x+y+z
target_object_0 = m_obs.ProtostarModel(center_coord,cloud_eq_0,test_field_0,density_model=intensity_field_0,
zeros_guess_count=n_zeros)
target_sightline_0 = m_obs.Sightline(None,None,SkyCoord_object=sight_coord)
telescope_data_0 = m_obs.ObservingRun(target_object_0,target_sightline_0,field_of_view)
start_time = time.time()
_ = telescope_data_0.Stokes_parameter_contours(n_axial_samples=50)
end_time = time.time()
print('Delta time : ', end_time - start_time)
# -
a,b = _
print(min(b[2])) # Was 821
# +
def cloud_eq_45(x,y,z):
radius = 0.01**2
return x**2 + y**2 + z**2 - radius
def test_field_45(x,y,z):
return 0,1,1
def intensity_field_45(x,y,z):
return 100
def polar_model(x,y,z):
return 1
target_object_45 = m_obs.ProtostarModel(center_coord,cloud_eq_45,test_field_45,density_model=intensity_field_45,
zeros_guess_count=n_zeros,polarization_model=polar_model)
target_sightline_45 = m_obs.Sightline(None,None,SkyCoord_object=center_coord)
telescope_data_45 = m_obs.ObservingRun(target_object_45,target_sightline_45,field_of_view)
start_time = time.time()
_ = telescope_data_45.Stokes_parameter_contours(n_axial_samples=50)
end_time = time.time()
print('Delta time : ', end_time - start_time)
# +
def test_field_45_t(x,y,z):
return np.zeros_like(x),np.ones_like(x),np.ones_like(x)
def intensity_field_45_table(x,y,z):
return 2*x**2 + 2*y**2 + 2*z**2
x_tab_val = np.random.uniform(-2,2,3000)
y_tab_val = np.random.uniform(-2,2,3000)
z_tab_val = np.random.uniform(-2,2,3000)
ans_tab_val = intensity_field_45_table(x_tab_val,y_tab_val,z_tab_val)
intensity_table = d_systize.InterpolationTable(x_tab_val,y_tab_val,z_tab_val,'scalar',scalar_ans=ans_tab_val)
target_object_45 = m_obs.ProtostarModel(center_coord,cloud_eq,test_field_45_t,density_model=intensity_table,
zeros_guess_count=n_zeros,polarization_model=1)
target_sightline_45 = m_obs.Sightline(None,None,SkyCoord_object=center_coord,ra_wrap_angle=np.pi)
telescope_data_45 = m_obs.ObservingRun(target_object_45,target_sightline_45,3)
start_time = time.time()
_ = telescope_data_45.Stokes_parameter_contours(n_axial_samples=50)
end_time = time.time()
print('Delta time : ', end_time - start_time)
# +
def test_field_90(x,y,z):
return 0,1,0
def cloud_eq_x(x,y,z):
R = 1
r = 0.5
return (np.sqrt(y**2 + z**2) - R)**2 + x**2 - r**2
x_tab_val = np.random.uniform(-2,2,3000)
y_tab_val = np.random.uniform(-2,2,3000)
z_tab_val = np.random.uniform(-2,2,3000)
Bfield_table = d_systize.InterpolationTable(x_tab_val,y_tab_val,z_tab_val,'vector',
x_vector_ans=np.zeros_like(x_tab_val),
y_vector_ans=np.ones_like(x_tab_val),
z_vector_ans=np.zeros_like(x_tab_val))
target_object_90 = m_obs.ProtostarModel(center_coord,cloud_eq_x,test_field_90,
zeros_guess_count=n_zeros,ra_wrap_angle=np.pi)
target_sightline_90 = m_obs.Sightline(None,None,SkyCoord_object=center_coord,ra_wrap_angle=np.pi)
telescope_data_90 = m_obs.ObservingRun(target_object_90,target_sightline_90,3)
start_time = time.time()
_ = telescope_data_90.Stokes_parameter_contours(n_axial_samples=50)
end_time = time.time()
print('Delta time : ', end_time - start_time)
# +
def test_field_135(x,y,z):
return 0,-1,1
def cloud_eq_y(x,y,z):
R = 1
r = 0.5
return (np.sqrt(x**2 + z**2) - R)**2 + y**2 - r**2
target_object_135 = m_obs.ProtostarModel(center_coord,cloud_eq_y,test_field_135,
zeros_guess_count=n_zeros,ra_wrap_angle=np.pi)
target_sightline_135 = m_obs.Sightline(None,None,SkyCoord_object=center_coord,ra_wrap_angle=np.pi)
telescope_data_135 = m_obs.ObservingRun(target_object_135,target_sightline_135,3)
start_time = time.time()
_ = telescope_data_135.Stokes_parameter_contours(n_axial_samples=50)
end_time = time.time()
print('Delta time : ', end_time - start_time)
# +
def test_field_180(x,y,z):
return 0,-1,0
def cloud_eq_z(x,y,z):
R = 1
r = 0.5
return (np.sqrt(x**2 + y**2) - R)**2 + z**2 - r**2
target_object_180 = m_obs.ProtostarModel(center_coord,cloud_eq_z,test_field_180,
zeros_guess_count=n_zeros,ra_wrap_angle=np.pi)
target_sightline_180 = m_obs.Sightline(None,None,SkyCoord_object=center_coord,ra_wrap_angle=np.pi)
telescope_data_180 = m_obs.ObservingRun(target_object_180,target_sightline_180,3)
start_time = time.time()
_ = telescope_data_180.Stokes_parameter_contours(n_axial_samples=50)
end_time = time.time()
print('Delta time : ', end_time - start_time)
# -
# ## Talk Plots
# +
def test_field_hg(x,y,z):
return mag_field(x,y,z)
def cloud_eq_hg(x,y,z):
#R = 1
#r = 0.5
#return (np.sqrt(x**2 + y**2) - R)**2 + z**2 - r**2
return x**2 + y**2 + z**2 - 4
target_object_hg = m_obs.ProtostarModel(center_coord,cloud_eq_hg,test_field_hg,
zeros_guess_count=n_zeros,ra_wrap_angle=np.pi)
target_sightline_hg = m_obs.Sightline(None,None,SkyCoord_object=center_coord,ra_wrap_angle=np.pi)
telescope_data_hg = m_obs.ObservingRun(target_object_hg,target_sightline_hg,3)
start_time = time.time()
_ = telescope_data_hg.Stokes_parameter_contours(n_axial_samples=50)
end_time = time.time()
print('Delta time : ', end_time - start_time)
# +
def cloud_eq_cr(x,y,z):
return x**2 + y**2 + z**2 - 4
def test_field_cr(x,y,z):
return mff3d.circular_magnetic_field_cart_3d(x, y, z,
center=[0, 0, 0],
mag_function=lambda r: 1/r**2,
curl_axis='x')
target_object_cr = m_obs.ProtostarModel(center_coord,cloud_eq_cr,test_field_cr,
zeros_guess_count=n_zeros,ra_wrap_angle=np.pi)
target_sightline_cr = m_obs.Sightline(None,None,SkyCoord_object=center_coord,ra_wrap_angle=np.pi)
telescope_data_cr = m_obs.ObservingRun(target_object_cr,target_sightline_cr,3)
start_time = time.time()
_ = telescope_data_cr.Stokes_parameter_contours(n_axial_samples=50)
end_time = time.time()
print('Delta time : ', end_time - start_time)
# +
def test_field_rad(x,y,z):
return mff3d.monopole_magnetic_field_cart_3d(x, y, z,
center = [0,0,0],
mag_function=lambda r: 1/r**2)
def cloud_eq_rad(x,y,z):
#R = 1
#r = 0.5
#return (np.sqrt(x**2 + y**2) - R)**2 + z**2 - r**2
return x**2 + y**2 + z**2 - 4
def density_rad(x,y,z):
r = np.sqrt(x**2 + y**2 + z**2)
r_max = 2
R_val = 3.8918
return r_max / (1 + r**2 / R_val**2)
def polarization_rad(x,y,z):
    # A Wien-approximation-like function
a = 10
b = 5
n = 2
r = np.sqrt(x**2 + y**2 + z**2)
return a * r**n * np.exp(-b * r)
target_object_rad = m_obs.ProtostarModel(center_coord,cloud_eq_rad,test_field_rad,
density_model=density_rad, polarization_model=polarization_rad,
zeros_guess_count=n_zeros,ra_wrap_angle=np.pi)
target_sightline_rad = m_obs.Sightline(None,None,SkyCoord_object=center_coord,ra_wrap_angle=np.pi)
telescope_data_rad = m_obs.ObservingRun(target_object_rad,target_sightline_rad,3)
start_time = time.time()
_ = telescope_data_rad.Stokes_parameter_contours(n_axial_samples=50)
end_time = time.time()
print('Delta time : ', end_time - start_time)
# +
def cloud_eq_comb(x,y,z):
return x**2 + y**2 + z**2 - 4
def test_field_comb(x,y,z):
hg = 1
cr = 1
rad = 1
mag_funt = lambda r: 1/r**2
norm = hg+cr+rad
return ((hg/norm)*np.array(test_field_hg(x,y,z))
+ (cr/norm)*np.array(test_field_cr(x,y,z))
+ (rad/norm)*np.array(test_field_rad(x,y,z)))
target_object_comb = m_obs.ProtostarModel(center_coord,cloud_eq_comb,test_field_comb,
#density_model=density_comb, polarization_model=polarization_comb,
zeros_guess_count=n_zeros,ra_wrap_angle=np.pi)
target_sightline_comb = m_obs.Sightline(None,None,SkyCoord_object=center_coord,ra_wrap_angle=np.pi)
telescope_data_comb = m_obs.ObservingRun(target_object_comb,target_sightline_comb,3)
start_time = time.time()
_ = telescope_data_comb.Stokes_parameter_contours(n_axial_samples=50)
end_time = time.time()
print('Delta time : ', end_time - start_time)
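# The weighted-combination idea above, sketched with two toy uniform fields
# (hypothetical stand-ins for the hourglass/circular/radial fields): each
# component field is scaled by its weight over the total weight.

```python
import numpy as np

def field_a(x, y, z):
    # uniform field along y
    return np.zeros_like(x), np.ones_like(y), np.zeros_like(z)

def field_b(x, y, z):
    # uniform field along z
    return np.zeros_like(x), np.zeros_like(y), np.ones_like(z)

def combine(x, y, z, w_a=1.0, w_b=1.0):
    # normalized linear combination of the two vector fields
    norm = w_a + w_b
    return ((w_a / norm) * np.array(field_a(x, y, z))
            + (w_b / norm) * np.array(field_b(x, y, z)))

x = y = z = np.zeros(3)
bx, by, bz = combine(x, y, z)
print(by, bz)  # each 0.5 everywhere
```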
# +
def cloud_eq_comps(x,y,z):
return x**2 + y**2 + z**2 - 4
keyword_parameters = {'h' : 0.3257,
'k_array' : [0.9549,0.4608,0.6320],
'disk_radius' : 3.8918,
'uniform_B0' : 3.3118,
'center' : [0,0,0],
'mag_function' : lambda r: 1/r**2,
'curl_axis':'x'}
funt_list = [mff3d.hourglass_magnetic_field_cart_3d,mff3d.circular_magnetic_field_cart_3d,mff3d.monopole_magnetic_field_cart_3d]
contri = [1,1,1]
composite_funt = mff3d.linear_combination_magnetic_field(funt_list,contri,**keyword_parameters)
target_object_comps = m_obs.ProtostarModel(center_coord,cloud_eq_comps,composite_funt,
#density_model=density_comps, polarization_model=polarization_comps,
zeros_guess_count=n_zeros,ra_wrap_angle=np.pi)
target_sightline_comps = m_obs.Sightline(None,None,SkyCoord_object=center_coord,ra_wrap_angle=np.pi)
telescope_data_comps = m_obs.ObservingRun(target_object_comps,target_sightline_comps,3)
start_time = time.time()
_ = telescope_data_comps.Stokes_parameter_contours(n_axial_samples=50)
end_time = time.time()
print('Delta time : ', end_time - start_time)
# -
# ## Big Braking!
# +
# # %lprun -f telescope_data.Stokes_parameter_contours telescope_data.Stokes_parameter_contours(n_samples=10000)
#axes,stokes = telescope_data.Stokes_parameter_contours(n_samples=10000)
# +
# Speckle testing
#axes,stokes = telescope_data.Stokes_parameter_contours(n_samples=10000)
# -
# ## Testing!
test_coord = ay_coord.SkyCoord('12h00m01.00s','00d00m00.00s',frame='icrs')
print(test_coord.ra,test_coord.dec)
print(test_coord.ra.wrap_angle)
test_coord.ra.wrap_angle = ay_u.rad * np.pi
print(test_coord.ra,test_coord.dec)
print(test_coord.ra.wrap_angle)
# +
import numpy as np
import astropy as ap
import astropy.coordinates as ap_coord
import model_observing as m_obs
if (True):
# Making the coordinate input, should be an Astropy SkyCoord class.
sky_coordinates = ap_coord.SkyCoord('00h00m00.00s','00d00m00.00s',frame='icrs')
# Making a cloud function, a sphere in this case. Note that the units
# are in angular space, and thus the unit of circle is radians.
def cloud_equation(x,y,z):
radius = 0.1
return x**2 + y**2 + z**2 - radius**2
# Making a magnetic field that is uniform in one direction. Consider a
# field that is always 0i + 1j + 2k.
def magnetic_field(x,y,z):
Bfield_x = np.zeros_like(x)
Bfield_y = np.ones_like(y)
Bfield_z = np.full_like(z,2)
return Bfield_x,Bfield_y,Bfield_z
# Making a density function of a 1/r**2 profile.
def density_function(x,y,z):
        density = 1/np.dot([x,y,z],[x,y,z])
        # The above line is a faster implementation of the following.
        # density = 1/(x**2 + y**2 + z**2)
return density
# Making a polarization function of a 1/r**2 profile.
def polarization_function(x,y,z):
        polarization = 1/np.dot([x,y,z],[x,y,z])
        # The above line is a faster implementation of the following.
        # polarization = 1/(x**2 + y**2 + z**2)
return polarization
# Create the protostar class.
protostar = m_obs.ProtostarModel(sky_coordinates,
cloud_equation,
magnetic_field,
density_function,
polarization_function)
# Making the SkyCoord class with a RA of 20h40m00.00s and a
# DEC of 42d10m00.00s
sky_coordinates = ap_coord.SkyCoord('00h00m00.00s','00d00m00.00s',
frame='icrs')
# Creating the Sightline class using the SkyCoord class.
sightline = m_obs.Sightline(None, None, SkyCoord_object=sky_coordinates)
# Define the field of view of the observation, in radians as the total
# length of the observation square.
field_of_view = 0.015
observing_run = m_obs.ObservingRun(protostar,sightline,field_of_view)
# Decide on the resolution of the data observed, this sets the number of
# data points on one axis.
axis_resolution = 30
results = observing_run.Stokes_parameter_contours(
n_axial_samples=axis_resolution)
# -
print(results[1])
| Codebase/Active_Codebase/contour_plots.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# +
import pandas as pd
import numpy as np
data_click = pd.read_csv('../train_preliminary/click_log.csv')# click_log
data_user = pd.read_csv('../train_preliminary/user.csv') # user
data_ad = pd.read_csv('../train_preliminary/ad.csv') # ad
data_click = data_click.merge(data_ad,on = 'creative_id',how = 'left')
del data_ad
# Extract features from the product_category (ad category) field
industry_click = data_click[['user_id', 'product_category', 'click_times']].sort_values(by = 'user_id')
# industry_click = industry_click[data_click['product_category']!='\\N']
industry_click = industry_click.groupby(['user_id','product_category']).agg({'click_times':sum})
industry_click = industry_click.reset_index()
# def func_log(x):
# return np.log(x+1)
# industry_click['product_category'+'_log'] = industry_click['click_times'].transform(func_log)
# Extract each user's top 3 categories by click count, with their clicks
head_x = industry_click.sort_values(['click_times'],ascending=False).groupby(['user_id']).head(3)
head_x = head_x.sort_values('user_id')
del industry_click
def fun1(x):
x = list(x.values.reshape([-1]))[:6]
x = x[:6]+[0]*(6-len(x))
return pd.DataFrame([x])
tops = head_x.groupby('user_id')[['product_category','click_times']].apply(fun1)
columns = []
for i in range(6):
columns.append('product_category'+str(i))
tops.columns = columns
tops = tops.reset_index()
tops = tops.drop(['level_1'],axis = 1)
tops.to_csv('product_category_feat.csv',index=False)
del tops
# -
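# The top-k extraction and padding above, sketched on hypothetical click
# data: keep each user's two most-clicked categories, then flatten each
# group to a fixed-width row padded with zeros.

```python
import pandas as pd

clicks = pd.DataFrame({'user_id':  [1, 1, 1, 2],
                       'category': [10, 20, 30, 10],
                       'clicks':   [5, 9, 1, 4]})

# within each user, keep the two rows with the most clicks
top2 = (clicks.sort_values(['user_id', 'clicks'], ascending=[True, False])
              .groupby('user_id').head(2))

def flatten(g, width=4):
    # interleave (category, clicks) pairs and zero-pad to a fixed width
    vals = list(g[['category', 'clicks']].values.reshape(-1))[:width]
    return pd.DataFrame([vals + [0] * (width - len(vals))])

flat = top2.groupby('user_id').apply(flatten).reset_index(level=0)
print(flat)
```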
click_feat = pd.read_csv('clicks_feat.csv')
category_feat = pd.read_csv('category_feat_addlog.csv')
industry_feat = pd.read_csv('industry_feat.csv')
# Merge the features together
features = click_feat.merge(category_feat,on='user_id',how='left').merge(industry_feat,on='user_id',how='left')
# Add the labels
user = pd.read_csv('../../data/train_preliminary/user.csv')
data = features.merge(user,on='user_id',how='left')
# del data['user_id']
# Add the train_cat_max_id_feat feature
def func_cat (x):
x = x[['category1','category2', 'category3', 'category4', 'category5', 'category6','category7', 'category8', 'category9', 'category10', 'category11','category12', 'category13', 'category14', 'category15', 'category16','category17', 'category18']]
d = {}
d['cat_max_id'] = [np.argmax(x.values)+1]
return pd.DataFrame(d)
cat_max_id_feat = data.groupby('user_id').apply(func_cat)
cat_max_id_feat.reset_index(level=0, inplace=True)
cat_max_id_feat.index = range(len(cat_max_id_feat))
cat_max_id_feat.to_csv('train_cat_max_id_feat.csv',index=False)
data = data.merge(cat_max_id_feat,on='user_id',how='left')
data.to_csv('train_feat_user.csv',index=False)
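# The `func_cat` idea above in isolation, on a hypothetical single-user row:
# `np.argmax` over the category columns gives the 0-based position of the
# largest count, and +1 turns it into a 1-based category id.

```python
import numpy as np
import pandas as pd

row = pd.DataFrame({'category1': [2], 'category2': [7], 'category3': [1]})
cat_max_id = np.argmax(row.values) + 1
print(cat_max_id)  # 2
```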
| stastic_feats/sta_feat_category.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Get shapefiles from OpenStreetMap with OSMnx
#
# - [Overview of OSMnx](http://geoffboeing.com/2016/11/osmnx-python-street-networks/)
# - [GitHub repo](https://github.com/gboeing/osmnx)
# - [Examples, demos, tutorials](https://github.com/gboeing/osmnx-examples)
# - [Documentation](https://osmnx.readthedocs.io/en/stable/)
# - [Journal article/citation](http://geoffboeing.com/publications/osmnx-complex-street-networks/)
import osmnx as ox
# %matplotlib inline
ox.config(log_console=True, use_cache=True)
ox.__version__
# ## Get the shapefile for one city, project it, display it, and save it
# from some place name, create a GeoDataFrame containing the geometry of the place
city = ox.gdf_from_place('Walnut Creek, California, USA')
city
# save the retrieved data as a shapefile
ox.save_gdf_shapefile(city)
# project the geometry to the appropriate UTM zone (calculated automatically) then plot it
city = ox.project_gdf(city)
fig, ax = ox.plot_shape(city)
# ## Create a shapefile for multiple cities, project it, display it, and save it
# define a list of place names
place_names = ['Berkeley, California, USA',
'Oakland, California, USA',
'Piedmont, California, USA',
'Emeryville, California, USA',
'Alameda, Alameda County, CA, USA']
# create a GeoDataFrame with rows for each place in the list
east_bay = ox.gdf_from_places(place_names, gdf_name='east_bay_cities')
east_bay
# project the geometry to the appropriate UTM zone then plot it
east_bay = ox.project_gdf(east_bay)
fig, ax = ox.plot_shape(east_bay)
# save the retrieved and projected data as a shapefile
ox.save_gdf_shapefile(east_bay)
# ## You can also construct buffered spatial geometries
# pass in buffer_dist in meters
city_buffered = ox.gdf_from_place('Walnut Creek, California, USA', buffer_dist=250)
fig, ax = ox.plot_shape(city_buffered)
# you can buffer multiple places in a single query
east_bay_buffered = ox.gdf_from_places(place_names, gdf_name='east_bay_cities', buffer_dist=250)
fig, ax = ox.plot_shape(east_bay_buffered, alpha=0.7)
# ## You can download boroughs, counties, states, or countries too
#
# Notice the polygon geometries represent political boundaries, not physical/land boundaries.
gdf = ox.gdf_from_place('Manhattan Island, New York, New York, USA')
gdf = ox.project_gdf(gdf)
fig, ax = ox.plot_shape(gdf)
gdf = ox.gdf_from_place('Cook County, Illinois, United States')
gdf = ox.project_gdf(gdf)
fig, ax = ox.plot_shape(gdf)
gdf = ox.gdf_from_place('Iowa')
gdf = ox.project_gdf(gdf)
fig, ax = ox.plot_shape(gdf)
gdf = ox.gdf_from_places(['United Kingdom', 'Ireland'])
gdf = ox.project_gdf(gdf)
fig, ax = ox.plot_shape(gdf)
# ## Be careful to pass the right place name that OSM needs
#
# Be specific and explicit, and sanity check the results. The function logs a warning if you get a point returned instead of a polygon. In the first example below, OSM resolves 'Melbourne, Victoria, Australia' to a single point at the center of the city. In the second example below, OSM correctly resolves 'City of Melbourne, Victoria, Australia' to the entire city and returns its polygon geometry.
melbourne = ox.gdf_from_place('Melbourne, Victoria, Australia')
melbourne = ox.project_gdf(melbourne)
type(melbourne['geometry'].iloc[0])
melbourne = ox.gdf_from_place('City of Melbourne, Victoria, Australia')
melbourne = ox.project_gdf(melbourne)
fig, ax = ox.plot_shape(melbourne)
# ## Specify that you want a *country* if the query resolves to a *city* of the same name
# OSM resolves 'Mexico' to Mexico City and returns a single point at the center of the city. Instead, we have a couple of options:
#
# 1. We can pass a dict containing a structured query to specify that we want Mexico the country instead of Mexico the city.
# 2. We can also get multiple countries by passing a list of queries. These can be a mixture of strings and dicts.
mexico = ox.gdf_from_place('Mexico')
mexico = ox.project_gdf(mexico)
type(mexico['geometry'].iloc[0])
# instead of a string, you can pass a dict containing a structured query
mexico = ox.gdf_from_place({'country':'Mexico'})
mexico = ox.project_gdf(mexico)
fig, ax = ox.plot_shape(mexico)
# you can pass multiple queries with mixed types (dicts and strings)
mx_gt_tx = ox.gdf_from_places(queries=[{'country':'Mexico'}, 'Guatemala', {'state':'Texas'}])
mx_gt_tx = ox.project_gdf(mx_gt_tx)
fig, ax = ox.plot_shape(mx_gt_tx)
# ## You can request a specific result number
#
# By default, we only request 1 result from OSM. But, we can pass an optional `which_result` parameter to query OSM for *n* results and then process/return the *n*th. If you query 'France', OSM returns the country with all its overseas territories as result 2 and European France alone as result 1. Querying for 'France' returns just the first result (European France), but passing `which_result=2` instead retrieves the top 2 results from OSM and processes/returns the 2nd one (all of France's overseas territories). You could have also done this to retrieve Mexico the country instead of Mexico City above.
france = ox.gdf_from_place('France', which_result=2)
france = ox.project_gdf(france)
fig, ax = ox.plot_shape(france)
france = ox.gdf_from_place('France')
france = ox.project_gdf(france)
fig, ax = ox.plot_shape(france)
| notebooks/02-example-osm-to-shapefile.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Psych 81.09
# language: python
# name: psych81.09
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/hannahutter99/storytelling-with-data/blob/master/Story4_Amelia_Hannah_Nathan.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="z0fllyCKnAga"
#
# + [markdown] id="uZ8S_ZIBMMpI"
# # Setup
#
# This demonstration notebook provides a suggested set of libraries that you might find useful in crafting your data stories. You should comment out or delete libraries that you don't use in your analysis.
# + colab={"base_uri": "https://localhost:8080/"} id="LZRIAazbo0Vz" outputId="34fe7b92-9bb3-4555-97b7-2a9aa30b02fa"
# !ls
# + id="i5X225WPGwDo"
#number crunching
import numpy as np
import pandas as pd
#data visualization
import plotly
import plotly.express as px
from matplotlib import pyplot as plt
# + id="yXRLhPljEpf9"
# + [markdown] id="utex2jHYMh1G"
# # Google authentication
#
# Run the next cell to enable use of your Google credentials in uploading and downloading data via Google Drive. See tutorial [here](https://colab.research.google.com/notebooks/io.ipynb#scrollTo=P3KX0Sm0E2sF) for interacting with data via Google services.
# + id="vnkRNlrJGwEA" colab={"base_uri": "https://localhost:8080/", "height": 507} outputId="b27ff114-1db4-47bc-881c-5b369bd0491a"
from google.colab import auth
from oauth2client.client import GoogleCredentials
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# + [markdown] id="cXsl_Ks2NVf8"
# # Project team
#
# List your team members and (as appropriate) each team member's role on this project.
#
# # Background and overview
#
# Introduce your question and motivation here. Link to other resources or related work as appropriate.
#
# # Approach
#
# Briefly describe (at a high level) the approach you'll be taking to answer or explore your question in this notebook.
#
# # Quick summary
#
# Briefly describe your key findings at a high level.
#
#
# + [markdown] id="zhim6PmNnII2"
# **Project Team:**
# <NAME>, <NAME>, and <NAME>
#
# Nathan and Hannah developed the initial story idea, but this idea evolved as we brainstormed together (after I [Amelia] joined) and as a class, and as we found our data sources.
#
# Amelia performed the data extraction, tidying, manipulation, and analysis, and made the plots. She wrote the script for part 2 of the video and completed all text entries for this notebook.
#
# We worked as a team to discuss our story format, and Hannah put this into action by bringing our ideas to life through videography. She wrote the script for parts 1 and 3, and did the video compilation and editing for the story.
#
# Nathan put together and released a mental health survey to assess students' reactions to community accusations. He also helped Hannah with special effects in the video.
#
# + [markdown] id="ddpF1B-_Dzh3"
# **Background and overview:**
#
# Did the spike in Covid cases at Dartmouth this past February lead to a spike in covid cases in surrounding counties?
#
# It's no secret that Dartmouth has taken the heat from the surrounding community this past year. Prior to students returning this past fall, several articles were released by townspeople saying that Dartmouth students should not be allowed back in Hanover. Furthermore, hundreds of Dartmouth faculty signed a petition to prevent students from returning to campus. Needless to say, Dartmouth students were generally not welcomed back into Hanover, and have instead been a source of blame for Covid cases in the surrounding community.
# We aim to investigate the merits of these accusations--we want to dive into the data and see how Covid cases changed in the week(s) following Dartmouth's outbreak. We are curious to see whether there were in fact spikes in cases in the surrounding communities.
# + [markdown] id="H_ZTxYlgKA4r"
# **Approach:**
#
# After much trial-and-error, we learned that it is difficult to isolate Dartmouth-specific data. Thus, we pivoted our approach to instead look at total new cases, per 100K residents, per week in Grafton and the surrounding counties. We figured that since Dartmouth is a part of Grafton county, and we know when Dartmouth cases spike, we will still be able to see from the Grafton + surrounding county data whether or not their cases spike as well around the time of Dartmouth's outbreak.
#
# Covid-case data aside, we will also pull in reports of community accusations to add to our intro section to help the viewer understand the extent of the backlash Dartmouth students have received.
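The computation described above (daily new cases from cumulative counts, scaled to per-100K, then summed within each week and county) can be sketched in a few lines of pandas. The counties and populations below match the real ones used in this analysis, but the case counts are made up purely for illustration:

```python
import pandas as pd

# Toy cumulative-case data in the same shape as the NYT county file.
df = pd.DataFrame({
    "date": pd.to_datetime(["2021-02-01", "2021-02-02", "2021-02-08",
                            "2021-02-01", "2021-02-02", "2021-02-08"]),
    "county": ["Grafton"] * 3 + ["Sullivan"] * 3,
    "cases": [100, 110, 160, 50, 52, 70],   # cumulative counts (made up)
})
pops = {"Grafton": 89886, "Sullivan": 43146}  # 2019 census estimates

# Daily new cases: difference of cumulative counts within each county.
df["newcases"] = df.groupby("county")["cases"].diff()

# Scale to new cases per 100K residents.
df["newcasesper100K"] = 100000 * df["newcases"] / df["county"].map(pops)

# Sum within ISO week to get weekly new cases per 100K per county.
weekly = (df.assign(week=df["date"].dt.isocalendar().week)
            .groupby(["county", "week"], as_index=False)["newcasesper100K"]
            .sum())
print(weekly)
```

The analysis cells below carry out the same steps one at a time with `shift`, a week-number column, and a grouped sum.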
# + [markdown] id="gAqzH0AKLY2x"
# **Quick summary:**
#
# We were pretty surprised with our findings. We of course dove into our project thinking Dartmouth did not cause huge community spikes, so we were expecting to see cases stay pretty consistent, or mildly elevated, in the weeks following Dartmouth's outbreak. We were *not* expecting to see that weekly cases in the surrounding communities actually *decreased* in the weeks following Dartmouth's outbreak.
#
# It seems Dartmouth's outbreak served as a reminder to the community that Covid is still here. People likely took extra precautions that resulted in fewer community transmissions. It seems like Dartmouth students are not to blame after all!
#
# + [markdown] id="XHvFYl-aOHpN"
# # Data
#
# Briefly describe your dataset(s), including links to original sources. Provide any relevant background information specific to your data sources.
# + id="pMalPJ-XMuVh" colab={"base_uri": "https://localhost:8080/"} outputId="dc3318a2-8ed9-4f19-ee7b-dbb2adeb26cb"
# Provide code for downloading or importing your data here
# We used county-level covid data from the NY Times' github. The data has been updated by the NY Times since the beginning of Covid, and tracks total cases and deaths across all
#counties in the United States.
#Link to the NY Times github: https://github.com/nytimes/covid-19-data/blob/master/us-counties.csv
#Link to the raw data source: https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv
# !wget https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv
covid_data = pd.read_csv('us-counties.csv')
# + [markdown] id="2hpbjrd8Ofpa"
# # Analysis
#
# Briefly describe each step of your analysis, followed by the code implementing that part of the analysis and/or producing the relevant figures. (Copy this text block and the following code block as many times as are needed.)
# + id="PhK1qHUaX9Zh"
#we're interested in the covid cases in Grafton and the immediately surrounding counties,
#which includes Sullivan NH, and Orange and Windsor VT.
covid_states = ["New Hampshire", "Vermont"]
covid_counties = ["Grafton","Sullivan","Orange", "Windsor"]
#Create new dataframe that is just the counties of interest
county_data = covid_data[covid_data["state"].isin(covid_states) & covid_data["county"].isin(covid_counties)]
# + id="aQrwGHewDYOm"
#We're interested in cases per 100K. Thus, we need to add the populations for each county
#These data were found on the US Census Bureau website. The Grafton URL is: https://www.census.gov/quickfacts/fact/table/graftoncountynewhampshire/PST045219
#Define a function to add the population of these counties (updated in 2019 census)
def define_pop(row):
if row['county'] == "Grafton" :
return 89886
if row['county'] == "Rockingham" :
return 309769
if row['county'] == "Sullivan" :
return 43146
if row['county'] == "Merrimack" :
return 151391
if row['county'] == "Orange" :
return 28892
if row['county'] == "Windsor" :
return 55062
return "other"
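As an aside, the same lookup can be written as a dict plus `Series.map`, which avoids the chain of `if` statements and makes unmatched counties explicit. A sketch (the `counties` series here is hypothetical):

```python
import pandas as pd

# Populations as in define_pop above (2019 census estimates).
county_pops = {
    "Grafton": 89886, "Rockingham": 309769, "Sullivan": 43146,
    "Merrimack": 151391, "Orange": 28892, "Windsor": 55062,
}

counties = pd.Series(["Grafton", "Windsor", "Somewhere Else"])

# map() does a vectorised dict lookup; unmatched keys become NaN,
# and fillna restores the "other" sentinel used by define_pop.
pops = counties.map(county_pops).fillna("other")
print(pops.tolist())
```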
# + colab={"base_uri": "https://localhost:8080/"} id="iFqW5oqBD4Hs" outputId="dbc72c0a-8e49-4e47-f9b1-47cdbb9814fa"
#Check to make sure it worked
county_data.apply(lambda row: define_pop(row), axis = 1)
# + id="fm0nibU2EsUw"
#Create a fresh copy of the dataframe to avoid an indexing error
df = county_data.copy()
#Subset to look at just data from 2021
df = df[df["date"] > "2020-12-31"]
#Now, add the population as a new column called "pop".
df['pop'] = df.apply(lambda row: define_pop(row), axis = 1)
# + id="6TqopOpXmdbC"
#Find cases per 100000 by multiplying cases by 100000 and dividing by the county population
df['per100K'] = 100000 * df['cases'] / df['pop']
# + colab={"base_uri": "https://localhost:8080/", "height": 163} id="_YtoLWRLmk3d" outputId="123b9549-32cd-44b0-fe7b-1bbd5800191b"
df.head() #check
# + id="lM-1bsRlB72s"
#add new columns to the dataframe that record the increase in cases per day
#this uses the "shift" command, followed by a simple subtraction to find the number of new cases per day
df["lagCol"] = df.groupby("county")["cases"].shift(1)
df["newcases"] = df.cases - df.lagCol
# + id="DyBVQdGYttPR"
#Create lag column for cases per 100K
df["per100KlagCol"] = df.groupby("county")["per100K"].shift(1)
#Find the new cases per 100K
df["newcasesper100K"] = df.per100K - df.per100KlagCol
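The shift-then-subtract pattern in the last two cells is equivalent to a grouped `diff()`; a quick check on made-up numbers:

```python
import pandas as pd

demo = pd.DataFrame({
    "county": ["Grafton", "Grafton", "Grafton", "Windsor", "Windsor"],
    "cases":  [100, 110, 125, 40, 46],   # cumulative counts (made up)
})

# shift(1) within each county, then subtract...
new_a = demo["cases"] - demo.groupby("county")["cases"].shift(1)

# ...is the same as diff() within each county.
new_b = demo.groupby("county")["cases"].diff()

print(new_a.equals(new_b))  # True
```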
# + colab={"base_uri": "https://localhost:8080/", "height": 197} id="dRsNUitQxLDe" outputId="a2173825-3932-441a-ebca-b33f3d8ca237"
#Plot daily new cases per county
fig5 = px.line(df, x = "date", y = "newcases", color = "county")
fig5.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 268} id="K8K0kTxCVofz" outputId="691f5fe7-3b07-4505-c45b-ccbc459a7259"
#There are clearly a lot of daily fluctuations, so we instead want to look at new cases per week. This will give us
#a better picture of what is happening on a weekly basis
df['date'] = pd.to_datetime(df['date'], errors = 'coerce')
df['weekNumb'] = df['date'].dt.week
df.head()
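One caveat on the cell above: `Series.dt.week` is deprecated in recent pandas releases; the supported spelling is `dt.isocalendar().week`. It also shows why early-January dates land in "week 53":

```python
import pandas as pd

dates = pd.to_datetime(pd.Series(["2021-01-02", "2021-02-24", "2021-04-20"]))

# dt.isocalendar() returns a year/week/day frame; .week replaces dt.week.
weeks = dates.dt.isocalendar().week
print(weeks.tolist())  # [53, 8, 16] -- Jan 2, 2021 falls in ISO week 53 of 2020
```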
# + id="7-jxZC-0tEe9"
#For simplicity, we're going to keep only weeks 1-19 of 2021. This also eliminates
#"week 53", which was basically just the first 2 days of January
# + colab={"base_uri": "https://localhost:8080/", "height": 197} id="XFioGWPPbdAd" outputId="3e07e05c-b560-46c8-ba7e-bd6f1b315166"
df2 = df.copy() #creating new copy due to paranoia of messing something from DF up
#Now, we're grouping by county and week to find the sum total of new cases per 100K per week
df2 = df2.groupby(['county','weekNumb'], as_index = False)['newcasesper100K'].sum()
df2.head()
# + colab={"base_uri": "https://localhost:8080/"} id="1mFpaDwhzI93" outputId="daaed9d2-be35-4919-f725-1db91b352810"
df2.dtypes
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="0wkXQtKZzNyj" outputId="6a5e6a34-8431-44ad-9c4d-1f6ddc835c98"
#Stacked Bar chart of Weekly Covid Cases per 100K
bar_fig = px.bar(df2, x='weekNumb', y='newcasesper100K', color = "county",
title = "2021 Weekly Covid Cases per 100K in Four Upper Valley Counties",
labels = {
"weekNumb": "Week",
"newcasesper100K": "New Cases per 100K",
"county": "county"
}) #color = 'county') #barmode = 'group')
bar_fig.update_layout(
title = {
"text": "2021 Weekly Covid Cases (per 100K) in Four Upper Valley Counties",
"y": 0.9,
"x": 0.5,
"xanchor": "center",
"yanchor": "top"
}
)
bar_fig.update_xaxes(
ticktext = ["Jan 10-16", "Jan 24-30", "Feb 7-13", "Feb 21-27", "Mar 7-13", "Mar 21-27", "Apr 4-10", "Apr 18-24"],
tickvals = [2,4,6,8,10,12,14,16]
)
bar_fig.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="o4-oORgiIonP" outputId="dc0addfe-c4bf-41f1-faa4-c3d0f58dfd93"
# Line graph showing trend in weekly covid cases per 100K
line_fig = px.line(df2, x='weekNumb', y='newcasesper100K', color = "county",
title = "2021 Weekly Covid Cases per 100K in Four Upper Valley Counties",
labels = {
"weekNumb": "Week",
"newcasesper100K": "New Cases per 100K",
"county": "county"
}) #color = 'county') #barmode = 'group')
line_fig.update_layout(
title = {
"text": "2021 Weekly Covid Cases (per 100K) in Four Upper Valley Counties",
"y": 0.9,
"x": 0.5,
"xanchor": "center",
"yanchor": "top"
}
)
line_fig.update_xaxes(
ticktext = ["Jan 10-16", "Jan 24-30", "Feb 7-13", "Feb 21-27", "Mar 7-13", "Mar 21-27", "Apr 4-10", "Apr 18-24"],
tickvals = [2,4,6,8,10,12,14,16]
)
line_fig.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="D5XsguNU1Don" outputId="794ee322-79e0-45a9-c3c4-4e8a0ec02e9a"
## Adding an annotation to the above line graph to show the Dartmouth outbreak onset
figg2 = px.line(df2, x='weekNumb', y='newcasesper100K', color = "county",
title = "2021 Weekly Covid Cases per 100K in Four Upper Valley Counties",
labels = {
"weekNumb": "Week",
"newcasesper100K": "New Cases per 100K",
"county": "county"
}) #color = 'county') #barmode = 'group')
figg2.add_annotation(x = 8, y = 220,
text = "Dartmouth Outbreak, February 21-27",
yshift = 25,
showarrow=True,
arrowhead =2,
bordercolor = "#c7c7c7",
borderwidth = 2,
bgcolor = "#ff7f0e")
figg2.update_layout(
title = {
"text": "2021 Weekly Covid Cases (per 100K) in Four Upper Valley Counties",
"y": 0.9,
"x": 0.5,
"xanchor": "center",
"yanchor": "top"
}
)
figg2.update_xaxes(
ticktext = ["Jan 10-16", "Jan 24-30", "Feb 7-13", "Feb 21-27", "Mar 7-13", "Mar 21-27", "Apr 4-10", "Apr 18-24"],
tickvals = [2,4,6,8,10,12,14,16]
)
figg2.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="rpdZEN3z7Ye5" outputId="fce8ce15-f3cf-454f-aae6-dddbdff7624a"
#Facet the line graph to show each county individually
figg3 = px.line(df2, x='weekNumb', y='newcasesper100K',
color = "county",
facet_col = "county", facet_col_wrap = 2,
title = "2021 Weekly Covid Cases per 100K in Four Upper Valley Counties",
labels = {
"weekNumb": "Week",
"newcasesper100K": "New Cases per 100K",
"county": "county"
}) #color = 'county') #barmode = 'group')
#figg3.add_annotation(x = 8, y = 220,
# text = "Dartmouth Outbreak, February 21-27",
# yshift = 25,
# showarrow=True,
# arrowhead =2,
# bordercolor = "#c7c7c7",
# borderwidth = 2,
# bgcolor = "#ff7f0e")
figg3.update_layout(
title = {
"text": "2021 Weekly Covid Cases (per 100K) in Four Upper Valley Counties",
"y": 0.9,
"x": 0.5,
"xanchor": "center",
"yanchor": "top"
}
)
figg3.update_xaxes(
ticktext = ["Jan 10-16", "Jan 24-30", "Feb 7-13", "Feb 21-27", "Mar 7-13", "Mar 21-27", "Apr 4-10", "Apr 18-24"],
tickvals = [2,4,6,8,10,12,14,16]
)
figg3.show()
# + id="4m3i6KItOypO"
# Provide code for carrying out the part of your analysis described
# in the previous text block. Any statistics or figures should be displayed
# in the notebook.
# + [markdown] id="HkYVN71FPIXE"
# # Interpretations and conclusions
#
# Describe and discuss your findings and say how they answer your question (or how they failed to answer your question). Also describe the current state of your project-- e.g., is this a "complete" story, or is further exploration needed?
#
# # Future directions
#
# Describe some open questions or tasks that another interested individual or group might be able to pick up from, using your work as a starting point.
# + [markdown] id="Dyu21ktbMpuT"
# **Interpretations and conclusions:**
#
# The blame game gets us nowhere, so it's silly to keep pointing fingers. No data will tell us who brought the virus to the area, or exactly how it spread. It could be that some Dartmouth students infected some community members (and the reverse is likely true as well!). But correlation and causation are hard to disentangle in these data.
#
# What we do see is that covid cases did NOT spike in Grafton and surrounding communities following the Covid outbreak at Dartmouth College. In fact, weekly new covid cases per 100K were at their lowest in the weeks following Dartmouth's outbreak. The next peak in cases didn't come until April, over 6 weeks after Dartmouth's peak.
#
# So, our analysis succeeds in answering our research question of whether the outbreak at Dartmouth caused spikes in the surrounding community.
#
# **Conclusions and future directions:**
#
# Of course, it is hard to tell when a project is truly "complete". We could extend this analysis further into the spring, or similarly backtrack it to the fall, when the accusations began. We could also look at Dartmouth student vaccination rates versus community vaccination rates, and explore any changes in covid cases or covid-related deaths. To explore the blame game from a different lens, we could further investigate Dartmouth students' mental health regarding covid and community accusations. We started this in our current project, but this aspect lends itself to further study. There are many avenues of exploration that will serve as interesting research questions for future work.
#
# Possible future research questions include:
# - How have community accusations impacted Dartmouth students' mental health?
# - Have covid outbreaks in Grafton and surrounding counties led to covid outbreaks at Dartmouth? (essentially the reverse of our research question)
# - How do Dartmouth's covid cases/deaths compare to other rural colleges? Other Ivy League schools?
#
#
| Story4_Amelia_Hannah_Nathan.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using spaCy for Named Entity Recognition (NER)
import spacy
import medspacy
from spacy import displacy
import pymysql
import pandas as pd
import getpass
import random
from ipywidgets import interact
from IPython.display import HTML, display
import warnings
warnings.filterwarnings("ignore")
# ## Install a Default Language Model
#
# The following cell downloads a default English language model. It is trained on web text, which, as we will see, does not work well for medical texts.
# !python -m spacy download en_core_web_sm
# ## Load Web (`wnlp`) and Medical Language Models (`mnlp`)
#
wnlp = spacy.load("en_core_web_sm")
mnlp = medspacy.load("en_info_3700_i2b2_2012", enable=['sentencizer', 'tagger', 'parser',
'ner', 'target_matcher', 'context',
'sectionizer'])
conn = pymysql.connect(host="172.16.17.32",port=3306,
user=input("Enter username for MIMIC2 database"),
                       passwd=getpass.getpass("Enter password for MIMIC2 database"),
db='mimic2')
# ### Get Text
#
# Textual data is stored in the `noteevents` table
reports = pd.read_sql("""SELECT text, category FROM noteevents""", conn)
# ### What Kind of Notes are Available?
reports.category.unique()
# ### Split reports into a dictionary keyed by category type
cat_reports = {c:reports[reports.category==c]['text'].tolist() for c in reports.category.unique()}
# ## Compare Web/Medical Language Markup
#
# The following function takes a list of reports, randomly selects one, and identifies named entities using first the medical-specific language model in medspaCy and then the default web-based English language model of spaCy.
def view_ner_reports(txt):
text = random.choice(txt)
display(HTML("<h1> Original Text</h1>"))
print(text)
display(HTML("<h1> MedspaCy Markup</h1>"))
displacy.render(mnlp(text), style="ent")
display(HTML("<h1> Web-based spaCy Markup</h1>"))
displacy.render(wnlp(text), style="ent")
view_ner_reports(cat_reports['Nursing/Other'])
| medspacy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import math
import cv2
import numpy as np
import matplotlib.pyplot as plt
import import_ipynb
import matplotlib.patches as patches
import os
import torch
import torchvision.transforms as transforms
import skimage
from options.generate_options import GenerateOptions
from data.data_loader import CreateDataLoader
from models.models import create_model
from util.visualizer import Visualizer
from util.util import save_image
from PIL import Image
# -
precision = 'fp32'
ssd_model = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_ssd', model_math=precision)
utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_ssd_processing_utils')
ssd_model = ssd_model.to('cuda')
ssd_model.eval()
def SSD(tensor, ssd_model):
with torch.no_grad():
detections = ssd_model(tensor)
results = utils.decode_results(detections)
    best_results = [utils.pick_best(r, 0.40) for r in results]
classes_to_labels = utils.get_coco_object_dictionary()
person_counter = 0
for image_idx in range(len(best_results)):
# fig, ax = plt.subplots(1, figsize=(7,7))
# Show original, denormalized image...
image = tensor[image_idx] / 2 + 0.5
# ax.imshow(image)
# ...with detections
bboxes, classes, confidences = best_results[image_idx]
for idx in range(len(bboxes)):
# left, bot, right, top = bboxes[idx]
# x, y, w, h = [val * 300 for val in [left, bot, right - left, top - bot]]
# rect = patches.Rectangle((x, y), w, h, linewidth=1, edgecolor='r', facecolor='none')
# ax.add_patch(rect)
# ax.text(x, y, "{} {:.0f}%".format(classes_to_labels[classes[idx] - 1], confidences[idx]*100), bbox=dict(facecolor='white', alpha=0.5))
if classes[idx] == 1:
person_counter += 1
# plt.show()
return person_counter
# +
def load_PIL_image(PIL_image):
"""Code from Loading_Pretrained_Models.ipynb - a Caffe2 tutorial"""
img = skimage.img_as_float(PIL_image)
if len(img.shape) == 2:
img = np.array([img, img, img]).swapaxes(0,2)
return img
def prepare_input_from_PIL(PIL_img):
img = load_PIL_image(PIL_img)
img = utils.rescale(img, 300, 300)
img = utils.crop_center(img, 300, 300)
img = utils.normalize(img)
return img
# -
def is_longview(img, ssd_model) -> bool:
img = cv2.resize(img, (455, 256))
cv2.imshow("input", img)
height, width, _ = img.shape
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
thres = 0.75
h_lower = 30
h_upper = 90
field_mask = cv2.inRange(hsv, (h_lower, 0, 50), (h_upper, 255,255))
    persons = 5  # placeholder count used while the SSD person check below is commented out
is_field = (np.sum(field_mask)/(width*height)/255 > thres)
# if is_field:
# inputs = [prepare_input_from_PIL(img)]
# tensor = utils.prepare_tensor(inputs, precision == 'fp16').to('cuda')
# persons = SSD(tensor, ssd_model)
return is_field and persons > 3
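The green-field test in `is_longview` amounts to: the fraction of pixels whose hue lies in a band (with a floor on the value channel) must exceed a threshold. A cv2-free sketch of the same idea on a synthetic HSV array (the function name and toy image are ours, not part of the original code):

```python
import numpy as np

def field_fraction(hsv, h_lower=30, h_upper=90, v_floor=50):
    """Fraction of pixels with hue in [h_lower, h_upper] and value >= v_floor,
    mirroring the cv2.inRange bounds used in is_longview."""
    h, v = hsv[..., 0], hsv[..., 2]
    mask = (h >= h_lower) & (h <= h_upper) & (v >= v_floor)
    return mask.mean()

# Synthetic 4x4 "image": left half green-ish hue, right half blue-ish.
hsv = np.zeros((4, 4, 3), dtype=np.uint8)
hsv[..., 2] = 200        # value channel above the floor everywhere
hsv[:, :2, 0] = 60       # green hue on the left half
hsv[:, 2:, 0] = 120      # blue hue on the right half

print(field_fraction(hsv))  # 0.5, below the 0.75 threshold
```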
# +
opt = GenerateOptions()
opt.isTrain = False
opt.dataroot = "./datasets/generate"
opt.batchSize = 1
opt.loadSize = 256
opt.fineSize = 256
opt.input_nc = 3
opt.output_nc = 1
opt.ngf = 64
opt.ndf = 64
opt.which_model_netD = "basic"
opt.which_model_netG = "unet_256"
opt.n_layers_D = 3
opt.gpu_ids = [0]
opt.name = "soccer_seg_detection_pix2pix"
opt.dataset_mode = "single"
opt.model = "generate"
opt.which_direction = "AtoB"
opt.nThreads = 1
opt.checkpoints_dir = "./checkpoints"
opt.norm = "batch"
opt.serial_batches = True
opt.display_winsize = 256
opt.display_id = 1
opt.display_port = 8097
opt.no_dropout = True
opt.max_dataset_size = float("inf")
opt.resize_or_crop = 'resize_and_crop'
opt.no_flip = True
opt.init_type = 'normal'
opt.continue_train = False
opt.ntest = float("inf")
opt.results_dir = './'
opt.aspect_ratio = 1.0
opt.phase = "generate"
opt.which_epoch = "latest"
opt.how_many = 1
opt.output_dir = './'
# data_loader = CreateDataLoader(opt)
# dataset = data_loader.load_data()
model = create_model(opt)
# visualizer = Visualizer(opt)
counter = 1
# -
def get_transform(opt):
transform_list = []
if opt.resize_or_crop == 'resize_and_crop':
osize = [opt.loadSize, opt.loadSize]
transform_list.append(transforms.Scale(osize, Image.BICUBIC))
transform_list.append(transforms.RandomCrop(opt.fineSize))
elif opt.resize_or_crop == 'crop':
transform_list.append(transforms.RandomCrop(opt.fineSize))
elif opt.resize_or_crop == 'scale_width':
transform_list.append(transforms.Lambda(
lambda img: __scale_width(img, opt.fineSize)))
elif opt.resize_or_crop == 'scale_width_and_crop':
transform_list.append(transforms.Lambda(
lambda img: __scale_width(img, opt.loadSize)))
transform_list.append(transforms.RandomCrop(opt.fineSize))
if opt.isTrain and not opt.no_flip:
transform_list.append(transforms.RandomHorizontalFlip())
transform_list += [transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5),
(0.5, 0.5, 0.5))]
return transforms.Compose(transform_list)
# +
def GAN_line_detection(input_img, counter):
model.set_input(input_img)
model.generate()
visuals = model.get_current_visuals()
print('%04d: process image... ' % (counter))
img = np.array(input_img['A'].squeeze())
h = img.shape[1]
w = img.shape[2]
img = cv2.resize(img[:, :, int(w/2-h/2):int(w/2+h/2)], (256,256))
kernel1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
kernel2 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
output = cv2.cvtColor(visuals["fake_D"], cv2.COLOR_RGB2GRAY)
# output = cv2.GaussianBlur(output,(5,5),10)
# output = cv2.morphologyEx(output, cv2.MORPH_CLOSE, kernel2)
# canny = cv2.Canny(output, 200, 120, apertureSize=3)
# edges = cv2.dilate(canny,kernel1,iterations = 1)
# edges = cv2.erode(edges,kernel1,iterations = 1)
# edges = cv2.dilate(edges,kernel2,iterations = 1)
# edges = cv2.erode(edges,kernel1,iterations = 1)
# canny = cv2.Canny(edges, 200, 120, apertureSize=3)
# edges = cv2.dilate(canny,kernel1,iterations = 1)
# edges = cv2.erode(edges,kernel1,iterations = 1)
# edges = cv2.dilate(edges,kernel2,iterations = 1)
# edges = cv2.erode(edges,kernel1,iterations = 1)
# lines = cv2.HoughLinesP(edges, rho = 1,theta = 1*np.pi/180,threshold = 10, minLineLength = 0,maxLineGap = 20)
# if lines is not None:
# print(output.shape)
# for line in lines:
# x1, y1, x2, y2 = line[0]
# img = cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 1)
cv2.imshow("Result", output)
# f = plt.figure(1)
# f.set_figheight(15)
# f.set_figwidth(15)
# plt.subplot(231)
# plt.imshow(visuals["real_A"])
# plt.subplot(232)
# plt.imshow(visuals["fake_B"])
# plt.subplot(233)
# plt.imshow(visuals["fake_D"])
# plt.subplot(234)
# plt.imshow(canny, cmap='gray')
# plt.subplot(235)
# plt.imshow(edges, cmap='gray')
# plt.subplot(236)
# plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
# plt.show()
# +
cap = cv2.VideoCapture("./videos/England v Croatia Smaller Clip 2.mp4")
counter = 0
para = None
para_ret = None
while(True):
counter += 1
ret, frame = cap.read()
if frame is None:
break
if is_longview(frame, ssd_model):
img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
w, h = img.size
box = (int(w/2-h/2), 0, int(w/2+h/2), h)
img = img.crop(box)
# plt.imshow(img)
transform = get_transform(opt)
input_img = transform(img)
input_img = {'A':input_img.unsqueeze(0), 'A_paths': ""}
GAN_line_detection(input_img, counter)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
# -
def GAN_top_hat_line_detection(input_img, counter):
model.set_input(input_img)
model.generate()
visuals = model.get_current_visuals()
img_path = model.get_image_paths()
print('%04d: process image... %s' % (counter, img_path))
counter += 1
img = cv2.imread(img_path)
h = img.shape[0]
w = img.shape[1]
img = cv2.resize(img[:, int(w/2-h/2):int(w/2+h/2), :], (256,256))
mask = cv2.cvtColor(visuals["fake_B"], cv2.COLOR_BGR2GRAY)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    filterSize = (3, 3)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT,
                                       filterSize)
    tophat_img = cv2.morphologyEx(gray,
                                  cv2.MORPH_TOPHAT,
                                  kernel)
edges = np.bitwise_and(tophat_img, mask)
lines = cv2.HoughLinesP(edges, rho = 1,theta = 1*np.pi/180,threshold = 10, minLineLength = 0,maxLineGap = 20)
if lines is not None:
for line in lines:
x1, y1, x2, y2 = line[0]
img = cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 1)
f = plt.figure(1)
f.set_figheight(15)
f.set_figwidth(15)
plt.subplot(231)
plt.imshow(visuals["real_A"])
plt.subplot(232)
plt.imshow(mask, cmap='gray')
plt.subplot(233)
plt.imshow(gray, cmap='gray')
plt.subplot(234)
plt.imshow(tophat_img, cmap='gray')
plt.subplot(235)
plt.imshow(edges, cmap='gray')
plt.subplot(236)
plt.imshow(img)
plt.show()
# +
# GAN_top_hat_line_detection(input_img, counter)
# -
| main.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Visualising GeoGraph Interactively
# [](https://mybinder.org/v2/gh/ai4er-cdt/gtc-biodiversity/main?filepath=notebooks%2F3-demo-geographviewer-polesia.ipynb)
# This tutorial shows how to visualise a GeoGraph on an interactive map.
# ---
#
# ## 1. Setup and Loading package
# + nbsphinx="hidden" tags=[]
# %load_ext autoreload
# %autoreload 2
# %config IPCompleter.greedy=True
# + tags=[]
import ipyleaflet
import geograph
from geograph.visualisation import geoviewer
from geograph.constants import UTM35N
from geograph.demo.binder_constants import DATA_DIR
# -
# ---
#
# ## 2. Loading Data
# + tags=[]
import pathlib
import geopandas as gpd
# Loading Polesia data
data_path = DATA_DIR / "polesia" / "polesia_landcover_sample.gpkg"
gdf = gpd.read_file(data_path)
# Looking at the south-west corner of the data.
# Chosen because it is an area of wilderness, as discussed.
minx, miny, maxx, maxy = gdf.total_bounds
square_len = 25000
gdf = gdf.cx[ minx:minx+square_len , miny:miny+square_len]
print("Number of patches in region of interest:", len(gdf))
gdf.plot()
gdf.head(5)
# -
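# The `.cx` slice above keeps the patches that fall inside a square window anchored at the lower-left corner of the data's total bounds. As a hedged illustration of that selection logic on plain points (note that geopandas' `.cx` selects by intersection of the full geometries, not just point containment):

```python
def in_window(points, minx, miny, side):
    # Keep points inside an axis-aligned square window of side length `side`.
    return [(x, y) for x, y in points
            if minx <= x <= minx + side and miny <= y <= miny + side]

pts = [(0, 0), (10, 5), (30, 2), (5, 40)]
print(in_window(pts, 0, 0, 25))  # [(0, 0), (10, 5)]
```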
# ---
#
# ## 3. Creating `GeoGraph`
# + tags=[]
# Building the main graph structure
graph = geograph.GeoGraph(gdf,
crs=UTM35N,
columns_to_rename={"Eunis_name":"class_label","AREA":"area"})
# -
# ---
#
# ## 4. Creating Habitats
# + tags=[]
# First selecting the classes that make up our habitat
# We chose all classes with 'pine' in the name.
pine_classes = [label for label in graph.df.class_label.unique() if 'pine' in label]
pine_classes
# + tags=[]
# Distances: mobile (<100m), semi mobile (<25m) and sessile (<5m)
# (proposed by <NAME> at BTO)
graph.add_habitat('Sessile',
max_travel_distance=5,
valid_classes=pine_classes)
graph.add_habitat('Semi mobile',
max_travel_distance=25,
valid_classes=pine_classes)
graph.add_habitat('Mobile',
max_travel_distance=500,
valid_classes=pine_classes)
# -
# ---
#
# ## 5. Interactive Graph
# + tags=[]
viewer = geoviewer.GeoGraphViewer(small_screen=True)
viewer.add_layer(ipyleaflet.basemaps.Esri.WorldImagery)
viewer.add_graph(graph, name='Polesia data', with_components=True)
viewer.enable_graph_controls()
viewer
# -
# > Note: an interactive viewer will show up here.
| notebooks/3-demo-geographviewer-polesia.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from joblib import load
from optimalcodon.projects.rnastability.predictuncertainty import predict_seq_with_uncertainty
gfps = pd.read_csv("../19-05-23-ProteinOptimization/results_data/gfp_sequences_predictions.csv")
gfps['seqs'] = gfps.seqs.str.upper()
gfps
## predict stability
mdl_params = {'specie': 'human', 'cell_type': '293t', 'datatype': 'endogenous'}
modelo = load("../../data/19-08-08-PredictiveModelStability/predictivemodel.joblib")
preds = gfps.seqs.apply(predict_seq_with_uncertainty, models=modelo, **mdl_params)
gfps['ci_l'] = preds.map(lambda x: x[0])
gfps['predictions'] = preds.map(lambda x: x[1])
gfps['ci_u'] = preds.map(lambda x: x[2])
gfps = gfps.drop(['Unnamed: 0', 'predicted_stability_human293t'], axis=1)
gfps.to_csv("results_data/gfps_predictions_uncertanty.csv", index=False)
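# The three `.map` calls above unpack each per-sequence `(lower, point estimate, upper)` prediction tuple into separate columns. The same unpacking in plain Python, with hypothetical tuples standing in for the model output:

```python
preds = [(0.1, 0.5, 0.9), (0.2, 0.4, 0.6)]  # hypothetical (ci_l, prediction, ci_u)
ci_l, point, ci_u = (list(col) for col in zip(*preds))
print(ci_l, point, ci_u)  # [0.1, 0.2] [0.5, 0.4] [0.9, 0.6]
```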
# ***
# ## Predict Synonymous Reporter
| results/19-08-12-ProteinOptimizationV2/predict_GFP_uncertainty.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import json
import torch
import torch.nn as nn
from torch.autograd import Variable as Var
import torch.optim as optim
from syft.core.hooks import TorchHook
from syft.core.workers import VirtualWorker, SocketWorker
hook = TorchHook(local_worker=SocketWorker(id=0, port=8209))
bob_worker = hook.local_worker
alice_worker = SocketWorker(hook=hook, id=1, port=8192)
# -
alice_client = SocketWorker(hook=hook,
id=alice_worker.id,
hostname=alice_worker.hostname,
port=alice_worker.port,
is_pointer=True)
bob_client = SocketWorker(hook=hook,
id=bob_worker.id,
hostname=bob_worker.hostname,
port=bob_worker.port,
is_pointer=True)
# bob creates a tensor
bob_data = Var(torch.FloatTensor([[0,0],[0,1]]))
bob_data.id
import threading
thread = threading.Thread(target = bob_data.send, args = (alice_client, True))
thread.start()
alice_worker.id
alice_worker.listen(num_messages=1)
| examples/other/experimental/SocketWorker Unit Test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# cd ..
import lux
# ### Load in a dataset of 392 different cars from 1970-1982:
dataset = lux.Dataset("lux/data/car.csv",schema=[{"Year":{"dataType":"date"}}])
dataset.df.head()
# +
dobj = lux.DataObj(dataset)
result = dobj.showMore()
result.display()
# -
# ### Intuitively, we expect cars with more horsepower to have higher acceleration, but we are actually seeing the opposite trend.
# ### Let's learn more about whether there are additional factors affecting this relationship.
dobj = lux.DataObj(dataset,[lux.Column("Acceleration",dataModel="measure"),
lux.Column("Horsepower",dataModel="measure")])
result = dobj.showMore()
widget = result.display()
widget
[vis["encoding"]["color"]["field"] for vis in widget.selectedVisLst[0]["vspec"]]
# ### In Enhance, all of the added color variables, except `MilesPerGal`, show a trend of higher values at the upper-left end, with values decreasing towards the bottom-right.
# ### Now given these three other variables, let's look at what the `Displacement` and `Weight` is like for different `Cylinder` cars.
dobj = lux.DataObj(dataset,[lux.Column(["Weight","Displacement"]),lux.Column("Cylinders")])
dobj.display()
# ### The Count distribution shows that there are not many cars with 3 or 5 cylinders, so let's clean the data up to remove those.
import pandas as pd
dataset.df[dataset.df["Cylinders"]==3]
dataset.df[dataset.df["Cylinders"]==5]
newdf = dataset.df[(dataset.df["Cylinders"]!=3) & (dataset.df["Cylinders"]!=5)]
dataset.set_df(newdf)
# ### So after cleaning up the data, we are able to validate what we saw earlier, which is that Weight and Displacement increases as the number of Cylinders increases.
dobj = lux.DataObj(dataset,[lux.Column(["Weight","Displacement"]),lux.Column("Cylinders")])
dobj.display()
| examples/short_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ungol-models
# language: python
# name: ungol-models
# ---
# # Analyze Corpora
# +
import dumpr.common as dc
import nltk
from tqdm import tqdm_notebook as tqdm
import pickle
import pathlib
import multiprocessing as mp
from collections import defaultdict
from typing import Tuple
from typing import Generator
# -
# ## CLEF
# +
basepath = pathlib.Path('/mnt/maehre/dump/text/de/')
files = [
'frankfurter-rundschau-9495/frankfurter-rundschau-9495.full.xml',
'sda-94/sda-94.full.xml',
'sda-95/sda-95.full.xml',
'spiegel-9495/spiegel-9495.full.xml', ]
def gen_docs() -> Generator[str, None, None]:
for xml in (basepath/f for f in files):
with dc.BatchReader(str(xml)) as reader:
for doc in reader:
yield doc.content
def process(content: str) -> Tuple[Tuple[str, int]]:
vocab = defaultdict(lambda: 0)
for w in nltk.word_tokenize(content):
vocab[w.lower()] += 1
return tuple(vocab.items())
def run():
total = 13781 + 69438 + 71677 + 139715
vocab = defaultdict(lambda: 0)
merge_bar = tqdm(total=total, position=1, desc='merged')
def merge(result):
for word, count in result:
vocab[word] += count
merge_bar.update(1)
with mp.Pool(3) as pool:
results = []
for i, content in tqdm(enumerate(gen_docs()), total=total, desc='read'):
res = pool.apply_async(process, (content, ), callback=merge)
results.append(res)
for res in results:
res.wait()
mapping = sorted(vocab.items(), key=lambda item: item[1], reverse=True)  # sort by count; avoids collapsing words with equal counts
print('writing text file')
with open('../opt/clef.vocab.txt', mode='w') as fd:
for tup in mapping:
fd.write('{} {}\n'.format(*tup))
print('dumping dict')
with open('../opt/clef.vocab.pickle', mode='wb') as fd:
pickle.dump(dict(mapping), fd)
merge_bar.close()
run()
print('\ndone.')
# -
with open('../opt/clef.vocab.pickle', mode='rb') as fd:
vocab = pickle.load(fd)
print('vocabulary size', len(vocab))
print('total word count', sum(vocab.values()))
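# The per-document `process` step above is essentially a lower-cased token count, merged into a global vocabulary by the callback. A minimal standard-library sketch of the same idea, with naive whitespace splitting standing in for `nltk.word_tokenize`:

```python
from collections import Counter

def count_tokens(content):
    # Lower-cased token counts for one document (whitespace tokenizer).
    return tuple(Counter(content.lower().split()).items())

# Merge per-document counts into one vocabulary, as the merge callback does.
merged = Counter()
for doc in ("Der Hund läuft", "der Hund schläft"):
    merged.update(dict(count_tokens(doc)))
print(merged["der"], merged["hund"])  # 2 2
```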
| models/notes/corpora.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_mxnet_p36
# language: python
# name: conda_mxnet_p36
# ---
# ## Please input your directory for the top level folder
# folder name : SUBMISSION MODEL
dir_ = 'INPUT-PROJECT-DIRECTORY/submission_model/' # input only here
# #### setting other directory
raw_data_dir = dir_+'2. data/'
processed_data_dir = dir_+'2. data/processed/'
log_dir = dir_+'4. logs/'
model_dir = dir_+'5. models/'
submission_dir = dir_+'6. submissions/'
import pandas as pd
submission = pd.read_csv(raw_data_dir+'sample_submission.csv')
ids = pd.DataFrame({'id':submission.iloc[30490:]['id']})
quantiles = ['0.025', '0.750', '0.250', '0.995', '0.165', '0.835', '0.005',
'0.975', '0.500']
for quantile in list(set(quantiles)):
sub1= pd.read_csv(submission_dir+f'before_ensemble/submission_kaggle_recursive_store_{quantile}.csv')
sub2= pd.read_csv(submission_dir+f'before_ensemble/submission_kaggle_recursive_store_cat_{quantile}.csv')
sub3= pd.read_csv(submission_dir+f'before_ensemble/submission_kaggle_recursive_store_dept_{quantile}.csv')
sub4= pd.read_csv(submission_dir+f'before_ensemble/submission_kaggle_nonrecursive_store_{quantile}.csv')
sub5= pd.read_csv(submission_dir+f'before_ensemble/submission_kaggle_nonrecursive_store_cat_{quantile}.csv')
sub6= pd.read_csv(submission_dir+f'before_ensemble/submission_kaggle_nonrecursive_store_dept_{quantile}.csv')
sub1 = ids.merge(sub1, on='id', how='left').set_index('id')
sub2 = ids.merge(sub2, on='id', how='left').set_index('id')
sub3 = ids.merge(sub3, on='id', how='left').set_index('id')
sub4 = ids.merge(sub4, on='id', how='left').set_index('id')
sub5 = ids.merge(sub5, on='id', how='left').set_index('id')
sub6 = ids.merge(sub6, on='id', how='left').set_index('id')
final_sub = (sub1 + sub2 + sub3 + sub4 + sub5 + sub6 )/6
final_sub.to_csv(submission_dir+f'submission_final_{quantile}_non_recursive.csv')
submission_dir
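# The loop above is a plain element-wise mean across six submissions aligned on `id`. A minimal pure-Python sketch of that averaging step, using hypothetical tiny "submissions" keyed by id:

```python
def ensemble(submissions):
    # Element-wise mean of per-id prediction vectors across all submissions.
    out = {}
    for key in submissions[0]:
        cols = zip(*(s[key] for s in submissions))
        out[key] = [sum(c) / len(submissions) for c in cols]
    return out

subs = [
    {"id_1": [1.0, 2.0], "id_2": [4.0, 8.0]},
    {"id_1": [3.0, 2.0], "id_2": [2.0, 6.0]},
    {"id_1": [2.0, 2.0], "id_2": [0.0, 4.0]},
]
print(ensemble(subs))  # {'id_1': [2.0, 2.0], 'id_2': [2.0, 6.0]}
```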
| src/gluonts/nursery/QRX-Wrapped-M5-Accuracy-Solution/3. code/3. predict/3-1. Final ensemble.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Cal-0/Cal-0-dashboard.github.io/blob/master/DS-Unit-1-Sprint-3-Linear-Algebra/module1-vectors-and-matrices/Copy_of_LS_DS_131_Vectors_and_Matrices_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="yXA3GwWhY9KL" colab_type="text"
# # Part 1 - Scalars and Vectors
#
# For the questions below it is not sufficient to simply provide answers to the questions; you must solve the problems and show your work using Python (the NumPy library will help a lot!). Translate the vectors and matrices into their appropriate Python representations and use NumPy, or functions that you write yourself, to demonstrate the result or property.
# + [markdown] id="oNOTv43_Zi9L" colab_type="text"
# ## 1.1 Create a two-dimensional vector and plot it on a graph
# + id="XNqjzQzrkVG7" colab_type="code" colab={}
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# + id="RiujGd0ETZF_" colab_type="code" colab={}
dv_2 = [1,1]
# + id="zFasanwxTZD1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 287} outputId="66cd73fe-8103-4d5e-c618-a8ba517959fa"
plt.arrow(0,0, dv_2[0], dv_2[1],head_width=.05, head_length=0.05, color ='orange')
plt.xlim(0,2)
plt.ylim(0,2)
# + [markdown] id="unKFT619lk3e" colab_type="text"
# ## 1.2 Create a three-dimensional vector and plot it on a graph
# + id="Nh9FVymgVkAW" colab_type="code" colab={}
from mpl_toolkits.mplot3d import Axes3D
# + id="atUEd3T6llKm" colab_type="code" colab={}
dv_3 = [1,1,1]
# + id="RaZD6bpsX3tt" colab_type="code" colab={}
vectors = np.array([[0,0,0,1,1,1]])
# + id="WkzwAOLbU5iC" colab_type="code" colab={}
X, Y, Z, U, V, W = zip(*vectors)
# + id="Y3XrHskLYNZc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="c4f90e0d-abab-482f-b23d-85fa74a0a8f7"
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.quiver(X, Y, Z, U, V, W, length=0.05)
# + [markdown] id="b7qFxbKxZmI2" colab_type="text"
# ## 1.3 Scale the vectors you created in 1.1 by $5$, $\pi$, and $-e$ and plot all four vectors (original + 3 scaled vectors) on a graph. What do you notice about these vectors?
# + id="ah6zMSLJdJwL" colab_type="code" outputId="c305ad38-ec66-43bd-e03d-20cf7acf6597" colab={"base_uri": "https://localhost:8080/", "height": 52}
from math import e, pi
print(e)
print(pi)
# + id="r8K2EVEQafFe" colab_type="code" colab={}
np_dv_2 = np.array(dv_2)
# + id="Zs0XktgMZx8a" colab_type="code" colab={}
np_dv2_e = 2.718281828459045*(np_dv_2)
np_dv2_pi = 3.141592653589793*(np_dv_2)
# + id="36gSO5e7bYrA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="be4bbcb7-7f67-43e9-d11b-c4e34e595675"
print (np_dv2_e)
print (np_dv2_pi)
# + id="3qpwDlzXkVf5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="2eca5a78-4bfa-4029-91a4-3b9eea47d539"
plt.arrow(0,0, dv_2[0], dv_2[1],head_width=.05, head_length=23, color ='orange')
plt.arrow(0,0, np_dv2_e[0], np_dv2_e[1],head_width=.05, head_length=40, color ='blue')
plt.arrow(0,0, np_dv2_pi[0], np_dv2_pi[1],head_width=.05, head_length=10, color ='red')
plt.xlim(-1,40)
plt.ylim(-1,40)
plt.title('Vectors')
plt.show()
# + id="fMxcMbq1d0Wd" colab_type="code" colab={}
red = [0.5,0.8]
# + id="9TqLmvPjY90V" colab_type="code" colab={}
green = np.multiply(3.141592653589793, red)
blue = np.multiply(2.718281828459045, red)
# + id="9sRUNi5tY9kw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 287} outputId="43a55296-e3ba-4486-8535-448e324da469"
plt.arrow(0,0, green[0], green[1], head_width=0.05, head_length=0.05, color='green')
plt.arrow(0,0, blue[0], blue[1], head_width=0.05, head_length=0.05, color='blue')
plt.arrow(0,0, red[0], red[1], head_width=0.05, head_length=0.05, color='red')
plt.xlim(-1,2)
plt.ylim(-1,2)
# + [markdown] id="wrgqa6sWimbH" colab_type="text"
# ## 1.4 Graph vectors $\vec{a}$ and $\vec{b}$ and plot them on a graph
#
# \begin{align}
# \vec{a} = \begin{bmatrix} 5 \\ 7 \end{bmatrix}
# \qquad
# \vec{b} = \begin{bmatrix} 3 \\4 \end{bmatrix}
# \end{align}
# + id="I1BGXA_skV-b" colab_type="code" colab={}
a = [5,7]
b = [3,4]
# + id="j-vNLuOrhVTT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 287} outputId="cdf42379-03d0-4fda-9f56-c7ec508d05ba"
plt.arrow(0,0, a[0], a[1], head_width=0.05, head_length=0.05, color='green')
plt.arrow(0,0, b[0], b[1], head_width=0.05, head_length= 0.05, color='blue')
plt.xlim(-1,8)
plt.ylim(-1,8)
# + [markdown] id="QN6RU_3gizpw" colab_type="text"
# ## 1.5 find $\vec{a} - \vec{b}$ and plot the result on the same graph as $\vec{a}$ and $\vec{b}$. Is there a relationship between vectors $\vec{a} \thinspace, \vec{b} \thinspace \text{and} \thinspace \vec{a-b}$
# + id="68sWHIOPkXp5" colab_type="code" colab={}
np_a = np.array(a)
np_b = np.array(b)
# + id="2R4CdxB4i0Wf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="50a55c92-c3fc-4140-be97-044b0f0fbe68"
print (np_a - np_b)
# + id="Lx3TTqc_i0PS" colab_type="code" colab={}
np_ab =(np_a -np_b)
# + id="3B3izE_IjS4F" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 287} outputId="6f5ce56c-c482-4a2a-c35a-d735b0eb27ab"
plt.arrow(0,0, a[0], a[1], head_width=0.05, head_length=0.05, color='green')
plt.arrow(0,0, b[0], b[1], head_width=0.05, head_length= 0.05, color='blue')
plt.arrow(0,0, np_ab[0], np_ab[1,], head_width=0.05, head_length= 0.05, color='red')
plt.xlim(-1,8)
plt.ylim(-1,8)
# + [markdown] id="1ZPVuJAlehu_" colab_type="text"
# ## 1.6 Find $c \cdot d$
#
# \begin{align}
# \vec{c} = \begin{bmatrix}7 & 22 & 4 & 16\end{bmatrix}
# \qquad
# \vec{d} = \begin{bmatrix}12 & 6 & 2 & 9\end{bmatrix}
# \end{align}
#
# + id="2_cZQFCskYNr" colab_type="code" colab={}
c = [7,22,4,16]
d = [12,6,2,9]
c_np = np.array(c)
d_np = np.array(d)
# + id="TrWfO3Blllma" colab_type="code" colab={}
import scipy as sp
# + id="kCWUhqwOkzGM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="2746c998-fe1a-420a-fb88-dc49afed56cb"
c_dot_d = np.dot(c_np, d_np)
c_dot_d
# + [markdown] id="cLm8yokpfg9B" colab_type="text"
# ## 1.7 Find $e \times f$
#
# \begin{align}
# \vec{e} = \begin{bmatrix} 5 \\ 7 \\ 2 \end{bmatrix}
# \qquad
# \vec{f} = \begin{bmatrix} 3 \\4 \\ 6 \end{bmatrix}
# \end{align}
# + id="ku-TdCKAkYs8" colab_type="code" colab={}
e =[5,7,2]
f= [3,4,6]
e_np = np.array(e)
f_np = np.array(f)
# + id="LcJY4bcTm8tX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="b0871e1c-aa9f-4826-f935-ba55fc3ef365"
print(np.cross(e_np, f_np))  # cross product, not element-wise multiplication
# + [markdown] id="-TN8wO2-h53s" colab_type="text"
# ## 1.8 Find $||g||$ and then find $||h||$. Which is longer?
#
# \begin{align}
# \vec{g} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 8 \end{bmatrix}
# \qquad
# \vec{h} = \begin{bmatrix} 3 \\3 \\ 3 \\ 3 \end{bmatrix}
# \end{align}
# + id="-5VKOMKBlgaA" colab_type="code" colab={}
g = np.array([
[1],
[1],
[1],
[8]
])
h = np.array([
[3],
[3],
[3],
[3]
])
# + id="QoBd4JdIqXCs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="dec13be6-40b0-47c7-80b2-db38ffd622ad"
type(g2)
# + id="J7i4hV3Ep7Vz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 183} outputId="2f3ddfac-9915-4dc8-d580-8da0008ee93f"
g2 = np.linalg.norm(g)  # ||g|| = sqrt(67) ≈ 8.19
h2 = np.linalg.norm(h)  # ||h|| = 6, so g is longer
# + id="RzOhaXnvp-xN" colab_type="code" colab={}
# + [markdown] id="njrWIMS-ZAoH" colab_type="text"
# # Part 2 - Matrices
# + [markdown] id="GjkcAVIOmOnn" colab_type="text"
# ## 2.1 What are the dimensions of the following matrices? Which of the following can be multiplied together? See if you can find all of the different legal combinations.
# \begin{align}
# A = \begin{bmatrix}
# 1 & 2 \\
# 3 & 4 \\
# 5 & 6
# \end{bmatrix}
# \qquad
# B = \begin{bmatrix}
# 2 & 4 & 6 \\
# \end{bmatrix}
# \qquad
# C = \begin{bmatrix}
# 9 & 6 & 3 \\
# 4 & 7 & 11
# \end{bmatrix}
# \qquad
# D = \begin{bmatrix}
# 1 & 0 & 0 \\
# 0 & 1 & 0 \\
# 0 & 0 & 1
# \end{bmatrix}
# \qquad
# E = \begin{bmatrix}
# 1 & 3 \\
# 5 & 7
# \end{bmatrix}
# \end{align}
# + id="Z69c-uPtnbIx" colab_type="code" colab={}
# a [3,2]
# b[1,3]
#c [2,3]
# d [3,3]
# e [2,2]
# cd, cb, ae, bd
# + [markdown] id="lMOlCoM3ncGa" colab_type="text"
# ## 2.2 Find the following products: CD, AE, and BA. What are the dimensions of the resulting matrices? How does that relate to the dimensions of their factor matrices?
# + id="zhKwiSItoE2F" colab_type="code" colab={}
a = np.array([
[1,2],
[3,4],
[5,6]
])
b = np.array([2,4,6])
c = np.array([
[9,6,3],
[4,7,11]
])
d = np.array([
[1,0,0],
[0,1,0],
[0,0,1]
])
e = np.array([
[1,3],
[5,7]
])
# + id="IWiD8p0b4H7C" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="5f28055b-8a81-4b60-d25d-33bbe368c400"
np.matmul(c,d)
# + id="3yeeXwQG4g5-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 69} outputId="1e96b164-4b64-4eec-f626-5eba94671b00"
np.matmul(a,e)
# + id="y6D1YOaf4gtb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="cf4b9194-7b85-4193-feac-251fda77c108"
np.matmul(b,a)
# + [markdown] id="p2jmaGLgoFPN" colab_type="text"
# ## 2.3 Find $F^{T}$. How are the numbers along the main diagonal (top left to bottom right) of the original matrix and its transpose related? What are the dimensions of $F$? What are the dimensions of $F^{T}$?
#
# \begin{align}
# F =
# \begin{bmatrix}
# 20 & 19 & 18 & 17 \\
# 16 & 15 & 14 & 13 \\
# 12 & 11 & 10 & 9 \\
# 8 & 7 & 6 & 5 \\
# 4 & 3 & 2 & 1
# \end{bmatrix}
# \end{align}
# + id="Wl3ElwgLqaAn" colab_type="code" colab={}
f = np.array([
[20,19,18,17],
[16,15,14,13],
[12,11,10,9],
[8,7,6,5],
[4,3,2,1]
])
# + id="PsufRjYz5Tfg" colab_type="code" colab={}
# the numbers along the main diagonal stay the same
# f [5,4]
# fT [4,5]
# + id="Ogr-7L3O5TO0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="64b4d0a0-0a65-41ba-90a0-0535cc4b9afc"
f.T
# + [markdown] id="13ik2LEEZLHn" colab_type="text"
# # Part 3 - Square Matrices
# + [markdown] id="sDBAPUwfp7f7" colab_type="text"
# ## 3.1 Find $IG$ (be sure to show your work) 😃
#
# You don't have to do anything crazy complicated here to show your work, just create the G matrix as specified below, and a corresponding 2x2 Identity matrix and then multiply them together to show the result. You don't need to write LaTeX or anything like that (unless you want to).
#
# \begin{align}
# G=
# \begin{bmatrix}
# 13 & 14 \\
# 21 & 12
# \end{bmatrix}
# \end{align}
# + id="ZnqvZBOYqar3" colab_type="code" colab={}
g = np.array([
[13,14],
[21,12]
])
i = np.eye(2)
np.matmul(i, g)  # multiplying by the identity returns G unchanged
# + [markdown] id="DZ_0XTDQqpMT" colab_type="text"
# ## 3.2 Find $|H|$ and then find $|J|$.
#
# \begin{align}
# H=
# \begin{bmatrix}
# 12 & 11 \\
# 7 & 10
# \end{bmatrix}
# \qquad
# J=
# \begin{bmatrix}
# 0 & 1 & 2 \\
# 7 & 10 & 4 \\
# 3 & 2 & 0
# \end{bmatrix}
# \end{align}
#
# + id="5QShhoXyrjDS" colab_type="code" colab={}
h = np.array([
[12,11],
[7,10]
])
j = np.array([
[0,1,2],
[7,10,4],
[3,2,0]
])
# + id="MkNizFAn602h" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="c0ba0f48-4ca3-4282-a5c8-6597ec7cc66f"
np.linalg.det(h)
# + id="xSY5N27N60ss" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="22a7c18d-dadc-4ec5-a4b3-7c4ec7cef857"
np.linalg.det(j)
# + [markdown] id="2gZl1CFwrXSH" colab_type="text"
# ## 3.3 Find $H^{-1}$ and then find $J^{-1}$
# + id="nyX6De2-rio1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="ba13786b-df52-44e8-ec86-99ccefba090d"
np.linalg.inv(h)
# + id="MWwykULv8FmA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 69} outputId="0bd2fdde-2e71-420d-e906-8c005aec6142"
np.linalg.inv(j)
# + [markdown] id="Vvd4Pe86rjhW" colab_type="text"
# ## 3.4 Find $HH^{-1}$ and then find $J^{-1}J$. Is $HH^{-1} == J^{-1}J$? Why or Why not?
#
# Please ignore Python rounding errors. If necessary, format your output so that it rounds to 5 significant digits (the fifth decimal place).
#
# Yes, they will both equal the identity matrix, because a matrix multiplied by
# its inverse always yields the identity matrix.
# + id="kkxJ-Una85ZK" colab_type="code" colab={}
h_inv = np.linalg.inv(h)
j_inv = np.linalg.inv(j)
# + id="WhtXph5k9O4J" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="21357b77-adb7-429a-a542-2773619c0317"
np.matmul(h,h_inv)
# + id="-KLlXHFf9h8p" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 69} outputId="0b88a49a-9057-4f01-e98c-ccc023eadc43"
np.matmul(j,j_inv)
# + [markdown] id="V0iTO4McYjtk" colab_type="text"
# # Stretch Goals:
#
# A reminder that these challenges are optional. If you finish your work quickly we welcome you to work on them. If there are other activities that you feel like will help your understanding of the above topics more, feel free to work on that. Topics from the Stretch Goals sections will never end up on Sprint Challenges. You don't have to do these in order, you don't have to do all of them.
#
# - Write a function that can calculate the dot product of any two vectors of equal length that are passed to it.
# - Write a function that can calculate the norm of any vector
# - Prove to yourself again that the vectors in 1.9 are orthogonal by graphing them.
# - Research how to plot a 3d graph with animations so that you can make the graph rotate (this will be easier in a local notebook than in google colab)
# - Create and plot a matrix on a 2d graph.
# - Create and plot a matrix on a 3d graph.
# - Plot two vectors that are not collinear on a 2d graph. Calculate the determinant of the 2x2 matrix that these vectors form. How does this determinant relate to the graphical interpretation of the vectors?
#
#
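# As a hedged sketch of the first two stretch goals (plain Python, no NumPy):

```python
def dot(u, v):
    # Dot product of two equal-length vectors.
    if len(u) != len(v):
        raise ValueError("vectors must have equal length")
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    # Euclidean norm (length) of a vector.
    return sum(x * x for x in v) ** 0.5

print(dot([7, 22, 4, 16], [12, 6, 2, 9]))  # 368, matching 1.6
print(norm([3, 3, 3, 3]))                  # 6.0, matching ||h|| in 1.8
```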
| DS-Unit-1-Sprint-3-Linear-Algebra/module1-vectors-and-matrices/Copy_of_LS_DS_131_Vectors_and_Matrices_Assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.3 64-bit
# name: python_defaultSpec_1598474588739
# ---
# # ML Example (MNIST)
# ## Intro
#
# In this notebook we'll be designing and creating a simple neural network to recognize handwritten numbers using the [MNIST](https://en.wikipedia.org/wiki/MNIST_database) dataset.
# Neural networks are powerful computing systems which are vaguely inspired by the neurons in our brains!
#
# Neural networks are a sequence of matrix multiplications. So our goal in training one is to "learn" coefficients which make our network an accurate predictor of the data. However, there's a lot that goes into how we can best "learn" those coefficients, so we'll stick to a pretty simple implementation of a neural net.
#
# The contents of this notebook are a little outside of the scope of what our club does, but we'll do our best to give you a rundown of what the code does.
#
# We'll also have some links at the bottom if you're interested in reading more!
#
# ## High Level Explanation
#
# Here's a pretty common visualization of how a neural network looks like:
#
# ![[Neural network visualization]()](https://i.ibb.co/mzCxfFH/nn.png)
#
# A neural network can be categorized into three parts:
# - Input Layer: The data we feed into this network
# - Hidden Layer: This 'layer' can be huge, and is where all the heavy computation happens
# - Output Layer: Tells us what our neural network came up with. This layer may be a number or a [one-hot](https://en.wikipedia.org/wiki/One-hot) encoded vector depending on our needs.
#
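# As a quick illustration of a one-hot encoded output for digit labels (a sketch, not part of the model code below):

```python
def one_hot(label, n_classes=10):
    # 1.0 at the label's index, 0.0 everywhere else.
    v = [0.0] * n_classes
    v[label] = 1.0
    return v

print(one_hot(3))  # [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```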
# Each of these layers contains nodes, which can be their own functions. Links connect nodes between layers.
#
# Computations are passed between these layers in two ways:
# - Forward Pass: Compute the output of the network from given input
# - Backward Pass: Compute error from our forward pass output and expected output. Use an optimizer to update our weights in the network. This is also referred to as backpropagation.
#
# Our neural network is a convolutional neural network (CNN), and we'll use convolutions to isolate certain features of our data.
#
# At a high level, this is what our neural network does (read bottom-up):
#
# ![[CNN visualization]()](https://i.ibb.co/JFH3rP1/cnn.png)
#
# 1. We take in our input image
# 2. Do a convolution over the original image to isolate features.
# 3. Summarize our convolutions using pooling.
# 4. Repeat 2-3.
# 5. Throw output of 4. into a neural network which looks like our first visualization of a neural network.
# 6. Determine our output.
#
# Our output ends up being a list of 10 floats, which represent scores on how likely an input matches an index. So if we have an output which looks like:
#
# ```
# [0.3, 0.9, 0.1, 0.0, 0.1, 0.1, 0.6, 0.7, 0.8, 0.7]
# ```
# We would assign the label `1` to the input.
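# Assigning the label is just an argmax over the scores:

```python
scores = [0.3, 0.9, 0.1, 0.0, 0.1, 0.1, 0.6, 0.7, 0.8, 0.7]
label = max(range(len(scores)), key=lambda i: scores[i])  # index of the highest score
print(label)  # 1
```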
#
# ## Setup
#
# We'll be using PyTorch to run our neural network. Machine learning requires a lot of disgusting math, so we want to use libraries to do the heavy lifting for us whenever possible.
#
# [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) are both great libraries which make machine learning a lot easier to do. They provide high-level APIs for creating, training, and evaluating models. There's no strong case for why we use PyTorch here, but we do.
# +
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
# -
# Next we'll define our hyperparameters, which are the constants that our model will be based off of. Let's go through them one by one:
# - `n_epochs`: The number of times (epochs) we'll run our entire dataset over our model during training.
# - `batch_size_train`: Number of samples which will be thrown into the network at a time. We typically don't throw in entire datasets when training models because networks will use less memory and train faster (network will update more often) when working with smaller sample sizes. This is an important consideration, as it introduces some variance to our model if we train it multiple times.
# - `batch_size_test`: Same as above. When we test our model we're not doing as intense computations, so we don't care as much for optimizing for speed.
# - `learning_rate`: A very 'behind-the-scenes' hyperparameter. Specifies how big our step size is when doing our optimization.
# - `momentum`: Another 'behind-the-scenes' hyperparameter. Helps us to converge to our results faster.
#
# The remaining settings in this cell aren't vitally important to the model.
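# To make the batch-size trade-off concrete: with the 60,000 MNIST training images and a batch size of 64, each epoch performs ceil(60000 / 64) weight updates:

```python
import math

n_train, batch_size = 60000, 64  # MNIST training-set size and our training batch size
updates_per_epoch = math.ceil(n_train / batch_size)
print(updates_per_epoch)  # 938
```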
# +
# set hyperparameters
n_epochs = 3
batch_size_train = 64
batch_size_test = 1000
learning_rate = 0.01
momentum = 0.5
log_interval = 10
random_seed = 1
torch.backends.cudnn.enabled = False
torch.manual_seed(random_seed)
# -
# Next we'll import our data using PyTorch's built in data loading features.
#
# PyTorch will download the data if we don't already have it! Very cool.
# + tags=[]
# Gather datasets
train_loader = torch.utils.data.DataLoader(
torchvision.datasets.MNIST('/files/', train=True, download=True,
transform=torchvision.transforms.ToTensor()),
batch_size=batch_size_train, shuffle=True)
test_loader = torch.utils.data.DataLoader(
torchvision.datasets.MNIST('/files/', train=False, download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))
])),
batch_size=batch_size_test, shuffle=True)
# -
# Now we can do some data exploration. This is useful for finding out how you can work with the data.
examples = enumerate(test_loader)
batch_idx, (example_data, example_targets) = next(examples)
plt.imshow(example_data[0][0], cmap='gray')
# ## Neural network code
#
# Next we'll define what our neural network looks like using PyTorch's API for models.
#
# After that we'll create some logging data structures.
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
network = Net()
optimizer = optim.SGD(network.parameters(), lr=learning_rate, momentum=momentum)
train_losses = []
train_counter = []
test_losses = []
test_counter = [i*len(train_loader.dataset) for i in range(n_epochs + 1)]
def train(epoch):
network.train()
for batch_idx, (data, target) in enumerate(train_loader):
optimizer.zero_grad()
# Do forward pass
output = network(data)
# Find the error of our forward pass' output
loss = F.nll_loss(output, target)
# Update our weights using backward pass
loss.backward()
optimizer.step()
# Log out our progress every log_interval steps
if batch_idx % log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
train_losses.append(loss.item())
train_counter.append(
(batch_idx*64) + ((epoch-1)*len(train_loader.dataset)))
torch.save(network.state_dict(), './results/model.pth')
torch.save(optimizer.state_dict(), './results/optimizer.pth')
def test():
    # Put our network in evaluation mode - won't try to update weights
    network.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            output = network(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()
            pred = output.data.max(1, keepdim=True)[1]
            correct += pred.eq(target.data.view_as(pred)).sum()
    test_loss /= len(test_loader.dataset)
    test_losses.append(test_loss)
    print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
# Now that all the setup's done, we'll run our dataset through our training and testing functions.
# + tags=[]
test()
for epoch in range(1, n_epochs + 1):
    train(epoch)
    test()
# -
# At this point we have a trained model, so we can celebrate by labeling some data points!
# +
with torch.no_grad():
    output = network(example_data)
fig = plt.figure()
for i in range(6):
    plt.subplot(2, 3, i+1)
    plt.tight_layout()
    plt.imshow(example_data[i][0], cmap='gray', interpolation='none')
    plt.title("Prediction: {}".format(
        output.data.max(1, keepdim=True)[1][i].item()))
    plt.xticks([])
    plt.yticks([])
# -
# ## Links
# [Cool Slides for ML (Check out Neural Networks and Structured Neural Networks)](https://courses.cs.washington.edu/courses/cse446/20sp/schedule/)
| Part 4/model/mnist.ipynb |
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/motazsaad/ai-csci4304/blob/master/Fuzzy_logic_scikit_fuzzy_python.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="rtwd4JujYBCp" colab_type="text"
# # Install scikit-fuzzy python package
# + id="5DMLRUMFXkXM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="f20f9982-b78b-4b5f-9476-22e4e5314d56"
# ! pip install -U scikit-fuzzy
# + [markdown] id="8tWgkIQRX_TO" colab_type="text"
# # Define universe variables and fuzzy membership functions
# + id="ZDFQuiQlX4rb" colab_type="code" colab={}
import numpy as np
import skfuzzy as fuzz
# Generate universe variable
x = np.arange(11)
# Generate fuzzy membership function
mfx = fuzz.trimf(x, [0, 5, 10])
# + id="XeRQZPdTYgHw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="47228e36-bfdf-49d2-866a-8aa8377b8c99"
x
# + id="Lv3c3RGBYiDa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="43d7933d-1306-438a-a3bc-f4b400fcd47e"
mfx
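# The membership degree of a crisp value that falls between grid points is recovered by linear interpolation (this is what `fuzz.interp_membership` does). A minimal NumPy sketch, rebuilding the same triangle without skfuzzy:

```python
import numpy as np

# Rebuild mfx = fuzz.trimf(x, [0, 5, 10]) by hand: min of the up and down ramps.
x = np.arange(11)
mfx = np.clip(np.minimum(x / 5.0, (10.0 - x) / 5.0), 0, 1)

# Membership of the crisp value 6.5, interpolated between the grid points 6 and 7.
m = float(np.interp(6.5, x, mfx))
print(m)  # 0.7
```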
# + [markdown] id="eGfDDyZlY_Rp" colab_type="text"
# # Visualize this universe and membership function
#
# + id="CsBVWtj5ZE-x" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="516d2de3-a926-4252-f8c4-19bbb23b1d0b"
import matplotlib.pyplot as plt
fig, (ax0) = plt.subplots()
ax0.plot(x, mfx, 'b', linewidth=1.5, label='X')
ax0.set_title('Some Variable')
ax0.legend()
# + [markdown] id="mO8gRgc5au8B" colab_type="text"
# # Example
#
# Taking the tipping example full circle, if we were to create a controller which estimates the tip we should give at a restaurant, we might structure it as such:
#
# * Antecedents (Inputs)
#   * service
#     * Universe (i.e., crisp value range): How good was the service of the waitress, on a scale of 1 to 10?
#     * Fuzzy set (i.e., fuzzy value range): poor, acceptable, amazing
#
#   * food quality
#     * Universe: How tasty was the food, on a scale of 1 to 10?
#     * Fuzzy set: bad, decent, great
#
# * Consequents (Outputs)
#   * tip
#     * Universe: How much should we tip, on a scale of 0% to 25%?
#     * Fuzzy set: low, medium, high
#
# * Rules
#   * IF the service was good or the food quality was good, THEN the tip will be high.
#   * IF the service was average, THEN the tip will be medium.
#   * IF the service was poor or the food quality was poor, THEN the tip will be low.
#
# * Usage
#   * If I tell this controller that I rated:
#     * the service as 9.8, and
#     * the quality as 6.5,
#   * what tip would it recommend I leave?
#
# + id="z2cHdQCdaNRZ" colab_type="code" colab={}
# imports
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl
# + [markdown] id="7VIEuLvMcdSl" colab_type="text"
# # New Antecedent/Consequent objects hold universe variables and membership
# # functions
#
# + id="Uuyc7XiEccqF" colab_type="code" colab={}
# New Antecedent/Consequent objects hold universe variables and membership
# functions
quality = ctrl.Antecedent(universe=np.arange(0, 11, 1), label='quality')
service = ctrl.Antecedent(universe=np.arange(0, 11, 1), label='service')
tip = ctrl.Consequent(universe=np.arange(0, 26, 1), label='tip', defuzzify_method='centroid')
# other methods 'centroid', 'bisector', 'mean of maximum', 'min of maximum','max of maximum'
# centroid, bisector, mom, som, lom
# https://pythonhosted.org/scikit-fuzzy/auto_examples/plot_defuzzify.html
# + [markdown] id="w3bgQS-UcnMH" colab_type="text"
# # Auto-membership function population is possible with .automf(3, 5, or 7)
# # for inputs
# + id="PXDVa665cmgj" colab_type="code" colab={}
# Auto-membership function population is possible with .automf(3, 5, or 7)
quality.automf(3)
service.automf(3)
# + [markdown] id="zvcpzf7Xc9RX" colab_type="text"
# # Custom membership functions can be built interactively with a familiar,
# # Pythonic API
# # for output
# + id="8NMD0_zKdAnl" colab_type="code" colab={}
# Custom membership functions can be built interactively with a familiar,
# Pythonic API
tip['low'] = fuzz.trimf(tip.universe, [0, 0, 13])
tip['medium'] = fuzz.trimf(tip.universe, [0, 13, 25])
tip['high'] = fuzz.trimf(tip.universe, [13, 25, 25])
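# The centroid defuzzification configured on `tip` above can be checked by hand for the `medium` triangle; a pure-NumPy sketch (not part of the original notebook):

```python
import numpy as np

# tip['medium'] = fuzz.trimf(tip.universe, [0, 13, 25]), rebuilt without skfuzzy.
universe = np.arange(0, 26, 1.0)
medium = np.clip(np.minimum(universe / 13.0, (25.0 - universe) / 12.0), 0, 1)

# Centroid defuzzification: the membership-weighted mean of the universe.
centroid = (universe * medium).sum() / medium.sum()
print(round(centroid, 4))  # 12.6667
```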
# + [markdown] id="K6w0PNRqdHrA" colab_type="text"
# # Visualize membership function
# + [markdown] id="rO1JZF1mdVXJ" colab_type="text"
# ## Quality (input)
# + id="GOnl-OAWdL4t" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="a7cc4f5b-b9c6-4ac8-8fca-1c19e06a08ab"
# You can see how these look with .view()
quality['average'].view()
# + [markdown] id="SrPIsGggdXgw" colab_type="text"
# ## Service (input)
# + id="COMOdQ1mdae3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="71c3cff7-2d7c-467b-92d7-839307b6dd3c"
service.view()
# + [markdown] id="O4hJF62XdeYJ" colab_type="text"
# ## Tip (output)
# + id="3tbAe-YCdjHW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="667e86a6-59e1-4274-99a1-5992e25a6ccc"
tip.view()
# + [markdown] id="0jlZRYFXdqiw" colab_type="text"
# # Fuzzy rules
#
# Now, to make these triangles useful, we define the fuzzy relationship between input and output variables. For the purposes of our example, consider three simple rules:
#
# 1. If the food is poor OR the service is poor, then the tip will be low
# 2. If the service is average, then the tip will be medium
# 3. If the food is good OR the service is good, then the tip will be high.
# + id="C1-eteA2d5KD" colab_type="code" colab={}
rule1 = ctrl.Rule(quality['poor'] | service['poor'], tip['low'])
rule2 = ctrl.Rule(service['average'], tip['medium'])
rule3 = ctrl.Rule(service['good'] | quality['good'], tip['high'])
# + [markdown] id="ggukYuTad9Fg" colab_type="text"
# # Control System Creation and Simulation
#
# + id="9Fc54JdJd7pZ" colab_type="code" colab={}
tipping_ctrl = ctrl.ControlSystem(rules=[rule1, rule2, rule3])
tipping = ctrl.ControlSystemSimulation(control_system=tipping_ctrl, clip_to_bounds=True)
# + [markdown] id="emtt1q1IePEz" colab_type="text"
# # Simulate / Compute
# + id="EtvrS0SUeRST" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="14f3b987-a4de-49c6-de02-83c41f8e857d"
# Pass inputs to the ControlSystem using Antecedent labels with Pythonic API
# Note: if you like passing many inputs all at once, use .inputs(dict_of_data)
tipping.input['quality'] = 6.5
tipping.input['service'] = 9.8
# Crunch the numbers
tipping.compute()
print(tipping.output['tip'])
tip.view(sim=tipping)
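# Under the hood, the controller is doing Mamdani-style min/max inference. A hand-rolled sketch of that computation (a hypothetical re-derivation, assuming `.automf(3)` builds `poor`/`average`/`good` triangles at (0, 0, 5), (0, 5, 10), (5, 10, 10)) lands near the controller's output for quality=6.5, service=9.8:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership degree of scalar x with corners (a, b, c)."""
    if x <= a or x >= c:
        return 1.0 if x == b else 0.0
    if x <= b:
        return (x - a) / (b - a) if b > a else 1.0
    return (c - x) / (c - b) if c > b else 1.0

# Membership of the crisp inputs in each fuzzy set (assumed automf(3) corners).
quality = {'poor': tri(6.5, 0, 0, 5), 'good': tri(6.5, 5, 10, 10)}
service = {'poor': tri(9.8, 0, 0, 5),
           'average': tri(9.8, 0, 5, 10),
           'good': tri(9.8, 5, 10, 10)}

# Rule firing strengths: OR is max (AND would be min).
fire_low = max(quality['poor'], service['poor'])   # rule 1
fire_med = service['average']                      # rule 2
fire_high = max(quality['good'], service['good'])  # rule 3

# Clip each output set at its rule strength, aggregate with max, take the centroid.
universe = np.arange(0, 26, 1.0)
low = np.array([tri(u, 0, 0, 13) for u in universe])
med = np.array([tri(u, 0, 13, 25) for u in universe])
high = np.array([tri(u, 13, 25, 25) for u in universe])
agg = np.maximum.reduce([np.fmin(fire_low, low),
                         np.fmin(fire_med, med),
                         np.fmin(fire_high, high)])
tip = (universe * agg).sum() / agg.sum()
print(round(tip, 2))  # roughly 20 under this sketch's assumptions
```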
# + id="zv_MjPEDfWOB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="09c8d0b5-6fb7-450b-d2e2-eb2a517acf73"
# Pass inputs to the ControlSystem using Antecedent labels with Pythonic API
# Note: if you like passing many inputs all at once, use .inputs(dict_of_data)
tipping.input['quality'] = 10
tipping.input['service'] = 10
# Crunch the numbers
tipping.compute()
print(tipping.output['tip'])
tip.view(sim=tipping)
# + id="gORVptQ5gSlk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="8a924813-b103-4de5-cca1-e4558f6538d5"
# Pass inputs to the ControlSystem using Antecedent labels with Pythonic API
# Note: if you like passing many inputs all at once, use .inputs(dict_of_data)
tipping.input['quality'] = 10
tipping.input['service'] = 2
# Crunch the numbers
tipping.compute()
print(tipping.output['tip'])
tip.view(sim=tipping)
# + id="d3JJsn_ijIVP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="45b49300-fd1c-42e7-95f4-78e7aa0c258a"
# Pass inputs to the ControlSystem using Antecedent labels with Pythonic API
# Note: if you like passing many inputs all at once, use .inputs(dict_of_data)
tipping.input['quality'] = 7
tipping.input['service'] = 2
# Crunch the numbers
tipping.compute()
print(tipping.output['tip'])
tip.view(sim=tipping)
| fuzzy/Fuzzy_logic_scikit_fuzzy_python.ipynb |
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .jl
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Julia 0.4.6
#     language: julia
#     name: julia-0.4
# ---
# 2. Print the Julia version
VERSION
# 3. Create a null vector of size 10
Z = zeros(10)
# 4. Create a null vector of size 10 but the fifth value which is 1
Z = zeros(10)
Z[5] = 1
Z
# 5. Create a vector with values ranging from 10 to 99
Z = [10:99]
# 6. Create a 3x3 matrix with values ranging from 0 to 8
Z = reshape(0:8, 3, 3)
# 7. Find indices of non-zero elements from [1,2,0,0,4,0]
nz = find([1,2,0,0,4,0])
# 8. Create a 3x3 identity matrix
Z = eye(3)
# 9. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal
Z = diagm(1:4, -1)
# 10. Create a 10x10x10 array with random values
rand(10, 10, 10)
# +
# Novice
# 1. Create a 8x8 matrix and fill it with a checkerboard pattern
Z = zeros(Int64,8,8)
Z[1:2:end, 2:2:end] = 1
Z[2:2:end, 1:2:end] = 1
Z
# Another solution
# Author: harven
[(i+j)%2 for i=1:8, j=1:8]
# -
# 2. Create a 10x10 array with random values and find the minimum and maximum values
Z = rand(10, 10)
Zmin, Zmax = minimum(Z), maximum(Z)
# It can also be written as follows. Thanks [hc_e](http://qiita.com/chezou/items/d7ca4e95d25835a5cd01#comment-1c20073a44695c08f523)
Zmin, Zmax = extrema(Z)
# 3. Create a checkerboard 8x8 matrix using the tile function
# numpy's tile is equivalent to Julia's repmat
Z = repmat([0 1;1 0],4,4)
# 4. Normalize a 5x5 random matrix (between 0 and 1)
Z = rand(5, 5)
Zmin, Zmax = minimum(Z), maximum(Z)
Z = (Z .- Zmin)./(Zmax - Zmin)
# 5. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product)
Z = ones(5,3) * ones(3,2)
# +
# 6. Create a 10x10 matrix with row values ranging from 0 to 9
(zeros(Int64,10,10) .+ [0:9])'
# Alternate solution
# Author: <NAME>
[y for x in 1:10, y in 0:9]
# -
# 7. Create a vector of size 1000 with values ranging from 0 to 1, both excluded
linspace(0,1, 1002)[2:end - 1]
# +
# 8. Create a random vector of size 100 and sort it
Z = rand(100)
sort(Z) # returns a sorted copy of Z; leaves Z unchanged
# Alternate solution
# Author: <NAME>
Z = rand(100)
sort!(Z) # sorts Z in-place; returns Z
# -
# 9. Consider two random matrices A and B, check if they are equal.
A = rand(0:2, 2,2)
B = rand(0:2, 2,2)
A == B
# 10. Create a random vector of size 1000 and find the mean value
Z = rand(1000)
m = mean(Z)
# +
# Apprentice
# 1. Make an array immutable (read-only)
# nothing
# -
# 2. Consider a random 10x2 matrix representing cartesian coordinates, convert them to polar coordinates
Z = rand(10,2)
X, Y = Z[:,1], Z[:,2]
R = sqrt(X.^2 + Y.^2)
T = atan2(Y,X)
# 3. Create random vector of size 100 and replace the maximum value by 0
Z = rand(100)
Z[indmax(Z)] = 0
# +
# 4. Create a structured array with x and y coordinates covering the [0,1]x[0,1] area.
# `meshgrid` does not officially exist in Julia
# see also. https://github.com/JuliaLang/julia/issues/4093
# assume using https://github.com/JuliaLang/julia/blob/master/examples/ndgrid.jl
include("/Applications/Julia-0.3.0-prerelease-547facf2c1.app/Contents/Resources/julia/share/julia/examples/ndgrid.jl")
X = linspace(0,1,10)
Zx, Zy = meshgrid(X, X)
# Another solution
# Author: <NAME>
[(x,y) for x in linspace(0,1,10), y in linspace(0,1,10)]
# +
# 5. Print the minimum and maximum representable value for each Julia scalar type
for dtype in (Int8, Int16, Int32, Int64)
    println(typemin(dtype))
    println(typemax(dtype))
end
# Another solution
# Author: harven
print(map(t -> (typemin(t),typemax(t)), subtypes(Signed)))
# For floats, typemin and typemax return -Inf and Inf
for dtype in (Float32, Float64)
    println(typemin(dtype))
    println(typemax(dtype))
    println(eps(dtype))
end
# +
# 6. Create a structured array representing a position (x,y) and a color (r,g,b)
# Julia doesn't have StructArray
# see also: https://github.com/JuliaLang/julia/issues/1263
# use DataFrames
# -
# 7. Consider a random vector with shape (100,2) representing coordinates, find point by point distances
Z = rand(10,2)
X,Y = Z[:,1], Z[:,2]
D = sqrt((X.-X').^2 + (Y .- Y').^2)
# +
# 8. Generate a generic 2D Gaussian-like array
X, Y = meshgrid(linspace(-1,1,100),linspace(-1,1,100))
D = sqrt(X.^2 + Y.^2)
sigma, mu = 1.0, 0.0
G = exp(-( (D.-mu).^2 / ( 2.0 * sigma^2 ) ) )
# Another solution
# Author: <NAME>
sigma, mu = 1.0, 0.0
G = [ exp(-(x-mu).^2/(2.0*sigma^2) -(y-mu).^2/(2.0*sigma^2) ) for x in linspace(-1,1,100), y in linspace(-1,1,100) ]
# One more solution
# Author: <NAME>
sigma, mu = 1.0, 0.0
x,y = linspace(-1,1,100), linspace(-1,1,100)
G = zeros(length(x),length(y))
for i in 1:length(x), j in 1:length(y)
    G[i,j] = exp(-(x[i]-mu).^2/(2.0*sigma^2) -(y[j]-mu).^2/(2.0*sigma^2) )
end
# -
# 9. Consider the vector [1, 2, 3, 4, 5], how to build a new vector with 3 consecutive zeros interleaved between each value ?
Z = [1,2,3,4,5]
nz = 3
Z0 = zeros(length(Z) + (length(Z)-1)*(nz))
Z0[1:nz+1:end] = Z
# 10. Find the nearest value from a given value in an array
Z = [3,6,9,12,15]
Z[indmin(abs(Z .- 10))]
# Journeyman
# 1. Consider the following file:
# 1,2,3,4,5
# 6,,,7,8
# ,,9,10,11
# How to read it ?
using DataFrames
readtable("missing.dat")
# +
# 2. Consider a generator function that generates 10 integers and use it to build an array
# I can't translate this question
# -
# 3. Consider a given vector, how to add 1 to each element indexed by a second vector (be careful with repeated indices) ?
using StatsBase
Z = ones(10)
I = rand(0:length(Z), 20)
Z += counts(I, 1:length(Z))
# 4. How to accumulate elements of a vector (X) to an array (F) based on an index list (I) ?
using StatsBase
X = WeightVec([1,2,3,4,5,6])
I = [1,3,9,3,4,1]
F = counts(I, maximum(I), X)
# 5. Considering a (w,h,3) image of (dtype=ubyte), compute the number of unique colors
w,h = 16,16
I = convert(Array{Uint8}, rand(0:2, (h,w,3)))
F = I[:,:,1] * 256 * 256 + I[:,:,2]*256 + I[:,:,3]
n = length(unique(F))
unique(I)
# 6. Considering a four-dimensional array, how to get the sum over the last two axes at once ?
A = rand(0:10, (3,4,3,4))
x,y = size(A)[1:end-2]
z = prod(size(A)[end-1:end])
calc_sum = sum(reshape(A, (x,y,z)),3)
# 7. Considering a one-dimensional vector D, how to compute means of subsets of D using a vector S of same size describing subset indices ?
using StatsBase
D = WeightVec(rand(100))
S = rand(0:10,100)
D_sums = counts(S, maximum(S), D)
D_counts = counts(S, maximum(S))
D_means = D_sums ./ D_counts
# +
# Craftsman
# 1. Consider a one-dimensional array Z, build a two-dimensional array whose first row is (Z[0],Z[1],Z[2]) and each subsequent row is shifted by 1 (last row should be (Z[-3],Z[-2],Z[-1]))
# There is no Julia function like numpy's stride_tricks.as_strided
function rolling(A, window)
    Z = zeros(length(A) - window + 1, window)
    for i in 1:(length(A) - window + 1)
        Z[i,:] = A[i:i+window-1]
    end
    return Z
end
rolling(0:100, 3)
# +
# 2. Consider a set of 100 triplets describing 100 triangles (with shared vertices), find the set of unique line segments composing all the triangles.
faces = rand(0:100, 100, 3)
face2 = kron(faces,[1 1])
F = circshift(sortcols(face2),(0,1))
F = reshape(F, (convert(Int64,length(F)/2),2))
F = sort(F,2)
G = unique(F,1)
# -
# 3. Given an array C that is a bincount, how to produce an array A such that np.bincount(A) == C ?
using StatsBase
O = [1 1 2 3 4 4 6]
C = counts(O, maximum(O))
A = foldl(vcat,[kron(ones(Int64, C[i]), i) for i in 1:length(C)])
# 4. How to compute averages using a sliding window over an array ?
function moving_average(A, n=3)
    ret = cumsum(A)
    ret[n+1:end] = ret[n+1:end] - ret[1:end-n]
    return ret[n:end-1] / n
end
Z = 0:20
moving_average(Z, 3)
# Artisan
# 1. Considering a 100x3 matrix, extract rows with unequal values (e.g. [2,2,3])
Z = rand(0:5,100,3)
E = prod(Z[:,2:end] .== Z[:,1:end-1],2)
U = Z[find(~E), :]
# 2. Convert a vector of ints into a matrix binary representation.
I = [0 1 2 3 15 16 32 64 128]
B = foldl(hcat,[reverse(int(bool(i & (2 .^ (0:8))))) for i in I])'
# +
# Adept
# 1. Consider an arbitrary array, write a function that extracts a subpart with a fixed shape centered on a given element (pad with a fill value when necessary)
# TBS
# +
# Expert
# 1. Consider two arrays A and B of shape (8,3) and (2,2). How to find rows of A that contain elements of each row of B regardless of the order of the elements in B ?
# I can't execute numpy version...
# +
# 2. Extract all the contiguous 3x3 blocks from a random 10x10 matrix.
# TBS
# +
# 3. Create a 2D array subclass such that Z[i,j] == Z[j,i]
# There is Symmetric class in julia but immutable
# https://github.com/JuliaLang/julia/blob/master/base/linalg/symmetric.jl
# See also: https://github.com/JuliaLang/julia/pull/1533
# -
# 4. Consider a set of p matrices with shape (n,n) and a set of p vectors with shape (n,1). How to compute the sum of the p matrix products at once ? (result has shape (n,1))
p, n = 10, 20
M = ones(n,n,p)
V = ones(n,p)
S = reduce(+, [M[i,:,j]*V[i] for i = 1:n, j = 1:p])'
S
# Master
# 1. Given a two dimensional array, how to extract unique rows ?
Z = rand(0:2, 6,3)
uZ = unique(Z,1)
| julia-100-exercises.ipynb |
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# + hide_input=false
import requests
import json
import numpy as np
import pandas as pd
# -
##payload = {'q': "select * from water_risk_indicators where indicator = 'water_stress' and model in ('bau', 'historic') and period='year' and type='absolute' and basinid = 7664 order by year asc"}
r = requests.get('http://api.resourcewatch.org/vocabulary?page[size]=1000')
print(r)
#print(json.dumps(r.json()['data'],sort_keys=True, indent=1))
datasetCollection = np.array(r.json()['data'])
print(len(datasetCollection))
for value in r.json()['data']:
    print(value['id'])
# +
payload = {'page[size]': "1000",
           'app': 'rw',
           'includes': ''}
sdf=0
r = requests.get('http://api.resourcewatch.org/dataset',params = payload)
print(r)
#print(json.dumps(r.json()['data'],sort_keys=True, indent=1))
datasetCollection = np.array(r.json()['data'])
print(len(datasetCollection))
for value in r.json()['data']:
    if value['attributes']['tableName'] is not None:
        sdf = sdf + 1
print(sdf)
# -
# ## Vocabularies
# ```json
# {
# "forest": {
# "tags": [
# "forest",
# "forest_change",
# "forest_loss",
# "forest_gain",
# "forest_cover",
# "fire",
# "burn",
# "deforestation",
# "degradation",
# "restoration",
# "land_cover",
# "intact_forest",
# "logging",
# "mangrove"
# ]
# },
# "biodiversity": {
# "tags": [
# "biodiversity",
# "marine_life",
# "coral_reef",
# "bleaching",
# "ocean",
# "hotspot",
# "risk",
# "habitat",
# "species",
# "endangered",
# "conservation",
# "human_impact",
# "protected_area",
# "reserve",
# "park",
# "fish",
# "fishing",
# "illegal",
# "poaching",
# "bleaching",
# "hunting",
# "fisheries",
# "ecosystem",
# "amphibian",
# "commercial",
# "biome",
# "mammal",
# "mangrove",
# "plant",
# "bird",
# "intact",
# "habitat_loss",
# "extinction",
# "range"
# ]
# },
# "reference_map": {
# "tags": [
# "reference_map",
# "political",
# "boundaries",
# "satellite",
# "imagery",
# "elevation",
# "land_cover",
# "land_use",
# "land_classification",
# "land_type",
# "land_units",
# "biome",
# "terrain",
# "slope",
# "road"
# ]
# },
# "disasters": {
# "tags": [
# "disaster",
# "natural_disaster",
# "earthquake",
# "seismic",
# "flood",
# "volcano",
# "volcanic",
# "eruption",
# "drought",
# "fire",
# "outbreak",
# "cold_wave",
# "heat_wave",
# "storm",
# "landslide",
# "hazard",
# "risk",
# "stress",
# "vulnerability",
# "hurricane",
# "typhoon",
# "extreme_event",
# "explosion"
# ]
# },
# "commerce": {
# "tags": [
# "commerce",
# "commodities",
# "supply_chain",
# "supply",
# "demand",
# "trade",
# "investment",
# "regulation",
# "concessions",
# "agreement",
# "transparency",
# "coal",
# "uranium",
# "bauxite",
# "sulfur",
# "lithium",
# "copper",
# "iron_ore",
# "silicon",
# "mineral",
# "network",
# "shipping",
# "port",
# "gdp",
# "gross_domestic_product",
# "gni",
# "gross_national_income",
# "limit",
# "infrastructure",
# "route",
# "market",
# "production",
# "resource",
# "economy",
# "recycle",
# "waste",
# "material_flows",
# "price"
# ]
# },
# "food": {
# "tags": [
# "food",
# "nutrition",
# "malnutrition",
# "fertilizer",
# "hunger",
# "famine",
# "food_security",
# "food_supply",
# "production",
# "resource",
# "crop_health",
# "crop_yield",
# "livestock",
# "food_waste",
# "diets",
# "malnutrition",
# "food_production",
# "food_consumption",
# "soil",
# "cropland",
# "farmland",
# "agriculture",
# "irrigation",
# "irrigated",
# "rain_fed",
# "food_price",
# "price",
# "crop",
# "vegetation",
# "vegetation_health",
# "crop_health",
# "organic",
# "meat",
# "beef",
# "lamb",
# "crop_calendar",
# "harvest",
# "ndvi",
# "anomaly",
# "aquaculture",
# "fish",
# "corn",
# "maize",
# "wheat",
# "rice",
# "soy",
# "palm_oil",
# "shrimp",
# "pesticide",
# "yield_gap"
# ]
# },
# "dataset_type": {
# "tags": [
# "raster",
# "vector",
# "table",
# "geospatial",
# "non_geospatial"
# ]
# },
# "energy": {
# "tags": [
# "energy",
# "energy_production",
# "production",
# "electricity",
# "energy_efficiency",
# "renewable",
# "nonrenewable",
# "power_line",
# "emissions",
# "power_plant",
# "solar",
# "wind",
# "geothermal",
# "hydroelectric",
# "hydropower",
# "nuclear",
# "biofuel",
# "coal",
# "oil",
# "petroleum",
# "energy_hazard",
# "spill",
# "chemical",
# "pipeline",
# "nighttime_lights",
# "uranium",
# "thorium",
# "access",
# "energy_access",
# "oil_refineries",
# "oil_reserves",
# "reserve",
# "flare",
# "flaring",
# "natural_gas",
# "shale",
# "power",
# "potential",
# "radioactive",
# "resource"
# ]
# },
# "function": {
# "tags": [
# "planet_pulse",
# "explore",
# "alert",
# "insight",
# "dashboard",
# "timeseries"
# ]
# },
# "water": {
# "tags": [
# "water",
# "water_avaliablility",
# "water_extent",
# "water_scarcity",
# "water_demand",
# "water_pollution",
# "flood",
# "drought",
# "precipitation",
# "rainfall",
# "rain",
# "water_quality",
# "nitrogen",
# "phosphorus",
# "dissolved_oxygen",
# "temperature",
# "total_suspended_solids",
# "water_scarcity",
# "water_risk",
# "water_stress",
# "drinking_water",
# "sewage",
# "wastewater",
# "treatment",
# "nutrient",
# "water_access",
# "access",
# "water_consumption",
# "infrastructure",
# "change",
# "seasonality",
# "greywater",
# "fresh_water",
# "dam",
# "reservoir",
# "water_source",
# "sedimentation",
# "sediment",
# "erosion",
# "pollution",
# "surface_water",
# "lake",
# "river",
# "sea",
# "soil_moisture",
# "ocean",
# "watershed",
# "drainage_basin",
# "catchment",
# "carbon",
# "organic",
# "hydrologic",
# "resource",
# "clean_water",
# "silica",
# "hab",
# "harmful_algal_blooms",
# "hydropower",
# "anomaly",
# "water_conflict"
# ]
# },
# "frequency": {
# "tags": [
# "weekly",
# "near_real-time",
# "daily",
# "weekly",
# "monthly",
# "annual",
# "projection"
# ]
# },
# "society": {
# "tags": [
# "society",
# "people",
# "population",
# "demographic",
# "education",
# "educate",
# "literacy",
# "illiterate",
# "school",
# "primary",
# "secondary",
# "youth",
# "children",
# "adolescents",
# "health",
# "governance",
# "government",
# "development",
# "political_stability",
# "human_rights",
# "land_rights",
# "refugees",
# "conflict",
# "protest",
# "migrant",
# "justice",
# "information",
# "participation",
# "security",
# "accountability",
# "stability",
# "corruption",
# "rule_of_law",
# "child_mortality",
# "infant_mortality",
# "poverty",
# "cooking_fuel",
# "sanitation",
# "water",
# "electricity",
# "floor_materials",
# "assets",
# "acquisition",
# "adaptation",
# "capacity",
# "aid",
# "foreign_aid",
# "tone",
# "tenure",
# "sustainable",
# "asylum",
# "displaced",
# "land_grab",
# "land_ownership",
# "migration",
# "mortality",
# "sdgs",
# "sustainable_development_goals",
# "resilience",
# "fragility",
# "freedom",
# "gdp",
# "gross_domestic_product",
# "gini",
# "inequality",
# "life_expectancy",
# "sensitivity",
# "response",
# "crisis",
# "economic",
# "income",
# "fatalities",
# "death",
# "asylum",
# "violence",
# "peace",
# "gender",
# "women",
# "girls",
# "discrimination"
# ]
# },
# "location": {
# "tags": [
# "global",
# "quasi-global",
# "tropic",
# "temperate",
# "boreal",
# "arctic",
# "antarctic",
# "afghanistan",
# "aland",
# "albania",
# "algeria",
# "american_samoa",
# "andorra",
# "angola",
# "anguilla",
# "antarctica",
# "antigua_and_barb.",
# "argentina",
# "armenia",
# "aruba",
# "australia",
# "austria",
# "azerbaijan",
# "bahamas",
# "bahrain",
# "bangladesh",
# "barbados",
# "belarus",
# "belgium",
# "belize",
# "benin",
# "bermuda",
# "bhutan",
# "bolivia",
# "bosnia_and_herz.",
# "botswana",
# "brazil",
# "br._indian_ocean_ter.",
# "brunei",
# "bulgaria",
# "burkina_faso",
# "burundi",
# "cambodia",
# "cameroon",
# "canada",
# "cape_verde",
# "cayman_is.",
# "central_african_rep.",
# "chad",
# "chile",
# "china",
# "colombia",
# "comoros",
# "congo",
# "dem._rep._congo",
# "cook_is.",
# "costa_rica",
# "cte_d'ivoire",
# "croatia",
# "cuba",
# "curaao",
# "cyprus",
# "czech_rep.",
# "denmark",
# "djibouti",
# "dominica",
# "dominican_rep.",
# "ecuador",
# "egypt",
# "el_salvador",
# "eq._guinea",
# "eritrea",
# "estonia",
# "ethiopia",
# "falkland_is.",
# "faeroe_is.",
# "fiji",
# "finland",
# "france",
# "fr._polynesia",
# "fr._s._antarctic_lands",
# "gabon",
# "gambia",
# "georgia",
# "germany",
# "ghana",
# "gibraltar",
# "greece",
# "greenland",
# "grenada",
# "guam",
# "guatemala",
# "guernsey",
# "guinea",
# "guinea-bissau",
# "guyana",
# "haiti",
# "heard_i._and_mcdonald_is.",
# "vatican",
# "honduras",
# "hong_kong",
# "hungary",
# "iceland",
# "india",
# "indonesia",
# "iran",
# "iraq",
# "ireland",
# "isle_of_man",
# "israel",
# "italy",
# "jamaica",
# "japan",
# "jersey",
# "jordan",
# "kazakhstan",
# "kenya",
# "kiribati",
# "dem._rep._korea",
# "korea",
# "kuwait",
# "kyrgyzstan",
# "lao_pdr",
# "latvia",
# "lebanon",
# "lesotho",
# "liberia",
# "libya",
# "liechtenstein",
# "lithuania",
# "luxembourg",
# "macao",
# "macedonia",
# "madagascar",
# "malawi",
# "malaysia",
# "maldives",
# "mali",
# "malta",
# "marshall_is.",
# "mauritania",
# "mauritius",
# "mexico",
# "micronesia",
# "moldova",
# "monaco",
# "mongolia",
# "montenegro",
# "montserrat",
# "morocco",
# "mozambique",
# "myanmar",
# "namibia",
# "nauru",
# "nepal",
# "netherlands",
# "new_caledonia",
# "new_zealand",
# "nicaragua",
# "niger",
# "nigeria",
# "niue",
# "norfolk_island",
# "n._mariana_is.",
# "norway",
# "oman",
# "pakistan",
# "palau",
# "palestine",
# "panama",
# "papua_new_guinea",
# "paraguay",
# "peru",
# "philippines",
# "pitcairn_is.",
# "poland",
# "portugal",
# "puerto_rico",
# "qatar",
# "romania",
# "russia",
# "rwanda",
# "st-barthlemy",
# "saint_helena",
# "st._kitts_and_nevis",
# "saint_lucia",
# "st-martin",
# "st._pierre_and_miquelon",
# "st._vin._and_gren.",
# "samoa",
# "san_marino",
# "so_tom_and_principe",
# "saudi_arabia",
# "senegal",
# "serbia",
# "seychelles",
# "sierra_leone",
# "slovakia",
# "singapore",
# "sint_maarten",
# "slovenia",
# "solomon_is.",
# "somalia",
# "south_africa",
# "s._geo._and_s._sandw._is.",
# "s._sudan",
# "spain",
# "sri_lanka",
# "sudan",
# "suriname",
# "swaziland",
# "sweden",
# "switzerland",
# "syria",
# "taiwan",
# "tajikistan",
# "tanzania",
# "thailand",
# "timor-leste",
# "togo",
# "tonga",
# "trinidad_and_tobago",
# "tunisia",
# "turkey",
# "turkmenistan",
# "turks_and_caicos_is.",
# "tuvalu",
# "uganda",
# "ukraine",
# "united_arab_emirates",
# "united_kingdom",
# "united_states",
# "u.s._minor_outlying_is.",
# "uruguay",
# "uzbekistan",
# "vanuatu",
# "venezuela",
# "vietnam",
# "british_virgin_is.",
# "u.s._virgin_is.",
# "wallis_and_futuna_is.",
# "w._sahara",
# "yemen",
# "zambia",
# "zimbabwe",
# "coral_sea_islands_territory",
# "republic_of_kosovo"
# ]
# },
# "resolution": {
# "tags": [
# "national",
# "subnational"
# ]
# },
# "cities": {
# "tags": [
# "cities",
# "traffic",
# "congestion",
# "fatalities",
# "death",
# "road_safety",
# "infrastructure",
# "road",
# "commute",
# "traffic_feed",
# "transport",
# "transportation",
# "rapid_transport",
# "bus",
# "train",
# "rail",
# "public_transport",
# "cycle",
# "cycling",
# "bike",
# "accessibility",
# "urban",
# "urban_extent",
# "settlement",
# "informal_settlement",
# "city_extent",
# "urban_expansion",
# "park",
# "greenspace",
# "urban_planning",
# "nightlights",
# "air_quality",
# "no2",
# "nitrogen_dioxide",
# "emissions",
# "air_pollution",
# "pollution",
# "co",
# "carbon_monoxide",
# "bc",
# "black_carbon",
# "o3",
# "ozone",
# "so2",
# "sulfur_dioxide",
# "wind",
# "wind_speed",
# "wind_direction",
# "wastewater",
# "treatment",
# "sewage",
# "plastic",
# "public_service",
# "affiliation",
# "membership",
# "zone",
# "zoning",
# "waste",
# "commercial",
# "building",
# "particulate_matter",
# "pm2.5",
# "heat_island",
# "height",
# "slum",
# "resilience",
# "environmental_health",
# "environmental_quality",
# "employment"
# ]
# },
# "climate": {
# "tags": [
# "climate",
# "ghg",
# "greenhouse_gas",
# "carbon_dioxide",
# "temperature",
# "sea_ice",
# "ice_shelf",
# "sea_level",
# "snow",
# "snow_cover",
# "ice",
# "glacier",
# "climate_change",
# "emissions",
# "ph",
# "sea_surface_temperature",
# "carbon",
# "biomass",
# "carbon_storage",
# "ocean",
# "vulnerability",
# "coastal",
# "acidification",
# "weather",
# "oceans",
# "carbon_sinks",
# "extreme_event",
# "sea_level",
# "sea_level_rise",
# "slr",
# "adaptation",
# "hurricane",
# "storms",
# "precipitation",
# "methane",
# "storm_intensity",
# "bioclimate",
# "ch4",
# "biome",
# "nitrous_oxide",
# "n2o",
# "indcs",
# "intended_nationally_determined_contributions",
# "peatland",
# "ndcs",
# "nationally_determined_contributions",
# "greywater",
# "groundwater",
# "resource",
# "anomaly"
# ]
# }
# }
# ```
# +
text = '{"location":{"tags":["global","quasi-global","tropic","temperate","boreal","Arctic","Antarctic","Afghanistan","Aland","Albania","Algeria","American_Samoa","Andorra","Angola","Anguilla","Antarctica","Antigua_and_Barb.","Argentina","Armenia","Aruba","Australia","Austria","Azerbaijan","Bahamas","Bahrain","Bangladesh","Barbados","Belarus","Belgium","Belize","Benin","Bermuda","Bhutan","Bolivia","Bosnia_and_Herz.","Botswana","Brazil","Br._Indian_Ocean_Ter.","Brunei","Bulgaria","Burkina_Faso","Burundi","Cambodia","Cameroon","Canada","Cape_Verde","Cayman_Is.","Central_African_Rep.","Chad","Chile","China","Colombia","Comoros","Congo","Dem._Rep._Congo","Cook_Is.","Costa_Rica","Côte_d\'Ivoire","Croatia","Cuba","Curaçao","Cyprus","Czech_Rep.","Denmark","Djibouti","Dominica","Dominican_Rep.","Ecuador","Egypt","El_Salvador","Eq._Guinea","Eritrea","Estonia","Ethiopia","Falkland_Is.","Faeroe_Is.","Fiji","Finland","France","Fr._Polynesia","Fr._S._Antarctic_Lands","Gabon","Gambia","Georgia","Germany","Ghana","Gibraltar","Greece","Greenland","Grenada","Guam","Guatemala","Guernsey","Guinea","Guinea-Bissau","Guyana","Haiti","Heard_I._and_McDonald_Is.","Vatican","Honduras","Hong_Kong","Hungary","Iceland","India","Indonesia","Iran","Iraq","Ireland","Isle_of_Man","Israel","Italy","Jamaica","Japan","Jersey","Jordan","Kazakhstan","Kenya","Kiribati","Dem._Rep._Korea","Korea","Kuwait","Kyrgyzstan","Lao_PDR","Latvia","Lebanon","Lesotho","Liberia","Libya","Liechtenstein","Lithuania","Luxembourg","Macao","Macedonia","Madagascar","Malawi","Malaysia","Maldives","Mali","Malta","Marshall_Is.","Mauritania","Mauritius","Mexico","Micronesia","Moldova","Monaco","Mongolia","Montenegro","Montserrat","Morocco","Mozambique","Myanmar","Namibia","Nauru","Nepal","Netherlands","New_Caledonia","New_Zealand","Nicaragua","Niger","Nigeria","Niue","Norfolk_Island","N._Mariana_Is.","Norway","Oman","Pakistan","Palau","Palestine","Panama","Papua_New_Guinea","Paraguay","Peru","Philippines","Pitcairn_Is.","Poland"
,"Portugal","Puerto_Rico","Qatar","Romania","Russia","Rwanda","St-Barthélemy","Saint_Helena","St._Kitts_and_Nevis","Saint_Lucia","St-Martin","St._Pierre_and_Miquelon","St._Vin._and_Gren.","Samoa","San_Marino","São_Tomé_and_Principe","Saudi_Arabia","Senegal","Serbia","Seychelles","Sierra_Leone","Slovakia","Singapore","Sint_Maarten","Slovenia","Solomon_Is.","Somalia","South_Africa","S._Geo._and_S._Sandw._Is.","S._Sudan","Spain","Sri_Lanka","Sudan","Suriname","Swaziland","Sweden","Switzerland","Syria","Taiwan","Tajikistan","Tanzania","Thailand","Timor-Leste","Togo","Tonga","Trinidad_and_Tobago","Tunisia","Turkey","Turkmenistan","Turks_and_Caicos_Is.","Tuvalu","Uganda","Ukraine","United_Arab_Emirates","United_Kingdom","United_States","U.S._Minor_Outlying_Is.","Uruguay","Uzbekistan","Vanuatu","Venezuela","Vietnam","British_Virgin_Is.","U.S._Virgin_Is.","Wallis_and_Futuna_Is.","W._Sahara","Yemen","Zambia","Zimbabwe","Coral_Sea_Islands_Territory","Republic_of_Kosovo"]},"dataset_type":{"tags":["raster","vector","table","geospatial","non_geospatial"]},"frequency":{"tags":["weekly","near_real-time","daily","weekly","monthly","annual","projection"]},"resolution":{"tags":["national","subnational"]},"function":{"tags":["planet_pulse","explore","alert","insight","dashboard","timeseries"]},"reference_map":{"tags":["reference_map","political","boundaries","satellite","imagery","elevation","land_cover","land_use","land_classification","land_type","land_units","biome","terrain","slope","road"]},"forest":{"tags":["forest","forest_change","forest_loss","forest_gain","forest_cover","fire","burn","deforestation","degradation","restoration","land_cover","intact_forest","logging","mangrove"]},"water":{"tags":["water","water_avaliablility","water_extent","water_scarcity","water_demand","water_pollution","flood","drought","precipitation","rainfall","rain","water_quality","nitrogen","phosphorus","dissolved_oxygen","temperature","total_suspended_solids","water_scarcity","water_risk","water_str
ess","drinking_water","sewage","wastewater","treatment","nutrient","water_access","access","water_consumption","infrastructure","change","seasonality","greywater","fresh_water","dam","reservoir","water_source","sedimentation","sediment","erosion","pollution","surface_water","lake","river","sea","soil_moisture","ocean","watershed","drainage_basin","catchment","carbon","organic","hydrologic","resource","clean_water","silica","HAB","harmful_algal_blooms","hydropower","anomaly","water_conflict"]},"food":{"tags":["food","nutrition","malnutrition","fertilizer","hunger","famine","food_security","food_supply","production","resource","crop_health","crop_yield","livestock","food_waste","diets","malnutrition","food_production","food_consumption","soil","cropland","farmland","agriculture","irrigation","irrigated","rain_fed","food_price","price","crop","vegetation","vegetation_health","crop_health","organic","meat","beef","lamb","crop_calendar","harvest","NDVI","anomaly","aquaculture","fish","corn","maize","wheat","rice","soy","palm_oil","shrimp","pesticide","yield_gap"]},"climate":{"tags":["climate","GHG","greenhouse_gas","carbon_dioxide","temperature","sea_ice","ice_shelf","sea_level","snow","snow_cover","ice","glacier","climate_change","emissions","pH","sea_surface_temperature","carbon","biomass","carbon_storage","ocean","vulnerability","coastal","acidification","weather","oceans","carbon_sinks","extreme_event","sea_level","sea_level_rise","SLR","adaptation","hurricane","storms","precipitation","methane","storm_intensity","bioclimate"," CH4"," biome"," nitrous_oxide"," N2O"," INDCs"," Intended_Nationally_Determined_Contributions"," peatland"," NDCs"," Nationally_Determined_Contributions"," greywater"," groundwater"," resource"," 
anomaly"]},"energy":{"tags":["energy","energy_production","production","electricity","energy_efficiency","renewable","nonrenewable","power_line","emissions","power_plant","solar","wind","geothermal","hydroelectric","hydropower","nuclear","biofuel","coal","oil","petroleum","energy_hazard","spill","chemical","pipeline","nighttime_lights ","uranium","thorium","access","energy_access","oil_refineries","oil_reserves","reserve","flare","flaring","natural_gas","shale","power","potential","radioactive","resource"]},"commerce":{"tags":["commerce","commodities","supply_chain","supply","demand","trade","investment","regulation","concessions","agreement","transparency","coal","uranium","bauxite","sulfur","lithium","copper ","iron_ore","silicon","mineral","network","shipping","port","GDP","gross_domestic_product","GNI","gross_national_income","limit","infrastructure","route","market","production","resource","economy","recycle","waste","material_flows","price"]},"cities":{"tags":["cities","traffic","congestion","fatalities","death","road_safety","infrastructure","road","commute","traffic_feed","transport","transportation","rapid_transport","bus","train","rail","public_transport","cycle","cycling","bike ","accessibility","urban","urban_extent","settlement","informal_settlement","city_extent","urban_expansion","park","greenspace","urban_planning","nightlights","air_quality","NO2","nitrogen_dioxide","emissions","air_pollution","pollution","CO","carbon_monoxide","BC","black_carbon","O3","ozone","SO2","sulfur_dioxide","wind","wind_speed","wind_direction","wastewater","treatment","sewage","plastic","public_service","affiliation","membership","zone","zoning","waste","commercial","building","particulate_matter","PM2.5","heat_island","height","slum","resilience","environmental_health","environmental_quality","employment"]},"society":{"tags":["society","people","population","demographic","education","educate","literacy","illiterate","school","primary","secondary","youth ","children 
","adolescents","health","governance","government","development","political_stability","human_rights","land_rights","refugees","conflict","protest","migrant","justice","information","participation","security","accountability","stability","corruption","rule_of_law","child_mortality","infant_mortality","poverty ","cooking_fuel","sanitation","water","electricity","floor_materials","assets","acquisition","adaptation","capacity","aid","foreign_aid","tone","tenure","sustainable","asylum","displaced","land_grab","land_ownership","migration","mortality","SDGs","sustainable_development_goals","resilience","fragility","freedom","GDP","gross_domestic_product","GINI","inequality","life_expectancy","sensitivity","response","crisis","economic","income","fatalities","death","asylum","violence","peace","gender","women","girls","discrimination"]},"disasters":{"tags":["disaster","natural_disaster","earthquake","seismic","flood","volcano","volcanic","eruption","drought","fire","outbreak","cold_wave","heat_wave","storm","landslide","hazard","risk","stress","vulnerability","hurricane","typhoon","extreme_event","explosion"]},"biodiversity":{"tags":["biodiversity","marine_life","coral_reef","bleaching","ocean","hotspot","risk","habitat","species","endangered","conservation","human_impact","protected_area","reserve","park","fish","fishing","illegal","poaching","bleaching","hunting","fisheries","ecosystem","amphibian","commercial","biome","mammal","mangrove","plant","bird","intact","habitat_loss","extinction","range"]}}'
import json

vocabularies = json.loads(text)
# -
text = 'global, raster, geospatial, annual, explore, forest, forest_cover, land_cover'
tags = text.split(', ')
# + code_folding=[]
data_vocabulary = {
"vocabularies":{}
}
for vocabulary in vocabularies:
data = list(set(tags) & set(vocabularies[vocabulary]['tags']))
if data:
data_vocabulary['vocabularies'].update({vocabulary:{'tags': data}})
json.dumps(data_vocabulary)
# vocabularies[vocabulary]['tags'] = [element.strip().lower().encode("ascii", "ignore").decode("utf-8",'ignore') for element in vocabularies[vocabulary]['tags']]
# +
import requests
import pandas as pd

r = requests.get('https://docs.google.com/spreadsheets/d/1a5-7zZgtVWc4BQLBUg0t4pdn-UNEHzo7gXF8tnSYqdo/export?format=csv', stream=True)
with open('master_file.csv', 'wb') as fd:
for chunk in r.iter_content(chunk_size=128):
fd.write(chunk)
masterConfig = pd.read_csv('master_file.csv')
masterConfig.head(3)
# -
for index in masterConfig.dataset_id.index:
if str(masterConfig.dataset_id[index]) != 'nan':
url = 'http://api.resourcewatch.org/dataset/' + masterConfig.dataset_id[index]
r = requests.get(url)
if r.status_code != 200:
print(url)
print(index + 2)  # +2 maps the 0-based index back to the spreadsheet row number
| ResourceWatch/Api_definition/vocabulary_definition.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
import time
import numpy as np
import tensorflow as tf
import cv2
import os
import glob
import shutil
from IPython.display import clear_output
from utils import label_map_util
from utils import visualization_utils_color as vis_util
from IPython.display import display, Image
from joblib import Parallel, delayed
from tensorflow_face_detector import TensoflowFaceDector
# +
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = './model/frozen_inference_graph_face.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = './protos/face_label_map.pbtxt'
NUM_CLASSES = 2
# -
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
def display_cv_image(image, format='.jpg'):
decoded_bytes = cv2.imencode(format, image)[1].tobytes()
display(Image(data=decoded_bytes))
def detect_n_faces_and_copyfile(imageFile):
image = cv2.imread(imageFile)
faceDetectImage = image.copy()
[h, w] = faceDetectImage.shape[:2]
(boxes, scores, classes, num_detections) = tDetector.run(faceDetectImage)
faceBoxes = vis_util.visualize_boxes_and_labels_on_image_array(
faceDetectImage,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=4)
personCount = len(faceBoxes)
if personCount == 0:
shutil.copyfile(imageFile, noPersonFolder + imageFile.split("\\")[-1])
elif personCount == 1:
shutil.copyfile(imageFile, onlyFolder + imageFile.split("\\")[-1])
saveFacePath = onlyFolderFace
elif personCount == 2:
shutil.copyfile(imageFile, twoShotFolder + imageFile.split("\\")[-1])
saveFacePath = twoShotFolderFace
else:
shutil.copyfile(imageFile, overThreeFolder + imageFile.split("\\")[-1])
saveFacePath = overThreeFolderFace
imageHeight, imageWidth = image.shape[:2]
for index, box in enumerate(faceBoxes):
ymin, xmin, ymax, xmax = box
(left, right, top, bottom) = (int(xmin * imageWidth), int(xmax * imageWidth), int(ymin * imageHeight), int(ymax * imageHeight))
cropWidth = right - left
cropHeight = bottom - top
if cropHeight > cropWidth:
diff = (cropHeight - cropWidth) / 2
if int(left - diff) < 0 or int(right + diff) > imageWidth:
top = int(top + diff)
bottom = int(bottom - diff)
else:
left = int(left - diff)
right = int(right + diff)
else:
diff = (cropWidth - cropHeight) / 2
if int(top - diff) < 0 or int(bottom + diff) > imageHeight:
left = int(left + diff)
right = int(right - diff)
else:
top = int(top - diff)
bottom = int(bottom + diff)
cv2.imwrite(saveFacePath + imageFile.split("\\")[-1].split(".")[0] + "_face_" + str(index) + ".jpg", image[top:bottom, left:right])
tDetector = TensoflowFaceDector(PATH_TO_CKPT)
originFolderPath = "../images/origin/"
saveFolderPath = "../images/classification/"
for memberName in os.listdir(originFolderPath):
# Create the necessary output folders
onlyFolder = saveFolderPath + memberName + "/1_only/"
twoShotFolder = saveFolderPath + memberName + "/2_shot/"
overThreeFolder = saveFolderPath + memberName + "/3_over/"
noPersonFolder = saveFolderPath + memberName + "/others/"
onlyFolderFace = saveFolderPath + memberName + "/face/1_only/"
twoShotFolderFace = saveFolderPath + memberName + "/face/2_shot/"
overThreeFolderFace = saveFolderPath + memberName + "/face/3_over/"
os.makedirs(onlyFolder, exist_ok=True)
os.makedirs(twoShotFolder, exist_ok=True)
os.makedirs(overThreeFolder, exist_ok=True)
os.makedirs(noPersonFolder, exist_ok=True)
os.makedirs(onlyFolderFace, exist_ok=True)
os.makedirs(twoShotFolderFace, exist_ok=True)
os.makedirs(overThreeFolderFace, exist_ok=True)
fileList = glob.glob(originFolderPath + memberName + "/*")
for imageFile in fileList:
detect_n_faces_and_copyfile(imageFile)
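# The crop-expansion logic inside `detect_n_faces_and_copyfile` widens the shorter side of each detected box to make a square face crop, and falls back to shrinking the longer side when padding would spill outside the image. A standalone sketch of that rule (the function name and tuple interface are illustrative, not part of the original script):

```python
def square_crop(left, right, top, bottom, image_width, image_height):
    """Expand a box to a square, clamping to the image bounds.

    If the padded square would spill outside the image, the longer
    side is shrunk instead, mirroring the notebook's fallback branch.
    """
    crop_width = right - left
    crop_height = bottom - top
    if crop_height > crop_width:
        diff = (crop_height - crop_width) / 2
        if int(left - diff) < 0 or int(right + diff) > image_width:
            top, bottom = int(top + diff), int(bottom - diff)  # shrink height
        else:
            left, right = int(left - diff), int(right + diff)  # widen box
    else:
        diff = (crop_width - crop_height) / 2
        if int(top - diff) < 0 or int(bottom + diff) > image_height:
            left, right = int(left + diff), int(right - diff)  # shrink width
        else:
            top, bottom = int(top - diff), int(bottom + diff)  # grow height
    return left, right, top, bottom
```

# For a 40x80 box well inside a 640x480 image this yields an 80x80 square; a box touching the left edge degrades to a 40x40 crop instead of going out of bounds.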
| face_detection/tensorflow_face_detect.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_tensorflow_p36
# language: python
# name: conda_tensorflow_p36
# ---
# # MNIST distributed training
#
# The **SageMaker Python SDK** helps you deploy your models for training and hosting in optimized, production-ready containers in SageMaker. The SageMaker Python SDK is easy to use, modular, extensible and compatible with TensorFlow and MXNet. This tutorial focuses on how to build a convolutional neural network model and train it on the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) using **TensorFlow distributed training**.
#
# ### Lab Time
# This module takes around 13 to 15 minutes to complete.
#
# ### Set up the environment
import timeit
start_time = timeit.default_timer()
import os
import sagemaker
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import boto3
os.system("aws s3 cp s3://sagemaker-workshop-pdx/mnist/utils.py utils.py")
os.system("aws s3 cp s3://sagemaker-workshop-pdx/mnist/mnist.py mnist.py")
# +
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
# -
# ### Download the MNIST dataset
# +
import utils
from tensorflow.contrib.learn.python.learn.datasets import mnist
import tensorflow as tf
# train-images-idx3-ubyte.gz: training set images - 55,000 training images, 5,000 validation images
# train-labels-idx1-ubyte.gz: training set labels matching the images
# t10k-images-idx3-ubyte.gz: test set images - 10,000 images
# t10k-labels-idx1-ubyte.gz: test set labels matching the images
data_sets = mnist.read_data_sets('data', dtype=tf.uint8, reshape=False, validation_size=5000)
utils.convert_to(data_sets.train, 'train', 'data')
utils.convert_to(data_sets.validation, 'validation', 'data')
utils.convert_to(data_sets.test, 'test', 'data')
# -
# ### Upload the data
# We use the ```sagemaker.Session.upload_data``` function to upload our datasets to an S3 location. The return value, ```inputs```, identifies the location -- we will use it later when we start the training job.
inputs = sagemaker_session.upload_data(path='data', key_prefix='data/DEMO-mnist')
# # Construct a script for distributed training
# Here is the full code for the network model:
# !cat 'mnist.py'
# The script here is an adaptation of the [TensorFlow MNIST example](https://github.com/tensorflow/models/tree/master/official/mnist). It provides a ```model_fn(features, labels, mode)```, which is used for training, evaluation and inference.
#
# ## A regular ```model_fn```
#
# A regular **```model_fn```** follows the pattern:
# 1. [defines a neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L96)
# - [applies the ```features``` in the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L178)
# - [if the ```mode``` is ```PREDICT```, returns the output from the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L186)
# - [calculates the loss function comparing the output with the ```labels```](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L188)
# - [creates an optimizer and minimizes the loss function to improve the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L193)
# - [returns the output, optimizer and loss function](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L205)
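# As a framework-free illustration of that pattern (not SageMaker's or TensorFlow's actual API -- the linear model and parameter names are invented for this sketch), a ```model_fn``` is one function that branches on ```mode```:

```python
import numpy as np

def model_fn(features, labels, mode, params):
    """Toy model_fn: a linear model y = w*x + b, branching on mode."""
    w, b = params["w"], params["b"]
    predictions = w * features + b               # apply features to the "network"
    if mode == "PREDICT":
        return {"predictions": predictions}      # inference: return outputs only
    loss = np.mean((predictions - labels) ** 2)  # compare outputs with labels
    if mode == "TRAIN":                          # one gradient step on the loss
        grad_w = np.mean(2 * (predictions - labels) * features)
        grad_b = np.mean(2 * (predictions - labels))
        params["w"] -= params["lr"] * grad_w
        params["b"] -= params["lr"] * grad_b
    return {"loss": loss}                        # TRAIN/EVAL: return the loss
```

# Repeated calls with ```mode="TRAIN"``` drive the loss down, while ```mode="PREDICT"``` skips the loss entirely -- the same control flow the real ```model_fn``` follows with TensorFlow ops.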
#
# ## Writing a ```model_fn``` for distributed training
# When distributed training happens, the same neural network is sent to multiple training instances. Each instance predicts on a batch of the dataset, calculates the loss, and minimizes it with the optimizer. One complete pass through this loop is called a **training step**.
#
# ### Synchronizing training steps
# A [global step](https://www.tensorflow.org/api_docs/python/tf/train/global_step) is a global variable shared between the instances. It's necessary for distributed training so that the optimizer keeps track of the number of **training steps** across runs:
#
# ```python
# train_op = optimizer.minimize(loss, tf.train.get_or_create_global_step())
# ```
#
# That is the only required change for distributed training!
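# A toy way to see why the shared counter matters (plain Python, no TensorFlow -- the `Worker` class and `make_global_step` helper are invented for this sketch): each worker keeps its own local step count, but every training step also advances one shared global step, which is what schedules like learning-rate decay read:

```python
import itertools

def make_global_step():
    """A shared counter: every call to the returned function advances it."""
    counter = itertools.count(1)
    return lambda: next(counter)

class Worker:
    """Stand-in for one training instance."""
    def __init__(self, bump_global_step):
        self.bump_global_step = bump_global_step
        self.local_steps = 0

    def train_step(self):
        # each optimizer.minimize(loss, global_step) call would do this bump
        self.local_steps += 1
        return self.bump_global_step()

bump = make_global_step()
workers = [Worker(bump) for _ in range(3)]
last_global_step = 0
for _ in range(4):  # four rounds of "distributed" training
    for w in workers:
        last_global_step = w.train_step()
```

# After four rounds each worker has seen only 4 local steps, but the shared global step has reached 12, so all instances agree on how far training has progressed.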
# ## Create a training job using the sagemaker.TensorFlow estimator
# +
from sagemaker.tensorflow import TensorFlow
mnist_estimator = TensorFlow(entry_point='mnist.py',
role=role,
framework_version='1.10.0',
training_steps=1000,
evaluation_steps=100,
train_instance_count=1,
train_instance_type='ml.c4.xlarge') # try: ml.m4.xlarge, ml.m5.xlarge, ml.c5.xlarge
mnist_estimator.fit(inputs)
# -
# The **```fit```** method creates a training job across the training instances. The logs above show the instances performing training and evaluation, and incrementing the number of **training steps**.
#
# At the end of training, the job generates a saved model for TensorFlow Serving.
# # Deploy the trained model to prepare for predictions
#
# The deploy() method creates an endpoint which serves prediction requests in real-time.
mnist_predictor = mnist_estimator.deploy(initial_instance_count=1,
instance_type='ml.m4.xlarge')
# # Invoking the endpoint
# +
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
# read_data_sets() returns a dictionary with a DataSet instance for each of the three datasets:
# data_sets.train: 55,000 images and labels for initial training
# data_sets.validation: 5,000 images and labels for iteratively validating training accuracy
# data_sets.test: 10,000 images and labels for final testing of training accuracy
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
for i in range(10):
data = mnist.test.images[i].tolist()
tensor_proto = tf.make_tensor_proto(values=np.asarray(data), shape=[1, len(data)], dtype=tf.float32)
predict_response = mnist_predictor.predict(tensor_proto)
print("========================================")
label = np.argmax(mnist.test.labels[i])
print("label is {}".format(label))
prediction = predict_response['outputs']['classes']['int64_val'][0]
print("prediction is {}".format(prediction))
# -
# # Deleting the endpoint
sagemaker.Session().delete_endpoint(mnist_predictor.endpoint)
# report elapsed wall-clock time for the notebook, in minutes
elapsed = timeit.default_timer() - start_time
print(elapsed/60)
| _archiving/src/release/dev-day/Module4-Distributed MNIST Using TensorFlow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
import numpy as np
import savReaderWriter as spss
raw_data = spss.SavReader('/home/aastroza/Dropbox/lectura/data/BaseMaestraECL2014.sav', returnHeader = True) # This is fast
raw_data_list = list(raw_data) # this is slow
data = pd.DataFrame(raw_data_list) # this is slow
data = data.rename(columns=data.loc[0]).iloc[1:] # setting columnheaders, this is slow too.
data.columns
ecl = data[(data['Region'] == 13) & (data['Comuna_CODIGO'] < 13402)]
ecl['Comuna_CODIGO']
# +
name_map = dict(zip([13101, 13102, 13103
,13104,13105,13106,13107,13108,13109,13110,13111,13112,13113,13114,13115,13116,13117,13118,13119,13120,13121,13122,13123
,13124,13125,13126,13127,13128,13129,13130,13131,13132,13201,13202,13203,13301,13302,13303,13401],
map(lambda x: x.upper(), [u'Santiago',
u'Cerrillos',
u'<NAME>',
u'Conchali',
u'El Bosque',
u'Estacion Central',
u'Huechuraba',
u'Independencia',
u'La Cisterna',
u'La Florida',
u'La Granja',
u'La Pintana',
u'La Reina',
u'Las Condes',
u'Lo Barnechea',
u'Lo Espejo',
u'Lo Prado',
u'Macul',
u'Maipu',
u'nunoa',
u'<NAME>',
u'Penalolen',
u'Providencia',
u'Pudahuel',
u'Quilicura',
u'Quinta Normal',
u'Recoleta',
u'Renca',
u'<NAME>',
u'<NAME>',
u'<NAME>',
u'Vitacura',
u'<NAME>',
u'Pirque',
u'<NAME>',
u'Colina',
u'Lampa',
u'Tiltil',
u'San Bernardo'])))
name_map
# -
# # Agrupando por Comuna
variables = ('comuna', 'FEXP', 'sexo', 'edad', 'A9B', 'a.J6', 'J1D', 'O2', 'a.A6', 'b.M26B')
# +
from collections import defaultdict
from itertools import repeat, chain
values = defaultdict(list)
for id_comuna, df_comuna in ecl.groupby('Comuna_CODIGO'):
values['comuna'].append(name_map[id_comuna])
for k, v in df_comuna.ix[:,variables].iteritems():
if k == 'comuna':
continue
if k == 'edad':
v = df_comuna['FEXP'].sum()
k = 'poblacion'
if k == 'FEXP':
continue
if k == 'sexo':
continue
if k == 'A9B':  # When you read, do you do it for pleasure/leisure?
voluntad = [0, 0, 0, 0]
for i, df_voluntad in df_comuna.groupby('A9B'):
if i == 1:
voluntad[0] = df_voluntad['FEXP'].sum()
if i == 2:
voluntad[1] = df_voluntad['FEXP'].sum()
if i == 88:
voluntad[2] = df_voluntad['FEXP'].sum()
if i == 99:
voluntad[3] = df_voluntad['FEXP'].sum()
v = voluntad[0]/(voluntad[0] + voluntad[1] + voluntad[2] + voluntad[3])
k = 'voluntad'
if k == 'a.J6':  # Are you a member of any library?
socio = [0, 0, 0, 0]
for i, df_socio in df_comuna.groupby('a.J6'):
if i == 1:
socio[0] = df_socio['FEXP'].sum()
if i == 2:
socio[1] = df_socio['FEXP'].sum()
if i == 88:
socio[2] = df_socio['FEXP'].sum()
if i == 99:
socio[3] = df_socio['FEXP'].sum()
v = socio[0]/(socio[0] + socio[1] + socio[2] + socio[3])
k = 'socio'
if k == 'J1D':  # Do you read on public transport?
transporte = [0, 0, 0, 0]
for i, df_transporte in df_comuna.groupby('J1D'):
if i == 1:
transporte[0] = df_transporte['FEXP'].sum()
if i == 2:
transporte[1] = df_transporte['FEXP'].sum()
if i == 88:
transporte[2] = df_transporte['FEXP'].sum()
if i == 99:
transporte[3] = df_transporte['FEXP'].sum()
v = transporte[0]/(transporte[0] + transporte[1] + transporte[2] + transporte[3])
k = 'transporte'
if k == 'a.A6':  # How many books are there in your home?
libros = [0, 0, 0, 0, 0, 0, 0, 0]
for i, df_libros in df_comuna.groupby('a.A6'):
if i == 1:
libros[0] = df_libros['FEXP'].sum()*3
if i == 2:
libros[1] = df_libros['FEXP'].sum()*8
if i == 3:
libros[2] = df_libros['FEXP'].sum()*18
if i == 4:
libros[3] = df_libros['FEXP'].sum()*38
if i == 5:
libros[4] = df_libros['FEXP'].sum()*75
if i == 6:
libros[5] = df_libros['FEXP'].sum()*150
if i == 7:
libros[6] = df_libros['FEXP'].sum()*350
if i == 8:
libros[7] = df_libros['FEXP'].sum()*500
v = sum(libros)
k = 'nlibrosA'
if k == 'b.M26B':  # How many books are there in your home?
libros = [0, 0, 0, 0, 0, 0, 0, 0]
for i, df_libros in df_comuna.groupby('b.M26B'):
if i == 1:
libros[0] = df_libros['FEXP'].sum()*3
if i == 2:
libros[1] = df_libros['FEXP'].sum()*8
if i == 3:
libros[2] = df_libros['FEXP'].sum()*18
if i == 4:
libros[3] = df_libros['FEXP'].sum()*38
if i == 5:
libros[4] = df_libros['FEXP'].sum()*75
if i == 6:
libros[5] = df_libros['FEXP'].sum()*150
if i == 7:
libros[6] = df_libros['FEXP'].sum()*350
if i == 8:
libros[7] = df_libros['FEXP'].sum()*500
v = sum(libros)
k = 'nlibrosB'
if k == 'O2':  # Total net household income?
daut_resp = df_comuna[df_comuna['O2'].isin([1,2,3,4,5,6,7,8,9,10])]['O2'].values
daut_freq = df_comuna[df_comuna['O2'].isin([1,2,3,4,5,6,7,8,9,10])]['FEXP'].values
daut_zip = zip(daut_resp, daut_freq.astype(int))
daut_array = np.array(list(chain(*[repeat(p, w) for p, w in daut_zip])), dtype=np.float64)
v = np.median(daut_array)
k = 'ingreso'
values[k].append(v)
# -
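# The four near-identical if-chains above all map an answer category (1-8) to a midpoint book count and weight it by the expansion factor FEXP. The same estimate can be written once with a lookup dict -- a sketch only, where the midpoints 3, 8, ..., 500 are those hard-coded above and the `estimated_books` helper is invented here:

```python
import pandas as pd

# midpoints of the 8 answer brackets for "how many books are there in your home?"
BOOK_MIDPOINTS = {1: 3, 2: 8, 3: 18, 4: 38, 5: 75, 6: 150, 7: 350, 8: 500}

def estimated_books(df, answer_col, weight_col='FEXP'):
    """Weighted total books: sum of FEXP * bracket midpoint over valid answers."""
    midpoints = df[answer_col].map(BOOK_MIDPOINTS)   # NaN for 88/99 non-answers
    return (df[weight_col] * midpoints).sum()        # NaNs are skipped by sum()
```

# This replaces the per-category accumulator array entirely: non-answer codes (88/99) simply map to NaN and drop out of the sum.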
df = pd.DataFrame(data=values)
df['librospp'] = (df['nlibrosA'] + df['nlibrosB'])/ df['poblacion']
df
import matplotlib.pyplot as plt
# %matplotlib inline
df2 = df.sort_index(by=['librospp', 'comuna'], ascending=[True, True])
ax = df2['librospp'].plot(kind='bar', title ="Libros por Persona",figsize=(18,5), fontsize=12)
#ax.set_xlabel("Hour",fontsize=12)
#ax.set_ylabel("V",fontsize=12)
ax.set_xticklabels(list(df2['comuna']))
plt.show()
ax = df.plot(kind='scatter', x='librospp', y='ingreso',
title ="Decil de Ingreso vs Libros por Persona",figsize=(12,8), fontsize=12)
for i in [0,1,2,4,10,12,13,14,19,22,25,28,30,31,32,33,34]:
#for i in range(0,35):
ax.text(df['librospp'][i]+1, df['ingreso'][i]+0.1, df['comuna'][i])
plt.show()
# # Agrupando por Ingresos a Nivel Nacional
variables = ('FEXP', 'O2', 'a.A6', 'b.M26B', 'A8')
# +
values = defaultdict(list)
for id_ingreso, df_ingreso in data.groupby('O2'):
values['ingreso'].append(id_ingreso)
for k, v in df_ingreso.ix[:,variables].iteritems():
if k == 'O2':
v = df_ingreso['FEXP'].sum()
k = 'poblacion'
if k == 'FEXP':
continue
if k == 'a.A6':  # How many books are there in your home?
libros = [0, 0, 0, 0, 0, 0, 0, 0]
for i, df_libros in df_ingreso.groupby('a.A6'):
if i == 1:
libros[0] = df_libros['FEXP'].sum()*3
if i == 2:
libros[1] = df_libros['FEXP'].sum()*8
if i == 3:
libros[2] = df_libros['FEXP'].sum()*18
if i == 4:
libros[3] = df_libros['FEXP'].sum()*38
if i == 5:
libros[4] = df_libros['FEXP'].sum()*75
if i == 6:
libros[5] = df_libros['FEXP'].sum()*150
if i == 7:
libros[6] = df_libros['FEXP'].sum()*350
if i == 8:
libros[7] = df_libros['FEXP'].sum()*500
v = sum(libros)
k = 'nlibrosA'
if k == 'b.M26B':  # How many books are there in your home?
libros = [0, 0, 0, 0, 0, 0, 0, 0]
for i, df_libros in df_ingreso.groupby('b.M26B'):
if i == 1:
libros[0] = df_libros['FEXP'].sum()*3
if i == 2:
libros[1] = df_libros['FEXP'].sum()*8
if i == 3:
libros[2] = df_libros['FEXP'].sum()*18
if i == 4:
libros[3] = df_libros['FEXP'].sum()*38
if i == 5:
libros[4] = df_libros['FEXP'].sum()*75
if i == 6:
libros[5] = df_libros['FEXP'].sum()*150
if i == 7:
libros[6] = df_libros['FEXP'].sum()*350
if i == 8:
libros[7] = df_libros['FEXP'].sum()*500
v = sum(libros)
k = 'nlibrosB'
if k == 'A8':  # What kind of reader do you consider yourself?
daut_resp = df_ingreso[df_ingreso['A8'].isin([1,2,3,4,5])]['A8'].values
daut_freq = df_ingreso[df_ingreso['A8'].isin([1,2,3,4,5])]['FEXP'].values
daut_zip = zip(daut_resp, daut_freq.astype(int))
daut_array = np.array(list(chain(*[repeat(p, w) for p, w in daut_zip])), dtype=np.float64)
v = np.mean(daut_array)
k = 'lector'
values[k].append(v)
# -
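# Both grouped loops estimate weighted statistics by physically repeating each response FEXP times, which can use a lot of memory for large expansion factors. A weighted median can instead be read off the cumulative weights directly -- a sketch; `weighted_median` is not a function used above:

```python
import numpy as np

def weighted_median(values, weights):
    """Median of `values` where each value counts `weights[i]` times."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    cum = np.cumsum(weights)
    # first value whose cumulative weight reaches half the total weight
    return values[np.searchsorted(cum, 0.5 * cum[-1])]
```

# This returns the lower weighted median; np.median over the materialized repeats would average the two middle elements when the total weight is even, so the two can differ by half a bracket.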
dfn = pd.DataFrame(data=values)
dfn['librospp'] = (dfn['nlibrosA'] + dfn['nlibrosB'])/ dfn['poblacion']
dfn
| notebooks/ECL2014-getdata.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
# from netCDF4 import Dataset
import numpy as np
import pandas as pd
import xarray as xa
import sklearn as sk
from matplotlib import pyplot as plt
from mpl_toolkits import basemap as bm
import seaborn as sns
# -
sss_file = '../../../data/ISAS15_SSS_Kuroshio.nc'
sst_file = '../../../data/ISAS15_SST_Kuroshio.nc'
sss = xa.open_dataset(sss_file)
sst = xa.open_dataset(sst_file)
print(sst)
print(sss)
# +
sss_lat = sss['latitude'].values
sss_lon = sss['longitude'].values
sss_psal = sss['PSAL'].values
sss_t = sss['time'].values
sst_lat = sst['latitude'].values
sst_lon = sst['longitude'].values
sst_temp = sst['TEMP'].values
sst_depth = sst['depth'].values
sst_t = sst['time'].values
# +
# # remove the mean as a reference to eliminate the meridional effect
# mean_sst_along_lon_t = np.nanmean(np.nanmean(sst_temp, axis=0), axis = 1)
# mean_psal_along_lon_t = np.nanmean(np.nanmean(sss_psal, axis=0), axis = 1)
# remove the mean along longitude
mean_sst_along_lon_t = np.nanmean(sst_temp, axis = 1)
mean_psal_along_lon_t = np.nanmean(sss_psal, axis = 1)
# print(mean_sst_along_lon_t.shape, mean_psal_along_lon_t.shape)
# plt.figure(figsize=(10,5))
# plt.subplot(211)
# plt.plot(sst_lat, mean_sst_along_lon_t)
# plt.title('sst mean along lon & time')
# plt.subplot(212)
# plt.plot(sss_lat, mean_psal_along_lon_t)
# plt.title('psal mean along lon & time');
# -
mean_sst_along_lon_t.repeat(sst_temp.shape[1]).reshape(np.shape(sst_temp)).shape
sst_lat.shape
# plt.plot(sst_lat, sst_temp[0,:, sst_lon==144].ravel())
plt.plot(sst_lat, mean_sst_along_lon_t)
# +
# plt.plot(sst_temp[0][sst_lon==40])
# -
# repeat & reshape so the meridional mean can be subtracted from the SST/PSAL fields
repeat_length = np.shape(sst_temp)[0] * np.shape(sst_temp)[2]
mean_sst_along_lon_t = mean_sst_along_lon_t.repeat(repeat_length).reshape(np.shape(sst_temp))
mean_psal_along_lon_t = mean_psal_along_lon_t.repeat(repeat_length).reshape(np.shape(sss_psal))
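# The repeat/reshape dance above is easy to get wrong, because the repeated values must land on the right axis. NumPy broadcasting sidesteps it entirely: keep the reduced axis with `keepdims=True` and subtract directly. A sketch (the helper name is invented; it assumes the fields are (time, lat, lon) arrays and the mean is taken over one axis):

```python
import numpy as np

def remove_axis_mean(field, axis=1):
    """Subtract the along-axis mean (ignoring NaNs) via broadcasting."""
    return field - np.nanmean(field, axis=axis, keepdims=True)

# toy (time=2, lat=3, lon=4) field
field = np.arange(24, dtype=float).reshape(2, 3, 4)
anom = remove_axis_mean(field, axis=1)
```

# The anomaly has the same shape as the input and a zero mean along the reduced axis, with no intermediate array of repeated values.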
# +
plt.figure(figsize=(10,5))
plt.subplot(221)
plt.imshow(sst_temp[0])
plt.colorbar()
plt.subplot(223)
plt.imshow(sss_psal[0])
plt.colorbar()
sst_temp -= mean_sst_along_lon_t
sss_psal -= mean_psal_along_lon_t
plt.subplot(222)
plt.imshow(sst_temp[0])
plt.colorbar()
plt.subplot(224)
plt.imshow(sss_psal[0])
plt.colorbar()
# -
# +
# time series of maps of sss & sst
# +
# plt.pcolormesh(lat, lon, sss_psal[0])
# -
grad_psal = np.gradient(sss_psal[0])
grad_sst = np.gradient(sst_temp[0])
grad_psal = np.gradient(sss_psal, axis=1)
grad_sst = np.gradient(sst_temp, axis=1)
# +
plt.figure(figsize=(10,5))
plt.subplot(121)
plt.imshow(grad_psal[0])
plt.colorbar()
plt.title('grad PSAL @ 0')
plt.subplot(122)
plt.imshow(grad_sst[0])
plt.colorbar()
plt.title('grad SST @ 0');
# +
plt.figure(figsize=(10,5))
plt.subplot(121)
plt.imshow(grad_psal[0])
plt.title('grad PSAL @ 0')
plt.colorbar()
plt.subplot(122)
plt.imshow(grad_sst[0])
plt.title('grad SST @ 0');
plt.colorbar()
# -
sss_t = pd.to_datetime(sss_t)
plt.hist(grad_psal[0][:,0]);
# +
# plt.hist2d(grad_psal[:,0,:]);
# +
# pd.DataFrame(grad_psal[:,0,:].ravel(), grad_psal[:,1,:].ravel(), columns=['x', 'y'])
# +
data = np.array([grad_psal[:,0,:].ravel(), grad_psal[:,1,:].ravel()]).T
df = pd.DataFrame(data=data, columns=['lat', 'lon'])
sns.jointplot(x="lat", y="lon", data=df, kind="kde");
plt.title('grad psal')
data = np.array([grad_sst[:,0,:].ravel(), grad_sst[:,1,:].ravel()]).T
df = pd.DataFrame(data=data, columns=['lat', 'lon'])
sns.jointplot(x="lat", y="lon", data=df, kind="kde");
plt.title('grad sst')
# -
# plt.hist
it = 0;
grad_sst_min = np.nanmin(grad_sst[it].ravel())
grad_sst_max = np.nanmax(grad_sst[it].ravel())
hist_sst, bins_sst, edges_sst = plt.hist(grad_sst[it].ravel(), bins=50, range=[grad_sst_min, grad_sst_max]);
grad_psal_min = np.nanmin(grad_psal[it].ravel())
grad_psal_max = np.nanmax(grad_psal[it].ravel())
hist_psal, bins_psal, edges_psal = plt.hist(grad_psal[it].ravel(), bins=50, range=[grad_psal_min, grad_psal_max]);
# +
plt.figure(figsize=(12,5))
plt.suptitle('Histograms over all time & space')
plt.subplot(121)
# grad_psal_min = np.nanmin(grad_psal.ravel())
# grad_psal_max = np.nanmax(grad_psal.ravel())
grad_psal_min = -0.4
grad_psal_max = 0.3
hist_psal, bins_psal, edges_psal = plt.hist(grad_psal.ravel(), bins=50, range=[grad_psal_min, grad_psal_max]);
plt.title('Histogram of Grad PSal')
plt.subplot(122)
# grad_sst_min = -2
# grad_sst_max = 1
grad_sst_min = np.nanmin(grad_sst.ravel())
grad_sst_max = np.nanmax(grad_sst.ravel())
hist_sst, bins_sst, edges_sst = plt.hist(grad_sst.ravel(), bins=50, range=[grad_sst_min, grad_sst_max]);
plt.title('Histogram of Grad SST');
# -
# +
# random downsampling
indx = np.random.randint(0,168, size=10)
data = np.array([grad_psal[indx].ravel(), grad_sst[indx].ravel()]).T
df = pd.DataFrame(data=data, columns=['grad psal', 'grad sst'])
# -
sns.jointplot(x="grad psal", y="grad sst", data=df, kind="kde", xlim=[-.1,.1], ylim=[-.7,.2]);
plt.title('joint plot of the hist of grad ')
# +
from sklearn import mixture
# fix # of clusters to loop for
K = 3
# initialize gmm
gmm = mixture.GaussianMixture(n_components=K)
# replace NaNs with the column mean;
# this should have little effect on the clustering result
df2 = pd.DataFrame(df.copy())
col_mean = np.nanmean(df2, axis=0)
df2['grad psal'] = df2['grad psal'].fillna(col_mean[0])
df2['grad sst'] = df2['grad sst'].fillna(col_mean[1])
# fit gmm
gmm = gmm.fit(df2)
print( 'GMM score: ', gmm.score(df2) )
# -
# Print optimized parameters of the modes:
for k, (pos, covar, w) in enumerate(zip(gmm.means_, gmm.covariances_, gmm.weights_)):
    print("\nMode/Class/Component/Cluster ID#", k)
    print("Center:\n", pos)
    print("Covariance:\n", covar)
    print("Weight (prior):\n", w)
# +
# Get the posteriors:
posts = gmm.predict_proba(df2)
# posts is of shape: (n_samples, n_components)
print( "Shape of the posteriors matrix:", posts.shape)
# Get the class maximising the posteriors (hard classif):
# labels = np.argmax(posts,axis=1) # similar to gmm.predict !
labels = gmm.predict(df2)
# labels is of shape: (n_samples, )
print( "Shape of the labels matrix:", labels.shape)
# Compute robustness of the classification:
robust = (np.max(posts,axis=1) - 1./K)*K/(K-1.)
Plist = [0, 0.33, 0.66, 0.9, .99, 1];
rowl0 = ('Unlikely','As likely as not','Likely','Very Likely','Virtually certain')
robust_id = np.digitize(robust, Plist)-1
i_sample = np.random.randint(0,high=df2.shape[0],size=(1,))
print( "\nPosteriors for 1 sample:", np.round(posts[i_sample,:]*100,2), "in %")
print( "Sum of the posteriors:", np.sum(posts[i_sample,:]))
print( "Class id of the sample:", labels[i_sample])
print( "Robustness of the classification:", np.round(robust[i_sample]*100,2), "%, i.e. ", rowl0[robust_id[i_sample[0]]])
# -
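# The robustness formula above rescales the winning posterior so that a uniform posterior maps to 0 and a fully certain one to 1. A small sketch of that normalisation:

```python
import numpy as np

def robustness(post, K):
    # (max posterior - 1/K) * K/(K-1): 0 for uniform posteriors, 1 for certainty
    return (post.max() - 1.0 / K) * K / (K - 1.0)

K = 3
uniform = np.array([1/3, 1/3, 1/3])
certain = np.array([1.0, 0.0, 0.0])
assert np.isclose(robustness(uniform, K), 0.0)
assert np.isclose(robustness(certain, K), 1.0)
```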
# Useful functions for visualisation:
def plot_GMMellipse(gmm,ik,col,ax,label="",std=[1],main_axes=True,**kwargs):
"""
Plot an 1-STD ellipse for a given component (ik) of the GMM model gmm.
This is my routine, simply working with a matplotlib plot method
I also added the possibility to plot the main axes of the ellipse.
(c) <NAME>
"""
id = [0,1]
covariances = gmm.covariances_[ik][(id[0],id[0],id[1],id[1]),(id[0],id[1],id[0],id[1])].reshape(2,2)
d, v = np.linalg.eigh(covariances) # eigenvectors have unit length
d = np.diag(d)
theta = np.arange(0,2*np.pi,0.02)
x = np.sqrt(d[0,0])*np.cos(theta)
y = np.sqrt(d[1,1])*np.sin(theta)
xy = np.array((x,y)).T
ii = 0
for nstd in np.array(std):
ii+=1
ellipse = np.inner(v,xy).T
ellipse = nstd*ellipse + np.ones((theta.shape[0], 1))*gmm.means_[ik,(id[0],id[1])]
if ii == 1:
# p = ax.plot(ellipse[:,0], ellipse[:,1], color=col, axes=ax, label=("%s (%i-std)")%(label,nstd),**kwargs)
p = ax.plot(ellipse[:,0], ellipse[:,1], color=col, axes=ax, label=("%s")%(label),**kwargs)
else:
p = ax.plot(ellipse[:,0], ellipse[:,1], color=col, axes=ax,**kwargs)
if main_axes: # Add Main axes:
for idir in range(2):
l = np.sqrt(d[idir,idir])*v[:,idir].T
start = gmm.means_[ik,(id[0],id[1])]-l
endpt = gmm.means_[ik,(id[0],id[1])]+l
linex = [start[0], endpt[0]]
liney = [start[1], endpt[1]]
plt.plot(linex,liney,color=col,axes=ax)
return p,ax
D = np.array(df2)
gmm.means_, gmm.covariances_
# We notice the three clusters
# Plot the posteriors:
for ik in np.arange(K):
fig, ax = plt.subplots(nrows=1, ncols=1, dpi=80, facecolor='w', edgecolor='k')
sc = ax.scatter(D[:, 0], D[:, 1], c=posts[:,ik], s=20, cmap='viridis', vmin=0., vmax=1.)
# plt.axis('equal')
plt.xlabel('dimension x1')
plt.ylabel('dimension x2')
plt.title("Posteriors for class #%i"%(ik))
plt.grid(True)
plt.colorbar(sc)
p,ax=plot_GMMellipse(gmm,ik,'m',ax,label="Class-%i"%(ik),linewidth=1,std=[1,2,3])
ax.set_xlim([-.1,.1])
ax.set_ylim([-.7,.2])
plt.show()
# Plot the labels:
fig, ax = plt.subplots(nrows=1, ncols=1, dpi=80, facecolor='w', edgecolor='k')
sc = plt.scatter(D[:, 0], D[:, 1], c=labels, s=30, cmap='viridis', vmin=0, vmax=K-1)
plt.axis('equal')
plt.xlabel('dimension x1')
plt.ylabel('dimension x2')
plt.title("Labels attributed to the dataset with a GMM of %i components"%(K))
plt.grid(True)
plt.colorbar(sc)
colors = iter(plt.cm.rainbow(np.linspace(0, 1, K)))
for ik in np.arange(K):
a,a = plot_GMMellipse(gmm,ik,next(colors),ax,label="Class-%i"%(ik),linewidth=2)
ax.legend(loc='upper left')
plt.show()
# +
# Plot the robustness:
fig, ax = plt.subplots(nrows=1, ncols=1, dpi=80, facecolor='w', edgecolor='k')
sc = plt.scatter(D[:, 0], D[:, 1], c=robust, s=30, cmap='hot', vmin=0., vmax=1.)
plt.axis('equal')
plt.xlabel('dimension x1')
plt.ylabel('dimension x2')
plt.title("Robustness of the labels attributed to the dataset with a GMM of %i components"%(K))
plt.grid(True)
plt.colorbar(sc)
for ik in np.arange(K):
a,a = plot_GMMellipse(gmm,ik,'m',ax,label="Class-%i"%(ik),linewidth=1,std=[1,2,3])
plt.show()
# -
# localizing the clusters in space
plt.pcolor(sst_lon, sst_lat, labels.reshape((10,57,100))[9])
plt.colorbar()
# +
labels_new = labels.reshape((10,57*100))
hist, bins, edges = plt.hist(labels_new.T)
# bins = len(np.unique(labels))
# -
indx = np.where(hist == hist.max())
most_frequent_label = int(bins[indx])
hist.shape
indx = labels == most_frequent_label
most_frequent_label
plt.pcolor(sst_lon[indx], sst_lat[indx], labels[indx])
# +
# # # Classify new data
# #
# N = 100
# x1, x2 = np.linspace(-5, 20, N), np.linspace(-5, 20, N)
# X,Y = np.meshgrid(x1,x2)
# X,Y = np.ravel(X), np.ravel(Y)
# D2 = np.array((X,Y)).T
# print( "Shape of the new dataset:", D2.shape)
# # Get the posteriors:
# posts2 = gmm.predict_proba(D2)
# # Get the class maximising the posteriors (hard classif):
# labels2 = gmm.predict(D2)
# # Compute robustness of the classification:
# robust2 = (np.max(posts2,axis=1) - 1./K)*K/(K-1.)
# +
# # Plot the posteriors:
# for ik in np.arange(K):
# fig, ax = plt.subplots(nrows=1, ncols=1, dpi=100, facecolor='w', edgecolor='k', sharey=True)
# sc = ax.scatter(D2[:, 0], D2[:, 1], c=posts2[:,ik], s=20, cmap='viridis', vmin=0., vmax=1., edgecolor='')
# plt.axis('equal')
# plt.xlabel('dimension x1')
# plt.ylabel('dimension x2')
# plt.title("Posteriors for class #%i"%(ik))
# plt.grid(True)
# plt.colorbar(sc)
# p,ax=plot_GMMellipse(gmm,ik,'m',ax,label="Class-%i"%(ik),linewidth=1,std=[1,2,3])
# plt.show()
# # Plot the labels:
# fig, ax = plt.subplots(nrows=1, ncols=1, dpi=100, facecolor='w', edgecolor='k')
# sc = plt.scatter(D2[:, 0], D2[:, 1], c=labels2, s=20, cmap='viridis', vmin=0., vmax=K-1., edgecolor='')
# plt.axis('equal')
# plt.xlabel('dimension x1')
# plt.ylabel('dimension x2')
# plt.title("Labels attributed to the dataset with a GMM of %i components"%(K))
# plt.grid(True)
# plt.colorbar(sc)
# colors = iter(plt.cm.rainbow(np.linspace(0, 1, K)))
# for ik in np.arange(K):
# a,a = plot_GMMellipse(gmm,ik,next(colors),ax,label="Class-%i"%(ik),linewidth=2)
# ax.legend(loc='upper left')
# plt.show()
# # Plot the robustness:
# fig, ax = plt.subplots(nrows=1, ncols=1, dpi=100, facecolor='w', edgecolor='k')
# sc = plt.scatter(D2[:, 0], D2[:, 1], c=robust2, s=20, cmap='hot', vmin=0., vmax=1., edgecolor='')
# plt.axis('equal')
# plt.xlabel('dimension x1')
# plt.ylabel('dimension x2')
# plt.title("Robustness of the labels attributed to the dataset with a GMM of %i components"%(K))
# plt.grid(True)
# plt.colorbar(sc)
# for ik in np.arange(K):
# a,a = plot_GMMellipse(gmm,ik,'m',ax,label="Class-%i"%(ik),linewidth=1,std=[1,2,3])
# plt.show()
# -
df = pd.DataFrame([grad_sst[:,ilon,ilat], grad_psal[:,ilon,ilat]])
sns.pairplot( df )
# +
Nx, Ny = np.shape(grad_sst)[1], np.shape(grad_sst)[2]
NxNy = Nx * Ny
nt = np.shape(sst_t)[0]
df_sst = np.zeros((nt, NxNy))
df_sst.shape
for it in np.arange(nt):
df_sst[it] = grad_sst[it].ravel()
# df = pd.DataFrame(data=df)
# pdf along NxNy (Nt vs NxNy)
hist_sst, bins_sst, patches_sst = plt.hist(df_sst); # takes some time
# +
Nx, Ny = np.shape(grad_psal)[1], np.shape(grad_psal)[2]
NxNy = Nx * Ny
nt = np.shape(sss_t)[0]
df_psal = np.zeros((nt, NxNy))
df_psal.shape
for it in np.arange(nt):
df_psal[it] = grad_psal[it].ravel()
# df = pd.DataFrame(data=df)
# pdf along NxNy (Nt vs NxNy)
hist_psal, bins_psal, patches_psal = plt.hist(df_psal); # takes some time
# -
# Psal-SST joint plot
# data = np.array([grad_sst[:,0,:].ravel(), grad_sst[:,1,:].ravel()]).T
data = np.array([hist_sst, hist_psal]).T
df = pd.DataFrame(data=data, columns=['hist grad sst', 'hist grad psal'])
sns.jointplot(x="hist grad sst", y="hist grad psal", data=df, kind="kde");
plt.title('joint plot of the hist of grad ')
# +
# Go back to space
# Plot hist values in space
plt.imshow
# +
# start with winter months
# Dec - March
# -
# pd.datetime
np.datetime_as_string(ss[0])
# scatter plot of SSS & SST
plt.scatter(sst_temp[0,:,0], sst_temp[0,:,1])
| projects/elmerehbi/.ipynb_checkpoints/Untitled-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial 11: Reading and writing fields
#
# > Interactive online tutorial:
# > [](https://mybinder.org/v2/gh/ubermag/discretisedfield/master?filepath=docs%2Fipynb%2Findex.ipynb)
#
# There are two main file formats to which a `discretisedfield.Field` object can be saved:
#
# - [VTK](https://vtk.org/) for visualisation using e.g., [ParaView](https://www.paraview.org/) or [Mayavi](https://docs.enthought.com/mayavi/mayavi/)
# - OOMMF [Vector Field File Format (OVF)](https://math.nist.gov/oommf/doc/userguide12a5/userguide/Vector_Field_File_Format_OV.html) for exchanging fields with micromagnetic simulators.
#
# Let us say we have a nanosphere sample:
#
# $$x^2 + y^2 + z^2 \le r^2$$
#
# with $r=5\,\text{nm}$. The space is discretised into cells with dimensions $(0.5\,\text{nm}, 0.5\,\text{nm}, 0.5\,\text{nm})$. The value of the field at point $(x, y, z)$ is $(-cy, cx, cz)$, with $c=10^{9}$. The norm of the field inside the sphere is $10^{6}$.
#
# Let us first build that field.
# +
import discretisedfield as df
r = 5e-9
cell = (0.5e-9, 0.5e-9, 0.5e-9)
mesh = df.Mesh(p1=(-r, -r, -r), p2=(r, r, r), cell=cell)
def norm_fun(pos):
x, y, z = pos
if x**2 + y**2 + z**2 <= r**2:
return 1e6
else:
return 0
def value_fun(pos):
x, y, z = pos
c = 1e9
return (-c*y, c*x, c*z)
field = df.Field(mesh, value=value_fun, norm=norm_fun)
# -
# Let us have a quick view of the field we created
# NBVAL_IGNORE_OUTPUT
field.plane('z').k3d_vectors(color_field=field.z)
# ## Writing the field to a file
#
# The main method used for saving field in different files is `discretisedfield.Field.write()`. It takes `filename` as an argument, which is a string with one of the following extensions:
# - `'.vtk'` for saving in the VTK format
# - `'.ovf'`, `'.omf'`, `'.ohf'` for saving in the OVF format
#
# Let us firstly save the field in the VTK file.
vtkfilename = 'my_vtk_file.vtk'
field.write(vtkfilename)
# We can check if the file was saved in the current directory.
import os
os.path.isfile(f'./{vtkfilename}')
# Now, we can delete the file:
os.remove(f'./{vtkfilename}')
# Next, we can save the field in the OVF format and check whether it was created in the current directory.
omffilename = 'my_omf_file.omf'
field.write(omffilename)
os.path.isfile(f'./{omffilename}')
# There are three different possible representations of an OVF file: one ASCII (`txt`) and two binary (`bin4` or `bin8`). ASCII `txt` representation is a default representation when `discretisedfield.Field.write()` is called. If any different representation is required, it can be passed via `representation` argument.
field.write(omffilename, representation='bin8')
os.path.isfile(f'./{omffilename}')
# ## Reading the OVF file
#
# The method for reading OVF files is a class method `discretisedfield.Field.fromfile()`. By passing a `filename` argument, it reads the file and creates a `discretisedfield.Field` object. It is not required to pass the representation of the OVF file to the `discretisedfield.Field.fromfile()` method, because it can retrieve it from the content of the file.
read_field = df.Field.fromfile(omffilename)
# Like previously, we can quickly visualise the field
# NBVAL_IGNORE_OUTPUT
read_field.plane('z').k3d_vectors(color_field=read_field.z)
# Finally, we can delete the OVF file we created.
os.remove(f'./{omffilename}')
# ## Other
#
# Full description of all existing functionality can be found in the [API Reference](https://discretisedfield.readthedocs.io/en/latest/api_documentation.html).
| docs/ipynb/11-tutorial-read-write.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/SDS-AAU/DSBA-2021/blob/master/static/notebooks/DSBA21_W1_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="_H0R2izDY8RG"
import pandas as pd
import seaborn as sns
# + colab={"base_uri": "https://localhost:8080/"} id="l_fqo3kxZJ5f" outputId="e485196c-7093-4cc8-b291-c3333741d9f0"
data = pd.read_csv('https://stacks.stanford.edu/file/druid:yg821jf8611/yg821jf8611_la_new_orleans_2020_04_01.csv.zip')
# + colab={"base_uri": "https://localhost:8080/", "height": 313} id="9-sfkCiIZMtU" outputId="cf955bd7-5a3c-42ee-c524-6274f126b216"
data.head()
# + colab={"base_uri": "https://localhost:8080/"} id="rQaknv2ZZVQx" outputId="47ecd601-7adf-442a-ca86-38e43f50b801"
data.info()
# + id="e6mBStwnZV8t"
data['datetime'] = data['date'].str.cat(data['time'], ' ')
# + id="cKIh7AdZfsW3"
data.index = pd.to_datetime(data['datetime'])
# + colab={"base_uri": "https://localhost:8080/"} id="2Wdzc__VgpfQ" outputId="d27557da-227c-48bd-c62d-f26dcbb94356"
data.index
# + [markdown] id="6QJf5pnCiTvX"
# ## Ideas to look into stuff?
#
# - Crosstabs / Distribution of Sex / Race
# - Searches conducted
# - Drugs / weapons found?
# - Expensive cars vs cheap cars - proxy by vehicle age (option)
# - Sex / Race / Age vs stop-outcome
#
# + colab={"base_uri": "https://localhost:8080/"} id="b7KinDF-gkve" outputId="d4bfbb0c-2178-49c8-b742-41bbd23019e7"
# single distrubution / count
data.subject_race.value_counts(normalize=True)
# + colab={"base_uri": "https://localhost:8080/"} id="gWDWsYoXnHX9" outputId="680ea28d-187d-4c5f-9e8b-3e766a11de8a"
data.groupby(['subject_race','reason_for_stop']).size()
# + colab={"base_uri": "https://localhost:8080/", "height": 354} id="R7-SnfZwnfCh" outputId="2cccd90c-215e-4dfd-c238-ffd287f358ff"
pd.crosstab(data.subject_race, data.reason_for_stop, normalize='columns')
# + colab={"base_uri": "https://localhost:8080/", "height": 266} id="ZPLUDLWYoQf3" outputId="5befb502-8148-495e-ca10-1cb95dd85160"
pd.crosstab(data.subject_race, data.arrest_made, normalize='index')
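# The two crosstabs differ only in the `normalize` argument: `'index'` makes each row sum to 1, `'columns'` each column. A tiny sketch with made-up data (not the police-stops dataset):

```python
import numpy as np
import pandas as pd

# Hypothetical mini dataset for illustration only
df = pd.DataFrame({'race': ['a', 'a', 'b', 'b'],
                   'arrest': [True, False, False, False]})
by_index = pd.crosstab(df.race, df.arrest, normalize='index')      # rows sum to 1
by_columns = pd.crosstab(df.race, df.arrest, normalize='columns')  # columns sum to 1

assert np.allclose(by_index.sum(axis=1), 1.0)
assert np.allclose(by_columns.sum(axis=0), 1.0)
```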
# + colab={"base_uri": "https://localhost:8080/"} id="0TGf93Cmow-p" outputId="de1419dc-b3a8-4d5a-952c-96206d98b85a"
data.groupby(['subject_race', 'subject_sex'])['arrest_made'].mean()
# + colab={"base_uri": "https://localhost:8080/"} id="SCQ2qRpQo66G" outputId="f39263b9-a26b-49c5-add2-43cc33b0872e"
data.groupby('subject_sex')['arrest_made'].mean()
# + colab={"base_uri": "https://localhost:8080/"} id="3VGaSPi7o_7X" outputId="36e0bda8-8beb-499b-fae1-2bcc1779e8e2"
data[data.search_conducted == True].groupby('subject_race')['contraband_weapons'].sum()/3113
# + colab={"base_uri": "https://localhost:8080/"} id="JooeQZ0yplDy" outputId="0f6744d3-1371-423f-db8b-dea705521c0c"
data[data.search_conducted == True].contraband_weapons.sum()
# + colab={"base_uri": "https://localhost:8080/"} id="gp7kg1zErbOM" outputId="1b237a23-d0e0-4c64-ceba-7ab959ce8eb1"
data.groupby('district')['vehicle_year'].mean().sort_values()
# + colab={"base_uri": "https://localhost:8080/"} id="rfayq5x0yfl2" outputId="02feb68e-ace5-442e-ad0d-9c4955f6c155"
data[data.district.isin([5,6])].groupby('district').search_conducted.mean()
# + colab={"base_uri": "https://localhost:8080/"} id="sZzXgjT_yuoG" outputId="ab3536b0-72c4-455c-a43f-c01d81b44bb2"
data[data.district.isin([5,6])].groupby('district').subject_race.value_counts(normalize=True)
# + [markdown] id="AAAQlsT5zrbE"
# ### Some Geoplotting
# + id="lCajKJPezLND"
import folium
m = folium.Map(tiles='Stamen Toner', location=[55.6832933, 12.5169813], zoom_start=16)
# + colab={"base_uri": "https://localhost:8080/", "height": 520} id="9IhUqdwHz1RE" outputId="3c78580b-6f33-477a-a6db-7b14bc0e0e4e"
m
# + id="oSGCNnBL0xO5"
# subset data (we need lon/lat)
geoplot_data = data.sample(20).dropna(subset=['lat'])
# + colab={"base_uri": "https://localhost:8080/"} id="b6YV5D4x00tN" outputId="6473c22a-6471-4af9-9900-6323dbdaa182"
print(geoplot_data.lat.mean())
print(geoplot_data.lng.mean())
# + colab={"base_uri": "https://localhost:8080/", "height": 520} id="vMnWNI0Zz1tX" outputId="d8899870-168c-4a0d-eb86-dadd5daafa0b"
m = folium.Map(location=[30, -90], zoom_start=12)
tooltip = "I'm a police stop"
# iterate over all rows
for row in geoplot_data.iterrows():
folium.Marker([row[1].lat, row[1].lng],
popup=f"<b>{row[1].subject_age}</b> <b>{row[1].subject_race}</b> <b>{row[1].reason_for_stop}</b> <b>{row[1].outcome}</b>",
tooltip=tooltip).add_to(m)
m
# + id="QwG-nPzm0tPl"
| static/notebooks/DSBA21_W1_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %matplotlib inline
import numpy as np
import pandas as pd
from scipy import stats, integrate
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
prices = pd.read_csv('../data/StockIndexData.csv')
prices.describe()
prices[1055:1065]
# +
prices['DJIArebased'] = prices['DJIA'][1061:]/prices['DJIA'][1061] *100
prices['GSPCrebased'] = prices['GSPC'][1061:]/prices['GSPC'][1061] *100
prices['NDXrebased'] = prices['NDX'][1061:]/prices['NDX'][1061] *100
prices['GDAXIrebased'] = prices['GDAXI'][1061:]/prices['GDAXI'][1061] *100
prices['FCHIrebased'] = prices['FCHI'][1061:]/prices['FCHI'][1061] *100
prices['SSECrebased'] = prices['SSEC'][1061:]/prices['SSEC'][1061] *100
prices['SENSEXrebased'] = prices['SENSEX'][1061:]/prices['SENSEX'][1061] *100
prices['DJIArebased'].plot(linewidth=.5)
prices['GSPCrebased'].plot(linewidth=.5)
prices['NDXrebased'].plot(linewidth=.5)
prices['GDAXIrebased'].plot(linewidth=.5)
prices['FCHIrebased'].plot(linewidth=.5)
prices['SSECrebased'].plot(linewidth=.5)
prices['SENSEXrebased'].plot(linewidth=.5)
plt.legend( loc='upper left', numpoints = 1 )
# -
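# The seven rebasing assignments above all follow one pattern: divide each series by its value at the base row and multiply by 100. A sketch of the same rebasing done in a loop, using a hypothetical two-column table and base row 0 (the notebook uses row 1061):

```python
import pandas as pd

# Hypothetical mini price table for illustration only
prices = pd.DataFrame({'DJIA': [200.0, 210.0, 220.0],
                       'GSPC': [50.0, 55.0, 60.0]})
base = 0  # assumed base row; the notebook rebases at row 1061
for col in ['DJIA', 'GSPC']:
    prices[col + 'rebased'] = prices[col] / prices[col].iloc[base] * 100

assert prices['DJIArebased'].iloc[0] == 100.0
assert prices['GSPCrebased'].iloc[2] == 120.0  # 60/50 * 100
```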
prices['DJIALogRtn'] = (np.log(prices['DJIA'])[1:] - np.log(prices['DJIA'].shift(1))[1:])*100
prices['DJIALogRtn'][1:].plot(linewidth=.5)
prices['GSPCLogRtn'] = (np.log(prices['GSPC'])[1:] - np.log(prices['GSPC'].shift(1))[1:])*100
prices['NDXLogRtn'] = (np.log(prices['NDX'])[1:] - np.log(prices['NDX'].shift(1))[1:])*100
prices['GDAXILogRtn'] = (np.log(prices['GDAXI'])[1:] - np.log(prices['GDAXI'].shift(1))[1:])*100
prices['FCHILogRtn'] = (np.log(prices['FCHI'])[1:] - np.log(prices['FCHI'].shift(1))[1:])*100
prices['SSECLogRtn'] = (np.log(prices['SSEC'])[1:] - np.log(prices['SSEC'].shift(1))[1:])*100
prices['SENSEXLogRtn'] = (np.log(prices['SENSEX'])[1:] - np.log(prices['SENSEX'].shift(1))[1:])*100
prices['ptfLogRtn'] = (prices['DJIALogRtn'] + prices['GSPCLogRtn'] + prices['NDXLogRtn'] + prices['GDAXILogRtn'] + prices['FCHILogRtn'] + prices['SSECLogRtn'] + prices['SENSEXLogRtn'])/7.
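# Each log-return column above computes 100 * (ln p_t - ln p_{t-1}); `np.log(s).diff()` gives the same result more concisely. A sketch on a toy series:

```python
import numpy as np
import pandas as pd

s = pd.Series([100.0, 101.0, 99.0, 102.0])  # toy prices
manual = (np.log(s) - np.log(s.shift(1))) * 100  # the notebook's pattern
concise = np.log(s).diff() * 100                 # equivalent one-liner

assert np.allclose(manual[1:], concise[1:])
assert np.isclose(concise.iloc[1], 100 * np.log(101.0 / 100.0))
```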
prices1 =prices[prices['ptfLogRtn'].notnull()]
prices1 = prices1.reindex()
prices1.head()
prices1.describe()
np.mean(prices1['ptfLogRtn'])
np.std(prices1['ptfLogRtn'])
a = np.array(prices1['ptfLogRtn'])
a
# +
ptf_index = []
ptf_index.append(100.)
for i in xrange(len(prices1)):
tmp = ptf_index[i] * (1.+a[i]/100)
ptf_index.append(tmp)
# -
plt.plot(ptf_index,linewidth=.5)
prices1['ptfLogRtn'].plot(linewidth=.5)
stats.linregress(prices1['DJIALogRtn'],prices1['ptfLogRtn'])
stats.linregress(prices1['GSPCLogRtn'],prices1['ptfLogRtn'])
stats.linregress(prices1['NDXLogRtn'],prices1['ptfLogRtn'])
stats.linregress(prices1['GDAXILogRtn'],prices1['ptfLogRtn'])
stats.linregress(prices1['FCHILogRtn'],prices1['ptfLogRtn'])
stats.linregress(prices1['SSECLogRtn'],prices1['ptfLogRtn'])
stats.linregress(prices1['SENSEXLogRtn'],prices1['ptfLogRtn'])
| python_notebook/test_ptf_var.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1-6.1 Intro Python
# ## Nested Conditionals
# - Nested Conditionals
# - Escape Sequence print formatting "\\"
#
# ><font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font>
# - create nested conditional logic in code
# - format print output using escape "\\" sequence
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font>
# ## nested conditionals
# # Video: Unit1_Section6.1-nested-conditionals.mp4
# ### nested conditionals
# **if**
# &nbsp;&nbsp;**if**
# &nbsp;&nbsp;**if**
# &nbsp;&nbsp;**else**
# &nbsp;&nbsp;**else**
# **else**
#
# ### Making a sandwich
# Taking a sandwich order starts with sandwich choices:
# > **Cheese or Veggie special?**
# if the response is **"Cheese"** "nest" a sub ask:
# >> **Manchego or Cheddar?**
#
#
# |Nested **`if`** statement flowchart |
# | ------ |
# |  |
#
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font>
# > ***TIP:*** click in input box before typing input
# +
# simplified example
# [ ] review the code then run and following the flowchart paths
# ***TIP:*** click in input box before typing
sandwich_type = input('"c" for Cheese or "v" for Veggie Special: ')
if sandwich_type.lower() == "c":
# select cheese type
cheese_type = input('"c" for Cheddar or "m" for Manchego: ')
if cheese_type.lower() == "c":
print("Here is your Cheddar Cheese sandwich")
else:
print("Here is your Manchego Cheese sandwich")
else:
print("Here is your Veggie Special")
# +
# full example: handling some invalid input and elif statement
# [ ] review the code then run following the flowchart paths including **invalid responses** like "xyz123"
# ***TIP:*** click in input box before typing
print("Hi, welcome to the sandwich shop. Please select a sandwich.")
sandwich_type = input('"c" for Cheese or "v" for Veggie Special: ')
# select sandwich type
print()
if sandwich_type.lower() == "c":
# select cheese type
print("Please select a cheese.")
cheese_type = input('"c" for Cheddar or "m" for Manchego: ')
print()
if cheese_type.lower() == "c":
print("Here is your Cheddar Cheese sandwich. Thank you.")
elif cheese_type.lower() == "m":
print("Here is your Manchego Cheese sandwich. Thank you.")
else:
print("Sorry, we don't have", cheese_type, "choice today.")
elif sandwich_type.lower() == "v":
print("Here is your Veggie Special. Thank you.")
else:
print("Sorry, we don't have", sandwich_type, "choice today.")
print()
print("Goodbye!")
# -
# #
# <font size="6" color="#B24C00" face="verdana"> <B>Task 1</B></font>
# ## Nested `if`
# ### [ ] Program: Say "Hello"
# - using nested **`if`**
#
# |Say "Hello" flowchart |
# | ------ |
# |  |
# +
# [ ] Say "Hello" with nested if
# [ ] Challenge: handle input other than y/n
# -
# #
# <font size="6" color="#B24C00" face="verdana"> <B>Task 2</B></font>
# ## Nested `if` - testing for `False`
# ### Program: [ ] 3 Guesses
# - use nested if statements complete the flowchart code
# - create a **`birds`** string variable with the names of 1, 2, 3 or more birds to make it easier
# - get **`bird_guess`** input and use **`bird_guess in bird_names`** to generate Boolean True/False
# - if the guess is wrong (**`False`**) create a sub test until the user has had 3 guesses
#
# |3 Guesses ("Guess the Bird") flowchart |
# | ------ |
# |  |
# +
# [ ] Create the "Guess the bird" program
# -
# [Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) © 2017 Microsoft
| Python Absolute Beginner/Module_4_1_Absolute_Beginner.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext Cython
# + language="cython"
# import numpy as np
# cimport numpy as np
# from scipy.spatial.distance import pdist
#
# cdef double dense_dist_mat_at_ij(double[:] dist, int i, int j, int n):
# cdef int idx
# if i < j:
# idx = i*n - i*(i+1) // 2 + (j-i-1)
# elif i > j:
# idx = j*n - j*(j+1) // 2 + (i-j-1)
# else:
# return 0.0
#
# return dist[idx]
#
# cpdef tuple well_scattered_points(int n_rep, np.ndarray[np.double_t, ndim=1] mean, np.ndarray[np.double_t, ndim=2] data):
# cdef int n = data.shape[0]
# # if the cluster contains less than no. of rep points, all points are rep points
# if n <= n_rep:
# return list(data), np.arange(data.shape[0])
#
# # calculate distances for fast access
# cdef double[:] distances = pdist(data)
#
# # farthest point from mean
# cdef int idx = np.argmax(np.linalg.norm(data - mean, axis=1))
# # get well scattered points
# cdef int i, j, max_point
# cdef float max_dist, min_dist
# cdef list scatter_idx = [idx]
# for i in range(1, n_rep):
# max_dist = 0.0
# for j in range(n):
# # minimum distances from points in scatter_idx
# min_dist = min([dense_dist_mat_at_ij(distances, idx, j, n) for idx in scatter_idx])
# if min_dist > max_dist:
# max_dist = min_dist
# max_point = j
#
# scatter_idx.append(max_point)
#
# return [data[i] for i in scatter_idx], scatter_idx
# -
import numpy as np
# %timeit well_scattered_points(1000, np.zeros((10,)), np.random.rand(2000, 10).astype(np.float64))
# +
from scipy.spatial.distance import cdist, pdist
def dense_dist_mat_at_ij(dist, i, j, n):
if i < j:
idx = int(i*n - i*(i+1) // 2 + (j-i-1))
elif i > j:
idx = int(j*n - j*(j+1) // 2 + (i-j-1))
else:
return 0.0
return dist[idx]
def py_well_scattered_points(n_rep: int, mean: np.ndarray, data: np.ndarray):
n = data.shape[0]
# if the cluster contains less than no. of rep points, all points are rep points
if n <= n_rep:
return list(data), np.arange(data.shape[0])
# calculate distances for fast access
distances = pdist(data)
# farthest point from mean
idx = np.argmax(np.linalg.norm(data - mean, axis=1))
# get well scattered points
scatter_idx = [idx]
for _ in range(1, n_rep):
max_dist = 0.0
for j in range(n):
# minimum distances from points in scatter_idx
min_dist = min([dense_dist_mat_at_ij(distances, idx, j, n) for idx in scatter_idx])
if min_dist > max_dist:
max_dist = min_dist
max_point = j
scatter_idx.append(max_point)
return [data[i] for i in scatter_idx], scatter_idx
# -
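# `dense_dist_mat_at_ij` maps a square-matrix index pair (i, j) into SciPy's condensed `pdist` vector, which stores the upper triangle row-major; note the `+ (j - i - 1)` term in the index formula. A sketch checked against `squareform`:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def condensed_index(i, j, n):
    # Condensed pdist index of square-matrix entry (i, j), i != j
    if i > j:
        i, j = j, i
    return n * i - i * (i + 1) // 2 + (j - i - 1)

pts = np.random.default_rng(0).random((6, 3))
dist = pdist(pts)
sq = squareform(dist)
for i in range(6):
    for j in range(6):
        if i != j:
            assert np.isclose(dist[condensed_index(i, j, 6)], sq[i, j])
```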
import numpy as np
# %timeit -n1 -r2 py_well_scattered_points(1000, np.zeros((10,)), np.random.rand(2000, 10).astype(np.float64))
# # keep track of minimum distances
# + language="cython"
# import numpy as np
# cimport numpy as np
# from cpython cimport array
# import array
# from scipy.spatial.distance import pdist
#
# cdef double dense_dist_mat_at_ij(double[:] dist, int i, int j, int n):
# cdef int idx
# if i < j:
# idx = i*n - i*(i+1) // 2 + (j-i-1)
# elif i > j:
# idx = j*n - j*(j+1) // 2 + (i-j-1)
# else:
# return 0.0
#
# return dist[idx]
#
# cpdef tuple wsp_fast(int n_rep, np.ndarray[np.double_t, ndim=1] mean, np.ndarray[np.double_t, ndim=2] data):
# cdef int n = data.shape[0]
#
# # if the cluster contains less than no. of rep points, all points are rep points
# if n <= n_rep:
# return list(data), np.arange(data.shape[0])
#
# # calculate distances for fast access
# cdef double[:] distances = pdist(data)
#
# # farthest point from mean
# cdef int idx = np.argmax(np.linalg.norm(data - mean, axis=1))
#
# # keep track of distances to scattered points
# cdef np.ndarray[np.double_t, ndim=2] dist_to_scatter = -1.0*np.ones((n_rep, n)).astype(np.float64)
#
# # scatter points indices relative to data
# cdef array.array scatter_idx = array.array('i', [-1]*n_rep)
#
# cdef int i, j, k, max_point, min_dist_idx
# cdef double min_dist, max_dist, dist
#
# scatter_idx[0] = idx
# for i in range(n_rep-1):
# # calculate distances to latest scatter point
# for j in range(n):
# dist_to_scatter[i,j] = dense_dist_mat_at_ij(distances, scatter_idx[i], j, n)
# # check max distance to all identified scatter points
# max_dist = 0.0
# for k in range(i+1):
# # for each scatter point, check the data point that is closest to it
# min_dist_idx = np.argmin(dist_to_scatter[k,:])
# # out of closest data points, check for the farthest
# if dist_to_scatter[k, min_dist_idx] > max_dist:
# max_dist = dist_to_scatter[k, min_dist_idx]
# max_point = min_dist_idx
# scatter_idx[i+1] = max_point
#
# return [data[i] for i in scatter_idx], scatter_idx
# -
import numpy as np
data = np.random.rand(2000, 10).astype(np.float64)
mean = np.zeros((10,)).astype(np.float64)
# # %timeit -n2 -r2 py_well_scattered_points(100, mean, data)
# %timeit -n2 -r2 well_scattered_points(1000, mean, data)
# %timeit -n2 -r2 wsp_fast(1000, mean, data)
_, idx1 = well_scattered_points(100, mean, data)
_, idx2 = wsp_fast(100, mean, data)
print(np.vstack((idx1, idx2)))
| cythonized/.ipynb_checkpoints/well_scattered_points-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Acceleration and throttle calculation for MPC project
#
# For the MPC project it is necessary to calculate the acceleration with given velocity and throttle. And a model to calcultate the throttle value from velocity and acceleration.
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
# %matplotlib inline
df = pd.read_csv("../build/measurement2.csv", names=['throttle', 'v', 'a'])
df.head()
# ## Clean data
# remove samples with acceleration < 0
df = df[df['a'] > 0]
df.reset_index(drop=True).head()
# ## Feature generation
#
# The vehicle should have a maximum power so the acceleration should be dependent on throttle, v and v^2.
# random shuffle of the data set
df = df.sample(frac=1).reset_index(drop=True)
df_features = df[['throttle', 'v']].copy()  # .copy() avoids SettingWithCopyWarning when adding v2 below
# add v² to features
df_features['v2'] = df_features['v'] * df_features['v']
print(df_features.head())
print('Number of samples {}'.format(len(df_features)))
# ## Train/test split
# +
features = np.array(df_features)
split = int(df_features.shape[0] * 0.2)
X_train = features[:-split]
X_test = features[-split:]
y_all = np.array(df['a'])
y_train = y_all[:-split]
y_test = y_all[-split:]
# -
# ## Fit the linear regression model
# +
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
y_pred = regr.predict(X_test)
y_pred_train = regr.predict(X_train)
mse_test = mean_squared_error(y_test, y_pred)
mse_train = mean_squared_error(y_train, y_pred_train)
print('MSE test = {:.4f} MSE train = {:.4f}'.format(mse_test, mse_train))
print('Model coefficients = {}'.format(regr.coef_))
# -
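# The fitted model is affine in (throttle, v, v²), so a prediction can be recovered by hand from the coefficients and intercept. A sketch with synthetic coefficients (not the measured ones), using a plain least-squares fit in place of `LinearRegression`:

```python
import numpy as np

# Synthetic data following the assumed model a = c0*throttle + c1*v + c2*v^2 + b;
# all coefficient values here are made up for illustration.
rng = np.random.default_rng(0)
throttle = rng.uniform(0.0, 1.0, 500)
v = rng.uniform(0.0, 40.0, 500)
a = 5.0 * throttle - 0.01 * v - 0.001 * v**2 + 0.2

# Least-squares fit with an explicit intercept column
X = np.column_stack([throttle, v, v**2, np.ones_like(v)])
coef, *_ = np.linalg.lstsq(X, a, rcond=None)

# Prediction at one operating point, recovered directly from the coefficients
x0 = np.array([0.5, 20.0, 400.0, 1.0])
pred = x0 @ coef
assert np.isclose(pred, 5.0 * 0.5 - 0.01 * 20.0 - 0.001 * 400.0 + 0.2)
```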
# ## Plot results
#
# Plausibility can be checked
# +
plt.scatter(X_test[:,0], y_test, color='black', alpha=0.3)
plt.scatter(X_test[:,0] + 0.02, y_pred, color='blue', alpha=0.3)
plt.show()
# +
plt.scatter(X_test[:,1], y_test, color='black', alpha=0.3)
plt.scatter(X_test[:,1], y_pred, color='blue', alpha=0.3)
plt.show()
# +
plt.scatter(X_train[:,0], y_train, color='black', alpha=0.3)
plt.scatter(X_train[:,0]+0.02, y_pred_train, color='blue', alpha=0.3)
plt.show()
# +
plt.scatter(X_train[:,1], y_train, color='black', alpha=0.3)
plt.scatter(X_train[:,1], y_pred_train, color='blue', alpha=0.3)
plt.show()
# -
# ## Model for throttle
#
# Calculations like for acceleration
# +
# random shuffle of the data set
df = df.sample(frac=1).reset_index(drop=True)
df_features = df[['a', 'v']].copy()  # .copy() avoids SettingWithCopyWarning when adding v2 below
# add v² to features
df_features['v2'] = df_features['v'] * df_features['v']
print(df_features.head())
print('Number of samples {}'.format(len(df_features)))
features = np.array(df_features)
split = int(df_features.shape[0] * 0.2)
X_train = features[:-split]
X_test = features[-split:]
y_all = np.array(df['throttle'])
y_train = y_all[:-split]
y_test = y_all[-split:]
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
y_pred = regr.predict(X_test)
y_pred_train = regr.predict(X_train)
mse_test = mean_squared_error(y_test, y_pred)
mse_train = mean_squared_error(y_train, y_pred_train)
print('MSE test = {:.4f} MSE train = {:.4f}'.format(mse_test, mse_train))
print('Model coefficients = {}'.format(regr.coef_))
# +
plt.scatter(X_test[:,0], y_test, color='black', alpha=0.3)
plt.scatter(X_test[:,0], y_pred, color='blue', alpha=0.3)
plt.show()
# +
plt.scatter(X_test[:,1], y_test, color='black', alpha=0.3)
plt.scatter(X_test[:,1], y_pred, color='blue', alpha=0.3)
plt.show()
# +
plt.scatter(X_train[:,0], y_train, color='black', alpha=0.3)
plt.scatter(X_train[:,0], y_pred_train, color='blue', alpha=0.3)
plt.show()
# +
plt.scatter(X_train[:,1], y_train, color='black', alpha=0.3)
plt.scatter(X_train[:,1], y_pred_train, color='blue', alpha=0.3)
plt.show()
# -
| data_analysis/Vehicle acceleration from throttle and velocity.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, LSTM
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten
from tensorflow.keras.utils import to_categorical, plot_model
from tensorflow.keras.datasets import mnist
(x_train,y_train),(x_test,y_test) = mnist.load_data()
print(x_train.shape)
# number of labels
num_labels = len(np.unique(y_train))
# convert to one-hot vector
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# image dimension
image_size = x_train.shape[1]
# +
#resize and normalize
x_train = np.reshape(x_train,[-1,image_size,image_size])
x_train = x_train.astype('float32') / 255
x_test = np.reshape(x_test,[-1,image_size,image_size])
x_test = x_test.astype('float32') / 255
print(x_train.shape)
print(x_test.shape)
# -
# network parameters
input_shape = (image_size,image_size)
batch_size = 128
units = 256
dropout = 0.2
model = Sequential()
model.add(LSTM(units=units, dropout=dropout, input_shape = input_shape))
model.add(Dense(num_labels))
model.add(Activation('softmax'))
model.summary()
plot_model(model,to_file='lstm-mnist.png',show_shapes = True)
model.compile(loss = 'categorical_crossentropy',optimizer = 'sgd', metrics = ['accuracy'])
model.fit(x_train,y_train,epochs = 20,batch_size=batch_size)
_ , acc = model.evaluate(x_test,y_test,batch_size=batch_size,verbose=1)
print("Test Accuracy: %.1f%%" %(100*acc))
| tensorflow_LSTM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: python3
# language: python
# name: python3
# ---
# # Building text classifier with Differential Privacy
# In this tutorial we will train a text classifier with Differential Privacy by taking a model pre-trained on public text data and fine-tuning it for a different task.
#
# When training a model with differential privacy, we almost always face a trade-off between model size and accuracy on the task. The exact details depend on the problem, but a rule of thumb is that the fewer parameters the model has, the easier it is to get a good performance with DP.
#
# Most state-of-the-art NLP models are quite deep and large (e.g. [BERT-base](https://github.com/google-research/bert) has over 100M parameters), which makes task of training text model on a private datasets rather challenging.
#
# One way of addressing this problem is to divide the training process into two stages. First, we will pre-train the model on a public dataset, exposing the model to generic text data. Assuming that the generic text data is public, we will not be using differential privacy at this step. Then, we freeze most of the layers, leaving only a few upper layers to be trained on the private dataset using DP-SGD. This way we can get the best of both worlds - we have a deep and powerful text understanding model, while only training a small number of parameters with differentially private algorithm.
#
# In this tutorial we will take the pre-trained [BERT-base](https://github.com/google-research/bert) model and fine-tune it to recognize textual entailment on the [SNLI](https://nlp.stanford.edu/projects/snli/) dataset.
# ## Dataset
# First, we need to download the dataset (we'll user Stanford NLP mirror)
STANFORD_SNLI_URL = "https://nlp.stanford.edu/projects/snli/snli_1.0.zip"
DATA_DIR = "data"
# +
import zipfile
import urllib.request
import os
def download_and_extract(dataset_url, data_dir):
print("Downloading and extracting ...")
filename = "snli.zip"
urllib.request.urlretrieve(dataset_url, filename)
with zipfile.ZipFile(filename) as zip_ref:
zip_ref.extractall(data_dir)
os.remove(filename)
print("Completed!")
download_and_extract(STANFORD_SNLI_URL, DATA_DIR)
# -
# The dataset comes in two formats (`tsv` and `json`) and has already been split into train/dev/test. Let’s verify that’s the case.
snli_folder = os.path.join(DATA_DIR, "snli_1.0")
os.listdir(snli_folder)
# Let's now take a look inside. [SNLI dataset](https://nlp.stanford.edu/projects/snli/) provides ample syntactic metadata, but we'll only use raw input text. Therefore, the only fields we're interested in are **sentence1** (premise), **sentence2** (hypothesis) and **gold_label** (label chosen by the majority of annotators).
#
# Label defines the relation between premise and hypothesis: either *contradiction*, *neutral* or *entailment*.
# +
import pandas as pd
train_path = os.path.join(snli_folder, "snli_1.0_train.txt")
dev_path = os.path.join(snli_folder, "snli_1.0_dev.txt")
df_train = pd.read_csv(train_path, sep='\t')
df_test = pd.read_csv(dev_path, sep='\t')
df_train[['sentence1', 'sentence2', 'gold_label']][:5]
# -
# ## Model
# BERT (Bidirectional Encoder Representations from Transformers) is state of the art approach to various NLP tasks. It uses a Transformer architecture and relies heavily on the concept of pre-training.
#
# We'll use a pre-trained BERT-base model, provided in huggingface [transformers](https://github.com/huggingface/transformers) repo.
# It gives us a pytorch implementation for the classic BERT architecture, as well as a tokenizer and weights pre-trained on a public English corpus (Wikipedia).
#
# Please follow these [installation instrucitons](https://github.com/huggingface/transformers#installation) before proceeding.
# +
from transformers import BertConfig, BertTokenizer, BertForSequenceClassification
model_name = "bert-base-cased"
config = BertConfig.from_pretrained(
model_name,
num_labels=3,
)
tokenizer = BertTokenizer.from_pretrained(
"bert-base-cased",
do_lower_case=False,
)
model = BertForSequenceClassification.from_pretrained(
"bert-base-cased",
config=config,
)
# -
# The model has the following structure. It uses a combination of word, positional and token *embeddings* to create a sequence representation, then passes the data through 12 *transformer encoders* and finally uses a *linear classifier* to produce the final label.
# As the model is already pre-trained and we only plan to fine-tune few upper layers, we want to freeze all layers, except for the last encoder and above (`BertPooler` and `Classifier`).
from IPython.display import Image
Image(filename='img/BERT.png')
# +
trainable_layers = [model.bert.encoder.layer[-1], model.bert.pooler, model.classifier]
total_params = 0
trainable_params = 0
for p in model.parameters():
p.requires_grad = False
total_params += p.numel()
for layer in trainable_layers:
for p in layer.parameters():
p.requires_grad = True
trainable_params += p.numel()
print(f"Total parameters count: {total_params}") # ~108M
print(f"Trainable parameters count: {trainable_params}") # ~7M
# -
# Thus, by using pre-trained model we reduce the number of trainable params from over 100 millions to just above 7.5 millions. This will help both performance and convergence with added noise.
# ## Prepare the data
# Before we begin training, we need to preprocess the data and convert it to the format our model expects.
#
# (Note: it'll take 5-10 minutes to run on a laptop)
# +
LABEL_LIST = ['contradiction', 'entailment', 'neutral']
MAX_SEQ_LENGHT = 128
import torch
import transformers
from torch.utils.data import TensorDataset
from transformers.data.processors.utils import InputExample
from transformers.data.processors.glue import glue_convert_examples_to_features
def _create_examples(df, set_type):
""" Convert raw dataframe to a list of InputExample. Filter malformed examples
"""
examples = []
for index, row in df.iterrows():
if row['gold_label'] not in LABEL_LIST:
continue
if not isinstance(row['sentence1'], str) or not isinstance(row['sentence2'], str):
continue
guid = f"{index}-{set_type}"
examples.append(
InputExample(guid=guid, text_a=row['sentence1'], text_b=row['sentence2'], label=row['gold_label']))
return examples
def _df_to_features(df, set_type):
""" Pre-process text. This method will:
1) tokenize inputs
2) cut or pad each sequence to MAX_SEQ_LENGHT
3) convert tokens into ids
The output will contain:
`input_ids` - padded token ids sequence
`attention mask` - mask indicating padded tokens
`token_type_ids` - mask indicating the split between premise and hypothesis
`label` - label
"""
examples = _create_examples(df, set_type)
#backward compatibility with older transformers versions
legacy_kwards = {}
from packaging import version
if version.parse(transformers.__version__) < version.parse("2.9.0"):
legacy_kwards = {
"pad_on_left": False,
"pad_token": tokenizer.convert_tokens_to_ids([tokenizer.pad_token])[0],
"pad_token_segment_id": 0,
}
return glue_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
label_list=LABEL_LIST,
max_length=MAX_SEQ_LENGHT,
output_mode="classification",
**legacy_kwards,
)
def _features_to_dataset(features):
""" Convert features from `_df_to_features` into a single dataset
"""
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_attention_mask = torch.tensor(
[f.attention_mask for f in features], dtype=torch.long
)
all_token_type_ids = torch.tensor(
[f.token_type_ids for f in features], dtype=torch.long
)
all_labels = torch.tensor([f.label for f in features], dtype=torch.long)
dataset = TensorDataset(
all_input_ids, all_attention_mask, all_token_type_ids, all_labels
)
return dataset
train_features = _df_to_features(df_train, "train")
test_features = _df_to_features(df_test, "test")
train_dataset = _features_to_dataset(train_features)
test_dataset = _features_to_dataset(test_features)
# -
# ## Choosing batch size
#
# Let's talk about batch sizes for a bit.
#
# In addition to all the considerations you normally take into account when choosing batch size, training model with DP adds another one - privacy cost.
#
# Because of the threat model we assume and the way we add noise to the gradients, larger batch sizes (to a certain extent) generally help convergence. We add the same amount of noise to each gradient update (scaled to the norm of one sample in the batch) regardless of the batch size. What this means is that as the batch size increases, the relative amount of noise added decreases. while preserving the same epsilon guarantee.
#
# You should, however, keep in mind that increasing batch size has its price in terms of epsilon, which grows at `O(sqrt(batch_size))` as we train (therefore larger batches make it grow faster). The good strategy here is to experiment with multiple combinations of `batch_size` and `noise_multiplier` to find the one that provides best possible quality at acceptable privacy guarantee.
#
# There's another side to this - memory. Opacus computes and stores *per sample* gradients, so for every normal gradient, Opacus will store `n=batch_size` per-sample gradients on each step, thus increasing the memory footprint by at least `O(batch_size)`. In reality, however, the peak memory requirement is `O(batch_size^2)` compared to non-private model. This is because some intermediate steps in per sample gradient computation involve operations on two matrices, each with batch_size as one of the dimensions.
#
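# A back-of-the-envelope sketch of that footprint, using assumed numbers that match this tutorial's setup (~7.5M trainable parameters, physical batch size 8):

```python
# rough estimate of the extra memory that per-sample gradients need (assumed numbers)
trainable_params = 7_500_000   # ~7.5M trainable parameters (upper layers only)
batch_size = 8                 # physical batch size
bytes_per_float32 = 4

# Opacus keeps one gradient copy per sample in the physical batch
extra_bytes = trainable_params * batch_size * bytes_per_float32
print(f"~{extra_bytes / 1e6:.0f} MB just for per-sample gradients")
```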
# The good news is, we can pick the most appropriate batch size, regardless of memory constrains. Opacus has built-in support for *virtual* batches. Using it we can separate physical steps (gradient computation) and logical steps (noise addition and parameter updates): use larger batches for training, while keeping memory footprint low. Below we will specify two constants:
#
# - `BATCH_SIZE` defines the maximum batch size we can afford from a memory standpoint, and only affects computation speed
# - `VIRTUAL_BATCH_SIZE`, on the other hand, is equivalent to normal batch_size in the non-private setting, and will affect convergence and privacy guarantee.
#
#
BATCH_SIZE = 8
VIRTUAL_BATCH_SIZE = 32
assert VIRTUAL_BATCH_SIZE % BATCH_SIZE == 0 # VIRTUAL_BATCH_SIZE should be divisible by BATCH_SIZE
N_ACCUMULATION_STEPS = int(VIRTUAL_BATCH_SIZE / BATCH_SIZE)
# +
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler
from opacus.utils.uniform_sampler import UniformWithReplacementSampler
SAMPLE_RATE = BATCH_SIZE / len(train_dataset)
train_sampler=UniformWithReplacementSampler(
num_samples=len(train_dataset),
sample_rate=SAMPLE_RATE,
)
train_dataloader = DataLoader(train_dataset, batch_sampler=train_sampler)
test_sampler = SequentialSampler(test_dataset)
test_dataloader = DataLoader(test_dataset, sampler=test_sampler, batch_size=BATCH_SIZE)
# -
# ## Training
# +
import torch
# Move the model to appropriate device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
# Set the model to train mode (HuggingFace models load in eval mode)
model = model.train()
# Define optimizer
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, eps=1e-8)
# -
# Next we will define and attach PrivacyEngine. There are two parameters you need to consider here:
#
# - `noise_multiplier`. It defines the trade-off between privacy and accuracy. Adding more noise will provide stronger privacy guarantees, but will also hurt model quality.
# - `max_grad_norm`. Defines the maximum magnitude of L2 norms to which we clip per sample gradients. There is a bit of tug of war with this threshold: on the one hand, a low threshold means that we will clip many gradients, hurting convergence, so we might be tempted to raise it. However, recall that we add noise with `std=noise_multiplier * max_grad_norm` so we will pay for the increased threshold with more noise. In most cases you can rely on the model being quite resilient to clipping (after the first few iterations your model will tend to adjust so that its gradients stay below the clipping threshold), so you can often just keep the default value (`=1.0`) and focus on tuning `batch_size` and `noise_multiplier` instead. That being said, sometimes clipping hurts the model so it may be worth experimenting with different clipping thresholds, like we are doing in this tutorial.
#
# These two parameters define the scale of the noise we add to gradients: the noise will be sampled from a Gaussian distribution with `std=noise_multiplier * max_grad_norm`.
#
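# A minimal illustration of what this noise looks like with the values chosen below; the actual sampling is handled internally by Opacus, so this is only a sketch:

```python
import numpy as np

NOISE_MULTIPLIER = 0.4   # same values as chosen for the PrivacyEngine
MAX_GRAD_NORM = 0.1
noise_std = NOISE_MULTIPLIER * MAX_GRAD_NORM  # std of the Gaussian noise

rng = np.random.default_rng(0)
# noise is drawn independently for every coordinate of the clipped, summed gradients
noise = rng.normal(0.0, noise_std, size=1000)
print(noise_std, noise.std())
```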
# +
from opacus import PrivacyEngine
ALPHAS = [1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64))
NOISE_MULTIPLIER = 0.4
MAX_GRAD_NORM = 0.1
privacy_engine = PrivacyEngine(
module=model,
sample_rate=SAMPLE_RATE * N_ACCUMULATION_STEPS,
alphas=ALPHAS,
noise_multiplier=NOISE_MULTIPLIER,
max_grad_norm=MAX_GRAD_NORM,
)
privacy_engine.attach(optimizer)
# -
# Let’s first define the evaluation cycle.
# +
import numpy as np
from tqdm.notebook import tqdm
def accuracy(preds, labels):
return (preds == labels).mean()
# define evaluation cycle
def evaluate(model):
model.eval()
loss_arr = []
accuracy_arr = []
for batch in test_dataloader:
batch = tuple(t.to(device) for t in batch)
with torch.no_grad():
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'token_type_ids': batch[2],
'labels': batch[3]}
outputs = model(**inputs)
loss, logits = outputs[:2]
preds = np.argmax(logits.detach().cpu().numpy(), axis=1)
labels = inputs['labels'].detach().cpu().numpy()
loss_arr.append(loss.item())
accuracy_arr.append(accuracy(preds, labels))
model.train()
return np.mean(loss_arr), np.mean(accuracy_arr)
# -
# Now we specify the training parameters and run the training loop for three epochs
EPOCHS = 3
LOGGING_INTERVAL = 1000 # evaluate and report metrics once every LOGGING_INTERVAL steps
DELTA = 1 / len(train_dataloader) # Parameter for privacy accounting. Probability of not upholding the privacy guarantee
for epoch in range(1, EPOCHS+1):
losses = []
for step, batch in enumerate(tqdm(train_dataloader)):
batch = tuple(t.to(device) for t in batch)
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'token_type_ids': batch[2],
'labels': batch[3]}
outputs = model(**inputs) # output = loss, logits, hidden_states, attentions
loss = outputs[0]
loss.backward()
losses.append(loss.item())
# We process small batches of size BATCH_SIZE,
# until they're accumulated to a batch of size VIRTUAL_BATCH_SIZE.
# Only then we make a real `.step()` and update model weights
if (step + 1) % N_ACCUMULATION_STEPS == 0 or step == len(train_dataloader) - 1:
optimizer.step()
else:
optimizer.virtual_step()
if step > 0 and step % LOGGING_INTERVAL == 0:
train_loss = np.mean(losses)
eps, alpha = optimizer.privacy_engine.get_privacy_spent(DELTA)
eval_loss, eval_accuracy = evaluate(model)
print(
f"Epoch: {epoch} | "
f"Step: {step} | "
f"Train loss: {train_loss:.3f} | "
f"Eval loss: {eval_loss:.3f} | "
f"Eval accuracy: {eval_accuracy:.3f} | "
f"ɛ: {eps:.2f} (α: {alpha})"
)
# For the test accuracy, after training for three epochs you should expect something close to the results below.
#
# You can see that we can achieve a quite strong privacy guarantee at epsilon=7.5, with a moderate accuracy cost of 11 percentage points compared to a non-private model trained in a similar setting (upper layers only) and 16 points compared to the best results we were able to achieve using the same architecture.
#
# *NB: When not specified, DP-SGD is trained with upper layers only*
# | Model | Noise multiplier | Batch size | Accuracy | Epsilon |
# | --- | --- | --- | --- | --- |
# | no DP, train full model | N/A | 32 | 90.1% | N/A |
# | no DP, train upper layers only | N/A | 32 | 85.4% | N/A |
# | DP-SGD | 1.0 | 32 | 70.5% | 0.7 |
# | **DP-SGD (this tutorial)** | **0.4** | **32** | **74.3%** | **7.5** |
# | DP-SGD | 0.3 | 32 | 75.8% | 20.7 |
# | DP-SGD | 0.1 | 32 | 78.3% | 2865 |
# | DP-SGD | 0.4 | 8 | 67.3% | 5.9 |
| tutorials/building_text_classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Create path features
# Paths are created based on date features only (0.2% error)
# +
import os
import re
import pickle
import time
import datetime
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.sparse import csr_matrix, vstack
# %matplotlib inline
# Custom modules
import const
import func
# -
# ## Load data
print const.TRAIN_FILES
print const.TEST_FILES
lut = pd.read_csv(const.LOOK_UP_TABLE)
lut.set_index('name_dat', inplace=True)
lut.head(3)
date_train = func.load_data_file(const.TRAIN_FILES[2])
date_test = func.load_data_file(const.TEST_FILES[2])
date_data = vstack([date_train['data']['features'],date_test['data']['features']], format='csr')
ids = pd.concat([date_train['data']['ids'], date_test['data']['ids']])
y = date_train['data']['y']
st_columns = lut.groupby('station_V2')['col_dat'].first().values
st_names = lut.groupby('station_V2')['col_dat'].first().index.values
date_data = pd.DataFrame(date_data[:, st_columns].todense()).replace(0, np.nan)
date_data.columns = [str(st_names[n]) for n in date_data.columns]
# Add clusters, response and id to data
# Add cluster info
cluster_info = pd.read_csv(os.path.join(const.DATA_PATH, 'eda_sample_clusters.csv'))
cluster_info.head(3)
date_data = date_data.merge(ids.reset_index(), left_index=True, right_index=True, how='left')
date_data = date_data.merge(cluster_info, left_on='Id', right_on='Id', how='left')
date_data = date_data.merge(y, left_on='Id', right_index=True, how='left')
print date_data.shape
date_data.head(3)
# ## Calculate features
d_cols = date_data.columns[:128]
n_samples = date_data.shape[0]
lines = lut['line_V2'].unique()
path_feat = pd.DataFrame(ids.Id.values)
for line in lines:
stations = [str(float(x)) for x in lut[lut['line_V2']==line]['station_V2'].unique()]
df = (~date_data.loc[:,date_data.columns.isin(stations)].isnull()).sum(1)
df = df.replace(0, np.nan)
df -= df.value_counts().index[0]
path_feat = pd.concat([path_feat, df], axis=1)
path_feat.columns = ['Id'] + ['V2_' + str(x) for x in lines if x!='Id']
# First station
path_feat['first_station'] = date_data[d_cols].apply(lambda x: x.first_valid_index(), axis=1)
path_feat['last_station'] = date_data[d_cols].apply(lambda x: x.last_valid_index(), axis=1)
path_feat.head()
# Which line in the end
path_feat['stage_2'] = path_feat.loc[:,['V2_5.0','V2_6.0']].abs().idxmin(1)
# Which line in the beginning
path_feat['stage_1'] = path_feat.loc[:,['V2_1.0','V2_2.0','V2_3.1','V2_3.2','V2_3.3',
'V2_4.1','V2_4.2','V2_4.3','V2_4.4']].abs().idxmin(1)
# How many lines in the first part?
path_feat['stage_1_cnt'] = path_feat.loc[:,['V2_1.0','V2_2.0','V2_3.1','V2_3.2','V2_3.3',
'V2_4.1','V2_4.2','V2_4.3','V2_4.4']].abs().count(1)
# Compress stage1
path_feat['stage_1_sum'] = path_feat.loc[:,['V2_1.0','V2_2.0','V2_3.1','V2_3.2','V2_3.3',
'V2_4.1','V2_4.2','V2_4.3','V2_4.4']].sum(1)
# Compress stage2
path_feat['stage_2_sum'] = path_feat.loc[:,['V2_5.0','V2_6.0']].sum(1)
# How many stations in total path
path_feat['stationV2_cnt'] = date_data.loc[:,'0.0':'51.0'].count(1)
# Path nr & clusters
path_feat['unique_path'] = date_data['unique_path']
path_feat['cluster_n8'] = date_data['cluster_n8']
path_feat['cluster_n50'] = date_data['cluster_n50']
path_feat['cluster_n100'] = date_data['cluster_n100']
path_feat['cluster_n500'] = date_data['cluster_n500']
# How many stations in total path (deviation from cluster median)
path_feat['stage_1_sum_devn8'] = path_feat['stage_1_sum']
for cl in path_feat['cluster_n8'].unique():
path_feat.loc[path_feat['cluster_n8']==cl, 'stage_1_sum_devn8'] -= \
path_feat.loc[path_feat['cluster_n8']==cl,'stage_1_sum'].median()
# How many stations in total path (deviation from cluster median)
path_feat['stationV2_cnt_devn8'] = path_feat['stationV2_cnt']
for cl in path_feat['cluster_n8'].unique():
path_feat.loc[path_feat['cluster_n8']==cl, 'stationV2_cnt_devn8'] -= \
path_feat.loc[path_feat['cluster_n8']==cl,'stationV2_cnt'].median()
# +
# Frequency of cluster (n=500)
n500_cnt = ((path_feat['cluster_n500'].value_counts()/n_samples).round(4)*10000).astype(int) \
.reset_index(name='n500_cnt') \
.rename(columns={'index': 'cluster_n500'})
path_feat = path_feat.merge(n500_cnt, on='cluster_n500', how='left')
# +
# Frequency of unique path
upath_cnt = ((path_feat['unique_path'].value_counts()/n_samples).round(4)*10000).astype(int) \
.reset_index(name='upath_cnt') \
.rename(columns={'index': 'unique_path'})
path_feat = path_feat.merge(upath_cnt, on='unique_path', how='left')
# -
# Combination of S32 / 33
path_feat['path_32'] = ((~date_data['32.0'].isnull()) & (date_data['33.0'].isnull()))
path_feat.head()
path_feat.head()
# ## Store feature set as csv
path_feat.to_csv(os.path.join(const.DATA_PATH, 'feat_set_path.csv'), index=False)
| feature_set_path.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: python3_7_6
# language: python
# name: py3_7_6
# ---
import pandas as pd
import numpy as np
A = pd.Series([2,4,6], index=[0,1,2])
B = pd.Series([1,3,5], index=[1,2,3])
print(A)
print()
print(B)
print()
print(A+B)
print(A.add(B, fill_value=0))
# +
A = pd.DataFrame(np.random.randint(0, 10, (2, 2)), columns=list("AB"))
B = pd.DataFrame(np.random.randint(0, 10, (3, 3)), columns=list("BAC"))
print(A)
print()
print(B)
# -
print(A+B)
A.add(B, fill_value=0)
data = {
'A': [i+5 for i in range(3)],
'B': [i**2 for i in range(3)]
}
print(data)
df = pd.DataFrame(data)
df.A
df.A.sum()
df.sum()
df.mean()
# +
df = pd.DataFrame({
'col1':[2,1,4,6,7,10],
'col2':['a','a','b',np.nan,'d','c'],
'col3':[1,0,9,4,2,3]
})
df
# -
df.sort_values('col1',ascending=False)
df.sort_values(['col1','col2'])
df = pd.DataFrame(np.random.rand(5,2),columns=["A","B"])
print(df)
df["A"]<0.5
df[(df["A"]<0.5)&(df["B"]>0.3)]
df.query("A<0.5 and B>0.3")
df = pd.DataFrame({
'Animal':['dog','cat','cat','pig','cat'],
'Name':['happy','sam','toby','mini','rocky']
})
df
df["Animal"].str.contains("cat")
df.Animal.str.match("cat")
df[df["Animal"].str.contains("cat")]
df[df.Animal.str.contains("cat")]
# apply lets us transform the data with a function
df = pd.DataFrame(np.arange(5),columns=["Num"])
def square(x):
return x**2
a = df["Num"].apply(square)
b = df["Square"] = df.Num.apply(lambda x: x**2)  # chained assignment: b and the new column are the same Series
print(a)
print(b)
df = pd.DataFrame(columns=["phone"])
df.loc[0] = "010-1234-1235"
df.loc[1] = "공일공-일이삼사-1235"
df.loc[2] = "010.1234.일이삼오"
df.loc[3] = "공1공-1234.1이3오"
df["preprocess_phone"] = ''
df
def get_preprocess_phone(phone):
mapping_dict = {
"공": "0",
"일": "1",
"이": "2",
"삼": "3",
"사": "4",
"오": "5",
"-": "",
".": "",
}
for k, v in mapping_dict.items():
phone = phone.replace(k,v)
return phone
df["preprocess_phone"] = df['phone'].apply(get_preprocess_phone)
df
df = pd.DataFrame([0,0,1,1,0],columns =['sex'])
print(df,'\n')
df.sex.replace({0:'Male',1:'Female'})
df
df.sex.replace({0:'male', 1:'female'}, inplace=True)
# inplace=True: the replacement is applied to df itself, so df now shows the new values
df
df = pd.DataFrame({'key':['A', 'B', 'C', 'A', 'B', 'C'],
'data':range(6)})
df.groupby('key')
df.groupby('key').sum()
df.groupby(['key', 'data']).sum()
# +
# aggregate: compute summary statistics per group
# -
df.groupby('key').aggregate([min, np.median, max])
df.groupby('key').aggregate([min, np.sum])  # use a list, not a set, so the column order is deterministic
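# aggregate also accepts a dict that maps column names to functions, which makes the intent explicit; a small sketch:

```python
import pandas as pd

df = pd.DataFrame({'key': ['A', 'B', 'C', 'A', 'B', 'C'],
                   'data': range(6)})
# dict form: map each column to the function(s) applied per group
out = df.groupby('key').aggregate({'data': ['min', 'max']})
print(out)
```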
| pandas/pandas_21_04_8.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
## DATASET
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets(".", one_hot=True, reshape=False)
import tensorflow as tf
# Parameters
learning_rate = 0.00001
epochs = 10
batch_size = 128
# Number of samples to calculate validation and accuracy
# Decrease this if you're running out of memory to calculate accuracy
test_valid_size = 256
# Network Parameters
n_classes = 10 # MNIST total classes (0-9 digits)
dropout = 0.75 # Dropout, probability to keep units
# +
## WEIGHTS AND BIASES
# Store layers weight & bias
weights = {
'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
'out': tf.Variable(tf.random_normal([1024, n_classes]))}
biases = {
'bc1': tf.Variable(tf.random_normal([32])),
'bc2': tf.Variable(tf.random_normal([64])),
'bd1': tf.Variable(tf.random_normal([1024])),
'out': tf.Variable(tf.random_normal([n_classes]))}
# -
# ### Convoloutions
# 
# Convolution with 3×3 Filter.
# Source: http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution
#
# The above is an example of a convolution with a 3x3 filter and a stride of 1 being applied to data with a range of 0 to 1. The convolution for each 3x3 section is calculated against the weight, `[[1, 0, 1], [0, 1, 0], [1, 0, 1]]`, then a bias is added to create the convolved feature on the right. In this case, the bias is zero. In TensorFlow, this is all done using `tf.nn.conv2d()` and `tf.nn.bias_add()`.
def conv2d(x, W, b, strides=1):
x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
x = tf.nn.bias_add(x, b)
return tf.nn.relu(x)
# The ```tf.nn.conv2d()``` function computes the convolution against weight W as shown above.
#
# In TensorFlow, ```strides``` is an array of 4 elements; the first element in this array indicates the stride for batch and last element indicates stride for features. It's good practice to remove the batches or features you want to skip from the data set rather than use a stride to skip them. You can always set the first and last element to 1 in ```strides``` in order to use all batches and features.
#
# The middle two elements are the strides for height and width respectively. I've mentioned stride as one number because you usually have a square stride where ```height = width```. When someone says they are using a stride of 3, they usually mean ```tf.nn.conv2d(x, W, strides=[1, 3, 3, 1])```.
#
# To make life easier, the code is using ```tf.nn.bias_add()``` to add the bias. Using ```tf.add()``` doesn't work when the tensors aren't the same shape.
#
#
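# The 3x3-filter arithmetic described above can be reproduced with plain NumPy for a single output position (the input values are made up; this is an illustration, not TensorFlow's implementation):

```python
import numpy as np

image = np.array([[1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1]], dtype=float)
weight = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]], dtype=float)
bias = 0.0

# one output value: elementwise product of a 3x3 patch with the filter, summed, plus the bias
patch = image[0:3, 0:3]
out = (patch * weight).sum() + bias
print(out)
```

# Sliding the patch across the image with the chosen stride yields the full convolved feature map.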
# ### Max Pooling
# 
# Max Pooling with 2x2 filter and stride of 2. Source: http://cs231n.github.io/convolutional-networks/
#
#
# The above is an example of max pooling with a 2x2 filter and stride of 2. The left square is the input and the right square is the output. The four 2x2 colors in input represents each time the filter was applied to create the max on the right side. For example, ```[[1, 1], [5, 6]]``` becomes 6 and ```[[3, 2], [1, 2]]``` becomes 3.
def maxpool2d(x, k=2):
return tf.nn.max_pool(
x,
ksize=[1, k, k, 1],
strides=[1, k, k, 1],
padding='SAME')
# The ```tf.nn.max_pool()``` function does exactly what you would expect, it performs max pooling with the ```ksize``` parameter as the size of the filter.
# ### Model
# 
# Image from Explore The Design Space video
#
#
#
#
# In the code below, we create two convolution layers, each followed by max pooling, then a fully connected layer and an output layer. The transformation performed by each layer is shown in the comments. For example, the first layer shapes the images from `28x28x1` to `28x28x32` in the convolution step. The next step applies max pooling, turning each sample into `14x14x32`. All the layers are applied from `conv1` to `output`, producing the 10 class predictions.
def conv_net(x, weights, biases, dropout):
# Layer 1 - 28*28*1 to 14*14*32
conv1 = conv2d(x, weights['wc1'], biases['bc1'])
conv1 = maxpool2d(conv1, k=2)
# Layer 2 - 14*14*32 to 7*7*64
conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
conv2 = maxpool2d(conv2, k=2)
# Fully connected layer - 7*7*64 to 1024
fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
fc1 = tf.nn.relu(fc1)
fc1 = tf.nn.dropout(fc1, dropout)
# Output Layer - class prediction - 1024 to 10
out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
return out
# +
## SESSION
# tf Graph input
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
y = tf.placeholder(tf.float32, [None, n_classes])
keep_prob = tf.placeholder(tf.float32)
# Model
logits = conv_net(x, weights, biases, keep_prob)
# Define loss and optimizer
cost = tf.reduce_mean(\
tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\
.minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
for epoch in range(epochs):
for batch in range(mnist.train.num_examples//batch_size):
batch_x, batch_y = mnist.train.next_batch(batch_size)
sess.run(optimizer, feed_dict={
x: batch_x,
y: batch_y,
keep_prob: dropout})
# Calculate batch loss and accuracy
loss = sess.run(cost, feed_dict={
x: batch_x,
y: batch_y,
keep_prob: 1.})
valid_acc = sess.run(accuracy, feed_dict={
x: mnist.validation.images[:test_valid_size],
y: mnist.validation.labels[:test_valid_size],
keep_prob: 1.})
            print('Epoch {:>2}, Batch {:>3} - '
'Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(
epoch + 1,
batch + 1,
loss,
valid_acc))
# Calculate Test Accuracy
test_acc = sess.run(accuracy, feed_dict={
x: mnist.test.images[:test_valid_size],
y: mnist.test.labels[:test_valid_size],
keep_prob: 1.})
print('Testing Accuracy: {}'.format(test_acc))
# -
| HelperProjects/Convolutional-Neural-Networks/cnn-in-tensorflow/convolutional-network-in-tensorflow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
import pickle
data_file = r"C:\Users\t-sabeer\Documents\databases\SnapshotSerengeti.json"
with open(data_file, 'r') as f:
data = json.load(f)
tax_file = r"C:\Users\t-sabeer\Documents\code\cameraTraps\category_data_with_taxonomy.json"
with open(tax_file, 'r') as f:
taxonomy = json.load(f)
db_file = r"C:\Users\t-sabeer\Documents\code\cameraTraps\inat_category_lookup_SS.p"
inat_cat_lookup = pickle.load(open(db_file,'rb'))
images = data['images']
categories = data['categories']
annotations = data['annotations']
cat_names = [[cat['name'],cat['id']] for cat in categories]
for cat in cat_names:
if cat[0] not in inat_cat_lookup:
print(cat)
cat_to_id = {}
id_to_cat = {}
for cat in categories:
cat_to_id[cat['name']] = cat['id']
id_to_cat[cat['id']] = cat['name']
print(len(taxonomy))
print(taxonomy[0])
for taxa in taxonomy:
if taxa['name'] == 'Tamias dorsalis':
print(taxa)
print([(cat['name'],inat_cat_lookup[cat['name']]) for cat in categories if cat['name'] in inat_cat_lookup])
old_cat_lookup = {inat_cat_lookup[cat]: cat for cat in inat_cat_lookup}
# +
map_inat_to_ss = {}
cats_in_inat = []
inat_cats = []
for taxa in taxonomy:
match = False
if taxa['name'] in inat_cat_lookup.values():
map_inat_to_ss[taxa['id']] = cat_to_id[old_cat_lookup[taxa['name']]]
match = True
cats_in_inat.append(old_cat_lookup[taxa['name']])
inat_cats.append(taxa['name'])
else: #no direct species match
for a in taxa['ancestors']:
if a['name'] in inat_cat_lookup.values():
#print(a)
map_inat_to_ss[taxa['id']] = cat_to_id[old_cat_lookup[a['name']]]
match = True
print(cats_in_inat)
print(inat_cats)
# -
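# The cell above first tries an exact species-name match and only then walks the iNat ancestor list. A toy illustration of that fallback (the names and the `match_name` helper here are hypothetical, not part of the real lookup):

```python
lookup_names = {'Panthera leo', 'Canis'}  # stand-in for inat_cat_lookup.values()

taxa = {'name': 'Canis lupus',
        'ancestors': [{'name': 'Animalia'}, {'name': 'Canis'}]}

def match_name(taxa, lookup_names):
    if taxa['name'] in lookup_names:   # direct species match
        return taxa['name']
    for a in taxa['ancestors']:        # otherwise fall back to a listed ancestor
        if a['name'] in lookup_names:
            return a['name']
    return None

print(match_name(taxa, lookup_names))  # Canis
```

# Note one difference: this helper returns the first matching ancestor, while the loop above keeps scanning without a break, so with several matching ancestors it ends up with the last one.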
pickle.dump(map_inat_to_ss,open('map_inat_to_ss.p','wb'))
print(len(map_inat_to_ss))
id_to_cat[map_inat_to_ss[1]]
for cat in inat_cat_lookup:
if cat_to_id[cat] not in map_inat_to_ss.values():
print(cat, inat_cat_lookup[cat])
for taxa in taxonomy[:1]:
for a in taxa['ancestors']:
print(a)
| research/map_data_to_inat/TaxonomyMapSS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="J0Qjg6vuaHNt"
# # Neural machine translation with attention
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="tnxXKDjq3jEL" outputId="b3fd78fc-a567-4313-bfab-5a6a312b4aeb"
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import io
import time
from pathlib import Path
# + [markdown] id="jE0wJLpPIF42"
# ## Load integer addition data
# + id="PCMJUn0qIH97"
def load(data_type, n_terms, n_digits):
term_dig_dir = Path(f'{n_terms}term_{n_digits}digs')
dir = Path('../Code') / Path('data') / Path(data_type) / term_dig_dir
X_train = np.load(dir / Path('X_train.npy'), allow_pickle=True)
X_test = np.load(dir / Path('X_test.npy'), allow_pickle=True)
y_train = np.load(dir / Path('y_train.npy'), allow_pickle=True)
y_test = np.load(dir / Path('y_test.npy'), allow_pickle=True)
return X_train, X_test, y_train, y_test
# + id="srhN7ga8Ic3V"
X_train, X_test, y_train, y_test = load('random', 3, 2)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="4ddOWA0LSvA3" outputId="351b8758-e418-4921-d302-c453846f1e23"
X_train.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="gi1V-VefSyEL" outputId="9c0a6057-14f4-4557-a2c2-26bac4602dca"
X_train[0]
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="BAQJvHQn-Dmw" outputId="4bb23cc7-f104-4ca6-b8ac-7187a9a8b85c"
y_train.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="Zz0emBL7APlP" outputId="607f0494-f260-4fe9-ac25-64f4f7a03d93"
y_train[0]
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="cg45pl4pIu_p" outputId="aed23dd0-396b-47c8-9b59-8fac030e70b7"
X_train.shape, y_train.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="Mjr0RK5jIyTS" outputId="977e75cb-95e0-4257-fa80-ee9528dda5c9"
X_test.shape, y_test.shape
# + id="qNRIjWaM1hal"
def char_to_int_map(max_value=9, min_value=0):
char_to_int = {str(n): n for n in range(min_value, max_value+1)}
n_terms = max_value - min_value + 1
char_to_int['+'] = n_terms
char_to_int['\t'] = n_terms + 1
char_to_int['\n'] = n_terms + 2
char_to_int[' '] = n_terms + 3
return char_to_int
char_to_int = char_to_int_map()
int_to_char = {v: k for k, v in char_to_int.items()}
def decode(v):
return int_to_char[v]
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="sovyIeQ3CbBI" outputId="b2301d88-8d53-47b7-e66c-046187379579"
''.join([decode(x) for x in X_train[0]])
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="8rtyOakQCO6J" outputId="954f904e-5558-4d14-c5cc-052ae33b0afb"
''.join([decode(x) for x in y_train[0]])
# + [markdown] id="rgCLkfv5uO3d"
# ### Create a tf.data dataset
# + id="TqHsArVZ3jFS"
BUFFER_SIZE = X_train.shape[0]
BATCH_SIZE = 32
steps_per_epoch = X_train.shape[0] // BATCH_SIZE
embedding_dim = 13
units = 256
bidir = True
vocab_inp_size = len(char_to_int)
vocab_tar_size = len(char_to_int)
# Data is already padded, so we can just grab any one
max_length_inp = len(X_train[0])
max_length_targ = len(y_train[0])
train_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train)).shuffle(BUFFER_SIZE)
train_ds = train_ds.batch(BATCH_SIZE, drop_remainder=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="qc6-NK1GtWQt" outputId="c242e480-3713-462c-8196-019dc46ca4cd"
example_input_batch, example_target_batch = next(iter(train_ds))
example_input_batch.shape, example_target_batch.shape
# + [markdown] id="TNfHIF71ulLu"
# ## Write the encoder and decoder model
#
# Implement an encoder-decoder model with attention, which you can read about in the TensorFlow [Neural Machine Translation (seq2seq) tutorial](https://github.com/tensorflow/nmt). This example uses a more recent set of APIs. This notebook implements the [attention equations](https://github.com/tensorflow/nmt#background-on-the-attention-mechanism) from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism, which is then used by the decoder to predict the next word in the sentence. The picture and formulas below are an example of the attention mechanism from [Luong's paper](https://arxiv.org/abs/1508.04025v5).
#
# <img src="https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
#
# The input is put through an encoder model which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*.
#
# Here are the equations that are implemented:
#
# <img src="https://www.tensorflow.org/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800">
# <img src="https://www.tensorflow.org/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800">
#
# This tutorial uses [Bahdanau attention](https://arxiv.org/pdf/1409.0473.pdf) for the encoder. Let's decide on notation before writing the simplified form:
#
# * FC = Fully connected (dense) layer
# * EO = Encoder output
# * H = hidden state
# * X = input to the decoder
#
# And the pseudo-code:
#
# * `score = FC(tanh(FC(EO) + FC(H)))`
# * `attention weights = softmax(score, axis = 1)`. Softmax by default is applied on the last axis but here we want to apply it on the *1st axis*, since the shape of score is *(batch_size, max_length, hidden_size)*. `Max_length` is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis.
# * `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis as 1.
# * `embedding output` = The input to the decoder X is passed through an embedding layer.
# * `merged vector = concat(embedding output, context vector)`
# * This merged vector is then given to the GRU
#
# The shapes of all the vectors at each step have been specified in the comments in the code:
# + id="nZ2rI24i3jFg"
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz, bidir):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.bidir = bidir
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim, name='Encoder_Embedding')
self.gru = tf.keras.layers.GRU(self.enc_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform',
name='Encoder_GRU')
if bidir:
self.bidir_gru = tf.keras.layers.Bidirectional(self.gru, name='Encoder_BidirGRU')
def call(self, x, hidden):
x = self.embedding(x)
if not self.bidir:
output, state = self.gru(x, initial_state = hidden)
else:
hidden = [hidden, hidden]
output, forward_state, reverse_state = self.bidir_gru(x, initial_state = hidden)
state = tf.concat([forward_state, reverse_state], axis=1)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
# + id="60gSVh05Jl6l"
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE, bidir)
# + id="umohpBN2OM94"
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units, bidir):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units, name='Attn_W1')
self.W2 = tf.keras.layers.Dense(units, name='Attn_W2')
self.V = tf.keras.layers.Dense(1, name='Attn_V')
self.bidir = bidir
def call(self, query, values):
# query hidden state shape == (batch_size, hidden size)
# query_with_time_axis shape == (batch_size, 1, hidden size)
# values shape == (batch_size, max_len, hidden size)
# we are doing this to broadcast addition along the time axis to calculate the score
query_with_time_axis = tf.expand_dims(query, 1)
# score shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we are applying score to self.V
# the shape of the tensor before applying self.V is (batch_size, max_length, units)
score = self.V(tf.nn.tanh(
self.W1(query_with_time_axis) + self.W2(values)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
# + id="yJ_B3mhW3jFk"
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz, bidir):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.bidir = bidir
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim, name='Decoder_Embedding')
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform',
name='Decoder_GRU')
if bidir:
self.bidir_gru = tf.keras.layers.Bidirectional(self.gru, name='Decoder_BidirGRU')
self.fc = tf.keras.layers.Dense(vocab_size, name='Decoder_FC')
# used for attention
self.attention = BahdanauAttention(self.dec_units, self.bidir)
def call(self, x, hidden, enc_output):
# enc_output shape == (batch_size, max_length, hidden_size)
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
if not self.bidir:
output, state = self.gru(x)
else:
output, forward_state, reverse_state = self.bidir_gru(x)
state = tf.concat([forward_state, reverse_state], axis=1)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
# + id="P5UY8wko3jFp"
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE, bidir)
# + [markdown] id="_ch_71VbIRfK"
# ## Define the optimizer and the loss function
# + id="WmTHr5iV3jFr"
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
# Mask whitespace first
mask = tf.math.logical_not(tf.math.equal(real, char_to_int[' ']))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
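# A NumPy sketch of what `loss_function` above does: per-token losses at padded positions are zeroed before averaging. With the default `char_to_int_map()` above, the pad character `' '` maps to id 13 (an assumption carried over from that cell):

```python
import numpy as np

PAD = 13  # char_to_int[' '] under the default char_to_int_map()

def masked_mean_loss(real, per_token_loss):
    # Zero out losses where the target is padding, then average over
    # ALL positions (matching tf.reduce_mean over the masked losses).
    mask = (real != PAD).astype(per_token_loss.dtype)
    return (per_token_loss * mask).mean()

real = np.array([5, 2, 12, PAD, PAD])           # '52\n' plus two pad tokens
tok_loss = np.array([0.5, 0.3, 0.2, 9.0, 9.0])  # big losses on padding are ignored
print(masked_mean_loss(real, tok_loss))         # ~0.2 rather than ~3.8
```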
# + [markdown] id="DMVWzzsfNl4e"
# ## Checkpoints (Object-based saving)
# + id="Zj8bXQTgNwrF"
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
# + [markdown] id="hpObfY22IddU"
# ## Training
#
# 1. Pass the *input* through the *encoder* which return *encoder output* and the *encoder hidden state*.
# 2. The encoder output, encoder hidden state and the decoder input (which is the *start token*) is passed to the decoder.
# 3. The decoder returns the *predictions* and the *decoder hidden state*.
# 4. The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
# 5. Use *teacher forcing* to decide the next input to the decoder.
# 6. *Teacher forcing* is the technique where the *target word* is passed as the *next input* to the decoder.
# 7. The final step is to calculate the gradients and apply it to the optimizer and backpropagate.
# + id="sC9ArXSsVfqn"
@tf.function
def train_step(inp, targ, enc_hidden):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, enc_hidden)
# print(f'enc_output.shape = {enc_output.shape}')
# print(f'enc_hidden.shape = {enc_hidden.shape}')
dec_hidden = enc_hidden
dec_input = tf.expand_dims([char_to_int['\t']] * BATCH_SIZE, 1)
dec_input = tf.dtypes.cast(dec_input, tf.float32)
# print(f'dec_input.shape = {dec_input.shape}')
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
# print(f'predictions.shape = {predictions.shape}')
# print(f'dec_hidden.shape = {dec_hidden.shape}')
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return batch_loss
# + colab={"base_uri": "https://localhost:8080/", "height": 477} id="ddefjBMa3jF0" outputId="1d76d8e3-66e3-4d2a-aff2-00751c9a688c"
EPOCHS = 20
for epoch in range(EPOCHS):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(train_ds.take(steps_per_epoch)):
batch_loss = train_step(inp, targ, enc_hidden)
total_loss += batch_loss
# if batch % 100 == 0:
# print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
# batch,
# batch_loss.numpy()))
# saving (checkpoint) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss / steps_per_epoch))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
# + [markdown] id="mU3Ce8M6I3rz"
# ## Translate
#
# * The evaluate function is similar to the training loop, except we don't use *teacher forcing* here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
# * Stop predicting when the model predicts the *end token*.
# * And store the *attention weights for every time step*.
#
# Note: The encoder output is calculated only once for one input.
# + id="EbQpyYs13jF_"
def evaluate(sentence):
attention_plot = np.zeros((max_length_targ, max_length_inp))
# Preprocess the input
inputs = [char_to_int[i] for i in sentence]
inputs = np.array(inputs).reshape(1, -1)
inputs = tf.convert_to_tensor(inputs)
# Start with a blank result
result = ''
# Initial hidden state is zeros
hidden = tf.zeros((1, units))
# Encode the input
enc_out, enc_hidden = encoder(inputs, hidden)
# Use the encoder's final hidden state as the dec's initial hidden state
dec_hidden = enc_hidden
# Start by feeding the decoder the starting character
dec_input = tf.expand_dims([char_to_int['\t']], 0)
for t in range(max_length_targ):
# Continually decode the current result, and store the attention weights for plotting
predictions, dec_hidden, attention_weights = decoder(dec_input, dec_hidden, enc_out)
# storing the attention weights to plot later on
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.argmax(predictions[0]).numpy()
result += int_to_char[predicted_id]
if predicted_id == char_to_int['\n']:
return result, sentence, attention_plot
# the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence, attention_plot
# + id="s5hQWlbN3jGF"
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='Reds')
fontdict = {'fontsize': 14}
sentence = list(sentence)
    # Appending a \n doesn't jibe with padding; need to fix this
sentence[-1] = r'\n'
predicted_sentence = list(predicted_sentence)
predicted_sentence[-1] = r'\n'
xticks = [''] + sentence
yticks = [''] + predicted_sentence
ax.set_xticklabels(xticks, fontdict=fontdict)
ax.set_yticklabels(yticks, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
# + id="sl9zUHzg3jGI"
def translate(sentence):
result, sentence, attention_plot = evaluate(sentence)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
# attention_plot = attention_plot[:len(result), :len(sentence)]
plot_attention(attention_plot, sentence, result)
# + [markdown] id="n250XbnjOaqP"
# ## Restore the latest checkpoint and test
# + id="UJpT9D5_OgP6"
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
# + id="HF0Y_DQtHiEu"
def generate_example_plot(input=None, idx=None):
output = None
if idx is None and input is None:
        idx = np.random.randint(X_test.shape[0])  # index a random test-set row
if input is None:
input = X_test[idx]
output = y_test[idx]
if not isinstance(input, str):
test_input = ''.join([decode(x) for x in input])
else:
test_input = input
translate(test_input)
if output is not None:
print(f'Actual: {"".join([decode(x) for x in output]).strip()}')
# + id="WrAM0FDomq3E"
generate_example_plot()
# + [markdown] id="E62256nsboMS"
# ## Test set results
# + id="NPoGWEuIwzER"
correct_cnt = 0
incorrect_attn_plots = dict()
for x, y in zip(X_test[:100], y_test[:100]):
input = ''.join([decode(i) for i in x])
result, sentence, attention_plot = evaluate(input)
result = result.strip()
actual = ''.join([decode(i) for i in y]).strip()
if result == actual:
correct_cnt += 1
else:
incorrect_attn_plots[(input, result, actual)] = attention_plot
print(f'Actual = {actual}, result = {result}')
# + id="nq0_WdG6fiH9"
for k, attention_plot in incorrect_attn_plots.items():
sentence, result, actual = k
print(f'input = {sentence.strip()}, result = {result}, actual = {actual}')
plot_attention(attention_plot, sentence, result)
print('='*100)
# + id="Vug5BLqeia-O"
generate_example_plot(input='0+12\n ')
# + id="fFEwdC9Gk4YV"
| Notebooks/Seq2Seq_with_Attention.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
# +
import pandas as pd
football = pd.read_csv('data/data_sf.csv', index_col=[0])
# display(football.head())
football.head()
# -
football.columns
small_df = football[football.columns[1:8]].head(25)
small_df
s=small_df['Nationality'].value_counts()
s
s.index
len(football['Club'].value_counts())
c = football['Club'].value_counts()
c
c[c == c.max()]
c[c == c.min()]
small_df['Nationality'].value_counts(normalize=True)
p = football['Position'].value_counts(normalize=True)
p
p[p>0.1]
p[p<0.01]
b = small_df['Wage'].value_counts(bins=4)
display(b)
small_df.loc[(small_df['Wage'] > b.index[3].left) & (small_df['Wage'] <= b.index[3].right)]
football['FKAccuracy'].value_counts(bins=5)
football['FKAccuracy'].value_counts(bins=5).index.max()
df = football
df['Position'].value_counts().reset_index()
football[football['Nationality']=='Spain']['Wage'].value_counts(bins=4).min() /\
football[football['Nationality']=='Spain']['Name'].count()
football[football['Nationality']=='Spain']['Wage'].value_counts(bins=4)
football[football['Nationality']=='Spain']['Wage'].value_counts(bins=4).index.min()
650 / football[football['Nationality']=='Spain']['Name'].count()
football[football['Club']=="Manchester United"]['Nationality'].nunique()
football[(football['Nationality']=='Brazil') & (football['Club']=="Juventus")]['Name'].to_list()
football[football['Age'] > 35]['Club'].value_counts()
football[football['Nationality'] == 'Argentina']['Age'].value_counts(bins=4)
football[football['Nationality'] == 'Spain']['Age'].value_counts(normalize=True)*100
| python-pandas/Python6-Pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# This is an interactive tutorial designed to walk you through the steps of
# fitting an autoregressive Poisson GLM (i.e., a spiking GLM with
# spike-history) and a multivariate autoregressive Poisson GLM (i.e., a
# GLM with spike-history AND coupling between neurons).
#
# Data: from Uzzell & Chichilnisky 2004; see README file for details.
#
# Last updated: Mar 10, 2020 (<NAME>)
#
# Instructions: Execute each section below separately using cmd-enter.
# For detailed suggestions on how to interact with this tutorial, see
# header material in tutorial1_PoissonGLM.m
#
# Transferred into python by <NAME>
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.io import loadmat
from scipy.linalg import hankel,pinv
from scipy.interpolate import interp1d
from interpolation import interp
from numpy.linalg import inv,norm,lstsq
from matplotlib import mlab
aa=np.asarray
# # Load raw data
# Be sure to unzip the data file data_RGCs.zip
# (http://pillowlab.princeton.edu/data/data_RGCs.zip) and place it in
# this directory before running the tutorial.
# Or substitute your own dataset here instead!
#
#
# (Data from Uzzell & Chichilnisky 2004):
datadir='../data_RGCs/' # directory where stimulus lives
Stim=loadmat(datadir+'Stim.mat')['Stim'].flatten() # stimulus (temporal binary white noise)
stimtimes=loadmat(datadir+'stimtimes.mat')['stimtimes'].flatten() # stim frame times in seconds (if desired)
SpTimes=loadmat(datadir+'SpTimes.mat')['SpTimes'][0,:] # load spike times (in units of stim frames)
SpTimes=[p.flatten() for p in SpTimes]
ncells=len(SpTimes)
# Compute some basic statistics on the stimulus
dtStim = stimtimes[1]-stimtimes[0] # time bin size for stimulus (s)
nT = Stim.shape[0] # number of time bins in stimulus
# See tutorial 1 for some code to visualize the raw data!
# # Bin the spike trains
# For now we will assume we want to use the same time bin size as the time bins used for the stimulus. Later, though, we'll wish to vary this.
# +
tbins=np.arange(.5,nT+1.5)*dtStim # time bin edges for spike-train histogram
sps=np.zeros([nT,ncells])
for jj in range(ncells):
sps[:,jj]=np.histogram(SpTimes[jj],tbins)[0] # binned spike train
# -
# Let's just visualize the spike-train auto and cross-correlations (Comment out this part if desired!)
# +
nlags=30 # number of time-lags to use
fig,axes=plt.subplots(nrows=ncells,ncols=ncells,figsize=(20,20),sharey=True)
for ii in range(ncells):
for jj in range(ii,ncells):
# Compute cross-correlation of neuron i with neuron j
axes[ii,jj].xcorr(sps[:,ii],sps[:,jj],maxlags=nlags,normed=True)
axes[ii,jj].set_title('cells (%d,%d)'%(ii,jj))
# -
# # Build design matrix: single-neuron GLM with spike-history
# Pick the cell to focus on (for now).
cellnum = 2 # 0-1: OFF, 2-3: ON
# Set the number of time bins of stimulus to use for predicting spikes
ntfilt = 25 # Try varying this, to see how performance changes!
# Set number of time bins of auto-regressive spike-history to use
nthist = 20
# Build stimulus design matrix (using `hankel`)
paddedStim=np.r_[np.zeros(ntfilt-1),Stim] # pad early bins of stimulus with zero
Xstim=hankel(paddedStim[:-ntfilt+1],Stim[-ntfilt:])
# Build spike-history design matrix
paddedSps=np.r_[np.zeros(nthist),sps[:-1,cellnum]]
# SUPER important: note that this doesn't include the spike count for the
# bin we're predicting! The spike train is shifted by one bin (back in
# time) relative to the stimulus design matrix
Xsp = hankel(paddedSps[:-nthist+1], paddedSps[-nthist:])
# Combine these into a single design matrix
Xdsgn=np.c_[Xstim,Xsp] # stimulus columns followed by the post-spike history columns (Xsp)
# Let's visualize the design matrix just to see what it looks like
# +
fig,axes=plt.subplots(ncols=2,sharey=True,figsize=(8,8))
axes[0].imshow(Xdsgn[:50,:],extent=[1,ntfilt+nthist,1,50],aspect=1)
axes[0].set_xlabel('regressor')
axes[0].set_ylabel('time bin of response')
axes[0].set_title('design matrix (including stim and spike history)')
axes[1].imshow(sps[:50,cellnum].reshape(-1,1),aspect=0.1)
axes[1].set_title('spike count')
fig.tight_layout()
# -
# The left part of the design matrix has the stimulus values, the right part has the spike-history values. The image on the right is the spike count to be predicted. Note that the spike-history portion of the design matrix had better be shifted so that we aren't allowed to use the spike count on this time bin to predict itself!
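# A toy example of how `hankel` turns a zero-padded 1D stimulus into the lagged design matrix built above (the stimulus values here are made up; row t holds the stimulus at times t-2..t):

```python
import numpy as np
from scipy.linalg import hankel

stim = np.arange(1, 7)                     # toy stimulus: 1..6
nlags = 3
padded = np.r_[np.zeros(nlags - 1), stim]  # pad early bins with zeros
X = hankel(padded[:-nlags + 1], padded[-nlags:])
print(X)  # each row t: [stim[t-2], stim[t-1], stim[t]], zero-padded at the start
```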
# # Fit single-neuron GLM with spike-history
# First fit GLM with no spike-history
#
# <img src='pics/f2-1.png'>
import statsmodels.api as sm
# + tags=[]
print('Now fitting basic Poisson GLM...\n')
glmModel = sm.GLM(sps[:, cellnum], sm.add_constant(Xstim),
                  family=sm.families.Poisson(link=sm.families.links.Log())).fit()
pGLMwts0=glmModel.params
pGLMconst0=pGLMwts0[0]
pGLMfilt0=pGLMwts0[1:]
# -
# Then fit GLM with spike history (now use Xdsgn design matrix instead of Xstim)
#
# <img src='pics/f2-2.png'>
# + tags=[]
print('Now fitting Poisson GLM with spike-history...\n')
glmModel = sm.GLM(sps[:, cellnum], sm.add_constant(Xdsgn),
                  family=sm.families.Poisson(link=sm.families.links.Log())).fit()
pGLMwts1=glmModel.params
pGLMconst1=pGLMwts1[0]
pGLMfilt1=pGLMwts1[1:1+ntfilt] # stimulus weights
pGLMhistfilt1=pGLMwts1[1+ntfilt:] # spike weights
# -
# Make plots comparing filters
ttk = np.arange(-ntfilt+1,1)*dtStim # time bins for stim filter
tth = np.arange(-nthist,0)*dtStim # time bins for spike-history filter
# +
fig,axes=plt.subplots(nrows=2,figsize=(10,10))
axes[0].plot(ttk,ttk*0,'k--') # Plot stim filters
axes[0].plot(ttk,pGLMfilt0,'o-',label='GLM')
axes[0].plot(ttk,pGLMfilt1,'o-',label='sphist-GLM')
axes[0].legend()
axes[0].set_title('stimulus filters')
axes[0].set_ylabel('weight')
axes[0].set_xlabel('time before spike (s)')
axes[1].plot(tth,tth*0,'k--') # Plot spike history filter
axes[1].plot(tth,pGLMhistfilt1,'o-')
axes[1].set_title('spike history filter')
axes[1].set_ylabel('weight')
axes[1].set_xlabel('time before spike (s)')
# -
# # Plot predicted rate out of the two models
# Compute predicted spike rate on training data
ratepred0=np.exp(pGLMconst0+Xstim@pGLMfilt0) # take stim weights to predict spike count
ratepred1=np.exp(pGLMconst1+Xdsgn@pGLMwts1[1:]) # stim + spike-history weights predict spike count
# Make plot
# +
iiplot=np.arange(60)
ttplot=iiplot*dtStim
fig,axes=plt.subplots()
axes.stem(ttplot,sps[iiplot,cellnum],'k',label='spikes')
axes.plot(ttplot,ratepred0[iiplot],color='orange',label='GLM')
axes.plot(ttplot,ratepred1[iiplot],color='red',label='hist-GLM')
axes.legend()
axes.set_xlabel('time (s)')
axes.set_ylabel('spike count / bin')
axes.set_title('spikes and rate predictions')
# -
# # Fit coupled GLM for multiple-neuron responses
# <img src='pics/f2-3.png'>
#
# <img src='pics/f2-4.png'>
# First step: build design matrix containing spike history for all neurons
Xspall=np.zeros([nT,nthist,ncells]) # allocate space
# Loop over neurons to build design matrix, exactly as above
for jj in range(ncells):
paddedSps=np.r_[np.zeros(nthist),sps[:-1,jj]]
Xspall[:,:,jj]=hankel(paddedSps[:-nthist+1],paddedSps[-nthist:])
# Reshape it to be a single matrix
Xspall=Xspall.reshape(Xspall.shape[0],np.prod(Xspall.shape[1:]))
Xdsgn2=np.c_[Xstim,Xspall] # full design matrix (with all 4 neuron spike hist)
# Let's visualize 50 time bins of full design matrix
fig,axes=plt.subplots()
im=axes.imshow(Xdsgn2[:50,:],extent=[1,ntfilt+nthist*ncells,49,0],aspect=1.5)
axes.set_title('design matrix (stim and 4 neurons spike history)')
axes.set_xlabel('regressor')
axes.set_ylabel('time bin of response')
fig.colorbar(im)
# ## Fit the model (stim filter, sphist filter, coupling filters) for one neuron
# + tags=[]
print('Now fitting Poisson GLM with spike-history and coupling...\n')
glmModel = sm.GLM(sps[:, cellnum], sm.add_constant(Xdsgn2),
                  family=sm.families.Poisson(link=sm.families.links.Log())).fit()
pGLMwts2=glmModel.params
pGLMconst2=pGLMwts2[0]
pGLMfilt2=pGLMwts2[1:1+ntfilt]
pGLMhistfilt2=pGLMwts2[1+ntfilt:]
pGLMhistfilt2=pGLMhistfilt2.reshape(nthist,ncells) # coupled
# -
# So far all we've done is fit incoming stimulus and coupling filters for
# one neuron. To fit a full population model, redo the above for each cell
# (i.e., to get incoming filters for 'cellnum' = 1, 2, 3, and 4 in turn).
# ## Plot the fitted filters and rate prediction
# +
fig,axes=plt.subplots(nrows=2,figsize=(10,10))
axes[0].plot(ttk,ttk*0,'k--') # Plot stim filters
axes[0].plot(ttk,pGLMfilt0,'o-',label='GLM')
axes[0].plot(ttk,pGLMfilt1,'o-',label='sphist-GLM')
axes[0].plot(ttk,pGLMfilt2,'o-',label='coupled-GLM')
axes[0].legend()
axes[0].set_title('stimulus filter: cell %d'%cellnum)
axes[0].set_ylabel('weight')
axes[0].set_xlabel('time before spike (s)')
axes[1].plot(tth,tth*0,'k--') # Plot spike history filter
axes[1].plot(tth,pGLMhistfilt2)
axes[1].legend(['baseline','from 1','from 2','from 3','from 4'])
axes[1].set_title('coupling filters: into cell %d'%cellnum)
axes[1].set_ylabel('weight')
axes[1].set_xlabel('time before spike (s)')
# -
# Compute predicted spike rate on training data
ratepred2 = np.exp(pGLMconst2 + Xdsgn2@pGLMwts2[1:])
# Make plot
# +
iiplot = np.arange(60)
ttplot = iiplot*dtStim
fig,axes=plt.subplots()
axes.plot(ttplot,ratepred0[iiplot],color='orange')
axes.plot(ttplot,ratepred1[iiplot],color='red')
axes.plot(ttplot,ratepred2[iiplot],color='green')
axes.stem(ttplot,sps[iiplot,cellnum],'k')
axes.legend(['GLM','sphist-GLM','coupled-GLM','spikes'])
axes.set_xlabel('time (s)')
axes.set_ylabel('spike count / bin')
axes.set_title('spikes and rate predictions')
# -
# # Model comparison: log-likelihood and AIC
# Let's compute the log-likelihood (and the single-spike information derived from it) and the AIC to see how much we gain by adding each of these filter types in turn:
# [Poisson regression (Wikipedia)](https://en.wikipedia.org/wiki/Poisson_regression)
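# For a Poisson GLM with per-bin rate $\lambda_t$ and observed spike counts $y_t$, the log-likelihood (dropping the $\log(y_t!)$ term, which does not depend on the model) is
#
# $$\log L = \sum_t y_t \log \lambda_t - \sum_t \lambda_t$$
#
# and the single-spike information normalizes the gain over a homogeneous Poisson model by the spike count: $(\log L - \log L_0)/(n_{sp}\log 2)$ bits/spike.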
LL_stimGLM = sps[:,cellnum].T@np.log(ratepred0) - np.sum(ratepred0) # Poisson log-likelihood, constant term dropped
LL_histGLM = sps[:,cellnum].T@np.log(ratepred1) - np.sum(ratepred1)
LL_coupledGLM = sps[:,cellnum].T@np.log(ratepred2) - np.sum(ratepred2)
# log-likelihood for homogeneous Poisson model
nsp = np.sum(sps[:,cellnum])
ratepred_const = nsp/nT # mean number of spikes / bin
LL0 = nsp*np.log(ratepred_const) - nT*ratepred_const
# Report single-spike information (bits / sp)
SSinfo_stimGLM = (LL_stimGLM - LL0)/nsp/np.log(2)
SSinfo_histGLM = (LL_histGLM - LL0)/nsp/np.log(2)
SSinfo_coupledGLM = (LL_coupledGLM - LL0)/nsp/np.log(2)
# + tags=[]
print('\n empirical single-spike information:\n ---------------------- ')
print('stim-GLM: %.2f bits/sp'%SSinfo_stimGLM)
print('hist-GLM: %.2f bits/sp'%SSinfo_histGLM)
print('coupled-GLM: %.2f bits/sp'%SSinfo_coupledGLM)
# -
# Compute [AIC](https://en.wikipedia.org/wiki/Akaike_information_criterion)
#
# Let $k$ be the number of estimated parameters in the model. Let $\hat{L}$ be the maximum value of the likelihood function for the model. Then the AIC value of the model is the following.
#
# $$\mathrm {AIC} =2k-2\ln({\hat {L}})$$
#
# Given a set of candidate models for the data, the preferred model is the one with the minimum AIC value.
AIC0 = -2*LL_stimGLM + 2*(1+ntfilt)
AIC1 = -2*LL_histGLM + 2*(1+ntfilt+nthist)
AIC2 = -2*LL_coupledGLM + 2*(1+ntfilt+ncells*nthist)
AICmin = min(AIC0,AIC1,AIC2) # the minimum of these
# + tags=[]
print('\n AIC comparison (smaller is better):\n ---------------------- \n')
print('stim-GLM: %.1f'%(AIC0-AICmin))
print('hist-GLM: %.1f'%(AIC1-AICmin))
print('coupled-GLM: %.1f'%(AIC2-AICmin))
# -
# These are whopping differences! Clearly coupling has a big impact in
# terms of log-likelihood, though the jump from the stimulus-only model to
# own-spike-history is greater than the jump from spike-history to
# full coupling.
# ## Advanced exercises
#
# 1. Write code to simulate spike trains from the fitted spike-history GLM.
# Simulate a raster of repeated responses from the stim-only GLM and
# compare it to a raster from the spike-history GLM.
#
# 2. Write code to simulate the 4-neuron population-coupled GLM. There are
# now 16 spike-coupling filters (including self-coupling), since each
# neuron has 4 incoming coupling filters (its own spike-history filter
# plus coupling filters from the three other neurons). How does a raster of
# responses from this model compare to the two single-neuron models?
#
# 3. Compute a non-parametric estimate of the spiking nonlinearity for each
# neuron. How close does it look to exponential now that we have added
# spike history? Rerun your simulations using a different non-parametric
# nonlinearity for each neuron. How much improvement do you see in terms of
# log-likelihood, AIC, or PSTH variance accounted for (R^2) when you
# simulate repeated responses?
#
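# A starting point for exercise 1 (a standalone sketch: the filter shapes,
# sizes and weights below are made-up toy values, not the fitted pGLM*
# weights from this notebook):

```python
import numpy as np

rng = np.random.default_rng(1)
nT_sim, ntfilt_sim, nthist_sim = 500, 5, 4
stim_sim = rng.normal(size=nT_sim)
kfilt_sim = np.array([0.2, 0.5, 0.3, 0.1, 0.0])   # toy stimulus filter (newest bin first)
hfilt_sim = np.array([-2.0, -1.0, -0.3, -0.1])    # toy suppressive history filter
const_sim = -1.0

# Simulate bin by bin: the conditional intensity at bin t depends on spikes
# the model itself just generated, so the loop cannot be vectorized over t.
sps_sim = np.zeros(nT_sim)
for t in range(nT_sim):
    xs = stim_sim[max(0, t - ntfilt_sim + 1):t + 1][::-1]  # recent stimulus, newest first
    hs = sps_sim[max(0, t - nthist_sim):t][::-1]           # recent spikes, newest first
    rate = np.exp(const_sim + xs @ kfilt_sim[:len(xs)] + hs @ hfilt_sim[:len(hs)])
    sps_sim[t] = rng.poisson(rate)
print(int(sps_sim.sum()))   # total simulated spike count
```

# Repeating this with the history filter zeroed out gives the stim-only
# raster for comparison.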
| mypython/t2_spikehistcoupledGLM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas
myData = pandas.read_csv("./test.csv")
myData
df1 = pandas.read_csv('./test.csv')
df1.set_index("policyID")  # returns a new frame; df1 itself is unchanged without assignment
df1
df1.set_index("eq_site_limit")
df1 = df1.set_index("eq_site_limit")
df1
df1 = pandas.read_csv("./test.csv")
df1
df1.shape  # shape is an attribute, not a method; df1.shape() raises TypeError
# +
# pandas.read_csv?
# -
df6 = pandas.read_csv('http://pythonhow.com/supermarkets.csv')
df6
df6.shape
df6
df6.loc["332 Hill St": "1056 Sanchez St", "ID":"Name"]  # label-based slicing; works only if the index holds these labels (e.g. after df6.set_index("Address"))
df9 = pandas.read_csv("./test.csv")
df9
df6
df10 = pandas.read_csv('http://pythonhow.com/supermarkets.csv')
df10
df10.loc[:, "Country"]
list(df10.loc[:, "Country"])
numbers = [1,2,3,3,2,2]
list(numbers)
numbers
df9
df9.iloc[1:3, 1:3]
df9.iloc[22:33, 2:5]
df9.iloc[33, 2:5]
df9.iloc[2:3, 1:2]
df9.iloc[2, 1]  # .ix was removed from pandas; use .iloc for positional indexing
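# `.ix` mixed label- and position-based lookup and was removed from pandas;
# the unambiguous replacements are `.loc` (labels) and `.iloc` (positions).
# A minimal illustration on a throwaway frame:

```python
import pandas as pd

demo_df = pd.DataFrame({"a": [10, 20, 30], "b": [40, 50, 60]},
                       index=["x", "y", "z"])
print(demo_df.loc["y", "b"])   # label-based lookup -> 50
print(demo_df.iloc[1, 1])      # position-based lookup, same cell -> 50
```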
| JupyterNotebook/Testing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:.conda-gisenv] *
# language: python
# name: conda-env-.conda-gisenv-py
# ---
import pandas as pd
from tkinter import *
from tkinter import filedialog
from tkinter.font import Font
import os
#import for writing Imed Report in Excel Sheet
import openpyxl as opxl
from random import random
# +
def makeDframe(packagelist):
    df=pd.DataFrame()
    df['Package']=packagelist
    for col in range(1,14):   # structure-code columns 1..13, initialized to zero
        df[col]=0
    return df
def writeDataframe(wbpath,df,sheetname):
    wb=opxl.load_workbook(wbpath)
    wsheet=wb[sheetname]
    offset=2
    for index,row in df.iterrows():
        myrow=index+offset
        for col in range(1,14):   # df columns 1..13 -> sheet columns 2..14
            wsheet.cell(row=myrow,column=col+1).value=row[col]
    wb.save(wbpath)
def aggregateQuantity(packagelist,detail_df):
    aggregate_df=makeDframe(packagelist)
    for index,row in detail_df.iterrows():
        i=packagelist.index(row['Package'])
        stname=int(row['Structure Code'])
        aggregate_df.iloc[i,stname]=float(aggregate_df.iloc[i,stname])+float(row['Qnty'])
    return aggregate_df
def aggregateCost(packagelist,detail_df):
    aggregate_df=makeDframe(packagelist)
    for index,row in detail_df.iterrows():
        i=packagelist.index(row['Package'])
        stname=int(row['Structure Code'])
        aggregate_df.iloc[i,stname]=float(aggregate_df.iloc[i,stname])+float(row['Cost'])
    return aggregate_df
def writeDataframe3(wbpath,df,sheetname,dfcols,xlcols,offset):
wb=opxl.load_workbook(wbpath)
wsheet=wb[sheetname]
for index,row in df.iterrows():
myrow=index+offset
for idf,ixl in zip(dfcols,xlcols):
wsheet.cell(row=myrow,column=ixl).value=row[idf]
wb.save(wbpath)
def writeDataframe2(wbpath,df,sheetname):
    wb=opxl.load_workbook(wbpath)
    wsheet=wb[sheetname]
    offset=2
    for index,row in df.iterrows():
        myrow=index+offset
        for col in range(0,6):   # df columns 0..5 -> sheet columns 1..6
            wsheet.cell(row=myrow,column=col+1).value=row[col]
    wb.save(wbpath)
def aggregateWorksHaorWise(input_df,output_df,haor_list):
for index,row in input_df.iterrows():
#print(row)
code=row[10]
stcode=int(row[8])
ri=haor_list.index(code)
ci=stcode+1
output_df.iloc[ri,ci]=float(output_df.iloc[ri,ci])+float(row[3])
return output_df
def aggregateCostHaorWise(input_df,output_df,haor_list):
for index,row in input_df.iterrows():
#print(row)
code=row[10]
stcode=int(row[8])
ri=haor_list.index(code)
ci=stcode+1
output_df.iloc[ri,ci]=float(output_df.iloc[ri,ci])+float(row[4])
return output_df
def aggregateWorksPackageWise(input_df,output_df,haor_list):
for index,row in input_df.iterrows():
#print(row)
code=row[6]
stcode=int(row[8])
ri=haor_list.index(code)
ci=stcode+1
output_df.iloc[ri,ci]=float(output_df.iloc[ri,ci])+float(row[3])
return output_df
def aggregateCostPackageWise(input_df,output_df,haor_list):
for index,row in input_df.iterrows():
#print(row)
code=row[6]
stcode=int(row[8])
ri=haor_list.index(code)
ci=stcode+1
output_df.iloc[ri,ci]=float(output_df.iloc[ri,ci])+float(row[4])
return output_df
def writeDataframe5(wbpath,df,sheetname,dfcols,xlcols,rowindex):
    # NOTE: dfcols, xlcols and rowindex are accepted but never used;
    # as written this duplicates writeDataframe2 (columns 0-5, offset 2).
    wb=opxl.load_workbook(wbpath)
    wsheet=wb[sheetname]
    offset=2
    for index,row in df.iterrows():
        myrow=index+offset
        for col in range(0,6):
            wsheet.cell(row=myrow,column=col+1).value=row[col]
    wb.save(wbpath)
def initializeCostDFToZero(output_df):
for index,row in output_df.iterrows():
for i in range (2,14):
output_df.iloc[index,i]=0
return output_df
def aggrgateCostByStructureCode(input_df,output_df,stcodes):
for index,row in input_df.iterrows():
#print(row)
code=int(row[8])
#stcode=int(row[8])
#ri=haor_list.index(code)
#ci=stcode+1
#cost
if code-1==16:
output_df.iloc[code-1,5]=float(output_df.iloc[code-1,5])+float(row[4])
output_df.iloc[code-1,8]=float(output_df.iloc[code-1,8])+float(row[4])
else:
output_df.iloc[code-1,5]=float(output_df.iloc[code-1,5])+float(row[4])*0.1400
output_df.iloc[code-1,6]=float(output_df.iloc[code-1,6])+float(row[4])*0.8600
output_df.iloc[code-1,8]=float(output_df.iloc[code-1,8])+float(row[4])
#quantity
output_df.iloc[code-1,4]=float(output_df.iloc[code-1,4])+float(row[3])
return output_df
def transferStructureCostToDPPCost(input_df,output_df):
n=output_df.shape[0]
myvalue=[0 for i in range(0,n)]
output_df.iloc[:,4]=myvalue
output_df.iloc[:,5]=myvalue
output_df.iloc[:,6]=myvalue
output_df.iloc[:,7]=myvalue
output_df.iloc[:,8]=myvalue
output_df.iloc[:,9]=myvalue
for index,row in input_df.iterrows():
r1=row[2]-1
#print("row={}".format(r1))
#print(row[4])
output_df.iloc[r1,4]=output_df.iloc[r1,4]+row[4]
output_df.iloc[r1,5]=output_df.iloc[r1,5]+row[5]
output_df.iloc[r1,6]=output_df.iloc[r1,6]+row[6]
output_df.iloc[:,8]=output_df.iloc[:,5]+output_df.iloc[:,6]+output_df.iloc[:,7]
return output_df
def transferStructureCostToDPPCost2(input_df,output_df):
#n=civil_works_cost_df.shape[0]
#myvalue=[0 for i in range(0,n)]
#output_df.iloc[:,4]=myvalue
#output_df.iloc[:,5]=myvalue
#output_df.iloc[:,6]=myvalue
for index,row in input_df.iterrows():
r1=row[2]-1
#print("row={}".format(r1))
#print(row[4])
output_df.iloc[r1,4]=output_df.iloc[r1,4]+row[4]
output_df.iloc[r1,5]=output_df.iloc[r1,5]+row[5]
output_df.iloc[r1,6]=output_df.iloc[r1,6]+row[6]
return output_df
def makeDFforSec8(first_data,rvised_data,quantity):
mydf=pd.DataFrame()
ecode=list(rvised_data['Code'])
description=list(rvised_data['Description'])
cmpindex=list(rvised_data['rindex'])
annexindex=list(rvised_data['annex_index'])
itemunits=list(quantity['U1'])
mydf=mydf.assign(Code=ecode,Description=description,rindex=cmpindex,annex_index=annexindex,units=itemunits)
q=list(quantity['Q1'])
gob=list(first_data['GOB'])
rpa=list(first_data['RPA'])
dpa=list(first_data['DPA'])
mydf['QUANTITY_1stRevised']=q
mydf['GOB_1stRevised']=gob
mydf['RPA_1stRevised']=rpa
mydf['DPA_1stRevised']=dpa
mydf['TOTAL_1stRevised']=mydf['GOB_1stRevised']+mydf['RPA_1stRevised']+ mydf['DPA_1stRevised']
q=list(quantity['Q2'])
gob=list(rvised_data['GOB'])
rpa=list(rvised_data['RPA'])
dpa=list(rvised_data['DPA'])
mydf['QUANTITY_2ndRevised']=q
mydf['GOB_2ndRevised']=gob
mydf['RPA_2ndRevised']=rpa
mydf['DPA_2ndRevised']=dpa
mydf['TOTAL_2ndRevised']=mydf['GOB_2ndRevised']+mydf['RPA_2ndRevised']+ mydf['DPA_2ndRevised']
mydf['Diff_Q']=0
mydf['Diff_GOB']=mydf['GOB_2ndRevised']-mydf['GOB_1stRevised']
mydf['Diff_RPA']=mydf['RPA_2ndRevised']-mydf['RPA_1stRevised']
mydf['Diff_DPA']=mydf['DPA_2ndRevised']-mydf['DPA_1stRevised']
mydf['Diff_TOTAL']=mydf['TOTAL_2ndRevised']-mydf['TOTAL_1stRevised']
"""
mydf=mydf.assign(QUANTITY_2ndRevised=q,GOB_2ndRevised=gob,RPA_2ndRevised=rpa,DPA_2ndRevised=dpa,TOTAL_2ndRevised=0)
mydf['TOTAL_2ndRevised']=mydf['GOB_2ndRevised']+mydf['RPA_2ndRevised']+mydf['DPA_2ndRevised']
mydf=mydf.assign(Diff_Q=0,Diff_GOB=0,Diff_RPA=0,Diff_DPA=0,Diff_TOTAL=0)
mydf['Diff_GOB']=mydf['GOB_2ndRevised']-mydf['GOB_1stRevised']
mydf['Diff_RPA']=mydf['RPA_2ndRevised']-mydf['RPA_1stRevised']
mydf['Diff_DPA']=mydf['DPA_2ndRevised']-mydf['DPA_1stRevised']
mydf['Diff_TOTAL']=mydf['TOTAL_2ndRevised']-mydf['TOTAL_1stRevised']
"""
return mydf
def aggregateWorksDistricWise(input_df,output_df,district_list):
for index,row in input_df.iterrows():
#print(row)
code=row[11]
stcode=int(row[8])
ri=district_list.index(code)
ci=stcode+1
output_df.iloc[ri,ci]=float(output_df.iloc[ri,ci])+float(row[3])
return output_df
def aggregateCostDistrictWise(input_df,output_df,district_list):
for index,row in input_df.iterrows():
#print(row)
code=row[11]
stcode=int(row[8])
ri=district_list.index(code)
ci=stcode+1
output_df.iloc[ri,ci]=float(output_df.iloc[ri,ci])+float(row[4])
output_df.iloc[:,19]=output_df.iloc[:,2]+output_df.iloc[:,3]+output_df.iloc[:,4]+\
output_df.iloc[:,5]+output_df.iloc[:,6]+output_df.iloc[:,8]+output_df.iloc[:,9]+\
output_df.iloc[:,10]+output_df.iloc[:,11]+output_df.iloc[:,12]+output_df.iloc[:,13]+\
output_df.iloc[:,14]+output_df.iloc[:,15]+output_df.iloc[:,16]+output_df.iloc[:,17]+\
output_df.iloc[:,18]
return output_df
def applyRPACorrections(sec8_total_df):
rpa1st=sum(list(sec8_total_df['RPA_1st']))
rpa2nd=sum(list(sec8_total_df['RPA_2nd']))
corrections=rpa2nd-rpa1st
sec8_total_df.iloc[63,9]=sec8_total_df.iloc[63,9]+corrections
sec8_total_df.iloc[63,10]=sec8_total_df.iloc[63,10]-corrections
return sec8_total_df
def addQuantity(sec8_total_df,quantity_df):
sec8_total_df['Quantity_1st']=quantity_df['Q1']
sec8_total_df['Quantity_2nd']=quantity_df['Q2']
sec8_total_df['Quantity_diff']=quantity_df['Q3']
sec8_total_df['units']=quantity_df['U1']
return sec8_total_df
def calculateDifference(mydf):
#GOB_Diff RPA_diff DPA_diff TOTAL_diff Tindex
mydf['GOB_Diff']=mydf['GOB_2nd']-mydf['GOB_1st']
mydf['RPA_diff']=mydf['RPA_2nd']-mydf['RPA_1st']
mydf['DPA_diff']=mydf['DPA_2nd']-mydf['DPA_1st']
mydf['TOTAL_diff']=mydf['GOB_Diff']+mydf['RPA_diff']+mydf['DPA_diff']
return mydf
def addRevisedCost(input_df,output_df):
gob=list(input_df['GOB'])
rpa=list(input_df['RPA'])
dpa=list(input_df['DPA'])
total=list(input_df['TOTAL'])
output_df['GOB_2nd']= gob
output_df['RPA_2nd']=rpa
output_df['DPA_2nd']=dpa
output_df['TOTAL_2nd']=total
#GOB_2nd RPA_2nd DPA_2nd TOTAL_2nd
return output_df
def calcualteUnitCost(input_df):
nounitcost=[69,68,67,63,48,24]
civilworks=[54,55,56,57,58,59,60,61,62,63,64,65,66]
equipments=[41,42,43,44,45,47,49,50,51,52]
for index,row in input_df.iterrows():
if index in nounitcost:
input_df.iloc[index,21]=0
elif index in civilworks:
input_df.iloc[index,21]=round(input_df.iloc[index,12]/input_df.iloc[index,8],2)
elif index in equipments:
input_df.iloc[index,21]=round(input_df.iloc[index,12]/input_df.iloc[index,8],2)
else:
input_df.iloc[index,21]=input_df.iloc[index,12]
return input_df
def calculateQuantityDifference(input_df):
    qdiff=[41,42,43,44,45,46,47,49,50,51,52,54,55,56,59,60,61,62,63,64,65,66]
    for index,row in input_df.iterrows():
        if index in qdiff:
            input_df.iloc[index,13]=input_df.iloc[index,8]-input_df.iloc[index,2]
        else:
            input_df.iloc[index,13]=0
    return input_df
def calculateUnitCostForCivilworks(civil_works_cost_df):
for index,row in civil_works_cost_df.iterrows():
if row[3]!="LS":
civil_works_cost_df.iloc[index,9]=civil_works_cost_df.iloc[index,8]/civil_works_cost_df.iloc[index,4]
else:
civil_works_cost_df.iloc[index,4]=1
civil_works_cost_df.iloc[index,9]= civil_works_cost_df.iloc[index,8]
return civil_works_cost_df
def calccualteCostDistribution(cost_distribution_df,sec_8_df):
codelist=list(cost_distribution_df['costcode'])
n=cost_distribution_df.shape[0]
myvalue=[0 for i in range(0,n)]
    cost_distribution_df.iloc[:,3]=myvalue
    cost_distribution_df.iloc[:,4]=myvalue
for index,row in sec_8_df.iterrows():
code=row[23]
myrow=codelist.index(code)
cost_distribution_df.iloc[myrow,3]=cost_distribution_df.iloc[myrow,3]+row[6]
cost_distribution_df.iloc[myrow,4]=cost_distribution_df.iloc[myrow,4]+row[12]
cost_distribution_df.iloc[:,5]= cost_distribution_df.iloc[:,4]-cost_distribution_df.iloc[:,3]
return cost_distribution_df
def calculateWeight(input_df):
value=sum(list(input_df["TotalCost"]))
input_df['Weight']=input_df["TotalCost"]/value
mylist=list(input_df['Weight'])
mylist=[round(x,5) for x in mylist]
correction=sum(mylist)-1.00000
input_df['Weight']=mylist
input_df.iloc[63,6]=input_df.iloc[63,6]-correction
return input_df
# +
"""Functions Related to Transferring Data to Excel Sheet"""
def writeDataframe6(wbpath,df,sheetname,dfcols,xlcols,indexcol):
wb=opxl.load_workbook(wbpath)
wsheet=wb[sheetname]
for index,row in df.iterrows():
myrow=int(row[indexcol])
for idf,ixl in zip(dfcols,xlcols):
wsheet.cell(row=myrow,column=ixl).value=row[idf]
wb.save(wbpath)
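# A self-contained sketch of the writeDataframe6 pattern above, using an
# in-memory workbook (the frame, its 'rowidx' index column, and the
# column mapping are hypothetical illustration values):

```python
import pandas as pd
import openpyxl as opxl

demo_df = pd.DataFrame({"rowidx": [2, 3], "val": [1.5, 2.5]})
wb = opxl.Workbook()   # created in memory; nothing is written to disk
ws = wb.active
# Like writeDataframe6: the frame's own index column picks the sheet row,
# and each mapped df column lands in a fixed sheet column.
for _, row in demo_df.iterrows():
    ws.cell(row=int(row["rowidx"]), column=2).value = row["val"]
print(ws.cell(row=2, column=2).value)
```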
def saveDataFrame(dataframes,filepath,names):
    writer = pd.ExcelWriter(filepath, engine='xlsxwriter')
    for df,sname in zip(dataframes,names):
        df.to_excel(writer,sheet_name=sname)
    writer.close()  # close() finalizes and writes the file; ExcelWriter.save() no longer exists in pandas 2.x
def writeDataframe7(wbpath,df,sheetname,dfcols,xlcols,indexcol):
wb=opxl.load_workbook(wbpath)
wsheet=wb[sheetname]
for index,row in df.iterrows():
myrow=int(row[indexcol])
for idf,ixl in zip(dfcols,xlcols):
if row[idf]==0:
wsheet.cell(row=myrow,column=ixl).value=""
elif row[idf]=="":
wsheet.cell(row=myrow,column=ixl).value=""
else:
wsheet.cell(row=myrow,column=ixl).value=row[idf]
wb.save(wbpath)
def writeDataframe8(wbpath,df,sheetname,dfcols,xlcols,rowlist):
wb=opxl.load_workbook(wbpath)
wsheet=wb[sheetname]
for index,row in df.iterrows():
myrow=rowlist[index]
print(myrow)
for idf,ixl in zip(dfcols,xlcols):
if row[idf]==0:
wsheet.cell(row=myrow,column=ixl).value=""
elif row[idf]=="":
wsheet.cell(row=myrow,column=ixl).value=""
else:
wsheet.cell(row=myrow,column=ixl).value=row[idf]
wb.save(wbpath)
def calcualteEconomicCost(fcost_df):
    # Convert financial cost to economic cost, year by year
    for yr in ["2014-15","2015-16","2016-17","2017-18",
               "2018-19","2019-20","2020-21","2021-22"]:
        fcost_df[yr]=fcost_df[yr]*fcost_df["EC_FACTOR"]
    return fcost_df
def assembleFIRR_EIRR_DF(fcost_df,ecost_df,FIRR_EIRR_Input):
    years=["2014-15","2015-16","2016-17","2017-18",
           "2018-19","2019-20","2020-21","2021-22"]
    for i,yr in enumerate(years):   # one row per fiscal year
        FIRR_EIRR_Input.iloc[i,1]=fcost_df[yr].sum()
        FIRR_EIRR_Input.iloc[i,2]=ecost_df[yr].sum()
    return FIRR_EIRR_Input
# +
"This cell contains Annexure Data Processing Related Functions"
def assembeAnnex2DF(annex2_input_df):
trmc=[]
foreigntraining=[18]
deadcodes=[43,44,45,46,47,49,50,51,52]
for index,row in annex2_input_df.iterrows():
if index not in deadcodes:
tc=0
for i in range(7,13):
tc +=annex2_input_df.iloc[index,i]
remaining=annex2_input_df.iloc[index,5]-tc
if remaining <0:
annex2_input_df.iloc[index,13]=0
annex2_input_df.iloc[index,14]=0
annex2_input_df.iloc[index,12]=annex2_input_df.iloc[index,12]+remaining
else:
minval=0.55
maxval=0.63
value=random()
multiplier=minval+value*(maxval-minval)
annex2_input_df.iloc[index,13]=round(remaining*multiplier,2)
annex2_input_df.iloc[index,14]=round(remaining*(1-multiplier),2)
if index in foreigntraining:
annex2_input_df.iloc[index,13]=0
annex2_input_df.iloc[index,14]= remaining
else:
annex2_input_df.iloc[index,13]=0
annex2_input_df.iloc[index,14]=0
return annex2_input_df
def assembeAnnex2DF2(annex2_input_df):
annex2_input_df["Total_upto_19_20"]=annex2_input_df["2014-15"]+annex2_input_df["2015-16"]\
+annex2_input_df["2016-17"]+annex2_input_df["2017-18"]+annex2_input_df["2018-19"]\
+annex2_input_df["2019-20"]
annex2_input_df["Remaining"]= annex2_input_df["TotalCost"]-annex2_input_df["Total_upto_19_20"]
annex2_input_df["2020-21"]=annex2_input_df["Remaining"]*annex2_input_df["2020-21-P"]
annex2_input_df["2021-22"]=annex2_input_df["Remaining"]*annex2_input_df["2021-22-P"]
return annex2_input_df
def assembeAnnex9DF2(annex9_detail_df,annex2_input_df):
data_19_20=list(annex2_input_df['2019-20'])
data_20_21=list(annex2_input_df['2020-21'])
data_21_22=list(annex2_input_df['2021-22'])
annex9_detail_df['Total-19-20']=data_19_20
annex9_detail_df['Total-20-21']=data_20_21
annex9_detail_df['Total-21-22']=data_21_22
civilworks=[56,57,58,59,60,61,62,63,64,65,66,67]
ftrow=[18]
ltrow=[19,20,21]
dparow=[24]
gaterepair=[38]
for index,row in annex9_detail_df.iterrows():
if index in ftrow:
annex9_detail_df.iloc[index,8]=0
annex9_detail_df.iloc[index,9]=0
annex9_detail_df.iloc[index,10]=0
annex9_detail_df.iloc[index,12]=0
annex9_detail_df.iloc[index,13]=annex9_detail_df.iloc[index,11]
annex9_detail_df.iloc[index,14]=0
annex9_detail_df.iloc[index,16]=0
annex9_detail_df.iloc[index,17]=annex9_detail_df.iloc[index,15]
annex9_detail_df.iloc[index,18]=0
elif index in dparow:
annex9_detail_df.iloc[index,8]=0
annex9_detail_df.iloc[index,9]=0
annex9_detail_df.iloc[index,10]=annex9_detail_df.iloc[index,7]
annex9_detail_df.iloc[index,12]=0
annex9_detail_df.iloc[index,13]=0
annex9_detail_df.iloc[index,14]=annex9_detail_df.iloc[index,11]
annex9_detail_df.iloc[index,16]=0
annex9_detail_df.iloc[index,17]=0
annex9_detail_df.iloc[index,18]=annex9_detail_df.iloc[index,15]
elif index in ltrow:
annex9_detail_df.iloc[index,8]=annex9_detail_df.iloc[index,7]*0.1072
annex9_detail_df.iloc[index,9]=annex9_detail_df.iloc[index,7]*0.8928
annex9_detail_df.iloc[index,10]=0
annex9_detail_df.iloc[index,12]=annex9_detail_df.iloc[index,11]*0.1072
annex9_detail_df.iloc[index,13]=annex9_detail_df.iloc[index,11]*0.8928
annex9_detail_df.iloc[index,14]=0
annex9_detail_df.iloc[index,16]=annex9_detail_df.iloc[index,15]*0.1072
annex9_detail_df.iloc[index,17]=annex9_detail_df.iloc[index,15]*0.8928
annex9_detail_df.iloc[index,18]=0
elif index in gaterepair:
annex9_detail_df.iloc[index,8]=annex9_detail_df.iloc[index,7]*0.1472
annex9_detail_df.iloc[index,9]=annex9_detail_df.iloc[index,7]*0.8528
annex9_detail_df.iloc[index,10]=0
annex9_detail_df.iloc[index,12]=annex9_detail_df.iloc[index,11]*0.1472
annex9_detail_df.iloc[index,13]=annex9_detail_df.iloc[index,11]*0.8528
annex9_detail_df.iloc[index,14]=0
annex9_detail_df.iloc[index,16]=annex9_detail_df.iloc[index,15]*0.1472
annex9_detail_df.iloc[index,17]=annex9_detail_df.iloc[index,15]*0.8528
annex9_detail_df.iloc[index,18]=0
elif index in civilworks:
annex9_detail_df.iloc[index,8]=annex9_detail_df.iloc[index,7]*0.1400
annex9_detail_df.iloc[index,9]=annex9_detail_df.iloc[index,7]*0.8600
annex9_detail_df.iloc[index,10]=0
annex9_detail_df.iloc[index,12]=annex9_detail_df.iloc[index,11]*0.1400
annex9_detail_df.iloc[index,13]=annex9_detail_df.iloc[index,11]*0.8600
annex9_detail_df.iloc[index,14]=0
annex9_detail_df.iloc[index,16]=annex9_detail_df.iloc[index,15]*0.1400
annex9_detail_df.iloc[index,17]=annex9_detail_df.iloc[index,15]*0.8600
annex9_detail_df.iloc[index,18]=0
else:
annex9_detail_df.iloc[index,8]=annex9_detail_df.iloc[index,7]
annex9_detail_df.iloc[index,9]=0
annex9_detail_df.iloc[index,10]=0
annex9_detail_df.iloc[index,12]=annex9_detail_df.iloc[index,11]
annex9_detail_df.iloc[index,13]=0
annex9_detail_df.iloc[index,14]=0
annex9_detail_df.iloc[index,16]=annex9_detail_df.iloc[index,15]
annex9_detail_df.iloc[index,17]=0
annex9_detail_df.iloc[index,18]=0
return annex9_detail_df
def checkAnnex9DF(annex9_detail_df,annex2_input_df):
totalcost=list(annex2_input_df['TotalCost'])
annex9_detail_df['Total-DPP-Cost']=totalcost
annex9_detail_df['Total-Phased-Cost-1']=annex9_detail_df['Total-cum']+annex9_detail_df['Total-19-20']+annex9_detail_df['Total-20-21']+annex9_detail_df['Total-21-22']
#annex9_detail_df['Total-Phased-Cost-2']=annex9_detail_df['GoB-cum']+annex9_detail_df['GoB-19-20']+annex9_detail_df['GoB-20-21']+annex9_detail_df['GoB-21-22']\
#+annex9_detail_df['RPA-cum']+annex9_detail_df['RPA-19-20']+annex9_detail_df['RPA-20-21']+annex9_detail_df['RPA-21-22']\
#+annex9_detail_df['DPA-cum']+annex9_detail_df['DPA-19-20']+annex9_detail_df['DPA-20-21']+annex9_detail_df['DPA-21-22']
annex9_detail_df['Total-Phased-GoB']=annex9_detail_df['GoB-cum']+annex9_detail_df['GoB-19-20']+annex9_detail_df['GoB-20-21']+annex9_detail_df['GoB-21-22']
annex9_detail_df['Total-Phased-RPA']=annex9_detail_df['RPA-cum']+annex9_detail_df['RPA-19-20']+annex9_detail_df['RPA-20-21']+annex9_detail_df['RPA-21-22']
annex9_detail_df['Total-Phased-DPA']=annex9_detail_df['DPA-cum']+annex9_detail_df['DPA-19-20']+annex9_detail_df['DPA-20-21']+annex9_detail_df['DPA-21-22']
annex9_detail_df['Total-Phased-Cost-2']= annex9_detail_df['Total-Phased-GoB']+annex9_detail_df['Total-Phased-RPA']+annex9_detail_df['Total-Phased-DPA']
annex9_detail_df['Check1']=annex9_detail_df['Total-DPP-Cost']-annex9_detail_df['Total-Phased-Cost-1']
annex9_detail_df['Check2']=annex9_detail_df['Total-DPP-Cost']-annex9_detail_df['Total-Phased-Cost-2']
    annex9_detail_df=annex9_detail_df.round({'Check1':2})  # round() returns a copy
return annex9_detail_df
def findDifferenceForSec8(mydf):
#GOB_1st RPA_1st DPA_1st TOTAL_1st rindex Quantity_2nd GOB_2nd RPA_2nd DPA_2nd TOTAL_2nd Quantity_diff GOB_Diff RPA_diff DPA_diff TOTAL_diff
"""Difference"""
mydf['TOTAL_diff']=mydf['TOTAL_2nd']-mydf['TOTAL_1st']
mydf['GOB_diff']=mydf['GOB_2nd']-mydf['GOB_1st']
mydf['RPA_diff']=mydf['RPA_2nd']-mydf['RPA_1st']
mydf['DPA_diff']=mydf['DPA_2nd']-mydf['DPA_1st']
"""Total 1st Revised"""
mydf['TOTAL_1st']=mydf['GOB_1st']+mydf['RPA_1st']+mydf['DPA_1st']
return mydf
# -
def changeCumulativeDF(input_df):
for index,row in input_df.iterrows():
input_df.iloc[index,3]=input_df.iloc[index,3]+row['Total-19-20']
input_df.iloc[index,4]=input_df.iloc[index,4]+row['GoB-19-20']
input_df.iloc[index,5]=input_df.iloc[index,5]+row['RPA-19-20']
input_df.iloc[index,6]=input_df.iloc[index,6]+row['DPA-19-20']
return input_df
# +
"""Haor-wise and package-wise cost and quantity calculation. This part takes the
package-wise main data sheet and processes it: it calculates haor-wise and
package-wise cost and quantity, then transfers the data to the input sheet for
the economic-code-wise comparison in Section 8."""
"Initialization code"
haorwise_cost_df=None
haorwise_quantity_df=None
input_df=None
#mypath=r'F:\Office Work\DPP_Revision\Civil_works'
mypath=r'E:\Website_26_07_2020\cmis6\Civilworks cost\DPP_Revised'
myfolder=r'E:\Website_26_07_2020\cmis6\Civilworks cost\DPP_Revised'
#mypath=r'G:\Office Work\DPP_Revision\Civil_works'
#myfolder=r'G:\Office Work\IMED\DPP_Revised'
cost_input_file_path=os.path.join(myfolder,'Revised_package_cost3.xlsx')
data_input_path=os.path.join(myfolder,'First_revised2.xlsx')
#mypath3=os.path.join(myfolder,'First_revised.xlsx')
sheetName="Revised"
input_df=pd.read_excel(cost_input_file_path,sheet_name=sheetName,index_col=None,header=0)
input_df.fillna(0,inplace=True)
#summarizing data haorwise
sheetName="Haor_wise_proforma"
output_df=pd.read_excel(data_input_path,sheet_name=sheetName,index_col=None,header=0)
output_df.fillna(0,inplace=True)
haorwise_quantity_df=pd.read_excel(data_input_path,sheet_name=sheetName,index_col=None,header=0,nrows=30)
haorwise_quantity_df.fillna(0,inplace=True)
haorwise_cost_df=pd.read_excel(data_input_path,sheet_name=sheetName,index_col=None,header=0,nrows=30)
haorwise_cost_df.fillna(0,inplace=True)
sheetName="HaorCode"
haorCode_df=pd.read_excel(cost_input_file_path,sheet_name=sheetName,index_col=None,header=0)
haorCode_df.fillna(0,inplace=True)
haor_list=list(haorCode_df["Code"])
haor_list
haorwise_quantity_df=aggregateWorksHaorWise(input_df,haorwise_quantity_df,haor_list)
wbpath=data_input_path
sheetName="Haor_wise_quantity"
dfcols=[2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18]
xlcols=[3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]
writeDataframe6(wbpath,haorwise_quantity_df,sheetName,dfcols,xlcols,20)
sheetName="Haor_wise_cost"
#haorwise_cost_df=pd.read_excel(data_input_path,sheet_name=sheetName,index_col=None,header=0,nrows=30)
haorwise_cost_df=aggregateCostHaorWise(input_df,haorwise_cost_df,haor_list)
dfcols=[2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18]
xlcols=[3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]
writeDataframe6(wbpath,haorwise_cost_df,sheetName,dfcols,xlcols,20)
offset=2
#writeDataframe5(wbpath,output_df,sheetName,dfcols,xlcols,offset)
#output_cost_df=aggregateCostHaorWise(input_df,output_cost_df,haor_list)
#writeDataframe3(wbpath,output_df,sheetName,dfcols,xlcols,offset)
#sheetName="Haorwise_Work_Cost_List"
#writeDataframe3(wbpath,output_cost_df,sheetName,dfcols,xlcols,offset)
#output_cost_df=initializeCostDFToZero(output_cost_df)
#output_df=initializeCostDFToZero(output_df)
#summarizing data packagewise
#output_df=aggregateWorksHaorWise(input_df,output_df,haor_list)
sheetName="Package_wise_proforma"
packagewise_quantity_df=pd.read_excel(data_input_path,sheet_name=sheetName,index_col=None,header=0)
packagewise_quantity_df.fillna(0,inplace=True)
package_list=list(packagewise_quantity_df['Name'])
packagewise_quantity_df=aggregateWorksPackageWise(input_df,packagewise_quantity_df,package_list)
sheetName="Package_wise_quantity"
writeDataframe3(wbpath,packagewise_quantity_df,sheetName,dfcols,xlcols,offset)
sheetName="Package_wise_proforma"
packagewise_cost_df=pd.read_excel(data_input_path,sheet_name=sheetName,index_col=None,header=0)
packagewise_cost_df.fillna(0,inplace=True)
packagewise_cost_df=aggregateCostPackageWise(input_df,packagewise_cost_df,package_list)
sheetName="Package_wise_cost"
writeDataframe3(wbpath,packagewise_cost_df,sheetName,dfcols,xlcols,offset)
# -
input_df
# +
"""Calculating and Transferring Civil works cost to revised cost input file"""
sheetName="Transfer_Civilwork"
civil_works_cost_df=pd.read_excel(data_input_path,sheet_name=sheetName,index_col=None,header=0)
civil_works_cost_df.fillna(0,inplace=True)
#civil_works_cost_df2=civil_works_cost_df
sheetName="Structure_To_Dpp_Item"
structurewise_cost_df=pd.read_excel(data_input_path,sheet_name=sheetName,index_col=None,header=0)
structurewise_cost_df.fillna(0,inplace=True)
stcodes=list(structurewise_cost_df.iloc[:,1])
dppcodes=list(structurewise_cost_df.iloc[:,2])
structurewise_cost_df=aggrgateCostByStructureCode(input_df,structurewise_cost_df,stcodes)
civil_works_cost_df=transferStructureCostToDPPCost(structurewise_cost_df,civil_works_cost_df)
sheetName="revised_data"
wbpath=data_input_path
dfcols=[5,6,7,8]
xlcols=[3,4,5,6]
writeDataframe6(wbpath,civil_works_cost_df,sheetName,dfcols,xlcols,2)
sheetName="quantity"
dfcols=[4]
xlcols=[6]
writeDataframe6(wbpath,civil_works_cost_df,sheetName,dfcols,xlcols,2)
# -
structurewise_cost_df
civil_works_cost_df
# +
# -
"""Transfering Data to Annex Preparation Sheet """
wbpath=os.path.join(myfolder,"Annex_input.xlsx")
sheetName="Yearly_EXP"
#civil_works_cost_df['TOTAL']=civil_works_cost_df['GOB']+civil_works_cost_df['RPA']+civil_works_cost_df['DPA']
#civil_works_cost_df['Unitcost']=civil_works_cost_df['TOTAL']/civil_works_cost_df['Quantity']
civil_works_cost_df=calculateUnitCostForCivilworks(civil_works_cost_df)
dfcols=[9,8,4]
xlcols=[4,6,5]
writeDataframe6(wbpath,civil_works_cost_df,sheetName,dfcols,xlcols,2)
dfcols=[0,1,2,3,4,5,6,7,8,9]
xlcols=[1,2,3,4,5,6,7,8,9,10]
sheetName="Transfer_Civilwork"
rowlist=[2,3,4,5,6,7,8,9,10,11,12,13]
#writeDataframe8(data_input_path,civil_works_cost_df,sheetName,dfcols,xlcols,rowlist)
writeDataframe6(data_input_path,civil_works_cost_df,sheetName,dfcols,xlcols,10)
"""Creating District Wisre Summary"""
sheetName="District_Wise_Proforma"
districtwise_cost_df=pd.read_excel(data_input_path,sheet_name=sheetName,index_col=None,header=0)
districtwise_cost_df.fillna(0,inplace=True)
districtwise_quantity_df=pd.read_excel(data_input_path,sheet_name=sheetName,index_col=None,header=0)
districtwise_quantity_df.fillna(0,inplace=True)
#districtwise_quantity_df=
district_code=list(districtwise_cost_df["Sub-Project No"])
districtwise_quantity_df=aggregateWorksDistricWise(input_df,districtwise_quantity_df,district_code)
districtwise_cost_df=aggregateCostDistrictWise(input_df,districtwise_cost_df,district_code)
dfcols=[2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]
xlcols=[3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]
wbpath=data_input_path
sheetName="District_Wise_Quantity"
writeDataframe6(wbpath,districtwise_quantity_df,sheetName,dfcols,xlcols,20)
sheetName="District_Wise_Cost"
writeDataframe6(wbpath,districtwise_cost_df,sheetName,dfcols,xlcols,20)
districtwise_cost_df
"""Writing Section 8 and updating annexinput"""
sheetName="Sec8_proforma"
sec8_total_df=pd.read_excel(data_input_path,sheet_name=sheetName,index_col=None,header=0,nrows=70)
sec8_total_df.fillna(0,inplace=True)
sheetName="quantity"
myquantity_df=pd.read_excel(data_input_path,sheet_name=sheetName,index_col=None,header=0)
myquantity_df.fillna(0,inplace=True)
sec8_total_df_corrected=addQuantity(sec8_total_df,myquantity_df)
sheetName="revised_data"
revised_dpp_cost=pd.read_excel(data_input_path,sheet_name=sheetName,index_col=None,header=0)
revised_dpp_cost.fillna(0,inplace=True)
sec8_total_df_corrected=addRevisedCost(revised_dpp_cost,sec8_total_df_corrected)
sec8_total_df_corrected=applyRPACorrections(sec8_total_df_corrected)
sec8_total_df_corrected=calculateDifference(sec8_total_df_corrected)
sec8_total_df_corrected=calcualteUnitCost(sec8_total_df_corrected)
sec8_total_df_corrected=calculateQuantityDifference(sec8_total_df_corrected)
dfcols=[2,3,4,5,6,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,0,1]
xlcols=[3,4,5,6,7,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,1,2]
sheetName="Sec8_Data"
sec8_total_df_corrected['units']=0
sec8_total_df_corrected['units']=myquantity_df['U1']
section_8_path=os.path.join(myfolder,"revised2.xlsx")
writeDataframe6(wbpath,sec8_total_df_corrected,sheetName,dfcols,xlcols,18)
sec8_total_df_corrected
"""transfering data from Section8 to annexes"""
section_8_path=os.path.join(myfolder,"revised2.xlsx")
sheetName="Sec8_Data"
sec8_total_df_corrected=pd.read_excel(data_input_path,sheet_name=sheetName,index_col=None,header=0,nrows=70)
sec8_total_df_corrected.fillna(0,inplace=True)
sec8_total_df_corrected=findDifferenceForSec8(sec8_total_df_corrected)
dfcols=[6,3,4,5,12,9,10,11,17,14,15,16,0,1]
xlcols=[6,7,9,10,15,16,18,19,24,25,27,28,2,3]
#dfcols=[6,3,4,5]
#xlcols=[6,7,9,10]
sheetName="Revised_2nd"
writeDataframe6(section_8_path,sec8_total_df_corrected,sheetName,dfcols,xlcols,7)
dfcols=[1,2,8,13,22,22,22]
xlcols=[3,5,14,23,4,13,22]
sheetName="Revised_2nd"
writeDataframe7(section_8_path,sec8_total_df_corrected,sheetName,dfcols,xlcols,7)
sec8_total_df_corrected=None
sheetName="Sec8_Data"
sec8_total_df_corrected=pd.read_excel(data_input_path,sheet_name=sheetName,index_col=None,header=0,nrows=70)
dfcols=[21,8,12,0,1]
xlcols=[4,5,6,1,2]
sheetName="Yearly_EXP"
writeDataframe7(data_input_path,sec8_total_df_corrected,sheetName,dfcols,xlcols,19)
sheetName="Detail_9_input"
dfcols=[0,1]
xlcols=[1,2]
writeDataframe7(data_input_path,sec8_total_df_corrected,sheetName,dfcols,xlcols,19)
"""###############Writing Annex-II ##########################
###########################################"""
annex2_input_sheetname="Yearly_EXP"
annex2_output_sheetname="Annex-II"
annex_input_path=data_input_path
annex_output_path=section_8_path
annex2_input_df=pd.read_excel(annex_input_path,sheet_name=annex2_input_sheetname,
index_col=None,header=0,nrows=70)
annex2_input_df.fillna(0,inplace=True)
annex2_input_df=assembeAnnex2DF2(annex2_input_df)
annex2_input_df=calculateWeight(annex2_input_df)
dfcols=[3,4,5,6,7,8,9,10,11,12,13,14,0,1]
xlcols=[5,6,7,8,9,12,15,18,21,24,27,30,2,3]
writeDataframe6(annex_output_path,annex2_input_df,annex2_output_sheetname,dfcols,xlcols,2)
sheetName="Inves._Cost"
dfcols=[5,7,8,9,10,11,12,13,14,0,1]
xlcols=[4,5,6,7,8,9,10,11,12,2,3]
writeDataframe6(annex_output_path,annex2_input_df,sheetName,dfcols,xlcols,17)
sheetName="Economic_Factor_Input"
economic_factor_df=pd.read_excel(annex_input_path,
sheet_name=sheetName,index_col=None,header=0)
economic_factor_df.fillna(0,inplace=True)
myfactor=list(economic_factor_df["Economic_Factor"])
annex2_input_df["EC_FACTOR"]=myfactor
ecost_df=annex2_input_df.loc[:,["2014-15","2015-16","2016-17","2017-18",
"2018-19","2019-20","2020-21","2021-22","EC_FACTOR"]]
ecost_df=calcualteEconomicCost(ecost_df)
fcost_df=annex2_input_df.loc[:,["2014-15","2015-16","2016-17","2017-18",
"2018-19","2019-20","2020-21","2021-22","EC_FACTOR"]]
sheetName="FIRR_EIRR_Input"
FIRR_EIRR_Input_df=pd.read_excel(annex_input_path,sheet_name=sheetName,index_col=None,header=0)
FIRR_EIRR_Input_df.fillna(0,inplace=True)
FIRR_EIRR_Input_df=assembleFIRR_EIRR_DF(fcost_df,ecost_df,FIRR_EIRR_Input_df)
dfcols=[1,2]
xlcols=[2,3]
row_list=[2,3,4,5,6,7,8,9]
writeDataframe8(data_input_path,FIRR_EIRR_Input_df,sheetName,dfcols,xlcols,row_list)
sheetName="FIRR"
dfcols=[1]
xlcols=[2]
writeDataframe6(annex_output_path,FIRR_EIRR_Input_df,sheetName,dfcols,xlcols,3)
sheetName="EIRR"
dfcols=[2]
xlcols=[2]
writeDataframe6(annex_output_path,FIRR_EIRR_Input_df,sheetName,dfcols,xlcols,3)
"""###############Writing Annex9######################
###############################################"""
annex9_output_sheetname="9.Detil Phasing"
annex9_input_sheetname="Detail_9_input"
annex9_detail_df=pd.read_excel(annex_input_path,
sheet_name=annex9_input_sheetname,index_col=None,header=0)
annex9_detail_df.fillna(0,inplace=True)
annex9_detail_df=assembeAnnex9DF2(annex9_detail_df,annex2_input_df)
annex9_detail_df=checkAnnex9DF(annex9_detail_df,annex2_input_df)
dfcols=[3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,0,1]
xlcols=[6,7,9,10,16,17,19,20,26,27,29,30,36,37,39,40,2,3]
writeDataframe6(annex_output_path,annex9_detail_df,annex9_output_sheetname,dfcols,xlcols,2)
"""###############Writing Annex9######################
###############################################"""
annex9_detail_df2=annex9_detail_df.copy(deep=True)
annex9_detail_df2= changeCumulativeDF(annex9_detail_df2)
annex9_output_sheetname="9.Detil Phasing_2"
xlcols=[6,7,9,10,16,17,19,20,26,27,29,30,2,3]
dfcols=[3,4,5,6,11,12,13,14,15,16,17,18,0,1]
writeDataframe6(annex_output_path,annex9_detail_df2,annex9_output_sheetname,dfcols,xlcols,2)
# +
#annex9_detail_df2=annex9_detail_df.copy(deep=True)
# -
sheetName="distribution_of_cost"
cost_distribution_df=pd.read_excel(annex_input_path,sheet_name=sheetName,index_col=None,header=0)
cost_distribution_df.fillna(0,inplace=True)
cost_distribution_df['2nd revised']=0
cost_distribution_df['difference']=0
cost_distribution_df=calccualteCostDistribution(cost_distribution_df,sec8_total_df_corrected)
dfcols=[3,4,5]
xlcols=[4,5,6]
writeDataframe6(data_input_path,cost_distribution_df,sheetName,dfcols,xlcols,2)
"""#################Saving All Data Frames#################"""
dfpath=os.path.join(myfolder,"dframes.xlsx")
frames=[]
names=[]
frames.append(input_df)
names.append('Input_sheet')
frames.append(haorwise_quantity_df)
names.append('haor_wise_quantity')
frames.append(haorwise_cost_df)
names.append('haor_wise_cost')
frames.append(packagewise_quantity_df)
names.append("Package_wise_quantity")
frames.append(packagewise_cost_df)
names.append("Package_wise_cost")
frames.append(structurewise_cost_df)
names.append("Structure_wise_cost")
frames.append(civil_works_cost_df)
names.append("Civil_Works_Cost")
frames.append(districtwise_quantity_df)
names.append("district_wise_quantity")
frames.append(districtwise_cost_df)
names.append("district_wise_cost")
frames.append(sec8_total_df_corrected)
names.append("code_wise_costcomaprision")
frames.append(annex2_input_df)
names.append("Yearwise-cost-projection")
frames.append(annex9_detail_df)
names.append("Annex-9_detail_phasing")
frames.append(FIRR_EIRR_Input_df)
names.append("EIRR_INPUT")
frames.append(cost_distribution_df)
names.append("Distribution_of_cost")
frames.append(myquantity_df)
names.append("quantity_comaprision")
frames.append(annex9_detail_df2)
names.append("Annex-9_detail_phasing_modified")
#sec8_total_df_corrected
#districtwise_cost_df
saveDataFrame(frames,dfpath,names)
total=sum(list(sec8_total_df_corrected.iloc[:,6]))
total_weight=sum(list(annex2_input_df['Weight']))
total_weight
cost_distribution_df
print(annex_input_path)
| DPP_Revised_Backup/haorwisecalculation5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.3 64-bit (''base'': conda)'
# language: python
# name: python37364bitbasecondacd385dda59854559b44e3f82ede14f15
# ---
from CosinorPy import file_parser, cosinor, cosinor1, cosinor_nonlin
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams.update({'font.size': 14})
# ## General settings
csv_folder = ""
#N=100
#prefix="02_"
#df_results_all = pd.read_csv(os.path.join(csv_folder,f'{prefix}evalutation_results_{N}.csv'))
#df_results_all.to_csv(os.path.join("supp_tables2","supp_table_7.csv"), index=False)
df_results_all = pd.read_csv(os.path.join('supp_tables2','supp_table_7.csv'))
# +
#df_results_all = df_results_all[df_results_all['test(noise)']<1]
# -
NOISE = df_results_all['test(noise)'].unique()
N_COMPONENTS = df_results_all['test(n_components)'].unique()
LIN_COMP = df_results_all['test(lin_comp)'].unique()
AMPLIFICATION = df_results_all['test(amplification)'].unique()
# Absolute errors
# +
df_results_all['d_amplitude'] = np.abs(df_results_all['amplitude'] - df_results_all['amplitude0'])
df_results_all['d_acrophase'] = np.abs(df_results_all['acrophase'] - df_results_all['acrophase0'])
df_results_all['d_acrophase'] = np.abs(df_results_all['d_acrophase'].map(cosinor.project_acr))
df_results_all['d_amplification'] = np.abs(df_results_all['amplification'] - df_results_all['amplification0'])
df_results_all['d_lin_comp'] = np.abs(df_results_all['lin_comp'] - df_results_all['lin_comp0'])
# -
# Model labels
# +
model_label = []
for n_comps, gen in zip(df_results_all['n_components'], df_results_all['gen']):
if gen:
#model_label.append('cosinor' + str(int(n_comps)) + " gen.")
model_label.append('C' + str(int(n_comps)) + "G")
else:
#model_label.append('cosinor' + str(int(n_comps)))
model_label.append('C' + str(int(n_comps)))
df_results_all['model'] = model_label
# -
# Dataset labels
# +
dataset_label = []
for lin_comp, amplif, n_comps in zip(df_results_all['test(lin_comp)'], df_results_all['test(amplification)'], df_results_all['test(n_components)']):
#if lin_comp or amplif:
# dataset_label.append(str(int(n_comps)) + ' comp., gen.')
osc_type = 'S' if n_comps == 1 else 'A'
amplif_type = 'F' if amplif > 0 else 'D' if amplif < 0 else ''
#if amplif > 0:
# dataset_label.append(str(int(n_comps)) + ' comp., F.')
#elif amplif < 0:
# dataset_label.append(str(int(n_comps)) + ' comp., D.')
#else:
# dataset_label.append(str(int(n_comps)) + ' comp.')
dataset_label.append(osc_type+amplif_type)
df_results_all['dataset'] = dataset_label
# -
# ## Statistics
# +
df_stats = pd.DataFrame()
tests_all = df_results_all
for noise in NOISE:
for n_components in N_COMPONENTS:
for lin_comp in LIN_COMP:
for amplification in AMPLIFICATION:
cond = (tests_all['test(noise)'] == noise) & (tests_all['test(lin_comp)'] == lin_comp) & (tests_all['test(amplification)'] == amplification) & (tests_all['test(n_components)'] == n_components)
d = {'test(noise)':noise,
'test(lin_comp)':lin_comp,
'test(amplification)': amplification,
'test(n_components)': n_components}
df_res = tests_all[cond]
for n_comps_model in [1,3]:
for generalised_model in [0,1]:
label = f"model{n_comps_model}"
if generalised_model == 1:
label += "_gen"
cond2 = (df_res['n_components'] == n_comps_model) & (df_res['gen'] == generalised_model)
df_res_model = df_res[cond2]
n_models = len(df_res_model)
d_amplitude = df_res_model['d_amplitude']
d_acrophase = df_res_model['d_acrophase']
d_amplification = df_res_model['d_amplification']
d_lin_comp = df_res_model['d_lin_comp']
d[f'MSE_amplitude({label})'] = np.nansum(d_amplitude**2)/len(d_amplitude)
d[f'MSE_acrophase({label})'] = np.nansum(d_acrophase**2)/len(d_acrophase)
#d[f'MSE_amplification({label})'] = np.nansum(d_amplification**2)/len(d_amplification)
#d[f'MSE_lin_comp({label})'] = np.nansum(d_lin_comp**2)/len(d_lin_comp)
df_stats = df_stats.append(d, ignore_index=True)
# -
# Amplification labels
# +
amplification_label = []
for amplif in df_stats['test(amplification)']:
if amplif > 0:
amplification_label.append('forced')
elif amplif < 0:
amplification_label.append('damped')
else:
amplification_label.append('none')
df_stats['amplification type'] = amplification_label
# -
# Oscillation labels
# +
oscillation_label = []
for n_comps in df_stats['test(n_components)']:
if n_comps > 1:
oscillation_label.append('asymmetric')
else:
oscillation_label.append('symmetric')
df_stats['oscillation type'] = oscillation_label
# +
#order={'symmetric':0, 'asymmetric':1}
# -
# ### Summary of amplitude errors
df_stats_summary_amplitudes = df_stats[['oscillation type', 'amplification type', 'test(noise)','MSE_amplitude(model1)','MSE_amplitude(model1_gen)','MSE_amplitude(model3)','MSE_amplitude(model3_gen)']]
df_stats_summary_amplitudes.columns = ['oscillation type', 'amplification type', 'noise', 'C1', 'C1G', 'C3', 'C3G']
df_stats_summary_amplitudes = df_stats_summary_amplitudes.sort_values(by=['oscillation type', 'amplification type', 'noise'])#, key=lambda x:order[x])
df_stats_summary_amplitudes = df_stats_summary_amplitudes.round(3)
f = open(os.path.join(csv_folder,'table_amplitude_errors.txt'), 'w')
f.write(df_stats_summary_amplitudes.to_latex(index=False))
f.close()
df_stats_summary_amplitudes
# ### Summary of acrophase errors
df_stats_summary_acrophases = df_stats[['oscillation type', 'amplification type', 'test(noise)','MSE_acrophase(model1)','MSE_acrophase(model1_gen)','MSE_acrophase(model3)','MSE_acrophase(model3_gen)']]
df_stats_summary_acrophases.columns = ['oscillation type', 'amplification type', 'noise', 'C1', 'C1G', 'C3', 'C3G']
df_stats_summary_acrophases = df_stats_summary_acrophases.sort_values(by=['oscillation type', 'amplification type', 'noise'])#, key=lambda x:order[x])
df_stats_summary_acrophases = df_stats_summary_acrophases.round(3)
f = open(os.path.join(csv_folder,'table_acrophase_errors.txt'), 'w')
f.write(df_stats_summary_acrophases.to_latex(index=False))
f.close()
df_stats_summary_acrophases
# ## Plots
# https://seaborn.pydata.org/generated/seaborn.violinplot.html
# ### Amplitudes
dataset_full = {'A': 'asymmetric',
'S': 'symmetric',
'AF': 'asymmetric forced',
'SF': 'symmetric forced',
'AD': 'asymmetric damped',
'SD': 'symmetric damped'}
# +
fig, axes = plt.subplots(2,3)
box = 1
for i, (dataset) in enumerate(df_results_all['dataset'].unique()):
data = df_results_all[df_results_all['dataset']==dataset]
data = data[data['test(noise)']<1]
data = data.rename(columns={'test(noise)':'noise'})
#data = data[data['dataset'] != 'AF']
#data = data[data['dataset'] != 'SF']
ax = axes.flat[i]
if box:
sns.boxplot(x='model', y='d_amplitude', hue='noise', data=data, palette="muted", showfliers = False, ax=ax) # , order=["Dinner", "Lunch"]
else:
sns.violinplot(x='model', y='d_amplitude', hue='noise', data=data, palette="muted", cut=0, ax=ax) # , order=["Dinner", "Lunch"]
ax.set_ylabel("absolute amplitude error")
ax.set_ylabel("absolute amplitude error")
ax.title.set_text(dataset_full[dataset])
fig=plt.gcf()
fig.set_size_inches([17,12])
plt.savefig(os.path.join(csv_folder,'amplitude_errors.pdf'), bbox_inches = 'tight')
plt.show()
# -
# ### Acrophases
# +
fig, axes = plt.subplots(2,3)
box = 1
for i, (dataset) in enumerate(df_results_all['dataset'].unique()):
data = df_results_all[df_results_all['dataset']==dataset]
data = data[data['test(noise)']<1]
#data = data[data['dataset'] != 'AF']
#data = data[data['dataset'] != 'SF']
data = data.rename(columns={'test(noise)':'noise'})
ax = axes.flat[i]
if box:
sns.boxplot(x='model', y='d_acrophase', hue='noise', data=data, palette="muted", showfliers = False, ax=ax) # , order=["Dinner", "Lunch"]
else:
sns.violinplot(x='model', y='d_acrophase', hue='noise', data=data, palette="muted", cut=0, ax=ax) # , order=["Dinner", "Lunch"]
ax.set_ylabel("absolute acrophase error")
ax.set_ylabel("absolute acrophase error")
ax.title.set_text(dataset_full[dataset])
fig=plt.gcf()
fig.set_size_inches([17,12])
plt.savefig(os.path.join(csv_folder,'acrophase_errors.pdf'), bbox_inches = 'tight')
plt.show()
# -
# ### Old code
# +
#Amplitude plots
fig, axes = plt.subplots(2,2)
box = 1
for i, (model) in enumerate(df_results_all['model'].unique()):
data = df_results_all[(df_results_all['model']==model) & (df_results_all['test(noise)']<1)]
data = data.rename(columns={'test(noise)':'noise'})
#data = data[data['dataset'] != 'AF']
#data = data[data['dataset'] != 'SF']
ax = axes.flat[i]
if box:
sns.boxplot(x='dataset', y='d_amplitude', hue='noise', data=data, palette="muted", showfliers = False, ax=ax) # , order=["Dinner", "Lunch"]
else:
sns.violinplot(x='dataset', y='d_amplitude', hue='noise', data=data, palette="muted", cut=0, ax=ax) # , order=["Dinner", "Lunch"]
ax.set_ylabel("absolute amplitude error")
ax.set_ylabel("absolute amplitude error")
ax.title.set_text(model)
fig=plt.gcf()
fig.set_size_inches([15,12])
#plt.savefig(os.path.join(csv_folder,'amplitude_errors.pdf'), bbox_inches = 'tight')
plt.show()
# +
# Acrophase plots
fig, axes = plt.subplots(2,2)
box = 1
for i, (model) in enumerate(df_results_all['model'].unique()):
data = df_results_all[(df_results_all['model']==model) & (df_results_all['test(noise)']<1)]
data = data.rename(columns={'test(noise)':'noise'})
ax = axes.flat[i]
if box:
sns.boxplot(x='dataset', y='d_acrophase', hue='noise', data=data, palette="muted", showfliers = False, ax=ax) # , order=["Dinner", "Lunch"]
else:
sns.violinplot(x='dataset', y='d_acrophase', hue='noise', data=data, palette="muted", cut=0, ax=ax) # , order=["Dinner", "Lunch"]
ax.set_ylabel("Acrophase error")
ax.set_ylim([0,np.pi])
ax.title.set_text(model)
fig=plt.gcf()
fig.set_size_inches([15,12])
#if box:
# plt.savefig('results_robustness\\params_distrib_sns_box.pdf', bbox_inches = 'tight')
#else:
# plt.savefig('results_robustness\\params_distrib_sns.pdf', bbox_inches = 'tight')
plt.show()
| analyse_errors.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Custom CNN architecture with DepthwiseConv2D layers <br />
# Generates a simple CNN model <br />
# Trains the model <br />
# Prints confusion matrix <br />
# Saves the trained model as *.h5* and *.tflite* files <br />
# Software License Agreement (MIT License) <br />
# Copyright (c) 2020, <NAME>.
import numpy as np
import keras
from keras.applications import imagenet_utils
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import itertools
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
import keras.layers as kl
batchSize = 10
# +
# datasets
dataGen = ImageDataGenerator(validation_split=0.1,
preprocessing_function=keras.applications.mobilenet.preprocess_input)
trainBatch = dataGen.flow_from_directory(
'/home/amirhossein/Codes/Project/Dataset/Dataset_678/dataset_openclose_678',
target_size=(128,128), batch_size=batchSize, subset='training')
validateBatch = dataGen.flow_from_directory(
'/home/amirhossein/Codes/Project/Dataset/Dataset_678/dataset_openclose_678',
target_size=(128,128), batch_size=batchSize, subset='validation', shuffle=False)
# -
def depthwiseconv2d(blocks, inputs, outputs):
layers = [kl.DepthwiseConv2D( (3, 3), input_shape = (inputs, inputs, 3),strides=(1, 1), padding='same', activation = 'relu'),
kl.MaxPooling2D(pool_size = (2, 2))]
for i in range(blocks-1):
layers.append( kl.DepthwiseConv2D( (3, 3),strides=(1, 1), padding='same', activation = 'relu') )
layers.append( kl.MaxPooling2D(pool_size = (2, 2)) )
layers.append( kl.Flatten() )
layers.append( kl.Dense(units = outputs, activation = 'softmax') )
name = 'DepthwiseConv2D_'+ str(blocks) +'_'+ 'x' +'_'+ str(inputs) +'_'+ str(outputs) +'_'
return layers, name
# largest used blocks in experiment 1
blocks = 5
images = 128
outputs = 2
layers, name = depthwiseconv2d(blocks, images, outputs)
model = keras.Sequential(layers)
model.summary()
model.compile(keras.optimizers.Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(trainBatch, steps_per_epoch=1000, validation_data=validateBatch, validation_steps=4,
epochs=8)
# to load a previously saved model instead of training, uncomment the next line
# model = keras.models.load_model('depthwiseconv2d.h5')
# plot confusion matrix
def plot_confusion_matrix(cm, classes, normalize=False, title='confusion_matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print('Normalized confusion matrix')
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max()/2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i,j],
horizontalalignment='center',
color='white' if cm[i,j] > thresh else 'black')
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.rcParams.update({'font.size': 12, 'mathtext.fontset':'stix', 'font.family':'STIXGeneral'})
# plt.rcParams.update(plt.rcParamsDefault)
valLabels = validateBatch.classes
predictions = model.predict_generator(validateBatch, steps= len(valLabels)//batchSize+1)
cm = confusion_matrix(valLabels, predictions.argmax(axis=1))
cmPlotLabels = ['Grasped', 'Not Grasped']
plot_confusion_matrix(cm, cmPlotLabels, title='Confusion Matrix')
# save models
model.save('depthwiseconv2d.h5')
converter = tf.lite.TFLiteConverter.from_keras_model_file('depthwiseconv2d.h5')
tflite_model = converter.convert()
open("depthwiseconv2d.tflite", "wb").write(tflite_model)
| Custom_CNN_Architectures/depthwiseconv2d.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: python2_opencv
# language: python
# name: python2_opencv
# ---
# ## <NAME>
# Update 10 Feb 2017
#
# Snippet of code to combine two videos (processed and raw) into a single video
# +
import cv2
import numpy as np
import matplotlib
matplotlib.use("TkAgg") # have to use this for tkinter to work below
from matplotlib import pyplot as plt
# %matplotlib tk
# scikit image
import skimage
from skimage import io
import os
import time
# -
# %qtconsole
# choose video
# 20150608_135726.avi
aa = "/Users/cswitzer/Dropbox/ExperSummer2016/Kalmia/Revision1_AmNat/OldVideos/Movie2_2016.mp4"
bb = "/Users/cswitzer/Dropbox/ExperSummer2016/Kalmia/Revision1_AmNat/OldVideos/Movie3_2016.mp4"
print aa
cap = cv2.VideoCapture(aa)
numFr = int(round(cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT)))
print 'numFrames = ' + str(numFr)
length = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT))
width = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.cv.CV_CAP_PROP_FPS)
print length, width, height, fps
# +
print bb
cap2 = cv2.VideoCapture(bb)
numFr2 = int(round(cap2.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT)))
print 'numFrames = ' + str(numFr2)
length2 = int(cap2.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT))
width2 = int(cap2.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))
height2 = int(cap2.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT))
fps2 = cap2.get(cv2.cv.CV_CAP_PROP_FPS)
print length2, width2, height2, fps2
# -
# %qtconsole
# +
newD = '/Users/cswitzer/Desktop/DoubleVid/'
if not os.path.isdir(newD):
os.mkdir(newD)
os.chdir(newD)
# +
# read in videos and combine them frame by frame
# reset frame num to zero
cap.set(cv2.cv.CV_CAP_PROP_POS_FRAMES, 0)
cap2.set(cv2.cv.CV_CAP_PROP_POS_FRAMES, 0)
ret = True
ctr = 1
stt = time.time()
while(ctr < 10):
# capture frame-by-frame
ret, frame = cap.read()
ret2, frame2 = cap2.read()
if not ret:
break
# combine frames
combFrames = np.concatenate([frame, frame2], axis = 1)
cv2.imwrite(str(ctr).zfill(4) + ".png", combFrames)
print(ctr)
ctr += 1
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything is done release the capture
cap.release()
cap2.release()
cv2.destroyAllWindows()
print time.time() - stt
# -
stta = time.time()
# !ffmpeg -start_number 1 -i %04d.png -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -c:v libx264 -pix_fmt yuv420p -y mov23_combined.mp4
print time.time() - stta # ~ 1.4 seconds
# +
# delete all png files in directory
filelist = [ f for f in os.listdir(".") if f.endswith(".png") ]
for f in filelist:
os.remove(f)
# -
| 1_4_CombineTwoVideosIntoOne.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="1fziHl7Ar94J" colab_type="text"
# # Eclat
# + [markdown] id="eiNwni1xsEgT" colab_type="text"
# ## Importing the libraries
# + id="DUF77Qr1vqyM" colab_type="code" outputId="86e8ae3e-6a35-47bc-a832-5a02936a9c65" colab={"base_uri": "https://localhost:8080/", "height": 34}
# !pip install apyori
# + id="UJfitBClsJlT" colab_type="code" colab={}
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# + [markdown] id="vLt-7XUKsXBd" colab_type="text"
# ## Data Preprocessing
# + id="J_A-UFOAsaDf" colab_type="code" colab={}
dataset = pd.read_csv('Market_Basket_Optimisation.csv', header = None)
transactions = []
for i in range(0, 7501):
transactions.append([str(dataset.values[i,j]) for j in range(0, 20)])
# + [markdown] id="1wYZdBd5sea_" colab_type="text"
# ## Training the Eclat model on the dataset
# + id="YzIk4vXZsj5i" colab_type="code" colab={}
from apyori import apriori
rules = apriori(transactions = transactions, min_support = 0.003, min_confidence = 0.2, min_lift = 3, min_length = 2, max_length = 2)
# + [markdown] id="b176YNwWspiO" colab_type="text"
# ## Visualising the results
# + [markdown] id="iO6bF_dImT-E" colab_type="text"
# ### Displaying the first results coming directly from the output of the apriori function
# + id="kvF-sLc6ifhd" colab_type="code" colab={}
results = list(rules)
# + id="eAD8Co4_l9IE" colab_type="code" outputId="b87395d9-2a75-49ab-9206-61fb678e6aa8" colab={"base_uri": "https://localhost:8080/", "height": 191}
results
# + [markdown] id="MFkQP-fcjDBC" colab_type="text"
# ### Putting the results well organised into a Pandas DataFrame
# + id="gyq7Poi0mMUe" colab_type="code" colab={}
def inspect(results):
lhs = [tuple(result[2][0][0])[0] for result in results]
rhs = [tuple(result[2][0][1])[0] for result in results]
supports = [result[1] for result in results]
return list(zip(lhs, rhs, supports))
resultsinDataFrame = pd.DataFrame(inspect(results), columns = ['Product 1', 'Product 2', 'Support'])
# + [markdown] id="IjrrlYW4jpTR" colab_type="text"
# ### Displaying the results sorted by descending supports
# + id="nI7DJXng-nxQ" colab_type="code" outputId="63b4b5d1-9d4d-43a0-ded4-6ee4de2f4854" colab={"base_uri": "https://localhost:8080/", "height": 326}
resultsinDataFrame.nlargest(n = 10, columns = 'Support')
| 19. Eclat.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Thin Plate Splines (TPS) Transforms
import numpy as np
from menpo.transform import ThinPlateSplines
from menpo.shape import PointCloud
# Let's create the landmarks used in Principal Warps paper (http://user.engineering.uiowa.edu/~aip/papers/bookstein-89.pdf)
# +
# landmarks used in Principal Warps paper
# http://user.engineering.uiowa.edu/~aip/papers/bookstein-89.pdf
src_landmarks = np.array([[3.6929, 10.3819],
[6.5827, 8.8386],
[6.7756, 12.0866],
[4.8189, 11.2047],
[5.6969, 10.0748]])
tgt_landmarks = np.array([[3.9724, 6.5354],
[6.6969, 4.1181],
[6.5394, 7.2362],
[5.4016, 6.4528],
[5.7756, 5.1142]])
src = PointCloud(src_landmarks)
tgt = PointCloud(tgt_landmarks)
tps = ThinPlateSplines(src, tgt)
# -
# Let's visualize the TPS
# %matplotlib inline
tps.view();
# This proves that the result is correct
np.allclose(tps.apply(src_landmarks), tgt_landmarks)
# Here is another example with a deformed diamond.
# +
# deformed diamond
src_landmarks = np.array([[ 0, 1.0],
[-1, 0.0],
[ 0,-1.0],
[ 1, 0.0]])
tgt_landmarks = np.array([[ 0, 0.75],
[-1, 0.25],
[ 0,-1.25],
[ 1, 0.25]])
src = PointCloud(src_landmarks)
tgt = PointCloud(tgt_landmarks)
tps = ThinPlateSplines(src, tgt)
# -
# %matplotlib inline
tps.view();
np.allclose(tps.apply(src_landmarks), tgt_landmarks)
| menpo/Transforms/Thin_Plate_Splines.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Linear representations
#
# ## Matrix groups
#
# Let $M_n$ be the set of all $n\times n$ matrices $A=(a_{ij})$, $i,j=1,2,\ldots,n$. The identity matrix $\mathbb{I}_n$ is the matrix with all diagonal entries equal to 1 and the remaining entries equal to 0:
#
# $$
# \mathbb{I}_n=\begin{pmatrix}1&0&\ldots&0\\0&1&\ldots&0\\\vdots&\vdots&\ddots&\vdots\\
# 0&0&\ldots&1\end{pmatrix}
# $$
#
# Multiplication of two matrices $A=(a_{ij})$ and $B=(b_{ij})$ in $M_n$ produces a matrix
#
# $$
# A\cdot B=(\sum_{l=1}^n a_{il}b_{lj})\in M_n
# $$
#
# The multiplication can be considered as a map
#
# $$
# M_n\times M_n\to M_n
# $$
#
# It is therefore a binary operation.
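# As a quick numerical illustration of these multiplication rules (the matrices below are arbitrary examples chosen for this sketch, not taken from the text), NumPy's `@` operator implements exactly this binary operation on $M_2$:

```python
import numpy as np

# Arbitrary illustrative 2x2 matrices.
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 3]])
I2 = np.eye(2, dtype=int)  # the identity matrix of M_2

# Identity rule: I_n * A == A * I_n == A
assert np.array_equal(I2 @ A, A) and np.array_equal(A @ I2, A)

# Associativity: A(BC) == (AB)C
assert np.array_equal(A @ (B @ C), (A @ B) @ C)

# Note that multiplication is in general NOT commutative.
print(np.array_equal(A @ B, B @ A))  # → False
```

# Commutativity fails already for these two matrices, which is why matrix groups such as $GL_n$ are typically non-abelian.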
#
# ```{admonition} Proposition
# :class: proposition
#
# We have the following multiplication rules
# - $A=\mathbb{I}_n\cdot A=A\cdot \mathbb{I}_n$
# - $A\cdot(B\cdot C)=(A\cdot B)\cdot C$ for any three matrices $A,B,C\in M_n$
#
#
# ```
#
# A matrix $A\in M_n$ is called invertible, or non-singular, if there exists a matrix $B$ such that
#
# $$
# A\cdot B=B\cdot A=\mathbb{I}_n
# $$
#
# ```{admonition} Definition (Linear group)
# :class: definition
#
# The subset of $M_n$ consisting of invertible matrices is denoted by $GL_n$ and called the general linear group.
#
# ```
#
# ```{admonition} Proposition
# :class: proposition
#
# The general linear group $GL_n$ is a group with matrix multiplication as the binary operation
# ```
#
# We can consider different linear groups by specifying the domain of the matrix elements, e.g.:
# - $GL_n(\mathbb{C})$ -- invertible matrices with complex entries
# - $GL_n(\mathbb{R})$ -- invertible matrices with real entries
# - $GL_n(\mathbb{Z})$ -- invertible matrices with integer entries
#
# ```{admonition} Example (Linear group)
#
# $GL_2(\mathbb{R})=\left\{\begin{pmatrix}a&b\\c&d\end{pmatrix}:a,b,c,d\in\mathbb{R}, ad-bc\neq 0\right\}$
#
# ```
#
# There are various subgroups of $GL_n$:
# - $SL_n=\{A\in GL_n:\det A=1\}$
# - $O_n=\{A\in GL_n(\mathbb{R}):A\cdot A^T=\mathbb{I}_n\}$
# - $SO_n=\{A\in O_n:\det A=1\}$
# - The matrices $D=\{\mathbb{I}_2,A,A^2,A^3,B,AB,A^2B,A^3B\}$ where $A=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$ and $B=\begin{pmatrix}1&0\\0&-1\end{pmatrix}$ is a subgroup of $GL_2$. We encountered this group already when we studied the subgroups of $S_n$, namely $D$ has the same multiplication table as the group of symmetries of a square: $D_4$.
# - Lorentz group: let $\eta$ be the following matrix
#
# $$
# \eta=\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}
# $$
#
# Then the Lorentz group is defined as:
#
# $$
# Lor=\{A\in GL_4(\mathbb{R}):A^T\eta A=\eta\}\subset GL_4(\mathbb{R})
# $$
#
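# These membership conditions can be checked directly. The following sketch (illustrative only, not part of the lecture notes) verifies with NumPy that the matrices $A,B$ generating $D$ satisfy the expected dihedral relations, and that a boost matrix with an arbitrarily chosen rapidity `phi = 0.5` lies in the Lorentz group:

```python
import numpy as np

# Dihedral subgroup generators from the example above.
A = np.array([[0, -1], [1, 0]])
B = np.array([[1, 0], [0, -1]])
I2 = np.eye(2, dtype=int)
assert (np.linalg.matrix_power(A, 4) == I2).all()          # A has order 4
assert (B @ B == I2).all()                                 # B has order 2
assert (B @ A == np.linalg.matrix_power(A, 3) @ B).all()   # dihedral relation BA = A^3 B

# A boost along the first spatial axis with rapidity phi (arbitrary value)
# satisfies the Lorentz condition L^T eta L = eta, since cosh^2 - sinh^2 = 1.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
phi = 0.5
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(phi)
L[0, 1] = L[1, 0] = np.sinh(phi)
assert np.allclose(L.T @ eta @ L, eta)
```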
# ## Linear representations
#
# Before defining linear representations of groups, let us recall from linear algebra the definition of a vector space and a linear map.
#
# ```{admonition} Definition (Vector space)
# :class: definition
#
# A vector space over a field $\mathbb{F}$ ($\mathbb{R}$ or $\mathbb{C}$), is a set $V$ together with two operations: vector addition and multiplication by scalar, that satisfy the following axioms:
# - addition is commutative, associative, has an identity element and inverses (therefore $V$ is an abelian group with +)
# - $\lambda(\mu v)=(\lambda\mu)v$ for all $\lambda,\mu\in\mathbb{F}$ and $v\in V$
# - $\lambda(v+u)=\lambda v+\lambda u$ for all $\lambda\in\mathbb{F}$ and $v,u\in V$
# - $(\lambda+\mu)v=\lambda v+\mu v$ for all $\lambda,\mu\in\mathbb{F}$ and $v\in V$
# ```
#
# ```{admonition} Examples (Vector space)
# :class: example
#
# - $\mathbb{F}=\mathbb{R}$, $V=\mathbb{R}^n$ is a vector space over real numbers
# - $\mathbb{F}=\mathbb{C}$, $V=\mathbb{C}^n$ is a vector space over complex numbers
# - the set $M_{n\times m}(\mathbb{R})$ of all $n\times m$ matrices is a vector space
#
# ```
#
# ```{admonition} Definition (Linear map)
# :class: definition
#
# Let $U$ and $V$ be vector spaces over $\mathbb{F}$. Then $f:U\to V$ is called a linear map if
# - $f(u+v)=f(u)+f(v)$ for all $u,v\in U$
# - $f(\lambda u)=\lambda f(u)$ for all $\lambda\in \mathbb{F}$ and $u\in U$
#
# ```
#
# ```{admonition} Examples (Linear map)
# :class: example
#
# - The identity map $id_U:U\to U$ is a linear map
# - For an $n\times m$ real matrix $A$, we can define a map $T_A:\mathbb{R}^m\to\mathbb{R}^n$ as $T_A(v)=A\cdot v$. Then $T_A$ is a linear map.
#
# ```
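#
# As a quick sanity check (an illustrative sketch with randomly generated data, not part of the lecture notes), the map $T_A$ induced by a $3\times 2$ matrix satisfies both linearity axioms:

```python
import numpy as np

# A 3x2 real matrix induces a linear map T_A : R^2 -> R^3, v |-> A v.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
u = rng.standard_normal(2)
v = rng.standard_normal(2)
lam = 2.5  # arbitrary scalar

assert np.allclose(A @ (u + v), A @ u + A @ v)    # f(u+v) = f(u) + f(v)
assert np.allclose(A @ (lam * u), lam * (A @ u))  # f(lam u) = lam f(u)
```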
#
# Our goal is to define a mathematical structure that facilitates the action of groups on vector spaces. The first step is to observe that the set $GL(V)$ of invertible linear maps $V\to V$ on an $n$-dimensional vector space $V$ over a field $\mathbb{F}$ forms a group, called the general linear group of $V$. The group multiplication is composition of maps, the identity element is the identity map and the inverse is the map inverse.
#
# For $V=\mathbb{F}^n$ the general linear group reduces to the previously mentioned group of invertible matrices:
#
# $$
# GL(\mathbb{F}^n)=GL_n(\mathbb{F})
# $$
#
# In a general case, the general linear group $GL(V)$ acts linearly on the vector space $V$. For an arbitrary group $G$ we can find an action on a vector space by assigning to each group element in $G$ a linear transformation in $GL(V)$. This assignment should preserve the group structure of $G$, which means that it should be a group homomorphism $G\to GL(V)$. This motivates the following definition:
#
# ```{admonition} Definition (Representation)
# :class: definition
#
# A representation $R$ of a group $G$ is a group homomorphism $R:G\to GL(V)$, where $V$ is a vector space over $\mathbb{F}$. The dimension of the representation $R$ is defined by $\dim(R)=\dim_\mathbb{F}(V)$.
#
# ```
#
# ```{admonition} Examples (Representation)
# :class: example
#
# - Trivial representation: let $V$ be any vector space. The trivial representation of $G$ on $V$ is the group homomorphism $G\to GL(V)$ sending every element of $G$ to the identity transformation. That is, the elements of $G$ all act on $V$ trivially by doing nothing.
# - Permutation representation of $S_n$: Consider the $n$-dimensional vector space $V$ with basis elements $e_i$, $i=1,2,\ldots,n$. Then the symmetric group $S_n$ acts on $V$ by permuting basis elements, namely for $\sigma\in S_n$, the action is:
#
# $$
# \sigma.e_i=e_{\sigma(i)}
# $$
#
# For example, for $S_3$ this leads to the following three-dimensional representation $S_3\to GL(\mathbb{R}^3)$:
#
# $$
# \begin{align*}
# e\mapsto \begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}, \qquad (12)\mapsto \begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}, \qquad (13)\mapsto \begin{pmatrix}0&0&1\\0&1&0\\1&0&0\end{pmatrix}\\
# (23)\mapsto \begin{pmatrix}1&0&0\\0&0&1\\0&1&0\end{pmatrix}, \qquad (123)\mapsto \begin{pmatrix}0&0&1\\1&0&0\\0&1&0\end{pmatrix}, \qquad (132)\mapsto \begin{pmatrix}0&1&0\\0&0&1\\1&0&0\end{pmatrix}
# \end{align*}
# $$
#
# There is an analogous representation of $S_n$ on $\mathbb{R}^n$ for all $n$.
#
# - Alternating representation of $S_n$: For every symmetric group $S_n$, there exists a one-dimensional representation defined by the homomorphism: for $\sigma\in S_n$ we have $\sigma\mapsto (1)$ if $\sigma$ is an even permutation; and $\sigma\mapsto (-1)$ if $\sigma$ is an odd permutation.
#
# - Tautological representation of $D_n$: The tautological representation of $D_n$ is given by the action of $D_n$ on the regular $n$-gon. For example, for $D_4$ we have a homomorphism
#
# $$
# \rho:D_4\to GL(\mathbb{R}^2)
# $$
#
# that takes the rotation by $90^\circ$ element $A$ to $\rho(A)=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$; and the reflection with respect to the $x$-axis $B$ to $\rho(B)=\begin{pmatrix}1&0\\0&-1\end{pmatrix}$. The tautological representation of $D_n$ is always two-dimensional.
#
# ```
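#
# The permutation and alternating representations of $S_3$ can be checked numerically. The sketch below (illustrative only, not part of the lecture notes) verifies that the matrices listed above respect the orders of $(12)$ and $(123)$, and that the determinant of each permutation matrix recovers the alternating representation:

```python
import numpy as np

# Permutation matrices for (12) and (123), taken from the example above.
I3 = np.eye(3, dtype=int)
R_12 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
R_123 = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])

# (12) has order 2 and (123) has order 3; a homomorphism preserves these orders.
assert (R_12 @ R_12 == I3).all()
assert (np.linalg.matrix_power(R_123, 3) == I3).all()

# The determinant gives the alternating representation: -1 for odd, +1 for even.
assert round(np.linalg.det(R_12)) == -1
assert round(np.linalg.det(R_123)) == 1
```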
#
#
| docs/_sources/Lectures/Lecture4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python3 - mypkgs
# language: python
# name: ipython_mypkgs
# ---
# # Get Figures
# +
import json
import os
import time
from pathlib import Path
import magic
import numpy as np
import pandas as pd
import requests
from IPython.display import Image, display
# + [markdown] tags=[]
# ## rpy2
# -
import rpy2.robjects as ro
from rpy2.robjects import pandas2ri
from rpy2.robjects.conversion import localconverter
from rpy2.robjects.packages import importr
# +
from functools import partial
from rpy2.ipython import html
html.html_rdataframe = partial(html.html_rdataframe, table_class="docutils")
# -
pandas2ri.activate()
base = importr("base")
readRDS = ro.r["readRDS"]
# +
target_date = "20210513"
pmc_r_df = readRDS(
f"../data/imagesdocsum_pathway_queries/{target_date}/pmc.df.all.rds"
)
with localconverter(ro.default_converter + pandas2ri.converter):
pmc_df = ro.conversion.rpy2py(pmc_r_df).rename(
columns={
"figid": "pfocr_id",
# "pmcid": "pmc_id",
"filename": "figure_filename",
"number": "figure_number",
"figtitle": "figure_title",
"papertitle": "paper_title",
# "caption": "figure_caption",
"figlink": "relative_figure_page_url",
"reftext": "reference_text",
}
)
pmc_df["paper_link"] = (
"https://www.ncbi.nlm.nih.gov/pmc/articles/" + pmc_df["pmcid"]
)
pmc_df["figure_page_url"] = (
"https://www.ncbi.nlm.nih.gov" + pmc_df["relative_figure_page_url"]
)
pmc_df["figure_thumbnail_url"] = (
"https://www.ncbi.nlm.nih.gov/pmc/articles/"
+ pmc_df["pmcid"]
+ "/bin/"
+ pmc_df["figure_filename"]
)
pmc_df.drop(columns=["relative_figure_page_url"], inplace=True)
pmc_df
# -
images_dir = Path(f"../data/images/{target_date}")
images_dir.mkdir(parents=True, exist_ok=True)
# +
wait_sec = 0.25
downloaded_images_count_path = Path(
f"../data/images/{target_date}/downloaded_images_count.log"
)
log_file_path = "../data/dead_links1.log"
# with open(log_file_path, "w") as f:
# f.write("")
for i, pmc_row in pmc_df.iterrows():
# if int(i) < 87181:
# continue
pfocr_id = pmc_row["pfocr_id"]
figure_thumbnail_url = pmc_row["figure_thumbnail_url"]
figure_path = images_dir.joinpath(pfocr_id)
if figure_path.exists():
continue
headers = {
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:86.0) Gecko/20100101 Firefox/86.0"
}
request = requests.get(figure_thumbnail_url, headers=headers)
if request.status_code == 200:
with open(figure_path, "wb") as f:
f.write(request.content)
filetype = magic.from_buffer(request.content)
if "JPEG image data" not in filetype:
with open(log_file_path, "a") as f:
f.write(
                f"get {request.status_code}: {figure_thumbnail_url}\t{filetype}\n"
)
print(filetype)
display(Image(filename=figure_path))
print(request.content)
else:
print(f"Got {request.status_code} for {figure_thumbnail_url}")
print(request.content)
with open(log_file_path, "a") as f:
f.write(
                f"get {request.status_code}: {figure_thumbnail_url}\t{request.content}\n"
)
with open(downloaded_images_count_path, "w") as f:
f.write(f"{i} of {len(pmc_df)}\n")
time.sleep(wait_sec)
# -
| notebooks/get_figures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import json
import re
import string
from collections import Counter
from nltk.corpus import stopwords
# +
def to_dict(string):
    """Converts a dictionary-shaped string into a dictionary and joins the screen names"""
    if string != "[]":
        string = json.loads(string.replace("'", "\""))
        return ",".join([s["screen_name"] for s in string])
    return ""
def to_list(list_):
    """Converts a list-shaped string into a list and joins its items"""
    if list_ != "[]":
        list_ = list_[1:-1]
        list_ = list_.split(",")
        return ",".join([s.strip().strip("'") for s in list_])
    return ""
def normalize(s):
    """Replaces accented letters and returns the string in lowercase"""
    replacements = (("á", "a"), ("é", "e"), ("í", "i"), ("ó", "o"), ("ú", "u"))
    s = s.lower()
    for a, b in replacements:
        s = s.replace(a, b)
    return s
def deEmojify(text):
    """Removes emojis from the tweets"""
regrex_pattern = re.compile(pattern = "["
u"\U0001F600-\U0001F64F" # emoticons
u"\U0001F300-\U0001F5FF" # symbols & pictographs
u"\U0001F680-\U0001F6FF" # transport & map symbols
u"\U0001F1E0-\U0001F1FF" # flags (iOS)
"]+", flags = re.UNICODE)
return regrex_pattern.sub(r"", text)
def cleanTxt(text):
    """Removes mentions, hyperlinks, the "#" symbol and "RT" markers"""
    text = re.sub(r"@[a-zA-Z0-9]+", "", text) # Removes @mentions
    text = re.sub(r"#", "", text) # Removes the "#" symbol
    text = re.sub(r"RT[\s]+", "", text) # Removes RT
    text = re.sub(r"https?:\/\/\S+", "", text) # Removes the hyperlinks
    return text
def replace_punct(s):
    """Removes punctuation marks"""
for i in string.punctuation:
if i in s:
s = s.replace(i, "").strip()
return s
def replace_num(s):
    """Removes digits from the tweets"""
for i in ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]:
s = s.replace(i, "")
return s
def tokenizador(text):
    """Tokenizes the tweet text, dropping Spanish stopwords"""
important_words = []
for word in text.split(" "):
if word not in stopwords.words("spanish"):
if word != "":
important_words.append(word)
return " ".join(important_words).strip()
def foo(text):
    """Removes additional punctuation marks"""
forbidden = ("?", "¿", "¡", "!", ",", ".", ";", ":", "-", "'", "+", "$", "/", "*",'«','»', "~", "(", ")")
aux = ""
for v in text:
if not v in forbidden:
aux += v
return aux
def quita_palabras_pequeñas(text):
    """Removes words shorter than 5 characters from the tweet text"""
return " ".join([word for word in text.split(" ") if len(word) >= 5])
# +
# %%time
df = pd.read_csv('C:/Users/Daniel/Desktop/csv/dia 25/no trends/tweets_25_notendencias_raw.csv')
df.drop(['Unnamed: 0','Unnamed: 0.1'], axis = 1, inplace = True)
df_summary = pd.read_csv("C:/Users/Daniel/Desktop/por hora/lista_tendencias_25_por_hora.csv", sep = ";")
# -
# # PREPROCESSING
# +
# 1. Drop the id, timezone, url and translation columns
# 2. Filter the tweets by language ("es")
columns_to_drop = ["conversation_id", "cashtags", "timezone", "user_id", "name", "near", "geo", "source",
"user_rt_id", "user_rt", "retweet_id", "retweet_date", "translate", "trans_src",
"trans_dest", "place", "quote_url", "thumbnail", "created_at", "id", "link"]
df.drop(columns_to_drop, axis = 1, inplace = True)
df = df[df.language == "es"]
df.drop("language", axis = 1, inplace = True)
df = df.reset_index(drop = True)
# +
# Convert the "reply_to" column to a dictionary
# Drop the rows where this is not possible
reply_to_rows = []
for num, row in enumerate(df.reply_to):
try:
to_dict(row)
except:
reply_to_rows.append(num)
df.drop(reply_to_rows, inplace = True)
df.reply_to = df.reply_to.apply(to_dict)
df = df.reset_index(drop = True)
# +
# Convert the "mentions" column to a dictionary
# Drop the rows where this is not possible
mention_rows = []
for num, row in enumerate(df.mentions):
try:
to_dict(row)
except:
mention_rows.append(num)
df.drop(mention_rows, inplace = True)
df.mentions = df.mentions.apply(to_dict)
df = df.reset_index(drop = True)
# +
# Convert the "hashtags" column to a list
# Drop the rows where this is not possible
hashtags_rows = []
for num, row in enumerate(df.hashtags):
try:
to_list(row)
except:
hashtags_rows.append(num)
df.drop(hashtags_rows, inplace = True)
df.hashtags = df.hashtags.apply(to_list)
df = df.reset_index(drop = True)
# +
# Recode the "photos", "retweet" and "urls" columns as 0/1 values:
# 0 if there is no photo or url, or the tweet is not a retweet
# 1 if there is a photo or url, or the tweet is a retweet
df.photos = df.photos.apply(lambda x : 1 if x != "[]" else 0)
df.retweet = df.retweet.apply(lambda x : 1 if x == "True" else 0)
df.urls = df.urls.apply(lambda x : 1 if x != "[]" else 0)
# +
# Time columns
df["month"] = df.date.apply(lambda x : int(x[5 : 7]))
df["day"] = df.date.apply(lambda x : int(x[-2:]))
df["hour"] = df.time.apply(lambda x : int(x[:2]))
df["minute"] = df.time.apply(lambda x : int(x[3:5]))
df["second"] = df.time.apply(lambda x : int(x[6:]))
# +
# Interaction columns:
# "mentions_count" : number of mentions in the tweet
# "reply_to_count" : number of people the tweet replies to
# "hashtags_count" : number of hashtags in the tweet
# "interaccion" : sum of the tweet's retweets, replies and likes
df["mentions_count"] = [len(mention.split(",")) if type(mention) == str else 0 for mention in df.mentions]
df["reply_to_count"] = [len(reply.split(",")) if type(reply) == str else 0 for reply in df.reply_to]
df["hashtags_count"] = [len(hashtag.split(",")) if type(hashtag) == str else 0 for hashtag in df.hashtags]
df["interaccion"] = [rt + re + lk for rt, re, lk in zip(df.retweets_count, df.replies_count, df.likes_count)]
# +
# Drop the rows where the date is NaN
indices_todrop = list()
for num, time in enumerate(df.time):
if type(time) != str:
indices_todrop.append(num)
df.drop(indices_todrop, inplace = True)
df = df.reset_index(drop = True)
# +
# Filter by day 24 or 25
FECHA = 25
df = df[df.day == FECHA]
df = df.reset_index(drop = True)
print(df.shape)
# +
# %%time
# Drop the rows with no tweet text
tweet_na = []
for num, tweet in enumerate(df.tweet):
if type(tweet) != str:
tweet_na.append(num)
df.drop(tweet_na, inplace = True)
df = df.reset_index(drop = True)
# +
#df.to_csv("tweets_24_notendencias_preprocesado.csv", sep = ";", index = False)
# -
# # END OF PREPROCESSING
# # LIST OF TRENDING WORDS AND HASHTAGS (FOR FILTERING)
# +
# Load that day's trends
tendencias = []
with open("C:/Users/Daniel/Desktop/csv/dia 25/trends/dia 25 tendencias.txt", "r", encoding = "UTF-8") as f:
tendencias.extend(f.readlines())
tendencias = [t[:-1].strip("\t") for num, t in enumerate(tendencias) if num != len(tendencias) - 1]
df_tendencias = pd.DataFrame(tendencias, columns = ["trends"])
df_tendencias = df_tendencias.trends.unique()
df_tendencias = pd.DataFrame(df_tendencias, columns = ["trends"])
solo_tendencias = list(df_tendencias.trends.unique())
# +
# Lists of trending words and trending hashtags
hashtags_tendencias = [t for t in solo_tendencias if t[0] == "#"]
hashtags_tendencias_sin_numeral = [t.strip("#").lower() for t in solo_tendencias if t[0] == "#"]
palabras_tendencias = [t.strip("\t") for t in solo_tendencias if t[0] != "#"]
palabras_tendencias_lower = [t.strip("\t").lower() for t in solo_tendencias if t[0] != "#"]
print("hashtags_tendencias:", len(hashtags_tendencias))
print("hashtags_tendencias_sin_numeral:", len(hashtags_tendencias_sin_numeral))
print("palabras_tendencias:", len(palabras_tendencias))
print("palabras_tendencias_lower:", len(palabras_tendencias_lower))
# -
# # SPECIAL FUNCTIONS
def f_hashtags_no_tendencias(df_aux):
    """Returns a dictionary with the most repeated non-trending hashtags"""
    # Count the hashtags in the df and keep the most repeated ones
    hashtags_no_tendencias = list()
    for h in df_aux.hashtags:
        for hashtag in h.split(","):
            if hashtag not in hashtags_tendencias and hashtag != "":
                hashtags_no_tendencias.append(hashtag)
    hashtags_no_tendencias = Counter(hashtags_no_tendencias).most_common()
    hashtags_no_tendencias = {h[0] : h[1] for h in hashtags_no_tendencias}
    #print("Number of non-trending hashtags:", len(hashtags_no_tendencias))
    return hashtags_no_tendencias
def elimina_hashtags_tendencias(df_aux, hashtags_tendencias_sin_numeral):
    """Drops the rows that contain trending hashtags"""
    # Collect the indices of the rows that contain trending hashtags
    hashtags_indices = list()
    for num, h in enumerate(df_aux.hashtags):
        for hashtag in h.split(","):
            if hashtag.lower() in hashtags_tendencias_sin_numeral:
                hashtags_indices.append(num)
    #print("Number of tweets with trending hashtags:", len(hashtags_indices))
    df_aux.drop(hashtags_indices, inplace = True)
    df_aux = df_aux.reset_index(drop = True)
    return df_aux
def elimina_palabras_tendencias(df_aux, palabras_tendencias_lower):
    """Drops the rows that contain trending words"""
    # Remove the tweets that contain trending keywords
    palabras_indices = list()
    for num, tweet in enumerate(df_aux.tweet):
        for palabra in palabras_tendencias_lower:
            if tweet.lower().find(palabra) != -1:
                palabras_indices.append(num)
    #print(len(palabras_indices))
    df_aux.drop(palabras_indices, inplace = True)
    df_aux = df_aux.reset_index(drop = True)
    return df_aux
def limpieza(df_aux):
    """Performs all the text cleaning"""
    # Clean the tweets so we can see which non-trending keywords repeat the most
df_aux.tweet = df_aux.tweet.apply(normalize)
df_aux.tweet = df_aux.tweet.apply(deEmojify)
df_aux.tweet = df_aux.tweet.apply(cleanTxt)
df_aux.tweet = df_aux.tweet.apply(replace_punct)
df_aux.tweet = df_aux.tweet.apply(replace_num)
df_aux.tweet = df_aux.tweet.apply(quita_palabras_pequeñas)
df_aux.tweet = df_aux.tweet.apply(tokenizador)
df_aux.tweet = df_aux.tweet.apply(foo)
return df_aux
def elimina_tweets_vacios(df_aux):
    """Drops the tweets left with no text after the cleaning steps"""
    # Drop the rows whose tweet text is ""
tweet_vacios = []
for num, tweet in enumerate(df_aux.tweet):
if tweet == "":
tweet_vacios.append(num)
#print(len(tweet_vacios))
df_aux.drop(tweet_vacios, inplace = True)
df_aux = df_aux.reset_index(drop = True)
return df_aux
def f_palabras_no_tendencias(df_aux):
    """Returns a dictionary with the most repeated non-trending words"""
    # Count the words in the df and keep the most repeated ones
palabras_no_tendencias = list()
for p in df_aux.tweet:
for palabra in p.split(" "):
palabras_no_tendencias.append(palabra)
palabras_no_tendencias = Counter(palabras_no_tendencias).most_common()
palabras_no_tendencias = {h[0] : h[1] for h in palabras_no_tendencias}
#print(len(palabras_no_tendencias))
return palabras_no_tendencias
def get_df_h(df_aux, hashtags_no_tendencias):
    """Returns a hashtags-only dataframe, with a column holding the non-trending hashtag"""
df_h = df_aux[df_aux.hashtags != ""]
df_h = df_h.reset_index(drop = True)
df_h["trends"] = [[h if h in hashtags_no_tendencias else 0 for h in hashtag.split(",")] for hashtag in df_h.hashtags]
df_h.trends = df_h.trends.apply(lambda x : [h for h in x if h != 0])
indices_drop = list()
for num, t in enumerate(df_h.trends):
if t == []:
indices_drop.append(num)
df_h.drop(indices_drop, inplace = True)
df_h = df_h.reset_index(drop = True)
indices_para_clonar = list()
for num, t in enumerate(df_h.trends):
if len(t) > 1:
indices_para_clonar.append(num)
dic_indices = {indice : [len(trends), trends] for indice, trends in zip(indices_para_clonar, df_h.loc[indices_para_clonar].trends)}
df_v = pd.DataFrame(columns = df_h.columns)
for key in dic_indices.keys():
for time in range(dic_indices[key][0]):
df_d = pd.DataFrame(df_h.loc[key]).T
df_d.drop(df_d.columns[-1], axis = 1, inplace = True)
df_d["trends"] = dic_indices[key][1][time]
df_v = pd.concat([df_v, df_d])
df_h.drop(indices_para_clonar, inplace = True)
df_h = df_h.reset_index(drop = True)
df_h.trends = df_h.trends.apply(lambda x : x[0])
df_h = pd.concat([df_h, df_v])
df_h.trends = df_h.trends.apply(lambda x : "#" + x)
#df_h.to_csv("H_6.csv", sep = ";", index = False)
return df_h
def get_df_p(df_aux, palabras_no_tendencias):
    """Returns a words-only dataframe, with a column holding the non-trending word"""
df_p = df_aux[df_aux.hashtags == ""]
df_p = df_p.reset_index(drop = True)
df_p["trends"] = [[p for p in palabra.split(" ") if p in palabras_no_tendencias] for palabra in df_p.tweet]
indices_drop = list()
for num, trend in enumerate(df_p.trends):
if trend == []:
indices_drop.append(num)
df_p.drop(indices_drop, inplace = True)
df_p = df_p.reset_index(drop = True)
indices_multi = []
for num, t in enumerate(df_p.trends):
if len(t) >= 2:
indices_multi.append(num)
df_dup = df_p.iloc[indices_multi, :]
df_dup = df_dup.reset_index(drop = True)
indices_dup = df_dup.index.tolist()
dic_indices = {indice : [len(trends), trends] for indice, trends in zip(indices_dup, df_dup.trends)}
vacio = list()
for key, value in dic_indices.items():
prueba = np.tile([list(df_dup.iloc[key])], (value[0], 1))
vacio.extend(prueba)
df_multi = pd.DataFrame(vacio, columns = df_dup.columns)
palabras = list()
for i in range(len(df_dup.trends)):
words = df_dup.trends[i]
for j in range(len(words)):
word = words[j]
palabras.append(word)
df_multi['trends'] = palabras
df_uni = df_p[~(df_p.index.isin(indices_multi))]
df_uni.trends = df_uni.trends.apply(lambda x : x[0])
df_palabras = pd.concat([df_multi, df_uni])
return df_palabras
def get_df_no_trends(df_h, df_p, start, num):
    """Returns the concatenation of the other two dataframes, plus a list pairing each
    non-trending word or hashtag with the hour in which it repeats the most"""
df_concat = pd.concat([df_h, df_p])
df_summary = pd.DataFrame(df_concat.trends.value_counts()).reset_index()
df_summary.columns = ["trend", "total_tweet"]
df_summary["total_interaction"] = [df_concat[df_concat.trends == trend].interaccion.sum() for trend in df_summary.trend]
df_summary = df_summary.sort_values("total_interaction", ascending = False).iloc[: num, :]
no_trends = df_summary.trend.tolist()
return df_concat, [[nt, start] for nt in no_trends]
# # END OF SPECIAL FUNCTIONS
# # MAIN LOOP
# +
# %%time
to_df = list() # List storing each non-trending word or hashtag at the hour in which it repeats the most
df_target = pd.DataFrame(columns = df.columns) # Dataframe with all the word and hashtag dataframes concatenated
starts = [i for i in range(24)]
for start in starts:
    num = df_summary[df_summary.hour == start].shape[0] # Number of non-trends to collect for this hour
    df_aux = df[df.hour == start] # Filter by hour
    df_aux = df_aux.reset_index(drop = True)
    # Cleaning
hashtags_no_tendencias = f_hashtags_no_tendencias(df_aux)
df_aux = elimina_hashtags_tendencias(df_aux, hashtags_tendencias_sin_numeral)
df_aux = elimina_palabras_tendencias(df_aux, palabras_tendencias_lower)
df_aux = limpieza(df_aux)
df_aux = elimina_tweets_vacios(df_aux)
palabras_no_tendencias = f_palabras_no_tendencias(df_aux)
df_aux.hashtags = df_aux.hashtags.apply(str)
df_h = get_df_h(df_aux, hashtags_no_tendencias)
df_p = get_df_p(df_aux, palabras_no_tendencias)
df_concat, no_trends = get_df_no_trends(df_h, df_p, start, num)
df_target = pd.concat([df_target, df_concat])
to_df.extend(no_trends)
print(start)
# +
#df_target.to_csv("tweets_24_notedencias_preprocesado_labels.csv", sep = ";", index = False)
# -
df_st = pd.DataFrame(to_df, columns = ["trend", "start_lifetime"])
#df_st.to_csv("tweets_24_start_lifetime_notendencias.csv", sep = ";", index = False)
| Jupyter Notebooks/Preprop and engineering feautures/.ipynb_checkpoints/notrends_preprop-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
# # Load data
apple = np.load("./dataset/apple.npy")
banana = np.load("./dataset/banana.npy")
blackberry = np.load("./dataset/blackberry.npy")
blueberry = np.load("./dataset/blueberry.npy")
grapes = np.load("./dataset/grapes.npy")
pear = np.load("./dataset/pear.npy")
pineapple = np.load("./dataset/pineapple.npy")
strawberry = np.load("./dataset/strawberry.npy")
watermelon = np.load("./dataset/watermelon.npy")
# # Show image from dataset
# Show image
temp = watermelon[20].reshape((28, 28))
plt.imshow(temp, interpolation='nearest', cmap='gray')
plt.axis('off')
plt.show()
# # Extract data into Training dataset
appleTrain = apple[0:5000,:]
bananaTrain = banana[0:5000,:]
blackberryTrain = blackberry[0:5000,:]
blueberryTrain = blueberry[0:5000,:]
grapesTrain = grapes[0:5000,:]
pearTrain = pear[0:5000,:]
pineappleTrain = pineapple[0:5000,:]
strawberryTrain = strawberry[0:5000,:]
watermelonTrain = watermelon[0:5000,:]
appleTrainDF=pd.DataFrame(appleTrain,dtype='int')
bananaTrainDF=pd.DataFrame(bananaTrain,dtype='int')
blackberryTrainDF=pd.DataFrame(blackberryTrain,dtype='int')
blueberryTrainDF=pd.DataFrame(blueberryTrain,dtype='int')
grapesTrainDF=pd.DataFrame(grapesTrain,dtype='int')
pearTrainDF=pd.DataFrame(pearTrain,dtype='int')
pineappleTrainDF=pd.DataFrame(pineappleTrain,dtype='int')
strawberryTrainDF=pd.DataFrame(strawberryTrain,dtype='int')
watermelonTrainDF=pd.DataFrame(watermelonTrain,dtype='int')
appleTrainDF['label']=0
bananaTrainDF['label']=1
blackberryTrainDF['label']=2
blueberryTrainDF['label']=3
grapesTrainDF['label']=4
pearTrainDF['label']=5
pineappleTrainDF['label']=6
strawberryTrainDF['label']=7
watermelonTrainDF['label']=8
fruitsTrainDF = pd.concat([appleTrainDF, bananaTrainDF, blackberryTrainDF, blueberryTrainDF, grapesTrainDF, pearTrainDF, pineappleTrainDF, strawberryTrainDF, watermelonTrainDF])
fruitsTrainDF.shape
fruitsTrainDF.to_csv("9fruitsTrainDF.csv", sep=',',header=True,index=False)
# # Extract data into Testing dataset
# +
appleTest = apple[5001:6001,:]
bananaTest = banana[5001:6001,:]
blackberryTest = blackberry[5001:6001,:]
blueberryTest = blueberry[5001:6001,:]
grapesTest = grapes[5001:6001,:]
pearTest = pear[5001:6001,:]
pineappleTest = pineapple[5001:6001,:]
strawberryTest = strawberry[5001:6001,:]
watermelonTest = watermelon[5001:6001,:]
appleTestDF=pd.DataFrame(appleTest,dtype='int')
bananaTestDF=pd.DataFrame(bananaTest,dtype='int')
blackberryTestDF=pd.DataFrame(blackberryTest,dtype='int')
blueberryTestDF=pd.DataFrame(blueberryTest,dtype='int')
grapesTestDF=pd.DataFrame(grapesTest,dtype='int')
pearTestDF=pd.DataFrame(pearTest,dtype='int')
pineappleTestDF=pd.DataFrame(pineappleTest,dtype='int')
strawberryTestDF=pd.DataFrame(strawberryTest,dtype='int')
watermelonTestDF=pd.DataFrame(watermelonTest,dtype='int')
appleTestDF['label']=0
bananaTestDF['label']=1
blackberryTestDF['label']=2
blueberryTestDF['label']=3
grapesTestDF['label']=4
pearTestDF['label']=5
pineappleTestDF['label']=6
strawberryTestDF['label']=7
watermelonTestDF['label']=8
fruitsTestDF = pd.concat([appleTestDF, bananaTestDF, blackberryTestDF, blueberryTestDF, grapesTestDF, pearTestDF, pineappleTestDF, strawberryTestDF, watermelonTestDF])
# -
fruitsTestDF.shape
fruitsTestDF.to_csv("9fruitsTestDF.csv", sep=',',header=True,index=False)
| npy2csv.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import math
import pandas_datareader as web
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from keras.layers import *
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from keras.callbacks import EarlyStopping
plt.style.use('fivethirtyeight')
# -
# Get the TSLA stock quote from January 2015 to early January 2021
# Data pulled from Yahoo Finance
df = web.DataReader('TSLA', data_source = 'yahoo', start = '2015-01-01', end = '2021-01-05' )
print('Number of rows and columns: ', df.shape)
print(df.head())
print("checking if any null values are present\n", df.isna().sum())
plt.figure(figsize = (12,6))
plt.plot(df["Open"])
plt.plot(df["High"])
plt.plot(df["Low"])
plt.plot(df["Close"])
plt.title('Tesla stock price history')
plt.ylabel('Price (USD)')
plt.xlabel('Days')
plt.legend(['Open','High','Low','Close'], loc='upper left')
plt.show()
plt.figure(figsize = (12,6))
plt.plot(df["Volume"])
plt.title('Tesla stock volume history')
plt.ylabel('Volume')
plt.xlabel('Days')
plt.show()
# +
# Create a dataframe with only the Close Stock Price Column
data_target = df.filter(['Close'])
# Convert the dataframe to a numpy array to train the LSTM model
target = data_target.values
# Splitting the dataset into training and test
# Target Variable: Close stock price value
training_data_len = math.ceil(len(target)* 0.75) # training set has 75% of the data
training_data_len
# Normalizing data before model fitting using MinMaxScaler
# Feature Scaling
sc = MinMaxScaler(feature_range=(0,1))
training_scaled_data = sc.fit_transform(target)
training_scaled_data
# +
# Create a training dataset: each sample contains the previous 180 days of closing prices, used to estimate the 181st value.
train_data = training_scaled_data[0:training_data_len , : ]
X_train = []
y_train = []
for i in range(180, len(train_data)):
X_train.append(train_data[i-180:i, 0])
y_train.append(train_data[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train) # converting into numpy sequences to train the LSTM model
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
print('Number of rows and columns: ', X_train.shape) #(854 values, 180 time-steps, 1 output)
# -
# !pip install -U protobuf==3.8.0
# +
# We stack LSTM layers and interleave Dropout layers to prevent overfitting.
# We build an LSTM model with 4 hidden LSTM layers of 50 units each, where units is the dimensionality of the output space.
# The first three layers set return_sequences=True so they emit the full output sequence for the next layer; the last LSTM layer returns only its final output.
# The first layer also receives input_shape, the shape of one training sample.
# When defining the Dropout layers, we specify 0.2, meaning that 20% of the units will be dropped.
# Thereafter, we add a Dense layer with a single output unit.
# Finally, we compile the model with the popular adam optimizer and set the loss to mean_squared_error.
model = Sequential()
#Adding the first LSTM layer and some Dropout regularisation
model.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
model.add(Dropout(0.2))
# Adding a second LSTM layer and some Dropout regularisation
model.add(LSTM(units = 50, return_sequences = True))
model.add(Dropout(0.2))
# Adding a third LSTM layer and some Dropout regularisation
model.add(LSTM(units = 50, return_sequences = True))
model.add(Dropout(0.2))
# Adding a fourth LSTM layer and some Dropout regularisation
model.add(LSTM(units = 50))
model.add(Dropout(0.2))
# Adding the output layer
model.add(Dense(units = 1))
# Compiling the RNN
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Fitting the RNN to the Training set
model.fit(X_train, y_train, epochs = 100, batch_size = 32)
# +
# Getting the predicted stock price
test_data = training_scaled_data[training_data_len - 180: , : ]
#Create the x_test and y_test data sets
X_test = []
y_test = target[training_data_len : , : ]
for i in range(180,len(test_data)):
X_test.append(test_data[i-180:i,0])
# Convert x_test to a numpy array
X_test = np.array(X_test)
#Reshape the data into the shape accepted by the LSTM
X_test = np.reshape(X_test, (X_test.shape[0],X_test.shape[1],1))
print('Number of rows and columns: ', X_test.shape)
# -
# Making predictions using the test dataset
predicted_stock_price = model.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
# Visualising the results
train = data_target[:training_data_len]
valid = data_target[training_data_len:].copy()  # .copy() avoids a SettingWithCopyWarning on the next line
valid['Predictions'] = predicted_stock_price
plt.figure(figsize=(10,5))
plt.title('Model')
plt.xlabel('Date', fontsize=8)
plt.ylabel('Close Price USD ($)', fontsize=12)
plt.plot(train['Close'])
plt.plot(valid[['Close', 'Predictions']])
plt.legend(['Train', 'Val', 'Predictions'], loc='lower right')
plt.show()
valid
| Stock Price Prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: clouds_kernel
# language: python
# name: clouds_kernel
# ---
# ## Random Forest
#
# **For Table 3 of the paper**
#
# Cell-based QUBICC R2B5 model
#
# n_estimators = 1 takes 6h 36s
# +
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from tensorflow.keras import backend as K
from tensorflow.keras.regularizers import l1_l2
import tensorflow as tf
import tensorflow.nn as nn
import gc
import numpy as np
import pandas as pd
import importlib
import os
import sys
import joblib
#Import sklearn before tensorflow (static Thread-local storage)
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
from tensorflow.keras.models import load_model
path = '/home/b/b309170'
path_data = path + '/my_work/icon-ml_data/cloud_cover_parameterization/grid_cell_based_QUBICC_R02B05/based_on_var_interpolated_data'
import matplotlib.pyplot as plt
import time
NUM = 1
# -
# Prevents crashes of the code
gpus = tf.config.list_physical_devices('GPU')
tf.config.set_visible_devices(gpus[0], 'GPU')
# Allow the growth of memory Tensorflow allocates (limits memory usage overall)
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
scaler = StandardScaler()
# +
# # Data is not yet normalized
# input_data = np.load(path_data + '/cloud_cover_input_qubicc.npy', mmap_mode='r')
# output_data = np.load(path_data + '/cloud_cover_output_qubicc.npy', mmap_mode='r')
# +
# (samples_total, no_of_features) = input_data.shape
# assert no_of_features < samples_total # Making sure there's no mixup
# +
# # Split into training and validation (need split 2)
# training_folds = []
# validation_folds = []
# two_week_incr = samples_total//6
# for i in range(3):
# # Note that this is a temporal split since time was the first dimension in the original tensor
# first_incr = np.arange(samples_total//6*i, samples_total//6*(i+1))
# second_incr = np.arange(samples_total//6*(i+3), samples_total//6*(i+4))
# validation_folds.append(np.append(first_incr, second_incr))
# training_folds.append(np.arange(samples_total))
# training_folds[i] = np.delete(training_folds[i], validation_folds[i])
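# The commented-out split above builds three train/validation folds from six equal time blocks. On a toy example with 12 samples (blocks of 2) the fold indices come out as sketched below; this reproduces the logic with plain lists for illustration.

```python
# Reproduce the 6-block temporal fold logic for samples_total = 12.
samples_total = 12
block = samples_total // 6
validation_folds, training_folds = [], []
for i in range(3):
    # Each validation fold pairs block i with block i+3 (a temporal split).
    first = list(range(block * i, block * (i + 1)))
    second = list(range(block * (i + 3), block * (i + 4)))
    valid = first + second
    validation_folds.append(valid)
    training_folds.append([s for s in range(samples_total) if s not in valid])
print(validation_folds[0])  # [0, 1, 6, 7]
print(training_folds[0])    # [2, 3, 4, 5, 8, 9, 10, 11]
```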
# +
# # Need the second split
# #Standardize according to the fold
# scaler.fit(input_data[training_folds[1]])
# #Load the data for the respective fold and convert it to tf data
# input_train = scaler.transform(input_data[training_folds[1]])
# input_valid = scaler.transform(input_data[validation_folds[1]])
# output_train = output_data[training_folds[1]]
# output_valid = output_data[validation_folds[1]]
# np.save('RFs/cell_based_R2B5_input_train.npy', input_train)
# np.save('RFs/cell_based_R2B5_input_valid.npy', input_valid)
# np.save('RFs/cell_based_R2B5_output_train.npy', output_train)
# np.save('RFs/cell_based_R2B5_output_valid.npy', output_valid)
# -
input_train = np.load('/home/b/b309170/workspace_icon-ml/iconml_clc/additional_content/baselines/RFs/cell_based_R2B5_input_train.npy')
input_valid = np.load('/home/b/b309170/workspace_icon-ml/iconml_clc/additional_content/baselines/RFs/cell_based_R2B5_input_valid.npy')
output_train = np.load('/home/b/b309170/workspace_icon-ml/iconml_clc/additional_content/baselines/RFs/cell_based_R2B5_output_train.npy')
output_valid = np.load('/home/b/b309170/workspace_icon-ml/iconml_clc/additional_content/baselines/RFs/cell_based_R2B5_output_valid.npy')
# ### Random Forest
# +
# Instantiate the model with 5 decision trees
rf = RandomForestRegressor(n_estimators = 5, random_state = 42)
# Train the model on training data
rf.fit(input_train, output_train)
# -
joblib.dump(rf, "/home/b/b309170/scratch/cell_based_R2B5_uncompressed.joblib", compress=0)
# +
# model_fold_3 is implemented in ICON-A
batch_size = 2**20
for i in range(1 + input_valid.shape[0]//batch_size):
if i == 0:
clc_predictions = rf.predict(input_valid[i*batch_size:(i+1)*batch_size])
else:
clc_predictions = np.concatenate((clc_predictions, rf.predict(input_valid[i*batch_size:(i+1)*batch_size])), axis=0)
K.clear_session()
gc.collect()
# +
mse_rf = mean_squared_error(output_valid, clc_predictions)
with open('/home/b/b309170/workspace_icon-ml/iconml_clc/additional_content/baselines/RFs/RF_results.txt', 'a') as file:
file.write('The MSE on the validation set of the cell-based R2B5 RF is %.2f.\n'%mse_rf)
# -
# ### Prediction timing RF vs NN
input_valid = np.load('/home/b/b309170/workspace_icon-ml/iconml_clc/additional_content/baselines/RFs/cell_based_R2B5_input_valid.npy')
rf = joblib.load("/home/b/b309170/scratch/cell_based_R2B5_uncompressed.joblib")
# +
custom_objects = {}
custom_objects['leaky_relu'] = nn.leaky_relu
fold_2 = 'cross_validation_cell_based_fold_2.h5'
path_model = '/home/b/b309170/workspace_icon-ml/cloud_cover_parameterization/grid_cell_based_QUBICC_R02B05/saved_models/cloud_cover_R2B5_QUBICC'
nn = load_model(os.path.join(path_model, fold_2), custom_objects)  # note: this rebinds `nn`, shadowing the tensorflow.nn import above
# +
# %%time
batch_size = 2**20
for i in range(1 + input_valid.shape[0]//batch_size):
if i == 0:
clc_predictions = rf.predict(input_valid[i*batch_size:(i+1)*batch_size])
else:
clc_predictions = np.concatenate((clc_predictions, rf.predict(input_valid[i*batch_size:(i+1)*batch_size])), axis=0)
K.clear_session()
gc.collect()
# +
# %%time
batch_size = 2**20
for i in range(1 + input_valid.shape[0]//batch_size):
if i == 0:
clc_predictions = nn.predict(input_valid[i*batch_size:(i+1)*batch_size])
else:
clc_predictions = np.concatenate((clc_predictions, nn.predict(input_valid[i*batch_size:(i+1)*batch_size])), axis=0)
K.clear_session()
gc.collect()
# -
# Single prediction
# %%time
rf.predict(input_valid[:1])
# %%time
nn.predict(input_valid[:1])
| additional_content/baselines/random_forest_cell_based-R2B5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Solutions
# There will be many ways to solve these exercises which weren't covered in this chapter; however, the solutions below use only what has been introduced in chapter 2.
#
# ## About the Data
# In this notebook, we will be working with Earthquake data from September 18, 2018 - October 13, 2018 (obtained from the US Geological Survey (USGS) using the [USGS API](https://earthquake.usgs.gov/fdsnws/event/1/))
#
# ## Setup
# +
import pandas as pd
df = pd.read_csv('../../ch_02/data/parsed.csv')
# -
# ## Exercise 1
df[
(df.parsed_place.str.endswith('Japan')) & (df.magType == 'mb')
].mag.quantile(0.95)
# ## Exercise 2
f"""{df[df.parsed_place.str.endswith('Indonesia')].tsunami.value_counts(normalize=True).loc[1,]:.2%}"""
# ## Exercise 3
df[df.parsed_place.str.endswith('Nevada')].describe()
# ## Exercise 4
# Note we need to use `^Mexico` to get Mexico, but not New Mexico.
df['ring_of_fire'] = df.parsed_place.str.contains(r'|'.join([
'Bolivia', 'Chile', 'Ecuador', 'Peru', 'Costa Rica',
'Guatemala', '^Mexico', 'Japan', 'Philippines',
'Indonesia', 'New Zealand', 'Antarctic', 'Canada',
'Fiji', 'Alaska', 'Washington', 'California', 'Russia',
'Taiwan', 'Tonga', 'Kermadec Islands'
]))
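# The effect of the `^` anchor can be checked with the standard `re` module (pandas `str.contains` uses the same `re.search` semantics):

```python
import re

# Without the anchor, 'Mexico' also matches inside 'New Mexico'.
print(bool(re.search('Mexico', 'New Mexico')))   # True
# With '^Mexico', only strings that start with 'Mexico' match.
print(bool(re.search('^Mexico', 'New Mexico')))  # False
print(bool(re.search('^Mexico', 'Mexico')))      # True
```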
# ## Exercise 5
df.ring_of_fire.value_counts()
# ## Exercise 6
df.loc[df.ring_of_fire, 'tsunami'].sum()
| solutions/ch_02/solutions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: xpython
# language: python
# name: xpython
# ---
# + [markdown] deletable=false editable=false
# Copyright 2020 <NAME> and made available under [CC BY-SA](https://creativecommons.org/licenses/by-sa/4.0) for text and [Apache-2.0](http://www.apache.org/licenses/LICENSE-2.0) for code.
# + [markdown] deletable=false editable=false
# ## Background
#
# We will be working with data on graduate school admissions. The data has four variables:
#
# - `admit`: the admittance status (0=not admitted, 1=admitted)
# - `gre`: the student's GRE score
# - `gpa`: the student's GPA
# - `rank`: rank of the student's undergraduate institution (1=highest to 4=lowest prestige)
#
# In this session, you will solve several problems using this data.
# + [markdown] deletable=false editable=false
# ### Read CSV into a dataframe
# + [markdown] deletable=false editable=false
# Import the `pandas` library, which lets us work with dataframes.
# -
# + [markdown] deletable=false editable=false
# Load a dataframe with the data in `datasets/binary.csv` and display it
# -
# + [markdown] deletable=false editable=false
# ### Select rows from a dataframe by position
# + [markdown] deletable=false editable=false
# Show the 3rd row to the final row (total 398 rows).
# -
# + [markdown] deletable=false editable=false
# ### Select columns from a dataframe by name
# + [markdown] deletable=false editable=false
# Show the last two columns of the data.
# -
# + [markdown] deletable=false editable=false
# ### Select rows from a dataframe by value
# + [markdown] deletable=false editable=false
# Show the rows where `gpa` is less than 3.
# -
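# For reference, one possible set of solutions to the prompts above is sketched below. Since `datasets/binary.csv` is not available here, a small inline dataframe with the same four columns stands in for the real data; with the real file you would start from `pd.read_csv('datasets/binary.csv')`.

```python
import pandas as pd

# Stand-in for: df = pd.read_csv('datasets/binary.csv')
df = pd.DataFrame({
    'admit': [0, 1, 1, 0],
    'gre': [380, 660, 800, 640],
    'gpa': [3.61, 3.67, 4.00, 2.93],
    'rank': [3, 3, 1, 4],
})
print(df.iloc[2:])          # 3rd row to the final row, by position
print(df[['gpa', 'rank']])  # the last two columns, by name
print(df[df['gpa'] < 3])    # rows where gpa is less than 3
```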
# <!-- -->
| E1/ps-far-gl.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''petrophysics'': conda)'
# language: python
# name: python397jvsc74a57bd06780318cfcf01e817284a341ca3c912892e5753d755735c561d08b209c7246ed
# ---
# ## Install Dependencies
#installing packages
import lasio
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='ticks')
import pandas as pd
import welly
from welly import Well
from welly import Curve
# + [markdown] tags=[]
# ## Functions
# +
# This is a function to show the well header information inside a las file.
# the function uses lasio
def las_info(df):
'''
quickly show the relevant well header information inside the las file
note: `df` is the LASFile object returned by lasio.read
'''
import lasio
for item in df.well:
print(f"{item.descr} ({item.mnemonic}): {item.value}")
# +
# This is the function to plot triple combo data from dataframe created from las file.
# Please make sure the las file has been imported into dataframe by using lasio and pandas
def tc_plot(well_name, well_df, curve_names, top_depth=0, bot_depth=10000, savepdf=False,
plot_w=10,plot_h=10, title_size=12, title_height=1.05, line_width=1,plot_tight=False,
gr_color='green', gr_trackname='GR', gr_left=0, gr_right=200, gr_cutoff=50, gr_shale='lime', gr_sand='gold', gr_base=0, gr_div=5,
res_color='purple', res_trackname='RESISTIVITY', res_left=0.2, res_right=20000, res_cutoff=50, res_shading='lightcoral',
den_color='red', den_trackname='DENSITY', den_left=1.95, den_right=2.95,
neu_color='blue', neu_trackname='NPHI', neu_left=0.45, neu_right=-0.15,
den_neu_div=5, dn_xover='yellow', dn_sep='lightgray'):
'''
- The function plot triple combo consisting three well logs data of gamma ray, resistivity, and density-neutron.
- One has to define a list of column name of each curves, based on the dataframe of the well.
To make the plot correctly, one must follow the below orders:
1. the sequence of triple combo must be create into a list, with the exact order as follows:
curve_list = ['Gamma Ray', 'Resistivity', 'Density', 'Neutron']
2. the dataframe must contain a depth column name 'DEPTH'
'''
#install dependencies
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
fig, ax = plt.subplots(figsize=(plot_w,plot_h))
fig.suptitle(f"Well: {well_name}\n( Interval: {top_depth} - {bot_depth} )",
size=title_size, y=title_height)
gr_log=well_df[curve_names[0]]
res_log=well_df[curve_names[1]]
den_log=well_df[curve_names[2]]
neu_log=well_df[curve_names[3]]
#Set up the plot axes
ax1 = plt.subplot2grid((1,3), (0,0), rowspan=1, colspan = 1)
ax2 = plt.subplot2grid((1,3), (0,1), rowspan=1, colspan = 1)
ax3 = plt.subplot2grid((1,3), (0,2), rowspan=1, colspan = 1)
ax4 = ax3.twiny() #Twins the y-axis for the density track with the neutron track
# As our curve scales will be detached from the top of the track,
# this code adds the top border back in without dealing with splines
ax7 = ax1.twiny()
ax7.xaxis.set_visible(False)
ax8 = ax2.twiny()
ax8.xaxis.set_visible(False)
ax9 = ax3.twiny()
ax9.xaxis.set_visible(False)
# Gamma Ray track
ax1.plot(gr_log, "DEPTH", data = well_df, color = gr_color, lw=line_width)
ax1.set_xlabel(gr_trackname)
ax1.set_xlim(gr_left, gr_right)
ax1.set_ylim(bot_depth, top_depth)
ax1.xaxis.label.set_color(gr_color)
ax1.tick_params(axis='x', colors=gr_color)
ax1.spines["top"].set_edgecolor(gr_color)
ax1.spines["top"].set_position(("axes", 1.02))
ax1.set_xticks(list(np.linspace(gr_left, gr_right, num = gr_div)))
ax1.grid(which='major', color='lightgrey', linestyle='-')
ax1.xaxis.set_ticks_position("top")
ax1.xaxis.set_label_position("top")
##area-fill sand and shale from gr
ax1.fill_betweenx(well_df['DEPTH'], gr_base, gr_log, where=(gr_cutoff >= gr_log), interpolate=True, color = gr_sand, linewidth=0)
ax1.fill_betweenx(well_df['DEPTH'], gr_base, gr_log, where=(gr_cutoff <= gr_log), interpolate=True, color = gr_shale, linewidth=0)
# RES track
ax2.plot(res_log, "DEPTH", data = well_df, color = res_color, lw=line_width)
ax2.set_xlabel(res_trackname)
ax2.set_xlim(res_left, res_right)
ax2.set_ylim(bot_depth, top_depth)
ax2.semilogx()
ax2.minorticks_on()
ax2.xaxis.grid(which='minor', linestyle=':', linewidth='0.5', color='gray')
ax2.xaxis.label.set_color(res_color)
ax2.tick_params(axis='x', colors=res_color)
ax2.spines["top"].set_edgecolor(res_color)
ax2.spines["top"].set_position(("axes", 1.02))
ax2.grid(which='major', color='lightgrey', linestyle='-')
ax2.xaxis.set_ticks_position("top")
ax2.xaxis.set_label_position("top")
ax2.fill_betweenx(well_df['DEPTH'], res_cutoff, res_log, where=(res_log >= res_cutoff), interpolate=True, color = res_shading, linewidth=0)
# Density track
ax3.plot(den_log, "DEPTH", data = well_df, color = den_color, lw=line_width)
ax3.set_xlabel(den_trackname)
ax3.set_xlim(den_left, den_right)
ax3.set_ylim(bot_depth, top_depth)
ax3.xaxis.label.set_color(den_color)
ax3.tick_params(axis='x', colors=den_color)
ax3.spines["top"].set_edgecolor(den_color)
ax3.spines["top"].set_position(("axes", 1.02))
ax3.set_xticks(list(np.linspace(den_left, den_right, num = den_neu_div)))
ax3.grid(which='major', color='lightgrey', linestyle='-')
ax3.xaxis.set_ticks_position("top")
ax3.xaxis.set_label_position("top")
# Neutron track placed on top of the density track
ax4.plot(neu_log, "DEPTH", data = well_df, color = neu_color, lw=line_width)
ax4.set_xlabel(neu_trackname)
ax4.xaxis.label.set_color(neu_color)
ax4.set_xlim(neu_left, neu_right)
ax4.set_ylim(bot_depth, top_depth)
ax4.tick_params(axis='x', colors=neu_color)
ax4.spines["top"].set_position(("axes", 1.08))
ax4.spines["top"].set_visible(True)
ax4.spines["top"].set_edgecolor(neu_color)
ax4.set_xticks(list(np.linspace(neu_left, neu_right, num = den_neu_div)))
# Shading between density and neutron:
# the neutron curve is linearly remapped from the neutron axis limits (z)
# onto the density axis limits (x) so the two curves can be shaded together on ax3
x1=den_log
x2=neu_log
x = np.array(ax3.get_xlim())
z = np.array(ax4.get_xlim())
nz=((x2-np.max(z))/(np.min(z)-np.max(z)))*(np.max(x)-np.min(x))+np.min(x)
ax3.fill_betweenx(well_df['DEPTH'], x1, nz, where=x1>=nz, interpolate=True, color=dn_sep, linewidth=0)
ax3.fill_betweenx(well_df['DEPTH'], x1, nz, where=x1<=nz, interpolate=True, color=dn_xover, linewidth=0)
#end
if plot_tight is True:
plt.tight_layout()
if savepdf is True:
plt.savefig(working_dir+(f"{well_name}_triple_combo_plot.pdf"), dpi=150, bbox_inches='tight')  # note: relies on the global `working_dir`
plt.show()
# -
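# The density-neutron crossover shading in `tc_plot` works by linearly remapping the neutron curve from the neutron axis limits onto the density axis limits. The sketch below isolates that remap with the default axis limits from the function; the numbers are the defaults above, not field data.

```python
# Remap a neutron value from the neutron axis limits (z) onto the
# density axis limits (x), using the same formula as `nz` in tc_plot.
def remap(x2, z_limits, x_limits):
    z_min, z_max = min(z_limits), max(z_limits)
    x_min, x_max = min(x_limits), max(x_limits)
    return (x2 - z_max) / (z_min - z_max) * (x_max - x_min) + x_min

neu_limits = (0.45, -0.15)  # neu_left, neu_right (reversed axis)
den_limits = (1.95, 2.95)   # den_left, den_right
# The left edge of the neutron axis lands on the left edge of the density axis,
# and likewise for the right edges (up to floating-point rounding).
print(remap(0.45, neu_limits, den_limits))   # ~1.95
print(remap(-0.15, neu_limits, den_limits))  # ~2.95
```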
# ## Importing Data
# +
# setup the working directory, important when saving the file to pdf.
working_dir = 'logs/'
# load the las using welly
w=Well.from_las(working_dir+'Barossa-2.las')
# -
# ## EDA
w
w.header
#extracting well name for plot later on.
well_name = w.header.name
well_name
w.plot()
# From the above plot, only a few logs are available across the full interval; most data sit primarily across depths 4000-4300. We can then adjust the depth range to see this better.
w.plot(extents=(4000,4300))
# +
#we can use welly to convert the las file to a dataframe, but the depth values end up in the index rather than a DEPTH column.
logs=w.df()
logs
# -
# ## Data Cleaning and Prep
#resetting the index
logs.reset_index(inplace=True)
logs.head()
# +
#changing the Depth into DEPTH
logs = logs.rename(columns = {"Depth":"DEPTH"})
logs.head()
# -
# ## Plotting Triple Combo Data
#setting up the list of triple combo curves ['Gamma Ray', 'Resistivity', 'Density', 'Neutron'] --> has to be in this exact order
curve_list = ['GRCFM', 'RPCHM', 'BDCFM', 'NPLFM']
#plotting the data and customizing the plot
tc_plot('Barossa-2', logs, curve_list, top_depth=4175, bot_depth=4275,
gr_right=150, gr_div=6,gr_cutoff=30,gr_base=30,gr_shale='green',
den_left=1.85, den_right=2.85,neu_left=45,
neu_right=-15,dn_sep='darkgrey',
res_left=1, res_right=1000,
savepdf=True)
| petroplot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="../../../images/qiskit_header.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" align="middle">
# ## _*Hamiltonian and Gate Characterization*_
#
# * **Last Updated:** March 8, 2019
# * **Requires:** qiskit-terra 0.8, qiskit-ignis 0.1.1, qiskit-aer 0.2
# This notebook gives examples to demonstrate how to use the ``characterization.hamiltonian`` and ``characterization.gates`` modules in ``qiskit-ignis``. For a theory background see the [Ignis Community Notebooks]( https://github.com/Qiskit/qiskit-tutorials/tree/master/community/ignis).
# +
import numpy as np
import matplotlib.pyplot as plt
import qiskit
from qiskit.providers.aer.noise.errors.standard_errors import coherent_unitary_error
from qiskit.providers.aer.noise import NoiseModel
from qiskit.ignis.characterization.hamiltonian import ZZFitter, zz_circuits
from qiskit.ignis.characterization.gates import (AmpCalFitter, ampcal_1Q_circuits,
AngleCalFitter, anglecal_1Q_circuits,
AmpCalCXFitter, ampcal_cx_circuits,
AngleCalCXFitter, anglecal_cx_circuits)
# -
# # Measuring ZZ
#
# The ``characterization.hamiltonian.zz_circuits`` module builds the circuits to perform an experiment to measure ZZ between a pair of qubits. ZZ here is defined as the energy shift on the $|11\rangle$ state,
#
# $$H=\omega_0 (1-\sigma_{Z,0})/2 +\omega_1 (1-\sigma_{Z,1})/2 + \xi |11\rangle\langle 11|$$
#
# The experiment to measure $\xi$ is to perform a Ramsey experiment on Q0 (H-t-H) and repeat the Ramsey with Q1 in the excited state. The difference in frequency between these experiments is the rate $\xi$
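# As a numerical illustration (the frequency values below are assumed for the example, not taken from a real device): if the Ramsey fringe oscillates at 1.00 MHz with the spectator in $|0\rangle$ and at 1.02 MHz with the spectator in $|1\rangle$, the extracted ZZ rate is the difference of the two frequencies.

```python
# Ramsey oscillation frequencies from the two experiments (illustrative, in MHz)
f_ground = 1.00   # spectator qubit in |0>
f_excited = 1.02  # spectator qubit in |1>

# The ZZ rate is the frequency shift between the two experiments.
zz_rate_mhz = f_excited - f_ground
print(f"ZZ rate: {zz_rate_mhz * 1e3:.0f} kHz")  # ZZ rate: 20 kHz
```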
# +
# ZZ rates are typically ~ 100kHz so we want Ramsey oscillations around 1MHz
# Sweep the number of identity gates from 0 to 145 in steps of 5
num_of_gates = np.arange(0,150,5)
gate_time = 0.1
# Select the qubits whose ZZ will be measured
qubits = [0]
spectators = [1]
# Generate experiments
circs, xdata, osc_freq = zz_circuits(num_of_gates, gate_time, qubits, spectators, nosc=2)
# -
# One of the features of the fitters is that we can split the circuits into multiple jobs and then give the results to the fitter as a list. This is demonstrated below.
# +
# Set the simulator with ZZ
zz_unitary = np.eye(4,dtype=complex)
zz_unitary[3,3] = np.exp(1j*2*np.pi*0.02*gate_time)
error = coherent_unitary_error(zz_unitary)
noise_model = NoiseModel()
noise_model.add_nonlocal_quantum_error(error, 'id', [0], [0,1])
# Run the simulator
backend = qiskit.Aer.get_backend('qasm_simulator')
shots = 500
# For demonstration purposes split the execution into two jobs
print("Running the first 20 circuits")
backend_result1 = qiskit.execute(circs[0:20], backend,
shots=shots, noise_model=noise_model).result()
print("Running the rest of the circuits")
backend_result2 = qiskit.execute(circs[20:], backend,
shots=shots, noise_model=noise_model).result()
# +
# %matplotlib inline
# Fit the data to an oscillation
plt.figure(figsize=(10, 6))
initial_a = 1
initial_c = 0
initial_f = osc_freq
initial_phi = -np.pi/20
# Instantiate the fitter
# pass the 2 results in as a list of results
fit = ZZFitter([backend_result1, backend_result2], xdata, qubits, spectators,
fit_p0=[initial_a, initial_f, initial_phi, initial_c],
fit_bounds=([-0.5, 0, -np.pi, -0.5],
[1.5, 2*osc_freq, np.pi, 1.5]))
fit.plot_ZZ(0, ax=plt.gca())
print("ZZ Rate: %f kHz"%(fit.ZZ_rate()[0]*1e3))
plt.show()
# -
# ## Amplitude Error Characterization for Single Qubit Gates
# Measure the amplitude error in the single qubit gates. Here this measures the error in the $\pi/2$ pulse. Note that we can run multiple amplitude calibrations in parallel. Here we measure on qubits 2 and 4.
qubits = [4,2]
circs, xdata = ampcal_1Q_circuits(10, qubits)
# This shows the sequence of the calibration, which is repeated application of Y90 (U2[0,0]). Note that the measurements are mapped to a minimal number of classical registers in order of the qubit list.
print(circs[2])
# +
# Set the simulator
# Add a rotation error
err_unitary = np.zeros([2,2],dtype=complex)
angle_err = 0.1
for i in range(2):
err_unitary[i,i] = np.cos(angle_err)
err_unitary[i,(i+1) % 2] = np.sin(angle_err)
err_unitary[0,1] *= -1.0
error = coherent_unitary_error(err_unitary)
noise_model = NoiseModel()
noise_model.add_all_qubit_quantum_error(error, 'u2')
# Run the simulator
backend = qiskit.Aer.get_backend('qasm_simulator')
shots = 500
backend_result1 = qiskit.execute(circs, backend,
shots=shots, noise_model=noise_model).result()
# +
# %matplotlib inline
# Fit the data to an oscillation
plt.figure(figsize=(10, 6))
initial_theta = 0.02
initial_c = 0.5
initial_phi = 0.1
fit = AmpCalFitter(backend_result1, xdata, qubits,
fit_p0=[initial_theta, initial_c],
fit_bounds=([-np.pi, -1],
[np.pi, 1]))
# plot the result for the qubit at index 1
# (in this case that refers to Q2, since we passed qubits in as [4, 2])
fit.plot(1, ax=plt.gca())
print("Rotation Error on U2: %f rads"%(fit.angle_err()[0]))
plt.show()
# -
# ## Angle Error Characterization for Single Qubit Gates
# Measure the angle between the X and Y gates
qubits = [0,1]
circs, xdata = anglecal_1Q_circuits(10, qubits, angleerr=0.1)
# Gate sequence for measuring the angle error
# The U1 gates are the deliberately added errors used to test the procedure
print(circs[2])
# Set the simulator
# Run the simulator
backend = qiskit.Aer.get_backend('qasm_simulator')
shots = 1000
backend_result1 = qiskit.execute(circs, backend,
shots=shots).result()
# +
# %matplotlib inline
# Fit the data to an oscillation
plt.figure(figsize=(10, 6))
initial_theta = 0.02
initial_c = 0.5
initial_phi = 0.01
fit = AngleCalFitter(backend_result1, xdata, qubits,
fit_p0=[initial_theta, initial_c],
fit_bounds=([-np.pi, -1],
[np.pi, 1]))
fit.plot(0, ax=plt.gca())
print("Angle error between X and Y: %f rads"%(fit.angle_err()[0]))
plt.show()
# -
# ## Amplitude Error Characterization for CX Gates
# This looks for a rotation error in the CX gate, i.e., if the gate is actually $CR_x(\pi/2+\delta)$, measure $\delta$. This is very similar to the single-qubit amplitude error calibration, except we need to specify a control qubit (which is set to be in state $|1\rangle$) and the rotation is a $\pi$.
# We can specify more than one CX gate to calibrate in parallel
# but these lists must be the same length and not contain
# any duplicate elements
qubits = [0,2]
controls = [1,3]
circs, xdata = ampcal_cx_circuits(15, qubits, controls)
# Gate sequence to calibrate the amplitude of the CX gate on Q0-Q1 and Q2-Q3 in parallel
print(circs[2])
# +
# Set the simulator
# Add a rotation error on CX
# only if the control is in the excited state
err_unitary = np.eye(4,dtype=complex)
angle_err = 0.15
for i in range(2):
err_unitary[2+i,2+i] = np.cos(angle_err)
err_unitary[2+i,2+(i+1) % 2] = -1j*np.sin(angle_err)
error = coherent_unitary_error(err_unitary)
noise_model = NoiseModel()
noise_model.add_nonlocal_quantum_error(error, 'cx', [1,0], [0,1])
# Run the simulator
backend = qiskit.Aer.get_backend('qasm_simulator')
shots = 1500
backend_result1 = qiskit.execute(circs, backend,
shots=shots, noise_model=noise_model).result()
# +
# %matplotlib inline
# Fit the data to an oscillation
plt.figure(figsize=(10, 6))
initial_theta = 0.02
initial_c = 0.5
initial_phi = 0.01
fit = AmpCalCXFitter(backend_result1, xdata, qubits,
fit_p0=[initial_theta, initial_c],
fit_bounds=([-np.pi, -1],
[np.pi, 1]))
fit.plot(0, ax=plt.gca())
print("Rotation Error on CX: %f rads"%(fit.angle_err()[0]))
plt.show()
# -
# ## Angle Error Characterization for CX Gates
# Measure the angle error $\theta$ in the CX gate, i.e., $CR_{\cos(\theta)X+\sin(\theta)Y}(\pi/2)$ with respect to the angle of the single qubit gates.
qubits = [0,2]
controls = [1,3]
circs, xdata = anglecal_cx_circuits(15, qubits, controls, angleerr=0.1)
# Gate sequence to calibrate the CX angle for Q0-Q1 and Q2-Q3 in parallel
print(circs[2])
# Set the simulator
# Run the simulator
backend = qiskit.Aer.get_backend('qasm_simulator')
shots = 1000
backend_result1 = qiskit.execute(circs, backend,
shots=shots).result()
# +
# %matplotlib inline
# Fit the data to an oscillation
plt.figure(figsize=(10, 6))
initial_theta = 0.02
initial_c = 0.5
initial_phi = 0.01
fit = AngleCalCXFitter(backend_result1, xdata, qubits,
fit_p0=[initial_theta, initial_c],
fit_bounds=([-np.pi, -1],
[np.pi, 1]))
fit.plot(0, ax=plt.gca())
print("Rotation Error on CX: %f rads"%(fit.angle_err()[0]))
plt.show()
| qiskit/advanced/ignis/hamiltonian_and_gate_characterization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Item 9 Consider Generator Expressions for Large Comprehensions
# Issue:
# * A list comprehension creates a whole new list in memory, which can exhaust your memory.
# * If the file you read in is huge, or even a never-ending network socket, this can get problematic.
#
# Use generator expressions to solve this issue!
# * You can build a generator using the same expression but using () instead of [].
a = [1, 2, 3]
generator = (b for b in a)
print(generator.__next__())  # works, but prefer the built-in next()
print(next(generator))
# Note: Iterators returned by generators are stateful, so be careful not to use them more than once: once a value has been consumed, iterating again will not produce it a second time.
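# That statefulness can be seen directly: a second pass over an exhausted generator yields nothing.

```python
gen = (x * x for x in [1, 2, 3])
first_pass = sum(gen)   # consumes every value: 1 + 4 + 9
second_pass = sum(gen)  # the generator is already exhausted, so sum() sees no items
print(first_pass, second_pass)  # 14 0
```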
# ## Things to remember
# * List comprehensions are problematic with huge inputs
# * Generator expressions avoid memory issues by producing one item at a time as an iterator.
# * Generator expressions can be composed by passing the iterator from one generator expression into the for subexpression of another
# * Generator expressions execute very quickly when chained together
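# A small sketch of the composition point above: each stage pulls items lazily from the previous one, so no intermediate list is ever built.

```python
# Chain three generator expressions; nothing is computed until list() pulls values.
numbers = (n for n in range(1, 6))          # 1..5, lazily
squares = (n * n for n in numbers)          # consumes `numbers` one item at a time
evens = (s for s in squares if s % 2 == 0)  # filters `squares` lazily
result = list(evens)
print(result)  # [4, 16]
```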
| notes/Chapter_1:_Pythonic_Thinking/Item 9 Consider Generator Expressions for Large Comprehensions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
import torch
from UnarySim.sw.metric.metric import NormStability, NSbuilder, Stability, ProgressiveError
from UnarySim.sw.stream.gen import RNG, SourceGen, BSGen
from UnarySim.sw.kernel.exp import expNG
import random
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import ticker, cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import time
import math
import numpy as np
import seaborn as sns
from tqdm import tqdm
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device = torch.device("cpu")
def test(
rng="Sobol",
total_cnt=100,
mode="bipolar",
bitwidth=8,
threshold=0.05,
depth=5
):
ns_val=[0.25, 0.5, 0.75]
stype = torch.float
rtype = torch.float
pbar = tqdm(total=3*total_cnt*(2**bitwidth))
if mode is "unipolar":
# all values in unipolar are non-negative
low_bound = 0
up_bound = 2**bitwidth
elif mode is "bipolar":
# values in bipolar are arbitrarily positive or negative
low_bound = -2**(bitwidth-1)
up_bound = 2**(bitwidth-1)
# sweep the input over the full value range
input = []
for val in range(up_bound, low_bound-1, -1):
input.append(val)
input = torch.tensor(input, dtype=torch.float).div(up_bound).to(device)
output = torch.exp(input.mul(-2)).to(device)
for ns in ns_val:
print("# # # # # # # # # # # # # # # # # #")
print("Target normstab:", ns)
print("# # # # # # # # # # # # # # # # # #")
result_ns_total = []
input_ns_total = []
output_ns_total = []
for rand_idx in range(1, total_cnt+1):
outputNS = NormStability(output, mode="unipolar", threshold=threshold/2).to(device)
inputNS = NormStability(input, mode=mode, threshold=threshold).to(device)
dut = expNG(mode=mode,
depth=depth,
gain=1).to(device)
inputBSGen = NSbuilder(bitwidth=bitwidth,
mode=mode,
normstability=ns,
threshold=threshold,
value=input,
rng_dim=rand_idx).to(device)
start_time = time.time()
with torch.no_grad():
for i in range(2**bitwidth):
input_bs = inputBSGen()
inputNS.Monitor(input_bs)
output_bs = dut(input_bs)
outputNS.Monitor(output_bs)
pbar.update(1)
# get the result for different rng
input_ns = inputNS()
output_ns = outputNS()
result_ns = (output_ns/input_ns).clamp(0, 1).cpu().numpy()
result_ns_total.append(result_ns)
input_ns = input_ns.cpu().numpy()
input_ns_total.append(input_ns)
output_ns = output_ns.cpu().numpy()
output_ns_total.append(output_ns)
# print("--- %s seconds ---" % (time.time() - start_time))
# get the result for different rng
result_ns_total = np.array(result_ns_total)
input_ns_total = np.array(input_ns_total)
output_ns_total = np.array(output_ns_total)
#######################################################################
# check the error of all simulation
#######################################################################
input_ns_total_no_nan = input_ns_total[~np.isnan(result_ns_total)]
print("avg I NS:{:1.4}".format(np.mean(input_ns_total_no_nan)))
print("max I NS:{:1.4}".format(np.max(input_ns_total_no_nan)))
print("min I NS:{:1.4}".format(np.min(input_ns_total_no_nan)))
print()
output_ns_total_no_nan = output_ns_total[~np.isnan(result_ns_total)]
print("avg O NS:{:1.4}".format(np.mean(output_ns_total_no_nan)))
print("max O NS:{:1.4}".format(np.max(output_ns_total_no_nan)))
print("min O NS:{:1.4}".format(np.min(output_ns_total_no_nan)))
print()
result_ns_total_no_nan = result_ns_total[~np.isnan(result_ns_total)]
print("avg O/I NS:{:1.4}".format(np.mean(result_ns_total_no_nan)))
print("max O/I NS:{:1.4}".format(np.max(result_ns_total_no_nan)))
print("min O/I NS:{:1.4}".format(np.min(result_ns_total_no_nan)))
print()
#######################################################################
# check the error according to input value
#######################################################################
max_total = np.max(result_ns_total, axis=0)
min_total = np.min(result_ns_total, axis=0)
avg_total = np.mean(result_ns_total, axis=0)
axis_len = outputNS().size()[0]
input_x_axis = []
for axis_index in range(axis_len):
input_x_axis.append((axis_index/(axis_len-1)*(up_bound-low_bound)+low_bound)/up_bound)
fig, ax = plt.subplots()
ax.fill_between(input_x_axis, max_total, avg_total, facecolor="red", alpha=0.75)
ax.fill_between(input_x_axis, avg_total, min_total, facecolor="blue", alpha=0.75)
ax.plot(input_x_axis, avg_total, label='Avg error', color="black", linewidth=0.3)
plt.tight_layout()
plt.xlabel('Input value')
plt.ylabel('Output/Input NS')
plt.xticks(np.arange(0, 1.1, step=0.5))
# ax.xaxis.set_ticklabels([])
plt.xlim(0, 1)
plt.yticks(np.arange(0, 1.1, step=0.2))
# ax.yaxis.set_ticklabels([])
plt.ylim(0, 1.1)
plt.grid(b=True, which="both", axis="y", linestyle="--", color="grey", linewidth=0.3)
fig.set_size_inches(4, 4)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.show()
plt.close()
pbar.close()
test(rng="Sobol", total_cnt=100, mode="bipolar", bitwidth=8, threshold=0.1, depth=5)
| sw/test/metric/test_metric_normstability_exp_cnt.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"is_executing": false, "name": "#%%\n"}
from sklearn.linear_model import SGDClassifier
# %matplotlib inline
def sort_by_target(mnist):
reorder_train = np.array(sorted([(target, i) for i, target in enumerate(mnist.target[:60000])]))[:, 1]
reorder_test = np.array(sorted([(target, i) for i, target in enumerate(mnist.target[60000:])]))[:, 1]
mnist.data[:60000] = mnist.data[reorder_train]
mnist.target[:60000] = mnist.target[reorder_train]
mnist.data[60000:] = mnist.data[reorder_test + 60000]
mnist.target[60000:] = mnist.target[reorder_test + 60000]
# + pycharm={"is_executing": false, "name": "#%%\n"}
from sklearn.datasets import fetch_openml
import numpy as np
mnist = fetch_openml('mnist_784', version=1, cache=True)
mnist.target = mnist.target.astype(np.int8)
sort_by_target(mnist)
mnist["data"],mnist["target"]
# + [markdown] pycharm={"name": "#%% md\n"}
# Here the data is split into features and targets:
# - X: the features
# - y: the target labels
# + pycharm={"is_executing": false, "name": "#%%\n"}
X, y = mnist["data"], mnist["target"]
X.shape
X_train, x_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
import numpy as np
shuffle_index = np.random.permutation(60000)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
# + pycharm={"name": "#%%\n", "is_executing": false}
X_train[:5]
# + [markdown] pycharm={"name": "#%% md\n"}
# This function essentially concatenates the inputs into one larger array and then plots that array.
#
# A digital image is just pixels. A unit of the training set is logically a single image, but in implementation it is only an array of pixel values.
# The function below stitches these pixel arrays back together into one image grid.
# + pycharm={"name": "#%%\n", "is_executing": false}
# EXTRA
def plot_digits(instances, images_per_row=10, **options):
size = 28
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap = matplotlib.cm.binary, **options)
plt.axis("off")
# + pycharm={"name": "#%%\n", "is_executing": false}
import matplotlib
import matplotlib.pyplot as plt
plot_digits(X_train[:5])
plt.show()
# + pycharm={"name": "#%%\n", "is_executing": false}
y_train[:5]
# + pycharm={"is_executing": false, "name": "#%%\n"}
# %matplotlib inline
five_example = mnist["data"][36000]
image = five_example.reshape(28, 28)
plt.axis("off")
plt.imshow(image, cmap=matplotlib.cm.binary, interpolation="nearest")
plt.show()
# + [markdown] pycharm={"name": "#%% md\n"}
# Each MNIST sample is an image flattened into a 1-D array; all comparisons are then done on these arrays.
# + pycharm={"name": "#%%\n", "is_executing": false}
"The type of each element is essentially {}".format(type(five_example))
# + pycharm={"name": "#%%\n", "is_executing": false}
sgd_clf = SGDClassifier(random_state=42, max_iter=5, tol=-np.infty)
y_train_5 = (y_train == 5)
y_test_5 = (y_test == 5)
sgd_clf.fit(X_train, y_train_5)
sgd_clf.predict([five_example])
# + [markdown] pycharm={"name": "#%% md\n"}
# Below is cross-validation.
# + pycharm={"name": "#%%\n", "is_executing": false}
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone
skfolds = StratifiedKFold(n_splits=3, random_state=42,shuffle=True)
for train_index,test_index in skfolds.split(X_train,y_train_5):
clone_clf= clone(sgd_clf)
x_train_folds = X_train[train_index]
y_train_folds = y_train_5[train_index]
x_test_fold = X_train[test_index]
y_test_fold = y_train_5[test_index]
clone_clf.fit(x_train_folds,y_train_folds)
y_pred = clone_clf.predict(x_test_fold)
n_correct = sum(y_pred == y_test_fold)
print("correct is {}".format(n_correct/len(y_pred)))
# + [markdown] pycharm={"name": "#%% md\n"}
# Of the two models below, one does real work and the other always returns the same answer, yet their accuracies turn out to be surprisingly close.
#
# Seen from another angle, machine learning is at heart a probability game: pick the most likely outcome. Sometimes simply guessing the majority class is a cheap way to score well, which is what makes this comparison interesting.
#
# Measured this way, the trained model only beats the trivial baseline by roughly 5 percentage points of accuracy...
# + pycharm={"name": "#%%\n", "is_executing": false}
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf,X_train,y_train_5,cv=3,scoring="accuracy")
# + pycharm={"name": "#%%\n", "is_executing": false}
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
def fit(self,x,y=None):
pass
def predict(self,x):
        return np.zeros((len(x),1),dtype=bool) # always predict "not 5"
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf,X_train,y_train_5,cv=3,scoring="accuracy")
# + pycharm={"name": "#%%\n", "is_executing": false}
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf,X_train,y_train_5,cv=3)
y_train_pred
# + [markdown] pycharm={"name": "#%% md\n"}
# This is the layout of the comparison result:
#
# | Actual class | Predicted negative | Predicted positive |
# | :------------- | :----------: | -----------: |
# | Negative | True negative (not 5, predicted not 5) | False positive (not 5, predicted 5) |
# | Positive | False negative (is 5, predicted not 5) | True positive (is 5, predicted 5) |
# + pycharm={"name": "#%%\n", "is_executing": false}
from sklearn.metrics import confusion_matrix
matrix = confusion_matrix(y_train_5,y_train_pred)
matrix
# + pycharm={"name": "#%%\n", "is_executing": false}
from sklearn.metrics import precision_score,recall_score,f1_score
precision = precision_score(y_train_5,y_train_pred)
recall= recall_score(y_train_5,y_train_pred)
f1 = f1_score(y_train_5,y_train_pred)
"precision score is {}, recall score is {}, f1 score is {}".format(precision,recall,f1)
# + [markdown] pycharm={"name": "#%% md\n"}
# - Precision: of the items you pick out, how many are truly positive. Higher precision means fewer **impurities** in the result.
# - Recall: of all the items that should be found, how many you actually retrieve. Higher recall means fewer correct results are **missed**.
#
#
# An analogy: 100 balls, 10 red and 90 black, and the task is to find the red ones.
#
# Precision rewards making every pick count, so the best strategy is to pick very few balls: at a fixed hit rate, the more draws you make, the more likely a black ball slips in.
# In the extreme, pick just one ball; if it happens to be red, precision is 100%.
#
# Recall rewards retrieving as many red balls as possible, or equivalently leaving as few red balls behind as possible. That forces at least 10 picks, and since some draws will inevitably be black, in practice you must pick even more, say 11.
#
# The odd thing about these two metrics: unless both are 100%, they trade off against each other, so improving one tends to worsen the other.
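# The ball analogy above can be put into numbers with a small standalone sketch; the pick counts are made-up illustrations, not results from this notebook.

```python
# Toy precision/recall computation for the 100-ball example
# (10 red, 90 black); the strategies and counts are illustrative.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Cautious strategy: pick 2 balls, both red -> perfect precision, low recall.
p1, r1 = precision_recall(tp=2, fp=0, fn=8)
# Greedy strategy: pick 20 balls, all 10 reds plus 10 blacks -> full recall.
p2, r2 = precision_recall(tp=10, fp=10, fn=0)
print(p1, r1)  # 1.0 0.2
print(p2, r2)  # 0.5 1.0
```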
# + pycharm={"name": "#%%\n", "is_executing": false}
TP = matrix[1,1]
FP = matrix[0,1]
FN = matrix[1,0]
precision = TP/(TP+FP)
recall = TP/(TP+FN)
f1 = TP/(TP+(FN+FP)/2)
"precision score is {}, recall score is {}, f1 score is {}".format(precision,recall,f1)
# + [markdown] pycharm={"name": "#%% md\n"}
# The cells below compare the decision scores directly. I am still hazy on the details, which probably requires understanding the algorithm itself, but from here on the overall picture is fairly easy to follow.
#
# At least for `SGDClassifier`, the workflow amounts to ranking the data by a decision score, and the threshold is essentially the **minimum** score required to count as a match.
# The precision and recall of that scoring function are then the metrics by which the algorithm is judged. To a layperson, the scoring itself is a **black box**.
#
# No algorithm is perfect, so the score is sometimes *computed wrongly*, and those mistakes are the source of all the problems discussed here.
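# The threshold-as-minimum-score idea can be sketched in a few lines with toy scores (not the classifier's real output):

```python
import numpy as np

# Toy decision scores for four samples; a sample counts as positive
# when its score exceeds the threshold.
scores = np.array([-2.0, -0.5, 0.3, 1.8])
print(scores > 0.0)  # low threshold: two positives
print(scores > 1.0)  # higher threshold: only the strongest positive remains
```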
# + pycharm={"name": "#%%\n", "is_executing": false}
y_scores_simple = sgd_clf.decision_function(X_train)
y_scores_simple.shape
# + pycharm={"name": "#%%\n", "is_executing": false}
y_scores = cross_val_predict(sgd_clf,X_train,y_train_5,cv=3,method="decision_function")
y_scores
# + pycharm={"name": "#%%\n", "is_executing": false}
from sklearn.metrics import precision_recall_curve
precisions,recalls,thresholds = precision_recall_curve(y_train_5,y_scores)
def plot_precision_recall_vs_threadhold(precisions,recalls,thresholds):
plt.plot(thresholds,precisions[:-1],"b--",label="Precision")
plt.plot(thresholds,recalls[:-1],"g--",label="Recalll")
plt.xlabel("Threadhold")
plt.legend(loc="upper left")
plt.ylim([0,1])
plot_precision_recall_vs_threadhold(precisions,recalls,thresholds)
plt.show()
# + pycharm={"name": "#%%\n", "is_executing": false}
from sklearn.metrics import roc_curve
fpr,tpr,thresholds = roc_curve(y_train_5,y_scores)
def plot_roc_curve(fpr,tpr,label=None):
plt.plot(fpr,tpr,linewidth=2,label=label)
plt.plot([0,1],[0,1],'k--')
plt.axis([0,1,0,1])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plot_roc_curve(fpr,tpr)
plt.show()
# + [markdown] pycharm={"name": "#%% md\n"}
# FPR (false positive rate): the probability of negatives being mixed into the predicted positives; the denominator is the total number of actual negatives, so it is the fraction taken from the negative class.
#
# The curve below shows that, in the effort to avoid false negatives, everything gets swept into the positive selection, so the number of negatives selected keeps rising until eventually all of them are included.
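# The two rates can be checked on a toy confusion matrix (made-up counts, laid out as `[[TN, FP], [FN, TP]]` like the `confusion_matrix` output above):

```python
# FPR and TPR from toy confusion-matrix counts.
TN, FP, FN, TP = 90, 10, 5, 45
fpr = FP / (FP + TN)  # fraction of actual negatives flagged positive
tpr = TP / (TP + FN)  # fraction of actual positives found (recall)
print(fpr, tpr)  # 0.1 0.9
```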
# + pycharm={"name": "#%%\n", "is_executing": false}
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5,y_scores)
# + [markdown] pycharm={"name": "#%% md\n"}
# Random forest approach
# + pycharm={"name": "#%%\n", "is_executing": false}
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state=42)
y_probas_forest = cross_val_predict(forest_clf,X_train,y_train_5,cv=3,method="predict_proba")
# + pycharm={"name": "#%%\n", "is_executing": false}
y_scores_forest = y_probas_forest[:,1]
fpr_forest,tpr_forest,thresholds_forest = roc_curve(y_train_5,y_scores_forest)
plt.plot(fpr,tpr,"b:",label="SGD")
plot_roc_curve(fpr_forest,tpr_forest,"Random Forest")
plt.legend(loc="lower right")
plt.show()
# + [markdown] pycharm={"name": "#%% md\n"}
# The cell below is noticeably slower, because it automatically trains several classifiers.
#
# OvO (One vs One): needs less training data per classifier, but many classifiers.
# OvR (One vs Rest): the opposite.
#
# The distinction is easy to grasp: OvO compares classes pairwise, while OvR compares each class against all the rest.
# What I am still unclear about is how OvO decides which class ultimately wins; that part remains a bit magical.
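# The classifier counts behind the OvO/OvR trade-off are easy to compute; this is a sketch, not part of scikit-learn's API:

```python
# Number of binary classifiers each strategy trains for K classes.
def n_ovo(k):
    return k * (k - 1) // 2  # one classifier per pair of classes
def n_ovr(k):
    return k  # one classifier per class-vs-rest split

print(n_ovo(10), n_ovr(10))  # 45 10, for the 10 MNIST digits
```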
# + pycharm={"name": "#%%\n", "is_executing": false}
sgd_clf.fit(X_train, y_train)
# + pycharm={"name": "#%%\n", "is_executing": false}
five_example = X[36000]
sgd_clf.predict([five_example])
# + pycharm={"name": "#%%\n", "is_executing": false}
example_score = sgd_clf.decision_function([five_example])
example_score
# + pycharm={"name": "#%%\n", "is_executing": false}
np.argmax(example_score)
# + pycharm={"name": "#%%\n", "is_executing": false}
"""
The list of target classes; presumably these are the actual label values.
"""
sgd_clf.classes_
# + pycharm={"name": "#%%\n", "is_executing": false}
from sklearn.multiclass import OneVsOneClassifier
ovo_clf = OneVsOneClassifier(SGDClassifier(random_state=42, max_iter=5, tol=-np.infty))
ovo_clf.fit(X_train,y_train)
ovo_clf.predict([five_example])
# + pycharm={"name": "#%%\n", "is_executing": false}
len(ovo_clf.estimators_)
# + pycharm={"name": "#%%\n", "is_executing": false}
forest_clf.fit(X_train,y_train)
forest_clf.predict([five_example])
# + pycharm={"name": "#%%\n", "is_executing": false}
forest_clf.predict_proba([five_example])
# + pycharm={"name": "#%%\n", "is_executing": false}
cross_val_score(sgd_clf,X_train,y_train,cv=3,scoring="accuracy")
# + pycharm={"name": "#%%\n", "is_executing": false}
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
cross_val_score(sgd_clf,X_train_scaled,y_train,cv=3,scoring="accuracy")
# + [markdown] pycharm={"name": "#%% md\n"}
# Cross-validation always splits the data into n folds and runs n rounds.
# Each round holds one fold out, trains on the others, and evaluates on the held-out fold.
# Finally it returns the scores as a whole.
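# The fold-splitting described above can be re-sketched by hand on a toy size (this is an illustration, not how scikit-learn implements it internally):

```python
import numpy as np

# Split n_samples indices into n_splits folds; each round yields
# one held-out fold and the concatenation of the others for training.
def kfold_indices(n_samples, n_splits):
    folds = np.array_split(np.arange(n_samples), n_splits)
    for i in range(n_splits):
        test_idx = folds[i]
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train_idx, test_idx

for train_idx, test_idx in kfold_indices(9, 3):
    print(len(train_idx), len(test_idx))  # 6 3 in every round
```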
# + pycharm={"name": "#%%\n", "is_executing": false}
from functools import partial
my_score = partial(cross_val_score,cv=3,scoring="accuracy")
my_score(sgd_clf,X_train_scaled,y_train)
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Error analysis
#
# + pycharm={"name": "#%%\n", "is_executing": false}
y_train_pred = cross_val_predict(sgd_clf,X_train_scaled,y_train,cv=3)
conf_mx = confusion_matrix(y_train,y_train_pred)
conf_mx
# + [markdown] pycharm={"name": "#%% md\n"}
# In this plot, smaller numbers appear darker; the whiter a cell, the higher the count, so a bright diagonal means high accuracy.
# + pycharm={"name": "#%%\n", "is_executing": false}
plt.matshow(conf_mx,cmap=plt.cm.gray)
# + [markdown] pycharm={"name": "#%% md\n"}
# The processing below flips the view by computing percentages, namely error rates.
# Everything off the diagonal is a mistake, so those cells hold the percentage of misclassified samples: the higher the rate, the brighter the cell.
#
# The slightly confusing part is that the diagonal shows correct-classification rates while every other cell shows error rates.
# + pycharm={"name": "#%%\n", "is_executing": false}
row_sums = conf_mx.sum(axis=1,keepdims=True)
norm_conf_mx = conf_mx/row_sums
norm_conf_mx
# + pycharm={"name": "#%%\n", "is_executing": false}
plt.matshow(norm_conf_mx,cmap=plt.cm.gray)
# + pycharm={"name": "#%%\n", "is_executing": false}
np.fill_diagonal(norm_conf_mx,0) # without zeroing the diagonal, the other values are too small to see
plt.matshow(norm_conf_mx,cmap=plt.cm.gray)
# + pycharm={"name": "#%%\n", "is_executing": false}
cl_a,cl_b = 7,9
X_aa = X_train[(y_train==cl_a) & (y_train_pred == cl_a)]
X_ab = X_train[(y_train==cl_a) & (y_train_pred == cl_b)]
X_ba = X_train[(y_train==cl_b) & (y_train_pred == cl_a)]
X_bb = X_train[(y_train==cl_b) & (y_train_pred == cl_b)]
plt.figure(figsize=(8,8))
plt.subplot(221);plot_digits(X_aa[:25],images_per_row=5) # 7s classified as 7
plt.subplot(222);plot_digits(X_ab[:25],images_per_row=5) # 7s misclassified as 9
plt.subplot(223);plot_digits(X_ba[:25],images_per_row=5) # 9s misclassified as 7
plt.subplot(224);plot_digits(X_bb[:25],images_per_row=5) # 9s classified as 9
plt.show()
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Multilabel classification
# One sample gets two labels.
# + pycharm={"name": "#%%\n", "is_executing": false}
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >= 7)
y_train_odd = (y_train % 2 == 1)
y_multilabel = np.c_[y_train_large, y_train_odd]
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
# + pycharm={"name": "#%%\n", "is_executing": false}
knn_clf.predict([five_example])
# + [markdown] pycharm={"name": "#%% md\n"}
# The cell below is very slow.
# + pycharm={"name": "#%%\n", "is_executing": false}
#y_train_knn_pred = cross_val_predict(knn_clf,X_train,y_train,cv=3)
#f1_score(y_train,y_train_knn_pred,average="macro")
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Multioutput classification
#
# Here both the inputs and the outputs carry multiple labels.
#
# In this example the steps are:
# 1. Add noise to the dataset.
# 2. Use the original clean data as the target set, i.e. the output.
#
# Each pixel is classified individually, and the classified pixels are then assembled back into an image.
# The prediction is therefore a whole picture (784 pixels).
#
# Noisy image --> clean image
# + pycharm={"name": "#%%\n", "is_executing": false}
noise = np.random.randint(0, 100, (len(X_train), 784))
X_train_mod = X_train + noise
noise = np.random.randint(0, 100, (len(x_test), 784))
X_test_mod = x_test + noise
y_train_mod = X_train
y_test_mod = x_test
# + pycharm={"name": "#%%\n", "is_executing": false}
def plot_digit(digit):
image = digit.reshape(28, 28)
plt.axis("off")
plt.imshow(image, cmap=matplotlib.cm.binary, interpolation="nearest")
plt.show()
some_index = 5500
plt.subplot(121);plot_digit(X_test_mod[some_index])
plt.subplot(122);plot_digit(x_test[some_index])
# + pycharm={"name": "#%%\n", "is_executing": false}
knn_clf.fit(X_train_mod,y_train_mod)
clean_digit = knn_clf.predict([X_test_mod[some_index]])
plot_digit(clean_digit)
| myNoteBook/chapter3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chapter 16. Modeling Sequence Data Using Recurrent Neural Networks
# **Use the links below to view this notebook in the Jupyter notebook viewer (nbviewer.jupyter.org) or to run it in Google Colab (colab.research.google.com).**
#
# <table class="tfo-notebook-buttons" align="left">
# <td>
#     <a target="_blank" href="https://nbviewer.jupyter.org/github/rickiepark/python-machine-learning-book-2nd-edition/blob/master/code/ch16/ch16.ipynb"><img src="https://jupyter.org/assets/main-logo.svg" width="28" />View in Jupyter notebook viewer</a>
# </td>
# <td>
#     <a target="_blank" href="https://colab.research.google.com/github/rickiepark/python-machine-learning-book-2nd-edition/blob/master/code/ch16/ch16.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# </table>
# `watermark` is a utility for printing the Python packages used in a Jupyter notebook. To install the `watermark` package, uncomment and run the next cell.
# +
# #!pip install watermark
# -
# %load_ext watermark
# %watermark -u -d -v -p numpy,scipy,pyprind,tensorflow
# **This notebook requires TensorFlow 2.0.0-alpha0 or later. If an older version of TensorFlow is installed, uncomment and run the next cell.**
# +
# #!pip install tensorflow==2.0.0-alpha0
# -
# **When using Colab, uncomment the next cell to install the GPU build of TensorFlow 2.0.0-alpha0.**
# +
# #!pip install tensorflow-gpu==2.0.0-alpha0
# -
# **When using Colab, uncomment and run the next cell.**
# +
# #!wget https://github.com/rickiepark/python-machine-learning-book-2nd-edition/raw/master/code/ch16/movie_data.csv.gz
# +
import gzip
with gzip.open('movie_data.csv.gz') as f_in, open('movie_data.csv', 'wb') as f_out:
f_out.writelines(f_in)
# -
# # Implementing Multilayer RNNs for Sequence Modeling with TensorFlow's Keras API
# ## First project: performing sentiment analysis of IMDb movie reviews with multilayer RNNs
# ### Preparing the data
# `pyprind` is a utility for displaying a progress bar in Jupyter notebooks. To install the `pyprind` package, uncomment and run the next cell.
# +
# #!pip install pyprind
# +
import pyprind
import pandas as pd
from string import punctuation
import re
import numpy as np
df = pd.read_csv('movie_data.csv', encoding='utf-8')
print(df.head(3))
# +
## Data preprocessing:
## split the text into words and count how often each occurs.
from collections import Counter
counts = Counter()
pbar = pyprind.ProgBar(len(df['review']),
                       title='Counting word occurrences.')
for i,review in enumerate(df['review']):
text = ''.join([c if c not in punctuation else ' '+c+' ' \
for c in review]).lower()
df.loc[i,'review'] = text
pbar.update()
counts.update(text.split())
# +
## Build a dictionary mapping each
## unique word to an integer.
word_counts = sorted(counts, key=counts.get, reverse=True)
print(word_counts[:5])
word_to_int = {word: ii for ii, word in enumerate(word_counts, 1)}
mapped_reviews = []
pbar = pyprind.ProgBar(len(df['review']),
                       title='Mapping reviews to integers.')
for review in df['review']:
mapped_reviews.append([word_to_int[word] for word in review.split()])
pbar.update()
# +
## Make sequences of equal length.
## Sequences shorter than 200 are left-padded with zeros.
## Sequences longer than 200 keep only their last 200 elements.
sequence_length = 200  ## (this is the value T from the RNN formulas)
sequences = np.zeros((len(mapped_reviews), sequence_length), dtype=int)
for i, row in enumerate(mapped_reviews):
review_arr = np.array(row)
sequences[i, -len(row):] = review_arr[-sequence_length:]
X_train = sequences[:37500, :]
y_train = df.loc[:37499, 'sentiment'].values
X_test = sequences[37500:, :]
y_test = df.loc[37500:, 'sentiment'].values
# -
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
n_words = len(word_to_int) + 1
print(n_words)
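# The left-padding scheme used above can be checked on toy sequences (a standalone re-sketch, not the notebook's variables):

```python
import numpy as np

# Shorter sequences get zeros on the left; longer ones keep
# only their last `length` elements.
def pad_left(seq, length):
    out = np.zeros(length, dtype=int)
    out[-min(len(seq), length):] = np.array(seq)[-length:]
    return out

print(pad_left([3, 7], 5))              # [0 0 0 3 7]
print(pad_left([1, 2, 3, 4, 5, 6], 5))  # [2 3 4 5 6]
```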
# ### Embedding
from tensorflow.keras import models, layers
model = models.Sequential()
model.add(layers.Embedding(n_words, 200,
embeddings_regularizer='l2'))
model.summary()
# ### Building the RNN model
model.add(layers.LSTM(16))
model.add(layers.Flatten())
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
# ### Training the sentiment-analysis RNN model
model.compile(loss='binary_crossentropy',
optimizer='adam', metrics=['acc'])
# +
import time
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
callback_list = [ModelCheckpoint(filepath='sentiment_rnn_checkpoint.h5',
monitor='val_loss',
save_best_only=True),
TensorBoard(log_dir="sentiment_rnn_logs/{}".format(
time.asctime()))]
# -
history = model.fit(X_train, y_train,
batch_size=64, epochs=10,
validation_split=0.3, callbacks=callback_list)
import matplotlib.pyplot as plt
epochs = np.arange(1, 11)
plt.plot(epochs, history.history['loss'])
plt.plot(epochs, history.history['val_loss'])
plt.xlabel('epochs')
plt.ylabel('loss')
plt.show()
epochs = np.arange(1, 11)
plt.plot(epochs, history.history['acc'])
plt.plot(epochs, history.history['val_acc'])
plt.xlabel('epochs')
plt.ylabel('loss')
plt.show()
# ### Evaluating the sentiment-analysis RNN model
model.load_weights('sentiment_rnn_checkpoint.h5')
model.evaluate(X_test, y_test)
model.predict_proba(X_test[:10])
model.predict_classes(X_test[:10])
# ## Second project: implementing a character-level language model in TensorFlow
# ### Preparing the data
# **When using Colab, uncomment and run the next cell.**
# +
# #!wget https://github.com/rickiepark/python-machine-learning-book-2nd-edition/raw/master/code/ch16/pg2265.txt
# +
import numpy as np
## Read and process the text.
with open('pg2265.txt', 'r', encoding='utf-8') as f:
text=f.read()
text = text[15858:]
chars = set(text)
char2int = {ch:i for i,ch in enumerate(chars)}
int2char = dict(enumerate(chars))
text_ints = np.array([char2int[ch] for ch in text],
dtype=np.int32)
# -
len(text)
len(chars)
def reshape_data(sequence, batch_size, num_steps):
mini_batch_length = batch_size * num_steps
num_batches = int(len(sequence) / mini_batch_length)
if num_batches*mini_batch_length + 1 > len(sequence):
num_batches = num_batches - 1
    ## Drop the tail of the sequence that does not fit into a full batch.
x = sequence[0 : num_batches*mini_batch_length]
y = sequence[1 : num_batches*mini_batch_length + 1]
    ## Split x and y into lists of sequence batches.
x_batch_splits = np.split(x, batch_size)
y_batch_splits = np.split(y, batch_size)
    ## The combined batch size will be
    ## batch_size x mini_batch_length.
x = np.stack(x_batch_splits)
y = np.stack(y_batch_splits)
return x, y
## Test
train_x, train_y = reshape_data(text_ints, 64, 10)
print(train_x.shape)
print(train_x[0, :10])
print(train_y[0, :10])
print(''.join(int2char[i] for i in train_x[0, :10]))
print(''.join(int2char[i] for i in train_y[0, :10]))
def create_batch_generator(data_x, data_y, num_steps):
batch_size, tot_batch_length = data_x.shape[0:2]
num_batches = int(tot_batch_length/num_steps)
for b in range(num_batches):
yield (data_x[:, b*num_steps: (b+1)*num_steps],
data_y[:, b*num_steps: (b+1)*num_steps])
bgen = create_batch_generator(train_x[:,:100], train_y[:,:100], 15)
for x, y in bgen:
print(x.shape, y.shape, end=' ')
print(''.join(int2char[i] for i in x[0,:]).replace('\n', '*'), ' ',
''.join(int2char[i] for i in y[0,:]).replace('\n', '*'))
batch_size = 64
num_steps = 100
train_x, train_y = reshape_data(text_ints, batch_size, num_steps)
print(train_x.shape, train_y.shape)
# +
from tensorflow.keras.utils import to_categorical
train_encoded_x = to_categorical(train_x)
train_encoded_y = to_categorical(train_y)
print(train_encoded_x.shape, train_encoded_y.shape)
# -
print(np.max(train_x), np.max(train_y))
# ### Building the character-level RNN model
char_model = models.Sequential()
# +
num_classes = len(chars)
char_model.add(layers.LSTM(128, input_shape=(None, num_classes),
return_sequences=True))
char_model.add(layers.TimeDistributed(layers.Dense(num_classes,
activation='softmax')))
# -
char_model.summary()
# ### Training the character-level RNN model
# +
from tensorflow.keras.optimizers import Adam
adam = Adam(clipnorm=5.0)
# -
char_model.compile(loss='categorical_crossentropy', optimizer=adam)
callback_list = [ModelCheckpoint(filepath='char_rnn_checkpoint.h5')]
for i in range(500):
bgen = create_batch_generator(train_encoded_x,
train_encoded_y, num_steps)
char_model.fit_generator(bgen, steps_per_epoch=25, epochs=1,
callbacks=callback_list, verbose=0)
# ### Generating text with the character-level RNN model
# +
np.random.seed(42)
def get_top_char(probas, char_size, top_n=5):
p = np.squeeze(probas)
p[np.argsort(p)[:-top_n]] = 0.0
p = p / np.sum(p)
ch_id = np.random.choice(char_size, 1, p=p)[0]
return ch_id
# -
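# The top-n sampling inside `get_top_char` can be walked through on toy probabilities (a standalone sketch, not the model's real output):

```python
import numpy as np

# Zero out all but the top_n probabilities, renormalize, then sample.
probas = np.array([0.05, 0.5, 0.1, 0.3, 0.05])
top_n = 2
p = probas.copy()
p[np.argsort(p)[:-top_n]] = 0.0  # keep only the two largest entries
p = p / np.sum(p)
print(p)  # only indices 1 and 3 stay nonzero: [0. 0.625 0. 0.375 0.]
```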
seed_text = "The "
for ch in seed_text:
num = [char2int[ch]]
onehot = to_categorical(num, num_classes=65)
onehot = np.expand_dims(onehot, axis=0)
probas = char_model.predict(onehot)
num = get_top_char(probas, len(chars))
seed_text += int2char[num]
for i in range(500):
onehot = to_categorical([num], num_classes=65)
onehot = np.expand_dims(onehot, axis=0)
probas = char_model.predict(onehot)
num = get_top_char(probas, len(chars))
seed_text += int2char[num]
print(seed_text)
| code/ch16/ch16.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import warnings
warnings.simplefilter('ignore')
import numpy as np
import pandas as pd
import macd
import pandas.io.data as web
import matplotlib.pyplot as plt
import multiprocessing as mp
# %matplotlib inline
# +
filename='/home/ruslan/data/snp500.h5'
h5 = pd.HDFStore(filename, 'r')
symbols=macd.getSymbols(h5)
# -
path='/home/ruslan/data/dfs.h5'
for y in range(2005,2011):
s=str(y)+'-01-01'
e=str(y)+'-12-31'
print s,e
df=macd.parallelCumsum(filename, symbols,s,e,procs=1)
with pd.get_store(path) as store:
store[str(y)]=df
# +
#path='/home/ruslan/data/dfs.h5'
#start='2011-01-01'
#end=None
#df=macd.parallelCumsum(filename, symbols,start,end,procs=1)
#with pd.get_store(path) as store:
# store['2011']=df
# -
with pd.get_store(path) as store:
df1=store
print df1.keys()
with pd.get_store(path) as store:
df1=store['/2011-2015']
print df1.keys()
print df1.loc[(df1['win']>70.0)]
# +
start='2011-01-01'
dt0=macd.getClose(h5,'FCX')[start:]
dt = macd.macd(dt0)
dcs,res=macd.doCumsum(dt)
macd.plotMacd(dt)
# -
print res
| ipython/macdtestfile.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# importing the necessary libraries
import pandas as pd
import numpy as np
import random
import warnings
warnings.filterwarnings('ignore')
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
# -
data = pd.read_csv("yds_data.csv") # reading data from the CSV file
data.shape # checking how many rows and columns are in the data
data.head() # seeing how the data looks like
# # A. Data Preprocessing
# #### 1. Exploring the Columns of Dataset
# A. Using descriptive Statistics to find some insights
data.describe()
# B. Finding the dtypes of Columns to get some Insights
data.info()
# #### 2. Checking for Missing Values
# Percentage and Sum of Missing values in each Columns
missing_data = pd.DataFrame({'total_missing': data.isnull().sum(), 'perc_missing': (data.isnull().sum()/data.shape[0])*100})
missing_data
# Exploring The Target Variable 'is_goal'
data.is_goal.value_counts()
# ##### " It's a binary classification problem as there are only two values for the target ''is_goal" column
#
#
# # B. Exploratory Data Analysis
# #### 1. Dropping Unnecessary Columns
#
#1. Dropping unnecessary columns
data.drop(["Unnamed: 0", 'remaining_min.1', 'power_of_shot.1','knockout_match.1', 'remaining_sec.1', 'distance_of_shot.1'], axis=1, inplace=True)
data.head() # looking at the dataset after transformation
data.columns # to check that the columns were dropped successfully
#2. Changing dtypes to datetime
data.date_of_game = pd.to_datetime(data.date_of_game, errors='coerce')
data['game_season'] = data['game_season'].astype('object')
data['game_season']
# +
# Labelencoding the 'game_season'
# -
l_unique = data['game_season'].unique() # fetching the unique values from game_season
l_unique
v_unique = np.arange(len(l_unique)) # obtaining values in the range of the length of l_unique
v_unique
data['game_season'].replace(to_replace=l_unique, value=v_unique, inplace=True) # replacing categorical data with numerical values
data['game_season'].head()
data['game_season'] = data['game_season'].astype('int') # converting the datatype of the column from int64 to int32
data['game_season'].head()
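# For reference, `pandas.factorize` does the same unique/replace encoding in one step; the season strings below are toy values, not the real data.

```python
import pandas as pd

# factorize assigns integer codes in order of first appearance,
# much like the manual unique/replace approach above.
codes, uniques = pd.factorize(pd.Series(['2000-01', '2001-02', '2000-01']))
print(codes)          # [0 1 0]
print(list(uniques))  # ['2000-01', '2001-02']
```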
# #### 3. Handling Missing Values
#
# Filling NaN values in Column "power_of_shot" with MEAN
data['power_of_shot'].fillna(value=data['power_of_shot'].mean(), inplace=True)
data.isnull().sum() # number of missing values for power_of_shot column should be zero
# Filling NaN values in Column "type_of_combined_shot" with MODE
mode_com = data.type_of_combined_shot.value_counts().keys()[0]
print('mode is: ',mode_com)
data.type_of_combined_shot.fillna(value=mode_com, inplace=True)
data.isnull().sum() # number of missing values for type_of_combined_shot column should be zero
# Filling NaN values in Column "remaining_sec" with MEDIAN
data.remaining_sec.fillna(value=data.remaining_sec.median(), inplace=True)
data.isnull().sum() # number of missing values for remaining_sec column should be zero
# Rebuilding "shot_id_number" as a sequential index
data.shot_id_number = pd.Series(np.arange(1,data.shot_id_number.shape[0]+1))
data.isnull().sum() # number of missing values for shot_id_number column should be zero
# Filling NaN values in Columns "location_x" and "location_y" with 0
data['location_x'].fillna(value=0, inplace=True)
data['location_y'].fillna(value=0, inplace=True)
data.isnull().sum() # number of missing values for location_x and location_y columns should be zero
# Using Forward Filling method in appropriate Columns
print('Null values in column home/away before forward fill =',data['home/away'].isnull().sum())
col = ['home/away','lat/lng', 'team_name','match_id','match_event_id', 'team_id', 'remaining_min', 'knockout_match', 'game_season' ]
data.loc[:,col] = data.loc[:,col].ffill()
print('Null values in column home/away after the forward fill =',data['home/away'].isnull().sum())
# +
# Filling Missing Values In "shot_basics" based on "range_of_short" column!
# if the range of the shot is 16-24 ft it's a mid range shot
data.loc[(data.range_of_shot == '16-24 ft.'), 'shot_basics'] = data[data.range_of_shot == '16-24 ft.'].shot_basics.fillna(value='Mid Range')
# if the range of the shot is less than 8 ft then randomly assign goal line or goal area value to the shot
data.loc[(data.range_of_shot == 'Less Than 8 ft.')&(data.shot_basics.isnull()), 'shot_basics'] = pd.Series(data[(data.range_of_shot == 'Less Than 8 ft.')&(data.shot_basics.isnull())].shot_basics.apply(lambda x: x if type(x)==str else np.random.choice(['Goal Area', 'Goal Line'],1,p=[0.7590347263095939, 0.24096527369040613])[0]))
# if the range of the shot is 8-16 ft then randomly assign goal line or mid range value to the shot
data.loc[(data.range_of_shot == '8-16 ft.')&(data.shot_basics.isnull()), 'shot_basics'] = pd.Series(data[(data.range_of_shot == '8-16 ft.')&(data.shot_basics.isnull())].shot_basics.apply(lambda x: x if type(x)==str else np.random.choice(['Mid Range', 'Goal Line'],1,p=[0.6488754615642833, 0.35112453843571667])[0]))
# if the range of the shot is more than 24 ft then randomly assign one of the values from'Penalty Spot', 'Right Corner', 'Left Corner' to shot_basic field
data.loc[(data.range_of_shot == '24+ ft.')&(data.shot_basics.isnull()), 'shot_basics'] = pd.Series(data[(data.range_of_shot == '24+ ft.')&(data.shot_basics.isnull())].shot_basics.apply(lambda x: x if type(x)==str else np.random.choice(['Penalty Spot', 'Right Corner', 'Left Corner'],1,p=[0.8932384341637011, 0.06192170818505338, 0.044839857651245554])[0]))
# if the shot is a back court shot then randomly assign one of the values from''Mid Ground Line', 'Penalty Spot' to shot_basic field
data.loc[(data.range_of_shot == 'Back Court Shot')&(data.shot_basics.isnull()), 'shot_basics'] = pd.Series(data[(data.range_of_shot == 'Back Court Shot')&(data.shot_basics.isnull())].shot_basics.apply(lambda x: x if type(x)==str else np.random.choice(['Mid Ground Line', 'Penalty Spot'],1,p=[0.8441558441558441, 0.15584415584415584])[0]))
data.isna().sum()
# -
data['shot_basics'].unique() # now we have populated the shot types and reduced the number of missing values. Earlier we had 1575 missing values for this column, now we have only 66.
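# The conditional-imputation pattern used above reduces to a mask plus a weighted random draw; here is a minimal sketch on a toy frame (the probabilities are illustrative, not the empirical ones computed from the real data).

```python
import numpy as np
import pandas as pd

# Rows whose range_of_shot is known draw their missing shot_basics
# from a distribution chosen for that range.
np.random.seed(0)
toy = pd.DataFrame({
    'range_of_shot': ['Less Than 8 ft.'] * 4,
    'shot_basics': ['Goal Area', None, None, 'Goal Line'],
})
mask = (toy.range_of_shot == 'Less Than 8 ft.') & toy.shot_basics.isnull()
toy.loc[mask, 'shot_basics'] = np.random.choice(
    ['Goal Area', 'Goal Line'], size=mask.sum(), p=[0.76, 0.24])
print(toy.shot_basics.isnull().sum())  # 0
```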
# +
# Filling Missing Values In "range_of_short" based on "short_basics" column!
# if shot_basics is Goal Area, then range of shot is Less Than 8 ft
data.loc[(data.shot_basics == 'Goal Area'), 'range_of_shot'] = data[data.shot_basics == 'Goal Area'].range_of_shot.fillna(value='Less Than 8 ft.')
# if shot_basics is Penalty Spot, then range of shot is 24+ ft.
data.loc[(data.shot_basics == 'Penalty Spot'), 'range_of_shot'] = data[data.shot_basics == 'Penalty Spot'].range_of_shot.fillna(value= '24+ ft.')
# if shot_basics is Right Corner, then range of shot is 24+ ft.
data.loc[(data.shot_basics == 'Right Corner'), 'range_of_shot'] = data[data.shot_basics == 'Right Corner'].range_of_shot.fillna(value='24+ ft.')
# if shot_basics is Left Corner, then range of shot is 24+ ft.
data.loc[(data.shot_basics == 'Left Corner'), 'range_of_shot'] = data[data.shot_basics == 'Left Corner'].range_of_shot.fillna(value='24+ ft.')
# if shot_basics is Mid Ground Line , then range of shot is Back Court Shot
data.loc[(data.shot_basics == 'Mid Ground Line'), 'range_of_shot'] = data[data.shot_basics == 'Mid Ground Line'].range_of_shot.fillna(value='Back Court Shot')
# if shot_basics is Mid Range then randomly assign '16-24 ft.' or '8-16 ft.' to range of shot
data.loc[(data.shot_basics == 'Mid Range')&(data.range_of_shot.isnull()), 'range_of_shot'] = pd.Series(data[(data.shot_basics == 'Mid Range')&(data.range_of_shot.isnull())].range_of_shot.apply(lambda x: x if type(x)==str else np.random.choice(['16-24 ft.', '8-16 ft.'],1,p=[0.6527708850289495, 0.34722911497105047])[0]))
# if shot_basics is Goal Line then randomly assign '8-16 ft.' or 'Less Than 8 ft.' to range_of_shot
data.loc[(data.shot_basics == 'Goal Line')&(data.range_of_shot.isnull()), 'range_of_shot'] = pd.Series(data[(data.shot_basics == 'Goal Line')&(data.range_of_shot.isnull())].range_of_shot.apply(lambda x: x if type(x)==str else np.random.choice(['8-16 ft.', 'Less Than 8 ft.'],1,p=[0.5054360956752839, 0.49456390432471614])[0]))
data.isnull().sum() # number of missing values for range_of_shot column should have been reduced
# -
data['range_of_shot'].unique() # the number of missing values has fallen from 1564 to 66
# Filling the remaining missing values (in case both columns are NaN) using the forward-fill method
data.shot_basics.fillna(method='ffill', inplace=True)
data.range_of_shot.fillna(method='ffill', inplace=True)
data.isnull().sum() # number of missing values for shot_basics and range_of_shot columns should be zero
# Filling the missing values in the "area_of_shot" column
data.area_of_shot.fillna(value='Center(C)', inplace=True) # all the missing values get filled with 'Center(C)'
data.isnull().sum() # number of missing values for area_of_shot column should be zero
data['distance_of_shot'].unique()
# Filling the missing values in "distance_of_shot"
# if distance_of_shot is null, randomly assign one of 20, 45, 44 or 37 (weighted by observed frequency)
data.loc[data['distance_of_shot'].isnull(), 'distance_of_shot'] = pd.Series(data.loc[data['distance_of_shot'].isnull(), 'distance_of_shot'].apply(lambda x: x if type(x)==str else np.random.choice([20,45,44,37],1,p=[0.5278056615137523,0.18630797028709095,0.14384661714515157,0.1420397510540052])[0]))
data.isnull().sum() # number of missing values for distance_of_shot column should be zero
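# The cells above repeat the same pattern: take a column's most frequent values and their
# empirical frequencies, then fill each NaN with a weighted random draw. A small reusable
# helper captures it (a sketch, not part of the original notebook; `fill_weighted` is a
# hypothetical name):

```python
import numpy as np
import pandas as pd

def fill_weighted(series, top_n=4, seed=0):
    """Fill NaNs with draws from the top_n observed values,
    weighted by their empirical frequencies."""
    rng = np.random.default_rng(seed)
    counts = series.value_counts().head(top_n)
    values = counts.index.to_numpy()
    probs = (counts / counts.sum()).to_numpy()
    out = series.copy()
    mask = out.isna()
    # only the NaN positions receive random draws
    out.loc[mask] = rng.choice(values, size=mask.sum(), p=probs)
    return out

# toy data mimicking distance_of_shot
s = pd.Series([20, 45, 20, np.nan, 44, np.nan, 20])
filled = fill_weighted(s)
```

# Non-null entries are kept as-is; only the NaNs are replaced with weighted draws.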
# ## Making the Train and Test Dataset
# ##### Train and test data are divided based on the value of the is_goal column
# +
# Making the train Dataset
train = data[data.is_goal.notnull()]
print('the Shape of Train Dataset',train.shape)
train.set_index(np.arange(train.shape[0]),inplace=True)
train.head()
# -
# Making the Test Dataset
test = data[data.is_goal.isnull()]
print('The Shape of Test Dataset',test.shape)
test.set_index(np.arange(test.shape[0]), inplace=True)
test.head()
# ##### Handling Missing Values in Train and Test Dataset
# #### Filling the NaN values with a random choice from a given list with their appropriate probabilities
#
l_goal = train[train.is_goal == 1].type_of_shot.value_counts().head(6).keys() # Top six shots when it was goal
l_goal
p_g_sum = train[train.is_goal == 1].type_of_shot.value_counts().head(6).sum() # Total count of the top six shot types
p_goal = (train[train.is_goal == 1].type_of_shot.value_counts().head(6) / p_g_sum ).tolist() # Their respective probabilities
p_goal
# if is_goal is 1: keep type_of_shot when it is already a string, otherwise draw a random value from l_goal
g = pd.Series(train[train.is_goal == 1].type_of_shot.apply(lambda x: x if type(x)==str else np.random.choice(l_goal,1,p=p_goal)[0]))
g
# if is_goal is 1 and type_of_shot is null, fill it with the value of g at the matching index
train.loc[(train.is_goal == 1)&(train.type_of_shot.isnull()), 'type_of_shot'] = g
train['type_of_shot'].isna().sum() # number of missing values got reduced from more than 15k to 6723
# #### The same approach is applied to the rows where there was no goal
l_no_goal = train[train.is_goal == 0].type_of_shot.value_counts().head(5).keys() # Top five shots when it was not a goal
p_no_sum = train[train.is_goal == 0].type_of_shot.value_counts().head(5).sum()
p_no_goal = (train[train.is_goal == 0].type_of_shot.value_counts().head(5) / p_no_sum ).tolist() # Their respective probabilities
ng = pd.Series(train[train.is_goal == 0].type_of_shot.apply(lambda x: x if type(x)==str else np.random.choice(l_no_goal,1,p=p_no_goal)[0]))
train.loc[(train.is_goal == 0)&(train.type_of_shot.isnull()), 'type_of_shot'] = ng
train['type_of_shot'].isna().sum() # number of missing values got reduced to zero
# Handling the remaining values in the test dataset with a similar approach
test.loc[test['type_of_shot'].isnull(), 'type_of_shot'] = pd.Series(test.loc[test['type_of_shot'].isnull(), 'type_of_shot'].apply(lambda x: x if type(x)==str else np.random.choice(['shot - 39', 'shot - 36', 'shot - 4'],1,p=[0.37377133988618727, 0.33419555095706155, 0.2920331091567512])[0]))
test['type_of_shot'].isna().sum() # we have removed the missing values from test set as well
# ### Label Encoding the Object type Columns
# %%time
# Labeling the categories with integers
for col in train.columns:
if train[col].dtypes == object: # if the column has categorical values
l_unique = train[col].unique() # find the unique values
        v_unique = np.arange(len(l_unique)) # integer codes from 0 to len(l_unique) - 1
train[col].replace(to_replace=l_unique, value=v_unique, inplace=True) # replace the categorical values with numerical values
        train[col] = train[col].astype('int') # cast to an integer dtype
# same has been done for test data as well
test[col].replace(to_replace=l_unique, value=v_unique, inplace=True)
test[col] = test[col].astype('int')
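# The replace-loop above works, but pandas ships the same integer coding as `factorize`;
# reusing the train-time `uniques` on the test column keeps the codes consistent, and
# categories unseen in train map to -1. A hedged sketch on toy values:

```python
import pandas as pd

train_col = pd.Series(['Goal Area', 'Mid Range', 'Goal Area', 'Goal Line'])
codes, uniques = pd.factorize(train_col)    # integer codes + the category order

test_col = pd.Series(['Mid Range', 'Goal Line'])
test_codes = uniques.get_indexer(test_col)  # same mapping as train; -1 if unseen
```

# This avoids hand-maintaining `l_unique`/`v_unique` pairs per column.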
# Dropping the unnecessary Columns
train.drop(['date_of_game'], axis=1, inplace=True)
train.head()
test.drop(['date_of_game'], axis=1, inplace=True)
test.head()
# Splitting the target column from the dataset
y = train.is_goal
y.head()
train.drop(['is_goal'], axis=1, inplace=True)
train.head()
test.drop(['is_goal'], axis=1, inplace=True)
test.head()
train.info() # we have converted all the categorical columns to numeric ones
train.isna().sum() # we don't have any missing values either; the data is ready to be fed to a machine learning model.
| CristianoRonaldo-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: ml_project_template_test_nuutti
# kernelspec:
# display_name: Python 3.8.5 (template test)
# language: python
# name: ml_project_template_test_nuutti
# ---
# default_exp data
# %load_ext lab_black
# nb_black if running in jupyter
# %load_ext autoreload
# automatically reload python modules if there are changes in the source code
# %autoreload 2
# hide
from nbdev.showdoc import *
# # Data
#
# > You should begin your work by cleaning up your data and possibly defining tools for doing it repeatedly.
#
#
# ***input***: raw data
#
# ***output***: clean and tidy dataset + toy dataset for testing
#
# ***description:***
#
# This is the first notebook of your machine learning project. In this notebook, you will load the data, inspect, clean and make it tidy.
# You will define the data points and their features and labels. The output of this notebook is a clean, tidy dataset ready for analysis and machine learning.
# You can also do a basic statistical analysis of the data to better understand it.
# For any functions you define for handling the data, remember to mark their cells with `# export` -comment,
# so that they will be included in the data.py-module build based on this notebook.
# You can also include unit tests for your own functions.
#
# Rewrite this and the other text cells with your own descriptions.
# ## Import relevant modules
# +
import numpy as np
# your code here
# -
# ## Define notebook parameters
#
# Define input, output and additional parameters of this notebook, the information needed for running the notebook.
# In your own project, you can do this step in the later iterations of the work,
# when you know what is required.
# In this cell, only assign values to variables directly: `variable_name = value`.
# **Do not derive any information in this cell as it will mess up the parameterization** - do it in the cell below.
# This cell is tagged with 'parameters'
seed = 0
# your code here
# Define any immediate derivative operations from the parameters:
# set seed
np.random.seed(seed)
# your code here
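# `np.random.seed` sets the legacy global state; newer NumPy code often uses a local
# Generator instead, which keeps reproducibility contained to the notebook. A sketch:

```python
import numpy as np

seed = 0
rng = np.random.default_rng(seed)           # local generator, independent of the global state
sample = rng.integers(0, 10, size=5)
same = np.random.default_rng(seed).integers(0, 10, size=5)  # same seed, same sequence
```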
# ## Load the data
# +
# your code here
# -
# ## Describe the data
#
# Define data points, features and labels
# +
# your code here
# -
# ## Clean the data and make it tidy
# +
# your code here
# -
# ## Visualize the data
# +
# your code here
# -
# ## Intermediate conclusions based on data visualization
#
# what do you see?
# ## Shuffle Dataset
# +
# your code here
# -
# ## Save clean and tidy data for further use
# +
# your code here
# -
# ## Create small toy dataset for developing and testing the ML methods
# +
# your code here
# -
# save the toy dataset:
# +
# your code here
# -
# ## You can now move on to the model notebook!
| 00_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Uniform Distribution
# Used to describe a probability distribution in which every outcome has an equal chance of occurring.
#
# E.g. Generation of random numbers.
#
# It has three parameters:
#
# #### a - lower bound - default 0.0.
#
# #### b - upper bound - default 1.0.
#
# #### size - The shape of the returned array.
from numpy import random
x = random.uniform(size=(2,3))
x
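# The lower and upper bounds can also be passed explicitly (NumPy names them `low` and
# `high`, matching a and b above); values are drawn from the half-open interval [low, high):

```python
from numpy import random

x = random.uniform(low=2.0, high=5.0, size=(2, 3))
```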
# ### Visualization of Uniform Distribution
import matplotlib.pyplot as plt
import seaborn as sns
sns.kdeplot(random.uniform(size=1000), label='Uniform')  # distplot is deprecated in recent seaborn; kdeplot gives the same density curve
plt.show()
| Python Library/NumPy/Uniform_Distribution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Recursion Homework Problems
#
# This assignment is a variety of small problems to begin you getting used to the idea of recursion. They are not full-blown interview questions, but do serve as a great start for getting your mind "in the zone" for recursion problems.
#
#
# ______
# ### Problem 1
#
# **Write a recursive function which takes an integer and computes the cumulative sum of 0 to that integer**
#
# **For example, if n=4 , return 4+3+2+1+0, which is 10.**
#
# This problem is very similar to the factorial problem presented during the introduction to recursion. Remember, always think of what the base case will look like. In this case, we have a base case of n = 0 (note, you could also have designed the cut-off to be 1).
#
# In this case, we have:
# n + (n-1) + (n-2) + .... + 0
#
# Fill out a sample solution:
def rec_sum(n):
if n == 0:
return 0
else:
return n + rec_sum(n-1)
rec_sum(4)
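# As a sanity check, the recursion should match the closed form n(n+1)/2 (the function is
# restated here so the cell runs on its own):

```python
def rec_sum(n):
    if n == 0:          # base case
        return 0
    return n + rec_sum(n - 1)

# the cumulative sum of 0..n is the triangular number n*(n+1)/2
checks = [rec_sum(n) == n * (n + 1) // 2 for n in range(10)]
```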
# ______
# ### Problem 2
#
# **Given an integer, create a function which returns the sum of all the individual digits in that integer. For example:
# if n = 4321, return 4+3+2+1**
def sum_func(n):
if(n<10):
return n
else:
return n%10 + sum_func(int(n/10))
sum_func(4321)
# *Hints:*
# You'll need to use modulo
4321%10
4321 // 10  # integer division drops the last digit
# We'll need to think of this function recursively by knowing that:
# 4502 % 10 + sum_func(4502 // 10)
# ________
# ### Problem 3
# *Note, this is a more advanced problem than the previous two! It also has a lot of variation possibilities and we're ignoring strict requirements here.*
#
# Create a function called word_split() which takes in a string **phrase** and a set **list_of_words**. The function will then determine if it is possible to split the string in a way in which words can be made from the list of words. You can assume the phrase will only contain words found in the dictionary if it is completely splittable.
#
# For example:
word_split('themanran',['the','ran','man'])
word_split('ilovedogsJohn',['i','am','a','dogs','lover','love','John'])
word_split('themanran',['clown','ran','man'])
def word_split(phrase,list_of_words, output = None):
if output is None:
output = []
for word in list_of_words:
if phrase.startswith(word):
output.append(word)
word_split(phrase[len(word):], list_of_words, output)
return output
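# A quick check of the expected outputs (function restated so the cell is self-contained;
# note it returns a flat list of the matched words, in match order):

```python
def word_split(phrase, list_of_words, output=None):
    if output is None:
        output = []
    for word in list_of_words:
        if phrase.startswith(word):
            output.append(word)
            word_split(phrase[len(word):], list_of_words, output)
    return output

a = word_split('themanran', ['the', 'ran', 'man'])    # splits completely
b = word_split('themanran', ['clown', 'ran', 'man'])  # no word matches the start
```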
# ## Good Luck!
#
# Check out the Solutions Notebook once you're done!
| all_notebooks/15-Recursion/Recursion Homework Example Problems - PRACTICE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''venv'': venv)'
# language: python
# name: python3
# ---
# # ML 101
# ## 1. Probabilities
# ### Introduction
#
# - Statistics and probability theory constitute a branch of mathematics for dealing with uncertainty. Probability theory provides a basis for the science of statistical inference from data
# - Sample: (of size n) obtained from a mother population assumed to be represented by a probability distribution
# - Descriptive statistics: description of the sample
# - Inferential statistics: drawing a conclusion or making a decision about the population from a sample
#
# ### Probabilities
#
# A set of probability values for an experiment with sample space $S = \\{ O_1, O_2, \cdots, O_n \\}$ consists of some probabilities that satisfy: $$ 0 \leq p_i \leq 1, \hspace{0.5cm} i= 1,2, \cdots, n $$ and
# $$ p_1 +p_2 + \cdots +p_n = 1 $$
#
# The probability of outcome $O_i$ occurring is said to be $p_i$ and it is written:
#
# $$ P(O_i) = p_i $$
#
# In cases in which the $n$ outcomes are equally likely, then each probability will have a value of $\frac{1}{n}$
# ### Events
# - Events: subset of the sample space
# - The probability of an event $A$, $P(A)$, is obtained by the probabilities of the outcomes contained withing the event $A$
# - An event is said to occur if one of the outcomes contained within the event occurs
# - Complement of events: event $ A' $ is the event consisting of everything in the sample space $S$ that is not contained within $A$: $$
# P(A) + P(A') = 1$$
#
# ### Combinations of Events
#
# 1. Intersections
# - $A \cap B$ consists of the outcomes contained within both events $A$ and $B$
# - Probability of the intersection, $P(A \cap B) $, is the probability that both events occur simultaneously
# - Properties:
# - $P(A \cap B) +P(A \cap B') = P(A)$
# - Mutually exclusive events: if $A \cap B = \emptyset$
# - $A \cap (B \cap C) = (A \cap B) \cap C $
# 2. Union
# - Union of Events: $ A \cup B $ consists of the outcomes that are contained within at least one of the events $A$ and $B$
# - The probability of this event, $P (A \cup B)$ is the probability that at least one of these events $A$ and $B$ occurs
# - Properties:
# - If the events are mutually exclusive, then $P(A \cup B) = P(A) + P(B)$
# - $P( A \cup B) = P(A \cap B') + P(A' \cap B) + P(A \cap B)$
# - $P( A \cup B) = P(A) + P(B) - P(A \cap B)$
# - $P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B) - P( B \cap C) - P( A \cap C) + P(A \cap B \cap C)$
# ### Conditional Probability
# - Conditional Probability: of an event $A$ conditional on an event $B$ is:
# $$P(A \mid B) = \frac{P(A \cap B)}{P(B)} \hspace{0.5cm} \text{for } P(B) >0$$
# - Properties:
# - $P (A \mid B) = \frac{P(A \cap B)}{P(B)} \Longrightarrow P(A \cap B) = P(B)P (A \mid B)$
# - $P (A \mid B \cap C) = \frac{P(A \cap B \cap C)}{P(B \cap C)} \Longrightarrow P(A \cap B \cap C) = P(B \cap C)P (A \mid B \cap C)$
# - In general, for a sequence of events $A_1, A_2, \cdots, A_n$:
# $$P(A_1, A_2, \cdots, A_n) = P(A_1)P(A_2 \mid A_1)P(A_3 \mid A_1 \cap A_2) \cdots P(A_n \mid A_1 \cap \cdots \cap A_{n-1})$$
# - Two events A and B are independent if
# - $P(A \mid B) = P(A)$
# - $P(B \mid A) = P(B)$
# - $P(A \cap B) = P(A) \times P(B)$
# - Interpretation: events are independent if the knowledge about one event does not affect the probability of the other event
# ### Posterior Probabilities
# - Law of total probability: Given $\{ A_1, A_2, \cdots, A_n \}$ a partition of sample space $S$, the probability of an event $B$, $P(B)$ can be expressed as:
# $$P(B) = \sum_{i=1}^n P(A_i)P(B \mid A_i)$$
# - Bayes' Theorem: Given $\{ A_1, A_2, \cdots, A_n \}$ a partition of a sample space, then the posterior probabilities of the event $A_i$ conditional on an event $B$ can be obtained from the probabilities $P(A_i)$ and $P(A_i \mid B)$ using the formula:
# $$ P(A_i \mid B) = \frac{P(A_i)P(B \mid A_i)}{\sum_{j=1}^n P(A_j)P(B \mid A_j)}$$
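# A small numeric check of the law of total probability and Bayes' theorem, using exact
# fractions and a two-event partition (the numbers are made up for illustration):

```python
from fractions import Fraction as F

# partition: P(A1) = 1/4, P(A2) = 3/4; likelihoods P(B|A1) = 1/2, P(B|A2) = 1/10
p_a = [F(1, 4), F(3, 4)]
p_b_given_a = [F(1, 2), F(1, 10)]

# law of total probability: P(B) = sum_i P(A_i) P(B|A_i)
p_b = sum(pa * pb for pa, pb in zip(p_a, p_b_given_a))

# Bayes' theorem: P(A1|B) = P(A1) P(B|A1) / P(B)
posterior_a1 = p_a[0] * p_b_given_a[0] / p_b
```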
| classes/00 refresher/00_refresher_00.ipynb |
# # End to End Pure Streaming Data-Pipeline for Landlord Table Using Spark Structured Streaming on Databricks
# ###### Description: In this notebook we read landlord state rows from incoming csv files into a streaming dataframe, transform (clean, cast, rename) the data, and add/update the latest state in a Databricks Delta table
# ###### Objective: (incoming csv files) --> "landlord_streamingDF" --> "results_df" --> "landlord_data"
# +
import requests
import json
import optimus as op
import phonenumbers
import re
import datetime
import time
from pyspark.sql.types import *
from pyspark.sql.functions import udf
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext, Row
from pyspark.sql.functions import unix_timestamp, from_unixtime
from pyspark.sql import functions as F
from pyspark.sql.window import Window as W
from pyspark.sql import DataFrame
from pyspark.sql.functions import lit
from pyspark.sql.functions import rank, col
# +
# Schema for Landlord JSON
landlord_schema = StructType([
StructField("Landlord_id", IntegerType(), False),
StructField("Password", StringType(), True),
StructField("Landlord_name", StringType(), False),
StructField("Address_line_1", StringType(), False),
StructField("City", StringType(), False),
StructField("Post_code", StringType(), True),
StructField("Region", StringType(), True),
StructField("event_time", TimestampType(), True)])
landlord_udf_schema = StructType([
StructField("Password", StringType(), True),
StructField("Landlord_name", StringType(), False),
StructField("Address_line_1", StringType(), False),
StructField("City", StringType(), False),
StructField("Post_code", StringType(), True),
StructField("Region", StringType(), True),
StructField("event_time", TimestampType(), True)])
# -
# ###### Description: Get landlord csv files as a streaming "landlord_streamingDF" and process it on the fly and get transformed stream "landlord_df"
# ###### Objective: (incoming csv files) --> "landlord_streamingDF" --> "landlord_df"
# +
# Get landlord Steaming DataFrame from csv files
# streaming starts here by reading the input files
landlord_Path = "/FileStore/apartment/landlord/inprogress/"
landlord_streamingDF = (
spark
.readStream
.schema(landlord_schema)
.option("maxFilesPerTrigger", "1")
.option("header", "true")
.option("multiLine", "true")
.csv(landlord_Path)
)
# Clear invalid rows
landlord_df = landlord_streamingDF.select("*").where("Landlord_id IS NOT NULL")
# Instantiation of DataTransformer class:
transformer = op.DataFrameTransformer(landlord_df)
# Replace NA with 0's
transformer.replace_na(0.0, columns="*")
# Clear accents from all columns (ideally this would target only the name column)
transformer.clear_accents(columns='*')
# Remove special characters: From all Columns
transformer.remove_special_chars(columns=['Address_line_1', 'City', 'Post_code', 'Region'])
# -
# ##### This function parses the corresponding columns into a single column
# +
def my_fun(Password, Landlord_name, Address_line_1, City, Post_code, Region, event_time):
    return list(zip(Password, Landlord_name, Address_line_1, City, Post_code, Region, event_time))  # list() so the UDF returns a materialized array
udf_Fun = udf(my_fun, ArrayType(landlord_udf_schema))
# -
# Columns: Landlord_id, Password, Landlord_name, Address_line_1, City, Post_code, Region, event_time
intermediate_df = ( landlord_df.withWatermark("event_time", "10 seconds")
.groupBy("Landlord_id")
.agg(F.collect_list("Password").alias("Password"),
F.collect_list("Landlord_name").alias("Landlord_name"),
F.collect_list("Address_line_1").alias("Address_line_1"),
F.collect_list("City").alias("City"),
F.collect_list("Post_code").alias("Post_code"),
F.collect_list("Region").alias("Region"),
F.collect_list("event_time").alias("event_time"),
F.max("event_time").alias("latest_event_time"))
.select("Landlord_id",
F.explode(udf_Fun(F.column("Password"),
F.column("Landlord_name"),
F.column("Address_line_1"),
F.column("City"),
F.column("Post_code"),
F.column("Region"),
F.column("event_time")))
.alias("data"), "latest_event_time"))
# ##### Filter the data where event_time is latest
results_df = (intermediate_df
.select("Landlord_id",
"data.Password",
"data.Landlord_name",
"data.Address_line_1",
"data.City",
"data.Post_code",
"data.Region",
"data.event_time",
"latest_event_time")
.where("data.event_time=latest_event_time")).orderBy("Landlord_id")
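# The groupBy/collect_list/explode/filter chain above implements "keep the latest row per
# Landlord_id". Outside Spark the same logic is a short pandas expression, handy for checking
# the expected result on toy data (a sketch, not part of the streaming pipeline):

```python
import pandas as pd

df = pd.DataFrame({
    'Landlord_id': [1, 1, 2],
    'City': ['Leeds', 'York', 'Bath'],
    'event_time': pd.to_datetime(['2020-01-01', '2020-02-01', '2020-01-15']),
})

# sort by event time, then keep the last (latest) row per landlord
latest = df.sort_values('event_time').groupby('Landlord_id', as_index=False).last()
```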
# ##### Display final result
# ###### This result shows the latest state of every unique Landlord_id
display(results_df)
# ##### The cells below are optional, for external functionality or storage
# ###### Write the stream to a Databricks Delta table for storage
streaming_query = (results_df.writeStream
.format("delta")
.outputMode("complete")
.option("mergeSchema", "true")
.option("checkpointLocation", "/delta/apartment/landlord/_checkpoints/streaming-agg")
.start("/delta/apartment/landlord_data"))
# #### Read the Delta Table as a Static or Streaming DataFrame
# #### This dataframe will always be up to date
landlord_data = spark.read.format("delta").load("/delta/apartment/landlord_data").orderBy("Landlord_id")
display(landlord_data)
# ### Do Some Live Streaming Graphs
landlord_data_stream = spark.readStream.format("delta").load("/delta/apartment/landlord_data")
display(landlord_data_stream.groupBy("Region").count())
| Landlord_Apartment_E2E.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="-EMdoKtJd-5J" colab_type="text"
# # "GANs"
# > "Tensorflow implementation of Generative Adversarial Networks"
#
# - toc: false
# - branch: master
# - badges: true
# - comments: true
# - categories: [gans, mnist, tensorflow]
# - image: images/some_folder/your_image.png
# - hide: false
# - search_exclude: true
# -
# ### GANs use two networks competing against each other to generate data.
# 1) Generator: receives random noise (Gaussian distribution).<br>
# Outputs data (often an image)
# 2) Discriminator:<br>
# Takes a data set consisting of real images from the real data set and fake images from the generator<br>
# Attempts to classify real vs fake images (always binary classification)
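# The two training phases differ only in the labels handed to the discriminator: real-vs-fake
# targets in phase 1, and all-ones in phase 2 (the generator is rewarded when its fakes are
# classified as real). A minimal numpy sketch of those label vectors, mirroring y1 and y2 in
# the training loop further down:

```python
import numpy as np

batch_size = 4

# phase 1 (train discriminator): first half fake (0), second half real (1)
y1 = np.concatenate([np.zeros(batch_size), np.ones(batch_size)])

# phase 2 (train generator through the frozen discriminator): pretend the fakes are real
y2 = np.ones(batch_size)
```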
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist
(X_train,y_train), (X_test, y_test) = mnist.load_data()
plt.imshow(X_train[0])
y_train
# Filter data for faster training - Let us consider only 1 number out of the 10
only_zeros = X_train[y_train==0]
print(X_train.shape, only_zeros.shape)
plt.imshow(only_zeros[19])
import tensorflow as tf
from tensorflow.keras.layers import Dense,Reshape,Flatten
from tensorflow.keras.models import Sequential
# +
discriminator = Sequential()
discriminator.add(Flatten(input_shape=[28,28]))
discriminator.add(Dense(150,activation='relu'))
discriminator.add(Dense(100,activation='relu'))
# Final output layer
discriminator.add(Dense(1,activation='sigmoid'))
discriminator.compile(loss='binary_crossentropy',optimizer='adam')
# -
discriminator.summary()
# +
# Choose codings_size with the following in mind: 100 --> 150 --> 784
codings_size = 100
generator = Sequential()
generator.add(Dense(100,activation='relu',input_shape=[codings_size]))
generator.add(Dense(150,activation='relu'))
generator.add(Dense(784,activation='relu'))
# Discriminator expects shape of 28x28
generator.add(Reshape([28,28]))
# We do not compile the generator
GAN = Sequential([generator,discriminator])
# -
generator.summary()
GAN.summary()
GAN.layers
GAN.layers[1].summary()
discriminator.trainable = False # Shouldn't be trained in the second phase
GAN.compile(loss='binary_crossentropy',optimizer='adam')
# +
batch_size = 32
my_data = only_zeros
# -
dataset = tf.data.Dataset.from_tensor_slices(my_data).shuffle(buffer_size=1000)
type(dataset)
dataset = dataset.batch(batch_size,drop_remainder=True).prefetch(1)
epochs = 1
noise = tf.random.normal(shape=[batch_size,codings_size])
noise
generator(noise)
# +
generator, discriminator = GAN.layers
for epoch in range(10): # 10 is number of epochs
print(f"Currently on Epoch {epoch+1}")
i = 0
for X_batch in dataset:
i = i+1
if i%100 == 0:
print(f"\t Currently on batch number {i} of {len(my_data)//batch_size}")
# DISCRIMINATOR TRAINING PHASE
noise = tf.random.normal(shape=[batch_size,codings_size]) # GENERATOR gets to see only this random noise
gen_images = generator(noise)
X_fake_vs_real = tf.concat([gen_images, tf.dtypes.cast(X_batch,tf.float32)],axis=0)
        y1 = tf.constant([[0.0]]*batch_size + [[1.0]]*batch_size) # 0 corresponds to fake images, 1 to real
discriminator.trainable = True
discriminator.train_on_batch(X_fake_vs_real,y1)
# TRAIN GENERATOR
noise = tf.random.normal(shape=[batch_size,codings_size])
y2 = tf.constant([[1.0]]*batch_size)
discriminator.trainable = False
GAN.train_on_batch(noise,y2)
print("Training complete!")
# -
noise = tf.random.normal(shape=[10, codings_size])
noise.shape
plt.imshow(noise)
image = generator(noise)
image.shape
plt.imshow(image[5])
plt.imshow(image[1]) # Hence, model has undergone 'mode collapse'.
# ## DCGANs
X_train = X_train/255
X_train = X_train.reshape(-1, 28, 28, 1) * 2. - 1. # Because we will be using 'tanh' later
X_train.min()
X_train.max()
only_zeros = X_train[y_train==0]
only_zeros.shape
import tensorflow as tf
from tensorflow.keras.layers import Dense,Reshape,Dropout,LeakyReLU,Flatten,BatchNormalization,Conv2D,Conv2DTranspose
from tensorflow.keras.models import Sequential
# +
np.random.seed(42)
tf.random.set_seed(42)
codings_size = 100
# -
generator = Sequential()
generator.add(Dense(7 * 7 * 128, input_shape=[codings_size]))
generator.add(Reshape([7, 7, 128]))
generator.add(BatchNormalization())
generator.add(Conv2DTranspose(64, kernel_size=5, strides=2, padding="same",
activation="relu"))
generator.add(BatchNormalization())
generator.add(Conv2DTranspose(1, kernel_size=5, strides=2, padding="same",
activation="tanh"))
discriminator = Sequential()
discriminator.add(Conv2D(64, kernel_size=5, strides=2, padding="same",
activation=LeakyReLU(0.3),
input_shape=[28, 28, 1]))
discriminator.add(Dropout(0.5))
discriminator.add(Conv2D(128, kernel_size=5, strides=2, padding="same",
activation=LeakyReLU(0.3)))
discriminator.add(Dropout(0.5))
discriminator.add(Flatten())
discriminator.add(Dense(1, activation="sigmoid"))
GAN = Sequential([generator, discriminator])
discriminator.compile(loss="binary_crossentropy", optimizer="adam")
discriminator.trainable = False
GAN.compile(loss="binary_crossentropy", optimizer="adam")
GAN.layers
GAN.summary()
GAN.layers[0].summary()
batch_size = 32
my_data = only_zeros
dataset = tf.data.Dataset.from_tensor_slices(my_data).shuffle(buffer_size=1000)
type(dataset)
dataset = dataset.batch(batch_size, drop_remainder=True).prefetch(1)
epochs = 20
# +
# Grab the separate components
generator, discriminator = GAN.layers
# For every epoch
for epoch in range(epochs):
print(f"Currently on Epoch {epoch+1}")
i = 0
# For every batch in the dataset
for X_batch in dataset:
i=i+1
if i%20 == 0:
print(f"\tCurrently on batch number {i} of {len(my_data)//batch_size}")
#####################################
## TRAINING THE DISCRIMINATOR ######
###################################
# Create Noise
noise = tf.random.normal(shape=[batch_size, codings_size])
# Generate numbers based just on noise input
gen_images = generator(noise)
# Concatenate Generated Images against the Real Ones
        # To use tf.concat, the data types must match!
X_fake_vs_real = tf.concat([gen_images, tf.dtypes.cast(X_batch,tf.float32)], axis=0)
# Targets set to zero for fake images and 1 for real images
y1 = tf.constant([[0.]] * batch_size + [[1.]] * batch_size)
# This gets rid of a Keras warning
discriminator.trainable = True
# Train the discriminator on this batch
discriminator.train_on_batch(X_fake_vs_real, y1)
#####################################
## TRAINING THE GENERATOR ######
###################################
# Create some noise
noise = tf.random.normal(shape=[batch_size, codings_size])
        # We want the discriminator to believe that fake images are real
y2 = tf.constant([[1.]] * batch_size)
        # Avoids a warning
discriminator.trainable = False
GAN.train_on_batch(noise, y2)
print("TRAINING COMPLETE")
# -
noise = tf.random.normal(shape=[10, codings_size])
noise.shape
plt.imshow(noise)
images = generator(noise)
single_image = images[0]
for image in images:
plt.imshow(image.numpy().reshape(28,28))
plt.show()
# Hence, mode collapse has been prevented.
plt.imshow(single_image.numpy().reshape(28,28))
| _notebooks/2020-10-01-Generative Adversarial Networks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] deletable=false editable=false
# 
#
# Exercise material of the MSc-level course **Numerical Methods in Geotechnical Engineering**.
# Held at Technische Universität Bergakademie Freiberg.
#
# Comments to:
#
# *Prof. Dr. <NAME>
# Chair of Soil Mechanics and Foundation Engineering
# Geotechnical Institute
# Technische Universität Bergakademie Freiberg.*
#
# https://tu-freiberg.de/en/soilmechanics
#
# +
import numpy as np
import matplotlib.pyplot as plt
#Some plot settings
import plot_functions.plot_settings
# -
# # Exercise 6 - Finite differences: soil column under gravity and top load
#
# 
#
# ## Governing differential equation
#
# Consider the equilibrium conditions on an infinitesimal element of the soil column:
#
# $$
# \uparrow: \quad F_z - \frac{\partial F_z}{\partial z}\text{d}z - F_z - \varrho g A \text{d}z = 0
# $$
#
# The vertical force is determined by $F_z = \sigma_{zz}A = E_\text{s} A \epsilon_{zz} = -E_\text{s} A u_{z,z}$. Therefore, the equilibrium conditions read:
#
# $$
# 0 = \left[ \frac{\partial}{\partial z} \left(E_\text{s} A \frac{\partial u_z}{\partial z}\right) - \varrho g A \right]\text{d}z
# $$
#
# Considering, that the equation has to hold for an arbitrary infinitesimal non-zero $\text{d}z$, we find the ordinary differential equation
#
# $$
# 0 = \frac{\partial}{\partial z} \left(E_\text{s} A \frac{\partial u_z}{\partial z}\right) - \varrho g A
# $$
#
# Nota bene: compare this to the differential equation for a rod ('Zugstab'): $(EAu')' + n = 0$. While the cross section of a rod can vary along its length, the cross-sectional area of a soil column is considered a constant, simplifying the equation further:
#
# $$
# 0 = \frac{\partial}{\partial z} \left(E_\text{s} \frac{\partial u_z}{\partial z}\right) - \varrho g
# $$
#
#
# Let's put this to test.
# ## Finite difference discretization
#
# We assume $E_\text{s} = \text{const.}$ and arrive at
#
# $$
# \frac{\partial^2 u_z}{\partial z^2} = \frac{\gamma}{E_\text{s}}
# $$
#
# In the finite difference method, we introduce a grid made up, in the one-dimensional case, of a series of points. The differentials are then evaluated by finite differences between values at these points:
#
# $$
# \frac{\partial u_z}{\partial z} \approx \frac{\Delta u_z}{\Delta z} = \frac{u_i - u_{i-1}}{z_i - z_{i-1}} = u_i'
# $$
#
# where backward differences were chosen (as opposed to forward or central differences). We also assume a constant $\Delta z$ here.
#
# A second derivative can likewise be approximated (now using forward differences):
#
# $$
# \frac{\partial^2 u_z}{\partial z^2} \approx u_i'' = \frac{u_{i+1}' - u_i'}{\Delta z}
# $$
#
# Substitution of the first derivatives yields:
#
# $$
# \frac{\partial^2 u_z}{\partial z^2} \approx u_i'' = \frac{u_{i+1} - 2 u_i + u_{i-1}}{\Delta z^2}
# $$
#
# Our differential equation has now transformed into a finite difference equation evaluated at a series of points $i$ as:
#
# $$
# u_{i+1} - 2 u_i + u_{i-1} = \Delta z^2 \frac{\gamma_i}{E_{\text{s},i}}
# $$
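# As a quick numerical aside (not part of the original derivation), the central-difference stencil can be sanity-checked against a smooth test function; $\sin$ is an arbitrary choice here:

```python
import numpy as np

def second_diff(u, z, h):
    """Central-difference approximation of u''(z) with step h."""
    return (u(z + h) - 2.0 * u(z) + u(z - h)) / h**2

approx = second_diff(np.sin, 1.0, 1e-4)
exact = -np.sin(1.0)  # (sin z)'' = -sin z
```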
# We collect all solutions $u_i$ and $z_i$ in a vector.
H = 10.
number_of_points = 10
Delta_z = H/(number_of_points - 1)
z = np.linspace(0,H,number_of_points)
# The right-hand side is identical at every point, unless the stiffness or the specific weight changes with depth.
gamma = 2600. * (1.-0.38) * 9.81 # N/m³
Es = 1.e8 #Pa
RHS = np.ones(number_of_points) * gamma/Es * Delta_z**2
RHS
# The left hand side can be expressed via the finite difference matrix $A_{ij}$, so that the equation system finally can be written as $A_{ij} u_{j} = b_i$ for a system with $n=10$ points:
#
# $$
# \begin{pmatrix}
# -2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
# 1 & -2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
# 0 & 1 & -2 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
# 0 & 0 & 1 & -2 & 1 & 0 & 0 & 0 & 0 & 0\\
# 0 & 0 & 0 & 1 & -2 & 1 & 0 & 0 & 0 & 0\\
# 0 & 0 & 0 & 0 & 1 & -2 & 1 & 0 & 0 & 0\\
# 0 & 0 & 0 & 0 & 0 & 1 & -2 & 1 & 0 & 0\\
# 0 & 0 & 0 & 0 & 0 & 0 & 1 & -2 & 1 & 0\\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -2 & 1\\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -2\\
# \end{pmatrix}
# \begin{pmatrix}
# u_0 \\ u_1 \\ u_2 \\ u_3 \\ u_4 \\ u_5 \\ u_6 \\ u_7 \\ u_8 \\ u_9
# \end{pmatrix}
# =
# \Delta z^2
# \begin{pmatrix}
# \frac{\gamma_0}{E_{\text{s},0}} \\ \frac{\gamma_1}{E_{\text{s},1}} \\ \frac{\gamma_2}{E_{\text{s},2}} \\ \frac{\gamma_3}{E_{\text{s},3}} \\ \frac{\gamma_4}{E_{\text{s},4}} \\ \frac{\gamma_5}{E_{\text{s},5}} \\ \frac{\gamma_6}{E_{\text{s},6}} \\ \frac{\gamma_7}{E_{\text{s},7}} \\ \frac{\gamma_8}{E_{\text{s},8}} \\ \frac{\gamma_9}{E_{\text{s},9}}
# \end{pmatrix}
# $$
#
# Let's build this matrix now ...
Central_Diag = np.diag([-2.]*number_of_points,0)
Central_Diag
Upper_Diag = np.diag([1.]*(number_of_points-1),1)
Lower_Diag = np.diag([1.]*(number_of_points-1),-1)
A = Central_Diag + Upper_Diag + Lower_Diag
A
# Now we have everything we need: we have the load vector (right-hand side), the system matrix (vertex connectivity and second derivative), and can solve for the vector of unknown vertex displacements. Let's give it a shot:
u = np.linalg.solve(A,RHS)
plt.plot(u*100,z)
plt.xlabel('$u_z$ / cm')
plt.ylabel('$z$ / m');
# That doesn't look plausible. What's missing are the boundary conditions:
#
# $$
# u (z = 0) = 0 \qquad u'(z=H) = 0
# $$
#
# They can be integrated easily by manipulating the first and last equations (rows).
#
# Let's start with the lower boundary condition, $u(z=0) = u_0 = 0$. Thus, we want the first equation to yield $u_0 = 0$ which can be achieved by manipulating the RHS and System matrix such that the first equation reads
#
# $$
# \begin{pmatrix}
# 1 & 0 & \dots & 0
# \end{pmatrix}
# \begin{pmatrix}
# u_0
# \\
# \vdots
# \\
# u_9
# \end{pmatrix}
# =
# 0
# $$
#
# Nota bene: if a non-zero displacement were given as the boundary condition, the RHS entry would have to be set to that given value.
#lower bc
A[0,:] = 0.
A[0,0] = 1.
RHS[0] = 0.
# Now we get to the upper boundary condition, $u'(z=H) = 0$. In other words, we want the slope of the $u_z - z$ curve to be zero at the upper end. Because of
#
# $$
# 0 = u'(z=H) \approx \frac{u_9 - u_8}{\Delta z}
# $$
#
# this means that the displacements of the two final vertices should be equal. Thus, we want the final equation to yield $u_8 - u_9 = 0$ which can be achieved by manipulating the RHS and system matrix such that the last equation reads
#
# $$
# \begin{pmatrix}
# 0 & \dots & 0 & 1 & -1
# \end{pmatrix}
# \begin{pmatrix}
# u_0
# \\
# \vdots
# \\
# u_7
# \\
# u_8
# \\
# u_9
# \end{pmatrix}
# =
# 0
# $$
#
# Nota bene: if a non-zero displacement were given as the boundary condition, the RHS entry would have to be set to that given value.
#upper bc
A[-1,-1] = -1.
RHS[-1] = 0.
# Let's check whether the manipulations were successful:
A
RHS
# Let's solve the system again with the newly implemented boundary conditions.
u = np.linalg.solve(A,RHS)
plt.plot(u*100,z)
plt.xlabel('$u_z$ / cm')
plt.ylabel('$z$ / m');
# This looks plausible. Let's go ahead and perform a basic exploration of the numerical properties of the scheme.
#
# We first define a simulation run and then vary the number of grid points.
def run_sim(number_of_points):
#discretization
Delta_z = H/(number_of_points - 1)
z = np.linspace(0,H,number_of_points)
#RHS
RHS = np.ones(number_of_points) * gamma/Es * Delta_z**2
#FD Matrix
A = np.diag([-2.]*number_of_points,0) + np.diag([1.]*(number_of_points-1),1) + np.diag([1.]*(number_of_points-1),-1)
#lower bc
A[0,:] = 0.
A[0,0] = 1.
RHS[0] = 0
#upper bc
A[-1,-1] = -1.
RHS[-1] = 0.
#solution
return np.linalg.solve(A,RHS), z
# We plot the numerical result against the analytical solution
#
# $$
# u_z(z) = \frac{\varrho gH^2}{2E_\text{s}} \left[ \left(\frac{z}{H}\right)^2 - 2 \frac{z}{H} \right]
# $$
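# As a short aside, the closed-form solution can be verified against the ODE $u'' = \gamma/E_\text{s}$ and both boundary conditions by finite differences:

```python
# Check u(0) = 0, u'(H) = 0 and u'' = gamma/Es for the closed-form solution
H, Es = 10.0, 1.0e8
gamma = 2600.0 * (1.0 - 0.38) * 9.81

def u_exact(z):
    return ((z / H)**2 - 2.0 * z / H) * gamma * H**2 / (2.0 * Es)

h = 1e-4
slope_at_H = (u_exact(H) - u_exact(H - h)) / h  # backward difference at z = H
curvature = (u_exact(5.0 + h) - 2.0 * u_exact(5.0) + u_exact(5.0 - h)) / h**2
```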
# analytical solution
reference_z = np.linspace(0,H,100)
reference_solution = ((reference_z/H)**2 - 2.*reference_z/H) * gamma * H**2 / (2.*Es)
# +
## Interactive playground
from ipywidgets import widgets
from ipywidgets import interact
#Compute reference solution with 100 cells
@interact(num_nodes=widgets.IntSlider(min=5, max=100, value=10, step=1, description='nodes'))
def plot(num_nodes=10):
fig,ax = plt.subplots()
ax.set_ylabel('$z$ / m')
ax.set_xlabel('$u_z$ / cm')
solution, z = run_sim(num_nodes)
ax.plot(reference_solution*1e2, reference_z, '--', color='k', label='reference solution')
ax.plot(solution*1e2,z,label='solution')
ax.set_ylim([0,10])
plt.show()
# -
# The convergence rate of this simple scheme is rather low. A comparatively simple finite element scheme converges much faster, as we shall see in one of the following exercises.
#
# If we plot the convergence rate of the surface settlements with respect to the grid resolution, we observe a linear behaviour:
#HIDDEN
fig, ax = plt.subplots()
disc_n = np.logspace(1,3,20)
analytical_top = reference_solution[-1]
for nn in disc_n:
numerical, z = run_sim(int(nn))
numerical_top = numerical[-1]
relerr = np.abs((numerical_top - analytical_top)/analytical_top)
ax.plot(int(nn),relerr,ls='',marker='d',color='blue')
ax.plot([1e2,3e2],[5e-3,5e-3/3],ls='-',color='black')
ax.text(3.1e2,5e-3/3,'1')
ax.plot([1e2,3e2],[5e-3,5e-3/9],ls='-',color='black')
ax.text(3.1e2,5e-3/9,'2')
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel('$n_\\mathrm{nodes}$')
ax.set_ylabel('$\\left| \\frac{u_z^\\mathrm{numerical} - u_z^\\mathrm{analytical}}{u_z^\\mathrm{analytical}} \\right|_{z=H}$',size=24)
fig.tight_layout();
| 06aa_soil_column_FDM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # User Interaction
#
# Until this point we have explicitly specified input values for variables (and constants) in a script; now let's leverage built-in functions that let us supply those values at run time. We'll revisit earlier examples, but this time we'll make them interactive. Instead of just computing and sending output, we want to read values into variables that may change from time to time. To do that, our script needs to be able to prompt us for information and display it on the screen.
#
# This whole process is the essence of user interaction, and from the simple examples herein, one builds more complex scripts.
# ## The `input()` method
#
# Consider and run the script below
MyName=input('What is your name ?')
print(MyName)
# The `input` method sent the string 'What is your name ?' to the screen, and then waited for the user to reply. Upon reply, the input supplied was captured and then placed into the variable named `MyName`.
#
# Then the next statement used the `print()` method to print the contents of `MyName` back to the screen. From this simple structure we can create quite useful input and output.
#
# As a matter of good practice, we should explicitly type the input variable, as shown below, which takes the input stream and converts it into a string.
MyName=str(input('What is your name ?'))
print(MyName)
# Below we prompt for a second input, in this case the user's age, which will be converted to an integer. As a side note, we are not error checking, so if an input stream that cannot be made into an integer is supplied we will get an exception.
MyAge=int(input('How old are you ? '))
print(MyAge)
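# As hinted above, bad input can be trapped instead of raising an exception; here is a small sketch (the helper name `parse_age` is an invention for illustration):

```python
def parse_age(raw):
    """Convert a raw input string to int, returning None when it is not a number."""
    try:
        return int(raw)
    except ValueError:
        return None

# e.g. parse_age(input('How old are you ? ')) never raises on bad input
```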
# ## The `print()` method
#
# The `print()` function is used to display information to users.
# It accepts zero or more expressions as parameters, separated by commas.
#
# Consider the statement below, how many parameters are in the parameter list?
print ("Hello World, my name is", MyName, "and I am", MyAge, "years old.")
# There are five parameters;
#
# 1. "Hello World, my name is"
# 2. MyName
# 3. "and I am"
# 4. MyAge
# 5. "years old"
#
# Three of the parameters are string literals and are enclosed in quote marks, two are variables that are rendered as strings.
# ## The `%` operator
# Strings can be formatted using the `%` operator. This gives you greater control over how you want your string to be displayed and stored. The syntax for using the `%` operator is “string to be formatted” %(values or variables to be inserted into string, separated by commas)
#
# An example using the string constructor (`%`) form using a placeholder in the print function call is:
print ("Hello World, my name is %s and I am %s years old." %(MyName,MyAge))
# Notice the syntax above. The contents of the two variables are placed in the locations within the string indicated by the `%s` symbol, the tuple (MyName,MyAge) is parsed using this placeholder and converted into a string by the trailing `s` in the `%s` indicator.
#
# See what happens if we change the second `%s` into `%f` and run the script:
print ("Hello World, my name is %s and I am %f years old." %(MyName,MyAge))
# The change to `%f` turns the rendered tuple value into a float. Using these structures gives us a lot of output flexibility.
#
# ## The `format()` method
#
# Similar to the `%` operator structure there is a `format()` method. Using the same example, the `%s` symbol is replaced by a pair of curly brackets `{}` playing the same placeholder role, and the `format` keyword precedes the tuple as
print ("Hello World, my name is {} and I am {} years old.".format(MyName,MyAge))
# Observe that the keyword `format` is joined to the string with dot notation, because `format` is a method available on every string object.
#
# In this example the arguments to the method are the two variables, but other arguments and decorators are possible allowing for elaborate outputs.
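# The curly-bracket placeholders also accept format specifications controlling width, padding, and precision; a few illustrative values (invented for this example, not tied to the variables above):

```python
name, age = "Ada", 36  # example values for illustration
print("Name: {:<10}|".format(name))   # left-align in a field 10 characters wide
print("Age: {:03d}".format(age))      # zero-pad to three digits
print("Pi: {:.3f}".format(3.14159))   # round to three decimal places
```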
#
# ## Triple quotes
#
# If you need to display a long message, you can use the triple-quote symbol (‘’’ or “””) to span the message over multiple lines. For instance:
print ('''Hello World, my name is {} and I am a restaurant
that is over {} years old. We serve sodium chloride infused
lipids in a variety of shapes'''.format(MyName,MyAge))
# ## Escape Characters
#
# Sometimes we may need to print some special “unprintable” characters such as a tab or a newline.
# In this case, you need to use the `\` (backslash) character to escape characters that otherwise have a different meaning. For instance to print a tab, we type the backslash character before the letter t, like this `\t` using our same example we have:
print ("Hello\t World, my name is {} and I am {} years old.".format(MyName,MyAge))
# Here are a few more examples:
#newline after World
print ("Hello World\n, my name is {} and I am {} years old.".format(MyName,MyAge))
# backslash after World
print ("Hello World\\, my name is {} and I am {} years old.".format(MyName,MyAge))
# embedded quotes in the string literal
print ("I am 5'9\" tall")
# If you do not want characters preceded by the `\` character to be interpreted as special characters, you can use raw strings by adding an `r` before the first quote.
# For instance, if you do not want `\t` to be interpreted as a tab in the string literal "Hello\tWorld", you would type
print(r"Hello\tWorld")
# ## Readings
#
# 1. Learn Python in One Day and Learn It Well. Python for Beginners with Hands-on Project. (Learn Coding Fast with Hands-On Project Book -- Kindle Edition by LCF Publishing (Author), <NAME> [https://www.amazon.com/Python-2nd-Beginners-Hands-Project-ebook/dp/B071Z2Q6TQ/ref=sr_1_3?dchild=1&keywords=learn+python+in+a+day&qid=1611108340&sr=8-3](https://www.amazon.com/Python-2nd-Beginners-Hands-Project-ebook/dp/B071Z2Q6TQ/ref=sr_1_3?dchild=1&keywords=learn+python+in+a+day&qid=1611108340&sr=8-3)
#
# 2. Learn Python the Hard Way (Online Book) (https://learnpythonthehardway.org/book/) Recommended for beginners who want a complete course in programming with Python.
#
# 3. How to Learn Python for Data Science, The Self-Starter Way (https://elitedatascience.com/learn-python-for-data-science)
#
# 4. String Literals [https://bic-berkeley.github.io/psych-214-fall-2016/string_literals.html](https://bic-berkeley.github.io/psych-214-fall-2016/string_literals.html)
#
# 5. Tutorial on `input()` and `print()` functions [https://www.programiz.com/python-programming/input-output-import](https://www.programiz.com/python-programming/input-output-import)
| site/1-programming/7-userinteraction/userinteraction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# internal function
#https://docs.python.org/ko/3.8/library/functions.html
# -
abs(-3)
all([True,False,True,True])
all([True,True,True,True])
any([True,False,False])
any([False,False,False])
ascii('Python is interesting')
ascii('Pythonは面白いです')
ascii('파이썬은 재미있습니다')
# +
class Slice:
def __index__(self):
return 10
slice = Slice()
print(bin(10))
print(bin(slice))
format(14, '#b'), format(14,'b')
f'{14:#b}', f'{14:b}'
# +
#dunder (double underscore) method example
#https://corikachu.github.io/articles/python/python-magic-method
class NumBox:
# cls : declared via the @classmethod decorator
# self : the instance object
def __new__(cls,*args,**kargs):
if len(args)<1:
return None
else:
return super(NumBox, cls).__new__(cls) # return the new object
def __init__(self, num=None):
self.num = num # store the received argument num as an instance variable
def __repr__(self):
return str(self.num)
# +
from decimal import *
from fractions import *
#It prints 0j to indicate that it's still a complex value (complex number)
bool_false = [None, False, 0,0,0,0j,Decimal(0),Fraction(0,1)]
bool_false_empty_set = ['',(),[],{},set(),range(0)]
for i in bool_false:
print(bool(i))
for i in bool_false_empty_set:
print(bool(i))
# -
breakpoint()
# +
a = bytearray(b'hello')
for i in a:
print(i)
# -
a = bytes(b'hello')
print(a)
sample = 1
callable(sample)
# +
def funcSample():
print('sample')
fsample = funcSample
print(callable(funcSample))
print(callable(fsample))
# +
# __call__ class
class Sample():
def __call__(self):
print('sample')
# non __call__ class
class Calless():
print('sample')
sample_inst = Sample()
calless_inst = Calless()
print(callable(Sample))
print(callable(sample_inst))
print(callable(Calless))
print(callable(calless_inst))
# +
print(ord('김'))
print(ascii('김'))
chr(44608)
# unicode range 0 ~ 0X10FFFF
chr(0xac00)
# -
class C:
@classmethod
def f(cls, arg1,arg2):
print('t')
c = C()
c.f(1,2)
C.f(1,2)
code = compile('a+1','<string>','eval')
a=1
a = eval(code)
print(a)
complex(3,3)
# +
class del_sample:
def __init__(self, x):
self.x = x
del_sam = del_sample(1)
del_sam.x
delattr(del_sam,'x')
# -
getattr(del_sam,'x')
print(del_sam.x)
mydict = dict()
# +
import struct
dir() # show the names in the module namespace
dir(struct) # show the names in the struct module
class Shape():
def __dir__(self):
return ['area','perimeter','location']
s = Shape()
dir(s)
# -
print(divmod(230,11))
type(divmod(230,11))
# +
seasons = ['Spring','Summer','Fall','Winter']
print(list(enumerate(seasons)))
print(list(enumerate(seasons,start=1)))
for i in enumerate(seasons):
print(type(i))
# +
#filter
def func(x):
if x>0:
return x
else:
return None
list(filter(func, range(-5,10)))
list(filter(lambda x: x>0, range(-5,10))) #
[i for i in range(-5,10) if i>0] #equivalent list comprehension
# -
float('+1.23')
float(' -12345\n')
float('3e-0003')
float('3e-00003')
float('+1E6')
# +
#format with an explicit format spec
format(3.14159, '.2f')
# -
vars()
zip()
| python/pyinterplay.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ipavlopoulos/toxic_spans/blob/master/ToxicSpans_SemEval21.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="a3d9H9zwCr5X" colab_type="text"
# # Download the data and the code
# + id="mFfkvCfweiHk" colab_type="code" colab={}
from ast import literal_eval
import pandas as pd
import random
# + id="DCmZoSzEDb-K" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 134} outputId="42ca886e-60e2-4500-d4b5-6d06c7d1f85c"
# !git clone https://github.com/ipavlopoulos/toxic_spans.git
from toxic_spans.evaluation.semeval2021 import f1
# + id="PzgAd3i0es4L" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 77} outputId="671359a6-6cf3-4202-bf1b-bd0252d92968"
tsd = pd.read_csv("toxic_spans/data/tsd_trial.csv")
tsd.spans = tsd.spans.apply(literal_eval)
tsd.head(1)
# + [markdown] id="YbqSdWO5tNTQ" colab_type="text"
# ### Run a random baseline
# * Returns random offsets as toxic per text
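# The span-level F1 used for scoring treats predicted and gold character offsets as sets; below is a minimal re-implementation sketch (the official `f1` imported above remains authoritative):

```python
def span_f1(pred, gold):
    """Character-offset F1 between two lists of indices."""
    pred, gold = set(pred), set(gold)
    if not pred and not gold:
        return 1.0  # both empty: perfect agreement
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)
```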
# + id="1m33iwnNeuFS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="3ecdcaad-30b7-4ebe-a2c2-f090e3aa4290"
# make an example with a taboo word
taboo_word = "fucking"
template = f"This is a {taboo_word} example."
# build a random baseline (yields offsets at random)
random_baseline = lambda text: [i for i, char in enumerate(text) if random.random()>0.5]
predictions = random_baseline(template)
# find the ground truth indices and print
gold = list(range(template.index(taboo_word), template.index(taboo_word)+len(taboo_word)))
print(f"Gold\t\t: {gold}")
print(f"Predicted\t: {predictions}")
# + id="hEmEzaf1fObx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="712880a1-46ac-4789-b329-3032553ef9bd"
tsd["random_predictions"] = tsd.text.apply(random_baseline)
tsd["f1_scores"] = tsd.apply(lambda row: f1(row.random_predictions, row.spans), axis=1)
tsd.head()
# + id="SmSy2j2PtWAr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="6d879341-1e1a-4e29-f466-314a6f67d030"
from scipy.stats import sem
_ = tsd.f1_scores.plot(kind="box")
print (f"F1 = {tsd.f1_scores.mean():.2f} ± {sem(tsd.f1_scores):.2f}")
# + [markdown] id="Laxfl78YtA3B" colab_type="text"
# ### Prepare the text file with the scores
# * Name it as `spans-pred.txt`.
# * Align the scores with the rows.
# + id="Rj0PTobdhHnf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 205} outputId="5c7925bd-5322-49f0-878f-e89a614fabf0"
# make sure that the ids match the ones of the scores
predictions = tsd.random_predictions.to_list()
ids = tsd.index.to_list()
# write in a prediction file named "spans-pred.txt"
with open("spans-pred.txt", "w") as out:
for uid, text_scores in zip(ids, predictions):
out.write(f"{str(uid)}\t{str(text_scores)}\n")
# ! head spans-pred.txt
# + [markdown] id="xMJ347K1sD49" colab_type="text"
# ### Zip the predictions
# * Take extra care to verify that only the predictions text file is included.
# * The text file should **not** be within any directory.
# * No other file should be included; the zip should only contain the txt file.
#
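# The archive can also be built from Python with the standard `zipfile` module, which makes it easy to guarantee that the txt file sits at the archive root (a sketch):

```python
import zipfile

def zip_predictions(txt_path="spans-pred.txt", zip_path="random_predictions.zip"):
    # arcname strips any directory prefix so only the bare file name is stored
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(txt_path, arcname="spans-pred.txt")
```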
# + id="h4-ALOt_kVo0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="3d4a2855-9598-4f3a-8177-9a122807986e"
# ! zip -r random_predictions.zip ./spans-pred.*
# + [markdown] id="FtA0drgYs4yf" colab_type="text"
# ###### Check by unzipping it: only a `spans-pred.txt` file should be created
# + id="gBwvxrqMkzQv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="af8830c7-b721-42b9-816e-ce8db4c276d5"
# ! rm spans-pred.txt
# ! unzip random_predictions.zip
# + [markdown] id="WPbS6GEnr9P6" colab_type="text"
# ### Download the zip and submit it to be assessed
# + id="gILyQibsm0zd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 17} outputId="efdd6150-3c8f-4423-ff00-127eb6b628be"
from google.colab import files
files.download("random_predictions.zip")
# + [markdown] id="Lf3BP-FZrhiD" colab_type="text"
# ### When the submission is finished click the `Download output from scoring step`
# * The submission may take a while, so avoid submitting at the last minute.
# * Download the output_file.zip and see your score in the respective file.
# + id="-JRM3dHur7IA" colab_type="code" colab={}
| ToxicSpans_SemEval21.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %cd ..
# # Prepare USPTO-sm and USPTO-lg for template-relevance prediction
# +
# if not already in the repo, download temprel-fortunato
# +
#export
import requests
def download_temprel_repo(save_path, chunk_size=128):
"downloads the template-relevance master branch"
url = "https://gitlab.com/mefortunato/template-relevance/-/archive/master/template-relevance-master.zip"
r = requests.get(url, stream=True)
with open(save_path, 'wb') as fd:
for chunk in r.iter_content(chunk_size=chunk_size):
fd.write(chunk)
def unzip(path):
"unzips a file given a path"
import zipfile
with zipfile.ZipFile(path, 'r') as zip_ref:
zip_ref.extractall(path.replace('.zip',''))
# -
path = './data/temprel-fortunato.zip'
download_temprel_repo(path)
unzip(path)
# ## install template-relevance from fortunato
# + language="bash"
# cd data/temprel-fortunato/template-relevance-master/
# pip install -e .
# -
# also make sure you have the right rdchiral version
# +
# #!pip install -e "git://github.com/connorcoley/rdchiral.git#egg=rdchiral"
# +
#export
# code from fortunato
# could also import from temprel.data.download import get_uspto_50k but slightly altered ;)
import os
import gzip
import pickle
import requests
import subprocess
import pandas as pd
def download_file(url, output_path=None):
if not output_path:
output_path = url.split('/')[-1]
with requests.get(url, stream=True) as r:
r.raise_for_status()
with open(output_path, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
if chunk:
f.write(chunk)
def get_uspto_480k():
if not os.path.exists('data'):
os.mkdir('data')
if not os.path.exists('data/raw'):
os.mkdir('data/raw')
os.chdir('data/raw')
download_file(
'https://github.com/connorcoley/rexgen_direct/raw/master/rexgen_direct/data/train.txt.tar.gz',
'train.txt.tar.gz'
)
subprocess.run(['tar', 'zxf', 'train.txt.tar.gz'])
download_file(
'https://github.com/connorcoley/rexgen_direct/raw/master/rexgen_direct/data/valid.txt.tar.gz',
'valid.txt.tar.gz'
)
subprocess.run(['tar', 'zxf', 'valid.txt.tar.gz'])
download_file(
'https://github.com/connorcoley/rexgen_direct/raw/master/rexgen_direct/data/test.txt.tar.gz',
'test.txt.tar.gz'
)
subprocess.run(['tar', 'zxf', 'test.txt.tar.gz'])
with open('train.txt') as f:
train = [
{
'reaction_smiles': line.strip(),
'split': 'train'
}
for line in f.readlines()
]
with open('valid.txt') as f:
valid = [
{
'reaction_smiles': line.strip(),
'split': 'valid'
}
for line in f.readlines()
]
with open('test.txt') as f:
test = [
{
'reaction_smiles': line.strip(),
'split': 'test'
}
for line in f.readlines()
]
df = pd.concat([
pd.DataFrame(train),
pd.DataFrame(valid),
pd.DataFrame(test)
]).reset_index()
df.to_json('uspto_lg_reactions.json.gz', compression='gzip')
os.chdir('..')
os.chdir('..')
return df
def get_uspto_50k():
'''
get SI from:
# <NAME>; <NAME>; <NAME>; <NAME>. J. Chem. Inf. Model. 2015, 55 (1), 39-53
'''
if not os.path.exists('data'):
os.mkdir('data')
if not os.path.exists('data/raw'):
os.mkdir('data/raw')
os.chdir('data/raw')
subprocess.run(['wget', 'https://pubs.acs.org/doi/suppl/10.1021/ci5006614/suppl_file/ci5006614_si_002.zip'])
subprocess.run(['unzip', '-o', 'ci5006614_si_002.zip'])
data = []
with gzip.open('ChemReactionClassification/data/training_test_set_patent_data.pkl.gz') as f:
while True:
try:
data.append(pickle.load(f))
except EOFError:
break
reaction_smiles = [d[0] for d in data]
reaction_reference = [d[1] for d in data]
reaction_class = [d[2] for d in data]
df = pd.DataFrame()
df['reaction_smiles'] = reaction_smiles
df['reaction_reference'] = reaction_reference
df['reaction_class'] = reaction_class
df.to_json('uspto_sm_reactions.json.gz', compression='gzip')
os.chdir('..')
os.chdir('..')
return df
def get_uspto_golden():
""" get uspto golden and convert it to smiles dataframe from
Lin, Arkadii; Dyubankova, Natalia; Madzhidov, Timur; Nugmanov, Ramil;
Rakhimbekova, Assima; Ibragimova, Zarina; Akhmetshin, Tagir; Gimadiev,
Timur; Suleymanov, Rail; Verhoeven, Jonas; Wegner, <NAME>;
Ceulemans, Hugo; <NAME> (2020):
Atom-to-Atom Mapping: A Benchmarking Study of Popular Mapping Algorithms and Consensus Strategies.
ChemRxiv. Preprint. https://doi.org/10.26434/chemrxiv.13012679.v1
"""
if os.path.exists('data/raw/uspto_golden.json.gz'):
print('loading precomputed')
return pd.read_json('data/raw/uspto_golden.json.gz', compression='gzip')
if not os.path.exists('data'):
os.mkdir('data')
if not os.path.exists('data/raw'):
os.mkdir('data/raw')
os.chdir('data/raw')
subprocess.run(['wget', 'https://github.com/Laboratoire-de-Chemoinformatique/Reaction_Data_Cleaning/raw/master/data/golden_dataset.zip'])
subprocess.run(['unzip', '-o', 'golden_dataset.zip']) #return golden_dataset.rdf
from CGRtools.files import RDFRead
import CGRtools
from rdkit.Chem import AllChem
def cgr2rxnsmiles(cgr_rx):
smiles_rx = '.'.join([AllChem.MolToSmiles(CGRtools.to_rdkit_molecule(m)) for m in cgr_rx.reactants])
smiles_rx += '>>'+'.'.join([AllChem.MolToSmiles(CGRtools.to_rdkit_molecule(m)) for m in cgr_rx.products])
return smiles_rx
data = {}
input_file = 'golden_dataset.rdf'
do_basic_standardization=True
print('reading and converting the rdf-file')
with RDFRead(input_file) as f:
while True:
try:
r = next(f)
key = r.meta['Reaction_ID']
if do_basic_standardization:
r.thiele()
r.standardize()
data[key] = cgr2rxnsmiles(r)
except StopIteration:
break
print('saving as a dataframe to data/uspto_golden.json.gz')
df = pd.DataFrame([data],index=['reaction_smiles']).T
df['reaction_reference'] = df.index
df.index = range(len(df)) #reindex
df.to_json('uspto_golden.json.gz', compression='gzip')
os.chdir('..')
os.chdir('..')
return df
# -
# ## run the scripts from temprel
# for more details see the documentation [readme](https://gitlab.com/mefortunato/template-relevance#step-1-extract-templates)
# an alternative is to run the script
# ```ssh
# python data/temprel-fortunato/template-relevance-master/bin/get_uspto_50k.py
# ```
# +
# #!python data/temprel-fortunato/template-relevance-master/bin/process.py --reactions data/raw/uspto_sm_reactions.json.gz --output-prefix uspto_sm
# +
# or this code ;)
import time
import argparse
import pandas as pd
from temprel.templates.extract import process_for_training, process_for_askcos, templates_from_reactions
reactions_sm = get_uspto_50k() ## get the dataset
templates_sm = templates_from_reactions(reactions_sm, nproc=50)
templates_sm.to_json('data/processed/uspto_sm_templates.df.json.gz', compression='gzip')
process_for_training(templates_sm, output_prefix='data/processed/uspto_sm_', calc_split='stratified')
# standardize templates
process_for_askcos(templates_sm, template_set_name='uspto_sm_', output_prefix='data/processed/uspto_sm_')
# -
# ### calculate the applicability matrix; this will also take some time
# if needed install mpi4py
# !conda install -c anaconda mpi4py -y
# +
#either run it with MPI (below) or single-process (the next cell)
# -
# !mpirun -n 30 python data/temprel-fortunato/template-relevance-master/bin/calculate_applicabilty.py \
# --templates data/processed/uspto_sm_retro.templates.uspto_sm_.json.gz \
# --train-smiles data/processed/uspto_sm_train.input.smiles.npy \
# --valid-smiles data/processed/uspto_sm_valid.input.smiles.npy \
# --test-smiles data/processed/uspto_sm_test.input.smiles.npy \
# --output-prefix data/processed/uspto_sm_
from mhnreact.data import load_templates
# load in the templates
t = load_templates('sm')
# +
# calculate applicability via substructure search -- fast way
import numpy as np
from mhnreact.molutils import smarts2appl
prods = np.array(templates_sm.products)
template_product_smarts = np.array([t[ti].split('>>')[-1] for ti in t])
# %time appl = smarts2appl(prods, template_product_smarts, njobs=60)
# -
# # let's do the same for the large dataset
# this might take a while ;) grab a coffee
# +
import time
import argparse
import pandas as pd
from temprel.templates.extract import process_for_training, process_for_askcos, templates_from_reactions
reactions_lg = get_uspto_480k() ## get the dataset
reactions_lg.drop(columns='index', inplace=True) #correcting for a lg specific bug
templates_lg = templates_from_reactions(reactions_lg, nproc=100)
templates_lg.to_json('data/processed/uspto_lg_templates.df.json.gz', compression='gzip')
process_for_training(templates_lg, output_prefix='data/processed/uspto_lg_', calc_split='stratified')
process_for_askcos(templates_lg, template_set_name='uspto_lg_', output_prefix='data/processed/uspto_lg_')
# +
#or run the script ;) --> won't work --> error with index col
# #!python data/temprel-fortunato/template-relevance-master/bin/process.py --reactions data/raw/uspto_lg_reactions.json.gz --nproc 100
# -
# !export PATH=$(pwd)/data/temprel-fortunato/template-relevance-master/bin:${PATH}
# +
# was used by fotrunato
# #!mpirun -n 60 --oversubscribe python
# -
# !mpirun -n 50 python data/temprel-fortunato/template-relevance-master/bin/calculate_applicabilty.py \
# --templates data/processed/uspto_lg_retro.templates.uspto_lg_.json.gz \
# --train-smiles data/processed/uspto_lg_train.input.smiles.npy \
# --valid-smiles data/processed/uspto_lg_valid.input.smiles.npy \
# --test-smiles data/processed/uspto_lg_test.input.smiles.npy \
# --output-prefix data/processed/uspto_lg_
# # finally the data can be loaded
import os
from mhnreact.data import *
os.listdir('data/processed/')
# load in the data
X, y = load_USPTO('sm',is_appl_matrix=False)
X['train'][0], y['train'][0]
# load in the applicability matrix
X, y_appl = load_USPTO('sm',is_appl_matrix=True)
# load in the templates
t = load_templates('sm')
| notebooks/01_prepro_uspto_sm_lg.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# UN_Geosheme_Subregion = ['Australia and New Zealand','Caribbean','Central America','Central Asia','Eastern Africa','Eastern Asia','Eastern Europe','Melanesia','Micronesia','Middle Africa','Northern Africa','Northern America','Northern Europe','Polynesia','South America','South-Eastern Asia','Southern Africa','Southern Asia','Southern Europe','Western Africa','Western Asia','Western Europe']
# -
import os
import pandas as pd
from pathlib import Path
from functools import reduce
DATASET_FOLDER = '../../datasets/tempetes/Temperatures_Projections'
arr = os.listdir(DATASET_FOLDER)
print(arr)
DATASET_FOLDER_AVG_RCP26 = '../../datasets/tempetes/Temperatures_Projections/AverageTemp/RCP26'
arr = os.listdir(DATASET_FOLDER_AVG_RCP26)
print(arr)
len(arr)
# # Concatenate avg, min, max temperatures into 4 distinct csv, each one for a scenario
# ### RCP26
# ### Average monthly temperatures
def get_concatenated_projections_avg(DATASET_FOLDER):
dir = Path(DATASET_FOLDER)
dfs_list = []
for f in dir.glob("*.csv"):
df = pd.read_csv(f)
df["ISO"] = str(f).split("_")[-1].split(".")[0]
dfs_list.append(df)
df_final = pd.concat(dfs_list)
df_final = df_final.rename(columns={"Monthly Temperature - (Celsius)":"avg_monthly_temp"})
return df_final
df_AVG_RCP26 = get_concatenated_projections_avg('../../datasets/tempetes/Temperatures_Projections/AverageTemp/RCP26')
df_AVG_RCP26.head()
df_AVG_RCP26.shape
df_AVG_RCP45 = get_concatenated_projections_avg('../../datasets/tempetes/Temperatures_Projections/AverageTemp/RCP45')
df_AVG_RCP45.head()
df_AVG_RCP60 = get_concatenated_projections_avg('../../datasets/tempetes/Temperatures_Projections/AverageTemp/RCP60')
df_AVG_RCP60.head()
df_AVG_RCP85 = get_concatenated_projections_avg('../../datasets/tempetes/Temperatures_Projections/AverageTemp/RCP85')
df_AVG_RCP85.head()
# ### Max temperatures
def get_concatenated_projections_max(DATASET_FOLDER):
dir = Path(DATASET_FOLDER)
dfs_list = []
for f in dir.glob("*.csv"):
df = pd.read_csv(f)
df["ISO"] = str(f).split("_")[-1].split(".")[0]
dfs_list.append(df)
df_final = pd.concat(dfs_list)
df_final = df_final.rename(columns={"Monthly Max-Temperature - (Celsius)":"max_monthly_temp"})
return df_final
df_MAX_RCP26 = get_concatenated_projections_max('../../datasets/tempetes/Temperatures_Projections/TempMax/RCP26')
df_MAX_RCP26.head()
df_MAX_RCP45 = get_concatenated_projections_max('../../datasets/tempetes/Temperatures_Projections/TempMax/RCP45')
df_MAX_RCP45.head()
df_MAX_RCP60 = get_concatenated_projections_max('../../datasets/tempetes/Temperatures_Projections/TempMax/RCP60')
df_MAX_RCP60.head()
df_MAX_RCP85 = get_concatenated_projections_max('../../datasets/tempetes/Temperatures_Projections/TempMax/RCP85')
df_MAX_RCP85.head()
# ### Min temperatures
def get_concatenated_projections_min(DATASET_FOLDER):
dir = Path(DATASET_FOLDER)
dfs_list = []
for f in dir.glob("*.csv"):
df = pd.read_csv(f)
df["ISO"] = str(f).split("_")[-1].split(".")[0]
dfs_list.append(df)
df_final = pd.concat(dfs_list)
df_final = df_final.rename(columns={"Monthly Min-Temperature - (Celsius)":"min_monthly_temp"})
return df_final
df_MIN_RCP26 = get_concatenated_projections_min('../../datasets/tempetes/Temperatures_Projections/TempMin/RCP26')
df_MIN_RCP26.head()
df_MIN_RCP45 = get_concatenated_projections_min('../../datasets/tempetes/Temperatures_Projections/TempMin/RCP45')
df_MIN_RCP45.head()
df_MIN_RCP60 = get_concatenated_projections_min('../../datasets/tempetes/Temperatures_Projections/TempMin/RCP60')
df_MIN_RCP60.head()
df_MIN_RCP85 = get_concatenated_projections_min('../../datasets/tempetes/Temperatures_Projections/TempMin/RCP85')
df_MIN_RCP85.head()
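The three avg/max/min helpers above differ only in the column they rename, so they could be collapsed into one parameterized function. A sketch, assuming the `..._<ISO>.csv` filename convention used in this notebook:

```python
from functools import reduce
from pathlib import Path
import pandas as pd

def get_concatenated_projections(folder, raw_col, new_col):
    """Concatenate every per-country CSV in `folder`, tagging each row with the
    ISO code parsed from the filename and renaming the temperature column."""
    dfs = []
    for f in Path(folder).glob("*.csv"):
        df = pd.read_csv(f)
        # filename convention assumed from the notebook: ..._<ISO>.csv
        df["ISO"] = f.stem.split("_")[-1]
        dfs.append(df)
    return pd.concat(dfs).rename(columns={raw_col: new_col})
```

Each of the twelve calls above then becomes a one-liner with the right folder and column names.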
# # Test sets prep
# ### RCP26
dfs_list = [df_AVG_RCP26, df_MIN_RCP26, df_MAX_RCP26]
temp_proj_rcp26 = reduce(lambda x, y: pd.merge(x, y, on = ["ISO", "Month", "Year"]), dfs_list)
temp_proj_rcp26.head()
# ### RCP45
dfs_list = [df_AVG_RCP45, df_MIN_RCP45, df_MAX_RCP45]
temp_proj_rcp45 = reduce(lambda x, y: pd.merge(x, y, on = ["ISO", "Month", "Year"]), dfs_list)
temp_proj_rcp45.head()
# ### RCP60
dfs_list = [df_AVG_RCP60, df_MIN_RCP60, df_MAX_RCP60]
temp_proj_rcp60 = reduce(lambda x, y: pd.merge(x, y, on = ["ISO", "Month", "Year"]), dfs_list)
temp_proj_rcp60.head()
# ### RCP85
dfs_list = [df_AVG_RCP85, df_MIN_RCP85, df_MAX_RCP85]
temp_proj_rcp85 = reduce(lambda x, y: pd.merge(x, y, on = ["ISO", "Month", "Year"]), dfs_list)
temp_proj_rcp85.head()
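The four per-scenario merges above all follow the same `reduce` + `pd.merge` pattern: each pairwise merge joins on the shared keys, and `reduce` chains the frames left to right. A toy illustration on single-row frames:

```python
from functools import reduce
import pandas as pd

# Toy frames sharing the merge keys used in the notebook (ISO, Month, Year).
avg = pd.DataFrame({"ISO": ["FRA"], "Month": [1], "Year": [2050], "avg_monthly_temp": [5.0]})
mn  = pd.DataFrame({"ISO": ["FRA"], "Month": [1], "Year": [2050], "min_monthly_temp": [1.0]})
mx  = pd.DataFrame({"ISO": ["FRA"], "Month": [1], "Year": [2050], "max_monthly_temp": [9.0]})

# reduce(f, [a, b, c]) computes f(f(a, b), c): merge avg with min, then with max.
merged = reduce(lambda x, y: pd.merge(x, y, on=["ISO", "Month", "Year"]), [avg, mn, mx])
```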
| model_tempetes/notebooks/.ipynb_checkpoints/Test sets preparation-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import linear_model
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
tickets_headers = ['PUR_DATE', 'TRV_DATE', 'TKT_NUM', 'CPN_NUM', 'CABIN', 'AIRLINE', 'ORG', 'DST', \
'FLT_NO', 'DEP_TIME', 'ARR_TIME', 'ARR_DATE', 'CPN_FARE']
tickets = pd.read_csv('ticket.txt', sep='\s+', lineterminator='\n', header=None, names=tickets_headers)
tickets['DEP_DATETIME'] = tickets['TRV_DATE'].astype(str) + ' ' + tickets['DEP_TIME'].astype(str)  # assuming TRV_DATE is the departure date ('DEP_DATE' is not in the header list)
tickets['ARR_DATETIME'] = tickets['ARR_DATE'].astype(str) + ' ' + tickets['ARR_TIME'].astype(str)
tickets['DEP_DATETIME'] = pd.to_datetime(tickets['DEP_DATETIME'], format='%Y%m%d %H%M')
tickets['ARR_DATETIME'] = pd.to_datetime(tickets['ARR_DATETIME'], format='%Y%m%d %H%M')
tickets = tickets.drop_duplicates()  # drop_duplicates returns a copy; assign it back
tickets.head()
len(tickets)
# tickets.info()
tickets['TCKT_COUNT'] = tickets['TKT_NUM']  # the header list defines 'TKT_NUM', not 'TCKT_NO'
result = tickets.groupby('TKT_NUM', as_index = False)['TCKT_COUNT'].count()
bins = np.arange(0, result['TCKT_COUNT'].max() + 1.5) - 0.5
sns.displot(result, x='TCKT_COUNT', bins=bins,)
tmp = {str(k): f.to_numpy().tolist() for k, f in tickets.groupby('TKT_NUM')}
tmp
# tickets.groupby('TCKT_NO', as_index=False)['ORG', 'DST'].count()
tmp = {str(k): f.to_numpy().tolist() for k, f in tickets.groupby('TKT_NUM')}
one_way_cnt, return_cnt = 0, 0
for ticket_no, legs in tmp.items():
    if legs[0][6] == legs[-1][7]:  # column 6 = ORG, column 7 = DST in the header list
return_cnt +=1
else:
one_way_cnt += 1
print(f'Return: {return_cnt}')
print(f'One-Way: {one_way_cnt}')
print(f'Total Tickets: {len(tmp)}')
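The return/one-way rule above compares the first leg's origin with the last leg's destination (positional columns 6 and 7). The same rule on dict-shaped toy legs, with field names chosen only for this illustration:

```python
# A ticket is a list of coupons (legs); it is a return trip when the
# first leg's origin equals the last leg's destination.
def classify_ticket(legs):
    return "return" if legs[0]["ORG"] == legs[-1]["DST"] else "one-way"
```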
# schedule = pd.read_csv('farzad-schedule.csv', header=0, parse_dates=schedule_dates)
schedule = pd.read_csv('farzad-schedule.csv', header=0)
schedule = schedule.drop_duplicates()  # assign the result back; drop_duplicates is not in-place
# schedule['DEP_DATETIME'] = schedule['DEPT_DATE'].astype(str) + ' ' + schedule['LOCAL_DEP_TIME'].astype(str)
# schedule['DEP_DATETIME'] = pd.to_datetime(schedule['DEP_DATETIME'], format= '%Y-%m-%d %H%M')
schedule.head()
# schedule.info()
schedule['DEPT_DATE'] = pd.to_datetime(schedule['DEPT_DATE'])
# schedule.info()
schedule.head()
# +
subset = schedule[:1000].copy()  # .copy() avoids a SettingWithCopyWarning on the next line
subset['OD'] = subset['ORG'] + '-' + subset['DST']
# result = schedule.groupby('OD', as_index = False)['SEATS'].sum()
tmp = {str(k2): {str(k1): f1.to_numpy().tolist() for k1, f1 in f2.groupby('CARRIER')} for k2, f2 in subset.groupby('OD')}
tmp['SEA-DFW']
od = 'SEA-DFW'
carrier = 'AA'
# input('Enter O-D' od)
# input('Carrier Name', carrier)
seats_idx = subset.columns.get_loc('SEATS')  # rows in tmp are plain lists, so look up SEATS positionally
for od, carriers in tmp.items():
    # total seats per carrier on this O-D, then the O-D total
    carrier_seats = {c: sum(row[seats_idx] for row in rows) for c, rows in carriers.items()}
    total_od_seats = sum(carrier_seats.values())
    if total_od_seats:
        for c, seats in carrier_seats.items():
            print(f'Share of {c} on {od}: {seats / total_od_seats:.2%}')
# -
schedule_subset = schedule[schedule['SEATS'] > 100]
schedule_subset
schedule_tmp = schedule.sample(n=4000, random_state=1)
# schedule_tmp.head()
# schedule_tmp.info()
sns.scatterplot(data=schedule_tmp, x='SUM_GCD_MILE', y='SEATS')
X = schedule_tmp['SUM_GCD_MILE'].values.reshape(-1, 1)
y = schedule_tmp['SEATS'].values.reshape(-1, 1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
regr = make_pipeline(StandardScaler(), SVR(C=1.0, epsilon=0.2))
# cv_scores = cross_val_score(regr, X_train,y_train, cv=5)
regr.fit(X_train, y_train.ravel())  # SVR expects a 1-D target array
y_pred = regr.predict(X_test)
sns.scatterplot(x=X_test.reshape(-1), y=y_pred)
def r2_from_scratch(y_true, y_predicted):
    # manual R^2 for reference (distinct name so the sklearn score below does not shadow it)
    sse = sum((y_true - y_predicted)**2)
    tse = (len(y_true) - 1) * np.var(y_true, ddof=1)
    return 1 - (sse / tse)
r2 = r2_score(y_test, y_pred)
print(f'R-squared: {r2}')
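The `tse` term in the manual R-squared above relies on the identity (n-1) * var(y, ddof=1) = sum((y - mean)^2), i.e. the total sum of squares. A quick numpy check:

```python
import numpy as np

y = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
tse_via_var = (len(y) - 1) * np.var(y, ddof=1)  # the notebook's formulation
tse_direct = np.sum((y - y.mean()) ** 2)        # total sum of squares, computed directly
```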
| farzad_iw_1/interview_1_prep.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import logging
logging.basicConfig(level=logging.DEBUG)
from afsk.ax25 import FCS
from bitarray import bitarray
import crc16
import struct
bs_header = b'\x82\xa0\xa4\xa6@@`\x88\xaa\x9a\x9a\xb2@`\xae\x92\x88\x8ab@b\xae\x92\x88\x8ad@c\x03\xf0'  # bytes literal, not str, under Python 3
bs_packet = b'\x82\xa0\xa4\xa6@@`\x88\xaa\x9a\x9a\xb2@`\xae\x92\x88\x8ab@b\xae\x92\x88\x8ad@c\x03\xf0:Test\xf5g'
unstuffed_body = '01000001000001010010010101100101000000100000001000000110000100010101010101011001010110010100110100000010000001100111010101001001000100010101000101000110000000100100011001110101010010010001000101010001001001100000001011000110110000000000111101011100001010101010011011001110001011101111101111110011'
stuffed_body = '0100000100000101001001010110010100000010000000100000011000010001010101010101100101011001010011010000001000000110011101010100100100010001010100010100011000000010010001100111010101001001000100010101000100100110000000101100011011000000000011110101110000101010101001101100111000101110111110011111010011'
# -
fcs = FCS()
str_bytes = b'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
bits = bitarray()
bits.frombytes(str_bytes)
for bit in bits:
fcs.update_bit(bit)
assert fcs.digest() == b'[\x07'
print("calculated FCS bytes:")
print("%r" % fcs.digest())
digest = bitarray(endian="little")
digest.frombytes(fcs.digest())
assert digest == bitarray('1101101011100000')
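For reference, the bit-by-bit AX.25 FCS computed above is the standard CRC-16/X-25 (reflected polynomial 0x8408, init 0xFFFF, final XOR 0xFFFF). A minimal byte-wise pure-Python sketch, independent of the afsk library:

```python
def crc16_x25(data: bytes) -> int:
    """CRC-16/X-25, the HDLC/AX.25 frame check sequence."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # reflected algorithm: shift right, XOR with 0x8408 on a carry-out
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF
```

On the wire the 16-bit result is sent least-significant byte first, which is why the digest above is checked against a little-endian bitarray.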
| TestFcs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import os
import matplotlib.pyplot as plt
from PIL import Image
import re
os.environ['GLOG_minloglevel'] = '2'
import caffe
import math
# %matplotlib inline
# The steps below show how the default cifar10 model runs on Arm CMSIS-NN; in general it works well.
cwd = os.path.abspath(os.curdir)
# +
def RunSysCmd(cmd):
import subprocess
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
(output, err) = p.communicate()
p_status = p.wait()
print(output.decode('utf-8'))
def Download(url, force=False):
tgt = os.path.basename(url)
if(os.path.exists(tgt) and force):
RunSysCmd('rm -f %s'%(tgt))
if(not os.path.exists(tgt)):
RunSysCmd('wget %s'%(url))
return tgt
def ToList(d):
sz=1
for s in d.shape:
sz = sz*s
return d.reshape(sz).tolist()
def q2f(d, Q):
    '''Convert a number from Qm.n format to floating point:
    1. Treat the stored value as a plain integer (i.e. drop the binary point).
    2. Multiply by 2^-n.
    '''
if(type(d) is list):
D = []
for v in d:
D.append(float(v*math.pow(2,-Q)))
elif(type(d) is np.ndarray):
D = d*math.pow(2,-Q)
else:
D = float(d*math.pow(2,-Q))
return D
def show(w):
if(type(w) is np.ndarray):
aL = ToList(w)
else:
aL = list(w)
plt.figure(figsize=(18, 3))
plt.subplot(121)
    plt.title('green: original order, red: sorted')
plt.plot(aL,'g')
plt.grid()
aL.sort()
plt.plot(aL,'r')
plt.grid()
plt.subplot(122)
plt.hist(aL,100)
plt.title('hist')
plt.grid()
plt.show()
def compare(a,b):
if(type(a) is np.ndarray):
aL = ToList(a)
else:
aL = list(a)
if(type(b) is np.ndarray):
bL = ToList(b)
else:
bL = list(b)
assert(len(aL) == len(bL))
Z = list(zip(aL,bL))
Z.sort(key=lambda x: x[0])
aL,bL=zip(*Z)
plt.figure(figsize=(18, 3))
plt.subplot(131)
plt.plot(aL,'r')
plt.grid()
plt.subplot(133)
plt.plot(bL,'g')
plt.plot(aL,'r')
plt.grid()
plt.subplot(132)
bL=list(bL)
bL.sort()
plt.plot(bL,'g')
plt.grid()
# -
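The `q2f` helper above dequantizes Qm.n values back to float. The inverse direction, quantizing a float to Q format with saturation to a signed 8-bit container (as CMSIS-NN stores q7_t weights), can be sketched as follows; the `bits` parameter is an assumption for generality:

```python
def f2q(x, n, bits=8):
    """Quantize a float to Qm.n fixed point, saturating to the signed
    `bits`-bit range (int8 by default, as CMSIS-NN q7_t weights)."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, int(round(x * (1 << n)))))

def q2f_scalar(v, n):
    """Dequantize a single value: multiply by 2**-n."""
    return v * 2.0 ** -n
```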
RunSysCmd('scons')
defaults = []
reImg = re.compile('#define IMG_DATA \{([\w\d,]+)\}')
with open('arm_nnexamples_cifar10_inputs.h') as f:
for l in f.readlines():
if(reImg.search(l)):
data = eval('['+reImg.search(l).groups()[0]+']')
data = np.asarray(data,dtype=np.uint8).reshape(32,32,3)
defaults.append(data)
fig, axs = plt.subplots(1, len(defaults))
for i,dft in enumerate(defaults):
axs[i].imshow(dft)
for i,dft in enumerate(defaults):
data = dft
data.tofile('img.bin')
RunSysCmd('./cifar10 img.bin')
# +
# CATs
#url = 'http://p5.so.qhimgs1.com/bdr/_240_/t011b628e47ccf9983b.jpg'
#url = 'http://p3.so.qhmsg.com/bdr/_240_/t01067394101dcd6278.jpg'
#url = 'http://p5.so.qhimgs1.com/bdr/_240_/t01425873ec4207251b.jpg'
# AIRPLANEs
#url = 'http://p0.so.qhimgs1.com/bdr/_240_/t01a45e71a8867f2354.jpg'
#url = 'http://p4.so.qhmsg.com/bdr/_240_/t01cbd2106353872279.jpg'
# DOGs
#url = 'http://p0.so.qhimgs1.com/bdr/_240_/t0180d8b6dbb9eb54b0.jpg'
#url = 'http://p5.so.qhimgs1.com/bdr/_240_/t015ac334b42ef829db.jpg'
url = 'http://p3.so.qhimgs1.com/bdr/_240_/t017f279f05b2c73b93.jpg'
img = Download(url,True)
# -
# ref plot: https://www.jianshu.com/p/2b2caa2cf381
im = Image.open(img)
im = im.convert('RGB')
fig, axs = plt.subplots(1, 2)
axs[0].imshow(im)
im = im.resize((32,32))
im.save('img.png')
axs[1].imshow(im)
data = np.asarray(im)
data = data.astype(np.int8)
data.tofile('img.bin')
RunSysCmd('./cifar10 img.bin')
# Now let's look at how the cifar10 model was quantized and run on Arm CMSIS-NN using the Q format.
#
# First, it is best to follow the [cifar10 convert tools](https://github.com/ARM-software/ML-examples/tree/master/cmsisnn-cifar10) page to learn how a model is quantized and the C files are generated.
#
# Modify models/cifar10_m7_train_test.prototxt to point to the right input data, then run the commands below:
#
# ```sh
# # cd ML-examples/cmsisnn-cifar10
# python nn_quantizer.py --model models/cifar10_m7_train_test.prototxt \
# --weights models/cifar10_m7_iter_300000.caffemodel.h5 \
# --save models/cifar10_m7.pkl
# python code_gen.py --model models/cifar10_m7.pkl --out_dir code/m7
# ```
# +
m7=True
if(m7):
model_file ='ML-examples/cmsisnn-cifar10/models/cifar10_m7_train_test.prototxt'
weight_file='ML-examples/cmsisnn-cifar10/models/cifar10_m7_iter_300000.caffemodel.h5'
genWT='ML-examples/cmsisnn-cifar10/code/m7/weights.h'
else:
model_file ='ML-examples/cmsisnn-cifar10/models/cifar10_m4_train_test.prototxt'
weight_file='ML-examples/cmsisnn-cifar10/models/cifar10_m4_iter_70000.caffemodel.h5'
genWT='ML-examples/cmsisnn-cifar10/code/m4/weights.h'
RunSysCmd('git clone https://github.com/autoas/ML-examples.git')
inference_model = 'ML-examples/cmsisnn-cifar10/models/inference.prototxt'
# -
# Then create an inference model, inference.prototxt, based on cifar10_m7_train_test.prototxt:
#
# * 1. Replace the data layer as below:
#
# ```json
# layer {
# name:"data"
# type:"Input"
# top:"data"
# input_param {shape: {dim:1 dim:3 dim:32 dim:32}}
# }
# ```
#
# * 2. Then remove the accuracy and loss layers, and add the softmax layer below:
#
# ```json
# layer {
# name: "prob"
# type: "Softmax"
# bottom: "ip1"
# top: "prob"
# }
# ```
# load caffe model
caffe.set_mode_cpu()
net = caffe.Net(inference_model,weight_file,caffe.TEST)
# None of the methods below gives the right prediction; the cause is unclear and needs more research.
# +
# caffe method 1
#caffe_model_file = '/home/parai/workspace/caffe/examples/cifar10/cifar10_quick.prototxt'
#caffe_weight_file = '/home/parai/workspace/caffe/examples/cifar10/cifar10_quick_iter_5000.caffemodel.h5'
caffe_model_file = inference_model
caffe_weight_file = weight_file
#caffeTestImg = '${CAFFE}/examples/images/cat.jpg'
caffeTestImg = 'img.png'
cmd = 'export CAFFE=${HOME}/workspace/caffe'
cmd += ' && export PYTHONPATH=${CAFFE}/python:$PYTHONPATH'
cmd += ' && python2 ${CAFFE}/python/classify.py --model_def %s' \
' --pretrained_model %s' \
' --center_only %s result'%(caffe_model_file, caffe_weight_file, caffeTestImg)
RunSysCmd(cmd)
CIFAR10_LABELS_LIST = [ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck" ]
r = np.load('result.npy').tolist()[0]
R = list(zip(CIFAR10_LABELS_LIST,r))
print(R)
# -
# caffe method 2
if(1):
im = Image.open(img)
im = im.convert('RGB')
im = im.resize((32,32))
else:
im = defaults[0]
im = np.asarray(im)
#fig, axs = plt.subplots(1, 3)
#axs[0].imshow(im)
#print(im[2][:10])
im = im - (125,123,114)
#print(im[2][:10])
im = np.asarray(im.transpose(2,0,1))
#axs[1].imshow(im.transpose(1,2,0))
net.blobs['data'].data[...] = im.astype(np.float32)
#axs[2].imshow(im.transpose(1,2,0)+(125,123,114))
out = net.forward()
print(out)
# save output of each layer
RunSysCmd('mkdir -p caffe_out2 caffe_out3 out')
for name,blob in net.blobs.items():
d = blob.data
if(len(d.shape)==4):
d = blob.data.transpose((0,2,3,1))
d.tofile('caffe_out2/%s.raw'%(name))
print('layer %s shape: %s'%(name, blob.data.shape))
# caffe method 3
# run inference, https://www.zhihu.com/question/38107945
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
# change dimension order from HWC to CHW
transformer.set_transpose('data', (2,0,1))
# subtract mean
mean = np.asarray((125,123,114)) # RGB
transformer.set_mean('data', mean)
# scale to range 0-255
transformer.set_raw_scale('data', 255)
transformer.set_channel_swap('data', (2,1,0)) # if using RGB instead of BGR
im=caffe.io.load_image(img)
#fig, axs = plt.subplots(1, 2)
#axs[0].imshow(im)
im = transformer.preprocess('data',im)
net.blobs['data'].data[...] = im
out = net.forward()
#axs[1].imshow(im.transpose(1,2,0))
print(out)
# load quantized weights
weights = {}
reWT = re.compile('#define\s+(\w+)\s+\{([-\w\d,]+)\}')
with open(genWT) as f:
for l in f.readlines():
if(reWT.search(l)):
grp = reWT.search(l).groups()
name = grp[0]
data = eval('['+grp[1]+']')
weights[name] = data
for name, p in net.params.items():
for i,blob in enumerate(p):
d = blob.data
        print('%s weights[%s]: max=%s, min=%s, shape=%s'%(name,i,d.max(),d.min(),d.shape))
show(net.params['conv1'][0].data)
CONV1_WT = q2f(weights['CONV1_WT'],7)
CONV1_BIAS = q2f(weights['CONV1_BIAS'],7)
compare(net.params['conv1'][0].data.transpose(0,2,3,1), CONV1_WT)
compare(net.params['conv1'][1].data, CONV1_BIAS)
CONV2_WT = q2f(weights['CONV2_WT'],8)
CONV2_BIAS = q2f(weights['CONV2_BIAS'],8)
compare(net.params['conv2'][0].data.transpose(0,2,3,1), CONV2_WT)
compare(net.params['conv2'][1].data, CONV2_BIAS)
CONV3_WT = q2f(weights['CONV3_WT'],9)
CONV3_BIAS = q2f(weights['CONV3_BIAS'],8)
compare(net.params['conv3'][0].data.transpose(0,2,3,1), CONV3_WT)
compare(net.params['conv3'][1].data, CONV3_BIAS)
RunSysCmd('scons --m7')
RunSysCmd('./cifar10 img.bin')
compare( np.fromfile('caffe_out2/data.raw', dtype=np.float32),
q2f(np.fromfile('out/data.raw', dtype=np.int8),7) )
compare( np.fromfile('caffe_out2/conv1.raw', dtype=np.float32),
q2f(np.fromfile('out/conv1.raw', dtype=np.int8),7) )
compare( np.fromfile('caffe_out2/conv2.raw', dtype=np.float32),
q2f(np.fromfile('out/conv2.raw', dtype=np.int8),7) )
compare( np.fromfile('caffe_out2/conv3.raw', dtype=np.float32),
q2f(np.fromfile('out/conv3.raw', dtype=np.int8),7) )
# study the cifar10 training and test data, to learn how data was fed to caffe
import lmdb
#env = lmdb.open('/home/parai/workspace/caffe/examples/cifar10/cifar10_train_lmdb', readonly=True)
env = lmdb.open('/home/parai/workspace/caffe/examples/cifar10/cifar10_test_lmdb', readonly=True)
RunSysCmd('mkdir -p testimg')
with env.begin() as txn:
cursor = txn.cursor()
for i, (key, value) in enumerate(cursor):
if(i!=1):continue
datum = caffe.proto.caffe_pb2.Datum()
datum.ParseFromString(value)
flat_x = np.frombuffer(datum.data, dtype=np.uint8)
x = flat_x.reshape(datum.channels, datum.height, datum.width)
y = datum.label
#Image.fromarray(x.transpose(1,2,0)).save('testimg/%s_%s.png'%(CIFAR10_LABELS_LIST[int(y)],i))
plt.imshow(x.transpose(1,2,0))
break
inference_model = 'ML-examples/cmsisnn-cifar10/models/cifar10_m7_train_test.prototxt'
weight_file='ML-examples/cmsisnn-cifar10/models/cifar10_m7_iter_300000.caffemodel.h5'
caffe.set_mode_cpu()
net = caffe.Net(inference_model,weight_file,caffe.TEST)
out = net.forward()
print(out)
im = net.blobs['data'].data.transpose(0,2,3,1)[1]
im = im + (125, 123, 114) # RGB
im = im.astype(np.uint8)
plt.imshow(im)
# From the TEST lmdb it is clear that the input data is roughly in the float32 range (-128, 128). This differs from Arm CMSIS-NN, which expects inputs roughly in (-1, 1). So either divide each input in the TRAINING and TEST lmdbs by 128 and retrain the model, or add a Scale layer that maps the input data to the range (-1, 1).
#
# https://stackoverflow.com/questions/37410996/scale-layer-in-caffe
#
# ```json
# layer {
# name: "sc"
# type: "Scale"
# bottom: "data"
# top: "sc"
# param {
# lr_mult: 0
# decay_mult: 0
# }
# param {
# lr_mult: 0
# decay_mult: 0
# }
# scale_param {
# filler { value: 0.00392156862745098 }
# bias_term: true
# bias_filler { value: 0 }
# }
# }
# ```
| CMSIS/NN/Examples/ARM/arm_nn_examples/cifar10/cifar10.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import colors
from matplotlib.ticker import PercentFormatter
import seaborn as sns; sns.set_theme()
# set the figure size
plt.rcParams['figure.figsize'] = [20, 20]
agentNum = 100
# +
infectedList = np.zeros(agentNum)
# SEIR model
susceptState = np.ones(agentNum)
exposedState = np.zeros(agentNum)
infectState = np.zeros(agentNum)
recoverState = np.zeros(agentNum)
# +
# time from E to I (incubation period before becoming infectious)
susceptList = np.random.lognormal(mean=4.5, sigma=1.5, size=agentNum)
# time for I to symptom
symptomList = np.random.lognormal(mean=1.1, sigma=0.9, size=agentNum)
# -
contactNetwork = np.random.normal(size=(agentNum, agentNum))
fig, axs = plt.subplots()
ax = sns.heatmap(contactNetwork,
linewidths=0.001,
cmap="viridis")
fig, axs = plt.subplots()
axs.hist(susceptList)
networkSym = contactNetwork.copy()
for i in range(networkSym.shape[0]):
for j in range(i, networkSym.shape[1]):
if networkSym[j][i] < 0:
networkSym[j][i] = -networkSym[j][i]
networkSym[i][j] = networkSym[j][i]
if i == j :
networkSym[i][j] = 0
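The nested loop above can be vectorized: take the absolute value of the strict lower triangle and mirror it onto the upper triangle, which also leaves the diagonal at zero. A sketch that reproduces the same result:

```python
import numpy as np

def symmetrize_abs(A):
    """Mirror |strict lower triangle| onto the upper triangle; diagonal stays 0,
    matching the loop-based symmetrization in this notebook."""
    L = np.abs(np.tril(A, k=-1))  # abs of entries below the diagonal, rest zeroed
    return L + L.T                # mirror to get a symmetric matrix
```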
np.min(networkSym)
fig, axs = plt.subplots() # change the contact network to symmetry
ax = sns.heatmap(networkSym,
# linewidths=0.001,
cmap="viridis")
sns.histplot(networkSym.ravel())  # distplot is deprecated in recent seaborn
cg = sns.clustermap(networkSym,
# row_cluster=False,
# col_cluster=False,
# linewidths=0.001,
cmap="viridis")
cg.ax_row_dendrogram.set_visible(False)
cg.ax_col_dendrogram.set_visible(False)
# the cbar position can be moved to other places
# +
# for time = t, check whether the agents have contact or not
myTime = 1
# at the begining we will assign the number of E patient
susceptState[0] = 0
exposedState[0] = 1
# infectState
# recoverState
susNumRatio = np.sum(susceptState)
expNumRatio = np.sum(exposedState)
infNumRatio = np.sum(infectState)
recNumRatio = np.sum(recoverState)
susRatio = susNumRatio/agentNum
expRatio = expNumRatio/agentNum
infRatio = infNumRatio/agentNum
recRatio = recNumRatio/agentNum
header = 'Time: '+str(myTime) + '\n' + '\t'.join(['susNumRatio', 'expNumRatio', 'infNumRatio', 'recNumRatio'])
prtStr = '\t'.join([str(i) for i in [susNumRatio, expNumRatio, infNumRatio, recNumRatio]])
prtStr2 = '\t'.join([str(i) for i in [susRatio, expRatio, infRatio, recRatio]])
print('\n'.join([header, prtStr, prtStr2]))
# -
expIndex
# +
# begin contact, all people contact together
contactCount = 0
s2eRatio = 0.016 # beta value for S to E
susIndex = np.where(susceptState==1)[0]
expIndex = np.where(exposedState==1)[0]
infIndex = np.where(infectState==1)[0]
recIndex = np.where(recoverState==1)[0]
myTime = myTime + 1
# update E to I
for i in expIndex:
if myTime >= susceptList[i]:
exposedState[i] = 0
        infectState[i] = 1  # E -> I: the agent becomes infectious
# record the expose time and E -> I
for i in range(agentNum):
for j in range(i, agentNum):
contactProbability = networkSym[i,j]
if contactProbability > 1 :
contactProbability = 1
# print(contactProbability)
if np.random.binomial(1,contactProbability):
contactCount = contactCount + 1
# print(str(i) + ' and ' + str(j) + ' contacts')
else:
contactCount = contactCount
# print(str(i) + ' and ' + str(j) + ' not contacts')
print('Total number of contacts is ' + str(contactCount))
# the contact is important, that is the reason why we will record it
susNumRatio = np.sum(susceptState)
expNumRatio = np.sum(exposedState)
infNumRatio = np.sum(infectState)
recNumRatio = np.sum(recoverState)
susRatio = susNumRatio/agentNum
expRatio = expNumRatio/agentNum
infRatio = infNumRatio/agentNum
recRatio = recNumRatio/agentNum
header = 'Time: '+str(myTime) + '\n' + '\t'.join(['susNumRatio', 'expNumRatio', 'infNumRatio', 'recNumRatio'])
prtStr = '\t'.join([str(i) for i in [susNumRatio, expNumRatio, infNumRatio, recNumRatio]])
prtStr2 = '\t'.join([str(i) for i in [susRatio, expRatio, infRatio, recRatio]])
print('\n'.join([header, prtStr, prtStr2]))
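The pairwise contact loop above draws one Bernoulli sample per unordered pair of agents; the same sampling can be done in a single vectorized call. The probability matrix below is hypothetical stand-in data for `networkSym`:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# Hypothetical contact-probability matrix, clipped to [0, 1] as in the loop.
P = np.clip(np.abs(rng.normal(size=(n, n))), 0.0, 1.0)
iu = np.triu_indices(n, k=1)           # each unordered pair (i, j), i < j, exactly once
contacts = rng.binomial(1, P[iu])      # one Bernoulli draw per pair
contact_count = int(contacts.sum())
```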
# +
# begin contact; to speed up, only contacts involving currently exposed agents are simulated, the rest are ignored
contactCount = 0
currentEList = [] # will be updated in next version
for i in currentEList:
for j in range(agentNum):
contactProbability = networkSym[i,j]
if contactProbability > 1 :
contactProbability = 1
# print(contactProbability)
if np.random.binomial(1,contactProbability):
contactCount = contactCount + 1
# print(str(i) + ' and ' + str(j) + ' contacts')
else:
contactCount = contactCount
# print(str(i) + ' and ' + str(j) + ' not contacts')
print('Total number of contacts is ' + str(contactCount))
# -
myTime=[i for i in range(100)]
susRatio=[i/100 for i in range(100)]
exposeRatio=[i/100 for i in range(100)]
infecRatio=[i/100 for i in range(100)]
recoverRatio=[i/100 for i in range(100)]
interactionRatio=[i/100 for i in range(100)]
t = 0
| oneCovid.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.11 64-bit (''py38'': conda)'
# name: python3
# ---
# # Stroke data exploration
#
# Data retrieved from [this Kaggle page](https://www.kaggle.com/lirilkumaramal/heart-stroke).
# ## Import & analyze data
# +
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
df = pd.read_csv('train_strokes.csv')
df.info()
# -
# Glucose level is in mg/dL (the average is between 90 and 110 mg/dL)
df.head()
df.isnull().sum()
# Check unique values of each categorical column.
# +
def print_unique_values(df_value):
print('gender', df_value['gender'].unique())
print('ever_married', df_value['ever_married'].unique())
print('work_type', df_value['work_type'].unique())
print('Residence_type', df_value['Residence_type'].unique())
print('smoking_status', df_value['smoking_status'].unique())
print_unique_values(df)
# -
# ## Normalize data
#
# Transform several columns to categorical data.
#
# - gender: [male, female, other] -> [0, 1, 2]
# - ever_married: [No, Yes] -> [0, 1]
# - work_type: [children, Private, Never_worked, Self-employed, Govt_job] -> [0, 1, 2, 3, 4]
# - Residence_type [Rural, Urban] -> [0, 1]
# - smoking_status [~~unknown~~, never smoked, formerly smoked, smokes] -> [0, 1, 2]
#
# +
dft = df.copy()
dft = dft.drop(['id'], axis=1)
dft['gender'] = dft['gender'].factorize()[0]
dft['age'] = dft['age'].apply(np.floor)
dft['ever_married'] = dft['ever_married'].factorize()[0]
dft['work_type'] = dft['work_type'].factorize()[0]
dft['Residence_type'] = dft['Residence_type'].factorize()[0]
dft = dft.dropna()
dft['smoking_status'] = dft['smoking_status'].factorize()[0]
dfStrokeTrue = dft[dft['stroke'] == 1]
dfStrokeFalse = dft[dft['stroke'] == 0]
dfStrokeFalse = dfStrokeFalse.sample(frac=1).reset_index(drop=True)
dfStrokeFalse = dfStrokeFalse.head(548)  # downsample the majority class to balance with the stroke-positive rows
dft = pd.concat([dfStrokeTrue, dfStrokeFalse])
dft = dft.sample(frac=1).reset_index(drop=True)
print(dft.info())
print(dft.describe())
print(dft.head())
# -
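`pd.factorize`, used throughout the cell above, assigns integer codes in order of first appearance and encodes missing values as -1, which is why `dropna()` is called before factorizing `smoking_status`:

```python
import pandas as pd

# Codes follow first appearance; None/NaN becomes -1 rather than a category.
s = pd.Series(["Urban", "Rural", "Urban", None])
codes, uniques = pd.factorize(s)
```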
print_unique_values(dft)
dft['avg_glucose_level'].mean()
# + pycharm={"name": "#%%\n"}
dft.isna().sum()
# +
features = ['gender', 'ever_married', 'hypertension', 'heart_disease']
fig, ax = plt.subplots(figsize=(10,10))
sns.heatmap(dft.corr(method='pearson'), annot=True, linewidths=.5, ax=ax)
# sns.heatmap(dft.corr(method='spearman'), annot=True, linewidths=.5, ax=ax)
plt.show()
| data-training/heart_stroke.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# # Introduction
# This notebook walks through the NLP preprocessing needed to analyse the relationship between David Bowie and the media.
#
# Specifically, we'll be walking through:
#
# - **Getting the data** - in this case, we'll get the data from a .csv dataset
# - **Cleaning the data** - we will walk through popular text pre-processing techniques
# - **Organizing the data** - we will organize the cleaned data into a way that is easy to input into other algorithms
#
# The output of this notebook will be clean, organized data in two standard text formats:
#
# - Corpus - a collection of text
# - Document-Term Matrix - word counts in matrix format
#
#
# In each format, we'll divide the data into 3 categories:
#
# - The press - the magazine the text comes from
# - The year of publication - the year of the interview
# - The album - each interview is linked to an album
#
# ### Problem Statement
#
# We would like to know whether reviews about Bowie evolved through the decades.
#
# # Get the data
#
# Luckily, we already have a raw csv with all our data ready. We import the csv and build a dataframe
# for each category.
#
# As decided before, we divide the dataset into 3 high-level categories.
# + pycharm={"name": "#%%\n"}
import pandas as pd
import pickle
import numpy as np
file = r'data/bowie_txt_analysis.csv'
df = pd.read_csv(file)
# Pickle all the data
with open('./data/pickle/all_text.pkl', 'wb') as all_text:
    pickle.dump(df, all_text)
# Create separate df for each category
df_press = df[['press', 'texte', 'date']].copy()
df_album = df[['album', 'texte', 'date']].copy()  # .copy() so the edits below don't trigger SettingWithCopyWarning
def format_date(date):
    """Expand the trailing 2-digit year to 4 digits, pivoting at 20:
    21-99 -> 19xx, 00-20 -> 20xx."""
    yy = int(date[-2:])
    return 1900 + yy if yy > 20 else 2000 + yy
df_album.date = df_album.date.apply(format_date)
df_album = df_album.groupby(['date','texte', 'album']).size().reset_index()
def corpus_for_period(frame, start, end):
    """Join all texts per album for entries whose year falls in [start, end]."""
    period = frame[frame['date'].between(start, end)]
    return period.groupby('album')['texte'].apply(' '.join).reset_index()
df_ronson = corpus_for_period(df_album, 1969, 1974)
df_experimental = corpus_for_period(df_album, 1975, 1981)
df_demise = corpus_for_period(df_album, 1983, 1994)
df_popex = corpus_for_period(df_album, 1994, 2003)
df_surprise = corpus_for_period(df_album, 2004, 2017)
# -
# # Cleaning the data
#
# When dealing with numerical data, data cleaning often involves removing null values and duplicate data, dealing with outliers, etc. With text data, there are some common data cleaning techniques, which are also known as text pre-processing techniques.
#
# With text data, this cleaning process can go on forever. There's always an exception to every cleaning step. So, we're going to follow the MVP (minimum viable product) approach - start simple and iterate. Here are a bunch of things you can do to clean your data. We're going to execute just the common cleaning steps here and the rest can be done at a later point to improve our results.
#
# **Common data cleaning steps on all text**:
#
# - Make text all lower case
# - Remove punctuation
# - Remove numerical values
# - Remove common non-sensical text (`\n`)
# - Tokenize text
# - Remove stop words
#
#
# **More data cleaning steps after tokenization**:
# - Stemming / lemmatization
# - Parts of speech tagging
# - Create bi-grams or tri-grams
# - Deal with typos
# - And more...
#
# + pycharm={"name": "#%%\n"}
import re
import string
def clean_text_round1(text):
    """Make text lowercase, remove text in square brackets, remove punctuation and remove words containing numbers."""
    text = text.lower()
    text = re.sub(r'\[.*?\]', ' ', text)
    text = re.sub('[%s]' % re.escape(string.punctuation), ' ', text)
    text = re.sub(r'\w*\d\w*', ' ', text)
    return text.lstrip(' ')
def clean_text_round2(text):
"""Get rid of some additional punctuation and non-sensical text that was missed the first time around."""
text = re.sub('[‘’“”…]', ' ', text)
text = re.sub('\n', ' ', text)
text = re.sub('[—]', ' ', text)
return text
def remove_redundant_word(text):
stopwords = ['bowie', 'like', 'bowies', 'just', 'album', 'david', 'time', 'new']
querywords = text.split()
resultwords = [word for word in querywords if word.lower() not in stopwords]
result = ' '.join(resultwords)
return result
cleaning_steps = [clean_text_round1, clean_text_round2, remove_redundant_word]
for frame in (df_ronson, df_experimental, df_demise, df_popex, df_surprise):
    for step in cleaning_steps:
        frame['texte'] = frame['texte'].apply(step)
# Pickle the album DataFrame and the per-period corpora
with open('./data/pickle/album.pkl', 'wb') as album_pkl:
    pickle.dump(df_album, album_pkl)
with open('./data/pickle/groups/ronson_corpus.pkl', 'wb') as ronson_pkl:
    pickle.dump(df_ronson, ronson_pkl)
with open('./data/pickle/groups/experimental_corpus.pkl', 'wb') as experimental_pkl:
    pickle.dump(df_experimental, experimental_pkl)
with open('./data/pickle/groups/demise_corpus.pkl', 'wb') as demise_pkl:
    pickle.dump(df_demise, demise_pkl)
with open('./data/pickle/groups/popex_corpus.pkl', 'wb') as popex_pkl:
    pickle.dump(df_popex, popex_pkl)
with open('./data/pickle/groups/surprise_corpus.pkl', 'wb') as surprise_pkl:
    pickle.dump(df_surprise, surprise_pkl)
# -
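# To make the effect of the first cleaning round concrete, here is a self-contained rerun of the same regexes on a made-up snippet (the sample sentence is illustrative, not taken from the corpus):

```python
import re
import string

def clean_round1(text):
    """Lowercase, drop bracketed text, punctuation, and words containing numbers."""
    text = text.lower()
    text = re.sub(r'\[.*?\]', ' ', text)
    text = re.sub('[%s]' % re.escape(string.punctuation), ' ', text)
    text = re.sub(r'\w*\d\w*', ' ', text)
    return text.lstrip(' ')

sample = "Bowie's [Verse 2] new Single dropped in 1972!"
print(clean_round1(sample))
```

# The bracketed annotation, the apostrophe, the exclamation mark and the year all disappear, which is exactly what the corpus pipeline above relies on.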
# # Organizing The Data
# I mentioned earlier that the output of this notebook will be clean, organized data in two standard text formats:
#
# - **Corpus** - a collection of text
# - **Document-Term Matrix** - word counts in matrix format
#
# ## Document-Term Matrix
# For many of the techniques we'll be using in future notebooks, the text must be tokenized, meaning broken down into smaller pieces. The most common tokenization technique is to break down text into words. We can do this using scikit-learn's CountVectorizer, where every row will represent a different document and every column will represent a different word.
#
# In addition, with CountVectorizer, we can remove stop words. Stop words are common words that add no additional meaning to text such as 'a', 'the', etc.
# + pycharm={"name": "#%%\n"}
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(stop_words='english')
## for Ronson years
data_ronson_cv = cv.fit_transform(df_ronson['texte'])
data_ronson_dtm = pd.DataFrame(data_ronson_cv.toarray(), columns=cv.get_feature_names_out())
data_ronson_dtm.index = df_ronson['album']
with open('./data/pickle/groups/ronson_corpus_dtm.pkl', 'wb') as ronson_dtm_pkl:
    pickle.dump(data_ronson_dtm, ronson_dtm_pkl)
## for Experimental years
data_experimental_cv = cv.fit_transform(df_experimental['texte'])
data_experimental_dtm = pd.DataFrame(data_experimental_cv.toarray(), columns=cv.get_feature_names_out())
data_experimental_dtm.index = df_experimental['album']
with open('./data/pickle/groups/experimental_corpus_dtm.pkl', 'wb') as experimental_dtm_pkl:
    pickle.dump(data_experimental_dtm, experimental_dtm_pkl)
## for the 80-90's
data_demise_cv = cv.fit_transform(df_demise['texte'])
data_demise_dtm = pd.DataFrame(data_demise_cv.toarray(), columns=cv.get_feature_names_out())
data_demise_dtm.index = df_demise['album']
with open('./data/pickle/groups/demise_dtm.pkl', 'wb') as demise_dtm_pkl:
    pickle.dump(data_demise_dtm, demise_dtm_pkl)
## for the 00's
data_popex_cv = cv.fit_transform(df_popex['texte'])
data_popex_dtm = pd.DataFrame(data_popex_cv.toarray(), columns=cv.get_feature_names_out())
data_popex_dtm.index = df_popex['album']
with open('./data/pickle/groups/popex_corpus_dtm.pkl', 'wb') as popex_dtm_pkl:
    pickle.dump(data_popex_dtm, popex_dtm_pkl)
## for the end
data_surprise_cv = cv.fit_transform(df_surprise['texte'])
data_surprise_dtm = pd.DataFrame(data_surprise_cv.toarray(), columns=cv.get_feature_names_out())
data_surprise_dtm.index = df_surprise['album']
# Note the '_dtm' suffix: writing to surprise_corpus.pkl would overwrite the corpus pickled earlier.
with open('./data/pickle/groups/surprise_corpus_dtm.pkl', 'wb') as surprise_dtm_pkl:
    pickle.dump(data_surprise_dtm, surprise_dtm_pkl)
# Let's also pickle the CountVectorizer object, so the document-term matrices can be rebuilt later
with open("./data/pickle/cv.pkl", "wb") as cv_pkl:
    pickle.dump(cv, cv_pkl)
| 1-Data-Cleaning-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (system-wide)
# language: python
# metadata:
# cocalc:
# description: Python 3 programming language
# priority: 100
# url: https://www.python.org/
# name: python3
# ---
# + jupyter={"source_hidden": true}
# Preamble script block to identify host, user, and kernel
import sys
# ! hostname
# ! whoami
print(sys.executable)
print(sys.version)
print(sys.version_info)
# -
# ## Full name: <NAME>
# ## R#: 321654987
# ## HEX: 0x132c10cb
# ## Title of the notebook
# ## Date: 9/1/2020
# # Program flow control (Loops)
# - Controlled repetition
# - Structured FOR Loop
# - Structured WHILE Loop
# ## Count controlled repetition
# Count-controlled repetition is also called definite repetition because the number of repetitions is known before the loop begins executing.
# When we do not know in advance the number of times we want to execute a statement, we cannot use count-controlled repetition.
# In such an instance, we would use sentinel-controlled repetition.
#
# A count-controlled repetition will exit after running a certain number of times.
# The count is kept in a variable called an index or counter.
# When the index reaches a certain value (the loop bound) the loop will end.
#
# Count-controlled repetition requires
#
# * control variable (or loop counter)
# * initial value of the control variable
# * increment (or decrement) by which the control variable is modified each iteration through the loop
# * condition that tests for the final value of the control variable
#
# We can use both `for` and `while` loops for count-controlled repetition, but the `for` loop in combination with the `range()` function is more common.
#
# ### Structured `FOR` loop
# We have seen the for loop already, but we will formally introduce it here. The `for` loop executes a block of code repeatedly until the condition in the `for` statement is no longer true.
#
# ### Looping through an iterable
# An iterable is anything that can be looped over - typically a list, string, or tuple.
# The syntax for looping through an iterable is illustrated by an example.
#
# First a generic syntax
#
# for a in iterable:
# print(a)
#
# Notice our friends the colon `:` and the indentation.
# Now a specific example
# set a list
MyPets = ["dusty","aspen","merrimee"]
# loop thru the list
for AllStrings in MyPets:
print(AllStrings)
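# Strings and tuples are iterables too, as mentioned above — the same loop pattern works on them:

```python
# loop through the characters of a string
for letter in "dog":
    print(letter)
# loop through a tuple
for number in (1, 2, 3):
    print(number)
```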
# #### The `range()` function to create an iterable
#
# The `range(begin,end,increment)` function will create an iterable starting at a value of `begin`, advancing in steps defined by `increment`, and stopping just before `end` (the `end` value itself is not included).
#
# So a generic syntax becomes
#
# for a in range(begin,end,increment):
# print(a)
#
# The example that follows is count-controlled repetition (increment skip if greater)
# + active=""
# # set a list
# MyPets = ["dusty","aspen","merrimee"]
# # loop thru the list
# for i in range(0,3,1): # Change the 1 to 2 and rerun, what happens?
# print(MyPets[i])
# -
# For loop with range
for x in range(2,6,1): # a sequence from 2 to 5 with steps of 1
print(x)
# Another example of For loop with range
for y in range(1,27,2): # a sequence from 1 to 26 with steps of 2
print(y)
# <hr>
# ### Exercise 1 : My own loop
#
# 1904 was a leap year. Write a for loop that prints out all the leap years in the 20th century (1900-1999).
# Exercise 1
for years in range(1904,2000,4): # a sequence from 1904 to 1999 with steps of 4
print(years)
# <hr>
# ## Sentinel-controlled repetition.
#
# When loop control is based on the value of what we are processing, sentinel-controlled repetition is used.
# Sentinel-controlled repetition is also called indefinite repetition because it is not known in advance how many times the loop will be executed.
# It is a repetition procedure for solving a problem by using a sentinel value (also called a signal value, a dummy value or a flag value) to indicate "end of process".
# The sentinel value itself need not be a part of the processed data.
#
# One common example of using sentinel-controlled repetition is when we are processing data from a file and we do not know in advance when we would reach the end of the file.
#
# We can use both `for` and `while` loops for __sentinel__-controlled repetition, but the `while` loop is more common.
#
# ### Structured `WHILE` loop
# The `while` loop repeats a block of instructions inside the loop while a condition remains true.
#
# First a generic syntax
#
# while condition is true:
# execute a
# execute b
# ....
#
# Notice our friends the colon `:` and the indentation again.
# set a counter
counter = 5
# while loop
while counter > 0:
print("Counter = ",counter)
counter = counter -1
# The while loop structure just depicted is a "decrement, skip if equal" in lower level languages.
# The next structure, also a while loop is an "increment, skip if greater" structure.
# set a counter
counter = 0
# while loop
while counter <= 5: # change this line to: while counter < 5: what happens?
print ("Counter = ",counter)
counter = counter +1 # change this line to: counter +=1 what happens?
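# Both while loops above are still count-controlled. A genuinely sentinel-controlled loop, as described earlier, stops on a data value rather than a count — a minimal sketch, using a hard-coded list in place of file or user input:

```python
values = [4, 7, 2, -1, 99]   # -1 is the sentinel; 99 is never processed
total = 0
i = 0
while values[i] != -1:       # keep going until the sentinel appears
    total += values[i]
    i += 1
print("sum before sentinel =", total)
```

# Notice that the sentinel itself (-1) is not part of the processed data, and the loop's length was never known in advance.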
# ## Nested Repetition
#
# Nested repetition is when a control structure is placed inside of the body or main part of another control structure.
# #### `break` to exit out of a loop
#
# Sometimes you may want to exit the loop when a certain condition different from the counting
# condition is met. Perhaps you are looping through a list and want to exit when you find the
# first element in the list that matches some criterion. The break keyword is useful for such
# an operation.
# For example run the following program:
#
j = 0
for i in range(0,5,1):
j += 2
print ("i = ",i,"j = ",j)
if j == 6:
break
# Next change the program slightly to:
j = 0
for i in range(0,5,1):
j += 2
print( "i = ",i,"j = ",j)
if j == 7:
break
# In the first case, the for loop only executes 3 times before the condition j == 6 is TRUE and the loop is exited.
# In the second case, j == 7 never happens so the loop completes all its anticipated traverses.
#
# In both cases an `if` statement was used within a for loop. Such "mixed" control structures
# are quite common (and pretty necessary).
# A `while` loop contained within a `for` loop, with several `if` statements would be very common and such a structure is called __nested control.__
# There is typically an upper limit to nesting but the limit is pretty large - easily in the
# hundreds. It depends on the language and the system architecture; suffice to say it is not
# a practical limit except possibly for general-domain AI applications.
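# A minimal sketch of such nested control — a `while` loop inside a `for` loop, with an `if` inside that:

```python
total_inner = 0
for i in range(1, 4):        # outer count-controlled loop
    j = i
    while j > 0:             # inner loop nested inside the outer one
        if j == i:
            print("row", i, "starts at j =", j)
        total_inner += 1
        j -= 1
print("inner passes:", total_inner)
```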
# <hr>
# ### Exercise 2.
#
# Write a Python script that takes a real input value (a float) for x and returns the y
# value according to the rules below
#
# \begin{gather}
# y = x \quad \text{for } 0 \le x < 1 \\
# y = x^2 \quad \text{for } 1 \le x < 2 \\
# y = x + 2 \quad \text{for } x \ge 2 \\
# \end{gather}
#
# Test the script with x values of 0.0, 1.0, 1.1, and 2.1
# + cocalc={"outputs": {"0": {"name": "input", "opts": {"password": false, "prompt": "Enter a float"}, "output_type": "stream", "value": "1.7"}}}
# Exercise 2
userInput = input('Enter a float') #ask for the user's input
x = float(userInput)
print("x:", x)
if x >= 0 and x < 1:
y = x
print("y is equal to",y)
elif x >= 1 and x < 2:
y = x*x
print("y is equal to",y)
else:
y = x+2
print("y is equal to",y)
# -
# <hr>
# ### Exercise 3.
# Using your script above, add functionality to **automatically** populate the table below:
#
# |x|y(x)|
# |---:|---:|
# |0.0| |
# |1.0| |
# |2.0| |
# |3.0| |
# |4.0| |
# |5.0| |
# +
# Exercise 3 -- get this far in lab, next two can be homework | with prettytable
from prettytable import PrettyTable #Required to create tables
t = PrettyTable(['x', 'y']) #Define an empty table
for x in range(0,6,1):
if x >= 0 and x < 1:
y = x
print("for x equal to", x, ", y is equal to",y)
t.add_row([x, y]) #will add a row to the table "t"
elif x >= 1 and x < 2:
y = x*x
print("for x equal to", x, ", y is equal to",y)
t.add_row([x, y])
else:
y = x+2
print("for x equal to", x, ", y is equal to",y)
t.add_row([x, y])
print(t)
# +
# Exercise 3 -- get this far in lab, next two can be homework | without pretty table
for x in range(0,6,1):
if x >= 0 and x < 1:
y = x
print("for x equal to", x, ", y is equal to",y)
elif x >= 1 and x < 2:
y = x*x
print("for x equal to", x, ", y is equal to",y)
else:
y = x+2
print("for x equal to", x, ", y is equal to",y)
# -
# <hr>
# ### Exercise 4.
# Modify the script above to increment the values by 0.5. and automatically populate the table:
#
# |x|y(x)|
# |---:|---:|
# |0.0| |
# |0.5| |
# |1.0| |
# |1.5| |
# |2.0| |
# |2.5| |
# |3.0| |
# |3.5| |
# |4.0| |
# |4.5| |
# |5.0| |
#
# +
# Exercise 4
from prettytable import PrettyTable
t = PrettyTable(['x', 'y'])
x = -0.5 # start half a step below zero so the first pass lands on x = 0.0
for i in range(0,11,1):
x += 0.5
if x >= 0 and x < 1:
y = x
print("for x equal to", x, ", y is equal to",y)
t.add_row([x, y])
elif x >= 1 and x < 2:
y = x*x
print("for x equal to", x, ", y is equal to",y)
t.add_row([x, y])
elif x > 5:
break
else:
y = x+2
print("for x equal to", x, ", y is equal to",y)
t.add_row([x, y])
print(t)
# -
# <hr>
# #### The `continue` statement
# The `continue` instruction skips the rest of the loop body for the iteration in which it executes.
# It is best illustrated by an example.
j = 0
for i in range(0,5,1):
j += 2
print ("\n i = ", i , ", j = ", j) #here the \n is a newline command
if j == 6:
continue
print(" this message will be skipped over if j = 6 ") # still within the loop, so the skip is implemented
# When j == 6 the line after the `continue` keyword is not printed.
# Other than that one difference the rest of the script runs normally.
# #### The `try`, `except` structure
#
# An important control structure (and a pretty cool one for error trapping) is the `try`, `except`
# statement.
#
# The statement controls how the program proceeds when an error occurs in an instruction.
# The structure is really useful to trap likely errors (divide by zero, wrong kind of input)
# yet let the program keep running or at least issue a meaningful message to the user.
#
# The syntax is:
#
# try:
# do something
# except:
# do something else if ``do something'' returns an error
#
# Here is a really simple, but hugely important example:
#MyErrorTrap.py
x = 12.
y = 12.
while y >= -12.: # sentinel controlled repetition
try:
print ("x = ", x, "y = ", y, "x/y = ", x/y)
except:
print ("error divide by zero")
y -= 1
# So this silly code starts with x fixed at a value of 12, and y starting at 12 and decreasing by
# 1 until y reaches -12. The code returns the ratio of x to y, and at one point y is equal to zero,
# where the division would be undefined. By trapping the error the code can issue us a message
# and keep running.
#
# Modify the script as shown below, run it, and see what happens
#NoErrorTrap.py
x = 12.
y = 12.
while y >= -12.: # sentinel controlled repetition
print ("x = ", x, "y = ", y, "x/y = ", x/y)
y -= 1
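# One refinement to the error trap shown earlier: a bare `except` also swallows unrelated errors (a mistyped variable name, for instance). Catching the specific exception is safer — a minimal variant, not part of the original lesson:

```python
x = 12.0
y = 2.0
while y >= -2.0:
    try:
        print("x =", x, "y =", y, "x/y =", x / y)
    except ZeroDivisionError:    # only trap division by zero
        print("error divide by zero")
    y -= 1
```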
# ### Exercise 5.
#
# Modify your Exercise 3 script to prompt the user for three inputs: a starting value for $x$, an increment to change $x$ by, and how many steps to take. Your script should produce a table like
#
# |x|y(x)|
# |---:|---:|
# |0.0| |
# |1.0| |
# |2.0| |
# |3.0| |
# |4.0| |
# |5.0| |
#
# but the increment can be different from 1.0 as above.
#
# Include error trapping that:
#
# 1. Takes any numeric input for $x$ or its increment, and forces it into a float.
# 2. Takes any numeric input for the number of steps, and forces it into an integer.
# 3. Takes any non-numeric input, issues a message that the input needs to be numeric, and makes the user try again.
#
# Once you have acceptable input, trap the condition if x < 0 and issue a message, otherwise complete the requisite arithmetic and build the table.
#
# Test your script with the following inputs for x, x_increment, num_steps
#
# Case 1) fred , 0.5, 7
#
# Case 2) 0.0, 0.5, 7
#
# Case 3) -3.0, 0.5, 14
# + cocalc={"outputs": {"0": {"name": "input", "opts": {"password": false, "prompt": "Enter the starting value for x"}, "output_type": "stream", "value": "0"}, "1": {"name": "input", "opts": {"password": false, "prompt": "Enter the increment for x"}, "output_type": "stream", "value": "0.5"}, "2": {"name": "input", "opts": {"password": false, "prompt": "Enter how many steps for x"}, "output_type": "stream", "value": "7"}}}
# Exercise 5 --
from prettytable import PrettyTable
try:
userInput = input('Enter the starting value for x') #ask for user's input on the initial value
start = float(userInput)
userInput2 = input('Enter the increment for x') #ask for user's input on the increment
step = float(userInput2)
    userInput3 = input('Enter how many steps for x') #ask for user's input on the number of steps
    ns = int(float(userInput3))
    stop = start + step*(ns - 1) #the last x value: start plus (ns - 1) increments
    print("the range for x goes from", start, " to", stop, " by increments of", step)
t = PrettyTable(['x', 'y'])
x=start - step
for i in range(0,ns,1):
x += step
if x >= 0 and x < 1:
y = x
print("for x equal to", x, ", y is equal to",y)
t.add_row([x, y])
elif x >= 1 and x < 2:
y = x*x
print("for x equal to", x, ", y is equal to",y)
t.add_row([x, y])
else:
y = x+2
print("for x equal to", x, ", y is equal to",y)
t.add_row([x, y])
print(t)
except:
print ("the input needs to be numeric. Please try again!")
| 1-Lessons/Lesson04/Lab4/src/.ipynb_checkpoints/Lab4_FP-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# Exploring the double well
# ------
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams
import keras
import tensorflow as tf
import sys
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
rcParams.update({'font.size': 16})
# Switch AUTORELOAD ON. Disable this when in production mode!
# %load_ext autoreload
# %autoreload 2
from deep_boltzmann.models import DoubleWell
from deep_boltzmann.networks.invertible import EnergyInvNet, invnet
from deep_boltzmann.sampling import GaussianPriorMCMC
from deep_boltzmann.networks.plot import test_xz_projection
from deep_boltzmann.util import count_transitions
from deep_boltzmann.sampling.analysis import free_energy_bootstrap, mean_finite, std_finite
def test_sample(network, temperature=1.0):
sample_z, sample_x, energy_z, energy_x, logw = network.sample(temperature=temperature, nsample=100000)
# xgen = network.Tzx.predict(np.sqrt(temperature) * np.random.randn(100000, 2))
params = DoubleWell.params_default.copy()
params['dim'] = 2
double_well = DoubleWell(params=params)
plt.figure(figsize=(5, 4))
_, E = double_well.plot_dimer_energy(temperature=temperature)
h, b = np.histogram(sample_x[:, 0], bins=100)
Eh = -np.log(h) / temperature
Eh = Eh - Eh.min() + E.min()
bin_means = 0.5*(b[:-1] + b[1:])
plt.plot(bin_means, Eh)
return bin_means, Eh
# reweighting
def test_sample_rew(network, temperature=1.0):
    sample_z, sample_x, energy_z, energy_x, log_w = network.sample(temperature=temperature, nsample=100000)
bin_means, Es = free_energy_bootstrap(sample_x[:, 0], -2.5, 2.5, 100, sample=100, weights=np.exp(log_w))
plt.figure(figsize=(5, 4))
double_well.plot_dimer_energy()
Emean = mean_finite(Es, axis=0)-10.7
Estd = std_finite(Es, axis=0)
plt.errorbar(bin_means, Emean, Estd)
# variance
var = mean_finite(std_finite(Es, axis=0) ** 2)
print('Estimator Standard Error: ', np.sqrt(var))
return bin_means, Emean, Estd
def plot_transformation_field_2d(transformer, bounds, ngrid=20):
# build grid
x_coarse_grid = np.linspace(bounds[0], bounds[1], num=ngrid)
y_coarse_grid = np.linspace(bounds[2], bounds[3], num=ngrid)
grid = []
for i in range(len(x_coarse_grid)):
for j in range(len(y_coarse_grid)):
grid.append([x_coarse_grid[i], y_coarse_grid[j]])
grid = np.array(grid)
# compute transformation field
grid_pred = transformer.predict(grid)
# show field
plt.figure(figsize=(5, 5))
plt.quiver(grid[:, 0], grid[:, 1], grid_pred[:, 0], grid_pred[:, 1], units='width')
def getx(x):
return x[:, 0]
def plot_convergence(losses, acceptance_rate, stepsize=None, figsize=(5, 8)):
fig, axes = plt.subplots(nrows=3, ncols=1, sharex=True, figsize=figsize)
niter = len(losses)
xticks = np.arange(niter) + 1
# ML loss
losses_ML = np.array(losses)[:, 1]
axes[0].plot(xticks, losses_ML, color='black')
axes[0].set_xlim(0, niter + 1)
axes[0].set_ylabel('ML loss')
# KL loss
losses_KL = np.array(losses)[:, 2]
axes[1].plot(xticks, losses_KL, color='black')
axes[1].set_xlim(0, niter + 1)
axes[1].set_ylabel('KL loss')
if stepsize is None:
# acceptance rate
axes[2].plot(xticks, acceptance_rate, color='black')
axes[2].set_xlim(0, niter + 1)
axes[2].set_ylabel('Acc. rate')
axes[2].set_xlabel('Training iterations')
else:
# MCMC efficiency (adaptive)
efficiency = np.array(acceptance_rate) * np.array(stepsize)
axes[2].plot(xticks, efficiency, color='black')
axes[2].set_xlim(0, niter + 1)
axes[2].set_ylabel('Efficiency')
axes[2].set_xlabel('Training iterations')
return fig, axes
paper_dir = '/Users/noe/data/papers/NoeEtAl_BoltzmannGeneratorsRev/'
# Double well
# ---
params = DoubleWell.params_default.copy()
params['dim'] = 2
double_well = DoubleWell(params=params)
plt.figure(figsize=(5,5))
double_well.plot_dimer_energy();
#plt.savefig(paper_dir + 'figs/doublewell_potential.pdf', bbox_inches='tight')
def plot_potential(labels=True, cbar=True, figsize=(5, 5)):
# 2D potential
xgrid = np.linspace(-3, 3, 100)
ygrid = np.linspace(-6, 6, 100)
Xgrid, Ygrid = np.meshgrid(xgrid, ygrid)
X = np.vstack([Xgrid.flatten(), Ygrid.flatten()]).T
E = double_well.energy(X)
E = E.reshape((100, 100))
E = np.minimum(E, 10.0)
plt.figure(figsize=figsize)
plt.contourf(Xgrid, Ygrid, E, 50, cmap='jet', vmax=4)
if cbar:
cbar = plt.colorbar()
cbar.set_label('Energy / kT', labelpad=-15, y=0.6)
cbar.set_ticks([-10, -5, 0, 5, 10])
if labels:
plt.xlabel('$x_1$ / a.u.')
plt.ylabel('$x_2$ / a.u.')
else:
plt.xticks([])
plt.yticks([])
print(params)
# simulation data
from deep_boltzmann.sampling import MetropolisGauss
# +
nsteps = 10000
x0_left = np.array([[-1.8, 0.0]])
x0_right = np.array([[1.8, 0.0]])
sampler = MetropolisGauss(double_well, x0_left, noise=0.1, stride=10)
sampler.run(nsteps)
traj_left = sampler.traj.copy()
sampler.reset(x0_left)
sampler.run(nsteps)
traj_left_val = sampler.traj.copy()
sampler.reset(x0_right)
sampler.run(nsteps)
traj_right = sampler.traj.copy()
sampler.reset(x0_right)
sampler.run(nsteps)
traj_right_val = sampler.traj.copy()
# -
plt.figure(figsize=(10, 4))
ax1 = plt.subplot2grid((1, 3), (0, 0), colspan=2)
ax2 = plt.subplot2grid((1, 3), (0, 2))
ax1.plot(traj_left[:, 0], color='blue', alpha=0.7)
ax1.plot(traj_right[:, 0], color='red', alpha=0.7)
ax1.set_xlim(0, 1000)
ax1.set_ylim(-2.5, 2.5)
ax1.set_xlabel('Time / steps')
ax1.set_ylabel('x / a.u.')
ax2.hist(traj_left[:, 0], 30, orientation='horizontal', histtype='stepfilled', color='blue', alpha=0.2);
ax2.hist(traj_left[:, 0], 30, orientation='horizontal', histtype='step', color='blue', linewidth=2);
ax2.hist(traj_right[:, 0], 30, orientation='horizontal', histtype='stepfilled', color='red', alpha=0.2);
ax2.hist(traj_right[:, 0], 30, orientation='horizontal', histtype='step', color='red', linewidth=2);
ax2.set_xticks([])
ax2.set_yticks([])
ax2.set_ylim(-2.5, 2.5)
ax2.set_xlabel('Probability')
#plt.savefig(paper_dir + 'figs/doublewell_prior_trajs.pdf', bbox_inches='tight')
x = np.vstack([traj_left, traj_right])
xval = np.vstack([traj_left_val, traj_right_val])
# prepare transition state
x_ts = np.vstack([np.zeros(1000), (1.0/double_well.params['k']) * np.random.randn(1000)]).T
# Particle filter starting from one sample
# -----
bg = invnet(double_well.dim, 'RRRR', double_well, nl_layers=3, nl_hidden=100,
nl_activation='relu', nl_activation_scale='tanh')
x0 = traj_left[100:101]
X0 = np.repeat(x0, 1000, axis=0)
X0 += 0.01 * np.random.randn(X0.shape[0], X0.shape[1])
plot_potential(labels=False, cbar=False, figsize=(5, 5))
plt.plot(x0[:, 0], x0[:, 1], color='white', linewidth=0, marker='+', markersize=20, markeredgewidth=3)
#plt.savefig(paper_dir + 'figs/double_well/explore_potential_init.pdf', bbox_inches='tight', transparent=True)
loss_bg_trainML, loss_bg_valML = bg.train_ML(X0, epochs=20, batch_size=128,
std=1.0, verbose=0, return_test_energies=False)
plt.plot(loss_bg_trainML)
plt.plot(loss_bg_valML)
from deep_boltzmann.networks.training import ParticleFilter
particle_filter = ParticleFilter(bg, X0, 10000, lr=0.00025, batch_size=1024,
high_energy=10000, max_energy=1e10, std=1.0)
Ds = []
Ds.append(particle_filter.X[:, 0].copy())
for i in range(10):
print('\nITER',(i+1),'/10')
particle_filter.train(epochs=50, stepsize=None, verbose=1)
Ds.append(particle_filter.X[:, 0].copy())
plt.plot(np.array(particle_filter.acceptance_rate) * np.array(particle_filter.stepsize))
fig, axes = plot_convergence(particle_filter.loss_train, particle_filter.acceptance_rate, particle_filter.stepsize, figsize=(5, 5));
axes[0].set_ylim(-5, 5)
axes[1].set_ylim(-10, 0)
#plt.savefig(paper_dir + 'figs/double_well/explore_conv.pdf', bbox_inches='tight')
xref, Eref = double_well.plot_dimer_energy()
# +
plot_indices = np.array([0, 1, 2, 5, 7, 10])
fig, axes = plt.subplots(nrows=3, ncols=2, sharex=True, sharey=True, figsize=(6,8))
for i, ax in zip(plot_indices, axes.flatten()):
h, e = np.histogram(Ds[i], bins=50)
e = 0.5*(e[:-1] + e[1:])
ax.plot(e, h, color='darkblue', linewidth=1)
ax.fill_between(e, np.zeros(len(e)), h, color='blue', alpha=0.7)
ax.text(-2.5, 3000, str(i*50)+' iter')
ax.plot(xref, np.exp(-Eref-3.4), color='black', linewidth=3, label='Eq. dist.')
ax.semilogy()
#plt.legend(ncol=1, fontsize=16, frameon=False)
ax.set_ylim(3, 10000)
axes[-1, 0].set_xlabel('$x_1$')
axes[-1, 1].set_xlabel('$x_1$')
axes[0, 0].set_ylabel('sample count')
axes[1, 0].set_ylabel('sample count')
axes[2, 0].set_ylabel('sample count')
#plt.savefig(paper_dir + 'figs/double_well/explore_conv_hist.pdf', bbox_inches='tight')
# -
| software/Fig4_Explore_DoubleWell.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ColeBallard/digit-classifier/blob/main/digit_classifier.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="hcd6Oy4QiN9b"
import tensorflow as tf
# + id="1HvgkEjAiX99"
import numpy as np
import matplotlib.pyplot as plt
# + colab={"base_uri": "https://localhost:8080/"} id="NpWWpr2tigA4" outputId="73cbfe0a-5666-4716-cac9-e6e9e6ee692c"
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# + id="ZfurxjmyiofL"
train_images = train_images / 255.0
test_images = test_images / 255.0
# + colab={"base_uri": "https://localhost:8080/", "height": 589} id="byMv9_Lti_yf" outputId="3d16686b-dbdd-4e0f-fd2c-1a16e7b5014a"
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(train_labels[i])
plt.show()
# + id="EyCYsFuzjrot"
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
# + id="FfhpW7Vfjrye"
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="ktrmsjwLjzbq" outputId="ca999a7d-0923-4f21-aff3-fce078c82cc6"
model.fit(train_images, train_labels, epochs=10)
# + colab={"base_uri": "https://localhost:8080/"} id="egCO7swMkEvk" outputId="003b2070-1d1c-4d75-baef-fb86f3cb0b22"
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
# + id="m0TMGt2-kExP"
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
# + id="74-GB3dmkMVx"
predictions = probability_model.predict(test_images)
# + id="TihRXCcDkkuC"
def plot_image(i, predictions_array, true_label, img):
true_label, img = true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(predicted_label,
100*np.max(predictions_array),
true_label),
color=color)
def plot_value_array(i, predictions_array, true_label):
true_label = true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
# + colab={"base_uri": "https://localhost:8080/", "height": 618} id="nNxzPMLLkULA" outputId="7f95c8dd-a741-4b34-db0b-ba69d005d32f"
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
| digit_classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from rdkit import Chem
from IPython.display import clear_output
from rdkit import DataStructs
from rdkit.Chem import AllChem
from scipy.stats import pearsonr, spearmanr
import matplotlib.pyplot as plt
import seaborn as sns
def read_merged_data(input_file_list, usecols=None):
    # Concatenate the per-fold CSV files into a single DataFrame.
    # (DataFrame.append is deprecated; use pd.concat instead.)
    frames = [pd.read_csv(input_file, usecols=usecols) for input_file in input_file_list]
    return pd.concat(frames, ignore_index=True)
file_list = ['./pria_rmi_cv/file_{}.csv'.format(i) for i in range(5)]
train_pd = read_merged_data(file_list)
test_pd = pd.read_csv('./updated_dataset/pria_prospective.csv.gz')
test_reg = test_pd['Keck_Pria_Continuous'].to_numpy()
fold_np = np.load('./job_results_pred/neural_networks/stage_2/single_classification_22/fold_0.npz')
labels, y_tr, y_v, y_te, y_pred_on_train, y_pred_on_val, y_pred_on_test = fold_np['labels'], fold_np['y_train'], fold_np['y_val'], fold_np['y_test'], fold_np['y_pred_on_train'], fold_np['y_pred_on_val'], fold_np['y_pred_on_test']
y_stnnc_a = y_pred_on_test[:,0]
fold_np = np.load('./job_results_pred/neural_networks/stage_2/single_regression_11/fold_0.npz')
labels, y_tr, y_v, y_te, y_pred_on_train, y_pred_on_val, y_pred_on_test = fold_np['labels'], fold_np['y_train'], fold_np['y_val'], fold_np['y_test'], fold_np['y_pred_on_train'], fold_np['y_pred_on_val'], fold_np['y_pred_on_test']
y_stnnr_b = y_pred_on_test[:,0]
fold_np = np.load('./job_results_pred/random_forest/stage_2/sklearn_rf_392335_97/fold_0.npz')
labels, y_tr, y_v, y_te, y_pred_on_train, y_pred_on_val, y_pred_on_test = fold_np['labels'], fold_np['y_train'], fold_np['y_val'], fold_np['y_test'], fold_np['y_pred_on_train'], fold_np['y_pred_on_val'], fold_np['y_pred_on_test']
y_rf_h = y_pred_on_test[:,0]
# -
spearman_res = spearmanr(test_reg, y_stnnc_a)
pearson_res = pearsonr(test_reg, y_stnnc_a)
print('STNN-C_a spearman correlation={:.4f}, pvalue={:.4f}'.format(spearman_res[0], spearman_res[1]))
print('STNN-C_a pearson correlation={:.4f}, pvalue={:.4f}'.format(pearson_res[0], pearson_res[1]))
sns.jointplot(x=test_reg, y=y_stnnc_a); plt.xlabel('True Score'); plt.ylabel('STNN-C_a'); plt.title('Scatter Plot');
sns.jointplot(x=test_reg, y=y_stnnc_a, kind="hex", color="g"); plt.xlabel('True Score'); plt.ylabel('STNN-C_a'); plt.title('Hex-Bin Plot');
spearman_res = spearmanr(test_reg, y_stnnr_b)
pearson_res = pearsonr(test_reg, y_stnnr_b)
print('STNN-R_b spearman correlation={:.4f}, pvalue={:.4f}'.format(spearman_res[0], spearman_res[1]))
print('STNN-R_b pearson correlation={:.4f}, pvalue={:.4f}'.format(pearson_res[0], pearson_res[1]))
sns.jointplot(x=test_reg, y=y_stnnr_b); plt.xlabel('True Score'); plt.ylabel('STNN-R_b'); plt.title('Scatter Plot');
sns.jointplot(x=test_reg, y=y_stnnr_b, kind="hex", color="g"); plt.xlabel('True Score'); plt.ylabel('STNN-R_b'); plt.title('Hex-Bin Plot');
spearman_res = spearmanr(test_reg, y_rf_h)
pearson_res = pearsonr(test_reg, y_rf_h)
print('RF_h spearman correlation={:.4f}, pvalue={:.4f}'.format(spearman_res[0], spearman_res[1]))
print('RF_h pearson correlation={:.4f}, pvalue={:.4f}'.format(pearson_res[0], pearson_res[1]))
sns.jointplot(x=test_reg, y=y_rf_h); plt.xlabel('True Score'); plt.ylabel('RF_h'); plt.title('Scatter Plot');
sns.jointplot(x=test_reg, y=y_rf_h, kind="hex", color="g"); plt.xlabel('True Score'); plt.ylabel('RF_h'); plt.title('Hex-Bin Plot');
| pria_lifechem/analysis/PS_Prediction_vs_Activity_Correlation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
import glob
glob.glob("*.csv")
import pandas as pd
df = pd.read_csv("all_data.csv").drop(columns=["Unnamed: 0"])
df_care = df[["datetime","location","grid_id","value"]]
df_f = df.groupby(["location","datetime"]).mean().reset_index()
la_check = df_f[df_f["location"]=='Los Angeles (SoCAB)']
set(df_f["location"])
la = df_f[df_f["location"]=='Los Angeles (SoCAB)'].reset_index(drop=True)
la["datetime"] = pd.to_datetime(la["datetime"])  # required for the .dt accessor used below
la.head()
# +
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor
# import dataset, network to train and metric to optimize
from pytorch_forecasting import TimeSeriesDataSet, TemporalFusionTransformer, QuantileLoss
import copy
from pathlib import Path
import warnings
import numpy as np
import pandas as pd
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor
from pytorch_lightning.loggers import TensorBoardLogger
import torch
from pytorch_forecasting import Baseline, TemporalFusionTransformer, TimeSeriesDataSet
from pytorch_forecasting.data import GroupNormalizer
from pytorch_forecasting.metrics import SMAPE, PoissonLoss, QuantileLoss
from pytorch_forecasting.models.temporal_fusion_transformer.tuning import optimize_hyperparameters
# -
max_encoder_length = 36
max_prediction_length = 60
# +
# la["datetime"] = pd.to_datetime(la["datetime"])
# -
la.head()
data = la
data["time_idx"] = data["datetime"].dt.year * 365 + data["datetime"].dt.month *30 + data["datetime"].dt.day
data["time_idx"] -= data["time_idx"].min()
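# The `year*365 + month*30 + day` arithmetic above only approximates a day count (every month is treated as 30 days). A gap-free alternative, sketched here as an assumption about the intent, derives the index directly from timestamp deltas:

```python
import pandas as pd

df = pd.DataFrame({"datetime": pd.to_datetime(["2020-01-01", "2020-01-02", "2020-01-05"])})
# Exact day offsets from the earliest timestamp, no calendar approximation.
df["time_idx"] = (df["datetime"] - df["datetime"].min()).dt.days
```
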
max_prediction_length = 100
max_encoder_length = 24
training_cutoff = data["time_idx"].max() - max_prediction_length
time_varying_known_reals = list(data.columns[3:])
time_varying_known_reals[0:2]
data.head()
training = TimeSeriesDataSet(
data[lambda x: x.time_idx <= training_cutoff],
time_idx="time_idx",
target="value",
group_ids=["location"],
min_encoder_length=max_encoder_length // 2, # keep encoder length long (as it is in the validation set)
max_encoder_length=max_encoder_length,
min_prediction_length=1,
max_prediction_length=max_prediction_length,
static_categoricals=[],
static_reals=[],
time_varying_known_categoricals=["location"],
time_varying_known_reals=time_varying_known_reals,
time_varying_unknown_categoricals=[],
time_varying_unknown_reals=[
],
target_normalizer=GroupNormalizer(
groups=[ "location"], transformation="softplus"
), # use softplus and normalize by group
add_relative_time_idx=True,
add_target_scales=True,
add_encoder_length=True,
allow_missing_timesteps=True
)
# +
validation = TimeSeriesDataSet.from_dataset(training, data, predict=True, stop_randomization=True)
# create dataloaders for model
batch_size = 64 # set this between 32 to 128
train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=0)
val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size * 10, num_workers=0)
# -
actuals = torch.cat([y for x, (y, weight) in iter(val_dataloader)])
baseline_predictions = Baseline().predict(val_dataloader)
(actuals - baseline_predictions).abs().mean().item()
# +
pl.seed_everything(42)
trainer = pl.Trainer(
gpus=1,
    # clipping gradients is a hyperparameter and important to prevent divergence
# of the gradient for recurrent neural networks
gradient_clip_val=0.1,
)
tft = TemporalFusionTransformer.from_dataset(
training,
# not meaningful for finding the learning rate but otherwise very important
learning_rate=0.03,
hidden_size=16, # most important hyperparameter apart from learning rate
# number of attention heads. Set to up to 4 for large datasets
attention_head_size=1,
dropout=0.1, # between 0.1 and 0.3 are good values
hidden_continuous_size=8, # set to <= hidden_size
output_size=7, # 7 quantiles by default
loss=QuantileLoss(),
# reduce learning rate if no improvement in validation loss after x epochs
reduce_on_plateau_patience=4,
)
print(f"Number of parameters in network: {tft.size()/1e3:.1f}k")
# +
# find optimal learning rate
# res = trainer.tuner.lr_find(
# tft,
# train_dataloader=train_dataloader,
# val_dataloaders=val_dataloader,
# max_lr=10.0,
# min_lr=1e-6,
# )
# print(f"suggested learning rate: {res.suggestion()}")
# fig = res.plot(show=True, suggest=True)
# fig.show()
# +
# tft.cuda()
# +
early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=10, verbose=False, mode="min")
lr_logger = LearningRateMonitor() # log the learning rate
logger = TensorBoardLogger("lightning_logs") # logging results to a tensorboard
trainer = pl.Trainer(
max_epochs=30,
gpus=0,
weights_summary="top",
gradient_clip_val=0.1,
    # limit_train_batches=30,  # comment in for training, running validation every 30 batches
    # fast_dev_run=True,  # comment in to check that the network or dataset has no serious bugs
callbacks=[lr_logger, early_stop_callback],
logger=logger,
)
tft = TemporalFusionTransformer.from_dataset(
training,
learning_rate=0.03,
hidden_size=16,
attention_head_size=1,
dropout=0.1,
hidden_continuous_size=8,
output_size=7, # 7 quantiles by default
loss=QuantileLoss(),
    log_interval=0,  # keep 0 when using the learning rate finder; set to e.g. 10 to log every 10 batches
reduce_on_plateau_patience=4,
)
print(f"Number of parameters in network: {tft.size()/1e3:.1f}k")
# -
trainer.fit(
tft,
train_dataloader=train_dataloader,
val_dataloaders=val_dataloader,
)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
torch.zeros(1, device=device)
# +
# la.plot()
# -
from darts import TimeSeries
series = TimeSeries.from_dataframe(la, 'datetime', freq='1D', fill_missing_dates=True, fillna_value=0)
series
train, val = series[:-300], series[-300:]
# +
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import pandas as pd
import shutil
from sklearn.preprocessing import MinMaxScaler
from tqdm import tqdm_notebook as tqdm
from torch.utils.tensorboard import SummaryWriter
import matplotlib.pyplot as plt
from darts import TimeSeries
from darts.dataprocessing.transformers import Scaler
from darts.models import TransformerModel, ExponentialSmoothing
from darts.metrics import mape
from darts.utils.statistics import check_seasonality, plot_acf
from darts.datasets import AirPassengersDataset, SunspotsDataset
# -
# Normalize the time series (note: we avoid fitting the transformer on the validation set)
# Change name
scaler = Scaler()
train_scaled = scaler.fit_transform(train)
val_scaled = scaler.transform(val)
series_scaled = scaler.transform(series)
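# Fitting the scaler on the training split only and reusing its statistics on the validation split (as above) prevents leakage; a minimal NumPy illustration of the same pattern:

```python
import numpy as np

train = np.array([1.0, 2.0, 3.0])
val = np.array([4.0, 5.0])

# Min/max come from the training split only, so nothing from `val`
# influences the scaling.
lo, hi = train.min(), train.max()
train_scaled = (train - lo) / (hi - lo)
val_scaled = (val - lo) / (hi - lo)  # may fall outside [0, 1]
```
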
# +
# val.pd_series().plot()
# +
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
my_stopper = EarlyStopping(
monitor="val_loss",
patience=5,
min_delta=0.05,
mode='min',
)
pl_trainer_kwargs = {"callbacks": [my_stopper], "accelerator": "gpu", "gpus": 1}
# -
my_model = TransformerModel(
input_chunk_length=12,
output_chunk_length=1,
batch_size=32,
n_epochs=200,
model_name="air_transformer",
nr_epochs_val_period=10,
d_model=16,
nhead=8,
num_encoder_layers=2,
num_decoder_layers=2,
dim_feedforward=128,
dropout=0.1,
activation="relu",
random_state=42,
save_checkpoints=True,
force_reset=True,
torch_device_str= "cuda"
)
pl_trainer_kwargs
my_model.fit(series=train_scaled, val_series=val_scaled, verbose=True)
dir(scaler)
scaler.inverse_transform(val_scaled).pd_series().plot()
scaler.inverse_transform(my_model.predict(100)).pd_series().plot()
# Reload the best checkpoint saved during training before predicting with it.
best_model = TransformerModel.load_from_checkpoint(model_name="air_transformer", best=True)
best_model.predict(100)
# +
from darts.datasets import EnergyDataset
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from darts import TimeSeries
from darts.models import NBEATSModel
from darts.dataprocessing.transformers import Scaler, MissingValuesFiller
from darts.metrics import mape, r2_score
from darts.datasets import EnergyDataset
# -
df = EnergyDataset().load().pd_dataframe()
df["generation hydro run-of-river and poundage"].plot()
plt.title("Hourly generation hydro run-of-river and poundage")
df_day_avg = df.groupby(df.index.astype(str).str.split(" ").str[0]).mean().reset_index()
filler = MissingValuesFiller()
scaler = Scaler()
series = scaler.fit_transform(
filler.transform(
TimeSeries.from_dataframe(
df_day_avg, "time", ["generation hydro run-of-river and poundage"]
)
)
).astype(np.float32)
series.plot()
plt.title("Daily generation hydro run-of-river and poundage")
train, val = series.split_after(pd.Timestamp("20170901"))
model_nbeats = NBEATSModel(
input_chunk_length=30,
output_chunk_length=7,
generic_architecture=True,
num_stacks=10,
num_blocks=1,
num_layers=4,
layer_widths=512,
n_epochs=100,
nr_epochs_val_period=1,
batch_size=800,
model_name="nbeats_run",
torch_device_str= "cuda"
)
model_nbeats.fit(train, val_series=val, verbose=True)
# +
from darts.models import ExponentialSmoothing
model = ExponentialSmoothing()
model.fit(train)
prediction = model.predict(len(val))
# +
# prediction
# +
import matplotlib.pyplot as plt
series.plot()
prediction.plot(label='forecast', low_quantile=0.05, high_quantile=0.95)
plt.legend()
# -
| notebooks/final_experiments/NBEATSModel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Behavioral Data Analysis
# This notebook generates timecourse analyses and figures for experiments 1 and 2.
# # Imports
import os
import re
import random
import itertools
import warnings
from itertools import groupby
from operator import itemgetter
import numpy as np
import pandas as pd
import scipy
from scipy import stats
import seaborn as sb
import statsmodels.api as sm
from sklearn import datasets, linear_model
from sklearn.linear_model import LinearRegression
from analysis_helpers import *
warnings.filterwarnings('ignore')
# %matplotlib inline
# # Load Data
exp1 = pd.read_csv('../parsed_data/behavioral_data_sustained.csv', index_col=0)
exp2 = pd.read_csv('../parsed_data/behavioral_data_variable.csv', index_col=0)
# # Organize Data
# Make sure all images labeled by their inherent category (use image filename)
for exp in [exp1, exp2]:
    exp.loc[ exp['Memory Image'].str.contains("sun", na=False),'Category']='Place'
    exp.loc[~exp['Memory Image'].str.contains("sun", na=False),'Category']='Face'
exp.loc[exp['Trial Type']=='Presentation','Category']=np.nan
# Number all presentation and memory trials
exp.loc[exp['Trial Type']=='Memory','Trial'] = list(range(0,40))*30*8
exp.loc[exp['Trial Type']=='Presentation','Trial'] = list(range(0,10))*30*8
# ### Exp1: add cued category from previous presentation to memory trials
for s in exp1['Subject'].unique():
for r in exp1['Run'].unique():
exp1.loc[(exp1['Run']==r)
& (exp1['Subject']==s)
& (exp1['Trial Type']=='Memory'), 'Cued Category'] = exp1.loc[(exp1['Run']==r) & (exp1['Subject']==s) & (exp1['Trial Type']=='Presentation') & (exp1['Trial']==0)]['Cued Category'].item()
# ### Exp2: add last-cued category from previous presentation to memory trials
# +
exp2['Last Cued'] = np.nan
for sub in exp2['Subject'].unique():
for run in exp2['Run'].unique():
# obtain cued category from the last presentation trial
last_cat = exp2[(exp2['Trial Type']=='Presentation')
& (exp2['Subject']==sub)
& (exp2['Run']==run)
& (exp2['Trial']==9)]['Cued Category'].item()
# assign to this memory run
exp2.loc[(exp2['Trial Type']=='Memory')
& (exp2['Subject']==sub)
& (exp2['Run']==run),'Last Cued'] = last_cat
# -
# ### Re-Label Novel images by Cued or Uncued category
exp1 = add_nov_label(exp1, column_name = 'Cued Category')
exp2 = add_nov_label(exp2, column_name ='Last Cued')
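# `add_nov_label` comes from `analysis_helpers` and is not shown in this notebook; a hypothetical sketch of what it plausibly does, splitting 'Novel' rows by whether the image category matches the cued column:

```python
import pandas as pd

def add_nov_label(df, column_name):
    # Hypothetical re-implementation: split 'Novel' images into cued vs.
    # uncued depending on whether their category matches the cued category.
    novel = df['Attention Level'] == 'Novel'
    cued = df['Category'] == df[column_name]
    df.loc[novel & cued, 'Attention Level'] = 'Nov_Cued'
    df.loc[novel & ~cued, 'Attention Level'] = 'Nov_Un'
    return df

demo = pd.DataFrame({'Attention Level': ['Novel', 'Novel', 'Full'],
                     'Category': ['Face', 'Place', 'Face'],
                     'Cued Category': ['Face', 'Face', 'Face']})
demo = add_nov_label(demo, column_name='Cued Category')
```
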
# ### Working version of the data with all Novel images together (not split by cued or uncued)
exp2_Novel = exp2.replace(to_replace=['Nov_Cued','Nov_Un'], value='Novel')
exp1_Novel = exp1.replace(to_replace=['Nov_Cued','Nov_Un'], value='Novel')
# +
# Note : subject 28 in exp 2 has all presentation blocks ending with 'Place' cue !
# exp2[(exp2['Subject']==28) & (exp2['Trial Type']=='Presentation') & (exp2['Trial']==9)]['Cued Category']
# -
# # Stats
# Below are all of the statistical tests done on the behavioral data, roughly in the order they appear in the paper
# ### Reaction Time Stats (Cued vs. Uncued side)
# +
# Experiment 1
exp1_gr = exp1.groupby(['Subject','Cue Validity'], as_index=False).mean()
print(scipy.stats.ttest_rel(exp1_gr[exp1_gr['Cue Validity']==1]['Attention Reaction Time (s)'],
exp1_gr[exp1_gr['Cue Validity']==0]['Attention Reaction Time (s)']))
print(cohen_d(list(exp1_gr[exp1_gr['Cue Validity']==1]['Attention Reaction Time (s)']),
list(exp1_gr[exp1_gr['Cue Validity']==0]['Attention Reaction Time (s)'])))
# +
# Experiment 2
exp2_gr = exp2.groupby(['Subject','Cue Validity'], as_index=False).mean()
print(scipy.stats.ttest_rel(exp2_gr[exp2_gr['Cue Validity']==1]['Attention Reaction Time (s)'],
exp2_gr[exp2_gr['Cue Validity']==0]['Attention Reaction Time (s)']))
print(cohen_d(list(exp2_gr[exp2_gr['Cue Validity']==1]['Attention Reaction Time (s)']),
list(exp2_gr[exp2_gr['Cue Validity']==0]['Attention Reaction Time (s)'])))
# -
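# `cohen_d`, used throughout the stats below, is imported from `analysis_helpers`; assuming it is the standard pooled-standard-deviation effect size, a minimal sketch:

```python
import numpy as np

def cohen_d(x, y):
    # Pooled-standard-deviation effect size for two samples (assumed to
    # mirror the analysis_helpers implementation).
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled_sd

d = cohen_d([2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
```
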
# ### Reaction Time Differences
# +
diffs = {'Experiment_1':[], 'Experiment_2':[]}
for d,label in zip([exp1, exp2],['Experiment_1', 'Experiment_2']):
for s in d['Subject'].unique():
cued = d[(d['Subject']==s)&(d['Cue Validity']==0)]['Attention Reaction Time (s)'].mean()
uncued = d[(d['Subject']==s)&(d['Cue Validity']==1)]['Attention Reaction Time (s)'].mean()
diffs[label].append(cued - uncued)
print('RT Diff Comparison')
print(scipy.stats.ttest_ind(diffs['Experiment_1'], diffs['Experiment_2']))
print(cohen_d(diffs['Experiment_1'], diffs['Experiment_2']))
# -
# ### Compare Fully Attended images to all other images
for d,label in zip([exp1, exp2],['Experiment_1', 'Experiment_2']):
Fulls = []
Others = []
for s in d['Subject'].unique():
Fulls.append(d[(d['Subject']==s)&(d['Attention Level']=='Full')]['Familiarity Rating'].mean())
Others.append(d[(d['Subject']==s) &(d['Attention Level']!='Full')]['Familiarity Rating'].mean())
print(label)
print(scipy.stats.ttest_rel(Fulls, Others))
print(cohen_d(Fulls, Others))
print()
# ### Face versus Scene
for exp,label in zip([exp1, exp2],['Experiment_1', 'Experiment_2']):
f_p = exp.groupby(['Category', 'Subject', 'Attention Level'], as_index=False).mean()
print(label)
print(scipy.stats.ttest_rel(f_p[(f_p['Category']=='Place') & (f_p['Attention Level']=='Full')]['Familiarity Rating'],
f_p[(f_p['Category']=='Face') & (f_p['Attention Level']=='Full')]['Familiarity Rating']))
print(cohen_d(f_p[(f_p['Category']=='Place') & (f_p['Attention Level']=='Full')]['Familiarity Rating'],
f_p[(f_p['Category']=='Face') & (f_p['Attention Level']=='Full')]['Familiarity Rating']))
print()
# ### Attended Category versus Unattended
for d,label in zip([exp1_Novel, exp2_Novel],['Experiment_1', 'Experiment_2']):
Cats = []
Nones = []
for s in d['Subject'].unique():
Cats.append(d[(d['Subject']==s) & (d['Attention Level'].isin(['Category']))]['Familiarity Rating'].mean())
Nones.append(d[(d['Subject']==s) & (d['Attention Level']=='None')]['Familiarity Rating'].mean())
print(label)
print(scipy.stats.ttest_rel(Cats, Nones))
print(cohen_d(Cats, Nones))
print()
# ### Attended Side vs Unattended
for d,label in zip([exp1_Novel, exp2_Novel],['Experiment_1', 'Experiment_2']):
Sides = []
Nones = []
for s in d['Subject'].unique():
Sides.append(d[(d['Subject']==s) & (d['Attention Level'].isin(['Side']))]['Familiarity Rating'].mean())
Nones.append(d[(d['Subject']==s) & (d['Attention Level']=='None')]['Familiarity Rating'].mean())
print(label)
print(scipy.stats.ttest_rel(Sides, Nones))
print(cohen_d(Sides, Nones))
print()
# ### Cued versus Uncued Novel images
for d,label in zip([exp1, exp2],['Experiment_1','Experiment_2']):
d = d.groupby(['Subject','Attention Level'], as_index=False).mean()
print(label)
a = d[d['Attention Level']=='Nov_Cued']['Familiarity Rating']
b = d[d['Attention Level']=='Nov_Un']['Familiarity Rating']
print(scipy.stats.ttest_rel(a, b))
print(cohen_d(a, b))
print()
# ### Feature boost versus feature bias boost
# +
diffs = {'Experiment_1':[], 'Experiment_2':[]}
for d,label in zip([exp1, exp2],['Experiment_1', 'Experiment_2']):
cat_no = []
nov_diff = []
for s in d['Subject'].unique():
cat = d[(d['Subject']==s)&(d['Attention Level'].isin(['Category', 'Full']))]['Familiarity Rating'].mean()
no = d[(d['Subject']==s) &(d['Attention Level']=='None')]['Familiarity Rating'].mean()
nov_c = d[(d['Subject']==s) &(d['Attention Level']=='Nov_Cued')]['Familiarity Rating'].mean()
nov_u = d[(d['Subject']==s) &(d['Attention Level']=='Nov_Un')]['Familiarity Rating'].mean()
cat_no.append(cat - no)
nov_diff.append(nov_c - nov_u)
print(label)
print(scipy.stats.ttest_rel(cat_no, nov_diff))
print(cohen_d(cat_no, nov_diff))
print()
# -
# ### Feature boost versus Location boost
# +
# Exp 1: mean(Cat & Full) - mean(None)
# versus
# Exp 1: mean(Side & Full) - mean(None)
# Exp 2: mean(Cat & Full) - mean(None)
# versus
# Exp 2: mean(Side & Full) - mean(None)
# Experiment 1: ( (mean(Cat & Full) - mean(None)) - (mean(Side & Full) - mean(None)) )
# versus
# Experiment 2: ( (mean(Cat & Full) - mean(None)) - (mean(Side & Full) - mean(None)) )
# +
diffs = {'Experiment_1':[], 'Experiment_2':[]}
side_diffs = {'Experiment_1':[], 'Experiment_2':[]}
for d,label in zip([exp1_Novel, exp2_Novel],['Experiment_1', 'Experiment_2']):
cat_nov = []
side_nov = []
for s in d['Subject'].unique():
side = d[(d['Subject']==s)&(d['Attention Level'].isin(['Side','Full']))]['Familiarity Rating'].mean()
cat = d[(d['Subject']==s)&(d['Attention Level'].isin(['Category', 'Full']))]['Familiarity Rating'].mean()
nov = d[(d['Subject']==s) &(d['Attention Level']=='None')]['Familiarity Rating'].mean()
cat_nov.append(cat - nov)
side_nov.append(side - nov)
print(label)
print(scipy.stats.ttest_rel(cat_nov, side_nov))
print(cohen_d(cat_nov, side_nov))
print()
side_diffs[label] = side_nov
diff = [x-y for x,y in zip(cat_nov,side_nov)]
diffs[label] = diff
print()
print('Feature boost relative to Location boost, Exp1 vs Exp 2')
print(scipy.stats.ttest_ind(diffs['Experiment_1'], diffs['Experiment_2']))
print(cohen_d(diffs['Experiment_1'], diffs['Experiment_2']))
print()
print('Location boost relative to novel, Exp2 vs Exp1')
print(scipy.stats.ttest_ind(side_diffs['Experiment_2'], side_diffs['Experiment_1']))
print(cohen_d(side_diffs['Experiment_2'], side_diffs['Experiment_1']))
# -
# ### Fully Attended versus Side Attended boost
# +
diffs = {'Experiment_1':[], 'Experiment_2':[]}
side_diffs = {'Experiment_1':[], 'Experiment_2':[]}
for d,label in zip([exp1_Novel, exp2_Novel],['Experiment_1', 'Experiment_2']):
cat_nov = []
side_nov = []
for s in d['Subject'].unique():
side = d[(d['Subject']==s)&(d['Attention Level'].isin(['Side']))]['Familiarity Rating'].mean()
cat = d[(d['Subject']==s)&(d['Attention Level'].isin(['Full']))]['Familiarity Rating'].mean()
nov = d[(d['Subject']==s) &(d['Attention Level']=='None')]['Familiarity Rating'].mean()
cat_nov.append(cat - nov)
side_nov.append(side - nov)
print(label)
print(scipy.stats.ttest_rel(cat_nov, side_nov))
print(cohen_d(cat_nov, side_nov))
print()
side_diffs[label] = side_nov
diff = [x-y for x,y in zip(cat_nov,side_nov)]
diffs[label] = diff
print()
print('Feature boost relative to Location boost, Exp1 vs Exp 2')
print(scipy.stats.ttest_ind(diffs['Experiment_1'], diffs['Experiment_2']))
print(cohen_d(diffs['Experiment_1'], diffs['Experiment_2']))
print()
print('Location boost relative to novel, Exp2 vs Exp1')
print(scipy.stats.ttest_ind(side_diffs['Experiment_2'], side_diffs['Experiment_1']))
print(cohen_d(side_diffs['Experiment_2'], side_diffs['Experiment_1']))
# -
# # Plot Data
# ## Violin Plots
# +
stat_dict_full = {'Experiment_1':{}, 'Experiment_2':{}}
# color list
col = ['r','orange','tan','purple','blue','grey']
col_neg = ['grey','blue', 'purple', 'tan', 'orange', 'r']
# # cat list
cats = ['Full','Category','Nov_Cued','Side','None','Nov_Un']
# plot settings
sb.set_style("white")
plt.grid(False)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
plt.xlabel('Attention Level', fontsize = 20)
plt.ylabel('Familiarity Rating', fontsize = 20)
# for each experiment, group and plot
for d,label in zip([exp1, exp2],['Experiment_1', 'Experiment_2']):
data = d.groupby(['Subject','Attention Level', 'Category'], as_index = False).mean()
print(label + ': Average Familiarity by Attention Level')
sb_plot = sb.violinplot(x='Attention Level', y='Familiarity Rating',
data = data, hue='Category', split=True,
order=cats)
sb_plot.set(ylim=(.2, 9))
ax1 = sb_plot.axes
### WITHIN VIOLIN SIGNIFICANCE FOR PLOTTING ###
t_draw = {}
for c in data['Attention Level'].unique():
if c in(['Nov_Cued','Nov_Un']) and label=='Experiment_2':
# if comparing novel images from exp2, eliminate participant 28 (all Place-cued as last cued category)
first = list(data[(data['Attention Level']==c) & (data['Category']=='Face') & (data['Subject']!=28)]['Familiarity Rating'])
second = list(data[(data['Attention Level']==c) & (data['Category']=='Place') & (data['Subject']!=28)]['Familiarity Rating'])
else:
first = list(data[(data['Attention Level']==c) & (data['Category']=='Face')]['Familiarity Rating'])
second = list(data[(data['Attention Level']==c) & (data['Category']=='Place')]['Familiarity Rating'])
t = scipy.stats.ttest_rel(first, second)
if t[1]<.001:
t_draw[c] = '***'
elif t[1]<.01:
t_draw[c] = '**'
elif t[1]<.05:
t_draw[c] = '*'
elif t[1]<.056:
t_draw[c] = '+'
### SIGNIFICANCE FOR PLOTTING ###
stat_dict = {}
k = data.groupby(['Subject','Attention Level'],as_index=False).mean()
for pair in list(itertools.combinations(cats, r=2)):
t = stats.ttest_rel(k[k['Attention Level']==pair[0]]['Familiarity Rating'],
k[k['Attention Level']==pair[1]]['Familiarity Rating'])
stat_dict_full[label][pair] = {'t': t.statistic, 'p': t.pvalue}
# dictionary where every key is a pair of sig dif categories
if t[1]<.056:
stat_dict[pair] = {'t': t.statistic, 'p': t.pvalue}
### ADD SIG BARS FOR POSITIVE RELATIONSHIPS TO PLOT ###
plotted_cats = []
to_be_plotted = []
line_counter = 0
for idx,c in enumerate(cats):
# for each category
x = sig_bars(c, cats, stat_dict)
# get first series of lines
for idx,a in enumerate(x):
if (a['categories'] not in plotted_cats) and (a!=np.nan) and (type(a['categories'])!=float):
a['y'] = a['y'] + line_counter
to_be_plotted.append(a)
plotted_cats.append(a['categories'])
if a['next']!=0:
# if next category also has significant relationship
fake_first = a['categories'][0]
b = a
while b['next']!= 0 :
second_fake_first = b['categories'][0]
b = sig_bars(b['next'], cats, stat_dict, adjust = (cats.index(c)-cats.index(b['next']))/len(cats))[0]
# get params for that bar, adjust height --> same level as first line
if (b['categories'] not in plotted_cats) and (b != np.nan) and (type(b['categories'])!=float):
b['y'] = b['y'] + line_counter
to_be_plotted.append(b)
plotted_cats.append(b['categories'])
plotted_cats.append((fake_first, b['categories'][1]))
plotted_cats.append((second_fake_first, b['categories'][1]))
if type(plotted_cats[-1]) != float:
l = plotted_cats[-1][0]
plotted_cats.append((l,plotted_cats[-1][1]))
plotted_cats.append((fake_first,plotted_cats[-1][1]))
line_counter += .3
if type(plotted_cats[-1]) == str:
fake_last = plotted_cats[-1][1]
plotted_cats.append((fake_first, fake_last))
# get the unique y values
y_vals = [x['y'] for x in to_be_plotted]
unique = list(set(y_vals))
unique.sort(reverse=True)
# move each to desired location
new_to_be_plotted = []
for idx,u in enumerate(unique):
for line in to_be_plotted:
if line['y']==u:
line['y'] = (idx/3)+5.2
new_to_be_plotted.append(line)
for each in new_to_be_plotted:
ax1.axhline(each['y'], ls='-', xmin = each['x_min'], xmax = each['x_max'],
linewidth = each['width'], color = col[cats.index(each['categories'][0])])
### ADD SIG BARS FOR NEGATIVE RELATIONSHIPS TO PLOT ###
plotted_cats = []
to_be_plotted = []
line_counter = 0
for idx,c in enumerate(cats):
# for each category
x = sig_bars_neg(c, cats, stat_dict)
# get first series of lines
for idx,a in enumerate(x):
if (a['categories'] not in plotted_cats) and (a!=np.nan) and (type(a['categories'])!=float):
a['y'] = a['y'] + line_counter
to_be_plotted.append(a)
plotted_cats.append(a['categories'])
if a['next']!=0:
# if next category also has significant relationship
fake_first = a['categories'][0]
b = a
while b['next']!= 0 :
second_fake_first = b['categories'][0]
b = sig_bars_neg(b['next'], cats, stat_dict, adjust = (cats.index(c)-cats.index(b['next']))/len(cats))[0]
# get params for that bar, adjust height --> same level as first line
if (b['categories'] not in plotted_cats) and (b != np.nan) and (type(b['categories'])!=float):
b['y'] = b['y'] + line_counter
to_be_plotted.append(b)
plotted_cats.append(b['categories'])
plotted_cats.append((fake_first, b['categories'][1]))
plotted_cats.append((second_fake_first, b['categories'][1]))
if type(plotted_cats[-1]) != float:
l = plotted_cats[-1][0]
line_counter += .3
if len(plotted_cats)>0 and type(plotted_cats[-1]) == str:
fake_last = plotted_cats[-1][1]
plotted_cats.append((fake_first, fake_last))
# get the unique y values
y_vals = [x['y'] for x in to_be_plotted]
unique = list(set(y_vals))
unique.sort(reverse=True)
# move each to desired location
new_to_be_plotted = []
for idx,u in enumerate(unique):
for line in to_be_plotted:
if line['y']==u:
line['y'] = (idx/3)+7.3
new_to_be_plotted.append(line)
for each in new_to_be_plotted:
ax1.axhline(each['y'], ls='-', xmin = each['x_min'], xmax = each['x_max'],
linewidth = each['width'], color = col[-cats.index(each['categories'][1])])
for stars in t_draw:
ax1.text((cats.index(stars)), 4.5, t_draw[stars], horizontalalignment='center', size='large', color='black')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
# -
# ## Timecourse Plots
# +
# Apply sliding window
window_length = 20
exp1_mean_window = apply_window(exp1, window_length)
exp2_mean_window = apply_window(exp2, window_length)
# prepare data for plotting
plot_data={}
for data,key in zip([exp1_mean_window, exp2_mean_window],['exp1','exp2']):
# average across all trials within each subject
group = data.reset_index().groupby(['Subject','Trial']).mean()
# melt/restructure the data
group_melt = pd.melt(group.reset_index(), id_vars=['Subject','Trial'],
value_vars=['Category', 'Full','None','Nov_Un', 'Nov_Cued','Side'])
# assign data to dictionary key
plot_data[key] = group_melt
# plotting color key
palette = sb.color_palette("RdBu", 20)
# Cued category --> warm colors
# Uncued category --> cool colors
# -
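# `apply_window` is defined in `analysis_helpers`; a hypothetical sketch, assuming it smooths familiarity ratings with a rolling mean over memory trials within each subject and attention level:

```python
import pandas as pd

def apply_window(df, window_length):
    # Hypothetical re-implementation: rolling mean of familiarity over
    # memory trials, per subject and attention level.
    out = df[df['Trial Type'] == 'Memory'].copy()
    out['Familiarity Rating'] = (
        out.groupby(['Subject', 'Attention Level'])['Familiarity Rating']
           .transform(lambda s: s.rolling(window_length, min_periods=1).mean())
    )
    return out

demo = pd.DataFrame({'Subject': [1] * 4,
                     'Attention Level': ['Full'] * 4,
                     'Trial Type': ['Memory'] * 4,
                     'Trial': [0, 1, 2, 3],
                     'Familiarity Rating': [1.0, 2.0, 3.0, 4.0]})
smoothed = apply_window(demo, window_length=2)
```
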
# ## Sliding Window - Familiarity Over Time
# ### Sliding Window - Novel Images
# +
import scipy
sb.set_style("white")
for key,label in zip(plot_data.keys(),['Experiment_1','Experiment_2']):
print(key + ': Sliding Window - Novel Images Only')
data = plot_data[key]
# plot data
ax = sb.lineplot(x='Trial',y='value', hue = 'Attention Level',
data = data[data['Attention Level'].isin(['Nov_Un','Nov_Cued'])], # ci=None,
palette = {"Full": palette[0], "Category": palette[3], "Nov_Cued":palette[5],
"Side": palette[19], "None": palette[16], "Nov_Un":palette[13]})
ax.set(ylim=(1.3, 2.3))
ax.set(xlim=(0, 39))
plt.grid(False)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
    plt.xlabel('Memory Trial', fontsize = 20)
    plt.ylabel('Familiarity', fontsize = 20)
# ttest at each timepoint ######################
ttest_data = timepoint_ttest(data, ['Nov_Cued','Nov_Un'])
# add lines where pvalue is significant
index = ttest_data[(ttest_data['Attention Level']=='Nov_Un') & (ttest_data['timepoint_t_truth']==True)]['Trial'].tolist()
index = set(index)
for x in ranges(index):
if x[0] == x[1]:
x_new_0 = x[0]-.1
x_new_1 = x[1]+.1
plt.axhline( y=1.41, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[13])
plt.axhline( y=1.4, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[5])
else:
plt.axhline( y=1.41, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[13])
plt.axhline( y=1.4, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[5])
# plt.axvline(x, .1, .3, color='red')
plt.xticks([0, 9, 19, 29, 39])
plt.show()
# -
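# The helpers `timepoint_ttest` and `ranges` used above are defined earlier in the notebook. For readability: `ranges` plausibly collapses the set of significant trial indices into runs of consecutive integers, which is what the plotting loop iterates over. A minimal sketch (an assumption, not the notebook's actual code):

```python
from itertools import groupby

def ranges(trials):
    # Collapse trial indices into (start, end) runs of consecutive
    # integers, e.g. {3, 4, 5, 9} -> [(3, 5), (9, 9)].
    trials = sorted(trials)
    runs = []
    for _, grp in groupby(enumerate(trials), key=lambda p: p[1] - p[0]):
        members = [t for _, t in grp]
        runs.append((members[0], members[-1]))
    return runs
```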
# ### Novel Image Difference Scores
# +
from scipy import stats
for exp in plot_data.keys():
trial_avs = plot_data[exp].groupby(['Trial','Attention Level','Subject'], as_index=False).mean()
trial_avs['Nov_Diffs'] = np.nan
for s in trial_avs['Subject'].unique():
for t in trial_avs['Trial'].unique():
first = trial_avs[(trial_avs['Attention Level']=='Nov_Cued')
& (trial_avs['Trial']==t)
& (trial_avs['Subject']==s)]['value'].item()
second = trial_avs[(trial_avs['Attention Level']=='Nov_Un' )
& (trial_avs['Trial']==t)
& (trial_avs['Subject']==s)]['value'].item()
difference = first - second
trial_avs.loc[(trial_avs['Trial']==t) & (trial_avs['Subject']==s),'Nov_Diffs'] = difference
ax = sb.lineplot(x='Trial', y='Nov_Diffs', data=trial_avs)
ax.set(ylim=(-.1, .4))
ax.set(xlim=(0, 39))
sb.regplot(x="Trial", y="Nov_Diffs", data=trial_avs, scatter=False)
trial_av_grp = trial_avs.groupby(['Trial'], as_index=False).mean()
slope, intercept, r_value, p_value, std_err = stats.linregress(trial_avs['Trial'], trial_avs['Nov_Diffs'])
print('slope = ' + str(slope))
print('intercept = ' + str(intercept))
print('p_value = ' + str(p_value))
print()
plt.grid(False)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
plt.xlabel('Memory Trial', fontsize = 20)
plt.ylabel('Familiarity Difference', fontsize = 20)
print(exp)
plt.show()
# -
# ### Uncued Category images
#
# +
sb.set_style("white")
for key,label in zip(plot_data.keys(),['Experiment 1','Experiment 2']):
print(label + ': Sliding Window - Uncued Category Images')
data = plot_data[key]
# plot data
ax = sb.lineplot(x='Trial',y='value', hue = 'Attention Level',
data = data[data['Attention Level'].isin(['Side','None','Nov_Un'])], # ci=None,
palette = {"Full": palette[0], "Category": palette[3], "Nov_Cued":palette[5],
"Side": palette[19], "None": palette[16], "Nov_Un":palette[13], 'Novel':'black'})
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set(ylim=(1.2, 2.8))
ax.set(xlim=(0, 39))
# stats test
data = data[data['Attention Level'].isin(['Side','None','Nov_Un'])]
#ttest at each timepoint #################
ttest_data = timepoint_ttest(data, ['Side','Nov_Un'])#, related=False)
# lines w/ sig pval #######################
index = ttest_data[(ttest_data['Attention Level']=='Nov_Un') & (ttest_data['timepoint_t_truth']==True)]['Trial'].tolist()
index = set(index)
for x in ranges(index):
if x[0] == x[1]:
x_new_0 = x[0]-.1
x_new_1 = x[1]+.1
plt.axhline( y=1.32, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[19])
plt.axhline( y=1.3, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[13])
else:
plt.axhline( y=1.32, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[19])
plt.axhline( y=1.3, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[13])
# ttest at each timepoint #################
ttest_data = timepoint_ttest(data, ['Side','None'])
# lines w/ sig pval #######################
index = ttest_data[(ttest_data['Attention Level']=='Side') & (ttest_data['timepoint_t_truth']==True)]['Trial'].tolist()
index = set(index)
for x in ranges(index):
if x[0] == x[1]:
x_new_0 = x[0]-.1
x_new_1 = x[1]+.1
plt.axhline( y=1.42, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[19])
plt.axhline( y=1.4, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[16])
else:
plt.axhline( y=1.42, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[19])
plt.axhline( y=1.4, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[16])
# ttest at each timepoint #################
ttest_data = timepoint_ttest(data, ['Nov_Un','None'])#, related=False)
# lines w/ sig pval #######################
index = ttest_data[(ttest_data['Attention Level']=='Nov_Un') & (ttest_data['timepoint_t_truth']==True)]['Trial'].tolist()
index = set(index)
for x in ranges(index):
if x[0] == x[1]:
x_new_0 = x[0]-.1
x_new_1 = x[1]+.1
plt.axhline( y=1.52, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[16])
plt.axhline( y=1.5, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[13])
else:
plt.axhline( y=1.52, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[16])
plt.axhline( y=1.5, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[13])
plt.xticks([0, 9, 19, 29, 39])
plt.show()
# -
# ### Sliding Window - Images in Cued Category
for key,label in zip(plot_data.keys(),['Experiment_1','Experiment_2']):
print(label + ': Sliding Window - Same Category Images - Faces')
data = plot_data[key]
# plot ####################################
ax = sb.lineplot(x='Trial',y='value', hue = 'Attention Level',
data = data[data['Attention Level'].isin(['Full', 'Nov_Cued', 'Category'])], # 'Category', # ci=None,
palette = {"Full": palette[0], "Category": palette[3], "Nov_Cued":palette[5],
"Side": palette[19], "None": palette[16], "Nov_Un":palette[13], "Novel":"black"})
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set(ylim=(1.25, 2.75))
ax.set(xlim=(0, 39))
#ttest at each timepoint #################
ttest_data = timepoint_ttest(data, ['Category','Nov_Cued'])#, related=False)
# lines w/ sig pval #######################
index = ttest_data[(ttest_data['Attention Level']=='Nov_Cued') & (ttest_data['timepoint_t_truth']==True)]['Trial'].tolist()
index = set(index)
for x in ranges(index):
if x[0] == x[1]:
x_new_0 = x[0]-.1
x_new_1 = x[1]+.1
plt.axhline( y=1.32, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[3])
plt.axhline( y=1.3, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[5])
else:
plt.axhline( y=1.32, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[3])
plt.axhline( y=1.3, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[5])
# ttest at each timepoint #################
ttest_data = timepoint_ttest(data, ['Category','Full'])
# lines w/ sig pval #######################
index = ttest_data[(ttest_data['Attention Level']=='Category') & (ttest_data['timepoint_t_truth']==True)]['Trial'].tolist()
index = set(index)
for x in ranges(index):
if x[0] == x[1]:
x_new_0 = x[0]-.1
x_new_1 = x[1]+.1
plt.axhline( y=1.52, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[0])
plt.axhline( y=1.5, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[3])
else:
plt.axhline( y=1.52, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[0])
plt.axhline( y=1.5, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[3])
# ttest at each timepoint #################
ttest_data = timepoint_ttest(data, ['Nov_Cued','Full'])
# lines w/ sig pval #######################
index = ttest_data[(ttest_data['Attention Level']=='Full') & (ttest_data['timepoint_t_truth']==True)]['Trial'].tolist()
index = set(index)
for x in ranges(index):
if x[0] == x[1]:
x_new_0 = x[0]-.1
x_new_1 = x[1]+.1
plt.axhline( y=1.42, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[0])
plt.axhline( y=1.4, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[5])
else:
plt.axhline( y=1.42, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[0])
plt.axhline( y=1.4, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[5])
# plot settings & save ####################
plt.grid(False)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
plt.xlabel('Attention Level', fontsize = 20)
plt.ylabel('Familiarity Rating', fontsize = 20)
plt.xticks([0, 9, 19, 29, 39])
plt.show()
# ### Images in Cued Location
for key,label in zip(plot_data.keys(),['Experiment_1','Experiment_2']):
print(label + ': Sliding Window - Images in Cued Location')
data = plot_data[key]
# plot ####################################
ax = sb.lineplot(x='Trial',y='value', hue = 'Attention Level',
data = data[data['Attention Level'].isin(['Full', 'Side'])], # 'Category', # ci=None,
palette = {"Full": palette[0], "Category": palette[3], "Nov_Cued":palette[5],
"Side": palette[19], "None": palette[16], "Nov_Un":palette[13], "Novel":"black"})
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set(ylim=(1.25, 2.75))
ax.set(xlim=(0, 39))
#ttest at each timepoint #################
ttest_data = timepoint_ttest(data, ['Full','Side'])#, related=False)
# lines w/ sig pval #######################
index = ttest_data[(ttest_data['Attention Level']=='Side') & (ttest_data['timepoint_t_truth']==True)]['Trial'].tolist()
index = set(index)
for x in ranges(index):
if x[0] == x[1]:
x_new_0 = x[0]-.1
x_new_1 = x[1]+.1
plt.axhline( y=1.32, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[0])
plt.axhline( y=1.3, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[19])
else:
plt.axhline( y=1.32, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[0])
plt.axhline( y=1.3, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[19])
# plot settings & save ####################
plt.grid(False)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
plt.xlabel('Attention Level', fontsize = 20)
plt.ylabel('Familiarity Rating', fontsize = 20)
plt.xticks([0, 9, 19, 29, 39])
plt.show()
# ### Images in Uncued Location
for key,label in zip(plot_data.keys(),['Experiment_1','Experiment_2']):
print(label + ': Sliding Window - Images in Uncued Location')
data = plot_data[key]
# plot ####################################
ax = sb.lineplot(x='Trial',y='value', hue = 'Attention Level',
data = data[data['Attention Level'].isin(['Category', 'None'])], # 'Category', # ci=None,
palette = {"Full": palette[0], "Category": palette[3], "Nov_Cued":palette[5],
"Side": palette[19], "None": palette[16], "Nov_Un":palette[13], "Novel":"black"})
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set(ylim=(1.25, 2.75))
ax.set(xlim=(0, 39))
#ttest at each timepoint #################
ttest_data = timepoint_ttest(data, ['Category','None'])#, related=False)
# lines w/ sig pval #######################
index = ttest_data[(ttest_data['Attention Level']=='Category') & (ttest_data['timepoint_t_truth']==True)]['Trial'].tolist()
index = set(index)
for x in ranges(index):
if x[0] == x[1]:
x_new_0 = x[0]-.1
x_new_1 = x[1]+.1
plt.axhline( y=1.32, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[3])
plt.axhline( y=1.3, xmin=x_new_0*(1/39), xmax=x_new_1*(1/39), color=palette[16])
else:
plt.axhline( y=1.32, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[3])
plt.axhline( y=1.3, xmin=x[0]*(1/39), xmax=x[1]*(1/39), color=palette[16])
# plot settings & save ####################
plt.grid(False)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
plt.xlabel('Attention Level', fontsize = 20)
plt.ylabel('Familiarity Rating', fontsize = 20)
plt.xticks([0, 9, 19, 29, 39])
plt.show()
# Source notebook: data_analysis_code/analyze_behavioral_data.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lists and arrays
# + [markdown] tags=[]
# *Elements of Data Science*
#
# Copyright 2021 [<NAME>](https://allendowney.com)
#
# License: [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/)
# + [markdown] tags=[]
# [Click here to run this notebook on Colab](https://colab.research.google.com/github/AllenDowney/ElementsOfDataScience/blob/master/03_arrays.ipynb) or
# [click here to download it](https://github.com/AllenDowney/ElementsOfDataScience/raw/master/03_arrays.ipynb).
# -
# In the previous chapter we used tuples to represent latitude and longitude. In this chapter, you'll see how to use tuples more generally to represent a sequence of values. And we'll see two more ways to represent sequences: lists and arrays.
#
# You might wonder why we need three ways to represent the same thing. Most of the time you don't, but each of them has different capabilities. For work with data, we will use arrays most of the time.
#
# As an example, we will use a small dataset from an article in *The Economist* about the price of sandwiches. It's a silly example, but I'll use it to introduce the idea of relative differences and different ways to summarize them.
# ## Tuples
#
# A tuple is a sequence of elements. When we use a tuple to represent latitude and longitude, the sequence only contains two elements, and they are both floating-point numbers.
# But in general a tuple can contain any number of elements, and the elements can be values of any type.
# The following is a tuple of three integers:
1, 2, 3
# Notice that when Python displays a tuple, it puts the elements in parentheses.
# When you type a tuple, you can put it in parentheses if you think it is easier to read that way, but you don't have to.
# + tags=[]
(1, 2, 3)
# -
# The elements can be any type. Here's a tuple of strings:
'Data', 'Science'
# The elements don't have to be the same type. Here's a tuple with a string, an integer, and a floating-point number.
'one', 2, 3.14159
# If you have a string, you can convert it to a tuple using the `tuple` function:
tuple('DataScience')
# The result is a tuple of single-character strings.
#
# When you create a tuple, the parentheses are optional, but the commas are required. So how do you think you create a tuple with a single element? You might be tempted to write:
x = (5)
x
# + tags=[]
type(x)
# -
# But you will find that the result is just a number, not a tuple.
# To make a tuple with a single element, you need a comma:
t = 5,
t
# That might look funny, but it does the job.
# + tags=[]
type(t)
# -
# ## Lists
#
# Python provides another way to store a sequence of elements: a list.
#
# To create a list, you put a sequence of elements in square brackets.
[1, 2, 3]
# Lists and tuples are very similar.
# They can contain any number of elements, the elements can be any type, and the elements don't have to be the same type.
# The difference is that you can modify a list; tuples are immutable (cannot be modified). This difference will matter later, but for now we can ignore it.
#
# When you make a list, the brackets are required, but if there is a single element, you don't need a comma. So you can make a list like this:
single = [5]
# + tags=[]
type(single)
# -
# It is also possible to make a list with no elements, like this:
empty = []
# + tags=[]
type(empty)
# -
# The `len` function returns the length (number of elements) in a list or tuple.
len([1, 2, 3]), len(single), len(empty)
# **Exercise:** Create a list with 4 elements; then use `type` to confirm that it's a list, and `len` to confirm that it has 4 elements.
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# -
# There's a lot more we could do with lists, but that's enough to get started. In the next section, we'll use lists to store data about sandwich prices.
# ## Sandwiches
#
# In September 2019, *The Economist* published an article comparing sandwich prices in Boston and London: "[Why Americans pay more for lunch than Britons do](https://www.economist.com/finance-and-economics/2019/09/07/why-americans-pay-more-for-lunch-than-britons-do)".
# It includes this graph showing prices of several sandwiches in the two cities:
#
# 
# Here are the sandwich names from the graph, as a list of strings.
name_list = [
'Lobster roll',
'Chicken caesar',
'Bang bang chicken',
'Ham and cheese',
'Tuna and cucumber',
'Egg'
]
# I contacted *The Economist* to ask for the data they used to create that graph, and they were kind enough to share it with me.
# Here are the sandwich prices in Boston:
boston_price_list = [9.99, 7.99, 7.49, 7.00, 6.29, 4.99]
# Here are the prices in London, converted to dollars at \$1.25 / £1.
london_price_list = [7.5, 5, 4.4, 5, 3.75, 2.25]
# Lists provide some arithmetic operators, but they might not do what you want. For example, you can "add" two lists:
boston_price_list + london_price_list
# But it concatenates the two lists, which is not very useful in this example.
# To compute differences between prices, you might try subtracting lists, but you would get an error.
# We can solve this problem with a NumPy array.
# + [markdown] tags=[]
# Run this code in the following cell to see what the error message is.
#
# ```
# boston_price_list - london_price_list
# ```
# + tags=[]
# -
# ## NumPy Arrays
#
# We've already seen that the NumPy library provides math functions. It also provides a type of sequence called an array.
# You can create a new array with the `np.array` function, starting with a list or tuple.
# +
import numpy as np
boston_price_array = np.array(boston_price_list)
london_price_array = np.array(london_price_list)
# -
# The type of the result is `numpy.ndarray`.
type(boston_price_array)
# The "nd" stands for "n-dimensional", which indicates that NumPy arrays can have any number of dimensions.
# But for now we will work with one-dimensional sequences.
# If you display an array, Python displays the elements:
boston_price_array
# You can also display the "data type" of the array, which is the type of the elements:
boston_price_array.dtype
# `float64` means that the elements are floating-point numbers that take up 64 bits each. You don't need to know about the storage format of these numbers, but if you are curious, you can read about it at <https://en.wikipedia.org/wiki/Floating-point_arithmetic#Internal_representation>.
#
# The elements of a NumPy array can be any type, but they all have to be the same type.
# Most often the elements are numbers, but you can also make an array of strings.
name_array = np.array(name_list)
name_array
# In this example, the `dtype` is `<U17`. The `U` indicates that the elements are Unicode strings; Unicode is the standard Python uses to represent strings.
#
# Now, here's why NumPy arrays are useful: they can do arithmetic. For example, to compute the differences between Boston and London prices, we can write:
differences = boston_price_array - london_price_array
differences
# Subtraction is done "elementwise"; that is, NumPy lines up the two arrays and subtracts corresponding elements. The result is a new array.
# ## Mean and standard deviation
#
# NumPy provides functions that compute statistical summaries like the mean:
np.mean(differences)
# So we could describe the difference in prices like this: "Sandwiches in Boston are more expensive by \$2.64, on average".
# We could also compute the means first, and then compute their difference:
np.mean(boston_price_array) - np.mean(london_price_array)
# And that turns out to be the same thing: the difference in means is the same as the mean of the differences.
# As an aside, many of the NumPy functions also work with lists, so we could also do this:
np.mean(boston_price_list) - np.mean(london_price_list)
# **Exercise:** Standard deviation is a way to quantify the variability in a set of numbers. The NumPy function that computes standard deviation is `np.std`.
#
# Compute the standard deviation of sandwich prices in Boston and London. By this measure, which set of prices is more variable?
# +
# Solution goes here
# -
# **Exercise:** The definition of the mean, in math notation, is
#
# $\mu = \frac{1}{N} \sum_i x_i$
#
# where $x$ is a sequence of elements, $x_i$ is the element with index $i$, and $N$ is the number of elements.
# The definition of standard deviation is
#
# $\sigma = \sqrt{\frac{1}{N} \sum_i (x_i - \mu)^2}$
#
# Compute the standard deviation of `boston_price_list` using NumPy functions `np.mean` and `np.sqrt` and see if you get the same result as `np.std`.
#
# Note: You can (and should) do this exercise using only features we have discussed so far.
# +
# Solution goes here
# -
# Note: This definition of standard deviation is sometimes called the "population standard deviation". You might have seen another definition with $N-1$ in the denominator; that's the "sample standard deviation". We'll use the population standard deviation for now and come back to this issue later.
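# For illustration, NumPy can compute either version: the `ddof` ("delta degrees of freedom") argument controls the denominator, with `ddof=0` (the default) giving the population standard deviation and `ddof=1` the sample standard deviation.

```python
import numpy as np

prices = np.array([9.99, 7.99, 7.49, 7.00, 6.29, 4.99])

pop_std = np.std(prices)             # divides by N (default, ddof=0)
sample_std = np.std(prices, ddof=1)  # divides by N-1

# The sample version is always a bit larger for the same data.
```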
# ## Relative Difference
#
# In the previous section we computed differences between prices.
# But often when we make this kind of comparison, we are interested in **relative differences**, which are differences expressed as a fraction or percentage of a quantity.
#
# Taking the lobster roll as an example, the difference in price is:
9.99 - 7.5
# We can express that difference as a fraction of the London price, like this:
(9.99 - 7.5) / 7.5
# Or as a *percentage* of the London price, like this:
(9.99 - 7.5) / 7.5 * 100
# So we might say that the lobster roll is 33% more expensive in Boston.
# But putting London in the denominator was an arbitrary choice. We could also compute the difference as a percentage of the Boston price:
(9.99 - 7.5) / 9.99 * 100
# If we do that calculation, we might say the lobster roll is 25% cheaper in London.
# When you read this kind of comparison, you should make sure you understand which quantity is in the denominator, and you might want to think about why that choice was made.
# In this example, if you want to make the difference seem bigger, you might put London prices in the denominator.
#
# If we do the same calculation with the arrays `boston_price_array` and `london_price_array`, we can compute the relative differences for all sandwiches:
differences = boston_price_array - london_price_array
relative_differences = differences / london_price_array
relative_differences
# And the percent differences.
percent_differences = relative_differences * 100
percent_differences
# ## Summarizing Relative Differences
#
# Now let's think about how to summarize an array of percentage differences.
# One option is to report the range, which we can compute with `np.min` and `np.max`.
np.min(percent_differences), np.max(percent_differences)
# The lobster roll is only 33% more expensive in Boston; the egg sandwich is 121% more (that is, more than twice the price).
# **Exercise:** What are the percent differences if we put the Boston prices in the denominator? What is the range of those differences? Write a sentence that summarizes the results.
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# -
# Another way to summarize percentage differences is to report the mean.
np.mean(percent_differences)
# So we might say, on average, sandwiches are 65% more expensive in Boston.
# But another way to summarize the data is to compute the mean price in each city, and then compute the percentage difference of the means:
# +
boston_mean = np.mean(boston_price_array)
london_mean = np.mean(london_price_array)
(boston_mean - london_mean) / london_mean * 100
# -
# So we might say that the average sandwich price is 56% higher in Boston.
# As this example demonstrates:
#
# * With relative and percentage differences, the mean of the differences is not the same as the difference of the means.
#
# * When you report data like this, you should think about different ways to summarize the data.
#
# * When you read a summary of data like this, make sure you understand what summary was chosen and what it means.
#
# In this example, I think the second option (the relative difference in the means) is more meaningful, because it reflects the difference in price between "baskets of goods" that include one of each sandwich.
# ## Debugging
#
# So far, most of the exercises have only required a few lines of code. If you made errors along the way, you probably found them quickly.
#
# As we go along, the exercises will be more substantial, and you may find yourself spending more time debugging. Here are a couple of suggestions to help you find errors quickly -- and avoid them in the first place.
#
# * Most importantly, you should develop code incrementally; that is, you should write a small amount of code and test it. If it works, add more code; otherwise, debug what you have.
#
# * Conversely, if you have written too much code, and you are having a hard time debugging it, split it into smaller chunks and debug them separately.
#
# For example, suppose you want to compute, for each sandwich in the sandwich list, the midpoint of the Boston and London prices. As a first draft, you might write something like this:
# +
boston_price_list = [9.99, 7.99, 7.49, 7, 6.29, 4.99]
london_price_list = [7.5, 5, 4.4, 5, 3.75, 2.25]
midpoint_price = np.mean(boston_price_list + london_price_list)
midpoint_price
# -
# This code runs, and it produces an answer, but the answer is a single number rather than the list we were expecting.
#
# You might have already spotted the error, but let's suppose you did not.
# To debug this code, I would start by splitting the computation into smaller steps and displaying the intermediate results. For example, we might add the two lists and display the result, like this.
total_price = boston_price_list + london_price_list
total_price
# Looking at the result, we see that it did not add the sandwich prices elementwise, as we intended. Because the arguments are lists, the `+` operator concatenates them rather than adding the elements.
#
# We can solve this problem by converting the lists to arrays.
# +
boston_price_array = np.array(boston_price_list)
london_price_array = np.array(london_price_list)
total_price_array = boston_price_array + london_price_array
total_price_array
# -
# And then computing the midpoint of each pair of prices, like this:
midpoint_price_array = total_price_array / 2
midpoint_price_array
# As you gain experience, you will be able to write bigger chunks of code before testing. But while you are getting started, keep it simple!
# As a general rule, each line of code should perform a small number of operations, and each cell should contain a small number of statements. When you are getting started, this number should be one.
# Source notebook: 03_arrays.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Why is _property()_ important?
# Because it addresses the problem of private variables in Python.
# Example:
# We'll create a `Person` class that stores a person's name and provides methods to show and change it.
# +
class Person_v1:
def __init__(self, name = None):
self.set_name(name)
def get_name(self):
print('-- getting name --')
print('Name = {}'.format(self.name))
def set_name(self, value):
print('-- setting name --')
self.name = value
person_1 = Person_v1('johan')
person_2 = Person_v1('Sebas')
# Test methods
person_1.get_name()
person_2.get_name()
# -
# As you can see, this is an easy implementation of the idea. The problem is that anyone can change the value of the variable directly, bypassing the setter.
# +
person_1.name = 'pedro'
person_1.get_name()
# -
# Python provides the built-in function *property()*, which returns an object with three methods: "getter", "setter", and "deleter". Applying it to our example:
# +
class Person_v2:
def __init__(self, name = '', age = 0, a_live = True):
self.name = name
def get_name(self):
print('-- getting name --')
print('Name = {}'.format(self._name))
def set_name(self, value):
print('-- setting name --')
self._name = value
name = property(fget=get_name, fset=set_name)
person_1 = Person_v2('johan')
person_2 = Person_v2('Sebas')
# Test methods
person_1.get_name()
person_2.get_name()
# -
person_1.name=12
person_1.name
# As you can see, assigning to the `name` attribute automatically calls the method we passed as "fset" to `property()`, and reading the attribute automatically calls "fget".
# Another way to define "fget" is with the _@property_ decorator. For example:
# +
class Person_v3:
def __init__(self, name = '', age = 0, a_live = True):
self.name = name
@property
def name(self):
print('-- getting name --')
print('Name = {}'.format(self._name))
@name.setter
def name(self, value):
print('-- setting name --')
self._name = value
person_1 = Person_v3('johan')
person_2 = Person_v3('Sebas')
# Test methods
person_1.name
person_2.name
# -
person_1.name=12
person_1.name
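# The property object also supports a third operation, deletion, via "fdel" or the _@name.deleter_ decorator. A small illustrative class (new here, not one of the examples above):

```python
class Person_v4:
    def __init__(self, name=''):
        self.name = name

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        self._name = value

    @name.deleter
    def name(self):
        print('-- deleting name --')
        del self._name

person = Person_v4('johan')
del person.name  # runs the deleter; person.name now raises AttributeError
```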
# **Example 2:**
#
# +
class NewsPage:
def __init__(self, url):
self._html = None
self._visit(url)
def _select(self, query_string):
return self._html
def _select_label_a(self):
return self._html
def _visit(self, url):
self._html = url
class ArticlePage(NewsPage):
def __init__(self, url):
super().__init__(url)
@property
def body(self):
print('getting the body')
result = ['231313','312312']
return result[0] if len(result) else ''
@property
def title(self):
print('getting the title')
result = ['231313','312312']
return result[0] if len(result) else ''
# -
art = ArticlePage('www')
ll = list(filter(lambda pp: not pp.startswith('_'), dir(art)))
ex = [str(getattr(art,pp)) for pp in ll]
ex
# Source notebook: lectures/Study_property.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Results - English-language *Reddit*
# The following tests were performed on the English-language *corpus* of *Reddit* submissions related to depression.
#
# 32165 submission documents were used to train the models evaluated here. The set was collected using the Pushshift API and comprises posts from 2009 to 2021 made in the *"depression"* subreddit.
#
# Next, the dataset was preprocessed and the inputs for each model were prepared. The textual preprocessing steps were, in order:
#
# * removal of \n and single quotes
# * document tokenization
# * document lemmatization
# * removal of part-of-speech categories other than nouns, verbs, and adjectives
# * stopword removal
# * removal of infrequent terms (min. freq. = 0.5%, i.e. terms appearing in fewer than 160 documents)
#
# For the word-embedding-based analyses below, the word2vec embeddings used to train the ETM models were reused. These embeddings come from the [Wikipedia2Vec](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/) project. The same word2vec embeddings were also used in other stages of this study to obtain term/topic vectors. Only the Wikipedia2Vec word embeddings were considered; the entity embeddings the model also contains were ignored, as they are not needed for the scope of this study.
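# The preprocessing above was performed with NLP tooling not shown in this notebook. Purely as an illustration of the filtering steps (stopword removal and the minimum document-frequency cut), here is a toy sketch in plain Python with made-up thresholds and data:

```python
from collections import Counter

def filter_tokens(docs, stopwords, min_doc_freq):
    # Drop stopwords, then drop terms that appear in fewer
    # than `min_doc_freq` documents.
    docs = [[t for t in doc if t not in stopwords] for doc in docs]
    doc_freq = Counter(t for doc in docs for t in set(doc))
    return [[t for t in doc if doc_freq[t] >= min_doc_freq] for doc in docs]

toy_docs = [['i', 'feel', 'sad'], ['i', 'feel', 'tired'], ['i', 'am', 'sad']]
cleaned = filter_tokens(toy_docs, stopwords={'i', 'am'}, min_doc_freq=2)
```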
# ### Imports and environment setup
# +
# %load_ext autoreload
# %autoreload 2
from utils.plots import plot_wordcloud_by_word_probability, plot_coherence_by_k_graph, plot_tsne_graph_for_model, plot_lexical_categories_histogram
from utils.topics import get_word_probability_mappings
from utils.notebook import get_coherence_score_for_each_topic, get_corpus_statistics
from utils.lexical_categories_analysis import get_raw_empath_categories_for_topics, get_empath_categories_for_topics
import sys, time, json, os, joblib, numpy as np, pandas as pd, ast
import pyLDAvis
import pyLDAvis.sklearn
pyLDAvis.enable_notebook()
WORKDIR = os.getcwd()
EMBEDDINGS_PATH = f'{WORKDIR}/../../embeddings/'
MODELS_PATH = f'{WORKDIR}/models/'
CSVS_PATH = f'{WORKDIR}/csvs/'
RESOURCES_PATH = f'{WORKDIR}/resources/'
WORDS_PER_TOPIC = 20
# Files
TEST_DATASET = f'{WORKDIR}/resources/test_documents.json'
DICTIONARY = f'{WORKDIR}/resources/word_dictionary.gdict'
ORIGINAL_DATASET = f'{WORKDIR}/../../datasets/original/depression_2009_2021/reddit-posts-gatherer-en.submissions_[subset].json'
PREPROCESSED_DATASET = f'{WORKDIR}/../../datasets/processed/reddit-posts-gatherer-submissions/reddit-posts-gatherer-en.submissions[processed].json'
WORD_LEMMA_MAPPING = f'{WORKDIR}/../../datasets/processed/reddit-posts-gatherer-submissions/reddit-posts-gatherer-en.submissions[word_lemma_maps].json'
# -
# ### Dataset information
original_data = json.load(open(ORIGINAL_DATASET, 'r'))
total_docs_orig, avg_no_of_tokens_orig_data, no_of_unique_tokens_orig_data = get_corpus_statistics(original_data)
print(f'Size of the original dataset (without duplicates): {total_docs_orig}')
print(f'Average number of tokens per document in the original dataset: {avg_no_of_tokens_orig_data}')
print(f'Number of unique tokens in the original dataset: {no_of_unique_tokens_orig_data}')
preprocessed_data = json.load(open(PREPROCESSED_DATASET, 'r'))
print(f'Dataset size after preprocessing: {len(preprocessed_data)}')
# ### Training results for each model type
# In this section, we evaluate the CTM, ETM, and LDA models with the highest coherence scores with respect to the topics they generate. The NPMI coherence metric is computed after each model's training, using the test dataset. 80% of the documents were reserved for training and the remaining 20% for testing.
#
# This section also includes the manual labeling of the topics whose meaning is clearest.
#
# Each topic's word cloud takes into account the probability of each word in its respective topic. Only the 20 most important words of each topic are used in the visualizations below.
#
# When listing models by overall coherence, only the top 5 models are shown.
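# For reference, the pairwise score underlying NPMI coherence can be sketched directly from document-level co-occurrence counts. The toy documents below are illustrative; the study itself computes coherence through the imported utilities, so this only illustrates the formula:

```python
import math

def npmi(w1, w2, docs):
    """Document-level NPMI between two words (a simplified sketch)."""
    n = len(docs)
    p1 = sum(w1 in d for d in docs) / n
    p2 = sum(w2 in d for d in docs) / n
    p12 = sum(w1 in d and w2 in d for d in docs) / n
    if p12 == 0:
        return -1.0  # no co-occurrence -> minimum NPMI
    return math.log(p12 / (p1 * p2)) / -math.log(p12)

docs = [{"sad", "tired"}, {"sad", "tired"}, {"happy"}, {"sad"}]
print(round(npmi("sad", "tired", docs), 3))  # 0.415
```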
# #### CTM
# The best-ranked model was the CTM with 10 topics, so it is analyzed below.
ctm_results = pd.read_csv(CSVS_PATH + "ctm_combined_results.csv")
ctm_results_by_coherence = ctm_results.sort_values(["c_npmi_test"], ascending=(False))
ctm_results_by_coherence.head()
# #### ETM
# The best-performing ETM model was the one with 28 topics.
etm_results = pd.read_csv(CSVS_PATH + "etm_results.csv")
etm_results_by_coherence = etm_results.sort_values(["c_npmi_test"], ascending=(False))
etm_results_by_coherence.head()
# #### LDA
# The LDA model with K=15 had the best coherence score.
lda_results = pd.read_csv(CSVS_PATH + "lda_results.csv")
lda_results_by_coherence = lda_results.sort_values(["c_npmi_test"], ascending=(False))
lda_results_by_coherence.head()
# ### Overall training results
# By combining the CSV results evaluated above, we can determine the model with the highest coherence among all those trained (LDA, CTM, ETM).
#
# The ETM model with K=28 topics had the best coherence result.
df_geral = pd.concat([ctm_results, etm_results, lda_results], ignore_index=True)
df_geral.sort_values(["c_npmi_test"], ascending=(False)).head()
df_geral = pd.concat([ctm_results, etm_results, lda_results], ignore_index=True)
df_geral.sort_values(["c_npmi_train"], ascending=(False)).head()
# The following chart shows how coherence varies across the trained models as K increases. The ETM models maintain a constant level of coherence, while the LDA models decrease in quality as K grows. Moreover, for a sufficiently large number of topics (K=30 in this case), the LDA model is overtaken by the CTM; according to this analysis, that does not happen before that point.
graph_data = [{ 'x': csv['k'], 'y': csv['c_npmi_test'] } for csv in [ctm_results, etm_results, lda_results]]
plot_coherence_by_k_graph(graph_data, ['CTM', 'ETM', 'LDA'])
graph_data = [{ 'x': csv['k'], 'y': csv['train_time_in_seconds']/60 } for csv in [ctm_results, etm_results, lda_results]]  # training time in minutes
plot_coherence_by_k_graph(graph_data, ['CTM', 'ETM', 'LDA'])
# ### ETM with 28 topics
test_docs = json.load(open(TEST_DATASET, "r"))["split"]
dictionary = joblib.load(DICTIONARY)
# +
etm_best_model_path = etm_results_by_coherence['path'].tolist()[0]
etm1 = joblib.load(os.path.join(MODELS_PATH, etm_best_model_path))
etm_topics_probs = [topic[:WORDS_PER_TOPIC] for topic in etm1['topics_with_word_probs']]
etm_mapping = get_word_probability_mappings(etm_topics_probs)
coherence_by_topic = get_coherence_score_for_each_topic(etm1["topics"], test_docs, dictionary)
for idx, mapping in enumerate(etm_mapping):
    print(f'Topic {idx}: ')
plot_wordcloud_by_word_probability(etm_mapping[idx], f'etm_topic={idx+1}')
# -
# ##### Possible labels for the topics:
#
# Topic 0: mental health follow-up<br>
# 1: reddit<br>
# 2: emotional distress<br>
# 3: low self-esteem<br>
# 4: romantic relationships<br>
# 5: student life<br>
# 6: introspection<br>
# 7: hobbies / entertainment<br>
# 8: <br>
# 9: seeking help / struggling with mental health problems<br>
# 10: <br>
# 11: sociability<br>
# 12: temporal frequency<br>
# 13: <br>
# 14: professional life<br>
# 15: future life / long-term planning<br>
# 16: rumination<br>
# 17: people<br>
# 18: <br>
# 19: emotions / feelings<br>
# 20: <br>
# 21: <br>
# 22: family<br>
# 23: daily routine<br>
# 24: attitudes / actions<br>
# 25: friendships<br>
# 26: advice / counseling<br>
# 27: negative words<br>
topics_coherence_table = pd.DataFrame(coherence_by_topic, columns=["npmi"])
topics_coherence_table.sort_values(["npmi"], ascending=(False))
# ### Spatial distribution of the topics of the best-coherence models using t-SNE
# #### ETM
# The figure below shows the graphical visualization of the ETM model with the best coherence, where K=28 topics.
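# Since the project-specific plotting call below is commented out, here is a generic sketch of projecting topic-word distributions to 2-D with scikit-learn's t-SNE. Random rows stand in for the 28 ETM topic distributions (an assumption of this illustration):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
topic_word = rng.random((28, 50))                    # stand-in for 28 topic-word distributions
topic_word /= topic_word.sum(axis=1, keepdims=True)  # normalize rows to probabilities
xy = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(topic_word)
print(xy.shape)  # (28, 2) -- one 2-D point per topic, ready to scatter-plot
```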
# +
#plot_tsne_graph_for_model(etm1, "ETM_K=28")
# -
# ### Lexical categories of the model's topics using Empath
# #### ETM
counts = get_empath_categories_for_topics(etm1, normalize=True)
print(f'Categories found: {len(counts)}')
plot_lexical_categories_histogram(counts, topn=5)
get_raw_empath_categories_for_topics(etm1)
| evaluation/2021-03-18_Reddit_EN_NOUN_VERB_ADJ_tr0.8_mindf_0.005_maxdf_1/2021-03-18_Reddit_EN_NOUN_VERB_ADJ_tr0.8_mindf_0.005_maxdf_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
# -
perf = pd.read_pickle('./results.pickle') # read in perf DataFrame
perf.head()
ax1 = plt.subplot(211)
perf.portfolio_value.plot(ax=ax1)
ax1.set_ylabel('portfolio value')
ax2 = plt.subplot(212, sharex=ax1)
perf.ETH.plot(ax=ax2)
ax2.set_ylabel('ETH price')  # ETH is a cryptocurrency, not a stock
| notebooks/analyse_performance.ipynb |
# ---
# jupyter:
# jupytext:
# formats: ipynb
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Load Test deployed web application
# This notebook pulls some images and tests them against the deployed web application. We submit requests asynchronously, which should reduce the contribution of latency.
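# The benefit of overlapping requests can be sketched with a thread pool and a stubbed scoring call. The stub is an assumption standing in for the web service; the real test below drives the deployed endpoint with Locust:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def score(i):
    """Stub for one scoring request; the sleep simulates network latency."""
    time.sleep(0.05)
    return i

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:  # 4 concurrent "requests"
    results = list(pool.map(score, range(8)))
elapsed = time.perf_counter() - start

# 8 x 50 ms requests overlapped 4-at-a-time finish in roughly 0.1 s rather than 0.4 s.
print(len(results), round(elapsed, 2))
```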
# +
import os
from timeit import default_timer
import pandas as pd
import matplotlib.pyplot as plt
from azureml.core.webservice import AksWebservice
from azureml.core.workspace import Workspace
from dotenv import get_key, find_dotenv
from testing_utilities import to_img, gen_variations_of_one_image, get_auth
from urllib.parse import urlparse
# %matplotlib inline
# -
env_path = find_dotenv(raise_error_if_not_found=True)
ws = Workspace.from_config(auth=get_auth())
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep="\n")
# Let's retrieve the web service.
aks_service_name = get_key(env_path, 'aks_service_name')
aks_service = AksWebservice(ws, name=aks_service_name)
# We will test our service concurrently but only have 4 concurrent requests at any time. We have only deployed one pod on one node and increasing the number of concurrent calls does not really increase throughput. Feel free to try different values and see how the service responds.
CONCURRENT_REQUESTS = 4 # Number of requests at a time
# Get the scoring URL and API key of the service.
scoring_url = aks_service.scoring_uri
api_key = aks_service.get_keys()[0]
IMAGEURL = "https://bostondata.blob.core.windows.net/aksdeploymenttutorialaml/220px-Lynx_lynx_poing.jpg"
plt.imshow(to_img(IMAGEURL))
# Below we are going to use [Locust](https://locust.io/) to load test our deployed model. First we need to write the locustfile. We will use variations of the same image to test the service.
# +
# %%writefile locustfile.py
from locust import HttpLocust, TaskSet, task
from testing_utilities import gen_variations_of_one_image
import os
from itertools import cycle
_IMAGEURL = os.getenv('IMAGEURL', "https://bostondata.blob.core.windows.net/aksdeploymenttutorialaml/220px-Lynx_lynx_poing.jpg")
_NUMBER_OF_VARIATIONS = int(os.getenv('NUMBER_OF_VARIATIONS', 100))  # env vars are strings, so cast to int
_SCORE_PATH = os.getenv('SCORE_PATH', "/score")
_API_KEY = os.getenv('API_KEY')
class UserBehavior(TaskSet):
def on_start(self):
print('Running setup')
self._image_generator = cycle(gen_variations_of_one_image(_IMAGEURL, _NUMBER_OF_VARIATIONS))
self._headers = {'Authorization':('Bearer {}'.format(_API_KEY))}
@task
def score(self):
self.client.post(_SCORE_PATH, files={'image': next(self._image_generator)}, headers=self._headers)
class WebsiteUser(HttpLocust):
task_set = UserBehavior
# min and max time to wait before repeating task
min_wait = 10
max_wait = 200
# -
# Below we define the locust command we want to run. We are going to run at a hatch rate of 10 and the whole test will last 1 minute. Feel free to adjust the parameters below and see how the results differ. The results of the test will be saved to two csv files **modeltest_requests.csv** and **modeltest_distribution.csv**
parsed_url = urlparse(scoring_url)
cmd = "locust -H {host} --no-web -c {users} -r {rate} -t {duration} --csv=modeltest --only-summary".format(
host="{url.scheme}://{url.netloc}".format(url=parsed_url),
users=CONCURRENT_REQUESTS, # concurrent users
rate=10, # hatch rate (users / second)
duration='1m', # test duration
)
# ! API_KEY={api_key} SCORE_PATH={parsed_url.path} PYTHONPATH={os.path.abspath('../')} {cmd}
# Here are the summary results of our test and, below that, the distribution information of those tests.
pd.read_csv("modeltest_requests.csv")
pd.read_csv("modeltest_distribution.csv")
# To tear down the cluster and all related resources go to the [tear down the cluster](07_TearDown.ipynb) notebook.
| {{cookiecutter.project_name}}/Keras_Tensorflow/aks/06_SpeedTestWebApp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
# 
# # Automated Machine Learning
# _**Classification of fraudulent credit card transactions on remote compute**_
#
# ## Contents
# 1. [Introduction](#Introduction)
# 1. [Setup](#Setup)
# 1. [Train](#Train)
# 1. [Results](#Results)
# 1. [Test](#Test)
# 1. [Acknowledgements](#Acknowledgements)
# ## Introduction
#
# In this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.
#
# This notebook is using remote compute to train the model.
#
# If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace.
#
# In this notebook you will learn how to:
# 1. Create an experiment using an existing workspace.
# 2. Configure AutoML using `AutoMLConfig`.
# 3. Train the model using remote compute.
# 4. Explore the results.
# 5. Test the fitted model.
# ## Setup
#
# As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
# +
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
# +
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-ccard-remote'
experiment=Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated; None shows full column contents
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
# -
# ## Create or Attach existing AmlCompute
# A compute target is required to execute the Automated ML run. In this tutorial, you create AmlCompute as your training compute resource.
# #### Creation of AmlCompute takes approximately 5 minutes.
# If the AmlCompute with that name is already in your workspace this code will skip the creation process.
# As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
# +
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your AmlCompute cluster.
amlcompute_cluster_name = "cpu-cluster-1"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_DS12_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
# Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# For a more detailed view of current AmlCompute status, use get_status().
# -
# # Data
# ### Load Data
#
# Load the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model.
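# For intuition, the 80/20 split can be reproduced locally on a plain pandas DataFrame. This is only a sketch of the idea; the cell below performs the actual split with the Azure `Dataset.random_split` API:

```python
import pandas as pd

df = pd.DataFrame({"x": range(10), "Class": [0, 1] * 5})
train = df.sample(frac=0.8, random_state=223)  # 80% of rows for training
valid = df.drop(train.index)                   # the remaining 20% for validation
print(len(train), len(valid))  # 8 2
```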
data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv"
dataset = Dataset.Tabular.from_delimited_files(data)
training_data, validation_data = dataset.random_split(percentage=0.8, seed=223)
label_column_name = 'Class'
# ## Train
#
# Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.
#
# |Property|Description|
# |-|-|
# |**task**|classification or regression|
# |**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|
# |**enable_early_stopping**|Stop the run if the metric score is not showing improvement.|
# |**n_cross_validations**|Number of cross validation splits.|
# |**training_data**|Input dataset, containing both features and label column.|
# |**label_column_name**|The name of the label column.|
#
# **_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
# +
automl_settings = {
"n_cross_validations": 3,
"primary_metric": 'average_precision_score_weighted',
"preprocess": True,
"enable_early_stopping": True,
"max_concurrent_iterations": 2, # This is a limit for testing purpose, please increase it as per cluster size
    "experiment_timeout_minutes": 10, # This is a time limit for testing purposes; remove it for real use cases, as it will drastically limit the ability to find the best model possible
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target = compute_target,
training_data = training_data,
label_column_name = label_column_name,
**automl_settings
)
# -
# Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.
remote_run = experiment.submit(automl_config, show_output = False)
# +
# If you need to retrieve a run that already started, use the following code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>')
# -
remote_run
# ## Results
# #### Widget for Monitoring Runs
#
# The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.
#
# **Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
remote_run.wait_for_completion(show_output=False)
# #### Explain model
#
# Automated ML models can be explained and visualized using the SDK Explainability library. [Learn how to use the explainer](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/model-explanation-remote-amlcompute/auto-ml-model-explanations-remote-compute.ipynb).
# ## Analyze results
#
# ### Retrieve the Best Model
#
# Below we select the best pipeline from our iterations. The `get_output` method on `remote_run` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
best_run, fitted_model = remote_run.get_output()
fitted_model
# #### Print the properties of the model
# The fitted_model is a python object and you can read the different properties of the object.
# See *Print the properties of the model* section in [this sample notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification/auto-ml-classification.ipynb).
# ### Deploy
#
# To deploy the model into a web service endpoint, see _Deploy_ section in [this sample notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-with-deployment/auto-ml-classification-with-deployment.ipynb)
# ## Test the fitted model
#
# Now that the model is trained, run the held-out validation data (created by the earlier split) through the fitted model to get the predicted values.
# convert the test data to dataframe
X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe()
y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe()
# call the predict functions on the model
y_pred = fitted_model.predict(X_test_df)
y_pred
# ### Calculate metrics for the prediction
#
# Now visualize a confusion matrix comparing the truth (actual) values to the predicted values
# from the trained model that was returned.
# +
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
cf = confusion_matrix(y_test_df.values, y_pred)
plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['False','True']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks,class_labels)
plt.yticks([-0.5,0,1,1.5],['','False','True',''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])):
plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black')
plt.show()
# -
# ## Acknowledgements
# This Credit Card fraud Detection dataset is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/ and is available at: https://www.kaggle.com/mlg-ulb/creditcardfraud
#
#
# The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available on https://www.researchgate.net/project/Fraud-detection-5 and the page of the DefeatFraud project
# Please cite the following works:
# • <NAME>, <NAME>, <NAME> and <NAME>. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015
# • <NAME>, Andrea; Caelen, Olivier; <NAME>, Yann-Ael; <NAME>; <NAME>. Learned lessons in credit card fraud detection from a practitioner perspective, Expert systems with applications,41,10,4915-4928,2014, Pergamon
# • <NAME>, Andrea; Boracchi, Giacomo; Caelen, Olivier; <NAME>; Bontempi, Gianluca. Credit card fraud detection: a realistic modeling and a novel learning strategy, IEEE transactions on neural networks and learning systems,29,8,3784-3797,2018,IEEE
# • Dal Pozzolo, Andrea. Adaptive Machine learning for credit card fraud detection, ULB MLG PhD thesis (supervised by <NAME>)
# • <NAME>; <NAME>; <NAME>, Yann-Aël; <NAME>; <NAME>; <NAME>. Scarff: a scalable framework for streaming credit card fraud detection with Spark, Information fusion,41, 182-194,2018,Elsevier
# • <NAME>; <NAME>, Yann-Aël; <NAME>; <NAME>. Streaming active learning strategies for real-life credit card fraud detection: assessment and visualization, International Journal of Data Science and Analytics, 5,4,285-300,2018,Springer International Publishing
| how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb |