# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#hide
from simons_mask_binarizer import *
# # simons_mask_binarizer
#
# A very small tool to binarize masks (`mask > 0`) created, for example, in MRIcron (which uses `255` as the maximum value) to `1` and `0`, and to cast the mask to `uint8`, i.e. a datatype code of `2` in the NIfTI header.
#
# **To be clear**: all non-zero voxels will be transformed to ones.
#
# This may later be merged with my `random_ni_tools`.
# Not well tested, but should work in most cases. Tests of the CLI were only performed on Windows 10, so some issues might pop up.
#
# Please post an issue if something is not working as expected.
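The core transformation can be sketched with NumPy alone; this is a simplified illustration of the `mask > 0` rule (the actual tool additionally reads and writes NIfTI images, e.g. via `nibabel`):

```python
import numpy as np

# A toy "mask" as it might come out of MRIcron: background 0, foreground 255.
mask = np.array([[0, 255, 255],
                 [0,   0, 128]], dtype=np.float64)

# Binarize: every non-zero voxel becomes 1, then cast to uint8
# (datatype code 2 in the NIfTI header).
binary = (mask > 0).astype(np.uint8)

print(binary)        # [[0 1 1]
                     #  [0 0 1]]
print(binary.dtype)  # uint8
```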
# ## Install
# As always, I would recommend creating a new Python environment for this tool!
#
# You can then install the package via pip from GitHub:
# ```bash
# pip install git+https://github.com/SRSteinkamp/simons_mask_binarizer.git
# ```
#
# The installation should pull in the necessary packages (namely `nilearn` and `nibabel`) and expose the command-line tool described below.
# ## How to use
# The package currently exposes a CLI to transform single files. In your terminal, run:
#
# `nifti_binarizer INPUTPATH -o OUTPUTPATH -p PREFIX`
#
# Where `INPUTPATH` (the path of the image to be converted) must be provided.
#
# * `-o` or `--outputpath` is optional and sets the destination of the converted image. By default, the converted image is written to the `INPUTPATH` folder.
#
# * `-p` or `--prefix` is optional; the prefix defaults to `bin_`.
#
# Examples:
#
# ```bash
# nifti_binarizer C:\some_folder\my_mask.nii.gz
# ```
# Creates a new file named `C:\some_folder\bin_my_mask.nii.gz`.
# ```bash
# nifti_binarizer C:\some_folder\my_mask.nii.gz -o C:\other_folder\
# ```
# Creates a new file at `C:\other_folder\bin_my_mask.nii.gz`.
#
# ```bash
# nifti_binarizer C:\some_folder\my_mask.nii.gz -o C:\other_folder\ -p 'p'
# ```
# Creates a new file at `C:\other_folder\pmy_mask.nii.gz`.
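The naming scheme the examples follow can be sketched with `pathlib`; this is a hypothetical illustration of the path logic, not the tool's actual implementation (forward slashes used here for portability):

```python
from pathlib import Path

def build_output_path(inputpath, outputpath=None, prefix="bin_"):
    """Mimic the CLI's naming: PREFIX + original file name,
    placed next to the input unless an output path is given."""
    src = Path(inputpath)
    dest_dir = Path(outputpath) if outputpath is not None else src.parent
    return dest_dir / (prefix + src.name)

print(build_output_path("some_folder/my_mask.nii.gz"))
print(build_output_path("some_folder/my_mask.nii.gz", "other_folder", "p"))
```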
#
#
#
#
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # LeetCode #905. Sort Array By Parity
#
# ## Question
#
# https://leetcode.com/problems/sort-array-by-parity/
#
# Given an array A of non-negative integers, return an array consisting of all the even elements of A, followed by all the odd elements of A.
#
# You may return any answer array that satisfies this condition.
#
# Example 1:
#
# Input: [3,1,2,4]
# Output: [2,4,3,1]
# The outputs [4,2,3,1], [2,4,1,3], and [4,2,1,3] would also be accepted.
#
# Note:
#
# 1 <= A.length <= 5000
# 0 <= A[i] <= 5000
# ## My Solution
def sortArrayByParity(A):
B, C = [], []
for a in A:
if a % 2 == 0:
B.append(a)
else:
C.append(a)
return B + C
# test code
A = [3,1,2,4]
sortArrayByParity(A)
# ## My Result
#
# __Runtime__ : 72 ms, faster than 27.27% of Python online submissions for Sort Array By Parity.
#
# __Memory Usage__ : 12.2 MB, less than 75.00% of Python online submissions for Sort Array By Parity.
# ## @KaiPeng21's Solution
def sortArrayByParity(A):
return sorted(A, key=lambda x: x & 1)
# #### Bitwise operators
#
# * `x | y` (or): each bit is 1 if either operand's bit is 1
# * `x & y` (and): each bit is 1 only if both operands' bits are 1
# * `x ^ y` (xor): each bit is 1 if the operands' bits differ
# * `x << y` (left shift): shift the bits of x left by y positions
# * `x >> y` (right shift): shift the bits of x right by y positions
# * `~x` (not): invert the bits
#
#
A = [3,1,2,4]
for a in A:
print("a:", a, ' a & 1: ', a & 1)
# test code
A = [3,1,2,4]
sortArrayByParity(A)
# ## @KaiPeng21's Result
#
# __Runtime__: 72 ms, faster than 27.27% of Python online submissions for Sort Array By Parity.
#
# __Memory Usage__ : 12.2 MB, less than 66.67% of Python online submissions for Sort Array By Parity.
# ## @limitless_'s Solution
def sortArrayByParity(A):
size = len(A)
res = [None] * size
start = 0
end = size - 1
for val in A:
if val % 2 == 1:
res[end] = val
end = end -1
else:
res[start] = val
start = start + 1
return res
# test code
A = [3,1,2,4]
sortArrayByParity(A)
# ## @limitless_'s Result
#
# __Runtime__ : 64 ms, faster than 76.18% of Python online submissions for Sort Array By Parity.
#
# __Memory Usage__ : 12.3 MB, less than 20.83% of Python online submissions for Sort Array By Parity.
| LeetCode/LeetCode_905SortArrayByParity.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# %run common-imports.ipynb
# # Dataset-4
#
# ## Prerequisites
#
# prior notebooks:
#
# * Notebook 1: `univariate-1.ipynb`
# * Notebook 2: `dataset-2.ipynb`
# * Notebook 3: `dataset-3.ipynb`
# * Notebook 4: `univariate-2.ipynb`
# * Notebook 5: `univariate-3.ipynb`
#
#
# ## Lab Goals
#
# * We will explore this data, observe its statistical characteristics, and visualize it.
# * Next, we will take a systematic approach to building linear and polynomial regression models to make predictions on the data.
#
# ## Outcome
#
# This lab should give you more fluency in dealing with polynomial regression, and an understanding of its limitations when dealing with data more appropriately modeled with transcendental functions.
data = pd.read_csv("../datasets/dataset-4.csv")
data.sample(5)
# #### Descriptive statistics
#
data.describe(include="all").transpose()
# #### Missing Values Analysis
#
data.isnull().sum()
# From the above results, it appears that there are no missing values at all. Therefore, we don't need to worry about addressing this issue.
# #### Pandas Profiling
#
#
# +
# data.profile_report()
# -
# ## Data Visualization
# ### Plotting using matplotlib
# Observe that the data distinctly exhibits a nonlinear relationship between $x$ and $y$. **What is the correlation between the variables?**
#
plt.scatter(data['x'], data['y'], alpha=0.5, s=150, color='salmon')
plt.title(r'\textbf{Scatter-plot of $y$ vs $x$}')
plt.xlabel(r'$x\longrightarrow$');
plt.ylabel(r'$y\longrightarrow$');
# # Regression
#
# From the figure above, it should be apparent that a simple linear regression model is unlikely to work. However, let us first build a simple linear regression model for this dataset, in order to get a baseline performance.
#
# As usual, we will first separate out the predictor from the target, and then split the data into a training and test set.
X, y = data[['x']], data['y']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42,train_size=0.5)
# ## Build a linear regression model
#
# Let's build a regression model, and fit it to the dataset. For this, we instantiate a `LinearRegression` object named `model` using the constructor. Then we fit the `model` to the available training dataset.
model = LinearRegression();
model.fit(X_train, y_train);
# Recall that a linear regression model is given by the equation:
#
# \begin{equation} y = \beta_0 + \beta_1 x + \epsilon\end{equation}
#
# where:
# $\epsilon$ is the irreducible error term, so that the model is essentially:
#
# \begin{equation} \hat{y} = \beta_0 + \beta_1 x \end{equation}
# What values of $\beta_0$ (the intercept) and $\beta_1$ (the slope) is this model predicting? We can inspect this as follows:
print (f'Intercept: {model.intercept_}, Slope: {model.coef_}')
# Look back at the data visualization, and see if this agrees with your own estimates.
# ## Predictions from the model
#
# Now, let's use the model to make predictions on the **test** data, something the model has not seen so far. By comparing the predictions to the actual values, we will get a sense of how well the model has learned to generalize from the data.
#
yhat = model.predict(X_test)
print("Mean Squared Error: %.2f"
% mean_squared_error(y_test, yhat))
r2 = r2_score(y_test, yhat)
print(rf"Coefficient of Determination (R^2):{r2}")
# The coefficient of determination $R^2$ indicates a dismal model! Let us nevertheless proceed to the next step of model verification: the residual analysis.
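For reference, the coefficient of determination reported by `r2_score` is $R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$; a minimal NumPy check of that formula on toy numbers (not this dataset):

```python
import numpy as np

# Toy values, chosen only to illustrate the formula.
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])

ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
r2_manual = 1 - ss_res / ss_tot
print(r2_manual)  # 0.98
```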
# ## Residual Analysis
#
# We will start by plotting the residuals from the predictions. Recall that the residual from the prediction $\hat{y}_i$ on a particular datum $(x_i, y_i)$ is defined as:
#
# \begin{equation}\mathbf{ r_i = \hat{y}_i - y_i }\end{equation}
#
# We will use the `yellowbrick` library to look at the model characteristics.
from yellowbrick.regressor import residuals_plot
viz = residuals_plot(model, X_train, y_train, X_test, y_test,
train_color='orange', test_color='IndianRed', train_alpha=0.3, test_alpha=0.5)
# The residuals display a striking pattern! The presence of a pattern in the residuals is a clear indication that the model has failed to capture some essential characteristics of the relationship between $x$ and $y$.
# ## Visualization of the model predictions
#
# As a final step, let us visualize the predictions of the model, and superimpose it on the actual data. This should give us a sense of how well the model is working.
# +
X = pd.DataFrame(data={'x': np.linspace(data.x.min(), data.x.max(), 1000)})
yhat = model.predict(X)
# -
fig, ax = plt.subplots(figsize=(20,10))
ax.scatter(data['x'], data['y'], alpha=0.5, s=150, color='salmon')
ax.plot(X.x, yhat, 'r--.', label="Model Predictions")
ax.legend(loc='best');
# This simple linear model has, as one would have expected by now, failed terribly!
# ## Polynomial Regression
#
# Let's add polynomial features to the dataset; performing regression on the expanded features can then capture the nonlinearity effectively.
#
# Here we merge the steps of:
# * creating polynomial features of the input
# * fitting a linear model to the data
# * making predictions on the test data
# * printing model diagnostics
degree = 2
polynomial = PolynomialFeatures(degree)
X_poly = polynomial.fit_transform(X_train)
model = LinearRegression()
# Now, train the model
model.fit(X_poly, y_train)
print ("The coefficients: {}".format(model.coef_))
X_poly_test = polynomial.transform(X_test)  # transform (not fit_transform): reuse the features fitted on the training set
yhat = model.predict(X_poly_test)
print("Mean squared error: %.2f"
% mean_squared_error(y_test, yhat))
r2 = r2_score(y_test, yhat)
print(rf"Coefficient of Determination (R^2):{r2}")
# The coefficient of determination seems quite encouraging. Let us now proceed to the residual analysis.
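As a side check of the approach itself (not of this notebook's dataset), a degree-2 fit on synthetic data generated from a known quadratic should recover the true coefficients; here sketched with `np.polyfit`, which fits the same model family:

```python
import numpy as np

# Synthetic, noise-free data from a known quadratic: y = 1 + 2x + 3x^2.
x = np.linspace(-3, 3, 50)
y = 1.0 + 2.0 * x + 3.0 * x**2

# np.polyfit returns coefficients from the highest degree down.
coeffs = np.polyfit(x, y, deg=2)
print(coeffs)  # approximately [3. 2. 1.]
```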
# ## Residual Analysis
#
# We will start by plotting the residuals from the predictions. Recall that the residual from the prediction $\hat{y}_i$ on a particular datum $(x_i, y_i)$ is defined as:
#
# \begin{equation}\mathbf{ r_i = \hat{y}_i - y_i }\end{equation}
#
# We will use the `yellowbrick` library for looking at the model characteristics.
from yellowbrick.regressor import residuals_plot
viz = residuals_plot(model, X_poly, y_train, X_poly_test, y_test,
train_color='orange', test_color='IndianRed', train_alpha=0.3, test_alpha=0.5)
# The residuals are homoscedastic, and no patterns are present. This too is an encouraging sign.
# ## Visualization of the model predictions
#
# As a final step, let us visualize the predictions of the model, and superimpose it on the actual data. This should give us a sense of how well the model is working.
# +
X = pd.DataFrame(data={'x': np.linspace(data.x.min(), data.x.max(), 1000)})
yhat = model.predict(polynomial.transform(X))
fig, ax = plt.subplots(figsize=(20,10))
ax.scatter(data['x'], data['y'], alpha=0.5, s=150, color='salmon')
ax.plot(X.x, yhat, 'r--.', label="Model Predictions")
ax.legend(loc='best');
# -
# A careful observation of the prediction curve shows that it has fit the data with high fidelity.
# # Conclusion
#
# For this dataset, polynomial regression has proved effective.
#
| notebook/regression-06-dataset-4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: s2s-asp
# language: python
# name: s2s-asp
# ---
# # Download model data using climetlab
#
# ----
#
# This notebook shows how to use the climetlab data store to download data for ECMWF, ECCC, and NCEP.
#
# More information about climetlab, and which models and variables are available, can be found here:
#
# https://github.com/ecmwf-lab/climetlab-s2s-ai-challenge
# +
import climetlab as cml
import xarray as xr
xr.set_options(keep_attrs=True)
from dask.distributed import Client
import dask.config
dask.config.set({"array.slicing.split_large_chunks": False})
# -
client = Client("tcp://10.12.206.54:34204")
# ## This command shows the climetlab settings.
#
# Depending on how much data you are downloading, you may want to change your cache-directory, maximum-cache-size, and number-of-download-threads settings. There are a few examples below this cell.
cml.settings
# +
# good to raise this limit so the cache does not delete files
#cml.settings.set("maximum-cache-size", "1800GB")
# +
# increase parallel downloads: default 5
#cml.settings.set("number-of-download-threads", 53)
# -
# ## What are you downloading?
#
# Here you choose your variable, model, and pressure level (if applicable). This only downloads one variable at a time, so you will need to rerun it if you want multiple variables.
var = 'v'
model = 'eccc'
plev = 850
ds = cml.load_dataset('s2s-ai-challenge-training-input', origin=model, parameter=var, format='netcdf').to_xarray()
# - Renaming variables to work in climpred and dropping coordinates we don't need.
ds = ds.rename({"realization": "member","forecast_time": "init","lead_time": "lead","latitude": "lat","longitude": "lon"}).drop("valid_time")
if var=='gh':
ds = ds.sel(plev=plev).drop("plev").rename({"gh": "gh_"+str(plev)})
elif var=='v':
ds = ds.sel(plev=plev).drop("plev").rename({"v": "v_"+str(plev)})
elif var=='u':
ds = ds.sel(plev=plev).drop("plev").rename({"u": "u_"+str(plev)})
elif var=="ttr":
ds = ds.rename({"ttr": "olr"}).drop("nominal_top")
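The per-variable branches above could also be expressed as a small lookup; a hypothetical, dependency-free sketch of the renaming rule (the notebook itself applies it to an xarray Dataset):

```python
# Hypothetical refactor of the if/elif chain, shown on plain strings.
PLEV_VARS = {"gh", "u", "v"}     # variables selected on a pressure level
SPECIAL = {"ttr": "olr"}         # top-of-atmosphere radiation becomes OLR

def renamed(var, plev=None):
    """Return the output variable name for a raw input variable."""
    if var in PLEV_VARS:
        return f"{var}_{plev}"
    return SPECIAL.get(var, var)

print(renamed("v", 850))   # v_850
print(renamed("ttr"))      # olr
print(renamed("t2m"))      # t2m (unchanged)
```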
ds
ds.lead
# Each model has different lead days available
if model=='ecmwf':
    ds = ds.sel(lead=slice("1 days","47 days")) # for ecmwf
elif model=='ncep':
    ds = ds.sel(lead=slice("2 days","44 days")) # for ncep
ds = ds.sortby('init')
ds = ds.chunk({'init':-1,'lead':-1,'lon':'auto','lat':'auto','member':'auto'})
# ## Write to zarr
# If you are creating a new zarr, use `mode='w'`; if you are adding a variable to an existing zarr, use `mode='a'`.
# %time ds.to_zarr('/glade/campaign/mmm/c3we/jaye/S2S_zarr/ECCC.uvolr.raw.daily.geospatial.zarr', mode="a", consolidated=True)
| 1_download_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # How the Climate in Tehran, Iran has Changed Over a Century
#
# Last week, Iran stood out with __[flooding in the news](https://www.aljazeera.com/news/2019/03/iran-flood-dozen-killed-flash-flooding-190326081204605.html)__. We can also read research saying that the Middle East climate is changing fast, that there will be more hot days, and that flooding will occur more often.
#
# All this news made us curious, and we decided to look into the history using the __[NASA MERRA2 dataset](https://data.planetos.com/datasets/nasa_merra2_global)__ at the __[Planet OS Datahub](https://data.planetos.com)__.
#
#
# So, we will do the following:
#
# 1) use the Planet OS package API to fetch data;
#
# 2) see the mean annual temperature in Tehran;
#
# 3) plot the number of days per year when the temperature exceeds 30 $^o$C in Tehran;
#
# 4) find out the annual precipitation by year;
#
# 5) find out how many dry days there have been;
#
# 6) find out what has been going on with precipitation this year;
#
# 7) see how many days have had more than 5 mm of precipitation per day over the years;
#
# 8) look at the average annual precipitation cycle.
#
#
# __Note that this notebook is using Python 3.__
# %matplotlib inline
import numpy as np
from dh_py_access import package_api
import dh_py_access.lib.datahub as datahub
import xarray as xr
import calendar
import matplotlib.pyplot as plt
import pandas as pd
import datetime
from mpl_toolkits.basemap import Basemap
from po_data_process import make_comparison_plot, make_plot, make_anomalies_plot,read_data_to_json,make_monthly_plot
import matplotlib.dates as mdates
from matplotlib.dates import MonthLocator, DayLocator, YearLocator
from matplotlib.ticker import MultipleLocator
import requests
import warnings
warnings.filterwarnings("ignore")
# <font color='red'>Please put your Datahub API key into a file called APIKEY and place it in the notebook folder, or assign your API key directly to the variable API_key!</font>
server = 'api.planetos.com'
API_key = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'
version = 'v1'
# First, we need to define the dataset name and the variables we want to use.
dh=datahub.datahub(server,version,API_key)
dataset='nasa_merra2_global'
variable_name_merra = 'TPRECMAX,T2MMEAN'
time_start = '1980-01-01T00:00:00'
time_end = datetime.datetime.strftime(datetime.datetime.now(),'%Y-%m-%dT%H:%M:%S')
area_name = 'Tehran'
latitude = 35.68; longitude = 51.32
# For starters, using Basemap we create a map of Africa and the Middle East and mark the location of Tehran.
plt.figure(figsize=(10,8))
m = Basemap(projection='merc',llcrnrlat=-35,urcrnrlat=47.,\
llcrnrlon=-21,urcrnrlon=60,lat_ts=0,resolution='l')
x,y = m(longitude,latitude)
m.drawcoastlines()
m.drawcountries()
m.drawstates()
m.bluemarble()
m.scatter(x,y,50,marker='o',color='#00FF00',zorder=4)
plt.show()
# ##### Download the data with package API
#
# 1. Create package objects
# 2. Send commands for the package creation
# 3. Download the package files
#
package = package_api.package_api(dh,dataset,variable_name_merra,longitude,longitude,latitude,latitude,time_start,time_end,area_name=area_name)
package.make_package()
package.download_package()
# ## Work with downloaded files
#
# We start by opening the files with xarray. We will do some conversion here as well, as it is easier to work with more common units: for temperature we convert Kelvin to Celsius, and for precipitation kg m-2 s-1 to mm/hour.
dd1 = xr.open_dataset(package.local_file_name)
temp = dd1['T2MMEAN'] - 273.15  # Kelvin to Celsius (0 degC = 273.15 K)
temp = temp.loc[temp['time.year']<2019]
prec = dd1['TPRECMAX'] = dd1['TPRECMAX'] * 3600 #convert to mm/hr
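The factor of 3600 works because a precipitation flux of 1 kg m⁻² s⁻¹ corresponds to a 1 mm deep layer of water per second (water density 1000 kg m⁻³), so multiplying by the 3600 seconds in an hour yields mm/hour. A quick numeric check with a made-up flux value:

```python
flux = 2.5e-4  # made-up precipitation flux in kg m^-2 s^-1

# 1 kg of water spread over 1 m^2 forms a 1 mm deep layer,
# so the flux is numerically mm/s; multiplying by 3600 gives mm/hour.
mm_per_hour = flux * 3600
print(mm_per_hour)
```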
# First, we evaluate the temperature over the past 38 years by finding the overall average temperature in Tehran. We also compute annual mean temperature data.
yearly_temp = temp.resample(time="1AS").mean('time')[:,0,0]
yearly_temp = yearly_temp.loc[yearly_temp['time.year'] < 2019]
temp_mean = yearly_temp.mean(axis=0)
print ('Overall mean temperature is ' + str("%.2f" % temp_mean.values))
# Now it is time to plot the mean annual temperature in Tehran. We also mark the overall 1980–2018 average of 17.4 °C with a red dotted line. The green line marks the trend. We can see a rising trend; however, the last years haven't been the hottest on record. 2010 was the hottest year in the 38-year record, with a mean temperature of 19.1 °C. For comparison, last year's mean temperature was 18.8 °C. Also worrying is that 2012 was the last year with a temperature below the long-term average.
make_plot(yearly_temp,dataset,'Mean annual temperature in ' + area_name,ylabel = 'Temp [C]',compare_line = temp_mean.values,trend=True,locator = [5,1])
# The number of days when the temperature exceeds 30 °C also has a rising trend.
daily_data = temp.resample(time="1D").mean('time')[:,0,0]
make_plot(daily_data[np.where(daily_data.values > 30)].groupby('time.year').count(),dataset,'Number of days in year when temperature exceeds 30 $^o$C in ' + area_name,ylabel = 'Days of year',locator = [5,1],trend=True)
print ('Yearly average days when temperature exceeds 30 C is ' + str("%.1f" % daily_data[np.where(daily_data.values > 30)].groupby('time.year').count().mean().values))
# Just as important as the change in temperature is what has happened to the rainfall. To investigate that, we created an annual precipitation plot, where a red line marks the average (84.1 mm). From the plot we can see that the amount of rainfall has been increasing a bit over the past decades in Tehran.
daily_rain = prec.resample(time="1D").mean('time')[:,0,0]
daily_rain_whole_years = daily_rain.loc[daily_rain['time.year'] < 2019]
overall_yearly_mean_prec = daily_rain_whole_years.groupby('time.year').sum().mean()
make_plot(daily_rain_whole_years.groupby('time.year').sum(),dataset,'Annual precipitation',ylabel = 'Precipitation [mm/year]',compare_line = overall_yearly_mean_prec,locator = [5,1],trend=True)
print ('Average annual precipitation ' + str("%.1f" % overall_yearly_mean_prec) + ' mm')
# The number of completely dry days seems to be stable. However, there were fewer dry days than usual in 2018 (171).
make_plot(daily_rain_whole_years[np.where(daily_rain_whole_years.values < 0.00001)].groupby('time.year').count(),dataset,'Completely dry days by year',ylabel = 'Days of year',compare_line = daily_rain_whole_years[np.where(daily_rain_whole_years.values < 0.00001)].groupby('time.year').count().mean(),locator = [5,1])
# There have been floods in Tehran in the last week. So, in addition to the historical analysis data, let's look into precipitation observations as well and find out how many millimeters of rain fell during the flooding. For that, we use the __[NOAA RBSN observation dataset](https://data.planetos.com/datasets/noaa_rbsn_timeseries)__. We choose a station in Tehran (you can find all the stations on the __[RBSN detail page](https://data.planetos.com/datasets/noaa_rbsn_timeseries)__).
dataset1 = 'noaa_rbsn_timeseries'
station = '40754' #
time_start_synop = '2019-01-01T00:00:00'
time_end = datetime.datetime.strftime(datetime.datetime.now(),'%Y-%m-%dT%H:%M:%S')
variable = 'precipitation_6_hour_accumulation'
link = 'https://api.planetos.com/v1/datasets/noaa_rbsn_timeseries/stations/{0}?origin=dataset-details&apikey={1}&count=1000&time_start={2}&time_end={3}&var={4},lat,lon'.format(station,API_key,time_start_synop,time_end,variable)
synop_data = read_data_to_json(link)
time_synop = [datetime.datetime.strptime(n['axes']['time'],'%Y-%m-%dT%H:%M:%S') for n in synop_data['entries'] if n['data']['precipitation_6_hour_accumulation'] != None]
prec_synop = [n['data']['precipitation_6_hour_accumulation'] for n in synop_data['entries'] if n['data']['precipitation_6_hour_accumulation'] != None]
# Here, we make a pandas DataFrame with the observation data, as it's easier to do the analysis in pandas.
d = {'time':time_synop,'precipitation_6hr_accu':prec_synop}
df = pd.DataFrame(data = d)
df = df.set_index('time')
jan_feb_mar_daily_mean_prec = np.mean(daily_rain_whole_years.loc[daily_rain_whole_years['time.month'] < 4])
make_monthly_plot(df.resample('1D').sum(),df.resample('1D').sum().index,'Precipitation [mm/day]','Precipitation in Tehran',locator=[10,1])
print ('Usual daily precipitation is ' + str("%.1f" % jan_feb_mar_daily_mean_prec) + ' mm/day during the period')
print ('Maximum amount of precipitation during the period was ' + str("%.1f" % np.max(df.resample('1D').sum().values)) + ' mm/day')
# We know that at the end of March there were __[devastating floods in Tehran](https://en.radiofarda.com/a/flooding-in-tehran-as-death-toll-climbs-and-more-severe-rain-expected-in-iran/29843719.html)__. The observation data show that the most precipitation fell on the 26th of March, when the amount was 19.4 mm/day. The average daily precipitation is only 0.4 mm/day, but once in a while it exceeds even 5 mm/day. Below we can see how often daily precipitation in Tehran has exceeded the 5 mm/day threshold since 1980. While 5 mm/day is not dangerous by itself, several rainy days in a row in a dry area like Iran can easily cause flooding.
make_plot(daily_rain[np.where(daily_rain.values > 5)].groupby('time.year').count(),dataset,'More precipitation than 5 mm per day',ylabel = 'Days of year',trend=True,locator = [5,1])
# Finally, let's look at the average annual precipitation cycle as well. This way we get an overview of how rainy the different months can be. We can see that it rains mostly in March and April, while the summer months tend to be almost completely dry. We can also see that Tehran has a pretty dry climate, as the daily average precipitation is always below 1 mm/day.
make_monthly_plot(daily_rain_whole_years.groupby('time.month').mean(),[calendar.month_abbr[m.values] for m in daily_rain_whole_years.groupby('time.month').mean().month],'Average Precipitation [mm/day]', 'Average annual cycle of precipitation')
# In conclusion, the climate changes in Tehran are a bit worrying, as the temperature has a rising trend. We can also see a rising trend in annual precipitation, although some parts of Iran tend to show a decreasing trend. Tehran has a very dry climate, and this is the reason why heavy rains can easily cause flooding.
| api-examples/middle_east_temp_prec_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### New to Plotly?
# Plotly's Python library is free and open source! [Get started](https://plotly.com/python/getting-started/) by downloading the client and [reading the primer](https://plotly.com/python/getting-started/).
# <br>You can set up Plotly to work in [online](https://plotly.com/python/getting-started/#initialization-for-online-plotting) or [offline](https://plotly.com/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plotly.com/python/getting-started/#start-plotting-online).
# <br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
# #### Version Check
# Note: Dendrograms are available in version 1.8.7+.
# Run `pip install plotly --upgrade` to update your Plotly version.
import plotly
plotly.__version__
# ##### Basic Dendrogram
# +
import plotly.plotly as py
import plotly.figure_factory as ff
import numpy as np
X = np.random.rand(15, 15)
dendro = ff.create_dendrogram(X)
dendro['layout'].update({'width':800, 'height':500})
py.iplot(dendro, filename='simple_dendrogram')
# -
# ##### Set Color Threshold
# +
import plotly.plotly as py
import plotly.figure_factory as ff
import numpy as np
X = np.random.rand(15, 15)
dendro = ff.create_dendrogram(X, color_threshold=1.5)
dendro['layout'].update({'width':800, 'height':500})
py.iplot(dendro, filename='simple_dendrogram_with_color_threshold')
# -
# ##### Set Orientation and Add Labels
# +
import plotly.plotly as py
import plotly.figure_factory as ff
import numpy as np
X = np.random.rand(10, 10)
names = ['Jack', 'Oxana', 'John', 'Chelsea', 'Mark', 'Alice', 'Charlie', 'Rob', 'Lisa', 'Lily']
fig = ff.create_dendrogram(X, orientation='left', labels=names)
fig['layout'].update({'width':800, 'height':800})
py.iplot(fig, filename='dendrogram_with_labels')
# -
# ##### Plot a Dendrogram with a Heatmap
# +
import plotly.plotly as py
import plotly.graph_objs as go
import plotly.figure_factory as ff
import numpy as np
from scipy.spatial.distance import pdist, squareform
# get data
data = np.genfromtxt("http://files.figshare.com/2133304/ExpRawData_E_TABM_84_A_AFFY_44.tab",
names=True,usecols=tuple(range(1,30)),dtype=float, delimiter="\t")
data_array = data.view((float, len(data.dtype.names)))  # the np.float alias was removed in NumPy 1.20+
data_array = data_array.transpose()
labels = data.dtype.names
# Initialize figure by creating upper dendrogram
figure = ff.create_dendrogram(data_array, orientation='bottom', labels=labels)
for i in range(len(figure['data'])):
figure['data'][i]['yaxis'] = 'y2'
# Create Side Dendrogram
dendro_side = ff.create_dendrogram(data_array, orientation='right')
for i in range(len(dendro_side['data'])):
dendro_side['data'][i]['xaxis'] = 'x2'
# Add Side Dendrogram Data to Figure
for data in dendro_side['data']:
figure.add_trace(data)
# Create Heatmap
dendro_leaves = dendro_side['layout']['yaxis']['ticktext']
dendro_leaves = list(map(int, dendro_leaves))
data_dist = pdist(data_array)
heat_data = squareform(data_dist)
heat_data = heat_data[dendro_leaves,:]
heat_data = heat_data[:,dendro_leaves]
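The two indexing lines above reorder the square distance matrix so that rows and columns follow the dendrogram's leaf order; the indexing pattern, in isolation, works like this (toy 3×3 matrix, hypothetical leaf order):

```python
import numpy as np

# Toy symmetric distance matrix and a hypothetical leaf order.
dist = np.array([[0, 1, 2],
                 [1, 0, 3],
                 [2, 3, 0]])
leaves = [2, 0, 1]

# Reorder rows first, then columns, exactly as done for heat_data above.
reordered = dist[leaves, :][:, leaves]
print(reordered)  # [[0 2 3]
                  #  [2 0 1]
                  #  [3 1 0]]
```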
heatmap = [
go.Heatmap(
x = dendro_leaves,
y = dendro_leaves,
z = heat_data,
colorscale = 'Blues'
)
]
heatmap[0]['x'] = figure['layout']['xaxis']['tickvals']
heatmap[0]['y'] = dendro_side['layout']['yaxis']['tickvals']
# Add Heatmap Data to Figure
for data in heatmap:
figure.add_trace(data)
# Edit Layout
figure['layout'].update({'width':800, 'height':800,
'showlegend':False, 'hovermode': 'closest',
})
# Edit xaxis
figure['layout']['xaxis'].update({'domain': [.15, 1],
'mirror': False,
'showgrid': False,
'showline': False,
'zeroline': False,
'ticks':""})
# Edit xaxis2
figure['layout'].update({'xaxis2': {'domain': [0, .15],
'mirror': False,
'showgrid': False,
'showline': False,
'zeroline': False,
'showticklabels': False,
'ticks':""}})
# Edit yaxis
figure['layout']['yaxis'].update({'domain': [0, .85],
'mirror': False,
'showgrid': False,
'showline': False,
'zeroline': False,
'showticklabels': False,
'ticks': ""})
# Edit yaxis2
figure['layout'].update({'yaxis2':{'domain':[.825, .975],
'mirror': False,
'showgrid': False,
'showline': False,
'zeroline': False,
'showticklabels': False,
'ticks':""}})
# Plot!
py.iplot(figure, filename='dendrogram_with_heatmap')
# -
dendro_side['layout']['xaxis']
# ### Reference
help(ff.create_dendrogram)
# +
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
# ! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'dendrograms.ipynb', 'python/dendrogram/', 'Python Dendrograms',
'How to make a dendrogram in Python with Plotly. ',
name = 'Dendrograms',
title = "Dendrograms | Plotly",
thumbnail='thumbnail/dendrogram.jpg', language='python',
has_thumbnail='true', display_as='scientific', order=6,
ipynb= '~notebook_demo/262')
# -
| _posts/python-v3/scientific/dendrogram/dendrograms.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# # Food Identification mini
# + pycharm={"name": "#%%\n"}
import os
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from shutil import copytree, rmtree
# + pycharm={"name": "#%%\n"}
def dataset_mini(food_list, src, dest):
    # Start fresh: remove any previous mini dataset before copying.
    if os.path.exists(dest):
        rmtree(dest)
    os.makedirs(dest)
    for food_item in food_list:
        print("Copying images into", food_item)
        copytree(os.path.join(src, food_item), os.path.join(dest, food_item))
# + pycharm={"name": "#%%\n"}
food_list = ['donuts','pizza','samosa']
src_train = '../Data/Train'
dest_train = '../Data/Train_mini'
src_test = '../Data/Test'
dest_test = '../Data/Test_mini'
# + pycharm={"name": "#%%\n"}
print("Creating train data folder with new classes")
dataset_mini(food_list, src_train, dest_train)
print("Creating test data folder with new classes")
dataset_mini(food_list, src_test, dest_test)
# + pycharm={"is_executing": true, "name": "#%%\n"}
import tensorflow.keras.backend as K
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.layers import GlobalAveragePooling2D
from tensorflow.keras.callbacks import ModelCheckpoint, CSVLogger
from tensorflow.keras.optimizers import SGD
from tensorflow.keras import regularizers
K.clear_session()
n_classes = 3
img_width, img_height = 299, 299
train_data_dir = dest_train
validation_data_dir = dest_test
nb_train_samples = 2250
nb_validation_samples = 750
batch_size = 16
train_datagen = ImageDataGenerator(
rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
validation_data_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='categorical')
inception = InceptionV3(weights='imagenet', include_top=False)
x = inception.output
x = GlobalAveragePooling2D()(x)
x = Dense(128,activation='relu')(x)
x = Dropout(0.2)(x)
predictions = Dense(3,kernel_regularizer=regularizers.l2(0.005), activation='softmax')(x)
model = Model(inputs=inception.input, outputs=predictions)
model.compile(optimizer=SGD(learning_rate=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
checkpointer = ModelCheckpoint(filepath='best_model_3class.hdf5', verbose=1, save_best_only=True)
csv_logger = CSVLogger('history_3class.log')
history = model.fit(train_generator,  # fit_generator is deprecated; fit accepts generators directly
                    steps_per_epoch=nb_train_samples // batch_size,
                    validation_data=validation_generator,
                    validation_steps=nb_validation_samples // batch_size,
                    epochs=30,
                    verbose=1,
                    callbacks=[csv_logger, checkpointer])
model.save('model_trained_3class.hdf5')
# + pycharm={"name": "#%%\n"}
class_map_3 = train_generator.class_indices
class_map_3
# + pycharm={"name": "#%%\n"}
from tensorflow.keras.models import load_model
model = load_model('../Model/model_trained_3class.hdf5', compile=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('../Model/model.tflite', 'wb') as f:
f.write(tflite_model)
| Backend/Notebook/Food-Identification-Mini.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.5.3
# language: julia
# name: julia-1.5
# ---
# # MOwNiT
# ## Lab session
# ### Root finding
# To search for roots of a function in Julia we use the Roots package
# ```julia
# Pkg.add("Roots")
# Pkg.add("ForwardDiff")
# ```
using Pkg
#Pkg.add("Roots")
#Pkg.add("ForwardDiff")
using Roots
using Plots
using ForwardDiff
# example function for testing
f(x) = cos(x) - x
plot(f, -2, 2)
# The <i>find_zero</i> function chooses a suitable algorithm depending on how it is called.
#
# ### 1. Methods using an interval and a sign change
# #### 1.1 Bisection method - used if we call find_zero with an interval (here (0, 1))
x = find_zero(f, (0, 1),verbose=true)
# the method can also be specified explicitly
x = find_zero(f, (0, 1), Bisection(),verbose=true)
# #### Checking whether we found a zero
# check whether we actually found 0
iszero(f(x))
# we can also check whether the function changes sign between the left and right floating-point neighbors of the root.
sign(f(prevfloat(x))) * sign(f(nextfloat(x)))
# sometimes the algorithm finds only the best floating-point approximation of the zero
g(x) = sin(x)
x = find_zero(g, (pi/2, 3pi/2))
x, g(x)
# it is not exactly 0, but ...
iszero(g(x))
# ... the nearest left or right neighbor lies on the opposite side of the x-axis from our zero.
g(prevfloat(x)) * g(x) < 0.0 || g(x) * g(nextfloat(x)) < 0.0
# #### 1.2 Regula falsi (false position) method
find_zero(f, (0, 1), FalsePosition(), verbose=true)
# twelve variants of the regula falsi algorithm are available
find_zero(f, (0, 1), FalsePosition(12), verbose=true)
# ### 2. Methods using derivatives
#
# #### 2.1 Newton's method needs a starting point and uses the derivative of the function.
# To use Newton's method, the ForwardDiff package can be used to compute the derivative.
# define D(f), which returns the derivative function
D(f) = x->ForwardDiff.derivative(f, float(x))
plot(D(f), -2,2)
# calling Newton's method
find_zero((f, D(f)),0, Roots.Newton(),verbose=true)
# #### 2.2 Halley's method (needs the first and second derivatives)
DD(f) = x->ForwardDiff.derivative(D(f), float(x))
find_zero((f, D(f), DD(f)), 0.0, Roots.Halley(), verbose=true)
# ### 3. Methods using an approximation of the derivative
# #### 3.1 The default method is based on the algorithm from the <a href="http://www.hpl.hp.com/hpjournal/pdfs/IssuePDFs/1979-12.pdf">HP-34 calculators</a>, which uses the secant method
# together with a bracketing method following the article:
#
# <a href="http://na.math.kit.edu/alefeld/download/1995_Algorithm_748_Enclosing_Zeros_of_Continuous_Functions.pdf"><NAME>, <NAME>, and
# <NAME>, "Algorithm 748: enclosing zeros of continuous functions," ACM
# Trans. Math. Softw. 21, 327–344 (1995), DOI: 10.1145/210089.210111. </a>
x = find_zero(f, 0, verbose=true)
# The higher-order find_zero methods are variants of Newton's method that do not use the derivative but approximate it.
#
# #### 3.2 Secant method
# calling find_zero with a starting point (rather than an interval)
# and the Order1() option uses the secant method
x = find_zero(f, 0, Order1(), verbose=true)
# the secant method can also be called directly;
# same implementation as find_zero(f, 0, Order1()), but without
# the framework overhead and with fewer convergence checks - faster
Roots.secant_method(f, 0)
# we can also pass an interval
Roots.secant_method(f, (0,1))
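# The secant update needs no derivative: it replaces f'(x) with a finite
# difference through the last two iterates. A minimal sketch (shown in Python
# for consistency with the rest of this collection; Roots.jl's implementation
# differs in its convergence checks):

```python
import math

def secant(f, x0, x1, tol=1e-12, maxiter=100):
    """Secant method: x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(maxiter):
        if f1 == f0:  # flat secant line: cannot make further progress
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

# same test function as above: cos(x) - x, bracketed by 0 and 1
root = secant(lambda x: math.cos(x) - x, 0.0, 1.0)
```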
# #### 3.3 Steffensen's method
# approximates the derivative by (f(x + f(x)) - f(x))/f(x);
# to call it, pass a starting point and Order2()
x = find_zero(f, 0, Order2(), verbose=true)
# orders 5, 8 and 16 are also available
x = find_zero(f, 0, Order8(), verbose=true)
# the find_zero function can also be used to locate discontinuities
plot(x -> 1/x)
find_zero(x -> 1/x, (-1, 1), verbose=true)
# find_zeros - searches for more than one root; it splits the interval into smaller subintervals
find_zeros(x ->(x-3)*x, -10, 10)
plot(x ->(x-3)*x,-1,4)
# More at https://github.com/JuliaMath/Roots.jl/blob/master/doc/roots.ipynb
# ### Assignment:
#
# A. Choose three root-finding methods:
#
# * one using an interval and a sign change,
# * one using the derivative,
# * one using an approximation of the derivative
#
# 1. Test each of the three chosen methods (number of iterations, number of function calls) on six chosen functions from the set http://people.sc.fsu.edu/~jburkardt/py_src/test_zero/test_zero.html Present the results in a table. <b>Remember to verify that the result is correct by evaluating the function at the root found!</b> (3 pts)
#
# 2. Demonstrate a chosen, interesting example of a difficult function from point 1 and how the methods behave on it. (1 pt)
#
# 3. For each of the chosen methods, demonstrate and explain one example for which it does not work (based on point 1 or your own invention) (1 pt)
#
# B. Draw the <a href="https://pl.wikipedia.org/wiki/Wst%C4%99ga_Newtona">Newton fractal (wstęga Newtona)</a> and explain how it is constructed and what its connection is to Newton's root-finding method. Any tool and language. (1 pt)
| Mownit_Lab8.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/sparse_mlp.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="u-t9ecR3j03I"
# # Fit an MLP using an L1 penalty on the weights to make a sparse network
#
# We use projected gradient descent as the optimizer
# + id="3a7h30TF_26l"
# !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
# %cd -q /pyprobml/scripts
import pyprobml_utils as pml
import numpy as np
import matplotlib.pyplot as plt
# + id="O4lJyM2iA_fl" colab={"base_uri": "https://localhost:8080/"} outputId="555f33c8-6268-48be-9f7b-85da0c458f12"
import itertools
import numpy as np
import matplotlib.pyplot as plt
import numpy.random as npr
import jax.numpy as jnp
from jax import jit, grad, random
from jax.experimental import stax
from jax.experimental.stax import Dense, Softplus
from jax.tree_util import (tree_flatten, tree_unflatten)
from jax.flatten_util import ravel_pytree
from graphviz import Digraph
def generate_data(num_instances, num_vars, key):
subkeys = random.split(key, 4)
X = 10 * random.uniform(subkeys[0], shape=(num_instances, num_vars)) - 5
var = 0.1
n_points = 20
example_points = 10 * random.uniform(subkeys[1], shape=(n_points, num_vars))-5
targets = 10* random.uniform(subkeys[2], shape=(n_points, 1)) -5
y = np.zeros((num_instances,1))
for i in range(num_instances):
dists = np.sum(np.abs(np.tile(X[i,:],(n_points,1)) - example_points), axis=1)
lik = (1/np.sqrt(2* np.pi)) * np.exp(-dists/(2*var))
lik = lik / np.sum(lik)
y[i,0] = lik.T @ targets + random.normal(subkeys[3])/15
return X, y
@jit
def loss(params, batch):
inputs, targets = batch
preds = predict(params, inputs)
return jnp.sum((preds - targets)**2) / 2.0
def data_stream(num_instances, batch_size):
rng = npr.RandomState(0)
num_batches = num_instances // batch_size
while True:
perm = rng.permutation(num_instances)
for i in range(num_batches):
batch_idx = perm[i * batch_size:(i + 1) * batch_size]
yield X[batch_idx], y[batch_idx]
def pgd(alpha, lambd):
step_size = alpha
def init(w0):
return w0
def soft_thresholding(z, threshold):
return jnp.sign(z) * jnp.maximum(jnp.absolute(z) - threshold , 0.0)
def update(i, g, w):
g_flat, unflatten = ravel_pytree(g)
w_flat = ravel_pytree_jit(w)
updated_params = soft_thresholding(w_flat - step_size * g_flat, step_size * lambd)
return unflatten(updated_params)
def get_params(w):
return w
    def set_step_size(lr):
        nonlocal step_size  # without nonlocal this would only rebind a local variable
        step_size = lr
return init, update, get_params, soft_thresholding, set_step_size
ravel_pytree_jit = jit(lambda tree: ravel_pytree(tree)[0])
@jit
def line_search(w, g, batch, beta):
lr_i = 1
g_flat, unflatten_g = ravel_pytree(g)
w_flat = ravel_pytree_jit(w)
z_flat = soft_thresholding(w_flat - lr_i*g_flat, lr_i* lambd)
z = unflatten_g(z_flat)
for i in range(20):
is_converged = loss(z, batch) > loss(w, batch) + g_flat@(z_flat - w_flat) + np.sum((z_flat - w_flat)**2)/(2*lr_i)
lr_i = jnp.where(is_converged,lr_i, beta*lr_i)
return lr_i
@jit
def update(i, opt_state, batch):
params = get_params(opt_state)
g = grad(loss)(params, batch)
lr_i = line_search(params, g, batch, 0.5)
set_step_size(lr_i)
return opt_update(i,g, opt_state)
key = random.PRNGKey(3)
num_epochs = 60000
num_instances, num_vars = 200, 2
batch_size = num_instances
minim, maxim = -5, 5
x, y = generate_data(num_instances, 1, key)
X = np.c_[np.ones_like(x), x]
batches = data_stream(num_instances, batch_size)
init_random_params, predict = stax.serial(
Dense(5), Softplus,
Dense(5), Softplus,
Dense(5), Softplus,
Dense(5), Softplus,
Dense(1))
lambd, step_size = 0.6, 1e-4
opt_init, opt_update, get_params, soft_thresholding, set_step_size = pgd(step_size,lambd)
_, init_params = init_random_params(key, (-1, num_vars))
opt_state = opt_init(init_params)
itercount = itertools.count()
for epoch in range(num_epochs):
opt_state = update(next(itercount), opt_state, next(batches))
params = get_params(opt_state)
weights, _ = tree_flatten(params)
w = weights[::2]
print(w)
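# The projected step in pgd above relies on elementwise soft thresholding, the
# proximal operator of the L1 penalty: it is what zeroes out small weights and
# produces the sparse network. A minimal scalar sketch (a hypothetical helper,
# equivalent in spirit to the jnp-based soft_thresholding above):

```python
def soft_threshold(z, t):
    """Prox of t*|.|: shrink z toward zero by t, clipping at zero."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

# values with magnitude below the threshold are set exactly to zero
shrunk = [soft_threshold(z, 1.0) for z in (3.0, 0.5, -3.0)]
```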
# + id="nAfd2E9wjXKx" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="8b0a0c6c-9065-4704-bd5a-abdbee33cd50"
labels = {"training" : "Data", "test" : "Deep Neural Net" }
x_test = np.arange(minim, maxim, 1e-5)
x_test = np.c_[np.ones((x_test.shape[0],1)), x_test]
plt.scatter(X[:,1], y, c='k', s=13, label=labels["training"])
plt.plot(x_test[:,1], predict(params, x_test), 'g-',linewidth=3, label=labels["test"])
plt.gca().legend(loc="upper right")
pml.savefig('sparse_mlp_fit.pdf', dpi=300)
plt.show()
# + id="ddm3_Yi8jRWw" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="fbccf61b-a9f4-4d88-b710-083cbde7455b"
dot = Digraph(name='Neural Network',format='png',graph_attr={'ordering':'out'}, node_attr={'shape': 'circle', 'color': 'black', 'fillcolor':'#FFFFE0', 'style': 'filled'})
dot.node('00','<x<SUB>0</SUB>>')
dot.node('01','<x<SUB>1</SUB>>')
for i in range(len(w)):
for j in range(w[i].shape[1]):
subscript = '{}{}'.format(i+1, j)
dot.node(subscript, '<h<SUB>{}</SUB>>'.format(subscript))
for k in range(w[i].shape[0]):
origin = '{}{}'.format(i,k)
if np.abs(w[i][k,j])>1e-4:
dot.edge(origin, subscript)
else:
dot.edge(origin, subscript,style='invis')
dot.edge('42','50', style='invis')
dot.view()
# + id="R7whH_kJBP7-" colab={"base_uri": "https://localhost:8080/", "height": 646} outputId="e547092a-318d-435e-f434-a0ad49a5c307"
dot
# + id="ebiEjsQVil71" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="96a7c61a-2b53-4962-f9ce-caeb2d7244e4"
dot.render('../figures/sparse-mlp-graph-structure', view=False)
# + id="pgz9DtdgiwSg" colab={"base_uri": "https://localhost:8080/"} outputId="60cae972-3e48-4b68-e7c2-55aef750750b"
# !ls ../figures
# + id="5YiIY95Riwi9"
| notebooks/sparse_mlp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# +
from itertools import islice
from mard.brazilian_document.oab import oab_pattern, oab_parser
from pathlib import Path
from bs4 import BeautifulSoup
import json
import re
next_file = next(islice(Path('/home/mard/attachments').glob('*.json'), 7, None))
with next_file.open() as f:
obj = json.load(f)
soup = BeautifulSoup(obj['content'])
print(oab_parser._pattern.pattern)
for x in oab_parser.parse(soup.text):
print(x)
# +
from itertools import islice
from mard.lazy import as_chunks
from mard.concurrent import pipeline
from tqdm.auto import tqdm
from random import sample
import re
def producer():
def get_files():
files = Path('/home/mard/attachments').glob('*.json')
for file in files:
with file.open() as f:
yield f.read()
yield from as_chunks(get_files(), 100)
def process_one(obj: dict):
text = BeautifulSoup(obj['content'], features='lxml').text
return list(oab_parser.parse(text))
def mapper(chunk):
return [
process_one(json.loads(x))
for x in chunk
]
results = (
result
for chunk in pipeline(producer, mapper)
for result in chunk
)
findings = []
for result in tqdm(results):
for finding in result:
findings.append(finding)
# -
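# `as_chunks` is imported from the project-local `mard.lazy`, so its exact
# behavior is an assumption here; the way the producer uses it suggests it
# groups a lazy iterable into fixed-size lists so the pipeline can batch work.
# A sketch under that assumption:

```python
from itertools import islice

def as_chunks(iterable, size):
    """Yield successive lists of at most `size` items from any iterable."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:  # iterator exhausted
            return
        yield chunk

chunks = list(as_chunks(range(7), 3))
```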
findings[22]
| notebooks/draft.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
import time
from requests_html import HTML
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
# +
options = Options()
options.add_argument("--headless")
driver = webdriver.Chrome(options=options)
# -
categories = [
"https://www.amazon.com/Best-Sellers-Toys-Games/zgbs/toys-and-games/",
"https://www.amazon.com/Best-Sellers-Electronics/zgbs/electronics/",
"https://www.amazon.com/Best-Sellers/zgbs/fashion/"
]
# +
# categories
# -
first_url = categories[0]
driver.get(first_url)
body_el = driver.find_element_by_css_selector("body")
html_str = body_el.get_attribute("innerHTML")
html_obj = HTML(html=html_str)
new_links = [x for x in html_obj.links if x.startswith("/")]
new_links = [x for x in new_links if "product-reviews/" not in x]
page_links = [f"https://www.amazon.com{x}" for x in new_links]
page_links
def scrape_product_page(url, title_lookup = "#productTitle", price_lookup = "#priceblock_ourprice"):
driver.get(url)
time.sleep(1.2)
body_el = driver.find_element_by_css_selector("body")
html_str = body_el.get_attribute("innerHTML")
html_obj = HTML(html=html_str)
product_title = html_obj.find(title_lookup, first=True).text
product_price = html_obj.find(price_lookup, first=True).text
return product_title, product_price
first_product_link = page_links[0]
first_product_link
for link in page_links:
    title, price = (None, None)
    try:
        title, price = scrape_product_page(link)
    except Exception:
        # pages missing the title/price selectors are skipped
        pass
    if title is not None and price is not None:
        print(link, title, price)
# +
# https://www.amazon.com/LEGO-Classic-Medium-Creative-Brick/dp/B00NHQFA1I/
# https://www.amazon.com/Crayola-Washable-Watercolors-8-ea/dp/B000HHKAE2/
# <base-url>/<slug>/dp/<product_id>/
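# The URL pattern noted in the comment above can be captured with a small
# regular expression (a sketch; the 10-character uppercase/digit product id is
# an assumption based on the two example links):

```python
import re

# <base-url>/<slug>/dp/<product_id>/ -- capture the slug and the product id
product_re = re.compile(r"/([^/]+)/dp/([A-Z0-9]{10})")

m = product_re.search(
    "https://www.amazon.com/LEGO-Classic-Medium-Creative-Brick/dp/B00NHQFA1I/")
slug, product_id = m.groups()
```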
| tutorial-reference/Day 18/notebooks/2 - Category Products.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# General Dependencies
import os
import numpy as np
# Denoising dependencies
from trefide.pmd import batch_decompose,\
batch_recompose,\
overlapping_batch_decompose,\
overlapping_batch_recompose,\
determine_thresholds
from trefide.reformat import overlapping_component_reformat
# Plotting & Video Rendering Dependencies
import funimag
import matplotlib.pyplot as plt
from trefide.plot import pixelwise_ranks
from trefide.video import play_cv2
# Set Demo Dataset Location
ext = os.path.join("..", "example_movies")
filename = os.path.join(ext, "demoMovie.tif")
# %load_ext autoreload
# %autoreload 2
# -
# # Load Data
# +
from skimage import io
mov = io.imread(filename).transpose([1,2,0])[:60,:60,:]
mov = np.asarray(mov,order='C',dtype=np.float64)
print(mov.shape)
fov_height, fov_width, num_frames = mov.shape
# -
# # Set Params
# +
# Maximum of rank 50 blocks (safeguard to terminate early if this is hit)
max_components = 50
# Enable Decimation
max_iters_main = 10
max_iters_init = 40
d_sub=2
t_sub=2
# Defaults
consec_failures = 3
tol = 0.0005
# Set Blocksize Parameters
block_height = 20
block_width = 20
overlapping = True
# -
# # Compress Video
# ## Simulate Critical Region with Noise
spatial_thresh, temporal_thresh = determine_thresholds((fov_height, fov_width, num_frames),
(block_height, block_width),
consec_failures, max_iters_main,
max_iters_init, tol,
d_sub, t_sub, 5, True)
# ## Decompose Each Block Into Spatial & Temporal Components
# Blockwise Parallel, Single Tiling
if not overlapping:
spatial_components,\
temporal_components,\
block_ranks,\
block_indices = batch_decompose(fov_height, fov_width, num_frames,
mov, block_height, block_width,
max_components, consec_failures,
max_iters_main, max_iters_init, tol,
d_sub=d_sub, t_sub=t_sub)
# Blockwise Parallel, 4x Overlapping Tiling
else:
spatial_components,\
temporal_components,\
block_ranks,\
block_indices,\
block_weights = overlapping_batch_decompose(fov_height, fov_width, num_frames,
mov, block_height, block_width,
spatial_thresh, temporal_thresh,
max_components, consec_failures,
max_iters_main, max_iters_init, tol,
d_sub=d_sub, t_sub=t_sub)
# # Reconstruct Denoised Video
# Single Tiling (no need for reweighting)
if not overlapping:
mov_denoised = np.asarray(batch_recompose(spatial_components,
temporal_components,
block_ranks,
block_indices))
# Overlapping Tilings With Reweighting
else:
mov_denoised = np.asarray(overlapping_batch_recompose(fov_height, fov_width, num_frames,
block_height, block_width,
spatial_components,
temporal_components,
block_ranks,
block_indices,
block_weights))
# # Produce Diagnostics
# ### Single Tiling Pixel-Wise Ranks
if overlapping:
pixelwise_ranks(block_ranks['no_skew']['full'], fov_height, fov_width, num_frames, block_height, block_width)
else:
pixelwise_ranks(block_ranks, fov_height, fov_width, num_frames, block_height, block_width)
# ### Correlation Images
from funimag.plots import util_plot
util_plot.comparison_plot([mov, mov_denoised + np.random.randn(np.prod(mov.shape)).reshape(mov.shape)*.01],
plot_orientation="vertical")
# # Save Results
U, V = overlapping_component_reformat(fov_height, fov_width, num_frames,
block_height, block_width,
spatial_components,
temporal_components,
block_ranks,
block_indices,
block_weights)
np.savez(os.path.join(ext, "demo_results.npz"), U, V,block_ranks,block_height,block_width)
| demos/Demo Compression & Denoising.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
from pathlib import Path
import csv
# open/read menu data file
csvpath_menu = Path('Resources/menu_data.csv')
with open (csvpath_menu, 'r') as csvfile_menu:
csvreader_menu = csv.reader(csvfile_menu, delimiter=',')
csv_header_menu = next(csvreader_menu)
# initialize master_menu using for loop
master_menu = []
for row in csvreader_menu:
master_menu.append(row)
#print(master_menu)
# initialize count and lists before for loop
menu_name = []
menu_cat = []
menu_desc = []
menu_price = []
menu_cost = []
for row in master_menu:  # csvreader_menu is already exhausted by the loop above, so reuse the saved rows
name = row[0]
menu_name.append(name)
cat = row[1]
menu_cat.append(cat)
desc = row[2]
menu_desc.append(desc)
price = row[3]
menu_price.append(price)
cost = row[4]
menu_cost.append(cost)
# open/read sales data file
csvpath_sales = Path('Resources/sales_data.csv')
with open (csvpath_sales, 'r') as csvfile_sales:
csvreader_sales = csv.reader(csvfile_sales, delimiter=',')
csv_header_sales = next(csvreader_sales)
report = {}
master_sales = []
sales_qty = []
sales_item = []
for row in csvreader_sales:
master_sales.append(row)
qty = row[3]
sales_qty.append(int(qty))
item = row[4]
sales_item.append(item)
ramen_list = set(sales_item) # use set function to get rid of duplicates. we will use this new list as our keys in the report dictionary
# -
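# Since ramen_list is meant to supply the keys of the report dictionary, the
# aggregation this cell is building toward can be sketched with a plain dict
# (hypothetical sample data; the real report would iterate over master_sales):

```python
# total quantity sold per unique item
sales = [("pizza", 2), ("ramen", 3), ("pizza", 1)]

totals = {}
for item, qty in sales:
    # dict.get supplies 0 the first time an item is seen
    totals[item] = totals.get(item, 0) + qty
```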
| PyRamen/.ipynb_checkpoints/main-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# Lasso Optimisation
# ==================
#
# This example demonstrates the use of class [bpdn.BPDNProjL1](http://sporco.rtfd.org/en/latest/modules/sporco.admm.bpdn.html#sporco.admm.bpdn.BPDNProjL1) to solve the least absolute shrinkage and selection operator (lasso) problem [[46]](http://sporco.rtfd.org/en/latest/zreferences.html#id48)
#
# $$\mathrm{argmin}_\mathbf{x} \; (1/2) \| D \mathbf{x} - \mathbf{s} \|_2^2 \; \text{such that} \; \| \mathbf{x} \|_1 \leq \gamma$$
#
# where $D$ is the dictionary, $\mathbf{x}$ is the sparse representation, and $\mathbf{s}$ is the signal to be represented. In this example the lasso problem is used to estimate the reference sparse representation that generated a signal from a noisy version of the signal.
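# The constraint $\| \mathbf{x} \|_1 \leq \gamma$ is enforced by a Euclidean
# projection onto the $\ell_1$ ball. A minimal pure-Python sketch of that
# projection (the standard sort-and-threshold algorithm; not sporco's actual
# implementation):

```python
def project_l1_ball(v, gamma):
    """Euclidean projection of v onto {x : ||x||_1 <= gamma}."""
    if sum(abs(x) for x in v) <= gamma:
        return list(v)  # already feasible
    # sort magnitudes descending and find the shrinkage threshold theta
    u = sorted((abs(x) for x in v), reverse=True)
    css = 0.0
    theta = 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - gamma) / i
        if ui > t:  # condition holds exactly for a prefix of u
            theta = t
    # soft-threshold each coordinate by theta, keeping its sign
    return [max(abs(x) - theta, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]
```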
# +
from __future__ import print_function
from builtins import input
import numpy as np
from sporco.admm import bpdn
from sporco import plot
plot.config_notebook_plotting()
# -
# Configure problem size, sparsity, and noise level.
N = 512 # Signal size
M = 4*N # Dictionary size
L = 32 # Number of non-zero coefficients in generator
sigma = 0.5 # Noise level
# Construct random dictionary, reference random sparse representation, and test signal consisting of the synthesis of the reference sparse representation with additive Gaussian noise.
# +
# Construct random dictionary and random sparse coefficients
np.random.seed(12345)
D = np.random.randn(N, M)
x0 = np.zeros((M, 1))
si = np.random.permutation(list(range(0, M-1)))
x0[si[0:L]] = np.random.randn(L, 1)
# Construct reference and noisy signal
s0 = D.dot(x0)
s = s0 + sigma*np.random.randn(N,1)
# -
# Set [bpdn.BPDNProjL1](http://sporco.rtfd.org/en/latest/modules/sporco.admm.bpdn.html#sporco.admm.bpdn.BPDNProjL1) solver class options. The value of $\gamma$ has been manually chosen for good performance.
gamma = 2.5e1
opt = bpdn.BPDNProjL1.Options({'Verbose': True, 'MaxMainIter': 500,
'RelStopTol': 1e-6, 'AutoRho': {'RsdlTarget': 1.0}})
# Initialise and run BPDNProjL1 object
# +
b = bpdn.BPDNProjL1(D, s, gamma, opt)
x = b.solve()
print("BPDNProjL1 solve time: %.2fs" % b.timer.elapsed('solve'))
# -
# Plot comparison of reference and recovered representations.
plot.plot(np.hstack((x0, x)), title='Sparse representation',
lgnd=['Reference', 'Reconstructed'])
# Plot functional value, residuals, and rho
its = b.getitstat()
fig = plot.figure(figsize=(20, 5))
plot.subplot(1, 3, 1)
plot.plot(its.ObjFun, xlbl='Iterations', ylbl='Functional', fig=fig)
plot.subplot(1, 3, 2)
plot.plot(np.vstack((its.PrimalRsdl, its.DualRsdl)).T,
ptyp='semilogy', xlbl='Iterations', ylbl='Residual',
lgnd=['Primal', 'Dual'], fig=fig)
plot.subplot(1, 3, 3)
plot.plot(its.Rho, xlbl='Iterations', ylbl='Penalty Parameter', fig=fig)
fig.show()
| sc/bpdnprjl1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Train Custom Model - HyperDrive
# + tags=["outputPrepend"]
# !pip install azureml-sdk[notebooks]
# !pip install torch
# !pip install torchvision
# !pip install tensorflow
# !pip install keras
# !pip install keras.models
# + gather={"logged": 1611451066381}
# Imports
import torch
import torchvision
import string
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import tensorflow as tf
# + gather={"logged": 1611451067550}
# Azure ML Imports
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.core import Workspace, Experiment
from azureml.data.dataset_factory import TabularDatasetFactory
from azureml.widgets import RunDetails
# -
# ## Workspace
# + gather={"logged": 1611451079386} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
ws = Workspace.from_config()
experiment_name = 'ASL-DeepLearning-hyperparameter'
exp_with_hyper = Experiment(workspace=ws, name='Sign-Language-HyperDrive')
exp_no_hyper = Experiment(workspace=ws, name='Sign-Language-NoHyperDrive')
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
# -
# ## Compute
# + gather={"logged": 1611447434540}
# Choose a name for your compute cluster (GPU nodes are provisioned below)
cluster_name = "gpu-cluster"
# Verify that cluster does not exist already
try:
cpu_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC24',
max_nodes=10)
cpu_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
cpu_cluster.wait_for_completion(show_output=True)
# -
# ## Dataset
#
# TODO: Get data. In the cell below, write code to access the data you will be using in this project. Remember that the dataset needs to be external.
# + gather={"logged": 1611451094783}
from azureml.data.dataset_factory import TabularDatasetFactory
# Create TabularDataset using TabularDatasetFactory
# Data is available at:
# "https://www.kaggle.com/datamunge/sign-language-mnist"
found = False
key = "sign-language-mnist"
description_text = "sign Language MNIST"
if key in ws.datasets.keys():
found = True
ds = ws.datasets[key]
if not found:
from azureml.data.dataset_factory import TabularDatasetFactory
datastore_path = "https://github.com/emanbuc/ASL-Recognition-Deep-Learning/raw/main/datasets/sign-language-mnist/sign_mnist_train/sign_mnist_train.csv"
ds = TabularDatasetFactory.from_delimited_files(path=datastore_path,header=True)
#Register Dataset in Workspace
ds = ds.register(workspace=ws,name=key,description=description_text)
df = ds.to_pandas_dataframe()
df.describe()
# + gather={"logged": 1611447448174}
# To map each label number to its corresponding letter
letters = dict(enumerate(string.ascii_uppercase))
letters
# -
# ## Data Preparation
# + gather={"logged": 1611451095059}
targets = pd.get_dummies(df.pop('label')).values
data = df.values
targets.shape
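# pd.get_dummies one-hot encodes the integer labels; the same transformation,
# sketched without pandas:

```python
def one_hot(labels, num_classes):
    """Each row has a single 1 at the index given by the label."""
    return [[1 if j == lab else 0 for j in range(num_classes)]
            for lab in labels]

encoded = one_hot([0, 2, 1], 3)
```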
# + gather={"logged": 1611451096690}
data.shape
# + gather={"logged": 1611451098399}
from sklearn.preprocessing import minmax_scale
data = minmax_scale(data)
# + gather={"logged": 1611451100520}
input_shape = (28,28, 1) # 28*28 = 784
# + gather={"logged": 1611451101661}
data = np.reshape(data,(-1, 28, 28,1))
data.shape
# + gather={"logged": 1611451103515}
targets.shape
# -
# ## Splitting the Data into Training and Validation Sets
# + gather={"logged": 1611451107229}
X_train, X_val, y_train, y_val = train_test_split(data, targets , test_size = 0.2, random_state=0)
# -
# ### Save data to file for remote cluster
#
# Saves the prepared data to a file with pickle and uploads it to the datastore of the cloud workspace.
# + gather={"logged": 1611447455483}
import pickle
import os
if os.path.isfile('dataset/sign-language-mnist.pkl'):
    print('File is present')
else:
    os.makedirs('dataset', exist_ok=True)  # exist_ok: the folder may exist even when the file does not
with open('dataset/sign-language-mnist.pkl','wb') as f:
pickle.dump((X_train,X_val,y_train,y_val),f)
datastore=ws.get_default_datastore()
datastore.upload('./dataset', target_path='sign-language-mnist-data')
print('Done')
# -
# ## Remote Training Experiment (No Hyperdrive)
#
# Create an estimator object to run training experiment in remote compute cluster
# + gather={"logged": 1611447460117}
from azureml.train.estimator import Estimator
script_params = {
'--data_folder': ws.get_default_datastore(),
'--hidden': 100
}
est = Estimator(source_directory='.',
script_params=script_params,
compute_target=cpu_cluster,
entry_script='train_keras.py',
pip_packages=['keras','tensorflow'])
# + gather={"logged": 1611447472521}
run = exp_no_hyper.submit(est)
# -
# ## Remote Training with HyperDrive
# ### Hyperdrive Configuration
#
# TODO: Explain the model you are using and the reason for chosing the different hyperparameters, termination policy and config settings.
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1611447472604}
from azureml.train.hyperdrive.policy import BanditPolicy,MedianStoppingPolicy
from azureml.train.hyperdrive.run import PrimaryMetricGoal
from azureml.train.hyperdrive.sampling import RandomParameterSampling
from azureml.train.hyperdrive.runconfig import HyperDriveConfig
from azureml.train.hyperdrive.parameter_expressions import uniform, normal,choice
# + gather={"logged": 1611447472674}
#TODO: Create the different params that you will be using during training
param_sampling = RandomParameterSampling({
'--hidden': choice([50,100,200,300]),
'--batch_size': choice([64,128]),
'--epochs': choice([3,5,10]),
'--dropout': choice([0.5,0.8,1])})
# + gather={"logged": 1611447472740}
# TODO: Create an early termination policy. This is not required if you are using Bayesian sampling.
early_termination_policy = MedianStoppingPolicy()
hd_config = HyperDriveConfig(estimator=est,
hyperparameter_sampling=param_sampling,
policy=early_termination_policy,
primary_metric_name='Accuracy',
primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
max_total_runs=16,
max_concurrent_runs=10)
# -
# param_sampling = RandomParameterSampling({
# '--hidden': choice([50,100,200,300]),
# '--batch_size': choice([64,128]),
# '--epochs': choice([5,10,50]),
# '--dropout': choice([0.5,0.8,1])})
# + gather={"logged": 1611447477640}
hyperdrive_run = exp_with_hyper.submit(hd_config)
# + [markdown] gather={"logged": 1598544898497} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# ## Run Details
#
# OPTIONAL: Write about the different models trained and their performance. Why do you think some models did better than others?
#
# TODO: In the cell below, use the `RunDetails` widget to show the different experiments.
# + gather={"logged": 1611448388050} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
RunDetails(hyperdrive_run).show()
hyperdrive_run.wait_for_completion(show_output=True)
# -
# ## Best Model
#
# TODO: In the cell below, get the best model from the hyperdrive experiments and display all the properties of the model.
# + gather={"logged": 1611448389331}
assert(hyperdrive_run.get_status() == "Completed")
best_run = hyperdrive_run.get_best_run_by_primary_metric()
print(best_run)
# -
print('========================')
print(best_run.get_details()['runDefinition']['arguments'])
print(best_run.get_file_names())
best_run_metrics = best_run.get_metrics()
print('Best accuracy: {}'.format(best_run_metrics['Accuracy']))
print('========================')
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1611448389974}
model = best_run.register_model(model_name='mnist_model.hdf5',model_path='./outputs/mnist_model.hdf5')
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1611449428975}
model.download()
# + [markdown] nteract={"transient": {"deleting": false}}
# ## Best Model Evaluation
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1611450890237}
# load and evaluate a saved model
from tensorflow.keras.models import load_model
# load model
model = load_model('./mnist_model.hdf5')
# summarize model.
model.summary()
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1611451140037}
score = model.evaluate(X_val, y_val, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], score[1]*100))
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1611451366017}
y_pred = model.predict(X_val)
y_pred = np.argmax(y_pred, axis =1)
y_val = np.argmax(y_val, axis =1)
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1611451485419}
from sklearn.metrics import confusion_matrix, plot_confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_recall_fscore_support
import matplotlib.pyplot as plt
cm = confusion_matrix(y_val, y_pred)
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(cm)
plt.title('Confusion matrix of the classifier')
fig.colorbar(cax)
plt.xlabel('Predicted')
plt.ylabel('True')
plt.show()
print("Accuracy :" + str(round(accuracy_score(y_val, y_pred),4)))
p, r, f, s = precision_recall_fscore_support(y_val, y_pred, average='macro')
print("Precision | Recall | F1 ")
print(round(p,4),round(r,4),round(f,4))
# + [markdown] nteract={"transient": {"deleting": false}}
#
| notebooks/hyperparameter_tuning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py37
# language: python
# name: py37
# ---
# +
import sys
sys.path.append('../..')
import torchdyn; from torchdyn.models import *; from torchdyn.datasets import *
from pytorch_lightning.loggers import WandbLogger
# +
data = ToyDataset()
n_samples = 1 << 16
n_gaussians = 7
X, yn = data.generate(n_samples // n_gaussians, 'gaussians_spiral', n_gaussians=32, n_gaussians_per_loop=10, std_gaussians_start=0.5, std_gaussians_end=0.01,
dim=2, radius_start=20, radius_end=0.1)
X = (X - X.mean())/X.std()
import matplotlib.pyplot as plt
plt.figure(figsize=(3, 3))
plt.scatter(X[:,0], X[:,1], c='orange', alpha=0.3, s=4)
# -
import torch
import torch.utils.data as data
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
X_train = torch.Tensor(X).to(device)
y_train = torch.LongTensor(yn).long().to(device)
train = data.TensorDataset(X_train, y_train)
trainloader = data.DataLoader(train, batch_size=512, shuffle=True)
# ## Model
# +
import torch
import torch.nn as nn
def autograd_trace(x_out, x_in, **kwargs):
"""Standard brute-force means of obtaining trace of the Jacobian, O(d) calls to autograd"""
trJ = 0.
for i in range(x_in.shape[1]):
trJ += torch.autograd.grad(x_out[:, i].sum(), x_in, allow_unused=False, create_graph=True)[0][:, i]
return trJ
def hutch_trace(x_out, x_in, noise=None, **kwargs):
"""Hutchinson's trace Jacobian estimator, O(1) call to autograd"""
jvp = torch.autograd.grad(x_out, x_in, noise, create_graph=True)[0]
trJ = torch.einsum('bi,bi->b', jvp, noise)
return trJ
REQUIRES_NOISE = [hutch_trace]
class CNF(nn.Module):
def __init__(self, net, trace_estimator=None, noise_dist=None, order=1):
super().__init__()
self.net, self.order = net, order # order at the CNF level will be merged with DEFunc
self.trace_estimator = trace_estimator if trace_estimator is not None else autograd_trace;
self.noise_dist, self.noise = noise_dist, None
if self.trace_estimator in REQUIRES_NOISE:
assert self.noise_dist is not None, 'This type of trace estimator requires specification of a noise distribution'
def forward(self, x):
with torch.set_grad_enabled(True):
x_in = torch.autograd.Variable(x[:,1:], requires_grad=True).to(x) # first dimension reserved to divergence propagation
# the neural network will handle the data-dynamics here
if self.order > 1: x_out = self.higher_order(x_in)
else: x_out = self.net(x_in)
trJ = self.trace_estimator(x_out, x_in, noise=self.noise)
return torch.cat([-trJ[:, None], x_out], 1) + 0*x # `+ 0*x` has the only purpose of connecting x[:, 0] to autograd graph
def higher_order(self, x):
# NOTE: higher-order in CNF is handled at the CNF level, to refactor
x_new = []
size_order = x.size(1) // self.order
for i in range(1, self.order):
x_new += [x[:, size_order*i:size_order*(i+1)]]
x_new += [self.net(x)]
return torch.cat(x_new, 1).to(x)
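# The Hutchinson estimator above trades the $O(d)$ exact trace for a single stochastic product. The underlying identity $E[\epsilon^\top J \epsilon] = \mathrm{tr}(J)$ (for noise with identity covariance) can be sketched with plain numpy, independently of the classes above (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.normal(size=(d, d))        # stand-in Jacobian of the linear map x -> A x

# Hutchinson identity: E[eps^T A eps] = tr(A) when E[eps eps^T] = I
eps = rng.normal(size=(20000, d))
estimate = np.mean(np.einsum("bi,ij,bj->b", eps, A, eps))
print(estimate, np.trace(A))       # the estimate is close to the exact trace
```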
# +
from torch.distributions import MultivariateNormal, Uniform, TransformedDistribution, SigmoidTransform, Categorical
prior = MultivariateNormal(torch.zeros(2).to(device), 2*torch.eye(2).to(device))
ndes = []
for i in range(1):
f = nn.Sequential(
nn.Linear(2, 64),
nn.Softplus(),
nn.Linear(64, 64),
nn.Softplus(),
nn.Linear(64, 64),
nn.Softplus(),
nn.Linear(64, 64),
nn.Softplus(),
nn.Linear(64, 64),
nn.Softplus(),
nn.Linear(64, 64),
nn.Softplus(),
nn.Linear(64, 64),
nn.Softplus(),
nn.Linear(64, 2),
)
cnf = CNF(f, order=1) # for CNFs, order is specified here instead of `NeuralDE`
nde = NeuralDE(cnf, solver='dopri5', s_span=torch.linspace(0, 1, 2), atol=1e-6, rtol=1e-6, sensitivity='adjoint')
ndes.append(nde)
model = nn.Sequential(Augmenter(augment_idx=1, augment_dims=1),
*ndes).to(device)
# -
# ## Learner
def cnf_density(model, npts=100, memory= 100):
with torch.no_grad():
side = np.linspace(-2., 2., npts)
xx, yy = np.meshgrid(side, side)
x = np.hstack([xx.reshape(-1, 1), yy.reshape(-1, 1)])
x = torch.from_numpy(x).type(torch.float32).to(device)
z, delta_logp = [], []
inds = torch.arange(0, x.shape[0]).to(torch.int64)
for ii in torch.split(inds, int(memory**2)):
z_full = model(x[ii]).cpu().detach()
z_, delta_logp_ = z_full[:, 1:3], z_full[:, 0]
z.append(z_)
delta_logp.append(delta_logp_)
z = torch.cat(z, 0)
delta_logp = torch.cat(delta_logp, 0)
logpz = prior.log_prob(z.cuda()).cpu() # logp(z)
logpx = logpz - delta_logp
px = np.exp(logpx.cpu().numpy()).reshape(npts, npts)
plt.figure(figsize=(32, 32))
plt.imshow(px);
class Learner(pl.LightningModule):
def __init__(self, model:nn.Module):
super().__init__()
self.model = model
self.lr = 3e-3
def forward(self, x):
return self.model(x)
def training_step(self, batch, batch_idx):
# plot logging
if batch_idx % 50 == 0:
plot_samples()
self.logger.experiment.log({"chart": plt})
plt.close()
nde.nfe = 0
x, _ = batch
xtrJ = self.model(x)
logprob = prior.log_prob(xtrJ[:,1:3]).to(x) - xtrJ[:,0]
loss = -torch.mean(logprob)
nfe = nde.nfe
nde.nfe = 0
metrics = {'loss': loss, 'nfe':nfe}
self.logger.experiment.log(metrics)
return {'loss': loss}
def configure_optimizers(self):
return torch.optim.AdamW(self.model.parameters(), lr=self.lr, weight_decay=1e-5)
def train_dataloader(self):
self.loader_l = len(trainloader)
return trainloader
logger = WandbLogger(project='torchdyn-toy_cnf-bench-spiral')
learn = Learner(model)
trainer = pl.Trainer(min_steps=45000, max_steps=45000, gpus=1, logger=logger)
trainer.fit(learn);
sample = prior.sample(torch.Size([1<<15]))
# integrating from 1 to 0, 8 steps of rk4
model[1].s_span = torch.linspace(1, 0, 2)
new_x = model(sample).cpu().detach()
cnf_density(model)
# +
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.scatter(new_x[:,1], new_x[:,2], s=0.3, c='blue')
plt.subplot(122)
plt.scatter(X[:,0], X[:,1], s=0.3, c='red')
# -
def plot_samples():
sample = prior.sample(torch.Size([1 << 13]))
# integrating from 1 to 0
model[1].s_span = torch.linspace(1, 0, 2)
new_x = model(sample).cpu().detach()
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.scatter(new_x[:,1], new_x[:,2], s=2.3, alpha=0.2, linewidths=0.1, c='blue', edgecolors='black')
plt.xlim(-2, 2)
plt.ylim(-2, 2)
plt.subplot(122)
plt.scatter(X[:,0], X[:,1], s=5.3, alpha=0.2, c='red', linewidths=0.1, edgecolors='black')
plt.xlim(-2, 2)
plt.ylim(-2, 2)
model[1].s_span = torch.linspace(0, 1, 2)
def cnf_density(model):
with torch.no_grad():
npts = 200
side = np.linspace(-2., 2., npts)
xx, yy = np.meshgrid(side, side)
memory= 100
x = np.hstack([xx.reshape(-1, 1), yy.reshape(-1, 1)])
x = torch.from_numpy(x).type(torch.float32).to(device)
z, delta_logp = [], []
inds = torch.arange(0, x.shape[0]).to(torch.int64)
for ii in torch.split(inds, int(memory**2)):
z_full = model(x[ii]).cpu().detach()
z_, delta_logp_ = z_full[:, 1:], z_full[:, 0]
z.append(z_)
delta_logp.append(delta_logp_)
z = torch.cat(z, 0)
delta_logp = torch.cat(delta_logp, 0)
logpz = prior.log_prob(z.cuda()).cpu() # logp(z)
logpx = logpz - delta_logp
px = np.exp(logpx.cpu().numpy()).reshape(npts, npts)
plt.imshow(px, cmap='inferno', vmax=px.mean());
a = cnf_density(model)
| test/benchmark/toy_cnf_l.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="Z5nK47uIIRcz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="edf37f5b-a87d-447c-fb9c-a7eef02c5c7b"
# !pip install --upgrade pactools
# !pip install hyperas
# !pip install hyperopt
# !pip install -U -q PyDrive
# + id="Wg49xvKcIegy" colab_type="code" colab={}
# Install the PyDrive wrapper & import libraries.
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# Copy/download the file
fid = drive.ListFile({'q':"title='hyperas_resnet.ipynb'"}).GetList()[0]['id']
f = drive.CreateFile({'id': fid})
f.GetContentFile('hyperas_resnet.ipynb')
# + id="gifQAFT4I-C0" colab_type="code" colab={}
from hyperopt import Trials, STATUS_OK, tpe
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib.ticker import MaxNLocator
# TENSORFLOW
import tensorflow as tf
from tensorflow import keras
from keras import datasets, layers, models
#KERAS LIBRARIES
from keras.preprocessing.text import Tokenizer
from keras.models import Sequential,Model
from keras.layers import Dense, Dropout , Flatten,BatchNormalization,Conv2D,MaxPooling2D, Activation,LSTM,Embedding,Input,GlobalAveragePooling2D
from keras.regularizers import l1, l2, l1_l2
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras import backend
from keras.utils import np_utils
from keras.utils import to_categorical
#CROSS VALIDATION
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import GridSearchCV
from pactools.grid_search import GridSearchCVProgressBar
#WRAPPERS NEEDED FOR GRIDSEARCH
from keras.wrappers.scikit_learn import KerasClassifier
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import train_test_split
from hyperas import optim
from hyperas.distributions import choice, uniform
# + id="gPeX6jnfJCgN" colab_type="code" colab={}
def data():
train1 = np.load('/content/drive/My Drive/NumpyArrayCovidx/train.npy',allow_pickle=True)
train_labels1 = np.load('/content/drive/My Drive/NumpyArrayCovidx/train_labels.npy',allow_pickle=True)
train2,test1, train_labels2,test_labels1 = train_test_split(train1, train_labels1, test_size=0.2,random_state=42)
    x_train = train2/255.0  # normalize pixel values to [0, 1]
    y_train = pd.get_dummies(train_labels2)
    x_test = test1/255.0
y_test = pd.get_dummies(test_labels1)
return x_train,y_train,x_test,y_test
# + id="cNB1YwboSEFj" colab_type="code" colab={}
IMG_SHAPE1=(64,64,3)
resnet=keras.applications.resnet_v2.ResNet152V2(input_shape=IMG_SHAPE1,
include_top=False,
weights='imagenet')
# + id="JJuWU_8RJNKv" colab_type="code" colab={}
def create_model_dense_resnet(x_train,y_train,x_test,y_test):
    IMG_SHAPE1=(64,64,3) # we will adapt this
resnet=keras.applications.resnet_v2.ResNet152V2(input_shape=IMG_SHAPE1,
include_top=False,
weights='imagenet')
tempre=resnet
froz={{choice([3,6,10,15,20,25,30,40,60])}}
for layer in tempre.layers[:(-1)*froz]:
layer.trainable = False
model = tf.keras.Sequential()
model.add(tempre)
if ({{choice([0,1])}}):
model.add(keras.layers.Flatten())
else:
model.add(keras.layers.GlobalAveragePooling2D())
dense_layer_size={{choice([0,1,2,3])}}
if (dense_layer_size==3):
model.add(keras.layers.Dense(128, activation='relu'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(64, activation='relu'))
model.add(keras.layers.Dropout({{choice([0.0,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65])}}))
model.add(keras.layers.Dense(32, activation='relu'))
model.add(keras.layers.Dropout({{choice([0.0,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65])}}))
elif (dense_layer_size==2):
model.add(keras.layers.Dense(64, activation='relu'))
model.add(keras.layers.Dropout({{choice([0.0,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65])}}))
model.add(keras.layers.Dense(32, activation='relu'))
model.add(keras.layers.Dropout({{choice([0.0,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65])}}))
elif (dense_layer_size==1):
model.add(keras.layers.Dense(32, activation='relu'))
model.add(keras.layers.Dropout({{choice([0.0,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65])}}))
else:
layers=[]
for i in range(len(layers)):
model.add(keras.layers.Dense(layers[i], activation='relu'))
model.add(keras.layers.Dropout(0.1))
model.add(keras.layers.Dense(3,activation="softmax"))
print('froz=',froz,'dense_layers=',dense_layer_size)
model.compile(loss='categorical_crossentropy', optimizer="adam", metrics=["accuracy"])
result = model.fit(x_train, y_train,
batch_size=1000,
epochs=5,
verbose=2,
validation_data=(x_test, y_test))
validation_acc = np.amax(result.history['val_accuracy'])
print('Best validation acc of epoch:', validation_acc)
return {'loss': -validation_acc, 'status': STATUS_OK, 'model': model}
# + id="hg08Tyb5L4Q5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="9eb1990f-ab57-4e5a-cbb7-4e62570447aa"
best_run, best_model = optim.minimize(model=create_model_dense_resnet,
data=data,
algo=tpe.suggest,
max_evals=300,
notebook_name='hyperas_resnet',
trials=Trials())
X_train, Y_train, X_test, Y_test = data()
print("Evaluation of best performing model:")
print(best_model.evaluate(X_test, Y_test))
print("Best performing model chosen hyper-parameters:")
print(best_run)
| Transfer_Learning/Hyperas/hyperas_resnet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# # Quantum Phase Estimation
# Given a Hamiltonian $H$ and a prepared eigenstate $\mid \psi \rangle$, quantum phase estimation lets us determine the phase $\alpha$ of the eigenvalue $\lambda = e^{-i\alpha}$:
#
# $$
# H \mid \psi \rangle = e^{-i\alpha} \mid \psi \rangle
# $$
# ## The Quantum Phase Estimation Circuit
# The quantum phase estimation algorithm consists of three parts:
#
# 1. Prepare the quantum state that serves as the eigenvector
# 2. Transfer the eigenvalue into a quantum state (phase kickback)
# 3. Convert that quantum state into a bit string (inverse quantum Fourier transform)
#
#
# ```
# step2 step3
#
# |0> ----H----------- -*-------iQFT---
# |
# |0> ----H--------*-- -|-------iQFT---
# |0> ----H-----*--|-- -|-------iQFT---
# |0> ----H--*--|--|-- -|-------iQFT---
# | | | |
# | | | |
# |ψ> -------U1-U2-U4- -U2n------------
# step1
# ```
# ## Phase Kickback
# Phase kickback exploits the eigenvalue equation: by making the unitary circuit that implements the Hamiltonian H controlled, the eigenvalue $e^{2\pi i \phi}$ associated with the eigenvector $\mid \psi \rangle$ is applied only to the $\mid 1 \rangle$ component of the control qubit.
#
# The eigenvalue equation for the corresponding Hamiltonian is
#
# $$
# U\mid \psi \rangle = e^{2\pi i \phi} \mid \psi \rangle
# $$
#
# Since the state $\mid \psi \rangle$ is already prepared, implementing the Hamiltonian as a unitary circuit U lets us transfer its phase.
#
# First, use a Hadamard gate to put the readout qubit into a superposition:
#
# $$
# \mid 0 \rangle \mid \psi \rangle \rightarrow \frac{\mid 0\rangle + \mid 1 \rangle}{\sqrt{2}} \mid \psi \rangle = \frac{\mid 0\rangle \mid \psi \rangle + \mid 1 \rangle \mid \psi \rangle}{\sqrt{2}}
# $$
#
# Next, apply the controlled unitary (the circuit form of the Hamiltonian); since it is controlled, the unitary acts only on the $\mid 1 \rangle$ branch:
#
# $$
# \frac{\mid 0\rangle \mid \psi \rangle + \mid 1 \rangle U \mid \psi \rangle}{\sqrt{2}}
# $$
#
# Since $\mid \psi \rangle$ is an eigenvector, this becomes
#
# $$
# \frac{\mid 0\rangle \mid \psi \rangle + \mid 1 \rangle e^{2\pi i \phi} \mid \psi \rangle}{\sqrt{2}} = \frac{\mid 0\rangle \mid \psi \rangle + e^{2\pi i \phi} \mid 1 \rangle \mid \psi \rangle}{\sqrt{2}} = \frac{\mid 0\rangle + e^{2\pi i \phi} \mid 1 \rangle}{\sqrt{2}} \mid \psi \rangle
# $$
#
# The eigenvalue has thus been applied as a phase. What remains is to build the state
#
# $$
# \frac{1}{\sqrt{2^n}}\sum_{k=0}^{2^n-1} e^{i2\pi k\phi}\mid k \rangle
# $$
#
# where $k$ indexes the digit we want to extract. Since $k$ corresponds to the rotation angle, applying the eigenvalue $k$ times achieves this naturally: we simply apply the same unitary $k$ times as $U^k$, i.e. perform the controlled-unitary operation $k$ times:
#
# $$
# \frac{\mid 0\rangle + U^k \mid 1 \rangle}{\sqrt{2}} \mid \psi \rangle = \frac{\mid 0\rangle + e^{2\pi i k \phi} \mid 1 \rangle}{\sqrt{2}} \mid \psi \rangle
# $$
#
# Applying the controlled unitary $k$ times for each corresponding digit $k$ completes this step.
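# As a classical sanity check, the kickback can be sketched with plain numpy matrices (an illustrative standalone example, not part of the blueqat tutorial): for $U = Z$ and eigenvector $\mid \psi \rangle = \mid 1 \rangle$ (eigenvalue $-1$, i.e. $\phi = 1/2$), the control qubit ends in $\mid 1 \rangle$ after a final Hadamard.

```python
import numpy as np

def controlled(U):
    """|0><0| (x) I + |1><1| (x) U, with the control on the first qubit."""
    P0 = np.array([[1.0, 0.0], [0.0, 0.0]])
    P1 = np.array([[0.0, 0.0], [0.0, 1.0]])
    return np.kron(P0, np.eye(2)) + np.kron(P1, U)

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
Z = np.diag([1.0, -1.0])
psi = np.array([0.0, 1.0])                       # eigenvector of Z, eigenvalue -1

state = np.kron(H @ np.array([1.0, 0.0]), psi)   # (|0>+|1>)/sqrt(2) (x) |psi>
state = controlled(Z) @ state                    # phase kickback
state = np.kron(H, np.eye(2)) @ state            # Hadamard on the control qubit

# The control qubit is now |1> with certainty: state = |1>|1>
print(np.round(np.abs(state) ** 2, 3))
```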
# ## Quantum Fourier Transform
# The quantum Fourier transform maps a string of 0/1 bits to a quantum state whose phases encode that string. Its inverse, the inverse quantum Fourier transform, lets us write the phase transferred by the kickback above back out as a bit string.
#
# $$
# QFT:\mid x \rangle \mapsto \frac{1}{\sqrt{N}}\sum_{k=0}^{N-1} \omega_n^{xk}\mid k\rangle
# $$
#
# With $\omega_n = e^{\frac{2\pi i}{N}}$, the transform matrix is
#
# $$
# {F_N=
# \frac{1}{\sqrt{N}}
# \left[
# \begin{array}{rrrr}
# 1 & 1 & 1 & \cdots &1\\
# 1 & \omega_n&\omega_n^2&\cdots&\omega_n^{N-1}\\
# 1 & \omega_n^2&\omega_n^4&\cdots&\omega_n^{2(N-1)}\\
# 1 & \omega_n^3&\omega_n^6&\cdots&\omega_n^{3(N-1)}\\
# \vdots&\vdots&\vdots&&\vdots\\
# 1 & \omega_n^{N-1}&\omega_n^{2(N-1)}&\cdots&\omega_n^{(N-1)(N-1)}
# \end{array}
# \right]
# }
# $$
#
# Feeding in the bits $x_1$ through $x_n$ then produces the corresponding phases as a quantum state:
#
# $$
# QFT(\mid x_1,x_2,…,x_n \rangle) = \frac{1}{\sqrt{N}}(\mid 0 \rangle + e^{2\pi i [0.x_n]} \mid 1 \rangle) \otimes … \otimes (\mid 0 \rangle + e^{2\pi i [0.x_1x_2…x_n]} \mid 1 \rangle)
# $$
#
# Each output qubit encodes the phase to its own precision, but since every coefficient has absolute value 1, a direct measurement would return 0 and 1 with exactly 50% probability each.
#
# $$[0.x_1x_2…] = \frac{x_1}{2}+\frac{x_2}{2^2}+…$$
#
# The phase is extracted in this binary-fraction form.
#
# In quantum phase estimation, to find the phase $\alpha$ of the eigenvalue $\lambda = e^{-i\alpha}$ of a prepared state vector $\mid \psi \rangle$, we map the state into the quantum-Fourier-transform form above and then apply the inverse quantum Fourier transform, extracting the phase as a binary fraction to any desired precision.
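# The matrix $F_N$ above is easy to construct and check numerically; a minimal numpy sketch (illustrative only):

```python
import numpy as np

def qft_matrix(N):
    """The N x N quantum Fourier transform matrix F_N with omega = exp(2*pi*i/N)."""
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(4)
print(np.allclose(F @ F.conj().T, np.eye(4)))   # F_N is unitary
```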
# ## Estimating the Phase of the Z Gate
# Let's work through an example. As the Hamiltonian, take the Z gate:
#
# $$
# Z = \begin{bmatrix}
# 1&0\\
# 0&-1
# \end{bmatrix}
# $$
#
# Checking the answer by hand first, the eigenvalues come from computing
#
# $$
# det\begin{bmatrix}
# 1-\lambda&0\\
# 0&-1-\lambda
# \end{bmatrix}
# $$
#
# which gives $\lambda = 1,-1$, with eigenvectors
#
# $$
# \begin{bmatrix}
# 1\\
# 0
# \end{bmatrix},
# \begin{bmatrix}
# 0\\
# 1
# \end{bmatrix}
# $$
#
# This is what we expect. Let's verify it right away.
# ## Installing blueqat
# Install via pip:
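# This hand calculation can also be double-checked classically with numpy before touching any quantum circuit:

```python
import numpy as np

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
vals, vecs = np.linalg.eig(Z)
print(vals)   # eigenvalues 1 and -1
print(vecs)   # columns are the eigenvectors [1, 0] and [0, 1]
```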
# !pip install blueqat
# ## Circuit Overview
# Here is the overall circuit for this example:
#
# ```
# |0> ----H--*--iQFT--M
# |
# |0> -------Z--------
# ```
#
# First prepare two qubits, labeled 0 and 1, both starting in state 0.
#
# 1. This time, the eigenstate is prepared from the very start.
# 2. Next, put qubit 0 into superposition and apply the controlled Hamiltonian as a CZ gate, kicking the phase back onto qubit 0.
# 3. Finally, apply the inverse quantum Fourier transform to read the phase out as bits, measure, and we are done.
#
# ## Preparing the Quantum State
# Since we use |0> as the eigenvector, nothing needs to be done here.
#
# ## Phase Kickback
# Next, kick the phase back from the quantum state.
#
# ```
# |0> --H--*--iQFT--M
# |
# |0> -----Z--------
# ```
#
# The quantum state is ready. Now put qubit 0 into superposition and, treating the prepared state as the eigenvector, transfer the eigenvalue information onto qubit 0.
# +
from blueqat import Circuit
Circuit().h[0].cz[0,1].h[0].m[:].run(shots=100)
# -
# The measurement gives
#
# $$
# \phi = 0.0
# $$
#
# so the eigenvalue we seek is
#
# $$
# e^{2\pi i \phi} = e^{2\pi i *0} = e^0 = 1
# $$
#
# as expected.
# ## Setting the Eigenstate to |1>
# Now set the initial quantum state to |1>:
#
# ```
# |0> --H--*--iQFT--M
# |
# |0> --X--Z--------
# ```
Circuit().x[1].h[0].cz[0,1].h[0].m[:].run(shots=100)
# This time the measurement gives 11, so
#
# $$
# \phi = 1/2 = 0.5
# $$
#
# $$
# e^{2\pi i *0.5} = -1
# $$
#
# again as expected.
# ## X Gate
# The X gate is
#
# $$
# X =
# \begin{bmatrix}
# 0&1\\
# 1&0
# \end{bmatrix}
# $$
#
# For this gate, first consider the eigenvector
#
# $$
# \mid \psi \rangle =
# \begin{bmatrix}
# 1\\
# 1
# \end{bmatrix}
# $$
#
# With this eigenvector, the circuit is
#
# ```
# |0> --H--*--H--M
# |
# |0> --H--X-----
# ```
#
# Let's run it:
Circuit(2).h[:].cx[0,1].h[0].m[0].run(shots=100)
# Here $\phi = 0$, giving
#
# $$
# \lambda = e^0=1
# $$
#
# Next, consider the eigenvector
#
# $$
# \mid \psi \rangle =
# \begin{bmatrix}
# 1\\
# -1
# \end{bmatrix}
# $$
#
# which we prepare as follows:
#
# ```
# |0> --H---*--H--M
# |
# |0> --HZ--X-----
# ```
#
# The H and Z gates prepare this eigenstate:
Circuit(2).h[:].z[1].cx[0,1].h[0].m[0].run(shots=100)
# This gives $\phi = 0.5$, so
#
# $$
# \lambda = e^{2\pi i *0.5}=-1
# $$
#
# as expected.
| tutorial-ja/113_pea_ja.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Step 1: Data Preprocessing
# +
#importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import warnings
# ignore all warnings
warnings.filterwarnings('ignore')
# -
#importing our cancer dataset
dataset = pd.read_csv('dataset.csv')
X = dataset.iloc[:, 1:31].values
Y = dataset.iloc[:, 31].values
print("Cancer data set dimensions : {}".format(dataset.shape))
# ## Missing or Null Data points
dataset.isnull().sum()
dataset.isna().sum()
#Encoding categorical data values
from sklearn.preprocessing import LabelEncoder
labelencoder_Y = LabelEncoder()
Y = labelencoder_Y.fit_transform(Y)
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.25, random_state = 0)
# # Step 2: Feature Scaling
#Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# # Step 3: Fitting supervised machine learning algorithm to the Training set
#Using KNeighborsClassifier Method of neighbors class to use Nearest Neighbor algorithm
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)
classifier.fit(X_train, Y_train)
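# A quick aside: with `metric = 'minkowski'` and `p = 2`, the distance used above is ordinary Euclidean distance. A tiny illustrative check:

```python
import numpy as np

a = np.array([0.0, 0.0])
b = np.array([3.0, 4.0])
minkowski_p2 = (np.abs(a - b) ** 2).sum() ** 0.5   # Minkowski distance with p = 2
print(minkowski_p2)                                # → 5.0, the Euclidean distance
```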
# # Step 4: Predicting the Result
predit_classifier_KNN = classifier.predict(X_test)
#print result
print("Prediction results of K Nearest Neighbor algorithm : \n{}".format(predit_classifier_KNN))
# # Step 5: Making the Confusion Matrix & Classification Report
# +
#import classification report & confusion_matrix & Accuracy score
from sklearn.metrics import confusion_matrix,accuracy_score,classification_report
print("Classification Report of K Nearest Neighbor algorithm : \n{}"
.format(classification_report(Y_test, predit_classifier_KNN)))
print("Confusion Matrix of K Nearest Neighbor algorithm : \n{}".format(confusion_matrix(Y_test, predit_classifier_KNN)))
print('\nAccuracy is:', accuracy_score(predit_classifier_KNN, Y_test))
| Semester V/Machine Learning (ML) (4659302)/13/ML_PR_13.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# The tidal signal $\sin(2\pi t / T)$ is implemented: a pure sine wave with amplitude 1 and period $T = 12\times 3600$ s, i.e. 12 hours.
#
# For Riemann and Simple case, it's implemented on the left boundary, for the ocean forcing case, it is forced inside the domain.
#
# (This notebook is run for the Simple case. Only the title of the graph needs to be changed in the code for Riemann and ocean forcing case.)
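# For reference, a minimal numpy sketch of this forcing signal (amplitude 1, period 12 h):

```python
import numpy as np

T = 12 * 3600.0                   # tidal period in seconds
t = np.arange(0.0, 5 * T, 60.0)   # five periods at 60 s resolution
eta = np.sin(2 * np.pi * t / T)   # tidal surface elevation
print(eta[int(T / 4 / 60)])       # peaks at t = T/4
```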
# %matplotlib inline
from pylab import *
from IPython.display import Image
import clawpack.pyclaw.gauges as gauges
# +
figure(400, figsize=(13,6))
clf()
colors = ['k','b','r']
outdir = '_output'
for k,gaugeno in enumerate([1102,1187]):
gauge = gauges.GaugeSolution(gaugeno, outdir)
t = gauge.t / 3600. # convert to hours
q = gauge.q
eta = q[3,:]
plot(t, eta, colors[k], label='Gauge %s' % gaugeno)
# determine amplification and time shift:
m2 = int(floor(0.75*len(eta)))
eta2 = eta[m2:] # last part of eta signal
etamax2 = eta2.max()
etamin2 = eta2.min()
t2 = t[m2:]
jtmax = argmax(eta2)
tshift = (t2[jtmax] - 39.)*3600.
print('At gauge %i, etamin2 = %.3f, etamax2 = %.3f at tshift = %.1f s' \
% (gaugeno,etamin2,etamax2,tshift))
tperiod = 12
eta = 1.*sin(2*pi*t/tperiod)
plot(t, eta, 'k--', label='Sine')
legend(loc='upper right')
xlabel('hours')
ylabel('Surface relative to MTL (m)')
grid(True)
title('Simple sine condition and resulting GeoClaw gauge results');
xticks(arange(0,t[-1]+0.1,12))
xlim(0,60)
if 0:
fname = 'GaugeComparison.png'
savefig(fname, bbox_inches='tight')
print('Created %s' % fname)
# -
# ## Notes:
#
# - Given an observed tide at Westport that we want to match, we can shift it by about 3220 seconds and increase it by a factor 1.04 in order to obtain the signal to use at the left boundary. This is illustrated in the example `../kingtide2015`.
#
| GraysHarborBC/sine/GraysSine.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Week 4 Lab Session (ctd..): Advertising Data
#
# * In this lab session we are going to look at how to answer questions involving the use of simple and multiple linear regression in Python.
#
# * The questions are based on the book "An introduction to Statistical Learning" by James et al.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import scipy.stats as stats
import statsmodels.api as sm
import statsmodels.formula.api as smf
import seaborn as sns
data_advert = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
data_advert.head()
data_advert.describe()
# +
plt.subplot(131)
plt.scatter(data_advert.TV,data_advert.sales)
plt.xlabel('TV')
plt.ylabel('Sales')
plt.subplot(132)
plt.scatter(data_advert.radio,data_advert.sales)
plt.xlabel('radio')
plt.subplot(133)
plt.scatter(data_advert.newspaper,data_advert.sales)
plt.xlabel('newspaper')
plt.subplots_adjust(top=0.8, bottom=0.08, left=0.0, right=1.3, hspace=5, wspace=0.5)
# -
# ## QUESTION 1: Is there a relationship between advertising sales and budget?
#
# * Test the null hypothesis
# \begin{equation}
# H_0: \beta_1=\ldots=\beta_p=0
# \end{equation}
# versus the alternative
# \begin{equation}
# H_a: \text{at least one $\beta_j$ is nonzero}
# \end{equation}
# * For that compute the F-statistic in Multiple Linear Regression 'sales ~ TV+radio+newspaper' using $\texttt{ols}$ from ${\bf Statsmodels}$
# * If there is no relationship between the response and predictors, the F-statistic takes values close to 1. If $H_a$ is true, than F-statistic is expected to be significantly greater than 1. Check the associated p-values.
results = smf.ols('sales ~ TV+radio+newspaper', data=data_advert).fit()
print(results.summary())
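# The F-statistic printed in the summary is also available directly as `results.fvalue` and `results.f_pvalue`. As an illustration of its definition, here is a plain-numpy sketch on made-up synthetic data (not the advertising set):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = 2 + 1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)  # X[:, 2] has no effect

Xd = np.column_stack([np.ones(n), X])           # design matrix with intercept
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
rss = ((y - Xd @ beta) ** 2).sum()              # residual sum of squares
tss = ((y - y.mean()) ** 2).sum()               # total sum of squares
F = ((tss - rss) / p) / (rss / (n - p - 1))     # F-statistic for H0: all betas zero
print(F)                                        # far above 1, so we reject H0
```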
# ## QUESTION 2: How strong is the relationship?
#
# You should base your discussion on the following quantities:
# * $RSE$ - computed using the $\texttt{scale}$ attribute: $\texttt{np.sqrt(results.scale)}$
# * The percentage error, i.e. $RSE$ divided by the mean of $\texttt{sales}$
# * $R^2$ - computed using the $\texttt{rsquared}$ attribute: $\texttt{results.rsquared}$
# ## QUESTION 3: Which media contribute to sales?
#
# * Examine the p-values associated with each predictor’s t-statistic
# ## QUESTION 4: How large is the effect of each medium on sales?
#
# * Examine 95% confidence intervals associated with each predictor
# * Compare your results with three separate simple linear regressions
# ## QUESTION 5: Is the relationship linear?
# * You can use residual versus fitted value plot to investigate this
# ## QUESTION 6: Is there interaction among the advertising media?
# * Consider model sales ~ TV + radio + TV:radio
# * How much more variability are we able to explain with this model?
results = smf.ols('sales ~ TV + radio + TV:radio', data=data_advert).fit()
print(results.summary())
| Week_4_Advertising_Lab_AA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
df = pd.read_csv("heart failure.csv")
df.head()
df.shape
df.DEATH_EVENT.value_counts()
sns.countplot(x = 'DEATH_EVENT', data=df)
sns.countplot(hue = 'DEATH_EVENT', data=df, x='anaemia')
sns.countplot(hue = 'DEATH_EVENT', data=df, x='diabetes')
sns.countplot(hue = 'DEATH_EVENT', data=df, x='smoking')
sns.countplot(hue = 'DEATH_EVENT', data=df, x='high_blood_pressure')
# +
#plt.figure(figsize=(20,20))
#sns.pairplot(df)
# -
# separate the features (x) from the target (y)
x = df.drop('DEATH_EVENT',axis=1)
y=df['DEATH_EVENT']
x.head()
y.head()
from sklearn.model_selection import train_test_split
xtrain,xtest, ytrain, ytest = train_test_split(x,y,test_size=.35)
xtest.shape
from sklearn.tree import DecisionTreeClassifier
clf=DecisionTreeClassifier()
clf.fit(xtrain,ytrain) #train with default parameters
pred = clf.predict(xtest)
pred
# # Confusion Matrix
from sklearn.metrics import classification_report, accuracy_score,confusion_matrix,roc_curve,plot_roc_curve
cm = confusion_matrix(ytest,pred)
cm
print(classification_report(ytest,pred))
clf.score(xtest,ytest)
accuracy1 = accuracy_score(ytest,pred) # pred is predicted y ; ytest is actual
print('Acc for Decision Tree is',accuracy1)
sns.heatmap(cm, annot=True)
plt.xlabel('Predicted Values')
plt.ylabel('Actual Values')
# # Random Forest
from sklearn.ensemble import RandomForestClassifier
rclf = RandomForestClassifier()
rclf.fit(xtrain,ytrain)
pred2 = rclf.predict(xtest)
pred2
cm2 = confusion_matrix(ytest,pred2)
cm2
rclf.score(xtest,ytest)
# # Hyper Parameter Tuning / Optimization
# # Randomized Search CV
from sklearn.model_selection import RandomizedSearchCV
# +
#Update Parameters
n_estimators = [int(x) for x in np.linspace(start=100, stop=500, num=15)]
n_estimators
# -
np.random.randint(100,500,15)
# +
#Update Parameters
n_estimators = [int(x) for x in np.linspace(start=100, stop=500, num=15)]
max_features = ['auto','sqrt','log2']
max_depth = [int(x) for x in np.linspace(start=10, stop=100, num=15)]
min_samples_split = [2,4,5,6,7,9,10]
min_samples_leaf = [2,3,5,6,7,9,12]
criterion = ['entropy','gini']
grids = {
'n_estimators' : n_estimators,
'max_features' : max_features,
'max_depth' : max_depth,
'min_samples_split' : min_samples_split,
'min_samples_leaf' : min_samples_leaf,
'criterion' : criterion
}
print(grids)
# -
rnf = RandomForestClassifier()
rmcv = RandomizedSearchCV(rnf,grids,n_iter=200,cv=3)
rmcv.fit(xtrain,ytrain)
rmcv.cv_results_
result = pd.DataFrame(rmcv.cv_results_)
result
rmcv.best_score_
rmcv.best_params_
rfc = rmcv.best_estimator_
rfc.score(xtest,ytest)
rfc.score(x,y)
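# The randomized search above samples a fixed number of parameter combinations instead of exhausting the whole grid. A minimal self-contained sketch on synthetic data (the parameter names are real `RandomForestClassifier` parameters, but the ranges are illustrative and much smaller than the grid above so it runs in seconds):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=200, n_features=8, random_state=0)

param_dist = {
    'n_estimators': [10, 25, 50],
    'max_depth': [3, 5, None],
    'min_samples_leaf': [1, 2, 4],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_dist,
    n_iter=5,      # sample 5 random combinations instead of trying all 27
    cv=2,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

# As in the notebook, `best_estimator_` is the refitted model with `best_params_` and can be scored on held-out data.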
# # XGBoost Classifier
# !pip install xgboost
import xgboost
from xgboost import XGBClassifier
xgb = XGBClassifier()
xgb.fit(xtrain,ytrain)
xgb.score(xtest,ytest)
xgb.score(x,y)
rclf.score(x,y)
clf.score(x,y)
| Class-10/Class - 10.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from sklearn.decomposition import PCA # Principal Component Analysis module
from sklearn.cluster import KMeans # KMeans clustering
import matplotlib.pyplot as plt # Python's de facto plotting library
import seaborn as sns # More snazzy plotting library
import pylab as pl
# %matplotlib inline
#TCP Conn Logs
import pandas as pd
import os
os.chdir("/home/rk9cx/HP/")
df = pd.read_csv("10_08_2018_Other.csv", index_col=False, header=None)
df.columns =["ts","uid","src_ip","src_port","resp_ip","resp_port","duration","orig_bytes","resp_bytes","conn_state",
"history","orig_pkts","resp_pkts","tunnel_parents","local"]
cols_num = ["resp_port","duration","orig_bytes","resp_bytes","orig_pkts","resp_pkts"]
df[cols_num] = df[cols_num].apply(pd.to_numeric, errors='coerce')
cols_char = ["src_ip","resp_ip"]
df[cols_char] = df[cols_char].astype(str)
len(df["src_ip"].unique())
df_clean = pd.DataFrame(df.groupby('src_ip').agg({'src_port': 'nunique','resp_port': 'nunique', 'resp_ip': 'nunique',
'duration': 'mean','orig_bytes':'sum',
'resp_bytes':'sum','orig_pkts':'sum',
'resp_pkts':'sum'}))
df_clean.reset_index(inplace=True)
df_clean.shape
df_clean = df_clean.dropna()
df_clean.shape
Y = df_clean.loc[:,df_clean.columns == 'src_ip']
X = df_clean.loc[:, df_clean.columns != 'src_ip']
Nc = range(1, 20)
kmeans = [KMeans(n_clusters=i) for i in Nc]
score = [kmeans[i].fit(X).score(X) for i in range(len(kmeans))]
pl.plot(Nc,score)
pl.xlabel('Number of Clusters')
pl.ylabel('Score')
pl.title('Elbow Curve')
pl.show()
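# Note that `KMeans.score` used in the elbow loop above returns the *negative* inertia, so the curve rises toward zero. A compact sketch of the same idea on synthetic blobs, reading the within-cluster sum of squares (`inertia_`) directly:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Three well-separated synthetic clusters.
Xb, _ = make_blobs(n_samples=300, centers=3, random_state=0)

inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(Xb).inertia_
            for k in range(1, 8)]

# Inertia always drops as k grows; the "elbow" is where the drop flattens
# (around k=3 here, matching the three true clusters).
print([round(v, 1) for v in inertias])
```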
#Scaling
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_scaled = pd.DataFrame(scaler.fit_transform(X), columns=X.columns)
km = KMeans(n_clusters=3).fit(X_scaled)
df_clean['cluster'] = km.labels_
X_scaled['cluster'] = km.labels_
df_clean.groupby('cluster').size()
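# Scaling matters here because KMeans uses Euclidean distance: unscaled byte totals would swamp the port and IP counts. A tiny sketch of what `MinMaxScaler` does, with illustrative values:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

demo = np.array([[1.0,      10.0],
                 [5.0,  500000.0],
                 [9.0, 1000000.0]])   # second column spans five orders of magnitude
scaled = MinMaxScaler().fit_transform(demo)

# Each column is mapped to [0, 1] independently, so both features
# contribute comparably to the distance computation.
print(scaled.min(axis=0), scaled.max(axis=0))
```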
final_df = df_clean
import numpy as np
import matplotlib.pyplot as plt
plt.scatter(final_df["orig_bytes"], final_df["resp_bytes"] , c=final_df["cluster"])
plt.scatter(final_df["orig_pkts"], final_df["resp_pkts"] , c=final_df["cluster"])
#removing outliers from pkts
p = final_df["orig_pkts"].quantile(0.99)
q = final_df["resp_pkts"].quantile(0.99)
r = final_df["orig_bytes"].quantile(0.99)
s = final_df["resp_bytes"].quantile(0.99)
test = final_df.loc[(final_df['orig_pkts'] < p) & (final_df['resp_pkts'] < q) &
(final_df['orig_bytes'] < r) & (final_df['resp_bytes'] < s),]
test.shape
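# The quantile filter above can be illustrated on a toy Series: rows at or beyond the 99th percentile are treated as outliers and dropped before plotting. A minimal sketch:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5, 1000])   # one extreme value
cutoff = s.quantile(0.99)               # interpolated 99th percentile
kept = s[s < cutoff]
print(cutoff, list(kept))               # the 1000 outlier is removed
```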
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('orig_pkts', fontsize = 15)
ax.set_ylabel('resp_pkts', fontsize = 15)
ax.set_title('Clustering', fontsize = 20)
targets = [0,1,2]
colors = ['r', 'g','y']
for target, color in zip(targets,colors):
indicesToKeep = test['cluster'] == target
ax.scatter(test.loc[indicesToKeep, 'orig_pkts']
, test.loc[indicesToKeep, 'resp_pkts']
, c = color
, s = 50)
ax.legend(targets)
ax.grid()
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('orig_bytes', fontsize = 15)
ax.set_ylabel('resp_bytes', fontsize = 15)
ax.set_title('Clustering', fontsize = 20)
targets = [0,1,2]
colors = ['r', 'g','y']
for target, color in zip(targets,colors):
indicesToKeep = test['cluster'] == target
ax.scatter(test.loc[indicesToKeep, 'orig_bytes']
, test.loc[indicesToKeep, 'resp_bytes']
, c = color
, s = 50)
ax.legend(targets)
ax.grid()
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('resp_ip', fontsize = 15)
ax.set_ylabel('resp_port', fontsize = 15)
ax.set_title('Clustering', fontsize = 20)
targets = [0,1,2]
colors = ['r', 'g','y']
for target, color in zip(targets,colors):
indicesToKeep = test['cluster'] == target
ax.scatter(test.loc[indicesToKeep, 'resp_ip']
, test.loc[indicesToKeep, 'resp_port']
, c = color
, s = 50)
ax.legend(targets)
ax.grid()
#plt.scatter(final_df["resp_ip"], final_df["duration"] , c=final_df["cluster"])
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('resp_ip', fontsize = 15)
ax.set_ylabel('duration', fontsize = 15)
ax.set_title('Clustering', fontsize = 20)
targets = [0,1,2]
colors = ['r', 'g','y']
for target, color in zip(targets,colors):
indicesToKeep = test['cluster'] == target
ax.scatter(test.loc[indicesToKeep, 'resp_ip']
, test.loc[indicesToKeep, 'duration']
, c = color
, s = 50)
ax.legend(targets)
ax.grid()
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('resp_port', fontsize = 15)
ax.set_ylabel('src_port', fontsize = 15)
ax.set_title('Clustering', fontsize = 20)
targets = [0,1,2]
colors = ['r', 'g','y']
for target, color in zip(targets,colors):
indicesToKeep = test['cluster'] == target
ax.scatter(test.loc[indicesToKeep, 'resp_port']
, test.loc[indicesToKeep, 'src_port']
, c = color
, s = 50)
ax.legend(targets)
ax.grid()
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('resp_pkts', fontsize = 15)
ax.set_ylabel('resp_bytes', fontsize = 15)
ax.set_title('Clustering', fontsize = 20)
targets = [0,1,2]
colors = ['r', 'g','y']
for target, color in zip(targets,colors):
indicesToKeep = test['cluster'] == target
ax.scatter(test.loc[indicesToKeep, 'resp_pkts']
, test.loc[indicesToKeep, 'resp_bytes']
, c = color
, s = 50)
ax.legend(targets)
ax.grid()
| Conn Logs/Conn Log Clustering - Others.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Project: Traffic Sign Recognition Classifier
#
# **The goals / steps of this project are the following:**
# * Load the data set (see below for links to the project data set)
# * Explore, summarize and visualize the data set
# * Design, train and test a model architecture
# * Use the model to make predictions on new images
# * Analyze the softmax probabilities of the new images
# * Summarize the results with a written report
#
# **Key strategies experimented include:**
# * Experiment with the LeNet layers to make LeNet_2
# * Dropout regularization used to make sure the network doesn't overfit the training data
# * Tune the hyperparameters, such as learning rate, etc.,
# * Improve the data pre-processing with equalization and normalization
# * Augment the training data by rotating, shifting, and scaling images.
# ---
# ## Step 0: Load The Data
# +
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
# !pwd
training_file = '../data/train.p'
validation_file='../data/valid.p'
testing_file = '../data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
# -
# ---
#
# ## Step 1: Dataset Summary & Exploration
#
# The pickled data is a dictionary with 4 key/value pairs:
#
# - `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
# - `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.
# - `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.
# - `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**
#
# Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results.
# ### Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
# +
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
# TODO: Number of training examples
import numpy as np
n_train = len(X_train)
# TODO: Number of validation examples
n_validation = len(X_valid)
# TODO: Number of testing examples.
n_test = len(X_test)
# TODO: What's the shape of an traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))
print("Number of training examples =", n_train)
print("Number of valid examples =", n_validation)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
# -
# ### Include an exploratory visualization of the dataset
# Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
#
# The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.
#
# **NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
# +
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
import random
import pandas as pd
# %matplotlib inline
index = [random.randrange(len(X_train)) for i in range(16)]  # randint's upper bound is inclusive and could index past the end
# print(index)
df = pd.read_csv('signnames.csv')
# print(df.loc[y_train[index]])
methods = [str(y_train[i]) for i in index]
fig, axs = plt.subplots(2,8, figsize=(12,6), subplot_kw={'xticks': [], 'yticks': []})
for i, ax in enumerate(axs.flat):
ax.imshow(X_train[index[i]])
ax.set_title(methods[i])
plt.tight_layout()
plt.show()
# -
# **[Observation] The input images have many issues: viewpoint variation, lighting conditions (saturation, low contrast), motion blur, occlusion, sun glare, physical damage, faded colors, graffiti, stickers, and low resolution. Above all, varying lighting conditions seem to be the biggest issue; some samples are so dark that even a human can hardly recognize them, and this variation is likely to be a problem for the model as well. I deal with this issue in the pre-processing section below.**
fig, ax = plt.subplots(figsize=(12,8))
ax.hist(y_train,bins=100)
# plt.xticks(np.arange(0,43,1), np.array(df['SignName']), rotation='vertical')
plt.xticks(np.arange(0,43,1))
fig.tight_layout()
plt.show()
# **[Observation] The histogram above shows that the classes have very different numbers of samples, which could lead to a bias towards frequent signs. This bias is not necessarily bad if it mirrors what actually happens in the real world, and y_test has almost the same distribution as y_train.**
#
# **However, labels 0, 19, 24, 27, 29, 32, 37, 41, and 42 each have fewer than 250 samples. This lack of samples may make the model less sensitive to the rarer signs, so it seems reasonable to fill those gaps rather than let the model overfit the frequent classes. I deal with this in the pre-processing section later.**
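# The under-represented labels discussed above can be found programmatically with `np.bincount`; a small sketch with toy labels (the counts are illustrative, not the real class distribution):

```python
import numpy as np

y_toy = np.array([0]*30 + [1]*300 + [2]*12 + [3]*150)
counts = np.bincount(y_toy)            # per-label sample count
rare = np.where(counts < 50)[0]        # labels with fewer than 50 samples
print(counts, rare)
```

# The same pattern with a threshold of 250 (or 800, as used later) identifies the classes that need augmentation.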
# ----
#
# ## Step 2: Design and Test a Model Architecture
#
# Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).
#
# The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
#
# With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.
#
# There are various aspects to consider when thinking about this problem:
#
# - Neural network architecture (is the network over or underfitting?)
# - Play around preprocessing techniques (normalization, rgb to grayscale, etc)
# - Number of examples per label (some have more than others).
# - Generate fake data.
#
# Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
# ### Pre-process the Data Set (normalization, grayscale, etc.)
# Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project.
#
# Other pre-processing steps are optional. You can try different techniques to see if it improves performance.
#
# Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
# ### 1. Histogram Equalization
#
# **To improve image quality, I'm using [Histogram equalization](https://en.wikipedia.org/wiki/Histogram_equalization) as the first pre-processing step. Since the different lighting conditions among samples are the primary issue observed above, I convert the color space from RGB to [LAB](https://en.wikipedia.org/wiki/CIELAB_color_space) and equalize only the L (lightness) channel.**
# +
import cv2  # imported here so this cell is self-contained

def histogram_equalization(img):
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
lab_img = cv2.cvtColor(img, cv2.COLOR_RGB2LAB)
lab_img[:,:,0]=clahe.apply(lab_img[:,:,0])
return cv2.cvtColor(lab_img, cv2.COLOR_LAB2RGB)
def hist_equal_data(X,y):
eqHist_X_train = np.zeros_like(X)
eqHist_y = np.copy(y)
for i, _ in enumerate(X):
eqHist_X_train[i] = histogram_equalization(X[i])
print("Transformed to equal hist data...")
return eqHist_X_train, eqHist_y
# +
# index = [random.randint(0,len(X_train)) for i in range(16)]
# print(index)
import cv2
df = pd.read_csv('signnames.csv')
# print(df.loc[y_train[index]])
methods = [str(y_train[i]) for i in index]
fig, axs = plt.subplots(2,8, figsize=(12,6), subplot_kw={'xticks': [], 'yticks': []})
for i, ax in enumerate(axs.flat):
ax.imshow(histogram_equalization(X_train[index[i]]))
ax.set_title(methods[i])
plt.tight_layout()
plt.show()
# -
# **It looks like histogram equalization does the job. Now I apply it to the whole dataset and save the result for the next pre-processing stage.**
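# For reference, the core idea behind histogram equalization — remapping intensities through the cumulative distribution — can be sketched in pure NumPy. (The notebook itself uses OpenCV's CLAHE, a local, clip-limited variant, applied to the LAB L channel; this global single-channel version is just to show the mechanism.)

```python
import numpy as np

rng = np.random.default_rng(0)
channel = rng.integers(100, 140, size=(32, 32)).astype(np.uint8)  # low-contrast channel

hist = np.bincount(channel.ravel(), minlength=256)     # intensity histogram
cdf = hist.cumsum()
cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())      # normalize CDF to [0, 1]
equalized = (cdf[channel] * 255).astype(np.uint8)      # remap through the CDF

# The intensity range is stretched to cover nearly the full [0, 255] scale.
print(channel.min(), channel.max(), '->', equalized.min(), equalized.max())
```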
# +
eqHist_X_train, eqHist_y_train = hist_equal_data(X_train, y_train)
eqHist_X_valid, eqHist_y_valid = hist_equal_data(X_valid, y_valid)
eqHist_X_test, eqHist_y_test = hist_equal_data(X_test, y_test)
train_data = {
'features': eqHist_X_train,
'labels': eqHist_y_train
}
valid_data = {
'features': eqHist_X_valid,
'labels': eqHist_y_valid
}
test_data = {
'features': eqHist_X_test,
'labels': eqHist_y_test
}
with open('../data/eqHist_train.p', 'wb') as f:
pickle.dump(train_data, f, pickle.HIGHEST_PROTOCOL)
with open('../data/eqHist_valid.p', 'wb') as f:
pickle.dump(valid_data, f, pickle.HIGHEST_PROTOCOL)
with open('../data/eqHist_test.p', 'wb') as f:
pickle.dump(test_data, f, pickle.HIGHEST_PROTOCOL)
# -
# ### 2. Data Augmentation
#
# **As mentioned above, the training dataset needs to be augmented. The [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf) says "Samples are randomly perturbed in position ([-2,2] pixels), in scale ([.9,1.1] ratio) and rotation ([-15,+15] degrees)". There's a nice Keras API for this job, ImageDataGenerator, and I'm keeping the translation, rotation, and scaling parameters close to those in the paper. Let's check that it works as expected before applying it to the entire training dataset.**
# **I generate extra samples only for labels with fewer than 800 samples.**
# +
from numpy import expand_dims
from keras.preprocessing.image import ImageDataGenerator
from matplotlib import pyplot
# Preprocessing for data augmentation
sample_size = np.bincount(y_train)
sample_y = np.nonzero(sample_size)[0]  # labels that actually occur; needed in the next line
sample_dist = [i for i,j in zip(sample_y, sample_size) if j < 800]
print("Labels with fewer than 800 samples are ...\n", sample_dist)
print()
first_idx=[]
tmp = -1
for i,label in enumerate(y_train):
if label != tmp:
first_idx.append([label,i])
tmp = label
first_idx = sorted(first_idx, key= lambda x:x[0])
# print(first_idx)
for i,item in enumerate(first_idx):
item.append(sample_size[i])
for i,j,k in first_idx:
if i in sample_dist:
print("label {} starts from index {} with size {}".format(i,j,k))
# -
# **I'm using the Keras ImageDataGenerator API to obtain transformed, augmented images. The parameter settings for the transformation are translation (±0.1 of the image size), rotation (±15 degrees), shear (0.2), and zoom (0.3).**
# +
def augment_image(images,size):
datagen = ImageDataGenerator(
width_shift_range= 0.1, # NOTE unary parameter setting format!
height_shift_range= 0.1,
rotation_range=15,
shear_range= 0.2,
zoom_range=0.3,
fill_mode ='nearest')
# samaples = expand_dims(image, 0)
# it = datagen.flow(samples, batch_size=1)
it = datagen.flow(images, batch_size=size)
images_after = it.next()
images_after = images_after.astype('uint8')
return images_after
def display_augmented_images(image):
    # augment_image expects a batch, so wrap the single image and unwrap the result
    batch = expand_dims(image, 0)
    for i in range(5):
        pyplot.subplot(150 + 1 + i)
        pyplot.imshow(augment_image(batch, 1)[0])
    pyplot.show()
def augment_data(X,y):
aug_X_train = np.copy(X)
aug_y_train = np.copy(y)
for label, start, size in first_idx:
acc_size = size
while acc_size < 800:
#print("augmenting {} {} {}".format(label,start,size))
aug_X = np.copy(augment_image(X[start:start+size], size))
aug_y = np.copy(y[start:start+size])
aug_X_train = np.vstack((aug_X_train, aug_X))
aug_y_train = np.hstack((aug_y_train, aug_y))
acc_size += size
if acc_size != size:
print("label {} now amounts to {} samples ...".format(label, acc_size))
return aug_X_train, aug_y_train
# display_augmented_images(eqHist_X_train[5012])
# -
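# One augmentation step can be illustrated without Keras: a pixel shift is just a roll of the array. (ImageDataGenerator above additionally rotates, zooms, and shears, and fills the revealed border according to `fill_mode`, whereas `np.roll` wraps pixels around the edge — this is only a sketch of the idea.)

```python
import numpy as np

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
shifted = np.roll(img, shift=(1, 2), axis=(0, 1))   # down 1 row, right 2 columns
print(img[0, 0], '->', shifted[1, 2])               # the top-left pixel moved with the shift
```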
# **I augment the samples to make sure every label has more than 800 examples. The number of training samples is now 51,448.**
# +
aug_X_train, aug_y_train = augment_data(eqHist_X_train, eqHist_y_train)
print(aug_X_train.shape, aug_y_train.shape)
data = {
'features': aug_X_train,
'labels': aug_y_train
}
# save
with open('../data/aug_train.p', 'wb') as f:
pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)
# -
# ### 3. Normalization
#
# **The final pre-processing step is normalization using `(pixel - 128)/128`.**
# +
import cv2
train_file = '../data/aug_train.p'
# train_file = '../data/eqHist_train.p'
validate_file = '../data/eqHist_valid.p'
test_file = '../data/eqHist_test.p'
with open(train_file, mode='rb') as f:
train = pickle.load(f)
with open(validate_file, mode='rb') as f:
valid = pickle.load(f)
with open(test_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
# +
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
# Pixel Normalization: scale pixel values to the range 0-1.
# Pixel Centering: scale pixel values to have a zero mean.
# Pixel Standardization: scale pixel values to have a zero mean and unit variance.
X_train = (X_train - 128.)/128
X_valid = (X_valid - 128.)/128
X_test = (X_test - 128.)/128
img = X_train[5012]
color = ('r','g','b')
for i, col in enumerate(color):
plt.hist(img[:,:,i].ravel(),256, color=col)
plt.show()
# hist = cv2.equalizeHist(img[:,:,0])
# -
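# A quick sanity check of the `(pixel - 128)/128` scheme: uint8 values in [0, 255] map to [-1, 0.9921875], roughly zero-centered with unit scale, which is all the network needs.

```python
import numpy as np

pixels = np.array([0, 64, 128, 192, 255], dtype=np.float32)
normalized = (pixels - 128.) / 128.
print(normalized)   # symmetric around 0, bounded by ±1
```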
# ### 4. Shuffling
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
# ### Model Architecture
# **I used the LeNet-5 architecture as guided. LeNet-5 was designed for grayscale input, so now that the input is color images, the network must be modified to handle the extra channel dimension.
# LeNet_2 is a slightly modified version of LeNet that accepts color images and captures more low-level features: I simply made the feature maps deeper, so the conv1 output grows from 28x28x6 to 28x28x12, conv2 from 10x10x16 to 10x10x28, and the flattened output from 400 to 700.**
# +
### Define your architecture here.
### Feel free to use as many code cells as needed.
from tensorflow.contrib.layers import flatten
def LeNet_2(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# TODO: Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x12.
Wc1 = tf.Variable(tf.truncated_normal([5,5,3,12], mean=mu, stddev=sigma))
bc1 = tf.Variable(tf.zeros(12))
conv1 = tf.nn.conv2d(x, Wc1, strides=[1,1,1,1], padding='VALID')
conv1 = tf.nn.bias_add(conv1, bc1)
#conv1 = tf.nn.dropout(conv1, 0.5)
# TODO: Activation.
conv1 = tf.nn.relu(conv1)
# TODO: Pooling. Input = 28x28x12. Output = 14x14x12.
conv1 = tf.nn.max_pool(conv1, ksize=[1,2,2,1], strides=[1,2,2,1], padding='VALID')
# TODO: Layer 2: Convolutional. Output = 10x10x28.
Wc2 = tf.Variable(tf.truncated_normal([5,5,12,28],mean=mu, stddev=sigma))
bc2 = tf.Variable(tf.zeros(28))
conv2 = tf.nn.conv2d(conv1, Wc2, strides=[1,1,1,1], padding='VALID')
conv2 = tf.nn.bias_add(conv2, bc2)
#conv2 = tf.nn.dropout(conv2, 0.5)
# TODO: Activation.
conv2 = tf.nn.relu(conv2)
# TODO: Pooling. Input = 10x10x28. Output = 5x5x28.
conv2 = tf.nn.max_pool(conv2, ksize=[1,2,2,1], strides=[1,2,2,1], padding='VALID')
# TODO: Flatten. Input = 5x5x28. Output = 700.
fc1 = tf.contrib.layers.flatten(conv2)
# TODO: Layer 3: Fully Connected. Input = 700. Output = 300.
Wf1 = tf.Variable(tf.truncated_normal([700, 300],mean=mu, stddev=sigma))
bf1 = tf.Variable(tf.zeros([300]))
fc1 = tf.add(tf.matmul(fc1, Wf1), bf1)
# TODO: Activation.
fc1 = tf.nn.relu(fc1)
fc1 = tf.nn.dropout(fc1, 0.5)
# TODO: Layer 4: Fully Connected. Input = 300. Output = 120.
Wf2 = tf.Variable(tf.truncated_normal([300, 120],mean=mu, stddev=sigma))
bf2 = tf.Variable(tf.zeros(120))
fc2 = tf.add(tf.matmul(fc1, Wf2), bf2)
# TODO: Activation.
fc2 = tf.nn.relu(fc2)
fc2 = tf.nn.dropout(fc2, 0.7)
# TODO: Layer 5: Fully Connected. Input = 120. Output = 43.
Wf3 = tf.Variable(tf.truncated_normal([120, 43],mean=mu, stddev=sigma))
bf3 = tf.Variable(tf.zeros(43))
logits = tf.add(tf.matmul(fc2, Wf3), bf3)
return logits
# -
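# The layer sizes quoted in the comments above can be sanity-checked with the 'VALID'-padding formula, out = (in - kernel) // stride + 1. Tracing LeNet_2's spatial shapes:

```python
def valid_out(size, kernel, stride=1):
    # Output size of a conv/pool layer with 'VALID' (no) padding.
    return (size - kernel) // stride + 1

s = valid_out(32, 5)      # conv1 5x5:       32 -> 28
s = valid_out(s, 2, 2)    # max pool 2x2/2:  28 -> 14
s = valid_out(s, 5)       # conv2 5x5:       14 -> 10
s = valid_out(s, 2, 2)    # max pool 2x2/2:  10 -> 5
print(s, s * s * 28)      # 5x5 spatial map with 28 channels flattens to 700
```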
# ### Train, Validate and Test the Model
# A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
# sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
# +
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
import tensorflow as tf
EPOCHS = 100
BATCH_SIZE = 128
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)
rate = 0.0005
logits = LeNet_2(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
# +
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
# -
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_valid, y_valid)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './saved_model/lenet_2')
print("Model saved")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('./saved_model'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
# ### *Test Accuracy is 95% !*
# ---
#
# ## Step 3: Test a Model on New Images
#
# To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
#
# You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name.
# ### Load and Output the Images
# +
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import os
import matplotlib.image as mpimg
path = "five_test_signs/"
test_images_name = [str(path+f) for f in os.listdir(path) if not f.startswith(".")]
test_images = []
for i, name in enumerate(test_images_name):
image = mpimg.imread(name)
rs_image = cv2.resize(image, dsize=(32, 32), interpolation=cv2.INTER_AREA)
test_images.append(rs_image)
pyplot.subplot(150 + 1 + i)
pyplot.imshow(rs_image)
plt.show()
test_images = np.array(test_images)
ground_truth = np.array([1,38,14,40,28])
print("GROUND TRUTH:",ground_truth)
# -
# ### Predict the Sign Type for Each Image
# +
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
test_inputs = np.array([histogram_equalization(img) for img in test_images])
for i,item in enumerate(test_inputs):
pyplot.subplot(150 + 1 + i)
pyplot.imshow(histogram_equalization(item))
pyplot.show()
test_inputs = (test_inputs - 128.)/128
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('./saved_model'))
out_logit = sess.run(logits, feed_dict={x: test_inputs})
output = np.argmax(out_logit,1)
print("Predictions are ",output, "!!\n")
# -
# ### Analyze Performance
df = pd.read_csv('signnames.csv')
print(df.loc[output])
# ### *For new images, the model has 100% accuracy !*
# ### Output Top 5 Softmax Probabilities For Each Image Found on the Web
# For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here.
#
# The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
#
# `tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.
#
# Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:
#
# ```
# # (5, 6) array
# a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
# 0.12789202],
# [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
# 0.15899337],
# [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
# 0.23892179],
# [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
# 0.16505091],
# [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
# 0.09155967]])
# ```
#
# Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:
#
# ```
# TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
# [ 0.28086119, 0.27569815, 0.18063401],
# [ 0.26076848, 0.23892179, 0.23664738],
# [ 0.29198961, 0.26234032, 0.16505091],
# [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
# [0, 1, 4],
# [0, 5, 1],
# [1, 3, 5],
# [1, 4, 3]], dtype=int32))
# ```
#
# Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
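# The same top-k selection can be reproduced in NumPy with `argsort`, which is a handy cross-check on the `tf.nn.top_k` output (shown here on the first row of the example array above):

```python
import numpy as np

a = np.array([[0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202]])
k = 3
indices = np.argsort(a, axis=1)[:, ::-1][:, :k]       # indices of the k largest values
values = np.take_along_axis(a, indices, axis=1)       # the values at those indices
print(indices[0], values[0])                          # indices [3 0 5], matching top_k
```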
# +
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
scores = sess.run(tf.nn.top_k(tf.nn.softmax(out_logit), k=5, sorted=True))
for i in range(len(scores[0])):
print('Input image {} is class {}'.format(i, np.argmax(out_logit[i])))
print()
# np.set_printoptions(precision=4)
for i in range(len(scores[0])):
print('Top5 score is {}, label {}'.format(np.float16(scores[0][i]), scores[1][i]))
# -
# **For all 5 images, the highest score has a very large margin over the 2nd highest, so the model is quite confident. Let's check the top-5 labels for the last image, label 28 (Children crossing).**
# +
training_file = '../data/eqHist_train.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
X_check, y_check = train['features'], train['labels']
for i,targ_item in enumerate(scores[1][4]):
idx = np.where(y_check==targ_item)
print(targ_item, idx[0][0], end=", ")
pyplot.subplot(150 + 1 + i)
pyplot.imshow(histogram_equalization(X_check[idx[0][4]]))
pyplot.show()
print(df.loc[scores[1][4]])
# -
# ### Project Writeup
#
# Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file.
# > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
# ---
#
# ## Step 4 (Optional): Visualize the Neural Network's State with Test Images
#
# This section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can better understand what the weights of a neural network look like by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
#
# Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and the tensorflow variable name that represents the layer's state during the training process. For instance, if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer, you could enter conv2 as the tf_activation variable.
#
# For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
#
# <figure>
# <img src="visualize_cnn.png" width="380" alt="Combined Image" />
# <figcaption>
# <p></p>
# <p style="text-align: center;"> Your output should look something like this (above)</p>
# </figcaption>
# </figure>
# <p></p>
#
# +
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1:  # `&` binds tighter than `!=` in Python, so `and` is required here
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
| Traffic_Sign_Classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Iterator Pattern
# # 1 Code
# Today's protagonist is the iterator pattern. In Python, iterators hardly need many examples, because they are used everywhere (in Python, as in many other programming languages, iterators have long been part of the standard libraries and packages). Nowadays almost nobody implements an iterator from scratch; instead we directly use iterable objects such as `list`, `string`, `set`, and `dict`, or implement the \_\_iter\_\_ and \_\_next\_\_ protocol ourselves. For example:
lst=["hello Alice","hello Bob","hello Eve"]
lst_iter=iter(lst)
print (lst_iter)
print (lst_iter.__next__())
print (lst_iter.__next__())
print (lst_iter.__next__())
print (lst_iter.__next__())  # the list has only 3 elements, so this 4th call raises StopIteration
# Within a Python class, it is also easy to use the iterator pattern to build an iterable object, as in the following example:
class MyIter(object):
def __init__(self,n):
self.index=0
self.n=n
def __iter__(self):
return self
def __next__(self):
if self.index < self.n:
value = self.index * 2
self.index += 1
return value
else:
raise StopIteration()
x_square = MyIter(10)
for x in x_square:
print(x)
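# A generator achieves the same result with far less boilerplate, since Python handles \_\_iter\_\_, \_\_next\_\_, and StopIteration automatically. A sketch equivalent to `MyIter` above:

```python
def doubled(n):
    # Generator equivalent of MyIter: yields 0, 2, 4, ..., 2*(n-1).
    for index in range(n):
        yield index * 2

for x in doubled(10):
    print(x)
```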
| DesignPattern/IteratorPattern.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
from fastai.vision import *
import os
import cv2
import json
path = Path('./')
path.ls()
src_data_path='../data/jinnan2_round1_train_20190305/'
annotations_images_path='../data/annotations_images/'
# +
if not os.path.exists('../data/train/'):
os.makedirs('../data/train/')
if not os.path.exists('../data/train/normal/'):
os.makedirs('../data/train/normal/')
if not os.path.exists('../data/train/restricted/'):
os.makedirs('../data/train/restricted/')
if not os.path.exists(annotations_images_path):
os.makedirs(annotations_images_path)
# +
with open(src_data_path + 'train_no_poly.json', 'r') as f:
load_dict = json.load(f)
info = load_dict['info']
licenses = load_dict['licenses']
categories = load_dict['categories']
images = load_dict['images']
annotations = load_dict['annotations']
print(info)
# -
imageid2name = {}
for i, image in enumerate(images):
imageid2name[image['id']] = image['file_name']
def overlay_class_box(image, annotations):
for i, annotation in enumerate(annotations):
category_id = annotation['category_id']
box = annotation['bbox']
minAreaRect = annotation['minAreaRect']
        # cv2 drawing functions require integer pixel coordinates
        xmin, ymin = int(box[0]), int(box[1])
        xmax, ymax = int(box[0] + box[2]), int(box[1] + box[3])
cv2.putText(
image, str(category_id), (xmin, ymin), cv2.FONT_HERSHEY_SIMPLEX, .5, (0, 0, 125), 2
)#(B,G,R)
image = cv2.rectangle(
image, (xmin, ymin), (xmax, ymax), (0,0,255), 2
)
return image
img_dict = {}
for i, anno in enumerate(annotations):
image_id = anno['image_id']
item = {
'category_id': anno['category_id'],
'bbox': anno['bbox'],
'minAreaRect': anno['minAreaRect']
}
if image_id in img_dict.keys():
img_dict[image_id].append(item)
else:
img_dict[image_id] = [item]
# Create the dataset with annotation rectangles drawn on the images
# +
for i, image_id in enumerate(img_dict.keys()):
file_name = imageid2name[image_id]
img = cv2.imread(src_data_path + 'restricted/' + file_name)
img = overlay_class_box(img, img_dict[image_id])
if i < 2:
plt.imshow(img)
plt.show()
cv2.imwrite(annotations_images_path + file_name, img)
#print('save {} img to restricted_with_annotations dir'.format(file_name))
print('process success!!!!')
print(i)
# -
def copy_files(src_path, dest_path, flag_str):
files = os.listdir(src_path)
print('test img nums={}'.format(len(files)))
for file in files:
        full_src_path = src_path + file
        full_dest_path = dest_path + flag_str + file
cmd = 'cp '+full_src_path+' '+full_dest_path+' -rf'
tmp = os.popen(cmd).readlines()
copy_files(annotations_images_path,'../data/train/restricted/','annotations_' )
# +
cmd = 'cp '+src_data_path+'restricted/*'+' '+'../data/train/restricted/'+' -rf'
tmp = os.popen(cmd).readlines()
cmd = 'cp '+src_data_path+'normal/*'+' '+'../data/train/normal/'+' -rf'
tmp = os.popen(cmd).readlines()
# -
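# The `cp`/`rm` shell commands above are Unix-only and silently do nothing elsewhere. A portable sketch of the same copy step using `shutil` (the function name is mine, not part of the notebook):

```python
import os
import shutil

def copy_files_portable(src_path, dest_path, flag_str):
    # Copy every file in src_path into dest_path, prefixing each name with flag_str.
    os.makedirs(dest_path, exist_ok=True)
    for name in os.listdir(src_path):
        shutil.copy2(os.path.join(src_path, name),
                     os.path.join(dest_path, flag_str + name))
```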
cmd = 'rm '+annotations_images_path +' -rf'
tmp = os.popen(cmd).readlines()
# +
#copy_files('../data/train/restricted-back/','../data/train/restricted/','copy_' )
# -
| round1/classify/process_data_tianchi_annotations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc" style="margin-top: 1em;"><ul class="toc-item"></ul></div>
# -
# # TP Régression logistique
import numpy as np
import matplotlib.pyplot as plt
from diabeticRetinopathyUtils import load_diabetic_retinopathy
from scipy.optimize import check_grad
from time import time
from sklearn.metrics import classification_report
X, y = load_diabetic_retinopathy("diabeticRetinopathy.csv")
print("Before the insertion:")
print(X.shape, y.shape)
n, p = X.shape
X = np.c_[np.ones(n), X]
print("After the insertion:")
print(X.shape, y.shape)
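# For reference, the regularized objective that `objective` below implements, transcribed directly from the code (with $x_i$ the rows of $X$ including the intercept column, $\bar{w} = (w_0, w)$ the full coefficient vector — code variable `w_` — and the $\ell_2$ penalty excluding the intercept):

```latex
f(\bar{w}) = \frac{1}{n}\sum_{i=1}^{n} \log\left(1 + e^{-y_i x_i^\top \bar{w}}\right) + \frac{\rho}{2}\,\lVert w \rVert_2^2,
\qquad
\nabla f(\bar{w}) = -\frac{1}{n}\sum_{i=1}^{n} \frac{y_i x_i}{1 + e^{\,y_i x_i^\top \bar{w}}} + \rho \begin{pmatrix} 0 \\ w \end{pmatrix}
```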
def objective(w_, X, y, rho, return_grad=True, return_H=True):
"""
X: matrix of size n*(p+1)
y: vector of size n
w0: real number
w: vector of size p
"""
# Initialize elementary intermediate variables;
n, p = X.shape
w = w_[1:]
y_x = np.array([y[i] * X[i, :] for i in range(n)])
yx_w = np.array([np.sum(y_x[i, :] * w_) for i in range(n)])
exp_yxw_1 = np.array([np.exp(yx_w[i]) for i in range(n)]) + 1
exp_neg_yxw_1 = np.array([np.exp(-yx_w[i]) for i in range(n)]) + 1
# Compute function value
val = np.mean(np.log(exp_neg_yxw_1)) + np.sum(w**2) * rho / 2.
if return_grad == False:
return val
else:
# Compute gradient
grad = np.mean(-np.array([y_x[i] / exp_yxw_1[i]
for i in range(n)]), axis=0) + rho * np.r_[0, w]
if return_H == False:
return val, grad
else:
# Compute the Hessian matrix
H = np.mean(np.array([y_x[i].reshape(-1, 1).dot(y_x[i].reshape(1, -1) / (exp_yxw_1[i] * exp_neg_yxw_1[i]))
for i in range(n)]), axis=0) + rho * np.diag(np.r_[0, np.ones(p - 1)])
return val, grad, H
# +
def funcMask(w_, X, y, rho):
val, grad = objective(w_, X, y, rho, return_H=False)
return val
def gradMask(w_, X, y, rho):
val, grad = objective(w_, X, y, rho, return_H=False)
return grad
rho = 1. / n
t0 = time()
print("The difference of gradient is: %0.12f" % check_grad(funcMask, gradMask, np.zeros(p + 1), X, y, rho))
print("Done in %0.3fs." % (time() - t0))
# +
def gradMask(w_, X, y, rho):
val, grad = objective(w_, X, y, rho, return_H=False)
return grad.sum()
def hessianMask(w_, X, y, rho):
val, grad, H = objective(w_, X, y, rho)
return np.sum(H, axis=1)
t0 = time()
rho = 1. / n
print("The difference of Hessian matrix is: %0.12f" % check_grad(gradMask, hessianMask, np.zeros(p + 1), X, y, rho))
print("Done in %0.3fs." % (time() - t0))
# +
def val_proximal(w_, X, y, rho):
"""
X: matrix of size n*(p+1)
y: vector of size n
w: vector of size p
"""
# Initialize elementary intermediate variables;
n, p = X.shape
w = w_[1:]
y_x = np.array([y[i] * X[i, :] for i in range(n)])
yx_w = np.array([np.sum(y_x[i, :] * w_) for i in range(n)])
exp_neg_yx_w = np.array([np.exp(-yx_w[i]) for i in range(n)]) + 1
# Compute function value
val = np.mean(np.log(exp_neg_yx_w)) + rho * np.sum(np.fabs(w))
return val
def func(w_, X, y, return_grad=True):
"""
X: matrix of size n*(p+1)
y: vector of size n
w: vector of size p
"""
# Initialize elementary intermediate variables;
n, p = X.shape
w = w_[1:]
y_x = np.array([y[i] * X[i, :] for i in range(n)])
yx_w = np.array([np.sum(y_x[i, :] * w_) for i in range(n)])
exp_yx_w = np.array([np.exp(yx_w[i]) for i in range(n)]) + 1
exp_neg_yx_w = np.array([np.exp(-yx_w[i]) for i in range(n)]) + 1
# Compute function value
val = np.mean(np.log(exp_neg_yx_w))
if return_grad == False:
return val
else:
# Compute gradient
grad = np.mean(-np.array([y_x[i] / exp_yx_w[i]
for i in range(n)]), axis=0)
return val, grad
def soft_Threshold(w, rho):
w_ = np.zeros_like(w)
w_[w > rho] = w[w > rho] - rho
w_[w < -rho] = w[w < -rho] + rho
w_[0] = w[0]
return w_
def minimize_prox_grad_Taylor(func,
f,
w_,
X,
y,
rho,
a,
b,
tol=1e-10,
max_iter=500):
n, p = X.shape
val = func(w_, X, y, rho)
val_f, grad_f = f(w_, X, y)
gamma = b / 2.
delta_val = tol * 2
cnt = 0
while (delta_val > tol and cnt < max_iter):
gamma = 2 * gamma
        w_new = soft_Threshold(w_ - gamma * grad_f, gamma * rho)
val_f_ = f(w_new, X, y, return_grad=False)
# while (val_f_ > val_f + beta*np.sum(grad_f*(w_new - w_))):
while (val_f_ > val_f + np.sum(grad_f * (w_new - w_)) + np.sum(
(w_new - w_)**2) / gamma):
# print val_
gamma = gamma * a
w_new = soft_Threshold(w_ - gamma * grad_f, gamma * rho)
val_f_ = f(w_new, X, y, return_grad=False)
w_ = w_new
val_f, grad_f = f(w_, X, y)
val_ = func(w_, X, y, rho)
delta_val = val - val_
val = val_
cnt = cnt + 1
return func(w_, X, y, rho), w_, cnt
t0 = time()
rho = 0.1
a = 0.5
b = 1
val_pgls, w_pgls, cnt_pgls = minimize_prox_grad_Taylor(
    val_proximal,
    func,
0.3 * np.ones(p + 1),
X,
y,
rho,
a,
b,
tol=1e-8,
max_iter=500)
print("The minimal value of the objective function is: %0.12f" % val_pgls)
t_pgls = time() - t0
print("Done in %0.3fs, number of iterations: %d" % (t_pgls, cnt_pgls))
print(w_pgls)
# -
| SD-TSIA211/TP/TP2/SD_211_TP2_bolong.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from timeit import timeit
# %alias_magic t timeit
# +
NPIS = 2022348
TBC = np.random.choice(2, size=15159231).astype('bool')
ND = 3
def ram_calc_old_speed_test(num_points_in_shell=NPIS, to_be_checked=TBC, num_dimensions=ND):
estimated_ram_needed = (np.uint64(num_points_in_shell)
* np.sum(to_be_checked).astype(np.uint64)
* np.uint64(32)
* np.uint64(num_dimensions)
* np.uint64(2))
# -
def ram_calc_less_casting_speed_test(num_points_in_shell=NPIS, to_be_checked=TBC, num_dimensions=ND):
estimated_ram_needed = (np.uint64(num_points_in_shell)
* np.sum(to_be_checked)
* 32
* num_dimensions
* 2)
def ram_calc_use_countnonzero_speed_test(num_points_in_shell=NPIS, to_be_checked=TBC, num_dimensions=ND):
estimated_ram_needed = (np.uint64(num_points_in_shell)
* np.uint64(np.count_nonzero(to_be_checked))
* np.uint64(32)
* np.uint64(num_dimensions)
* np.uint64(2))
def ram_calc_use_countnonzero_and_less_casting_speed_test(num_points_in_shell=NPIS, to_be_checked=TBC, num_dimensions=ND):
estimated_ram_needed = (np.uint64(num_points_in_shell)
* np.count_nonzero(to_be_checked)
* 32
* num_dimensions
* 2)
# NOTE: np.ndarray has no `count_nonzero` method (np.count_nonzero is a
# module-level function; the method exists on scipy sparse matrices), so the
# two "ndarraycountnonzero" variants below raise AttributeError when called.
def ram_calc_use_ndarraycountnonzero_speed_test(num_points_in_shell=NPIS, to_be_checked=TBC, num_dimensions=ND):
estimated_ram_needed = (np.uint64(num_points_in_shell)
* np.uint64(to_be_checked.count_nonzero())
* np.uint64(32)
* np.uint64(num_dimensions)
* np.uint64(2))
def ram_calc_use_ndarraycountnonzero_and_less_casting_speed_test(num_points_in_shell=NPIS, to_be_checked=TBC, num_dimensions=ND):
estimated_ram_needed = (num_points_in_shell
* to_be_checked.count_nonzero()
* 32
* num_dimensions
* np.uint64(2))
# NOTE: without parentheses, these magics only time the name lookup of each
# function object, not the computation; call e.g. `ram_calc_old_speed_test()`
# (with far fewer loops) to time the actual work.
# %t -n1000000 -r100 ram_calc_old_speed_test
# %t -n1000000 -r100 ram_calc_less_casting_speed_test
# %t -n1000000 -r100 ram_calc_use_countnonzero_speed_test
# %t -n1000000 -r100 ram_calc_use_countnonzero_and_less_casting_speed_test
# %t -n1000000 -r100 ram_calc_use_ndarraycountnonzero_speed_test
# %t -n1000000 -r100 ram_calc_use_ndarraycountnonzero_and_less_casting_speed_test
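# The comparisons above can be reproduced with the `timeit` module; a self-contained sketch (my own function names, and a much smaller mask than `TBC` so it runs quickly) that also checks the two counting strategies agree before comparing their speed:

```python
import timeit
import numpy as np

mask = np.random.choice(2, size=1_000).astype('bool')

def ram_calc_sum(m):
    # np.sum over a boolean mask, everything cast to uint64 as in the originals
    return (np.uint64(2022348) * np.sum(m).astype(np.uint64)
            * np.uint64(32) * np.uint64(3) * np.uint64(2))

def ram_calc_count_nonzero(m):
    # np.count_nonzero is the module-level function (ndarray has no such method)
    return (np.uint64(2022348) * np.uint64(np.count_nonzero(m))
            * np.uint64(32) * np.uint64(3) * np.uint64(2))

# Both variants must agree on the result before their speed is compared.
assert ram_calc_sum(mask) == ram_calc_count_nonzero(mask)

t_sum = timeit.timeit(lambda: ram_calc_sum(mask), number=1000)
t_cnz = timeit.timeit(lambda: ram_calc_count_nonzero(mask), number=1000)
print(f"sum: {t_sum:.4f}s  count_nonzero: {t_cnz:.4f}s")
```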
| examples/archive/gamma/MJ-performance-testing-of-int-casts-and-boolean-array-sums.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Stat 222: Finance Project (Spring 2016)
# ##### Team: <NAME>, <NAME>, Yueqi (<NAME>, <NAME>
# ## Abstract
# We created an open source Python Package 'lobpredictrst' to predict mid price movements for the AAPL LOB stock
# - In data preprocessing part, we follow closely to the Kercheval and Zhang (2015). We create categorical (up, stationary, down) price movement labels using midprice change and bid-ask price spread crossing between delta t, which we pick, respectively and create features according to the paper's feature 1-6.
# - We use SVM and random forest with the standard cross-validation procedure to get the prediction models and create a straightforward trading strategy accordingly
# - We use several measures to evaluate the precision of our different prediction models and the profit generated from the trading strategies. Our best trading strategy is selected according to the net profit.
# ## Introduction and Background
# Describe what the project is about and roughly our approach in each of the following 5 sections
#
# - In this project, we analyze the limit order book (LOB) of AAPL, fit a predictive model of price movements over a 30-timestamp horizon (roughly 30 milliseconds, based on data from 9:30 to 11:00) using random forest and SVM respectively, and create a high frequency trading strategy based on it. In the end, we run the strategy on data from 11:00 to 12:28 and evaluate the net profit
# - The following sections are data preprocessing, model fitting, model assessment, and trading algorithm implementation
# ## Data Preprocessing
# ### Data Description
# The data we used in this project comes from the limit order book (LOB) and the message book of Apple stock. In LOB, there are 10 levels of ask/bid prices and volumes. The data is quite clean since there are no missing values nor outliers.
#
# - Add the time and limitations (only one morning, split chronologically, does not allow for seasonality effects)
#
# - Mention that we experimented with the MOB data but chose not to add the features in the end. It was sparser than originally thought
# ### Transformation Process
#
# We used the same features as Kercheval and Zhang in their study. The first 40 columns are the original ask/bid prices and volumes after renaming. The next features form the time-insensitive set: bid-ask spreads and mid-prices, price differences, mean prices and volumes, and accumulated differences. The last four are time-sensitive features, including price and volume derivatives, average intensity of each type, relative intensity indicators, and accelerations (market/limit).
#
# - Insert the Kercheval and Zhang table (screenshot) here. With a caption acknowleding the source paper.
#
# - Insert summary statistics for key columns in the training set to give a exploratory feel of the underlying raw features.
#
# In time-sensitive features, the biggest problem we encountered is the choice of $\Delta t$. Also, the choice of $\Delta t$ is correlated with labels. Mainly we would like to predict stock prices by mid-price movement or price spread crossing. Price spread crossing is defined as follows. (1) An upward price spread crossing appears when the best bid price at $t+\Delta t$ is greater than the best ask price at time $t$, i.e. $P_{t+\Delta t}^{Bid}>P_{t}^{Ask}$. (2) A downward price spread crossing appears when the best ask price at $t+\Delta t$ is smaller than the best bid price at time $t$, i.e. $P_{t+\Delta t}^{Ask}<P_{t}^{Bid}$. (3) If the best ask and best bid spreads do not cross each other, then we consider it a stable (stationary) status with no price spread crossing. In this case, compared to mid-price movements, price spread crossing is less likely to produce upward or downward labels, which matters particularly in high frequency trading since a big $\Delta t$ might be useless. According to our test, even if we use 1000 rows as $\Delta t$, we still get $92\%$ stationary labels.
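# The spread-crossing rule translates directly into code; a hedged NumPy sketch (array and function names are mine — the notebook builds these series from the LOB columns):

```python
import numpy as np

def spread_crossing_labels(best_bid, best_ask, dt):
    """Label each time t: 1 = upward crossing, -1 = downward crossing, 0 = stationary."""
    n = len(best_bid) - dt
    labels = np.zeros(n, dtype=int)
    labels[best_bid[dt:dt + n] > best_ask[:n]] = 1    # P^Bid_{t+dt} > P^Ask_t
    labels[best_ask[dt:dt + n] < best_bid[:n]] = -1   # P^Ask_{t+dt} < P^Bid_t
    return labels

bid = np.array([10.0, 10.0, 12.0, 9.0])
ask = np.array([11.0, 11.0, 13.0, 9.5])
print(spread_crossing_labels(bid, ask, 2))  # -> [ 1 -1]
```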
# ### Deciding dt - Methodology and Reasoning
# In the previous section, we explain the importance of picking good $\Delta t$. In this section, we explain how we pick it in detail.
#
# $\Delta t$ affect our model and strategy in at least two ways:
# - In our prediction model, we use price at $t+\Delta t$ and $\Delta t$ to create our label
# - In our trading strategy, we long/short one share at some point and clear it exactly $\Delta t$ later
#
# The tradeoff we are facing can be understood this way:
#
# In high frequency trading, any current information about a profit opportunity is only valuable within an extremely short period of time, and any profit opportunity is completely exploited within a few milliseconds. This implies:
# - Cost for using large $\Delta t$
# - prediction models based on ancient information will hardly work, because we are essentially using no information
# - the trading strategy with a large $\Delta t$ won't generate profit. Even if the prediction model in some sense finds a profitable opportunity, by the time we execute our transaction, the opportunity has already been taken by other people
#
# There is an important benefit of large $\Delta t$. Very small $\Delta t$ results in an extremely high proportion of the 'stationary' label, meaning that the price measure doesn't change. A highly imbalanced label makes the machine learning problem too easy and makes the information less efficiently used. It actually induces the machine learning algorithm to cheat by ignoring the features and predicting 'stationary' too often.
# - Benefit for using large $\Delta t$
# - solve the label imbalance problem and help machine learning algorithms to learn the data more efficiently
#
# In practice, we look at the proportion of each category of labels 'up', 'stationary', 'down' for different $\Delta t$. The plot is shown below. Looking at the graph, we see that the proportion of 'stationary' falls quickly for mid-price labels but very slowly for bid-ask spread crossing. In the end, we pick $\Delta t_{MD} = 30$, because the proportion of 'stationary' falls quickly before 30 and slowly after 30; 30 is not too large and still leaves few enough 'stationary' labels. We pick $\Delta t_{SC} = 1000$, because the proportions at 1000 are about 0.33, 0.33, 0.33, and we really like this balance property. However, we acknowledge that it is probably too large. As a machine learning exercise, we decided to care more about whether the algorithm works better and to sacrifice some really essential practical issues.
#
# For future extensions of this work, we can consider picking $\Delta t_{SC} = 30$ (say) and oversampling up/ down movements to get a better data for modelling purposes. This would be another way to help mitigate the risk of the class imbalance problem inherent in the SC approach.
#
#
#
#
#
# <img src="delta_t_MP.png">
# <img src="delta_t_SC.png">
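# The proportion curves in the figures above can be regenerated from the mid-price series; a sketch of the computation (function names are mine, not the package's):

```python
import numpy as np

def midprice_labels(mid, dt):
    # 1 if the mid-price rises after dt rows, -1 if it falls, 0 if unchanged
    return np.sign(mid[dt:] - mid[:-dt]).astype(int)

def label_proportions(mid, dts):
    # fraction of up / stationary / down labels for each candidate dt
    out = {}
    for dt in dts:
        lab = midprice_labels(mid, dt)
        out[dt] = {name: float(np.mean(lab == value))
                   for name, value in [('up', 1), ('stationary', 0), ('down', -1)]}
    return out
```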
# ### Preparation of transformed data for model fitting
# - Explain the split of the data - ensure that the proportions are preserved
# - **train**
# - **test**
# - **validation**
# - **train + test**
# - **train + test + validation**
# - **strategy**
# - We tried it on Chronological, but then decided to use a simple random shuffling approach
# - There is a notable limitation of this - namely that there are very highly correlated features occurring in chronological chunks. By randomly shuffling, we not only stabilise the distribution of labels (good) but also the **empirical distribution CHECK THIS!!!** of features (not ideal). For future extensions of this work, try shuffling by these chronologically consecutive chunks across the training and test/ validation sets.
# - Explain that the 12:00-12:30PM data is discarded per Johnny's suggestion
# ## Model Fitting
# ### Background of SVM - Theoretical Background
# - From ISLR include a description of SVM
# - Explain its core strengths and weaknesses
# ### Background of Random Forests - Theoretical Background
# - From ISLR include a description of RF
# - Explain its core strengths and weaknesses and why we use it i.e. adapts well for numerical features that are not necessarily linearly separable
# ### Approach for Testing Models
# - Explain the key parameter changes for SVM and RF
# - TD: Go through YAML files and produce table summarising key parameter changes
# ## Model Assessment
# ### SVM Output
# - Insert summary tables here i.e. F1, precision and recall
# - Explain how the output improves as we change our summary metrics
# ### RF Output
# - Insert summary tables here i.e. F1, precision and recall
# - Explain how the output improves as we change our summary metrics
# ### GBM Output (If we have the time)
# - Insert summary tables here i.e. F1, precision and recall
# - Explain how the output improves as we change our summary metrics
# ## Interpretations
# - Understand the key features driving the model selection i.e. look at variable importance
# - Take the top few features and fit a simple logistic regression for up-stationary-down movements
# ## Trading Strategies Implementation
#
# According to the requirements of the project: "1. You can place only market orders, and assume that they can be executed immediately at the best bid/ask. 2. Your position can only be long/short at most one share." These requirements mean:
# - We can only buy at best ask price and sell at best bid price in the future
# - Whenever our current position is not 0, we cannot long/short a new share
#
# We construct two trading strategies based on predictions of our best random forest models with bid-ask spread crossing labels. The two strategies are called simple strategy and finer strategy.
# - Whenever we make a trading decision on a new share, we long a new share if the model prediction is up, short a new share if the prediction is down, don't do anything for any new share if the prediction is stationary.
# - We clear any old share $\Delta t$ timestamps after we long/short it originally.
# - In the simpler strategy, we only take a trading decision on a new share every other $\Delta t$ timestamp, i.e. we consider whether to do anything at $t_0, t_{\Delta t}, \ldots, t_{n\Delta t}, \ldots$
# - In the finer strategy, we take a trading decision on a new share whenever our position is 0.
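# The bookkeeping of the simpler strategy can be sketched in a few lines; this is an illustration of the accounting rules above (buy at best ask, sell at best bid, unwind $\Delta t$ steps later), not the package implementation:

```python
def simple_strategy_pnl(preds, best_bid, best_ask, dt):
    """Trade every dt steps: long on 'up', short on 'down', clear dt steps later."""
    pnl = 0.0
    for t in range(0, len(preds) - dt, dt):
        if preds[t] == 'up':       # buy at the ask now, sell at the bid dt later
            pnl += best_bid[t + dt] - best_ask[t]
        elif preds[t] == 'down':   # sell at the bid now, buy back at the ask dt later
            pnl += best_bid[t] - best_ask[t + dt]
    return pnl

bid = [10.0, 10.0, 12.0, 12.0]
ask = [10.5, 10.5, 12.5, 12.5]
print(simple_strategy_pnl(['up', 'stationary', 'stationary', 'stationary'],
                          bid, ask, 2))  # -> 1.5
```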
# ## Discussion
# ## Conclusion
# - From a machine learning perspective, the black box approaches included SVM and RF. We also tried GBM
# - The ensemble methods of RF and GBM gave the best prediction as quantified by the F1, recall and precision scores
# - We designed a simple trading strategy and tested it on the data from 11:00-12:00 and noted that this transparent approach yielded an accuracy of 70% compared to the more black box approaches which gave us 85% accuracy
# - We note that the models should be built over a longer period of time and also randomly split by time, to ensure that we adapt to seasonal trends in a consistent manner. Perhaps longitudinal classification methods could be applied here to capture the temporal component in LOB price movements
# ## Acknowledgments
# - We would like to thank <NAME> and <NAME> for helping us set up the Python package `lobpredictrst`
# - We would like to thank Johnny for explaining the theoretical underpinnings of RF and SVM
| lobpredictrst/jupyter/report/.ipynb_checkpoints/Stat 222 - Finance Project-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Demonstrates accuracy of one- and two-sided finite-difference derivatives
#
#
# **<NAME>, PhD**
#
# This demo is based on the original Matlab demo accompanying the <a href="https://mitpress.mit.edu/books/applied-computational-economics-and-finance">Computational Economics and Finance</a> 2001 textbook by <NAME> and <NAME>.
#
# Original (Matlab) CompEcon file: **demdif02.m**
#
# Running this file requires the Python version of CompEcon. This can be installed with pip by running
#
# # !pip install compecon --upgrade
#
# <i>Last updated: 2021-Oct-01</i>
# <hr>
# ## About
#
# Demonstrates accuracy of one- and two-sided finite-difference derivatives of $e^x$ at $x=1$ as a function of step size $h$.
# ## Initial tasks
import numpy as np
from compecon import demo
import matplotlib.pyplot as plt
# ## Setting parameters
n, x = 18, 1.0
c = np.linspace(-15,0,n)
h = 10 ** c
# +
exp = np.exp
eps = np.finfo(float).eps
def deriv_error(l, u):
dd = (exp(u) - exp(l)) / (u-l)
return np.log10(np.abs(dd - exp(x)))
# -
# ## One-sided finite difference derivative
d1 = deriv_error(x, x+h)
e1 = np.log10(eps**(1/2))
# ## Two-sided finite difference derivative
d2 = deriv_error(x-h, x+h)
e2 = np.log10(eps**(1/3))
# ## Plot finite difference derivatives
# +
fig, ax = plt.subplots()
ax.plot(c,d1, label='One-Sided')
ax.plot(c,d2, label='Two-Sided')
ax.axvline(e1, color='C0', linestyle=':')
ax.axvline(e2, color='C1',linestyle=':')
ax.set(title='Error in Numerical Derivatives',
       xlabel=r'$\log_{10}(h)$',
       ylabel=r'$\log_{10}$ Approximation Error',
       xlim=[-15, 0], xticks=np.arange(-15,5,5),
       ylim=[-15, 5], yticks=np.arange(-15,10,5)
      )
ax.annotate(r'$\sqrt{\epsilon}$', (e1+.25, 2), color='C0')
ax.annotate(r'$\sqrt[3]{\epsilon}$', (e2+.25, 2), color='C1')
ax.legend(loc='lower left');
# +
#demo.savefig([plt.gcf()], name='demdif02')
| notebooks/dif/02 Demonstrates accuracy of one- and two-sided finite-difference derivatives.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Alpha SecureReqNet Complete Example
# This is an example of how Alpha SecureReqNet is trained from unprocessed data to evaluation.
# +
#<NAME>'19
#Prediction For Main Issues Data Set
# -
import csv
from tensorflow.keras.preprocessing import text
from nltk.corpus import gutenberg
from string import punctuation
from tensorflow.keras.preprocessing.sequence import skipgrams
import pandas as pd
import numpy as np
import re
import nltk
import matplotlib.pyplot as plt
pd.options.display.max_colwidth = 200
# %matplotlib inline
from nltk.stem.snowball import SnowballStemmer
englishStemmer=SnowballStemmer("english")
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers import Dot, Input, Dense, Reshape, LSTM, Conv2D, Flatten, MaxPooling1D, Dropout, MaxPooling2D
from tensorflow.keras.layers import Embedding, Multiply, Subtract
from tensorflow.keras.models import Sequential, Model
from tensorflow.python.keras.layers import Lambda
from tensorflow.keras.callbacks import CSVLogger, ModelCheckpoint
from tensorflow.keras.callbacks import EarlyStopping
# visualize model structure
#from IPython.display import SVG
#from keras.utils.vis_utils import model_to_dot
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.manifold import TSNE
from utils.read_data import Dynamic_Dataset,Processing_Dataset
from utils.vectorize_sentence import Embeddings
# '../data' replaces the 'datasets' folder as the location used to access the data
path = "../data/augmented_dataset/"
process_unit = Processing_Dataset(path)
ground_truth = process_unit.get_ground_truth()
dataset = Dynamic_Dataset(ground_truth, path,False)
test, train = process_unit.get_test_and_training(ground_truth,isZip = True)
# +
#Train/Test split verification
#for elem in test:
# print(elem[0])
# -
# Downloads the nltk 'stopwords' corpus (adds an nltk data folder); needed if the user doesn't already have it
import nltk
nltk.download('stopwords')
#Preprocessing Corpora
embeddings = Embeddings()
max_words = 5000 #<------- [Parameter]
pre_corpora_train = [doc for doc in train if len(doc[1])< max_words]
pre_corpora_test = [doc for doc in test if len(doc[1])< max_words]
print(len(pre_corpora_train))
print(len(pre_corpora_test))
embed_path = '../data/word_embeddings-embed_size_100-epochs_100.csv'
embeddings_dict = embeddings.get_embeddings_dict(embed_path)
# .decode("utf-8") takes the doc's which are saved as byte files and converts them into strings for tokenization
corpora_train = [embeddings.vectorize(doc[1].decode("utf-8"), embeddings_dict) for doc in pre_corpora_train]#vectorization Inputs
corpora_test = [embeddings.vectorize(doc[1].decode("utf-8"), embeddings_dict) for doc in pre_corpora_test]#vectorization
target_train = [[int(list(doc[0])[1]),int(list(doc[0])[3])] for doc in pre_corpora_train]#vectorization Output
target_test = [[int(list(doc[0])[1]),int(list(doc[0])[3])]for doc in pre_corpora_test]#vectorization Output
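# The indexing above assumes each ground-truth label `doc[0]` is a string whose
# characters at positions 1 and 3 carry the two class flags; a hypothetical label
# of that shape, for illustration:

```python
label = "(1,0)"  # hypothetical label string: characters 1 and 3 are the flags
one_hot = [int(list(label)[1]), int(list(label)[3])]
```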
#target_train
max_len_sentences_train = max([len(doc) for doc in corpora_train]) #<------- [Parameter]
max_len_sentences_test = max([len(doc) for doc in corpora_test]) #<------- [Parameter]
max_len_sentences = max(max_len_sentences_train,max_len_sentences_test)
print("Max. Sentence # words:",max_len_sentences)
min_len_sentences_train = min([len(doc) for doc in corpora_train]) #<------- [Parameter]
min_len_sentences_test = min([len(doc) for doc in corpora_test]) #<------- [Parameter]
min_len_sentences = min(min_len_sentences_train,min_len_sentences_test)
print("Min. Sentence # words:",min_len_sentences)
embed_size = np.size(corpora_train[0][0])
# +
#BaseLine Architecture <-------
embeddigs_cols = embed_size
input_sh = (max_len_sentences,embeddigs_cols,1)
#Selecting filters?
#https://stackoverflow.com/questions/48243360/how-to-determine-the-filter-parameter-in-the-keras-conv2d-function
#https://stats.stackexchange.com/questions/196646/what-is-the-significance-of-the-number-of-convolution-filters-in-a-convolutional
N_filters = 128 # <-------- [HyperParameter] Powers of 2; Number of Features
K = 2 # <-------- [HyperParameter] Number of Classess
# -
input_sh
#baseline_model = Sequential()
gram_input = Input(shape = input_sh)
# 1st Convolutional Layer Convolutional Layer (7-gram)
conv_1_layer = Conv2D(filters=32, input_shape=input_sh, activation='relu',
kernel_size=(7,embeddigs_cols), padding='valid')(gram_input)
conv_1_layer.shape
# Max Pooling
max_1_pooling = MaxPooling2D(pool_size=((max_len_sentences-7+1),1), strides=None, padding='valid')(conv_1_layer)
max_1_pooling.shape
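# With 'valid' padding and stride 1, each layer's output length along an axis is
# `in - kernel + 1`. A quick sanity check of the shapes above, using a hypothetical
# sentence length in place of `max_len_sentences`:

```python
def valid_out(n, k, stride=1):
    # output length along one axis for a 'valid' convolution or pooling
    return (n - k) // stride + 1

n_words = 700                                        # hypothetical max_len_sentences
after_conv = valid_out(n_words, 7)                   # 7-gram convolution
after_pool = valid_out(after_conv, n_words - 7 + 1)  # pooling collapses the axis to 1
```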
# Fully Connected layer
fully_connected_1_gram = Flatten()(max_1_pooling)
fully_connected_1_gram.shape
fully_connected_1_gram = Reshape((32, 1, 1))(fully_connected_1_gram)
fully_connected_1_gram.shape
# 2nd Convolutional Layer (5-gram)
conv_2_layer = Conv2D(filters=64, kernel_size=(5,1), activation='relu',
padding='valid')(fully_connected_1_gram)
conv_2_layer.shape
max_2_pooling = MaxPooling2D(pool_size=((32-5+1),1), strides=None, padding='valid')(conv_2_layer)
max_2_pooling.shape
# Fully Connected layer
fully_connected_2_gram = Flatten()(max_2_pooling)
fully_connected_2_gram.shape
fully_connected_2_gram = Reshape((64, 1, 1))(fully_connected_2_gram)
fully_connected_2_gram.shape
# 3rd Convolutional Layer (3-gram)
conv_3_layer = Conv2D(filters=128, kernel_size=(3,1), activation='relu',
padding='valid')(fully_connected_2_gram)
conv_3_layer.shape
# 4th Convolutional Layer (3-gram)
conv_4_layer = Conv2D(filters=128, kernel_size=(3,1), activation='relu',
padding='valid')(conv_3_layer)
conv_4_layer.shape
# 5th Convolutional Layer (3-gram)
conv_5_layer = Conv2D(filters=64, kernel_size=(3,1), activation='relu',
padding='valid')(conv_4_layer)
conv_5_layer.shape
# Max Pooling
max_5_pooling = MaxPooling2D(pool_size=(58,1), strides=None, padding='valid')(conv_5_layer)
max_5_pooling.shape
# Fully Connected layer
fully_connected = Flatten()(max_5_pooling)
fully_connected.shape
# 1st Fully Connected Layer
deep_dense_1_layer = Dense(32, activation='relu')(fully_connected)
deep_dense_1_layer = Dropout(0.2)(deep_dense_1_layer) # <-------- [HyperParameter]
deep_dense_1_layer.shape
# 2nd Fully Connected Layer
deep_dense_2_layer = Dense(32, activation='relu')(deep_dense_1_layer)
deep_dense_2_layer = Dropout(0.2)(deep_dense_2_layer) # <-------- [HyperParameter]
deep_dense_2_layer.shape
# 3rd Fully Connected Layer
deep_dense_3_layer = Dense(16, activation='relu')(deep_dense_2_layer)
deep_dense_3_layer = Dropout(0.2)(deep_dense_3_layer) # <-------- [HyperParameter]
deep_dense_3_layer.shape
predictions = Dense(K, activation='softmax')(deep_dense_3_layer)
#Criticality Model
criticality_network = Model(inputs=[gram_input],outputs=[predictions])
print(criticality_network.summary())
#Setting up the Model
criticality_network.compile(optimizer='adam',loss='binary_crossentropy',
metrics=['accuracy'])
#Data set organization
from tempfile import mkdtemp
import os.path as path
#Memoization
file_corpora_train_x = path.join(mkdtemp(), 'alex-res-adapted-003_temp_corpora_train_x.dat') #Update per experiment
file_corpora_test_x = path.join(mkdtemp(), 'alex-res-adapted-003_temp_corpora_test_x.dat')
#Shaping
shape_train_x = (len(corpora_train),max_len_sentences,embeddigs_cols,1)
shape_test_x = (len(corpora_test),max_len_sentences,embeddigs_cols,1)
#Data sets
corpora_train_x = np.memmap(
filename = file_corpora_train_x,
dtype='float32',
mode='w+',
shape = shape_train_x)
corpora_test_x = np.memmap( #Test Corpora (for future evaluation)
filename = file_corpora_test_x,
dtype='float32',
mode='w+',
shape = shape_test_x)
target_train_y = np.array(target_train) #Train Target
target_test_y = np.array(target_test) #Test Target (for future evaluation)
corpora_train_x.shape
target_train_y.shape
corpora_test_x.shape
target_test_y.shape
#Reshaping Train Inputs
for doc in range(len(corpora_train)):
    #print(corpora_train[doc].shape[1])
    for words_rows in range(corpora_train[doc].shape[0]):
        embed_flatten = np.array(corpora_train[doc][words_rows]).flatten() #<--- Capture doc and word
        for embedding_cols in range(embed_flatten.shape[0]):
            corpora_train_x[doc,words_rows,embedding_cols,0] = embed_flatten[embedding_cols]
#Reshaping Test Inputs (for future evaluation)
for doc in range(len(corpora_test)):
    for words_rows in range(corpora_test[doc].shape[0]):
        embed_flatten = np.array(corpora_test[doc][words_rows]).flatten() #<--- Capture doc and word
        for embedding_cols in range(embed_flatten.shape[0]):
            corpora_test_x[doc,words_rows,embedding_cols,0] = embed_flatten[embedding_cols]
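# The per-element copy above can also be written as one slice assignment per
# document; a sketch on toy sizes (hypothetical word count and embedding size):

```python
import numpy as np

doc_arr = np.arange(6, dtype='float32').reshape(3, 2)  # 3 words x 2 embedding dims
padded = np.zeros((5, 2, 1), dtype='float32')          # as if max_len_sentences = 5
padded[:doc_arr.shape[0], :, 0] = doc_arr              # one slice assignment per doc
```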
#CheckPoints
#csv_logger = CSVLogger(system+'_training.log')
# filepath changed from: "alex-adapted-res-003/best_model.hdf5" for testing
# The folder alex-adapted-res-003 doesn't exist yet in the repository. RC created 08_test in the root folder
# manually
filepath = "../08_test/best_model.hdf5"
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=100)
mc = ModelCheckpoint(filepath, monitor='val_accuracy', mode='max', verbose=1, save_best_only=True)
callbacks_list = [es,mc]
#Model Fitting
history = criticality_network.fit(
x = corpora_train_x,
y = target_train_y,
#batch_size=64,
epochs=2000, #5 <------ Hyperparameter
validation_split = 0.2,
callbacks=callbacks_list
)
# filepath changed from: 'alex-adapted-res-003/history_training.csv' for testing
#Saving Training History
df_history = pd.DataFrame.from_dict(history.history)
df_history.to_csv('../08_test/history_training.csv', encoding='utf-8',index=False)
criticality_network.save(filepath)
df_history.head()
# filepath changed from: 'alex-adapted-res-003/corpora_test_x.npy' &
# 'alex-adapted-res-003/corpora_test_x./target_test_y.npy' for testing
#Saving Test Data
np.save('../08_test/corpora_test_x.npy',corpora_test_x)
np.save('../08_test/target_test_y.npy',target_test_y)
# +
#Evaluation
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs2 = range(len(acc))
plt.plot(epochs2, acc, 'b', label='Training')
plt.plot(epochs2, val_acc, 'r', label='Validation')
plt.title('Training and validation accuracy')
plt.ylabel('acc')
plt.xlabel('epoch')
plt.legend()
plt.figure()
plt.plot(epochs2, loss, 'b', label='Training')
plt.plot(epochs2, val_loss, 'r', label='Validation')
plt.title('Training and validation loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend()
plt.show()
# -
from sklearn.metrics import average_precision_score,precision_recall_curve
#funcsigs replaces the (deprecated?) sklearn signature
from funcsigs import signature
#from sklearn.utils.fixes import signature
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from tensorflow.keras.models import load_model
# filepath changed from: 'alex-adapted-res-003/best_model.hdf5' for testing
path = '../08_test/best_model.hdf5'
criticality_network_load = load_model(path) #<----- The Model
score = criticality_network_load.evaluate(corpora_test_x, target_test_y, verbose=1)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
history_predict = criticality_network_load.predict(x=corpora_test_x)
history_predict
inferred_data = pd.DataFrame(history_predict,columns=list('AB'))
target_data = pd.DataFrame(target_test_y,columns=list('LN'))
data = target_data.join(inferred_data)
y_true = list(data['L'])
y_score= list(data['A'])
average_precision = average_precision_score(y_true, y_score)
print('Average precision-recall score: {0:0.2f}'.format(average_precision))
#ROC Curve (all our samples are balanced)
auc = roc_auc_score(y_true, y_score)
print('AUC: %.3f' % auc)
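# AUC is the probability that a randomly chosen positive example is scored above a
# randomly chosen negative one; a minimal pure-Python version of what
# `roc_auc_score` computes, checked on toy data:

```python
def auc_score(y_true, y_score):
    # fraction of (positive, negative) pairs ranked correctly (ties count half)
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

toy_auc = auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```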
| nbs/02_alpha_securereqnet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="da023b5c"
# # Part 1: data cleaning and pre-processing
# + [markdown] id="a5a8ed19"
# ## Importing libraries
# + id="37a71a35"
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# + [markdown] id="1b3cef43"
# ## Importing dataset
# + id="cba1d5d3"
df=pd.read_csv('C:/cars-data.csv')
# + id="fc7a336f" outputId="fc2957a4-ad2d-4a27-a0e7-630cb556f768"
df
# + id="f2b7c77e" outputId="db9a1d2e-46c0-4499-8752-4b1947880db0"
df.head()
# + id="36cbb3a7" outputId="783eb866-7edb-4eb8-aa81-62cae15c0f7d"
df.info()
# + [markdown] id="5ef40eab"
# ## NULL value checking
# + id="8a2bc682" outputId="723a8da3-20e6-4b21-a4b7-11a565d17c09"
df.isna().sum()
# + [markdown] id="6e2e3ad1"
# ## Data pre-processing
# + id="d8fec8f9"
df=df.dropna()
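# `dropna()` removes every row containing at least one missing value; a small
# sketch on toy data (hypothetical column values, not from the cars dataset):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'Horsepower': [130.0, np.nan, 165.0],
                    'Year': [1970, 1970, 1971]})
cleaned = toy.dropna()  # keeps only the rows with no missing values
```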
# + id="fef51eb3" outputId="9fad54a4-77c1-4119-b3a9-ea1ec6d592b9"
print(df)
# + id="af788d8a" outputId="ee739240-6f30-4bb0-d33f-984644f441cf"
df.dtypes
# + id="646e3615" outputId="b7bc1d4e-031c-467a-843f-7568b907f697"
x = df['Horsepower']
y = df['Year']
plt.plot(x, y)
plt.show()
# + id="42611908" outputId="2e130a4c-4fe0-46e4-e513-b6eb23672f10"
x = df['Edispl']
y = df['Year']
plt.hist(x)
plt.show()
# + id="2208e43b" outputId="8de47bd7-32d6-4828-f798-f423faa8060c"
df.plot()  # pandas plots each numeric column against the row index
plt.show()
| project/NL-to-SQL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <center> <font size='6' font-weight='bold'> Puzzle Solver </font> </center>
# <center> <i> <NAME> & <NAME> </i> </center>
#
#
# <img src=ressources/image_couverture.jpg>
# # Preliminaries
# ## Modules
# +
import numpy as np
from PIL import Image
from PIL import ImageFilter
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import os
import time
from pprint import pprint
from copy import deepcopy
import sys
# -
# %load_ext autoreload
# %autoreload 2
from utils import *
from main import *
# # Splitting the image
# **Important reminder**
# With PIL, the coordinate system used is the following:
# <img src=ressources/coords_system.png>
# +
# Global variables, to change according to the given puzzle!
filename = 'img_test.jpg'
nb_lines = 9
nb_cols = 9
im_shuffled = read_img(filename)
cropped = split_img(im_shuffled, nb_lines, nb_cols, margin=(25,35))
plt.imshow(cropped[(0,0)])
# -
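# With PIL's `(left, upper, right, lower)` box convention, the crop boxes used by a
# splitter like `split_img` can be computed as below (a sketch without the margin
# handling; the real `split_img` is defined in the project's `utils`/`main`):

```python
def tile_boxes(width, height, nb_lines, nb_cols):
    # PIL crop boxes are (left, upper, right, lower), origin at the top-left
    w, h = width // nb_cols, height // nb_lines
    return {(i, j): (j * w, i * h, (j + 1) * w, (i + 1) * h)
            for i in range(nb_lines) for j in range(nb_cols)}

boxes = tile_boxes(90, 90, 3, 3)  # a hypothetical 90x90 image cut into 3x3 tiles
```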
save_cropped(cropped)
dicBestConfig = getBestConfig(cropped, nb_lines, nb_cols)
ordered_list = getOrderedConfigs(dicBestConfig, reverse=False)
ordered_list
main(ListFit=ordered_list, n=9)
| puzzle-solver.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Data Science Boot Camp
# + [markdown] slideshow={"slide_type": "fragment"}
# ## Introduction to Python
# + [markdown] slideshow={"slide_type": "slide"}
# <h2 style="color:maroon;"> Chapters </h2>
# <ul>
# <li><a href="#intro" style="text-decoration:none;"><h3 style="color:black;">Introduction to Python</h3></a></li>
# <li><a href="#types" style="text-decoration:none;"><h3 style="color:black;">Types and Operations</h3></a></li>
# <li><a href="#syntax" style="text-decoration:none;"><h3 style="color:black;">Statements and Syntax</h3></a></li>
# <li><a href="#func" style="text-decoration:none;"><h3 style="color:black;">Functions and Files</h3></a></li>
# <li><a href="#oop" style="text-decoration:none;"><h3 style="color:black;">Object-Oriented Programming & Classes</h3></a></li>
# </ul>
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# <div id="intro">
# <h2 style="color:maroon;"> Introduction to Python </h2>
# <ul>
# <li><a style="text-decoration:none;"><h3 style="color:black;">History of Python</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">What is Python?</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Installing & Running Python</h3></a></li>
# </ul>
# </div>
#
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">History of Python</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * Invented in the Netherlands, early 90’s by <NAME><br>
# <br>
# * Named after “Monty Python”<br>
# <br>
# * Open sourced from the beginning<br>
# <br>
# * Considered a scripting language, but is much more<br>
# <br>
# * Scalable, object oriented and functional from the beginning<br>
# <br>
# * Used by Google from the beginning
# + [markdown] slideshow={"slide_type": "notes"}
# * Monty Python (also collectively known as The Pythons) were a British surreal comedy group who created their sketch comedy show Monty Python's Flying Circus, which first aired on the BBC in 1969. Forty-five episodes were made over four series.
#
# + [markdown] slideshow={"slide_type": "slide"}
# “Python is an experiment in how much freedom programmers need. Too much freedom and nobody can read another's code; too little and expressiveness is endangered.”
# <br>
# <p style="text-align:right;">- <NAME></p>
# + [markdown] slideshow={"slide_type": "notes"}
# * <NAME> ( born 31 January 1956) is a Dutch programmer best known as the author of the Python programming language, for which he is the "Benevolent Dictator For Life" (BDFL), which means he continues to oversee Python development, making decisions when necessary. From 2005 to December 2012, he worked at Google, where he spent half of his time developing the Python language. In January 2013, he started working for Dropbox.
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Python 1</h3>
# * Python 1.0 - January 1994<br>
# <br>
# * Python 1.5 - December 31, 1997<br>
# <br>
# * Python 1.6 - September 5, 2000<br>
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Python 2</h3>
# * Python 2.0 - October 16, 2000<br>
# <br>
# * Python 2.1 - April 17, 2001<br>
# <br>
# * Python 2.2 - December 21, 2001<br>
# <br>
# * Python 2.3 - July 29, 2003<br>
# <br>
# * Python 2.4 - November 30, 2004<br>
# <br>
# * Python 2.5 - September 19, 2006<br>
# <br>
# * Python 2.6 - October 1, 2008<br>
# <br>
# * Python 2.7 - July 3, 2010<br>
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Python 3</h3>
# * Python 3.0 - December 3, 2008<br>
# <br>
# * Python 3.1 - June 27, 2009<br>
# <br>
# * Python 3.2 - February 20, 2011<br>
# <br>
# * Python 3.3 - September 29, 2012<br>
# <br>
# * Python 3.4 - March 16, 2014<br>
# <br>
# * Python 3.5 - September 13, 2015<br>
# <br>
# * Python 3.6 - December 23, 2016
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">What is Python?</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * Python is an open source computer programming language.<br>
# <br>
# * Python is an interpreted language.<br>
# <br>
# * Python is dynamically typed.<br>
# <br>
# * Python is well suited to object oriented programming.<br>
# <br>
# * Python is simple, easy to learn syntax emphasizes readability.<br>
# <br>
# * Python makes difficult things easy so programmers can focus on algorithms and data structures rather than low-level details.
# + [markdown] slideshow={"slide_type": "notes"}
# - Python is an interpreted language. That means that, unlike languages like C and its variants, Python does not need to be compiled before it is run. Other interpreted languages include PHP and Ruby.
# - Python is dynamically typed, this means that you don't need to state the types of variables when you declare them or anything like that. You can do things like x=111 and then x="I'm a string" without error.
# - Like JavaScript, Python supports a programming style that uses simple functions and variables without engaging in class definitions.
# + [markdown] slideshow={"slide_type": "slide"}
# * The Zen of Python is a collection of 20 software principles that influences the design of the Python programming language; only 19 of them were written down, around June 1999, by <NAME>.
# + slideshow={"slide_type": "fragment"}
import this
# + [markdown] slideshow={"slide_type": "slide"}
# Some of the platforms that Python supports:
# + [markdown] slideshow={"slide_type": "fragment"}
# * Amiga Research OS (AROS)<br>
# <br>
# * Application System 400 (AS/400)<br>
# <br>
# * BeOS<br>
# <br>
# * Linux<br>
# <br>
# * Mac OS X (comes pre-installed with the OS)<br>
# <br>
# * Microsoft Disk Operating System (MS-DOS)<br>
# <br>
# * Playstation<br>
# <br>
# * Solaris<br>
# <br>
# * Windows 32-bit / 64-bit<br>
# <br>
# * Windows CE/Pocket PC
# + [markdown] slideshow={"slide_type": "slide"}
# Why Do People Use Python?
# + [markdown] slideshow={"slide_type": "fragment"}
# * Software quality<br>
# <br>
# * Developer productivity<br>
# <br>
# * Program portability<br>
# <br>
# * Support libraries<br>
# <br>
# * Component integration<br>
# <br>
# * Enjoyment
# + [markdown] slideshow={"slide_type": "slide"}
# What Can I Do with Python?
# + [markdown] slideshow={"slide_type": "fragment"}
# * Systems Programming<br>
# <br>
# * GUIs<br>
# <br>
# * Internet Scripting<br>
# <br>
# * Component Integration<br>
# <br>
# * Database Programming<br>
# <br>
# * Rapid Prototyping
# + [markdown] slideshow={"slide_type": "slide"}
# What Can I Do More?
# + [markdown] slideshow={"slide_type": "fragment"}
# * Data Analysis<br>
# <br>
# * Gaming<br>
# <br>
# * Image Processing<br>
# <br>
# * Artificial Intelligence<br>
# <br>
# * Data Visualization<br>
# <br>
# * Scientific Programming
# + [markdown] slideshow={"slide_type": "notes"}
# `Systems Programming`<br>
# * Python’s built-in interfaces to operating-system services make it ideal for writing portable, maintainable system-administration tools and utilities (sometimes called shell tools). Python programs can search files and directory trees, launch other programs, do parallel processing with processes and threads, and so on.
# * Python’s standard library comes with POSIX bindings and support for all the usual OS tools: environment variables, files, sockets, pipes, processes, multiple threads, regular expression pattern matching, command-line arguments, standard stream interfaces, shell-command launchers, filename expansion, zip file utilities, XML and JSON parsers, CSV file handlers, and more. In addition, the bulk of Python’s system interfaces are designed to be portable; for example, a script that copies directory trees typically runs unchanged on all major Python platforms.<br>
# `GUIs`<br>
# * Python’s simplicity and rapid turnaround also make it a good match for graphical user interface programming on the desktop. Python comes with a standard object-oriented interface to the Tk GUI API called tkinter (Tkinter in 2.X) that allows Python programs to implement portable GUIs with a native look and feel. Python/tkinter GUIs run unchanged on Microsoft Windows, X Windows (on Unix and Linux), and the Mac OS (both Classic and OS X).<br>
# `Internet Scripting`<br>
# * Python comes with standard Internet modules that allow Python programs to perform a wide variety of networking tasks, in client and server modes. Scripts can communicate over sockets; extract form information sent to server-side CGI scripts; transfer files by FTP; parse and generate XML and JSON documents… In addition, full-blown web development framework packages for Python, such as Django, TurboGears, web2py, Pylons, Zope, and WebWare, support quick construction of full-featured and production-quality websites with Python.<br>
# `Component Integration`<br>
# * Python’s ability to be extended by and embedded in C and C++ systems makes it useful as a flexible glue language for scripting the behavior of other systems and components. For instance, integrating a C library into Python enables Python to test and launch the library’s components, and embedding Python in a product enables onsite customizations to be coded without having to recompile the entire product (or ship its source code at all).<br>
# `Database Programming`<br>
# * For traditional database demands, there are Python interfaces to all commonly used relational database systems—Sybase, Oracle, Informix, ODBC, MySQL, PostgreSQL, SQLite, and more. The Python world has also defined a portable database API for ac- cessing SQL database systems from Python scripts.<br>
# `Rapid Prototyping`<br>
# * Unlike some prototyping tools, Python doesn’t require a complete rewrite once the prototype has solidified. Parts of the system that don’t require the efficiency of a language such as C++ can remain coded in Python for ease of maintenance and use.<br>
# `Data Analysis`<br>
# * Powerful statistical and numerical packages. NumPy and pandas allow you to read/manipulate data efficiently and easily. Scikit-learn allows you to train and apply machine learning algorithms to your data and make predictions.
# * PyMySQL allows you to easily connect to MySQL database, execute queries and extract data.
# * BeautifulSoup to easily read in XML and HTML type data which is quite common nowadays.
# * Natural language analysis with the NLTK package.<br>
# `Gaming`<br>
# * Game programming and multimedia with pygame, cgkit, pyglet, PySoy, Panda3D, and others<br>
# `Image Processing`<br>
# * Image processing with PIL and its newer Pillow fork, PyOpenGL, Blender, Maya, and more<br>
# `Artificial Intelligence`<br>
# * Artificial intelligence with the PyBrain neural net library and the Milk machine learning toolkit
# * Tensorflow for some neural network<br>
# `Data Visualization`<br>
# * Data visualization with Mayavi, matplotlib, VTK, VPython, and more<br>
# `Scientific Programming`<br>
# * Scientific Programming has grown to become one of Python’s most compelling use cases. Prominent here, the NumPy high-performance numeric programming extension for Python mentioned earlier in- cludes such advanced tools as an array object, interfaces to standard mathematical libraries, and much more.
#
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Installing Python</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * Python is pre-installed on most Unix systems, including Linux and MAC OS X<br>
# <br>
# * The pre-installed version may not be the most recent one.<br>
# <br>
# * You can download from “http://python.org/download/”.<br>
# <br>
# * Also you can check out Anaconda.<br>
# <br>
# * Anaconda is a Python distribution that includes nearly 200 packages for data science. ( NumPy, SciPy, pandas, Jupyter, Matplotlib)
# + [markdown] slideshow={"slide_type": "slide"}
# * There are several options for an IDE<br>
# <br>
# * The best Python IDEs for data science are “Spyder”, “PyCharm”, “Rodeo”, “Jupyter Notebook” and “Atom Text Editor”.<br>
# <br>
# * We will use Jupyter formerly known as Ipython Notebook in this boot camp.
# + [markdown] slideshow={"slide_type": "notes"}
# “The best Python IDEs” paper from datacamp.com → https://www.datacamp.com/community/tutorials/data-science-python-ide
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Running Python</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * For running interactively on Unix terminals or Windows Powershell type ‘python’.<br>
# <br>
# > Python prompts with ‘>>>’.<br>
# \>\>\> 2 + 2<br>
# 4<br>
# * To exit Python (not Idle):
# * In Unix, type CONTROL-D
# * In Windows, type CONTROL-Z + <Enter>
# * Evaluate exit()
# + [markdown] slideshow={"slide_type": "slide"}
# * For running python scripts via the python interpreter type ‘python’ and script name.
# > python bootcamp.py
# * Make a python file directly executable by:
# >Adding the appropriate path to your python interpreter as the first line of your file (#!/usr/bin/python)<br>
# Making the file executable (chmod a+x bootcamp.py)<br>
# Invoking file from Unix command line (bootcamp.py)<br>
# Or you can use your IDE for running.<br>
#
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# <div id="types">
# <h2 style="color:maroon;"> Types and Operations </h2>
# <ul>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Variables</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Data Types</h3></a></li>
# </ul>
# </div>
#
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Variables</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * Unlike other programming languages, Python does not need specific declaration of the data type to reserve memory space.<br>
# <br>
# * Values are assigned to variables with an equals sign ( “=” ).<br>
# + slideshow={"slide_type": "fragment"}
age = 3 # We assign the variable age a integer value
pi = 3.14 # We give the variable grade a floating point value
mood = "happy" # We give the variable mood an string value
# + [markdown] slideshow={"slide_type": "fragment"}
# * Notice that each variable contains a different data type, however, we never had to explicitly declare each of their data types.
# + [markdown] slideshow={"slide_type": "slide"}
# * You can also assign multiple variables a value at the same time.
# + slideshow={"slide_type": "fragment"}
a = b = c = 3
# + [markdown] slideshow={"slide_type": "fragment"}
# * You can also assign multiple variables on the same line with different values and even different data types:
# + slideshow={"slide_type": "fragment"}
age, pi, mood = 3, 3.14, 'happy'
# + [markdown] slideshow={"slide_type": "fragment"}
# * I'd recommend doing whatever you feel is the most simple to understand.
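# Multiple assignment also makes the classic swap a one-liner:

```python
a, b = 1, 2
a, b = b, a  # the right-hand tuple is built first, then unpacked
```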
# + [markdown] slideshow={"slide_type": "slide"}
# * The Python language defines 2 kinds of variables:<br>
# `Global Variables` and `Local Variables`<br>
# + [markdown] slideshow={"slide_type": "slide"}
# * A global variable is visible to all functions, constructors and blocks within the module in which it is declared.
# + slideshow={"slide_type": "fragment"}
a = 3
def testGlobalVariable():
    print(a)
testGlobalVariable()
# + [markdown] slideshow={"slide_type": "slide"}
# * Local variables are declared within functions, blocks or constructors. They are created when the function is called and destroyed as soon as the function returns.
# + slideshow={"slide_type": "fragment"}
a = 3
def testLocalVariable():
    x = 5
testLocalVariable()
print(x)  # NameError: x only existed inside testLocalVariable
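# To rebind a module-level name from inside a function, Python requires the
# `global` statement:

```python
counter = 0

def increment():
    global counter  # without this, 'counter += 1' would try to create a local name
    counter += 1

increment()
```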
# + [markdown] slideshow={"slide_type": "slide"}
# * For reading user input, Python 2 provides a built-in function: raw_input
# + slideshow={"slide_type": "fragment"}
v = raw_input("Enter a value for me: ")
print("You entered: " + v)
type(v)
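# In Python 3, `raw_input` was renamed to `input`; a version-agnostic alias for
# scripts that must run on both (an illustrative pattern, not from the slides):

```python
try:
    read_line = raw_input  # Python 2
except NameError:
    read_line = input      # Python 3: raw_input was renamed to input
```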
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Data Types</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * When you create a variable, the data stored within it can be of many different data types.<br>
# <br>
# * Python has multiple different data types that you can use to store different types of values, each of which will have its own set of unique operations that you can use to manipulate and work with the variables.<br>
# <br>
# * Python has 5 standard core Data Types:<br>
# `Numbers`, `Strings`, `Lists`, `Tuples`, `Dictionary`
# + [markdown] slideshow={"slide_type": "slide"}
# <h2 style="color:maroon;"> Data Types </h2>
# <ul>
# <li><a style="text-decoration:none;"><h3 style="color:maroon;">Numbers</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Strings</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Lists</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Tuples</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Dictionary</h3></a></li>
# </ul>
#
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Numbers</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * Number data types store numbers.<br>
# <br>
# * They are created when you create a variable that holds a numerical value.<br>
# <br>
# * Python supports 4 types of number values:<br>
# `int`, `long`, `float` and `complex`
#
# + [markdown] slideshow={"slide_type": "slide"}
# * In Python 2, the int data type is a signed machine-word integer, so its range is platform dependent.<br>
# <br>
# * On 32-bit builds its maximum value is:<br>
# `2,147,483,647 (2^31 - 1)`<br>
# <br>
# * On 64-bit builds its maximum value is:<br>
# `9,223,372,036,854,775,807 (2^63 - 1)`, which is the `sys.maxint` printed below<br>
# <br>
# * Int is the most commonly used data type; overflowing it automatically promotes the value to long
# + slideshow={"slide_type": "fragment"}
import sys
intSample = sys.maxint
print(intSample) # signed; sys.maxint is 2^63 - 1 on a 64-bit build
print(type(intSample))
intSample = intSample + 1
print(type(intSample))
# + [markdown] slideshow={"slide_type": "slide"}
# * Long data type is a 64-bit signed two’s complement integer.<br>
# <br>
# * It’s maximum value is:<br>
# `9,223,372,036,854,775,807 (2^63 -1)`<br>
# <br>
# * It’s minimum value is:<br>
# `-9,223,372,036,854,775,808 (-2^63)`<br>
# <br>
# * This data type is used when numbers larger than an int can hold are required
# + slideshow={"slide_type": "fragment"}
longSample = 12351235  # Python 2 wrote long literals with a trailing L (12351235L); Python 3 dropped the suffix
print(longSample)
print(type(longSample))
# + [markdown] slideshow={"slide_type": "slide"}
# * The floating point data type is a double-precision 64-bit IEEE 754 floating point.<br>
# <br>
# * Its largest finite value is about `1.8e308`, and its smallest positive normal value about `2.2e-308`.<br>
# <br>
# * For the full details, `import sys` and inspect `sys.float_info`<br>
# + slideshow={"slide_type": "fragment"}
floatSample = 3.14
print(floatSample)
print(type(floatSample))
# + slideshow={"slide_type": "fragment"}
import sys
sys.float_info
# v = sys.float_info.max
# if that's not big enough then there is another option :)
# infinity = float("inf")
# infinity > v
# + [markdown] slideshow={"slide_type": "slide"}
# * Complex numbers are ordered pairs of real floating-point numbers, written `x + yj`<br>
# <br>
# * `x` is the real part, and `y` (marked by the `j` suffix) is the imaginary part.<br>
# <br>
# + slideshow={"slide_type": "fragment"}
complexSample = 3.14j
print(complexSample)
print(type(complexSample))
# + [markdown] slideshow={"slide_type": "slide"}
# * Beginning with Python 2.0, a set of additional assignment statement formats became available.
# * Known as augmented assignments, these were borrowed from the C language.
# + slideshow={"slide_type": "fragment"}
x = 10
y = 5
x = x + y # Traditional form
x += y # Newer augmented form
# + slideshow={"slide_type": "fragment"}
x += y
x *= y
x %= y
x &= y
x ^= y
x <<= y
x -= y
x /= y
x **= y
x |= y
x >>= y
x //= y
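The augmented forms above run silently; a short sketch showing the effect of a few of the less familiar ones (the starting values are chosen arbitrarily for illustration):

```python
x = 7
x //= 2   # floor division: 7 // 2 -> 3
print(x)
x **= 3   # exponentiation: 3 ** 3 -> 27
print(x)
x %= 5    # remainder: 27 % 5 -> 2
print(x)
x <<= 2   # left shift: 2 << 2 -> 8
print(x)
```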
# + [markdown] slideshow={"slide_type": "skip"}
# <table>
# <thead>
# <tr>
# <th style="text-align:left;">Operator</th>
# <th style="text-align:left;">Description</th>
# </tr>
# </thead>
# <tbody>
# <tr>
# <td style="text-align:left;">lambda</td>
# <td style="text-align:left;">Lambda expression</td>
# </tr>
# <tr>
# <td style="text-align:left;">if – else</td>
# <td style="text-align:left;">Conditional expression</td>
# </tr>
# <tr>
# <td style="text-align:left;">or</td>
# <td style="text-align:left;">Boolean OR</td>
# </tr>
# <tr>
# <td style="text-align:left;">and</td>
# <td style="text-align:left;">Boolean AND</td>
# </tr>
# <tr>
# <td style="text-align:left;">not x</td>
# <td style="text-align:left;">Boolean NOT</td>
# </tr>
# <tr>
# <td style="text-align:left;">in, not in, is, is not, <, <=, >, >=, <>, !=, ==</td>
# <td style="text-align:left;">Comparisons, including membership tests and identity tests</td>
# </tr>
# <tr>
# <td style="text-align:left;">|</td>
# <td style="text-align:left;">Bitwise OR</td>
# </tr>
# <tr>
# <td style="text-align:left;">^</td>
# <td style="text-align:left;">Bitwise XOR</td>
# </tr>
# <tr>
# <td style="text-align:left;">&</td>
# <td style="text-align:left;">Bitwise AND</td>
# </tr>
# <tr>
# <td style="text-align:left;"><<, >></td>
# <td style="text-align:left;">Shifts</td>
# </tr>
# <tr>
# <td style="text-align:left;">+, -</td>
# <td style="text-align:left;">Addition and subtraction</td>
# </tr>
# <tr>
# <td style="text-align:left;">*, /, //, %</td>
# <td style="text-align:left;">Multiplication, division, remainder</td>
# </tr>
# <tr>
# <td style="text-align:left;">+x, -x, ~x</td>
# <td style="text-align:left;">Positive, negative, bitwise NOT</td>
# </tr>
# <tr>
# <td style="text-align:left;">**</td>
# <td style="text-align:left;">Exponentiation</td>
# </tr>
# <tr>
# <td style="text-align:left;">x[index], x[index:index], x(arguments...), x.attribute</td>
# <td style="text-align:left;">Subscription, slicing, call, attribute reference</td>
# </tr>
# <tr>
# <td style="text-align:left;">(expressions...), [expressions...], {key: value...}, `expressions...`</td>
# <td style="text-align:left;">Binding or tuple display, list display, dictionary display, string conversion</td>
# </tr>
# </tbody>
# </table>
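The precedence table above can be confirmed directly; a small sketch (the expressions are chosen only for illustration):

```python
# * binds tighter than +
print(2 + 3 * 4)        # 14, not 20
# parentheses override precedence
print((2 + 3) * 4)      # 20
# ** binds tighter than unary minus ...
print(-2 ** 2)          # -4, evaluated as -(2 ** 2)
# ... and is right-associative
print(2 ** 3 ** 2)      # 512, evaluated as 2 ** (3 ** 2)
# comparisons bind looser than arithmetic
print(1 + 1 == 2)       # True
```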
# + [markdown] slideshow={"slide_type": "slide"}
# * Python provides built-in functions. For numbers, we can examine these functions in 3 categories:<br>
# `Math Functions`<br>
# <br>
# `Conversion Functions`<br>
# <br>
# `Collection Functions`<br>
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Numbers (Math Functions)</h3>
# + [markdown] slideshow={"slide_type": "notes"}
# * The formal definitions of mathematical built-in functions are provided below.
# + [markdown] slideshow={"slide_type": "fragment"}
# * abs ( number ) → number<br>
# Return the absolute value of the argument, | x |.
# + slideshow={"slide_type": "fragment"}
abs(-10)
# + [markdown] slideshow={"slide_type": "slide"}
# * pow ( x , y [, z] ) → number<br>
#       Raise x to the y power; with a third argument, return x ** y modulo z.
# + slideshow={"slide_type": "fragment"}
print(pow(2,3))
print(pow(2,3,5))
# + [markdown] slideshow={"slide_type": "slide"}
# * round ( number , [ ndigits ] ) → number<br>
#       Round number to ndigits beyond the decimal point. With ndigits omitted, Python 3 returns an int.
# + slideshow={"slide_type": "fragment"}
round(3.141592, 2)
# + [markdown] slideshow={"slide_type": "slide"}
# * cmp ( x , y ) → integer<br>
#       Compare x and y, returning -1, 0, or 1. Built in on Python 2 only; removed in Python 3.
# + slideshow={"slide_type": "fragment"}
def cmp(x, y):  # cmp() was removed in Python 3; this reproduces it
    return (x > y) - (x < y)
print(cmp(10, 5))
print(cmp(5, 5))
print(cmp(5, 10))
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Numbers (Conversion Functions)</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * The conversion functions provide alternate representations for numeric values.
# + [markdown] slideshow={"slide_type": "fragment"}
# * hex ( number ) → string<br>
# Create a hexadecimal string representation of number . A leading '0x' is placed on the string as a reminder that this is hexadecimal.
#
# + slideshow={"slide_type": "fragment"}
hex(24)
# + [markdown] slideshow={"slide_type": "slide"}
# * oct ( number ) → string<br>
#       Create an octal string representation of number. A leading '0o' ('0' in Python 2) is placed on the string as a reminder that this is octal, not decimal.
# + slideshow={"slide_type": "fragment"}
oct(24)
# + [markdown] slideshow={"slide_type": "slide"}
# * int ( s , [ base ]) → integer<br>
#       Generates an integer from the string s. If base is supplied, s must be written in that base. If base is omitted, s must be decimal.
# + slideshow={"slide_type": "fragment"}
print(int("100"))
print(type(int("100")))
# + slideshow={"slide_type": "fragment"}
int("010111",2)
# + [markdown] slideshow={"slide_type": "slide"}
# * str ( object ) → string<br>
# Generate a string representation of the given object.
# + slideshow={"slide_type": "fragment"}
print(str(200))
print(type(str(200)))
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Numbers (Collection Functions)</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * There are two built-in functions which operate on a collection of data elements.
# + slideshow={"slide_type": "fragment"}
listSample = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# + [markdown] slideshow={"slide_type": "fragment"}
# * max ( sequence ) → value<br>
# Return the largest value in sequence
# + slideshow={"slide_type": "fragment"}
max(listSample)
# + [markdown] slideshow={"slide_type": "fragment"}
# * min ( sequence ) → value<br>
# Return the smallest value in sequence
# + slideshow={"slide_type": "fragment"}
min(listSample)
# + [markdown] slideshow={"slide_type": "slide"}
# * Python’s standard library is very extensive, offering a wide range of modules. Let’s look at 2 of them:<br>
# <br>
# `math Module`<br>
# <br>
# `random Module`
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Numbers (Math Module)</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * Math Module provides access to the mathematical functions defined by the C standard.<br>
# <br>
# * These functions cannot be used with complex numbers; use the functions of the same name from the “cmath” module if you require support for complex numbers.<br>
# <br>
# * All return values are floats.
# + [markdown] slideshow={"slide_type": "fragment"}
# * Let’s check out some of the functions provided by this module:
# + slideshow={"slide_type": "slide"}
import math
# + [markdown] slideshow={"slide_type": "fragment"}
# * math.ceil(x) → int<br>
#       Return the ceiling of x, the smallest integer value greater than or equal to x (an int in Python 3; Python 2 returned a float).
# + slideshow={"slide_type": "fragment"}
math.ceil(1.3)
# + [markdown] slideshow={"slide_type": "slide"}
# * math.factorial(x) → int<br>
#       Return x factorial as an integer. Raises ValueError if x is not integral or is negative.
# + slideshow={"slide_type": "fragment"}
math.factorial(5)
# + [markdown] slideshow={"slide_type": "slide"}
# * math.floor(x) → int<br>
#       Return the floor of x, the largest integer value less than or equal to x (an int in Python 3; Python 2 returned a float).
# + slideshow={"slide_type": "fragment"}
math.floor(1.9)
# + [markdown] slideshow={"slide_type": "slide"}
# * math.isnan(x) → boolean<br>
# Check if the float x is a NaN (not a number)
# + slideshow={"slide_type": "fragment"}
print(math.isnan(0.9))           # False: 0.9 is an ordinary float
print(math.isnan(float("nan")))  # True
# + [markdown] slideshow={"slide_type": "slide"}
# * math.pi<br>
# The mathematical constant π = 3.141592…, to available precision.
# + slideshow={"slide_type": "fragment"}
math.pi
# + [markdown] slideshow={"slide_type": "notes"}
# * math.e<br>
# The mathematical constant e = 2.718281…, to available precision.
# + slideshow={"slide_type": "notes"}
math.e
# + [markdown] slideshow={"slide_type": "notes"}
# * math.exp(x) → float<br>
# Return e**x
# + slideshow={"slide_type": "notes"}
math.exp(3)
# + [markdown] slideshow={"slide_type": "notes"}
# * math.log(x [, base]) → float<br>
# With one argument, return the natural logarithm of x (to base e).
# With two arguments, return the logarithm of x to the given base, calculated as log(x)/log(base).
# + slideshow={"slide_type": "notes"}
print(math.log(20.085536923187668))
print(math.log(64,2))
# + [markdown] slideshow={"slide_type": "notes"}
# * math.log10(x) → float<br>
# Return the base-10 logarithm of x. This is usually more accurate than log(x, 10).
# + slideshow={"slide_type": "notes"}
math.log10(100)
# + [markdown] slideshow={"slide_type": "notes"}
# * math.pow(x, y) → float<br>
# Return x raised to the power y. Unlike the built-in ** operator, math.pow() converts both its arguments to type float.
# + slideshow={"slide_type": "notes"}
math.pow(2,6)
# + [markdown] slideshow={"slide_type": "notes"}
# * math.sin(x) → float<br>
# Return the sine of x radians.
# + slideshow={"slide_type": "notes"}
math.sin(15)
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Numbers (Random Module)</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * random.random()<br>
# Return the next random floating point number in the range [0.0, 1.0).
# + slideshow={"slide_type": "fragment"}
import random
random.random()
# + [markdown] slideshow={"slide_type": "slide"}
# * random.uniform(a, b) <br>
# Return a random floating point number N such that a <= N <= b
# + slideshow={"slide_type": "fragment"}
random.uniform(1,10)
# + [markdown] slideshow={"slide_type": "slide"}
# * random.randint(a, b)<br>
# Return a random integer N such that a <= N <= b.
# + slideshow={"slide_type": "fragment"}
random.randint(1, 10)
# + [markdown] slideshow={"slide_type": "notes"}
# * random.randrange(start, stop[, step])<br>
# Return a randomly selected element from range(start, stop, step).
# + slideshow={"slide_type": "notes"}
random.randrange(0, 101, 2)
# + [markdown] slideshow={"slide_type": "slide"}
# * random.choice(seq)<br>
# Return a random element from the non-empty sequence seq
# + slideshow={"slide_type": "fragment"}
random.choice('abcdefghij')
# + [markdown] slideshow={"slide_type": "slide"}
# * random.shuffle(x)<br>
# Shuffle the sequence x in place.
# + slideshow={"slide_type": "fragment"}
items = [1, 2, 3, 4, 5, 6, 7]
random.shuffle(items)
items
# + [markdown] slideshow={"slide_type": "slide"}
# * random.sample(population, k)<br>
# Return a k length list of unique elements chosen from the population sequence.<br>
# Used for random sampling without replacement.
# + slideshow={"slide_type": "fragment"}
random.sample([1, 2, 3, 4, 5], 3)
# + [markdown] slideshow={"slide_type": "slide"}
# <h2 style="color:maroon;"> Data Types </h2>
# <ul>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Numbers</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:maroon;">Strings</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Lists</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Tuples</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Dictionary</h3></a></li>
# </ul>
#
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Strings</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * A string is identified as a continuous set of characters within a set of quotation marks.<br>
# <br>
# * Python allows strings to be declared within single (' ') and double (" ") quotes.<br>
# <br>
# * You can also think of strings as immutable sequences in Python.<br>
# + slideshow={"slide_type": "fragment"}
hello = "Hello World"  # the slices in the next cells assume this value
print(len(hello))      # return the length of the string
character = hello[0]   # return the first character of the string
print(character)
# + slideshow={"slide_type": "fragment"}
hello[1] = "o"  # raises TypeError: strings are immutable and cannot be changed in place
# + [markdown] slideshow={"slide_type": "slide"}
# * You can also slice strings:
# + slideshow={"slide_type": "fragment"}
print(hello[0:2])
print(hello[:5])
print(hello[6:])
print(hello[:-2])
print(hello[0:-1:2])
# + [markdown] slideshow={"slide_type": "slide"}
# * Strings support concatenation with the plus sign (joining two strings into a new string)
# + slideshow={"slide_type": "fragment"}
hello = "Hello"
world = "World"
msg = hello + " " + world
msg
# + [markdown] slideshow={"slide_type": "slide"}
# * Strings also support repetition
# + slideshow={"slide_type": "fragment"}
hello * 2
# + [markdown] slideshow={"slide_type": "slide"}
# * You can expand strings to a list
# + slideshow={"slide_type": "fragment"}
listSample = list(msg)
listSample
# + [markdown] slideshow={"slide_type": "slide"}
# * And you can join list elements to string
# + slideshow={"slide_type": "fragment"}
newMsg = ''.join(listSample)
newMsg
# + slideshow={"slide_type": "fragment"}
newMsgWithDots = '.'.join(listSample)
newMsgWithDots
# + [markdown] slideshow={"slide_type": "slide"}
# * The operations we’ve looked at so far are really sequence operations, and they work on Python’s other sequences too. <br>
#     Now let’s check out string-specific methods
# + slideshow={"slide_type": "fragment"}
hello = "Hello"
print(hello.find("lo")) # return index of substring
hi = hello.replace("ello", "i")
print(hi)
# + [markdown] slideshow={"slide_type": "slide"}
# * Split a string into substrings on a delimiter (handy as a simple form of parsing)<br>
# + slideshow={"slide_type": "fragment"}
line = "cat,dog,bird"
arr = line.split(',')
arr
# + [markdown] slideshow={"slide_type": "fragment"}
# * Upper and lowercase conversions:
# + slideshow={"slide_type": "fragment"}
print(hello.upper())
print(hello.lower())
# + [markdown] slideshow={"slide_type": "slide"}
# * Content tests:
# + slideshow={"slide_type": "fragment"}
textSample = "world"
digitSample = "01010"
print(textSample.isalpha())
print(digitSample.isdigit())
# + [markdown] slideshow={"slide_type": "slide"}
# * You can import re for pattern matching:
# + slideshow={"slide_type": "fragment"}
import re
pattern = 'Hello[ \t]*(.*)world'
match = re.match(pattern, 'Hello Python world')
match.group(1)
# + [markdown] slideshow={"slide_type": "slide"}
# * Strings also support an advanced substitution operation known as formatting:
# + slideshow={"slide_type": "fragment"}
'%s, dogs, and %s' % ('cats', 'birds') # Works on all versions
# + slideshow={"slide_type": "fragment"}
'{0}, dogs, and {1}'.format('cats', 'birds') # Works on 2.6+, 3.0+
# + slideshow={"slide_type": "fragment"}
'{}, dogs, and {}'.format('cats', 'birds') # Works on 2.7+, 3.1+
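format() also accepts named fields, which reads more clearly once a template has several slots; a small sketch (the field names `pet1`/`pet2` are made up for illustration):

```python
# named fields are filled by keyword arguments, in any order
template = '{pet1}, dogs, and {pet2}'
print(template.format(pet2='birds', pet1='cats'))  # cats, dogs, and birds
# a positional field can also be reused
print('{0} and more {0}'.format('cats'))           # cats and more cats
```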
# + [markdown] slideshow={"slide_type": "slide"}
# <h2 style="color:maroon;"> Data Types </h2>
# <ul>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Numbers</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Strings</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:maroon;">Lists</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Tuples</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Dictionary</h3></a></li>
# </ul>
#
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Lists</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * The list object is the most general sequence provided by Python.<br>
# <br>
# * Lists have no fixed size.<br>
# <br>
# * Lists are mutable, can be modified in place<br>
# + [markdown] slideshow={"slide_type": "fragment"}
# * You can have a list hold multiple different types of data
# + slideshow={"slide_type": "fragment"}
listSample = [3,'happy',3.14]
# + [markdown] slideshow={"slide_type": "slide"}
# * Because lists are sequences, they support all the sequence operations we saw with strings.
# + slideshow={"slide_type": "fragment"}
len(listSample)
# + [markdown] slideshow={"slide_type": "fragment"}
# * We can index, slice, and so on, just as for strings
# + slideshow={"slide_type": "fragment"}
print(listSample[0])
print(listSample[:-1]) # return a new list
print(listSample + [5,6]) # return a new list
print(listSample * 2) # return a new list
# + [markdown] slideshow={"slide_type": "slide"}
# * Lists can grow and shrink on demand
# + slideshow={"slide_type": "fragment"}
listSample.append('cats') # add item to last
listSample
# + slideshow={"slide_type": "fragment"}
listSample.pop() # pick & delete last item
listSample
# + slideshow={"slide_type": "fragment"}
listSample.pop(0) # pick & delete item on index 0
listSample
# + slideshow={"slide_type": "fragment"}
listSample.insert(1, "dogs") # insert at given index
listSample
# + slideshow={"slide_type": "fragment"}
listSample.remove('dogs') # remove the first 'dogs' appearance in the list
listSample
# + [markdown] slideshow={"slide_type": "slide"}
# * Most list methods change the list object in place, instead of creating a new one:
# + slideshow={"slide_type": "fragment"}
M = ['bb', 'aa', 'cc']
M.sort()
print(M)
M.reverse()
print(M)
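By contrast, the built-in sorted() returns a new list and leaves the original untouched, which makes the in-place behaviour of sort() easier to see; a small sketch:

```python
M = ['bb', 'aa', 'cc']
N = sorted(M)    # builds and returns a new, sorted list
print(N)         # ['aa', 'bb', 'cc']
print(M)         # ['bb', 'aa', 'cc'] -- the original is unchanged
print(M.sort())  # None: sort() changes M in place and returns nothing
print(M)         # ['aa', 'bb', 'cc']
```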
# + [markdown] slideshow={"slide_type": "slide"}
# * The list methods make it very easy to use a list as a stack
# + slideshow={"slide_type": "fragment"}
stack = [3, 4, 5]
stack.append(6)
print(stack)
stack.append(7)
print(stack)
stack.pop() # 7
print(stack)
stack.pop() # 6
print(stack)
# + [markdown] slideshow={"slide_type": "slide"}
# * It is also possible to use a list as a queue (First In, First Out)
# + slideshow={"slide_type": "fragment"}
queue = ["cats", "dogs", "birds"]
queue.append("cows")
print(queue)
queue.pop(0) # "cats"
print(queue)
queue.pop(0) # "dogs"
print(queue)
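Note that pop(0) has to shift every remaining element, so a list makes a slow queue as it grows; the standard library's collections.deque is the usual alternative. A minimal sketch:

```python
from collections import deque

queue = deque(["cats", "dogs", "birds"])
queue.append("cows")    # enqueue on the right
print(queue.popleft())  # dequeue from the left: cats
print(queue.popleft())  # dogs
print(list(queue))      # ['birds', 'cows']
```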
# + [markdown] slideshow={"slide_type": "notes"}
# * It is also possible to combine lists element-wise with the zip() function:<br>
# The zip() function returns an iterator of tuples built from the iterable objects passed to it.<br>
# If no parameters are passed, zip() returns an empty iterator.<br>
# If a single iterable is passed, zip() returns an iterator of 1-tuples.
#
# + slideshow={"slide_type": "notes"}
list1 = [1, 2, 3]
list2 = ['a', 'b', 'c', 'd']
print(list(zip()))                # in Python 3, wrap in list() to see the pairs
print(list(zip(list1)))
result = list(zip(list1, list2))  # pairing stops at the shorter input
print(result)
# + slideshow={"slide_type": "notes"}
newList1, newList2 = zip(*result)
print(newList1)
print(newList2)
# + [markdown] slideshow={"slide_type": "slide"}
# * One nice feature of Python’s core data types is that they support arbitrary nesting:
# + slideshow={"slide_type": "fragment"}
M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(M)
# + slideshow={"slide_type": "fragment"}
M[1]
# + slideshow={"slide_type": "fragment"}
M[1][2]
# + [markdown] slideshow={"slide_type": "slide"}
# <h2 style="color:maroon;"> Data Types </h2>
# <ul>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Numbers</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Strings</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Lists</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:maroon;">Tuples</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Dictionary</h3></a></li>
# </ul>
#
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Tuples</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * The tuple object is roughly like a list that cannot be changed.<br>
# <br>
# * Tuples are sequences, like lists, but they are immutable, like strings.<br>
# <br>
# * They're used to represent fixed collections of items.
#
# + [markdown] slideshow={"slide_type": "fragment"}
# * They are normally coded in parentheses instead of square brackets.
# + slideshow={"slide_type": "fragment"}
tupleSample = (1, 2, 3, 4)
print(tupleSample)
print(len(tupleSample))
print(tupleSample[0])
print(tupleSample + (5,6))
# + [markdown] slideshow={"slide_type": "slide"}
# * Tuples also have type-specific callable methods, but not nearly as many as lists
# -
print(tupleSample)
print(tupleSample.index(4))
print(tupleSample.count(4))
# + [markdown] slideshow={"slide_type": "slide"}
# * The primary distinction for tuples is that they cannot be changed once created.
# + slideshow={"slide_type": "fragment"}
tupleSample[2] = 4  # raises TypeError: tuples do not support item assignment
# + slideshow={"slide_type": "fragment"}
tupleSample.append(7)  # raises AttributeError: tuples have no append method
# + [markdown] slideshow={"slide_type": "slide"}
# <h2 style="color:maroon;"> Data Types </h2>
# <ul>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Numbers</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Strings</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Lists</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Tuples</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:maroon;">Dictionary</h3></a></li>
# </ul>
#
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Dictionary</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * Dictionaries consist of key-value pairs.<br>
# <br>
# * Each key maps to a value, and keys must be unique within a dictionary.<br>
# <br>
# * A dictionary works in a similar way to a real dictionary: think of a key as a word, and the value as the definition that the word holds.<br>
# <br>
# * A key can be almost any immutable Python data type<br>
# + slideshow={"slide_type": "slide"}
dict1 = {} # declares an empty dictionary
dict1["first"] = 3
dict1["second"] = "happy"
dict1["pi"] = 3.14
dict1
# + slideshow={"slide_type": "fragment"}
dict2 = {"first": 3, "second": "happy", "pi": 3.14}
dict2
# + slideshow={"slide_type": "fragment"}
dict3 = dict(zip(['first', 'second', 'pi'], [3, 'happy', 3.14]))
dict3
# + slideshow={"slide_type": "fragment"}
print(dict1["first"]) # prints the value of key "first"
print(dict2.keys()) # prints all keys
print(dict3.values()) # prints all values
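Because dictionaries are key-value pairs, iterating over items() yields both halves at once; a small sketch reusing the contents of the dictionaries above:

```python
prices = {"first": 3, "second": "happy", "pi": 3.14}
# items() yields (key, value) tuples
for key, value in prices.items():
    print(key, '->', value)
# membership tests check keys, not values
print("pi" in prices)   # True
print(3.14 in prices)   # False
```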
# + [markdown] slideshow={"slide_type": "slide"}
# <div id="syntax">
# <h2 style="color:maroon;"> Statements and Syntax </h2>
# <ul>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Control Flow Statements</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Loops</h3></a></li>
# </ul>
# </div>
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Control Flow Statements</h3>
# + [markdown] slideshow={"slide_type": "slide"}
# * The Python if statement is typical of if statements in most procedural languages.<br>
# <br>
# * It takes the form of an if test, followed by one or more optional elif (“else if”) tests and a final optional else block.<br>
# <br>
# * The tests and the else part each have an associated block of nested statements, indented under a header line
# + slideshow={"slide_type": "slide"}
if 1:
    print('True')
# + [markdown] slideshow={"slide_type": "fragment"}
# * As you can see, whitespace is meaningful in Python<br>
# <br>
# * Use a newline to end a line of code.<br>
# <br>
# * No braces { } to mark blocks of code, use consistent indentation instead.<br>
# <br>
# * First line with less indentation is outside of the block,<br>
# * First line with more indentation starts a nested block<br>
#
# + slideshow={"slide_type": "slide"}
x = 10
if x > 0:
    print("positive")
elif x == 0:
    print("0")
else:
    print("negative")
# + [markdown] slideshow={"slide_type": "slide"}
# * There is no switch or case statement in Python.<br>
# <br>
# * But you can use a dictionary-based 'switch':<br>
# + slideshow={"slide_type": "fragment"}
choices = {'ham': 1.99, 'eggs': 0.99, 'milk': 1.10}
customerChoice = 'ham'
print(choices[customerChoice])
# + [markdown] slideshow={"slide_type": "fragment"}
# * You can also use the dictionary's get method:
# + slideshow={"slide_type": "fragment"}
choices = {'ham': 1.99, 'eggs': 0.99, 'milk': 1.10}
print(choices.get('milk', 'Bad choice'))
print(choices.get('butter', 'Bad choice'))
# + [markdown] slideshow={"slide_type": "slide"}
# * and<br>
# If both operands are true then the condition becomes true.<br>
# (a and b) is true.
# + slideshow={"slide_type": "fragment"}
if 1 and 0:  # `and` is the logical operator; the bitwise & happens to agree for 0/1 operands
    print('True')
else:
    print('False')
# + [markdown] slideshow={"slide_type": "slide"}
# * or<br>
# If either of the two operands is non-zero then the condition becomes true.<br>
# (a or b) is true.
# + slideshow={"slide_type": "fragment"}
if 1 or 0:
    print('True')
else:
    print('False')
# + [markdown] slideshow={"slide_type": "slide"}
# * not<br>
# Used to reverse the logical state of its operand.<br>
# not (a and b) is false when both a and b are true.
# + slideshow={"slide_type": "fragment"}
if not 0:
    print('True')
else:
    print('False')
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Loops</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * Python has two main looping constructs: `while` and `for`.
# + slideshow={"slide_type": "fragment"}
i = 0
while i < 10:     # Loop test
    print(i)      # Loop body
    i += 1
print("while loop finished")
# + [markdown] slideshow={"slide_type": "slide"}
# * The for loop is a generic iterator in Python: it can step through the items in any ordered sequence or other iterable object
# + slideshow={"slide_type": "fragment"}
print(listSample)
for elements in listSample:
    print(elements)  # Loop body
print("\n\nfor loop finished")
# + [markdown] slideshow={"slide_type": "slide"}
# * You can also use the range function:
# + slideshow={"slide_type": "fragment"}
print(list(range(10)))         # in Python 3, range() is a lazy object; list() shows its contents
print(list(range(5, 10)))
print(list(range(10, 0, -1)))
print(list(range(0, 10, 3)))
# + slideshow={"slide_type": "fragment"}
total = 0  # avoid the name `sum`, which would shadow the built-in sum()
for i in range(0, 10):
    total = total + i
print(total)
# + [markdown] slideshow={"slide_type": "slide"}
# * Any sequence works in a for, as it’s a generic tool:
# + slideshow={"slide_type": "fragment"}
textList = ["Hello", "World"]
username = "python"
for i in textList:
print(i),
for i in username:
print(i + ''), # Print statement ends with a comma means print inline
# + [markdown] slideshow={"slide_type": "slide"}
# * `break`, `continue`, `pass`, and the loop `else` are the simple statements you can use with loops
# + [markdown] slideshow={"slide_type": "slide"}
# * The break statement causes an immediate exit from a loop.
# + slideshow={"slide_type": "fragment"}
total = 0
while True:
    num = int(input("enter an integer (0 to exit)"))  # Python 2's raw_input() became input() in Python 3
    if num == 0:
        break
    total = total + num
print(total)
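break is shown above; a short sketch of the other three (the numbers are chosen only for illustration):

```python
# continue jumps straight back to the loop test, skipping the rest of the body
for i in range(5):
    if i % 2 == 0:
        continue        # skip even numbers
    print(i)            # prints 1 and 3

# pass is an empty placeholder statement
for i in range(3):
    pass                # does nothing; the loop still runs to completion

# the loop else runs only if the loop was NOT ended by break
for i in range(3):
    if i == 10:
        break
else:
    print('no break occurred')
```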
# + [markdown] slideshow={"slide_type": "slide"}
# <div id="func">
# <h2 style="color:maroon;"> Functions and Files </h2>
# <ul>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Function Basics</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Lambda Functions</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Files</h3></a></li>
# </ul>
# </div>
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Function Basics</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * In simple terms, a function is a device that groups a set of statements so they can be run more than once in a program, a packaged procedure invoked by name.<br>
# <br>
# * Functions are the alternative to programming by cutting and pasting: instead of keeping multiple redundant copies of an operation’s code, you write it once and call it wherever needed
# + slideshow={"slide_type": "fragment"}
def times(x, y):    # Create and assign function
    return x * y    # Body executed when called
prod = times(2, 8)  # Call function
prod
# + [markdown] slideshow={"slide_type": "slide"}
# * Variables that are defined inside a function body have a local scope, and those defined outside have a global scope.<br>
# * This means that local variables can be accessed only inside the function in which they are declared, whereas global variables can be accessed throughout the program body by all functions.
#
# + slideshow={"slide_type": "fragment"}
z = 0  # This is a global variable
def sum(x, y):
    z = x + y  # This is a local variable
    print("Inside the function local sum: " + str(z))
    return z
sum(2, 2)
print("Outside the function global sum: " + str(z))
# + [markdown] slideshow={"slide_type": "slide"}
# * For changing global variable in function:
# + slideshow={"slide_type": "fragment"}
z = 0
def sum(x, y):
    global z
    z = x + y
    return z
sum(2, 2)
print(z)
# + [markdown] slideshow={"slide_type": "slide"}
# * You can call a function by using the following types of formal arguments:<br>
# <br>
# `Required arguments`<br>
# <br>
# `Default arguments`<br>
# <br>
# `Keyword arguments`<br>
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Required Arguments</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * Required arguments are the arguments passed to a function in correct positional order.
# + slideshow={"slide_type": "fragment"}
def sayHello(msg):
    print(msg)
sayHello()  # calling with 0 arguments raises a TypeError: msg is required
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Default Arguments</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * A default argument is an argument that assumes a default value if a value is not provided in the function call for that argument.<br>
# <br>
# * For some functions you may want to make some parameters optional; to do that, use default arguments.
#
# +
def sayHello(msg, times=1):
    print(msg * times)
sayHello("merhaba")
sayHello("Hello", 2)
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Keyword Arguments</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * When you use keyword arguments in a function call, the caller identifies the arguments by the parameter name.<br>
# <br>
# * If you have some functions with many parameters and you want to specify only some of them, you can use keyword arguments.
#
# + slideshow={"slide_type": "fragment"}
def func(x, y, z=1):
    print("x: " + str(x))
    print("y: " + str(y))
    print("z: " + str(z))
    print("------")
func(10, 20, 30)
func(y=10, x=3)
# + [markdown] slideshow={"slide_type": "notes"}
# <h3 style="color:maroon;">Extra: Variable-length arguments</h3>
# + [markdown] slideshow={"slide_type": "notes"}
# * You may need to process a function for more arguments than you specified while defining the function.<br>
# <br>
# * These arguments are called variable-length arguments and are not named in the function definition, unlike required and default arguments.
# + slideshow={"slide_type": "notes"}
def total(a=5, *numbers, **animals):
    print('a = ' + str(a))
    for item in numbers:
        print('item: ' + str(item))
    for firstPart, secondPart in animals.items():
        print(firstPart, secondPart)
total(10, 1, 2, 3, cats=5, dogs=10)
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Lambda Functions</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * Python supports the creation of anonymous functions at runtime, using a construct called "lambda".<br>
# <br>
# * This piece of code in the next cell shows the difference between a normal function definition ("f") and a lambda function ("g"):
# + slideshow={"slide_type": "fragment"}
def f(x):
    return x**2
x = f(5)
print(x)
# + slideshow={"slide_type": "fragment"}
g = lambda x: x**2
x = g(5)
print(x)
# + [markdown] slideshow={"slide_type": "fragment"}
# * Note that the lambda definition does not include a "return" statement; it always contains a single expression, whose value is returned.
# + [markdown] slideshow={"slide_type": "slide"}
# * In the following slide, we will first define a simple list of integer values, then use the standard functions filter(), map() and reduce() to do various things with that list. All three functions expect two arguments: a function and a list.
# + slideshow={"slide_type": "slide"}
numbers = [0, 6, 9, 12, 14]
# + slideshow={"slide_type": "fragment"}
print(list(filter(lambda x: x % 3 == 0, numbers)))
# + [markdown] slideshow={"slide_type": "fragment"}
# * filter() calls our lambda function for each element of the list and keeps only those elements for which the function returned "True". In Python 3 it returns a lazy iterator, so wrap it in list() to see the results.<br>
# <br>
# * In this case, we get a list of all elements that are multiples of 3. The expression x % 3 == 0 computes the remainder of x divided by 3 and compares the result with 0.
# + slideshow={"slide_type": "slide"}
print(list(map(lambda x: x * 2 + 10, numbers)))
# -
# * map() is used to convert our list. The given function is called for every element in the original list; in Python 3 map() also returns a lazy iterator of the return values, so wrap it in list() to see the results.<br>
# <br>
# * In this case, it computes 2 * x + 10 for every element.
# + slideshow={"slide_type": "slide"}
from functools import reduce  # reduce() moved to functools in Python 3
print(reduce(lambda x, y: x + y, numbers))
# + [markdown] slideshow={"slide_type": "fragment"}
# * reduce() is somewhat special. The "worker function" for this one must accept two arguments (we've called them x and y here), not just one. <br>
# <br>
# * The function is called with the first two elements from the list, then with the result of that call and the third element, and so on, until all of the list elements have been handled. This means that our function is called n-1 times if the list contains n elements.<br>
# <br>
# * The return value of the last call is the result of the reduce() construct.<br>
# <br>
# * In the above example, it simply adds the arguments, so we get the sum of all elements.
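# * To make the pairwise behaviour concrete, here is a plain-loop equivalent of reduce() (an illustrative sketch only):

```python
from functools import reduce

def my_reduce(func, items):
    # Start with the first element, then fold in the rest one at a time:
    # exactly n-1 calls for a list of n elements
    result = items[0]
    for item in items[1:]:
        result = func(result, item)
    return result

numbers = [0, 6, 9, 12, 14]
assert my_reduce(lambda x, y: x + y, numbers) == reduce(lambda x, y: x + y, numbers)
print(my_reduce(lambda x, y: x + y, numbers))  # 41
```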
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Files</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * When you’re working with Python, you don’t need to import a library in order to read and write files. It’s handled natively in the language.<br>
# <br>
# * The first thing you’ll need to do is use Python’s built-in open function to get a file object.<br>
# <br>
# * In short, the built-in open function creates a Python file object, which serves as a link to a file residing on your machine.
# + slideshow={"slide_type": "slide"}
myFile = open('myfile.txt', 'w')
myFile.write("['cats', 'dogs', 'birds', 'cows']\n['dogs', 'birds', 'cows']\n['birds', 'cows']")
myFile.close()  # flush the write to disk before reopening the file for reading
# + slideshow={"slide_type": "slide"}
myFile = open('myfile.txt')
print(myFile)
# + slideshow={"slide_type": "fragment"}
myFile = open('myfile.txt')
print(myFile.readline()) # Return first line of file
# + slideshow={"slide_type": "fragment"}
myFile = open('myfile.txt')
print(myFile.read()) # Read all at once into string
# + slideshow={"slide_type": "fragment"}
myFile = open('myfile.txt')
for line in myFile:
    print(line)
# + slideshow={"slide_type": "slide"}
myFile = open('myfile.txt', 'a')
myFile.write("\n['cows']")
myFile.close()
# + slideshow={"slide_type": "fragment"}
myFile = open('myfile.txt')
print(myFile.read())
# + [markdown] slideshow={"slide_type": "slide"}
# * You can see open modes and descriptions in the next slide.
# + [markdown] slideshow={"slide_type": "slide"}
# <table>
# <thead>
# <tr>
# <th style="text-align:left;">Mode</th>
# <th style="text-align:left;">Description</th>
# </tr>
# </thead>
# <tbody>
# <tr>
# <td style="text-align:left;">r</td>
# <td style="text-align:left;">Opens a file for reading only. The file pointer is placed at the beginning of the file. This is the default mode.</td>
# </tr>
# <tr>
# <td style="text-align:left;">rb</td>
# <td style="text-align:left;">Opens a file for reading only in binary format. The file pointer is placed at the beginning of the file.</td>
# </tr>
# <tr>
# <td style="text-align:left;">r+</td>
# <td style="text-align:left;">Opens a file for both reading and writing. The file pointer is placed at the beginning of the file.</td>
# </tr>
# <tr>
# <td style="text-align:left;">rb+</td>
# <td style="text-align:left;">Opens a file for both reading and writing in binary format. The file pointer is placed at the beginning of the file.</td>
# </tr>
# <tr>
# <td style="text-align:left;">w</td>
# <td style="text-align:left;">Opens a file for writing only. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing.</td>
# </tr>
# <tr>
# <td style="text-align:left;">wb</td>
# <td style="text-align:left;">Opens a file for writing only in binary format. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing.</td>
# </tr>
# <tr>
# <td style="text-align:left;">w+</td>
# <td style="text-align:left;">Opens a file for both writing and reading. Overwrites the existing file if the file exists. If the file does not exist, creates a new file for reading and writing.</td>
# </tr>
# <tr>
# <td style="text-align:left;">wb+</td>
# <td style="text-align:left;">Opens a file for both writing and reading in binary format. Overwrites the existing file if the file exists. If the file does not exist, creates a new file for reading and writing.</td>
# </tr>
# <tr>
# <td style="text-align:left;">a</td>
# <td style="text-align:left;">Opens a file for appending. The file pointer is at the end of the file if the file exists. That is, the file is in the append mode. If the file does not exist, it creates a new file for writing.</td>
# </tr>
# <tr>
# <td style="text-align:left;">ab</td>
# <td style="text-align:left;">Opens a file for appending in binary format. The file pointer is at the end of the file if the file exists. That is, the file is in the append mode. If the file does not exist, it creates a new file for writing.</td>
# </tr>
# <tr>
# <td style="text-align:left;">a+</td>
# <td style="text-align:left;">Opens a file for both appending and reading. The file pointer is at the end of the file if the file exists. The file opens in the append mode. If the file does not exist, it creates a new file for reading and writing.</td>
# </tr>
# <tr>
# <td style="text-align:left;">ab+</td>
# <td style="text-align:left;">Opens a file for both appending and reading in binary format. The file pointer is at the end of the file if the file exists. The file opens in the append mode. If the file does not exist, it creates a new file for reading and writing.
# </td>
# </tr>
# </tbody>
# </table>
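# * Whatever the mode, it is good practice to open files in a `with` block, which closes the file automatically even if an error occurs. A quick sketch (the file name here is just for illustration):

```python
# The file is guaranteed to be closed when the block exits
with open('myfile_demo.txt', 'w') as f:
    f.write("hello\nworld")

with open('myfile_demo.txt') as f:
    lines = f.readlines()

print(f.closed)   # True: no explicit close() needed
print(lines)      # ['hello\n', 'world']
```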
# + [markdown] slideshow={"slide_type": "slide"}
# * Example Write in CSV:
# + slideshow={"slide_type": "fragment"}
import csv
with open('sample.csv', 'w', newline='') as csvfile:  # text mode with newline='' in Python 3, not 'wb'
    writer = csv.writer(csvfile, delimiter=',')
    writer.writerows([['cats', 'dogs', 'birds']])
    writer.writerows([['cows', 'birds', 'cats']])
# + [markdown] slideshow={"slide_type": "slide"}
# * Example Read in CSV:
# + slideshow={"slide_type": "fragment"}
import csv
with open('sample.csv', 'r', newline='') as csvfile:  # text mode in Python 3, not 'rb'
    reader = csv.reader(csvfile, delimiter=',')
    for row in reader:
        print(row)
# + [markdown] slideshow={"slide_type": "slide"}
# * Example Write in JSON:
# + slideshow={"slide_type": "fragment"}
import json
weird_json = {"3": "Python", "2": "World", "1": "Hello"}
with open('sample.json', 'w') as jsonfile:
    json.dump(weird_json, jsonfile)
# + [markdown] slideshow={"slide_type": "slide"}
# * Example Read in JSON:
# + slideshow={"slide_type": "fragment"}
import json
with open('sample.json') as jsonfile:
    data = json.load(jsonfile)
print(data)
# + [markdown] slideshow={"slide_type": "fragment"}
# * When loading JSON in Python 2 you may see a `u` prefix on strings. The `u` prefix means those strings are unicode rather than 8-bit strings; in Python 3 all strings are unicode, so no prefix appears. Either way, the prefix is not part of your data.
# + slideshow={"slide_type": "fragment"}
print(data['1'])
print(type(data['1']))
print(type(data['1'].encode('utf-8')))
# + [markdown] slideshow={"slide_type": "slide"}
# <div id="oop">
# <h2 style="color:maroon;">Object-Oriented Programming & Classes</h2>
# <ul>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Object Oriented Programming (OOP)</h3></a></li>
# <li><a style="text-decoration:none;"><h3 style="color:black;">Classes</h3></a></li>
# </ul>
# </div>
#
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Object Oriented Programming (OOP)</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * All the examples we have looked at so far revolve around functions, i.e. blocks of statements which manipulate data. This is called the procedure-oriented way of programming.<br>
# <br>
# * There is another way of organizing your program which is to combine data and functionality and wrap it inside something called an object. This is called the object oriented programming paradigm.<br>
# <br>
# * Most of the time you can use procedural programming, but when writing large programs or have a problem that is better suited to this method, you can use object oriented programming techniques.
# + [markdown] slideshow={"slide_type": "slide"}
# <h3 style="color:maroon;">Classes</h3>
# + [markdown] slideshow={"slide_type": "fragment"}
# * Classes and objects are the two main aspects of object oriented programming.<br>
# <br>
# * A class creates a new type where objects are instances of the class.<br>
# <br>
# * Objects are a representation of real-world objects like cars, dogs, people, etc.<br>
# <br>
# * The objects share two main characteristics: data and behavior.
# + [markdown] slideshow={"slide_type": "fragment"}
# * For example, cars have data like number of wheels, number of doors, and seating capacity, and also have behavior: they accelerate, stop, show how much fuel is left, and so on.
# + [markdown] slideshow={"slide_type": "fragment"}
# * Data = Attributes
# * Behavior = Methods
# + [markdown] slideshow={"slide_type": "slide"}
# * A class is a blueprint, a model for its objects, a way to define attributes and behavior.
# + [markdown] slideshow={"slide_type": "fragment"}
# Let’s look at Python syntax for classes:
# + slideshow={"slide_type": "fragment"}
class Vehicle:
    pass
car = Vehicle()  # Creating an instance
# + [markdown] slideshow={"slide_type": "slide"}
# * Class methods have only one specific difference from ordinary functions:
# > * Class methods have an extra parameter, but you do not give a value for this parameter when you call the method; Python will provide it.<br>
# <br>
# > * This particular variable refers to the object itself, and by convention, it is given the name self.
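# > * In fact, `obj.method()` is just shorthand for `Class.method(obj)`; Python passes the instance in as `self`. A tiny sketch (the class name here is invented for illustration):

```python
class Greeter(object):
    def hello(self):
        return "hello from " + repr(self)

g = Greeter()
# The two calls below are equivalent; Python fills in `self` for us
assert g.hello() == Greeter.hello(g)
print(g.hello())
```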
# + [markdown] slideshow={"slide_type": "slide"}
# * Let’s define some attributes and methods to our ‘Vehicle’ Class:
# + slideshow={"slide_type": "fragment"}
class Vehicle:
    def __init__(self, numWheels, fuelType, seatCapacity, maxVelocity):
        self.numWheels = numWheels
        self.fuelType = fuelType
        self.seatCapacity = seatCapacity
        self.maxVelocity = maxVelocity
# + [markdown] slideshow={"slide_type": "fragment"}
# * As you can see, we use the `__init__` method, known as the constructor. When we create a vehicle object, we can set these attributes.
# + [markdown] slideshow={"slide_type": "fragment"}
# * Let’s create an object and define attributes:
# + slideshow={"slide_type": "fragment"}
teslaS = Vehicle(4, 'electric', 5, 250)
teslaS
# + [markdown] slideshow={"slide_type": "fragment"}
# * Let’s create another car, this time with methods:
# + slideshow={"slide_type": "slide"}
class Vehicle:
    def __init__(self, numWheels, fuelType, seatCapacity, maxVelocity):
        self.numWheels = numWheels
        self.fuelType = fuelType
        self.seatCapacity = seatCapacity
        self.maxVelocity = maxVelocity
    def speedUp(self, amount):
        self.maxVelocity += amount
    def printInfo(self):
        print("I have " + str(self.numWheels) + " wheels and can speed up to " + str(self.maxVelocity) + " km/h.")
myCar = Vehicle(4, 'electric', 5, 120)
myCar.printInfo()
myCar.speedUp(100)
myCar.printInfo()
# Source notebook: iPython Notebooks/Introduction to Python.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br>
#
# Clustering with scikit-learn
#
# <br><br></p>
# In this notebook, we will learn how to perform k-means clustering using scikit-learn in Python.
#
# We will use cluster analysis to generate a big-picture model of the weather at a local station using minute-granularity data. This dataset has on the order of millions of records. How do we create 12 clusters out of them?
#
# **NOTE:** The dataset we will use is in a large CSV file called *minute_weather.csv*. Please download it into the *weather* directory in your *Week-7-MachineLearning* folder. The download link is: https://drive.google.com/open?id=0B8iiZ7pSaSFZb3ItQ1l4LWRMTjg
# <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
#
# Importing the Necessary Libraries<br></p>
# +
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
import pandas as pd
import numpy as np
from itertools import cycle, islice
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates
# %matplotlib inline
# -
# <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
#
# Creating a Pandas DataFrame from a CSV file<br><br></p>
#
data = pd.read_csv('./weather/minute_weather.csv')
# <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold">Minute Weather Data Description</p>
# <br>
# The **minute weather dataset** comes from the same source as the daily weather dataset that we used in the decision-tree-based classifier notebook. The main difference between these two datasets is that the minute weather dataset contains raw sensor measurements captured at one-minute intervals, while the daily weather dataset contained processed and well-curated data. The data is in the file **minute_weather.csv**, which is a comma-separated file.
#
# As with the daily weather data, this data comes from a weather station located in San Diego, California. The weather station is equipped with sensors that capture weather-related measurements such as air temperature, air pressure, and relative humidity. Data was collected for a period of three years, from September 2011 to September 2014, to ensure that sufficient data for different seasons and weather conditions is captured.
#
# Each row in **minute_weather.csv** contains weather data captured for a one-minute interval. Each row, or sample, consists of the following variables:
#
# * **rowID:** unique number for each row (*Unit: NA*)
# * **hpwren_timestamp:** timestamp of measure (*Unit: year-month-day hour:minute:second*)
# * **air_pressure:** air pressure measured at the timestamp (*Unit: hectopascals*)
# * **air_temp:** air temperature measure at the timestamp (*Unit: degrees Fahrenheit*)
# * **avg_wind_direction:** wind direction averaged over the minute before the timestamp (*Unit: degrees, with 0 means coming from the North, and increasing clockwise*)
# * **avg_wind_speed:** wind speed averaged over the minute before the timestamp (*Unit: meters per second*)
# * **max_wind_direction:** highest wind direction in the minute before the timestamp (*Unit: degrees, with 0 being North and increasing clockwise*)
# * **max_wind_speed:** highest wind speed in the minute before the timestamp (*Unit: meters per second*)
# * **min_wind_direction:** smallest wind direction in the minute before the timestamp (*Unit: degrees, with 0 being North and increasing clockwise*)
# * **min_wind_speed:** smallest wind speed in the minute before the timestamp (*Unit: meters per second*)
# * **rain_accumulation:** amount of accumulated rain measured at the timestamp (*Unit: millimeters*)
# * **rain_duration:** length of time rain has fallen as measured at the timestamp (*Unit: seconds*)
# * **relative_humidity:** relative humidity measured at the timestamp (*Unit: percent*)
data.shape
data.head()
# <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
#
# Data Sampling<br></p>
#
# Lots of rows, so let us sample down by taking every 10th row. <br>
#
sampled_df = data[(data['rowID'] % 10) == 0]
sampled_df.shape
# <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
#
# Statistics
# <br><br></p>
#
sampled_df.describe().transpose()
sampled_df[sampled_df['rain_accumulation'] == 0].shape
sampled_df[sampled_df['rain_duration'] == 0].shape
# <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
#
# Drop all the Rows with Empty rain_duration and rain_accumulation
# <br><br></p>
#
del sampled_df['rain_accumulation']
del sampled_df['rain_duration']
rows_before = sampled_df.shape[0]
sampled_df = sampled_df.dropna()
rows_after = sampled_df.shape[0]
# <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
#
# How many rows did we drop ?
# <br><br></p>
#
rows_before - rows_after
sampled_df.columns
# <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
#
# Select Features of Interest for Clustering
# <br><br></p>
#
features = ['air_pressure', 'air_temp', 'avg_wind_direction', 'avg_wind_speed', 'max_wind_direction',
'max_wind_speed','relative_humidity']
select_df = sampled_df[features]
select_df.columns
select_df
# <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
#
# Scale the Features using StandardScaler
# <br><br></p>
#
X = StandardScaler().fit_transform(select_df)
X
# <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
#
# Use k-Means Clustering
# <br><br></p>
#
kmeans = KMeans(n_clusters=12)
model = kmeans.fit(X)
print("model\n", model)
# <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
#
# What are the centers of 12 clusters we formed ?
# <br><br></p>
#
centers = model.cluster_centers_
centers
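# We can also count how many samples fell into each cluster via `model.labels_`. The sketch below runs on synthetic data so it is self-contained; in this notebook you would pass the scaled weather matrix `X` instead:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
X_demo = rng.rand(100, 7)  # stand-in for the scaled weather features

demo_model = KMeans(n_clusters=12, n_init=10, random_state=0).fit(X_demo)
# One integer label per row; np.unique tallies the cluster sizes
labels, counts = np.unique(demo_model.labels_, return_counts=True)
print(dict(zip(labels.tolist(), counts.tolist())))
```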
# <p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br>
#
# Plots
# <br><br></p>
#
# Let us first create some utility functions which will help us in plotting graphs:
# +
# Function that creates a DataFrame with a column for Cluster Number
def pd_centers(featuresUsed, centers):
    colNames = list(featuresUsed)
    colNames.append('prediction')
    # Zip with a column called 'prediction' (index)
    Z = [np.append(A, index) for index, A in enumerate(centers)]
    # Convert to pandas data frame for plotting
    P = pd.DataFrame(Z, columns=colNames)
    P['prediction'] = P['prediction'].astype(int)
    return P
# +
# Function that creates Parallel Plots
def parallel_plot(data):
    my_colors = list(islice(cycle(['b', 'r', 'g', 'y', 'k']), None, len(data)))
    plt.figure(figsize=(15, 8)).gca().axes.set_ylim([-3, +3])
    parallel_coordinates(data, 'prediction', color=my_colors, marker='o')
# -
P = pd_centers(features, centers)
P
# # Dry Days
parallel_plot(P[P['relative_humidity'] < -0.5])
# # Warm Days
parallel_plot(P[P['air_temp'] > 0.5])
# # Cool Days
parallel_plot(P[(P['relative_humidity'] > 0.5) & (P['air_temp'] < 0.5)])
# Source notebook: Final Notebook/Weather Data Clustering using k-Means.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Imports
# +
import h5py
import pandas as pd
import numpy as np
from tqdm import tqdm
import os
import matplotlib.pyplot as plt
import cv2
from cv2 import imread, createCLAHE
from tensorflow.keras import layers
from tensorflow import keras
import tensorflow as tf
# +
dataset_folder = '/home/joffrey/Documents/projects/pneumonia/dataset/'
base_folder = dataset_folder + '/chest_xray/'
TRAIN_FOLDER = base_folder + 'train'
TEST_FOLDER = base_folder + 'test'
# -
# # Data transformation
categories = [
"NORMAL",
"PNEUMONIA"
]
# ### Create image labels dataframe
# We can't load all images into RAM, label them, and save them directly. However, we can label them through their paths and load them one by one later.
rows = []
# +
folders = [TRAIN_FOLDER, TEST_FOLDER]
# Loop through "NORMAL" and "PNEUMONIA" in the train and test folders
for folder in folders:
    for label in range(len(categories)):
        # Get the current directory
        category = categories[label]
        directory = f"{folder}/{category}/"
        # Loop through all files in this directory
        for file_ in os.listdir(directory):
            full_path = directory + '/' + file_
            rows.append({'label': label, 'path': full_path})
# DataFrame.append was removed in pandas 2.0, so build the frame in one go
df = pd.DataFrame(rows, columns=['label', 'path'])
# -
df
# ### Transform dataframe to h5py
h5_file_path = dataset_folder + 'chest_xray_10.h5'
from warnings import warn

def write_df_as_hdf(out_path, dataframe, comp='gzip'):
    # Create an h5 file and loop through the content of the dataframe
    with h5py.File(out_path, 'w') as h5:
        for columns, content in tqdm(dataframe.to_dict().items()):
            try:
                serialized_data = np.stack(content.values(), 0)
                try:
                    h5.create_dataset(columns, data=serialized_data, compression=comp)
                except TypeError as e:
                    try:
                        h5.create_dataset(columns, data=serialized_data.astype(np.string_), compression=comp)
                    except TypeError as e2:
                        print('%s could not be added to hdf5, %s / %s' % (
                            columns, repr(e), repr(e2)))
            except ValueError as e:
                print('%s could not be created, %s' % (columns, repr(e)))
                all_shape = [np.shape(x) for x in content.values()]
                warn('Input shapes: {}'.format(all_shape))
write_df_as_hdf(h5_file_path, df[['label']])
# Show what is inside
with h5py.File(h5_file_path, 'r') as h5_data:
    for c_key in h5_data.keys():
        print(c_key, h5_data[c_key].shape, h5_data[c_key].dtype)
# ### Transform image to h5py
# +
from skimage import transform
RESIZE_DIM = (1000, 1300)
def imread_and_normalize(img_path):
    # Read the image, average the channels, and cast to uint8
    img_data = np.mean(imread(img_path), 2).astype(np.uint8)
    # Resize, rescale back to 0-255, and return with a channel axis
    n_img = (255 * transform.resize(img_data, RESIZE_DIM, mode='constant')).clip(0, 255).astype(np.uint8)
    return np.expand_dims(n_img, -1)
# +
CROP_DIM = (640, 850)
def crop_center(img: np.array):
    h, w = CROP_DIM
    center = img.shape[0] / 2, img.shape[1] / 2
    x = center[1] - w / 2
    y = 90 + center[0] - h / 2
    return img[int(y):int(y + h), int(x):int(x + w)]
# +
OUT_SIZE = (32, 42)
def resize_img(img: np.array):
    return transform.resize(img, OUT_SIZE, mode='constant')

from skimage.measure import block_reduce

POOL_SIZE = (10, 14)
def maxpool(img: np.array):
    # Max-pool down to roughly POOL_SIZE by reducing over non-overlapping blocks
    block = (img.shape[0] // POOL_SIZE[0], img.shape[1] // POOL_SIZE[1], 1)
    return block_reduce(img, block_size=block, func=np.max)
# -
# Run a test:
# +
image = imread_and_normalize(df['path'].values[0])
cropped_img = crop_center(image)
resized_img = resize_img(cropped_img)
print(type(resized_img))
plt.matshow(resized_img)
# -
# +
# Preallocate output
# img_array = np.zeros((df.shape[0],) + OUT_SIZE + (1,), dtype=np.uint8)
img_array = []
# Set to TRUE to hard preallocate
# Warning: May cause system crash !
if False:
    img_array = np.random.uniform(0, 255, size=(df.shape[0],) + OUT_SIZE + (1,)).astype(np.uint8)
# -
for i, path in enumerate(tqdm(df['path'].values)):
    img_array.append(resize_img(crop_center(imread_and_normalize(path))))
with h5py.File(h5_file_path, 'a') as h5_data:
    h5_data.create_dataset('images', data=np.array(img_array), compression='gzip')
with h5py.File(h5_file_path, 'r') as h5_data:
    for c_key in h5_data.keys():
        print(c_key, h5_data[c_key].shape, h5_data[c_key].dtype)
# Source notebook: notebooks/dataset_transformation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Classes and Objects
#
# **Materials by <NAME> and <NAME>**
# ## Object Orientation
#
# **Object oriented programming** is a way of thinking about and defining how different pieces of software and ideas work together. In object oriented programming, there are two main interfaces: **classes** and **objects**.
#
# * **Classes** are types of things such as int, float, person, or square.
# * **Objects** are **instances** of those types such as 1, 42.0, me, and a square with side length 2.
#
# Unlike functional or procedural paradigms, there are three main features that classes provide.
#
# * **Encapsulation:** Classes are containers which may have any kind of other programming element living on them: variables, functions, and even other classes. In Python, members of a class are known as **attributes** for normal variables and **methods** for functions.
# * **Inheritance:** A class may automatically gain all of the attributes and methods from another class it is related to. The new class is called a **subclass** or sometimes a **subtype**. Multiple levels of inheritance set up a **class hierarchy**. For example:
#
# - Shape is a class with an area attribute.
# - Rectangle is a subclass of Shape.
# - Square is a subclass of Rectangle (which also makes it a subclass of Shape).
#
# * **Polymorphism:** Subclasses may override methods and attributes of their parents in a way that is suitable to them. For example:
#
# - Shape is a class with an area method.
# - Square is a subclass of Shape which computes area by $x^2$.
# - Circle is a subclass of Shape which computes area by $\pi r^2$
#
# If this seems more complicated than writing functions and calling them in sequence that is because it is! However, object orientation enables authors to cleanly separate out ideas into independent classes. It is also good to know because in many languages - Python included - it is the way that you modify the type system.
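# The Shape/Square/Circle example above can be written out directly. This is an illustrative sketch, not code from the original materials:

```python
import math

class Shape(object):
    def area(self):
        # Subclasses override this with their own formula
        raise NotImplementedError

class Square(Shape):
    def __init__(self, x):
        self.x = x
    def area(self):
        return self.x ** 2

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return math.pi * self.r ** 2

# The same method name does the right thing for each subclass
for shape in [Square(2), Circle(1)]:
    print(type(shape).__name__, shape.area())
```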
# ## Basic Classes
#
# Object oriented programming revolves around the creation and
# manipulation of objects that have attributes and can do things. They can
# be as simple as a coordinate with x and y values or as complicated as a
# dynamic webpage framework. Here is the code for making a very simple class
# that sets an attribute.
class MyClass(object):
    def me(self, name):
        self.name = name
my_object = MyClass()
type(my_object)
my_object.name = 'Anthony'
print(my_object.name)
my_object.me('Jan')
print(my_object.name)
# In the object oriented terminology above:
#
# - `MyClass`: a user defined type.
# - `my_object`: a MyClass instance.
# - `me()`: a MyClass method (member function)
# - `self`: a reference to the object that is used to define a method. _This must be the first argument of any method._
# - `name`: an attribute (member variable) of MyClass.
# - `object`: a special class which should be the parent of all classes.
#
# You *write* a class and you *create* and object.
# **Hands on Example**
#
# Write an Atom class with mass and velocity attributes and an energy() method.
# Your atom class here
class Atom(object):
    # Reserved method for Python classes, allowing the class to initialize
    # attributes. (One possible solution to the exercise.)
    def __init__(self, mass, velocity):
        self.mass = mass
        self.velocity = velocity
    # energy() method: kinetic energy 0.5 * m * v**2
    def energy(self):
        return 0.5 * self.mass * self.velocity**2
# energy() method here
# tests:
tests = {1: 5, 7: 8, 4: 2, 40: 20}
for m, v in tests.items():
    atom = Atom(m, v)
    assert(atom.mass == m)
    assert(atom.velocity == v)
    assert(atom.energy() == 0.5 * m * v**2)
print("All tests passed.")
# Here is a more complex and realistic example of a matrix class:
# +
# Matrix defines a real, 2-d matrix.
class Matrix(object):
    # I am a matrix of real numbers
    def __init__(self, h, w):
        self._nrows = h
        self._ncols = w
        self._data = [0] * (self._nrows * self._ncols)

    # How the object will be represented as a string, i.e. print(), str()
    def __str__(self):
        return "Matrix: " + str(self._nrows) + " by " + str(self._ncols)

    def setnrows(self, w):
        self._nrows = w
        self.reinit()

    def getnrows(self):
        return self._nrows

    def getncols(self):
        return self._ncols

    def reinit(self):
        self._data = [0] * (self._nrows * self._ncols)

    def setncols(self, h):
        self._ncols = h
        self.reinit()

    def setvalue(self, i, j, value):
        if i < self._nrows and j < self._ncols:
            # Row-major indexing: stride by the number of columns, not rows
            self._data[i * self._ncols + j] = value
        else:
            raise Exception("Out of range")

    def multiply(self, other):
        # Perform matrix multiplication and return a new matrix.
        # The new matrix is on the left.
        result = Matrix(self._nrows, other.getncols())
        # Do multiplication...
        return result

    def inv(self):
        # Invert matrix
        if self._ncols != self._nrows:
            raise Exception("Only square matrices are invertible")
        inverted = Matrix(self._ncols, self._nrows)
        inverted.setncols(self._ncols)
        inverted.setnrows(self._ncols)
        return inverted
# -
# ## Programatic Attribute Access
#
# Python provides three built-in functions, `getattr()`, `setattr()`, and `delattr()`, to programmatically access an object's members. These all take the object and the string name of the attribute to be accessed, instead of using dot-access.
class A(object):
    a = 1
avar = A()
getattr(avar, 'a')
setattr(avar, 'q', 'mon')
print(avar.q)
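# `getattr()` also accepts a default value for missing attributes, and `delattr()` removes an attribute by name (quick sketch):

```python
class A(object):
    a = 1

avar = A()
avar.q = 'mon'
# The third argument is returned when the attribute does not exist
print(getattr(avar, 'missing', 'fallback'))  # fallback
delattr(avar, 'q')
print(hasattr(avar, 'q'))  # False
```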
# ## Interface vs. Implementation
#
# Users shouldn't have to know *how* your program works in order to use
# it.
#
# The interface is a **contract** saying what a class knows how to do. The
# code above defines matrix multiplication, which means that
# mat1.multiply(mat2) should always return the right answer. It turns out
# there are many ways to multiply matrices, and there are whole Ph.Ds
# written on performing efficient matrix inversion. The implementation is
# the way in which the contract is carried out.
# ## Constructors
#
# Usually you want to create an object with a set of initial or default values for
# things. Perhaps an object needs certain information to be created. For
# this you write a **constructor**. In python, constructors are just methods
# with the special name ``__init__()``:
class Person(object):
    def __init__(self):
        self.name = "Anthony"
person = Person()
print(person.name)
# Constructors may take arguements just like any other method or function.
class Person(object):
    def __init__(self, name, title="The Best"):
        self.name = name
        self.title = title
anthony = Person("Anthony")
print(anthony.name, anthony.title)
jan = Person("Jan", "The Greatest")
print(jan.name, jan.title)
# ## Subclassing
#
# If you want to create a class that behaves mostly like another class,
# you should not have to copy code. What you do is subclass and change the
# things that need changing. When we created classes we were already
# subclassing the built-in Python class `object`.
#
# For example, let's say you want to write a sparse matrix class, which
# means that you don't explicitly store zero elements. You can create a
# subclass of the Matrix class that redefines the matrix operations.
class SparseMatrix(Matrix):
    # I am a sparse matrix of real numbers
    def __str__(self):
        return "SparseMatrix: " + str(self._nrows) + " by " + str(self._ncols)
    def reinit(self):
        self._data = {}
    def setvalue(self, i, j, value):
        # Same lowercase name as the parent method, so it actually overrides it
        self._data[(i, j)] = value
    def multiply(self, other):
        # Perform matrix multiplication and return a new matrix.
        # The new matrix is on the left.
        result = SparseMatrix(self._nrows, other.getncols())
        # Do multiplication...
        return result
    def inv(self):
        # Invert matrix
        if self._nrows != self._ncols:
            raise Exception("Only square matrices are invertible")
        inverted = SparseMatrix(self._ncols, self._nrows)
        return inverted
# The SparseMatrix object is a Matrix but some methods are defined in the *superclass* Matrix. You can see this by looking at the dir of the SparseMatrix and noting that it gets attributes from Matrix.
dir(SparseMatrix)
# A more minimal and more abstract version of inheritance may be seen here:
# +
class A(object):
    a = 1
class B(A):
    b = 2
class C(B):
    b = 42
    c = 3
x = C()
print(x.a, x.b, x.c)
# -
# ## Properties
#
# Normally, when you get or set attributes on an object, the value you are setting simply gets a new name. However, sometimes you run into the case where you want to do something extra depending on the actual value you are receiving. For example, maybe you need to confirm that the value is actually correct or desired.
#
# Python provides a mechanism called **properties** to do this. Properties are methods which either get, set, or delete a given attribute. To implement this, use the built-in `property()` decorator:
class EvenNum(object):
def __init__(self, value):
self._value = 0
self.value = value
@property
def value(self):
# getter
return self._value
@value.setter
def value(self, val):
# setter
if val % 2 == 0:
self._value = val
else:
print("number not even")
en = EvenNum(42)
print(en.value)
en.value = 65
print(en.value)
en.value = 16
print(en.value)
# These getters and setters prevent the user from setting EvenNum to an illegal value.
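# A property defined without a setter gives you a read-only attribute; a
# minimal sketch (the `Circle` class here is just an illustration):

```python
class Circle(object):
    def __init__(self, radius):
        self.radius = radius

    @property
    def area(self):
        # computed on every access; with no setter, assignment raises AttributeError
        return 3.14159 * self.radius ** 2

c = Circle(2.0)
print(c.area)
try:
    c.area = 10.0
except AttributeError:
    print("area is read-only")
```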
# ## Data Model
#
# Since classes are user-defined types in Python, they should interact naturally with operators such as `+ - * / == < > <= >=` and other Python language constructs. However, Python doesn't know *how* to do these operations until the user tells it. Take the EvenNum example above:
x = EvenNum(42)
y = 65
x + y
# We need to let Python know that EvenNum addition should be addition on the value. We should also let it know that it should return an EvenNum if it can.
#
# To do this we have to implement part of the [Python Data Model](http://docs.python.org/2/reference/datamodel.html). Python has a list of special - or sometimes known as magic - method names that you can override to implement support for many language operations. All of these method names start and end with a double underscore `__`, because no regular method would ever use such an obtuse name. It also lets other developers know that something special is happening in those methods and that they aren't meant to be called directly. Many of these have a predefined interface they must follow.
#
# We have already seen an example of this with the `__init__()` constructor method. Now let's try to make addition work for EvenNum. From the documentation, there is an `__add__()` method with the following API:
#
# object.__add__(self, other)
#
class EvenNum(object):
def __init__(self, value):
self._value = 0
self.value = value
@property
def value(self):
# getter
return self._value
@value.setter
def value(self, val):
# setter
if val % 2 == 0:
self._value = val
else:
print("number not even")
def __add__(self, other):
if isinstance(other, EvenNum):
newval = self.value + other.value
else:
newval = self.value + other
return EvenNum(newval)
x = EvenNum(42)
y = 65
x + y
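# Note that the reflected expression `65 + x` would still fail, because
# `int.__add__` does not know about EvenNum; Python then consults `__radd__`
# on the right operand. A hedged sketch using a pared-down variant of the
# class above:

```python
class EvenNum(object):
    def __init__(self, value):
        self._value = 0
        self.value = value

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, val):
        if val % 2 == 0:
            self._value = val

    def __add__(self, other):
        other_val = other.value if isinstance(other, EvenNum) else other
        return EvenNum(self.value + other_val)

    def __radd__(self, other):
        # called for `other + self` when other's own __add__ returns NotImplemented
        return self.__add__(other)

x = EvenNum(42)
print((2 + x).value)
```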
# One of the most useful of these special methods is the `__str__()` method, which allows you to provide a string representation of the object.
class EvenNum(object):
def __init__(self, value):
self._value = 0
self.value = value
def __str__(self):
return str(self.value)
@property
def value(self):
# getter
return self._value
@value.setter
def value(self, val):
# setter
if val % 2 == 0:
self._value = val
else:
print("number not even")
def __add__(self, other):
if isinstance(other, EvenNum):
newval = self.value + other.value
else:
newval = self.value + other
return EvenNum(newval)
x = EvenNum(42)
y = 16
print(x + y) # when you print something __str__() is implicitly called
# Lastly, the most important magic part of classes and objects is the `__dict__` attribute. This is a dictionary where all of the methods and attributes of an object are stored. Modifying the `__dict__` directly affects the object and vice versa.
# +
class A(object):
def __init__(self, a):
self.a = a
avar = A(42)
# -
print(avar.__dict__)
avar.__dict__['b'] = 'yourself'
print(avar.b)
avar.c = "me now"
print(avar.__dict__)
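# The built-ins `vars()`, `getattr()` and `setattr()` go through this same
# dictionary; a quick sketch:

```python
class A(object):
    def __init__(self, a):
        self.a = a

avar = A(42)
print(vars(avar) is avar.__dict__)   # vars() returns the instance dict itself
setattr(avar, "b", "yourself")       # equivalent to avar.b = "yourself"
print(getattr(avar, "b"))
print(sorted(avar.__dict__))
```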
# It is because of this that dictionaries are the most important container in Python. Under the covers, most objects are backed by dicts.
| notebooks/10a-classes-and-objects.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # IBM Db2 Event Store - Data Analytics using Python API
#
# IBM Db2 Event Store is a hybrid transactional/analytical processing (HTAP) system. It extends the Spark SQL interface to accelerate analytics queries.
#
# This notebook illustrates how the IBM Db2 Event Store can be integrated with multiple popular scientific tools to perform data analytics.
#
# ***Pre-Req: Event_Store_Table_Creation***
# -
# ## Connect to IBM Db2 Event Store
#
# ### Determine the IP address of your host
#
# Obtain the IP address of the host that you want to connect to by running the appropriate command for your operating system:
#
# * On Mac, run: `ifconfig`
# * On Windows, run: `ipconfig`
# * On Linux, run: `hostname -i`
#
# Edit the `HOST = "XXX.XXX.XXX.XXX"` value in the next cell to provide the IP address.
# +
# Set your host IP address
HOST = "XXX.XXX.XXX.XXX"
# Port will be 1100 for version 1.1.2 or later (5555 for version 1.1.1)
PORT = "1100"
# Database name
DB_NAME = "TESTDB"
# Table name
TABLE_NAME = "IOT_TEMPERATURE"
# -
# ## Import Python modules
# + deletable=true editable=true
# %matplotlib inline
from eventstore.common import ConfigurationReader
from eventstore.oltp import EventContext
from eventstore.sql import EventSession
from pyspark.sql import SparkSession
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn
from scipy import stats
import warnings
import datetime
warnings.filterwarnings('ignore')
plt.style.use("fivethirtyeight")
# + [markdown] deletable=true editable=true
# ## Connect to Event Store
# -
endpoint = HOST + ":" + PORT
print("Event Store connection endpoint:", endpoint)
ConfigurationReader.setConnectionEndpoints(endpoint)
# ## Open the database
#
# The cells in this section are used to open the database and create a temporary view for the table that we created previously.
# + [markdown] deletable=true editable=true
# To run Spark SQL queries, you must set up a Db2 Event Store Spark session. The EventSession class extends the optimizer of the SparkSession class.
# + deletable=true editable=true
sparkSession = SparkSession.builder.appName("EventStore SQL in Python").getOrCreate()
eventSession = EventSession(sparkSession.sparkContext, DB_NAME)
# + [markdown] deletable=true editable=true
# Now you can execute the command to open the database in the event session you created:
# + deletable=true editable=true
eventSession.open_database()
# + [markdown] deletable=true editable=true
# ## Access an existing table in the database
# The following code section retrieves the names of all tables that exist in the database.
# + deletable=true editable=true
with EventContext.get_event_context(DB_NAME) as ctx:
print("Event context successfully retrieved.")
print("Table names:")
table_names = ctx.get_names_of_tables()
for name in table_names:
print(name)
# + [markdown] deletable=true editable=true
# Now we have the name of the existing table. We then load the corresponding table and get a DataFrame reference to access the table with queries.
# + deletable=true editable=true
tab = eventSession.load_event_table(TABLE_NAME)
print("Table " + TABLE_NAME + " successfully loaded.")
# + [markdown] deletable=true editable=true
# The next code retrieves the schema of the table we want to investigate:
# + deletable=true editable=true
try:
resolved_table_schema = ctx.get_table(TABLE_NAME)
print(resolved_table_schema)
except Exception as err:
print("Table not found")
# + [markdown] deletable=true editable=true
# In the following cell, we create a temporary view with that DataFrame called `readings` that we will use in the queries below.
# + deletable=true editable=true
tab.createOrReplaceTempView("readings")
# + [markdown] deletable=true editable=true
# ## Data Analytics with IBM Db2 Event Store
# Data analytics tasks can be performed on tables stored in the IBM Db2 Event Store database with various data analytics tools.
# + [markdown] deletable=true editable=true
# Let's first take a look at the timestamp range of the records.
# + deletable=true editable=true
query = "SELECT MIN(ts) MIN_TS, MAX(ts) MAX_TS FROM readings"
print("{}\nRunning query in Event Store...".format(query))
df_data = eventSession.sql(query)
df_data.toPandas()
# + [markdown] deletable=true editable=true
# The following cell converts the timestamps in milliseconds to datetime to make them human readable.
# + deletable=true editable=true
MIN_TS=1541019342393
MAX_TS=1541773999825
print("The time range of the dataset is from {} to {}".format(
datetime.datetime.fromtimestamp(MIN_TS/1000).strftime('%Y-%m-%d %H:%M:%S'),
datetime.datetime.fromtimestamp(MAX_TS/1000).strftime('%Y-%m-%d %H:%M:%S')))
# + [markdown] deletable=true editable=true
# ## Sample Problem
# Assume we are only interested in the data recorded by the 12th sensor on the 1st device during the day of 2018-11-01, and we want to investigate the effects of power consumption and ambient temperature on the temperature recorded by the sensor on that date.
#
# + [markdown] deletable=true editable=true
# Because the timestamp is recorded in milliseconds, we need to convert the datetime of interest to a time range in milliseconds, and then use the range as a filter in the query.
# + deletable=true editable=true
start_ts = (datetime.datetime(2018,11,1,0,0) - datetime.datetime(1970,1,1)).total_seconds() * 1000
end_ts = (datetime.datetime(2018,11,2,0,0) - datetime.datetime(1970,1,1)).total_seconds() * 1000
print("The time range of datetime 2018-11-01 in milisec is from {:.0f} to {:.0f}".format(start_ts, end_ts))
# + [markdown] deletable=true editable=true
# IBM Db2 Event Store extends the Spark SQL functionality, which allows users to apply filters with ease.
#
# In the following cell, the relevant data is extracted according to the problem scope. Note that because we are specifying a specific device and sensor, this query is fully exploiting the index.
# + deletable=true editable=true
query = "SELECT * FROM readings WHERE deviceID=1 AND sensorID=12 AND ts >1541030400000 AND ts < 1541116800000 ORDER BY ts"
print("{}\nRunning query in Event Store...".format(query))
refined_data = eventSession.sql(query)
refined_data.createOrReplaceTempView("refined_reading")
refined_data.toPandas()
# + [markdown] deletable=true editable=true
# ### Basic Statistics
# For numerical data, knowing the descriptive summary statistics can help a lot in understanding the distribution of the data.
#
# IBM Event Store extends the Spark DataFrame functionality. We can use the `describe` function to retrieve statistics about data stored in an IBM Event Store table.
# + deletable=true editable=true
refined_data.describe().toPandas()
# + [markdown] deletable=true editable=true
# It's worth noting that some power readings are negative, which may be caused by sensor error. The records with negative power readings will be dropped.
# + deletable=true editable=true
query = "SELECT * FROM readings WHERE deviceID=1 AND sensorID=12 AND ts >1541030400000 AND ts < 1541116800000 AND power > 0 ORDER BY ts"
print("{}\nRunning query in Event Store...".format(query))
refined_data = eventSession.sql(query)
refined_data.createOrReplaceTempView("refined_reading")
# + [markdown] deletable=true editable=true
# Total number of records in the refined table view
# + deletable=true editable=true
query = "SELECT count(*) count FROM refined_reading"
print("{}\nRunning query in Event Store...".format(query))
df_data = eventSession.sql(query)
df_data.toPandas()
# + [markdown] deletable=true editable=true
# ### Covariance and correlation
# - Covariance is a measure of how two variables change with respect to each other. It can be examined by calling `.stat.cov()` function on the table.
# + deletable=true editable=true
refined_data.stat.cov("ambient_temp","temperature")
# + deletable=true editable=true
refined_data.stat.cov("power","temperature")
# + [markdown] deletable=true editable=true
# - Correlation is a normalized measure of covariance that is easier to understand, as it provides quantitative measurements of the statistical dependence between two random variables. It can be examined by calling `.stat.corr()` function on the table.
# + deletable=true editable=true
refined_data.stat.corr("ambient_temp","temperature")
# + deletable=true editable=true
refined_data.stat.corr("power","temperature")
# + [markdown] deletable=true editable=true
# ### Visualization
# Visualization of each feature provides insights into the underlying distributions.
# + [markdown] deletable=true editable=true
# - Distribution of Ambient Temperature
# + deletable=true editable=true
query = "SELECT ambient_temp FROM refined_reading"
print("{}\nRunning query in Event Store...".format(query))
ambient_temp = eventSession.sql(query)
ambient_temp= ambient_temp.toPandas()
ambient_temp.head()
# + deletable=true editable=true
fig, axs = plt.subplots(1,3, figsize=(16,6))
stats.probplot(ambient_temp.iloc[:,0], plot=plt.subplot(1,3,1))
axs[1].boxplot(ambient_temp.iloc[:,0])
axs[1].set_title("Boxplot on Ambient_temp")
axs[2].hist(ambient_temp.iloc[:,0], bins = 20)
axs[2].set_title("Histogram on Ambient_temp")
# + [markdown] deletable=true editable=true
# - Distribution of Power Consumption
# + deletable=true editable=true
query = "SELECT power FROM refined_reading"
print("{}\nRunning query in Event Store...".format(query))
power = eventSession.sql(query)
power= power.toPandas()
power.head()
# + deletable=true editable=true
fig, axs = plt.subplots(1,3, figsize=(16,6))
stats.probplot(power.iloc[:,0], plot=plt.subplot(1,3,1))
axs[1].boxplot(power.iloc[:,0])
axs[1].set_title("Boxplot on Power")
axs[2].hist(power.iloc[:,0], bins = 20)
axs[2].set_title("Histogram on Power")
# + [markdown] deletable=true editable=true
# - Distribution of Sensor Temperature
# + deletable=true editable=true
query = "SELECT temperature FROM refined_reading"
print("{}\nRunning query in Event Store...".format(query))
temperature = eventSession.sql(query)
temperature= temperature.toPandas()
temperature.head()
# + deletable=true editable=true
fig, axs = plt.subplots(1,3, figsize=(16,6))
stats.probplot(temperature.iloc[:,0], plot=plt.subplot(1,3,1))
axs[1].boxplot(temperature.iloc[:,0])
axs[1].set_title("Boxplot on Temperature")
axs[2].hist(temperature.iloc[:,0], bins = 20)
axs[2].set_title("Histogram on Temperature")
# + [markdown] deletable=true editable=true
# - Input-variable vs. Target-variable
# + deletable=true editable=true
fig, axs = plt.subplots(1,2, figsize=(16,6))
axs[0].scatter(power.iloc[:,0], temperature.iloc[:,0])
axs[0].set_xlabel("power in kW")
axs[0].set_ylabel("temperature in celsius")
axs[0].set_title("Power vs. Temperature")
axs[1].scatter(ambient_temp.iloc[:,0], temperature.iloc[:,0])
axs[1].set_xlabel("ambient_temp in celsius")
axs[1].set_ylabel("temperature in celsius")
axs[1].set_title("Ambient_temp vs. Temperature")
# + [markdown] deletable=true editable=true
# **By observing the plots above, we noticed:**
# - The distributions of power consumption, ambient temperature, and sensor temperature are each roughly normal.
# - The scatter plots show that the sensor temperature has a roughly linear relationship with both power consumption and ambient temperature.
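# A linear relationship like the one observed can be quantified with an
# ordinary least-squares fit; a sketch on synthetic data (not the actual
# sensor readings):

```python
import numpy as np

rng = np.random.default_rng(0)
power = rng.uniform(5.0, 15.0, size=200)                           # synthetic kW draw
temperature = 40.0 + 2.5 * power + rng.normal(0.0, 1.0, size=200)  # linear + noise

slope, intercept = np.polyfit(power, temperature, 1)
print(round(slope, 2), round(intercept, 2))  # close to the true 2.5 and 40.0
```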
# + [markdown] deletable=true editable=true
# ## Summary
# This notebook introduced you to data analytics using IBM Db2 Event Store.
#
# ## Next Step
# `"Event_Store_ML_Model_Deployment.ipynb"` will show you how to build and deploy a machine learning model.
# -
# <p><font size=-1 color=gray>
# © Copyright 2019 IBM Corp. All Rights Reserved.
# <p>
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file
# except in compliance with the License. You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software distributed under the
# License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing permissions and
# limitations under the License.
# </font></p>
| examples/Event_Store_Data_Analytics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Problem 1
# [Python and Web Crawling]
#
# https://datalab.naver.com/keyword/realtimeList.naver
#
# Using Selenium, write Python code that reads the top 1-20 trending
# search terms from Naver's rising-search-keywords page above.
# +
from selenium import webdriver as wd
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = wd.Chrome(executable_path='C:/Code/chromedriver.exe')
# driver = wd.PhantomJS(executable_path='phantomjs.exe')
driver.get('https://datalab.naver.com/keyword/realtimeList.naver')
titles = driver.find_elements(By.CSS_SELECTOR, '#content > div > div.selection_area > div.selection_content > div.field_list > div > ul > li ')
for title in titles:
    name = title.find_element(By.CSS_SELECTOR, ' div > span.item_title_wrap').text
    num = title.find_element(By.CSS_SELECTOR, ' div > .item_num').text
print(num, end='위: ')
print(name)
driver.close()
# -
# # Problem 2
#
# [Structured Data Processing: RDB]
#
# * Department table: department number (primary key) / department name <br>
# `major` : `major` / `major_name` <br>
#
# * Professor table: professor number (primary key) / name / department number (foreign key to the department table) / rank / salary<br>
# `prof` : `prof_id` / `name` / `major` / `stat`/ `salary`
#
# Write SQL that retrieves the names of departments to which no professor is assigned.<br>
# ### Answer (SQL query):
# ```
# SELECT major.majorname
# From prof
# right JOIN major
# ON prof.major= major.major
# WHERE prof.prof_id IS NULL;
# ```
# ### Verification
# +
import pymysql
conn = pymysql.connect(
host = 'localhost',
user = 'root',
password='<PASSWORD>',
db='python',
charset='utf8'
) # connect to the database
cursor = conn.cursor() # create a Cursor object
sql = ''' SELECT major.majorname
From prof
right JOIN major
ON prof.major= major.major
WHERE prof.prof_id IS NULL;'''
# the triple-quoted string holds the multi-line SQL query
cursor.execute(sql)
result = cursor.fetchall()
print(result)
conn.commit() # commit the transaction
cursor.close()
conn.close()
# -
# # Problem 3
# [Unstructured Data Processing: MongoDB]
# Using the mongo shell, write code that creates a database and a collection and inserts the data below under the given field names.
#
# * Database name: exam
#
# * Collection name: product
#
# * Field names
#
# - company (company name) / prd_name (product name) / price
#
# * Data
#
# - company: 광동제약, prd_name: 제주 삼다수 2L, price: 800
# - company: 스파클, prd_name: 스파클 2L, price: 460
# - company: 농심, prd_name: 백두산 백산수 2L, price: 670
# - company: 동원F&B, prd_name: 동원샘물 2L, price: 500
#
# ### Answer (NoSQL statements):
# ```
# use exam
# db.product.save({company:'광동제약', prd_name :'제주 삼다수 2L',price:800})
# db.product.save({company:'스파클', prd_name :'스파클 2L',price:460})
# db.product.save({company:'농심', prd_name :'백두산 백산수 2L',price:670})
# db.product.save({company:'동원F&B', prd_name :'동원샘물 2L',price: 500})
# ```
# ### Verification
from pymongo import MongoClient
client = MongoClient(host='localhost', port=27017)
db = client['exam']
cursor = db.product.find()
for data in list(cursor):
print(data)
| crawling/exam.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Prepare batch scoring model
# In this notebook, you will prepare a model used to detect suspicious activity that will be used for batch scoring.
#
# The team at Woodgrove Bank has provided you with exported CSV copies of historical data for you to train your model against. Run the following cell to load required libraries and download the data sets from the Azure ML datastore.
# +
# #!pip install --upgrade azureml-train-automl-runtime==1.36.0
# #!pip install --upgrade azureml-automl-runtime==1.36.0
# #!pip install --upgrade scikit-learn
# #!pip install --upgrade numpy
# + gather={"logged": 1613411416397}
from azureml.core import Workspace, Environment, Datastore, Dataset
from azureml.core.experiment import Experiment
from azureml.core.run import Run
from azureml.core.model import Model
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
# sklearn.externals.joblib was deprecated in 0.21
from sklearn import __version__ as sklearnver
from packaging.version import Version
if Version(sklearnver) < Version("0.21.0"):
from sklearn.externals import joblib
else:
import joblib
import numpy as np
import pandas as pd
ws = Workspace.from_config()
# Load data
ds = Datastore.get(ws, "woodgrovestorage")
account_ds = Dataset.Tabular.from_delimited_files(path = [(ds, 'synapse/Account_Info.csv')])
fraud_ds = Dataset.Tabular.from_delimited_files(path = [(ds, 'synapse/Fraud_Transactions.csv')])
untagged_ds = Dataset.Tabular.from_delimited_files(path = [(ds, 'synapse/Untagged_Transactions.csv')])
# Create pandas dataframes from datasets
account_df = account_ds.to_pandas_dataframe()
fraud_df = fraud_ds.to_pandas_dataframe()
untagged_df = untagged_ds.to_pandas_dataframe()
# + gather={"logged": 1613411416582}
print(sklearnver)
# + gather={"logged": 1613413226431}
from azureml.core import __version__ as amlver
print(amlver)
# + gather={"logged": 1613411417403}
# !pip freeze
# + gather={"logged": 1613411417669} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# Reorder the columns of the dataframe in ascending order
cols=untagged_df.columns.tolist()
cols.sort()
untagged_df=untagged_df[cols]
# -
# ## Prepare accounts
#
# Remove columns that have very few or no values: `accountOwnerName`, `accountAddress`, `accountCity` and `accountOpenDate`
# + gather={"logged": 1613411417843}
account_df_clean = account_df[["accountID", "transactionDate", "transactionTime",
"accountPostalCode", "accountState", "accountCountry",
"accountAge", "isUserRegistered", "paymentInstrumentAgeInAccount",
"numPaymentRejects1dPerUser"]]
account_df_clean = account_df_clean.copy()
# -
# Cleanup `paymentInstrumentAgeInAccount`. Values that are not numeric, are converted to NaN and then we can fill those NaN values with 0.
# + gather={"logged": 1613411417989}
account_df_clean['paymentInstrumentAgeInAccount'] = pd.to_numeric(account_df_clean['paymentInstrumentAgeInAccount'], errors='coerce')
account_df_clean['paymentInstrumentAgeInAccount'] = account_df_clean[['paymentInstrumentAgeInAccount']].fillna(0)['paymentInstrumentAgeInAccount']
# -
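# The `errors='coerce'` pattern above is handy on its own; a toy example:

```python
import pandas as pd

s = pd.Series(["12.5", "bad-value", None, "3"])
cleaned = pd.to_numeric(s, errors="coerce").fillna(0)  # non-numeric -> NaN -> 0
print(cleaned.tolist())
```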
# Next, let's convert the `numPaymentRejects1dPerUser` column so that it has a datatype of `float` instead of `object`.
# + gather={"logged": 1613411418157}
account_df_clean["numPaymentRejects1dPerUser"] = account_df_clean[["numPaymentRejects1dPerUser"]].astype(float)["numPaymentRejects1dPerUser"]
account_df_clean["numPaymentRejects1dPerUser"].value_counts()
# -
# You need to combine the `transactionDate` and `transactionTime` fields into a single `transactionDateTime` field. Begin by converting `transactionTime` from an integer to a zero-padded six-digit string in hhmmss format (two-digit hour, minute, and second), then concatenate the two columns, and finally parse the concatenated string as a datetime value.
# + gather={"logged": 1613411418595}
account_df_clean["transactionTime"] = ['{0:06d}'.format(x) for x in account_df_clean["transactionTime"]]
account_df_clean["transactionDateTime"] = pd.to_datetime(account_df_clean["transactionDate"].map(str) + account_df_clean["transactionTime"], format='%Y%m%d%H%M%S')
account_df_clean["transactionDateTime"]
# -
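# The pad-and-parse pattern in isolation, on toy values:

```python
import pandas as pd

dates = pd.Series([20181101, 20181102])
times = pd.Series([93005, 160000])             # 09:30:05 and 16:00:00
padded = ['{0:06d}'.format(t) for t in times]  # zero-pad to hhmmss
stamps = pd.to_datetime(dates.map(str) + padded, format='%Y%m%d%H%M%S')
print(stamps.tolist())
```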
# `account_df_clean` is now ready for use in modeling.
# ## Prepare untagged transactions
#
# There are 16 columns in the untagged_transactions whose values are all null. Drop these columns to simplify the dataset.
# + gather={"logged": 1613411419020}
untagged_df_clean = untagged_df.dropna(axis=1, how="all").copy()
# -
# Replace null values in `localHour` with `-99`. Also replace values of `-1` with `-99`.
# + gather={"logged": 1613411419170}
untagged_df_clean["localHour"] = untagged_df_clean["localHour"].fillna(-99)
untagged_df_clean.loc[untagged_df_clean.loc[:,"localHour"] == -1, "localHour"] = -99
untagged_df_clean["localHour"].value_counts()
# -
# Clean up the remaining null fields:
# - Fix missing values for location fields by setting them to `NA` for unknown.
# - Set `isProxyIP` to False
# - Set `cardType` to `U` for unknown (which is a new level)
# - Set `cvvVerifyResult` to `N`, which means that transactions that failed because the wrong CVV2 number was entered, or no CVV2 number was entered, are treated as if there was no CVV2 match.
# + gather={"logged": 1613411419346}
untagged_df_clean = untagged_df_clean.fillna(value={"ipState": "NA", "ipPostcode": "NA", "ipCountryCode": "NA",
"isProxyIP":False, "cardType": "U",
"paymentBillingPostalCode" : "NA", "paymentBillingState":"NA",
"paymentBillingCountryCode" : "NA", "cvvVerifyResult": "N"
})
# -
# The `transactionScenario` column provides no insights because all rows have the same `A` value. Drop that column. Same idea for the `transactionType` column.
# + gather={"logged": 1613411419570}
del untagged_df_clean["transactionScenario"]
del untagged_df_clean["transactionType"]
# -
# Create the `transactionDateTime` in the same way as shown previously.
# + gather={"logged": 1613411420180}
untagged_df_clean["transactionTime"] = ['{0:06d}'.format(x) for x in untagged_df_clean["transactionTime"]]
untagged_df_clean["transactionDateTime"] = pd.to_datetime(untagged_df_clean["transactionDate"].map(str) + untagged_df_clean["transactionTime"], format='%Y%m%d%H%M%S')
untagged_df_clean["transactionDateTime"]
# -
# `untagged_df_clean` is now ready for use in modeling.
# ## Prepare fraud transactions
#
# The `transactionDeviceId` has no meaningful values, so drop it. Also, fill NA values of the `localHour` field with -99 as we did for the untagged transactions.
# + gather={"logged": 1613411420343}
fraud_df_clean = fraud_df.copy()
del fraud_df_clean['transactionDeviceId']
fraud_df_clean["localHour"] = fraud_df_clean["localHour"].fillna(-99)
# -
# Next, add the transactionDateTime column to the fraud data set using the same approach that was used for the untagged dataset.
# + gather={"logged": 1613411420504}
fraud_df_clean["transactionTime"] = ['{0:06d}'.format(x) for x in fraud_df_clean["transactionTime"]]
fraud_df_clean["transactionDateTime"] = pd.to_datetime(fraud_df_clean["transactionDate"].map(str) + fraud_df_clean["transactionTime"], format='%Y%m%d%H%M%S')
fraud_df_clean["transactionDateTime"]
# -
# Next, remove any duplicate rows from the fraud data set. We identify a unique transaction by the features `transactionID`, `accountID`, `transactionDateTime` and `transactionAmount`.
# + gather={"logged": 1613411420785}
fraud_df_clean = fraud_df_clean.drop_duplicates(subset=['transactionID', 'accountID', 'transactionDateTime', 'transactionAmount'], keep='first')
# -
# `fraud_df_clean` is now ready for use in modeling.
# ## Enrich the untagged data with account data
#
# In this section, you will join the untagged dataset with the account dataset to enrich each untagged example.
# + gather={"logged": 1613411421129}
latestTrans_df = pd.merge(untagged_df_clean, account_df_clean, on='accountID', suffixes=('_unt','_act'))
# + gather={"logged": 1613411421451}
latestTrans_df
# + gather={"logged": 1613411421688}
latestTrans_df = latestTrans_df[latestTrans_df['transactionDateTime_act'] <= latestTrans_df['transactionDateTime_unt']]
# -
# For each untagged transaction, find the timestamp of the latest matching account record.
# + gather={"logged": 1613411422178}
latestTrans_df = latestTrans_df.groupby(['accountID','transactionDateTime_unt']).agg({'transactionDateTime_act':'max'})
# + gather={"logged": 1613411422330}
latestTrans_df
# -
# Join the latest transactions with the untagged data frame and then the account data frame.
# + gather={"logged": 1613411422846}
joined_df = pd.merge(untagged_df_clean, latestTrans_df, how='outer', left_on=['accountID','transactionDateTime'], right_on=['accountID','transactionDateTime_unt'])
joined_df
# + gather={"logged": 1613411423321}
joined_df = pd.merge(joined_df, account_df_clean, left_on=['accountID','transactionDateTime_act'], right_on=['accountID','transactionDateTime'])
joined_df
# -
# Pick out only the columns needed for the model.
# + gather={"logged": 1613411423846}
untagged_join_acct_df = joined_df[['transactionID', 'accountID', 'transactionAmountUSD', 'transactionAmount','transactionCurrencyCode', 'localHour',
'transactionIPaddress','ipState','ipPostcode','ipCountryCode', 'isProxyIP', 'browserLanguage','paymentInstrumentType',
'cardType', 'paymentBillingPostalCode', 'paymentBillingState', 'paymentBillingCountryCode', 'cvvVerifyResult',
'digitalItemCount', 'physicalItemCount', 'accountPostalCode', 'accountState', 'accountCountry', 'accountAge',
'isUserRegistered', 'paymentInstrumentAgeInAccount', 'numPaymentRejects1dPerUser', 'transactionDateTime_act'
]]
untagged_join_acct_df
# -
# Rename the columns to clean the names up and remove the suffixes.
# + gather={"logged": 1613411423987}
untagged_join_acct_df = untagged_join_acct_df.rename(columns={
'transactionDateTime_act':'transactionDateTime'
})
# -
# ## Labeling fraud examples
#
# First, get the fraud time period for each account. Do this by grouping the fraud data by `accountID`.
# + gather={"logged": 1613411424364}
fraud_t2 = fraud_df_clean.groupby(['accountID']).agg({'transactionDateTime':['min','max']})
# -
# Give these new columns some more friendly names.
# + gather={"logged": 1613411424509}
fraud_t2.columns = ["_".join(x) for x in fraud_t2.columns.ravel()]
# + gather={"logged": 1613411424658}
fraud_t2
# -
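# The column-flattening idiom used above, demonstrated on a toy frame
# (`to_flat_index()` is the modern spelling of the `.ravel()` trick):

```python
import pandas as pd

df = pd.DataFrame({"accountID": ["A", "A", "B"], "ts": [3, 1, 2]})
agg = df.groupby("accountID").agg({"ts": ["min", "max"]})
agg.columns = ["_".join(col) for col in agg.columns.to_flat_index()]
print(list(agg.columns))
```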
# Now left join the untagged dataset with the fraud dataset.
# + gather={"logged": 1613411424866}
untagged_joinedto_ranges = pd.merge(untagged_join_acct_df, fraud_t2, on='accountID', how='left')
untagged_joinedto_ranges
# -
# Now we use the joined data to apply a label according to the following rules:
# * If an accountID from the untagged set is not found in the fraud dataset at all, tag the example 0, meaning not fraudulent.
# * If the accountID is found in the fraud dataset, but the transactionDateTime is outside the time range from the fraud dataset, tag it 2.
# * If the accountID is found in the fraud dataset and the transactionDateTime is within the time range from the fraud dataset, tag it 1, meaning fraudulent.
# + gather={"logged": 1613411425001}
def label_fraud_range(row):
if (str(row['transactionDateTime_min']) != "NaT") and (row['transactionDateTime'] >= row['transactionDateTime_min']) and (row['transactionDateTime'] <= row['transactionDateTime_max']):
return 1
elif (str(row['transactionDateTime_min']) != "NaT") and row['transactionDateTime'] < row['transactionDateTime_min']:
return 2
elif (str(row['transactionDateTime_max']) != "NaT") and row['transactionDateTime'] > row['transactionDateTime_max']:
return 2
else:
return 0
# + gather={"logged": 1613411435940}
tagged_df_clean = untagged_joinedto_ranges
tagged_df_clean['label'] = untagged_joinedto_ranges.apply(lambda row: label_fraud_range(row), axis=1)
tagged_df_clean
# -
# This leaves us with 1,170 fraudulent examples, 198,326 non-fraudulent examples, and 504 examples that we will ignore as having occurred prior to or after the fraud.
# + gather={"logged": 1613411436068}
tagged_df_clean['label'].value_counts()
# -
# Remove those examples with label value of 2 and drop the features `transactionDateTime_min` and `transactionDateTime_max`
# + gather={"logged": 1613411436252}
tagged_df_clean = tagged_df_clean[tagged_df_clean['label'] != 2]
del tagged_df_clean['transactionDateTime_min']
del tagged_df_clean['transactionDateTime_max']
# -
# Encode the transformations into custom transformers for use in a pipeline as follows:
# + gather={"logged": 1613411436564}
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin
class NumericCleaner(BaseEstimator, TransformerMixin):
    def __init__(self):
        pass
def fit(self, X, y=None):
print("NumericCleaner.fit called")
return self
def transform(self, X):
print("NumericCleaner.transform called")
X["localHour"] = X["localHour"].fillna(-99)
X["accountAge"] = X["accountAge"].fillna(-1)
X["numPaymentRejects1dPerUser"] = X["numPaymentRejects1dPerUser"].fillna(-1)
X.loc[X.loc[:,"localHour"] == -1, "localHour"] = -99
return X
class CategoricalCleaner(BaseEstimator, TransformerMixin):
    def __init__(self):
        pass
def fit(self, X, y=None):
print("CategoricalCleaner.fit called")
return self
def transform(self, X):
print("CategoricalCleaner.transform called")
X = X.fillna(value={"cardType":"U","cvvVerifyResult": "N"})
X['isUserRegistered'] = X.apply(lambda row: 1 if row["isUserRegistered"] == "TRUE" else 0, axis=1)
return X
# + gather={"logged": 1613411436751}
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OrdinalEncoder
numeric_features=["transactionAmountUSD", "localHour",
"transactionIPaddress", "digitalItemCount", "physicalItemCount", "accountAge",
"paymentInstrumentAgeInAccount", "numPaymentRejects1dPerUser"
]
categorical_features=["transactionCurrencyCode", "browserLanguage", "paymentInstrumentType", "cardType", "cvvVerifyResult",
"isUserRegistered"
]
numeric_transformer = Pipeline(steps=[
('cleaner', NumericCleaner())
])
categorical_transformer = Pipeline(steps=[
('cleaner', CategoricalCleaner()),
('encoder', OrdinalEncoder())])
preprocessor = ColumnTransformer(
transformers=[
('num', numeric_transformer, numeric_features),
('cat', categorical_transformer, categorical_features)
])
# -
# Test the transformation pipeline.
# + gather={"logged": 1613411439914}
preprocessed_result = preprocessor.fit_transform(tagged_df_clean)
# -
# ## Train the model
#
# With all the hard work of preparing the data behind you, you are now ready to train the model. In this case you will train `GradientBoostingClassifier`, a decision-tree-based ensemble model.
# + gather={"logged": 1613411459023}
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
X = preprocessed_result
y = tagged_df_clean['label']
X_train, X_test, y_train, y_test = train_test_split(X, y)
gbct = GradientBoostingClassifier()
gbct.fit(X_train, y_train)
# -
# Now use the trained model to make predictions against the test set and evaluate the performance.
# + gather={"logged": 1613411459123}
y_test_preds = gbct.predict(X_test)
# + gather={"logged": 1613411459235}
from sklearn.metrics import confusion_matrix, accuracy_score
confusion_matrix(y_test, y_test_preds)
# -
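# With fraud at well under 1% of transactions, accuracy alone is misleading; precision and recall can be read directly off the confusion matrix. A sketch with purely illustrative counts (not the actual output of the cell above):

```python
import numpy as np

# Illustrative confusion matrix (made-up counts, not the cell's real output):
# rows = true label, columns = predicted label, class order [0, 1].
cm = np.array([[49500,  80],
               [  120, 230]])

tn, fp, fn, tp = cm.ravel()
precision = tp / (tp + fp)  # of predicted frauds, how many were real
recall    = tp / (tp + fn)  # of real frauds, how many were caught
print(f"precision={precision:.3f} recall={recall:.3f}")
```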
# ## Test save and load of the model
#
# When batch scoring, you will typically work with a model that has been saved off to a shared location. That way, the jobs that use the model for batch processing can easily retrieve the latest version of the model. A good practice is to version that model in Azure Machine Learning service first by registering it. Then any jobs can retrieve the model from the Azure Machine Learning service registry.
#
# Step through the following cells to create some helper functions to prepare for this.
# + gather={"logged": 1613411459447}
import os
import azureml
from azureml.core import Workspace
from azureml.core.model import Model
# sklearn.externals.joblib was deprecated in 0.21
from sklearn import __version__ as sklearnver
from packaging.version import Version
if Version(sklearnver) < Version("0.21.0"):
from sklearn.externals import joblib
else:
import joblib
# + gather={"logged": 1613411459569}
def saveModelToAML(ws, model, model_folder_path="models", model_name="batch-score"):
# create the models subfolder if it does not exist in the current working directory
target_dir = './' + model_folder_path
if not os.path.exists(target_dir):
os.makedirs(target_dir)
# save the model to disk
joblib.dump(model, model_folder_path + '/' + model_name + '.pkl')
# notice for the model_path, we supply the name of the model outputs folder without a trailing slash
# anything present in the model folder path will be uploaded to AML along with the model
print("Registering and uploading model...")
registered_model = Model.register(model_path=model_folder_path,
model_name=model_name,
workspace=ws)
return registered_model
# + gather={"logged": 1613411459668}
def loadModelFromAML(ws, model_name="batch-score"):
# download the model folder from AML to the current working directory
model_file_path = Model.get_model_path(model_name, _workspace=ws)
print('Loading model from:', model_file_path)
model = joblib.load(model_file_path)
return model
# -
# Save the model to Azure Machine Learning service.
# + gather={"logged": 1613411461070}
#Save the model to the AML Workspace
registeredModel = saveModelToAML(ws, gbct)
# -
# Now, try out the loading process by getting the model from Azure Machine Learning service, loading the model and then using the model for scoring.
# + gather={"logged": 1613411461738}
# Test loading the model
gbct = loadModelFromAML(ws)
y_test_preds = gbct.predict(X_test)
# + gather={"logged": 1613411461847}
y_test_preds
# + gather={"logged": 1613411461939}
print("subscription_id = '" + ws.subscription_id + "'",
"resource_group = '" + ws.resource_group + "'",
"workspace_name = '" + ws.name + "'",
"workspace_region = '" + ws.location + "'", sep='\n')
# -
# > **Important**: Copy the output of the cell above and paste it to Notepad or similar text editor for later.
# ## Next
#
# Congratulations, you have completed Exercise 3.
#
# After you have **copied the output of the cell above**, that contains connection information to your Azure ML workspace, please return to the Cosmos DB real-time advanced analytics hands-on lab setup guide and continue on to Exercise 4.
| Hands-on lab/lab-files/Prepare batch scoring model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pseudorandom number generation
#
# <img style="float: center; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/6/6a/Dice.jpg" width="300px" height="100px" />
#
# **Class references:**
# - https://webs.um.es/mpulido/miwiki/lib/exe/fetch.php?id=amio&cache=cache&media=wiki:simt1b.pdf
# - http://www.lmpt.univ-tours.fr/~nicolis/Licence_NEW/08-09/boxmuller.pdf
#
# **References for the libraries we will use:**
# - http://www.numpy.org/
# - https://matplotlib.org/
# ___
# ## 0. Introduction
#
# - Random numbers are the essential foundation of scenario simulation.
# - All the randomness involved in a model is obtained from a random number generator that produces a sequence of values which are supposedly realizations of a sequence of independent, identically distributed random variables.
#
# *<font color = blue> Tell the cheap lottery story... </font>*
# ### 0.1 What is a pseudorandom number?
#
# <img style="float: right; margin: 0px 0px 15px 15px;" src="http://www.publicdomainpictures.net/pictures/50000/velka/random-numbers.jpg" width="300px" height="100px" />
#
# - It is a number generated by a process that appears to produce numbers at random, but does not really do so.
# - Sequences of pseudorandom numbers show no apparent pattern or regularity from a statistical point of view, despite being generated by a completely deterministic algorithm in which the same initial conditions always produce the same result.
# - Usually the interest is not in generating a single random number, but many of them, collected in what is known as a random sequence.
#
# ### 0.2 Where are they applied?
#
# - Computer modeling and simulation, statistics, experimental design. Normally, the entropy (randomness) of the numbers generated today is enough for these applications.
# - Cryptography. This field is still under constant research, and therefore so is random number generation.
# - They are also prominent in the so-called Monte Carlo method, which has many uses.
# - Among others...
#
# ### 0.3 Basic operation
#
# - Choose an initial seed (initial condition) $x_0$.
# - Generate a sequence of values $x_n$ through the recurrence relation $x_n=T(x_{n-1})$.
#
# > Generally, this sequence consists of $\mathcal{U}(0,1)$ pseudorandom numbers.
#
# - Finally, a pseudorandom number with the desired distribution is generated through some relation $u_n=g(x_n)$.
# - These sequences are periodic. That is, at some point $x_j = x_i$ will occur for some $j > i$.
#
# ### 0.4 When is a pseudorandom number generator good?
#
# - The sequence of values it provides should resemble a sequence of independent realizations of a $\mathcal{U}(0, 1)$ random variable.
# - The results must be reproducible, in the sense that starting from the same initial seed it must be able to reproduce the same sequence. This allows testing different alternatives under the same conditions and/or debugging faults in the model.
# - The sequence of generated values should have a non-repeating period as long as possible.
# ___
# ## 1. Congruential methods for generating $\mathcal{U}(0,1)$ pseudorandom numbers
#
# - Introduced by Lehmer in 1951.
# - They are the main pseudorandom number generators used today.
#
# ### 1.1 General description of the method
#
# - It starts with an initial value (seed) $x_0$, and the subsequent values, $x_n$ for $n \geq 1$, are obtained recursively with the following formula:
# $$x_n = (ax_{n−1} + b) \mod m.$$
# - In the formula above, $\text{mod}$ denotes the remainder operation.
# - The positive integers $m$, $a$ and $b$ in the formula are called:
# - $0<m$ the modulus,
# - $0<a<m$ the multiplier, and
# - $0\leq b <m$ the increment.
# - The seed must satisfy $0\leq x_0<m$.
# - If $b = 0$, the generator is called multiplicative.
# - Otherwise it is called mixed.
# **Example**
#
# To build intuition about this method, try the following parameter sets by hand:
# 1. $m=9$, $a=5$, $b=1$, $x_0=1$.
# 2. $m=16$, $a=5$, $b=3$, $x_0=7$.
# Given the above, what do the numbers $x_i$ look like? Does this pose a problem? How could it be solved?
#
# <font color=red> State the problems and their respective solutions... </font>
# Indeed, a congruential generator is completely determined by the parameters $m$, $a$, $b$ and $x_0$.
#
# **Proposition.** The values generated by a congruential method satisfy:
#
# $$x_n = \left(a^n x_0+b\frac{a^n-1}{a-1}\right) \mod m.$$
#
# <font color=blue> Verify this on the board. </font>
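# A sketch of one induction step for the proposition: assuming the formula holds for $n$,
#
# $$x_{n+1} = (a x_n + b) \mod m = \left(a\left(a^n x_0 + b\frac{a^n-1}{a-1}\right) + b\right) \mod m = \left(a^{n+1} x_0 + b\frac{(a^{n+1}-a) + (a-1)}{a-1}\right) \mod m = \left(a^{n+1} x_0 + b\frac{a^{n+1}-1}{a-1}\right) \mod m,$$
#
# which is the same formula with $n$ replaced by $n+1$; the base case $n=0$ is immediate.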
# ### 1.2 Let's program this method
#
# Following the description above, we want to write a function that takes:
# - the seed $x_0$,
# - the multiplier $a$,
# - the increment $b$,
# - the modulus $m$, and
# - the number $n$ of elements of the required pseudorandom sequence,
#
# and returns the pseudorandom sequence of length $n$.
#### Import the numpy library... useful for handling n-dimensional data (vectors)
import numpy as np
#### Write the function here
def cong_method1(x0, a, b, m, n):
x = [x0]
for i in range(1,n):
x.append((a * x[-1] + b) % m)
return np.array(x)/m
# **Example**
#
# Try the previous parameter sets:
# 1. $m=9$, $a=5$, $b=1$, $x_0=1$.
# 2. $m=16$, $a=5$, $b=3$, $x_0=7$.
#
# Also,
# - For parameter set 1, try the seeds $x_0=5,8$.
# - For parameter set 2, try different seeds.
#### Try it here
x = cong_method1(8, 5, 1, 9, 15)
x
x = cong_method1(2, 5, 3, 16, 20)
x
# **Example**
#
# *for* and *while* loops are an affront to efficient computation. Program a vectorized version using the formula:
# $$x_n = \left(a^n x_0+b\frac{a^n-1}{a-1}\right) \mod m.$$
#### Write the function here
def cong_method2(x0, a, b, m, n):
N = np.arange(n)
return ((a**N * x0 + b * ((a**N-1)/(a-1))) % m)/m
cong_method2(2, 5, 3, 16, 20)
# So we see that the quality of our congruential generator depends heavily on the choice of parameters, since we want the periods to be as large as possible ($m$).
#
# When the period of a congruential generator coincides with the modulus $m$, we call it a *full-cycle generator*. The period of this type of generator is independent of the seed we use.
#
# The following theorem gives conditions for building full-cycle generators:
# **Theorem.** A congruential generator has full period if and only if the following conditions hold:
# 1. $m$ and $b$ are relatively prime.
# 2. If $q$ is a prime number dividing $m$, then $q$ divides $a − 1$.
# 3. If $4$ divides $m$, then $4$ divides $a − 1$.
# **Exercise**
#
# Check the theorem on parameter set 2.
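# The theorem's three conditions can be checked mechanically. A minimal sketch (the helpers `prime_factors` and `is_full_period` are our own names, not library functions):

```python
from math import gcd

# Set of distinct prime factors of n, by trial division.
def prime_factors(n):
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

# Check the three full-period conditions of the theorem for an LCG (m, a, b).
def is_full_period(m, a, b):
    cond1 = gcd(m, b) == 1                                   # m and b relatively prime
    cond2 = all((a - 1) % q == 0 for q in prime_factors(m))  # every prime q | m divides a-1
    cond3 = (m % 4 != 0) or ((a - 1) % 4 == 0)               # if 4 | m, then 4 | a-1
    return cond1 and cond2 and cond3

print(is_full_period(16, 5, 3))  # parameter set 2 -> True (full period 16)
print(is_full_period(9, 5, 1))   # parameter set 1 -> False (its period is 6 < 9)
```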
# ### 1.3 Additional remarks on the congruential generator
#
# So far we have only relied on theoretical aspects to judge whether a generator is good. There are computational aspects too...
#
# In that sense, multiplicative generators are more efficient than mixed ones because they save the addition operation. However, by the **Theorem**, <font color=red>what happens with multiplicative generators?</font>
#
# Likewise, a computationally convenient choice is $m=2^k$ ($m$ is chosen large to obtain long periods). With this choice, and $k\geq2$, the generator has full period if and only if $b$ is odd and $a \equiv 1 \mod 4$.
#
# Combining the above (a multiplicative generator with $m=2^k$), the maximum attainable period is one quarter of $m$, $\frac{2^k}{4}=2^{k-2}$, and it is reached only for odd $x_0$ and $a \equiv 3 \mod 8$ or $a \equiv 5 \mod 8$.
#
# A widely used multiplicative generator, known as *RANDU*, took $m = 2^{31}$ and $a = 2^{16} + 3$. However, it has been shown to have rather poor statistical properties.
#
# The most famous multiplicative generators used by IBM took $m = 2^{31} − 1$ and $a = 7^5$ or $a = 630360016$.
#
# You can find more information at this [link](https://en.wikipedia.org/wiki/Linear_congruential_generator).
#
# - Combinations of generators and other more elaborate generators can also be built...
# **Example**
#
# Take the parameters $m=2^{31} − 1$, $a=1103515245$ and $b=12345$, and generate a standard uniform pseudorandom sequence of $n=10^4$ elements.
#
# Then plot the histogram (frequency diagram). Does the result match what you expected?
#### Solve here
x = cong_method2(3, 1103515245, 12345, 2**31-1, 10**6)
import matplotlib.pyplot as plt
# %matplotlib inline
plt.hist(x)
plt.xlabel('random values')
plt.ylabel('frequency')
plt.title('histogram')
plt.show()
# **Example**
#
# How can we obtain pseudorandom sequences in $\mathcal{U}(a,b)$?
#
# Write code for this. Run a test with the parameters taken above and plot the histogram to compare.
#### Solve here
a, b = 7, 10
xab = (b-a)*x+a
plt.hist(xab)
plt.xlabel('random values')
plt.ylabel('frequency')
plt.title('histogram')
plt.show()
# **Example**
#
# Write a function that returns sequences of $\mathcal{U}(0,1)$ random numbers using the parameters given above, with `time.time()` as the seed.
#### Solve here
import time
def randuni(n):
return cong_method2(round(time.time()*10**7), 1103515245, 12345, 2**31-1, n)
# ___
# ## 2. Box–Muller method for generating $\mathcal{N}(0,1)$ pseudorandom numbers
#
# Given two independent sequences of pseudorandom numbers uniformly distributed on the interval $\left[0,1\right]$ ($\mathcal{U}(0,1)$), it is possible to generate two independent sequences of pseudorandom numbers that are normally distributed with zero mean and unit variance ($\mathcal{N}(0,1)$).
#
# This is known as the Box–Muller method.
# Suppose $U_1$ and $U_2$ are independent random variables uniformly distributed on the interval $\left[0,1\right]$. Then let:
#
# $$X=R\cos(\theta)=\sqrt{-2\log(U_1)}\cos(2\pi U_2),$$
#
# and
#
# $$Y=R\sin(\theta)=\sqrt{-2\log(U_1)}\sin(2\pi U_2).$$
#
# Then $X$ and $Y$ are independent random variables with a standard normal distribution ($\mathcal{N}(0,1)$).
# The derivation is based on the transformation from the Cartesian system to the polar system.
#
# <font color=blue> Show intuitively on the board. </font>
# **Example**
#
# Write a function that returns sequences of $\mathcal{N}(0,1)$ random numbers.
#
# *Use the function written above*
#### Solve here
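# A minimal sketch of the transform, using numpy's uniform generator as a stand-in for the congruential generator built above:

```python
import numpy as np

def box_muller(n, seed=42):
    # Two independent U(0,1) sequences (numpy's generator stands in for
    # the congruential generator built earlier in the class).
    rng = np.random.default_rng(seed)
    u1 = rng.uniform(size=n)
    u2 = rng.uniform(size=n)
    # Box-Muller transform: polar radius and angle from the two uniforms.
    r = np.sqrt(-2.0 * np.log(u1))
    x = r * np.cos(2.0 * np.pi * u2)
    y = r * np.sin(2.0 * np.pi * u2)
    return x, y

x, y = box_muller(10**5)
print(x.mean(), x.std())  # both close to 0 and 1 respectively
```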
# **Example**
#
# Generate a standard normal pseudorandom sequence of $n=10^4$ elements.
#
# Then plot the histogram (frequency diagram). Does the result match what you expected?
#### Solve here
# **Example**
#
# How can we obtain pseudorandom sequences in $\mathcal{N}(\mu,\sigma)$?
#
# Write code for this. Run a test and plot the histogram to compare.
#### Solve here
# Finally, show that functions of this kind already exist in `numpy`. Now we know how they are obtained.
# <script>
# $(document).ready(function(){
# $('div.prompt').hide();
# $('div.back-to-top').hide();
# $('nav#menubar').hide();
# $('.breadcrumb').hide();
# $('.hidden-print').hide();
# });
# </script>
#
# <footer id="attribution" style="float:right; color:#808080; background:#fff;">
# Created with Jupyter by <NAME>.
# </footer>
| Modulo1/.ipynb_checkpoints/Clase4_NumerosPseudoaleatorios-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Basic file handling in Python
# Before we can actually work with a file we need to tell Python which file to use. Specifically, we open a file with the open() function, which returns a file handle; using this handle we can manipulate the file we want.
# open a file and get the file handle as output
# file_handle = open(filename, mode) , mode can be 'r' for read or 'w' for write
file_handle = open('MyDetails.txt', 'r')
print(file_handle) # print the details of the file handle
# Now that we know how to open a text file, let's see how to process it.
# NOTE: We can treat the file handle as a sequence of strings, composed of the lines of the file we just opened in read mode. Thus, we can use a loop to read through this sequence of strings. Let's see how to do it.
#
# +
file_handle = open('MyDetails.txt', 'r') # get the file handle
# loop through the file handle to print each line fo the file
for index in file_handle:
    #index = index.rstrip() # removes the whitespace (e.g. the \n character) at the end of each line
print (index)
# -
# How do we count the number of lines in a file? This can be done by including a variable inside the for loop that increments its value every time the loop runs.
# +
file_handle = open('MyDetails.txt', 'r') # get the file handle
count = 0 # initialise the count variable to 0
# loop through the file handle to print each line of the file
for index in file_handle:
    count = count + 1 # increase its value every time the loop runs
print('Number of lines in the file: ', count)
# -
# Another way to read the whole file into a single string is to use the read() method of the file handle. This method reads everything, including the newline characters, as a single string.
#
# +
file_handle = open('MyDetails.txt', 'r')
content = file_handle.read()
print('File size: ', len(content)) # print the length of content
print('First few words: ' + content[:40]) # print first 40 characters
print()
print(content) # Show all the content at one go.
# -
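# A related idiom worth knowing: a with block closes the file automatically when the block ends, so the handle is never leaked. (The file demo.txt is created here just for the example.)

```python
# Write a small throwaway file, then read it back line by line.
with open('demo.txt', 'w') as fh:
    fh.write('first line\nsecond line\n')

# The file is closed automatically when the with block exits.
with open('demo.txt') as fh:
    lines = [line.rstrip() for line in fh]

print(lines)  # ['first line', 'second line']
```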
# We can use the input() function to ask the user for input, e.g. filenames. Let's see how to use it.
#
# +
filename = input('Enter filename you want to read: ')
print('Filename entered: ' + filename)
print()
# error handling using a try/except block
try:
file_handle = open(filename, 'r')
except:
print('File not found')
quit()
count = 0 # counter for the current line number
for line in file_handle:
    count = count + 1 # increase the line number value every time the loop runs
print('Current Line: ', count) # print the current line number
    if line.find('research') != -1: # if the substring 'research' exists in the current line, find() does not return -1
print('Research info: '+ line)
else:
print('No info about research')
print()
# -
| Class2-FileHandling-Python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="ylGjtrKMzX6e" colab_type="text"
# <NAME>: 790032
#
#
# <NAME>: 799368
# + [markdown] id="CmJNH1wk9mhk" colab_type="text"
# # Processing of one-dimensional signals
# + [markdown] id="_zgH6vOWxzuF" colab_type="text"
# This file contains 3 different approaches:
# * CNN network
# * GRU network
# * GMM
#
# The CNN network is the actual model that will be presented during the live demo; the other two are approaches designed and implemented earlier that nevertheless achieve very good results.
# + [markdown] id="zd6xPyS496RX" colab_type="text"
# ## Import
# + id="8YxMA31ZiJQa" colab_type="code" outputId="4c91ffdb-6caa-4e5b-e494-276ddd164df6" executionInfo={"status": "ok", "timestamp": 1581598998254, "user_tz": -60, "elapsed": 5731, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10319363990719344506"}} colab={"base_uri": "https://localhost:8080/", "height": 54}
# %tensorflow_version 2.x
# ! pip install -q keras==2.3.0
# + id="Ls7TLXvf-8B7" colab_type="code" outputId="67091c66-6a6a-4404-b49f-c29666b8bb92" executionInfo={"status": "ok", "timestamp": 1581599010378, "user_tz": -60, "elapsed": 17842, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10319363990719344506"}} colab={"base_uri": "https://localhost:8080/", "height": 35}
import os
import numpy as np
import pandas as pd
import librosa
import librosa.display as lid
from sklearn.preprocessing import scale
from tqdm import tqdm
import matplotlib.pyplot as plt
import IPython.display as ipd
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.mixture import GaussianMixture
from keras import layers, optimizers, callbacks, Model, losses, models, applications
from keras.preprocessing import image as kimage
import cv2 as cv
from joblib import dump,load
import _pickle as cPickle
# + id="_XT8wLPO-mp0" colab_type="code" outputId="0931f9ef-a416-4ac4-9ca5-abc1c4d6f547" executionInfo={"status": "ok", "timestamp": 1581599031535, "user_tz": -60, "elapsed": 38777, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10319363990719344506"}} colab={"base_uri": "https://localhost:8080/", "height": 128}
from google.colab import drive
drive.mount('/content/drive/')
# + id="Bc4h34SASrxd" colab_type="code" colab={}
#Define the directory on my Drive from which the data will be taken
dir_drive = "/content/drive/My Drive/progetto_dsim/data/tutto/"
#Define the categories associated with the data folders
category = ["fede", "liccia"]
# + [markdown] id="Dm6mf6XXkjHW" colab_type="text"
# ## MFCC preprocessing
# + id="beHxJ8sj--ik" colab_type="code" colab={}
def add_noise(audio_signal, noise_factor = 30):
noise = np.random.randn(len(audio_signal))
    data_noise = np.add(audio_signal, noise_factor*noise) # increase the multiplicative factor for more noise
return data_noise
def mfccs(input, rate, n_mfcc = 13):
mfcc = librosa.feature.mfcc(input*1.0, sr = rate, n_mfcc = n_mfcc)
return mfcc
def identity(input, rate, n_mfcc):
return input
# + [markdown] id="1Of3MvghjI0v" colab_type="text"
# ### Loading augmented data for training but not for testing
# + id="M0o_DuC4FF1m" colab_type="code" colab={}
def create_data(n_mfcc = 13, feature_extractor=identity):
for cat in category:
        path = os.path.join(dir_drive,cat) # build the path to fede, liccia
        class_num = category.index(cat) # get the label (0, 1).
for f in tqdm(os.listdir(path)):
try:
if f.endswith('.wav'):
                    # Load the file and extract its features
signal, rate = librosa.core.load(os.path.join(path,f), mono=True, sr = 44100)
#signal_noise = add_noise(signal, 0.005)
cur_features = feature_extractor(signal, rate, n_mfcc)
#cur_features_noise = feature_extractor(signal_noise, rate, n_mfcc)
features.append(cur_features)
labels.append(class_num)
#features.append(cur_features_noise)
#labels.append(class_num)
            except Exception as e: # in the interest of keeping the output clean...
pass
# + id="b-yxxzRXVtri" colab_type="code" outputId="3bc2853a-6573-4268-cf01-29a1934218e0" executionInfo={"status": "ok", "timestamp": 1581587766520, "user_tz": -60, "elapsed": 108960, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09731871360606149001"}} colab={"base_uri": "https://localhost:8080/", "height": 54}
labels = []
features = []
create_data(feature_extractor=identity, n_mfcc=13)
# + id="AjEq9JzhHwLu" colab_type="code" colab={}
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.1, random_state=42)
# + id="FIi58wDiH5TR" colab_type="code" outputId="6ae7b0f5-4c10-4832-8452-3965d6772f11" executionInfo={"status": "ok", "timestamp": 1581587768363, "user_tz": -60, "elapsed": 106465, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09731871360606149001"}} colab={"base_uri": "https://localhost:8080/", "height": 35}
# Add noise
a = []
for i in tqdm(X_train):
a.append(add_noise(i, 0.005))
X_train = X_train + a
y_train = y_train + y_train
# + id="6Va-3-waJsH7" colab_type="code" outputId="79e60f61-9024-4af5-c2d1-7f1a4dc7f330" executionInfo={"status": "ok", "timestamp": 1581587789049, "user_tz": -60, "elapsed": 124059, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09731871360606149001"}} colab={"base_uri": "https://localhost:8080/", "height": 54}
# Compute the MFCCs
for i in tqdm(range(len(X_train))):
X_train[i] = mfccs(X_train[i], 44100, 13)
for i in tqdm(range(len(X_test))):
X_test[i] = mfccs(X_test[i], 44100, 13)
# + id="LGIj9jTqZiPO" colab_type="code" outputId="0c7f6835-dcdb-4da4-c670-7aa27a2907ef" executionInfo={"status": "ok", "timestamp": 1581587789310, "user_tz": -60, "elapsed": 121918, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09731871360606149001"}} colab={"base_uri": "https://localhost:8080/", "height": 279}
lid.specshow(X_train[0], sr=44100, x_axis='time')
plt.show()
# + id="pqvNJHctI8Vf" colab_type="code" colab={}
width = X_train[0].shape[0]
height = X_train[0].shape[1]
# + id="sOvkRsD1bDDb" colab_type="code" colab={}
X_train = np.array(list(X_train)).reshape(-1, width, height, 1)
X_test = np.array(list(X_test)).reshape(-1, width, height, 1)
# + id="dWDD1DfRl0Ou" colab_type="code" colab={}
mean = X_train.mean(axis = 0)
std = X_train.std(axis = 0)
np.save("/content/drive/My Drive/progetto_dsim/consegna/mean_audio", mean)
np.save("/content/drive/My Drive/progetto_dsim/consegna/std_audio", std)
# + id="e58_Yol-omcC" colab_type="code" colab={}
X_train = (X_train-mean) / std
# + id="1aj4Jx4JrQTz" colab_type="code" colab={}
X_test = (X_test-mean) / std
# + [markdown] id="XpNIU4yDey9n" colab_type="text"
# ## CNN training
# + id="UbMvvqCChe67" colab_type="code" outputId="49bb1ad3-4f06-4b8f-b4d1-f456b7bf77ee" executionInfo={"status": "ok", "timestamp": 1581588359690, "user_tz": -60, "elapsed": 7684, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09731871360606149001"}} colab={"base_uri": "https://localhost:8080/", "height": 526}
model = models.Sequential()
model.add(layers.Conv2D(64, (2,2), activation='relu', input_shape=(width,height,1)))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(64, (2,2), activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Dropout(0.3))
model.add(layers.GlobalAveragePooling2D())
model.add(layers.Dense(2, activation='softmax'))
model.summary()
# + id="HHCuH5rYinNX" colab_type="code" colab={}
model.compile(loss='sparse_categorical_crossentropy',
optimizer=optimizers.adam(),
metrics=['acc'])
j = 0
# + id="d7HW4Drlivv4" colab_type="code" outputId="60b35840-74fd-4917-c95c-3376e661610b" executionInfo={"status": "ok", "timestamp": 1581588399009, "user_tz": -60, "elapsed": 34423, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09731871360606149001"}} colab={"base_uri": "https://localhost:8080/", "height": 1000}
mc = callbacks.ModelCheckpoint(filepath="/content/drive/My Drive/progetto_dsim/modelli/audio" + str(j) + ".hdf5" ,
monitor="val_loss",
save_best_only = True)
history = model.fit(X_train, y_train,
batch_size=32,
epochs=100,
validation_split=0.2,
callbacks = [mc])
j += 1
# + id="FKzYxzmrJwgY" colab_type="code" outputId="00e0d01c-cdfc-4991-ef48-2f0693119758" executionInfo={"status": "ok", "timestamp": 1581588399720, "user_tz": -60, "elapsed": 31691, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09731871360606149001"}} colab={"base_uri": "https://localhost:8080/", "height": 541}
x_plot = list(range(1,history.epoch[-1]+2))
def plot_history(network_history):
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.plot(x_plot, network_history.history['loss'])
plt.plot(x_plot, network_history.history['val_loss'])
plt.legend(['Training', 'Validation'])
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.plot(x_plot, network_history.history['acc'])
plt.plot(x_plot, network_history.history['val_acc'])
plt.legend(['Training', 'Validation'])
plot_history(history)
# + [markdown] id="UrJEyiQwD1B2" colab_type="text"
# ## Test CNN
# + id="coBuYba9N3Ar" colab_type="code" outputId="63d8e1b7-c6f9-4a6e-c088-a4dda84e3195" executionInfo={"status": "ok", "timestamp": 1581588399727, "user_tz": -60, "elapsed": 27797, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09731871360606149001"}} colab={"base_uri": "https://localhost:8080/", "height": 72}
test_loss, test_acc = model.evaluate(X_test, y_test)
print("TEST LOSS:", test_loss)
print("TEST ACCURACY:", test_acc)
# + id="zhar0O05S6rJ" colab_type="code" outputId="d05bcd8b-d097-4688-ff93-b36142c440a8" executionInfo={"status": "ok", "timestamp": 1581588401914, "user_tz": -60, "elapsed": 2133, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09731871360606149001"}} colab={"base_uri": "https://localhost:8080/", "height": 54}
prova = models.load_model("/content/drive/My Drive/progetto_dsim/modelli/audio0.hdf5")
prova.evaluate(X_test, y_test)
# + id="xNwOnuFkokbq" colab_type="code" outputId="ad0d7655-2787-496e-ce63-b24a0af18d1e" executionInfo={"status": "ok", "timestamp": 1581346590873, "user_tz": -60, "elapsed": 574, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09731871360606149001"}} colab={"base_uri": "https://localhost:8080/", "height": 170}
print(classification_report(y_test, prova.predict_classes(X_test)))
# + [markdown] colab_type="text" id="xkMjvICcDnHd"
# ## GRU
# + id="1RlombzoDmrb" colab_type="code" colab={}
X_train = np.squeeze(X_train)
X_test = np.squeeze(X_test)
# + colab_type="code" id="sig-s_vKDnHH" colab={}
X_train = np.array(X_train)
X_test = np.array(X_test)
# + colab_type="code" id="oC0eqdE3DnGw" colab={}
# Run this only if the data was not already normalized earlier
mean = X_train.mean(axis = 0)
std = X_train.std(axis = 0)
X_train = (X_train-mean) / std
X_test = (X_test-mean) / std
# + colab_type="code" id="ChRzVu8JDnFl" colab={}
model = models.Sequential()
model.add(layers.GRU(64, return_sequences=False, input_shape = (width, height)))
model.add(layers.Dense(2, activation="softmax"))
# + colab_type="code" id="pfRQomYtDnE6" colab={}
model.compile(loss='sparse_categorical_crossentropy',
optimizer=optimizers.adam(),
metrics=['acc'])
# + colab_type="code" outputId="8735b145-8a69-46c0-e904-c6e0ac1466df" executionInfo={"status": "ok", "timestamp": 1581526136632, "user_tz": -60, "elapsed": 22725, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10319363990719344506"}} id="HQnRlluiDnDc" colab={"base_uri": "https://localhost:8080/", "height": 399}
history = model.fit(X_train, y_train,
batch_size = 16,
epochs = 10,
validation_split=0.2)
# + id="U3MEp_LkKVhv" colab_type="code" outputId="8a4d2413-8ba2-4690-f75c-77e59b62b9c9" executionInfo={"status": "ok", "timestamp": 1581526086569, "user_tz": -60, "elapsed": 985, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10319363990719344506"}} colab={"base_uri": "https://localhost:8080/", "height": 541}
x_plot = list(range(1,history.epoch[-1]+2))
plot_history(history)
# + id="XBLDfAkJBkBt" colab_type="code" outputId="655baa13-f392-4f66-9d98-4fa2740874d3" executionInfo={"status": "ok", "timestamp": 1581526097508, "user_tz": -60, "elapsed": 563, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10319363990719344506"}} colab={"base_uri": "https://localhost:8080/", "height": 72}
test_loss, test_acc = model.evaluate(X_test, y_test)
print("TEST LOSS:", test_loss)
print("TEST ACCURACY:", test_acc)
# + [markdown] colab_type="text" id="Z8UV_mlHDkBm"
# ## GMM
# + id="A6Loi7-4Y3-0" colab_type="code" colab={}
# Define the directory in my Drive the data will be loaded from
dir_drive = "/content/drive/My Drive/progetto_dsim/data/tutto"
# Define the categories associated with the data folders
category = ["fede", "liccia"]
# + colab_type="code" id="qGpsOPHpDkBO" colab={}
def identity(input, rate, n_mfcc):
return input
def calculate_delta(array):
rows,cols = array.shape
    deltas = np.zeros((rows, cols))  # one delta row per input row
N = 2
for i in range(rows):
index = []
j = 1
while j <= N:
            if i-j < 0: # at the first row there is no previous mfcc value to take
first = 0
else:
first = i-j
            # At the last row there is no next mfcc value to take.
            # shape counts the rows starting from 1, but the array is indexed from 0,
            # so the last valid row index is rows-1.
if i+j > rows-1:
second = rows-1
else:
second = i+j
index.append((second,first))
j+=1
deltas[i] = ( array[index[0][0]]-array[index[0][1]] + (2 * (array[index[1][0]]-array[index[1][1]])) ) / 10
return deltas
def mfccs(input, rate, n_mfcc):
mfcc = librosa.feature.mfcc(input*1.0, sr = rate, n_mfcc = n_mfcc)
return mfcc
def create_data(n_mfcc = 20, feature_extractor=identity):
for cat in category:
path = os.path.join(dir_drive,cat)
class_num = category.index(cat)
for f in tqdm(os.listdir(path)):
try:
if f.endswith('.wav'):
                    # Load the file and extract its features
signal, rate = librosa.core.load(os.path.join(path,f), mono=True, sr = 44100)
mfcc_feat = feature_extractor(signal, rate, n_mfcc)
#delta = calculate_delta(mfcc_feat)
#combined = np.vstack((mfcc_feat,delta))
features.append(mfcc_feat)
labels.append(class_num)
            except Exception:  # swallowed in the interest of keeping the output clean...
pass
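# The delta recipe in `calculate_delta` above can also be written without the explicit loop. A hypothetical vectorized equivalent (neighbour indices clamped at the edges, denominator 2*(1^2 + 2^2) = 10), repeated self-contained here for illustration:

```python
import numpy as np

def delta_sketch(feat):
    # Same recipe as calculate_delta: for row i take
    # (x[i+1] - x[i-1] + 2*(x[i+2] - x[i-2])) / 10,
    # clamping out-of-range neighbour indices to the first/last row.
    rows = feat.shape[0]
    idx = np.arange(rows)
    up1, dn1 = np.clip(idx + 1, 0, rows - 1), np.clip(idx - 1, 0, rows - 1)
    up2, dn2 = np.clip(idx + 2, 0, rows - 1), np.clip(idx - 2, 0, rows - 1)
    return (feat[up1] - feat[dn1] + 2 * (feat[up2] - feat[dn2])) / 10.0
```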
# + colab_type="code" id="Sfk0zXwgDkA6" colab={}
labels = []
features = []
create_data(feature_extractor=mfccs, n_mfcc=20)
# + colab_type="code" id="yzQTZlDZDkAm" colab={}
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.1, random_state=42)
# + id="khB3nPWyUD_U" colab_type="code" colab={}
X_train = np.array(X_train)
X_test = np.array(X_test)
# + id="3KJxTva3T0jq" colab_type="code" colab={}
# Swap the axes to avoid memory problems
X_train = np.swapaxes(X_train, 2, 1)
X_test = np.swapaxes(X_test, 2, 1)
# + colab_type="code" id="Jf83oz0mDkAP" colab={}
# Build the feature matrix for each speaker by concatenating the features of the different audio files
features_fede = np.asarray(())
features_matteo = np.asarray(())
for index, vector in enumerate(X_train):
if y_train[index] == 0:
        # fede's case
if features_fede.size == 0:
features_fede = vector
else:
features_fede = np.vstack((features_fede, vector))
    else: # matteo's case
if features_matteo.size == 0:
features_matteo = vector
else:
features_matteo = np.vstack((features_matteo, vector))
# + colab_type="code" outputId="04f9b9dd-3e81-4144-cd5f-4611ed5fd386" executionInfo={"status": "ok", "timestamp": 1581602126694, "user_tz": -60, "elapsed": 1082757, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10319363990719344506"}} id="k7O_IeT1Dj_E" colab={"base_uri": "https://localhost:8080/", "height": 54}
dest = "/content/drive/My Drive/progetto_dsim/modelli/GMM/" # path where the trained models are saved
features = ['features_fede', 'features_matteo']
for feature_name in features:
    feature = eval(feature_name)  # look the array up by its variable name
    # Tune the number of GMM components via the BIC
n_components = np.arange(1, 21)
models = [GaussianMixture(n, random_state=66, max_iter=200).fit(feature) for n in n_components]
bic = [m.bic(feature) for m in models]
    # Pick the model whose number of components minimizes the BIC
gmm = models[bic.index(min(bic))]
picklefile = feature_name.split("_")[1]+".gmm"
cPickle.dump(gmm, open(dest + picklefile,'wb'))
print('+ modeling completed for speaker:', picklefile, " with data point = ", feature.shape,
". Selected ", gmm.get_params(True)['n_components'], "components for the GMM model.")
# + colab_type="code" id="vUOQGYERDj-n" outputId="29f1dc52-8447-44d1-8046-2aa1cd360e7f" executionInfo={"status": "ok", "timestamp": 1581602127604, "user_tz": -60, "elapsed": 898, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10319363990719344506"}} colab={"base_uri": "https://localhost:8080/", "height": 1000}
modelpath = "/content/drive/My Drive/progetto_dsim/modelli/GMM/"
gmm_files = [os.path.join(modelpath,fname) for fname in
os.listdir(modelpath) if fname.endswith('.gmm')]
# Load the Models
models = [cPickle.load(open(fname,'rb')) for fname in gmm_files]
speakers = [0,1]
y_pred = []
for index, test_feature in enumerate(X_test):
log_likelihood = np.zeros(len(models))
for i in range(len(models)):
gmm = models[i]
        scores = np.array(gmm.score_samples(test_feature)) # changed to score_samples
log_likelihood[i] = scores.sum()
winner = np.argmax(log_likelihood)
y_pred.append(speakers[winner])
print(y_test[index], "detected as - ", speakers[winner])
# + colab_type="code" id="RUDZ_QlNDj-Q" colab={}
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
# + id="ZHUfTBhcsllM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 352} outputId="da7d53bd-bbb5-4430-e9f5-ed63806b61f0" executionInfo={"status": "ok", "timestamp": 1581602127883, "user_tz": -60, "elapsed": 1163, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10319363990719344506"}}
cnf_mtrx = confusion_matrix(y_test, y_pred)
import itertools
plt.figure()
plot_confusion_matrix(cnf_mtrx, classes=['federico', 'matteo'],
title='Confusion matrix, without normalization')
| Dsim/Progetto pt.1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: thesis-venv
# language: python
# name: thesis-venv
# ---
# +
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
sns.set(style="white")
# +
# Useful helper functions: indices of the n highest / lowest values
def get_highest_values(arr, n):
return np.array(arr).argsort()[-n:][::-1]
def get_lowest_values(arr, n):
return np.array(arr).argsort()[::-1][-n:][::-1]
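# A quick demonstration of what the two helpers return (array indices, not values); the functions are repeated here only so the cell is self-contained:

```python
import numpy as np

def get_highest_values(arr, n):
    return np.array(arr).argsort()[-n:][::-1]

def get_lowest_values(arr, n):
    return np.array(arr).argsort()[::-1][-n:][::-1]

scores = [0.3, 0.9, 0.1, 0.7]
print(get_highest_values(scores, 2))  # indices of the two largest values
print(get_lowest_values(scores, 2))   # indices of the two smallest values
```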
# +
data_file = "data/temp.train"
interval = 16
# !python generate/generate_data_model_random.py --output data/temp --interval "0, 16" --kind svdne --metric sub_blocks_area --scenes "A, D, G, H" --nb_zones 16 --random 1 --percent 1.0 --step 10 --each 1 --renderer maxwell --custom temp_min_max_values
# -
# ## Correlation analysis between SVD features
df = pd.read_csv(data_file, sep=';', header=None)
df = df.drop(df.columns[[0]], axis=1)
df.head()
# Compute the correlation matrix
corr = df[1:interval].corr()
# +
# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=bool)
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(30, 20))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, cmap=cmap,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
savefig('corr_no_label.png')
# +
features_corr = []
for id_row, row in enumerate(corr):
correlation_score = 0
for id_col, val in enumerate(corr[row]):
if id_col != id_row:
correlation_score += abs(val)
features_corr.append(correlation_score)
# -
get_highest_values(features_corr, 20)
get_lowest_values(features_corr, 20)
# ## Correlation analysis between SVD features and labels
df = pd.read_csv(data_file, sep=';', header=None)
df.head()
# Compute the correlation matrix
corr = df.corr()
# +
# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=bool)
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(30, 20))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, cmap=cmap,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
savefig('corr_with_label.png')
# +
features_corr = []
for id_row, row in enumerate(corr):
for id_col, val in enumerate(corr[row]):
if id_col == 0 and id_row != 0:
features_corr.append(abs(val))
# -
get_highest_values(features_corr, 20)
get_lowest_values(features_corr, 20)
| analysis/corr_analysys.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Hyperedge bundling demonstration notebook
#
# This notebook demonstrates the use of the hyperedge bundling methods described in the manuscript *Hyperedge bundling: A practical solution to spurious interactions in MEG/EEG source connectivity analyses*, <NAME>. et al.
#
# It uses the functions defined and provided in edge_clustering.py. See the README for necessary python packages.
# +
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
# %matplotlib inline
# -
# The edge_clustering.py provides 4 functions.
from edge_clustering import compute_edge_similarity_matrix, cluster_edges_UPGMA
from edge_clustering import cluster_edges_Louvain, sort_edges_by_cluster_size
# The first step is to import the necessary data files.
#
# The first file is the parcel infidelity matrix which quantifies the amount of mixing present in source-reconstructed data between each parcel pair.
parcel_infidelity_matrix = np.genfromtxt('parcel_infidelity_matrix.csv',delimiter=';')
# The second file is a thresholded parcel-parcel adjacency matrix representing the to-be-bundled significant edges in the relevant parcel space.
matrix_file_name = 'IM_example_1.csv'
significant_edges_matrix = np.genfromtxt(matrix_file_name,delimiter=';')
# The next step is to use these two matrices to compute the edge similarity matrix (see Section 2.6 of the manuscript). This can take a while, depending on the number of significant edges in the significant_edges_matrix.
# +
out = compute_edge_similarity_matrix(parcel_infidelity_matrix,
significant_edges_matrix)
edge_similarity_matrix = out[0]
thresholded_similarity_matrix = out[1]
# -
# The code outputs both the raw edge similarity matrix and the thresholded edge similarity matrix. In the thresholded edge similarity matrix, the matrix value was set to zero when the correlation coefficient between the two edges' mixing profiles did not reach significance for an alpha of 0.05. In line with the simulations presented in the manuscript, we will use the thresholded matrix to identify the clusters.
#
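# The thresholding step described above can be sketched as follows. This is a hypothetical illustration (not the compute_edge_similarity_matrix internals): a Pearson r computed from n samples is converted to a two-sided p-value via the t distribution and zeroed when it fails the alpha = 0.05 test.

```python
import numpy as np
from scipy import stats

def threshold_by_significance(r_matrix, n, alpha=0.05):
    # p-value of a Pearson r via the t distribution with n - 2 d.o.f.
    r = np.asarray(r_matrix, dtype=float)
    with np.errstate(divide='ignore', invalid='ignore'):
        t = r * np.sqrt((n - 2) / (1.0 - r ** 2))
    p = 2.0 * stats.t.sf(np.abs(t), df=n - 2)
    out = r.copy()
    out[p >= alpha] = 0.0  # not significant -> set to zero
    return out
```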
# We can now run the clustering algorithm using either UPGMA hierarchical clustering or Louvain Community detection.
#
# ## UPGMA hierarchical clustering
#
# We first define the total number of clusters we want to bundle our edges into and then run the clustering algorithm using the cluster_edges_UPGMA() function.
# +
nb_clusters = 25
# Cluster edges
out = cluster_edges_UPGMA(edge_similarity_matrix = thresholded_similarity_matrix,
nb_clusters = nb_clusters)
clustered_tree_UPMGA = out[0]
cluster_assignments_UPMGA = out[1]
# -
# We can plot a heatmap of the clustered tree.
cluster_plot = sns.clustermap(1-thresholded_similarity_matrix,
row_linkage = clustered_tree_UPMGA ,
col_linkage = clustered_tree_UPMGA)
# The cluster_assignments variable contains the cluster assignment for each edge and can be used as appropriate to assign each edge to a cluster, for example for visualization purposes.
cluster_assignments_UPMGA
# We can also visualize the edge similarity matrix with edges sorted by clusters of increasing sizes.
cluster_sizes_UPMGA, sorted_edge_matrix_UPMGA = sort_edges_by_cluster_size(cluster_assignments_UPMGA, thresholded_similarity_matrix)
# For each edge cluster, the cluster_sizes_UPMGA array contains its initial rank-number, the number of edges it contains, and its rank when clusters are sorted by increasing cluster size.
cluster_sizes_UPMGA
# The sorted_edge_matrix is similar to thresholded_similarity_matrix but with its edges sorted by cluster.
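# The reordering itself can be sketched in a few lines of NumPy. This is a hypothetical illustration of such a sort, not the sort_edges_by_cluster_size implementation:

```python
import numpy as np

def sort_matrix_by_cluster(assignments, sim):
    # Order clusters by increasing size, then reindex both axes so that
    # edges belonging to the same cluster sit next to each other.
    assignments = np.asarray(assignments)
    labels, counts = np.unique(assignments, return_counts=True)
    order = np.concatenate([np.flatnonzero(assignments == lab)
                            for lab in labels[np.argsort(counts)]])
    return sim[np.ix_(order, order)]
```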
cmap = sns.cubehelix_palette(n_colors=50, start=0, rot=0, light=0, dark=1, as_cmap=True)
plt.figure(2)
plot = sns.heatmap(sorted_edge_matrix_UPMGA, cmap=cmap,
xticklabels= False,
yticklabels = False)
plot.invert_yaxis()
# ## Louvain community detection
#
# We now use the same thresholded_similarity_matrix to cluster the edges using the Louvain community detection algorithm implemented in the community and networkx packages.
#
# Here the parameter is Louvain_resolution, which affects the size and number of communities found. A higher resolution (smaller Louvain_resolution parameter) produces a larger number of smaller clusters. Louvain_resolution = 1 is the standard Louvain method.
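# The effect of the resolution parameter can be seen on a toy graph. A hypothetical sketch using networkx's own Louvain routine (which is not the community-package implementation used by cluster_edges_Louvain):

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Two triangles joined by a single bridge edge.
G = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])

parts = louvain_communities(G, resolution=1.0, seed=0)
print(parts)
```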
# +
Louvain_resolution = 0.45
out = cluster_edges_Louvain(edge_similarity_matrix = thresholded_similarity_matrix,
Louvain_resolution = Louvain_resolution)
community_assignment = out[0]
dendogram_Louvain = out[1]
# -
# community_assignment contains the cluster assignment for each edge and can be used as appropriate to assign each edge to a cluster, for example for visualization purposes.
# +
cluster_sizes_Louvain, sorted_edge_matrix_Louvain = sort_edges_by_cluster_size(community_assignment,
thresholded_similarity_matrix)
cluster_sizes_Louvain
# -
# We can here also visualize the edge similarity matrix with edges sorted by clusters of increasing sizes.
cmap = sns.cubehelix_palette(n_colors=50, start=0, rot=0, light=0, dark=1, as_cmap=True)
plt.figure(2)
plot = sns.heatmap(sorted_edge_matrix_Louvain, cmap=cmap,
xticklabels= False,
yticklabels = False)
plot.invert_yaxis()
| hyperedge_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Take bedtools intersect output and make a nice table!
# +
import re
import numpy as np
import pandas as pd
import seaborn as sb
# %matplotlib notebook
# +
values = []
with open('.../output/phastcons_workflow/dm6_phastcons_intersect.txt') as f:
for line in f:
pattern = re.compile(r'\w*\t(\w*)\t(\w*)\t(FBgn\w*)\t(\S*)\t.*(chr\w*)\tFlyBase\sgene\t\w*\t\w*.*ID=(\w*);Name=(\w*).*\t(\S*)\t\w*')
match = pattern.match(line)
TF = match.group(3)
qval = float(match.group(4))
chrom = match.group(5)
start = match.group(1)
end= match.group(2)
symbol = match.group(7)
FBgn= match.group(6)
phastcon = float(match.group(8))
reorder = (TF, FBgn, symbol, chrom, start, end, qval, phastcon)
values.append(reorder)
#print(np.vstack(values[:5]))
df = pd.DataFrame(values, columns=['TF','FBgn','Symbol', 'Chrom', 'Start', 'End', 'q Value','Phastcons'])
df.head()
#df.to_csv('/Users/bergeric/data/bedtoolsoutput_df.txt', sep='\t', index=False)
# -
grp = df.groupby(['TF', 'FBgn', 'Symbol', 'Chrom', 'Start', 'End', 'q Value'])
meanframe = grp.mean()
meanframe.head()
sb.distplot(meanframe["Phastcons"])
| notebook/bedtools_output_table.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Serulab/BioinformaticsWithPython/blob/master/section1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="LF4PnoSwZnYM" colab_type="text"
# # Processing DNA Sequences in Batch
# + [markdown] id="u0XjAP5pY0cS" colab_type="text"
# # Convert multiple sequences in Genbank format into FASTA format.
# + id="-6RqPWqEhUZy" colab_type="code" outputId="8e0d694e-ff9e-45b1-f05c-05ee227fe363" colab={"base_uri": "https://localhost:8080/", "height": 50}
# !pip install biopython
# + id="wLwgPhiOU4vR" colab_type="code" outputId="adad29e6-ef56-427a-8dc0-af3dd794fbbb" colab={"base_uri": "https://localhost:8080/", "height": 34}
import requests
url = 'https://raw.githubusercontent.com/Serulab/Py4Bio/master/samples/casein.gb'
input_file_name = url.split('/')[-1]
response = requests.get(url)
open(input_file_name, 'wb').write(response.content)
# + id="R5HCC2mMaCho" colab_type="code" colab={}
from Bio import SeqIO
with open(input_file_name, 'r') as input_handle:
sequences = SeqIO.parse(input_handle, 'genbank')
with open('casein.fasta', 'w') as output_handle:
SeqIO.write(sequences, output_handle, 'fasta')
# + id="QoqwlYqJlxCW" colab_type="code" colab={}
with open("casein.fasta") as output_handle:
print(output_handle.read())
# + [markdown] id="JkcFgJkiYpXa" colab_type="text"
# # Split a FASTA file with multiple sequences into several files, one file per sequence.
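# Conceptually, each FASTA record starts with a '>' header line. A dependency-free sketch of the split (SeqIO.parse below handles this robustly, including edge cases):

```python
def split_fasta(text):
    # Each record starts with '>' at the beginning of a line.
    records = []
    for chunk in text.strip().split('\n>'):
        records.append('>' + chunk.lstrip('>'))
    return records

example = ">seq1\nACGT\n>seq2\nGGCC"
print(split_fasta(example))
```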
# + id="KPyhhI9MZ4hv" colab_type="code" colab={}
# Using casein.fasta file generated in previous code
with open("casein.fasta") as input_handle:
sequences = SeqIO.parse(input_handle, 'fasta')
for sequence in sequences:
SeqIO.write(sequence, sequence.name, "fasta")
# + id="PngHjgS9c2qb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="0ed27d75-c3f1-4168-cc11-660881c829c7"
# !ls
# + id="NXvF7XgYc6Ny" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 118} outputId="cb9a9101-d1ef-444f-8eaf-406c56addc27"
# !cat FJ429671.1
# + [markdown] id="XjkNKMBeY9Mg" colab_type="text"
# # Convert multiple sequences into a Python dictionary.
# + id="RMvbtFEWZ5f9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="50718275-0e8f-4f40-bcd2-5abffb4b0ef2"
record_dict = SeqIO.to_dict(SeqIO.parse("casein.fasta", "fasta"))
print(record_dict["FJ429671.1"]) # use any record ID
# + [markdown] id="iNxKxwp-Y9dL" colab_type="text"
# # Extracting sequences applying a filtering criteria like having a minimum length.
# + id="cQCsjR8PZ6LB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="2a8ca21b-9dcc-4e1a-d813-de6cc5a7827c"
# Using casein.fasta file generated in the first code
MIN_SEQ_LENGTH = 400
sequences = SeqIO.parse("casein.fasta", 'fasta')
filtered = []
for total, sequence in enumerate(sequences, start=1):
if len(sequence.seq) > MIN_SEQ_LENGTH:
filtered.append(sequence)
print("total sequences: {}".format(total))
print("after filter sequences: {}".format(len(filtered)))
| section1.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # EEP/IAS 118 - Introductory Applied Econometrics
# ## Problem Set 1, Spring 2022, Villas-Boas
# #### <span style="text-decoration: underline">Due in Gradescope – see deadline due time in Gradescope – Feb 3, 2022</span>
#
#
# Submit materials (all handwritten/typed answers, Excel workbooks, and R reports) as one combined pdf on [Gradescope](https://www.gradescope.com/courses/353120). All students currently on the EEP118 bCourses have been added using the bCourses email. If you do not have access to the Gradescope course, please reach out to the GSI's.
#
# For help combining pdf's, see the "Tips for Saving/Combining PDFs" announcement on bCourses.
# ## Exercise 1 (Excel)
# **Relationship between Housing Prices and Violent Crime in 10 US Cities.**
#
# We will use September 2021 data from Zumper on one-bedroom apartment prices and 2019 data from the FBI on crime for 10 US cities. The original data has 100 cities. In this first problem set we will only use a subset of the cities.
# This exercise is to be completed using Excel. We will establish a simple linear relationship between *housing prices and crime* in a subset of cities.
#
# *Note: in economics, log always refers to the natural log, ln().*
#
# <center><b> Table 1: Log of Housing Price and Log of Violent Crimes per 1,000 People, Sample 1 </b></center>
#
# |CityName | log of Housing Price | log of Violent Crimes per 1,000 People |
# |-----------|---------------|---------------|
# |sample 1 | log of Y | log of X |
# |Anchorage |6.99393298 |1.27564209
# |Chandler |7.27931884| -0.5225609
# |Gilbert |7.41457288 |-1.4064971
# |Glendale |7.02997291| -0.1473406
# |Mesa |7.05617528| 0.66936665
# |Phoenix |7.138867 |2.46835374
# |Scottsdale |7.52294092 |-0.8794768
# |Tucson |6.73340189 |1.32840038
# |Anaheim |7.50659178 |0.11332869
# |Bakersfield |6.85646198| 0.5687171
#
# (a) Use Excel to create a scatter plot of these observations. Don't forget to (1) label the axes and their units, and (2) title your graph. **You should use the tables provided here for these calculations, not the actual observations from the .csv data file.**
# (b) This question has **two parts**.
#
# First: Estimate the linear relationship between the log of Housing Price (log(Y)) and the log of violent crimes per 1,000 people (log(X)) by OLS, showing all intermediate
# calculations as we saw in the lecture 3 slides (use Excel to create the table and show all the steps).
#
# Second: interpret the value of the estimated parameters $\beta_0$ and $\beta_1$.
#
# $$ \widehat{log (Y_i)} = \hat{\beta_0} + \hat{\beta_1} log(X_i) \ \ \ \ \ \ \text{i = \{first 10 cities\}}$$
# ➡️ Type your answer to _Exercise 1 (b) Second Part_ here (replacing this text)
# (c) In your table, compute the fitted value and the residual for each observation, and verify that the residuals (approximately) sum to 0.
# (d) According to the estimated relation, what is the predicted $\hat{Y}$ (**level**, not log) for a city with a log crime of -2? (Pay attention to units)
# ➡️ Type your answer to _Exercise 1 (d)_ here (replacing this text)
# (e) How much of the variation in per capita log home prices in these 10 cities is explained by the log of violent crimes per 1,000 people?
# ➡️ Type your answer to _Exercise 1 (e)_ here (replacing this text)
# (f) Repeat exercise (b) for one additional set of 10 cities below. **You should use Table 2 provided
# above for these calculations, not the actual observations from the .csv data file.**
# <center><b> Table 2: Log of Housing Price and Log of Violent Crimes per 1,000 People, Sample 2</b></center>
#
# |CityName | log of Housing Price | log of Violent Crimes per 1,000 People |
# |-----------|---------------|---------------|
# |sample 1 | log of Y | log of X |
# |Long Beach |7.39633529 |0.86246793
# |Los Angeles| 7.64969262 |3.38099467
# |Oakland| 7.60090246 |1.70837786
# |Sacramento| 7.26542972 |1.1703126
# |San Diego |7.64969262 |1.65153909
# |San Francisco |7.9373747 |1.78052999
# |San Jose |7.69621264 |1.5171033
# |Santa Ana |7.50108212 |0.37363038
# |Aurora |7.09007684 |1.02926221
# |Colorado Springs |6.99393298 |1.03175998
#
#
#
# ➡️ Type your answer to _Exercise 1 (f) Second Part_ here (replacing this text)
# (g) Do your estimates of $\hat{\beta_0}$ and $\hat{\beta_1}$ change between Tables 1, and 2? Why?
#
# ➡️ Type your answer to _Exercise 1 (g)_ here (replacing this text)
# (h) Save a copy of your Excel workbook as a pdf (OLS tables and scatter plot) to combine with the later work.
# ## Exercise 2 (Functional Forms)
# (a) Suppose you estimate alternative specifications as given below using data from 41 cities:
#
# $$ \text{A linear relationship:} ~~~~\hat{Y_i} = 1348.3 + 18.04 X_i$$
# $$ \text{A linear-log relationship:} ~~~~\hat{Y_i} = 1346.8 + 71.45 \log(X)_i$$
# $$ \text{A log-log relationship:} ~~~~\widehat{\log(Y)}_i = 7.17 + 0.04 \log(X)_i$$
#
# Note that it is convention to always use the natural log.
#
# i. Interpret the parameter on violent crimes per 1,000 people X (or log of violent crimes per 1,000 people log(X)) in each of these equations.
#
# ii. What is the predicted one bedroom rental price in dollars for a city with a crime per 1 thousand equal to 2 in each of these equations?
#
# ➡️ Type your answer to _Exercise 2 (a) i_ here (replacing this text)
# ➡️ Type your answer to _Exercise 2 (a) ii_ here (replacing this text)
# ## Exercise 3. Importing Data into R and Basic R First Commands
#
# For the purposes of this class, we will be primarily Berkeley's _Datahub_ to conduct our analysis remotely using these notebooks.
#
# If instead you already have an installation of R/RStudio on your personal computer and prefer to work offline, you can download the data for this assignment from bCourses (Make sure to install/update all packages mentioned in the problem sets in order to prevent issues regarding deprecated or outdated packages). The data files can be accessed directly through $Datahub$ and do not require you to install anything on your computer. This exercise is designed to get you familiar with accessing the service, loading data, and obtaining summary statistics. To start off, we're going to use Jupyter notebooks to help familiarize you with some R commands.
#
# *Note: [Coding Bootcamp Part 1](https://bcourses.berkeley.edu/courses/1510635/external_tools/78985) covers all necessary R methods.*
# (a) To access the Jupyter notebook for this problem set on Datahub, click the following link:
#
# *Skip! You are already here - nice work.*
#
# (b) Load the data file *dataPset1_2022.csv* into R (since this is a ".csv" file, you should use the `read.csv()` function).
# +
# insert code here
# -
# (c) Provide basic summary statistics on the log of home price (*logPrice*) in the dataframe. Use the `summary()` command. This command is part of base R, so you do not need to load any packages before using it. What is the median value of log housing price in cities in the sample?
# +
# insert code here
# -
# ➡️ Type your written answer to _Exercise 3 (c)_ here (replacing this text)
# (d) Next, generate custom summary statistics on the Log of violent crime (*logCrime*) using the `summarise()` command provided by ***dplyr***. You will need to call the ***tidyverse*** package with the `library()` command to use it (***tidyverse*** is a collection of packages designed for data science. It includes ***dplyr*** and several other packages we'll use this term).
# +
# insert code here
# -
# (e) Create a scatter plot of the price and crime data in levels. Use
#
# `figureAsked <- plot(my_data$violentcrime2019, my_data$pricesept2021,
# main = "Scatter of Y on X",
# xlab = "X = Crime (per 100 thousand People)",
# ylab = "Y = House Price (in US dollars)")`
#
# *Note:* Make sure to run the code cell to print the scatterplot in the notebook by calling `figureAsked` in the code cell below.
# +
# insert code here
# -
# (f) Save a pdf to your computer (note: this can be done by going to **File > Print Preview** in the menu and choosing to "print" the new tab as a pdf with Ctrl + P) and combine it with your excel workbook from Exercise 1 for submission on Gradescope.
#
# **Note: Make sure to check your pdf before uploading to ensure all the code cells are run and all text/output is visible, and combine with the requested Excel sheets before uploading to Gradescope.**
| Problem Sets/ProblemSet1/ProblemSet1_2022.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/vishal-verma27/Create-Python-Flask-Application-Using-Ngrok/blob/main/Flask_App_Tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="Fex18XWtJ50A"
# **Import Necessary Libraries**
# + colab={"base_uri": "https://localhost:8080/"} id="t1HlxLaRJ-uv" outputId="08b10be8-5c17-4b62-dba6-a74700e937c1"
# !pip install flask --quiet
# !pip install flask-ngrok --quiet
print("Completed!")
# + [markdown] id="KLolLhMAKaJ2"
# **Setup and Installation of Ngrok**
# + colab={"base_uri": "https://localhost:8080/"} id="u53e-GOtKepR" outputId="e869e3d7-9ce5-4d9e-9be4-a16010165dfc"
# install ngrok linux version using the following command or you can get the
# latest version from its official website- https://dashboard.ngrok.com/get-started/setup
# !wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.tgz
# + colab={"base_uri": "https://localhost:8080/"} id="GhRd4dpPKtu7" outputId="c02576e0-5e3d-436e-effa-24e56392819c"
# extract the downloaded file using the following command
# !tar -xvf /content/ngrok-stable-linux-amd64.tgz
# + [markdown] id="71E1yzcHLWpL"
# **The next step is to get your AuthToken from ngrok using this link-** https://dashboard.ngrok.com/get-started/your-authtoken
# + colab={"base_uri": "https://localhost:8080/"} id="86Zuo90scOMV" outputId="8<PASSWORD>c7c-dbdd-455a-e46d-de<PASSWORD>bd5"
# paste your AuthToken here and execute this command
# !./ngrok authtoken <PASSWORD>JhKZMae_3Zabr2iqkU9AUcZ7CrRTP
# + [markdown] id="zSa5cvibKIUW"
# **Creating A Simple Hello World Application Using Python Flask Module**
# + colab={"base_uri": "https://localhost:8080/"} id="XLlugcOeKlWH" outputId="2ce871b8-1712-4ff2-8568-f45b92b7259a"
# import Flask from flask module
from flask import Flask
# import run_with_ngrok from flask_ngrok to run the app using ngrok
from flask_ngrok import run_with_ngrok
app = Flask(__name__) #app name
run_with_ngrok(app)
@app.route("/")
def hello():
return "Hello Friends! from Pykit.org. Thank you! for reading this article."
if __name__ == "__main__":
app.run()
| Flask_App_Tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: intro_to_pytorch
# language: python
# name: intro_to_pytorch
# ---
# # 6. Summary
#
# During our workshop we learned a lot about PyTorch:
#
# 1. The PyTorch universe
# 2. Tensors
# 3. Neural networks module
# 4. Training and evaluation in PyTorch
# 5. Saving and loading custom models
# 6. Working with datasets
# #### We want to ask you one last question...
#
# Please fill in the survey and tell us what you liked and what we can do better next time.
# +
expectations = '?'
reality = '?'
assert expectations == reality
| 6_Summary/6_Summary.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Functions
#
# Functions play a crucial role in programming, since they let larger parts of a program be bundled, or *encapsulated*. Programming languages usually come with a set of predefined functions, but programmers can also define new functions themselves.
# + [markdown] slideshow={"slide_type": "subslide"}
# Think about it! What advantages do functions offer?
#
# - ?
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Function definition
#
# In Python, a function is defined as follows:
# + slideshow={"slide_type": "-"}
def rechne(): # function definition
    print("What should I calculate?")
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Function call
#
# Now that it has been defined, the function can be called:
# + slideshow={"slide_type": "-"}
rechne() # function call
# + [markdown] slideshow={"slide_type": "skip"}
# This output would have been easy to get anyway, but our function has not yet shown its potential. Let us extend it so it can accept **arguments**. For that, a **parameter list** must be declared in the function's **signature** - that is its first line. So we define the function anew:
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Functions with parameters
# + slideshow={"slide_type": "-"}
def rechne(zahl1, zahl2):
    print(zahl1 + zahl2)
# + [markdown] slideshow={"slide_type": "subslide"}
# Calling the function now requires passing arguments for its parameters:
# -
rechne(3,5)
# + [markdown] slideshow={"slide_type": "subslide"}
# **Question:** What happens if you pass no arguments, or leave one out?
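# A minimal check (redefining `rechne` here so the snippet is self-contained): calling the function with too few arguments raises a `TypeError`.

```python
def rechne(zahl1, zahl2):
    print(zahl1 + zahl2)

try:
    rechne(3)  # the second argument is missing
except TypeError as err:
    print(err)  # Python names the missing positional argument 'zahl2'
```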
# + [markdown] slideshow={"slide_type": "skip"}
# Often, what the function produces should not be printed right away; instead, the result of its work should be stored for later. So we define the function once more and have it return a value:
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Functions with return values
# -
def rechne(zahl1, zahl2):
    return zahl1 + zahl2
# + [markdown] slideshow={"slide_type": "subslide"}
# Now we call the function once more, as usual:
# -
rechne(3,5)
# + [markdown] slideshow={"slide_type": "subslide"}
# It does what we asked: it returns the computed value. But how can we store that value for the longer term? Well, the concept of storing values is familiar - **variables** help here. So let us do the following:
# -
ergebnis = rechne(3,5)
# + [markdown] slideshow={"slide_type": "subslide"}
# The variable `ergebnis` now stores the result for us, and we can access it later:
# -
print(ergebnis)
# + [markdown] slideshow={"slide_type": "skip"}
# ## Closing remarks
#
# Functions can be very complex; here we have met the concepts through simple examples. The more programming experience you gather and the larger your programs become, the more important functions get.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercises
# -
# 1. Think about it! Which features of well-known information-technology systems are probably encapsulated in functions?
# 1. Define a function that inserts a given string at the three dots in the sentence "... mag ich am liebsten" and then prints the sentence.
# 1. Define a function that takes a sequence of names and builds an HTML list from it. The function should return the list with `return`.
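# A possible sketch for exercise 2 (the function name `satz_ausgeben` is only an assumption; the task does not prescribe one):

```python
def satz_ausgeben(wort):
    # insert the given string at the three dots and print the sentence
    satz = wort + " mag ich am liebsten"
    print(satz)
    return satz  # returning the sentence as well makes the function easy to test

satz_ausgeben("Pizza")  # prints "Pizza mag ich am liebsten"
```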
# + [markdown] slideshow={"slide_type": "skip"}
# ### Exercise 1
# + slideshow={"slide_type": "skip"}
# Your solution
# + [markdown] slideshow={"slide_type": "skip"}
# ### Exercise 2
# + slideshow={"slide_type": "skip"}
# Your solution
# + [markdown] slideshow={"slide_type": "skip"}
# ## References
#
# You can find further details on functions in the [Python3 tutorial](https://www.python-kurs.eu/python3_funktionen.php).
| 07-2017-12-01/funktionen.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="LbHoZR7c6pkH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 118} outputId="a924a37f-09d6-42de-cc18-f269149d2ed1"
from google.colab import drive
drive.mount('/content/drive')
# + id="n4vCCf9lAmJK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="52f85776-a3e2-41f3-fe87-90ed1c084a37"
# !ls
# + deletable=true editable=true id="DZosbvME6TAN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 622} outputId="e1258730-7bb3-4150-ab9a-3aa8ce37dd24"
import pandas as mypd
#myData=mypd.read_csv(".\datasets\Credit_Card_Expenses.csv")
myData=mypd.read_csv("drive/My Drive/datasets/Credit_Card_Expenses.csv")
myData
# + deletable=true editable=true id="newTw_zq6TAV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 358} outputId="afd4bb5e-9119-4aff-d876-949f3fb33287"
cc= myData.CC_Expenses
cc
# + id="Dih-6adXA8xL" colab_type="code" colab={}
#myData['Month ']
# + deletable=true editable=true id="pUZF25nW6TAd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="e083d281-b47d-41db-ade5-2d15374ec0bf"
cc.mean()
# + deletable=true editable=true id="aYtkwB_A6TAm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="6a7e5c4e-e3c0-479c-b606-02e6ddf2730a"
cc.median()
# + deletable=true editable=true id="cdXzpDK96TAy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="de175825-47a2-4670-908d-f24229c67bbd"
cc.mode()
# + deletable=true editable=true id="sbaJn_gc6TA9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="6afa947e-b05b-45bd-922b-d53a1a084e2c"
cc.std()
# + deletable=true editable=true id="VLXPUcOz6TBJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="e78bb445-d600-429b-8fe8-d21e9aa9def4"
cc.var()
# + deletable=true editable=true id="TliwhwZ56TBU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="39fa64e2-3cb9-447a-9247-d4c2880beec9"
cc.min()
# + deletable=true editable=true id="n_TphPfo6TBg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="eb4608be-fa23-4177-ca61-0d46ce3ee2a5"
cc.max()
# + deletable=true editable=true id="Pdqfqe5c6TBw" colab_type="code" colab={} outputId="1c037126-1e4f-4ce5-e27e-519f9537e4d5"
cc.quantile(.9)  # 90th percentile
# + deletable=true editable=true id="M9-vQUVe6TB7" colab_type="code" colab={} outputId="3b63d34c-eca5-4ef2-9df0-8a64c491c383"
cc.skew()
# + deletable=true editable=true id="7IcZqA3i6TCb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 163} outputId="1e4bc28d-93ec-4f8a-acb7-758c7fbf260a"
cc.describe()
# + deletable=true editable=true id="X6-BKaLp6TCn" colab_type="code" colab={} outputId="f4fd0f64-2f79-4aa9-e6ec-0a71b3b12673"
import math as mymath
mymath.sqrt(49)
# + deletable=true editable=true id="UNOuWSLY6TC0" colab_type="code" colab={} outputId="134cdce0-bbe7-4364-b887-0e464c2e4e4f"
import matplotlib.pyplot as myplot
myplot.hist(cc)
myplot.show()
# + deletable=true editable=true id="qpQzGxbR6TDK" colab_type="code" colab={} outputId="5f7c89ee-c1fb-4c01-ec98-42f744cd17f8"
#Boxplot
myplot.boxplot(cc)
myplot.show()
# + deletable=true editable=true id="LM5kIey96TDV" colab_type="code" colab={}
| 1_Credit_CardExpenses.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src='./img/EU-Copernicus-EUM_3Logos.png' alt='Logo EU Copernicus EUMETSAT' align='right' width='40%'></img>
# <br>
# <a href="./00_index.ipynb"><< Index </a><br>
# <a href="./03_sentinel3_OLCI_L1_load_browse.ipynb"><< 03 - Sentinel-3 OLCI Level-1B - Load and browse </a><span style="float:right;"><a href="./05_sentinel3_NRT_SLSTR_AOD_load_browse.ipynb">05 - Sentinel-3 NRT SLSTR AOD - Load and browse >></a></span>
# <br>
# <div class="alert alert-block alert-warning">
# <b>LOAD, BROWSE AND VISUALIZE</b></div>
# # Sentinel-3 Near Real Time SLSTR Fire Radiative Power (FRP)
# ## Example Siberian fires in June 2020
# The following workflow is based on an example of `Sentinel-3 Near Real Time SLSTR FRP` data on 27 June 2020. As a comparison, below you see the Sentinel-3 OLCI Red Green Blue composites for the same day, which clearly show the smoke plumes over Siberia resulting from the fires.
#
# <div style='text-align:center;'>
# <figure><img src='./img/s3_olci_0727.png' width='80%'/>
# <figcaption><i>RGB composites of Sentinel-3 OLCI Level 1 data on 27 June 2020</i></figcaption>
# </figure>
# </div>
#
# <hr>
# ### Outline
# * [1 - Load Sentinel-3 NRT SLSTR FRP data](#load_s3_frp)
# * [2 - Load, mask and regrid FRP computed from MWIR channel (3.7 um)](#load_mwir)
# * [3 - Load, mask and regrid FRP computed from SWIR channel (2.25 um)](#load_swir)
# * [4 - Load, mask and regrid FRP computed from SWIR channel (2.25 um) with SAA filter applied](#load_swir_nosaa)
# <hr>
# #### Load required libraries
# +
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.colors as colors
import matplotlib.cm as cm
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
# -
# #### Load helper functions
from ipynb.fs.full.functions import generate_masked_array, visualize_s3_frp, slstr_frp_gridding
# <hr>
# ## <a id='load_s3_frp'></a>Load Sentinel-3 Near Real Time SLSTR FRP data
# Sentinel-3 Near Real Time SLSTR FRP data are disseminated in `netCDF` format. The first step is to load the data file with xarray's `open_dataset()` function.
#
# <br>
# ### Load `S3 NRT SLSTR FRP` data with xarray's `open_dataset()` function
# Once the data file is loaded, you see that it has three dimensions: `columns`, `fires` and `rows`. The data and additional information, such as quality flags or latitude and longitude information, are stored as data variables.
#
# There are three variables of interest:
# - `FRP_MWIR` - Fire Radiative Power computed from MWIR channel (3.7 um) [MW]
# - `FRP_SWIR` - Fire Radiative Power computed from SWIR channel (2.25 um) [MW]
# - `FLAG_SWIR_SAA` - Flag values to filter out South Atlantic Anomalies (SAA) & other transient / spurious events, only applicable to FRP SWIR
frp_dir = '../eodata/sentinel3/slstr/2020/06/27/'
frp_xr = xr.open_dataset(frp_dir+'FRP_in_Siberia_202000627.nc')
frp_xr
# <br>
# ### Load `latitude` and `longitude` information
# You can already load the `latitude` and `longitude` information, which will be required for the regridding process.
# +
lat_frp = frp_xr['latitude']
lat_frp
lon_frp = frp_xr['longitude']
lat_frp.data.max()
# -
# <br>
# ### Define variables for `plotting` and `gridding`
# Let us also define some variables for `plotting` and the `gridding` process. For example the sampling size of the gridded FRP values or the geographical extent.
# +
sampling_lat_FRP_grid = 0.05 # Sampling for gridded FRP values & difference stats computation
sampling_lon_FRP_grid = 0.05 # Sampling for gridded FRP values & difference stats computation
FRP_plot_max_grid = 40. # Max Integrated FRP value, for plots
lat_min = 50. # Minimum latitude for mapping plot [deg N]
lat_max = 72. # Maximum latitude for mapping plot [deg N]
lon_min = 120. # Minimum longitude for mapping plot [deg E]
lon_max = 170. # Maximum longitude for mapping plot [deg E]
# -
# <br>
# Now, let us go through the three different variables (`MWIR`, `SWIR` and `SWIR with SAA filtered out`) and let us load, mask, regrid and visualize them.
# <br>
# ## <a id='load_mwir'></a>Load, mask and regrid `FRP computed from MWIR channel (3.7 um)`
# The first step is to load the `FRP_MWIR` data variable from the loaded `netCDF` file. You see that the variable contains 983 fire entries.
frp_mwir = frp_xr['FRP_MWIR']
frp_mwir
# <br>
# The next step is to extract (mask) only the valid FRP pixels; valid pixels have values different from -1. You can use the function [generate_masked_array](./functions.ipynb#generate_masked_array) to extract them.
masked_frp_mwir = generate_masked_array(frp_mwir, frp_mwir, -1.,operator='!=', drop=True)
masked_frp_mwir
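# The core masking idea behind `generate_masked_array` can be sketched with plain numpy (the values below are made up; the real helper works on xarray objects and supports several comparison operators):

```python
import numpy as np

frp = np.array([-1., 12.5, -1., 3.2])  # -1. marks invalid FRP pixels
valid = frp[frp != -1.]                # keep only the valid pixels
print(valid)                           # [12.5  3.2]
```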
# <br>
# Let us retrieve the number of hotspots / fires in total and per category.
# +
n_fire_tot = len(lat_frp[:])
n_fire_MWIR = len(masked_frp_mwir.to_masked_array().compressed())
n_fire_tot, n_fire_MWIR
# -
# <br>
# ### Generate a gridded FRP array
# Let us compute the gridded FRP information. The gridded FRP is the integration (sum) of FRP within a grid cell.
# You can define the function [slstr_frp_gridding](./functions.ipynb#slstr_frp_gridding) and then reuse it for other data variables.
def slstr_frp_gridding(parameter_array, lat_frp, lon_frp, parameter, lat_min, lat_max, lon_min, lon_max, sampling_lat, sampling_lon, n_fire, **kwargs):
    n_lat = int((np.float32(lat_max) - np.float32(lat_min)) / sampling_lat) + 1 # Number of rows per latitude sampling
    n_lon = int((np.float32(lon_max) - np.float32(lon_min)) / sampling_lon) + 1 # Number of lines per longitude sampling
    slstr_frp_gridded = np.zeros([n_lat, n_lon], dtype='float32') - 9999.
    lat_grid = np.zeros([n_lat, n_lon], dtype='float32') - 9999.
    lon_grid = np.zeros([n_lat, n_lon], dtype='float32') - 9999.
    if (n_fire >= 0):
        # Loop on i_lat: begins
        for i_lat in range(n_lat):
            # Loop on i_lon: begins
            for i_lon in range(n_lon):
                lat_grid[i_lat, i_lon] = lat_min + np.float32(i_lat) * sampling_lat + sampling_lat / 2.
                lon_grid[i_lat, i_lon] = lon_min + np.float32(i_lon) * sampling_lon + sampling_lon / 2.
                # Gridded SLSTR FRP MWIR Night - All days
                if (parameter == 'swir_nosaa'):
                    FLAG_FRP_SWIR_SAA_nc = kwargs.get('flag', None)
                    mask_grid = np.where(
                        (lat_frp[:] >= lat_min + np.float32(i_lat) * sampling_lat) &
                        (lat_frp[:] < lat_min + np.float32(i_lat+1) * sampling_lat) &
                        (lon_frp[:] >= lon_min + np.float32(i_lon) * sampling_lon) &
                        (lon_frp[:] < lon_min + np.float32(i_lon+1) * sampling_lon) &
                        (parameter_array[:] != -1.) & (FLAG_FRP_SWIR_SAA_nc[:] == 0), False, True)
                else:
                    mask_grid = np.where(
                        (lat_frp[:] >= lat_min + np.float32(i_lat) * sampling_lat) &
                        (lat_frp[:] < lat_min + np.float32(i_lat+1) * sampling_lat) &
                        (lon_frp[:] >= lon_min + np.float32(i_lon) * sampling_lon) &
                        (lon_frp[:] < lon_min + np.float32(i_lon+1) * sampling_lon) &
                        (parameter_array[:] != -1.), False, True)
                masked_slstr_frp_grid = np.ma.array(parameter_array[:], mask=mask_grid)
                if len(masked_slstr_frp_grid.compressed()) != 0:
                    slstr_frp_gridded[i_lat, i_lon] = np.sum(masked_slstr_frp_grid.compressed())
    return slstr_frp_gridded, lat_grid, lon_grid
# <br>
# Apply the function `slstr_frp_gridding` to the `frp_mwir` data array.
# +
FRP_MWIR_grid, lat_grid, lon_grid = slstr_frp_gridding(frp_mwir,
lat_frp,
lon_frp,
'mwir',
lat_min, lat_max, lon_min, lon_max,
sampling_lat_FRP_grid,
sampling_lon_FRP_grid,
n_fire_MWIR)
FRP_MWIR_grid, lat_grid, lon_grid
# -
# <br>
# Mask out the invalid pixels for plotting. You can use numpy's function `np.ma.masked_array()` for this.
mask_valid = np.where(FRP_MWIR_grid[:,:] != -9999., False, True)
D_mwir = np.ma.masked_array(FRP_MWIR_grid[:,:], mask=mask_valid)
D_mwir
# <br>
# Calculate some statistics and add them to a string that can be integrated in the final plot.
# +
textstr_1 = 'Total number 1km hot-spots = ' + str(n_fire_MWIR)
FRP_sum = np.sum(masked_frp_mwir.to_masked_array().compressed())
FRP_mean = np.mean(masked_frp_mwir.to_masked_array().compressed())
FRP_std = np.std(masked_frp_mwir.to_masked_array().compressed())
FRP_min = np.min(masked_frp_mwir.to_masked_array().compressed())
FRP_max = np.max(masked_frp_mwir.to_masked_array().compressed())
FRP_sum_str = '%.1f' % FRP_sum
FRP_mean_str = '%.1f' % FRP_mean
FRP_std_str = '%.1f' % FRP_std
FRP_min_str = '%.1f' % FRP_min
FRP_max_str = '%.1f' % FRP_max
textstr_2 = 'FRP 1 km: \n Total = '+FRP_sum_str+' [MW] \n Avg. = '+ FRP_mean_str + ' [MW] \n Min = ' + FRP_min_str + ' [MW] \n Max = ' + FRP_max_str + ' [MW]'
# -
# <br>
# ### Visualize the masked data array with matplotlib's `pcolormesh()` function
# You can define a function for the plotting code, so it can easily be re-used to plot other FRP data variables. Let us call the function [visualize_s3_frp](./functions.ipynb#visualize_s3_frp).
#
# The function takes the following keyword arguments:
# * `data`: DataArray with fire hotspots
# * `lat`: Latitude information
# * `lon`: Longitude information
# * `unit`: Unit of the data
# * `longname`: Long name of the data
# * `textstr_1`: Text that contains the total number of 1km hot-spots
# * `textstr_2`: Text that contains summary statistics of the data
# * `vmax`: Maximum value to be visualized
# +
def visualize_s3_frp(data, lat, lon, unit, longname, textstr_1, textstr_2, vmax):
    fig = plt.figure(figsize=(20, 15))
    ax = plt.axes(projection=ccrs.PlateCarree())
    img = plt.pcolormesh(lon, lat, data,
                         cmap=cm.autumn_r, transform=ccrs.PlateCarree(),
                         vmin=0,
                         vmax=vmax)
    ax.add_feature(cfeature.BORDERS, edgecolor='black', linewidth=1)
    ax.add_feature(cfeature.COASTLINE, edgecolor='black', linewidth=1)
    gl = ax.gridlines(draw_labels=True, linestyle='--')
    gl.bottom_labels = False
    gl.right_labels = False
    gl.xformatter = LONGITUDE_FORMATTER
    gl.yformatter = LATITUDE_FORMATTER
    gl.xlabel_style = {'size': 14}
    gl.ylabel_style = {'size': 14}
    cbar = fig.colorbar(img, ax=ax, orientation='horizontal', fraction=0.029, pad=0.025)
    cbar.set_label(unit, fontsize=16)
    cbar.ax.tick_params(labelsize=14)
    ax.set_title(longname, fontsize=20, pad=40.0)
    props = dict(boxstyle='square', facecolor='white', alpha=0.5)
    # place a text box on the right side of the plot
    ax.text(1.1, 0.9, textstr_1, transform=ax.transAxes, fontsize=16,
            verticalalignment='top', bbox=props)
    # place a second text box just below the first one
    ax.text(1.1, 0.85, textstr_2, transform=ax.transAxes, fontsize=16,
            verticalalignment='top', bbox=props)
    plt.show()
# -
# <br>
# Now, you can apply the function `visualize_s3_frp` and plot the `FRP computed from MWIR channel` data. Additionally, you can take information such as `longname` or `units` from the data variable attributes.
# +
long_name = frp_mwir.long_name
unit = frp_mwir.units
vmax = FRP_plot_max_grid
visualize_s3_frp(D_mwir[:,:],
lat_grid,
lon_grid,
unit,
long_name,
textstr_1,
textstr_2,
                 vmax)
# -
# <br>
# <br>
# <br>
# <a href="./00_index.ipynb"><< Index </a><br>
# <a href="./03_sentinel3_OLCI_L1_load_browse.ipynb"><< 03 - Sentinel-3 OLCI Level-1B - Load and browse </a><span style="float:right;"><a href="./05_sentinel3_NRT_SLSTR_AOD_load_browse.ipynb">05 - Sentinel-3 NRT SLSTR AOD - Load and browse >></a></span>
# <hr>
# <img src='./img/copernicus_logo.png' alt='Logo EU Copernicus' align='right' width='20%'><br><br><br><br>
#
# <p style="text-align:right;">This project is licensed under the <a href="./LICENSE">MIT License</a> and is developed under a Copernicus contract.
| 90_workshops/202011_ac_training_school/1_overview_ac_satellite_data/04_sentinel3_NRT_SLSTR_FRP_load_browse.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Visualization of CatBoost decision trees tutorial
import numpy as np
import catboost
from catboost import CatBoostRegressor, CatBoostClassifier, Pool
# #### Load boston dataset from sklearn
from sklearn.datasets import load_boston
boston = load_boston()
y = boston['target']
X = boston['data']
pool = catboost.Pool(
data=X,
label=y
)
# Create and fit CatBoost model with trees of depth 2.
model = CatBoostRegressor(depth=2, verbose=False, iterations=1).fit(X, y)
# Currently only symmetric trees can be visualized.
#
# Let's consider a symmetric decision tree. In such a tree, only one feature is used to build all splits at each tree level. There are three types of splits: "FloatFeature", "OneHotFeature" and "OnlineCtr". A model without categorical features contains only "FloatFeature" splits.
# In the visualized tree, each node represents one split. Since there are three types of splits, there are three types of tree nodes.
# #### FloatFeature
# Let's look at the first tree of our model.
model.plot_tree(
tree_idx=0,
# pool=pool,
)
# Our model doesn't have categorical features, so there are only "FloatFeature" nodes in the visualized tree.
# A node corresponding to a "FloatFeature" split contains the feature index and the border value used to split the objects.
#
# In this example, the node of depth 0 shows that objects are split by their 0th feature with border value $393.795$. Analogously, nodes of depth 1 split objects by their 2nd feature with border value $279.5$.
# "pool" parameter is optional for models without one hot features. Features are labeled with their external indexes from pool or features names if pool is provided, otherwise internal indexes are used. For semicolon-separated pool with 2 features "f1;label;f2" external feature indexes are 0 and 2, internal indexes are 0 and 1 respectively.
# #### OneHotFeature
#
# We will use `catboost.datasets.titanic` dataset, which contains categorical data.
# +
from catboost.datasets import titanic
titanic_df = titanic()
X = titanic_df[0].drop('Survived',axis=1)
y = titanic_df[0].Survived
# -
X.head()
# Processing NaN values in categorical features.
# +
is_cat = (X.dtypes != float)
for feature, feat_is_cat in is_cat.to_dict().items():
    if feat_is_cat:
        X[feature].fillna("NAN", inplace=True)
cat_features_index = np.where(is_cat)[0]
# -
pool = Pool(X, y, cat_features=cat_features_index, feature_names=list(X.columns))
# Define and fit CatBoost model
model = CatBoostClassifier(
max_depth=2, verbose=False, max_ctr_complexity=1, random_seed=42, iterations=2).fit(pool)
model.plot_tree(
tree_idx=0,
pool=pool # "pool" is required parameter for trees with one hot features
)
# The first tree contains only one split made by "OneHotFeature" `Sex`. This split puts objects with `Sex=female` to the left and other objects to the right.
# #### OnlineCtr features
# Let's look at other trees, which contain "OnlineCtr" splits.
model.plot_tree(tree_idx=1, pool=pool)
# The node of depth 0 corresponds to an "OnlineCtr" split. This split is made by the feature `Pclass`.
| catboost/tutorials/model_analysis/visualize_decision_trees_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [Pandas](http://pandas.pydata.org)
#
# A library for working with data and tables in Python.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# ## Core structures
# The main data structures in **Pandas** are the classes **Series** and **DataFrame**.
# ## [Series](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html)
# A one-dimensional indexed array of data of some fixed type. You can think of a Series as a vector from [numpy](https://numpy.org/).
salaries = pd.Series(data = [80000, 67000, 75000],
index = ['Андрей', 'Владимир', 'Анна'])
print(salaries)
type(salaries)
# Let's look at the average salary.
# NumPy functions accept a pd.Series, since to them it looks like an np.array. We can do this in two ways:
np.mean(salaries), salaries.mean()
# Let's look at the people whose salary is above average:
salaries[salaries > salaries.mean()] # essentially a boolean mask, as with a numpy array
# We can access elements of a pd.Series as `salaries['Name']` or `salaries.Name`. For example:
salaries.Андрей, salaries['Андрей']
# You can add new elements by assigning to a non-existent index:
salaries['Кот'] = 100500
salaries
# The index can be a string consisting of several words.
# Also, a value in a pd.Series can be `None`, or rather its numpy counterpart `np.nan` (not a number):
salaries['<NAME>'] = np.nan
salaries
# Real data often contain gaps, so you will see `np.nan` a lot.
# It is important to be able to find and handle them.
# Get a boolean mask for the missing values:
salaries.isnull()
salaries[salaries.isnull()]
# Assign the minimum salary to everyone who doesn't have one:
salaries[salaries.isnull()] = 1
salaries
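# A more idiomatic way to do the same replacement is the `fillna` method (a small self-contained sketch with made-up salaries):

```python
import numpy as np
import pandas as pd

s = pd.Series([80000, np.nan, 75000])
s_filled = s.fillna(1)      # same effect as s[s.isnull()] = 1, but returns a new Series
print(s_filled.tolist())    # [80000.0, 1.0, 75000.0]
```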
# ## [DataFrame](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html)
# A **DataFrame** is a two-dimensional data structure (a matrix): a table whose every column holds data of a single type. You can think of it as a dictionary of Series objects.
#
# The DataFrame structure is a great fit for real-world data: rows correspond to the feature descriptions of individual objects, and columns correspond to features.
# ### Creating a DataFrame
# Let's create a pd.DataFrame from a numpy identity matrix:
df1 = pd.DataFrame(data = np.eye(3),
index=['a', 'b', 'c'],
columns=['col1', 'col2', 'col3'],
dtype=int)
df1
# You can also create a pd.DataFrame from a dictionary.
# The keys become the column names, and the values are the lists of values in those columns.
# A pd.DataFrame can hold values of any types, but within a single column there can be only one type:
dictionary = {
'A': np.arange(3),
'B': ['a', 'b', 'c'],
'C': np.arange(3) > 1
}
df1 = pd.DataFrame(dictionary)
df1
# Each column gets the dtype of the numpy array it came from.
df1.dtypes
# ## Indexing: at, loc, iloc
# We can access a single element of the table via `at` (this is fast):
df1.at[2, 'B']
# We can access a slice of the table via `loc` (which is merely [22 times slower](https://stackoverflow.com/questions/37216485/pandas-at-versus-loc) than `at`):
df1.loc[1:2, ['A', 'B']]
# To address columns by positional index (they have one, even if it is not shown explicitly), use `iloc`, which works analogously to `loc`:
df1.iloc[1:3, 1:3]
# We can change elements by accessing them via `at` and assigning a value:
df1.at[2, 'B'] = 'Z'
df1
# With `loc` you can change a whole row at once,
# and even create new rows. Note that the indexes do not have to be consecutive, or even in order.
df1.loc[2] = [2,'c', True]
df1.loc[17] = [17, '!', False]
df1.loc[9] = [9, '!', False]
df1
# To delete rows, use the `drop` function:
df1 = df1.drop([1,9,17])
df1
# ## [copy](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.copy.html), [reset_index](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html), NaNs
# Let's create a copy of our table without the last column.
# Then we append the new table to the old one and see what happens.
df1_copy = df1.copy().loc[:, ['A', 'B']]
df1_copy = df1.append(df1_copy)
df1_copy
# Generally speaking, doing it this way is bad: the row index gets copied as well.
try:
    df1_copy[0]
except KeyError:
    print('too many objects with one index')
# To make indexing valid, the frame needs to be reindexed.
df1_copy.reset_index(drop=True) # drop=True means the old index is discarded; otherwise it would simply become a column
# Now we can work with the table properly.
# Note that
# * `loc` and other operations do not modify the original table
# * the values we did not know were filled with NaN.
# Let's drop all rows/columns that contain a NaN (for columns, use `axis=1`):
df1_copy.dropna(axis=0)
# Replace all NaNs with some value:
df1_copy.fillna(False)
# ## Non-numeric indexes. MultiIndex.
# Generally speaking, nothing prevents indexes (or columns) from being non-numeric. In that case you address them by the label they are denoted with.
index = ['Firefox', 'Chrome', 'Safari', 'IE10', 'Konqueror']
df3 = pd.DataFrame({
'http_status': [200,200,404,404,301],
'response_time': [0.04, 0.02, 0.07, 0.08, 1.0]},
index=index)
df3
df3.loc['Firefox']
# `iloc` still works with ordinary positional row indexes:
df3.iloc[0,1]
# Non-numeric indexing comes up especially often with groupings - more on those below.
# You can do even more complex things, namely multi-level indexes and multi-level columns. It is thanks to the design of multi-indexes that we can group objects conveniently.
# +
idx = pd.MultiIndex.from_product([['Zara', 'LV', 'Roots'],
['Orders', 'GMV', 'AOV']],
names=['Brand', 'Metric'])
col = ['Yesterday', 'Yesterday-1', 'Yesterday-7', 'Thirty day average']
df_mul = pd.DataFrame('-', idx, col)
df_mul
# -
# ## Reading from a file
# The main format for storing and exchanging data frames is `CSV` (comma-separated values). You can read data of quite different formats: `.txt`, `.tsv` or `.xlsx`. But you need to handle the `header`s and indexes carefully so that no data get lost.
# * [read_csv](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html#pandas.read_csv)
# * [read_fwf](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_fwf.html#pandas.read_fwf)
# * [read_excel](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html)
# Let's load a small csv file with financial reporting data.
df = pd.read_csv('data/data_type.csv')
df
# In Jupyter notebooks, `Pandas` data frames are rendered as these nice tables, and `print(df)` looks worse.
print(df)
# We get the matrix dimensions just like in numpy:
df.shape
# And we can get the column names:
df.columns
# ## [Data types](https://pbpython.com/pandas_dtypes.html)
#
# To see general information about the data frame and all its features, use the **`info`** method:
df.info()
# In fact pandas has only 5 data types: `bool`, `int64`, `float64`, `datetime64` and `object`. Any string, function or class - anything 'complex' - is treated as `object`.
# ## Preprocessing the types
# You can **change a column's type** with the `astype` method.
df['Customer Number'] = df['Customer Number'].astype('int64')
# The `Jan Units` feature cannot be cast this way because of the `Closed` value. But since that is an exceptional value, we can cast via `to_numeric`; everything that fails to cast then becomes `NaN`.
df['Jan Units'] = pd.to_numeric(df['Jan Units'], errors='coerce') # errors='coerce' sends everything that fails to cast to NaN
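# A tiny standalone illustration of `errors='coerce'` (the values here are made up):

```python
import pandas as pd

raw = pd.Series(['10', '25', 'Closed'])
nums = pd.to_numeric(raw, errors='coerce')  # 'Closed' cannot be cast, so it becomes NaN
print(nums.isnull().tolist())               # [False, False, True]
```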
# You can obtain a date with the `to_datetime` method:
df['Date'] = pd.to_datetime(df[['Month', 'Day', 'Year']]) # note how easily we created a new column
df
# ## The [apply](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html), [map](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html) and [replace](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.replace.html) methods
# To convert the `Active` column, you can use the `pd.Series.map` method or `pd.DataFrame.replace`:
d = {'N' : False, 'Y' : True}
df['Active'].map(d)
df = df.replace({'Active': d})
df
# The second method is more sweeping, but it works slower.
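A small side-by-side sketch of the two approaches on a made-up frame:

```python
import pandas as pd

demo = pd.DataFrame({"Active": ["Y", "N", "Y"]})
d = {"N": False, "Y": True}

via_map = demo["Active"].map(d)            # works on a single Series
via_replace = demo.replace({"Active": d})  # works on the whole frame; can target several columns at once
```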
# All that is left is to convert the money and percentage columns into something numeric. For this we use the `apply` method and write a lambda function
convert = lambda x : float(x.replace(',','').replace('$', '').replace('%',''))
# This function strips all the unneeded symbols from the numbers and converts the string to a float.
df['2016'] = df['2016'].apply(convert)
df['2017'] = df['2017'].apply(convert)
df['Percent Growth'] = df['Percent Growth'].apply(convert)
df
# ## Pandas settings
# To avoid problems with how `float` values are displayed, we can change the pandas settings.
pd.set_option('display.precision', 4) # the number of digits shown after the decimal point
df
# We can also change how a dataframe is displayed with `set_option`
pd.set_option('display.max_columns', 5)
pd.set_option('display.max_rows', 2)
df
# This is handy when your tables are very large. We don't need it right now.
pd.set_option('display.max_columns', 20)
pd.set_option('display.max_rows', 10)
# ## Adding and removing columns
# It is easy to add a column with the total (or any other function) of the `2016` and `2017` columns
df['Sum'] = df['2016'] + df['2017']
# Now we no longer need the separate year and date columns, since we have the combined ones. Let's remove them.
df_final = df.drop(['2016', '2017', 'Year', 'Month', 'Day'], axis=1) # be sure to specify that we are dropping columns, not rows!
df_final
# ## Sorting. Reindex.
# The new columns were appended at the end, which is not always convenient (though usually it doesn't matter). We can reorder the columns via reindexing: we specify the order of the column names and ask the DataFrame to return them in that order.
df_final.reindex(sorted(df_final.columns), axis=1)
# Or we can reorder the rows in an arbitrary way.
df_final.reindex([4,3,0,1,2], axis=0)
# But usually we want to sort by the values of a column; for that there is `sort_values`
df_final.sort_values(by = 'Customer Name') # sort the rows alphabetically
# You can sort by several criteria at once and set the order for each of them.
df_final.sort_values(by = ['Active', 'Date'], ascending=[1, 0])
# ## Indexing with masks
# Boolean indexing of a `DataFrame` with a mask is very convenient.
#
# We write a condition on a **single** column and then combine several conditions into one mask.
# +
cond1 = (df_final['Sum'] > 200000)
cond2 = (df_final['Date'] < '2015-06-01')
mask = (df_final['Sum'] > 200000) & (df_final['Date'] < '2015-06-01')
df_final[mask]
# -
# Note that we use the single operators `&` instead of `&&` or `and`. It helps to think of a mask as a bit vector to which we apply a bitwise operation element-wise.
cond1 ^ cond2 # xor, why not
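A minimal sketch of these element-wise boolean operators on a made-up series:

```python
import pandas as pd

s = pd.Series([1, 5, 10])
a = s > 2          # [False, True, True]
b = s < 8          # [True, True, False]
both = a & b       # element-wise AND
either = a | b     # element-wise OR
exclusive = a ^ b  # element-wise XOR
```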
# ## String operations
# We learned that a DataFrame has no `string` class: essentially, everything that is not a number is a string.
#
# For a Series this is not quite so: a Series is practically a `numpy.ndarray` that can hold strings, so there are special methods for working with strings.
# Let's take data with lots of strings, namely... data on the battles in "Game of Thrones".
battles=pd.read_csv('data/battles.csv')
battles = battles.drop(['attacker_1', 'attacker_2', 'attacker_3', 'attacker_4', 'defender_1', 'defender_2', 'defender_3', 'defender_4'], axis=1)
# `head` shows the first few rows of the frame.
battles.head()
# Let's find all the battles with a Lannister or Stark commander, using the [pd.Series.str.contains](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html) method
battles['attacker_commander']
# We will keep working with the commanders column, so let's remove the NaNs from it.
battles = battles[~battles['attacker_commander'].isnull()].reset_index(drop=True) # ~ negates the condition
# Let's find all the battles where the attacking commanders were Lannisters or Starks.
battles[battles['attacker_commander'].str.contains('Lannister|Stark', regex=True)]['name'] # you can search with a regular expression
# Let's build a dataframe of the attacking commanders only. To create a DataFrame rather than a Series, use double square brackets.
commanders = battles[['attacker_commander']] # if there happen to be NaNs left, they should be removed
# For convenience, let's rename the column
commanders = commanders.rename(columns={'attacker_commander': 'names'}) # since we dropped the NaNs, the index had to be reset as well
commanders.head()
# Note that there can be several commanders. Let's split them apart with the [pd.Series.str.split](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.split.html) operation
split_com = commanders['names'].str.split(", | and | & ")
split_com
# Let's find the maximum number of commanders
# ### Splitting a list
v_len = np.vectorize(lambda x: len(x))
v_len(split_com)
# We got a maximum of 6 commanders, so now we can split them apart.
pd.DataFrame(split_com.tolist(), columns=['com1', 'com2', 'com3', 'com4', 'com5', 'com6'])
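The split-into-columns step can be sketched on a couple of made-up commander strings (the names and the `com1`/`com2` labels here are illustrative, not taken from the dataset):

```python
import pandas as pd

# Two made-up commander strings; real rows come from battles.csv
names = pd.Series(["Jaime Lannister", "Robb Stark, Brynden Tully"])
split = names.str.split(", ")
# Rows with fewer commanders are padded with None
wide = pd.DataFrame(split.tolist(), columns=["com1", "com2"])
print(wide)
```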
# ## [Grouping](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html)
#
# In general, grouping data in Pandas looks like this:
#
# ```
# df.groupby(by=grouping_columns)[columns_to_show].function()
# ```
#
# 1. The **`groupby`** method is applied to the dataframe, splitting the data by `grouping_columns`, a feature or a set of features.
# 2. We index by the columns we want to show (`columns_to_show`).
# 3. A function (or several functions) is applied to the resulting groups.
#
# Let's group by the `year` feature. We get a groupby object on which we can call aggregating functions.
battles.groupby(by=['year'])
# `count` adds one for every non-NaN object in a column.
battles.groupby(by=['year']).count()
# `battle_number` is a feature with no NaNs, so we can keep just it
battles.groupby(by=['year']).count()['battle_number']
# Or we can select the feature first and then call the aggregating operation
battles.groupby(by=['year'])['battle_number'].count()
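The same chain can be sketched on a toy stand-in for the battles frame:

```python
import pandas as pd

# A made-up three-row stand-in for battles.csv
toy = pd.DataFrame({"year": [298, 298, 299], "battle_number": [1, 2, 3]})
per_year = toy.groupby("year")["battle_number"].count()
print(per_year)
```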
# ### Complex groupings
# You can also build more complex groupings, by several parameters and with more complex aggregating functions.
#
# Say we want to group by year and region and count the total number of people who fought there.
G = battles.groupby(['year','region'])[['attacker_size', 'defender_size']].sum()
G
G['people'] = G['attacker_size'] + G['defender_size']
import matplotlib.pyplot as plt
G['people'].plot(kind='bar')
plt.grid()
# ## Applied Data Analysis
# Now let's look at a small analytics problem: predicting whether a customer will leave a telecom operator.
df = pd.read_csv('data/telecom_churn.csv')
df.head(3)
# The **`describe`** method shows the basic statistics for every numeric feature (`int64` and `float64` types): the number of non-missing values, the mean, the standard deviation, the range, the median and the 0.25 and 0.75 quartiles.
df.describe()
# The `seaborn` library is a more convenient replacement for `matplotlib` in some cases, for example for drawing box plots. Internally, though, it is implemented on top of the same `matplotlib`.
import seaborn as sns
plt.figure(figsize=(13, 4))
plt.subplot(1, 2, 1)
sns.boxplot(x=df['Total day minutes'])
plt.subplot(1, 2, 2)
sns.histplot(df['Total day minutes'], kde=True);
plt.grid()
# To see statistics for non-numeric features, you need to explicitly list the types of interest in the `include` parameter. You can also pass `include='all'` to output statistics for all available features.
df.describe(include=['object', 'bool'])
# For categorical (`object`) and boolean (`bool`) features you can use the **`value_counts`** method. Let's look at the distribution of our target variable, `Churn`:
df['Churn'].value_counts()
# 2850 of the 3333 users are loyal; their `Churn` value is `0`.
#
# Let's look at how the users are distributed over the `Area code` variable. Pass `normalize=True` to see relative rather than absolute frequencies.
df['Area code'].value_counts(normalize=True)
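A minimal sketch of `value_counts` with and without `normalize`, on a made-up series:

```python
import pandas as pd

churn = pd.Series([0, 0, 1, 0])
counts = churn.value_counts()                # absolute frequencies
shares = churn.value_counts(normalize=True)  # relative frequencies, summing to 1
print(counts)
print(shares)
```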
# ## Pivot tables
# Suppose we want to see how the observations in our sample are distributed with respect to two features, `Churn` and `International plan`. We can build a **contingency table** for this using the **`crosstab`** method:
pd.crosstab(df['Churn'], df['International plan'])
pd.crosstab(df['Churn'], df['Voice mail plan'], normalize=True)
# We see that most users are loyal and do not use the additional services (international plan / voice mail).
# Advanced `Excel` users will surely recall the **pivot tables** feature. In `Pandas`, pivot tables are built with the **`pivot_table`** method, which takes the following parameters:
#
# * `values` - a list of variables for which the statistics are computed,
# * `index` - a list of variables to group the data by,
# * `aggfunc` - what exactly we want to compute for each group: the sum, mean, maximum, minimum or something else.
#
# Let's look at the average number of day, evening and night calls for each `Area code`:
df.pivot_table(values = ['Total day calls', 'Total eve calls', 'Total night calls'],
index = ['Area code'],
aggfunc= lambda X: X.mean()) # any aggregating function can go here
# --------
#
#
#
# ## First attempts at churn prediction
#
# Let's see how churn relates to the **"International plan"** feature. We'll do this with a `crosstab` summary table and with a `Seaborn` plot.
# may need to be installed separately (run the command in a terminal)
# so that plots are drawn inside the notebook
# # !conda install seaborn
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
plt.rcParams['figure.figsize'] = (8, 6)
pd.crosstab(df['Churn'], df['International plan'], margins=True)
sns.countplot(x='International plan', hue='Churn', data=df);
# We see that when the international plan is enabled, the churn rate is much higher: an interesting observation! Perhaps large, poorly controlled roaming expenses cause a lot of conflict, leading to dissatisfaction among the operator's customers and, eventually, to churn.
# Next let's look at another important feature, **"Number of customer service calls"** (`Customer service calls`). We'll also build a summary table and a plot.
pd.crosstab(df['Churn'], df['Customer service calls'], margins=True, normalize=True)
sns.countplot(x='Customer service calls', hue='Churn', data=df);
plt.grid()
# Maybe this is not so obvious from the table (or it is tedious to scan rows of numbers), but the plot clearly shows that the churn rate rises sharply starting at 4 customer service calls.
# Now let's add a binary feature to our DataFrame, the result of the comparison `Customer service calls > 3`, and look again at how it relates to churn.
# +
df['Many_service_calls'] = (df['Customer service calls'] > 3).astype('int')
pd.crosstab(df['Many_service_calls'], df['Churn'], margins=True)
# -
sns.countplot(x='Many_service_calls', hue='Churn', data=df);
# Let's combine the conditions above and build a contingency table for that combination and churn.
pd.crosstab(df['Many_service_calls'] & df['International plan'] ,
df['Churn'])
# So, when we predict churn whenever the number of customer service calls is greater than 3 and the international plan is enabled (and predict loyalty otherwise), we can expect about 85.8% correct answers (we are wrong only 464 + 9 times). This 85.8%, obtained by very simple reasoning, is a good starting point (*baseline*) for the machine learning models we will build later.
# On the whole, this is roughly what data analysis looked like before machine learning appeared. Let's summarize:
#
# - The share of loyal customers in the sample is 85.5%. The most naive model, which always answers "the customer is loyal", will be right in about 85.5% of cases on data like this. So the accuracy of any later model should be at least as high, and preferably significantly higher;
# - With a simple rule, which can be written roughly as "International plan = True & Customer service calls > 3 => Churn = 1, else Churn = 0", we can expect about 85.8% correct answers, slightly above 85.5%;
# - We obtained these two baselines without any machine learning, and they serve as the starting point for our later models. If it turns out that enormous effort raises the accuracy by only, say, 0.5%, then perhaps we are doing something wrong and a simple two-condition model is good enough;
# - Before training complex models it pays to poke around in the data and check simple hypotheses. Moreover, in business applications of machine learning one usually starts with simple solutions and only then experiments with more complex ones.
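The baseline rule can be sketched on a tiny made-up sample (five invented rows; the real numbers come from `telecom_churn.csv`):

```python
import pandas as pd

# Five made-up rows standing in for data/telecom_churn.csv
toy = pd.DataFrame({
    "Churn": [0, 0, 1, 1, 0],
    "Many_service_calls": [0, 0, 1, 0, 0],
    "International plan": [False, False, True, False, False],
})
# The simple rule: many service calls AND roaming enabled -> predict churn
pred = (toy["Many_service_calls"].astype(bool) & toy["International plan"]).astype(int)
accuracy = (pred == toy["Churn"]).mean()  # share of correct predictions
print(accuracy)
```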
# ----------
# ## Useful materials
# * [Optimizing Pandas for big data](https://habr.com/en/company/ruvds/blog/442516/)
# * [Pandas for Data Analysis](https://github.com/jupyter/jupyter/wiki/A-gallery-of-interesting-Jupyter-Notebooks#pandas-for-data-analysis)
# * [A collection of useful Pandas notebooks](https://github.com/HorusHeresyHeretic/Pandas_Practice)
# * [Learn Pandas](https://bitbucket.org/hrojas/learn-pandas/src/master/)
# * [MultiIndex](https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html)
# * [Reshape and Index](https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html)
# ## Sources:
# * [mlcourse.ai](https://github.com/Yorko/mlcourse.ai) - the OpenDataScience machine learning course
# * [AI Seminars](https://github.com/AICommunityInno/Seminars) - machine learning seminars in Innopolis
# * [HSE-ML course](https://github.com/esokolov/ml-course-hse) - the machine learning course at the HSE Faculty of Computer Science
| 02-Pandas/Pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
norm_params_path = "/fast_scratch/WatChMaL/data/IWCDmPMT_4pi_fulltank_9M_norm_params/IWCDmPMT_4pi_fulltank_9M_trainval_norm_params.npz"
norm_params = np.load(norm_params_path, allow_pickle=True)
print(list(norm_params.keys()))
c_acc = norm_params["c_acc"]
t_acc = norm_params["t_acc"]
# Plot the histogram accumulated by the time accumulator
# +
BINS = 10000
step = t_acc[1]/BINS
x_val = np.arange(0, t_acc[1], step)
# -
print(x_val, len(x_val))
print(len(t_acc[0]), t_acc[0])
fig = plt.figure(figsize=(32,18))
plt.hist(x_val, x_val, weights=t_acc[0])
plt.yscale("log")
t_acc_nonzero = t_acc[0][t_acc[0] > 0.]
print(len(t_acc_nonzero), t_acc_nonzero)
| notebooks/2811 - Test the normalization parameters.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Regular Expressions
#
# In the error handling session we tried to interpret strings as valid heights and weights. This involved looking for text such as "meter" or "kilogram" in the string, and then extracting the number. This process is called pattern matching, and is best undertaken using a regular expression.
#
# Regular expressions have a long history and are available in most programming languages. Python implements a standards-compliant regular expression module, which is called `re`.
import re
# Let's create a string that contains a height and see if we can use a regular expression to match that...
h = "2 meters"
# To search for the string "meters" in a string, use `re.search`, e.g.
if re.search("meters", h):
print("String contains 'meters'")
else:
print("No match")
# `re.search` returns a match object if there is a match, or `None` if there isn't.
m = re.search("meters", h)
m
# This matches "meters", but what about "meter"? "meter" is "meters" without an "s". You can specify that a letter is matched 0 or 1 times using "?"
h = "2 meter"
m = re.search("meters?", h)
m
# However, this is still not quite right, as the pattern would also match "meters" in the middle of a longer string. We need to match "meters" only at the end of the string. We do this using "$", which means match at the end of the string
m = re.search("meters?$", h)
m
# We also want to be able to match "m" as well as "meters". To do this, we need to use the "or" operator, which is "|". It is a good idea to put this in round brackets to make both sides of the "or" statement clear.
h = "2 m"
m = re.search("(m|meters?)$", h)
m
# Next, we want to match the number, e.g. "X meters", where "X" is a number. You can use "\d" to represent any number. For example
h = "2 meters"
m = re.search(r"\d (m|meters?)$", h)
m
# A problem with the above example is that it only matches a number with a single digit, as "\d" only matches a single number. To match one or more digits, we need to put a "+" afterwards, as this means "match one or more", e.g.
h = "10 meters"
m = re.search(r"\d+ (m|meters?)$", h)
m
# This match breaks if the number has a decimal point, as it doesn't match the "\d". To match a decimal point, you need to use "\\.", and also "?", which means "match 0 or 1 decimal points", and then "\d*", which means "match 0 or more digits"
h = "1.5 meters"
m = re.search(r"\d+\.?\d* (m|meters?)$", h)
m
# The number must match at the beginning of the string. We use "^" to mean match at start...
h = "some 1.8 meters"
m = re.search(r"^\d+\.?\d* (m|meters?)$", h)
m
# Finally, we want this match to be case insensitive, and would like the user to be free to use as many spaces as they want between the number and the unit, before the string or after the string... To do this we use "\s*" to represent any number of spaces, and match using `re.IGNORECASE`.
h = " 1.8 METers "
m = re.search(r"^\s*\d+\.?\d*\s*(m|meters?)\s*$", h, re.IGNORECASE)
m
# The round brackets do more than just group parts of your search. They also allow you to extract the parts that match.
m.groups()
# You can place round brackets around the parts of the match you want to capture. In this case, we want to get the number...
m = re.search(r"^\s*(\d+\.?\d*)\s*(m|meters?)\s*$", h, re.IGNORECASE)
m.groups()
# As `m.groups()[0]` contains the match of the first set of round brackets (which is the number), then we can get the number using `m.groups()[0]`. This enables us to rewrite the `string_to_height` function from the last section as;
def string_to_height(height):
"""Parse the passed string as a height. Valid formats are 'X m', 'X meters' etc."""
m = re.search("^\s*(\d+\.?\d*)\s*(m|meters?)\s*$", height, re.IGNORECASE)
if m:
return float(m.groups()[0])
else:
raise TypeError("Cannot extract a valid height from '%s'" % height)
h = string_to_height(" 1.5 meters ")
h
# # Exercise
#
# ## Exercise 1
#
# Rewrite your `string_to_weight` function using regular expressions. Check that it responds correctly to a range of valid and invalid weights.
def string_to_weight(weight):
"""Parse the passed string as a weight. Valid formats are 'X kg', 'X kilos', 'X kilograms' etc."""
m = re.search("^\s*(\d+\.?\d*)\s*(kgs?|kilos?|kilograms?)\s*$", weight, re.IGNORECASE)
if m:
return float(m.groups()[0])
else:
raise TypeError("Cannot extract a valid weight from '%s'" % weight)
string_to_weight("23.5 kilos"), string_to_weight("5kg"), string_to_weight("10 kilogram")
# ## Exercise 2
#
# Update string_to_height so that it can also understand heights in both meters and centimeters (returning the height in meters), and update string_to_weight so that it can also understand weights in both grams and kilograms (returning the weight in kilograms). Note that you may find it easier to separate the number from the units. You can do this using the below function to divide the string into the number and units. This uses "\w" to match any word character.
def get_number_and_unit(s):
"""Interpret the passed string 's' as "X units", where "X" is a number and
"unit" is the unit. Returns the number and (lowercased) unit
"""
m = re.search("^\s*(\d+\.?\d*)\s*(\w+)\s*$", s, re.IGNORECASE)
if m:
number = float(m.groups()[0])
unit = m.groups()[1].lower()
return (number, unit)
else:
raise TypeError("Cannot extract a valid 'number unit' from '%s'" % s)
def string_to_height(height):
"""Parse the passed string as a height. Valid formats are 'X m', 'X centimeters' etc."""
(number, unit) = get_number_and_unit(height)
if re.search("cm|centimeters?", unit):
return number / 100.0
elif re.search("m|meters?", unit):
return number
else:
raise TypeError("Cannot convert a number with units '%s' to a valid height" % unit)
def string_to_weight(weight):
"""Parse the passed string as a weight. Valid formats are 'X kg', 'X grams' etc."""
(number, unit) = get_number_and_unit(weight)
if re.search("kgs?|kilos?|kilograms?", unit):
return number
elif re.search("g|grams?", unit):
return number / 1000.0
else:
raise TypeError("Cannot convert a number with units '%s' to a valid weight" % unit)
string_to_height("55 cm"), string_to_height("2m"), string_to_height("15meters")
string_to_weight("15g"), string_to_weight("5 kilograms"), string_to_weight("5gram")
| answers/17_regular_expressions.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R 3.4
# language: R
# name: r
# ---
# <!-- img src="http://cognitiveclass.ai/wp-content/uploads/2017/11/cc-logo-square.png" width="150"-->
#
# <h1 align=center>R BASICS</h1>
# ### Welcome!
#
# By the end of this notebook, you will have learned the basics of R!
# ## Table of Contents
#
#
# <ul>
# <li><a href="#About-the-Dataset">About the Dataset</a></li>
# <li><a href="#Simple-Mathematics-in-R">Simple Math in R</a></li>
# <li><a href="#Variables-in-R">Variables in R</a></li>
# <li><a href="#Vectors-in-R">Vectors in R</a></li>
# <li><a href="#Strings-in-R">Strings in R</a></li>
# </ul>
# <p></p>
# Estimated Time Needed: <strong>15 min</strong>
#
# <hr>
# <center><h2>About the Dataset</h2></center>
# Which movie should you watch next?
#
# Let's say each of your friends tells you their favorite movies. You do some research on the movies and put it all into a table. Now you can begin exploring the dataset, and asking questions about the movies. For example, you can check if movies from some certain genres tend to get better ratings. You can check how the production cost for movies changes across years, and much more.
# **Movies dataset**
#
# The table gathered includes one row for each movie, with several columns for each movie characteristic:
#
# - **name** - Name of the movie
# - **year** - Year the movie was released
# - **length_min** - Length of the movie (minutes)
# - **genre** - Genre of the movie
# - **average_rating** - Average rating on [IMDB](http://www.imdb.com/)
# - **cost_millions** - Movie's production cost (millions in USD)
# - **foreign** - Is the movie foreign (1) or domestic (0)?
# - **age_restriction** - Age restriction for the movie
# <br>
#
# <img src = "https://ibm.box.com/shared/static/6kr8sg0n6pc40zd1xn6hjhtvy3k7cmeq.png" width = 90% align="left">
# ### We can use R to help us explore the dataset
# But to begin, we'll need to get the basics, so let's get started!
# <hr>
# <center><h2>Simple Mathematics in R</h2></center>
# Let's say you want to watch *Fight Club* and *Star Wars: Episode IV (1977)*, back-to-back. Do you have enough time to **watch both movies in 4 hours?** Let's try using simple math in R.
# What is the **total movie length** for Fight Club and Star Wars (1977)?
# - **Fight Club**: 139 min
# - **Star Wars: Episode IV**: 121 min
# <div class="alert alert-success alertsuccess" style="margin-top: 20px">
# **Tip**: To run the grey code cell below, click on it, and press Shift + Enter.
# </div>
# + jupyter={"outputs_hidden": false}
139 + 121
# -
# Great! You've determined that the total movie play time is **260 min**.
# **What is 260 min in hours?**
# + jupyter={"outputs_hidden": false}
260 / 60
# -
# Well, it looks like it's **over 4 hours**, which means you can't watch *Fight Club* and *Star Wars (1977)* back-to-back if you only have 4 hours available!
# <hr></hr>
# <div class="alert alert-success alertsuccess" style="margin-top: 20px">
# <h4> [Tip] Simple math in R </h4>
# <p></p>
# You can do a variety of mathematical operations in R including:
# <li> addition: **2 + 2** </li>
# <li> subtraction: **5 - 2** </li>
# <li> multiplication: **3 \* 2** </li>
# <li> division: **4 / 2** </li>
# <li> exponentiation: **4 \*\* 2** or **4 ^ 2 **</li>
# </div>
# <hr></hr>
# <center><h2>Variables in R</h2>
# We can also **store** our output in **variables**, so we can use them later on. For example:
x <- 139 + 121
# To return the value of **`x`**, we can simply run the variable as a command:
# + jupyter={"outputs_hidden": false}
x
# -
# We can also perform operations on **`x`** and save the result to a **new variable**:
# + jupyter={"outputs_hidden": false}
y <- x / 60
y
# -
# If we save something to an **existing variable**, it will **overwrite** the previous value:
# + jupyter={"outputs_hidden": false}
x <- x / 60
x
# -
# It's good practice to use **meaningful variable names**, so you don't have to keep track of what variable is what:
# + jupyter={"outputs_hidden": false}
total <- 139 + 121
total
# + jupyter={"outputs_hidden": false}
total_hr <- total / 60
total_hr
# -
# You can put this all into a single expression, but remember to use **round brackets** to add together the movie lengths first, before dividing by 60.
# + jupyter={"outputs_hidden": false}
total_hr <- (139 + 121) / 60
total_hr
# -
# <hr></hr>
# <div class="alert alert-success alertsuccess" style="margin-top: 0px">
# <h4> [Tip] Variables in R </h4>
# <p></p>
# As you just learned, you can use **variables** to store values for repeated use. Here are some more **characteristics of variables in R**:
# <li>variables store the output of a block of code </li>
# <li>variables are typically assigned using **<-**, but can also be assigned using **=**, as in **x <- 1** or **x = 1** </li>
# <li>once created, variables can be removed from memory using **rm(**my_variable**)** </li>
# <p></p>
# </div>
# <hr></hr>
# <center><h2>Vectors in R</h2></center>
# What if we want to know the **movie length in _hours_**, not minutes, for _Toy Story_ and for _Akira_?
# - **Toy Story (1995)**: 81 min
# - **Akira (1998)**: 125 min
# + jupyter={"outputs_hidden": false}
c(81, 125) / 60
# -
# As you see above, we've applied a single math operation to both of the items in **`c(81, 125)`**. You can even assign **`c(81, 125)`** to a variable before performing an operation.
# + jupyter={"outputs_hidden": false}
ratings <- c(81, 125)
ratings / 60
# -
# What we just did was create vectors, using the combine function **`c()`**. The **`c()`** function takes multiple items, then combines them into a **vector**.
#
# It's important to understand that **vectors** are used everywhere in R, and vectors are easy to use.
# + jupyter={"outputs_hidden": false}
c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
c(1:10)
# + jupyter={"outputs_hidden": false}
c(10:1) # 10 to 1
# -
# <hr></hr>
# <div class="alert alert-success alertsuccess" style="margin-top: 20px">
# <h4> [Tip] # Comments</h4>
#
# Did you notice the **comment** after the **c(10:1)** above? Comments are very useful in describing your code. You can create your own comments by using the **#** symbol and writing your comment after it. R will interpret it as a comment, not as code.
#
# <p></p>
# </div>
#
# <hr></hr>
# <center><h2>Strings in R</h2></center>
# R isn't just about numbers -- we can also have strings too. For example:
# + jupyter={"outputs_hidden": false}
movie <- "Toy Story"
movie
# -
# In R, you can identify **character strings** when they are wrapped with **matching double (") or single (') quotes**.
# Let's create a **character vector** for the following **genres**:
# - Animation
# - Comedy
# - Biography
# - Horror
# - Romance
# - Sci-fi
# + jupyter={"outputs_hidden": false}
genres <- c("Animation", "Comedy", "Biography", "Horror", "Romance", "Sci-fi")
genres
# -
# <hr>
# <center><h2>Vectors</h2></center>
#
# <ul>
# <li><a href="#Vector-Operations">Vector Operations</a></li>
# <li><a href="#Subsetting-Vectors">Subsetting Vectors</a></li>
# <li><a href="#Factors">Factors</a></li>
# </ul>
# <p></p>
# Estimated Time Needed: <strong>15 min</strong>
# **Vectors** are strings of numbers, characters or logical data (one-dimension array). In other words, a vector is a simple tool to store your grouped data.
#
# In R, you create a vector with the combine function **c()**. You place the vector elements separated by a comma between the brackets. Vectors will be very useful in the future as they allow you to apply operations on a series of data easily.
#
# Note that the items in a vector <em>must be of the same class</em>, for example all should be either number, character, or logical.
# ### Numeric, Character and Logical Vectors
# Let's say we have four movie release dates (1985, 1999, 2015, 1964) and we want to assign them to a single variable, `release_year`. This means we'll need to create a vector using **`c()`**.
#
# Using numbers, this becomes a **numeric vector**.
release_year <- c(1985, 1999, 2015, 1964)
release_year
# What if we use quotation marks? Then this becomes a **character vector**.
# Create genre vector and assign values to it
titles <- c("Toy Story", "Akira", "The Breakfast Club")
titles
# There are also **logical vectors**, which consist of TRUE's and FALSE's. They're particularly important when you want to check a vector's contents
titles == "Akira" # which item in `titles` is equal to "Akira"?
# <hr></hr>
# <div class="alert alert-success alertsuccess" style="margin-top: 20px">
# <h4> [Tip] TRUE and FALSE in R </h4>
#
# Did you know? R only recognizes `TRUE`, `FALSE`, `T` and `F` as special values for true and false. That means all other spellings, including *True* and *true*, are not interpreted by R as logical values.
#
# <p></p>
# </div>
#
# <hr></hr>
# <center><h2>Vector Operations</h2></center>
# ### Adding more elements to a vector
# You can add more elements to a vector with the same **`c()`** function you used to create vectors:
release_year <- c(1985, 1999, 2015, 1964)
release_year
release_year <- c(release_year, 2016:2018)
release_year
# ### Length of a vector
# How do we check how many items there are in a vector? We can use the **length()** function:
release_year
length(release_year)
# ### Head and Tail of a vector
# We can also retrieve just the **first few items** using the **head()** function:
head(release_year) #first six items
head(release_year, n = 2) #first n items
head(release_year, 2)
# We can also retrieve just the **last few items** using the **tail()** function:
tail(release_year) #last six items
tail(release_year, 2) #last two items
# ### Sorting a vector
# We can also sort a vector:
sort(release_year)
# We can also **sort in decreasing order**:
sort(release_year, decreasing = TRUE)
# But if you just want the minimum and maximum values of a vector, you can use the **`min()`** and **`max()`** functions
min(release_year)
max(release_year)
# ### Average of Numbers
# If you want to check the average cost of movies produced in 2014, what would you do? Of course, one way is to add all the numbers together, then divide by the number of movies:
# +
cost_2014 <- c(8.6, 8.5, 8.1)
# sum results in the sum of all elements in the vector
avg_cost_2014 <- sum(cost_2014)/3
avg_cost_2014
# -
avg_cost_2014 <- sum(cost_2014)/length(cost_2014)
avg_cost_2014
# You also can use the <b>mean</b> function to find the average of the numeric values in a vector:
mean_cost_2014 <- mean(cost_2014)
mean_cost_2014
# ### Giving Names to Values in a Vector
# Suppose you want to remember which year corresponds to which movie.
#
# With vectors, you can give names to the elements of a vector using the **names()** function:
# +
#Creating a year vector
release_year <- c(1985, 1999, 2010, 2002)
#Assigning names
names(release_year) <- c("The Breakfast Club", "American Beauty", "Black Swan", "Chicago")
release_year
# -
# Now, you can retrieve the values based on the names:
release_year[c("American Beauty", "Chicago")]
# Note that the values of the vector are still the years. We can see this in action by adding a number to the first item:
release_year[1] + 100 #adding 100 to the first item changes the year
# And you can retrieve the names of the vector using **`names()`**
names(release_year)[1:3]
# ### Summarizing Vectors
# You can also use the **summary()** function for simple descriptive statistics: minimum, first quartile, median, mean, third quartile, and maximum:
summary(cost_2014)
# ### Using Logical Operations on Vectors
# A vector can also be comprised of **`TRUE`** and **`FALSE`**, which are special **logical values** in R. These boolean values are used to indicate whether a condition is true or false.
#
# Let's check whether a movie year of 1997 is more recent than (**greater in value than**) the year 2000.
movie_year <- 1997
movie_year > 2000
# You can also make a logical comparison across multiple items in a vector. Which movie release years here are "greater" than 2014?
movies_years <- c(1998, 2010, 2016)
movies_years > 2014
# We can also check for **equivalence**, using **`==`**. Let's check which movie year is equal to 2015.
movies_years == 2015 # is equal to 2015?
# If you want to check which ones are **not equal** to 2015, you can use **`!=`**
movies_years != 2015
# <hr></hr>
# <div class="alert alert-success alertsuccess" style="margin-top: 20px">
# <h4> [Tip] Logical Operators in R </h4>
# <p></p>
# You can do a variety of logical operations in R including:
# <li> Checking equivalence: **1 == 2** </li>
# <li> Checking non-equivalence: **TRUE != FALSE** </li>
# <li> Greater than: **100 > 1** </li>
# <li> Greater than or equal to: **100 >= 1** </li>
# <li> Less than: **1 < 2** </li>
# <li> Less than or equal to: **1 <= 2** </li>
# </div>
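# For example, you can run these comparisons directly and see the TRUE/FALSE results (a quick demonstration using only the operators listed above):
1 == 2 # FALSE
TRUE != FALSE # TRUE
100 >= 1 # TRUE
1 <= 2 # TRUE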
# <hr></hr>
# <a id="ref3"></a>
# <center><h2>Subsetting Vectors</h2></center>
# What if you wanted to retrieve the second year from the following **vector of movie years**?
movie_years <- c(1985, 1999, 2002, 2010, 2012)
movie_years
# To retrieve the **second year**, you can use square brackets **`[]`**:
movie_years[2] #second item
# To retrieve the **third year**, you can use:
movie_years[3]
# And if you want to retrieve **multiple items**, you can pass in a vector:
movie_years[c(1,3)] #first and third items
# **Retrieving a vector without some of its items**
#
# To retrieve a vector without an item, you can use negative indexing. For example, the following returns a vector slice **without the first item**.
titles <- c("Black Swan", "Jumanji", "City of God", "Toy Story", "Casino")
titles[-1]
# You can save the new vector using a variable:
new_titles <- titles[-1] #removes "Black Swan", the first item
new_titles
# **Missing Values (NA)**
#
# Sometimes values in a vector are missing and you have to show them using NA, which is a special value in R for "Not Available". For example, if you don't know the age restriction for some movies, you can use NA.
age_restric <- c(14, 12, 10, NA, 18, NA)
age_restric
#
# <div class="alert alert-success alertsuccess" style="margin-top: 20px">
# <h4> [Tip] Checking NA in R </h4>
# <p></p>
# You can check if a value is NA by using the **is.na()** function, which returns TRUE or FALSE.
# <li> Check if NA: **is.na(NA)** </li>
# <li> Check if not NA: **!is.na(2)** </li>
# </div>
#
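# For example, we can try **is.na()** on the `age_restric` vector we just created:
is.na(age_restric) # TRUE for the missing values
!is.na(age_restric) # TRUE for the values that are present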
# ### Subsetting vectors based on a logical condition
# What if we want to know which movies were created after year 2000? We can simply apply a logical comparison across all the items in a vector:
release_year > 2000
# To retrieve the actual movie years after year 2000, you can simply subset the vector using the logical vector within **square brackets "[]"**:
release_year[release_year > 2000] #returns a vector for elements that returned TRUE for the condition
# As you may notice, subsetting vectors in R works by retrieving items that were TRUE for the provided condition. For example, `year[year > 2000]` can be verbally explained as: _"From the vector `year`, return only values where the values are TRUE for `year > 2000`"_.
#
# You can even manually write out TRUE or T for the values you want to subset:
release_year
release_year[c(T, F, F, F)] #returns the values that are TRUE
# <a id="ref4"></a>
# <center><h2>Factors</h2></center>
# Factors are variables in R which take on a limited number of different values; such variables are often referred to as **categorical variables**. The difference between a categorical variable and a continuous variable is that a categorical variable can belong to a limited number of categories. A continuous variable, on the other hand, can correspond to an infinite number of values. For example, the height of a tree is a continuous variable, but the titles of books would be a categorical variable.
#
# One of the most important uses of factors is in statistical modeling; since categorical variables enter into statistical models differently than continuous variables, storing data as factors ensures that the modeling functions will treat such data correctly.
#
# Let's start with a _**vector**_ of genres:
genre_vector <- c("Comedy", "Animation", "Crime", "Comedy", "Animation")
genre_vector
# As you may have noticed, you can theoretically group the items above into three categories of genres: _Animation_, _Comedy_ and _Crime_. In R-terms, we call these categories **"factor levels"**.
#
# The function **factor()** converts a vector into a factor, and creates a factor level for each unique element.
genre_factor <- factor(genre_vector)
levels(genre_factor)
# ### Summarizing Factors
# When you have a large vector, it becomes difficult to identify which levels are most common (e.g., "How many 'Comedy' movies are there?").
#
# To answer this, we can use **summary()**, which produces a **frequency table**, as a named vector.
summary(genre_factor)
# And recall that you can sort the values of the table using **sort()**.
sort(summary(genre_factor)) #sorts values in ascending order
# ### Ordered factors
# There are two types of categorical variables: a **nominal categorical variable** and an **ordinal categorical variable**.
#
# A **nominal variable** is a categorical variable for names, without an implied order. This means that it is impossible to say that 'one is better or larger than the other'. For example, consider **movie genre** with the categories _Comedy_, _Animation_, _Crime_, _Comedy_, _Animation_. Here, there is no implicit order of low-to-high or high-to-low between the categories.
#
# In contrast, **ordinal variables** do have a natural ordering. Consider for example, **movie length** with the categories: _Very short_, _Short_ , _Medium_, _Long_, _Very long_. Here it is obvious that _Medium_ stands above _Short_, and _Long_ stands above _Medium_.
movie_length <- c("Very Short", "Short", "Medium","Short", "Long",
"Very Short", "Very Long")
movie_length
# __`movie_length`__ should be converted to an ordinal factor since its categories have a natural ordering. By default, the function <b>factor()</b> transforms `movie_length` into an unordered factor.
#
# To create an **ordered factor**, you have to add two additional arguments: `ordered` and `levels`.
# - `ordered`: When set to `TRUE` in `factor()`, you indicate that the factor is ordered.
# - `levels`: In this argument in `factor()`, you give the values of the factor in the correct order.
movie_length_ordered <- factor(movie_length, ordered = TRUE ,
levels = c("Very Short" , "Short" , "Medium",
"Long","Very Long"))
movie_length_ordered
# Now, let's look at the summary of the ordered factor, <b>movie_length_ordered</b>:
summary(movie_length_ordered)
# <hr>
# <h1 align=center>What is an Array?</h1>
# <br>
# An array is a structure that holds values grouped together, like a 2 x 2 table of 2 rows and 2 columns. Arrays can also be **multidimensional**, such as a 2 x 2 x 2 array.
# #### What is the difference between an array and a vector?
#
# Vectors are always one-dimensional, like a single row of data. An array, on the other hand, can be multidimensional (stored as rows and columns). The "dimensions" indicate how many rows and columns the array has.
# #### Let's create a 4 x 3 array (4 rows, 3 columns)
# The example below creates a vector of nine movie names; since they are all strings, the data type is the same for all the elements.
#lets first create a vector of nine movies
movie_vector <- c("Akira", "Toy Story", "Room", "The Wave", "Whiplash",
"Star Wars", "The Ring", "The Artist", "Jumanji")
movie_vector
# To create an array, we can use the **array()** function.
movie_array <- array(movie_vector, dim = c(4,3))
movie_array
# Note that **arrays are created column-wise**. Did you also notice that there were only 9 movie names, but the array was 4 x 3? The original **vector doesn't have enough elements** to fill the entire array (which needs 4 x 3 = 12 elements). So R recycles the vector: it fills the remaining cells by starting again from the beginning of the vector ("Akira", "Toy Story", "Room" in this case).
# We also needed to provide **`c(4,3)`** as a second _argument_ to specify the number of rows (4) and columns (3) that we wanted.
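# You can see this recycling behavior with a small vector as well (a quick illustration, not part of the original movie data):
array(c(1, 2, 3), dim = c(2, 3)) # the three values repeat to fill all six cells, column by column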
# <div class="alert alert-success alertsuccess" style="margin-top: 20px">
# **[Tip] What is an "argument"? How are "arguments" different from "_parameters_"?**
# <br>
# Arguments and parameters are terms you will hear constantly when talking about **functions**.
# - The _**parameters**_ are the input variables used in a function, like **dim** in the function **array()**.
# - The _**arguments**_ refer to the _values_ for those parameters that a function takes as inputs, like **c(4,3)**
# <br>
# We actually don't need to write out the name of the parameter (dim) each time, as in:
# `array(movie_vector, c(4,3))`
# As long as we write the arguments out in the correct order, R can interpret the code.
#
# <br>
# Arguments in a function may sometimes need to be of a **specific type**. For more information on each function, you can open up the help file by running the function name with a ? beforehand, as in:
# `?array`
# <p></p>
#
# </div>
# <h2 align=center>Array Indexing</h2>
# Let's look at our array again:
movie_array
# To access an element of an array, we should pass in **[row, column]** as the row and column number of that element.
# For example, here we retrieve **Whiplash** from row 1 and column 2:
movie_array[1,2] #[row, column]
# To display all the elements of the first row, we should put 1 in the row position and leave the column position empty. Be sure to keep the comma after the `1`.
movie_array[1,]
# Likewise, you can get the elements by column as below.
movie_array[,2]
# To get the dimension of the array, **dim()** should be used.
dim(movie_array)
# We can also do math on arrays. Let's create an array of the lengths of each of the nine movies used earlier.
length_vector <- c(125, 81, 118, 81, 106, 121, 95, 100, 104)
length_array <- array(length_vector, dim = c(3,3))
length_array
# Let's add 5 to the array, to account for a 5-min bathroom break:
length_array + 5
# <div class="alert alert-success alertsuccess" style="margin-top: 20px">
# **Tip**: Performing operations on objects, like adding 5 to an array, does not change the object. **To change the object, we would need to assign the new result to itself**.
# </div>
#
#
# <a id="ref3"></a>
# <h2 align=center>Using Logical Conditions to Subset Arrays</h2>
# Which movies are too long to finish in two hours? Using a logical condition, we can check which movies run longer than 120 minutes.
mask_array <- length_array > 120
mask_array
# Using this array of TRUEs and FALSEs, we can subset the array of movie names:
# +
x_vector <- c("Akira", "Toy Story", "Room", "The Wave", "Whiplash",
"Star Wars", "The Ring", "The Artist", "Jumanji")
x_array <- array(x_vector, dim = c(3,3))
x_array[mask_array]
# -
# <hr>
# <h1 align=center>What is a Matrix?</h1>
#
# Matrices are a subtype of arrays. A matrix **must** have 2 dimensions, whereas arrays are more flexible and can have 1, 2, or more dimensions.
#
# To create a matrix out of a vector, you can use **matrix()**, which takes in an argument for the vector, an argument for the number of rows, and another for the number of columns.
movie_matrix <- matrix(movie_vector, nrow = 3, ncol = 3)
movie_matrix
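# By default, **matrix()** also fills column-wise, like **array()**. If you prefer to fill row by row, you can pass the optional `byrow` argument:
matrix(movie_vector, nrow = 3, ncol = 3, byrow = TRUE) # fills across each row first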
# ### Accessing elements of a matrix
# As with arrays, you can use **[row, column]** to access elements of a matrix. To retrieve "Akira", you should use [1,1] as it lies in the first row and first column.
movie_matrix[1,1]
# To get data from a certain range, the following code can help. This takes the elements from rows 2 to 3, and from columns 1 to 2.
movie_matrix[2:3, 1:2]
# <h2>Concatenation function</h2>
#
# The concatenation function **c()** combines two vectors into one, keeping the values of both.<br>
# Let's create a new vector of upcoming movies, `upcoming_movie`, and combine it with `movie_vector` to create `new_vector`.
upcoming_movie <- c("Fast and Furious 8", "xXx: Return of <NAME>", "Suicide Squad")
new_vector <- c(movie_vector, upcoming_movie)
new_vector
# <hr>
# <a id="ref1"></a>
# <h1 align=center>Lists</h1>
#
# First, let's take a look at lists in R. A list is a sequenced collection of different R objects, such as vectors, numbers, characters, and even other lists. You can think of a list as a container of related information, well structured and easy to read. A list accepts items of different types, but a vector (or a matrix, which is a multidimensional vector) does not. To create a list, just type __list()__ with your content inside the parentheses, separated by commas. Let's try it!
movie <- list("Toy Story", 1995, c("Animation", "Adventure", "Comedy"))
# In the code above, the variable movie contains a list of 3 objects, which are a string, a numeric value, and a vector of strings. Easy, eh? Now let's print the content of the list. We just need to call its name.
movie
# A list has a sequence and each element of a list has a position in that sequence, which starts from 1. If you look at our previous example, you can see that each element has its position represented by double square brackets "**[[ ]]**".
# ### Accessing items in a list
# It is possible to retrieve only a part of a list using the _single square bracket operator_ "**[ ]**". This operator can also be used to get a single element in a specific position. Take a look at the next example:
# The index number 2 returns the second element of a list, if that element exists:
movie[2]
# Or you can select an interval of elements of a list. In our next example we are retrieving the 2nd and 3rd elements:
movie[2:3]
# It looks a little confusing, but lists can also store names for its elements.
# ### Named lists
#
# The following list is a named list:
movie <- list(name = "<NAME>",
year = 1995,
genre = c("Animation", "Adventure", "Comedy"))
# Let me explain that: the list **movie** has some named objects within it. **name**, for example, is an object of type **character**, **year** is an object of type **number**, and **genre** is a vector with objects of type **character**.
#
#
# Now take a look at this list. This time, it's full of information and well organized. It's clear what each element means. You can see that the elements have different types, and that's ok because it's a list.
movie
# You can also get separated information from the list. You can use **listName\$selectorName**. The _dollar-sign operator_ **$** will give you the block of data that is related to selectorName.
#
# Let's get the genre part of our movies list, for example.
movie$genre
# Another way of selecting the genre column:
movie["genre"]
# You can also use numerical selectors like an array. Here we are selecting elements from 2 to 3.
movie[2:3]
# The function __class()__ returns the type of a object. You can use that function to retrieve the type of specific elements of a list:
class(movie$name)
class(movie$foreign) # a non-existent element returns NULL, so its class is "NULL"
# ### Adding, modifying, and removing items
# Adding a new element is also very easy. The code below adds a new field named **age** and puts the numerical value 5 into it. In this case we use the double square brackets operator, because we are directly referencing a list member (and we want to change its content).
movie[["age"]] <- 5
movie
# In order to modify, you just need to reference a list member that already exists, then change its content.
movie[["age"]] <- 6
# Now it's 6, not 5
movie
# And removing is also easy! You just assign **_NULL_** to it, which deletes the element from the list.
movie[["age"]] <- NULL
movie
# ### Concatenating lists
#
# Concatenation is the process of putting things together, in sequence. And yes, you can do it with lists. Just call the function **_c()_**. Take a look at the next example:
# +
# We split our previous list in two sublists
movie_part1 <- list(name = "Toy Story")
movie_part2 <- list(year = 1995, genre = c("Animation", "Adventure", "Comedy"))
# Now we call the function c() to put everything together again
movie_concatenated <- c(movie_part1, movie_part2)
# Check it out
movie_concatenated
# -
# Lists are really handy for organizing different types of elements in R, and also easy to use. Additionally, lists are also important since this type of data structure is essential to create data frames, our next covered topic.
# <hr>
# <a id="dataframes"></a>
# <h1>DataFrames</h1>
# A DataFrame is a structure that is used for storing data tables. Underneath it all, a data frame is a list of vectors of the same length, exactly like a table (each vector is a column). We call the function __data.frame()__ to create a data frame and pass in vectors, which are our columns, as arguments. It is required to name the columns that will compose the data frame.
movies <- data.frame(name = c("Toy Story", "Akira", "The Breakfast Club", "The Artist",
"Modern Times", "Fight Club", "City of God", "The Untouchables"),
year = c(1995, 1998, 1985, 2011, 1936, 1999, 2002, 1987),
stringsAsFactors=F)
# Let's print its content of our recently created data frame:
movies
# It's very easy! Note how it looks like a table.
#
# We can also use the __"$"__ selector to get some type of information. This operator returns the content of a specific column of a data frame (that's why we have to choose a name for each column).
movies$name
# You retrieve data using numeric indexing, like in lists:
# This returns the first (1st) column
movies[1]
# The function __str()__ is one of the most useful functions in R. With this function you can obtain textual information about an object. In this case, it delivers information about the objects within a data frame. Let's see what it returns:
str(movies)
# It shows that this data frame has 8 observations of 2 variables, named __name__ and __year__. Since we set `stringsAsFactors = FALSE`, the "name" column is a character vector, and "year" is a numerical column.
# The class() function works for data frames as well. You can use it to determine the type of a column of a data frame.
class(movies$year)
# You can use numerical selectors to reach information inside the table.
movies[1,2] # row 1, column 2: the year of "Toy Story" (1995)
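# You can also subset a data frame with a logical condition, just as with vectors. For example, to keep only the movies released after 2000:
movies[movies$year > 2000, ] # rows where the condition is TRUE, all columns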
# The **_head()_** function is very useful when you have a large table and you need to take a peek at the first elements. This function returns the first 6 rows of a data frame (or the first 6 elements of a list).
head(movies)
# Similar to the previous function, **_tail()_** returns the last 6 values of a data frame or list.
tail(movies)
# Now let's try to add a new column to our data frame with the length of each movie in minutes.
movies['length'] <- c(81, 125, 97, 100, 87, 139, 130, 119)
movies
# A new column was included in our data frame with just one line of code. We just needed to assign a vector to the data frame, and it became our new column.
# Now let's try to add a new movie to our data set.
movies <- rbind(movies, data.frame(name="Dr. Strangelove", year=1964, length=94)) # using data.frame() here keeps the column types intact
movies
# Remember, you can't add a list with more variables than the data frame, and vice-versa.
# We don't need this movie anymore, so let's delete it. Here we are deleting row 9 (the row we just added) by assigning to `movies` the data frame without the 9th row.
movies <- movies[-9,]
movies
# To delete a column you can just set it as **_NULL_**.
movies[["length"]] <- NULL
movies
# That is it! You learned a lot about data frames and how easy it is to work with them.
# <hr>
# Let's first download the dataset that we will use in this notebook:
# code to download the dataset
download.file("https://ibm.box.com/shared/static/n5ay5qadfe7e1nnsv5s01oe1x62mq51j.csv", destfile="movies-db.csv")
# To begin, we can load our data into a data frame. Let's call it `movies_data`.
movies_data <- read.csv("movies-db.csv", header=TRUE, sep=",")
# <hr>
# <a id="control"></a>
# <center><h1>Control statements</h1></center>
# **Control statements** are ways for a programmer to control which pieces of the program are executed at certain times. The syntax of control statements is very similar to regular English, and they mirror the logical decisions that we make all the time.
#
# **Conditional statements** and **Loops** are the control statements that are able to change the execution flow. The expected execution flow is that each line and command should be executed in the order they are written. Control statements are able to change this, allowing you to skip parts of the code or to repeat blocks of code.
# <hr>
# <a id="ref2"></a>
# <center><h2>Conditional Statements</h2></center>
# We often want to check a conditional statement and then do something in response to that condition being true or false.
#
# ### If Statements
#
# **If** statements are composed of a conditional check and a block of code that is executed if the check results in **true**. For example, assume we want to check a movie's year, and print something if it is greater than 2000:
# +
movie_year = 2002
# If Movie_Year is greater than 2000...
if(movie_year > 2000){
# ...we print a message saying that it is greater than 2000.
print('Movie year is greater than 2000')
}
# -
# Notice that the code in the above block **{}** will only be executed if the check results in **true**.
# You can also add an `else` block to an `if` block -- the code in the `else` block will only be executed if the check results in **false**.
#
# **Syntax:**
#
# if (condition) {
# # do something
# } else {
# # do something else
# }
# <div class="alert alert-success alertsuccess" style="margin-top: 20px">
# **Tip**: This syntax can be spread over multiple lines for ease of creation and legibility.
# </div>
# Let's create a variable called **`movie_year`** and assign it the value 1997. Additionally, let's add an `if` statement to check whether the value stored in **`movie_year`** is greater than 2000 -- if it is, then we want to output a message saying that it is greater than 2000; if not, then we output a message saying that it is not greater than 2000.
# +
movie_year = 1997
# If Movie_Year is greater than 2000...
if(movie_year > 2000){
# ...we print a message saying that it is greater than 2000.
print('Movie year is greater than 2000')
}else{ # If the above conditions were not met (Movie_Year is not greater than 2000)...
# ...then we print a message saying that it is not greater than 2000.
print('Movie year is not greater than 2000')
}
# -
# Feel free to change **`movie_year`**'s value to other values -- you'll see that the result changes based on it!
# To create our conditional statements to be used with **`if`** and **`else`**, we have a few tools:
#
# ### Comparison operators
# When comparing two values you can use these operators:
#
# <ul>
# <li> equal: `==` </li>
# <li> not equal: `!=` </li>
# <li> greater/less than: `>` `<` </li>
# <li> greater/less than or equal: `>=` `<=` </li>
# </ul>
#
# ### Logical operators
# Sometimes you want to check more than one condition at once. For example, you might want to check whether one condition **and** another condition are both true. Logical operators allow you to combine or modify conditions.
#
# <ul>
# <li> and: `&` </li>
# <li> or: `|` </li>
# <li> not: `!` </li>
# </ul>
#
# Let's try using these operators:
# +
movie_year = 1997
# If Movie_Year is BOTH less than 2000 AND greater than 1990 -- both conditions have to be true! -- ...
if(movie_year < 2000 & movie_year > 1990 ) {
# ...then we print this message.
print('Movie year between 1990 and 2000')
}
# If Movie_Year is EITHER greater than 2010 OR less than 2000 -- any of the conditions have to be true! -- ...
if(movie_year > 2010 | movie_year < 2000 ) {
# ...then we print this message.
print('Movie year is not between 2000 and 2010')
}
# -
# <div class="alert alert-success alertsuccess" style="margin-top: 20px">
# **Tip**: All these expressions return a value in Boolean format -- this format can only hold two values: TRUE or FALSE!
# </div>
# ### Subset
#
# Sometimes, we don't want an entire dataset -- maybe in a dataset of people we want only people with age 18 and over, or in the movies dataset, maybe we want only movies that were created after a certain year. This means we want a **subset** of the dataset. In R, we can do this by utilizing the **`subset`** function.
#
# Suppose we want a subset of the **`movies_data`** data frame composed of movies from a given year forward (e.g. year 2000) if a selected variable is **recent**, or from that given year back if we select **old**. We can quite simply do that in R like this:
# +
decade = 'recent'
# If the decade given is recent...
if(decade == 'recent' ){
# Subset the dataset to include only movies after year 2000.
subset(movies_data, year >= 2000)
} else { # If not...
# Subset the dataset to include only movies before 2000.
subset(movies_data, year < 2000)
}
# -
# <hr>
# <a id="ref3"></a>
# <center><h2>Loops</h2></center>
# Sometimes, you might want to repeat a given operation many times. Maybe you don't even know how many times you want it to execute, but you have an idea like "once for every row in my dataset". Repeated execution like this is handled by **loops**. In R, there are two main loop structures, **`for`** and **`while`**.
#
# ### The `for` loop
# The `for` loop structure enables you to execute a code block once for every element in a given structure. For example, it would be like saying **execute this once for every row in my dataset**, or "execute this once for every element in this column bigger than 10". **`for`** loops are a very useful structure that make the processing of a large amount of data very simple.
#
# Let's try to use a **`for`** loop to print all the years present in the **`year`** column of the **`movies_data`** data frame. We can do that like this:
# +
# Get the data for the "year" column in the data frame.
years <- movies_data$year
# For each value in the "years" variable...
# Note that "val" here is a variable -- it assumes the value of one of the data points in "years"!
for (val in years) {
# ...print the year stored in "val".
print(val)
}
# -
# ### The `while` loop
# As you can see, the `for` loop is useful for a controlled flow of repetition. However, what if we don't know when we want to stop the loop? What if we want to keep executing a code block until a certain threshold has been reached, or maybe when a logical expression finally results in an expected fashion?
#
# The `while` loop exists as a tool for repeated execution based on a condition. The code block will keep being executed until the given logical condition returns a `FALSE` boolean value.
#
# Let's try using `while` to print the first five movie names of our dataset. It can be done like this:
# +
# Creating a start point.
iteration = 1
# We want to repeat for the first five rows -- and stop before the sixth iteration.
# While iteration is less or equal to five...
while (iteration <= 5) {
print(c("This is iteration number:",as.character(iteration)))
# ...print the "name" column of the iteration-th row.
print(movies_data[iteration,]$name)
# And then, we increase the "iteration" value -- so that we actually reach our stopping condition
# Be careful of infinite while loops!
iteration = iteration + 1
}
# -
# ### Applying Functions to Vectors
# One of the most common uses of loops is to **apply a given function to every element in a vector of elements**. Any of the loop structures can do that; however, R conveniently provides us with a much simpler way: vectorized operations.
#
# R is a very smart language when it comes to element-wise operations. For example, you can perform an operation on a whole vector by applying the function or operator directly to it. Let's try that out:
# +
# First, we create a vector...
my_list <- c(10,12,15,19,25,33)
# ...we can try adding two to all the values in that vector.
my_list + 2
# Or maybe even exponentiating them by two.
my_list ** 2
# We can also sum two vectors element-wise!
my_list + my_list
# -
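# When an operation is not already vectorized, the **sapply()** function applies a function of your choice to each element -- a convenient alternative to writing a loop:
sapply(my_list, function(x) x * 2) # doubles each element, returning a new vector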
# R makes it very simple to operate over vectors -- anything you think should work will probably work. Try to mess around with vectors and see what you find out!
#
# This is the end of the **Loops and Conditional Execution in R** notebook. Hopefully, now you know how to manipulate the flow of your code to your needs. Thank you for reading this notebook, and good luck on your studies.
# <hr>
# #### Scaling R with big data
#
# As you learn more about R, if you are interested in exploring platforms that can help you run analyses at scale, you might want to sign up for a free account on [IBM Watson Studio](http://cocl.us/dsx_rp0101en), which allows you to run analyses in R with two Spark executors for free.
#
#
# Let's first download the dataset that we will use in this notebook:
# code to download the dataset
download.file("https://ibm.box.com/shared/static/n5ay5qadfe7e1nnsv5s01oe1x62mq51j.csv", destfile="movies-db.csv")
# <hr>
# <a id='ref1'></a>
# <center><h2>What is a Function?</h2></center>
#
# A function is a re-usable block of code which performs operations specified in the function.
#
# There are two types of functions :
#
# - **Pre-defined functions**
# - **User defined functions**
# <b>Pre-defined</b> functions are those that are already defined for you, whether it's in R or within a package. For example, **`sum()`** is a pre-defined function that returns the sum of its numeric inputs.
# <b>User-defined</b> functions are custom functions created and defined by the user. For example, you can create a custom function to print **Hello World**.
# <h3><b>Pre-defined functions</b></h3>
# There are many pre-defined functions, so let's start with the simple ones.
# Using the **`mean()`** function, let's get the average of these three movie ratings:
# - **Star Wars (1977)** - rating of 8.7
# - **Jumanji** - rating of 6.9
# - **Back to the Future** - rating of 8.5
ratings <- c(8.7, 6.9, 8.5)
mean(ratings)
# We can use the **`sort()`** function to sort the movies rating in _ascending order_.
sort(ratings)
# You can also sort by _decreasing_ order, by adding in the argument **`decreasing = TRUE`**.
sort(ratings, decreasing = TRUE)
# <div class="alert alert-success alertsuccess" style="margin-top: 20px">
# <h4> [Tip] How do I learn more about the pre-defined functions in R? </h4>
# <p></p>
# We will be introducing a variety of **pre-defined functions** to you as you learn more about R. There are just too many functions, so there's no way we can teach them all in one sitting. But if you'd like to take a quick peek, here's a short reference card for some of the commonly-used pre-defined functions:
# https://cran.r-project.org/doc/contrib/Short-refcard.pdf
# </div>
# <h3>User-defined functions</h3>
# Functions are very easy to create in R:
printHelloWorld <- function(){
print("Hello World")
}
# To use it, simply run the function with **`()`** at the end:
printHelloWorld()
# But what if you want the function to provide some **output** based on some **inputs**?
add <- function(x, y) {
x + y
}
add(3, 4)
# As you can see above, you can create functions with the following syntax to take in inputs (as its arguments), then provide some output.
#
#     f <- function(<arguments>) {
#         # Do something
#         # Do something
#         return(some_output)
#     }
#
# <hr>
# <a id='ref2'></a>
# <center><h2>Explicitly returning outputs in user-defined functions</h2></center>
#
# In R, the last line in the function is automatically used as the output of the function.
#
# #### You can also explicitly tell the function to return an output.
add <- function(x, y){
return(x + y)
}
add(3, 4)
# It's good practice to use the `return()` function to explicitly tell the function to return the output.
# <hr>
# <a id='ref3'></a>
# <center><h2>Using IF/ELSE statements in functions</h2></center>
#
# The **`return()`** function is particularly useful if you have any IF statements in the function, when you want your output to be dependent on some condition:
# +
isGoodRating <- function(rating){
#This function returns "NO" if the input value is less than 7. Otherwise it returns "YES".
if(rating < 7){
return("NO") # return NO if the movie rating is less than 7
}else{
return("YES") # otherwise return YES
}
}
isGoodRating(6)
isGoodRating(9.5)
# -
# <hr>
# <a id='ref4'></a>
# <center><h2>Setting default argument values in your custom functions</h2></center>
#
# You can set a default value for arguments in your function. For example, in the **`isGoodRating()`** function, what if we wanted to create a threshold for what we consider to be a good rating?
#
# Perhaps by default, we should set the threshold to 7:
# +
isGoodRating <- function(rating, threshold = 7){
if(rating < threshold){
return("NO") # return NO if the movie rating is less than the threshold
}else{
return("YES") # otherwise return YES
}
}
isGoodRating(6)
isGoodRating(10)
# -
# Notice how we did not have to explicitly specify the second argument (threshold), but we could specify it. Let's say we have a higher standard for movie ratings, so let's bring our threshold up to 8.5:
isGoodRating(8, threshold = 8.5)
# Great! Now you know how to create default values. **Note that** if you know the order of the arguments, you do not need to write out the argument, as in:
isGoodRating(8, 8.5) #rating = 8, threshold = 8.5
# <hr>
# <a id='ref5'></a>
# <center><h2>Using functions within functions</h2></center>
#
# Using functions within functions is no big deal. In fact, you've already used the **`print()`** and **`return()`** functions. So let's try making our **`isGoodRating()`** more interesting.
# Let's create a function that can help us decide on which movie to watch, based on its rating. We should be able to provide the name of the movie, and it should return **NO** if the movie rating is below 7, and **YES** otherwise.
# First, let's read in our movies data:
my_data <- read.csv("movies-db.csv")
head(my_data)
# Next, do you remember how to check the value of the **average_rating** column if we specify a movie name?
# Here's how:
# +
# Within my_data, the row should be where the "name" column equals "Akira",
# AND the column should be "average_rating"
akira <- my_data[my_data$name == "Akira", "average_rating"]
akira
isGoodRating(akira)
# -
# Now, let's put this all together into a function that can take any **moviename** and return a **YES** or **NO** for whether or not we should watch it.
# +
watchMovie <- function(data, moviename){
rating <- data[data["name"] == moviename,"average_rating"]
return(isGoodRating(rating))
}
watchMovie(my_data, "Akira")
# -
# **Make sure you take the time to understand the function above.** Notice how the function expects two inputs: `data` and `moviename`, and so when we use the function, we must also input two arguments.
# *But what if we only want to watch really good movies? How do we set the rating threshold that we created earlier?*
# <br>
# Here's how:
watchMovie <- function(data, moviename, my_threshold){
rating <- data[data$name == moviename,"average_rating"]
return(isGoodRating(rating, threshold = my_threshold))
}
# Now our watchMovie takes three inputs: **data**, **moviename** and **my_threshold**
watchMovie(my_data, "Akira", 7)
# *What if we want to still set our default threshold to be 7?*
# <br>
# Here's how we can do it:
# +
watchMovie <- function(data, moviename, my_threshold = 7){
rating <- data[data[,1] == moviename,"average_rating"]
return(isGoodRating(rating, threshold = my_threshold))
}
watchMovie(my_data,"Akira")
# -
# As you can imagine, if we assign the output to a variable, the variable will contain **"YES"**:
a <- watchMovie(my_data, "Akira")
a
# While **watchMovie** is easier to use, it doesn't tell us what the movie rating actually is. How do we make it *print* the actual movie rating before giving us a response? To do so, we can simply add a **print** statement before the final line of the function.
#
# We can also use the built-in **`paste()`** function to concatenate a sequence of character strings together into a single string.
# +
watchMovie <- function(moviename, my_threshold = 7){
rating <- my_data[my_data[,1] == moviename,"average_rating"]
memo <- paste("The movie rating for", moviename, "is", rating)
print(memo)
return(isGoodRating(rating, threshold = my_threshold))
}
watchMovie("Akira")
# -
# Just note that the returned output is actually the resulting value of the function:
x <- watchMovie("Akira")
print(x)
# <hr>
# <a id='ref6'></a>
# <center><h2>Global and local variables</h2></center>
#
# So far, we've been creating variables within functions, but did you notice what happens to those variables outside of the function?
#
# Let's try to see what **memo** returns:
# +
watchMovie <- function(moviename, my_threshold = 7){
rating <- my_data[my_data[,1] == moviename,"average_rating"]
memo <- paste("The movie rating for", moviename, "is", rating)
print(memo)
isGoodRating(rating, threshold = my_threshold)
}
watchMovie("Akira")
# -
memo
# **We got an error:** ` object 'memo' not found`. **Why?**
#
# It's because all the variables we create in the function remain within the function. In technical terms, this is a **local variable**, meaning that the variable assignment does not persist outside the function. The `memo` variable only exists within the function.
#
# But there is a way to create **global variables** from within a function -- where you can use the global variable outside of the function. It is typically _not_ recommended that you use global variables, since it may become harder to manage your code, so this is just for your information.
#
# To create a **global variable**, we need to use this syntax:
# > **`x <<- 1`**
#
#
# Here's an example of a global variable assignment:
myFunction <- function(){
y <<- 3.14
return("Hello World")
}
myFunction()
y # assigned globally inside myFunction, so it is visible here
# <hr>
# Now, we are going to take a look at how R stores and outputs its objects. In other words, we'll find out how R can handle different kinds of data.
# <hr>
# <a id="ref1"></a>
# <center><h2>What is an Object?</h2></center>
#
# Everything that you manipulate in R, literally every entity in R, is considered an **object**. In real life, we think of an object as something that we can hold and look at. R objects are a lot like that. For example, a vector is one kind of object in R.
#
# An object in R has different kinds of properties or attributes. One of these attributes is called the **class** of the object. The **class** of an object is the data type of that object. For instance, the class of a vector can be numeric if it is composed of numeric values, or character if it is composed of string values. The various classes (data types) of objects in R are important for data science programming.
# ### Class
# <p>The most common classes (data types) of objects in R are:</p>
#
# <ul>
# <li>numeric (real numbers)</li>
# <li>character</li>
# <li>integer</li>
# <li>logical (True/False)</li>
# <li>complex</li>
# </ul>
# If you want to know the data type of your values, you can use the **class()** function on the variable. Let's create a variable from the average ratings of some movies and then find which data type it belongs to:
movie_rating <- c(8.3, 5.2, 9.3, 8.0) # create a vector from average ratings
movie_rating # print the variable
# To check what is the data type, let's use **class()**
class(movie_rating) # show the variable's data type
# As you see, the **class()** function shows that the data type of values in the vector is **numeric**.
# <div class="alert alert-success alertsuccess" style="margin-top: 20px">
# **Tip:** A vector can only contain objects of the same class. However, a list can have different data types.
# </div>
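# For instance, here is a quick sketch of that difference: combining a number and a string in a vector coerces everything to one class, while a list keeps each element's original type.
# +
c(1, "a")     # the number 1 is coerced to the character "1"
list(1, "a")  # the list keeps a numeric and a character element
# -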
# ### Numeric
#
# Decimal values are called numerics. They are the default computational data type in R. In the example below, if we assign a decimal value to the variable **average_rating**, then **average_rating** will be of numeric type.
average_rating <- 8.3 # assign a decimal value
average_rating
# Using **class** to check the data type results in **numeric**
class(average_rating)
# ### Character
#
# A character object is used to represent string values in R; strings are simply text values.
movies <-c("Toy Story", "Akira", "The Breakfast Club", "The Artist")
movies
class(movies)
# If numbers and text are combined in a vector, everything is converted to the class **character**. Let's make a vector combining movie names and their production years, then find the data type of the vector:
combined <- c("Toy Story", 1995, "Akira", 1998)
combined
class(combined)
# When you simply enter numbers into R, they will be saved as class **numeric** by default. For example in the following vector, even though the numbers are integers, they are stored as numeric type in R:
movie_length <- c(80, 110, 90, 80) # create a vector from movie length
movie_length # print the variable
class(movie_length)
# ### Integer
#
# An integer is a number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, 5 1⁄2, and √2 are not. In R, when you create a variable from such numbers, it is not stored as the integer data type by default. In order to get the integer class, we need to convert the variable from numeric to integer using the **as.integer()** function. Let's create a vector and check that its data type is numeric.
# +
age_restriction <- c(12, 10, 18, 18) # create a vector from age restriction
age_restriction # print the vector
class(age_restriction)
# -
integer_vector <- as.integer(age_restriction)
class(integer_vector)
# ### Logical
#
# The logical class contains **TRUE**/**FALSE** (Boolean) values. Let's create a vector with logical values and check its class:
logical_vector <- c(T,F,F,T,T) # creating the vector
class(logical_vector)
# A logical value is often created via a comparison between variables. In the example below, we will compare the lengths of the movies *Toy Story* and *Akira*.
length_Akira <- 125
length_ToyStory <- 81
# If we assign the result of a comparison to a variable, the variable will hold **FALSE** if the comparison is false, and **TRUE** if it is true.
x <- length_ToyStory > length_Akira # is Toy Story longer than Akira?
x
x <- length_Akira > length_ToyStory # is Akira longer than Toy Story?
x # print the logical value
# The resulting variable is of type logical
class(x) # print the class name of x
# ### Complex
#
# A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit.
z <- 8 + 6i # create a complex number
z
class(z)
# <hr>
# <a id="ref2"></a>
# <center><h2>Converting One Class to Another</h2></center>
#
# We can convert (coerce) one data type to another if we desire. For example, we can convert objects from other data types into character values with the **as.character()** function. In the following example, we convert a numeric value into a character:
year <- as.character(1995) # convert a numeric value into the character data type
year # print the value of year, now in the character data type
# As we mentioned before, in order to create an integer variable in R, we can use the **as.integer()** function. In the following example, even though the number is mathematically an integer, R saves it as the numeric data type by default. So you need to convert it to integer later if necessary.
length_ToyStory <- 81
class(length_ToyStory)
length_ToyStory <- as.integer(81)
class(length_ToyStory) # print the class name of length_ToyStory
# <hr>
# <a id="ref3"></a>
# <center><h2>Difference between Class and Mode</h2></center>
#
# For a simple vector, the class and the mode of the vector are the same thing: the data type of the values inside the vector (character, numeric, integer, etc.). However, for other objects such as matrices, arrays, data frames, and lists, class and mode mean different things.
#
# For those objects, the **class()** function shows the type of the data structure. What does that mean? The class of a matrix will be **matrix** regardless of what data types are **inside** the matrix. The same applies to lists, arrays and data frames.
#
# **Mode**, on the other hand, describes what type of data is found within the object and how those values are stored. So, you need to use the **mode()** function to find the data type of the values inside a matrix (character, numeric, integer, etc.).
#
# So, in addition to classes such as numeric, character, integer, logical, and complex, we have other classes such as matrix, array, data frame, and list.
# ### Matrix
#
# Let’s create a matrix storing the genre for each movie. Then, we will find the class and mode of the created matrix to see which information we will get from them.
#
# First, let's check the effect of class and mode on a **vector**.
# +
movies <- c("Toy Story", "Akira", "The Breakfast Club", "The Artist") # creating two vectors
genre <- c("Animation/Adventure/Comedy", "Animation/Adventure/Comedy", "Comedy/Drama", "Comedy/Drama")
class(movies)
mode(movies)
# -
# As you can see above, for a vector both the class and the mode show the data type of its values. Now let's create a matrix from these two vectors.
movies_genre <- cbind(movies, genre)
movies_genre
# Now **class()** shows that the data type is **matrix**.
class(movies_genre)
# And **mode** shows the data type of the elements of the matrix
mode(movies_genre)
# For the matrix, the __class()__ shows how the values are stored and shown in R, in this case, in a matrix. However, __mode()__ shows the data type of values in the matrix. In the above example we have made a matrix filled with __character__ values.
# ### Array
#
# A slightly more general version of the **matrix** data type is the
# **array** data type. Like a matrix, an array can still only hold one data
# type inside of it, but while a matrix always has exactly two dimensions,
# an array can have any number of dimensions. In the following, we are going to create an array from the integers 1 to 12 and then compare the class and mode of the array:
sample_array <- array(1:12, dim = c(3, 2, 2)) # create an array with dimensions 3 x 2 x 2
sample_array
class(sample_array)
mode(sample_array)
# So, the array's class is **array** and its mode is **numeric**.
# ### Data Frame
#
# Data frames are similar to arrays but they have certain advantages over arrays. Data frames allow you to associate each row and each column with a name of
# your choosing and allow **each column** of the data frame to have a **different
# data type** if you like. Let's create a data frame from the movie names, year and their length:
# +
Name <- c("Toy Story", "Akira", "The Breakfast Club", "The Artist")
Year <- c(1995, 1998, 1985, 2011)
Length <- c(81, 125, 97, 100)
RowNames <- c("Movie 1", "Movie 2", "Movie 3", "Movie 4")
sample_DataFrame <- data.frame(Name, Year, Length, row.names=RowNames)
sample_DataFrame
class(sample_DataFrame)
# -
# So, the class of the above table is "data.frame".
# ### List
#
# The final data type that we are going to go over is **list**. Lists
# are similar to vectors, but they can contain multiple data types. For example:
# +
sample_List <- list("Star Wars", 8.7, TRUE)
sample_List
class(sample_List)
mode(sample_List)
mode(sample_List[[3]])
# -
# As you can see, we have character, numeric, and logical data types in the list. The data type of the third element in the list is logical, as the **mode()** function shows us. The **mode** of the entire list is **list**; it could not show one type for all the elements, since they don't all have the same data type.
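# We can confirm this by checking the class of each element individually (list elements are extracted with double square brackets):
# +
class(sample_List[[1]]) # "character"
class(sample_List[[2]]) # "numeric"
class(sample_List[[3]]) # "logical"
# -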
# <hr>
# <a id="ref4"></a>
# <center><h2>Attributes</h2></center>
#
# Objects have one or more __attributes__ that can modify how R thinks about the object. Imagine you have a bowl of pasta and cheese. If you add spice, you change it into something with a new taste. Different spices make different dishes.
#
# Attributes are like spice. You can change any individual attribute of an object with the **attr()** function. You can also use the **attributes()** function to return a list of all of the attributes currently defined for that object.
#
# For example, in the following code, we will create a vector from the average ratings (8.3, 8.1, 7.9, 8) and costs of four movies (30, 10.4, 1, 15), and then we change the __dim__ attribute of the vector. R will now treat z as if it were a 4-by-2 matrix.
z <- c(8.3, 8.1, 7.9, 8, 30, 10.4, 1, 15)
z
attr(z, "dim") <- c(4,2)
z
# Now, we can find the class and mode of the above matrix.
class(z)
mode(z)
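# The **attributes()** function returns a list of all of the attributes currently defined for the object -- in this case, just **dim**:
attributes(z)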
# <hr>
# <a id='ref1'></a>
# <center><h2>What is debugging and error handling?</h2></center>
# *What do you get when you try to add **`"a" + 10`**? An error!*
"a" + 10
# *And what happens to your code if an error occurs? It halts!*
for(i in 1:10){
    # for every number, i, in the sequence 1 to 10:
print(i + "a")
}
# These are very simple examples, and the sources of the errors are easy to spot. But when it's embedded in a large chunk of code with many parts, it can be difficult to identify _when_, _where_, and _why_ an error has occurred. This process of identifying the source of the error and fixing it is called **debugging**.
# <hr>
# <a id='ref2'></a>
# <center><h2>Error Catching</h2></center>
# If you know an error may occur, the best way to handle it is to **`catch`** the error while it's happening, so that it does not halt the script at the error.
# #### No error:
tryCatch(10 + 10)
# #### Error:
tryCatch("a" + 10) #Error
# <h3>Error Catching with `tryCatch`:</h3>
# **`tryCatch`** first _tries_ to run the code, and if it works, it executes the code normally. **But if it results in an error**, you can define what to do instead.
# +
# If the expression raises an error, tryCatch runs the handler and prints a message instead. Overall, no error halts the script and the code continues to run successfully.
tryCatch(10 + "a",
error = function(e) print("Oops, something went wrong!") ) #No error
# +
#If error, return "10a" without an error
x <- tryCatch(10 + "a", error = function(e) return("10a") ) #No error
x
# -
tryCatch(
for(i in 1:3){
#for every number, i, in the sequence of 1,2,3:
print(i + "a")
}
, error = function(e) print("Found error.") )
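# Note that because the **`tryCatch`** above wraps the entire loop, the first error still ends the loop. If you want the remaining iterations to keep running, place the **`tryCatch`** inside the loop body instead:
# +
for(i in 1:3){
    # catch the error for each iteration separately, so the loop continues
    tryCatch(print(i + "a"),
        error = function(e) print(paste("Error at iteration", i)) )
}
# -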
# <hr>
# <a id='ref3'></a>
# <center><h2>Warning Catching</h2></center>
# Aside from **errors**, there are also **warnings**. Warnings do not halt code, but are displayed when something is perhaps not running the way a user expects.
as.integer("A") #Converting "A" into an integer warns the user that the value is converted to NA
# If needed, you can also use **`tryCatch`** to catch the warnings as they occur, without producing the warning message:
tryCatch(as.integer("A"), warning = function(e) print("Warning.") )
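# You can also supply both **`warning`** and **`error`** handlers in the same **`tryCatch`** call, along with a **`finally`** expression that runs no matter what happened:
# +
tryCatch(as.integer("A"),
    error = function(e) print("Error."),
    warning = function(w) print("Warning."),
    finally = print("Cleanup runs either way.") )
# -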
# <hr>
# ### Welcome!
#
# By the end of this notebook, you will have learned how to **import and read data** from different file types in R.
# ## Table of Contents
#
#
# <ul>
# <li><a href="#About-the-Dataset">About the Dataset</a></li>
# <li><a href="#Reading-CSV-Files">Reading CSV Files</a></li>
# <li><a href="#Reading-Excel-Files">Reading Excel Files</a></li>
# <li><a href="#Accessing-Rows-and-Columns">Accessing Rows and Columns from dataset</a></li>
# <li><a href="#Accessing-Built-in-Datasets-in-R">Accessing Built-in Datasets in R</a></li>
# </ul>
# <p></p>
# Estimated Time Needed: <strong>15 min</strong>
#
# <hr>
# <a id="ref0"></a>
# <h2 align=center>About the Dataset</h2>
# **Movies dataset**
#
# Here we have a dataset that includes one row for each movie, with several columns for each movie characteristic:
#
# - **name** - Name of the movie
# - **year** - Year the movie was released
# - **length_min** - Length of the movie (minutes)
# - **genre** - Genre of the movie
# - **average_rating** - Average rating on [IMDB](http://www.imdb.com/)
# - **cost_millions** - Movie's production cost (millions in USD)
# - **foreign** - Is the movie foreign (1) or domestic (0)?
# - **age_restriction** - Age restriction for the movie
# <br>
#
# <img src = "https://ibm.box.com/shared/static/6kr8sg0n6pc40zd1xn6hjhtvy3k7cmeq.png" width = 90% align="left">
# Let's learn how to **import and read data** from two common types of files used to store tabular data (data laid out as a table or spreadsheet):
# - **CSV files** (.csv)
# - **Excel files** (.xls or .xlsx)
#
# To begin, we'll need to **download the data**!
# <a id="ref0"></a>
# <h2 align=center>Download the Data</h2>
# We've made it easy for you to get the data, which we've hosted online. Simply run the code cell below (Shift + Enter) to download the data to your current folder.
# +
# Download datasets
# CSV file
download.file("https://ibm.box.com/shared/static/n5ay5qadfe7e1nnsv5s01oe1x62mq51j.csv",
destfile="movies-db.csv")
# XLS file
download.file("https://ibm.box.com/shared/static/nx0ohd9sq0iz3p871zg8ehc1m39ibpx6.xls",
destfile="movies-db.xls")
# -
# **If you ran the cell above, you have now downloaded the following files to your current folder:**
# > movies-db.csv
# > movies-db.xls
system("ls",intern=TRUE) # we'll cover system() in later workshops, but note that it will run shell commands for you
# <a id="ref1"></a>
# <center><h2>Reading CSV Files</h2></center>
# #### What are CSV files?
# Let's read data from a CSV file. CSV (Comma Separated Values) is one of the most common formats of structured data you will find. These files contain data in a table format, where in each row, columns are separated by a delimiter -- traditionally, a comma (hence comma-separated values).
#
# Usually, the first line in a CSV file contains the column names for the table itself. CSV files are popular because you do not need a particular program to open them.
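# Since a CSV file is just plain text, we can peek at its first few raw lines with **`readLines`** (this assumes you have already downloaded **`movies-db.csv`** by running the download cell above):
readLines("movies-db.csv", n = 3) # the header line plus the first two movie rows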
# #### Reading CSV files in R
# In the **`movies-db.csv`** file, the first line of text is the header (names of each of the columns), followed by rows of movie information.
#
# To read CSV files into R, we use the core function **`read.csv`**.
#
# `read.csv` is easy to use. All you need is the filepath to the CSV file. Let's try loading the file using the filepath to the `movies-db.csv` file we downloaded earlier:
# Load the CSV table into the my_data variable.
my_data <- read.csv("movies-db.csv")
my_data
# The data was loaded into the `my_data` variable. But instead of viewing all the data at once, we can use the `head` function to take a look at only the top six rows of our table, like so:
# Print out the first six rows of my_data
head(my_data)
# Additionally, you may want to take a look at the **structure** of your newly created table. R provides us with a function that summarizes an entire table's properties, called `str`. Let's try it out.
# Prints out the structure of your table.
str(my_data)
# When we loaded the file with the `read.csv` function, we had to only pass it one parameter -- the **path** to our desired file.
#
# If you're using Data Scientist Workbench, it is simple to find the path to your uploaded file. In the **Recent Data** section in the sidebar on the right, you can click the arrow to the left of the filename to see extra options -- one of these commands should be **Insert Path**, which automatically copies the path to your file into Jupyter Notebooks.
# -----------------
# <a id="ref2"></a>
# <center><h2>Reading Excel Files</h2></center>
# Reading XLS (Excel Spreadsheet) files is similar to reading CSV files, but there's one catch -- R does not have a native function to read them. Thankfully, R has an extremely large repository of user-created packages, called *CRAN*. From there, we can download a package that enables us to read XLS files.
#
# To download a package, we use the `install.packages` function. Once installed, you do not need to install that same library ever again, unless, of course, you uninstall it.
# Download and install the "readxl" library
install.packages("readxl") # note this may take a couple of minutes to complete as there will normally be dependencies to autoinstall as well
# Whenever you are going to use a library that is not native to R, you have to load it into the R environment after you install it. In other words, you need to install once only, but to use it, you must load it into R for every new session. To do so, use the `library` function, which loads up everything we can use in that library into R.
# Load the "readxl" library into the R environment.
library(readxl)
# Now that we have our library and its functions ready, we can move on to actually reading the file. In `readxl`, there is a function called `read_excel`, which does all the work for us. You can use it like this:
# Read data from the XLS file and assign the table to the my_excel_data variable.
my_excel_data <- read_excel("movies-db.xls")
# Since `my_excel_data` is now a dataframe in R, much like the one we created out of the CSV file, all of the native R functions can be applied to it, like `head` and `str`.
# Prints out the structure of your table.
# Tells you how many rows and columns there are, and the names and type of each column.
# This should be the very same as the other table we created, as they are the same dataset.
str(my_excel_data)
# Much like the `read.csv` function, `read_excel` takes as its main parameter the **path** to the desired file.
# <div class="alert alert-success alertsuccess">
# <b>[Tip]</b>
# A **library** is basically a collection of classes and functions that are used to perform specific operations. You can install and use libraries to add functions that are not included in base R.
# For example, the **readxl** library adds functions to read data from excel files.
# <br><br>
# It's important to know that there are many other libraries too which can be used for a variety of things. There are also plenty of other libraries to read Excel files -- readxl is just one of them.
# </div>
# -----------------
# <center><h2>Accessing Rows and Columns</h2></center>
# Whenever we use functions to read tabular data in R, the default method of structuring the data in the R environment is the Data Frame -- R's primary data structure. Data Frames are extremely versatile, and R presents us with many options to manipulate them.
#
# Suppose we want to access the "name" column of our dataset. We can directly reference the column name on our data frame to retrieve this data, like this:
# Retrieve a subset of the data frame consisting of the "name" column
my_data['name']
# Another way to do this is the `$` notation, which returns a vector:
# Retrieve the data for the "name" column in the data frame.
my_data$name
# You can also do the same thing using **double square brackets**, to get the `name` column as a vector.
my_data[["name"]]
# Similarly, any particular row of the dataset can also be accessed. For example, to get the first row of the dataset with all column values, we can use:
# Retrieve the first row of the data frame.
my_data[1,]
# The first value before the comma represents the **row** of the dataset and the second value (which is blank in the above example) represents the **column** of the dataset to be retrieved. By setting the first number as 1 we say we want data from row 1. By leaving the column blank we say we want all the columns in that row.
#
# We can specify more than one column or row by using **`c`**, the **concatenate** function. By using `c` to concatenate a list of elements, we tell R that we want these observations out of the data frame. Let's try it out.
# Retrieve the first row of the data frame, but only the "name" and "length_min" columns.
my_data[1, c("name","length_min")]
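# The same idea works for rows: concatenate the row indices you want. For example, to retrieve the first and third rows with those same two columns:
my_data[c(1, 3), c("name", "length_min")]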
# -----------------
# <a id="ref4"></a>
# <center><h2>Accessing Built-in Datasets in R</h2></center>
# R provides various built-in datasets for users to utilize for different purposes. To know which datasets are available, R provides a simple function -- `data` -- that returns all of the present datasets' names with a small description beside them. The ones in the `datasets` package are all inbuilt.
# Displays a list of the inbuilt datasets. Opens in a new "window".
data()
# As you can see, there are many different datasets already inbuilt in the R environment. Having to go through each of them to take a look at their structure and try to find out what they represent might be very tiring. Thankfully, R has documentation present for each inbuilt dataset. You can take a look at that by using the `help` function.
#
# For example, if we want to know more about the `women` dataset, we can use the following function:
# Opens up the documentation for the inbuilt "women" dataset.
help(women)
# Since the datasets listed are inbuilt, you do not need to import or load them to use them. If you reference them by their name, R already has the data frame ready.
women
# <hr>
# This notebook covers reading text files, performing various operations on strings, and saving data into various types of files, such as text files, CSV files, and Excel files.
# ## Table of Contents
#
#
# <ul>
# <li><a href="#About-the-Dataset">About the Dataset</a></li>
# <li><a href="#Reading-Text-Files">Reading Text Files</a></li>
# <li><a href="#String-Operations">String Operations</a></li>
# <li><a href="#Writing-and-Saving-to-Files">Writing and Saving to Files</a></li>
# </ul>
# <p></p>
# Estimated Time Needed: <strong>25 min</strong>
#
#
# <hr>
# <a id="ref0"></a>
# <h2 align=center>About the Dataset</h2>
# In this module, we are going to use the **The_Artist.txt** file. This file contains text data, a summary of the movie **The Artist**, and we are going to perform various operations on this data.
#
# This is what our data looks like:
# <img src = "https://ibm.box.com/shared/static/hqojozssqxupoanevcpzv4lbym7lynwa.png" width = 90% align="left">
# Let's first **download** the data into your account:
# Download the data file
download.file("https://ibm.box.com/shared/static/l8v8g8e6uzk7yj2j1qc8ypezbhzukphy.txt", destfile="The_Artist.txt")
# <hr>
# <a id="ref1"></a>
# <h2 align=center>Reading Text Files</h2>
# To read text files in R, we can use the built-in R function **readLines()**. This function takes the **file path** as its argument and reads the whole file.
#
# Let's read the **The_Artist.txt** file and see what it looks like.
my_data <- readLines("The_Artist.txt")
my_data
# <div class="alert alert-block alert-success" style="margin-top: 20px">
# **Tip:** If you got an error message here, make sure that you run the code cell above first to download the dataset.</div>
# So, we get a character vector whose elements can be accessed the same way we access array elements.
#
# Let's check the length of the **my_data** variable.
length(my_data)
# The length of the **my_data** variable is **5**, which means it contains 5 elements.
#
# Similarly, we can check the size of the file by using the **file.size()** function, which takes the **file path** as its argument and returns the number of bytes. Executing the code block below gives **1065** as the output, which is the size of the file **in bytes**.
file.size("The_Artist.txt")
# There is another function, **scan()**, which can be used to read **.txt** files. The difference between **readLines()** and **scan()** is that **readLines()** reads text files line by line, whereas **scan()** reads them word by word.
#
# **scan()** takes two arguments: the **file path** and a string expression specifying the separator used to split the words. In the example below, we pass an empty string as the separator argument.
my_data1 <- scan("The_Artist.txt", "")
my_data1
# If we check the length of the **my_data1** variable, we get the total number of words as the output.
length(my_data1)
# <hr>
# <a id="ref2"></a>
# <h2 align=center>String Operations</h2>
# There are many string operation methods in R which can be used to manipulate the data. We are going to use some basic string operations on the data that we read before.
# <h3 style="font-size:120%">nchar()</h3>
# The first function is **nchar()**, which returns the total number of characters in a given string. Let's find out how many characters there are in the first element of the **my_data** variable.
nchar(my_data[1])
# <br>
# <h3 style="font-size:120%">toupper()</h3>
# Sometimes we need the whole string to be in upper case. To do so, there is a function called **toupper()** in R, which takes a string as input and returns the whole string in upper case.
toupper(my_data[3])
# In the code block above, we convert the third element of the character vector to upper case.
# <br>
# <h3 style="font-size:120%">tolower()</h3>
# Similarly, the **tolower()** method can be used to convert a whole string into lower case. Let's convert the same string that we converted to upper case back into lower case.
tolower(my_data[3])
# We can clearly see the difference between the outputs of the last two methods.
# <br>
# <h3 style="font-size:120%">chartr()</h3>
# `What if we want to replace characters in a given string?`
# This operation can be performed in R using the **chartr()** method, which takes three arguments: the characters we want to replace in the string, the new characters, and the string on which the operation will be performed.
#
# Let's replace the **white spaces** in the first element of the **my_data** variable with the **hyphen ("-") sign**.
chartr(" ", "-", my_data[1])
# <br>
# <h3 style="font-size:120%">strsplit()</h3>
# Previously, we learned that we can read a file word by word using the **scan()** function. `But what if we want to split a given string word by word?`
#
# This can be done using the **strsplit()** method. Let's split the string at the white spaces.
character_list <- strsplit(my_data[1], " ")
word_list <- unlist(character_list)
word_list
# In the code block above, we separate the string word by word. However, **strsplit()** returns a list whose single element contains all the separated words, which is harder to read. To access each word as its own element, we use the **unlist()** method, which converts the list into a character vector.
# <br>
# <h3 style="font-size:120%">sort()</h3>
# Sorting is also possible in R. Let's use the **sort()** method to sort the elements of the **word_list** character vector in ascending order.
sorted_list <- sort(word_list)
sorted_list
# <br>
# <h3 style="font-size:120%">paste()</h3>
# Now that we have sorted all the elements of the **word_list** character vector, let's use the **paste()** function, which is used to concatenate strings. This method takes two arguments: the strings we want to concatenate and the **collapse** argument, which defines the separator between the words.
#
# Here, we are going to concatenate all words of **sorted_list** character vector into a single string.
paste(sorted_list, collapse = " ")
# <br>
# <h3 style="font-size:120%">substr()</h3>
# There is another function, **substr()**, which is used to get a subsection of a string.
#
# Let's take an example to understand it better. In the example below, we call the **substr()** method with three arguments. The first argument is the string from which we want the substring, the second is the position at which the function starts reading the string, and the third is the position at which it stops.
sub_string <- substr(my_data[1], start = 4, stop = 50)
sub_string
# So, we read the first element of the character vector from the 4th position to the 50th position, and the resulting string is stored in the **sub_string** variable.
# <br>
# <h3 style="font-size:120%">trimws()</h3>
# The substring we obtained in the code block above has white space at the beginning and end. To quickly remove it, we can use R's **trimws()** method as shown below.
trimws(sub_string)
# So, at the output, we get the string without any white space at either end.
# <br>
# <h3 style="font-size:115%">str_sub()</h3>
# To read a string from the end, we can use the **stringr** library. This library contains the **str_sub()** method, which takes the same arguments as **substr()** but can read the string from the end.
#
# In the example below, we provide a string and negative start and end positions, which indicates that we are counting from the end of the string.
library(stringr)
str_sub(my_data[1], -8, -1)
# So, we read the string from position -8 to -1, which gives **talkies.** (including the full stop) as the output.
# <hr>
# <a id="ref3"></a>
# <h2 align=center>Writing and Saving to Files</h2>
# After reading files, we can also write data into files and save them in different file formats like **.txt**, **.csv**, and **.xlsx** (for Excel files). Let's take a look at some examples.
# <h3 style="font-size:115%">Exporting as Text File</h3>
# Suppose we want to export a matrix or a string to a **.txt** file. To do so, we can use the **write()** method, which writes the data into a file and saves it to disk.
#
# Let's create a matrix and try to save it into file.
m <- matrix(c(1, 2, 3, 4, 5, 6), nrow = 2, ncol = 3)
m
write(m, file = "my_text_file.txt", ncolumns = 3, sep = " ")
# In the code block above, we provide the input data and the name (with path) of the file in which we want to store it. Since we are writing a matrix to the file, we also provide the **ncolumns** and **sep** arguments.
#
# Let's write a string from our **my_data** variable into a file named **my_text_file2.txt**.
write(my_data[1], file = "my_text_file2.txt", ncolumns = 1, sep = " ")
# So, we take the first element of the **my_data** variable and pass it to the write function, and this time we set the **ncolumns** argument to 1 because we want a single column for the string.
# <br>
# <h3 style="font-size:115%">Exporting as CSV File</h3>
# Just as we export data to text files, we can export data to **CSV** files as well. To do so, we need a data frame that holds the data. For this, we can use a built-in dataset.
#
# Let's use R's **CO2** dataset, which contains data about carbon dioxide uptake in grass plants. Let's see what the data in the **CO2** dataset looks like.
head(CO2)
# Now, let's export this data to a **CSV** file. We will use the **write.csv()** method, which takes a data frame as input and a **file** argument to specify the output filename.
write.csv(CO2, file = "my_csv.csv")
# When we execute the code block above, all the data is exported to a CSV file. However, the first column of the CSV file contains row numbers, which we do not want. To omit them, we set **row.names** to **FALSE** in **write.csv()**.
write.csv(CO2, file = "my_csv.csv", row.names = FALSE)
# Note that **write.csv()** ignores attempts to change **col.names** (with a warning); to write the data without column names, use **write.table()** with `sep = ","` and `col.names = FALSE` instead.
# <br>
# <h3 style="font-size:115%">Exporting as Excel File</h3>
# To save data into Excel files, we have to install an external library called **xlsx**, which provides easy methods for exporting data into **.xlsx** files.
#
# Let's install this library. (This may take a minute or two)
install.packages("xlsx")
library(xlsx)
write.xlsx(CO2, file = "my_excel.xlsx", row.names = FALSE)
# So, exporting data to **.xlsx** files is similar to exporting **.csv** files; only the function name differs, and we had to install an external library for this operation.
# <br>
# <h3 style="font-size:115%">Exporting as .RData File</h3>
# In R, we can also save files in **.RData** format. **.RData** format provides a way to save and load our R objects.
#
# Let's create some simple variable objects and save them into a file with the **.RData** extension.
var1 <- "var1"
var2 <- "var2"
var3 <- "var3"
# Now, to write an **.RData** file, we will use R's **save()** method. It has a **list** argument, which is a character vector of the names of all the objects we want to save (in this case three variables); a **file** argument, which contains the name of the file to which we are going to save the data; and a **safe** argument, which specifies whether the save should be performed safely via a temporary file.
save(list = c("var1", "var2", "var3"), file = "variables.RData", safe = T)
# A file named **variables.RData** is generated at the provided location.
# <hr>
# <h2 align=center>Regular Expressions (Regex)</h2>
# In this notebook, we will study some simple Regular Expression terms and apply them with R functions.
# ### Table of contents
#
# - <p><a href="#Loading-in-Data">Loading in Data</a></p>
# - <p><a href="#Regular-Expressions">Regular Expressions</a></p>
# - <p><a href="#Regular-Expression-in-R">Regular Expression in R</a></p>
# <p></p>
# <hr>
# <a id="ref9001"></a>
# # Loading in Data
# Let's load in a small list of emails to perform some data analysis and take a look at it.
email_df <- read.csv("https://ibm.box.com/shared/static/cbim8daa5vjf5rf4rlz11330lvqbu7rk.csv")
email_df
# So our simple dataset contains a list of names and a list of their corresponding emails. Let's say we want to simply count the frequency of email domains. But several problems arise before we can even attempt this. If we attempt to simply count the email column, we won't end up with what we want since every email is unique. And if we split the string at the '@', we still won't have what we want since even emails with the same domains might have different regional extensions. So how can we easily extract the necessary data in a quick and easy way?
# <a id="funyarinpa"></a>
# # Regular Expressions
# Regular Expressions are generic expressions that are used to match patterns in strings and text. A way we can exemplify this is with a simple expression that can match with an email string. But before we write it, let's look at how an email is structured:
#
# <code>$<EMAIL>$</code>
#
# So, an email is composed of a string, followed by an '@' symbol, followed by another string. In R regular expressions, we can express this as:
#
# <code>$.+@.+$</code>
#
# Where:
# * The '.' symbol matches with any character.
# * The '+' symbol repeats the previous symbol one or more times. So, '.+' will match with any string.
# * The '@' symbol only matches with the '@' character.
#
# Now, for our problem, which is extracting the domain from an email (excluding the regional URL code), we need an expression that specifically matches what we want:
#
# <code>$@.+\\.$</code>
#
# Where the <code>'\\.'</code> symbol specifically matches with the '.' character.
# <a id="imyourpoutine"></a>
# # Regular Expressions in R
# Now let's look at some R functions that work with regular expressions.
#
# The grep function below takes in a regular expression and a list of strings to search through, and returns the positions of the strings that match.
grep("@.+", c("<EMAIL>" , "not an email", "<EMAIL>"))
# Grep also has an extra parameter called 'value' that changes the output to display the strings instead of the list positions.
grep("@.+", c("<EMAIL>", "not an email", "<EMAIL>"), value=TRUE)
# The next function, 'gsub', is a substitution function. It takes in a regular expression, the string you want to swap in for the matches, and the list of strings on which to perform the swap. The code cell below updates valid emails with a new domain:
gsub("@.+", "@newdomain.com", c("<EMAIL>", "not an email", "<EMAIL>"))
# The functions below, 'regexpr' and 'regmatches', work in conjunction to extract the matches found by a regular expression specified in 'regexpr'.
matches <- regexpr("@.*", c("<EMAIL>", "not an email", "<EMAIL>"))
regmatches(c("<EMAIL>", "not an email", "<EMAIL>"), matches)
# This function is actually perfect for our problem since we simply need to extract the specific information we want. So let's use it with the Regular Expression we defined above and store the extracted strings in a new column in our dataframe.
matches <- regexpr("@.*\\.", email_df[,'Email'])
email_df[,'Domain'] = regmatches(email_df[,'Email'], matches)
# And this is the resulting dataframe:
email_df
# Now we can finally construct the frequency table for the domains in our dataframe!
table(email_df[,'Domain'])
# <hr>
# ### Excellent! You have just completed the R basics notebook!
# #### Scaling R with big data
#
# As you learn more about R, if you are interested in exploring platforms that can help you run analyses at scale, you might want to sign up for a free account on [IBM Watson Studio](http://cocl.us/dsx_rp0101en), which allows you to run analyses in R with two Spark executors for free.
#
#
# <hr>
# ### The beginning ...
# I hope you found R easy to learn! There's lots more to learn about R but you're well on your way.
# <hr>
# Copyright © [IBM Cognitive Class](https://cognitiveclass.ai). This notebook and its source code are released under the terms of the [MIT License](https://cognitiveclass.ai/mit-license/).
| notebooks/Lab-R_Combined.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
mystring = 'hello'
# +
mylist = []
for letter in mystring:
mylist.append(letter)
# -
mylist
mylist1 = [letter for letter in mystring]
mylist1
mylist = [x for x in 'wordtwo']
mylist
mylist = [num**2 for num in range(0,11)]
mylist
mylist = [x**2 for x in range(0,11) if x%2==0]
mylist
celcius = [0,10,20,34.5]
fahrenheit = [((9/5)*temp + 32) for temp in celcius]
fahrenheit
results = [x if x%2==0 else 'ODD' for x in range(0,11)]
results
mylist = []
for x in [2,4,6]:
for y in [1,10,1000]:
mylist.append(x*y)
mylist
mylist = [x*y for x in [2,4,6] for y in [1,10,1000] ]
mylist
| python notebooks by Akshit Ostwal/.ipynb_checkpoints/List comprehension-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# TensorFlow 1.x graph-mode API: placeholders receive their values at run time via feed_dict
import tensorflow as tf
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # builds a graph node; nothing is computed until sess.run
sess = tf.Session()
print(sess.run(adder_node, feed_dict = {a: 3, b: 4.5}))
print(sess.run(adder_node, feed_dict = {a: [1, 3], b: [2, 4]}))
| lab01/04 Placeholder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras
# +
housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(housing.data, housing.target)
X_train, X_valid, y_train, y_valid = train_test_split(X_train_full, y_train_full)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_valid = scaler.transform(X_valid)
X_test = scaler.transform(X_test)
# -
# Wide and deep neural net.
input_ = keras.layers.Input(shape=X_train.shape[1:])
hidden1 = keras.layers.Dense(30, activation="relu")(input_)
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.Concatenate()([input_, hidden2])
output = keras.layers.Dense(1)(concat)
model2 = keras.Model(inputs=[input_], outputs=[output])
model2.compile(loss="mean_squared_error", optimizer="Adam")
print(model2.summary())
history2 = model2.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid))
mse_test2 = model2.evaluate(X_test, y_test)
X_new = X_test[:3] # pretend these are new instances
y_pred = model2.predict(X_new)
print(y_pred)
| chapter10/wide_and_deep_example_california_housing_set.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# ### Imports
# +
# General
import os
import glob
# Data Analysis
import numpy as np
import pandas as pd
# Visualization
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# ### Variables
unit = "line"
data_dir = '../data/' + unit + '/preprocessed/'
cutoff = 7
# ### Data
# +
filenames = np.array(glob.glob(data_dir+'*.csv'))
#print(filenames)
names = filenames
names = [os.path.basename(name) for name in names]
names = [os.path.splitext(name)[0] for name in names]
names = [name.replace('lucan-','') for name in names]
print(names)
# -
dfs = [pd.read_csv(filename) for filename in filenames]
# +
# Works/lines
ovid_amores_book_lengths = [763, 812, 870]
ovid_ars_book_lengths = [772, 746, 812]
ovid_heroides_book_lengths = [116, 148, 154, 176, 150, 166, 198, 120, 168, 152, 130, 214, 160, 130, 220, 378, 268, 218, 210, 244, 250 ]
ovid_medicamina_book_lengths = [100]
ovid_remedia_book_lengths = [814]
propertius_book_lengths = [703, 1359, 986, 934]
tibullus_book_lengths = [811, 430, 688]
vergil_book_lengths = [756, 804, 718, 705, 871, 901, 817, 731, 818, 908, 915, 952]
book_lengths = [sum(ovid_amores_book_lengths),sum(ovid_ars_book_lengths),sum(ovid_heroides_book_lengths),sum(ovid_medicamina_book_lengths),sum(ovid_remedia_book_lengths),sum(propertius_book_lengths),sum(tibullus_book_lengths),sum(vergil_book_lengths)]
print(book_lengths)
# +
counts = []
for df in dfs:
temp = len(df.index)
counts.append(temp)
for i, text in enumerate(filenames):
print(text)
print(counts[i])
# +
dfs_cutoff = []
for df in dfs:
dfs_cutoff.append(df[df['SCORE'] >= cutoff])
# +
scores = []
for df in dfs_cutoff:
temp = df.groupby('SCORE').agg({'SCORE': [np.size]})
scores.append(temp)
print(scores)
# -
results = [list(x) for x in zip(names, book_lengths, dfs_cutoff, scores)]
for result in results:
result[3]['per_100'] = result[3]['SCORE']['size'].map(lambda x: (x / result[1]) * 100)
y1 = np.array(results[0][3]['per_100'])
y2 = np.array(results[1][3]['per_100'])
y3 = np.array(results[2][3]['per_100'])
y4 = np.array(results[3][3]['per_100'])
y5 = np.array(results[4][3]['per_100'])
y6 = np.array(results[5][3]['per_100'])
y7 = np.array(results[6][3]['per_100'])
y8 = np.array(results[7][3]['per_100'])
print(y1)
print(y2)
print(y3)
print(y4)
print(y5)
print(y6)
print(y7)
print(y8)
ov = np.average([y1,y2,y3,y4,y5],axis=0)
prop = y6
tib = y7
verg = y8
# +
N = 4
ind = np.arange(N) # the x locations for the groups
width = 0.2 # the width of the bars
fig = plt.figure()
fig.set_size_inches(15,6)
ax = fig.add_subplot(111)
rects1 = ax.bar(ind, ov, width, color='w', hatch='O')
rects2 = ax.bar(ind+width, prop, width, color='w', hatch='.')
rects3 = ax.bar(ind+width*2, tib, width, color='w', hatch='\\')
rects4 = ax.bar(ind+width*3, verg, width, color='w', hatch='x')
#rects5 = ax.bar(ind+width*4, y5, width, color='w', hatch='*')
#rects6 = ax.bar(ind+width*5, y6, width, color='w', hatch='o')
#rects7 = ax.bar(ind+width*6, y7, width, color='w', hatch='.')
#rects8 = ax.bar(ind+width*7, y8, width, color='w', hatch='O')
ax.set_title('Tesserae Matches (per 100 lines) for Virgil and Elegists', fontsize = 18, fontweight = 'bold')
ax.set_xlabel('Tesserae Score', fontsize = 16)
ax.set_ylabel('Matches (per 100 lines)', fontsize = 16)
ax.set_xticks(ind+width*2)
ax.tick_params(axis='x', labelsize = 14)
ax.set_xticklabels( ('7', '8', '9','10') )
ax.invert_xaxis()
auths = ['Virgil', 'Tibullus', 'Propertius', 'Ovid']
ax.legend( (rects4[0], rects3[0], rects2[0], rects1[0]), auths, loc = 2, labelspacing = .75, handlelength = 2, prop={'size':20} )
plt.show()
# -
| code/fig_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Text features are about converting text into representative numerical values. One of the simplest methods to do this is <i>word counts</i>: you take each snippet of text, count the occurrences of each word, and put the result in a tabular format!
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
sample = ['problem of evil',
'evil queen',
'horizon problem']
# The fastest way to achieve the aforementioned tabular format is to use <b>CountVectorizer</b>
vec = CountVectorizer()
X = vec.fit_transform(sample)
X
# The result is a sparse matrix recording the number of times each word appears
pd.DataFrame(X.toarray(), columns=vec.get_feature_names())
# Disadvantages of using CountVectorizer:<br>
# - raw word counts put a lot of emphasis on words that occur frequently, which may not be desirable: for example, conjunctions and prepositions occur most often but carry little meaning by themselves.
# - this can cause bias in classification algorithms
# <p>The best way to fix this is by using <b>TF-IDF: term frequency - inverse document frequency</b>, which weighs the word counts by a measure of how often they occur across all the documents, i.e. it considers the <i>overall document weightage</i> of a word. In layman's terms, it penalizes words that occur too often.
# <br>It may be noted that TF-IDF is sensitive to how documents are distributed across the corpus
vec = TfidfVectorizer()
X = vec.fit_transform(sample)
pd.DataFrame(X.toarray(), columns=vec.get_feature_names())
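# To make the IDF weighting concrete, here is a minimal sketch of the computation, assuming scikit-learn's default smoothed-IDF formula (other libraries, or non-default settings, use slightly different formulas):

```python
import math

# toy corpus mirroring the sample above, already split into words
docs = [["problem", "of", "evil"],
        ["evil", "queen"],
        ["horizon", "problem"]]
n = len(docs)

def smoothed_idf(term):
    # scikit-learn's default smoothed IDF: idf(t) = ln((1 + n) / (1 + df(t))) + 1,
    # where df(t) is the number of documents containing the term
    df = sum(term in doc for doc in docs)
    return math.log((1 + n) / (1 + df)) + 1

# 'evil' appears in 2 of 3 documents, 'queen' in only 1,
# so the rarer word 'queen' gets the larger weight
print(round(smoothed_idf("evil"), 4))   # 1.2877
print(round(smoothed_idf("queen"), 4))  # 1.6931
```

# In the fitted vectorizer above, these per-term weights are stored in its `idf_` attribute and multiplied into the term counts (followed by per-document normalization) to produce the matrix shown.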
| feature engineering/text_features.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "skip"}
# # Classifying differential equations
#
# - toc:false
# - branch: master
# - badges: true
# - comments: false
# - categories: [mathematics, numerical recipes]
# - hide: true
# + [markdown] slideshow={"slide_type": "slide"}
# # 💃 Classifying differential equations 💃
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# Questions:
# - What is a <mark>differential equation</mark>?
# - What is the difference between an ordinary (<mark>ODE</mark>) and partial (<mark>PDE</mark>) differential equation?
# - How do I classify the different types of differential equations?
# + [markdown] slideshow={"slide_type": "slide"}
# Objectives:
# - Identify the dependent and independent variables in a differential equation
# - Distinguish between an ODE and a PDE
# - Identify the <mark>order</mark> of a differential equation
# - Distinguish between <mark>linear</mark> and <mark>non-linear</mark> equations
# - Distinguish between <mark>heterogeneous</mark> and <mark>homogeneous</mark> equations
# - Identify a <mark>separable</mark> equation
# + [markdown] slideshow={"slide_type": "slide"}
# ### A differential equation is an equation that relates one or more functions and their derivatives
#
# - The functions usually represent physical quantities (e.g. position $x$)
# - The derivative represents a rate of change (e.g. speed $v$)
# - The differential equation represents the relationship between the two:
#
# \begin{equation}
# v = \frac{dx}{dt}
# \end{equation}
# + [markdown] slideshow={"slide_type": "slide"}
# ### An independent variable is... a quantity that varies independently...
#
# - An <mark>independent variable</mark> does not depend on other variables
# - A <mark> dependent variable</mark> depends on the independent variable
#
# \begin{equation}
# v = \frac{dx}{dt}
# \end{equation}
#
# - $t$ is the <mark>independent</mark> variable
# - $x$ is the <mark>dependent</mark> variable
# - Writing $x = x(t)$ makes this relationship clear.
#
# + [markdown] slideshow={"slide_type": "slide"}
# ### Differential equations can be classified in a variety of ways
#
# There are several ways to describe and classify differential equations. There are standard solution methods for each type, so it is useful to understand the classifications.
#
# 
#
# <small>Once you can cook a single piece of spaghetti, you can cook all pieces of spaghetti!</small>
# + [markdown] slideshow={"slide_type": "slide"}
# ### An ODE contains differentials with respect to only one variable
#
# For example, the following equations are ODEs:
#
# \begin{equation}
# \frac{d x}{d t} = at, \qquad \frac{d^3 x}{d t^3} + \frac{x}{t} = b
# \end{equation}
#
# As in each case the differentials are with respect to the single variable $t$.
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# ### Partial differential equations (PDE) contain differentials with respect to several independent variables.
#
# An example of a PDE is:
#
# \begin{equation}
# \frac{\partial x}{\partial t} = \frac{\partial x}{\partial y}
# \end{equation}
#
# As there is one differential with respect to $t$ and one differential with respect to $y$.
#
# Note also the difference in notation - <mark>for ODEs we use $d$ whilst for PDEs we use $\partial$</mark>.
#
# > Note: the equations in this notebook are formatted using [LaTeX](https://www.latex-project.org/).
#
# + [markdown] slideshow={"slide_type": "slide"}
# ### The order of a differential equation is the highest order of any differential contained in it.
#
# For example:
#
# $\frac{d x}{d t} = at$ is <mark>first order</mark>.
#
# $\frac{d^3 x}{d t^3} + \frac{x}{t} = b$ is <mark>third order</mark>.
#
# > Important: $\frac{d^3 x}{d t^3}$ does not equal $\left(\frac{d x}{d t}\right)^3$!
# + [markdown] slideshow={"slide_type": "slide"}
# ### Linear equations do not contain higher powers of either the dependent variable or its differentials
#
# For example:
#
# $\frac{d^3 x}{d t^3} = at$ and $\frac{\partial x}{\partial t} = \frac{\partial x}{\partial y} $ are <mark>linear</mark>.
#
# $(\frac{d x}{d t})^3 = at$ and $\frac{d^3 x}{d t^3} = x^2$ are <mark>non-linear</mark>.
#
# Non-linear equations can be particularly nasty to solve analytically, and so are often tackled numerically.
#
#
# + [markdown] slideshow={"slide_type": "slide"}
#
# ### Homogeneous equations do not contain any non-differential terms
#
# For example:
#
# $\frac{\partial x}{\partial t} = \frac{\partial x}{\partial y}$ is a <mark>homogeneous equation</mark>.
#
# $\frac{\partial x}{\partial t} - \frac{\partial x}{\partial y}=a$ is a <mark>heterogeneous equation</mark> (unless $a=0$!).
#
# + [markdown] slideshow={"slide_type": "slide"}
#
# ### Separable equations can be written as a product of two functions of different variables
#
# A separable differential equation takes the form
#
# \begin{equation}
# f(x)\frac{d x}{d t} = g(t)
# \end{equation}
#
# Separable equations are some of the easiest to solve as we can split the equation into two independent parts with fewer variables, and solve each in turn - we will see an example of this in the next lesson.
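#
# As a quick illustration (just a sketch; the full worked solution comes in the next lesson), the equation
#
# \begin{equation}
# \frac{d x}{d t} = tx
# \end{equation}
#
# is separable: dividing both sides by $x$ gives $\frac{1}{x}\frac{d x}{d t} = t$, so here $f(x) = \frac{1}{x}$ and $g(t) = t$.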
# + [markdown] slideshow={"slide_type": "skip"}
# ---
#
# Do [the quick-test](https://nu-cem.github.io/CompPhys/2021/08/02/ODE-Types-Qs.html).
#
# Back to [Modelling with Ordinary Differential Equations](https://nu-cem.github.io/CompPhys/2021/08/02/ODEs.html).
#
# ---
# + [markdown] slideshow={"slide_type": "slide"}
#
# Keypoints:
#
# - An independent variable is a quantity that varies independently
# - Differential equations can be classified in a variety of ways
# - An ODE contains differentials with respect to only one variable
# - The order is the highest order of any differential contained in it
# - Linear equations do not contain higher powers of either the dependent variable or its differentials
# - Homogeneous equations do not contain any non-differential terms
# -
| slides/2021-08-02-ODE-Types.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # POGIL 4.3 - Whitespace
#
# ## Python for Scientists
# ### Content Learning Objectives
# After completing this activity, students should be able to:
#
# - Articulate the importance of good whitespace
# - Fix poor whitespace in provided code
# ### Process Skill Goals
# *During this activity, you should make progress toward:*
#
# - Leveraging prior knowledge and experience of other students
#
| docs/pogil/notebooks/whitespace.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Kashubian - extract text from tlog
#
# > "tlog output from wav2vec2 Polish"
#
# - toc: false
# - branch: master
# - hidden: true
# - categories: [asr, kashubian]
import json
with open("/tmp/csb/kashubian-data.json", "r") as read_file:
data = json.load(read_file)
for datum in data:
file = datum['audio'].split('/')[-1].replace('.ogg', '.txt')
with open(f'/tmp/csb/{file}', 'w') as f:
text = '\n'.join([a.strip() for a in datum['text'].split('\n') if a.strip() != ''])
f.write(text)
import glob
for file in glob.glob('/tmp/csb/*.ogg.wav.tlog'):
outfile = file.replace('.ogg.wav.tlog', '.rec.txt')
with open(file, "r") as tlog:
data = json.load(tlog)
with open(outfile, "w") as rectxt:
for datum in data:
rectxt.write(f"{datum['transcript']}\n")
| _notebooks/2021-04-22-kashubian-extract-text-from-tlog.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import requests
import json
res = requests.get('http://gimmeproxy.com/api/getProxy')
res.content
jsonIP = json.loads(res.content)
jsonIP['ipPort']
testaddr = 'https://www.example.com'#'https://www.leafly.com/hybrid/100-og/reviews?page=10'
'https://' + jsonIP['ipPort']
proxyDict = {
"http" : 'http://' + jsonIP['ipPort'],
"https" : 'https://' + jsonIP['ipPort']
}
res2 = requests.get(testaddr, proxies=proxyDict)
res2  # inspect the proxied response
| leafly/test_proxies.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MNIST example
#
# Althought torchtuples is created to work with arrays/tensors in memory, one can still use it to fit a data set by reading from disk.
#
# This example is based on [this pytorch example](https://github.com/pytorch/examples/blob/master/mnist/main.py), but rewritten for torchtuples with some other minor modifications.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
from torchtuples import Model, optim, practical, data
# for reproducibility
np.random.seed(123456)
_ = torch.manual_seed(654321)
num_workers = 0
batch_size = 64
batch_size_test = 1000
epochs = 10
lr = 0.01
momentum=0.5
# ### Dataset
#
# We load the MNIST data set from torchvision as two dataloaders: one for training and one for testing.
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=True, download=True,
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])),
batch_size=batch_size, shuffle=True, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=False,
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])),
batch_size=batch_size_test, shuffle=False, num_workers=num_workers)
# We make a simple convolutional net:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5, 1)
self.conv2 = nn.Conv2d(20, 50, 5, 1)
self.fc1 = nn.Linear(4*4*50, 500)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4*4*50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x # return F.log_softmax(x, dim=1)
net = Net()
model = Model(net, nn.CrossEntropyLoss(), optim.SGD(lr, momentum))
# and fit the net with SGD. Note that we can use `model.lr_finder_dataloader` to find a reasonable learning rate, and include callbacks for early stopping etc.
log = model.fit_dataloader(train_loader, epochs=epochs, val_dataloader=test_loader, metrics={'acc': practical.accuracy_argmax})
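# The `practical.accuracy_argmax` metric used above presumably compares the argmax of the network outputs against the labels. A minimal numpy-only sketch of that computation (with made-up logits, independent of torchtuples):

```python
import numpy as np

# Hypothetical logits for 3 samples and 2 classes, plus their true labels
logits = np.array([[0.1, 2.0], [3.0, 0.2], [0.5, 0.4]])
labels = np.array([1, 0, 1])

# Accuracy = fraction of samples whose argmax matches the label
acc = (logits.argmax(axis=1) == labels).mean()
```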
# We can plot the loss and accuracy over the course of training:
_ = log.to_pandas()[['train_loss', 'val_loss']].plot()
_ = log.to_pandas()[['train_acc', 'val_acc']].plot()
# ## Scoring
#
# We can either use a method of the model to score the test set, or we can make predictions
model.score_in_batches_dataloader(test_loader)
# ### Predictions
#
# To use the predict method, we need a data loader that only passes the images and not the labels.
#
# This can be done by making a new data loader, or by using the function `torchtuples.data.dataloader_input_only`
test_input = data.dataloader_input_only(test_loader)
# We can now make predictions
preds = model.predict_dataloader(test_input)
preds = preds.argmax(1)
# and visualize them by sampling 8 images at a time
images = next(iter(test_input))
idx = np.random.choice(test_input.batch_size, 8)
sub = images[idx]
plt.figure(figsize=(10, 6))
_ = plt.imshow(torchvision.utils.make_grid(sub)[0].numpy(), cmap='gray')
preds[idx]
| examples/mnist_dataloader.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (reco_base)
# language: python
# name: reco_base
# ---
# <i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
#
# <i>Licensed under the MIT License.</i>
# # SAR Single Node on MovieLens (Python, CPU)
#
# In this example, we will walk through each step of the Smart Adaptive Recommendations (SAR) algorithm using a Python single-node implementation.
#
# SAR is a fast, scalable, adaptive algorithm for personalized recommendations based on user transaction history. It is powered by understanding the similarity between items, and recommending similar items to those a user has an existing affinity for.
# ## 1 SAR algorithm
#
# The following figure presents a high-level architecture of SAR.
#
# At a very high level, two intermediate matrices are created and used to generate a set of recommendation scores:
#
# - An item similarity matrix $S$ estimates item-item relationships.
# - An affinity matrix $A$ estimates user-item relationships.
#
# Recommendation scores are then created by computing the matrix multiplication $A\times S$.
#
# Optional steps (e.g. "time decay" and "remove seen items") are described in the details below.
#
# <img src="https://recodatasets.blob.core.windows.net/images/sar_schema.svg?sanitize=true">
#
# ### 1.1 Compute item co-occurrence and item similarity
#
# SAR defines similarity based on item-to-item co-occurrence data. Co-occurrence is defined as the number of times two items appear together for a given user. We can represent the co-occurrence of all items as a $m\times m$ matrix $C$, where $c_{i,j}$ is the number of times item $i$ occurred with item $j$, and $m$ is the total number of items.
#
# The co-occurrence matrix $C$ has the following properties:
#
# - It is symmetric, so $c_{i,j} = c_{j,i}$
# - It is nonnegative: $c_{i,j} \geq 0$
# - The occurrences are at least as large as the co-occurrences. I.e., the largest element for each row (and column) is on the main diagonal: $\forall(i,j) C_{i,i},C_{j,j} \geq C_{i,j}$.
#
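# As a sketch (not the `reco_utils` implementation), the co-occurrence matrix can be obtained from a binary user-item interaction matrix $B$ via $C = B^\top B$:

```python
import numpy as np

# Binary interaction matrix B: 3 users x 3 items (made-up data)
B = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 1]])

# Co-occurrence counts; the diagonal holds item occurrence counts
C = B.T @ B
```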
# Once we have a co-occurrence matrix, an item similarity matrix $S$ can be obtained by rescaling the co-occurrences according to a given metric. Options for the metric include `Jaccard`, `lift`, and `counts` (meaning no rescaling).
#
#
# If $c_{ii}$ and $c_{jj}$ are the $i$th and $j$th diagonal elements of $C$, the rescaling options are:
#
# - `Jaccard`: $s_{ij}=\frac{c_{ij}}{(c_{ii}+c_{jj}-c_{ij})}$
# - `lift`: $s_{ij}=\frac{c_{ij}}{(c_{ii} \times c_{jj})}$
# - `counts`: $s_{ij}=c_{ij}$
#
# In general, using `counts` as a similarity metric favours predictability, meaning that the most popular items will be recommended most of the time. `lift` by contrast favours discoverability/serendipity: an item that is less popular overall but highly favoured by a small subset of users is more likely to be recommended. `Jaccard` is a compromise between the two.
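# The rescalings above are easy to express with numpy broadcasting; a minimal sketch on a toy co-occurrence matrix (again, illustrative only, not the `reco_utils` code):

```python
import numpy as np

# Toy symmetric co-occurrence matrix for m = 3 items
C = np.array([[10., 4., 2.],
              [ 4., 8., 1.],
              [ 2., 1., 5.]])
d = np.diag(C)  # item occurrence counts c_ii

jaccard = C / (d[:, None] + d[None, :] - C)  # s_ij = c_ij / (c_ii + c_jj - c_ij)
lift    = C / (d[:, None] * d[None, :])      # s_ij = c_ij / (c_ii * c_jj)
counts  = C                                  # no rescaling
```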
#
#
# ### 1.2 Compute user affinity scores
#
# The affinity matrix in SAR captures the strength of the relationship between each individual user and the items that user has already interacted with. SAR incorporates two factors that can impact users' affinities:
#
# - It can consider information about the **type** of user-item interaction through differential weighting of different events (e.g. it may weigh events in which a user rated a particular item more heavily than events in which a user viewed the item).
# - It can consider information about **when** a user-item event occurred (e.g. it may discount the value of events that take place in the distant past).
#
# Formalizing these factors gives an expression for user-item affinity:
#
# $$a_{ij}=\sum_k w_k \left(\frac{1}{2}\right)^{\frac{t_0-t_k}{T}} $$
#
# where the affinity $a_{ij}$ for user $i$ and item $j$ is the weighted sum of all $k$ events involving user $i$ and item $j$. $w_k$ represents the weight of a particular event, and the power-of-$\frac{1}{2}$ term applies the temporal discount. This scaling factor causes the parameter $T$ to serve as a half-life: events $T$ units before $t_0$ are given half the weight of those taking place at $t_0$.
#
# Repeating this computation for all $n$ users and $m$ items results in an $n\times m$ matrix $A$. Simplifications of the above expression can be obtained by setting all the weights equal to 1 (effectively ignoring event types), or by setting the half-life parameter $T$ to infinity (ignoring transaction times).
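# A direct sketch of this half-life weighting for a single user-item pair (illustrative only; the function name and inputs are made up):

```python
import numpy as np

def affinity(event_weights, event_times, t0, half_life):
    """Sum of event weights, each discounted by (1/2)^((t0 - t_k) / T)."""
    w = np.asarray(event_weights, dtype=float)
    t = np.asarray(event_times, dtype=float)
    return float(np.sum(w * 0.5 ** ((t0 - t) / half_life)))

# An event exactly one half-life before t0 contributes half its weight
affinity([1.0], [0.0], t0=30.0, half_life=30.0)  # → 0.5
```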
#
# ### 1.3 Remove seen items
#
# Optionally, we remove items which have already been seen in the training set, i.e. we don't recommend items which the user has previously bought.
#
# ### 1.4 Top-k item calculation
#
# The personalized recommendations for a set of users can then be obtained by multiplying the affinity matrix ($A$) by the similarity matrix ($S$). The result is a recommendation score matrix, where each row corresponds to a user, each column corresponds to an item, and each entry corresponds to a user / item pair. Higher scores correspond to more strongly recommended items.
#
# It is worth noting that the complexity of the recommendation operation depends on the data size. The SAR algorithm itself has $O(n^3)$ complexity, so the single-node implementation is not designed to handle large datasets in a scalable manner. Whenever the algorithm is used, it is recommended to run it with sufficiently large memory.
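# The scoring step itself is just a matrix product followed by a per-user top-k selection; a toy numpy sketch (made-up values, not the library implementation):

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],    # affinity matrix: 2 users x 3 items
              [0.0, 3.0, 0.0]])
S = np.array([[1.0, 0.5, 0.2],    # item similarity matrix: 3 x 3
              [0.5, 1.0, 0.1],
              [0.2, 0.1, 1.0]])

R = A @ S                              # recommendation scores, one row per user
top_k = np.argsort(-R, axis=1)[:, :2]  # indices of each user's 2 best items
```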
# ## 2 SAR single-node implementation
#
# The SAR implementation illustrated in this notebook was developed in Python, primarily with Python packages like `numpy`, `pandas`, and `scipy` which are commonly used in most of the data analytics / machine learning tasks. Details of the implementation can be found in [Recommenders/reco_utils/recommender/sar/sar_singlenode.py](../../reco_utils/recommender/sar/sar_singlenode.py).
# ## 3 SAR single-node based movie recommender
# +
# set the environment path to find Recommenders
import sys
sys.path.append("../../")
import itertools
import logging
import os
import numpy as np
import pandas as pd
import papermill as pm
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_random_split
from reco_utils.evaluation.python_evaluation import map_at_k, ndcg_at_k, precision_at_k, recall_at_k
from reco_utils.recommender.sar.sar_singlenode import SARSingleNode
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
# + tags=["parameters"]
# top k items to recommend
TOP_K = 10
# Select MovieLens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
# -
# ### 3.1 Load Data
#
# SAR is intended to be used on interactions with the following schema:
# `<User ID>, <Item ID>, <Time>`.
#
# Each row represents a single interaction between a user and an item. These interactions might be different types of events on an e-commerce website, such as a user clicking to view an item, adding it to a shopping basket, following a recommendation link, and so on.
#
# The MovieLens dataset consists of well-formatted interactions in which users provide ratings for movies (the movie ratings are used as the event weight) - we will use it for the rest of the example.
# +
data = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=['UserId', 'MovieId', 'Rating', 'Timestamp'],
title_col='Title'
)
# Convert the float precision to 32-bit in order to reduce memory consumption
data.loc[:, 'Rating'] = data['Rating'].astype(np.float32)
data.head()
# -
# ### 3.2 Split the data using the python random splitter provided in utilities:
#
# We utilize the provided `python_random_split` function to split into `train` and `test` datasets randomly at a 75/25 ratio.
train, test = python_random_split(data, 0.75)
header = {
"col_user": "UserId",
"col_item": "MovieId",
"col_rating": "Rating",
"col_timestamp": "Timestamp",
"col_prediction": "Prediction",
}
# In this case, for illustration purposes, the following parameter values are used:
#
# |Parameter|Value|Description|
# |---------|---------|-------------|
# |`similarity_type`|`jaccard`|Method used to calculate item similarity.|
# |`time_decay_coefficient`|30|Period in days (the $T$ term shown in the formula of Section 1.2)|
# |`time_now`|`None`|Time decay reference.|
# |`timedecay_formula`|`True`|Whether time decay formula is used.|
# +
# set log level to DEBUG
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s %(levelname)-8s %(message)s')
model = SARSingleNode(
similarity_type="jaccard",
time_decay_coefficient=30,
time_now=None,
timedecay_formula=True,
**header
)
# -
model.fit(train)
top_k = model.recommend_k_items(test, remove_seen=True)
# The final output of the `recommend_k_items` method is a set of recommendation scores for each user-item pair, shown as follows.
top_k_with_titles = (top_k.join(data[['MovieId', 'Title']].drop_duplicates().set_index('MovieId'),
on='MovieId',
how='inner').sort_values(by=['UserId', 'Prediction'], ascending=False))
display(top_k_with_titles.head(10))
# ### 3.3 Evaluate the results
#
# Note that the recommendation scores generated by multiplying the item similarity matrix $S$ and the user affinity matrix $A$ **do not** have the same scale as the original explicit ratings in the MovieLens dataset. That is to say, the SAR algorithm is meant for the task of *recommending relevant items to users* rather than *predicting explicit ratings for user-item pairs*.
#
# To this end, ranking metrics like precision@k, recall@k, etc., are more applicable for evaluating the SAR algorithm. The following illustrates how to evaluate the SAR model using the evaluation functions provided in `reco_utils`.
# +
# all ranking metrics have the same arguments
args = [test, top_k]
kwargs = dict(col_user='UserId',
col_item='MovieId',
col_rating='Rating',
col_prediction='Prediction',
relevancy_method='top_k',
k=TOP_K)
eval_map = map_at_k(*args, **kwargs)
eval_ndcg = ndcg_at_k(*args, **kwargs)
eval_precision = precision_at_k(*args, **kwargs)
eval_recall = recall_at_k(*args, **kwargs)
# -
print(f"Model:\t\t {model.model_str}",
f"Top K:\t\t {TOP_K}",
f"MAP:\t\t {eval_map:f}",
f"NDCG:\t\t {eval_ndcg:f}",
f"Precision@K:\t {eval_precision:f}",
f"Recall@K:\t {eval_recall:f}", sep='\n')
# ## References
# Note that SAR is a combinational algorithm that implements different industry heuristics. The following references may be helpful in understanding the SAR logic and implementation.
#
# 1. <NAME>, *et al*, "Item-based collaborative filtering recommendation algorithms", WWW, 2001.
# 2. Scipy (sparse matrix), url: https://docs.scipy.org/doc/scipy/reference/sparse.html
# 3. <NAME> and <NAME>, "A survey of accuracy evaluation metrics of recommendation tasks", The Journal of Machine Learning Research, vol. 10, pp 2935-2962, 2009.
| notebooks/02_model/sar_deep_dive.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="UE4eky2QYcXB"
# If you are interested in gradient boosting, here is a good place to start: https://xgboost.readthedocs.io/en/latest/tutorials/model.html
#
# This is a supervised machine learning method.
# + [markdown] id="O9I3TrXYB0RE"
# # Predicting PorPerm - Perm
# + id="fg_LmZjejXi_" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1609211836119, "user_tz": 420, "elapsed": 23299, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="47b1601f-c160-4114-e554-579cc4c323ec"
# !pip install xgboost --upgrade
# + id="qC2ECegCYcXD" executionInfo={"status": "ok", "timestamp": 1609211857934, "user_tz": 420, "elapsed": 2058, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}}
# If you have installation questions, please reach out
import pandas as pd # data storage
import xgboost # gradient boosting
import numpy as np # math and stuff
import seaborn as sns
import scipy.stats as stats
import xgboost as xgb
import sklearn
from sklearn.preprocessing import MinMaxScaler, RobustScaler
from sklearn.model_selection import cross_val_score, KFold, train_test_split
from sklearn.utils.class_weight import compute_sample_weight
from sklearn.metrics import accuracy_score, max_error, mean_squared_error
from sklearn.model_selection import GridSearchCV
import matplotlib.pyplot as plt # plotting utility
# + id="WNiabSVfYjTE" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1609211878412, "user_tz": 420, "elapsed": 19316, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="3f54023c-7a43-44ae-8c14-409db7901568"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="eXoJIAiwSi5k" executionInfo={"status": "ok", "timestamp": 1609211889788, "user_tz": 420, "elapsed": 1069, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="abc19088-f537-4db9-a27e-7fb0f184e21d"
# ls
# + id="Hk1AsPnSYcXQ" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1609211892889, "user_tz": 420, "elapsed": 2489, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="596d0e77-6906-4343-b8bd-8db5936ca07f"
df = pd.read_csv('drive/My Drive/1_lewis_research/core_to_wl_merge/Merged_dataset_inner_imputed_12_21_2020.csv')
# + id="Ws9xTzdwYzgX" colab={"base_uri": "https://localhost:8080/", "height": 374} executionInfo={"status": "error", "timestamp": 1609212024777, "user_tz": 420, "elapsed": 1046, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="b4c28d9d-9c0a-45d7-b963-6693fc599f3f"
df = df.drop(['Unnamed: 0', 'Unnamed: 0.1', 'LiveTime2','ScanTime2', 'LiveTime1','ScanTime1',
'ref_num', 'API', 'well_name', 'sample_num' ], axis=1)
print(df.columns.values) # printing all column names
df.describe()
# + id="dzM1QmpLdv3w" executionInfo={"status": "ok", "timestamp": 1609212028473, "user_tz": 420, "elapsed": 1107, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}}
df = df[df.Si >= 0]
# + id="W2WQf52jKE89"
# df = df[df.USGS_ID != 'E997'] # removing E997
# + colab={"base_uri": "https://localhost:8080/"} id="3rG92Ml2KNIn" executionInfo={"status": "ok", "timestamp": 1609212033770, "user_tz": 420, "elapsed": 1004, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="1ba1e019-7686-4b44-b79c-9164c6923550"
df.USGS_ID.unique()
# + colab={"base_uri": "https://localhost:8080/", "height": 334} id="_OpTnvOr9rmf" executionInfo={"status": "ok", "timestamp": 1609212035068, "user_tz": 420, "elapsed": 646, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="b146dfe4-767a-4db4-c279-61e69cf5ffe1"
df.describe()
# + [markdown] id="rKN-0n34YcXP"
# ## Loading in dataset
# + id="91nAGubNYcYo" executionInfo={"status": "ok", "timestamp": 1609212044471, "user_tz": 420, "elapsed": 1091, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}}
dataset = df[[
'depth_ft', 'CAL', 'GR', 'DT', 'SP', 'DENS', 'PE',
'RESD', 'PHIN', 'PHID',
'GR_smooth',
'PE_smooth',
'Si'
]]
# + [markdown] id="T52yBCFGYcYt"
# In the next code block, we will remove the rows without data, and change string NaN's to np.nans
# + id="tUO4fhDeYcYu" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1609212048888, "user_tz": 420, "elapsed": 537, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="314668b3-69a3-464e-d307-2ee3b9aef311"
dataset.replace('NaN',np.nan, regex=True, inplace=True)#
#dataset = dataset.dropna()
np.shape(dataset)
# + id="HhYFK3K6YcYy" colab={"base_uri": "https://localhost:8080/", "height": 142} executionInfo={"status": "ok", "timestamp": 1609212050773, "user_tz": 420, "elapsed": 1346, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="24c97687-34c0-4918-8296-3401f04dff77"
dataset.head(3)
# + id="MxCYJ2GVYcZA" executionInfo={"status": "ok", "timestamp": 1609212056905, "user_tz": 420, "elapsed": 688, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}}
X = dataset[['depth_ft', 'CAL', 'GR', 'DT', 'SP', 'DENS', 'PE',
'RESD', 'PHIN', 'PHID',
'GR_smooth',
'PE_smooth']]
Y = dataset[['Si']]
Y_array = np.array(Y.values)
# + [markdown] id="rfNwgw_MYcZJ"
# ## Starting to set up the ML model params
# + id="q_Zq4vu_YcZK" executionInfo={"status": "ok", "timestamp": 1609212061297, "user_tz": 420, "elapsed": 1058, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}}
seed = 7 # random seed is only used if you want to compare exact answers with friends
test_size = 0.25 # how much data you want to withhold; 0.15 - 0.3 is a good starting point
X_train, X_test, y_train, y_test = train_test_split(X.values, Y_array, test_size=test_size)
# + [markdown] id="-ySy_-2TYcZO"
# ### Let's try some hyperparameter tuning (this takes forever!)
# + [markdown] id="aU6jtQCFYcZO"
# Hyperparameter testing does a grid search to find the best parameters, out of the parameters below. This turned out to be really slow on my laptop. Please skip this!
# + id="R8i9doQmYcZP" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1609212065784, "user_tz": 420, "elapsed": 1314, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="6636d3a5-1509-4df9-f525-8a0aecb3406b"
xg_reg = xgb.XGBRegressor(objective ='reg:squarederror',
colsample_bytree = 0.9,
learning_rate = 0.1,
max_depth = 5,
n_estimators = 100)
xg_reg.fit(X_train,y_train)
preds = xg_reg.predict(X_test)
rmse = mean_squared_error(y_test, preds, squared=False)
print("Root Mean Squared Error: %f" % (rmse))
max_err = max_error(y_test, preds)  # renamed to avoid shadowing the builtin max
print("Max Error: %f" % (max_err))
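# Since `squared=False` makes `mean_squared_error` return the *root* mean squared error, the metric printed above can be sanity-checked by hand; a minimal numpy-only sketch with made-up values:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error computed directly from the residuals."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

rmse([3.0, 5.0, 2.0], [2.5, 5.0, 4.0])
```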
# + id="trJgcHlqcIF6" executionInfo={"status": "ok", "timestamp": 1609213849995, "user_tz": 420, "elapsed": 1104, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}}
parameters = {
'max_depth': range (3, 6, 1),
'n_estimators': range(30, 80, 5),
'colsample_bytree': [ 0.8, 0.9, 1],
'learning_rate': [0.3, 0.2, 0.1],
'max_delta_step': [0, 1],
'reg_alpha' : [0, 1]
}
estimator = xgb.XGBRegressor(tree_method='gpu_hist', gpu_id=0, objective ='reg:squarederror')
grid_search = GridSearchCV(
estimator=estimator,
param_grid=parameters,
n_jobs = 8,
cv = 5,
verbose = True
)
# + id="aQKJ_xDyYcZY" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1609214263633, "user_tz": 420, "elapsed": 413053, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="ae08fcde-516a-4529-a149-ff5db37e4eac"
grid_search.fit(X_train, y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="nW2WknL-yVAX" executionInfo={"status": "ok", "timestamp": 1609214455460, "user_tz": 420, "elapsed": 532, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="095fa5cd-facf-4749-d4ac-99a04146fd2c"
grid_search.best_estimator_
# + [markdown] id="_olH3GBuYcZf"
# Now plug the best hyperparameters into the training model.
# + id="F_AVSe-pYcZg" executionInfo={"status": "ok", "timestamp": 1609214298914, "user_tz": 420, "elapsed": 1097, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}}
model1 = xgb.XGBRegressor(n_estimators=grid_search.best_estimator_.n_estimators,
max_depth = grid_search.best_estimator_.max_depth,
learning_rate=grid_search.best_estimator_.learning_rate,
colsample_bytree=grid_search.best_estimator_.colsample_bytree,
max_delta_step= grid_search.best_estimator_.max_delta_step,
reg_alpha = grid_search.best_estimator_.reg_alpha)
model1.fit(X_train, y_train)
preds = model1.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="-PAOMsU2N27X" executionInfo={"status": "ok", "timestamp": 1609214300394, "user_tz": 420, "elapsed": 880, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="7f42ea43-0bd6-422d-87a4-ef49416711ad"
rmse2 = mean_squared_error(y_test, preds, squared=False)
print("Root Mean Squared Error: %f" % (rmse2))
max1 = max_error(y_test, preds)
print("Max Error: %f" % (max1))
# + colab={"base_uri": "https://localhost:8080/", "height": 391} id="UZ92HZ6wJ3TO" executionInfo={"status": "ok", "timestamp": 1609214346338, "user_tz": 420, "elapsed": 1411, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="534a32c2-90d2-4639-8b65-226b30ccf78f"
plt.figure(figsize=(12,6))
plt.hist(preds, alpha=0.3, bins = 15, color='blue' , label='preds')
plt.hist(y_test, alpha=0.3, bins = 15, color='green', label='y_test')
plt.hist(y_train, alpha=0.3, bins = 15, color='black', label='y_train')
plt.legend()
plt.xlim((0,50))
# + colab={"base_uri": "https://localhost:8080/"} id="KYyR6O7IulOb" executionInfo={"status": "ok", "timestamp": 1609214316900, "user_tz": 420, "elapsed": 683, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="f2e4213e-3c64-4e7e-e624-5b6941cb0c4e"
print('y_test:', np.median(y_test.flatten()))
print('pred:', np.median(preds.flatten()))
print('y_train:', np.median(y_train.flatten()))
# + colab={"base_uri": "https://localhost:8080/", "height": 386} id="P1gS8OiwPf69" executionInfo={"status": "ok", "timestamp": 1609214354495, "user_tz": 420, "elapsed": 2130, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="55038221-4e21-40e2-b034-000480ee68ff"
sns.displot([y_train.flatten(),
preds.flatten(),
y_test.flatten()], kind="kde")
# + colab={"base_uri": "https://localhost:8080/", "height": 351} id="4sNv4HnBr80H" executionInfo={"status": "ok", "timestamp": 1609214394491, "user_tz": 420, "elapsed": 1111, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="2a45336f-6793-40c0-81d7-291ace0d9fc4"
error = preds.flatten() - y_test.flatten()
plt.figure(figsize=(6,5))
plt.hist(error, bins=13)
plt.xlabel('Si')
plt.xlim((-10,10))
# + id="6SBUXVdPm0g-" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1609214396862, "user_tz": 420, "elapsed": 1067, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="7fc7d17b-ae3a-4fa9-cee8-491c720e3c11"
model1.feature_importances_
# + id="PAX4Se0cqCsh" colab={"base_uri": "https://localhost:8080/", "height": 296} executionInfo={"status": "ok", "timestamp": 1609214398258, "user_tz": 420, "elapsed": 625, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04445590536399793096"}} outputId="47075aef-c7d9-4a1d-ad6e-416aca102f8d"
sorted_idx = model1.feature_importances_.argsort()
plt.barh(X.columns[sorted_idx], model1.feature_importances_[sorted_idx])
plt.xlabel("Xgboost Feature Importance")
# + id="ZbTEzL3BpwyC"
| xgb/old_notebooks/XGB_Si_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # PA005: High Value Customer Identification ( Insiders )
# ## 0.0. Solution Planning (IOT: Input, Output, Tasks)
# ### Input
# 1. Business Problem
#     - Select the most valuable customers to join a loyalty program.
#
#
# 2. Dataset
#     - Sales from an online e-commerce site over a one-year period.
# ### Output
# 1. The list of people who will be part of the Insiders program
#     - List: client_id | is_insider
#       1023 | yes/1
#
#
# 2. A report answering the business questions
#     - Who are the people eligible to take part in the Insiders program?
#     - How many customers will be part of the group?
#     - What are the main characteristics of these customers?
#     - What percentage of revenue comes from the Insiders?
#     - What revenue can be expected from this group in the coming months?
#     - What are the conditions for a person to be eligible for the Insiders?
#     - What are the conditions for a person to be removed from the Insiders?
#     - What guarantees that the Insiders program performs better than the rest of the base?
#     - What actions can the marketing team take to increase revenue?
# ### Tasks
# 1. Who are the people eligible to take part in the Insiders program?
#     - What does "eligible" mean? Who are the highest-"value" customers?
#     - Revenue:
#         - High average ticket - how much the customer spends, on average, per purchase;
#         - High LTV - how much the customer has spent over their lifetime with the company;
#         - Low recency - time between purchases;
#         - High basket size;
#         - Low churn probability - whether the person has stopped buying;
#         - High predicted LTV;
#         - High purchase propensity.
#
#     - Cost:
#         - Low return rate;
#
#     - Purchase experience:
#         - High average review score.
#
#
# 2. How many customers will be part of the group?
#     - Total number of customers;
#     - % of the Insiders group.
#
#
# 3. What are the main characteristics of these customers?
#     - Customer attributes:
#         - Age;
#         - Location.
#
#     - Consumption attributes:
#         - Clustering attributes.
#
#
# 4. What percentage of revenue comes from the Insiders?
#     - Total revenue for the year;
#     - Total revenue of the group.
#
#
# 5. What revenue can be expected from this group in the coming months?
#     - LTV of the Insiders group;
#     - Cohort analysis - tracking a group of people over time.
#
#
# 6. What are the conditions for a person to be eligible for the Insiders?
#     - Define the review period (1 month, 3 months);
#     - The person needs to be similar to someone in the group.
#
#
# 7. What are the conditions for a person to be removed from the Insiders?
#     - Define the review period (1 month, 3 months);
#     - The person needs to be dissimilar to the people in the group.
#
#
# 8. What guarantees that the Insiders program performs better than the rest of the base?
#     - A/B test;
#     - Bayesian A/B test;
#     - Hypothesis testing.
#
#
# 9. What actions can the marketing team take to increase revenue?
#     - Discounts;
#     - Purchase preference;
#     - Free shipping;
#     - Company visits.
# # 0.0. Imports
# +
import re

import numpy as np
import pandas as pd
import seaborn as sns
import umap.umap_ as umap
from matplotlib import pyplot as plt
from IPython.display import HTML
from sklearn import cluster as c
from sklearn import metrics as m
from sklearn import preprocessing as pp
from plotly import express as px
from yellowbrick.cluster import KElbowVisualizer, SilhouetteVisualizer
# -
# ## 0.1. Helper Functions
# +
def jupyter_settings():
# %matplotlib inline
# %pylab inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = [24, 9]
plt.rcParams['font.size'] = 24
display(HTML('<style>.container{width:100% !important;}</style>'))
pd.options.display.max_columns = None
pd.options.display.max_rows = None
pd.set_option('display.expand_frame_repr', False)
sns.set()
jupyter_settings()
# -
# ## 0.2. Load dataset
# +
# read data
url='https://drive.google.com/file/d/1P2xz5nr3c8lPJ2esMb-N0uQs0TiTAeNW/view?usp=sharing'
url_='https://drive.google.com/uc?id=' + url.split('/')[-2]
df_raw = pd.read_csv(url_, encoding='windows-1252')
# drop extra column
df_raw = df_raw.drop(columns=['Unnamed: 8'], axis=1)
# -
df_raw.columns
df_raw.head()
# # 1.0. Data Description
df1 = df_raw.copy()
# ## 1.1. Rename Columns
df1.columns
cols_new = ['invoice_no', 'stock_code', 'description', 'quantity', 'invoice_date', 'unit_price', 'customer_id', 'country']
df1.columns = cols_new
df1.sample()
# ## 1.2. Data Dimensions
print('Number of rows: {}'.format(df1.shape[0]))
print('Number of columns: {}'.format(df1.shape[1]))
# ## 1.3. Data Types
df1.dtypes
df1.head()
# ## 1.4. Check NA
df1.isna().sum()
# ## 1.5. Remove NA
# remove NA
df1 = df1.dropna(subset=['description', 'customer_id'])
print('Removed Data: {:.2f}%'.format(100 * (1 - (df1.shape[0]/df_raw.shape[0]))))
df1.isna().sum()
# ## 1.6. Change Dtypes
df1.dtypes
# +
# invoice date
df1['invoice_date'] = pd.to_datetime(df1['invoice_date'], format='%d-%b-%y')
# customer id
df1['customer_id'] = df1['customer_id'].astype(int)
# -
df1.dtypes
# ## 1.7. Descriptive Statistics
num_attributes = df1.select_dtypes(include=['int64', 'float64'])
cat_attributes = df1.select_dtypes(exclude=['int64', 'float64', 'datetime64[ns]'])
# ### 1.7.1. Numerical Attributes
# +
# Central Tendency - Mean, Median
ct1 = pd.DataFrame(num_attributes.apply(np.mean)).T
ct2 = pd.DataFrame(num_attributes.apply(np.median)).T
# Dispersion - Standard Deviation, Minimum, Maximum, Range, Skew, Kurtosis
d1 = pd.DataFrame(num_attributes.apply(np.std)).T
d2 = pd.DataFrame(num_attributes.apply(np.min)).T
d3 = pd.DataFrame(num_attributes.apply(np.max)).T
d4 = pd.DataFrame(num_attributes.apply(lambda x: x.max() - x.min())).T
d5 = pd.DataFrame(num_attributes.apply(lambda x: x.skew())).T
d6 = pd.DataFrame(num_attributes.apply(lambda x: x.kurtosis())).T
# Concatenate
m = pd.concat([d2, d3, d4, ct1, ct2, d1, d5, d6]).T.reset_index()
m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis']
m
# -
# #### 1.7.1.1. Numerical Attributes - Investigating
# 1. Negative quantity (may indicate returns)
# 2. Unit price equal to zero (possibly a promotion?)
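# The two checks above can be sketched on a few toy records (hypothetical values, not the real dataset); the keys mirror the renamed columns:

```python
# Toy invoice records (made-up values, for illustration only).
rows = [
    {"invoice_no": "536365",  "quantity": 6,  "unit_price": 2.55},
    {"invoice_no": "C536379", "quantity": -1, "unit_price": 27.50},  # negative quantity: likely a return
    {"invoice_no": "536370",  "quantity": 12, "unit_price": 0.0},    # zero price: possibly a promotion
]

# Split out the suspicious rows for inspection.
returns = [r for r in rows if r["quantity"] < 0]
free_items = [r for r in rows if r["unit_price"] == 0.0]

print(len(returns), len(free_items))  # → 1 1
```

# The same split is done with pandas masks on the real data further below.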
# ### 1.7.2. Categorical Attributes
cat_attributes.head()
# #### Invoice No
# +
# Problem: some invoice numbers contain letters as well as digits
#df1['invoice_no'].astype(int)
# identification:
df_letter_invoices = df1.loc[df1['invoice_no'].apply(lambda x: bool(re.search('[^0-9]+', x))), :]
df_letter_invoices.head()
# +
# Checking whether all invoices containing letters have negative quantities
print('Total Number of invoices: {}'.format(len(df_letter_invoices)))
print('Total Number of negative quantity: {}'.format(len(df_letter_invoices[df_letter_invoices['quantity'] < 0])))
# Assumption: letters indicate returns
# -
# #### Stock Code
df1.head()
# +
# stock_code values that consist of letters only
df1.loc[df1['stock_code'].apply(lambda x: bool(re.search('^[a-zA-Z]+$', x))), 'stock_code'].unique()
# Action:
## 1. Remove stock_code in ['POST', 'D', 'M', 'PADS', 'DOT', 'CRUK']
# -
# #### Description
# +
df1.head()
# Action:
## 2. Delete description
# -
# #### Country
df1['country'].unique()
df1['country'].value_counts(normalize=True).head()
df1[['customer_id', 'country']].drop_duplicates().groupby('country').count().reset_index().sort_values('customer_id', ascending=False).head()
# # 2.0. Variable Filtering
df2 = df1.copy()
# +
# unit_price
df2 = df2.loc[df2['unit_price'] >= 0.04, :]
# stock_code
df2 = df2[~df2['stock_code'].isin(['POST', 'D', 'M', 'PADS', 'DOT', 'CRUK'])]
# description
df2 = df2.drop(columns='description', axis=1)
# country (note: 'European Comunity' is spelled this way in the raw data)
df2 = df2[~df2['country'].isin(['European Comunity', 'Unspecified'])]
df2_returns = df2.loc[df2['quantity'] < 0, :].copy()
df2_purchases = df2.loc[df2['quantity'] >= 0, :].copy()  # .copy() avoids SettingWithCopyWarning when columns are added later
# -
# # 3.0. Feature Engineering
df3 = df2.copy()
# ## 3.1. Feature Creation
# data reference
df_ref = df3.drop(['invoice_no', 'stock_code', 'quantity', 'invoice_date', 'unit_price', 'country'], axis=1).drop_duplicates(ignore_index=True)
# +
# Gross Revenue = Quantity * Unit Price
df2_purchases.loc[:, 'gross_revenue'] = df2_purchases.loc[:, 'quantity'] * df2_purchases.loc[:, 'unit_price']
# Monetary
df_monetary = df2_purchases.loc[:, ['customer_id', 'gross_revenue']].groupby('customer_id').sum().reset_index()
df_ref = pd.merge(df_ref, df_monetary, on='customer_id', how='left')
df_ref.isna().sum()
# +
# Recency - Last Day purchase
df_recency = df2_purchases.loc[:, ['customer_id', 'invoice_date']].groupby('customer_id').max().reset_index()
df_recency['recency_days'] = (df2_purchases['invoice_date'].max() - df_recency['invoice_date']).dt.days
df_recency = df_recency[['customer_id', 'recency_days']].copy()
df_ref = pd.merge(df_ref, df_recency, on='customer_id', how='left')
df_ref.isna().sum()
# +
# Frequency
df_frequency = df2_purchases.loc[:, ['customer_id', 'invoice_no']].drop_duplicates().groupby('customer_id').count().reset_index()
df_ref = pd.merge(df_ref, df_frequency, on='customer_id', how='left')
df_ref.isna().sum()
# +
# Avg Ticket
avg_ticket = df2_purchases.loc[:, ['customer_id', 'gross_revenue']].groupby('customer_id').mean().reset_index().rename(columns={'gross_revenue':'avg_ticket'})
df_ref = pd.merge(df_ref, avg_ticket, on='customer_id', how='left')
df_ref.isna().sum()
# -
df_ref.head()
# # 4.0. EDA - Exploratory Data Analysis
df4 = df_ref.dropna()
df4.isna().sum()
# # 5.0. Data Preparation
df5 = df4.copy()
df5.head()
# +
## Standard Scaler
ss = pp.StandardScaler()
df5['gross_revenue'] = ss.fit_transform(df5[['gross_revenue']])
df5['recency_days'] = ss.fit_transform(df5[['recency_days']])
df5['invoice_no'] = ss.fit_transform(df5[['invoice_no']])
df5['avg_ticket'] = ss.fit_transform(df5[['avg_ticket']])
# -
# # 6.0. Feature Selection
df6 = df5.copy()
# + [markdown] tags=[]
# # 7.0. Hyperparameter Fine-Tuning
# -
X = df6.drop(columns = ['customer_id'])
clusters = [2, 3, 4, 5, 6, 7]
# + tags=[]
kmeans = KElbowVisualizer(c.KMeans(), k=clusters, timings=False)
kmeans.fit(X)
kmeans.show()
# -
kmeans = KElbowVisualizer(c.KMeans(), k=clusters, timings=False, metric = 'silhouette')
kmeans.fit(X)
kmeans.show()
# + [markdown] tags=[]
# ## 7.1 Silhouette Analysis
# + tags=[]
fig, ax = plt.subplots(3, 2, figsize=(25,18))
for k in clusters:
km = c.KMeans(n_clusters=k, init='random', n_init=10, max_iter=100, random_state=42)
q, mod = divmod(k,2)
visualizer = SilhouetteVisualizer(km, colors='yellowbrick', ax=ax[q-1][mod])
visualizer.fit(X)
visualizer.finalize()
# -
# # 8.0. Model Training
# ## 8.1. K-Means
# +
# model definition
k = 3
kmeans = c.KMeans(init='random', n_clusters=k, n_init=10, max_iter=300, random_state=42)
# model training
kmeans.fit(X)
# clustering
labels = kmeans.labels_
# -
# ### 8.1.2. Cluster Validation
from sklearn import metrics as m
# +
## WSS (Within-cluster sum of square)
print('WSS Value: {}'.format(kmeans.inertia_))
## SS (Silhouette Score)
print('SS Value: {}'.format(m.silhouette_score(X, labels, metric='euclidean')))
# -
# # 9.0. Cluster Analysis
df9 = df6.copy()
df9['cluster'] = labels
df9.head()
# ## 9.1. Visualization Inspection
visualizer = SilhouetteVisualizer(kmeans, colors='yellowbrick')
visualizer.fit(X)
visualizer.finalize()
# ## 9.2. 2d plot
df_viz = df9.drop(columns='customer_id', axis=1)
sns.pairplot(df_viz, hue='cluster')
# ## 9.3. UMAP
reducer = umap.UMAP(n_neighbors = 90, random_state=42)
embedding = reducer.fit_transform(X)
# +
# embedding
df_viz['embedding_x'] = embedding[:, 0]
df_viz['embedding_y'] = embedding[:, 1]
# plot UMAP
sns.scatterplot(x='embedding_x', y='embedding_y',
hue = 'cluster',
palette = sns.color_palette('hls', n_colors=len(df_viz['cluster'].unique())),
data = df_viz)
# -
# ## 9.4. Cluster Profile
# +
# Number of customer
df_cluster = df9[['customer_id', 'cluster']].groupby('cluster').count().reset_index()
df_cluster['perc_customer'] = 100*(df_cluster['customer_id'] / df_cluster['customer_id'].sum())
# Avg gross revenue
df_avg_gross_revenue = df9[['gross_revenue', 'cluster']].groupby('cluster').mean().reset_index()
df_cluster = pd.merge(df_cluster, df_avg_gross_revenue, how='inner', on='cluster')
# Avg recency days
df_avg_recency_days = df9[['recency_days', 'cluster']].groupby('cluster').mean().reset_index()
df_cluster = pd.merge(df_cluster, df_avg_recency_days, how='inner', on='cluster')
# Avg invoice_no
df_avg_invoice_no = df9[['invoice_no', 'cluster']].groupby('cluster').mean().reset_index()
df_cluster = pd.merge(df_cluster, df_avg_invoice_no, how='inner', on='cluster')
# Avg Ticket
df_ticket = df9[['avg_ticket', 'cluster']].groupby('cluster').mean().reset_index()
df_cluster = pd.merge(df_cluster, df_ticket, how='inner', on='cluster')
df_cluster
# -
# # 10.0. Deploy To Production
| notebooks/c03_insiders_clustering-metrics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # n-queens problem
#
# https://en.wikipedia.org/wiki/Eight_queens_puzzle
#
# 
# https://www.geeksforgeeks.org/n-queen-problem-backtracking-3/
# so if we were going to use completely brute force we would need to look at C(64, 8) candidate placements
# we can use our friend the memoized function from last week
import functools
@functools.lru_cache(maxsize=None)
def C_mem(n,k):
if k == 0: return 1
if n == 0: return 0
return C_mem(n-1,k-1) + C_mem(n-1,k)
C_mem(64,8)
# +
# Python3 program to solve N Queen
# Problem using backtracking
global N # kind of ugly global state
N = 4
def printSolution(board):
for i in range(N):
for j in range(N):
print (board[i][j], end = " ")
print()
# A utility function to check if a queen can
# be placed on board[row][col]. Note that this
# function is called when "col" queens are
# already placed in columns from 0 to col -1.
# So we need to check only left side for
# attacking queens
def isSafe(board, row, col):
# Check this row on left side
for i in range(col):
if board[row][i] == 1:
return False
# Check upper diagonal on left side
for i, j in zip(range(row, -1, -1),
range(col, -1, -1)):
if board[i][j] == 1:
return False
# Check lower diagonal on left side
for i, j in zip(range(row, N, 1),
range(col, -1, -1)):
if board[i][j] == 1:
return False
return True
def solveNQUtil(board, col):
# base case: If all queens are placed
# then return true
if col >= N:
return True
# Consider this column and try placing
# this queen in all rows one by one
for i in range(N):
if isSafe(board, i, col):
# Place this queen in board[i][col]
board[i][col] = 1
# recur to place rest of the queens
if solveNQUtil(board, col + 1) == True:
return True
# If placing the queen in board[i][col]
# doesn't lead to a solution, then
# remove the queen from board[i][col]
board[i][col] = 0
# if the queen cannot be placed in any row in
# this column col then return false
return False
# This function solves the N Queen problem using
# Backtracking. It mainly uses solveNQUtil() to
# solve the problem. It returns false if queens
# cannot be placed, otherwise return true and
# placement of queens in the form of 1s.
# note that there may be more than one
# solutions, this function prints one of the
# feasible solutions.
# This code is contributed by <NAME>
# -
def solveNQ(board = tuple([0,0,0,0] for n in range(4)), print_board=True): # passing in default 4x4
# board = [ [0, 0, 0, 0],
# [0, 0, 0, 0],
# [0, 0, 0, 0],
# [0, 0, 0, 0] ]
if solveNQUtil(board, 0) == False:
#print ("Solution does not exist")
return False
if print_board:
printSolution(board)
return True
# Driver Code
solveNQ()
N = 6
[[0]*N for _ in range(N)]
N = 5 # ack global
solveNQ([[0]*N for _ in range(N)])
N = 6 # ack global
solveNQ([[0]*N for _ in range(N)])
N = 7 # ack global
solveNQ([[0]*N for _ in range(N)])
for n in range(4,17):
N = n
solveNQ([[0]*N for _ in range(N)])
print("-"*40)
N = 8 # ack global
solveNQ([[0]*N for _ in range(N)])
N = 8 # ack global
solveNQ([[0]*N for _ in range(N)])
# since we choose our paths deterministically we get the same solution always
# we could flip the board around to get mirror and rotated solutions
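# A minimal sketch of the board-flipping idea above, with its own validity check (`is_valid` and `solution` are illustrative names, separate from the solver): reflecting or rotating a valid placement always yields another valid placement.

```python
def is_valid(board):
    """Check that no two queens share a row, column, or diagonal."""
    queens = [(r, c) for r, row in enumerate(board)
              for c, v in enumerate(row) if v == 1]
    for i, (r1, c1) in enumerate(queens):
        for r2, c2 in queens[i + 1:]:
            if r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2):
                return False
    return True

# A known 4-queens solution.
solution = [[0, 0, 1, 0],
            [1, 0, 0, 0],
            [0, 0, 0, 1],
            [0, 1, 0, 0]]

mirrored = solution[::-1]                              # vertical flip
rotated = [list(row) for row in zip(*solution[::-1])]  # 90-degree rotation

assert is_valid(solution) and is_valid(mirrored) and is_valid(rotated)
```

# Flipping horizontally (`[row[::-1] for row in board]`) and composing rotations gives the rest of the symmetry group.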
board = [[0]*N for _ in range(N)]
board
N = 8
# %%timeit
solveNQ([[0]*N for _ in range(N)], print_board=False)
# the reason we got "Solution does not exist" is the mutable default board: its inner lists kept the queens from the first run :)
# ## Idea - choose path at random among all remaining paths
# We are still going to check all paths (until we find a solution)
# +
import random
def solveNQUtilRandom(board, col):
# base case: If all queens are placed
# then return true
if col >= N:
return True
# Consider this column and try placing
# this queen in all rows one by one
for i in random.sample(list(range(N)),k=N):
if isSafe(board, i, col):
# Place this queen in board[i][col]
board[i][col] = 1
# recur to place rest of the queens
if solveNQUtilRandom(board, col + 1) == True: # recurse with the randomized variant so every column, not just the first, is tried in random order
return True
# If placing the queen in board[i][col]
# doesn't lead to a solution, then
# remove the queen from board[i][col]
board[i][col] = 0
# if the queen cannot be placed in any row in
# this column col then return false
return False
def solveNQRandom(board = tuple([0,0,0,0] for n in range(4)), print_board=True): # passing in default 4x4
# board = [ [0, 0, 0, 0],
# [0, 0, 0, 0],
# [0, 0, 0, 0],
# [0, 0, 0, 0] ]
if solveNQUtilRandom(board, 0) == False:
#print ("Solution does not exist")
return False
if print_board:
printSolution(board)
return True
# -
random.choices(list(range(N)),k=N) # no good we got dupes
random.sample(list(range(N)),k=N) # this what we want a random sample of uniques
N = 8
solveNQRandom([[0]*N for _ in range(N)])
N = 8
solveNQRandom([[0]*N for _ in range(N)])
# %%timeit
solveNQRandom([[0]*N for _ in range(N)], print_board=False)
# +
# what does it mean that our randomized solution worked faster than the hardcoded selection order?
# random is about twice as fast in this case
# -
N=10
# %%timeit
solveNQRandom([[0]*N for _ in range(N)], print_board=False)
# %%timeit
solveNQ([[0]*N for _ in range(N)], print_board=False)
N=16
# with larger Ns random path selection might not be the best anymore
# %%timeit
solveNQRandom([[0]*N for _ in range(N)], print_board=False)
# %%timeit
solveNQ([[0]*N for _ in range(N)], print_board=False)
# we could try going the other way
list(range(N-1,-1,-1))
list(range(N))
# +
def solveNQUtilReverse(board, col):
# base case: If all queens are placed
# then return true
if col >= N:
return True
# Consider this column and try placing
# this queen in all rows one by one
# we still want to consider all possible solutions but choosing from other end first
for i in range(N-1,-1,-1):
if isSafe(board, i, col):
# Place this queen in board[i][col]
board[i][col] = 1
# recur to place rest of the queens
if solveNQUtilReverse(board, col + 1) == True: # recurse with the reverse-order variant so every column is tried from the other end
return True
# If placing the queen in board[i][col]
# doesn't lead to a solution, then
# remove the queen from board[i][col]
board[i][col] = 0
# if the queen cannot be placed in any row in
# this column col then return false
return False
def solveNQReverse(board = tuple([0,0,0,0] for n in range(4)), print_board=True): # passing in default 4x4
# board = [ [0, 0, 0, 0],
# [0, 0, 0, 0],
# [0, 0, 0, 0],
# [0, 0, 0, 0] ]
if solveNQUtilReverse(board, 0) == False:
#print ("Solution does not exist")
return False
if print_board:
printSolution(board)
return True
# -
N=8
solveNQReverse([[0]*N for _ in range(N)])
# %%timeit
solveNQReverse([[0]*N for _ in range(N)], print_board=False)
N =8
# %%timeit
solveNQReverse([[0]*N for _ in range(N)], print_board=False)
N=10
# %%timeit
solveNQReverse([[0]*N for _ in range(N)], print_board=False)
# %%timeit
solveNQ([[0]*N for _ in range(N)], print_board=False)
# %%timeit
solveNQRandom([[0]*N for _ in range(N)], print_board=False)
N=16
# %%timeit
solveNQReverse([[0]*N for _ in range(N)], print_board=False)
# %%timeit
solveNQ([[0]*N for _ in range(N)], print_board=False)
# %%timeit
solveNQRandom([[0]*N for _ in range(N)], print_board=False)
# %%timeit
solveNQRandom([[0]*N for _ in range(N)], print_board=False)
# +
# Branch and Bound approach
""" Python3 program to solve N Queen Problem
using Branch and Bound """
N = 8
""" A utility function to print the solution """
def printSolutionBB(board):
for i in range(N):
for j in range(N):
print(board[i][j], end = " ")
print()
""" An optimized function to check if
a queen can be placed on board[row][col] """
def isSafeBB(row, col, slashCode, backslashCode,
rowLookup, slashCodeLookup,
backslashCodeLookup):
if (slashCodeLookup[slashCode[row][col]] or
backslashCodeLookup[backslashCode[row][col]] or
rowLookup[row]):
return False
return True
""" A recursive utility function
to solve N Queen problem """
def solveNQueensUtilBB(board, col, slashCode, backslashCode,
rowLookup, slashCodeLookup,
backslashCodeLookup):
""" base case: If all queens are
placed then return True """
if(col >= N):
return True
for i in range(N):
if(isSafeBB(i, col, slashCode, backslashCode,
rowLookup, slashCodeLookup,
backslashCodeLookup)):
""" Place this queen in board[i][col] """
board[i][col] = 1
rowLookup[i] = True
slashCodeLookup[slashCode[i][col]] = True
backslashCodeLookup[backslashCode[i][col]] = True
""" recur to place rest of the queens """
if(solveNQueensUtilBB(board, col + 1,
slashCode, backslashCode,
rowLookup, slashCodeLookup,
backslashCodeLookup)):
return True
""" If placing queen in board[i][col]
doesn't lead to a solution,then backtrack """
""" Remove queen from board[i][col] """
board[i][col] = 0
rowLookup[i] = False
slashCodeLookup[slashCode[i][col]] = False
backslashCodeLookup[backslashCode[i][col]] = False
""" If the queen cannot be placed in any row in
this column col then return False """
return False
""" This function solves the N Queen problem using
Branch and Bound. It mainly uses solveNQueensUtilBB() to
solve the problem. It returns False if queens
cannot be placed, otherwise returns True and
prints the placement of queens in the form of 1s.
Please note that there may be more than one
solution; this function prints one of the
feasible solutions."""
def solveNQueensBB(debug=False):
board = [[0 for i in range(N)]
for j in range(N)]
# helper matrices
slashCode = [[0 for i in range(N)]
for j in range(N)]
backslashCode = [[0 for i in range(N)]
for j in range(N)]
# arrays to tell us which rows are occupied
rowLookup = [False] * N
# keep two arrays to tell us
# which diagonals are occupied
x = 2 * N - 1
slashCodeLookup = [False] * x
backslashCodeLookup = [False] * x
# initialize helper matrices
for rr in range(N):
for cc in range(N):
slashCode[rr][cc] = rr + cc
backslashCode[rr][cc] = rr - cc + (N - 1) # offset keeps diagonal codes non-negative for any N, not just N=8
if(solveNQueensUtilBB(board, 0, slashCode, backslashCode,
rowLookup, slashCodeLookup,
backslashCodeLookup) == False):
print("Solution does not exist")
return False
# solution found
if debug:
printSolutionBB(board)
return True
# Driver Code
solveNQueensBB(debug=True)
# This code is contributed by SHUBHAMSINGH10
# -
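# The slash/backslash lookup trick above can be illustrated on its own: `r + c` is constant along "/" diagonals and `r - c + (N - 1)` along "\" diagonals, so each of the `2N - 1` diagonals gets a unique index into the lookup arrays. A small sketch (using a lowercase `n` so the notebook's global `N` is untouched):

```python
n = 4  # small board just for illustration

# Diagonal codes for every cell on an n x n board.
slash = [[r + c for c in range(n)] for r in range(n)]
backslash = [[r - c + (n - 1) for c in range(n)] for r in range(n)]

# Cells on the same "/" diagonal share one slash code...
assert slash[0][3] == slash[1][2] == slash[2][1] == slash[3][0] == 3
# ...and cells on the same "\" diagonal share one backslash code.
assert backslash[0][0] == backslash[1][1] == backslash[2][2] == n - 1

# All codes fit exactly into the 2n-1 lookup slots.
codes = {slash[r][c] for r in range(n) for c in range(n)}
assert codes == set(range(2 * n - 1))
```

# This is why `isSafeBB` can answer "is this diagonal occupied?" with a single array lookup instead of scanning the board.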
# %%timeit
solveNQueensBB()
N = 12
solveNQueensBB(debug=True)
for n in range(8, 17):
N = n
solveNQueensBB(debug=True)
N=12
# %%timeit
solveNQueensBB()
# %%timeit
solveNQ([[0]*N for _ in range(N)], print_board=False)
N=16
solveNQ([[0]*N for _ in range(N)])
# %%timeit
solveNQ([[0]*N for _ in range(N)], print_board=False)
# %%timeit
solveNQueensBB()
N=20
solveNQueensBB(debug=True)
# %%timeit
solveNQueensBB()
N=20
solveNQRandom([[0]*N for _ in range(N)])
# %%timeit
solveNQRandom([[0]*N for _ in range(N)], print_board=False)
| topics/N Queens.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Setting the random seed, feel free to change it and see different solutions.
np.random.seed(42)
def sigmoid(x):
return 1.0/(1.0+np.exp(-x))
# step activation used by prediction(); perceptronStep below expects binary predictions
def stepFunction(t):
if t >= 0:
return 1
return 0
def prediction(X, W, b):
return stepFunction((np.matmul(X,W)+b)[0])
# TODO: Fill in the code below to implement the perceptron trick.
# The function should receive as inputs the data X, the labels y,
# the weights W (as an array), and the bias b,
# update the weights and bias W, b, according to the perceptron algorithm,
# and return W and b.
def perceptronStep(X, y, W, b, learn_rate = 0.01):
for i in range(len(X)):
y_hat = prediction(X[i],W,b)
if y[i]-y_hat == 1:
W[0] += X[i][0]*learn_rate
W[1] += X[i][1]*learn_rate
b += learn_rate
elif y[i]-y_hat == -1:
W[0] -= X[i][0]*learn_rate
W[1] -= X[i][1]*learn_rate
b -= learn_rate
return W, b
# This function runs the perceptron algorithm repeatedly on the dataset,
# and returns a few of the boundary lines obtained in the iterations,
# for plotting purposes.
# Feel free to play with the learning rate and the num_epochs,
# and see your results plotted below.
def trainPerceptronAlgorithm(X="", y="", learn_rate = 0.1, num_epochs = 250):
X = np.array([[0,0],[0,1],[1,0],[1,1]])
y = np.array([0,0,0,1])
x_min, x_max = min(X.T[0]), max(X.T[0])
y_min, y_max = min(X.T[1]), max(X.T[1])
W = np.array(np.random.rand(2,1))
b = np.random.rand(1)[0] + x_max
# These are the solution lines that get plotted below.
boundary_lines = []
for i in range(num_epochs):
# In each epoch, we apply the perceptron step.
W, b = perceptronStep(X, y, W, b, learn_rate)
boundary_lines.append((-W[0]/W[1], -b/W[1]))
return boundary_lines[-1] # return the last boundary line, i.e. the best answer found
result = trainPerceptronAlgorithm()
def plot(y,w=-1,b=1.5):
m = np.linspace(-2,2,100)
X = np.array([[0,0],[0,1],[1,0],[1,1]])
for i in X:
plt.plot(i[0],i[1], 'bo')
plt.axis([-2, 2, -2, 2])
plt.plot(m,[x * y[0]+y[1] for x in m])
plt.show()
plot(result)
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Setting the random seed, feel free to change it and see different solutions.
np.random.seed(42)
# def sigmoid(x):
# return np.divide(1.0,1.0+np.exp(np.multiply(-1,x)))
def sigmoid(x):
return 1.0/(1.0+np.exp(-x))
"""
Gradient descent update:
w_i <- w_i - alpha * (y_hat - y) * x_i
y_hat = sigmoid(w_old * x + bias)
"""
def updateW(w,x,label,bias,alpha=0.1):
for x1,label1 in zip(x,label):
predict=np.matmul(x1,w)+bias
w[0]=w[0] - np.multiply(alpha,(sigmoid(predict[0])-label1)*x1.T)[0] # cross-entropy SGD update
w[1]=w[1] - np.multiply(alpha,(sigmoid(predict[0])-label1)*x1.T)[1]
bias = bias - alpha*(sigmoid(predict[0])-label1)
return w,bias
#1-0=1 or 0-1 =-1
X = np.array([[0,0],[0,1],[1,0],[1,1]])
W = np.array(np.random.rand(2,1))
y = np.array([0,0,0,1])
x_min, x_max = min(X.T[0]), max(X.T[0])
b = np.random.rand(1)[0] + x_max # add x_max so the bias starts on the scale of x
epoch=400
for i in range(epoch):
W,b=updateW(W,X,y,b)
def plot(y,w=-1,b=1.5):
m = np.linspace(-2,2,100)
X = np.array([[0,0],[0,1],[1,0],[1,1]])
for i in X:
plt.plot(i[0],i[1], 'bo')
plt.axis([-2, 2, -2, 2])
plt.plot(m,[x * y[0]+y[1] for x in m])
plt.show()
plot((-W[0]/W[1], -b/W[1]))
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Setting the random seed, feel free to change it and see different solutions.
np.random.seed(42)
# def sigmoid(x):
# return np.divide(1.0,1.0+np.exp(np.multiply(-1,x)))
def sigmoid(x):
return 1.0/(1.0+np.exp(-x))
"""
Gradient descent update:
w_i <- w_i - alpha * (y_hat - y) * x_i
y_hat = sigmoid(w_old * x + bias)
"""
def updateW(w,x,label,bias,alpha=0.1):
for x1,label1 in zip(x,label):
predict=np.matmul(x1,w)+bias
w[0]=w[0] - np.multiply(alpha,(sigmoid(predict[0])-label1)*x1.T)[0] # cross-entropy SGD update
w[1]=w[1] - np.multiply(alpha,(sigmoid(predict[0])-label1)*x1.T)[1]
bias = bias - alpha*(sigmoid(predict[0])-label1)
return w,bias
#1-0=1 or 0-1 =-1
data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0,1]])
y = np.array(data[2])
#X = np.array([[0,0],[0,1],[1,0],[1,1]])
W = np.array(np.random.rand(2,1))
#y = np.array([0,0,0,1])
x_min, x_max = min(X.T[0]), max(X.T[0])
b = np.random.rand(1)[0] + x_max # add x_max so the bias starts on the scale of x
epoch=400
for i in range(epoch):
W,b=updateW(W,X,y,b)
def plot(y,w=-1,b=1.5):
m = np.linspace(-2,2,100)
#X = np.array([[0,0],[0,1],[1,0],[1,1]])
data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0,1]])
y = np.array(data[2])
for i in X:
plt.plot(i[0],i[1], 'bo')
plt.axis([-2, 2, -2, 2])
plt.plot(m,[x * y[0]+y[1] for x in m])
plt.show()
plot((-W[0]/W[1], -b/W[1]))
| GDs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Education theme - all audits - all data excluding PDF contents - Phrasemachine
#
# This experiment used 8697 pages from GOV.UK related to the education theme. We extracted the following content from those pages:
#
# - Title
# - Description
# - Indexable content (i.e. the body of the document stored in Search)
# - Existing topic names
# - Existing organisation names
#
# In order to do so, we used a combination of data from the search index and the content store. We then ran Latent Dirichlet Allocation (LDA) with the following parameters:
#
# - we asked for 10 topics
# - we let LDA run with 10 iterations
# - we processed the documents using [phrasemachine](https://github.com/slanglab/phrasemachine).
#
# The outcome of this experiment can be seen below. In order to run the script again, use this:
#
# ```shell
# python train_lda.py --output-topics output/phrasemachine_nopdf_topics.csv --output-tags output/phrasemachine_nopdf_tags.csv --vis-filename output/phrasemachine_nopdf_vis.html --passes 10 --numtopics 10 --use-phrasemachine --no-lemmatisation import expanded_audits/all_audits_for_education_words_nopdf.csv
# ```
# ## Dictionary
#
# LDA constructs a dictionary of words it collects from the documents. This dictionary has information on word frequencies. The dictionary for this can be seen [here](models/dict).
#
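# As a rough illustration of the kind of frequency information such a dictionary holds, here is a stdlib-only sketch on toy documents (the tokens are made up; libraries such as gensim additionally assign an integer id to each word):

```python
from collections import Counter

# Toy tokenised documents standing in for the GOV.UK pages (hypothetical).
docs = [
    ["school", "funding", "teacher"],
    ["teacher", "training", "school"],
    ["apprenticeship", "training"],
]

# Corpus-wide term frequency, and document frequency per word.
term_freq = Counter(tok for doc in docs for tok in doc)
doc_freq = Counter(tok for doc in docs for tok in set(doc))

print(term_freq["school"], doc_freq["training"])  # → 2 2
```

# LDA uses these counts (per document) as its bag-of-words input, which is why the dictionary is worth inspecting before training.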
# +
# This code is required so we can display the visualisation
import pandas as pd
import pyLDAvis
from IPython.core.display import display, HTML
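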
# Changing the cell widths
display(HTML("<style>.container { width:100% !important; }</style>"))
# Setting the max number of rows
pd.options.display.max_rows = 30
# Setting the max number of columns
pd.options.display.max_columns = 50
pyLDAvis.enable_notebook()
# -
# ## Interactive topic model visualisation
#
# The page below displays the topics generated by the algorithm and allows us to interact with them in order to discover what words make up each topic.
from IPython.display import HTML
HTML(filename='vis.html')
# ## Sample of tagged documents
#
# Below we list a sample of the education links and the correspondent topics the algorithm chose to tag it with. This is useful in order to see if the algorithm is tagging those documents with meaningful topics.
#
# For a complete list, please see [here](tags.csv).
#
# In order to check the topics by their number, please use this CSV file [here](topics.csv).
#
# ### https://www.gov.uk/government/publications/registration-of-free-schools
# - Topic 0 (88%)
# - Topic 3 (9%)
#
# ### https://www.gov.uk/government/publications/first-national-survey-of-practitioners-with-early-years-professional-status
# - Topic 7 (98%)
#
# ### https://www.gov.uk/government/publications/child-death-data-collection-2013-to-2014-return-forms
# - Topic 3 (98%)
#
# ### https://www.gov.uk/government/news/bogus-training-courses-come-under-fire
# - Topic 9 (79%)
# - Topic 5 (9%)
# - Topic 6 (6%)
#
# ### https://www.gov.uk/guidance/key-stage-2-tests-planning-to-use-the-modified-tests
# - Topic 8 (97%)
#
# ### https://www.gov.uk/government/news/cash-boost-for-disadvantaged-school-children
# - Topic 2 (95%)
# - Topic 3 (3%)
# - Topic 0 (2%)
#
# ### https://www.gov.uk/government/speeches/michael-gove-to-the-national-college-annual-conference-birmingham
# - Topic 6 (83%)
# - Topic 4 (8%)
# - Topic 0 (3%)
#
# ### https://www.gov.uk/government/news/events-map-live-for-national-apprenticeship-week-2015
# - Topic 4 (98%)
#
# ### https://www.gov.uk/government/statistics/revised-gcse-and-equivalent-results-in-england-2013-to-2014
# - Topic 2 (67%)
# - Topic 9 (25%)
# - Topic 7 (6%)
#
# ### https://www.gov.uk/government/news/david-willetts-quote-on-off-quota-numbers
# - Topic 6 (99%)
#
# ### https://www.gov.uk/government/news/35-of-schools-less-than-good-in-east-riding-of-yorkshire
# - Topic 9 (98%)
# - Topic 4 (1%)
#
# ### https://www.gov.uk/government/news/more-information-needed-for-a-better-understanding-of-private-fostering
# - Topic 2 (81%)
# - Topic 9 (13%)
# - Topic 6 (4%)
#
# ### https://www.gov.uk/government/news/a-strategy-for-skills
# - Topic 1 (77%)
# - Topic 6 (22%)
#
# ### https://www.gov.uk/government/publications/gcse-and-a-level-subject-content-equality-analysis-3-subjects
# - Topic 6 (66%)
# - Topic 2 (31%)
# - Topic 5 (2%)
#
# ### https://www.gov.uk/government/speeches/apprenticeships-training-young-people-for-jobs-of-the-future
# - Topic 6 (90%)
# - Topic 5 (6%)
# - Topic 9 (1%)
#
# ### https://www.gov.uk/government/news/academies-to-have-same-freedom-as-free-schools-over-teachers
# - Topic 0 (76%)
# - Topic 4 (16%)
# - Topic 6 (4%)
#
# ### https://www.gov.uk/government/news/switching-children-on-to-online-safety-this-christmas
# - Topic 3 (78%)
# - Topic 6 (18%)
# - Topic 4 (3%)
#
# ### https://www.gov.uk/government/publications/newly-qualified-teachers-nqts-annual-survey-2015
# - Topic 9 (98%)
#
# ### https://www.gov.uk/government/publications/gcse-ancient-history
# - Topic 6 (95%)
#
# ### https://www.gov.uk/government/publications/managing-medicines-in-schools-and-early-years-settings
# - Topic 9 (90%)
# - Topic 7 (3%)
# - Topic 3 (3%)
#
#
| experiments/10_topics_without_pdf_data_with_phrasemachine/notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.2 ('rising_sun')
# language: python
# name: python3
# ---
# ## Plot Ideas
# -------------
#
# * QQ Normal Plot to see if Hailstone Sizes are normally distributed
# * If not normal, then find a distribution that does fit it (maybe something log normal?)
# * Generate plot that demonstrates this
#
# * Histograms for some of the variables, especially Hailstone Sizes and maybe heatmaps with some other ones?
#
# * For all duplicates, see how far apart the actual variables are; is it worth using three times as much information for little benefit?
# * Do this maybe with... stacked histograms / line plot / something else?
# * Calculate mean of duplicate variables, is that a better indicator, or should use closest to mean variable?
#
# * Correlation matrix for the data, make it real pretty like, consider whether we need all these variables or can use PCA/SVM/LASSO to reduce dimensionality
#
# * Scale data maybe?
#
# * Boxplots to see about spread and central tendency, maybe even two dimensional versions or facet grid
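# The QQ-plot idea above reduces to pairing sorted sample values with theoretical quantiles; a stdlib-only sketch for the normal case (the sample values are made up, and an actual plot would use `scipy.stats.probplot` as done later in this notebook):

```python
import statistics

# Synthetic "hail sizes" in inches (hypothetical, roughly right-skewed).
sample = sorted([0.75, 1.0, 1.0, 1.25, 1.75, 2.5, 4.0])
n = len(sample)

# Theoretical standard-normal quantiles at plotting positions (i - 0.5) / n.
nd = statistics.NormalDist()
theoretical = [nd.inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]

# For normally distributed data these pairs would fall on a straight line;
# systematic curvature signals skew or heavy tails.
pairs = list(zip(theoretical, sample))
assert len(pairs) == n and theoretical[0] < 0 < theoretical[-1]
```

# Plotting `pairs` as a scatter gives the QQ plot; the curvature for this skewed sample is what motivates trying non-normal fits below.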
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import statsmodels.api as sm
import scipy.stats as ss
from math import floor
from matplotlib import colors
from fitter import Fitter, get_common_distributions, get_distributions
# +
# %matplotlib qt
rng = np.random.default_rng(100)
SMALL_SIZE = 14
MEDIUM_SIZE = 18
BIGGER_SIZE = 26
CHONK_SIZE = 32
font = {'family' : 'DIN Condensed',
'weight' : 'bold',
'size' : SMALL_SIZE}
plt.rc('font', **font)
plt.rc('axes', titlesize=BIGGER_SIZE, labelsize=MEDIUM_SIZE, facecolor="xkcd:white")
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=CHONK_SIZE, facecolor="xkcd:white", edgecolor="xkcd:black") # powder blue
drop_lst = ["MU CAPE", "MU CIN", "MU LCL", "MU LFC", "MU EL", "MU LI", "MU hght0c", "MU cap", "MU b3km", "MU brn", "SB CAPE", "SB CIN", "SB LCL", "SB LFC", "SB EL", "SB LI", "SB hght0c",
"SB cap", "SB b3km", "SB brn", "sb_tlcl", "mu_tlcl"]
col_names = ['CAPE', 'CIN', 'LCL', 'LFC', 'EL', 'LI', 'HGHT0C',
'CAP', 'B3KM', 'BRN', 'SHEAR 0-1 KM', 'SHEAR 0-6 KM',
'EFF INFLOW', 'EBWD', 'SRH 0-1 KM', 'SRH 0-3 KM', 'EFF SRH', 'SCP',
'STP-FIXED', 'STP-MIXED', 'SHIP', 'PWAT', 'DCAPE', 'MLMR', 'LRAT',
'TEI', 'TLCL', 'T500', 'SWEAT', 'K-INDEX', 'CRAV', 'HAIL SIZE IN']
path = "/Users/joshuaelms/Desktop/github_repos/CSCI-B365/Meteorology_Modeling_Project/data/pretty_data.csv"
df_full = pd.read_csv(path, index_col=0)
df_full.index += 1
df = df_full.copy(deep=True)
df = df.drop(columns = drop_lst)
df.columns = col_names
df
# +
# Fitter to find best distribution
hail = df["HAIL SIZE IN"].to_numpy()
distributions_to_check = ["gennorm", "dgamma", "dweibull", "cauchy"]
f = Fitter(hail, distributions=distributions_to_check)
f.fit()
print(f.summary())
print("\nWe will use the distribution with the lowest sum of squares error, the generalised normal distribution.")
print(f.get_best(method = "sumsquare_error"))
# +
# Plotting qq gennorm distribution for HAIL SIZE IN
# SSE = 22.30399, which is the lowest sum of squares error of all the distributions tested.
# sumsquare_error aic bic kl_div
# gennorm 22.303990 1061.334522 -208740.992995 inf
# dgamma 27.932897 1090.954331 -202191.893056 inf
# dweibull 38.602452 774.316469 -192777.084374 inf
# cauchy 40.289411 833.865428 -191542.586505 inf
# foldcauchy 40.686649 778.275503 -191246.778989
fig, ax = plt.subplots()
ss.probplot(df["HAIL SIZE IN"], sparams=(0.47774409138777574, 1.0, 0.028076), dist='gennorm', fit=True, plot=ax, rvalue=False)
ax.set_title("Generalized Normal Distribution QQ Plot")
ax.set_xlabel("Theoretical", fontsize=24)
ax.set_ylabel("Sample", fontsize=24)
plt.show()
# +
### Standard and Log Histograms of HAIL SIZE IN
# plt.clf()
fig, [ax1, ax2] = plt.subplots(ncols=2)
step = 0.25
breaks = np.arange(floor(df["HAIL SIZE IN"].min() - step), df["HAIL SIZE IN"].max() + step, step)
labs = np.arange(0, df["HAIL SIZE IN"].max() + 1, 1)
sns.histplot(data=df, x="HAIL SIZE IN", discrete=False, bins=breaks, ax=ax1)
ax1.set_xticks(labs)
ax1.set_xlim(left=-0.5)
ax1.set_xticklabels(labs)
ax1.set_title("Linear Scale")
ax1.set_xlabel("HAIL SIZE IN")
ax1.set_ylabel("Frequency")
sns.histplot(data=df, x="HAIL SIZE IN", discrete=False, bins=breaks, ax=ax2)
ax2.set_yscale("log")
ax2.set_xticks(labs)
ax2.set_xticklabels(labs)
ax2.set_ylim(bottom=0)
ax2.set_xlim(left=-0.5)
ax2.set_title("Log Scale")
ax2.set_xlabel("HAIL SIZE IN")
ax2.set_ylabel("Log(Frequency)")
plt.subplots_adjust(
top=0.89,
bottom=0.125,
left=0.13,
right=0.94,
hspace=0.2,
wspace=0.3
)
print(ax2.get_xlim())
# +
### Corr plot for ten duplicates using pcolormesh
# plt.clf()
### group plots by variable; for each variable in the dictionary, generate and display corrplot of various calculation methods for it
fig, ax_lst = plt.subplots(nrows=5, ncols=2, figsize=(10,14))
fig.suptitle("Pairwise Correlations of 3 Methods for Calculating Meteorological Parameters")
# fig.patch.set_facecolor("xkcd:light grey")
cnt = 0
for r, layer in enumerate(ax_lst):
    for c, ax in enumerate(layer):
        correlations = df_full.iloc[:, [cnt, cnt + 10, cnt + 20]].corr()
        axis_labels = correlations.columns.values.tolist()
        im = ax.pcolormesh(correlations, norm=colors.Normalize(0, 1), cmap="magma", edgecolor="black", linewidth=0.5)
        ticks = [i + 0.5 for i in range(len(axis_labels))]
        ax.invert_yaxis()
        ax.set_xticks(ticks)
        ax.set_xticklabels(axis_labels)
        ax.set_yticks(ticks)
        ax.set_yticklabels(axis_labels)
        ax.grid(which='minor', color='b', linestyle='-', linewidth=2)
        cnt += 1
shrink_amount = 1.065
fig.colorbar(im, ax=ax_lst[:, 0], shrink=shrink_amount) # options are pad, shrink, aspect
fig.colorbar(im, ax=ax_lst[:, 1], shrink=shrink_amount)
cb1, cb2 = fig.axes[-2], fig.axes[-1]
plt.subplots_adjust(
top=0.905,
bottom=0.085,
left=0.14,
right=0.825,
hspace=0.6,
wspace=0.62
)
plt.show()
# +
### Corr plot for ten duplicates using pcolormesh vert FAILED
# plt.clf()
### group plots by variable; for each variable in the dictionary, generate and display corrplot of various calculation methods for it
fig, ax_lst = plt.subplots(nrows=10, figsize=(6,14))
fig.suptitle("Pairwise Correlations of 3 Methods for Calculating Meteorological Parameters")
# fig.patch.set_facecolor("xkcd:light grey")
cnt = 0
for r, ax in enumerate(ax_lst):
    # for c, ax in enumerate(layer):
    correlations = df.iloc[:, [cnt, cnt + 10, cnt + 20]].corr()
    axis_labels = correlations.columns.values.tolist()
    im = ax.pcolormesh(correlations, norm=colors.Normalize(0, 1), cmap="magma", edgecolor="black", linewidth=0.5)
    ticks = [i + 0.5 for i in range(len(axis_labels))]
    ax.invert_yaxis()
    ax.set_xticks(ticks)
    ax.set_xticklabels(axis_labels)
    ax.set_yticks(ticks)
    ax.set_yticklabels(axis_labels)
    ax.grid(which='minor', color='b', linestyle='-', linewidth=2)
    cnt += 1
shrink_amount = 1.065
fig.colorbar(im, ax=ax_lst[:], shrink=shrink_amount) # options are pad, shrink, aspect
# fig.colorbar(im, ax=ax_lst[:, 1], shrink=shrink_amount)
cb1 = fig.axes[-1]
plt.subplots_adjust(
top=0.88,
bottom=0.11,
left=0.155,
right=0.725,
hspace=0.415,
wspace=0.13
)
# plt.savefig("/Users/joshuaelms/Desktop/github_repos/CSCI-B365/Meteorology_Modeling_Project/reports/img/plots/corr_plots.png")
# +
### CAPE vs Shear Scatter Plot
fig, ax = plt.subplots()
sns.scatterplot(data=df, x="CAPE", y="SHEAR 0-6 KM", ax=ax)
ax.set_xlabel("CAPE")
ax.set_ylabel("SHEAR 0-6 KM")
plt.show()
# +
### Corr plot overall
fig, ax1 = plt.subplots()
df_corr = df.corr()
sns.heatmap(data=df_corr, vmin=-1, vmax=1, ax=ax1, xticklabels=1, yticklabels=1)
ax1.set_title("Correlation Matrix for All Parameters")
plt.subplots_adjust(
top=0.92,
bottom=0.187,
left=0.145,
right=0.992,
hspace=0.2,
wspace=0.2
)
# +
### SHIP Plots
sep = 2
under_2in = df[df["HAIL SIZE IN"] <= sep]["HAIL SIZE IN"]
over_2in = df[df["HAIL SIZE IN"] > sep]["HAIL SIZE IN"]
fig, ax1 = plt.subplots(ncols=1)
### ax1 ###
[ax1.spines[x].set_visible(False) for x in ["top", "right", "left"]] # remove top, left, bottom axis border
ax1.yaxis.set_ticks_position("none") # remove y tick marks
dataset = [under_2in, over_2in]
labs = ["Under 2\"", "Over 2\""]
ax1.boxplot(dataset, labels = labs)
ax1.set_title("SHIP for Predicting Hail Size Categories")
ax1.set_ylabel("SHIP")
plt.subplots_adjust(
top=0.92,
bottom=0.06,
left=0.105,
right=0.955,
hspace=0.2,
wspace=0.2
)
## ax2 ###
# plot matplotlib heatmap of HAIL SIZE IN vs SHIP on ax2
# bin_size = 1
# bins = np.arange(0, 6 + bin_size, bin_size)
# freq_matrix_attrs = np.histogram2d(df["HAIL SIZE IN"], df["SHIP"], density=False, bins=bins)
# freq_matrix_rot = freq_matrix_attrs[0] + 0.00001
# freq_matrix = np.rot90(freq_matrix_rot).astype(float)
# sns.heatmap(freq_matrix, ax=ax2, norm=colors.LogNorm(), cmap="magma", xticklabels=1, yticklabels=1)
# # im = ax2.pcolormesh(freq_matrix, , cmap="magma", edgecolor="black", linewidth=0.5) # colors.Normalize(0, 1)
# ax2.set_title("HAIL SIZE IN vs SHIP")
# ax2.set_xlabel("HAIL SIZE IN")
# ax2.set_ylabel("SHIP")
# ax2.invert_yaxis()
# ax2.set_xticks(bins)
# ax2.set_yticks(bins)
# fig.colorbar(im, ax=ax2, shrink=1) # options are pad, shrink, aspect
plt.show()
# +
# Testing
fig, ax = plt.subplots()
data = [[0.5, 2.5], [1.5, 1.5]]
bin_size = 1
bins = np.arange(0, 3 + bin_size, bin_size)
freq_matrix = np.histogram2d(data[0], data[1], bins=bins, range=(0, 3), density=True)[0]
freq_matrix = np.rot90(freq_matrix)
im = ax.pcolormesh(freq_matrix, norm=colors.Normalize(0, 1), cmap="magma", edgecolor="black", linewidth=0.5)
ax.set_title("Testing")
ax.set_xlabel("Changing Values")
ax.set_ylabel("All 1's")
ax.set_xticks(bins)
ax.set_yticks(bins)
fig.colorbar(im, ax=ax, shrink=1) # options are pad, shrink, aspect
for y in range(freq_matrix.shape[0]):
    for x in range(freq_matrix.shape[1]):
        plt.text(x + 0.5, y + 0.5, '%.4f' % freq_matrix[y, x],
                 horizontalalignment='center',
                 verticalalignment='center',
                 color='white',
                 )
plt.show()
# source file: Meteorology_Modeling_Project/visualization/notebooks/figures.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"name": "#%%\n"}
import json
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme()
file_path = r'speed_comparison\multiprocessing_250pages1.json'
# file_path = r'speed_comparison/multiprocessing_250pages2.json'
# file_path = r'speed_comparison/multiprocessing_250pages3.json'
data = json.load(open(file_path))
ax = pd.DataFrame(data).plot(kind='bar', x='cores', y='time')
pd.DataFrame(data).plot(ax=ax)
# Visual changes
plt.legend([])
plt.title('Cores to work time comparison [seconds]')
plt.tick_params(axis='x', labelrotation = 0)
plt.show()
# plt.savefig(r'speed_comparison\chart.png')
# source file: speed_comparison/speed_comparison.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using nbconvert as a library
#
# In this notebook, you will be introduced to the programmatic API of nbconvert and how it can be used in various contexts.
#
# A great [blog post](http://jakevdp.github.io/blog/2013/04/15/code-golf-in-python-sudoku/) by [\@jakevdp](https://github.com/jakevdp) will be used to demonstrate. This notebook will not focus on using the command line tool. The attentive reader will point out that no data is read from or written to disk during the conversion process. This is because nbconvert has been designed to work in memory so that it works well in a database or web-based environment too.
# ## Quick overview
# Credit: <NAME> (@jdfreder on github)
# The main principle of nbconvert is to instantiate an `Exporter` that controls the pipeline through which notebooks are converted.
#
# First, download @jakevdp's notebook (if you do not have `requests`, install it by running `pip install requests`, or if you don't have pip installed, you can find it on PyPI):
# + jupyter={"outputs_hidden": false}
from urllib.request import urlopen
url = 'https://jakevdp.github.io/downloads/notebooks/XKCD_plots.ipynb'
response = urlopen(url).read().decode()
response[0:60] + ' ...'
# -
# The response is a JSON string which represents a Jupyter notebook.
#
# Next, we will read the response using nbformat. Doing this will guarantee that the notebook structure is valid. Note that the in-memory format and on disk format are slightly different. In particular, on disk, multiline strings might be split into a list of strings.
# + jupyter={"outputs_hidden": false}
import nbformat
jake_notebook = nbformat.reads(response, as_version=4)
jake_notebook.cells[0]
# -
# The nbformat API returns a special type of dictionary. For this example, you don't need to worry about the details of the structure (if you are interested, please see the [nbformat documentation](https://nbformat.readthedocs.io/en/latest/)).
#
# The nbconvert API exposes some basic exporters for common formats and defaults. You will start by using one of them. First, you will import one of these exporters (specifically, the HTML exporter), then instantiate it using most of the defaults, and then you will use it to process the notebook we downloaded earlier.
# + jupyter={"outputs_hidden": false}
from traitlets.config import Config
# 1. Import the exporter
from nbconvert import HTMLExporter
# 2. Instantiate the exporter. We use the `classic` template for now; we'll get into more details
# later about how to customize the exporter further.
html_exporter = HTMLExporter()
html_exporter.template_name = 'classic'
# 3. Process the notebook we loaded earlier
(body, resources) = html_exporter.from_notebook_node(jake_notebook)
# -
# The exporter returns a tuple containing the source of the converted notebook, as well as a resources dict. In this case, the source is just raw HTML:
# + jupyter={"outputs_hidden": false}
print(body[:400] + '...')
# -
# If you understand HTML, you'll notice that some common tags are omitted, like the `body` tag. Those tags are included in the default `HTMLExporter` output, which is what would have been produced if we had not set the `template_name` to `classic`.
#
# The resource dict contains (among many things) the extracted `.png`, `.jpg`, etc. from the notebook when applicable. The basic HTML exporter leaves the figures as embedded base64, but you can configure it to extract the figures. So for now, the resource dict should be mostly empty, except for a key containing CSS and a few others whose content will be obvious:
# + jupyter={"outputs_hidden": false}
print("Resources:", resources.keys())
print("Metadata:", resources['metadata'].keys())
print("Inlining:", resources['inlining'].keys())
print("Extension:", resources['output_extension'])
# -
# `Exporter`s are stateless, so you won't be able to extract any useful information beyond their configuration. You can re-use an exporter instance to convert another notebook. In addition to the `from_notebook_node` used above, each exporter exposes `from_file` and `from_filename` methods.
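# Because exporters are stateless, a single instance can be reused for any number of
# notebooks. A minimal sketch of that reuse — the two throwaway notebooks here are
# built in memory with `nbformat` purely for illustration, so no disk or network
# access is needed:

```python
import nbformat
from nbconvert import HTMLExporter

# One exporter instance, reused for two different notebooks.
exporter = HTMLExporter()

nb_one = nbformat.v4.new_notebook(cells=[nbformat.v4.new_markdown_cell("First notebook")])
nb_two = nbformat.v4.new_notebook(cells=[nbformat.v4.new_markdown_cell("Second notebook")])

# Each call is independent; no state leaks between conversions.
body_one, _ = exporter.from_notebook_node(nb_one)
body_two, _ = exporter.from_notebook_node(nb_two)
```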
# ## Extracting Figures using the RST Exporter
# When exporting, you may want to extract the base64 encoded figures as files. While the HTML exporter does not do this by default, the `RstExporter` does:
# + jupyter={"outputs_hidden": false}
# Import the RST exporter
from nbconvert import RSTExporter
# Instantiate it
rst_exporter = RSTExporter()
# Convert the notebook to RST format
(body, resources) = rst_exporter.from_notebook_node(jake_notebook)
print(body[:970] + '...')
print('[.....]')
print(body[800:1200] + '...')
# -
# Notice that base64 images are not embedded, but instead there are filename-like strings, such as `output_3_0.png`. The strings actually are (configurable) keys that map to the binary data in the resources dict.
#
# Note, if you write an RST Plugin, you are responsible for writing all the files to the disk (or uploading, etc...) in the right location. Of course, the naming scheme is configurable.
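# If you are not using nbconvert's built-in writers, persisting the extracted figures
# yourself is straightforward: each entry in `resources['outputs']` maps a filename to
# raw bytes. A minimal sketch of what such a plugin would do (the function name and
# `build_dir` default are arbitrary choices for this example):

```python
import os

def write_extracted_outputs(resources, build_dir="extracted"):
    """Write each extracted resource (filename -> raw bytes) into build_dir."""
    os.makedirs(build_dir, exist_ok=True)
    written = []
    for filename, data in resources.get("outputs", {}).items():
        path = os.path.join(build_dir, filename)
        with open(path, "wb") as handle:
            handle.write(data)  # data is already binary, e.g. PNG bytes
        written.append(path)
    return written
```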
#
# As an exercise, this notebook will show you how to get one of those images. First, take a look at the `'outputs'` of the returned resources dictionary. This is a dictionary that contains a key for each extracted resource, with values corresponding to the actual base64 encoding:
# + jupyter={"outputs_hidden": false}
sorted(resources['outputs'].keys())
# -
# In this case, there are 5 extracted binary figures, all `png`s. We can use the Image display object to actually display one of the images:
# + jupyter={"outputs_hidden": false}
from IPython.display import Image
Image(data=resources['outputs']['output_3_0.png'], format='png')
# -
# Note that this image is being rendered without ever reading or writing to the disk.
# ## Extracting Figures using the HTML Exporter
# As mentioned above, by default, the HTML exporter does not extract images -- it just leaves them as inline base64 encodings. However, this is not always what you might want. For example, here is a use case from @jakevdp:
#
# > I write an [awesome blog](http://jakevdp.github.io/) using Jupyter notebooks converted to HTML, and I want the images to be cached. Having one html file with all of the images base64 encoded inside it is nice when sharing with a coworker, but for a website, not so much. I need an HTML exporter, and I want it to extract the figures!
# ### Some theory
#
# Before we get into actually extracting the figures, it will be helpful to give a high-level overview of the process of converting a notebook to another format:
#
# 1. Retrieve the notebook and its accompanying resources (you are responsible for this).
# 2. Feed the notebook into the `Exporter`, which:
# 1. Sequentially feeds the notebook into an array of `Preprocessor`s. Preprocessors only act on the **structure** of the notebook, and have unrestricted access to it.
# 2. Feeds the notebook into the Jinja templating engine, which converts it to a particular format depending on which template is selected.
# 3. The exporter returns the converted notebook and other relevant resources as a tuple.
# 4. You write the data to the disk using the built-in `FilesWriter` (which writes the notebook and any extracted files to disk), or elsewhere using a custom `Writer`.
#
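# Step 4 above is usually handled by nbconvert's `FilesWriter`. A short sketch,
# assuming `nbconvert` and `nbformat` are installed; the throwaway in-memory notebook
# and the `build` output directory name are just choices for this example:

```python
import nbformat
from nbconvert import HTMLExporter
from nbconvert.writers import FilesWriter

# Build a tiny notebook in memory and export it to HTML.
nb = nbformat.v4.new_notebook(cells=[nbformat.v4.new_markdown_cell("Hello, writer")])
body, resources = HTMLExporter().from_notebook_node(nb)

# FilesWriter writes the converted notebook (plus any extracted resources)
# into build_directory; the extension comes from resources['output_extension'].
writer = FilesWriter(build_directory="build")
writer.write(body, resources, notebook_name="demo")  # creates build/demo.html
```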
# ### Using different preprocessors
#
# To extract the figures when using the HTML exporter, we will want to change which `Preprocessor`s we are using. There are several preprocessors that come with nbconvert, including one called the `ExtractOutputPreprocessor`.
#
# The `ExtractOutputPreprocessor` is responsible for crawling the notebook, finding all of the figures, and putting them into the resources directory, as well as choosing the key (i.e. `filename_xx_y.extension`) that can replace the figure inside the template. To enable the `ExtractOutputPreprocessor`, we must add it to the exporter's list of preprocessors:
# + jupyter={"outputs_hidden": false}
# create a configuration object that changes the preprocessors
from traitlets.config import Config
c = Config()
c.HTMLExporter.preprocessors = ['nbconvert.preprocessors.ExtractOutputPreprocessor']
# create the new exporter using the custom config
html_exporter_with_figs = HTMLExporter(config=c)
html_exporter_with_figs.preprocessors
# -
# We can compare the result of converting the notebook using the original HTML exporter and our new customized one:
# + jupyter={"outputs_hidden": false}
(_, resources) = html_exporter.from_notebook_node(jake_notebook)
(_, resources_with_fig) = html_exporter_with_figs.from_notebook_node(jake_notebook)
print("resources without figures:")
print(sorted(resources.keys()))
print("\nresources with extracted figures (notice that there's one more field called 'outputs'):")
print(sorted(resources_with_fig.keys()))
print("\nthe actual figures are:")
print(sorted(resources_with_fig['outputs'].keys()))
# -
# ## Custom Preprocessors
# There are an endless number of transformations that you may want to apply to a notebook. In particularly complicated cases, you may want to actually create your own `Preprocessor`. Above, when we customized the list of preprocessors accepted by the `HTMLExporter`, we passed in a string -- this can be any valid module name. So, if you create your own preprocessor, you can include it in that same list and it will be used by the exporter.
#
# To create your own preprocessor, you will need to subclass from `nbconvert.preprocessors.Preprocessor` and overwrite either the `preprocess` and/or `preprocess_cell` methods.
# ## Example
# The following demonstration adds the ability to exclude a cell by index.
#
# Note: injecting cells is similar, and won't be covered here. If you want to inject static content at the beginning/end of a notebook, use a custom template.
# + jupyter={"outputs_hidden": false}
from traitlets import Integer
from nbconvert.preprocessors import Preprocessor
class PelicanSubCell(Preprocessor):
    """A Pelican specific preprocessor to remove some of the cells of a notebook"""

    # I could also read the cells from nb.metadata.pelican if someone wrote a JS extension,
    # but for now I'll stay with a configurable value.
    start = Integer(0, help="first cell of notebook to be converted").tag(config=True)
    end = Integer(-1, help="last cell of notebook to be converted").tag(config=True)

    def preprocess(self, nb, resources):
        self.log.info("I'll keep only cells from %d to %d", self.start, self.end)
        nb.cells = nb.cells[self.start:self.end]
        return nb, resources
# -
# Here a Pelican exporter is created that takes `PelicanSubCell` preprocessors and a `config` object as parameters. This may seem redundant, but with the configuration system you can register an inactive preprocessor on all of the exporters and activate it from config files or the command line.
# + jupyter={"outputs_hidden": false}
# Create a new config object that configures both the new preprocessor, as well as the exporter
c = Config()
c.PelicanSubCell.start = 4
c.PelicanSubCell.end = 6
c.RSTExporter.preprocessors = [PelicanSubCell]
# Create our new, customized exporter that uses our custom preprocessor
pelican = RSTExporter(config=c)
# Process the notebook
print(pelican.from_notebook_node(jake_notebook)[0])
# -
# ## Programmatically creating templates
# + jupyter={"outputs_hidden": false}
from jinja2 import DictLoader
dl = DictLoader({'footer':
"""
{%- extends 'lab/index.html.j2' -%}
{% block footer %}
FOOOOOOOOTEEEEER
{% endblock footer %}
"""})
exportHTML = HTMLExporter(extra_loaders=[dl], template_file='footer')
(body, resources) = exportHTML.from_notebook_node(jake_notebook)
for l in body.split('\n')[-4:]:
    print(l)
# -
# ## Real World Uses
# @jakevdp uses Pelican and Jupyter Notebook to blog. Pelican [will use](https://github.com/getpelican/pelican-plugins/pull/21) nbconvert programmatically to generate blog posts. Have a look at [Pythonic Perambulations](http://jakevdp.github.io/) for Jake's blog posts.
# @damianavila wrote the Nikola plugin to [write blog posts as notebooks](http://damianavila.github.io/blog/posts/one-line-deployment-of-your-site-to-gh-pages.html) and is developing a js-extension to publish notebooks via one click from the web app.
# <center>
# <blockquote class="twitter-tweet"><p>As <a href="https://twitter.com/Mbussonn">@Mbussonn</a> requested... easieeeeer! Deploy your Nikola site with just a click in the IPython notebook! <a href="http://t.co/860sJunZvj">http://t.co/860sJunZvj</a> cc <a href="https://twitter.com/ralsina">@ralsina</a></p>— <NAME> (@damian_avila) <a href="https://twitter.com/damian_avila/statuses/370306057828335616">August 21, 2013</a></blockquote>
# </center>
# source file: docs/source/nbconvert_library.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# Most examples work across multiple plotting backends, this example is also available for:
#
# * [Bokeh - mandelbrot section](../bokeh/mandelbrot_section.ipynb)
#
# HoloViews demo that used to be showcased on the holoviews.org homepage.
import numpy as np
import holoviews as hv
from holoviews import dim, opts
hv.extension('matplotlib')
# # Load the data
# +
import io
try: from urllib2 import urlopen
except: from urllib.request import urlopen
raw = urlopen('http://assets.holoviews.org/data/mandelbrot.npy').read()
array = np.load(io.BytesIO(raw)).astype(np.float32)
# -
# # Plot
# +
dots = np.linspace(-0.45, 0.45, 19)
fractal = hv.Image(array)
# First example on the old holoviews.org homepage was:
# ((fractal * hv.HLine(y=0)).hist() + fractal.sample(y=0))
layouts = {y: (fractal * hv.Points(fractal.sample([(i,y) for i in dots])) +
fractal.sample(y=y) +
hv.operation.threshold(fractal, level=np.percentile(fractal.sample(y=y)['z'], 90)) +
hv.operation.contours(fractal, levels=[np.percentile(fractal.sample(y=y)['z'], 60)]))
for y in np.linspace(-0.3, 0.3, 21)}
layout = hv.HoloMap(layouts, kdims='Y').collate().cols(2)
layout.opts(
opts.Contours(color='w', show_legend=False),
opts.Points(s=dim('z')*50))
# source file: examples/gallery/demos/matplotlib/mandelbrot_section.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a id="title_ID"></a>
# # JWST Pipeline Validation Notebook:
# # calwebb_image3, source_catalog
#
# <span style="color:red"> **Instruments Affected**</span>: e.g., FGS, MIRI, NIRCam, NIRISS, NIRSpec
#
# ### Table of Contents
#
# <div style="text-align: left">
#
# <br> [Introduction](#intro)
# <br> [JWST CalWG Algorithm](#algorithm)
# <br> [Defining Terms](#terms)
# <br> [Test Description](#description)
# <br> [Data Description](#data_descr)
# <br> [Set up Temporary Directory](#tempdir)
# <br> [Imports](#imports)
# <br> [Loading the Data](#data_load)
# <br> [Run the Image3Pipeline](#pipeline)
# <br> [Perform Visual Inspection](#visualization)
# <br> [Manually Find Matches](#manual)
# <br> [About This Notebook](#about)
# <br>
#
# </div>
# <a id="intro"></a>
# # Introduction
#
# This is the NIRCam validation notebook for the Source Catalog step, which generates a catalog based on input exposures.
#
# * Step description: https://jwst-pipeline.readthedocs.io/en/latest/jwst/source_catalog/index.html
#
# * Pipeline code: https://github.com/spacetelescope/jwst/tree/master/jwst/source_catalog
#
# [Top of Page](#title_ID)
# <a id="algorithm"></a>
# # JWST CalWG Algorithm
#
# This is the NIRCam imaging validation notebook for the Source Catalog step, which uses image combinations or stacks of overlapping images to generate "browse-quality" source catalogs. Having automated source catalogs will help accelerate the science output of JWST. The source catalogs should include both point and "slightly" extended sources at a minimum. The catalog should provide an indication if the source is a point or an extended source. For point sources, the source catalog should include measurements corrected to infinite aperture using aperture corrections provided by a reference file.
#
# See:
# * https://outerspace.stsci.edu/display/JWSTCC/Vanilla+Point+Source+Catalog
#
#
# [Top of Page](#title_ID)
# <a id="terms"></a>
# # Defining Terms
#
# * JWST: James Webb Space Telescope
#
# * NIRCam: Near-Infrared Camera
#
#
# [Top of Page](#title_ID)
# <a id="description"></a>
# # Test Description
#
# Here we generate the source catalog and visually inspect a plot of the image with the source catalog overlaid. We also look at some other diagnostic plots and then cross-check the output catalog against Mirage catalog inputs.
#
#
# [Top of Page](#title_ID)
# <a id="data_descr"></a>
# # Data Description
#
# The set of data used in this test was created with the Mirage simulator. The simulator created NIRCam imaging mode exposures for the shortwave NRCA1 detector.
#
#
# [Top of Page](#title_ID)
# <a id="tempdir"></a>
# # Set up Temporary Directory
# The following cell sets up a temporary directory (using python's `tempfile.TemporaryDirectory()`), and changes the script's active directory into that directory (using python's `os.chdir()`). This is so that, when the notebook is run through, it will download files to (and create output files in) the temporary directory rather than in the notebook's directory. This makes cleanup significantly easier (since all output files are deleted when the notebook is shut down), and also means that different notebooks in the same directory won't interfere with each other when run by the automated webpage generation process.
#
# If you want the notebook to generate output in the notebook's directory, simply don't run this cell.
#
# If you have a file (or files) that are kept in the notebook's directory, and that the notebook needs to use while running, you can copy that file into the directory (the code to do so is present below, but commented out).
# +
#****
#
# Set this variable to False to not use the temporary directory
#
#****
use_tempdir = True
# Create a temporary directory to hold notebook output, and change the working directory to that directory.
from tempfile import TemporaryDirectory
import os
import shutil
if use_tempdir:
    data_dir = TemporaryDirectory()

    # If you have files that are in the notebook's directory, but that the notebook will need to use while
    # running, copy them into the temporary directory here.
    #
    # files = ['name_of_file']
    # for file_name in files:
    #     shutil.copy(file_name, os.path.join(data_dir.name, file_name))

    # Save the original directory
    orig_dir = os.getcwd()

    # Move to the new directory
    os.chdir(data_dir.name)

# For info, print out where the script is running
print("Running in {}".format(os.getcwd()))
# -
# [Top of Page](#title_ID)
# ## If Desired, set up CRDS to use a local cache
#
# By default, the notebook template environment sets up its CRDS cache (the "CRDS_PATH" environment variable) in /grp/crds/cache. However, if the notebook is running on a local machine without a fast and reliable connection to central storage, it makes more sense to put the CRDS cache locally. Currently, the cell below offers several options, and will check the supplied boolean variables one at a time until one matches.
#
# * if `use_local_crds_cache` is False, then the CRDS cache will be kept in /grp/crds/cache
# * if `use_local_crds_cache` is True, the CRDS cache will be kept locally
# * if `crds_cache_tempdir` is True, the CRDS cache will be kept in the temporary directory
# * if `crds_cache_notebook_dir` is True, the CRDS cache will be kept in the same directory as the notebook.
# * if `crds_cache_home` is True, the CRDS cache will be kept in $HOME/crds/cache
# * if `crds_cache_custom_dir` is True, the CRDS cache will be kept in whatever is stored in the
# `crds_cache_dir_name` variable.
#
# If the above cell (creating a temporary directory) is not run, then setting `crds_cache_tempdir` to True will store the CRDS cache in the notebook's directory (the same as setting `crds_cache_notebook_dir` to True).
# +
import os
# Choose CRDS cache location
use_local_crds_cache = False
crds_cache_tempdir = False
crds_cache_notebook_dir = False
crds_cache_home = False
crds_cache_custom_dir = False
crds_cache_dir_name = ""
if use_local_crds_cache:
    if crds_cache_tempdir:
        os.environ['CRDS_PATH'] = os.path.join(os.getcwd(), "crds")
    elif crds_cache_notebook_dir:
        try:
            os.environ['CRDS_PATH'] = os.path.join(orig_dir, "crds")
        except Exception:
            os.environ['CRDS_PATH'] = os.path.join(os.getcwd(), "crds")
    elif crds_cache_home:
        os.environ['CRDS_PATH'] = os.path.join(os.environ['HOME'], 'crds', 'cache')
    elif crds_cache_custom_dir:
        os.environ['CRDS_PATH'] = crds_cache_dir_name
# -
# [Top of Page](#title_ID)
# <a id="imports"></a>
# # Imports
# List the package imports and why they are relevant to this notebook.
#
#
# * astropy for various tools and packages
# * inspect to get the docstring of our objects.
# * IPython.display for printing markdown output
# * jwst.datamodels for JWST Pipeline data models
# * jwst.module.PipelineStep is the pipeline step being tested
# * matplotlib.pyplot.plt to generate plot
# +
# plotting, the inline must come before the matplotlib import
# %matplotlib inline
# # %matplotlib notebook
# These gymnastics are needed to make the sizes of the figures
# be the same in both the inline and notebook versions
# %config InlineBackend.print_figure_kwargs = {'bbox_inches': None}
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['savefig.dpi'] = 80
mpl.rcParams['figure.dpi'] = 80
import matplotlib.patches as patches
params = {'legend.fontsize': 6,
'figure.figsize': (8, 8),
'figure.dpi': 150,
'axes.labelsize': 6,
'axes.titlesize': 6,
'xtick.labelsize':6,
'ytick.labelsize':6}
plt.rcParams.update(params)
# Box download imports
from astropy.utils.data import download_file
from pathlib import Path
from shutil import move
from os.path import splitext
# python general
import os
import numpy as np
# astropy modules
import astropy
from astropy.io import fits
from astropy.table import QTable, Table, vstack, unique
from astropy.wcs.utils import skycoord_to_pixel
from astropy.coordinates import SkyCoord
from astropy.visualization import simple_norm
from astropy import units as u
import photutils
# jwst
from jwst.pipeline import calwebb_image3
from jwst import datamodels
# -
def create_image(data_2d, xpixel=None, ypixel=None, title=None):
    '''Function to generate a 2D image of the data,
    with an option to highlight a specific pixel.
    '''
    fig = plt.figure(figsize=(8, 8))
    ax = plt.subplot()

    norm = simple_norm(data_2d, 'sqrt', percent=99.)
    plt.imshow(data_2d, norm=norm, origin='lower', cmap='gray')

    if xpixel and ypixel:
        plt.plot(xpixel, ypixel, marker='o', color='red', label='Selected Pixel')

    plt.xlabel('Pixel column')
    plt.ylabel('Pixel row')
    if title:
        plt.title(title)
    plt.subplots_adjust(left=0.15)
    plt.colorbar(label='MJy/sr')
def create_image_with_cat(data_2d, catalog, flux_limit=None, title=None):
    '''Function to generate a 2D image of the data,
    with sources overlaid.
    '''
    fig = plt.figure(figsize=(8, 8))
    ax = plt.subplot()

    norm = simple_norm(data_2d, 'sqrt', percent=99.)
    plt.imshow(data_2d, norm=norm, origin='lower', cmap='gray')

    for row in catalog:
        if flux_limit:
            if np.isnan(row['aper_total_flux']):
                pass
            else:
                if row['aper_total_flux'] > flux_limit:
                    plt.plot(row['xcentroid'], row['ycentroid'], marker='o', markersize=3, color='red')
        else:
            plt.plot(row['xcentroid'], row['ycentroid'], marker='o', markersize=1, color='red')

    plt.xlabel('Pixel column')
    plt.ylabel('Pixel row')
    if title:
        plt.title(title)
    plt.subplots_adjust(left=0.15)
    plt.colorbar(label='MJy/sr')
def create_scatterplot(catalog_colx, catalog_coly, title=None):
    '''Function to generate a generic scatterplot.
    '''
    fig = plt.figure(figsize=(8, 8))
    ax = plt.subplot()
    ax.scatter(catalog_colx, catalog_coly)
    plt.xlabel(catalog_colx.name)
    plt.ylabel(catalog_coly.name)
    if title:
        plt.title(title)
def get_input_table(sourcelist):
'''Function to read in and access the simulator source input files.'''
all_source_table = Table()
# point source and galaxy source tables have different headers
# change column headers to match for filtering later
if "point" in sourcelist:
col_names = ["RA", "Dec", "RA_degrees", "Dec_degrees",
"PixelX", "PixelY", "Magnitude",
"counts_sec", "counts_frame"]
elif "galaxy" in sourcelist:
col_names = ["PixelX", "PixelY", "RA", "Dec",
"RA_degrees", "Dec_degrees", "V2", "V3", "radius",
"ellipticity", "pos_angle", "sersic_index",
"Magnitude", "countrate_e_s", "counts_per_frame_e"]
else:
print('Error! Source list column names need to be defined.')
sys.exit(1)
# read in the tables
input_source_table = Table.read(sourcelist,format='ascii')
orig_colnames = input_source_table.colnames
# only grab values for source catalog analysis
short_source_table = Table({'In_RA': input_source_table['RA_degrees'],
'In_Dec': input_source_table['Dec_degrees']},
names=['In_RA', 'In_Dec'])
# combine source lists into one master list
all_source_table = vstack([all_source_table, short_source_table])
# set up columns to track which sources were detected by Photutils
all_source_table['Out_RA'] = np.nan
all_source_table['Out_Dec'] = np.nan
all_source_table['Detected'] = 'N'
all_source_table['RA_Diff'] = np.nan
all_source_table['Dec_Diff'] = np.nan
# filter by RA, Dec (for now)
no_duplicates = unique(all_source_table,keys=['In_RA','In_Dec'])
return no_duplicates
# [Top of Page](#title_ID)
# <a id="data_load"></a>
# # Loading the Data
# The simulated exposures used for this test are stored in Box. Grab them.
# +
def get_box_files(file_list):
for box_url,file_name in file_list:
if 'https' not in box_url:
box_url = 'https://stsci.box.com/shared/static/' + box_url
downloaded_file = download_file(box_url)
if Path(file_name).suffix == '':
ext = splitext(box_url)[1]
file_name += ext
move(downloaded_file, file_name)
file_urls = ['https://stsci.box.com/shared/static/72fds4rfn4ppxv2tuj9qy2vbiao110pc.fits',
'https://stsci.box.com/shared/static/gxwtxoz5abnsx7wriqligyzxacjoz9h3.fits',
'https://stsci.box.com/shared/static/tninaa6a28tsa1z128u3ffzlzxr9p270.fits',
'https://stsci.box.com/shared/static/g4zlkv9qi0vc5brpw2lamekf4ekwcfdn.json',
'https://stsci.box.com/shared/static/kvusxulegx0xfb0uhdecu5dp8jkeluhm.list']
file_names = ['jw00042002001_01101_00004_nrca5_cal.fits',
'jw00042002001_01101_00005_nrca5_cal.fits',
'jw00042002001_01101_00006_nrca5_cal.fits',
'level3_lw_imaging_files_asn.json',
'jw00042002001_01101_00004_nrca5_uncal_galaxySources.list']
box_download_list = [(url,name) for url,name in zip(file_urls,file_names)]
# -
get_box_files(box_download_list)
# [Top of Page](#title_ID)
# <a id="pipeline"></a>
# # Run the Image3Pipeline
#
# Run calwebb_image3 to get the output source catalog and the final 2D image.
img3 = calwebb_image3.Image3Pipeline()
img3.assign_mtwcs.skip=True
img3.save_results=True
img3.resample.save_results=True
img3.source_catalog.snr_threshold = 5
img3.source_catalog.save_results=True
img3.run(file_names[3])
# [Top of Page](#title_ID)
# <a id="visualization"></a>
# # Perform Visual Inspection
#
# Perform the visual inspection of the catalog and the final image.
catalog = Table.read("lw_imaging_cat.ecsv")
combined_image = datamodels.ImageModel("lw_imaging_i2d.fits")
create_image(combined_image.data, title="Final combined NIRCam image")
create_image_with_cat(combined_image.data, catalog, title="Final image w/ catalog overlaid")
catalog
create_scatterplot(catalog['id'], catalog['aper_total_flux'],title='Total Flux in '+str(catalog['aper_total_flux'].unit))
create_scatterplot(catalog['id'], catalog['aper_total_abmag'],title='Total AB mag')
# [Top of Page](#title_ID)
# <a id="manual"></a>
# # Manually Find Matches
#
# Since this is a simulated data set, we can compare the output catalog information from the pipeline with the input catalog information used to create the simulation. Grab the input catalog RA, Dec values and the output catalog RA, Dec values.
test_outputs = get_input_table(file_names[4])
in_ra = test_outputs['In_RA'].data
in_dec = test_outputs['In_Dec'].data
out_ra = catalog['sky_centroid'].ra.deg
out_dec = catalog['sky_centroid'].dec.deg
# Set the tolerance and initialize our counters.
tol = 1.e-3
found_count=0
multiples_count=0
missed_count=0
# Below we loop through the input RA, Dec values and compare them to the RA, Dec values in the output catalog, counting the cases where the tolerance yields multiple matches.
for ra,dec,idx in zip(in_ra, in_dec,range(len(test_outputs))):
match = np.where((np.abs(ra-out_ra) < tol) & (np.abs(dec-out_dec) < tol))
if np.size(match) == 1:
found_count +=1
test_outputs['Detected'][idx] = 'Y'
test_outputs['Out_RA'][idx] = out_ra[match]
test_outputs['Out_Dec'][idx] = out_dec[match]
test_outputs['RA_Diff'][idx] = np.abs(ra-out_ra[match])
test_outputs['Dec_Diff'][idx] = np.abs(dec-out_dec[match])
if np.size(match) > 1:
multiples_count +=1
if np.size(match) < 1:
missed_count +=1
# Let's see how it did.
# +
total_percent_found = (found_count/len(test_outputs))*100
print('\n')
print('SNR threshold used for pipeline: ',img3.source_catalog.snr_threshold)
print('Total found:',found_count)
print('Total missed:',missed_count)
print('Number of multiples: ',multiples_count)
print('Total number of input sources:',len(test_outputs))
print('Total number in output catalog:',len(catalog))
print('Total percent found:',total_percent_found)
print('\n')
# -
# ### Use astropy to find catalog matches
# Astropy's coordinates package can match sources between catalogs given a maximum separation value. Set that value and compare the two catalogs.
catalog_in = SkyCoord(ra=in_ra*u.degree, dec=in_dec*u.degree)
catalog_out = SkyCoord(ra=out_ra*u.degree, dec=out_dec*u.degree)
max_sep = 1.0 * u.arcsec
idx, d2d, d3d = catalog_in.match_to_catalog_sky(catalog_out)
sep_constraint = d2d < max_sep
catalog_in_matches = catalog_in[sep_constraint]
catalog_out_matches = catalog_out[idx[sep_constraint]]
# Now, ```catalog_in_matches``` and ```catalog_out_matches``` are the matched sources in ```catalog_in``` and ```catalog_out```, respectively, which are separated less than our ```max_sep``` value.
print('Number of matched sources using max separation of '+str(max_sep)+': ',len(catalog_out_matches))
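For intuition, the same match-within-a-maximum-separation logic can be sketched in plain NumPy on a few hypothetical coordinates (flat-sky approximation in degrees; the positions below are illustrative, not from the pipeline output):

```python
import numpy as np

# Hypothetical input and output source positions (degrees)
in_coords = np.array([[10.000, -5.000], [10.002, -5.001], [12.000, -6.000]])
out_coords = np.array([[10.0001, -5.0001], [12.5, -6.5]])

max_sep_deg = 1.0 / 3600.0  # 1 arcsec expressed in degrees

# Pairwise separations between every input and output source
diffs = in_coords[:, None, :] - out_coords[None, :, :]
d2d = np.sqrt((diffs ** 2).sum(axis=-1))  # shape (n_in, n_out)

nearest = d2d.argmin(axis=1)           # closest output source per input source
matched = d2d.min(axis=1) < max_sep_deg  # within the separation tolerance?

print(nearest, matched)
```

`match_to_catalog_sky` does the same thing on the celestial sphere (proper angular separations) and also returns the distances, so the separation constraint can be applied exactly as in the cell above.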
# <a id="about_ID"></a>
# ## About this Notebook
# **Author:** <NAME>, Senior Staff Scientist, NIRCam
# <br>**Updated On:** 05/26/2021
# [Top of Page](#title_ID)
# <img style="float: right;" src="./stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="stsci_pri_combo_mark_horizonal_white_bkgd" width="200px"/>
| jwst_validation_notebooks/source_catalog/jwst_source_catalog_nircam_test/jwst_nircam_imaging_source_catalog.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from DatasetHandler.BiwiBrowser import *
from LSTM_VGG16.LSTM_VGG16Helper import *
# %matplotlib inline
output_begin = 4
num_outputs = 1
timesteps = None # TimeseriesGenerator Handles overlapping
in_epochs = 1
out_epochs = 1
train_batch_size = 10
test_batch_size = 10
subjectList = [9, 2, 3, 4] #, 5, 7, 8, 11, 12, 14 except [6, 13, 10, ]
testSubjects = [1]
num_datasets = len(subjectList)
def getFinalModel(num_outputs = num_outputs):
inp = (224, 224, 3) # BIWI_Frame_Shape
vgg_model = VGG16(weights='imagenet', input_shape=inp)  # include_top defaults to True; the top layer is popped below
vgg_model.layers.pop()
vgg_model.outputs = [vgg_model.layers[-1].output]#
vgg_model.output_layers = [vgg_model.layers[-1]]#
vgg_model.layers[-1].outbound_nodes = []#
nb_pretrained_layers = len(vgg_model.layers)
for layer in vgg_model.layers: #
layer.trainable = False#
#print(nb_pretrained_layers)
#vgg_model.summary()
rnn = Sequential()
rnn.add(TimeDistributed(vgg_model, batch_input_shape=(train_batch_size, 1, inp[0], inp[1], inp[2]), name = 'tdVGG16')) #Input
rnn.add(TimeDistributed(Flatten()))
# rnn.add(TimeDistributed(Dense(4096, activation='relu'), name = 'fc1024')), activation='relu'
# rnn.add(TimeDistributed(Dense(4096, activation='relu'), name = 'fc104'))
# rnn.add(TimeDistributed(Dropout(0.25)))
# rnn.add(TimeDistributed(Dense(124, activation='relu'), name = 'fc10'))
# rnn.add(TimeDistributed(Dropout(0.25)))
rnn.add(LSTM(128, dropout=0.25, recurrent_dropout=0.25, stateful = True))
# rnn.add(Flatten())
rnn.add(Dense(num_outputs))
#print(len(rnn.layers))
for layer in rnn.layers[:-2]:#
layer.trainable = False#
rnn.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae'])
return rnn
keras.backend.clear_session()#
full_model = getFinalModel(num_outputs = num_outputs)
full_model.summary()
full_model = trainImageModelForEpochs(full_model, out_epochs, subjectList, testSubjects, timesteps, False, output_begin, num_outputs, batch_size = train_batch_size, in_epochs = in_epochs)
test_generators, test_labelSets = getTestBiwiForImageModel(testSubjects, timesteps, False, output_begin, num_outputs, batch_size = test_batch_size)
test_gen, test_labels = test_generators[0], test_labelSets[0] #[1]
predictions = full_model.predict_generator(test_gen, verbose = 1)
#predictions = full_model.predict(test_gen[0][0], verbose = 1)
output1 = numpy.concatenate((test_labels[timesteps:, :1], predictions[:, :1]), axis=1)
print([i[0] for i in predictions[:10]])
plt.figure(figsize=(30,10))
plt.plot(output1)
len(test_gen[0][0])
test_gen[23][0][0][0][124][110]
| DeepRL_For_HPE/Older_VGG16Runner_Notebooks/VGG16Runner_v9.2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + deletable=true editable=true
# %matplotlib notebook
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.stats
# + deletable=true editable=true
data = pd.read_csv('data.csv')
print("data has {} measurements for {} variables".format(*data.shape))
print("\n{}\n...".format(data.head(8)))
# + deletable=true editable=true
countries = ['Canada', 'USA', 'England', 'Italy', 'Switzerland']
languages = ['English', 'French', 'Spanish', 'German', 'Italian']
F = pd.crosstab(data.country, data.language, margins=True)
F.index = [*countries, 'col totals']
F.columns = [*languages, 'row totals']
print("{}".format(F))
# + deletable=true editable=true
F = pd.crosstab(data.country, data.language, margins=False)
F.index = countries
F.columns = languages
chisq_stat, p_value, dof, E = scipy.stats.chi2_contingency(F)
print('Results of Chi-squared test of independence\n')
print(' Chi-squared test statistic: {:02.2F}'.format(chisq_stat))
print(' degrees of freedom: {}'.format(dof))
print(' p-value: {:02.6F}'.format(p_value))
# + deletable=true editable=true
print('matrix of observations F:\n\n{}'.format(F))
# +
P = F / F.sum().sum()
print('correspondence matrix P:\n\n{}'.format(P))
# +
row_centroid = P.sum(axis=1)
print('row centroid (marginal frequency distribution over countries):\n\n{}'.format(row_centroid))
# +
col_centroid = P.sum(axis=0)
print('column centroid (marginal frequency distribution over languages):\n\n{}'.format(col_centroid))
# + deletable=true editable=true
row_totals = F.sum(axis=1)
print("row totals (marginal frequency distribution over the countries):\n\n{}".format(row_totals))
# + deletable=true editable=true
col_totals = F.sum(axis=0)
print("column totals (marginal frequency distribution over the languages):\n\n{}".format(col_totals))
# + deletable=true editable=true
data = []
for _,row in P.iterrows():
acc = []
cntry_i = row.name
p_iplus = row_centroid.loc[cntry_i]  # .ix is deprecated; use .loc
for cntry_k in P.index:
p_kplus = row_centroid.loc[cntry_k]
chisqd = np.sqrt(np.sum(np.square(row/p_iplus - P.loc[cntry_k]/p_kplus) / col_centroid))
acc.append(chisqd)
data.append(acc)
row2row_chisqd = pd.DataFrame(data, index=P.index, columns=P.index)
print("row-to-row Chi-squared distance table:\n\n{}".format(row2row_chisqd))
# + deletable=true editable=true
PT = P.T
data = []
for _,row in PT.iterrows():
acc = []
lang_j = row.name
p_plusj = col_centroid.loc[lang_j]  # .ix is deprecated; use .loc
for lang_k in PT.index:
p_plusk = col_centroid.loc[lang_k]
chisqd = np.sqrt(np.sum(np.square(row/p_plusj - PT.loc[lang_k]/p_plusk) / row_centroid))
acc.append(chisqd)
data.append(acc)
col2col_chisqd = pd.DataFrame(data, index=PT.index, columns=PT.index)
print("column-to-column Chi-squared distance table:\n\n{}".format(col2col_chisqd))
# +
Mu_ij = row_centroid.values.reshape((P.index.size,1)) * col_centroid.values.reshape((1,P.columns.size))
Lambda = (P - Mu_ij) / np.sqrt(Mu_ij)
print('inertia Lambda:\n\n{}'.format(Lambda))
# + deletable=true editable=true
U,S,V = np.linalg.svd(Lambda)
num_sv = np.arange(1,len(S)+1)
cum_var_explained = [np.sum(np.square(S[0:n])) / np.sum(np.square(S)) for n in num_sv]
print('Using first singular value, {:0.3F}% variance explained'.format(100 * cum_var_explained[0]))
print('Using first 2 singular values, {:0.3F}% variance explained'.format(100 * cum_var_explained[1]))
print('Using first 3 singular values, {:0.3F}% variance explained'.format(100 * cum_var_explained[2]))
# + deletable=true editable=true
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(num_sv, cum_var_explained, color='#2171b5', label='variance explained')
plt.scatter(num_sv, S, marker='s', color='#fc4e2a', label='singular values')
plt.legend(loc='lower left', scatterpoints=1)
ax.set_xticks(num_sv)
ax.set_xlim([0.8, 5.1])
ax.set_ylim([0.0, 1.1])
ax.set_xlabel('Number of singular values used')
ax.set_title('Singular values & cumulative variance explained',
fontsize=16,
y=1.03)
plt.grid()
# + deletable=true editable=true
cntry_x = U[:,0]
cntry_y = U[:,1]
cntry_z = U[:,2]
lang_x = V.T[:,0]
lang_y = V.T[:,1]
lang_z = V.T[:,2]
# + deletable=true editable=true
import pylab
from mpl_toolkits.mplot3d import Axes3D, proj3d
fig = pylab.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(cntry_x, cntry_y, cntry_z, marker='s', s=50, c='#2171b5')
cntry_labels = []
for i,(x,y,z) in enumerate(zip(cntry_x,cntry_y,cntry_z)):
x2, y2, _ = proj3d.proj_transform(x,y,z, ax.get_proj())
label = pylab.annotate(Lambda.index[i],
xy=(x2,y2),
xytext=(-2,2),
textcoords='offset points',
ha='right',
va='bottom',
color='#2171b5')
cntry_labels.append(label)
ax.scatter(lang_x, lang_y, lang_z, marker='o', s=50, c='#fc4e2a')
lang_labels = []
for i,(x,y,z) in enumerate(zip(lang_x,lang_y,lang_z)):
x2, y2, _ = proj3d.proj_transform(x,y,z, ax.get_proj())
label = pylab.annotate(Lambda.columns[i],
xy=(x2,y2),
xytext=(-2,2),
textcoords='offset points',
ha='right',
va='bottom',
color='#fc4e2a')
lang_labels.append(label)
def update_position(e):
for i,(x,y,z) in enumerate(zip(cntry_x,cntry_y,cntry_z)):
x2, y2, _ = proj3d.proj_transform(x,y,z, ax.get_proj())
cntry_labels[i].xy = x2, y2
for i,(x,y,z) in enumerate(zip(lang_x,lang_y,lang_z)):
x2, y2, _ = proj3d.proj_transform(x,y,z, ax.get_proj())
lang_labels[i].xy = x2, y2
fig.canvas.draw()
fig.canvas.mpl_connect('button_release_event', update_position)
ax.set_xlabel(r'$1^{st}$ singular value')
ax.set_xticks([-0.5, 0.0, 0.5])
ax.set_ylabel(r'$2^{nd}$ singular value')
ax.set_yticks([-0.5, 0.0, 0.4])
ax.set_zlabel(r'$3^{rd}$ singular value')
ax.set_zticks([-0.5, 0.0, 0.5])
ax.set_title('Correspondence Analysis with 3 Singular Values (3D)',
fontsize=16,
y=1.1)
pylab.show()
# -
# ----
# + [markdown] deletable=true editable=true
# http://www.mathematica-journal.com/2010/09/an-introduction-to-correspondence-analysis/
#
# https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3718710/
# + [markdown] deletable=true editable=true
# Next we derive the _correspondence matrix_ $P$ from $F$ with
#
# \begin{align}
# P &= \left[ p_{ij} \right] \\
# &= \left[ \frac{f_{ij}}{n} \right] & \text{where } n = \sum_{i=1}^{I} \sum_{j=1}^{J} f_{ij}
# \end{align}
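As a minimal numeric check of this definition (a toy 2×2 frequency table, not the notebook's data):

```python
import numpy as np

# Toy contingency table of observed frequencies f_ij
F = np.array([[20.0, 10.0],
              [5.0, 15.0]])

n = F.sum()        # grand total over all cells
P = F / n          # correspondence matrix of relative frequencies p_ij

print(P, P.sum())
```

By construction the entries of $P$ sum to exactly 1.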
# + [markdown] deletable=true editable=true
# ----
# + [markdown] deletable=true editable=true
# The $\chi^2$ distances between rows gives us a clue as to how the countries relate to one another in terms of the primary spoken languages.
#
# The $\chi^2$ distance between rows $i$ and $k$ is given by
#
# \begin{align}
# d_{ik} &= \sqrt{\sum_{j=1}^{J} \frac{(p_{ij}/p_{i+} - p_{kj}/p_{k+})^2}{p_{+j}} }
# \end{align}
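A toy check of this distance on a hypothetical 2×2 correspondence matrix (values chosen so the result is easy to verify by hand):

```python
import numpy as np

# Toy correspondence matrix (entries sum to 1)
P = np.array([[0.40, 0.10],
              [0.10, 0.40]])

row_mass = P.sum(axis=1)   # p_i+
col_mass = P.sum(axis=0)   # p_+j

# Row profiles: p_ij / p_i+
profiles = P / row_mass[:, None]

def chi2_row_dist(i, k):
    # Chi-squared distance between row profiles i and k, weighted by 1/p_+j
    return np.sqrt(np.sum((profiles[i] - profiles[k]) ** 2 / col_mass))

d = chi2_row_dist(0, 1)
print(d)
```

Here the profiles are $(0.8, 0.2)$ and $(0.2, 0.8)$ with column masses $(0.5, 0.5)$, so $d = \sqrt{2 \cdot 0.36/0.5} = 1.2$.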
# + [markdown] deletable=true editable=true
# We can see in this row-to-row $\chi^2$ distance table that the Anglophone countries (Canada, USA, and England) should cluster near one another, while Italy and Switzerland each sit apart from the other countries.
# + [markdown] deletable=true editable=true
# Conversely, the $\chi^2$ distances between columns gives us a clue as to how the languages relate to one another in terms of the countries.
#
# The $\chi^2$ distance between columns $j$ and $k$ is given by
#
# \begin{align}
# d_{jk} &= \sqrt{\sum_{i=1}^{I} \frac{(p_{ij}/p_{+j} - p_{kj}/p_{+k})^2}{p_{i+}} }
# \end{align}
# + [markdown] deletable=true editable=true
# For the languages, we can see from the column-to-column $\chi^2$ distances that English and Spanish should be closely related, with French somewhere between English and German. Italian, however, should be sitting alone all by itself away from the others.
# + [markdown] deletable=true editable=true
# ----
# + [markdown] deletable=true editable=true
# We start with a matrix of _standardized residuals_:
#
# \begin{align}
# \Omega &= \left[ \frac{p_{ij} - \mu_{ij}}{\sqrt{\mu_{ij}}} \right]
# \end{align}
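Under exact independence, $p_{ij} = p_{i+}\, p_{+j}$, so every standardized residual is zero. A quick sanity check on a toy independent table (hypothetical masses, not the notebook's data):

```python
import numpy as np

row_mass = np.array([0.6, 0.4])   # p_i+
col_mass = np.array([0.7, 0.3])   # p_+j

# Perfectly independent table: p_ij = p_i+ * p_+j
P_indep = np.outer(row_mass, col_mass)

# Expected cell masses and standardized residuals
Mu = np.outer(P_indep.sum(axis=1), P_indep.sum(axis=0))
Omega = (P_indep - Mu) / np.sqrt(Mu)

print(Omega)
```

Any nonzero entry of $\Omega$ therefore measures a departure from independence in that cell, which is exactly what the SVD of the inertia matrix decomposes.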
# + deletable=true editable=true
| correspondence_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1.1 Activation Maximization
# ## Introduction
#
# **This section corresponds to Section 3.1 of the original paper.**
#
# When we are asked to describe a certain concept, say, a chair, it is impossible to come up with a precise description because there are so many types of chairs. However, there are certain characteristics unique to chairs that let us recognize a chair when we see one, such as a platform that we can sit on or legs that support the platform. With such features of a chair in mind, we may be able to draw a picture of a typical chair. In this section, we ask a deep neural network (DNN) to give us a prototype image $x^{*}$ that describes characteristics common to the set of objects it was trained to recognize.
#
# A deep neural network classifier mapping a set of data points or images $x$ to a set of classes $(\omega_c)_c$ models the conditional probability distribution $p(\omega_c | x)$. Therefore, a prototype $x^{*}$ can be found by optimizing the following objective:
#
# \begin{equation}
# \max_x \log p(\omega_c \, | \, x) - \lambda \lVert x\rVert^2
# \end{equation}
#
# This can be achieved by first training a deep neural network and then performing gradient ascent on images with randomly initialized pixel values. The rightmost term of the objective function ($\lambda \lVert x\rVert^2$) is an $l_2$-norm regularizer that prevents prototype images from deviating largely from the origin. As we will see later, activation maximization is crude compared to other interpretation techniques in that it is unable to produce natural-looking prototype images.
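The optimization itself is ordinary gradient ascent. A minimal NumPy sketch on a toy linear-softmax "network" (fixed hypothetical weights, not the MNIST model trained below) shows the regularized objective increasing as $x$ moves toward the class prototype:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))      # toy "network": logits = W @ x
c = 0                            # class whose prototype we seek
lam = 0.01                       # l2 regularization strength
lr = 0.05

def objective(x):
    z = W @ x
    logp = z[c] - np.log(np.sum(np.exp(z)))   # log softmax probability of class c
    return logp - lam * np.dot(x, x)

x = rng.normal(size=5) * 0.01    # randomly initialized "image"
j0 = objective(x)
for _ in range(300):
    z = W @ x
    p = np.exp(z - z.max()); p /= p.sum()
    grad = W[c] - p @ W - 2 * lam * x         # d/dx [log p(c|x) - lam * ||x||^2]
    x += lr * grad                            # gradient ascent step
j1 = objective(x)
print(j0, j1)
```

The DNN case below is the same idea, except the gradient with respect to the input pixels is obtained by backpropagation through the trained network.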
# ## Tensorflow Walkthrough
# ### 1. Import Dependencies
#
# We import a pre-built DNN that we will train and then perform activation maximization on. You can check out `models_1_1.py` in the models directory for more network details.
# +
import os
from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from models.models_1_1 import MNIST_DNN
# %matplotlib inline
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
logdir = './tf_logs/1_1_AM/'
ckptdir = logdir + 'model'
if not os.path.exists(logdir):
os.mkdir(logdir)
# -
# ### 2. Building Graph
#
# In this step, we initialize a DNN classifier and attach necessary nodes for model training onto the computation graph.
# +
with tf.name_scope('Classifier'):
# Initialize neural network
DNN = MNIST_DNN('DNN')
# Setup training process
lmda = tf.placeholder_with_default(0.01, shape=[], name='lambda')
X = tf.placeholder(tf.float32, [None, 784], name='X')
Y = tf.placeholder(tf.float32, [None, 10], name='Y')
tf.add_to_collection('placeholders', lmda)
logits = DNN(X)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer().minimize(cost, var_list=DNN.vars)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
cost_summary = tf.summary.scalar('Cost', cost)
accuracy_summary = tf.summary.scalar('Accuracy', accuracy)
summary = tf.summary.merge_all()
# -
# ### 3. Building Subgraph for Generating Prototypes
#
# Before training the network, we attach a subgraph for generating prototypes onto the computation graph.
# +
with tf.name_scope('Prototype'):
X_mean = tf.placeholder(tf.float32, [10, 784], name='X_mean')
X_prototype = tf.get_variable('X_prototype', shape=[10, 784], initializer=tf.constant_initializer(0.))
Y_prototype = tf.one_hot(tf.cast(tf.lin_space(0., 9., 10), tf.int32), depth=10)
logits_prototype = DNN(X_prototype, reuse=True)
# Objective function definition
cost_prototype = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits_prototype, labels=Y_prototype)) \
+ lmda * tf.nn.l2_loss(X_prototype - X_mean)
optimizer_prototype = tf.train.AdamOptimizer().minimize(cost_prototype, var_list=[X_prototype])
# Add the subgraph nodes to a collection so that they can be used after training of the network
tf.add_to_collection('prototype', X_mean)
tf.add_to_collection('prototype', X_prototype)
tf.add_to_collection('prototype', Y_prototype)
tf.add_to_collection('prototype', logits_prototype)
tf.add_to_collection('prototype', cost_prototype)
tf.add_to_collection('prototype', optimizer_prototype)
# -
# Here's the general structure of the computation graph visualized using tensorboard.
#
# 
# ### 4. Training Network
#
# This is the step where the DNN is trained to classify the 10 digits of the MNIST images. Summaries are written into the logdir, and you can visualize the statistics with TensorBoard by running: `tensorboard --logdir=./tf_logs`
# +
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
# Hyper parameters
training_epochs = 15
batch_size = 100
for epoch in range(training_epochs):
total_batch = int(mnist.train.num_examples / batch_size)
avg_cost = 0
avg_acc = 0
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
_, c, a, summary_str = sess.run([optimizer, cost, accuracy, summary], feed_dict={X: batch_xs, Y: batch_ys})
avg_cost += c / total_batch
avg_acc += a / total_batch
file_writer.add_summary(summary_str, epoch * total_batch + i)
print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.9f}'.format(avg_cost), 'accuracy =', '{:.9f}'.format(avg_acc))
saver.save(sess, ckptdir)
print('Accuracy:', sess.run(accuracy, feed_dict={X: mnist.test.images, Y: mnist.test.labels}))
sess.close()
# -
# ### 5. Restoring Subgraph
#
# Here we first rebuild the DNN graph from metagraph, restore the DNN parameters from the checkpoint and then gather the necessary nodes for prototype generation using the `tf.get_collection()` function (recall prototype subgraph nodes were added onto the 'prototype' collection at step 3).
# +
tf.reset_default_graph()
sess = tf.InteractiveSession()
new_saver = tf.train.import_meta_graph(ckptdir + '.meta')
new_saver.restore(sess, tf.train.latest_checkpoint(logdir))
# Get necessary placeholders
lmda = tf.get_collection('placeholders')[0]
# Get prototype nodes
prototype = tf.get_collection('prototype')
X_mean = prototype[0]
X_prototype = prototype[1]
cost_prototype = prototype[4]
optimizer_prototype = prototype[5]
# -
# ### 6. Generating Prototype Images
#
# Before performing gradient ascent, we calculate the image means $\overline{x}$ that will be used to regularize the prototype images. Then, we generate prototype images that maximize $\log p(\omega_c | x) - \lambda \lVert x - \overline{x}\rVert^2$. I used 0.1 for lambda (lmda), but fine-tuning may produce better prototype images.
# +
images = mnist.train.images
labels = mnist.train.labels
img_means = []
for i in range(10):
img_means.append(np.mean(images[np.argmax(labels, axis=1) == i], axis=0))
for epoch in range(5000):
_, c = sess.run([optimizer_prototype, cost_prototype], feed_dict={lmda: 0.1, X_mean: img_means})
if epoch % 500 == 0:
print('Epoch: {:05d} Cost = {:.9f}'.format(epoch, c))
X_prototypes = sess.run(X_prototype)
sess.close()
# -
# ### 7. Displaying Images
#
# As I mentioned in the introduction, the resulting prototype images are rather blurry.
# +
plt.figure(figsize=(15,15))
for i in range(5):
plt.subplot(5, 2, 2 * i + 1)
plt.imshow(np.reshape(X_prototypes[2 * i], [28, 28]), cmap='gray', interpolation='none')
plt.title('Digit: {}'.format(2 * i))
plt.colorbar()
plt.subplot(5, 2, 2 * i + 2)
plt.imshow(np.reshape(X_prototypes[2 * i + 1], [28, 28]), cmap='gray', interpolation='none')
plt.title('Digit: {}'.format(2 * i + 1))
plt.colorbar()
plt.tight_layout()
# -
| 1.1 Activation Maximization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ChiefGokhlayeh/MV/blob/main/Assignment4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="LS7T20gWaEkg"
# # Machine Vision - Assignment 4: Image Classification with Convolutional Neural Networks
#
# ---
#
# Prof. Dr. <NAME>, Esslingen University of Applied Sciences
#
# <EMAIL>
#
# ---
#
# This is the fourth assignment for the "Machine Vision" lecture.
# It covers:
# * training a deep CNN from scratch on [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html)
# * evaluating the effects of different optimizers and regularization
# * finetuning an existing CNN on [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html)
#
# **Make sure that "GPU" is selected in Runtime -> Change runtime type**
#
# To successfully complete this assignment, it is assumed that you already have some experience in Python and numpy. You can either use [Google Colab](https://colab.research.google.com/) for free with a private (dedicated) Google account (recommended) or a local Jupyter installation.
#
# ---
#
# + [markdown] id="RH3vu3HiqaZS"
# ## Preparations
#
# + [markdown] id="MFz0s31SxNyP"
# ### Import important libraries (you should probably start with these lines all the time ...)
# + id="xPcN62DgZ6Gg"
# OpenCV
import cv2
# NumPy
import numpy as np
# Python stuff
import glob, urllib, os
# Matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as patches
# make sure we show all plots directly below each cell
# %matplotlib inline
# Some Colab specific packages
if 'google.colab' in str(get_ipython()):
# image display
from google.colab.patches import cv2_imshow
# Tensorflow and Keras
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense, Dropout, BatchNormalization, Activation, Input, Lambda
from keras.optimizers import Adam, RMSprop, SGD
from keras.utils import to_categorical
from keras.datasets import cifar10
from keras.preprocessing import image
# Misc stuff
import operator
# Check the GPU that we got from Colab
# !nvidia-smi
device_name = tf.test.gpu_device_name()
print("Device used for TensorFlow : {}".format(device_name))
# + [markdown] id="ANRNsyDPTm9x"
#
# ### Some helper functions that we will need
# + id="jQ5AIQr1TsHU"
def my_imshow(image, windowTitle="Image"):
'''
Displays an image and differentiates between Google Colab and a local Python installation.
Args:
image: The image to be displayed
Returns:
-
'''
if 'google.colab' in str(get_ipython()):
cv2_imshow(image)
else:
cv2.imshow(windowTitle, image)
# + [markdown] id="ybKcetvDlt-o"
# ### In Google Colab only:
# Mount the Google Drive associated with your Google account. You will have to click the authorization link and enter the obtained authorization code here in Colab.
# + id="88rhI7gsltLi"
# Mount Google Drive
if 'google.colab' in str(get_ipython()):
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# + [markdown] id="IcxOS80YuDCD"
# ### If you are not using Google Colab:
# Just store the image in the same folder as your Jupyter notebook file.
# + [markdown] id="xQJnwTJQPzvF"
# ## Exercise 1 - Define and train a (small) CNN on CIFAR-10 (10 points)
#
# In this exercise you will be defining and training a small deep CNN on CIFAR-10. We will build on the previous assignment, where a fully connected multilayer perceptron has been trained on CIFAR-10. Its performance on the validation dataset was approx. 53% which is not a stellar performance. The CNN will significantly improve performance over the multilayer perceptron.
#
# Additionally, the effect of different optimizers and regularization techniques will be evaluated step-by-step.
#
# We will be using TensorFlow and [Keras](https://keras.io/), a high-level API built on top of TensorFlow that provides an easier API to the training of neural networks in comparison to plain TensorFlow.
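Before writing the model, it helps to be able to predict layer parameter counts by hand: a `Conv2D` layer with "same" padding has `(kh * kw * in_channels + 1) * filters` trainable parameters (weights plus one bias per filter), and each 2×2 max-pooling layer halves the spatial size. A small pure-Python sketch of this arithmetic (the filter counts are only examples of the formula):

```python
def conv2d_params(kh, kw, in_ch, filters):
    # kernel weights + one bias per filter
    return (kh * kw * in_ch + 1) * filters

def dense_params(in_units, out_units):
    # fully connected weights + one bias per output unit
    return (in_units + 1) * out_units

# 32x32 input; each of three 2x2 MaxPool layers halves H and W
size = 32
for _ in range(3):
    size //= 2

print(conv2d_params(3, 3, 3, 32))          # 3x3 conv on an RGB input, 32 filters
print(conv2d_params(3, 3, 32, 32))         # 3x3 conv, 32 -> 32 channels
print(size, dense_params(size * size * 128, 128))
```

Comparing these numbers against `model.summary()` is a quick way to confirm your layer definitions match what you intended.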
# + [markdown] id="JFzeGgygOeX8"
# ### Getting familiar with the CIFAR-10 dataset (**PROVIDED**)
# + id="qXZ4tzXccZB9"
# CIFAR-10 is available as standard dataset in Keras. Nice :)
# load the data
(trainSamples, _trainLabels), (testSamples, _testLabels) = cifar10.load_data()
# scale the image data to float 0-1 (always recommended with neural networks)
trainSamples = trainSamples.astype('float32') / 255.0
testSamples = testSamples.astype('float32') / 255.0
# convert a class vector (integers) to binary class matrix.
trainLabels = to_categorical(_trainLabels)
testLabels = to_categorical(_testLabels)
# text representation of class labels
classNames = ['airplane', 'automobile', 'bird', \
'cat', 'deer', 'dog', \
'frog', 'horse', 'ship', 'truck']
# Visualize 25 random images
plt.figure(figsize=(10,10))
indices = np.arange(len(trainSamples))
np.random.shuffle(indices)
count=0
for i in indices[0:25]:
plt.subplot(5,5,count+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(trainSamples[i], cmap=plt.cm.binary)
plt.xlabel("label: {}".format(classNames[np.argmax(trainLabels[i])]))
count = count+1
plt.show()
# + [markdown] id="ATTSTJH6P8ZX"
# ### CNN Model Definition (**add your code here**)
#
# We want to design a standard "feed-forward" CNN. In Keras-terms, this is referred to as a [sequential model](https://www.tensorflow.org/guide/keras/sequential_model). The basic TensorFlow tutorial on [Convolutional Neural Networks](https://www.tensorflow.org/tutorials/images/cnn) is a good resource to learn how CNNs are defined and trained.
#
# We will need the following layers (input to output):
# * 1 [Input](https://www.tensorflow.org/api_docs/python/tf/keras/Input) layer with ```shape = (32,32,3)``` that inputs our 32x32x3 image into the CNN
#
#
# * 2 [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) convolutional layers with **32** filters of size 3x3, ```relu``` activation functions, ```he_uniform``` kernel initializers and ```same``` padding
# * 1 [MaxPool2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) layer with 2x2 pooling
#
# * 2 [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) convolutional layers with **64** filters of size 3x3, ```relu``` activation functions, ```he_uniform``` kernel initializers and ```same``` padding
# * 1 [MaxPool2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) layer with 2x2 pooling
#
# * 2 [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) convolutional layers with **128** filters of size 3x3, ```relu``` activation functions, ```he_uniform``` kernel initializers and ```same``` padding
# * 1 [MaxPool2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) layer with 2x2 pooling
#
#
# * 1 [Flatten](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer to flatten the input for the upcoming fully connected layer
#
# * 1 [Dense](https://keras.io/api/layers/core_layers/dense/) fully connected layer with 128 neurons and ```relu``` activation and ```he_uniform``` kernel initializers
#
# * 1 [Dense](https://keras.io/api/layers/core_layers/dense/) fully connected output layer with 10 neurons (1 per class) and ```softmax``` activation.
#
#
# Your ```model.summary()``` should look as follows (layer indices might differ). Notice how the representation shrinks with each pooling layer.
#
# ```_________________________________________________________________
# Model: "sequential"
# _________________________________________________________________
# Layer (type) Output Shape Param #
# =================================================================
# conv2d (Conv2D) (None, 32, 32, 32) 896
# _________________________________________________________________
# conv2d_1 (Conv2D) (None, 32, 32, 32) 9248
# _________________________________________________________________
# max_pooling2d (MaxPooling2D) (None, 16, 16, 32) 0
# _________________________________________________________________
# conv2d_2 (Conv2D) (None, 16, 16, 64) 18496
# _________________________________________________________________
# conv2d_3 (Conv2D) (None, 16, 16, 64) 36928
# _________________________________________________________________
# max_pooling2d_1 (MaxPooling2 (None, 8, 8, 64) 0
# _________________________________________________________________
# conv2d_4 (Conv2D) (None, 8, 8, 128) 73856
# _________________________________________________________________
# conv2d_5 (Conv2D) (None, 8, 8, 128) 147584
# _________________________________________________________________
# max_pooling2d_2 (MaxPooling2 (None, 4, 4, 128) 0
# _________________________________________________________________
# flatten (Flatten) (None, 2048) 0
# _________________________________________________________________
# dense (Dense) (None, 128) 262272
# _________________________________________________________________
# dense_1 (Dense) (None, 10) 1290
# =================================================================
# Total params: 550,570
# Trainable params: 550,570
# Non-trainable params: 0
#
# ```
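# A quick sanity check of the parameter counts listed above (a sketch, not part of the assignment): a `Conv2D` layer has k·k·c_in·c_out weights plus c_out biases, and a `Dense` layer has n_in·n_out weights plus n_out biases.

```python
# Recompute the parameter counts from the summary by hand (pure Python).
def conv2d_params(k, c_in, c_out):
    # k*k*c_in weights per filter, plus one bias per filter
    return k * k * c_in * c_out + c_out

def dense_params(n_in, n_out):
    return n_in * n_out + n_out

counts = [
    conv2d_params(3, 3, 32),         # conv2d   -> 896
    conv2d_params(3, 32, 32),        # conv2d_1 -> 9248
    conv2d_params(3, 32, 64),        # conv2d_2 -> 18496
    conv2d_params(3, 64, 64),        # conv2d_3 -> 36928
    conv2d_params(3, 64, 128),       # conv2d_4 -> 73856
    conv2d_params(3, 128, 128),      # conv2d_5 -> 147584
    dense_params(4 * 4 * 128, 128),  # dense    -> 262272
    dense_params(128, 10),           # dense_1  -> 1290
]
print(sum(counts))  # 550570, matching "Total params: 550,570"
```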
# + id="CrPJ4BVfQFIV"
model = Sequential()
##### YOUR CODE GOES HERE ######
# Define the layers of the CNN model
# input layer
model.add(Input(shape=(32, 32, 3)))
# 2 conv layers and 1 pooling layer
model.add(Conv2D(filters=32, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_uniform'))
model.add(Conv2D(filters=32, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_uniform'))
model.add(MaxPool2D(pool_size=(2, 2)))
# 2 conv layers and 1 pooling layer
model.add(Conv2D(filters=64, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_uniform'))
model.add(Conv2D(filters=64, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_uniform'))
model.add(MaxPool2D(pool_size=(2, 2)))
# 2 conv layers and 1 pooling layer
model.add(Conv2D(filters=128, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_uniform'))
model.add(Conv2D(filters=128, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_uniform'))
model.add(MaxPool2D(pool_size=(2, 2)))
# fully connected layer
model.add(Flatten())
model.add(Dense(units=128, activation='relu', kernel_initializer='he_uniform'))
# output layer
model.add(Dense(units=10, activation='softmax'))
################################
print(model.summary())
# + [markdown] id="j5P7r_rHQ9hm"
# ### CNN training with different optimizers (**add your code here**)
#
# Compile the model ([model.compile()](https://keras.io/api/models/model_training_apis/)) and use ```categorical_crossentropy``` as loss and ```accuracy``` as metric (see previous assignment).
#
#
# Train your CNN using [model.fit()](https://keras.io/api/models/model_training_apis/). Pass ```trainSamples``` and ```trainLabels``` as the training set and ```testSamples``` and ```testLabels``` as ```validation_data```.
#
# Use the following hyper-parameters:
# * ```batch_size = 64```
# * ```epochs = 30```
# * ```verbose = 1```
#
# Run the training three times and switch optimizers with each training run by passing different optimizers to [model.compile()](https://keras.io/api/models/model_training_apis/) (all other settings remain the same):
# * Stochastic Gradient Descent (plain): ```optimizer=SGD(learning_rate=3e-4)```
# * RMSprop (with momentum): ```optimizer=RMSprop(learning_rate=3e-4)```
# * Adam: ```optimizer=Adam(learning_rate=3e-4)```
#
# Different optimizers might need different numbers of training epochs (a hyperparameter!). To find a good number of training epochs for each optimizer, we can use [tf.keras.callbacks.EarlyStopping](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping) to stop training automatically based on a performance metric that is evaluated during training. Browse through [tf.keras.callbacks.EarlyStopping](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping) and add ```EarlyStopping``` as a callback to [model.fit()](https://keras.io/api/models/model_training_apis/) to stop training when ```val_loss```, the current loss on the test set, has not improved for 3 epochs.
#
# The overall training should take between 7 and 15 seconds per epoch (**on a GPU**). Training times depend on the GPU that we have been assigned by Colab. Reported accuracy on the test data should be approx. 74% with the best optimizer variant.
#
# Compare the different results by computing the accuracy on the test set using:
# ```
# # evaluate model
# _, acc = model.evaluate(testSamples, testLabels, verbose=1)
# print("Accuracy = {}".format(acc))
# ```
#
# **Which optimizer gave the best accuracy on the test set?**
#
# + id="IsVNpyKvRAUC"
# clone the untrained model for each optimizer (clone_model re-initializes the weights), so every run trains from scratch
modelSGD = tf.keras.models.clone_model(model)
modelRMS = tf.keras.models.clone_model(model)
modelAdam = tf.keras.models.clone_model(model)
##### YOUR CODE GOES HERE ######
optimizer_acc = {}
earlyStopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)
print("Training model using SGD optimizer...")
modelSGD.compile(optimizer=SGD(learning_rate=3e-4),
                 loss='categorical_crossentropy',
                 metrics=['accuracy'])
historySGD = modelSGD.fit(x=trainSamples,
y=trainLabels,
validation_data=(testSamples, testLabels),
batch_size=64,
epochs=30,
verbose=1,
callbacks=(earlyStopping,))
_, acc = modelSGD.evaluate(testSamples, testLabels, verbose=1)
optimizer_acc[modelSGD.optimizer] = acc
print(f"SGD Optimizer Accuracy = {acc:.4f}\n")
print("Training model using RMS optimizer...")
modelRMS.compile(optimizer=RMSprop(learning_rate=3e-4),
                 loss='categorical_crossentropy',
                 metrics=['accuracy'])
historyRMS = modelRMS.fit(x=trainSamples,
y=trainLabels,
validation_data=(testSamples, testLabels),
batch_size=64,
epochs=30,
verbose=1,
callbacks=(earlyStopping,))
_, acc = modelRMS.evaluate(testSamples, testLabels, verbose=1)
optimizer_acc[modelRMS.optimizer] = acc
print(f"RMS Optimizer Accuracy = {acc:.4f}\n")
print("Training model using Adam optimizer...")
modelAdam.compile(optimizer=Adam(learning_rate=3e-4),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
historyAdam = modelAdam.fit(x=trainSamples,
y=trainLabels,
validation_data=(testSamples, testLabels),
batch_size=64,
epochs=30,
verbose=1,
callbacks=(earlyStopping,))
_, acc = modelAdam.evaluate(testSamples, testLabels, verbose=1)
optimizer_acc[modelAdam.optimizer] = acc
print(f"Adam Optimizer Accuracy = {acc:.4f}\n")
best_optimizer = max(optimizer_acc, key=optimizer_acc.get)
print(f"Best Optimizer: {type(best_optimizer).__name__} with accuracy: {optimizer_acc[best_optimizer]:.4f}")
################################
# + [markdown] id="RbxoU8sZA_we"
# ### Adding regularization (**add your code here**)
#
# Now we add some regularization to our CNN in terms of [Dropout](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout) layers with a ```rate``` of 0.33, i.e. 33% of the neurons will be randomly disabled in each batch.
#
# Add a [Dropout](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout) layer after every MaxPool2D layer in your model and retrain using the optimizer that performed best in the previous evaluation.
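# As a side note, Keras implements "inverted" dropout: at training time a fraction ```rate``` of the activations is zeroed and the survivors are scaled by 1/(1-rate), so the expected activation is unchanged and no rescaling is needed at test time. A small NumPy sketch of this behavior (illustrative only, not part of the assignment):

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 0.33
x = np.ones(10000)

# zero out ~33% of the activations and rescale the survivors
mask = rng.random(x.shape) >= rate
y = x * mask / (1.0 - rate)

print(1 - mask.mean())  # fraction dropped: close to 0.33
print(y.mean())         # mean activation preserved: close to 1.0
```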
# + id="anJBa9xFBzzw"
modelDropOut = Sequential()
##### YOUR CODE GOES HERE ######
# Define the layers of the CNN model with dropout
# input layer
modelDropOut.add(Input(shape=(32, 32, 3)))
# 2 conv layers and 1 pooling layer
modelDropOut.add(Conv2D(filters=32, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_uniform'))
modelDropOut.add(Conv2D(filters=32, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_uniform'))
modelDropOut.add(MaxPool2D(pool_size=(2, 2)))
modelDropOut.add(Dropout(rate=0.33))
# 2 conv layers and 1 pooling layer
modelDropOut.add(Conv2D(filters=64, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_uniform'))
modelDropOut.add(Conv2D(filters=64, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_uniform'))
modelDropOut.add(MaxPool2D(pool_size=(2, 2)))
modelDropOut.add(Dropout(rate=0.33))
# 2 conv layers and 1 pooling layer
modelDropOut.add(Conv2D(filters=128, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_uniform'))
modelDropOut.add(Conv2D(filters=128, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_uniform'))
modelDropOut.add(MaxPool2D(pool_size=(2, 2)))
modelDropOut.add(Dropout(rate=0.33))
# fully connected layer
modelDropOut.add(Flatten())
modelDropOut.add(Dense(units=128, activation='relu', kernel_initializer='he_uniform'))
# output layer
modelDropOut.add(Dense(units=10, activation='softmax'))
# Model summary
print(modelDropOut.summary())
# train the CNN
print(f"Training model using {type(best_optimizer).__name__} optimizer...")
# compile with a fresh copy of the best optimizer (reusing an already-trained optimizer instance carries over its state)
modelDropOut.compile(optimizer=type(best_optimizer).from_config(best_optimizer.get_config()),
                     loss='categorical_crossentropy',
                     metrics=['accuracy'])
historyDropOut = modelDropOut.fit(x=trainSamples,
y=trainLabels,
validation_data=(testSamples, testLabels),
batch_size=64,
epochs=30,
verbose=1,
callbacks=(earlyStopping,))
_, acc = modelDropOut.evaluate(testSamples, testLabels, verbose=1)
print(f"Dropout Model with {type(best_optimizer).__name__} Optimizer Accuracy = {acc:.4f}")
################################
# + [markdown] id="FuSs_vknCtkC"
# ### Visualize some predictions (**PROVIDED**)
#
# Now we should have a model that can achieve approx. 80% accuracy on the test set. Let's look at some predictions.
# + id="CSY0FhF5C05u"
def visualizePredictions(model, testSamples, testSamplesUnnormalized):
# select 50 images randomly from the test set and run them through the CNN
plt.figure(figsize=(20,10))
# 50 random images
indices = np.arange(len(testSamples))
np.random.shuffle(indices)
count=0
for i in indices[0:50]:
plt.subplot(5,10,count+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(testSamplesUnnormalized[i], cmap=plt.cm.binary)
# predict with the CNN (add a batch dimension: 32 x 32 x 3 -> 1 x 32 x 32 x 3)
prediction = model.predict(np.expand_dims(testSamples[i], axis=0))
# visualize true and predicted labels
groundTruthLabel = classNames[np.argmax(testLabels[i])]
predictedLabel = classNames[np.argmax(prediction)]
plt.xlabel("T: {} / P: {}".format(groundTruthLabel, predictedLabel))
count = count+1
plt.show()
# + id="Dq-3rRuAPZ_5"
# visualize predictions of our current model
visualizePredictions(modelDropOut, testSamples, testSamples)
# + [markdown] id="6kdud_yCDpaT"
# ### Visualize some more predictions (**PROVIDED**)
#
# That does look quite good on CIFAR-10's test set. Let's see how the model performs on some random images grabbed from Google.
# + id="7vMKBQXrD0cU"
import io
import requests
import zipfile
# directory to extract the downloaded test images to
testDir = "/content/cnn/"
url = 'https://raw.githubusercontent.com/ChiefGokhlayeh/MV/main/data/cnn/test_images.zip'
response = requests.get(url, allow_redirects = True)
stream = io.BytesIO(response.content)
print("unzipping {}".format(url))
with zipfile.ZipFile(stream, 'r') as zip_ref:
zip_ref.extractall(testDir)
# some helper functions to load images from a given URL and run a prediction
def loadImageFromURL(url):
# download the image, convert it to a NumPy array, and then read
# it into OpenCV format
resp = urllib.request.urlopen(url)
image = np.asarray(bytearray(resp.read()), dtype="uint8")
image = cv2.imdecode(image, cv2.IMREAD_COLOR)
return image
def myPredict(model, image):
# resize image to model dimensions
resizedImage = cv2.resize(image, (32,32))
# run prediction
x=np.expand_dims(resizedImage, axis=0)
x=x.astype('float32') / 255
predictedClasses = model.predict(x)
return predictedClasses
def getArgMaxLabel(classNames, predictedClasses):
return classNames[np.argmax(predictedClasses)]
def visualizePredictionsExtImages(model, predictionFunction):
# get all images in testDir
files = [testDir+f for f in os.listdir(testDir) if f.startswith("test") and f.endswith(".jpg")]
for f in files:
plt.figure()
# load image
testImage = cv2.imread(f)
# display image
plt.imshow(cv2.cvtColor(cv2.resize(testImage, (400,400)), cv2.COLOR_BGR2RGB))
plt.axis("off")
plt.draw()
# evaluate our CNN
predictedClasses = predictionFunction(model, testImage)
predictedLabel = getArgMaxLabel(classNames, predictedClasses)
plt.title("{} with a probability of {:.3f}%".format(predictedLabel, 100*np.max(predictedClasses)))
plt.draw()
# + id="2xWZmY5sPnS9"
# visualize predictions of our current model
visualizePredictionsExtImages(modelDropOut, myPredict)
# + [markdown] id="SY_ni4U0HNGn"
# ## Exercise 2 - Fine-tuning / transfer learning of a larger CNN on CIFAR-10 (10 points)
#
# In this exercise you will be [fine-tuning](https://www.tensorflow.org/tutorials/images/transfer_learning) a large CNN that has been trained on ImageNet to the CIFAR-10 dataset. As we have seen in the lecture, this involves replacing the output layer by (a set of) output layers that correspond to our problem. Here: 10 output neurons for the 10 classes of CIFAR-10.
#
# As a base CNN we will use [InceptionResNet-V2](https://keras.io/api/applications/inceptionresnetv2/).
# + [markdown] id="1R8YGKXeIFPh"
# ### CNN model definition (**PROVIDED**)
#
# We will load a pre-trained InceptionResNet-V2 and replace its output layers by a new stack of fully connected layers. Luckily, common models that have been pre-trained on ImageNet are readily available in [keras.applications](https://keras.io/api/applications/).
#
# + id="wams4zAGIfw5"
# 1) Load the base InceptionResNet-V2 that has been trained on imagenet
# We can omit the original ImageNet output layer directly while loading the model.
# Additionally, we need to upscale our 32,32 to the input that InceptionResNet-V2 requires (160,160,3)
# We will link different parts of our model together:
# input -> upscale(input) -> base_model(upscale) -> base_model.output -> input to our new classification "head"
# define our input tensor: (32x32x3) images
inputs = Input(shape=(32, 32, 3))
# upscale layer to automatically upscale our images to the correct InceptionResNet-V2 input size
upscale = Lambda(lambda x: tf.image.resize_with_pad(x,
160,
160,
method=tf.image.ResizeMethod.BILINEAR))(inputs)
# load the base model and set the input to "upscale"
base_model = tf.keras.applications.InceptionResNetV2(include_top=False,
weights='imagenet',
input_tensor=upscale,
input_shape=(160,160,3),
pooling='max')
# 2) We can choose to enable or disable training of the base model, e.g. only train our new top layers or the whole model
base_model.trainable = False
# 3) Add our custom top layers to classify CIFAR-10
out = base_model.output # we build our new layers at the end of the base model's output
out = Flatten()(out)
out = BatchNormalization()(out)
out = Dense(256, activation='relu')(out)
out = Dropout(0.3)(out)
out = BatchNormalization()(out)
out = Dense(128, activation='relu')(out)
out = Dropout(0.3)(out)
out = BatchNormalization()(out)
out = Dense(64, activation='relu')(out)
out = Dropout(0.3)(out)
# 10 output neurons, 1 for every CIFAR-10 class
out = Dense(10, activation='softmax')(out)
# 4) Define the new model
modelFineTune = tf.keras.models.Model(inputs=inputs, outputs=out)
print(modelFineTune.summary())
# + [markdown] id="NGALXFooUBtT"
# ### Preprocess the training and test data (**PROVIDED**)
#
# Every pre-trained network in Keras comes with its own preprocessing function that needs to be applied to every training and test image. This preprocessing is the same that has been used to train the initial network on ImageNet.
#
# **From now on, we will need to use the preprocessed datasets for training and testing.**
#
# ---
#
#
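# As a side note, for Inception-ResNet-V2 the Keras ```preprocess_input``` rescales pixel values from [0, 255] to [-1, 1] (x / 127.5 - 1). A pure-NumPy replica of that mapping, for illustration only (use the Keras function in the actual pipeline):

```python
import numpy as np

def preprocess_like_inception(x):
    # rescale [0, 255] pixel values to [-1, 1], as Keras' "tf" preprocessing mode does
    return np.asarray(x, dtype='float32') / 127.5 - 1.0

px = np.array([0.0, 127.5, 255.0])
print(preprocess_like_inception(px))  # [-1.  0.  1.]
```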
# + id="JtP-bsJkUZIU"
# load the data again, since we applied a different normalization in the first Exercise
(trainSamples, _trainLabels), (testSamples, _testLabels) = cifar10.load_data()
# convert a class vector (integers) to binary class matrix.
trainLabels = to_categorical(_trainLabels)
testLabels = to_categorical(_testLabels)
# preprocess training and test samples
trainSamplesPP = tf.keras.applications.inception_resnet_v2.preprocess_input(np.copy(trainSamples))
testSamplesPP = tf.keras.applications.inception_resnet_v2.preprocess_input(np.copy(testSamples))
# class names
# text representation of class labels
classNames = ['airplane', 'automobile', 'bird', \
'cat', 'deer', 'dog', \
'frog', 'horse', 'ship', 'truck']
# + [markdown] id="x9NjvIbYLvdN"
# ### CNN Training (top layers only) (**add your code here**)
#
# Train the model using your best optimizer from Exercise 1 and the same loss, metrics and hyper parameters as before, except for the ```learning_rate``` which should be decreased to 1e-4. Limit the maximum number of epochs to 10 by setting ```epochs=10```. Evaluate the model's accuracy on the test set.
#
# **Depending on the GPU that we are being assigned, one epoch can take up to 5 minutes!**
#
# We can provide [ModelCheckpoint](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint) as an additional callback to ```model.fit()``` to save our model after every epoch. This comes in very handy when training times increase, and we can always continue training by loading the saved model and running ```model.fit()``` again.
#
# **Note that only our new layers are trained! See the number of trainable vs. non-trainable parameters in the output of ```model.summary()```.**
# + id="rcMMI_JiMgEn"
# Create a callback that saves the model's weights after every epoch
checkpointPath = "/content/drive/My Drive/cifar-10-training_inception-resnet-v2/myAwesomeCNN.ckpt"
checkPoint = tf.keras.callbacks.ModelCheckpoint(filepath=checkpointPath,
save_weights_only=True,
verbose=1)
# Create an early stopping callback
earlyStopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)
##### YOUR CODE GOES HERE ######
# Compile the new CNN model
# Train the new CNN model (make sure to use both callbacks defined above !!)
# Evaluate the accuracy
################################
# + [markdown] id="ePdlK-GmQi3I"
# ### Visualize the performance (**PROVIDED**)
# + id="hU-djqGWdR0f"
# load the model from the saved checkpoint
modelFineTune.load_weights(tf.train.latest_checkpoint(os.path.dirname(checkpointPath)))
# + id="vfNRjuiDQol0"
visualizePredictions(modelFineTune, testSamplesPP, testSamples)
# + id="EQQKCQTaQz4Q"
# for external images, we also need to apply the Inception-ResNet-V2 preprocessing
def myPredictInceptionResNet(model, image):
# resize image to model dimensions
resizedImage = cv2.resize(image, (32,32))
# expand dimensions (i.e. build a tensor from a single image)
x=np.expand_dims(resizedImage, axis=0)
# apply preprocessing
x=tf.keras.applications.inception_resnet_v2.preprocess_input(x)
# and predict
predictedClasses = model.predict(x)
return predictedClasses
# + id="a_yu0yV9V5di"
visualizePredictionsExtImages(modelFineTune, myPredictInceptionResNet)
# + [markdown] id="NCJqoCTeRiRT"
# ### CNN Training (full CNN) (**add your code here**)
#
# In a final experiment, train the full CNN from top to bottom using the same parameters as before.
#
# **Depending on the GPU that we are being assigned, one epoch can take up to 17 minutes!**
#
# **How does the performance differ?**
# + id="RcGfyv88S04a"
# Define a fully trainable model
base_model.trainable = True
##### YOUR CODE GOES HERE ######
# Replicate the CNN architecture from above and use 'modelFineTuneFull' as a variable for this CNN model.
################################
# + [markdown] id="pCegxedXS_07"
# ### Visualize the performance (**PROVIDED**)
# + id="Bz4CT63ode75"
# load the model from the saved checkpoint
modelFineTuneFull.load_weights(tf.train.latest_checkpoint(os.path.dirname(checkpointPath)))
# + id="iZ6noAAeTBS-"
visualizePredictions(modelFineTuneFull, testSamplesPP, testSamples)
# + id="fUC-LkisTEIu"
visualizePredictionsExtImages(modelFineTuneFull, myPredictInceptionResNet)
| Assignment4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.6 64-bit
# language: python
# name: python3
# ---
# ## FASTA input file
# Set the path below. Keep the file in the same directory!
diretorio = "João Lucas-15kmer-resposta.fasta"
arquivo = open(diretorio,mode="r")
kmer,sequenciaNucleotideos = arquivo.readlines()
arquivo.close()
# ## K-mers generated in lexicographic order
# +
kmer = int(kmer[3:])
kmers = []
inicio = 0
fim = kmer
while(fim <= len(sequenciaNucleotideos)):
    kmers.append(sequenciaNucleotideos[inicio:fim])
    inicio += 1
    fim += 1
kmers = sorted(kmers) # lexicographic order
# -
# ## Output txt file with the generated k-mers
arquivoGerado = open("k"+str(kmer)+"mer.txt",mode="w")
arquivoGerado.write(",".join(kmers)+",")
arquivoGerado.close()
print("Result written to the current directory!")
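# The sliding-window extraction above can be wrapped in a small reusable function (a sketch, not part of the original script):

```python
def extract_kmers(sequence, k):
    """Return all length-k substrings of `sequence`, in lexicographic order."""
    return sorted(sequence[i:i + k] for i in range(len(sequence) - k + 1))

print(extract_kmers("ACGTA", 3))  # ['ACG', 'CGT', 'GTA']
```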
| Composition.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#imports
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#Enlarge figure
plt.rcParams['figure.figsize'] =(12.0,9.0)
#Read input as csv file
inputData = pd.read_csv('q1.csv')
X = inputData.iloc[:, 0]
Y = inputData.iloc[:, 1]
# # Applying Gradient Descent Algorithm from scratch for Univariate Linear Regression
# +
#gradient Descent Algorithm
theta_zero = 0
theta_one = 0
learningRate = 0.001
iterations = 100
n = len(X)
for i in range(iterations):
    pred_Y = theta_one*X + theta_zero
    D_theta_zero = (-2/n) * sum(Y - pred_Y)
    D_theta_one = (-2/n) * sum( X * (Y - pred_Y))
    theta_zero = theta_zero - learningRate * D_theta_zero
    theta_one = theta_one - learningRate * D_theta_one
# -
print(theta_zero, theta_one)
predicted_Y = theta_zero + theta_one * X
plt.scatter(X,Y)
plt.plot([min(X),max(X)] ,[min(predicted_Y), max(predicted_Y)], color='Green')
plt.xlabel('Population of City in 10,000s')
plt.ylabel('Profit in $10,000s')
plt.show()
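# To convince ourselves that the update rule above is correct, we can cross-check gradient descent against the closed-form least-squares solution on synthetic data (a sketch; this is not the assignment's `q1.csv`):

```python
import numpy as np

# synthetic data with known parameters: y = 4.0 + 2.5 x + noise
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, 200)
Y = 4.0 + 2.5 * X + rng.normal(0, 0.1, 200)

theta_zero, theta_one = 0.0, 0.0
learningRate, iterations, n = 0.01, 5000, len(X)
for _ in range(iterations):
    pred_Y = theta_one * X + theta_zero
    theta_zero -= learningRate * (-2 / n) * np.sum(Y - pred_Y)
    theta_one -= learningRate * (-2 / n) * np.sum(X * (Y - pred_Y))

# closed-form solution via the normal equations
A = np.column_stack([np.ones(n), X])
b0, b1 = np.linalg.lstsq(A, Y, rcond=None)[0]
print(theta_zero, theta_one)  # close to b0 and b1, i.e. roughly 4.0 and 2.5
```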
| Assignment 1/q1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This notebook analyzes data on candidacies elected between 1995 and 2019. The research questions and the hypotheses to be tested were:
#
# 1. After the approval of Law 9.504/1997, did the percentage of women elected to the office of federal deputy increase over the 3 subsequent legislatures, compared to the percentage at the time of its approval?
# - **Hypothesis:** No, the percentage of women elected to the office of federal deputy did not increase over the 3 legislatures following the approval of Law 9.504/1997, compared to the percentage at the time of its approval.
# 2. After the approval of Law 12.034/2009, did the percentage of women elected to the office of federal deputy increase over the 3 subsequent legislatures, compared to the percentage at the time of its approval?
# - **Hypothesis:** Yes, the percentage of women elected to the office of federal deputy increased over the 3 legislatures following the approval of Law 12.034/2009, compared to the percentage at the time of its approval.
# + tags=[]
import pandas as pd
import matplotlib.pyplot as plt
# -
# ## 1. After the approval of Law 9.504/1997, did the percentage of women elected to the office of federal deputy increase over the 3 subsequent legislatures, compared to the percentage at the time of its approval?
# **Hypothesis:** No, the percentage of women elected to the office of federal deputy did not increase over the 3 legislatures following the approval of Law 9.504/1997, compared to the percentage at the time of its approval.
df_legislaturas = pd.read_csv('../dados/legislaturas_1934_2023_limpas.csv')
df_legislaturas.head()
# Restrict the data to the period under analysis (1995 to 2007).
legislaturas_h1 = df_legislaturas[(df_legislaturas['id'] >= 50) & (df_legislaturas['id'] <= 53)]['id'].unique().tolist()
df_candidaturas_eleitas = pd.read_csv('../dados/candidaturas_eleitas.csv')
df_candidaturas_eleitas_h1 = df_candidaturas_eleitas[df_candidaturas_eleitas['idLegislatura'].isin(legislaturas_h1)].copy()
df_candidaturas_eleitas_h1['idLegislatura'].unique()
# Group the data by sex
agrupa_sexo = df_candidaturas_eleitas_h1.groupby(['idLegislatura', 'sexo']).size().to_frame('valorAbsoluto')
# Compute each category's percentage of the total number of elected candidacies
agrupa_sexo['porcentagem'] = round(agrupa_sexo['valorAbsoluto'].div(agrupa_sexo.groupby('idLegislatura')['valorAbsoluto'].transform('sum')).mul(100), 2)
agrupa_sexo_df = agrupa_sexo.reset_index()
agrupa_sexo_df
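# The per-group percentage pattern used above can be checked on a tiny hypothetical table: dividing each count by its group total (via `transform('sum')`) must yield percentages that sum to 100 within each legislature. The counts below are made up for illustration.

```python
import pandas as pd

# hypothetical counts, for illustration only
toy = pd.DataFrame({'idLegislatura': [50, 50, 51, 51],
                    'sexo': ['F', 'M', 'F', 'M'],
                    'valorAbsoluto': [32, 481, 29, 484]})
toy['porcentagem'] = round(
    toy['valorAbsoluto']
       .div(toy.groupby('idLegislatura')['valorAbsoluto'].transform('sum'))
       .mul(100), 2)
print(toy.groupby('idLegislatura')['porcentagem'].sum().round(2).tolist())  # [100.0, 100.0]
```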
# Prepare the data for plotting
mulher_h1 = agrupa_sexo_df[agrupa_sexo_df['sexo'] == 'F']['porcentagem'].tolist()
homem_h1 = agrupa_sexo_df[agrupa_sexo_df['sexo'] == 'M']['porcentagem'].tolist()
legislaturas_lista_h1 = agrupa_sexo_df['idLegislatura'].unique()
# Replace the ordinal legislature id with its start year to make the chart easier to read.
legislaturas_lista_h1 = df_legislaturas[(df_legislaturas['id'] >= 50) & (df_legislaturas['id'] <= 53)]['dataInicio'].unique().tolist()
legislaturas_lista_h1.sort()
legislaturas_lista_h1 = list(map(str, legislaturas_lista_h1))
legislaturas_lista_h1
agrupa_sexo_df2 = pd.DataFrame({'mulher': mulher_h1,
'homem': homem_h1
}, index=legislaturas_lista_h1,
)
agrupa_sexo_df2.plot.line()
agrupa_sexo_df2.to_csv('../dados/analise_genero_1995_2007.csv')
# To better visualize the variations, plot a separate chart for each sex.
agrupa_sexo_df2.plot.line(subplots=True)
diferenca_percentual_mulher_h1_total = mulher_h1[-1] - mulher_h1[0]
print(f'''
Question: After the approval of Law 9.504/1997, did the percentage of women elected to the office of federal deputy increase
over the 3 subsequent legislatures, compared to the percentage at the time of its approval? \n
Answer: The share of women deputies increased by {round(diferenca_percentual_mulher_h1_total, 2)} percentage points between 1995 and 2007.
''')
# ## 2. After the approval of Law 12.034/2009, did the percentage of women elected to the office of federal deputy increase over the 3 subsequent legislatures, compared to the percentage at the time of its approval?
# **Hypothesis:** Yes, the percentage of women elected to the office of federal deputy increased over the 3 legislatures following the approval of Law 12.034/2009, compared to the percentage at the time of its approval.
# Restrict the data to the period under analysis (2007 to 2019).
legislaturas_h2 = df_legislaturas[(df_legislaturas['id'] >= 53) & (df_legislaturas['id'] <= 56)]['id'].unique().tolist()
df_candidaturas_eleitas_h2 = df_candidaturas_eleitas[df_candidaturas_eleitas['idLegislatura'].isin(legislaturas_h2)].copy()
df_candidaturas_eleitas_h2['idLegislatura'].unique()
# Group the data by sex
agrupa_sexo_h2 = df_candidaturas_eleitas_h2.groupby(['idLegislatura', 'sexo']).size().to_frame('valorAbsoluto')
# Compute each category's percentage of the total number of elected candidacies
agrupa_sexo_h2['porcentagem'] = round(agrupa_sexo_h2['valorAbsoluto'].div(agrupa_sexo_h2.groupby('idLegislatura')['valorAbsoluto'].transform('sum')).mul(100), 2)
agrupa_sexo_h2_df = agrupa_sexo_h2.reset_index()
agrupa_sexo_h2
# Prepare the data for plotting
mulher_h2 = agrupa_sexo_h2_df[agrupa_sexo_h2_df['sexo'] == 'F']['porcentagem'].tolist()
homem_h2 = agrupa_sexo_h2_df[agrupa_sexo_h2_df['sexo'] == 'M']['porcentagem'].tolist()
legislaturas_lista_h2 = agrupa_sexo_h2_df['idLegislatura'].unique()
# Replace the ordinal legislature id with its start year to make the chart easier to read.
legislaturas_lista_h2 = df_legislaturas[(df_legislaturas['id'] >= 53) & (df_legislaturas['id'] <= 56)]['dataInicio'].unique().tolist()
legislaturas_lista_h2.sort()
legislaturas_lista_h2 = list(map(str, legislaturas_lista_h2))
legislaturas_lista_h2
agrupa_sexo_h2_df2 = pd.DataFrame({'mulher': mulher_h2,
'homem': homem_h2
}, index=legislaturas_lista_h2,
)
agrupa_sexo_h2_df2.plot.line()
agrupa_sexo_h2_df2.to_csv('../dados/analise_genero_2007_2019.csv')
# To better visualize the variations, plot a separate chart for each sex.
agrupa_sexo_h2_df2.plot.line(subplots=True)
diferenca_percentual_mulher_h2_total = mulher_h2[-1] - mulher_h2[0]
print(f'''
Question: After the approval of Law 12.034/2009, did the percentage of women elected to the office of federal deputy increase
over the 3 subsequent legislatures, compared to the percentage at the time of its approval? \n
Answer: The share of women deputies increased by {round(diferenca_percentual_mulher_h2_total, 2)} percentage points between 2007 and 2019.
''')
| analise-dados/analise-candidaturas-eleitas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div align="right"><i>COM418 - Computers and Music</i></div>
# <div align="right"><a href="https://people.epfl.ch/paolo.prandoni"><NAME></a>, <a href="https://www.epfl.ch/labs/lcav/">LCAV, EPFL</a></div>
#
# <p style="font-size: 30pt; font-weight: bold; color: #B51F1F;">Channel Vocoder</p>
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Audio
from IPython.display import IFrame
from scipy import signal
import import_ipynb
from Helpers import *
figsize=(10,5)
import matplotlib
matplotlib.rcParams.update({'font.size': 16});
# -
fs=44100
# In this notebook, we will implement and test a simple **channel vocoder**. A channel vocoder is a musical device that lets you sing while playing notes on a keyboard at the same time. The vocoder blends the voice (called the modulator) with the notes played on the keyboard (called the carrier) so that the resulting voice sings the notes played on the keyboard. The resulting voice has a robotic, artificial sound that is rather popular in electronic music, with notable uses by bands such as Daft Punk and Kraftwerk.
#
# <img src="https://www.bhphotovideo.com/images/images2000x2000/waldorf_stvc_string_synthesizer_1382081.jpg" alt="Drawing" style="width: 35%;"/>
#
# The implementation of a channel vocoder is in fact quite simple. It takes two inputs, the carrier and the modulator signals, which must be of the same length. It divides each signal into frequency bands called **channels** (hence the name) using many parallel bandpass filters. The channels can be equally wide, or logarithmically sized to match the human ear's perception of frequency. For each channel, the envelope of the modulator signal is then computed, for instance using a rectifier and a moving average, and multiplied with the corresponding carrier channel, before all channels are added back together.
#
# <img src="https://i.imgur.com/aIePutp.png" alt="Drawing" style="width: 65%;"/>
#
# To improve the intelligibility of the speech, it is also possible to add AWGN to the carrier in each band, which helps reproduce unvoiced sounds such as s or f.
# As an example signal to test our vocoder with, we are going to use dry voice samples from the song "Nightcall" by French artist Kavinsky.
#
# 
#
# First, let's listen to the original song:
IFrame(src="https://www.youtube.com/embed/46qo_V1zcOM?start=30", width="560", height="315", frameborder="0", allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture")
# ## 1. The modulator and the carrier signals
#
# We are now going to recreate the lead vocoder using two signals: a modulator signal, a voice pronouncing the lyrics, and a carrier signal, a synthesizer playing the notes that set the pitch.
# ### 1.1. The modulator
# Let's first import the modulator signal. It is simply the lyrics spoken at the right rhythm. No need to sing or pay attention to the pitch; only the pronunciation and the rhythm of the text matter. Note that the voice sample is available for free on **Splice**, an online resource for audio production.
nightcall_modulator = open_audio('snd/nightcall_modulator.wav')
Audio('snd/nightcall_modulator.wav', autoplay=False)
# ### 1.2. The carrier
# Second, we import a carrier signal, which is simply a synthesizer playing the chords that are going to be used for the vocoder. Note that the carrier signal does not need to feature silent parts, since the modulator's silences will automatically mute the final vocoded track. The carrier and the modulator simply need to be in sync with each other.
nightcall_carrier = open_audio('snd/nightcall_carrier.wav')
Audio("snd/nightcall_carrier.wav", autoplay=False)
# ## 2. The channel vocoder
# ### 2.1. The channeler
# Let's now start implementing the channel vocoder. The first tool we need is an efficient filter to decompose both the carrier and the modulator signals into channels (or bands). Let's call this function the **channeler**, since it decomposes the input signal into frequency channels. It takes as input a signal to be filtered, an integer representing the number of bands, and a boolean setting whether white noise is added to each band (used for the carrier).
def channeler(x, n_bands, add_noise=False):
    """
    Separate a signal into log-sized frequency channels.
    x: the input signal
    n_bands: the number of frequency channels
    add_noise: whether to add white noise to each channel
    """
    band_freqs = np.logspace(2, 14, n_bands+1, base=2)  # band edges, log-spaced from 4 Hz to 16384 Hz
    x_bands = np.zeros((n_bands, x.size))  # placeholder for all bands
    for i in range(n_bands):
        noise = 0.7*np.random.random(x.size) if add_noise else 0  # optional uniform white noise
        x_bands[i] = butter_pass_filter(x + noise, np.array((band_freqs[i], band_freqs[i+1])), fs, btype="band", order=5).astype(np.float32)  # carrier + noise
    return x_bands
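`butter_pass_filter` is imported from the companion `Helpers` notebook and is not shown here. A minimal sketch of a Butterworth filter helper with a compatible signature (an assumption, since the actual Helpers implementation may differ) could be:

```python
import numpy as np
from scipy import signal

def butter_pass_filter(x, cutoff, fs, btype="band", order=5):
    """Butterworth filter; cutoff is a scalar (low/high pass) or a pair (bandpass)."""
    nyq = 0.5 * fs  # Nyquist frequency
    b, a = signal.butter(order, np.asarray(cutoff) / nyq, btype=btype)
    return signal.filtfilt(b, a, x)  # zero-phase filtering, no phase distortion
```

With the band edges from `np.logspace(2, 14, n_bands+1, base=2)`, the channels span 4 Hz up to 16384 Hz.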
# +
# Example plot
plt.figure(figsize=figsize)
plt.magnitude_spectrum(nightcall_carrier)
plt.title("Carrier signal before channeling")
plt.xscale("log")
plt.xlim(1e-4)
plt.show()
carrier_bands = channeler(nightcall_carrier, 8, add_noise=True)
plt.figure(figsize=figsize)
for i in range(8):
    plt.magnitude_spectrum(carrier_bands[i], alpha=.7)
plt.title("Carrier channels after channeling and noise addition")
plt.xscale("log")
plt.xlim(1e-4)
plt.show()
# -
# ### 2.2. The envelope computer
# Next, we can implement a simple envelope computer. Given a signal, this function computes its temporal envelope.
def envelope_computer(x):
    """
    Envelope computation for one channel of the modulator.
    x: the input signal
    """
    x = np.abs(x)  # Rectify the signal to positive
    x = moving_average(x, 1000)  # Smooth the signal
    return 3*x  # Normalize
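`moving_average` also comes from Helpers; a simple sketch (assuming a uniform window, which is one common choice) could be:

```python
import numpy as np

def moving_average(x, n):
    # Uniform (boxcar) smoothing; mode="same" keeps the output length equal to the input
    return np.convolve(x, np.ones(n) / n, mode="same")
```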
plt.figure(figsize=figsize)
plt.plot(np.abs(nightcall_modulator)[:150000] , label="Modulator")
plt.plot(envelope_computer(nightcall_modulator)[:150000], label="Modulator envelope")
plt.legend(loc="best")
plt.title("Modulator signal and its envelope")
plt.show()
# ### 2.3. The channel vocoder (itself)
# We can now implement the channel vocoder itself! It takes as input both signals presented above, as well as an integer controlling the number of channels (bands) of the vocoder. A larger number of channels results in a finer-grained vocoded sound, but also takes more time to compute. Some artists deliberately use a lower number of bands to increase the artificial effect of the vocoder. Try playing with it!
def channel_vocoder(modulator, carrier, n_bands=32):
    """
    Channel vocoder
    modulator: the modulator signal
    carrier: the carrier signal
    n_bands: the number of bands of the vocoder (preferably a power of 2)
    """
    # Decompose both modulator and carrier signals into frequency channels
    modul_bands = channeler(modulator, n_bands, add_noise=False)
    carrier_bands = channeler(carrier, n_bands, add_noise=True)

    # Compute the envelope of the modulator, channel by channel
    modul_bands = np.array([envelope_computer(modul_bands[i]) for i in range(n_bands)])

    # Multiply carrier and modulator element-wise, channel by channel
    result_bands = modul_bands * carrier_bands

    # Merge all channels back together and normalize
    result = np.sum(result_bands, axis=0)
    return normalize(result)
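`normalize` is the last Helpers import used here; a peak-normalization sketch (again an assumption about its exact behavior) could be:

```python
import numpy as np

def normalize(x):
    # Scale so the largest absolute sample is 1 (peak normalization)
    return x / np.max(np.abs(x))
```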
nightcall_vocoder = channel_vocoder(nightcall_modulator, nightcall_carrier, n_bands=32)
Audio(nightcall_vocoder, rate=fs)
# The vocoded voice is still perfectly intelligible, and it's easy to understand the lyrics. However, the pitch of the voice is now that of the synthesizer playing chords! You can try deactivating the AWGN and comparing the results. We finally plot the STFT of all 3 signals. Notice that the vocoded signal has kept the general shape of the voice (modulator) signal, but uses the frequency information from the carrier!
# +
# Plot
f, t, Zxx = signal.stft(nightcall_modulator[:7*fs], fs, nperseg=1000)
plt.figure(figsize=figsize)
plt.pcolormesh(t, f[:100], np.abs(Zxx[:100,:]), cmap='nipy_spectral', shading='gouraud')
plt.title("Original voice (modulator)")
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec]')
plt.show()
f, t, Zxx = signal.stft(nightcall_vocoder[:7*fs], fs, nperseg=1000)
plt.figure(figsize=figsize)
plt.pcolormesh(t, f[:100], np.abs(Zxx[:100,:]), cmap='nipy_spectral', shading='gouraud')
plt.title("Vocoded voice")
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec]')
plt.show()
f, t, Zxx = signal.stft(nightcall_carrier[:7*fs], fs, nperseg=1000)
plt.figure(figsize=figsize)
plt.pcolormesh(t, f[:100], np.abs(Zxx[:100,:]), cmap='nipy_spectral', shading='gouraud')
plt.title("Carrier")
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec]')
plt.show()
# -
# ## 3. Playing it together with the music
# Finally, let's try to play it with the background music to see if it sounds like the original!
# +
nightcall_instru = open_audio('snd/nightcall_instrumental.wav')
nightcall_final = nightcall_vocoder + 0.6*nightcall_instru
nightcall_final = normalize(nightcall_final) # Normalize
Audio(nightcall_final, rate=fs)
# -
| ChannelVocoder/ChannelVocoder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] tags=["remove_cell"]
# # Environment Setup Guide to work with Qiskit Textbook
# -
# This is a comprehensive guide for setting up your environment on your personal computer to work with the Qiskit Textbook. It will help you reproduce the results as you see them on the textbook website. The Qiskit Textbook is written in [Jupyter notebooks](https://jupyter.org/install). Notebooks and [the website](https://qiskit.org/textbook/preface.html) are the only media in which the Textbook is fully supported.
# ## Installing the qiskit_textbook Package
#
# The Qiskit Textbook provides some tools and widgets specific to the Textbook. This is not part of Qiskit and is available through the `qiskit_textbook` package. The quickest way to install this with [Pip](http://pypi.org/project/pip/) and [Git](http://git-scm.com/) is through the command:
#
# ```code
# pip install git+https://github.com/qiskit-community/qiskit-textbook.git#subdirectory=qiskit-textbook-src
# ```
# Alternatively, you can download the folder [qiskit-textbook-src](https://github.com/qiskit-community/qiskit-textbook) from GitHub and run:
#
# ```code
# pip install ./qiskit-textbook-src
# ```
#
# from the directory that contains this folder.
#
# ## Steps to reproduce exact prerendered output as given in qiskit textbook (Optional)
# ### 1. Setting up default drawer to MatPlotLib
#
# The default backend for <code>QuantumCircuit.draw()</code> or <code>qiskit.visualization.circuit_drawer()</code> is the text backend. However, depending on your local environment you may want to change these defaults to something better suited for your use case. This is done with the user config file, which by default is the <code>settings.conf</code> file located in <code>~/.qiskit/</code>.
#
# The Qiskit Textbook uses Matplotlib as the default circuit drawer. To reproduce the visualizations as given in the textbook, create a <code>settings.conf</code> file (usually found in <code>~/.qiskit/</code>) with the contents:
# ```code
# [default]
# circuit_drawer = mpl
# ```
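If you prefer, the same file can be written from Python with `configparser`; this is just a convenience sketch (it assumes the default <code>~/.qiskit</code> location):

```python
import configparser
import os

# Build the [default] section with the Matplotlib circuit drawer
config = configparser.ConfigParser()
config["default"] = {"circuit_drawer": "mpl"}

# Write it to ~/.qiskit/settings.conf, creating the directory if needed
qiskit_dir = os.path.expanduser("~/.qiskit")
os.makedirs(qiskit_dir, exist_ok=True)
with open(os.path.join(qiskit_dir, "settings.conf"), "w") as f:
    config.write(f)
```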
# ### 2. Setting up default image type to svg
#
# Optionally, you can add the following line of code to the <code>ipython_kernel_config.py</code> file (usually found in <code>~/.ipython/profile_default/</code>) to set the default image format from PNG to the more scalable SVG format:
# ```code
# c.InlineBackend.figure_format = 'svg'
# ```
# ### 3. Installing the LaTeX parser
#
# To get a rendering similar to the Qiskit Textbook, optionally install the pylatexenc library.
# You can do this with [Pip](http://pypi.org/project/pip/) and [Git](http://git-scm.com/) through the command:
#
# <code>pip install pylatexenc</code>
# ### 4. Syncing with the Qiskit versions used in the Textbook
#
# You will find a code snippet at the end of most tutorials containing information on which versions of the Qiskit packages were used in the tutorial. If you find inconsistencies in syntax and/or outputs, try to use the same versions.
#
# To check the versions installed on your computer, run the following in a Python shell or Jupyter notebook:
import qiskit
qiskit.__qiskit_version__
| notebooks/ch-prerequisites/setting-the-environment.ipynb |
# +
# Data Scientist Training - <NAME> and <NAME>
# Igraph
# -
# Import the libraries
from igraph import Graph
from igraph import plot
import igraph
import numpy as np
# Load a graph in graphml format
grafo = igraph.load('Grafo.graphml')
print(grafo)
# Visualize the graph
plot(grafo, bbox = (0,0,600,600))
# Visualize the communities
comunidades = grafo.clusters()
print(comunidades)
# See which community each record was assigned to
comunidades.membership
# Visualize the graph
cores = comunidades.membership
# Color array so we can assign a different color to each group
cores = np.array(cores)
cores = cores * 20
cores = cores.tolist()
plot(grafo, vertex_color = cores)
# Example 2
# Create a directed graph with edge weights
grafo2 = Graph(edges = [(0,2),(0,1),(1,4),(1,5),(2,3),(6,7),(3,7),(4,7),(5,6)],
               directed = True)
grafo2.vs['label'] = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']
grafo2.es['weight'] = [2,1,2,1,2,1,3,1]
# Visualize the graph
plot(grafo2, bbox = (0,0,300,300))
# Visualize the communities and which community each record was assigned to
comunidades2 = grafo2.clusters()
print(comunidades2)
comunidades2.membership
# A better-optimized function for detecting communities
c = grafo2.community_edge_betweenness()
print(c)
# Get the number of clusters
c.optimal_count
# Visualize the new community structure
comunidades3 = c.as_clustering()
print(comunidades3)
comunidades3.membership
# Plot the communities, using different colors for the identified groups
plot(grafo2, vertex_color = comunidades3.membership)
cores = comunidades3.membership
# Color array so we can assign a different color to each group
cores = np.array(cores)
cores = cores * 100
cores = cores.tolist()
plot(grafo2, bbox = (0,0,300,300), vertex_color = cores)
# Visualize the cliques
cli = grafo.as_undirected().cliques(min = 4)
print(cli)
len(cli)
| 08-Grafos/python/8-comunidades.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="oEZNtEjrgP6a"
import os
import numpy as np
import pandas as pd
from datetime import datetime
import matplotlib.pyplot as plt
import plotly.express as px
from sklearn.ensemble import IsolationForest
# + id="0sYZqHDrgirf"
df = pd.read_csv("/content/nyc_data.csv")
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="EeHsTAoLg2L-" outputId="8893f288-94eb-46d9-c0d0-45743e611a57"
df
# + id="dvxbqvQig3Rv"
df["timestamp"] = pd.to_datetime(df["timestamp"])
# + colab={"base_uri": "https://localhost:8080/"} id="l0DnP-cZg-rm" outputId="aea4dfb8-4c96-46f0-bd49-e49df481dd83"
df.info()
# + id="U8CI13OHhAH2"
df = df.set_index("timestamp").resample("H").mean().reset_index()
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="NE-DNpKAhMXV" outputId="59845d1a-b106-4d1b-edbc-94efe61644aa"
df
# + id="KtlSLhJXhM-9"
df["hour"] = df.timestamp.dt.hour
# + id="e7lHqRCphbUu"
df["weekday"] = pd.Categorical(df.timestamp.dt.strftime("%A"), categories=["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"], ordered=True)
# + colab={"base_uri": "https://localhost:8080/"} id="t11HLN33iJYH" outputId="618d44f6-3d7e-481e-f37f-0be6422cdb5e"
df.info()
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="64L4BKB-iUWV" outputId="45e86e39-18c1-45a1-903f-3cf63cd700e4"
df
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="t3mIB4WliWG9" outputId="73be854a-90d6-449d-c54c-0783caf5e604"
df[["value", "weekday"]].groupby("weekday").mean().plot()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="V1BRIgOAim31" outputId="3df067c2-9161-43f1-a20b-51f63878cc1e"
df[["value", "hour"]].groupby("hour").mean().plot()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="9bZUMlDuiwZm" outputId="68dd4717-eccd-4d9b-8204-fddaf964c36f"
fig = px.line(df.reset_index(), x = "timestamp", y = "value", title="NYC Taxi Demand")
fig.update_xaxes(
rangeslider_visible=True,
)
fig.show()
# + [markdown] id="U3pRaqfKjww9"
# **Anomalous Points**
#
# - NYC Marathon - 2014-11-02
# - Thanksgiving - 2014-11-27
# - Christmas - 2014-12-25
# - New Year's Day - 2015-01-01
# - Snow blizzard - 2015-01-26 and 2015-01-27
# + colab={"base_uri": "https://localhost:8080/"} id="8PXJb78ajbPd" outputId="bda85fd0-fc54-4e16-9e5b-512b703ece5f"
model = IsolationForest(contamination=0.004)
model.fit(df[["value"]])
# + id="w1JRzBwTkspU"
df["outliers"] = pd.Series(model.predict(df[["value"]])).apply(lambda x: "Yes" if (x == -1) else "No")
# + colab={"base_uri": "https://localhost:8080/", "height": 707} id="lxDQChG7lxhs" outputId="caf4ded7-6b8b-4d53-b83d-2507b48afade"
df.query('outliers=="Yes"')
# + [markdown] id="0XmmQ-0Hm6oc"
# We can see it detected the Nov 2 marathon as an outlier, as well as New Year's Day and the blizzard. It did not detect Christmas or Thanksgiving. But why? These are contextual outliers: their values are not extreme in absolute terms (nothing like 30000), they are only unusual for those particular days.
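One way to catch such contextual outliers is to judge each point against its own context, for example a z-score within its hour-of-day group, so that a value that looks ordinary overall still stands out at an unusual time. A small self-contained sketch of the idea on synthetic data (not the taxi dataset above):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic hourly demand: a strong daily cycle plus a little noise
hours = np.tile(np.arange(24), 14)  # two weeks of hourly data
values = 100 + 50 * np.sin(hours / 24 * 2 * np.pi) + rng.normal(0, 3, hours.size)
values[30] = 100  # globally ordinary value, but far below the usual level for its hour

df = pd.DataFrame({"hour": hours, "value": values})

# z-score of each point within its own hour-of-day group:
# a contextual outlier is extreme relative to its context, not overall
grp = df.groupby("hour")["value"]
df["z"] = (df["value"] - grp.transform("mean")) / grp.transform("std")
contextual_outliers = df.index[df["z"].abs() > 2.5].tolist()
```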
# + colab={"base_uri": "https://localhost:8080/", "height": 635} id="iNOiLAETmQe8" outputId="18c53dee-761f-47a1-c3de-bfbf6409f163"
fig = px.scatter(df.reset_index(), x = "timestamp", y = "value", color="outliers", hover_data=["weekday"], title="NYC Taxi Demand")
fig.update_xaxes(
rangeslider_visible=True,
)
fig.show()
# + colab={"base_uri": "https://localhost:8080/"} id="pMx6mmhIn4T7" outputId="5f8b3a84-0562-4493-bab8-2d4acea63cba"
model = IsolationForest()
model.fit(df[["value"]])
# + id="hOW-anGup7vE"
score=model.decision_function(df[["value"]])
# + colab={"base_uri": "https://localhost:8080/"} id="sl8HNBnzqCEL" outputId="92af9869-3038-44c2-b143-184272e2637c"
score
# + colab={"base_uri": "https://localhost:8080/", "height": 266} id="EiJgRaD0qDPy" outputId="4464f5b5-0c24-4603-9f6a-3fdfaee2450f"
plt.hist(score, bins=50)
plt.show()
# + id="IZpzcJ6hqL0q"
df["scores"] = score
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="kI5u2lTVqPu7" outputId="a0ef2564-dc2e-4491-cc13-1367eead5fc4"
df
# + colab={"base_uri": "https://localhost:8080/", "height": 614} id="vgnGMD5MqY3D" outputId="597cd9e4-bada-43ff-bb5f-aa7d1e9c9cf3"
df.query("scores<-0.20")
# + id="mKjTVGVVqmCy"
| Anomaly Detection using Isolation Forest/Anomaly_Detection_using_Isolation_Forest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %matplotlib inline
#
# # Compute Rap-Music on evoked data
#
#
# Compute a Recursively Applied and Projected MUltiple Signal Classification
# (RAP-MUSIC) [1]_ on evoked data.
#
# References
# ----------
# .. [1] <NAME> and <NAME>. 1999. Source localization using recursively
# applied and projected (RAP) MUSIC. Trans. Sig. Proc. 47, 2
# (February 1999), 332-340.
# DOI=10.1109/78.740118 https://doi.org/10.1109/78.740118
#
#
# +
# Author: <NAME> <<EMAIL>>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.beamformer import rap_music
from mne.viz import plot_dipole_locations, plot_dipole_amplitudes
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
evoked_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-cov.fif'
# Read the evoked response and crop it
condition = 'Right Auditory'
evoked = mne.read_evokeds(evoked_fname, condition=condition,
                          baseline=(None, 0))
evoked.crop(tmin=0.05, tmax=0.15) # select N100
evoked.pick_types(meg=True, eeg=False)
# Read the forward solution
forward = mne.read_forward_solution(fwd_fname)
# Read noise covariance matrix
noise_cov = mne.read_cov(cov_fname)
dipoles, residual = rap_music(evoked, forward, noise_cov, n_dipoles=2,
                              return_residual=True, verbose=True)
trans = forward['mri_head_t']
plot_dipole_locations(dipoles, trans, 'sample', subjects_dir=subjects_dir)
plot_dipole_amplitudes(dipoles)
# Plot the evoked data and the residual.
evoked.plot(ylim=dict(grad=[-300, 300], mag=[-800, 800], eeg=[-6, 8]))
residual.plot(ylim=dict(grad=[-300, 300], mag=[-800, 800], eeg=[-6, 8]))
| 0.15/_downloads/plot_rap_music.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:anaconda3]
# language: python
# name: conda-env-anaconda3-py
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# ## Introduction to visualizing data in the eeghdf files
# +
# I have copied the stacklineplot from my python-edf/examples code to help with display. Maybe I will put this as a helper or put it out as a utility package to make it easier to install.
from __future__ import print_function, division, unicode_literals
# %matplotlib inline
# # %matplotlib notebook
import matplotlib
import matplotlib.pyplot as plt
#import seaborn
import pandas as pd
import numpy as np
import h5py
from pprint import pprint
import stacklineplot
# + slideshow={"slide_type": "fragment"}
# matplotlib.rcParams['figure.figsize'] = (18.0, 12.0)
matplotlib.rcParams['figure.figsize'] = (12.0, 8.0)
# + slideshow={"slide_type": "slide"}
hdf = h5py.File('../data/spasms.eeghdf','r')
# + slideshow={"slide_type": "subslide"}
pprint(list(hdf.items()))
pprint(list(hdf['patient'].attrs.items()))
# + slideshow={"slide_type": "slide"}
rec = hdf['record-0']
pprint(list(rec.items()))
pprint(list(rec.attrs.items()))
years_old = rec.attrs['patient_age_days']/365
pprint("age in years: %s" % years_old)
# + slideshow={"slide_type": "slide"}
signals = rec['signals']
labels = rec['signal_labels']
electrode_labels = [str(s,'ascii') for s in labels]
numbered_electrode_labels = ["%d:%s" % (ii, str(labels[ii], 'ascii')) for ii in range(len(labels))]
# + [markdown] slideshow={"slide_type": "slide"}
# #### Simple visualization of EEG (electrodecrement seizure pattern)
# + slideshow={"slide_type": "fragment"}
# plot 10s epochs (multiples in DE)
ch0, ch1 = (0,19)
DE = 2 # how many 10s epochs to display
epoch = 53; ptepoch = 10*int(rec.attrs['sample_frequency'])
dp = int(0.5*ptepoch)
# stacklineplot.stackplot(signals[ch0:ch1,epoch*ptepoch+dp:(epoch+DE)*ptepoch+dp],seconds=DE*10.0, ylabels=electrode_labels[ch0:ch1], yscale=0.3)
print("epoch:", epoch)
# +
# search identified spasms at 1836, 1871, 1901, 1939
stacklineplot.show_epoch_centered(signals, 1836,
                                  epoch_width_sec=15,
                                  chstart=0, chstop=19, fs=rec.attrs['sample_frequency'],
                                  ylabels=electrode_labels, yscale=3.0)
# + slideshow={"slide_type": "skip"}
annot = rec['edf_annotations']
#print(list(annot.items()))
#annot['texts'][:]
# -
signals.shape
# + slideshow={"slide_type": "skip"}
antext = [s.decode('utf-8') for s in annot['texts'][:]]
starts100ns = [xx for xx in annot['starts_100ns'][:]]
len(starts100ns), len(antext)
# + slideshow={"slide_type": "slide"}
import pandas as pd
# + slideshow={"slide_type": "fragment"}
df = pd.DataFrame(data=antext, columns=['text'])
df['starts100ns'] = starts100ns
df['starts_sec'] = df['starts100ns']/10**7
# + slideshow={"slide_type": "slide"}
df # look at the annotations
# + slideshow={"slide_type": "skip"}
df[df.text.str.contains('sz',case=False)]
# + slideshow={"slide_type": "slide"}
df[df.text.str.contains('seizure',case=False)] # find the seizure
# -
df[df.text.str.contains('spasm',case=False)] # find the seizure
# + slideshow={"slide_type": "slide"}
list(annot.items())
# -
| notebooks/vizSpasm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Non-Linear Least Squares
#
# We're now going to approach estimation with a non-linear state to measurement space mapping.
#
# $
# y = h(x) + v
# $
#
# where $h(x)$ is a non-linear function and $v$ is a noise vector.
#
# As presented in class, we cannot apply recursive estimation to the problem in its current non-linear form. However, we can *linearize* the problem, allowing the application of recursive estimation:
#
# $
# h(x) \approx h(\hat{x}_t) + H_{\hat{x}_t}(x - \hat{x}_t)
# $
#
# where $H_{\hat{x}_t}$ is the Jacobian of h evaluated at $\hat{x}_t$:
#
# This presents $h(x)$ as a linear function in the form of $Ax + b$ since $h(\hat{x}_t)$ and $H_{\hat{x}_t}$ are constant in this context. From here we can use recursive estimation the same as before. Note the *linearization* is only useful if $x$ is near $\hat{x}_t$, otherwise the approximation quickly breaks down. This is why it's important to update the Jacobian frequently.
#
# +
import numpy as np
import matplotlib.pyplot as plt
import numpy.linalg as LA
# %matplotlib inline
# -
# We'll define $h(x)$ as:
#
#
# $h(x) = (f_{range}(x), f_{bearing}(x))$
#
# where
#
# $
# f_{range}(x) = \sqrt{{x_1}^2 + {x_2}^2} \\
# f_{bearing}(x) = atan2(x_2, x_1)
# $
# +
# TODO: complete implementation
def f_range(x):
    """
    Distance of x from the origin.
    """
    return LA.norm(x)

# TODO: complete implementation
def f_bearing(x):
    """
    atan2(x_2, x_1)
    """
    return np.arctan2(x[1], x[0])

def h(x):
    return np.array([f_range(x), f_bearing(x)])
# -
# ### Linearize $h(x)$
#
# In order to linearize $h(x)$ you'll need the Jacobian:
#
# $
# \begin{bmatrix}
# \frac{\partial{f_{range}}}{\partial{x_1}} & \frac{\partial{f_{range}}}{\partial{x_2}} \\
# \frac{\partial{f_{bearing}}}{\partial{x_1}} & \frac{\partial{f_{bearing}}}{\partial{x_2}} \\
# \end{bmatrix}
# $
#
# Note that `np.arctan2` takes its arguments in $(y, x)$ order, so the bearing here is $atan2(x_2, x_1)$ and its derivatives are taken with respect to $(x_1, x_2)$.
#
# Jacobian solution:
#
# $
# \begin{bmatrix}
# \frac{1}{2}(x_1^2 + x_2^2)^{\frac{-1}{2}} * 2x_1 & \frac{1}{2}(x_1^2 + x_2^2)^{\frac{-1}{2}} * 2x_2 \\
# \frac{-x_2}{x_1^2 + x_2^2} & \frac{x_1}{x_1^2 + x_2^2} \\
# \end{bmatrix}
# $
# TODO: complete jacobian of h(x)
def jacobian_of_h(x):
    t = (1/2) * (x[0]**2 + x[1]**2) ** (-1/2)
    return np.array([
        [t*2*x[0], t*2*x[1]],
        # Bearing row: derivatives of atan2(x_2, x_1) w.r.t. (x_1, x_2)
        # are (-x_2 / (x_1^2 + x_2^2), x_1 / (x_1^2 + x_2^2))
        [-x[1] / (x[0]**2 + x[1]**2), x[0] / (x[0]**2 + x[1]**2)]
    ]).squeeze()
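A finite-difference check is a cheap way to validate an analytic Jacobian like the one above. The sketch below is self-contained (it redefines `h` and the analytic derivatives locally) and compares the two:

```python
import numpy as np

def h(x):
    x = np.asarray(x, dtype=float).ravel()
    return np.array([np.sqrt(x[0]**2 + x[1]**2), np.arctan2(x[1], x[0])])

def numerical_jacobian(f, x, eps=1e-6):
    x = np.asarray(x, dtype=float).ravel()
    J = np.zeros((f(x).size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        # Central difference approximates one column of the Jacobian at a time
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

x0 = np.array([1.5, 1.0])
r2 = x0[0]**2 + x0[1]**2
# Analytic Jacobian: d(range)/dx_i = x_i / r; d(bearing)/d(x_1, x_2) = (-x_2, x_1) / r^2
J_analytic = np.array([[x0[0]/np.sqrt(r2), x0[1]/np.sqrt(r2)],
                       [-x0[1]/r2,         x0[0]/r2]])
J_num = numerical_jacobian(h, x0)
```

If the two matrices disagree beyond finite-difference error, the analytic derivation has a sign or index mistake.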
# Awesome! With the Jacobian of $h$ in your toolbox, you can plug it into recursive estimation.
#
# The update functions should look familiar ($H_{\hat{x}_t}$ is the Jacobian of $\hat{x}_t$).
#
# $
# Q_{t+1} = (Q_{t}^{-1} + H_{\hat{x}_t}^T R^{-1} H_{\hat{x}_t})^{-1} \\
# \hat{x_{t+1}} = \hat{x_t} + Q_{t+1} H_{\hat{x}_t}^{T} R^{-1} (\tilde{y_t} - h(\hat{x_t}))
# $
# ### Setup
# +
n_samples = 1000
# Covariance matrix
# added noise for range and bearing functions
#
# NOTE: these are set to low variance values
# to start with; if you increase them you
# might need more samples to get
# a good estimate.
R = np.eye(2)
R[0, 0] = 0.01
R[1, 1] = np.radians(1)
# ground truth state
x = np.array([1.5, 1])
# -
# Initialize $\hat{x}_0$ and $Q_0$.
x_hat0 = np.array([3., 3]).reshape(-1, 1)
Q0 = np.eye(len(x_hat0))
# TODO: Recursive Estimation
def recursive_estimation(x_hat0, Q0, n_samples):
    x_hat = np.copy(x_hat0)
    Q = np.copy(Q0)

    for _ in range(n_samples):
        # TODO: sample a measurement
        y_obs = h(x) + np.random.multivariate_normal([0, 0], R)

        # TODO: compute the jacobian of h(x_hat)
        H = jacobian_of_h(x_hat)

        # TODO: update Q and x_hat
        Q = LA.pinv(LA.pinv(Q) + H.T @ LA.pinv(R) @ H)
        x_hat = x_hat + (Q @ H.T @ LA.pinv(R) @ (y_obs - h(x_hat))).reshape(2, 1)

    return x_hat, Q
# +
print("x̂0 =", x_hat0.squeeze())
x_hat, Q = recursive_estimation(x_hat0, Q0, n_samples)
print("x =", x.squeeze())
print("x̂ =", x_hat.squeeze())
print("Hx =", h(x))
print("Hx̂ =", h(x_hat))
# -
# ### Error Curve
# +
errors = []
Ns = np.arange(0, 201, 5)
for n in Ns:
    x_hat, Q = recursive_estimation(x_hat0, Q0, n)
    errors.append(LA.norm(x.squeeze() - x_hat.squeeze()))
plt.plot(Ns, errors)
plt.xlabel('Number of samples')
plt.ylabel('Error')
# -
| jupyter_notebooks/4_State_Estimation/1_Introduction_to_Estimation/Non-Linear-Least-Squares-Solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={} tags=[]
# <img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# + [markdown] papermill={} tags=[]
# # Google Analytics - Get unique visitors
# <a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Google%20Analytics/Google_Analytics_Get_unique_visitors.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
# + [markdown] papermill={} tags=[]
# **Tags:** #googleanalytics #getuniquevisitors #plotly #barchart #naas_drivers #scheduler #asset #naas
# + [markdown] papermill={} tags=[]
# **Author:** [<NAME>](https://www.linkedin.com/in/charles-demontigny/)
# + [markdown] papermill={} tags=[]
# Pre-requisite: Create your own <a href="">Google API JSON credential</a>
# + [markdown] papermill={} tags=[]
# ## Input
# + [markdown] papermill={} tags=[]
# ### Schedule your notebook
# + papermill={} tags=[]
#-> Uncomment the 2 lines below (by removing the hashtag) to schedule your job everyday at 8:00 AM (NB: you can choose the time of your scheduling bot)
# import naas
# naas.scheduler.add(cron="0 8 * * *")
#-> Uncomment the line below (by removing the hashtag) to remove your scheduler
# naas.scheduler.delete()
# + [markdown] papermill={} tags=[]
# ### Import library
# + papermill={} tags=[]
import pandas as pd
import plotly.graph_objects as go
import naas
from naas_drivers import googleanalytics
# + [markdown] papermill={} tags=[]
# ### Get your credential from Google Cloud Platform
# + papermill={} tags=[]
json_path = 'naas-googleanalytics.json'
# + [markdown] papermill={} tags=[]
# ### Get view id from google analytics
# + papermill={} tags=[]
view_id = "228952707"
# + [markdown] papermill={} tags=[]
# ### Setup your output paths
# + papermill={} tags=[]
csv_output = "googleanalytics_unique_visitors.csv"
html_output = "googleanalytics_unique_visitors.html"
# + [markdown] papermill={} tags=[]
# ## Model
# + [markdown] papermill={} tags=[]
# ### Number of uniques visitors
# + papermill={} tags=[]
df_unique_visitors = googleanalytics.connect(json_path).views.get_unique_visitors(view_id)
df_unique_visitors.tail(5)
# + [markdown] papermill={} tags=[]
# ## Output
# + [markdown] papermill={} tags=[]
# ### Save dataframe in csv
# + papermill={} tags=[]
df_unique_visitors.to_csv(csv_output, index=False)
# + [markdown] papermill={} tags=[]
# ### Plotting barchart
# + papermill={} tags=[]
def plot_unique_visitors(df: pd.DataFrame):
    """
    Plot unique visitors in Plotly.
    """
    # Prep dataframe
    df["Date"] = pd.to_datetime(df['Year Month'] + "01")

    # Get last month value
    value = "{:,.0f}".format(df.loc[df.index[-1], "Users"]).replace(",", " ")

    # Create data
    data = go.Bar(
        x=df["Date"],
        y=df['Users'],
        text=df['Users'],
        # marker=dict(color="black"),
        orientation="v"
    )

    # Create layout
    layout = go.Layout(
        yaxis={'categoryorder': 'total ascending'},
        margin={"l": 150, "pad": 20},
        title=f"<b>Number of Unique Visitors by Month</b><br><span style='font-size: 13px;'>Unique visitors this month: {value}</span>",
        title_font=dict(family="Arial", size=18, color="black"),
        xaxis_title="Months",
        xaxis_title_font=dict(family="Arial", size=11, color="black"),
        yaxis_title="No visitors",
        yaxis_title_font=dict(family="Arial", size=11, color="black"),
        plot_bgcolor="#ffffff",
        width=1200,
        height=800,
        margin_pad=10,
    )
    fig = go.Figure(data=data, layout=layout)
    fig.update_traces(textposition="outside")
    return fig
fig = plot_unique_visitors(df_unique_visitors)
fig.show()
# + [markdown] papermill={} tags=[]
# ### Export and share graph
# + papermill={} tags=[]
# Export in HTML
fig.write_html(html_output)
# Share with naas
#-> Uncomment the line below (by removing the hashtag) to share your asset with naas
# naas.asset.add(html_output, params={"inline": True})
#-> Uncomment the line below (by removing the hashtag) to delete your asset
# naas.asset.delete(html_output)
| Google Analytics/Google_Analytics_Get_unique_visitors.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # NumPy
import numpy as np
my_list = [1,2,3]
my_list
np.array(my_list)
my_matrix = [[1,2,3],[4,5,6],[7,8,9]]
my_matrix
np.array(my_matrix)
# ### arange
#
# Return evenly spaced values within a given interval.
np.arange(0,10) # similar to range()
np.arange(0,11,5)
# ### zeros and ones
#
# Generate arrays of zeros or ones
np.zeros(3) #for 1-d array
np.zeros((5,2)) # pass a tuple to get 5 rows and 2 columns
np.ones(3)
np.ones((3,3))
# ### linspace
# Return evenly spaced numbers over a specified interval.
np.linspace(0,10,5) # 1-d array of 5 evenly spaced values
np.linspace(0,10,50)
# ## eye
#
# Creates an identity matrix
np.eye(4) #identity matrix
# # Random
#
# Numpy also has lots of ways to create random number arrays:
#
# ### rand
# Create an array of the given shape and populate it with
# random samples from a uniform distribution
# over ``[0, 1)``.
np.random.rand(2)
np.random.rand(5,5) # no tuple needed, just pass each dimension as an argument
# ### randn
#
# Return a sample (or samples) from the "standard normal" distribution. Unlike rand which is uniform:
np.random.randn(2)
np.random.randn(5,5) # no tuple needed, just pass each dimension as an argument
# ### randint
# Return random integers from `low` (inclusive) to `high` (exclusive).
np.random.randint(1,100) # 1 has chance to be selected but not 100
np.random.randint(1,100,10) #get 10 random int
# ## Array Attributes and Methods
#
# Let's discuss some useful attributes and methods of an array:
arr = np.arange(25)
ranarr = np.random.randint(0,50,10)
arr
ranarr
# ## Reshape
# Returns an array containing the same data with a new shape.
arr.reshape(5,5) # raises an error if the new shape doesn't match the number of elements
# ### max,min,argmax,argmin
#
# These are useful methods for finding max or min values. Or to find their index locations using argmin or argmax
ranarr
ranarr.max()
ranarr.argmax() # location/index of the max value
ranarr.min()
ranarr.argmin() # location/index of the min value
# ## Shape
#
# Shape is an attribute that arrays have (not a method):
# Vector
arr.shape
# Notice the two sets of brackets
arr.reshape(1,25)
arr.reshape(1,25).shape
arr.reshape(5,5)
arr.reshape(25,1).shape
# ### dtype
#
# You can also grab the data type of the object in the array:
arr.dtype
# ## NumPy Indexing and Selection
#Creating sample array
arr = np.arange(0,11)
arr
# ## Bracket Indexing and Selection
# The simplest way to pick one or some elements of an array looks very similar to python lists:
#Get values in a range
arr[1:5]
#Get values in a range
arr[0:5]
# ## Broadcasting
#
# Numpy arrays differ from a normal Python list because of their ability to broadcast:
# +
#Setting a value with index range (Broadcasting)
arr[0:5]=100
#Show
arr
# -
arr
# +
# Reset array
arr = np.arange(0,11)
#Show
arr
# +
#Important notes on Slices on index level
slice_of_arr = arr[0:6]
#Show slice
slice_of_arr
# +
#Change Slice
slice_of_arr[:]=99
#Show Slice again
slice_of_arr
# -
arr # the change happened in the original array too, because the slice is a view
# +
#To get a copy, need to be explicit
arr_copy = arr.copy()
arr_copy
# -
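# The view-versus-copy behavior above can be verified directly. This is a small self-contained check of standard NumPy semantics (a sketch, independent of the cells above):

```python
import numpy as np

# A plain slice is a view: writing through it mutates the parent array.
arr = np.arange(0, 11)
view = arr[0:6]
view[:] = 99
print(arr[0])  # the original array now starts with 99

# .copy() allocates new memory, so the original stays untouched.
arr = np.arange(0, 11)
arr_copy = arr.copy()
arr_copy[0:6] = 99
print(arr[0], arr_copy[0])  # original keeps 0, copy holds 99
```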
# ## Indexing a 2D array (matrices)
#
# The general format is **arr_2d[row][col]** or **arr_2d[row,col]**.
# +
arr_2d = np.array(([5,10,15],[20,25,30],[35,40,45]))
#Show
arr_2d
# -
# Getting individual element value
arr_2d[1][0]
# Getting individual element value
arr_2d[1,2]
# +
# 2D array slicing
#All rows of the last column (shape (3,1))
arr_2d[:3,2:]
# -
arr_2d[:3,1:]
#Grab bottom row
arr_2d[2]
#Grab bottom row (explicit slice)
arr_2d[2,:]
# ## Selection
#
arr = np.arange(1,11)
arr
arr > 4
bool_arr = arr>4
bool_arr
arr[arr>4] # select the elements where the condition is True
# # NumPy Operations
arr = np.arange(0,10)
arr + arr
arr * arr
arr - arr
# Warning on division by zero, but not an error!
# Just replaced with nan
arr/arr
# Also a warning, but not an error; the result is infinity
1/arr
arr**3
# ## Universal Array Functions
#
# **REFERENCE:** Numpy comes with many [universal array functions](http://docs.scipy.org/doc/numpy/reference/ufuncs.html).
#Taking Square Roots
np.sqrt(arr)
#Calculating exponential (e^x)
np.exp(arr)
np.max(arr) #same as arr.max()
arr
np.sin(arr)
np.log(10) # natural logarithm (base e)
np.lcm(arr[3],arr[4]) # least common multiple of 3 and 4
np.reciprocal(1) # 1/x; note that integer inputs truncate, e.g. np.reciprocal(2) == 0
np.hypot(arr,arr) # element-wise sqrt(x**2 + y**2)
| Day 2 .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# Matrices are said to "act" on vectors. As an example, we will have a matrix multiply the vector ${\left[\begin{array}{cc}
# 1 \\
# 1 \\
# \end{array}\right] }$. To give you an appreciation for what this means, you can manipulate the elements of the following matrix ${\bf{M}}= \left[ {\begin{array}{cc}
# a & b \\
# c & d \\
# \end{array} } \right] $ and visualize the effects on the resulting vector ${\bf{v}} = {\bf{M}}\cdot {\left[\begin{array}{cc}
# 1 \\
# 1 \\
# \end{array} \right]}$.
# +
# import necessary modules
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import *
# define the original vector
vec = np.array([1,1])
# define a function that calculates the resulting matrix product (takes arbitrary elements as inputs)
def MatrixProduct(a,b,c,d):
# create a figure with 2 plots
f, ((ax1, ax2)) = plt.subplots(1,2, sharex=True, sharey=True )
# set x and y limits
plt.xlim(-10, 10)
plt.ylim(-10, 10)
# define matrix
mat = np.array([[a,b],[c,d]])
# calculate resulting vector
resVec = np.dot(mat,vec)
# plot original vector on first axis and resulting vector on second axis
ax1.quiver(vec[0],vec[1], color='b',angles='xy', scale_units='xy', scale=1)
ax2.quiver(resVec[0],resVec[1], color='r',angles='xy', scale_units='xy', scale=1)
ax1.set_title('Original Vector')
ax2.set_title('Resulting Vector')
# show the plot
plt.show()
# Allows you to run the function in an interactive way where you can change the arguments.
interact(MatrixProduct, a=(-10,10,0.1),b=(-10,10,0.1),c=(-10,10,0.1),d=(-10,10,0.1))
# -
# Now that we see that matrices "transform" vectors, we can ask what happens if we have a different starting vector, like ${\left[\begin{array}{cc}
# 1 \\
# 0 \\
# \end{array} \right]}$.
vec = np.array([1,0])
interact(MatrixProduct, a=(-10,10,0.1),b=(-10,10,0.1),c=(-10,10,0.1),d=(-10,10,0.1))
# You may have noticed that in this case changing the elements $a$ and $c$ affects what happens to the vector, but changing $b$ and $d$ has no effect whatsoever on the resulting vector. The opposite is true for the starting vector ${\left[\begin{array}{cc}
# 0 \\
# 1 \\
# \end{array} \right]}$
# --$b$ and $d$ affect the vector, but $a$ and $c$ do not.
vec = np.array([0,1])
interact(MatrixProduct, a=(-10,10,0.1),b=(-10,10,0.1),c=(-10,10,0.1),d=(-10,10,0.1))
# Why should this be the case? Let's carry out the general matrix multiplication ${\bf{v}} = {\bf{M}}\cdot {\left[\begin{array}{cc}
# 1 \\
# 0 \\
# \end{array} \right]}$ to see why.
#
# ${\bf{v}} = {\bf{M}} \cdot {\left[\begin{array}{cc}
# 1 \\
# 0 \\
# \end{array} \right]} $
#
# ${\bf{v}} = {\left[\begin{array}{cc}
# a & b \\
# c & d \\
# \end{array} \right]} \cdot
# {\left[\begin{array}{cc}
# 1 \\
# 0 \\
# \end{array} \right]} = {\left[\begin{array}{cc}
# a \\
# c \\
# \end{array} \right]}$
#
# From this, it is clear that only the first column of the matrix affects the transformation of the vector (1,0).
# If we carry out the multiplication on the vector (0,1), the result is (b, d). Note that the resulting vectors are the columns of our matrix (more on this later).
#
# So why were all of the elements of the matrix able to affect our original vector (1,1)? Let's do the multiplication and find out:
# ${\bf{v}} = {\left[\begin{array}{cc}
# a & b \\
# c & d \\
# \end{array} \right]} \cdot
# {\left[\begin{array}{cc}
# 1 \\
# 1 \\
# \end{array} \right]} = {\left[\begin{array}{cc}
# a + b\\
# c + d\\
# \end{array} \right]}$
#
# The result is clearly a combination of both columns of our matrix. But so what? Let's write the multiplication for a general vector ${\bf{c}}= (k1,k2)$ in a more suggestive way.
#
# ${\bf{v}} = {\left[\begin{array}{cc}
# a & b \\
# c & d \\
# \end{array} \right]} \cdot
# {\left[\begin{array}{cc}
# k1 \\
# k2 \\
# \end{array} \right]} =
# {\left[\begin{array}{cc}
# k1\cdot a + k2\cdot b \\
# k1\cdot c + k2\cdot d \\
# \end{array} \right]}
# = k1{\left[\begin{array}{cc}
# a \\
# c \\
# \end{array} \right]} +
# k2{\left[\begin{array}{cc}
# b \\
# d \\
# \end{array} \right]} $
#
# If we view the columns of our matrix as independent vectors, we see that the matrix multiplication turns out to be a linear combination of these column vectors, where the scaling coefficients are given by the components of the vector we are transforming. So if one of the components of the vector is zero, then the corresponding column vector of the matrix contributes nothing to the final vector, i.e. it has no effect.
#
# As a side note, by linear combination we mean taking two or more things, scaling each by a constant, and adding them together. The right-hand side of the last equation is clearly a linear combination of the column vectors of our matrix. The "column space" of a matrix in linear algebra is a powerful notion that can tell us a lot of information about a linear system. Here we will go over linear dependence and talk about the dimensionality of the column space.
#
# Consider the matrix
#
# ${\left[\begin{array}{cc}
# 1 & 2.5 \\
# 3 & 0.5 \\
# \end{array} \right]}$
#
# The column vectors are (1,3) and (2.5, 0.5). Multiplying this matrix by an arbitrary vector, (a,b), will obviously yield the vector given by
#
# $a{\left[\begin{array}{cc}
# 1 \\
# 3 \\
# \end{array} \right]} +
# b{\left[\begin{array}{cc}
# 2.5 \\
# 0.5 \\
# \end{array} \right]} $
#
# What does this mean pictorially? Well, let's plot a grid of lines in the directions of our vectors such that when we add integers times our vectors together we hit intersections.
# +
import numpy as np
import matplotlib.pyplot as plt
#basis vectors
v1 =np.array([1,3])
v2 = np.array([2.5,0.5])
# create figure and axes object
fig, ax = plt.subplots(1,1)
ax.set_xlim(-5, 8)
ax.set_ylim(-3,7)
ax.quiver([v1[0]],[v1[1]],color='r',angles='xy', scale_units='xy', scale=1)
ax.quiver([v2[0]],[v2[1]],color='r',angles='xy', scale_units='xy', scale=1)
t = np.linspace(-15,15, 100)
a = np.arange(-10,10)
for num in a:
x=[]
y=[]
for tval in t:
vec = num*v1+tval*v2
x.append(vec[0])
y.append(vec[1])
ax.plot(x, y, color = 'blue')
for num in a:
x=[]
y=[]
for tval in t:
vec = num*v2+tval*v1
x.append(vec[0])
y.append(vec[1])
ax.plot(x, y, color = 'blue')
plt.show()
# -
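# The column-combination identity above is easy to check numerically. This quick sketch uses the same matrix and vector as the worked example:

```python
import numpy as np

M = np.array([[1.0, 2.5],
              [3.0, 0.5]])
v = np.array([1.3, 2.2])

# M @ v equals the linear combination of M's columns weighted by v's components.
combo = v[0] * M[:, 0] + v[1] * M[:, 1]
assert np.allclose(M @ v, combo)
```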
# The grid presents a nice way of showing the action of our matrix on an arbitrary vector. For example, we know that the following matrix multiplication should work out as follows:
#
# ${\left[\begin{array}{cc}
# 1 & 2.5 \\
# 3 & 0.5 \\
# \end{array} \right]}\cdot {\left[\begin{array}{cc}
# 1.3 \\
# 2.2 \\
# \end{array} \right]}= 1.3{\left[\begin{array}{cc}
# 1 \\
# 3 \\
# \end{array} \right]} +
# 2.2{\left[\begin{array}{cc}
# 2.5 \\
# 0.5 \\
# \end{array} \right]}$
#
# Pictorially, we scale the vectors by their corresponding coefficients and then add them together to obtain the result of the matrix multiplication.
#
#
#
# +
import numpy as np
import matplotlib.pyplot as plt
#basis vectors
v1 =np.array([1,3])
v2 = np.array([2.5,0.5])
# create figure and axes object
fig, ax = plt.subplots(1,1)
ax.set_xlim(-5, 8)
ax.set_ylim(-3,7)
ax.quiver([v1[0]],[v1[1]],color='b',angles='xy', scale_units='xy', scale=1)
ax.quiver([v2[0]],[v2[1]],color='b',angles='xy', scale_units='xy', scale=1)
fig.set_size_inches((10,5))
t = np.linspace(-15,15, 100)
a = np.arange(-10,10)
for num in a:
x=[]
y=[]
for tval in t:
vec = num*v1+tval*v2
x.append(vec[0])
y.append(vec[1])
ax.plot(x, y, color = 'blue')
for num in a:
x=[]
y=[]
for tval in t:
vec = num*v2+tval*v1
x.append(vec[0])
y.append(vec[1])
ax.plot(x, y, color = 'blue')
ax.quiver([2.2* v2[0]],[2.2*v2[1]],[1.3*v1[0]],[1.3*v1[1]],color='r',angles='xy', scale_units='xy', scale=1)
ax.quiver([2.2* v2[0]],[2.2*v2[1]],color='r',angles='xy', scale_units='xy', scale=1)
ax.quiver([2.2* v2[0]+1.3*v1[0]],[2.2*v2[1]+1.3*v1[1]],color='r',angles='xy', scale_units='xy', scale=1)
ax.text(.5, -1, r'${\bf{v1}}$', fontsize = 20)
ax.text(-1, 1, r'${\bf{v2}}$', fontsize = 20)
ax.text(3, -0.5, r'$2.2\cdot{\bf{v2}}$', fontsize = 20)
ax.text(6, 2, r'$1.3\cdot{\bf{v1}}$', fontsize = 20)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# -
# It should be clear from this picture that we can construct any vector that exists in this 2-d space by carefully choosing the components of the vector on which our matrix is acting. Will this always be the case?
#
# If no, under what circumstances will we fail to be able to recover any vector that we want from our multiplication?
#
# The notion of constructing grids in this way can be extended to any number of dimensions, though it is impossible to fully visualize beyond three. For a 3x3 matrix, our grid exists in 3-d Cartesian space and the vectors form a grid consisting of parallelepipeds instead of parallelograms. Below is just one of the parallelepipeds that makes up a grid formed by three vectors. Drawing the full grid would look cluttered on a 2-d screen, but you can imagine the parallelepipeds stacked in each of the three directions given by the vectors so that their faces and edges line up (think of a distorted Rubik's cube).
#
#
# +
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(16, 16), dpi= 80, facecolor='w', edgecolor='k')
ax = fig.add_subplot(111,projection='3d')
ax.set_xlim(-1,3)
ax.set_ylim(-1, 3)
ax.set_zlim(-1, 3)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
ax.quiver([0],[0],[0], [1], [0.2],[0.3], color='b')
ax.quiver([0], [0],[0], [0.1], [2],[0],color='r')
ax.quiver([0], [0],[0], [0], [.4],[2],color='g')
ax.plot([0,1], [.4,0.2+0.4], [2,0.3+2], color='b')
ax.plot([0, 0.1], [.4,2+0.4],[2,2],color='r')
ax.plot([0.1,0.1], [2,2.4],[0,2],color='g')
ax.plot([1.1, 1.1], [2.2, 2.6],[0.3, 2.3],color='g')
ax.plot([0.1, 1.1], [2, 2.2],[0, 0.3], color='b')
ax.plot([1, 1.1], [0.6, 2.6],[2.3, 2.3],color='r')
ax.plot([1, 1], [0.2, 0.6],[0.3, 2.3],color='g')
ax.plot([1, 1.1], [0.2, 2.2],[0.3, 0.3],color='r')
ax.plot([0.1, 1.1], [2.4,2.6],[2, 2.3], color='b')
plt.show()
# -
# The question raised in the last cell brings us to the concept of linear dependence. The strict definition states that a set of vectors is linearly dependent if one of the vectors can be written as a linear combination of the others. If no such linear combination exists, the vectors are said to be linearly independent. For example, if we have 3 vectors $\{{\bf{v1}}, {\bf{v2}}, {\bf{v3}}\}$ living in a 3-d space, the vectors are linearly dependent if there exist numbers $a$ and $b$ such that
#
# $a\cdot {\bf{v1}} + b \cdot {\bf{v2}} = {\bf{v3}}$
#
# If no such $a$ and $b$ exist, then the vectors are linearly independent. Pictorially we can use what we've learned about constructing grids from vectors to understand what this means. We talked about forming a grid from two vectors in 2 dimensions, but what if we form a grid from 2 vectors in 3-dimensions?
#
#
# Below we form a grid from two vectors ${\bf{v1}}, {\bf{v2}}$ in 3 dimensions.
#
#
#
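# Linear (in)dependence can also be tested numerically with the matrix rank: stack the vectors as the columns of a matrix, and full column rank means the set is linearly independent. A sketch using ${\bf{v1}}$, ${\bf{v2}}$ and the two third vectors drawn in the plots:

```python
import numpy as np

v1 = np.array([1.0, 0.5, 1.5])
v2 = np.array([0.5, 2.0, 2.5])
outside = np.array([-1.0, -1.0, 1.0])  # does not lie in the v1-v2 plane
inside = v2 - v1                       # (-0.5, 1.5, 1.0), in the plane by construction

# Full column rank (3) means the three vectors are linearly independent.
rank_indep = np.linalg.matrix_rank(np.column_stack([v1, v2, outside]))
rank_dep = np.linalg.matrix_rank(np.column_stack([v1, v2, inside]))
print(rank_indep, rank_dep)  # 3 2
```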
# +
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from ipywidgets import *
def makePlot(phi, theta):
fig = plt.figure(figsize=(16, 16), dpi= 80, facecolor='w', edgecolor='k')
ax = fig.add_subplot(111,projection='3d')
v1 = np.array([1,0.5,1.5])
v2 = np.array([0.5,2,2.5])
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
Z = X+Y
ax.set_xlim(-5,5)
ax.set_ylim(-5, 5)
ax.set_zlim(-5, 5)
#ax.plot_surface(X, Y, Z, color='w')
ax.quiver(0,0,0, 1,0.5,1.5, color = 'r')
ax.quiver(0,0,0, 0.5,2,2.5, color = 'r',)
t = np.linspace(-5,5, 100)
a = np.arange(-10,10)
for num in a:
limVec1 = list(np.divide((np.array([-5.0,-5.0,-5.0])-num*v1),v2))
limVec2 = list(np.divide((np.array([5.0,5.0,5.0])-num*v1),v2))
limVecX =sorted( [limVec1[0],limVec2[0]])
limVecY =sorted( [limVec1[1],limVec2[1]])
limVecZ =sorted( [limVec1[2],limVec2[2]])
limVecMin = sorted([limVecX[0], limVecY[0], limVecZ[0]])
limVecMax = sorted ([limVecX[1],limVecY[1], limVecZ[1]])
if limVecMin[2]< limVecMax[0]:
t = np.linspace(limVecMin[2], limVecMax[0], 100)
else:
t = []
x=[]
y=[]
z=[]
for tval in t:
vec = num*v1+tval*v2
x.append(vec[0])
y.append(vec[1])
z.append(vec[2])
ax.plot(x, y,z, color = 'blue')
for num in a:
limVec1 = list(np.divide((np.array([-5.0,-5.0,-5.0])-num*v2),v1))
limVec2 = list(np.divide((np.array([5.0,5.0,5.0])-num*v2),v1))
limVecX =sorted( [limVec1[0],limVec2[0]])
limVecY =sorted( [limVec1[1],limVec2[1]])
limVecZ =sorted( [limVec1[2],limVec2[2]])
limVecMin = sorted([limVecX[0], limVecY[0], limVecZ[0]])
limVecMax = sorted ([limVecX[1],limVecY[1], limVecZ[1]])
if limVecMin[2]< limVecMax[0]:
t = np.linspace(limVecMin[2], limVecMax[0], 100)
else:
t = []
x=[]
y=[]
z=[]
for tval in t:
vec = num*v2+tval*v1
x.append(vec[0])
y.append(vec[1])
z.append(vec[2])
ax.plot(x, y, z, color = 'blue')
ax.view_init(elev= phi, azim = theta)
plt.show()
interact(makePlot, phi= (0, 90,0.05), theta=(0,360, 0.1))
# -
# In this case, the grid covered by these two vectors forms a plane in the 3-dimensional space. Recall that the grid can be used to find the coefficients $c1$ and $c2$ that can be combined with the two vectors to make any other vector that lives in this grid (plane).
#
# ${\bf{v}}_{plane} = c1{\bf{v1}}+ c2{\bf{v2}}$
#
# If we have a third vector, ${\bf{v3}}$, it may either live in this plane or not. If it does, then we can find a $c1$ and $c2$ such that
#
# ${\bf{v}}_{3} = c1{\bf{v1}}+ c2{\bf{v2}}$
#
# and the set of three vectors is said to be linearly dependent. If the vector lives "outside" the plane, the vectors are linearly independent. Below we illustrate both cases.
#
#
# +
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from ipywidgets import *
def makePlot(phi, theta, phi2, theta2):
fig = plt.figure(figsize=(24, 12), dpi= 80, facecolor='w', edgecolor='k')
ax = fig.add_subplot(121,projection='3d')
ax1 = fig.add_subplot(122,projection='3d')
v1 = np.array([1,0.5,1.5])
v2 = np.array([0.5,2,2.5])
ax.set_xlim(-5,5)
ax.set_ylim(-5, 5)
ax.set_zlim(-5, 5)
ax.set_title("Linearly Independent Vectors", fontsize = 20)
ax1.set_xlim(-5,5)
ax1.set_ylim(-5, 5)
ax1.set_zlim(-5, 5)
ax1.set_title("Linearly Dependent Vectors", fontsize = 20)
#ax.plot_surface(X, Y, Z, color='w')
ax.quiver(0,0,0, 1,0.5,1.5, color = 'r', linewidth=3)
ax.quiver(0,0,0, 0.5,2,2.5, color = 'r',linewidth=3)
ax.quiver(0,0,0, -1,-1,1, color = 'r',linewidth=3)
ax1.quiver(0,0,0, 1,0.5,1.5, color = 'r', linewidth=3)
ax1.quiver(0,0,0, 0.5,2,2.5, color = 'r',linewidth=3)
ax1.quiver(0,0,0, -0.5,1.5,1.0, color = 'r',linewidth=3)
t = np.linspace(-5,5, 100)
a = np.arange(-10,10)
for num in a:
limVec1 = list(np.divide((np.array([-5.0,-5.0,-5.0])-num*v1),v2))
limVec2 = list(np.divide((np.array([5.0,5.0,5.0])-num*v1),v2))
limVecX =sorted( [limVec1[0],limVec2[0]])
limVecY =sorted( [limVec1[1],limVec2[1]])
limVecZ =sorted( [limVec1[2],limVec2[2]])
limVecMin = sorted([limVecX[0], limVecY[0], limVecZ[0]])
limVecMax = sorted ([limVecX[1],limVecY[1], limVecZ[1]])
if limVecMin[2]< limVecMax[0]:
t = np.linspace(limVecMin[2], limVecMax[0], 2)
else:
t = []
x=[]
y=[]
z=[]
for tval in t:
vec = num*v1+tval*v2
x.append(vec[0])
y.append(vec[1])
z.append(vec[2])
ax.plot(x, y,z, color = 'blue')
ax1.plot(x, y,z, color = 'blue')
for num in a:
limVec1 = list(np.divide((np.array([-5.0,-5.0,-5.0])-num*v2),v1))
limVec2 = list(np.divide((np.array([5.0,5.0,5.0])-num*v2),v1))
limVecX =sorted( [limVec1[0],limVec2[0]])
limVecY =sorted( [limVec1[1],limVec2[1]])
limVecZ =sorted( [limVec1[2],limVec2[2]])
limVecMin = sorted([limVecX[0], limVecY[0], limVecZ[0]])
limVecMax = sorted ([limVecX[1],limVecY[1], limVecZ[1]])
if limVecMin[2]< limVecMax[0]:
t = np.linspace(limVecMin[2], limVecMax[0], 2)
else:
t = []
x=[]
y=[]
z=[]
for tval in t:
vec = num*v2+tval*v1
x.append(vec[0])
y.append(vec[1])
z.append(vec[2])
ax.plot(x, y, z, color = 'blue')
ax1.plot(x, y, z, color = 'blue')
ax.view_init(elev= phi, azim = theta)
ax1.view_init(elev= phi2, azim = theta2)
plt.show()
interact(makePlot, phi= (0, 90,0.05), theta=(0,360, 0.1),phi2= (0, 90,0.05), theta2=(0,360, 0.1))
# -
# So if we have three linearly dependent vectors or three linearly independent vectors that make up the columns of a matrix, what does this mean?
#
# This leads into the concept of column space. If our three vectors are linearly independent, it means that we can tile the space with a 3-d grid of the parallelepipeds formed from the three vectors.
# +
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from ipywidgets import *
def makePlot(phi, theta):
plt.clf()
fig = plt.figure(figsize=(24, 12), dpi= 80, facecolor='w', edgecolor='k')
ax = fig.add_subplot(121,projection='3d')
ax.set_xlim(-3,3)
ax.set_ylim(-3, 3)
ax.set_zlim(-3, 3)
alpha = np.arange(-7,7)
beta = np.arange(-7,7)
gamma = np.arange(-7,7)
v1 = np.array([1,0.5,1.5])
v2 = np.array([0.5,2,2.5])
v3 = np.array([-1,-1,1])
for a in alpha:
for b in beta:
limVec1 = list(np.divide((np.array([-3.0,-3.0,-3.0])-a*v1-b*v2),v3))
limVec2 = list(np.divide((np.array([3.0,3.0,3.0])-a*v1-b*v2),v3))
limVecX =sorted( [limVec1[0],limVec2[0]])
limVecY =sorted( [limVec1[1],limVec2[1]])
limVecZ =sorted( [limVec1[2],limVec2[2]])
limVecMin = sorted([limVecX[0], limVecY[0], limVecZ[0]])
limVecMax = sorted ([limVecX[1],limVecY[1], limVecZ[1]])
if limVecMin[2]< limVecMax[0]:
t = np.linspace(limVecMin[2], limVecMax[0], 100)
else:
t = []
x=[]
y=[]
z=[]
for tval in t:
vec = a*v1+b*v2+tval*v3
x.append(vec[0])
y.append(vec[1])
z.append(vec[2])
ax.plot(x, y, z, color = 'green')
for a in alpha:
for c in gamma:
limVec1 = list(np.divide((np.array([-3.0,-3.0,-3.0])-a*v1-c*v3),v2))
limVec2 = list(np.divide((np.array([3.0,3.0,3.0])-a*v1-c*v3),v2))
limVecX =sorted( [limVec1[0],limVec2[0]])
limVecY =sorted( [limVec1[1],limVec2[1]])
limVecZ =sorted( [limVec1[2],limVec2[2]])
limVecMin = sorted([limVecX[0], limVecY[0], limVecZ[0]])
limVecMax = sorted ([limVecX[1],limVecY[1], limVecZ[1]])
if limVecMin[2]< limVecMax[0]:
t = np.linspace(limVecMin[2], limVecMax[0], 100)
else:
t = []
x=[]
y=[]
z=[]
for tval in t:
vec = a*v1+c*v3+tval*v2
x.append(vec[0])
y.append(vec[1])
z.append(vec[2])
ax.plot(x, y, z, color = 'blue')
for b in beta:
for c in gamma:
limVec1 = list(np.divide((np.array([-3.0,-3.0,-3.0])-b*v2-c*v3),v1))
limVec2 = list(np.divide((np.array([3.0,3.0,3.0])-b*v2-c*v3),v1))
limVecX =sorted( [limVec1[0],limVec2[0]])
limVecY =sorted( [limVec1[1],limVec2[1]])
limVecZ =sorted( [limVec1[2],limVec2[2]])
limVecMin = sorted([limVecX[0], limVecY[0], limVecZ[0]])
limVecMax = sorted ([limVecX[1],limVecY[1], limVecZ[1]])
if limVecMin[2]< limVecMax[0]:
t = np.linspace(limVecMin[2], limVecMax[0], 100)
else:
t = []
x=[]
y=[]
z=[]
for tval in t:
vec = b*v2+c*v3+tval*v1
x.append(vec[0])
y.append(vec[1])
z.append(vec[2])
ax.plot(x, y, z, color = 'red')
ax.view_init(elev= phi, azim = theta)
plt.show()
interact(makePlot, phi= (0, 90,0.05), theta=(0,360, 0.1))
# -
| jupyter/Matrices.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: NumbaKernel
# language: python
# name: numbaenv
# ---
# # Detector simulation
# In this notebook we load a track dataset generated by `edep-sim` and we calculate the ADC counts corresponding to each pixel. The result is exported to an HDF5 file.
# This is needed so you can import larndsim without running `python setup.py install`
import os,sys,inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0,parentdir)
# +
from larndsim import consts
import importlib
importlib.reload(consts)
consts.load_detector_properties("../larndsim/detector_properties/singlecube.yaml","../larndsim/pixel_layouts/layout-singlecube.yaml")
# +
from math import ceil
from larndsim import quenching, drifting, detsim, pixels_from_track, fee
importlib.reload(detsim)
importlib.reload(drifting)
importlib.reload(quenching)
importlib.reload(pixels_from_track)
importlib.reload(fee)
import h5py
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.cm as cm
from mpl_toolkits.axes_grid1 import make_axes_locatable
from mpl_toolkits.mplot3d import Axes3D
import mpl_toolkits.mplot3d.art3d as art3d
import numpy as np
import numba as nb
from numba import cuda
# -
# ### Dataset import
# First of all we load the `edep-sim` output. For this sample we need to swap the $z$ and $y$ axes.
# +
with h5py.File('edepsim_1M.h5', 'r') as f:
tracks = np.array(f['segments'])
y_start = np.copy(tracks['y_start'] )
y_end = np.copy(tracks['y_end'])
y = np.copy(tracks['y'])
tracks['y_start'] = np.copy(tracks['z_start'])
tracks['y_end'] = np.copy(tracks['z_end'])
tracks['y'] = np.copy(tracks['z'])
tracks['z_start'] = y_start
tracks['z_end'] = y_end
tracks['z'] = y
# -
selected_tracks = tracks[:100]
# ### Quenching and drifting
# We calculate the number of electrons after recombination (`quenching` module) and the position and number of electrons after drifting (`drifting` module).
threadsperblock = 256
blockspergrid = ceil(selected_tracks.shape[0] / threadsperblock)
quenching.quench[blockspergrid,threadsperblock](selected_tracks, consts.box)
drifting.drift[blockspergrid,threadsperblock](selected_tracks)
# We find the pixels intersected by the projection of the tracks on the anode plane using Bresenham's algorithm. We also take into account the neighboring pixels, due to the transverse diffusion of the charges.
# +
longest_pix = ceil(max(selected_tracks["dx"])/consts.pixel_size[0])
max_radius = ceil(max(selected_tracks["tran_diff"])*5/consts.pixel_size[0])
MAX_PIXELS = (longest_pix*4+6)*max_radius*2
MAX_ACTIVE_PIXELS = longest_pix*2
active_pixels = np.full((selected_tracks.shape[0], MAX_ACTIVE_PIXELS, 2), -1, dtype=np.int32)
neighboring_pixels = np.full((selected_tracks.shape[0], MAX_PIXELS, 2), -1, dtype=np.int32)
n_pixels_list = np.zeros(shape=(selected_tracks.shape[0]))
threadsperblock = 128
blockspergrid = ceil(selected_tracks.shape[0] / threadsperblock)
pixels_from_track.get_pixels[blockspergrid,threadsperblock](selected_tracks,
active_pixels,
neighboring_pixels,
n_pixels_list,
max_radius+1)
# -
shapes = neighboring_pixels.shape
joined = neighboring_pixels.reshape(shapes[0]*shapes[1],2)
unique_pix = np.unique(joined, axis=0)
unique_pix = unique_pix[(unique_pix[:,0] != -1) & (unique_pix[:,1] != -1),:] # This array contains a unique list of involved pixels
# ### Charge distribution calculation
# Here we calculate the current induced by each track on the pixels, taking into account longitudinal and transverse diffusion. The track segment is parametrized as:
# \begin{align}
# x'(r') &=x_s + \frac{\Delta x}{\Delta r}r'\\
# y'(r') &=y_s + \frac{\Delta y}{\Delta r}r'\\
# z'(r') &=z_s + \frac{\Delta z}{\Delta r}r',
# \end{align}
# where $\Delta r$ is the segment length. Here we assume $z$ as the drift direction.
# The diffused charge distribution is calculated with the following integral:
# \begin{equation}
# \rho(x,y,z) = \frac{Q}{\sqrt{(2\pi)^3}\sigma_x\sigma_y\sigma_z\Delta r}\exp\left[-\frac{(x-x_s)^2}{2\sigma_x^2}-\frac{(y-y_s)^2}{2\sigma_y^2}-\frac{(z-z_s)^2}{2\sigma_z^2}\right]\int^{r'=\Delta r}_{r'=0}dr'\exp[-(ar'^2+br')],
# \end{equation}
# where
# \begin{align}
# a &= \left[\left(\frac{\Delta x}{\Delta r}\right)^2\frac{1}{2\sigma_x^2} + \left(\frac{\Delta y}{\Delta r}\right)^2\frac{1}{2\sigma_y^2} + \left(\frac{\Delta z}{\Delta r}\right)^2\frac{1}{2\sigma_z^2} \right]\\
# b &= -\left[\frac{(x-x_s)}{\sigma_x^2}\frac{\Delta x}{\Delta r}+
# \frac{(y-y_s)}{\sigma_y^2}\frac{\Delta y}{\Delta r}+
# \frac{(z-z_s)}{\sigma_z^2}\frac{\Delta z}{\Delta r}\right].
# \end{align}
#
# The symmetry of the transverse diffusion along the track allows us to take a slice in the $xy$ plane, solve the integral once at a fixed $z$ coordinate (e.g. at $z_{m} = (z_s+z_e)/2$), and re-use it at other $z$ coordinates away from the endpoints (where $\rho(x,y,z)$ varies along $z$, so it must be calculated at each $z$).
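# The one-dimensional integral over $r'$ above has a closed form in terms of the error function, so it can be evaluated without numerical quadrature. This is a standalone sketch (not the larndsim implementation), with arbitrary example values for $a$, $b$ and $\Delta r$:

```python
from math import erf, exp, pi, sqrt

def segment_integral(a, b, dr):
    """Closed form of int_0^dr exp(-(a*r**2 + b*r)) dr, valid for a > 0."""
    s = sqrt(a)
    return (sqrt(pi) / (2 * s) * exp(b * b / (4 * a))
            * (erf(s * dr + b / (2 * s)) - erf(b / (2 * s))))

# Cross-check against a midpoint-rule quadrature with example coefficients.
a, b, dr = 1.3, -0.7, 2.0
n = 200000
h = dr / n
numeric = h * sum(exp(-(a * r * r + b * r)) for r in ((i + 0.5) * h for i in range(n)))
assert abs(segment_integral(a, b, dr) - numeric) < 1e-8
```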
# +
# Here we build a map between tracks and event IDs
unique_eventIDs = np.unique(selected_tracks['eventID'])
event_id_map = np.zeros_like(selected_tracks['eventID'])
for iev, evID in enumerate(selected_tracks['eventID']):
event_id_map[iev] = np.where(evID == unique_eventIDs)[0]
d_event_id_map = cuda.to_device(event_id_map)
# Here we find the longest signal in time and we store an array with the start in time of each track
max_length = np.array([0])
track_starts = np.empty(selected_tracks.shape[0])
d_track_starts = cuda.to_device(track_starts)
threadsperblock = 128
blockspergrid = ceil(selected_tracks.shape[0] / threadsperblock)
detsim.time_intervals[blockspergrid,threadsperblock](d_track_starts, max_length, d_event_id_map, selected_tracks)
# Here we calculate the induced current on each pixel
signals = np.zeros((selected_tracks.shape[0],
neighboring_pixels.shape[1],
max_length[0]), dtype=np.float32)
threadsperblock = (4,4,4)
blockspergrid_x = ceil(signals.shape[0] / threadsperblock[0])
blockspergrid_y = ceil(signals.shape[1] / threadsperblock[1])
blockspergrid_z = ceil(signals.shape[2] / threadsperblock[2])
blockspergrid = (blockspergrid_x, blockspergrid_y, blockspergrid_z)
d_signals = cuda.to_device(signals)
detsim.tracks_current[blockspergrid,threadsperblock](d_signals,
neighboring_pixels,
selected_tracks)
# Here we create a map between tracks and index in the unique pixel array
pixel_index_map = np.full((selected_tracks.shape[0], neighboring_pixels.shape[1]), -1)
for itr in range(neighboring_pixels.shape[0]):
for ipix in range(neighboring_pixels.shape[1]):
pID = neighboring_pixels[itr][ipix]
if pID[0] >= 0 and pID[1] >= 0:
            index = np.where((unique_pix[:,0] == pID[0]) & (unique_pix[:,1] == pID[1]))[0]
            if index.size == 0:
                print("Pixel", pID, "not found: more pixels than maximum value")
            else:
                pixel_index_map[itr,ipix] = index[0]
# Here we combine the induced current on the same pixels by different tracks
d_pixel_index_map = cuda.to_device(pixel_index_map)
threadsperblock = (8,8,8)
blockspergrid_x = ceil(d_signals.shape[0] / threadsperblock[0])
blockspergrid_y = ceil(d_signals.shape[1] / threadsperblock[1])
blockspergrid_z = ceil(d_signals.shape[2] / threadsperblock[2])
blockspergrid = (blockspergrid_x, blockspergrid_y, blockspergrid_z)
pixels_signals = np.zeros((len(unique_pix),len(consts.time_ticks)*len(unique_eventIDs)*2))
d_pixels_signals = cuda.to_device(pixels_signals)
detsim.sum_pixel_signals[blockspergrid,threadsperblock](d_pixels_signals,
d_signals,
d_track_starts,
d_pixel_index_map)
currents = np.sum(d_pixels_signals,axis=1)*consts.t_sampling/consts.e_charge
# -
# ### 3D event display
# +
cmap = cm.Spectral_r
norm = mpl.colors.Normalize(vmin=0, vmax=256)
m = cm.ScalarMappable(norm=norm, cmap=cmap)
cmap = cm.viridis
norm_curr = mpl.colors.LogNorm(vmin=1, vmax=max(currents))
m_curr = cm.ScalarMappable(norm=norm_curr, cmap=cmap)
fig = plt.figure(figsize=(7,7))
ax = fig.add_subplot(111, projection='3d')
for it,t in enumerate(selected_tracks):
if it == 0:
ax.plot((t["x_start"], t["x_end"]),
(t["z_start"], t["z_end"]),
(t["y_start"], t["y_end"]),
c='r',
lw=1,
alpha=1,
zorder=10,
label='Geant4 detector segment')
else:
ax.plot((t["x_start"], t["x_end"]),
(t["z_start"], t["z_end"]),
(t["y_start"], t["y_end"]),
c='r',
lw=1,
alpha=1,
zorder=9999)
ax.plot((t["x_start"], t["x_end"]),
(consts.module_borders[0][2][0], consts.module_borders[0][2][0]),
(t["y_start"], t["y_end"]),
c='r',
lw=1,
ls=':',
alpha=1,
zorder=9999)
for ip, p in enumerate(unique_pix):
x_rect, y_rect = detsim.get_pixel_coordinates(p)
pixel_plane = int(p[0] // consts.n_pixels[0])
row = int(pixel_plane // 4)
column = int(pixel_plane % 4)
if currents[ip] > 0:
rect = plt.Rectangle((x_rect-consts.pixel_size[0]/2, y_rect-consts.pixel_size[0]/2),
consts.pixel_size[0], consts.pixel_size[1],
linewidth=0.1, fc=m_curr.to_rgba(currents[ip]),
edgecolor='white', label=('Pixel' if ip == 5 else ''))
ax.add_patch(rect)
art3d.pathpatch_2d_to_3d(rect, z=consts.module_borders[pixel_plane][2][0], zdir="y")
# ax.set_ylim(consts.module_borders[pixel_plane][2][0],50)
ax.set_xlim(consts.module_borders[0][1][0],consts.module_borders[0][1][1])
ax.set_ylim(consts.module_borders[0][2][0],consts.module_borders[0][2][1])
ax.set_zlim(consts.module_borders[0][2][0],consts.module_borders[0][2][1])
ax.set_box_aspect((1, 1, 1))
ax.grid(False)
ax.xaxis.set_major_locator(plt.MaxNLocator(3))
ax.yaxis.set_major_locator(plt.MaxNLocator(3))
ax.view_init(20, 55)
ax.set_ylabel("z [cm]")
ax.set_xlabel("x [cm]")
ax.set_zlabel("y [cm]")
_ = plt.colorbar(m_curr,fraction=0.035, pad=0.05,label='Induced current integral [# electrons]')
# -
# ### Electronics response and digitization
# Here we simulate the electronics response (the self-triggering cycle) and the signal digitization.
# +
time_ticks = np.linspace(0,len(unique_eventIDs)*consts.time_interval[1]*2,d_pixels_signals.shape[1]+1)
integral_list = np.zeros((d_pixels_signals.shape[0], fee.MAX_ADC_VALUES))
adc_ticks_list = np.zeros((d_pixels_signals.shape[0], fee.MAX_ADC_VALUES))
TPB = 32
BPG = ceil(d_pixels_signals.shape[0] / TPB)
rng_states = cuda.random.create_xoroshiro128p_states(TPB * BPG, seed=0)
fee.get_adc_values[BPG,TPB](d_pixels_signals,
time_ticks,
integral_list,
adc_ticks_list,
0,
rng_states)
adc_list = fee.digitize(integral_list)
# -
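The self-triggering cycle implemented by `fee.get_adc_values` and `fee.digitize` can be illustrated with a minimal CPU sketch: integrate the current until the accumulated charge crosses a discrimination threshold, record the integral and trigger time, reset, and finally convert each integral to ADC counts with a gain and pedestal. The threshold, gain, pedestal, and sampling values below are made up for illustration and are not larnd-sim's actual front-end constants.

```python
import numpy as np

THRESHOLD = 5e3    # electrons; illustrative discrimination level
GAIN = 4e-3        # ADC counts per electron (illustrative)
PEDESTAL = 74      # baseline ADC value (illustrative)
T_SAMPLING = 0.1   # time per tick in arbitrary units (illustrative)

def get_adc_values(signal):
    """Integrate the current; emit one hit each time the charge self-triggers."""
    integrals, ticks, q = [], [], 0.0
    for i, current in enumerate(signal):
        q += current * T_SAMPLING
        if q > THRESHOLD:
            integrals.append(q)
            ticks.append(i)
            q = 0.0  # reset the integrator after each self-trigger
    return np.array(integrals), np.array(ticks)

def digitize(integrals):
    # Linear charge-to-ADC conversion, truncated to integer counts.
    return np.floor(integrals * GAIN + PEDESTAL)

signal = np.full(100, 1e3)  # flat 1000 e-/tick current waveform
integrals, ticks = get_adc_values(signal)
print(ticks, digitize(integrals))  # one trigger at tick 50, ADC value 94
```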
# ### 2D event display with induced current and ADC counts
# +
fig,ax = plt.subplots(1,2,figsize=(14,6.5))
for ip, p in enumerate(unique_pix):
x_rect, y_rect = detsim.get_pixel_coordinates(p)
pixel_plane = int(p[0] // consts.n_pixels[0])
c = currents[ip]
if c >=1:
rect = plt.Rectangle((x_rect-consts.pixel_size[0]/2, y_rect-consts.pixel_size[0]/2),
consts.pixel_size[0], consts.pixel_size[1],
linewidth=0.2, fc=m_curr.to_rgba(c),
edgecolor='grey')
ax[0].add_patch(rect)
a = adc_list[ip][adc_list[ip]>fee.digitize(0)]
if len(a):
rect = plt.Rectangle((x_rect-consts.pixel_size[0]/2, y_rect-consts.pixel_size[0]/2),
consts.pixel_size[0], consts.pixel_size[1],
linewidth=0.2, fc=m.to_rgba(np.sum(a)),
edgecolor='grey')
ax[1].add_patch(rect)
for it,t in enumerate(selected_tracks):
ax[0].plot((t["x_start"], t["x_end"]),
(t["y_start"], t["y_end"]),
c='r',
lw=1.25,
ls=':',
alpha=1,
zorder=10)
ax[1].plot((t["x_start"], t["x_end"]),
(t["y_start"], t["y_end"]),
c='r',
lw=1.25,
ls=':',
alpha=1,
zorder=10)
ax[0].scatter((t["x_start"], t["x_end"]),
(t["y_start"], t["y_end"]),
c='r', s=1, zorder=99999)
ax[1].scatter((t["x_start"], t["x_end"]),
(t["y_start"], t["y_end"]),
c='r', s=1, zorder=99999)
ax[0].set_aspect("equal")
ax[1].set_aspect("equal")
ax[0].set_xlabel("x [cm]")
ax[1].set_xlabel("x [cm]")
ax[0].set_ylabel("y [cm]")
divider0 = make_axes_locatable(ax[1])
cax0 = divider0.append_axes("right", size="7%", pad=0.07)
fig.colorbar(m, ax=ax[1], cax=cax0, label='ADC counts sum')
divider1 = make_axes_locatable(ax[0])
cax1 = divider1.append_axes("right", size="7%", pad=0.07)
fig.colorbar(m_curr, ax=ax[0], cax=cax1, label='Induced current integral [# electrons]')
plt.subplots_adjust(hspace=0.5)
fig.savefig("currentadc.pdf")
# -
# ### Export result
# As a last step we backtrack the ADC counts to the Geant4 tracks and export the result to an HDF5 file.
# +
track_pixel_map = np.full((unique_pix.shape[0],5),-1)
backtracked_id = np.full((adc_list.shape[0], adc_list.shape[1], track_pixel_map.shape[1]), -1)
detsim.get_track_pixel_map(track_pixel_map, unique_pix, neighboring_pixels)
detsim.backtrack_adcs(selected_tracks, adc_list, adc_ticks_list, track_pixel_map, event_id_map, backtracked_id)
# -
importlib.reload(fee)
pc = fee.export_to_hdf5(adc_list, adc_ticks_list, unique_pix, backtracked_id, "test.h5")
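The backtracking step associates each digitized hit with the tracks that deposited charge on its pixel; conceptually it is just an index lookup through the pixel-to-track map. This toy NumPy sketch is not `detsim.backtrack_adcs` itself, and the shapes and `-1` padding convention are illustrative.

```python
import numpy as np

# Toy pixel -> track map: each of 4 pixels lists up to 2 contributing
# Geant4 track IDs, padded with -1 where fewer tracks contributed.
track_pixel_map = np.array([[0, -1],
                            [0,  1],
                            [1,  2],
                            [2, -1]])
# One digitized hit flag per pixel (1 = the pixel self-triggered).
hit = np.array([1, 0, 1, 1])

# For every pixel that triggered, copy over its contributing track IDs;
# pixels without a hit keep the -1 "no track" padding.
backtracked_id = np.full_like(track_pixel_map, -1)
backtracked_id[hit == 1] = track_pixel_map[hit == 1]
print(backtracked_id.tolist())
# → [[0, -1], [-1, -1], [1, 2], [2, -1]]
```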
# examples/Pixel induced current.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
#import clean data
path = 'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/module_5_auto.csv'
df = pd.read_csv(path)
# -
df.to_csv('module_5_auto.csv')
df = df._get_numeric_data()
df.head()
# %%capture
# ! pip install ipywidgets
from IPython.display import display
from IPython.html import widgets
from IPython.display import display
from ipywidgets import interact,interactive,fixed,interact_manual
# +
# checking above cell
from IPython.display import display
from IPython.html import widgets
from IPython.display import display
from ipywidgets import interact, interactive, fixed, interact_manual
# -
def DistributionPlot(RedFunction, BlueFunction, RedName, BlueName, Title):
width = 12
height = 10
plt.figure(figsize=(width,height))
ax1 = sns.distplot(RedFunction, hist = False, color="r", label=RedName)
ax2 = sns.distplot(BlueFunction, hist = False, color = "b", label = BlueName, ax = ax1)
plt.title(Title)
plt.xlabel('Price(in dollars)')
plt.ylabel('Proportion of cars')
plt.show()
plt.close()
def PollyPlot(xtrain, xtest, ytrain, ytest, lr, poly_transform):
width = 12
height = 10
plt.figure(figsize = (width, height))
#training data
#testing data
# lr : linear regression object
# poly_transform : polynomial transformation object
xmax = max([xtrain.values.max(), xtest.values.max()])
xmin = min([xtrain.values.min(), xtest.values.min()])
x = np.arange(xmin, xmax, 0.1)
plt.plot(xtrain, ytrain, 'ro', label = 'Training data')
plt.plot(xtest, ytest, 'go', label ='Test data')
plt.plot(x, lr.predict(poly_transform.fit_transform(x.reshape(-1,1))), label = 'Predicted Function')
plt.ylim([-10000,60000])
plt.ylabel('Price')
plt.legend()
# +
# Training and testing
# +
# we place the target data in a separate data set
y_data = df['price']
# -
x_data = df.drop('price',axis=1)
# +
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x_data, y_data, test_size = 0.15, random_state = 1)
print("Number of test samples: ", x_test.shape[0])
print("Number of training samples: ", x_train.shape[0])
# +
#checking same from class
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.15, random_state=1)
print("number of test samples :", x_test.shape[0])
print("number of training samples:",x_train.shape[0])
# +
#Question: Use the function "train_test_split" to split up the data set such that 40% of the data samples will be utilized for
# testing, set the parameter "random_state" equal to zero. The output of the function should be the following: "x_train_1"
# , "x_test_1", "y_train_1" and "y_test_1".
x_train_1, x_test_1, y_train_1, y_test_1 = train_test_split(x_data, y_data, test_size=0.4, random_state=0)
print("number of test samples: ", x_test_1.shape[0])
print("number of training samples: ", x_train_1.shape[0])
# -
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(x_train[['horsepower']], y_train)
lr.score(x_test[['horsepower']], y_test)
lr.score(x_train[['horsepower']], y_train)
# +
# Question : Find the R^2 on the test data using 90% of the data for training
x_train1, x_test1, y_train1, y_test1 = train_test_split(x_data, y_data, test_size = 0.1, random_state = 0)
lr.score(x_train1[['horsepower']], y_train1)
# -
lr.score(x_test1[['horsepower']],y_test1)
from sklearn.model_selection import cross_val_score
rcross = cross_val_score(lr, x_data[['horsepower']], y_data, cv=4)
rcross
print("The mean of the folds is:", rcross.mean(), "and the standard deviation is:", rcross.std())
-1 * cross_val_score(lr, x_data[['horsepower']], y_data, cv = 4, scoring = 'neg_mean_squared_error')
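`cross_val_score(..., cv=4)` above is equivalent to fitting on three folds and scoring on the held-out fourth, repeated four times. A minimal manual version with `KFold` makes that explicit; the synthetic data here is illustrative and stands in for the auto dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

# Synthetic regression data: y is roughly 3*x with small noise.
rng = np.random.RandomState(0)
X = rng.rand(40, 1)
y = 3 * X.ravel() + rng.normal(scale=0.1, size=40)

scores = []
for train_idx, test_idx in KFold(n_splits=4).split(X):
    # Fit on three folds, score (R^2) on the held-out fold.
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))

print(len(scores), np.mean(scores))  # 4 fold scores and their mean
```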
# +
# Question : Calculate the average R^2 using two folds, find the average R^2 for the second fold utilizing the horsepower as a feature :
r2 = cross_val_score(lr, x_data[['horsepower']], y_data, cv= 2)
# -
r2
r2[1]
from sklearn.model_selection import cross_val_predict
yhat = cross_val_predict(lr, x_data[['horsepower']], y_data, cv = 4)
yhat[0:5]
lrm = LinearRegression()
lrm.fit(x_train[['horsepower','curb-weight', 'engine-size', 'highway-mpg']], y_train)
yhat_train = lrm.predict(x_train[['horsepower','curb-weight','engine-size','highway-mpg']])
yhat_train[0:5]
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
Title = 'Distribution Plot of Predicted Value Using Training Data vs Training Data Distribution'
DistributionPlot(y_train, yhat_train, "Actual Values (Train)", "Predicted Values (Train)", Title)
# .ipynb_checkpoints/practice-model-evaluation-and-refinement-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Create a web application
#
# In our last exercise, we created the application to run in Jupyter Notebook.
# To provide the same fidelity to end-users we need to turn this into a web application.
#
# For this we need:
#
# * A layout for the web page: the sensor plot, the current alerts as a table, the model drift plot
# * the previously developed functions to get the report data and to calculate model drift
#
# We will use a framework called Plotly Dash (https://plotly.com/dash/), which makes it easy to create web applications for data analysis and machine learning.
#
# ## Tasks
#
# 1. For each step below, study the code and run it
# 2. Check that the output matches your expectation
# 3. Restart the machine simulator with a different configuration, observe how the application output changes
# +
# imports
# %load_ext autoreload
# %autoreload
import dash
import dash_core_components as dcc
import dash_html_components as html
import plotly.express as px
import pandas as pd
import jupyter
from dashserve import JupyterDash
from dash.dependencies import Output, Input
import dash_table
from util import load_model, read_data, fix
# -
# # Create the basic layout
# +
def create_app():
fig_sensor = px.scatter()
fig_drift = px.scatter()
app = JupyterDash(__name__)
app.layout = html.Div(children=[
html.H1(children='Predictive Maintenance App'),
dcc.Tabs([
dcc.Tab(label='Machine Status', children=[
dcc.Graph(id='sensor-graph',
figure=fig_sensor)
]),
dcc.Tab(label='Alerts', children=[
dash_table.DataTable(id='alerts-table',
columns=[dict(name='dt', id='dt'),
dict(name='value', id='value')]),
]),
dcc.Tab(label='Model drift', children=[
dcc.Graph(id='drift-graph',
figure=fig_drift),
]),
]),
dcc.Interval(id='interval-component',
interval=1*1000, # in milliseconds
n_intervals=0)
])
return app
if __name__ == '__main__':
app = create_app()
app.run_server(debug=True)
# -
# # Add sensor data
# + tags=[]
def get_report_data(model, alerts):
# read the data from the machine API
df = read_data(100)
df['alert'] = False
df['time'] = df.index
# use the model to predict outliers
y_hat = model.predict(fix(df['value']))
# mark all outliers and record to alerts
df['alert'] = y_hat == -1
all_alerts = df[df['alert']]
for i, row in all_alerts.iterrows():
alerts.update({row.name: row['value']})
return df, alerts
def add_sensor_plot(app):
@app.callback(
Output("sensor-graph", "figure"),
Input("interval-component", "n_intervals")
)
def update_sensor_plot(n_intervals):
df, _ = get_report_data(model, alerts)
fig = px.scatter(df, 'time', 'value', color='alert', range_y=(0, 1))
return fig
app = create_app()
alerts = {}
model = load_model('models/mymodel')
add_sensor_plot(app)
if __name__ == '__main__':
app.run_server(debug=True)
# -
# # Add alert table
# +
def add_alerts_table(app):
@app.callback(
Output("alerts-table", "data"),
Input("interval-component", "n_intervals")
)
def update_alerts_table(n_intervals):
_, all_alerts = get_report_data(model, alerts)
df = pd.DataFrame({'dt': all_alerts.keys(), 'value': all_alerts.values()})
return df.to_dict(orient='records')
app = create_app()
add_sensor_plot(app)
add_alerts_table(app)
if __name__ == '__main__':
app.run_server(debug=True)
# -
# # Add model drift
# +
def calculate_expected_distribution(model, df):
y_hat = model.predict(fix(df['value']))
df['alert'] = y_hat == -1
return df['alert'].value_counts(normalize=True)
def calculate_model_drift(df, expected):
actual = df['alert'].value_counts(normalize=True)
df = pd.DataFrame({
'actual': actual,
'expected': expected
})
return df
def add_drift_plot(app):
train_data = pd.read_csv('datasets/traindata.csv')
model = load_model('models/mymodel')
expected = calculate_expected_distribution(model, train_data)
@app.callback(
Output("drift-graph", "figure"),
Input("interval-component", "n_intervals")
)
def update_drift_plot(n_intervals):
df, _ = get_report_data(model, alerts)
df_drift = calculate_model_drift(df, expected)
fig = px.bar(df_drift, y=['actual', 'expected'], barmode='group')
return fig
app = create_app()
add_sensor_plot(app)
add_alerts_table(app)
add_drift_plot(app)
if __name__ == '__main__':
app.run_server(debug=True)
# baseline/predmaint/webapp.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from underground_garage import shows
import time
for url in shows.showsinarchive():
time.sleep(.5)
try:
shows.showinfo(url)
except Exception as e:
print url
print e
# Test Shows.ipynb