# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Interacting with a Car Object
# In this notebook, you've been given some of the starting code for creating and interacting with a car object.
#
# Your tasks are to:
# 1. Become familiar with this code.
# - Know how to create a car object, and how to move and turn that car.
# 2. Constantly visualize.
# - To make sure your code is working as expected, frequently call `display_world()` to see the result!
# 3. **Make the car move in a 4x4 square path.**
# - If you understand the move and turn functions, you should be able to tell a car to move in a square path. This task is a **TODO** at the end of this notebook.
#
# Feel free to change the values of initial variables and add functions as you see fit!
#
# And remember, to run a cell in the notebook, press `Shift+Enter`.
# +
import numpy as np
import car
# %matplotlib inline
# -
# ### Define the initial variables
# +
# Create a 2D world of 0's
height = 4
width = 6
world = np.zeros((height, width))
# Define the initial car state
initial_position = [0, 0] # [y, x] (top-left corner)
velocity = [0, 1] # [vy, vx] (moving to the right)
# -
# ### Create a car object
# +
# Create a car object with these initial params
carla = car.Car(initial_position, velocity, world)
print('Carla\'s initial state is: ' + str(carla.state))
# -
# ### Move and track state
# +
# Move in the direction of the initial velocity
carla.move()
# Track the change in state
print('Carla\'s state is: ' + str(carla.state))
# Display the world
carla.display_world()
# -
# ## TODO: Move in a square path
#
# Using the `move()` and `turn_left()` functions, make carla traverse a 4x4 square path.
#
# The output should look like:
# <img src="files/4x4_path.png" style="width: 30%;">
# +
## TODO: Make carla traverse a 4x4 square path
## Display the result
initial_position = [0, 0] # [y, x] (top-left corner)
velocity = [0, 1] # [vy, vx] (moving to the right)
carla = car.Car(initial_position, velocity, world)
# Move right along the top edge
for _ in range(3):
    carla.move()
# Turn to face down by setting the velocity directly
velocity = [1, 0]
carla.state[1] = velocity
# Move down the right edge
for _ in range(3):
    carla.move()
# Turn to face left
velocity = [0, -1]
carla.state[1] = velocity
# Move left along the bottom edge
for _ in range(3):
    carla.move()
# Turn to face up
velocity = [-1, 0]
carla.state[1] = velocity
# Move up the left edge
for _ in range(3):
    carla.move()
# Track the change in state
print('Carla\'s state is: ' + str(carla.state))
# Display the world
carla.display_world()
# -
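# A standalone sketch of the same square path in plain Python (no `car` module
# needed), assuming the `[y, x]` convention above: on a grid whose y-axis points
# down, a clockwise (right) turn maps the velocity `[vy, vx]` to `[vx, -vy]`.

```python
def turn_right(velocity):
    # Rotate the [vy, vx] velocity 90 degrees clockwise (y-axis points down)
    vy, vx = velocity
    return [vx, -vy]

position = [0, 0]
velocity = [0, 1]          # start in the top-left corner, moving right
path = [list(position)]
for side in range(4):
    for _ in range(3):     # three moves per side of the 4x4 square
        position = [position[0] + velocity[0], position[1] + velocity[1]]
        path.append(list(position))
    velocity = turn_right(velocity)

# After four sides the car is back at the start, facing right again
```

The same loop structure works with `carla.move()` and `carla.turn_left()`; three left turns are equivalent to one right turn.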
| 4_5_State_and_Motion/1. Interacting with a Car Object.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Preparing data and publishing with Quilt
# ## Get and unzip primary data table
#
# We're taking data on the interstate movement of commodities, gathered by the U.S. Census Bureau and published at https://www.census.gov/econ/cfs/pums.html. The relevant parts are in one large zipped CSV, plus the tabs of a modest-sized .xlsx. We'll begin by grabbing the files.
import requests
resp = requests.get('https://www.census.gov/econ/cfs/2012/cfs_2012_pumf_csv.zip')
with open('cfs_2012_pumf_csv.zip', 'wb') as outfile:
outfile.write(resp.content)
import zipfile
with zipfile.ZipFile('cfs_2012_pumf_csv.zip', 'r') as zip_ref:
    zip_ref.extractall()
# ## Get and unzip data dictionary tables
url = 'https://www.census.gov/econ/cfs/2012/cfs_2012_pum_file_users_guide_App_A%20(Jun%202015).xlsx'
resp = requests.get(url)
with open('data_dictionary.xlsx', 'wb') as outfile:
outfile.write(resp.content)
# ## Generate quilt package
#
# The CSV was straightforward, but most of the spreadsheet tabs require some work to get into tidy table form. We start by doing as much as we can in `build.yml`, experimenting with Pandas' [read_excel](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html) to find the right arguments for `kwargs`.
# !cat build.yml
# !quilt build catherinedevlin/us_census_commodity_flow_2012 build.yml
# ## Manual corrections
#
# Some fixes can't be done through the `kwargs` loading parameters. We need to make the changes in Pandas, then save them back to the Quilt package.
from quilt.data.catherinedevlin import us_census_commodity_flow_2012 as cf
field_descriptions = cf.field_descriptions()
field_descriptions
# ### drop nan rows
field_descriptions = field_descriptions[field_descriptions['Field'] != 'nan']
field_descriptions
# ### Write fixed field_descriptions into quilt package
#
# [docs](https://docs.quiltdata.com/edit-a-package.html)
cf._set(['field_descriptions'], field_descriptions)
# ### Fix two-line headers
cfs_areas = cf.cfs_areas()
cfs_areas.head()
cfs_areas.iloc[0]
first_vals = dict(zip(cfs_areas.dtypes.index, cfs_areas.iloc[0]))
first_vals
first_vals.pop('Description', None) # this one already correct
corrected_headers = {k: '%s %s' % (k, v) for (k, v) in first_vals.items()}
corrected_headers
cfs_areas = cfs_areas.rename(columns=corrected_headers)
cfs_areas.head()
cfs_areas = cfs_areas.drop([0])  # note: chaining .head() here would silently truncate the table to five rows
cfs_areas
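# The two-line-header fix above, on a toy frame (the column names and values
# here are made up for illustration):

```python
import pandas as pd

# Toy frame whose real header is split between the column names and row 0
toy = pd.DataFrame({'Code': ['(numeric)', '1', '2'],
                    'Description': ['Alpha', 'Beta', 'Gamma']})
first_vals = dict(zip(toy.columns, toy.iloc[0]))
first_vals.pop('Description', None)           # this column is already correct
merged = {k: '%s %s' % (k, v) for k, v in first_vals.items()}
toy = toy.rename(columns=merged).drop([0])    # drop the leftover header row
```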
cf._set(['cfs_areas'], cfs_areas)
# ### Change 'nan' into real NaN
sctg_codes = cf.sctg_codes()
sctg_codes.head()
import numpy as np
sctg_codes = sctg_codes.replace('nan', np.nan)
sctg_codes.head()
cf._set(['sctg_codes'], sctg_codes)
# ### Write manual changes to Quilt package
import quilt
quilt.build('catherinedevlin/us_census_commodity_flow_2012', cf)
| us_census_commodity_flow_2012.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Histogramming, grouping, and binning
#
# ## Overview
#
# Histogramming (see [sc.histogram](../../generated/scipp.histogram.rst#scipp.histogram)), grouping (using [sc.groupby](../groupby.ipynb)), and binning (see [Binned data](binned-data.ipynb)) all serve similar but slightly different purposes.
# Picking the optimal one of the three for a particular application may yield more natural code and better performance.
# Let us start with an example.
# Consider a table of scattered measurements:
import numpy as np
import scipp as sc
N = 5000
values = 10*np.random.rand(N)
data = sc.DataArray(
data=sc.Variable(dims=['position'], unit=sc.units.counts, values=values, variances=values),
coords={
'x':sc.Variable(dims=['position'], unit=sc.units.m, values=np.random.rand(N)),
'y':sc.Variable(dims=['position'], unit=sc.units.m, values=np.random.rand(N))
})
data.values *= 1.0/np.exp(5.0*data.coords['x'].values)
sc.table(data['position', :5])
# We may now be interested in the total intensity (counts) as a function of `'x'`.
# There are three ways to do this:
xbins = sc.Variable(dims=['x'], unit=sc.units.m, values=np.linspace(0,1,num=40))
ds = sc.Dataset()
ds['histogram'] = sc.histogram(data, xbins)
ds['groupby'] = sc.groupby(data, group='x', bins=xbins).sum('position')
ds['bin'] = sc.bin(data, edges=[xbins]).bins.sum()
sc.plot(ds)
# In the above plot we can only see a single line, since the three solutions yield exactly the same result (neglecting floating-point rounding errors):
#
# - `histogram` sorts data points into 'x' bins, summing immediately.
# - `groupby` groups by 'x' and then sums (on-the-fly) all data points falling in the same 'x' bin.
# - `bin` sorts data points into 'x' bins.
# Summing all rows in a bin yields the same result as grouping and summing directly.
#
# So in this case we get equivalent results, but the application areas differ, as described in more detail in the following sections.
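# The equivalence can be illustrated with plain NumPy (an analogy only, not
# scipp itself): summing weights that fall into each bin gives the same result
# whether we histogram directly or assign bin indices and then sum per group.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1000)
w = rng.random(1000)                       # weights play the role of counts
edges = np.linspace(0, 1, 11)

# 'histogram': weighted histogram sums directly into bins
hist, _ = np.histogram(x, bins=edges, weights=w)

# 'group then sum': assign each point a bin index, then sum per group
idx = np.digitize(x, edges[1:-1])          # interior edges -> indices 0..9
grouped = np.bincount(idx, weights=w, minlength=10)
```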
#
# ## Histogramming
#
# `histogram` directly sums the data and is efficient.
# Limitations are:
#
# - Can currently only histogram along a single dimension.
# - Can currently only apply the "sum" to accumulate into a bin.
# Support for more operations, such as "mean", is being considered for the future.
#
# While histogramming is only supported along a single dimension, we *can* histogram binned data (since [binning](#Binning) preserves the `'y'` coord), to create 2-D (or N-D) histograms:
binned = sc.bin(data, edges=[xbins])
ybins = sc.Variable(dims=['y'], unit=sc.units.m, values=np.linspace(0,1,num=30))
hist = sc.histogram(binned, ybins)
sc.plot(hist)
hist
# Another capability of `histogram` is to histogram a dimension that has previously been binned with a different or higher resolution, i.e. different bin edges.
# Compare to the plot of the initial example:
binned = sc.bin(data, edges=[xbins])
xbins_fine = sc.Variable(dims=['x'], unit=sc.units.m, values=np.linspace(0,1,num=100))
sc.plot(sc.histogram(binned, xbins_fine))
# ## Grouping
#
# `groupby` is more flexible in terms of the operations that can be applied and may be the go-to solution when a quick one-liner is required.
# Limitations are:
#
# - Can only group along a single dimension.
# - Works best for small to medium-sized data, or if data is already mostly sorted along the grouping dimension.
# Slow if millions of small input slices contribute to each group.
#
# `groupby` can also operate on binned data, combining bin contents by concatenation:
binned = sc.bin(data, edges=[xbins])
binned.coords['param'] = sc.Variable(dims=['x'], values=(np.random.random(39)*4).astype(np.int32))
grouped = sc.groupby(binned, 'param').bins.concatenate('x')
grouped
# Each output bin is a combination of multiple input bins:
grouped.values[0]
# ## Binning
#
# `bin` actually reorders the data and metadata such that all data contributing to a bin is in a contiguous block.
# Binning along multiple dimensions is supported.
# Of the three options it is the only solution that supports *modifying* data in the grouped/binned layout.
# A variety of operations on such binned data is available.
# Limitations are:
#
# - Requires copying and reordering the input data and can thus become expensive.
#
# In the above example the `'y'` information is dropped by `histogram` and `groupby`, but `bin` preserves it:
binned = sc.bin(data, edges=[xbins])
binned.values[0]
# If we omit the call to `bins.sum` in the original example, we can subsequently apply another [histogramming](#Histogramming) or binning operation to the data:
binned = sc.bin(binned, edges=[ybins])
binned
# As in the 1-D example above, summing the bins is equivalent to histogramming binned data:
sc.plot(binned.bins.sum())
| docs/user-guide/binned-data/histogramming-grouping-and-binning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import matplotlib as mlp
import matplotlib.pyplot as plt
# %matplotlib inline
import warnings
warnings.filterwarnings('ignore')
# +
# Read dataset
# -
df = pd.read_csv('USA_Housing.csv')
# +
# Let's look at our dataset information
# -
df.head()
df.info()
# +
# Wow! No null values
# -
df.shape
df.duplicated().sum()
df.isnull().sum()
df.describe()
# +
# Wow! I don't see anything wrong with this dataset
# +
# Let's visualize our dataset
# -
sns.pairplot(df)
df.corr()
# +
# Looks like House Price has a positive correlation with all the other features,
# especially Avg. Area Income and Avg. Area House Age
# -
plt.figure(figsize=(12,10))
sns.heatmap(df.corr())
# +
# Okay, our next step is to prepare the data for our ML model
# +
# Feature Scaling
# We can see that Avg. Area Income, Area Population, and Price are measured on
# different scales, so we need to apply feature scaling.
from sklearn import preprocessing
std_scale = preprocessing.StandardScaler().fit(df[['Avg. Area Income', 'Area Population', 'Price']])
df[['Avg. Area Income', 'Area Population', 'Price']] = std_scale.transform(df[['Avg. Area Income', 'Area Population', 'Price']])
# -
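# Under the hood, `StandardScaler` just subtracts the column mean and divides by
# the population standard deviation (ddof=0); a minimal NumPy sketch of the same
# transform on a made-up column:

```python
import numpy as np

col = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
# Standardize: subtract the mean, divide by the std (ddof=0, as StandardScaler uses)
scaled = (col - col.mean()) / col.std()

# The result has zero mean and unit variance (up to floating-point error)
```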
df.head()
# Good! I think we should drop the Address column since we don't need it
df.drop(labels= 'Address', axis = 1, inplace= True)
df.head()
# +
# Now let's split our dataset into train and test data
# -
from sklearn.model_selection import train_test_split
x = df.iloc[:,:-1]
y = df.iloc[:, -1:]
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.20, random_state=42)
# +
# Time to run our ML
# -
from sklearn.linear_model import LinearRegression
linearModel = LinearRegression()
linearModel.fit(X_train, y_train)
linearModel.intercept_
linearModel.coef_
# +
# Let's create a table for better readability
# -
coef = pd.DataFrame(linearModel.coef_).transpose()
coef.index = x.columns
coef.rename({0:'Coef'}, axis = 1)
# +
# Okay, let's make some predictions
# -
predictions = linearModel.predict(X_test)
plt.figure(figsize=(12,8))
plt.scatter(y_test, predictions)
# +
# I think it looks okay, but let's do some more checks
# -
from sklearn.metrics import r2_score, mean_squared_error
r2_score(y_test, predictions)
# +
# Our model looks very good: R² is close to 1
# -
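# R² can also be checked by hand: it is 1 minus the ratio of the residual sum of
# squares to the total sum of squares (the numbers below are made up for
# illustration, not taken from this model):

```python
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.9]

mean_y = sum(y_true) / len(y_true)
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual SS
ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total SS
r2 = 1 - ss_res / ss_tot
```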
mean_squared_error(y_test, predictions)
# +
# Nice! For mean squared error, the closer to 0 the better our model is
# +
# I want to use different library to run our linear Regression and show p-value
# -
import statsmodels.api as sm
X_stats = sm.add_constant(X_train)
sm_model = sm.OLS(y_train, X_stats)
reg = sm_model.fit()
print(reg.summary())
# +
# So based on our train/test split results, the R-squared value, and the
# p-values, we can say our predictive model is really good:
# - Mean squared error (MSE) is very small (0.08)
# - R-squared is very close to 1 (0.918)
# - The p-values are very small (<0.05)
# -
| US Housing ML/workbook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python Crash Course Exercises
# ## Exercises
#
# Answer the questions or complete the tasks outlined in bold below; use the specific method described if applicable.
# **What is 7 to the power of 4?**
# + jupyter={"outputs_hidden": false}
# -
# **Split this string:**
#
# s = "Hi there Sam!"
#
# **into a list.**
# + jupyter={"outputs_hidden": false}
# -
# **Given the variables:**
#
# planet = "Earth"
# diameter = 12742
#
# **Use .format() to print the following string:**
#
# The diameter of Earth is 12742 kilometers.
planet = "Earth"
diameter = 12742
# + jupyter={"outputs_hidden": false}
# -
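# One possible solution (a sketch):

```python
planet = "Earth"
diameter = 12742
# .format() fills the placeholders in order
sentence = 'The diameter of {} is {} kilometers.'.format(planet, diameter)
print(sentence)
```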
# **Given this nested list, use indexing to grab the word "hello"**
lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]
# + jupyter={"outputs_hidden": false}
# -
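# One possible solution (a sketch) -- walk inward one index at a time:

```python
lst = [1, 2, [3, 4], [5, [100, 200, ['hello']], 23, 11], 1, 7]
# lst[3] -> [5, [100, 200, ['hello']], 23, 11]
# lst[3][1] -> [100, 200, ['hello']]
# lst[3][1][2] -> ['hello']
word = lst[3][1][2][0]
```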
# **Given this nested dictionary, grab the word "hello". Be prepared, this will be annoying/tricky.**
# + jupyter={"outputs_hidden": false}
d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}
# + jupyter={"outputs_hidden": false}
# -
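# One possible solution (a sketch) -- alternate between dict keys and list
# indices:

```python
d = {'k1': [1, 2, 3, {'tricky': ['oh', 'man', 'inception',
                                 {'target': [1, 2, 3, 'hello']}]}]}
word = d['k1'][3]['tricky'][3]['target'][3]
```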
# **What is the main difference between a tuple and a list?**
# **Create a function that grabs the email website domain from a string in the form:**
#
# user<EMAIL>
#
# **So for example, passing "<EMAIL>" would return: domain.com**
# + jupyter={"outputs_hidden": false}
# + jupyter={"outputs_hidden": false}
domainGet('<EMAIL>')
# -
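# One possible `domainGet` implementation (a sketch):

```python
def domainGet(email):
    # Everything after the '@' is the domain
    return email.split('@')[-1]
```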
# **Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like punctuation being attached to the word dog, but do account for capitalization.**
# + jupyter={"outputs_hidden": false}
# + jupyter={"outputs_hidden": false}
findDog('Where is my dog or my cat?')
# -
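# One possible `findDog` implementation (a sketch):

```python
def findDog(st):
    # Lower-case first so capitalization does not matter
    return 'dog' in st.lower().split()
```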
# **Create a function that counts the number of times the word "dog" occurs in a string. Again ignore edge cases.**
# + jupyter={"outputs_hidden": false}
# + jupyter={"outputs_hidden": false}
countDog('Dog! This dog runs faster than the other dog dude, dog!')
# -
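# One possible `countDog` implementation (a sketch; punctuation handling is
# deliberately ignored, as the exercise allows):

```python
def countDog(st):
    # Count exact 'dog' words after lower-casing
    return st.lower().split().count('dog')
```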
# **Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example:**
#
# seq = ['soup','dog','salad','cat','great']
#
# **should be filtered down to:**
#
# ['soup','salad']
seq = ['soup','dog','salad','cat','great', 'fruit','yummy','shallow','smooth']
# + jupyter={"outputs_hidden": false}
# -
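# One possible solution (a sketch):

```python
seq = ['soup', 'dog', 'salad', 'cat', 'great', 'fruit', 'yummy', 'shallow', 'smooth']
# filter() keeps the elements for which the lambda returns True
s_words = list(filter(lambda word: word.startswith('s'), seq))
```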
# ### Final Problem
# **You are driving a little too fast, and a police officer stops you. Write a function to return one of three possible results: "No Ticket", "Small Ticket", or "Big Ticket". If your speed is 60 or less, the result is "No Ticket". If speed is between 61 and 80 inclusive, the result is "Small Ticket". If speed is 81 or more, the result is "Big Ticket". Unless it is your birthday (encoded as a boolean value in the parameters of the function) -- on your birthday, your speed can be 5 higher in all cases.**
# + jupyter={"outputs_hidden": false}
# + jupyter={"outputs_hidden": false}
# + jupyter={"outputs_hidden": false}
# -
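# One possible solution (a sketch; the function name `caught_speeding` is my
# choice, the exercise doesn't fix one):

```python
def caught_speeding(speed, is_birthday):
    # On your birthday the thresholds shift up by 5
    allowance = 5 if is_birthday else 0
    if speed <= 60 + allowance:
        return 'No Ticket'
    elif speed <= 80 + allowance:
        return 'Small Ticket'
    else:
        return 'Big Ticket'
```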
# # Great job!
| Tutorial-0_Python-Introduction/python-basics/Python Crash Course Exercises.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # My python code is too slow? What can I do?
import time
import numpy as np
import matplotlib.pyplot as plt
from skimage import feature
from skimage.data import astronaut
from scipy.ndimage import distance_transform_edt
from skimage import filters
# %matplotlib inline
# ## Vectorization in plain python
#
# First, let's look at vectorization.
# Vectorization in Python / NumPy means that significant parts of the code are executed in the native implementation
# (C for CPython). It can be exploited through language-specific constructs like list comprehensions in plain Python, and through appropriate use of whole-array operations (`[]` indexing and array arithmetic) in NumPy.
# +
# multiply 1 million numbers by 2 and append to list, naive implementation
N = int(1e6)
t = time.time()
x = []
for i in range(N):
x.append(i*2)
t = time.time() - t
print('for-loop takes:')
print(t, 's')
# -
# same as above, but using a list comprehension
t = time.time()
x = [i*2 for i in range(N)]
t = time.time() - t
print('list comprehension takes:')
print(t, 's')
# ## Vectorization in numpy
# same as for loop above, but with numpy array
t = time.time()
x = np.zeros(N, dtype='uint64')
for i in range(N):
x[i] = i*2
t = time.time() - t
print('numpy: for loop takes')
print(t, 's')
# same as above but vectorized
t = time.time()
x = 2 * np.arange(N, dtype='uint64')
t = time.time() - t
print('numpy: vectorization takes')
print(t, 's')
# +
# TODO more complex numpy example
# -
# ## Beyond vectorization
#
# Vectorization is great! We can write python code and get (nearly) C speed. Unfortunately, it's not always possible and has other drawbacks:
# - vectorizing complex functions can be hard
# - or even impossible if a lot of `if` `else` statements are involved.
# - for plain python, vectorization does not lift the GIL
#
# Let's turn to a simple example, connected components of a binary image, and pretend we don't know about `scipy.ndimage.label`.
# load example data from skimage
data = astronaut()
plt.imshow(data)
# make edge map using mean canny edge detector response of the three color channels
edges = np.array([feature.canny(data[..., i] / 255., sigma=3)
for i in range(data.shape[-1])])
edges = np.mean(edges, axis=0)
plt.imshow(edges)
# compute and smooth distances to edges to get a better input map
distances = distance_transform_edt(edges < .25)
distances = filters.gaussian(distances, 2)
plt.imshow(distances)
# compute the binary image we want to use as input to connected components
binary_image = distances > 2
plt.imshow(binary_image)
# Hooray, we have our input image. What do we do now? Remember, we don't know about `scipy.ndimage.label`.
# So we will need to implement our own connected_components function in python.
# Have a look at `ccpy/connected_components.py` for the implementation.
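# For reference, a minimal BFS flood-fill sketch of what such a function might
# look like (4-connectivity, pure Python on lists-of-lists; the actual `ccpy`
# implementation may differ):

```python
from collections import deque

def label_components(binary):
    """Label 4-connected truthy regions of a 2D list-of-lists."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and labels[sy][sx] == 0:
                # New component: flood-fill it with a fresh label via BFS
                current += 1
                labels[sy][sx] = current
                queue = deque([(sy, sx)])
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and labels[ny][nx] == 0:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels

image = [[1, 1, 0, 0],
         [0, 1, 0, 1],
         [0, 0, 0, 1]]
labels = label_components(image)
```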
from ccpy import connected_components as py_components
t = time.time()
cc_py = py_components(binary_image)
t = time.time() - t
print(t, 's')
plt.imshow(cc_py)
# This looks like the result we would expect, but it takes awfully long to compute the components!
# What can we do to speed this up?
# - Option 1: [numba](http://numba.pydata.org/): just in time compiler for numpy
# - Option 2: [cython](https://cython.org/): write python / c (c++) hybrid code that gets compiled to c (c++)
# - Option 3: write your own c or c++ library and wrap it to python
# ## Numba
#
# Numba is a just in time compiler for python that works well with numeric (numpy-based) code. It is very easy to use
# via the decorator `numba.jit`.
# +
from numba import jit
# naive example from above
# nopython=True lets numba optimize aggressively, but it does not work in all
# cases (e.g. if python objects are allocated); without it, numba falls back
# to 'object mode', which handles more cases but is not as fast
@jit(nopython=True)
def go_fast():
x = np.zeros(N, dtype='uint64')
for i in range(N):
x[i] = i*2
# same as for loop above, but with numpy array
t = time.time()
go_fast()
t = time.time() - t
print('numba: for loop takes')
print(t, 's')
# -
# let's try to naively add numba to our connected components code ...
from ccnu import connected_components as nu_components
t = time.time()
cc_nu = nu_components(binary_image)
t = time.time() - t
print(t, 's')
plt.imshow(cc_nu)
# ## Cython
# +
# TODO if time allows
# -
# ## Custom C / C++
#
# There are multiple ways to expose C / C++ code to python. Here, we will use an approach for modern C++, using [pybind11](https://github.com/pybind/pybind11) to build the python bindings and [xtensor](https://github.com/QuantStack/xtensor) / [xtensor-python](https://github.com/QuantStack/xtensor-python) for multi dimensional arrays in C++ and
# numpy buffers. See `ccxt/src/main.cpp`. For this example, I used a nice [cookiecutter set-up](https://github.com/QuantStack/xtensor-python-cookiecutter).
from ccxt import connected_components as cpplabel
t = time.time()
cc_cpp = cpplabel(binary_image)
t = time.time() - t
print(t, 's')
plt.imshow(cc_cpp)
# just for fun lets see how we do compared to scipy
from scipy.ndimage import label
t = time.time()
cc_scipy, _ = label(binary_image)
t = time.time() - t
print(t, 's')
plt.imshow(cc_scipy)
# ## Summary
#
# There are different ways to speed up python if it's necessary.
# Which one is most appropriate depends on your application.
# - vectorize your code: easy but not always applicable
# - numba: relatively easy, but you still need to be aware of its limitations
# - cython: python / C hybrid code that is potentially easier to write (and compile) than pure C
# - C / C++ + python wrapper: fast, flexible and can be wrapped to other languages. BUT you need to know C or C++. Compilation can be a huge pain.
#
# ## What about deep learning?
#
# Pytorch has a [just-in-time compiler](https://pytorch.org/docs/stable/jit.html) similar to the numba option. It also offers a [C++ frontend](https://pytorch.org/cppdocs/frontend.html) that could potentially be used to implement performance critical parts of the code and wrap them to python, although it's more tailored to embedded devices.
| fastpy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="y-sdyEpGKPZJ" executionInfo={"status": "ok", "timestamp": 1611001405967, "user_tz": 360, "elapsed": 22409, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04290079762179933573"}} outputId="5a89b69d-0857-4621-8dd3-c5aa45bf2948"
from google.colab import drive
drive.mount('/content/drive')
# + id="kV84RqbhKq0j" executionInfo={"status": "ok", "timestamp": 1611001450153, "user_tz": 360, "elapsed": 5423, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04290079762179933573"}}
# Supporting libraries
import os
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import keras
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from imgaug import augmenters as iaa
import cv2
import pandas as pd
import ntpath
import random
from numpy import random
# Use the tf.keras versions of the model, layer, and optimizer classes
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Convolution2D, MaxPooling2D, Dropout, Flatten, Dense
from tensorflow.keras.optimizers import Adam
# + colab={"base_uri": "https://localhost:8080/"} id="J2QKqbvWK1sk" executionInfo={"status": "ok", "timestamp": 1611001504739, "user_tz": 360, "elapsed": 771, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04290079762179933573"}} outputId="18e1f9ea-f0bb-4641-b85f-4a97760784fc"
# Check the data directory
# !ls -lth "/content/drive/My Drive/Colab Notebooks/SDC_13/datos"
# + id="iADolTXeLGDQ" executionInfo={"status": "ok", "timestamp": 1611003512341, "user_tz": 360, "elapsed": 616, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04290079762179933573"}}
# Define a variable with the data directory path and one with the column names for the dataframe we'll be using
datadir = "/content/drive/My Drive/Colab Notebooks/SDC_13/datos"
columns = ['center', 'left', 'right', 'steering', 'throttle', 'reverse', 'speed']
# + colab={"base_uri": "https://localhost:8080/", "height": 224} id="V-IJ0ZoiStfu" executionInfo={"status": "ok", "timestamp": 1611003547759, "user_tz": 360, "elapsed": 1033, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04290079762179933573"}} outputId="c85781e4-73e0-4173-a1c6-a08de901924e"
# Define our DataFrame
data = pd.read_csv(os.path.join(datadir, 'driving_log.csv'), names = columns)
pd.set_option('display.max_colwidth', None)
data.head()
# + id="t0l1rdd2S50d" executionInfo={"status": "ok", "timestamp": 1611003590596, "user_tz": 360, "elapsed": 643, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04290079762179933573"}} outputId="8e09d1db-ef39-4b4b-b85d-86c6e1611b52" colab={"base_uri": "https://localhost:8080/"}
# Strip the local directory part from the image paths
def path_leaf(path):
head, tail = ntpath.split(path)
return tail
data['center'] = data['center'].apply(path_leaf)
data['left'] = data['left'].apply(path_leaf)
data['right'] = data['right'].apply(path_leaf)
data.head()
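# For reference, `ntpath.split` separates a path into (directory, final
# component), so `path_leaf` keeps only the file name; a quick standalone check
# on a made-up path:

```python
import ntpath

def path_leaf(path):
    # tail is everything after the last path separator
    head, tail = ntpath.split(path)
    return tail

filename = path_leaf('C:\\Users\\erick\\IMG\\center_2020_01_01.jpg')
```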
| SDC_13/Part1_Erick_Casanova.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
data = pd.read_csv('combined_data_20211217.csv')
topics = pd.read_csv('features/partner_data/GDI/GDI_Domain_Topics_Traffic_December2021.csv')
topics
data.head()
sub_data = data.loc[:,['rating','domain','biden', 'pseudoscience',
'misogyny', 'climatedenial','antilatinx',
'5g', 'whitesupremacy', 'antiblack',
'aliens','antisemitic', 'antilgbt',
'antivaxx', 'voterfraud', 'coronavirus',
'qanon', 'votinglaws', 'antiimmigrant',
'antimuslim', 'antiasian','bigtech']]
sub_data_selected = sub_data.loc[sub_data['domain'].isin(list(topics['domain_name'].values)),:]
sub_data_selected
link_topic_count = sub_data_selected.drop('domain',axis=1).groupby('rating').sum()
link_topic_row_count = sub_data_selected.drop('domain',axis=1).groupby('rating').count()
link_topic_row_count
ratio_raw = link_topic_count/link_topic_row_count
ratio_raw = ratio_raw.reset_index()
ratio = link_topic_count/link_topic_row_count
ratio = ratio.transpose()
ratio = ratio.reset_index()
ratio
ratio_raw
link_topic_count
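# Note: dividing a groupby sum by the matching groupby count is just the
# per-group mean; a toy check of the pattern used above (made-up values):

```python
import pandas as pd

toy = pd.DataFrame({'rating': ['T', 'T', 'N'],
                    'topic': [1, 0, 1]})
# sum / count per group ...
ratio_by_hand = toy.groupby('rating').sum() / toy.groupby('rating').count()
# ... equals the per-group mean
ratio_mean = toy.groupby('rating').mean()
```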
plt.bar(link_topic_count.columns, link_topic_count.iloc[0,:])
sns.set(rc = {'figure.figsize':(14,8)}, font_scale = 1.3)
grp_order = ratio.sort_values(['N'], ascending=False).reset_index(drop=True)['index']
ax = sns.barplot(x='index', y='N',
data=ratio, order=grp_order, color = 'tan')
ax.set_xticklabels(ax.get_xticklabels(),rotation = 50)
ax.set(ylim=(0, 1))
ax.set_xlabel('Topics', fontsize = 25)
ax.set_ylabel('Frequency', fontsize = 25)
ax.set_title('Topic Occurrence Frequency Ratio, Non-truthful, Link-Domain Level', fontsize = 25)
ax
sns.set(rc = {'figure.figsize':(14,8)}, font_scale = 1.25)
grp_order = ratio.sort_values(['R'], ascending=False).reset_index(drop=True)['index']
ax = sns.barplot(x='index', y='R',
data=ratio, order=grp_order, color = 'tan')
ax.set_xticklabels(ax.get_xticklabels(),rotation = 50)
ax.set(ylim=(0, 1))
ax.set_xlabel('Topics', fontsize = 25)
ax.set_ylabel('Frequency', fontsize = 25)
ax.set_title('Topic Occurrence Frequency Ratio, Repeat Offender, Link-Domain Level', fontsize = 25)
ax
sns.set(rc = {'figure.figsize':(14,8)}, font_scale = 1.25)
grp_order = ratio.sort_values(['T'], ascending=False).reset_index(drop=True)['index']
ax = sns.barplot(x='index', y='T',
data=ratio, order=grp_order, color = 'tan')
ax.set_xticklabels(ax.get_xticklabels(),rotation = 50)
ax.set(ylim=(0, 1))
ax.set_xlabel('Topics', fontsize = 25)
ax.set_ylabel('Frequency', fontsize = 25)
ax.set_title('Topic Occurrence Frequency Ratio, Truthful, Link-Domain Level', fontsize = 25)
ax
topics = ['biden', 'pseudoscience', 'misogyny', 'climatedenial','antilatinx',
'5g', 'whitesupremacy', 'antiblack', 'aliens','antisemitic', 'antilgbt',
'antivaxx', 'voterfraud', 'coronavirus','qanon', 'votinglaws', 'antiimmigrant',
'antimuslim', 'antiasian','bigtech']
# constrained_layout manages the spacing, so no subplots_adjust is needed
fig, axes = plt.subplots(4, 5, figsize=(20, 15), sharey=True, constrained_layout=True)
fig.suptitle('Topic Occurrence Frequency Ratio, Link-Domain Level', fontsize=40)
for i in range(20):
t = topics[i]
sns.barplot(ax=axes[i//5, i%5], x='rating', y=t,
data=ratio_raw, order = ['T', 'N', 'R'], palette = 'tab20c_r')
axes[i//5, i%5].set_title(t,fontsize=30)
axes[i//5, i%5].set_xlabel('')
axes[i//5, i%5].set_ylabel('')
topic_data = pd.read_csv('topicDF.csv').iloc[:,1:]
topic_data
sub_data_domain = data.loc[:,['rating','domain']]
topic_data_rating = topic_data.merge(sub_data_domain, on='domain',how='inner')\
.drop_duplicates().reset_index().iloc[:,1:]
topic_data_rating
domain_rating_topic_count = topic_data_rating.drop('domain',axis=1).groupby('rating').count()
domain_rating_topic_sum = topic_data_rating.drop('domain',axis=1).groupby('rating').sum()
domain_rating_topic_ratio = domain_rating_topic_sum/domain_rating_topic_count
domain_rating_topic_ratio = domain_rating_topic_ratio.reset_index()
domain_rating_topic_ratio
domain_rating_topic_ratio_T = domain_rating_topic_sum/domain_rating_topic_count
domain_rating_topic_ratio_T = domain_rating_topic_ratio_T.transpose()
domain_rating_topic_ratio_T = domain_rating_topic_ratio_T.reset_index()
domain_rating_topic_ratio_T
# constrained_layout manages the spacing, so no subplots_adjust is needed
fig, axes = plt.subplots(4, 5, figsize=(20, 15), sharey=True, constrained_layout=True)
fig.suptitle('Topic Occurrence Frequency Ratio, Domain Level', fontsize=40)
for i in range(20):
t = topics[i]
sns.barplot(ax=axes[i//5, i%5], x='rating', y=t,
data=domain_rating_topic_ratio, order = ['T', 'N', 'R'], palette = 'tab20c_r')
axes[i//5, i%5].set_title(t,fontsize=30)
axes[i//5, i%5].set_xlabel('')
axes[i//5, i%5].set_ylabel('')
sns.set(rc = {'figure.figsize':(14,8)}, font_scale = 1.3)
#grp_order = domain_rating_topic_ratio_T.sort_values(['N'], ascending=False).reset_index(drop=True)['index']
grp_order = ratio.sort_values(['N'], ascending=False).reset_index(drop=True)['index']
ax = sns.barplot(x='index', y='N',
data=domain_rating_topic_ratio_T, order=grp_order, color = 'tan')
ax.set_xticklabels(ax.get_xticklabels(),rotation = 50)
ax.set(ylim=(0, 1))
ax.set_xlabel('Topics', fontsize = 25)
ax.set_ylabel('Frequency', fontsize = 25)
ax.set_title('Topic Occurrence Frequency Ratio, Non-truthful, Domain Level', fontsize = 25)
ax
# +
sns.set(rc = {'figure.figsize':(14,8)}, font_scale = 1.3)
grp_order = domain_rating_topic_ratio_T.sort_values(['R'], ascending=False).reset_index(drop=True)['index']
#grp_order = ratio.sort_values(['R'], ascending=False).reset_index(drop=True)['index']
ax = sns.barplot(x='index', y='R',
data=domain_rating_topic_ratio_T, order=grp_order, color = 'tan')
ax.set_xticklabels(ax.get_xticklabels(),rotation = 50)
ax.set(ylim=(0, 1))
ax.set_xlabel('Topics', fontsize = 25)
ax.set_ylabel('Frequency', fontsize = 25)
ax.set_title('Topic Occurrence Frequency Ratio, Repeat Offender, Domain Level', fontsize = 25)
ax
# +
sns.set(rc = {'figure.figsize':(14,8)}, font_scale = 1.3)
#grp_order = domain_rating_topic_ratio_T.sort_values(['T'], ascending=False).reset_index(drop=True)['index']
grp_order = ratio.sort_values(['T'], ascending=False).reset_index(drop=True)['index']
ax = sns.barplot(x='index', y='T',
data=domain_rating_topic_ratio_T, order=grp_order, color = 'tan')
ax.set_xticklabels(ax.get_xticklabels(),rotation = 50)
ax.set(ylim=(0, 1))
ax.set_xlabel('Topics', fontsize = 25)
ax.set_ylabel('Frequency', fontsize = 25)
ax.set_title('Topic Occurrence Frequency Ratio, Truthful, Domain Level', fontsize = 25)
ax
# -
| Code/Visualization Dashboard/EDA_topics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/uzmakhan7/uk/blob/master/tweepy.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="-If0REzbesxC" colab_type="code" colab={}
import tweepy
# + id="vWhrxep3fxCF" colab_type="code" colab={}
consumerKey = ''        # fill in your Twitter API consumer key
consumerSecret = ''     # fill in your Twitter API consumer secret
accessToken = ''        # fill in your access token
accessTokenSecret = ''  # fill in your access token secret
# + id="UFbC-4TXfZAS" colab_type="code" outputId="5882e443-5fd6-4170-9054-ec9a17664889" colab={"base_uri": "https://localhost:8080/", "height": 129}
auth = tweepy.OAuthHandler(consumer_key=consumerKey, consumer_secret=consumerSecret)
# + id="aMWkn8WwfipE" colab_type="code" outputId="a4f4ae1f-8e29-4560-c5e0-afbfb5345a94" colab={"base_uri": "https://localhost:8080/", "height": 163}
auth.set_access_token(accessToken,accessTokenSecret)
# + id="0c5J89wzgYiY" colab_type="code" outputId="0d1a416f-9d55-4cf7-bb02-edd62f586160" colab={"base_uri": "https://localhost:8080/", "height": 163}
api=tweepy.API(auth)
# + id="kVtR_ZwThHNr" colab_type="code" outputId="bc75bf2a-948d-4646-b8ba-cf535a045097" colab={"base_uri": "https://localhost:8080/", "height": 51}
search = input("Enter a word to search for: ")
numberwords = int(input("Number of tweets: "))
# + id="5RJpmMiLhmsy" colab_type="code" outputId="5a585c48-be4e-4fd3-d75b-bdb626b7537e" colab={"base_uri": "https://localhost:8080/", "height": 180}
result = tweepy.Cursor(api.search, q=search, lang="en").items(numberwords)
result
# + id="f5etKdjXj9Si" colab_type="code" outputId="6ee4c307-1301-4d2c-ebd8-cbb6546e6d43" colab={"base_uri": "https://localhost:8080/", "height": 180}
for i in result:
print(i.text)
# + id="TlndOrZrkfYd" colab_type="code" outputId="d002e78b-f6aa-4ffb-aa55-bb14110e30a2" colab={"base_uri": "https://localhost:8080/", "height": 299}
from textblob import TextBlob
# + id="wrz5PKgYkxZo" colab_type="code" colab={}
| tweepy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Implementing a Route Planner
# In this project you will use A\* search to implement a "Google-maps" style route planning algorithm.
# +
# Run this cell first!
from helpers import Map, load_map, show_map
from student_code import shortest_path
# %load_ext autoreload
# %autoreload 2
# -
# ### Map Basics
map_10 = load_map('map-10.pickle')
show_map(map_10)
# The map above (run the code cell if you don't see it) shows a disconnected network of 10 intersections. The two intersections on the left are connected to each other, but they are not connected to the rest of the road network. In the graph above, an edge between two nodes (intersections) represents a literal straight road, not just an abstract connection between two points.
#
# These `Map` objects have two properties you will want to use to implement A\* search: `intersections` and `roads`
#
# **Intersections**
#
# The `intersections` are represented as a dictionary.
#
# In this example, there are 10 intersections, each identified by an x,y coordinate. The coordinates are listed below. You can hover over each dot in the map above to see the intersection number.
map_10.intersections
# **Roads**
#
# The `roads` property is a list where, if `i` is an intersection, `roads[i]` contains a list of the intersections that intersection `i` connects to.
# this shows that intersection 0 connects to intersections 7, 6, and 5
map_10.roads[0]
# This shows the full connectivity of the map
map_10.roads
# map_40 is a bigger map than map_10
map_40 = load_map('map-40.pickle')
show_map(map_40)
# ### Advanced Visualizations
#
# The map above shows a network of roads which spans 40 different intersections (labeled 0 through 39).
#
# The `show_map` function which generated this map also takes a few optional parameters which might be useful for visualizing the output of the search algorithm you will write.
#
# * `start` - The "start" node for the search algorithm.
# * `goal` - The "goal" node.
# * `path` - An array of integers which corresponds to a valid sequence of intersection visits on the map.
# run this code, note the effect of including the optional
# parameters in the function call.
show_map(map_40, start=5, goal=34, path=[5,16,37,12,34])
# ### Writing your algorithm
# You should open the file `student_code.py` in another tab and work on your algorithm there. Do that by selecting `File > Open` and then selecting the appropriate file.
#
# The algorithm you write will be responsible for generating a `path` like the one passed into `show_map` above. In fact, when called with the same map, start, and goal as above, your algorithm should produce the path `[5, 16, 37, 12, 34]`
#
# ```bash
# > shortest_path(map_40, 5, 34)
# [5, 16, 37, 12, 34]
# ```
path = shortest_path(map_40, 5, 34)
if path == [5, 16, 37, 12, 34]:
    print("Great! Your code works for these inputs!")
else:
    print("Something is off; your code produced the following:")
print(path)
# ### Testing your Code
# If the code below produces no errors, your algorithm is behaving correctly. You are almost ready to submit! Before you submit, go through the following submission checklist:
#
# **Submission Checklist**
#
# 1. Does my code pass all tests?
# 2. Does my code implement `A*` search and not some other search algorithm?
# 3. Do I use an **admissible heuristic** to direct search efforts towards the goal?
# 4. Do I use data structures which avoid unnecessarily slow lookups?
#
# When you can answer "yes" to all of these questions, submit by pressing the Submit button in the lower right!
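# For checklist item 3: since intersections here are `(x, y)` coordinates and roads are literal straight segments, the straight-line (Euclidean) distance never overestimates the remaining road distance, so it is admissible. A minimal standalone sketch, not tied to the `Map` API:

```python
import math

def euclidean(p, q):
    """Straight-line distance between two (x, y) points. Admissible for this
    map because no sequence of roads can be shorter than the straight line."""
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

# Heuristic estimate between two hypothetical intersections
print(euclidean((0.0, 0.0), (3.0, 4.0)))  # 5.0
```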
# +
from test import test
test(shortest_path)
# -
| 04-advanced-algorithms/xx-project/project_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.1 64-bit
# name: python38164bit689ce9532f7a4f68816de0f6cf09db80
# ---
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from numpy import arange
df = pd.read_csv(r'/home/kekeing/Desktop/code/DateMining/data/lianjia_processed.csv',sep=',')
df = df[['deal_totalPrice','gross_area']]
da = df.to_numpy()
labels_true = da[:,-1]
eps=arange(0.2,1,0.1)
eps
# +
def dbscan(eps,labels_true):
db = DBSCAN(eps=eps, min_samples=50).fit(da)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
n_noise_ = list(labels).count(-1)
print('Estimated number of clusters: %d' % n_clusters_)
print('Estimated number of noise points: %d' % n_noise_)
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, labels))
print("Completeness: %0.3f" % metrics.completeness_score(labels_true, labels))
print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, labels))
print("Adjusted Rand Index: %0.3f"
% metrics.adjusted_rand_score(labels_true, labels))
print("Adjusted Mutual Information: %0.3f"
% metrics.adjusted_mutual_info_score(labels_true, labels))
print("Silhouette Coefficient: %0.3f"
          % metrics.silhouette_score(da, labels))
print('\n')
# -
for i in np.nditer(eps):
    dbscan(float(i), labels_true)
| notebook/dbscan.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ## Exercise 2 - Running a power flow calculation and adding scenario data for electric vehicles to the grid
#
# **The goals for this exercise are:**
#
# - load the grid model from exercise 1
# - run a power flow calculation
# - display transformer, line and bus results
# - determine maximum line loading and minimum bus voltage
# - create 65 loads with random power demands between 0 and 11 kW
# - each load represents an 11 kW charging point for electric vehicles
# - connect these loads to random buses to model a future scenario for the example grid
# - run a power flow calculation again and compare the results before and after connecting the charging points to the grid
#
# **Helpful resources for this exercise:**
#
# - https://github.com/e2nIEE/pandapower/blob/master/tutorials/minimal_example.ipynb
# - https://github.com/e2nIEE/pandapower/blob/develop/tutorials/create_simple.ipynb
# - https://github.com/e2nIEE/pandapower/blob/develop/tutorials/powerflow.ipynb
# ### Step 1 - load the grid model of exercise 1 from the json file
#
# hint: use pp.from_json(FILENAME.json). You need the import the pandapower module again.
# ### Step 2 - run a power flow calculation
# ### Step 3 - display the transformer results
# ### Step 4 - display the line results
# ### Step 5 - display the bus results
# ### Step 6 - display the maximum line loading
#
# hint: you can determine the maximum value of a column by running net.TABLE_NAME.COLUMN_NAME.max()
# ### Step 7 - display the minimum bus voltage
#
# hint: you can determine the minimum value of a column by running net.TABLE_NAME.COLUMN_NAME.min()
# ### Step 8 - create 65 loads with random power demands between 0 and 11 kW and connect them to random buses
#
# hint: you just need to fill in the "create load" command in the for loop.
# just run this cell to create the list of 65 random power demand values
import numpy as np
np.random.seed(0)
p_mw_values = list(np.random.randint(0, 12, 65)/1000)
print(p_mw_values)
for p_mw in p_mw_values:
bus = np.random.randint(2,7,1)[0] # chooses a bus index between 2 and 6
load = #<replace this by a create_load command. Set the parameters bus=bus p_mw=p_mw and name="charging_point">
net.load
# ### Step 9 - run a power flow calculation again, to get the new results for the grid with charging points
# ### Step 10 - determine the transformer loading, maximum line loading and minimum bus voltage and compare them to the results without charging points
# ### Step 11 - save the grid model as a json file with a new name
#
# hint: use the method pp.to_json(net, "FILENAME.json").
| exercises/02_power_flow_and_scenario_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import keras
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Conv2D,Conv2DTranspose,Flatten,MaxPool2D
from keras.utils import np_utils
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import random
# +
def seed_everything(SEED):
np.random.seed(SEED)
tf.set_random_seed(SEED)
random.seed(SEED)
seed_everything(224)
# -
x_train = np.load('/Users/marcowang/CS_6000_Deep_Learning/HW1__NN/mnist.train.npy')
y_train = np.load('/Users/marcowang/CS_6000_Deep_Learning/HW1__NN/mnist.trainlabel.npy')
x_test = np.load('/Users/marcowang/CS_6000_Deep_Learning/HW1__NN/mnist.test.npy')
x_train.shape
# +
ae = Sequential()
ae.add(Conv2D(input_shape = np.expand_dims(x_train,1).shape[1:],
filters = 16,
data_format = "channels_first",
kernel_size=(3,3),strides = 1,
padding = "same",activation="relu"))
ae.add(MaxPool2D(pool_size=(2, 2),data_format = "channels_first"))
ae.add(Conv2D(
filters = 4,
data_format = "channels_first",
kernel_size=(3,3),strides = 1,
padding = "same",activation="relu"))
ae.add(MaxPool2D(pool_size=(2, 2),data_format = "channels_first"))
ae.add(Conv2DTranspose(
filters = 16,
data_format = "channels_first",
kernel_size=(2,2),strides = 2,activation="relu"))
ae.add(Conv2DTranspose(
filters = 1,
data_format = "channels_first",
kernel_size=(2,2),strides = 2,activation="sigmoid"))
ae.compile(loss='mse', optimizer='adam',metrics=['accuracy'])
ae.summary()
# -
x_train = x_train.reshape(-1,1,28,28)
ae.fit(x_train,x_train,epochs=20)
# +
img = x_test[:10]
output = ae.predict(img.reshape(-1,1,28,28))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25,4))
for in_out, row in zip([img, output], axes):
for i, ax in zip(in_out, row):
ax.imshow(i.reshape(28,28), cmap='gray')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# -
def noiseImage(image):
'''
image: numpy image data
return noised image numpy data
'''
noise = np.random.normal(loc=0.5, scale=0.5, size=image.shape)
noisedimage = np.clip(image + noise,0.,1.)
return noisedimage
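# A quick standalone sanity check of the noising helper (redefined inline so the cell runs on its own): noised values stay clipped to [0, 1] and the image shape is preserved.

```python
import numpy as np

def noiseImage(image):
    # Additive Gaussian noise, clipped back into the valid [0, 1] range
    noise = np.random.normal(loc=0.5, scale=0.5, size=image.shape)
    return np.clip(image + noise, 0., 1.)

np.random.seed(0)
img = np.zeros((28, 28))          # stand-in for an MNIST digit
noisy = noiseImage(img)
assert noisy.shape == (28, 28)
assert noisy.min() >= 0.0 and noisy.max() <= 1.0
```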
| HW2_CNN/Autoencoder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from matplotlib import pyplot as plt
from skimage import data, io, segmentation, color
from skimage.future import graph
# -
# https://www.pexels.com/photo/daylight-forest-glossy-lake-443446/
# +
nature = io.imread('./images/pexels-nature.jpg')
plt.figure(figsize=(8, 8))
plt.imshow(nature)
# -
# <b>Segmentation using Simple Linear Iterative Clustering (SLIC) which uses k-means clustering</b>
# * <b>image</b> = input image
# * <b>compactness</b> = balances color-space proximity against image-space proximity; a higher value gives more weight to space proximity, making the segments more square and uniform
# * <b>n_segments</b> = number of segments to be created
labels_1 = segmentation.slic(nature,
compactness=35,
n_segments=500)
labels_1.shape
labels_1
# <b>color.label2rgb returns an RGB image where color-coded labels are painted over image.</b>
#
# * <b>labels1</b>= integer arrays of labels with the same shape as image
# * <b>image</b> = Image used as underlay for labels
# * <b>kind</b> = the kind of color image desired. 'avg' replaces each labeled segment with its average color (for a pastel-painting appearance); 'overlay' paints color-coded labels over the image
#
# http://scikit-image.org/docs/dev/api/skimage.color.html#skimage.color.label2rgb
# +
segmented_overlay = color.label2rgb(labels_1, nature, kind='overlay')
plt.figure(figsize=(8, 8))
plt.imshow(segmented_overlay)
# +
segmented_avg = color.label2rgb(labels_1, nature, kind='avg')
plt.figure(figsize=(8, 8))
plt.imshow(segmented_avg)
# -
# ### Region Adjacency Graph (RAG) is used to find adjacency of regions with the graph
#
# Find the weight between two regions using difference of mean color as their edge weight. Similar regions will have lower edge weight differences
g = graph.rag_mean_color(nature, labels_1)
# #### cut_threshold removes the edges below a specified threshold.
#
# Returns new labels array by combining regions whose nodes are separated by a weight less than the given threshold
# Change thresh to 15 and re-run these last 2 cells
labels_2 = graph.cut_threshold(labels_1, g, thresh = 35)
# +
segmented_rag = color.label2rgb(labels_2, nature, kind='avg')
plt.figure(figsize=(8, 8))
plt.imshow(segmented_rag)
# -
| backup/11.sklearn1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from glob import glob
import pandas as pd
for path in glob('../data/competition_data/*.csv'):
df = pd.read_csv(path)
print(path, df.shape)
# +
import numpy as np
from sklearn.metrics import mean_squared_log_error
def rmsle(y_true, y_pred):
return np.sqrt(mean_squared_log_error(y_true, y_pred))
# -
trainval = pd.read_csv('../data/competition_data/train_set.csv')
test = pd.read_csv('../data/competition_data/test_set.csv')
trainval.head(10)
test.head(10)
trainval['quote_date'] = pd.to_datetime(trainval['quote_date'], infer_datetime_format=True)
test['quote_date'] = pd.to_datetime(test['quote_date'], infer_datetime_format=True)
trainval['quote_date'].describe()
test['quote_date'].describe()
trainval_tube_assemblies = trainval['tube_assembly_id'].unique()
test_tube_assemblies = test['tube_assembly_id'].unique()
len(trainval_tube_assemblies), len(test_tube_assemblies)
set(trainval_tube_assemblies) & set(test_tube_assemblies)
from sklearn.model_selection import train_test_split
train_tube_assemblies, val_tube_assemblies = train_test_split(trainval_tube_assemblies, random_state=42)
len(train_tube_assemblies), len(val_tube_assemblies)
set(train_tube_assemblies) & set(val_tube_assemblies)
train = trainval[trainval.tube_assembly_id.isin(train_tube_assemblies)]
val = trainval[trainval.tube_assembly_id.isin(val_tube_assemblies)]
train.shape, val.shape, trainval.shape
len(train) + len(val) == len(trainval)
train.describe()
train.describe(exclude='number')
target = 'cost'
y_train = train[target]
y_val = val[target]
y_pred = np.full_like(y_val, fill_value=y_train.mean())
print('Validation RMSLE, Mean Baseline:', rmsle(y_val, y_pred))
from sklearn.metrics import r2_score
print('Validation R^2, Mean Baseline:', r2_score(y_val, y_pred))
train.cost.mean()
train.groupby('quantity').cost.mean()
features =['quantity']
X_train = train[features]
X_val = val[features]
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_estimators=100, random_state=42, n_jobs=-1)
model.fit(X_train, y_train)
r2 = model.score(X_val, y_val)
print('Validation R^2', r2)
y_pred = model.predict(X_val)
print(f'Validation RMSLE, Random Forest with {features}')
print(rmsle(y_val, y_pred))
y_train_log = np.log1p(y_train)
model.fit(X_train, y_train_log)
y_pred_log = model.predict(X_val)
y_pred = np.expm1(y_pred_log)
rmsle(y_val, y_pred)
from sklearn.metrics import mean_squared_error
def rmse(y_true, y_pred):
return np.sqrt(mean_squared_error(y_true, y_pred))
y_val_log = np.log1p(y_val)
rmse(y_val_log, y_pred_log)
# +
def wrangle(X):
X = X.copy()
X['quote_date'] = pd.to_datetime(X['quote_date'], infer_datetime_format=True)
X['quote_date_year'] = X['quote_date'].dt.year
X['quote_date_month'] = X['quote_date'].dt.month
X = X.drop(columns='quote_date')
X = X.drop(columns='tube_assembly_id')
return X
train_wrangled = wrangle(train)
val_wrangled = wrangle(val)
# -
features = train_wrangled.columns.drop(target)
print('Features:', features.tolist())
X_train = train_wrangled[features]
X_val = val_wrangled[features]
# +
import category_encoders as ce
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(
ce.OrdinalEncoder(),
RandomForestRegressor(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train_log)
y_pred_log = pipeline.predict(X_val)
rmse(y_val_log, y_pred_log)
# -
y_pred = np.expm1(y_pred_log)
rmsle(y_val, y_pred)
rf = pipeline.named_steps['randomforestregressor']
importances = pd.Series(rf.feature_importances_, X_train.columns)
importances.sort_values().plot.barh();
# +
import seaborn as sns
quantity_quartiles = pd.qcut(train_wrangled['quantity'], q=4)
sns.pointplot(x=quantity_quartiles, y=train_wrangled['cost']);
# -
for path in glob('../data/competition_data/*.csv'):
df = pd.read_csv(path)
shared_columns = set(df.columns) & set(train.columns)
if shared_columns:
print(path, df.shape)
print(df.columns.tolist(), '\n')
# +
def wrangle(X):
X = X.copy()
X['quote_date'] = pd.to_datetime(X['quote_date'], infer_datetime_format=True)
X['quote_date_year'] = X['quote_date'].dt.year
X['quote_date_month'] = X['quote_date'].dt.month
X = X.drop(columns='quote_date')
tube = pd.read_csv('../data/competition_data/tube.csv')
X = X.merge(tube, how='left')
X = X.drop(columns='tube_assembly_id')
return X
train_wrangled = wrangle(train)
val_wrangled = wrangle(val)
# -
train_wrangled.shape, val_wrangled.shape
# +
X_train = train_wrangled.drop(columns=target)
X_val = val_wrangled.drop(columns=target)
pipeline = make_pipeline(
ce.OrdinalEncoder(),
RandomForestRegressor(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train_log)
y_pred_log = pipeline.predict(X_val)
rmse(y_val_log, y_pred_log)
# -
test = pd.read_csv('../data/competition_data/test_set.csv')
test_wrangled = wrangle(test)
X_test = test_wrangled.drop(columns='id')
all(X_test.columns == X_train.columns)
# +
y_pred_log = pipeline.predict(X_test)
y_pred = np.expm1(y_pred_log)
sample_submission = pd.read_csv('../data/sample_submission.csv')
submission = sample_submission.copy()
submission['cost'] = y_pred
submission.to_csv('submission-01.csv', index=False)
# -
# ## Add 1 more file
def wrangle(X):
X = X.copy()
X['quote_date'] = pd.to_datetime(X['quote_date'], infer_datetime_format=True)
X['quote_date_year'] = X['quote_date'].dt.year
X['quote_date_month'] = X['quote_date'].dt.month
X = X.drop(columns='quote_date')
tube = pd.read_csv('../data/competition_data/tube.csv')
bill_of_materials = pd.read_csv('../data/competition_data/bill_of_materials.csv')
X = X.merge(tube, how='left')
X = X.merge(bill_of_materials, how='left')
columns_all = X.columns
for col in columns_all:
X[col] = X[col].fillna(0)
X = X.drop(columns='tube_assembly_id')
return X
train_wrangled = wrangle(train)
val_wrangled = wrangle(val)
# +
from sklearn.impute import SimpleImputer
X_train = train_wrangled.drop(columns=target)
X_val = val_wrangled.drop(columns=target)
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestRegressor(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train_log)
y_pred_log = pipeline.predict(X_val)
rmse(y_val_log, y_pred_log)
# +
from sklearn.impute import SimpleImputer
from xgboost import XGBRegressor
X_train = train_wrangled.drop(columns=target)
X_val = val_wrangled.drop(columns=target)
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
XGBRegressor(objective= "reg:linear", eta=0.017, min_child_weight=6, subsample=0.75, colsample_bytree=0.6,
scale_pos_weight=0.8, silent=1, max_depth=9, max_delta_step=2)
)
pipeline.fit(X_train, y_train_log)
y_pred_log = pipeline.predict(X_val)
rmse(y_val_log, y_pred_log)
# -
test = pd.read_csv('../data/competition_data/test_set.csv')
test_wrangled = wrangle(test)
X_test = test_wrangled.drop(columns='id')
all(X_test.columns == X_train.columns)
y_pred_log = pipeline.predict(X_test)
y_pred = np.expm1(y_pred_log)
sample_submission = pd.read_csv('../data/sample_submission.csv')
submission = sample_submission.copy()
submission['cost'] = y_pred
submission.to_csv('submission-03.csv', index=False)
# ## More Data Wrangling and Feature Engineering
SOURCE = '../data/competition_data/'
def wrangle(X):
X = X.copy()
# Engineer date features
X['quote_date'] = pd.to_datetime(X['quote_date'], infer_datetime_format=True)
X['quote_date_year'] = X['quote_date'].dt.year
X['quote_date_month'] = X['quote_date'].dt.month
X = X.drop(columns='quote_date')
# Merge tube data
tube = pd.read_csv(SOURCE + 'tube.csv')
X = X.merge(tube, how='left')
# Engineer features from bill_of_materials
materials = pd.read_csv(SOURCE + 'bill_of_materials.csv')
materials['components_total'] = (materials['quantity_1'].fillna(0) +
materials['quantity_2'].fillna(0) +
materials['quantity_3'].fillna(0) +
materials['quantity_4'].fillna(0) +
materials['quantity_5'].fillna(0) +
materials['quantity_6'].fillna(0) +
materials['quantity_7'].fillna(0) +
materials['quantity_8'].fillna(0))
materials['components_distinct'] = (materials['component_id_1'].notnull().astype(int) +
materials['component_id_2'].notnull().astype(int) +
materials['component_id_3'].notnull().astype(int) +
materials['component_id_4'].notnull().astype(int) +
materials['component_id_5'].notnull().astype(int) +
materials['component_id_6'].notnull().astype(int) +
materials['component_id_7'].notnull().astype(int) +
materials['component_id_8'].notnull().astype(int))
# Merge selected features from bill_of_materials
# Just use the first component_id, ignore the others for now!
features = ['tube_assembly_id', 'component_id_1', 'components_total', 'components_distinct']
X = X.merge(materials[features], how='left')
# Get component_type_id (has lower cardinality than component_id)
components = pd.read_csv(SOURCE + 'components.csv')
components = components.rename(columns={'component_id': 'component_id_1'})
features = ['component_id_1', 'component_type_id']
X = X.merge(components[features], how='left')
# Count the number of specs for the tube assembly
specs = pd.read_csv(SOURCE + 'specs.csv')
specs['specs_total'] = specs.drop(columns=['tube_assembly_id']).count(axis=1)
features = ['tube_assembly_id', 'specs_total', 'spec1']
X = X.merge(specs[features], how='left')
# Drop tube_assembly_id because our goal is to predict unknown assemblies
X = X.drop(columns='tube_assembly_id')
return X
train_wrangled = wrangle(train)
val_wrangled = wrangle(val)
# +
X_train = train_wrangled.drop(columns=target)
X_val = val_wrangled.drop(columns=target)
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
XGBRegressor(objective= "reg:linear", eta=0.017, min_child_weight=6, subsample=0.75, colsample_bytree=0.6,
scale_pos_weight=0.8, silent=1, max_depth=9, max_delta_step=2)
)
pipeline.fit(X_train, y_train_log)
y_pred_log = pipeline.predict(X_val)
rmse(y_val_log, y_pred_log)
# -
| module2-gradient-boosting/Unit 2 Sprint 8 Gradient Boosting Assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Predicting House Prices - A case study with SciKit Learn and Pandas
# ## Introduction to Pandas
# Pandas is a module in Python that is great for handling lots of data. We'll be relying on it today to help us sort, reshape, and clean up our data.
#
# > **From the Pandas documentation:**
# >
# > Here are just a few of the things that pandas does well:
# >
# > - Easy handling of **missing data** (represented as NaN) in floating point as well as non-floating point data
# - Size mutability: columns can be **inserted and deleted** from DataFrame and higher dimensional objects
# - Automatic and explicit **data alignment**: objects can be explicitly aligned to a set of labels, or the user can simply ignore the labels and let Series, DataFrame, etc. automatically align the data for you in computations
# - Powerful, flexible **group by** functionality to perform split-apply-combine operations on data sets, for both aggregating and transforming data
# - Make it **easy to convert** ragged, differently-indexed data in other Python and NumPy data structures into DataFrame objects
# - Intelligent **label-based slicing**, **fancy indexing**, and **subsetting** of large data sets
# - Intuitive **merging** and **joining** data sets
# - Flexible **reshaping** and **pivoting** of data sets
# - **Hierarchical labeling** of axes (possible to have multiple labels per tick)
# - **Robust IO tools** for loading data from flat files (CSV and delimited), Excel files, databases, and saving / loading data from the ultrafast HDF5 format
# - **Time series**-specific functionality: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging, etc.
#
# Let's get a sense of how it works with a test data set from a weather report.
# +
import pandas as pd # the pd is by convention
import numpy as np # as is the np
import matplotlib.pyplot as plt
import seaborn as sns
# To Plot matplotlib figures inline on the notebook
# %matplotlib inline
# -
# The building blocks of pandas are called Series and DataFrames. A Dataframe is essentially a table, like shown below:
# <img src='images/dataframe.png'>
# Each row of the dataframe will be one specific record, and each column will be some aspect of that record. That will make more sense when we look at an example. Series are individual rows or columns (essentially if we break apart that dataframe into a single set of numbers). Let's look at that in action.
#
# To begin with, we're going to read in some data from a CSV.
weather = pd.read_csv('data/weather.csv')
weather.head()
# Here is our table: an hourly weather record. We can see that the temperature was below freezing for each of the first four hours, that there was some fog, and that it was a little bit windy. We used `weather.head()` to show just the first few rows; otherwise, pandas would display a huge amount of data. We can look at the last few rows with `weather.tail()`.
weather.tail()
# Now let's use a Pandas built-in function to learn a little about our data.
weather.info()
# We can also look to see how much of our data makes sense by asking it to look at the numeric columns and give us some stats about them.
weather.describe()
weather.shape
# Great, looks like these are all behaving relatively as expected! That's lovely. Now let's learn how to grab some data from the DataFrame. Also, let's look at what a Series is. To start with, let's grab a column from our data.
weather['Temp (C)']
# This is a series! It has both the index (the left side) and the value. So we know which row it is and what the value is. That's pretty sweet. What can we do with a series? One really handy thing is getting the number of times a value shows up. Let's see that in action.
weather['Weather'].value_counts()
# Cool. Now how do we access rows? We have to play a little bit of pandas games to do so. We'll use `.iloc` to do the job. Let's demonstrate by grabbing the first (0th) row.
weather.iloc[0]
# We can get multiple rows by following Python's conventions like so:
weather.iloc[10:13]
# We also might want to select multiple columns. We can do that like this:
weather[['Temp (C)',"Dew Point Temp (C)"]].head()
# That should be enough to get us started with Pandas... now let's move on to our case study, and we'll learn more about Pandas on the way.
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## Building Regression Models with Scikit-Learn
#
# In this section, we will walk through how to build regression models in scikit-learn.
#
# We will load in a the Ames Housing Data, split into train and test sets, and build some models.
#
# Using the Ames Housing Data:
#
# <NAME>, Truman State University
# Journal of Statistics Education, Volume 19, Number 3 (2011), www.amstat.org/publications/jse/v19n3/decock.pdf
#
#
# + run_control={"frozen": false, "read_only": false}
datafile = "./data/Ames_Housing_Data.tsv"
# + run_control={"frozen": false, "read_only": false}
df=pd.read_csv(datafile, sep='\t')
# + run_control={"frozen": false, "read_only": false}
df.info()
# + [markdown] run_control={"frozen": false, "read_only": false}
# ### Data Dictionary
# A description of the variables can be found here:
#
# https://ww2.amstat.org/publications/jse/v19n3/decock/DataDocumentation.txt
#
#
# + [markdown] run_control={"frozen": false, "read_only": false}
# ### Data Cleaning
# From the above, and reading the documentation, here are a few things to note about this data set:
# - SalePrice is our target variable
# - The authors recommend removing the few houses that are >4000 SQFT (based on the 'Gr Liv Area' variable)
# - Many columns have missing data (based on the number of "non-null" entries in each column)
# - We have many predictor variables
# -
# **An aside about pandas**
#
# We can do filtering on the fly with pandas. Let's see what that means by looking at our weather data from above.
weather_test = weather[weather['Temp (C)'] == -1.8]
weather_test.head(10)
# We can ask pandas to only show us rows where some condition is true. In this case, we asked it to only show rows where the temperature was -1.8 degrees Celsius. We can do the same type of thing with any condition!
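# As a sketch (using a tiny made-up frame here, so the values are hypothetical rather than the real weather data), conditions can be combined with `&` (and) and `|` (or), with each condition wrapped in its own parentheses:

```python
import pandas as pd

# Tiny stand-in for the weather DataFrame (made-up values)
w = pd.DataFrame({'Temp (C)': [-1.8, -1.8, 0.5, 2.1],
                  'Weather': ['Fog', 'Clear', 'Fog', 'Rain']})

# Each condition gets its own parentheses, joined with & (and) or | (or)
cold_and_foggy = w[(w['Temp (C)'] < 0) & (w['Weather'] == 'Fog')]
print(len(cold_and_foggy))  # only the first row matches both conditions
```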
#
# Now back to the housing data!
# ### Challenge 1: Remove all houses that are greater than 4000 sqft with filtering ('Gr Liv Area')
# + [markdown] run_control={"frozen": false, "read_only": false}
# - How many data points did we remove from the data set?
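# One possible sketch of the filtering step, shown on a tiny made-up frame (`df_demo` and its values are hypothetical; the real notebook would filter `df` the same way):

```python
import pandas as pd

# Hypothetical mini version of the housing frame
df_demo = pd.DataFrame({'Gr Liv Area': [1500, 4500, 2000, 5600],
                        'SalePrice': [150000, 160000, 200000, 155000]})

before = len(df_demo)
df_demo = df_demo[df_demo['Gr Liv Area'] <= 4000]  # keep only houses <= 4000 sqft
removed = before - len(df_demo)
print(removed)  # two of the demo rows are dropped
```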
# + run_control={"frozen": false, "read_only": false}
## Next, let's restrict ourselves to just a few variables to get started
# + run_control={"frozen": false, "read_only": false}
smaller_df= df[['Lot Area','Overall Qual',
'Overall Cond', 'Year Built', 'Year Remod/Add',
'Gr Liv Area',
'Full Bath', 'Bedroom AbvGr',
'Fireplaces', 'Garage Cars','SalePrice']]
# + run_control={"frozen": false, "read_only": false}
## Let's have a look at these variables
smaller_df.describe()
# + run_control={"frozen": false, "read_only": false}
smaller_df.info()
# + run_control={"frozen": false, "read_only": false}
# There appears to be one NA in Garage Cars - fill with 0
smaller_df = smaller_df.fillna(0)
# + run_control={"frozen": false, "read_only": false}
smaller_df.info()
# + run_control={"frozen": false, "read_only": false}
## Let's do a pairplot with seaborn to get a sense of the variables in this data set
sns.pairplot(smaller_df)
# + [markdown] run_control={"frozen": false, "read_only": false}
# ### Comprehension question
# From the pairplot above:
#
# - Which variables seem to have the strongest correlations with SalePrice?
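# One way to answer this numerically (sketched on a tiny made-up frame, since `smaller_df` lives in the notebook) is to sort the Pearson correlations of every column with SalePrice:

```python
import pandas as pd

# Hypothetical mini frame standing in for smaller_df
demo = pd.DataFrame({'Gr Liv Area': [800, 1600, 2400, 3200],
                     'Overall Cond': [5, 7, 4, 6],
                     'SalePrice': [100000, 210000, 280000, 390000]})

# Correlation of every numeric column with SalePrice, strongest first
corrs = demo.corr()['SalePrice'].sort_values(ascending=False)
print(corrs)
```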
# + run_control={"frozen": false, "read_only": false}
# Let's make a X and y for our predictors and target, respectively
X=smaller_df[['Lot Area','Overall Qual',
'Overall Cond', 'Year Built', 'Year Remod/Add',
'Gr Liv Area',
'Full Bath', 'Bedroom AbvGr',
'Fireplaces', 'Garage Cars']]
y=smaller_df['SalePrice']
# + [markdown] run_control={"frozen": false, "read_only": false}
# ### Train - Test Splits
# + [markdown] run_control={"frozen": false, "read_only": false}
# Train-test splitting is a big part of the data science pipeline, because we're always trying to build models that perform well "in the wild." To evaluate our model's performance honestly, we need to test it on data that we didn't use when building the model. So we cut out a section of our data before doing any model-building and save it as an "evaluator" of how our model performs on data it has never seen before.
#
# <img src="images/train_test_split.png">
#
# In scikit-learn, we use `train_test_split` to do this, which lets us randomly sample the data instead of taking one big chunk.
# + run_control={"frozen": false, "read_only": false}
from sklearn.model_selection import train_test_split
# + run_control={"frozen": false, "read_only": false}
#Split the data 70-30 train/test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,random_state=42)
# + run_control={"frozen": false, "read_only": false}
X_train.shape, X_test.shape
# + run_control={"frozen": false, "read_only": false}
X_train.columns
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## Linear Regression
# In the first part of this notebook we will use linear regression. We will start with a simple one-variable linear regression and then proceed to more complicated models.
# + run_control={"frozen": false, "read_only": false}
from sklearn.linear_model import LinearRegression
# + run_control={"frozen": false, "read_only": false}
# First let us fit only on Living Area (sqft)
selected_columns_1 = ['Gr Liv Area']
# + [markdown] run_control={"frozen": false, "read_only": false}
# ### Sklearn Modeling
# The package scikit-learn has a particular structure to their predictive modeling functionality. Typically, a model is "defined" then it is "fit" (to a set of examples with their answers). Then the trained model can be used to predict on a set of (unlabeled) data points. We will walk through this process in the next few cells.
# + run_control={"frozen": false, "read_only": false}
## First we define a `default` LinearRegression model and fit it to the data (with just `Gr Liv Area` as a predictor
## and SalePrice as the target).
lr_model1 = LinearRegression()
lr_model1.fit(X_train[selected_columns_1],y_train)
# + run_control={"frozen": false, "read_only": false}
## Let us look at the (single) variable coefficient and the intercept
lr_model1.coef_, lr_model1.intercept_
# + [markdown] run_control={"frozen": false, "read_only": false}
# ### Comprehension Question
# - What would this simple model predict as the sales price of a 1000 sq ft home?
# - Does that seem reasonable? (Remember, these are house prices in Ames, Iowa between 2006 and 2010)
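# A sketch of how to check the first question (with toy numbers, since the real answer depends on the fitted coefficients): the prediction is just intercept + coefficient * 1000, which `model.predict` computes for us:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data standing in for the training set (hypothetical prices)
sqft = np.array([[800], [1200], [1600], [2000]])
price = np.array([95000, 135000, 175000, 215000])

m = LinearRegression().fit(sqft, price)

# Both lines give the same answer for a 1000 sqft home
manual = m.intercept_ + m.coef_[0] * 1000
via_predict = m.predict([[1000]])[0]
print(manual, via_predict)
```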
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## Plotting the Regression Line
# Let's use our knowledge of Matplotlib/Seaborn to make some plots of this data
# + run_control={"frozen": false, "read_only": false}
# TO DO: Make a scatterplot of the sales price (y-axis) vs the sq footage (x-axis) of the training data
# Then, plot the regression line we just computed over the data
plt.figure(figsize=(10,8))
plt.scatter(X_train['Gr Liv Area'],y_train,alpha=.1)
vec1 = np.linspace(0,4000,1000)
plt.plot(vec1, lr_model1.intercept_ + lr_model1.coef_[0]*vec1,'r')
plt.title("Housing Prices in Ames Iowa by Sq Ft (Training Set)")
plt.xlabel("Sq ft of home")
plt.ylabel("Price of home");
# + run_control={"frozen": false, "read_only": false}
# Let's make a similar plot for the test set
plt.figure(figsize=(10,8))
plt.scatter(X_test['Gr Liv Area'],y_test,alpha=.1)
vec1 = np.linspace(0,4000,1000)
plt.plot(vec1, lr_model1.intercept_ + lr_model1.coef_[0]*vec1,'r')
plt.title("Housing Prices in Ames Iowa by Sq Ft (Test Set)")
plt.xlabel("Sq ft of home")
plt.ylabel("Price of home");
# + run_control={"frozen": false, "read_only": false}
# Let's get predictions of the model on the test set
# Note the use of the `model.predict(feature_matrix)` syntax
test_set_pred1 = lr_model1.predict(X_test[selected_columns_1])
# + run_control={"frozen": false, "read_only": false}
## Let's plot the actual vs expected house price (along with the line x=y for reference)
plt.figure(figsize=(10,8))
plt.scatter(test_set_pred1,y_test,alpha=.1)
plt.plot(np.linspace(0,600000,1000),np.linspace(0,600000,1000))
plt.xlabel("Predicted")
plt.ylabel("Actual");
# + run_control={"frozen": false, "read_only": false}
# How good is our model on the test set?
# Mean Squared Error
def mean_square_error(true, pred):
return np.mean((pred - true)**2)
mean_square_error(y_test,test_set_pred1)
# + run_control={"frozen": false, "read_only": false}
# Root Mean Square Error
def root_mean_square_error(true,pred):
return np.sqrt(mean_square_error(true,pred))
root_mean_square_error(y_test,test_set_pred1)
# + run_control={"frozen": false, "read_only": false}
# Mean Absolute Deviation
def mean_absolute_deviation(true,pred):
return np.mean(np.abs(pred - true))
mean_absolute_deviation(y_test, test_set_pred1)
# + run_control={"frozen": false, "read_only": false}
# R^2
def R2_score(true,pred):
y_bar_test = np.mean(true)
SSE = np.sum((pred - true)**2)
SST = np.sum((true - y_bar_test)**2)
return 1.-SSE/SST
R2_score(y_test, test_set_pred1)
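# As an aside: scikit-learn ships ready-made versions of these metrics in `sklearn.metrics`. A quick sketch (on toy arrays) showing they match the hand-rolled definitions above:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

true = np.array([100., 200., 300.])
pred = np.array([110., 190., 310.])

mse = mean_squared_error(true, pred)   # mean((pred - true)**2)
rmse = np.sqrt(mse)                    # square root of MSE
mae = mean_absolute_error(true, pred)  # mean(|pred - true|), our "MAD"
r2 = r2_score(true, pred)              # 1 - SSE/SST
print(mse, rmse, mae, r2)
```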
# + run_control={"frozen": false, "read_only": false}
## Now let's use a couple of other variables
# + run_control={"frozen": false, "read_only": false}
selected_columns_2 = ['Lot Area', 'Overall Qual']
# + run_control={"frozen": false, "read_only": false}
lr_model2 = LinearRegression()
lr_model2.fit(X_train[selected_columns_2],y_train)
# + run_control={"frozen": false, "read_only": false}
lr_model2.coef_
# + run_control={"frozen": false, "read_only": false}
## This is a hack to show the variables next to their values
list(zip(selected_columns_2,lr_model2.coef_))
# + run_control={"frozen": false, "read_only": false}
test_set_pred2 = lr_model2.predict(X_test[selected_columns_2])
# + run_control={"frozen": false, "read_only": false}
plt.figure(figsize=(10,6))
plt.scatter(test_set_pred2,y_test,alpha=.2)
plt.plot(np.linspace(0,600000,1000),np.linspace(0,600000,1000));
plt.xlabel("Predicted")
plt.ylabel("Actual");
# + run_control={"frozen": false, "read_only": false}
root_mean_square_error(y_test,test_set_pred2)
# + run_control={"frozen": false, "read_only": false}
#MAD
mean_absolute_deviation(y_test,test_set_pred2)
# + run_control={"frozen": false, "read_only": false}
R2_score(y_test,test_set_pred2)
# + [markdown] run_control={"frozen": false, "read_only": false}
# ### Feature Engineering
# Since there seems to be some non-linearity, let's make a new variable that is 'Gr Liv Area' squared. This is called feature engineering, since we're "engineering" (making) a new feature out of our old features.
# + run_control={"frozen": false, "read_only": false}
X = X.copy()  # work on a copy to avoid pandas' SettingWithCopyWarning when adding a column to a slice
X['GLA2'] = X['Gr Liv Area']**2
X.columns
# + run_control={"frozen": false, "read_only": false}
## We need to recreate the train and test sets -- make sure you use the same random seed!
#Split the data 70-30 train/test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,random_state=42)
# + run_control={"frozen": false, "read_only": false}
selected_columns_3 = ['Lot Area', 'Overall Qual', 'GLA2']
# + run_control={"frozen": false, "read_only": false}
lr_model3 = LinearRegression()
lr_model3.fit(X_train[selected_columns_3],y_train)
# + run_control={"frozen": false, "read_only": false}
list(zip(X_train[selected_columns_3].columns,lr_model3.coef_))
# + run_control={"frozen": false, "read_only": false}
test_set_pred3 = lr_model3.predict(X_test[selected_columns_3])
# + run_control={"frozen": false, "read_only": false}
plt.figure(figsize=(10,6))
plt.scatter(test_set_pred3,y_test,alpha=.1)
plt.plot(np.linspace(0,600000,1000),np.linspace(0,600000,1000));
plt.xlabel("Predicted")
plt.ylabel("Actual");
# + run_control={"frozen": false, "read_only": false}
#RMSE
root_mean_square_error(y_test,test_set_pred3)
# + run_control={"frozen": false, "read_only": false}
#MAD
mean_absolute_deviation(y_test,test_set_pred3)
# + run_control={"frozen": false, "read_only": false}
#R-squared
R2_score(y_test,test_set_pred3)
# + run_control={"frozen": false, "read_only": false}
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## Exercise
#
# We're now going to split into groups. Each group should attempt to build the best model they can using the techniques shown above. Some recommendations:
#
# * Add some of the features we removed. But be careful, we haven't talked about how to handle categorical data, so your model won't work with categories.
# * Do some feature engineering. We played with GLA^2, but there are more variables you can try things with. You might also try multiplying some features together to see if there are "interaction" terms.
# * We've looked at the scikit-learn documentation, so you might also consider trying some different regression models - like `RandomForestRegressor`. Be careful though: you can't just plug some of the models into the exact same code. They don't all have coefficients, for instance...
#
# Go wild. After we finish up, each group will have a chance to describe what sort of work they tried and how their model performed!
# -
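# As a hedged starting point for the `RandomForestRegressor` suggestion above (on made-up data here, since your group will use the real `X_train`/`y_train`): note there is no `.coef_` on tree ensembles; they expose `feature_importances_` instead.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.RandomState(0)
X_demo = rng.uniform(500, 4000, size=(200, 1))           # fake sqft values
y_demo = 100 * X_demo[:, 0] + rng.normal(0, 5000, 200)   # fake prices

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_demo, y_demo)
pred = rf.predict(X_demo)

# No .coef_ here: inspect feature_importances_ and score instead
print(rf.feature_importances_, r2_score(y_demo, pred))
```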
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.layers.recurrent import LSTM
import csv
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# +
data = pd.read_csv(r'../local_data/naive_c2_q50_s4000_v0.csv')
data.shape
# -
n_in = 1
n_out = 1
n_hidden = 100
n_samples = 2297
n_timesteps = 400
model = Sequential()
model.add(LSTM(output_dim=n_hidden, input_dim=n_in, return_sequences=False))
model.add(Dense(output_dim=n_out, input_dim=n_hidden))
model.add(Activation("linear"))
model.compile(loss="mean_squared_error", optimizer="rmsprop")
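# Before calling `fit`, the inputs must be shaped `(samples, timesteps, features)` for the LSTM. A minimal numpy sketch of that reshaping (the sizes here are illustrative, not the `n_samples`/`n_timesteps` above):

```python
import numpy as np

# A flat 1-D series, e.g. one value per time step (hypothetical data)
series = np.arange(12, dtype=float)

# Reshape into (samples, timesteps, features) for an LSTM:
# here 3 samples of 4 timesteps with 1 feature each
n_timesteps, n_features = 4, 1
X = series.reshape(-1, n_timesteps, n_features)
print(X.shape)  # (3, 4, 1)
```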
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os,cv2
import numpy as np
from IPython.display import Image
from keras.preprocessing import image
from keras import optimizers
from keras import layers,models
from keras.applications.imagenet_utils import preprocess_input
import matplotlib.pyplot as plt
import seaborn as sns
from keras import regularizers
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping,ModelCheckpoint
from keras.applications.vgg16 import VGG16
from keras.applications.resnet50 import ResNet50
import warnings
warnings.filterwarnings('ignore')
# -
train_dir="train/"
test_dir="test/"
train=pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
print('no of training images ',train.shape[0])
print('no of test images ',test.shape[0])
train.head()
Image(os.path.join("train",train.iloc[0,0]),width=250,height=250)
datagen = ImageDataGenerator(rescale=1./255)
batch_size = 128
# +
train_generator=datagen.flow_from_dataframe(dataframe=train[:300],directory=train_dir,x_col='Image',
y_col='target',class_mode='categorical',batch_size=batch_size,
target_size=(224,224),color_mode='rgb')
validation_generator=datagen.flow_from_dataframe(dataframe=train[300:],directory=train_dir,x_col='Image',
y_col='target',class_mode='categorical',batch_size=50,
target_size=(224,224),color_mode='rgb')
# -
model=models.Sequential()
model.add(layers.Conv2D(32,(3,3),activation='relu',input_shape=(224,224,3)))
model.add(layers.MaxPool2D((2,2)))
model.add(layers.Conv2D(64,(3,3),activation='relu'))
model.add(layers.MaxPool2D((2,2)))
model.add(layers.Conv2D(128,(3,3),activation='relu'))
model.add(layers.MaxPool2D((2,2)))
model.add(layers.Conv2D(128,(3,3),activation='relu'))
model.add(layers.MaxPool2D((2,2)))
model.add(layers.Flatten())
model.add(layers.Dense(512,activation='relu'))
model.add(layers.Dense(8,activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',optimizer=optimizers.Adam(),metrics=['accuracy'])
early = EarlyStopping(monitor='val_loss',patience=1)
epochs=10
history=model.fit_generator(train_generator,steps_per_epoch=100,epochs=2,
validation_data=validation_generator,
validation_steps=50,callbacks=[early])
# +
acc=history.history['loss'] ## training loss for each epoch
epochs_ = range(len(acc))
plt.plot(epochs_,acc,label='training loss')
plt.xlabel('No of epochs')
plt.ylabel('loss')
acc_val=history.history['val_loss'] ## validation loss for each epoch
plt.scatter(epochs_,acc_val,label="validation loss")
plt.title('no of epochs vs loss')
plt.legend()
# -
test.head()
test = datagen.flow_from_dataframe(dataframe=test,directory=test_dir,x_col='Image',
                                   class_mode=None,shuffle=False,
                                   target_size=(224,224),color_mode='rgb')
ypred = model.predict_classes(test)
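# `predict_classes` returns integer indices. A hedged sketch (with a made-up `class_indices` dict, since the real one comes from `train_generator.class_indices`) of mapping them back to label names:

```python
import numpy as np

# train_generator.class_indices maps label -> integer index; a made-up example:
class_indices = {'bharatanatyam': 0, 'kathak': 1, 'odissi': 2}

# Invert the mapping so predicted indices can be turned back into labels
index_to_label = {v: k for k, v in class_indices.items()}

ypred_demo = np.array([2, 0, 1])  # stand-in for the model's predicted indices
labels = [index_to_label[i] for i in ypred_demo]
print(labels)
```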
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="sPD4xO8RETK9"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# + id="2CQyqElGODYd" colab={"base_uri": "https://localhost:8080/"} outputId="1a8a9727-4752-4105-ebc1-70babf0d422a"
from google.colab import drive
drive.mount('/content/drive')
# + id="ZUqmXcs1FiRG"
df=pd.read_json(r'/content/drive/MyDrive/TFD/Cell_Phones_and_Accessories_5 (1).json', lines = True)
# + id="RGyVJEGCF6Rp" colab={"base_uri": "https://localhost:8080/", "height": 391} outputId="4ed9f646-3e21-487f-95dc-8a5aa25c6227"
df.head()
# + id="LiI_ScBtHQuZ" colab={"base_uri": "https://localhost:8080/"} outputId="922a69fa-d224-425b-c21b-d6f6d73a24e4"
df['overall'].value_counts()
# + id="ShdIppQHHZNw" colab={"base_uri": "https://localhost:8080/"} outputId="2cc06e4c-5662-40bf-e1d1-4b15177d36e1"
df.isnull().sum()
# + id="ZuRKLdEKSMBp" colab={"base_uri": "https://localhost:8080/", "height": 572} outputId="b4234072-9542-40f5-9933-087392266080"
pre = df[['reviewText', 'summary']]
pre['text'] = pre['reviewText']
pre = pre[['text', 'summary']]
pre
# + id="5sxk6LzDSlqs" colab={"base_uri": "https://localhost:8080/"} outputId="a16292de-2fa4-4a2f-fc3f-c610eccf1ed0"
#LSTM with Attention
#pip install keras-self-attention
pre['text'][:10]
# + [markdown] id="leLtdF8fS__0"
# # Data Cleaning
# + id="qE1Vk-WnSzyR"
import re
#Removes non-alphabetic characters:
def text_strip(column):
for row in column:
#ORDER OF REGEX IS VERY VERY IMPORTANT!!!!!!
row=re.sub("(\\t)", ' ', str(row)).lower() #remove escape characters
row=re.sub("(\\r)", ' ', str(row)).lower()
row=re.sub("(\\n)", ' ', str(row)).lower()
row=re.sub("(__+)", ' ', str(row)).lower() #remove _ if it occurs more than once consecutively
row=re.sub("(--+)", ' ', str(row)).lower() #remove - if it occurs more than once consecutively
row=re.sub("(~~+)", ' ', str(row)).lower() #remove ~ if it occurs more than once consecutively
row=re.sub("(\+\++)", ' ', str(row)).lower() #remove + if it occurs more than once consecutively
row=re.sub("(\.\.+)", ' ', str(row)).lower() #remove . if it occurs more than once consecutively
row=re.sub(r"[<>()|&©ø\[\]\'\",;?~*!]", ' ', str(row)).lower() #remove <>()|&©ø"',;?~*!
row=re.sub("(mailto:)", ' ', str(row)).lower() #remove mailto:
row=re.sub(r"(\\x9\d)", ' ', str(row)).lower() #remove \x9* in text
row=re.sub("([iI][nN][cC]\d+)", 'INC_NUM', str(row)).lower() #replace INC nums to INC_NUM
row=re.sub("([cC][mM]\d+)|([cC][hH][gG]\d+)", 'CM_NUM', str(row)).lower() #replace CM# and CHG# to CM_NUM
row=re.sub("(\.\s+)", ' ', str(row)).lower() #remove full stop at end of words(not between)
row=re.sub("(\-\s+)", ' ', str(row)).lower() #remove - at end of words(not between)
row=re.sub("(\:\s+)", ' ', str(row)).lower() #remove : at end of words(not between)
row=re.sub("(\s+.\s+)", ' ', str(row)).lower() #remove any single characters hanging between 2 spaces
#Replace any url as such https://abc.xyz.net/browse/sdf-5327 ====> abc.xyz.net
try:
url = re.search(r'((https*:\/*)([^\/\s]+))(.[^\s]+)', str(row))
repl_url = url.group(3)
row = re.sub(r'((https*:\/*)([^\/\s]+))(.[^\s]+)',repl_url, str(row))
except:
pass #there might be emails with no url in them
row = re.sub("(\s+)",' ',str(row)).lower() #remove multiple spaces
#Should always be last
row=re.sub("(\s+.\s+)", ' ', str(row)).lower() #remove any single characters hanging between 2 spaces
yield row
# + id="_BR_SZvtgFh3" colab={"base_uri": "https://localhost:8080/"} outputId="b1eb6fdf-ab29-4d52-96dc-91aeb7c076c7"
brief_cleaning1 = text_strip(pre['text'])
brief_cleaning2 = text_strip(pre['summary'])
from time import time
import spacy
nlp = spacy.load('en', disable=['ner', 'parser']) # disabling Named Entity Recognition for speed
#Taking advantage of spaCy .pipe() method to speed-up the cleaning process:
#If data loss seems to be happening (i.e. len(text) = 50 instead of 75, etc.) in this cell, decrease the batch_size parameter
t = time()
#Batch the data points into 5000 and run on all cores for faster preprocessing
text = [str(doc) for doc in nlp.pipe(brief_cleaning1, batch_size=5000, n_threads=-1)]
#Takes 7-8 mins
print('Time to clean up everything: {} mins'.format(round((time() - t) / 60, 2)))
# + id="v_iuZCtagKa-" colab={"base_uri": "https://localhost:8080/"} outputId="8d576256-913f-47e7-9302-cd5c3ae07540"
#Taking advantage of spaCy .pipe() method to speed-up the cleaning process:
t = time()
#Batch the data points into 5000 and run on all cores for faster preprocessing
summary = ['_START_ '+ str(doc) + ' _END_' for doc in nlp.pipe(brief_cleaning2, batch_size=5000, n_threads=-1)]
#Takes 7-8 mins
print('Time to clean up everything: {} mins'.format(round((time() - t) / 60, 2)))
# + id="-PJWaMt-gkos" colab={"base_uri": "https://localhost:8080/"} outputId="460f2d68-36b7-4fbf-9a6b-977d872b3f57"
print('The text:\n', text[0])
print('\nThe summary of the text:\n', summary[0])
# + id="_2MStdtYg85F" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="23bdeb7c-4785-4e2f-a475-0b7524487ef6"
pre['cleaned_text'] = pd.Series(text)
pre['cleaned_summary'] = pd.Series(summary)
text_count = []
summary_count = []
for sent in pre['cleaned_text']:
text_count.append(len(sent.split()))
for sent in pre['cleaned_summary']:
summary_count.append(len(sent.split()))
graph_df= pd.DataFrame()
graph_df['text']=text_count
graph_df['summary']=summary_count
import matplotlib.pyplot as plt
graph_df.hist(bins = 5)
plt.show()
# + [markdown] id="gcuNhx-Y1577"
# # Check how much % of summary have 0-15 words
# + id="SFODtndPjC__" colab={"base_uri": "https://localhost:8080/"} outputId="241bdc49-04dc-4e85-ee87-e550733894c4"
cnt=0
for i in pre['cleaned_summary']:
if(len(i.split())<=15):
cnt=cnt+1
print(cnt/len(pre['cleaned_summary']))
# + [markdown] id="9UYFpG0h19wl"
# #Check how much % of text have 0-70 words
#
# + id="cNcxLvGAjHhu" colab={"base_uri": "https://localhost:8080/"} outputId="b54bf233-c92f-4c4a-da94-5cc48e8bb02a"
cnt=0
for i in pre['cleaned_text']:
if(len(i.split())<=100):
cnt=cnt+1
print(cnt/len(pre['cleaned_text']))
# + [markdown] id="58dn46KP2A7P"
# # Model to summarize the text between 0-15 words for Summary and 0-100 words for Text
#
# + id="_4EVDMN4jLrQ" colab={"base_uri": "https://localhost:8080/", "height": 145} outputId="38a10c37-16d0-4648-d06e-d2170c50ee4f"
max_text_len=100
max_summary_len=15
#Select the Summaries and Text between max len defined above
cleaned_text =np.array(pre['cleaned_text'])
cleaned_summary=np.array(pre['cleaned_summary'])
short_text=[]
short_summary=[]
for i in range(len(cleaned_text)):
if(len(cleaned_summary[i].split())<=max_summary_len and len(cleaned_text[i].split())<=max_text_len):
short_text.append(cleaned_text[i])
short_summary.append(cleaned_summary[i])
post_pre=pd.DataFrame({'text':short_text,'summary':short_summary})
post_pre.head(2)
# + [markdown] id="lRdx3Od12FJN"
# # Add sostok and eostok tokens at the start and end of the summaries
# + id="hKdUsFsLjZFd" colab={"base_uri": "https://localhost:8080/", "height": 145} outputId="a1359b5c-d0eb-40b1-f59f-d210a4355a0d"
post_pre['summary'] = post_pre['summary'].apply(lambda x : 'sostok '+ x + ' eostok')
post_pre.head(2)
# + [markdown] id="uLq6Bl7g2Jbi"
# # Prepare a tokenizer for reviews on training data
# + id="3FKgPZRPjh_2"
from sklearn.model_selection import train_test_split
x_tr,x_val,y_tr,y_val=train_test_split(np.array(post_pre['text']),np.array(post_pre['summary']),test_size=0.1,random_state=0,shuffle=True)
#Lets tokenize the text to get the vocab count , you can use Spacy here also
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
x_tokenizer = Tokenizer()
x_tokenizer.fit_on_texts(list(x_tr))
# + id="ZqVawncxlZmm" colab={"base_uri": "https://localhost:8080/"} outputId="f1866100-f687-468b-cde8-e1242dea0a8b"
thresh=4
cnt=0
tot_cnt=0
freq=0
tot_freq=0
for key,value in x_tokenizer.word_counts.items():
tot_cnt=tot_cnt+1
tot_freq=tot_freq+value
if(value<thresh):
cnt=cnt+1
freq=freq+value
print("% of rare words in vocabulary:",(cnt/tot_cnt)*100)
print("Total Coverage of rare words:",(freq/tot_freq)*100)
# + [markdown] id="-MK9AmOG2SvJ"
#
#
# * Prepare a tokenizer for reviews on training data
# * Convert text sequences into integer sequences (i.e one-hot encodeing all the words)
# * Padding zero upto maximum length
# * Size of vocabulary ( +1 for padding token)
#
# + id="tICLz3SOleh6" colab={"base_uri": "https://localhost:8080/"} outputId="5739cd9d-fcc4-4aa4-ca62-76fa17c440a4"
x_tokenizer = Tokenizer(num_words=tot_cnt-cnt)
x_tokenizer.fit_on_texts(list(x_tr))
x_tr_seq = x_tokenizer.texts_to_sequences(x_tr)
x_val_seq = x_tokenizer.texts_to_sequences(x_val)
x_tr = pad_sequences(x_tr_seq, maxlen=max_text_len, padding='post')
x_val = pad_sequences(x_val_seq, maxlen=max_text_len, padding='post')
x_voc = x_tokenizer.num_words + 1
print("Size of vocabulary in X = {}".format(x_voc))
# + [markdown] id="okBIcezQ2ym9"
# # Prepare a tokenizer for reviews on training data
# + id="A690_E4BligY" colab={"base_uri": "https://localhost:8080/"} outputId="76d66fa3-ec82-4579-c615-6f0f23a7d2ce"
y_tokenizer = Tokenizer()
y_tokenizer.fit_on_texts(list(y_tr))
thresh=6
cnt=0
tot_cnt=0
freq=0
tot_freq=0
for key,value in y_tokenizer.word_counts.items():
tot_cnt=tot_cnt+1
tot_freq=tot_freq+value
if(value<thresh):
cnt=cnt+1
freq=freq+value
print("% of rare words in vocabulary:",(cnt/tot_cnt)*100)
print("Total Coverage of rare words:",(freq/tot_freq)*100)
# + [markdown] id="9Gm5SXRI23vo"
#
#
# * Prepare a tokenizer for reviews on training data
# * Convert text sequences into integer sequences (i.e one hot encode the text in Y)
# * Padding zero upto maximum length
# * Size of vocabulary
#
#
# + id="upOiZehllpcs" colab={"base_uri": "https://localhost:8080/"} outputId="63fef02d-f27b-48d1-83fd-8f909ce0f28b"
y_tokenizer = Tokenizer(num_words=tot_cnt-cnt)
y_tokenizer.fit_on_texts(list(y_tr))
y_tr_seq = y_tokenizer.texts_to_sequences(y_tr)
y_val_seq = y_tokenizer.texts_to_sequences(y_val)
y_tr = pad_sequences(y_tr_seq, maxlen=max_summary_len, padding='post')
y_val = pad_sequences(y_val_seq, maxlen=max_summary_len, padding='post')
y_voc = y_tokenizer.num_words +1
print("Size of vocabulary in Y = {}".format(y_voc))
# + id="qYQCh6Ysltvw"
# Remove pairs whose padded summary contains only the start and end tokens (an effectively empty summary)
ind=[]
for i in range(len(y_tr)):
cnt=0
for j in y_tr[i]:
if j!=0:
cnt=cnt+1
if(cnt==2):
ind.append(i)
y_tr=np.delete(y_tr,ind, axis=0)
x_tr=np.delete(x_tr,ind, axis=0)
ind=[]
for i in range(len(y_val)):
cnt=0
for j in y_val[i]:
if j!=0:
cnt=cnt+1
if(cnt==2):
ind.append(i)
y_val=np.delete(y_val,ind, axis=0)
x_val=np.delete(x_val,ind, axis=0)
# + [markdown] id="FRR8Ozr9003I"
# # Creating a Seq2Seq natural language processing model
#
# We can build a Seq2Seq model on any problem which involves sequential information. This includes Sentiment classification, Neural Machine Translation, and Named Entity Recognition – some very common applications of sequential information.
#
# In the case of Neural Machine Translation, the input is a text in one language and the output is also a text in another language:
#
# 
#
# In the Named Entity Recognition, the input is a sequence of words and the output is a sequence of tags for every word in the input sequence:
#
# 
#
#
# Our objective is to build a text summarizer where the input is a long sequence of words (in a text body), and the output is a short summary (which is a sequence as well). So, we can model this as a Many-to-Many Seq2Seq problem. Below is a typical Seq2Seq model architecture:
#
# + [markdown] id="_5thbOhN8wPQ"
# 
# + id="eV3R34rpl1Xw" colab={"base_uri": "https://localhost:8080/"} outputId="7f39590c-7965-4958-c838-a093871cbcab"
from keras import backend as K
import gensim
from numpy import *
import numpy as np
import pandas as pd
import re
from bs4 import BeautifulSoup
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from nltk.corpus import stopwords
from tensorflow.keras.layers import Input, LSTM, Embedding, Dense, Concatenate, TimeDistributed
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping
import warnings
pd.set_option("display.max_colwidth", 200)
warnings.filterwarnings("ignore")
print("Size of vocabulary in X = {}".format(x_voc))
K.clear_session()
latent_dim = 300
embedding_dim=200
# Encoder
encoder_inputs = Input(shape=(max_text_len,))
#embedding layer
enc_emb = Embedding(x_voc, embedding_dim,trainable=True)(encoder_inputs)
#encoder lstm 1
encoder_lstm1 = LSTM(latent_dim,return_sequences=True,return_state=True,dropout=0.4,recurrent_dropout=0.4)
encoder_output1, state_h1, state_c1 = encoder_lstm1(enc_emb)
#encoder lstm 2
encoder_lstm2 = LSTM(latent_dim,return_sequences=True,return_state=True,dropout=0.4,recurrent_dropout=0.4)
encoder_output2, state_h2, state_c2 = encoder_lstm2(encoder_output1)
#encoder lstm 3
encoder_lstm3=LSTM(latent_dim, return_state=True, return_sequences=True,dropout=0.4,recurrent_dropout=0.4)
encoder_outputs, state_h, state_c= encoder_lstm3(encoder_output2)
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None,))
#embedding layer
dec_emb_layer = Embedding(y_voc, embedding_dim,trainable=True)
dec_emb = dec_emb_layer(decoder_inputs)
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True,dropout=0.4,recurrent_dropout=0.2)
decoder_outputs,decoder_fwd_state, decoder_back_state = decoder_lstm(dec_emb,initial_state=[state_h, state_c])
#dense layer
decoder_dense = TimeDistributed(Dense(y_voc, activation='softmax'))
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
# + [markdown] id="Yrut2Yux0xbH"
# # Training the model
# + id="ZrAQpjU3l-tV" colab={"base_uri": "https://localhost:8080/"} outputId="1033a604-511d-4c70-ed7a-57ffb8f8e9cf"
from keras.callbacks import ModelCheckpoint
path_model= '/content/drive/MyDrive/TFD/model_filter.h5' # save model at this location after each epoch
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics= ['acc'])
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1,patience=2)
history=model.fit([x_tr,y_tr[:,:-1]], y_tr.reshape(y_tr.shape[0],y_tr.shape[1], 1)[:,1:] ,epochs=1,callbacks=[es, ModelCheckpoint(filepath=path_model)],batch_size=128,
validation_data=([x_val,y_val[:,:-1]], y_val.reshape(y_val.shape[0],y_val.shape[1], 1)[:,1:]))
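# As with the CNN notebook earlier, the training/validation loss curves can be plotted from `history.history`. A sketch with made-up numbers (the real dict comes from the `fit` call above):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this sketch runs anywhere
import matplotlib.pyplot as plt

# history.history would look roughly like this after a few epochs (made-up values)
hist = {'loss': [2.9, 2.4, 2.1], 'val_loss': [3.0, 2.6, 2.5]}

epochs_ = range(1, len(hist['loss']) + 1)
plt.plot(epochs_, hist['loss'], label='training loss')
plt.plot(epochs_, hist['val_loss'], label='validation loss')
plt.xlabel('epoch'); plt.ylabel('loss'); plt.legend()
plt.savefig('seq2seq_loss.png')
```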
# + colab={"base_uri": "https://localhost:8080/", "height": 52} id="uPjLOMSgpL7K" outputId="89bb3e35-ba7b-4c86-bc0e-422b6ad1c484"
import tensorflow as tf
path_model= r'/content/drive/MyDrive/TFD/model_filter.h5'
model = tf.keras.models.load_model(path_model)
print(model.summary())
# + [markdown] id="I4eqNYxZ0atB"
# # Encoding input sequence
# + id="GWKWxECdpHfY"
reverse_target_word_index=y_tokenizer.index_word
reverse_source_word_index=x_tokenizer.index_word
target_word_index=y_tokenizer.word_index
# Encode the input sequence to get the feature vector
encoder_model = Model(inputs=encoder_inputs,outputs=[encoder_outputs, state_h, state_c])
# Decoder setup
# Below tensors will hold the states of the previous time step
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_hidden_state_input = Input(shape=(max_text_len,latent_dim))
# Get the embeddings of the decoder sequence
dec_emb2= dec_emb_layer(decoder_inputs)
# To predict the next word in the sequence, set the initial states to the states from the previous time step
decoder_outputs2, state_h2, state_c2 = decoder_lstm(dec_emb2, initial_state=[decoder_state_input_h, decoder_state_input_c])
# A dense softmax layer to generate prob dist. over the target vocabulary
decoder_outputs2 = decoder_dense(decoder_outputs2)
# Final decoder model
decoder_model = Model(
[decoder_inputs] + [decoder_hidden_state_input,decoder_state_input_h, decoder_state_input_c],
[decoder_outputs2] + [state_h2, state_c2])
# + [markdown] id="zP1LEpI20IRp"
# # Retrieving the decoded summary text from the Seq2Seq model
# + colab={"base_uri": "https://localhost:8080/"} id="qgf0JRp7BVce" outputId="7a6f0c9e-b0fc-48f4-deda-0c6018fcb6d8"
def decode_sequence(input_seq):
# Encode the input as state vectors.
e_out, e_h, e_c = encoder_model.predict(input_seq)
#print(e_out, e_h, e_c)
# Generate empty target sequence of length 1.
target_seq = np.zeros((1,1))
# Populate the first word of target sequence with the start word.
target_seq[0, 0] = target_word_index['sostok']
stop_condition = False
decoded_sentence = ''
while not stop_condition:
output_tokens, h, c = decoder_model.predict([target_seq] + [e_out, e_h, e_c])
# Sample a token
sampled_token_index = np.argmax(output_tokens[0, -1, :])
sampled_token = reverse_target_word_index[sampled_token_index]
if(sampled_token!='eostok'):
decoded_sentence += ' '+sampled_token
# Exit condition: either hit max length or find stop word.
if (sampled_token == 'eostok' or len(decoded_sentence.split()) >= (max_summary_len-1)):
stop_condition = True
# Update the target sequence (of length 1).
target_seq = np.zeros((1,1))
target_seq[0, 0] = sampled_token_index
# Update internal states
e_h, e_c = h, c
return decoded_sentence
#Let us define the functions to convert an integer sequence to a word sequence for summary as well as the reviews:
def seq2summary(input_seq):
newString=''
for i in input_seq:
if((i!=0 and i!=target_word_index['sostok']) and i!=target_word_index['eostok']):
newString=newString+reverse_target_word_index[i]+' '
return newString
def seq2text(input_seq):
newString=''
for i in input_seq:
if(i!=0):
newString=newString+reverse_source_word_index[i]+' '
return newString
# Run the model over the data to see the results
for i in range(0,6):
print("Review:",seq2text(x_tr[i]))
print("Original summary:",seq2summary(y_tr[i]))
print("Predicted summary:",decode_sequence(x_tr[i].reshape(1,max_text_len)))
print("\n")
# + [markdown] id="MQS8Sw29z7Gg"
# # To extract Subjects from the predicted summary:
# + id="mvT8wyl7EZex"
import requests
import re
import nltk
from bs4 import BeautifulSoup

# NOTE: `r` is expected to be an HTTP response (e.g. from requests.get);
# assigning the dataframe `df` here will fail at `r.text`
r = df
soup = BeautifulSoup(r.text, 'html.parser')
title = soup.find('title').get_text()
document = ' '.join([p.get_text() for p in soup.find_all('p')])
document = re.sub('[^A-Za-z .-]+', ' ', document)
document = ' '.join(document.split())
document = ' '.join([i for i in document.split() if i not in stop])
words = nltk.tokenize.word_tokenize(document)
words = [word.lower() for word in words if word not in stop]
fdist = nltk.FreqDist(words)
# `stop`, `NOUNS` and `top_10_entities` are assumed to be defined earlier in the pipeline
most_freq_nouns = [w for w, c in fdist.most_common(10)
                   if nltk.pos_tag([w])[0][1] in NOUNS]
subject_nouns = [entity for entity in top_10_entities
                 if entity.split()[0] in most_freq_nouns]
train_sents = nltk.corpus.brown.tagged_sents()
train_sents += nltk.corpus.conll2000.tagged_sents()
train_sents += nltk.corpus.treebank.tagged_sents()
# Create instance of SubjectTrigramTagger
trigram_tagger = SubjectTrigramTagger(train_sents)
# + [markdown] id="Vyrnsih33ecB"
# # Output Format
#
# ## 1. Predicted Summary text
# ## 2. Extracting this summary text
# ```
# start great case end
# ```
# ## to
# ```
# great case
# ```
# ## 3. Extract subject from this sentence as
#
# ```
# case
# ```
# ## 4. Voila! The retrieved subject from the sentence is the tag we were looking for.
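# The token-stripping and subject-picking steps above can be sketched in plain
# Python (the tag lookup below is a toy stand-in for `nltk.pos_tag`, for
# illustration only):

```python
summary = "start great case end"

# Step 2: drop the start/end marker tokens
words = [w for w in summary.split() if w not in ("start", "end")]
sentence = " ".join(words)   # "great case"

# Step 3: keep the noun(s) as subject candidates; a real pipeline would use
# nltk.pos_tag here -- this toy lookup only serves the example
toy_tags = {"great": "JJ", "case": "NN"}
subject = [w for w in words if toy_tags.get(w, "").startswith("NN")]

print(sentence, subject)  # -> great case ['case']
```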
| Model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Secondary Structure Elements Word2Vec Encoder Demo
#
# This demo creates a dataset by extracting secondary structure elements of type "H" (alpha-helix), then encodes the sequences as overlapping n-gram Word2Vec feature vectors
#
# ## Imports
from pyspark import SQLContext
from pyspark.sql import SparkSession
from mmtfPyspark.ml import ProteinSequenceEncoder
from mmtfPyspark.mappers import StructureToPolymerChains
from mmtfPyspark.filters import ContainsLProteinChain
from mmtfPyspark.datasets import secondaryStructureElementExtractor
from mmtfPyspark.webfilters import Pisces
from mmtfPyspark.io import mmtfReader
# #### Configure Spark
spark = SparkSession.builder.appName("SecondaryStructureElementsWord2VecEncoder").getOrCreate()
# ## Read MMTF Hadoop sequence file
#
# Create a non-redundant set (<= 20% sequence identity) of L-protein chains
# +
path = "../../resources/mmtf_reduced_sample/"
fraction = 0.05
seed = 123
pdb = mmtfReader \
.read_sequence_file(path) \
.flatMap(StructureToPolymerChains(False, True)) \
.filter(ContainsLProteinChain()) \
.sample(False, fraction, seed)
# -
# ## Extract Element "H" from Secondary Structure
label = "H"
data = secondaryStructureElementExtractor.get_dataset(pdb, label).cache()
print(f"original data : {data.count()}")
data.show(10, False)
# ## Word2Vec encoded feature Vector
# +
segmentLength = 11
n = 2
windowSize = (segmentLength - 1) // 2   # Word2Vec window size (integer)
vectorSize = 50
encoder = ProteinSequenceEncoder(data)
# overlapping_ngram_word2vec_encode takes keyword arguments
data = encoder.overlapping_ngram_word2vec_encode(n=n, windowSize=windowSize, vectorSize=vectorSize)
data.show(5)
# -
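# `overlapping_ngram_word2vec_encode` first splits every sequence into overlapping
# n-grams before training Word2Vec. For n = 2 the segmentation looks like this
# pure-Python sketch (an illustration, not the mmtfPyspark implementation):

```python
def overlapping_ngrams(sequence, n=2):
    """Return all overlapping n-grams of a sequence string."""
    return [sequence[i:i + n] for i in range(len(sequence) - n + 1)]

print(overlapping_ngrams("HHHHAH", n=2))  # -> ['HH', 'HH', 'HH', 'HA', 'AH']
```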
# ## Terminate Spark Context
spark.stop()
| demos/ml/SecondaryStructureElementsWord2VecEncoder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to Jupyter notebooks <img align="right" src="../Supplementary_data/dea_logo.jpg">
#
# * **Acknowledgement**: This notebook was originally created by [Digital Earth Australia (DEA)](https://www.ga.gov.au/about/projects/geographic/digital-earth-australia) and has been modified for use in the EY Data Science Program
# * **Prerequisites**:
# * There is no prerequisite learning required, as this document is designed for a novice user of the Jupyter environment
# ## Background
# Access to implementations of the [Open Data Cube](https://www.opendatacube.org/) such as [Digital Earth Australia](https://www.ga.gov.au/dea) and [Digital Earth Africa](https://www.digitalearthafrica.org/) is achieved through the use of Python code and [Jupyter Notebooks](https://jupyterlab.readthedocs.io/en/stable/user/notebook.html).
# The Jupyter Notebook (also termed notebook from here onwards) is an interactive web application that allows for the viewing, creation and documentation of live code.
# Notebook applications include data transformation, visualisation, modelling and machine learning.
# The default web interface to access notebooks when using either the National Computational Infrastructure (NCI) or the DEA Sandbox is [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/).
# ## Description
# This notebook is designed to introduce users to the basics of using Python code in Jupyter Notebooks via JupyterLab.
#
# Topics covered include:
#
# 1. How to run (execute) a Jupyter Notebook cell
# 2. The different types of Jupyter Notebook cells
# 3. Stopping a process or restarting a Jupyter Notebook
# 4. Saving and exporting your work
# 5. Starting a new Jupyter Notebook
#
# ***
# ## Getting started
# ### Running (executing) a cell
# Jupyter Notebooks allow code to be separated into sections that can be executed independently of one another.
# These sections are called "cells".
#
# Python code is written into individual cells that can be executed by placing the cursor in the cell and typing `Shift-Enter` on the keyboard or selecting the ► "Run the selected cells and advance" button in the ribbon at the top of the notebook.
# These options will run a single cell at a time.
#
# To automatically run all cells in a notebook, navigate to the "Run" tab of the menu bar at the top of JupyterLab and select "Run All Cells" (or the option that best suits your needs).
# When a cell is run, the cell's content is executed.
# Any output produced from running the cell will appear directly below it.
#
# Run the cell below:
print("I ran a cell!")
# ### Cell status
# The `[ ]:` symbol to the left of each Code cell describes the state of the cell:
#
# * `[ ]:` means that the cell has not been run yet.
# * `[*]:` means that the cell is currently running.
# * `[1]:` means that the cell has finished running and was the first cell run.
#
# The number indicates the order that the cells were run in.
#
# > **Note:** To check whether a cell is currently executing in a Jupyter notebook, inspect the small circle in the top-right of the window.
# The circle will turn grey ("Kernel busy") when the cell is running, and return to empty ("Kernel idle") when the process is complete.
# ## Jupyter notebook cell types
# Cells are identified as either Code, Markdown, or Raw.
# This designation can be changed using the ribbon at the top of the notebook.
# ### Code cells
#
# All code operations are performed in Code cells.
# Code cells can be used to edit and write new code, and perform tasks like loading data, plotting data and running analyses.
#
# Click on the cell below.
# Note that the ribbon at the top of the notebook describes it as a Code cell.
print("This is a code cell")
# ### Markdown cells
# Place the cursor in this cell by double clicking.
#
# The cell format has changed to allow for editing.
# Note that the ribbon at the top of the notebook describes this as a Markdown cell.
#
# Run this cell to return the formatted version.
#
# Markdown cells provide the narrative to a notebook.
# They are used for text and are useful to describe the code operations in the following cells.
# To see some of the formatting options for text in a Markdown cell, navigate to the "Help" tab of the menu bar at the top of JupyterLab and select "Markdown Reference".
# Here you will see a wide range of text formatting options including headings, dot points, italics, hyperlinking and creating tables.
# ### Raw cells
# Information in Raw cells is stored in the notebook metadata and can be used to render different code formats into HTML or $\LaTeX$.
# There are a range of available Raw cell formats that differ depending on how they are to be rendered.
# For the purposes of this beginner's guide, raw cells are rarely used by the authors and not required for most notebook users.
#
# There is a Raw cell associated with the [Tags](#Tags) section of this notebook below.
# As this cell is in the "ReStructured Text" format, its contents are not visible nor are they executed in any way.
# This cell is used by the authors to store information tags in the metadata that is relevant to the notebook, and create an [index of tags on the Digital Earth Australia user guide](https://docs.dea.ga.gov.au/genindex.html).
# ## Stopping a process or restarting a Jupyter Notebook
# Sometimes it can be useful to stop a cell execution before it finishes (e.g. if a process is taking too long to complete, or if the code needs to be modified before running the cell).
# To interrupt a cell execution, click the ■ "stop" button ("Interrupt the kernel") in the ribbon above the notebook.
#
# To test this, run the following code cell.
# This will run a piece of code that will take 20 seconds to complete.
# To interrupt this code, press the ■ "stop" button.
# The notebook should stop executing the cell.
#
import time
time.sleep(20)
# If the approach above does not work (e.g. if the notebook has frozen or refuses to respond), try restarting the entire notebook.
# To do this, navigate to the "Kernel" tab of the menu bar, then select "Restart Kernel".
# Alternatively, click the ↻ "Restart the kernel" button in the ribbon above the notebook.
#
# Restarting a notebook can also be useful for testing whether code will work correctly the first time a new user tries to run the notebook.
# To restart and then run every cell in a notebook, navigate to the "Kernel" tab, then select "Restart and Run All Cells".
# ## Saving and exporting your work
#
# Modifications to Jupyter Notebooks are automatically saved every few minutes.
# To actively save the notebook, navigate to "File" in the menu bar, then select "Save Notebook".
# Alternatively, click the 💾 "save" icon on the left of the ribbon above the notebook.
#
#
# ### Exporting Jupyter Notebooks to Python scripts
# The standard file extension for a Jupyter Notebook is `.ipynb`.
#
# There are a range of export options that allow you to save your work for access outside of the Jupyter environment.
# For example, Python code can easily be saved as `.py` Python scripts by navigating to the "File" tab of the menu bar in JupyterLab and selecting "Export Notebook As" followed by "Export Notebook To Executable Script".
#
# ## Starting a new notebook
# To create a new notebook, use JupyterLab's file browser to navigate to the directory you would like the notebook to be created in (if the file browser is not visible, re-open it by clicking on the 📁 "File browser" icon at the top-left of the screen).
#
# Once you have navigated to the desired location, press the ✚ "New Launcher" button above the browser.
# This will bring up JupyterLab's "Launcher" page which allows you to launch a range of new files or utilities.
# Below the heading "Notebook", click the large "Python 3" button.
# This will create a new notebook entitled "Untitled.ipynb" in the chosen directory.
#
# To rename this notebook to something more useful, right-click on it in the file browser and select "Rename".
# ## Recommended next steps
#
# For more advanced information about working with Jupyter Notebooks or JupyterLab, see the [JupyterLab documentation](https://jupyterlab.readthedocs.io/en/stable/user/notebook.html).
#
# To continue working through the notebooks in this beginner's guide, the following notebooks are designed to be worked through in the following order:
#
# 1. **Jupyter Notebooks (this notebook)**
# 2. [Digital Earth Australia](02_DEA.ipynb)
# 3. [Products and Measurements](03_Products_and_measurements.ipynb)
# 4. [Loading data](04_Loading_data.ipynb)
# 5. [Plotting](05_Plotting.ipynb)
# 6. [Performing a basic analysis](06_Basic_analysis.ipynb)
# 7. [Introduction to Numpy](07_Intro_to_numpy.ipynb)
# 8. [Introduction to Xarray](08_Intro_to_xarray.ipynb)
# 9. [Parallel processing with Dask](09_Parallel_processing_with_dask.ipynb)
#
# Once you have worked through the beginner's guide, you can join advanced users by exploring:
#
# * The "DEA datasets" directory in the repository, where you can explore DEA products in depth.
# * The "Frequently used code" directory, which contains a recipe book of common techniques and methods for analysing DEA data.
# * The "Real-world examples" directory, which provides more complex workflows and analysis case studies.
# ***
# ## Additional information
#
# **License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
# Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
#
# **Contact:** If you need assistance, please review the FAQ section and support options on the [EY Data Science platform](https://datascience.ey.com/).
| notebooks/01_Beginners_guide/01_Jupyter_notebooks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
from sigvisa.graph.sigvisa_graph import SigvisaGraph
from sigvisa.signals.common import Waveform
from sigvisa.source.event import get_event
from sigvisa.infer.run_mcmc import run_open_world_MH
from sigvisa.infer.mcmc_logger import MCMCLogger
# -
# +
from sigvisa.synthetic.doublets import *
def sample_events(basedir, seed=0):
n_evs = 10
lons = [129, 130]
lats = [-3.5, -4.5]
times = [1238889600, 1245456000]
mbs = [4.0, 5.0]
sw = SampledWorld(seed=seed)
sw.sample_region_with_doublet(n_evs, lons, lats, times, mbs)
sw.stas = ["MK31", "AS12", "CM16", "FITZ", "WR1"]
gpcov = GPCov([0.7,], [ 40.0, 5.0],
dfn_str="lld",
wfn_str="compact2")
param_means = build_param_means(sw.stas)
sw.set_basis(wavelet_family="db4_2.0_3_30", iid_repeatable_var=0.1,
iid_nonrepeatable_var=0.4, srate=5.0)
sw.joint_sample_arrival_params(gpcov, param_means)
sw.sample_signals("freq_0.8_4.5")
wave_dir = os.path.join(basedir, "sampled_%d_spreadtime" % seed)
sw.serialize(wave_dir)
#sw.train_gp_models_true_data()
#sw.save_gps(wave_dir, run_name="synth_truedata")
return sw
import os
basedir = os.path.join(os.getenv("SIGVISA_HOME"), "experiments", "synth_wavematch")
#sw = sample_events(basedir)
wave_dir = os.path.join(basedir, "sampled_%d_spreadtime" % 0)
sw = load_sampled_world(wave_dir)
dummyPrior = dict([(param, Gaussian(sw.param_means["MK31"][param], std=np.sqrt(sw.gpcov.wfn_params[0]))) for param in sw.param_means["MK31"].keys()
])
# -
import cPickle as pickle
with open("../logs/mcmc/01687/step_000049/pickle.sg", 'rb') as f:
sg = pickle.load(f)
"""for sta in sg._joint_gpmodels.keys():
for param in sg._joint_gpmodels[sta].keys():
sg._joint_gpmodels[sta][param]._clear_cache()
for sta in sg.station_waves.keys():
for wn in sg.station_waves[sta]:
wn.pass_jointgp_messages()
with open("../logs/mcmc/01687/step_000049/pickle.sg", 'wb') as f:
pickle.dump(sg, f)"""
print sg.current_log_p()
# +
logger = MCMCLogger(write_template_vals=False, dump_interval=5)
corrupted_evs = []
for i,ev in enumerate(sw.evs):
eid = i+1
lon = sg.all_nodes["%d;lon_obs" %eid ].get_value()
lat = sg.all_nodes["%d;lat_obs" %eid ].get_value()
depth = sg.all_nodes["%d;depth_obs" %eid ].get_value()
mb = sg.all_nodes["%d;mb_obs" %eid ].get_value()
time = sg.all_nodes["%d;time_obs" %eid ].get_value()
cev = Event(lon=lon, lat=lat, depth=depth, mb=mb, time=time, eid=eid)
corrupted_evs.append(cev)
with open(os.path.join(logger.run_dir, "obs_events.pkl"), "wb") as f:
pickle.dump(corrupted_evs, f)
with open(os.path.join(logger.run_dir, "events.pkl"), "wb") as f:
pickle.dump(sw.evs, f)
#sg.seed = 1
run_open_world_MH(sg, steps=1000,
enable_template_moves=True,
enable_event_moves=True,
logger=logger,
enable_event_openworld=False,
enable_template_openworld=False)
# -
print sg.seed
" step 2 -61851.71"
| notebooks/LEB_as_a_sensor_debug.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/munich-ml/BER_tail_fit/blob/main/Jitter_BER_fit.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="1DIessWNXjno"
# # Jitter and BER
# *<NAME>, April 2021*
#
# This Notebook contains an introduction into Jitter, its correlation to BER (bit error ratio) and how to measure Jitter, particularly using Xilinx FPGA's with IBERT.
# + [markdown] id="UeDoyjWugpLF"
# ### Jupyter setup
# Required Python imports and helper functions
#
# + id="Lyynbf8ifPP7"
import os, sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import special
from scipy.stats import norm
# + [markdown] id="EMrZ1csi4-zk"
# `helper_func` import path depends on execution in **Colab** or local **Jupyter**
# + id="fPxGEGlx4-zl" outputId="8db252ee-82eb-43f9-b75a-f0a020698e1e" colab={"base_uri": "https://localhost:8080/"}
if 'google.colab' in sys.modules:
if "BER_tail_fit" in os.listdir():
# !git -C BER_tail_fit pull
else:
# !git clone https://github.com/munich-ml/BER_tail_fit/
from BER_tail_fit.lib.helper_funcs import JitterEstimator, plot_jitter_fit, plot_jitter_overlay
filesDir = os.path.join(os.getcwd(), "BER_tail_fit", "datasets")
print("executing in Google Colab")
else:
from lib.helper_funcs import JitterEstimator, plot_jitter_fit, plot_jitter_overlay
filesDir = os.path.join(os.getcwd(), "datasets")
print("executing in Jupyter")
# + id="fo_lmoVAiIlE"
np.random.seed(22)
def get_url(image_name):
return "https://github.com/munich-ml/BER_tail_fit/blob/main/images/{}.png?raw=true".format(image_name)
# + [markdown] id="mXg7gAgNCEjK"
# # Introduction to Jitter
# + [markdown] id="Z_iehh9bHL0k"
# **Jitter** is the **timing uncertainty** of signal edges at the crossing point with their reference level (0V for differential signaling).
#
# Since **Noise** describes a level uncertainty, Jitter is also referred to as **timing noise** or **phase noise** (the latter usually used in the frequency domain).
#
# 
# + [markdown] id="igDq9h_gXEQb"
# The **Total Jitter (TJ)** consists of 2 major components:
#
# **Random Jitter (RJ)**
# - unbounded --> increases over time
# - Gaussian distribution
#
# **Deterministic Jitter (DJ)**
# - bounded --> saturates over time
# - can be split into sub-components (e.g. PJ, DCD, ISI)
#
#
#
# + [markdown] id="ojB-5oSrcYYv"
# ### Jitter in a transmission system
# + [markdown] id="r2ld4RqzBFLH"
# 
#
# The Jitter needs to be small enough for the receiver to sample the `rx_data`, while satisfying its setup- and hold-requirements.
#
# + [markdown] id="uJJ5qe6glFgw"
# # Measure Jitter using a Scope
# + [markdown] id="GvlJENo6dFuG"
# A scope (realtime oscilloscope) measures jitter directly with the following basic procedure:
# - **wavetrace acquisition** (voltage over time bitstream)
# - **edge detection** (signal crossings with the reference voltage)
# - **clock recovery from data** (or usage of a strobe for source synchronous clocking schemas)
# - **data eye creation** (see Tektronix Primer)
# - **jitter** (or TIE: time interval error) is now given as edge distribution (e.g. Gaussian shaped)
#
# 
# + [markdown] id="Ky_yzvOrWJUB"
# ### Disadvantages of Jitter measurements using Scopes
# Although (realtime) scopes are a very useful tool when analysing communication systems with respect to Jitter, their usage comes with some disadvantages:
# - scopes and probes are expensive
# - measurements are only available on individual samples and/or only during test
# - the probe changes the channel when being applied
# - the probe is placed somewhere on the channel, not at the receiver
#
# The **in-system FPGA-based measurement approach** proposed further down can potentially mitigate or even solve those issues.
# + [markdown] id="fFF4IEVrGeaR"
# # How Jitter relates to the BER (bit error ratio)
# With higher jitter it is more likely for the receiver to sample too early or too late.
# + colab={"base_uri": "https://localhost:8080/", "height": 545} id="rrK9MVEJla4U" outputId="122b296e-0d11-4649-b37e-42b05e00047b"
x = np.linspace(-5/6, 5/6, num=500)
scale = 0.05 # sigma value of the Gaussian distribution
norm_pdf = norm.pdf(x, loc=-0.5, scale=scale) + norm.pdf(x, loc=0.5, scale=scale)
too_early = 1 - norm.cdf(x, loc=-0.5, scale=scale)
too_late = norm.cdf(x, loc=0.5, scale=scale)
plt.figure(figsize=(8, 9)), plt.subplot(2,1,1)
plt.imshow(plt.imread(get_url("RJeye"))), plt.axis("off")
plt.subplot(2,1,2)
plt.fill_between(x, norm_pdf / norm_pdf.max(), color="orange", label="normalized gaussian PDF")
plt.plot(x, too_early, "k-.", label="gaussian CDF @mu=0: sampling too early")
plt.plot(x, too_late, "b-.", label="gaussian CDF @mu=1: sampling too late")
plt.xlim([min(x), max(x)]), plt.xticks(np.linspace(min(x), max(x), num=11))
plt.xlabel("time [UI]"), plt.grid(), plt.legend();
# + [markdown] id="VeIFs5OIGeaQ"
# The example above shows a data eye together with the distribution of its crossings (the jitter distribution, PDF).
#
# Integrating the PDF provides the likelihood of sampling too early or too late (CDF).
#
# + [markdown] id="MHhgnT-1shPC"
# ### BER definition
#
# The **bit error ratio ($BER$)** is a *figure of merit* for a link quality, commonly used in communications engineering.
#
# The $BER$ describes how many bit errors there are (on average) within the received data stream: $BER=\frac{error\_bits}{received\_bits}$
#
# A typical specification is: $BER < 10^{-12}$
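# In code, the definition is a simple ratio. A sketch comparing a received
# bitstream against the transmitted one (synthetic data, for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
tx = rng.integers(0, 2, size=1_000_000)  # transmitted bits
rx = tx.copy()
rx[::250_000] ^= 1                       # inject 4 bit errors

ber = np.count_nonzero(tx != rx) / tx.size
print(ber)  # -> 4e-06
```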
# + [markdown] id="bMvGpMZcGeaR"
# ### BER tail fitting
#
# The basic idea of BER tail fitting is to fit BER samples from measurements to a Jitter model, consisting of:
# - $\sigma$, `sigma`: Standard deviation of the Gaussian corresponding to the **RJ** (random jitter)
# - $\mu$, `mu`: Mean value of the Gaussian corresponding to the **DJ** (deterministic jitter)
#
# The **Gaussian Model** is fitted only to those BER samples that are below a certain BER threshold $BERt$ (below means *later* in test time). The $BERt$ is chosen such that ideally all deterministic jitter sources completed one cycle.
#
# + colab={"base_uri": "https://localhost:8080/", "height": 513} id="8VhKPX_2GeaR" outputId="04853852-dc18-4814-f797-133a7c3b1e83"
mu = -0.48 # mean value of the distribution
sigma = 0.04 # standard deviation
ui_pos = -0.42 # sample position (example)
x = np.linspace(-0.5, 0, 500)
pdf = norm.pdf(x, loc=mu, scale=sigma) # compute pdf for variable x
def cdf(x): # define the CDF (cumulative density function) using the erf (error function)
return 0.5 * (1+special.erf( (x-mu)/(np.sqrt(2)*sigma) ))
plt.figure(figsize=(10,8)), plt.subplot(3,1,1)
plt.plot(x, pdf, "k", label="PDF(x)")
plt.stem([ui_pos], [max(pdf)], markerfmt='D', use_line_collection=True, label="sample position")
plt.fill_between(x[x <= ui_pos], pdf[x <= ui_pos], color="green", alpha=0.4, label="P1")
plt.fill_between(x[x > ui_pos], pdf[x > ui_pos], color="red", alpha=0.4, label="P2")
plt.title(f"Edge probability 'left side data eye' with mu={mu}, sigma={sigma}")
plt.ylabel("probability density"), plt.legend(), plt.grid()
plt.subplot(3,1,2)
plt.plot(x, cdf(x), "g", label="CDF(x) = P1(x)")
plt.plot(x, 1-cdf(x), "r", label="1-CDF(x) = P2(x)")
plt.plot(2*[ui_pos], [cdf(ui_pos), 1-cdf(ui_pos)], "bD", label="sample position")
plt.ylabel("probability"), plt.legend(), plt.grid();
plt.subplot(3,1,3)
plt.semilogy(x, 1-cdf(x), "r", label="1-CDF(x) = P2(x)")
plt.semilogy([ui_pos], [1-cdf(ui_pos)], "bD", label="sample position")
plt.ylabel("probability"), plt.ylim([1e-12, 1])
plt.xlabel("x [UI]"), plt.legend(), plt.grid();
# + [markdown] id="Vj8GFn14GeaS"
# The **edge distribution** at $\pm\frac{1}{2}UI$ is assumed to have a **gaussian distribution** according to
#
# > $PDF(x) = \frac{1}{\sigma\sqrt{2\pi} }e^{-\frac{(x - \mu)^{2}}{2\sigma^2}}$
# >
# > with
# > - $PDF$ = normal probability density function, available in [scipy.stats.norm](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html?highlight=stats%20norm#scipy.stats.norm)
# > - $\sigma$ (*sigma*) = standard deviation
# > - $\mu$ (*mu*) = mean value
#
# When looking for the `bit error count`,
# - all edges **left** from the `sample position` within $-\frac{1}{2}UI \cdots 0UI$ provide good data,
# - while all edges **right** from the `sample position` provide data from the previous Unit Interval, thus **bit errors** at a rate of 0.5 (because every other bit is statistically right, if there are just *ones* and *zeros*)
#
# Therefore, the area $P2$ represents the $BER$ (Bit Error Ratio) with
#
# > $BER = \frac{1}{2}{P2}$
#
# The **integration of the Gaussian** can be done by means of the **Error Function** $erf(x)$, which is nicely described in
# [Integration of Gaussian between limits](https://www.youtube.com/watch?v=26QbWYBCw7Y):
#
# > $CDF(x) = \frac{1}{2}[1+erf(\frac{x-\mu}{\sigma\sqrt2})]$
# >
# > with
# > - $CDF$ = cumulative density function of a Gaussian
# > - $erf$ = error function
# > - $\sigma$ (*sigma*) = standard deviation
# > - $\mu$ (*mu*) = mean value
#
# Just for reference, the Error Function is defined as:
#
# > $erf(x)=\frac2{\sqrt{\pi}}\int_0^x e^{-t^2} \,dt$
#
# Returning to the **data eye problem**, the $CDF(sample\_position)$ equals the area $P1$, and therefore:
#
# > $BER = \frac{1}{2}[1-CDF(x)] = \frac{1}{4}[1-erf(\frac{x-\mu}{\sigma\sqrt2})]$
#
# With the **[complementary error function](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.erfc.html)** $erfc(x) = 1-erf(x)$, we get:
#
# > $BER = \frac{1}{4} erfc(\frac{x-\mu}{\sigma\sqrt2})$
#
# This equation needs to be solved for $x$, because we need to find the $sample\_position$ for a given $BER$. Fortunately, there is an **[inverse complementary error function](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.erfcinv.html#scipy.special.erfcinv)** $erfcinv$, which leads to the final equation:
#
# > $x = erfcinv(4BER)\ \sigma\sqrt2 + \mu$
# >
# > again with
# > - $x$ = sample position (on the left half of the unit interval)
# > - $erfcinv$ = inverse complementary error function
# > - $BER$ = Bit Error Ratio, at which the distribution is evaluated
# > - $\sigma$ (*sigma*) = standard deviation of the Gaussian
# > - $\mu$ (*mu*) = mean value of the Gaussian
#
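# The final equation can be checked round-trip with `scipy.special` (a quick
# sanity sketch, using the $\mu$ and $\sigma$ values from the plot above):

```python
import numpy as np
from scipy.special import erfc, erfcinv

mu, sigma = -0.48, 0.04
BER = 1e-12

# Sample position at which the Gaussian tail model predicts this BER
x = erfcinv(4 * BER) * sigma * np.sqrt(2) + mu

# Plug x back into BER(x) = 1/4 * erfc((x - mu) / (sigma * sqrt(2)))
ber_check = 0.25 * erfc((x - mu) / (sigma * np.sqrt(2)))
print(x, ber_check)
```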
# + [markdown] id="vVHSrjtZZULQ"
# # FPGA build-in BERT
# + [markdown] id="7auK2v53ZbVf"
# FPGA-maker XILINX offers **Integrated Bit Error Ratio Tester IBERT** for their [7-Series GTP transceivers](https://www.xilinx.com/support/documentation/white_papers/wp428-7Series-Serial-Link-Signal-Analysis.pdf) and [UltraScale/UltraScale+ GTX and GTY transceivers](https://www.xilinx.com/products/intellectual-property/ibert_ultrascale_gth.html#overview).
#
#
# 
#
# + [markdown] id="_vIaceeWGeaU"
# ## Evaluating test data
# The following example data has been acquired on an [Avnet AES-KU040-DB-G evaluation board](https://github.com/munich-ml/BER_tail_fit/blob/main/literature/FPGA_EvalBoard.pdf), populated with an Xilinx Kintex® UltraScale™ FPGA.
#
# The data is provided in the `datasets` directory in JSON format and library function are also provided in the `lib` directory.
# + colab={"base_uri": "https://localhost:8080/"} id="FQc_7pSAGeaV" outputId="86a7039f-aa9e-49a1-a61f-2e45f71dbfd1"
fns = os.listdir(filesDir)
fns
# + [markdown] id="uxCV8NwHGeaV"
# The **Jitter fits** shown below appear reasonable. The two channels `C9` and `C10` are quite different in terms of Jitter performance which is expected, because their channels are different:
# - `C9`: FPGA GTX transceiver connected via 10G SFP+ transceivers and a fiber-loopback
# - `C10`: FPGA GTX transceiver connected via SMA to 2x 0.5m cheap RG58 copper-loopback cables
# + id="iRWL9q0gGeaV" outputId="e872e591-55a1-4707-fcc5-f367cc4ac1da" colab={"base_uri": "https://localhost:8080/", "height": 519}
plot_jitter_fit(os.listdir(filesDir)[0], filesDir, exclude_chs=[8, 11])
# + [markdown] id="NC_Zbwy5GeaW"
# The RJ and TJ peak-2-peak values are estimated for $BERs=10^{-12}$.
#
# The images above show that the **Gaussian model** fits well for low $BER$ below ~$10^{-4}$ ($BERt$ = threshold).
#
# The next image is an overlay of the examples above:
# + id="09lyD-dzGeaW" outputId="00159dea-5f10-4305-f46f-5fef54e66f99" colab={"base_uri": "https://localhost:8080/", "height": 241}
plot_jitter_overlay(os.listdir(filesDir)[0], filesDir, exclude_chs=[8, 11], figsize=(12,3))
# + [markdown] id="Cwuz4a_6GeaW"
# ## Jitter extrapolation to $BER=10^{-12}$
#
#
# + [markdown] id="TPV7aszQGeaX"
# Modeling the RJ behavior is helpful to estimate the *long-term* Jitter performance with a *short* test. As an example, the same set of channels (C8..C11) have been tested twice with different `targetBER` (testtime):
#
# - `jitter_long_` with `targetBER=1E-12`
# - `jitter_short_` with `targetBER=1E-8`, thus a difference of a factor of 10,000 in test time!
# + id="jxivPhhdGeaX" outputId="7b66cb61-3bb5-4d60-9e7d-8456690d47c7" colab={"base_uri": "https://localhost:8080/", "height": 350}
plot_jitter_overlay(os.listdir(filesDir), filesDir, exclude_chs=[])
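# The test-time cost of a lower `targetBER` follows directly from the statistics of error counting. Below is a minimal sketch of the zero-error confidence test; the function name and the 95% confidence default are illustrative assumptions, not values from the measurements above.

```python
from math import log

def zero_error_test_length(target_ber, data_rate, confidence=0.95):
    """Bits (and seconds) needed so that observing zero errors rules out
    BER > target_ber at the given confidence level:
    n = -ln(1 - confidence) / target_ber."""
    n_bits = -log(1.0 - confidence) / target_ber
    return n_bits, n_bits / data_rate

bits, seconds = zero_error_test_length(1e-12, 10e9)  # 10 Gb/s link
print(f"{bits:.3g} bits, {seconds:.0f} s")  # ~3e12 bits, ~300 s
```

# Since the required bit count scales with `1/target_ber`, moving from `1E-8` to `1E-12` multiplies the test time by exactly 10,000, which is the motivation for extrapolating instead of measuring.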
# + [markdown] id="C0RThOhJGeaX"
# Evaluation:
#
# - Within each trace, the fitted samples (`X`) lie well on the trace. Thus, the method of fitting a Gaussian seems valid.
# - Where short-term and long-term measurements differ in $\sigma$/`RJrms`, the short-term measurement is worse. Thus, extrapolating from short-term data is conservative.
# - Some short-term / long-term measurements differ in $\mu$/`DJ`
# > - todo: Verify reproducibility (incl. tester warm-up)
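# The extrapolation itself can be sketched with a dual-Dirac-style combination of DJ and RJ. This is the generic textbook formula, not necessarily the exact routine used by the `lib` functions.

```python
from statistics import NormalDist

def tj_at_ber(dj_pp, rj_rms, ber=1e-12):
    """Dual-Dirac-style jitter extrapolation:
    TJ(BER) = DJ(pp) + 2 * Q(BER) * RJ(rms),
    where Q(BER) is the standard-normal quantile with tail probability BER."""
    q = NormalDist().inv_cdf(1.0 - ber)  # Q(1e-12) is about 7.03
    return dj_pp + 2.0 * q * rj_rms

print(tj_at_ber(0.1, 0.025))  # e.g. DJ=0.1 UI, RJrms=0.025 UI
```

# With these example numbers the total jitter lands near 0.45 UI, i.e. the Gaussian tails dominate the budget at $BER=10^{-12}$ even for a modest `RJrms`.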
# + [markdown] id="_JhsJpKFViRq"
# # Conclusion
# + id="sRjJdgeTGeaY"
# + [markdown] id="Cvhx9J4jk_lb"
# # Appendix: Jitter and BER Simulation
# + [markdown] id="JkshvZAhgeZ_"
# Setting up the simulation
# + id="0eyKWQLyfZD0"
N = int(4e5) # number of simulated bits
DR = 1e9 # data rate [bits/s]
UI = 1/DR # unit interval [s]
RJ_SIGMA = 0.025 # simulated random jitter's sigma
PJ_FREQ = 3e5 # frequency of the periodic jitter
PJ_AMPL = 0.1 # periodic jitter amplitude [UI]
# + id="j9KhOSuihkGo"
t = np.linspace(start=0, stop=(N-1)*UI, num=N) # time vector
dj = PJ_AMPL * np.sin(2 * np.pi * PJ_FREQ * t) # deterministic jitter, consists of PJ only
rj = RJ_SIGMA * np.random.randn(N) # random jitter
tj = rj + dj # total jitter
# + id="uwPkCscsffhY" outputId="17d4f5ed-9d59-4579-daec-4466cf881936" colab={"base_uri": "https://localhost:8080/", "height": 297}
plt.figure(figsize=(12, 4))
plt.plot(tj, ".", label="TJ");
plt.plot(rj, ".", label="RJ");
plt.plot(dj, ".", label="DJ");
plt.xlabel("time [UI]"), plt.ylabel("jitter [UI]")
plt.xlim([0, 7000])
plt.legend(loc="best"), plt.grid(), plt.tight_layout();
# + id="xjsPMW6Nf266" outputId="3fa3dd05-8a3b-41fb-e057-1d066606e058" colab={"base_uri": "https://localhost:8080/", "height": 297}
bins = np.linspace(-0.5, 0.5, 300)
plt.figure(figsize=(12, 4))
plt.hist(tj, bins=bins, histtype="stepfilled", label="TJ")
plt.hist(rj, bins=bins, histtype="step", linewidth=4, label="RJ")
plt.hist(dj, bins=bins, histtype="step", linewidth=4, label="DJ")
plt.yscale("log")
plt.ylabel("counts per bin"), plt.xlabel("jitter [UI]")
plt.legend(loc="best"), plt.grid(), plt.tight_layout();
# + [markdown] id="hGj6x9y_scvT"
# Random bit sequence as data
# + id="v-QIw8zjmDOY"
data = np.random.randint(0, 2, N)
# + id="y0gGCkFAr-lP" outputId="8d9dd9c5-30b0-476a-b25e-505d42f9c482" colab={"base_uri": "https://localhost:8080/", "height": 143}
plt.figure(figsize=(14, 1.2))
n = 100 # number of bits shown
sns.lineplot(x=t[:n]*1e9, y=data[:n], drawstyle='steps-post')
plt.title(f"first {n} bits of the data")
plt.xlabel("t [ns]"), plt.ylabel('level ["arbitrary"]');
# + [markdown] id="muSHAZqWL2fo"
# **Data sampling and error checking**
#
#
# Create a receiver sampler with `65` steps within the unit interval
# + id="L59ONKDisrDk"
RX_PI_STEPS = 65 # step count of the receiver phase interpolator
rx_pi = np.linspace(0, 1, num=RX_PI_STEPS)
# + [markdown] id="iTbDX5HBMLeQ"
# Checking for errors
# + id="s7fgLZFbAwCn"
errors = []
for rx_pi_step in rx_pi:
errors.append(0) # start with 0 errors at each new RX PI step
for i, tj_sample in enumerate(tj):
if 0 < i < N-1: # allows sampling data[i-1], data[i+1]
if tj_sample > rx_pi_step: # checking left side eye
errors[-1] += int(np.logical_xor(data[i-1], data[i]))
            if 1 + tj_sample < rx_pi_step: # checking right side of eye
errors[-1] += int(np.logical_xor(data[i+1], data[i]))
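# The nested loop above is easy to read but slow for large `N`. An equivalent NumPy-broadcast version (assuming the same eye-crossing semantics as the loop) could look like this:

```python
import numpy as np

def ber_scan_vectorized(tj, data, rx_pi):
    """NumPy-broadcast equivalent of the nested error-checking loop:
    for each sampler position, count bits whose jittered edge crosses the
    sampling instant on a data transition (left or right side of the eye)."""
    data = np.asarray(data)
    prev_trans = np.logical_xor(data[:-2], data[1:-1])  # data[i-1] ^ data[i]
    next_trans = np.logical_xor(data[2:], data[1:-1])   # data[i+1] ^ data[i]
    tj_mid = np.asarray(tj)[1:-1]
    # broadcast to shape (n_steps, n_bits - 2)
    left = (tj_mid[None, :] > rx_pi[:, None]) & prev_trans[None, :]
    right = (1.0 + tj_mid[None, :] < rx_pi[:, None]) & next_trans[None, :]
    return left.sum(axis=1) + right.sum(axis=1)

# small self-contained demo with the same kind of inputs as the simulation
rng = np.random.default_rng(0)
tj_s = 0.05 * rng.standard_normal(1000)
data_s = rng.integers(0, 2, 1000)
pi_steps = np.linspace(0, 1, 17)
errs = ber_scan_vectorized(tj_s, data_s, pi_steps)
```

# For `N = 4e5` bits and 65 PI steps the broadcast arrays stay in the tens of megabytes, so this remains practical while being orders of magnitude faster than the Python loop.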
# + [markdown] id="qRSRMWs3ikVp"
# Compute and plot BER
# + id="TeRpcp30K9mG"
ber = np.array(errors) / N
# + id="RdkRv0NlLZIx" outputId="8fef79f6-b519-4928-eced-a16e4ee68dc7" colab={"base_uri": "https://localhost:8080/", "height": 285}
plt.figure(figsize=(12, 4))
plt.semilogy(rx_pi, ber, "rX", label="measured BER")
plt.semilogy([0, 1], [1/N, 1/N], "-b", label="targetBER")
plt.semilogy([0, 1], [1e-12, 1e-12], "-g", label="BERs")
plt.xlabel("RX PI position [UI]"), plt.ylabel("BER")
plt.xlim([0, 1]), plt.ylim([1e-12, 1])
plt.legend(loc="upper center"), plt.grid();
# + id="6OMvspDWjYF8"
| Jitter_BER_fit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
# -
import pandas
import datetime as dt
# function that does the counting logic
def incrementer(current_value, previous_value, last_id):
if current_value == 1:
if previous_value > 0:
return last_id # continue current event
else:
return last_id + 1 # start a new event
else:
return 0 # non-event
vincrementer = np.vectorize(incrementer)
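# For a single grid point, the same event-ID labeling can also be done without `np.vectorize`, using a cumulative sum over run starts. A small illustrative sketch (not a replacement for the full 3-D loop below):

```python
import numpy as np

def label_events_1d(mask):
    """Assign consecutive event IDs to runs of 1s in a binary time series,
    0 elsewhere -- the same outcome the incrementer loop produces for one
    spatial point. Example: [0,1,1,0,1] -> [0,1,1,0,2]."""
    m = np.asarray(mask, dtype=bool)
    starts = m & ~np.concatenate(([False], m[:-1]))  # True at each run start
    return np.cumsum(starts) * m  # cumulative event number, masked to events

print(label_events_1d([0, 1, 1, 0, 1, 1, 1, 0]))  # [0 1 1 0 2 2 2 0]
```

# The cumulative-sum trick removes the time loop for a single series; the per-location bookkeeping in the full script exists because each grid point has its own independent event counter.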
# the testing data set:
# "by_coords" for newest version of xarray
# the quantiles:
ds_q = xr.open_dataset("/project/amp/brianpm/TemperatureExtremes/Derived/CPC_tmax_dayofyear_quantiles_15daywindow_c20190622.nc")
ds = xr.open_mfdataset("/project/amp/akwilson/2006-2019/tmax.*.nc")
tmax = ds["tmax"]
ninety = ds_q["tmax"].sel(quantile=0.9)
# make 'dayofyear' be the coordinate variable for ninety
ninety = ninety.rename({"time": "dayofyear"})
ninety["dayofyear"] = np.arange(1, 367)
extreme_mask = np.where(tmax.groupby("time.dayofyear") >= ninety, 1, 0)
# get it into form of DataArray with coordinates
xmask = xr.DataArray(extreme_mask, coords=tmax.coords, dims=tmax.dims)
xcount = np.zeros((360, 720))
event_id = np.zeros(tmax.shape)
event_id[0, ...] = extreme_mask[0, ...]
for t in np.arange(1, len(tmax["time"])):
event_id[t, ...] = vincrementer(xmask[t, ...], xmask[t - 1, ...], xcount)
# increment xcount:
# rule: if the current event_id is larger than the last time, it means we started a new event,
# so increment xcount, otherwise keep the current value.
xcount = np.where(event_id[t, ...] > event_id[t - 1, ...], xcount + 1, xcount)
event_id_da = xr.DataArray(event_id, coords=tmax.coords, dims=tmax.dims)
event_id_da.name = "Event_ID"
event_id_da.attrs["long_name"] = "Event ID Number based on Tmax > 90th percentile"
event_id_da.max(dim='time').plot.contourf()
event_id_da
import numpy as np
import xarray as xr
import logging
logging.basicConfig(level=logging.INFO)
# +
testing = False
test_sample_size = 5*366
#
# local function to define events/index/duration
#
def theloop(arr):
# setup
    out_event_size = int(arr.max()) # largest number of events -> defines output array size
nz = arr.shape[1] # number of spatial points
a = np.zeros((nz, out_event_size+1)) # +1 because we didn't include the zeros
b = np.zeros((nz, out_event_size+1))
c = np.zeros((nz, out_event_size+1))
for loc in np.arange(nz):
if loc % 1000 == 0:
logging.info(f"We are up to location index {loc}")
loc_ids, init_ndx, duration = np.unique(
arr[:,loc], return_index=True, return_counts=True)
n_loc = len(loc_ids)
a[loc, 0:n_loc] = loc_ids
b[loc, 0:n_loc] = init_ndx
c[loc, 0:n_loc] = duration
# a: the individual id numbers for events at each point
# b: the index of the first occurrence of each event (initial time)
# c: the number of values with the id value, i.e., the duration in days
return a,b,c
# +
fil = (
"/project/amp/brianpm/TemperatureExtremes/Derived/CPC_tmax_90pct_event_detection.nc"
)
ds = xr.open_dataset(fil)
logging.info("ds is defined.")
if testing:
events = ds["Event_ID"].isel(time=slice(0,test_sample_size))
else:
events = ds["Event_ID"]
logging.info("events array defined.")
# -
eventnum = event_id_da.max(dim="time")
print(eventnum)
data = event_id_da.sel(lat=43.75, lon=275.25, method='nearest').values
mynums = set(data)
duration = []
for i in mynums:
tmp = data[data == i]
duration.append(len(tmp))
eventnum.plot()
projection=ccrs.PlateCarree()
np.sum(np.array(duration) == 3)
| copy_akwilson/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/mikvikpik/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments/blob/master/module1-statistics-probability-and-inference/LS_DS_131_Statistics_Probability_and_Inference.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="eJGtmni-DezY"
# <img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200>
# <br></br>
# <br></br>
#
# ## *Data Science Unit 1 Sprint 3 Lesson 1*
#
# # Statistics, Probability and Inference
#
# Ever thought about how long it takes to make a pancake? Have you ever compared the cooking time of a pancake on each eye of your stove? Is the cooking time different between the different eyes? We can run an experiment and collect a sample of 1,000 pancakes on one eye and another 800 pancakes on the other eye. Assume we used the same pan, batter, and technique on both eyes. Our average cooking times were 180 (5 std) and 178.5 (4.25 std) seconds respectively. Now, we can tell those numbers are not identical, but how confident are we that they are practically the same? How do we know the slight difference isn't caused by some external randomness?
#
# Yes, today's lesson will help you figure out how long to cook your pancakes (*theoretically*). Experimentation is up to you; otherwise, you have to accept my data as true. How are we going to accomplish this? With probability, statistics, inference and maple syrup (optional).
#
# <img src="https://images.unsplash.com/photo-1541288097308-7b8e3f58c4c6?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=3300&q=80" width=400>
#
# ## Learning Objectives
# * [Part 1](#p1): Normal Distribution Revisited
# * [Part 2](#p2): Student's T Test
# * [Part 3](#p3): Hypothesis Test & Doing it Live
# + [markdown] id="FMPmVdIK8LxN" colab_type="text"
# ## Normal Distribution Revisited
#
# What is the Normal distribution: a probability distribution of a continuous, real-valued random variable. The Normal distribution's properties make it useful with the *Central Limit Theorem*, because if we assume a variable follows the normal distribution, we can draw certain conclusions based on probabilities.
# + id="0ABVfSA88LxO" colab_type="code" colab={}
import numpy as np
mu = 0 # mean
sigma = 0.1 # standard deviation
sample = np.random.normal(mu, sigma, 1000)
# + id="J1w9V7VG8LxS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1e61e8fb-8473-4546-b506-a6cff913a905"
# Verify the mean of our sample
abs(mu - np.mean(sample)) < 0.01
# + id="tC1NlDjJ8LxY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d9efaa66-8f4d-4ef2-f15a-6b0f61cd7d24"
# Verify the variance of our sample
abs(sigma - np.std(sample, ddof=1)) < 0.01
# + id="fCsavpcM8Lxc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 301} outputId="ef09ab89-8b70-4482-c799-f7317271a479"
import seaborn as sns
from matplotlib import style
style.use('fivethirtyeight')
ax = sns.distplot(sample, color='r')
ax.axvline(np.percentile(sample,97.5),0)
ax.axvline(np.percentile(sample,2.5),0)
# + [markdown] colab_type="text" id="FMhDKOFND0qY"
# ## Student's T Test
#
# >Assuming data come from a Normal distribution, the t test provides a way to test whether the sample mean (that is the mean calculated from the data) is a good estimate of the population mean.
#
# The derivation of the t-distribution was first published in 1908 by <NAME> while working for the Guinness Brewery in Dublin. Due to proprietary issues, he had to publish under a pseudonym, and so he used the name Student.
#
# The t-distribution is essentially a distribution of means of normally distributed data. When we use a t-statistic, we are checking that a mean falls within a certain $\alpha$ probability of the mean of means.
# + colab_type="code" id="fQ9rkLJmEbsk" colab={}
t_df10 = np.random.standard_t(df=10, size=10)
t_df100 = np.random.standard_t(df=100, size=1000)
t_df1000 = np.random.standard_t(df=1000, size=100000)
# + colab_type="code" id="RyNKPt_tJk86" outputId="7e7439ae-ebbd-4a6c-f37d-6109fde6cdca" colab={"base_uri": "https://localhost:8080/", "height": 282}
sns.kdeplot(t_df10, color='r');
sns.kdeplot(t_df100, color='y');
sns.kdeplot(t_df1000, color='b');
# + colab_type="code" id="seQv5unnJvpM" outputId="157abed6-4d95-4216-8bb1-3df398537afd" colab={"base_uri": "https://localhost:8080/", "height": 272}
i = 10
for sample in [t_df10, t_df100, t_df1000]:
print(f"t - distribution with {i} degrees of freedom")
print("---" * 10)
print(f"Mean: {sample.mean()}")
print(f"Standard Deviation: {sample.std()}")
print(f"Variance: {sample.var()}")
i = i*10
# + [markdown] colab_type="text" id="FOvEGMysLaE2"
# Why is it different from normal? To better reflect the tendencies of small data and situations with unknown population standard deviation. In other words, the normal distribution is still the nice pure ideal (thanks to the central limit theorem), but the t-distribution is much more useful in many real-world situations.
# + id="H3fvXIxJUc7_" colab_type="code" colab={}
from scipy.stats import ttest_ind, ttest_ind_from_stats, ttest_rel
# + id="y7X5h2-dUmJT" colab_type="code" colab={}
# Pancake Experiment
mu1 = 180 # mean
sigma1 = 5 # standard deviation
sample1 = np.random.normal(mu1, sigma1, 1000)
mu2 = 178.5 # mean
sigma2 = 4.25 # standard deviation
sample2 = np.random.normal(mu2, sigma2, 800)
# + id="NUxhzJcZVHH5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="08c6030f-6053-4b87-f761-15473ee5ff12"
ax = sns.distplot(sample1, color='r')
ax = sns.distplot(sample2, color='b')
# + id="vjb5nU_sV6ga" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="67298e82-2d01-4fac-b189-e9cec695ae27"
# The t-statistic measures how far apart the sample means are, in units of standard error
ttest_ind(sample1, sample2)
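# Under the hood, the statistic has a simple closed form. A hand-rolled Welch version (which matches `ttest_ind(..., equal_var=False)`) helps demystify the output; the function name here is illustrative:

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic from first principles:
    t = (mean_a - mean_b) / sqrt(s_a^2/n_a + s_b^2/n_b),
    using the unbiased (ddof=1) sample variances."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    return (a.mean() - b.mean()) / se

rng = np.random.default_rng(42)
s1 = rng.normal(180.0, 5.0, 1000)   # pancake times, eye 1
s2 = rng.normal(178.5, 4.25, 800)   # pancake times, eye 2
print(welch_t(s1, s2))  # a large positive t: the means differ clearly
```

# With a true mean difference of 1.5 s and a standard error near 0.22 s, the t statistic lands around 7, which is why the p-value from the pancake experiment is essentially zero.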
# + [markdown] colab_type="text" id="1yx_QilAEC6o"
# ## Live Lecture - let's perform and interpret a t-test
#
# We'll generate our own data, so we can know and alter the "ground truth" that the t-test should find. We will learn about p-values and how to interpret "statistical significance" based on the output of a hypothesis test. We will also dig a bit deeper into how the test statistic is calculated based on the sample error, and visually what it looks like to have 1 or 2 "tailed" t-tests.
# + colab_type="code" id="BuysRPs-Ed0v" colab={}
# TODO - during class, but please help!
from scipy.stats import ttest_ind, ttest_ind_from_stats, ttest_rel
# + [markdown] colab_type="text" id="wiq83guLcuAE"
# # Resources
#
# - https://homepage.divms.uiowa.edu/~mbognar/applets/t.html
# - https://rpsychologist.com/d3/tdist/
# - https://gallery.shinyapps.io/tdist/
# - https://en.wikipedia.org/wiki/Standard_deviation#Sample_standard_deviation_of_metabolic_rate_of_northern_fulmars
# - https://www.khanacademy.org/math/ap-statistics/two-sample-inference/two-sample-t-test-means/v/two-sample-t-test-for-difference-of-means
| module1-statistics-probability-and-inference/LS_DS_131_Statistics_Probability_and_Inference.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sensors Mezzanine Card Examples
#
# In this notebook we demonstrate several useful features of the [Sensors Mezzanine adapter](https://www.96boards.org/product/sensors-mezzanine/). In all of these examples, Grove peripherals are read and written through Digital GPIO pins connected to the PS or PL. In addition, all of the examples use Intel UPM drivers.
#
# In addition to the Sensors Mezzanine card, you will need the following modules shown from top to bottom in the photo below: Grove LCD RGB Backlight Module (Connected to I2C0), Grove Button Module (GPIO E/F), Grove Relay Module (GPIO G/H), Grove Buzzer Module (GPIO I/J), and Grove LED Socket Module (GPIO K/L).
#
# 
#
#
# These modules should be available in the [Grove Starter Kit for 96 boards](https://www.seeedstudio.com/Grove-Starter-Kit-for-96Boards-p-2618.html). More information about the drivers available in UPM can be found on the [Intel website](https://iotdk.intel.com/docs/master/upm/python/).
#
#
# To start, we will load the sensors96b overlay. This overlay connects all PL-connected GPIOs to the PS.
# +
import time
from pynq.overlays.sensors96b import Sensors96bOverlay
overlay = Sensors96bOverlay('sensors96b.bit')
# -
# ## Grove LCD RGB Backlight Module
#
# Now that the overlay has been loaded, we can start talking to grove peripherals. We will first communicate with the Grove LCD RGB Backlight.
#
# The Grove LCD RGB Backlight is based on a JHD1313M1 I2C Controller. The driver is available through the `pyupm_jhd1313m1` module. We can initialize the driver by calling `lcd.Jhd1313m1()` with the I2C bus (0), the LCD cursor address (0x3E), and the RGB controller address (0x62).
# +
from upm import pyupm_jhd1313m1 as lcd
myLcd = lcd.Jhd1313m1(0, 0x3E, 0x62)
# -
# The lines above should turn the backlight to white.
#
# Next, we write text to the display and change the backlight color to blue
_ = myLcd.setCursor(0,0)
_ = myLcd.write('Hello World')
_ = myLcd.setCursor(1,2)
_ = myLcd.write('Hello World')
_ = myLcd.setColor(53, 39, 249)
# This concludes the Grove LCD RGB Backlight example.
#
#
# In the next four examples we will be using GPIO-driven Grove Modules. The following command shows the mapping between alphabetical GPIO names and numerical GPIO indices.
#
# GPIO-A through GPIO-D are not available through the 40-pin header.
#
# !mraa-gpio list
# ## Grove Button Module
#
# We now demonstrate the use of the Grove Button Module. The button driver can also be used to read the touch sensor.
#
# In this example the Grove Button Module is connected to GPIO-E, with index 27. We read the button value 10 times, once every second, and print the value.
# +
from upm import pyupm_grove as grove
button = grove.GroveButton(27)
for i in range(10):
if button.value() == 1:
print("Button is pressed!")
else:
print("Button is not pressed!")
time.sleep(1)
# -
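# Polling once per second, as above, can miss short presses. A sketch of a rising-edge wait loop follows; `read_fn` stands in for `button.value` (any callable returning 0/1 works), so the logic can be tried without hardware. The function name and defaults are illustrative assumptions.

```python
import time

def wait_for_press(read_fn, timeout_s=10.0, poll_s=0.01):
    """Poll a 0/1 read function until a rising edge (0 -> 1) is seen,
    returning True on a press and False on timeout."""
    deadline = time.monotonic() + timeout_s
    prev = read_fn()
    while time.monotonic() < deadline:
        cur = read_fn()
        if prev == 0 and cur == 1:
            return True  # rising edge: the button was just pressed
        prev = cur
        time.sleep(poll_s)
    return False

# simulated button: pressed on the fifth poll
samples = iter([0, 0, 0, 0, 1, 1])
print(wait_for_press(lambda: next(samples), timeout_s=1.0, poll_s=0.0))  # True
```

# On the board you would pass `button.value` as `read_fn`; a 10 ms poll interval is fast enough to catch a deliberate press while keeping CPU use negligible.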
# ## Grove Relay Module
#
# We will now demonstrate the use of the Grove Relay Module.
#
# The Grove Relay Module is connected to GPIO-G, with index 29.
#
# In this example we will open and close the relay switch three times, waiting one second between each command. While the Grove Relay Module is not connected to anything, you should hear a faint clicking noise with each state change and see a message printed below.
# +
from upm import pyupm_grove as grove
relay = grove.GroveRelay(29)
for i in range(3):
relay.on()
if relay.isOn():
print(relay.name(), 'is on')
time.sleep(1)
relay.off()
if relay.isOff():
print(relay.name(), 'is off')
time.sleep(1)
# -
# ## Grove Buzzer Module
#
# We will now demonstrate the Grove Buzzer Module.
#
# The Grove Buzzer Module is attached to GPIO-I, with index 31.
#
# In this example we will play 7 beeps.
# +
from upm import pyupm_grovespeaker as upmGrovespeaker
mySpeaker = upmGrovespeaker.GroveSpeaker(31)
mySpeaker.playAll()
# -
# ## Grove LED Module
#
# We will now demonstrate the Grove LED Module.
#
# The Grove LED Module is attached to GPIO-K, with index 33.
#
# In this example we will turn the LED on and off 10 times, with one second for each state.
#
# +
from upm import pyupm_grove as grove
led = grove.GroveLed(33)
for i in range(10):
led.on()
time.sleep(1)
led.off()
time.sleep(1)
| Ultra96/sensors96b/notebooks/sensors_mezzanine_examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (base)
# language: python
# name: base
# ---
import numpy as np
import cv2
from PIL import Image
import matplotlib.pyplot as plt
import math
from sklearn.feature_extraction import image
from sklearn.cluster import spectral_clustering
from moviepy.editor import VideoFileClip
def save(image, name):
    name = str(name) + ".png"
    cv2.imwrite(name, image)
def grayen(image):
grayenn = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
return grayenn
def darken(image):
new_image=np.zeros_like(image)
alpha=0.19
beta=-35
new_image=np.clip(np.multiply(alpha,image)+beta,0,255)
return new_image
def brighten(image):
new_image=np.zeros_like(image)
gamma=0.5
new_image=np.clip(np.multiply(np.power(np.multiply(1/255,image),gamma),255),0,255)
return new_image
def colour_threshing(image):
image = np.where(image < 210, 0, 255)
return image
def perspective(image):
h,w=image.shape
pts1 = np.float32([[h/3,0],[h/3,w-1],[h-1,0],[h-1,w-1]])
pts2 = np.float32([[0,0],[300,0],[0,300],[300,300]])
M = cv2.getPerspectiveTransform(pts1,pts2)
    dst = cv2.warpPerspective(image,M,(300,300))
return dst
def roi_mask(image):
mask=np.zeros_like(image)
height,width = image.shape
a3 = np.array( [[[0,200],[1500,200],[width-1,800],[width-1,height-1],[0,height-1]]], dtype=np.int32 )
cv2.fillPoly(mask,a3,255)
return mask
def white_pixels(image,x,y):
    h=window_h
    w=window_w
    sum1=np.sum(image[y-h:y,x:x+w],axis=0).reshape((1,w))
    sum2=np.sum(sum1,axis=1)
    return sum2/255
def centroid(rect1x):
he=rect1x.shape[0]
wi=rect1x.shape[1]
sum1x=0
sum1y=0
for i in range(he):
for j in range(wi):
sum1x=sum1x+j*rect1x[i,j]*(1/255)
sum1y=sum1y+i*rect1x[i,j]*(1/255)
sum1x=int(sum1x/window_w)
sum1y=int(sum1y/window_h)
return [sum1x,sum1y]
def yellow_black(image):
hsv=cv2.cvtColor(image,cv2.COLOR_BGR2HSV)
yellow_lo=np.array([25,175,175])
yellow_hi=np.array([35,255,255])
mask=cv2.inRange(hsv,yellow_lo,yellow_hi)
img=np.copy(image)
img[mask>0]=(0,0,0)
return img
def green_white(image):
hsv=cv2.cvtColor(image,cv2.COLOR_BGR2HSV)
green_lo=np.array([20,40,40])
green_hi=np.array([80,255,255])
mask=cv2.inRange(hsv,green_lo,green_hi)
img=np.copy(image)
img[mask>0]=(255,255,255)
return img
def binary(image):
img=np.copy(image)
img=np.where(img!=(255,255,255),(0,0,0),(255,255,255))
return img
def vid_pipeline(img):
x=0
y=0
width= img.shape[1]
height=img.shape[0]
crop_image = img[y:y+(int)(height*(5/6)), x:x+width]
width= crop_image.shape[1]
height=crop_image.shape[0]
yellow=yellow_black(crop_image)
green=green_white(yellow)
masked=binary(green)
masked = cv2.medianBlur(masked.astype(np.uint8),9)
gray=grayen(crop_image)
brighten_image=brighten(gray)
bw_image=colour_threshing(brighten_image)
median_image = cv2.medianBlur(np.uint8(bw_image.astype(np.float32)),15)
mask=roi_mask(median_image)
combo=cv2.bitwise_and(mask,median_image)
edges = cv2.Canny(np.uint8(combo),175,200)
lines = cv2.HoughLines(edges,1,5*np.pi/180,50)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
ret,label,center2=cv2.kmeans(lines.astype(np.float32),2,None,criteria,10,cv2.KMEANS_PP_CENTERS)
line_img=np.zeros_like(edges)
for i in range(len(center2)):
rho,theta = center2[i]
a = np.cos(theta)
b = np.sin(theta)
x0 = a*rho
y0 = b*rho
x1 = int(x0 + 2000*(-b))
y1 = int(y0 + 2000*(a))
x2 = int(x0 - 2000*(-b))
y2 = int(y0 - 2000*(a))
cv2.line(line_img,(x1,y1),(x2,y2),255,30)
line_img=cv2.bitwise_and(mask,line_img)
line_img=cv2.cvtColor(line_img,cv2.COLOR_GRAY2RGB)
crop_image=np.where(masked==(0,0,0),crop_image*0,crop_image*1)
crop_image=np.where(line_img==(255,255,255),(255,153,51),crop_image*1)
return crop_image
myclip = VideoFileClip('sample_output.mp4')
output_vid = 'output3.mp4'
clip = myclip.fl_image(vid_pipeline)
clip.write_videofile(output_vid, audio=True)
save(vid_pipeline(cv2.imread("frame6.png")),"o6")
# +
cap= cv2.VideoCapture('sample_output.mp4')
i=0
while(cap.isOpened()):
ret, frame = cap.read()
if ret == False:
break
cv2.imwrite('f'+str(i)+'.jpg',vid_pipeline(frame))
i+=1
cap.release()
cv2.destroyAllWindows()
# -
| Task 4/Code/Output3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.3 64-bit (''.venv'': poetry)'
# name: python3
# ---
# # Estimation of population parameters
#
# The objective of this tutorial is to illustrate the use of the *samplics* estimation APIs. There are two main classes: *TaylorEstimator* and *ReplicateEstimator*. The former class uses linearization methods to estimate the variance of population parameters, while the latter uses replicate-based methods (bootstrap, brr/fay, and jackknife) to estimate the variance.
# +
from IPython.core.display import Image, display
from samplics.datasets import load_nhanes2, load_nhanes2brr, load_nhanes2jk, load_nmihs
from samplics.estimation import TaylorEstimator, ReplicateEstimator
# -
# ## Taylor approximation <a name="section1"></a>
# +
# Load Nhanes sample data
nhanes2_dict = load_nhanes2()
nhanes2 = nhanes2_dict["data"]
nhanes2.head(15)
# -
# We calculate the survey mean of the level of zinc using Stata and get the following:
# Using *samplics*, the same estimate can be obtained using the snippet of code below.
# +
zinc_mean_str = TaylorEstimator("mean")
zinc_mean_str.estimate(
y=nhanes2["zinc"],
samp_weight=nhanes2["finalwgt"],
stratum=nhanes2["stratid"],
psu=nhanes2["psuid"],
remove_nan=True,
)
print(zinc_mean_str)
# -
# The results of the estimation are stored in the `zinc_mean_str` object. Users can convert the main estimation results into a pd.DataFrame by using the method `to_dataframe()`.
zinc_mean_str.to_dataframe()
# The method `to_dataframe()` is more useful for domain estimation, producing a table where each row is a level of the domain of interest, as shown below.
# +
zinc_mean_by_race = TaylorEstimator("mean")
zinc_mean_by_race.estimate(
y=nhanes2["zinc"],
samp_weight=nhanes2["finalwgt"],
stratum=nhanes2["stratid"],
domain=nhanes2["race"],
psu=nhanes2["psuid"],
remove_nan=True,
)
zinc_mean_by_race.to_dataframe()
# -
#
# Let's remove the stratum parameter; then we get the following with Stata:
# with samplics, we get ...
# +
zinc_mean_nostr = TaylorEstimator("mean")
zinc_mean_nostr.estimate(
y=nhanes2["zinc"], samp_weight=nhanes2["finalwgt"], psu=nhanes2["psuid"], remove_nan=True
)
print(zinc_mean_nostr)
# -
#
# The other parameters currently implemented in *TaylorEstimator* are TOTAL, PROPORTION and RATIO. TOTAL and PROPORTION have the same function call as the MEAN parameter. For the RATIO parameter, it is necessary to provide the parameter *x*.
# +
ratio_bp_lead = TaylorEstimator("ratio")
ratio_bp_lead.estimate(
y=nhanes2["highbp"],
samp_weight=nhanes2["finalwgt"],
x=nhanes2["highlead"],
stratum=nhanes2["stratid"],
psu=nhanes2["psuid"],
remove_nan=True,
)
print(ratio_bp_lead)
# -
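# The ratio point estimate itself is simply a weighted quotient; the Taylor linearization is only needed for its variance. A tiny NumPy sketch of the point estimate (the function name is illustrative, not samplics API):

```python
import numpy as np

def weighted_ratio(y, x, w):
    """Survey ratio estimator R = sum(w*y) / sum(w*x) -- the point
    estimate that TaylorEstimator("ratio") computes before linearizing
    the variance."""
    y, x, w = (np.asarray(a, float) for a in (y, x, w))
    return np.sum(w * y) / np.sum(w * x)

print(weighted_ratio([1, 0, 1], [1, 1, 1], [2, 1, 1]))  # 3/4 = 0.75
```

# With `y = highbp` and `x = highlead`, this quotient is the proportion-style ratio printed above; the design information (strata, PSUs) only enters through the variance estimate.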
# ## Replicate-based variance estimation <a name="section2"></a>
# #### Bootstrap <a name="section21"></a>
# +
# Load NMIHS sample data
nmihs_dict = load_nmihs()
nmihs = nmihs_dict["data"]
nmihs.head(15)
# -
#
# Let's estimate the average birth weight using the bootstrap weights.
# +
# rep_wgt_boot = nmihsboot.loc[:, "bsrw1":"bsrw50"]
birthwgt = ReplicateEstimator("bootstrap", "mean").estimate(
y=nmihs["birth_weight"],
samp_weight=nmihs["finalwgt"],
rep_weights=nmihs.loc[:, "bsrw1":"bsrw50"],
remove_nan=True,
)
print(birthwgt)
# -
# #### Balanced repeated replication (BRR) <a name="section22"></a>
# +
# Load NMIHS sample data
nhanes2brr_dict = load_nhanes2brr()
nhanes2brr = nhanes2brr_dict["data"]
nhanes2brr.head(15)
# -
# Let's estimate the ratio of weight to height using the BRR replicate weights.
# +
brr = ReplicateEstimator("brr", "ratio")
ratio_wgt_hgt = brr.estimate(
y=nhanes2brr["weight"],
samp_weight=nhanes2brr["finalwgt"],
x=nhanes2brr["height"],
rep_weights=nhanes2brr.loc[:, "brr_1":"brr_32"],
remove_nan=True,
)
print(ratio_wgt_hgt)
# -
# #### Jackknife <a name="section23"></a>
# +
# Load NMIHS sample data
nhanes2jk_dict = load_nhanes2jk()
nhanes2jk = nhanes2jk_dict["data"]
nhanes2jk.head(15)
# -
# In this case, stratification was used to calculate the jackknife weights. The stratum variable is not indicated in the dataset or survey design description. However, it says that the number of strata is 31 and the number of replicates is 62. Hence, the jackknife replicate coefficient is $(n_h - 1) / n_h = (2-1) / 2 = 0.5$. Now we can call *estimate()* and specify *rep_coefs = 0.5*.
# +
jackknife = ReplicateEstimator("jackknife", "ratio")
ratio_wgt_hgt2 = jackknife.estimate(
y=nhanes2jk["weight"],
samp_weight=nhanes2jk["finalwgt"],
x=nhanes2jk["height"],
rep_weights=nhanes2jk.loc[:, "jkw_1":"jkw_62"],
rep_coefs=0.5,
remove_nan=True,
)
print(ratio_wgt_hgt2)
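# The replicate combination implied by `rep_coefs` can be sketched in a few lines. This is the generic delete-a-group jackknife formula, not samplics internals, and the values below are toy numbers:

```python
import numpy as np

def jackknife_variance(theta_full, theta_reps, rep_coef=0.5):
    """Delete-a-group jackknife variance:
    Var(theta) = c * sum_r (theta_r - theta)^2,
    with c = (n_h - 1)/n_h per stratum (0.5 for 2 PSUs per stratum)."""
    theta_reps = np.asarray(theta_reps, float)
    return rep_coef * np.sum((theta_reps - theta_full) ** 2)

# toy example: full-sample estimate 10.0 and four replicate estimates
var = jackknife_variance(10.0, [9.8, 10.1, 10.3, 9.9], rep_coef=0.5)
print(var)  # 0.5 * (0.04 + 0.01 + 0.09 + 0.01), about 0.075
```

# The bootstrap and BRR estimators used earlier follow the same pattern with different combination coefficients, which is why all three share the `rep_weights`/`rep_coefs` interface.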
| docs/source/tutorial/estimation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Remote Sensing Hands-On Lesson, using TGO
#
#
# Planetary Data Workshop 4 Conference, Flagstaff, June 18, 2019
#
#
# ## Overview
#
#
# In this lesson you will develop a series of simple programs that
# demonstrate the usage of SpiceyPy to compute a variety of different
# geometric quantities applicable to experiments carried out by a remote
# sensing instrument flown on an interplanetary spacecraft. This
# particular lesson focuses on a spectrometer flying on the ExoMars2016 TGO
# spacecraft, but many of the concepts are easily extended and generalized
# to other scenarios.
#
# You may find it useful to consult the permuted index, the headers of
# various source modules, and several Required Reading documents available at
# the NAIF site.
# ## Initialise SPICE by importing SpiceyPy
#
# For the following exercises, instead of loading the meta-kernel, try to sort out the exact kernels that you need for the given exercise and load them (unless indicated otherwise).
#
import spiceypy
# ## Time Conversion
#
#
# Write a program that given a UTC time string,
# converts it to the following time systems and output formats:
#
# * Ephemeris Time (ET) in seconds past J2000
# * Calendar Ephemeris Time
# * Spacecraft Clock Time
#
# and displays the results. Use the program to convert "2018 JUN 11
# 19:32:00" UTC into these alternate systems.
# +
#
# We need to load the leapseconds kernel and the SCLK kernel.
#
spiceypy.furnsh('kernels/lsk/naif0012.tls')
spiceypy.furnsh('kernels/sclk/em16_tgo_step_20160414.tsc')
et = spiceypy.utc2et('2018-06-11T19:32:00')
print(' Ephemeris Time (ET) in seconds past J2000: {}'.format(et))
calet = spiceypy.timout( et, 'YYYY-MON-DDTHR:MN:SC ::TDB' )
print( ' Calendar Ephemeris Time: {:s}'.format( calet ) )
#
# We will need the SCLK ID of TGO that we can retrieve from the SCLK file itself.
#
sclkid = -143
sclkst = spiceypy.sce2s( sclkid, et )
print( ' Spacecraft Clock Time: {:s}'.format( sclkst ) )
#
# We unload all the kernels in the kernel pool
#
spiceypy.kclear()
# -
# ## Obtaining Target States and Positions
#
# Write a program that given a UTC time string computes the following quantities at that epoch:
#
# * The apparent state of Mars as seen from ExoMars2016 TGO in the J2000 frame, in kilometers and kilometers/second. This vector itself is not of any particular interest, but it is a useful intermediate quantity in some geometry calculations.
#
# * The one-way light time between ExoMars2016 TGO and the apparent position of Earth, in seconds.
#
# * The actual (geometric) distance between the Sun and Mars, in astronomical units.
#
# and displays the results. Use the program to compute these quantities at
# "2018 JUN 11 19:32:00" UTC.
#
# 
# +
#
# We need to load the leapseconds kernel the Solar System and Mars epehmeris and the TGO SPK.
#
spiceypy.furnsh('kernels/lsk/naif0012.tls')
spiceypy.furnsh('kernels/spk/de430.bsp')
spiceypy.furnsh('kernels/spk/mar085.bsp')
spiceypy.furnsh('kernels/spk/em16_tgo_mlt_20171205_20230115_v01.bsp')
et = spiceypy.utc2et('2018-06-11T19:32:00')
#
# Compute the apparent state of Mars as seen from ExoMars2016 TGO in the J2000 frame.
# All of the ephemeris readers return states in units of kilometers and km/s.
#
[state, ltime] = spiceypy.spkezr('MARS', et,'J2000','LT+S','TGO' )
print( ' Apparent state of Mars as seen from ExoMars2016 TGO in the J2000\n'
       ' frame (km, km/s):')
print( ' X = {:16.3f}'.format(state[0]) )
print( ' Y = {:16.3f}'.format(state[1]) )
print( ' Z = {:16.3f}'.format(state[2]) )
print( ' VX = {:16.3f}'.format(state[3]) )
print( ' VY = {:16.3f}'.format(state[4]) )
print( ' VZ = {:16.3f}\n'.format(state[5]) )
#
# Compute the apparent position of Earth as seen from
# ExoMars2016 TGO in the J2000 frame.
#
[pos, ltime] = spiceypy.spkpos( 'EARTH', et, 'J2000', 'LT+S', 'TGO')
print( ' One way light time between ExoMars2016 TGO and the apparent\n'
' position of Earth: {} seconds\n'.format(ltime))
#
# Now we need to compute the actual distance between the Sun and Mars.
# We need to adjust our aberration correction appropriately.
#
[pos, ltime] = spiceypy.spkpos( 'SUN', et, 'J2000','NONE', 'MARS')
#
# Compute the distance between the body centers in kilometers.
#
dist = spiceypy.vnorm( pos )
#
# Convert this value to AU using convrt.
#
dist = spiceypy.convrt( dist, 'KM', 'AU' )
print( ' Actual distance between Sun and Mars body centers: {} AU'.format(dist))
spiceypy.kclear()
# -
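The last two steps above (`vnorm` followed by `convrt`) have a direct numpy analogue. A minimal sketch, assuming the IAU 2012 definition 1 au = 149,597,870.7 km and a made-up position vector:

```python
import numpy as np

KM_PER_AU = 149597870.7  # IAU 2012 definition of the astronomical unit

pos_km = np.array([3.0e7, 4.0e7, 0.0])  # hypothetical Sun->Mars vector, in km
dist_km = np.linalg.norm(pos_km)        # equivalent of spiceypy.vnorm
dist_au = dist_km / KM_PER_AU           # equivalent of convrt(dist, 'KM', 'AU')
print(dist_km, dist_au)
```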
# ## Spacecraft Orientation and Reference Frames
#
#
# Write a program that given a UTC time string
# computes and displays the following at the epoch of interest:
#
# * The angular separation between the apparent position of Mars as seen from ExoMars2016 TGO and the nominal instrument view direction.
#
# The nominal instrument view direction is not provided by any kernel variable, but it is indicated in the ExoMars2016 TGO frame kernel.
#
# Use the program to compute these quantities at the epoch "2018 JUN 11 19:32:00" UTC.
#
# 
# +
#
# Since we need orientation information in addition to the kernels loaded before,
# we now also need to load the frames kernel, the SCLK kernel and the TGO CK kernel.
#
spiceypy.furnsh('kernels/lsk/naif0012.tls')
spiceypy.furnsh('kernels/sclk/em16_tgo_step_20160414.tsc')
spiceypy.furnsh('kernels/spk/de430.bsp')
spiceypy.furnsh('kernels/spk/mar085.bsp')
spiceypy.furnsh('kernels/spk/em16_tgo_mlt_20171205_20230115_v01.bsp')
spiceypy.furnsh('kernels/fk/em16_tgo_v07.tf')
spiceypy.furnsh('kernels/ck/em16_tgo_sc_slt_npo_20171205_20230115_s20160414_v01.bc')
spiceypy.furnsh('kernels/pck/pck00010.tpc')
et = spiceypy.utc2et('2018-06-11T19:32:00')
#
# We compute the apparent position of Mars as seen from
# ExoMars2016 TGO in the J2000 frame.
#
[pos, ltime] = spiceypy.spkpos('MARS',et,'J2000','LT+S','TGO')
#
# Now compute the location of the nominal instrument view
# direction. From reading the frame kernel we know that
# the instrument view direction is nominally the -Y axis
# of the TGO_SPACECRAFT frame defined there.
#
bsight = [ 0.0, -1.0, 0.0]
#
# Now compute the rotation matrix from TGO_SPACECRAFT into
# J2000.
#
pform = spiceypy.pxform('TGO_SPACECRAFT','J2000', et )
#
# And multiply the result to obtain the nominal instrument
# view direction in the J2000 reference frame.
#
bsight = spiceypy.mxv(pform, bsight)
#
# Lastly compute the angular separation.
#
sep = spiceypy.convrt(spiceypy.vsep(bsight, pos),'RADIANS','DEGREES')
print(' Angular separation between the apparent position of Mars and the\n'
' ExoMars2016 TGO nominal instrument view direction (degrees): {:.3f}'.format(sep))
spiceypy.kclear()
# -
# ## Computing Sub-s/c and Sub-solar Points on an Ellipsoid and a DSK
#
#
# Write a program that given a UTC time string computes the following quantities at that epoch:
#
# * The apparent sub-observer point of ExoMars2016 TGO on Mars, in the body fixed frame IAU_MARS, in kilometers.
# * The apparent sub-solar point on Mars, as seen from ExoMars2016 TGO in the body fixed frame IAU_MARS, in kilometers.
#
# The program computes each point twice: once using an ellipsoidal shape model and the
# `near point/ellipsoid` definition, and once using a DSK shape model and the
# `nadir/dsk/unprioritized` definition.
#
# Use the provided meta-kernel to load the non-DSK kernels you need. Load the DSK kernels as needed.
# The program displays the results. Use the program to compute these quantities at "2018 JUN 11 19:32:00" UTC.
#
# 
# +
spiceypy.furnsh('remote_sensing_tgo.tm')
et = spiceypy.utc2et('2018-06-11T19:32:00')
for i in range(2):
    if i == 0:
#
# Use the "near point" sub-point definition
# and an ellipsoidal model.
#
method = 'NEAR POINT/Ellipsoid'
else:
#
# Use the "nadir" sub-point definition
# and a DSK model.
#
method = 'NADIR/DSK/Unprioritized'
spiceypy.furnsh('kernels/dsk/mars_lowres.bds')
print( ' Sub-point/target shape model: {:s}\n'.format(method ) )
#
# Compute the apparent sub-observer point of ExoMars-16 TGO
# on Mars.
#
[spoint, trgepc, srfvec] = spiceypy.subpnt(method,'MARS',et,
'IAU_MARS','LT+S','TGO' )
print( ' Apparent sub-observer point of ExoMars2016 TGO on Mars\n'
' in the IAU_MARS frame (km):' )
print( ' X = {:.3f}'.format(spoint[0]) )
print( ' Y = {:.3f}'.format(spoint[1]) )
print( ' Z = {:.3f}'.format(spoint[2]) )
print( ' ALT = {:.3f}\n'.format(spiceypy.vnorm(srfvec)) )
#
# Compute the apparent sub-solar point on Mars
# as seen from ExoMars-16 TGO.
#
[spoint, trgepc, srfvec] = spiceypy.subslr(method,'MARS',et,'IAU_MARS',
'LT+S','TGO' )
print( ' Apparent sub-solar point on Mars as seen from ExoMars2016 \n'
' TGO in the IAU_MARS frame (km):' )
print( ' X = {:.3f}'.format(spoint[0]) )
print( ' Y = {:.3f}'.format(spoint[1]) )
print( ' Z = {:.3f}\n'.format(spoint[2]) )
spiceypy.unload('remote_sensing_tgo.tm')
spiceypy.unload('kernels/dsk/mars_lowres.bds')
# -
# ## Intersecting Vectors with an Ellipsoid and a DSK (fovint)
#
# Write a program that, given an input UTC time string, computes the intersection of the
# ExoMars2016 TGO NOMAD LNO Nadir aperture boresight and field-of-view (FOV) boundary
# vectors with the surface of Mars, with Mars' shape modeled by DSK data.
# The program presents each point of intersection as
#
# * Planetocentric (latitudinal) coordinates in the IAU_MARS frame.
#
# For each of the camera FOV boundary and boresight vectors, if an
# intersection is found, the program displays the results of the above
# computations, otherwise it indicates no intersection exists.
#
# At each point of intersection compute the following:
#
# * Phase angle
# * Solar incidence angle
# * Emission angle
#
#
# Use this program to compute values at "2018 JUN 11 19:32:00" UTC.
#
# 
# +
#
# We will have to handle errors
#
from spiceypy.utils.support_types import SpiceyError
spiceypy.furnsh('remote_sensing_tgo.tm')
spiceypy.furnsh('kernels/dsk/mars_lowres.bds')
et = spiceypy.utc2et('2018-06-11T19:32:00')
#
# Now we need to obtain the FOV configuration of
# the NOMAD LNO Nadir aperture. To do this we will
# need the ID code for TGO_NOMAD_LNO_NAD.
#
lnonid = spiceypy.bodn2c('TGO_NOMAD_LNO_NAD')
#
# Now retrieve the field of view parameters.
#
[shape,insfrm,bsight,n,bounds ] = spiceypy.getfov(lnonid,4)
#
# `bounds' is a numpy array. We'll convert it to a list.
#
# Rather than treat BSIGHT as a separate vector,
# copy it into the last slot of BOUNDS.
#
bounds = bounds.tolist()
bounds.append( bsight )
#
# Set vector names to be used for output.
#
vecnam = [ 'Boundary Corner 1',
'Boundary Corner 2',
'Boundary Corner 3',
'Boundary Corner 4',
'TGO NOMAD LNO Nadir Boresight' ]
#
# Get the ID code of Mars. (It is retrieved here for completeness; local
# solar time is not computed in this version of the exercise.)
#
marsid = spiceypy.bodn2c( 'MARS' )
#
# Now perform the same set of calculations for each
# vector listed in the BOUNDS array. Use both
# ellipsoidal and detailed (DSK) shape models.
#
for i in range(5):
#
# Call sincpt to determine coordinates of the
# intersection of this vector with the surface
# of Mars.
#
print( ' Vector: {:s}\n'.format( vecnam[i] ) )
try:
[point,trgepc,srfvec] = spiceypy.sincpt('DSK/Unprioritized','MARS',et,
'IAU_MARS', 'LT+S','TGO',
insfrm, bounds[i])
#
# Display the planetocentric latitude and longitude of the intercept.
#
[radius,lon,lat] = spiceypy.reclat( point )
print( ' Planetocentric coordinates of the intercept (degrees):')
print( ' LAT = {:.3f}'.format(lat * spiceypy.dpr() ) )
print( ' LON = {:.3f}\n'.format(lon * spiceypy.dpr() ) )
#
# Compute the illumination angles at this point.
#
[trgepc,srfvec,phase,solar,emissn, visibl,lit] = spiceypy.illumf(
'DSK/Unprioritized','MARS','SUN',et,'IAU_MARS',
'LT+S','TGO', point)
print( ' Phase angle (degrees): {:.3f}'.format(phase*spiceypy.dpr()))
print( ' Solar incidence angle (degrees): {:.3f}'.format(solar*spiceypy.dpr()))
print( ' Emission angle (degrees): {:.3f}'.format(emissn*spiceypy.dpr()))
print( ' Observer visible: {:s}'.format(str(visibl)))
print( ' Sun visible: {:s}\n'.format(str(lit)))
except SpiceyError as exc:
#
# Display a message if an exception was thrown.
# For simplicity, we treat this as an indication
# that the point of intersection was not found,
# although it could be due to other errors.
# Otherwise, continue with the calculations.
#
print( 'Exception message is: {:s}'.format(exc.value ))
#
# End of vector loop.
#
spiceypy.kclear()
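The `reclat` call used above converts rectangular coordinates to planetocentric (radius, longitude, latitude). For intuition, a minimal numpy sketch of that mapping (angles in radians, as in SPICE):

```python
import numpy as np

def reclat(p):
    """Rectangular -> (radius, longitude, latitude); a numpy sketch of
    spiceypy.reclat, for illustration only."""
    x, y, z = p
    radius = np.sqrt(x * x + y * y + z * z)
    lon = np.arctan2(y, x)       # longitude in (-pi, pi]
    lat = np.arcsin(z / radius)  # latitude in [-pi/2, pi/2]
    return radius, lon, lat

print(reclat([0.0, 0.0, 2.0]))  # north pole: lat = pi/2
```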
| lesson_remote_sensing/remote_sensing_tgo_py.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
df = pd.read_csv('adult.csv')
df.head()
# +
# df.describe()
# +
# df['workclass'].unique()
# +
# df['education'].unique()
# +
# df['income'].unique()
# -
df['income'].value_counts()
# ## Dataset preparation
# +
# df.columns
# -
X, y = df[['age', 'workclass', 'fnlwgt', 'education', 'educational-num',
'marital-status', 'occupation', 'relationship', 'race', 'gender',
'capital-gain', 'capital-loss', 'hours-per-week', 'native-country']], df['income']
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
y = le.fit_transform(y)
# +
# y
# -
del X['fnlwgt']
X = pd.get_dummies(X)
# +
# X.head()
# -
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size = 0.3, random_state = 0 )
from sklearn.preprocessing import StandardScaler
# +
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# +
# X_train
# -
# ## Trying to train a model
from sklearn.linear_model import LogisticRegression
# +
'''
LogisticRegression takes two key parameters:
penalty ('l1' or 'l2')
C - the inverse of the regularization strength
'''
lr = LogisticRegression()
lr.fit(X_train, y_train)
lr.score(X_test, y_test)
# +
# try all combinations of penalty and C
penalty = ['l1', 'l2']
C = [0.001, 0.01, 0.1, 1.0, 10.0]
# -
from sklearn.metrics import roc_auc_score, roc_curve
# +
# Note
# predict_proba returns, for each sample, two columns: the probability of
# belonging to class 0 and the probability of belonging to class 1 (they sum to one)
print(lr.predict_proba(X_test))
print()
print(lr.predict_proba(X_test).shape)
print(lr.predict_proba(X_test).sum(axis=1))
# we care about the probability of the positive class (1), so we take the second column
print(lr.predict_proba(X_test)[:,1])
# lr.predict_proba(X_test)[1] # select row 1
# lr.predict_proba(X_test)[:, 1] # all rows, column 1 only
# print(lr.predict_proba(X_test)[1,1]) # value at row 1, column 1
# print(lr.predict_proba(X_test)[:,0]) # all rows, column 0 only
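`roc_auc_score`, used in the grid search below, can be understood through the rank-sum (Mann-Whitney) identity. A numpy sketch for intuition only (no tie handling, unlike sklearn):

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC via the rank-sum identity; assumes no tied scores."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # ranks starting at 1
    n_pos = int((y_true == 1).sum())
    n_neg = int((y_true == 0).sum())
    # fraction of (positive, negative) pairs ranked correctly
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # -> 0.75
```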
# +
params = []
lines = []
scores = []
for p in penalty:
for c in C:
        lr = LogisticRegression(penalty=p, C=c, solver='liblinear')  # liblinear supports both l1 and l2
        lr.fit(X_train, y_train)
        probas = lr.predict_proba(X_test)[:,1]  # probabilities of the positive class, stored separately
params.append((p,c))
scores.append(roc_auc_score(y_test, probas))
lines.append(roc_curve(y_test, probas))
# -
# %matplotlib inline
import matplotlib.pyplot as plt
# +
for i in range(len(params)):
plt.plot(lines[i][0], lines[i][1], label='{}_{}'.format(params[i][0], params[i][1]))
plt.legend()
plt.show()
# -
for i in range(len(params)):
    print('{}_{}: {}'.format(params[i][0], params[i][1], scores[i]))
| Lectures notebooks/(Lectures notebooks) netology Machine learning/8. Model accuracy assessment, retraining, regularization/adult_penalty_C.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ### Introduction
# One of the most important variables in various transportation models is the travel time estimates by different modes of transport. Traditionally, this information is provided by the regional travel demand model in the form of Highway Skims and Transit Skims. The highway skims contain driving time from one zone (usually TAZ) to another via the transportation network, while the transit skims contain detailed information about transit travel times from one zone to another.
#
# This data is not public and is difficult for students and researchers to access.
#
# This notebook provides open-source code for retrieving this information from the Google Maps Directions API. Any interested student, researcher, or analyst can use it to retrieve travel times for any origin-destination pair.
#
# The shapefile used for this example is also publicly available. It is the block groups shapefile for the State of Connecticut, but the code can be used on any other shapefiles representing other locations and geographic units (census tracts, TAZ etc.).
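Before querying the API, a great-circle (straight-line) distance between two centroids is a useful sanity check on the driving distances retrieved later. A stdlib-only sketch; the coordinates below are illustrative (roughly Hartford and New Haven, CT), not taken from the shapefile:

```python
import math

def haversine_miles(lat1, lng1, lat2, lng2):
    """Great-circle distance in miles between two (lat, lng) points."""
    r = 3958.8  # mean Earth radius, miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Driving distance between these two points should exceed the straight-line value
print(haversine_miles(41.7658, -72.6734, 41.3083, -72.9279))
```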
# ### Import Libraries
# +
# %matplotlib inline
import pandas as pd, numpy as np, matplotlib.pyplot as plt
import geopandas as gpd
from geopandas import GeoDataFrame
from shapely.geometry import Point
from shapely.geometry import Polygon
from scipy import ndimage
import matplotlib.pylab as pylab
import matplotlib.pyplot as plt
import timeit
import googlemaps
pylab.rcParams['figure.figsize'] = 10, 8
import warnings
warnings.filterwarnings('ignore')
# -
# ### Read Shapefiles
blocks = gpd.read_file('./blockgroupct_37800_0000_2010_s100_census_1_shp_wgs84.shp')
blocks.crs
blocks.crs = {'init' :'epsg:4326'}
blocks.plot();
blocks['GEOID10'].nunique()
# ### Get Latitude and Longitude of Block Group Centroids
blocks["longitude"] = blocks.centroid.map(lambda p: p.x)
blocks["latitude"] = blocks.centroid.map(lambda p: p.y)
blocks.head()
# ### Select 30 Origins and 30 Destinations Randomly
origins = blocks.sample(30, random_state = 5)
destinations = blocks.sample(30, random_state = 10)
origins = origins[['GEOID10', 'latitude', 'longitude']]
destinations = destinations[['GEOID10', 'latitude', 'longitude']]
origins.reset_index(inplace=True)
destinations.reset_index(inplace=True)
origins.dtypes
data = pd.concat([origins, destinations], axis=1)
data[:2]
data.drop(data.columns[0], axis = 1, inplace=True)
data.shape
data.columns = ['O_GEOID10', 'o_lat', 'o_lng', 'D_GEOID10', 'd_lat', 'd_lng']
data.head()
driving_data = data
transit_data = data
# ## Google Maps
gmaps = googlemaps.Client(key = 'INSERT YOUR KEY HERE')
# ### Retrieve Auto (Driving) Travel Time
# +
cols = ['driving_distance', 'driving_duration']
for col in cols:
driving_data[col] = 0.0
# +
start_time = timeit.default_timer()
driving_results = []
for index, row in driving_data.iterrows():
x1 = row['o_lat']
y1 = row['o_lng']
x2 = row['d_lat']
y2 = row['d_lng']
directions_result = (gmaps.directions(origin = (x1,y1), destination = (x2,y2), mode="driving"))
driving_results.append(directions_result)
dist_meter = (directions_result[0]['legs'][0]['distance']['value'])
driving_data.set_value(index, 'driving_distance', dist_meter/1609.34)
duration_sec = (directions_result[0]['legs'][0]['duration']['value'])
driving_data.set_value(index, 'driving_duration', duration_sec/60.0)
elapsed = timeit.default_timer() - start_time
print 'Time taken to execute this code was %f seconds' %elapsed
# -
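The fields extracted in the loop above follow the Directions API response shape: `legs[0]['distance']['value']` is in meters and `legs[0]['duration']['value']` is in seconds. A mock response (invented numbers) makes the unit conversions easy to verify without an API key:

```python
# Minimal mock of a gmaps.directions(...) return value
mock_result = [{'legs': [{'distance': {'value': 16093},     # meters (~10 miles)
                          'duration': {'value': 1200}}]}]   # seconds (20 minutes)

dist_miles = mock_result[0]['legs'][0]['distance']['value'] / 1609.34
dur_min = mock_result[0]['legs'][0]['duration']['value'] / 60.0
print(dist_miles, dur_min)
```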
driving_data
# ### Retrieve Transit Travel Time
# +
transit_cols = ['transit_total_distance', 'transit_total_duration', 'transfers', 'access_distance', 'access_duration', 'egress_distance', 'egress_duration']
for col in transit_cols:
transit_data[col] = 0.0
# +
start_time = timeit.default_timer()
transit_results = []
for index, row in transit_data.iterrows():
x1 = row['o_lat']
y1 = row['o_lng']
x2 = row['d_lat']
y2 = row['d_lng']
directions_result = (gmaps.directions(origin = (x1,y1), destination = (x2,y2), mode="transit"))
transit_results.append(directions_result)
if len(directions_result) == 0:
for col in transit_cols:
transit_data.set_value(index, col, -99)
continue
dist_meter = (directions_result[0]['legs'][0]['distance']['value'])
transit_data.set_value(index, 'transit_total_distance', dist_meter/1609.34)
duration_sec = (directions_result[0]['legs'][0]['duration']['value'])
transit_data.set_value(index, 'transit_total_duration', duration_sec/60.0)
trans = pd.DataFrame(directions_result[0]['legs'][0]['steps'])
transfers = (np.sum(trans['travel_mode'] == 'TRANSIT') - 1)
transit_data.set_value(index, 'transfers', transfers)
steps = len(directions_result[0]['legs'][0]['steps'])
if steps == 1:
transit_data.set_value(index, 'access_distance', -99)
transit_data.set_value(index, 'access_duration', -99)
transit_data.set_value(index, 'egress_distance', -99)
transit_data.set_value(index, 'egress_duration', -99)
continue
if (directions_result[0]['legs'][0]['steps'][0]['travel_mode']) == 'WALKING':
acc_dist_meter = (directions_result[0]['legs'][0]['steps'][0]['distance']['value'])
transit_data.set_value(index, 'access_distance', round((acc_dist_meter/1609.34),2))
acc_duration_sec = (directions_result[0]['legs'][0]['steps'][0]['duration']['value'])
transit_data.set_value(index, 'access_duration', round((acc_duration_sec/60.0), 2))
else:
transit_data.set_value(index, 'access_distance', -99)
transit_data.set_value(index, 'access_duration', -99)
if (directions_result[0]['legs'][0]['steps'][steps-1]['travel_mode']) == 'WALKING':
egr_dist_meter = (directions_result[0]['legs'][0]['steps'][steps - 1]['distance']['value'])
transit_data.set_value(index, 'egress_distance', round((egr_dist_meter/1609.34), 2))
egr_duration_sec = (directions_result[0]['legs'][0]['steps'][steps - 1]['duration']['value'])
transit_data.set_value(index, 'egress_duration', round((egr_duration_sec/60.0), 2))
else:
transit_data.set_value(index, 'egress_distance', -99)
transit_data.set_value(index, 'egress_duration', -99)
elapsed = timeit.default_timer() - start_time
print 'Time taken to execute this code was %f seconds' %elapsed
# -
transit_data
| Alternative Highway and Transit Skims from Google Maps API.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
# %config IPython.matplotlib.backend = "retina"
import matplotlib.pyplot as plt
import numpy as np
# +
from toolkit import trappist1, transit_model, trappist_out_of_transit
g = trappist1('g')
# -
# C1 = BJD_UTC-2450000
#
# C2 = flux
#
# C3 = error
#
# C4 = X
#
# C5 = Y
#
# C6 = fwhm
#
# C7 = fwhm-x
#
# C8 = fwhm-y
#
# C9 = background
#
# C10 = airmass, irrelevant here
#
# C11 = exposure time (subarray)
#
# > To model this light curve, I use a linear function of X, Y, fwhm-x, and fwhm-y, plus a transit model.
#
bjd, flux, err, x, y, fwhm, fwhmx, fwhmy, bg, airmass, exptime = np.loadtxt('phot0002.txt', unpack=True)
bjd += 2450000
# +
plt.errorbar(bjd, flux/np.median(flux), err, fmt='.', color='k', ms=1, ecolor='silver')
transit_model_g = transit_model(bjd, g)
oot = transit_model_g == 1
plt.plot(bjd, transit_model_g)
# plt.plot(bjd[oot], transit_model_g[oot], '.')
# +
from toolkit import transit_duration
g.inc
# +
X_all = np.vstack([x, y, fwhmx, fwhmy]).T
X = X_all[oot, :]
omega = np.diag(err[oot]**2)
omega_inv = np.linalg.inv(omega)
V = np.linalg.inv(X.T @ omega_inv @ X)
beta = V @ X.T @ omega_inv @ flux[oot]
regressed_lc = flux - (X_all @ beta) + 1
plt.plot(bjd, transit_model_g)
from scipy.optimize import fmin_powell
def minimize(p):
return abs(np.sum((regressed_lc[oot] - transit_model_g[oot])**2 /
(p[0] * err[oot])**2)/len(regressed_lc[oot]) - 1)
err_scale = fmin_powell(minimize, [1])
plt.errorbar(bjd, regressed_lc, err_scale*err, fmt='.')
np.savetxt('lightcurve.txt', np.vstack([bjd, regressed_lc, err_scale*err]).T)
# -
plt.errorbar(bjd, flux, err, fmt='.', label='raw flux')
#plt.errorbar(bjd, regressed_lc, err_scale*err, fmt='.', label='Detrended flux')
plt.plot(bjd, (X_all @ beta), '.', label='detrending vector')
plt.legend()
plt.savefig('detrending.png')
# Compare with the lightcurve from Michael:
# 
# +
def quadratic_to_nonlinear(u1, u2):
a1 = a3 = 0
a2 = u1 + 2*u2
a4 = -u2
return (a1, a2, a3, a4)
quadratic_to_nonlinear(*g.u)
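A quick consistency check on the mapping above: the quadratic law I(mu) = 1 - u1*(1-mu) - u2*(1-mu)**2 must match the four-parameter nonlinear law 1 - sum_k a_k*(1 - mu**(k/2)) with the coefficients returned by `quadratic_to_nonlinear`. The function body is repeated here so the check is self-contained:

```python
import numpy as np

def quadratic_to_nonlinear(u1, u2):
    a1 = a3 = 0
    a2 = u1 + 2 * u2
    a4 = -u2
    return (a1, a2, a3, a4)

u1, u2 = 0.4, 0.2
a = quadratic_to_nonlinear(u1, u2)
mu = np.linspace(0.05, 1.0, 50)
quad = 1 - u1 * (1 - mu) - u2 * (1 - mu) ** 2
# nonlinear law uses exponents mu**0.5, mu**1, mu**1.5, mu**2
nonlin = 1 - sum(a[k] * (1 - mu ** ((k + 1) / 2.0)) for k in range(4))
print(np.allclose(quad, nonlin))  # -> True
```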
# +
import celerite
from celerite import terms
from scipy.optimize import minimize
from celerite.modeling import Model
from copy import deepcopy
original_params = g
times = bjd
fluxes = regressed_lc
errors = err
class MeanModel3Param(Model):
parameter_names = ['amp', 'depth', 't0']
def get_value(self, t):
params = deepcopy(trappist1('b'))
params.rp = self.depth**0.5
params.t0 = self.t0 + original_params.t0
return self.amp * transit_model(t, params)
initp_dict = dict(amp=1, depth=original_params.rp**2,
t0=0)#t0=original_params.t0)
parameter_bounds = dict(amp=[0.9*np.min(fluxes), 1.3*np.max(fluxes)],
depth=[0.9 * original_params.rp**2,
1.1 * original_params.rp**2],
t0=[-0.05, 0.05])
mean_model = MeanModel3Param(bounds=parameter_bounds, **initp_dict)
bounds = dict(log_a=(-30, 30))#, log_c=(np.log(4), np.log(8)))
log_c_median = 1.98108915
kernel = terms.RealTerm(log_a=-2, log_c=log_c_median,
bounds=bounds)
kernel.freeze_parameter('log_c')
gp = celerite.GP(kernel, mean=mean_model, fit_mean=True)
gp.compute(times - original_params.t0, errors)
# Define a cost function
def neg_log_like(params, y, gp):
gp.set_parameter_vector(params)
return -gp.log_likelihood(y)
def grad_neg_log_like(params, y, gp):
gp.set_parameter_vector(params)
return -gp.grad_log_likelihood(y)[1]
# Fit for the maximum likelihood parameters
initial_params = gp.get_parameter_vector()
bounds = gp.get_parameter_bounds()
soln = minimize(neg_log_like, initial_params, #jac=grad_neg_log_like,
method="L-BFGS-B", bounds=bounds, args=(fluxes, gp))
gp.set_parameter_vector(soln.x)
mu, var = gp.predict(fluxes, times - original_params.t0, return_var=True)
std = np.sqrt(var)
tmid = int(times.mean())
fig, ax = plt.subplots(2, 1, figsize=(6, 8), sharex=True)
ax[0].errorbar(times - tmid, fluxes, errors, fmt='.', color='k', ecolor='silver')
ax[0].fill_between(times - tmid, mu-std, mu+std, color='r', zorder=10, alpha=0.3)
ax[0].plot(times - tmid, mu, color='r', zorder=10)
ax[0].plot(bjd - tmid, transit_model_g)
ax[1].errorbar(times - tmid, fluxes - transit_model_g, errors, fmt='.', color='k', ecolor='silver')
ax[1].fill_between(times - tmid, mu-std-transit_model_g, mu+std-transit_model_g, color='r', zorder=10, alpha=0.3)
ax[1].plot(times - tmid, mu - transit_model_g, color='r', zorder=10)
ax[1].grid()
ax[1].set_xlabel('BJD - {0}'.format(tmid))
for axis in ax:
for j in ['right', 'top']:
axis.spines[j].set_visible(False)
fig.tight_layout()
fig.savefig('gp.png', dpi=200, bbox_inches='tight')
plt.show()
# -
| lightcurve.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data description:
# I'm going to solve the International Airline Passengers prediction problem: given a year and a month, the task is to predict the number of international airline passengers in units of 1,000. The data spans January 1949 to December 1960, i.e. 12 years with 144 monthly observations.
#
# # Workflow:
# - Load the Time Series (TS) by Pandas Library
#
# # 1) Exploration of Time Series:
# - TS Line, Histogram & Probability plots
# - TS Line & Box plots by intervals
# - TS Lag plots
# - Check the stationarity of TS, by:
# - Plotting rolling mean & standard deviation
# - Perform Dickey-Fuller test
# - Decomposition of TS into Trend, Seasonal part and residuals
#
# # 2) Seasonal ARIMA model:
# - Build and evaluate the Seasonal ARIMA model:
# - Grid-Search for the best ARIMA parameters
# - Fit the best ARIMA model
# - Evaluate model by in-sample prediction: Calculate RMSE
# - Forecast the future trend: Out-of-sample prediction
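The in-sample evaluation step above scores predictions with RMSE. As a reference, a minimal numpy definition of the metric used throughout below:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error, the score used for in-sample predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

print(rmse([112, 118, 132], [110, 120, 130]))  # -> 2.0
```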
# +
import numpy as np
from scipy import stats
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import random as rn
# %matplotlib inline
import os
os.environ['PYTHONHASHSEED'] = '0'
# for reproducible results:
np.random.seed(42)
rn.seed(42)
import warnings
warnings.filterwarnings("ignore")
# +
# Load data using Series.from_csv
from pandas import Series
#TS = Series.from_csv('C:/Users/rhash/Documents/Datasets/Time Series analysis/daily-minimum-temperatures.csv', header=0)
# Load data using pandas.read_csv
# in case, specify your own date parsing function and use the date_parser argument
from pandas import read_csv
TS = read_csv('C:/Users/rhash/Documents/Datasets/Time Series analysis/AirPassengers.csv', header=0, parse_dates=[0], index_col=0, squeeze=True)
print(TS.head())
# -
#TS=pd.to_numeric(TS, errors='coerce')
TS.dropna(inplace=True)
TS.index
TS.describe()
# +
# Time Series Line Plot: _________________________________________
plt.figure(figsize=(14, 5))
TS.plot()
TS.plot(style="k.")
plt.show()
#Time Series Histogram and Density Plot:
fig = plt.figure(figsize=(14, 9))
ax1 = fig.add_subplot(221)
ax1=sns.distplot(TS, fit=stats.norm)
ax2 = fig.add_subplot(222)
res=stats.probplot(TS, plot=ax2, rvalue=True)
# +
# Time Series Line, Box and Whisker Plots by Intervals: _________________________________________________
from pandas import Series
from pandas import DataFrame
groups = TS.groupby(pd.Grouper(freq='Y'))  # pd.Grouper replaces the removed pandas.TimeGrouper
years = DataFrame()
for name, group in groups:
    years[name.year] = group.values[0:12]
years.plot(subplots=True, legend=False, figsize=(8,10))
plt.show()
years.boxplot(figsize=(8,8))
plt.show()
plt.matshow(years.T, interpolation=None, aspect='auto')
plt.colorbar()
plt.show()
# +
# Time Series Lag Scatter Plots: ____________________________________________________
from pandas import concat
from pandas.plotting import scatter_matrix
plt.figure(figsize=(14, 8))
values = DataFrame(TS.values)
lags = 8
columns = [values]
for i in range(1,(lags + 1)):
columns.append(values.shift(i))
dataframe = concat(columns, axis=1)
columns = ['t+1']
for i in range(1,(lags + 1)):
columns.append('t-' + str(i))
dataframe.columns = columns
plt.figure(1)
for i in range(1,(lags + 1)):
ax = plt.subplot(340 + i)
ax.set_title('t+1 vs t-' + str(i))
plt.scatter(x=dataframe['t+1'].values, y=dataframe['t-'+str(i)].values)
plt.show()
# +
#Time Series Autocorrelation Plot: ________________________________________________________
from pandas.plotting import autocorrelation_plot
plt.figure(figsize=(10, 6))
autocorrelation_plot(TS)
plt.show()
# +
# To check the stationarity of Time Series: _________________________________________________
from statsmodels.tsa.stattools import adfuller
def test_stationarity(timeseries, win=12):
#Determing rolling statistics
rolmean = timeseries.rolling(window=win).mean()
rolstd = timeseries.rolling(window=win).std()
#Plot rolling statistics:
plt.figure(figsize=(15, 5))
orig = plt.plot(timeseries, color='blue',label='Original')
mean = plt.plot(rolmean, color='red', label='Rolling Mean')
std = plt.plot(rolstd, color='black', label = 'Rolling Std')
plt.legend(loc='best')
plt.title('Rolling Mean & Standard Deviation')
plt.show(block=False)
#Perform Dickey-Fuller test:
print('Results of Dickey-Fuller Test:')
dftest = adfuller(timeseries, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
print(dfoutput)
test_stationarity(TS, win=12)
# -
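What `test_stationarity` plots, shown on a toy series: a 12-point rolling mean and standard deviation. For a trending series the rolling mean drifts, which is the informal signal the Dickey-Fuller test formalizes. Synthetic data, for illustration only:

```python
import numpy as np
import pandas as pd

s = pd.Series(np.arange(24, dtype=float))  # deterministic upward trend
rolmean = s.rolling(window=12).mean()
rolstd = s.rolling(window=12).std()

# The rolling mean keeps climbing (non-stationary); the rolling std stays flat
print(rolmean.iloc[-1], rolstd.iloc[-1])
```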
import statsmodels.api as sm
# +
# load passenger data set and save to DataFrame
df = pd.read_csv('C:/Users/rhash/Documents/Datasets/Time Series analysis/AirPassengers.csv', header=0, index_col=0, parse_dates=True, sep=',')
# create Series object
y = df['#Passengers']
y_train = y[:'1958']
y_test = y['1959':]
# split into training and test sets
#y=TS.values
#y_train = TS[:'1958'].values
#y_test = TS['1959':].values
# +
# Decomposition of TS into Trend, Seasonal part & Residuals:
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(y)
trend = decomposition.trend
seasonal = decomposition.seasonal
residual = decomposition.resid
plt.figure(figsize=(12, 9))
plt.subplot(411)
plt.plot(y, label='Original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(trend, label='Trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(seasonal,label='Seasonality')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(residual, label='Residuals')
plt.legend(loc='best')
plt.tight_layout()
# +
import itertools
# define the p, d and q parameters to take any value between 0 and 2
p = d = q = range(0, 3)
# generate all different combinations of p, d and q triplets
pdq = list(itertools.product(p, d, q))
# generate all different combinations of seasonal p, q and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
# +
# Grid-Search for the best ARIMA parameters:
import sys
import warnings
warnings.filterwarnings("ignore")
best_aic = np.inf
best_pdq = None
best_seasonal_pdq = None
tmp_model = None
best_mdl = None
from sklearn.metrics import mean_squared_error
L=[]
for param in pdq:
for param_seasonal in seasonal_pdq:
try:
tmp_mdl = sm.tsa.statespace.SARIMAX(y_train,
order = param,
seasonal_order = param_seasonal,
enforce_stationarity=True,
enforce_invertibility=True)
res = tmp_mdl.fit(n_jobs=-1)
pred = res.get_prediction(start=pd.to_datetime('1949-01-01'),
end=pd.to_datetime('1958-12-01'),
dynamic=False)
RMSE= np.sqrt(mean_squared_error(y_train.values, pred.predicted_mean.values))
print('RMSE= ', RMSE, ', ', '(p,d,q)= ', param, ', ','(P,D,Q)= ', param_seasonal, sep='')
L.append([RMSE,param, param_seasonal] )
if res.aic < best_aic:
best_aic = res.aic
best_pdq = param
best_seasonal_pdq = param_seasonal
best_mdl = tmp_mdl
except:
continue
print("\n Best SARIMAX{}x{}12 model - AIC:{}".format(best_pdq, best_seasonal_pdq, best_aic))
# -
# define SARIMAX model and fit it to the data
mdl = sm.tsa.statespace.SARIMAX(y_train,
order=(2, 1, 2),
seasonal_order=(2, 1, 1, 12),
enforce_stationarity=True,
enforce_invertibility=True)
res = mdl.fit()
# +
# fit model to data
# In-sample-prediction and confidence bounds
pred = res.get_prediction(start=pd.to_datetime('1958-12-01'),
end=pd.to_datetime('1960-12-01'),
dynamic=False)
pred_ci = pred.conf_int()
print('Validation RMSE : ', np.sqrt(mean_squared_error(y['1958-12-01': ].values, pred.predicted_mean.values)))
# plot in-sample-prediction
plt.figure(figsize=(10, 6))
ax = y['1949':].plot(label='Observed',color='#006699');
pred.predicted_mean.plot(ax=ax, label='One-step Ahead Prediction', alpha=.7, color='#ff0066');
# draw confidence bound (gray)
ax.fill_between(pred_ci.index,
pred_ci.iloc[:, 0],
pred_ci.iloc[:, 1], color='#ff0066', alpha=.25);
# style the plot
ax.fill_betweenx(ax.get_ylim(), pd.to_datetime('1958-12-01'), y.index[-1], alpha=.15, zorder=-1, color='grey');
ax.set_xlabel('Date')
ax.set_ylabel('Passengers')
plt.legend(loc='upper left')
plt.show()
# plot in-sample-prediction
plt.figure(figsize=(10, 6))
ax = y['1959':].plot(label='Observed',color='#006699');
pred.predicted_mean.plot(ax=ax, label='One-step Ahead Prediction', alpha=.7, color='#ff0066');
# draw the confidence band
ax.fill_between(pred_ci.index,
pred_ci.iloc[:, 0],
pred_ci.iloc[:, 1], color='#ff0066', alpha=.25);
# style the plot
ax.fill_betweenx(ax.get_ylim(), pd.to_datetime('1958-12-01'), y.index[-1], alpha=.15, zorder=-1, color='grey');
ax.set_xlabel('Date')
ax.set_ylabel('Passengers')
plt.legend(loc='upper left')
plt.show()
# +
# Forecast (out-of-sample prediction)
mdl = sm.tsa.statespace.SARIMAX(y,
order=(2, 1, 2),
seasonal_order=(2, 1, 1, 12),
enforce_stationarity=True,
enforce_invertibility=True)
res = mdl.fit(disp=False)
# get forecast 108 steps (9 years) ahead into the future
pred_uc = res.get_forecast(steps=108)
# get confidence intervals of forecasts
pred_ci = pred_uc.conf_int()
# plot time series and long-term forecast
ax = y.plot(label='Observed', figsize=(16, 8), color='#006699');
pred_uc.predicted_mean.plot(ax=ax, label='Forecast', color='#ff0066');
ax.fill_between(pred_ci.index,
pred_ci.iloc[:, 0],
pred_ci.iloc[:, 1], color='#ff0066', alpha=.25);
ax.set_xlabel('Date');
ax.set_ylabel('Passengers');
plt.legend(loc='upper left')
plt.show()
| Time Series Analysis with ARIMA and Recurrent Neural Nets/Airline Passengers prediction (with Seasonal ARIMA).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import numpy as np
from saem import CSEMData
f = [10, 100, 1000]
x = np.arange(10., 3001, 10)
txLen = 1000 # length of the transmitter
altitude = 100
self = CSEMData(f=f, rx=x, txPos=np.array([[0, 0], [-txLen/2, txLen/2]]), alt=altitude)
print(self)
self.cmp = [1, 0, 1] # Bx and Bz to be plotted
self.showPos();
rho2 = 1000
rho = [1000, rho2, 1000]
thk = [100, 100]
self.simulate(rho=rho, thk=thk)
self.basename = "1000-{:d}-1000".format(rho2)
kw = dict(line=1, what="response", x="x", alim=[1e-3, 10.], lw=2)
ax=None # new axis
for i, f in enumerate(self.f):
ax = self.showLineFreq(nf=i, ax=ax, label="f = {:d} Hz".format(f), **kw)
# Now we want to do it with other resistivities for the second layer and make a plot for each frequency but with three lines for the individual models.
axAP = [self.showLineFreq(nf=i, label="rho={:d}".format(rho2), **kw, amphi=1)
for i in range(3)]
axRI = [self.showLineFreq(nf=i, label="rho={:d}".format(rho2), **kw)
for i in range(3)]
for rho2 in [50, 10]:
rho = [1000, rho2, 1000]
self.simulate(rho=rho, thk=thk)
for i in range(3):
self.basename = "1000 {:d} 1000, f={:d}Hz".format(rho2, self.f[i])
        self.showLineFreq(nf=i, label="rho={:d}".format(rho2), ax=axRI[i], **kw)
        self.showLineFreq(nf=i, label="rho={:d}".format(rho2), ax=axAP[i],
                          amphi=True, **kw)
| examples/3LayerCase.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/sandeep92134/PYTHON-Data-Cleaning/blob/master/Chapter%204/Exersize%206.%20Using%20k-nearest%20neighbor%20to%20find%20outliers.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="Ah4sEYeSalMH" outputId="d29ef1aa-53a8-479d-db52-ae08034eb8a1"
pip install pyod
# + id="AQIZ9RcvaRZY"
# import pandas, pyod, and sklearn
import pandas as pd
from pyod.models.knn import KNN
from sklearn.preprocessing import StandardScaler
pd.set_option('display.width', 80)
pd.set_option('display.max_columns', 7)
pd.set_option('display.max_rows', 20)
pd.options.display.float_format = '{:,.2f}'.format
covidtotals = pd.read_csv("https://raw.githubusercontent.com/sandeep92134/PYTHON-Data-Cleaning/master/Chapter%204/datasets/covidtotals.csv")
covidtotals.set_index("iso_code", inplace=True)
# + id="Nbq8tZruaWoy"
# create a standardized dataset of the analysis variables
standardizer = StandardScaler()
analysisvars = ['location','total_cases_pm','total_deaths_pm',\
'pop_density','median_age','gdp_per_capita']
covidanalysis = covidtotals.loc[:, analysisvars].dropna()
covidanalysisstand = standardizer.fit_transform(covidanalysis.iloc[:, 1:])
# + id="HLwwuxKzav00"
# run the KNN model and generate anomaly scores
clf_name = 'KNN'
clf = KNN(contamination=0.1)
clf.fit(covidanalysisstand)
y_pred = clf.labels_
y_scores = clf.decision_scores_
# + colab={"base_uri": "https://localhost:8080/", "height": 390} id="a3wRQp3ta2Hi" outputId="cf1f4f5f-4216-48e2-f6ac-73a1bf4d472e"
# show the predictions from the model
pred = pd.DataFrame(zip(y_pred, y_scores),
columns=['outlier','scores'],
index=covidanalysis.index)
pred.sample(10, random_state=1)
# + colab={"base_uri": "https://localhost:8080/"} id="Tt2riFT9a73R" outputId="6efef89e-95e8-4cb4-f4ec-d354822112aa"
pred.outlier.value_counts()
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="g2JFaT2DbAha" outputId="422816f4-d2d8-42d4-d556-bd1d9a7433ef"
pred.groupby(['outlier'])[['scores']].agg(['min','median','max'])
# + colab={"base_uri": "https://localhost:8080/", "height": 638} id="8km93QhzbJBf" outputId="77b84f2d-5d18-4832-c2e2-5cdd84b9992b"
# show covid data for the outliers
covidanalysis.join(pred).loc[pred.outlier==1,\
['location','total_cases_pm','total_deaths_pm','scores']].\
sort_values(['scores'], ascending=False)
| Chapter 4/Exersize 6. Using k-nearest neighbor to find outliers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CSYE7245 - Big Data Systems & Intelligent Analytics (Spring 2019)
# # Unsupervised Abnormal Event Detection
# ### <NAME> - NUID 001448312
# ## Abstract
# For anomaly detection in videos, instead of treating it as a supervised learning problem and labeling videos as normal or abnormal, I used an unsupervised approach. Abnormal videos are much harder to obtain than normal ones; even for self-driving cars, the biggest challenge is getting accident footage, because it is very difficult to collect or generate.
# A spatiotemporal architecture is proposed for anomaly detection in videos, including crowded scenes. It contains two main components: one for spatial feature representation, and one for learning the temporal evolution of the spatial features.
# I trained the model on normal videos only (with some outliers). At test time, when abnormal videos are given to the trained model, the reconstruction error for those videos rises above a threshold and the anomaly is detected.
#
# This application can be used in video surveillance to detect abnormal events. Since it is based on unsupervised learning, the only ingredient required is a long video segment containing only normal events from a fixed view.
# ## Introduction
# Suspicious events of interest in long video sequences, such as surveillance footage, usually have an extremely low probability of occurring. Manually detecting such events, or anomalies, is a very meticulous job that often requires more manpower than is generally available. Hence, there is a need for automated detection.
#
#
#
# Treating the task as a binary classification problem (normal vs. abnormal) proved effective and accurate, but the practicality of such a method is limited, since footage of abnormal events is difficult to obtain due to its rarity. Hence, it is more efficient to train a model with little to no supervision, using spatiotemporal features and autoencoders. Unlike supervised methods, these methods only require unlabelled video footage containing little or no abnormal activity, which is easy to obtain in real-world applications.
#
# In this project, video data set is represented by a set of general features, which are inferred automatically from a long video footage through a deep learning approach. Specifically, a deep neural network composed of a stack of convolutional autoencoders was used to process video frames in an unsupervised manner that captured spatial structures in the data, which, grouped together, compose the video representation. Then, this representation is fed into a stack of convolutional temporal autoencoders to learn the regular temporal patterns.
#
# The method described here is based on the principle that when an abnormal event occurs, the most recent frames of video will be significantly different from the older frames. An end-to-end model is trained that consists of a spatial feature extractor and a temporal encoder-decoder, which together learn the temporal patterns of the input volume of frames. The model is trained on video volumes consisting of only normal scenes, with the objective of minimizing the reconstruction error between the input video volume and the output video volume reconstructed by the learned model. After the model is properly trained, a normal video volume is expected to have low reconstruction error, whereas a video volume containing abnormal scenes is expected to have high reconstruction error. By thresholding on the error produced by each testing input volume, the system is able to detect when an abnormal event occurs.
import tensorflow as tf
import keras
from keras.preprocessing.image import img_to_array,load_img
from keras.layers import Conv3D,ConvLSTM2D,Conv3DTranspose,PReLU,BatchNormalization
from keras.models import Sequential
from keras.models import load_model
from sklearn.preprocessing import StandardScaler
import numpy as np
import os
from scipy.misc import imresize  # note: removed in SciPy >= 1.3; on newer installs use PIL or skimage.transform.resize
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
import warnings
warnings.filterwarnings('ignore')
# ## Preprocessing Videos
# The training and the testing videos are loaded. The approach used to load is by extracting frames from the video. This can be done using a linux command __ffmpeg -i {video filename} -r {fps} {image filename}.__ These images are then converted to an array and then to grayscale and saved in a list. The frames are then resized to **227x227** size and then normalized before passing to the model.
# +
def store(image_path,imagestore):
#Loading the image frames using keras load_img.
img=load_img(image_path)
#Converting the loaded image to array using keras img_to_array.
img=img_to_array(img)
#Resize the Image to (227,227,3) for the model to be able to process it.
img=imresize(img,(227,227,3))
#Convert the Image to Grayscale (Code referred from stackoverflow post mentioned in citations).
gray=0.2989*img[:,:,0]+0.5870*img[:,:,1]+0.1140*img[:,:,2]
#Appending each image to a list of all image frames.
imagestore.append(gray)
def preprocess(video_source_path, imagestore, outputName, fps):
#List of all Videos in the Source Directory.
videos=os.listdir(video_source_path)
#Make a temp dir to store all the frames
if not os.path.isdir(video_source_path+'/frames'):
os.mkdir(video_source_path+'/frames')
framepath=video_source_path+'/frames'
for video in videos:
if not video == 'frames':
#Extracts frames from the video. The number after -r is the number of fps extracted.
os.system( 'ffmpeg -i {}/{} -r {} {}/frames/%05d.jpg'.format(video_source_path,video,fps,video_source_path))
images=os.listdir(framepath)
for image in images:
image_path=framepath+ '/'+ image
#Store the image in the aggregated image list
store(image_path,imagestore)
os.system('rm -r {}/*'.format(framepath))
imagestore=np.array(imagestore)
a,b,c=imagestore.shape
#Reshape to (227,227,batch_size). This is done so that 10 frames can be used as a bunch size for training ahead.
print(imagestore.shape)
imagestore.resize(b,c,a)
print(imagestore.shape)
#Normalize the image
imagestore=(imagestore-imagestore.mean())/(imagestore.std())
#Clip negative Values. Negative values are clipped to 0 and values > 1 are clipped to 1. This is done to restrict
#the range of values between 0 and 1.
imagestore=np.clip(imagestore,0,1)
#Save all the images in a numpy array file.
np.save(outputName+'.npy',imagestore)
#Remove Buffer Directory
os.system('rm -r {}'.format(framepath))
# -
# **Both the training and the testing videos are preprocessed into frames and then stored in a file as an numpy array object.**
source_path= os.getcwd()+'/data/AvenueDataset/training_videos/'
target_path= os.getcwd()+'/data/AvenueDataset/testing_videos/'
fps=2
imstore=[]
preprocess(source_path, imstore, 'training', fps)
del imstore
imstore = []
preprocess(target_path, imstore, 'testing', fps)
del imstore
# ## Model building
# The input to the model is video volumes, where each volume consists of 10 consecutive frames with various skipping strides. As the number of parameters
# in this model is large, large amount of training data is needed. We perform data augmentation in the temporal dimension to increase
# the size of the training dataset. To generate these volumes, we concatenate
# frames with stride-1, stride-2, and stride-3. For example, the first stride-1 sequence is made up of frame {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, whereas the first
# stride-2 sequence contains frame number {1, 3, 5, 7, 9, 11, 13, 15, 17, 19}, and
# stride-3 sequence would contain frame number {1, 4, 7, 10, 13, 16, 19, 22, 25,
# 28}. Now the input is ready for model training.
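The strided-sequence construction described above can be sketched as index generation. This is an illustrative sketch of the augmentation scheme only; the preprocessing code in this notebook actually reshapes contiguous frames:

```python
# Generate frame-index sequences of length 10 with skipping strides 1, 2 and 3
# (frames numbered from 1 for readability, matching the description above).
def strided_sequences(n_frames, seq_len=10, strides=(1, 2, 3)):
    sequences = []
    for s in strides:
        span = (seq_len - 1) * s              # distance from first to last frame
        for start in range(1, n_frames - span + 1):
            sequences.append(list(range(start, start + span + 1, s)))
    return sequences

seqs = strided_sequences(30)
print(seqs[0])                                # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```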
# **The model architecture consists of two parts — spatial autoencoder for learning spatial structures of each video frame, and
# temporal encoder-decoder for learning temporal patterns of the encoded spatial structures.**
# The spatial encoder and decoder
# have two convolutional and deconvolutional layers respectively, while the temporal encoder is a three-layer convolutional long short term memory (LSTM)
# model. Convolutional layers are well-known for its superb performance in object recognition, while LSTM model is widely used for sequence learning
# Autoencoders, as the name suggests, consist of two stages: encoding and decoding. It was first used to reduce dimensionality by setting the number of
# encoder output units less than the input. The model is usually trained using
# back-propagation in an unsupervised manner, by minimizing the reconstruction
# error of the decoding results from the original inputs. With the activation function chosen to be nonlinear, an autoencoder can extract more useful features
# than some common linear transformation methods such as PCA
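As a toy illustration of that encode-decode-reconstruct principle, here is a minimal linear autoencoder in plain NumPy, trained by gradient descent on random data. This is only a sketch of the idea; the actual model below is convolutional and recurrent:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # toy data, not video frames

# linear autoencoder: 8 -> 3 bottleneck -> 8
W_enc = rng.normal(scale=0.1, size=(8, 3))
W_dec = rng.normal(scale=0.1, size=(3, 8))

def reconstruction_error(X, W_enc, W_dec):
    return np.mean((X - X @ W_enc @ W_dec) ** 2)

lr = 0.1
first = reconstruction_error(X, W_enc, W_dec)
for _ in range(1000):
    H = X @ W_enc                             # encode
    R = H @ W_dec                             # decode / reconstruct
    G = 2.0 * (R - X) / X.size                # dL/dR for the MSE loss
    g_dec = H.T @ G                           # dL/dW_dec
    g_enc = X.T @ (G @ W_dec.T)               # dL/dW_enc
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
final = reconstruction_error(X, W_enc, W_dec)
print(first > final)                          # True: error shrinks with training
```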
# +
def loadModel():
model=Sequential()
model.add(Conv3D(filters=256,kernel_size=(5,5,1),strides=(3,3,1),padding='valid',input_shape=(227,227,10,1)))
model.add(PReLU())
model.add(BatchNormalization())
model.add(Conv3D(filters=128,kernel_size=(5,5,1),strides=(2,2,1),padding='valid'))
model.add(PReLU())
model.add(BatchNormalization())
model.add(ConvLSTM2D(filters=128,kernel_size=(3,3),strides=1,padding='same',dropout=0.4,recurrent_dropout=0.3,return_sequences=True))
model.add(ConvLSTM2D(filters=64,kernel_size=(3,3),strides=1,padding='same',dropout=0.3,return_sequences=True))
model.add(ConvLSTM2D(filters=128,kernel_size=(3,3),strides=1,return_sequences=True, padding='same',dropout=0.5))
model.add(BatchNormalization())
model.add(PReLU())
model.add(Conv3DTranspose(filters=256,kernel_size=(5,5,1),strides=(2,2,1),padding='valid'))
model.add(BatchNormalization())
model.add(PReLU())
model.add(Conv3DTranspose(filters=1,kernel_size=(5,5,1),strides=(3,3,1),padding='valid'))
return model
model = loadModel()
print(model.summary())
# -
# **Bunches of 10 consecutive frames are passed to the model at once so that it can find features in the sequence, as described earlier.**
# +
def loadFrames(fileName):
#Loads a stored numpy array file.
X_train=np.load(fileName)
frames=X_train.shape[2]
print(frames)
#Need to make number of batch_size(frames) divisible by 10
frames=frames-frames%10
#Removing the remainder frames.
X_train=X_train[:,:,:frames]
    #Reshaping the training images: if, for example, 1251 frames were extracted,
    #the last frame is dropped and the remaining 1250 frames are grouped into bunches of 10 consecutive frames.
    #So 125 bunches are trained, where each bunch has 10 consecutive images of size 227x227.
X_train=X_train.reshape(-1,227,227,10)
print(X_train.shape)
X_train=np.expand_dims(X_train,axis=4)
print(X_train.shape)
#Since it is unsupervised learning, x_train and y_train will be same.
Y_train=X_train.copy()
return X_train, Y_train
#Simple plot to visualize the model's training history
def visualizeModel(history):
plt.plot(history.history['acc'])
plt.plot(history.history['loss'])
plt.title('model history')
plt.ylabel('accuracy & loss')
plt.xlabel('epoch')
plt.legend(['Accuracy', 'Loss'], loc='best')
plt.show()
def train(model,filename):
X_train, Y_train = loadFrames(filename+'.npy')
epochs=125
batch_size=5
model.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy'])
    history = model.fit(X_train,Y_train,batch_size=batch_size,epochs=epochs)
model.save('model_train_new.h5')
visualizeModel(history)
return model, history
# -
model, history = train(model,'training')
# ## Loading the Saved Model
# The trained model is now loaded so that it can be evaluated and used for testing.
model = load_model('model_train_new.h5')
# ## Model Evaluation
# The reconstruction error of all
# pixel values I in frame t of the video sequence is taken as the Euclidean distance
# between the input frame and the reconstructed frame:
#
# **e(t) = ||x(t) − fW (x(t))||2**
#
# where fW is the learned weights by the spatiotemporal model. We then
# compute the abnormality score sa(t) by scaling between 0 and 1. Subsequently,
# regularity score sr(t) can be simply derived by subtracting abnormality score
# from 1:
#
# **sa(t) = (e(t) − e(t)min)/e(t)max**
#
# **sr(t) = 1 − sa(t)**
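The scoring formulas above can be sketched directly in NumPy (the error values here are hypothetical, for illustration only):

```python
import numpy as np

def abnormality_scores(errors):
    e = np.asarray(errors, dtype=float)
    sa = (e - e.min()) / e.max()              # sa(t) = (e(t) - e_min) / e_max
    sr = 1.0 - sa                             # sr(t) = 1 - sa(t)
    return sa, sr

e = np.array([0.20, 0.25, 0.90, 0.22])        # hypothetical per-volume errors
sa, sr = abnormality_scores(e)
print(sa.round(3))                            # [0.    0.056 0.778 0.022]
```

The third volume stands out with the highest abnormality score, so it would be the one flagged by a threshold.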
# +
def mean_squared_loss(x1,x2):
#Compute Euclidean Distance Loss between input frame(x1) and the reconstructed frame(x2) pixel values.
diff=x1-x2
a,b,c,d,e=diff.shape
#Number of samples is product of all the dimensions.
n_samples=a*b*c*d*e
#Square of distance
sq_diff=diff**2
#Sum of Square of distance(difference) between pixel values.
Sum=sq_diff.sum()
#Mean of Sum of Square of distance(difference) between pixel values(MSE).
mean_dist=Sum/n_samples
return mean_dist
def detectAnomaly(X_test,model):
    #Flag used to print at the end whether any anomalies were found.
flag = 0
    #Set after properly evaluating the irregularity score against manual observation of the videos,
    #by approximately calculating the frame bunch number.
threshold = 0.1
mainnum = 1
#Check for irregularity in set of 5 consecutive frame bunches. Each bunch is a set of 10 consecutive frames of 227x227.
for i in range(0,len(X_test),5):
inter = X_test[i:i+5]
losslist = []
bunchnumlist = []
for number,bunch in enumerate(inter):
n_bunch=np.expand_dims(bunch,axis=0)
reconstructed_bunch=model.predict(n_bunch)
loss=mean_squared_loss(n_bunch,reconstructed_bunch)
losslist.append(loss)
#Calculating the irregularity score from the MSE loss of 5 consecutive frame bunches.
#If the score of any frame bunch is greater than threshold, Print anomaly found.
for n,l in enumerate(losslist):
score = (l-min(losslist))/max(losslist)
# print(score)
if score > threshold:
print("Anomalous bunch of frames at bunch number {}. Score of frame {} was higher.".format(mainnum,n+1))
flag=1
mainnum = mainnum+1
if flag==1:
print("Anomaly found")
# -
X_test,_ = loadFrames('testing.npy')
detectAnomaly(X_test, model)
# **The model was first tested on the testing dataset.**
# ## Testing on a particular video
# **The model was now tested on a particular video to see if it has anomalies.**
imstore = []
preprocess(os.getcwd()+'/mytest/', imstore, 'mytesting',2)
del imstore
X_test,_ = loadFrames('mytesting.npy')
detectAnomaly(X_test, model)
# ## Testing on live feed
# **Live camera feed is captured and checked for anomalies if any**
import cv2
import numpy as np
from scipy.misc import imresize
from keras.models import load_model
# +
vc=cv2.VideoCapture(0)
rval=True
print('Loading model')
model=load_model('model_train_new.h5')
print('Model loaded')
threshold = 0.1
for k in range(10):
imagedump=[]
for j in range(10):
for i in range(10):
rval,frame=vc.read()
frame=imresize(frame,(227,227,3))
#Convert the Image to Grayscale
gray=0.2989*frame[:,:,0]+0.5870*frame[:,:,1]+0.1140*frame[:,:,2]
gray=(gray-gray.mean())/gray.std()
gray=np.clip(gray,0,1)
imagedump.append(gray)
imagedump=np.array(imagedump)
imagedump.resize(10,227,227,10)
imagedump=np.expand_dims(imagedump,axis=4)
print(imagedump.shape)
detectAnomaly(imagedump, model)
vc.release()
# -
# ## Conclusion
# **The model gave decent predictions in all three settings: the testing dataset, a standalone video, and a live feed. The training data mostly showed people walking far from the camera, so when someone walks close to the camera the model reports "Anomaly Found". Predictions on the testing dataset were verified by manually inspecting the testing frames and locating the corresponding video from the frame-bunch number and the frames-per-second used to extract frames. Since it is an unsupervised model, this was the only approach that came to mind to evaluate the model's testing accuracy; hence, apart from the frame and bunch numbers, I could not display other data to showcase the model's accuracy.**
# **To conclude, A Spatiotemporal CNN can be used to train a model using unsupervised learning which can help in finding anomalies, which can be difficult to find by a supervised learning algorithm due to lack of unbiased datasets.**
# ## Citations
# https://www.semanticscholar.org/paper/An-Overview-of-Deep-Learning-Based-Methods-for-and-Kiran-Thomas/7198f45e979d4e7bb2ad2f8a5f098ab196c532b6
#
# https://www.semanticscholar.org/paper/Improved-anomaly-detection-in-surveillance-videos-a-Khaleghi-Moin/1a5c917ec7763c2ff9619e6f19d02d2f254d236a
#
# https://www.semanticscholar.org/paper/A-Short-Review-of-Deep-Learning-Methods-for-Group-Borja-Borja-Saval-Calvo/d9db8a4ce5ae4c4d03a55c648f4e7006838b6952
#
# https://www.semanticscholar.org/paper/Context-encoding-Variational-Autoencoder-for-Zimmerer-Kohl/2f3a2e24fb0ea3a9b6a4ebf0430886fdfa3efdd3
#
# https://machinelearningmastery.com/cnn-long-short-term-memory-networks/
#
# https://arxiv.org/abs/1411.4389
#
# https://www.coursera.org/lecture/nlp-sequence-models/long-short-term-memory-lstm-KXoay
#
# https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53
#
# https://www.ncbi.nlm.nih.gov/pubmed/22392705
#
# https://arxiv.org/abs/1604.04574
#
# https://www.researchgate.net/publication/221361667_Anomaly_detection_in_extremely_crowded_scenes_using_spatio-temporal_motion_pattern_models
#
# https://pennstate.pure.elsevier.com/en/publications/adaptive-sparse-representations-for-video-anomaly-detection
# ## Declaration of adapted code
# * The approach for preprocessing and providing proper input to model was referenced from paper https://arxiv.org/pdf/1701.01546.pdf . **But the code was entirely written after understanding the approach.**
# * The model architecture was developed from paper https://arxiv.org/pdf/1701.01546.pdf . **But this was considered as the base model and many improvements were made over this model.**
# * The code for grayscaling a colored image was referenced from stackoverflow post
# ## Scope of this project
# Since this was a solo project, and considering that this was my first time working with video analytics, I feel that understanding the video preprocessing and feature extraction, experimenting on the base model architecture (which can be viewed in experiments.ipynb), and evaluating the model manually on the testing dataset, on an individual standalone video, and on a live camera feed was sufficient for the project.
# ## License
# MIT License
#
# Copyright (c) 2019 <NAME>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
| Anomaly detection in videos/.ipynb_checkpoints/portfolio-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] id="GGO0xH36G_BI"
# # Summer Olympics Data Analysis Assignment
# + id="1act3oNuHr55"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv("summer.csv")
# + [markdown] id="RdsERyiSG_BK"
# ### 1. In how many cities have the Summer Olympics been held so far?
# + colab={"base_uri": "https://localhost:8080/"} id="JNpUlY2HH2HR" outputId="ea89fc6f-19d2-49a4-ec39-705b5a9d3288"
print ("Number of cities :" ,len(df['City'].unique()))
for city in df['City'].unique() :
print(city)
# + [markdown] id="5wNQMA4DG_BM"
# ### 2. Which sports have the most Gold Medals so far? (Top 5)
# + colab={"base_uri": "https://localhost:8080/", "height": 403} id="7cqVEq5QRAc8" outputId="d67c7f7f-1051-4872-fccf-0ae0c8b8e993"
da = df[df['Medal'] == 'Gold']
data = []
for sport in da['Sport'].unique():
data.append([sport , len(da[da['Sport'] == sport])])
pd.DataFrame(data,columns = ['Sport','Number of Golds']).sort_values(by='Number of Golds', ascending=False).head().plot(x = 'Sport', y = 'Number of Golds', kind = 'bar', figsize = (10,5))
# + [markdown] id="hUyyYHJNG_BM"
# ### 3. Which sports have the most medals so far? (Top 5)
# + colab={"base_uri": "https://localhost:8080/", "height": 403} id="e5nciw1uG_BN" outputId="3648b817-fd9d-4904-8731-08f498c7b170"
DataSports = []
for sport in df['Sport'].unique():
DataSports.append([sport , len(df[df['Sport'] == sport])])
pd.DataFrame(DataSports,columns = ['Sport','Number of Medals']).sort_values(by='Number of Medals', ascending=False).head().plot(x = 'Sport', y = 'Number of Medals', kind = 'bar', figsize = (10,5))
# + [markdown] id="Ux5f1mhgG_BN"
# ### 4. Which players have won the most medals? (Top 5)
# + colab={"base_uri": "https://localhost:8080/", "height": 462} id="h58RMfVJYeSd" outputId="f40a7ce8-10d6-4f3b-9543-ac7f7fd13f03"
DataPlayer = []
for player in df['Athlete'].unique():
DataPlayer.append([player , len(df[df['Athlete'] == player])])
pd.DataFrame(DataPlayer,columns = ['Athlete','Number of Medals']).sort_values(by='Number of Medals', ascending=False).head().plot(x = 'Athlete', y = 'Number of Medals', kind = 'bar', figsize = (10,5))
# + [markdown] id="MOoYdxgcG_BN"
# ### 5. Which players have won the most Gold Medals? (Top 5)
# + colab={"base_uri": "https://localhost:8080/", "height": 430} id="ev-OsFpyG_BO" outputId="d0a6cb5f-a43b-4cd8-bf5c-47ec98ca1f4a"
DataPlayerGold = []
for athlete in da['Athlete'].unique():
DataPlayerGold.append([athlete , len(da[da['Athlete'] == athlete])])
pd.DataFrame(DataPlayerGold,columns = ['Athlete','Number of Golds']).sort_values(by='Number of Golds', ascending=False).head().plot(x = 'Athlete', y = 'Number of Golds', kind = 'bar', figsize = (10,5))
# + [markdown] id="EFpjXwmdG_BO"
# ### 6. In which year did India win its first Gold Medal in the Summer Olympics?
# + id="VlNdxiRoG_BO"
Ind = da[da['Country'] == 'IND']
# + colab={"base_uri": "https://localhost:8080/", "height": 80} id="uSVs40Q5fwk4" outputId="887b1b84-c271-45fd-d80a-a442d75ca712"
Ind.head(1)
# + [markdown] id="JGZwjhakG_BP"
# ### 7. Which event is the most popular in terms of number of players? (Top 5)
# + colab={"base_uri": "https://localhost:8080/", "height": 436} id="NH05m1qsg-C7" outputId="c2117116-0e1a-487f-99cf-a73a477e6d4c"
DataEvent = []
for player in df['Event'].unique():
DataEvent.append([player , len(df[df['Event'] == player])])
pd.DataFrame(DataEvent,columns = ['Event','Number of Players']).sort_values(by='Number of Players', ascending=False).head().plot(x = 'Event', y = 'Number of Players', kind = 'bar', figsize = (10,5))
# + [markdown] id="cqE1jvyaG_BP"
# ### 8. Which sports have the most female Gold Medalists? (Top 5)
# + colab={"base_uri": "https://localhost:8080/", "height": 435} id="cT03LecoG_BP" outputId="dd61b7ad-ca48-4b1e-a021-165a35e9880e"
dfemale = da[da['Gender'] == 'Women']
DataFemaleGold = []
# count female gold medals per sport, to match the question
for sport in dfemale['Sport'].unique():
    DataFemaleGold.append([sport , len(dfemale[dfemale['Sport'] == sport])])
pd.DataFrame(DataFemaleGold,columns = ['Sport','Number of Golds']).sort_values(by='Number of Golds', ascending=False).head().plot(x = 'Sport', y = 'Number of Golds', kind = 'bar', figsize = (10,5))
| Summer Olympics/Summer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score,classification_report,confusion_matrix
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestClassifier
dataframe_train=pd.read_csv("Desktop/dataset/loan_prediction_train.csv")
dataframe_train.shape
dataframe_train.head()
dataframe_train.describe()
dataframe_train['Property_Area'].value_counts()
dataframe_train['Loan_Status'].value_counts()
dataframe_train['ApplicantIncome'].hist(bins=50)
dataframe_train.boxplot(column='ApplicantIncome', by='Education')
dataframe_train['LoanAmount'].hist(bins=50)
dataframe_train.boxplot(column='LoanAmount')
dataframe_train.apply(lambda x: sum(x.isnull()),axis =0)
dataframe_train['LoanAmount'].fillna(dataframe_train['LoanAmount'].mean(), inplace=True)
dataframe_train.apply(lambda x: sum(x.isnull()),axis =0)
dataframe_train['Self_Employed'].value_counts()
dataframe_train['Self_Employed'].fillna('No',inplace=True)
dataframe_train['Self_Employed'].value_counts()
dataframe_train['LoanAmount_log']=np.log(dataframe_train['LoanAmount'])
dataframe_train['LoanAmount_log'].hist(bins=20)
dataframe_train['TotalIncome']=dataframe_train['ApplicantIncome']+dataframe_train['CoapplicantIncome']
dataframe_train['TotalIncome_log']=np.log(dataframe_train['TotalIncome'])
dataframe_train['TotalIncome_log'].hist(bins=20)
dataframe_train['Gender'].fillna('Male',inplace=True)
dataframe_train['Married'].fillna('Yes',inplace=True)
dataframe_train['Dependents'].fillna('0',inplace=True)
dataframe_train['Credit_History'].fillna('1',inplace=True)
var_mod=['Gender','Married','Dependents','Education','Self_Employed','Property_Area','Loan_Status']
le=LabelEncoder()
for i in var_mod:
dataframe_train[i]=le.fit_transform(dataframe_train[i])
# +
def classification_model(model,data,predictors,outcomes):
model.fit(data[predictors],data[outcomes])
predict=model.predict(data[predictors])
accuracy=accuracy_score(predict,data[outcomes])
print("Accuracy is:{}".format(accuracy))
    kf=KFold(n_splits=5)
    error=[]
    for train,test in kf.split(data):
train_predictors=data[predictors].iloc[train,:]
train_target=data[outcomes].iloc[train]
model.fit(train_predictors,train_target)
error.append(model.score(data[predictors].iloc[test,:],data[outcomes].iloc[test]))
print("Cross Validation score is {}".format(np.mean(error)))
model.fit(data[predictors],data[outcomes])
# -
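# The cross-validation in `classification_model` relies on KFold to split row
# indices into folds. As a rough illustration of the mechanics (not the sklearn
# implementation), the splitting logic boils down to:

```python
def kfold_indices(n, n_folds):
    """Split range(n) into n_folds contiguous (train, test) index pairs."""
    # distribute the remainder over the first folds, like sklearn does
    fold_sizes = [n // n_folds + (1 if i < n % n_folds else 0)
                  for i in range(n_folds)]
    folds = []
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        folds.append((train, test))
        start += size
    return folds
```

# Each row lands in exactly one test fold, so the cross-validation score
# averages over predictions for every sample.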
outcome_variable='Loan_Status'
model=RandomForestClassifier(n_estimators=100,min_samples_split=25,max_depth=7,max_features=1)
predictor_var=['TotalIncome_log','LoanAmount_log','Credit_History','Dependents','Property_Area']
classification_model(model,dataframe_train,predictor_var,outcome_variable)
| Machinelearning_projects_for_begineers/Loan_prediction_RandomForest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### Kakao Local API
# - https://developers.kakao.com/
# - address -> latitude, longitude
import requests

app_key = ""
addr = "서울 성동구 성수이로 113"
url = "https://dapi.kakao.com/v2/local/search/address.json?query={}".format(addr)
url
headers = {
"Authorization": "KakaoAK {}".format(app_key)
}
headers
response = requests.get(url, headers = headers)
response
datas = response.json()["documents"][0]
datas["x"], datas["y"], datas["road_address"]["zone_no"]
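# The indexing above follows the shape of Kakao's address-search response
# (`documents[0]["x"]`, `["y"]`, `["road_address"]["zone_no"]`). A small helper
# with a hypothetical sample payload (no API key is set here, so this is only a
# sketch of the extraction, not a live call):

```python
def parse_first_document(payload):
    """Pull (longitude, latitude, postal code) out of an address-search
    response shaped like the one used above; None if nothing was found."""
    docs = payload.get("documents", [])
    if not docs:
        return None
    doc = docs[0]
    road = doc.get("road_address") or {}
    return doc["x"], doc["y"], road.get("zone_no")

# hypothetical sample mirroring the fields indexed above
sample = {"documents": [{"x": "127.05", "y": "37.54",
                         "road_address": {"zone_no": "04781"}}]}
```

# Guarding the empty-`documents` case avoids an IndexError when the address
# lookup returns no match.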
| etc/crawling/200626_api_kakao.ipynb |
# ---
# jupyter:
# jupytext:
# formats: ipynb,py:percent
# text_representation:
# extension: .py
# format_name: percent
# format_version: '1.3'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %%
import pandas as pd
import matplotlib.pyplot as plt

from sectional_v2.util.eusaar_data import *
import sectional_v2.util.eusaar_data
from sectional_v2.constants import get_plotpath
# %%
# load and autoreload
from IPython import get_ipython
# noinspection PyBroadException
try:
_ipython = get_ipython()
_magic = _ipython.magic
_magic('load_ext autoreload')
_magic('autoreload 2')
except:
pass
# %%
from sectional_v2.constants import get_outdata_path
path_in = get_outdata_path('eusaar')
version ='_noresmv21_dd'
file_in = path_in + 'Nd_cat_sources_timeseries%s.csv'%version
plot_path = get_plotpath('eusaar')
version ='_noresmv21dd_both'
figname = f'{plot_path}/NRMSE{version}.'
# %%
case_sec = 'SECTv21_ctrl_koagD'
case_ns = 'noSECTv21_ox_ricc'
cases_ns = ['noSECTv21_default_dd','noSECTv21_ox_ricc_dd']
cases_s = [case_sec]
cases = cases_ns + cases_s
eusaar='eusaar'
# %% [markdown]
# ## Plotting functions:
# %%
trns ={
'N30-50':'N$_{30-50}$',
'N30-100':'N$_{30-100}$',
'N50-100':'N$_{50-100}$',
'N50':'N$_{50-500}$',
'N100':'N$_{100-500}$',
'N250':'N$_{250-500}$'
}
# %%
from sectional_v2.data_info import get_nice_name_case
# %%
df = pd.read_csv(file_in, index_col=0)
# %%
df_gd = df[df['flag_gd']]
# %%
df_gd = df_gd.rename(trns, axis=1)
# %%
c_N = list(trns.values())  # nice-name columns, matching the renamed df_gd
# %%
df_dic = {}
for case in cases :
_df=sectional_v2.util.eusaar_data.clean_df(df_gd,sc=case).reset_index()
_df['time'] = pd.to_datetime(_df['time'])
df_dic[case] = _df.set_index('time')
_df=sectional_v2.util.eusaar_data.clean_df(df_gd,sc=eusaar).reset_index()
_df['time'] = pd.to_datetime(_df['time'])
df_eusaar =_df.set_index('time')
# %%
def nrmse_calc(df_m, df_eu):
    df_model = df_m[c_N]
    df_eusaar = df_eu[c_N]
    rmse = ((df_model - df_eusaar) ** 2).mean() ** .5
    # normalize by the observed mean; the caller names the resulting Series
    rmse = rmse / df_eusaar.mean()
    return rmse
def get_nrmse_all(df_dic, df_eusaar):
rmse_dic = {}
for case in cases:
df_model = df_dic[case]
_rmse=nrmse_calc(df_model, df_eusaar)
        _rmse = _rmse.rename(case)
rmse_dic[case] = _rmse
rmse_all = pd.concat([rmse_dic[case] for case in cases], axis=1)
return rmse_dic, rmse_all
rmse_dic, rmse_all = get_nrmse_all(df_dic, df_eusaar)
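# `nrmse_calc` is an RMSE divided by the mean of the observations. The same
# arithmetic on plain lists, as a sanity check of the definition:

```python
def nrmse(model, obs):
    """Root-mean-square error between two equal-length sequences,
    normalized by the mean of the observations."""
    n = len(obs)
    mse = sum((m - o) ** 2 for m, o in zip(model, obs)) / n
    return (mse ** 0.5) / (sum(obs) / n)
```

# A perfect model gives 0; the value is unitless, which is what makes the
# size ranges comparable in the bar plots below.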
# %%
def add_season(df_dic):
df_seas ={}
for case in cases:
_df_all = df_dic[case]
_df_all = add_seas(_df_all)
df_seas[case] = _df_all
return df_seas
def add_seas(_df_all):
_df_all['season'] = _df_all.to_xarray()['time.season'].to_series() # df_dic[case].reset_index().dt.season
return _df_all
def get_season(dic_df_seas, seas):
dic_df_s ={}
for case in dic_df_seas.keys():
dic_df_s[case] = dic_df_seas[case][dic_df_seas[case]['season'] == seas]
return dic_df_s
def get_station(dic_df_seas, station):
dic_df_s ={}
for case in dic_df_seas.keys():
dic_df_s[case] = dic_df_seas[case][dic_df_seas[case]['station'] == station]
return dic_df_s
# %%
def rmse_seas(df_dic, df_eusaar):
df_dic_s = add_season(df_dic)
df_eusaar_seas = add_seas(df_eusaar)
ls_rmse =[]# pd.DataFrame()
for seas in df_dic[case]['season'].unique():
_dic_s = get_season(df_dic_s, seas)
_df_eu_s = df_eusaar_seas[df_eusaar_seas['season'] == seas]
rmse_dic, rmse_all = get_nrmse_all(_dic_s, _df_eu_s)
rmse_all['season'] = seas
rmse_all.index=rmse_all.index.rename('var')
rmse_all = rmse_all.reset_index().set_index(['var','season'])
ls_rmse.append(rmse_all)
return pd.concat(ls_rmse)
def rmse_seas_station(df_dic, df_eusaar):
df_dic_s = add_season(df_dic)
df_eusaar_seas = add_seas(df_eusaar)
ls_rmse =[]# pd.DataFrame()
for seas in df_dic[case]['season'].unique():
_df_eu_s = df_eusaar_seas[df_eusaar_seas['season'] == seas]
_dic_s = get_season(df_dic_s, seas)
for station in df_eusaar_seas['station'].unique():
_dic_ss = get_station(_dic_s, station)
_df_eu_ss = _df_eu_s[_df_eu_s['station'] == station]
rmse_dic, rmse_all = get_nrmse_all(_dic_ss, _df_eu_ss)
rmse_all['season'] = seas
rmse_all['station'] = station
rmse_all.index=rmse_all.index.rename('var')
rmse_all = rmse_all.reset_index().set_index(['var','season','station'])
ls_rmse.append(rmse_all)
    return pd.concat(ls_rmse)
def rmse_station(df_dic, df_eusaar):
#df_dic_s = add_season(df_dic)
#df_eusaar_seas = add_seas(df_eusaar)
ls_rmse =[]# pd.DataFrame()
for station in df_eusaar['station'].unique():
_dic_ss = get_station(df_dic, station)
_df_eu_ss = df_eusaar[df_eusaar['station'] == station]
rmse_dic, rmse_all = get_nrmse_all(_dic_ss, _df_eu_ss)
#rmse_all['season'] = seas
rmse_all['station'] = station
rmse_all.index=rmse_all.index.rename('var')
rmse_all = rmse_all.reset_index().set_index(['var','station'])
ls_rmse.append(rmse_all)
    return pd.concat(ls_rmse)
# %%
df_dic_dailym = {}
for case in df_dic.keys():
_df = df_dic[case]
_df = _df.reset_index().set_index(['time','station'])
_df= _df.groupby([pd.Grouper(level='station'),
pd.Grouper(level='time', freq='1D')]).mean()
df_dic_dailym[case] = _df.reset_index().set_index('time')
# %%
_df = df_eusaar
_df = _df.reset_index().set_index(['time','station'])
#_df.resample('D', level=-1).mean()
df_eusaar_dailym = _df.groupby([pd.Grouper(level='station'),
pd.Grouper(level='time', freq='1D')]).mean()
df_eusaar_dailym = df_eusaar_dailym.reset_index().set_index('time')
# %%
rms_st_day = rmse_station(df_dic_dailym, df_eusaar_dailym)
# %%
rms_st = rmse_station(df_dic, df_eusaar)
# %%
rms_st_day.unstack().stack().loc[('N$_{50-100}$')].plot.barh()
plt.show()
# %%
rms_st
rms_st.unstack().stack().loc[('N$_{50-100}$')].plot.barh()
plt.show()
# %%
rms_seas = rmse_seas(df_dic, df_eusaar)
# %%
rms_seas_st = rmse_seas_station(df_dic, df_eusaar)
# %%
rms_seas_st.unstack().stack().loc[('N$_{50-100}$','JJA')].plot.barh()
plt.show()
# %%
rms_seas_st.unstack().stack().loc[('N$_{50-100}$','MAM')].plot.barh()
plt.show()
# %%
rms_seas_st.unstack().stack().loc[('N$_{50-100}$','SON')].plot.barh()
plt.show()
# %%
rms_seas.unstack().stack().loc['N$_{50-100}$'].plot.bar()
plt.show()
# %%
rms_seas.unstack().stack().loc['N$_{50-500}$'].plot.bar()
plt.show()
# %%
plt_df = rms_seas.unstack().stack()[cases[::-1]]
# %%
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
legend_elements=[]
for case in cases_ns+[case_sec]:
print(case)
c=get_case_col(case)
legend_elements.append(Patch(facecolor=c, edgecolor=c, label=get_nice_name_case(case)))
# %%
var = 'N$_{50-100}$'
cdic = {case:get_case_col(case) for case in cases}
fig, axs = plt.subplots(1,3, figsize=[8,4], sharey=True)
def _plt_bar(var, ax):
color=[cdic.get(x, '#333333') for x in plt_df.columns]
ax =plt_df.loc[var].plot.barh(color=color , width=0.8, ax=ax, legend=False)
for i in ax.patches:
# get_width pulls left or right; get_y pushes up or down
ax.text(i.get_width()-0.8, i.get_y()+.04, \
str(round((i.get_width()), 2)), fontsize=14, color='w')
ax.set_title(var)
return ax
for var, ax in zip(['N$_{50-100}$','N$_{50-500}$','N$_{100-500}$'], axs.flatten()):
_plt_bar(var, ax)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.set_yticks([], minor=True)
ax.set_xlabel('NRMSE')
lgd=fig.legend(handles=legend_elements, frameon=False,bbox_to_anchor=(0.0, -.02, 1., -.02), loc='lower center',
ncol=3, borderaxespad=0.)#mode="expand",)
plt.subplots_adjust()
plt.savefig(figname + 'pdf', dpi=300, bbox_extra_artists=(lgd,), bbox_inches='tight')
print(figname)
plt.show()
| oas_dev/notebooks/eusaari/02-04-NRMSE/02-04-RMSE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import casadi as csd
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import linalg
from scipy.stats import chi2, f
# experimental conditions
expcond = [{'c0': [1.0, 2.0, 0.0, 0.0]}, {'c0': [1.0, 1.0, 0.0, 0.0]}]
meas_vars = [['ca', 'cb', 'cc'], ['ca', 'cc', 'cd']]
meas_vars_idx = [[0, 1, 2], [0, 2, 3]]
datasets = ['ABCD_data.csv', 'ABCD_data_2.csv']
expdata = []
for data in datasets:
data_df = pd.read_csv(data)
expdata.append(data_df)
# stoichiometry matrix for
# A + B -> C, B + C -> D
s = np.array([[-1.0, -1.0, 1.0, 0.0],
[0.0, -1.0, -1.0, 1.0]
])
def rxnfn(kf, tf, s, tgrid=None):
    # rate expressions: r_i = k_i * prod over reactants of c_j ** |s_ij|
    nr, nc = s.shape
    c = csd.MX.sym('c', nc)
    r = []
    for i in range(nr):
        ri = kf[i]
        for j in range(nc):
            if s[i, j] < 0:
                ri = ri * c[j] ** (-s[i, j])
        r.append(ri)
    # material balances: dc_i/dt = sum_j s_ji * r_j
    dc = []
    for i in range(nc):
        dci = 0
        for j in range(nr):
            dci = dci + s[j, i] * r[j]
        dc.append(dci)
    ode = {'x': c, 'p': kf, 'ode': csd.vertcat(*dc)}
    if tgrid is None:
        F = csd.integrator('F', 'cvodes', ode, {'tf': tf})
    else:
        F = csd.integrator('F', 'cvodes', ode, {'tf': tf, 'grid': tgrid, 'output_t0': True})
    return F
expinfo_list = [{'data': expdata[i], 'meas_var': meas_vars[i], 'meas_var_idx': meas_vars_idx[i], 'c0': expcond[i]['c0']}
for i in range(len(expdata))]
def get_exp_ssq(kf, expinfo):
data = expinfo['data']
meas_var = expinfo['meas_var']
meas_var_idx = expinfo['meas_var_idx']
c0 = expinfo['c0']
tgrid = np.append(0, data['t'].values)
ssq = 0
for i in range(len(tgrid) - 1):
F = rxnfn(kf = kf, tf = tgrid[i + 1] - tgrid[i], s = s)
res = F(x0 = c0, p = kf)
c0 = res['xf']
for (j, var) in enumerate(meas_var):
ssq = ssq + (data.iloc[i][var] - res['xf'][meas_var_idx[j]]) ** 2
return ssq
def sim_exp(kf, expinfo, tf):
data = expinfo['data']
meas_var = expinfo['meas_var']
meas_var_idx = expinfo['meas_var_idx']
c0 = expinfo['c0']
tgrid = list(np.linspace(0, tf))
F = rxnfn(kf = kf, tf = tf, s = s, tgrid = tgrid)
res = F(x0 = c0, p = kf)
res_fn = csd.Function('res_fn', [kf], [res['xf']])
return res_fn
kf = csd.MX.sym('kf', 2)
exp_ssq = 0
for i in range(len(expdata)):
    exp_ssq = exp_ssq + get_exp_ssq(kf, expinfo_list[i])
exp_ssq_fn = csd.Function('exp_ssq_fn', [kf], [exp_ssq])
# function to calculate hessian of sum of squares with respect to p = (k1, k2)
ssqfn_hess_calc = csd.hessian(exp_ssq_fn(kf), kf)
ssqfn_hess = csd.Function('ssqfn_hess', [kf], [ssqfn_hess_calc[0]])
# +
# NLP declaration
nlp = {'x': kf,'f': exp_ssq};
# Solve using IPOPT
solver = csd.nlpsol('solver','ipopt',nlp)
res = solver(x0=[3, 3], lbx = 0, ubx = 10)
# -
p_est = res['x']
p_est
res_fn = sim_exp(kf, expinfo_list[1], tf = 10)
cf = res_fn(p_est).full().T
t = list(np.linspace(0, 10))
datum = expinfo_list[1]['data']
fig, ax = plt.subplots()
ax.plot(t, cf[:, 1])
ax.scatter(datum['t'], datum['cb'])
# +
## calculate covariance matrix
# number of estimated parameters
n_est = 2
# number of data points
n_data = np.sum([expdata[i].shape[0] * len(meas_vars[i]) for i in range(len(expdata))])
# hessian
H_sol = ssqfn_hess(p_est)
# mean sum of squares
msq = res['f'] / (n_data - n_est)
print("mean sum of squares", msq)
print("Covariance matrix")
cov = 2 * msq * linalg.inv(H_sol)
print(cov)
# -
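# The diagonal of the covariance matrix holds the parameter variances, so
# approximate ±2σ confidence bounds follow directly. A sketch on made-up
# numbers (not the fitted estimates above):

```python
import math

def approx_intervals(p, cov, width=2.0):
    """Return (low, high) bounds p_i +/- width * sqrt(cov[i][i])."""
    return [(pi - width * math.sqrt(cov[i][i]),
             pi + width * math.sqrt(cov[i][i]))
            for i, pi in enumerate(p)]

# hypothetical estimates and diagonal covariance for illustration
bounds = approx_intervals([2.0, 1.0], [[0.04, 0.0], [0.0, 0.01]])
```

# For exact joint regions, the chi-squared/F quantiles imported above would
# replace the fixed width factor.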
| ABCD_parmest_multiexp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Materials for <i>agla</i>
#
# Author: <NAME> - <EMAIL>
#
# ## General Solid (library class `Körper`)
# ## Hexagonal Prism
# <br><br>
# %run agla/start
# +
# Construction from a chain of edges (identifiers from the agla library stay German)
k = Kreis(xy_ebene, O, 5)
p1, p2, p3, p4, p5, p6 = [k.pkt(i*60) for i in range(6)]
abb = verschiebung(v(0, 0, 12))
q1, q2, q3, q4, q5, q6 = [k.pkt(i*60).bild(abb) for i in range(6)]
sechskant = sk = Körper(p1, q1, p1, p2, q2, p2, p3, q3, p3, p4, q4, p4, p5, q5, p5,
                        p6, q6, p6, p1, q1, q2, q3, q4, q5, q6, q1)
# -
zeichne([sk, grün, 2], box=nein, achsen=nein)
sk.anz_ecken, sk.anz_kanten, sk.anz_seiten
# +
# Euler's polyhedron formula: #vertices - #edges + #faces = 2
sk.anz_ecken - sk.anz_kanten + sk.anz_seiten
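# Euler's formula can also be checked without the library, using the standard
# counts for a hexagonal prism (12 vertices, 18 edges, 8 faces):

```python
def euler_characteristic(vertices, edges, faces):
    """Euler characteristic V - E + F; equals 2 for convex polyhedra."""
    return vertices - edges + faces

# hexagonal prism: 12 corners, 18 edges, 8 faces (6 rectangles + 2 hexagons)
chi = euler_characteristic(12, 18, 8)
```

# The same check works for any convex polyhedron, e.g. a cube (8, 12, 6).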
# +
# Coloring the faces - realized via sub-solids ("Teilkörper") built from the faces
S = sk.seiten
H1 = Körper(*[S[i] for i in (2, 5)], seiten=ja)
H2 = Körper(*[S[i] for i in (0, 3, 7)], seiten=ja)
H3 = Körper(*[S[i] for i in (1, 4, 6)], seiten=ja)
zeichne([H1, blau, 'füll=ja'], [H2, rot, 'füll=ja'], [H3, gelb, 'füll=ja'],
achsen=nein, box=nein)
# +
# Marking the vertices
sk.mark_ecken
# +
# Marking the edges
sk.mark_kanten
# +
# Marking the faces
sk.mark_seiten
# -
| agla/mat/sechskant.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from emulator.main import Account
A = Account()
actions = np.random.randint(0, 3, size=(32))
rewards_list = []
for i in actions:
    r, n, d = A.step(i)
    rewards_list.append(r)
GAMMA = 0.9
R_list = []
R = 0
for i in rewards_list[::-1]:
    R = i + GAMMA * R
    R_list.append(R)
R_list.reverse()
tmp = pd.DataFrame()
tmp['reward'] = rewards_list
tmp['discount'] = R_list
tmp
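# The backward loop above is the standard discounted-return recursion
# R_t = r_t + GAMMA * R_{t+1}; packaged as a reusable helper it reads:

```python
def discounted_returns(rewards, gamma):
    """Discounted return for every step: R_t = r_t + gamma * R_{t+1}."""
    returns = []
    running = 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    returns.reverse()  # restore chronological order
    return returns
```

# Iterating backwards makes each step O(1), instead of re-summing the tail
# of the reward sequence for every t.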
| Note-6 A3CNet/Note-6.2 A3C与HS300指数择时/.ipynb_checkpoints/估值修正-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import os
import pandas as pd
import sys
import warnings
warnings.simplefilter('ignore')
pkg_dir = '/home/mrossol/NaTGenPD'
#pkg_dir = '..'
sys.path.append(pkg_dir)
import NaTGenPD as npd
import NaTGenPD.cluster as cluster
from NaTGenPD.analysis import ProcedureAnalysis, QuartileAnalysis
data_dir = '/scratch/mrossol/CEMS'
#data_dir = '/Users/mrossol/Downloads/CEMS'
out_dir = os.path.join(data_dir, 'analysis')
if not os.path.exists(out_dir):
os.makedirs(out_dir)
logger = npd.setup_logger('NaTGenPD.analysis', log_level='INFO')
# -
# # Procedure Stats
# +
fits_dir = os.path.join(data_dir, 'Final_Fits')
raw_paths = [os.path.join(data_dir, '{y}/SMOKE_{y}.h5'.format(y=y))
for y in (2016, 2017)]
clean_path = os.path.join(data_dir, 'SMOKE_Clean_2016-2017.h5')
filter_path = os.path.join(data_dir, 'SMOKE_Filtered_2016-2017.h5')
process_dir = os.path.join(out_dir, 'process')
if not os.path.exists(process_dir):
os.makedirs(process_dir)
out_path = os.path.join(process_dir, 'process_stats_2016-2017.csv')
ProcedureAnalysis.stats(fits_dir, raw_paths, clean_path, filter_path, out_path)
# -
process_dir = os.path.join(out_dir, 'process')
path = os.path.join(process_dir, 'process_stats_2016-2017.csv')
stats_df = pd.read_csv(path, index_col=0)
stats_df
# +
plant_types = ['Boiler (Coal)', 'Boiler (NG)', 'Boiler (Oil)',
'Boiler (Other Solid Fuel)', 'CT (NG)', 'CT (Oil)', 'CC (NG)', 'CC (Oil)']
table_1 = stats_df.loc[plant_types].copy()
# Combine Boilers and Stokers
table_1.loc['Boiler (Coal)'] += stats_df.loc['Stoker (Coal)']
table_1.loc['Boiler (NG)'] += stats_df.loc['Stoker (NG)']
table_1.loc['Boiler (Other Solid Fuel)'] += stats_df.loc['Stoker (Other Solid Fuel)']
table_1['raw_cf (GW)'] = table_1['raw_cf'] / 1000
table_1['raw_gen (TWh)'] = table_1['raw_gen'] / 1000000
# Compute units/cf removed
table_1['step_1_cf_removed (GW)'] = (table_1['raw_cf'] - table_1['clean_cf']) / 1000
table_1['step_1_gen_removed (TWh)'] = (table_1['raw_gen'] - table_1['clean_gen']) / 1000000
table_1['step_3_cf_removed (GW)'] = (table_1['clean_cf'] - table_1['final_cf']) / 1000
table_1['step_3_gen_removed (TWh)'] = (table_1['clean_gen'] - table_1['final_gen']) / 1000000
table_1['final_cf (GW)'] = table_1['final_cf'] / 1000
table_1['final_gen (TWh)'] = table_1['final_gen'] / 1000000
drop_cols = ['total_points', 'non_zero_points', 'raw_cf', 'raw_gen',
'clean_units', 'clean_cf', 'clean_gen', 'filtered_units',
'filtered_cf', 'filtered_gen',
'final_cf', 'final_gen', 'final_points']
table_1 = table_1.drop(columns=drop_cols)
table_1.at['Total'] = table_1.sum()
cols = ['raw_cf (GW)', 'raw_gen (TWh)',
'step_1_cf_removed (GW)', 'step_1_gen_removed (TWh)',
'step_3_cf_removed (GW)', 'step_3_gen_removed (TWh)',
'final_cf (GW)', 'final_gen (TWh)']
out_path = os.path.join(out_dir, 'process/Table_1.csv')
table_1[cols].to_csv(out_path)
table_1[cols]
# +
plant_types = ['Boiler (Coal)', 'Boiler (NG)', 'Boiler (Oil)',
'Boiler (Other Solid Fuel)', 'CT (NG)', 'CT (Oil)', 'CC (NG)', 'CC (Oil)']
table_2 = stats_df.loc[plant_types].copy()
# Combine Boilers and Stokers
table_2.loc['Boiler (Coal)'] += stats_df.loc['Stoker (Coal)']
table_2.loc['Boiler (NG)'] += stats_df.loc['Stoker (NG)']
table_2.loc['Boiler (Other Solid Fuel)'] += stats_df.loc['Stoker (Other Solid Fuel)']
# Compute units/points removed on the raw counts, then scale point counts to millions
table_2['step_1_units_removed'] = table_2['raw_units'] - table_2['clean_units']
table_2['step_1_points_removed'] = (table_2['total_points'] - table_2['clean_points']) / 1000000
table_2['step_3_units_removed'] = table_2['clean_units'] - table_2['final_units']
table_2['step_3_points_removed'] = (table_2['clean_points'] - table_2['final_points']) / 1000000
table_2['raw_points'] = table_2['total_points'] / 1000000
table_2['final_points'] /= 1000000
drop_cols = ['total_points', 'non_zero_points', 'raw_cf', 'raw_gen',
'clean_units', 'clean_points', 'clean_cf', 'clean_gen',
'filtered_units', 'filtered_points', 'filtered_cf', 'filtered_gen',
'final_cf', 'final_gen',]
table_2 = table_2.drop(columns=drop_cols)
table_2.at['Total'] = table_2.sum()
cols = ['raw_units', 'raw_points',
'step_1_units_removed', 'step_1_points_removed',
'step_3_units_removed', 'step_3_points_removed',
'final_units', 'final_points']
out_path = os.path.join(out_dir, 'process/table_2.csv')
table_2[cols].to_csv(out_path)
table_2[cols]
# -
# # Quartile Stats
# +
fits_dir = os.path.join(data_dir, 'Final_Fits')
filter_path = os.path.join(data_dir, 'SMOKE_Filtered_2016-2017.h5')
quartile_dir = os.path.join(out_dir, 'final_fits')
analysis = QuartileAnalysis(fits_dir, filter_path)
# +
group_type = 'CC (NG)'
group_fits = analysis._fits[group_type]
if "CC" in group_type:
group_fits['unit_id'] = group_fits['unit_id'].str.split('-').str[0]
group_fits = group_fits.groupby('unit_id').mean().reset_index()
pos = group_fits['a0'].isnull()
group_fits = group_fits.loc[~pos]
with npd.CEMS(analysis._filtered_path, mode='r') as f:
filtered_df = f[group_type].df
pos = filtered_df['cluster'] >= 0
filtered_df = filtered_df.loc[pos]
pos = filtered_df['unit_id'].isin(group_fits['unit_id'].to_list())
filtered_df = filtered_df.loc[pos]
ave_hr = filtered_df.groupby('unit_id')['heat_rate'].mean()
ave_hr.name = 'ave_heat_rate'
filtered_df = pd.merge(filtered_df,
ave_hr.to_frame().reset_index(),
on='unit_id')
load_max = filtered_df.groupby('unit_id')['load'].max()
load_max.name = 'load_max'
filtered_df = pd.merge(filtered_df,
load_max.to_frame().reset_index(),
on='unit_id')
filtered_df['cf'] = (filtered_df['load']
/ filtered_df['load_max'])
filtered_df[['unit_id', 'load', 'load_max', 'cf', 'ave_heat_rate']].head()
# +
fits_dir = os.path.join(data_dir, 'CEMS_Fits')
filter_path = os.path.join(data_dir, 'SMOKE_Filtered_2016-2017.h5')
quartile_dir = os.path.join(out_dir, 'filtered_fits')
if not os.path.exists(quartile_dir):
os.makedirs(quartile_dir)
out_path = os.path.join(quartile_dir, 'filtered_quartile_stats.csv')
QuartileAnalysis.stats(fits_dir, filter_path, out_path)
# -
quartile_df = pd.read_csv(out_path, index_col=0)
quartile_df
# +
fits_dir = os.path.join(data_dir, 'Final_Fits')
filter_path = os.path.join(data_dir, 'SMOKE_Filtered_2016-2017.h5')
quartile_dir = os.path.join(out_dir, 'final_fits')
if not os.path.exists(quartile_dir):
os.makedirs(quartile_dir)
out_path = os.path.join(quartile_dir, 'final_quartile_stats.csv')
QuartileAnalysis.stats(fits_dir, filter_path, out_path)
# -
quartile_df = pd.read_csv(out_path, index_col=0)
quartile_df
| notebooks/CEMS Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# # U.S. Inaugural data
# 1. A list of 1 or more contextual datasets you have identified, links to where they reside, and a sentence about why they might be useful in telling the final story.
#
# All transcriptions of U.S. inaugural speeches are natural-language text, which can be read by paragraph, sentence, or word. The data can be obtained from https://avalon.law.yale.edu/subject_menus/inaug.asp . I also plan to integrate Google Trends data for these specific keywords.
#
# 2. One paragraph explaining how to use the dashboard you created, to help someone who is not an expert understand your dataset.
#
# I created a heatmap where the rows are the presidents (represented by their inauguration years) and the columns are keywords from their speeches. The cell values are the word frequencies in each speech. When a cell is selected, the dashboard reports the word frequency of that cell and the ranking of the presidents for that keyword.
#
#
# +
import nltk
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from collections import Counter
from nltk.corpus import inaugural
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk import FreqDist
import ipywidgets
import bqplot
import pandas as pd
import pytrends
lemmatizer = WordNetLemmatizer()
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
import string
# %matplotlib inline
# +
nltk.download('stopwords')
nltk.download('inaugural')
nltk.download('wordnet')
STOP = stopwords.words('english')
# +
def get_pres_id():
pres_list = inaugural.fileids()
# name_set = set()
last_name = None
result = []
for line in pres_list:
        year, n = line[:-len(".txt")].split("-")  # rstrip(".txt") would strip characters, not the suffix
if n == last_name:
last_name = n
continue
else:
last_name = n
result.append((year, n, line))
return result
year = np.array([int(tri[0]) for tri in get_pres_id()])
year2name = {year: name for year, name, line in get_pres_id()}
# -
# # Wordcloud
# Let's have a glimpse of the inaugural speeches! I created a word cloud for the corpus, which highlights the most important (frequent) words in the presidents' speeches.
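# The cloud is driven purely by token frequencies; the counting step in
# `create_dic` below is essentially `collections.Counter` over filtered tokens.
# A minimal sketch (the stopword set here is a stand-in for NLTK's list):

```python
from collections import Counter

STOP_WORDS = {"the", "of", "and"}  # stand-in for the NLTK stopword list

def word_frequencies(words):
    """Count alphabetic, non-stopword tokens, case-folded."""
    return Counter(w.lower() for w in words
                   if w.isalpha() and w.lower() not in STOP_WORDS)

freq_sketch = word_frequencies(["The", "people", "of", "the", "nation", "people"])
```

# `Counter.most_common(n)` then yields the top-n pairs the cloud is built from.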
# +
def create_dic(ids):
total = []
for year, name, fileid in ids:
words = [w for w in inaugural.words(fileid) if w.isalpha() and w.lower() not in STOP and w not in string.punctuation]
total.extend(words)
ctr = Counter(total)
return ctr
freq = create_dic(get_pres_id())
freq_dic = dict(freq.most_common(50))
wordcloud = WordCloud().generate_from_frequencies(freq_dic)
fig, ax = plt.subplots(figsize=(10,5))
ax.imshow(wordcloud)
plt.show()
# +
kw_list = "justice,democracy,republic,tax,economy,patriotism,god,liberty,crime,wealth,poverty".split(',')
def to_kw2d(ids, kw_list):
x = []
y = []
idx = []
dic = {k: [] for k in kw_list}
for year, name, fileid in ids:
idx.append(name)
words = [lemmatizer.lemmatize(w.lower()) for w in inaugural.words(fileid) if w.isalnum()]
ctr = Counter(words)
for k in kw_list:
dic[k].append(ctr.get(k, 0))
df = pd.DataFrame(dic, index=idx)
return df
kw2d = to_kw2d(get_pres_id(),kw_list)
# -
# # Keywords frequency - years (corpus)
#
# I am interested in the frequency of some keywords: justice, democracy, republic, tax,economy, patriotism, god, liberty, crime, wealth, poverty
#
# People care about these topics, so presidents mention them in their speeches. But how frequently? I created an interactive plot showing each keyword's frequency in the inaugural speeches, normalized by the total word count of each speech.
# +
def keywords(ids, kw):
x = []
y = []
for year, name, fileid in ids:
words = [lemmatizer.lemmatize(w.lower()) for w in inaugural.words(fileid) if w.isalnum()]
ctr = Counter(words)
cnt = sum([ctr[i] for i in kw])
all_words = len(inaugural.words(fileid))
ratio = cnt/all_words
x.append(year)
y.append(ratio)
return x, y
@ipywidgets.interact(kw = kw_list)
def kw_in_aug(kw):
ids = get_pres_id()
x, y = keywords(ids, [kw])
fig, ax = plt.subplots(1,1)
ax.xaxis.set_major_locator(ticker.MultipleLocator(base=10))
ax.bar(x, y)
plt.show()
# -
# # Keywords frequency - years (Google Trends)
#
# Internet users' interests vary over the years. To mine what they care about, I crawled data from Google Trends.
# +
from pytrends.request import TrendReq
pytrends = TrendReq(hl='en-US', tz=360)
@ipywidgets.interact(kw = kw_list)
def trend(kw):
# kw_list = ["tax"]
pytrends.build_payload([kw], cat=0, timeframe='all', geo='US', gprop='')
df = pytrends.interest_over_time()
fig, ax = plt.subplots(1,1)
df.plot(ax=ax)
# plt.show()
# -
# # Interactive with the data!
#
# You can explore the details of the data through the interactive heatmap and get the ranking of the presidents for each keyword.
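# The per-keyword ranking shown on selection is just a descending sort over one
# column of the frequency table; a pure-Python sketch with hypothetical counts:

```python
def top_presidents(counts, k=3):
    """Return (president, count) pairs sorted by count, descending."""
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:k]

ranks = top_presidents({"Washington": 2, "Lincoln": 7, "Obama": 5})
```

# In the dashboard below the same idea is expressed with `argsort` on a
# pandas column.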
# +
mySelectedLabel = ipywidgets.Label()
y_range = ipywidgets.IntRangeSlider(value=[30,40],
min=0,
max=len(get_pres_id()),
step=1,
description="Presidency",
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True)
def updateYAxis(change):
#Update X-axis min/max value here
if change['type'] == 'change' and change['name'] == 'value':
y_sc.domain = year[change['new'][0]:change['new'][1]].tolist()
def on_selected(change):
    if len(change['owner'].selected) == 1:
        i, j = change['owner'].selected[0]
        v = np.array(kw2d)[i, j]
        idx = (-kw2d[kw_list[j]]).argsort()
        pres_by_key = dict(kw2d[kw_list[j]][idx][:4])
        bykey_str = '\t'.join(k + '=' + str(c) for k, c in pres_by_key.items())
        mySelectedLabel.value = 'Selected word frequency=' + str(v) + '. Top presidents of "' + kw_list[j] + '" are ' + bykey_str + ', in ' + ', '.join([str(year[i]) for i in idx][:4]) + '.'
year = np.array([int(tri[0]) for tri in get_pres_id()])
col_sc = bqplot.ColorScale(scheme="Blues")
x_sc = bqplot.OrdinalScale()
y_sc = bqplot.OrdinalScale()
y_sc.domain = year[-10:].tolist()
# 3. Axis -- for colors, the axis is a colorbar!
ax_col = bqplot.ColorAxis(scale = col_sc, orientation='vertical', side='right')
ax_x = bqplot.Axis(scale = x_sc,) # same x/y ax we had before
ax_y = bqplot.Axis(scale = y_sc, orientation='vertical')
# 4. Mark -- heatmap
heat_map = bqplot.GridHeatMap(color = kw2d, row=year,column=kw_list, scales = {'color':col_sc, 'row':y_sc, 'column':x_sc},
interactions={'click':'select'},
anchor_style={'fill':'blue'},
selected_style={'opacity':1.0},
unselected_style={'opacity':0.8})
# 5. Interactions -- going to be built into the GridHeatMap mark (how things *look* when selection happens)
# BUT I'm going to define what happens when the interaction takes place (something is selected)
heat_map.observe(on_selected, 'selected')
y_range.observe(updateYAxis)
fig = bqplot.Figure(marks = [heat_map], axes=[ax_col, ax_x, ax_y]) # have to add this axis
ipywidgets.VBox([y_range,mySelectedLabel,fig])
# -
| files/final_dv_part3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="Logo of the 'Learning Python' project: a snake drawn in yellow and blue weaving between the letters of the course name. The slogan above the name reads: a free project for learning programming in Hebrew.">
# # <span style="text-align: right; direction: rtl; float: right;">Comprehensions</span>
# ## <span style="text-align: right; direction: rtl; float: right; clear: both;">Introduction</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Python developers are very fond of short, simple, well-phrased code.<br>
# The language's creators often focus on letting developers write clear, concise code quickly.<br>
# In this notebook we will learn how to walk over an iterable and easily build interesting data structures from it.
# </p>
# ## <span style="text-align: right; direction: rtl; float: right; clear: both;">List Comprehension</span>
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">Processing Lists</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Let's start with a relatively simple task:<br>
# given a list of names, I want to make every name in the list Greek.<br>
# As everyone knows, any name can be made Greek by appending the syllable <em>os</em> to it. For example, the name Yam in Greek is Yamos.
# </p>
names = ['Yam', 'Gal', 'Orpaz', 'Aviram']
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# What are we waiting for? Let's create the new list:
# </p>
new_names = []
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# We'll iterate over the old list with a <code>for</code> loop, concatenate "<em>os</em>" to every element, and append the result to the new list:
# </p>
for name in names:
new_names.append(name + 'os')
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# When the loop finishes running, we'll have a new list of Greek names:
# </p>
print(new_names)
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Looking at the loop we wrote, we can identify four main components.
# </p>
# <table style="text-align: right; direction: rtl; clear: both; font-size: 1.3rem">
# <caption style="text-align: center; direction: rtl; clear: both; font-size: 2rem; padding-bottom: 2rem;">Breaking a list-building for loop into its components</caption>
# <thead>
# <tr>
# <th>Component</th>
# <th>Description</th>
# <th>Example</th>
# </tr>
# </thead>
# <tbody>
# <tr>
# <td><span style="background: #073b4c; color: white; padding: 0.2em;">The old iterable</span></td>
# <td>The original collection of data we iterate over.</td>
# <td><var>names</var></td>
# </tr>
# <tr>
# <td><span style="background: #118ab2; color: white; padding: 0.15em;">The old value</span></td>
# <td>The loop variable: the laser pointer aimed, one at a time, at a single value from the old iterable.</td>
# <td><var>name</var></td>
# </tr>
# <tr>
# <td><span style="background: #57bbad; color: white; padding: 0.15em;">The new value</span></td>
# <td>The value we want to put into the iterable we are building, usually derived from the old value.</td>
# <td><code dir="ltr">name + 'os'</code></td>
# </tr>
# <tr>
# <td><span style="background: #ef476f; color: white; padding: 0.15em;">The new iterable</span></td>
# <td>The iterable we want to create; the value we end up with when the run finishes.</td>
# <td><var>new_names</var></td>
# </tr>
# </tbody>
# </table>
# <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
# <div style="display: flex; width: 10%; float: right; clear: both;">
# <img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
# </div>
# <div style="width: 70%">
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# השתמשו ב־<var>map</var> כדי ליצור מ־<var>names</var> רשימת שמות יווניים באותה הצורה.<br>
# </p>
# </div>
# <div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
# <p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
# <strong>חשוב!</strong><br>
# פתרו לפני שתמשיכו!
# </p>
# </div>
# </div>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# צורת ה־<var>map</var> בפתרון שלכם הייתה אמורה להשתמש בדיוק באותם חלקי הלולאה.<br>
# אם עדיין לא ניסיתם לפתור בעצמכם, זה הזמן לכך.<br>
# התשובה שלכם אמורה להיראות בערך כך:
# </p>
new_names = map(lambda name: name + 'os', names)
print(list(new_names))
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">הטכניקה</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# <dfn>list comprehension</dfn> היא טכניקה שמטרתה לפשט את מלאכת הרכבת הרשימה, כך שתהיה קצרה, מהירה וקריאה.<br>
# ניגש לעניינים! אבל ראו הוזהרתם – במבט ראשון list comprehension עשוי להיראות מעט מאיים וקשה להבנה.<br>
# הנה זה בא:
# </p>
names = ['Yam', 'Gal', 'Orpaz', 'Aviram']
new_names = [name + 'os' for name in names] # list comprehension
print(new_names)
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# הדבר הראשון שמבלבל כשנפגשים לראשונה עם list comprehension הוא סדר הקריאה המשונה:<br>
# </p>
# <ol style="text-align: right; direction: rtl; float: right; clear: both;">
# <li>list comprehension מתחיל בפתיחת סוגריים מרובעים (ומסתיים בסגירתם), שמציינים שאנחנו מעוניינים ליצור רשימה חדשה.</li>
# <li>את מה שבתוך הסוגריים עדיף להתחיל לקרוא מהמילה <code>for</code> – נוכל לראות את הביטוי <code dir="ltr">for name in names</code> שאנחנו כבר מכירים.</li>
# <li>מייד לפני המילה <code>for</code>, נכתוב את ערכו של האיבר שאנחנו רוצים לצרף לרשימה החדשה בכל איטרציה של הלולאה.</li>
# </ol>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# נביט בהשוואת החלקים של ה־list comprehension לחלקים של לולאת ה־<code>for</code>:
# </p>
# <figure>
# <img src="images/for_vs_listcomp.png?v=2" style="max-width: 500px; margin-right: auto; margin-left: auto; text-align: center;" alt="לולאת ה־for שכתבנו למעלה עם המשתנה names (ה־iterable) מודגש בצבע מספר 1, המשתנה name (הערך הישן) מודגש בצבע מספר 2, הביטוי name + 'os' מודגשים בצבע 3 והמשתנה new_names (הרשימה החדשה) בצבע 4. מתחתיו לביטוי זה יש קו מקווקו, ומתחתיו הביטוי של ה־list comprehension עם אותם חלקים צבועים באותם צבעים."/>
# <figcaption style="margin-top: 2rem; text-align: center; direction: rtl;">השוואה בין יצירת רשימה בעזרת <code>for</code> ובעזרת list comprehension</figcaption>
# </figure>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# list comprehension מאפשרת לשנות את הערך שנוסף לרשימה בקלות.<br>
# מסיבה זו, מתכנתים רבים יעדיפו את הטכניקה הזו על פני שימוש ב־<var>map</var>, שבה נצטרך להשתמש ב־<code>lambda</code> ברוב המקרים.
# </p>
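To make the comparison concrete, here is the same Greek-name transformation written both with `map` and as a list comprehension (a small sketch using the list from above):

```python
names = ['Yam', 'Gal', 'Orpaz', 'Aviram']

# With map we need a lambda to express the transformation:
with_map = list(map(lambda name: name + 'os', names))

# The list comprehension expresses the same idea without a lambda:
with_comprehension = [name + 'os' for name in names]

print(with_map)
print(with_comprehension)
```

Both produce the same list; the comprehension simply spells out the new value inline.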
# <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
# <div style="display: flex; width: 10%; float: right; clear: both;">
# <img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
# </div>
# <div style="width: 70%">
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# נתונה הרשימה <code dir="ltr">numbers = [1, 2, 3, 4, 5]</code>.<br>
# השתמשו ב־list comprehension כדי ליצור בעזרתה את הרשימה <code dir="ltr">[1, 4, 9, 16, 25]</code>.<br>
# האם אפשר להשתמש בפונקציה <var>range</var> במקום ב־<var>numbers</var>?
# </p>
# </div>
# <div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
# <p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
# <strong>חשוב!</strong><br>
# פתרו לפני שתמשיכו!
# </p>
# </div>
# </div>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# list comprehension הוא מבנה גמיש מאוד!<br>
# נוכל לכתוב בערך שאנחנו מצרפים לרשימה כל ביטוי שיתחשק לנו, ואפילו לקרוא לפונקציות.<br>
# נראה כמה דוגמאות:
# </p>
names = ['<NAME>', '<NAME>', '<NAME>']
reversed_names = [name[::-1] for name in names]
print(reversed_names)
repeated_digits = [int(str(number) * 9) for number in range(1, 10)]
print(repeated_digits)
places = (
{'name': 'salar de uyuni', 'location': 'Bolivia'},
{'name': 'northern lake baikal', 'location': 'Russia'},
{'name': 'kuang si falls', 'location': 'Laos'},
)
places_titles = [place['name'].title() for place in places]
print(places_titles)
# <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
# <div style="display: flex; width: 10%; float: right; clear: both;">
# <img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
# </div>
# <div style="width: 70%">
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# השתמשו ב־list comprehension כדי ליצור את הרשימה הבאה:<br>
# <code dir="ltr">[(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]</code>.
# </p>
# </div>
# <div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
# <p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
# <strong>חשוב!</strong><br>
# פתרו לפני שתמשיכו!
# </p>
# </div>
# </div>
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">תנאים</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# נציג תבנית נפוצה נוספת הנוגעת לעבודה עם רשימות.<br>
# לעיתים קרובות, נרצה להוסיף איבר לרשימה רק אם מתקיים לגביו תנאי מסוים.<br>
# לדוגמה, ניקח מרשימת השמות הבאה רק את האנשים ששמם ארוך מתריסר תווים:
# </p>
names = ['<NAME>', '<NAME>', "<NAME>", '<NAME>', '<NAME>']
long_names = []
for name in names:
    if len(name) > 12:
        long_names.append(name)
long_names
# <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
# <div style="display: flex; width: 10%; float: right; clear: both;">
# <img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
# </div>
# <div style="width: 70%">
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# השתמשו ב־<var>filter</var> כדי ליצור מ־<var>names</var> רשימת שמות ארוכים באותה הצורה.<br>
# </p>
# </div>
# <div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
# <p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
# <strong>חשוב!</strong><br>
# פתרו לפני שתמשיכו!
# </p>
# </div>
# </div>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# נפרק את הקוד הקצר שיצרנו למעלה למרכיביו:
# </p>
# <table style="text-align: right; direction: rtl; clear: both; font-size: 1.3rem">
# <caption style="text-align: center; direction: rtl; clear: both; font-size: 2rem; padding-bottom: 2rem;">פירוק מרכיבי לולאת for עם התניה ליצירת רשימה חדשה</caption>
# <thead>
# <tr>
# <th>שם המרכיב</th>
# <th>תיאור המרכיב</th>
# <th>דוגמה</th>
# </tr>
# </thead>
# <tbody>
# <tr>
# <td><span style="background: #073b4c; color: white; padding: 0.2em;">איפוס</span></td>
# <td>אתחול הרשימה לערך ריק.</td>
# <td><code dir="ltr">long_names = []</code></td>
# </tr>
# <tr>
# <td><span style="background: #118ab2; color: white; padding: 0.15em;">הלולאה</span></td>
# <td>החלק שעובר על כל האיברים ב־iterable הקיים ויוצר משתנה שאליו אפשר להתייחס.</td>
# <td><code dir="ltr">for name in names:</code></td>
# </tr>
# <tr>
# <td><span style="background: #57bbad; color: white; padding: 0.15em;">הבדיקה</span></td>
# <td>התניה שבודקת אם הערך עונה על תנאי מסוים.</td>
# <td><code dir="ltr">if len(name) > 12:</code></td>
# </tr>
# <tr>
# <td><span style="background: #ef476f; color: white; padding: 0.15em;">הוספה</span></td>
# <td>צירוף האיבר לרשימה החדשה, אם הוא עונה על התנאי שנקבע בבדיקה.</td>
# <td><code dir="ltr">long_names.append(name)</code></td>
# </tr>
# </tbody>
# </table>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# ונלמד איך מממשים את אותו הרעיון בדיוק בעזרת list comprehension:
# </p>
names = ['<NAME>', '<NAME>', "<NAME>", '<NAME>', '<NAME>']
long_names = [name for name in names if len(name) > 12]
print(long_names)
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# נראה שוב השוואה בין list comprehension ללולאת <code>for</code> רגילה, הפעם עם תנאי:
# </p>
# <figure>
# <img src="images/for_vs_listcomp_with_if.png?v=1" style="max-width: 600px; margin-right: auto; margin-left: auto; text-align: center;" alt="בחלק העליון: לולאת ה־for שכתבנו למעלה. long_names = [] מודגש בצבע 1, גוף הלולאה בצבע 2, הבדיקה בצבע 3 וההוספה של האיבר לרשימה בצבע 4. בחלק התחתון: long_names = [ בצבע 1, name בצבע 2, for name in names בצבע 3, if len(name) > 12 בצבע 4, ] בצבע 1."/>
# <figcaption style="margin-top: 2rem; text-align: center; direction: rtl;">השוואה בין יצירת רשימה בעזרת <code>for</code> ובעזרת list comprehension</figcaption>
# </figure>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# גם כאן יש לנו סדר קריאה משונה מעט, אך הרעיון הכללי של ה־list comprehension נשמר:<br>
# </p>
# <ol style="text-align: right; direction: rtl; float: right; clear: both;">
# <li>list comprehension מתחיל בפתיחת סוגריים מרובעים (ומסתיים בסגירתם), כדי לציין שאנחנו מעוניינים ליצור רשימה חדשה.</li>
# <li>את מה שבתוך הסוגריים עדיף להתחיל לקרוא מהמילה <code>for</code> – נוכל לראות את הביטוי <code dir="ltr">for name in names</code> שאנחנו כבר מכירים.</li>
# <li>ממשיכים לקרוא את התנאי, אם קיים כזה. רק אם התנאי יתקיים, יתווסף האיבר לרשימה.</li>
# <li>מייד לפני המילה <code>for</code>, נכתוב את ערכו של האיבר שאנחנו רוצים לצרף לרשימה בכל איטרציה של הלולאה.</li>
# </ol>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# אפשר לשלב את השיטות כדי לבנות בקלילות רשימות מורכבות.<br>
# נמצא את שמות כל הקבצים שהסיומת שלהם היא "<em dir="ltr">.html</em>":
# </p>
files = ['moshe_homepage.html', 'yahoo.html', 'python.html', 'shnitzel.gif']
html_names = [file.split('.')[0] for file in files if file.endswith('.html')]
print(html_names)
# #### <span style="text-align: right; direction: rtl; float: right; clear: both;">תרגיל ביניים: טיפול שורש</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# בחנות של סנדל'ה הארון קצת מבולגן.<br>
# כשלקוח נכנס ומבקש מסנדל'ה למדוד מידה מסוימת, היא צריכה לפשפש בין אלפי המוצרים בארון, ולפעמים המידות שהיא מוצאת שם מוזרות מאוד.<br>
# ההנחיות שסנדל'ה נתנה לנו לצורך סידור הארון שלה די פשוטות:<br>
# התעלמו מכל מידה שיש בה תו שאינו ספרה או נקודה, והוציאו שורש רק מהמידות המספריות.<br>
# התעלמו גם ממספרים עם יותר מנקודה אחת.
# </p>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# לדוגמה, עבור הארון <code dir="ltr">['100', '25.0', '12a', 'mEoW', '0']</code>, החזירו <samp dir="ltr">[10.0, 5.0, 0.0]</samp>.<br>
# עבור הארון <code dir="ltr">['Area51', '303', '2038', 'f00b4r', '314.1']</code>, החזירו <samp dir="ltr">[17.4, 45.14, 17.72]</samp>.<br>
# (מחקנו קצת ספרות אחרי הנקודה בשביל הנראות).
# </p>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# כתבו פונקציה בשם <var>organize_closet</var> שמקבלת רשימת ארון ומסדרת אותו.<br>
# תוכלו לבדוק את עצמכם באמצעות הפונקציה <var>generate_closet</var> שתיצור עבורכם ארון אסלי מהחנות של סנדל'ה.
# </p>
# +
import random
import string
CHARACTERS = f'.{string.digits}{string.ascii_letters}'
WEIGHTS = [1] * len(f'.{string.digits}') + [0.05] * len(string.ascii_letters)
def generate_size(length):
    return ''.join(random.choices(CHARACTERS, weights=WEIGHTS, k=length))


def generate_closet(closet_size=20, shoe_size=4):
    return [generate_size(shoe_size) for _ in range(closet_size)]
generate_closet(5)
# -
# <div class="align-center" style="display: flex; text-align: right; direction: rtl;">
# <div style="display: flex; width: 10%; ">
# <img src="images/tip.png" style="height: 50px !important;" alt="אזהרה!">
# </div>
# <div style="width: 90%">
# <p style="text-align: right; direction: rtl;">
# בפייתון, נהוג לכנות משתנה שלא יהיה בו שימוש בעתיד כך: <code>_</code>.<br>
# דוגמה טובה אפשר לראות בלולאה שב־<code>generate_closet</code>.
# </p>
# </div>
# </div>
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">Dictionary Comprehension ו־Set Comprehension</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# מלבד <strong>list</strong> comprehension, קיימים גם <strong>set</strong> comprehension ו־<strong>dictionary</strong> comprehension שפועלים בצורה דומה.<br>
# הרעיון בבסיסו נשאר זהה – שימוש בערכי iterable כלשהו לצורך יצירת מבנה נתונים חדש בצורה קריאה ומהירה.<br>
# נראה דוגמה ל־dictionary comprehension שבו המפתח הוא מספר, והערך הוא אותו המספר בריבוע:
# </p>
powers = {i: i ** 2 for i in range(1, 11)}
print(powers)
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# בדוגמה למעלה חישבנו את הריבוע של כל אחד מעשרת המספרים החיוביים הראשונים.<br>
# משתנה הלולאה <var>i</var> עבר על כל אחד מהמספרים בטווח שבין 1 ל־11 (לא כולל), ויצר עבור כל אחד מהם את המפתח <var>i</var>, ואת הערך <code>i ** 2</code>.<br>
# </p>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# ראו כיצד בעזרת התחביר העוצמתי הזה בפייתון, אנחנו יכולים ליצור מילונים מורכבים בקלות רבה.<br>
# כל שעלינו לעשות הוא להשתמש בסוגריים מסולסלים במקום במרובעים,<br>
# ולציין מייד אחרי פתיחת הסוגריים את הצמד שנרצה להוסיף בכל איטרציה – מפתח וערך, כשביניהם נקודתיים.
# </p>
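Another small sketch (the `names` list is reused from earlier, purely for illustration): the key and value expressions can each use the loop variable, and a trailing condition filters key-value pairs just like in a list comprehension:

```python
names = ['Yam', 'Gal', 'Orpaz', 'Aviram']

# Key is the name itself, value is its length:
name_lengths = {name: len(name) for name in names}
print(name_lengths)

# A condition filters which pairs enter the new dictionary:
long_name_lengths = {name: len(name) for name in names if len(name) > 3}
print(long_name_lengths)
```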
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# בצורה דומה אפשר ליצור set comprehension:
# </p>
sentence = "99 percent of all statistics only tell 49 percent of the story."
words = {word for word in sentence.lower().split() if word.isalpha()}
print(words)
print(type(words))
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# התחביר של set comprehension כמעט זהה לתחביר של list comprehension.<br>
# ההבדל היחיד ביניהם הוא שב־set comprehension אנחנו משתמשים בסוגריים מסולסלים.<br>
# ההבדל בינו לבין dictionary comprehension הוא שאנחנו משמיטים את הנקודתיים והערך, ומשאירים רק את המפתח.
# </p>
# <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
# <div style="display: flex; width: 10%; float: right; clear: both;">
# <img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
# </div>
# <div style="width: 70%">
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# מצאו כמה מהמספרים הנמוכים מ־1,000 מתחלקים ב־3 וב־7 ללא שארית.<br>
# השתמשו ב־set comprehension.
# </p>
# </div>
# <div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
# <p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
# <strong>חשוב!</strong><br>
# פתרו לפני שתמשיכו!
# </p>
# </div>
# </div>
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">Generator Expression</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# בשבוע שעבר למדנו על הכוח הטמון ב־generators.<br>
# בזכות שמירת ערך אחד בלבד בכל פעם, generators מאפשרים לנו לכתוב תוכניות יעילות מבחינת צריכת הזיכרון.
# </p>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# נכתוב generator פשוט שמניב עבורנו את אורכי השורות בטקסט מסוים:
# </p>
# +
def get_line_lengths(text):
    for line in text.splitlines():
        if line.strip():  # skip empty lines
            yield len(line)


# For example:
with open('resources/states.txt') as states_file:
    states = states_file.read()

print(list(get_line_lengths(states)))
# -
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# חדי העין כבר זיהו את התבנית המוכרת – יש פה <code>for</code>, מייד אחריו <code>if</code> ומייד אחריו אנחנו יוצרים איבר חדש.<br>
# אם כך, generator expression הוא בסך הכול שם מפונפן למה שאנחנו היינו קוראים לו generator comprehension.<br>
# נמיר את הפונקציה <var>get_line_lengths</var> ל־generator comprehension:
# </p>
# +
with open('resources/states.txt') as states_file:
    states = states_file.read()
line_lengths = (len(line) for line in states.splitlines() if line.strip())
print(list(line_lengths))
# -
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# נעמוד על ההבדלים בין הגישות:
# </p>
# <figure>
# <img src="images/generator_vs_expression.png" style="max-width: 800px; margin-right: auto; margin-left: auto; text-align: center;" alt="בחלק העליון: פונקציית ה־generator שכתבנו למעלה. כותרת הפונקציה מודגשת בצבע 1, גוף הלולאה בצבע 2, הבדיקה בצבע 3 והנבת האיבר בעזרת yield בצבע 4. בחלק התחתון: ה־generator expression. line_lengths = ( בצבע 1, for line in states.splitlines() בצבע 2, if line.strip() בצבע 3, len(line) בצבע 4, ) בצבע 1."/>
# <figcaption style="margin-top: 2rem; text-align: center; direction: rtl;">השוואה בין יצירת generator בעזרת פונקציה ובין יצירת generator בעזרת generator expression</figcaption>
# </figure>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# כאמור, הרעיון דומה מאוד ל־list comprehension.<br>
# האיבר שנחזיר בכל פעם מה־generator בעזרת <code>yield</code> יהפוך ב־generator expression להיות האיבר שנמצא לפני המילה <code>for</code>.
# </p>
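The memory difference between the two forms is easy to demonstrate with `sys.getsizeof` (the exact byte counts vary between Python versions, so only the comparison is shown here):

```python
import sys

# The list comprehension materializes a million items at once:
squares_list = [n ** 2 for n in range(1_000_000)]

# The generator expression only stores its internal state:
squares_gen = (n ** 2 for n in range(1_000_000))

print(sys.getsizeof(squares_list) > sys.getsizeof(squares_gen))
```

The list weighs megabytes, while the generator object stays a fixed, tiny size regardless of how many values it will yield.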
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# שימו לב שה־generator expression שקול לערך המוחזר לנו בקריאה לפונקציית ה־generator.<br>
# זו נקודה שחשוב לשים עליה דגש: generator expression מחזיר generator iterator, ולא פונקציית generator.
# </p>
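A quick type check makes this distinction concrete (a minimal sketch):

```python
def gen_func():
    yield 1

gen_expr = (x for x in range(3))

print(type(gen_func).__name__)    # a plain function
print(type(gen_func()).__name__)  # calling it returns a generator iterator
print(type(gen_expr).__name__)    # the expression is already a generator iterator
```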
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# נסתכל על דוגמה נוספת ל־generator expression שמחזיר את ריבועי כל המספרים מ־1 ועד 11 (לא כולל):
# </p>
squares = (number ** 2 for number in range(1, 11))
print(list(squares))
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# בדיוק כמו ב־generator iterator רגיל, אחרי שנשתמש באיבר לא נוכל לקבל אותו שוב:
# </p>
print(list(squares))
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# והפעלת <var>next</var> על generator iterator שכבר הניב את כל הערכים תקפיץ <var>StopIteration</var>:
# </p>
next(squares)
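When raising the exception is undesirable, `next` accepts a default value that is returned instead (a small sketch, separate from the `squares` generator above):

```python
exhausted = (n for n in range(0))  # a generator with nothing left to yield

# A second argument to next is returned instead of raising StopIteration:
print(next(exhausted, 'no more values'))
```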
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# ולטריק האחרון בנושא זה –<br>
# טוב לדעת שכשמעבירים לפונקציה generator expression כפרמטר יחיד, לא צריך לעטוף אותו בסוגריים נוספים.<br>
# לדוגמה:
# </p>
sum(number ** 2 for number in range(1, 11))
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# בדוגמה שלמעלה ה־generator comprehension יצר את כל ריבועי המספרים מ־1 ועד 11, לא כולל.<br>
# הפונקציה <var>sum</var> השתמשה בכל ריבועי המספרים שה־generator הניב, וסכמה אותם.
# </p>
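`sum` is not special here; any function that accepts an iterable can consume a generator expression passed as its single argument, for example:

```python
numbers = range(1, 11)

# max, any and sorted all consume a generator expression directly:
print(max(number ** 2 for number in numbers))
print(any(number % 7 == 0 for number in numbers))
print(sorted(number ** 2 for number in numbers)[:3])
```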
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">לולאות מרובות</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# לפעמים נרצה לכתוב כמה לולאות מקוננות זו בתוך זו.<br>
# לדוגמה, ליצירת כל האפשרויות שיכולות להתקבל בהטלת 2 קוביות:
# </p>
# +
dice_options = []
for first_die in range(1, 7):
    for second_die in range(1, 7):
        dice_options.append((first_die, second_die))
print(dice_options)
# -
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# נוכל להפוך גם את המבנה הזה ל־list comprehension:
# </p>
dice_options = [(die1, die2) for die1 in range(1, 7) for die2 in range(1, 7)]
print(dice_options)
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# כדי להבין איך זה עובד, חשוב לזכור איך קוראים list comprehension:<br>
# פשוט התחילו לקרוא מה־<code>for</code> הראשון, וחזרו לאיבר שאנחנו מוסיפים לרשימה בכל פעם רק בסוף.
# </p>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# אם במשחק מוזר כלשהו נצטרך לזרוק 3 קוביות, לדוגמה, ונרצה לראות אילו אופציות יכולות להתקבל, נוכל לכתוב זאת כך:
# </p>
dice_options = [
    (die1, die2, die3)
    for die1 in range(1, 7)
    for die2 in range(1, 7)
    for die3 in range(1, 7)
]
print(dice_options)
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# שבירת השורה בתא שלמעלה נעשתה מטעמי סגנון.<br>
# באופן טכני, מותר לרשום את ה־list comprehension הזה בשורה אחת.
# </p>
# <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
# <div style="display: flex; width: 10%; float: right; clear: both;">
# <img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
# </div>
# <div style="width: 70%">
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# צרו פונקציית generator ו־generator expression מהדוגמה האחרונה.<br>
# </p>
# </div>
# <div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
# <p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
# <strong>חשוב!</strong><br>
# פתרו לפני שתמשיכו!
# </p>
# </div>
# </div>
# <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
# <div style="display: flex; width: 10%; float: right; clear: both;">
# <img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
# </div>
# <div style="width: 70%">
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# החסרון בדוגמה של קוביות הוא שאנחנו מקבלים בתוצאות גם את <code dir="ltr">(1, 1, 6)</code> וגם את <code dir="ltr">(6, 1, 1)</code> .<br>
# האם תוכלו לפתור בעיה זו בקלות?
# </p>
# </div>
# <div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
# <p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
# <strong>חשוב!</strong><br>
# פתרו לפני שתמשיכו!
# </p>
# </div>
# </div>
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">נימוסין</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# הטכניקות שלמדנו במחברת זו מפקידות בידינו כוח רב, אך כמו שאומר הדוד בן, "עם כוח גדול באה אחריות גדולה".<br>
# עלינו לזכור תמיד שהמטרה של הטכניקות הללו בסופו של דבר היא להפוך את הקוד לקריא יותר.
# </p>
#
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# לעיתים קרובות מתכנתים לא מנוסים ישתמשו בטכניקות שנלמדו במחברת זו כדי לבנות מבנים מורכבים מאוד.<br>
# התוצאה תהיה קוד שקשה לתחזק ולקרוא, ולעיתים קרובות הקוד יוחלף לבסוף בלולאות רגילות.<br>
# כלל האצבע הוא שבשורה לא יהיו יותר מ־99 תווים, ושהקוד יהיה פשוט ונוח לקריאה בידי מתכנת חיצוני.
# </p>
#
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# קהילת פייתון דשה בנושאי קריאות קוד לעיתים קרובות, תוך כדי התייחסויות תכופות ל־<a href="https://www.python.org/dev/peps/pep-0008/">PEP8</a>.<br>
# נסביר בקצרה – PEP8 הוא מסמך שמתקנן את הקווים הכלליים של סגנון הכתיבה הרצוי בפייתון.<br>
# לדוגמה, מאגרי קוד העוקבים אחרי המסמך בצורה מחמירה לא מתירים כתיבת שורות קוד שבהן יותר מ־79 תווים.<br>
# כתיבה מסוגננת היטב היא נושא רחב יריעה שנעמיק בו בהמשך הקורס.
# </p>
# ## <span style="text-align: right; direction: rtl; float: right; clear: both;">סיכום</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# במחברת זו למדנו 4 טכניקות שימושיות שעוזרות לנו ליצור בצורה קריאה ומהירה מבני נתונים:
# </p>
# <ul style="text-align: right; direction: rtl; float: right; clear: both;">
# <li>List Comprehensions</li>
# <li>Dictionary Comprehensions</li>
# <li>Set Comprehensions</li>
# <li>Generator Expressions</li>
# </ul>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# למדנו מעט איך להשתמש בהם ומתי, ועל ההקבלות שלהם ללולאות רגילות ולפונקציות כמו <var>map</var> ו־<var>filter</var>.<br>
# למדנו גם איך אפשר להשתמש בכל אחת מהן במצב שבו יש לנו כמה לולאות מקוננות.
# </p>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# מתכנתי פייתון עושים שימוש רב בטכניקות האלו, וחשוב לשלוט בהן היטב כדי לדעת לקרוא קוד וכדי להצליח לממש רעיונות במהירות.
# </p>
# ## <span style="text-align: right; direction: rtl; float: right; clear: both;">תרגילים</span>
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">הֲיִי שלום</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# כתבו פונקציה בשם <var>words_length</var> שמקבלת משפט ומחזירה את אורכי המילים שבו, לפי סדרן במשפט.<br>
# לצורך התרגיל, הניחו שסימני הפיסוק הם חלק מאורכי המילים.<br>
# </p>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# לדוגמה:<br>
# עבור המשפט: <em dir="ltr">Toto, I've a feeling we're not in Kansas anymore</em><br>
# החזירו את הרשימה: <samp dir="ltr">[5, 4, 1, 7, 5, 3, 2, 6, 7]</samp>
# </p>
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">א אוהל, פ זה פייתון</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# כתבו פונקציה בשם <var>get_letters</var> שמחזירה את רשימת כל התווים בין a ל־z ובין A ל־Z.<br>
# השתמשו ב־list comprehension, ב־<var>ord</var> וב־<var>chr</var>.<br>
# הקפידו שלא לכלול את המספרים 65, 90, 97 או 122 בקוד שלכם.
# </p>
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">חתול ארוך הוא ארוך</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# כתבו פונקציה בשם <var>count_words</var> שמקבלת כפרמטר טקסט, ומחזירה מילון של אורכי המילים שבו.<br>
# השתמשו ב־comprehension לבחירתכם (או ב־generator expression) כדי לנקות את הטקסט מסימנים שאינם אותיות.<br>
# לאחר מכן, השתמשו ב־dictionary comprehension כדי לגלות את אורכה של כל מילה במשפט.<br>
# </p>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# לדוגמה, עבור הטקסט הבא, בדקו שחוזר לכם המילון המופיע מייד אחריו.
# </p>
# +
import string
text = """
You see, wire telegraph is a kind of a very, very long cat.
You pull his tail in New York and his head is meowing in Los Angeles.
Do you understand this?
And radio operates exactly the same way: you send signals here, they receive them there.
The only difference is that there is no cat.
"""
expected_result = {'you': 3, 'see': 3, 'wire': 4, 'telegraph': 9, 'is': 2, 'a': 1, 'kind': 4, 'of': 2, 'very': 4, 'long': 4, 'cat': 3, 'pull': 4, 'his': 3, 'tail': 4, 'in': 2, 'new': 3, 'york': 4, 'and': 3, 'head': 4, 'meowing': 7, 'los': 3, 'angeles': 7, 'do': 2, 'understand': 10, 'this': 4, 'radio': 5, 'operates': 8, 'exactly': 7, 'the': 3, 'same': 4, 'way': 3, 'send': 4, 'signals': 7, 'here': 4, 'they': 4, 'receive': 7, 'them': 4, 'there': 5, 'only': 4, 'difference': 10, 'that': 4, 'no': 2}
# -
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">ואלה שמות</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# כתבו פונקציה בשם <var>full_names</var>, שתקבל כפרמטרים רשימת שמות פרטיים ורשימת שמות משפחה, ותרכיב מהם שמות מלאים.<br>
# לכל שם פרטי תצמיד הפונקציה את כל שמות המשפחה שהתקבלו.<br>
# ודאו שהשמות חוזרים כאשר האות הראשונה בשם הפרטי ובשם המשפחה היא אות גדולה.
# </p>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# על הפונקציה לקבל גם פרמטר אופציונלי בשם <var>min_length</var>.<br>
# אם הפרמטר הועבר, שמות מלאים שכמות התווים שבהם קטנה מהאורך שהוגדר – לא יוחזרו מהפונקציה.
# </p>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# לדוגמה:
# </p>
# +
first_names = ['avi', 'moshe', 'yaakov']
last_names = ['cohen', 'levi', 'mizrahi']
# התנאים הבאים צריכים להתקיים
full_names(first_names, last_names, 10) == ['<NAME>', '<NAME>', '<NAME>', 'Mos<NAME>izrahi', 'Yaak<NAME>', 'Yaakov Levi', 'Yaakov Mizrahi']
full_names(first_names, last_names) == ['<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>']
| week6/3_Comprehensions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="b7B6AwHJ4f0F"
# # **Using Project class in pyAutomagic**
# A Project object contains all the relevant information for a project. The main steps to ensure a smooth run are:
#
#
# * Initialization
# * Preprocess
# * Interpolate
#
#
# + [markdown] colab_type="text" id="be_iQIB7zxfk"
# # *Initialization*
#
# We have restricted the overall package to be used with BIDS specifications. Set the name to the project name of your choice, and set the data folder to the path of your BIDS folder. The file extension should match the raw files in your dataset. Set the montage string to one that MNE will recognize as the EEG montage you are using, and set the sampling rate according to the dataset. The preprocessing parameters are passed as a dictionary, as shown in the code below: line frequency, filter type, filter frequency, filter length, whether or not to perform EOG regression, lambda, tolerance, maximum iterations, reference channels, eval channels and re-reference channels.
#
#
#
# + colab={} colab_type="code" id="-26mii1j0UYu"
import os
import sys
pyautomagic_dir = os.path.abspath(os.path.dirname(os.getcwd()))
sys.path.append(pyautomagic_dir)
from pyautomagic.src.Project import Project
name = "Tutorial project"
data_folder = os.path.join("..", "tests", "test_data", "test_project")
file_ext = ".set"
montage = "biosemi128"
sample_rate = 500
channels = []
for i in range(128):
channels.append('E'+str(i+1))
params = {'line_freqs' : 50,'filter_type' : 'high', 'filt_freq' : None,'filter_length' : 'auto','eog_regression' : False,'lam' : -1,'tol' : 1e-7,'max_iter': 1000,'interpolation_params': {'line_freqs' : 50,'ref_chs': channels, 'reref_chs': channels,'montage': montage}}
# + [markdown] colab_type="text" id="UFiNWlfn96SZ"
# The Project class can be initialized with the above-mentioned parameters.
# + colab={} colab_type="code" id="DUaJ_GUr4txm"
tutorial_project = Project(name, data_folder, file_ext, montage, sample_rate, params)
# + [markdown] colab_type="text" id="_EeWMnaP-A2_"
# When the Project class is initialized, it looks into the data folder and creates a list of all the raw files. The correct listing of raw files can be checked in the log, where both the subject name and the file name are logged.
# + [markdown] colab_type="text" id="FTMj960eBxHm"
# # *Preprocess*
# After the data has been loaded correctly, use the preprocess_all() method to process the raw files. This method goes through all the blocks in the block_list and processes them one by one using Block's preprocess function. You can check the progress in the log. It also saves the preprocessed data, two figures and a results JSON file for each block in the appropriate result folder.
# + colab={} colab_type="code" id="bj1_pid260mo"
tutorial_project.preprocess_all()
# + [markdown] colab_type="text" id="69nwv3i_cPQ0"
# # *Interpolate*
# After all the files have been processed, you can interpolate the blocks that are marked for interpolation using the interpolate_selected() method. The progress can be checked in the log. After interpolation, the results are stored in the same format as in preprocessing.
# + colab={} colab_type="code" id="URWLUw_z662u"
tutorial_project.interpolate_selected()
| notebooks/Project_Tutorial.ipynb |
// -*- coding: utf-8 -*-
// # Automatic generation of Notebook using PyCropML
// This notebook implements a crop model.
// ### Domain Class PhenologyAuxiliary
// +
#include "PhenologyAuxiliary.h"
PhenologyAuxiliary::PhenologyAuxiliary() { }
string PhenologyAuxiliary::getcurrentdate() {return this-> currentdate; }
float PhenologyAuxiliary::getcumulTT() {return this-> cumulTT; }
float PhenologyAuxiliary::getdayLength() {return this-> dayLength; }
float PhenologyAuxiliary::getdeltaTT() {return this-> deltaTT; }
float PhenologyAuxiliary::getgAI() {return this-> gAI; }
float PhenologyAuxiliary::getpAR() {return this-> pAR; }
float PhenologyAuxiliary::getgrainCumulTT() {return this-> grainCumulTT; }
float PhenologyAuxiliary::getfixPhyll() {return this-> fixPhyll; }
float PhenologyAuxiliary::getcumulTTFromZC_39() {return this-> cumulTTFromZC_39; }
float PhenologyAuxiliary::getcumulTTFromZC_91() {return this-> cumulTTFromZC_91; }
float PhenologyAuxiliary::getcumulTTFromZC_65() {return this-> cumulTTFromZC_65; }
void PhenologyAuxiliary::setcurrentdate(string _currentdate) { this->currentdate = _currentdate; }
void PhenologyAuxiliary::setcumulTT(float _cumulTT) { this->cumulTT = _cumulTT; }
void PhenologyAuxiliary::setdayLength(float _dayLength) { this->dayLength = _dayLength; }
void PhenologyAuxiliary::setdeltaTT(float _deltaTT) { this->deltaTT = _deltaTT; }
void PhenologyAuxiliary::setgAI(float _gAI) { this->gAI = _gAI; }
void PhenologyAuxiliary::setpAR(float _pAR) { this->pAR = _pAR; }
void PhenologyAuxiliary::setgrainCumulTT(float _grainCumulTT) { this->grainCumulTT = _grainCumulTT; }
void PhenologyAuxiliary::setfixPhyll(float _fixPhyll) { this->fixPhyll = _fixPhyll; }
void PhenologyAuxiliary::setcumulTTFromZC_39(float _cumulTTFromZC_39) { this->cumulTTFromZC_39 = _cumulTTFromZC_39; }
void PhenologyAuxiliary::setcumulTTFromZC_91(float _cumulTTFromZC_91) { this->cumulTTFromZC_91 = _cumulTTFromZC_91; }
void PhenologyAuxiliary::setcumulTTFromZC_65(float _cumulTTFromZC_65) { this->cumulTTFromZC_65 = _cumulTTFromZC_65; }
// -
// ### Domain Class PhenologyRate
#include "PhenologyRate.h"
PhenologyRate::PhenologyRate() { }
// ### Domain Class PhenologyState
// +
#include "PhenologyState.h"
PhenologyState::PhenologyState() { }
float PhenologyState::getptq() {return this-> ptq; }
string PhenologyState::getcurrentZadokStage() {return this-> currentZadokStage; }
int PhenologyState::gethasFlagLeafLiguleAppeared() {return this-> hasFlagLeafLiguleAppeared; }
int PhenologyState::gethasZadokStageChanged() {return this-> hasZadokStageChanged; }
vector<float>& PhenologyState::getlistPARTTWindowForPTQ() {return this-> listPARTTWindowForPTQ; }
int PhenologyState::gethasLastPrimordiumAppeared() {return this-> hasLastPrimordiumAppeared; }
vector<float>& PhenologyState::getlistTTShootWindowForPTQ() {return this-> listTTShootWindowForPTQ; }
vector<float>& PhenologyState::getlistTTShootWindowForPTQ1() {return this-> listTTShootWindowForPTQ1; }
vector<string>& PhenologyState::getcalendarMoments() {return this-> calendarMoments; }
float PhenologyState::getcanopyShootNumber() {return this-> canopyShootNumber; }
vector<string>& PhenologyState::getcalendarDates() {return this-> calendarDates; }
vector<int>& PhenologyState::getleafTillerNumberArray() {return this-> leafTillerNumberArray; }
float PhenologyState::getvernaprog() {return this-> vernaprog; }
float PhenologyState::getphyllochron() {return this-> phyllochron; }
float PhenologyState::getleafNumber() {return this-> leafNumber; }
int PhenologyState::getnumberTillerCohort() {return this-> numberTillerCohort; }
vector<float>& PhenologyState::gettilleringProfile() {return this-> tilleringProfile; }
float PhenologyState::getaverageShootNumberPerPlant() {return this-> averageShootNumberPerPlant; }
float PhenologyState::getminFinalNumber() {return this-> minFinalNumber; }
float PhenologyState::getfinalLeafNumber() {return this-> finalLeafNumber; }
float PhenologyState::getphase() {return this-> phase; }
vector<float>& PhenologyState::getlistGAITTWindowForPTQ() {return this-> listGAITTWindowForPTQ; }
vector<float>& PhenologyState::getcalendarCumuls() {return this-> calendarCumuls; }
float PhenologyState::getgAImean() {return this-> gAImean; }
float PhenologyState::getpastMaxAI() {return this-> pastMaxAI; }
int PhenologyState::getisMomentRegistredZC_39() {return this-> isMomentRegistredZC_39; }
void PhenologyState::setptq(float _ptq) { this->ptq = _ptq; }
void PhenologyState::setcurrentZadokStage(string _currentZadokStage) { this->currentZadokStage = _currentZadokStage; }
void PhenologyState::sethasFlagLeafLiguleAppeared(int _hasFlagLeafLiguleAppeared) { this->hasFlagLeafLiguleAppeared = _hasFlagLeafLiguleAppeared; }
void PhenologyState::sethasZadokStageChanged(int _hasZadokStageChanged) { this->hasZadokStageChanged = _hasZadokStageChanged; }
void PhenologyState::setlistPARTTWindowForPTQ(vector<float>& _listPARTTWindowForPTQ){
this->listPARTTWindowForPTQ = _listPARTTWindowForPTQ;
}
void PhenologyState::sethasLastPrimordiumAppeared(int _hasLastPrimordiumAppeared) { this->hasLastPrimordiumAppeared = _hasLastPrimordiumAppeared; }
void PhenologyState::setlistTTShootWindowForPTQ(vector<float>& _listTTShootWindowForPTQ){
this->listTTShootWindowForPTQ = _listTTShootWindowForPTQ;
}
void PhenologyState::setlistTTShootWindowForPTQ1(vector<float>& _listTTShootWindowForPTQ1){
this->listTTShootWindowForPTQ1 = _listTTShootWindowForPTQ1;
}
void PhenologyState::setcalendarMoments(vector<string>& _calendarMoments){
this->calendarMoments = _calendarMoments;
}
void PhenologyState::setcanopyShootNumber(float _canopyShootNumber) { this->canopyShootNumber = _canopyShootNumber; }
void PhenologyState::setcalendarDates(vector<string>& _calendarDates){
this->calendarDates = _calendarDates;
}
void PhenologyState::setleafTillerNumberArray(vector<int>& _leafTillerNumberArray){
this->leafTillerNumberArray = _leafTillerNumberArray;
}
void PhenologyState::setvernaprog(float _vernaprog) { this->vernaprog = _vernaprog; }
void PhenologyState::setphyllochron(float _phyllochron) { this->phyllochron = _phyllochron; }
void PhenologyState::setleafNumber(float _leafNumber) { this->leafNumber = _leafNumber; }
void PhenologyState::setnumberTillerCohort(int _numberTillerCohort) { this->numberTillerCohort = _numberTillerCohort; }
void PhenologyState::settilleringProfile(vector<float>& _tilleringProfile){
this->tilleringProfile = _tilleringProfile;
}
void PhenologyState::setaverageShootNumberPerPlant(float _averageShootNumberPerPlant) { this->averageShootNumberPerPlant = _averageShootNumberPerPlant; }
void PhenologyState::setminFinalNumber(float _minFinalNumber) { this->minFinalNumber = _minFinalNumber; }
void PhenologyState::setfinalLeafNumber(float _finalLeafNumber) { this->finalLeafNumber = _finalLeafNumber; }
void PhenologyState::setphase(float _phase) { this->phase = _phase; }
void PhenologyState::setlistGAITTWindowForPTQ(vector<float>& _listGAITTWindowForPTQ){
this->listGAITTWindowForPTQ = _listGAITTWindowForPTQ;
}
void PhenologyState::setcalendarCumuls(vector<float>& _calendarCumuls){
this->calendarCumuls = _calendarCumuls;
}
void PhenologyState::setgAImean(float _gAImean) { this->gAImean = _gAImean; }
void PhenologyState::setpastMaxAI(float _pastMaxAI) { this->pastMaxAI = _pastMaxAI; }
void PhenologyState::setisMomentRegistredZC_39(int _isMomentRegistredZC_39) { this->isMomentRegistredZC_39 = _isMomentRegistredZC_39; }
// -
// ### Model Phyllochron
// +
#define _USE_MATH_DEFINES
#include <cmath>
#include <iostream>
#include <vector>
#include <string>
#include <numeric>
#include <algorithm>
#include <array>
#include <map>
#include <tuple>
#include "Phyllochron.h"
using namespace std;
Phyllochron::Phyllochron() { }
float Phyllochron::getlincr() {return this-> lincr; }
float Phyllochron::getldecr() {return this-> ldecr; }
float Phyllochron::getpdecr() {return this-> pdecr; }
float Phyllochron::getpincr() {return this-> pincr; }
float Phyllochron::getkl() {return this-> kl; }
float Phyllochron::getpTQhf() {return this-> pTQhf; }
float Phyllochron::getB() {return this-> B; }
float Phyllochron::getp() {return this-> p; }
string Phyllochron::getchoosePhyllUse() {return this-> choosePhyllUse; }
float Phyllochron::getareaSL() {return this-> areaSL; }
float Phyllochron::getareaSS() {return this-> areaSS; }
float Phyllochron::getlARmin() {return this-> lARmin; }
float Phyllochron::getlARmax() {return this-> lARmax; }
float Phyllochron::getsowingDensity() {return this-> sowingDensity; }
float Phyllochron::getlNeff() {return this-> lNeff; }
void Phyllochron::setlincr(float _lincr) { this->lincr = _lincr; }
void Phyllochron::setldecr(float _ldecr) { this->ldecr = _ldecr; }
void Phyllochron::setpdecr(float _pdecr) { this->pdecr = _pdecr; }
void Phyllochron::setpincr(float _pincr) { this->pincr = _pincr; }
void Phyllochron::setkl(float _kl) { this->kl = _kl; }
void Phyllochron::setpTQhf(float _pTQhf) { this->pTQhf = _pTQhf; }
void Phyllochron::setB(float _B) { this->B = _B; }
void Phyllochron::setp(float _p) { this->p = _p; }
void Phyllochron::setchoosePhyllUse(string _choosePhyllUse) { this->choosePhyllUse = _choosePhyllUse; }
void Phyllochron::setareaSL(float _areaSL) { this->areaSL = _areaSL; }
void Phyllochron::setareaSS(float _areaSS) { this->areaSS = _areaSS; }
void Phyllochron::setlARmin(float _lARmin) { this->lARmin = _lARmin; }
void Phyllochron::setlARmax(float _lARmax) { this->lARmax = _lARmax; }
void Phyllochron::setsowingDensity(float _sowingDensity) { this->sowingDensity = _sowingDensity; }
void Phyllochron::setlNeff(float _lNeff) { this->lNeff = _lNeff; }
void Phyllochron::Calculate_Model(PhenologyState& s, PhenologyState& s1, PhenologyRate& r, PhenologyAuxiliary& a)
{
//- Name: Phyllochron -Version: 1.0, -Time step: 1
//- Description:
// * Title: Phyllochron Model
// * Author: <NAME>
// * Reference: Modeling development phase in the
// Wheat Simulation Model SiriusQuality.
// See documentation at http://www1.clermont.inra.fr/siriusquality/?page_id=427
// * Institution: INRA Montpellier
// * Abstract: Calculate different types of phyllochron
//- inputs:
// * name: fixPhyll
// ** description : Sowing date corrected Phyllochron
// ** inputtype : variable
// ** variablecategory : auxiliary
// ** datatype : DOUBLE
// ** default : 5.0
// ** min : 0.0
// ** max : 10000.0
// ** unit : °C d leaf-1
// ** uri : some url
// * name: leafNumber
// ** description : Actual number of phytomers
// ** inputtype : variable
// ** variablecategory : state
// ** datatype : DOUBLE
// ** default : 0.0
// ** min : 0.0
// ** max : 25.0
// ** unit : leaf
// ** uri : some url
// * name: lincr
// ** description : Leaf number above which the phyllochron is increased by Pincr
// ** inputtype : parameter
// ** parametercategory : species
// ** datatype : DOUBLE
// ** default : 8.0
// ** min : 0.0
// ** max : 30.0
// ** unit : leaf
// ** uri : some url
// * name: ldecr
// ** description : Leaf number up to which the phyllochron is decreased by Pdecr
// ** inputtype : parameter
// ** parametercategory : species
// ** datatype : DOUBLE
// ** default : 0.0
// ** min : 0.0
// ** max : 100.0
// ** unit : leaf
// ** uri : some url
// * name: pdecr
// ** description : Factor decreasing the phyllochron for leaf number less than Ldecr
// ** inputtype : parameter
// ** parametercategory : species
// ** datatype : DOUBLE
// ** default : 0.4
// ** min : 0.0
// ** max : 10.0
// ** unit : -
// ** uri : some url
// * name: pincr
// ** description : Factor increasing the phyllochron for leaf number higher than Lincr
// ** inputtype : parameter
// ** parametercategory : species
// ** datatype : DOUBLE
// ** default : 1.5
// ** min : 0.0
// ** max : 10.0
// ** unit : -
// ** uri : some url
// * name: ptq
// ** description : Photothermal quotient
// ** inputtype : variable
// ** variablecategory : state
// ** datatype : DOUBLE
// ** default : 0.0
// ** min : 0.0
// ** max : 10000.0
//                          ** unit : MJ °C-1 d-1 m-2
// ** uri : some url
// * name: gAImean
// ** description : Green Area Index
// ** inputtype : variable
// ** variablecategory : state
// ** datatype : DOUBLE
// ** default : 0.0
// ** min : 0.0
// ** max : 10000.0
// ** unit : m2 m-2
// ** uri : some url
// * name: kl
//                          ** description : Extinction Coefficient
// ** inputtype : parameter
// ** parametercategory : species
// ** datatype : DOUBLE
// ** default : 0.45
// ** min : 0.0
// ** max : 50.0
// ** unit : -
// ** uri : some url
// * name: pTQhf
// ** description : Slope to intercept ratio for Phyllochron parametrization with PhotoThermal Quotient
// ** inputtype : parameter
// ** parametercategory : genotypic
// ** datatype : DOUBLE
// ** default : 0.0
// ** min : 0.0
// ** max : 1000.0
// ** unit : °C d leaf-1
// ** uri : some url
// * name: B
// ** description : Phyllochron at PTQ equal 1
// ** inputtype : parameter
// ** parametercategory : species
// ** datatype : DOUBLE
// ** default : 20.0
// ** min : 0.0
// ** max : 1000.0
// ** unit : °C d leaf-1
// ** uri : some url
// * name: p
// ** description : Phyllochron (Varietal parameter)
// ** inputtype : parameter
// ** parametercategory : species
// ** datatype : DOUBLE
// ** default : 120.0
// ** min : 0.0
// ** max : 1000.0
// ** unit : °C d leaf-1
// ** uri : some url
// * name: choosePhyllUse
// ** description : Switch to choose the type of phyllochron calculation to be used
// ** inputtype : parameter
// ** parametercategory : species
// ** datatype : STRING
// ** default : Default
// ** min :
// ** max :
// ** unit : -
// ** uri : some url
// * name: areaSL
// ** description : Area Leaf
// ** inputtype : parameter
// ** parametercategory : genotypic
// ** datatype : DOUBLE
// ** default : 0.0
// ** min : 0.0
// ** max : 1000.0
// ** unit : cm2
// ** uri : some url
// * name: areaSS
// ** description : Area Sheath
// ** inputtype : parameter
// ** parametercategory : genotypic
// ** datatype : DOUBLE
// ** default : 0.0
// ** min : 0.0
// ** max : 1000.0
// ** unit : cm2
// ** uri : some url
// * name: lARmin
// ** description : LAR minimum
// ** inputtype : parameter
// ** parametercategory : genotypic
// ** datatype : DOUBLE
// ** default : 0.0
// ** min : 0.0
// ** max : 1000.0
// ** unit : leaf-1 °C
// ** uri : some url
// * name: lARmax
// ** description : LAR maximum
// ** inputtype : parameter
// ** parametercategory : genotypic
// ** datatype : DOUBLE
// ** default : 0.0
// ** min : 0.0
// ** max : 1000.0
// ** unit : leaf-1 °C
// ** uri : some url
// * name: sowingDensity
// ** description : Sowing Density
// ** inputtype : parameter
// ** parametercategory : genotypic
// ** datatype : DOUBLE
// ** default : 0.0
// ** min : 0.0
// ** max : 1000.0
// ** unit : plant m-2
// ** uri : some url
// * name: lNeff
// ** description : Leaf Number efficace
// ** inputtype : parameter
// ** parametercategory : genotypic
// ** datatype : DOUBLE
// ** default : 0.0
// ** min : 0.0
// ** max : 1000.0
// ** unit : leaf
// ** uri : some url
//- outputs:
// * name: phyllochron
// ** description : the rate of leaf appearance
// ** variablecategory : state
// ** datatype : DOUBLE
// ** min : 0
// ** max : 1000
// ** unit : °C d leaf-1
// ** uri : some url
float fixPhyll = a.getfixPhyll();
float leafNumber = s.getleafNumber();
float ptq = s.getptq();
float gAImean = s.getgAImean();
float phyllochron;
float gaiLim;
float LAR;
phyllochron = 0.0f;
LAR = 0.0f;
gaiLim = lNeff * ((areaSL + areaSS) / 10000.0f) * sowingDensity;
if (choosePhyllUse == "Default")
{
if (leafNumber < ldecr)
{
phyllochron = fixPhyll * pdecr;
}
else if ( leafNumber >= ldecr && leafNumber < lincr)
{
phyllochron = fixPhyll;
}
else
{
phyllochron = fixPhyll * pincr;
}
}
if (choosePhyllUse == "PTQ")
{
if (gAImean > gaiLim)
{
LAR = (lARmin + ((lARmax - lARmin) * ptq / (pTQhf + ptq))) / (B * gAImean);
}
else
{
LAR = (lARmin + ((lARmax - lARmin) * ptq / (pTQhf + ptq))) / (B * gaiLim);
}
phyllochron = 1.0f / LAR;
}
if (choosePhyllUse == "Test")
{
if (leafNumber < ldecr)
{
phyllochron = p * pdecr;
}
else if ( leafNumber >= ldecr && leafNumber < lincr)
{
phyllochron = p;
}
else
{
phyllochron = p * pincr;
}
}
s.setphyllochron(phyllochron);
}
// -
class Test
{
public:
    PhenologyState s;
    PhenologyState s1;
    PhenologyRate r;
    PhenologyAuxiliary a;
    Phyllochron mod;
    // check wheat model
    // test_wheat1
    void test_wheat1()
    {
        mod.setlincr(8.0f);
        mod.setldecr(3.0f);
        mod.setpdecr(0.4f);
        mod.setpincr(1.25f);
        s.setptq(0.97f);
        mod.setkl(0.45f);
        mod.setp(120.0f);
        mod.setchoosePhyllUse("Default");
        a.setfixPhyll(91.2f);
        s.setleafNumber(0.0f);
        s.setgAImean(0.0f);
        mod.setpTQhf(0.0f);
        mod.setB(20.0f);
        mod.setareaSL(0.0f);
        mod.setareaSS(0.0f);
        mod.setlARmin(0.0f);
        mod.setlARmax(0.0f);
        mod.setsowingDensity(0.0f);
        mod.setlNeff(0.0f);
        mod.Calculate_Model(s, s1, r, a);
        // expected phyllochron: 36.48
        cout << "phyllochron estimated :" << endl;
        cout << s.getphyllochron() << endl;
    }
};
Test t;
t.test_wheat1();
| test/Models/pheno_pkg/test/cpp/Phyllochron.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# A version of this should be written for magcolloids
# # Forces in the simulation
# The simulation part of 'icenumerics' is done through the 'magcolloids' package, which works as a wrapper of the molecular dynamics program [LAMMPS](https://lammps.sandia.gov/doc/Manual.html). In molecular dynamics, the equations of motion of all particles are solved by discretizing them and applying a velocity Verlet algorithm. The equations of motion are given by Newton's equation:
# $$m_i\ddot{\vec{x_i}} = \vec{F_i}$$
# ## Brownian Dynamics
# The 'icenumerics' and 'magcolloids' packages use a modified version of LAMMPS to run Brownian dynamics instead. In Brownian dynamics, particles are assumed to be immersed in a low Reynolds number fluid, so that:
# * A viscous drag force is included in the force balance, proportional to the velocity: $F_{i,drag} = -\gamma\dot{\vec{x_i}}$.
# * This viscous force is assumed to be much larger than the inertial term $m_i\vec{\ddot{x_i}}$, so the latter can be neglected.
# * Particles are subjected to random kicks from the fluid. These are given by a Langevin thermostat, which is a random variable $\eta$ such that $\left<\eta\right> = 0$ and $\left<\eta_i(t+\Delta t)\eta_j(t)\right> = 2k_BT\gamma\,\delta_{ij}\,\delta(\Delta t)$, where $\delta_{ij}$ is the Kronecker delta and $\delta$ is the Dirac delta function.
#
# The result from these assumptions is that the force balance can be written:
# $$\gamma\dot{\vec{x_i}} = \vec{F_i} + \eta$$
# which can be discretized as:
# $$\Delta{\vec{x_i}} = \frac{\Delta t}{\gamma}\vec{F_i} + \sqrt{\frac{2k_BT\Delta t}{\gamma}}\,N[0,1]$$
# where $N[0,1]$ is a Gaussian distributed random variable with zero mean and unitary variance.
# This is the equation that is used to solve the trajectories of particles.
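# As a sanity check, this discretized update can be sketched in a few lines of plain Python. The following is an illustrative overdamped (Euler-Maruyama) integrator with arbitrary unitless values for $\gamma$ and $k_BT$ (assumptions for this sketch), not the LAMMPS implementation used by the packages:

```python
import math
import random

def brownian_step(x, force, dt, gamma, kBT, rng):
    """One overdamped step: dx = (dt/gamma)*F(x) + sqrt(2*kBT*dt/gamma)*N[0,1]."""
    drift = (dt / gamma) * force(x)
    noise = math.sqrt(2.0 * kBT * dt / gamma) * rng.gauss(0.0, 1.0)
    return x + drift + noise

# Free diffusion (F = 0): the mean squared displacement should grow as
# 2*D*t with D = kBT/gamma, i.e. close to 2.0 for the values below.
rng = random.Random(0)
dt, gamma, kBT, steps = 1e-3, 1.0, 1.0, 1000
finals = []
for _ in range(1000):
    x = 0.0
    for _ in range(steps):
        x = brownian_step(x, lambda _: 0.0, dt, gamma, kBT, rng)
    finals.append(x)
msd = sum(v * v for v in finals) / len(finals)
```

# The same one-line update, with units attached and the trap and magnetic forces plugged in, is what is solved at every timestep.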
# ## Available Forces
# +
# This only adds the package to the path.
import os
import sys
sys.path.insert(0, '../../../')
import icenumerics as ice
import magcolloids as mgc
import numpy as np
import matplotlib.pyplot as plt
# %load_ext autoreload
# %autoreload 2
# -
# There are two components that are fundamental to Colloidal Ice: the trapping force and the interaction force.
# ### Trapping Force
# Colloidal Ice consists of colloidal particles confined to a bistable potential, so that particles can jump from one stability position to another in a way that minimizes the energy of the system. The potential used by the 'icenumerics' package is a bi-harmonic potential, defined by:
#
# $$
# F = -k r_{\perp} \hat{e}_{\perp} + \hat{e}_{||}
# \begin{cases}
# h r_{||} & |r_{||}|<d/2 \\
# -k \left(|r_{||}|-d/2\right) \mathrm{sign}\left(r_{||}\right) & |r_{||}|\geq d/2
# \end{cases}
# $$
#
# where $r_{||}$ is the component parallel to the direction of the trap and $r_{\perp}$ is the perpendicular component, $\hat{e}_{||}$ is the unit vector in the direction of the trap, $\hat{e}_{\perp}$ is a unit vector pointing away from the line that joins both stable points, $k$ is the trap stiffness, $d$ is the distance between the trap centers, and $h$ is the stiffness of the central hill.
#
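# As a quick illustration, the parallel component of this piecewise force can be written in plain Python. This is a unitless sketch with assumed values for $k$, $h$, and $d$; the notebook's pint-based version appears further below:

```python
import math

def trap_force_parallel(x, k=1.0, h=0.5, d=2.0):
    """Parallel component of the bistable trap force.

    Inside the central hill (|x| < d/2) the force pushes the particle
    outward; beyond it, a harmonic term restores the particle toward
    the nearest stable point at +-d/2.
    """
    if abs(x) < d / 2:
        return h * x
    return -k * (abs(x) - d / 2) * math.copysign(1.0, x)
```

# The force vanishes at the two stable points $x = \pm d/2$ and points away from the center on the inner slopes of the hill.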
# #### A note on the stiffness of the central hill.
# Currently (v0.1.7) the stiffness of the central hill is given in $pN \mu{}m$. This is a mistake in how the quantity is passed to LAMMPS; the stiffness should instead be given in $pN/\mu{}m$.
# This will be fixed in the next release, so that the quantity 'height' actually gives the height of the central hill in $pN \mu{}m$ (energy units). The old behaviour will be maintained by using stiffness units ($pN/\mu{}m$).
# This expression can be checked by allowing a particle to diffuse thermally through a trap, and observing its probability distribution. To do this, we run a single particle on a single trap.
ureg = ice.ureg
sp = ice.spins(centers = np.array([[0,0,0]])*ureg.um,
directions = np.array([[30,0,0]])*ureg.um,
lattice_constant=10*ureg.um)
sp.display()
# +
particle = ice.particle(radius = 1*ureg.um,
susceptibility = 0,
diffusion = 0.145*ureg.um**2/ureg.s,
temperature = 300*ureg.K)
trap = ice.trap(trap_sep = 2*ureg.um,
height = 16*ureg.pN*ureg.nm,
stiffness = 1*ureg.fN/ureg.nm)
col = ice.colloidal_ice(sp, particle, trap, height_spread = 0, susceptibility_spread = 0)
col.pad_region(3*ureg.um)
world = ice.world(
field = 0*ureg.mT,
temperature = 300*ureg.K,
dipole_cutoff = 200*ureg.um)
# -
# By adding the forces to the simulation's output, we can compare them to the expected function.
# +
# %%time
col.simulation(world,
name = "test",
include_timestamp = False,
targetdir = r".",
framerate = 10*ureg.Hz,
timestep = 1000*ureg.us,
run_time = 10000*ureg.s,
output = ["x","y","z","fx","fy","fz"])
col.run_simulation()
col.load_simulation()
# +
fig, ax = plt.subplots(1,3,figsize=(9,2),dpi=150)
col.display(ax = ax[0])
ax[0].plot(col.trj[col.trj.type==1].x, col.trj[col.trj.type==1].y)
ax[1].hist(col.trj[col.trj.type==1].x, bins=20, density=True);
ax[1].set_xlabel("x")
ax[1].set_ylabel("P(x)")
ax[2].hist(col.trj[col.trj.type==1].y, bins=20, density=True);
ax[2].set_xlabel("y")
ax[2].set_ylabel("P(y)")
# -
# Below we define the energy of the bistable trap and its force. Notice the change in the units of $h$.
# +
k = col[0].trap.stiffness
d = col[0].trap.trap_sep
h = col[0].trap.height.to("pN um").magnitude*ureg("pN/um")
kB = 1.38064852e-23*ureg.J/ureg.K
def bistable_trap(x,y):
Uy = (k*y**2/2).to("pN nm")
Ux = (d**2*h/8-h*x**2/2).to("pN nm")
Ux1 = (k * (abs(x)-d/2)**2 / 2).to("pN nm")
Ux[abs(x)>(d/2)] = Ux1[abs(x)>(d/2)]
return Ux+Uy
# +
fig, ax = plt.subplots(1,2,figsize=(6,2),dpi=150)
## parallel dependence
[p, x] = np.histogram(col.trj[col.trj.type==1].x, bins = 20, density=True)
ax[0].plot(x[1:]-np.diff(x)/2, max(np.log(p))-np.log(p), label="log(P(x))")
x = np.linspace(min(x),max(x),1000) * ureg.um
y = np.array([0])*ureg.um
T = col.sim.world.temperature
ax[0].plot(x.magnitude, (bistable_trap(x,y)/(kB*T)).to(" ").magnitude, label="U(x)/$k_BT$")
ax[0].set_xlabel("x")
ax[0].legend()
## perpendicular dependence
[p, y] = np.histogram(col.trj[col.trj.type==1].y, bins = 20, density=True)
ax[1].plot(y[1:]-np.diff(y)/2, max(np.log(p))-np.log(p),label="log(P(y))")
x = np.array([d.magnitude/2])*d.units
y = np.linspace(min(y),max(y),1000) * ureg.um
T = col.sim.world.temperature
ax[1].plot(y.magnitude, (bistable_trap(x,y)/(kB*T)).to(" ").magnitude,label="U(y)/$k_BT$")
ax[1].set_xlabel("y")
ax[1].legend()
# -
# We can also compare directly to the forces calculated inside lammps:
def bistable_trap_force(x,y):
Fx1 = -np.sign(x.magnitude)*(abs(x)-d/2)*k
Fx = x*h
Fx[abs(x)>(d/2)] = Fx1[abs(x)>(d/2)]
Fy = -y*k
Fx = Fx.to("pN")
Fy = Fy.to("pN")
return Fx, Fy
# +
fig, ax = plt.subplots(1,2,figsize=(6,2),dpi=150, sharey=True)
forces = col.trj[col.trj.type==1]
forces = forces.sort_values(by="x")
x = forces.x
y = forces.y
fx = forces.fx * (1*ureg.pg*ureg.um/ureg.us**2).to(ureg.pN).magnitude
fy = forces.fy * (1*ureg.pg*ureg.um/ureg.us**2).to(ureg.pN).magnitude
ax[0].plot(x,fx,linewidth = 2)
ax[1].plot(y,fy,linewidth = 2)
Fx, Fy = bistable_trap_force(x.values*ureg.um, y.values*ureg.um)
ax[0].plot(x, Fx.magnitude,linewidth = 1)
ax[0].set_xlabel("x")
ax[1].plot(y,Fy.magnitude,linewidth = 1)
ax[1].set_xlabel("y")
ax[0].set_ylabel("Force [pN]")
# -
# # Force after fixing the error with the height:
# +
k = col[0].trap.stiffness
d = col[0].trap.trap_sep
# After the fix, 'height' is an energy: the height of the central hill.
h = col[0].trap.height.to("pN nm")
kB = 1.38064852e-23*ureg.J/ureg.K
def bistable_trap(x,y):
    Uy = (k*y**2/2).to("pN nm")
    Ux = (h*(1-4 * x**2/d**2)).to("pN nm")
    Ux1 = (k * (abs(x)-d/2)**2 / 2).to("pN nm")
    Ux[abs(x)>(d/2)] = Ux1[abs(x)>(d/2)]
    return Ux+Uy
# -
def bistable_trap_force(x,y):
    Fx1 = -np.sign(x.magnitude)*(abs(x)-d/2)*k
    # F = -dU/dx of the hill term h*(1 - 4*x**2/d**2)
    Fx = (8*h/d**2)*x
    Fx[abs(x)>(d/2)] = Fx1[abs(x)>(d/2)]
    Fy = -y*k
    Fx = Fx.to("pN")
    Fy = Fy.to("pN")
    return Fx, Fy
# ## Dipole Dipole interaction
#
# To do...
| docs/ForcesOfTheSimulation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import datetime as dt
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
# +
years = [2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018]
x = []
for year in years:
x.append(pd.read_csv('../data/Turnstile/{}_turnstile_objectid_format.csv'.format(year)))
turnstile = pd.concat(x)
turnstile = turnstile.drop('Unnamed: 0', axis=1)
# -
arrests = pd.read_csv('../data/arrests_and_turns.csv')
arrests.columns
race_counts = arrests.groupby(['objectid', 'PERP_RACE']).agg({'ARREST_KEY':'count'})
race_pcts = race_counts.groupby(level=0).apply(lambda x:100 * x / float(x.sum()))
race_pcts
# +
blck_pcts = {}
for i in arrests.objectid.unique():
    try:
        blck_pcts[i] = race_pcts.loc[(i, 'BLACK')]
    except KeyError:
        blck_pcts[i] = 0
blck_pcts = pd.DataFrame(blck_pcts).T.rename({'ARREST_KEY':'Pct Black'}, axis=1).fillna(0.0)
# +
wht_pcts = {}
for i in arrests.objectid.unique():
    try:
        wht_pcts[i] = race_pcts.loc[(i, 'WHITE')]
    except KeyError:
        wht_pcts[i] = 0
wht_pcts = pd.DataFrame(wht_pcts).T.rename({'ARREST_KEY':'Pct White'}, axis=1).fillna(0.0)
# +
his_pcts = {}
for i in arrests.objectid.unique():
    try:
        his_pcts[i] = race_pcts.loc[(i, 'WHITE HISPANIC')] + race_pcts.loc[(i, 'BLACK HISPANIC')]
    except KeyError:
        his_pcts[i] = 0
his_pcts = pd.DataFrame(his_pcts).T.rename({'ARREST_KEY':'Pct Hispanic'}, axis=1).fillna(0.0)
# +
aapi_pcts = {}
for i in arrests.objectid.unique():
    try:
        aapi_pcts[i] = race_pcts.loc[(i, 'ASIAN / PACIFIC ISLANDER')]
    except KeyError:
        aapi_pcts[i] = 0
aapi_pcts = pd.DataFrame(aapi_pcts).T.rename({'ARREST_KEY':'Pct AAPI'}, axis=1).fillna(0.0)
# -
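# The four near-identical try/except blocks above all follow one pattern: look up a (station, race) entry and fall back to zero when it is missing. That pattern can be factored into a single helper; here is a minimal sketch using plain dictionaries (the names `pct_for` and `toy` are illustrative, not part of this notebook):

```python
def pct_for(race_pcts, station_ids, races):
    """Sum the percentage over `races` for each station, defaulting to 0
    when a (station, race) pair has no recorded arrests."""
    out = {}
    for station in station_ids:
        out[station] = sum(race_pcts.get((station, race), 0.0) for race in races)
    return out

# Toy example: station 2 has no Black arrests recorded at all.
toy = {(1, 'BLACK'): 60.0, (1, 'WHITE'): 40.0, (2, 'WHITE'): 100.0}
black = pct_for(toy, [1, 2], ['BLACK'])
hispanic = pct_for(toy, [1, 2], ['WHITE HISPANIC', 'BLACK HISPANIC'])
```

# With the pandas MultiIndex the same idea works via `race_pcts.loc` guarded by `KeyError`, or via `reindex(..., fill_value=0)`.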
fig, ax = plt.subplots()
colors = ["#dd6e42","#e8dab2","#4f6d7a","#c0d6df", "#AAAE7F"]
sns.kdeplot(blck_pcts['Pct Black'], label = 'Black', ax=ax, color=colors[2]).set_xlim(0,100)
sns.kdeplot(wht_pcts['Pct White'], label = 'White', ax=ax, color=colors[0])
sns.kdeplot(his_pcts['Pct Hispanic'], label = 'Latino', ax=ax, color=colors[1])
sns.kdeplot(aapi_pcts['Pct AAPI'], label = 'Asian', ax=ax, color=colors[3]).set_ylim(0, 0.1)
ax.set_xlabel('Percentage of Arrests')
#ax.title('Racial Makeup of Arrests at Subway Stops')
ax.set_ylabel('Density')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
fig.savefig('../figures/Race Percents KDE.png', bbox_inches='tight')
blck_pcts
# +
blck_cts = {}
for i in arrests.objectid.unique():
    try:
        blck_cts[i] = race_counts.loc[(i, 'BLACK')]
    except KeyError:
        blck_cts[i] = 0
blck_cts = pd.DataFrame(blck_cts).T.rename({'ARREST_KEY':'Num Black'}, axis=1)
his_cts = {}
for i in arrests.objectid.unique():
    try:
        his_cts[i] = race_counts.loc[(i, 'WHITE HISPANIC')] + race_counts.loc[(i, 'BLACK HISPANIC')]
    except KeyError:
        his_cts[i] = 0
his_cts = pd.DataFrame(his_cts).T.rename({'ARREST_KEY':'Num Hispanic'}, axis=1)
wht_cts = {}
for i in arrests.objectid.unique():
    try:
        wht_cts[i] = race_counts.loc[(i, 'WHITE')]
    except KeyError:
        wht_cts[i] = 0
wht_cts = pd.DataFrame(wht_cts).T.rename({'ARREST_KEY':'Num White'}, axis=1)
aapi_cts = {}
for i in arrests.objectid.unique():
    try:
        aapi_cts[i] = race_counts.loc[(i, 'ASIAN / PACIFIC ISLANDER')]
    except KeyError:
        aapi_cts[i] = 0
aapi_cts = pd.DataFrame(aapi_cts).T.rename({'ARREST_KEY':'Num AAPI'}, axis=1)
# -
sns.kdeplot(blck_cts['Num Black'], label = 'Black').set_xlim(0, 750)
sns.kdeplot(wht_cts['Num White'], label = 'White')
sns.kdeplot(his_cts['Num Hispanic'], label = 'Hispanic')
sns.kdeplot(aapi_cts['Num AAPI'], label = 'AAPI', bw = 5).set_ylim(0, 0.007)
plt.xlabel('Number of Arrests')
plt.title('Numerical Distribution of Arrests at Subway Stops')
plt.ylabel('Density')
counts = arrests.groupby('objectid').agg({'ARREST_KEY':'count'}).rename({'ARREST_KEY':'Num Arrests'}, axis=1)
num_pct_blck = counts.merge(blck_pcts, left_index=True, right_index=True)
num_pct_blck.plot(x='Num Arrests', y='Pct Black', kind='scatter', xlim =(0, 5000))
num_blck_hisp = pd.DataFrame(blck_pcts['Pct Black']+his_pcts['Pct Hispanic'])
num_pct_blck_hisp = counts.merge(num_blck_hisp, left_index=True, right_index=True)
num_pct_blck_hisp.plot(x='Num Arrests', y=0, kind='scatter', xlim = (0, 5000))
plt.ylabel('Percent of Arrests Black or Latino')
plt.xlabel('Number of Arrests at Station')
plt.title('Racial Makeup of Arrests and Number of Arrests at a Station')
entries = turnstile.groupby('objectid').agg({'entries':'sum'})
rates = pd.DataFrame(counts['Num Arrests']*100000/entries.entries).fillna(0.0)
rate_pct_blck = rates.merge(blck_pcts, left_index=True, right_index=True)
rate_pct_blck.plot(x=0, y='Pct Black', kind='scatter')
plt.xlabel('Arrests per 100K Entries')
pct_blck_hisp = pd.DataFrame(blck_pcts['Pct Black']+his_pcts['Pct Hispanic'])
rate_pct_blck_hisp = rates.merge(pct_blck_hisp, left_index=True, right_index=True)
fig, ax = plt.subplots()
rate_pct_blck_hisp.plot(x='0_y', y='0_x', kind='scatter', ax=ax, color = "#4f6d7a")
ax.set_ylabel('Number of Arrests per 100K Entries')
ax.set_xlabel("Percentage of Arrests Black or Hispanic")
#plt.title('Racial Makeup of Arrests and Rate of Arrest at a Station')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
fig.savefig('../figures/Arrest Demographics and arrest rate.png', bbox_inches='tight')
rates.sort_values(0)
| notebooks/Percent of stops each race .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_crossover:
# -
# ## Crossover
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_crossover_sbx:
# -
# ### Simulated Binary Crossover ('real_sbx', 'int_sbx')
#
# Details about the crossover can be found in <cite data-cite="sbx"></cite>. Real values can be represented in binary notation, on which point crossovers can be performed. SBX simulates this operation by using a probability distribution that mimics the binary crossover.
#
# A crossover object can be created by
# +
from pymoo.factory import get_crossover
crossover = get_crossover("real_sbx", prob=0.9, eta=20)
# -
# As arguments, the probability of a crossover and the *eta* parameter can be provided.
#
# In the example below, we demonstrate a crossover in an optimization problem with only one variable. A crossover is performed between two points, *0.2* and *0.8*, and the resulting exponential distribution is visualized. Depending on the *eta_cross*, the exponential distribution can be fine-tuned.
#
# The probability of SBX follows an exponential distribution. Please note that for demonstration purposes we have set *prob_per_variable=1.0*, which means every variable participates in the crossover (necessary here because there exists only one variable). However, it is suggested to let each variable of a parent take part in the crossover with a probability of 0.5, which is the default if not defined otherwise.
# +
from pymoo.interface import crossover
import numpy as np
import matplotlib.pyplot as plt
def show(eta_cross):
a,b = np.full((5000, 1), 0.2), np.full((5000, 1), 0.8)
off = crossover(get_crossover("real_sbx", prob=1.0, eta=eta_cross, prob_per_variable=1.0), a, b)
plt.hist(off, range=(0,1), bins=200, density=True, color="red")
plt.show()
show(1)
# -
show(30)
# Also, it can be used for integer variables. The bounds are slightly modified, and after doing the crossover, the variables are rounded.
# +
from pymoo.factory import get_crossover
from pymoo.interface import crossover
import numpy as np
import matplotlib.pyplot as plt
def show(eta_cross):
a,b = np.full((50000, 1), -10), np.full((50000, 1), +10)
off = crossover(get_crossover("int_sbx", prob=1.0, eta=eta_cross, prob_per_variable=1.0), a, b, xl=-20, xu=+20)
val, count = np.unique(off, return_counts=True)
#print(np.column_stack([val, count / count.sum()]))
plt.hist(off, range=(-20, 20), bins=41, density=True, color="red")
plt.show()
show(3)
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_crossover_point:
# -
# ### Point Crossover ('real_point', 'bin_point', 'int_point' )
#
# The point crossover is mostly applied to binary optimization problems. However, in general, it can be used for other variable representations.
#
# The point crossover can be initiated by
crossover = get_crossover("real_k_point", n_points=2)
# for any number of points desired. Additionally, for convenience, the
# single-point or two-point crossover can be created by
get_crossover("real_one_point")
get_crossover("real_two_point")
# directly.
# +
from pymoo.interface import crossover
from pymoo.factory import get_crossover
import numpy as np
import matplotlib.pyplot as plt
def example_parents(n_matings, n_var):
a = np.arange(n_var)[None, :].repeat(n_matings, axis=0)
b = a + n_var
return a, b
def show(M):
plt.figure(figsize=(4,4))
plt.imshow(M, cmap='Greys', interpolation='nearest')
plt.xlabel("Variables")
plt.ylabel("Individuals")
plt.show()
n_matings, n_var = 100, 100
a,b = example_parents(n_matings,n_var)
print("One Point Crossover")
off = crossover(get_crossover("bin_one_point"), a, b)
show((off[:n_matings] != a[0]))
print("Two Point Crossover")
off = crossover(get_crossover("bin_two_point"), a, b)
show((off[:n_matings] != a[0]))
print("K Point Crossover (k=4)")
off = crossover(get_crossover("bin_k_point", n_points=4), a, b)
show((off[:n_matings] != a[0]))
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_crossover_uniform:
# -
# ### Uniform Crossover ('real_ux', 'bin_ux', 'int_ux')
#
# The uniform crossover takes, with a probability of 0.5, the value from either parent.
# In contrast to a point crossover, no contiguous sequence of variables is exchanged, but randomly chosen indices.
off = crossover(get_crossover("bin_ux"), a, b)
show((off[:n_matings] != a[0]))
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_crossover_half_uniform:
# -
# ### Half Uniform Crossover ('bin_hux', 'int_hux')
#
# The half uniform crossover first determines which indices differ between the first and the second parent. Then, half of these differing indices are selected to take their value from the other parent.
# +
_a = np.full((100,100), False)
_b = np.copy(_a)
_b[:, np.linspace(5, 95, 10).astype(int)] = True
print("Here, a and b are different for indices: ", np.where(_a[0] != _b[0])[0])
off = crossover(get_crossover("bin_hux"), _a, _b)
show((off[:100] != _a[0]))
diff_a_to_b = (_a != _b).sum()
diff_a_to_off = (_a != off[:100]).sum()
print("Difference in bits (a to b): ", diff_a_to_b)
print("Difference in bits (a to off): ", diff_a_to_off)
print("Crossover Rate: ", diff_a_to_off / diff_a_to_b)
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_crossover_exponential:
# -
# ### Exponential Crossover ('real_exp', 'bin_exp', 'int_exp')
#
# The exponential crossover is mostly a one-point crossover, but occasionally it can be a two-point crossover.
# First, a starting index is chosen at random. Then, each following variable is added to the exchanged segment with a specific probability. If the last variable is reached, the segment continues with the first one (wrap around).
off = crossover(get_crossover("real_exp", prob=0.95), a, b)
show((off[:n_matings] != a[0]))
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_crossover_differential:
# -
# ### Differential Crossover ('real_de')
#
# The differential crossover is used in the [differential evolution algorithm](../algorithms/differential_evolution.ipynb). It adds the difference of two individuals to another one.
#
# It can be initiated by
crossover = get_crossover("real_de")
# In the following, the creation of donor vectors is shown. The difference $x_{\pi_2} - x_{\pi_3}$ is added with different weights $F \in (0, 1)$ to $x_{\pi_1}$. The resulting donor solution can be used for further evolutionary recombinations (for example, DE uses it for another crossover).
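# The donor-vector formula itself can be sketched in plain numpy. The weight *F* below is chosen arbitrarily for illustration (pymoo draws it according to the `dither` setting), and the parent values match the three points used in the plot:

```python
import numpy as np

# the three parents
x_pi1 = np.array([0.4, 0.4])
x_pi2 = np.array([0.6, 0.5])
x_pi3 = np.array([0.8, 0.2])

F = 0.5  # illustrative weight in (0, 1)
v = x_pi1 + F * (x_pi2 - x_pi3)  # donor vector
print(v)
```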
# +
from pymoo.factory import get_crossover
from pymoo.interface import crossover
import numpy as np
import matplotlib.pyplot as plt
c = np.array([[0.8, 0.2]])
a = np.array([[0.4, 0.4]])
b = np.array([[0.6, 0.5]])
X = crossover(get_crossover("real_de", weight=0.0, dither='vector'),
a.repeat(100, axis=0), b.repeat(100, axis=0), c.repeat(100, axis=0))
plt.scatter(X[:, 0], X[:, 1], s=20,facecolors='none', edgecolors='r', label="v")
plt.scatter(a[:, 0], a[:, 1], label="$x_{\pi_1}$", marker="X")
plt.scatter(b[:, 0], b[:, 1], label="$x_{\pi_2}$", marker="X")
plt.scatter(c[:, 0], c[:, 1], label="$x_{\pi_3}$", marker="X")
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.legend()
plt.show()
# -
# ### API
# + raw_mimetype="text/restructuredtext" active=""
# .. autofunction:: pymoo.factory.get_crossover
# :noindex:
#
# .. autofunction:: pymoo.model.crossover.Crossover
# :noindex:
| doc/source/operators/crossover.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %%capture
import os
import site
os.sys.path.insert(0, '/home/schirrmr/code/reversible/reversible2/')
os.sys.path.insert(0, '/home/schirrmr/braindecode/code/braindecode/')
os.sys.path.insert(0, '/home/schirrmr/code/explaining/reversible//')
# %cd /home/schirrmr/
# %load_ext autoreload
# %autoreload 2
import numpy as np
import logging
log = logging.getLogger()
log.setLevel('INFO')
import sys
logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s',
level=logging.INFO, stream=sys.stdout)
import matplotlib
from matplotlib import pyplot as plt
from matplotlib import cm
# %matplotlib inline
# %config InlineBackend.figure_format = 'png'
matplotlib.rcParams['figure.figsize'] = (12.0, 1.0)
matplotlib.rcParams['font.size'] = 14
import seaborn
seaborn.set_style('darkgrid')
from reversible.sliced import sliced_from_samples
from numpy.random import RandomState
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
import copy
import math
import itertools
from reversible.plot import create_bw_image
import torch as th
from braindecode.torch_ext.util import np_to_var, var_to_np
from reversible.revnet import ResidualBlock, invert, SubsampleSplitter, ViewAs, ReversibleBlockOld
from spectral_norm import spectral_norm
from conv_spectral_norm import conv_spectral_norm
def display_text(text, fontsize=18):
fig = plt.figure(figsize=(12,0.1))
plt.title(text, fontsize=fontsize)
plt.axis('off')
display(fig)
plt.close(fig)
# +
from braindecode.datasets.bbci import BBCIDataset
from braindecode.mne_ext.signalproc import mne_apply
from collections import OrderedDict
from braindecode.datautil.trial_segment import create_signal_target_from_raw_mne
def load_file(filename):
cnt = BBCIDataset(filename).load()
cnt = cnt.drop_channels(['STI 014'])
def car(a):
return a - np.mean(a, keepdims=True, axis=0)
cnt = mne_apply(
car, cnt)
return cnt
def create_set(cnt):
marker_def = OrderedDict([('Right Hand', [1]), ('Left Hand', [2],),
('Rest', [3]), ('Feet', [4])])
ival = [500,1500]
from braindecode.mne_ext.signalproc import mne_apply, resample_cnt
from braindecode.datautil.signalproc import exponential_running_standardize, bandpass_cnt
log.info("Resampling train...")
cnt = resample_cnt(cnt, 250.0)
log.info("Standardizing train...")
cnt = mne_apply(lambda a: exponential_running_standardize(a.T ,factor_new=1e-3, init_block_size=1000, eps=1e-4).T,
cnt)
cnt = resample_cnt(cnt, 32.0)
cnt = resample_cnt(cnt, 64.0)
dataset = create_signal_target_from_raw_mne(cnt, marker_def, ival)
return dataset
def create_inputs(dataset):
x_right = dataset.X[dataset.y == 0]
x_rest = dataset.X[dataset.y == 2]
inputs_a = np_to_var(x_right[:160,0:1,:,None], dtype=np.float32)
inputs_b = np_to_var(x_rest[:160,0:1,:,None], dtype=np.float32)
inputs = [inputs_a, inputs_b]
return inputs
# -
train_cnt = load_file('/data/schirrmr/schirrmr/HGD-public/reduced/train/4.mat')
train_cnt = train_cnt.reorder_channels(['C3', 'C4'])
train_set = create_set(train_cnt)
train_inputs = create_inputs(train_set)
test_cnt = load_file('/data/schirrmr/schirrmr/HGD-public/reduced/test/4.mat')
test_cnt = test_cnt.reorder_channels(['C3', 'C4'])
test_set = create_set(test_cnt)
test_inputs = create_inputs(test_set)
# +
fig = plt.figure(figsize=(8,4))
for i_class in range(len(train_inputs)):
ins = var_to_np(train_inputs[i_class].squeeze())
bps = np.abs(np.fft.rfft(ins.squeeze()))
plt.plot(np.fft.rfftfreq(ins.squeeze().shape[1], d=1/ins.squeeze().shape[1]), np.median(bps, axis=0))
plt.title("Spectrum")
plt.xlabel('Frequency [Hz]')
plt.ylabel('Amplitude')
plt.legend(['Real Right', 'Real Rest'])
display(fig)
plt.close(fig)
# +
# https://github.com/rosinality/glow-pytorch/blob/ddb4b65384a5f96bdfab2f07194b98c5da46ae80/model.py
class Conv1x1(nn.Module):
def __init__(self, in_channel):
super().__init__()
weight = torch.randn(in_channel, in_channel)
q, _ = torch.qr(weight)
weight = q.unsqueeze(2).unsqueeze(3)
self.weight = nn.Parameter(weight)
def forward(self, input):
out = F.conv2d(input, self.weight)
#_, _, height, width = input.shape
#logdet = (
# height * width * torch.slogdet(self.weight.squeeze().double())[1].float()
#)
return out
def invert(self, output):
return F.conv2d(
output, self.weight.squeeze().inverse().unsqueeze(2).unsqueeze(3)
)
# +
def rev_block(n_c, n_i_c):
return ReversibleBlockOld(
nn.Sequential(
nn.Conv2d(n_c // 2, n_i_c,(3,1), stride=1, padding=(1,0),bias=True),
nn.ReLU(),
nn.Conv2d(n_i_c, n_c // 2,(3,1), stride=1, padding=(1,0),bias=True)),
nn.Sequential(
nn.Conv2d(n_c // 2, n_i_c,(3,1), stride=1, padding=(1,0),bias=True),
nn.ReLU(),
nn.Conv2d(n_i_c, n_c // 2,(3,1), stride=1, padding=(1,0),bias=True))
)
def dense_rev_block(n_c, n_i_c):
return ReversibleBlockOld(
nn.Sequential(
nn.Linear(n_c // 2, n_i_c, bias=True),
nn.ReLU(),
nn.Linear(n_i_c, n_c // 2,bias=True)),
nn.Sequential(
nn.Linear(n_c // 2, n_i_c, bias=True),
nn.ReLU(),
nn.Linear(n_i_c, n_c // 2, bias=True))
)
def res_block(n_c, n_i_c):
return ResidualBlock(
nn.Sequential(
nn.Conv2d(n_c, n_i_c, (3,1), stride=1, padding=(1,0),bias=True),
nn.ReLU(),
nn.Conv2d(n_i_c, n_c, (3,1), stride=1, padding=(1,0),bias=True)),
)
# +
from discriminator import ProjectionDiscriminator
from reversible.revnet import SubsampleSplitter, ViewAs
from reversible.util import set_random_seeds
from reversible.revnet import init_model_params
from torch.nn import ConstantPad2d
import torch as th
from conv_spectral_norm import conv_spectral_norm
from disttransform import DistTransformResNet
set_random_seeds(2019011641, False)
feature_model = nn.Sequential(
ViewAs((-1,1,64,1), (-1,64,1,1)),
Conv1x1(64),
ViewAs((-1,64,1,1), (-1,64)),
)
from reversible.training import hard_init_std_mean
n_dims = train_inputs[0].shape[2]
n_clusters = len(train_inputs)
means_per_cluster = [th.autograd.Variable(th.ones(n_dims), requires_grad=True)
for _ in range(n_clusters)]
# keep in mind this is in log domain so 0 is std 1
stds_per_cluster = [th.autograd.Variable(th.zeros(n_dims), requires_grad=True)
for _ in range(n_clusters)]
for i_class in range(n_clusters):
this_outs = feature_model(train_inputs[i_class])
means_per_cluster[i_class].data = th.mean(this_outs, dim=0).view(-1).data
stds_per_cluster[i_class].data = th.log(th.std(this_outs, dim=0),).view(-1).data
from copy import deepcopy
optimizer = th.optim.Adam(
[
{'params': list(feature_model.parameters()),
'lr': 1e-3,
'weight_decay': 0},], betas=(0,0.9))
optim_dist = th.optim.Adam(
[
{'params': means_per_cluster + stds_per_cluster,
'lr': 1e-2,
'weight_decay': 0},], betas=(0,0.9))
# +
from reversible.gaussian import get_gauss_samples
from reversible.uniform import get_uniform_samples
from reversible.revnet import invert
import pandas as pd
from gradient_penalty import gradient_penalty
import time
df = pd.DataFrame()
g_loss = np_to_var([np.nan],dtype=np.float32)
g_grad = np.nan
d_loss = np_to_var([np.nan],dtype=np.float32)
d_grad = np.nan
gradient_loss = np_to_var([np.nan],dtype=np.float32)
# +
def invert_hierarchical(features):
return invert(feature_model, features)
def get_samples(n_samples, i_class):
mean = means_per_cluster[i_class]
std = th.exp(stds_per_cluster[i_class])
# let's create a mask for the std for now
samples = get_gauss_samples(n_samples, mean, std, truncate_to=3)
return samples
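# `get_gauss_samples(..., truncate_to=3)` above comes from the project's `reversible` package; a plain-numpy sketch of what such truncated Gaussian sampling could look like (whether the library clips or resamples out-of-range draws is an assumption here):

```python
import numpy as np

def gauss_samples_truncated(n_samples, mean, std, truncate_to=3.0, seed=None):
    """Hypothetical stand-in: draw N(mean, std^2) samples, limited to
    +/- truncate_to standard deviations (here by clipping)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_samples, mean.shape[0]))
    z = np.clip(z, -truncate_to, truncate_to)
    return mean + std * z

samples = gauss_samples_truncated(1000, np.array([1.0, -2.0]), np.array([0.5, 2.0]), seed=0)
```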
# +
import ot
from reversible.util import ensure_on_same_device, np_to_var, var_to_np
def ot_euclidean_loss_for_samples(samples_a, samples_b):
diffs = samples_a.unsqueeze(1) - samples_b.unsqueeze(0)
diffs = th.sqrt(th.clamp(th.sum(diffs * diffs, dim=2), min=1e-6))
transport_mat = ot.emd([], [], var_to_np(diffs))
# sometimes weird low values, try to prevent them
transport_mat = transport_mat * (transport_mat > (1.0/(diffs.numel())))
transport_mat = np_to_var(transport_mat, dtype=np.float32)
diffs, transport_mat = ensure_on_same_device(diffs, transport_mat)
loss = th.sum(transport_mat * diffs)
return loss
# +
n_epochs = 5001
rng = RandomState(349384)
for i_epoch in range(n_epochs):
start_time = time.time()
optimizer.zero_grad()
optim_dist.zero_grad()
for i_class in range(len(train_inputs)):
this_inputs = train_inputs[i_class]
n_samples = len(this_inputs) * 5
samples = get_samples(n_samples, i_class)
inverted = invert_hierarchical(samples)
g_loss = ot_euclidean_loss_for_samples(this_inputs.view(this_inputs.shape[0],-1),
inverted.view(inverted.shape[0],-1))
g_loss.backward()
g_grad = np.mean([th.sum(p.grad **2).item() for p in itertools.chain(feature_model.parameters())])
dist_grad = np.mean([th.sum(p.grad **2).item() for p in means_per_cluster + stds_per_cluster])
optimizer.step()
optim_dist.step()
with th.no_grad():
sample_wd_row = {}
for setname, setinputs in [('train', train_inputs), ('test', test_inputs)]:
for i_class in range(len(setinputs)):
this_inputs = setinputs[i_class]
n_samples = len(this_inputs)
samples = get_samples(n_samples, i_class)
inverted = invert_hierarchical(samples)
in_np = var_to_np(this_inputs).reshape(len(this_inputs), -1)
fake_np = var_to_np(inverted).reshape(len(inverted), -1)
import ot
dist = np.sqrt(np.sum(np.square(in_np[:,None] - fake_np[None]), axis=2))
match_matrix = ot.emd([],[], dist)
cost = np.sum(dist * match_matrix)
sample_wd_row.update({
setname + '_sampled_wd' + str(i_class): cost,
})
end_time = time.time()
epoch_row = {
'g_loss': g_loss.item(),
'g_grad': g_grad,
'dist_grad': dist_grad,
'runtime': end_time -start_time,}
epoch_row.update(sample_wd_row)
df = df.append(epoch_row, ignore_index=True)
if i_epoch % (max(1,n_epochs // 20)) == 0:
display_text("Epoch {:d}".format(i_epoch))
display(df.iloc[-5:])
if i_epoch % (n_epochs // 20) == 0:
print("stds\n", var_to_np(th.exp(th.stack(stds_per_cluster))))
fig = plt.figure(figsize=(8,4))
plt.plot(var_to_np(th.exp(th.stack(stds_per_cluster))).squeeze().T)
plt.title("Standard deviation\nper dimension")
display(fig)
plt.close(fig)
fig = plt.figure(figsize=(8,4))
set_inputs = train_inputs
for i_class in range(len(set_inputs)):
ins = var_to_np(set_inputs[i_class].squeeze())
bps = np.abs(np.fft.rfft(ins.squeeze()))
plt.plot(np.fft.rfftfreq(ins.squeeze().shape[1], d=1/ins.squeeze().shape[1]), np.median(bps, axis=0))
n_samples = 5000
samples = get_samples(n_samples, i_class)
inverted = var_to_np(invert_hierarchical(samples).squeeze())
bps = np.abs(np.fft.rfft(inverted.squeeze()))
plt.plot(np.fft.rfftfreq(inverted.squeeze().shape[1], d=1/ins.squeeze().shape[1]), np.median(bps, axis=0),
color=seaborn.color_palette()[i_class], ls='--')
plt.title("Spectrum")
plt.xlabel('Frequency [Hz]')
plt.ylabel('Amplitude')
plt.legend(['Real Right', 'Fake Right', 'Real Rest', 'Fake Rest'])
display(fig)
plt.close(fig)
set_inputs = train_inputs
for i_class in range(len(set_inputs)):
fig = plt.figure(figsize=(5,5))
mean = means_per_cluster[i_class]
log_std = stds_per_cluster[i_class]
std = th.exp(log_std)
y = np_to_var([i_class])
n_samples = 5000
samples = get_samples(n_samples, i_class)
inverted = var_to_np(invert_hierarchical(samples).squeeze())
plt.plot(inverted.squeeze()[:,0], inverted.squeeze()[:,1],
ls='', marker='o', color=seaborn.color_palette()[i_class + 2], alpha=0.5, markersize=2)
plt.plot(var_to_np(set_inputs[i_class].squeeze())[:,0], var_to_np(set_inputs[i_class].squeeze())[:,1],
ls='', marker='o', color=seaborn.color_palette()[i_class])
display(fig)
plt.close(fig)
fig = plt.figure(figsize=(8,3))
plt.plot(inverted[:1000].T, color=seaborn.color_palette()[0],lw=0.5);
display(fig)
plt.close(fig)
i_dims = np.argsort(var_to_np(stds_per_cluster[0]))[::-1][:2]
with th.no_grad():
mean = means_per_cluster[i_class]
std = th.exp(stds_per_cluster[i_class])
samples = get_samples(5000, i_class)
outs = feature_model(set_inputs[i_class])
fig = plt.figure(figsize=(3,3))
plt.plot(var_to_np(samples)[:,i_dims[0]].squeeze(),
var_to_np(samples)[:,i_dims[1]].squeeze(), marker='o', ls='')
plt.plot(var_to_np(outs)[:,i_dims[0]].squeeze(),
var_to_np(outs)[:,i_dims[1]].squeeze(), marker='o', ls='')
plt.legend(["Fake", "Real"])
display(fig)
plt.close(fig)
i_dims = (np.argsort(np.max(var_to_np(th.stack(stds_per_cluster)), axis=0))[::-1][:4])
set_inputs = train_inputs
for i_dim in i_dims:
display_text("Dimension {:d}".format(i_dim))
examples_per_class = []
outs_per_class = []
for i_class in range(2):
mean = means_per_cluster[i_class]
std = th.exp(stds_per_cluster[i_class])
i_f_vals = th.linspace((mean[i_dim] - 2 * std[i_dim]).item(),
(mean[i_dim] + 2 *std[i_dim]).item(), 21)
examples = mean.repeat(len(i_f_vals), 1)
examples.data[:,i_dim] = i_f_vals.data
examples_per_class.append(examples)
outs_per_class.append(feature_model(set_inputs[i_class]))
#display_text(["Right", "Rest"][i_class])
fig, axes = plt.subplots(1,2, figsize=(6,3), sharex=True, sharey=True)
for i_class in range(2):
from matplotlib import rcParams, cycler
cmap = plt.cm.coolwarm
N = len(examples)
examples = examples_per_class[i_class]
axes[i_class].plot(var_to_np(outs_per_class[i_class])[:,i_dim].squeeze(),
var_to_np(outs_per_class[i_class])[:,i_dim].squeeze() * 0 - 0.01,
ls='', marker='o', alpha=0.25, markersize=3,
color=seaborn.color_palette()[i_class])
axes[i_class].scatter(var_to_np(examples)[:,i_dim].squeeze(),
var_to_np(examples)[:,i_dim].squeeze() * 0,
c=cmap(np.linspace(0, 1, N)))
if i_class == 0:
axes[i_class].set_title("Latent space:")
display(fig)
plt.close(fig)
with plt.rc_context({'axes.prop_cycle': cycler(color=cmap(np.linspace(0, 1, N)))}):
fig, axes = plt.subplots(1,2, figsize=(16,3), sharex=True, sharey=True)
for i_class in range(2):
inverted = invert_hierarchical(examples_per_class[i_class])
axes[i_class].plot(var_to_np(inverted).squeeze().T);
display(fig)
plt.close(fig)
# +
# try switching forward and invert
# https://github.com/rosinality/glow-pytorch/blob/ddb4b65384a5f96bdfab2f07194b98c5da46ae80/model.py
class InvConv1x1(nn.Module):
def __init__(self, in_channel):
super().__init__()
weight = torch.randn(in_channel, in_channel)
q, _ = torch.qr(weight)
weight = q.unsqueeze(2).unsqueeze(3)
self.weight = nn.Parameter(weight)
def forward(self, input):
return F.conv2d(
input, self.weight.squeeze().inverse().unsqueeze(2).unsqueeze(3)
)
def invert(self, output):
out = F.conv2d(output, self.weight)
return out
# +
from discriminator import ProjectionDiscriminator
from reversible.revnet import SubsampleSplitter, ViewAs
from reversible.util import set_random_seeds
from reversible.revnet import init_model_params
from torch.nn import ConstantPad2d
import torch as th
from conv_spectral_norm import conv_spectral_norm
from disttransform import DistTransformResNet
set_random_seeds(2019011641, True)
feature_model = nn.Sequential(
ViewAs((-1,1,64,1), (-1,64,1,1)),
InvConv1x1(64),
ViewAs((-1,64,1,1), (-1,64)),
)
from reversible.training import hard_init_std_mean
n_dims = train_inputs[0].shape[2]
n_clusters = len(train_inputs)
means_per_cluster = [th.autograd.Variable(th.ones(n_dims), requires_grad=True)
for _ in range(n_clusters)]
# keep in mind this is in log domain so 0 is std 1
stds_per_cluster = [th.autograd.Variable(th.zeros(n_dims), requires_grad=True)
for _ in range(n_clusters)]
for i_class in range(n_clusters):
this_outs = feature_model(train_inputs[i_class])
means_per_cluster[i_class].data = th.mean(this_outs, dim=0).view(-1).data
stds_per_cluster[i_class].data = th.log(th.std(this_outs, dim=0),).view(-1).data
from copy import deepcopy
optimizer = th.optim.Adam(
[
{'params': list(feature_model.parameters()),
'lr': 1e-3,
'weight_decay': 0},], betas=(0,0.9))
optim_dist = th.optim.Adam(
[
{'params': means_per_cluster + stds_per_cluster,
'lr': 1e-2,
'weight_decay': 0},], betas=(0,0.9))
from reversible.gaussian import get_gauss_samples
from reversible.uniform import get_uniform_samples
from reversible.revnet import invert
import pandas as pd
from gradient_penalty import gradient_penalty
import time
df = pd.DataFrame()
g_loss = np_to_var([np.nan],dtype=np.float32)
g_grad = np.nan
d_loss = np_to_var([np.nan],dtype=np.float32)
d_grad = np.nan
gradient_loss = np_to_var([np.nan],dtype=np.float32)
# +
n_epochs = 5001
rng = RandomState(349384)
for i_epoch in range(n_epochs):
start_time = time.time()
optimizer.zero_grad()
optim_dist.zero_grad()
for i_class in range(len(train_inputs)):
this_inputs = train_inputs[i_class]
n_samples = len(this_inputs) * 5
samples = get_samples(n_samples, i_class)
inverted = invert_hierarchical(samples)
g_loss = ot_euclidean_loss_for_samples(this_inputs.view(this_inputs.shape[0],-1),
inverted.view(inverted.shape[0],-1))
g_loss.backward()
g_grad = np.mean([th.sum(p.grad **2).item() for p in itertools.chain(feature_model.parameters())])
dist_grad = np.mean([th.sum(p.grad **2).item() for p in means_per_cluster + stds_per_cluster])
optimizer.step()
optim_dist.step()
with th.no_grad():
sample_wd_row = {}
for setname, setinputs in [('train', train_inputs), ('test', test_inputs)]:
for i_class in range(len(setinputs)):
this_inputs = setinputs[i_class]
n_samples = len(this_inputs)
samples = get_samples(n_samples, i_class)
inverted = invert_hierarchical(samples)
in_np = var_to_np(this_inputs).reshape(len(this_inputs), -1)
fake_np = var_to_np(inverted).reshape(len(inverted), -1)
import ot
dist = np.sqrt(np.sum(np.square(in_np[:,None] - fake_np[None]), axis=2))
match_matrix = ot.emd([],[], dist)
cost = np.sum(dist * match_matrix)
sample_wd_row.update({
setname + '_sampled_wd' + str(i_class): cost,
})
end_time = time.time()
epoch_row = {
'g_loss': g_loss.item(),
'g_grad': g_grad,
'dist_grad': dist_grad,
'runtime': end_time -start_time,}
epoch_row.update(sample_wd_row)
df = df.append(epoch_row, ignore_index=True)
if i_epoch % (max(1,n_epochs // 20)) == 0:
display_text("Epoch {:d}".format(i_epoch))
display(df.iloc[-5:])
if i_epoch % (n_epochs // 20) == 0:
print("stds\n", var_to_np(th.exp(th.stack(stds_per_cluster))))
fig = plt.figure(figsize=(8,4))
plt.plot(var_to_np(th.exp(th.stack(stds_per_cluster))).squeeze().T)
plt.title("Standard deviation\nper dimension")
display(fig)
plt.close(fig)
fig = plt.figure(figsize=(8,4))
set_inputs = train_inputs
for i_class in range(len(set_inputs)):
ins = var_to_np(set_inputs[i_class].squeeze())
bps = np.abs(np.fft.rfft(ins.squeeze()))
plt.plot(np.fft.rfftfreq(ins.squeeze().shape[1], d=1/ins.squeeze().shape[1]), np.median(bps, axis=0))
n_samples = 5000
samples = get_samples(n_samples, i_class)
inverted = var_to_np(invert_hierarchical(samples).squeeze())
bps = np.abs(np.fft.rfft(inverted.squeeze()))
plt.plot(np.fft.rfftfreq(inverted.squeeze().shape[1], d=1/ins.squeeze().shape[1]), np.median(bps, axis=0),
color=seaborn.color_palette()[i_class], ls='--')
plt.title("Spectrum")
plt.xlabel('Frequency [Hz]')
plt.ylabel('Amplitude')
plt.legend(['Real Right', 'Fake Right', 'Real Rest', 'Fake Rest'])
display(fig)
plt.close(fig)
set_inputs = train_inputs
for i_class in range(len(set_inputs)):
fig = plt.figure(figsize=(5,5))
mean = means_per_cluster[i_class]
log_std = stds_per_cluster[i_class]
std = th.exp(log_std)
y = np_to_var([i_class])
n_samples = 5000
samples = get_samples(n_samples, i_class)
inverted = var_to_np(invert_hierarchical(samples).squeeze())
plt.plot(inverted.squeeze()[:,0], inverted.squeeze()[:,1],
ls='', marker='o', color=seaborn.color_palette()[i_class + 2], alpha=0.5, markersize=2)
plt.plot(var_to_np(set_inputs[i_class].squeeze())[:,0], var_to_np(set_inputs[i_class].squeeze())[:,1],
ls='', marker='o', color=seaborn.color_palette()[i_class])
display(fig)
plt.close(fig)
fig = plt.figure(figsize=(8,3))
plt.plot(inverted[:1000].T, color=seaborn.color_palette()[0],lw=0.5);
display(fig)
plt.close(fig)
i_dims = np.argsort(var_to_np(stds_per_cluster[0]))[::-1][:2]
with th.no_grad():
mean = means_per_cluster[i_class]
std = th.exp(stds_per_cluster[i_class])
samples = get_samples(5000, i_class)
outs = feature_model(set_inputs[i_class])
fig = plt.figure(figsize=(3,3))
plt.plot(var_to_np(samples)[:,i_dims[0]].squeeze(),
var_to_np(samples)[:,i_dims[1]].squeeze(), marker='o', ls='')
plt.plot(var_to_np(outs)[:,i_dims[0]].squeeze(),
var_to_np(outs)[:,i_dims[1]].squeeze(), marker='o', ls='')
plt.legend(["Fake", "Real"])
display(fig)
plt.close(fig)
i_dims = (np.argsort(np.max(var_to_np(th.stack(stds_per_cluster)), axis=0))[::-1][:4])
set_inputs = train_inputs
for i_dim in i_dims:
display_text("Dimension {:d}".format(i_dim))
examples_per_class = []
outs_per_class = []
for i_class in range(2):
mean = means_per_cluster[i_class]
std = th.exp(stds_per_cluster[i_class])
i_f_vals = th.linspace((mean[i_dim] - 2 * std[i_dim]).item(),
(mean[i_dim] + 2 *std[i_dim]).item(), 21)
examples = mean.repeat(len(i_f_vals), 1)
examples.data[:,i_dim] = i_f_vals.data
examples_per_class.append(examples)
outs_per_class.append(feature_model(set_inputs[i_class]))
#display_text(["Right", "Rest"][i_class])
fig, axes = plt.subplots(1,2, figsize=(6,3), sharex=True, sharey=True)
for i_class in range(2):
from matplotlib import rcParams, cycler
cmap = plt.cm.coolwarm
N = len(examples)
examples = examples_per_class[i_class]
axes[i_class].plot(var_to_np(outs_per_class[i_class])[:,i_dim].squeeze(),
var_to_np(outs_per_class[i_class])[:,i_dim].squeeze() * 0 - 0.01,
ls='', marker='o', alpha=0.25, markersize=3,
color=seaborn.color_palette()[i_class])
axes[i_class].scatter(var_to_np(examples)[:,i_dim].squeeze(),
var_to_np(examples)[:,i_dim].squeeze() * 0,
c=cmap(np.linspace(0, 1, N)))
if i_class == 0:
axes[i_class].set_title("Latent space:")
display(fig)
plt.close(fig)
with plt.rc_context({'axes.prop_cycle': cycler(color=cmap(np.linspace(0, 1, N)))}):
fig, axes = plt.subplots(1,2, figsize=(16,3), sharex=True, sharey=True)
for i_class in range(2):
inverted = invert_hierarchical(examples_per_class[i_class])
axes[i_class].plot(var_to_np(inverted).squeeze().T);
display(fig)
plt.close(fig)
# -
| notebooks/fft-same-chan-amp-phase-dims/.ipynb_checkpoints/1x1Conv-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''finetune-vs-scratch-gHiQbun3-py3.8'': poetry)'
# name: python3
# ---
# # Tokenization
#
# ### BertTweet
#
# - fastBPE
# - 64K subword
#
# ### Twilbert
# - SentencePiece (fastBPE)
# - 30k subword
# +
# %load_ext autoreload
# %autoreload 2
from glob import glob
num_files = 100
tweet_files = glob("../../data/filtered_tweets/*.txt")
train_files = tweet_files[:2]
tweets = list([x.strip("\n") for x in open(tweet_files[0])])[:100_000]
# -
len(tweets)
# +
from tokenizers import SentencePieceUnigramTokenizer, SentencePieceBPETokenizer, BertWordPieceTokenizer, ByteLevelBPETokenizer
tokenizer = SentencePieceBPETokenizer()#replacement="_")
# +
from finetune_vs_scratch.preprocessing import special_tokens
from finetune_vs_scratch.tokenizer import tokenizer_special_tokens
#tokenizer.add_special_tokens(tokenizer_special_tokens)
tokenizer.train_from_iterator(
tweets,
vocab_size=30_000,
min_frequency=5,
show_progress=True,
limit_alphabet=500,
special_tokens=tokenizer_special_tokens,
)
# +
vocab = tokenizer.get_vocab()
inv_vocab = {v:k for k, v in vocab.items()}
tokenizer.encode("Qué hacesssss @usuario").tokens
# -
tokenizer_path = "./sentence-piece-tokenizer"
# !mkdir $tokenizer_path
vocab_file, merges_file = tokenizer.save_model(tokenizer_path)
# +
import sentencepiece
sentencepiece
# +
from finetune_vs_scratch.tokenizer import MyTokenizer
t_tokenizer = MyTokenizer(
vocab_file,
merges_file,
)
t_tokenizer
#sorted(vars(t_tokenizer).keys())
# +
t_tokenizer("@usuario <NAME> skere comunista")
# +
from transformers import RobertaTokenizerFast, AutoTokenizer
roberta_tokenizer = RobertaTokenizerFast(vocab_file, merges_file, never_split=special_tokens)
bertweet_tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
(roberta_tokenizer("@usuario <NAME> skere comunista")["input_ids"])
# +
inv_vocab = {v:k for k, v in roberta_tokenizer.vocab.items()}
tok_ids = roberta_tokenizer("@usuario <NAME> skere comunista")["input_ids"]
for tok in tok_ids:
print(tok, " ---> ", inv_vocab[tok])
# +
inv_vocab = {v:k for k, v in t_tokenizer.encoder.items()}
tok_ids = t_tokenizer("@usuario <NAME> skere comunista")["input_ids"]
for tok in tok_ids:
print(tok, " ---> ", inv_vocab[tok])
# +
# %%timeit
t_tokenizer(tweets[:1000]);
None
# +
# %%timeit
roberta_tokenizer(tweets[:1000]);
None
# +
# %%timeit
tokenizer.encode_batch(tweets[:1000])
None
# +
# %%timeit
bertweet_tokenizer(tweets[:1000])
None
# -
# Our implementation is very, very bad
#
| notebooks/tokenization/Tokenization naive.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Rotation Estimation II: Anisotropic Errors
# +
from pathlib import Path
import sys
from itertools import product
from itkwidgets import view
import numpy as np
from scipy import linalg
from scipy.stats import random_correlation, special_ortho_group
from scipy.spatial.transform import Rotation
sys.path.append('..')
import util
# -
A_bar = util.load_point_cloud(Path('../bunny/data/bun180.ply').resolve())
points_num = A_bar.shape[0]
print(points_num)
view(point_sets=A_bar)
cov_a = random_correlation.rvs((0.5, 1.2, 1.3))
print(cov_a)
noise_level = 3e-3
A = A_bar + noise_level * np.random.multivariate_normal(np.zeros(3), cov_a, points_num)
view(point_sets=A)
ideal_R = special_ortho_group.rvs(3)
print(ideal_R)
cov_a_prime = random_correlation.rvs((0.1, 0.2, 2.7))
print(cov_a_prime)
A_prime = A_bar @ ideal_R.T + noise_level * np.random.multivariate_normal(np.zeros(3), cov_a_prime, points_num)
view(point_sets=[A, A_prime])
# ## Using Singular Value Decomposition
R1 = util.estimate_R_using_SVD(A, A_prime)
print('error:', util.eval_R_error(R1, ideal_R))
view(point_sets=[A @ R1.T, A_prime])
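Beyond the error metric, it can be worth verifying that the estimate is a proper rotation at all, i.e. orthogonal with determinant +1. A small standalone sketch of such a check (independent of the `util` helpers):

```python
import numpy as np
from scipy.stats import special_ortho_group

def is_rotation(R, tol=1e-8):
    """True if R is a proper rotation: R @ R.T = I and det(R) = +1."""
    orthogonal = np.allclose(R @ R.T, np.eye(R.shape[0]), atol=tol)
    proper = np.isclose(np.linalg.det(R), 1.0, atol=tol)
    return orthogonal and proper

print(is_rotation(special_ortho_group.rvs(3, random_state=0)))  # True: random SO(3) sample
print(is_rotation(2 * np.eye(3)))                               # False: scaling breaks orthogonality
```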
# ## 5.3 Rotation Estimation Using the Quaternion Representation
Xi = np.stack([
np.hstack([
A_prime[:, [0]] - A[:, [0]],
np.zeros([points_num, 1]),
-(A_prime[:, [2]] + A[:, [2]]),
A_prime[:, [1]] + A[:, [1]]
]),
np.hstack([
A_prime[:, [1]] - A[:, [1]],
A_prime[:, [2]] + A[:, [2]],
np.zeros([points_num, 1]),
-(A_prime[:, [0]] + A[:, [0]])
]),
np.hstack([
A_prime[:, [2]] - A[:, [2]],
-(A_prime[:, [1]] + A[:, [1]]),
A_prime[:, [0]] + A[:, [0]],
np.zeros([points_num, 1])
])
])
print(Xi.shape)
T = np.array([
[
[-1, 0, 0, 1, 0, 0],
[ 0, 0, 0, 0, 0, 0],
[ 0, 0, -1, 0, 0, -1],
[ 0, 1, 0, 0, 1, 0]
], [
[ 0, -1, 0, 0, 1, 0],
[ 0, 0, 1, 0, 0, 1],
[ 0, 0, 0, 0, 0, 0],
[-1, 0, 0, -1, 0, 0]
], [
[ 0, 0, -1, 0, 0, 1],
[ 0, -1, 0, 0, -1, 0],
[ 1, 0, 0, 1, 0, 0],
[ 0, 0, 0, 0, 0, 0]
]
])
print(T.shape)
# +
cov_joined = linalg.block_diag(cov_a, cov_a_prime)
print(cov_joined)
V_0 = np.zeros([3, 3, T.shape[1], T.shape[1]])
for k, l in product(range(3), repeat=2):
V_0[k, l] = T[k] @ cov_joined @ T[l].T
print(V_0.shape)
# -
# ## 5.4 Optimization with the FNS Method
def calc_M(W, Xi):
dim = Xi.shape[2]
M = np.zeros([dim, dim])
for k, l in product(range(3), repeat=2):
M += W[k, l] * Xi[k].T @ Xi[l]
return M
def calc_L(W, q, Xi, V_0):
_, points_num, dim = Xi.shape
V = np.zeros([3, points_num])
for k, l in product(range(3), repeat=2):
V[k] += W[k, l] * Xi[l] @ q
L = np.zeros([dim, dim])
for k, l in product(range(3), repeat=2):
L += np.inner(V[k], V[l]) * V_0[k, l]
return L
def FNS_method(Xi, V_0):
# step 1
q0 = np.zeros(4)
W = np.eye(3)
iters = 1
while True:
# step 2
X = calc_M(W, Xi) - calc_L(W, q0, Xi, V_0)
# step 3
w, eigenvecs = linalg.eigh(X)
q = eigenvecs[:, np.argmin(w)]
# step 4
if np.allclose(q, q0) or np.allclose(q, -q0):
return q, iters
W_inv = np.zeros_like(W)
for k, l in product(range(3), repeat=2):
W_inv[k, l] = np.inner(q, V_0[k, l] @ q)
W = linalg.inv(W_inv)
q0 = q
iters += 1
q, iters = FNS_method(Xi, V_0)
R2 = Rotation.from_quat(q[[1, 2, 3, 0]]).as_matrix()
print('iterations:', iters)
print('error:', util.eval_R_error(R2, ideal_R))
view(point_sets=[A @ R2.T, A_prime])
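Note the index shuffle `q[[1, 2, 3, 0]]` above: the eigenvector solution uses the $(w, x, y, z)$ quaternion convention, while SciPy's `Rotation.from_quat` expects $(x, y, z, w)$. A minimal check of the reordering with the identity quaternion:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# identity quaternion in (w, x, y, z) order, as produced by the FNS solver
q_wxyz = np.array([1.0, 0.0, 0.0, 0.0])
# reorder to SciPy's (x, y, z, w) convention before converting
R = Rotation.from_quat(q_wxyz[[1, 2, 3, 0]]).as_matrix()
print(np.allclose(R, np.eye(3)))  # True
```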
# ## 5.5 Solution Using Homogeneous Constraints
zeros = np.zeros([points_num, 3])
Xi = np.stack([
np.hstack([A, zeros, zeros, -A_prime[:, [0]]]),
np.hstack([zeros, A, zeros, -A_prime[:, [1]]]),
np.hstack([zeros, zeros, A, -A_prime[:, [2]]])
])
del zeros
print(Xi.shape)
T = np.zeros([3, 10, 6])
for i in range(3):
T[i, i * 3, 0] = T[i, i * 3 + 1, 1] = T[i, i * 3 + 2, 2] = 1
T[i, 9, 3 + i] = -1
print(T.shape)
print(T)
V_0 = np.zeros([3, 3, T.shape[1], T.shape[1]])
for k, l in product(range(3), repeat=2):
V_0[k, l] = T[k] @ cov_joined @ T[l].T
print(V_0.shape)
def projection_matrix(u):
orthogonal_basis = np.array([
[u[1], u[0], 0, u[4], u[3], 0, u[7], u[6], 0, 0],
[0, u[2], u[1], 0, u[5], u[4], 0, u[8], u[7], 0],
[u[2], 0, u[0], u[5], 0, u[3], u[8], 0, u[6], 0],
[2*u[0], 0, 0, 2*u[3], 0, 0, 2*u[6], 0, 0, -2*u[9]],
[0, 2*u[1], 0, 0, 2*u[4], 0, 0, 2*u[7], 0, -2*u[9]],
[0, 0, 2*u[2], 0, 0, 2*u[5], 0, 0, 2*u[8], -2*u[9]],
]).T
constraint_num = orthogonal_basis.shape[1]
    # orthonormalize the constraint basis (QR in place of classical Gram–Schmidt)
Q, _ = linalg.qr(orthogonal_basis)
P = np.eye(10)
for i in range(6):
P -= np.outer(Q[:, i], Q[:, i])
return P, constraint_num
def EFNS_method(Xi, V_0):
# step 1
u = np.array([1., 0., 0.,
0., 1., 0.,
0., 0., 1., 1.])
u /= linalg.norm(u)
W = np.eye(3)
iters = 1
while True:
# step 2
M = calc_M(W, Xi)
L = calc_L(W, u, Xi, V_0)
# step 3, 4
P, constraint_num = projection_matrix(u)
# step 5
X = P @ (M - L) @ P
# step 6
w, vecs = linalg.eigh(X)
vecs = vecs[:, np.argsort(w)[:constraint_num + 1]]
# step 7
u_hat = np.zeros_like(u)
for i in range(constraint_num + 1):
u_hat += np.inner(u, vecs[:, i]) * vecs[:, i]
# step 8
u_prime = P @ u_hat
u_prime /= linalg.norm(u_prime)
if np.allclose(u_prime, u) or np.allclose(u_prime, -u):
return u_prime, iters
u += u_prime
u /= linalg.norm(u)
W_inv = np.zeros_like(W)
for k, l in product(range(3), repeat=2):
W_inv[k, l] = np.inner(u, V_0[k, l] @ u)
W = linalg.inv(W_inv)
iters += 1
u, iters = EFNS_method(Xi, V_0)
R3 = u[:-1].reshape(3, 3) / u[-1]
print('iterations:', iters)
print('error:', util.eval_R_error(R3, ideal_R))
view(point_sets=[A @ R3.T, A_prime])
# ## 6.6 Rotation Optimization by Maximum Likelihood Estimation
# (This comes from a different chapter, but it solves the same problem, so it is included here.)
def calc_W(cov_a, cov_a_prime, R):
return linalg.inv(R @ cov_a @ R.T + cov_a_prime)
def calc_g(A, A_prime, R, W, cov_a):
ART = A @ R.T
EWT = (A_prime - ART) @ W.T
g = (-np.cross(ART, EWT, axis=1) + np.cross(EWT, EWT @ (R @ cov_a @ R.T), axis=1)).sum(axis=0)
return g
def calc_H(A, R, W):
ART = A @ R.T
tmp = np.stack([
# np.cross(ART, W[:, 0], axisa=1, axisb=0, axisc=1)
np.cross(ART, W[[0]], axis=1),
np.cross(ART, W[[1]], axis=1),
np.cross(ART, W[[2]], axis=1),
], axis=2)
# np.cross(tmp, ART.reshape(*ART.shape, 1), axisa=2, axisb=1, axisc=2).sum(axis=0)
return np.cross(tmp, ART.reshape(-1, 1, 3), axis=2).sum(axis=0)
def calc_J(A, A_prime, cov_a, cov_a_prime, R):
W = calc_W(cov_a, cov_a_prime, R)
E = A_prime - A @ R.T
return (E * (E @ W.T)).sum()
def lie_optimize(A, A_prime, cov_a, cov_a_prime):
# step 1
R = init_R = util.estimate_R_using_SVD(A, A_prime)
J = init_J = calc_J(A, A_prime, cov_a, cov_a_prime, R)
c = 0.0001
while True:
W = calc_W(cov_a, cov_a_prime, R)
# step 2
g = calc_g(A, A_prime, R, W, cov_a)
H = calc_H(A, R, W)
while True:
# step 3
omega = linalg.solve(H + c * np.eye(3), -g)
# step 4
new_R = util.exponential_map(omega) @ R
# step 5
new_J = calc_J(A, A_prime, cov_a, cov_a_prime, new_R)
if new_J <= J:
break
c *= 10
# step 6
if linalg.norm(omega) < 1e-10:
return new_R, new_J, init_R, init_J
R = new_R
J = new_J
c /= 10
R4, J, init_R, init_J = lie_optimize(A, A_prime, cov_a, cov_a_prime)
print('initial error:', util.eval_R_error(R1, ideal_R))
print('final error:', util.eval_R_error(R4, ideal_R))
print('J:', init_J, '->', J)
view(point_sets=[A @ R4.T, A_prime])
# File: notebooks/anisotropic_error.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 [3.7]
# language: python
# name: python3
# ---
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
# <h2>Project 1: $k$-Nearest Neighbors</h2>
# <p><cite><center>So many points,<br>
# some near some far,<br>
# - who are my true neighbors?</center></cite></p>
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
# <h3>Introduction</h3>
#
# <p>In this project, you will build a $k$-nearest neighbor classifier.</p>
#
# <strong>How to submit:</strong> You can submit your code using the blue <strong>Submit</strong> button above. This button will send any code surrounded by <strong>#<GRADED></strong><strong>#</GRADED></strong> tags below to the autograder, which will then run several tests over your code. By clicking on the <strong>Details</strong> dropdown next to the Submit button, you will be able to view your submission report once the autograder has completed running. This submission report contains a summary of the tests you have failed or passed, as well as a log of any errors generated by your code when we ran it.
#
# Note that this may take a while depending on how long your code takes to run! Once your code is submitted you may navigate away from the page as you desire -- the most recent submission report will always be available from the Details menu.
#
# <p><strong>Evaluation:</strong> Your code will be autograded for technical
# correctness and--on some assignments--speed. Please <em>do not</em> change the names of any provided functions or classes within the code, or you will wreak havoc on the autograder. Furthermore, <em>any code not surrounded by <strong>#<GRADED></strong><strong>#</GRADED></strong> tags will not be run by the autograder</em>. However, the correctness of your implementation -- not the autograder's output -- will be the final judge of your score. If necessary, we will review and grade assignments individually to ensure that you receive due credit for your work.
#
# <p><strong>Academic Integrity:</strong> <em>This project should be completed in groups of one or two. Make sure you're in a Vocareum team if working with another student.</em> We will be checking your code against other submissions in the class for logical redundancy. If you copy someone else's code and submit it with minor changes, we will know. These cheat detectors are quite hard to fool, so please don't try. We trust you all to submit your team's own work only; <em>please</em> don't let us down. If you do, we will pursue the strongest consequences available to us.
#
# <p><strong>Getting Help:</strong> You are not alone! If you find yourself stuck on something, contact the course staff for help. Office hours, section, and the <a href="https://edstem.org/us/courses/19541/discussion/">Ed Discussion</a> are there for your support; please use them. We want these projects to be rewarding and instructional, not frustrating and demoralizing. But, we don't know when or how to help unless you ask.
#
#
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
# **Libraries**: Before we get started we need to install a few libraries. You can do this by executing the following code.
# + nbgrader={"grade": false, "locked": false, "solution": false}
#<GRADED>
import numpy as np
# functions that may be helpful
from scipy.stats import mode
import sys
#</GRADED>
# %matplotlib notebook
#<GRADED>
import matplotlib
import matplotlib.pyplot as plt
from scipy.io import loadmat
import time
from helper_functions import loaddata, visualize_knn_2D, visualize_knn_images, plotfaces, visualize_knn_boundary
#</GRADED>
print('You\'re running python %s' % sys.version.split(' ')[0])
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
# <h3> k-Nearest Neighbors implementation in Python </h3>
#
# <p>Our goal is to build a $k$NN classifier for face recognition.</p>
#
# **Data:** We first obtain some data for testing your code. The data resides in the file <code>faces.mat</code>, which holds the dataset for the experiments below.
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
# Here, <b>xTr</b> are the training vectors with labels <b>yTr</b> and <b>xTe</b> are the testing vectors with labels <b>yTe</b>.
# As a reminder, to predict the label or class of an image in <b>xTe</b>, we will look for the <i>k</i>-nearest neighbors in <b>xTr</b> and predict a label based on their labels in <b>yTr</b>. For evaluation, we will compare these labels against the true labels provided in <b>yTe</b>.</p>
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
# <h4> Visualizing data</h4>
#
# Let us take a look at our data. The following script will take the first 9 training images from the face data set and visualize them.
# +
xTr,yTr,xTe,yTe=loaddata("faces.mat")
plt.figure()
plotfaces(xTr[:9, :])
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
#
# <h4> Implementation </h4>
# <p> The following questions will ask you to finish these functions in a pre-defined order. <br></p>
#
# <p>(a) Implement the function <b><code>l2distance</code></b>. You may use your own code(s) from the previous project.</p>
#
# -
#<GRADED>
def l2distance(X,Z=None):
"""
function D=l2distance(X,Z)
Computes the Euclidean distance matrix.
Syntax:
D=l2distance(X,Z)
Input:
X: nxd data matrix with n vectors (rows) of dimensionality d
Z: mxd data matrix with m vectors (rows) of dimensionality d
Output:
Matrix D of size nxm
D(i,j) is the Euclidean distance of X(i,:) and Z(j,:)
call with only one input:
l2distance(X)=l2distance(X,X)
"""
if Z is None:
Z=X;
n,d1=X.shape
m,d2=Z.shape
assert (d1==d2), "Dimensions of input vectors must match!"
    # Compute squared distances via the expansion ||x-z||^2 = ||x||^2 + ||z||^2 - 2 x.z
    X_dots = (X*X).sum(axis=1).reshape((n,1))*np.ones(shape=(1,m))
    Z_dots = (Z*Z).sum(axis=1)*np.ones(shape=(n,1))
    XZ_dots = -2*X.dot(Z.T)
    # clip tiny negative values caused by floating-point error before the square root
    D = np.sqrt(np.maximum(X_dots + Z_dots + XZ_dots, 0))
    return D
# ... until here
#</GRADED>
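A quick way to sanity-check any vectorized distance implementation is to compare it against a naive double loop on small random data. A standalone sketch (independent of the graded function above):

```python
import numpy as np

def l2_vectorized(X, Z):
    # ||x - z||^2 = ||x||^2 + ||z||^2 - 2 x.z, clipped to avoid tiny negatives
    sq = (X**2).sum(1)[:, None] + (Z**2).sum(1)[None, :] - 2 * X @ Z.T
    return np.sqrt(np.maximum(sq, 0))

def l2_naive(X, Z):
    # reference implementation: one norm per pair of rows
    return np.array([[np.linalg.norm(x - z) for z in Z] for x in X])

rng = np.random.default_rng(0)
X, Z = rng.standard_normal((4, 3)), rng.standard_normal((5, 3))
print(np.allclose(l2_vectorized(X, Z), l2_naive(X, Z)))  # True
```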
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
#
# <p>(b) Implement the function <b><code>findknn</code></b>, which should find the $k$ nearest neighbors of a set of vectors within a given training data set. Break ties arbitrarily. The call of
# <pre>
# [I,D]=findknn(xTr,xTe,k);
# </pre>
# should result in two matrices $I$ and $D$, both of dimensions $k\times n$, where $n$ is the number of input vectors in <code>xTe</code>. The matrix $I(i,j)$ is the index of the $i^{th}$ nearest neighbor of the vector $xTe(j,:)$.
# So, for example, if we set <code>i=I(1,3)</code>, then <code>xTr(i,:)</code> is the first nearest neighbor of vector <code>xTe(3,:)</code>. The second matrix $D$ returns the corresponding distances. So $D(i,j)$ is the distance of $xTe(j,:)$ to its $i^{th}$ nearest neighbor.
# </p>
# +
#<GRADED>
def findknn(xTr,xTe,k):
"""
function [indices,dists]=findknn(xTr,xTe,k);
Finds the k nearest neighbors of xTe in xTr.
Input:
xTr = nxd input matrix with n row-vectors of dimensionality d
xTe = mxd input matrix with m row-vectors of dimensionality d
k = number of nearest neighbors to be found
Output:
indices = kxm matrix, where indices(i,j) is the i^th nearest neighbor of xTe(j,:)
dists = Euclidean distances to the respective nearest neighbors
"""
# Enter your code here
n,d1=xTr.shape
m,d2=xTe.shape
assert (d1==d2), "Dimensions of input vectors must match!"
    D = l2distance(xTr, xTe)
    indices = np.argsort(D, axis=0)[:k, :]
    # reuse the sort order instead of sorting D a second time
    dists = np.take_along_axis(D, indices, axis=0)
return indices, dists
# until here
#</GRADED>
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
# <p> The following demo samples random points in 2D. If your findknn function is correctly implemented, you should be able to click anywhere on the plot to add a test point. The function should then draw direct connections from your test point to the k nearest neighbors. Verify manually if your code is correct.
# </p>
# -
visualize_knn_2D(findknn)
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
# We can visualize the k=3 nearest training neighbors of some of the test points (Click on the image to cycle through different test points).
# -
visualize_knn_images(findknn, imageType='faces')
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
# <p>(c) The function <b><code>analyze</code></b> should compute various metrics to evaluate a classifier. The call of
# <pre>
# result=analyze(kind,truth,preds);
# </pre>
# should output the <b>accuracy</b> or <b>absolute loss</b> in variable <code>result</code>. The type of output required can be specified in the input argument <code>kind</code> as <code>"abs"</code> or <code>"acc"</code>. The input variables <code>truth</code> and <code>pred</code> should contain vectors of true and predicted labels respectively.
# For example, the call
# <pre>
# >> analyze('acc',[1 2 1 2],[1 2 1 1])
# </pre>
# should return an accuracy of 0.75. Here, the true labels are 1,2,1,2 and the predicted labels are 1,2,1,1. So the first three examples are classified correctly, and the last one is wrong --- 75% accuracy.
# <pre>
# >> analyze('abs',[1 2 1 2],[1 2 1 1])
# </pre>
# should return sum (abs ([1 2 1 2] - [1 2 1 1]))/4 = 0.25. Here, the true labels are 1,2,1,2 and the predicted labels are 1,2,1,1. So the first three examples are classified correctly, and the last one is wrong --- 25% loss.
# </p>
#
#
# -
#<GRADED>
def analyze(kind,truth,preds):
"""
function output=analyze(kind,truth,preds)
Analyses the accuracy of a prediction
Input:
kind=
'acc' accuracy, or
'abs' absolute loss
(other values of 'kind' will follow later)
"""
truth = truth.flatten()
preds = preds.flatten()
d1=truth.shape[0]
d2=preds.shape[0]
assert (d1==d2), "Dimensions of input vectors must match!"
    if kind == 'abs':
        # absolute difference between truth and predictions
        result = np.abs(truth - preds)
    elif kind == 'acc':
        # exact matches between truth and predictions
        result = (truth == preds)
    output = np.sum(result) / d1
return output
#</GRADED>
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
#
# <p>(d) Implement the function <b><code>knnclassifier</code></b>, which should perform $k$ nearest neighbor classification on a given test data set. Break ties arbitrarily. The call <pre>preds=knnclassifier(xTr,yTr,xTe,k)</pre>
# should output the predictions for the data in <code>xTe</code> i.e. <code>preds[i]</code> will contain the prediction for <code>xTe[i,:]</code>.</p>
# +
#<GRADED>
def knnclassifier(xTr,yTr,xTe,k):
"""
function preds=knnclassifier(xTr,yTr,xTe,k);
k-nn classifier
Input:
xTr = nxd input matrix with n row-vectors of dimensionality d
xTe = mxd input matrix with m row-vectors of dimensionality d
k = number of nearest neighbors to be found
Output:
preds = predicted labels, ie preds(i) is the predicted label of xTe(i,:)
"""
# fix array shapes
yTr = yTr.flatten()
    # indices of the k nearest training points for each test point (k x m)
    indices, _ = findknn(xTr, xTe, k)
    neighbor_labels = yTr[indices]
    # majority vote down each column; scipy's mode breaks ties arbitrarily
    # (rounding the mean of labels would be wrong for nominal class labels)
    preds = mode(neighbor_labels, axis=0)[0].flatten()
    return preds
#</GRADED>
#xTr,yTr,xTe,yTe=loaddata("faces.mat")
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
# <p>You can compute the actual classification error on the test set by calling
# <pre>
# >> analyze("acc",yTe,knnclassifier(xTr,yTr,xTe,3))
# </pre></p>
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
# <p>(e) This script runs the $k$-nearest neighbor classifier over the faces data set. The faces data set has $40$ classes. What classification accuracy would you expect from a random classifier?</p>
# -
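The question above can also be answered empirically: with 40 equally likely classes, a uniform random guesser should be right about $1/40 = 2.5\%$ of the time. A quick simulation, independent of the dataset:

```python
import numpy as np

# With 40 equally likely classes, random guessing is right 1/40 of the time.
num_classes = 40
rng = np.random.default_rng(0)
truth = rng.integers(num_classes, size=100_000)
guess = rng.integers(num_classes, size=100_000)
accuracy = (truth == guess).mean()
print(f"expected ~{1 / num_classes:.3f}, simulated {accuracy:.3f}")
```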
print("Face Recognition: (1-nn)")
xTr,yTr,xTe,yTe=loaddata("faces.mat") # load the data
t0 = time.time()
preds = knnclassifier(xTr,yTr,xTe,1)
result=analyze("acc",yTe,preds)
t1 = time.time()
print("You obtained %.2f%% classification accuracy in %.4f seconds\n" % (result*100.0,t1-t0))
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
# <p>(f) (optional) Sometimes a $k$-NN classifier can result in a tie, when the majority vote is not clearly defined. Can you improve your accuracy by falling back onto $k$-NN with lower $k$ in such a case?</p>
#
# -
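One possible fallback scheme for the optional question above (a sketch, assuming `neighbor_labels` is ordered nearest-first): take the majority vote over the $k$ nearest labels and, whenever the top two counts tie, drop the farthest neighbor and vote again.

```python
from collections import Counter

def vote_with_fallback(neighbor_labels):
    """Majority vote over the k nearest labels; on a tie, retry with k-1
    (dropping the farthest neighbor) until the vote is unambiguous."""
    for k in range(len(neighbor_labels), 0, -1):
        counts = Counter(neighbor_labels[:k]).most_common()
        if len(counts) == 1 or counts[0][1] > counts[1][1]:
            return counts[0][0]
    return neighbor_labels[0]

print(vote_with_fallback([1, 2, 2]))     # clear majority -> 2
print(vote_with_fallback([1, 2, 1, 2]))  # tie at k=4, fall back to k=3 -> 1
```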
# + [markdown] nbgrader={"grade": false, "locked": true, "solution": false}
# <h3> k-NN Boundary Visualization </h3>
# <p> To help give you a visual understanding of how the k-NN boundary is affected by $k$ and the specific dataset, feel free to play around with the visualization below. </p>
#
# **Instructions:**
# Run the cell below.
# Click anywhere in the graph to add a negative class point.
# Hold down 'p' key and click anywhere in the graph to add a positive class point.
# To increase $k$, hold down 'h' key and click anywhere in the graph.
#
# -
# %matplotlib notebook
visualize_knn_boundary(knnclassifier)
# File: python_HW1_kNN.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Making a Word Network
# Build a word co-occurrence network to study how words are connected in the hoax corpus
# +
# %matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
H = nx.Graph()
F = nx.Graph()
# -
# ## Add Edges to network
# ### Hoax
# +
with open("all_hoax_stemmed.txt", "r") as f:
    berita = f.read()
words = berita.split()
i = 0
for idx in range(1, len(words)):
#print(words[idx-1], words[idx])
H.add_edge(words[idx-1], words[idx])
i += 1
print(i)
# -
len(H.edges())
# ### Facts
# +
with open("all_facts_stemmed.txt", "r") as f:
    beritaf = f.read()
wordsf = beritaf.split()
i = 0
for idx in range(1, len(wordsf)):
#print(words[idx-1], words[idx])
F.add_edge(wordsf[idx-1], wordsf[idx])
i += 1
print(i)
# -
len(F.edges())
# ## Hoax Analysis
# ## Draw Graph
# +
labels = {}
for idx in range(len(words)):
labels[idx] = words[idx]
pos = nx.spring_layout(H)
nx.draw(H,pos,node_color='#A0CBE2',font_size = 5, scale=3, edge_color='#BB0000', width=2, edge_cmap=plt.cm.Blues, with_labels=True)
plt.savefig("hoax_graph.png", dpi=1000)
#nx.draw_networkx_nodes(G, pos)
#nx.draw_networkx_edges(G, pos)
#nx.draw_networkx_labels(G, pos)
#nx.draw(G, with_labels=True, node_size=5, font_size=5, node_color="skyblue", node_shape="s", alpha=0.5, linewidths=10)
#plt.show()
# +
import collections
degree_sequence = sorted([d for n, d in H.degree()], reverse=True) # degree sequence
# print "Degree sequence", degree_sequence
degreeCount = collections.Counter(degree_sequence)
deg, cnt = zip(*degreeCount.items())
fig, ax = plt.subplots()
plt.bar(deg, cnt, width=2, color='b')
plt.title("Degree Histogram")
plt.ylabel("Count")
plt.xlabel("Degree")
ax.set_xticks([d + 0.4 for d in deg])
ax.set_xticklabels(deg)
# draw graph in inset
plt.axes([0.4, 0.4, 0.5, 0.5])
# connected_component_subgraphs was removed in networkx 2.4; build the largest component directly
Gcc = H.subgraph(max(nx.connected_components(H), key=len))
pos = nx.spring_layout(H)
plt.axis('off')
#nx.draw_networkx_nodes(G, pos, node_size=20)
#nx.draw_networkx_edges(G, pos, alpha=0.4)
plt.show()
# -
H.edges()
len(H.nodes())
# +
import networkx as nx
g1 = nx.Graph()
g1.add_edges_from([('a', 'b'), ('a','c'), ('b', 'a')])
g1['a']
# -
sorted(H.degree(), key = lambda x: int(x[1]), reverse = True)
# ## Degree Centrality
# +
from operator import itemgetter
#degree centrality
deg_cen = nx.degree_centrality(H)
#nx.set_node_attributes(G, 'degree', deg_cen)
sorted_degcen = sorted(deg_cen.items(), key=itemgetter(1), reverse=True)
print("Top 20 nodes by degree centrality:")
for b in sorted_degcen[:20]:
print(b)
# -
# ## Betweenness Centrality
# +
#betweenness centrality
bet_cen = nx.betweenness_centrality(H)
#nx.set_node_attributes(G, 'degree', deg_cen)
sorted_betcen = sorted(bet_cen.items(), key=itemgetter(1), reverse=True)
print("Top 20 nodes by betweenness centrality:")
for b in sorted_betcen[:20]:
print(b)
# -
# ## Closeness Centrality
# +
#closeness centrality
clo_cen = nx.closeness_centrality(H)
#nx.set_node_attributes(G, 'degree', deg_cen)
sorted_clocen = sorted(clo_cen.items(), key=itemgetter(1), reverse=True)
print("Top 20 nodes by closeness centrality:")
for b in sorted_clocen[:20]:
print(b)
# -
nx.write_gexf(H, "hoax_test.gexf")
H['pesan']
# ## Facts Analysis
# ## Draw Graph - Facts
# +
labels = {}
for idx in range(len(wordsf)):
labels[idx] = wordsf[idx]
pos = nx.spring_layout(F)
nx.draw(F,pos,node_color='#A0CBE2',font_size = 5, scale=3, edge_color='#BB0000', width=2, edge_cmap=plt.cm.Blues, with_labels=True)
plt.savefig("fact_graph.png", dpi=1000)
#nx.draw_networkx_nodes(G, pos)
#nx.draw_networkx_edges(G, pos)
#nx.draw_networkx_labels(G, pos)
#nx.draw(G, with_labels=True, node_size=5, font_size=5, node_color="skyblue", node_shape="s", alpha=0.5, linewidths=10)
#plt.show()
# +
import collections
degree_sequence = sorted([d for n, d in F.degree()], reverse=True) # degree sequence
# print "Degree sequence", degree_sequence
degreeCount = collections.Counter(degree_sequence)
deg, cnt = zip(*degreeCount.items())
fig, ax = plt.subplots()
plt.bar(deg, cnt, width=2, color='b')
plt.title("Degree Histogram")
plt.ylabel("Count")
plt.xlabel("Degree")
ax.set_xticks([d + 0.4 for d in deg])
ax.set_xticklabels(deg)
# draw graph in inset
plt.axes([0.4, 0.4, 0.5, 0.5])
# connected_component_subgraphs was removed in networkx 2.4; build the largest component directly
Gcc = F.subgraph(max(nx.connected_components(F), key=len))
pos = nx.spring_layout(F)
plt.axis('off')
#nx.draw_networkx_nodes(G, pos, node_size=20)
#nx.draw_networkx_edges(G, pos, alpha=0.4)
plt.show()
# -
F.edges()
len(F.nodes())
sorted(F.degree(), key = lambda x: int(x[1]), reverse = True)
# ## Degree Centrality - facts
# +
from operator import itemgetter
#degree centrality
deg_cen = nx.degree_centrality(F)
#nx.set_node_attributes(G, 'degree', deg_cen)
sorted_degcen = sorted(deg_cen.items(), key=itemgetter(1), reverse=True)
print("Top 20 nodes by degree centrality:")
for b in sorted_degcen[:20]:
print(b)
# -
# ## Betweenness Centrality - facts
# +
#betweenness centrality
bet_cen = nx.betweenness_centrality(F)
#nx.set_node_attributes(G, 'degree', deg_cen)
sorted_betcen = sorted(bet_cen.items(), key=itemgetter(1), reverse=True)
print("Top 20 nodes by betweenness centrality:")
for b in sorted_betcen[:20]:
print(b)
# -
# ## Closeness Centrality - facts
# +
#closeness centrality
clo_cen = nx.closeness_centrality(F)
#nx.set_node_attributes(G, 'degree', deg_cen)
sorted_clocen = sorted(clo_cen.items(), key=itemgetter(1), reverse=True)
print("Top 20 nodes by closeness centrality:")
for b in sorted_clocen[:20]:
print(b)
# -
nx.write_gexf(F, "fact_test.gexf")
# File: corpus/WordNetwork.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# ### Exercise 1: Load and examine a superstore sales data from an Excel file
df = pd.read_excel("Sample - Superstore.xls")
df.head(10)
df.drop('Row ID',axis=1,inplace=True)
df.shape
# ### Exercise 2: Subsetting the DataFrame
df_subset = df.loc[[i for i in range(5,10)],['Customer ID','Customer Name','City','Postal Code','Sales']]
df_subset
# ### Exercise 3: An example use case – determining statistics on sales and profit for records 100-199
df_subset = df.loc[[i for i in range(100,200)],['Sales','Profit']]
df_subset.describe()
df_subset.plot.box()
plt.title("Boxplot of sales and profit",fontsize=15)
plt.ylim(0,500)
plt.grid(True)
plt.show()
# ### Exercise 4: A useful function – unique
df['State'].unique()
df['State'].nunique()
df['Country'].unique()
df.drop('Country',axis=1,inplace=True)
# ### Exercise 5: Conditional Selection and Boolean Filtering
df_subset = df.loc[[i for i in range (10)],['Ship Mode','State','Sales']]
df_subset
df_subset>100
df_subset[df_subset>100]
df_subset[df_subset['Sales']>100]
df_subset[(df_subset['State']!='California') & (df_subset['Sales']>100)]
# ### Exercise 6: Setting and re-setting index
# +
matrix_data = np.matrix('22,66,140;42,70,148;30,62,125;35,68,160;25,62,152')
row_labels = ['A','B','C','D','E']
column_headings = ['Age', 'Height', 'Weight']
df1 = pd.DataFrame(data=matrix_data, index=row_labels, columns=column_headings)
print("\nThe DataFrame\n",'-'*25, sep='')
print(df1)
print("\nAfter resetting index\n",'-'*35, sep='')
print(df1.reset_index())
print("\nAfter resetting index with 'drop' option TRUE\n",'-'*45, sep='')
print(df1.reset_index(drop=True))
print("\nAdding a new column 'Profession'\n",'-'*45, sep='')
df1['Profession'] = "Student Teacher Engineer Doctor Nurse".split()
print(df1)
print("\nSetting 'Profession' column as index\n",'-'*45, sep='')
print (df1.set_index('Profession'))
# -
# ### Exercise 7: GroupBy method
df_subset = df.loc[[i for i in range (10)],['Ship Mode','State','Sales']]
df_subset
byState = df_subset.groupby('State')
byState
print("\nGrouping by 'State' column and listing mean sales\n",'-'*50, sep='')
print(byState.mean())
print("\nGrouping by 'State' column and listing total sum of sales\n",'-'*50, sep='')
print(byState.sum())
print(pd.DataFrame(df_subset.groupby('State').describe().loc['California']).transpose())
df_subset.groupby('Ship Mode').describe().loc[['Second Class','Standard Class']]
pd.DataFrame(byState.describe().loc['California'])
byStateCity=df.groupby(['State','City'])
byStateCity.describe()['Sales']
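The same multi-level grouping can be combined with `agg` to compute several statistics at once. A minimal sketch on a small synthetic table (the column names mirror the Superstore data, but the values are made up):

```python
import pandas as pd

# A tiny synthetic sales table to illustrate multi-column groupby with agg
df = pd.DataFrame({
    'State': ['CA', 'CA', 'TX', 'TX', 'TX'],
    'City':  ['LA', 'LA', 'Austin', 'Austin', 'Dallas'],
    'Sales': [100.0, 50.0, 80.0, 20.0, 60.0],
})
# one row per (State, City) pair, with count, total, and mean of Sales
summary = df.groupby(['State', 'City'])['Sales'].agg(['count', 'sum', 'mean'])
print(summary)
```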
# File: Lesson04/Exercise48-51/Subsetting_Filtering_Grouping.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="nvAT8wcRNVEk" colab_type="text"
# # Getting Started - RocketPy in Colab
# + [markdown] id="8gCWjpEyPNp1" colab_type="text"
# We start by setting up our environment. To run this notebook, we will need:
#
#
# * RocketPy
# * netCDF4 (to get weather forecasts)
# * Data files (we will clone RocketPy's repository for these)
#
# Therefore, let's run the following lines of code:
# + id="zwDDabtpNc6Z" colab_type="code" colab={}
# !pip install rocketpyalpha netCDF4
# !git clone https://github.com/giovaniceotto/RocketPy.git
# + id="pY5XGge5OoGJ" colab_type="code" colab={}
import os
os.chdir('RocketPy/docs/notebooks')
# + [markdown] id="55zcnvqdNVEo" colab_type="text"
# Now we can start!
#
# Here we go through a simplified rocket trajectory simulation to get you started. Let's start by importing the rocketpy module.
# + id="XGK9M8ecNVEp" colab_type="code" colab={}
from rocketpy import Environment, SolidMotor, Rocket, Flight
# + [markdown] id="ImgkhEkZNVE8" colab_type="text"
# If you are using Jupyter Notebooks, it is recommended to run the following lines to make the matplotlib plots shown later interactive and higher quality.
# + id="uRa566HoNVE9" colab_type="code" colab={}
# %config InlineBackend.figure_formats = ['svg']
# %matplotlib inline
# + [markdown] id="sSeqramENVFB" colab_type="text"
# ## Setting Up a Simulation
# + [markdown] id="Vm4ZHAnnNVFC" colab_type="text"
# ### Creating an Environment for Spaceport America
# + id="d7mooAZONVFD" colab_type="code" colab={}
Env = Environment(
railLength=5.2,
latitude=32.990254,
longitude=-106.974998,
elevation=1400
)
# + [markdown] id="Fz8Ha6usNVFH" colab_type="text"
# To get weather data from the GFS forecast, available online, we run the following lines.
#
# First, we set tomorrow's date.
# + id="5kl-Je8dNVFI" colab_type="code" colab={}
import datetime
tomorrow = datetime.date.today() + datetime.timedelta(days=1)
Env.setDate((tomorrow.year, tomorrow.month, tomorrow.day, 12)) # Hour given in UTC time
# + [markdown] id="or5MLF9gNVFM" colab_type="text"
# Then, we tell Env to use a GFS forecast to get the atmospheric conditions for flight.
#
# Don't mind the warning, it just means that not all variables, such as wind speed or atmospheric temperature, are available at all altitudes given by the forecast.
# + id="g73fa7DWNVFN" colab_type="code" colab={}
Env.setAtmosphericModel(type='Forecast', file='GFS')
# + [markdown] id="wSnZQuRYNVFS" colab_type="text"
# We can see what the weather will look like by calling the info method!
# + id="H_AMjVTjNVFT" colab_type="code" colab={}
Env.info()
# + [markdown] id="Aksbs-pMNVFW" colab_type="text"
# ### Creating a Motor
#
# A solid rocket motor is used in this case. To create a motor, the SolidMotor class is used and the required arguments are given.
#
# The SolidMotor class requires the user to have a thrust curve ready. This can come either from a .eng file for a commercial motor, such as below, or a .csv file from a static test measurement.
#
# Besides the thrust curve, other parameters such as grain properties and nozzle dimensions must also be given.
# + id="Vx1dZObwNVFX" colab_type="code" colab={}
Pro75M1670 = SolidMotor(
thrustSource="../../data/motors/Cesaroni_M1670.eng",
burnOut=3.9,
grainNumber=5,
grainSeparation=5/1000,
grainDensity=1815,
grainOuterRadius=33/1000,
grainInitialInnerRadius=15/1000,
grainInitialHeight=120/1000,
nozzleRadius=33/1000,
throatRadius=11/1000,
interpolationMethod='linear'
)
# + [markdown] id="E1LJDIa0NVFa" colab_type="text"
# To see what our thrust curve looks like, along with other important properties, we invoke the info method yet again. You may try the allInfo method if you want more information all at once!
# + id="vjyPT7GVNVFb" colab_type="code" colab={}
Pro75M1670.info()
# + [markdown] id="kN7y1EwLNVFf" colab_type="text"
# ### Creating a Rocket
# + [markdown] id="_Ee-0vb5NVFg" colab_type="text"
# A rocket is composed of several components. Namely, we must have a motor (good thing we have the Pro75M1670 ready), a couple of aerodynamic surfaces (nose cone, fins and tail) and parachutes (if we are not launching a missile).
#
# Let's start by initializing our rocket, named Calisto, supplying it with the Pro75M1670 engine, entering its inertia properties, some dimensions and also its drag curves.
# + id="D1fyK8u_NVFh" colab_type="code" colab={}
Calisto = Rocket(
    motor=Pro75M1670,
    radius=127/2000,
    mass=19.197-2.956,
    inertiaI=6.60,
    inertiaZ=0.0351,
    distanceRocketNozzle=-1.255,
    distanceRocketPropellant=-0.85704,
    powerOffDrag='../../data/calisto/powerOffDragCurve.csv',
    powerOnDrag='../../data/calisto/powerOnDragCurve.csv'
)
Calisto.setRailButtons([0.2, -0.5])
# + [markdown] id="CfOfqmroNVFk" colab_type="text"
# #### Adding Aerodynamic Surfaces
# + [markdown] id="LuUdEmWhNVFl" colab_type="text"
# Now we define the aerodynamic surfaces. They are really straightforward.
# + id="AQbv244VNVFm" colab_type="code" colab={}
NoseCone = Calisto.addNose(length=0.55829, kind="vonKarman", distanceToCM=0.71971)
FinSet = Calisto.addFins(4, span=0.100, rootChord=0.120, tipChord=0.040, distanceToCM=-1.04956)
Tail = Calisto.addTail(topRadius=0.0635, bottomRadius=0.0435, length=0.060, distanceToCM=-1.194656)
# + [markdown] id="D8oKc7s2NVFp" colab_type="text"
# #### Adding Parachutes
# + [markdown] id="IxAX61ZENVFq" colab_type="text"
# Finally, we have parachutes! Calisto will have two parachutes, Drogue and Main.
#
# Both parachutes are activated by some special algorithm, which is usually really complex and a trade secret. Most algorithms are based on pressure sampling only, while some also use acceleration info.
#
# RocketPy allows you to define a trigger function which will decide when to activate the ejection event for each parachute. This trigger function is supplied with pressure measurements at a predefined sampling rate. This pressure signal is usually noisy, so artificial noise parameters can be given. Call help(Rocket.addParachute) for more details. Furthermore, the trigger function also receives the complete state vector of the rocket, allowing us to use velocity, acceleration or even attitude to decide when the parachute event should be triggered.
#
# Here, we define our trigger functions rather simply using Python. However, you can call the exact code which will fly inside your rocket as well.
# + id="f0PmLcF8NVFr" colab_type="code" colab={}
def drogueTrigger(p, y):
    # p = pressure
    # y = [x, y, z, vx, vy, vz, e0, e1, e2, e3, w1, w2, w3]
    # activate drogue when vz < 0 m/s.
    return True if y[5] < 0 else False
def mainTrigger(p, y):
    # p = pressure
    # y = [x, y, z, vx, vy, vz, e0, e1, e2, e3, w1, w2, w3]
    # activate main when vz < 0 m/s and z < 800 m.
    return True if y[5] < 0 and y[2] < 800 else False
Main = Calisto.addParachute('Main',
                            CdS=10.0,
                            trigger=mainTrigger,
                            samplingRate=105,
                            lag=1.5,
                            noise=(0, 8.3, 0.5))
Drogue = Calisto.addParachute('Drogue',
                              CdS=1.0,
                              trigger=drogueTrigger,
                              samplingRate=105,
                              lag=1.5,
                              noise=(0, 8.3, 0.5))
# + [markdown] id="xIoXe33FNVFv" colab_type="text"
# Just be careful if you run this last cell multiple times! If you do so, your rocket will end up with lots of parachutes which activate together, which may cause problems during the flight simulation. We advise you to re-run all cells which define our rocket before running this, preventing unwanted old parachutes. Alternatively, you can run the following lines to remove parachutes.
#
# ```python
# Calisto.parachutes.remove(Drogue)
# Calisto.parachutes.remove(Main)
# ```
# + [markdown] id="4PR0fgSbNVFw" colab_type="text"
# ## Simulating a Flight
#
# Simulating a flight trajectory is as simple as initializing a Flight class object, giving the rocket and environment set up above as inputs. The launch rail inclination and heading are also given here.
# + id="v__Ud2p2NVFx" colab_type="code" colab={}
TestFlight = Flight(rocket=Calisto, environment=Env, inclination=85, heading=0)
# + [markdown] id="8SjrGQqzNVF0" colab_type="text"
# ## Analysing the Results
#
# RocketPy gives you many plots, that's for sure! They are divided into sections to keep them organized. Alternatively, see the Flight class documentation to see how to get plots for specific variables only, instead of all of them at once.
# + id="Hh4A_RQzNVF0" colab_type="code" colab={}
TestFlight.allInfo()
# + [markdown] id="Aun9D2OINVF4" colab_type="text"
# ## Using Simulation for Design
#
# Here, we go through a couple of examples which make use of RocketPy in cool ways to help us design our rocket.
# + [markdown] id="gcT43lt2NVF5" colab_type="text"
# ### Dynamic Stability Analysis
# + [markdown] id="tFd1yJujNVF6" colab_type="text"
# Ever wondered how static stability translates into dynamic stability? Different static margins result in different dynamic behaviour, which also depends on the rocket's rotational inertia.
#
# Let's make use of RocketPy's helper class called Function to explore how the dynamic stability of Calisto varies if we change the fins span by a certain factor.
# + id="ULLEtVz7NVF7" colab_type="code" colab={}
# Helper class
from rocketpy import Function
# Prepare Rocket Class
Calisto = Rocket(motor=Pro75M1670,
                 radius=127/2000,
                 mass=19.197-2.956,
                 inertiaI=6.60,
                 inertiaZ=0.0351,
                 distanceRocketNozzle=-1.255,
                 distanceRocketPropellant=-0.85704,
                 powerOffDrag='../../data/calisto/powerOffDragCurve.csv',
                 powerOnDrag='../../data/calisto/powerOnDragCurve.csv')
Calisto.setRailButtons([0.2, -0.5])
Nose = Calisto.addNose(length=0.55829, kind="vonKarman", distanceToCM=0.71971)
FinSet = Calisto.addFins(4, span=0.1, rootChord=0.120, tipChord=0.040, distanceToCM=-1.04956)
Tail = Calisto.addTail(topRadius=0.0635, bottomRadius=0.0435, length=0.060, distanceToCM=-1.194656)
# Prepare Environment Class
Env = Environment(5.2, 9.8)
Env.setAtmosphericModel(type='CustomAtmosphere', wind_v=-5)
# Simulate Different Static Margins by Varying Fin Position
simulation_results = []
for factor in [0.5, 0.7, 0.9, 1.1, 1.3]:
    # Modify rocket fin set by removing the previous one and adding a new one
    Calisto.aerodynamicSurfaces.remove(FinSet)
    FinSet = Calisto.addFins(4, span=0.1, rootChord=0.120, tipChord=0.040, distanceToCM=-1.04956*factor)
    # Simulate
    print('Simulating Rocket with Static Margin of {:1.3f}->{:1.3f} c'.format(Calisto.staticMargin(0), Calisto.staticMargin(Calisto.motor.burnOutTime)))
    TestFlight = Flight(rocket=Calisto, environment=Env, inclination=90, heading=0, maxTimeStep=0.01, maxTime=5, terminateOnApogee=True, verbose=True)
    # Post process flight data
    TestFlight.postProcess()
    # Store Results
    staticMarginAtIgnition = Calisto.staticMargin(0)
    staticMarginAtOutOfRail = Calisto.staticMargin(TestFlight.outOfRailTime)
    staticMarginAtSteadyState = Calisto.staticMargin(TestFlight.tFinal)
    simulation_results += [(TestFlight.attitudeAngle, '{:1.2f} c | {:1.2f} c | {:1.2f} c'.format(staticMarginAtIgnition, staticMarginAtOutOfRail, staticMarginAtSteadyState))]
Function.comparePlots(simulation_results, lower=0, upper=1.5, xlabel='Time (s)', ylabel='Attitude Angle (deg)')
# + [markdown] id="WHIeM9f3NVF_" colab_type="text"
# ### Characteristic Frequency Calculation
#
# Here we analyse the characteristic frequency of oscillation of our rocket just as it leaves the launch rail. Note that when we ran TestFlight.allInfo(), one of the plots already showed us the frequency spectrum of our flight. Here, however, we have more control of what we are plotting.
# + id="OJdN2XMANVGA" colab_type="code" colab={}
import numpy as np
import matplotlib.pyplot as plt
Env = Environment(
    railLength=5.2,
    latitude=32.990254,
    longitude=-106.974998,
    elevation=1400
)
Env.setAtmosphericModel(type='CustomAtmosphere', wind_v=-5)
# Prepare Motor
Pro75M1670 = SolidMotor(
    thrustSource="../../data/motors/Cesaroni_M1670.eng",
    burnOut=3.9,
    grainNumber=5,
    grainSeparation=5/1000,
    grainDensity=1815,
    grainOuterRadius=33/1000,
    grainInitialInnerRadius=15/1000,
    grainInitialHeight=120/1000,
    nozzleRadius=33/1000,
    throatRadius=11/1000,
    interpolationMethod='linear'
)
# Prepare Rocket
Calisto = Rocket(
    motor=Pro75M1670,
    radius=127/2000,
    mass=19.197-2.956,
    inertiaI=6.60,
    inertiaZ=0.0351,
    distanceRocketNozzle=-1.255,
    distanceRocketPropellant=-0.85704,
    powerOffDrag='../../data/calisto/powerOffDragCurve.csv',
    powerOnDrag='../../data/calisto/powerOnDragCurve.csv'
)
Calisto.setRailButtons([0.2, -0.5])
Nose = Calisto.addNose(length=0.55829, kind="vonKarman", distanceToCM=0.71971)
FinSet = Calisto.addFins(4, span=0.1, rootChord=0.120, tipChord=0.040, distanceToCM=-1.04956)
Tail = Calisto.addTail(topRadius=0.0635, bottomRadius=0.0435, length=0.060, distanceToCM=-1.194656)
# Simulate first 5 seconds of Flight
TestFlight = Flight(rocket=Calisto, environment=Env, inclination=90, heading=0, maxTimeStep=0.01, maxTime=5)
TestFlight.postProcess()
# Perform a Fourier Analysis
Fs = 100.0  # sampling rate
Ts = 1.0/Fs  # sampling interval
t = np.arange(1, 400, Ts)  # time vector
y = TestFlight.attitudeAngle(t) - np.mean(TestFlight.attitudeAngle(t))
n = len(y) # length of the signal
k = np.arange(n)
T = n/Fs
frq = k/T # two sides frequency range
frq = frq[range(n//2)] # one side frequency range
Y = np.fft.fft(y)/n # fft computing and normalization
Y = Y[range(n//2)]
fig, ax = plt.subplots(2, 1)
ax[0].plot(t,y)
ax[0].set_xlabel('Time')
ax[0].set_ylabel('Signal')
ax[0].set_xlim((0, 5))
ax[1].plot(frq,abs(Y),'r') # plotting the spectrum
ax[1].set_xlabel('Freq (Hz)')
ax[1].set_ylabel('|Y(freq)|')
ax[1].set_xlim((0, 5))
plt.subplots_adjust(hspace=0.5)
plt.show()
# + [markdown] id="qsXBVgGANVGD" colab_type="text"
# ### Apogee as a Function of Mass
#
# This one is a classic! We always need to know how much our rocket's apogee will change when our payload gets heavier.
# + id="XAxTud5MNVGE" colab_type="code" colab={}
def apogee(mass):
    # Prepare Environment
    Env = Environment(
        railLength=5.2,
        latitude=32.990254,
        longitude=-106.974998,
        elevation=1400,
        date=(2018, 6, 20, 18)
    )
    Env.setAtmosphericModel(type='CustomAtmosphere', wind_v=-5)
    # Prepare Motor
    Pro75M1670 = SolidMotor(
        thrustSource="../../data/motors/Cesaroni_M1670.eng",
        burnOut=3.9,
        grainNumber=5,
        grainSeparation=5/1000,
        grainDensity=1815,
        grainOuterRadius=33/1000,
        grainInitialInnerRadius=15/1000,
        grainInitialHeight=120/1000,
        nozzleRadius=33/1000,
        throatRadius=11/1000,
        interpolationMethod='linear'
    )
    # Prepare Rocket
    Calisto = Rocket(
        motor=Pro75M1670,
        radius=127/2000,
        mass=mass,
        inertiaI=6.60,
        inertiaZ=0.0351,
        distanceRocketNozzle=-1.255,
        distanceRocketPropellant=-0.85704,
        powerOffDrag='../../data/calisto/powerOffDragCurve.csv',
        powerOnDrag='../../data/calisto/powerOnDragCurve.csv'
    )
    Calisto.setRailButtons([0.2, -0.5])
    Nose = Calisto.addNose(length=0.55829, kind="vonKarman", distanceToCM=0.71971)
    FinSet = Calisto.addFins(4, span=0.1, rootChord=0.120, tipChord=0.040, distanceToCM=-1.04956)
    Tail = Calisto.addTail(topRadius=0.0635, bottomRadius=0.0435, length=0.060, distanceToCM=-1.194656)
    # Simulate Flight until Apogee
    TestFlight = Flight(rocket=Calisto, environment=Env, inclination=85, heading=0, terminateOnApogee=True)
    return TestFlight.apogee
apogeebymass = Function(apogee, inputs="Mass (kg)", outputs="Estimated Apogee (m)")
apogeebymass.plot(8,20,20)
# + [markdown] id="yBMOVQnUNVGG" colab_type="text"
# ### Out of Rail Speed as a Function of Mass
#
# To finish off, let's make a really important plot. Out of rail speed is the speed our rocket has when it is leaving the launch rail. This is crucial to make sure it can fly safely after leaving the rail. A common rule of thumb is that our rocket's out of rail speed should be 4 times the wind speed so that it does not stall and become unstable.
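# This rule of thumb can be sketched as a simple safety check. The function name, the threshold factor of 4, and the example speeds below are illustrative assumptions, not RocketPy API:

```python
def out_of_rail_speed_ok(rail_exit_speed, wind_speed, factor=4.0):
    """True when the rocket leaves the rail at least `factor` times faster than the wind."""
    return rail_exit_speed >= factor * abs(wind_speed)

print(out_of_rail_speed_ok(25.0, 5.0))  # True: 25 m/s exceeds 4 x 5 m/s
print(out_of_rail_speed_ok(15.0, 5.0))  # False: 15 m/s is below 4 x 5 m/s
```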
# + id="MJ7YRKt8NVGH" colab_type="code" colab={}
def speed(mass):
    # Prepare Environment
    Env = Environment(
        railLength=5.2,
        latitude=32.990254,
        longitude=-106.974998,
        elevation=1400,
        date=(2018, 6, 20, 18)
    )
    Env.setAtmosphericModel(type='CustomAtmosphere', wind_v=-5)
    # Prepare Motor
    Pro75M1670 = SolidMotor(
        thrustSource="../../data/motors/Cesaroni_M1670.eng",
        burnOut=3.9,
        grainNumber=5,
        grainSeparation=5/1000,
        grainDensity=1815,
        grainOuterRadius=33/1000,
        grainInitialInnerRadius=15/1000,
        grainInitialHeight=120/1000,
        nozzleRadius=33/1000,
        throatRadius=11/1000,
        interpolationMethod='linear'
    )
    # Prepare Rocket
    Calisto = Rocket(
        motor=Pro75M1670,
        radius=127/2000,
        mass=mass,
        inertiaI=6.60,
        inertiaZ=0.0351,
        distanceRocketNozzle=-1.255,
        distanceRocketPropellant=-0.85704,
        powerOffDrag='../../data/calisto/powerOffDragCurve.csv',
        powerOnDrag='../../data/calisto/powerOnDragCurve.csv'
    )
    Calisto.setRailButtons([0.2, -0.5])
    Nose = Calisto.addNose(length=0.55829, kind="vonKarman", distanceToCM=0.71971)
    FinSet = Calisto.addFins(4, span=0.1, rootChord=0.120, tipChord=0.040, distanceToCM=-1.04956)
    Tail = Calisto.addTail(topRadius=0.0635, bottomRadius=0.0435, length=0.060, distanceToCM=-1.194656)
    # Simulate Flight until Apogee
    TestFlight = Flight(rocket=Calisto, environment=Env, inclination=85, heading=0, terminateOnApogee=True)
    return TestFlight.outOfRailVelocity
speedbymass = Function(speed, inputs="Mass (kg)", outputs="Out of Rail Speed (m/s)")
speedbymass.plot(8,20,20)
| docs/notebooks/getting_started_colab.ipynb |
;; -*- coding: utf-8 -*-
;; ---
;; jupyter:
;; jupytext:
;; text_representation:
;; extension: .scm
;; format_name: light
;; format_version: '1.5'
;; jupytext_version: 1.14.4
;; kernelspec:
;; display_name: Calysto Scheme 3
;; language: scheme
;; name: calysto_scheme
;; ---
;; ### Exercise 2.38
;; The accumulate procedure is also known as fold-right, because it combines
;; the first element of the sequence with the result of combining all the
;; elements to the right. There is also a fold-left, which is similar to
;; fold-right, except that it combines elements working in the opposite direction:
;;
;; (define (fold-left op initial sequence)
;;   (define (iter result rest)
;;     (if (null? rest) result
;;         (iter (op result (car rest)) (cdr rest))))
;;   (iter initial sequence))
;;
;; What are the values of
;;
;; (fold-right / 1 (list 1 2 3))
;; (fold-left / 1 (list 1 2 3))
;; (fold-right list nil (list 1 2 3))
;; (fold-left list nil (list 1 2 3))
;;
;; Give a property that op should satisfy to guarantee that fold-right and
;; fold-left will produce the same values for any sequence.
;; For the fold-right and fold-left procedures, see also:
;; https://en.wikipedia.org/wiki/Fold_(higher-order_function)
(define (fold-left op initial sequence)
  (define (iter result rest)
    (if (null? rest)
        result
        (iter (op result (car rest)) (cdr rest))))
  (iter initial sequence))
; Rename the accumulate procedure to fold-right
(define (fold-right op initial sequence)
  (if (null? sequence)
      initial
      (op (car sequence) (fold-right op initial (cdr sequence)))))
; Answer
(fold-right / 1 (list 1 2 3))
; Answer
(fold-left / 1 (list 1 2 3))
; Answer
(fold-right list () (list 1 2 3))
; Answer
(fold-left list () (list 1 2 3))
(fold-right + 1 (list 1 2 3))
(fold-left + 1 (list 1 2 3))
;; The property op must satisfy for fold-right and fold-left to return the same
;; value for any sequence: op must be commutative (and, strictly speaking,
;; associative as well).
;;
;; As the different behaviour of + and / above shows, the two folds disagree
;; when the property does not hold.
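;; For comparison only, the same two folds can be sketched in Python with
;; functools.reduce (an aside, not part of the Scheme exercise):

```python
from functools import reduce

def fold_left(op, initial, seq):
    # reduce already folds from the left
    return reduce(op, seq, initial)

def fold_right(op, initial, seq):
    # fold from the right by walking the reversed sequence,
    # swapping the argument order so op sees (element, accumulator)
    return reduce(lambda acc, x: op(x, acc), reversed(seq), initial)

print(fold_right(lambda a, b: a / b, 1, [1, 2, 3]))  # 1/(2/(3/1)) = 1.5
print(fold_left(lambda a, b: a / b, 1, [1, 2, 3]))   # ((1/1)/2)/3 = 0.1666...
```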
| exercises/2.38.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
from sklearn.metrics import roc_auc_score, precision_recall_curve, roc_curve, average_precision_score
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.preprocessing import normalize
# -
import gc
gc.enable()
def train_model(train_, val_x_, val_y_, test_, y_, folds_):
    feats_ = [f_ for f_ in test_.columns if f_ not in ['SK_ID_CURR']]
    oof_preds = np.zeros(train_.shape[0])
    val_preds = np.zeros(val_x_.shape[0])
    sub_preds = np.zeros(test_.shape[0])
    for n_fold, (trn_idx, val_idx) in enumerate(folds_.split(train_)):
        trn_x, trn_y = pd.DataFrame(train_).iloc[trn_idx], pd.DataFrame(y_).iloc[trn_idx]
        val_x, val_y = pd.DataFrame(train_).iloc[val_idx], pd.DataFrame(y_).iloc[val_idx]
        clf = LogisticRegression(
            penalty='l1',
            solver='liblinear',  # recent scikit-learn defaults to 'lbfgs', which does not support 'l1'
            C=10.0,
            class_weight='balanced',
            random_state=14,
            n_jobs=72
        )
        clf.fit(trn_x, trn_y)
        oof_preds[val_idx] = clf.predict_proba(val_x)[:, 1]
        val_preds += clf.predict_proba(pd.DataFrame(val_x_[feats_]))[:, 1] / folds_.n_splits
        sub_preds += clf.predict_proba(pd.DataFrame(test_[feats_]))[:, 1] / folds_.n_splits
        print('fold %2d validate AUC score %.6f' % (n_fold + 1, roc_auc_score(val_y_, val_preds * folds_.n_splits)))
        print('fold %2d AUC %.6f' % (n_fold + 1, roc_auc_score(val_y, oof_preds[val_idx])))
        del clf, trn_x, trn_y, val_x, val_y
        gc.collect()
    print('validate AUC score %.6f' % roc_auc_score(val_y_, val_preds))
    print('full AUC score %.6f' % roc_auc_score(y_, oof_preds))
    test_['TARGET'] = sub_preds
    return oof_preds, test_[['SK_ID_CURR', 'TARGET']]
train = pd.read_csv('../data/train_.csv')
test = pd.read_csv('../data/test_.csv')
y = pd.read_csv('../data/y_.csv')
val_x = pd.read_csv('../data/val_x_.csv')
val_y = pd.read_csv('../data/val_y_.csv')
folds = KFold(n_splits=5, shuffle=True, random_state=0)
features = val_x.columns
feats = [f for f in features if f not in ['SK_ID_CURR']]
train = normalize(train)
test = normalize(test)
val_x = normalize(val_x)
train = pd.DataFrame(train)
test = pd.DataFrame(test)
val_x = pd.DataFrame(val_x)
train.columns = feats
test.columns = features
val_x.columns = features
oof_preds, test_preds = train_model(train, val_x, val_y['TARGET'].values.ravel(), test, y['0'].values.ravel(), folds)
test_preds.to_csv('../data/lr_submission.csv', index=False)
| model/LR.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Parsing and Inspecting PDDL problems
#
# Let's use the built-in PDDL parser to inspect some problem encoded in PDDL:
# + pycharm={"is_executing": false, "name": "#%%\n"}
from tarski.io import PDDLReader
reader = PDDLReader(raise_on_error=True)
reader.parse_domain('./benchmarks/blocksworld.pddl')
problem = reader.parse_instance('./benchmarks/probBLOCKS-4-2.pddl')
lang = problem.language
# -
# Notice how the parsing of a standard instance of the Blocks world results in
# a _problem_ object and, within that problem, a _language_ object. There
# is a clear distinction in Tarski between the language used to define a planning
# problem, and the problem itself. Tarski sticks as close as possible to the
# standard definition of
# [many-sorted first-order languages](https://en.wikipedia.org/wiki/First-order_logic#Many-sorted_logic).
# Hence, languages have a vocabulary made up of predicate and function names,
# each with their arity and sort, and allow the definition of terms and formulas
# in the usual manner. Let us inspect the language of the problem we just parsed:
# + pycharm={"is_executing": false, "name": "#%%\n"}
print(lang.sorts)
print(lang.predicates)
print(lang.functions)
# -
# Turns out that our blocks encoding has one single sort `object` (the default
# sort in PDDL when no sort is declared), no functions, and a few predicates.
# Besides the predicates defined in the PDDL file, Tarski assumes (unless explicitly
# disallowed) the existence of a built-in equality predicate and its negation.
# A string `clear/1` denotes a predicate named `clear` with arity 1.
#
# Constants are usually considered as nullary functions in the literature, but in
# Tarski they are stored separately:
# + pycharm={"is_executing": false, "name": "#%%\n"}
lang.constants()
# -
# Our blocks encoding thus has four constants of type object, which we know
# represent the four different blocks in the problem.
#
# Languages also provide means to directly retrieve any sort, constant, function or predicate
# when their name is known:
# + pycharm={"is_executing": false, "name": "#%%\n"}
print(lang.get('on'))
print(lang.get('clear'))
print(lang.get('a', 'b', 'c'))
# -
# Notice how in the last statement we retrieve three different elements (in this case, constants) in one single call.
# We can also easily inspect all constants of a certain sort:
# + pycharm={"is_executing": false, "name": "#%%\n"}
list(lang.get('object').domain())
# -
# We can of course inspect not only the language, but also the problem itself.
# + pycharm={"name": "#%%\n"}
problem
# -
# A PDDL planning problem is made up of some initial state, some goal formula,
# and a number of actions. The initial state is essentially an encoding of
# a first-order interpretation over the language of the problem:
# + pycharm={"name": "#%%\n"}
problem.init
# -
# We can also get an extensional description of our initial state:
# + pycharm={"name": "#%%\n"}
print(problem.init.as_atoms())
# -
# Since the initial state is just another Tarski model, we can perform with it all operations that
# we have seen on previous tutorials, e.g. inspecting the value of some atom on it:
# + pycharm={"name": "#%%\n"}
clear, b = lang.get('clear', 'b')
problem.init[clear(b)]
# -
# In contrast with the initial state, the goal can be any arbitrary first-order formula.
# Unsurprisingly, it turns out that the goal is not satisfied in the initial state:
# + pycharm={"is_executing": false, "name": "#%%\n"}
print(problem.goal)
print(problem.init[problem.goal])
# -
# Our blocks encoding has four actions:
#
# + pycharm={"is_executing": false, "name": "#%%\n"}
print(list(problem.actions))
# -
# We can of course retrieve and inspect each action individually:
# + pycharm={"name": "#%%\n"}
stack = problem.get_action('stack')
stack.parameters
# + pycharm={"name": "#%%\n"}
stack.precondition
# + pycharm={"name": "#%%\n"}
stack.effects
| docs/notebooks/parsing-and-inspecting-pddl.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="PsWDAqI0EFMo"
# https://www.kaggle.com/mlg-ulb/creditcardfraud
#
#
# > The dataset contains transactions made by credit cards in September 2013 by European cardholders. It presents transactions that occurred over two days, with 492 frauds out of 284,807 transactions. The dataset is highly unbalanced: the positive class (frauds) accounts for 0.172% of all transactions.
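# A quick arithmetic check of the imbalance figure quoted above (492 frauds out of 284,807 transactions):

```python
frauds, total = 492, 284_807
pct = 100 * frauds / total  # positive-class share as a percentage
print(round(pct, 3))  # 0.173, i.e. roughly the quoted 0.172%
```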
# + [markdown] colab_type="text" id="u8NoV6kLKcVm"
# Libraries:
# ----------
# - https://karateclub.readthedocs.io/en/latest/modules/root.html
# - networkx - https://networkx.github.io/documentation
#
# Reading materials:
# ------------------
# - https://github.com/benedekrozemberczki/awesome-graph-classification
# + colab={} colab_type="code" id="8kVif7_1B4gP"
import pandas as pd
# + colab={} colab_type="code" id="Wm-jTFInDM_u"
from sklearn.datasets import fetch_openml
# + colab={} colab_type="code" id="lBqnzRuUDIbZ"
X, y = fetch_openml(data_id=1597, return_X_y=True)
# -
# Please note that you will need a lot of RAM to run this notebook. If you want to reduce the size of RAM needed, you can work with smaller datasets as follows:
#
# ```python
# from sklearn.model_selection import train_test_split
#
# X, _, y, _ = train_test_split(X, y, test_size=0.33)
# ```
#
# The above would get rid of 33% of the data. You can reduce it by more if you increase the test_size value.
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="I6rPOiKkCvgT" outputId="6e3d09ca-1817-462b-b7aa-9b5d6e5f6a80"
len(X)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="4Bv_eH8ACwFK" outputId="7913af0c-ce2d-49e3-a0b6-9383afbf9039"
X.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="4NfTTUk2DjDV" outputId="91a91e84-0b7b-432c-c642-6b459acca6eb"
import numpy as np
np.unique(y)
# + colab={} colab_type="code" id="92Xe5uwqDreR"
import seaborn as sns
# + colab={"base_uri": "https://localhost:8080/", "height": 282} colab_type="code" id="o3wbJua7D1Ey" outputId="788e20e4-9b8f-4d42-ab6a-c77c58b2f3c1"
pd.Series(y).hist()
# + colab={"base_uri": "https://localhost:8080/", "height": 158} colab_type="code" id="bwWR4L4QF6BK" outputId="bc5aae7d-b79d-470a-81af-21b596dcdd44"
X.mean(axis=0)
# + colab={"base_uri": "https://localhost:8080/", "height": 158} colab_type="code" id="oXSRjImSGAFF" outputId="ce52c363-9c98-4e49-f9b5-39f039724794"
X.std(axis=0)
# + colab={"base_uri": "https://localhost:8080/", "height": 336} colab_type="code" id="O4au-CvGGXOD" outputId="f0eb76c7-f4ce-4999-db5a-c1dd6a315409"
pip install dython
# + colab={} colab_type="code" id="wrtg2jIXGSms"
from dython.nominal import associations
# + colab={"base_uri": "https://localhost:8080/", "height": 881} colab_type="code" id="2S85bpGgGrzn" outputId="abd6ee1e-142a-46b7-c3db-1250d3870ca6"
print('feature correlations')
associations(X, nominal_columns=None, return_results=False, figsize=[15, 15])
# + colab={"base_uri": "https://localhost:8080/", "height": 194} colab_type="code" id="xxtCzbCWMMez" outputId="58fe4692-4604-4600-d3d4-50f782074b0f"
pip install annoy
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" id="U8gSzMPyMLW0" outputId="ccad2364-cb3f-41bf-b1f5-97577dacc833"
from annoy import AnnoyIndex
t = AnnoyIndex(X.shape[1], 'euclidean')  # Length of item vector that will be indexed
for i, v in enumerate(X):
    t.add_item(i, v)
t.build(10)  # 10 trees
# + colab={"base_uri": "https://localhost:8080/", "height": 194} colab_type="code" id="5-I8RH-LKE8F" outputId="98623720-8ec8-4fda-cf5a-3fc14bbe0755"
pip install mpld3
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" id="kvUCm-JvNmLH" outputId="f941869a-efc0-4d30-9a67-fbb0e4a2a4f0"
# %matplotlib inline
import mpld3
mpld3.enable_notebook()
_, distances = t.get_nns_by_item(0, 10000, include_distances=True)
pd.Series(distances).hist(bins=200)
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" id="vCvWX-kgeio4" outputId="efe17c3e-bda2-4139-bc32-16138e98b1b7"
X.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="AvHGgAiKMxy6" outputId="26910e1b-087b-4f50-c41b-8e27e87c9d67"
from tqdm import trange
from scipy.sparse import lil_matrix
MAX_NEIGHBORS = 10000 # Careful: this parameter determines the run-time of the loop!
THRESHOLD = 6.0
def get_neighbors(i):
    neighbors, distances = t.get_nns_by_item(i, MAX_NEIGHBORS, include_distances=True)
    return [n for n, d in zip(neighbors, distances) if d < THRESHOLD]

n_rows = X.shape[0]
neighborhood = dict()
for i in trange(n_rows):
    neighborhood[i] = get_neighbors(i)
# + colab={} colab_type="code" id="V1Oh4IMAgxiw"
A = lil_matrix((n_rows, n_rows), dtype=np.int8)
for i, n in neighborhood.items():
    A[i, n] = 1.0
    A[n, i] = 1.0
# + colab={} colab_type="code" id="Nayd6MDDSNL0"
print('max sparsity given max neighbors parameter: {}'.format(MAX_NEIGHBORS / n_rows))
# + colab={} colab_type="code" id="7BZ-1QniSHqu"
print('average number of connections: {}'.format(A.sum(axis=0).mean()))
# + colab={} colab_type="code" id="OOUfItyTK_yD"
print('sparsity: {}'.format(A.sum() / (n_rows * n_rows)))
# + colab={} colab_type="code" id="Y8eJ3vIxmGrO"
# another attempt: I used the previous one in the experiment reported in the book
from numba import njit, jit, prange
import numpy as np
from numba.pycc import CC
from scipy.sparse import lil_matrix
from scipy.spatial.distance import cosine
cc = CC('adjacency_utils')
def angular(u, v):
    return np.sqrt(2 * (cosine(u, v)))

@cc.export('calc_dist', 'f8(f8[:], f8[:])')
@jit("f8(f8[:], f8[:])")
def calc_dist(u, v):
    '''Euclidean distance (without sqrt)
    Example:
    --------
    >> calc_dist(X[0, :], X[1, :])
    12.795783809844064
    '''
    d = u - v
    return np.sum(d * d)

@jit(nopython=False, parallel=True, forceobj=True)
def calculate_adjacency(X, threshold=0.5):
    '''Calculate an adjacency matrix
    given a feature matrix
    '''
    n_rows = X.shape[0]
    A = lil_matrix((n_rows, n_rows), dtype=np.int8)
    for i in prange(n_rows):
        for i2 in range(i+1, n_rows):
            d = calc_dist(X[i, :], X[i2, :])
            if d < threshold:
                A[i, i2] = 1.0
                A[i2, i] = 1.0
    return A

cc.compile()
# too slow, don't run!
# A = calculate_adjacency(X, threshold=threshold)
# + colab={} colab_type="code" id="4WnxYs_8Osn5"
import networkx as nx
G = nx.from_scipy_sparse_matrix(A)
# -
from scipy.sparse.csgraph import connected_components
n_components, labels = connected_components(A, directed=False, return_labels=True)
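# As a sanity check of the `connected_components` call, a tiny synthetic adjacency matrix (two linked nodes plus one isolated node) should yield exactly two components:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import connected_components

# Nodes 0 and 1 are connected; node 2 is isolated.
A_demo = lil_matrix((3, 3), dtype=np.int8)
A_demo[0, 1] = 1
A_demo[1, 0] = 1

n_comp, lab = connected_components(A_demo, directed=False, return_labels=True)
print(n_comp)            # 2
print(lab[0] == lab[1])  # True: nodes 0 and 1 share a component
```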
# + colab={} colab_type="code" id="XFEIfk2ERx7q"
len(distances)
# + colab={} colab_type="code" id="BzQlXZF-QlJY"
nx.draw(G, with_labels=True, font_weight='bold')
# + colab={} colab_type="code" id="SHhjNe6EM28Q"
t.get_distance(0, 64172)
# + colab={} colab_type="code" id="wzR9T5HbNBB5"
t.get_distance(0, 212379)
# + colab={} colab_type="code" id="15l8EJJsJsV6"
transaction_correlations = associations(X.transpose(), nominal_columns=None, return_results=True)
# + colab={} colab_type="code" id="0O1Lm-0iJWJr"
transaction_correlations.shape
# + colab={} colab_type="code" id="Hbgwx4enIKn8"
from scipy.spatial.distance import pdist
# https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.distance.pdist.html
# + colab={} colab_type="code" id="hy2Jh1FuIRDG"
correlations = pdist(X, metric='euclidean')
# + colab={} colab_type="code" id="j2tpaagjG-Ch"
from sklearn.metrics import pairwise_distances
correlations = pairwise_distances(X, metric='euclidean')
# + colab={} colab_type="code" id="XRIDMwjKH-6G"
correlations.shape
# + colab={} colab_type="code" id="ze9XMV3gG6dV"
# + colab={} colab_type="code" id="8hbhZucvD9gw"
import networkx as nx
# + colab={} colab_type="code" id="fI48SzT0D2JS"
graph = nx.convert_matrix.from_pandas_edgelist(ata, "id_1", "id_2")
| chapter03/Spotting fraudster communities.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Lambda-School-Labs/Labs25-Bridges_to_Prosperity-TeamC-ds/blob/feature%2Fget-correct-sect-dist-gov-ID%2Ftrent-bernhisel/notebooks/Cleaning_to_get_correct_sector_and_District_IDs.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="zy7O7Z4pDKvB" colab_type="code" colab={}
import pandas as pd
# + id="r9OBQtl_DOCR" colab_type="code" colab={}
# gov_df = pd.read_excel('/content/Rwanda Administrative Levels and Codes_Province through Village_2019.02.28 (1).xlsx', encoding='latin-1')
bridges_df = pd.read_csv('https://raw.githubusercontent.com/Lambda-School-Labs/Labs25-Bridges_to_Prosperity-TeamC-ds/main/final_csv/final.csv')
gov_unique = pd.read_csv('/content/unique_gov_id_PDS.csv')
# + id="ZXDOeZU-DYN7" colab_type="code" colab={}
# gov_df = gov_df.drop(columns=['Cell', 'Cell_ID', 'Vill_ID', 'Village', 'FID', 'Province', 'Status'])
# + id="5MJ-xlpPainN" colab_type="code" colab={}
gov_unique = gov_unique.drop(columns=['Prov_ID', 'Province'])
# + id="0YOqj1l0DoBv" colab_type="code" colab={}
# make sure that all the sectors in both dataframes are title case.
bridges_df['Sector'] = bridges_df['Sector'].str.title()
gov_unique['Sector'] = gov_unique['Sector'].str.title()
# + id="HsHcgfE6Dw_H" colab_type="code" colab={}
# testing_df = bridges_df[bridges_df['Sector'] == 'Nyarugenge']
# + id="n_gctDDXD7vt" colab_type="code" colab={}
# testing_df[testing_df['Project Code'] == 1014597]
# + id="OtdZ6ShJZujO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="88292f4f-8201-4496-c304-e051ebe43116"
# the initial shape of our bridges dataframe
bridges_df.shape
# + id="0iVQL57CVUsJ" colab_type="code" colab={}
# Need to update certain project codes' sectors to the correct sector
# mapping of correct sector by project code.
sector_mapping = {1007327: 'Juru',
1007328: 'Masaka',
1007329: 'Masaka',
1007330: 'Masaka',
1007343: 'Kivuruga',
1007345: 'Busengo',
1007354: 'Kivuruga',
1007404: 'Mataba',
1007508: 'Kirimbi',
1007509: 'Kirimbi',
1007515: 'Mahembe',
1007518: 'Macuba',
1007521: 'Macuba',
1007530: 'Kanyinya',
1007531: 'Gisozi',
1007583: 'Munini',
1007593: 'Kivu',
1007626: 'Bushoki',
1007629: 'Bushoki',
1007633: 'Base',
1007634: 'Tumba',
1007652: 'Jali',
1007653: 'Jali',
1007661: 'Bugarama',
1007663: 'Gitambi',
1007898: 'Gakenke',
1009329: 'Muhondo',
1012701: 'Bweramana',
1012812: 'Mushubati',
1013233: 'Nemba',
1013270: 'Kibilizi',
1014009: 'Karengera',
1014011: 'Karengera',
1014021: 'Karengera',
1014060: 'Ruganda',
1014154: 'Mururu',
1014161: 'Nyakabuye',
1014340: 'Kayumbu',
1014493: 'Jarama',
1014511: 'Rukumberi',
1014512: 'Rukumberi',
1014554: 'Rukumberi',
1014555: 'Rukumberi',
1014560: 'Gishali',
1014569: 'Ruhuha'}
# + id="hDFp4TZvYre9" colab_type="code" colab={}
# loop over the dictionary and update the sector for each project code
for project, sector in sector_mapping.items():
bridges_df.loc[bridges_df['Project Code'] == project, 'Sector'] = sector
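# The loop above can also be sketched with plain dictionaries — a minimal, hypothetical illustration of the same per-project-code override (the real data lives in `bridges_df`):

```python
# hypothetical mini-records standing in for bridge rows; not the real dataset
records = [
    {'Project Code': 1007327, 'Sector': 'Unknown'},
    {'Project Code': 9999999, 'Sector': 'Masaka'},
]
corrections = {1007327: 'Juru'}

for row in records:
    # dict.get keeps the current sector when there is no override for the code
    row['Sector'] = corrections.get(row['Project Code'], row['Sector'])
```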
# + [markdown] id="MTjsQqz7Uqck" colab_type="text"
# 1013611:
# District = 'Nyaruguru',
# Sector = Mata
# Cell = Nyamabuye
# + id="YNhKOLYQZRL0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 291} outputId="7a9ec7c0-018f-4e63-a2bd-f41b090d6b74"
# need to update the project code 1013611 and correct the district, sector and cell
bridges_df.loc[bridges_df['Project Code'] == 1013611, 'District'] = 'Nyaruguru'
bridges_df.loc[bridges_df['Project Code'] == 1013611, 'Sector'] = 'Mata'
bridges_df.loc[bridges_df['Project Code'] == 1013611, 'Cell'] = 'Nyamabuye'
bridges_df.loc[bridges_df['Project Code'] == 1013611]
# + id="_qB7pVKobHMZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="0fbeff9e-da22-49b0-af8d-81d164f03f34"
# shape after making sector updates
bridges_df.shape
# + id="0tj-mkzNEL5V" colab_type="code" colab={}
# create a dataframe that merges the two if they have the same District
# and Sector combination
final = pd.merge(left=bridges_df, right=gov_unique, left_on=['District', 'Sector'], right_on=['District', 'Sector'])
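# `pd.merge` defaults to an inner join, so only (District, Sector) pairs present in both frames survive; a stdlib sketch of that join logic on toy rows (column names are illustrative):

```python
# toy stand-ins for bridges_df and gov_unique rows
bridges = [
    {'District': 'Bugesera', 'Sector': 'Juru', 'Project Code': 1},
    {'District': 'Gasabo',   'Sector': 'Jali', 'Project Code': 2},
]
gov = [
    {'District': 'Bugesera', 'Sector': 'Juru', 'Sect_ID': 101},
]

# index the right-hand table by its composite key, as a merge does internally
gov_by_key = {(g['District'], g['Sector']): g for g in gov}

# keep only left rows whose key also exists on the right (inner join)
merged = [
    {**b, **gov_by_key[(b['District'], b['Sector'])]}
    for b in bridges
    if (b['District'], b['Sector']) in gov_by_key
]
```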
# + id="v8YcpPbcEiKU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 518} outputId="85de7557-f5c3-4cc6-f4e7-826e277b5ef3"
print(final.shape)
final.head()
# + id="b0cA7__JE9DB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="473d4643-caf4-4c48-deb7-3ce310297758"
# make sure the observations in the final are the same as what existed in the
# original dataframe
bridges_df.shape[0] - final.shape[0]
# + id="Sffg1TCxdQAC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="19208b00-ecfa-4079-e20b-c3ca53419707"
# make sure that there are the same number of observations per project code
# in the final merged dataframe as existed in the original bridges dataframe
(final['Project Code'].value_counts() == bridges_df['Project Code'].value_counts()).value_counts()
# + id="WhfvN1oRWElG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="685411e4-72f6-4766-ba9c-eff790024d5f"
print(f'Unique B2P Sector Sites : {len(bridges_df["Sector"].unique())}')
print(f'Unique Gov Sector Sites : {len(gov_unique["Sector"].unique())}')
print(f'Unique B2P District Sites : {len(bridges_df["District"].unique())}')
print(f'Unique Gov District Sites : {len(gov_unique["District"].unique())}')
# + id="Khl0xR2BkiQ4" colab_type="code" colab={}
# make sure the same districts appear in both the original dataframes
for i in bridges_df['District'].unique():
if i in gov_unique['District'].unique():
pass
else:
print(i, False)
# + id="R59INdU8Xxj3" colab_type="code" colab={}
# make sure there are no sectors that appear in the B2P data that don't appear
# in the government data.
# there were some, and we ended up cleaning those values.
for i in bridges_df['Sector'].unique():
if i in gov_unique['Sector'].unique():
pass
else:
print(i, False)
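# Both membership loops above amount to a set difference — an empty result means every B2P value also exists in the government data (toy values shown, not the real sectors):

```python
# hypothetical sector lists; the notebook uses the dataframe columns instead
b2p_sectors = {'Juru', 'Masaka', 'Typo-Sector'}
gov_sectors = {'Juru', 'Masaka', 'Jali'}

# values present in the B2P data but missing from the government list
missing = b2p_sectors - gov_sectors
```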
# + id="CCuMYnc3hNvM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="f2d8831c-2633-4a5f-8fe3-35d75f2cffe3"
print(f'Unique Project codes before merge : {len(bridges_df["Project Code"].unique())}')
print(f'Unique Project Codes after merge : {len(final["Project Code"].unique())}')
# + id="RofzO4R-l4al" colab_type="code" colab={}
cache = {}
# make sure no project codes were dropped by the merge
for i in bridges_df['Project Code'].unique():
    if i in final['Project Code'].unique():
        pass
    else:
        print(i, False)
        cache[i] = ""
# + id="96XDIDSAaZfN" colab_type="code" colab={}
final = final.drop(columns=['District_ID', 'Sector_ID'])
# + id="rjmIFPjdjsBV" colab_type="code" colab={}
final.to_csv('Final_with_gov_ID.csv')
# + id="lmiIChvDj1Ur" colab_type="code" colab={}
| notebooks/Cleaning_to_get_correct_sector_and_District_IDs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <hr style="height:.9px;border:none;color:#333;background-color:#333;" />
# <hr style="height:.9px;border:none;color:#333;background-color:#333;" />
#
# <h2>Regression Model | Birthweight</h2>
#
# <NAME> - MBAN
#
#
# <hr style="height:.9px;border:none;color:#333;background-color:#333;" />
# <hr style="height:.9px;border:none;color:#333;background-color:#333;" />
# <h2>Data Preparation</h2><br>
# <h3>Import library and Read the file</h3><br>
# +
# import libraries
import numpy as np # mathematical essentials
import matplotlib.pyplot as plt # essential graphical output
import pandas as pd # data science essentials
import seaborn as sns # enhanced graphical essentials
import sklearn.linear_model # linear model
import statsmodels.formula.api as smf # regression modelling
from sklearn.linear_model import LinearRegression # linear regression (scikit-learn)
from sklearn.model_selection import train_test_split # train_test split
from sklearn.neighbors import KNeighborsRegressor # KNN for regression
from sklearn.preprocessing import StandardScaler # standard scaler
# -
# a) Print first five rows of the data
# +
# setting pandas print options
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
# specifying file name
file = './birthweight_low.xlsx'
# reading the file into Python
birthweight = pd.read_excel(io = file)
# display the first 5 rows
birthweight.head(5)
# -
# b) Obtain the shape of the data set
# +
# using .shape to view rows and columns
birthweight.shape
# printing the dimensions of the dataset
print(f"""
Size of Original Dataset
------------------------
Observations: {birthweight.shape[0]}
Features: {birthweight.shape[1]}
""")
# -
# <h3>Replace Missing Values</h3><br>
# a) Check for missing values in the data
# +
# using .isnull() to find columns with missing values
print(birthweight.isnull().any())
# printing total number of missing values
print(f"Total number of missing values: {birthweight.isnull().sum(axis = 0).sum()}")
# -
# b) Find the mode of the columns which contain missing values
# to find the mode of the columns with missing values
birthweight_mode = birthweight[['meduc', 'npvis', 'feduc']].mode()
birthweight_mode
# c) Replace the missing values with respective mode values
# +
# assigning the mode value
bw_mode = 12.0
# to fill in the missing values with mode
birthweight['meduc'] = birthweight['meduc'].fillna(bw_mode)
birthweight['npvis'] = birthweight['npvis'].fillna(bw_mode)
birthweight['feduc'] = birthweight['feduc'].fillna(bw_mode)
# -
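# The hardcoded `bw_mode = 12.0` above comes from inspecting the `.mode()` output; the underlying computation is just the most frequent non-missing value per column, e.g. with `collections.Counter` (toy values, not the real column):

```python
from collections import Counter

# toy stand-in for a column such as 'meduc'; None marks a missing value
meduc = [12.0, 12.0, 16.0, None, 12.0]

# mode over the non-missing entries, then fill the gaps with it
observed = [v for v in meduc if v is not None]
mode_value, _ = Counter(observed).most_common(1)[0]
filled = [v if v is not None else mode_value for v in meduc]
```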
# d) Re-check for missing values
# to check if missing values have been replaced
birthweight.isnull().any()
# e) Categorize variables based on 'continuous' and 'countable' characteristics
"""CONTINUOUS\
----------
bwght\
mage\
meduc\
monpre\
npvis\
fage\
feduc\
cigs\
drink\
#INTERVAL/COUNT\
--------------
male\
mwhte\
mblck\
moth\
fwhte\
fblck\
foth\
omaps\
fmaps\
"""
# <h2>Develop Regression Model</h2><br>
# <h3>Regression Model with 'continuous' variables only</h3><br>
# +
# running regression with continuous variables
lm_best = smf.ols(formula = """bwght ~ fmaps + mage + cigs + drink + fage """,
data = birthweight)
results = lm_best.fit()
print(results.summary())
# -
# <h3>Scatter Plots</h3><br>
# Run scatter plots to understand the relation between the 'continuous' variables
# +
# set the size
fig, ax = plt.subplots(figsize = (10, 8))
# "mother's age"
plt.subplot(2, 2, 1)
sns.scatterplot(x = birthweight['mage'],
y = birthweight['bwght'],
color = 'g')
# adding labels only
plt.xlabel(xlabel = 'mage')
plt.ylabel(ylabel = 'bwght')
################################################
# "mother's education"
plt.subplot(2, 2, 2)
sns.scatterplot(x = birthweight['meduc'],
y = birthweight['bwght'],
color = 'g')
# adding labels only
plt.xlabel(xlabel = 'meduc')
plt.ylabel(ylabel = 'bwght')
################################################
# "month prenatal care began"
plt.subplot(2, 2, 3)
sns.scatterplot(x = birthweight['monpre'],
y = birthweight['bwght'],
color = 'orange')
# adding labels only
plt.xlabel(xlabel = 'monpre')
plt.ylabel(ylabel = 'bwght')
################################################
# "numbers of prenatal visits"
plt.subplot(2, 2, 4)
sns.scatterplot(x = birthweight['npvis'],
y = birthweight['bwght'],
color = 'r')
# adding labels only
plt.xlabel(xlabel = 'npvis')
plt.ylabel(ylabel = 'bwght')
# print
plt.tight_layout()
plt.show()
################################################
# set the size
fig, ax = plt.subplots(figsize = (10, 12))
# "father's age"
plt.subplot(3, 2, 1)
sns.scatterplot(x = birthweight['fage'],
y = birthweight['bwght'],
color = 'y')
# adding labels only
plt.xlabel(xlabel = 'fage')
plt.ylabel(ylabel = 'bwght')
################################################
# "father's education"
plt.subplot(3, 2, 2)
sns.scatterplot(x = birthweight['feduc'],
y = birthweight['bwght'],
color = 'orange')
# adding labels only
plt.xlabel(xlabel = 'feduc')
plt.ylabel(ylabel = 'bwght')
################################################
# "average cigarettes per day"
plt.subplot(3, 2, 3)
sns.scatterplot(x = birthweight['cigs'],
y = birthweight['bwght'],
color = 'r')
# adding labels only
plt.xlabel(xlabel = 'cigs')
plt.ylabel(ylabel = 'bwght')
################################################
# "average drinks per week"
plt.subplot(3, 2, 4)
sns.scatterplot(x = birthweight['drink'],
y = birthweight['bwght'],
color = 'r')
# adding labels only
plt.xlabel(xlabel = 'drink')
plt.ylabel(ylabel = 'bwght')
# print
plt.tight_layout()
plt.show()
# -
# <h3>Train-Test Split</h3><br>
# Split the data into training data set and testing data set
# +
# preparing explanatory variable data
birthweight_data = birthweight.drop(["bwght"],
axis = 1)
birthweight_target = birthweight.loc[: , 'bwght']
# preparing training and testing sets
x_train, x_test, y_train, y_test = train_test_split(
birthweight_data,
birthweight_target,
test_size = 0.25,
random_state = 219)
# checking the shapes of the data sets
print(f"""
Training Data
-------------
X-side : {x_train.shape}
y-side : {y_train.shape}
Testing Data
------------
x-side : {x_test.shape}
y-side : {y_test.shape}
""")
# -
# Create the list of variables to be used for regression
# creating list for running different models
x_variables_mod = ['mage', 'cigs', 'drink', 'fage']
# <h3>OLS Model</h3>
# Train-test split for OLS model
# +
# applying the model in scikit-learn
# preparing x-variables from the OLS model
ols_data = birthweight.loc[:, x_variables_mod]
# preparing response variable
ols_target = birthweight.loc[:, 'bwght']
# setting up more than one train-test split
# full x-dataset
x_train_Full, x_test_Full, y_train_Full, y_test_Full = train_test_split(
birthweight_data,
birthweight_target,
test_size = 0.25,
random_state = 219)
# OLS value dataset
x_train_ols, x_test_ols, y_train_ols, y_test_ols = train_test_split(
ols_data,
ols_target,
test_size = 0.25,
random_state = 219)
# +
# merging x_train and y_train so that they can be used in statsmodel
birthweight_train = pd.concat([x_train, y_train], axis = 1)
lm_best = smf.ols(formula = """ bwght ~ mage + cigs + drink + fage """,
data = birthweight_train)
# fit the data into the model object
results = lm_best.fit()
# print the summary of results
print(results.summary())
# +
# instantiating a model object
lr = LinearRegression()
# fitting to the training data
lr_fit = lr.fit(x_train_ols, y_train_ols)
#predicting on new data
lr_pred = lr_fit.predict(x_test_ols)
# scoring the results
print('OLS Training Score :', lr.score(x_train_ols, y_train_ols).round(4))
print('OLS Testing Score :', lr.score(x_test_ols, y_test_ols).round(4))
# scoring the results
lr_train_score = lr.score(x_train_ols, y_train_ols).round(4)
lr_test_score = lr.score(x_test_ols, y_test_ols).round(4)
# displaying and saving the gap between training and testing score
print('OLS Train_Test Gap:', abs(lr_train_score - lr_test_score).round(4))
lr_test_gap = abs(lr_train_score - lr_test_score).round(4)
# +
# zipping each feature name to its coefficient
lr_model_values = zip(birthweight_data[x_variables_mod].columns,
lr_fit.coef_.round(2))
# setting up a placeholder list to store model features
lr_model_lst = [('intercept', lr_fit.intercept_.round(2))]
# printing out each feature_coefficient pair one by one
for val in lr_model_values:
lr_model_lst.append(val)
# printing the results
for pair in lr_model_lst:
print(pair)
# -
# <h3>Lasso Model</h3>
# +
# instantiating a lasso model
lasso_model = sklearn.linear_model.Lasso(alpha = 1.0,
normalize =True)
# fitting the training model
lasso_fit = lasso_model.fit(x_train_ols, y_train_ols)
# predicting on new data
lasso_pred = lasso_fit.predict(x_test_ols)
# scoring the results
print('Lasso Training Score :', lasso_model.score(x_train_ols, y_train_ols).round(4))
print('Lasso Testing Score :', lasso_model.score(x_test_ols, y_test_ols).round(4))
lasso_train_score = lasso_model.score(x_train_ols, y_train_ols).round(4)
lasso_test_score = lasso_model.score(x_test_ols, y_test_ols).round(4)
# displaying and saving the gap between training and testing score
print('Lasso Train_Test Gap:', abs(lasso_train_score - lasso_test_score).round(4))
lasso_test_gap = abs(lasso_train_score - lasso_test_score).round(4)
# +
# zipping each feature name to its coefficient
lasso_model_values = zip(birthweight.columns,
lasso_fit.coef_.round(2))
# setting up a placeholder list to store model features
lasso_model_lst = [('intercept', lasso_fit.intercept_.round(2))]
# printing out each feature_coefficient pair one by one
for val in lasso_model_values:
lasso_model_lst.append(val)
# printing the results
for pair in lasso_model_lst:
print(pair)
# -
# <h3>ARD Model</h3>
# +
# instantiating an ARD model
ard_model = sklearn.linear_model.ARDRegression()
# fitting the training model
ard_fit = ard_model.fit(x_train_ols, y_train_ols)
# predicting on new data
ard_pred = ard_fit.predict(x_test_ols)
# scoring the results
print('ARD Training Score :', ard_model.score(x_train_ols, y_train_ols).round(4))
print('ARD Testing Score :', ard_model.score(x_test_ols, y_test_ols).round(4))
# scoring the results
ard_train_score = ard_model.score(x_train_ols, y_train_ols).round(4)
ard_test_score = ard_model.score(x_test_ols, y_test_ols).round(4)
# displaying and saving the gap between training and testing score
print('ARD Train_Test Gap:', abs(ard_train_score - ard_test_score).round(4))
ard_test_gap = abs(ard_train_score - ard_test_score).round(4)
# +
# zipping each feature name to its coefficient
ard_model_values = zip(birthweight.columns,
ard_fit.coef_.round(2))
# setting up a placeholder list to store model features
ard_model_lst = [('intercept', ard_fit.intercept_.round(2))]
# printing out each feature_coefficient pair one by one
for val in ard_model_values:
ard_model_lst.append(val)
# printing the results
for pair in ard_model_lst:
print(pair)
# +
# dropping coefficients equal to 0
# rebuild the list instead of removing items while iterating,
# which would skip elements
ard_model_lst = [(feature, coefficient)
                 for feature, coefficient in ard_model_lst
                 if coefficient != 0]
# printing results
for pair in ard_model_lst:
print(pair)
# -
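# Note: removing items from a Python list while iterating over it skips elements, because the iterator's index moves past the shifted entries; a small demonstration of the pitfall and the safe rebuild pattern (toy feature/coefficient pairs):

```python
pairs = [('a', 0), ('b', 0), ('c', 3)]

# buggy: consecutive zero-coefficient entries get skipped — after removing
# ('a', 0), the iterator jumps over ('b', 0)
buggy = list(pairs)
for item in buggy:
    if item[1] == 0:
        buggy.remove(item)

# safe: build a new list instead of mutating the one being iterated
safe = [(name, coef) for name, coef in pairs if coef != 0]
```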
# Scale the data and convert it into a DataFrame
# +
# instantiating a standard scaler object
scaler = StandardScaler()
# fitting the scaler with the birthweight data
scaler.fit(birthweight_data)
# transforming our data after fit
x_scaled = scaler.transform(birthweight_data)
# converting scaled data into a Data Frame
x_scaled_df = pd.DataFrame(x_scaled)
# printing the results
x_scaled_df.describe().round(2)
# +
# adding labels to the scaled DataFrames
x_scaled_df.columns = birthweight_data.columns
# checking the data before and after scaling
print(f"""
Dataset Before Scaling
----------------------
{np.var(birthweight_data)}
Dataset After Scaling
---------------------
{np.var(x_scaled_df)}
""")
# -
# <h3>KNN Model</h3>
# Split the data for training and testing
# +
x_train, x_test, y_train, y_test = train_test_split(
birthweight_data,
birthweight_target,
test_size = 0.25,
random_state = 219)
# +
# instantiating a KNN model
knn_reg =KNeighborsRegressor(algorithm = 'auto',
n_neighbors = 1)
# fitting to the training data
knn_fit = knn_reg.fit(x_train, y_train)
# predicting on new data
knn_reg_pred = knn_fit.predict(x_test)
# scoring the results
print('KNN Training Score :', knn_reg.score(x_train, y_train).round(4))
print('KNN Testing Score :', knn_reg.score(x_test, y_test).round(4))
# scoring the results
knn_train_score = knn_reg.score(x_train, y_train).round(4)
knn_test_score = knn_reg.score(x_test, y_test).round(4)
# displaying and saving the gap between training and testing score
print('KNN Train_Test Gap:', abs(knn_train_score - knn_test_score).round(4))
knn_test_gap = abs(knn_train_score - knn_test_score).round(4)
# -
# result table
print(f"""
Model Train Score Test Score Train-Test Gap
----- ----------- ---------- --------------
ARD {ard_train_score} {ard_test_score} {ard_test_gap}
Lasso* {lasso_train_score} {lasso_test_score} {lasso_test_gap}
OLS {lr_train_score} {lr_test_score} {lr_test_gap}
""")
""" * Lasso is the final model used """
| Ramu_Sneha_Regression_Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import matplotlib.image as implt
from PIL import Image
# +
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
# -
from src.visualization import plot_acc_loss
from src.data import make_dataset_224x224x3, dataset_train, dataset_val
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Dense, Activation, Flatten
from keras.optimizers import Adam
from keras.wrappers.scikit_learn import KerasClassifier
from keras.applications import VGG16
from keras.callbacks import ModelCheckpoint
# +
process_train = '../data/processed/train/'
process_test = '../data/processed/test/'
process_val = '../data/processed/val/'
test_data = '../data/raw/test/'
category_names = ['messy/', 'clean/']
# -
# # VGG 16
#
# This convolutional neural network is based on the VGG16 architecture.
Image.open('../docs/vgg16.png')
make_dataset_224x224x3()
# +
x_train, y_train = dataset_train()
x_val, y_val = dataset_val()
print('train x = ', x_train.shape)
print('train y = ', y_train.shape)
print('val x = ', x_val.shape)
print('val y = ', y_val.shape)
# -
# # Model VGG16
def build_classifier():
model = Sequential()
model.add(Conv2D(64, (3,3), padding='same', activation='relu', input_shape=(x_train.shape[1:]) ) )
model.add(Conv2D(64, (3,3), padding='same', activation='relu'))
model.add(MaxPool2D( pool_size=(2,2), strides=(2,2) ))
model.add(Conv2D(128, (3,3), padding='same', activation='relu'))
model.add(Conv2D(128, (3,3), padding='same', activation='relu'))
model.add(MaxPool2D( pool_size=(2,2), strides=(2,2) ))
model.add(Conv2D(256, (3,3), padding='same', activation='relu'))
model.add(Conv2D(256, (3,3), padding='same', activation='relu'))
model.add(Conv2D(256, (3,3), padding='same', activation='relu'))
model.add(MaxPool2D( pool_size=(2,2), strides=(2,2) ))
model.add(Conv2D(512, (3,3), padding='same', activation='relu'))
model.add(Conv2D(512, (3,3), padding='same', activation='relu'))
model.add(Conv2D(512, (3,3), padding='same', activation='relu'))
model.add(MaxPool2D( pool_size=(2,2), strides=(2,2) ))
model.add(Conv2D(512, (3,3), padding='same', activation='relu'))
model.add(Conv2D(512, (3,3), padding='same', activation='relu'))
model.add(Conv2D(512, (3,3), padding='same', activation='relu'))
model.add(MaxPool2D( pool_size=(2,2), strides=(2,2) ))
model.add(Flatten())
model.add(Dense(4096 , activation="relu"))
model.add(Dense(4096 , activation="relu"))
model.add(Dense(1 , activation="sigmoid"))
return model
model = build_classifier()
# # Summary
model.summary()
optimizer = Adam(lr = 0.0001)
model.compile( optimizer, loss='binary_crossentropy', metrics = ['accuracy'] )
history = model.fit(x_train, y_train, batch_size = 32, epochs = 40, validation_data = (x_val, y_val) )
model.save('../models/model_vgg16')
plot_acc_loss(history)
# # Conclusion
# Training a VGG16 network from scratch requires a very large dataset because of the sheer number of weights, so in this case the best way to train this CNN is to use transfer learning.
def build_vgg16_transfer_learning():
model = Sequential()
# Transfer Learning
model.add(VGG16( include_top = False, weights = 'imagenet',input_shape=(x_train.shape[1:])))
model.add(Flatten())
model.add(Dense(1 ,activation = 'sigmoid'))
return model
model_vgg16 = build_vgg16_transfer_learning()
model_vgg16.summary()
model_vgg16.compile( optimizer, loss='binary_crossentropy', metrics = ['accuracy'] )
mcp_save = ModelCheckpoint('../models/model_vgg16_TL', save_best_only=True, monitor='val_loss', mode='min')
history = model_vgg16.fit(x_train, y_train, batch_size = 32, epochs = 40, validation_data = (x_val, y_val), callbacks=[mcp_save] )
plot_acc_loss(history)
| notebooks/0.3-VGG16.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ###### Validation:
# 1. Generate all keys into excel - summary report (Read from code base - ejs files).
# 2. From figma extract keys ( Add / remove keys in excel based on new designs.(for dynamic keys - communicate with UI/UX) - is export from figma possible??? (value updation, new key,value) )
# 3. Send to meity (Updation, deletion, Addition will be done).
# 4. Read excel from meity and generate json and summary report.
# 5. Update the current html files in case of deletion, addition.
# 6. Keys in other locale files(hi,pa,ta,te) will be automatically updated.
#
# ###### Translation:
# 7. Translation excel files for other locales will be generated.
# 8. Send to SMEs (update translation).
# 9. Read excel from SMEs and generate json and summary report for the respective locales.
# 10. Ingest into the project.
#
# 1. Can string names from the design be exported in Figma?
# 2. Already present keys + new keys that might come up (excel).
# 3. hash in the next phase.
# **Current types of keys:**
# 1. Exact string as keys.
# 2. Keys contain html tags like span, a ,b tags.
# 3. Dynamic key generation ```(eg: ${text} variable which is replaced by string while running gulp and then matched with json) , ${path} variable```
# 4. Static keys. ```(eg: Language: English)```
# 5. special characters in keys. ```(eg: { recordings(s) contributed: '', Transgender - He: '', "(No Username)": "(No Username)",
# "*required": "*required" })```
# 6. Mix of uppercased, lowercased, camelcased keys.
# 7. Duplicate Keys differing just by spaces in between.
# 8. Empty keys.
# 9. Unused keys.
# 10. Keys differing by just one word.
# (
# Eg: {
# "Back to Bolo India Home": "Back to Bolo India Home",
# "Back to Dekho India Home": "Back to Dekho India Home"
# }
# )
import pandas as pd
import openpyxl
import json
import re
import os
import pathlib
from ParseHtmlAndGetKeys import get_keys_with_path
import argparse
from datetime import datetime
def move_column(dataframe, column_name, index):
popped_column = dataframe.pop(column_name)
dataframe.insert(index, column_name, popped_column)
def read_json(json_file_path):
with open(json_file_path) as f:
data = json.load(f)
return data
def get_dict_for_data(key, processed_text, replacement_mapping_dict, key_path_list):
out_dict = {}
out_dict["Key"] = key
out_dict["English copy"] = [processed_text]
out_dict["PATH"] = [key_path_list[key]]
for replacement in sorted (replacement_mapping_dict.keys()):
out_dict[replacement] = [replacement_mapping_dict[replacement]]
return out_dict
def extract_and_replace_tags(text, allowed_replacements):
tag_identification_regex = r"<(\S*?)[^>]*>.*?<\/\1>|<.*?\/>"
out_txt = text
matched_tags = re.finditer(tag_identification_regex, out_txt, re.MULTILINE)
replacement_identifier_index = 0
replacement_mapping_dict = {}
for match in matched_tags:
matched_tag = match.group()
if "<b>" in matched_tag:
continue
elif "<a" in matched_tag:
attributes_part_string = matched_tag[matched_tag.find('<a')+2: matched_tag.find('>')]
replacement_mapping_dict['a-tag-replacement'] = attributes_part_string
matched_tag_replacement = matched_tag.replace(attributes_part_string,"")
out_txt = out_txt.replace(matched_tag, matched_tag_replacement)
else:
replacement = allowed_replacements[replacement_identifier_index]
replacement_mapping_dict[replacement] = matched_tag
replacement_identifier_index+=1
out_txt = out_txt.replace(matched_tag, '<{}>'.format(replacement))
return out_txt , replacement_mapping_dict
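# A quick standalone check of the tag-identification regex above — it matches paired tags via the \1 backreference and self-closing tags via the second alternative (the sample text is illustrative):

```python
import re

# same pattern as in extract_and_replace_tags
tag_identification_regex = r"<(\S*?)[^>]*>.*?<\/\1>|<.*?\/>"
text = 'Hello <span class="x">world</span> and <br/> done'

# paired tag (with attributes) and self-closing tag are both captured
matches = [m.group() for m in re.finditer(tag_identification_regex, text, re.MULTILINE)]
```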
def get_processed_data(json_data, allowed_replacements, key_path_list):
language_df = pd.DataFrame([], columns=[])
for key, value in json_data.items():
processed_text, replacement_mapping_dict = extract_and_replace_tags(value, allowed_replacements)
data_dict = get_dict_for_data(key, processed_text, replacement_mapping_dict, key_path_list)
try:
tmp_df = pd.DataFrame.from_dict(data_dict, orient='columns')
language_df = language_df.append(tmp_df, ignore_index=True)
except Exception as e:
print(e, "\n", data_dict, "\n\n")
return language_df
def get_path(key, keys_with_path_map):
for k, path in keys_with_path_map.items():
if key == k:
return path
return None
def generate_keys(input_json_path, output_excel_path, keys_with_path_map):
allowed_replacements = ["u","v","w","x", "y", "z"]
en_data = read_json(input_json_path)
language_code = 'en'
key_path_list = {key: get_path(key, keys_with_path_map) for key in en_data.keys()}
language_df = get_processed_data(en_data, allowed_replacements, key_path_list)
language_df.to_excel(output_excel_path, index = False, startrow=1)
def export_report(report_json, report_type):
now = datetime.now()
report_json['last_run_timestamp'] = str(now)
os.makedirs('reports',exist_ok=True)
with open('{}/report_{}_{}.json'.format('reports', report_type, now), 'w') as f:
f.write(json.dumps(report_json, indent = 4, ensure_ascii=False))
def generate_report():
report = {}
total_keys = len(read_json(input_json_path).keys())
report['total_keys_in_input_json'] = total_keys
export_report(report, 'excel')
# +
example = '''
Example commands:
python AllKeysExcelGenerator.py -j ./../../../crowdsource-ui/locales/en.json -o ./en/out/en.xlsx
'''
parser = argparse.ArgumentParser(epilog=example,
formatter_class=argparse.RawDescriptionHelpFormatter)
parser.add_argument("-j", "--input-json-path", required=True, help = "Path of json file with en keys present")
parser.add_argument("-o","--output-excel-path", required=True, help = "Output path")
args = parser.parse_args("-j ./../../../crowdsource-ui/locales/en.json -o ./en/out/en.xlsx".split())
input_json_path = args.input_json_path
output_excel_path = args.output_excel_path
if '/' in output_excel_path:
os.makedirs(output_excel_path[:output_excel_path.rindex("/")], exist_ok=True)
keys_with_path_map = get_keys_with_path()
generate_keys(input_json_path, output_excel_path, keys_with_path_map)
generate_report()
| utils/localisation_script/all_keys_generator/AllKeysExcelGenerator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
'''
Using a neural network to learn to speak like the Apollo 11 astronauts.
Based on Keras example: https://github.com/fchollet/keras/blob/master/examples/lstm_text_generation.py
This utilizes an LSTM (Long-Short Term Memory) neural network to learn language.
In the example, 20 epochs are required for a corpus of 600,000 characters.
Apollo 11 mission logs contain 1.5 million characters (1.3 million after stripping out JSON syntax).
These are from the Olipy repository.
https://github.com/leonardr/olipy
It is recommended to run this script on GPU, as recurrent
networks are quite computationally intensive.
If you try this script on new data, make sure your corpus
has at least ~100k characters. ~1M is better.
'''
from __future__ import print_function
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.optimizers import RMSprop
from keras.utils.data_utils import get_file
import numpy as np
import random
import sys
import json
# +
with open('data/apollo_11.txt') as f:
lines = f.readlines()
#tokens = json.loads(f.read())
#print(len(token.keys()))
print(type(lines))
print(lines[0])
# -
| Apollo 11 Neural Network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analysis of COVID-19 in UK, GER, FR, IT, GR and BG using API. Compare the cases and deaths across time and cumulative cases on the last date of the study.
# +
import requests
import json
import pandas as pd
import scipy
import datetime as dt
import matplotlib.pyplot as plt
import seaborn as sns
import altair as alt
import numpy as np
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
from datetime import datetime,timedelta
from sklearn.metrics import mean_squared_error
from scipy.optimize import curve_fit
from scipy.optimize import fsolve
# -
# request COVID-19 API data for all countries and convert the JSON response to a pandas dataframe
payload = {'code': 'ALL'} # If you want to query just Greece, replace this line with {'code': 'Greece'}
URL = 'https://api.statworx.com/covid'
data = requests.post(url=URL, data=json.dumps(payload))
df = pd.DataFrame.from_dict(json.loads(data.text))
print(len(df))
df.head()
# convert date column (str) to datetime
df['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d')
df = df[df['date'] >= pd.to_datetime('2020-03-01') ]
print(len(df))
df.head()
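# The datetime filter above keeps only rows from March 2020 onward; the same comparison can be sketched with stdlib `datetime` (toy rows, not the API data):

```python
from datetime import date

rows = [
    {'date': '2020-02-15', 'cases': 3},
    {'date': '2020-03-01', 'cases': 27},
    {'date': '2020-04-10', 'cases': 90},
]

cutoff = date(2020, 3, 1)
# parse the ISO strings and keep rows on/after the cutoff, as the pandas filter does
kept = [r for r in rows if date.fromisoformat(r['date']) >= cutoff]
```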
df = df.drop(['day', 'month', 'year'], axis=1)
df.head()
df.dtypes
# select 6 countries and reset the index of this dataframe
df = (df.loc[df['country'].isin(['United_Kingdom', 'Germany', 'France', 'Italy','Greece', 'Bulgaria'])]
.reset_index()
.drop(['index'], axis=1)
)
df.head()
df.isnull().values.any()
# convert population to millions (for visualization only)
df['population_millions'] = df['population'] / 1000000
plot_pop=sns.catplot(x='country', y='population_millions', data= df,kind="bar", height=7)
plot_pop.set_axis_labels("Country", "Population (in millions)")
df.describe()
df.groupby('country').mean()
# +
# define a function to plot number of cases and deaths (and their comulatives) over time for each country
def country_plot(y):
my_tooltips = ['country', 'date', 'cases_cum', 'deaths_cum', 'deaths', 'cases']
return alt.Chart(df).mark_line(point=True).encode(
x='date', y = y, color='country',
tooltip = my_tooltips
).properties(
width=1000,
height=300
)
country_plot('cases')
# Looks like there is a data quality issue: why does the UK show -525 (a negative number of) cases on 21-May-2020?
# -
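# One possible cleanup for such negative daily counts (a hypothetical step, not part of the original analysis) is to clip them to zero before plotting:

```python
import pandas as pd

# Hypothetical daily-case series with a negative correction entry,
# mirroring the -525 value observed for the UK on 21-May-2020.
cases = pd.Series([3000, 2500, -525, 2800], name="cases")

# Clip negative values to zero so daily and cumulative plots stay sensible.
cleaned = cases.clip(lower=0)
print(cleaned.tolist())  # [3000, 2500, 0, 2800]
```

# Whether clipping, interpolating, or keeping the corrections is right depends on how the source reported them; clipping is just the simplest option.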
country_plot('deaths')
country_plot('cases_cum')
country_plot('deaths_cum')
df_last_date = df.groupby('country').max()
df_last_date
# percentage of cumulative cases and cumulative deaths relative to the population
df_last_date.loc[:,'cases_cum_percentage'] = df_last_date['cases_cum']/df_last_date['population'] * 100
df_last_date.loc[:,'deaths_cum_percentage'] = df_last_date['deaths_cum']/df_last_date['population'] * 100
# percentage of cumulative deaths relative to the cumulative cases
df_last_date.loc[:,'deaths_cum_percentage_cases'] = df_last_date['deaths_cum']/df_last_date['cases_cum'] * 100
df_last_date
# +
my_tooltips = ['country','cases_cum_percentage', 'deaths_cum_percentage', 'deaths_cum_percentage_cases' ]
alt.Chart(df_last_date.reset_index()).transform_fold(
fold=['cases_cum_percentage','deaths_cum_percentage']).mark_bar().encode(
y='country:N',
x='value:Q',
color='key:N',
tooltip = my_tooltips).properties(
width=1000,
height=300
)
# -
alt.Chart(df_last_date.reset_index()).mark_bar().encode(
y='country:N',
x='deaths_cum_percentage_cases:Q',
tooltip = my_tooltips).properties(
width=1000,
height=300
)
| COVID19.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: speech
# language: python
# name: speech
# ---
from text import text_to_sequence
import eng_to_ipa as ipa
text = "banana asldkf apple"
phonemes = ipa.convert(text)
print(phonemes)
[c for c in phonemes]
text_to_sequence(phonemes, ['english_cleaners'])
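# `text_to_sequence` ultimately maps each cleaned phoneme character to an integer id. A minimal sketch of that idea with a toy symbol table (the real `text` module uses its own, larger symbol set):

```python
# Toy symbol table; the symbols and ids here are illustrative assumptions.
symbols = ['_', ' ', 'æ', 'b', 'n', 'ə', 'ʌ']
symbol_to_id = {s: i for i, s in enumerate(symbols)}

def to_sequence(phonemes):
    # Map each known phoneme character to its integer id, skipping unknowns.
    return [symbol_to_id[c] for c in phonemes if c in symbol_to_id]

print(to_sequence('bənænə'))  # [3, 5, 4, 2, 4, 5]
```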
| testing_data_preprocessing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="SJrk6FVJklcr" outputId="6b387cca-ea63-468e-e4b6-91f671175835"
# !git clone https://github.com/mozilla/DeepSpeech.git
# + id="c9ShEM-wLl9Q" colab={"base_uri": "https://localhost:8080/"} outputId="e394c842-7a17-46e6-d02c-57e0267cf12c"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="jvWyROKI30Eo" outputId="d44fe81e-ab73-4906-ce6e-936eb4233974"
# !pip install --upgrade tensorflow
# + colab={"base_uri": "https://localhost:8080/"} id="L-k8M6flcAo3" outputId="5aed3443-3bfd-458d-a28d-ca61b0d17162"
# !nvidia-smi
# + colab={"base_uri": "https://localhost:8080/"} id="JAwbQAfbdMUt" outputId="8540ccfd-3f1b-4e37-8c9d-c4388adbe72a"
# %cd DeepSpeech
# !pip3 install --upgrade -e .
# + colab={"base_uri": "https://localhost:8080/"} id="C3i9nnM8hxEt" outputId="c72f62a0-6590-4c30-bab1-06b6045d8a90"
# !sudo apt-get install python3-dev
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="KXzcOG6Fh0X4" outputId="cfdc44f4-db8f-4aa1-fda1-09fbd6863ab1"
# !pip3 uninstall tensorflow
# !pip3 install 'tensorflow-gpu==1.15.4'
# + colab={"base_uri": "https://localhost:8080/"} id="X4uSN9sWx3hY" outputId="ebaef399-2bc3-45a9-a4f1-7048e49f02e2"
# !pip install deepspeech
# + colab={"base_uri": "https://localhost:8080/"} id="TtYiRsH1iNi3" outputId="62b2b5cb-e0f1-433d-bbe5-7f566412dc27"
# %ls
# + colab={"base_uri": "https://localhost:8080/", "height": 215} id="j7TWVuGVNI9F" outputId="b21f752a-b46d-44d1-a6c3-cb9d7f962db8"
# !python3 generate_lm.py --input_txt ../../../drive/MyDrive/deep/bangla.txt --output_dir ../../../drive/MyDrive/deep/scorer --top_k 500000 --kenlm_bins ../../kenlm/build/bin/ --arpa_order 5 --max_arpa_memory "85%" --arpa_prune "0|0|1" --binary_a_bits 255 --binary_q_bits 8 --binary_type trie
# + colab={"base_uri": "https://localhost:8080/"} id="5_WGku0o_Rjg" outputId="5327831f-4c42-44aa-8ae9-26eda26ca4a4"
# !./generate_scorer_package \
# --alphabet ../../../drive/MyDrive/deep/alphabet.txt \
# --lm ../../../drive/MyDrive/deep/scorer/lm.binary \
# --vocab ../../../drive/MyDrive/deep/scorer/vocab-500000.txt \
# --package bd.scorer \
# --default_alpha 0.931289039105002 \
# --default_beta 1.1834137581510284 \
# --force_bytes_output_mode True
# + id="LqHOF_4iuFdv" colab={"base_uri": "https://localhost:8080/"} outputId="1d572346-5432-4712-88c7-42f3dfc2d0e2"
# !python3 DeepSpeech.py \
# --alphabet_config_path ../drive/MyDrive/Data1/alphabet.txt \
# --train_files ../drive/MyDrive/Data1/train/train.csv \
# --dev_files ../drive/MyDrive/Data1/dev/dev.csv \
# --test_files ../drive/MyDrive/Data1/test/test.csv \
# --checkpoint_dir ../drive/Shareddrives/SynthiaSoft.com-Unlimited-03/Data_for_Atik/train512/checkpoints_demo \
# --export_dir ../drive/MyDrive/Data1/exported-model \
# --load_checkpoint_dir ../drive/Shareddrives/SynthiaSoft.com-Unlimited-03/Data_for_Atik/train512/checkpoints_demo \
# --save_checkpoint_dir ../drive/Shareddrives/SynthiaSoft.com-Unlimited-03/Data_for_Atik/train512/checkpoints_demo \
# --checkpoint_secs 1800 \
# --max_to_keep 2 \
# --n_hidden 64 \
# --early_stop true \
# --es_epochs 10 \
# --use_allow_growth true \
#
# + colab={"base_uri": "https://localhost:8080/"} id="dOymLETvtlNN" outputId="543e0c55-4556-43d3-c41e-3a551fe3f209"
# !apt -qq install -y sox
# + id="Vod55GZ42ix-" colab={"base_uri": "https://localhost:8080/"} outputId="fdfddf81-ee17-41b4-bb80-0097baa58570"
# !deepspeech --model ../drive/Shareddrives/SynthiaSoft.com-Unlimited-03/Data_for_Atik/Data_for_Atik_4/exported_model/output_graph.pb --audio ../drive/Shareddrives/SynthiaSoft.com-Unlimited-03/195590bd5b.wav
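# Evaluating a transcription like the one above usually relies on word error rate (WER). A self-contained sketch of WER via word-level edit distance (illustrative, not DeepSpeech's own implementation):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)

print(round(wer("the cat sat", "the cat sat"), 3))  # 0.0
print(round(wer("the cat sat", "the bat sat"), 3))  # 0.333 (one substitution)
```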
| deep.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# argv:
# - C:/Users/<NAME>/Anaconda3\python.exe
# - -m
# - ipykernel_launcher
# - -f
# - '{connection_file}'
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nteract={"transient": {"deleting": false}}
# # K Means Clustering Part 2
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
import pandas as pd
import numpy as np
import pylab as pl
import datetime as dt
from math import sqrt
import warnings
warnings.filterwarnings("ignore")
# yahoo finance used to fetch data
import yfinance as yf
yf.pdr_override()
from sklearn.cluster import KMeans
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
# yahoo_fin provides the list of Dow tickers via its stock_info submodule
from yahoo_fin import stock_info as si
stocks = si.tickers_dow()
stocks
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
start = dt.datetime(2020, 1, 1)
now = dt.datetime.now()
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
df = yf.download(stocks, start, now)['Adj Close']
df.head()
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
returns = df.pct_change().mean() * 252
variance = df.pct_change().std() * sqrt(252)
returns.name = "Returns"
variance.name = "Variance"
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
ret_var = pd.concat([returns, variance], axis = 1).dropna()
ret_var.columns = ["Returns", "Variance"]
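# The annualization above scales the mean daily return linearly by 252 trading days and the daily standard deviation by sqrt(252). A quick self-contained check on hypothetical daily returns (the numbers are illustrative, not market data):

```python
from math import sqrt

daily_returns = [0.001, -0.002, 0.0015, 0.0005]  # hypothetical daily pct changes

mean_daily = sum(daily_returns) / len(daily_returns)
annual_return = mean_daily * 252          # the mean scales linearly with time

# Sample variance, then annualized volatility: std scales with sqrt(time).
var_daily = sum((r - mean_daily) ** 2 for r in daily_returns) / (len(daily_returns) - 1)
annual_vol = sqrt(var_daily) * sqrt(252)

print(round(annual_return, 4))  # 0.063
print(round(annual_vol, 4))
```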
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
X = ret_var.values
sse = []
for k in range(2,15):
kmeans = KMeans(n_clusters = k)
kmeans.fit(X)
sse.append(kmeans.inertia_) #SSE for each n_clusters
pl.plot(range(2,15), sse)
pl.title("Elbow Curve")
pl.show()
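# The `inertia_` value plotted for the elbow curve is just the sum of squared distances from each point to its nearest cluster centre. A minimal hand-computed sketch of that quantity (hypothetical points and fixed centroids, not fitted by KMeans):

```python
def inertia(points, centroids):
    # Sum of squared Euclidean distances to the nearest centroid.
    total = 0.0
    for p in points:
        total += min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids)
    return total

points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0)]
one_centre = [(0.0, 0.5)]
two_centres = [(0.0, 0.5), (10.0, 10.0)]

print(inertia(points, one_centre))   # 190.75: the far point dominates
print(inertia(points, two_centres))  # 0.5: only within-cluster spread remains
```

# Adding centres always lowers inertia, which is why the elbow (the point of diminishing returns) rather than the minimum is used to pick k.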
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
kmeans = KMeans(n_clusters = 5).fit(X)
centroids = kmeans.cluster_centers_
pl.scatter(X[:,0],X[:,1], c = kmeans.labels_, cmap ="rainbow")
pl.show()
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
X = ret_var.values
kmeans =KMeans(n_clusters = 5).fit(X)
centroids = kmeans.cluster_centers_
pl.scatter(X[:,0],X[:,1], c = kmeans.labels_, cmap ="rainbow")
pl.show()
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
Companies = pd.DataFrame(ret_var.index)
cluster_labels = pd.DataFrame(kmeans.labels_)
df = pd.concat([Companies, cluster_labels],axis = 1)
df.columns = ['Stock', 'Cluster Labels']
df.set_index('Stock')
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
df
| Stock_Algorithms/K_Means_Clustering_Part2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
sys.path.insert(0, '../scripts/')
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from ruler.measures.cwl_rbp import RBPCWLMetric
from ruler.measures.cwl_inst import INSTCWLMetric
from ruler.measures.cwl_bpm import BPMDCWLMetric, BPMCWLMetric
from ruler.measures.cwl_rr import RRCWLMetric
from ruler.measures.cwl_ap import APCWLMetric
from ruler.measures.cwl_tbg import TBGCWLMetric
from ruler.measures.cwl_precision import PrecisionCWLMetric
from ruler.cwl_ruler import Ranking
def plot_three_cwl_measures(a, b, c=None):
n=30
plt.figure(figsize=(15,5))
plt.subplots_adjust(hspace=0.2)
legend = [a.name(), b.name()]
if c:
legend.append(c.name())
print(c.ranking.topic_id)
ax = plt.subplot(131)
plt.title('Continue')
plt.plot(range(1,n+1), a.c_vector(a.ranking)[0:n])
plt.plot(range(1,n+1), b.c_vector(b.ranking)[0:n])
if c:
plt.plot(range(1,n+1), c.c_vector(c.ranking)[0:n])
plt.grid(True)
ax.set_ylim(ymin=-0.01, ymax=1.01)
ax = plt.subplot(132)
plt.title('Weight')
plt.plot(range(1,n+1), a.w_vector(a.ranking)[0:n])
plt.plot(range(1,n+1), b.w_vector(b.ranking)[0:n])
if c:
plt.plot(range(1,n+1), c.w_vector(c.ranking)[0:n])
plt.grid(True)
ax.set_ylim(ymin=-0.01, ymax=1.01)
ax = plt.subplot(133)
plt.title('Last')
plt.plot(range(1,n+1),a.l_vector(a.ranking)[0:n])
plt.plot(range(1,n+1),b.l_vector(b.ranking)[0:n])
if c:
plt.plot(range(1,n+1), c.l_vector(c.ranking)[0:n])
plt.grid(True)
ax.set_ylim(ymin=-0.01, ymax=1.01)
if legend:
plt.legend(legend)
# -
g1 = [ 1,0,1,1,0,0,1,0,1,0]
c1 = [ 12,13,2,3,4,5,6,33,1,1]
t1 = Ranking("T1",g1,c1)
rbp = RBPCWLMetric(theta=0.9)
rbp.measure(t1)
rbp.report()
print(rbp.c_vector(t1)[0:10])
print(rbp.w_vector(t1)[0:10])
print(rbp.l_vector(t1)[0:10])
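# RBP's weight vector follows a geometric decay, w_i = (1 - theta) * theta**(i - 1), which sums to 1 over an infinite ranking. A quick sketch of that property, independent of the cwl implementation:

```python
theta = 0.9

# Geometric RBP weights for the first n ranks.
n = 1000
weights = [(1 - theta) * theta ** (i - 1) for i in range(1, n + 1)]

print(round(weights[0], 3))    # 0.1 -- weight on the first rank
print(round(sum(weights), 3))  # 1.0 -- the geometric series sums to one
```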
inst = INSTCWLMetric(T=1)
inst.measure(t1)
inst.report()
print(inst.c_vector(t1)[0:10])
print(inst.w_vector(t1)[0:10])
print(inst.l_vector(t1)[0:10])
bpm = BPMCWLMetric(T=2,K=10)
bpm.measure(t1)
bpm.report()
plot_three_cwl_measures(rbp,inst,bpm)
# +
g1 = [ 1,1,0,0,0,0,1,0,1,0]
c1 = [ 1,1,1,1,1,1,1,1,1,1]
t2 = Ranking("T2",g1,c1)
bpm38 = BPMDCWLMetric(T=3, K=8, hc=1.0, hb=1.0)
bpm33 = BPMDCWLMetric(T=5, K=4, hc=1.0, hb=1.0)
bpm38.measure(t2)
bpm38.report()
bpm33.measure(t2)
bpm33.report()
bpm.measure(t2)
bpm.report()
# -
print(bpm.l_vector(t2)[0:10])
print(bpm33.l_vector(t2)[0:10])
print(bpm38.l_vector(t2)[0:10])
plot_three_cwl_measures( bpm, bpm33, bpm38)
rr = RRCWLMetric()
rr.measure(t1)
rr.report()
ap = APCWLMetric()
ap.measure(t1)
ap.report()
tbg = TBGCWLMetric(halflife=20)
tbg.measure(t1)
tbg.report()
plot_three_cwl_measures( ap, rr, tbg)
| examples/CWL-Weights-Notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.10 64-bit
# name: python3
# ---
# +
#Use PCA to keep only the 15 principal components
import pandas as pd
pd.set_option('display.max_columns', 81)
arquivo = pd.read_csv('/home/stain/Documents/MachineLearning/Datasets/BrazilianCities/BRAZIL_CITIES.csv', sep=';',decimal=',')
arquivo.head()
# -
arquivo.drop(['CITY', 'IDHM Ranking 2010', 'IDHM_Renda', 'IDHM_Longevidade', 'IDHM_Educacao', 'LONG', 'LAT', 'GVA_MAIN', 'REGIAO_TUR', 'MUN_EXPENDIT', 'HOTELS', 'BEDS', 'Pr_Agencies', 'Pu_Agencies', 'Pr_Bank', 'Pu_Bank', 'Pr_Assets', 'Pu_Assets','UBER','MAC','WAL-MART'], axis=1, inplace=True)
arquivo['AREA']= arquivo['AREA'].str.replace(',', '')
arquivo['AREA'] = arquivo['AREA'].astype(float)
arquivo.dtypes
# +
state_encode = pd.get_dummies(arquivo['STATE'])
rural_encode = pd.get_dummies(arquivo['RURAL_URBAN'])
categoria_encode = pd.get_dummies(arquivo['CATEGORIA_TUR'])
arquivo.drop(['STATE','RURAL_URBAN', 'CATEGORIA_TUR'], axis = 1, inplace = True)
concatenado = pd.concat([arquivo, state_encode, rural_encode, categoria_encode], axis = 1)
concatenado.head()
# -
faltantes = concatenado.isnull().sum()
print((faltantes / len(concatenado)) * 100)
novoArq = concatenado.dropna()
y = novoArq['IDHM']
x = novoArq.drop('IDHM', axis = 1)
# +
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
normalizador = MinMaxScaler(feature_range=(0,1))
x_norm = normalizador.fit_transform(x)
pca = PCA(n_components = 15)
x_pca = pca.fit_transform(x_norm)
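# Whether keeping 15 components is enough can be checked via the explained-variance ratio. A minimal sketch computing that ratio directly from the SVD on synthetic data (the array `x` here is illustrative, not the cities dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 5))
x[:, 1] = 3 * x[:, 0]  # make one column redundant

xc = x - x.mean(axis=0)              # centre the data, as PCA does
s = np.linalg.svd(xc, compute_uv=False)
explained = s ** 2 / np.sum(s ** 2)  # variance ratio per component

print(explained.round(3))                    # sorted, first components dominate
print(round(float(explained[:4].sum()), 3))  # ~1.0: 4 components suffice here
```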
# +
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score
# Define the number of trees
modelo = RandomForestRegressor(n_estimators = 100, n_jobs = -1)
kfold = KFold(n_splits=3)
resultado = cross_val_score(modelo, x_pca, y, cv = kfold, n_jobs = -1)
print(resultado.mean())
# +
import numpy as np
from sklearn.model_selection import GridSearchCV
min_split = np.array([2,4,8])
max_nivel = np.array([3,6,10,14,16])
min_leaf = np.array([1,3,4,6])
valores_gird = {'min_samples_split': min_split, 'max_depth': max_nivel, 'min_samples_leaf': min_leaf}
modelo = RandomForestRegressor(n_estimators = 100, n_jobs =-1)
gridRandomForest = GridSearchCV(estimator = modelo, param_grid=valores_gird, cv = 3, n_jobs =-1)
gridRandomForest.fit(x_pca, y)
print('Minimum split', gridRandomForest.best_estimator_.min_samples_split)
print('Maximum depth', gridRandomForest.best_estimator_.max_depth)
print('Minimum leaf', gridRandomForest.best_estimator_.min_samples_leaf)
print('R2', gridRandomForest.best_score_)
| RandomForest/RandomForestExerc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Simulating PSF variation within JWST instruments
#
# WebbPSF version 0.6 added models for science instrument contributions to overall wavefront error based on real ground testing data (rather than statistical realizations of the requirements). These are enabled by default since version 0.6, to provide the most realistic PSF by default. This notebook demonstrates how to effectively use the SI WFE models to examine PSF variation based on field position and wavelength.
#
# First, import the necessary packages:
# +
# %matplotlib inline
from __future__ import print_function, division
import matplotlib
matplotlib.rc('image', interpolation='nearest')
import matplotlib.pyplot as plt
import webbpsf
print("Tested with WebbPSF 0.6.0, currently running on WebbPSF", webbpsf.version.version)
# -
# ### Disclaimers
#
# * The per-instrument wavefront error models still require validation and sign-offs from the SI teams.
# * Wavefront error maps were computed at specific field points; WebbPSF currently chooses the map for the nearest field point rather than interpolating across the SI field of view.
# * These models were derived from CV3 ground testing and will evolve as better information becomes available.
#
# **Report any bugs you encounter on our [Issue Tracker](https://github.com/spacetelescope/webbpsf/issues/new).**
# ## What does this do and how does it work?
#
# The goal is to put measured wavefront error maps from ground testing of the JWST instruments into the WebbPSF models.
#
# These SI WFE models use Zernike polynomial coefficients to build up an OPD map for the wavefront error at specific field points. The coefficients were derived from fitting Zernike polynomials to the WFE maps for various field points in the instrument fields of view. Using Zernike polynomial terms up to $Z_{36}$ was found to give residuals under 10 nanometers RMS when fitting the wavefront maps (as determined by wavefront sensing) from the <NAME> and the JWST ISIM Optics Group at Goddard Space Flight Center.
#
# The OPD map is generated on the fly by WebbPSF and inserted as a virtual optical element in the JWST instrument model.
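# For concreteness, the low-order Zernike terms mentioned above can be evaluated directly. A small sketch using the standard Noll-indexed polynomial forms (independent of WebbPSF's internal implementation):

```python
from math import sqrt, cos, sin

def zernike_noll(j, rho, theta=0.0):
    """A few low-order Zernike polynomials in the Noll indexing convention."""
    if j == 1:
        return 1.0                           # piston
    if j == 2:
        return 2 * rho * cos(theta)          # tip
    if j == 3:
        return 2 * rho * sin(theta)          # tilt
    if j == 4:
        return sqrt(3) * (2 * rho ** 2 - 1)  # defocus
    raise ValueError("term not implemented in this sketch")

print(round(zernike_noll(4, 0.0), 4))  # -1.7321: defocus at the pupil centre
print(round(zernike_noll(4, 1.0), 4))  # 1.7321: defocus at the pupil edge
```

# A full OPD map is then a weighted sum of such terms over the pupil, with the coefficients looked up per field point.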
# ## Example: Compare the MIRI PSF at two field positions
# Let's create an instance of the MIRI instrument model and display the planes in its optical train.
miri = webbpsf.MIRI()
plt.figure(figsize=(7, 4))
miri.display()
# The contributions to wavefront error can be separated into:
#
# 1. OTE wavefront error (due to variations in the telescope optics common to all instruments)
# 2. static SI wavefront error (due to variations in the specific science instrument's optics)
# 3. field dependent SI wavefront error (due to variation in the PSF based on the position of the source within the field of view)
#
# The first two components are combined in the `pupilopd` map shown in the top right panel. The last component of the model is generated at runtime from a Zernike coefficient lookup table. The `include_si_wfe` attribute switches this last component of the model on and off.
#
# By default (since WebbPSF 0.6), it is enabled:
miri.include_si_wfe
# Turning off the OTE and static SI WFE makes it easier to see the small differences from the instrument internal WFE. That is accomplished by setting the pupil OPD map to `None`.
miri.pupilopd = None
# Now we ask WebbPSF to display the instrument optical train again:
plt.figure(figsize=(7, 4))
miri.display()
# The plane for the internal WFE contribution is visible in the row labeled "MIRI internal WFE near ISIM13". ISIM13 is the designation of the field point from which the wavefront map was taken.
#
# *Note: The circle on which the Zernike terms are computed appears undersized because MIRI includes a 4% oversized pupil stop, but WebbPSF does not attempt to model wavefront errors beyond the extent of the OTE pupil.*
#
# The SI wavefront error model has been sampled at various field points. WebbPSF accepts a detector pixel position, from which it determines the wavefront error map at the nearest sampled point. (WebbPSF does not currently interpolate between points.)
#
# The default is the center of the detector:
miri.detector_position
# Let's compute a PSF here and call it `miri_psf_center`:
plt.figure(figsize=(7, 6.5))
miri_psf_center = miri.calc_psf(monochromatic=10e-6, display=True)
# Now, let's look at the SI wavefront errors from a different position:
miri.detector_position = (10, 10)
plt.figure(figsize=(7, 4))
miri.display()
# A few things have changed compared to the last plot of the optical train. The wavefront error map has changed, as expected with a change in field position. There is also a change to the transmission of the oversized MIRI pupil mask. This is a field-dependent obscuration from the MIRI internal calibration source pickoff mirror.
#
# Now, let's compute a PSF at the new position (call it `miri_psf_corner`), and perform the comparison.
plt.figure(figsize=(7, 6.5))
miri_psf_corner = miri.calc_psf(monochromatic=10e-6, display=True)
# ### Compare MIRI ideal to realistic PSF
#
# Let's compare some of the output PSFs. The differences are pretty minor in this case.
fig, (ax_ideal, ax_realistic, ax_diff) = plt.subplots(1, 3,
figsize=(15, 3.5))
webbpsf.display_psf(miri_psf_center, ext='DET_SAMP',
title='MIRI Center @ 10 $\mu$m',
ax=ax_ideal, imagecrop=5)
webbpsf.display_psf(miri_psf_corner, ext='DET_SAMP',
title='MIRI Corner @ 10 $\mu$m',
ax=ax_realistic, imagecrop=5)
webbpsf.display_psf_difference(miri_psf_center, miri_psf_corner,
ext1='DET_SAMP', ext2='DET_SAMP',
title='Center minus Corner',
ax=ax_diff, cmap='RdBu_r',
imagecrop=5)
| notebooks/Simulating PSF variation within JWST instruments.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <center/>Converting Datasets to the MindSpore Data Format
# ## Overview
#
# Users can convert non-standard datasets to the MindSpore data format, MindRecord, so that they can be conveniently loaded into MindSpore for training. MindSpore has also optimized performance for some scenarios, so using the MindRecord data format can give a better performance experience.
#
# The MindSpore data format has the following characteristics:
# - Unified storage and access for varied user data, making training data simpler to read.
# - Aggregated data storage and efficient reading, with easy management and migration.
# - Efficient data encoding and decoding, transparent to the user.
# - Flexible control over partition size, enabling distributed training.
# ## Overall Workflow
#
# 1. Preparation.
# 2. Convert the MNIST dataset to the MindSpore data format.
# 3. Convert a CSV dataset to the MindSpore data format.
# 4. Convert the CIFAR-10 dataset to the MindSpore data format.
# 5. Convert the CIFAR-100 dataset to the MindSpore data format.
# 6. Convert the ImageNet dataset to the MindSpore data format.
# 7. Generate the MindSpore data format from user-defined data.
# ## Preparation
# ### Importing the Module
#
# This module provides APIs for loading and processing datasets.
import mindspore.dataset as ds
# ### Creating Directories
# - Create a dataset directory under the Jupyter working directory; all raw datasets for this walkthrough are placed there.
# - Create a transform directory under the Jupyter working directory; all converted datasets are placed there.
# ## Converting the MNIST Dataset to the MindSpore Data Format
# ### Downloading the MNIST dataset
# - Training set:
# > http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
# > http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
# - Test set:
# > http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
# > http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
# - Place the downloaded dataset under `jupyter working directory/dataset/MnistData/`, as shown below:
# dataset/MnistData/
# ├── t10k-images-idx3-ubyte.gz
# ├── t10k-labels-idx1-ubyte.gz
# ├── train-images-idx3-ubyte.gz
# └── train-labels-idx1-ubyte.gz
# ### Converting the MNIST dataset
# The `MnistToMR` class converts the MNIST dataset to the MindSpore data format. Its parameters are:
# - `source` - the directory containing t10k-images-idx3-ubyte.gz, train-images-idx3-ubyte.gz, t10k-labels-idx1-ubyte.gz and train-labels-idx1-ubyte.gz; passed here via the variable `mnist_path`.
# - `destination` - the directory for the converted MindSpore data format files; passed here via the variable `mnist_mindrecord_path`.
# - `partition_number` - the partition size, 1 by default; the default is used here.
# +
from mindspore.mindrecord import MnistToMR
mnist_path = './dataset/MnistData'
mnist_mindrecord_path = './transform/mnist.record'
mnist_transformer = MnistToMR(mnist_path,mnist_mindrecord_path)
# executes transformation from Mnist to MindRecord
mnist_transformer.transform()
# -
# Because the MNIST dataset contains both a training set and a test set, MindSpore data format files are generated for each. Files ending in .db store metadata describing the MindSpore data format files; be sure not to delete them. The generated files are shown below:
# transform/mnist.record_test.mindrecord
# transform/mnist.record_test.mindrecord.db
# transform/mnist.record_train.mindrecord
# transform/mnist.record_train.mindrecord.db
# The following code first loads the MindSpore-format dataset (only the training set is loaded here; the test set is loaded the same way), then creates a dictionary iterator over the data and reads one record through the iterator.
file_name = './transform/mnist.record_train.mindrecord'
# create MindDataset for reading data
mnist_data_set = ds.MindDataset(dataset_file=file_name)
# create a dictionary iterator and read a data record through the iterator
print(next(mnist_data_set.create_dict_iterator()))
# ## Converting a CSV Dataset to the MindSpore Data Format
# ### Downloading the CSV dataset
# - The data for this example is at https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/convert_dataset_to_mindspore_data_format/csv_data/data.csv
# ; download the file from that path and save it under `jupyter working directory/dataset/`, as shown below:
# dataset/data.csv
# ### Converting the CSV dataset
# The `CsvToMR` class converts a CSV dataset to the MindSpore data format. Its parameters are:
# - `source` - the path of the CSV file; passed here via the variable `csv_path`.
# - `destination` - the directory for the converted MindSpore data format files; passed here via the variable `csv_mindrecord_path`.
# - `columns_list` - the list of columns to read, None by default; the default is used here.
# - `partition_number` - the partition size, 1 by default; the default is used here.
# +
from mindspore.mindrecord import CsvToMR
from mindspore.mindrecord import FileReader
csv_path = './dataset/data.csv'
csv_mindrecord_path = './transform/csv.mindrecord'
csv_transformer = CsvToMR(csv_path,csv_mindrecord_path)
# executes transformation from Csv to MindRecord
csv_transformer.transform()
# -
# The generated files are shown below:
# transform/csv.mindrecord
# transform/csv.mindrecord.db
# The following code first loads the MindSpore-format dataset, then creates a dictionary iterator over the data and reads one record through the iterator.
# create MindDataset for reading data
csv_data_set = ds.MindDataset(dataset_file=csv_mindrecord_path)
# create a dictionary iterator and read a data record through the iterator
print(next(csv_data_set.create_dict_iterator(output_numpy=True)))
# ## Converting the CIFAR-10 Dataset to the MindSpore Data Format
# ### Downloading the CIFAR-10 dataset
# - CIFAR-10 dataset:
# > https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
# - Place the downloaded CIFAR-10 dataset under `jupyter working directory/dataset/Cifar10Data/`, as shown below:
# dataset/Cifar10Data/
# ├── batches.meta
# ├── data_batch_1
# ├── data_batch_2
# ├── data_batch_3
# ├── data_batch_4
# ├── data_batch_5
# ├── readme.html
# └── test_batch
# ### Converting the CIFAR-10 dataset
# The `Cifar10ToMR` class converts the CIFAR-10 dataset to the MindSpore data format. Its parameters are:
# - `source` - the directory holding the CIFAR-10 dataset; passed here via the variable `cifar10_path`.
# - `destination` - the directory for the converted MindSpore data format files; passed here via the variable `cifar10_mindrecord_path`.
# +
from mindspore.mindrecord import Cifar10ToMR
cifar10_path = './dataset/Cifar10Data/'
cifar10_mindrecord_path = './transform/cifar10.record'
cifar10_transformer = Cifar10ToMR(cifar10_path,cifar10_mindrecord_path)
# executes transformation from Cifar10 to MindRecord
cifar10_transformer.transform(['label'])
# -
# Because the CIFAR-10 dataset contains both a training set and a test set, MindSpore data format files are generated for each. The generated files are shown below:
# transform/cifar10.record
# transform/cifar10.record.db
# transform/cifar10.record_test
# transform/cifar10.record_test.db
# The following code first loads the MindSpore-format dataset (only the training set is loaded here; the test set is loaded the same way), then creates a dictionary iterator over the data and reads one record through the iterator.
# create MindDataset for reading data
cifar10_data_set = ds.MindDataset(dataset_file=cifar10_mindrecord_path)
# create a dictionary iterator and read a data record through the iterator
print(next(cifar10_data_set.create_dict_iterator()))
# ## Converting the CIFAR-100 Dataset to the MindSpore Data Format
# ### Downloading the CIFAR-100 dataset
# - CIFAR-100 dataset:
# > https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz
# - Place the downloaded CIFAR-100 dataset under `jupyter working directory/dataset/Cifar100Data/`, as shown below:
# dataset/Cifar100Data/
# ├── file.txt~
# ├── meta
# ├── test
# └── train
# ### Converting the CIFAR-100 dataset
# The `Cifar100ToMR` class converts the CIFAR-100 dataset to the MindSpore data format. Its parameters are:
# - `source` - the directory holding the CIFAR-100 dataset; passed here via the variable `cifar100_path`.
# - `destination` - the directory for the converted MindSpore data format files; passed here via the variable `cifar100_mindrecord_path`.
# +
from mindspore.mindrecord import Cifar100ToMR
cifar100_path = './dataset/Cifar100Data/'
cifar100_mindrecord_path = './transform/cifar100.record'
cifar100_transformer = Cifar100ToMR(cifar100_path,cifar100_mindrecord_path)
#executes transformation from Cifar100 to MindRecord
cifar100_transformer.transform(['fine_label','coarse_label'])
# -
# Because the CIFAR-100 dataset contains both a training set and a test set, MindSpore data format files are generated for each. The generated files are shown below:
# transform/cifar100.record
# transform/cifar100.record.db
# transform/cifar100.record_test
# transform/cifar100.record_test.db
# The following code first loads the MindSpore-format dataset (only the training set is loaded here; the test set is loaded the same way), then creates a dictionary iterator over the data and reads one record through the iterator.
# create MindDataset for reading data
cifar100_data_set = ds.MindDataset(dataset_file=cifar100_mindrecord_path)
# create a dictionary iterator and read a data record through the iterator
print(next(cifar100_data_set.create_dict_iterator()))
# ## Converting the ImageNet Dataset to the MindSpore Data Format
# ### Downloading the ImageNet dataset
# - ImageNet dataset:
# > http://image-net.org/download
# - Place the downloaded dataset under `jupyter working directory/dataset/ImageNetData/`, as shown below:
# dataset/ImageNetData/
# ├── bounding_boxes
# ├── imagenet_map.txt
# ├── train
# └── validation
# ### Converting the ImageNet dataset
# The `ImageNetToMR` class converts the ImageNet dataset to the MindSpore data format. Its parameters are:
# - `map_file` - the map file, which should list the labels, with contents like:
#
# n02119789 0
# n02100735 1
# n02110185 2
# n02096294 3
# It is passed here via the variable `imagenet_map_path`.
#
#
# - `image_dir` - the image directory, which should contain the n02119789, n02100735, n02110185 and n02096294 subdirectories; passed here via the variable `imagenet_image_dir`.
# - `destination` - the path for the converted MindSpore data format files; passed here via the variable `imagenet_mindrecord_path`.
# - `partition_number` - the partition size; set to 4 here, meaning four MindSpore-format files are generated, and passed via the variable `partition_number`.
# +
from mindspore.mindrecord import ImageNetToMR
imagenet_map_path = './dataset/ImageNetData/imagenet_map.txt'
imagenet_image_dir = './dataset/ImageNetData/train'
imagenet_mindrecord_path = './transform/imagenet.record'
partition_number = 4
imagenet_transformer = ImageNetToMR(imagenet_map_path,imagenet_image_dir,imagenet_mindrecord_path,partition_number)
#executes transformation from ImageNet to MindRecord
imagenet_transformer.transform()
# -
# Because a partition size of 4 was specified, four MindSpore data format files are generated. The dataset is fairly large, so the conversion takes a while; please be patient. The generated files are shown below:
# transform/imagenet.record0
# transform/imagenet.record0.db
# transform/imagenet.record1
# transform/imagenet.record1.db
# transform/imagenet.record2
# transform/imagenet.record2.db
# transform/imagenet.record3
# transform/imagenet.record3.db
# The following code first loads the MindSpore-format dataset. Although the partitioning was specified manually, the files remain linked to one another, so loading any one MindSpore data format file loads the other three as well. Here the imagenet.record0 file is loaded, then a dictionary iterator is created and one record is read through it.
file_name = './transform/imagenet.record0'
# create MindDataset for reading data
imagenet_data_set = ds.MindDataset(dataset_file=file_name)
# create a dictionary iterator and read a data record through the iterator
print(next(imagenet_data_set.create_dict_iterator(output_numpy=True)))
# ## Generating the MindSpore Data Format from User-Defined Data
# 1. Import the `FileWriter` class, which writes the user-defined raw data. Its parameters are:
#
#
# - `file_name` - the file name of the MindSpore data format file; passed here via the variable `data_record_path`.
# - `shard_num` - the number of MindSpore data format files, 1 by default, with a valid range of [1, 1000]; the default is used here.
from mindspore.mindrecord import FileWriter
data_record_path = './transform/data.record'
writer = FileWriter(data_record_path,1)
# 2. Define the dataset schema, which specifies the fields the dataset contains and their types, then add the schema. The rules are:
#
#
# - Field names: letters, digits and underscores.
# - Field attribute type: int32, int64, float32, float64, string, bytes.
# - Field attribute shape: use [-1] for a one-dimensional array, [m, n] for a two-dimensional array and [x, y, z] for a three-dimensional array.
#
# This example defines a `file_name` field recording the name of the file whose data is written, a `label` field labelling the data, and a `data` field holding the data itself.
data_schema = {"file_name":{"type":"string"},"label":{"type":"int32"},"data":{"type":"bytes"}}
writer.add_schema(data_schema,"test_schema")
# 3. Prepare the data to write, as a list of samples that follows the user-defined schema. The image used here is at https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/convert_dataset_to_mindspore_data_format/images/transform.jpg
# ; download it from that path and save it under `jupyter working directory/dataset/`.
# +
def image_to_bytes(file_name):
f = open(file_name,'rb')
image_bytes = f.read()
f.close()
return image_bytes
data = [{"file_name":"transform.jpg","label":1,"data":image_to_bytes('./dataset/transform.jpg')}]
# -
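# The `data` field above holds raw bytes read from disk. A self-contained sketch of building such a sample record (it writes a small stand-in file via `tempfile`, since transform.jpg may not be present):

```python
import os
import tempfile

def image_to_bytes(file_name):
    # Read the file's raw contents as bytes.
    with open(file_name, 'rb') as f:
        return f.read()

# Write a small stand-in file; the payload is illustrative, not a real JPEG.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    f.write(b'\xff\xd8fake-jpeg-bytes')

sample = {"file_name": "transform.jpg", "label": 1, "data": image_to_bytes(path)}
print(type(sample["data"]).__name__)  # bytes
os.remove(path)
```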
# 4. Add index fields. Adding index fields can speed up data reading; this step is optional.
indexes = ["file_name","label"]
writer.add_index(indexes)
# 5. Write the data and generate the final MindSpore data format file. The `write_raw_data` interface can be called repeatedly, making it easy to add multiple samples to the MindSpore data format file.
writer.write_raw_data(data)
writer.commit()
# 6. The following code first loads the MindSpore-format dataset, then creates a dictionary iterator over the data and reads one record through the iterator.
file_name = './transform/data.record'
# create MindDataset for reading data
define_data_set = ds.MindDataset(dataset_file=file_name)
# create a dictionary iterator and read a data record through the iterator
print(next(define_data_set.create_dict_iterator(output_numpy=True)))
# ## Summary
#
# That concludes this walkthrough. We have seen how to use each corresponding submodule to convert datasets in other formats into the MindSpore data format.
| tutorials/notebook/convert_dataset_to_mindspore_data_format/convert_dataset_to_mindspore_data_format.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### 1. Which command can be used to encrypt all passwords in the configuration file?
# ##### Ans: service password-encryption
# #### 2. Which configuration step should be performed first when enabling SSH on a Cisco device?
# ##### Ans: Configure an IP domain name.
# #### 3. What is the purpose of assigning an IP address to the VLAN1 interface on a Cisco Layer 2 switch?
# ##### Ans: to enable remote access to the switch to manage it
# #### 4. What is the purpose of configuring a default gateway address on a host?
# ##### Ans: to identify the device that allows local network computers to communicate with devices on other networks
# #### 5. Perform the tasks in the activity instructions and then answer the question.
# ##### Ans: Correct
| Coursera/Cisco Networking Basics Specializations/Course_5-Introduction_to_Cisco_Networking/Week-3/Quiz/Week-3-Quiz.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.4.1
# language: julia
# name: julia-1.4
# ---
using Revise, DataStructures
# +
module Huffman
export Node
export generate_huffman_code
using DataStructures
# a binary tree to store the string, probability
# and left and right child
struct Node
symbol::String
prob::Float64
left::Union{Node, Nothing}
right::Union{Node, Nothing}
end
# convert a list of symbols with a probability
# into a priority queue
function from_list!(pq, list)
    # go through the symbols and add them to the priority queue
for (str, prob) in list
enqueue!(pq, Node(str, prob, nothing, nothing), prob)
end
return pq
end
# generate a huffman tree from priority queue
function generate_huffman_tree!(pq)
while length(pq) > 1
a = dequeue!(pq)
b = dequeue!(pq)
new_node = Node(string(a.symbol, b.symbol),
a.prob + b.prob,
a, b)
enqueue!(pq, new_node, new_node.prob)
end
return dequeue!(pq)
end
# generate huffman code from an array
function generate_huffman_code(arr)
# first generate priority queue
pq = from_list!(PriorityQueue{Node, Float64}(), arr)
# array to store the final result
code = []
# generate the huffman tree
tree = generate_huffman_tree!(pq)
# go recursively to left and right child
generate_huffman_code_aux(tree.left, code, "0")
generate_huffman_code_aux(tree.right, code, "1")
return code
end
# helper function which is recursively called
function generate_huffman_code_aux(tree, code, suffix)
# if tree ends, stop
    if tree === nothing
return
end
# if left and right child are nothing, we reach a leaf
# and store the current code
    if tree.left === nothing && tree.right === nothing
return push!(code, (tree.symbol, suffix))
end
# recursive call left and right
generate_huffman_code_aux(tree.left, code, string(suffix, "0"))
generate_huffman_code_aux(tree.right, code, string(suffix, "1"))
return
end
end
using .Huffman
# +
x = [("A", 0.1), ("B", 0.15), ("C", 0.3), ("D", 0.16), ("E", 0.29)];
#x = [("A", 0.4), ("B", 0.3), ("C", 0.1), ("D", 0.1), ("E", 0.06), ("F", 0.04)];
@time code = generate_huffman_code(x)
sort(code)
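# For comparison outside of Julia, the same greedy construction can be sketched
# in Python with the standard-library `heapq` (a minimal sketch, not part of the
# module above; the tie-breaking counter in each tuple is an implementation detail):

```python
import heapq

def huffman_code(symbols):
    """Greedy Huffman construction over (symbol, probability) pairs."""
    # seed the heap with leaf nodes; the counter breaks probability ties
    heap = [(p, i, s, None, None) for i, (s, p) in enumerate(symbols)]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # repeatedly merge the two least probable subtrees
        node1 = heapq.heappop(heap)
        node2 = heapq.heappop(heap)
        merged = (node1[0] + node2[0], count, node1[2] + node2[2], node1, node2)
        heapq.heappush(heap, merged)
        count += 1
    code = {}
    def walk(node, prefix):
        _, _, symbol, left, right = node
        if left is None and right is None:   # reached a leaf: emit its code
            code[symbol] = prefix or "0"
            return
        walk(left, prefix + "0")             # left child appends "0"
        walk(right, prefix + "1")            # right child appends "1"
    walk(heap[0], "")
    return code

# for the example probabilities below, C/D/E get 2-bit codes and A/B get 3-bit codes
print(huffman_code([("A", 0.1), ("B", 0.15), ("C", 0.3), ("D", 0.16), ("E", 0.29)]))
```

# The resulting code is prefix-free and satisfies the Kraft equality for a
# complete binary tree.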
| huffman.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Demonstrate impact of whitening on source estimates
#
#
# This example demonstrates the relationship between the noise covariance
# estimate and the MNE / dSPM source amplitudes. It computes source estimates for
# the SPM faces data and compares proper regularization with insufficient
# regularization based on the methods described in [1]_. The example demonstrates
# that improper regularization can lead to overestimation of source amplitudes.
# This example makes use of the previous, non-optimized code path that was used
# before implementing the suggestions presented in [1]_.
#
# <div class="alert alert-danger"><h4>Warning</h4><p>Please do not copy the patterns presented here for your own
# analysis; this example is purely illustrative.</p></div>
#
# <div class="alert alert-info"><h4>Note</h4><p>This example does quite a bit of processing, so even on a
# fast machine it can take a couple of minutes to complete.</p></div>
#
# References
# ----------
# .. [1] <NAME>. and <NAME>. (2015) Automated model selection in
# covariance estimation and spatial whitening of MEG and EEG signals,
# vol. 108, 328-342, NeuroImage.
#
#
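# The core issue can be illustrated with a toy NumPy sketch (not MNE's actual
# estimator): with fewer samples than channels the empirical covariance is
# rank-deficient, while shrinking it toward a scaled identity restores full
# rank. The 0.1 shrinkage weight below is an arbitrary choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 50, 30          # fewer samples than channels

data = rng.standard_normal((n_samples, n_channels))
emp = data.T @ data / n_samples         # empirical covariance estimate
rank_emp = np.linalg.matrix_rank(emp)   # at most n_samples

alpha = 0.1                             # shrinkage weight (arbitrary here)
target = np.trace(emp) / n_channels * np.eye(n_channels)
shrunk = (1 - alpha) * emp + alpha * target
rank_shrunk = np.linalg.matrix_rank(shrunk)

print(rank_emp, rank_shrunk)            # 30 50
```

# A full-rank, well-conditioned covariance is what makes the whitening step
# (and hence the source amplitudes) behave sensibly.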
# +
# Author: <NAME> <<EMAIL>>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import spm_face
from mne.minimum_norm import apply_inverse, make_inverse_operator
from mne.cov import compute_covariance
print(__doc__)
# -
# Get data
#
#
# +
data_path = spm_face.data_path()
subjects_dir = data_path + '/subjects'
raw_fname = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces%d_3D.ds'
raw = io.read_raw_ctf(raw_fname % 1) # Take first run
# To save time and memory for this demo, we'll just use the first
# 2.5 minutes (all we need to get 30 total events) and heavily
# resample 480->60 Hz (usually you wouldn't do either of these!)
raw = raw.crop(0, 150.).load_data()
picks = mne.pick_types(raw.info, meg=True, exclude='bads')
raw.filter(1, 20., n_jobs=1, fir_design='firwin')
events = mne.find_events(raw, stim_channel='UPPT001')
event_ids = {"faces": 1, "scrambled": 2}
tmin, tmax = -0.2, 0.5
baseline = None # no baseline as high-pass is applied
reject = dict(mag=3e-12)
# Make source space
trans = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces1_3D_raw-trans.fif'
src = mne.setup_source_space('spm', spacing='oct6', subjects_dir=subjects_dir,
add_dist=False)
bem = data_path + '/subjects/spm/bem/spm-5120-5120-5120-bem-sol.fif'
forward = mne.make_forward_solution(raw.info, trans, src, bem)
del src
# inverse parameters
conditions = 'faces', 'scrambled'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = 'dSPM'
clim = dict(kind='value', lims=[0, 2.5, 5])
# -
# Estimate covariances
#
#
# +
samples_epochs = 5, 15,
method = 'empirical', 'shrunk'
colors = 'steelblue', 'red'
evokeds = list()
stcs = list()
methods_ordered = list()
for n_train in samples_epochs:
# estimate covs based on a subset of samples
# make sure we have the same number of conditions.
events_ = np.concatenate([events[events[:, 2] == id_][:n_train]
for id_ in [event_ids[k] for k in conditions]])
events_ = events_[np.argsort(events_[:, 0])]
epochs_train = mne.Epochs(raw, events_, event_ids, tmin, tmax, picks=picks,
baseline=baseline, preload=True, reject=reject,
decim=8)
epochs_train.equalize_event_counts(event_ids)
assert len(epochs_train) == 2 * n_train
# We know some of these have too few samples, so suppress warning
# with verbose='error'
noise_covs = compute_covariance(
epochs_train, method=method, tmin=None, tmax=0, # baseline only
return_estimators=True, verbose='error') # returns list
# prepare contrast
evokeds = [epochs_train[k].average() for k in conditions]
del epochs_train, events_
# do contrast
# We skip empirical rank estimation that we introduced in response to
# the findings in reference [1] to use the naive code path that
# triggered the behavior described in [1]. The expected true rank is
# 274 for this dataset. Please do not do this with your data but
# rely on the default rank estimator that helps regularizing the
# covariance.
stcs.append(list())
methods_ordered.append(list())
for cov in noise_covs:
inverse_operator = make_inverse_operator(evokeds[0].info, forward,
cov, loose=0.2, depth=0.8,
rank=274)
stc_a, stc_b = (apply_inverse(e, inverse_operator, lambda2, "dSPM",
pick_ori=None) for e in evokeds)
stc = stc_a - stc_b
methods_ordered[-1].append(cov['method'])
stcs[-1].append(stc)
del inverse_operator, evokeds, cov, noise_covs, stc, stc_a, stc_b
del raw, forward # save some memory
# -
# Show the resulting source estimates
#
#
# +
fig, (axes1, axes2) = plt.subplots(2, 3, figsize=(9.5, 5))
for ni, (n_train, axes) in enumerate(zip(samples_epochs, (axes1, axes2))):
# compute stc based on worst and best
ax_dynamics = axes[1]
for stc, ax, method, kind, color in zip(stcs[ni],
axes[::2],
methods_ordered[ni],
['best', 'worst'],
colors):
brain = stc.plot(subjects_dir=subjects_dir, hemi='both', clim=clim,
initial_time=0.175, background='w', foreground='k')
brain.show_view('ven')
im = brain.screenshot()
brain.close()
ax.axis('off')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.imshow(im)
ax.set_title('{0} ({1} epochs)'.format(kind, n_train * 2))
# plot spatial mean
stc_mean = stc.data.mean(0)
ax_dynamics.plot(stc.times * 1e3, stc_mean,
label='{0} ({1})'.format(method, kind),
color=color)
# plot spatial std
stc_var = stc.data.std(0)
ax_dynamics.fill_between(stc.times * 1e3, stc_mean - stc_var,
stc_mean + stc_var, alpha=0.2, color=color)
# signal dynamics worst and best
ax_dynamics.set(title='{0} epochs'.format(n_train * 2),
xlabel='Time (ms)', ylabel='Source Activation (dSPM)',
xlim=(tmin * 1e3, tmax * 1e3), ylim=(-3, 3))
ax_dynamics.legend(loc='upper left', fontsize=10)
fig.subplots_adjust(hspace=0.2, left=0.01, right=0.99, wspace=0.03)
mne.viz.utils.plt_show()
| 0.16/_downloads/plot_covariance_whitening_dspm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="Ym70RJOyiMm_" executionInfo={"status": "ok", "timestamp": 1616812747704, "user_tz": 180, "elapsed": 404, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjOFfOS64ZAnagVYfbYGoMIydxrAsDl6YNZLk9caQ=s64", "userId": "05174630595095594796"}}
#this is based on information found at:
#https://colab.research.google.com/github/astg606/py_materials/blob/master/science_data_format/introduction_netcdf4.ipynb#scrollTo=jzSepRxHhqhF
#some more info can be found at
# https://towardsdatascience.com/read-netcdf-data-with-python-901f7ff61648
# + colab={"base_uri": "https://localhost:8080/"} id="RTuXzmiciQmI" executionInfo={"status": "ok", "timestamp": 1616817036424, "user_tz": 180, "elapsed": 3484, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjOFfOS64ZAnagVYfbYGoMIydxrAsDl6YNZLk9caQ=s64", "userId": "05174630595095594796"}} outputId="5e8577f4-774e-421a-f3ad-6546981fca08"
# !pip install netCDF4
# + id="IT8HrcHDiZD-" executionInfo={"status": "ok", "timestamp": 1616817342658, "user_tz": 180, "elapsed": 324, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjOFfOS64ZAnagVYfbYGoMIydxrAsDl6YNZLk9caQ=s64", "userId": "05174630595095594796"}}
import datetime
import numpy as np
import netCDF4 as nc4
# + colab={"base_uri": "https://localhost:8080/"} id="8Jx3hlLCifKM" executionInfo={"status": "ok", "timestamp": 1616817346167, "user_tz": 180, "elapsed": 2065, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjOFfOS64ZAnagVYfbYGoMIydxrAsDl6YNZLk9caQ=s64", "userId": "05174630595095594796"}} outputId="1b3b4764-955d-4830-914e-c060d9b4937b"
# Get the remote file
nc_file = "sresa1b_ncar_ccsm3-example.nc"
url = "https://www.unidata.ucar.edu/software/netcdf/examples/"
import urllib.request
urllib.request.urlretrieve(url+nc_file, nc_file)
# + id="QfZLDivZiiBl" executionInfo={"status": "ok", "timestamp": 1616817346180, "user_tz": 180, "elapsed": 433, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjOFfOS64ZAnagVYfbYGoMIydxrAsDl6YNZLk9caQ=s64", "userId": "05174630595095594796"}}
# Open the netCDF file and read surface air temperature
with nc4.Dataset(nc_file,'r') as ncid:
lons = ncid.variables['lon'][:] # longitude grid points
lats = ncid.variables['lat'][:] # latitude grid points
    levs = ncid.variables['plev'][:] # pressure levels
surf_temp = ncid.variables['tas'][:]
uwind = ncid.variables['ua'][:]
# + colab={"base_uri": "https://localhost:8080/"} id="FKUo6bVojSf8" executionInfo={"status": "ok", "timestamp": 1616817347766, "user_tz": 180, "elapsed": 345, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjOFfOS64ZAnagVYfbYGoMIydxrAsDl6YNZLk9caQ=s64", "userId": "05174630595095594796"}} outputId="908d2106-b5c8-4e92-a07d-4d97ae344bf6"
print(lons.shape)
print(lats.shape)
print(levs.shape)
print(surf_temp.shape)
print(uwind.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="IBtP3Kxokci0" executionInfo={"status": "ok", "timestamp": 1616817355218, "user_tz": 180, "elapsed": 6116, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjOFfOS64ZAnagVYfbYGoMIydxrAsDl6YNZLk9caQ=s64", "userId": "05174630595095594796"}} outputId="c87cc737-0459-4f3e-f7aa-ed6f0343f8e2"
# what a horribly sketchy place to find GOES data!
# https://geo.nsstc.nasa.gov/satellite/goes16/abi/l1b/fullDisk/
# Get the remote file
url = "https://geo.nsstc.nasa.gov/satellite/goes16/abi/l1b/fullDisk/OR_ABI-L1b-RadF-M6C01_G16_s20210860030172_e20210860039480_c20210860039525.nc"
url = "https://geo.nsstc.nasa.gov/satellite/goes16/abi/l1b/fullDisk/OR_ABI-L1b-RadF-M6C01_G16_s20210860040172_e20210860049480_c20210860049524.nc"
url = "https://geo.nsstc.nasa.gov/satellite/goes16/abi/l1b/fullDisk/OR_ABI-L1b-RadF-M6C02_G16_s20210860050172_e20210860059480_c20210860059516.nc"
url = "https://geo.nsstc.nasa.gov/satellite/goes16/abi/l1b/fullDisk/OR_ABI-L1b-RadF-M6C02_G16_s20210860140172_e20210860149480_c20210860149520.nc"
import urllib.request
urllib.request.urlretrieve(url, nc_file)
# + colab={"base_uri": "https://localhost:8080/"} id="tBrDmIQqlG6u" executionInfo={"status": "ok", "timestamp": 1616817355821, "user_tz": 180, "elapsed": 561, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjOFfOS64ZAnagVYfbYGoMIydxrAsDl6YNZLk9caQ=s64", "userId": "05174630595095594796"}} outputId="16775f54-2489-4e95-f1a5-73196716c258"
ncc = nc4.Dataset(nc_file,'r')
print(ncc)
# + colab={"base_uri": "https://localhost:8080/", "height": 310} id="yaU26Ps4lvl1" executionInfo={"status": "error", "timestamp": 1616817355988, "user_tz": 180, "elapsed": 671, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjOFfOS64ZAnagVYfbYGoMIydxrAsDl6YNZLk9caQ=s64", "userId": "05174630595095594796"}} outputId="dcc6f71c-2e14-43d5-beeb-24e5ca4b9c87"
# Open the netCDF file and read surface air temperature
with nc4.Dataset(nc_file,'r') as ncid:
    y = ncid.variables['y'][:] # projection y coordinates
    x = ncid.variables['x'][:] # projection x coordinates
    rad = ncid.variables['Rad'][:] # radiance values
# + colab={"base_uri": "https://localhost:8080/"} id="IWhY_tZAl-Y3" executionInfo={"status": "ok", "timestamp": 1616817307269, "user_tz": 180, "elapsed": 386, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjOFfOS64ZAnagVYfbYGoMIydxrAsDl6YNZLk9caQ=s64", "userId": "05174630595095594796"}} outputId="0999b732-91bf-4fed-fd52-69399487921a"
print(x.shape)
print(y.shape)
print(rad.shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 269} id="c5gqqAA1mnOD" executionInfo={"status": "ok", "timestamp": 1616817294526, "user_tz": 180, "elapsed": 726, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjOFfOS64ZAnagVYfbYGoMIydxrAsDl6YNZLk9caQ=s64", "userId": "05174630595095594796"}} outputId="58c951bf-e9fa-4b4e-9bb7-50ddbb8362a9"
import matplotlib.pyplot as plt
plt.imshow(rad[::100,::100], )
plt.show()
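# The `[::100, ::100]` slice above is a quick decimation trick: a step in each
# axis keeps every Nth row/column, which is handy for previewing a full-disk
# image without resampling. A tiny self-contained illustration:

```python
import numpy as np

img = np.arange(10000).reshape(100, 100)
thumb = img[::10, ::10]   # keep every 10th row and every 10th column
print(thumb.shape)        # (10, 10)
```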
# + id="q5vYeaKmnLoh"
import plotly.express as px
fig = px.imshow(rad)
fig.update_layout(width=256, height=256, margin=dict(l=10, r=10, b=10, t=10))
fig.update_xaxes(showticklabels=False).update_yaxes(showticklabels=False)
fig.show()
| Experiments/goes/netCDF_download.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Heaton_6594_201110A5 - rna_seq
# This notebook will create all the necessary files, scripts and folders to pre-process the aforementioned project. It is designed to be used in a Jupyter server deployed on a system running SLURM. The majority of the scripts and heavy-lifting processes are wrapped up in sbatch scripts. As an end user, in order to pre-process the samples provided in the spreadsheet, you simply need to *run the entire notebook* (Cell > Run all) and the system should take care of the rest for you.
# #### Create necessary folder(s)
# + language="bash"
# mkdir -p /data/reddylab/Alex/collab/20190701_Matt//data/rna_seq/metadata
# mkdir -p /data/reddylab/Alex/collab/20190701_Matt//data/rna_seq/raw_reads
# mkdir -p /data/reddylab/Alex/collab/20190701_Matt//data/rna_seq/processed_raw_reads
# mkdir -p /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/scripts
# mkdir -p /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/jsons
# mkdir -p /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/logs
# -
# Save metadata file
# %%writefile /data/reddylab/Alex/collab/20190701_Matt//data/rna_seq/metadata/rna_seq_download_metadata.Heaton_6594_201110A5.txt
Sequencing core project Sequencing core library name Name Paired-end or single-end Genome Library type Strand specificity
Heaton_6594_201110A5 KS150-Cre-neg-gs-rep1 KS150.CreNeg.g5.rep1redo PE mm10 RNA-seq revstranded
Heaton_6594_201110A5 KS150-Cre-neg-gs-rep3 KS150.CreNeg.g5.rep3redo PE mm10 RNA-seq revstranded
# #### Download FASTQ from dukeds
# Create file to download FASTQ files
# +
# %%writefile /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/scripts/download_Heaton_6594_201110A5.sh
# #!/bin/bash
METADATA=/data/reddylab/Alex/collab/20190701_Matt//data/rna_seq/metadata/rna_seq_download_metadata.Heaton_6594_201110A5.txt
DATA_HOME=/data/reddylab/Alex/collab/20190701_Matt//data/rna_seq
# mkdir -p ${DATA_HOME}/raw_reads/
module load ddsclient
ddsclient download -p Heaton_6594_201110A5 ${DATA_HOME}/raw_reads/Heaton_6594_201110A5
# -
# Execute file to download files
# + magic_args="--out blocking_job_str bash" language="script"
# sbatch -o /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/logs/Heaton_6594_201110A5_download_fastq_files.out \
# -p all,new \
# --wrap="ssh <EMAIL> 'sh /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/scripts/download_Heaton_6594_201110A5.sh' "
# -
# Extract blocking job id
import re
blocking_job = re.match(r'Submitted batch job (\d+).*', blocking_job_str).group(1)
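# The pattern relies on sbatch printing a line of the form
# `Submitted batch job <id>`; a quick self-contained check (the reply string
# below is hypothetical):

```python
import re

sbatch_reply = 'Submitted batch job 123456\n'   # hypothetical sbatch output
job_id = re.match(r'Submitted batch job (\d+).*', sbatch_reply).group(1)
print(job_id)  # 123456
```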
# #### Merge lanes of FASTQ files
# +
# %%writefile /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/scripts/merge_lanes_Heaton_6594_201110A5.sh
# #!/bin/bash
#SBATCH --array=0-2%20
ORDER=Heaton_6594_201110A5
RAW_DATA_DIR=/data/reddylab/Alex/collab/20190701_Matt//data/rna_seq/raw_reads/${ORDER}
PROCESSED_DATA_DIR=/data/reddylab/Alex/collab/20190701_Matt//data/rna_seq/processed_raw_reads/${ORDER}
METADATA=/data/reddylab/Alex/collab/20190701_Matt//data/rna_seq/metadata/rna_seq_download_metadata.Heaton_6594_201110A5.txt
# mkdir -p ${PROCESSED_DATA_DIR}
# cd ${PROCESSED_DATA_DIR}
seq_name_header=$(/bin/grep -Eoi "sequencing.?core.?library.?name" ${METADATA})
if [[ $? == 1 ]];
then
echo -e "ERROR: Sequencing core library name not found in ${METADATA}"
exit 1
fi
name_header=$(/bin/grep -Poi "\tname\t" ${METADATA})
if [[ $? == 1 ]];
then
echo -e "ERROR: Library Name column not found in ${METADATA}"
exit 1
fi
name_header=$(echo ${name_header} | cut -f2)
seq_type_header=$(head -1 ${METADATA} | /bin/grep -Poi "paired.?end.?or.?single.?end")
if [[ $? == 1 ]];
then
echo -e "ERROR: Paired-end or single-end column not found in ${METADATA}"
exit 1
fi
sample_seq_name=$(/data/reddylab/software/bin/print_tab_cols.awk -v cols="${seq_name_header}" ${METADATA} \
| awk -v SLURM_ARRAY_TASK_ID=${SLURM_ARRAY_TASK_ID} 'NR==SLURM_ARRAY_TASK_ID+1{print}');
sample_name=$(/data/reddylab/software/bin/print_tab_cols.awk -v cols="${name_header}" ${METADATA} \
| awk -v SLURM_ARRAY_TASK_ID=${SLURM_ARRAY_TASK_ID} 'NR==SLURM_ARRAY_TASK_ID+1{print}');
seq_type=$(/data/reddylab/software/bin/print_tab_cols.awk -v cols="${seq_type_header}" ${METADATA} \
| awk -v SLURM_ARRAY_TASK_ID=${SLURM_ARRAY_TASK_ID} 'NR==SLURM_ARRAY_TASK_ID+1{print}');
for read_pair in R1 R2 UMI;
do
sample_files=$(/bin/ls ${RAW_DATA_DIR}/${sample_seq_name/ /}_S[0-9]*_L[0-9][0-9][0-9]_${read_pair}_* 2> /dev/null)
if [[ $? != 0 ]]; # If no samples found with that read_pair, continue
then
continue;
fi
if [[ ${read_pair} == "R1" || (${seq_type/ /} == "PE" || ${seq_type/ /} == "pe") ]];
then
# Merge all lanes
merged=$(basename $(echo ${sample_files} | awk '{print $1}') | sed -e 's/_L[0-9]\{3\}_/_/')
cat ${sample_files} > ${merged};
# Rename samples with our sample Names
dest_filename=$(basename $(echo ${merged} | awk '{print $1}') | sed -r 's/\_S[0-9]+//; s/\_(R1|R2|UMI)\_/\.\1\./; s/\.[0-9]+\.fastq/\.fastq/')
mv ${merged} ${dest_filename}
cleaned_dest_filename=${dest_filename/${sample_seq_name/ /}/${sample_name/ /}}
if [[ ${seq_type/ /} == "SE" || ${seq_type/ /} == "se" ]];
then
cleaned_dest_filename=${cleaned_dest_filename/.R1/}
fi
mv ${dest_filename} ${cleaned_dest_filename}
fi
done
# -
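# Ignoring the PE/SE branching, the sed pipeline above boils down to a handful
# of renames; a Python sketch of the same transformation (the helper function is
# illustrative, not part of the pipeline):

```python
import re

def clean_fastq_name(per_lane_name, seq_name, sample_name):
    # drop the lane token, e.g. _L001_ -> _
    merged = re.sub(r'_L\d{3}_', '_', per_lane_name)
    # drop the _S<n> tag and turn _R1_/_R2_/_UMI_ into .R1. etc.
    dest = re.sub(r'_S\d+', '', merged)
    dest = re.sub(r'_(R1|R2|UMI)_', r'.\1.', dest)
    # drop the trailing chunk number before .fastq
    dest = re.sub(r'\.\d+\.fastq', '.fastq', dest)
    # swap the sequencing-core name for our sample name
    return dest.replace(seq_name, sample_name)

print(clean_fastq_name('KS150-Cre-neg-gs-rep1_S1_L001_R1_001.fastq.gz',
                       'KS150-Cre-neg-gs-rep1', 'KS150.CreNeg.g5.rep1redo'))
# KS150.CreNeg.g5.rep1redo.R1.fastq.gz
```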
# Execute file to merge lanes of FASTQ files
# + magic_args="--out blocking_job_str bash -s \"$blocking_job\"" language="script"
# sbatch -o /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/logs/Heaton_6594_201110A5_merge_fastq_files_%a.out \
# -p all,new \
# --array 0-1%20 \
# /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/scripts/merge_lanes_Heaton_6594_201110A5.sh
# -
# Extract blocking job id
import re
blocking_job = re.match(r'Submitted batch job (\d+).*', blocking_job_str).group(1)
# #### Create JSON files for CWL pipeline files
# +
# %%writefile /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/scripts/cwl_json_gen_Heaton_6594_201110A5.sh
# #!/bin/bash
ORDER=Heaton_6594_201110A5
PROCESSED_DATA_DIR=/data/reddylab/Alex/collab/20190701_Matt//data/rna_seq/processed_raw_reads/${ORDER}
METADATA=/data/reddylab/Alex/collab/20190701_Matt//data/rna_seq/metadata/rna_seq_download_metadata.Heaton_6594_201110A5.txt
python /data/reddylab/software/cwl/GGR-cwl/v1.0/json-generator/run.py \
-m ${METADATA} \
-d ${PROCESSED_DATA_DIR} \
-o /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/jsons \
-t rna-seq \
--fastq-gzipped \
--mem 48000 \
--nthreads 24 \
--separate-jsons \
--skip-star-2pass \
# -
# Execute file to create JSON files
# + magic_args="--out blocking_job_str bash -s \"$blocking_job\"" language="script"
# source /data/reddylab/software/miniconda2/bin/activate cwl10
# sbatch -o /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/logs/Heaton_6594_201110A5_cwl_json_gen.out \
# -p all,new \
# --depend afterok:$1 \
# /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/scripts/cwl_json_gen_Heaton_6594_201110A5.sh
# -
# Extract blocking job id
import re
blocking_job = re.match(r'Submitted batch job (\d+).*', blocking_job_str).group(1)
# #### Create SLURM array master bash file for pe-revstranded-with-sjdb samples
# +
# %%writefile /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/scripts/Heaton_6594_201110A5-pe-revstranded-with-sjdb.sh
# #!/bin/bash
#SBATCH --job-name=cwl_rna_seq
#SBATCH --output=/data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/logs/Heaton_6594_201110A5-pe-revstranded-with-sjdb-%a.out
#SBATCH --mail-user=<EMAIL>
#SBATCH --mail-type=FAIL,END
#SBATCH --mem=48000
#SBATCH --cpus-per-task=24
export PATH="/data/reddylab/software/bin:$PATH"
export PATH="/data/reddylab/software/cwl/bin:$PATH"
export PATH="/data/reddylab/software/preseq_v2.0:$PATH"
export PATH="/data/reddylab/software/rsem-1.2.21/:$PATH"
export PATH="/data/reddylab/software/STAR-STAR_2.4.1a/bin/Linux_x86_64/:$PATH"
export PATH="/data/reddylab/software/subread-1.4.6-p4-Linux-x86_64/bin/:$PATH"
export PATH="/data/reddylab/software/bamtools-2.2.3/bin/:$PATH"
export PATH="/data/reddylab/software/miniconda2/envs/cwl10/bin:$PATH"
module load bedtools2
module load fastqc
module load samtools
module load bowtie2
module load java
# For Fastqc
export DISPLAY=:0.0
# Make sure temporary files and folders are created in a specific folder
# mkdir -p /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/tmpdirs/tmp-Heaton_6594_201110A5-pe-revstranded-with-sjdb-${SLURM_ARRAY_TASK_ID}-
export TMPDIR="/data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/tmpdirs/tmp-Heaton_6594_201110A5-pe-revstranded-with-sjdb-${SLURM_ARRAY_TASK_ID}-"
cwltool --debug \
--non-strict \
--preserve-environment PATH \
--preserve-environment DISPLAY \
--preserve-environment TMPDIR \
--outdir /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/Heaton_6594_201110A5-pe-revstranded-with-sjdb \
--no-container \
/data/reddylab/software/cwl/GGR-cwl/v1.0/RNA-seq_pipeline/pipeline-pe-revstranded-with-sjdb.cwl \
/data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/jsons/rna_seq_download_metadata.Heaton_6594_201110A5-pe-revstranded-with-sjdb-${SLURM_ARRAY_TASK_ID}.json
# Delete any tmpdir not removed by cwltool
# rm -rf /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/tmpdirs/tmp-Heaton_6594_201110A5-pe-revstranded-with-sjdb-${SLURM_ARRAY_TASK_ID}-
# -
# Execute SLURM array master file
# + magic_args="--out blocking_job_str bash -s \"$blocking_job\"" language="script"
# source /data/reddylab/software/miniconda2/bin/activate cwl10
# sbatch -p all,new \
# --depend afterok:$1 \
# --array 0-1%20 \
# /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/scripts/Heaton_6594_201110A5-pe-revstranded-with-sjdb.sh
# -
# Extract blocking job id
import re
blocking_job = re.match(r'Submitted batch job (\d+).*', blocking_job_str).group(1)
# #### Create QC generating script
# +
# %%writefile /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/scripts/generate_qc_cell_Heaton_6594_201110A5-pe-revstranded-with-sjdb.sh
# #!/bin/bash
#SBATCH --job-name=qc
#SBATCH --output=/data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/logs/qc_gen.Heaton_6594_201110A5-pe-revstranded-with-sjdb.out
source /data/reddylab/software/miniconda2/bin/activate alex
# cd /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/Heaton_6594_201110A5-pe-revstranded-with-sjdb
python /data/reddylab/software/cwl/bin/generate_stats_rnaseq_paired_end.py ./ \
-samples $(/bin/ls -1 *PBC.txt | sed 's@.PBC.txt@@') \
> qc.txt
# -
# Generate QCs for Heaton_6594_201110A5-pe-revstranded-with-sjdb
# + magic_args="--out blocking_job_str bash -s \"$blocking_job\"" language="script"
# sbatch -p all,new \
# --depend afterok:$1 \
# /data/reddylab/Alex/collab/20190701_Matt//processing/rna_seq/scripts/generate_qc_cell_Heaton_6594_201110A5-pe-revstranded-with-sjdb.sh
# -
# Extract blocking job id
import re
blocking_job = re.match(r'Submitted batch job (\d+).*', blocking_job_str).group(1)
| Heaton_6594_201110A5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Choosing the backend
# *echelle* provides support for two different backends when dealing with interactivity. The default backend is *bokeh*, due to its good performance. The fall-back is to use the *matplotlib* widgets. The only time I recommend choosing *matplotlib* as a backend is when you are working outside of a Jupyter notebook environment (i.e., accessing echelle from the terminal).
# Let's use the same star from our previous example:
import lightkurve as lk
lc = lk.search_lightcurve('KIC 11615890').download_all().stitch()
pg = lc.to_periodogram(normalization='psd')
freq, amp = pg.frequency.value, pg.power.value
# ## Bokeh
# *Bokeh* is the default backend for *echelle*. When using *Bokeh*, you will need to pass in the URL of the notebook if there are multiple notebooks open. *echelle* will warn you about this.
import echelle
echelle.interact_echelle(freq, amp, 5, 10)
# ## Matplotlib
# Matplotlib has no such constraints and can work from the terminal or in a Jupyter notebook.
# If you use matplotlib from a notebook you must set the following cell magic:
# %matplotlib notebook
# If you want the cells to "pop out" from the notebook and appear as a separate window, you need to instead call:
# %matplotlib qt
echelle.interact_echelle(freq, amp, 5, 10, backend='matplotlib')
| docs/notebooks/backend.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import scipy.io
import matplotlib.pyplot as mpl
c2p3 = scipy.io.loadmat('c2p3.mat')
stim = c2p3['stim'].T
counts = c2p3['counts']
print(np.shape(stim))
print(np.shape((counts)))
# +
#Part A
def STA(step, stim, counts):
    # spike-triggered average: for each lag j, accumulate the stimulus
    # frame shown j+1 steps before each spike, weighted by spike count
    total_spike = np.sum(counts)
    result = np.zeros((step, 16, 16))
    for i in range(len(stim[:,0,0])):
        if counts[i] > 0:
            for j in range(step):
                if i > j:
                    result[j,:,:] += stim[i-(j+1),:,:] * counts[i]
    #Normalization by the total number of spikes
    result[:,:,:] = result[:,:,:] / total_spike
    return result
# -
STA_image = STA(10,stim,counts)
figure = 0
for i in range(np.shape(STA_image)[0]):
figure += 1
mpl.figure(figure)
mpl.title("Step size before a spike: " +str(i+1) )
mpl.imshow(STA_image[i,:,:], cmap='gray', vmin=np.min(STA_image), vmax=np.max(STA_image))
# +
#Part B
row_sum = np.sum(STA_image, axis=1)
col_sum = np.sum(STA_image, axis=2)
figure += 1
mpl.figure(figure)
mpl.title("STA images summed over rows: ", fontsize=13)
mpl.xlabel('pixel', fontsize=11)
mpl.ylabel('time step', fontsize=11)
mpl.imshow(row_sum, cmap='gray')
mpl.show(block=False)
figure += 1
mpl.figure(figure)
mpl.title("STA images summed over columns: ", fontsize=13)
mpl.xlabel('pixel', fontsize=11)
mpl.ylabel('time step', fontsize=11)
mpl.imshow(col_sum, cmap='gray')
mpl.show(block=False)
# +
#Part C
def frobenius(STA, stim, counts, allSpikes):
if allSpikes == True:
result = np.zeros(len(counts))
normalizer = 0
for i in range(len(counts)):
result[i] = np.sum(np.multiply(STA[0,:,:],stim[i,:,:]))
if result[i] > normalizer:
normalizer = result[i]
result[:] = result[:] / normalizer
    else:
        result = []
        for i in range(len(counts)):
            if counts[i] != 0:
                result.append(np.sum(np.multiply(STA[0,:,:],stim[i,:,:])))
        normalizer = max(result)
        # convert to an array so the division broadcasts; a plain
        # Python list cannot be divided by a scalar
        result = np.array(result) / normalizer
return result
# -
histo_frobenius = frobenius(STA_image, stim, counts, True)
figure += 1
mpl.figure(figure)
mpl.title("Stimulus Projections")
mpl.ylabel('Spike Count')
mpl.hist(histo_frobenius, bins=100)
mpl.show()
histo_frobenius_nonzero_spikes = frobenius(STA_image, stim, counts, False)
figure += 1
mpl.figure(figure)
mpl.title("Stimulus Projections with Non-Zero Spikes")
mpl.hist(histo_frobenius_nonzero_spikes, bins=100)
mpl.ylabel('Spike Count')
mpl.show()
figure += 1
mpl.figure(figure)
mpl.hist([histo_frobenius,histo_frobenius_nonzero_spikes],bins=100,color=['blue','red'])
mpl.title("Projection of All Stimuli vs Spike Occurrence")
mpl.ylabel('Spike Count')
| Homework2/Question1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Logistic Regression with Implementation
# ## Introduction
#
# Logistic regression is one of the most fundamental machine learning models for binary classification. I will summarize its methodology and implement it in NumPy, TensorFlow, and PyTorch.
#
# The problem we solve is **binary classification**. For example, a doctor would like to use a patient's features, including mean radius, mean texture, etc., to classify a breast cancer case into one of the following two classes:
#
# - "malignant": 𝑦=1
# - "benign": 𝑦=0
#
# which correspond to the severe and mild cases, respectively.
#
# We will load the breast cancer data from scikit-learn as a toy dataset, and split the data into the training and test datasets.
# ## Logistic Regression Model
#
# [To be continued.]
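# In the meantime, a brief sketch of the standard model that the NumPy class
# below implements: the probability of the positive class is the sigmoid of a
# linear score, and training minimizes the binary cross-entropy loss.

```latex
P(y = 1 \mid x) = \sigma(w^\top x + b), \qquad
\sigma(z) = \frac{1}{1 + e^{-z}}

L(w, b) = -\frac{1}{n} \sum_{i=1}^{n}
  \left[ y_i \log \hat{p}_i + (1 - y_i) \log (1 - \hat{p}_i) \right],
\qquad \hat{p}_i = \sigma(w^\top x_i + b)

\frac{\partial L}{\partial w} = \frac{1}{n} \sum_{i=1}^{n} (\hat{p}_i - y_i)\, x_i,
\qquad
\frac{\partial L}{\partial b} = \frac{1}{n} \sum_{i=1}^{n} (\hat{p}_i - y_i)
```

# These gradients are what a batch gradient-descent update uses at each epoch.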
# +
import random
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import tensorflow as tf
import sys
sys.path.append('../numpy/')
from metrics import accuracy
np.random.seed(71)
# -
# %load_ext autoreload
# %autoreload 2
# ## Breast Cancer Dataset and Preprocessing
import sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
# Read breast cancer data.
# X, y = load_breast_cancer(return_X_y=True)
breast_cancer_data = load_breast_cancer()
X, y = breast_cancer_data.data, breast_cancer_data.target
X.shape, y.shape
print(breast_cancer_data.feature_names)
X[:3]
print(breast_cancer_data.target_names)
y[:3]
# Split data into training and test datasets.
X_train_raw, X_test_raw, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=71, shuffle=True, stratify=y)
print(X_train_raw.shape, y_train.shape)
print(X_test_raw.shape, y_test.shape)
# +
# Feature engineering: scale features to [0, 1] with a min-max scaler.
min_max_scaler = MinMaxScaler()
X_train = min_max_scaler.fit_transform(X_train_raw)
X_test = min_max_scaler.transform(X_test_raw)
# -
# Convert arrays to float32.
X_train, X_test, y_train, y_test = (
np.float32(X_train), np.float32(X_test), np.float32(y_train), np.float32(y_test))
X_train.dtype, y_train.dtype
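Note that `MinMaxScaler` learns each feature's minimum and range on the training split only and reuses those statistics on the test split, which avoids leaking test-set information. A numpy sketch of that fit/transform split, on made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
X_tr = rng.normal(size=(8, 3))
X_te = rng.normal(size=(4, 3))

# "Fit": learn per-feature statistics on the training split only.
col_min = X_tr.min(axis=0)
col_range = X_tr.max(axis=0) - col_min

# "Transform": apply the training statistics to both splits.
X_tr_scaled = (X_tr - col_min) / col_range
X_te_scaled = (X_te - col_min) / col_range
```

Scaled training columns land exactly in [0, 1]; scaled test values may fall slightly outside, which is expected.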
# ## NumPy Implementation of Logistic Regression
class LogisticRegression(object):
"""Numpy implementation of Logistic Regression."""
def __init__(self, batch_size=64, lr=0.01, n_epochs=1000):
self.batch_size = batch_size
self.lr = lr
self.n_epochs = n_epochs
def get_data(self, X_train, y_train, shuffle=True):
"""Get dataset and information."""
self.X_train = X_train
self.y_train = y_train
# Get the numbers of examples and inputs.
self.n_examples, self.n_inputs = self.X_train.shape
if shuffle:
idx = list(range(self.n_examples))
random.shuffle(idx)
self.X_train = self.X_train[idx]
self.y_train = self.y_train[idx]
def _create_weights(self):
"""Create model weights and bias."""
self.w = np.zeros(self.n_inputs).reshape(self.n_inputs, 1)
self.b = np.zeros(1).reshape(1, 1)
def _logit(self, X):
"""Logit: unnormalized log probability."""
return np.matmul(X, self.w) + self.b
def _sigmoid(self, logit):
"""Sigmoid function by stabilization trick.
sigmoid(z) = 1 / (1 + exp(-z))
= exp(z) / (1 + exp(z)) * exp(-z_max) / exp(-z_max)
= exp(z - z_max) / (exp(-z_max) + exp(z - z_max)),
where z is the logit, and z_max = max(0, z).
"""
logit_max = np.maximum(0, logit)
logit_stable = logit - logit_max
return np.exp(logit_stable) / (np.exp(-logit_max) + np.exp(logit_stable))
def _model(self, X):
"""Logistic regression model."""
logit = self._logit(X)
return self._sigmoid(logit)
def _loss(self, y, logit):
"""Cross entropy loss by stabilization trick.
cross_entropy_loss(y, z)
= - 1/n * \sum_{i=1}^n y_i * log p(y_i = 1|x_i) + (1 - y_i) * log p(y_i = 0|x_i)
= - 1/n * \sum_{i=1}^n y_i * (z_i - log(1 + exp(z_i))) + (1 - y_i) * (-log(1 + exp(z_i))),
where z is the logit, z_max = max(0, z),
log p(y = 1|x)
= log (1 / (1 + exp(-z)))
= log (exp(z) / (1 + exp(z)))
= z - log(1 + exp(z))
and
log(1 + exp(z)) := logsumexp(z)
= log(exp(0) + exp(z))
= log(exp(0) + exp(z) * exp(z_max) / exp(z_max))
= z_max + log(exp(-z_max) + exp(z - z_max)).
"""
logit_max = np.maximum(0, logit)
logit_stable = logit - logit_max
logsumexp_stable = logit_max + np.log(np.exp(-logit_max) + np.exp(logit_stable))
self.cross_entropy = -(y * (logit - logsumexp_stable) + (1 - y) * (-logsumexp_stable))
return np.mean(self.cross_entropy)
def _optimize(self, X, y):
"""Optimize by stochastic gradient descent."""
m = X.shape[0]
y_ = self._model(X)
dw = 1 / m * np.matmul(X.T, y_ - y)
db = np.mean(y_ - y)
for (param, grad) in zip([self.w, self.b], [dw, db]):
param[:] = param - self.lr * grad
def _fetch_batch(self):
"""Fetch batch dataset."""
idx = list(range(self.n_examples))
for i in range(0, self.n_examples, self.batch_size):
idx_batch = idx[i:min(i + self.batch_size, self.n_examples)]
yield (self.X_train.take(idx_batch, axis=0), self.y_train.take(idx_batch, axis=0))
def fit(self):
"""Fit model."""
self._create_weights()
for epoch in range(1, self.n_epochs + 1):
total_loss = 0
for X_train_b, y_train_b in self._fetch_batch():
y_train_b = y_train_b.reshape((y_train_b.shape[0], -1))
self._optimize(X_train_b, y_train_b)
train_loss = self._loss(y_train_b, self._logit(X_train_b))
total_loss += train_loss * X_train_b.shape[0]
if epoch % 100 == 0:
print('epoch {0}: training loss {1}'.format(epoch, total_loss / self.n_examples))
return self
def get_coeff(self):
return self.b, self.w.reshape((-1,))
def predict(self, X):
return self._model(X).reshape((-1,))
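To see why the stabilization trick in `_sigmoid` matters, the standalone check below (mirroring the class method) agrees with the naive formula on moderate logits and avoids overflow on extreme ones:

```python
import numpy as np

def stable_sigmoid(z):
    # Same trick as LogisticRegression._sigmoid above.
    z_max = np.maximum(0, z)
    z_stable = z - z_max
    return np.exp(z_stable) / (np.exp(-z_max) + np.exp(z_stable))

z = np.array([-2.0, 0.0, 3.0])
naive = 1.0 / (1.0 + np.exp(-z))      # fine for moderate z
big = stable_sigmoid(np.array([1000.0, -1000.0]))  # no overflow here
```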
# ## Fitting Logistic Regression in NumPy
# Fit our Logistic Regression.
logreg = LogisticRegression(batch_size=64, lr=1, n_epochs=1000)
# Get dataset.
logreg.get_data(X_train, y_train, shuffle=True)
logreg.fit()
# Get coefficient.
logreg.get_coeff()
# Predicted probabilities for training data.
p_train_ = logreg.predict(X_train)
p_train_[:10]
# Predicted labels for training data.
y_train_ = (p_train_ > 0.5) * 1
y_train_[:3]
# Prediction accuracy for training data.
accuracy(y_train_, y_train)
# Predicted probabilities for test data.
p_test_ = logreg.predict(X_test)
print(p_test_[:10])
y_test_ = (p_test_ > 0.5) * 1
# Prediction accuracy for test data.
accuracy(y_test_, y_test)
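The `accuracy` helper imported from `../numpy/metrics` is not shown in this notebook; a minimal version consistent with how it is called here might look like:

```python
import numpy as np

def accuracy(y_pred, y_true):
    # Fraction of matching labels; assumes equal-length label arrays.
    y_pred = np.asarray(y_pred).reshape(-1)
    y_true = np.asarray(y_true).reshape(-1)
    return float(np.mean(y_pred == y_true))
```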
# ## PyTorch Implementation of Logistic Regression
class LogisticRegressionTorch(nn.Module):
"""PyTorch implementation of Logistic Regression."""
def __init__(self, batch_size=64, lr=0.01, n_epochs=1000):
super(LogisticRegressionTorch, self).__init__()
self.batch_size = batch_size
self.lr = lr
self.n_epochs = n_epochs
def get_data(self, X_train, y_train, shuffle=True):
"""Get dataset and information."""
self.X_train = X_train
self.y_train = y_train
# Get the numbers of examples and inputs.
self.n_examples, self.n_inputs = self.X_train.shape
if shuffle:
idx = list(range(self.n_examples))
random.shuffle(idx)
self.X_train = self.X_train[idx]
self.y_train = self.y_train[idx]
def _create_model(self):
"""Create logistic regression model."""
self.model = nn.Sequential(
nn.Linear(self.n_inputs, 1),
nn.Sigmoid(),
)
def forward(self, x):
y = self.model(x)
return y
def _create_loss(self):
"""Create (binary) cross entropy loss."""
self.criterion = nn.BCELoss()
def _create_optimizer(self):
"""Create optimizer by stochastic gradient descent."""
self.optimizer = optim.SGD(self.parameters(), lr=self.lr)
def build(self):
"""Build model, loss function and optimizer."""
self._create_model()
self._create_loss()
self._create_optimizer()
def _fetch_batch(self):
"""Fetch batch dataset."""
idx = list(range(self.n_examples))
for i in range(0, self.n_examples, self.batch_size):
idx_batch = idx[i:min(i + self.batch_size, self.n_examples)]
yield (self.X_train.take(idx_batch, axis=0),
self.y_train.take(idx_batch, axis=0))
def fit(self):
"""Fit model."""
for epoch in range(1, self.n_epochs + 1):
total_loss = 0
for X_train_b, y_train_b in self._fetch_batch():
# Convert to Tensor from NumPy array and reshape ys.
X_train_b, y_train_b = (
torch.from_numpy(X_train_b),
torch.from_numpy(y_train_b).view(-1, 1))
y_pred_b = self.model(X_train_b)
loss = self.criterion(y_pred_b, y_train_b)
total_loss += loss.item() * X_train_b.shape[0]
# Zero grads, performs backward pass, and update weights.
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
if epoch % 100 == 0:
print('Epoch {0}: training loss: {1}'
.format(epoch, total_loss / self.n_examples))
def get_coeff(self):
"""Get model coefficients."""
# Detach var which require grad.
return (self.model[0].bias.detach().numpy(),
self.model[0].weight.detach().numpy())
def predict(self, X):
"""Predict for new data."""
with torch.no_grad():
X_ = torch.from_numpy(X)
return self.model(X_).numpy().reshape((-1,))
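For reference, `nn.BCELoss` expects probabilities (hence the `nn.Sigmoid` layer in the model) and averages -[y log p + (1 - y) log(1 - p)] over the batch. A numpy rendering of that formula:

```python
import numpy as np

def bce_loss(p, y):
    # Mean binary cross entropy over probabilities p in (0, 1).
    p = np.clip(p, 1e-12, 1 - 1e-12)  # guard against log(0)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

loss = bce_loss(np.array([0.9, 0.2]), np.array([1.0, 0.0]))
```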
# ## Fitting Logistic Regression in PyTorch
# Fit PyTorch Logistic Regression.
logreg_torch = LogisticRegressionTorch(batch_size=64, lr=0.5, n_epochs=1000)
logreg_torch.get_data(X_train, y_train, shuffle=True)
logreg_torch.build()
logreg_torch.model
logreg_torch.fit()
# Get coefficient.
logreg_torch.get_coeff()
# Predicted probabilities for training data.
p_train_ = logreg_torch.predict(X_train)
p_train_[:10]
# Predicted labels for training data.
y_train_ = (p_train_ > 0.5) * 1
y_train_[:3]
# Prediction accuracy for training data.
accuracy(y_train_, y_train)
# Predicted probabilities for test data.
p_test_ = logreg_torch.predict(X_test)
print(p_test_[:10])
y_test_ = (p_test_ > 0.5) * 1
# Prediction accuracy for test data.
accuracy(y_test_, y_test)
# ## TensorFlow Implementation of Logistic Regression
# +
def reset_tf_graph(seed=71):
"""Reset default TensorFlow graph."""
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
class LogisticRegressionTF(object):
"""A TensorFlow implementation of Logistic Regression."""
def __init__(self, batch_size=64, learning_rate=0.01, n_epochs=1000):
self.batch_size = batch_size
self.n_epochs = n_epochs
self.learning_rate = learning_rate
def get_data(self, X_train, y_train, shuffle=True):
"""Get dataset and information."""
self.X_train = X_train
self.y_train = y_train
# Get the numbers of examples and inputs.
self.n_examples, self.n_inputs = self.X_train.shape
idx = list(range(self.n_examples))
if shuffle:
random.shuffle(idx)
self.X_train = self.X_train[idx]
self.y_train = self.y_train[idx]
def _create_placeholders(self):
"""Create placeholder for features and labels."""
self.X = tf.placeholder(tf.float32, shape=(None, self.n_inputs), name='X')
self.y = tf.placeholder(tf.float32, shape=(None, 1), name='y')
def _create_weights(self):
"""Create and initialize model weights and bias."""
self.w = tf.get_variable(shape=[self.n_inputs, 1],
initializer=tf.random_normal_initializer(),
name='weights')
self.b = tf.get_variable(shape=[1],
initializer=tf.zeros_initializer(),
name='bias')
def _logit(self, X):
"""Logit: unnormalized log probability."""
return tf.matmul(X, self.w) + self.b
def _model(self, X):
"""Logistic regression model."""
logits = self._logit(X)
return tf.math.sigmoid(logits)
def _create_model(self):
# Create logistic regression model.
self.logits = self._logit(self.X)
def _create_loss(self):
# Create cross entropy loss.
self.cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(
labels=self.y,
logits=self.logits,
name='cross_entropy')
self.loss = tf.reduce_mean(self.cross_entropy, name='loss')
def _create_optimizer(self):
# Create gradient descent optimization.
self.optimizer = (
tf.train.GradientDescentOptimizer(learning_rate=self.learning_rate)
.minimize(self.loss))
def build_graph(self):
"""Build computational graph."""
self._create_placeholders()
self._create_weights()
self._create_model()
self._create_loss()
self._create_optimizer()
def _fetch_batch(self):
"""Fetch batch dataset."""
idx = list(range(self.n_examples))
for i in range(0, self.n_examples, self.batch_size):
idx_batch = idx[i:min(i + self.batch_size, self.n_examples)]
yield (self.X_train[idx_batch, :], self.y_train[idx_batch].reshape(-1, 1))
def fit(self):
"""Fit model."""
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
for epoch in range(1, self.n_epochs + 1):
total_loss = 0
for X_train_b, y_train_b in self._fetch_batch():
feed_dict = {self.X: X_train_b, self.y: y_train_b}
_, batch_loss = sess.run([self.optimizer, self.loss],
feed_dict=feed_dict)
total_loss += batch_loss * X_train_b.shape[0]
if epoch % 100 == 0:
print('Epoch {0}: training loss: {1}'
.format(epoch, total_loss / self.n_examples))
# Save model.
saver.save(sess, 'checkpoints/logreg')
def get_coeff(self):
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Load model.
saver = tf.train.Saver()
saver.restore(sess, 'checkpoints/logreg')
return self.b.eval(), self.w.eval().reshape((-1,))
def predict(self, X):
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Load model.
saver = tf.train.Saver()
saver.restore(sess, 'checkpoints/logreg')
return self._model(X).eval().reshape((-1,))
# -
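Unlike the PyTorch version, the TensorFlow graph feeds raw logits to `tf.nn.sigmoid_cross_entropy_with_logits`, which internally uses the stable form max(z, 0) - z*y + log(1 + exp(-|z|)). A numpy version of that formula, for reference:

```python
import numpy as np

def sigmoid_xent_with_logits(labels, logits):
    # Stable elementwise sigmoid cross entropy on raw logits.
    z, y = logits, labels
    return np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))

xent = sigmoid_xent_with_logits(np.array([1.0, 0.0]), np.array([2.0, -3.0]))
```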
# ## Fitting Logistic Regression in TensorFlow
reset_tf_graph()
logreg_tf = LogisticRegressionTF(batch_size=64, learning_rate=0.5, n_epochs=1000)
logreg_tf.get_data(X_train, y_train, shuffle=True)
logreg_tf.build_graph()
logreg_tf.fit()
logreg_tf.get_coeff()
# +
# Predicted probabilities for training data.
p_train_ = logreg_tf.predict(tf.cast(X_train, dtype=tf.float32))
print(p_train_[:10])
# Predicted labels for training data.
y_train_ = (p_train_ > 0.5) * 1
print(y_train_[:10])
# Prediction accuracy for training data.
accuracy(y_train_, y_train)
# +
# Predicted probabilities for test data.
p_test_ = logreg_tf.predict(tf.cast(X_test, dtype=tf.float32))
print(p_test_[:10])
# Predicted labels for test data.
y_test_ = (p_test_ > 0.5) * 1
y_test_[:3]
# Prediction accuracy for test data.
accuracy(y_test_, y_test)
# -
# ## Benchmark with Sklearn's Logistic Regression
# +
# Fit sklearn's Logistic Regression.
from sklearn.linear_model import LogisticRegression as LogisticRegressionSklearn
logreg_sk = LogisticRegressionSklearn(C=1e4, solver='lbfgs', max_iter=500)
logreg_sk.fit(X_train, y_train.reshape(y_train.shape[0], ))
# -
# Get coefficients.
logreg_sk.intercept_, logreg_sk.coef_
# Predicted labels for training data.
p_train_ = logreg_sk.predict(X_train)
p_train_[:3]
y_train_ = (p_train_ > 0.5) * 1
# Prediction accuracy for training data.
accuracy(y_train_, y_train)
# Predicted labels for test data.
p_test_ = logreg_sk.predict(X_test)
y_test_ = (p_test_ > 0.5) * 1
# Prediction accuracy for test data.
accuracy(y_test_, y_test)
| notebook/logistic_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# ### Weighted covariance matrix
# Calculate a weighted centered image $(y)_{ij}$ from image $(x)_{ij},\ \ i=1\dots N,\ \ j=1\dots n$, with weights $w_j,\ \ j=1\dots n$, where $N$ is the number of bands and $n$ is the number of pixels:
#
# $$
# \bar x_i = {1 \over \sum_{j=1}^n w_j} \sum_{j=1}^n w_j x_{ij},\quad i = 1\dots N
# $$
# $$
# y_{ij} = x_{ij}-\bar x_i,\quad i = 1\dots N,\ j=1\dots n
# $$
# Calculate the weighted covariance matrix $(c)_{k\ell},\ \ k,\ell = 1\dots N$, of a weighted centered image $(y)_{ij}$:
#
# $$
# c_{k,\ell} = {1\over \sum_{j=1}^n w_j}\sum_{j=1}^n w_j y_{kj}y_{\ell j} = {1\over \sum_{j=1}^n w_j}\sum_{j=1}^n \sqrt{w_j} y_{kj}\sqrt{w_j}y_{\ell j}
# $$
import ee
ee.Initialize()
def covarw(image, weights, scale=30, maxPixels=1e9):
'''Return the weighted centered image and its weighted covariance matrix'''
bandNames = image.bandNames()
N = bandNames.length()
weightsImage = image.multiply(ee.Image.constant(0)).add(weights)
means = image.addBands(weightsImage) \
.reduceRegion(ee.Reducer.mean().repeat(N).splitWeights(),scale=scale,maxPixels=maxPixels) \
.toArray() \
.project([1])
centered = image.toArray().subtract(means)
B1 = centered.bandNames().get(0)
b1 = weights.bandNames().get(0)
nPixels = ee.Number(centered.reduceRegion(ee.Reducer.count(), scale=scale, maxPixels=maxPixels).get(B1))
sumWeights = ee.Number(weights.reduceRegion(ee.Reducer.sum(), scale=scale, maxPixels=maxPixels).get(b1))
covw = centered.multiply(weights.sqrt()) \
.toArray() \
.reduceRegion(ee.Reducer.centeredCovariance(), scale=scale, maxPixels=maxPixels) \
.get('array')
covw = ee.Array(covw).multiply(nPixels).divide(sumWeights)
return (centered.arrayFlatten([bandNames]), covw)
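The same weighting scheme can be checked locally with numpy, whose `np.cov` supports observation weights via `aweights` (with `ddof=0` the normalization is the sum of the weights, matching the formulas above):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_pix = 4, 500
data = rng.normal(size=(n_bands, n_pix))   # bands x pixels
wts = rng.uniform(0.5, 2.0, size=n_pix)    # per-pixel weights

# Weighted band means and the weighted centered image.
means = (data * wts).sum(axis=1) / wts.sum()
centered = data - means[:, None]

# Weighted covariance: sum_j w_j y_kj y_lj / sum_j w_j.
cov_w = (centered * wts) @ centered.T / wts.sum()
```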
# #### Test
# +
import numpy as np
minlon = -116.117
minlat = 36.964
maxlon = -115.920
maxlat = 37.109
rect = ee.Geometry.Rectangle(minlon,minlat,maxlon,maxlat)
image = ee.Image('LT5_L1T/LT50400341985097XXX04') \
.select('B1','B2','B3','B4') \
.clip(rect)
npixels = ee.Number(image.select(0).reduceRegion(ee.Reducer.count(), scale=30, maxPixels=1e9).get('B1'))
print('number of pixels')
print(npixels.getInfo())
# equal weights
weights = image.select(0).multiply(0.0).add(1.0)
_, covw = covarw(image,weights)
print('unweighted covariance matrix, covarw()')
print(np.array(ee.Array(covw).getInfo()))
# should be same as ordinary covariance
cov = image.toArray().reduceRegion(ee.Reducer.covariance(),scale=30,maxPixels=1e9)
print('unweighted covariance matrix, ee.Reducer.covariance()')
print(np.array(cov.get('array').getInfo()))
# different weights (just the pixel values themselves)
weights = image.float().select(0)
_, covw = covarw(image,weights)
print('weighted covariance matrix, covarw()')
print(np.array(ee.Array(covw).getInfo()))
# -
# #### Export images for validation
gdexport = ee.batch.Export.image.toDrive(image,
description='driveExportTask',
folder = 'EarthEngineImages',
fileNamePrefix='image',scale=30,maxPixels=1e9)
gdexport.start()
gdexport = ee.batch.Export.image.toDrive(weights,
description='driveExportTask',
folder = 'EarthEngineImages',
fileNamePrefix='weights',scale=30,maxPixels=1e9)
gdexport.start()
# #### Validation
# +
import gdal,sys
from osgeo.gdalconst import GA_ReadOnly
inDataseti = gdal.Open('image.tif',GA_ReadOnly)
cols = inDataseti.RasterXSize
rows = inDataseti.RasterYSize
bands = inDataseti.RasterCount
tmp = inDataseti.GetRasterBand(1).ReadAsArray(0,0,cols,rows).astype(float).ravel()
idx = np.where(tmp>0)
npixels = np.size(idx)
print('number of pixels')
print(npixels)
print('unweighted covariance matrix')
G = np.zeros((npixels,bands))
for b in range(bands):
band = inDataseti.GetRasterBand(b+1)
tmp = band.ReadAsArray(0,0,cols,rows).astype(float).ravel()
tmp = tmp[idx]
G[:,b] = tmp
C = np.cov(G,rowvar=0)
print(C)
inDatasetw = gdal.Open('weights.tif',GA_ReadOnly)
cols = inDatasetw.RasterXSize
rows = inDatasetw.RasterYSize
weights = inDatasetw.GetRasterBand(1).ReadAsArray(0,0,cols,rows).astype(float).ravel()
weights = weights[idx]
print('weighted covariance matrix')
C = np.cov(G,rowvar=0,aweights=weights)
print(C)
# -
| src/covw.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Project - Programming for Data Analysis
# ***
#
# ### References
# ***
# **Road Safety Authority (RSA) [Road accident information]**
# - www.rsa.ie/en/RSA/Road-Safety/Our-Research/Deaths-injuries-on-Irish-roads
# - www.rsa.ie/Documents
# **Irish Times [Road accident information]**
# - https://www.irishtimes.com/news/environment/crash-report
# **Technical References**
# - http://pandas.pydata.org/pandas-docs/stable/
# - https://docs.scipy.org/doc/numpy/reference/routines.random.html
# - https://www.bogotobogo.com/python/python_fncs_map_filter_reduce.php
# - https://www.analyticsvidhya.com/blog/2017/09/6-probability-distributions-data-science/
# - http://effbot.org/zone/python-list.htm
# - https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.truncnorm.html
# - https://pypi.org/project/pandasql/
# ***
# ***
#
# ### Real Scenario based on the facts captured by RSA <br> <br>
#
# The summary below is based on the road accident statistics prepared by the Road Safety Authority for the year 2016.
#
# - There were 175 fatal collisions on Irish roads, which resulted in 187 fatalities
# - 13% more collisions and 15% more deaths compared to the previous year (2015)
# - The maximum number of fatalities occurred in counties Dublin and Cork
# - The highest number of fatalities occurred in the age group "66 and above"
# - The maximum number of fatalities occurred for the road user type "Driver"
# - The maximum number of fatalities occurred on the weekday "Sunday"
#
# ***
# ### Project - Scope and Summary <br> <br>
# This project is inspired by the above real-world scenario. The objectives of the project are listed below
#
# - Generate 100 data sets using the python random sampling functions
# - Each data set to contain 6 variables
#
# - Irish counties where the accident took place
# - Age group of the Driver [ Traditionalists, Baby Boomers, Gen-Y]
# - Type of the Vehicle [Car, Van, Bus, Lorry, bi-cycle, Jeep]
# - Road Type [Two-way single carriageway, One-way single carriageway, Dual Carriageway]
# - Weather on the particular day [Sunny, Rainy, Snow, Windy, cloudy]
# - Number of accidents (populated from the rules in Section 3)
#
# - Investigate the types of variables involved, their likely distributions, and their relationships with each other.
#
# ***
# ***
# ### Project - Contents
# The dataset creation code is divided into 4 sections
# - Section 1 : Reference dataset creation for the variables
# - Section 2 : Use distribution functions to create the dataset (100 records)
# - Section 3 : Use pre-determined rules to populate the number of accidents for the random variable combination
# - Section 4 : Plot the relations between different variables using Seaborn library
# ***
#
# ***
# **Section 1** <br>
# - Create the reference datasets for the different variables (Pandas lists) as set out in the project description above
# - The Irish counties are loaded from a JSON file
# - The rest of the reference datasets are hardcoded within the python code below
# - Print the reference data set (except counties)
#
#
# ***
# +
#**************************** SECTION 1 STARTS HERE ************************#
#Import Pandas library
import pandas as pd
# Variable 1 - Counties
# The irish counties are stored in the Json file
# Create a dataframe for the irish counties
url = "https://raw.githubusercontent.com/SomanathanSubramaniyan/PDA-Project/master/Counties.json"
df_counties = pd.read_json(url, orient='columns')
# Variable 2 - Age group of the Driver
# Create a list for the AgeGroup
#AgeGroup =[ 'Baby Boomers', 'Traditionalists','Gen-Y', 'Gen-Z','Gen-X',]
AgeGroup =[ 'Baby Boomers', 'Traditionalists','Gen-Y']
# Variable 3 - Type of the Vehicle
# Create a list for different type of vechicles
VehicleType = ['Van', 'Bus', 'bi-cycle', 'Car','SUV', 'Lorry']
# Variable 4 - Road type
# Create a list for different Road Types
RoadType = ['Two-way single carriageway', 'One-way single carriageway', 'Dual Carriageway']
# Variable 5 - weather
# Create a list for different weather scenarios
Weather = ['Sunny','Cloudy','Rainy', 'Windy','Snow']
print ("\n")
print ("*** Reference Variables used in this project ***")
print ("\n")
print ("Age Group "+ " : "+ '{0!r}'.format(AgeGroup))
print ("Vehicle Type "+ " : "+ '{0!r}'.format(VehicleType))
print ("Road Type "+ " : "+ '{0!r}'.format(RoadType))
print ("Weather "+ " : "+ '{0!r}'.format(Weather))
#**************************** SECTION 1 ENDS HERE *************************#
# -
# ***
# **Section 2** <br>
#
# - Use Uniform, Normal and Poisson distributions to randomly choose the reference variables
# - Uniform Distribution : County random data selection from the reference set
# - Normal Distribution : Age Group random data selection from the reference set
# - Poisson Distribution : Vehicle, Road type and Weather random data selection from the reference set
# - Choose the distribution function parameters so that the random selection largely reflects the real world scenario
# - Create 100 records for all the 5 variables using a for loop
# - Remove any duplicates from the dataset
# ***
# +
#**************************** SECTION 2 STARTS HERE ************************#
# Create a dataframe for the variables County, AgeGroup, VehicleType, RoadType, Weather and Number of accidents
# Use a for loop to create 100 records
# import pandasql to identify the unique records in the dataframe
from scipy.stats import truncnorm,poisson, uniform
from pandasql import sqldf
import numpy as np
import random
import pandas as pd
# Function to return the truncated NORMAL random values
# the upper and the lower values are within expected range
def truncatednormal(mu=0, sigma=1, low=0, upp=10):
return truncnorm( (low - mu)/sigma, (upp - mu)/ sigma, mu, sigma)
# Function to return the POISSON random values
# the upper and the lower values are within expected range
def tpoisson(sample_size=1, maxval=5, mu=3.2):
cutoff = poisson.cdf(maxval, mu)
u = uniform.rvs(scale=cutoff, size= sample_size)
y = poisson.ppf(u, mu)
return y
dataset = pd.DataFrame(columns=['County','AgeGroup','VehicleType','RoadType', 'Weather','NoofAccidents'])
### Variable 1 -- County ###
# Use UNIFORM DISTRIBUTION to populate the county column in the dataframe
# this ensures all the counties are equally represented in the dataset.
# On average 31 distinct counties out of 32 are populated by this logic during each execution
# Use round and integer functions to convert the float result to the nearest integer.
for x in range(100):
icounty = int(round(random.uniform(0,31),0))
dataset.loc[x,'County'] = df_counties.at[icounty,0]
# County - Unique value and their counts - results of the UNIFORM random distribution
dataset.County.value_counts()
### Variable 2 -- Age Group of the Driver ###
# Use TRUNCATED NORMAL DISTRIBUTION to populate the Age Group column in the dataframe
# this ensures most of the dataset has "Gen-Y"
# Use round and integer functions to convert the float result to the nearest integer.
for x in range(100):
y = truncatednormal(2.2,1,0,2)
iAG = y.rvs(1)
z = int(round(iAG[0],0))
dataset.loc[x,'AgeGroup'] = AgeGroup[z]
# Age Group - Unique value and their counts - results of the Normal random distribution
dataset.AgeGroup.value_counts()
### Variable 3, Variable 4 and Varibale 5 -- Vehicle Type, Road Type and Weather ###
# Use POISSON DISTRIBUTION to populate the Vechicle, Road Type and weather from the reference data
# this ensures most of the data set has values as "car", "SUV" and "bi-cycle"
for x in range(100):
# call function tpoisson and pass the size, upper limit and mu parameters
y = tpoisson(1,5,3.2)
dataset.loc[x,'VehicleType'] = VehicleType[int(y)]
# call function tpoisson and pass the size, upper limit and mu parameters
y = tpoisson(1,4,1.5)
dataset.loc[x,'Weather'] = Weather[int(y)]
# call function tpoisson and pass the size, upper limit and mu parameters
y = tpoisson(1,2,0.5)
dataset.loc[x,'RoadType'] = RoadType[int(y)]
#Drop the duplicate records from the dataset
#dataset.drop_duplicates(subset=['County', 'AgeGroup','VehicleType','Weather','RoadType','NoofAccidents'])
#**************************** SECTION 2 ENDS HERE ************************#
# -
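The `tpoisson` helper above truncates the Poisson distribution at `maxval` by drawing the uniform variate on [0, F(maxval)] and inverting the CDF. A numpy-only sketch of the same inversion (names here are illustrative):

```python
import numpy as np

def truncated_poisson(size, maxval, mu, seed=0):
    rng = np.random.default_rng(seed)
    # Poisson pmf on the truncated support 0..maxval.
    k = np.arange(maxval + 1)
    log_fact = np.cumsum(np.concatenate(([0.0], np.log(k[1:]))))  # log k!
    pmf = np.exp(-mu + k * np.log(mu) - log_fact)
    cdf = np.cumsum(pmf)
    # Sample uniforms on [0, F(maxval)) and invert the CDF.
    u = rng.uniform(0, cdf[-1], size=size)
    return np.searchsorted(cdf, u)

samples = truncated_poisson(1000, maxval=5, mu=3.2)
```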
# ***
# **Section 3** <br>
#
# Populate the number of accidents based on rules defined in this section. The facts below are based on the RSA.ie website (road deaths in the years 2015, 2016 and 2017).
#
# - The maximum number of road accident deaths happened in the counties Dublin, Cork, Donegal and Mayo [RSA.ie]
# - The maximum number of road accident deaths involved Generation Z and the Traditionalists
# - The maximum number of road accident deaths happened on two-way single carriageways
#
# ***
# +
#**************************** SECTION 3 STARTS HERE ************************#
#The below rules are based on the assumption that more accidents happen under the below conditions
# Vehicle type -- Car and SUV
# Road type -- Two-way single carriageway, One-way single carriageway
# Weather -- Rainy, Windy, Snow
# Capture the frequent causes of accidents in the list variable
mCounty = ['Kildare','Dublin','Cork','Mayo']
mAgeGroup = ['Traditionalists', 'Gen-Z']
mVehicleType = ['Car','SUV']
mRoadType = ['Two-way single carriageway', 'One-way single carriageway']
mWeather = ['Rainy', 'Windy','Snow']
#Ensure the number of accidents is populated randomly based on the above frequently occurring causes.
#Note: assigning to the row yielded by iterrows() would not modify the dataframe (each row is a copy),
#so write back with dataset.loc[index, ...] instead.
for index, row in dataset.iterrows():
    if (row['County'] in mCounty) and (row['AgeGroup'] in mAgeGroup) and (row['VehicleType'] in mVehicleType) and \
       (row['RoadType'] in mRoadType) and (row['Weather'] in mWeather):
        dataset.loc[index, 'NoofAccidents'] = random.randint(15, 50)
    elif (row['County'] in mCounty) and (row['AgeGroup'] in mAgeGroup) and (row['VehicleType'] in mVehicleType) and \
         (row['RoadType'] in mRoadType):
        dataset.loc[index, 'NoofAccidents'] = random.randint(15, 40)
    elif (row['County'] in mCounty) and (row['AgeGroup'] in mAgeGroup) and (row['VehicleType'] in mVehicleType):
        dataset.loc[index, 'NoofAccidents'] = random.randint(15, 35)
    elif (row['County'] in mCounty) and (row['AgeGroup'] in mAgeGroup):
        dataset.loc[index, 'NoofAccidents'] = random.randint(5, 30)
    elif (row['County'] in mCounty):
        dataset.loc[index, 'NoofAccidents'] = random.randint(5, 20)
    else:
        dataset.loc[index, 'NoofAccidents'] = random.randint(1, 10)
#Print the dataset
dataset
#**************************** SECTION 3 ENDS HERE ************************#
# -
# ***
# **Section 4** <br>
#
# - Plot Seaborn graphs showing the relation between the below 3 variables
# - Road Type (Dual Carriage way, One Way Single Carriage way, Two Way Single Carriage way)
# - Weather
# - Number of Accidents
#
# - Plot Seaborn graphs showing the relation between the below 3 variables
# - Age Group (Gen Y, Traditionalist and Baby Boomers)
# - Weather
# - Number of Accidents
#
#
# ***
# +
#**************************** SECTION 4 STARTS HERE ************************#
#import numpy, seaborn and matplotlib libraries
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
#seaborn relational plot to display the relationship between the variables accidents, weather, vehicle type and Road Type
sns.set(style="darkgrid")
sns.relplot(x='NoofAccidents', y='Weather', hue='VehicleType', col='RoadType',data=dataset)
# -
#seaborn relational plot to display the relationship between the variables accidents, weather, vehicle type and Age Group
sns.set(style="darkgrid")
sns.relplot(x='NoofAccidents', y='Weather', hue='VehicleType', col='AgeGroup',data=dataset)
#seaborn relational plot to display the relationship between the variables accidents, weather, County and Age Group
sns.set(style="darkgrid")
sns.relplot(x='NoofAccidents', y='Weather', hue='County', col='AgeGroup',data=dataset)
#**************************** SECTION 4 ENDS HERE ************************#
# The relationship plots between the variables are in line with the rules framed in Section 3 for the number of accidents under different variable combinations.
# ## END
| Programming for Data Analysis - Project.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This analysis was requested to help inform some discussions about article creation dynamics on the English Wikipedia. See [T149021](https://phabricator.wikimedia.org/T149021) and [T149049](https://phabricator.wikimedia.org/T149049).
import pandas as pd
import bokeh.plotting as bk
import bokeh
import datetime as dt
from IPython.display import display, HTML
# +
# #!/usr/bin/python
import pymysql
import pandas as pd
from impala.dbapi import connect as impala_conn
from impala.util import as_pandas
def try_decode(cell):
try:
return cell.decode(encoding = "utf-8")
except AttributeError:
return cell
def decode_data(d):
return [{try_decode(key): try_decode(val) for key, val in item.items()} for item in d]
def query_db(query, db = "mariadb", fmt = "pandas"):
if db not in ["mariadb", "hadoop"]:
raise ValueError("The db should be `mariadb` or `hadoop`.")
if fmt not in ["pandas", "raw"]:
raise ValueError("The format should be either `pandas` or `raw`.")
if db == "mariadb":
try:
conn = pymysql.connect(
host = "analytics-store.eqiad.wmnet",
read_default_file = '/etc/mysql/conf.d/research-client.cnf',
charset = 'utf8mb4',
db='staging',
cursorclass=pymysql.cursors.DictCursor
)
if fmt == "pandas":
result = pd.read_sql_query(query, conn)
# Turn any binary data into strings
result = result.applymap(try_decode)
elif fmt == "raw":
cursor = conn.cursor()
cursor.execute(query)
result = cursor.fetchall()
result = decode_data(result)
finally:
conn.close()
elif db == "hadoop":
try:
hive_conn = impala_conn(host='analytics1003.eqiad.wmnet', port=10000, auth_mechanism='PLAIN')
hive_cursor = hive_conn.cursor()
hive_cursor.execute(query)
if fmt == "pandas":
result = as_pandas(hive_cursor)
elif fmt == "raw":
result = hive_cursor.fetchall()
finally:
hive_conn.close()
return result
# -
# # Survival of new articles over time
# Data from the following queries (surviving creations are held in the `revision` table, while deleted creations have been moved to the `archive` table):
#
# ```
# select left(rev_timestamp, 6) as month, count(*) as surviving_creations
# from enwiki.revision
# left join enwiki.page
# on rev_page = page_id
# where
# page_namespace = 0 and
# rev_parent_id = 0 and
# convert(rev_comment using utf8) not like "%redir%" and
# rev_len > 100
# group by left(rev_timestamp, 6);
# ```
#
# ```
# select left(a.ar_timestamp, 6) as month, count(*) as deleted_creations
# from enwiki.archive a
# inner join
# (
# select ar_title, min(ar_timestamp) as ar_timestamp
# from enwiki.archive
# where
# ar_namespace = 0 and
# convert(ar_comment using utf8) not like "%redir%" and
# ar_len > 100
# group by ar_title
# ) b
# using (ar_title, ar_timestamp)
# group by left(a.ar_timestamp, 6)
# ```
survived = pd.read_table("2016-10_enwiki_surviving_creations.tsv")
survived.head()
deleted = pd.read_table("2016-10_enwiki_deleted_creations.tsv")
deleted.head()
# +
survival = survived.merge(deleted, on = "month")
survival["pct_survival"] = \
survival["surviving_creations"] / \
(survival["surviving_creations"] + survival["deleted_creations"])
# Convert month column to real date
survival["month"] = pd.to_datetime(survival["month"], format = "%Y%m")
survival.set_index(keys = "month", inplace = True)
# Get rid of incomplete data for November
survival.drop(pd.to_datetime("2016-11-01"), inplace = True)
survival.tail()
# +
bokeh.io.output_notebook()
c = bk.figure(width = 800, height = 400, x_axis_type = "datetime", y_range = (0, 1))
c.line(survival.index, survival["pct_survival"], color = "navy", line_width = 2)
c.toolbar.active_drag = None
bk.show(c)
# -
# # Article creation by non-autoconfirmed editors
# ## Data Lake
# The [Data Lake](https://wikitech.wikimedia.org/wiki/Analytics/Data_Lake) makes it possible to include data on deleted articles (I had previously tried this using the MariaDB replicas, but found it unable to handle the complexity of the query).
#
# This method gives the answer that 6.6% of article creations are done by non-autoconfirmed editors, which definitely seems too low. Possible explanations, starting with the most likely in my view:
# 1. There's a simple error in the query
# 2. There's
# 3. There's an error in the data
# 4. This answer is in fact correct, meaning that ar
dl_creations_hql = """
select
mediawiki_history.wiki_db,
substr(event_timestamp, 0, 8) as day,
count(*) as creations,
sum(
if(
((unix_timestamp(event_timestamp, 'yyyyMMddHHmmss') -
unix_timestamp(coalesce(event_user_creation_timestamp, '20050101000000'), 'yyyyMMddHHmmss'))
< 345600) or
(user_edits_by_hour.edit_count < 10),
1, 0)
)
as non_autoconfirmed_creations
from wmf.mediawiki_history
inner join
(select wiki_db,
event_user_id,
hour,
sum(edit_count) over (partition by wiki_db, event_user_id order by hour) as edit_count
from (select wiki_db,
event_user_id,
substr(event_timestamp, 0, 10) as hour,
count(*) as edit_count
from wmf.mediawiki_history
where event_entity = 'revision'
and event_type = 'create'
and wiki_db = 'enwiki'
and snapshot = '2017-04'
group by wiki_db, event_user_id, substr(event_timestamp, 0, 10)
) user_edits_per_hour
) user_edits_by_hour ON mediawiki_history.wiki_db = user_edits_by_hour.wiki_db
AND mediawiki_history.event_user_id = user_edits_by_hour.event_user_id
AND substr(event_timestamp, 0, 10) = user_edits_by_hour.hour
where event_entity = 'revision'
and event_type = 'create'
and page_namespace = 0
and mediawiki_history.wiki_db = 'enwiki'
and revision_parent_id = 0
and event_comment not regexp "[Rr]edir"
and snapshot = '2017-04'
and event_timestamp > '20170101000000'
group by mediawiki_history.wiki_db, substr(event_timestamp, 0, 8)
order by mediawiki_history.wiki_db, day
limit 100000
"""
dl_creations = query_db(dl_creations_hql, db = "hadoop")
dl_creations.tail(n = 20)
# The aggregate percentage of non-autoconfirmed creations over the entire period
(dl_creations["non_autoconfirmed_creations"].sum() / dl_creations["creations"].sum()) * 100
# ## Recent changes
# **Summary**: I estimate that about 87% of new articles on the English Wikipedia are created by autoconfirmed users. (In this case, an article is a main-namespace page which is not a redirect.)
#
# This estimate is based on articles created in the week before the query was run, and does *not* include any articles created and then deleted by that time (anywhere from 0 to 7 days after their creation).
page_creations = """
select
page_title,
page_title_latest,
page_id,
event_timestamp,
revision_id as creation_rev_id,
revision_text_bytes as length_at_creation,
event_user_id as creator_id,
event_user_text as creator_name,
    event_user_creation_timestamp as creator_registration
from wmf.mediawiki_history
where
wiki_db = "enwiki" and
event_entity = "revision" and
    event_type = "create" and
    revision_parent_id = 0 and
    page_namespace = 0 and
event_comment not regexp "[Rr]edir" and
event_timestamp >= "201704" and
event_timestamp < "201705"
;
"""
creations = pd.read_table(
"2016-10_enwiki_article_creations.tsv",
parse_dates = [2, 8])
creations.head()
four_d = dt.timedelta(days = 4)
creations["creator_autoconfirmed"] = (
(creations["user_edit_count"] >= 10) &
(creations["creation_timestamp"] >= (creations["user_registration"] + four_d))
)
# However, this leaves a couple of entries with null account creation dates because their accounts were created before MediaWiki started recording them. I'll manually set them to be autoconfirmed.
null_reg = creations[ creations["user_registration"].isnull() ]
null_reg
creations.loc[null_reg.index, "creator_autoconfirmed"] = True
creations[ creations["user_registration"].isnull() ]["creator_autoconfirmed"]
creations.groupby("creator_autoconfirmed").size()
# So 88.1% of creations were by autoconfirmed users. But I think there are still a good number of redirect creations here, even though I filtered out most of them using the edit summary. What if we pull out everything where the initial size was less than 100 bytes and the user was autoconfirmed? From spot-checking, it looks like that should get most of them while not removing too many creations of real stubs.
# +
to_remove = creations[
(creations["length_at_creation"] < 100) &
(creations["creator_autoconfirmed"] == True)
]
creations = creations.drop(to_remove.index)
# -
creations.groupby("creator_autoconfirmed").size()
# That gives 86.9% of creations by autoconfirmed users. That's likely a better estimate, though it's not a large difference in any case.
#
# Ideally, I'd check the text of each revision to know for sure whether it was a redirect at the time of creation. But that would require a lot of API work, and this suggests that it wouldn't change the results much.
# ## Example articles
# I'll pull out a list of links to the creations from 31 October.
# +
examples = creations[
(creations["creation_timestamp"] >= "2016-10-31") &
(creations["creation_timestamp"] < "2016-11-01")
]
ac_ex = examples[examples["creator_autoconfirmed"] == True]
non_ac_ex = examples[examples["creator_autoconfirmed"] == False]
def print_table(df):
print("Printing {} rows".format(df.shape[0]))
output = "<table><tr><th>Page</th><th>Initial version</th></tr>"
    for _, row in df.iterrows():
        table_row = """
        <tr>
            <td><a href='https://en.wikipedia.org/wiki/{title}'>{title}</a></td>
            <td><a href='http://en.wikipedia.org/wiki/Special:Diff/{rev_id}'>Special:Diff/{rev_id}</a></td>
        </tr>
        """
        output += table_row.format(title = row.iloc[0], rev_id = row.iloc[3])
output += "</table>"
display(HTML(output))
# -
# ### Autoconfirmed creations
print_table(ac_ex)
# ### Non-autoconfirmed creations
#
print_table(non_ac_ex)
| Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
df = pd.read_csv('data.csv')
df.head()
df = pd.get_dummies(df, columns=['cut','color','clarity'])
y = df["price"]
X = df.drop(columns=["price"])
# +
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y)
# -
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestRegressor,ExtraTreesRegressor
# Note: this estimator list is defined but never used; the pipeline below uses an ExtraTreesRegressor instead
estimator = [('rf', RandomForestRegressor(n_estimators=500, n_jobs=-1, verbose=2, random_state=42))]
sr = make_pipeline(StandardScaler(),ExtraTreesRegressor(n_estimators=500,n_jobs=-1,verbose=2))
sr.fit(X_train, y_train)
y_pred_train = sr.predict(X_train)
y_pred_test = sr.predict(X_test)
from sklearn.metrics import mean_squared_error as mse
print("Train:", mse(y_train,y_pred_train, squared=False))
print("Test:", mse(y_test,y_pred_test, squared=False))
sr.fit(X,y)
predict = pd.read_csv('predict.csv')
predict = pd.get_dummies(predict, columns=['cut','color','clarity'])
y_pred = sr.predict(predict)
df_7 = pd.DataFrame({"index":predict["index"], "price":y_pred})
df_7.to_csv('submission_7.csv', index=False)
| sub_7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Universidad Tecnologica Nacional
# # Industrial Engineering
#
# #### Author: <NAME>
# #### Operations Research Chair - Course I4051 - Wednesday Night Section - Instructor: <NAME>
# # Solving Probability Problems with Python
#
# # Index
# 1. [Monty Hall](#Monty-Hall)<br>
# 1.1 [Problem](#Problem)<br>
# 1.2 [Premises](#Premises)<br>
# 1.3 [Variable definitions](#Variable-definitions)<br>
# 1.4 [Cases to test](#Cases-to-test)<br>
# 1.5 [Simulation](#Simulation)<br>
# 1.6 [Results and conclusions](#Results-and-conclusions)
# 2. [50 white and 50 black](#50-white-and-50-black)<br>
# 2.1 [Statement](#Statement)<br>
# 2.2 [Premises and observations](#Premises-and-observations)<br>
# 2.3 [Variables and functions](#Variables-and-functions)<br>
# 2.4 [Iteration and simulation](#Iteration-and-simulation)
# 3. [Homework](#Homework)<br>
# 3.1 [The Bertrand box](#The-Bertrand-box)<br>
# 3.2 [The two-child paradox](#The-two-child-paradox)<br>
# 3.3 [The prisoner problem](#The-prisoner-problem)
#
# ## Monty Hall
#
#
# ### Problem
#
# The contestant must choose one of three doors (all closed); the prize is whatever is behind the chosen one. It is known for certain that a car is hidden behind one of them, and behind the other two there are goats. Once the contestant has chosen a door and announced the choice to those present, the host, who knows what is behind each door, opens one of the other two doors that has a goat behind it. The host then gives the contestant the option to switch doors if they wish (two options remain). Should the contestant keep the original choice or pick the other door? Does it make any difference?
#
#
# ### Premises
#
# - There is a single valuable prize, the car
# - The contestant may choose only 1 door
# - The host always opens a door that does not hold the valuable prize
# - After the host opens a door, the contestant can choose between keeping the initial pick or switching
#
#
# ### Variable definitions
#
# - ___X___: door hiding the prize - independent random variable
# - ___Y___: contestant's initial choice - independent random variable
# - ___Z___: prize-less door opened by the host - dependent random variable
#
# Import the libraries
import random
import numpy as np
# +
def random_X():
    '''
    Returns the value of X (the door hiding the prize)
    '''
    return random.randint(1, 3)
def random_Y():
    '''
    Returns the value of Y (the contestant's initial choice)
    '''
    return random.randint(1, 3)
def random_Z(X, Y):
    '''
    Returns the value of Z given the values of X and Y
    '''
    doors = [1, 2, 3]
    try:
        if X == Y:
            doors.remove(X)
            return random.choice(doors)
        else:
            doors.remove(Y)
            doors.remove(X)
            return doors[0]
    except ValueError:
        print("Oops...that door does not exist")
        raise ValueError('Invalid door number')
# -
# ### Cases to test
#
# I will create a dictionary whose key is how many attempts the contestant makes before adopting the strategy of switching doors.
# Each key maps to a vector that records a 1 when the contestant won and a 0 when they lost
#
# Since we want to know whether changing the choice pays off, we define 0 as never changing the choice, and 1 as its inverse
# +
# Define the number of scenarios
n_cases=2
# Create a dictionary that says after how many attempts in the game the contestant decides to change their choice
case={}
for i in range(0,n_cases):
    # setdefault lets us check whether a key is in the dictionary; if it does not exist, it is added with a default value
    case.setdefault(i,[])
# 0 = never changes the initial choice | 1 = always changes the initial choice | n = changes the initial choice on attempt n
# -
# ### Simulation
#
# 1. Define how many games will be played
# 2. Determine the door with the prize (X) and the initial choice (Y).
# 3. The host opens a door (Z)
# 4. Evaluate each case - for loop over the dictionary
# 5. Check whether the contestant won or lost and store the result
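# The five steps above can also be condensed into a minimal self-contained sketch (the function and variable names here are illustrative, not part of this notebook's implementation):

```python
import random

def monty_trial(switch):
    doors = [1, 2, 3]
    prize = random.choice(doors)  # door hiding the car (X)
    pick = random.choice(doors)   # contestant's initial choice (Y)
    # The host opens a goat door that is neither the prize nor the pick (Z)
    opened = random.choice([d for d in doors if d not in (prize, pick)])
    if switch:
        # Switch to the only remaining closed door
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == prize

runs = 10000
stay = sum(monty_trial(False) for _ in range(runs)) / runs
change = sum(monty_trial(True) for _ in range(runs)) / runs
print(f"stay ~ {stay:.2f}, switch ~ {change:.2f}")  # roughly 0.33 vs 0.67
```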
# +
# Define how many times the random variables will be simulated
runs=10000
for i in range (0,runs):
    X=random_X()
    Y=random_Y()
    # Get the result for each scenario
    for j in case:
        # The host opens door Z
        Z=random_Z(X,Y)
        options=[1,2,3]
        options.remove(Z)
        if j==0 or i % j !=0: # Contestant does NOT change doors
            if Y == X: # Check whether they won or lost
                case[j].extend([1])
            else:
                case[j].extend([0])
        else: # Contestant changes doors
            options.remove(Y)
            if options[0] == X: # Check whether they won or lost
                case[j].extend([1])
            else:
                case[j].extend([0])
# -
# ### Results and conclusions
#
# For each case we obtain the win percentage as the number of wins over the total number of simulations.
#
# The case with the highest value is the best strategy for this problem.
# +
perc_victories = []
for i in case:
results=np.array(case[i])
perc_victories.append(np.sum(results)/runs)
if np.argmax(perc_victories) == 0:
    print('Not switching doors is the best strategy - Win probability: '+str(np.amax(perc_victories)*100)+'%')
elif np.argmax(perc_victories)==1:
    print('Always switching doors is the best strategy - Win probability: '+str(np.amax(perc_victories)*100)+'%')
else:
    print('Case '+str(np.argmax(perc_victories))+' is the best strategy - Win probability: '+str(np.amax(perc_victories)*100)+'%')
# -
perc_victories
# ## Homework
# ### The Bertrand box
#
# A room contains three identical desks, each with two drawers. One desk has a gold coin in each drawer; another has a silver coin in each, while the remaining one has one coin of each metal. From the outside there is no way to tell what each drawer contains.
#
# You enter the room and open a drawer of one of the three desks. Inside you find a gold coin. Here comes the question (and the problem):
#
# "What is the probability that the other drawer of the same desk also holds a gold coin?"
# ### The two-child paradox
#
# Mr. and Mrs. Smith are walking down the street and run into Don Pepito. Don Pepito recognizes his school friend José Smith, so they stop to greet each other: "Hello, Don Pepito", "Hello, Don José". "Do you remember my wife?" says Don José, pointing to his partner. Don Pepito nods. "And my son Joseíto?" he says, pointing to a boy walking beside them.
#
# Knowing that the Smiths have two children, what is the probability that the other child is also a boy? (Assume the probability of being born male or female is 0.5.)
# ### The prisoner problem
#
# In a prison, three prisoners with similar records apply for a pardon. Soon afterwards it becomes known that the pardon has been granted to two of the three. One of the prisoners knows a member of the tribunal and knows that by asking he can obtain some information. He may ask for the name of one of the pardoned prisoners, but he may not ask whether he himself is one of them.
#
# Reflecting on it, he concludes that if he does not ask, his probability of being pardoned is 2/3; whereas if he asks, he will get an answer, and then his probability of being the other pardoned prisoner is 1/2. He therefore concludes that it is better not to ask, since asking would only reduce his probability of being pardoned.
#
# Is the prisoner's reasoning correct?
| Puzzles de Probabilidad - IO - UTN FRBA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-1394850f892c0188", "locked": true, "schema_version": 3, "solution": false, "task": false}
# # SLU04 - Data Structures
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-5a86818cb223223a", "locked": true, "schema_version": 3, "solution": false}
# ### Start by importing the following packages
# + nbgrader={"grade": false, "grade_id": "cell-23ad005b1393abe1", "locked": true, "schema_version": 3, "solution": false, "task": false}
#used for evaluation
import hashlib
import json
import random
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-a39755a5c47d569e", "locked": true, "schema_version": 3, "solution": false, "task": false}
# In this notebook the following is tested:
# - Tuples
# - Lists
# - Dictionaries
# - Sets
#
# **IMPORTANT:** Some exercises require you to use a variable that was defined in a previous cell, so don't forget to run it before!
#
# For example, in Exercise 1.2 you need to run the cell with `color = ("red", "blue", "green", "yellow", "black", "white")` before running the following cell with your solution.
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-a77c1aeea1e0bae8", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## Exercise 1: Tuples <a name="1"></a>
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-1ae98f9e0e6d921f", "locked": true, "schema_version": 3, "solution": false, "task": false}
# This exercise covers topics learned regarding tuples.
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-553f559dced32ebd", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 1.1) Create a tuple
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-328945c1835ecba5", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Create a tuple of __floats__ named `this_tuple` with size 5.
# + nbgrader={"grade": false, "grade_id": "cell-3a4c0b59df7bcf57", "locked": false, "schema_version": 3, "solution": true, "task": false}
#this_tuple = ...
### BEGIN SOLUTION
this_tuple = (1., 2., 3., 4., 5.)
print(type(this_tuple))
print(len(this_tuple))
print(type(this_tuple[3]))
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-b63230a58719fa25", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert isinstance(this_tuple, tuple), "Are you sure this_tuple is a tuple?"
assert len(this_tuple) == 5, "The length is not quite right."
for i in this_tuple:
assert isinstance(i, float), "Did you write floats?"
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-caade60192e9ee98", "locked": true, "schema_version": 3, "solution": false}
# ### 1.2) Index a tuple
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-b9e18f365178dad9", "locked": true, "schema_version": 3, "solution": false}
# Consider the following tuple:
# + nbgrader={"grade": false, "grade_id": "cell-db62d36d8237c211", "locked": true, "schema_version": 3, "solution": false}
color = ("red", "blue", "green", "yellow", "black", "white")
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-3c9441a336c01871", "locked": true, "schema_version": 3, "solution": false}
# Using __negative__ indexing, what is the index of the element `"green"`? Assign this value to the variable `green_index`.
# + nbgrader={"grade": false, "grade_id": "cell-268b39f922ddc862", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Hint: the solution should be a number
# green_index = ...
### BEGIN SOLUTION
green_index = -4
print(color[green_index])
print(
'green_index hashed:',
hashlib.sha256(
json.dumps(green_index).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-d3eb2f19b4f68632", "locked": true, "points": 0.5, "schema_version": 3, "solution": false, "task": false}
assert hashlib.sha256(json.dumps(green_index).encode()).hexdigest() == 'e5e0093f285a4fb94c3fcc2ad7fd04edd10d429ccda87a9aa5e4718efadf182e', "The index is not correct. Are you using negative indexing?"
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-5abddeaeeb3b55f7", "locked": true, "schema_version": 3, "solution": false}
# ### 1.3) Slice a tuple
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-3f647fb2f1a39be1", "locked": true, "schema_version": 3, "solution": false}
# Extract `("green", "blue", "red")` (in this same order) from the tuple `color`.
# Assign the results to a variable called `rgb`.
# + nbgrader={"grade": false, "grade_id": "cell-8a98581ad0586a03", "locked": true, "schema_version": 3, "solution": false, "task": false}
color = ("red", "blue", "green", "yellow", "black", "white")
# + nbgrader={"grade": false, "grade_id": "cell-205bd03d74d9e9f5", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Hint: use backwards slicing
# rgb = ...
### BEGIN SOLUTION
rgb = color[-4:-7:-1]
print(rgb)
print(
'rgb hashed:',
hashlib.sha256(
json.dumps(rgb).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-f17f899e57df23ad", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert isinstance(rgb, tuple), "Is your result a tuple?"
assert len(rgb) == 3, "You aren't selecting the correct number of elements."
assert rgb[0] == "green", "Check which elements you are selecting and their order."
assert hashlib.sha256(json.dumps(rgb).encode()).hexdigest() == '43da8949efc00c51f7d96130a25a6b902f6cfd6157817015ca6ee524e3085374'
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-5e645b1e02334bba", "locked": true, "schema_version": 3, "solution": false}
# ### 1.4) Index a tuple of tuples
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-014146bcca814cea", "locked": true, "schema_version": 3, "solution": false}
# Considering the following tuple of tuples:
# + nbgrader={"grade": false, "grade_id": "cell-de8a17d24bdd1974", "locked": true, "schema_version": 3, "solution": false}
random_numbers = ((1, 2, 3),(4, 5, 6),(7, 8, 9),(10, 11, 12))
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-648f815248e68152", "locked": true, "schema_version": 3, "solution": false}
# What is the right way to extract the number 8 from `random_numbers`?
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-74a3ef6157c4d1da", "locked": true, "schema_version": 3, "solution": false}
# a) `random_numbers[3][2]`
# b) `random_numbers[-1][-1]`
# c) `random_numbers[-2][1]`
# d) `random_numbers[-1][1]`
# + nbgrader={"grade": false, "grade_id": "cell-7e6632972efca3db", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Uncomment the right answer
#answer = "a"
#answer = "b"
#answer = "c"
#answer = "d"
### BEGIN SOLUTION
random_numbers = ( (1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12) )
answer = "c"
print(answer)
print(random_numbers[-2][1])
print(
'answer hashed:',
hashlib.sha256(
json.dumps(answer).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-a17e088c506ff934", "locked": true, "points": 0.5, "schema_version": 3, "solution": false, "task": false}
assert hashlib.sha256(json.dumps(answer).encode()).hexdigest() == '879923da020d1533f4d8e921ea7bac61e8ba41d3c89d17a4d14e3a89c6780d5d', "Wrong answer."
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-a04f60836c526476", "locked": true, "schema_version": 3, "solution": false}
# ### 1.5) Tuple of size one
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-a9ff06d993ef64d8", "locked": true, "schema_version": 3, "solution": false}
# How can we create the tuple `example` of size 1 __without__ using the function __tuple__?
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-74acf2e1421786c0", "locked": true, "schema_version": 3, "solution": false}
# a) example = (5)
# b) example = 5
# c) example = [5]
# d) example = 5,
# + nbgrader={"grade": false, "grade_id": "cell-d239485262f1edcd", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Uncomment the right answer
#answer = "a"
#answer = "b"
#answer = "c"
#answer = "d"
### BEGIN SOLUTION
answer = "d"
print(answer)
this_tuple = 5,
print(type(this_tuple))
print(
'answer hashed:',
hashlib.sha256(
json.dumps(answer).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-3e575a27a20113ad", "locked": true, "points": 0.5, "schema_version": 3, "solution": false, "task": false}
assert hashlib.sha256(json.dumps(answer).encode()).hexdigest() == '3fa5834dc920d385ca9b099c9fe55dcca163a6b256a261f8f147291b0e7cf633', "Wrong answer."
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-6da7420f7fbc3fd5", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 1.6) Replace values in a tuple
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-a1bd5ea31721b6e5", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Can a tuple be modified after its creation?
# + nbgrader={"grade": false, "grade_id": "cell-1d7ed0f610a63b1a", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Uncomment the right answer
#answer = "yes"
#answer = "no"
### BEGIN SOLUTION
answer = "no"
print(answer)
print(
'answer hashed:',
hashlib.sha256(
json.dumps(answer).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-c7188110ce71150e", "locked": true, "points": 0.5, "schema_version": 3, "solution": false, "task": false}
assert hashlib.sha256(json.dumps(answer).encode()).hexdigest() == '04a06452677210a3cdaec376fd5ebbca1714cb7af9e62bf5cce1644310a9086a', "Wrong answer."
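# As a quick sanity check of the answer above (a small sketch, not part of the graded exercise), assigning to a tuple element raises a `TypeError`:

```python
t = (1, 2, 3)
try:
    t[0] = 99  # tuples do not support item assignment
except TypeError as err:
    print("tuples are immutable:", err)
```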
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-973512063e02c5a4", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 1.7) Merge two tuples
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-117c66c870648a4f", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Considering the following tuples:
# + nbgrader={"grade": false, "grade_id": "cell-b8ec0c5b5fee7b4e", "locked": true, "schema_version": 3, "solution": false, "task": false}
left = (1,11,22215,7,14,1,11,9,1,6,2,5)
right = (1,24,50,45,2,45,1,1,2,1,2,1,88,9,9,9,44,5,2)
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-22370b87851a9a6e", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Create a new tuple `this_tuple` by merging the tuples above. The elements in the tuple `left` should precede the ones on the tuple `right`.
# + nbgrader={"grade": false, "grade_id": "cell-05376cff3e8e7950", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Hint: Using operations between tuples.
#this_tuple = ...
### BEGIN SOLUTION
this_tuple = left + right
print(this_tuple)
print(type(this_tuple))
print(len(this_tuple))
print(this_tuple[-3])
print(
'this_tuple hashed:',
hashlib.sha256(
json.dumps(this_tuple).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-74db790f2e9e6836", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert isinstance(this_tuple, tuple), "The result should be a tuple."
assert len(this_tuple) == 31, "The merging is not right."
assert this_tuple[-3] == 44, "Re-check the order of the tuples."
assert hashlib.sha256(json.dumps(this_tuple).encode()).hexdigest() == '0c1afd35431992aba438b9382b08332f4d4fed7d4380e538c047e42a223a9dd5'
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-bd55f1b4fa5c38d0", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## Exercise 2: Lists <a name="2"></a>
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-2101d19e1e91bb2d", "locked": true, "schema_version": 3, "solution": false, "task": false}
# This exercise covers topics learned regarding lists.
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-1550468cd836e0f0", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 2.1) List Creation
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-b61e2028d8892d9b", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Write a list named `this_list` with length five and with the string `"bananas"` on negative index -4.
# + nbgrader={"grade": false, "grade_id": "cell-f8854927f9656f23", "locked": false, "schema_version": 3, "solution": true, "task": false}
#bananas = ...
#this_list = ...
### BEGIN SOLUTION
bananas = "bananas"
this_list = ["kiwi", bananas, "melon", "lemon", "kiwi"]
print(this_list)
print(type(this_list))
print(len(this_list))
print(this_list[-4])
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-5de18c0078e98c2e", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert isinstance(this_list, list), "The result should be a list."
assert len(this_list) == 5, "The length is not quite right."
assert this_list[-4] == "bananas", "There is no \"bananas\" on position -4."
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-fd541dc946c20f47", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 2.2) Delete and append values in a list
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-1fdfc306ff602ffb", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Given the list `ice_cream`, delete `"chocolate"` and append `"dulce de leche"`.
# + nbgrader={"grade": false, "grade_id": "cell-924c2b63a7da319a", "locked": true, "schema_version": 3, "solution": false, "task": false}
ice_cream = ["lemon", "stracciatella", "pistacchio", "chocolate", "vanilla"]
# + nbgrader={"grade": false, "grade_id": "cell-c22be3e4356c6e22", "locked": false, "schema_version": 3, "solution": true, "task": false}
### BEGIN SOLUTION
del ice_cream[-2]
ice_cream.append("dulce de leche")
print("chocolate" in ice_cream)
print(ice_cream[-1])
print(
'ice_cream hashed:',
hashlib.sha256(
json.dumps(ice_cream).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-298523708d65510d", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert "chocolate" not in ice_cream, "There is still \"chocolate\" in the ice_cream."
assert ice_cream[-1] == "dulce de leche", "Did you append the right ingredient?"
assert hashlib.sha256(json.dumps(ice_cream).encode()).hexdigest() == '083936d7994a5de336fe655aa14bb4b14ad95e30605fa4252937ddc8776cfa09'
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-0b6ee6c4b495135b", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 2.3) Delete the last value in a list
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-712757f07a9f4350", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Considering the following list, `ice_cream = ["lemon", "vanilla", "stracciatella", "pistacchio", "chocolate", "vanilla"]`, what is the right answer if we want to delete `"vanilla"` in the last position?
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-8e584b9581fa898c", "locked": true, "schema_version": 3, "solution": false, "task": false}
# a) `del ice_cream[1]`
# b) `del ice_cream[-1]`
# c) `ice_cream.remove("vanilla")`
# d) `ice_cream[-1] = None`
# + nbgrader={"grade": false, "grade_id": "cell-fafe1f47881998b6", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Uncomment the right answer
#answer = "a"
#answer = "b"
#answer = "c"
#answer = "d"
### BEGIN SOLUTION
answer = "b"
ice_cream = ["lemon", "vanilla", "stracciatella", "pistacchio", "chocolate", "vanilla"]
del ice_cream[1]
print(ice_cream)
ice_cream = ["lemon", "vanilla", "stracciatella", "pistacchio", "chocolate", "vanilla"]
del ice_cream[-1]
print(ice_cream)
ice_cream = ["lemon", "vanilla", "stracciatella", "pistacchio", "chocolate", "vanilla"]
ice_cream.remove("vanilla")
print(ice_cream)
ice_cream = ["lemon", "vanilla", "stracciatella", "pistacchio", "chocolate", "vanilla"]
ice_cream[-1] = None
print(ice_cream)
print(
'answer hashed:',
hashlib.sha256(
json.dumps(answer).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-6589184220a059bc", "locked": true, "points": 0.5, "schema_version": 3, "solution": false, "task": false}
assert hashlib.sha256(json.dumps(answer).encode()).hexdigest() == 'c100f95c1913f9c72fc1f4ef0847e1e723ffe0bde0b36e5f36c13f81fe8c26ed', "Wrong answer."
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-8ffa96b3a1115a12", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 2.4) List Operations
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-d3823726beb8ca39", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Using list operations, create a list called `hello_world` of size 10000. It should contain just two unique values, `"hello"` and `"world"`, alternating: `"hello"`, then `"world"`, then `"hello"`, and so on.
# + nbgrader={"grade": false, "grade_id": "cell-0989f9a6a1c1f63e", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Hint: multiply a list by an int
#hello_world = ...
### BEGIN SOLUTION
hello_world = ["hello", "world"] * 5000
print(len(hello_world))
print(hello_world[0])
print(hello_world[3])
print(hello_world[4532])
print(
'hello_world hashed:',
hashlib.sha256(
json.dumps(hello_world).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-4ac5d3b434c8a490", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert len(hello_world) == 10000, "The size is not right."
assert hello_world[0] == "hello", "I guess you started on the wrong foot."
for idx,word in enumerate(hello_world):
if (idx % 2) == 0:
assert word == "hello", "Are the words alternating?"
else:
assert word == "world", "Are the words alternating?"
assert hashlib.sha256(json.dumps(hello_world).encode()).hexdigest() == 'ba01ef9ff3158651119ad5f13ef61b559dd73eff752f4a2cc05e3912c2c85d32', "Wrong answer."
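# Side note (not part of the graded exercise): list multiplication repeats *references*, not copies. With immutable items like strings that is harmless, but with nested lists every "copy" is the same inner object — a sketch:

```python
# Repeating a nested list: all three entries point at the same inner list
pairs = [["hello", "world"]] * 3
pairs[0][0] = "goodbye"          # changes "all three" inner lists at once

# Repeating a flat list of strings is safe: strings are immutable
flat = ["hello", "world"] * 3
flat[0] = "goodbye"              # only the first element changes

print(pairs)
print(flat)
```

This is why `["hello", "world"] * 5000` works perfectly for this exercise, while the same trick on a list of lists usually surprises people.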
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-723e0ece87a053c0", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 2.5) Replace values and sort a list
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-c799b67d2a4e420d", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Replace `"pistacchio"` in the list `ice_cream` with `"cream"`.
#
# Add the sublist `others` to the end of the list `ice_cream`.
#
# After these operations, sort the elements in the list and convert it to a __tuple__.
# + nbgrader={"grade": false, "grade_id": "cell-a75c90f15f4bc3f1", "locked": true, "schema_version": 3, "solution": false, "task": false}
ice_cream = ["lemon", "stracciatella", "pistacchio", "chocolate", "vanilla"]
others = ["dulce de leche", "caramel", "cookies", "peanut butter"] * 100
# + nbgrader={"grade": false, "grade_id": "cell-7327a77943c8fe67", "locked": false, "schema_version": 3, "solution": true, "task": false}
### BEGIN SOLUTION
ice_cream[2] = "cream"
ice_cream = ice_cream + others
ice_cream.sort()
ice_cream = tuple(ice_cream)
print(len(ice_cream))
print(
'ice_cream hashed:',
hashlib.sha256(
json.dumps(ice_cream).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-955f097b2db82b73", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert len(ice_cream) == 405, "The size of the tuple is not right."
assert ice_cream.index("cream") == 201, "Did you replace \"pistacchio\"?"
assert isinstance(ice_cream, tuple), "The result is not a tuple."
assert hashlib.sha256(json.dumps(ice_cream).encode()).hexdigest() == 'bbe23485294172ad0fab1f37eee62d4f518a296aad06ea944d6d7a31e6ef58a6'
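# Side note (not part of the graded exercise): a common pitfall with the step above is writing `ice_cream = ice_cream.sort()` — `.sort()` sorts in place and returns `None`. A minimal sketch of the difference, plus the tuple conversion:

```python
flavors = ["vanilla", "lemon", "cream"]
result = flavors.sort()            # sorts the list itself, returns None
assert result is None

new_list = sorted(["b", "a"])      # sorted() returns a NEW sorted list instead

frozen = tuple(flavors)            # tuples are immutable: no .sort(), no item assignment
print(flavors, new_list, frozen)
```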
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-3d05def00c8085bb", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## Exercise 3: Dictionaries <a name="3"></a>
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-5a58ad1650ffcb65", "locked": true, "schema_version": 3, "solution": false, "task": false}
# This exercise covers topics learned regarding dictionaries.
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-ef80e72e7619c2a5", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 3.1) Create a dictionary
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-40652f0b472e8069", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Create a dictionary called `this_dict` with 5 key-value pairs where the keys are strings and the values are lists.
# + nbgrader={"grade": false, "grade_id": "cell-bb5b978acdaaae16", "locked": false, "schema_version": 3, "solution": true, "task": false}
#this_dict = ...
### BEGIN SOLUTION
this_dict = {"a":[],"b":[],"c":[],"d":[],"e":[]}
print(len(this_dict))
print(isinstance(list(this_dict.keys())[0], str))
print(isinstance(list(this_dict.values())[0], list))
print(isinstance(list(this_dict.keys())[2], str))
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-0f3a2761c0922b5f", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert isinstance(this_dict, dict), "The result is not a dictionary."
assert len(this_dict) == 5, "The dictionary doesn't have 5 key-value pairs."
for key, value in this_dict.items():
assert isinstance(key, str), "The dictionary keys are not strings: %s" % key
assert isinstance(value, list), "The dictionary values are not lists: %s" % value
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-4eb22784ec1faef7", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 3.2) Extract a value from a dictionary
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-0ddf38a04c12d753", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Considering the following dictionary named `groceries`:
# + nbgrader={"grade": false, "grade_id": "cell-25012b410acd6346", "locked": true, "schema_version": 3, "solution": false, "task": false}
groceries = {
"bread": {"type": "grains", "price_per_unit": 2, "quantity_purchased": 1},
"onions": {"type": "vegetables", "price_per_unit": 0.5, "quantity_purchased": 2},
"spinaches": {"type": "vegetables" , "price_per_unit": 1.5, "quantity_purchased": 1}
}
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-9383a3256ed6cf8e", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Which notation should we use to extract `"grains"`?
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-806e772ad4255a5a", "locked": true, "schema_version": 3, "solution": false, "task": false}
# a) `groceries["type"]`
# b) `groceries["bread"]`
# c) `groceries[0][0]`
# d) `groceries["bread"]["type"]`
# + nbgrader={"grade": false, "grade_id": "cell-fd6a36d5f663708b", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Uncomment the right answer
#answer = "a"
#answer = "b"
#answer = "c"
#answer = "d"
### BEGIN SOLUTION
answer = "d"
print(answer)
print(groceries["bread"]["type"])
print(
'answer hashed:',
hashlib.sha256(
json.dumps(answer).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-978adad567f2a3cb", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert hashlib.sha256(json.dumps(answer).encode()).hexdigest() == '3fa5834dc920d385ca9b099c9fe55dcca163a6b256a261f8f147291b0e7cf633', "Wrong answer."
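# Side note (not part of the graded exercise): chained indexing like `groceries["bread"]["type"]` raises a `KeyError` as soon as any key is missing. Chaining `.get()` with defaults is a safe alternative — a sketch on a small, made-up dictionary:

```python
groceries_demo = {
    "bread": {"type": "grains", "price_per_unit": 2},
}

# Chained indexing: fails loudly if a key is absent
grain_type = groceries_demo["bread"]["type"]

# Chained .get(): returns a default instead of raising
missing = groceries_demo.get("milk", {}).get("type", "unknown")

print(grain_type, missing)
```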
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-8f98dc2d01c8e2ea", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 3.3) Replace, Append and Delete operations on dictionaries
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-4fb3bae7a757d100", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Considering the following dictionary named groceries:
# + nbgrader={"grade": false, "grade_id": "cell-5bd6dd4a4dd7f77b", "locked": true, "schema_version": 3, "solution": false, "task": false}
groceries = {
"bread": {"type": "grains", "price_per_unit": 2, "quantity_purchased": 1},
"onions": {"type": "vegetables", "price_per_unit": 0.5, "quantity_purchased": 2},
"spinages": {"type": "vegetables" , "price_per_unit": 1.5, "quantity_purchased": 1}
}
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-87ba0d0c0af07caf", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Complete the following questions on replacing, adding, and deleting key-value pairs in a dictionary.
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-841136c405a567d5", "locked": true, "schema_version": 3, "solution": false, "task": false}
# #### 3.3.1) Replace a value on a key-value pair
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-277c158b687d0838", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Considering the dictionary `groceries`, update the `price_per_unit` for bread from 2 to 3.
# + nbgrader={"grade": false, "grade_id": "cell-095cb4bce1528b15", "locked": false, "schema_version": 3, "solution": true, "task": false}
### BEGIN SOLUTION
groceries["bread"]["price_per_unit"] = 3
print(groceries["bread"]["price_per_unit"])
print(
'groceries hashed:',
hashlib.sha256(
json.dumps(groceries).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-c5c653d824ffcda9", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert hashlib.sha256(json.dumps(groceries).encode()).hexdigest() == 'be21838b4e9ebe7f267a1a172ec093414999f321b6c027fa0a8a02f9f4f5174d', "Wrong answer."
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-05bcf92ae245295a", "locked": true, "schema_version": 3, "solution": false, "task": false}
# #### 3.3.2) Add a key-value pair
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-8dc1761328f299cb", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Let's add `"rice"` to our `groceries` dictionary.
#
# The `"type"` should be `"grains"`, its `"price_per_unit"` is 1 and the `"quantity_purchased"` is 2.
# + nbgrader={"grade": false, "grade_id": "cell-bacb5fca35772b35", "locked": false, "schema_version": 3, "solution": true, "task": false}
### BEGIN SOLUTION
groceries["rice"] = {"type": "grains", "price_per_unit": 1, "quantity_purchased": 2}
print(groceries["rice"])
print(
'groceries_rice hashed:',
hashlib.sha256(
json.dumps(groceries["rice"]).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-59704708ceb3f98c", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert "rice" in groceries.keys(), "Are you sure you've added rice to the dictionary?"
assert hashlib.sha256(json.dumps(groceries["rice"]).encode()).hexdigest() == '8cfb3220f7b8b23cc8c9e578d6a7a5680dfa301c90c90d287764517e42850a82', "Wrong answer."
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-3e19bd3419133aa8", "locked": true, "schema_version": 3, "solution": false, "task": false}
# #### 3.3.3) Delete a key-value pair
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-05cc4696c8447e19", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Delete onions from our `groceries` dictionary.
# + nbgrader={"grade": false, "grade_id": "cell-c8c6802714175503", "locked": false, "schema_version": 3, "solution": true, "task": false}
### BEGIN SOLUTION
del groceries["onions"]
print("onions" not in groceries)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-9b202febf2268e6d", "locked": true, "points": 0.5, "schema_version": 3, "solution": false, "task": false}
assert "onions" not in groceries.keys(), "There's still \"onions\" on the groceries."
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-6eeb3376a7765ecb", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 3.4) Extract keys and values using methods
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-75ca056c4b91f3d6", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Considering the dictionary `seniority`, extract the keys and values of the dictionary to the variables `names` and `ages`, respectively.
#
# Convert the variable `ages` to a list called `list_ages` and sort this list.
#
# In the end, calculate how many people are senior and assign the value to `n_seniors`.
# + nbgrader={"grade": false, "grade_id": "cell-4ee5afb5ef3e3f67", "locked": true, "schema_version": 3, "solution": false, "task": false}
seniority = {"joao":"senior", "bernardo":"adult", "gabriel":"child", "antonio":"senior", "maria":"senior", "joel":"adult", "ines":"adult", "alberto":"adult", "amilcar":"senior", "emilia":"adult", "ana":"adult", "margarida":"adult" }
# + nbgrader={"grade": false, "grade_id": "cell-1f5f20a99e9e053b", "locked": false, "schema_version": 3, "solution": true, "task": false}
#names = ...
#ages = ...
#list_ages = ...
#n_seniors = ...
### BEGIN SOLUTION
names = seniority.keys()
print(names)
print(len(names))
ages = seniority.values()
print(ages)
print(len(ages))
list_ages = list(ages)
list_ages.sort()
n_seniors = list_ages.count("senior")
print(
'list(names) hashed:',
hashlib.sha256(
json.dumps(list(names)).encode()
).hexdigest()
)
print(
'list_ages hashed:',
hashlib.sha256(
json.dumps(list_ages).encode()
).hexdigest()
)
print(
'n_seniors hashed:',
hashlib.sha256(
json.dumps(n_seniors).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-2977c2eb6bba3963", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert len(names) == 12, "The number of names is not correct."
assert hashlib.sha256(json.dumps(list(names)).encode()).hexdigest() == '9a2a381eef2e8212d8c8b48ab6531a62c064e9fbc39091015c8131138174eaf1', "The variable names is incorrect."
assert len(ages) == 12, "The number of ages is incorrect."
assert hashlib.sha256(json.dumps(list_ages).encode()).hexdigest() == '1fe2f77f82fc12b06e339263d31d6d6d261994a598e6c870152429dfc00a1ca5', "The variable list_ages is not correct. Did you sort it?"
assert isinstance(list_ages, list), "list_ages should be a list."
assert hashlib.sha256(json.dumps(n_seniors).encode()).hexdigest() == '4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a', "The number of seniors is incorrect."
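# Side note (not part of the graded exercise): `dict.keys()` and `dict.values()` return *views*, not lists — they stay in sync with the dictionary as it changes, which is why the solution above converts with `list(...)` before sorting. A sketch on a small, made-up dictionary:

```python
seniority_demo = {"joao": "senior"}
names_view = seniority_demo.keys()       # a dynamic view over the keys

seniority_demo["maria"] = "senior"       # the view reflects later insertions
snapshot = list(seniority_demo.keys())   # list() freezes the current contents

seniority_demo["ana"] = "adult"          # the view sees this too; the snapshot does not
print(list(names_view))
print(snapshot)
```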
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-835571b102be26a7", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## Exercise 4: Sets <a name="4"></a>
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-f5ff91bb8b5d4474", "locked": true, "schema_version": 3, "solution": false, "task": false}
# This exercise covers topics learned regarding sets.
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-b2e742c1e8ee2eac", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 4.1) Create a set
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-a300b171be249437", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Create an empty set called `mary_collection`.
# + nbgrader={"grade": false, "grade_id": "cell-939dc6aefe259e77", "locked": false, "schema_version": 3, "solution": true, "task": false}
#mary_collection = ...
### BEGIN SOLUTION
mary_collection = set()
print(type(mary_collection))
print(len(mary_collection))
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-66f3420a0f38d7f9", "locked": true, "points": 0.5, "schema_version": 3, "solution": false, "task": false}
assert isinstance(mary_collection, set), "Are you sure mary_collection is a set?"
assert len(mary_collection) == 0, "The length is not quite right."
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-cf4fecc77c253e28", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 4.2) Add values to a set
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-1e8914dde5654d70", "locked": true, "schema_version": 3, "solution": false, "task": false}
# #### 4.2.1) Add values from a list to a set
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-e2c35d3c5309ded3", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Add the values in the list `mary_albums` to the set `mary_collection` created in the previous step.
# + nbgrader={"grade": false, "grade_id": "cell-2c3218919b3d5463", "locked": true, "schema_version": 3, "solution": false, "task": false}
mary_albums = ["Repeater", "The Queen Is Dead", "Atom Heart Mother", "Loveless", "Milo Goes To College", "Revolver"]
# + nbgrader={"grade": false, "grade_id": "cell-56dd2081f31559cc", "locked": false, "schema_version": 3, "solution": true, "task": false}
### BEGIN SOLUTION
mary_collection.update(mary_albums)
print(
'len(mary_collection) hashed:',
hashlib.sha256(
json.dumps(len(mary_collection)).encode()
).hexdigest()
)
print(
'list(mary_collection) hashed:',
hashlib.sha256(
json.dumps(sorted(list(mary_collection))).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-9d163b623ea9cb27", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert hashlib.sha256(json.dumps(len(mary_collection)).encode()).hexdigest() == 'e7f6c011776e8db7cd330b54174fd76f7d0216b612387a5ffcfb81e6f0919683', "The number of elements in the set is not correct."
assert hashlib.sha256(json.dumps(sorted(list(mary_collection))).encode()).hexdigest() == '3e697cd768486645a889b03f2041357b2cdaf517d2c2ea222410506a0286cedc', "The result is not correct..."
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-f86f8760f127501c", "locked": true, "schema_version": 3, "solution": false, "task": false}
# #### 4.2.2) Add a single value to a set
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-38b9957d6ac2f572", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Add the album `"Antics"` to the set `mary_collection`.
# + nbgrader={"grade": false, "grade_id": "cell-2f1785dbc24b6d2c", "locked": false, "schema_version": 3, "solution": true, "task": false}
### BEGIN SOLUTION
mary_collection.add("Antics")
print(
'len(mary_collection) hashed:',
hashlib.sha256(
json.dumps(len(mary_collection)).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-af70c7273fff9dca", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert hashlib.sha256(json.dumps(len(mary_collection)).encode()).hexdigest() == '7902699be42c8a8e46fbbb4501726517e86b22c56a189f7625a6da49081b2451', "The number of elements in the set is not correct. Did you add the new album?"
assert "Antics" in mary_collection, "The album is not there... Did you make a typo?"
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-43382e14559ee126", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 4.3) Operations with sets
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-7d91a37c601a26ea", "locked": true, "schema_version": 3, "solution": false, "task": false}
# #### 4.3.1) Find common values
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-b58f6ee9b8ecc5bd", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Now, please find out which albums Mary and Fred have in common, using the sets `mary_collection` and `fred_collection`.
#
# Assign the result to the variable `same_albums`.
# + nbgrader={"grade": false, "grade_id": "cell-d4fa87bda11ec211", "locked": true, "schema_version": 3, "solution": false, "task": false}
fred_collection = {"Still Life", "Atom Heart Mother", "Blackwater Park", "Lateralus", "Revolver", "Foxtrot"}
# + nbgrader={"grade": false, "grade_id": "cell-0865490a991ecbcb", "locked": false, "schema_version": 3, "solution": true, "task": false}
#same_albums =
### BEGIN SOLUTION
same_albums = mary_collection.intersection(fred_collection)
print(same_albums)
print(
'same_albums hashed:',
hashlib.sha256(
json.dumps(sorted(list(same_albums))).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-caaff753f2640a4f", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert hashlib.sha256(json.dumps(sorted(list(same_albums))).encode()).hexdigest() == '244aa15017be9070eafbf75ded26c1cdb11c103a886c0a77f398554a7849c4ed', "The final set is not correct..."
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-72f4f5431f1e3546", "locked": true, "schema_version": 3, "solution": false, "task": false}
# #### 4.3.2) Get all values from 2 sets
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-b529c8191a0216af", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Now, let's create the set `full_collection` that contains all albums from `mary_collection` and `fred_collection`.
# What operation should we use?
#
# a) `mary_collection & fred_collection`
# b) `fred_collection - mary_collection`
# c) `mary_collection.union(fred_collection)`
# d) `mary_collection.difference(fred_collection)`
# + nbgrader={"grade": false, "grade_id": "cell-cd331dcd713eaf11", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Uncomment the right answer
#answer = "a"
#answer = "b"
#answer = "c"
#answer = "d"
### BEGIN SOLUTION
answer = "c"
full_collection = mary_collection.union(fred_collection)
print(full_collection)
print(
'answer hashed:',
hashlib.sha256(
json.dumps(answer).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-4f884d0d1a257efd", "locked": true, "points": 0.5, "schema_version": 3, "solution": false, "task": false}
assert hashlib.sha256(json.dumps(answer).encode()).hexdigest() == '879923da020d1533f4d8e921ea7bac61e8ba41d3c89d17a4d14e3a89c6780d5d', "Oops...that's not the right answer"
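# Side note (not part of the graded exercise): each set method used above has an operator shorthand. A quick reference on two toy sets:

```python
a = {1, 2, 3}
b = {3, 4}

union        = a | b   # same as a.union(b)
intersection = a & b   # same as a.intersection(b)
difference   = a - b   # same as a.difference(b): in a but not in b
symmetric    = a ^ b   # same as a.symmetric_difference(b): in exactly one of the two

print(union, intersection, difference, symmetric)
```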
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-1e6d4e16cf4b5c84", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 4.4) Use set to check duplicates in a list
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-570ac2a434f1798b", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Given the list `lucky_numbers`, create the variable `total_size` corresponding to the total number of values in the list and create the variable `unique_size` that corresponds to the number of **unique** values there are in the list (i.e. excluding duplicates).
# + nbgrader={"grade": false, "grade_id": "cell-4bd871964473b515", "locked": true, "schema_version": 3, "solution": false, "task": false}
lucky_numbers = [1,6,3,8,4237,243,2,73,3,753,531,8,2,982,12,5,6,73,531,0,642,568,0,132]
# + nbgrader={"grade": false, "grade_id": "cell-fa5c1cbe99743894", "locked": false, "schema_version": 3, "solution": true, "task": false}
#total_size = ...
#unique_size = ...
### BEGIN SOLUTION
total_size = len(lucky_numbers)
unique_size = len(set(lucky_numbers))
print(total_size)
print(unique_size)
print(
'total_size hashed:',
hashlib.sha256(
json.dumps(total_size).encode()
).hexdigest()
)
print(
'unique_size hashed:',
hashlib.sha256(
json.dumps(unique_size).encode()
).hexdigest()
)
### END SOLUTION
# + nbgrader={"grade": true, "grade_id": "cell-ca3480f44778254d", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
assert hashlib.sha256(json.dumps(total_size).encode()).hexdigest() == 'c2356069e9d1e79ca924378153cfbbfb4d4416b1f99d41a2940bfdb66c5319db', 'The total size of the list is not correct'
assert hashlib.sha256(json.dumps(unique_size).encode()).hexdigest() == '4523540f1504cd17100c4835e85b7eefd49911580f8efff0599a8f283be6b9e3', 'The number of unique values in the list is not correct...'
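# Side note (not part of the graded exercise): comparing `len(list)` with `len(set(list))` tells you *whether* duplicates exist; `collections.Counter` additionally tells you *which* values are duplicated. A sketch on a short, made-up list:

```python
from collections import Counter

numbers = [1, 6, 3, 8, 3, 8, 2, 2]
counts = Counter(numbers)                     # value -> number of occurrences
duplicates = sorted(v for v, c in counts.items() if c > 1)
has_duplicates = len(numbers) != len(set(numbers))

print(duplicates, has_duplicates)
```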
# + [markdown] nbgrader={"grade": false, "grade_id": "cell-df69809a45a1e893", "locked": true, "schema_version": 3, "solution": false, "task": false}
# # Submit your work!
#
# To submit your work, [follow the instructions here in the step "Grading the Exercise Notebook"!](https://github.com/LDSSA/ds-prep-course-2022#22---working-on-the-learning-units)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Logistic Regression - Assignment #2
# In this Jupyter notebook, we will create a **logistic regression model** for the `heartfailure.csv` dataset. The dataset is available on the [UCI Machine Learning Repository website](https://archive.ics.uci.edu/ml/datasets/Heart+failure+clinical+records).
# 
#
# ---
#
# **Lecturer: <NAME>**<br></br>
# **Module: DATA 2204 - Statistical Pred Modelling**
#
# ---
#
# # Table of Contents:
# * [1. Dataset Information](#dataset-information)
# * [2. Loading Data](#loading-data)
# * [3. Pre-Processing Data](#preprocessing-data)
# * [4. Modelling and Evaluation](#modelling)
# * [4.1 Standard Model](#standard-model)
# * [4.2 Create Pipeline](#create-pipeline)
# * [4.3 Model Analysis - Learning Curve and Recall](#model-analysis)
# * [4.4 Optimized Model](#optimized-model)
# * [5. Feature Importance](#feature-importance)
#
# ---
#
# Background for the dataset (source: [BMC - Part of Springer Nature](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-020-1023-5)):
#
# > "Cardiovascular diseases kill approximately 17 million people globally every year, and they mainly exhibit as myocardial infarctions and heart failures. Heart failure (HF) occurs when the heart cannot pump enough blood to meet the needs of the body. Available electronic medical records of patients quantify symptoms, body features, and clinical laboratory test values, which can be used to perform biostatistics analysis aimed at highlighting patterns and correlations otherwise undetectable by medical doctors. Machine learning, in particular, can predict patients’ survival from their data and can individuate the most important features among those included in their medical records."
#
# ---
#
# <a id="dataset-information"></a>
# # 1. Dataset Information
#
# ## Independent Variables
#
# - `age`: age of the patient (years)
# - `anaemia`: decrease of red blood cells or hemoglobin (boolean)
# - `high blood pressure`: if the patient has hypertension (boolean)
# - `creatinine phosphokinase (CPK)`: level of the CPK enzyme in the blood (mcg/L)
# - `diabetes`: if the patient has diabetes (boolean)
# - `ejection fraction`: percentage of blood leaving the heart at each contraction (percentage)
# - `platelets`: platelets in the blood (kiloplatelets/mL)
# - `sex`: woman or man (binary)
# - `serum creatinine`: level of serum creatinine in the blood (mg/dL)
# - `serum sodium`: level of serum sodium in the blood (mEq/L)
# - `smoking`: if the patient smokes or not (boolean)
# - `time`: follow-up period (days)
#
# ## Dependent Variable
# - `death event`: if the patient deceased during the follow-up period (0-Alive, 1-Deceased)
#
# <a id="loading-data"></a>
# # 2. Loading Data
# +
#Load Libraries
import numpy as np
import pandas as pd
import pandas_profiling as pp
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
from IPython.display import display, IFrame
from sklearn.model_selection import train_test_split, learning_curve, GridSearchCV
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report, confusion_matrix, auc
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
try:
import eli5
from eli5.sklearn import PermutationImportance
except ModuleNotFoundError:
print('pip installing eli5 package')
# !pip install eli5 --quiet
import eli5
from eli5.sklearn import PermutationImportance
from mlxtend.evaluate import bias_variance_decomp
import os
import pathlib
import json
from pprint import pprint
# +
# Define location of the data
data_dir = '../data'
filename = 'heartfailure.csv'
data_path = os.path.join(data_dir, filename)
if not pathlib.Path(data_path).exists():
raise FileNotFoundError('No file found at defined location.')
# -
# Load data into a pandas DataFrame
data = pd.read_csv(data_path)
data.head()
# Overview of Dataset Characteristics
data.info()
# Check for any missing values
data.isna().sum()
data.describe()
# Profile Report
data2 = pp.ProfileReport(data)
data2.to_file('heartfailureLogR.html')
display(IFrame('heartfailureLogR.html', width=900, height=350))
# +
def boxplot(data_df: pd.DataFrame, dep_variable: str = None, ind_variables: list = None):
"""
The function takes the dataframe and creates a boxplot for all the variables in the
DataFrame if ind_variables is `None`.
data_df (pd.DataFrame): a pandas dataframe with all the data to be plotted.
dep_variable (str): name of the column in data_df that contains the dependent variable. If `None`
it will be ignored.
ind_variables (list of str): contains the column names from data_df that should be plotted. If
`None`, all the variables will be plotted.
"""
if ind_variables is None:
if dep_variable is None:
ind_variables = data_df.columns
else:
ind_variables = data_df.drop(dep_variable, axis=1).columns
    n_cols = 4
    n_rows = -(-len(ind_variables) // n_cols)  # ceiling division: just enough rows for all plots
    fig, axes = plt.subplots(nrows=n_rows, ncols=n_cols, squeeze=False,
                             figsize=(n_cols*5, n_rows*5))
    idx = 0
    for row_num in range(n_rows):
        for col_num in range(n_cols):
            if idx >= len(ind_variables):
                axes[row_num, col_num].axis('off')  # hide any unused axes in the grid
                continue
            sns.boxplot(data=data_df, y=ind_variables[idx], x=dep_variable, ax=axes[row_num, col_num])
            axes[row_num, col_num].set_title(ind_variables[idx])
            idx += 1
fig.tight_layout(pad=5)
boxplot(data, "DEATH_EVENT")
# -
#Class Balance
print('Class Split')
print(data['DEATH_EVENT'].value_counts())
data['DEATH_EVENT'].value_counts().plot.bar(figsize=(10,4),title='Classes Split for Dataset')
plt.xlabel('Classes')
plt.ylabel('Count')
# +
#Find Independent Column Correlations
def correlation(dataset, threshold):
    col_corr = []                    # list of correlated column pairs
    corr_matrix = dataset.corr()     # pairwise correlation between columns
    for i in range(len(corr_matrix.columns)):
        for j in range(i):           # j < i: visit each unordered pair exactly once
            if abs(corr_matrix.iloc[i, j]) > threshold:
                col_corr.append((corr_matrix.columns[i], corr_matrix.columns[j]))
    return col_corr

col = correlation(data, 0.8)
print('Correlated columns @ 0.8:')
pprint(col, indent=3)
# -
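# The upper-triangle scan used by `correlation()` above can be checked on a tiny hand-written matrix (the column names and values here are made up for illustration):

```python
# Toy correlation matrix for columns A, B, C (illustrative values only)
cols = ["A", "B", "C"]
corr = [
    [1.00, 0.85, 0.10],
    [0.85, 1.00, 0.30],
    [0.10, 0.30, 1.00],
]

threshold = 0.8
pairs = []
for i in range(len(cols)):
    for j in range(i):                 # j < i: lower triangle, each pair once
        if abs(corr[i][j]) > threshold:
            pairs.append((cols[i], cols[j]))

print(pairs)
```

Only the A-B pair exceeds the 0.8 threshold, so only that pair is reported — the same logic the pandas-based function applies to the full dataset.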
# <a id="preprocessing-data"></a>
# # 3. Pre-Processing
# +
# Define x and y variables for death-event prediction
x = data.drop('DEATH_EVENT', axis=1).to_numpy()
y = data["DEATH_EVENT"].to_numpy()
# Splitting data into train and test datasets
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20, random_state=100, stratify=y)
# Scaling the data
sc = StandardScaler()
x_train_scaled = sc.fit_transform(x_train)
x_test_scaled = sc.transform(x_test)
x_2 = sc.fit_transform(x)  # note: refits sc on the full dataset; x_2 is kept for cross-validation
# -
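# The split above deliberately fits the scaler on the training data only and then applies those same statistics to the test data, so no information leaks from the test set. A minimal pure-Python sketch of that fit/transform discipline on made-up 1-D values:

```python
# Toy 1-D feature split into train and test (illustrative values only)
train = [1.0, 2.0, 3.0, 4.0]
test = [2.0, 6.0]

# "Fit" on the training split only: compute its mean and population std
mean = sum(train) / len(train)
std = (sum((v - mean) ** 2 for v in train) / len(train)) ** 0.5

# "Transform" both splits with the TRAIN statistics (what sc.transform does)
train_scaled = [(v - mean) / std for v in train]
test_scaled = [(v - mean) / std for v in test]

print(train_scaled, test_scaled)
```

The scaled training values are centered at zero by construction; the test values are not, because they were standardized with the training mean and std rather than their own.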
# <a id="modelling"></a>
# # 4. Modelling
#
#
# <a id="standard-model"></a>
# ## 4.1 Standard Model
# Base Logistic Regression Model
for name,method in [('LogReg', LogisticRegression(solver='lbfgs',class_weight='balanced',
random_state=100))]:
method.fit(x_train_scaled,y_train)
predict = method.predict(x_test_scaled)
print('\nEstimator: {}'.format(name))
print(confusion_matrix(y_test,predict))
print(classification_report(y_test,predict))
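# The classification report above is derived from the confusion matrix; for the positive class
# the key numbers reduce to a few ratios. A stdlib sketch (the counts below are hypothetical):

```python
cm = [[40, 8], [5, 7]]        # [[tn, fp], [fn, tp]], hypothetical counts
(tn, fp), (fn, tp) = cm
precision = tp / (tp + fp)    # of predicted positives, how many were right
recall = tp / (tp + fn)       # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)
print(round(precision, 3), round(recall, 3), round(f1, 3))  # 0.467 0.583 0.519
```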
# <a id="create-pipeline"></a>
# ## 4.2 Create Pipeline
# +
# Construct some pipelines
#Create Pipeline
pipeline =[]
pipe_logreg = Pipeline([('scl', StandardScaler()),
('clf', LogisticRegression(solver='lbfgs',class_weight='balanced',
random_state=100))])
pipeline.insert(0,pipe_logreg)
# Set grid search params
modelpara =[]
param_gridlogreg = {'clf__C': [0.01, 0.1, 1, 10, 100],
'clf__penalty': ['l2']}
modelpara.insert(0,param_gridlogreg)
# -
# <a id="model-analysis"></a>
# ## 4.3 Model Analysis - Learning Curve and Recall
# +
#Define Plot for learning curve
def plot_learning_curves(model):
train_sizes, train_scores, test_scores = learning_curve(estimator=model,
X=x_train,
y=y_train,
train_sizes= np.linspace(0.1, 1.0, 10),
cv=10,
scoring='recall_weighted',
n_jobs=1,random_state=100)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_mean,color='blue', marker='o',
markersize=5, label='training recall')
plt.fill_between(train_sizes, train_mean + train_std, train_mean - train_std,
alpha=0.15, color='blue')
plt.plot(train_sizes, test_mean, color='green', linestyle='--', marker='s', markersize=5,
label='validation recall')
plt.fill_between(train_sizes, test_mean + test_std, test_mean - test_std,
alpha=0.15, color='green')
plt.grid(True)
plt.xlabel('Number of training samples')
plt.ylabel('Recall')
plt.legend(loc='best')
plt.ylim([0.5, 1.01])
plt.show()
# -
# Plot Learning Curve
print('Logistic Regression - Learning Curve')
plot_learning_curves(pipe_logreg)
#Script for Bias Variance
print('Bias Variance Trade-Off')
for name,method in [('LogReg', LogisticRegression(solver='lbfgs',class_weight='balanced',
random_state=100))]:
avg_expected_loss, avg_bias, avg_var = bias_variance_decomp(
method, x_train_scaled, y_train, x_test_scaled, y_test,
loss='mse',
random_seed=100)
print('\nEstimator: {}'.format(name))
print('Average Bias: {:.2f}'.format(avg_bias))
print('Average Variance: {:.2f}'.format(avg_var))
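# For squared loss, the decomposition above satisfies: average loss = bias² + variance
# (plus irreducible noise). A tiny worked check at a single test point, with hypothetical
# per-bootstrap-round predictions:

```python
predictions = [0.8, 1.1, 0.9, 1.2]   # one model prediction per bootstrap round (hypothetical)
y_true = 1.0
n = len(predictions)
main_prediction = sum(predictions) / n
bias_sq = (main_prediction - y_true) ** 2
variance = sum((p - main_prediction) ** 2 for p in predictions) / n
avg_loss = sum((p - y_true) ** 2 for p in predictions) / n
print(avg_loss, bias_sq + variance)  # both sides come to about 0.025
```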
# +
#Model Analysis
models=[]
models.append(('Logistic Regression',pipe_logreg))
#Model Evaluation
results =[]
names=[]
scoring ='recall_weighted'
print('Model Evaluation - Recall Weighted')
for name, model in models:
rkf=RepeatedKFold(n_splits=10, n_repeats=5, random_state=100)
cv_results = cross_val_score(model,x,y,cv=rkf,scoring=scoring)
results.append(cv_results)
names.append(name)
print('{} {:.2f} +/- {:.2f}'.format(name,cv_results.mean(),cv_results.std()))
print('\n')
fig = plt.figure(figsize=(5,5))
fig.suptitle('Boxplot View')
ax = fig.add_subplot(111)
sns.boxplot(data=results)
ax.set_xticklabels(names)
plt.ylabel('Recall')
plt.xlabel('Model')
plt.show()
# -
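# The 'recall_weighted' score used above is the per-class recall averaged with each class's
# support as the weight. A stdlib sketch of exactly that computation:

```python
def recall_weighted(y_true, y_pred):
    """Support-weighted average of per-class recall."""
    total = len(y_true)
    score = 0.0
    for c in set(y_true):
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        score += (len(idx) / total) * (correct / len(idx))
    return score

# class 0: recall 2/3 with support 3; class 1: recall 1 with support 1
print(recall_weighted([0, 0, 0, 1], [0, 0, 1, 1]))  # 0.75 up to float rounding
```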
# <a id="optimized-model"></a>
# ## 4.4 Optimized Model
# +
#Define Gridsearch Function
def Gridsearch_cv(model, params):
#Cross-validation Function
cv2=RepeatedKFold(n_splits=10, n_repeats=5, random_state=100)
#GridSearch CV
gs_clf = GridSearchCV(model, params, n_jobs=1, cv=cv2,scoring='recall_weighted')
gs_clf = gs_clf.fit(x_train_scaled, y_train)
model = gs_clf.best_estimator_
#Nested CV
scoreACC = cross_val_score(gs_clf, x_2, y,
scoring='accuracy', cv=5,
n_jobs= -1)
scorePM = cross_val_score(gs_clf, x_2, y,
scoring='precision_weighted', cv=5,
n_jobs= -1)
scoreRM = cross_val_score(gs_clf, x_2, y,
scoring='recall_weighted', cv=5,
n_jobs= -1)
# Use best model and test data for final evaluation
y_pred = model.predict(x_test_scaled)
#Identify Best Parameters to Optimize the Model
bestpara=str(gs_clf.best_params_)
#Output Heading
print('\nOptimized Model')
    print('\nModel Name:', str(model.named_steps['clf']))
#Output Validation Statistics
target_names=['Outcome 0','Outcome 1']
print('\nBest Parameters:',bestpara)
print('\n', confusion_matrix(y_test,y_pred))
print('\n',classification_report(y_test,y_pred,target_names=target_names))
print('\nNestedCV Accuracy(weighted) :{:0.2f} +/-{:0.2f} '.format(np.mean(scoreACC),np.std(scoreACC)))
print('NestedCV Precision(weighted) :{:0.2f} +/-{:0.2f} '.format(np.mean(scorePM),np.std(scorePM)))
print('NestedCV Recall(weighted) :{:0.2f} +/-{:0.2f} '.format(np.mean(scoreRM),np.std(scoreRM)))
print('\n')
#Transform the variables into binary (0,1) - ROC Curve
from sklearn import preprocessing
Forecast1=pd.DataFrame(y_pred)
Outcome1=pd.DataFrame(y_test)
lb1 = preprocessing.LabelBinarizer()
OutcomeB1 =lb1.fit_transform(Outcome1)
    ForecastB1 = lb1.transform(Forecast1)  # reuse the binarizer already fitted on the true labels
#Setup the ROC Curve
from sklearn.metrics import roc_curve, auc
from sklearn import metrics
fpr, tpr, threshold = metrics.roc_curve(OutcomeB1, ForecastB1)
roc_auc = metrics.auc(fpr, tpr)
print('ROC Curve')
#Plot the ROC Curve
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
# -
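# The AUC reported on the ROC plot is the trapezoidal area under the (fpr, tpr) points.
# A minimal sketch with hypothetical operating points:

```python
def trapezoid_auc(fpr, tpr):
    """Area under the ROC curve via the trapezoid rule (points sorted by fpr)."""
    area = 0.0
    for i in range(1, len(fpr)):
        area += (fpr[i] - fpr[i - 1]) * (tpr[i] + tpr[i - 1]) / 2
    return area

# Hypothetical curve: (0,0), one threshold at (0.2, 0.8), then (1,1)
print(trapezoid_auc([0.0, 0.2, 1.0], [0.0, 0.8, 1.0]))  # about 0.8
```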
# Run Models
for pipeline, modelpara in zip(pipeline,modelpara):
Gridsearch_cv(pipeline,modelpara)
# <a id="feature-importance"></a>
# # 5. Feature Importance
# Next Steps - Feature Importance
for name, model in models:
print(name)
perm=PermutationImportance(model.fit(x_train_scaled,y_train),random_state=100).fit(x_test_scaled,y_test)
features=data.drop('DEATH_EVENT',axis=1).columns
print('\nPermutation Importance')
print('\n')
df=eli5.show_weights(perm,feature_names=data.drop('DEATH_EVENT',axis=1).columns.tolist())
display(df)
df2= pd.DataFrame(data=perm.results_,columns=features)
fig = plt.figure(figsize=(25,10))
sns.boxplot(data=df2).set(title='Feature Importance Distributions',
ylabel='Importance')
plt.show()
# Next Steps - Feature Selection using SelectFromModel
from sklearn.feature_selection import SelectFromModel
clf = LogisticRegression(solver='liblinear',class_weight='balanced',
random_state=100)
clf.fit(x_train_scaled,y_train)
model = SelectFromModel(clf, prefit=True)
feature_idx = model.get_support()
feature_name = data.drop('DEATH_EVENT',axis=1).columns[feature_idx]
print('\nKey Features:\n',list(feature_name))
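# With an l2-penalised estimator, SelectFromModel defaults to keeping features whose absolute
# coefficient is at least the mean absolute coefficient. The selection rule itself is tiny
# (the coefficients below are hypothetical, not the fitted ones):

```python
coefs = [0.05, -1.2, 0.3, 0.9]  # hypothetical fitted coefficients
threshold = sum(abs(c) for c in coefs) / len(coefs)
keep = [i for i, c in enumerate(coefs) if abs(c) >= threshold]
print(keep)  # indices of the retained features: [1, 3]
```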
| logistical_regression/notebook/Logistical Regression - Assignment 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import pandas as pd
from tensorflow import keras
from tensorflow.keras import layers
from keras_tuner.tuners import RandomSearch
df=pd.read_csv('C:/Users/User/Desktop/TensorFlow/ANN/Keras-Tuner-main/Real_Combine.csv')
df.head()
len(df.columns)
X=df.iloc[:,0:8]
Y=df.iloc[:,8]
def build_model (hp):
model=keras.Sequential()
for i in range(hp.Int('number_layers',2,20)):
model.add(layers.Dense(units=hp.Int('units_'+str(i),
min_value=32,
max_value=512,
step=32), activation='relu'))
model.add(layers.Dense(units=1,activation='linear'))
model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp.Choice('learning_rate',values=[1e-2,1e-3,1e-4])),
loss='mean_absolute_error',metrics=['mean_absolute_error'])
return model
tuner=RandomSearch(
build_model,
objective='val_mean_absolute_error',
max_trials=5,
executions_per_trial=3,
directory='ANN_Study',
project_name='RandomSearch_Air_quality_index_tuner_0')
tuner.search_space_summary()
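# Under the hood, RandomSearch simply draws each hyperparameter independently for every trial.
# A stdlib sketch of that sampling loop (the space mirrors build_model above; names are ours):

```python
import random

random.seed(42)
search_space = {
    "number_layers": list(range(2, 21)),          # hp.Int('number_layers', 2, 20)
    "units": list(range(32, 513, 32)),            # hp.Int('units_i', 32, 512, step=32)
    "learning_rate": [1e-2, 1e-3, 1e-4],          # hp.Choice('learning_rate', ...)
}

def sample_trial(space):
    return {name: random.choice(values) for name, values in space.items()}

trials = [sample_trial(search_space) for _ in range(5)]  # max_trials=5
for t in trials:
    print(t)
```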
from sklearn.model_selection import train_test_split
X_train,x_test,Y_train,y_test=train_test_split(X,Y,test_size=0.3,random_state=0)
tuner.search(X_train,Y_train,epochs=100,validation_data=(x_test,y_test))
tuner.results_summary()
def build_model_1 (hp):
model=keras.Sequential()
for i in range(hp.Int('number_layers',2,20)):
model.add(layers.Dense(units=hp.Int('units_'+str(i),
min_value=32,
max_value=512,
step=32), activation='relu'))
model.add(layers.Dense(units=1,activation='linear'))
model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp.Choice('learning_rate',values=[1e-2,1e-3,1e-4])),
loss='mean_absolute_error',metrics=['mean_absolute_error'])
return model
tuner_1=RandomSearch(
build_model_1,
objective='val_mean_absolute_error',
max_trials=5,
executions_per_trial=3,
directory='ANN_Study',
project_name='RandomSearch_Air_quality_index_tuner_1')
tuner_1.search(X_train,Y_train,epochs=5,validation_data=(x_test,y_test))
tuner_1.results_summary()
# +
drop_out_val=0
# Hidden layer widths for the stacked Dense/Dropout blocks (same values as before)
hidden_units = [488, 256, 128, 96, 384, 224, 480, 480, 224, 192,
                32, 352, 384, 32, 320, 256, 288, 480, 448]
model = keras.Sequential()
for i, units in enumerate(hidden_units, start=1):
    if i == 1:
        model.add(layers.Dense(units, activation='relu', input_dim=8, name='layer1'))
    else:
        model.add(layers.Dense(units, activation='relu', name='layer' + str(i)))
    model.add(layers.Dropout(drop_out_val))
model.add(layers.Dense(1, activation='linear', name='outputlayer'))
model.compile(loss='mean_absolute_error',optimizer='adam',metrics=['mean_absolute_error'])
# -
model.fit(X_train,Y_train,epochs=100,batch_size=10,validation_data=(x_test,y_test))
test_acc=model.evaluate(x_test,y_test)
test_acc
# ### How to summarize Regression Predictions? ###
# 1. Classification can use a confusion matrix.
# 2. Regression can use the R² score (or MAE/RMSE).
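# To the question above: the R² score (1 minus residual sum of squares over total sum of
# squares) is a common single-number summary for regression. A stdlib sketch:

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

print(r2_score([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))  # close to 1 for a good fit
print(r2_score([1.0, 2.0, 3.0], [2.0, 2.0, 2.0]))  # 0.0: no better than predicting the mean
```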
| ANN_Study/RandomSearch ANN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# ## Welcome to Week 4, Single Cell RNA (cont.)!
#
# ### This week, we're going to go a bit deeper into scRNA analysis, such as how to interact with Seurat objects, add additional datatypes including CITE-seq and TCR/BCR-seq data, and create custom, publication-ready plots.
#
# We'll continue to use Seurat, which has some nice capabilities for multi-modal data analysis. The two datatypes we will be working with today are **CITE-seq** and **TCR/BCR-seq** data. The main idea of both is that additional information about the cell is captured using the same cell barcode from reverse transcription so that multiple types of data can be assigned to the same cell. CITE-seq is a method for capturing surface protein information using oligo-conjugated antibodies developed at the New York Genome Center. Here antibodies are conjugated to oligos which contain two important sequences: an antibody-specific barcode which is used to quantify surface protein levels in individual cells and a capture sequence (either a poly-A sequence or a 10X-specific capture sequence) which enables the antibody oligo to be tagged with the cell barcode during reverse transcription. You can look at more details in the publication here:
# * https://www.ncbi.nlm.nih.gov/pubmed/28759029
#
# Oligo-conjugated antibodies compatible with 10X scRNA (both 5' and 3') are commercially available from BioLegend (https://www.biolegend.com/en-us/totalseq) and can also be used to multiplex different samples in the same 10X capture. This works by using an antibody which recognizes a common surface antigen and using the antibody barcode to distinguish between samples, a process known as **cell hashing**:
# * https://www.ncbi.nlm.nih.gov/pubmed/30567574
#
# We won't be using hashtag data today, but many of the same strategies apply and feel free to reach out if you are interested in learning more!
#
# The second data type we will be working with is TCR/BCR sequencing data. T and B cells express a highly diverse repertoire of transcripts resulting from V(D)J recombination - the T cell receptor (TCR) in T cells and immunoglobulin (Ig) or BCR in B cells. Daughter cells will share the same TCR/BCR sequence, allowing this sequence to be used to track clonal cell populations over time and space, as well as infer lineage relationships. TCR/BCR sequences are amplified from the cDNA library in the 5' immune profiling 10X kit, allowing these sequences to be matched to the gene expression library from the same cell. For more details, see the 10X website:
# * https://www.10xgenomics.com/products/vdj/
#
# For both of these applications, a more in-depth understanding of how data is stored within a Seurat object will be helpful. We'll loosely be following this multimodal vignette here https://satijalab.org/seurat/v3.1/multimodal_vignette.html. Let's get started!
# ## Part 1: CITE-seq Analysis
# First, let's reload the R object from last week using the **readRDS** function.
# +
# load required libraries
library(dplyr)
library(Seurat)
library(patchwork)
library(RColorBrewer)
library(gridExtra)
library(ggplot2)
library(cowplot)
library(tidyr)
library(ggpubr)
# -
figpath <- '4_figures/'
dir.create(figpath)
pbmc <- readRDS("scrna_data/9K_pbmc.rds")
# +
# Now let's add in the CITE-seq data that we skipped over last week:
pbmc.data <- Read10X(data.dir = "filtered_feature_bc_matrix")
# -
# Since we did some filtering on our cells to remove doublets, let's subset the CITE data to only include these cells.
adt.data <- pbmc.data$`Antibody Capture`
adt.data <- adt.data[,colnames(adt.data) %in% colnames(pbmc)]
dim(adt.data)
dim(pbmc)
# which ADT features do we have?
rownames(adt.data)
# I think these names are too long. Let's remove -TotalSeqC from the end using gsub:
rownames(adt.data) <- gsub("_TotalSeqC", "", rownames(adt.data))
rownames(adt.data)
# +
# We add the CITE-seq data, or antibody-derived tags (ADT), to a separate "slot" in our R object.
# Note the difference from when we created a Seurat object from scratch using the RNA count data.
pbmc[["ADT"]] <- CreateAssayObject(counts = adt.data)
# -
# Our Seurat object now has two **assays** - RNA and ADT, which are stored in different slots. Seurat objects are a version of an S3 object in R. See more details here: https://github.com/satijalab/seurat/wiki/Seurat
#
# Here are some common ways of interacting with Seurat/S3 objects that will be useful. An important thing to consider now is which assay we are using by default, or the active assay (RNA or ADT for us).
# object summary:
pbmc
# how many RNA features and cells (since RNA is our default)
dim(pbmc)
# how many ADT features and cells (accessed through the ADT slot)
# we can access this two ways
dim(pbmc@assays$ADT)
dim(pbmc[["ADT"]])
# which assay types are stored in our object?
names(pbmc)
# +
# Let's process our ADT data according to the multimodal analysis vignette.
# Note that we specify the ADT assay when we run NormalizeData and ScaleData
pbmc <- NormalizeData(pbmc, assay = "ADT", normalization.method = "CLR")
pbmc <- ScaleData(pbmc, assay = "ADT")
# +
# Quickly let's refresh our memory on which clusters are which. Let's also add a custom color palette.
# It's ok if your clusters/UMAP are a little different!
IdentColors <- c(brewer.pal(n = 9, name = "Set1"), brewer.pal(n = 8, name = "Set2"))
names(IdentColors) <- levels(Idents(pbmc))
DimPlot(pbmc, reduction = "umap", label = TRUE
, cols = IdentColors[levels(Idents(pbmc))]) + NoLegend()
# -
# Now let's compare RNA to ADT data on our UMAP from last week:
# in this plot, protein (ADT) levels are on top, and RNA levels are on the bottom
FeaturePlot(pbmc, features = c("adt_CD3", "adt_CD19", "CD3E", "CD19")
, min.cutoff = "q05", max.cutoff = "q95", ncol = 2)
# +
# Now we'll do a bit to make this figure look a little nicer.
# I personally like a red color scale and to leave off the UMAP coordinates and scale bar when plotting a lot.
# We can also "flatten" the plots so they don't lag in illustrator.
# To give ourselves more control, let's return the FeaturePlot as a list.
# Feel free to play with other parameters!
plot_list <- FeaturePlot(pbmc, combine = FALSE, pt.size = 1
, features = c("adt_CD3", "adt_CD19", "CD3E", "CD19") #features to plot
, cols = c("lightgrey", brewer.pal(n = 9, name = "Reds")[7:9]) #colors to use
, sort.cell = TRUE) #put positive cells on top of plot
plot_list <- lapply(X = plot_list, FUN = AugmentPlot, width = 5, height = 5, dpi = 500) #flatten cells
for (i in 1:length(plot_list)) {
plot_list[[i]] <- plot_list[[i]] + NoAxes() #remove axes and combine plots
}
grid.arrange(grobs = plot_list, ncol = 2)
# +
# Let's make a larger plot comparing all our ADT antibodies with their RNA expression.
# Note that some protein names and RNA names differ, like PD-1 (gene name is PDCD1).
# Also CD45RO and CD45RA are different protein isoforms of CD45 so they will correspond to the same gene.
# Make sure it is organized nicely and rename the plot labels if you want.
# In the end save as a PDF and try opening in illustrator.
# see 4_ADTvRNA_UMAP.pdf for example output
# +
# Now let's compare how RNA compares to ADT for a few markers.
# I'll show an example, but then plot all markers and arrange nicely in a PDF
# Where does protein correlate well with RNA? Where does protein provide different information?
# see 4_ADTvRNA_scatter.pdf for example output
FeatureScatter(pbmc, feature1 = "adt_CD8a", feature2 = "CD8A", group.by = "orig.ident"
, cols = "navy") + NoLegend()
# -
# +
# Let's find protein markers for all clusters, and draw a heatmap
# But let's make it pretty
#Reorder pbmc factors based on cell type similarity
Idents(pbmc) <- factor(Idents(pbmc), levels = c('Naive CD4 T', 'Memory CD4 T', 'Memory CD8 T', 'Effector CD8 T/NK'
,'B cells 1', 'B cells 2', 'CD14 Monocytes 1', 'CD14 Monocytes 2'
, 'FCGR3A Monocytes', 'Platelets'))
adt.markers <- FindAllMarkers(pbmc, assay = "ADT", only.pos = TRUE)
adt.markers <- adt.markers %>% group_by(cluster) %>% top_n(20, avg_logFC)
# Let's use a constant number of cells for each cluster so small clusters aren't hard to see:
cells.use <- c()
for (i in unique(Idents(pbmc))) {
cells.use <- c(cells.use, sample(Cells(pbmc)[Idents(pbmc) == i], size = min(table(Idents(pbmc)))))
}
p <- DoHeatmap(pbmc, features = unique(adt.markers$gene)[!grepl("IgG", unique(adt.markers$gene))]
, group.colors = IdentColors[levels(Idents(pbmc))]
, cells = cells.use
, assay = "ADT", angle = 45
, size = 4
, raster = FALSE
) +
scale_fill_gradientn(colours = rev(brewer.pal(n = 11, name = "RdBu")[1:9])) +
NoLegend()
pdf(paste0(figpath, '4_ADT_heatmap.pdf'), useDingbats = FALSE, width = 6, height = 6)
p
dev.off()
# -
# ## Part 2: TCR/BCR-seq Analysis
# First, we need to download the required data from these links:
# * https://support.10xgenomics.com/single-cell-vdj/datasets/3.0.0/vdj_v1_hs_pbmc2_t
# * https://support.10xgenomics.com/single-cell-vdj/datasets/3.0.0/vdj_v1_hs_pbmc2_b
#
# We'll need to download the **Clonotype info (CSV)** and **Filtered contig annotations (CSV)** files.
# Let's take a look at the TCR clonotypes first:
tcr_clonotypes <- read.csv("scrna_data/vdj_v1_hs_pbmc2_t_clonotypes.csv")
head(tcr_clonotypes)
# The clonotypes file summarizes the TCR clonotypes (unique TCR sequences) detected in our data. It groups them by **clonotype_id** (an arbitrarily numbered group) and also provides the CDR3 amino acid (**cdr3s_aa**) and nucleotide (**cdr3s_nt**) sequences for both the alpha (TRA) and beta (TRB) chains of the TCR. *But*, this file does not have any cell barcode information we can use to match with our scRNA data! Let's look at the filtered contig annotation file next.
tcr_contigs <- read.csv("scrna_data/vdj_v1_hs_pbmc2_t_filtered_contig_annotations.csv")
head(tcr_contigs)
# This file contains a lot of information, but the most important for us is the link between the cell barcode (**barcode**) and the clonotype (**raw_clonotype_id**). Subset this dataframe to just include these columns and remove duplicate barcodes. Check that each cell barcode is assigned to only one clonotype.
# Now let's merge tcr_contigs with the clonotype information to annotate each cell with its TCR information:
# Finally, let's add the TCR information as metadata to our seurat object. Note that the cell names in our seurat object do not have the "-1" suffix for the cell names, so let's remove that using gsub. The rownames of our matrix also need to be the cell names for the **AddMetaData** function.
# Great! Let's write a function **add_clonotype** to do all of this for us. Our function should take as arguments: `seurat_object`, `clonotypes_path`, `contigs_path`, `lib_type` (tcr or bcr). Use this function to add the BCR data to our R object.
add_clonotype <- function(seurat_object, clonotypes_path, contigs_path, lib_type) {
#Add your code here!
}
# How many cells were matched with a TCR sequence? How many with a BCR sequence? How many with both? Which scRNA clusters have the most cells matched with TCR/BCR sequences?
# Let's color our UMAP by which cells were annotated with either a TCR or BCR sequence (Hint: use the `cells.highlight` argument to `DimPlot`).
# +
# see 4_TCR_BCR_UMAP.pdf for example output
# -
# Now make a stacked bar plot showing how many cells in each cluster were annotated with a TCR sequence, BCR sequence, or both (Hint: use the **FetchData** function and the **group_by** or **table** functions).
# +
# see 4_TCR_BCR_barplot.pdf for example output
# -
# How big is the biggest TCR clone? What about the biggest BCR clone? Do all cells in the clone belong to the same cluster?
# Let's make another stacked bar plot, this time plotting the clusters for cells belonging to the clones with at least 3 cells. What phenotypes are expanded clones enriched in? Do cells in the same clone often share the same phenotype or have different phenotypes?
# +
# see 4_TCR_clone_barplot.pdf for example output
# -
# Finally, let's use gene signature scoring to see if there are any differences in T cell activation signature between our T cell phenotypes. Download the GSEA T_CELL_ACTIVATION gene set from here: https://www.gsea-msigdb.org/gsea/msigdb/cards/T_CELL_ACTIVATION in text file format. Use the **AddModuleScore** function to score cells for this signature and then plot the distribution of gene scores separated by phenotype for all T cells assigned to a TCR clone. Use the **ggpubr** package to add p-values to your plot and check whether the T cell activation signature is higher in the other phenotypes than in naive T cells.
# +
# see 4_TcellActivation_violinplot.pdf for example output
| wk4_scRNA-multimodal/4_scRNA_multimodal_R_Seurat.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# queue: stores data in FIFO order
# If a queue is implemented with a plain list:
##  enqueue: .append(data) is O(1)
##  dequeue: .pop(0) is O(n), since every remaining element shifts left
# Python's deque is the fastest way to implement a queue
##  deque: double-ended queue, processes data at both ends
##  operations at either end are all O(1)
from collections import deque
dq = deque([])
dq.append(1)
dq.append(2)
dq.append(3)
# -
print(dq)
popResult = dq.popleft()
print(popResult)
print(dq)
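# The complexity difference is easy to see empirically: draining a list with .pop(0) is
# quadratic overall, while draining a deque with .popleft() is linear. A small sketch:

```python
from collections import deque
from timeit import timeit

def drain_list(n):
    q = list(range(n))
    while q:
        q.pop(0)       # O(n) per pop: every remaining element shifts left
    return n

def drain_deque(n):
    q = deque(range(n))
    while q:
        q.popleft()    # O(1) per pop
    return n

print("list :", timeit(lambda: drain_list(5000), number=1))
print("deque:", timeit(lambda: drain_deque(5000), number=1))
```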
| data-structure/queue.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="XViM8k_IAEYV"
# ## Part 3 - IC - IBRA
# + [markdown] id="TLCjfgsOQ-eF"
# ### Importing basic libraries
# + id="C8mm81iU_6TC" colab={"base_uri": "https://localhost:8080/"} outputId="98acbb85-46c9-4c6c-e967-4e29c2e7bb33"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="wz_a-eu7_3Yd" outputId="09242434-406d-4d21-b2b7-2d9e008c5a74"
# !pip install xlsxwriter
# + colab={"base_uri": "https://localhost:8080/"} id="O8H40pDz_0wA" outputId="e39a1421-b778-4439-d185-4c7323cc8693"
# !pip install sentencepiece
# + colab={"base_uri": "https://localhost:8080/"} id="epenadLd_xyd" outputId="19492482-d11d-43fc-9b61-1a25dd43daeb"
# !pip install transformers
# + colab={"base_uri": "https://localhost:8080/", "height": 878} id="GRiCW5IY_vCL" outputId="908ef47b-8bd3-42d2-d02a-06c8bd1c8b74"
# !pip install fairseq fastBPE
# + id="T4ZahsVI_q2I" colab={"base_uri": "https://localhost:8080/"} outputId="7c41ac91-dbf7-4303-ff00-eb36023a3d7b"
import pandas as pd, numpy as np
import tensorflow as tf
import tensorflow.keras.backend as K
from sklearn.model_selection import StratifiedKFold
import tokenizers
from transformers import RobertaConfig, TFRobertaModel
print('TF version',tf.__version__)
np.random.seed(seed=42)
tf.keras.utils.set_random_seed(42)
# + id="jCNdC0AM_occ"
from types import SimpleNamespace
from fairseq.data.encoders.fastbpe import fastBPE
from fairseq.data import Dictionary
from sklearn.utils import shuffle
# + id="p-OIbg0h_ksZ"
import json
import tensorflow as tf
import csv
import random
import numpy as np
import sklearn
import pandas as pd
import seaborn as sb
import matplotlib.pyplot as plt
import random
# + colab={"base_uri": "https://localhost:8080/"} id="wSHuLY9J_icM" outputId="5cd13431-9f15-4a26-fca4-e2d8c6741cb5"
# !wget https://public.vinai.io/BERTweet_base_transformers.tar.gz
# !tar -xzvf BERTweet_base_transformers.tar.gz
# + id="w78WuoS2Rf0M"
class BERTweetTokenizer():
def __init__(self,pretrained_path = '/content/BERTweet_base_transformers/'):
self.bpe = fastBPE(SimpleNamespace(bpe_codes= pretrained_path + "bpe.codes"))
self.vocab = Dictionary()
self.vocab.add_from_file(pretrained_path + "dict.txt")
self.cls_token_id = 0
self.pad_token_id = 1
self.sep_token_id = 2
self.pad_token = '<pad>'
self.cls_token = '<s>'
self.sep_token = '</s>'
def bpe_encode(self,text):
return self.bpe.encode(text) # bpe.encode(line)
def encode(self,text,add_special_tokens=False):
subwords = self.bpe.encode(text)
input_ids = self.vocab.encode_line(subwords, append_eos=False, add_if_not_exist=False).long().tolist() ## Map subword tokens to corresponding indices in the dictionary
return input_ids
def tokenize(self,text):
return self.bpe_encode(text).split()
def convert_tokens_to_ids(self,tokens):
input_ids = self.vocab.encode_line(' '.join(tokens), append_eos=False, add_if_not_exist=False).long().tolist()
return input_ids
#from: https://www.kaggle.com/nandhuelan/bertweet-first-look
def decode_id(self,id):
return self.vocab.string(id, bpe_symbol = '@@')
def decode_id_nospace(self,id):
return self.vocab.string(id, bpe_symbol = '@@ ')
def bert_encode(self, texts, max_len=512):
all_tokens = []
all_masks = []
all_segments = []
for text in texts:
text = self.bpe.encode(text)
input_sequence = '<s> ' + text + ' </s>'
enc = self.vocab.encode_line(input_sequence, append_eos=False, add_if_not_exist=False).long().tolist()
enc = enc[:max_len-2]
pad_len = max_len - len(enc)
tokens = enc + [1] * pad_len #input_ids
pad_masks = [1] * len(enc) + [0] * pad_len #attention_mask
segment_ids = [0] * max_len #token_type_ids
all_tokens.append(tokens)
all_masks.append(pad_masks)
all_segments.append(segment_ids)
return np.array(all_tokens), np.array(all_masks), np.array(all_segments)
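# The padding/masking logic in bert_encode can be isolated into a small helper. A stdlib
# sketch using the special-token ids defined in the tokenizer (cls=0, pad=1, sep=2); note the
# original truncates after adding the special tokens, so edge behaviour differs slightly:

```python
def pad_and_mask(token_ids, max_len, cls_id=0, pad_id=1, sep_id=2):
    ids = [cls_id] + token_ids[:max_len - 2] + [sep_id]   # add <s> ... </s>, truncate if needed
    pad_len = max_len - len(ids)
    input_ids = ids + [pad_id] * pad_len
    attention_mask = [1] * len(ids) + [0] * pad_len       # 1 = real token, 0 = padding
    return input_ids, attention_mask

ids, mask = pad_and_mask([10, 11, 12], max_len=8)
print(ids)   # [0, 10, 11, 12, 2, 1, 1, 1]
print(mask)  # [1, 1, 1, 1, 1, 0, 0, 0]
```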
# + id="ZRsS2KVsRspI"
def build_model(max_len=512):
PATH = '/content/BERTweet_base_transformers/'
input_word_ids = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="input_word_ids")
input_mask = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="input_mask")
segment_ids = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="segment_ids")
config = RobertaConfig.from_pretrained(PATH+'config.json')
bert_model = TFRobertaModel.from_pretrained(PATH+'model.bin',config=config,from_pt=True)
x = bert_model(input_word_ids,attention_mask=input_mask,token_type_ids=segment_ids)
#pooled_output, sequence_output = bert_layer([input_word_ids, input_mask, segment_ids])
#clf_output = sequence_output[:, 0, :]
net = tf.keras.layers.Dense(64, activation='relu')(x[0])
net = tf.keras.layers.Dropout(0.2)(net)
net = tf.keras.layers.Dense(32, activation='relu')(net)
net = tf.keras.layers.Dropout(0.2)(net)
net = tf.keras.layers.Flatten()(net)
out = tf.keras.layers.Dense(1, activation='sigmoid')(net)
model = tf.keras.models.Model(inputs=[input_word_ids, input_mask, segment_ids], outputs=out)
    model.compile(tf.keras.optimizers.Adam(learning_rate=1e-5), loss='binary_crossentropy', metrics=['accuracy'])
#print(out.shape)
return model
# + colab={"base_uri": "https://localhost:8080/", "height": 203} id="P9rUBzvkRwWo" outputId="c25a1f8b-bac4-4889-fc31-32c41460c6dd"
df = pd.read_csv('/content/drive/Shareddrives/Projeto IBRA USP/Coleta de Dados/Datasets - IBRA/E1 - Hate Speech and Offensive Language/labeled_data.csv', dtype={'Class': int, 'Tweet': str})
df.head()
# + [markdown] id="k_jclWVOVfOT"
# ### Pre-processing
# + id="3j9k1gVwT3kx"
# Check whether any tweet values are null
null_tweets = df[df['tweet'].isna()]
# + colab={"base_uri": "https://localhost:8080/", "height": 49} id="g3cuuPvYVYR4" outputId="09c3740c-a8b2-49fe-f531-961459823c7c"
null_tweets
# + id="KfXlQ_nMVrNl"
# Analyze tweet lengths
df['tweet_len'] = df.tweet.apply(lambda x: len(x))
# + colab={"base_uri": "https://localhost:8080/", "height": 314} id="GGzanIv4WPuG" outputId="627d8d8d-8104-459e-9b3f-075eab692f79"
df.hist('tweet_len', bins=400)
# + id="GeWn32AsXcIr"
#df_without_outliers = df[(df.tweet_len > df.tweet_len.quantile(5/1000))]
df_small_outliers = df[df.tweet_len < df.tweet_len.quantile(5/1000)]
df_big_outliers = df[df.tweet_len > df.tweet_len.quantile(995/1000)]
# + colab={"base_uri": "https://localhost:8080/", "height": 417} id="PosbswcjYvnM" outputId="805a953a-18bb-4376-d60a-37c807a80f34"
df_small_outliers.sort_values(by='tweet_len')
# + colab={"base_uri": "https://localhost:8080/", "height": 655} id="QX7qsFp-ZE3D" outputId="675582a6-023e-484e-98d0-c4a9ff04e8ca"
df_big_outliers.sort_values(by='tweet_len')
# + id="_hw15kPGbDiR"
df = df.drop(['tweet_len'], axis=1)
# + [markdown] id="mZBpHcjq8Ve7"
# ## Class Pre-processing
#
# Creating the necessary columns to analyze the model.
# + [markdown] id="kr6NdnMCZ43e"
# #### Binary Class to Hate speech
#
# A class will be created that will have a value of 1 if it is offensive language or hate speech, and zero if it is not.
#
# + id="wtMVpP50Z4No"
def binaryClassHateSpeech(dataframe, mod=1):
    if mod == 1:  # group hate speech and offensive language together
        dataframe['hate_ofencive_speech'] = dataframe['class'].apply(lambda x: 1 if x != 2 else 0)
    if mod == 2:  # keep only hate speech
        dataframe['hate_ofencive_speech'] = dataframe['class'].apply(lambda x: 1 if x == 0 else 0)
    return dataframe
# + id="c4Qd-6rTdkFH"
df = binaryClassHateSpeech(df)
# + colab={"base_uri": "https://localhost:8080/", "height": 405} id="p0XJPL5tdrWi" outputId="f99c8588-8422-43c1-8dd6-d42b3e67f2c9"
df.head()
# + [markdown] id="OVf6d7bD81qw"
# #### Creating columns with artificial subclassification
# + id="u3V3LUiW-biH"
def creat_subclass(df, column='hate_ofencive_speech', number_subclasses=3, percent=0.7, seed=10):
    random.seed(seed)
    for i in range(number_subclasses):
        df['subclass' + str(i)] = df[column].apply(lambda x: 1 if (x==1 and random.random()>percent) else 0)
    return df
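# The rule above flags roughly (1 - percent) of the positive examples, and the
# fixed seed makes the artificial subclasses reproducible. A standalone sketch
# of just the labeling rule, with toy data (not the notebook's dataframe):
# +
import random
random.seed(10)
flags = [1 if random.random() > 0.7 else 0 for _ in range(1000)]
print(sum(flags))  # roughly 300 of 1000 positives get the subclass label
# -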
# + id="ohgwvRgnA5sI"
df = creat_subclass(df)
# + colab={"base_uri": "https://localhost:8080/", "height": 475} id="CK6NxjunBDig" outputId="300cc00c-3218-420f-8539-7546fafeba54"
df.head()
# + [markdown] id="1qQgucZJBfcu"
# ## Making Samples for the Model
# + id="kwYm10n8BlnJ"
# Split the dataset into train and test
def separate_train_and_test(df, class_column, sub_classes_toTakeOff=[], sub_classes_toKeep=[], seed=42, percent_sample=0.1, sample_index=1):
    # The test sample is the whole dataset minus the elements used for training
    if sample_index*percent_sample > 1:
        print("ERROR: invalid sample index")
        return [], []
    df_without_subclasses = df
    # Drop the subclasses we don't want and keep the ones we do
    for subclass in sub_classes_toTakeOff:
        df_without_subclasses = df_without_subclasses[df_without_subclasses[subclass] != 1]
    for subclass in sub_classes_toKeep:
        df_without_subclasses = df_without_subclasses[df_without_subclasses[subclass] == 1]
    df_without_subclasses = shuffle(df_without_subclasses, random_state=seed)
    # Get the samples with manual stratification
    df2 = df_without_subclasses[df_without_subclasses[class_column] == 1]
    tam_df2 = df2.shape[0]
    df_train2 = df2[int(percent_sample*tam_df2*(sample_index-1)):int(percent_sample*tam_df2*(sample_index))]
    df_test2 = df.loc[df[class_column] == 1].drop(df_train2.index)
    df3 = df_without_subclasses[df_without_subclasses[class_column] == 0]
    tam_df3 = df3.shape[0]
    df_train3 = df3[int(percent_sample*tam_df3*(sample_index-1)):int(percent_sample*tam_df3*(sample_index))]
    df_test3 = df.loc[df[class_column] == 0].drop(df_train3.index)
    # Concatenate the stratified halves
    df_train = pd.concat([df_train3, df_train2])
    df_test = pd.concat([df_test3, df_test2])
    # Shuffle
    df_train = shuffle(df_train, random_state=seed)
    df_test = shuffle(df_test, random_state=seed)
    return df_train, df_test
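# To illustrate the slicing rule above: `percent_sample` and `sample_index`
# carve disjoint, equally sized slices out of the shuffled data, so different
# sample indices never overlap. A standalone sketch with toy data (not the
# notebook's dataframe):
# +
import numpy as np
data = np.arange(20)
percent_sample = 0.25
for sample_index in [1, 2]:
    lo = int(percent_sample * len(data) * (sample_index - 1))
    hi = int(percent_sample * len(data) * sample_index)
    print(sample_index, data[lo:hi])  # index 1 -> [0..4], index 2 -> [5..9]
# -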
# + id="VNhQ9pF-S3di"
df_train1, df_test = separate_train_and_test(df, 'hate_ofencive_speech',sub_classes_toKeep=['subclass0'])
# + colab={"base_uri": "https://localhost:8080/", "height": 655} id="WAoCa0-DU7Vc" outputId="e8e70881-4622-455e-d230-682dbbd2f1fe"
df_train1
# + id="rDSXyCS4V_P1"
df_train2, df_test = separate_train_and_test(df, 'hate_ofencive_speech',sub_classes_toTakeOff=['subclass0'])
# + colab={"base_uri": "https://localhost:8080/", "height": 756} id="frKBdpUsWEI7" outputId="adc67047-ebac-44c7-edd2-3ea1c691ed8c"
df_train2
# + colab={"base_uri": "https://localhost:8080/"} id="jNzFiTmKWMBb" outputId="875eab65-4b48-4551-80b7-978db94abe53"
df.shape
# + colab={"base_uri": "https://localhost:8080/"} id="pr0Rpss1Wiqt" outputId="591ba861-a77e-461a-d017-14b6890aa835"
df_train2.shape
# + id="t63nMIlOWmB6" colab={"base_uri": "https://localhost:8080/"} outputId="98e5289c-bf57-4325-b853-d150afae3f6f"
df_test.shape
# + [markdown] id="_5qX28WhCnHh"
# #### Separate the validation dataset
# + id="PPvcdd9KCv3a"
def separate_train_validation(df_train, class_column, percent=0.7, seed=12):
    from sklearn.model_selection import train_test_split
    X_t, X_val, y_t, y_val = train_test_split(df_train, df_train[class_column], train_size=percent, random_state=seed)
    return X_t, X_val
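# Note: the helper above draws a plain random split; passing the label column
# to `train_test_split`'s `stratify` parameter would preserve the class
# proportions in both halves, which matters with classes as imbalanced as
# these. A small sketch with made-up toy data:
# +
from sklearn.model_selection import train_test_split
import pandas as pd
toy = pd.DataFrame({"tweet": ["t%d" % i for i in range(10)],
                    "label": [0] * 7 + [1] * 3})
tr, val = train_test_split(toy, train_size=0.7, stratify=toy["label"], random_state=12)
print(len(tr), sorted(tr["label"].tolist()))
# -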
# + [markdown] id="OKm26zbRCa5c"
# ## Running the BERTweet Model
# + id="bEBZok4yCjPn"
df_train, df_test = separate_train_and_test(df, 'hate_ofencive_speech',sub_classes_toTakeOff=['subclass0'])
# + id="a20SgtMtFGzH"
df_train, df_val = separate_train_validation(df_train, 'hate_ofencive_speech')
# + id="aZyPIdmjF0xd"
# Convert dataframe columns to NumPy arrays
def transform_to_numpy(df, tweet_column, class_column):
    X = df[tweet_column].to_numpy()
    Y = df[class_column].to_numpy()
    return X, Y
# + id="WX0ZYFMqGZEk"
X_train, Y_train = transform_to_numpy(df_train, 'tweet', 'hate_ofencive_speech')
X_test, Y_test = transform_to_numpy(df_test, 'tweet', 'hate_ofencive_speech')
X_val, Y_val = transform_to_numpy(df_val, 'tweet', 'hate_ofencive_speech')
# + id="RGUJitBhFegr"
def class_size_graph(Y_train, Y_test, Y_val):
    labels = ["%s"%i for i in range(3)]
    unique, counts = np.unique(Y_train, return_counts=True)
    uniquet, countst = np.unique(Y_test, return_counts=True)
    uniquev, countsv = np.unique(Y_val, return_counts=True)
    fig, ax = plt.subplots()
    rects3 = ax.bar(uniquev - 0.5, countsv, 0.25, label='Validation')
    rects1 = ax.bar(unique - 0.2, counts, 0.25, label='Train')
    rects2 = ax.bar(unique + 0.1, countst, 0.25, label='Test')
    ax.legend()
    ax.set_xticks(unique)
    ax.set_xticklabels(labels)
    plt.title('Hate Speech classes')
    plt.xlabel('Class')
    plt.ylabel('Frequency')
    plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="cCqWB52I7M5z" outputId="61129865-2dc0-4895-c2c6-e8c56f5e827b"
class_size_graph(Y_train,Y_test, Y_val)
# + id="utduI_jsJrLk"
# Tokenization: encode the text before fitting (same steps as in train_model below)
max_len = 32
tokenizer = BERTweetTokenizer()
X_train = tokenizer.bert_encode(X_train, max_len=max_len)
X_val = tokenizer.bert_encode(X_val, max_len=max_len)
# + colab={"base_uri": "https://localhost:8080/"} id="ZWLqmhPzJ3dn" outputId="c05e8cdf-8d25-4868-ad88-462785834786"
model = build_model(max_len=max_len)
model.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="DFqva1LEJ-kl" outputId="204b406a-2e02-4b1d-d2b6-2b5fa59a813e"
train_history = model.fit(
X_train, Y_train,
validation_data=(X_val, Y_val),
epochs=3,
batch_size=16,
verbose=1
)
#model.save_weights('savefile')
# + id="KlaEopL-KdLi"
# Run the model on the test dataframe
X_test = tokenizer.bert_encode(X_test, max_len=max_len)
P_hat = model.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="pEC2eb34Lc0C" outputId="f50923f6-f6b3-47bb-856c-4f957ea8af98"
P_hat[:20]
# + [markdown] id="45tEpaBFLsq1"
# ## Analyzing the result
# + id="s4BmGlFQLeuo"
def plot_confusion_matrix(y, y_pred, beta=2):
    """
    Receives an array with the ground truth (y) and another with the
    prediction (y_pred), both with binary labels (positive=1, negative=0),
    prints the main metrics and plots the confusion matrix.
    """
    TP = np.sum((y_pred == 1) * (y == 1))
    TN = np.sum((y_pred == 0) * (y == 0))
    FP = np.sum((y_pred == 1) * (y == 0))
    FN = np.sum((y_pred == 0) * (y == 1))
    total = TP+FP+TN+FN
    accuracy = (TP+TN)/total
    recall = (TP)/(TP+FN)
    precision = (TP)/(TP+FP)
    Fbeta = (precision*recall)*(1+beta**2)/(beta**2*precision + recall)
    print("TP = %4d FP = %4d\nFN = %4d TN = %4d\n"%(TP,FP,FN,TN))
    print("Accuracy = %d / %d (%f)" %((TP+TN), total, accuracy))
    print("Recall = %d / %d (%f)" %(TP, (TP+FN), recall))
    print("Precision = %d / %d (%f)" %(TP, (TP+FP), precision))
    print("Fbeta Score = %f" %(Fbeta))
    confusion = [
        [TP/(TP+FN), FP/(TN+FP)],
        [FN/(TP+FN), TN/(TN+FP)]
    ]
    P = 1  # positive class id
    N = 0  # negative class id
    df_cm = pd.DataFrame(confusion,
                         [r'$\hat{y} = %d$'%P, r'$\hat{y} = %d$'%N],
                         ['$y = %d$'%P, '$y = %d$'%N])
    plt.figure(figsize=(8, 4))
    sb.set(font_scale=1.4)
    sb.heatmap(df_cm, annot=True)  # annot_kws={"size": 16}, cmap='coolwarm'
    plt.show()
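# The F-beta score used above generalizes F1: for beta > 1 recall is weighted
# more heavily than precision. A tiny standalone version of the formula, with
# a hypothetical helper name (not part of the notebook):
# +
def fbeta_from_counts(tp, fp, fn, beta=2.0):
    # F-beta from raw confusion-matrix counts
    precision = tp / (tp + fp)
    recall_ = tp / (tp + fn)
    return (1 + beta**2) * precision * recall_ / (beta**2 * precision + recall_)

print(fbeta_from_counts(3, 1, 2))          # beta=2 favors recall
print(fbeta_from_counts(3, 1, 2, beta=1))  # beta=1 is the ordinary F1
# -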
# + colab={"base_uri": "https://localhost:8080/", "height": 378} id="3sv17QkHLhzQ" outputId="b3835e07-45a6-4a29-ebdb-a17fdf667da7"
threshold = 0.55
y_hat = np.where(P_hat > threshold, 1, 0)
y_test = Y_test.reshape([Y_test.shape[0], 1])
total = len(y_hat)
plot_confusion_matrix(y_test, y_hat)
# + id="1592zsNbhOmC"
def recall(y, y_pred):
    TP = np.sum((y_pred == 1) * (y == 1))
    FN = np.sum((y_pred == 0) * (y == 1))
    return TP / (TP + FN)
# + id="7TnmHjS2h7ZB"
def plot_threshold_recall(Y_test, P_hat, step_size=0.05):
    recalls = []
    thresholds = []
    Y_test = Y_test.reshape([Y_test.shape[0], 1])
    # np.arange avoids the drift of repeatedly adding a float step
    for threshold in np.arange(0.2, 0.95, step_size):
        Y_hat = np.where(P_hat > threshold, 1, 0)
        recalls.append(recall(Y_test, Y_hat))
        thresholds.append(threshold)
    plt.plot(thresholds, recalls)
# + colab={"base_uri": "https://localhost:8080/", "height": 273} id="EPOtrTVwj0PZ" outputId="14495b79-f239-469f-ef81-4936e53f652d"
plot_threshold_recall(Y_test, P_hat, step_size=0.05)
# + [markdown] id="RbKzUx03j29n"
# ### Put all the process together
# + id="UhcZ8J8Pj2Iz"
def train_model(df_train, df_test, df_val, xColumn, yColumn):
    # Convert to numpy arrays
    X_train, Y_train = transform_to_numpy(df_train, xColumn, yColumn)
    X_test, Y_test = transform_to_numpy(df_test, xColumn, yColumn)
    X_val, Y_val = transform_to_numpy(df_val, xColumn, yColumn)
    # Tokenization
    max_len = 32
    tokenizer = BERTweetTokenizer()
    X_train = tokenizer.bert_encode(X_train, max_len=max_len)
    X_val = tokenizer.bert_encode(X_val, max_len=max_len)
    # Train the model
    model = build_model(max_len=max_len)
    model.summary()
    train_history = model.fit(
        X_train, Y_train,
        validation_data=(X_val, Y_val),
        epochs=3,
        batch_size=16,
        verbose=1
    )
    model.save_weights('savefile')
    # Run the model on the test dataframe
    X_test = tokenizer.bert_encode(X_test, max_len=max_len)
    P_hat = model.predict(X_test)
    return P_hat
# + id="FWEmQ_wFm1OI"
df = pd.read_csv('/content/drive/MyDrive/IME/IC/datasets/D1-original.csv', dtype={'Class': int, 'Tweet': str})
df = binaryClassHateSpeech(df, mod=2) #let's try just with hate speech
df = creat_subclass(df)
# + id="fpIaAq0Om8Jg"
# Separate train, test and validation
df_train, df_test = separate_train_and_test(df, 'hate_ofencive_speech',sub_classes_toTakeOff=['subclass0'])
df_train, df_val = separate_train_validation(df_train, 'hate_ofencive_speech')
# + colab={"base_uri": "https://localhost:8080/", "height": 315} id="KMjWD0dD7TMt" outputId="6851224a-256a-496c-d73e-bf6d3ebb9000"
# Plot the class distribution
X_train, Y_train = transform_to_numpy(df_train, 'tweet', 'hate_ofencive_speech')
X_test, Y_test = transform_to_numpy(df_test, 'tweet', 'hate_ofencive_speech')
X_val, Y_val = transform_to_numpy(df_val, 'tweet', 'hate_ofencive_speech')
class_size_graph(Y_train, Y_test, Y_val)
# + colab={"base_uri": "https://localhost:8080/"} id="PHj4JsyA7ywh" outputId="68f8e4ff-d604-4d3b-8f2b-d47b8bc585e4"
P_hat = train_model(df_train, df_test, df_val, 'tweet', 'hate_ofencive_speech')
# + colab={"base_uri": "https://localhost:8080/", "height": 384} id="1ERyhxSf6Owi" outputId="0246a30e-0ce5-498b-90f8-06c83a49f654"
# Plot the confusion matrix
threshold = 0.55
Y_hat = np.where(P_hat > threshold, 1, 0)
Y_test = Y_test.reshape([Y_test.shape[0], 1])
total = len(Y_hat)
plot_confusion_matrix(Y_test, Y_hat)
# + [markdown] id="K-3KVg2US1p7"
# # Analyzing a new dataset (**E9**)
# + [markdown] id="SE_xIoTAViY8"
# ### Pre-processing
# + colab={"base_uri": "https://localhost:8080/", "height": 203} id="hhq_4A8yS7kY" outputId="4bb5e4f8-3519-4170-a727-63f2ac738e92"
test_df = pd.read_csv('/content/drive/Shareddrives/Projeto IBRA USP/Coleta de Dados/Datasets - IBRA/collected_tweets/NAACL_SRW_2016.csv')#, encoding = 'latin-1')
test_label_df = pd.read_csv('/content/drive/Shareddrives/Projeto IBRA USP/Coleta de Dados/Datasets - IBRA/collected_tweets/NAACL_SRW_2016Labels.csv', header = None)
df = pd.concat([test_df,test_label_df], axis = 1)
df.columns = ['tweet','class']
df = df.dropna()
df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="UFaScniyXQ5C" outputId="84c96966-036c-45e3-e954-20348b51332b"
pd.unique(df['class'])
# + id="bw5ZP6U3VnU1"
df['hate_speech'] = df['class'].apply(lambda x : 0 if x=='none' else 1)
df['racism'] = df['class'].apply(lambda x : 1 if x=='racism' else 0)
df['sexism'] = df['class'].apply(lambda x : 1 if x=='sexism' else 0)
# + colab={"base_uri": "https://localhost:8080/", "height": 203} id="TFA0F1qsWTsX" outputId="fbfcf216-58e3-4acc-815c-d30fd14631ca"
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="NJO0oxrwXiKD" outputId="13acfb54-7034-434d-a9f5-dca2d8139c0a"
plt.hist(df['class'])
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="uQQ36fREX48Z" outputId="bffe4518-6ac8-434b-d973-a1d6e80706e5"
len(df[df['class'] == 'racism'])
# + colab={"base_uri": "https://localhost:8080/"} id="Xt-g3nGTg5U5" outputId="16fbb6dd-a1c0-4c76-a0d5-5057da787606"
len(df[df['class'] == 'sexism'])
# + colab={"base_uri": "https://localhost:8080/"} id="kCnKx8ULhDhg" outputId="7740c14c-618f-4bfc-ca7c-bc4c51031c1d"
len(df[df['class'] == 'none'])
# + colab={"base_uri": "https://localhost:8080/"} id="FOhRVBjMZIUE" outputId="8fd4aef9-5f29-4638-8f7e-db27214abbdd"
for i in range(10):
    print(df['tweet'][i] + '\n')
# + [markdown] id="7bi1DWtQaM3s"
# ### Running the model
# + id="FHDpyIdaaMM3"
"""
Training with a sample of sexism
"""
# Separate train, test and validation
#def separate_train_and_test(df, class_column ,sub_classes_toTakeOff=[], sub_classes_toKeep=[], seed=42, percent_sample=0.1, sample_index=1):
df_train, df_test = separate_train_and_test(df, 'hate_speech',sub_classes_toTakeOff=['racism'])
df_train, df_val = separate_train_validation(df_train, 'hate_speech')
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="u2oVbHfAbxCG" outputId="acaf57d7-8c6c-4e09-a71c-f4dca212678c"
# Plot the class distribution
X_train, Y_train = transform_to_numpy(df_train, 'tweet', 'hate_speech')
X_test, Y_test = transform_to_numpy(df_test, 'tweet', 'hate_speech')
X_val, Y_val = transform_to_numpy(df_val, 'tweet', 'hate_speech')
class_size_graph(Y_train, Y_test, Y_val)
# + colab={"base_uri": "https://localhost:8080/"} id="XHbfxWHJXAkx" outputId="932984b4-00e1-4c83-9980-0198cfeace79"
print(len(df_train), len(df_test), len(df_val))
print(len(df_train[df_train['hate_speech']== 1]), len(df_test[df_test['hate_speech'] == 1]), len(df_val[df_val['hate_speech']==1]))
# + colab={"base_uri": "https://localhost:8080/"} id="ARmX0jY4Z3Jj" outputId="cae7ed14-f5e3-4889-91ec-a88677ef6dba"
P_hat, model = train_model2(df_train, df_test, df_val, 'tweet', 'hate_speech')
# + colab={"base_uri": "https://localhost:8080/", "height": 394} id="bXAd2CAZfNuL" outputId="fafca197-dd49-4d8a-f39d-76fef7f6734b"
# Plot the confusion matrix
threshold = 0.55
Y_hat = np.where(P_hat > threshold, 1, 0)
Y_test = Y_test.reshape([Y_test.shape[0], 1])
total = len(Y_hat)
plot_confusion_matrix(Y_test, Y_hat)
# + [markdown] id="VZjpVfY5hB_4"
# ### Increasing the size of the sample
# + id="9BQE1GcLhkFb"
def train_model2(df_train, df_test, df_val, xColumn, yColumn):
    # Convert to numpy arrays
    X_train, Y_train = transform_to_numpy(df_train, xColumn, yColumn)
    X_test, Y_test = transform_to_numpy(df_test, xColumn, yColumn)
    X_val, Y_val = transform_to_numpy(df_val, xColumn, yColumn)
    # Tokenization
    max_len = 32
    tokenizer = BERTweetTokenizer()
    X_train = tokenizer.bert_encode(X_train, max_len=max_len)
    X_val = tokenizer.bert_encode(X_val, max_len=max_len)
    # Train the model
    model = build_model(max_len=max_len)
    model.summary()
    train_history = model.fit(
        X_train, Y_train,
        validation_data=(X_val, Y_val),
        epochs=3,
        batch_size=16,
        verbose=1
    )
    model.save_weights('savefile')
    # Run the model on the test dataframe
    X_test = tokenizer.bert_encode(X_test, max_len=max_len)
    P_hat = model.predict(X_test)
    return P_hat, model
# + id="8x0jJDdcg-C4"
"""
Training with a sample of sexism
"""
# Separate train, test and validation
#def separate_train_and_test(df, class_column ,sub_classes_toTakeOff=[], sub_classes_toKeep=[], seed=42, percent_sample=0.1, sample_index=1):
df_train, df_test = separate_train_and_test(df, 'hate_speech',sub_classes_toTakeOff=['racism'],percent_sample=0.5)
df_train, df_val = separate_train_validation(df_train, 'hate_speech')
# + colab={"base_uri": "https://localhost:8080/", "height": 315} id="HWSPS4JlhUEw" outputId="923d48e1-4773-4db5-b34e-558e3c7f1b08"
# Plot the class distribution
X_train, Y_train = transform_to_numpy(df_train, 'tweet', 'hate_speech')
X_test, Y_test = transform_to_numpy(df_test, 'tweet', 'hate_speech')
X_val, Y_val = transform_to_numpy(df_val, 'tweet', 'hate_speech')
class_size_graph(Y_train, Y_test, Y_val)
# + colab={"base_uri": "https://localhost:8080/"} id="ZHqJw8ueZfSB" outputId="e2f0cd90-958c-4954-d641-2cc0797f68c8"
print(len(df_train), len(df_test), len(df_val))
print(len(df_train[df_train['hate_speech']== 1]), len(df_test[df_test['hate_speech'] == 1]), len(df_val[df_val['hate_speech']==1]))
# + colab={"base_uri": "https://localhost:8080/"} id="7YrxWYaxhaEG" outputId="3b0f076e-582f-41d3-a1ad-dc082fa12c98"
P_hat, model2 = train_model2(df_train, df_test, df_val, 'tweet', 'hate_speech')
# + colab={"base_uri": "https://localhost:8080/", "height": 394} id="24ritRFJhg56" outputId="23bfa941-665e-40b1-b5a6-11c596adc2df"
# Plot the confusion matrix
threshold = 0.55
Y_hat = np.where(P_hat > threshold, 1, 0)
Y_test = Y_test.reshape([Y_test.shape[0], 1])
total = len(Y_hat)
plot_confusion_matrix(Y_test, Y_hat)
# + [markdown] id="cq_3dHOIm0eF"
# Testing the model with another dataset
# + id="ZxcJPV-Km0Bw"
df_E1 = pd.read_csv('/content/drive/MyDrive/IME/IC/datasets/D1-original.csv', dtype={'Class': int, 'Tweet': str})
df_E1 = binaryClassHateSpeech(df_E1, mod=1) # mod=1 combines hate speech and offensive language
# + id="wOxBhJ3Anqad"
X_test, Y_test = transform_to_numpy(df_E1, 'tweet', 'hate_ofencive_speech')
# + colab={"base_uri": "https://localhost:8080/", "height": 273} id="Ds8KJGzi2M8T" outputId="5a01474a-6697-4db8-db43-2e8d0e1407d7"
plt.hist(Y_test)
plt.show()
# + [markdown] id="E5MM_Pi2YBM6"
# Sample with 10% of the sexism cases
# + id="gIWj3oAMYAFl"
# Tokenization
max_len = 32
tokenizer = BERTweetTokenizer()
X_test = tokenizer.bert_encode(X_test, max_len=max_len)
P_hat = model.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/", "height": 394} id="Ml8QxBcKYARs" outputId="d09097d1-6982-4c03-f3e2-a2ffc93dec0f"
threshold = 0.50
Y_hat = np.where(P_hat > threshold, 1, 0)
Y_test = Y_test.reshape([Y_test.shape[0], 1])
total = len(Y_hat)
plot_confusion_matrix(Y_test, Y_hat)
# + [markdown] id="Y3F0bzZTYFxR"
# Sample with 50% of the sexism cases
# + id="Js89QthIn6Bi"
# Predict with the model trained on the 50% sample (X_test is already encoded)
P_hat = model2.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/", "height": 394} id="hhyt--vCoswx" outputId="6b4441ab-71b3-4d4d-dfda-c1acce4c464f"
# Plot the confusion matrix
threshold = 0.50
Y_hat = np.where(P_hat > threshold, 1, 0)
Y_test = Y_test.reshape([Y_test.shape[0], 1])
total = len(Y_hat)
plot_confusion_matrix(Y_test, Y_hat)
| part3test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import re
import math
def getwords(doc):
    """Split a document into lowercase word features."""
    splitter = re.compile('\\W+')
    words = [val.lower() for val in splitter.split(doc) if len(val) > 2 and len(val) < 20]
    return {key: 1 for key in words}
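# A quick sanity check of the tokenization rule (split on runs of non-word
# characters, keep lowercased tokens of length 3-19), as a standalone sketch
# with a hypothetical helper name:
# +
import re

def tokenize_demo(doc):
    # Split on non-word characters and keep mid-length tokens, lowercased
    words = [w.lower() for w in re.split(r'\W+', doc) if 2 < len(w) < 20]
    return {w: 1 for w in words}

print(tokenize_demo("Nobody owns the water."))
# -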
class Classifier(object):
    def __init__(self, getfeatures, filename=None):
        # Counts of feature/category combinations
        self.fc = {}
        # Counts of documents in each category
        self.cc = {}
        self.getfeatures = getfeatures
        self.thresholds = {}

    def incf(self, f, cat):
        """Increase the count for a feature/category pair"""
        self.fc.setdefault(f, {})
        self.fc[f].setdefault(cat, 0)
        self.fc[f][cat] += 1

    def incc(self, cat):
        """Increase the count for a category"""
        self.cc.setdefault(cat, 0)
        self.cc[cat] += 1

    def fcount(self, f, cat):
        """Number of times a feature has appeared in a category"""
        if f in self.fc and cat in self.fc[f]:
            return float(self.fc[f][cat])
        return 0.0

    def catcount(self, cat):
        """Number of items in a category"""
        if cat in self.cc:
            return float(self.cc[cat])
        return 0.0

    def totalcount(self):
        """Total number of items"""
        return sum(self.cc.values())

    def categories(self):
        """List of all categories"""
        return self.cc.keys()

    def train(self, item, cat):
        """Add a training example"""
        features = self.getfeatures(item)
        # Increment the count for every feature with this category
        for f in features:
            self.incf(f, cat)
        # Increment the count for this category
        self.incc(cat)

    def fprob(self, f, cat):
        """Probability of a feature given a category"""
        if self.catcount(cat) == 0: return 0
        return self.fcount(f, cat) / self.catcount(cat)

    def weightedprob(self, f, cat, prf, weight=1.0, ap=0.5):
        """Weighted probability: blend an assumed prior with the observed probability"""
        basicprob = prf(f, cat)
        totals = sum([self.fcount(f, c) for c in self.categories()])
        bp = ((weight * ap) + (totals * basicprob)) / (weight + totals)
        return bp

    def setthreshold(self, cat, t):
        """Set the classification threshold for a category"""
        self.thresholds[cat] = t

    def getthreshold(self, cat):
        """Get the classification threshold for a category"""
        if cat not in self.thresholds: return 1.0
        return self.thresholds[cat]

    def classify(self, item, default=None):
        """Return the most probable category for an item, subject to the thresholds"""
        probs = {}
        max = 0.0
        best = default
        for cat in self.categories():
            probs[cat] = self.prob(item, cat)
            if probs[cat] > max:
                max = probs[cat]
                best = cat
        for cat in probs:
            if cat == best: continue
            if probs[cat] * self.getthreshold(best) > probs[best]: return default, max
        return best, max
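# The weighted probability above blends an assumed prior (ap=0.5) with the
# observed frequency, pulling estimates for rarely seen features toward 0.5.
# A standalone sketch of just that formula, with a hypothetical helper name:
# +
def weighted_prob_demo(basic_prob, totals, weight=1.0, ap=0.5):
    # totals = how often the feature was seen across all categories
    return (weight * ap + totals * basic_prob) / (weight + totals)

print(weighted_prob_demo(0.0, 0))  # unseen feature: pure prior, 0.5
print(weighted_prob_demo(0.0, 1))  # seen once, never in this category: 0.25
# -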
cl = Classifier(getwords)
# +
# cl.train("the quick brown fox jumps over the lazy dog", "good")
# +
# cl.train("make quick money in the online casino", "bad")
# +
# cl.fcount("quick", "good")
# +
# cl.fcount("quick", "bad")
# -
def sampletrain(cl):
    cl.train("Nobody owns the water.", "good")
    cl.train("the quick rabbit jumps fences", "good")
    cl.train("buy pharmaceuticals now", "bad")
    cl.train("make quick money at the online casino", "bad")
    cl.train("the quick brown fox jumps", "good")
sampletrain(cl)
cl.fprob("quick", "good")
cl.fc, cl.cc
cl.fprob("money", "good"), cl.fprob("money", "bad")
cl.weightedprob("money", "good", cl.fprob)
sampletrain(cl)
cl.weightedprob("money", "good", cl.fprob)
# +
############################################################
# -
class Naivebayes(Classifier):
    def docprob(self, item, cat):
        """Product of the probabilities of all features given the category"""
        features = self.getfeatures(item)
        p = 1
        for f in features: p *= self.weightedprob(f, cat, self.fprob)
        return p

    def prob(self, item, cat):
        """Unnormalized probability of the category given the item: P(item|cat) * P(cat)"""
        catprob = self.catcount(cat) / self.totalcount()
        docprob = self.docprob(item, cat)
        return docprob * catprob
nb = Naivebayes(getwords)
sampletrain(nb)
nb.prob("quick rabbit", "good")
nb.prob("quick rabbit", "bad")
nb.classify("quick rabbit", default="unknown")
nb.classify("quick money", default="unknown")
nb.setthreshold("bad", 3.0)
nb.classify("quick rabbit", default="unknown")
nb.classify("quick money", default="unknown")
# +
###########################################################
# +
# 1. Write a Taobao affiliate scraper to collect category + product name
# 2. Use jieba word segmentation to split the product names
# 3. Build a web view and do one round of manual filtering to reduce the error rate
# 4. Classify the imported data by title and measure the accuracy
# 5. Web visualization of the data
# -
| bias_classification/test/Document Filtering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PythonData
# language: python
# name: pythondata
# ---
# Dependencies and Setup
import pandas as pd
# +
# File to Load (Remember to Change These)
school_data_path = "../PyCitySchool/schools_complete.csv"
student_data_path = "../PyCitySchool/students_complete.csv"
# Read School and Student Data File and store into Pandas DataFrames
school_df = pd.read_csv(school_data_path)
student_df = pd.read_csv(student_data_path)
school_df.head()
# -
student_df.head()
# Combine the data into a single dataset.
school_data_complete = pd.merge(student_df, school_df, how="left", on="school_name")
school_data_complete.head()
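# A left merge keeps every student row and attaches the matching school
# columns; the school data repeats for each student of that school. A minimal
# sketch with made-up values (not the CSV data):
# +
import pandas as pd
students = pd.DataFrame({"student_name": ["Ana", "Bo", "Cy"],
                         "school_name": ["North", "South", "North"]})
schools = pd.DataFrame({"school_name": ["North", "South"],
                        "budget": [100, 200]})
merged = pd.merge(students, schools, how="left", on="school_name")
print(merged)
# -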
| PyCitySchool/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# The NREL S805 airfoil and the rest of the family have been specifically designed for use in wind turbines. The S805 has been studied at the Delft low-turbulence wind tunnel[1], including transition points and oil-flow photographs.
# The S805A airfoil is a modification of this airfoil to improve the drag, based on the experimental results of the S805.
# While the improvements are visible in the numerical results here, I am not aware of any wind-tunnel results.
#
# ## Viiflow Parameters
# The Delft low-turbulence wind tunnel is commonly used with a critical amplification factor of 11.2[2], but the transition locations are better matched by assuming slightly more turbulent conditions at 10.2. All calculations are performed with a Mach number derived from the Reynolds number, assuming a chord length of 0.5 m[1], and the geometry is taken from the reference.
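# The Mach numbers used below follow from the Reynolds number via U = Re*nu/c
# and Ma = U/a. With assumed air properties at roughly 20°C (nu ≈ 1.5e-5 m²/s,
# a ≈ 343 m/s — my assumption, not stated in the reference) and c = 0.5 m,
# this gives Ma ≈ 0.04 at Re = 5e5, which is the rule used in the calculation
# loop:
# +
nu = 1.5e-5   # kinematic viscosity of air [m^2/s] (assumed)
a = 343.0     # speed of sound [m/s] (assumed)
c = 0.5       # chord length [m]
for Re in [5e5, 10e5, 20e5]:
    U = Re * nu / c  # flow speed implied by the Reynolds number
    print(Re, round(U / a, 3))
# -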
# +
import viiflow as vf
import viiflowtools.vf_tools as vft
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# %matplotlib inline
# %config InlineBackend.figure_format = 'svg'
import logging
logging.getLogger().setLevel(logging.WARNING)
matplotlib.rcParams['figure.figsize'] = [12, 6]
# Read Airfoil Data
N = 220
S805 = vft.repanel(vft.read_selig("S805.dat"),N)
S805A = vft.repanel(vft.read_selig("S805A.dat"),N)
fig,ax = plt.subplots(1,1)
ax.plot(S805[0,:],S805[1,:],'-k',label="S805")
ax.plot(S805A[0,:],S805A[1,:],'-r',label="S805A")
ax.axis('equal')
ax.legend()
# -
# ## Calculation
results = {} # Dictionary of results
AOARange = np.arange(-5,18.5,.25)
# Go over both airfoils and the RE range
af = -1
for airfoil in [S805, S805A]:
    af += 1
    results[af] = {} # Dictionary of results
    for RE in [5e5, 7e5, 10e5, 15e5, 20e5]:
        print("RE %g"%RE)
        # Settings
        ncrit = 10.2
        Mach = 0.04*RE/5e5 # c = 0.5m, assuming 20°C
        s = vf.setup(Re=RE,Ma=Mach,Ncrit=ncrit,Alpha=AOARange[0],IterateWakes=False)
        s.Silent = True
        # (Maximum) internal iterations
        s.Itermax = 100
        results[af][RE] = {} # Sub-dictionary of results
        results[af][RE]["AOA"] = []
        results[af][RE]["CL"] = []
        results[af][RE]["CD"] = []
        results[af][RE]["TRUP"] = []
        results[af][RE]["TRLO"] = []
        # Go over AOA range
        faults = 0
        init = True
        for alpha in AOARange:
            # Set current alpha and set res/grad to None to tell viiflow that they are not valid
            s.Alpha = alpha
            res = None
            grad = None
            # Set up and initialize based on the inviscid panel solution
            # This calculates the panel operator
            if init:
                [p,bl,x] = vf.init([airfoil],s)
                init = False
            # Run viiflow
            [x,flag,res,grad,_] = vf.iter(x,bl,p,s,res,grad)
            # If converged, add to cl/cd vectors (could check flag as well, but this allows custom tolerance
            # to use the results anyway)
            if flag:
                results[af][RE]["AOA"].append(alpha)
                results[af][RE]["CL"].append(p.CL)
                results[af][RE]["CD"].append(bl[0].CD)
                # Calculate transition position based on BL variable
                results[af][RE]["TRUP"].append( \
                    np.interp(bl[0].ST-bl[0].bl_fl.node_tr_up.xi[0],p.foils[0].S,p.foils[0].X[0,:]))
                results[af][RE]["TRLO"].append( \
                    np.interp(bl[0].ST+bl[0].bl_fl.node_tr_lo.xi[0],p.foils[0].S,p.foils[0].X[0,:]))
                faults = 0
            else:
                faults += 1
                init = True
            # Skip the rest of the current polar if 4 unconverged results in a row
            if faults > 3:
                print("Exiting RE %u polar calculation at AOA %f°"%(RE,alpha))
                break
# ## Aerodynamic Polars
# Both the experimental values and the viiflow calculations show that the design goal of a “laminar bucket” up to a lift coefficient of 0.9 is quite well achieved.
# +
EXPRES=np.genfromtxt("S805Polars.csv",delimiter=",",names=True)
matplotlib.rcParams['figure.figsize'] = [12, 6]
expnames = ['EXPPOLAR5','EXPPOLAR7','EXPPOLAR10','EXPPOLAR15','EXPPOLAR20']
resnames = [5e5,7e5,10e5,15e5,20e5]
colors = ["tab:blue","tab:orange","tab:green","tab:red","tab:purple","tab:pink"]
fix,ax = plt.subplots(1,1)
# Add pseudo lines for legend
ax.plot([],[],'-k',label="S805")
ax.plot([],[],'--k',label="S805A")
ax.plot([],[],'k',marker=".",linestyle = 'None',label="Experiment")
for k in range(len(resnames)):
    ax.plot(results[0][resnames[k]]["CD"],results[0][resnames[k]]["CL"], color=colors[k],label="RE %2.1e"%resnames[k])
    ax.plot(results[1][resnames[k]]["CD"],results[1][resnames[k]]["CL"], '--',color=colors[k])
for k in range(len(expnames)):
    ax.plot(EXPRES['%s_X'%expnames[k]],EXPRES['%s_Y'%expnames[k]],marker=".",linestyle = 'None', color=colors[k])
ax.set_xlabel('CD')
ax.set_ylabel('CL')
ax.set_xlim([0.004,0.0125])
ax.set_ylim([-.1,1.3])
ax.grid(True)
ax.set_title("Polars at different Reynolds Numbers");
ax.legend(ncol=2)
# -
# ## Transition Location
# The transition location behavior, both in shape and with respect to increasing Reynolds number, is generally well described by the computational results. At higher angles of attack, the calculations predict transition closer to the leading edge than the experiments show. This, I assume, leads to a lower maximum lift than calculated.
# +
EXPRES=np.genfromtxt("S805TransitionLocations.csv",delimiter=",",names=True)
expnames1 = ['EXPBOTRE500','EXPBOTRE700','EXPBOTRE1000','EXPBOTRE1500','EXPBOTRE2000']
expnames2 = ['EXPTOPRE500','EXPTOPRE700','EXPTOPRE1000','EXPTOPRE1500','EXPTOPRE2000']
fix,ax = plt.subplots(1,1)
for k in range(len(resnames)):
    plt.plot(results[0][resnames[k]]["TRUP"],results[0][resnames[k]]["AOA"], color=colors[k])
    plt.plot(results[0][resnames[k]]["TRLO"],results[0][resnames[k]]["AOA"], color=colors[k])
for k in range(len(expnames1)):
    plt.plot(EXPRES['%s_X'%expnames1[k]],EXPRES['%s_Y'%expnames1[k]],marker="v",linestyle = 'None', color=colors[k])
    plt.plot(EXPRES['%s_X'%expnames2[k]],EXPRES['%s_Y'%expnames2[k]],marker="^",linestyle = 'None', color=colors[k])
plt.xlabel('Transition Location x/c')
plt.ylabel('AOA')
plt.title('Left: Top, Right: Bottom')
plt.ylim([-5,15])
plt.xlim([-.01,1.01])
plt.grid(True)
plt.show()
# -
# ## Lift Coefficients
# The lift slope and zero-lift angle of attack are captured very well at low angles of attack, from -3° to 7°. Above 7°, prediction and experiment diverge, leading to a disagreement in maximum lift coefficient and stall behavior, although the angle of attack of maximum lift is matched quite accurately.
# +
marker = None
fix,ax = plt.subplots(1,1)
EXPRES=np.genfromtxt("S805CL.csv",delimiter=",",names=True)
for k in range(len(resnames)):
    plt.plot(results[0][resnames[k]]["AOA"],results[0][resnames[k]]["CL"],color=colors[k])
    plt.plot(results[1][resnames[k]]["AOA"],results[1][resnames[k]]["CL"], '-.',color=colors[k])
plt.plot(EXPRES['EXPRE500_X'],EXPRES['EXPRE500_Y'],marker=".",linestyle = 'None', color="tab:blue")
plt.plot(EXPRES['EXPRE700_X'],EXPRES['EXPRE700_Y'],marker=".", linestyle = 'None',color='tab:orange')
plt.plot(EXPRES['EXPRE1000_X'],EXPRES['EXPRE1000_Y'],marker=".", linestyle = 'None',color='tab:green')
plt.plot(EXPRES['EXPRE1500_X'],EXPRES['EXPRE1500_Y'],marker=".", linestyle = 'None',color='tab:red')
plt.plot(EXPRES['EXPRE2000_X'],EXPRES['EXPRE2000_Y'],marker=".", linestyle = 'None',color='tab:purple')
plt.xlabel('AOA')
plt.ylabel('CL')
plt.xlim([-5,18])
plt.ylim([-.25,1.7])
plt.grid(True)
plt.title("Lift coefficient")
plt.show()
# -
# ## References
#
# [1] D.Somers, *Design and Experimental Results for the S805 airfoil.* NREL/SR-440-6917
#
# [2] <NAME>. *The eN method for transition prediction: historical review of work at TU Delft.* AIAA, 2008.
| S805 Basic Airfoil Polar/S805-Basic-Airfoil-Polar.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
## mixture model example (the title says Poisson, but Gamma components are used below)
# %load_ext autoreload
# %autoreload 2
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
from general_mm import GeneralizedMixtureModel, ModuledDistribution
import torch
from torch import nn
from torch.distributions import MultivariateNormal
plt.style.use('ggplot')
# + pycharm={"name": "#%%\n"}
from sklearn.datasets import load_wine
wine = load_wine()
print(wine)
learn_data = torch.tensor(wine.data).to(torch.float32)
plt.scatter(learn_data[:, 1], learn_data[:, 3], c=wine.target)
plt.plot()
# + pycharm={"name": "#%%\n"}
from torch.distributions import Poisson, Gamma
cluster_num = 3
parameters = [{"rate": nn.Parameter(torch.rand(1)*3), "concentration": torch.tensor([1.], dtype=torch.float32)}
for k in range(cluster_num)]
distributions = [ModuledDistribution(Gamma(**parameters[k]),
parameters[k])
for k in range(cluster_num)]
model = GeneralizedMixtureModel(distributions, rtol=1e-10)
# + pycharm={"name": "#%%\n"}
predicted = model.fit_predict(learn_data)
# + pycharm={"name": "#%%\n"}
color = predicted.argmax(dim=0)
print(color)
plt.scatter(learn_data[:, 2], learn_data[:, 3], c=color)
for k in range(cluster_num):
    sample = distributions[k].sample([1000])[:, (2, 3)]
    plt.scatter(sample[:, 0], sample[:, 1], s=1., alpha=0.3)
# + pycharm={"name": "#%%\n"}
print(distributions[1].parameter["rate"])
# + pycharm={"name": "#%%\n"}
| notebooks/example_pmm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/expert-search/glg-ack/blob/main/1.2-cgc-BERT_With_HuggingFace(add_other_label).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="8-Seq3ecsO0S"
# # GrailQA Dataset Exploration w/ BERT (w/ domain selection and mapping)
# + [markdown] id="9COHLDxRsIBC"
# Adapted from https://towardsdatascience.com/multi-label-multi-class-text-classification-with-bert-transformer-and-keras-c6355eccb63a
# + colab={"base_uri": "https://localhost:8080/"} id="XYD8YWlxXMWt" outputId="03a284e1-cd13-4ea9-b36e-20d15ce040e7"
# !pip install transformers
from transformers import TFBertModel, BertConfig, BertTokenizerFast
from tensorflow.keras.layers import Input, Dropout, Dense
from tensorflow.keras.models import Model
from tensorflow.math import confusion_matrix
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.initializers import TruncatedNormal
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras.metrics import CategoricalAccuracy, Precision, Recall
from tensorflow.keras.utils import to_categorical
import pandas as pd
from sklearn.model_selection import train_test_split
# + colab={"base_uri": "https://localhost:8080/"} id="oGbmhMSEePPx" outputId="6cb27a65-8fae-47ac-a62b-d9a624112c87"
# !pip install tensorflow_addons
from tensorflow_addons.metrics import F1Score
# + colab={"base_uri": "https://localhost:8080/"} id="Unu1C9F3XDHR" outputId="e81a095c-589a-4d0c-9882-5bf4eed44d63"
# !pip install datasets
from datasets import load_dataset
dataset = load_dataset('grail_qa')
# + id="taUndV62eHkW"
df_train = pd.DataFrame(dataset['train'])
df_valid = pd.DataFrame(dataset['validation'])
df_test = pd.DataFrame(dataset['test'])
# + colab={"base_uri": "https://localhost:8080/", "height": 289} id="pJdjDA9KA4nO" outputId="a077c578-1caf-4114-d9fd-093423b74c0c"
df_train.head()
# + [markdown] id="MNDlvysiyd0f"
# The test partition has empty lists for the 'domains' column 🤔
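That observation is easy to verify programmatically. A minimal sketch on a toy frame (the frame below is illustrative, not the actual GrailQA data):

```python
import pandas as pd

# Hypothetical stand-in for the test split: every row has an empty 'domains' list.
df_test = pd.DataFrame({"question": ["q1", "q2"], "domains": [[], []]})

# True exactly when no row carries a domain label.
all_empty = df_test["domains"].map(len).eq(0).all()
print(all_empty)  # → True
```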
# + id="5IBjE5eguZQN"
df_train['label'] = pd.Categorical([domains[0] for domains in df_train['domains']])
df_valid['label'] = pd.Categorical([domains[0] for domains in df_valid['domains']])
# + colab={"base_uri": "https://localhost:8080/"} id="vVvGBjE9Vrz0" outputId="de36b207-27d5-4c45-99a8-5d4f40cf702d"
len(df_train[(df_train['label'] == 'medicine')])
# + id="1nJp1kwijI17"
domains_to_keep = ['medicine', 'computer', 'spaceflight', 'biology', 'automotive', 'internet', 'engineering']
# + id="-Ha1LBIyKFhM"
# Take a copy so the 'other' label can be assigned later without a SettingWithCopyWarning
df_train_other = df_train[~df_train['label'].isin(domains_to_keep)].copy()
df_valid_other = df_valid[~df_valid['label'].isin(domains_to_keep)].copy()
# + id="wzB3j_CBJ9ZL"
df_train = df_train[df_train['label'].isin(domains_to_keep)].copy()
df_valid = df_valid[df_valid['label'].isin(domains_to_keep)].copy()
# + id="vzMQKcnYpch0"
domain_map = {
    'medicine': 'healthcare',
    'computer': 'technology',
    'spaceflight': 'technology',
    'biology': 'healthcare',
    'automotive': 'technology',
    'internet': 'technology',
    'engineering': 'technology'
}
df_train['label'] = df_train['label'].map(domain_map)
df_valid['label'] = df_valid['label'].map(domain_map)
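One detail worth keeping in mind about the dict-based `Series.map` used above: any value not present in the mapping becomes `NaN`, which is why the frames are filtered down to `domains_to_keep` first. A small illustrative example:

```python
import pandas as pd

labels = pd.Series(["medicine", "spaceflight", "weather"])  # 'weather' is not in the map
domain_map = {"medicine": "healthcare", "spaceflight": "technology"}

mapped = labels.map(domain_map)
print(mapped.tolist())  # → ['healthcare', 'technology', nan]
```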
# + id="BJeGQTpUI_-9"
df_train_other['label'] = 'other'
df_valid_other['label'] = 'other'
# + id="4SFh387tLUSL"
df_train_other_subset = df_train_other.sample(n=3700, random_state=42)
df_valid_other_subset = df_valid_other.sample(n=350, random_state=42)
# + id="itW6KTF6Mfns"
df_train = pd.concat([df_train, df_train_other_subset])
df_valid = pd.concat([df_valid, df_valid_other_subset])
# + id="HjufHCgQjuiG"
df_train['label'] = pd.Categorical([label for label in df_train['label']])
df_valid['label'] = pd.Categorical([label for label in df_valid['label']])
df_train['numeric_label'] = df_train['label'].cat.codes
df_valid['numeric_label'] = df_valid['label'].cat.codes
# + colab={"base_uri": "https://localhost:8080/"} id="Ni4yQ3M7UUSv" outputId="cd2f20a7-59cd-4d27-bd96-75aef567169a"
print(len(df_train['label'].unique()))
df_train['label'].value_counts()
# + colab={"base_uri": "https://localhost:8080/"} id="5IIThiuNOCOs" outputId="ea1746fa-34a3-4c58-fd9b-1666c2fadab4"
df_valid['label'].value_counts()
# + [markdown] id="do7Z-dYauRoq"
# ### Fetch BERT pre-trained encoder and tokenizer
# + colab={"base_uri": "https://localhost:8080/", "height": 262} id="_qsWGlRqfNn2" outputId="b809543b-35c4-4c97-ec10-f683956e91bb"
#######################################
### --------- Setup BERT ---------- ###
# Name of the BERT model to use
model_name = 'bert-base-uncased'
# Max length of tokens
max_length = 100
# Load transformers config and set output_hidden_states to False
config = BertConfig.from_pretrained(model_name)
config.output_hidden_states = False
# Load BERT tokenizer
tokenizer = BertTokenizerFast.from_pretrained(pretrained_model_name_or_path = model_name, config = config)
# Load the Transformers BERT model
transformer_model = TFBertModel.from_pretrained(model_name, config = config)
# + colab={"base_uri": "https://localhost:8080/"} id="SI90mmVypa0a" outputId="90c492bc-2bd2-4b47-bcd8-8b988e27f6a7"
#######################################
### ------- Build the model ------- ###
# TF Keras documentation: https://www.tensorflow.org/api_docs/python/tf/keras/Model
# Load the MainLayer
bert = transformer_model.layers[0]
# Build your model input
input_ids = Input(shape=(max_length,), name='input_ids', dtype='int32')
inputs = {'input_ids': input_ids}
# Load the Transformers BERT model as a layer in a Keras model
bert_model = bert(inputs)[1]
dropout = Dropout(config.hidden_dropout_prob, name='pooled_output')
pooled_output = dropout(bert_model, training=False)
# Then build your model output
label = Dense(units=len(df_train['numeric_label'].value_counts()),
              kernel_initializer=TruncatedNormal(
                  stddev=config.initializer_range
              ),
              name='label')(pooled_output)
outputs = {'label': label}
# And combine it all in a model object
model = Model(inputs=inputs, outputs=outputs, name='BERT_MultiClass')
# Take a look at the model
model.summary()
# + [markdown] id="saKwpFNHGPoh"
# ### Model Training
# + colab={"base_uri": "https://localhost:8080/"} id="B5Ntbpoiqf2o" outputId="3362224c-24c4-4a53-c547-26565e21224d"
optimizer = Adam(
    learning_rate=5e-05,
    epsilon=1e-08,
    decay=0.01,
    clipnorm=1.0
)
loss = {'label': CategoricalCrossentropy(from_logits = True)}
metric = {'label': CategoricalAccuracy('accuracy')}
model.compile(
    optimizer=optimizer,
    loss=loss,
    metrics=metric)
y_label = to_categorical(df_train['numeric_label'])
x = tokenizer(
    text=df_train['question'].to_list(),
    add_special_tokens=True,
    max_length=max_length,
    truncation=True,
    padding=True,
    return_tensors='tf',
    return_token_type_ids=False,
    return_attention_mask=False,
    verbose=True)
history = model.fit(
    x={'input_ids': x['input_ids']},
    y={'label': y_label},
    validation_split=0.2,
    batch_size=64,
    epochs=2
)
# + colab={"base_uri": "https://localhost:8080/"} id="gYMzKnPhPNdw" outputId="33ef785b-eecc-48be-d3fa-e469cf40b27f"
from google.colab import drive
drive.mount('/content/drive')
import os
# The path below should point to the directory containing this notebook and the associated utility files
# Change it if necessary
os.chdir('/content/drive/MyDrive/')
# + id="1rg7KVkePelQ"
model.save('model_bert_w_other_label.hdf5')
# + [markdown] id="4YszVFzxGZFv"
# ### Evaluation
#
# Using the dev partition since the test partition is unlabeled.
# + id="c5rAuoYW9qlo" colab={"base_uri": "https://localhost:8080/"} outputId="0b53e7c2-6a0a-46a1-f2c3-44ab2cf11a7d"
test_y = to_categorical(df_valid['numeric_label'])
test_x = tokenizer(
    text=df_valid['question'].to_list(),
    add_special_tokens=True,
    max_length=max_length,
    truncation=True,
    padding="max_length",
    return_tensors='tf',
    return_token_type_ids=False,
    return_attention_mask=False,
    verbose=True)
model_eval = model.evaluate(
    x={'input_ids': test_x['input_ids']},
    y={'label': test_y}
)
# + [markdown] id="V1YaF5yI7OKG"
# ### Verify Accuracy
# + id="tkjHi3pPndL5"
preds = model.predict(x={'input_ids': test_x['input_ids']})
# + colab={"base_uri": "https://localhost:8080/"} id="SnCymsu1yWVR" outputId="0fc48ae6-58b9-40e5-ceac-bc0461e68fc3"
from tensorflow.math import argmax
correct = 0
for pred, expected in zip(argmax(preds['label'], axis=1),
                          df_valid['numeric_label']):
    if pred == expected:
        correct += 1
print(f"Accuracy: {correct / len(df_valid)}")
# + [markdown] id="gGRMAfiy7dwA"
# ### Confusion Matrix
# + id="EBe8fDyunfm1" colab={"base_uri": "https://localhost:8080/"} outputId="a25e28b8-0f88-4e44-9764-960d2f2e1312"
from tensorflow import argmax
confusion_matrix(df_valid['numeric_label'],
                 argmax(preds['label'], axis=1),
                 num_classes=len(df_valid['label'].unique()))
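In `tf.math.confusion_matrix`, rows index the true label and columns the predicted label. To make that convention explicit, a minimal NumPy equivalent (illustrative only, not part of the pipeline above):

```python
import numpy as np

def confusion(y_true, y_pred, num_classes):
    # rows index the true label, columns the predicted label,
    # matching the convention of tf.math.confusion_matrix
    m = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

print(confusion([0, 0, 1, 2], [0, 1, 1, 2], 3))
# → [[1 1 0]
#    [0 1 0]
#    [0 0 1]]
```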
# + [markdown] id="Mxm5Ivym_a59"
# ### OOD Evaluation
# [Yahoo QA Dataset](https://huggingface.co/datasets/viewer/?dataset=yahoo_answers_qa)
# + colab={"base_uri": "https://localhost:8080/", "height": 167} id="XyzZ7f1k_mFE" outputId="33d5a4e7-34ee-4cc8-e15d-b924e595bd5f"
yahoo_dataset = load_dataset('yahoo_answers_qa')
# + id="Pf3Tbu2g_3xl"
yahoo_df = pd.DataFrame(yahoo_dataset['train'])
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="H2cKTz7nA0Kn" outputId="b407ddab-81c5-467a-bc10-47e1b6c3f9dd"
yahoo_df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="-t1iqwMYA_fQ" outputId="778895c9-af90-4edd-d368-9274fdc3df36"
yahoo_df['main_category'].unique()
# + colab={"base_uri": "https://localhost:8080/"} id="f7IeV8-IBnf-" outputId="9998eabd-f77a-4e00-ec6b-07b7d536d30a"
yahoo_df['main_category'].value_counts()
# + id="D3eYmKRdCeTQ"
yahoo_df['label'] = pd.Categorical([main_cat for main_cat in yahoo_df['main_category']])
# + id="tB5GLQjQCE1s"
domains_to_keep = ['Computers & Internet', 'Health']
# + id="oJ_8kt7OQZam"
# Take copies so the label columns can be assigned later without a SettingWithCopyWarning
yahoo_df_other = yahoo_df[~yahoo_df['label'].isin(domains_to_keep)].copy()
yahoo_df = yahoo_df[yahoo_df['label'].isin(domains_to_keep)].copy()
# + id="RhnBMXKvCGIq"
domain_map = {
    'Health': 'healthcare',
    'Computers & Internet': 'technology'
}
yahoo_df['label'] = yahoo_df['label'].map(domain_map)
# + id="UG9yguZhRbg7"
yahoo_df_other['label'] = 'other'
yahoo_df_other_subset = yahoo_df_other.sample(n=11000, random_state=42)
# + id="dsGHiLW9RzN1"
yahoo_df = pd.concat([yahoo_df, yahoo_df_other_subset])
# + colab={"base_uri": "https://localhost:8080/"} id="RjSE0GcDTmFB" outputId="f0673645-e013-49a0-d211-854fd775ba03"
yahoo_df['label'].value_counts()
# + id="VdcaR-cfQs06"
yahoo_df['label'] = pd.Categorical([label for label in yahoo_df['label']])
yahoo_df['numeric_label'] = yahoo_df['label'].cat.codes
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="7c0LPHaCEOHN" outputId="15c5cd44-1cde-4ea4-fe8c-871c7adaf589"
yahoo_df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="k3pjA8Z8VqjK" outputId="dff1ec3e-a887-42e0-d391-6a4e9b31696f"
yahoo_df.info()
# + colab={"base_uri": "https://localhost:8080/"} id="lZCkPQ8ND4jX" outputId="a6c48aab-fa1f-4fae-cf2a-716383b405e3"
test_y = to_categorical(yahoo_df['numeric_label'])
test_x = tokenizer(
    text=yahoo_df['question'].to_list(),
    add_special_tokens=True,
    max_length=max_length,
    truncation=True,
    padding="max_length",
    return_tensors='tf',
    return_token_type_ids=False,
    return_attention_mask=False,
    verbose=True)
model_eval = model.evaluate(
    x={'input_ids': test_x['input_ids']},
    y={'label': test_y}
)
# + [markdown] id="J5S_Hr0ZFxLs"
# #### Verify Accuracy + Confusion Matrix
# + colab={"base_uri": "https://localhost:8080/"} id="iVTucxDuFa86" outputId="e37afcf4-cf40-47bb-b83c-de23c80c45fb"
yahoo_preds = model.predict(x={'input_ids': test_x['input_ids']})
correct = 0
for pred, expected in zip(argmax(yahoo_preds['label'], axis=1),
                          yahoo_df['numeric_label']):
    if pred == expected:
        correct += 1
print(f"Accuracy: {correct / len(yahoo_df)}")
# + colab={"base_uri": "https://localhost:8080/"} id="bouuHwgKF1NP" outputId="2b93cdee-58c3-47e6-b8c6-d3e0edba9f14"
confusion_matrix(yahoo_df['numeric_label'],
                 argmax(yahoo_preds['label'], axis=1),
                 num_classes=len(yahoo_df['label'].unique()))
# + [markdown] id="sDr9H6VzFoPV"
#
| notebooks/cgc-1.2-BERT_With_HuggingFace(add_other_label).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Training a Classifier based on TResNet
# To follow the best practices provided by the official PyTorch tutorials, this notebook is adapted from the [PyTorch CIFAR10 tutorial](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py). If you are familiar with PyTorch, you can skip the introduction.
#
# ### What about data?
# Generally, when you have to deal with image, text, audio or video data,
# you can use standard python packages that load data into a numpy array.
# Then you can convert this array into a ``torch.*Tensor``.
# - For images, packages such as Pillow, OpenCV are useful
# - For audio, packages such as scipy and librosa
# - For text, either raw Python or Cython based loading, or NLTK and
# SpaCy are useful
#
# Specifically for vision, we have created a package called
# ``torchvision``, that has data loaders for common datasets such as
# Imagenet, CIFAR10, MNIST, etc. and data transformers for images, viz.,
# ``torchvision.datasets`` and ``torch.utils.data.DataLoader``.
# This provides a huge convenience and avoids writing boilerplate code.
# For this tutorial, we will use the CIFAR10 dataset.
# It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’,
# ‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of
# size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.
#
# ### Training an image classifier
# We will do the following steps in order:
# 1. Load and normalize the CIFAR10 training and test datasets using
#    ``torchvision``
# 2. Define a Convolutional Neural Network
# 3. Define a loss function
# 4. Train the network on the training data
# 5. Test the network on the test data
# 1. Loading and normalizing CIFAR10
#
# Using ``torchvision``, it’s extremely easy to load CIFAR10.
import torch
import torchvision
import torchvision.transforms as transforms
opt_level = 'O1'
gpu = 0
device = torch.device("cuda:{}".format(gpu) if torch.cuda.is_available() else "cpu")
# #### Load and normalizing the CIFAR10 training and test datasets
# +
# no need to use transforms.Normalize, see https://github.com/mrT23/TResNet/issues/5#issuecomment-608440989
transform = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    # transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
trainset = torchvision.datasets.CIFAR10(root='./cifar10_data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./cifar10_data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# -
# #### Let us show some of the training images, for fun.
# +
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
    img = img / 2 + 0.5  # unnormalize
    npimg = img.cpu().numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
# -
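The `np.transpose(npimg, (1, 2, 0))` in `imshow` is needed because PyTorch stores images channels-first, `(C, H, W)`, while matplotlib expects channels-last, `(H, W, C)`. A minimal NumPy sketch of that axis reordering:

```python
import numpy as np

chw = np.zeros((3, 32, 32))         # torch image layout: (channels, height, width)
hwc = np.transpose(chw, (1, 2, 0))  # matplotlib layout: (height, width, channels)
print(hwc.shape)  # → (32, 32, 3)
```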
# #### Define a Convolutional Neural Network
#
# Copy the neural network from the Neural Networks section before and modify it to take 3-channel images (instead of 1-channel images as it was defined).
# +
# # !pip install torch_tresnet
# +
from torch_tresnet import tresnet_xl
net = tresnet_xl(pretrained=True, num_classes=10, in_chans=3).to(device)
# -
# #### Define a Loss function and optimizer
#
# Let's use a Classification Cross-Entropy loss and SGD with momentum.
# +
import torch.nn as nn
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
# -
# #### Train the network
#
# * This is when things start to get interesting.
# * We simply have to loop over our data iterator, and feed the inputs to the network and optimize.
#
# The original paper uses mixed precision. Here `apex` is required:
# ```
# $ git clone https://github.com/NVIDIA/apex
# $ cd apex
# $ pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
# ```
# +
from apex import amp
net, optimizer = amp.initialize(net, optimizer, opt_level=opt_level)
# +
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        inputs = inputs.to(device)
        labels = labels.to(device)
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        # loss.backward()
        with amp.scale_loss(loss, optimizer) as scaled_loss:
            scaled_loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
print('Finished Training')
# -
# #### Let's quickly save our trained model:
# See [here](https://pytorch.org/docs/stable/notes/serialization.html) for more details on saving PyTorch models.
PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)
# #### Test the network on the test data
# We have trained the network for 2 passes over the training dataset. But we need to check if the network has learnt anything at all.
#
# We will check this by predicting the class label that the neural network outputs, and checking it against the ground-truth. If the prediction is correct, we add the sample to the list of correct predictions.
#
# Okay, first step. Let us display an image from the test set to get familiar.
# +
dataiter = iter(testloader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch
images = images.to(device)
labels = labels.to(device)
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
########################################################################
# Next, let's load back in our saved model (note: saving and re-loading the model
# wasn't necessary here, we only did it to illustrate how to do so):
net.load_state_dict(torch.load(PATH))
########################################################################
# Okay, now let us see what the neural network thinks these examples above are:
outputs = net(images)
########################################################################
# The outputs are energies for the 10 classes.
# The higher the energy for a class, the more the network
# thinks that the image is of the particular class.
# So, let's get the index of the highest energy:
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                              for j in range(4)))
# +
########################################################################
# The results seem pretty good.
#
# Let us look at how the network performs on the whole dataset.
correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        images = images.to(device)
        labels = labels.to(device)
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
# +
########################################################################
# That looks way better than chance, which is 10% accuracy (randomly picking
# a class out of 10 classes).
# Seems like the network learnt something.
#
# Hmmm, what are the classes that performed well, and the classes that did
# not perform well:
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        images = images.to(device)
        labels = labels.to(device)
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1
for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))
# -
| example/quickstart.ipynb |