Load the Estonian weather service feed: https://www.ilmateenistus.ee/teenused/ilmainfo/ilmatikker/ | import requests
import datetime
import xml.etree.ElementTree as ET
import pandas as pd
from pandas.api.types import is_string_dtype
from pandas.api.types import is_numeric_dtype
import geopandas as gpd
import fiona
from fiona.crs import from_epsg
import numpy as np
from shapely.geometry import Point
import matplotlib.pyplot as plt
%matplotlib inline
req = requests.get("http://www.ilmateenistus.ee/ilma_andmed/xml/observations.php")
print(req.encoding)
print(req.headers['content-type'])
tree = ET.fromstring(req.content.decode(req.encoding) )
print(tree.tag)
print(tree.attrib)
ts = tree.attrib['timestamp']
print(datetime.datetime.fromtimestamp(int(ts)))
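The feed structure can be exercised offline: the root element carries a Unix `timestamp` attribute and contains one `<station>` element per station. A minimal sketch against a hand-made snippet (the snippet is illustrative; the element names match those parsed below):

```python
import datetime
import xml.etree.ElementTree as ET

# Illustrative snippet mimicking the structure of observations.php
xml_snippet = """<observations timestamp="1580000000">
  <station>
    <name>Virtsu</name>
    <wmocode>26128</wmocode>
    <longitude>23.51355555534363</longitude>
    <latitude>58.572674999100215</latitude>
    <airtemperature>-3.6</airtemperature>
  </station>
</observations>"""

demo_tree = ET.fromstring(xml_snippet)
demo_ts = datetime.datetime.fromtimestamp(int(demo_tree.attrib["timestamp"]))
demo_station = demo_tree.find("station")
print(demo_ts, demo_station.find("name").text, demo_station.find("airtemperature").text)
```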
data = {'stations' : [],
'wmocode': [],
'precipitations': [],
'airtemperature': [],
'windspeed': [],
'waterlevel': [],
'watertemperature': [],
'geometry': []
}
counter = 0
for station in tree.findall('station'):
    counter += 1
    # print(station.tag, station.attrib)
    # <name>Virtsu</name> – station name.
    name = station.find('name').text
    data['stations'].append(name)
    # <wmocode>26128</wmocode> – station WMO code.
    wmocode = station.find('wmocode').text
    data['wmocode'].append(wmocode)
    try:
        # <longitude>23.51355555534363</longitude> – station location coordinate.
        lon = station.find('longitude').text
        # <latitude>58.572674999100215</latitude> – station location coordinate.
        lat = station.find('latitude').text
        coords = Point(float(lon), float(lat))
        data['geometry'].append(coords)
    except ValueError as ve:
        # append None so all columns stay the same length when coordinates are missing
        data['geometry'].append(None)
    # <phenomenon>Light snowfall</phenomenon> – weather phenomenon at the station; in its absence, the degree of cloudiness (where the station makes manual cloudiness observations). A full list of phenomena is given in the table below.
    # <visibility>34.0</visibility> – visibility (km).
    # <precipitations>0</precipitations> – precipitation (mm) during the last hour. Snow, sleet, hail and similar are also given as millimetres of water; 1 cm of snow ~ 1 mm of water.
    precip = station.find('precipitations').text
    data['precipitations'].append(precip)
    # <airpressure>1005.4</airpressure> – air pressure (hPa); normal pressure is 1013.25 hPa.
    # <relativehumidity>57</relativehumidity> – relative humidity (%).
    # <airtemperature>-3.6</airtemperature> – air temperature (°C).
    temp = station.find('airtemperature').text
    data['airtemperature'].append(temp)
    # <winddirection>101</winddirection> – wind direction (°).
    # <windspeed>3.2</windspeed> – average wind speed (m/s).
    wind = station.find('windspeed').text
    data['windspeed'].append(wind)
    # <windspeedmax>5.1</windspeedmax> – maximum wind speed, i.e. gusts (m/s).
    # <waterlevel>-49</waterlevel> – water level (cm relative to the Kronstadt datum).
    waterlevel = station.find('waterlevel').text
    data['waterlevel'].append(waterlevel)
    # <waterlevel_eh2000>-28</waterlevel_eh2000> – water level (cm relative to the Amsterdam datum).
    # waterlevel_eh2000 = station.find('waterlevel_eh2000').text
    # <watertemperature>-0.2</watertemperature> – water temperature (°C).
    watertemp = station.find('watertemperature').text
    data['watertemperature'].append(watertemp)
print(counter)
df = pd.DataFrame(data)
for field in ['precipitations', 'airtemperature', 'windspeed', 'waterlevel', 'watertemperature']:
    if field in df.columns:
        if is_string_dtype(df[field]):
            df[field] = df[field].astype(float)
display(df.head(5))
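A more defensive variant of the conversion above: `pd.to_numeric` with `errors='coerce'` turns unparseable entries into NaN instead of raising, which matters for a live feed that may contain empty or malformed fields (the tiny frame below is made up):

```python
import pandas as pd

df_demo = pd.DataFrame({"precipitations": ["0", "1.2", "n/a", "0.4"]})
# astype(float) would raise on "n/a"; errors="coerce" maps it to NaN instead
df_demo["precipitations"] = pd.to_numeric(df_demo["precipitations"], errors="coerce")
print(df_demo["precipitations"].tolist())
```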
geo_df = gpd.GeoDataFrame(df, crs=from_epsg(4326), geometry='geometry')
geo_df.plot()
water_df = geo_df.dropna(subset=['precipitations'])
water_df.plot(column='precipitations', legend=True)
geo_df_3301 = geo_df.dropna(subset=['precipitations']).to_crs(epsg=3301)
geo_df_3301['x'] = geo_df_3301['geometry'].apply(lambda p: p.x)
geo_df_3301['y'] = geo_df_3301['geometry'].apply(lambda p: p.y)
display(geo_df_3301.head(5))
geo_df_3301.to_file('ilmateenistus_precip_stations.shp', encoding='utf-8') | MIT | interpol_precip.ipynb | allixender/py_interpol_demo |
IDW in Python from scratch blogpost: https://www.geodose.com/2019/09/creating-idw-interpolation-from-scratch-python.html - IDW Algorithm Implementation in Python - IDW Interpolation Algorithm Based on Block Radius Sampling Point - IDW Interpolation Based on Minimum Number of Sampling Points | geo_df_3301.dtypes
from idw_basic import idw_rblock, idw_npoint
x_idw_list1, y_idw_list1, z_head1 = idw_rblock(x=geo_df_3301['x'].astype(float).values.tolist(),
y=geo_df_3301['y'].astype(float).values.tolist(),
z=geo_df_3301['precipitations'].values.tolist(),
grid_side_length=200,
search_radius=50000,
p=1.5)
display(len(x_idw_list1))
display(len(y_idw_list1))
display(len(z_head1))
display(np.array(z_head1).shape)
plt.matshow(z_head1, origin='lower')
plt.colorbar()
plt.show()
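Both gridding strategies (`idw_rblock`, `idw_npoint`) reduce, at each grid node, to the same weighted average with weights 1/d^p. A self-contained sketch of that core computation (an illustration, not the `idw_basic` code itself):

```python
import numpy as np

def idw_point(xq, yq, xs, ys, zs, p=1.5, eps=1e-12):
    """Inverse-distance-weighted estimate at a single query point (xq, yq)."""
    d = np.hypot(np.asarray(xs) - xq, np.asarray(ys) - yq)
    if np.any(d < eps):               # query coincides with a sample point
        return float(np.asarray(zs)[d < eps][0])
    w = 1.0 / d ** p                  # closer samples get larger weights
    return float(np.sum(w * np.asarray(zs)) / np.sum(w))

xs, ys, zs = [0.0, 10.0], [0.0, 0.0], [0.0, 10.0]
print(idw_point(5.0, 0.0, xs, ys, zs))  # equidistant from both samples -> their mean
```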
_idw_npoint_ might take very long, due to the iterative search-radius increase needed to find at least n nearest neighbours | x_idw_list2, y_idw_list2, z_head2 = idw_npoint(x=geo_df_3301['x'].astype(float).values.tolist(),
y=geo_df_3301['y'].astype(float).values.tolist(),
z=geo_df_3301['airtemperature'].values.tolist(),
grid_side_length=100,
n_points=3,
p=1.5,
rblock_iter_distance=50000)
display(len(x_idw_list2))
display(len(y_idw_list2))
display(len(z_head2))
display(np.array(z_head2).shape)
plt.matshow(z_head2, origin='lower')
plt.colorbar()
plt.show()
Inverse distance weighting (IDW) in Python with a KDTree. By Paul Brodersen, Copyright (C) 2016, under GPL-3.0. Code: https://github.com/paulbrodersen/inverse_distance_weighting. Inverse distance weighting is an interpolation method that computes the score of query points based on the scores of their k-nearest neighbours, weighted by the inverse of their distances. As each query point is evaluated using the same number of data points, this method allows for strong gradient changes in regions of high sample density while imposing smoothness in data-sparse regions. Uses: numpy and scipy.spatial (for cKDTree). | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import idw_knn
XY_obs_coords = np.vstack([geo_df_3301['x'].values, geo_df_3301['y'].values]).T
z_arr = geo_df_3301['precipitations'].values
display(XY_obs_coords.shape)
display(z_arr.shape)
# returns a function that is trained (the tree setup) for the interpolation on the grid
idw_tree = idw_knn.tree(XY_obs_coords, z_arr)
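Under the hood, this kind of interpolator builds a `scipy.spatial.cKDTree` over the observation coordinates so each query only inspects its k nearest samples. A self-contained sketch of the idea on synthetic data (the names below are illustrative, not the `idw_knn` API):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
demo_pts = rng.uniform(0, 100, size=(50, 2))   # synthetic sample coordinates
demo_vals = demo_pts[:, 0] * 0.1               # synthetic sample values

demo_kdtree = cKDTree(demo_pts)

def demo_idw_query(q, k=6, p=2, eps=1e-12):
    """IDW over the k nearest samples, found via the KD-tree."""
    d, idx = demo_kdtree.query(q, k=k)
    w = 1.0 / np.maximum(d, eps) ** p
    return np.sum(w * demo_vals[idx], axis=-1) / np.sum(w, axis=-1)

print(demo_idw_query(np.array([[50.0, 50.0]])))
```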
all_dist_m = geo_df_3301['x'].max() - geo_df_3301['x'].min()
dist_km_x = all_dist_m / 1000
display(dist_km_x)
all_dist_m_y = geo_df_3301['y'].max() - geo_df_3301['y'].min()
dist_km_y = all_dist_m_y / 1000
display(dist_km_y)
# prepare grids
# number of target interpolation grid shape along x and y axis, e.g. 150*100 raster pixels
nx=int(dist_km_x)
ny=int(dist_km_y)
# preparing the "output" grid
x_spacing = np.linspace(geo_df_3301['x'].min(), geo_df_3301['x'].max(), nx)
y_spacing = np.linspace(geo_df_3301['y'].min(), geo_df_3301['y'].max(), ny)
# preparing the target grid
x_y_grid_pairs = np.meshgrid(x_spacing, y_spacing)
x_y_grid_pairs_list = np.reshape(x_y_grid_pairs, (2, -1)).T
display(f"x_y_grid_pairs {len(x_y_grid_pairs)}")
display(f"x_y_grid_pairs_list reshaped {x_y_grid_pairs_list.shape}")
# now interpolating onto the target grid
z_arr_interp = idw_tree(x_y_grid_pairs_list)
display(f"z_arr_interp {z_arr_interp.shape}")
# plot
fig, (ax1, ax2) = plt.subplots(1,2, sharex=True, sharey=True, figsize=(10,3))
ax1.scatter(XY_obs_coords[:,0], XY_obs_coords[:,1], c=geo_df_3301['precipitations'], linewidths=0)
ax1.set_title('Observation samples')
ax2.contourf(x_spacing, y_spacing, z_arr_interp.reshape((ny,nx)))
ax2.set_title('Interpolation')
plt.show()
z_arr_interp.shape
plt.matshow(z_arr_interp.reshape((ny,nx)), origin='lower')
plt.colorbar()
plt.show()
display(f"x_spacing {x_spacing.shape}")
display(f"y_spacing {y_spacing.shape}")
# x_y_grid_pairs is a list of two ndarrays; each is a full spatial field (e.g. 100x150), one holding the x coords, the other the y coords
x_mg = np.meshgrid(x_spacing, y_spacing)
display(f"x_mg {type(x_mg)} {len(x_mg)} len0 {type(x_mg[0])} {len(x_mg[0])} {x_mg[0].shape} len1 {type(x_mg[1])} {len(x_mg[1])} {x_mg[0].shape}")
# they get reshaped into one long flattened array of (x, y) pairs representing all target grid locations
x_mg_interp_prep = np.reshape(x_mg, (2, -1)).T
display(f"x_mg_interp_prep {type(x_mg_interp_prep)} {len(x_mg_interp_prep)} {x_mg_interp_prep.shape}")
Interpolation in Python with Radial Basis Function - https://stackoverflow.com/a/3114117 | from scipy.interpolate import Rbf
def scipy_idw(x, y, z, xi, yi):
    interp = Rbf(x, y, z, function='linear')
    return interp(xi, yi)

def plot(x, y, z, grid):
    plt.figure()
    # origin='lower' already puts row 0 at the bottom, so no flipud is needed here
    plt.imshow(grid, extent=(x.min(), x.max(), y.min(), y.max()), origin='lower')
    plt.scatter(x, y, c=z)
    plt.colorbar()
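`Rbf` with `function='linear'` is an exact interpolant, i.e. the fitted surface passes through every sample. A tiny sketch on four made-up corner points (in recent SciPy versions `scipy.interpolate.RBFInterpolator` is the recommended replacement for the legacy `Rbf` class):

```python
import numpy as np
from scipy.interpolate import Rbf

cx = np.array([0.0, 1.0, 0.0, 1.0])
cy = np.array([0.0, 0.0, 1.0, 1.0])
cz = np.array([0.0, 1.0, 1.0, 2.0])   # cz = cx + cy at the four corners

rbf_demo = Rbf(cx, cy, cz, function='linear')
print(float(rbf_demo(0.5, 0.5)))  # the centre value falls between the corner values
```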
# nx, ny = 50, 50
x=geo_df_3301['x'].astype(float).values
y=geo_df_3301['y'].astype(float).values
z=geo_df_3301['precipitations'].values
xi = np.linspace(x.min(), x.max(), nx)
yi = np.linspace(y.min(), y.max(), ny)
xi, yi = np.meshgrid(xi, yi)
xi, yi = xi.flatten(), yi.flatten()
grid2 = scipy_idw(x,y,z,xi,yi)
grid2 = grid2.reshape((ny, nx))
plot(x,y,z,grid2)
plt.title("Scipy's Rbf with function=linear")
# plot
fig, (ax1, ax2, ax3) = plt.subplots(1,3, sharex=True, sharey=True, figsize=(10,3))
ax1.scatter(x,y, c=z, linewidths=0)
ax1.set_title('Observation samples')
ax2.contourf(np.linspace(x.min(), x.max(), nx), np.linspace(y.min(), y.max(), ny), grid2)
ax2.set_title('Interpolation contours')
ax3.imshow(np.flipud(grid2), extent=(x.min(), x.max(), y.min(), y.max()))
ax3.set_title('RBF pixels')
plt.show()
surface/contour/mesh plotting of interpolated grids: https://matplotlib.org/3.1.0/gallery/images_contours_and_fields/pcolormesh_levels.html#sphx-glr-gallery-images-contours-and-fields-pcolormesh-levels-py | from matplotlib.colors import BoundaryNorm
from matplotlib.ticker import MaxNLocator
from matplotlib import cm
nbins=15
levels = MaxNLocator(nbins=nbins).tick_values(z_arr_interp.min(), z_arr_interp.max())
# pick the desired colormap, sensible levels, and define a normalization
# instance which takes data values and translates those into levels.
cmap = plt.get_cmap('viridis')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
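What the two pieces do together: `MaxNLocator.tick_values` picks nicely rounded level boundaries covering the data range, and `BoundaryNorm` then maps each data value to the discrete colormap bin of its level. A minimal sketch with made-up bounds:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs outside a notebook
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
from matplotlib.colors import BoundaryNorm

demo_levels = MaxNLocator(nbins=5).tick_values(0.0, 2.3)  # "nice" boundaries covering [0, 2.3]
demo_cmap = plt.get_cmap("viridis")
demo_norm = BoundaryNorm(demo_levels, ncolors=demo_cmap.N, clip=True)
print(demo_levels, int(demo_norm(1.0)))  # the norm returns a colormap bin index
```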
# plot
fig, (ax1, ax2) = plt.subplots(1,2, sharex=True, sharey=True, figsize=(10,3))
im = ax1.pcolormesh(x_idw_list1, y_idw_list1, np.array(z_head1), cmap=cmap, norm=norm)
fig.colorbar(im, ax=ax1)
ax1.set_title('pcolormesh with normalisation (nbins={})'.format(nbins))
im2 = ax2.pcolormesh(x_idw_list1, y_idw_list1, np.array(z_head1), cmap=cm.viridis)
fig.colorbar(im2, ax=ax2)
ax2.set_title('pcolormesh without explicit normalisation')
plt.show()
# plot
fig, (ax1, ax2) = plt.subplots(1,2, sharex=True, sharey=True, figsize=(10,3))
cf = ax1.contourf(x_spacing, y_spacing, z_arr_interp.reshape((ny,nx)), levels=levels, cmap=cmap)
fig.colorbar(cf, ax=ax1)
ax1.set_title('contourf with {} levels'.format(nbins))
cf2 = ax2.contourf(x_spacing, y_spacing, z_arr_interp.reshape((ny,nx)), cmap=cm.viridis)
fig.colorbar(cf2, ax=ax2)
ax2.set_title('contourf with default levels')
plt.show()
z_arr_interp.reshape((ny,nx)).shape
Writing interpolated array to a raster file- GeoTiff raster with GDAL Python | from fiona.crs import from_epsg
import pyproj
import osgeo.osr
import gdal
gdal.UseExceptions()
# wkt_projection = CRS("EPSG:3301") -> technically one should take the CRS from the GeoDataFrame
crs = pyproj.Proj(from_epsg(3301))
srs = osgeo.osr.SpatialReference()
srs.ImportFromProj4(crs.srs)
wkt_projection = srs.ExportToWkt()
#
# KDTree z_arr_interp
#
ncols = nx
nrows = ny
cell_unit_sizeX = (geo_df_3301['x'].max() - geo_df_3301['x'].min()) / ncols
cell_unit_sizeY = (geo_df_3301['y'].max() - geo_df_3301['y'].min()) / nrows
testnp = z_arr_interp.reshape((ny,nx))
xllcorner = geo_df_3301['x'].min()
xulcorner = geo_df_3301['x'].min()
yllcorner = geo_df_3301['y'].min()
yulcorner = geo_df_3301['y'].max()
nodata_value = -9999
driver = gdal.GetDriverByName("GTiff")
dataset = driver.Create("kdtree_precip_rasterout1.tif", ncols, nrows, 1, gdal.GDT_Float32 )
dataset.SetProjection(wkt_projection)
dataset.SetGeoTransform((xulcorner,cell_unit_sizeX,0,yulcorner,0,-cell_unit_sizeY))
dataset.GetRasterBand(1).WriteArray(np.flipud(testnp))
band = dataset.GetRasterBand(1)
band.SetNoDataValue(nodata_value)
dataset.FlushCache()
# dereference band to avoid gotcha described previously
band = None
dataset = None
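The six-element geotransform passed to `SetGeoTransform` maps pixel indices to projected coordinates: with no rotation terms, `x = xul + col*dx` and `y = yul + row*(-dy)`, with row 0 at the top edge (which is why the array is flipped with `np.flipud` before writing). A GDAL-free sketch of that mapping, using made-up coordinate values:

```python
def pixel_to_coords(gt, col, row):
    """Apply a GDAL-style geotransform (no rotation terms) to a pixel index."""
    xul, dx, _rot1, yul, _rot2, neg_dy = gt
    return (xul + col * dx, yul + row * neg_dy)

demo_gt = (370000.0, 500.0, 0.0, 6640000.0, 0.0, -500.0)  # hypothetical EPSG:3301 extent
print(pixel_to_coords(demo_gt, 0, 0))   # upper-left corner
print(pixel_to_coords(demo_gt, 10, 2))
```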
#
# RBF grid2
#
testnp = grid2.reshape((ny,nx))
ncols = nx
nrows = ny
cell_unit_sizeX = (geo_df_3301['x'].max() - geo_df_3301['x'].min()) / ncols
cell_unit_sizeY = (geo_df_3301['y'].max() - geo_df_3301['y'].min()) / nrows
xllcorner = geo_df_3301['x'].min()
xulcorner = geo_df_3301['x'].min()
yllcorner = geo_df_3301['y'].min()
yulcorner = geo_df_3301['y'].max()
nodata_value = -9999
driver = gdal.GetDriverByName("GTiff")
dataset = driver.Create("rbf_precip_rasterout1.tif", ncols, nrows, 1, gdal.GDT_Float32 )
dataset.SetProjection(wkt_projection)
dataset.SetGeoTransform((xulcorner,cell_unit_sizeX,0,yulcorner,0,-cell_unit_sizeY))
dataset.GetRasterBand(1).WriteArray(np.flipud(testnp))
band = dataset.GetRasterBand(1)
band.SetNoDataValue(nodata_value)
dataset.FlushCache()
# dereference band to avoid gotcha described previously
band = None
dataset = None
ncols = 200
nrows = 200
cell_unit_sizeX = (geo_df_3301['x'].max() - geo_df_3301['x'].min()) / ncols
cell_unit_sizeY = (geo_df_3301['y'].max() - geo_df_3301['y'].min()) / nrows
xllcorner = geo_df_3301['x'].min()
xulcorner = geo_df_3301['x'].min()
yllcorner = geo_df_3301['y'].min()
yulcorner = geo_df_3301['y'].max()
nodata_value = -9999
driver = gdal.GetDriverByName("GTiff")
# dataset = driver.Create("%s"%(OutputFile), NROWS, NCOLS, 1, gdal.GDT_Float32 )
dataset = driver.Create("idw_basic_precip_rasterout1.tif", ncols, nrows, 1, gdal.GDT_Float32 )
dataset.SetProjection(wkt_projection)
dataset.SetGeoTransform((xulcorner,cell_unit_sizeX,0,yulcorner,0,-cell_unit_sizeY))
dataset.GetRasterBand(1).WriteArray(np.flipud(np.array(z_head1)))
band = dataset.GetRasterBand(1)
band.SetNoDataValue(nodata_value)
dataset.FlushCache()
# dereference band to avoid gotcha described previously
band = None
dataset = None
Point Query RasterStats: https://pythonhosted.org/rasterstats/manual.html#basic-example | from rasterstats import point_query
xm = gpd.read_file('ilmateenistus_precip_stations.shp', encoding="utf-8")
pts_kd = point_query('ilmateenistus_precip_stations.shp', "kdtree_precip_rasterout1.tif")
pts_rbf = point_query('ilmateenistus_precip_stations.shp', "rbf_precip_rasterout1.tif")
pts_idw = point_query('ilmateenistus_precip_stations.shp', "idw_basic_precip_rasterout1.tif")
xm['pcp_kdtree'] = pts_kd
xm['pcp_rbf'] = pts_rbf
xm['pcp_idw'] = pts_idw
xm = xm[['precipitat','pcp_kdtree','pcp_rbf','pcp_idw']].dropna()
from sklearn.metrics import mean_squared_error, r2_score
x_l = []
for rst in ['pcp_kdtree', 'pcp_rbf', 'pcp_idw']:
    rmse = np.sqrt(mean_squared_error(xm['precipitat'], xm[rst]))
    r2 = r2_score(xm['precipitat'], xm[rst])
    x_l.append({'name': rst, 'rmse': rmse, 'r2': r2})
pd.DataFrame(x_l)
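The two scores can also be written out directly: RMSE is the root of the mean squared residual, and R² is one minus the ratio of residual variance to total variance. A numpy-only sketch with made-up observed/predicted values:

```python
import numpy as np

obs = np.array([0.0, 1.0, 2.0, 3.0])
pred = np.array([0.1, 0.9, 2.2, 2.8])

rmse = np.sqrt(np.mean((obs - pred) ** 2))       # root mean squared error
ss_res = np.sum((obs - pred) ** 2)               # residual sum of squares
ss_tot = np.sum((obs - obs.mean()) ** 2)         # total sum of squares
r2 = 1.0 - ss_res / ss_tot
print(rmse, r2)
```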
Distributed Training of Mask-RCNN in Amazon SageMaker using EFS. This notebook is a step-by-step tutorial on distributed training of [Mask R-CNN](https://arxiv.org/abs/1703.06870) implemented in the [TensorFlow](https://www.tensorflow.org/) framework. Mask R-CNN is also referred to as a heavy-weight object detection model and it is part of [MLPerf](https://www.mlperf.org/training-results-0-6/). Concretely, we will describe the steps for training [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) and [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) in [Amazon SageMaker](https://aws.amazon.com/sagemaker/) using an [Amazon EFS](https://aws.amazon.com/efs/) file-system as the data source. The outline of steps is as follows:
1. Stage COCO 2017 dataset in [Amazon S3](https://aws.amazon.com/s3/)
2. Copy COCO 2017 dataset from S3 to Amazon EFS file-system mounted on this notebook instance
3. Build Docker training image and push it to [Amazon ECR](https://aws.amazon.com/ecr/)
4. Configure data input channels
5. Configure hyper-parameters
6. Define training metrics
7. Define training job and start training
Before we get started, let us initialize two python variables ```aws_region``` and ```s3_bucket``` that we will use throughout the notebook: | aws_region = # aws-region-code e.g. us-east-1
s3_bucket = # your-s3-bucket-name | Apache-2.0 | advanced_functionality/distributed_tensorflow_mask_rcnn/mask-rcnn-efs.ipynb | fhirschmann/amazon-sagemaker-examples |
Stage COCO 2017 dataset in Amazon S3. We use the [COCO 2017 dataset](http://cocodataset.org/home) for training. We download the COCO 2017 training and validation datasets to this notebook instance, extract the files from the dataset archives, and upload the extracted files to your Amazon [S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html). The ```prepare-s3-bucket.sh``` script executes this step. | !cat ./prepare-s3-bucket.sh
Using your *Amazon S3 bucket* as argument, run the cell below. If you have already uploaded COCO 2017 dataset to your Amazon S3 bucket, you may skip this step. | %%time
!./prepare-s3-bucket.sh {s3_bucket}
Copy COCO 2017 dataset from S3 to Amazon EFS. Next, we copy the COCO 2017 dataset from S3 to the EFS file-system. The ```prepare-efs.sh``` script executes this step. | !cat ./prepare-efs.sh
If you have already copied COCO 2017 dataset from S3 to your EFS file-system, skip this step. | %%time
!./prepare-efs.sh {s3_bucket}
Build and push SageMaker training images. For this step, the [IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) attached to this notebook instance needs full access to the Amazon ECR service. If you created this notebook instance using the ```./stack-sm.sh``` script in this repository, the IAM Role attached to this notebook instance is already set up with full access to the ECR service. Below, we have a choice of two different implementations:
1. The [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) implementation supports a maximum per-GPU batch size of 1, and does not support mixed precision. It can be used with mainstream TensorFlow releases.
2. [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) is an optimized implementation that supports a maximum batch size of 4 and supports mixed precision. This implementation uses custom TensorFlow ops. The required custom TensorFlow ops are available in [AWS Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) images in the ```tensorflow-training``` repository with image tag ```1.15.2-gpu-py36-cu100-ubuntu18.04```, or later.
It is recommended that you build and push both SageMaker training images and use either image for training later. TensorPack Faster-RCNN/Mask-RCNN: use the ```./container/build_tools/build_and_push.sh``` script to build and push the TensorPack Faster-RCNN/Mask-RCNN training image to Amazon ECR. | !cat ./container/build_tools/build_and_push.sh
Using your *AWS region* as argument, run the cell below. | %%time
! ./container/build_tools/build_and_push.sh {aws_region}
Set ```tensorpack_image``` below to Amazon ECR URI of the image you pushed above. | tensorpack_image = # mask-rcnn-tensorpack-sagemaker ECR URI
AWS Samples Mask R-CNN: use the ```./container-optimized/build_tools/build_and_push.sh``` script to build and push the AWS Samples Mask R-CNN training image to Amazon ECR. | !cat ./container-optimized/build_tools/build_and_push.sh
Using your *AWS region* as argument, run the cell below. | %%time
! ./container-optimized/build_tools/build_and_push.sh {aws_region}
Set ```aws_samples_image``` below to Amazon ECR URI of the image you pushed above. | aws_samples_image = # mask-rcnn-tensorflow-sagemaker ECR URI
SageMaker Initialization. First we upgrade SageMaker to the 2.3.0 API. If your notebook is already using the latest SageMaker 2.x API, you may skip the next cell. | ! pip install --upgrade pip
! pip install sagemaker==2.3.0
We have staged the data and we have built and pushed the training docker image to Amazon ECR. Now we are ready to start using Amazon SageMaker. | %%time
import os
import time
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.estimator import Estimator
role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role
print(f'SageMaker Execution Role:{role}')
client = boto3.client('sts')
account = client.get_caller_identity()['Account']
print(f'AWS account:{account}')
session = boto3.session.Session()
region = session.region_name
print(f'AWS region:{region}')
Next, we set the Amazon ECR image URI used for training. You saved this URI in a previous step. | training_image = # set to tensorpack_image or aws_samples_image
print(f'Training image: {training_image}')
Define SageMaker Data Channels. Next, we define the *train* and *log* data channels using the EFS file-system. To do so, we need to specify the EFS file-system id, which is shown in the output of the command below. | !df -kh | grep 'fs-' | sed 's/\(fs-[0-9a-z]*\).*/\1/'
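The shell pipeline above extracts the `fs-...` id with `grep` and `sed`; the same extraction can be done in Python with the equivalent pattern (the sample `df` line below is made up):

```python
import re

df_line = "fs-0123abcd.efs.us-east-1.amazonaws.com:/  8.0E  1.2G  8.0E  1% /home/ec2-user/efs"
m = re.search(r"fs-[0-9a-z]+", df_line)  # same character class as the sed expression
demo_fs_id = m.group(0) if m else None
print(demo_fs_id)
```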
Set the EFS ```file_system_id``` below to the output of the command shown above. In the cell below, we define the `train` data input channel. | from sagemaker.inputs import FileSystemInput
# Specify the EFS file system id.
file_system_id = # 'fs-xxxxxxxx'
print(f"EFS file-system-id: {file_system_id}")
# Specify directory path for input data on the file system.
# You need to provide normalized and absolute path below.
file_system_directory_path = '/mask-rcnn/sagemaker/input/train'
print(f'EFS file-system data input path: {file_system_directory_path}')
# Specify the access mode of the mount of the directory associated with the file system.
# Directory must be mounted 'ro'(read-only).
file_system_access_mode = 'ro'
# Specify your file system type
file_system_type = 'EFS'
train = FileSystemInput(file_system_id=file_system_id,
file_system_type=file_system_type,
directory_path=file_system_directory_path,
file_system_access_mode=file_system_access_mode)
Below we create the log output directory and define the `log` data output channel. | # Specify directory path for log output on the EFS file system.
# You need to provide normalized and absolute path below.
# For example, '/mask-rcnn/sagemaker/output/log'
# Log output directory must not exist
file_system_directory_path = f'/mask-rcnn/sagemaker/output/log-{int(time.time())}'
# Create the log output directory.
# EFS file-system is mounted on '$HOME/efs' mount point for this notebook.
home_dir=os.environ['HOME']
local_efs_path = os.path.join(home_dir,'efs', file_system_directory_path[1:])
print(f"Creating log directory on EFS: {local_efs_path}")
assert not os.path.isdir(local_efs_path)
! sudo mkdir -p -m a=rw {local_efs_path}
assert os.path.isdir(local_efs_path)
# Specify the access mode of the mount of the directory associated with the file system.
# Directory must be mounted 'rw'(read-write).
file_system_access_mode = 'rw'
log = FileSystemInput(file_system_id=file_system_id,
file_system_type=file_system_type,
directory_path=file_system_directory_path,
file_system_access_mode=file_system_access_mode)
data_channels = {'train': train, 'log': log}
Next, we define the model output location in S3. Set ```s3_bucket``` to your S3 bucket name prior to running the cell below. The model checkpoints, logs and Tensorboard events will be written to the log output directory on the EFS file system you created above. At the end of the model training, they will be copied from the log output directory to the `s3_output_location` defined below. | prefix = "mask-rcnn/sagemaker" #prefix in your bucket
s3_output_location = f's3://{s3_bucket}/{prefix}/output'
print(f'S3 model output location: {s3_output_location}')
Configure Hyper-parameters. Next we define the hyper-parameters. Note, some hyper-parameters are different between the two implementations. The batch size per GPU in TensorPack Faster-RCNN/Mask-RCNN is fixed at 1, but is configurable in AWS Samples Mask-RCNN. The learning rate schedule is specified in units of steps in TensorPack Faster-RCNN/Mask-RCNN, but in epochs in AWS Samples Mask-RCNN. The default learning rate schedule values shown below correspond to training for a total of 24 epochs, at 120,000 images per epoch.

TensorPack Faster-RCNN/Mask-RCNN hyper-parameters:
- mode_fpn: Flag to indicate use of Feature Pyramid Network (FPN) in the Mask R-CNN model backbone. Default: "True"
- mode_mask: A value of "False" means Faster-RCNN model, "True" means Mask R-CNN model. Default: "True"
- eval_period: Number of epochs between evaluations during training. Default: 1
- lr_schedule: Learning rate schedule in training steps. Default: '[240000, 320000, 360000]'
- batch_norm: Batch normalization option ('FreezeBN', 'SyncBN', 'GN', 'None'). Default: 'FreezeBN'
- images_per_epoch: Images per epoch. Default: 120000
- data_train: Training data under the data directory. Default: 'coco_train2017'
- data_val: Validation data under the data directory. Default: 'coco_val2017'
- resnet_arch: Must be 'resnet50' or 'resnet101'. Default: 'resnet50'
- backbone_weights: ResNet backbone weights. Default: 'ImageNet-R50-AlignPadding.npz'
- load_model: Pre-trained model to load.
- config: Any hyperparameter prefixed with config: is set as a model config parameter.

AWS Samples Mask-RCNN hyper-parameters:
- mode_fpn: Flag to indicate use of Feature Pyramid Network (FPN) in the Mask R-CNN model backbone. Default: "True"
- mode_mask: A value of "False" means Faster-RCNN model, "True" means Mask R-CNN model. Default: "True"
- eval_period: Number of epochs between evaluations during training. Default: 1
- lr_epoch_schedule: Learning rate schedule in epochs. Default: '[(16, 0.1), (20, 0.01), (24, None)]'
- batch_size_per_gpu: Batch size per GPU (minimum 1, maximum 4). Default: 4
- batch_norm: Batch normalization option ('FreezeBN', 'SyncBN', 'GN', 'None'). Default: 'FreezeBN'
- images_per_epoch: Images per epoch. Default: 120000
- data_train: Training data under the data directory. Default: 'train2017'
- backbone_weights: ResNet backbone weights. Default: 'ImageNet-R50-AlignPadding.npz'
- load_model: Pre-trained model to load.
- config: Any hyperparameter prefixed with config: is set as a model config parameter. | hyperparameters = {
"mode_fpn": "True",
"mode_mask": "True",
"eval_period": 1,
"batch_norm": "FreezeBN"
}
Define Training Metrics. Next, we define the regular expressions that SageMaker uses to extract algorithm metrics from training logs and send them to [AWS CloudWatch metrics](https://docs.aws.amazon.com/en_pv/AmazonCloudWatch/latest/monitoring/working_with_metrics.html). These algorithm metrics are visualized in the SageMaker console. | metric_definitions=[
{
"Name": "fastrcnn_losses/box_loss",
"Regex": ".*fastrcnn_losses/box_loss:\\s*(\\S+).*"
},
{
"Name": "fastrcnn_losses/label_loss",
"Regex": ".*fastrcnn_losses/label_loss:\\s*(\\S+).*"
},
{
"Name": "fastrcnn_losses/label_metrics/accuracy",
"Regex": ".*fastrcnn_losses/label_metrics/accuracy:\\s*(\\S+).*"
},
{
"Name": "fastrcnn_losses/label_metrics/false_negative",
"Regex": ".*fastrcnn_losses/label_metrics/false_negative:\\s*(\\S+).*"
},
{
"Name": "fastrcnn_losses/label_metrics/fg_accuracy",
"Regex": ".*fastrcnn_losses/label_metrics/fg_accuracy:\\s*(\\S+).*"
},
{
"Name": "fastrcnn_losses/num_fg_label",
"Regex": ".*fastrcnn_losses/num_fg_label:\\s*(\\S+).*"
},
{
"Name": "maskrcnn_loss/accuracy",
"Regex": ".*maskrcnn_loss/accuracy:\\s*(\\S+).*"
},
{
"Name": "maskrcnn_loss/fg_pixel_ratio",
"Regex": ".*maskrcnn_loss/fg_pixel_ratio:\\s*(\\S+).*"
},
{
"Name": "maskrcnn_loss/maskrcnn_loss",
"Regex": ".*maskrcnn_loss/maskrcnn_loss:\\s*(\\S+).*"
},
{
"Name": "maskrcnn_loss/pos_accuracy",
"Regex": ".*maskrcnn_loss/pos_accuracy:\\s*(\\S+).*"
},
{
"Name": "mAP(bbox)/IoU=0.5",
"Regex": ".*mAP\\(bbox\\)/IoU=0\\.5:\\s*(\\S+).*"
},
{
"Name": "mAP(bbox)/IoU=0.5:0.95",
"Regex": ".*mAP\\(bbox\\)/IoU=0\\.5:0\\.95:\\s*(\\S+).*"
},
{
"Name": "mAP(bbox)/IoU=0.75",
"Regex": ".*mAP\\(bbox\\)/IoU=0\\.75:\\s*(\\S+).*"
},
{
"Name": "mAP(bbox)/large",
"Regex": ".*mAP\\(bbox\\)/large:\\s*(\\S+).*"
},
{
"Name": "mAP(bbox)/medium",
"Regex": ".*mAP\\(bbox\\)/medium:\\s*(\\S+).*"
},
{
"Name": "mAP(bbox)/small",
"Regex": ".*mAP\\(bbox\\)/small:\\s*(\\S+).*"
},
{
"Name": "mAP(segm)/IoU=0.5",
"Regex": ".*mAP\\(segm\\)/IoU=0\\.5:\\s*(\\S+).*"
},
{
"Name": "mAP(segm)/IoU=0.5:0.95",
"Regex": ".*mAP\\(segm\\)/IoU=0\\.5:0\\.95:\\s*(\\S+).*"
},
{
"Name": "mAP(segm)/IoU=0.75",
"Regex": ".*mAP\\(segm\\)/IoU=0\\.75:\\s*(\\S+).*"
},
{
"Name": "mAP(segm)/large",
"Regex": ".*mAP\\(segm\\)/large:\\s*(\\S+).*"
},
{
"Name": "mAP(segm)/medium",
"Regex": ".*mAP\\(segm\\)/medium:\\s*(\\S+).*"
},
{
"Name": "mAP(segm)/small",
"Regex": ".*mAP\\(segm\\)/small:\\s*(\\S+).*"
}
] | _____no_output_____ | Apache-2.0 | advanced_functionality/distributed_tensorflow_mask_rcnn/mask-rcnn-efs.ipynb | fhirschmann/amazon-sagemaker-examples |
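Each `Regex` above can be sanity-checked locally before launching a training job. A minimal sketch (the log line below is hypothetical, not copied from a real run):

```python
import re

# Hypothetical log line in the format the training container emits
log_line = "maskrcnn_loss/accuracy: 0.912 maskrcnn_loss/fg_pixel_ratio: 0.31"

# Same pattern as the metric definition above
pattern = r".*maskrcnn_loss/accuracy:\s*(\S+).*"

match = re.match(pattern, log_line)
if match:
    print(match.group(1))  # -> 0.912, the value CloudWatch would record
```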
Define SageMaker Training JobNext, we use the SageMaker [Estimator](https://sagemaker.readthedocs.io/en/stable/estimators.html) API to define a SageMaker Training Job. We recommend using 32 GPUs, so we set ```instance_count=4``` and ```instance_type='ml.p3.16xlarge'```, because there are 8 Tesla V100 GPUs per ```ml.p3.16xlarge``` instance. We recommend using a 100 GB [Amazon EBS](https://aws.amazon.com/ebs/) storage volume with each training instance, so we set ```volume_size = 100```. We run the training job in your private VPC, so we need to set the ```subnets``` and ```security_group_ids``` prior to running the cell below. You may specify multiple subnet ids in the ```subnets``` list. The subnets included in the ```subnets``` list must be part of the output of the ```./stack-sm.sh``` CloudFormation stack script used to create this notebook instance. Specify only one security group id in the ```security_group_ids``` list. The security group id must be part of the output of the ```./stack-sm.sh``` script.For ```instance_type``` below, you have the option to use ```ml.p3.16xlarge``` with 16 GB per-GPU memory and 25 Gbs network interconnectivity, or ```ml.p3dn.24xlarge``` with 32 GB per-GPU memory and 100 Gbs network interconnectivity. The ```ml.p3dn.24xlarge``` instance type offers significantly better performance than ```ml.p3.16xlarge``` for Mask R-CNN distributed TensorFlow training. | # Give Amazon SageMaker Training Jobs Access to FileSystem Resources in Your Amazon VPC.
security_group_ids = # ['sg-xxxxxxxx']
subnets = # [ 'subnet-xxxxxxx', 'subnet-xxxxxxx', 'subnet-xxxxxxx' ]
sagemaker_session = sagemaker.session.Session(boto_session=session)
mask_rcnn_estimator = Estimator(image_uri=training_image,
role=role,
instance_count=4,
instance_type='ml.p3.16xlarge',
volume_size = 100,
max_run = 400000,
output_path=s3_output_location,
sagemaker_session=sagemaker_session,
hyperparameters = hyperparameters,
metric_definitions = metric_definitions,
subnets=subnets,
security_group_ids=security_group_ids)
| _____no_output_____ | Apache-2.0 | advanced_functionality/distributed_tensorflow_mask_rcnn/mask-rcnn-efs.ipynb | fhirschmann/amazon-sagemaker-examples |
Finally, we launch the SageMaker training job. See ```Training Jobs``` in SageMaker console to monitor the training job. | import time
job_name=f'mask-rcnn-efs-{int(time.time())}'
print(f"Launching Training Job: {job_name}")
# set wait=True below if you want to print logs in cell output
mask_rcnn_estimator.fit(inputs=data_channels, job_name=job_name, logs="All", wait=False) | _____no_output_____ | Apache-2.0 | advanced_functionality/distributed_tensorflow_mask_rcnn/mask-rcnn-efs.ipynb | fhirschmann/amazon-sagemaker-examples |
Plotting and Programming in Python (Continued) Plotting | %matplotlib inline
import matplotlib.pyplot as plt
time = [0, 1, 2, 3]
position = [0, 100, 200, 300]
plt.plot(time, position)
plt.xlabel('Time (hr)')
plt.ylabel('Position (km)') | _____no_output_____ | MIT | notebooks/Plotting_Tutorial/Tutorial_pt2.ipynb | chazbethelbrescia/pei2020 |
Plot directly from a Pandas DataFrame | import pandas as pd
data = pd.read_csv('./gapminder_gdp_oceania.csv',
index_col='country')
# so we want to keep the (year) part only for clarity when plotting GDP vs. years
# To do this we use strip(), which removes from the string the characters stated in the argument
# This method works on strings, so we call str before strip()
years = data.columns.str.strip('gdpPercap_')
# Convert year values to integers, saving results back to dataframe
data.columns = years.astype(int) # note astype() --> casting function
data.loc['Australia'].plot()
# More examples:
# GDP Per Capita
data.T.plot() # line by default
plt.ylabel('GDP per capita')
plt.xlabel('Year')
plt.title('GDP per Capita in Oceania')
# MANY styles of plots are available
plt.style.use('ggplot')
data.T.plot(kind='bar') # line, bar, barh, hist, box, area, pie, scatter, hexbin
plt.ylabel('GDP per capita')
# Plotting data using the matplotlib.plot() function directly
years = data.columns
gdp_australia = data.loc['Australia']
plt.plot(years, gdp_australia, 'g--') # format string sets the line's color ('g') and style ('--')
plt.title('Annual GDP in Australia', fontsize=15)
plt.ylabel('GDP')
plt.xlabel('Year') | _____no_output_____ | MIT | notebooks/Plotting_Tutorial/Tutorial_pt2.ipynb | chazbethelbrescia/pei2020 |
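One caveat about the cleanup above: `strip()` removes a *set* of characters from both ends rather than a literal prefix, so it works here only because no year digit appears in that set. A small plain-Python sketch of the difference (`removeprefix` requires Python 3.9+):

```python
label = "gdpPercap_1952"

# strip() removes any characters from the given set at both ends
print(label.strip("gdpPercap_"))         # -> 1952

# removeprefix() removes the exact leading substring (Python 3.9+)
print(label.removeprefix("gdpPercap_"))  # -> 1952

# the difference appears when the payload shares characters with the set
print("pop_pp".strip("pop_"))  # -> '' (everything is stripped)
```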
Can plot many sets of data together | # Select two countries' worth of data.
gdp_australia = data.loc['Australia']
gdp_nz = data.loc['New Zealand']
# Plot with differently-colored markers.
plt.plot(years, gdp_australia, 'b-', label='Australia')
plt.plot(years, gdp_nz, 'y-', label='New Zealand')
# Create legend.
plt.legend(loc='upper left') # location parameter
plt.xlabel('Year')
plt.ylabel('GDP per capita ($)')
plt.title('GDP per capita ($) in Oceania')
# Scatterplot examples:
plt.scatter(gdp_australia, gdp_nz)
data.T.plot.scatter(x = 'Australia', y = 'New Zealand')
# Transpose --> so country indices are now values
# Minima and Maxima
data_europe = pd.read_csv('./gapminder_gdp_europe.csv', index_col='country')
# Note: use of strip technique to clean up labels
years = data_europe.columns.str.strip('gdpPercap_')
data_europe.columns = years;
data_europe.min().plot(label='min')
data_europe.max().plot(label='max')
plt.legend(loc='best')
plt.xticks(rotation=50) # rotate tick labels
# Correlations
data_asia = pd.read_csv('./gapminder_gdp_asia.csv', index_col='country')
data_asia.describe().T.plot(kind='scatter', x='min', y='max')
# Variability of Max is much higher than Min --> take a look at Max variable
data_asia = pd.read_csv('./gapminder_gdp_asia.csv',
index_col='country')
years = data_asia.columns.str.strip('gdpPercap_')
data_asia.columns = years
data_asia.max().plot()
plt.xticks(rotation=80)
print(data_asia.idxmax()) # Remember idxmax function (max value for each index)
# More Correlations
# Create a plot showing correlation between GDP and life expectancy for 2007
data_all = pd.read_csv('./gapminder_all.csv', index_col='country')
data_all.plot(kind='scatter', x='gdpPercap_2007', y='lifeExp_2007',
s=data_all['pop_2007']/1e6) # change size of plotted points
plt.title('Life Expectancy vs. GDP in 2007', fontsize=16) | _____no_output_____ | MIT | notebooks/Plotting_Tutorial/Tutorial_pt2.ipynb | chazbethelbrescia/pei2020 |
Save your plot to a file | # fig = plt.gcf() --> get current figure
data.T.plot(kind='line')
# must get the current figure AFTER it has been plotted
fig = plt.gcf()
plt.legend(loc='upper left')
plt.xlabel('Year')
plt.ylabel('GDP per capita')
fig.savefig('my_figure.png') | _____no_output_____ | MIT | notebooks/Plotting_Tutorial/Tutorial_pt2.ipynb | chazbethelbrescia/pei2020 |
DecoratorsA decorator is a function that takes an object as one of its arguments and returns something. In Python, decorators can be applied to almost anything: functions, classes, and methods. The main purpose of a decorator is to change the behaviour of an object without modifying the object itself. This is a very flexible feature of the language.Functions are decorated using the following syntax```Python@decoratordef function(): ...```This notation is equivalent to the following definition:```Pythondef function(): ...function = decorator(function)```In this case the result of calling ```decorator``` is bound back to the name ```function```.Decorators can be used, for example, to measure function execution time, control the number of calls, implement caching, warn about the use of deprecated functions, perform call tracing, or support design by contract.Let us consider an example of measuring the execution time of a function's code. | import time
def timeit(f):
def inner(*args, **kwargs):
start = time.time()
res = f(*args, **kwargs)
end = time.time()
print(f'{end - start} seconds')
return res
return inner
@timeit
def my_sum(*args, **kwargs):
"""Sum function"""
return sum(*args, **kwargs)
res = my_sum([i for i in range(int(1e5))]) | 0.0019989013671875 seconds
| MIT | python_pd/04_functions/07_decorators.ipynb | AsakoKabe/python-bp |
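As a quick self-contained illustration (using a toy decorator, not the notebook's code) that the `@` syntax is just sugar for reassignment:

```python
def shout(f):
    def inner(*args, **kwargs):
        # upper-case whatever the wrapped function returns
        return f(*args, **kwargs).upper()
    return inner

@shout
def greet(name):
    return f"hello, {name}"

def greet_plain(name):
    return f"hello, {name}"

greet_plain = shout(greet_plain)  # manual equivalent of @shout

print(greet("monty"), greet_plain("monty"))  # -> HELLO, MONTY HELLO, MONTY
```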
This implementation has several problems:- there is no way to disable the timing;- the output always goes to the standard output stream (```sys.stdout```);- the docstring and attributes of the decorated function are lost. | print(f'{my_sum.__name__ = }')
print(f'{my_sum.__doc__ = }')
help(my_sum) | my_sum.__name__ = 'inner'
my_sum.__doc__ = None
Help on function inner in module __main__:
inner(*args, **kwargs)
| MIT | python_pd/04_functions/07_decorators.ipynb | AsakoKabe/python-bp |
Since functions in Python are objects, they can be modified at run time. This is the key to solving the problem: we can copy the needed attributes of the decorated function over to the wrapper.To avoid copying each attribute by hand, a ready-made implementation of this functionality is available in the ```functools``` module of the standard library. | from functools import wraps
def timeit(f):
@wraps(f)
def inner(*args, **kwargs):
start = time.time()
res = f(*args, **kwargs)
end = time.time()
print(f'{end - start} seconds')
return res
return inner
@timeit
def my_sum(*args, **kwargs):
"""Sum function"""
return sum(*args, **kwargs)
print(f'{my_sum.__name__ = }')
print(f'{my_sum.__doc__ = }')
help(my_sum) | my_sum.__name__ = 'my_sum'
my_sum.__doc__ = 'Sum function'
Help on function my_sum in module __main__:
my_sum(*args, **kwargs)
Sum function
| MIT | python_pd/04_functions/07_decorators.ipynb | AsakoKabe/python-bp |
Parameterized decoratorsThe decorator we have implemented has a very limited range of use; let us try to extend it.Disabling the decorator can be implemented with a global variable, e.g. ```dec_enabled```, set to ```True``` when the decorator is active and ```False``` otherwise.Writing not only to the standard output stream (```sys.stdout```) but also to the error stream (```sys.stderr```) or to a file can be supported by passing arguments. Adding arguments to decorators makes the task slightly more complicated.```python@decorator(arg)def foo(): ...```In this case an extra stage is added, namely evaluating the decorator itself.```pythondef foo(): ...dec = decorator(x) # the new stagefoo = dec(foo)```The argument-passing problem can be solved in several ways. The first of them, and not the best one, is to add one more nested function. | import sys
dec_enabled = True
def timeit(file):
def dec(func):
@wraps(func)
def inner(*args, **kwargs):
start = time.time()
res = func(*args, **kwargs)
end = time.time()
print(f'{end - start} seconds', file=file)
return res
return inner if dec_enabled else func
return dec
@timeit(sys.stderr)
def my_sum(*args, **kwargs):
"""Sum function"""
return sum(*args, **kwargs)
res = my_sum([i for i in range(int(1e5))])
print(res) | 4999950000
0.0009996891021728516 seconds
| MIT | python_pd/04_functions/07_decorators.ipynb | AsakoKabe/python-bp |
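Because the report goes through `print(..., file=...)`, any file-like object works — including an in-memory buffer, which is handy for testing. A sketch using `io.StringIO` (re-implementing the simplified three-level decorator from above):

```python
import io
import time
from functools import wraps

def timeit(file):
    # simplified version of the decorator above, with a configurable destination
    def dec(func):
        @wraps(func)
        def inner(*args, **kwargs):
            start = time.time()
            res = func(*args, **kwargs)
            print(f"{time.time() - start} seconds", file=file)
            return res
        return inner
    return dec

buf = io.StringIO()

@timeit(buf)
def add(a, b):
    return a + b

add(1, 2)
print(buf.getvalue().strip())  # e.g. "1.9e-06 seconds"
```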
This version works when decorating like this: ```@timeit(sys.stderr)```. However, constantly writing triply nested decorators is not the Pythonic way. Instead, we can write, once, a decorator for decorators that enables argument passing (yes, a decorator for a decorator). | from functools import update_wrapper
def with_args(dec):
@wraps(dec)
def wrapper(*args, **kwargs):
def decorator(func):
res = dec(func, *args, **kwargs)
update_wrapper(res, func)
return res
return decorator
return wrapper | _____no_output_____ | MIT | python_pd/04_functions/07_decorators.ipynb | AsakoKabe/python-bp |
The ```with_args``` function takes a decorator and wraps it in a ```wrapper``` inside which a new decorator is created. The original decorator is left unchanged. | dec_enabled = True
@with_args
def timeit(func, file):
def inner(*args, **kwargs):
start = time.time()
res = func(*args, **kwargs)
end = time.time()
print(f'{end - start} seconds', file=file)
return res
return inner if dec_enabled else func
@timeit(sys.stderr)
def my_sum(*args, **kwargs):
"""Sum function"""
return sum(*args, **kwargs)
res = my_sum([i for i in range(int(1e5))])
print(res) | 4999950000
0.001997709274291992 seconds
| MIT | python_pd/04_functions/07_decorators.ipynb | AsakoKabe/python-bp |
However, this is still too complicated. It is far more convenient to also allow calling the decorator without arguments. Let us try using keyword-only arguments. | dec_enabled = True
def timeit(func=None, *, file=sys.stderr):
if func is None:
def dec(func):
return timeit(func, file=file)
return dec if dec_enabled else func
@wraps(func)
def inner(*args, **kwargs):
start = time.time()
res = func(*args, **kwargs)
end = time.time()
print(f'{end - start} seconds', file=file)
return res
return inner if dec_enabled else func | _____no_output_____ | MIT | python_pd/04_functions/07_decorators.ipynb | AsakoKabe/python-bp |
Now the ```timeit``` decorator can be called in two ways. First, without passing any arguments, in which case the output goes to the default stream (```sys.stderr``` here). Remembering that the decorator expands to ```f = timeit(f)```, we can see that the ```func``` parameter receives the function ```f```; the first condition is then not met, and the ```inner``` wrapper is created. | dec_enabled = True
@timeit
def my_sum(*args, **kwargs):
"""Sum function"""
return sum(*args, **kwargs)
res = my_sum([i for i in range(int(1e5))])
print(res) | 4999950000
0.0009999275207519531 seconds
| MIT | python_pd/04_functions/07_decorators.ipynb | AsakoKabe/python-bp |
Second, by passing ```sys.stderr``` or a file object as the keyword argument ```file```. In this case the decorator is called explicitly as ```timeit(file=sys.stderr)``` without the ```func``` argument, so ```func``` takes the value ```None```; the first condition holds, and the ```dec``` wrapper is created. | dec_enabled = True
@timeit(file=sys.stderr)
def my_sum(*args, **kwargs):
"""Sum function"""
return sum(*args, **kwargs)
res = my_sum([i for i in range(int(1e5))])
print(res) | 4999950000
0.000997304916381836 seconds
| MIT | python_pd/04_functions/07_decorators.ipynb | AsakoKabe/python-bp |
Thanks to the ```dec_enabled``` variable, timing can be switched off entirely; in that case there is no overhead from calling extra functions.Several decorators can be applied to a single function at once, and the order in which they run depends on the order in which they are applied. Let us look at a hamburger example. | def with_bun(f):
@wraps(f)
def inner():
print('-' * 8)
f()
print('-' * 8)
return inner
def with_vegetables(f):
@wraps(f)
def inner():
print(' onion')
f()
print(' tomato')
return inner
def with_sauce(f):
@wraps(f)
def inner():
print(' sauce')
f()
return inner | _____no_output_____ | MIT | python_pd/04_functions/07_decorators.ipynb | AsakoKabe/python-bp |
Let us define the main function and decorate it. | @with_bun
@with_vegetables
@with_sauce
def burger():
print(' cutlet')
burger() | --------
onion
sauce
cutlet
tomato
--------
| MIT | python_pd/04_functions/07_decorators.ipynb | AsakoKabe/python-bp |
If we expand this decoration explicitly, we get the following sequence of calls: | def burger():
print(' cutlet')
burger = with_sauce(burger)
burger = with_vegetables(burger)
burger = with_bun(burger)
burger() | --------
onion
sauce
cutlet
tomato
--------
| MIT | python_pd/04_functions/07_decorators.ipynb | AsakoKabe/python-bp |
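The same ordering rule can be seen with decorators that build up a return value instead of printing (toy decorators for illustration):

```python
def wrap_with(tag):
    def dec(f):
        def inner():
            return f"<{tag}>{f()}</{tag}>"
        return inner
    return dec

@wrap_with("outer")   # applied second
@wrap_with("inner")   # applied first
def text():
    return "x"

print(text())  # -> <outer><inner>x</inner></outer>
```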
The bottom-most (innermost) decorator is applied first. If the order of decoration is changed, the result changes accordingly.Here are a couple more decorator examples. A decorator that traces function calls: | def trace(function=None, *, file=sys.stderr):
if function is None:
def dec(function):
return trace(function, file=file)
return dec if dec_enabled else function
@wraps(function)
def inner(*args, **kwargs):
print(f'{function.__name__}, {args}, {kwargs}')
return function(*args, **kwargs)
return inner if dec_enabled else function
@trace
def foo():
print('Nothing')
foo() | foo, (), {}
Nothing
| MIT | python_pd/04_functions/07_decorators.ipynb | AsakoKabe/python-bp |
A decorator that checks whether a user is logged in (in simplified form). | def is_authenticated(user):
return user in ('monty', 'guido')
def login_required(function=None, login_url=''):
def user_passes_test(view_func):
@wraps(view_func)
def wrapped(user, *args, **kwargs):
if is_authenticated(user):
return view_func(user, *args, **kwargs)
print(f'User {user} redirected to the login page: {login_url}')
return wrapped
if function:
return user_passes_test(function)
return user_passes_test
@login_required(login_url='localhost/login')
def foo(user):
print(f'{user = }')
foo('monty')
foo('guido')
foo('pyuty') | user = 'monty'
user = 'guido'
User pyuty redirected to the login page: localhost/login
| MIT | python_pd/04_functions/07_decorators.ipynb | AsakoKabe/python-bp |
TIME-SERIES DECOMPOSITION**File:** Decomposition.ipynb**Course:** Data Science Foundations: Data Mining in Python IMPORT LIBRARIES | import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.dates import DateFormatter
from statsmodels.tsa.seasonal import seasonal_decompose | _____no_output_____ | Apache-2.0 | Decomposition.ipynb | VladimirsHisamutdinovs/data-mining |
LOAD AND PREPARE DATA | df = pd.read_csv('data/AirPassengers.csv', parse_dates=['Month'], index_col=['Month']) | _____no_output_____ | Apache-2.0 | Decomposition.ipynb | VladimirsHisamutdinovs/data-mining |
PLOT DATA | fig, ax = plt.subplots()
plt.xlabel('Year: 1949-1960')
plt.ylabel('Monthly Passengers (1000s)')
plt.title('Monthly Intl Air Passengers')
plt.plot(df, color='black')
ax.xaxis.set_major_formatter(DateFormatter('%Y')) | _____no_output_____ | Apache-2.0 | Decomposition.ipynb | VladimirsHisamutdinovs/data-mining |
DECOMPOSE TIME SERIES - Decompose the time series into three components: trend, seasonal, and residuals or noise.- This command also plots the components. - The argument `period` specifies that there are 12 observations (i.e., months) in the cycle.- By default, `seasonal_decompose` performs an additive (as opposed to multiplicative) decomposition. | # Set the figure size
plt.rcParams['figure.figsize'] = [7, 8]
# Plot the decomposition components
sd = seasonal_decompose(df, period=12).plot() | _____no_output_____ | Apache-2.0 | Decomposition.ipynb | VladimirsHisamutdinovs/data-mining |
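For intuition, the additive model treats the series as y = trend + seasonal + residual, with the trend typically estimated by a centered moving average over one period. A rough plain-Python sketch of that trend step (an illustration of the idea, not the exact `statsmodels` implementation):

```python
def centered_ma(y, period):
    """Centered moving-average trend, as used by an additive decomposition.

    For an even period this is the standard 2 x m moving average; the first
    and last period // 2 points have no trend estimate (None).
    """
    half = period // 2
    trend = [None] * len(y)
    for i in range(half, len(y) - half):
        if period % 2 == 0:
            window = 0.5 * y[i - half] + sum(y[i - half + 1:i + half]) + 0.5 * y[i + half]
        else:
            window = float(sum(y[i - half:i + half + 1]))
        trend[i] = window / period
    return trend

# a flat series has a flat trend
print(centered_ma([5, 5, 5, 5, 5, 5, 5, 5], 4))  # -> [None, None, 5.0, 5.0, 5.0, 5.0, None, None]
```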
- For growth over time, it may be more appropriate to use a multiplicative trend.- The approach can show consistent changes by percentage.- In this approach, the residuals should be centered on 1 instead of 0. | sd = seasonal_decompose(df, model='multiplicative').plot() | _____no_output_____ | Apache-2.0 | Decomposition.ipynb | VladimirsHisamutdinovs/data-mining |
The Explicit Forward Time Centered Space (FTCS) Difference Equation for the Heat Equation John S Butler john.s.butler@tudublin.ie [Course Notes](https://johnsbutler.netlify.com/files/Teaching/Numerical_Analysis_for_Differential_Equations.pdf) [Github](https://github.com/john-s-butler-dit/Numerical-Analysis-Python) OverviewThis notebook will implement the explicit Forward Time Centered Space (FTCS) Difference method for the Heat Equation. The Heat EquationThe Heat Equation is a first-order in time ($t$) and second-order in space ($x$) partial differential equation [1-3]: \begin{equation} \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}.\end{equation}The equation describes heat transfer on the domain\begin{equation} \Omega = \{ (x,t) : 0 \leq x \leq 1, \ t \geq 0 \}, \end{equation}with an initial condition at time $t=0$ for all $x$ and boundary conditions on the left ($x=0$) and right ($x=1$) sides. Forward Time Centered Space (FTCS) Difference methodThis notebook will illustrate the Forward Time Centered Space (FTCS) Difference method for the Heat Equation with the __initial conditions__ \begin{equation} u(x,0)=2x, \ \ 0 \leq x \leq \frac{1}{2}, \end{equation}\begin{equation} u(x,0)=2(1-x), \ \ \frac{1}{2} \leq x \leq 1, \end{equation}and __boundary conditions__\begin{equation}u(0,t)=0, u(1,t)=0. \end{equation} | # LIBRARY
# vector manipulation
import numpy as np
# math functions
import math
# THIS IS FOR PLOTTING
%matplotlib inline
import matplotlib.pyplot as plt # side-stepping mpl backend
import warnings
warnings.filterwarnings("ignore") | _____no_output_____ | MIT | Chapter 08 - Heat Equations/801_Heat Equation- FTCS.ipynb | jjcrofts77/Numerical-Analysis-Python |
Discrete GridThe region $\Omega$ is discretised into a uniform mesh $\Omega_h$: the space $x$ direction is divided into $N$ steps, giving a stepsize of\begin{equation}h=\frac{1-0}{N},\end{equation}resulting in \begin{equation}x[i]=0+ih, \ \ \ i=0,1,...,N,\end{equation}and the time $t$ direction into $N_t$ steps, giving a stepsize of \begin{equation} k=\frac{1-0}{N_t},\end{equation}resulting in \begin{equation}t[j]=0+jk, \ \ \ j=0,...,15.\end{equation}The Figure below shows the discrete grid points for $N=10$ and $N_t=1000$, the known boundary conditions (green), initial conditions (blue) and the unknown values (red) of the Heat Equation. | N=10
Nt=1000
h=1/N
k=1/Nt
r=k/(h*h)
time_steps=15
time=np.arange(0,(time_steps+.5)*k,k)
x=np.arange(0,1.0001,h)
X, Y = np.meshgrid(x, time)
fig = plt.figure()
plt.plot(X,Y,'ro');
plt.plot(x,0*x,'bo',label='Initial Condition');
plt.plot(np.ones(time_steps+1),time,'go',label='Boundary Condition');
plt.plot(x,0*x,'bo');
plt.plot(0*time,time,'go');
plt.xlim((-0.02,1.02))
plt.xlabel('x')
plt.ylabel('time (ms)')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.title(r'Discrete Grid $\Omega_h,$ h= %s, k=%s'%(h,k),fontsize=24,y=1.08)
plt.show(); | _____no_output_____ | MIT | Chapter 08 - Heat Equations/801_Heat Equation- FTCS.ipynb | jjcrofts77/Numerical-Analysis-Python |
Discrete Initial and Boundary ConditionsThe discrete initial conditions are \begin{equation} w[i,0]=2x[i], \ \ 0 \leq x[i] \leq \frac{1}{2}, \end{equation}\begin{equation}w[i,0]=2(1-x[i]), \ \ \frac{1}{2} \leq x[i] \leq 1, \end{equation}and the discrete boundary conditions are \begin{equation} w[0,j]=0, \ \ w[10,j]=0, \end{equation}where $w[i,j]$ is the numerical approximation of $U(x[i],t[j])$.The Figure below plots the values of $w[i,0]$ for the initial (blue) and boundary (green) conditions at $t[0]=0.$ | w=np.zeros((N+1,time_steps+1))
b=np.zeros(N-1)
# Initial Condition
for i in range (1,N):
w[i,0]=2*x[i]
if x[i]>0.5:
w[i,0]=2*(1-x[i])
# Boundary Condition
for k in range (0,time_steps):
w[0,k]=0
w[N,k]=0
fig = plt.figure(figsize=(8,4))
plt.plot(x,w[:,0],'o:',label='Initial Condition')
plt.plot(x[[0,N]],w[[0,N],0],'go',label='Boundary Condition t[0]=0')
#plt.plot(x[N],w[N,0],'go')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.title('Initial and Boundary Condition',fontsize=24)
plt.xlabel('x')
plt.ylabel('w')
plt.legend(loc='best')
plt.show() | _____no_output_____ | MIT | Chapter 08 - Heat Equations/801_Heat Equation- FTCS.ipynb | jjcrofts77/Numerical-Analysis-Python |
The Explicit Forward Time Centered Space (FTCS) Difference EquationThe explicit Forward Time Centered Space (FTCS) difference equation of the Heat Equation is derived by discretising \begin{equation} \frac{\partial u_{ij}}{\partial t} = \frac{\partial^2 u_{ij}}{\partial x^2},\end{equation}around $(x_i,t_{j})$, giving the difference equation\begin{equation}\frac{w_{ij+1}-w_{ij}}{k}=\frac{w_{i+1j}-2w_{ij}+w_{i-1j}}{h^2}.\end{equation}Rearranging the equation we get\begin{equation}w_{ij+1}=rw_{i-1j}+(1-2r)w_{ij}+rw_{i+1j},\end{equation}for $i=1,...,9$, where $r=\frac{k}{h^2}$.This gives the formula for the unknown term $w_{ij+1}$ at the $(ij+1)$ mesh points in terms of $x[i]$ along the $j$th time row.Hence we can calculate the unknown pivotal values of $w$ along the first row ($j=1$) in terms of the known boundary conditions.This can be written in matrix form \begin{equation}\mathbf{w}_{j+1}=A\mathbf{w}_{j} +\mathbf{b}_{j}, \end{equation}for which $A$ is a $9\times9$ matrix:\begin{equation}\left(\begin{array}{c}w_{1j+1}\\w_{2j+1}\\w_{3j+1}\\w_{4j+1}\\w_{5j+1}\\w_{6j+1}\\w_{7j+1}\\w_{8j+1}\\w_{9j+1}\\\end{array}\right)=\left(\begin{array}{ccccccccc}1-2r&r&0&0&0&0&0&0&0\\r&1-2r&r&0&0&0&0&0&0\\0&r&1-2r&r&0&0&0&0&0\\0&0&r&1-2r&r&0&0&0&0\\0&0&0&r&1-2r&r&0&0&0\\0&0&0&0&r&1-2r&r&0&0\\0&0&0&0&0&r&1-2r&r&0\\0&0&0&0&0&0&r&1-2r&r\\0&0&0&0&0&0&0&r&1-2r\\\end{array}\right)\left(\begin{array}{c}w_{1j}\\w_{2j}\\w_{3j}\\w_{4j}\\w_{5j}\\w_{6j}\\w_{7j}\\w_{8j}\\w_{9j}\\\end{array}\right)+\left(\begin{array}{c}rw_{0j}\\0\\0\\0\\0\\0\\0\\0\\rw_{10j}\\\end{array}\right).\end{equation}It is assumed that the boundary values $w_{0j}$ and $w_{10j}$ are known for $j=1,2,...$, and that $w_{i0}$ for $i=0,...,10$ is the initial condition.The Figure below shows the values of the $9\times 9$ matrix in colour plot form for $r=\frac{k}{h^2}$. | A=np.zeros((N-1,N-1))
for i in range (0,N-1):
A[i,i]=1-2*r # DIAGONAL
for i in range (0,N-2):
A[i+1,i]=r # UPPER DIAGONAL
A[i,i+1]=r # LOWER DIAGONAL
fig = plt.figure(figsize=(6,4));
#plt.matshow(A);
plt.imshow(A,interpolation='none');
plt.xticks(np.arange(N-1), np.arange(1,N-0.9,1));
plt.yticks(np.arange(N-1), np.arange(1,N-0.9,1));
clb=plt.colorbar();
clb.set_label('Matrix elements values');
#clb.set_clim((-1,1));
plt.title('Matrix r=%s'%(np.round(r,3)),fontsize=24)
fig.tight_layout()
plt.show(); | _____no_output_____ | MIT | Chapter 08 - Heat Equations/801_Heat Equation- FTCS.ipynb | jjcrofts77/Numerical-Analysis-Python |
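The matrix update above is equivalent to applying the three-point stencil $w_{ij+1}=rw_{i-1j}+(1-2r)w_{ij}+rw_{i+1j}$ pointwise. A minimal sketch of one update step in plain Python (the grid values are illustrative):

```python
def ftcs_step(w, r):
    """One explicit FTCS update; the two boundary values stay fixed."""
    interior = [r * w[i - 1] + (1 - 2 * r) * w[i] + r * w[i + 1]
                for i in range(1, len(w) - 1)]
    return [w[0]] + interior + [w[-1]]

w0 = [0.0, 1.0, 0.0]        # hat-shaped initial data on a 3-point grid
print(ftcs_step(w0, 0.25))  # -> [0.0, 0.5, 0.0]
```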
ResultsTo numerically approximate the solution at $t[1]$ the matrix equation becomes \begin{equation} \mathbf{w}_{1}=A\mathbf{w}_{0} +\mathbf{b}_{0}, \end{equation}where the entire right-hand side is known. To approximate the solution at time $t[2]$ we use the matrix equation\begin{equation} \mathbf{w}_{2}=A\mathbf{w}_{1} +\mathbf{b}_{1}. \end{equation}Each set of numerical solutions $w[i,j]$ for all $i$ at the previous time step is used to approximate the solution $w[i,j+1]$. The Figure below shows the numerical approximation $w[i,j]$ of the Heat Equation using the FTCS method at $x[i]$ for $i=0,...,10$ and time steps $t[j]$ for $j=1,...,15$. The left plot shows the numerical approximation $w[i,j]$ as a function of $x[i]$ with each color representing the different time steps $t[j]$. The right plot shows the numerical approximation $w[i,j]$ as a colour plot as a function of $x[i]$, on the $x[i]$ axis, and time $t[j]$ on the $y$ axis. For $r>\frac{1}{2}$ the method is unstable, resulting in a solution that oscillates unnaturally between positive and negative values for each time step. | fig = plt.figure(figsize=(12,6))
plt.subplot(121)
for j in range (1,time_steps+1):
b[0]=r*w[0,j-1]
b[N-2]=r*w[N,j-1]
w[1:(N),j]=np.dot(A,w[1:(N),j-1])
plt.plot(x,w[:,j],'o:',label='t[%s]=%s'%(j,np.round(time[j],4)))
plt.xlabel('x')
plt.ylabel('w')
#plt.legend(loc='bottom', bbox_to_anchor=(0.5, -0.1))
plt.legend(bbox_to_anchor=(-.4, 1), loc=2, borderaxespad=0.)
plt.subplot(122)
plt.imshow(w.transpose())
plt.xticks(np.arange(len(x)), x)
plt.yticks(np.arange(len(time)), np.round(time,4))
plt.xlabel('x')
plt.ylabel('time')
clb=plt.colorbar()
clb.set_label('Temperature (w)')
plt.suptitle('Numerical Solution of the Heat Equation r=%s'%(np.round(r,3)),fontsize=24,y=1.08)
fig.tight_layout()
plt.show() | _____no_output_____ | MIT | Chapter 08 - Heat Equations/801_Heat Equation- FTCS.ipynb | jjcrofts77/Numerical-Analysis-Python |
Local Truncation ErrorThe local truncation error of the classical explicit difference approach to \begin{equation}\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2}=0,\end{equation} with\begin{equation}F_{ij}(w)=\frac{w_{ij+1}-w_{ij}}{k}-\frac{w_{i+1j}-2w_{ij}+w_{i-1j}}{h^2}=0,\end{equation} is \begin{equation}T_{ij}=F_{ij}(U)=\frac{U_{ij+1}-U_{ij}}{k}-\frac{U_{i+1j}-2U_{ij}+U_{i-1j}}{h^2}.\end{equation} By Taylor expansion we have\begin{eqnarray*}U_{i+1j}&=&U((i+1)h,jk)=U(x_i+h,t_j)\\&=&U_{ij}+h\left(\frac{\partial U}{\partial x} \right)_{ij}+\frac{h^2}{2}\left(\frac{\partial^2 U}{\partial x^2} \right)_{ij}+\frac{h^3}{6}\left(\frac{\partial^3 U}{\partial x^3} \right)_{ij} +...\\U_{i-1j}&=&U((i-1)h,jk)=U(x_i-h,t_j)\\&=&U_{ij}-h\left(\frac{\partial U}{\partial x} \right)_{ij}+\frac{h^2}{2}\left(\frac{\partial^2 U}{\partial x^2} \right)_{ij}-\frac{h^3}{6}\left(\frac{\partial^3 U}{\partial x^3} \right)_{ij} +...\\U_{ij+1}&=&U(ih,(j+1)k)=U(x_i,t_j+k)\\&=&U_{ij}+k\left(\frac{\partial U}{\partial t} \right)_{ij}+\frac{k^2}{2}\left(\frac{\partial^2 U}{\partial t^2} \right)_{ij}+\frac{k^3}{6}\left(\frac{\partial^3 U}{\partial t^3} \right)_{ij} +...\end{eqnarray*}Substitution into the expression for $T_{ij}$ then gives\begin{eqnarray*}T_{ij}&=&\left(\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2} \right)_{ij}+\frac{k}{2}\left(\frac{\partial^2 U}{\partial t^2} \right)_{ij}-\frac{h^2}{12}\left(\frac{\partial^4 U}{\partial x^4} \right)_{ij}\\& & +\frac{k^2}{6}\left(\frac{\partial^3 U}{\partial t^3} \right)_{ij}-\frac{h^4}{360}\left(\frac{\partial^6 U}{\partial x^6} \right)_{ij}+ ...\end{eqnarray*}But $U$ is the solution to the differential equation, so\begin{equation} \left(\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2} \right)_{ij}=0,\end{equation} and the principal part of the local truncation error is \begin{equation}\frac{k}{2}\left(\frac{\partial^2 U}{\partial t^2} \right)_{ij}-\frac{h^2}{12}\left(\frac{\partial^4 U}{\partial x^4} \right)_{ij}.\end{equation} Hence the truncation error is\begin{equation} T_{ij}=O(k)+O(h^2).\end{equation} Stability Analysis To investigate the stability of the fully explicit FTCS difference method for the Heat Equation, we will use the von Neumann method.The FTCS difference equation is:\begin{equation}\frac{1}{k}(w_{pq+1}-w_{pq})=\frac{1}{h_x^2}(w_{p-1q}-2w_{pq}+w_{p+1q}),\end{equation}approximating \begin{equation}\frac{\partial U}{\partial t}=\frac{\partial^2 U}{\partial x^2}\end{equation}at $(ph,qk)$. Substituting $w_{pq}=e^{i\beta x}\xi^{q}$ into the difference equation gives: \begin{equation}e^{i\beta ph}\xi^{q+1}-e^{i\beta ph}\xi^{q}=r\{e^{i\beta (p-1)h}\xi^{q}-2e^{i\beta ph}\xi^{q}+e^{i\beta (p+1)h}\xi^{q} \}\end{equation}where $r=\frac{k}{h_x^2}$. Dividing across by $e^{i\beta ph}\xi^{q}$ leads to\begin{equation} \xi-1=r(e^{-i\beta h} -2+e^{i\beta h}),\end{equation}\begin{equation}\xi= 1+r (2\cos(\beta h)-2),\end{equation}\begin{equation}\xi=1-4r\sin^2\left(\frac{\beta h}{2}\right).\end{equation}Hence \begin{equation}\left| 1-4r\sin^2\left(\frac{\beta h}{2}\right)\right|\leq 1\end{equation}must hold for stability, which requires \begin{equation} 4r\sin^2\left(\frac{\beta h}{2}\right)\leq 2, \end{equation}which means \begin{equation} r\leq \frac{1}{2}.\end{equation}Therefore the method is conditionally stable, since $|\xi| \leq 1$ for $r\leq\frac{1}{2}$ and all $\beta$. References[1] G D Smith Numerical Solution of Partial Differential Equations: Finite Difference Method Oxford 1992[2] Butler, J. (2019). John S Butler Numerical Methods for Differential Equations. [online] Maths.dit.ie. Available at: http://www.maths.dit.ie/~johnbutler/Teaching_NumericalMethods.html [Accessed 14 Mar. 2019].[3] Wikipedia contributors. (2019, February 22). Heat equation. In Wikipedia, The Free Encyclopedia. Available at: https://en.wikipedia.org/w/index.php?title=Heat_equation&oldid=884580138 [Accessed 14 Mar. 2019].
| _____no_output_____ | MIT | Chapter 08 - Heat Equations/801_Heat Equation- FTCS.ipynb | jjcrofts77/Numerical-Analysis-Python | |
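The stability bound can also be checked numerically from the amplification factor $\xi=1-4r\sin^2(\beta h/2)$; the worst case is the mode with $\sin^2(\beta h/2)=1$. A quick sketch:

```python
import math

def xi(r, beta_h):
    # von Neumann amplification factor of the FTCS scheme
    return 1 - 4 * r * math.sin(beta_h / 2) ** 2

# worst-case mode: beta*h = pi, so sin^2(beta*h/2) = 1
print(abs(xi(0.5, math.pi)))  # -> 1.0 (marginally stable)
print(abs(xi(0.6, math.pi)))  # about 1.4: |xi| > 1, unstable
```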
DS/CMPSC 410 MiniProject 3 Spring 2021 Instructor: John Yen TA: Rupesh Prajapati and Dongkuan Xu Learning Objectives- Be able to apply thermometer encoding to encode numerical variables into binary variable format.- Be able to apply k-means clustering to the Darknet dataset based on both thermometer encoding and one-hot encoding.- Be able to use external labels (e.g., mirai, zmap, and masscan) to evaluate the result of k-means clustering.- Be able to investigate characteristics of a cluster using one-hot encoded feature. Total points: 100 - Exercise 1: 5 points- Exercise 2: 5 points - Exercise 3: 5 points - Exercise 4: 15 points- Exercise 5: 5 points- Exercise 6: 10 points- Exercise 7: 5 points- Exercise 8: 5 points- Exercise 9: 10 points- Exercise 10: 5 points- Exercise 11: 10 points- Exercise 12: 20 points Due: 5 pm, April 23, 2021 | import pyspark
import csv
from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark.sql.types import StructField, StructType, StringType, LongType
from pyspark.sql.functions import col, column
from pyspark.sql.functions import expr
from pyspark.sql.functions import split
from pyspark.sql.functions import array_contains
from pyspark.sql import Row
from pyspark.ml import Pipeline
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler, IndexToString, PCA
from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator
import pandas as pd
import numpy as np
import math
ss = SparkSession.builder.master("local").appName("ClusteringTE").getOrCreate() | _____no_output_____ | MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
Exercise 1 (5 points)Complete the path for the input file in the code below and enter your name in this Markdown cell:- Name: Kangdong Yuan | Scanners_df = ss.read.csv("/storage/home/kky5082/ds410/Lab10/sampled_profile.csv", header= True, inferSchema=True ) | _____no_output_____ | MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
We can use printSchema() to display the schema of the DataFrame Scanners_df to see whether it was inferred correctly. | Scanners_df.printSchema()
Scanners_df.where(col('mirai')).count() | _____no_output_____ | MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
Part A: One Hot Encoding This part is identical to that of Miniproject Deliverable 2We want to apply one hot encoding to the set of ports scanned by scanners. - A.1 Like Mini Project deliverable 1 and 2, we first convert the feature "ports_scanned_str" to a feature that is an Array of ports- A.2 We then calculate the total number of scanners for each port- A.3 We identify the top n port to use for one-hot encoding (You choose the number n).- A.4 Generate one-hot encoded feature for these top n ports. | # Scanners_df.select("ports_scanned_str").show(30)
Scanners_df2=Scanners_df.withColumn("Ports_Array", split(col("ports_scanned_str"), "-") )
# Scanners_df2.persist().show(10) | _____no_output_____ | MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
A.1 We only need the column ```Ports_Array``` to calculate the top ports being scanned | Ports_Scanned_RDD = Scanners_df2.select("Ports_Array").rdd
# Ports_Scanned_RDD.persist().take(5) | _____no_output_____ | MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
Because each port number in the Ports_Array column for each row occurs only once, we can count the total occurrence of each port number through flatMap. | Ports_list_RDD = Ports_Scanned_RDD.map(lambda row: row[0] )
# Ports_list_RDD.persist()
Ports_list2_RDD = Ports_Scanned_RDD.flatMap(lambda row: row[0] )
Port_count_RDD = Ports_list2_RDD.map(lambda x: (x, 1))
# Port_count_RDD.take(2)
Port_count_total_RDD = Port_count_RDD.reduceByKey(lambda x,y: x+y, 1)
# Port_count_total_RDD.persist().take(5)
Sorted_Count_Port_RDD = Port_count_total_RDD.map(lambda x: (x[1], x[0])).sortByKey( ascending = False)
# Sorted_Count_Port_RDD.persist().take(50) | _____no_output_____ | MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
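The flatMap → map → reduceByKey → sortByKey pipeline above is the classic word-count pattern. A plain-Python sketch of the same computation using `collections.Counter` (hypothetical sample rows, no Spark needed):

```python
from collections import Counter

# Each inner list is one scanner's Ports_Array (hypothetical sample data).
ports_arrays = [["23", "2323"], ["80", "443", "23"], ["23"]]

# flatMap -> map(x, 1) -> reduceByKey(+) collapses to a Counter in plain Python.
port_counts = Counter(port for row in ports_arrays for port in row)

# sortByKey(ascending=False) on (count, port) pairs == most_common() ordering.
sorted_ports = [port for port, _ in port_counts.most_common()]
```

The Spark version distributes exactly this aggregation across partitions; `reduceByKey(lambda x,y: x+y, 1)` plays the role of `Counter`'s summation.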
Exercise 2 (5%)Select top_ports to be the number of top ports you want to use for one-hot encoding. I recommend a number between 20 and 40. | top_ports=30
Sorted_Ports_RDD= Sorted_Count_Port_RDD.map(lambda x: x[1])
Top_Ports_list = Sorted_Ports_RDD.take(top_ports)
# Top_Ports_list
# Scanners_df3=Scanners_df2.withColumn(FeatureName, array_contains("Ports_Array", Top_Ports_list[0]))
# Scanners_df3.show(10) | _____no_output_____ | MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
A.4 Generate One-Hot Encoded Feature for each of the top ports in the Top_Ports_list- Iterate through the Top_Ports_list so that each top port is one-hot encoded. Exercise 3 (5%)Complete the following PySpark code for encoding the n ports using One Hot Encoding, where n is specified by the variable ```top_ports``` | for i in range(0, top_ports - 1):
# "Port" + Top_Ports_list[i] is the name of each new feature created through One Hot Encoding
Scanners_df3 = Scanners_df2.withColumn("Port" + Top_Ports_list[i], array_contains("Ports_Array", Top_Ports_list[i]))
Scanners_df2 = Scanners_df3
Scanners_df2.printSchema() | root
|-- _c0: integer (nullable = true)
|-- id: integer (nullable = true)
|-- numports: integer (nullable = true)
|-- lifetime: double (nullable = true)
|-- Bytes: integer (nullable = true)
|-- Packets: integer (nullable = true)
|-- average_packetsize: integer (nullable = true)
|-- MinUniqueDests: integer (nullable = true)
|-- MaxUniqueDests: integer (nullable = true)
|-- MinUniqueDest24s: integer (nullable = true)
|-- MaxUniqueDest24s: integer (nullable = true)
|-- average_lifetime: double (nullable = true)
|-- mirai: boolean (nullable = true)
|-- zmap: boolean (nullable = true)
|-- masscan: boolean (nullable = true)
|-- country: string (nullable = true)
|-- traffic_types_scanned_str: string (nullable = true)
|-- ports_scanned_str: string (nullable = true)
|-- host_tags_per_censys: string (nullable = true)
|-- host_services_per_censys: string (nullable = true)
|-- Ports_Array: array (nullable = true)
| |-- element: string (containsNull = true)
|-- Port17132: boolean (nullable = true)
|-- Port17140: boolean (nullable = true)
|-- Port17128: boolean (nullable = true)
|-- Port17138: boolean (nullable = true)
|-- Port17130: boolean (nullable = true)
|-- Port17136: boolean (nullable = true)
|-- Port23: boolean (nullable = true)
|-- Port445: boolean (nullable = true)
|-- Port54594: boolean (nullable = true)
|-- Port17142: boolean (nullable = true)
|-- Port17134: boolean (nullable = true)
|-- Port80: boolean (nullable = true)
|-- Port8080: boolean (nullable = true)
|-- Port0: boolean (nullable = true)
|-- Port2323: boolean (nullable = true)
|-- Port5555: boolean (nullable = true)
|-- Port81: boolean (nullable = true)
|-- Port1023: boolean (nullable = true)
|-- Port52869: boolean (nullable = true)
|-- Port8443: boolean (nullable = true)
|-- Port49152: boolean (nullable = true)
|-- Port7574: boolean (nullable = true)
|-- Port37215: boolean (nullable = true)
|-- Port34218: boolean (nullable = true)
|-- Port34220: boolean (nullable = true)
|-- Port33968: boolean (nullable = true)
|-- Port34224: boolean (nullable = true)
|-- Port34228: boolean (nullable = true)
|-- Port33962: boolean (nullable = true)
| MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
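The loop above materializes one Boolean column per top port with `array_contains`. The same one-hot encoding for a single scanner in plain Python (hypothetical data, for illustration only):

```python
# Hypothetical top-port list and one scanner's Ports_Array (illustration only).
top_ports_list = ["23", "445", "80"]
ports_array = ["80", "8080"]

# One Boolean feature per top port, True iff the scanner probed that port --
# the plain-Python analogue of array_contains("Ports_Array", port) above.
one_hot = {"Port" + p: p in ports_array for p in top_ports_list}
```

Ports outside the top list (like "8080" here) contribute nothing to the encoding, which is why the choice of `top_ports` in Exercise 2 matters.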
Part B Thermometer Encoding of Numerical Variables We encode the numerical variable numports (number of ports being scanned) using thermometer encoding | pow(2,15)
Scanners_df3=Scanners_df2.withColumn("TE_numports_0", col("numports") > 0)
Scanners_df2 = Scanners_df3
Scanners_df3.count()
Scanners_df3.where(col('TE_numports_0')).count() | _____no_output_____ | MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
Exercise 4 (15%)Complete the following pyspark code to use the column "numports" to create 16 additional columns as follows:- TE_numports_0 : True, if the scanner scans more than 0 ports, otherwise False.- TE_numports_1 : True, if the scanner scans more than 2**0 (1) port, otherwise False.- TE_numports_2 : True, if the scanner scans more than 2**1 (2) ports, otherwise False.- TE_numports_3 : True, if the scanner scans more than 2**2 (4) ports, otherwise False ...- TE_numports_15 : True, if the scanner scans more than 2**14 ports, otherwise False- TE_numports_16 : True, if the scanner scans more than 2**15 (32768) ports, otherwise False | for i in range(0, 16):
# "TE_numports_" + str(i+1) is the name of each new feature created for each Bin in Thermometer Encoding
Scanners_df3 = Scanners_df2.withColumn("TE_numports_" + str(i+1), col("numports") > pow(2,i))
Scanners_df2 = Scanners_df3
Scanners_df2.printSchema() | root
|-- _c0: integer (nullable = true)
|-- id: integer (nullable = true)
|-- numports: integer (nullable = true)
|-- lifetime: double (nullable = true)
|-- Bytes: integer (nullable = true)
|-- Packets: integer (nullable = true)
|-- average_packetsize: integer (nullable = true)
|-- MinUniqueDests: integer (nullable = true)
|-- MaxUniqueDests: integer (nullable = true)
|-- MinUniqueDest24s: integer (nullable = true)
|-- MaxUniqueDest24s: integer (nullable = true)
|-- average_lifetime: double (nullable = true)
|-- mirai: boolean (nullable = true)
|-- zmap: boolean (nullable = true)
|-- masscan: boolean (nullable = true)
|-- country: string (nullable = true)
|-- traffic_types_scanned_str: string (nullable = true)
|-- ports_scanned_str: string (nullable = true)
|-- host_tags_per_censys: string (nullable = true)
|-- host_services_per_censys: string (nullable = true)
|-- Ports_Array: array (nullable = true)
| |-- element: string (containsNull = true)
|-- Port17132: boolean (nullable = true)
|-- Port17140: boolean (nullable = true)
|-- Port17128: boolean (nullable = true)
|-- Port17138: boolean (nullable = true)
|-- Port17130: boolean (nullable = true)
|-- Port17136: boolean (nullable = true)
|-- Port23: boolean (nullable = true)
|-- Port445: boolean (nullable = true)
|-- Port54594: boolean (nullable = true)
|-- Port17142: boolean (nullable = true)
|-- Port17134: boolean (nullable = true)
|-- Port80: boolean (nullable = true)
|-- Port8080: boolean (nullable = true)
|-- Port0: boolean (nullable = true)
|-- Port2323: boolean (nullable = true)
|-- Port5555: boolean (nullable = true)
|-- Port81: boolean (nullable = true)
|-- Port1023: boolean (nullable = true)
|-- Port52869: boolean (nullable = true)
|-- Port8443: boolean (nullable = true)
|-- Port49152: boolean (nullable = true)
|-- Port7574: boolean (nullable = true)
|-- Port37215: boolean (nullable = true)
|-- Port34218: boolean (nullable = true)
|-- Port34220: boolean (nullable = true)
|-- Port33968: boolean (nullable = true)
|-- Port34224: boolean (nullable = true)
|-- Port34228: boolean (nullable = true)
|-- Port33962: boolean (nullable = true)
|-- TE_numports_0: boolean (nullable = true)
|-- TE_numports_1: boolean (nullable = true)
|-- TE_numports_2: boolean (nullable = true)
|-- TE_numports_3: boolean (nullable = true)
|-- TE_numports_4: boolean (nullable = true)
|-- TE_numports_5: boolean (nullable = true)
|-- TE_numports_6: boolean (nullable = true)
|-- TE_numports_7: boolean (nullable = true)
|-- TE_numports_8: boolean (nullable = true)
|-- TE_numports_9: boolean (nullable = true)
|-- TE_numports_10: boolean (nullable = true)
|-- TE_numports_11: boolean (nullable = true)
|-- TE_numports_12: boolean (nullable = true)
|-- TE_numports_13: boolean (nullable = true)
|-- TE_numports_14: boolean (nullable = true)
|-- TE_numports_15: boolean (nullable = true)
|-- TE_numports_16: boolean (nullable = true)
| MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
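Thermometer encoding produces a monotone code: if a scanner fires bin i, it fires every lower bin too, so scanners with similar port counts share long prefixes of True values. A plain-Python sketch of the binning built above (the helper name is illustrative, not part of the notebook):

```python
def thermometer_encode(numports):
    # TE_numports_0 tests numports > 0; TE_numports_i (1 <= i <= 16) tests
    # numports > 2**(i - 1), matching the 17 Boolean columns built above.
    return [numports > 0] + [numports > 2 ** i for i in range(16)]

code_5 = thermometer_encode(5)  # bins >0, >1, >2, >4 fire; higher bins do not
```

Unlike a single numeric feature, this representation lets Euclidean-distance clustering treat the port count on a logarithmic scale.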
Exercise 5 (5 points)What is the total number of scanners that scan more than 2^15 (i.e., 32768) ports? Complete the code below using Scanners_df2 to find out the answer. | HFScanners_df2 = Scanners_df2.where(col('TE_numports_16'))
HFScanners_df2.count() | _____no_output_____ | MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
Exercise 6 (10 points)Complete the following code to use k-means to cluster the scanners using the following - thermometer encoding of 'numports' numerical feature- one-hot encoding of top k ports (k chosen by you in Exercise 2). Specify Parameters for k Means Clustering | km = KMeans(featuresCol="features", predictionCol="prediction").setK(50).setSeed(123)
km.explainParams()
input_features = []
for i in range(0, top_ports - 1):
input_features.append( "Port"+Top_Ports_list[i] )
for i in range(0, 15):
input_features.append( "TE_numports_" + str(i))
print(input_features)
va = VectorAssembler().setInputCols(input_features).setOutputCol("features")
data= va.transform(Scanners_df2)
data.persist()
kmModel=km.fit(data)
kmModel
predictions = kmModel.transform(data)
predictions.persist()
Cluster1_df=predictions.where(col("prediction")==0)
Cluster1_df.persist().count() | _____no_output_____ | MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
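Spark's KMeans runs Lloyd's iteration on the assembled Boolean feature vectors. For intuition, a compact plain-Python sketch of the same iteration on toy 2-D data (illustrative only, not Spark's implementation):

```python
import random

def kmeans(points, k, iters=20, seed=123):
    # Lloyd's algorithm: repeat (assign to nearest center, recompute means).
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        centers = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c
                   else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

# Two obvious clusters in 2-D (toy data, not the scanner features).
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 4.9)]
centers = sorted(kmeans(pts, 2))
```

With Boolean (0/1) features, each coordinate of a learned center is the fraction of the cluster's scanners for which that feature is True, which is exactly how the cluster-center printout below should be read.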
Exercise 7 (5 points)Complete the following code to find the sizes of all the clusters generated. | summary = kmModel.summary
summary.clusterSizes | _____no_output_____ | MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
Exercise 8 (5 points)Complete the following code to find the Silhouette Score of the clustering result. | evaluator = ClusteringEvaluator()
silhouette = evaluator.evaluate(predictions)
print('Silhouette Score of the Clustering Result is ', silhouette)
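ClusteringEvaluator's default metric is the silhouette coefficient: for each point, $s = (b-a)/\max(a,b)$, where $a$ is the mean distance to the point's own cluster and $b$ the mean distance to the nearest other cluster. A hand-rolled plain-Euclidean sketch on toy data for intuition (Spark's implementation uses an efficient squared-Euclidean variant):

```python
import math

def silhouette(points, labels):
    # Mean silhouette coefficient: s = (b - a) / max(a, b) per point, where
    # a = mean distance to the point's own cluster and
    # b = mean distance to the closest other cluster.
    scores = []
    for i, p in enumerate(points):
        own, others = [], {}
        for j, q in enumerate(points):
            if i == j:
                continue
            d = math.dist(p, q)
            if labels[j] == labels[i]:
                own.append(d)
            else:
                others.setdefault(labels[j], []).append(d)
        a = sum(own) / len(own)
        b = min(sum(ds) / len(ds) for ds in others.values())
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two tight, well-separated toy clusters should score close to 1.
pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
score = silhouette(pts, [0, 0, 1, 1])
```

Scores near 1 indicate tight, well-separated clusters; scores near 0 indicate overlapping clusters, and negative scores suggest misassigned points.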
centers = kmModel.clusterCenters()
centers[0]
print("Cluster Centers:")
i=0
for center in centers:
print("Cluster ", str(i+1), center)
i = i+1 | Cluster Centers:
Cluster 1 [9.87079646e-01 9.83893805e-01 9.85663717e-01 9.87256637e-01
9.84424779e-01 9.82477876e-01 7.07964602e-04 8.84955752e-04
1.94690265e-03 1.00000000e+00 9.40707965e-01 5.30973451e-04
3.53982301e-04 6.37168142e-03 1.76991150e-04 1.59292035e-03
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 2.05486726e-01
1.99115044e-01 1.97345133e-01 2.03362832e-01 2.00000000e-01
1.99469027e-01 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.52212389e-02 1.59292035e-03
5.30973451e-04 1.76991150e-04 1.76991150e-04 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 2 [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 1.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 1.22139891e-04
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 8.14265939e-05
0.00000000e+00 4.07132970e-05 0.00000000e+00 0.00000000e+00
0.00000000e+00 1.00000000e+00 7.93909291e-03 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 3 [1.41242938e-02 1.66352793e-02 1.06716886e-02 9.41619586e-03
1.12994350e-02 1.16133082e-02 1.06716886e-02 1.13622097e-01
1.00000000e+00 5.96359071e-03 6.59133710e-03 6.27746390e-03
6.59133710e-03 7.21908349e-03 0.00000000e+00 3.76647834e-03
6.27746390e-04 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 3.13873195e-04 6.27746390e-04 3.13873195e-04
1.25549278e-03 3.13873195e-04 3.13873195e-04 9.41619586e-04
6.27746390e-04 1.00000000e+00 1.00000000e+00 4.33145009e-02
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 4 [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 4.04040404e-03
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 1.00000000e+00 0.00000000e+00 6.31313131e-04
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 1.26262626e-04 0.00000000e+00 0.00000000e+00
0.00000000e+00 1.00000000e+00 2.97979798e-02 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 5 [7.07714952e-02 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.58653256e-02 1.00000000e+00 0.00000000e+00 2.08512204e-03
0.00000000e+00 0.00000000e+00 0.00000000e+00 2.45308475e-04
0.00000000e+00 8.58579664e-04 0.00000000e+00 2.45308475e-04
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 2.45308475e-03
2.45308475e-03 2.69839323e-03 2.94370170e-03 2.69839323e-03
2.33043052e-03 1.00000000e+00 1.81896235e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 6 [0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Cluster 7 [0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Cluster 8 [1.72777778e-01 1.72777778e-01 0.00000000e+00 1.48888889e-01
1.56111111e-01 1.75000000e-01 0.00000000e+00 2.77777778e-03
0.00000000e+00 1.00000000e+00 7.55555556e-02 0.00000000e+00
0.00000000e+00 1.66666667e-03 0.00000000e+00 5.55555556e-04
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 6.11111111e-03
5.55555556e-03 5.55555556e-03 5.00000000e-03 3.88888889e-03
6.66666667e-03 1.00000000e+00 1.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 9 [0.00000000e+00 1.35969142e-01 2.14079074e-01 1.90935391e-01
1.00000000e+00 2.32401157e-01 0.00000000e+00 9.64320154e-04
9.64320154e-04 1.13789778e-01 1.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 6.75024108e-03
9.64320154e-04 6.75024108e-03 8.67888139e-03 7.71456123e-03
1.25361620e-02 1.00000000e+00 1.00000000e+00 6.95274831e-01
1.44648023e-02 2.89296046e-03 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 10 [1.03385888e-03 2.06771776e-03 1.03385888e-03 7.75394159e-04
1.29232360e-03 1.03385888e-03 6.70199018e-01 1.55078832e-02
2.58464720e-03 2.58464720e-04 7.75394159e-04 9.98190747e-01
9.95864564e-01 4.47143965e-02 9.53476350e-01 9.11863531e-01
9.24786767e-01 9.22202119e-01 9.20909796e-01 9.25820625e-01
9.20392866e-01 9.05918842e-01 9.07728095e-01 2.58464720e-04
2.58464720e-04 2.58464720e-04 2.58464720e-04 2.58464720e-04
2.58464720e-04 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 4.62651848e-02 1.39570949e-02
1.39570949e-02 1.39570949e-02 1.29232360e-02 1.21478418e-02
2.58464720e-04 2.58464720e-04 2.58464720e-04 2.58464720e-04]
Cluster 11 [1.29948625e-02 1.32970686e-02 1.84345724e-02 1.87367785e-02
1.78301602e-02 1.45058930e-02 7.25294651e-03 1.32970686e-02
5.13750378e-03 1.08794198e-02 9.97280145e-03 3.47537020e-02
4.29132668e-02 5.65125416e-02 8.46177093e-03 1.60169235e-02
4.53309157e-03 3.02206105e-04 0.00000000e+00 9.06618314e-04
0.00000000e+00 0.00000000e+00 9.06618314e-04 1.20882442e-03
1.51103052e-03 9.06618314e-04 1.20882442e-03 9.06618314e-04
2.11544273e-03 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 5.40646721e-01 1.83439105e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 12 [0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Cluster 13 [6.86719637e-02 1.00000000e+00 0.00000000e+00 6.51532350e-02
6.65153235e-02 0.00000000e+00 0.00000000e+00 1.24858116e-03
0.00000000e+00 0.00000000e+00 0.00000000e+00 1.13507378e-04
0.00000000e+00 1.02156640e-03 0.00000000e+00 1.13507378e-04
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 1.58910329e-03
2.27014756e-03 1.92962543e-03 1.70261067e-03 1.81611805e-03
1.36208854e-03 1.00000000e+00 2.35641317e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 14 [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
1.00000000e+00 0.00000000e+00 0.00000000e+00 2.35155791e-03
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 1.91064080e-03 0.00000000e+00 2.93944738e-04
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 3.52733686e-03
2.79247501e-03 3.08641975e-03 1.91064080e-03 2.79247501e-03
2.64550265e-03 1.00000000e+00 4.71781305e-02 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 15 [7.61729531e-01 7.69089236e-01 7.61269549e-01 1.00000000e+00
7.52529899e-01 7.53449862e-01 0.00000000e+00 9.19963201e-04
4.59981601e-04 1.00000000e+00 1.00000000e+00 0.00000000e+00
0.00000000e+00 1.37994480e-03 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
2.52989880e-02 2.16191352e-02 2.71389144e-02 2.94388224e-02
2.62189512e-02 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 4.59981601e-04 4.59981601e-04 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 16 [8.48601736e-01 8.37994214e-01 8.30279653e-01 0.00000000e+00
8.26422372e-01 8.18707811e-01 9.64320154e-04 4.82160077e-03
4.82160077e-03 0.00000000e+00 6.22950820e-01 0.00000000e+00
0.00000000e+00 4.82160077e-03 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 5.78592093e-03
5.88235294e-02 5.40019286e-02 6.17164899e-02 6.17164899e-02
5.49662488e-02 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.15718419e-02 2.89296046e-03 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 17 [0.99838057 0.99595142 0.99595142 0.99919028 0.99595142 0.99757085
0.0048583 0.00566802 0.00566802 1. 0.9951417 0.00647773
0.0048583 0.00890688 0.0048583 0.00404858 0.0048583 0.00404858
0.00647773 0.00323887 0.00566802 0.00323887 0.00404858 0.78866397
0.78461538 0.80809717 0.79838057 0.77489879 0.78785425 1.
1. 1. 1. 1. 0.78704453 0.01376518
0.00809717 0.00809717 0.00728745 0.00728745 0.00728745 0.00728745
0.00728745 0.00728745]
Cluster 18 [8.41737781e-02 0.00000000e+00 0.00000000e+00 1.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 2.06878717e-03
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 1.03439359e-03 0.00000000e+00 2.58598397e-04
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 2.45668477e-03
2.45668477e-03 3.36177916e-03 2.71528317e-03 2.19808637e-03
3.49107836e-03 1.00000000e+00 1.27359710e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 19 [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 4.40311774e-02
1.17872282e-02 0.00000000e+00 1.16778340e-02 4.03391221e-02
1.80500479e-03 8.20456721e-05 3.82879803e-04 1.91439902e-04
1.64091344e-04 1.91439902e-04 1.36742787e-04 5.87993983e-03
5.90728839e-03 5.77054560e-03 5.74319705e-03 5.93463695e-03
5.57910570e-03 1.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 20 [1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 1.84606646e-03
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 9.94035785e-04 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 1.84606646e-03
2.84010224e-03 1.98807157e-03 2.13007668e-03 3.12411247e-03
2.27208179e-03 1.00000000e+00 4.44476001e-02 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 21 [0.0010917 0. 0. 0. 0. 0.
0.7128821 0.00436681 0.00436681 0. 0. 0.86790393
0.8558952 0.04585153 0.819869 0.36899563 0.40283843 0.51310044
0.40283843 0.38427948 0.36462882 0.34279476 0.33187773 0.
0. 0. 0. 0. 0. 1.
1. 1. 0.99344978 0.09606987 0.05131004 0.
0. 0. 0. 0. 0. 0.
0. 0. ]
Cluster 22 [7.75981524e-01 9.76135489e-01 0.00000000e+00 4.71131640e-01
3.77213241e-02 2.31716705e-01 7.69822941e-04 6.15858353e-03
9.23787529e-03 3.84911470e-01 1.64742109e-01 1.53964588e-03
0.00000000e+00 4.61893764e-03 0.00000000e+00 2.30946882e-03
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 1.38568129e-02
2.30946882e-02 1.61662818e-02 2.61739800e-02 2.07852194e-02
2.07852194e-02 1.00000000e+00 1.00000000e+00 1.00000000e+00
3.84911470e-02 2.30946882e-03 7.69822941e-04 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 23 [0.01491366 0.01295133 0.01255887 0.01844584 0.01805338 0.01726845
0.01805338 0.03375196 0.00313972 0.00824176 0.00824176 0.01491366
0.00706436 0.06161695 0.01138148 0.01138148 0.0188383 0.00510204
0.00470958 0.00431711 0.00470958 0.00549451 0.00392465 0.0066719
0.00431711 0.00627943 0.00431711 0.00510204 0.00588697 1.
1. 1. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. ]
Cluster 24 [7.58937198e-01 7.71980676e-01 7.68115942e-01 1.00000000e+00
7.66183575e-01 7.40579710e-01 4.83091787e-04 9.66183575e-04
9.66183575e-04 1.00000000e+00 0.00000000e+00 4.83091787e-04
4.83091787e-04 3.38164251e-03 0.00000000e+00 9.66183575e-04
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
4.20289855e-02 3.86473430e-02 4.39613527e-02 4.58937198e-02
4.87922705e-02 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.78743961e-02 2.41545894e-03 9.66183575e-04
4.83091787e-04 4.83091787e-04 4.83091787e-04 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 25 [0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Cluster 26 [0.00000000e+00 1.95599862e-01 1.00000000e+00 1.93193537e-01
1.87005844e-01 2.08662771e-01 6.87521485e-04 4.12512891e-03
0.00000000e+00 0.00000000e+00 9.93468546e-02 0.00000000e+00
0.00000000e+00 3.43760743e-03 0.00000000e+00 6.87521485e-04
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 9.62530079e-03
4.12512891e-03 4.81265040e-03 3.43760743e-03 8.59401856e-03
6.53145411e-03 1.00000000e+00 1.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 27 [0.00000000e+00 5.98334401e-01 1.67200512e-01 7.24535554e-01
1.00000000e+00 3.51057015e-01 1.92184497e-03 6.40614990e-04
7.04676489e-03 2.01793722e-01 2.88276746e-02 0.00000000e+00
1.92184497e-03 5.12491992e-03 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 1.53747598e-02
2.04996797e-02 1.85778347e-02 1.28122998e-02 2.04996797e-02
1.85778347e-02 1.00000000e+00 1.00000000e+00 1.00000000e+00
5.25304292e-02 2.56245996e-03 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 28 [7.91181364e-01 7.66777593e-01 7.78702163e-01 1.00000000e+00
7.83693844e-01 7.84248475e-01 0.00000000e+00 1.38657793e-03
1.38657793e-03 0.00000000e+00 5.70160843e-01 0.00000000e+00
0.00000000e+00 3.88241819e-03 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
4.24292845e-02 5.04714365e-02 3.77149196e-02 4.35385469e-02
4.18746534e-02 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.94120910e-03 2.77315585e-04 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 29 [1. 0. 0. 0. 1. 0.
0. 0.00144509 0. 0. 0. 0.
0. 0.00144509 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.00722543
0.01156069 0.00867052 0.00578035 0.00433526 0.00867052 1.
1. 0.1300578 0.00289017 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. ]
Cluster 30 [0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Cluster 31 [0.22232472 0.18265683 0.19741697 0.21678967 0. 1.
0. 0.00184502 0. 0.1097786 1. 0.
0. 0.00276753 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.01199262
0.00922509 0.00461255 0.01107011 0.00645756 0.00553506 1.
1. 0.72416974 0.01107011 0.00184502 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. ]
Cluster 32 [3.98006135e-01 6.97852761e-02 1.00000000e+00 2.87576687e-01
3.90337423e-01 9.27914110e-01 0.00000000e+00 3.83435583e-03
5.36809816e-03 1.29601227e-01 0.00000000e+00 0.00000000e+00
0.00000000e+00 7.66871166e-04 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 1.76380368e-02
1.91717791e-02 1.45705521e-02 2.45398773e-02 1.45705521e-02
1.99386503e-02 1.00000000e+00 1.00000000e+00 1.00000000e+00
2.30061350e-02 3.06748466e-03 1.53374233e-03 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 33 [0.69957983 0.71218487 0.74369748 0.73529412 0.76260504 0.67016807
0. 0.00210084 0.00210084 0.45798319 0.45168067 0.
0.00210084 0.00420168 0. 0. 0. 0.
0. 0. 0. 0. 0. 1.
0.03151261 0.03781513 0.04621849 0.03991597 0.05252101 1.
1. 1. 1. 0.02310924 0.00210084 0.
0. 0. 0. 0. 0. 0.
0. 0. ]
Cluster 34 [1.51327555e-03 9.62993534e-04 1.23813454e-03 6.87852524e-04
1.23813454e-03 8.25423029e-04 9.52813317e-01 1.08680699e-02
1.96725822e-02 8.25423029e-04 4.12711515e-04 9.93534186e-01
9.92571193e-01 8.25423029e-04 4.26468565e-03 2.88898060e-03
1.78841656e-03 9.62993534e-04 2.47626909e-03 2.20112808e-03
8.25423029e-04 1.92598707e-03 1.78841656e-03 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.37570505e-02 2.20112808e-03 1.23813454e-03 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 35 [7.58097864e-01 7.74638181e-01 7.60854583e-01 0.00000000e+00
7.58097864e-01 7.56030324e-01 0.00000000e+00 2.06753963e-03
4.13507926e-03 1.00000000e+00 5.36871123e-01 0.00000000e+00
6.89179876e-04 5.51343901e-03 0.00000000e+00 6.89179876e-04
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 8.27015851e-03
5.09993108e-02 5.72019297e-02 4.34183322e-02 5.23776706e-02
5.44452102e-02 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.58511371e-02 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 36 [0.15212766 0.03829787 0.85851064 0.62446809 0.16170213 0.04361702
0. 0.00531915 0.00957447 0.75638298 0.35531915 0.
0. 0.00638298 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.0212766
0.01808511 0.02234043 0.0212766 0.0212766 0.01808511 1.
1. 1. 0.03085106 0.00319149 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. ]
Cluster 37 [1.83835182e-01 2.04437401e-01 0.00000000e+00 1.00000000e+00
0.00000000e+00 1.00000000e+00 0.00000000e+00 3.16957211e-03
1.58478605e-03 1.51347068e-01 0.00000000e+00 0.00000000e+00
0.00000000e+00 6.33914422e-03 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 9.50871632e-03
1.66402536e-02 7.92393027e-03 1.74326466e-02 1.34706815e-02
6.33914422e-03 1.00000000e+00 1.00000000e+00 5.40412044e-01
1.50554675e-02 7.92393027e-04 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 38 [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 9.90033223e-01 8.30564784e-04
8.30564784e-04 0.00000000e+00 0.00000000e+00 2.49169435e-02
2.82392027e-02 8.30564784e-04 1.00000000e+00 1.41196013e-02
1.16279070e-02 1.82724252e-02 1.41196013e-02 9.96677741e-03
1.41196013e-02 9.96677741e-03 7.47508306e-03 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 1.00000000e+00 1.00000000e+00 1.10465116e-01
8.30564784e-04 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 39 [1. 0.48055556 0.86319444 0.25763889 0.48125 0.
0. 0.00208333 0.00486111 0.05555556 0.11736111 0.
0. 0.00347222 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.01180556
0.01666667 0.01041667 0.00902778 0.00972222 0.01180556 1.
1. 1. 0.01527778 0.00138889 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. ]
Cluster 40 [0.00000000e+00 0.00000000e+00 0.00000000e+00 2.09380235e-04
2.09380235e-04 2.09380235e-04 3.24329983e-01 7.53768844e-03
0.00000000e+00 0.00000000e+00 0.00000000e+00 2.50837521e-01
2.25711893e-01 1.46566164e-03 1.67504188e-03 1.88442211e-03
1.67294807e-01 1.25628141e-03 7.74706868e-03 6.28140704e-04
8.37520938e-04 1.46566164e-03 1.67504188e-03 2.09380235e-03
1.46566164e-03 1.67504188e-03 1.88442211e-03 1.46566164e-03
1.25628141e-03 1.00000000e+00 1.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 41 [1. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Cluster 42 [7.39088264e-01 1.29000970e-01 0.00000000e+00 8.82638215e-02
7.65276431e-01 9.17555771e-01 0.00000000e+00 4.84966052e-03
7.75945684e-03 4.06401552e-01 3.78273521e-02 0.00000000e+00
0.00000000e+00 7.75945684e-03 0.00000000e+00 9.69932105e-04
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 1.55189137e-02
3.10378274e-02 1.45489816e-02 2.32783705e-02 2.13385063e-02
2.03685742e-02 1.00000000e+00 1.00000000e+00 1.00000000e+00
4.07371484e-02 3.87972842e-03 9.69932105e-04 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 43 [0. 1. 0. 0. 0. 1.
0. 0.00135501 0.00271003 0.11382114 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.00271003
0.00542005 0.00948509 0.00406504 0.00948509 0.01084011 1.
1. 0.21815718 0.00406504 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. ]
Cluster 44 [0.96055227 0.94871795 0.97238659 0.95857988 0.95463511 0.95463511
0. 0.01183432 0.00394477 0. 0.87376726 0.
0. 0.02169625 0. 0.00394477 0. 0.
0. 0. 0. 0. 0. 0.20118343
0.21893491 0.18145957 0.17751479 0.20118343 0.18343195 1.
1. 1. 1. 1. 0.01775148 0.00197239
0. 0. 0. 0. 0. 0.
0. 0. ]
Cluster 45 [0.00000000e+00 1.00000000e+00 1.00000000e+00 2.40797546e-01
3.00613497e-01 3.67331288e-01 7.66871166e-04 1.53374233e-03
5.36809816e-03 1.58742331e-01 1.74846626e-01 0.00000000e+00
1.53374233e-03 1.53374233e-03 0.00000000e+00 1.53374233e-03
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 1.07361963e-02
1.61042945e-02 1.30368098e-02 1.68711656e-02 1.45705521e-02
1.61042945e-02 1.00000000e+00 1.00000000e+00 1.00000000e+00
2.60736196e-02 3.83435583e-03 1.53374233e-03 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 46 [1.00000000e+00 1.59156280e-01 5.08149569e-02 7.66059444e-01
5.71428571e-01 0.00000000e+00 9.58772771e-04 1.91754554e-03
5.75263663e-03 2.34899329e-01 4.59252157e-01 0.00000000e+00
9.58772771e-04 2.87631831e-03 0.00000000e+00 9.58772771e-04
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 1.43815916e-02
1.15052733e-02 2.30105465e-02 1.15052733e-02 8.62895494e-03
1.24640460e-02 1.00000000e+00 1.00000000e+00 1.00000000e+00
3.54745925e-02 3.83509108e-03 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
Cluster 47 [0. 0. 0. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Cluster 48 [0.03733766 0.04383117 0.03733766 0.04545455 0.04220779 0.03409091
0.18993506 0.03733766 0.01298701 0.02272727 0.0211039 0.16720779
0.17045455 0.07954545 0.14772727 0.04058442 0.03246753 0.01298701
0.03246753 0.03571429 0.01136364 0.01136364 0.00487013 0.01136364
0.01136364 0.00811688 0.00974026 0.00811688 0.00487013 1.
1. 1. 1. 1. 1. 0.98214286
0.52922078 0.25487013 0.15097403 0.10064935 0.06655844 0.05032468
0.03571429 0.0211039 ]
Cluster 49 [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Cluster 50 [2.34848485e-01 3.42592593e-01 0.00000000e+00 3.67003367e-01
0.00000000e+00 0.00000000e+00 8.41750842e-04 9.25925926e-03
4.20875421e-03 0.00000000e+00 1.00000000e+00 0.00000000e+00
0.00000000e+00 6.73400673e-03 0.00000000e+00 8.41750842e-04
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 1.26262626e-02
1.76767677e-02 1.68350168e-02 1.76767677e-02 6.73400673e-03
1.34680135e-02 1.00000000e+00 1.00000000e+00 1.54882155e-01
2.52525253e-03 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
| MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
Part C Percentage of Mirai Malwares in Each Cluster Exercise 9 (10 points)Complete the following code to compute the percentage of Mirai Malwares, Zmap, and Masscan in each cluster. | cluster_eval_df = pd.DataFrame( columns = ['cluster ID', 'size', 'cluster center', 'mirai_ratio', 'zmap_ratio', 'masscan_ratio'] )
for i in range(0, 50):
cluster_i = predictions.where(col('prediction')==i)
cluster_i_size = cluster_i.count()
cluster_i_mirai_count = cluster_i.where(col('mirai')).count()
cluster_i_mirai_ratio = cluster_i_mirai_count/cluster_i_size
if cluster_i_mirai_count > 0:
print("Cluster ", i, "; Mirai Ratio:", cluster_i_mirai_ratio, "; Cluster Size: ", cluster_i_size)
cluster_i_zmap_ratio = (cluster_i.where(col('zmap')).count())/cluster_i_size
cluster_i_masscan_ratio = (cluster_i.where(col('masscan')).count())/cluster_i_size
cluster_eval_df.loc[i]=[i, cluster_i_size, centers[i], cluster_i_mirai_ratio, cluster_i_zmap_ratio, cluster_i_masscan_ratio ]
| Cluster 5 ; Mirai Ratio: 0.8424333084018948 ; Cluster Size: 16044
Cluster 10 ; Mirai Ratio: 0.009066183136899365 ; Cluster Size: 3309
Cluster 18 ; Mirai Ratio: 0.06232736223164228 ; Cluster Size: 36565
Cluster 20 ; Mirai Ratio: 0.07641921397379912 ; Cluster Size: 916
Cluster 22 ; Mirai Ratio: 0.00706436420722135 ; Cluster Size: 2548
Cluster 33 ; Mirai Ratio: 0.001513275553721282 ; Cluster Size: 7269
Cluster 37 ; Mirai Ratio: 0.8878737541528239 ; Cluster Size: 1204
Cluster 39 ; Mirai Ratio: 0.027219430485762145 ; Cluster Size: 4776
Cluster 47 ; Mirai Ratio: 0.01461038961038961 ; Cluster Size: 616
| MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
Exercise 10 (5 points) Identify all of the clusters that have a large percentage of Mirai malware. For example, you can choose clusters with at least 80% of Mirai ratio. If you use a different threshold (other than 80%), describe the threshold you used and the rational of your choice. Answer to Exercise 10: if I choose 80% as threshold- Cluster 5 ; Mirai Ratio: 0.8424333084018948 ; Cluster Size: 16044- Cluster 37 ; Mirai Ratio: 0.8878737541528239 ; Cluster Size: 1204... | # You can filter predictions DataFrame (Spark) to get all scanners in a cluster.
# For example, the code below selects scanners in cluster 5. However, you should
# replace 5 with the ID of the cluster you want to investigate.
cluster_selected = predictions.where((col('prediction')==5) | (col('prediction')==37))
# If you prefer to use Pandas dataframe, you can use the following to convert a cluster to a Pandas dataframe
cluster_selected_df = cluster_selected.select("*").toPandas()
cluster_selected.printSchema() | root
|-- _c0: integer (nullable = true)
|-- id: integer (nullable = true)
|-- numports: integer (nullable = true)
|-- lifetime: double (nullable = true)
|-- Bytes: integer (nullable = true)
|-- Packets: integer (nullable = true)
|-- average_packetsize: integer (nullable = true)
|-- MinUniqueDests: integer (nullable = true)
|-- MaxUniqueDests: integer (nullable = true)
|-- MinUniqueDest24s: integer (nullable = true)
|-- MaxUniqueDest24s: integer (nullable = true)
|-- average_lifetime: double (nullable = true)
|-- mirai: boolean (nullable = true)
|-- zmap: boolean (nullable = true)
|-- masscan: boolean (nullable = true)
|-- country: string (nullable = true)
|-- traffic_types_scanned_str: string (nullable = true)
|-- ports_scanned_str: string (nullable = true)
|-- host_tags_per_censys: string (nullable = true)
|-- host_services_per_censys: string (nullable = true)
|-- Ports_Array: array (nullable = true)
| |-- element: string (containsNull = true)
|-- Port17132: boolean (nullable = true)
|-- Port17140: boolean (nullable = true)
|-- Port17128: boolean (nullable = true)
|-- Port17138: boolean (nullable = true)
|-- Port17130: boolean (nullable = true)
|-- Port17136: boolean (nullable = true)
|-- Port23: boolean (nullable = true)
|-- Port445: boolean (nullable = true)
|-- Port54594: boolean (nullable = true)
|-- Port17142: boolean (nullable = true)
|-- Port17134: boolean (nullable = true)
|-- Port80: boolean (nullable = true)
|-- Port8080: boolean (nullable = true)
|-- Port0: boolean (nullable = true)
|-- Port2323: boolean (nullable = true)
|-- Port5555: boolean (nullable = true)
|-- Port81: boolean (nullable = true)
|-- Port1023: boolean (nullable = true)
|-- Port52869: boolean (nullable = true)
|-- Port8443: boolean (nullable = true)
|-- Port49152: boolean (nullable = true)
|-- Port7574: boolean (nullable = true)
|-- Port37215: boolean (nullable = true)
|-- Port34218: boolean (nullable = true)
|-- Port34220: boolean (nullable = true)
|-- Port33968: boolean (nullable = true)
|-- Port34224: boolean (nullable = true)
|-- Port34228: boolean (nullable = true)
|-- Port33962: boolean (nullable = true)
|-- TE_numports_0: boolean (nullable = true)
|-- TE_numports_1: boolean (nullable = true)
|-- TE_numports_2: boolean (nullable = true)
|-- TE_numports_3: boolean (nullable = true)
|-- TE_numports_4: boolean (nullable = true)
|-- TE_numports_5: boolean (nullable = true)
|-- TE_numports_6: boolean (nullable = true)
|-- TE_numports_7: boolean (nullable = true)
|-- TE_numports_8: boolean (nullable = true)
|-- TE_numports_9: boolean (nullable = true)
|-- TE_numports_10: boolean (nullable = true)
|-- TE_numports_11: boolean (nullable = true)
|-- TE_numports_12: boolean (nullable = true)
|-- TE_numports_13: boolean (nullable = true)
|-- TE_numports_14: boolean (nullable = true)
|-- TE_numports_15: boolean (nullable = true)
|-- TE_numports_16: boolean (nullable = true)
|-- features: vector (nullable = true)
|-- prediction: integer (nullable = false)
| MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
Exercise 11 (10 points)Complete the following code to find out, for each of the clusters you identified in Exercise 10, - (1) (5 points) determine whether they scan a common port, and - (2) (5 points) what is the port number if most of them in a cluster scan a common port. You canuse the code below to find out what top port is scanned by the scanner in a cluster. | # You fill in the ??? based on the cluster you want to investigate.
cluster_5= predictions.where(col('prediction')==5)
cluster_37= predictions.where(col('prediction')==37)
for i in range(0, top_ports -1):
port_num = "Port" + Top_Ports_list[i]
port_i_count = cluster_5.where(col(port_num)).count()
if port_i_count > 0:
print("Scanners of Port ", Top_Ports_list[i], " = ", port_i_count)
for i in range(0, top_ports -1):
port_num = "Port" + Top_Ports_list[i]
port_i_count = cluster_37.where(col(port_num)).count()
if port_i_count > 0:
print("Scanners of Port ", Top_Ports_list[i], " = ", port_i_count) | Scanners of Port 23 = 1192
Scanners of Port 445 = 1
Scanners of Port 54594 = 1
Scanners of Port 80 = 30
Scanners of Port 8080 = 34
Scanners of Port 0 = 1
Scanners of Port 2323 = 1204
Scanners of Port 5555 = 17
Scanners of Port 81 = 14
Scanners of Port 1023 = 22
Scanners of Port 52869 = 17
Scanners of Port 8443 = 12
Scanners of Port 49152 = 17
Scanners of Port 7574 = 12
Scanners of Port 37215 = 9
| MIT | 10. Thermometer Encoding and Cluster Evaluation/kMeans_OHE_TE_Eval.ipynb | yedkk/spark-data-mining |
Dictionaries in Python Welcome! This notebook will teach you about the dictionaries in the Python Programming Language. By the end of this lab, you'll know the basics dictionary operations in Python, including what it is, and the operations on it. Table of Contents Dictionaries What are Dictionaries? Keys Quiz on Dictionaries Estimated time needed: 20 min Dictionaries What are Dictionaries? A dictionary consists of keys and values. It is helpful to compare a dictionary to a list. Instead of the numerical indexes such as a list, dictionaries have keys. These keys are the keys that are used to access values within a dictionary. An example of a Dictionary Dict: | # Create the dictionary
Dict = {"key1": 1, "key2": "2", "key3": [3, 3, 3], "key4": (4, 4, 4), ('key5'): 5, (0, 1): 6}
Dict | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
The keys can be strings: | # Access to the value by the key
Dict["key1"] | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
Keys can also be any immutable object such as a tuple: | # Access to the value by the key
Dict[(0, 1)] | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
Each key is separated from its value by a colon ":". Commas separate the items, and the whole dictionary is enclosed in curly braces. An empty dictionary without any items is written with just two curly braces, like this "{}". | # Create a sample dictionary
release_year_dict = {"Thriller": "1982", "Back in Black": "1980", \
"The Dark Side of the Moon": "1973", "The Bodyguard": "1992", \
"Bat Out of Hell": "1977", "Their Greatest Hits (1971-1975)": "1976", \
"Saturday Night Fever": "1977", "Rumours": "1977"}
release_year_dict | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
In summary, like a list, a dictionary holds a sequence of elements. Each element is represented by a key and its corresponding value. Dictionaries are created with two curly braces containing keys and values separated by a colon. For every key, there can only be one single value, however, multiple keys can hold the same value. Keys can only be strings, numbers, or tuples, but values can be any data type. It is helpful to visualize the dictionary as a table, as in the following image. The first column represents the keys, the second column represents the values. Keys You can retrieve the values based on the names: | # Get value by keys
release_year_dict['Thriller'] | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
This corresponds to: Similarly for The Bodyguard | # Get value by key
release_year_dict['The Bodyguard'] | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
Now let you retrieve the keys of the dictionary using the method release_year_dict(): | # Get all the keys in dictionary
release_year_dict.keys() | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
You can retrieve the values using the method values(): | # Get all the values in dictionary
release_year_dict.values() | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
We can add an entry: | # Append value with key into dictionary
release_year_dict['Graduation'] = '2007'
release_year_dict | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
We can delete an entry: | # Delete entries by key
del(release_year_dict['Thriller'])
del(release_year_dict['Graduation'])
release_year_dict | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
We can verify if an element is in the dictionary: | # Verify the key is in the dictionary
'The Bodyguard' in release_year_dict | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
Quiz on Dictionaries You will need this dictionary for the next two questions: | # Question sample dictionary
soundtrack_dic = {"The Bodyguard":"1992", "Saturday Night Fever":"1977"}
soundtrack_dic | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
a) In the dictionary soundtrack_dict what are the keys ? | # Write your code below and press Shift+Enter to execute
soundtrack_dic.keys() | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
Double-click __here__ for the solution.<!-- Your answer is below:soundtrack_dic.keys() The Keys "The Bodyguard" and "Saturday Night Fever" --> b) In the dictionary soundtrack_dict what are the values ? | # Write your code below and press Shift+Enter to execute
soundtrack_dic.values() | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
Double-click __here__ for the solution.<!-- Your answer is below:soundtrack_dic.values() The values are "1992" and "1977"--> You will need this dictionary for the following questions: The Albums Back in Black, The Bodyguard and Thriller have the following music recording sales in millions 50, 50 and 65 respectively: a) Create a dictionary album_sales_dict where the keys are the album name and the sales in millions are the values. | # Write your code below and press Shift+Enter to execute
album_sales_dict = {"Back in Black":50, "The Bodyguard":50, "Thriller":65}
album_sales_dict | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
Double-click __here__ for the solution.<!-- Your answer is below:album_sales_dict = {"The Bodyguard":50, "Back in Black":50, "Thriller":65}--> b) Use the dictionary to find the total sales of Thriller: | # Write your code below and press Shift+Enter to execute
album_sales_dict["Thriller"] | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
Double-click __here__ for the solution.<!-- Your answer is below:album_sales_dict["Thriller"]--> c) Find the names of the albums from the dictionary using the method keys: | # Write your code below and press Shift+Enter to execute
album_sales_dict.keys() | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
Double-click __here__ for the solution.<!-- Your answer is below:album_sales_dict.keys()--> d) Find the names of the recording sales from the dictionary using the method values: | # Write your code below and press Shift+Enter to execute
album_sales_dict.values() | _____no_output_____ | RSA-MD | Python for DS, AI & Development/PY0101EN-2-3-Dictionaries.ipynb | mesbahiba/IBM_Professional_Data_Analyst |
In this assignment, you'll continue working with the U.S. Education Dataset from Kaggle. The data gives detailed state level information on several facets of education on an annual basis. To learn more about the data and the column descriptions, you can view the Kaggle link above.Access this data using the Thinkful database using these credentials:* postgres_user = 'dsbc_student'* postgres_pw = '7*.8G9QH21'* postgres_host = '142.93.121.174'* postgres_port = '5432'* postgres_db = 'useducation'Don't forget to apply the most suitable missing value filling techniques from the previous checkpoint to the data. Provide the answers to the following only after you've addressed missing values!To complete this assignment, submit a link to a Jupyter notebook containing your solutions to the following tasks:1. Consider the two variables: TOTAL_REVENUE and TOTAL_EXPENDITURE. Do these variables have outlier values?2. If you detect outliers in the TOTAL_REVENUE and TOTAL_EXPENDITURE variables, apply the techniques you learned in this checkpoint to eliminate them and validate that there's no outlier values after you handled them.3. Create another variable by subtracting the original TOTAL_EXPENDITURE from TOTAL_REVENUE (before you eliminated the outliers). You can think of it as a kind of budget deficit in education. Do you find any outlier values in this new variable? 4. If so, eliminate them using the technique you think most suitable.5. Now create another variable by subtracting the TOTAL_EXPENDITURE from TOTAL_REVENUE. This time, use the outlier eliminated versions of TOTAL_EXPENDITURE from TOTAL_REVENUE. In this newly created variable, can you find any outliers? If so, eliminate them.6. Compare some basic descriptive statistics of the budget variables you end up with in the 3rd and the 4th questions. Do you see any differences?7. 
If our variable of interest is the budget deficit variable, which method do you think is the appropriate in dealing with the outliers in this variable: the method in the 3rd question or the one in the 4th question? | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sqlalchemy import create_engine
import warnings
warnings.filterwarnings('ignore')
postgres_user = 'dsbc_student'
postgres_pw = '7*.8G9QH21'
postgres_host = '142.93.121.174'
postgres_port = '5432'
postgres_db = 'useducation'
engine = create_engine('postgresql://{}:{}@{}:{}/{}'.format(
postgres_user, postgres_pw, postgres_host, postgres_port, postgres_db))
education_df = pd.read_sql_query('select * from useducation',con=engine)
# no need for an open connection,
# as we're only doing a single query
engine.dispose()
fill_list = ["STATE_REVENUE", "LOCAL_REVENUE", "TOTAL_EXPENDITURE",
"INSTRUCTION_EXPENDITURE", "SUPPORT_SERVICES_EXPENDITURE",
"OTHER_EXPENDITURE", "CAPITAL_OUTLAY_EXPENDITURE", "GRADES_PK_G",
"GRADES_KG_G", "GRADES_4_G", "GRADES_8_G", "GRADES_12_G", "GRADES_1_8_G",
"GRADES_9_12_G", "GRADES_ALL_G"]
states = education_df["STATE"].unique()
for state in states:
education_df.loc[education_df["STATE"] == state, fill_list] = education_df.loc[education_df["STATE"] == state, fill_list].interpolate()
# we drop the null values after interpolation
education_df.dropna(inplace=True) | _____no_output_____ | MIT | Data_cleaning_Outlier_EDA_Practice.ipynb | sgf-afk/Class_Assignments |
1. Consider the two variables: TOTAL_REVENUE and TOTAL_EXPENDITURE. Do these variables have outlier values? | education_df.info()
education_df.head() | _____no_output_____ | MIT | Data_cleaning_Outlier_EDA_Practice.ipynb | sgf-afk/Class_Assignments |
__Time series data, I can interpolate the missing values__ Z-Score Test | from scipy.stats import zscore
z_scores = zscore(education_df['TOTAL_REVENUE'])
for threshold in range(1,10):
print("The score threshold is: {}".format(threshold))
print("The indices of the outliers:")
print(np.where(z_scores > threshold))
print("Number of outliers is: {}".format(len((np.where(z_scores > threshold)[0]))))
z_scores = zscore(education_df['TOTAL_EXPENDITURE'])
for threshold in range(1,10):
print("The score threshold is: {}".format(threshold))
print("The indices of the outliers:")
print(np.where(z_scores > threshold))
print("Number of outliers is: {}".format(len((np.where(z_scores > threshold)[0])))) | The score threshold is: 1
The indices of the outliers:
(array([ 3, 26, 52, 61, 89, 100, 112, 140, 151, 163, 168, 189, 191,
197, 202, 214, 220, 224, 241, 243, 249, 254, 266, 271, 275, 292,
294, 300, 305, 317, 322, 326, 343, 345, 351, 356, 368, 373, 377,
394, 396, 402, 407], dtype=int64),)
Number of outliers is: 43
The score threshold is: 2
The indices of the outliers:
(array([ 26, 61, 89, 100, 112, 140, 151, 163, 191, 202, 214, 243, 254,
266, 294, 305, 317, 345, 356, 368, 396, 407], dtype=int64),)
Number of outliers is: 22
The score threshold is: 3
The indices of the outliers:
(array([ 61, 112, 163, 191, 214, 243, 254, 266, 294, 305, 317, 345, 356,
368, 396, 407], dtype=int64),)
Number of outliers is: 16
The score threshold is: 4
The indices of the outliers:
(array([112, 163, 214, 266, 317, 368, 396], dtype=int64),)
Number of outliers is: 7
The score threshold is: 5
The indices of the outliers:
(array([368], dtype=int64),)
Number of outliers is: 1
The score threshold is: 6
The indices of the outliers:
(array([], dtype=int64),)
Number of outliers is: 0
The score threshold is: 7
The indices of the outliers:
(array([], dtype=int64),)
Number of outliers is: 0
The score threshold is: 8
The indices of the outliers:
(array([], dtype=int64),)
Number of outliers is: 0
The score threshold is: 9
The indices of the outliers:
(array([], dtype=int64),)
Number of outliers is: 0
| MIT | Data_cleaning_Outlier_EDA_Practice.ipynb | sgf-afk/Class_Assignments |
According to Zscores both have outliers 2. If you detect outliers in the TOTAL_REVENUE and TOTAL_EXPENDITURE variables, apply the techniques you learned in this checkpoint to eliminate them and validate that there's no outlier values after you handled them. | from scipy.stats.mstats import winsorize
winsorized_revenue = winsorize(education_df["TOTAL_REVENUE"], (0, 0.05))
winsorized_expenditure = winsorize(education_df["TOTAL_EXPENDITURE"], (0, 0.05))
z_scores = zscore(winsorized_revenue)
for threshold in range(1,10):
print("The score threshold is: {}".format(threshold))
print("The indices of the outliers:")
print(np.where(z_scores > threshold))
print("Number of outliers is: {}".format(len((np.where(z_scores > threshold)[0]))))
z_scores = zscore(winsorized_expenditure)
for threshold in range(1,10):
print("The score threshold is: {}".format(threshold))
print("The indices of the outliers:")
print(np.where(z_scores > threshold))
print("Number of outliers is: {}".format(len((np.where(z_scores > threshold)[0])))) | The score threshold is: 1
The indices of the outliers:
(array([ 3, 19, 26, 52, 61, 66, 70, 87, 89, 95, 100, 112, 117,
121, 130, 138, 140, 143, 146, 151, 163, 168, 172, 181, 189, 191,
194, 197, 202, 214, 220, 224, 233, 241, 243, 246, 249, 254, 266,
271, 275, 292, 294, 297, 300, 305, 317, 322, 326, 343, 345, 348,
351, 356, 368, 373, 377, 394, 396, 399, 402, 407], dtype=int64),)
Number of outliers is: 62
The score threshold is: 2
The indices of the outliers:
(array([ 3, 26, 52, 61, 89, 100, 112, 140, 151, 163, 168, 191, 202,
214, 243, 254, 266, 294, 305, 317, 345, 356, 368, 377, 396, 407],
dtype=int64),)
Number of outliers is: 26
The score threshold is: 3
The indices of the outliers:
(array([ 26, 61, 89, 112, 140, 151, 163, 191, 202, 214, 243, 254, 266,
294, 305, 317, 345, 356, 368, 396, 407], dtype=int64),)
Number of outliers is: 21
The score threshold is: 4
The indices of the outliers:
(array([], dtype=int64),)
Number of outliers is: 0
The score threshold is: 5
The indices of the outliers:
(array([], dtype=int64),)
Number of outliers is: 0
The score threshold is: 6
The indices of the outliers:
(array([], dtype=int64),)
Number of outliers is: 0
The score threshold is: 7
The indices of the outliers:
(array([], dtype=int64),)
Number of outliers is: 0
The score threshold is: 8
The indices of the outliers:
(array([], dtype=int64),)
Number of outliers is: 0
The score threshold is: 9
The indices of the outliers:
(array([], dtype=int64),)
Number of outliers is: 0
| MIT | Data_cleaning_Outlier_EDA_Practice.ipynb | sgf-afk/Class_Assignments |
After the outlier threshold of 3 (75%) we lose our outliers, Winsorization worked. 3. Create another variable by subtracting the original TOTAL_EXPENDITURE from TOTAL_REVENUE (before you eliminated the outliers). You can think of it as a kind of budget deficit in education. Do you find any outlier values in this new variable? | education_df['Deficit'] = education_df['TOTAL_REVENUE'] - education_df['TOTAL_EXPENDITURE']
plt.boxplot(education_df['Deficit'], whis = 5) | _____no_output_____ | MIT | Data_cleaning_Outlier_EDA_Practice.ipynb | sgf-afk/Class_Assignments |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.